You can drive for ten thousand hours and capture maybe a handful of genuine near-misses. A real red-light violation. An emergency swerve to avoid a pedestrian. A vehicle doing 120 mph on a Romanian highway.
These are the moments that define the boundary between a safe AI system and a dangerous one — and they are vanishingly rare in any driving dataset. Most camera footage is highway cruising, gentle lane changes, and uneventful parking. The long tail of safety-critical events is where world models and AV systems actually fail, and it's the hardest data to collect.
Bee cameras solve this. Thousands of Bee cameras run edge AI across 50+ countries, automatically detecting and recording driving incidents the moment they happen — no human review, no manual labeling, no data collection campaign. Each event ships with synchronized video, GNSS traces, and IMU data.
This post is a complete guide to what AI Event Videos are, what's inside them, and why they matter for building the next generation of driving AI.
Here's what one looks like — hard braking on a Portuguese highway at over 62 mph, narrowly avoiding an animal on the road.
How AI Event Videos Work
The fundamental challenge with capturing safety-critical driving data is that you cannot predict when or where it will happen. You can instrument a test fleet, drive millions of miles, and still end up with a dataset that is overwhelmingly composed of uneventful driving. The interesting moments — the ones that actually matter for training robust AI systems — are distributed across a vast spatiotemporal space with no discernible pattern.
Bee cameras take a different approach. Every device runs computer vision and sensor fusion models directly on-device, continuously analyzing the driving environment. When the on-device AI detects a safety-critical event — a harsh braking incident, a high-speed maneuver, a stop sign violation — it triggers an automatic capture pipeline:
| Data | Description |
|---|---|
| Video | ~20-30 seconds of MP4 footage captured around the event |
| GNSS | Millisecond-resolution GPS traces with latitude, longitude, and altitude |
| IMU | 3-axis accelerometer and 3-axis gyroscope data at ~100Hz |
| Upload | Everything is transmitted over LTE to the cloud with structured metadata |
The result is an AI Event Video: a multi-modal data package that captures the full context of a real-world driving incident.
What makes this powerful is what it eliminates. No human reviews the footage. No one manually labels the event type. No one decides which moments are worth keeping. The network of thousands of cameras across 50+ countries just runs, and the safety-critical moments surface automatically. The scale is the product — you get the long tail of driving behavior not by looking for it, but by deploying enough sensors that it finds you.
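To make the trigger concrete, here is a minimal sketch of a threshold-based harsh-braking detector running over a ~100Hz accelerometer stream. The threshold, window length, and `harsh_braking_trigger` helper are all illustrative assumptions; Bee's actual on-device models combine computer vision and sensor fusion and are considerably more sophisticated.

```python
from collections import deque

# Illustrative numbers only: Bee's real detector is a learned model, not a
# fixed threshold. ~100 Hz accelerometer assumed, acc_x = longitudinal.
BRAKING_THRESHOLD_MS2 = -4.0   # sustained deceleration beyond ~0.4 g
WINDOW_SAMPLES = 30            # ~0.3 s at 100 Hz

def harsh_braking_trigger(acc_x_stream):
    """Yield the sample index where a capture would be triggered."""
    window = deque(maxlen=WINDOW_SAMPLES)
    armed = True
    for i, acc_x in enumerate(acc_x_stream):
        window.append(acc_x)
        hard = len(window) == WINDOW_SAMPLES and max(window) < BRAKING_THRESHOLD_MS2
        if hard and armed:
            yield i          # every sample in the window exceeds the threshold
            armed = False    # one capture per continuous braking episode
        elif not hard:
            armed = True

# Synthetic stream: 1 s of cruise, 1 s of hard braking, 1 s of cruise
stream = [0.1] * 100 + [-6.5] * 100 + [0.0] * 100
triggers = list(harsh_braking_trigger(stream))
print(triggers)  # [129]: 0.3 s after braking begins at sample 100
```

A production trigger would also handle sensor noise, gravity compensation, and device orientation; the point here is only the shape of the pipeline: stream in, detect, capture.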
Types of AI Events
Bee cameras detect 6 distinct event types, each triggered by specific sensor thresholds:
| Event Type | What It Captures |
|---|---|
| Harsh Braking | Rapid deceleration — emergency stops, near-misses |
| Fast Acceleration | Aggressive takeoffs from stops or merges |
| Swerving | Sudden lateral movement — lane departures, evasive maneuvers |
| High Speed | Sustained speed above the posted limit |
| High G-Force | Extreme acceleration forces in any direction |
| Stop Sign Violation | Rolling through or ignoring a stop sign |
Additionally, VRU (Vulnerable Road User) detection is coming soon — identifying pedestrians, cyclists, and scooter riders in the video frame. This adds critical context to every event: a harsh braking incident means something very different when there's a pedestrian in the crosswalk. See Coming Soon below for more details.
What's Inside an AI Event
Every AI Event is a multi-modal data package. Here's what you get:
1. Video
A full MP4 clip captured by the Bee camera — real driving video, not a reconstructed simulation.
| Property | Value |
|---|---|
| Format | MP4 |
| Resolution | 1280x720 |
| Bitrate | 4.5 Mbps |
| Duration | ~20-30 seconds centered around the event |
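A quick back-of-envelope for clip size at that bitrate (video stream only, ignoring container overhead):

```python
bitrate_mbps = 4.5   # from the table above
duration_s = 25      # midpoint of the ~20-30 s range

size_mb = bitrate_mbps * duration_s / 8  # megabits -> megabytes
print(f"~{size_mb:.1f} MB per clip")     # ~14.1 MB
```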
2. Metadata
Each event includes structured metadata. The location field contains the GPS coordinates where the incident occurred:
{
"id": "69a8379efcae0f2b12353c17",
"type": "HARSH_BRAKING",
"timestamp": "2026-03-04T13:45:25.249Z",
"location": { "lat": 29.9899, "lon": -97.437 },
"metadata": {
"ACCELERATION_MS2": 1.312,
"SPEED_MS": 31.75,
"SPEED_LIMIT_MS": 24.587,
"TIME_ABOVE_SPEED_LIMIT_S": 12.5
}
}
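Metadata values are in SI units: speeds in m/s, acceleration in m/s². A small conversion helper using the sample event above (the field names are taken directly from that sample):

```python
MS_TO_MPH = 2.236936  # 1 m/s in miles per hour

# Field names copied from the sample event above
metadata = {
    "ACCELERATION_MS2": 1.312,
    "SPEED_MS": 31.75,
    "SPEED_LIMIT_MS": 24.587,
    "TIME_ABOVE_SPEED_LIMIT_S": 12.5,
}

speed_mph = metadata["SPEED_MS"] * MS_TO_MPH        # ~71 mph
limit_mph = metadata["SPEED_LIMIT_MS"] * MS_TO_MPH  # ~55 mph
print(f"{speed_mph:.0f} mph in a {limit_mph:.0f} mph zone")
```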
3. Synchronized GNSS Data
Millisecond-resolution GPS traces that let you reconstruct the exact path of the vehicle during the event:
{
"timestamp": 1772811925166.66,
"lat": 29.9899059,
"lon": -97.437032,
"alt": 185.42
}
Typically ~900-1000 GNSS points per event, covering the full video duration.
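Because the fixes are densely sampled, speed can be derived directly from consecutive points. A minimal haversine sketch, using the field names from the sample point above; the second fix here is synthetic, added for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds_ms(points):
    """Speed between consecutive GNSS points, in m/s."""
    out = []
    for a, b in zip(points, points[1:]):
        d = haversine_m(a["lat"], a["lon"], b["lat"], b["lon"])
        dt = (b["timestamp"] - a["timestamp"]) / 1000  # ms -> s
        out.append(d / dt)
    return out

# Two fixes 0.64 s apart; the second is synthetic for illustration
trace = [
    {"timestamp": 1772811925166.66, "lat": 29.9899059, "lon": -97.437032},
    {"timestamp": 1772811925806.66, "lat": 29.9900760, "lon": -97.437156},
]
v = speeds_ms(trace)
print(f"{v[0]:.1f} m/s")  # ~35 m/s (~78 mph)
```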
4. Synchronized IMU Data
3-axis accelerometer and 3-axis gyroscope readings at ~100Hz resolution:
{
"unix_milliseconds": 1772811925166,
"acc_x": -0.42,
"acc_y": -8.15,
"acc_z": 9.72,
"gyro_x": 0.012,
"gyro_y": -0.003,
"gyro_z": 0.008
}
This gives you the raw physics of the event — lateral acceleration during a swerve, deceleration curve during braking, rotational rates during lane changes.
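As a sketch of what you can compute from this stream: the peak deceleration of a braking event and its g-force equivalent. The axis convention (acc_x as longitudinal) matches the plotting example later in this post but is worth verifying against your device frame; the samples below are synthetic:

```python
G = 9.81  # standard gravity, m/s^2

def peak_deceleration(samples, axis="acc_x"):
    """Most negative longitudinal acceleration in the event, in m/s^2."""
    return min(s[axis] for s in samples)

# Synthetic ~100 Hz excerpt: light cruise, then 0.8 s of hard braking.
# acc_z sits near 9.8 because the accelerometer measures gravity too.
samples = (
    [{"acc_x": -0.3, "acc_y": 0.1, "acc_z": 9.8}] * 50
    + [{"acc_x": -7.8, "acc_y": 0.4, "acc_z": 9.6}] * 80
)
peak = peak_deceleration(samples)
g_force = abs(peak) / G
print(f"peak {peak} m/s^2 (~{g_force:.2f} g)")  # peak -7.8 m/s^2 (~0.80 g)
```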
The Bee Camera

The Bee camera is purpose-built for high-fidelity driving data capture. Every AI Event Video is recorded by this hardware:
| Spec | Details |
|---|---|
| Vision System | 12.3 MP main camera at 30 Hz. 800p stereo depth (13 cm baseline) for high-fidelity capture. |
| Edge Compute | On-device AI processing with performance on par with advanced driver-assist systems. |
| LTE Connectivity | Always-on LTE keeps Bee online and transmitting data in real time. |
| Precision Sensors | Pro-grade positioning sensors calibrated for precise, map-ready data capture. |
| Onboard Storage | 64 GB onboard flash designed for reliable, high-speed data access. |
| Lane-Level GPS | Dual-band L1/L5 GNSS with embedded security for accurate, lane-level positioning. Integrated GNSS antenna for reliable satellite lock. |
Why This Matters
Most consumer dashcams record video and nothing else — single-band GPS that drifts 5–10 meters, no IMU, and footage on an SD card that overwrites itself. The Bee camera is a different class of device: dual-band L1/L5 GNSS for lane-level positioning, a 6-axis IMU at ~100Hz for real acceleration and rotation data, 12.3 MP stereo vision with depth sensing, and on-device edge AI that detects events in real time. When a harsh braking event or swerve happens, the camera captures the video, synchronizes the sensor streams, and uploads a complete data package over LTE — structured, labeled, and ready to use.
Sample Videos and Sensor Data
These are real AI Events captured by Bee cameras. Each embed below plays the dashcam video with a live speed overlay, interactive GPS map, and downloadable event data.
Harsh Braking
These events were captured across three continents. That geographic diversity is part of the value — driving behavior in Belgrade is different from driving in San Marcos, and both are different from driving in Sibiu.
Swerving
High G-Force
High Speed
Use Cases
AI World Models
World models need to simulate reality — not just the boring parts. A model trained on a million hours of highway cruising will confidently predict straight roads in good weather and have almost no basis for simulating the scenarios that actually matter.
AI Event Videos provide exactly the data that's missing:
- Real physics, not approximations. How does a vehicle actually decelerate in an emergency? What does a real evasive swerve look like from the driver's perspective? The synchronized video + IMU + GNSS data captures the full physical dynamics of these moments.
- Labeled edge cases at scale. Each event is pre-categorized by type (braking, swerving, speeding, violation) with precise sensor measurements. No manual annotation required.
- Geographic and cultural diversity. Events from 50+ countries mean your world model encounters Romanian highway behavior, Mexican urban driving, British roundabout dynamics, and American interstate physics — all from real observations.
- The long tail, automatically. You don't need to organize data collection campaigns or pay drivers to simulate near-misses. The network captures genuine safety-critical moments as they naturally occur.
Autonomous Vehicle R&D
AV systems fail on edge cases they've never seen. The entire challenge of autonomous driving is the long tail — the rare scenarios that are underrepresented in training data but disproportionately dangerous.
AI Event Videos are purpose-built for this problem:
- Sensor fusion validation. Each event comes with synchronized video, GNSS, and IMU — the same modalities your AV stack processes. Test your perception and prediction modules against real-world incidents, not synthetic replays.
- Regression test suites. Build a library of actual safety-critical scenarios — harsh braking events, traffic violations, evasive maneuvers — and run your planning module against them. When you ship a new model, verify it still handles every real near-miss in your test set.
- Scenario mining. Query events by type, location, speed, or geographic polygon. Need 500 harsh braking events on highways above 60 mph? Need swerving events in urban intersections across European cities? The API lets you slice the data precisely.
- Pre-labeled, ready to use. Event type, severity (acceleration magnitude), speed context, and precise timestamps are all included. No labeling pipeline required.
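As a sketch of client-side scenario mining, here is the "harsh braking above 60 mph" filter from the example above, applied to a page of search results. The `metadata` field names follow the sample event shown earlier; treat the exact shape as an assumption for your own integration:

```python
MPH_60_IN_MS = 26.82  # 60 mph expressed in m/s

def mine_highway_braking(events, min_speed_ms=MPH_60_IN_MS):
    """Keep harsh-braking events that occurred above the given speed."""
    return [
        e for e in events
        if e["type"] == "HARSH_BRAKING"
        and e.get("metadata", {}).get("SPEED_MS", 0) >= min_speed_ms
    ]

# One page of (abbreviated) search results
page = [
    {"id": "a", "type": "HARSH_BRAKING", "metadata": {"SPEED_MS": 31.75}},
    {"id": "b", "type": "HARSH_BRAKING", "metadata": {"SPEED_MS": 12.0}},
    {"id": "c", "type": "SWERVING", "metadata": {"SPEED_MS": 33.0}},
]
hits = mine_highway_braking(page)
print([e["id"] for e in hits])  # ['a']
```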
Accessing AI Event Videos via API
The Bee Maps API exposes a search endpoint for querying AI Events programmatically:
curl -X POST "https://beemaps.com/api/developer/aievents/search?apiKey=YOUR_KEY" \
-H "Content-Type: application/json" \
-d '{
"startDate": "2026-02-01",
"endDate": "2026-03-01",
"types": ["HARSH_BRAKING", "SWERVING"],
"limit": 50
}'
You can filter by:
- Date range — up to 31 days per query
- Event types — any combination of the 6 types
- Geographic polygon — events within a bounding area
To include sensor data, request a single event with query params:
curl "https://beemaps.com/api/developer/aievents/EVENT_ID?apiKey=YOUR_KEY&includeGnssData=true&includeImuData=true"
Results are paginated (up to 500 per page) and include presigned video download URLs.
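For bulk pulls, the pagination loop can be factored so the HTTP call is injected, which keeps it testable without hitting the network. The page-number parameter name is not shown in this post; check the API docs for the exact field. A sketch:

```python
def fetch_all_events(fetch_page, page_size=500, max_pages=100):
    """Accumulate events across pages until a page comes back short.

    fetch_page(page) is any callable returning the list of events for that
    page, e.g. a thin wrapper around the search endpoint.
    """
    events = []
    for page in range(max_pages):
        batch = fetch_page(page)
        events.extend(batch)
        if len(batch) < page_size:  # short page means we've reached the end
            break
    return events

# Stubbed fetcher standing in for real HTTP calls: two full pages, then a short one
pages = [
    [{"id": i} for i in range(500)],
    [{"id": i} for i in range(500, 1000)],
    [{"id": 1000}],
]
all_events = fetch_all_events(lambda p: pages[p] if p < len(pages) else [])
print(len(all_events))  # 1001
```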
Quick Start: Python
Fetch an event and plot the IMU data in a few lines:
import requests
import matplotlib.pyplot as plt
API_KEY = "YOUR_KEY"
EVENT_ID = "69a8379efcae0f2b12353c17"
# Fetch event with sensor data
event = requests.get(
    f"https://beemaps.com/api/developer/aievents/{EVENT_ID}",
    params={"apiKey": API_KEY, "includeImuData": "true", "includeGnssData": "true"}
).json()
# Plot accelerometer data
imu = event["imuData"]
timestamps = [(p["unix_milliseconds"] - imu[0]["unix_milliseconds"]) / 1000 for p in imu]
acc_x = [p["acc_x"] for p in imu]
acc_y = [p["acc_y"] for p in imu]
plt.figure(figsize=(12, 4))
plt.plot(timestamps, acc_x, label="Longitudinal (acc_x)")
plt.plot(timestamps, acc_y, label="Lateral (acc_y)")
plt.xlabel("Time (s)")
plt.ylabel("Acceleration (m/s²)")
plt.title("IMU Acceleration Profile")
plt.legend()
plt.tight_layout()
plt.show()
Download the video for your training pipeline:
video_url = event["videoUrl"]
video = requests.get(video_url)
with open(f"{EVENT_ID}.mp4", "wb") as f:
    f.write(video.content)
Plot the GNSS trace on a map:
lats = [p["lat"] for p in event["gnssData"]]
lons = [p["lon"] for p in event["gnssData"]]
plt.figure(figsize=(6, 6))
plt.plot(lons, lats, linewidth=1)
plt.scatter(lons[0], lats[0], c="green", s=60, label="Start")
plt.scatter(lons[-1], lats[-1], c="red", s=60, label="End")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Vehicle Path During Event")
plt.legend()
plt.tight_layout()
plt.show()
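Because the sensor streams share the video's clock, any moment in the clip can be mapped to the nearest IMU sample. A small sketch using the `unix_milliseconds` field from the IMU schema shown earlier; `sample_at` is a hypothetical helper, not part of the API:

```python
import bisect

def sample_at(imu, event_start_ms, video_offset_s):
    """Index of the IMU sample closest to a given second into the video."""
    target = event_start_ms + video_offset_s * 1000
    times = [s["unix_milliseconds"] for s in imu]  # assumed time-sorted
    i = bisect.bisect_left(times, target)
    candidates = [c for c in (i - 1, i) if 0 <= c < len(imu)]
    return min(candidates, key=lambda c: abs(times[c] - target))

# Synthetic 100 Hz stream starting at the event's first timestamp
t0 = 1772811925166
imu = [{"unix_milliseconds": t0 + 10 * k, "acc_x": 0.0} for k in range(3000)]
idx = sample_at(imu, t0, 12.34)  # the sample nearest 12.34 s into the clip
print(idx)  # 1234
```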
Coming Soon
We're actively expanding the data available with each AI Event:
| Feature | Summary |
|---|---|
| VRU Detection | Pedestrians, cyclists, and scooter riders identified in the video frame and included as structured metadata. Critical context for AV safety validation — a harsh braking event means something very different when there's a pedestrian in the frame. |
| Time of Day Classification | Day, night, dawn, or dusk labels for each event. Essential for training models that need to perform reliably across lighting conditions. |
| Road Type | Highway, urban, residential, or rural classification. Filter training data by the driving environment that matches your deployment scenario. |
| Video Summary | Natural language descriptions of each clip, generated automatically. Search and filter events by what actually happened: "vehicle swerves to avoid stopped car in right lane" or "driver brakes hard at yellow light with pedestrian in crosswalk." |
FAQ
Can I request AI Event Videos at known dangerous intersections or specific locations?
Yes. You can query events by geographic polygon, so you can target specific intersections, highway segments, or any area of interest. If you need events from locations that aren't yet covered, Bee's on-demand coverage system can direct camera collection to your target areas.
Can I request higher bitrate or video resolution?
Yes. The standard output is 1280x720 at 4.5 Mbps, but higher resolution and bitrate options are available for enterprise customers. Contact us to discuss your requirements.
How many AI Event Videos are available?
The network captures new events every day across 50+ countries. The total dataset grows continuously — and because events are captured by real drivers in real conditions, the distribution naturally reflects actual driving behavior.
Can I filter events by multiple criteria at once?
Yes. The API supports combining filters — event type, date range, device ID, and geographic polygon can all be used together. For example, you can query all harsh braking events within a specific city during a specific week.
Is the sensor data synchronized with the video?
Yes. GNSS and IMU data are timestamped to the same clock as the video frames, so you can correlate any moment in the video with the exact position, speed, and forces acting on the vehicle at that instant.
Can I use AI Event Videos to train models and build products?
Yes. AI Event Videos are licensed for commercial use, including model training, fine-tuning, simulation, and derivative products. Faces and license plates are not anonymized in the standard output — contact us if you need anonymized data for your use case.
Can I access AI Event Videos in bulk for model training?
Yes. The API supports pagination up to 500 events per page, and bulk data export options are available for large-scale training workloads. Reach out to discuss volume pricing and delivery formats.
Get Started
- Create a free Bee Maps account
- Generate an API key from your Developer dashboard
- Try a query in the API Playground — no code required
- Start pulling AI Event Videos into your pipeline via the API docs
Questions? Reach out on X or contact us directly.
