OmniTraffic: A Large-scale Multi-view Spatiotemporal Benchmark for Traffic Understanding
Benchmark Summary
OmniTraffic is a comprehensive evaluation benchmark designed to test the multi-view spatiotemporal reasoning and Bird's-Eye View (BEV) perception capabilities of multimodal large language models (MLLMs) and autonomous driving systems.
While the complete OmniTraffic dataset ecosystem contains an underlying pool of over 8 million generated VQA samples, this repository specifically hosts the OmniTraffic Gold-Standard Benchmark: 3,200 highly curated VQA pairs systematically sampled from that 8M pool and rigorously validated by human experts.
This curated subset ensures logically sound reasoning chains and eliminates low-quality noise, serving as a reliable metric for deep traffic logic evaluation.
Figure 2: Distribution of the 3,200 curated VQA pairs across the three evaluation levels, task categories, and required capabilities.
Key Features
- Gold-Standard Quality: 3,200 human-validated QA pairs sampled from a massive 8M+ data pool.
- Spatiotemporal Reasoning: Requires models to track object trajectories, infer future states, and reason over long video contexts.
- Multi-View & BEV Integration: Covers comprehensive perception angles (Front, Rear, Sides, BEV) for robust autonomous driving simulation.
- Hierarchical Evaluation: Features a unique three-level evaluation framework testing foundation perception, spatiotemporal prediction, and strategic planning.
Three-Level Evaluation Framework
To systematically evaluate model capabilities, OmniTraffic introduces a progressive three-level framework (a per-level scoring sketch follows this list):
- Level 1: Foundation Perception. Focuses on object detection, state recognition (e.g., traffic light status, vehicle type), and basic spatial relationships across different camera views.
- Level 2: Spatiotemporal Prediction. Requires models to understand temporal dynamics, predict vehicle trajectories, anticipate pedestrian movements, and identify potential hazards.
- Level 3: Strategic Planning & Reasoning. The most advanced level, asking the model to act as the ego-vehicle driver and make complex decisions based on safety, traffic rules, and dynamic obstacle interactions.
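As a concrete illustration, the sketch below buckets multiple-choice accuracy by level. The `LEVEL_BY_SUBTASK` mapping is an assumption made for illustration: only "Existence" and "Trajectory Prediction" appear in the documented schema, and "Ego Planning" is a hypothetical subtask name.

```python
from collections import defaultdict

# Hypothetical subtask-to-level mapping for illustration only; the released
# metadata may encode the level differently (e.g., as an explicit field).
LEVEL_BY_SUBTASK = {
    "Existence": 1,              # Level 1: foundation perception
    "Trajectory Prediction": 2,  # Level 2: spatiotemporal prediction
    "Ego Planning": 3,           # Level 3: strategic planning (assumed name)
}

def accuracy_by_level(samples, predictions):
    """Aggregate multiple-choice accuracy per evaluation level."""
    hits, totals = defaultdict(int), defaultdict(int)
    for sample, pred in zip(samples, predictions):
        level = LEVEL_BY_SUBTASK.get(sample["subtask"])
        if level is None:
            continue  # skip subtasks not mapped here
        totals[level] += 1
        hits[level] += int(pred == sample["correct_answer"])
    return {lvl: hits[lvl] / totals[lvl] for lvl in sorted(totals)}
```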
⚠️ Looking for the Massive Original Data? If you need our complete underlying pool of over 8 million generated VQA samples (~280 GB) to pre-train, fine-tune, or evaluate your multimodal models, please visit our companion OmniTraffic Dataset repository.
Dataset Structure
The benchmark is structured to facilitate seamless integration with standard Hugging Face datasets pipelines.
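A minimal loading sketch, assuming the benchmark is published as a standard Hub dataset; the repository id and split name below are placeholders, not the actual identifiers.

```python
from datasets import load_dataset

# Placeholder repository id and split; substitute the actual Hub id
# of this benchmark when loading.
ds = load_dataset("ORG/OmniTraffic-Bench", split="test")

sample = ds[0]
print(sample["question"])
print(sample["options"], "->", sample["correct_answer"])
```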
Data Instances
A typical data instance contains the visual input path, the structured VQA prompt, multiple-choice options, and comprehensive metadata regarding the task and required capabilities.
```json
{
  "question": "Does the image contain any emergency vehicles such as police cars, ambulances, or fire trucks?",
  "answer": "No, there are no emergency vehicles in the image.",
  "options": {
    "A": "Yes",
    "B": "No"
  },
  "correct_answer": "B",
  "category": "Special Vehicles",
  "task": "Single Image",
  "subtask": "Existence",
  "capabilities": [
    "Object Detection",
    "Vehicle Classification",
    "Distance Estimation"
  ],
  "image_path": "images/394/2.png",
  "direction": 2,
  "timestep": "394"
}
```
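Given an instance like the one above, a lightweight harness can render the question and options into a text prompt and score a model's raw output against `correct_answer`. A minimal sketch (the image referenced by `image_path` would be passed to the model separately):

```python
def build_mcq_prompt(sample: dict) -> str:
    """Render one instance as a multiple-choice text prompt for an MLLM."""
    lines = [sample["question"]]
    for letter, text in sorted(sample["options"].items()):
        lines.append(f"{letter}. {text}")
    lines.append("Answer with the letter of the correct option.")
    return "\n".join(lines)

def is_correct(sample: dict, model_output: str) -> bool:
    """Score a raw model response by its leading option letter."""
    return model_output.strip().upper()[:1] == sample["correct_answer"]
```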
Data Fields
- `question` (string): The main VQA prompt asking about the traffic scenario.
- `answer` (string): A detailed textual explanation or the full expected text answer.
- `options` (dict): A dictionary mapping choice letters (e.g., "A", "B") to their respective candidate texts.
- `correct_answer` (string): The key corresponding to the correct option (e.g., "B").
- `category` (string): The overarching domain of the question (e.g., "Special Vehicles", "Traffic Lights").
- `task` (string): The input format or task scope (e.g., "Single Image", "Multi-view Video").
- `subtask` (string): The specific analytical goal (e.g., "Existence", "Trajectory Prediction").
- `capabilities` (list of string): The core perception and reasoning skills required by the model to solve the problem (e.g., "Object Detection", "Distance Estimation").
- `image_path` (string): Relative path to the visual input associated with the query.
- `direction` (int): An identifier mapping to the specific camera view or sensor angle.
- `timestep` (string): The temporal identifier or frame index within the specific driving scenario.
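These fields slot directly into standard `datasets` operations. A small sketch, again with a placeholder repository id, that filters by `task`/`subtask` and tallies the `capabilities` column:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("ORG/OmniTraffic-Bench", split="test")  # placeholder id/split

# Keep only single-image existence questions.
existence = ds.filter(
    lambda s: s["task"] == "Single Image" and s["subtask"] == "Existence"
)

# Count how often each capability is required across the benchmark.
capability_counts = Counter(c for caps in ds["capabilities"] for c in caps)
print(capability_counts.most_common(5))
```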
Data Construction & Collection
The OmniTraffic benchmark was constructed through a rigorous two-stage process:
- Stage 1 (Massive Generation): 8 million QA pairs were generated via an automated pipeline covering diverse simulated urban, suburban, and highway scenarios.
- Stage 2 (Expert Validation): A representative subset of 3,200 instances was sampled across all task levels and capabilities (a simplified sampling sketch follows this list). These instances underwent strict human-in-the-loop verification to correct ambiguous questions, rectify bounding box errors, and ensure accurate ground-truth answers.
- Privacy & Anonymization: All data has been processed to remove personally identifiable information (PII). License plates and human faces (if any) are anonymized.
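As a rough illustration of the Stage 2 sampling step, the sketch below draws a subset that preserves the pool's (task, subtask) composition. It is a minimal stand-in, not the authors' actual pipeline, which also stratified by capability and applied human review on top.

```python
import random
from collections import defaultdict

def stratified_sample(pool, target_size):
    """Draw a subset that roughly preserves the pool's (task, subtask)
    composition; a simplified stand-in for the Stage 2 sampling above."""
    buckets = defaultdict(list)
    for sample in pool:
        buckets[(sample["task"], sample["subtask"])].append(sample)
    if not buckets:
        return []
    per_bucket = max(1, target_size // len(buckets))
    subset = []
    for samples in buckets.values():
        subset.extend(random.sample(samples, min(per_bucket, len(samples))))
    return subset
```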
⚠️ Limitations & Bias
While OmniTraffic aims for comprehensiveness, the current version primarily covers standardized traffic scenarios. Extreme weather conditions (e.g., severe blizzards), highly unstructured rural roads, and rare long-tail edge cases may be underrepresented. Users should exercise caution when utilizing models fine-tuned on this dataset for real-world physical deployment without further rigorous domain adaptation and safety testing.
Maintenance Plan
- Updates & Errata: We will actively monitor community feedback via Hugging Face Discussions and the GitHub issue tracker. Corrections to annotations or corrupted files will be released as new dataset versions (e.g., v1.1).
- Hosting: The dataset will be permanently hosted on the Hugging Face Hub.