
Challenge Summary

In response to growing concerns regarding the safety of vulnerable road users at intersections, the U.S. DOT launched the Intersection Safety Challenge to transform intersection safety through the development of innovative intersection safety systems (ISS) that can identify, predict, and mitigate unsafe conditions involving vehicles and vulnerable road users in real time. The Challenge was organized as a multi-stage prize competition that launched in April 2023 and concluded in January 2025. The Challenge aimed to assess the maturity and incentivize the development of new, cost-effective, real-time roadway ISS concepts that leverage sensor fusion and AI.


Challenge Structure

The competition was split into two stages: Stage 1A, focused on concept assessment, and Stage 1B, focused on system assessment and virtual testing.

Stage 1A

Concept Assessment

Stage 1A brought together innovative teams combining expertise in emerging technologies with traffic and safety engineering to develop new and potentially transformative intersection safety approaches. Participants submitted concept papers on their proposed intersection safety system designs.

120 innovative concept papers submitted

15 teams selected for prizes and invited to Stage 1B

Stage 1B

System Assessment and Virtual Testing

Stage 1B challenged the winning teams from Stage 1A to develop, train, and improve algorithms for the detection, localization, classification, path prediction, and conflict prediction of vulnerable road users and vehicles utilizing U.S. DOT-provided real-world sensor data collected on a closed course at the Federal Highway Administration (FHWA) Turner-Fairbank Highway Research Center (TFHRC).

13 teams participated by developing and training algorithms

10 teams selected for prizes

Stage 1A — Concept Assessment

Summary

Stage 1A brought together innovative teams combining expertise in emerging technologies with experience in traffic and safety engineering to develop new and potentially transformative intersection safety approaches. Participants submitted concept papers on their proposed intersection safety system designs that helped identify and mitigate unsafe conditions involving vehicles and vulnerable road users.

Results

The U.S. DOT evaluated 120 innovative concept papers, selecting 15 teams for prizes. Of these 15 teams, 2 were led by State DOTs, 7 by academic institutions, and 6 by other organizations. Following final verification of eligibility, these teams received a prize of $100,000 each and were invited to participate in Stage 1B: System Assessment and Virtual Testing.

Learn more about the winning teams and their approaches below.

Team Lead Entity | Submission Title
CNA | Safe Warnings for Intersections Forecasting Tool (SWIFT)
Deloitte Consulting | Intersection Safety System: Foundation for Smart & Connected Intersection
DENSO International America | Driving Behavior Integrated Intersection Safety System for Vulnerable Road Users
Derq USA | Derq's Intersection Safety System
Florida A&M University and Florida State University | Predictive Intersection Safety System (PREDISS)
Global Traffic Technologies/Miovision USA | White Alert: A Digital Multi-Channel Vision for Scalable Intersection Safety
Ohio State University | Transforming Intersection Safety Through Emerging Technologies for All Road Users
Orion Robotics Labs | Orion Labs Saiph Intersection Safety System
Texas Department of Transportation | Applying LiDAR-based Multimodal Tracking to Improve Vulnerable Road User Safety at Signalized Intersections
University of California, Los Angeles | InfraShield: Pioneering Safe Intersections for All Road Users through AI-Powered Infrastructure Solutions
University of California, Riverside | Safety Assurance System for Vulnerable Road Users at Signalized Intersections (SAINT)
University of Hawaii | Toward Vision Zero: Sensing, Predicting, and Preventing Intersection Collisions
University of Michigan | SAFETI: Safety Actions for Everyone at Traffic Intersections
University of Washington | Comprehensive and Cooperative Intersection Safety Systems
Utah Department of Transportation | Improving Intersection Safety with Light Detection and Ranging (LiDAR)

Stage 1B — System Assessment and Virtual Testing

Summary

Stage 1B challenged the winning teams from Stage 1A to develop, train, and improve algorithms for the detection, localization, classification, path prediction, and conflict prediction of vulnerable road users and vehicles utilizing U.S. DOT-provided real-world sensor data collected on a closed course at the Federal Highway Administration (FHWA) Turner-Fairbank Highway Research Center (TFHRC).

Data Collection

The Intersection Safety Challenge Dataset features a comprehensive collection of conflict/non-conflict scenario data involving various road users, captured under various weather and lighting conditions by visual and thermal cameras, LiDAR, and radar sensors.

Teams used the data to develop and train models that predict potential conflicts and provide enough time for warnings or other real-time countermeasures. This rich and unique dataset lays the foundation for key research efforts aimed at improving intersection safety.

Interested users can download sample data and request access to the full challenge dataset from the U.S. DOT's public data portal.

Results

The U.S. DOT awarded 10 teams prizes ranging from $166,666 to $750,000, for a total of $4,000,000 in prize awards.

Stage 1B demonstrated that many teams could perform acceptable detection, localization, and classification under ideal conditions. However, performance in night and low-visibility conditions leaves room for improvement. Additionally, further testing is required to assess the speed and accuracy of real-time path and conflict prediction as well as the effectiveness of various conflict mitigation strategies.

Learn more about the winning teams and their approaches below.

Team Lead Entity | Summary
Derq USA, Inc.
  • Utilizes an approach that fuses data from varied perception sensors and learns from historical data to build real-time situational awareness for monitoring and analyzing road user behavior.
  • Developed real-time cooperative perception technology based on computer vision, machine learning, and sensor fusion, with applications in traffic control, connected vehicles, and safety analytics, including illegal road-user movement detection, real-time near-miss (conflict) detection, and real-time crash detection for vehicles and vulnerable road users.
  • Approach includes a process for sensor calibration, including the alignment of camera and LiDAR data, and synchronization of timestamps for data integration.
    • Perception algorithms are employed to detect and classify road users in a variety of sensor feeds, which are tracked and then fused in a common frame of reference.
  • Path prediction models are applied to anticipate the future movements of road users, feeding into conflict detection algorithms to identify potential collision scenarios.
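The calibration step above hinges on timestamp synchronization so that camera frames and LiDAR sweeps describe the same instant. A minimal sketch of one common approach, nearest-timestamp matching, is below; the function name `match_nearest` and the 50 ms tolerance are illustrative assumptions, not part of Derq's system.

```python
from bisect import bisect_left

def match_nearest(camera_ts, lidar_ts, tolerance=0.05):
    """Pair each camera frame with the nearest-in-time LiDAR sweep.

    camera_ts, lidar_ts: sorted timestamp lists in seconds.
    Returns (camera_index, lidar_index) pairs whose time offset is
    within `tolerance` seconds; unmatched frames are dropped.
    """
    pairs = []
    for ci, t in enumerate(camera_ts):
        i = bisect_left(lidar_ts, t)
        best = None
        # Candidates: the sweep just before and just after t.
        for li in (i - 1, i):
            if 0 <= li < len(lidar_ts):
                dt = abs(lidar_ts[li] - t)
                if best is None or dt < best[1]:
                    best = (li, dt)
        if best is not None and best[1] <= tolerance:
            pairs.append((ci, best[0]))
    return pairs
```

Frames whose nearest sweep is too far away in time are simply dropped rather than force-matched, which avoids fusing stale detections.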
University of California, Los Angeles (UCLA) Mobility Lab
  • InfraShield system uses sensor fusion and path prediction technologies that leverage multimodal sensor data, including LiDAR, red-green-blue (RGB) cameras, and radar, to detect, classify, and track vulnerable road users and vehicles under challenging conditions.
  • Utilizes a late fusion approach to combine sensor data for object detection, classification, and tracking, addressing calibration issues and sensor limitations.
  • For path prediction, InfraShield employs machine learning models to forecast future movements of road users, using high-definition maps and historical object trajectory data. The models account for the diverse paths of vehicles and vulnerable road users, remain robust to noisy data, and can be used to identify conflict points via time-to-collision calculations.
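The conflict-point idea above can be sketched by scanning two predicted trajectories sampled on the same time grid and reporting the first step at which the agents come within a safety radius. This is a simplified illustration, not UCLA's implementation; the function name, the 0.1 s step, and the 1 m radius are assumptions.

```python
import math

def first_conflict(traj_a, traj_b, dt=0.1, radius=1.0):
    """Scan two predicted (x, y) trajectories sampled every `dt` seconds.

    Returns (ttc_seconds, conflict_point) for the first timestep where
    the agents are within `radius` meters of each other, else None.
    """
    for k, ((xa, ya), (xb, yb)) in enumerate(zip(traj_a, traj_b)):
        if math.hypot(xa - xb, ya - yb) <= radius:
            # Midpoint of the two positions serves as the conflict point.
            return k * dt, ((xa + xb) / 2.0, (ya + yb) / 2.0)
    return None
```

A real system would run this over every vehicle/vulnerable-road-user pair and use the returned time as the warning budget.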
University of Hawaii
  • Relies on sensor fusion across multiple modalities, including LiDAR, RGB cameras, thermal cameras, and signal data, providing highly accurate 3D localization, open-vocabulary detection of even potentially unknown test-time classes, and multi-mode probabilistic path prediction, which are combined for conflict prediction.
  • The approach's optimized utilization and fusion of sensors allows real-time inference on cheaper devices, minimizes data curation costs, and ensures good generalization across conditions, all of which are crucial for scalability to intersections across the nation.
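Combining multi-mode probabilistic path predictions into a single conflict estimate can be sketched as marginalizing over the predicted modes: the conflict probability is the total probability mass of the modes that lead to a conflict. This is a generic illustration of the idea, not the University of Hawaii's code; the function name and inputs are assumptions.

```python
def conflict_probability(modes, is_conflict):
    """Collapse a multi-mode path prediction into one conflict probability.

    modes: list of (probability, predicted_path) hypotheses for a road
    user, with probabilities summing to 1.
    is_conflict: callable judging whether a given predicted path leads
    to a conflict (e.g., via a time-to-collision check).
    Returns the summed probability of the conflicting modes.
    """
    return sum(p for p, path in modes if is_conflict(path))
```

Thresholding this probability (rather than a single most-likely path) lets the system warn on plausible-but-not-dominant maneuvers such as a pedestrian stepping off the curb.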
University of Michigan
  • Team includes Mcity of the University of Michigan, General Motors Global R&D, Ouster, and Texas A&M University.
  • SAFETI real-time algorithms are designed to work with DOT-supplied sensor data, focusing on identifying and predicting the movement of vehicles and vulnerable road users at intersections.
  • The approach integrates 2D detection from images and 3D detection from LiDAR data, followed by sensor fusion and trajectory prediction, with a conflict detection module that evaluates potential collisions between agents in real-time.
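Fusing 2D image detections with 3D LiDAR detections, as described above, is often done by projecting each 3D centroid into the image and matching it to the 2D box it falls inside. The sketch below uses a simple pinhole model and greedy assignment; it is an illustrative assumption, not SAFETI's actual fusion code, and the function names are invented.

```python
def project(pt3d, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point (x, y, z) to pixels."""
    x, y, z = pt3d
    return fx * x / z + cx, fy * y / z + cy

def associate(boxes2d, centers3d, fx, fy, cx, cy):
    """Greedily match each 2D box (x1, y1, x2, y2) to the unused 3D
    detection whose projected centroid lies inside it, preferring the
    one nearest the box center. Returns {box_index: detection_index}."""
    matches, used = {}, set()
    for bi, (x1, y1, x2, y2) in enumerate(boxes2d):
        bx, by = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        best = None
        for di, center in enumerate(centers3d):
            if di in used:
                continue
            u, v = project(center, fx, fy, cx, cy)
            if x1 <= u <= x2 and y1 <= v <= y2:
                d = (u - bx) ** 2 + (v - by) ** 2
                if best is None or d < best[1]:
                    best = (di, d)
        if best is not None:
            matches[bi] = best[0]
            used.add(best[0])
    return matches
```

Matched pairs inherit the image classifier's label and the LiDAR detection's 3D position, which is the usual payoff of this kind of late fusion.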

Team Lead Entity | Summary
Florida A&M University (FAMU) and Florida State University (FSU)
  • The Predictive Intersection Safety System's (PREDISS's) goal is to leverage machine learning, controls, optimization, and connected and autonomous vehicle technologies to improve the safety of vulnerable road users at signalized intersections.
  • Approach fuses low-cost sensors' data to detect (differentiate and classify), localize, track, and predict the trajectories of vehicles and vulnerable road users.
    • Strikes a balance between compute power and practicality, factoring in the long-term goal of retrofitting such a system in intersections across the United States. Designed system in collaboration with the Tallahassee Advanced Traffic Management System (TATMS) to be deployable on existing infrastructure, including testing on live feeds.
    • Key design choices include: 1) Modular architecture; 2) User-friendly calibration; 3) Python-based implementation; 4) Efficient algorithms; 5) Adaptive fusion techniques
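The modular-architecture design choice above can be sketched as a pipeline of interchangeable stages, so a detector, fusion step, or predictor can be swapped or recalibrated without touching the rest. The classes below are a toy illustration of that pattern, not PREDISS code; all names and the toy stages are assumptions.

```python
class Stage:
    """One pluggable pipeline stage; subclasses override process()."""
    def process(self, frame):
        return frame

class Pipeline:
    """Runs stages in order over a shared frame dictionary."""
    def __init__(self, stages):
        self.stages = stages

    def run(self, frame):
        for stage in self.stages:
            frame = stage.process(frame)
        return frame

class ToyDetector(Stage):
    """Placeholder detector: tags the frame with a detections list."""
    def process(self, frame):
        frame["detections"] = ["vehicle", "pedestrian"]
        return frame

class ToyConflictPredictor(Stage):
    """Placeholder predictor: flags a conflict when both a vehicle and
    a pedestrian are present in the detections."""
    def process(self, frame):
        found = frame.get("detections", [])
        frame["conflict"] = "vehicle" in found and "pedestrian" in found
        return frame
```

Deploying on existing infrastructure then amounts to re-instantiating the pipeline with site-specific stage implementations.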
Miovision (Global Traffic Technologies)
  • Team includes Miovision USA, Carnegie Mellon University, Amazon Web Services, and Telus.
  • Devised a perception, path prediction, and conflict prediction framework centered around RGB camera and LiDAR sensors.
  • Emphasizing decision-level sensor fusion, the approach combines independent detections from multiple strategically positioned cameras with granular 3D spatial details from LiDAR data, enhancing detection and localization.
    • Perception module encompasses components such as refined YOLO-based object detection and classification in 2D and LiDAR-based object detection in 3D, multi-camera object tracking based on DeepSORT, and advanced LiDAR-based 3D object localization.
    • Subsequent path prediction, bolstered by an expanded dataset and the AutoBots-Joint model, forecasts complex future scenarios for each road user at the intersection, using bird's-eye-view projections enriched by PCA-based ground-plane estimations.
    • Finally, the conflict prediction framework applies the time-based surrogate safety measure of time-to-collision (TTC) to capture complex interactions and anticipate potential collision scenarios, complemented by probabilistic filtering to reduce false positives.
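One simple form of the probabilistic filtering mentioned above is temporal debouncing: raise an alert only when a conflict has been flagged in at least k of the last n frames, suppressing single-frame false positives. This sketch is an assumption about the general technique, not Miovision's implementation; the class name and default thresholds are invented.

```python
from collections import deque

class ConflictFilter:
    """Alert only when >= k of the last n per-frame conflict flags are set."""
    def __init__(self, k=3, n=5):
        self.k = k
        self.window = deque(maxlen=n)  # sliding window of recent flags

    def update(self, conflict_flag):
        """Feed one frame's raw conflict flag; return the filtered alert."""
        self.window.append(bool(conflict_flag))
        return sum(self.window) >= self.k
```

The trade-off is a small added latency of up to k frames before a genuine conflict triggers an alert, in exchange for far fewer spurious warnings.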
Ohio State University
  • Approach leverages a late fusion strategy that integrates data from LiDAR, RGB cameras, and infrared sensors.
    • Utilizes a Euler-Region Proposal Network (E-RPN) to process Bird's Eye View (BEV) projections of LiDAR point cloud data.
    • Concurrently, a YOLOv10 network is employed for 2D object detection, and a ByteTrack2 tracker is used to track 2D bounding boxes over time. YOLO is applied independently to both RGB and infrared images to maximize detection accuracy.
    • By analyzing the velocity states of the tracked objects, it predicts their future trajectories over a specified time horizon, assuming constant velocity and performing linear extrapolation.
    • Potential collisions are identified by examining the predicted trajectories on the x, y plane.
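The constant-velocity extrapolation and x-y plane conflict check described above can be sketched directly: propagate each tracked object's position linearly over the horizon, then flag any pair of paths that come within a safety radius at the same predicted timestep. This is a minimal illustration of the stated technique, with the function names, horizon, and radius as assumptions.

```python
def extrapolate(pos, vel, horizon=3.0, dt=0.1):
    """Constant-velocity linear extrapolation of an (x, y) position.

    Returns predicted (x, y) points every `dt` seconds out to `horizon`.
    """
    steps = int(horizon / dt)
    x, y = pos
    vx, vy = vel
    return [(x + vx * k * dt, y + vy * k * dt) for k in range(steps + 1)]

def paths_conflict(path_a, path_b, radius=1.5):
    """Flag a potential collision if the two predicted paths come within
    `radius` meters of each other at the same timestep (x-y plane check)."""
    return any(
        (xa - xb) ** 2 + (ya - yb) ** 2 <= radius ** 2
        for (xa, ya), (xb, yb) in zip(path_a, path_b)
    )
```

Checking positions at matching timesteps (not mere path crossing) avoids flagging trajectories that intersect in space but not in time.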
Orion Robotics Labs
  • Orion Robotics Labs is a small woman-owned business in rural Colorado with expertise in machine learning, edge computing, sensing technologies, and robotics.
  • Developed a solution for detection, localization, classification, and path/conflict prediction to increase intersection safety by combining lightweight algorithms, fine-tuned calibrations, and fast processing.
University of California, Riverside
  • Approach aims to develop an Intersection Safety System (ISS) using roadside sensor data, vehicle-to-everything (V2X) communications, and artificial intelligence (AI) to continuously monitor traffic, predict traffic states (including trajectories) and potential conflicts, and enhance vulnerable road user safety at signalized intersections; the Stage 1B effort focused on roadside perception and collision prediction.
  • The approach developed centers around the following modules: 1) Data Processing; 2) Sensor Fusion; 3) Multi-Object Tracking; 4) Path Prediction; 5) Collision Prediction.
    • Integrates computer vision technologies and other machine learning techniques for road user detection (including sub-classification and localization), tracking, path prediction, and conflict prediction at signalized intersections.
University of Washington
  • Developed a Cooperative Perception System (CPS) to generate a comprehensive understanding of intersection dynamics.
  • System integrates multiple sensors, including eight visual cameras, five thermal cameras, and two 3D LiDARs, enabling 3D object detection, classification, and path and conflict prediction.
  • Architecture of CPS is structured into three primary modules: Object Detection and Classification, 2D-3D Camera Calibration, and Tracking and Prediction.
    • Object Detection and Classification Module acts as the foundation of the CPS and processes incoming data from both visual and thermal cameras to detect and classify road users in various lighting conditions.
    • 2D-3D Camera Calibration Module converts 2D detection results into 3D object representations using a multi-sensor re-identification process that merges data from cameras and 3D LiDAR sensors.
    • Tracking and Prediction Module utilizes the DeepSORT algorithm to track the 3D detections, capturing crucial movement data such as trajectories, speeds, and orientations. This information feeds into a Seq2Seq prediction model, which processes the sequences of past object states to forecast future movements. The model predicts potential paths and identifies possible conflicts by calculating the time-to-collision (TTC), thus assessing the likelihood of hazardous interactions.
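The TTC calculation at the end of the pipeline above can be sketched in closed form under a constant-relative-velocity assumption: solve for the time at which two agents' separation first shrinks to a collision radius (the closest-point-of-approach formulation). This is a generic illustration of a TTC computation, not UW's model-based version; the function name and default radius are assumptions.

```python
def time_to_collision(p_a, v_a, p_b, v_b, radius=1.0):
    """Closest-point-of-approach TTC for two agents in the x-y plane.

    p_*: current (x, y) positions; v_*: (vx, vy) velocities, assumed
    constant. Returns the time in seconds when separation first equals
    `radius`, 0.0 if already within it, or None if it never happens.
    """
    dx, dy = p_b[0] - p_a[0], p_b[1] - p_a[1]
    dvx, dvy = v_b[0] - v_a[0], v_b[1] - v_a[1]
    # |Δp + t·Δv|² = radius² is a quadratic a·t² + b·t + c = 0 in t.
    a = dvx * dvx + dvy * dvy
    b = 2.0 * (dx * dvx + dy * dvy)
    c = dx * dx + dy * dy - radius * radius
    if c <= 0:
        return 0.0          # already within the collision radius
    if a == 0:
        return None         # no relative motion
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None         # paths never come within the radius
    t = (-b - disc ** 0.5) / (2.0 * a)   # earlier root = first contact
    return t if t >= 0 else None
```

Downstream logic would compare this TTC against a warning threshold to decide when an interaction counts as hazardous.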

Additional Resources

Intersection Safety Challenge — From Conceptualization to Initial Testing

Webinar | July 27, 2023

Intersection Safety Challenge Prize Competition

Webinar | May 22, 2023