Responses to ITS PAC Questions for October 2012 Meeting

1. How will positioning, communication, and driver interface technology be measured, tested and evaluated in the Pilot?

In addition to the GPS positioning and GPS/DSRC communication research performed for V2V crash warning applications in the CAMP VSC2 Vehicle Safety Communications – Applications project (2006-2009) and the CAMP VSC3 Interoperability Project (2010-present), CAMP is continuing research in this area within two Safety Pilot projects.  The prior research focused on relative vs. absolute positioning performance, RTK vs. WAAS GPS corrections, cross-channel interference, and range, power, and packet-rate evaluations.

The current research is focused on:
    • Assessing the performance and reliability of 5.9 GHz DSRC communications and GPS in diverse geographic locations and environmental conditions.  This work concentrates on relative positioning performance for V2V safety applications based on Global Navigation Satellite Systems (GNSS), as well as DSRC range and link quality (a simplified relative-position computation is sketched after this list).
    • Translating relative positioning needs into absolute positioning requirements that do not overburden the automotive device suppliers.
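
As a concrete illustration of the relative positioning question, the following is a minimal Python sketch of how an east/north offset between two vehicles could be derived from their absolute GNSS fixes.  It is illustrative only: the function name and flat-Earth approximation are ours, not the CAMP software's, and production V2V applications use considerably more sophisticated techniques (e.g., raw-measurement sharing and RTK-style corrections).

    import math

    EARTH_RADIUS_M = 6371000.0  # mean Earth radius, in meters

    def relative_position_m(lat_ref, lon_ref, lat_tgt, lon_tgt):
        # Approximate east/north offset (meters) of a target vehicle
        # relative to a reference vehicle, from absolute GNSS fixes.
        # A flat-Earth approximation is adequate at the short ranges
        # (a few hundred meters) relevant to V2V safety applications.
        d_lat = math.radians(lat_tgt - lat_ref)
        d_lon = math.radians(lon_tgt - lon_ref)
        north = d_lat * EARTH_RADIUS_M
        east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(lat_ref))
        return east, north

    # Example: two vehicles roughly 100 m apart (coordinates near Ann Arbor)
    east, north = relative_position_m(42.2808, -83.7430, 42.2817, -83.7430)
    print(f"offset: {east:.1f} m east, {north:.1f} m north")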

This research began during the Driver Acceptance Clinic (DAC) project, in which 20,000 miles of data were collected using eight vehicles (two groups of four) driven by professional drivers on predetermined routes near each of the six DAC locations (spread across the USA).  The routes were designed to obtain specific percentages of deep urban roads, major rural throughways, major urban throughways, major roads, local roads, interstates/freeways, and mountain roads.  The experiment was specifically designed to simultaneously collect data from multiple GPS receivers (up to four) on each vehicle.  This included a combination of survey-grade and automotive-grade receivers, with one receiver (the NovAtel OEMV) being common across all vehicles.

The positioning research is being performed by CAMP VSC3 in parallel with the Ann Arbor Model Deployment, using eight template vehicles rather than the 64 Model Deployment vehicles.  However, data from the 64 integrated vehicles may also be analyzed.

The driver-vehicle interfaces (DVIs) were evaluated prior to the Ann Arbor Model Deployment and feedback was provided to the OEMs and Aftermarket Safety Device developers. The intention was to ensure the DVIs met a minimum set of design requirements prior to participating in the model deployment. No evaluations of the DVIs will be formally conducted during the Pilot.

2. Will missed alarms/false alarms be detected and measured, and driver reactions to them?

False alarms will be collected and identified through the data analysis activities.  Missed alarms are more difficult to detect, but can be identified when the missed alarm is associated with a conflict that is visible in the data.  Missed alarms and false alarms that are identified will be incorporated into the estimate of the safety effectiveness of the associated safety application.  Driver reactions may be gauged by studying the video; in addition, the post-drive survey will ask about drivers' acceptance of the safety applications, including whether false alarms factor into that acceptance.
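
As one illustration of how such events might be flagged during data analysis, the following Python sketch classifies candidate missed and false alarms from logged records.  The record fields and the 2.0 s time-to-collision threshold are hypothetical, not values from the Safety Pilot analysis plan, and flagged events would still require manual confirmation (e.g., from video).

    # Hypothetical log record: (timestamp, time_to_collision_s, alert_issued)
    TTC_CONFLICT_S = 2.0  # illustrative conflict threshold, not a CAMP value

    def classify_events(records):
        # A conflict with no alert is a candidate missed alarm; an alert
        # with no conflict is a candidate false alarm.
        missed, false_alarms = [], []
        for timestamp, ttc, alerted in records:
            in_conflict = ttc is not None and ttc < TTC_CONFLICT_S
            if in_conflict and not alerted:
                missed.append(timestamp)
            elif alerted and not in_conflict:
                false_alarms.append(timestamp)
        return missed, false_alarms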

3. Will there be an assessment and tradeoff of alternative HMIs (visual, audible, haptic)?

The Driver Acceptance Clinic experiment and Ann Arbor Model Deployment were not designed to provide feedback on the effectiveness and intuitiveness of the individual OEM DVI implementations. As noted above, the individual DVIs (i.e., HMIs) were evaluated relative to a set of minimum requirements prior to acceptance into the Model Deployment. Because of this, the associated results should be interpreted with caution.

4. How will the differences and comparisons of integrated systems vs. retrofit vs. aftermarket vs. vehicle awareness devices be assessed?

The Safety Pilot independent evaluator is charged with assessing the safety impact, system capability, and driver acceptance of each of the devices in the Safety Pilot. In addition, the differences in capabilities (security, safety applications, DVI, etc.) across these types of devices will be documented as an output and considered in conjunction with the results from the independent evaluator.

More discussion on this topic can take place during the Safety Pilot session.

5. Will traffic congestion relief be observed and measured?

No.  However, data about the road network is being collected, archived, and will be made available for additional research by government and industry.

6. What are the device and system failure detection and resolution modes/solutions?

For the Integrated Light Vehicles, this will be addressed in the DAC presentation in addition to the response to question 2, above.

7. Is there any testing of the security procedures and process in the Safety Pilot?

All security messages and functions were tested prior to the Model Deployment and will continue to be tested during it. This includes integrated, retrofit, and aftermarket devices requesting, receiving, and decrypting batches of certificates, then using those security credentials to sign outgoing BSMs and to verify the certificates in received BSMs.  To support this, a Model Deployment Security Credential Management System (SCMS) generates, encrypts, and distributes batches of new certificates upon request from the various devices.

In addition, the SCMS will send Certificate Revocation Lists (CRLs) that devices use to reject BSMs signed with revoked certificates, and will exercise the reporting process for misbehavior detection.
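
For illustration, the sign/verify portion of this flow can be sketched in a few lines of Python using ECDSA over the NIST P-256 curve (the curve specified by IEEE 1609.2 for DSRC).  This is a simplified sketch only: real devices use the IEEE 1609.2 message encodings and the Model Deployment SCMS interfaces, and the certificate identifiers below are hypothetical.

    # Requires the third-party "cryptography" package.
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    private_key = ec.generate_private_key(ec.SECP256R1())  # device signing credential
    public_key = private_key.public_key()                  # carried in the certificate
    revoked_cert_ids = {"cert-042"}                        # populated from the latest CRL

    def sign_bsm(payload: bytes) -> bytes:
        # Sign an outgoing BSM payload with the device's credential.
        return private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

    def accept_bsm(payload: bytes, signature: bytes, cert_id: str) -> bool:
        # Reject BSMs whose certificate is revoked or whose signature fails.
        if cert_id in revoked_cert_ids:
            return False
        try:
            public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False

    bsm = b"example basic safety message"
    print(accept_bsm(bsm, sign_bsm(bsm), "cert-001"))  # True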

8. What else is being measured and evaluated?

In addition to supporting the 2013 and 2014 NHTSA agency decisions, all basic safety messages (BSMs) are being logged by the RSEs for later analysis by FHWA, to determine whether BSMs can be used to estimate traffic counts and patterns in support of traffic operations, and possibly to time traffic signals.
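
As a simple illustration of the kind of analysis FHWA might perform, the Python sketch below counts distinct vehicles seen by each RSE per time interval from a BSM log.  The field names are hypothetical; note also that BSM temporary IDs rotate periodically for privacy, so a real analysis would have to account for ID changes inflating the counts.

    from collections import defaultdict

    def traffic_counts(bsm_log, interval_s=300):
        # Count distinct temporary vehicle IDs heard by each RSE per
        # five-minute interval, as a crude proxy for traffic volume.
        seen = defaultdict(set)  # (rse_id, interval index) -> vehicle IDs
        for rse_id, vehicle_id, timestamp in bsm_log:
            seen[(rse_id, int(timestamp // interval_s))].add(vehicle_id)
        return {key: len(ids) for key, ids in seen.items()}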

9. What are the greatest uncertainties going into the Pilot that we hope to understand better?

Our major objective coming out of the Safety Pilot is to gain enough high-quality empirical data to support the 2013 and 2014 NHTSA agency decisions.  To ensure this, we completed a quantitative analysis that helped shape the scope and parameters of the Model Deployment.  Additionally, we completed traffic simulations specific to the selected site (Ann Arbor) to help refine the experimental design.  This will be discussed in detail as part of the formal presentations.

We also hope to better understand what role aftermarket devices might play in helping to accelerate market penetration, and therefore the benefits, of such deployed systems.

10. What are the greatest risks in the concept and operations currently?

At this point in the program, the greatest risk is that the experimental design may not generate a sufficient volume of data to support the 2013 and 2014 decisions. The team is taking a number of steps to monitor this risk, including weekly performance reports and preliminary analysis of the data. If the team detects that less data is being collected than expected, a series of response plans is in place that would be implemented to increase the quantity of data generated and collected.
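
A weekly performance check of this kind could be as simple as the following sketch, which flags weeks in which the collected data volume falls below a tolerance of the expected volume.  The threshold and units are illustrative, not program targets.

    def weekly_shortfall(collected_gb, expected_gb, tolerance=0.9):
        # Flag weeks in which collected data fell below 90% of the
        # expected volume, so a response plan can be triggered.
        return [week for week, (got, want)
                in enumerate(zip(collected_gb, expected_gb), start=1)
                if got < tolerance * want]

    # Example: week 3 falls short of its target.
    print(weekly_shortfall([10.2, 9.8, 6.1], [10.0, 10.0, 10.0]))  # [3]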

11. What reports will be published and when?

    • Crash Framework/initial crash information – documentation to be published in 3 months
    • Interoperability, including security – documentation published 6 months after the 2013 Agency Decision
    • Device Certification, research-related Objective Test Procedures, and Performance Measures – documentation published 4 months after the 2013 Agency Decision
    • Model Deployment evaluation of Safety Impacts, System Capabilities, Driver Acceptance, and System Communications and Positioning Performance – documentation published 7 months after the 2013 Agency Decision
    • Safety Effectiveness and Benefit Estimates – documentation published 7 months after the 2013 Agency Decision

12. How will Safety Pilot results feed the NHTSA rulemaking decision?

This item will be presented during the meeting.

13. How will Safety Pilot results feed industry and operating agency planning?

An example of this item will be presented during the meeting.

14. How do Safety Pilot prototype components differ from future operational components?

This item may be addressed during the meeting as a discussion item and as part of the presentation on driver clinics.

15. What were the results of the driver clinics?

This item will be presented during the meeting.

16. Is there any way to test the driver complacency concern/question?

It will be possible to assess changes in driver behavior in response to alerts for the Pilot vehicles with DVIs (e.g., integrated vehicles, ASDs, and integrated trucks). Changes over time in driver responses to alert situations, or to the alerts themselves, will provide a good indication of whether drivers become complacent. To address this issue more directly, the Connected Vehicle Safety program is funding a separate field operational experiment to observe a set of drivers as they experience production-level warning systems over several months of exposure.
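
One simple way to look for such a trend, sketched below in Python, is to average driver response latency to alerts by month of exposure; a latency that grows with exposure would be consistent with complacency.  The event fields are hypothetical, not the Safety Pilot's actual data schema.

    from collections import defaultdict

    def mean_response_by_month(events):
        # events: (month_of_exposure, response_latency_s) pairs, where
        # latency is the time from alert onset to the driver's first
        # braking or steering input. A latency that rises over months
        # would be consistent with growing complacency.
        buckets = defaultdict(list)
        for month, latency_s in events:
            buckets[month].append(latency_s)
        return {m: sum(v) / len(v) for m, v in sorted(buckets.items())}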

17. What further testing will be required to support both a rulemaking decision as well as operational deployment?

This item will be presented during the meeting.

 
