We Need Smarter Driverless Vehicle Regulations More Urgently Than We Need Smarter AI

Recent Tesla Autopilot and Cruise robo-taxi news has raised public concern. Strong federal and state regulations are needed to ensure the safety of driverless vehicles’ AI-based software.


Excitement about artificial intelligence has inflated expectations about what machine learning technology could do for automated driving. There are fundamental differences, however, between AI’s large language models (LLMs), which assemble words into sentences, and machines that drive vehicles on public roads. Automated driving has safety-of-life implications not only for the passengers of driverless vehicles but also for everybody else who shares the road. Its software must be held to much higher standards of accuracy and dependability than the LLMs that support desktop or mobile phone apps.

Although concerns about human driving errors are well justified, the frequency of serious traffic crashes in the U.S. is already remarkably low. Based on traffic statistics from the National Highway Traffic Safety Administration (NHTSA), fatal crashes occur approximately once in every 3.6 million hours of driving and injury-causing crashes about once in every 61,000 hours of driving. That’s one fatal crash in 411 years and one injury-causing crash in seven years of continuous 24/7 driving. Comparably long mean times between failures are extremely difficult to achieve for complex software-powered systems, particularly ones mass-produced at affordable prices.
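
A minimal sketch of that unit conversion, assuming only the NHTSA-based per-hour figures quoted above, shows how the years-of-continuous-driving estimates are obtained:

```python
# Back-of-envelope conversion of the crash-frequency figures cited above
# into equivalent years of continuous (24/7) driving.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a year of nonstop driving

hours_per_fatal_crash = 3.6e6     # ~1 fatal crash per 3.6 million driving hours
hours_per_injury_crash = 61_000   # ~1 injury-causing crash per 61,000 driving hours

print(f"Fatal crash:  about {hours_per_fatal_crash / HOURS_PER_YEAR:.0f} years")   # ~411 years
print(f"Injury crash: about {hours_per_injury_crash / HOURS_PER_YEAR:.0f} years")  # ~7 years
```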

Driverless vehicle company Cruise’s problems with California’s safety regulators and Tesla’s problems with NHTSA indicate some of the safety challenges that automated driving software systems face. These challenges are more than purely technological: they also demonstrate the serious risks of both companies’ attempts to bring the Silicon Valley culture of “moving fast and breaking things” into an application where safety needs to be the top priority. Developing safe systems requires patience and meticulous attention to detail, both of which are incompatible with speed. And our vehicles should not be breaking things—especially people.


That’s why the U.S. needs a rigorous safety regulatory framework for automated driving—so that the safety-enhancing potential of the technology can be realized and public trust in its safety can be earned by the industry, once it is properly vetted by safety experts and safety regulators. Because of its safety-critical nature, the software that drives vehicles will need to operate to an unprecedentedly high level of dependability. Both the general public and safety regulators will need to receive provable and explainable evidence that it can improve traffic safety rather than make it worse. This means that the software cannot depend entirely on AI methods of machine learning but will also need to incorporate explicit algorithmic safety guardrails. Tesla and Cruise provide a forewarning of why this is necessary.

In Tesla’s case, NHTSA has been investigating safety concerns with Level 2 partial driving automation systems, which are designed to control vehicle speed and steering under constant driver supervision and within specific, limited road and traffic conditions. On December 12, 2023, the agency announced an agreement with Tesla to recall vehicles equipped with Autopilot capability because the company did not include adequate safeguards against misuse by drivers. In stark contrast to the comparable driving automation features from Ford and General Motors, Tesla’s Autopilot does not use direct (infrared) video monitoring of drivers’ gaze to assess their vigilance in supervising the operation of the system. And the software allows the system to be used anywhere, without regard for whether it is on the limited-access freeways for which it was designed. Simple modifications could have provided reasonable indications of driver vigilance and restricted the system’s use to locations with suitable road conditions, reducing safety risks. The company refused to do this and is only adding some additional warnings to Autopilot (via an over-the-air software update) to try to discourage misuse. Stronger regulatory interventions are needed to compel the company to “geofence” the system so that it can be used only where it has been proven to operate safely and only when the cameras show that the driver is looking ahead for hazards that the system may not recognize.

Cruise’s authority to provide driverless ride-hailing service in San Francisco was rescinded by the California Department of Motor Vehicles after the company failed to provide complete and timely reporting of an October 2, 2023, incident in which one of its vehicles dragged a crash victim who was trapped underneath it and was seriously injured. This triggered a comprehensive internal reexamination of Cruise’s operations that revealed significant problems with both the organization’s safety culture and its interactions with the public and public agency officials. Cruise chose a Silicon Valley culture that valued the speed of development and expansion over safety, and in contrast with the other leading companies developing driverless ride-hailing services, it did not have a chief safety officer or an effective corporate safety management system. Although safety has been a prominent talking point for Cruise, the company apparently did not give it high priority when making decisions with significant safety implications.

In the near term, while automated driving technology is still maturing and insufficient data exist to define precise performance-based regulations, progress can still be made by implementing elementary requirements at the state or (preferably) national level to improve safety and enhance public perceptions of safety. Automated driving system (ADS) developers and fleet operators should be required to ensure that an ADS cannot operate where its behavior has not been shown to be safe; to report all crashes and near misses (as well as high-g maneuvers and human control takeovers); and to implement audited and regulated safety management systems. Finally, they should develop comprehensive safety cases subject to review and approval by state or federal regulators prior to deployment. The safety case should identify reasonably foreseeable hazards and describe how the risks to public safety from each hazard have been mitigated, based on quantitative evidence from testing under human supervision in real-world conditions.

The safety-enhancing potential of automated driving technology won’t be realized until the industry earns public trust in the technology’s safety. This will require regulations that set minimum requirements for safe system development and operation processes, along with sufficient disclosure of safety-relevant data for vetting by independent safety experts and regulators.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Steven E. Shladover is an engineer who has been conducting research on applications of advanced technologies for transportation systems for more than 50 years. He has expertise in transportation systems planning and analysis, vehicle dynamics and control, large-scale systems and economics. Shladover helped found the California PATH Program at the University of California, Berkeley, and has served in a variety of management roles, including program technical director and program manager for Advanced Vehicle Control and Safety Systems. He retired from his position as program manager for mobility at PATH in late 2017.
