Driverless car regulations modeled after drug approval process

Autonomous systems like driverless cars perform tasks that previously could only be performed by humans. In a new IEEE Intelligent Systems Expert Opinion piece, Carnegie Mellon University artificial intelligence ethics experts David Danks and Alex John London argue that current safety regulations were not designed with these systems in mind and are therefore ill-equipped to ensure that autonomous systems will perform safely and reliably.

Danks and London point to the Department of Transportation’s recent attempt to develop safety regulations for driverless cars as an example of traditional guidelines that do not adequately test and monitor the novel capabilities of autonomous systems. Instead, they suggest creating a staged, dynamic system that resembles the regulatory and approval process for drugs and medical devices, including a robust system for post-approval monitoring.

The phased process Danks and London propose would begin with “pre-clinical trials,” or testing in simulated environments, such as self-driving cars navigating varied landscapes and climates. This would provide information about how the autonomous system makes decisions in a wide range of contexts, so that regulators can better understand how it might act in new, previously unseen situations.

Acceptable performance would permit the system to move on to “in-human” studies through a limited introduction into real-world environments with trained human “co-pilots.” Successful trials in these targeted environments would then lead to monitored, permit-based testing, and further easing of restrictions as performance goals were met.

“Autonomous vehicles have the potential to save lives and increase economic productivity. But these benefits won’t be realized unless the public has credible assurance that such systems are safe and reliable,” said London.