Earlier this month, the U.S. House of Representatives passed a bill designed to clarify the federal government's role in overseeing the development and release of self-driving cars. As the Senate reviews the bill, known as the Safely Ensuring Lives Future Deployment and Research In Vehicle Evolution (Self-Drive) Act, security experts question whether the resulting law would have enough gas to actually improve vehicle cybersecurity.
The Self-Drive Act, as written, would require makers of autonomous vehicles to set up a process for identifying and mitigating “reasonably foreseeable” vulnerabilities, but it doesn’t define that process. And while it would require the manufacturers to have cybersecurity managers, training, and intrusion prevention and response systems in place, it doesn’t detail how the companies should follow through on the requirements.
The 36-page House bill, 2 pages of which contain cybersecurity regulations, is largely focused on defining the National Highway Traffic Safety Administration’s role in setting safety standards for autonomous vehicles, while limiting state regulation and waiving some traditional safety regulations during research. The Senate is expected to draft its own bill.
Some security experts said the regulations outlined in Self-Drive are appropriately ambiguous, given that self-driving technologies are still in their infancy. Others called for stronger cybersecurity rules.
The bill’s requirements “seem necessary but insufficient for autonomous vehicles,” says Beau Woods, founder and CEO of Stratigos Security. Woods, a former deputy director of the Atlantic Council’s Cyber Statecraft Initiative, says Self-Drive’s cybersecurity and privacy regulations, which might suffice in a typical IT environment, fail to adequately address dangers on the road.
“Practices that seek to protect confidentiality of information in data centers,” Woods says, don’t necessarily cover “protecting the integrity and availability of human life and public safety on highways. … The operating environment, economics, components, adversaries, consequences, and time scales are very different.”
In short, Woods thinks that the law should reflect the notion that hacks of autonomous vehicles are more dangerous than many other types of cyberattacks.
“While data breaches have failed to cause widespread public outcry, loss of life from a cybersecurity incident would shatter public confidence in autonomous vehicles, denying or delaying their benefits,” Woods says. “Those benefits include tens of thousands of lives saved, and billions—or trillions—of dollars in global [gross domestic product]. When the stakes are this high, a higher standard of care is merited.”
Other security experts, echoing the position many lawmakers take on rules for rapidly developing technologies, say regulatory language for self-driving cars is better left broad.
“Very specific laws tend to not be effective because a particular technical approach or countermeasure is going to be obsolete long before any law is changed,” says Stefan Savage, a computer science professor and self-driving car security researcher at the University of California at San Diego.
“It’s always difficult when discussing making regulations for new technology,” says Craig Smith, research director of transportation security at security consultancy Rapid7. “To truly write detailed, strict rules, you’d have to make guesses about how technology will evolve and how it will be used, and then subsequently, make decisions about what’s safe, what’s not, and what needs to be limited.”
Keeping regulatory language broad “will allow different companies room to innovate and come up with unique methods to fulfill these rules,” Smith says. “We might see some challenges with implementation or interpretation, but by themselves, they are useful as broad guidelines.”
The trick with all such regulations, Savage says, is structuring the incentives such that they don’t become minimal compliance standards. “It is in everyone’s interest to keep improving their security posture as the market—and adversaries—evolve.”
Smith, despite supporting Self-Drive’s broadly worded regulatory language, doesn’t see it having a large impact on autonomous-vehicle security.
“Generally speaking, these rules won’t significantly affect most auto manufacturers, as most have already started similar programs,” he says. However, the bill adds to the federal government’s “support, going forward, in the creation of self-driving vehicles.”
The Center for Democracy and Technology, a group advocating for digital rights and privacy, says the passing of Self-Drive is a good sign for government support of consumer privacy.
“While it’s hardly perfect from a privacy or cybersecurity standpoint, we were happy to see that it envisions a pretty robust role” for the Federal Trade Commission to protect driver privacy, says Joseph Jerome, a CDT privacy counsel.
The bill should have also included new protections for independent security researchers, who will be vital for policing vehicle security “and, ultimately, vehicle safety,” he says.
While the cybersecurity rules offer little detail, Jerome is optimistic that final legislation will “help us to create a security baseline,” he adds. “Two of the big concerns we’ve long had have been a lack of transparency around automotive security, and industry hostility toward security researchers.”