Uber, self-driving cars, and the high cost of connectivity

One of Uber’s self-driving cars hit and killed a pedestrian in Tempe, Ariz., and the real and potential consequences for the already beleaguered company—and for autonomous vehicles in general—are serious.

For now, the family and friends of 49-year-old Elaine Herzberg are in mourning. Arizona has banned Uber from testing self-driving cars in the state. Uber has backed out of its plan to renew its permit to test them in California; its current permit expires Sunday. And in the bigger picture, widely releasing these cars onto the road before they’re proven to be at least as reliably safe and secure as human-driven cars could prove ominous for technologies far beyond the automotive realm.

Herzberg’s death is not the first attributed to an apparent self-driving car error. Tesla’s “Autopilot” feature has killed at least two of its drivers, one in Florida in 2016 and one in Mountain View, Calif., on March 23. But it does appear to be the first death of a pedestrian, exactly the scenario that makers of self-driving cars promise their technology will prevent.

In looking at this case, there’s no doubt that Uber screwed up yet again. The car, a Volvo XC90 SUV that Uber modified with self-driving technology, including sensor arrays to detect suddenly appearing objects in front of it, killed Herzberg just before 10 p.m. on Sunday, March 18, according to the Tempe Police Department.

The car was moving at 40 mph, 5 mph over the speed limit, in the far-right lane; Herzberg was attempting to walk her bicycle across the street. The car didn’t detect Herzberg, and neither did the “human operator” behind the wheel.

Science shows that what makes car crashes so lethal for pedestrians is not how fast over the speed limit the car is driving, but how fast it is driving, period. When an average person is hit by a car traveling 20 mph, he or she has a 90 percent chance of surviving. But when hit at 40 mph, the chances of survival drop to 10 percent. The human body just wasn’t designed to survive that kind of impact from a two-ton-plus hunk of metal.

To Herzberg’s husband and daughter, who settled with Uber on undisclosed terms Thursday, the statistics and situation that led to her death don’t matter as much as the death itself. She was ripped from their lives by an experimental technology whose stated purpose is to make the roads safer, particularly for pedestrians.

That’s a noble cause, given that cars have been killing pedestrians since 1896. The Centers for Disease Control and Prevention says pedestrians are 1.5 times more likely than car occupants to be killed in a crash, per trip, and that crashes most often kill pedestrians in cities, at night, and outside of intersections. The United States tallies at least 37,000 deaths from car crashes every year. (Globally, the tally is 1.3 million.) And of those, the 6,000 pedestrian deaths in the United States in each of the last two years mark a decades-long high.

Self-driving cars are not throwaway apps that a Silicon Valley tech bro has convinced venture capitalists can change the world. They hold promise and fear in the same breath.

In 2014, I rode in one of Google’s autonomous autos. As a commuter who has used motorcycles, bicycles, and cars on crowded, urban streets for more than 25 years, I marveled at how the car repeatedly executed the most perfect left turn I’ve ever witnessed from the inside of a vehicle. (The car did not repeatedly turn left; we were driving around downtown Mountain View, just not in a never-ending counter-clockwise circle.) And that was four years ago.

The current debate over whether self-driving cars should be allowed on the road, even in controlled experiments like Uber’s, centers on this question: Are errors that lead to a few people being killed today worth potentially saving tens of thousands of lives per year in the future?

The answer, of course, is yes. Yes, you develop new technology to reduce harm caused by human-operated technology or, in other words, to prevent people from accidentally killing one another. But obviously, your test technology needs to be at least as safe as its human-operated predecessor.

Why is it important for the information security industry to carefully watch how this case unfolds both in the legal system and the court of public opinion? The answer is straightforward: The software steering the car worked exactly as programmed, and it still failed to stop the vehicle from slamming into and killing a pedestrian. What’s going to happen when a pedestrian is killed because the car’s software has been hacked?

What strikes me is how thin the line is between Uber’s apparent technology screw-up and Fiat Chrysler’s 2015 recall of 1.4 million vehicles, a response to a Wired story demonstrating what hackers could do to a Jeep Cherokee. We know that modern cars are as much computers as they are vehicles. The next self-driving car death easily could be the result of a hack.

Cities around the world have endured numerous vehicular terror attacks over the past couple of years. It’s not hard to imagine the panic that would ensue were terrorists to hack, and thus gain remote control of, consumer vehicles.

A terrorist hack of a connected or fully autonomous car—a successful attempt to make it do something against its programming—could result in vehicular deaths. It could block roads, including evacuation routes out of dense urban areas such as Manhattan, in conjunction with another type of terror attack. At the very least, it could cause highway havoc, as hacked fleets of big-rig trucks jackknife on major freeways.

Such dangers apply not just to connected cars but to all kinds of devices that tech companies are bringing online, apparently with about as much forethought and attention to safety as one gives to walking up a flight of stairs.

It’s not hard to imagine that after the first mass hack of insulin pumps or pacemakers leads to multiple deaths, hospitals and doctors would refuse to use those devices. It’s the same dynamic the Consumer Confidence Index captures for the U.S. economy: perception drives behavior. The perception of whether a technology is safe (or secure) could trump the reality.

A setback in perception could lead to a years- or decades-long setback in development and release. Spending a bit more time to ensure a technology’s safety and security before testing it in public would help companies avoid the kinds of incidents that diminish its impact.

Security researcher Josh Corman, formerly of the Atlantic Council and now the chief security officer at global software company PTC, paraphrases Stan Lee, the co-creator of Spider-Man, when he talks about the risks of embedding an Internet connection and computer in every device we make: “With great connectivity comes great responsibility.”

So far, Uber has failed to live up to that responsibility. Competitor Waymo, Alphabet’s self-driving car company, which started as a Google project, has charged ahead with a new-car announcement and a claim that its technology would have saved Herzberg’s life.

For the sake of all pedestrians (and at some point, that’s everybody), I hope that its assertion is accurate.
