Technically, there are already self-driving cars on the road, because automobile autonomy exists on a spectrum. Under the National Highway Traffic Safety Administration's system, there are five levels of autonomy: at Level 0, the driver is fully in control of steering and speed at all times (probably the car you own right now), and at Level 4, the human has no option to control the car at all. (The Society of Automotive Engineers has a similar system with a few minor differences.) BMW, Ford, GM, Mercedes-Benz, and Fiat Chrysler all have 2016 models at Level 1, equipped with cameras, radars, and sensors that allow for automatic braking, self-parking, adaptive cruise control, some hands-off steering, and/or lane centering. Tesla's Model S and Model X both have Autopilot software that goes a step further with auto-steering and lane changes; they're considered Level 2. Ford and Volvo plan simply to skip Level 3, where the autonomous car cedes control to the driver under certain circumstances that the car itself monitors, out of concern that there wouldn't be enough time between the car recognizing changing conditions and the driver taking control. Ironically, it's Google, the untraditional carmaker, that has been pushing hardest to reach Level 4 as soon as possible (2020 looks like the earliest) with the self-driving project it has been working on since 2009. And it isn't just personally owned vehicles: the ride-sharing industry is getting in on the action as well. Uber is currently testing self-driving cars in Pittsburgh, and Lyft's president, John Zimmer, has pledged that by 2021 or 2022 the majority of the company's fleet will be self-driving cars.
As for Tesla, which hopes to stay ahead of the game and will have high expectations to live up to when it someday (soon) rolls out a fully autonomous car, Musk has been mum on the details of what Level 4 will look like for the company. But he has never been one to miss an opportunity to get people incredibly excited about his plans. In August, during an earnings conference call, he said, "It blows me away the progress we are making. And if it blows me away, it's going to blow away other people too when they see it for the first time." Do tell, Elon, do tell.
It's obvious that the hype around the coming revolution in autonomous vehicles, and the prospect of having a lot of them on the road within a few years, is coming from the tech and auto industries (essentially, people like Musk) and not from drivers. According to a survey published by Kelley Blue Book, 60% of respondents said they knew little or nothing about self-driving cars. Of course, there was a point in time when we didn't know our phones would be music players, cameras, and payment methods, either. The survey also found that 63% thought the roads would be safer with autonomous cars as the standard, but a third said they would never buy a car that was completely self-driving. People overall reported that they enjoyed driving themselves and didn't see it as a mundane task, yet the statistics show that humans are still really terrible at it: 2.5 million Americans land in the emergency room every year because of car accidents.
Cool technology aside, many people look at self-driving cars not only in terms of "can we, and when will we," but also "should we?" There's no doubt self-driving cars would reduce the human error involved in driving, and even the U.S. government seems to be on board with how these cars could change the nation's roads for the better. In an op-ed for the Pittsburgh Post-Gazette, President Obama wrote, "Right now, too many people die on our roads – 35,200 last year alone – with 94 percent of those the result of human error or choice. Automated vehicles have the potential to save tens of thousands of lives each year." He went on to stress the importance of maintaining safety standards as the technology is developed: "Safer, more accessible driving. Less congested, less polluted roads. That's what harnessing technology looks like. But we have to get it right." The piece was released in conjunction with a fifteen-point checklist from the Department of Transportation, and the president asked self-driving car manufacturers to sign on to it as they move forward. So far, Google's self-driving car has been in a handful of crashes over seven years of testing (including a high-profile one as recently as September), but only one has been the fault of the car. A fatal Tesla Model S crash in Florida in May is the only known death involving an autonomous car with the autopilot feature engaged. It was widely reported that the car was speeding at the time of the accident. Musk has since blamed the crash on the automatic braking system, or on the Autopilot system failing to recognize that the object the car slammed into was a trailer. There have also been a few nonfatal crashes of Teslas in which Autopilot appears to have been engaged. We're not far from the 2020 benchmarks boastful CEOs have put on the calendar, but calling the state of self-driving cars in late 2016 "worry free" would be an overstatement.
Musk remains confident in the technology. What he does worry about, strangely, is the software he is using to make self-driving cars smarter and safer: artificial intelligence. It seems counterintuitive that such a high-profile technologist would be anti-AI, but he has been warning for quite some time about the dangers of relying too heavily on robots that can think and learn. For Musk, it's about sounding the alarm: if the technology can be used for good, there's also a chance it can be used for bad. He's working to keep access to AI balanced through nonprofits like OpenAI, which aims to develop safe artificial intelligence and make sure its benefits are available to all. It's worth noting that Musk was quick to point out during the August call that Tesla was only using "advanced narrow AI." He elaborated: "When I say narrow AI, it's not going to take over the world."
But it is clear that he hopes self-driving cars will.