Johnnycab (Automation Paradox, Pt. 2)
In 1956, General Motors hosted a car expo called “Motorama.” As with all car expos, Motorama was a chance to show off concept cars and other long-shot projects that GM hoped would revolutionize the auto industry. One of the most forward-looking things on display wasn’t a car, but a movie.
The film was called “Key to the Future.” In it, we see a family of four cruising along a desert highway in a beautiful, futuristic GM Firebird. But this car was actually just the set dressing for what GM really wanted to show us: their vision for a self-driving vehicle.
For nearly as long as there’s been an auto industry, there have been dreams of a car that drives on its own. And because more than 90% of all automobile accidents are attributable to human error, for some industry people, a fully-automated car is a kind of holy grail.
However, as automation makes our lives easier and safer, it also creates more complex systems, and fewer humans who understand those systems. Which means when problems do arise, people can be left unable to deal with them. Human factors engineers call this “the automation paradox.”
Last week, in our story about automation in aviation, we heard about various ways people in the industry are trying to deal with this paradox. For one, pilots are being encouraged to practice manually flying their planes to keep their skills polished. Engineers are also trying to make smarter, more collaborative automation that doesn’t strip skills from pilots, but works with them to complete tasks.
But Google has a very different approach. Their plan for solving the paradox is to take human drivers out of the equation entirely.
[Courtesy of Google.]
“If you have a steering wheel, there’s an implicit expectation that you’re going to do something with it,” says Chris Urmson of Google’s self-driving car project. Chris’s goal is to make a driverless car that is safe because it lacks a human driver. “You get to sidestep all of these control confusion potential challenges by taking that out of the way.”
In 2009, Google started retrofitting Toyota and Lexus cars with new technologies that allow the cars to drive on their own.
The cars can accelerate, stop at traffic signals, make turns, merge onto freeways, and avoid pedestrians with no intervention from anyone in the car.
Then, in 2014, Google started manufacturing cars of their own design: cute little two-seaters with no gas pedal, brake, or steering wheel. The rider simply inputs a destination, sits back, and lets the car do everything else.
Urmson says that if one of their self-driving cars has a problem on the road—if the computer malfunctions or the sensors break—the car can pull over, and a different car will come fetch the passengers, leaving the broken one for technicians to fix. Meaning: Google doesn’t necessarily envision us owning these cars. It’s possible that self-driving cars will enable a world in which all of us get around by robot taxi.
A world full of robot taxis could mean fewer parking lots, denser urban cores, and less traffic. Since autonomous cars can make decisions and react faster than we can, cars in motion could get much closer together—and not just bumper-to-bumper, but also side-to-side. The Department of Transportation requires that highway lanes be at least twelve feet wide, which is about twice as wide as the average car. So, suddenly, a three-lane highway could become a six-lane highway without any new construction.
But these cars could also reorganize cities for the worse. If people could read or sleep or write emails in an autonomous car, they might feel fine about having longer commutes. And of course, your route data could be exploited by advertisers. Maybe Google knows you like Starbucks; will the car drive just a tad slower as you pass it? Or worse, route data could become a matter of national security. “Bad actors could take control of your car or a fleet of cars and you know, stop every car that’s on the Bay Bridge at one time for malicious intent,” says Costa Samaras, a professor at Carnegie Mellon University and co-author of Autonomous Vehicle Technology: A Guide for Policymakers.
Clearly, there are still a lot of details to work out. But once the science is done and the policy is carefully considered, are people even going to want an autonomous vehicle? Will people still go on road trips, or tailgate in the stadium parking lot? Will you still be able to get in your car and go on an aimless, contemplative drive?
“I guess we haven’t really thought about that,” says Urmson. “My assumption is you can give it a destination of where you want to go, but you can always change it.”
But, sometimes you just really need to get in a car and go. Case in point: this scene from Total Recall.
Frustrated with the slow automated system, Arnold Schwarzenegger’s character rips the robot driver out of the car and pilots it himself. It’s funny, because as fantastical as this sci-fi world is, we can recognize the same kinds of user annoyances we have in the present. And we can see Arnold as heroic because he can do the things the machines can’t.
And think about Star Wars: Luke turns off his automation and uses his own skills (and “the force”) to blow up the Death Star.
[See especially 11:30 onward]
As much as we love building things that make our lives easier, it seems we never get tired of seeing someone cast the robots aside. We love seeing people do things by hand. Maybe because we all have anxiety about losing the ability to do something ourselves.
So how soon will we need to answer these existential questions about our cars?
Chris Urmson has said that his personal goal is to get the Google self-driving car done by 2020, so that his two sons, the oldest of whom is 11, won’t ever need to get a driver’s license.
But Raj Rajkumar, who is co-director of Carnegie Mellon’s Autonomous Driving Collaborative Research Lab, thinks it will still be 10 to 20 years before fully autonomous vehicles are commercially available. Even though Rajkumar’s company Ottomatika built an autonomous car that has already driven itself from San Francisco to New York, Rajkumar doesn’t see the steering wheel coming out any time soon.
Carnegie Mellon has been working on autonomous vehicles since the 1980s, and Rajkumar imagines that the transition to full automation will be gradual. “The number of scenarios that are automatable will increase over time, and one fine day, the vehicle is able to control itself completely, but that last step will be a minor, incremental step and one will barely notice this actually happened,” says Rajkumar.
And when that day comes, Rajkumar says there will still be accidents. “There will always be some edge cases where things do go beyond anybody’s control.”
If and when people do get hurt in autonomous cars, there may be circumstances in which the passengers wouldn’t have been injured had there been a human at the wheel. These cases will be hard to reckon with, but we’ll have to keep in mind that more than 30,000 Americans die every year in human-driven cars. If autonomous vehicles lead to fewer car accidents, then as with planes, we may need to accept edge cases and periodic failures as the cost of living in this safer world.
[From The Oatmeal’s take on Google’s self-driving car, which writer Matthew Inman calls Skynet Marshmallow Bumper Bots]
Producer Sam Greenspan spoke with Nadine Sarter, a human factors engineer at the University of Michigan; Chris Urmson, who leads Google’s self-driving car project; Costa Samaras, Assistant Professor of Civil and Environmental Engineering at Carnegie Mellon University; and Raj Rajkumar, co-director of Carnegie Mellon’s Autonomous Driving Collaborative Research Lab.
This story is the second of our two-part series on the automation paradox. Listen to Part 1 here.