There's No Ignoring Driverless Cars
You might have missed the news that the University of Michigan last month completed construction of a 32-acre miniature city that will enable the safe testing of, among other things, “connected and automated vehicle technology” -- that is, driverless cars. The carpocalypse may be closer than we think: Chris Urmson, who runs Google’s driverless car project, expects to see driverless cars on public roads within two to five years. That is why it’s important to ask hard questions now.
Automobiles that operate themselves are all the rage. “Driverless technology is the future,” said Claire Perry, an undersecretary in Britain’s Department for Transport, this past October. “We can’t avoid it, and I don’t want us to.” The autonomous Audi A7, unveiled at the Consumer Electronics Show this month in Las Vegas, has received rhapsodic reviews from technology writers. The National Aeronautics and Space Administration, which has deployed rovers on the surface of Mars, announced last week that it would be partnering with Nissan to develop autonomous-vehicle technologies useful both on the road and in space.
Google, probably the world’s most prominent advocate of legalizing autonomous vehicles, boasts that its test cars have driven more than 700,000 miles without accident, once even stopping for “an elderly woman in a motorized wheelchair flailing a broom at a duck she was chasing around the street.”
Most experts agree that driverless cars will be safer. True, there isn’t yet enough experience for the market to price the insurance. But the safe bet is that as the technology becomes ubiquitous, auto premiums will plummet. (So might the income of personal injury lawyers.) Supporters also contend that the cars will be better for the environment: vehicles that can safely travel closer together will reduce congestion, and less congestion means less fuel burned idling in traffic.
There are doomsayers as well. The most common worry seems to be that the computers that run the cars might be hacked. And there are larger fears. Last summer, the Guardian quoted a restricted report from the Federal Bureau of Investigation warning that criminals or terrorists might use driverless cars to their advantage. Imagine a car bomb whose builder doesn’t need to go to the trouble of recruiting a sufficiently fanatical driver.
These warnings, however, may be less dire than they seem. Yes, there might be harm if the cars were hacked, but that risk would likely be offset by a sharp reduction in drivers operating under the influence of drugs or alcohol. And, yes, terrorists would most certainly find a way to turn autonomous cars to their advantage. But terrorists will find a way to turn every new technology to their advantage. That’s a reason to fight a war on terror, not a war on technology.
A more sober objection was raised the other day by Holman Jenkins of The Wall Street Journal, who denounced Google’s efforts as “an elaborate charade” created as “a branding exercise, a ticket to free media, a way to market Google software to auto consumers for onboard infotainment systems.” Few automobile manufacturers, Jenkins says, are likely to bring such vehicles to market. And that’s a good thing, he contends, asking pointedly: “Why would a driver activate such a system except to turn his attention elsewhere?”
Now, I will confess that I feel much the same way about mobile phones, whether in cars or anywhere else, which seem to me little more than an excuse to ignore the people around you. But Jenkins raises an important objection. Things can go wrong, and when they do, the car may try to hand control back to the driver. The driver is supposed to be paying attention, just in case necessity arises. But many drivers won’t be.
The cheerleaders have thought of that. If the driver ignores all the various tools the autonomous Audi A7 uses to catch his attention, the car is programmed to turn on its hazard lights, slow down, pull to the side of the road and stop. Although the idea that the driver will eventually come to ignore the car may be troubling at first blush, I would imagine that someone raised the same objection to self-service elevators. They used to have operators, too, until people came to trust the technology.
Critics note that most testing of autonomous vehicles has come on closed tracks and highways. Part of the point of the Michigan artificial city is to allow the testing of driverless vehicles (as well as vehicles otherwise wired to each other) in an urban setting, where the challenges are unique. I doubt that the problems of stop-and-go driving will prove intractable, and lots of prototypes are tackling them already.
My point isn’t that every technological advance is good. When we change our tools we change how we interact with the world, and we can never predict what direction things will go. Even when the cars come to market and prices fall, it’s likely that many consumers will stick with old-fashioned driver-controlled models simply because they like the feel of being in control of the great American freedom machine. If I’m still around and driving, I suspect I’ll be among the curmudgeonly resisters.
But I will also confess that I am of two minds. The lapsed geek in me is half in love with driverless cars already, for no better reason than that the idea is so cool; the scholar in me wants to live long enough to see how society will adjust to a world where climbing into your car is like boarding a train, and falling asleep behind the wheel is nothing to be ashamed of.
(A note on testing: California requires companies to post a $5 million bond in order to test autonomous vehicles on state roads.)

(A note on safety statistics: a widely cited figure holds that 90 percent or more of traffic accidents are caused by driver error. It’s not clear where the figure comes from, and most of the supporting studies are antiquated. Whatever the correct figure, nobody doubts that driver error causes the great majority of accidents.)
This column does not necessarily reflect the opinion of Bloomberg View's editorial board or Bloomberg LP, its owners and investors.
To contact the author on this story:
Stephen L Carter at firstname.lastname@example.org
To contact the editor on this story:
Stacey Shick at email@example.com