Our Robot Overlords Might Be Delayed
Are you stressed out about the singularity? Living in fear of the day when computers decide that humans are no longer necessary? Not to worry, say some leading experts in artificial intelligence: Research in the field might actually have hit a wall.
No doubt, AI is everywhere. Computers assess financial news, identify viruses and even act as physics theorists, analyzing flows of fluid and heat. So-called deep learning algorithms allow services such as Google Translate and Apple’s Siri to outperform people on many basic tasks. With big tech companies such as Google and Facebook pushing the technology further, some people believe that human-level intelligence is just around the corner.
Yet we’ve been here before. In 1970, the cognitive scientist Marvin Minsky confidently claimed that “a machine with the general intelligence of an average human being” would exist within a decade. The history of artificial intelligence is littered with episodes of wild optimism that have, ultimately, given way to disappointment and gloom -- and that could happen again, as Google software engineer Francois Chollet recently warned in a popular textbook about algorithmic methods. Research progress, Chollet notes, has been slowing for several years.
Now, psychologist Gary Marcus of New York University -- formerly director of Uber’s AI labs -- argues that the lack of progress isn’t surprising, as researchers are running up against a host of new challenges.
One challenge Marcus identifies is building a more flexible technology. Today’s algorithms work only on a narrow range of problems: The goal must be extremely well-defined and unchanging, and huge amounts of data must be available for training. Examples include translating text, recognizing speech and identifying faces in a photo. The algorithm has one job, and researchers supply it with the masses of perfectly organized data required to learn how to do it.
Humans regularly perform many tasks that are not so clearly delineated -- where the nature of an answer, or what information might be needed to approach it, is not given. Tangle up some rope in a bicycle wheel, and any five-year-old can easily work out how to extract it -- not because the child has trained on thousands of wheels, but because the child understands the spatial relationships involved. People have an impressive ability to solve problems and gain insight through abstract reasoning, using almost no data at all.
Algorithms also can’t engage in what Marcus calls “open-ended inference,” which entails bringing background knowledge to bear on a question. We all know the difference between “John promised Mary to leave” and “John promised to leave Mary.” We make the distinction using information that isn’t explicitly included in either phrase. Researchers haven’t made much progress in getting computers to do the same.
Then there’s the question of reliability. Despite computer scientists’ best efforts, algorithms are prone to make spectacular errors -- such as mistaking a law-abiding person for a criminal. Worse, it’s often impossible to understand what went wrong: With billions of parameters involved, even an algorithm’s creators often do not know how and why it works. The reliability of an aircraft engine can be predicted, because it’s made of many parts for which we can mostly guarantee performance. Not so with algorithms. This limits their use in situations -- such as making financial trades or medical diagnoses -- where errors can be disastrous and it’s important to understand the process by which decisions are made.
In other words, there's nothing very deep about deep learning. The technology will have far-reaching social and economic consequences, in large part because industry will steer economic activity toward the things that algorithms do well. It will take over many mundane tasks. But it probably won’t soon be able to think through problems like people do, or to converse with us in a recognizably human way.
For some, this may be a disappointment. But for those who wouldn’t welcome the arrival of our robot overlords, it might offer some relief.
To contact the editor responsible for this story:
Mark Whitehouse at email@example.com