Artificial Intelligence

By Jack Clark | Updated May 18, 2016 7:13 PM UTC

It’s the stuff of sci-fi movies and dystopian, end-of-humanity nightmares — and now, of mind-numbingly dull white-collar work. After decades of premature promises, artificial intelligence is finding its way into all sorts of businesses. Its arrival has been low-key. That’s primarily because the line between ordinary software and AI software has blurred as artificial intelligence has been adapted to narrow, unsexy tasks. But there is a difference. AI programs can look at a confusing situation, make an informed guess about what’s going on and act on it — and learn from what happens. The result has been progress so fast that people are now asking themselves two very different questions: What can we do with this to make money, and how do we stop it from going awry? AI could usher in an era of unprecedented prosperity or unprecedented inequality. Bill Gates, Stephen Hawking and Elon Musk, among others, have a deeper fear: that we may be, in Musk’s words, “summoning the demon.”

The Situation

AI’s advancement has drawn in large amounts of money. Toyota, for instance, will invest $1 billion to create a research institute focused on artificial intelligence and robotics technology. Some big tech names, including Musk and Peter Thiel, have created a nonprofit research group named OpenAI that Musk said has secured funding of “at least a billion.” Google has unveiled a new mobile messaging application called Allo containing a digital personal assistant that's based on the AI technology that powers other Google services. Here are things AI systems can do that would have been considered overly ambitious or unaffordable five years ago:

  • Beat humans at 2D computer games from the 1980s without being programmed with information about the games.
  • Beat a top-ranked human player at Go, a board game with many more possible moves than chess.
  • Learn to pick up randomly positioned objects with 90 percent accuracy in an hour – a task that once took engineers days to program by hand.
  • Translate crudely but coherently from one language to another rapidly enough to have a conversation.
  • Transport medicine and supplies through the halls of a children’s hospital without running over children.
  • Classify legal documents by confidentiality level.
  • Develop strategies for trading in financial markets.
[Chart: Play It Again, HAL. Google researchers have been working out how to help their AI systems master old Atari games.]
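The “learn from what happens” loop behind feats like the Atari results can be illustrated with a far simpler relative of those systems: tabular Q-learning. The sketch below is purely illustrative — the toy “corridor” environment and every parameter value are invented for this example, and the actual Atari-playing system used deep neural networks rather than a lookup table — but the trial-and-error principle is the same: try actions, observe rewards, and update your estimates of what works.

```python
import random

# Toy "corridor" environment: states 0..4, start at 0, reward 1 for
# reaching state 4. Actions: 0 = step left, 1 = step right.
# A hypothetical miniature of trial-and-error learning, not any real system.
random.seed(0)
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.1       # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 1 if Q[s][1] >= Q[s][0] else 0
        s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Update the value estimate from what actually happened.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy walks right, toward the reward.
policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(GOAL)]
```

After 200 episodes the learned policy chooses “right” in every state — the agent was never told the rules of the corridor, only shown rewards, which is the sense in which the Atari systems above were “not programmed with information about the games.”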

The Background

In the 1950s, academics flush with the rapid early success of computers turned their thoughts to teaching machines to think. Progress came easily at first, with the invention of neural networks — software that can process data with some of the pattern-recognition capabilities of our own brains. After that came a more refined program called a perceptron, which its creator claimed would soon create a talking, walking and thinking machine. This bout of over-promising was succeeded by the first of several “AI Winters” as researchers hit a wall and funding dried up. Then in the last decade a new class of industrial research labs took root in companies such as Google, Microsoft and Facebook. With vast concentrations of user data and computing power, deep pockets for hiring cadres of AI scientists and an unusually open attitude toward publishing research, these companies started breaking records in speech recognition and image analysis. Venture capitalists took notice and invested $309.2 million in AI startups in 2014, a twentyfold increase from 2010. While the field has been dominated by U.S. companies, Baidu, China’s most popular search engine, has also joined its top ranks.
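The perceptron mentioned above is simple enough to sketch in a few lines of modern code. This is an illustrative reconstruction, not historical source (the original perceptron was partly a hardware project; the AND-gate task, learning rate and epoch count here are invented for the example): it learns a linear decision rule by nudging its weights whenever it guesses wrong.

```python
# Minimal perceptron: learns weights for a linearly separable task (AND gate).
# All data and hyperparameters are illustrative, not from the article.
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    w = [0.0] * len(samples[0])    # one weight per input
    b = 0.0                        # bias term
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Predict 1 if the weighted sum exceeds zero, else 0.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred    # +1, 0 or -1
            # Nudge weights and bias toward correcting the mistake.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]              # logical AND of the two inputs
w, b = train_perceptron(samples, labels)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x in samples]
```

The limitation that helped trigger the first AI winter is visible here: a single perceptron can only draw one straight line through its inputs, so it can never learn a task like XOR — a gap later closed by stacking perceptron-like units into the multi-layer networks that today’s industrial labs train at scale.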

[Chart. Source: ImageNet, Stanford Vision Lab]

The Argument

Artificial intelligence can help scientists solve the world’s “hard problems,” like climate change, says Alphabet Inc.’s chairman, Eric Schmidt. But what if we created a super-smart, autonomous artificial intelligence that ran a paperclip factory and, due to some poor programming or a cyberattack, tried to turn everything it could grab into paperclips? It’s a fanciful scenario, proposed by the philosopher Nick Bostrom, meant to crystallize the concerns. Less far-fetched is the question of whether AI will kill good middle-class jobs. Many economists argue that technological change has so far led to the creation of new and better jobs. But even AI proponents acknowledge that the speed of its development could make the resulting changes harder for society to digest. A future in which trucks drive themselves, mammograms are read by computers and the crowds at sporting events are scanned for suspected terrorists can sound great or terrifying, depending on whether you’re a truck driver, a radiologist or somebody concerned about privacy. Another worry is AI’s possible effect on inequality. Anything that reduces labor costs is likely to disproportionately benefit holders of capital. And if the race to develop artificial intelligence depends on huge amounts of data and computing power, a big chunk of the future economy could be controlled by a handful of companies.

The Reference Shelf

  • An overview of artificial intelligence by three prominent researchers, Yann LeCun of Facebook, Yoshua Bengio of the University of Montreal and Geoffrey Hinton of Google and the University of Toronto.
  • In 2006, for the 50th anniversary of the coining of the term “artificial intelligence,” AI Magazine published this “Brief History.”
  • The Allen Institute for Artificial Intelligence, founded by Microsoft co-founder Paul Allen, argues that fears of AI’s effects are exaggerated; the Future of Life Institute explores them.
  • A June 2015 Bloomberg article on how the quest to improve AI centers on understanding the minds of toddlers.
  • A 2014 Wired magazine article on recent breakthroughs in AI, and an overview in the Economist.
  • The 1943 paper that laid the groundwork for neural networks, by Warren McCulloch and Walter Pitts.

First published July 9, 2015

To contact the writer of this QuickTake:
Jack Clark in San Francisco

To contact the editor responsible for this QuickTake:
John O'Neil