
Is Elon Musk Right About AI? Researchers Don't Think So

To quell fears of artificial intelligence running amok, supporters want to give the field an image makeover

Depending on whom you ask, advances in artificial intelligence are either humanity's biggest threat or our best shot at curing diseases.

On one side of the debate are billionaire entrepreneurs such as Elon Musk and Bill Gates, along with physicist Stephen Hawking, who warn that super-intelligent machines could run amok and become a menace to humankind. On the other side are some of the field's biggest backers, including billionaires Paul Allen and Jack Ma.

With criticism on the rise, supporters -- led by researchers at Allen's AI institute and Stanford University -- are seeking to give their field an image makeover.

Allen's group recently began touting an AI project aimed at improving medical care. Stanford is undertaking an AI study on ethics and safety set to run for 100 years. It's all part of a deliberate push in the AI community to address growing concerns about the technology as the field expands, with venture capital funding in the area rising 20-fold since 2010 and dozens of new startups popping up.

"Someone has impugned us in very strong language saying we are unleashing the demon, and so we're answering," said Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence. "The conversation in the public media has been very one-sided."

This more organized effort marks the first sustained move by scientists and entrepreneurs to engage the public and try to quell its fears.

Max Tegmark, a Massachusetts Institute of Technology physics professor and co-founder of the Future of Life Institute, is one researcher trying to carve out common ground. Tegmark began circulating an open letter in early January in Puerto Rico at the institute's first conference, which was attended by Musk, among others. The letter, whose signers now include Musk, Etzioni and many researchers and advocates on both sides, was made public on Jan. 12.

"There had been a ridiculous amount of scaremongering," said Tegmark, who sometimes goes by "Mad Max." "And understandably a lot of AI researchers feel threatened by this."

HAL 9000 from "2001: A Space Odyssey"

Talking about how to imbue intelligent agents with human ethics is "common sense," said Stuart Russell, a professor at the University of California at Berkeley and co-author of the textbook "Artificial Intelligence: A Modern Approach." Take the example of a household robot, he said -- sort of like Rosie the maid from "The Jetsons."

"It has to understand human values so it doesn't do stupid things," he said. "You don't want it to accidentally cook the cat for dinner."

Until recently, researchers mostly ignored these issues, Russell said -- the concerns either came from cranks or seemed a long way off. That's changed, with some of the world's best-known scientists and innovators raising the alarm.

"Things are moving faster and there's been a lot of investment in making things quite fast," he said. "So the attitude in the field has changed."

On Jan. 28, Microsoft Corp. co-founder Gates joined the conversation, saying he's in the camp that is concerned about "super-intelligence." In December, Cambridge University Professor Hawking told the BBC that more advanced uses of AI "could spell the end of the human race." In October, Tesla Motors Inc.'s Musk said AI is humanity's "biggest existential threat" and called for more oversight.

In January, following the release of the open letter, Musk donated $10 million to the Future of Life Institute, whose website says its focus is "mitigating existential threats facing humanity." Musk and Hawking are both on the institute's scientific advisory board. Neither returned e-mailed requests for comment.

Now researchers have stepped up their media game. Etzioni is among proponents who have recently focused not on technical papers, but on writing columns to tout AI's benefits to a broader audience, including one for the website Medium. He's also been coordinating efforts with Microsoft research director Eric Horvitz, a former president of the Association for the Advancement of Artificial Intelligence, and the man behind the Stanford study.

Etzioni wants to show how the work to create thinking machines will help mankind in practical ways. An Allen Institute program called Semantic Scholar is designed to show how the field can improve medical care. The project scans reams of journal articles, extracts metadata, classifies papers and homes in on the key citations. Etzioni's goal is to shorten the time it takes for new discoveries to become practice among care providers.
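The pipeline described here (scan articles, extract metadata, surface the key citations) can be sketched in miniature. The schema and function names below are illustrative assumptions for this article, not Semantic Scholar's actual code or data format:

```python
from collections import Counter

def key_citations(papers, top_n=3):
    """Rank the most frequently cited works across a corpus of papers.

    Each entry in `papers` is a dict whose "cites" field lists IDs of
    cited works; this schema is purely hypothetical.
    """
    counts = Counter()
    for paper in papers:
        # A set ensures each citing paper counts a cited work only once.
        counts.update(set(paper.get("cites", [])))
    return [cited for cited, _ in counts.most_common(top_n)]

# Toy corpus: three papers citing overlapping prior work.
corpus = [
    {"id": "p1", "cites": ["w1", "w2"]},
    {"id": "p2", "cites": ["w1", "w3"]},
    {"id": "p3", "cites": ["w1", "w2"]},
]
print(key_citations(corpus, top_n=2))  # ['w1', 'w2']
```

In a real system, the hard work is upstream of this ranking step: parsing PDFs into clean metadata and resolving which citation strings refer to the same paper.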

"If it's really good, it will help us find scientific papers more quickly and help us solve cancer," Etzioni said of AI research.

Horvitz, meanwhile, is working on the Stanford University One Hundred Year Study of AI, a project that will study ethical conundrums as they emerge and examine how AI will play out in areas like national security, psychology, automation and privacy.

"These concerns I don't think should be dismissed out of hand," Horvitz said. "The community of AI researchers and scientists need to say, 'here's why we're not concerned.'"

More demonization of the field may lead to a rise in government interference and limits on research, Etzioni said. There is a dangerous precedent for such limits, he said, citing U.S. restrictions on stem-cell research enacted amid questions about the ethics of creating and destroying human embryos.

"AI escaping from the lab and running amok may make a great plot for a Hollywood movie, but it is not realistic," said Etzioni, who has been a researcher in the field for more than two decades. "We have a very hard time building programs that do a small amount of learning, let alone this kind of runaway learning that people imagine in their worst nightmares."

The fear of artificially intelligent beings going rogue and setting out to destroy their human creators is a pervasive one, from Mary Shelley's 1818 novel "Frankenstein" to the morally confused computer HAL 9000 in Stanley Kubrick's 1968 movie "2001: A Space Odyssey." In 2013, moviegoers watched an adaptable operating system break a man's heart in "Her." This year will bring the film "Chappie," about a robot built with the ability to learn and feel.

AI technologies have been in development since the 1960s, though they've yet to yield a machine or system that can mimic human intelligence at even a grade-school level, Horvitz said. There has been renewed optimism and increased investment in recent years, though, as companies, public agencies, hospitals and governments contend with reams of data that require new tools for more efficient analysis.

The AI research team at Microsoft is behind a product for the Skype Web-calling service that automatically recognizes speech, translates it and then reads it aloud, allowing multilingual conversations. Etzioni's Allen Institute, backed by tens of millions of dollars from Microsoft co-founder Allen, is working on training a machine to pass elementary school standardized tests in science and math.

Other companies are getting into the mix. Google Inc. in January 2014 purchased London-based DeepMind Technologies Ltd., while Allen and Facebook Inc. CEO Mark Zuckerberg are investing in AI research. Funding for AI startups surged to $309.2 million last year from $14.9 million in 2010, according to CB Insights.

It's not AI that will destroy humankind, Horvitz said -- it's the absence of these technologies that is already killing people. He cited medical errors that could be prevented and disease treatments that could be improved using AI.

Even if he's not expecting an attack by a robot horde, Horvitz said AI scientists do need to wrestle with the ethical questions that will arise. He cited concerns about how these technologies might be used to sway public opinion in elections, for example, or the possible use of large public data sets like social-media posts to predict whether someone is suffering from depression. Horvitz ran a similar study at Microsoft to scan for postpartum depression in a subject's posts on Twitter. Assessing what's appropriate is part of the mandate of the Stanford project.

"These concerns have been around for a long time," he said. "When there are concerns, we need to actually as scientists address them in a mature, receptive manner."