
Could Artificial Intelligence Lose Its Mind?

Leonid Bershidsky is a Bloomberg View columnist. He was the founding editor of the Russian business daily Vedomosti and founded the opinion website Slon.ru.

The DeepDream algorithm Google made public this month is a strange offshoot of image-recognition technology based on artificial intelligence. It can turn mundane images into hallucinatory worlds, and it has spawned websites that will process your photos with the software, as well as a mobile app. Beyond the pretty pictures, however, DeepDream hints at the kind of personality that artificial intelligence could develop quite by accident.

Google engineers Alexander Mordvintsev, Christopher Olah and Mike Tyka first wrote about DeepDream in June. They explained how their software recognizes and tags images, and how it is able to distinguish between, say, a pizza on a stove top and a motorcycle rider on a dirt road:

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the "output" layer is reached. The network's "answer" comes from this final output layer.

DeepDream's engineers turned the process inside out, "showing" lots of images of, say, a screw or a banana to the neural network and getting it to generate its own images of the objects. DeepDream was trained with animal images, and even in pictures that contained no animals -- clouds, a street scene, a human face -- it "recognized" dogs, birds and deer. I ran the algorithm, and here's what it did to my Twitter avatar, a penguin-cat hybrid:
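That inversion can be sketched in miniature. The toy code below is my own illustration, not Google's software: it stands in for a whole network with a single fixed "feature detector," then runs gradient ascent on the image itself, nudging every pixel toward whatever the detector already responds to. That is, in essence, the trick that makes dogs and birds bloom out of clouds.

```python
# Toy sketch of DeepDream's core idea (illustrative only, not Google's code):
# keep the network fixed and adjust the IMAGE to amplify what the network
# already "sees" in it.

def activation(image, feature):
    # A stand-in for a layer's response: how strongly the image
    # matches the feature detector (a simple dot product).
    return sum(p * f for p, f in zip(image, feature))

def dream_step(image, feature, rate=0.1):
    # For a dot product, the gradient with respect to each pixel is just
    # the corresponding feature value, so gradient ascent pushes every
    # pixel toward the pattern the detector responds to.
    return [p + rate * f for p, f in zip(image, feature)]

feature = [1.0, -1.0, 1.0, -1.0]   # the pattern our "network" looks for
image = [0.5, 0.5, 0.5, 0.5]       # a bland input, like a cloud photo

for _ in range(10):
    image = dream_step(image, feature)
```

After ten iterations the image matches the detector far more strongly than the bland original did; a real DeepDream run does the same thing with millions of pixels and detectors trained on animal photos, which is why animals emerge where none existed.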


All kinds of animals emerged that weren't originally there. The artificial neural network appeared to have come down with an acute case of pareidolia -- a condition that causes sufferers to perceive nonexistent patterns in images and sounds (such as faces in clouds or religious images in random objects). We all do this to some degree; otherwise Rorschach tests would be useless. But recent research by Norimichi Kitagawa of the NTT Communication Science Laboratories in Tokyo shows that some people may be more susceptible than others.

In extreme cases, pareidolia can be a symptom of psychosis. Although the images DeepDream produces are visually stunning, they are not the product of a "normal" consciousness by human standards. The Google researchers wrote that "neural networks could become a tool for artists -- a new way to remix visual concepts -- or perhaps even shed a little light on the roots of the creative process in general." Sure, but it's easy to imagine how such a network could, in human terms, go off its rocker.

Programmers, of course, can calibrate the network to look for specific patterns and ignore others. But at some point, networks will be more complex than those available today, and I doubt it will be possible to control for every eventuality.

We already have a sense of how unpleasant these accidents can be. In May, the Yahoo!-owned photo-hosting service Flickr and Google Photos introduced autotagging based on artificial intelligence. Flickr's software tagged concentration-camp photos "sport" and "jungle gym," and Google's made offensive mistakes, too. Both companies subsequently fiddled with their algorithms.

It's seemingly much easier to get an artificial brain to revise "bad thinking" than to get a human being to abandon faulty ideas or prejudices. But machines can process prodigious amounts of material at speeds inconceivable to a human brain, which means they could also develop unfortunate personality traits, convictions and ways of looking at the world faster than those flaws could be corrected.

At this point, those worries belong to the realm of science fiction; DeepDream performs only one highly specific task. Still, it points to one of the dangers of artificial intelligence: the potential for a machine to develop a mind of its own that is in profound disagreement with its human creator.

And then there's the danger that an artificial mind might be so much better than our own that it renders us obsolete. For the moment, at least, DeepDream has allayed those fears.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

To contact the author on this story:
Leonid Bershidsky at lbershidsky@bloomberg.net

To contact the editor on this story:
Max Berley at mberley@bloomberg.net