
Facebook's Thoughtless Artificial Intelligence

Leonid Bershidsky is a Bloomberg View columnist. He was the founding editor of the Russian business daily Vedomosti and founded the opinion website Slon.ru.

This year saw Google acquire an artificial intelligence startup and two of the smartest people around, Stephen Hawking and Elon Musk, warn the world about the dangers of AI bursting out of human control. Yet it's ending with Facebook apologizing for what designer Eric Meyer called "inadvertent algorithmic cruelty" -- a reminder of how primitive machine intelligence still is, and how far it remains from threatening human dominance in any endeavor that requires more than rote learning.

Last January, Google paid $400 million for U.K.-based DeepMind, which, like other AI startups, aims to recreate the human brain in digital form. Musk was an investor in DeepMind, and in March he -- together with Facebook's Mark Zuckerberg and actor Ashton Kutcher -- put $40 million into Vicarious FPC, a firm that is trying to recreate the neocortex, the part of the brain that recognizes images, interprets language and does math. In July, Numenta, founded by Jeff Hawkins, the creator of the Palm Pilot (remember that?), presented the first commercial software resulting from its nine-year effort to reverse-engineer the brain.

Amid all this business activity, the University of Reading announced that a computer had, for the first time in history, passed the Turing test, deceiving human judges into thinking they were chatting with a person.

With all the big names getting involved and the big money making its way to startup founders, AI was one of the most hyped technological fields of 2014 -- and one of the most feared. Hawking warned that "the development of full artificial intelligence could spell the end of the human race," and Musk repeatedly made doomsday prophecies -- the latest predicting a five-year horizon before AI becomes an existential threat. The suggestion is that if computers learn to design and redesign themselves, they will begin evolving so fast that they could eventually supersede humans and take over the world (as they did, for instance, in the Terminator movies).

For some, the new advancements were almost as controversial as human cloning. After all, the purported dangers are similar, but because AI engineers do not work with biological material, religions and regulators are letting them run with their ideas where biologists are stymied.

I've written before that the latest AI achievements indicate the technology is still in its infancy, though it's at least half a century old. The actual advances reported by AI companies pale in comparison to the dire warnings about them. They involve machines that recognize and analyze pictures (that's what Vicarious is good at), remember -- and link together -- basic notions (DeepMind), or notice unusual patterns and make predictions based on them (Numenta). The machine that passed the Turing test was a primitive chatbot that wasn't much good for anything when I tested it.

Those working in the field tend to describe these developments in more glowing terms. The TED talk by angel investor and AI expert Jeremy Howard is a good example. There are machines that can recognize pictures and describe them ("man in black shirt is playing guitar") better than humans! And other machines can read Chinese "at about native Chinese speaker level"!

All this is particularly incredible for software engineers, who are used to programming computers to perform every tiny operation in a complex task. If machines learn to operate more like humans, using the entire accumulated database of human knowledge -- also known as the Internet -- to improve the way they work, that will make programming easier and more creative. Eventually it may amount to mere goal-setting: you'd tell the computer, say, to design an object with certain qualities, and it would do the rest of the work.

It also threatens to make a lot of programmers redundant, but I have little sympathy for those afflicted with that particular fear. Programmable machines have already made many handicrafts obsolete; their creators can stand a taste of their own medicine. The spectrum of human thinking, however, stretches far beyond computing, even at its most advanced level, even at the frightening level it will achieve in five years, if Musk is right (and not just hyping a company in which he owns some shares).

That brings us to Facebook and its "Year in Review" feature. If you're still on Facebook, you've surely seen it: a selection of popular pictures from your timeline that is supposed to sum up your year. Facebook really wants you to share it: The reminder is prominent on the timeline, and it's not immediately obvious how to get rid of it. The algorithm that builds the "Year in Review" is a crude form of artificial intelligence. It apparently picks out the pictures that elicited the most interactions from a user's friends and followers and builds them into confetti frames to remind the user what a great year it was. (You wouldn't be able to tell what happened to me this year -- emigration and a lot of hard work -- from my "Year in Review.") It works for most people, though the picture choice is often startling.
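
To make that point concrete, here is a minimal, purely hypothetical sketch of the kind of selection logic described above -- rank a year's photos by raw interaction counts and keep the top few for a celebratory montage. The field names and scoring are illustrative assumptions, not Facebook's actual code.

    # Hypothetical sketch: rank a year's photos by interaction count and keep
    # the top few. Field names ("likes", "comments") are assumptions made for
    # illustration, not Facebook's real data model.
    from typing import Dict, List

    def pick_year_in_review(photos: List[Dict], top_n: int = 10) -> List[Dict]:
        """Return the most-interacted-with photos, in descending order."""
        def engagement(photo: Dict) -> int:
            # Every like and comment counts the same; the code has no notion
            # of whether the reactions were celebration or condolence.
            return photo.get("likes", 0) + photo.get("comments", 0)
        return sorted(photos, key=engagement, reverse=True)[:top_n]

    # A grieving post that drew many condolences outranks the beach trip.
    timeline = [
        {"caption": "Beach trip", "likes": 40, "comments": 5},
        {"caption": "A difficult goodbye", "likes": 120, "comments": 80},
    ]
    print([p["caption"] for p in pick_year_in_review(timeline, top_n=1)])

The sketch shows only that engagement counts carry no sense of context -- which is exactly the thoughtlessness Meyer describes below.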

But one day Meyer came across the reminder: The face of his six-year-old daughter, who died of brain cancer this year, surrounded by stock images of people having fun. Facebook eventually apologized to him. But as Meyer rightly pointed out, this wasn't a matter of faulty ethics, but a basic problem with algorithms:

Algorithms are essentially thoughtless.  They model certain decision flows, but once you run them, no more thought occurs.  To call a person “thoughtless” is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves.

I can imagine a truly intelligent machine figuring out what happened to Meyer's daughter and not putting her picture in that merry party-themed frame. I can even imagine one smart enough to keep the girl's picture out of the sequence altogether. But I cannot conceive of a machine so intelligent that it would decide that Meyer doesn't want to relive the year 2014 in any shape or form. This is the kind of knowledge that cannot be mined from big data. It is also the kind of unspoken knowledge the majority of humans share -- and it powers their interactions, from relationships to political choices. 

Artificial intelligence will someday be intelligent enough to pass from performing specific tasks -- like writing those captions, or making passable non-literary translations -- to building worlds of its own. We may even lose control of these worlds, as Hawking and Musk suggest. But they will always be separate from ours, because the human world does not really run on intelligence. It runs on those silly things that imbue pictures, music and language with meaning despite not being part of the data, and that's something technology isn't likely to change.

This column does not necessarily reflect the opinion of Bloomberg View's editorial board or Bloomberg LP, its owners and investors.

To contact the author on this story:
Leonid Bershidsky at lbershidsky@bloomberg.net

To contact the editor on this story:
Cameron Abadi at cabadi2@bloomberg.net