Get Ready to Fall in Love With a Robot
Soon after Facebook apologized to a user for suggesting he look back fondly on a year in which he lost his six-year-old daughter, Cambridge and Stanford scientists published a paper saying that the analysis of a person's Facebook likes provides a more accurate personality assessment than even friends and family can offer. Though its authors present it in the glowing terms typical of artificial-intelligence hype, the paper actually provides a great map of technology's limitations when it comes to making human-like judgments.
The problem with Facebook's "Year in Review" feature, which traumatized designer Eric Meyer, was precisely that its content was based on the number of reader interactions with certain posts. Friends commiserated with Meyer on his daughter's death, and her picture automatically came up in the year-end feature, surrounded by images of partying people. It's hard to interpret the number of likes, shares and comments algorithmically. They could reflect a wide range of emotions: grief, indignation, admiration, sarcastic contempt, gratitude, acquiescence, excitement.
So how could the researchers claim they had made sense of likes, especially since they confined themselves to the simplest interpretation of a like -- that it means a Facebook user actually likes something?
Wu Youyou, Michal Kosinski and David Stillwell did a huge amount of work, asking 86,220 volunteers to fill in a personality questionnaire. Then, they asked the volunteers' friends, colleagues and family questions about them, and collected information about their "likes." It transpired that the analysis of 10 likes provided a better match with a person's self-assessment than a colleague's evaluation. The computer needed 70 likes to beat a friend, 150 to defeat a family member and 300 to outperform a spouse at this matching game, called self-other agreement in psychologists' parlance.
There's a bit of a shortcut in the researchers' reasoning, though: They used a person's self-assessment as the measure of accuracy. We all know we often try to look our best on social networks, inflating our self-image rather than just being ourselves and acting the way others are used to seeing us act in real life. On the other hand, perhaps an artificial intelligence meant to communicate with humans should flatter us by talking to our ideal selves: We'll like it better that way. The researchers wrote:
In the film "Her", for example, the main character falls in love with his operating system. By curating and analyzing his digital records, his computer can understand and respond to his thoughts and needs much better than other humans, including his long-term girlfriend and closest friends. Our research, along with developments in robotics, provides empirical evidence that such a scenario is becoming increasingly likely as tools for digital assessment come to maturity.
Our likes do give away important clues about our character. "For example," Youyou, Kosinski and Stillwell wrote, "Participants with high openness to experience tend to like Salvador Dali, meditation or TED talks; participants with high extraversion tend to like partying, Snooki (reality show star) or dancing." The researchers' computer was especially good at grading openness, a trait that is otherwise hard to pin down: The more eclectic a person's intellectual interests, the more open-minded she is. It also showed better-than-human results when determining a person's political orientation, field of study, personal network size and even substance use.
If the AI analyzing our likes were trying to manipulate us -- say, into buying something or supporting a political candidate -- this kind of analysis would give it a lot of cards to play. That's nothing new to us in 2014: We are already being manipulated by machines for commercial gain; that's why Google's bread-and-butter technology -- advertising tailored to our Internet searches -- works so well.
The computer, however, was worse than human judges at determining the level of a person's life satisfaction. Colleagues, friends and relatives are still better than machines at sensing how happy or how dejected we are. There are probably other gaps in the machine's understanding that the researchers' questionnaire didn't catch. My guess is that these gaps would be in the areas that are the most difficult to quantify -- the ones where the meaning of the like is untypical for a particular individual or otherwise uncertain.
Even as they come closer to perfection, machines will often act insensitively, as Facebook's "Year in Review" program did toward Meyer. That's because they will always be bad at handling complex emotions (love-hate, for example), sarcasm and any other kind of inherently human ambiguity.
Of course, that doesn't mean an AI girlfriend won't be perfect for many people. After all, ambiguity and complexity are things we tend to flee in real-life relationships.
This column does not necessarily reflect the opinion of Bloomberg View's editorial board or Bloomberg LP, its owners and investors.
To contact the author on this story:
Leonid Bershidsky at firstname.lastname@example.org
To contact the editor on this story:
Cameron Abadi at email@example.com