If Fake News Fools You, It Can Fool Robots, Too

Algorithms are no better than people at recognizing what's true or right.

Your fact-checking friend? Photographer: Fabrice Coffrini/AFP/Getty Images

Uninformative as fake news may be, it's shedding light on an important limitation of the algorithms that have helped make the likes of Facebook and Google into multi-billion-dollar companies: They're no better than people at recognizing what is true or right.

Remember Tay, the Microsoft bot that was supposed to converse breezily with regular folks on Twitter? People on Twitter are nuts, so within 16 hours it was spewing racist and anti-Semitic obscenities and had to be yanked. More recently, Microsoft released an updated version called Zo, explicitly designed to avoid certain topics, on the smaller social network Kik. Zo's problem is that she doesn't make much sense.

The lesson from these experiments: Algorithms, machine learning, artificial intelligence or whatever else you’d like to call such things are not good at general knowledge and understanding. They can avoid a blacklist of topics, or respond in some special way to a whitelist, but that’s about it. They have no underlying model of the world that allows them to make nuanced distinctions between truth and falsehoods. Instead, they rely on pattern matching from a large corpus of consistently true information.
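
To make that distinction concrete, here is a minimal sketch, in Python, of the kind of list-based filtering described above. The topic lists and canned replies are hypothetical illustrations, not anything Microsoft has published:

```python
# Hypothetical sketch of list-based topic filtering, the kind of guardrail
# described above. The bot has no model of the world; it just checks
# incoming text against fixed lists of keywords.

BLACKLIST = {"politics", "religion"}  # topics to refuse outright
WHITELIST = {"weather": "It looks sunny where I am!"}  # topics with canned replies


def generic_pattern_match(message: str) -> str:
    # Stand-in for the statistical text matching a real bot falls back on,
    # which has no notion of whether the matched patterns are true.
    return "Interesting. Tell me more."


def respond(message: str) -> str:
    words = set(message.lower().split())
    if words & BLACKLIST:
        return "I'd rather not talk about that."
    for topic, reply in WHITELIST.items():
        if topic in words:
            return reply
    return generic_pattern_match(message)


if __name__ == "__main__":
    print(respond("what do you think about politics"))  # refused by blacklist
    print(respond("how is the weather today"))           # canned whitelist reply
    print(respond("do you like pizza"))                  # falls through to pattern matching
```

Everything the bot "knows" lives in those two lists; nothing in the fallback step can tell a true claim from a false one.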

That’s not to say they can't infer information, or that they are logically flawed. They excel in tiny, toy universes where the rules of the game are precisely understood and consistent -- games such as chess or Go. They can even handle trivia, as the success of IBM's Watson in playing Jeopardy has demonstrated.

Watson's ability to study and recall data involves a lot of sophisticated machine learning and graph theory. But those data -- the “ground truth” for Watson, consisting of articles, research reports, blogs and tweets found on the internet -- must be reliable. If the internet were half wrong, or 99 percent wrong, Watson would be terrible at Jeopardy.

Our society is embroiled in a debate about what is true, what is opinion and what is propaganda, and it leaves most of us confused. Why should artificial intelligence be any different?

It would be great if an algorithmic gatekeeper could help us out. Google, for one, has done a pretty good job of algorithmically vetting websites for quality. Even here, though, groups devoted to propaganda around Jews, women, Hitler and Muslims have managed to game autocomplete and search algorithms, leading users to bogus websites. Google has cleaned up the more embarrassing examples, but by employing a sophisticated version of blacklisting rather than any deep change in its algorithmic methodology.

If we must rely on lists, can we at least crowdsource them? Facebook apparently hopes to do so by getting its users to flag fake news and using outside fact-checking resources. This sounds like a pretty good idea until you consider the success that fake news creators have already enjoyed -- including in getting real people to repost their disinformation. If they can game Google, they will likely game the “flag the fake” system, too, if only by flagging everything and overwhelming the fact-checkers.
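
A bit of back-of-the-envelope arithmetic shows why flooding works. The numbers below are purely illustrative assumptions, not Facebook's figures:

```python
# Purely illustrative arithmetic: fixed fact-checking capacity versus a
# flood of bad-faith flags. None of these numbers are real platform figures.

daily_posts = 1_000_000
genuine_flags = 5_000       # flags from users reporting in good faith
checker_capacity = 4_000    # stories outside fact-checkers can review per day

# Good-faith flagging: the review queue is mostly manageable.
honest_share = min(checker_capacity, genuine_flags) / genuine_flags
print(f"Flags reviewed under honest flagging: {honest_share:.0%}")

# Flood attack: adversaries flag half of everything posted.
flooded_flags = genuine_flags + daily_posts // 2
flooded_share = checker_capacity / flooded_flags
print(f"Flags reviewed under flooding: {flooded_share:.2%}")
```

Under these made-up but plausible proportions, the share of flags that ever reaches a human reviewer drops from 80 percent to well under 1 percent.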

Algorithms have worked well for the past few decades, mainly because the information people posted tended to be trustworthy or useful. Companies such as Google and Facebook, which have a lot of money riding on the algorithms, will naturally try to make the case that the public should keep trusting them. But in an environment of intentionally false information, users will need to move past the algorithms and decide which individuals -- and which news sources -- to rely on for vetted information.


This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

    To contact the author of this story:
    Cathy O'Neil at cathy.oneil@gmail.com

    To contact the editor responsible for this story:
    Mark Whitehouse at mwhitehouse1@bloomberg.net
