Cathy O'Neil, Columnist

Facebook and Twitter Can’t Police What Gets Posted

Neither AI nor humans seem capable of properly moderating content.

[Photo caption: Sometimes it works. Photographer: Robyn Beck/AFP/Getty Images]

I wouldn’t want to work at a social media company right now. With the spotlight on insurrection planning, conspiracy theories and otherwise harmful content, Facebook, Twitter and the rest will face renewed pressure to clean up their act. Yet no matter what they try, all I can see are obstacles.

My own experience with content moderation has left me deeply skeptical of the companies’ motives. I once declined to work on an artificial intelligence project at Google that was supposed to parse YouTube’s famously toxic comments: The amount of money devoted to the effort was so small, particularly in comparison to YouTube’s $1.65 billion valuation, that I concluded it was either unserious or expected to fail. I had a similar experience with an anti-harassment project at Twitter: The person who tried to hire me quit shortly after we spoke.