QuickTake Q&A

Can Facebook, Twitter Crack Down on Deception?

How much control do Facebook, Twitter and Google have over what happens on their platforms? Lawmakers called representatives from some of the world’s biggest tech companies to Washington in late October to explain how millions of Americans were exposed to deceptive content pushed by Russian sources during the 2016 presidential election. Days later, the problem of online misinformation reemerged in the wake of a mass shooting in Texas, as conspiracy theories about the event gained prominent placement on social media and spread quickly. Lawmakers are fed up with both kinds of problems.

1. What’s the problem?

People with ties to the Russian government, including the St. Petersburg-based "troll farm" called the Internet Research Agency, used Facebook, Google and Twitter to spread content designed to sow social discord during the 2016 U.S. campaign. Sometimes this meant buying advertising to target a particular message at a specific population; in other cases it meant posting unpaid content and letting it spread on its own. About 150 million users saw posts from the Internet Research Agency, whose main purpose is to push Kremlin propaganda, and 11 million users saw ads it purchased. Twitter evidently offered to sell 15 percent of its U.S. election advertising to RT, the Russian news outlet that later registered as a foreign agent. At Google, some engineers coined the term "evil unicorns" to describe unverified, lie-filled posts on obscure topics.

2. Can social media companies be more alert to this?

The challenge is one of scale. Facebook and Google each have more than 2 billion monthly users; Twitter has about 330 million. The companies say there’s just too much moving through their systems to monitor it all. Their stepped-up efforts to flag or block such content using algorithms have so far fallen flat.

3. So what can be done?

The social media giants promise to get serious -- a claim that left some lawmakers skeptical, given that the companies’ representatives testified that they lack the tools to stop a repeat and may never have them. Facebook has said it will double its security staff to 20,000 and invest in new artificial intelligence systems to help the newly hired humans review questionable content. Google says it’s more carefully curating the carousels that list “Top Stories” and the posts it pulls from Twitter. Across the industry, the companies promise to keep building technology that automatically identifies problematic patterns of behavior and to force advertisers to be more transparent.

4. Will those steps work?

It’s hard to say, since the plans are still in early stages. And they would address just part of the issue. As Facebook acknowledged just before the hearings, far more people saw unpaid "organic" posts than saw paid ads placed by the Russian troll farm; by far the largest exposure to Russian-backed content on Facebook came from users sharing posts within their own networks. Social media blurs the line between advertising and everything else in a way that older forms of media don’t.

5. Will the U.S. government get involved?

It’s starting to. The Federal Election Commission is considering requiring that internet political ads include disclosures of who paid for them. A proposed bill in the U.S. Senate, the Honest Ads Act, would do much the same. But those steps, even if enacted, wouldn’t address political communication that isn’t technically advertising.

6. How much can the government do?

Any attempt to regulate these companies as if they were media outlets would run into one of the tech industry’s most cherished pieces of legislation -- Section 230 of the 1996 Communications Decency Act, which shields them from liability for the actions of their users. The courts have interpreted this immunity quite broadly, and any big change to it would inspire a huge fight.

7. Which way is Silicon Valley heading on this?

None of the big tech platforms took this very seriously until the political situation forced their hand. After the election, Facebook Chief Executive Officer Mark Zuckerberg dismissed the idea that fake news was a problem, and Twitter scoffed at the suggestion that bots were a factor in politics. That didn’t improve their credibility with lawmakers or the public. For now, tech companies hope to avert legislation dictating what they must do, and executives don’t want to be cast as editors of the world’s discourse. Then again, a trade group representing Facebook, Google and Twitter recently dropped its opposition to a sex-trafficking bill that would weaken the legal immunity enjoyed by websites -- a move widely interpreted as a sign that the political atmosphere surrounding tech is evolving rapidly.

The Reference Shelf

  • Inside Google’s struggle to filter lies from breaking news.
  • Senate Judiciary Committee page for its hearing on Russian disinformation online, with video and links to prepared testimony.
  • Government website for the Senate’s Honest Ads Act, with full text and updates on the bill’s progress.
  • Senator Al Franken’s speech about how big tech threatens security, freedoms and democracy.
  • QuickTake Q&As on the Trump-Russia investigation and on Facebook’s fake news problem.