Can Social Media Close Gaps That Let Russians In?

A staff member arranges a display showing a social media post during a House Intelligence Committee hearing in Washington, D.C., on Nov. 1, 2017.

Photographer: Andrew Harrer/Bloomberg

Lawmakers called representatives from some of the world’s biggest tech companies to Washington in late October to explain how millions of Americans were exposed to content pushed by Russian sources during the 2016 presidential election. Days later, the broader problem of misinformation online emerged again in the wake of a mass shooting in Texas, with conspiracy theories about the event quickly spreading. Lawmakers are fed up with both kinds of problems, as summed up by Senator Dianne Feinstein, a California Democrat. “You bear this responsibility. You created these platforms, and now they’re being misused. And you have to be the ones who do something about it, or we will,” she told tech companies during one of the hearings. But what does “doing something” mean?

1. What is the problem?

It’s not just “fake news” and it’s not just ads; it’s a mix of those and other forms of social-media sharing. The most intense focus is on how people with ties to the Russian government used Facebook, Google and Twitter to foment discord during last year’s campaign. About 150 million users saw posts from the Internet Research Agency, a Russian company whose main purpose is to push Kremlin propaganda, in addition to 11 million users who saw ads the company purchased. Twitter reportedly offered to sell 15 percent of its U.S. election advertising to RT, the Russian news outlet that later registered as a foreign agent.

2. What were the Russians doing?

The U.S. intelligence community has determined that the Kremlin interfered in the 2016 election with the goal of helping Trump win. But in many cases, rather than explicitly back a candidate, its strategy was to spread content designed to sow social discord. Sometimes this meant buying advertising to target a particular message to a specific population; in other cases it meant posting content in the hopes it would spread on its own. Both kinds of activities were possible because of basic choices these tech platforms made: Unlike traditional media outlets, they’re designed to be unmanaged conduits of information. A broader question is whether they’re not just unmanaged, but unmanageable.

3. What makes this content so hard to deal with?

Scale, first of all. Facebook and Google each have more than 2 billion monthly users, and Twitter has about 330 million. The companies have argued repeatedly that there’s just too much moving through their systems to monitor it all. Critics say this humility is a convenient departure from their steady boasts of technological prowess, like wrapping the Earth in internet signals beamed down from balloons or building alternate worlds in virtual reality. Tech companies’ claims that the problematic material was a minuscule proportion of the overall content on their systems particularly irked lawmakers. So did the fact that their stepped-up efforts to flag or block such content using algorithms have so far fallen flat.

4. Is it unsolvable, or did Silicon Valley do something wrong?

At the least, it seems that none of the big tech platforms took this very seriously until the political situation forced them to make it look like they were doing so. Following the election, Facebook Chief Executive Officer Mark Zuckerberg dismissed the idea that fake news was a problem, and Twitter scoffed at the idea that bots were a factor in politics. That hasn’t exactly improved their credibility with lawmakers or the public.

5. What do the companies say they’ll do?

They’re all promising to get serious, a claim that left some lawmakers skeptical, given that the companies’ representatives testified that they don’t currently have, and may never have, the tools to stop a repeat. None of them sent high-profile executives to Washington. On the same day Facebook’s general counsel was getting grilled by lawmakers, Zuckerberg was on the phone with investors talking about the $4.7 billion profit the company made last quarter.

6. Have they made specific plans?

Facebook has said it will double its security staff to 20,000 and invest in new artificial-intelligence systems to help the newly hired humans review questionable content. The broader industry is promising to keep building technology that automatically identifies problematic patterns of behavior. The companies are also promising to force advertisers to be more transparent. These plans are still in the early stages, so it’s hard to judge how well they might work.

7. How far would that go?

It would address just a part of the issue. As Facebook acknowledged just before the hearings, far more people saw so-called organic posts from the Russian troll factory than the ads it bought. Social media blurs the line between advertising and everything else in a way that older forms of media don’t. A malevolent political actor — or a company selling deodorant — could post something on Facebook that wasn’t an ad, then pay to promote it, making it an ad, in the hopes that people would repost it, disseminating it further as not-an-ad. By far the largest exposure to Russian-backed content on Facebook came from users sharing posts within their networks.

8. Isn’t the government getting involved?

A bit, maybe. There’s a proposed bill — the Honest Ads Act — that would put new disclosure requirements on political advertising. While the provisions of the bill have a lot in common with what the tech companies say they’ll do on their own, the industry is opposing it. And despite the anger directed at Silicon Valley in Washington, it’s not clear that the bill has much momentum. Republican lawmakers have generally opposed new regulations on campaign speech. In any case, the bill wouldn’t address political communication that isn’t technically advertising.

9. How much else can the government do?

It’s not even clear what approach the government would take. Much of the criticism has focused on companies insufficiently policing their platforms. But there is also concern about them interfering too much. For instance, Senator Al Franken, a Minnesota Democrat, said in a recent speech that tech companies should remain neutral in their treatment of the flow of lawful information and commerce on their platforms. Any attempt to regulate these companies as if they were media outlets would run into one of the tech industry’s most cherished pieces of legislation: Section 230 of the Communications Decency Act, a 1996 law that protects them from being held liable for the actions of their users. The courts have interpreted this immunity quite broadly, and any big changes to it would inspire a huge fight.

10. Which way is Silicon Valley heading on this?

Tech executives say that casting them in the role of editor for the world’s discourse isn’t the best way to ensure freedom of expression. Then again, a trade group representing Facebook, Google and Twitter recently dropped its opposition to a sex-trafficking bill that it had said would weaken the 20-year-old law. That was widely interpreted as a sign that the political atmosphere surrounding tech is evolving rapidly.

The Reference Shelf

  • Senate Judiciary Committee page on hearing on Russian disinformation online, with video and links to prepared testimony.
  • Government website for the Senate’s Honest Ads Act, with full text and updates on the bill’s progress.
  • Senator Al Franken’s speech about how big tech threatens security, freedoms and democracy.
  • QuickTake Q&As on the Trump-Russia investigation and on Facebook’s fake news problem.