Gadfly

Facebook Can't Leave Advertising on Autopilot

Responsibility has to take precedence over automation and speed.

Internet companies live in fear of being labeled censors. Google, Facebook, Twitter and other big web companies squirm when politicians or other people demand they decide which photos, news articles and other forms of speech are appropriate within their digital walls. That's why the companies tread carefully even when it comes to policing activity by terrorist groups on their sites.

I understand their discomfort. We can debate whether Facebook should kick white supremacists out of its virtual nightclub of 2 billion people. But Facebook has no excuse for how lax it is about policing its own advertising business. The company makes it too easy to generate revenue from people who abuse its technology as a megaphone for vitriol, scams and foreign propaganda.

Twice in recent weeks, Facebook acknowledged that people were able to pay the company for nefarious missions, in one case to sow political controversy on behalf of possible Russian mischief-makers and in another to reach people interested in anti-Semitic topics. These scandals had a common cause: Facebook's misplaced zeal to remove any possible speed bumps on the road to advertising riches.

These weren't the first times the company has stepped in muck because it opts for automation and speed over responsibility when it comes to making money. Facebook cannot operate this way anymore. It's too big, too powerful and too much under the spotlight.

The latest flap stemmed from a discovery by the news organization ProPublica, which showed it was able to purchase ads on Facebook targeted at people who indicated they were interested in topics such as "Jew hater" or "how to burn Jews." The company's computerized advertising system even recommended additional categories ProPublica could select for its ads to reach more people. (On Friday, BuzzFeed showed that Google's giant advertising system has similar flaws.)

This happened because Facebook users can put whatever they want in the profile fields for their employer, education and other personal information. Facebook's computers turn that self-identified background into categories for targeting advertising messages. Most of the time, this is innocuous. Nike might target ads for its workout shoes at people whose Facebook profiles say they're "gym rats" or "love CrossFit." But sometimes people fill these fields with vitriol, and then ad targeting is anything but innocuous.
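To make that mechanism concrete, here is a minimal sketch of how free-text profile fields can turn into targeting categories when nothing screens the text. Everything in it is hypothetical; the field names and threshold are illustrative, since Facebook's actual pipeline is not public.

```python
from collections import Counter

# Hypothetical sketch only; Facebook's real pipeline is not public.
# It shows how self-reported, free-text profile fields can become
# ad-targeting categories when nothing examines what the text says.

def build_targeting_categories(profiles, min_users=10):
    """Create a targeting category for any phrase that enough users
    typed into their free-text profile fields. Nothing here checks
    the content of the phrase, which is how "Jew hater" could become
    targetable alongside "gym rats"."""
    counts = Counter()
    for profile in profiles:
        for field in ("employer", "education", "interests"):
            value = profile.get(field, "").strip().lower()
            if value:
                counts[value] += 1
    # Any phrase shared by at least `min_users` people becomes a category.
    return {phrase for phrase, n in counts.items() if n >= min_users}
```

The point of the sketch is what is missing: there is no content check of any kind between the user's keyboard and the advertiser's targeting menu.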

Facebook said on Thursday that it would temporarily block advertisers from pitching people based on some self-reported profile fields.

Facebook also found itself in a firestorm over its disclosure that $100,000 worth of ads were circulated on the social network around the 2016 U.S. presidential election by "inauthentic" accounts that it believes are linked to Russia. Those ads spread "divisive social and political messages across the ideological spectrum," Facebook said. Robert Mueller, the special counsel investigating Russian efforts to influence U.S. voters, is very interested in learning more about this Facebook propaganda.

Facebook doesn't want to make money from Russian trolls or anti-Semites. But it did, because the $500 billion company's advertising system lets computers run things on autopilot and leaves humans to clean up the inevitable messes. Each time someone exposes flaws in Facebook's ad system, the company puts a Band-Aid on the problem and promises to do better. [1] This approach is not going to work anymore. The messes happen too often, and the consequences are too severe.

Facebook needs to put up more guardrails to actively weed out nefarious advertising. It could use computerized filters and human screening to block certain advertising categories from being automatically generated from users' profile information. Facebook also needs to do more to scrutinize the accounts that buy advertising, particularly when they are new or carry other red flags.
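As a rough illustration of what such a guardrail could look like, consider this sketch: a keyword filter that refuses to auto-generate suspect categories and routes them to a human review queue instead. The blocked terms and the policy are invented for the example; they are not Facebook's actual rules.

```python
# Hypothetical sketch of the guardrails proposed above: screen candidate
# targeting categories with a keyword filter, and hold borderline ones
# for human review rather than creating them automatically.

BLOCKED_TERMS = {"hate", "hater", "burn", "kill"}  # illustrative only

def screen_category(phrase, human_review_queue):
    """Return True if a candidate targeting category may go live.

    Any phrase containing a blocked term is held for human review
    instead of being auto-generated, trading a little speed and
    automation for responsibility."""
    words = set(phrase.lower().split())
    if words & BLOCKED_TERMS:
        human_review_queue.append(phrase)  # a human decides later
        return False
    return True

review_queue = []
candidates = ["gym rats", "love crossfit", "jew hater"]
live = [c for c in candidates if screen_category(c, review_queue)]
# live == ["gym rats", "love crossfit"]; "jew hater" awaits human review.
```

A real system would need a far richer classifier than a word list, but the design choice is the same one the column argues for: a human in the loop before money changes hands.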

These kinds of safeguards are not unheard of, even in a technology industry obsessed with speed and automation. Apple personnel review iPhone apps for security vulnerabilities and inappropriate content before people unwittingly download them. If the world's most valuable public company can actively screen millions of apps that are essential to its business, then Facebook can add more checks to its ad system.

Facebook and its internet superpower brethren know they need to operate cautiously. Politicians on the left and the right, academics, business rivals and many others share a mission to blunt the power of influential technology companies. The more missteps these companies make, the more they lay bare their power over our lives and the more questions they raise about whether they need more government oversight.

For Facebook's own good and for ours, the company must become less reactive and more proactive in its business. I'm not arguing for a digital colonization that spreads Facebook's principles to the world; I'm asking Facebook to apply its principles to itself.

This column does not necessarily reflect the opinion of Bloomberg LP and its owners.
[1] Last year, ProPublica also was able to purchase an ad in Facebook's housing categories and block it from being shown to African-Americans, Hispanics and Asian-Americans, which raised the question of whether Facebook was violating laws prohibiting discrimination in housing advertising. Facebook made changes in response to that article.
