Trump's Twitter Bots Turned Out on Election Day
Donald Trump’s supporters made a surprisingly strong showing on Nov. 8, and not just at polling places in the Rust Belt. Twitter bots accounted for nearly a quarter of all postings that included hashtags related to the election, according to an analysis by researchers at Corvinus University, Oxford, and the University of Washington published on Thursday. They found that pro-Trump hashtags got five times as much traffic from automated accounts as hashtags that were pro-Hillary Clinton.
“The use of automated accounts was deliberate and strategic throughout the election, most clearly with pro-Trump campaigners and programmers who carefully adjusted the timing of content production during the debates, strategically colonized pro-Clinton hashtags, and then disabled automated activities after election day,” the researchers wrote.
Twitter dismissed the idea that automated propaganda influenced voters. But the discussion of coordinated bot campaigns fits into the broader post-election theme that social media companies need to re-examine their role in amplifying abusive or misleading political messages. In the days before the vote, messages designed to confuse pro-Clinton voters about the voting process circulated through the system, and Twitter was criticized for being slow to respond. This week, the company cracked down on white nationalist accounts and introduced a new filtering tool that lets people mute certain words in an attempt to clamp down on harassment. Facebook has also been forced into some soul-searching over its handling of fake news.
The study on Twitter bots, part of an effort called the Project on Computational Propaganda, examined 19.4 million tweets posted between Nov. 1 and Nov. 9. All of the messages used some combination of hashtags related to the presidential campaign, with some clearly supportive of one candidate (#MakeAmericaGreatAgain, #ImWithHer) and others having no explicit political leaning (#Election2016, #iVoted). The study then flagged as bots any accounts that tweeted too often to be credibly human, defining an account that posted at least 50 messages a day using one of the election-related hashtags as "highly automated."
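The study's threshold rule is simple enough to sketch in a few lines of Python. This is an illustration of the criterion as described, not the researchers' actual code; the function name, data layout, and nine-day window are assumptions for the example.

```python
from collections import Counter

# Threshold from the study: at least 50 election-hashtag tweets
# per day flags an account as "highly automated."
THRESHOLD_PER_DAY = 50

def flag_highly_automated(tweet_accounts, num_days):
    """Return the set of accounts whose average daily posting rate
    of election-hashtag tweets meets the study's threshold.

    tweet_accounts: iterable of account IDs, one entry per
        election-hashtag tweet observed in the sample window.
    num_days: length of the sample window in days (the study
        covered Nov. 1 through Nov. 9, i.e. nine days).
    """
    counts = Counter(tweet_accounts)
    return {
        account
        for account, n in counts.items()
        if n / num_days >= THRESHOLD_PER_DAY
    }
```

As Twitter's rebuttal notes, a pure volume threshold like this can sweep up prolific human users, which is one limitation of classifying bots by posting rate alone.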
This didn’t mean there wasn’t a human involved. Philip Howard, one of the authors of the report, says that there are signs that people are actively managing fleets of automated accounts to make them seem more authentic. “They look good. They have good photos, they sometimes tweet about soccer scores, and they are rabidly pro-Trump,” he said. It’s hard to know for sure, but Howard says most automated accounts are probably controlled by independent supporters of the candidates.
In the study’s sample, highly automated accounts generated close to 18 percent of all Twitter traffic about the election, but that proportion grew at key times such as the televised debates and in the final few days before voting. The top 20 accounts tweeted over 1,300 times a day, about 2,600 times the rate of the average account.
The partisan disparity grew over time. During the first debate, there was about four times as much highly automated pro-Trump activity as comparable traffic supporting Clinton. By election day, the split had grown to five to one. The day after the election, bot activity dropped off significantly.
Twitter says that the report uses flawed measurement tactics to come up with misleading claims. It says the study's threshold for determining whether an account is automated is low enough to capture some prolific human users of the service. Moreover, by measuring how much an account tweets rather than how many people see its messages, the study isn’t considering whether these accounts make an impact. Twitter also says it uses algorithmic methods to determine which hashtags show up in its trending topics section, which guard against attempts to spoof the system with high volumes of low-quality traffic.
“Anyone who claims that automated, spam accounts that tweeted about the U.S. election had an effect on voters’ opinions or influenced the national Twitter conversation clearly underestimates voters and fails to understand how Twitter works,” said Nick Pacilio, a spokesman for the company.
Twitter has always allowed automated posting, although it deactivates accounts that send spam. Howard says there's also no clear reason to think automated politicking is illegal. "Bot use is probably a form of protected speech," he said. Howard says he met with Twitter last year to discuss ways to work together, but the company declined to share its internal methods for identifying bot activity.
Researchers who study the use of bots in political propaganda say the tactics are still evolving. In Venezuela, radical opponents of the government have created accounts purporting to belong to political figures; though capable of spreading misinformation, the accounts have mostly been used to promote innocuous political events. When opponents of Syrian President Bashar al-Assad began using hashtags on Twitter to organize and share evidence of abuses, bots began using the same hashtags to post a wave of recipes and other inane content, according to Douglas Guilbeault, a doctoral student at the University of Pennsylvania’s Annenberg School for Communication.
On Wednesday, the white nationalist website the Daily Stormer said it had created over 1,000 fake Twitter accounts that purported to be the personal accounts of black people, and urged its readers to do the same. It alluded to a future trolling campaign. "Twitter is about to learn what happens when you mess with Republicans," wrote Andrew Anglin on the site.
Bot tactics are often relatively unsophisticated, says Guilbeault, but bots are a cheap tool that has proved effective at muddying debates, spreading confusion, and making life unpleasant for political opponents. Guilbeault, who works with the Project on Computational Propaganda, says it’s not clear how big an impact bots are having today. “If bots have even a minor influence, that’s scary,” he said.