Google, Twitter, Facebook Failing on Extremism, U.K. Panel Says
Parliamentary committee calls for more internet monitoring
Companies say they’ve blocked accounts, removed content
The companies should employ more staff to police content and British law-enforcement agencies should have extra resources to detect and block dangerous posts and users, Parliament’s Home Affairs Committee said in a report published in London Thursday. The networks are the “vehicle of choice in spreading propaganda” and key “recruiting platforms” for extremists, the cross-party panel said.
“Huge corporations like Google, Facebook and Twitter, with their billion-dollar incomes, are consciously failing to tackle this threat and passing the buck by hiding behind their supranational legal status, despite knowing that their sites are being used by the instigators of terror,” the committee chairman, opposition Labour Party lawmaker Keith Vaz, said in a statement.
The companies defended their record on tackling terrorism, saying they have blocked accounts and removed inflammatory content.
About 800 people with links to the U.K. have traveled to Syria and Iraq to join the Islamic State terror group, also known as Daesh, and about half have returned to Britain, the panel said, highlighting the need to stop the radicalization of Muslims online.
“We are engaged in a war for hearts and minds in the fight against terrorism,” Vaz said. “The modern frontline is the internet. Its forums, message boards and social media platforms are the lifeblood of Daesh and other terrorist groups for their recruitment and financing and the spread of ideology.”
YouTube, which is owned by Google, pledged to continue working with the British government and law-enforcement agencies to reduce the opportunities for radicalization through videos posted on the site.
“We take our role in combating the spread of extremist material very seriously,” the company said in an e-mailed statement. “We remove content that incites violence, terminate accounts run by terrorist organisations and respond to legal requests to remove content that breaks U.K. law.”
Twitter said Aug. 19 that it had suspended 360,000 accounts for advocating extremism since the middle of 2015, including 235,000 since February 2016.
Simon Milner, director of policy for Facebook U.K., said the company has been working with experts on “counter speech initiatives,” in which people are encouraged “to use Facebook and other online platforms to condemn terrorist activity and to offer moderate voices in response to extremist ones.”
“Terrorists and the support of terrorist activity are not allowed on Facebook and we deal swiftly and robustly with reports of terrorism-related content,” Milner said.
While welcoming the action already taken by the companies, the panel said it’s not enough given the scale of their user bases and the profits made through their platforms. Twitter told lawmakers it employs “more than 100” staff to assess and remove extremist content, while Google and Facebook declined to provide detailed numbers.
“They must accept that the hundreds of millions in revenues generated from billions of people using their products needs to be accompanied by a greater sense of responsibility and ownership for the impact that extremist material on their sites is having,” the committee said. “If they continue to fail to tackle this issue and allow their platforms to become the ‘Wild West’ of the internet, then it will erode their reputation as responsible operators.”
Charlotte Holloway, policy director at techUK, an industry association for Britain’s large technology companies, said the report “painted an inaccurate picture” of their commitment to combating extremism.
“Responsibilities to tackle online extremism are a serious and ongoing priority, backed by significant resources,” she said. The majority of counter-terrorism operations would not succeed without cooperation from tech companies, Holloway said.