Twitter Inc., the Web-based social-messaging service, helped spur a 30 percent growth in online forums for hate and terror over the past year, according to a report by the Simon Wiesenthal Center.
The center is currently tracking about 20,000 hate and terror-related sites, up from 15,000 a year earlier, according to a report set to be presented at a briefing in Washington, D.C., tomorrow.
Social-media services including Twitter, Facebook Inc. (FB) and Google Inc. (GOOG)’s YouTube video-sharing website should restrict the dissemination of hateful speech or content that aids terrorists, according to Rabbi Abraham Cooper, associate dean at the Simon Wiesenthal Center, a Los Angeles-based human-rights advocacy group. Events such as the Boston Marathon bombings highlight the need for some content to be blocked, Cooper said. U.S. officials have said the attackers were motivated by radical Islamic teachings on the Internet.
“Twitter is allowing itself to be one click away from full libraries of terrorist material,” Cooper said in an interview.
While Facebook and other Internet companies have met regularly with Cooper’s organization to address concerns related to content, Twitter has not, according to the report.
Jim Prosser, a spokesman for San Francisco-based Twitter, said that the company supports a global communication service with a variety of voices, ideas and perspectives.
“As a policy, we do not mediate content or intervene in disputes between users,” he said in an e-mailed statement. “However, targeted abuse or harassment may constitute a violation of the Twitter rules and terms of service.”
The growth rate in Web-based hate speech may be slower than the overall growth in social-media use. Twitter, for example, tracked more than 400 million posts a day in November, twice the activity level from June 2011, Prosser said.
YouTube bars users from posting material that may incite violence, train terrorists or contain hate speech, and relies on its members to flag questionable content, the company said in an e-mailed statement.
“We review flagged videos around the clock, routinely removing material that violates our guidelines,” YouTube said.
Facebook also enforces its own rules against speech promoting hate or terrorism on its site and relies on a team of professional investigators to identify and remove such material, said Fred Wolens, associate manager of public policy at the company.
“Where abusive content is posted and reported, Facebook removes it and disables accounts of those responsible,” he said in an e-mailed statement.
Social-networking sites face difficult questions in balancing their users’ right to free expression with rules against hate speech written into their terms of service. Last October, Twitter blocked access to the account of a banned German right-wing group for viewers in that country, the first time the microblogging service had made use of an option to withhold content.
Twitter has taken the right approach in letting most voices write freely on the service, said Gabriel Rottman, legislative counsel at the American Civil Liberties Union in Washington.
“They deserve a lot of credit in fostering an open platform for speech,” he said in an interview. “The good speech is right up there facing off against the bad speech, the hate speech.”
To contact the reporter on this story: Douglas MacMillan in San Francisco at email@example.com
To contact the editor responsible for this story: Tom Giles at firstname.lastname@example.org