Why Silicon Valley Won't Enlist in Anti-Terror Fight
President Barack Obama’s Oval Office speech on combating ISIS included a plea for "high-tech and law enforcement leaders to make it harder for terrorists to use technology to escape from justice." Hours earlier, Hillary Clinton gave her own speech urging Silicon Valley's "disrupters" to get to work disrupting the terrorist organization. Neither provided any details—a tacit acknowledgment that it's much easier to make a broad call for technological action against terrorism than it is to navigate the messy specifics that would follow.
Since former intelligence analyst Edward Snowden exposed widespread government spying on Americans in 2013, U.S. tech companies have increasingly relied on encryption technology that prevents anyone but a specific user from accessing private information. On Monday, White House spokesman Josh Earnest told reporters that "the president believes in the importance of strong encryption, but at the same time, as the president mentioned last night, we don’t want terrorists to have a safe haven in cyberspace." Earnest seemed to be suggesting that there's a middle ground between the two sides in the debate over encryption. But after months of conversations between the government and Silicon Valley, no such compromise has emerged—and probably none will.
For more than a year, law enforcement officials have fumed as tech giants such as Apple, Google and Facebook embraced techniques that make it virtually impossible for anyone to access certain information that passes through their services or makes it onto their devices. After complaining loudly about such actions, FBI Director James Comey said in October that the White House wouldn't seek legislation that would require companies to design encryption in a way that would allow government officials to access it. Then came the attacks in Paris, and Comey once again began to emphasize the government's need to access encrypted communications. Manhattan District Attorney Cyrus Vance backs legislation that would require companies to preserve data to be handed over to investigators after securing a warrant.
The technical realities of encryption defy a political solution like the one Vance has proposed. Apple and Facebook have designed technology that even they can't break into, because a system that is unbreakable even to its creator is more secure than the alternatives. Steven Bellovin, a computer-science professor at Columbia University, says designing cryptography is difficult enough without worrying about granting special access to certain people or organizations. Trying to accommodate law enforcement will add a layer of complexity that others—read hackers—could exploit. "Maybe it'll make it easier to track ISIS, maybe it won't," Bellovin says. "It might let in the Chinese, Russians and Iranians."
Even if Washington did require U.S. technology companies to build so-called backdoors into their encrypted products, it wouldn’t eliminate the ability of terrorists—or anyone else—to use encrypted communications to talk privately. Silicon Valley doesn't hold a monopoly on this technology, says Kevin Bankston, director of New America's Open Technology Institute. "Any software engineer anywhere in the world can create and distribute their own encryption apps," he says. "That's exactly what the jihadists do. They don't need WhatsApp." Bankston says ISIS has issued security guidelines to followers that name five applications—all developed outside the U.S. or based on open-source code.
There is more common ground between tech companies and law enforcement when it comes to trying to keep extremist groups from using social media to recruit new members or advertise their exploits. For years, there has been a steady back-and-forth among tech companies, their users, and government officials over harmful content such as bullying, racism, and child pornography. Major social networks have policies against abusive speech, and regularly take down content that violates them. YouTube, for example, removed 14 million videos that violated its policies in 2014 alone. "We share the government’s goal of keeping terrorist content off our site," says Jodi Seth, a spokeswoman for Facebook, adding that the company has "zero tolerance for terrorists, terror propaganda, or the praising of terror activity."
Counting on private companies to decide what kinds of speech are acceptable is problematic, says Matthew Green, assistant professor at the Johns Hopkins Security Institute. There's never a clear line walling off dangerous communications from everything else, and even if there were, the sheer amount of content created by Facebook or Twitter users every day is too much to monitor. Automated techniques such as natural-language processing software could help reach Internet scale, but they would end up filtering out a lot of legitimate content. Besides, ISIS members can start new Twitter accounts once their old ones have been shut down. "It's kind of like playing whack-a-mole," Green says.
—With Sarah Frier and Jack Clark