Ransomware and the NSA
The effects of this month’s global ransomware attack seem to be fading, fortunately. But a crucial question the incident raised is only getting more urgent. When it comes to online security, the U.S. government’s priorities -- preventing terrorism and protecting cyberspace -- are in permanent tension. Is there a way to resolve it?
The National Security Agency routinely seeks out flaws in common software and builds tools, known as exploits, to take advantage of them. Doing so is an essential part of the agency’s mission of spying on terrorists and foreign adversaries, yet it comes with grave risks.
The latest attack -- still evolving -- is an example. Researchers say it takes advantage of a stolen NSA tool to exploit a flaw in some versions of Windows. Microsoft Corp. has suggested that the NSA knew of the flaw for some time, yet didn’t disclose it until the theft.
That may sound unnerving. Windows is ubiquitous, and governments are generally expected to respect online security, not undermine it. Microsoft is understandably unhappy. Worse, the initial attack crippled everything from banks to hospitals. It’s fair to say that lives were at risk.
So why keep such a harmful vulnerability secret? Simple: Exploiting it proved hugely effective in scooping up intelligence -- “like fishing with dynamite,” as one former NSA employee put it.
Deciding whether such intelligence is worth the risk is a fraught and secretive process. When a significant new flaw is found by a federal agency, it’s shared among experts from the intelligence, defense and cybersecurity bureaucracies (among others), who debate whether to disclose or exploit it, according to nine criteria. A review board then makes a final decision. In almost all cases involving a product made or used in the U.S. -- more than 90 percent, according to the NSA -- the flaws are disclosed.
Although it’s an imperfect process, a better way isn’t obvious. Simply disclosing all vulnerabilities, as some activists demand, would be nuts. Intelligence would dry up, investigations would be hobbled, and the Pentagon would lose crucial insight into foreign militaries, for starters. Other countries would continue exploiting such flaws to their advantage. To echo a Cold War locution, it would amount to unilateral disarmament.
Likewise, Microsoft has proposed a “digital Geneva Convention,” or a global agreement to disclose flaws. But the worst actors online -- thieves, gangsters, North Korea -- would hardly feel constrained by such a protocol, while the restraints put in place could well eliminate crucial methods of tracking them.
A better approach is to improve the current system. One problem is that the secrecy required makes it hard to know how well the stated criteria for retaining vulnerabilities are being followed. Reporting the total number found and disclosed each year might offer some reassurance to tech companies and the public, without divulging anything sensitive. Periodic audits of those that have been retained could help ensure that agencies aren’t hoarding dangerous stuff that’s no longer useful. Most important, though, is to better secure these flaws -- and the tools meant to exploit them -- while having a strategy to mitigate the risks if they’re once again leaked.
Failing that, the public may quickly lose confidence in this process. And that may be the biggest risk of all.
--Editors: Timothy Lavin, Michael Newman.
To contact the senior editor responsible for Bloomberg View’s editorials: David Shipley at email@example.com .