On the morning of May 12, computer systems were held for ransom as a cyberattack on companies and critical infrastructure began to pinball around the world, rolling through Spain’s Telefónica SA, French carmaker Renault SA, and Russia’s interior ministry and crippling U.K. hospitals. The rogue code reached more than 300,000 computers running Microsoft Windows at thousands of companies in 150 countries. It all sounds grim. In fact, we’ve been lucky.
The WannaCry ransomware, the code in question, was fairly shoddy work. It achieves the basics: spreading through the computers on a network, encrypting everything on the machines, and demanding payment to return the contents to their original form. But it can’t evade investigators the way sophisticated malware can; such code is designed to detect and sidestep the systems researchers use to test for dangerous code. WannaCry is also more obvious in going about its work of destroying backup contents on a computer, so it’s easier to stop, according to Allan Liska, a ransomware expert at cybersecurity firm Recorded Future. Another stroke of luck: Within hours, a security researcher in the U.K. discovered an unregistered web address in the malware. He was able to register the domain, disabling a key mechanism the code used to spread to more computers. The amateurishness also shows in WannaCry’s bottom line. The hackers demanded payment in the digital currency bitcoin, but the wallets they set up for the ransoms show they’d gathered just a bit more than $71,000 as of May 16.
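The kill switch the researcher tripped can be sketched as follows. This is an illustrative reconstruction, not WannaCry’s actual code: the domain name, the `should_proceed` function, and the injectable resolver are all hypothetical, standing in for the malware’s hard-coded DNS lookup.

```python
import socket

# Placeholder domain, standing in for the long, unregistered address
# hard-coded into WannaCry.
KILL_SWITCH_DOMAIN = "example-kill-switch.invalid"

def should_proceed(domain, resolve=socket.gethostbyname):
    """Return True only if the kill-switch domain does NOT resolve.

    WannaCry-style logic: an unregistered domain fails to resolve, so
    the worm keeps spreading. Once someone registers the domain, the
    lookup succeeds and the code stands down.
    """
    try:
        resolve(domain)
    except OSError:          # DNS failure (socket.gaierror subclasses OSError)
        return True          # domain unregistered: malware proceeds
    return False             # domain resolves: kill switch engaged, stop

# Before registration, lookups failed and the worm kept going:
def _nxdomain(domain):
    raise OSError("NXDOMAIN")

assert should_proceed(KILL_SWITCH_DOMAIN, resolve=_nxdomain) is True

# After the researcher registered the domain, resolution succeeded:
assert should_proceed(KILL_SWITCH_DOMAIN, resolve=lambda d: "203.0.113.7") is False
```

The resolver is passed in as a parameter here only so the two scenarios can be demonstrated deterministically; the real malware simply issued a live DNS query.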
We can’t depend on luck next time. These assaults are becoming the archetypal crime of the 21st century, hurtling toward us in full view. This one was heralded more than a month ago. That’s when a group calling itself Shadow Brokers released a hacking toolset targeting Windows operating systems, purportedly stolen from the National Security Agency, an intelligence branch of the U.S. Department of Defense. Researchers have been warning for weeks that criminals would begin using the toolset on vulnerable systems.
What can companies and institutions do to step out of the way of these attacks? In theory it’s simple, and security experts have repeated the same phrase for decades: “Just patch.” That is, install the updated code that fixes the bugs giving hackers entry. Microsoft Corp. released a “critical” patch in March, fixing the vulnerabilities that the Shadow Brokers tools exploited.
Consumers get an instant fix when their machines are auto-updated by Microsoft, Apple Inc., and other technology makers. But it’s not that simple for budget-strapped institutions. The National Health Service in the U.K. is a case in point. Many of its computers work on the Windows XP operating system—so old that it’s no longer supported by the software maker, which hasn’t issued patches for it since 2014. The NHS did pay Microsoft to continue issuing special patches until 2015, but in the midst of an austerity move, it ended the £5.5 million ($7.1 million) support deal. By the time Microsoft responded to the attacks and issued an emergency patch, it was too late. Patients were stranded, emergency services hobbled.
Corporations and government agencies sometimes have good reason for not patching quickly. Complex computer networks, fragile custom code, and sheer bureaucratic inertia mean organizations spend a long time testing patches to ensure they don’t break anything and cause crippling network downtime. And computers do break from patches: Some users criticized Microsoft last year when the upgrade to Windows 10 “bricked” their PCs, turning the machines into useless lumps. The financial-services industry—already among the biggest investors in cybersecurity—is so thorough and cautious in its process that it takes an average of 176 days, longer than almost any other industry, from identifying a vulnerability to fixing it, according to a study by NopSec Labs.
In contrast, health-care organizations average 97 days, according to NopSec. Hospitals and doctors’ offices have the added challenge of balancing security with the demands of providing medical services, where patients don’t just stop coming in with problems on the weekends so critical systems can go offline. Most of the time, the plodding pace of improving network security is punished only mildly. In the case of WannaCry and the NHS, the result was a whipping—and it may prod reform.
“Events like this raise the public’s awareness, and the more that happens the more demand there is for some type of government reaction to it,” says French Caldwell, a former Gartner Inc. analyst who’s chief evangelist at MetricStream, which makes risk analysis and compliance software. “People have to die or lose lots of money before government steps in to regulate.”
The WannaCry chaos has reignited a fractious debate in Silicon Valley over who ultimately bears responsibility. Is it the customers’ fault for dilly-dallying and failing to apply a patch that Microsoft labeled “critical”? Or are tech vendors to blame for creating a system in which ancient software such as Windows XP and Server 2003 is kept on life support indefinitely for users willing to pay extra, an accommodation that simply keeps insecure software in circulation longer? “One could argue that given the severity of the vulnerability, they should have provided patches for XP and Server 2003 to everyone, not just the organizations that pay a king’s ransom for the privilege of having Microsoft support their legacy software,” says Ryan Kalember, senior vice president for cybersecurity strategy at Proofpoint, a Sunnyvale, Calif.-based email security company whose technology scans 1 billion emails a day. “To be clear, Microsoft would prefer that companies upgrade and realize the full benefits of the latest version rather than choose custom support. Security experts agree that the best protection is to be on a modern, up-to-date system that incorporates the latest defense-in-depth innovations,” said a Microsoft spokesperson in an emailed statement.
A blog post by Microsoft President Brad Smith has pointed the finger at the U.S. government, calling the attack a “wake-up call” for military and intelligence agencies to stop stockpiling tools to exploit the digital vulnerabilities of potential cyber rivals. The criticism was echoed by Russian President Vladimir Putin, who blamed the NSA and its toolkit for the WannaCry assaults, which hit his country particularly hard. Meanwhile, researchers scrambling to figure out who’s behind the ransomware have found similarities between WannaCry and code used in attacks pinned on North Korea.
The identity of the perpetrators would matter less if everyone were better prepared. Patching is essential, but it’s not the only answer, says Nathaniel Gleicher, a former Department of Justice prosecutor and director of cybersecurity policy for the White House under President Obama. He’s now head of cybersecurity strategy at Illumio, which helps companies stop cyberthreats from spreading laterally inside networks, data centers, and the cloud. Companies that know they’re vulnerable but don’t want to take key systems offline to patch can reduce the potential impact by closing the computer ports used by malware; in the case of the Shadow Brokers tools and WannaCry, that’s port 445. Defenders have to think about cybersecurity more like physical security. “You leave this door open everywhere even though you’re not using it. At some point, someone’s going to walk through it,” Gleicher says.
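A defender who has closed port 445 at the firewall might want to verify that the door is actually shut. The quick check below is a sketch under assumptions: the `port_open` helper is hypothetical, and the demo uses a local listener on an ephemeral port to stand in for an exposed SMB service, since the real lockdown happens in firewall rules, not in Python.

```python
import socket

def port_open(host, port=445, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-checking demo: a local listener stands in for an SMB service
# on port 445, then "closes the port" by shutting down.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # OS picks an ephemeral port
srv.listen(1)
host, port = srv.getsockname()
assert port_open(host, port) is True   # service reachable
srv.close()                            # port closed
assert port_open(host, port) is False  # no longer reachable
```

In practice a scan like this would be run from outside the network perimeter against each machine that previously exposed port 445, confirming the firewall rule took effect.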
Andrew Whinston, a University of Texas at Austin professor of management science and information systems, has been working to show what might happen if governments did intervene more actively to assess the security status of companies and give them a public rating. His research, which uses data on spam and phishing emails from thousands of business networks to give companies a security rating, suggests that government policy would spur companies to spend more on security, driven by consumer awareness. “Let’s say you have to go to the hospital, and maybe besides checking the doctors, you check the security level,” Whinston says. “If, according to the ranking from the government, this is a place that’s not all that safe, you’d say, ‘Well, I’ll drive a couple of miles more and go to a safer place.’ ”
There’s no easy way to stop companies from clinging to flawed, obsolete software. But the stakes are only getting higher with the emergence of internet-connected devices deployed in previously unimaginable places, from power meters to pacemakers, some of which have no built-in security controls, says Frank Zou, founder of Sunnyvale-based startup HoloNet Security. These become new portals for infection. Says Zou: “There’s no reason not to expect future attacks to go beyond health-care organizations to other mission-critical national infrastructure like nuclear energy, water treatment plants, air traffic control, or the nation’s financial or communication systems.” In fact, an online statement purportedly posted by Shadow Brokers on May 16 promised a “Data Dump of the Month” service, with more advanced hacking tools for sale. We’ll need better luck—or better preparation—next time.
The bottom line: The WannaCry computer hack reached 300,000 machines in 150 countries. It could have been much, much worse.