Inside Operation InVersion, the Code Freeze That Saved LinkedIn

A risky project made LinkedIn a bastion of computing power

A LinkedIn banner hangs on the front of the New York Stock Exchange in 2011
Photograph by Michael Nagle/Bloomberg

LinkedIn’s May 2011 initial public offering was a blowout. Its share price more than doubled in the first day of trading, giving the networking site a nearly $9 billion valuation. Behind the scenes, though, the company’s computing systems were a total mess. In the months that followed, hundreds of engineers struggled to hold the site together with the digital equivalent of chewing gum and duct tape.

By November 2011, Kevin Scott, LinkedIn’s top engineer, had had enough. The system was taxed as LinkedIn attracted more users, and engineers were burnt out. To fix the problems, Scott, who’d arrived from Google that February, launched Operation InVersion. He froze development on new features so engineers could overhaul the computing architecture. That may not sound like a big deal, but in the frenetic world of the social Web, it’s sacrilege. “You go public, have all the world looking at you, and then we tell management we’re not going to deliver anything new while all of engineering works on this project for the next two months,” Scott says. “It was a scary thing.”

Scary as it was, the move paid off. LinkedIn now develops some of the most advanced coding tools for its engineers, giving them the ability to add new features on the fly, such as a sleek mobile version of the site. This engineering agility is key to LinkedIn’s $18 billion valuation. So far this year, its share price is up more than 50 percent.

LinkedIn’s engineers constantly run tests to see what language and graphic choices keep users engaged and what tweaks make mobile pages load faster. Changes—whether a fresh look for a menu or an entirely new service—are built into the live site almost immediately. LinkedIn updates three times a day, while such rivals as Facebook and Google typically update once a day or every few days.

When it was founded in 2003, LinkedIn used Oracle software for its central database, instead of the cheaper, more flexible open-source databases that now dominate Web computing. Under Scott, LinkedIn has embraced modern Web software. It has built a data storage system dubbed Voldemort and a messaging system called Kafka. Both have been open-sourced and picked up by other companies.

Key to Scott’s InVersion project are the artificial intelligence code checkers that scour software for any errors introduced by engineers. Companies such as Google and Facebook have their own algorithmic tools but also dispatch teams of people to oversee the process of adding new code to the companies’ live websites. At LinkedIn, it’s almost entirely automated. “Humans have largely been removed from the process,” Scott says. “Humans slow you down.”

This obsession with speed has earned LinkedIn street cred—and some ribbing. Mike Abbott, a former engineer at Twitter and now a general partner at venture capital firm Kleiner Perkins Caufield & Byers, likens the coding advances at LinkedIn to “doing open heart surgery on a runner while he’s sprinting a marathon.” But he also says that updating multiple times a day is “silly.” And plenty of people have noted that their LinkedIn homepage has become crowded with all manner of services and feeds.

Scott believes all this work has primed LinkedIn for what he sees as its next stage: mining users’ economic and job data to spot trends early and advise people on advancing their careers. “We’ll be able to see things like where welders are migrating and what skills the successful ones are learning,” Scott says. “It is great to have perfect information running in both directions.”
