Microchips: The Transistor Was the First Step
Walter Isaacson
1947: Researchers at Bell Labs invent the transistor, the critical first step toward the development of the microchip.
The Digital Revolution was spawned in Murray Hill, N.J., shortly after lunchtime on Tuesday, Dec. 16, 1947. Scientists at Bell Labs succeeded that day in putting together a tiny contraption that they had concocted from strips of gold foil, a chip of semiconducting material, and a bent paper clip. When wiggled just right, it could amplify an electric current and switch it on and off. The transistor, as the device was soon named, and the ability to etch millions of them onto microchips became to the Digital Age what the steam engine had been to the Industrial Revolution.
Three colleagues would go down in history as the inventors of the transistor: a deft experimentalist named Walter Brattain, a quantum theorist named John Bardeen, and a solid-state physics expert named William Shockley. But there was another player in this drama that was as important as any individual: Bell Labs, where these men worked. The transistor required a team that threw together theorists who had an intuitive feel for quantum phenomena with materials scientists who were adroit at baking impurities into silicon. In Bell Labs’ long corridors, designed to encourage serendipitous encounters, an information theorist named Claude Shannon rode a unicycle and juggled balls as he shared ideas with ingenious metallurgists, engineers, physicists, and even a few AT&T pole climbers with grease under their fingernails.
The need to combine theorists with engineers was especially acute in a field known as solid-state physics, the study of how electrons flow through solid materials. In the 1940s, Bell Labs engineers were tinkering with materials such as silicon to juice them into performing electronic tricks. At the same time, Bell theorists were wrestling with the mind-bending realm of quantum mechanics, which is based on a model of atomic structure in which electrons orbit a nucleus only at specific energy levels. Electrons could make a quantum leap from one level to the next but could never be in between. The number of electrons in an element’s outermost level helped determine how well it conducted electricity.
Some elements, such as copper, are good conductors. Others, such as sulfur, resist electrical current. And then there are those in between—the semiconductors, like silicon and germanium. What makes semiconductors useful is that they can be easily manipulated to become better conductors. For example, if you contaminate silicon with a tiny amount of arsenic or boron, its electrons become freer to move.
The mission of the solid-state team at Bell Labs was to find a solid, simple, and sturdy device that could replace the unwieldy vacuum tubes that still powered electronic computers. Its leader was Shockley, who had the ability to visualize the movement of electrons the way a choreographer can visualize a dance. He developed a theory that if an electrical field were placed right next to a slab of semiconducting material, it would pull some electrons to the surface and permit a surge of current through the slab. Thus a semiconductor might be used as an amplifier or an on-off switch, just like a vacuum tube.
The theorist Bardeen and experimentalist Brattain shared a workspace, like a librettist and a composer sharing a piano bench, so they could perform a call-and-response all day about how to manipulate silicon and germanium to make such a device. Their work culminated in late 1947, when they devised ways to deal with a shield that formed in what was called the “surface state” of the semiconductor. They realized that the best way to overcome this problem was to jab two metal points into the silicon or germanium really close together. Bardeen calculated that the points should be less than two-thousandths of an inch apart. That was a challenge, even for Brattain. But he came up with a clever method: He glued a piece of gold foil onto a small plastic wedge shaped like an arrowhead, then used a razor blade to cut a slit in the foil at the tip, forming two gold contact points close together.
When Brattain and Bardeen tried it on the afternoon of Dec. 16, something amazing happened: The contraption worked. “I found if I wiggled it just right,” Brattain recalled, “I had an amplifier with the order of magnitude of 100 amplification.” On his way home that evening, the voluble Brattain told the others in his carpool he had just done “the most important experiment that I’d ever do in my life.” When the less talkative Bardeen got home, he told his wife about something that happened at the office. It was only a sentence. As she was peeling carrots at the kitchen sink, he mumbled quietly, “We discovered something important today.”
There was a limit to how useful the transistor could be, however. Any powerful logical circuit would require stringing together millions of them. In a paper published in the fall of 1957 to celebrate the 10th anniversary of the transistor, a Bell Labs executive dubbed this problem “the tyranny of numbers.” As the components in a circuit increased, the number of connections increased much faster. If a system had 10,000 components, that might require 100,000 or more little wire links on the circuit boards, some soldered by hand. This was not a recipe for reliability.
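The arithmetic behind the “tyranny of numbers” can be sketched in a few lines. This is a minimal illustration, not anything from the article itself: the figure of roughly ten soldered links per component is simply the ratio implied by the article’s own example (10,000 components needing 100,000 or more wire links), and the pairwise count shows why the problem worsens as circuits grow denser.

```python
def wire_links(components, avg_leads_per_component=10):
    """Rough count of soldered wire links in a discrete circuit.

    avg_leads_per_component=10 is an illustrative assumption taken
    from the ratio in the article's example (10,000 components ->
    ~100,000 links), not a stated engineering figure.
    """
    return components * avg_leads_per_component


def worst_case_interconnects(components):
    """Upper bound if every component could need a link to every
    other one: n * (n - 1) / 2 possible pairwise connections."""
    return components * (components - 1) // 2


# The article's example: 10,000 components -> ~100,000 hand-wired links.
print(wire_links(10_000))              # 100000
# The combinatorial ceiling grows far faster than the component count:
print(worst_case_interconnects(100))   # 4950
print(worst_case_interconnects(200))   # 19900 (2x components, ~4x pairs)
```

Even with the modest linear estimate, every link was a potential hand-soldering defect, which is why reliability collapsed as circuits grew.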
Two important events happened just as the paper was published. The company that Shockley founded to make transistors disintegrated because of his increasingly erratic personality, and some of his best engineers—including Robert Noyce and Gordon Moore—split off to start a company called Fairchild Semiconductor (they later formed Intel). Then on Oct. 4, the Russians launched Sputnik, a satellite that orbited the earth, and set off a space race with the U.S. Because computers had to be made small enough to fit into a rocket’s nose cone, it was imperative to find ways to cram thousands of transistors into tiny devices.
The stage was thus set for the innovation that would make transistors not merely useful but historically transformative: the integrated circuit, also known as the microchip. It happened almost simultaneously in two different places.
The first was a Dallas-based oil exploration company that had changed its mission and renamed itself Texas Instruments. A few months after Sputnik, it hired an engineer named Jack Kilby, who was working at a company that made hearing aids and who had taken a course at Bell Labs on how to use transistors. The policy at Texas Instruments was for everyone to take off the same two weeks in July, but Kilby had accrued no vacation time in 1958, so he found himself alone in the lab. This gave him time to think about what could be done with silicon other than fabricate it into transistors.
He knew that if he created a bit of silicon without any impurities, it would act as a resistor. There was also a way, he realized, to make a piece of silicon act as a capacitor, meaning it could store a small electrical charge. You could make any electronic component out of differently treated silicon—in fact, you could put them all on one piece of silicon.
Meanwhile, Noyce and his Fairchild colleagues discovered that their transistors were not working very well. A tiny piece of dust, or even exposure to some gases, could cause them to fizzle.
A Fairchild engineer named Jean Hoerni came up with an ingenious fix. On the surface of a silicon transistor he would place a thin layer of silicon oxide, like icing atop a cake, that would protect the silicon below. He soon realized that tiny windows could be engraved in this protective layer to allow impurities to be diffused at precise spots. That way, different types of transistors and other components could be etched on a single chip of silicon. Instead of cumbersome copper wires connecting the components, Noyce proposed that tiny copper lines be laid down onto the silicon to integrate complex circuits on the chip. He and his team had come up with the concept of a microchip independently of, and a few months later than, Kilby.
There is an inspiring lesson in how Kilby and Noyce personally handled the question of who invented the microchip. Both came from tightknit, small communities in the Midwest and were well grounded. Even though their companies spent years on legal battles before agreeing to cross-license patents, Kilby and Noyce were generous about sharing credit. When Kilby was told that he had won the Nobel Prize in Physics in 2000, 10 years after Noyce had died, among the first things he did was praise Noyce. “I’m sorry he’s not still alive,” he told reporters. “If he were, I suspect we’d share this prize.” When a Swedish physicist introduced him at the ceremony by saying that his invention had launched the Digital Revolution, Kilby displayed his aw-shucks humility. “When I hear that kind of thing,” he responded, “it reminds me of what the beaver told the rabbit as they stood at the base of Hoover Dam: ‘No, I didn’t build it myself, but it’s based on an idea of mine.’ ”