Computing took a leap forward when chipmakers started putting more than one core—or central brain—on a single chip. It was a way to make machines work harder even as they consumed less power. Just wait until a single chip can sport 80 cores.
The wait won't be long. Chipmaking giant Intel (INTC) on Feb. 11 said it has successfully produced just such a chip, the size of a fingernail, capable of processing a mind-boggling 1 trillion calculations a second. The chip, which Intel claims is the fastest ever made, could start being used commercially "in five years, if not sooner," Intel Chief Technology Officer Justin Rattner says.
Rattner has reason to crow. The massive processing power each chip would provide will dramatically change the way consumers and businesses work and play. Financial analysis that takes days to perform in back offices could be done in seconds at a trader's terminal on Wall Street. Real-time physics calculations could let consumers create on-the-fly games that make even the cutting-edge motion-control techniques in Nintendo's Wii game console seem like child's play.
The test chip also demonstrates chipmakers' ability to keep dramatically increasing the number of processor cores placed on a tiny sliver of silicon. Just 10 years ago, a cluster of supercomputers capable of performing the same number of calculations took up more than 2,000 square feet and consumed a half-megawatt of electricity.
The new Intel chip, which does not yet use the standard x86 architecture common to most PCs and servers, consumes an average of 62 watts of power, less than some chips on the market today. It also takes the novel approach of stacking memory in three dimensions directly on top of the chip, an architecture that would transfer information at lightning speed.
Other chipmakers also are hard at work pursuing ways to make chips work harder while sipping power. Rivals, including AMD (AMD) and IBM (IBM), have been pursuing so-called parallel computing, which breaks up huge tasks into pieces, enabling them to be managed by different parts of a chip. With one such chip, multiple streams of high-definition video could zip around the home, while a beleaguered business could assemble Sarbanes-Oxley paperwork in minutes.
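The idea behind parallel computing can be sketched in a few lines of Python: a big job is split into chunks that separate workers can, in principle, handle at once. (This is an illustrative sketch, not code from AMD, IBM, or Intel; in standard Python, the interpreter's global lock limits how much CPU-bound threads actually overlap, which itself hints at why software lags the hardware.)

```python
# Illustrative sketch of parallel decomposition: split one big task
# (summing a large list) into chunks handled by a pool of workers.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one independent piece of the task.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Carve the data into roughly equal chunks, one per worker.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combine the partial results into the final answer.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1_000_000))))  # same answer as plain sum()
```

The pattern, not the language, is the point: the more independent pieces a job can be carved into, the more of an 80-core chip it can keep busy.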
AMD in 2006 purchased graphics chipmaker ATI. The company says a new platform, code-named "Fusion," combines a central processor with graphics processors, either lowering the cost compared with buying such items separately or dramatically increasing floating-point performance.
Analysts say the Intel announcement signals chipmakers are on the right track. "This is putting the proof-point out there that their road maps are on the right track," says Jim McGregor, principal analyst at researcher In-Stat.
Software Lags Behind
The main stumbling block to widespread acceptance of such chips, however, is the difficulty in writing software to take advantage of multiple cores. Even as Intel and AMD race to deliver quad-core chips in the next few months, software developers continue to struggle to write threaded applications to take advantage of just two cores. Intel's Rattner suggests the chipmaker made the announcement of the new chip early to get software developers thinking about massively multicore chips. "If we just go two, four, eight cores, we'll never get there [with software]," he says.
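The difficulty Rattner alludes to shows up even in toy threaded code: the moment two threads touch the same data, the programmer must add explicit synchronization or risk silently losing updates. A minimal sketch (illustrative only, not any chipmaker's code):

```python
# Illustrative sketch of why threaded software is hard: concurrent
# updates to shared state need a lock, or increments can be lost.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # remove this lock and the final count can come up short
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000, guaranteed only because of the lock
```

Keeping four threads correct is already fiddly; keeping 80 cores both correct and busy is the problem developers have barely begun to solve.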
Even so, the jump in processing power could tax the resources of even Microsoft (MSFT), Apple (AAPL), and other operating system providers, analysts say. Current operating systems are designed for more linear tasks and are less efficient at allocating resources and power across many cores.
Still, the new chips would require software makers to work much more closely with the hardware makers, which could give Intel or another first mover a massive advantage. And as much as anything, the announcement demonstrates that the processor race has plenty of room to run.