One immutable law rules the ever-changing computer business: The machine you buy tomorrow will run a lot faster than the one you own now, thanks to relentless advances in microprocessors. When Intel Corp. invented the microprocessor 23 years ago, it used 2,300 transistors to run a calculator. Today's versions--the "brains" for every type of computer and most other electronic systems--contain millions of transistors in a numbing maze of circuit lines. They are among the most complex products ever built. Every few years, though, the progress gets harder to sustain.
In fact, the next generation of circuits will be so complicated that some experts think it might be the last to make the usual quantum leap in performance. Intel's next-generation chip, due next year, will pack 6 million transistors--double the number on its latest Pentium models. Digital Equipment Corp.'s latest Alpha chip already has 9 million transistors. If this keeps up, developing chips will "make the Apollo moon shot look like a weekend romp," says Jeffrey T. Deutsch, a Silicon Valley chip-design consultant. And the circuitry could become so convoluted as to hobble computer speeds.
To avoid that, chip designers are eyeing some radically different concepts to simplify things. The most far-out is called very long instruction word, or VLIW. The VLIW approach would pack a whole stream of commands into extraordinarily long software instructions. This would shift many time-consuming overhead tasks--such as sorting program commands into the proper order and routing them to the right place--from the chip to software. The circuitry that handles these chores occupies up to a third of the silicon real estate on some chips. So eliminating it would make room for more circuits that crunch numbers. The result, says Joseph F. Fisher, a VLIW pioneer with Hewlett-Packard Co., would be microprocessors at least twice as fast as anything now on the horizon.
There's little doubt that VLIW is technically feasible. In the late 1980s, several companies built VLIW-based computers--notably Multiflow Computer Inc., co-founded by Fisher. And in 1990, Philips Semiconductors, the Silicon Valley unit of Philips Electronics, unveiled a VLIW chip. Because of difficulties in adapting software, the computers soon disappeared, and the Philips chip hasn't caught on.
SCRAMBLE FOR SURVIVAL? So, VLIW looked like a bust--until last June. That's when Intel and HP jointly announced a new departure in microprocessors. They will use VLIW to bridge the technology gap between Intel's X86 family and HP's svelte reduced instruction-set computing (RISC) chips. In essence, the two companies are betting their futures on VLIW. And they're not the only ones revisiting VLIW. Sun Microsystems Inc. has at least two teams, including one in Russia, digging into VLIW. And in September, IBM said future generations of its PowerPC chips, developed with Motorola Inc. and Apple Computer Inc., would tap VLIW methods as well. "Everybody's looking into it," says Glenn Henry, head of the personal-computer division at MIPS Technologies Inc., the RISC-chip arm of Silicon Graphics Inc.
The quest for VLIW could even become a scramble for survival--for computer companies as well as chipmakers. Worldwide, microprocessors are an $11.4 billion market. But Intel's 80%-plus share leaves barely $2 billion for the six runners-up: Motorola, IBM, HP, Sun, MIPS, and DEC, plus a few others in Japan, Europe, and the U.S. Given the cost of developing a new chip--up to $100 million for design engineering, plus $1 billion for a factory--most pundits expect the U.S. field to be whittled to three players by decade's end. Says former HP chip designer Linley Gwennap, now editor of the newsletter Microprocessor Report: "It's hard to see how they can all survive."
It won't be for lack of demand. Getting John and Jane Doe on the Information Superhighway will take an enormous boost in computing power. Couch potatoes may not budge without much friendlier screen interfaces and PCs that respond to verbal commands. Generating that much horsepower with current technology might require tens of millions of transistors--enough to choke the chips. The reason dates to 1971. To conserve then-precious memory space, Intel designed the first silicon brain so each software instruction could dictate multiple on-chip operations. The trade-off was that complex instruction-set computing (CISC) chips had to "decode" each instruction--to determine how many operations there were and sort them into the right sequence. As a result, processing one instruction often took several ticks of the chip's internal clock.
A decade later, HP helped pioneer the RISC approach. To speed up processing, RISC uses a software tool, called a compiler, to slice computer programs into simple instructions of uniform size. Since the chip has to contend with fewer overhead functions, it runs faster. Initially, RISC chips could polish off one instruction per clock tick, or cycle. Back then, clocks ticked about 10 million times a second (10 megahertz).
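The trade-off between the two philosophies can be sketched in a toy cycle-counter. This is purely illustrative: the instruction names, the decode table, and the one-cycle-per-operation costs are invented assumptions, not any real chip's behavior.

```python
# Toy illustration of the CISC-vs-RISC trade-off described above.
# Instruction names and cycle costs are invented for illustration.

# A CISC-style instruction bundles several on-chip operations, so the
# chip must first decode it into its constituent micro-operations.
CISC_DECODE_TABLE = {
    "ADD_MEM": ["load", "add", "store"],  # one instruction, three operations
}

def cisc_cycles(program):
    """Each CISC instruction costs one decode cycle plus one cycle
    per micro-operation it expands into."""
    total = 0
    for instr in program:
        micro_ops = CISC_DECODE_TABLE[instr]
        total += 1 + len(micro_ops)       # decode + execute
    return total

def risc_cycles(program):
    """RISC instructions are already simple and uniform: the compiler
    did the slicing ahead of time, so each takes one cycle."""
    return len(program)

# The same work, expressed both ways:
cisc_program = ["ADD_MEM"]               # one complex instruction
risc_program = ["load", "add", "store"]  # three simple instructions

print(cisc_cycles(cisc_program))  # 4 cycles: decode + 3 operations
print(risc_cycles(risc_program))  # 3 cycles: one per instruction
```

The point of the sketch is the decode overhead: the CISC version spends an extra cycle untangling its own instruction, which is exactly the work RISC pushed into the compiler.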
HIGHER PLANE. Now, even RISC chips are about to catch the Intel "disease." All the latest microprocessors chew on two or more instructions per cycle. Regulating this takes extra traffic-cop circuits, since processing one instruction may require the results from the other--just as sales must be calculated before figuring profits. But cramming multiple instructions into each clock cycle is the only way to compensate for another factor: It's getting tougher to increase performance by speeding up the clock rate. Over the past decade, the clock speed of Intel's chips has jumped tenfold, to 100 MHz in today's top Pentium model. It's dubious whether the physics of silicon crystals will permit another tenfold increase, however. That's why next year's crop of chips will handle at least four instructions per cycle. Six may be the limit, some experts say: Beyond that, chips could spend so much time directing traffic that they wouldn't run much faster.
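The traffic-cop check itself is simple to state, and the sales-before-profits example above can be written out directly. In this sketch, each instruction is just a (destination, sources) pair; the names are illustrative, not a real instruction set.

```python
# A minimal sketch of the "traffic-cop" check a multiple-issue chip
# must perform: two instructions can run in the same cycle only if the
# second doesn't read a value the first writes. Names are illustrative.

def depends_on(first, second):
    """True if `second` needs a result `first` has not yet produced."""
    writes, _ = first
    _, reads = second
    return writes in reads

# (destination, sources) pairs, mirroring the article's example:
calc_sales   = ("sales",   {"units", "price"})
calc_profits = ("profits", {"sales", "costs"})

# Profits reads "sales", so the two can't share a clock cycle:
print(depends_on(calc_sales, calc_profits))  # True
```

A real chip performs this comparison in hardware, for every pair of in-flight instructions, every cycle, which is why the checking circuitry grows so quickly as the issue width climbs.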
Enter VLIW. To move to a higher performance plane, VLIW would use an even smarter compiler that would presort eight or more instructions and clump them together into one extra-long "word," which could be fed straight to the chip's number-crunching circuits. Like a chess grandmaster who thinks a dozen moves ahead, the software compiler would keep a lookout for instructions that could be run in one fell swoop. The chip's pesky scheduling circuits could then be eliminated.
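What such a compiler does can be sketched in miniature. This greedy packer is a stand-in under stated assumptions, not any vendor's algorithm: instructions are (destination, sources) pairs, the word width is set to four slots for readability (real proposals packed eight or more), and no reordering is attempted.

```python
# A sketch of the VLIW idea: the compiler scans ahead and packs
# mutually independent instructions into one long "word," so the chip
# needs no on-chip scheduler. The word width and instruction names
# here are illustrative assumptions.

WORD_WIDTH = 4  # real proposals would pack eight or more slots

def pack_words(program):
    """Greedily bundle (dest, sources) instructions into long words,
    starting a new word whenever an instruction needs a result
    produced earlier in the current word, or the word is full."""
    words, current, produced = [], [], set()
    for dest, sources in program:
        if len(current) == WORD_WIDTH or produced & set(sources):
            words.append(current)
            current, produced = [], set()
        current.append((dest, sources))
        produced.add(dest)
    if current:
        words.append(current)
    return words

program = [
    ("a", ()),          # independent
    ("b", ()),          # independent
    ("c", ("a", "b")),  # needs a and b -> must start a new word
    ("d", ()),          # independent, rides along with c
]
words = pack_words(program)
print(len(words))               # 2 words
print([len(w) for w in words])  # [2, 2]
```

All of the dependence analysis happens here, at compile time; the chip just executes one full word per cycle. That is the trade the article describes: the "chess grandmaster" lives in software, and the silicon sheds its scheduling circuits.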
VLIW is no shoo-in, though. It could get short-circuited by the same software-incompatibility problem that blocked speedier RISC chips from grabbing more of Intel's market. This time, in fact, the software issue looms larger. To realize its potential, a VLIW chip must be fed instructions in a very precise order. Any change in silicon design could require all software to be "recompiled," or re-sorted. And not many customers will upgrade to a new computer if it means trashing their existing programs.
PARALLEL APPROACH. Intel and HP will pull out the stops to solve that problem--and they insist a possible fix is in sight. How it works is the 64-gigabit question. One speculation is that VLIW chips might include a patch of slower circuits for running old software. This could be necessary, say VLIW skeptics, because conventional programs, such as spreadsheets and database software, may not be predictable enough for instructions to be arranged in VLIW strings. In any case, no existing compiler could do the job.
Some experts believe a better solution is to gang up multiple microprocessors in one computer. This parallel-processing approach is already a major trend in scientific computing, which was the prime market for Multiflow's VLIW systems. The next step, according to consultant Deutsch and others, could be to slap multiple processors on the same slice of silicon.
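The parallel route splits one job across several processors rather than making a single processor smarter. As a rough sketch, here threads stand in for the multiple on-chip processors; the worker count and data are illustrative.

```python
# A sketch of the parallel-processing alternative: divide the work
# into one slice per processor and add up the partial results.
# Threads stand in for the extra processors in this illustration.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Split `data` into roughly equal chunks and sum them in parallel."""
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, chunks))

print(parallel_sum(list(range(1000))))  # 499500
```

The appeal is that each processor can stay simple: instead of one chip juggling eight instructions per cycle, several modest chips each handle their own slice.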
Still, no microprocessor hopeful can ignore the possibility that the Intel-HP partnership will hit a home run--a superspeedy VLIW chip that handles existing software. Such a breakthrough could cement Intel's near-monopoly for years. Eric Harslem, senior vice-president at Dell Computer Corp., gulps at the prospect. Are computer makers about to end up with a "single microprocessor source?" he wonders. Intel hopes so--and is warming up in the batter's box.