Call It Superbig Blue
Since 1986, the story of IBM has been one of contracting earnings, massive write-offs, and disappointing turnaround plans. But there's another story that has been overlooked. During the same period, IBM has scored a success that's potentially crucial to its long-term survival: Quietly, it has become a major force in supercomputing. Even though it still doesn't sell what market leader Cray Research Inc. would call a supercomputer, Big Blue has assembled the broadest array of technical computing gear around (table), ranging from speedy workstations to $20 million souped-up mainframes that compete favorably with all but the fastest Crays. And IBM is involved in some of the most important supercomputing projects.
IBM is seeking more than bragging rights--or even just a bigger chunk of the $2.5 billion supercomputing business. While it fully intends to boost its share of that market, IBM's real goal is to master the technologies that will soon be crucial to preserving the $50 billion commercial mainframe business, which it still dominates with a 65% share.
STRONG BASE. Already, the lines are blurring between commercial and scientific computing. Wall Street is buying supers from Cray and Intel Corp. to model the behavior of complex financial markets. Insurers are considering so-called massively parallel supercomputers--consisting of hundreds or thousands of processors--to manage enormous data bases. A strong base in supercomputing helps IBM identify such cutting-edge commercial customers. More important, it alerts the company to the new capabilities that, in a few years, mainframe customers will demand.
Right now, by most measures, IBM holds a strong hand in high-performance computing. Its largest mainframes, for instance, do certain kinds of arithmetic at record speed. IBM also sells optional vector processors for mainframes that perform arithmetic on an entire string of numbers at once--a way to tackle jobs such as simulating an automobile crash. IBM has installed about 1,000 vector units on 550 mainframes, giving them the speed of true supercomputers. Smaby Group Inc., a market researcher in Minneapolis, reckons that last year, sales of vector processors added $480 million in revenues.
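The vector idea the article describes — a single instruction applied to an entire string of numbers, rather than one pair at a time — can be sketched loosely in Python. This is a conceptual illustration only, not a model of IBM's actual vector hardware; the function names are invented for the example:

```python
# Conceptual sketch: scalar vs. vector arithmetic.
# A vector processor issues one "add these whole arrays" operation
# instead of looping element by element, as a scalar unit must.

def scalar_sum(a, b):
    # Scalar style: one add per instruction, element by element.
    result = []
    for i in range(len(a)):
        result.append(a[i] + b[i])
    return result

def vector_sum(a, b):
    # Vector style: conceptually, a single operation over both
    # strings of numbers at once.
    return [x + y for x, y in zip(a, b)]

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
assert scalar_sum(a, b) == vector_sum(a, b) == [11.0, 22.0, 33.0, 44.0]
```

Jobs like crash simulation repeat the same arithmetic over millions of such elements, which is why a hardware unit that streams through whole arrays pays off.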
But Big Blue realizes that even a souped-up mainframe isn't a true supercomputer. With circuits and hardware designed more for speeding through retail transactions than tracking global weather patterns, mainframes can't do the most challenging supercomputing jobs. That's why in 1987 IBM began investing in Supercomputing Systems Inc. SSI was founded by designer Steve Chen, who left Cray Research when it decided not to fund his plan to create a world-beating vector processor with as many as 64 individual processors working in parallel. After four years and an estimated $100 million in funding from IBM, Chen has yet to fully describe the machine. SSI says test units will be ready next year, but the actual production schedule for the machines--costing an estimated $70 million each--has not been set.
To cover its bets, meanwhile, IBM is pushing hard in massively parallel processing, or MPP. This is one of the most exciting yet most difficult technologies in supercomputing today. It has the potential, experts say, to do away with both traditional supercomputers, such as those from Cray Research, and traditional mainframes, too. If IBM fails in MPP, therefore, it may eventually fail in its bread-and-butter business: commercial data processing.
In essence, MPP calls for ganging together dozens, hundreds, or even thousands of cheap, powerful microprocessors to attack large computing problems en masse. Choreographed with the right software, 100 small processors can often execute large programs in a fraction of the time it would take even the fastest traditional supercomputer to run them in serial fashion, one instruction at a time. And they can do the work for a tenth or even a twentieth of the cost of mainframes and traditional supers. Indeed, Cray Research has begun a crash effort to build an MPP add-on for its supers.
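The divide-and-conquer pattern behind MPP — split one big job across many cheap processors, then combine the partial answers — can be sketched with Python's standard multiprocessing library. This is a hypothetical illustration of the general technique, not IBM's design; real MPP machines coordinate hundreds or thousands of processors over a dedicated interconnect:

```python
# Sketch of the MPP idea: one large problem, many small processors.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles only its own slice of the data.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Divide the problem into roughly equal pieces, farm them out
    # to a pool of workers, then combine the partial results.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    data = list(range(100_000))
    assert parallel_sum_of_squares(data) == sum(x * x for x in data)
```

The hard part in practice — and the reason the article calls MPP difficult — is the choreography: writing software that divides real workloads evenly and keeps all the processors busy, not the arithmetic itself.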
CAN-DO MOOD. Early this year, responding to demands from some key customers, IBM decided to drastically speed up its MPP development. In February, it set up the Highly Parallel Supercomputing Systems Laboratory in Kingston, N.Y., and announced that it would set first delivery dates by yearend. The high-profile lab draws funding and know-how from Big Blue's mainframe, workstation, research, and government divisions--an example of the can-do climate that Chairman John F. Akers has tried to foster since reorganizing IBM last December. The MPP systems IBM is proposing, based on the same high-speed microprocessors IBM uses in its RS/6000 workstation line, would by the late 1990s reach theoretical peak speeds of more than 1 trillion operations per second.
Already, IBM has won a key endorsement for its massively parallel computing effort. Argonne National Laboratory, a U.S. government research lab in Argonne, Ill., has tapped IBM to supply a giant MPP system as the heart of a proposed supercomputing research center. The center is planned for use by both government and industry scientists tackling such huge problems as modeling what goes on in the core of a nuclear reactor.
Together, IBM and Argonne are asking the Energy Dept. for $120 million to fund the project, about half of which would be spent on MPP hardware. Current plans call for IBM to install its first MPP system at Argonne in 1993 or 1994. A year later, IBM would install a production-level machine slated to perform up to 400 billion instructions per second while connected to an array of disk drives storing 6 trillion characters of information. A year after that, the IBM hardware would jump into the 1 trillion instructions-per-second range, Argonne officials say.
Impressive as the new MPP effort is, IBM's high-performance computing initiative is not without problems. For one thing, when it invested in Chen, IBM expected to see production machines in the first half of the '90s, a deadline that will almost certainly slip. And IBM's recent reorganization could hamper marketing efforts. At the same time IBM set up the new MPP lab, it cut back its supercomputing marketing and sales group. The sales team that had been dedicated to selling mainframe vector units was effectively dissolved. That leaves IBM's main sales force--albeit with the assistance of market support specialists--to sell the complex gear against Cray, Thinking Machines, Intel, and others.
HOT RIVALRY. The changes, say IBM executives, were part of the corporation's restructuring and cost-cutting efforts. But the moves may be short-sighted, says George Lindamood, a senior analyst at market researcher Gartner Group Inc. If IBM can't make the key sales it needs to keep up with the leaders in supercomputing, it could soon start losing commercial customers, too. That's because MPP technology seems especially promising in commercial data processing work, says Lindamood. In a few years, he figures, customers will start buying today's scientific MPP machines to run huge transaction networks--for banks, say--that need to call on massive data bases. Such work is what most of IBM's mainframes now spend most of their time doing.
The competition is heating up. Suppliers--including AT&T's NCR unit, nCube, startup Kendall Square Research, and software maker Oracle--are all pushing transaction-processing on MPP machines as an alternative to mainframes. Not only are the new machines cheaper, but they can also tackle jobs that weren't practical before: Wal-Mart Stores Inc. uses one to sift through an inventory and sales-trend data base totaling 1.8 trillion bytes of data. Other customers interested in the benefits of parallelism range from American Airlines Inc.'s SABRE unit to Hartford Life Insurance Co.
If there's one advantage IBM holds over those with a head start in MPP, it's deep pockets. Says analyst Gary Smaby: "The [MPP] sands haven't really settled. IBM has sufficient resources to fund multiple efforts. They're in an enviable position." IBM's challenge now is to turn position into profits.