Back in the days when computers were as big as house trailers, Herbert A. Simon, professor of psychology and computer science at Carnegie Mellon University and a future Nobel laureate, had a flash of inspiration. He was strolling through a park in October, 1955, when it suddenly dawned on him how he could program a computer to reason on its own. He and fellow CMU computer scientist Allen Newell spent Christmas vacation writing a small program to prove the concept. Dubbed Logic Theorist, the software enabled a computer to work out independently the proofs for simple math theorems.
When school resumed in January, Simon walked into his mathematics class and dropped a blockbuster. "Over the Christmas holiday," he told his students, "Al Newell and I invented a thinking machine."
Thus began the modern search for the Holy Grail of computer science: an artificial intelligence that can rival the human brain--and even surpass it. The ultimate goal is nothing less than a machine with a mind of its own. This electronic wizard would discover new knowledge through bolts of digital insight, solve problems in leaps of silicon intuition, and learn from its mistakes. Forty years later, though, the common perception is that artificial intelligence (AI) has been a dismal disappointment. But anyone who believes that needs to take another look.
AI is alive and well in thousands of applications in commerce, industry, and the professions. More than 70% of America's top 500 companies are using it, according to a recent Commerce Dept. survey. The study pegged 1993 AI software sales at $900 million worldwide--and last year, the market probably topped $1 billion. If these numbers come as a shock, it's because AI often doesn't get the credit it's due. "Whenever something works, it ceases to be called AI," says David Shpilberg, who heads Ernst & Young's information-technology services. "It becomes some other discipline instead," such as database marketing or voice recognition.
Probably the best example is object-oriented programming, now indispensable to the software industry. Programs the size of IBM's OS/2 or Microsoft Corp.'s Windows 95 are too complex to develop in one fell swoop. So the job gets chopped up into modules, which are linked via smart connections that enable the different elements to exchange data and function cohesively. AI researchers boosted the object-oriented approach into the big leagues. "Without question, object-oriented programming is AI's biggest contribution so far," says Shpilberg.
Perhaps not for much longer, though. Today, AI's ultimate quest seems almost within grasp. Indeed, some researchers believe the question is no longer whether AI will give birth to inherently intelligent machines, but how soon. Optimists brazenly predict 10 or 15 years. Already, a few scientists are running experiments that once would have seemed science fiction. Researchers at Massachusetts Institute of Technology are building an android robot and educating it like a child instead of feeding it canned software. Others are even growing silicon brains.
SCAM SCANS. Companies aren't losing any sleep pondering when AI systems will become truly intelligent. They're too busy unleashing the smartest systems yet by harnessing new AI tools to the blazing speeds of today's desktop computers. Whether the results are really intelligent isn't important, says Piero P. Bonissone, a computer scientist at General Electric Co.'s AI lab--"just so they get the job done."
Because they do, AI is solidly entrenched in manufacturing, engineering, and finance--and catching on in marketing, services, and government. Hospitals and doctors are using AI programs developed by institutions such as Massachusetts General Hospital and George Washington University, plus a dozen small suppliers such as Odin and Applied Informatics. These software packages help prevent adverse interactions among drugs prescribed for patients, check 50 million electrocardiograms a year, and diagnose illness--often more accurately than doctors do. Government jobs include screening welfare recipients and assisting U.S. Customs agents to spot illegal cargo.
Banks have become big-time converts because AI is saving them a bundle. Among MasterCard International Inc.'s member institutions, for example, AI programs designed to nip credit-card fraud in the bud have prevented the loss of an estimated $50 million over the past 18 months. Many banks use so-called neural networks from such suppliers as HNC Software Inc. in San Diego. Because neural networks are structured to mimic the rudimentary circuitry of brain cells, they learn from examples and don't require detailed instructions. A neural net consists of computer simulations or silicon circuits that resemble a network of brain cells. The network learns by strengthening or weakening the interconnections among these various neurons. Banks train neural nets to spot oddities in the purchasing patterns associated with individual accounts. The software is so effective it regularly notices that a card has been stolen before the owner does.
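That learning rule--strengthening or weakening interconnections in response to examples--can be sketched with a single artificial neuron, the simplest possible "network." Everything below is invented for illustration: the two features, the toy data, and the thresholds bear no relation to MasterCard's or HNC's actual fraud models.

```python
# A one-neuron "network": the weights are the interconnections that
# training strengthens or weakens. Purely illustrative toy data.

def train(examples, epochs=20, lr=0.1):
    """Perceptron learning: nudge weights toward the correct answer."""
    w = [0.0, 0.0]  # one weight per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred          # 0 if correct, +/-1 if not
            w[0] += lr * err * x[0]     # strengthen or weaken each link
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Hypothetical purchasing patterns: (normalized amount, distance
# from home), with label 1 marking a suspicious transaction.
examples = [
    ((0.1, 0.0), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0),  # typical
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((1.0, 1.0), 1),  # anomalous
]
w, b = train(examples)
```

Nothing in the loop spells out what "suspicious" means; the weights settle into a boundary between the two groups of examples on their own, which is the sense in which a neural net needs examples rather than detailed instructions.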
Now, MasterCard wants to help banks scrutinize transactions among stores as well as individual accounts. The idea is to flag signs of scams. MasterCard is enlisting Los Alamos National Laboratory--and a bevy of AI techniques. Using multiple processes in combination, says Steven V. Coggeshall, the Los Alamos physicist who heads development, "will provide synergies and lead to innovative solutions" not feasible before.
Blending two or more AI technologies, each contributing a strength to offset a weakness in the other, has emerged as a major trend. At least a dozen suppliers, from small outfits such as BioComp Systems and Intelligent Machines to giants such as IBM, National Semiconductor, and Toshiba, offer hybrid software.
It's already big in Japan. Hitachi, Mitsubishi, Ricoh, Sanyo, and others have been revamping their product lines to incorporate hybrid AI in everything from home appliances to office equipment to factory machinery. GE's Bonissone, who examined some of the results at a recent Japanese trade show, says: "Their progress is scary. When I think about what we'll have to do to compete--well, running faster with the same tools won't do it."
Japan's AI craze kicked off in the late 1980s, focusing first on fuzzy logic. Unlike their Western counterparts, Japanese engineers weren't bothered by the technology's unfortunate name--the logic isn't fuzzy--and it became a central force in speeding new products to market. Fuzzy logic applies precise mathematical formulas to ambiguous phenomena, making it possible, say, for a camcorder to compensate for a user's unsteady hands.
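The camcorder example hints at the mechanics: an input belongs to overlapping categories by degree, and the output blends whichever rules apply. A minimal sketch, with membership functions and correction levels invented purely for illustration:

```python
# A toy fuzzy controller in the spirit of the camcorder example.
# Shake magnitude (0..1) is mapped to degrees of "small" and "large";
# the correction blends the two rules' outputs by those degrees.

def mu_small(shake):
    """Degree to which the shake counts as 'small' (1 at 0, 0 by 0.6)."""
    return max(0.0, 1.0 - shake / 0.6)

def mu_large(shake):
    """Degree to which the shake counts as 'large' (0 below 0.4, 1 at 1.0)."""
    return max(0.0, min(1.0, (shake - 0.4) / 0.6))

def correction(shake):
    """Weighted average of rule outputs (centroid defuzzification).
    Rule 1: if shake is small, apply mild correction (0.2).
    Rule 2: if shake is large, apply strong correction (0.9)."""
    w_small, w_large = mu_small(shake), mu_large(shake)
    return (w_small * 0.2 + w_large * 0.9) / (w_small + w_large)
```

The math is exact throughout; only the categories are graded. That is why "fuzzy" is such an unfortunate name--and why the output changes smoothly instead of lurching between two fixed settings the way an either/or rule would.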
TOP MARKS. In the 1990s, the Japanese have been moving up to hybrid systems, such as the "neurofuzzy" (neural nets and fuzzy logic combined) washing machine introduced in 1991 by Matsushita Electric Industrial Co. Nikko Securities Co. began working with AI researchers at Fujitsu Ltd. in 1989 to develop a neurofuzzy system to forecast convertible-bond ratings. The system was switched on in 1992, and Nikko says its advice has been on the mark 92% of the time.
Now, the Japanese are intent on hatching autonomous systems. By tightly integrating multiple AI techniques, Japan's Ministry of International Trade & Industry and more than a dozen companies aim to develop software that will imbue tomorrow's products with enough smarts to do whatever they do without human guidance. For traditional AI tools, that's tough, because they each address narrow aspects of intelligence.
Expert systems are great for capturing and preserving the savvy of specialists such as geologists to assist novices. But like humans, they can be inept in adjusting to rapid change. Neural nets, on the other hand, can deftly sift through mountains of data and uncover obscure causal relationships. Put the techniques together, and the combo edges closer to emulating human mental dexterity.
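The rule-based half of that combination can be sketched as a forward-chaining engine: it applies if-then rules to known facts and keeps adding conclusions until nothing new can be deduced. The geology-flavored rules below are invented for illustration, not drawn from any real prospecting system:

```python
# A minimal forward-chaining rule engine. Each rule pairs a set of
# required facts with a conclusion; firing a rule adds a new fact,
# which may in turn let further rules fire.

rules = [
    ({"porous rock", "organic sediment"}, "possible reservoir"),
    ({"possible reservoir", "cap rock"}, "drill-site candidate"),
]

def deduce(facts):
    """Keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = deduce({"porous rock", "organic sediment", "cap rock"})
```

The brittleness the article mentions is visible here: if the world changes in a way the rules never anticipated, the engine simply deduces nothing, whereas a neural net retrained on fresh data adjusts.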
But neural networks are sensitive critters. If a neural net's training covers too much or too little data, the result can be garbage in, garbage out. This is where genetic algorithms can help. These adapt the Darwinian principle of survival of the fittest to "breed" solutions, including how to design neural networks. Unlike neural nets, which start as blank slates and learn to find patterns and relationships hidden in data, genetic algorithms start with building blocks that are assembled in different ways to produce solutions. Genetic algorithms often produce astonishingly simple and innovative answers that have eluded humans.
It's not black magic--just the result of endless trial-and-error experiments. While a person will grow weary after a few dozen stabs at a problem and decide to settle for whatever looks best so far, a genetic algorithm keeps on going. It will tirelessly plod through millions of permutations. Most will be discarded instantly. But thousands may be promising approaches no human has ever evaluated.
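That tireless trial-and-error loop is easy to sketch. The toy fitness function below--counting 1-bits in a string--stands in for a real design score such as a fan blade's efficiency; the population size and mutation rate are arbitrary illustrative choices:

```python
import random

# A bare-bones genetic algorithm: random candidates, selection of
# the fittest, crossover, and occasional mutation. The fitness
# function is a stand-in for a real engineering score.

def fitness(bits):
    return sum(bits)  # toy objective: maximize the number of 1-bits

def evolve(n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # discard the weakest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # two parents "breed"
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # occasional mutation
                child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Most children are indeed discarded instantly--half the population every generation--but because the fittest survive to breed again, the population as a whole can only improve or hold steady while the machine plods on.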
For example, when GE's designers were asked to come up with a more efficient fan blade for Boeing Co.'s 777 jet engines, they faced a mind-boggling array of choices. The factors affecting a jet fan's performance and cost add up to a number with 129 trios of zeroes. A supercomputer doing a billion calculations a second would take billions of years to test every combination. But an AI hybrid--an expert system supplemented by genetic algorithms--cracked the problem in less than a week.
The hybrid system, dubbed Engeneous, starts with a pool of digital chromosomes, each representing a design factor. These mix together to create dozens of hypothetical designs. The best models are allowed to "breed"--to exchange genes and spawn a new generation of fan blades, some of which are still better. Again, only the fittest survive to electronically breed. This process soon homes in on combinations that will produce a good design. In just three days, GE had a design improving engine efficiency by 1%--no small feat in a field as mature as jet engines. "People kill for 1%," says Peter M. Finnigan, GE's research manager of mechanical-design methods.
BREAKING RULES. In another case, Engeneous boosted the efficiency of a new fan for power-plant turbines by 5%. Astonishingly, the genetic algorithm violated some of the expert system's design rules. People tend to believe that what worked before is the only way to go. But Engeneous has no such assumptions, so it can find solutions humans would dismiss out of hand. "Once you understand how it did this, you can capture the new knowledge in new design rules," says Finnigan, and thereby make the expert system smarter than it was before.
That points to a tantalizing prospect: that genetic algorithms can find answers to issues too complex to even define. Ordinary smart systems need some notion of how to proceed--what's called the system's model of the real world. However, this model can be misleading, since the forces that drive most real-world events are rarely understood in full. But genetic algorithms--and neural networks--discover reality from the bottom up. Such "model-free" methods can open new windows of understanding, says Bart A. Kosko, a computer science professor at the University of Southern California.
The stock market is a classic example. Vast sums have been spent to devise statistical models in hopes of predicting movements in the price of index futures or securities. But statistical systems that do well in bull markets run into trouble when the market turns bearish. And their proficiency inevitably degrades over time, since the market is a constantly changing, nonlinear milieu.
UNFATHOMABLE. So Citibank is taking a hybrid neurogenetic approach. Genetic algorithms evolve models that can predict currency trends under various past market conditions. But as they say on Wall Street, past performance is no guarantee of future returns. That's where neural networks come in. These brainlike circuits can discern which past model is closest to current trends. Since adopting this approach in 1992, Citibank has earned 25% annual profits on its currency trading--much more than most human traders.
While no one would dispute that the financial markets' antics remain unfathomed, some managers may be surprised to hear that factories are in the same boat. "One of manufacturing's dirty little secrets," says H. Van Dyke Parunak, a research fellow at Industrial Technology Institute in Ann Arbor, Mich., "is that shop-floor scheduling doesn't work, by and large." Each morning, the computers running sophisticated manufacturing operations spit out a detailed schedule for that day's production. But when something goes haywire, the best-laid plans of mouse and computer go out the window, leaving managers to muddle through. This usually happens within about 60 minutes, says Parunak, whether the plant is making semiconductors or automobiles.
Deere & Co. has started to clean up its act with hybrid AI. The expert system it uses for scheduling in one plant has been supplemented by genetic algorithms. When a machine goes on the fritz, the genetic algorithms immediately begin evolving a new schedule. For chemical plants, engineers at Microelectronics & Computer Technology Corp. (MCC), a research consortium in Austin, Tex., took a different hybrid route, coupling fuzzy logic to neural nets. From reams of past operating data, the neural net figures out what really makes a chemical process tick. The result, which often is at odds with management's top-down view, is then used to fine-tune a simulated process. Once the simulation is humming, the software generates fuzzy-logic rules for optimizing the real plant. This approach was so successful in its first tryout in 1990 at an Eastman Chemical Co. plant--improving some operations by 30%--that MCC spun off the technology as Pavilion Technologies Inc. Today, Pavilion's software is helping 500 plants maintain peak output in the chemical, paper, and refining industries.
The bottom-up approach isn't necessarily unique to neural nets and genetic algorithms. Take Bacon, an expert system that Simon and Newell wrote in the 1970s. Given basic data on the solar system plus a few hints--for example, if two numbers vary in concert, look for a ratio--Bacon quickly pieced together Kepler's third law of planetary motion. It also rediscovered Ohm's law of electrical resistance. "What this shows," says Simon, "is that great scientific discoveries can be produced by feeding known data to a fairly simple set of rules."
BABY TALK. Today, with huge amounts of data residing in scientific databases--many of them online, ready to be combed out by software similar to Bacon--who knows what new discoveries are waiting to be teased from cyberspace? Already, companies are sifting through their internal databases, searching for unnoticed relationships. This frequently turns up new knowledge that can tip the competitive scales--for example, what it takes to keep customers loyal.
But today's smartest AI systems may soon seem like baby patter next to what could be in store. Perhaps the most electrifying project now under way is at Japan's Advanced Telecommunications Research Institute. ATR's Brain Builder Group in Kyoto wants to use genetic techniques to grow silicon brains trillions of times as complex as a human one.
Even without ATR's brains, Simon of CMU is positive that computers will become more intelligent than humans. "Why people feel threatened by that, I don't know," he says. "We all know someone who's smarter than we are, and cars run faster than we do. Why is this so different?" So, Simon is upbeat: "Our species can stand some improvement--and we can use all the assistance we can get."
If such fantasies ever come to pass--and it might happen in our lifetimes--we could witness the emergence of an artificial intelligence that far outstrips its creators'. The implications of that are profoundly exciting. And a little frightening. It may be that only a machine will be capable of figuring it all out.
Building a Smarter Machine
At first, artificial-intelligence researchers hoped they could discover a single magic formula that would give computers the power to emulate human thinking. But now, the various AI technologies are coming together--yielding by far the smartest software ever.
EXPERT SYSTEM. A group of rules that outlines a reasoning process. What distinguishes expert rules from the rules in an ordinary program is that the expert system can draw deductions, producing new information and even modifying rules or writing new ones--that is, learn.
FUZZY LOGIC. A precise system of rules not restricted to "either/or" choices, enabling it to deal better with ambiguities. People tend to think in imprecise terms, and fuzzy logic allows computers to seem more natural. Fuzzy logic can be incorporated into an expert system, making it a "fuzzy expert."
DATA MINING. Extracting previously unknown information from existing data, often with the help of another AI program, using statistical and visualization techniques to discover and present knowledge in a form that is easily comprehensible to humans. Also known as knowledge discovery.
GENETIC ALGORITHM. A program that uses the Darwinian principles of random variation and survival of the fittest to improve itself. The program's various elements are broken down into segments, called chromosomes. These haphazardly link together to form candidate programs. Most get discarded, but the few that come close to doing the job combine with other survivors and spawn offspring programs--a process that may produce results superior to anything crafted by humans.
NEURAL NETWORK. An electronic circuit or simulation patterned on the brain's parallel-processing structure. Each neuron has multiple connections that simultaneously receive signals from many neurons. A neural network is trained by examples, such as images of faces. From these, it derives inductive conclusions--the opposite of an expert system, which reasons deductively from its rules.