For decades, artificial intelligence has been a staple of science fiction--from the HAL 9000 computer in 2001: A Space Odyssey to the Arnold Schwarzenegger character in Terminator 2. Add to that a large dollop of media hype and inflated promises by scientists and entrepreneurs, and you begin to see why fantasy has left the reality of artificial intelligence in the dust.
These days, a less ambitious technology--call it "applied intelligence"--is emerging as a major force in computer software. Programmers have come up with systems that, instead of attempting to replicate human thinking, embody human experience and expertise. Instead of software that thinks, we have software that "knows"--whether it's how to pick a stock, detect tax fraud, or give a prognosis more reliably than most human doctors. Applied intelligence is simply a way to apply human knowledge, via computers, to real-world problems.
Do signs of success mean that U.S. computer scientists should abandon their quest for true artificial intelligence--machines that somehow learn and reason like people? Not a bit. On Apr. 1, Japan's Ministry of International Trade & Industry kicks off a $1 billion, 10-year research effort in artificial intelligence. This is clearly not the time for the U.S. to give up on research that could still yield a large reward. Universities and government research centers could provide the next breakthrough. One intriguing example: Carnegie Mellon University's prototype of a self-driving car might ultimately pay off in a warning system that alerts drivers to impending danger.