The trouble with artificial vision systems is that they try to analyze everything. Humans, on the other hand, just look for some salient feature that leads to a quick, tentative conclusion about what's being viewed. Then, the eye jumps to a few other elements in order to confirm or modify the original interpretation.
If robotic vision systems could ignore everything but a few key features, they might be more accurate as well as faster. That's why researchers at the University of Rochester are combining the "decision tree" approach used in expert-systems software with the latest image-analysis algorithms. The first result is a robot butler that checks place settings on a table to tell whether the meal will be a formal dinner or a luncheon. The system earned a PhD for Raymond D. Rimey. But interpreting tabletops and other static scenes is a snap compared with moving images, says Christopher M. Brown, a professor of computer science. "That's our next challenge."
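The idea of checking a few key features in sequence, rather than analyzing the whole scene, can be sketched as a tiny decision tree. This is an illustrative toy, not the Rochester system: every feature name, threshold, and branch below is invented for the example, standing in for the outputs of real image-analysis routines.

```python
# Toy decision tree for a hypothetical place-setting classifier.
# Each branch examines one salient feature, jumping to a second cue
# only to confirm or revise the tentative conclusion -- mimicking the
# selective, eye-jump strategy described above. All features are made up.

def classify_setting(features):
    """Walk a small decision tree over hypothetical detector outputs.

    `features` is a dict such as
    {"wine_glass": True, "fork_count": 3, "cloth_napkin": True}.
    """
    # First salient cue: a wine glass tentatively suggests a formal dinner.
    if features.get("wine_glass"):
        # Jump to a second cue to confirm, instead of scanning everything.
        if features.get("fork_count", 0) >= 3 or features.get("cloth_napkin"):
            return "formal dinner"
        return "informal dinner"
    # No wine glass: a single fork points to a luncheon.
    if features.get("fork_count", 0) <= 1:
        return "luncheon"
    return "informal dinner"

print(classify_setting({"wine_glass": True, "fork_count": 3}))  # formal dinner
print(classify_setting({"fork_count": 1}))                      # luncheon
```

The point of the structure is early exit: most inputs are classified after inspecting one or two features, which is what makes the approach faster than exhaustive scene analysis.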