Dusting off creaky algorithms

Stephen Baker

Think of all the researchers who worked in the days of adding machines and slide rules. Many of them came up with algorithms that were theoretically brilliant but impractical. To make them useful, you'd need a machine that could run not hundreds, not millions, but billions of calculations per second. Well... Michael Trick points to a (pay-only) article in the Harvard Business Review that discusses efforts to bring these "Rembrandts" down from the attic. Breakthroughs in coming years could come from old thinking. I know IBM is busy mining its algorithmic gems from past decades. The article, written by MIT's Michael Schrage, notes that "Google’s search engine was possible only because the founders adapted a century-old theorem about matrices to software for ranking Web pages according to links from other sites."
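To give a flavor of what Schrage means: the "century-old theorem about matrices" is the Perron-Frobenius result about dominant eigenvectors, and Google's PageRank finds that eigenvector by simple repeated multiplication (power iteration). Here's a minimal sketch of the idea — the four-page link graph is invented purely for illustration, and this is a toy version, not Google's actual code:

```python
def pagerank(links, damping=0.85, iterations=100):
    """Power iteration on a link graph; returns a dict of page scores.

    Each page starts with equal rank, then repeatedly passes its rank
    along its outgoing links. The scores converge to the dominant
    eigenvector of the (damped) link matrix.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a small "teleport" share, per the damping factor.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page with no outlinks: spread its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical four-page web: everything ultimately points at C,
# so C should come out on top.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
scores = pagerank(links)
print(sorted(scores, key=scores.get, reverse=True))
```

The whole trick is that multiplying a vector by the same matrix over and over is cheap per step but takes many steps over billions of pages — exactly the kind of method that was hopeless on adding machines and routine on modern hardware.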

Seems to me that the opportunities for tech scavengers are nearly boundless. Think of all the old dot-com and wireless dreams that died six or seven years ago for lack of broadband or a Net-savvy public. Then, of course, there's ancient biology. J. Craig Venter's team has dredged up some six million previously unknown genes and thousands of protein families by hauling in ocean microbes by the tub-load. Like IBM's dusty algorithms, Venter's haul would be of little practical value without the modern tools, the "whole environment shotgun sequencing and new computational tools," to unveil its genomic treasures.
