In 1993, Compton's New Media showed up at the computer industry's largest trade show with a big announcement: It had won a patent for putting multimedia information on a disk.
The news certainly was big -- but for all the wrong reasons. Compton's "new" idea -- Patent No. 5,241,671 -- was already in widespread use throughout the industry. Yet Compton's suddenly had the legal right to demand licensing royalties and sue folks who didn't pony up. The U.S. Patent and Trademark Office (PTO) rushed to fix its mistake. But when the agency invited the computer industry to supply examples of prior art -- that is, proof that the technology was already in use -- so it could invalidate Compton's patent, no one stepped up to the plate.
Thirteen years later, the PTO and the tech industry are still struggling mightily with software patents: Witness the high-profile legal battle between BlackBerry maker Research in Motion (RIMM) and NTP, which claims rights to certain BlackBerry technology. Exponential growth in software patents has produced thousands that stake claims on processes already in widespread use, make claims so vague they could mean almost anything, or cover programming techniques so basic that avoiding infringement becomes all but impossible.
As a result, vendors and developers must navigate a minefield of potential patent infringement and lawsuits. There's a growing imperative to improve the quality of software patents. On Jan. 10, the Patent & Trademark Office announced a sweeping initiative to do just that.
The agency will team up with IBM (IBM), the Open Source Development Lab, Novell (NOVL), Red Hat (RHAT), and SourceForge.net, which are putting aside their philosophical differences over whether software should be patented at all. (Patenting what is basically an algorithm -- an equation or a means of solving a problem -- is anathema to many programmers.) Also at the table: Google (GOOG), Eclipse, the Software Freedom Law Center, and a smattering of intellectual-property lawyers and scholars. Next month, the team will assemble at the PTO's cathedral-like headquarters in Alexandria, Va., to begin development of a giant, searchable repository of open-source software.
The idea: Create a catalog of existing source code that patent examiners can use to check against software patent applications. The goal: Prevent the agency from unwittingly turning over the exclusive rights to common knowledge. The effect should be a reduction in overlapping claims and lawsuits -- and, ideally, a boost to innovation.
"There's a lot of concern about software patents," says John Doll, commissioner for patents. "There's a huge amount of source code out there, but it's not in a usable form that examiners can access."
A software library is a great idea. In fact, it's such a great idea that someone had already thought of it. In 1992, a coalition of companies, including IBM, launched the Software Patent Institute (SPI), a project to assemble all known software prior art into a single database that PTO examiners and others could use.
From an office in Indianapolis, SPI staffers are still chugging away at their Sisyphean task. But even as software patent applications and the accompanying litigation have skyrocketed, fewer people are taking notice of SPI's work. Budget cuts years ago forced the PTO to stop sending its examiners to train at the institute, and use of the SPI's database appears to be in decline.
"We've had a fairly low profile for the past couple years, even though we've actually chunked in more data than ever before," says SPI Executive Director Roland J. Cole. Even the patent office's own use of the database "is very spotty," says Cole, who bases that conclusion on spot checks of the system. "Some [examiners] use us, a lot of others don't."
Before the patent office and the ideas industry roll up their sleeves to fix software patents, they might take a lesson from SPI's obscurity. Access to information is but a small part of the PTO's woes. Any number of databases exist that examiners have access to but don't appear to use. Even those who take the initiative have little time -- the patent office allots examiners less than 26 hours to review a software application -- to slog through massive documentation that might shed light on the novelty of an applicant's idea.
Searching a database that purports to compile massive amounts of code and documentation isn't like using Google. For one thing, it requires institutional knowledge of the subject. An application that involves, say, computer memory can't be researched by simply typing "RAM" into a search engine, partly because terminology is so fluid. Remember core memory?
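The terminology problem can be sketched in a few lines of code. This is a hypothetical illustration, not any real prior-art tool: the two-document corpus, the document text, and the synonym table are all invented for the example.

```python
# Hypothetical sketch of why exact-keyword search fails on prior art.
# The corpus and synonym table below are invented for illustration.

corpus = {
    "doc1": "the device stores bits in magnetic core memory",
    "doc2": "data is cached in RAM before being written to disk",
}

def keyword_search(corpus, term):
    """Naive exact-match search: return docs containing the term."""
    return [doc_id for doc_id, text in corpus.items() if term in text]

# An examiner searching for "RAM" misses doc1, even though
# core memory is prior art for the same concept.
print(keyword_search(corpus, "RAM"))  # ['doc2']

# Expanding the query with known synonyms catches older terminology.
SYNONYMS = {"RAM": ["RAM", "core memory", "main memory"]}

def expanded_search(corpus, term):
    """Search with synonym expansion to handle fluid terminology."""
    hits = set()
    for variant in SYNONYMS.get(term, [term]):
        hits.update(keyword_search(corpus, variant))
    return sorted(hits)

print(expanded_search(corpus, "RAM"))  # ['doc1', 'doc2']
```

Even this toy fix depends on someone curating the synonym table, which is exactly the kind of institutional knowledge a general-purpose search engine lacks.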
"There are a lot of databases out there," Doll acknowledges. "But the protocol or logic statements required to search [each one] are difficult or foreign or inaccessible."
The result: Thousands of patents granting exclusive rights to processes and code that everybody already uses. Business is partly to blame for the mess. The vast majority of software patent applications cite no prior art -- implying that the idea is so new and unique that nothing out there is remotely like it. It's a disingenuous tactic designed to boost an application's chance of approval by making it tougher for an examiner to reject.
Doll is counting heavily on industry to build a database that's as comprehensive as it is easy to use. The project's cost and funding haven't been worked out, "but the open-source community is going to fund this effort," Doll says. "If there is a cost, it's going to be borne by the industry."
Then he adds this reality check: "That might be wishful thinking." He may be right. Don't forget, the patent office has been left to its own devices before.
And what became of Compton's New Media's controversial patent? In 1994, a PTO staffer happened to take a look at a 1987 reference, The Complete HyperCard Handbook. He was in luck: The book had laid out Compton's idea years before the company's patent application, proving that the concept was neither novel nor original. Patent No. 5,241,671 bit the dust.