It’s still unclear exactly how law enforcement officials zeroed in on the two figures captured in surveillance footage who are suspected of carrying out the deadly bomb attack at Monday’s Boston Marathon—figures whom officials have identified as Dzhokhar and Tamerlan Tsarnaev, two young brothers from a family of Chechen immigrants. But it’s likely that investigators used some form of facial-recognition software as part of their effort. These technologies remain in their infancy, but law enforcement is relying on them more and more.
The FBI is rolling out an ambitious, billion-dollar biometric information system that will include iris scans, voice recognition, and facial-recognition software, developed with Lockheed Martin, IBM, Accenture, and BAE Systems, among others. Law enforcement authorities are uploading mugshots into an image database that can then be searched against images from crime scenes, like the instantly notorious surveillance camera footage of Boston’s Boylston Street. The program will eventually hold 12 million searchable images.
The Next Generation Identification (NGI) program won’t be fully operational until next year, and although the images it uses will be mugshots, the software—think of a more powerful version of Facebook image search—could be used to match any two images. Civil liberties advocates worry it could be used to track people on the street regardless of whether they’re suspected of a crime. The Pentagon’s Defense Advanced Research Projects Agency (Darpa) and the NYPD have also expressed interest in more exotic technologies, including one that analyzes people’s gait for clues as to whether they’re carrying a bomb. Programmers are developing machine vision techniques that can link images of the same person across different video cameras or spot behaviors that are out of the ordinary for a certain setting (e.g., leaving a bag unattended in a public place).
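At bottom, the unattended-bag idea reduces to simple rules applied to tracked objects. Here is a toy sketch in Python—emphatically not any agency’s actual algorithm—that assumes a vision system has already produced per-frame positions for a bag and its presumed owner; the function name, thresholds, and coordinates are all invented for illustration:

```python
# Hypothetical sketch: flag a bag as "unattended" once it has been
# farther than max_distance from its owner for min_frames consecutive
# frames. Real systems must first infer objects and ownership from
# raw video, which is the genuinely hard part.

def flag_unattended(bag_track, owner_track, max_distance=10.0, min_frames=3):
    """Return the first frame index at which the separation rule fires,
    or None if the bag is never left unattended."""
    apart = 0
    for i, ((bx, by), (ox, oy)) in enumerate(zip(bag_track, owner_track)):
        if ((bx - ox) ** 2 + (by - oy) ** 2) ** 0.5 > max_distance:
            apart += 1
            if apart >= min_frames:
                return i
        else:
            apart = 0
    return None

bag = [(5.0, 5.0)] * 8  # the bag never moves
owner = [(5.0, 5.0), (6.0, 5.0), (20.0, 5.0),  # the owner walks away
         (40.0, 5.0), (60.0, 5.0), (80.0, 5.0),
         (90.0, 5.0), (95.0, 5.0)]

print(flag_unattended(bag, owner))  # → 4 (alert fires at frame 4)
```

Even in this cartoon form, the design choice is visible: the system trades false alarms against missed detections by tuning how far and how long is “too far, too long.”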
Current facial-recognition technology has its limits. As the FBI puts it in a staff paper posted on the website of the Electronic Frontier Foundation, a digital privacy watchdog group, “The performance of facial matching systems is highly dependent upon the quality of images enrolled in the system.” Thus, the grainy surveillance imagery from Boylston Street might have proven particularly tough to work with. It’s likely that the breakthroughs in the case were made by sharp-eyed investigators: spotting one of the suspects dropping a bag at the site of one of the two bombings in the surveillance footage, then matching the face with an image from the security camera of the 7-Eleven in Cambridge that was allegedly robbed by Dzhokhar Tsarnaev last night. The robbery, and the match, triggered the wild, rolling shootout and manhunt that continues today.
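The FBI’s point about image quality can be felt in a toy numerical sketch. Matching systems roughly reduce a face to a feature vector and compare vectors by similarity; if a grainy probe image is modeled as a clean template plus noise, the match score collapses. Everything below—the 128-dimensional vectors, the noise levels—is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def cosine_similarity(a, b):
    """Standard cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-in for an enrolled face template: a 128-dim feature vector.
enrolled = rng.normal(size=128)

# A clean probe image yields nearly the same features...
clean_probe = enrolled + rng.normal(scale=0.1, size=128)

# ...while a grainy, low-resolution probe yields much noisier ones.
grainy_probe = enrolled + rng.normal(scale=1.5, size=128)

print(cosine_similarity(enrolled, clean_probe))   # high, near 1.0
print(cosine_similarity(enrolled, grainy_probe))  # substantially lower
```

The same template, compared against a degraded image, simply scores worse—which is why a low-quality crime-scene frame can defeat a system that works well on clean mugshots.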
As more and more images have made their way onto the Web—not just the surveillance footage, but photos like this and this—the thousands, perhaps millions, of people following the case are all experiencing some form of the same sensation of recognition that those investigators did. People are suggestible, of course, but being able to recognize faces we’ve seen in other settings is something our brains are particularly good at—even when certain details are obscured or distorted. The best algorithms have yet to master that. The ability to instantly recognize the face of an acquaintance, after all, is part of what makes us such a sociable species—it helps cement the sort of fellow feeling that makes attacks like Monday’s so horrifying, inconceivable, and—mercifully—rare.