by Scott Berinato

Tests of Face Recognition Technology End in Failure

Nov 01, 2003
4 mins
IT Strategy

Since 9/11, few counterterrorism technologies have been hyped more than face recognition. Recently, though, reality interrupted the hype when two public pilot projects of the technology ended.

The city of Tampa, Fla., which first tested face recognition at the January 2001 Super Bowl (the technology on game day produced 19 “hits” with an FBI database but no arrests), in August dropped support for a program that scanned faces in Tampa’s Ybor City entertainment district. Some protesters of the system donned Groucho Marx masks in Ybor City to render it useless.

In one case, according to The St. Petersburg Times, the police used an image of a man eating lunch in Ybor City to demonstrate the system on the local TV news. A woman in Oklahoma saw the picture and accused the man of being her deadbeat husband who owed her child support. When police approached the man, it turned out he had never been married.

While Tampa’s experience ran into public resistance, poor performance ended a second facial recognition trial at Boston’s Logan International Airport, where hopes for the technology had been high. Logan is the airport from which half of the 9/11 hijackers departed on the flights that crashed into the World Trade Center.

But the Boston tests recently ended after the system failed to identify positive matches 38 percent of the time. While false positives based on an operator’s decision didn’t exceed 1 percent, machine-generated false positives exceeded 50 percent. The trial ran from January to April 2002. The American Civil Liberties Union, which obtained the results under the Freedom of Information Act, publicized them in September 2003, according to The Boston Globe.

Meir Kahtan, a spokesman for Identix, one of the companies involved in the Logan trial, pointed to the final report of the trial and noted that the test met many of its objectives, including accuracy of results. He also noted that the trial was an operational one trying to determine if face recognition was logistically feasible in an airport—not a technical one trying to determine the software’s accuracy.

But it was on operational grounds that the airport officials’ report was most critical. It said the program “requires much more participation than initially anticipated,” and that because of the false positives, “the operators’ workload is taxing and strenuous, requiring constant undivided attention and periodic relief, which amounts to a staffing minimum of two persons for one workstation.” Slowing the technology’s progress are high R&D costs and vendors’ “aggressive marketing strategies,” the report added.

Charles Wilson, a scientist at the National Institute of Standards and Technology who was part of a landmark face recognition vendor test in 2002, points to the Logan test and another aborted trial at Palm Beach International Airport in May 2002, and says those trials don’t mean the technology failed. Rather, they mark a return to earth for those who were swept away by marketers promising more than they could deliver. The systems aren’t bad; they just aren’t as good as the hype many wanted to hear, especially after 9/11.

“Scientists have one degree of optimism, and marketing has another,” says Wilson. Think of the conditions where the systems operate, he adds. Poor or inconsistent lighting. Cameras high off the ground, which mean steep angles to capture facial images. It’s no wonder the results weren’t dazzling.

“I look at those airport tests, and I think the systems worked as well as I thought they would in those conditions,” says Wilson. “In fact, there were airport trials a year before the Logan trial. Guess what? They turned out the same. My question is, why are we wasting all this money on individual airport trials? I think people were told they were getting a panacea to fight terrorism. But science rarely if ever delivers panaceas.”