By Ben Worthen
Ed note: Ron Suskind’s new book, The One Percent Doctrine, is named after the Bush administration’s guiding principle that if a terror threat has a one percent chance of being real, the government must treat it as an absolute certainty. Similar thinking has influenced the government’s use of data mining as the prevailing technology strategy for fighting terror. Instead of requiring data mining program leaders to develop a business case demonstrating a project’s value, agency heads approve these programs with the rationalization that any chance of catching a terrorist, however remote, makes a project worth it, despite issues such as cost, project delays due to unlimited project scope and a clearly demonstrated public sensitivity to encroachments on civil liberties. This is an excerpt from an article by Senior Writer Ben Worthen, to be published in CIO magazine on Aug. 1, detailing the issues at stake in preventing terror with IT.
In the aftermath of Sept. 11, the government concluded that data mining could help it prevent future terror attacks.
Data mining is a relatively new field within computer science. In the broadest sense, it combines statistical models, powerful processors and artificial intelligence to find and retrieve valuable information that might otherwise remain buried inside vast volumes of data. Retailers use it to predict consumer buying patterns, and credit card companies use it to detect fraud.
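To make the idea concrete, here is a toy sketch of one of the simplest data mining techniques behind fraud detection: flagging transactions that fall far outside a customer's historical spending pattern. All of the numbers and the threshold are invented for illustration; real systems are far more sophisticated.

```python
# Toy anomaly detection: flag charges far above a customer's
# historical average, measured in standard deviations.
# All data here is invented for the example.
from statistics import mean, stdev

def flag_outliers(history, new_charges, threshold=3.0):
    """Return charges more than `threshold` std deviations above the mean."""
    mu, sigma = mean(history), stdev(history)
    return [c for c in new_charges if (c - mu) / sigma > threshold]

# Eight past charges averaging about $50, then two new ones.
past = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 58.0]
print(flag_outliers(past, [49.0, 950.0]))  # prints [950.0]
```

The same pattern-versus-baseline logic, scaled up with richer statistical models, is what lets a credit card company notice an unusual purchase within seconds.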
Experts say that the government, and in particular the intelligence community, has come to rely heavily on data mining. A 2004 Government Accountability Office report found that federal agencies were actively engaged in or planning 14 data mining projects that focused explicitly on catching terrorists and preventing attacks, a total that does not include projects at seven agencies, including the CIA and the National Security Agency. Over the past year, The New York Times, USA Today and other media outlets have uncovered top-secret programs within those agencies that collect and look for patterns in phone records, e-mail headers and other personal information. When these programs were made public, the president and other members of his administration defended them as critical to the war on terrorism.
Given the administration’s commitment to programs using these data mining tools and the pressure on everyone to prevent another attack, it comes as no surprise that these projects are being approved by agency heads almost as fast as they are being conceived, experts say. “There is a real fear of not going down this path because if there is value, you don’t want to be on the side that opposed [a data mining project],” says Robert Popp, who was deputy director of the Information Awareness Office at the Defense Advanced Research Projects Agency.
In that climate, projects are approved without having undergone the sort of scrutiny that a private sector CIO would be required to perform. “No one [in the government] has looked at data mining from an IT value perspective,” says Steve Cooper, former CIO of the Department of Homeland Security. “I couldn’t figure out [the value of data mining] when I was in DHS, and I can’t figure it out now. But that didn’t stop us from using it.” In other words, according to Cooper, no one has done a business case analysis to determine whether the government is getting a return on its investment.
As every experienced CIO knows, without a proper business case, projects tend to grow in scope, budget and schedule. And when this happens, projects fail. “It doesn’t matter if it is a supply chain project, an ERP system or data mining,” says Jim Johnson, chairman of the Standish Group, an analyst firm that tracks IT project success rates. Experts worry that without a rigorous business analysis, the government’s data mining projects could be hampered by all the issues that affect traditional IT projects: the cost overruns and delays that come from a lack of clear project scope, poor system and process alignment, and rebellion from users and others affected by the system—in this case, the American public. Experts say there is no effort being made to make sure that positive outcomes—preventing an attack—outweigh negative ones—sending investigators to look at false positives and intrusions into innocent people’s privacy.
This has two consequences. First, if users see a system as an obstacle to getting their jobs done effectively, they will rebel or simply ignore it. This was the behavior described in a January 2006 article in The New York Times reporting that hundreds of FBI agents were looking into thousands of data mining–generated leads every month, almost all of which turned out to be dead ends. Second, Congress could cut a project’s funding in response to privacy and civil liberties concerns, potentially throwing out good parts with a bad project. In fact, Congress has already halted a number of data mining projects, including the Department of Defense’s Total Information Awareness project, an ambitious 2003 attempt to create a massive database containing just about everything and anything that could be used to identify possible terrorists, and CAPPS II, which would have replaced the current airline passenger screening system.
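The flood of dead-end leads follows from simple arithmetic, often called the base-rate problem: when the thing being searched for is extremely rare, even a highly accurate screen produces mostly false alarms. The figures below are hypothetical, chosen only to show the shape of the math, not to describe any actual program.

```python
# Hypothetical base-rate arithmetic (every number here is invented):
# a rare target plus an imperfect screen yields mostly false positives.
population = 300_000_000      # records scanned
true_targets = 3_000          # assumed real matches hidden in the data
true_positive_rate = 0.99     # screen catches 99% of real targets
false_positive_rate = 0.001   # but wrongly flags 0.1% of everyone else

real_leads = true_targets * true_positive_rate
false_alarms = (population - true_targets) * false_positive_rate
print(f"real leads:   {real_leads:,.0f}")     # about 3,000
print(f"false alarms: {false_alarms:,.0f}")   # about 300,000
```

Under these assumed rates, roughly 99 percent of all flagged records point at innocent people, which is consistent with agents reporting that nearly every lead was a dead end.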
“There are some extraordinarily smart people [working on data mining systems], and I would be hard pressed to think that they are wasting their lives on something that doesn’t work,” says Fred Cate, director of the Center for Applied Cybersecurity Research at Indiana University, who served as counsel for the Technology and Privacy Advisory Committee (TAPAC) created in 2003 by Defense Secretary Donald Rumsfeld to study his agency’s use of data mining. “But one of the things [TAPAC] kept focusing on was that you have to be able to show that it works within acceptable parameters,” a responsibility that Cate says rests with agency heads. Agency heads aren’t accepting that responsibility, he says. “As far as the oversight process is concerned, it is clear that [data mining to prevent terror] is a disaster.”