Finally, a Real Return on Security Spending

For years CIOs have had to use scare tactics to justify an investment in security. Now, they may be able to get numbers they need to show a measurable ROI.

You need fire sprinklers. Obvious advice, maybe, but once upon a time fire sprinklers were considered a waste of money. In fact, in 1882, sprinklers were considered to be as dubious an investment as information security is today.

That’s why George Parmalee, in March of that year, set a Bolton, England, cotton spinning factory on fire. In 90 seconds, flames and billows of thick black smoke engulfed the mill. After two minutes, 32 automatic sprinklers kicked in and extinguished the fire.

It was a sales pitch. Parmalee’s brother Henry had recently patented the sprinklers and George hoped the demonstration would inspire Britain’s mill owners—many of whom came to watch—to invest in his brother’s new form of security.

But they didn’t. "It was slow work getting sprinklers established in this country," wrote Sir John Wormald, a witness to the conflagration. Only a score of factories bought the devices over the next two years.

The reason was simple, and it will sound familiar to CIOs and chief security officers: "[Parmalee] realized that he could never succeed in obtaining contracts from the mill owners...unless he could ensure for them a reasonable return upon their outlay," Wormald wrote.

Today, it’s data warehouses, but data is as combustible as cotton. Thousands of George Parmalees—CIOs and CSOs, not to mention security consultants and vendors—are eager to demonstrate inventions that extinguish threats to information before those threats take down the company. But the investment conundrum remains precisely what it was 120 years ago. CEOs and CFOs want quantifiable proof of an ROI before they invest.

The problem, of course, is that until just recently a quantifiable return on security investment (ROSI) didn’t exist. The best ROSI argument CIOs had was that spending might prevent a certain amount of losses from security breaches.

But now several research groups have developed surprisingly robust and supportable ROSI numbers. Their research is dense and somewhat raw, but security experts praise the efforts as a solid beginning toward a quantifiable ROSI.

"I was quite surprised, to be honest," says Dorothy Denning, a professor at Georgetown University and a widely regarded information security expert. "I have a good sense of what’s good research, and all of this seems good. They are applying academic rigor."

IT executives are hungry for this kind of data. "It’s very easy to get a budget [for security] after a virus hits. But doing it up front makes more sense; it’s always more secure," says Phil Go, CIO at design and construction services company Barton Malow in Southfield, Mich. "Numbers from an objective study would help me. I don’t even need to get hung up on the exact numbers as long as I can prove the numbers are there from an unbiased study."

If the new findings about ROSI are proven true, they will fundamentally change how information security vendors sell security to you and how you sell security to your bosses. And the statement "You need information security" will sound as commonsensical as "You need fire sprinklers."

Soft ROSI

Tom Oliver, a security architect for NASA, recently spent tens of thousands of dollars on a comprehensive, seven-week external security audit. At the end, Oliver received a 100-page booklet with the results—which were mostly useless.

"[The auditors] said, ’You were very secure. We were surprised we couldn’t access more [sensitive data],’" says Oliver, who is employed by Computer Sciences (under contract to NASA) at the Marshall Space Flight Center in Huntsville, Ala. "But I wanted to know how we compared to other government agencies. If I put another $500,000 into security, will that make me more secure?

"There was no return on investment in there at all," he adds. "I spent $110,000, and I got, ’You’re good.’ What’s that?"

This is the dilemma that faces CIOs and CSOs everywhere. A lack of data on infosecurity makes it difficult to quantify what security gets you. In lieu of numbers, information executives rely on soft ROSIs—explanations of returns that are obvious and important but impossible to verify.

Executives know the threat is real, but CIOs say executives don’t feel the threat. No one buys burglar alarms until someone they know is robbed. For that reason, IT relies on, more than anything, fear, uncertainty and doubt to sell security—in other words, FUD. The thinking is, if you scare them, they will spend.

But even FUD has limitations, especially during a recession. The signs of the down economy’s impact are everywhere. At Fidelity, the chief information security officer (CISO) position was eliminated. At State Street Global Advisors in Boston, CISO Michael Young needs four more security staffers, but there’s a hiring freeze. "If we invest in anything that promotes less downtime, that’s a positive ROI," Young says. "But still, there’s no quantified value associated with [staffing], and that’s a problem. If I could go in there with a return on the bottom line resulting from these hires, bingo! That would be it."

To say there’s no good ROSI data is not to say there’s no data. Numbers are indeed used to sell security; it’s just that they’ve had zero statistical validity.

The marquee example of that is the Computer Security Institute’s (CSI) annual computer crime survey. Each year, CSI and the FBI report security trends in plain, often stark terms. The 2001 report’s centerfold is a chart called "The Cost of Computer Crime." It says that losses from computer crime for a five-year period from 1997 to 2001 were an eye-popping $1,004,135,495.

There’s just one problem with that number. "It’s crap," says Bruce Schneier, security expert, founder and CTO of security services vendor Counterpane Internet Security in Cupertino, Calif.

"There’s absolutely no methodology behind it. The numbers are fuzzy," agrees Bill Spernow, CISO of the Georgia Student Finance Commission in Atlanta. "If you try to justify your ROSI this way, you’ll spend as much time just trying to justify these numbers first."

Therein lies the appeal of the current crop of studies. They are grounded in scientific method and built on a foundation of previously established research.

Hard Numbers, at Last

In 2000 and 2001, a team at the University of Idaho followed George Parmalee’s example. The team built an intrusion detection box, a security device that sits at the edge of a network and watches for suspicious activity among users who get past the firewall. Incoming traffic that follows a certain pattern is flagged, and someone is alerted to look into it.

The researchers then hacked the box, code-named Hummer. Their goal was to prove that it’s more cost-effective to detect and then deal with attacks using intrusion detection than it is to try to prevent them using other means. The problem was assigning valid costs for this cost-benefit analysis. For instance, what does it cost to detect an incident? What are day-to-day operational costs of security? What are the cost consequences if you miss an attack?

The Idaho team, led by University of Idaho researcher HuaQiang Wei, began by culling research from all over. Then they combined what they found with some of their own theories, assigning values to everything from tangible assets (measured in dollars with depreciation taken into account) to intangible assets (measured in relative value, for example, software A is three times as valuable as software B). Different types of hacks were assigned costs according to an existing and largely accepted taxonomy developed by the Department of Defense. Annual Loss Expectancy (ALE) was figured. ALE is an attack’s damage multiplied by frequency. In other words, an attack that costs $200,000 and occurs once every two years has an ALE of $100,000.
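The ALE arithmetic described above is simple enough to sketch in a few lines of Python. The figures are the article's own worked example; the function name is illustrative, not part of the Idaho team's actual model:

```python
def annual_loss_expectancy(damage_per_incident, incidents_per_year):
    """ALE: an attack's damage multiplied by its annual frequency."""
    return damage_per_incident * incidents_per_year

# An attack that costs $200,000 and occurs once every two years
# (0.5 incidents per year) has an ALE of $100,000.
ale = annual_loss_expectancy(200_000, 0.5)
print(int(ale))  # 100000
```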

To verify the model, the team went about attacking their intrusion detection box with commonly attempted hacks to see if the costs the simulation produced matched the theoretical costs. They did.

Determining cost-benefit became the simple task of subtracting the security investment from the damage prevented. If you end up with a positive number, there’s a positive ROSI. And there was. An intrusion detection system that cost $40,000 and was 85 percent effective netted an ROI of $45,000 on a network that expected to lose $100,000 per year due to security breaches.
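The cost-benefit subtraction in the Idaho example can be checked with a short sketch. The numbers come straight from the article; the `rosi` helper is a hypothetical name for illustration:

```python
def rosi(expected_annual_loss, effectiveness, investment):
    """ROSI = damage prevented minus the security investment.

    A positive result means the investment pays for itself.
    """
    damage_prevented = expected_annual_loss * effectiveness
    return damage_prevented - investment

# A $40,000 intrusion detection system that is 85 percent effective
# on a network expecting $100,000 per year in breach losses.
print(int(rosi(100_000, 0.85, 40_000)))  # 45000
```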

If applied to real-life examples, the Idaho model could produce the data that CIOs need in order to demonstrate not only that their investment pays off, but by how much. Next, the Idaho team wants to put the ROSI analysis inside Hummer. As threats are detected, the box will compare response cost against damage cost. Only if the damage cost is higher will it stop an attack. In other words, the device itself decides if it’s cost-effective to launch an emergency response.

Of course, Hummer’s data would be logged for review. Putting those features in commercial intrusion detection systems would yield reports that showed how much money CIOs saved using intrusion detection. This would then allow them to compare the costs of one security system against another. And wouldn’t that be handy?

The Value of Building Security in Early

While Idaho was toying with Hummer, a group of researchers from MIT, Stanford University and @Stake, a security consultancy located in Cambridge, Mass., was playing with Hoover.

Hoover is a database. Amassed by @Stake, it contains detailed information about software security flaws—from simple oversights to serious weaknesses. Hoover reveals an ugly truth about software design: Securitywise, it’s not very good.

Right now, Hoover contains more than 500 data entries from nearly 100 companies. Participants in the study, such as Bedford, Mass.-based RSA and Fairfax, Va.-based WebMethods, wanted to assess how securely they were building their software and how to do it better.

First, the Hoover group focused on the ROSI of secure software engineering. The group wanted to prove a concept that seems somewhat intuitive: The earlier you build security into the software engineering process, the higher your return on that investment. And prove it they did.

It took 18 months of letting Hoover suck up data from @Stake’s clients to create a representative sample of the entire software landscape. Data in hand, they looked for previous research to base their work on. There was little, so they made a critical assumption, which unlocked the study’s potential. The team decided that a security bug is no different than any other software bug.

Suddenly, security was a quality assurance game, and there was a ton of existing data and research on quality assurance and software. For example, one bit of research they used came from a widely accepted 1981 study that said that spending a dollar to fix a bug (any bug) in the design process saves $99 against fixing it during implementation.

"The idea of security software as quality assurance is extremely new," according to team member and Stanford economics PhD Kevin Soo Hoo. "Security has been an add-on at the last minute, and detecting security problems has been left to users." And, of course, hackers.

With the research in hand, Soo Hoo, MIT Sloan School of Management student Andrew Sudbury and @Stake Director Andrew Jaquith tweaked the general quality assurance models to reflect the security world, as based on the Hoover data.

Overall, the average company catches only a quarter of software security holes. On average, enterprise software has seven significant bugs, four of which the software designer might choose to fix. Armed with such data, the researchers concluded that fixing those four defects during the testing phase cost $24,000. Fixing the same defects after deployment cost $160,000, nearly seven times as much.
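The "nearly seven times" claim follows directly from the study's dollar figures, as a quick check shows (figures are the article's; variable names are illustrative):

```python
# Figures from the @Stake study as reported in the article.
fix_in_testing = 24_000     # cost to fix four defects during testing
fix_after_deploy = 160_000  # cost to fix the same defects after deployment

multiple = fix_after_deploy / fix_in_testing
print(round(multiple, 1))  # 6.7 -- "nearly seven times as much"
```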

The ROSI breakdown: Building security into software engineering at the design stage nets a 21 percent ROSI. Waiting until the implementation stage reduces that to 15 percent. At the testing stage, the ROSI falls to 12 percent.

"Our developers have said they believe they save 30 percent by putting security in earlier, and it’s encouraging to see proof," says Mike Hager, vice president of network security and disaster recovery at Oppenheimer Funds in Englewood, Colo. "Executives need answers to questions like, ’What risk am I mitigating?’ We haven’t had the means to educate them without FUD." From numbers like those, he adds, "We’ll be able to sell security from a business perspective."

Hoover keeps growing. The group plans to publish other ROSI numbers. Next up: assigning a statistically valid ROSI to incident readiness. It will (they hope) show how ROSI increases as the effective response time to a security incident decreases.

The Law of Diminishing ROSI

If you want to give CEOs and CFOs a ROSI they can love, show them a curve.

That’s what researchers at Carnegie Mellon University (CMU) did in "The Survivability of Network Systems: An Empirical Analysis." The study is as dense and dispassionate as its title. (So are its bureaucratic underpinnings: It was done at the Software Engineering Institute in conjunction with the public-private cooperative effort called CERT, both housed at CMU.)
