by Scott Berinato

Finally, a Real Return on Security Spending

Feature
Feb 15, 2002
17 mins
IT Leadership
ROI and Metrics

For years CIOs have had to use scare tactics to justify an investment in security. Now they may be able to get the numbers they need to show a measurable ROI.

You need fire sprinklers. Obvious advice, maybe, but once upon a time fire sprinklers were considered a waste of money. In fact, in 1882, sprinklers were considered to be as dubious an investment as information security is today.

That’s why George Parmalee, in March of that year, set a Bolton, England, cotton spinning factory on fire. In 90 seconds, flames and billows of thick black smoke engulfed the mill. After two minutes, 32 automatic sprinklers kicked in and extinguished the fire.

It was a sales pitch. Parmalee’s brother Henry had recently patented the sprinklers and George hoped the demonstration would inspire Britain’s mill owners—many of whom came to watch—to invest in his brother’s new form of security.

But they didn’t. “It was slow work getting sprinklers established in this country,” wrote Sir John Wormald, a witness to the conflagration. Only a score of factories bought the devices over the next two years.

The reason was simple, and it will sound familiar to CIOs and chief security officers: “[Parmalee] realized that he could never succeed in obtaining contracts from the mill owners…unless he could ensure for them a reasonable return upon their outlay,” Wormald wrote.

Today the mills are data warehouses, but data is as combustible as cotton. Thousands of George Parmalees—CIOs and CSOs, not to mention security consultants and vendors—are eager to demonstrate inventions that extinguish threats to information before those threats take down the company. But the investment conundrum remains precisely what it was 120 years ago: CEOs and CFOs want quantifiable proof of an ROI before they invest.

The problem, of course, is that until just recently a quantifiable return on security investment (ROSI) didn’t exist. The best ROSI argument CIOs had was that spending might prevent a certain amount of losses from security breaches.

But now several research groups have developed surprisingly robust and supportable ROSI numbers. Their research is dense and somewhat raw, but security experts praise the efforts as a solid beginning toward a quantifiable ROSI.

“I was quite surprised, to be honest,” says Dorothy Denning, a professor at Georgetown University and a widely respected information security expert. “I have a good sense of what’s good research, and all of this seems good. They are applying academic rigor.”

IT executives are hungry for this kind of data. “It’s very easy to get a budget [for security] after a virus hits. But doing it up front makes more sense; it’s always more secure,” says Phil Go, CIO at design and construction services company Barton Malow in Southfield, Mich. “Numbers from an objective study would help me. I don’t even need to get hung up on the exact numbers as long as I can prove the numbers are there from an unbiased study.”

If the new findings about ROSI are proven true, they will fundamentally change how information security vendors sell security to you and how you sell security to your bosses. And the statement “You need information security” will sound as commonsensical as “You need fire sprinklers.”

Soft ROSI

Tom Oliver, a security architect for NASA, recently spent more than $100,000 on a comprehensive, seven-week external security audit. At the end, Oliver received a 100-page booklet with the results—which were mostly useless.

“[The auditors] said, ’You were very secure. We were surprised we couldn’t access more [sensitive data],’” says Oliver, who is employed by Computer Sciences (under contract to NASA) at the Marshall Space Flight Center in Huntsville, Ala. “But I wanted to know how we compared to other government agencies. If I put another $500,000 into security, will that make me more secure?

“There was no return on investment in there at all,” he adds. “I spent $110,000, and I got, ’You’re good.’ What’s that?”

This is the dilemma that faces CIOs and CSOs everywhere. A lack of data on infosecurity makes it difficult to quantify what security gets you. In lieu of numbers, information executives rely on soft ROSIs—explanations of returns that are obvious and important but impossible to verify.

Executives know the threat is real, but CIOs say executives don’t feel the threat. No one buys burglar alarms until someone they know is robbed. For that reason, IT relies on, more than anything, fear, uncertainty and doubt to sell security—in other words, FUD. The thinking is, if you scare them, they will spend.

But even FUD has limitations, especially during a recession. The signs of the down economy’s impact are everywhere. At Fidelity, the chief information security officer (CISO) position was eliminated. At State Street Global Advisors in Boston, CISO Michael Young needs four more security staffers, but there’s a hiring freeze. “If we invest in anything that promotes less downtime, that’s a positive ROI,” Young says. “But still, there’s no quantified value associated with [staffing], and that’s a problem. If I could go in there with a return on the bottom line resulting from these hires, bingo! That would be it.”

To say there’s no good ROSI data is not to say there’s no data. Numbers are indeed used to sell security; it’s just that they’ve had zero statistical validity.

The marquee example of that is the Computer Security Institute’s (CSI) annual computer crime survey. Each year, CSI and the FBI report security trends in plain, often stark terms. The 2001 report’s centerfold is a chart called “The Cost of Computer Crime.” It says that losses from computer crime for a five-year period from 1997 to 2001 were an eye-popping $1,004,135,495.

There’s just one problem with that number. “It’s crap,” says Bruce Schneier, security expert, founder and CTO of security services vendor Counterpane Internet Security in Cupertino, Calif.

“There’s absolutely no methodology behind it. The numbers are fuzzy,” agrees Bill Spernow, CISO of the Georgia Student Finance Commission in Atlanta. “If you try to justify your ROSI this way, you’ll spend as much time just trying to justify these numbers first.”

Therein lies the appeal of the current crop of studies. They apply scientific method and build on a foundation of previously established research.

Hard Numbers, at Last

In 2000 and 2001, a team at the University of Idaho followed George Parmalee’s example. The team built an intrusion detection box, a security device that sits at the edge of a network and watches for suspicious activity among users who get past the firewall. Incoming traffic that follows a certain pattern is flagged, and someone is alerted to look into it.

The researchers then hacked the box, code-named Hummer. Their goal was to prove that it’s more cost-effective to detect and then deal with attacks using intrusion detection than it is to try to prevent them using other means. The problem was assigning valid costs for this cost-benefit analysis. For instance, what does it cost to detect an incident? What are day-to-day operational costs of security? What are the cost consequences if you miss an attack?

The Idaho team, led by researcher HuaQiang Wei, began by culling research from all over. Then they combined what they found with some of their own theories, assigning values to everything from tangible assets (measured in dollars, with depreciation taken into account) to intangible assets (measured in relative value; for example, software A is three times as valuable as software B). Different types of hacks were assigned costs according to an existing and largely accepted taxonomy developed by the Department of Defense. Then Annual Loss Expectancy (ALE) was figured: an attack’s damage multiplied by its annual frequency. In other words, an attack that costs $200,000 and occurs once every two years has an ALE of $100,000.
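In code, the ALE arithmetic is a one-liner. The sketch below simply restates the formula above using the article’s example figures; the function name and structure are illustrative, not taken from the Idaho model.

    # Annual Loss Expectancy: damage per incident times expected incidents per year.
    def annual_loss_expectancy(damage_per_incident, incidents_per_year):
        return damage_per_incident * incidents_per_year

    # The example above: a $200,000 attack expected once every two years.
    print(annual_loss_expectancy(200_000, 0.5))  # 100000.0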

To verify the model, the team went about attacking their intrusion detection box with commonly attempted hacks to see if the costs the simulation produced matched the theoretical costs. They did.

Determining cost-benefit became the simple task of subtracting the security investment from the damage prevented. If you end up with a positive number, there’s a positive ROSI. And there was. An intrusion detection system that cost $40,000 and was 85 percent effective netted an ROI of $45,000 on a network that expected to lose $100,000 per year due to security breaches.
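That subtraction is just as easy to write down. This is a hedged sketch using the figures just cited; the formula (effectiveness times expected annual loss, minus the cost of the system) is an assumption consistent with the article’s numbers, not the Idaho team’s actual code.

    # Net return on a security investment: damage prevented minus what the system cost.
    def net_rosi(expected_annual_loss, effectiveness, system_cost):
        damage_prevented = expected_annual_loss * effectiveness
        return damage_prevented - system_cost

    # The Idaho example: a $40,000 system, 85 percent effective, $100,000 expected annual loss.
    print(net_rosi(100_000, 0.85, 40_000))  # 45000.0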

If applied to real-life examples, the Idaho model could produce the data that CIOs need in order to demonstrate not only that their investment pays off, but by how much. Next, the Idaho team wants to put the ROSI analysis inside Hummer. As threats are detected, the box will compare response cost against damage cost. Only if the damage cost is higher will it stop an attack. In other words, the device itself decides if it’s cost-effective to launch an emergency response.

Of course, Hummer’s data would be logged for review. Putting those features in commercial intrusion detection systems would yield reports that showed how much money CIOs saved using intrusion detection. This would then allow them to compare the costs of one security system against another. And wouldn’t that be handy?

The Value of Building Security in Early

While Idaho was toying with Hummer, a group of researchers from MIT, Stanford University and @Stake, a security consultancy located in Cambridge, Mass., was playing with Hoover.

Hoover is a database. Amassed by @Stake, it contains detailed information about software security flaws—from simple oversights to serious weaknesses. Hoover reveals an ugly truth about software design: Securitywise, it’s not very good.

Right now, Hoover contains more than 500 data entries from nearly 100 companies. Participants in the study, such as Bedford, Mass.-based RSA and Fairfax, Va.-based WebMethods, wanted to assess how securely they were building their software and how to do it better.

First, the Hoover group focused on the ROSI of secure software engineering. The group wanted to prove a concept that seems somewhat intuitive: The earlier you build security into the software engineering process, the higher your return on that investment. And prove it they did.

It took 18 months of letting Hoover suck up data from @Stake’s clients to create a representative sample of the entire software landscape. Data in hand, they looked for previous research to base their work on. There was little, so they made a critical assumption, which unlocked the study’s potential. The team decided that a security bug is no different than any other software bug.

Suddenly, security was a quality assurance game, and there was a ton of existing data and research on quality assurance and software. For example, one bit of research they used came from a widely accepted 1981 study that said that spending a dollar to fix a bug (any bug) in the design process saves $99 against fixing it during implementation.

“The idea of treating software security as quality assurance is extremely new,” according to team member and Stanford economics PhD Kevin Soo Hoo. “Security has been an add-on at the last minute, and detecting security problems has been left to users.” And, of course, hackers.

With the research in hand, Soo Hoo, MIT Sloan School of Management student Andrew Sudbury and @Stake Director Andrew Jaquith tweaked the general quality assurance models to reflect the security world, based on the Hoover data.

Overall, the average company catches only a quarter of software security holes. On average, enterprise software has seven significant bugs, four of which the software designer might choose to fix. Armed with such data, the researchers concluded that fixing those four defects during the testing phase cost $24,000. Fixing the same defects after deployment cost $160,000, nearly seven times as much.

The ROSI breakdown: Building security into software engineering at the design stage nets a 21 percent ROSI. Waiting until the implementation stage reduces that to 15 percent. At the testing stage, the ROSI falls to 12 percent.
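Put into code, those findings look like the snippet below. The dollar figures and percentages come straight from the study as reported here; the data structures and the cost-ratio calculation are merely one way to frame them, not @Stake’s model.

    # The Hoover findings as reported above; the framing is illustrative only.
    fix_cost = {"testing": 24_000, "after_deployment": 160_000}  # four significant defects
    rosi_by_phase = {"design": 0.21, "implementation": 0.15, "testing": 0.12}

    ratio = fix_cost["after_deployment"] / fix_cost["testing"]
    print(f"Fixing after deployment costs {ratio:.1f} times as much")  # about 6.7x

    for phase, value in rosi_by_phase.items():
        print(f"Security built in at the {phase} stage: ROSI of {value:.0%}")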

“Our developers have said they believe they save 30 percent by putting security in earlier, and it’s encouraging to see proof,” says Mike Hager, vice president of network security and disaster recovery at Oppenheimer Funds in Englewood, Colo. “Executives need answers to questions like, ’What risk am I mitigating?’ We haven’t had the means to educate them without FUD.” From numbers like those, he adds, “We’ll be able to sell security from a business perspective.”

Hoover keeps growing. The group plans to publish other ROSI numbers. Next up: assigning a statistically valid ROSI to incident readiness. It will (they hope) show how ROSI increases as the effective response time to a security incident decreases.

The Law of Diminishing ROSI

If you want to give CEOs and CFOs a ROSI they can love, show them a curve.

That’s what researchers at Carnegie Mellon University (CMU) did in “The Survivability of Network Systems: An Empirical Analysis.” The study is as dense and dispassionate as its title. (So are its bureaucratic underpinnings: It was done at the Software Engineering Institute in conjunction with the public-private cooperative effort called CERT, both housed at CMU.)

The study measures how survivability of attacks increases as you increase security spending. Economists call it regression analysis. It’s basically a curve showing the trade-off between what you spend and how safe you are.

To get the curve, the team relied on data from CERT, established by the government in 1988 after a virulent worm took down 10 percent of the then-very-limited public network (what would become the Internet). CERT logged security breaches and tracked threats, mostly through the volunteer efforts of the private and public organizations directly affected.

CMU researchers took all the CERT data from 1988 to 1995 and modeled it. Among the variables they defined were what attacks happened, how often, the odds any one attack would strike any given company, what damage the attacks produced, what defenses were used and how they held up.

The researchers used the data to build an engine that generated attacks on a simulated enterprise, reflecting the rate and severity of attacks in the real world. The computer program was an attack dog: CMU set it loose on a fictitious network and said, “Sic!”

Then they recorded what happened: how well the network survived the attacks. After that, the researchers tweaked the variables. Sometimes they gave the faux enterprise stronger defenses (at a higher cost). Other times they increased the probability of attack to see how the network would hold up against a more vicious dog.

An inventive aspect of the CMU study was that it didn’t treat security as a binary proposition. That is, it didn’t assume you were either hacked or not hacked. Rather it measured how much you were hacked. Survivability was defined as a state between 0 and 1, where 0 is an enterprise completely compromised by attack, and 1 is an enterprise attacked but completely unaffected. This provides a far more realistic model for the state of systems under attack than an either-or proposition.
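To make that approach concrete, here is a toy version of such a simulation. The attack rate, severities and defense behavior are invented purely for illustration; the CMU engine was driven by CERT incident data, which is not reproduced here.

    import random

    # Toy model: each unblocked attack erodes survivability; the score stays in [0, 1].
    def simulate_survivability(defense_strength, attacks_per_year=20, seed=1):
        rng = random.Random(seed)
        survivability = 1.0
        for _ in range(attacks_per_year):
            severity = rng.random()                   # 0 = nuisance, 1 = devastating
            blocked = rng.random() < defense_strength
            if not blocked:
                survivability -= 0.1 * severity       # unblocked hits degrade the enterprise
        return max(0.0, survivability)

    # Stronger (costlier) defenses yield higher survivability scores.
    for strength in (0.2, 0.5, 0.8):
        print(strength, round(simulate_survivability(strength), 2))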

The data from the simulation was plotted on a curve. The X-axis was cost, measured in abstract units (a cost of 10 is twice as much as a cost of 5, but the units have no direct analog in dollars). The Y-axis was survivability, plotted from 0 to 1.

The curve looks like smoke pouring out of a smoke stack; it rises in a sharp vertical at first, then trails off in an ever more tapering curve. The ROSI rises as you spend more, but (and this will gladden the hearts of CFOs) it rises at a diminishing rate.

The researchers believe that they could also overlay that curve with something called an indifference curve, which instead of mapping data maps behavior. It plots the points at which the CEO is satisfied with the combination of cost and survivability. The curve always slopes down and to the right, like the bottom half of a C.

Where the indifference curve and the actual ROSI curve intersect would provide the optimal security spending point. In other words, not only could you prove you need fire sprinklers, you could tell the CEO and CFO how much should be spent on them.
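As a purely illustrative sketch of that idea: neither curve below comes from the CMU study. The functional forms (an exponential diminishing-returns curve for survivability and a straight, downward-sloping indifference curve) are assumptions chosen only to show how the intersection would pin down a spending level.

    import math

    def survivability(cost):
        # Rises steeply at first, then tapers off: diminishing returns on spending.
        return 1 - math.exp(-0.3 * cost)

    def ceo_indifference(cost):
        # Survivability the CEO demands at a given cost; slopes down and to the right.
        return max(0.0, 0.95 - 0.04 * cost)

    # Walk along the cost axis until what is achievable meets what is demanded.
    cost = 0.0
    while survivability(cost) < ceo_indifference(cost):
        cost += 0.01
    print(f"Optimal spending point: roughly {cost:.2f} in the study's abstract cost units")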

Green Data = Skepticism

Most information executives and security experts believe these ROSI studies will be a significant new tool. But a certain caution lingers. Some CIOs point out that the studies are useless as raw documents; they require translation before the data hits their desks. Several executives also worry about applicability—taking the data out of the lab and putting it in the real world. “The worst thing is for people to say security requires a trillion dollars, and then offer no solution in the real world,” says Micki Krause, director of information security at PacifiCare Health Systems, an HMO in Santa Ana, Calif.

The data itself is also a concern. The CERT data used in CMU’s models went only to 1995, for example, and the types and frequency of attacks have changed since then. And while Hoover, @Stake’s database, provides gritty details about security holes in software, those details come only from companies willing to participate. Is that representative?

In risk management parlance, the actuarial data is quite green, and CIOs bemoan that fact. The rub is, you can’t just collect data about security the way you can about auto accidents. More CIOs must agree to disclose detailed data about the state of their own security in order to build a portfolio of numbers that will test the early theories.

CIOs want proof, yet they don’t want to be the ones providing the data that will improve the science. Those collecting data have promised privacy in exchange for the knowledge of what the enterprise is spending on security, but it’s slow going getting recruits. “At CERT we’ve protected confidentiality for 12 years. But it’s so hard because they keep [data] to themselves,” says Jim McCurley, a member of the technical staff at the Software Engineering Institute.

Despite all this, security experts such as Georgetown’s Denning believe that these studies are the beginning of a golden age in information security, with the potential to change every aspect of security—from how it’s built, to how it’s perceived in the enterprise, to how it’s paid for.

Such research could set off a chain reaction. First, ROSI numbers could be used to convince executives to invest in security, thereby spurring the development of new technologies and the hiring of more knowledgeable security workers.

Then, as the studies are repeated and improved, insurance companies could use the ROSI numbers to create “hacking insurance,” with adjustable rates based on what security you employ. Dave O’Neill will be one of the people writing those insurance plans over the next year. Currently, as vice president of e-commerce solutions, he writes plans for general e-commerce insurance for Schaumburg, Ill.-based Zurich North America. Today, he confesses, the rates for such plans are mostly set by guesswork. Zurich bases its premiums largely on a 58-question yes-or-no survey, with questions such as “Are security logs reviewed at least daily for suspicious activities?”

“From our perspective this will change by the end of 2002. It will be a whole different landscape. We’ll know much more scientifically how to do this,” says O’Neill. “What it boils down to is getting credible data.”

The insurance industry in all likelihood will be the engine that drives both the science of ROSI and the technology of security. All other factors being equal, the insurance discounts will eventually make one Web server a better buy than another. Software vendors will be forced to fix the holes in their products in order to benefit from lower premiums.

In fact, that is precisely what happened with fire sprinklers. Shortly after Parmalee’s fiery demonstration, British insurance carriers began offering discounts to mill owners who bought sprinklers and deeper discounts to owners with more advanced sprinkler systems. Naturally, insurance rates rose on mills without them.

Ultimately, because it made no business sense not to invest in fire sprinklers, everyone had them. And mill owners could stop thinking about fires and start thinking about their business.