How methodically do you track what you learn from past experience about mitigating risks? Score your answers on a scale of 0 to 2, where 0 means you and your business partners understand this risk and its contributing factors less well than the other risks on your list; 1 means your understanding is about average; and 2 means you understand it better than the other risks. Add up your answers for all five questions. Scores fall between 0 and 10; a 5 means you think your ability to weigh a risk is average across the five factors. It doesn’t matter whether you’re a tough or an easy grader: What you’re doing is ranking your risk competence.
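As a minimal sketch, the tally works like this (the function name is illustrative, not part of the article's vocabulary; it simply sums five 0-to-2 grades into the 0-to-10 score described above):

```python
def risk_intelligence_score(answers):
    """Sum five answers, each graded 0-2, into a 0-10 risk intelligence score."""
    if len(answers) != 5:
        raise ValueError("expected answers to all five questions")
    if any(a not in (0, 1, 2) for a in answers):
        raise ValueError("each answer must be 0, 1, or 2")
    return sum(answers)

# Grading yourself average (1) on every factor yields the midpoint score of 5.
print(risk_intelligence_score([1, 1, 1, 1, 1]))  # 5
```

Because the score is a simple sum, a tough grader and an easy grader still produce the same *ranking* of risks, which is what matters here.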
Now rank your organization’s information security risks by their risk intelligence score. You may want to allocate more mitigation resources to the ones that score the lowest, because these are the ones you are worst at assessing. For larger companies, it may be important to score the risk intelligence of each business unit facing a single risk. In this way, you can figure out which business unit has the clearest understanding of the threat, though you may still allocate more resources to the unit that scores the lowest.
By the way, this is the opposite of the conclusion you’d draw for elective projects. It makes sense to pursue discretionary projects that pose risks we’re good at assessing. But when the risks are unavoidable, the question is different. We need to focus on the risks—or the parts of the business—where we’re most likely to make a mistake.
How Assessments Help Decision Making
Here’s how to apply the risk intelligence methodology. Suppose your company has been spooked by recent security breaches that have compromised customer data. You’re trying to figure out just how much—and where—to invest in security safeguards. The company’s network has never been breached, although a competitor’s customer database was compromised and the story was all over the news. Closer to home, a laptop was stolen from a salesperson’s car a few weeks earlier.
So you ask the heads of your company’s business units (let’s say there are three) what would be their worst-case loss for a security breach. Compared to their revenue, the estimate from business unit A seems too large, B seems too small, and C falls between A and B. You want to judge who is most likely to be accurate, so you score the risk intelligence of each of the three business unit leaders.
The business leaders have different amounts of experience with security breaches. Because of the volume of its customer data, you give a 2 to business unit A, meaning a lot of potentially valuable experience. You give B and C each a 1 because their experience is about average for their business segments—they keep track of the problem but haven’t suffered a breach so far.
Next you ask how surprising the experience of each of these business units tends to be. The salesperson who lost the laptop works for A, so A gets another 2. B hasn’t typically attracted privacy threats, so it gets a 0. C gets a 1 because its experience in this area is about as surprising as that of most companies.
Now evaluate how relevant this experience is. You believe the number of integrated customer files is a big factor. A keeps each set of data in separate systems, so it gets a 0. B has both multiple- and single-file customer systems; it gets a 2 because this experience should be highly relevant to whether the integration of files really matters. C’s experience seems average, so you assign a 1.
And so on. Tallying the scores, it turns out A has the best understanding of the magnitude of your company’s problem with security breaches. Thus, you apply A’s standard for evaluating the risk to the whole company. But you decide to pilot new security systems with C because there’s reason to expect it is least prepared to deal with the risk of a security breach.
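The tallies for the three factors walked through above can be sketched as follows. Note the totals cover only the three factors discussed; the article's "and so on" implies further factors that would separate B from C in a full scoring:

```python
# Scores from the worked example: experience volume, surprise of
# experience, and relevance of experience, for each business unit.
scores = {
    "A": {"experience": 2, "surprise": 2, "relevance": 0},
    "B": {"experience": 1, "surprise": 0, "relevance": 2},
    "C": {"experience": 1, "surprise": 1, "relevance": 1},
}

# Total each unit's risk intelligence across the factors shown.
totals = {unit: sum(s.values()) for unit, s in scores.items()}
print(totals)  # {'A': 4, 'B': 3, 'C': 3}

# The highest scorer's worst-case estimate is the most reliable one
# to apply company-wide; the lowest scorers are the candidates for
# extra mitigation resources and pilots.
best = max(totals, key=totals.get)
print(best)  # A
```

On these partial totals, A clearly leads, which is why A's standard for evaluating the risk is applied to the whole company.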
Risk intelligence analysis does not replace the exercise of judgment in prioritizing security or any other IT-related risks. But laying out the main issues—the worst-case loss assessments and the reliability of those assessments—helps you apply your judgment systematically. And it provides a basis for discussing with your executive colleagues the key trade-offs in your risk management strategy.
David Apgar is the author of Risk Intelligence: Learning to Manage What We Don’t Know.