Useful and legitimate metrics have long evaded the information security community as a whole. Without proper metrics, you cannot truly prove the value of a security program. This makes it difficult to justify increasing the budget, or even to maintain the budget you have.
Security awareness is especially vulnerable to criticism of its value. We take for granted all of the times employees do not click on a phishing email or otherwise exercise good judgment. It is also hard to count the incidents that were prevented, because awareness removed the human vulnerability before it could be exploited.
Even with the best awareness program in place, as with all security countermeasures, there will be failures, and it will be easy to point to the cost of the failures. So it is essentially impossible to prove all of the losses your security awareness program prevented and the money that you saved, while the failures make themselves apparent. For that reason, it is important to determine how to measure improvements in security awareness and the savings generated by those improvements.
As our past articles stress, it is critical to understand the difference between awareness and training. Training provides a fixed body of knowledge, while awareness intends to change behaviors. Awareness efforts also impart knowledge to students, but that knowledge is irrelevant if it does not result in the desired behaviors. With training, as long as the fixed body of knowledge was provided, and perhaps the students passed a test regurgitating basic facts, the training is considered successful.
The most common metrics associated with security awareness do not help justify security awareness efforts. The most common metric is simply whether or not a person took a mandatory computer-based training (CBT) course. Completing a mandatory course of varying length and quality does little to demonstrate whether the students understand the material, and more important, whether they put the training into practice by changing their behaviors. It does, however, satisfy compliance requirements, which generally say that an organization must provide awareness training, without regard to effectiveness or results. This is not to say that CBT is ineffective or should not be a part of an awareness program; rather, the metrics associated with the training do not represent an actual improvement in awareness.
Then there are attack simulations, which inherently produce metrics. Such simulations include phishing simulations, USB drops, and social engineering simulations. However, a reduced rate of "success" might not tell you as much as you think. For example, even if you get zero clicks on a phishing simulation, the result is of limited use if the users simply know not to click on one very basic pretext. There is a broad range of attack sophistication, and simulation results only apply to the specific pretext(s) used.
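One practical consequence is that simulation results should be broken out per pretext rather than reported as a single aggregate click rate. The sketch below illustrates the idea with entirely hypothetical campaign data (the pretext names and counts are made up for illustration):

```python
from collections import defaultdict

# Hypothetical simulation results as (pretext, clicked) pairs.
# The pretext names and outcomes below are illustrative, not real data.
results = [
    ("generic prize offer", True), ("generic prize offer", False),
    ("generic prize offer", False), ("generic prize offer", False),
    ("spoofed executive request", True), ("spoofed executive request", True),
    ("spoofed executive request", True), ("spoofed executive request", False),
]

def click_rates_by_pretext(results):
    """Return the click rate per pretext, since results only
    generalize to the specific pretext that was tested."""
    totals = defaultdict(lambda: [0, 0])  # pretext -> [clicks, sends]
    for pretext, clicked in results:
        totals[pretext][0] += int(clicked)
        totals[pretext][1] += 1
    return {p: clicks / sends for p, (clicks, sends) in totals.items()}

rates = click_rates_by_pretext(results)
```

With data like this, a single aggregate rate of 50% would hide the fact that a basic pretext is well handled while a more sophisticated one is not.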
CBT and simulations aside, there are more tangible metrics to collect that will be far more useful in determining actual awareness and in optimizing budgets.
Metrics that help optimize budgets are perhaps the most useful to collect. An additional driver in collecting metrics is to determine which components of your awareness program are actually being used and having an impact. An easy example is embedding analytics in emails and webpages. You can gauge whether people are actually using the components that you are spending time and money to create. If they are not, you should quickly determine why, and either address the problem or stop investing in those components.
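For web-based components, one common way to embed this kind of analytics is to tag every link in a newsletter or email with campaign parameters, so clicks can be attributed to the component that produced them. A minimal sketch, assuming UTM-style parameter names (the URL and campaign names are hypothetical; adjust the parameters to whatever your analytics platform expects):

```python
from urllib.parse import urlencode, urlsplit

def tagged_link(base_url, campaign, medium):
    """Append campaign-tracking parameters so that clicks can be
    attributed to a specific awareness component in web analytics."""
    # Parameter names follow the common UTM convention; they are an
    # assumption here, not a requirement of any particular platform.
    query = urlencode({"utm_source": "awareness",
                       "utm_medium": medium,
                       "utm_campaign": campaign})
    sep = "&" if urlsplit(base_url).query else "?"
    return f"{base_url}{sep}{query}"

# Hypothetical intranet page linked from a quarterly newsletter.
link = tagged_link("https://intranet.example.com/phishing-tips",
                   "q3-newsletter", "email")
```

Each awareness component gets its own campaign value, so the analytics dashboard shows which components drive engagement and which are ignored.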
Some components, like posters, coffee sleeves, and other physical (commonly paper) awareness materials, are harder to measure. In this case, placing QR codes on those materials can give an indication as to how many people are engaging with these components. If you hold events, it is relatively easy to count the number of people who attend the events.
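If each QR code points at its own unique short URL, engagement per physical material can be estimated simply by tallying hits on those paths in the web server log. A minimal sketch, with hypothetical log lines and a made-up `/qr/` path convention:

```python
from collections import Counter

# Hypothetical web-server log lines; each QR code on a poster or
# coffee sleeve points to a unique /qr/ path, so successful hits per
# path approximate scans per material. All values are illustrative.
log_lines = [
    "10.0.0.5 GET /qr/breakroom-poster 200",
    "10.0.0.9 GET /qr/breakroom-poster 200",
    "10.0.0.7 GET /qr/coffee-sleeve 200",
    "10.0.0.5 GET /qr/lobby-poster 404",
]

def scans_per_material(lines):
    """Count successful (HTTP 200) hits on /qr/ paths per material."""
    counts = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) == 4 and parts[2].startswith("/qr/") and parts[3] == "200":
            counts[parts[2].removeprefix("/qr/")] += 1
    return counts

counts = scans_per_material(log_lines)
```

The same tally can feed the budget discussion above: materials with near-zero scans are candidates for redesign or retirement.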
These metrics indicate where your successes or failures are. Time and money can therefore be adjusted accordingly. Clearly, some analysis needs to be applied to the metrics. For example, if your analytics on email newsletters are low, but you find that the newsletter is being printed and passed around, it is clearly reaching more people than the analytics alone indicate.
Now that you know you've gotten your employees' attention, the next type of metrics is actually more important in demonstrating that you are achieving your awareness goals. Specifically, you want to ensure that you are changing behaviors. That is a very key distinction. You need to measure actual behaviors.
This specifically excludes measuring how many people take a particular training course and any results of the associated test scores. As most security practitioners can attest to, the fact that people say that they know that they should not give out their password does not mean that they will not give out their password when actually asked for it. You need to measure the desired behavior in practice, not knowledge.
For that reason, you need to determine how to measure root behaviors. We have performed extensive research to determine 17 root awareness behaviors and how to measure them. While some behavioral measurements are obvious, such as running a social engineering simulation to determine whether a person divulges their password to a stranger, others, such as mobile device security and phishing, require more thought and depend upon the resources available to the organization.
Additionally, metrics can frequently be misleading. Phishing simulations produce numbers, but those numbers only measure susceptibility to the pretext being used. If a phishing message involves a cat video, you can only generalize the results to people who click on phishing messages containing cat videos; the results say nothing about awareness of phishing messages that appear to come from an executive stating that the recipient must review an attached document.
Another key component of successful metrics gathering is collecting metrics proactively, before beginning an awareness effort. Only by collecting Day 0 metrics can you know whether or not your program had an impact. While it is ideal to see your results increasing the desired behaviors, it is just as important to know when they are not. Only then can you know that you have to improve, and get direction on how to do so.
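The comparison against the Day 0 baseline is simple arithmetic, but it is worth making explicit, because without the baseline there is nothing to compare against. A sketch, using hypothetical reporting rates (the 40% and 50% figures are illustrative):

```python
def behavior_change(baseline_rate, current_rate):
    """Relative change in a measured behavior versus the Day 0 baseline.
    A positive value means improvement for behaviors measured as
    'good' rates (e.g., the fraction of staff reporting a phish)."""
    if baseline_rate == 0:
        raise ValueError("a nonzero Day 0 baseline is required")
    return (current_rate - baseline_rate) / baseline_rate

# Hypothetical: 40% of staff reported a simulated phish at Day 0,
# and 50% report it after the awareness effort.
change = behavior_change(0.40, 0.50)  # roughly 0.25, a 25% relative gain
```

The same function flags regressions: a negative result tells you the program is not working and the component in question needs rethinking.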
Making it Tangible
Assuming that you are getting the desired results, the next step is to attempt to put a cost savings on the effort. This requires estimating the incidents that are prevented, along with the associated costs. For example, if you estimate that you reduced network malware (a tangible symptom of phishing) by 5 incidents per month and estimate that each incident costs $10,000, you can claim to save your organization $50,000 per month. In practice this is more complicated than described here, but the underlying principle should be applied in every security discipline, not just awareness.
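The arithmetic above can be captured in a trivial helper, which keeps the estimate honest by forcing you to state both inputs explicitly (the incident count and per-incident cost are the article's illustrative figures, not measured values):

```python
def monthly_savings(incidents_prevented_per_month, cost_per_incident):
    """Estimated monthly savings = prevented incidents x average cost
    per incident. Both inputs are estimates and should be documented."""
    return incidents_prevented_per_month * cost_per_incident

# Figures from the example above: 5 fewer malware incidents per month
# at an estimated $10,000 each.
savings = monthly_savings(5, 10_000)  # 50000 per month
```

Both inputs deserve scrutiny: the incident reduction should come from your before-and-after metrics, and the per-incident cost from incident response records, not guesswork.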
Security is about cost-effectively mitigating risks, not preventing all compromises. This is especially true of security awareness. Whenever you are dealing with people, there will always be some security-related loss. Good security awareness programs will save an organization exponentially more in reduced losses than they cost. Metrics will allow you to demonstrate this and prove the value of everything else that you do.
This story, "4 Ways Metrics Can Improve Security Awareness Programs" was originally published by CSO.