How to rethink security risk analysis
Why formulating the right kind of question is crucial to measuring risk.
In the spring of 2010, I came across a story about French mathematicians in the mid-17th century and how they would discuss their work, ideas, and challenges with each other. This was an exciting period of discovery for mathematics in France: Descartes developed the Cartesian coordinate system (the x/y plots we still create today), and Pascal and Fermat exchanged their famous letters about the “problem of points”, which led to the first treatise on probability theory, by Christiaan Huygens. As I read about these now-famous people and their correspondence and discussions, I felt a little envious. The thought of such a gathering of great minds was particularly appealing to me because I had been struggling with my own challenges in performing risk analyses in cybersecurity.
I had been searching in vain for an answer to what seemed like a rather simple problem: Given a business application (or server or whatever), how can I measure the risk it presents to my company? All of the guidance I found focused on risk frameworks and risk management practices. Every single document I read glossed over the part about actually measuring risk. After reading through everything I could get my hands on, I was completely frustrated at the archaic status of risk analysis in cybersecurity. That’s when I met Alex Hutton.
As I told Alex about the 17th-century French mathematicians and their discussions and discoveries, we lamented the lack of such exchanges in our own time. We continued our correspondence, and a few weeks later Alex had the bright idea of getting a few folks together to tackle this risk analysis problem. At the end of May 2010, Alex posted on the New School of Information Security blog announcing that we were starting the Society of Information Risk Analysts (SIRA). We thought we’d get maybe a dozen friends and colleagues together to talk about this challenge. Today the mailing list has grown to over 600 participants, and our last conference drew over 100 attendees.
SIRA has grown into a large group of (relatively) like-minded individuals, all wanting to learn more about the risk landscape. We have made some important strides in risk analysis, and I think even the most ardent critics would agree we are onto something as a community. We’ve made great progress in how we think about risk and in frameworks for quantifying it. But even the most passionate members would agree we still have a long way to travel. Risk matrices are used too often as a crutch, and their pitfalls (PDF) are, unfortunately, not well known or, worse, ignored. Yet even with the community and the progress, we still dance around this topic of measuring risk and my basic challenge: Given a business application (or server or whatever), how can I measure the risk it presents to my company?
That was six years ago, and since then I have backed off my head-on approach to solving my challenge of measuring risk and turned to other fields. I went on to study statistics, do data analysis and visualization full time, and co-author the Verizon Data Breach Investigations Report for four years. This experience, coupled with ongoing discussions with my SIRA colleagues, gave me enough distance to see the forest for the trees, and I have managed to learn a trick or two about measuring risk.
I’ve learned that if we stare at a business application trying to measure the risk it presents to a company, we won’t find a good answer, because we’re hyper-focused on a small fraction of the whole picture. We simply cannot measure risk by looking at any one application or server alone. To measure security risk, we must think of our one application (or server or whatever) as a member of a larger population and try to learn about that population. This led me to realize that the original challenge I posed is all wrong: it’s set up to fail.
Good analysis starts with a question, and that question can make or break an analysis. Want to know how much risk an application presents? The analysis is already doomed. Instead, think of a question that has these three qualities: (1) it is objective and measurable, (2) it can be answered within budget, and (3) someone wants to know the answer.
Bill James, the famous sabermetrician featured in Moneyball, said, “My job was to find questions about baseball that have objective answers; that’s all that I do; that’s all that I’ve done.” He didn’t set out to create baseball metrics; he simply started with a question. Something like, “Does stealing second base lead to more runs?” We can and should apply the same principle to security. Rather than ask how much risk an application has, we should ask how many tickets have been opened on an application. Or we could look specifically for security incidents, and even think bigger: “How many incidents have involved the application layer?” Questions like these frame an analysis and set it up to be objective and repeatable.
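To make this concrete, here is a minimal sketch of what answering such countable questions looks like in practice. The ticket records and field names (`app`, `type`, `layer`) are hypothetical, standing in for whatever your ticketing system exports:

```python
from collections import Counter

# Hypothetical ticket log; field names are illustrative, not from any
# particular ticketing system.
tickets = [
    {"app": "billing", "type": "incident", "layer": "application"},
    {"app": "billing", "type": "request",  "layer": None},
    {"app": "portal",  "type": "incident", "layer": "network"},
    {"app": "portal",  "type": "incident", "layer": "application"},
    {"app": "billing", "type": "incident", "layer": "application"},
]

# Q1: How many tickets have been opened on each application?
tickets_per_app = Counter(t["app"] for t in tickets)

# Q2: How many incidents have involved the application layer?
app_layer_incidents = sum(
    1 for t in tickets
    if t["type"] == "incident" and t["layer"] == "application"
)

print(tickets_per_app)      # counts per application
print(app_layer_incidents)  # 3
```

The point is not the code but the shape of the question: each one has an objective answer that falls out of data you already collect, and running the same query next quarter makes the analysis repeatable.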
While it’d be great to have all the data at hand and ready for analysis, the world doesn’t work that way. We have to collect and often prepare the data before we can learn from it, and that has a real cost (in time, resources, infrastructure) associated with it. The question we create should take this into account. It may be great to know the details of every breach in the world involving an application like ours, but that won’t happen. Perhaps using the list of events and security incidents on the application (or all applications) will help set the frame.
I know this seems like a no-brainer, but take a look at a metrics program and what’s being collected, and ask who cares. The classic example is how much spam is being blocked at the email server. While this seems interesting, and one could watch it change over time, nobody cares (unless you are engineering the spam filter). Instead, people care about the messages that aren’t blocked and their impact on employees’ time and productivity. One sanity check is a thought exercise: if the answer were at either extreme (bad or good, high or low), would anyone make a different decision? If not, then no one cares about the answer.
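That thought exercise can be sketched as a tiny check: feed a metric’s decision rule both extremes and see whether the decision ever changes. The decision rules and the threshold of 100 below are invented for illustration only:

```python
def decision_changes(decide, low, high):
    """Would extreme answers to a metric's question lead to different
    decisions? If not, nobody cares about the metric."""
    return decide(low) != decide(high)

# Metric 1: spam blocked at the email server. Whatever the count,
# the action is the same -- keep the filter running.
blocked_decision = lambda count: "keep filter running"

# Metric 2: spam reaching inboxes. A high count triggers a real
# decision (the threshold of 100 is a made-up example).
unblocked_decision = lambda count: "tune filter" if count > 100 else "no action"

print(decision_changes(blocked_decision, 0, 1_000_000))    # False: nobody cares
print(decision_changes(unblocked_decision, 0, 1_000_000))  # True: worth measuring
```

A metric whose decision rule is a constant function fails the check immediately, which is exactly the spam-blocked example above.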
Framing any analysis as a series of one or more questions (with objective answers) will undoubtedly help set the frame; just be sure the questions are set before the analysis is started. With that in mind, consider turning this around and using the points above to evaluate the questions behind analyses already done. Trying to work backward to find the question after the analysis is complete can be a sobering exercise. Often, the questions being asked cannot be answered objectively or, worse, nobody cares about the answers.
Like the mathematicians advancing their world in the 17th century, we have an opportunity to improve the state of security risk analysis if we work together. Setting the frame with objective and measurable questions is just the beginning of any good analysis. We can overcome many other challenges by pooling our collective experience and talking with each other. The friends and colleagues I’ve come to lean on through SIRA have become invaluable in my work.
Security risk analysis is a challenge for all of us. Much like the development of mathematics in 17th-century France, we have an opportunity to learn from each other and tackle our collective challenges in risk analysis. Consider getting involved and talking with others about the challenges you are facing. Perhaps you can participate in a group like SIRA. Perhaps you can attend an upcoming conference and learn about the challenges others are facing. Perhaps you’ve already had some successes (or even some interesting failures); consider sharing those by submitting a talk to a conference. We can and will make great advancements through communication and our combined perspectives. Get out there and start talking!