In this episode, I talk with Jack Whitsitt, senior strategist at EnergySec. We discuss the ways in which language can either divide or unite people and organizations, the illusion of control when it comes to security, and how any model or framework for security must include people in order to have any chance of success.
Here are some highlights:
Language can unite (or divide)
I think language is a huge, huge part of the cyber security problems we face right now. You can get people in a room, and they're using the same words but meaning different things. They're not actually effectively making their world a better place. “Cyber” versus “information” security is something I talk about a lot. When you look at it, it's unhelpful to say, "Well, that word doesn't mean what you think it does," and to kind of ostracize that set of thinking from your worldview. If we can't socialize common language and figure out what the big picture looks like, we're going to have a tough time making progress.
Securing your network vs. securing your business
There's an important linguistic distinction between securing your network and securing your business. When we talk about language, your CFO or CEO, they don't care about their network. They really don't, nor should they. They want to keep producing the value they want to produce, and focus on the costs they're willing to invest in that. What you talk about, as an information security professional, should be focused on helping them produce that value. Whether or not somebody can get into your network on a Tuesday at 5 p.m. is potentially unimportant to their worldview, and the language that they use, and the things that they care about.
The illusion of control
I actually believe, to some extent, information security is a symptom. It's an outcome of a larger problem, as opposed to a causal factor. As information security professionals, by and large, we don't control our budget exposure; the kind of exposure to cyber security risk that we face is created largely outside of our span of influence. I think we have much less control than we think we do over the security of our environment. Unless we begin offloading it into the rest of the business, in a much more substantial and meaningful way than we have in the past—as we add lines of code, as we add complexity, as we add connectivity, and as we add consequence, as all of that escalates—it's going to be increasingly hard to even look like we're doing a particularly good job of keeping things secure and stable.
Modeling people in your systems
Unless you include the people, and how they behave—the decisions they make, what their psychological constraints are, what their cultural constraints are, what their political and legal constraints are—in that conversation, in that threat model, then you're not actually modelling the security state, or the threats to your system. You're only modelling a piece of it, and there's only so far you can go in defending that when you limit your scope like that. We can isolate ourselves and talk about trust perimeters, but the world doesn't work that way. There's something larger than the models we've used so far that's at play.