Dan Kaminsky on creating an NIH for the security industry

The O’Reilly Security Podcast: Coarse-grained security, embracing the ephemeral, and empathy for everyone.

By Courtney Nash
October 12, 2016
Sand Castle at Cannon Beach. (source: Curt Smith on Flickr)

In this episode, I talk with Dan Kaminsky, founder and chief scientist at White Ops. We discuss what a National Institutes of Health (NIH) for security would look like, the pros and cons of Docker and ephemeral solutions, and how the mere act of listening to people better can improve security for everyone.

Here are some highlights:


Creating an NIH for security research

The hard truth is that there are just societal-scale problems: cities burn, people need to transit from one location to another, we need food that doesn’t poison us. The reality is that there are problems that affect all of us when they’re present. The Internet is not a safe place right now, and, more importantly, the tools we’re using to interact with it are relatively broken. This is a problem, but we shouldn’t be ashamed.

I think we need to have a larger-scale response to the problems of the Internet. It has been a tremendous boon to our society. It is the heart of our economic growth. It’s the greatest growth since the Industrial Revolution, but it’s got some problems that we’re not just going to guilt people out of. We’ve got to do some engineering work. We’re going to have to share a lot more. The FBI has crime statistics, and that data is incredibly useful on a societal scale. We need that same lack of shame about the fact that things are burning, so we can say, ‘Yeah, this breach, here’s what happened.’ Let’s do some month-long investigations about what happened. Get that data out there and try to respond to it. This is not the first time we’ve had problems in an important technology, and it won’t be the last, but let’s actually work on it. The reason I talk about the NIH is because they actually fund work on these sorts of problems, and things do get better.

Coarse-grained security

We’ve been trying to build these incredibly fine-grained security models based on the presumption that every little bit of a system potentially needs to talk to every other little bit of a system. You get what in Windows we call ‘ACL hell,’ for the access control lists. They just get enormous. Linux has SELinux. These are all very, very fine-grained systems, and I don’t think they work. I’ve become a real fan of coarse-grained security, where there are well-defined interfaces and known good state.

For example—this is a real-world thing that happens in operations—you have a bunch of machines you know get compromised from time to time, and you know developers need to access them, even when they’re not compromised, to understand why they’re slow, why they’re crashing, why they’re unusable, whatever. How do you let your developers, who have very sensitive desktops, access these machines in the data center? You do things where the only signal that gets through is a keyboard, a mouse, a screen. In fact, that might literally be a remote desktop connection that goes to a device where that’s all it transmits: keyboard, video, mouse. You don’t let the developer desktop talk directly over IP to the machine you know probably got compromised. What you’ve done there is you’ve squeezed the signal down to almost nothing—to a deeply well-defined interface. When the only thing it can do is what it’s supposed to do operationally, you get some security properties that people can reason about. They can think about what the system is doing. A lot of security is just making computers behave like people think they’re behaving.
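One way to picture “squeezing the signal down” is as a network policy. The fragment below is a hypothetical nftables ruleset (the addresses, the gateway host, and the choice of RDP on port 3389 are all assumptions for illustration, not anything described in the interview): developer desktops can reach the data center only through a single remote-desktop gateway, and every direct IP path is dropped by default.

```
# Hypothetical nftables sketch: developer desktops (10.0.1.0/24) may reach
# data-center machines only via a remote-desktop gateway (10.0.2.10).
table inet coarse_grained {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # The only signal that gets through: remote desktop to the gateway.
        ip saddr 10.0.1.0/24 ip daddr 10.0.2.10 tcp dport 3389 accept

        # Everything else from the desktops to the data center hits the
        # default drop policy, so no direct IP path to a compromised
        # machine exists.
    }
}
```

The security property here is exactly the one described above: the interface is so narrow that a person can reason about everything it permits.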

Getting rid of the goop

There’s alignment between what developers want and what security wants. Everyone wants the system to behave in a predictable manner. Now, as a deployment methodology, Docker’s got some really cool things. As a security mechanism, there’s a lot of goop that we sort of paper over—stuff that’s being shared between the one kernel and the many user spaces. No one quite knows what the goop is, what needs to be saved, what needs to be restored, what needs to be secured. No one quite knows all the states and all the information being exchanged between the isolated environment and the important kernel. That’s a big deal; that ambiguity is exactly what makes this rough to secure, because that’s where the hackers hide. Whatever you don’t know you’re tracking, they’ll go find.

Embracing the ephemeral

No one wants to go back to the way things worked in virtual machines. No one has enough disk space for that. But there are a bunch of really good properties in the virtual machine architecture. We don’t need to do deployments like the old VMs, but we can use the properties of the actual hardware.

I’ve been exploring that myself. I have this mechanism called Autoclave. Autoclave is basically doing a bunch of stunts between containers and VMs. I have full Linux and Windows environments booting up in less than a quarter of a second—fully functional, fully operational, fully ephemeral. You go in, you do whatever you’re going to do, you leave, the thing’s destroyed. The goal I want to get to is when you interact with a server, on connecting to it, a virtual machine spawns—you do your business, it leaves, it’s okay. So, then there are these architectural stunts you can play where it’s just, ‘I want you to do the same thing you did before. I want you to do it on every connection. I want you to do it efficiently, and I don’t want you to throw away everything after.’ This is actually technically feasible. I’ve been playing with it, and I’m going to demonstrate it at the O’Reilly Security Conference in New York next month.
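Autoclave itself isn’t shown here, but the per-connection, throw-everything-away pattern can be sketched in miniature with Python’s standard library (the handler and names below are illustrative assumptions, not Kaminsky’s code): each TCP connection gets a fresh scratch environment that is destroyed the moment the session ends, so nothing persists from one connection to the next.

```python
import shutil
import socketserver
import tempfile

class EphemeralHandler(socketserver.BaseRequestHandler):
    """Give every connection a fresh environment; destroy it on disconnect."""

    last_workdir = None  # recorded only so callers can verify teardown

    def handle(self):
        # A brand-new scratch space spawns for this connection alone.
        workdir = tempfile.mkdtemp(prefix="ephemeral-")
        EphemeralHandler.last_workdir = workdir
        try:
            # The session does all of its work inside its private space;
            # here it just echoes the request back, uppercased.
            data = self.request.recv(1024)
            self.request.sendall(data.upper())
        finally:
            # Connection over: the whole environment is destroyed.
            shutil.rmtree(workdir, ignore_errors=True)
```

A real implementation would spawn a VM or container instead of a temp directory, but the lifecycle is the same: spawn on connect, do your business, destroy on disconnect.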

Empathy builds better solutions

Empathy is actually caring about someone else’s problems. Empathy is how you make things that don’t suck. It is the process of putting your mind in someone else’s life experience and thinking, ‘Okay, this is where you’re coming from. What do you need?’ Because you know no one wants to get hacked. People have a budget for not suffering that. People don’t want their houses to burn. People don’t want their bank accounts emptied. It’s not that we don’t have buy-in, it’s just that we have to change things so that the first thing you do when you get home doesn’t have to be figuring out how to keep your house from burning down tomorrow. That’s where we are in security right now, and it’s not okay. We should figure out how we can integrate in other people’s lives.
