Thinking with containers: 3 tips for moving to Docker

The O’Reilly Podcast: Sean P. Kane discusses how to get your team using Docker in the real world.

By Brian Anderson
August 16, 2016
Parked truck (source: Unsplash via Pixabay)

Spinning up containers is one thing, but how do you actually use Docker as a team? This episode of the O’Reilly Podcast features my discussion with Sean P. Kane, lead site reliability engineer at New Relic. We talk about what Docker offers to your team, and how you can realistically adopt it in your organization.

The full conversation is available through the embedded audio. Highlights from the discussion are noted below.

Common misconceptions people bring to Docker

Obviously, like any tool, Docker is not going to solve all of your problems. It’s not some magical tool that makes life perfect, but used correctly it can put a serious dent in a lot of the problems people have around the deployment workflow, basically that whole pipeline. Docker isn’t virtualization; that’s a common misconception. It doesn’t actually virtualize at all: all of your processes run directly on top of the Linux kernel. So, as an example, with something like VMware or KVM you could run Windows on top of a Linux server, but with Docker you can only run Linux binaries inside a Linux container, because everything is still running natively on top of the kernel.
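
A quick way to see this for yourself: a process started in a container shows up in the host’s ordinary process table, because there is no hypervisor in between. A minimal sketch, assuming a Linux host with Docker installed (the image and container name are illustrative):

    # Start a long-running process inside a container
    docker run -d --name kernel-demo alpine sleep 600

    # On the host, that same process is visible in the normal process table,
    # because the container shares the host's Linux kernel
    ps aux | grep 'sleep 600'

    # Clean up the demo container
    docker rm -f kernel-demo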

The real value proposition of Docker

In a phrase: it’s the development pipeline. Streamlining that pipeline is a big plus.

There’s also this ability to take all of these application dependencies, combine them together, and deliver them as basically one large artifact that doesn’t need to be rebuilt or repackaged by any other team. Once a developer has pushed it up to the repository, it can be pulled down and used by your Jenkins job automatically, and then pulled down again by your deployment process and deployed out into production. Docker can make it incredibly simple to redeploy applications during regular operations or emergencies.
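
As a rough sketch of that flow, assuming a private registry at registry.example.com and a hypothetical myapp image (all names and tags here are illustrative):

    # A developer builds the image once, with all dependencies baked in,
    # and pushes that single artifact up to a shared registry
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # A Jenkins job pulls down exactly the same artifact for testing...
    docker pull registry.example.com/myapp:1.0

    # ...and the deployment process pulls it again and runs it in production
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:1.0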

In our environment, we wrote a small wrapper tool for some of our work called Centurion, an open-source project intended to be a very simple client that developers can use to manage and deploy their applications to Docker. This was long before most of the orchestration tools existed or were widely used with Docker. That tool allowed us as a company to empower anyone to redeploy an application, even without knowing anything about the application itself.
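
Centurion lives on GitHub at github.com/newrelic/centurion and is driven from the command line; an invocation looks roughly like the one below. The project and environment names are hypothetical, and the exact flags depend on your Centurion version, so treat this as illustrative rather than definitive:

    # Deploy the configured image for a project to a given environment
    bundle exec centurion --project service-x --environment production --action deploy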

If I got paged in the middle of the night as an operations engineer and it was obvious that service X was in a bad state, and maybe we couldn’t get ahold of the on-calls for those teams, or it was just urgent and we couldn’t wait fifteen minutes for somebody to get online, I knew that I could at least try to redeploy the current version, or even do a rollback to the previous version if a deploy had been done recently and we thought the problem was code related. I could do that as an operations engineer, very easily and reliably, and I knew that it was going to work.
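
Stripped of the Centurion wrapper, the underlying operation is simple enough that plain Docker commands convey the idea; here’s a minimal sketch, assuming the service’s images are tagged by release (every name and tag here is illustrative):

    # Redeploy the current release of service X
    docker pull registry.example.com/service-x:release-42
    docker rm -f service-x
    docker run -d --name service-x -p 8080:8080 registry.example.com/service-x:release-42

    # Or roll back by starting the previous release instead
    docker rm -f service-x
    docker run -d --name service-x -p 8080:8080 registry.example.com/service-x:release-41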

Whereas if I’d attempted that before, there were a hundred services out there, and they were all deployed with their own special developer magic: some people used Capistrano, and other people used some other deployment tooling. It was very difficult for those of us on the operations team to make a decision like: We’re just going to redeploy this to see if we can get it back into a healthy state. With Docker, and Centurion in our case, that became much easier to do.

Three tips for moving your organization to Docker

(1) Don’t rush, but don’t hold back either. Look at your processes and try to find the worst pain point in your current deployment pipeline. Maybe it’s your testing. Focus on that, and see how you can use Docker to make that one small piece better.

(2) Start with a simple problem. In our case it was, “How can we make it easier for developers to deploy in a more repeatable fashion?” We just focused on that initially, and that was the first thing we rolled out. Then we built upon that. You can deploy something very simple to start with. Maybe your developers don’t even use Docker yet, but you use Jenkins to take the code base, build it inside a Docker container, and do all of the testing inside a container; verify that the tests pass, and then destroy the container (there’s a sketch of this after the list). Start there, with that very focused thing, and then expand from there.

(3) Start static, and then evolve to a truly dynamic environment by leveraging orchestration technologies like Kubernetes. Unless you have really experienced operations engineers in your organization, jumping feet first into a dynamic environment may be biting off more than you can chew, and it doesn’t leave you time to actually understand the technology of Docker and what it’s good at. So focus on Docker first, and migrate up to large-scale orchestration from there (a sketch of a static setup follows below).
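
For tip 2, the throwaway test container can be as simple as a single docker run with the --rm flag, which destroys the container as soon as the command exits. A minimal sketch, assuming a Python code base whose requirements.txt includes its test runner (all names here are illustrative):

    # Build and test the code base inside a container, then destroy it;
    # --rm removes the container as soon as the test command exits
    docker run --rm -v "$PWD":/app -w /app python:3 \
        sh -c "pip install -r requirements.txt && python -m pytest"

And for tip 3, a “static” environment just means containers pinned to known hosts with a simple restart policy, rather than containers scheduled dynamically by an orchestrator:

    # A static deployment: one named container on a known host, restarted
    # automatically if it dies, but never rescheduled onto another machine
    docker run -d --restart=unless-stopped --name myapp -p 8080:8080 \
        registry.example.com/myapp:1.0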

What’s the best way to get started working with containers?

Docker is pretty easy to get started with; it’s harder to fully understand how you can best use it. The first thing to take a look at is Docker Engine and the Docker client. That’s the tooling used to do things like build a Docker image, push that image up to a repository, and pull it back down and run it. It’s the easiest tool to get started with.
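
If you want to kick the tires immediately, the classic first steps look something like this (hello-world is the small smoke-test image that Docker publishes for exactly this purpose):

    # Confirm that the client can talk to the Docker engine
    docker version

    # Run the standard smoke-test image; --rm cleans up the container afterward
    docker run --rm hello-world

    # Start an interactive shell inside a throwaway Linux container
    docker run --rm -it ubuntu bash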

There’s a lot of good documentation and tutorials out there about how to use it. And then of course both the book that Karl Matthias and I wrote for O’Reilly, Docker: Up and Running, and my online class are great places for people who want to really quickly get their feet wet and understand both the tools and the overall way to approach and think about Docker.
