Using Docker in production

Five questions for Laura Frank about orchestration, security, and beyond.

By Brian Anderson and Laura Frank
May 18, 2017

I recently sat down with Laura Frank, Docker Captain and director of engineering at Codeship, to discuss the evolution of the Docker ecosystem and how it compares to other orchestration tools. Here are some highlights from our talk.

The Docker ecosystem has evolved rapidly over the past couple of years. How is using Docker now different than it was, say, two years ago?

Right now, Docker is an excellent tool for managing distributed applications. This is the result of quite a bit of evolution; in its earlier stages, Docker focused mainly on managing the containers themselves. Thinking back to two or three years ago, getting started with Docker was a bit of a pain because there weren’t very mature developer tools in the ecosystem. Instead, you were left with documentation and really long “docker run” commands, and you really had to know what was happening at the container level. Now Docker has grown and evolved to the point where the container is just an implementation detail, allowing you as an engineer to focus on what’s really important: the services themselves. Orchestration tools like Docker (in Swarm Mode), Kubernetes, and Mesosphere allow you to declare your services once and then run them anywhere using containers. The focus now is more on running highly available applications and less on the inner workings of the container itself, so you interact with Docker on a different level.
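
To make that shift concrete, here’s a rough sketch of the two styles (the image name and flags are illustrative, not from the interview):

    # Container-level: every runtime detail spelled out by hand
    docker run -d --name web -p 80:80 --restart=on-failure nginx:latest

    # Service-level (Docker in Swarm Mode): declare the service once and
    # let the orchestrator place and maintain its containers
    docker service create --name web --replicas 3 -p 80:80 nginx:latest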

In your tutorial you cover, among other things, how to use Docker in Swarm Mode effectively. How does Docker do orchestration, and how does it compare to similar tools like Kubernetes and Mesos?

Docker has built-in orchestration tooling, commonly called “Docker in Swarm Mode.” It was announced at DockerCon last year and has been built into the Docker Engine itself since version 1.12.

The goal of Docker, Kubernetes, and other orchestration and scheduling tools is to stand up your application and manage it across a distributed cluster. They take care of scheduling your services across the cluster, handling service discovery, and managing things like networking. Plus, these orchestrators will “self-heal” if one of your containers fails. Each of these tools generally has its own set of config files that you need to author in order to declare your services. They’re all a bit different in their specific feature sets, but they all attempt to solve the same problem: orchestrating containers over a cluster.
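
As a minimal sketch of what such a config file can look like for Docker in Swarm Mode (the service name, stack name, and image are placeholders), you can declare services in a version 3 Compose file and deploy them as a stack:

    # Write a minimal stack file (Compose file format v3)
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      web:
        image: nginx:latest
        ports:
          - "80:80"
        deploy:
          replicas: 3
          restart_policy:
            condition: on-failure
    EOF

    # Deploy the declared services to the swarm as a stack
    docker stack deploy -c docker-compose.yml demo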

The big difference with Docker is that it’s not an external service; it’s built in. This doesn’t necessarily make it a more performant tool; that depends highly on your use case. But if you’re new to orchestration, using Docker in Swarm Mode may help you get started faster, as it uses the same commands and patterns you’re already familiar with from using Docker to run one-off containers.
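
For example (a minimal single-node sketch; the names are illustrative), the service-level commands mirror the container-level ones you already know:

    # Turn a Docker Engine into a single-node swarm
    docker swarm init

    # Create and inspect a service, much like `docker run` and `docker ps`
    docker service create --name web --replicas 2 -p 80:80 nginx:latest
    docker service ls
    docker service ps web

    # Scaling is one declarative command rather than more `docker run`s
    docker service scale web=5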

What do sysadmins and ops professionals need to understand about Docker?

I can’t stress this enough: Docker isn’t going to solve your scalability and operational reliability problems. It’s true that it can help a lot (using declarative services with a self-healing system like Docker in Swarm Mode or Kubernetes, for example, can ease a lot of the pain), but bottlenecks and performance problems are still going to exist. In some cases, you might even feel more pain because wrapping everything in a container can expose design and process flaws that might have been hidden before. Docker isn’t magical—it won’t fix everything—but it’s a really powerful tool that can help you build, ship, and run faster.
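
A quick way to see that self-healing behavior on a single-node swarm (a sketch; the service name `web` is carried over from the examples above):

    # Forcibly remove one of the service's running containers...
    docker rm -f "$(docker ps -q --filter name=web | head -n 1)"

    # ...and watch the orchestrator schedule a replacement task to
    # restore the declared replica count
    docker service ps web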

What are some of the other tools that go well with Docker, and why?

Orchestration tools are extremely important when it comes to running highly available services in production. Continuous integration and continuous delivery (CI/CD) are also critical, and they rely on your having good automated tests in place. Aside from that, metrics and monitoring are essential. I love having an insane amount of data pumped to Librato that I can visualize to get a clear picture of the state of my systems. Sometimes people joke that an outage with microservices, or on any distributed system, can feel like a murder mystery, since it can be really hard to identify a point of failure. Self-healing orchestration tools like Docker and Kubernetes help ease the pain, but they’re not a replacement for good metrics and monitoring.

You’re speaking at the Velocity Conference in San Jose this June. What presentations are you looking forward to attending while there?

I’m really excited to have Kelsey Hightower on stage for a keynote. I will always attend Kelsey’s talks, even if I’ve seen them before. Aside from being an A+ human being, he has an extreme talent for making the most complex topics both accessible and entertaining.

I’m also looking forward to Nora Jones’s talk on chaos engineering. I work on a distributed, scalable system, and at any moment there could be a hundred different failures. Deliberately injecting chaos into your systems can help expose areas that are opaque to your engineering team, and ultimately improve the reliability and stability of your systems.
