If you want to integrate Docker into your enterprise but you’re not sure how to get started, this chapter is for you. It discusses the decisions you’ll have to make as you plan a migration to Docker containers, resources that can help smooth the process, and potential pitfalls to avoid.
The first step in migrating to Docker containers is to identify how you want to use them. This is important because, as earlier chapters have noted, Docker containers will exist alongside virtual machines or other older forms of infrastructure technology. A successful Docker strategy requires evaluation of which types of workloads you want to migrate to containers, and where your containers will fit within the rest of your infrastructure.
Questions to ask yourself as you plan a Docker migration include: Which types of workloads do you want to migrate to containers? Where will containers fit within the rest of your infrastructure? Will they run alongside virtual machines, or replace them?
Another essential task is deciding what to include in your Docker container stack. In addition to Docker itself, which provides the container runtime, a container deployment should include the following components:
A variety of open source container registries are available, such as Docker Registry and CoreOS Quay. Some of these registries can be deployed on-premises, while others are hosted services; some are free, and some cost money.
For orchestration, a variety of choices are available as well. The most popular options include Swarm (an orchestrator that is integrated into Docker itself, though you are not required to use it), Kubernetes, and Mesos.
To keep your containers secure, you’ll want an image scanner, like Docker Security Scanning or Clair. Enterprise-oriented security and compliance platforms for containers, such as Twistlock, may also be wise investments, especially if security and data privacy are essential considerations for your business.
You can collect some basic information about the status of a container from Docker itself. For more robust monitoring of your containerized applications, including analytics tools and visual dashboards, you’ll want a container-ready monitoring solution like Netuitive, Datadog, or New Relic.
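The basic, built-in status information mentioned above boils down to a few CLI commands. They are collected in a variable below so this sketch runs even on a machine with no Docker daemon; on a real host you would run each line directly. The container name "app" is a hypothetical placeholder.

```shell
# Status commands Docker provides out of the box (no extra tooling).
# "app" is a hypothetical container name.
STATUS_CMDS='docker ps                  # list running containers
docker stats --no-stream    # one-shot CPU/memory/network snapshot
docker inspect app          # full JSON metadata for container "app"
docker logs app             # stdout/stderr captured from container "app"'
printf '%s\n' "$STATUS_CMDS"
```

For anything beyond these point-in-time snapshots (trends, alerts, dashboards), you need one of the dedicated monitoring solutions named above.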
Docker can run on any modern Linux distribution, as well as Windows Server 2016 (although, as noted previously, Docker support on Windows is not yet fully mature). Some Linux distributions, such as RancherOS, are designed for the express purpose of hosting Docker and may be a convenient choice. On the other hand, if your IT team is already familiar with a different Linux platform, they may be more comfortable using that to host Docker. In most cases, installing Docker on any modern Linux distribution is as simple as running a few commands.
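As a sketch of how few commands that installation involves, assuming a Debian/Ubuntu host: the lines below are collected in a variable so the sketch itself runs anywhere, but on a real host you would execute them directly (package names differ on other distributions).

```shell
# Typical Docker installation on a Debian/Ubuntu host, as a sketch.
# On other distributions, substitute the platform's package manager.
INSTALL_CMDS='sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
docker run --rm hello-world   # verify the daemon works end to end'
printf '%s\n' "$INSTALL_CMDS"
```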
If you’re searching for a quick on-ramp to containerized infrastructure, you might consider adopting a containers-as-a-service (CaaS) platform. A CaaS eliminates the need to build a Docker software stack from the ground up.
CaaS offerings are designed to be turnkey solutions for running containers. Their major benefit is that they require much less time to set up and maintain than a Docker software stack built from scratch. Just like software-as-a-service (SaaS) and platform-as-a-service (PaaS) platforms, CaaS systems give you everything you need to run containers out of the box.
The largest drawback of CaaS platforms is that, like other SaaS and PaaS offerings, they constrict user choice to some degree. If you use a CaaS, you’ll likely be limited to using whichever orchestrators, container registries, and other components it supports. You may not have as much freedom to choose between different options as you would if you built the stack yourself.
A number of enterprise-ready CaaS options have appeared over the past two years. Major offerings include Rancher and Red Hat OpenShift, among others.
The differences between these offerings boil down to two main factors:
Deployment environment. Some CaaS platforms support only the public cloud; others run only on-premises; some can run in both settings.
Flexibility. Some CaaS platforms, like Rancher, allow relatively wide freedom in choosing which container orchestrators and registries to use and which operating systems to run on. Others are more limited; OpenShift, for example, supports only Kubernetes and runs on top of Red Hat Enterprise Linux.
CaaS platforms are not the only way to migrate to Docker. If you can find a CaaS that fits your enterprise's particular needs, however, it may offer a faster and less complicated path to Docker than configuring and managing every component of the container stack yourself.
Whichever route you take to implementing containers, you’ll want to steer clear of common pitfalls that can undermine the efficiency of your Docker stack.
The beauty of containers—and an advantage of containers over virtual machines—is that it is easy to make multiple containers interact to compose a complete application. There is no need to run a full application inside a single container. Instead, break your application into discrete services wherever possible, and distribute those services across multiple containers. This maximizes flexibility and reliability.
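One common way to express such a multi-container application is a Compose file. The sketch below splits a hypothetical application into a stateless web tier and a separate database container; the image names are placeholders.

```shell
# Write a Compose file describing one application split across two
# containers. "mywebapp" is a hypothetical image name.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:                      # stateless front end, scaled independently
    image: mywebapp:latest
    ports:
      - "8080:80"
    depends_on:
      - db
  db:                       # data tier isolated in its own container
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
EOF
cat docker-compose.yml
```

Because each service lives in its own container, the web tier can be scaled, replaced, or restarted without touching the database.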
It is possible to install a complete Linux operating system inside a container. In most cases, however, this is not necessary. If your goal is to host just a single application or part of an application in the container, you need to install only the essential pieces inside the container. Installing an operating system on top of those pieces adds unnecessary overhead.
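A minimal image definition, by contrast, ships only the application itself on a small base. The sketch below assumes a hypothetical statically linked binary named "myapp"; nothing resembling a full operating system is installed on top of it.

```shell
# Write a minimal image definition: just the application binary on a
# small base image. "myapp" is a hypothetical statically linked binary.
cat > Dockerfile.minimal <<'EOF'
FROM alpine:3.18
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
EOF
cat Dockerfile.minimal
```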
To make the most of containers, you want each container to be as lean as possible. This maximizes performance and minimizes security risks. For this reason, avoid running services that are not strictly necessary. For example, don't include an SSH service unless it is absolutely essential to have one running inside the container—which is probably not the case, because there are other ways to log in to a container, such as the docker exec command.
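The docker exec approach looks like the sketch below; the commands are collected in a variable so the sketch runs without a Docker daemon, and the container name "app" is a hypothetical placeholder.

```shell
# Logging in to a running container without any in-container SSH daemon.
# "app" is a hypothetical container name.
EXEC_CMDS='docker exec -it app /bin/sh            # interactive shell
docker exec app cat /etc/os-release   # run one command, then exit'
printf '%s\n' "$EXEC_CMDS"
```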
You can run containers without security and monitoring solutions in place. But if you want to use containers securely and efficiently, you’ll need to include these additional components in your stack.
Hardcoding sensitive data, such as passwords or API keys, into a container image is a mistake because anyone with access to the registry in which the container image is stored, or to the running container, can read that information. Instead, store sensitive data on a secure filesystem to which the container can connect. In most scenarios, this filesystem would exist on the container host or be available over the network.
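One way to connect a container to such a filesystem is a read-only bind mount supplied at run time, so credentials never enter the image. In the sketch below, the host path, mount point, and image name are all hypothetical, and the command is held in a variable so the sketch runs without a Docker daemon.

```shell
# Mount a host directory holding credentials read-only at run time,
# rather than baking the credentials into the image. All names are
# hypothetical placeholders.
RUN_CMD='docker run -d --name app \
  -v /srv/app-secrets:/run/secrets:ro \
  myimage:latest'
printf '%s\n' "$RUN_CMD"
```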
As noted earlier, any data stored inside a running container will disappear forever when that container shuts down, unless it is exported to a persistent storage location that is external to the container. That means you need to be sure to have such a persistent storage solution in place for any logs or other container data that you want to store permanently.
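A named Docker volume is the simplest such persistent storage solution. In the sketch below the volume and image names are hypothetical, and the commands are held in a variable so the sketch runs without a Docker daemon.

```shell
# Persist log data in a named volume that outlives any one container.
# "app-logs" and "myimage" are hypothetical names.
VOLUME_CMDS='docker volume create app-logs
docker run -d --name app -v app-logs:/var/log/app myimage:latest
docker volume inspect app-logs   # the data persists after "app" exits'
printf '%s\n' "$VOLUME_CMDS"
```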
A container registry is designed to do one thing and one thing only: store container images. Although container registries offer features that might make them attractive as all-purpose repositories for storing other types of data, doing so is a mistake. Vine learned this the hard way in 2016, when the company was using a container registry to store sensitive source code, which was inadvertently exposed to the public.
A successful migration to Docker in the enterprise involves several steps: identifying which workloads you want to containerize and where containers will fit within your infrastructure; assembling a complete container stack, including a registry, an orchestrator, and security and monitoring tools; choosing a host operating system, or adopting a CaaS platform as a faster alternative; and avoiding common pitfalls such as bloated containers, hardcoded secrets, and a lack of persistent storage.
The next chapter will broaden our discussion of container-like technology by taking a look at other examples of containers and microservices beyond Docker.