“If you don’t like testing your product, most likely your customers won’t like testing it either.”
Testing is a core part of any continuous delivery build pipeline, and the introduction of Docker into the system technology stack has no impact in some areas of testing and a great impact in others. This chapter attempts to highlight these two cases, and makes recommendations for modifying an existing Java build pipeline to support the testing of containerized Java applications.
The good news is that running Java applications within containers does not fundamentally affect functional testing. The bad news is that many systems being developed do not have an adequate level of functional testing regardless of the deployment technology.
The approach to unit and integration testing with Java applications that will ultimately be executed within containers remains unchanged. However, any component or system-level end-to-end tests should be run against applications running within containers in an environment as close to production as possible. In reality, placing an application inside a container can often make it easier to “spin up” and test, and the major area for concern is avoiding port and data clashes when testing concurrently.
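The port-clash concern above can be mitigated by letting Docker choose random host ports. A minimal sketch, assuming a hypothetical image named `shopfront` listening on container port 8080 (the leading `echo` makes each command a dry run; remove it to execute against a real Docker daemon):

```shell
# Avoid host-port clashes when running tests concurrently by letting Docker
# assign a random host port (-P) and then discovering the mapping.
# The image name "shopfront" and container port 8080 are placeholders.
CONTAINER_NAME="shopfront-test-$$"   # unique name per build ($$ = shell PID)

# Dry run: remove the leading "echo" to execute for real.
echo docker run -d -P --name "$CONTAINER_NAME" shopfront:latest

# "docker port <name> <container-port>" prints the randomly assigned host
# IP:port, which the test harness can then be pointed at.
echo docker port "$CONTAINER_NAME" 8080
```

Because each concurrent build gets a unique container name and a random host port, test runs do not contend for fixed ports or container names.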
Although it is not yet generally recommended to run production data stores within containers, running an RDBMS and other middleware within containers during the build pipeline's test phase can make scenario creation easier: "pre-canned" data can be loaded into the containers themselves, into dedicated "data containers," or into cloned data directories that are then mounted into a container.
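As one sketch of the "pre-canned data" approach, a test-only database image can bake seed data in at build time. The paths and seed script name below are placeholders; the official `postgres` images execute any `.sql` files found in `/docker-entrypoint-initdb.d` on first start-up:

```dockerfile
# Test-phase image with pre-canned data baked in (sketch).
FROM postgres:9.5

# Seed SQL copied into the image is executed automatically when the
# container first initializes its data directory.
COPY test-data/seed.sql /docker-entrypoint-initdb.d/
```

Each test run can then start a fresh, fully seeded database container and throw it away afterwards, avoiding data clashes between concurrent builds.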
Finally, regardless of whether or not an application is being deployed within a container, it is strongly recommended to use executable specifications and automated acceptance tools to define, capture, and run functional tests. Favorites in the Java development ecosystem include Serenity BDD, Cucumber, JBehave, REST-assured, and Selenium.
Running Java applications within containers makes it much easier to configure hardware and operating system resource limits (for example, using cgroups). Because of this, it is imperative that during the later stages of the build pipeline, nonfunctional testing is performed in a runtime environment that is as production-like as possible. This includes executing the container with production-like options (e.g., with CPU cores pinned via the Docker --cpuset-cpus command-line option, CPU shares set via -c, and memory limits set via -m), running the container within the same orchestration framework used in production, and executing the orchestration/container runtime on comparable hardware infrastructure (perhaps scaled down).
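Putting those flags together, a sketch of a production-like test invocation follows. The limit values and the image name `shopfront` are placeholders; in a real pipeline they would mirror the production orchestrator's settings (the leading `echo` makes this a dry run; remove it to execute):

```shell
# Run the service under test with production-like resource limits (sketch).
CPUSET="0,1"       # pin the container to CPU cores 0 and 1 (--cpuset-cpus)
CPU_SHARES=512     # relative CPU weight (the -c / --cpu-shares flag)
MEMORY="768m"      # hard memory limit (the -m / --memory flag)

# Dry run: remove the leading "echo" to execute against a Docker daemon.
echo docker run -d \
  --cpuset-cpus "$CPUSET" \
  -c "$CPU_SHARES" \
  -m "$MEMORY" \
  shopfront:latest
```

Note that a JVM inside such a container will not automatically size its heap to the -m limit, so -Xmx should be set explicitly to fit within it.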
Performance and load testing can be implemented via Jenkins by using a tool like Docker Compose to orchestrate deployment of an application (or a series of services) and tools like Gatling or JMeter to run the tests. In order to isolate individual services under test, the technique of service virtualization can be used to control the performance characteristics of interdependent services. More information on this technique in the context of testing microservices can be found in my “Proposed Recipe for Designing, Building, and Testing Microservices” blog post.
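A load-test phase of this kind might be wired together with a Compose file along the following lines. This is a sketch: the service name `shopfront`, its image, the Gatling image (a community-maintained image, not an official one), and the simulations path are all assumptions to be adapted to the project at hand:

```yaml
# docker-compose.yml sketch for a Jenkins load-test stage.
version: '2'
services:
  shopfront:
    image: shopfront:latest      # application under test (placeholder)
    ports:
      - "8080"                   # random host port avoids clashes
  loadtest:
    image: denvazh/gatling:latest   # community Gatling image (assumption)
    depends_on:
      - shopfront
    volumes:
      # Gatling simulation scripts supplied by the build (path assumed)
      - ./gatling/simulations:/opt/gatling/user-files/simulations
```

The Jenkins job can run `docker-compose up`, wait for the load-test container to exit, and fail the build on a non-zero exit code or on response-time assertions defined in the Gatling simulation.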
Security should be of paramount concern to any developer, regardless of whether deployment is occurring in containers. However, executing Java applications within a containerized runtime can add new security attack vectors, and these must be mitigated.
Any host running containers must be hardened at the operating system level. This includes:
Ensuring the latest operating system version available is being used, and that the OS is fully patched (potentially with additional kernel hardening options like grsecurity)
Ensuring the application attack surface exposed is minimized (e.g., correctly exposing ports, running applications behind a firewall with a DMZ, and using certificate-based login)
Using an application-specific seccomp whitelist if possible
Enabling user namespaces
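As an illustration of the seccomp whitelist point above, a deliberately tiny profile is sketched below; it would be applied with `docker run --security-opt seccomp=profile.json ...`. The syscall list shown is far too small for a real JVM, which additionally needs calls such as futex, mmap, and clone — the point is only the whitelist structure (default-deny, with named exceptions):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    { "name": "read",       "action": "SCMP_ACT_ALLOW", "args": [] },
    { "name": "write",      "action": "SCMP_ACT_ALLOW", "args": [] },
    { "name": "exit_group", "action": "SCMP_ACT_ALLOW", "args": [] }
  ]
}
```

A practical route to a whitelist is to run the application under a tracing tool in the test environment, record the syscalls actually used, and generate the profile from that.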
The Center for Internet Security (CIS) regularly publishes guidance for running containers in production. The CIS Docker 1.12.0 benchmark can be found on the CIS website. The Docker team has also created the Docker Bench for Security, which attempts to automate many of the checks and assertions that the CIS documentation recommends. The Docker Bench for Security tool can be downloaded and executed as a container on the target host, and a comprehensive report about the current state of the machine's security will be generated. Ideally, execution of the Docker Bench for Security should be conducted on the initialization of any host that will be running containers, and also periodically against all servers to check for configuration drift or new issues. This process can be automated as part of an infrastructure build pipeline.
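The invocation sketched below follows the docker/docker-bench-security project's README; the exact set of mounts and capabilities varies between versions of the tool, so check the project documentation (the leading `echo` makes this a dry run; remove it to execute on a Docker host):

```shell
# Run Docker Bench for Security against the local host (sketch).
BENCH_IMAGE="docker/docker-bench-security"

# Host namespaces, the Docker socket, and config directories are shared so
# the benchmark can inspect the daemon and host configuration.
# Dry run: remove the leading "echo" to execute.
echo docker run -it --net host --pid host --cap-add audit_control \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc:/etc:ro \
  --label docker_bench_security \
  "$BENCH_IMAGE"
```

Running this from an infrastructure pipeline on host initialization, and on a schedule thereafter, gives the drift detection described above.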
Java developers may not be used to dealing with OS-level security threats, but will be exposed to this when packaging applications within containers. A Docker image running a Java application will typically include a Linux-based operating system such as Alpine, Ubuntu, or Debian Jessie, and may also have other tooling installed via a package manager. Ensuring the OS and all associated software is up-to-date and configured correctly is very challenging without automation. Fortunately, tooling like CoreOS’s open source Clair project can be used to statically analyze Docker images for vulnerabilities. This tool can be integrated within a build pipeline, or if Docker images are being pushed to a commercial centralized image repository like Docker Hub or CoreOS Quay, then this will most likely be enabled by default (but do take care to ensure it is, and also that actionable results—such as vulnerability detection—are fed back into the build pipeline).
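One way to wire image scanning into a pipeline stage is via a CLI front end to Clair; the sketch below assumes the community arminc/clair-scanner tool and a Clair instance already running on localhost:6060 — both assumptions, since Clair itself exposes an HTTP API and several integrations exist (the leading `echo` makes this a dry run; remove it to execute):

```shell
# Scan a locally built image for known CVEs via a running Clair instance
# (sketch; assumes the community clair-scanner CLI).
IMAGE="shopfront:latest"   # placeholder image name

# Dry run: remove the leading "echo" to execute.
echo clair-scanner \
  --clair http://localhost:6060 \
  --report clair-report.json \
  "$IMAGE"
```

The generated report can then be parsed in the pipeline, failing the build when vulnerabilities above a chosen severity are found — this is the "actionable results fed back into the build pipeline" requirement mentioned above.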
Any build pipeline should include automated security testing in addition to manual vulnerability analysis and penetration testing. Tooling such as OWASP’s ZAP and Continuum’s bdd-security can also be included in a standard build pipeline, and run at the (micro)service/application and system-level testing phases.
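As one concrete pipeline step, ZAP ships a baseline scan script inside its official Docker image that passively spiders and scans a target. The target URL below is a placeholder for a deployed test environment (the leading `echo` makes this a dry run; remove it to execute):

```shell
# Run OWASP ZAP's baseline scan against a deployed test instance (sketch).
TARGET_URL="http://test-env.example.com:8080"   # placeholder target

# zap-baseline.py ships inside the owasp/zap2docker-stable image and
# returns a non-zero exit code when warnings/failures are raised.
# Dry run: remove the leading "echo" to execute.
echo docker run -t owasp/zap2docker-stable zap-baseline.py -t "$TARGET_URL"
```

Failing the build on the script's exit code turns passive security scanning into an enforced pipeline gate at the service- and system-level test phases.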