I recently sat down with Lee Calcote, senior director of technology strategy at SolarWinds, to talk about the benefits of container networks. Here are some highlights from our chat.
What is container networking? How are people deploying container networks?
Much of what container networking is today revolves around core Linux network technologies, whether that be iptables for port forwarding, firewalling, and network address translation, or IPVS for load-balancing and service abstraction (virtual IP addressing). These battle-tested technologies are old friends of systems engineers, who have leveraged these kernel capabilities as they've built container engines and orchestrators.
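As a rough illustration of how those kernel capabilities get used, here is a hedged sketch of the kinds of rules a container engine might program; every address, port, and subnet below is hypothetical, and both tools require root:

```shell
# iptables: forward traffic arriving on host port 8080 to a container at
# 172.17.0.2:80 (DNAT), and masquerade the containers' outbound traffic (SNAT).
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
         -j DNAT --to-destination 172.17.0.2:80
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 -j MASQUERADE

# IPVS: create a virtual IP (10.0.0.10:80) that round-robins across two
# container backends, i.e., the "service abstraction" described above.
ipvsadm -A -t 10.0.0.10:80 -s rr
ipvsadm -a -t 10.0.0.10:80 -r 172.17.0.2:80 -m
ipvsadm -a -t 10.0.0.10:80 -r 172.17.0.3:80 -m
```

A port-publishing feature like Docker's `-p 8080:80` boils down to DNAT rules of roughly this shape, and kube-proxy's IPVS mode programs virtual services much like the `ipvsadm` lines above.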
To date, container networking has largely been focused on simple network services like connectivity, IP address management (IPAM), domain name services, and load-balancing. Beyond connectivity, most higher-level network services are still emerging, including quality of service (QoS), virtual private networking, security policy (complex and dynamic firewalling), and topology optimization. So far, connectivity has largely equated to the use of Linux bridges and network overlays, with VXLAN being a popular encapsulation protocol. These common choices persist in the face of a style of networking that's arguably more straightforward in its approach: layer 3 networking.
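For concreteness, the bridge-plus-VXLAN pattern can be sketched by hand with `ip`; the interface names, VXLAN network identifier (VNI), and peer address here are all invented, and the commands require root:

```shell
# Hypothetical two-host overlay: this host tunnels L2 frames over UDP to a
# peer at 192.168.1.11 (mirror the commands there with the IPs swapped).
ip link add vxlan42 type vxlan id 42 dev eth0 remote 192.168.1.11 dstport 4789

# Attach the tunnel endpoint to a Linux bridge, alongside the containers'
# veth interfaces, so containers on both hosts share one virtual L2 segment.
ip link add br-overlay type bridge
ip link set vxlan42 master br-overlay
ip link set vxlan42 up
ip link set br-overlay up
```

Overlay plugins automate essentially this wiring (plus IPAM and peer discovery) on every host in the cluster.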
Layer 3 networking involves routing as the connectivity method; BGP is the most popular protocol. I see layer 3 routing selected far less often than overlays, in part because public clouds don't necessarily make it easy. Because overlays sit right at developers' fingertips, overlay deployments outstrip underlay deployments, despite the greater scale and efficiency that routing provides.
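As a sketch of the layer 3 alternative, here is a hypothetical BGP configuration fragment in FRRouting's syntax; the ASN, peer address, and container CIDR are invented. Projects such as Calico take broadly this approach, with each node advertising routes to its local containers:

```shell
# Append a hypothetical BGP stanza to FRR's config (requires root and FRR
# installed): this node peers with 192.168.1.11 and announces its local
# container subnet, 10.244.1.0/24, so traffic is routed rather than tunneled.
cat >> /etc/frr/frr.conf <<'EOF'
router bgp 64512
 neighbor 192.168.1.11 remote-as 64512
 address-family ipv4 unicast
  network 10.244.1.0/24
 exit-address-family
EOF
```

With routes exchanged this way, container traffic crosses the network as plain IP, with no encapsulation overhead.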
What do engineers need to understand about containers and networking? And how does service discovery relate to them?
How deeply engineers need to understand container networking is really a function of the architecture of the application being deployed. Whether they are deploying containers onto a single host or multi-container applications across hosts influences how much they need to consider networking and related services. The more distributed the application design, the more the network comes into focus.
Conceptually, developers who use network overlays and the load-balancing inherent in the “service” constructs of any popular container orchestrator need less understanding of networking intricacies. Through overlays, they can interconnect multiple hosts (and their containers) within a cluster, whether or not it is under management by a container orchestrator.
Service discovery plays an important role in helping the various components of a multi-container application identify one another and communicate. In clusters managed by a container orchestrator, service discovery is a built-in capability: new container application services are announced and centrally tracked. The service discovery function is often delivered through the domain name system (DNS), a common network service.
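To make that concrete, here is what a DNS-based lookup might look like from inside an orchestrator-managed cluster; the service name, namespace, and Kubernetes-style DNS suffix are assumptions for illustration:

```shell
# Hypothetical query from a container in a Kubernetes-managed cluster: a
# service named "web" in the "default" namespace gets a well-known DNS name.
dig +short web.default.svc.cluster.local
# The cluster's DNS service would answer with the service's virtual IP;
# connections to that IP are then load-balanced across the backing containers.
```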
Are the approaches to container networking compatible across different container technologies?
Yes, and there are a number of container projects in which to consider compatibility. There are two proposed standards for configuring network interfaces for Linux containers: the container network model (CNM) and the container network interface (CNI). While various plugins have been created to integrate with each, the latter has been adopted almost ubiquitously across the container orchestration landscape. In fact, in part because of its broad compatibility across the container ecosystem, I've been working within the Cloud Native Computing Foundation (CNCF) to have CNI adopted as the 10th project within the foundation.
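As a hedged example of what "configuring network interfaces" means in CNI's case: a runtime that implements the spec reads JSON files like the following and invokes the named plugins. The file name, bridge name, and subnet below are invented:

```shell
# Hypothetical CNI network configuration (requires root to write here):
# the "bridge" plugin wires each container to the cni0 bridge, and the
# "host-local" IPAM plugin hands out addresses from the given subnet.
cat > /etc/cni/net.d/10-mynet.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
EOF
```

Because orchestrators speak to plugins only through this file format and a small execution contract, swapping one CNI plugin for another doesn't require changes to the orchestrator itself.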
What are some strategies and tricks for optimizing the performance of containerized architectures?
Interestingly, performance is a key factor to consider when deploying container networks. There are a number of different network plugins and possible configurations to choose from, and those choices have significant ramifications for performance (most notably throughput). In my role at SolarWinds, I'm having tooling created to educate engineers about the nuances of these choices and their impact on network performance. I think people will be surprised at what they learn: there's a cost to the convenience of network overlays.
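One simple, hedged way to see that cost for yourself is to compare throughput across the overlay path and the underlay path with a tool like iperf3; every address below is hypothetical:

```shell
# On the server host (or in a server container):
iperf3 -s &

# From a client container, across the overlay network's addresses:
iperf3 -c 10.0.0.2 -t 30

# From the client host, directly across the hosts' underlay addresses:
iperf3 -c 192.168.1.11 -t 30
# The delta between the two runs approximates the encapsulation overhead.
```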
You’re speaking at the O'Reilly Velocity Conference in San Jose this June. What presentations are you looking forward to attending while there?
Surprisingly, this is my first Velocity conference, and it will certainly not be my last. The list of talks and workshops on the conference schedule is amazing! There are a couple of sessions on eBPF (extended Berkeley Packet Filter), one by Sasha Goldshtein (Sela Group) and one by Brendan Gregg (Netflix); it's a technology that piques my interest as a performant way to inject tracing into the kernel. In my mind, eBPF opens a new world of performance analysis of Linux systems from the kernel to user-space frameworks. A number of monitoring vendors have begun to incorporate this technology into their offerings. I'm excited about the broad use cases of eBPF as virtualized in-kernel I/O services for tracing, analytics, monitoring, security, and networking functions.