Chapter 16. Managing State and Stateful Applications
In the early days of container orchestration, the targeted workloads were usually stateless applications that used external systems to store state when it was needed. The thinking was that containers are ephemeral, and orchestrating the backing storage needed to keep state consistent was difficult at best. Over time, the need for container-based workloads that keep state became a reality, and in select cases running stateful workloads in containers could even be more performant. As more organizations looked to the cloud for computing power and Kubernetes became the de facto container orchestration platform, the impediment became the amount of data and performant access to that data, sometimes called "data gravity." Kubernetes adapted over many iterations. Now, not only does it allow storage volumes to be mounted into the pod, but it also allows those volumes to be managed by Kubernetes directly. This was an important component in orchestrating storage alongside the workloads that require it.
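As a minimal sketch of what that looks like in practice, the manifests below request a volume through a PersistentVolumeClaim and mount it into a pod. This assumes a cluster with a default StorageClass available; the names (app-data, stateful-example), the busybox image, and the 1Gi size are illustrative placeholders, not recommendations:

```yaml
# Claim storage from the cluster; Kubernetes provisions and manages the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi              # example size
---
# Mount the claimed volume into a pod so data survives container restarts.
apiVersion: v1
kind: Pod
metadata:
  name: stateful-example        # illustrative name
spec:
  containers:
    - name: app
      image: busybox            # placeholder image
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /var/data  # where the volume appears inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```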
If the ability to mount an external volume into a container were enough, many more examples of stateful applications running at scale in Kubernetes would exist. The reality is that volume mounting is the easy part in the grand scheme of stateful applications. The majority of applications that require state to be maintained across node failures are complicated data-state engines such as relational database systems, distributed key/value stores, and complex ...