Chapter 18. FinOps for the Container World

Alongside the adoption of microservices, containers have gained popularity. Over the last few years, the number of concurrent containers run by organizations has rapidly increased. Managing a single running container instance is quite simple; running hundreds or thousands of containers across many server instances, however, becomes difficult. Thus, along came orchestration options like AWS Elastic Container Service (ECS) and Kubernetes, which enable DevOps teams to maintain the configuration and orchestrate the deployment and management of hundreds, if not thousands, of containers.

While there are many similarities between AWS Elastic Container Service (ECS) and Kubernetes, each uses its own terminology. For simplicity's sake, except when discussing Kubernetes specifically, we refer to "containers" and "server instances" where Kubernetes would refer to "pods" and "nodes."

Since containers and container orchestrators are becoming a popular choice for many teams, it's vital to understand the fundamental impact of these containerized workloads on FinOps practices. Because containers share underlying server instances, they create challenges with cost allocation, cost visibility, and resource optimization. In the containerized world, traditional FinOps cost allocation doesn't work. You can't simply allocate the cost of a resource to a tag or label, because resources may be running multiple containers, with each supporting a different ...
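One common approach in practice is to apportion each server instance's cost across the containers it runs, in proportion to the resources each container requests. The following is a minimal sketch of that idea; the container names, team labels, and dollar figures are all made up for illustration, and a real implementation would also have to account for memory, idle capacity, and shared cluster overhead:

```python
# Hypothetical illustration: splitting a shared server instance's cost
# across the containers running on it, proportionally to each
# container's CPU request. All names and numbers are invented.

def allocate_instance_cost(instance_cost, cpu_requests):
    """Apportion instance_cost across containers by their CPU requests.

    cpu_requests: dict mapping container name -> CPU request (vCPUs).
    Returns a dict mapping container name -> allocated cost.
    """
    total_request = sum(cpu_requests.values())
    return {
        name: instance_cost * request / total_request
        for name, request in cpu_requests.items()
    }

# A $10 instance running three containers owned by different teams:
costs = allocate_instance_cost(10.0, {
    "checkout": 2.0,    # team A's service
    "search": 1.0,      # team B's service
    "batch-jobs": 1.0,  # team C's service
})
# checkout is allocated $5.00; search and batch-jobs $2.50 each
```

Note that the allocation key here is the *requested* CPU, not actual usage; which of the two to use is itself a FinOps decision, since requests drive the capacity the cluster must provision even when containers sit idle.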
