Two goals of capacity planning, among others, are to employ the resources your organization has at hand as efficiently as possible, and to predict future needs based on patterns of current use. For well-defined workloads, you can often come close to fully utilizing the hardware resources of each class of server you run, such as databases, web servers, and storage devices. Unfortunately, web application workloads are rarely (if ever) perfectly aligned with the available hardware resources.
In such circumstances, you end up using your available capacity inefficiently. For example, if you know that a database's specific ceiling (limit) is determined by its memory or disk usage while it uses very little CPU, there's no reason to buy servers with two quad-core CPUs. That resource (and investment) will simply be wasted unless you direct the server to work on other CPU-intensive tasks. Even a single CPU can be overkill, but often that's all that's available, so you end up with idle resources. It's this continual need to match resources to workload demand that makes capacity planning so important, and in recent years technologies and approaches have emerged that make this balance easier to manage, with ever-finer granularity.
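The "idle CPU next to a disk-bound database" scenario can be made concrete with a small sketch. The resource names and numbers below are invented for illustration; the point is simply that whichever resource sits closest to its ceiling caps the whole server, leaving the rest idle:

```python
# Hypothetical sketch: given per-resource usage and capacity for one server,
# find which resource is the ceiling (highest utilization) and how much of
# the other resources sits idle as a result. All figures are made up.

def utilization(usage, capacity):
    """Per-resource utilization ratios (0.0 to 1.0)."""
    return {r: usage[r] / capacity[r] for r in capacity}

def ceiling_resource(usage, capacity):
    """The resource that will hit its limit first as load grows."""
    util = utilization(usage, capacity)
    return max(util, key=util.get)

# A database server that is disk-bound, not CPU-bound (illustrative numbers).
capacity = {"cpu_cores": 8, "memory_gb": 64, "disk_iops": 5000}
usage    = {"cpu_cores": 1.2, "memory_gb": 52, "disk_iops": 4600}

util = utilization(usage, capacity)
print(ceiling_resource(usage, capacity))         # -> disk_iops
print(f"cpu idle: {1 - util['cpu_cores']:.0%}")  # -> cpu idle: 85%
```

Here disk I/O is at 92% of capacity while the CPUs sit 85% idle, so the disk is this server's ceiling and most of the CPU investment is wasted.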
There are many definitions of virtualization. In general, virtualization ...