Chapter 1. How We Got Here: History of Data Centers and Current Choices

Data center processing capabilities were designed to handle multiple, complex equations, transactions, executions, and storage. The limitations of the mainframe are often the abilities and brain trust of the information technology (IT) director/operator who runs it, and the bandwidth to the mainframe often limits its use. For example, one properly utilized mainframe can collapse 10,000 to 20,000 square feet of legacy servers and white space. That is one big single point of potential failure, but it is also a remarkably efficient use of space and environmentals.

Mainframe utilization is often 50% or less, which is not a good return on investment (ROI). Mainframe protocols are not as nimble as those of enterprise systems unless the programmers are confident and fluent. Older mainframes measured roughly nine feet by five feet and were broken into 5 to 11 modular components. A minimum of three feet had to be left on all sides for service accessibility. Mainframes drew fixed power from whips and were plumbed in place for cooling; once set, they did not move easily. In a 20-year total cost of ownership model, the 20-year environmentals would serve three to four IT equipment life cycles, with nonsevere (low-velocity) increases in power distribution and cooling.

Mainframes were expensive, and they performed a myriad of complex functions simultaneously. They cost millions of ...
