Chapter 7. Scaling 353
The following year - Scale out.
– Add two building blocks.
– Two IBM eServer x365 servers.
– Four processors with 16 GB of memory each.
– Adjust and balance storage and I/O bandwidth across all servers as
required by workload.
– Move two database partitions from each of the old servers to the new
servers. A detailed example of this procedure is shown in 7.3.2, “Moving
database partitions without data redistribution” on page 365. Briefly, the
steps are:
i. Stop DB2.
ii. Unmount the DB2 file systems from the old servers and mount them on the new servers.
iii. Update the db2nodes.cfg file.
iv. Start DB2.
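Step iii updates db2nodes.cfg, which maps each database partition to a host and a logical port. The following before/after sketch is illustrative only (hostnames and partition counts are assumptions, not taken from this scenario); each line is partition number, hostname, logical port.

Before the move, eight partitions run on the two original servers:

```
0 old1 0
1 old1 1
2 old1 2
3 old1 3
4 old2 0
5 old2 1
6 old2 2
7 old2 3
```

After the move, two partitions from each old server run on the new servers. The partition numbers are unchanged (no data redistribution); only the hostnames and logical ports differ, and logical ports remain unique per host:

```
0 old1 0
1 old1 1
2 new1 0
3 new1 1
4 old2 0
5 old2 1
6 new2 0
7 new2 1
```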
7.1.2 Scalability dimensions
Let us take a look at some examples of linear scalability:
If a workload (for example, a query stream) executes in 10 seconds, a
doubling of system resources will result in an execution time of five seconds
for that same workload against that same amount of data.
If a workload executes in 10 seconds, a doubling of system resources and a
doubling in the amount of user data will maintain a 10-second execution time.
If 20 concurrent streams of a workload execute in an average of 15 seconds,
a doubling of system resources will maintain a 15-second execution time
average under a 40 concurrent stream workload.
If the average response time for 50 user transactions is one second, a tripling
of system resources will support 150 users while maintaining a one-second
average response time.
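The linear-scalability arithmetic behind these examples can be sketched with two small helpers (hypothetical names, not from the guide):

```python
# Ideal (linear) scaling arithmetic behind the examples above.

def expected_time_speedup(base_time, resource_factor):
    """Ideal execution time when resources grow and data stays fixed."""
    return base_time / resource_factor

def expected_time_scaleup(base_time, resource_factor, data_factor):
    """Ideal execution time when resources and data grow together."""
    return base_time * data_factor / resource_factor

# 10 s workload, resources doubled, same data -> 5 s
print(expected_time_speedup(10, 2))     # 5.0
# 10 s workload, resources and data both doubled -> still 10 s
print(expected_time_scaleup(10, 2, 2))  # 10.0
```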
Notice that all of the examples are defined using three common dimensions:
Workload - The set of tasks (for example, queries, load jobs, transactions) to
be executed. These are meant to mimic the expected load on the system.
Scale factors - A combination of data size and/or system resource levels. In
any scaling exercise, adjust either or both of these in order to maintain or
improve performance.
Performance metrics - The methods used to measure the performance of the
workload.
Note: Adding database partitions when you scale up is not required. In the
preceding scenario we suggest it in that step to simplify our scale out in the
following year.
These three dimensions help us to arrive at data points at various scale levels.
Scalability rates are simply the ratios of these data points. Let us look at each
dimension in more detail.
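A scalability rate as a ratio of two measured data points can be sketched as follows (a hypothetical helper; the guide does not prescribe a particular formula):

```python
def scalability_rate(base_time, scaled_time, scale_factor):
    """Observed speedup divided by the ideal (linear) speedup.

    1.0 indicates perfectly linear scaling; values below 1.0
    indicate sublinear scaling.
    """
    observed_speedup = base_time / scaled_time
    return observed_speedup / scale_factor

# Resources doubled; runtime fell from 10 s to 5 s -> linear
print(scalability_rate(10, 5, 2))            # 1.0
# Resources doubled; runtime fell only to 6 s -> sublinear
print(round(scalability_rate(10, 6, 2), 2))  # 0.83
```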
The key to success in predicting system performance under growth scenarios is
an accurately modeled benchmark workload. All major benchmarks, including
the Transaction Processing Performance Council’s (http://www.tpc.org) TPC-H and TPC-C,
as well as the major ISV software benchmarks (PeopleSoft, SAP), utilize
well-known standardized workloads in an attempt to facilitate “apples-to-apples”
comparisons of systems. The same holds true for your own benchmark
workloads. Your model should comprise a combination of:
Batch processing (for example, data loads, data archival, reporting)
Queries that are representative of those run by end users
Queries that are representative of those run by applications
Regularly scheduled database maintenance (for example, backup, reorg)
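A workload model combining these four components might be captured as a simple structure. The task names and shares below are illustrative assumptions, not values from the guide:

```python
# A hypothetical benchmark workload mix covering the four component
# types listed above; task names and shares are illustrative only.
workload_model = {
    "batch processing":     (["data load", "data archival", "reporting"], 0.25),
    "end-user queries":     (["sales summary", "ad hoc lookup"], 0.35),
    "application queries":  (["order entry", "inventory check"], 0.30),
    "database maintenance": (["backup", "reorg"], 0.10),
}

# Sanity check: the shares should account for the entire workload.
total_share = sum(share for _, share in workload_model.values())
assert abs(total_share - 1.0) < 1e-9
```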
The workload should be developed with strong involvement from the application
development and end-user interface teams, who should provide the majority of
the benchmark workload from these perspectives. The DBA team should be
responsible for providing the database maintenance portion of the benchmark
workload.
There are a number of benefits to maintaining a defined benchmark workload. It:
Allows for more accurate and clearly defined system testing
Provides a reusable, clearly defined test suite for repeatable, predictable testing
Provides a common frame of reference for interested parties (database
administrators, application developers, and systems administrators)
Provides a baseline so that tuning and scaling test comparisons are meaningful
The workload should be periodically reviewed and tasks added or removed to
ensure that the workload remains reflective of the current “real world” workload.