Chapter 15. Data sharing specifics 405
Move system boundaries
The impact of putting multiple processors under the control of a single LPAR is one consideration when utilizing an increased number of processors. Some installations prefer to use two LPARs with 6 processors each rather than one large LPAR with 12 processors. Each z990 box can accommodate up to 32 central processors. An IBM zSeries z990 model 2084-324, which has 24 processors on board, provides the equivalent of 15.39 single-engine machines in a mixed workload environment.
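The 24-way figure quoted above implies a measurable multiprocessing (MP) effect. A minimal sketch of that arithmetic, using only the numbers from the text (the function name is illustrative):

```python
# Sketch: estimating the MP (multiprocessing) effect from LSPR-style figures.
# Per the text, a 24-way z990 delivers about 15.39 single-engine equivalents
# in a mixed workload environment.
def per_engine_effectiveness(n_processors: int, single_engine_equivalents: float) -> float:
    """Average capacity each engine contributes, relative to a 1-way machine."""
    return single_engine_equivalents / n_processors

ratio = per_engine_effectiveness(24, 15.39)
print(f"Each of the 24 engines delivers ~{ratio:.0%} of a single-engine machine")
# ~64%; the shortfall is the MP effect (contention for shared resources)
```

The gap between the nominal 24 engines and the 15.39 equivalents is what the law of diminishing returns refers to here.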
The more recent IBM System z9™ 109 S54 can have 54 Processor Units. Although adding further capacity can degrade performance as the processors compete for shared resources (the law of diminishing returns), many customers run with 16 processors, and several others are moving beyond 16. The Large Systems Performance Reference, SC28-1187, is the source for IBM's assessment of relative processor capacity and can help in identifying the point beyond which horizontal scaling with the Parallel Sysplex comes into play.
Move DB2 boundaries
The DB2 active log is a shared resource for all applications connected to the same DB2 subsystem. Web applications can create tremendous sudden workloads with log activity spikes. With LOB (LOG YES) and the introduction of XML data types, we may also see an increased demand for logging throughput. The DB2 active log is a single VSAM data set (duplicated when dual logging is used) with limited bandwidth. The log write bandwidth is related to the number of I/Os that are performed per second, which in turn is related to the number of log force write events and the number of CIs that have to be written per second. High total log write volumes have required the use of VSAM striping techniques and FICON channels. But there is still a practical limit of about 40 MB per second. If your application volume needs more than this bandwidth, data sharing can increase this value by adding DB2 members, each of them providing the 40 MB per second bandwidth.
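Given the roughly 40 MB per second practical limit per member, sizing the number of members for a required aggregate log write rate is a simple ceiling division. A hedged sketch (the constant and function names are illustrative, not from DB2 itself):

```python
import math

# Assumed practical per-member active log write limit, per the text (~40 MB/s).
PER_MEMBER_LOG_MBPS = 40.0

def members_needed(required_mbps: float) -> int:
    """Minimum number of data sharing members to sustain the required log rate."""
    return math.ceil(required_mbps / PER_MEMBER_LOG_MBPS)

print(members_needed(100.0))  # 3 members to cover 100 MB/s of log writes
```

In practice, the achievable rate per member also depends on striping, channel configuration, and commit patterns, so treat the 40 MB/s figure as a planning ceiling rather than a guarantee.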
15.2 Data sharing overhead
Ideally, a DB2 subsystem using many identical processors would generate a throughput that
can be defined as:
Throughput = (number of DB2 members) x (transaction rate of each member)
The definition assumes that the response time of each transaction remains unchanged.
However, combining multiple processors introduces overhead: the additional processing cost of inter-DB2 data coherency control and global locking. The design and implementation of DB2 data sharing can help mitigate this overhead.
The overhead depends on the percentage of total application processing that accesses DB2 data in read-write (R/W) or write-write (W/W) inter-DB2 sharing mode, the intensity of CF access (accesses per second of CPU time), the technical configuration, the workload distribution, and the volume of lock contention. Data sharing overhead should be very small for read-only (R/O) OLTP applications with thread reuse and effective lock avoidance. It may be relatively high for an update-intensive batch application. Typical overhead is 5% to 15%, with a lot of variation at the individual application level.
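The ideal-throughput formula above can be combined with the quoted overhead range to estimate effective throughput. A minimal sketch, assuming overhead is modeled as a simple fractional reduction (the function and sample numbers are illustrative):

```python
# Sketch of the ideal-throughput formula from the text, scaled down by a
# data sharing overhead fraction (typically 0.05 to 0.15 per the text).
def effective_throughput(members: int, tx_rate_per_member: float, overhead: float) -> float:
    """Ideal throughput (members x per-member rate) reduced by the overhead fraction."""
    return members * tx_rate_per_member * (1.0 - overhead)

ideal = effective_throughput(4, 500.0, 0.0)   # 2000 tx/s with no overhead
worst = effective_throughput(4, 500.0, 0.15)  # 1700 tx/s at 15% overhead
print(ideal, worst)
```

This is only a planning approximation; as the text notes, the actual overhead varies widely by application profile and configuration.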