192 DB2 UDB for z/OS: Application Design for High Performance and Availability
for the row by performing I/Os and wasting CPU time. If you update multiple columns of a
certain row, combine the changes and issue the UPDATE statement once, not once per
affected column.
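The point can be illustrated with a minimal sketch, using Python's sqlite3 module as a stand-in for DB2 (the table and column names here are hypothetical, not from the book):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, salary INT, bonus INT, grade TEXT)")
conn.execute("INSERT INTO emp VALUES (1, 50000, 0, 'B')")

# Wasteful: one UPDATE per changed column means three statements,
# each locating and modifying the same row.
#   UPDATE emp SET salary = 55000 WHERE id = 1
#   UPDATE emp SET bonus  = 2000  WHERE id = 1
#   UPDATE emp SET grade  = 'A'   WHERE id = 1

# Better: combine all column changes into a single UPDATE statement.
conn.execute(
    "UPDATE emp SET salary = ?, bonus = ?, grade = ? WHERE id = ?",
    (55000, 2000, "A", 1),
)
conn.commit()
```

The row is located once and all three columns are changed in one statement.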
Rollback instead of application logic?
In general, an application program should check certain conditions before applying changes
to a database. Do not apply your changes first and only then check for data consistency,
rolling back in case of errors.
The additional overhead caused by locking and undoing all the changes made since your
last COMMIT results in unnecessary consumption of CPU and I/O, probably slowing down
concurrently running processes by stealing resources from them. In a transaction, try to
apply all changes to the database right before COMMIT to keep the time during which a
lock is held (and your application owns a resource exclusively) as short as possible. For more
information about DB2 locking, see “Serialization mechanisms used by DB2” on page 329.
Think of a ticket sales system where each person is allowed to buy at most six tickets, and a
certain person has already bought three. If the same person asks for another five tickets, do
you want to mark five more tickets as “sold” before figuring out that this exceeds the allowed
amount, and then perform a ROLLBACK? Or do you want the application to check whether
the person is allowed to buy another five tickets before applying any updates?
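The check-before-update variant of the ticket example could be sketched as follows, again using sqlite3 as a stand-in for DB2; the schema, the buyer name, and the six-ticket limit are illustrative assumptions:

```python
import sqlite3

MAX_TICKETS = 6  # per-person limit, as in the example above

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (buyer TEXT, qty INT)")
conn.execute("INSERT INTO sales VALUES ('SMITH', 3)")  # three already bought

def buy_tickets(buyer, qty):
    """Check the limit first; touch the database only if the purchase is valid."""
    (already,) = conn.execute(
        "SELECT COALESCE(SUM(qty), 0) FROM sales WHERE buyer = ?", (buyer,)
    ).fetchone()
    if already + qty > MAX_TICKETS:
        return False  # reject without updating anything; no ROLLBACK needed
    conn.execute("INSERT INTO sales VALUES (?, ?)", (buyer, qty))
    conn.commit()
    return True

print(buy_tickets("SMITH", 5))  # → False: 3 + 5 exceeds the limit of 6
print(buy_tickets("SMITH", 2))  # → True: 3 + 2 is within the limit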
Depending on the failure rate of your transactions, you may find that 99% of them complete
successfully. In that case, follow the optimistic approach: it is cheaper to roll back the 1% of
transactions that fail than to perform data consistency checks in the 99% of transactions
that are OK.
The same rule applies to reading the same data more than once instead of keeping a copy in
the working storage of your application program. Whenever you access DB2 data, make sure
that the data is not already available to your application. When a business case requires
accessing the same data twice, you might consider using ROWID to obtain a more efficient
access path. For more detailed information about ROWID access, see
“Data access using ROWID” on page 276.
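Keeping a copy in working storage can be sketched as follows (sqlite3 again stands in for DB2; the customer table and its contents are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer VALUES (42, 'SMITH')")

# Wasteful: issue the same SELECT twice for the same row.
#   SELECT name FROM customer WHERE id = 42   -- first use
#   SELECT name FROM customer WHERE id = 42   -- second use

# Better: read the row once and reuse the copy held in working storage.
row = conn.execute("SELECT name FROM customer WHERE id = 42").fetchone()
name_for_invoice = row[0]  # first use
name_for_letter = row[0]   # second use, no extra call to the database
```

The second use costs nothing beyond a local variable reference.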
8.2 Number of rows searched
Include as many predicates as possible to limit the number of rows that DB2 must scan to
return the correct result set. Do not perform filtering in the application: passing a row to the
application consumes more resources than letting DB2 verify that the row does not qualify.
For instance, if you need to calculate the number of employees in a department, there is no
point in retrieving all of the employee rows just to count them in your application. You can ask
DB2 to perform this calculation on your behalf. This is a trivial example, but the logic applies
in general.
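The employee-count example can be sketched like this (sqlite3 stands in for DB2; table, department, and names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("ANNA", "D01"), ("BEN", "D01"), ("CARA", "D02")])

# Wasteful: fetch every qualifying row and count in the application.
rows = conn.execute("SELECT name FROM emp WHERE dept = 'D01'").fetchall()
app_count = len(rows)

# Better: let the database do the counting and return a single value.
(db_count,) = conn.execute(
    "SELECT COUNT(*) FROM emp WHERE dept = 'D01'"
).fetchone()

assert app_count == db_count == 2
```

Only one value crosses the interface to the application instead of one row per employee.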
Providing as many predicates as possible is even more important when you access a
partitioned table. By specifying the partitioning key, you allow DB2 to look for qualifying rows
only in the specified partition; otherwise, all partitions have to be scanned, producing
unnecessary I/Os. The access path can be even worse if your DB2 subsystem does not
exploit I/O parallelism. See also “Page range screening” on page 101.
If, while developing the application, you are sure about a certain value for a specific column
in your WHERE clause, code it directly; do not move constant values to host variables. See
Example 8-3 on page 193.
