Chapter 6. Recovery 203
Closing and reopening the data set ensures externalization of your allocated QSAM buffers, but it consumes considerable elapsed time to guard against the unlikely event of an abend, degrading your performance dramatically. As an alternative, consider ESTAE (Extended Specify Task Abnormal Exit) routines to clean up your environment in case of an abnormal termination, for example by closing files to force buffer externalization.
If your application abends with abend code B37 (not enough space available), you can lose
your output buffers even if an ESTAE routine is established.
You will need to talk to your z/OS systems programmer to implement ESTAE routines.
Instead of repositioning on your output data sets, consider using generation data group
(GDG) data sets. At each restart, you create a new generation of the GDG. If you need to
eliminate duplicate records in sequential data sets that result from a restart, you can
postpone the elimination until the end of all application phases by running an additional
DFSORT™ step that suppresses records with duplicate control fields across all created
generations. Example 6-11 shows the required DFSORT statements.
Example 6-11 Eliminating duplicate records using DFSORT
SORT FIELDS=(1,3,CH,A)
SUM FIELDS=NONE
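The effect of these statements can be modeled outside DFSORT as well. The following Python sketch (an illustration only, not DFSORT itself) sorts records on the 3-byte control field starting at position 1 and keeps one record per key, which is what SORT FIELDS=(1,3,CH,A) combined with SUM FIELDS=NONE achieves. Note that without the EQUALS option, DFSORT does not guarantee which of the duplicates survives; this sketch keeps the first in sorted order.

```python
def dedupe(records):
    """Keep one record per 3-byte control field (positions 1-3)."""
    seen = set()
    out = []
    for rec in sorted(records, key=lambda r: r[0:3]):  # SORT FIELDS=(1,3,CH,A)
        key = rec[0:3]
        if key not in seen:        # SUM FIELDS=NONE: drop duplicate keys
            seen.add(key)
            out.append(rec)
    return out

# Records merged from all GDG generations; duplicates came from restarts.
merged = ["001AAA", "002BBB", "001CCC", "003DDD", "002EEE"]
print(dedupe(merged))  # ['001AAA', '002BBB', '003DDD']
```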
6.5.8 Restart using DB2 tables for input and output files
Since sequential file handling during restart operations can be challenging, you can
guarantee data consistency by using the following technique:
1. LOAD your input files into DB2 table A without indexes.
2. Read your input data from DB2 table A using a FOR UPDATE OF cursor.
3. Write your output records to DB2 table B.
4. Delete rows you have already processed from table A using WHERE CURRENT OF
CURSOR.
In case of a restart, your cursor simply repositions on table A, since only unprocessed
rows remain in the table.
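The four steps above can be sketched as follows. This is a hypothetical illustration using Python's sqlite3 module in place of DB2; the table names A and B and the column names are invented. sqlite3 has no positioned DELETE, so a DELETE by key stands in for DB2's WHERE CURRENT OF.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (id INTEGER PRIMARY KEY, payload TEXT)")  # input
con.execute("CREATE TABLE b (id INTEGER PRIMARY KEY, payload TEXT)")  # output
# Step 1: load the input data into table A.
con.executemany("INSERT INTO a VALUES (?, ?)", [(1, "x"), (2, "y"), (3, "z")])
con.commit()

def process(con):
    # Steps 2-4: read a row from A, write its result to B, then delete
    # the processed row from A within the same unit of work.
    for rowid, payload in con.execute("SELECT id, payload FROM a").fetchall():
        con.execute("INSERT INTO b VALUES (?, ?)", (rowid, payload.upper()))
        con.execute("DELETE FROM a WHERE id = ?", (rowid,))
        con.commit()  # after a restart, the loop resumes with the rows left in A

process(con)
print(con.execute("SELECT COUNT(*) FROM a").fetchone()[0])  # 0 rows remain
```

Because each commit removes the processed row from A, rerunning process() after a failure picks up exactly where the last commit left off, with no repositioning logic.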
There is additional overhead in doing INSERTs into a DB2 table instead of writing to a
sequential file, but the special checkpoint and restart considerations for sequential files are
eliminated. To reduce the round trips to DB2 that one INSERT statement per row would cause,
consider using multi-row INSERT if a reasonable number of output records accumulate within
each commit interval. Often an application does one write to a sequential file only after having
executed several SQL statements; in that case the overhead of the INSERTs may be small.
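The batching idea can be sketched like this, again with sqlite3 standing in for DB2 and an assumed commit interval of 100 rows. executemany() here plays the role of a DB2 multi-row INSERT: rows are buffered and sent in one statement per commit interval rather than one round trip per row.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE b (id INTEGER, payload TEXT)")

COMMIT_INTERVAL = 100           # assumed commit frequency
buffer = []
for i in range(250):            # stand-in for the application's output records
    buffer.append((i, f"rec{i}"))
    if len(buffer) == COMMIT_INTERVAL:
        con.executemany("INSERT INTO b VALUES (?, ?)", buffer)  # one multi-row insert
        con.commit()
        buffer.clear()
if buffer:                      # flush the final partial batch
    con.executemany("INSERT INTO b VALUES (?, ?)", buffer)
    con.commit()

print(con.execute("SELECT COUNT(*) FROM b").fetchone()[0])  # 250
```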
6.6 Scrollable cursors
Scrollable cursors were introduced in DB2 V7. They provide support for:
- Fetching a cursor result set in backward and forward directions
- Using a relative number to fetch forward or backward
- Using an absolute number to fetch forward or backward
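Conceptually, these fetch orientations move a position within a materialized result set, which is what the DTT-based technique provides. The following Python class is an illustrative model of that behavior only, not the DB2 API: fetch_absolute(n) corresponds to FETCH ABSOLUTE n (1-based), and fetch_relative(n) moves the position forward or backward from where it is.

```python
class ScrollableCursor:
    """Toy model of scrollable-cursor positioning over a materialized result set."""

    def __init__(self, rows):
        self.rows = rows
        self.pos = 0            # 0 means "before the first row"

    def fetch_absolute(self, n):
        self.pos = n            # position on row n (1-based), like FETCH ABSOLUTE n
        return self.rows[n - 1]

    def fetch_relative(self, n):
        self.pos += n           # move forward (n > 0) or backward (n < 0)
        return self.rows[self.pos - 1]

cur = ScrollableCursor(["r1", "r2", "r3", "r4"])
print(cur.fetch_absolute(3))    # r3
print(cur.fetch_relative(-2))   # r1: two rows back from position 3
```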
The technique used in DB2 V7 is based on declared temporary tables automatically created
by DB2 at runtime. The actual database and table space for the DTT are created by the user,
