174 WebSphere Replication Server for z/OS Using Q Replication: High Availability Scenarios for the z/OS Platform
򐂰 Failover operations with no inflight automatic load in progress at the time of failure, in
which unidirectional Q replication is restored to the secondary server SC59 without any
data loss after the failure of LPAR SC53. The Q Capture process that was running on the
failed LPAR SC53 is restarted (manually or automatically) on LPAR SC67.
򐂰 Failover operations with an inflight automatic load in progress at the time of failure, in
which unidirectional Q replication is restored to the secondary server SC59 without any
data loss after the failure of LPAR SC53. The Q Capture process that was running on the
failed LPAR SC53 is restarted (manually or automatically) on LPAR SC67.
These test cases and results are described in detail in the following sections.
5.6.1 Failover operations with no inflight automatic load in progress
The objective of this test case is to demonstrate that under failover operations with no inflight
automatic load in progress, Q replication operates in a seamless fashion in our WebSphere
MQ shared disk high availability primary server environment with no data loss.
The following assumptions are made about the test case environment:
򐂰 Application workloads are running against both members of the DB2 data sharing group
D8GG comprising member databases D8G1 on LPAR SC53 and D8G2 on LPAR SC67.
򐂰 Q Capture is running on the primary server LPAR SC53.
Attention: The objectives of these test cases and most of the steps listed for each test
case are identical to those conducted in Section 4.6, “Test cases” on page 134, for the
shared queues scenario. The differences are as follows:
򐂰 The shared disk scenarios described here have additional steps to stop the queue
manager MQZ1 on LPAR SC53 and start queue manager MQZ1 again on LPAR SC67
after the simulation of the failure of LPAR SC53 through a STOP DB2 MODE(FORCE)
command.
򐂰 Where the steps are identical, the shared disk scenario steps refer back to their
corresponding steps in the shared queues test cases. Note, however, that the shared
disk steps use the schema ITSOD instead of the ITSOQ schema used in the shared
queues steps, and that the character ‘D’ is added to various object names for shared
disk. For example:
– CREATE TABLE ITSOD.PRODUCTS in shared disk versus CREATE TABLE
ITSOQ.PRODUCTS in shared queues
– CREATE TABLE ITSOD.IBMQREP_CAPPARMS in shared disk versus CREATE
TABLE ITSOQ.IBMQREP_CAPPARMS in shared queues
– QREP.SRCD.ADMINQ in shared disk versus QREP.SRC.ADMINQ in shared
queues
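The additional shared disk steps (simulating the failure of LPAR SC53, then moving queue manager MQZ1 to LPAR SC67) can be sketched as the following sequence of z/OS console commands. This is a sketch only: the command prefixes -D8G1 and +MQZ1 are assumptions, and the prefixes in a given installation may differ.

```
-D8G1 STOP DB2 MODE(FORCE)     (on SC53: simulate the failure of LPAR SC53)
+MQZ1 STOP QMGR                (on SC53: stop queue manager MQZ1, if still up)
+MQZ1 START QMGR               (on SC67: restart MQZ1 against the shared disk)
```

Because the queue data sets reside on shared disk, the restarted MQZ1 on SC67 recovers the same queues that were in use on SC53, subject to the mutually exclusive access described in the following attention box.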
Attention: As mentioned earlier, even though the WebSphere MQ queues used by Q
replication objects are stored on shared disk, they may only be accessed by queue
manager MQZ1 on LPAR SC53 or queue manager MQZ1 on LPAR SC67 in a mutually
exclusive manner. Q replication likewise accesses these queues on shared disk from only
one LPAR or the other; in other words, access to the shared Q replication WebSphere MQ
objects on the shared disk is mutually exclusive. In our scenario, any attempt to start
Q Capture on LPAR SC67 while it is already running on LPAR SC53 results in the error
message ASN0554E “Q Capture” : “ITSOQ” : “Initial” : The program is already
active, as shown in the MVS log output in Example 4-22 on page 134.
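Once Q Capture has stopped on SC53, restarting it on SC67 can be sketched with the asnqcap command. The server and schema values below are taken from this scenario; startmode=warmsi is an assumption about how a restart that resumes from the last captured point, and hence avoids data loss, would typically be invoked:

```
asnqcap capture_server=D8GG capture_schema=ITSOD startmode=warmsi
```

Warm start modes cause Q Capture to resume from its restart information rather than reinitialize, which is what allows replication to continue on SC67 without losing changes captured before the failure.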
