Table 5-2 Summary of HA Configuration

Node(s) Up      IP Addresses              Mounted Filesystems
-------------   -----------------------   ---------------------
brk1 and brk2   brk1 - 192.168.10.101     brk1 - /var/mqm/qmgrs
                brk2 - 192.168.10.102     brk1 - /var/mqm/log
                                          brk2 - /var/mqm/qmgrs
                                          brk2 - /var/mqm/log

brk1 only       brk1 - 192.168.10.101     brk1 - /var/mqm/qmgrs
                brk1 - 192.168.10.102     brk1 - /var/mqm/log

brk2 only       brk2 - 192.168.10.102     brk2 - /var/mqm/qmgrs
                brk2 - 192.168.10.101     brk2 - /var/mqm/log
5.4 Running the shared-disk test
We are using the same test as in scenario two. Refer to 4.7, “Running the
shared-disk test” on page 136.
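The test itself is unchanged: put messages to the queue manager, force a failover, and confirm the messages are still retrievable afterwards. A minimal sketch of that check, assuming the WebSphere MQ sample programs are installed in the default location and a local queue (here called TEST.QUEUE, a hypothetical name) has been defined on qmgr5:

# Put a test message to qmgr5 before forcing the failover
echo "test message 1" | /opt/mqm/samp/bin/amqsput TEST.QUEUE qmgr5

# After brk1 has taken over qmgr5, confirm the message is still there
/opt/mqm/samp/bin/amqsget TEST.QUEUE qmgr5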
In this section, we examine how a failover scenario differs between the shared
SCSI disk enclosure setup and the shared fiber enclosure setup.
If the brk2 node fails, brk1 takes over its function, that is, running the queue
manager qmgr5. See Figure 5-15 on page 178. Because both brk1 and brk2 have
simultaneous access to the disks, the file systems are already available to brk1
and no disk takeover is needed, unlike in the scenario with the shared SCSI
enclosure.
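This difference shows up in the Heartbeat resource definitions: because the file systems are permanently mounted on both nodes, only the service IP address and the queue manager itself need to move. A sketch of what the /etc/ha.d/haresources entries might look like, assuming a Heartbeat v1-style configuration and a hypothetical resource script named mqseries that starts and stops a named queue manager; note that, unlike the shared SCSI scenario, no Filesystem resource is listed:

# /etc/ha.d/haresources (identical on brk1 and brk2)
# brk2 normally owns the 192.168.10.102 address and qmgr5;
# on a failure, brk1 acquires both without any disk takeover
brk1 192.168.10.101 mqseries::qmgr4
brk2 192.168.10.102 mqseries::qmgr5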
In a failback situation, when brk2 recovers, it re-establishes access to the disks
at boot time and then takes back the task of running qmgr5, without the need to
unmount or mount any file systems. Because both nodes have a connection to
the shared disks, both failback and failover are marginally faster than with
shared SCSI disks.
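After a failback, a quick check on brk2 is enough to confirm that it has picked the queue manager back up and that the shared file systems were in place all along; a minimal sketch using standard Linux and WebSphere MQ commands:

# Confirm the shared file systems are mounted (they never moved)
mount | grep /var/mqm

# Confirm qmgr5 is running again on brk2 (dspmq lists queue manager status)
dspmq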
Figure 5-15 Scenario three in failover (brk1 and brk2 connected by Heartbeat; queue
managers qmgr4 and qmgr5 with data in /var/mqm/qmgrs/qmgr4 and
/var/mqm/qmgrs/qmgr5 and logs in /var/mqm/log/qmgr4 and /var/mqm/log/qmgr5 on
the shared disks)
