30 Highly Available WebSphere Business Integration Solutions
Figure 3-2 MQ clustering (callouts: multiple queues with a single image; definition local on each destination queue manager; failure isolation; scalable throughput; MQGET always local)
As shown in Figure 3-2, if an application doing MQPUTs has opened the output
queue called Q1 specifying Bind Not Fixed, the loss of access to Q1 on QM3 will
not stop operations. The messages are simply routed to the remaining
available instances of Q1 within the queue manager cluster CLUS1. For more
in-depth information on clustering, refer to the manual WebSphere MQ Queue
Manager Clusters, SC34-6061.
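The behavior described above depends on how the cluster queue is defined and opened. As a minimal sketch (the queue and cluster names are taken from the figure; the attributes are standard MQSC), each hosting queue manager could define Q1 with DEFBIND(NOTFIXED), so that an MQOPEN does not fix the binding to a single instance:

```
* Hypothetical MQSC definition, repeated on each hosting queue manager
* (QM1, QM2, QM3). DEFBIND(NOTFIXED) makes Bind Not Fixed the default.
DEFINE QLOCAL(Q1) CLUSTER(CLUS1) DEFBIND(NOTFIXED)
```

Alternatively, an application can request the same behavior itself by specifying the MQOO_BIND_NOT_FIXED option on MQOPEN. Either way, cluster workload balancing can then choose an available instance of Q1 for each message, which is what allows messages to flow around the failed QM3.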
3.4 InterChange Server availability
The availability of the InterChange Server system is built on a hardware
clustering solution, which is discussed in 3.6, “HA cluster design” on page 34.
For correct failover and recovery behavior, the broker is dependent on message
resilience and message availability provided by the underlying transport layer
discussed in 3.3, “WebSphere MQ availability” on page 26, and on event and
transaction information provided by a highly available database. Independent
of the cluster solution, the InterChange Server should be configured to provide
a reliable service. That means its database connectivity must not exceed the
database’s capacity, it must not consume more than the available memory, and
its recovery behavior must be configured correctly. All configuration parameters
mentioned in this section are documented and explained in the IBM WebSphere
InterChange Server System Administration Guide V4.2.2, and the IBM
WebSphere InterChange Server Implementation Guide for WebSphere
InterChange Server V4.2.2 respectively. Please refer to the product
documentation for complete details.
Database connectivity
Configure InterChange Server for optimal database resource allocation:
1. MAX_CONNECTIONS: Specifies how many simultaneous connections
InterChange Server can establish with the DBMS server. Ensure that this
number is high enough to allow unconstrained processing, for example
MAX_CONNECTIONS=100; better still, do not specify any limit at all if the
DBMS allows it. InterChange Server times out idle connections after 2
minutes, or after the configured value of the IDLE_TIMEOUT parameter.
2. MAX_CONNECTION_POOLS: Specify the number of databases you are
using. The minimum value is 4 (EventDB, RepositoryDB, TransactionDB and …).
3. MAX_DEADLOCK_RETRY_COUNT: Allow InterChange Server to wait for
possible database deadlocks to be resolved by setting this parameter (and
the parameter DEADLOCK_RETRY_INTERVAL) to a value higher than 0. If
the value is set to 0 and a deadlock occurs, the transaction is not retried,
which can cause the InterChange Server to shut down. In short, deadlocks
can occur when concurrently executing collaboration groups call each other.
For details, refer to the IBM WebSphere InterChange Server Implementation
Guide for WebSphere InterChange Server V4.2.2.
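Taken together, the parameters above might appear as follows in the server configuration. This is an illustrative sketch only: the parameter names come from this section, but the values (and the exact file syntax, normally InterChangeSystem.cfg) are assumptions to be checked against the System Administration Guide:

```
MAX_CONNECTIONS=100
MAX_CONNECTION_POOLS=4
MAX_DEADLOCK_RETRY_COUNT=5
DEADLOCK_RETRY_INTERVAL=30
```

Any nonzero MAX_DEADLOCK_RETRY_COUNT avoids the shutdown-on-deadlock behavior described above; the retry count and interval shown are example values.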
Memory management
InterChange Server prevents itself from using up too much memory and shutting
down with an OutOfMemory exception. It does this by checking the available
memory and pausing the polling of adapters should InterChange Server run
short of memory. Refer to the section “Controlling Server Memory Usage” in the
IBM WebSphere InterChange Server Implementation Guide for WebSphere
InterChange Server V4.2.2 for details:
1. CW_MEMORY_MAX: Set this value in the InterChange Server start script to
the maximum amount of memory available for the InterChange Server JVM. It
needs to accommodate your largest business objects. It also needs to
accommodate other processes (such as WebSphere Business Integration
Adapters) running in the same memory.
2. FLOW CONTROL (component based): Configure MaxCapacity on the
controllers or collaborations where you expect the most memory consumption.
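As an illustration of the first parameter, CW_MEMORY_MAX is set in the start script before the JVM is launched. The value and the way the variable is consumed below are assumptions for this sketch; adapt them to your actual start script:

```
# Hypothetical excerpt from an InterChange Server start script.
# 512m is an example value: it must accommodate your largest business
# objects plus any adapters running in the same memory.
CW_MEMORY_MAX=512m

# Assumed usage: cap the InterChange Server JVM heap at that value
# (remaining launch arguments elided).
java -Xmx${CW_MEMORY_MAX} ...
```

Setting this too low triggers the memory-checking behavior described above more often (paused adapter polling); setting it higher than the physical memory available to the machine defeats the check.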
