Every queue manager in a cluster must refer to one of the full repositories to gather information about the cluster and so build up its own partial repository. It does not matter which full repository you choose; in this example we choose NEWYORK. Once the new queue manager has joined the cluster, it communicates with both full repositories.
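If you are not sure which queue managers hold the full repositories, one way to check (shown here as a sketch, run in MQSC against a queue manager that is already a member of the cluster) is to display the cluster queue managers and their repository type; QMTYPE(REPOS) identifies a full repository:
DISPLAY CLUSQMGR(*) CLUSTER(INVENTORY) QMTYPE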
Every queue manager in a cluster needs to define a cluster-receiver channel on which it can receive messages. On TORONTO, define:
DEFINE CHANNEL(TO.TORONTO) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME(TORONTO.CHSTORE.COM) CLUSTER(INVENTORY) DESCR('Cluster-receiver channel for TORONTO')
This advertises the queue manager's availability to receive messages from other queue managers in the cluster, INVENTORY.
Every queue manager in a cluster needs to define one cluster-sender channel on which it can send messages to its first full repository. In this case we have chosen NEWYORK, so TORONTO needs the following definition:
DEFINE CHANNEL(TO.NEWYORK) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME(NEWYORK.CHSTORE.COM) CLUSTER(INVENTORY) DESCR('Cluster-sender channel from TORONTO to repository at NEWYORK')
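Before going further, you can check both channel definitions on TORONTO. The following generic display is a minimal example and assumes that only the two channels defined above match the TO.* pattern:
DISPLAY CHANNEL(TO.*) ALL
Note that the cluster-sender channel name, TO.NEWYORK, matches the name of the cluster-receiver channel defined at the NEWYORK queue manager.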
Before proceeding, ensure that the inventory application does not have any dependencies on the sequence of processing of messages. See Reviewing applications for message affinities for more information.
See the WebSphere MQ Application Programming Guide for information about how to handle message affinities.
The INVENTQ queue, which is already hosted by the NEWYORK queue manager, is also to be hosted by TORONTO. Define it on the TORONTO queue manager as follows:
DEFINE QLOCAL(INVENTQ) CLUSTER(INVENTORY)
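After you complete the next step and the cluster channels are able to run, you can confirm that the cluster knows about both instances of the queue. A minimal check, run from any queue manager in the cluster (allow a short time for the repositories to propagate the definition), might be:
DISPLAY QCLUSTER(INVENTQ)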
Now that you have completed all the definitions, if you have not already done so, start the channel initiator on WebSphere MQ for z/OS and, on all platforms, start a listener program on queue manager TORONTO. The listener program listens for incoming network requests and starts the cluster-receiver channel when it is needed. See Establishing communication in a cluster for more information.
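On the distributed platforms you can define and start the listener from MQSC, as in the sketch below; the port number is an assumption (1414 is the conventional default), so use whatever port the CONNAME values in your cluster definitions expect. On z/OS, use the channel initiator as described above instead.
DEFINE LISTENER(TORONTO.LISTENER) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(TORONTO.LISTENER)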
The cluster set up by this task contains the queue managers LONDON, NEWYORK, PARIS, and TORONTO, with the INVENTQ queue now hosted at both NEWYORK and TORONTO.
The INVENTQ queue and the inventory application are now hosted on two queue managers in the cluster. This increases their availability, increases message throughput, and allows the workload to be distributed between the two queue managers. Messages put to INVENTQ by either TORONTO or NEWYORK are handled by the instance on the local queue manager whenever possible. Messages put by LONDON or PARIS are routed alternately to TORONTO or NEWYORK, so that the workload is balanced.
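How evenly individual messages are spread also depends on how the putting applications bind to an instance of the queue when they open it. As a sketch only (whether this is appropriate depends on your applications, so treat it as an assumption rather than a required step), you could set the queue's default binding on each queue manager that hosts INVENTQ so that messages are not fixed to one instance for the life of an open:
ALTER QLOCAL(INVENTQ) DEFBIND(NOTFIXED)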
This modification to the cluster was accomplished without you having to make any alterations to the queue managers NEWYORK, LONDON, and PARIS. The repositories in these queue managers are updated automatically with the information they need to be able to send messages to INVENTQ at TORONTO.
Assuming that the inventory application is designed appropriately and that there is sufficient processing capacity on the systems in New York and Toronto, the inventory application will continue to function if either the NEWYORK or the TORONTO queue manager becomes unavailable.
As you can see from the result of this task, you can have the same application running on more than one queue manager. You can use this facility to distribute your workload evenly, or you can control the distribution yourself by using a data-partitioning technique.
For example, suppose that you decide to add a customer-account query and update application running in LONDON and NEWYORK. Account information can be held in only one place, but you could arrange for half the records (for example, account numbers 00000 to 49999) to be held in LONDON, and the other half (50000 to 99999) to be held in NEWYORK. Write a cluster workload exit program to examine the account field in all messages and route them to the appropriate queue manager.
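Writing the exit itself is beyond the scope of this task. As a hedged sketch of how such an exit might be plugged in on the distributed platforms once it is written, where the module path, entry point, and data string below are all hypothetical, you name it on the queue manager's cluster workload attributes:
ALTER QMGR CLWLEXIT('/var/mqm/exits/acctroute(AcctRoute)') CLWLDATA('SPLIT ACCOUNTS AT 50000')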