When you collect message flow statistics,
you can choose the output destination for the data.
Select one of the following destinations: user trace, XML publication, or (on z/OS only) SMF.
Statistics data is written to the specified output destination in the following
circumstances:
- When the archive data interval expires.
- When the snapshot interval expires.
- When the broker shuts down. Any data that has been collected by the broker,
but has not yet been written to the specified output destination, is written
during shutdown. It might therefore represent data for an incomplete interval.
- When any part of the broker configuration is redeployed. Redeployed configuration
data might contain an updated configuration that is not consistent with the
existing record structure (for example, a message flow might include an additional
node, or an execution group might include a new message flow). Therefore the
current data, which might represent an incomplete interval, is written to
the output destination. Data collection continues for the redeployed configuration
until you change data collection parameters or stop data collection.
- When data collection parameters are modified. If you update the parameters
that you have set for data collection, all data that is collected for the
message flow (or message flows) is written to the output destination to retain
data integrity. Statistics collection is restarted according to the new parameters.
- When an error occurs that terminates data collection. You must restart
data collection yourself in this case.
User trace
You can specify that the collected data is written to the user trace log.
The data is written even when trace is switched off. The user trace log is
the default output destination for accounting and statistics data. The data
is written to one of the following locations:
XML publication
You can specify that the collected data is published. The publication message
is created in XML format and is available to subscribers in the broker network
that are registered on the appropriate topic.
The topic on which the data is
published has the following structure:
$SYS/Broker/brokerName/StatisticsAccounting/recordType/executionGroupLabel/messageFlowLabel
The
variables correspond to the following values:
- brokerName
- The name of the broker for which statistics are collected.
- recordType
- Set to Snapshot or Archive, depending
on the type of data to which you are subscribing. Alternatively, use # to
register for both snapshot and archive data if it is being produced.
- executionGroupLabel
- The name of the execution group for which statistics are collected.
- messageFlowLabel
- The label on the message flow for which statistics are collected.
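For illustration, the following Java sketch builds topic strings from these values. The class and method names are hypothetical (they are not part of any broker-supplied API), and the broker, execution group, and flow names are placeholders:

public class StatisticsTopicBuilder {

    // Builds a topic string of the form:
    // $SYS/Broker/brokerName/StatisticsAccounting/recordType/executionGroupLabel/messageFlowLabel
    // Wildcard characters (+ and #) can be supplied for any level.
    public static String topic(String brokerName, String recordType,
                               String executionGroupLabel, String messageFlowLabel) {
        return "$SYS/Broker/" + brokerName
                + "/StatisticsAccounting/" + recordType
                + "/" + executionGroupLabel
                + "/" + messageFlowLabel;
    }

    public static void main(String[] args) {
        // Archive data for a single message flow.
        System.out.println(topic("BrokerA", "Archive", "Execution", "Flow1"));
        // Snapshot data for every flow in every execution group on BrokerA.
        System.out.println(topic("BrokerA", "Snapshot", "+", "+"));
    }
}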
Subscribers can include filter expressions to limit
the publications that they receive. For example, they can choose to see only
snapshot data, or to see data that is collected for a single broker. Subscribers
can specify wild cards (+ and #) to receive publications that
refer to multiple resources.
The following examples show the topic with
which a subscriber should register to receive different sorts of data:
- Register the following topic for the subscriber to receive data for all
message flows running on BrokerA:
$SYS/Broker/BrokerA/StatisticsAccounting/#
- Register the following topic for the subscriber to receive only archive
statistics that relate to message flow Flow1 running on
execution group Execution on broker BrokerA:
$SYS/Broker/BrokerA/StatisticsAccounting/Archive/Execution/Flow1
- Register the following topic for the subscriber to receive both snapshot
and archive data for message flow Flow1 running on execution
group Execution on broker BrokerA:
$SYS/Broker/BrokerA/StatisticsAccounting/#/Execution/Flow1
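To receive these publications programmatically, a subscribing application can use any supported publish/subscribe interface. The following sketch uses the standard JMS 1.1 API; the JNDI name jms/StatsConnectionFactory, the 30-second wait, and the topic string are assumptions for illustration only. It registers for both snapshot and archive data for all message flows on BrokerA and prints the XML payload of the first publication that arrives:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class StatisticsSubscriber {
    public static void main(String[] args) throws Exception {
        // Look up a connection factory that an administrator has defined in JNDI.
        // The JNDI name is an assumption for this example.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/StatsConnectionFactory");

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Register for snapshot and archive data for all flows on BrokerA.
        Topic topic = session.createTopic("$SYS/Broker/BrokerA/StatisticsAccounting/#");
        MessageConsumer consumer = session.createConsumer(topic);

        connection.start();

        // Wait up to 30 seconds for one publication and print its XML payload.
        Message message = consumer.receive(30000);
        if (message instanceof TextMessage) {
            System.out.println(((TextMessage) message).getText());
        }

        consumer.close();
        session.close();
        connection.close();
    }
}

If publications must not be missed while the subscriber is disconnected, a durable subscription might be more appropriate than the non-durable consumer shown here.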
The Message display, test and
performance utilities SupportPac (IH03) can help
you to register your subscriber.
SMF
On z/OS, you can specify that the data collected
is written to SMF. Accounting and statistics data uses SMF type 117 records.
SMF supports the collection of data from multiple subsystems, and you might
therefore be able to synchronize the information that is recorded from different
sources.
To interpret the recorded information, you can use any utility program that
processes SMF records.