Typical performance review questions

Use the following questions as a basis for your own checklist when carrying out a review of performance data. Many of these questions can be answered by performance reporting packages such as CICS® Performance Analyzer or Tivoli® Decision Support for z/OS®.

Some of the questions are not strictly related to performance. For instance, if the transaction statistics show a high frequency of transaction abends, with associated use of the abnormal condition program, this could indicate signon errors and, therefore, a lack of terminal operator training. This, in itself, is not a performance problem, but it is an example of the additional information that monitoring can provide.

  1. What are the characteristics of your transaction workload?
    1. Has the frequency of use of each transaction identifier altered?
    2. Does the mix vary from one time of the day to another?
    3. Should statistics be requested more frequently during the day to verify this?

    A different approach must be taken where transaction identifiers do not correspond one-to-one with functions: for example, where a single transaction identifier is used for several different functions, or where several transaction identifiers start the same function.

    In these cases, you have to identify the function by program or data set usage, with appropriate reference to the CICS program statistics, file statistics, or other statistics. In addition, you may be able to put user tags into the monitoring data (for example, a user character field in the case of the CICS monitoring facility), which can be used as a basis for analysis by products such as CICS Performance Analyzer for z/OS or Tivoli Decision Support for z/OS. A sketch of a simple transaction-mix analysis follows.

    The questions asked above should be directed at the appropriate set of statistics.
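
    For illustration, the following Python sketch is one minimal way to tally the transaction mix by hour from a comma-separated extract of monitoring data. The file name cmf_extract.csv and the column names hour and tranid are assumptions; adjust them to match whatever your reporting package actually exports.

    ```python
    # Minimal sketch: summarize transaction mix by hour from a CSV
    # extract of CICS monitoring data. The file name and column names
    # are assumptions; adapt them to your own extract format.
    import csv
    from collections import Counter, defaultdict

    mix_by_hour = defaultdict(Counter)  # hour -> tranid -> occurrence count

    with open("cmf_extract.csv", newline="") as f:
        for row in csv.DictReader(f):
            mix_by_hour[row["hour"]][row["tranid"]] += 1

    for hour in sorted(mix_by_hour):
        total = sum(mix_by_hour[hour].values())
        top = mix_by_hour[hour].most_common(3)
        shares = ", ".join(f"{t}: {n / total:.1%}" for t, n in top)
        print(f"{hour}:00  total={total}  top: {shares}")
    ```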

  2. What is the usage of the telecommunication lines?
    1. Do the CICS terminal statistics indicate any increase in the number of messages on the terminals on each of the lines?
    2. Does the average message length on the CICS performance class monitor reports vary for any transaction type? This can easily happen with an application where the number of lines or fields output depends on the input data. (A sketch of one way to check this follows this list.)
    3. Is the number of terminal errors acceptable? If you are using a terminal error program or node error program, does this indicate any line problems? If not, this may be a pointer to terminal operator difficulties in using the system.
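
    Along the same lines, this Python sketch reports the average and maximum message length per transaction type from a comma-separated extract of performance class records. The file name perf_class_extract.csv and the column names tranid and msglen are assumptions, not a real monitoring field layout.

    ```python
    # Minimal sketch: average and maximum terminal message length per
    # transaction type. File and column names are assumptions; adapt
    # them to the extract your reporting package produces.
    import csv
    from collections import defaultdict

    lengths = defaultdict(list)
    with open("perf_class_extract.csv", newline="") as f:
        for row in csv.DictReader(f):
            lengths[row["tranid"]].append(int(row["msglen"]))

    for tranid, vals in sorted(lengths.items()):
        avg = sum(vals) / len(vals)
        print(f"{tranid}: n={len(vals)}  avg={avg:.0f}  max={max(vals)}")
    ```
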
  3. What is the DASD usage?
    1. Is the number of requests to file control increasing? Remember that CICS records the number of logical requests made; the number of physical I/Os depends on the configuration of indexes, the number of data records per control interval, and the buffer allocations. (A sketch of a simple ratio check follows this list.)
    2. Is intrapartition transient data usage increasing? The number of I/Os that transient data performs depends on the queue mix. You should at least review the number of requests made to see how it compares with previous runs.
    3. Is auxiliary temporary storage usage increasing? Temporary storage uses control interval access, but writes the control interval out only at syncpoint or when the buffer is full.
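
    To make the first question concrete, the following sketch computes the ratio of logical file-control requests to physical I/Os. The counts are illustrative stand-ins for figures you would take from the CICS file statistics and from your I/O monitor; a falling ratio between runs can mean that buffer allocations or control interval packing are no longer absorbing requests as well as before.

    ```python
    # Minimal sketch: logical requests per physical I/O. The two counts
    # below are illustrative stand-ins, not real measurements.
    logical_requests = 120_000  # file control requests (CICS file statistics)
    physical_ios = 18_000       # physical I/Os (from your I/O monitor)

    ratio = logical_requests / physical_ios
    print(f"{ratio:.1f} logical requests per physical I/O")
    ```
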
  4. What is the virtual storage usage?
    1. How large are the dynamic storage areas?
    2. Is the number of GETMAIN requests consistent with the number and types of tasks?
    3. Is the short-on-storage (SOS) condition being reached often?
    4. Have any incidents been reported of tasks being purged after deadlock timeout interval (DTIMOUT) expiry?
    5. How much program loading activity is there?
    6. From the monitor report data, is the use of dynamic storage by task type as expected?
    7. Is storage usage similar at each execution of CICS?
    8. Are there any incident reports showing that the first invocation of a function takes much longer than subsequent ones? This can arise when programs are loaded and then have to open data sets, as happens with IMS/ESA®, for example. Can this be reconciled with the application design?
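
    A rough way to answer several of these questions at once is to compare the same statistics fields across two runs and flag noticeable drift. In this Python sketch the field names, values, and the 15% threshold are all assumptions; the values stand in for figures copied from the storage manager statistics of each run.

    ```python
    # Minimal sketch: flag storage statistics that drift between runs.
    # The field names, values, and threshold are illustrative assumptions.
    previous = {"GETMAIN requests": 250_000, "Peak DSA used (KB)": 96_000,
                "SOS occurrences": 0, "Program loads": 1_200}
    current = {"GETMAIN requests": 310_000, "Peak DSA used (KB)": 118_000,
               "SOS occurrences": 3, "Program loads": 1_250}

    for name, old in previous.items():
        new = current[name]
        if old == 0:
            flag = "  <-- investigate" if new > 0 else ""
        else:
            flag = "  <-- investigate" if abs(new - old) / old > 0.15 else ""
        print(f"{name}: {old} -> {new}{flag}")
    ```
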
  5. What is the processor usage?
    1. Is the processor usage as measured by the monitor report consistent with previous observations?
    2. Are batch jobs that are planned to run able to run successfully?
    3. Is there any increase in the usage of functions running at a higher priority than CICS? Include in this MVS™ readers and writers, MVS JES, and VTAM® if it is running above CICS, and overall I/O, because of the lower-priority regions.
  6. What is the coupling facility usage?
    1. What is the average storage usage?
    2. What is the ISC link utilization?
  7. Do any figures indicate design, coding, or operational errors?
    1. Are any of the resources mentioned above heavily used? If so, was this expected at design time? If not, can the heavy use be explained in terms of heavier use of transactions?
    2. Is the heavy usage associated with a particular application? If so, is there evidence of planned growth or peak periods?
    3. Are browse transactions issuing more than the expected number of requests? In other words, does the number of browse requests issued per transaction exceed what you expected users to generate?
    4. Is the CICS CSAC transaction (provided by the DFHACP abnormal condition program) being used frequently? Is this because invalid transaction identifiers are being entered? For example, errors are signaled if transaction identifiers are entered in lowercase on IBM® 3270 terminals but automatic translation of input to uppercase has not been specified.

      A high use of the DFHACP program without a corresponding count of CSAC may indicate that transactions are being entered without proper operator signon. This may, in turn, indicate that some terminal operators need more training in using the system.
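
      Following the reasoning above, this sketch subtracts the CSAC count from the DFHACP invocation count to estimate how many abnormal conditions were not simple invalid transaction identifiers. The two counts are illustrative stand-ins for figures from the CICS program and transaction statistics.

      ```python
      # Minimal sketch: DFHACP invocations not accounted for by CSAC.
      # The counts are illustrative stand-ins, not real measurements.
      dfhacp_invocations = 940  # abnormal condition program (program statistics)
      csac_count = 120          # CSAC transaction count (transaction statistics)

      other_abends = dfhacp_invocations - csac_count
      if other_abends > 0:
          print(f"{other_abends} DFHACP uses without a matching CSAC count;")
          print("check for signon errors and operator training needs.")
      ```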

In addition to the above, you should regularly review certain items in the CICS statistics, such as the number of short-on-storage occurrences, any storage violations, and the counts of string waits and buffer waits for files and temporary storage.

You should also satisfy yourself that large numbers of dumps are not being produced.

Furthermore, you should review the effects, causes, and duration of system outages. If there is a series of outages, you may be able to detect a common cause.

Related tasks
Performance monitoring and review

Deciding on monitoring activities and techniques
Developing monitoring activities and techniques
Planning the performance review process
Planning your monitoring schedule
Reviewing performance data
Confirming that the system-oriented objectives are reasonable
Anticipating and monitoring system changes and growth