Using modules in the link pack area (LPA/ELPA)

Some CICS® management and user modules can be moved into the link pack area (LPA) or the extended link pack area (ELPA). For systems running multiple copies of CICS, this can allow those multiple copies to share the same set of CICS management code.

Effects

The benefits of placing code in the LPA or ELPA are:

The code is protected from corruption. Because the link pack area is page-protected, its modules cannot be overwritten, even by programs running in key 0.

Real storage demand is reduced. A single copy of each module is shared by all regions in the system, instead of each CICS region loading its own private copy.

Limitations

Putting modules in the LPA or ELPA requires an IPL of the operating system. Maintenance requirements should also be considered. If test and production systems are sharing LPA or ELPA modules, it may be desirable to run the test system without the LPA or ELPA modules when new maintenance is being tested.

The disadvantage of placing too many modules in the LPA (but not the ELPA) is that the LPA may become excessively large. Because the boundary between the CSA and the private area falls on a segment boundary, an oversized LPA can push that boundary down by a megabyte, reducing the private area available to every address space. The size of the ELPA is not usually a problem.

Recommendations

Use the SMP/E USERMOD called LPAUMOD to select the modules that you want to place in the LPA. This USERMOD indicates which modules are eligible for the LPA or ELPA, and you can use it to move those modules into your LPA library.
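A USERMOD is applied with the standard SMP/E APPLY command. The following is a minimal sketch only; the CSI data set name and target zone name are assumptions for illustration and must be replaced with the values for your installation:

```
//APPLYMOD EXEC PGM=GIMSMP
//SMPCSI   DD DSN=CICSTS.GLOBAL.CSI,DISP=SHR    <- hypothetical CSI name
//SMPCNTL  DD *
  SET BOUNDARY(TZONE).                          <- hypothetical target zone
  APPLY SELECT(LPAUMOD).
/*
```

After the USERMOD is applied, the selected modules can be copied into the library that your system includes in the LPA list.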

The objective is to use the LPA selectively, so that you derive the maximum benefit from the modules you place there.

All users with multiple CICS address spaces should put all eligible modules in the ELPA.

How implemented

LPA=YES must be specified in the system initialization table (SIT). Specifying LPA=NO allows you to test a system with new versions of CICS programs (for example, a new release) before moving the code to the production system. The production system can then continue to use modules from the LPA while you are testing the new versions.

An additional control, the PRVMOD system initialization parameter, enables you to exclude particular modules explicitly from use in the LPA.
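Taken together, a system initialization table override for a region that uses the LPA, but excludes some modules, might look like the following sketch. The module names given to PRVMOD are hypothetical placeholders, not real CICS module names:

```
LPA=YES,
PRVMOD=(DFHXXXA,DFHXXXB)
```

With these parameters, CICS searches the LPA for eligible modules but loads private copies of the two modules named on PRVMOD, for example while maintenance to those modules is being tested.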

For information on installing modules in the LPA, see the CICS Transaction Server for z/OS® Installation Guide.

Related tasks
Virtual and real storage: performance considerations
Tuning CICS virtual storage
Splitting online systems: virtual storage
Setting the maximum task specification (MXT)
Using transaction classes (MAXACTIVE) to control transactions
Specifying a transaction class purge threshold (PURGETHRESH)
Prioritizing tasks
Adjusting the limits for dynamic storage areas
Choosing aligned or unaligned maps
Defining programs as resident, nonresident, or transient
Putting application programs above the 16MB line
Allocating real storage when using transaction isolation
Limiting the expansion of subpool 229 using VTAM pacing