IBM Integration Bus, Version 10.0.0.0 Operating Systems: AIX, HP-Itanium, Linux, Solaris, Windows, z/OS


Long-lived variables

You can use appropriate long-lived ESQL data types to cache data in memory.

Sometimes data must be stored beyond the lifetime of a single message passing through a flow. One way is to store the data in a database. A database gives you long-term persistence and transactionality, but access (particularly write access) is slow.

Alternatively, you can use appropriate long-lived ESQL data types to provide an in-memory cache of the data. Access to the cache is faster than access to a database, but at the cost of shorter persistence and no transactionality.

You create long-lifetime variables by using the SHARED keyword on the DECLARE statement. For further information, see DECLARE statement.
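
For example, the following minimal sketch shows such declarations; the variable names are hypothetical, and SHARED variables are declared at module or schema level, not inside a function or procedure:

  DECLARE vatRate   SHARED DECIMAL 0.2;  -- shared scalar value with an initial value
  DECLARE rateCache SHARED ROW;           -- shared row variable that can hold a cached message tree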

Long-lived data types have an extended lifetime beyond that of a single message passing through a node. They are shared between threads and exist for the life of a message flow (that is, from one configuration change of the flow to the next), as described in the following tables.

Table 1. Short lifetime variables

  Variable type     Scope   Life                    Shared
  Schema & Module   Node    Thread within node      Not at all
  Routine Local     Node    Thread within routine   Not at all
  Block Local       Node    Thread within block     Not at all

Table 2. Long lifetime variables

  Variable type     Scope   Life            Shared
  Node Shared       Node    Life of node    All threads of flow
  Flow Shared       Flow    Life of flow    All threads of flow
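
The scope in Table 2 follows from where the SHARED declaration is placed. As a sketch (module and variable names are hypothetical), a SHARED variable declared inside a module behaves as a node-shared variable, while one declared at schema level, outside any module, behaves as a flow-shared variable:

  -- Schema level (outside any module): flow shared, available to the
  -- nodes in the flow that use this ESQL file's schema.
  DECLARE flowCounter SHARED INTEGER 0;

  CREATE COMPUTE MODULE Example_Compute
    -- Module level: node shared, available only to the threads that
    -- execute this node.
    DECLARE nodeCounter SHARED INTEGER 0;

    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
      -- ATOMIC serializes updates to the shared variables across threads
      Counters : BEGIN ATOMIC
        SET flowCounter = flowCounter + 1;
        SET nodeCounter = nodeCounter + 1;
      END Counters;
      SET OutputRoot = InputRoot;
      RETURN TRUE;
    END;
  END MODULE;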
Features of long-lived ESQL data types include:
  • An extended lifetime beyond that of a single message passing through a node
  • Visibility to all threads of a message flow
  • Existence for the life of the flow, until the next configuration change
  • Faster access than a database, but with no persistence and no transactionality

A typical use of these data types might be in a flow in which data tables are 'read-only' as far as the flow is concerned. Although the table data is not actually static, the flow does not change it, and thousands of messages pass through the flow before there is any change to the table data.

Examples include:
  • A table that contains a day's credit card transactions. The table is created each day, and that day's messages are run against it. The flow is then stopped, the table is updated, and the next day's messages are run. Such a flow might perform better if it caches the table data rather than reading it from the database for each message (see the sketch after this list).
  • The accumulation and integration of data from multiple messages.
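
The following compute module is a minimal sketch of the first example; the CARD_TXNS table, the module name, and the variable names are all hypothetical. It loads the table into a shared row variable when the first message arrives and serves every subsequent message from memory:

  CREATE COMPUTE MODULE CardCheck_Compute
    -- Node-shared cache of the day's card transactions (illustrative names)
    DECLARE cacheLoaded SHARED BOOLEAN FALSE;
    DECLARE cardCache   SHARED ROW;

    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
      -- Load the cache once; ATOMIC serializes the check-and-load
      -- across all threads of the flow.
      LoadCache : BEGIN ATOMIC
        IF NOT cacheLoaded THEN
          SET cardCache.Txn[] = (SELECT T.* FROM Database.CARD_TXNS AS T);
          SET cacheLoaded = TRUE;
        END IF;
      END LoadCache;

      -- Read-only use of the cached data for this message
      SET OutputRoot = InputRoot;
      SET OutputRoot.XMLNSC.Result.TxnCount = CARDINALITY(cardCache.Txn[]);
      RETURN TRUE;
    END;
  END MODULE;

Because the cached data lives only in memory, it is lost when the flow is redeployed or restarted, and the compute node still needs a data source configured for the Database.CARD_TXNS reference to resolve.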
