


Estimating Disk and Memory Requirements for a Database


This chapter helps you estimate disk and memory requirements and tells you about database settings, including how to change settings. This chapter contains the following sections:

Note:   If you are migrating from an earlier version of Hyperion Essbase, see the Hyperion Essbase Start Here booklet for guidance in estimating space requirements.

For information about specifying and changing Hyperion Essbase kernel settings, see Specifying Hyperion Essbase Kernel Settings.


Understanding How Hyperion Essbase Stores Data

This topic describes the units of storage that you need to know about in order to size a database. The discussion assumes that you are familiar with basic concepts of the Hyperion Essbase kernel. See Introducing the Hyperion Essbase Kernel to learn about the Hyperion Essbase kernel.

A Hyperion Essbase database consists of many different components. In addition to an outline file and a data file, Hyperion Essbase uses various files and memory structures to manage data storage, calculation, and retrieval operations. Table 15-1 describes the major components that you must consider when you estimate the disk and memory requirements of a database.


Table 15-1: Storage Units Relevant to Calculation of Disk and Memory Requirements

Storage Unit Description Disk Memory
Outline A structure that defines all elements of a database. The number of members in an outline determines the size of the outline.
Data files A group of files in which Hyperion Essbase stores data values in data blocks. Named ESSxxxxx.PAG, where xxxxx is a number.
Hyperion Essbase increments the number, starting with ESS00001.PAG, on each disk volume.

Data block A subdivision of a data file. A multidimensional array that represents all cells of all dense dimensions relative to a particular intersection of sparse dimensions.
Index files A group of files that Hyperion Essbase uses to retrieve data blocks from data files. Named ESSxxxxx.IND, where xxxxx is a number. Hyperion Essbase increments the number, starting with ESS00001.IND, on each disk volume.
Index page A subdivision of an index file. Contains index entries that point to data blocks.
Index cache A buffer in memory that holds index pages. Hyperion Essbase allocates this memory at startup of the database.
Data file cache A buffer in memory that holds data files. Hyperion Essbase allocates this memory during data load, calculation, and retrieval operations, as needed.
Data cache A buffer in memory that holds data blocks. Hyperion Essbase allocates this memory during data load, calculation, and retrieval operations, as needed.
Calculator cache A buffer in memory that Hyperion Essbase uses to create and track data blocks during calculation operations.

When you load data to a database, Hyperion Essbase creates and populates index and data files on the disk. When you access data, Hyperion Essbase executes an index search and loads index information and data into memory buffers called caches.

Hyperion Essbase loads index information from the disk into the index cache on demand. The index information points to data, which Hyperion Essbase retrieves from data files into the data file cache.

The calculator cache is an optional cache used for calculations. For information on the calculator cache, see Setting Memory Cache Sizes.


Determining Disk Space Requirements

Hyperion Essbase requires disk space for basic server software and for each database. Each database comprises many files, including the data files, the index files, the overhead files, and the outline file. The following subtopics enable you to estimate the size of each of these database components. After you determine each component size, see Estimating Disk Space for a Database to approximate the amount of disk space needed for an entire database.


General Disk Space Requirements for a Database

This discussion begins with guidelines for calculating adequate disk space for a single database. To estimate overall disk space requirements, see Total Disk Requirement.

The total amount of disk space required for a database depends on the following factors:

Most multidimensional applications tend to be sparse, and the amount of space required is difficult to determine precisely. The following guidelines, in conjunction with information presented in the Database Information dialog box of Hyperion Essbase Application Manager, provide a rough estimate of disk space requirements.

Note:   The database sizing calculations that follow assume an ideal scenario with an optimum database design and unlimited disk space.

Determining the Expanded Block Size of Stored Data

The actual size of a data block depends on the amount of data that exists where dimensions intersect.

To determine the potential, expanded (uncompressed) size of each stored data block, follow these steps:
  1. Multiply the number of stored members (including Dynamic Calc And Store members) in each dense dimension together.

    For example, the Sample Basic database contains three dense dimensions with the following numbers of stored members: Year (12), Measures (8), and Scenario (2):

    12 x 8 x 2 = 192 data cells

  2. Multiply the figure derived in Step 1 by the size of each cell (8 bytes); for example:
    192 data cells x 8 bytes = 1,536 bytes
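The two steps above can be sketched in code. This is an illustrative Python snippet, not part of Hyperion Essbase; the function name and inputs are mine:

```python
# Expanded (uncompressed) block size: multiply the stored members of each
# dense dimension together, then multiply by 8 bytes per cell.
def expanded_block_size(stored_members_per_dense_dim, cell_bytes=8):
    cells = 1
    for count in stored_members_per_dense_dim:
        cells *= count
    return cells * cell_bytes

# Sample Basic: 12 x 8 x 2 stored members in the dense dimensions
print(expanded_block_size([12, 8, 2]))  # 1536 bytes
```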
    

Estimating the Size of Compressed Data Blocks

The use of compression (whether it is used and, if it is used, what type is used) affects the actual disk space consumed by a data file. For more information about data compression, see Data Compression.

The following formula provides a rough estimate of compressed block size. It assumes that bitmap compression is enabled.

Compressed block size = Expanded block size x Block density

For example, assume an expanded block size of 1,536 bytes and a block density of 25%:

(1,536 bytes x .25) = 384 bytes
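As a hedged Python sketch of the same estimate (the function name is illustrative only):

```python
# Rough estimate of compressed block size under bitmap compression:
# compressed size = expanded size x block density (a fraction between 0 and 1).
def compressed_block_size(expanded_bytes, block_density):
    return expanded_bytes * block_density

print(compressed_block_size(1536, 0.25))  # 384.0 bytes
```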

To determine block density, you can load the database and in Hyperion Essbase Application Manager, select Database > Information, select the Statistics tab, and look at the Block Density label. If you want to estimate block density prior to loading data, estimate the ratio of existing data values to potential data values.

Note:   Actual block density varies widely from block to block. The calculations in this discussion are only for estimation purposes.

Estimating the Number of Data Blocks

To estimate the number of data blocks, you must consider the potential number of data blocks and the density of the database.

Determining the Potential Number of Data Blocks

When you determine the potential number of data blocks, you assume that there are data values for all combinations of stored members; for example, for Sample Basic, you assume that all products are sold in all markets. Stored members exclude attribute dimensions and their members, as well as members tagged as label only, shared, or Dynamic Calc.

To determine the potential, or maximum, number of blocks, multiply the number of stored members in each sparse dimension together.

For example, the Sample Basic database contains two sparse dimensions: Product, with 19 stored members, and Market, with 25 stored members.

Thus, the Sample Basic database has:

19 x 25 = 475 potential data blocks.
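Sketched in Python (an illustrative helper, not a product API):

```python
# Potential (maximum) number of data blocks: multiply the stored members
# of each sparse dimension together.
def potential_blocks(stored_members_per_sparse_dim):
    total = 1
    for count in stored_members_per_sparse_dim:
        total *= count
    return total

# Sample Basic: 19 stored Product members x 25 stored Market members
print(potential_blocks([19, 25]))  # 475
```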

Estimating the Actual Number of Data Blocks

Product and Market are sparse dimensions because not every product is sold in every market. Therefore, the actual number of blocks is far less than the potential number of data blocks.

If the database is already loaded, you can see the existing number of blocks and the potential number of blocks on the Statistics page of the Database Information dialog box of Hyperion Essbase Application Manager. Instead of estimating a number, you can use the existing number of data blocks in later calculations.

It is very difficult to determine the size of a database without loading the database. If you must determine size before loading the database, estimate the percentage of cells that contain data values and multiply this percentage against the potential number of data blocks. The percentage can vary based on the sparsity or density of the data blocks. This guideline provides only a rough estimate.

The following example assumes 100,000,000 potential data blocks and an estimated density of 15%:

100,000,000 potential data blocks x .15 = 15,000,000 data blocks

Estimating the Size of the Compressed Data Files

To estimate the space required to store the data files (ESSxxxxx.PAG), multiply the compressed block size by the number of blocks. For this example and the remaining examples, consider the number of data blocks from the previous example calculated at a density of 15%.

For example:

384 (compressed block size) x 15,000,000 (number of data blocks) = 5,760,000,000 bytes
Note:   If compression is not used, substitute the expanded block size for the compressed block size in this formula.
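As a quick Python sketch of this estimate (names are illustrative):

```python
# Estimated size of the ESSxxxxx.PAG files: block size x number of blocks.
# Pass the compressed block size if compression is enabled, the expanded
# block size otherwise.
def data_file_bytes(block_size_bytes, num_blocks):
    return block_size_bytes * num_blocks

print(data_file_bytes(384, 15_000_000))  # 5760000000 bytes
```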

Estimating the Size of the Index

The following formula helps you estimate the total size of the index, including all index files. The formula provides only a rough estimation.

Number of blocks x 112 bytes

This formula is based on the size of an index entry, which is 112 bytes.

For example, assuming a database with 15,000,000 blocks, you should allow about 1,680,000,000 bytes for the entire Hyperion Essbase index:

15,000,000 blocks x 112 bytes = 1,680,000,000 bytes

If the database is loaded, select Database > Information and look at the Files tab to determine the actual size of the index.

Estimating Fixed-Size Overhead

The following subtopics show you how to calculate fixed-size overhead. Use one of two calculation methods, depending on whether the database uses bitmap compression or uses run-length encoding (RLE) or no compression.

Bitmap Compression

Begin the calculation of fixed-size overhead for a database that uses bitmap compression by using this formula:

((Expanded block size in bytes / 64) + 72) x Number of blocks 

The formula may be explained as follows: the expanded block size divided by 64 represents the size of the bitmap, which uses one bit for each 8-byte cell in the block, and 72 bytes is the fixed size of the block header.

Then, round the value produced by the formula up to the nearest multiple of eight.

For example, assume the expanded block size is 4,802 bytes and that there are 15,000,000 blocks.

  1. Calculate the formula:
    (4,802 / 64) + 72 = 147.03 bytes 
    
  2. Round the result up to the nearest multiple of eight.

    a. Divide the result by 8: 147.03 / 8 = 18.38
    b. Take the whole number only: 18
    c. Add 1: 18 + 1 = 19
    d. Multiply by 8: 19 x 8 = 152

    The result is 152 bytes per block.

Finally, calculate the fixed-size overhead for the entire database by multiplying the overhead per block by the number of blocks:

Fixed-size overhead using bitmap compression =

152 bytes x 15,000,000 blocks = 2,280,000,000 bytes
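The formula and the rounding steps can be combined in a short Python sketch (illustrative, not part of the product). It follows the guide's rounding literally, truncating the quotient, adding 1, and multiplying by 8:

```python
# Fixed-size overhead per block under bitmap compression:
# (expanded size / 64) + 72, rounded up to the next multiple of 8.
def bitmap_overhead_per_block(expanded_block_bytes):
    raw = expanded_block_bytes / 64 + 72
    return (int(raw // 8) + 1) * 8

per_block = bitmap_overhead_per_block(4802)  # 152 bytes
print(per_block * 15_000_000)                # 2280000000 bytes for the database
```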

Run-Length Encoding (RLE) or No Compression

To calculate the fixed-size overhead for a database that uses RLE or no compression, use 72 bytes per block, which is the total fixed overhead per block.

To calculate the fixed-size overhead for the entire database, multiply the overhead per block by the number of blocks.

Assuming 15,000,000 blocks, fixed-size overhead using RLE or no compression =

72 bytes x 15,000,000 blocks = 1,080,000,000 bytes

Calculating a Fragmentation Allowance

If you are using bitmap or RLE compression, a certain amount of fragmentation occurs. The amount of fragmentation is based on individual database and operating system configurations and cannot be precisely predicted.

As a rough estimate, calculate 20% of the compressed database size. For example, a compressed database size of 5,760,000,000 bytes produces the following calculation:

5,760,000,000 bytes (database size) x .2 = 1,152,000,000 bytes

Calculating a Data Recovery Area

For recovery purposes, Hyperion Essbase maintains a data recovery area on the disk. The size of this area increases until the database is restructured. To calculate the size of the data recovery area, multiply by 2 the sum of the sizes of the index, the overhead, the compressed data blocks, and the fragmentation allowance.

(Index size + Fixed size overhead
+ Size of compressed data blocks 
+ Fragmentation allowance) x 2
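Sketched in Python (an illustrative helper; the example values come from the earlier estimates in this chapter):

```python
# Data recovery area: twice the sum of the index size, fixed-size overhead,
# compressed data blocks, and fragmentation allowance.
def recovery_area_bytes(index_size, fixed_overhead, compressed_data, fragmentation):
    return 2 * (index_size + fixed_overhead + compressed_data + fragmentation)

print(recovery_area_bytes(1_680_000_000, 2_280_000_000,
                          5_760_000_000, 1_152_000_000))  # 21744000000 bytes
```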

Estimating the Size of the Outline

Factors that affect the size of an outline include the following:

First, estimate the main area of the outline by multiplying the number of members by an estimated number of bytes between 350 and 450. If the database includes few aliases or very short aliases and short member names, use a smaller number within the range. If you know that the names or aliases are very long, use a larger number within the range. (The maximum size for a member name and for an alias is 80 characters.) The following example uses a number in the middle of the range and assumes the outline has 26,000 members. This formula provides only a rough estimate.

400 bytes x 26,000 members = 10,400,000 bytes

Then, if the outline includes attribute dimensions, you must also calculate an area that stores attribute association information. Calculate the size of this area for each base dimension and add the sum of these areas to the basic outline size you already estimated.

To calculate the attribute area in an outline for a base dimension in bytes, multiply the number of members of the base dimension by the sum of the number of members of all attribute dimensions associated with the base dimension, and then divide by 8.

For example, assume that the base dimension Product (with 23,000 members) has two attribute dimensions associated with it: Ounces (with 20 members) and Pkg Type (with 50 members). The calculation for the attribute area in the outline for the Product dimension is:

(23,000 x (20 + 50)) bits / 8 bits/byte = 201,250 bytes

Assume another base dimension, Market (with 2,500 members), is associated with the 12-member attribute dimension, Population. The attribute area in the outline for the Market dimension is equal to (2,500 x 12) / 8, or 3,750 bytes.

The total estimated size of the outline is the sum of 10,400,000 bytes + 201,250 bytes + 3,750 bytes, which is 10,605,000 bytes.
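The outline estimate, including the attribute-association areas, can be expressed as a Python sketch (function name and parameter shapes are mine):

```python
# Outline size: members x 350-450 bytes each, plus one attribute-association
# area per base dimension (one bit per base member per associated attribute member).
def outline_size_bytes(num_members, bytes_per_member=400, attribute_pairs=()):
    size = num_members * bytes_per_member
    for base_members, attr_members in attribute_pairs:
        size += base_members * attr_members / 8  # bits -> bytes
    return size

# 26,000 members; Product (23,000 members, 70 attribute members);
# Market (2,500 members, 12 attribute members)
print(outline_size_bytes(26_000, 400, [(23_000, 70), (2_500, 12)]))  # 10605000.0
```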

Allowing for Restructuring and Migration Work Areas

During restructuring, Hyperion Essbase uses a restructuring work area on the disk.

During migration from prior releases of Hyperion Essbase, for recovery purposes, Hyperion Essbase creates a copy of the database in a migration work area.

To create these temporary work areas, Hyperion Essbase may require disk space equal to the size of the entire database, including the outline. Because migration and restructuring do not occur at the same time, a single allocation can represent both requirements.

Estimating Disk Space for a Database

After you estimate the size of each database component, you are ready to calculate an estimate of disk space that you need for the database.

The process for determining the total disk space needed for a database is shown in Table 15-2. It is assumed that the database does not contain linked reporting objects and that bitmap compression is enabled.

Table 15-2: Estimating Disk Space for a Database

  Component Example Value  
  Size of data files (If blocks are compressed, use compressed block size.) 5,760,000,000  
+ Estimated size of index 1,680,000,000  
+ Fixed size overhead (header and bitmap) 2,280,000,000  
+ Fragmentation allowance 1,152,000,000  
=   10,872,000,000  
x 2 To allow for the data recovery area    
=   21,744,000,000  
+ Estimated size of outline 10,605,000  
=   21,754,605,000  
x 2 To allow for the restructuring and migration work area    
= Total estimated disk space needed 43,509,210,000 bytes
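Table 15-2 can be followed end to end in a short Python sketch (illustrative; the inputs are the example values computed earlier in this chapter):

```python
# Total disk estimate per Table 15-2: sum the components, double for the
# data recovery area, add the outline, then double again for the
# restructuring/migration work area.
def total_disk_estimate(data_files, index, overhead, fragmentation, outline):
    with_recovery = (data_files + index + overhead + fragmentation) * 2
    return (with_recovery + outline) * 2

print(total_disk_estimate(5_760_000_000, 1_680_000_000, 2_280_000_000,
                          1_152_000_000, 10_605_000))  # 43509210000 bytes
```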


Considerations When Using Partitioning

If you are using replicated partitions, allocate enough disk space at the target database to hold double the total size of all replicated partitions. The doubling is necessary because, to aid in seamless recovery, Hyperion Essbase temporarily retains duplicate blocks before committing transactions.

When you use any type of partition, Hyperion Essbase stores certain data (such as connection and time stamp information) at both the source and target databases. However, the amount of storage Hyperion Essbase uses for these items is insignificant.

For information about Hyperion Essbase Partitioning, see Designing Partitioned Applications.


Considerations When Using Linked Reporting Objects

You can use Linked Reporting Objects (LROs) to associate objects, such as flat files, with data cells.

If the linked object associated with a cell is a flat file, Hyperion Essbase copies the file to the server. Therefore, you need to know the combined size of all flat files you are using as linked reporting objects.

You can set a limit on the size of a linked object, if the linked object is a file (as opposed to a cell note). In Hyperion Essbase Application Manager, select Application > Settings and enter the desired limit in the Max. Attachment File Size text box.

You can also use alter application in MaxL or the SETAPPSTATE command in ESSCMD to limit the size of a linked object. See the online Technical Reference in the DOCS directory for information.

The Hyperion Essbase kernel stores information about linked reporting objects in a Linked Reporting Objects catalog. A catalog entry is stored as an index page. For every catalog entry, Hyperion Essbase uses 8 KB if direct I/O is used; if buffered I/O is used, Hyperion Essbase uses the index page size. For every cell note, Hyperion Essbase uses 600 bytes. The cell note is stored in the catalog.

For more information, see Linking Objects to Hyperion Essbase Data.


Total Disk Requirement

The previous calculations in this chapter estimate the data storage requirement for a single database. Often, more than one database resides on the server.

The total Hyperion Essbase data storage requirement on a server includes the Hyperion Essbase software and the sum of the requirements of all the databases to be stored on the server.

Allow 35 to 50 MB for the base installation of Hyperion Essbase server software and sample applications. The allowance varies by platform. For details, see the Hyperion Essbase Installation Guide.


Determining Memory Requirements

The minimum memory requirement for running Hyperion Essbase is 64 megabytes (MB) per database. On UNIX systems, the minimum requirement is 128 MB per database. Depending on the number of databases and the database operations on the server, you may require more memory.

Hyperion Essbase uses buffers called caches to hold data and index information in memory while database operations are being performed. To optimize your database performance, Hyperion Essbase enables you to define cache settings and sizes.

This topic provides guidelines for determining the cache settings and sizing your memory requirements. It is recommended that you load your databases, define the cache settings based on the guidelines that follow, and run the basic calculation and retrieval operations for each database. Depending on the amount of memory that is available, you may need to fine tune the cache settings to optimize performance.

The following subtopics enable you to estimate the size of each of the individual database components that require memory resources. After you determine the individual component sizes, see Estimating Total Memory Requirements to approximate the total memory requirement. These guidelines assume one database per application.


General Memory Requirements Per Database

At startup of a database, Hyperion Essbase sets aside memory for the following components:

How much additional memory is required for normal operation of a database, after it is started, depends on the following factors:

In addition, the following database settings affect memory usage:


Sizing Your Index, Data File, and Data Caches

Cache size settings can have a significant impact on database and general server performance. For general performance information regarding cache sizes, see Introducing the Hyperion Essbase Kernel.

This topic provides information about setting up your cache sizes with recommendations for ideal starting sizes to optimize performance. The exact setting you should use for each cache also depends on data distribution and the dense/sparse configuration of the database. For definitions of index cache, index page, data file cache, and data cache, see Understanding How Hyperion Essbase Stores Data.

If memory resources are restricted, you can optimize performance by adjusting these settings.

Note:   If you are migrating from a previous release of Hyperion Essbase, see the Hyperion Essbase Start Here booklet for important information on migrating existing cache settings.

Using Cache Memory Locking

Before setting cache sizes, you need to determine whether or not you want to enable cache memory locking. The cache memory locking setting locks the memory used for the index cache, data file cache, and data cache into physical memory.

If you want to give the Hyperion Essbase kernel priority usage of the physical memory, enable cache memory locking. If you enable cache memory locking, leave sufficient physical memory available for non-Hyperion Essbase kernel use. If you do not want to give the Hyperion Essbase kernel priority usage of the physical memory, disable cache memory locking. By default, cache memory locking is turned off.

For information about how to enable cache memory locking, see Enabling Cache Memory Locking.

Note:   In order to take advantage of the cache memory locking feature on Solaris, you must run the Bourne shell script, root.sh, after installing. This sets the server to run in Superuser mode so it can lock memory. For more information, see the Hyperion Essbase Installation Guide.

Estimating Cache Sizes

If you are using Hyperion Essbase for the first time, cache sizes are automatically set to the default values shown in Table 15-3. If you are migrating from a previous version of Hyperion Essbase, the new data file cache is set to the default value and the other cache settings from that version are retained when you migrate. See the Hyperion Essbase Start Here booklet for migration information.

To help you estimate cache memory requirements, Table 15-3 shows default and recommended cache settings. These caches are listed in priority order from highest priority to lowest priority. For example, if memory resources are restricted, allocating adequate memory to the index cache is top priority, followed by the index page, the data file cache, and so on.

The needs for each site and even for a particular database can vary. Depending on the complexity and type of each operation, Hyperion Essbase allocates as much memory for data file cache and data cache as needed. Use the recommended values in this table to estimate enough memory for optimal performance.

Table 15-3: Recommendations for Cache-Related Settings  

Index Cache Size
  Minimum: 1 MB (1048576 bytes)
  Default: 10 MB (10485760 bytes)
  Recommended: Combined size of all ESS*.IND files, if possible; otherwise, as large as possible. Do not set this cache size higher than the total index size, as no performance improvement results. (To determine the total index size, see Estimating the Size of the Index.)

Index Page Size
  Minimum: 1 KB (1024 bytes)
  Default: Direct I/O: 8 KB (8192 bytes); Buffered I/O: 1 KB (1024 bytes)
  Recommended: With direct I/O, Hyperion Essbase ignores the Index Page Size setting and always uses an index page size of 8 KB. With buffered I/O, keep the pre-Release 6 value if you are migrating; otherwise, set to 8 KB.

Data File Cache Size
  Minimum: 8 MB (8388608 bytes)
  Default: 32 MB (33554432 bytes)
  Recommended: Combined size of all ESS*.PAG files, if possible; otherwise, as large as possible.

Data Cache Size
  Minimum: 3 MB (3145728 bytes)
  Default: 3 MB (3145728 bytes)
  Recommended: 0.125 x the data file cache size. Allocate more if many concurrent users are accessing different data blocks, or if calc scripts contain functions on sparse ranges and the functions require all those members in memory (for example, @RANK and @RANGE). If migrating, keep the pre-Release 6 value.

Cache Memory Locking
  Minimum: Not applicable
  Default: Disabled
  Recommended: To give the Hyperion Essbase kernel priority usage of system RAM, enable cache memory locking; otherwise, disable it.

For instructions on setting index cache, index page, data file cache, and data cache sizes and on defining cache memory locking settings, see Specifying and Changing Hyperion Essbase Kernel Settings.

An additional cache, the calculator cache, is used only during calculations. Consider the calculator cache memory requirements when you size memory requirements for database operations (see Estimating Additional Memory Requirements for Calculations).

Fine-Tuning Your Cache Settings

After using the database under your installation's usual conditions, check to see how Hyperion Essbase is performing. You may need to adjust the settings.

Testing Your Cache Settings

You can check Hyperion Essbase performance in several ways. Using Diagnostics to Monitor Performance provides more information, but here is an example:

Check the Run-time page of the Database Information dialog box (select Database > Information from the Hyperion Essbase Application Manager menu). In particular, check the hit ratios:

A higher hit ratio on the index and data file caches indicates better performance; 100 is the highest possible value.

Determining Which Settings to Change

The sizes of the index cache and the data file cache are the most critical Hyperion Essbase cache settings. In general, the larger these buffers, the less swapping activity occurs; however, it does not always help performance to set cache sizes larger and larger. Read this entire section to understand the cache size considerations.

The advantages of a large index cache start to level off after a certain point. At any given time, when the index cache size equals or exceeds the index size (including all index files on all volumes), performance does not improve. However, to account for future growth of the index, you can set the index cache size larger than the current index size. See Estimating the Size of the Index for an example of estimating index size.

Because the index cache is filled with index pages, for optimum use of storage, set the size of the index cache to be a multiple of the size of the index page.

If possible, set the data file cache to equal the size of the stored data, which is the combined size of all ESS*.PAG files. Otherwise, the data file cache should be as large as possible. If you want to account for future growth of stored data, you can set the data file cache size larger than the current size of stored data.

The data cache should be about 0.125 times the data file cache. However, certain calculations require a larger data cache size. If many concurrent users are accessing different data blocks, this cache should be larger.

In general, if you have to choose between allocating memory to the data file cache or allocating it to the data cache, choose the data file cache. If you are migrating from a previous version of Hyperion Essbase, see the Hyperion Essbase Start Here booklet for important migration information.


Sizing Cache-Related Overhead

In addition to the memory used by the caches, Hyperion Essbase uses some additional memory while it works with the caches. To calculate these areas, use the formulas in Table 15-4.

Table 15-4: Estimating Cache-Related Overhead

Index cache: Index cache size * .5 (used at startup)
General cache: ((# of server threads allocated to the Hyperion Essbase server process * 3) * 256) + 5242880 bytes (used at startup)
Data file cache: (Data file cache size / 8192) * 110 (used during calculations, retrievals, and data loads)
Data cache: ((Data cache size / Potential stored block size) * 8) * 160 (used during calculations, retrievals, and data loads)

To determine the potential stored block size, see Determining the Expanded Block Size of Stored Data.

To determine the # of server threads initially allocated to the Hyperion Essbase server process, see the SERVERTHREADS setting in the online Technical Reference in the DOCS directory.
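The four overhead formulas in Table 15-4 can be summed in one Python sketch. This is illustrative only; the example arguments below (default cache sizes, 20 threads, a 1,536-byte stored block) are assumptions, not prescribed values:

```python
# Sum of the four cache-related overhead areas in Table 15-4.
def cache_overhead_bytes(index_cache, data_file_cache, data_cache,
                         threads, stored_block_size):
    index_oh = index_cache * 0.5                          # at startup
    general_oh = (threads * 3) * 256 + 5_242_880          # at startup
    file_oh = (data_file_cache / 8192) * 110              # during operations
    data_oh = (data_cache / stored_block_size * 8) * 160  # during operations
    return index_oh + general_oh + file_oh + data_oh

print(cache_overhead_bytes(10_485_760, 33_554_432, 3_145_728, 20, 1536))  # 13573120.0
```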


Sizing the Startup Memory Area for Data Structures

At application startup time, Hyperion Essbase sets aside an area of memory based on the following factors:

Use the following formula to calculate the data structure area in memory:

Number of threads * ((Number of members in the outline * 26 bytes) 
+ (Logical block cell count * 36 bytes))

Using Sample Basic as an example, assuming 20 threads, the startup area in memory required for data structures would be calculated as follows:

20 threads * ((79 members * 26 bytes) + (952 cells * 36 bytes)) 
= 726,520 bytes
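The Sample Basic calculation above, sketched in Python (an illustrative helper):

```python
# Startup memory for data structures:
# threads * ((outline members * 26 bytes) + (logical block cells * 36 bytes))
def startup_structures_bytes(threads, outline_members, logical_block_cells):
    return threads * (outline_members * 26 + logical_block_cells * 36)

print(startup_structures_bytes(20, 79, 952))  # 726520 bytes
```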

Estimating Memory Requirements for Database Operations

In addition to startup memory requirements and cache requirements, other operations such as queries and calculations require additional memory. To estimate the amount of memory you need, you must also evaluate the memory used during these operations.

Determining Memory Requirements for Data Retrievals

Hyperion Essbase processes requests for database information (queries) from a variety of sources. For example, it processes queries from the Hyperion Essbase Spreadsheet Add-in and from Hyperion Essbase Report Writer. Hyperion Essbase uses additional memory when it retrieves the data for these queries, especially when it must perform dynamic calculations to retrieve the data. This section describes Hyperion Essbase's memory requirements for query processing.

Hyperion Essbase is a multithreaded application in which queries are assigned to threads. Threads are created when Hyperion Essbase is started. In general, once a thread is created, it exists until you shut down the Hyperion Essbase server (for more information, see Running Hyperion Essbase, Applications, and Databases).

As Hyperion Essbase processes queries, it cycles through the available threads. For example, assume 20 threads are available at startup. As each query is processed, Hyperion Essbase assigns each succeeding query to the next sequential thread. After it has assigned the 20th thread, Hyperion Essbase cycles back to the beginning, assigning the 21st query to the first thread.

While processing a query, a thread allocates some memory and then releases most of it when the query is completed. Some of the memory is released to the operating system and some of it is released to the dynamic calculator cache for the database being used. However, the thread retains a portion of the memory for possible use in processing subsequent queries. As a result, after a thread has processed its first query, the memory held by the thread is greater than it was when Hyperion Essbase first started.

The maximum amount of memory required for query processing occurs in the following situation:

In the example where 20 threads are available at startup, the maximum amount of memory used for queries occurs when at least 20 queries have been processed and the maximum number of simultaneous queries are in process.

Calculating the Maximum Amount of Additional Memory Required for Data Retrievals

This topic describes a method of estimating query memory requirements by observing the memory used during queries. To calculate the maximum possible use of memory for query processing, sum the memory used by simultaneous queries and add the extra memory retained by threads that are waiting for queries.

The following sections define variables and tell you how to determine values for the variables. You then use the variables in formulas to calculate the amount of memory needed for retrieval operations. The final formula uses the following variables:

Determining the Total Number of Threads

The potential number of threads available is based on the number of licensed ports. The actual number of available threads depends on settings you define for the Agent or the server. See Multithreading. Use the number of threads on the system as the value for Total#Threads in Determining the Maximum Memory Used for A Query.

Estimating the Maximum Number of Concurrent Queries

Based on your knowledge of your company and the requirements for each database, determine the maximum number of concurrent queries and use this value for Max#ConcQueries in Determining the Maximum Memory Used for a Query. This value cannot exceed the value for Total#Threads.

Determining the Maximum Memory Used for a Query

The memory usage of individual queries depends on the size of each query and the number of data blocks that Hyperion Essbase needs to access in order to process each query. To estimate the memory used in a query, calculate the additional memory Hyperion Essbase uses while processing and after processing each query.

Choose several queries that you expect to use the most memory. Consider queries that must process large numbers of members; for example, queries that perform range or rank processing.

For each query, complete the following steps:
  1. Turn the dynamic calculator cache off by setting the ESSBASE.CFG setting DYNCALCCACHEMAXSIZE to 0 (zero). For more information, see the online Technical Reference in the DOCS directory.

    Turning off the dynamic calculator cache enables measurement of memory still held by a thread by ensuring that, after the query is complete, the memory used for blocks during dynamic calculations is released by the ESSSVR process to the operating system.

  2. Start the Hyperion Essbase application.
  3. Using memory monitoring tools for the operating system, note the memory used by the Hyperion Essbase server before processing the query. This value is associated with the ESSSVR process.

    Use this value for MemBeforeP.

  4. Run the query.
  5. Using memory monitoring tools for the operating system, note the peak memory usage of the Hyperion Essbase server while the query is in process. This value is associated with the ESSSVR process.

    Use this value for MemDuringP.

  6. Using memory monitoring tools for the operating system, note the memory usage of the Hyperion Essbase server after the query is completed. This value is associated with the ESSSVR process.

    Use this value for MemAfterP.

  7. Calculate the following two values:

    AdditionalMemDuringP = MemDuringP - MemBeforeP
    AdditionalMemAfterP = MemAfterP - MemBeforeP
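Assuming the three observed values are recorded in bytes, the per-query calculation in the steps above can be sketched as follows; the readings shown are hypothetical:

```python
def additional_memory(mem_before, mem_during, mem_after):
    """Compute the extra memory a query uses while it is being processed
    and the extra memory the thread retains afterward, in bytes."""
    additional_during = mem_during - mem_before
    additional_after = mem_after - mem_before
    return additional_during, additional_after

# Hypothetical readings from an operating-system memory monitor
# of the ESSSVR process, in bytes:
during, after = additional_memory(mem_before=50_000_000,
                                  mem_during=65_000_000,
                                  mem_after=55_000_000)
assert during == 15_000_000
assert after == 5_000_000
```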

Determining the Maximum Memory Used During and After Query Processing Across All Queries

After you have completed the above calculations for all relevant queries, compare all results to determine the following two values:

MAXAdditionalMemDuringP: the largest value of MemDuringP - MemBeforeP across all queries
MAXAdditionalMemAfterP: the largest value of MemAfterP - MemBeforeP across all queries

Determining the Maximum Memory Needed by All Concurrent Queries

The final formula calculates the maximum amount of memory all users will need for query processing. Insert the appropriate values from the steps you have just completed into the formula in the following statement:

The amount of additional memory required for data retrievals will not exceed:

Max#ConcQueries*MAXAdditionalMemDuringP + (Total#Threads - Max#ConcQueries)*MAXAdditionalMemAfterP.

Because this calculation assumes that all of the concurrent queries are maximum-sized queries, the result may exceed your actual requirement. It is difficult to estimate the actual types of queries that will be run concurrently.
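The final formula can be evaluated with a short sketch; the four input values below are hypothetical, not measured figures:

```python
def max_query_memory(max_conc_queries, total_threads,
                     max_add_mem_during, max_add_mem_after):
    """Upper bound on additional memory for query processing:
    concurrent queries at peak usage, plus the remaining threads
    that still retain memory from earlier queries."""
    return (max_conc_queries * max_add_mem_during
            + (total_threads - max_conc_queries) * max_add_mem_after)

# Hypothetical example: 20 threads, at most 10 concurrent queries.
upper_bound = max_query_memory(max_conc_queries=10, total_threads=20,
                               max_add_mem_during=15_000_000,
                               max_add_mem_after=5_000_000)
assert upper_bound == 200_000_000  # 10*15,000,000 + 10*5,000,000
```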

To adjust the memory used during queries, you can set values for the retrieval buffer, the retrieval sort buffer, and the dynamic calculator cache. For information, see Setting the Retrieval Buffer Size, Setting the Retrieval Sort Buffer Size, and Sizing Dynamic Calculator Caches. You can also adjust the CALCLOCKBLOCK setting in the configuration file or use SET LOCKBLOCK to select a different setting. For further information, see the online Technical Reference in the DOCS directory.

Estimating Retrieval Requirements

If you cannot test actual queries, you can use the following formula to calculate a very rough estimate of memory needed for concurrent queries:

Memory for retrievals = Max#ConcQueries * (RetrBufSize + SortBufSize + DCCompSize)

Max#ConcQueries is the maximum number of possible concurrent queries. See Estimating the Maximum Number of Concurrent Queries.

RetrBufSize is the size of the retrieval buffer. See Setting the Retrieval Buffer Size.

SortBufSize is the size of the retrieval sort buffer. Include this value if sorting is involved in any query. See Setting the Retrieval Sort Buffer Size.

DCCompSize is the amount of memory Hyperion Essbase uses to calculate dynamically calculated values. The amount of memory used is limited by the SET LOCKBLOCK command. (See the online Technical Reference in the DOCS directory for information about this command.) Include this value only if any query involves dynamic calculations.

To determine the amount of memory used for dynamically calculated values, multiply together two factors: the number of blocks specified by the SET LOCKBLOCK command and the size of an expanded data block.

For example, if SET LOCKBLOCK specifies 100 blocks and the size of an expanded data block is 7616 bytes, the value DCCompSize equals:

100 blocks * 7616 bytes = 761,600 bytes

Continuing with this example, to estimate the maximum memory needed for concurrent queries, assume the following values:

Max#ConcQueries = 20
RetrBufSize = 10,240 bytes
SortBufSize = 10,240 bytes
DCCompSize = 761,600 bytes

Memory for retrievals = Max#ConcQueries * (RetrBufSize + SortBufSize + DCCompSize)

= 20 * (10,240 bytes + 10,240 bytes + 761,600 bytes)
= 15,641,600 bytes
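The arithmetic above can be checked with a short sketch; the function name is illustrative, and the values are those from the example:

```python
def retrieval_memory(max_conc_queries, retr_buf_size,
                     sort_buf_size, dc_comp_size):
    """Rough estimate of memory needed for concurrent retrievals, in bytes."""
    return max_conc_queries * (retr_buf_size + sort_buf_size + dc_comp_size)

# Values from the example: 20 concurrent queries, 10,240-byte retrieval
# and sort buffers, and 100 blocks * 7616 bytes for dynamic calculations.
dc_comp_size = 100 * 7616          # 761,600 bytes
estimate = retrieval_memory(20, 10_240, 10_240, dc_comp_size)
assert estimate == 15_641_600
```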

Estimating Additional Memory Requirements for Calculations

For existing calc scripts, you can use the memory monitoring tools provided for the operating system on the server to observe memory usage. Run the most complex calculation and take note of the memory used both before running the calculation and while running the calculation. Calculate the difference and use that figure as the additional memory requirement for the calc script.

To understand calculation performance, see Optimizing Calculations.

If you cannot perform a test with a calc script, you can calculate a very rough estimate of the operational requirement of a calculation by summarizing the following values:

Note:   The size and complexity of the calc scripts that you run affects the memory requirement. How this affects the amount of memory required is difficult to estimate.

For the total calculation requirement, summarize the amount of memory needed for all calculations that will be run simultaneously.


Estimating Total Memory Requirements

To estimate the total memory required for a server, you must consider the following factors:

The basic formula for estimating the total start-up memory needed per database is shown in Table 15-5.

Table 15-5: Formula for Calculating Database Start-Up Memory Requirements


Component Size (in MB)
+ Code and static data Approximately 10 MB
+ (Optional) Java Virtual Machine (The size depends on the operating system.) Between 2 MB and 4 MB
+ Size of outline (See Estimating the Size of the Outline)
+ Index cache size (See Sizing Your Index, Data File, and Data Caches)
+ Index cache-related overhead (See Estimating Cache Sizes)
+ Other cache-related overhead used at startup (See Estimating Cache Sizes)
+ The startup memory area for data structures (See Sizing the Startup Memory Area for Data Structures)
= Total database memory used at startup
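The start-up formula in Table 15-5 is a simple sum of its components. The sketch below uses hypothetical component sizes in MB; none of these figures are measured values:

```python
# Hypothetical component sizes in MB, following Table 15-5.
startup_components = {
    "code_and_static_data": 10,    # approximately 10 MB
    "java_virtual_machine": 3,     # optional; 2 MB to 4 MB depending on OS
    "outline": 5,
    "index_cache": 10,
    "index_cache_overhead": 1,
    "other_cache_overhead": 2,
    "startup_data_structures": 4,
}

total_startup_mb = sum(startup_components.values())
assert total_startup_mb == 35
```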

Additional operational requirements for memory may include memory used during the following operations:

The maximum operational requirements for queries (See Estimating Memory Requirements for Database Operations)
The maximum operational requirement for calculations (See Estimating Additional Memory Requirements for Calculations)

To estimate the total Hyperion Essbase memory requirement on a server, perform the following steps:
  1. As shown in Table 15-5, calculate the start-up memory requirement for each database.
  2. Calculate the operational memory requirement for each database.
  3. Determine the different combinations (sets) of databases that will run concurrently on the server. For example, if only Sample Basic and Demo Basic will run concurrently at any one time, Sample Basic and Demo Basic comprise one set.
  4. Summarize the start-up memory requirement for each set of databases.
  5. For each set of databases, determine which operations will be performed concurrently.
  6. To determine the combination of operations that use the most memory, summarize the additional memory usage for each combination of operations. Use the additional memory figures you observed or calculated, as described in Determining Memory Requirements for Data Retrievals and Estimating Additional Memory Requirements for Calculations. Select the largest requirement.

    This is the total additional operational memory requirement.

  7. Add the operational memory requirement to the start-up memory requirement.

    This is an estimate of the memory needed for the Hyperion Essbase databases on the server.

  8. To estimate the total memory requirement, add the base memory used by the operating system to the result from step 7.
  9. Compare the result from step 8 with the total available RAM on the server.

    If cache memory locking is enabled, the total memory requirement should not exceed two-thirds of available RAM; otherwise, system performance is severely degraded. If cache memory locking is disabled, the total memory requirement should not exceed available RAM.

    If there is insufficient memory available, you can redefine your cache settings and recalculate the memory requirements. This can be an iterative process. For guidelines, see Fine-Tuning Your Cache Settings. In some cases, you may need to purchase additional RAM.
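The steps above, including the two-thirds rule for cache memory locking, can be sketched as follows; all figures are hypothetical, and the function is an illustration, not a Hyperion Essbase utility:

```python
def server_memory_ok(db_startup_mb, operational_mb, os_base_mb,
                     total_ram_mb, cache_memory_locking):
    """Sum the start-up, operational, and operating-system memory, then
    check the total against available RAM. With cache memory locking
    enabled, the total should not exceed two-thirds of available RAM."""
    total = sum(db_startup_mb) + operational_mb + os_base_mb
    limit = total_ram_mb * 2 / 3 if cache_memory_locking else total_ram_mb
    return total, total <= limit

# Hypothetical set of two databases running concurrently:
total, fits = server_memory_ok(db_startup_mb=[35, 50], operational_mb=200,
                               os_base_mb=100, total_ram_mb=512,
                               cache_memory_locking=True)
assert total == 385
assert fits is False   # 385 MB exceeds 512 * 2/3, about 341 MB
```

If the check fails, the section's guidance applies: redefine cache settings and recalculate, or add RAM.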




Copyright © 1991-2000 Hyperion Solutions Corporation. All rights reserved.