
ObjecTime Model Size Estimation


Overview

The art of trading off complexity and performance against memory size is part of the essential engineering problem of real-time systems. These are considerations that you should take into account when designing your application for the Target Services Library. You could design an application in ObjecTime so that every data object is an actor; however, such a design would take up a considerable amount of memory. You must pragmatically decide where to draw the line between passive data objects and actors (active data objects). You must also decide when to trade off static against dynamic allocation, for example, fixed replicated actor references against dynamically allocated optional replicated actor references. Fixed replicated actor references will optimize the application's performance but will consume memory for every replicated instance, whether or not all instances are in use.

Definitions

Definitions used are as follows:

Compiler Settings

The two compile environments used will be referred to as RISC and CISC within the body of this section.

a) RISC

This is a Solaris sparc environment using GNU and the following options:

CC = g++ -V2.7.1 (for 5.1.1)
CC = g++ -V2.8.1 (for 5.2)
LIBSETCCFLAGS = -fno-exceptions -fno-rtti -fno-builtin
LIBSETCCEXTRA = -O4 -finline -finline-functions

b) CISC

This is a Tornado 1.0.1 68040 environment using Cygnus 2.7.2-960126 and the following options:

CC = cc68k
LIBSETCCFLAGS = -DPRAGMA -ansi -nostdinc -DCPU=MC68040
LIBSETCCEXTRA = -O4 -finline -finline-functions -m68040

Static Sizings

Generated Code

The generated code captures the behavior of an actor. Each actor class is included only once per model, and the size of the class depends on the following:

Target Services Libraries

The Target Services Library provides a range of tuning options, which are documented in "Target Services Library Configuration Definitions" on page 47. Instructions are also provided for automatically building a new variant of the Target Services Library. Two variants, one at each end of the spectrum, are used here. In a given project, the Target Services Library will fall somewhere between the "Minimal" Target Services Library (which is smallest in size and lowest in real-time overhead) and the "Box" Target Services Library (which supports the greatest number of features).

When either Target Observability or the basic Target Services Library debugger is configured, additional code is compiled into the Target Services Library. This code does not affect the static size per actor, but it does add overhead to a number of Target Services Library primitives.

Note: + => compiled in, - => compiled out

Box = the standard "out of the box" configuration
  + Threads
  + Target Observability
  + Target Services Library debugger
  + External Layer (over TCP/IP)
  + Logging
  - Statistics

Minimal = a minimal configuration containing only threads

In both cases a small model consisting of a limited number of frame and timer service calls was used to generate a lower bound. The use of additional service interfaces will incrementally increase the size. Size was measured on both the RISC and CISC targets using a sizing command. This command measures the size of the code and data that is static (and could be loaded into ROM if required).

- Static Target Services Library Size

              5.0                 5.1.1               5.2
          Box      Minimal    Box      Minimal    Box      Minimal
  RISC    148 KB   68 KB      158 KB   50 KB      133 KB   50 KB
  CISC    93 KB    40 KB      95 KB    30 KB      111 KB   36 KB

User Code

The user code in the reference model is approximately 3600 lines (or 20% of the total code in the model). A ratio of 80% generated code to 20% user code is often achieved through the appropriate use of ROOM constructs. As a result, the amount of user-written code is often lower than would be predicted by standard estimation techniques such as function point analysis.

On the RISC processor, each additional line of transition (user) code added approximately 11 bytes to the model, for a total of 39 KB (3600 lines x 11 bytes). On the CISC processor, the user code is estimated at 30 KB. The number of bytes per line of user-entered code is highly dependent on both the counting mechanism (for example, lines vs. source statements) and the compiler used. Values in the range of 10-25 bytes per non-commented source line are typical.

A key advantage of automatic code generation is the automatic reuse of already-proven code patterns for the complicated portions of a real-time, event-driven design. Fewer lines of user-written code also result in a lower defect rate relative to the total body of code.

Total for the Reference Model

- Static Size - Totals

  Comp.                             5.1.1-RISC   5.1.1-CISC   5.2-RISC   5.2-CISC
  user code                         39           30           39         30
  OT code                           212          157          110        101
  Minimal Target Services Library   50           30           50         36
  OS                                x            x            x          x
  Total                             301+x        217+x        199+x      167+x

Note:

The units in the above table are given in kilobytes of memory (KB).

Dynamic Sizing

ROOM Objects

ROOM allows actor classes to be referenced in multiple locations and also allows the design to contain replication factors. The static, per actor class, data is not replicated; however, memory is allocated for each instance as actors are incarnated.

The objects described below are sufficient for a first level of estimation and assume that the instrumentation required for the Target Services Library debugger (and hence Target Observability) has been compiled out (the USE_THREADS option is ON in the following examples). Compiling with Target Observability enabled will increase these numbers. These objects are created whenever an actor is incarnated (that is, at run time, not compile time).

Actors

Wherever an actor class is used in the model, an actor reference (RTActorRef(n)) is required. When the actor is incarnated, each instance also requires instance data (RTActor).

RTActor(s) = ( sizeof(RTStateId) * (s+1) ) + 24

{s is the number of states}

Note:

sizeof(RTStateId) is 2 by default. It can be reduced to 1 if no actor in the design contains greater than 255 states. See "RTActorRef Member Data" on page 31 for more information.

Each component of an actor is defined by a reference object. Actor components are defined by:

RTActorRef(n) = 4n + 12    for a fixed actor
              = 12n + 20   for an optional actor
              = 20n + 20   for an imported actor
{n is the replication factor of the contained actor}
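
As a rough illustration, these actor formulas can be folded into small helper functions. The sketch below is not part of the Target Services Library; the function names and the actor-kind enumeration are purely illustrative, and the default sizeof(RTStateId) of 2 is assumed.

// Sketch only: per-instance and per-reference actor memory estimates
// computed from the formulas above (assumes sizeof(RTStateId) == 2).
#include <cstddef>
#include <iostream>

enum ActorRefKind { Fixed, Optional, Imported };    // illustrative names

std::size_t actorInstanceSize(std::size_t states, std::size_t stateIdSize = 2)
{
    return stateIdSize * (states + 1) + 24;         // RTActor(s)
}

std::size_t actorRefSize(std::size_t n, ActorRefKind kind)
{
    switch (kind) {                                 // RTActorRef(n)
    case Fixed:    return 4 * n + 12;
    case Optional: return 12 * n + 20;
    default:       return 20 * n + 20;              // Imported
    }
}

int main()
{
    // An actor with 5 states, incarnated behind a fixed reference replicated 10 times:
    std::cout << actorInstanceSize(5) << " bytes per instance\n";          // 36
    std::cout << actorRefSize(10, Fixed) << " bytes for the reference\n";  // 52
    return 0;
}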

Ports

Actors contain ports, which also require memory for both the reference and the instance information. In the example in FIGURE 31, three actors are shown (A, B, and C). B and C are replicated actors (replication factor n), each containing relay and end ports with a replication factor of m. The model is drawn in "see through" mode to show the internals of B and C.

Note:

An important design rule to note is that even if the end port is drawn on the border for simplicity, it is always represented at run-time as a relay port on the border and an end port internally.

RTEndPortRef(n)= 12 {n is the replication factor of the port reference}

RTEndPort = 12 {instance data for each end port}

RTRelayPort(i) = 4i+12 {i is the number of unbound ports in the context where this port was incarnated.}

All ports are bound in the example, resulting in i = 0. In a model where multi-aspect actors are used, each instance of the actor results in additional memory being allocated so that all possible importations are supported. Note also that a port with a replication factor greater than 1, where all instances of the port are bound to only one other actor, would not normally be good design practice; it is, however, a reasonable approach in a design where actors are imported into multiple locations.
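
The port formulas can be expressed the same way. Again, this is only an illustrative sketch; the function names are not Target Services Library APIs, and the example assumes one end port reference replicated m times with all ports bound (i = 0).

// Sketch only: per-port memory estimates from the formulas above.
#include <cstddef>
#include <iostream>

std::size_t endPortRefSize()                   { return 12; }               // RTEndPortRef(n)
std::size_t endPortSize()                      { return 12; }               // RTEndPort
std::size_t relayPortSize(std::size_t unbound) { return 4 * unbound + 12; } // RTRelayPort(i)

int main()
{
    const std::size_t m = 10;   // replication factor of the end port
    std::size_t portBytes = endPortRefSize() + m * endPortSize() + relayPortSize(0);
    std::cout << "port memory per actor instance: " << portBytes << " bytes\n";  // 144
    return 0;
}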

In the model, B and C each contain s states and v bytes of Extended State Variables (ESVs).

Sample Model

The dynamic size of model = Size A + Size replicated B + Size replicated C

Size A =
    RTActorRef(1)       {A is referenced by the system}
  + RTActor(1)          {A has one state}
  + 2*RTActorRef(n)     {references for B and C}

Size of each B =
    RTActor(s)          {actor B is replicated n times}
  + RTEndPortRef(m)     {only 1 reference per instance}
  + m*RTEndPort         {m end ports per instance}
  + RTRelayPort(0)      {one relay port per instance}
  + v                   {each instance has ESVs}

Size C = Size B (This assumes that B and C have the same number of states.)

Size replicated C = Size replicated B = n * Size B

Therefore the dynamic size of model = Size A + 2n * Size B

Size of Example (by class)


  Class usage for actors A, B & C     Example Model Totals (bytes)

  RTActorRef(n)                       16 + 2(12 + 4n)
  RTActor(s)                          28 + 2n(24 + 2(s+1))
  RTEndPortRef(m)                     2n * 12
  RTEndPort                           2mn * 12
  RTRelayPort(i)                      2n * 12
  ESVs                                2nv

  Total                               68 + 108n + 24nm + 4ns + 2nv

  n=m=1,  s=5, v=40                   300
  n=m=10, s=v=0                       3548
  n=m=10, s=5, v=40                   4548
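
Pulling the pieces together, the following sketch recomputes the dynamic size of the sample model from the component formulas (the helper names are illustrative, not Target Services Library APIs, and sizeof(RTStateId) == 2 is assumed). It reproduces the three totals in the table above.

// Sketch only: dynamic size of the sample model (A plus n copies each of B and C),
// built up from the per-object formulas in this section.
#include <cstddef>
#include <iostream>

std::size_t actorInstanceSize(std::size_t s)  { return 2 * (s + 1) + 24; }  // RTActor(s)
std::size_t fixedActorRefSize(std::size_t n)  { return 4 * n + 12; }        // RTActorRef(n), fixed
std::size_t endPortRefSize()                  { return 12; }                // RTEndPortRef(m)
std::size_t endPortSize()                     { return 12; }                // RTEndPort
std::size_t relayPortSize(std::size_t i)      { return 4 * i + 12; }        // RTRelayPort(i)

std::size_t modelSize(std::size_t n, std::size_t m, std::size_t s, std::size_t v)
{
    // Size A: one reference to A, A's instance data (1 state), references for B and C
    std::size_t sizeA = fixedActorRefSize(1) + actorInstanceSize(1) + 2 * fixedActorRefSize(n);

    // Size of each B (and each C): instance data, port reference, m end ports,
    // one relay port (all ports bound, i = 0), and v bytes of ESVs
    std::size_t sizeB = actorInstanceSize(s) + endPortRefSize()
                      + m * endPortSize() + relayPortSize(0) + v;

    return sizeA + 2 * n * sizeB;   // = 68 + 108n + 24nm + 4ns + 2nv
}

int main()
{
    std::cout << modelSize(1, 1, 5, 40)   << "\n";   // 300
    std::cout << modelSize(10, 10, 0, 0)  << "\n";   // 3548
    std::cout << modelSize(10, 10, 5, 40) << "\n";   // 4548
    return 0;
}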

Per Thread

The Target Services Library comes in either a single-threaded or a multi-threaded version. The version best suited to the application depends on the target and the requirements of the application. In both versions, each thread requires a stack and a set of message buffers. Each message buffer is 20 bytes, and the default pool is allocated in units of 250 messages. Every thread is then allocated a minimum of 50 messages. See "RTResourceMgr" on page 12 for more information.

RTMessage = 20 bytes

If objects are deep copied on message sends, as is the case when RTDataObjects are sent, the new memory is pointed to by the RTMessage. The memory for the deep copy operation is taken from the heap.

Stack size is dependent on the application requirements: the stack must be able to hold all parameters and local data through the worst-case calling sequence. Stack size minimums and maximums may also be enforced by the target OS. You should tune the 20-kilobyte default up or down depending on your determined worst-case run-time need. It is reasonable practice to start with large stack sizes, since stack overflow often results in strange failure modes.

The multi-threaded Target Services Library always has a main thread, a timer thread, an optional External Layer thread, and potentially additional user threads.

A simple multi-threaded example would contain 4 threads with default stack size of 20KB.

Each thread =
    50 * 20    {RTMessages}
  + 20 KB      {stack}
  + 200        {thread controller data}
  = 21.2 KB

Four threads would consume 84KB. In ObjecTime Developer 5.2 the stack size of the main thread may need to be configured through the operating system (OS).
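
As a quick sanity check of that arithmetic, the per-thread estimate can be computed directly. The constants below are the defaults quoted above; a tuned configuration would use different values.

// Sketch only: rough per-thread memory estimate using the defaults quoted above.
#include <cstdio>

int main()
{
    const unsigned messageSize       = 20;        // bytes per RTMessage
    const unsigned messagesPerThread = 50;        // minimum message allocation per thread
    const unsigned stackSize         = 20 * 1024; // default 20 KB stack
    const unsigned controllerData    = 200;       // thread controller data

    unsigned perThread = messagesPerThread * messageSize + stackSize + controllerData;
    std::printf("per thread  : %u bytes (~%.1f KB)\n", perThread, perThread / 1024.0);
    std::printf("four threads: %u bytes (~%.1f KB)\n", 4 * perThread, 4 * perThread / 1024.0);
    return 0;   // roughly 21.2 KB per thread, and 84-85 KB for four threads
}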

Heap

C++ new requests are, by default, satisfied from the heap. The use of the heap by a given application needs to be properly engineered, with potential fragmentation taken into account. In many embedded applications, new will be overridden to provide application-tuned memory management. ROOM objects and the per-thread data will also normally be allocated from the heap; however, in most applications the majority of the heap will be application-specific data.
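
As an illustration of what application-tuned memory management might look like, the sketch below overrides operator new and operator delete for a single class so that its instances come from a fixed, pre-allocated pool rather than the global heap. This is a generic C++ technique, not an ObjecTime facility; the class name and pool parameters are invented for the example.

// Sketch only: class-level operator new/delete backed by a fixed pool,
// one common style of application-tuned memory management on embedded targets.
#include <cstddef>
#include <cstdlib>

class Measurement {
public:
    void* operator new(std::size_t);
    void  operator delete(void*);
private:
    long data[4];                           // example payload
};

namespace {
    union Slot { Slot* next; char raw[sizeof(Measurement)]; };
    const std::size_t kSlots = 64;          // size the pool for the worst case
    Slot  pool[kSlots];
    Slot* freeList = 0;
    bool  ready = false;

    void initPool()
    {
        for (std::size_t i = 0; i + 1 < kSlots; ++i) pool[i].next = &pool[i + 1];
        pool[kSlots - 1].next = 0;
        freeList = &pool[0];
        ready = true;
    }
}

void* Measurement::operator new(std::size_t)
{
    if (!ready) initPool();
    if (!freeList) std::abort();            // pool exhausted: apply the system error policy
    Slot* s = freeList;
    freeList = s->next;
    return s;
}

void Measurement::operator delete(void* p)
{
    if (!p) return;
    Slot* s = static_cast<Slot*>(p);        // return the slot to the free list
    s->next = freeList;
    freeList = s;
}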

System Resources

System resources relating to the External Layer and other TCP/IP-based applications must also be accounted for. TCP can be configured to use up to 128 KB per connection (both a transmit and a receive buffer are required). This memory may also be taken from the heap.

Summary

Benchmarking

The sizing rules covered in this section can be used to calculate a first-order approximation. Benchmarking should be performed as early as possible in the project to properly calibrate the development environment. For example, a simple change in compilation options can reduce the size by 20-30% as well as significantly improve real-time performance.

Applications which are single-threaded require less memory and have less overhead. Only the multi-threaded variant was examined in this section.

Model Variations

As described earlier, a reference model was used to generate the size estimates. Several other models are summarized here with the goal of providing a qualitative feel for the variation that is typical. The variation between the Control model and the reference model is explained by the fact that the Control model makes extensive use of ROOM passive data classes. The reference model provides a better basis for a first-order approximation, since it is easier to model the problem initially with active actors.

Variation of Models (RISC)


  Model       Static Model Size per Actor (KB)   Description
              5.1.1        5.2

  Reference   4.0          2.1                   The reference model used in this section.
                                                 53 Actor Classes, 54 Data Classes,
                                                 28 Protocol Classes.

  Demo        3.1          1.6                   A Demo model of a GSM call processing system.
                                                 35 Actor Classes, 20 Data Classes,
                                                 12 Protocol Classes.

  Control     4.1          2.8                   A telecommunications Control model containing
                                                 3 complicated Data classes per actor class;
                                                 almost 40% of the static model size results
                                                 from the data classes. 318 Actor Classes,
                                                 980 Data Classes, 134 Protocol Classes.

Calculating the Overall Total

This section has discussed the factors impacting both the static and dynamic memory requirements of an ObjecTime model. Understanding the technical parameters of these size equations allows the designer to make better time vs. space trade-offs.

Overall Total


  Item                              Memory Used

  Static Size (code and data)       This information can be placed into ROM. When located in
                                    RAM, it is loaded into memory by a program loading
                                    mechanism specific to the target. The static size can be
                                    measured by a sizing command (not by the file size).

  Dynamic Size (ROOM model)         This must be allocated in RAM and is normally allocated
                                    from the system heap.

  Dynamic Size (thread data)        This is normally allocated from the heap. The size of the
                                    individual stacks must be engineered to handle the
                                    worst-case calling usage.

  Dynamic Size (application data)   This data is allocated from the heap. Application data
                                    includes the use of ObjecTime classes (for example,
                                    RTDataObject and its subclasses), external C++ classes,
                                    and C code which calls malloc (and free).

  Dynamic Size (heap)               The heap must be allocated in RAM and be large enough to
                                    accommodate both Target Services Library and application
                                    requests. In addition, factors such as fragmentation,
                                    which prevent the heap from being 100% efficient, must be
                                    engineered for.



