


Migrating Legacy C/C++ Code to Apex



Introduction

In this document, we discuss how to convert your existing C/C++ development environment into one that can benefit from the many features of the Apex C/C++ environment. This conversion is referred to as a migration since it is often an evolutionary process that gradually brings you closer to the ultimate goal of complete integration with Apex C/C++.

Although much of the following discussion focuses on the use of the Apex C/C++ compiling system, a great deal of this information is also applicable to the use of third party C/C++ compilers. This is especially true for those compilers for which an Apex model has been provided. It is also true, but to a lesser extent, for C/C++ compilers that have no specific model.

There are three major aspects to the migration process:

1. Getting your source files (legacy C/C++ code and other data files) into Apex subsystems/views and under Change Management and Version Control (Summit/CM)

2. Using Apex's automated build management facilities (makefile maintenance)

3. Using the Apex C/C++ development tools (compiler, code browser, debugger) and the architecture control feature

Extensive information about the Apex build management process can be found in the C++ Compiler Reference Manual.

For the purposes of this discussion, each of these aspects builds on the previous ones, so the greatest benefit comes from adopting them in order. It is therefore assumed that you will want to put some or all of your legacy source files into Apex views and under Summit/CM control. There are, however, circumstances in which you might NOT want to place certain source files in an Apex view; before examining them, we need to discuss how the rest of your source files should be organized into Apex views.

Apex Architecture Control

Apex subsystems/views are the fundamental units of architecture control and the primary units for resource sharing (such as object code or libraries). The larger the project, the more important it is to partition it properly into architecturally sound regions (subsystems/views). Each region should be as independent from the others as is reasonably possible, at least in a logical sense (high-level design viewpoint) if not in a more physical sense (C/C++ source file dependencies), in order to maximize the degree of information hiding. This will increase the potential for reuse and decrease future maintenance costs. Apex C/C++ can enforce this more physical sense of independence through user-defined view import/export relationships.

Regarding resource sharing, each development group (team or individual) typically has its own set of views for the subsystems that it is actively working on. Other groups may have their own views for the same as well as additional subsystems. There may be some subsystems, however, containing views which many groups want to share, either because the views change infrequently or because they are a stable part of the final product release (such as an integration area). Apex C/C++ can accommodate this type of sharing at the view level. However, it is important to keep in mind that if a given view is shared, then all of the views in its import closure (the views it depends on) must also be shared. This typically means that only lower-level views in the dependency hierarchy are ever shared.

Given these features of Apex views, you need to analyze your source files and determine how best to structure them within the Apex subsystem/view architecture. Your current source files are probably stored in several directories containing multiple levels of subdirectories. Since Apex views can contain nested subdirectories, you could just migrate all of your files into a single subsystem/view while preserving the original subdirectory structure. While this scheme might circumvent a number of migration issues that would otherwise have to be addressed, it is not recommended for anything but the smallest of products. Such a structure will not be able to benefit from the advantages of Apex views. Instead, you should examine your source files and determine their architectural boundaries based on their original design and interdependence. It may also be constructive to take into account the group divisions that are responsible for the development of the various subcomponents of the product. Likewise, you should consider which subcomponents might be useful in some other product or might be expected to be completely replaced or removed from the current product sometime in the future. Note that you can use the migration tools described later in this document to analyze the existing C/C++ source file dependencies and then use that information to help you decide which Apex view architectures make the most sense.

Now that you have an idea of how to organize your Apex views, we can address the issue of when you should NOT put your legacy C/C++ code into Apex views and under Summit/CM control. This situation arises when you want to use Apex C/C++ to compile your source code, but your code must use header files (and probably some related archive libraries) that are presently maintained in a source directory that you cannot move or modify in any way. Such is typically the case when your product uses a library developed by some third party for which you do not control (or even possess) all of the source code; such a library is called an external library. An external library does not contain any C/C++ source bodies that you need to compile into object files in your normal course of development. For more detailed information on how to set up Apex C/C++ to deal with this situation, see Using an External Library As Is.

If you are permitted to copy the files in the external library, or can create certain Apex related files and directories within the original external library directory, then it may be beneficial to migrate the external library source files into an Apex view. One such benefit is that you can then put the files under Summit/CM control and thereby track any changes that the third party makes to the external library over the life of your product. If this is a viable option for you, see Migrating an External Library to an Apex View for the details. Note that in either case, you must verify that any external library object code files or archive libraries are compatible with the Apex C/C++ compiler as discussed in Third Party C/C++ Compiler Compatibility.

Apex Build Management

Having addressed the Apex architecture control aspect of the migration process, you should now be ready to tackle the issues involving the use of Apex's automated build management facilities. Using its standard, predefined models and build keys, Apex C/C++ automates the day-to-day aspects of build management while still providing a high degree of user customization (with consistency). It accomplishes this by defining a set of specially designed makefiles that are maintained and invoked (to compile or link a program) via the Apex environment. For example, it automatically updates its makefiles when a new C/C++ source file is created in a view or a new source subdirectory is added. When linking programs, Apex automatically determines the link contributions (such as archive libraries) that other imported views need to provide. These features greatly reduce the cost of maintaining a reliable and consistent build process, with the prospect of additional improvements in future releases of Apex C/C++.

You most certainly have your own build process that you have invested a good deal of effort into perfecting, and it probably also uses makefile technology. Unfortunately, your makefiles will not be directly usable with Apex's standard C/C++ models and their makefiles, so you will have to determine whether it is worthwhile to reimplement your build process using Apex's. Naturally, you will gain the most benefit from the Apex C/C++ development environment if you use Apex's build management facilities. Fortunately, you do not have to convert your makefiles now if you do not want to. You can continue to use your existing build process and still have your source files under Summit/CM control. If you wish to exercise this option, you should use the migration tools discussed below in their "migrate by reference" mode. See Migrating to Apex Views by Reference for more information.

Even if you do plan to convert your makefiles and use the Apex build management facilities, you may still wish to migrate your sources by reference, at least initially, because this will minimize the impact of the migration process on current development activity. Migrating a large development organization can take an appreciable amount of time to complete, and you most likely will not want to stop all development work in order to migrate the source and retrain your personnel all at once. When properly administered, development can proceed in parallel in both the original legacy source directories (with the original build process) and the new Apex views, with each working off of the same source code base controlled by Summit/CM.

If you do have the option of completely migrating your sources and build process into the standard Apex C/C++ model, then you will want to use the migration tools in their "migrate by copy" mode. For the details, see Migrating to Apex Views by Copy. Note that you can initially migrate your legacy source files by reference and then, later, effectively migrate them by copy and convert your makefiles at a more leisurely pace without disrupting on-going development.

Apex C/C++ Development Tools

If you want to use the Apex C/C++ compiler and its related development tools to build your product, you will need to use the standard Apex C/C++ model. Therefore, you will have to convert your original build process (and its makefiles) to the one defined by the standard model, as described in Migrating to Apex Views by Copy. In some cases, the Apex environment provides models for other, third party C/C++ compiling systems. If you want to use those models, you will have to convert to the build process prescribed by them, which will also probably require migrating by copy.

If the Apex environment does not provide a model for the C/C++ compiling system that you need to use, then you can migrate your sources by reference as described in Migrating to Apex Views by Reference. Alternatively, you can migrate your sources by copy, but use the special Apex C/C++ "migrate" model that does not generate any makefiles and has no predefined build process. In this case, you will have to implement an appropriate build process yourself.

As mentioned previously, if your product requires an external library that was developed using some other C/C++ compiler, then, if it is to be used by the Apex C/C++ compiling system, it must be compatible as discussed in Third Party C/C++ Compiler Compatibility.


Third Party C/C++ Compiler Compatibility

Before you try to link a program built using the Apex C/C++ compiler with an external library that was built using a third party C/C++ compiler, you must verify that the code generated by the two compilers is compatible. This should normally not be a problem if the external library contains just C code and was compiled using the C compiler provided by the vendor of the platform on which you are running the Apex C/C++ compiler. However, even in this case, there is a potential for incompatibilities if the wrong versions of the vendor's C compiler or platform operating system were used to build the external library. For a given platform, all C compilers generally produce compatible object code, but they can differ in their function calling conventions and in their runtime libraries (different functions supported, or the same functions with different argument lists).

If your external library is based on C++ code, it is more likely that it will not be compatible with the Apex C++ compiler. In addition to the above potential problems with C, C++ compilers depend more heavily on the implementation of their runtime support library. For example, different C++ compilers can use different mechanisms to implement static initialization, virtual function tables and exception handling. C++ compilers also vary in the algorithms they use to generate external link symbols for class member function entry points, commonly called name mangling. Both the caller and callee code must agree on how a function name is mangled to avoid undefined references at link time. Finally, C++ compilers can differ in how they handle template instantiations.

For help in determining if your particular external library will be compatible with Apex C/C++, talk to your Customer Service Representative.


Using an External Library As Is

This section discusses how you can use an existing external library in its original form with one or more Apex C/C++ views. The external library's files will not be placed under Summit/CM control. It is assumed that you have verified that the external library is compatible with Apex C/C++ as noted in Third Party C/C++ Compiler Compatibility. This information only applies when using the standard Apex C/C++ model and associated build key. If other models or build processes are used, you will have to figure out how to apply this information to those circumstances.

There are two potential issues that need to be resolved to access an external library that resides outside of an Apex C/C++ view: how to include its header files (and handle other compile time options) and how to link in its archive libraries (and handle other link time options).

Both of these issues can be resolved in a number of different ways, with differing degrees of setup time, flexibility and ease of maintenance. The first approach presented is the one we recommend. It may require a bit more time to set up, but it provides a more flexible and maintainable solution and is more architecturally sound. The other two approaches are easier to set up (at least initially) at the cost of flexibility and architectural control. Hybrid approaches are also possible if you think they would better serve your needs.

All of the presented approaches assume that the header files in the external library will be referenced by your C/C++ code in #include directives using relative pathnames, not full (or absolute) pathnames. Appropriate include path (-I) compiler options will then be used to enable the Apex C/C++ compiler to resolve these relative pathnames. Full pathnames can be used to reference external library header files if your situation demands it, but this will significantly reduce the flexibility of the product and eliminate the ability to enforce any architecture controls. If you choose to do this, then the include path (-I) compiler options discussed below will not be needed.

In the examples below, your external library resides in the directory /extlib and contains an archive library called libext.a.

Recommended Approach

In this approach, only those Apex C/C++ views which need to use the external library will be granted access to it. In each such view, make the following changes to the view's context switch file (view/Policy/Switches).

For C code, add any include path options and other compile time options (such as macro definitions) to the C_OPTIONS (or C_PRE_OPTIONS) switch. Add any link time options to the C_LINK_OPTIONS (or C_LINK_PRE_OPTIONS) switch. Note that the compile time switches must be set in any view containing sources that reference the external library's header files. The link time switches only need to be set in those views containing main programs that must be linked with the external library.

For example, these switches might look something like this:
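A sketch of such a setting, assuming the external library's headers live under a hypothetical /extlib/include subdirectory (the switch names come from above; the `NAME: value` layout is illustrative, so follow whatever syntax your existing Switches file uses):

```
C_OPTIONS: -I/extlib/include
C_LINK_OPTIONS: -L/extlib -lext
```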

For C++ code, the following switches should be set for similar reasons:
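A parallel sketch for C++; CPP_OPTIONS appears elsewhere in this document, but the C++ link switch name shown here is an assumption by analogy with C_LINK_OPTIONS, and /extlib/include is hypothetical:

```
CPP_OPTIONS: -I/extlib/include
CPP_LINK_OPTIONS: -L/extlib -lext
```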

Alternatively, the link options can be set using the link contribution switches as discussed in Migrating an External Library to an Apex View. In this case, these switches would be set in the views containing the code that was compiled against the external library and not necessarily in the views in which main programs are linked.

Apex C/C++ Model Based Approach

In this approach, all Apex C/C++ views which use your Apex C/C++ model will be able to access the external library whether they need to or not. In particular, all main programs will be linked with the external library. Naturally, if all of your views do need access to it, then this may be the best approach for your product.

In your Apex C/C++ model, make similar changes to the model's view context switch file (model_view/Policy/Switches) as were made in the previous approach for each individual view.

For example, these switches might look something like this for C code:
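A sketch of the model-level setting, identical in content to the per-view approach but placed in model_view/Policy/Switches (the /extlib/include subdirectory is hypothetical and the `NAME: value` layout illustrative):

```
C_OPTIONS: -I/extlib/include
C_LINK_OPTIONS: -L/extlib -lext
```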

And this for C++ code:
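By analogy with the C switches above (the C++ link switch name is an assumption; /extlib/include is hypothetical):

```
CPP_OPTIONS: -I/extlib/include
CPP_LINK_OPTIONS: -L/extlib -lext
```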

Whenever you make changes to your model's context switches, remember to propagate those changes to your views with the remodel command.

Apex C/C++ Build Key Based Approach

In this approach, all Apex C/C++ views which use any Apex C/C++ model that uses your build key will be able to access the external library, whether they need to or not. This approach has the advantage that the compiler and linker options only ever appear in a single file and are not copied into every view's context switches. However, tampering with a build key requires more advanced customization skills and is not recommended. In particular, future releases of the Apex C/C++ build keys will require you to reimplement your customizations.

In your build key directory, you will have to locate the scripts that are used to invoke the Apex C/C++ compiler and linker (typically called cc, CC, ld and LD). You will then have to edit these scripts to pass the appropriate compiler and linker options to the actual Apex program (typically rcc or RCC).
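As a sketch only, such an edited wrapper might forward its arguments to the underlying driver while appending the external library's options. The script body below is an assumption about how a build key wrapper is structured, not the actual shipped script, and /extlib/include is hypothetical:

```
#!/bin/sh
# Hypothetical build-key "cc" wrapper: pass the caller's arguments
# through to the actual Apex driver (rcc), appending the external
# library's include path.
exec rcc "$@" -I/extlib/include
```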


Migrating an External Library to an Apex View

This section discusses how to set up an Apex C/C++ view so that it can be used to contain an external library. This will enable your external library to benefit from Apex's architectural and version control features. It is assumed that you have verified that the external library is compatible with Apex C/C++ as noted in Third Party C/C++ Compiler Compatibility.

First of all, the source files (such as C/C++ header files and archive libraries) need to be migrated into an Apex C/C++ view. This should be done when you migrate all of your other legacy source files into Apex C/C++ views. If you do not want to change the original external library file directories in any way, you will have to copy them into the views as described in Migrating to Apex Views by Copy. Otherwise, you can migrate them by reference as outlined in Migrating to Apex Views by Reference. In either case, any reference to an external library header file in a C/C++ #include directive will have to use a relative pathname.

Once your external library's legacy files (header files and archive libraries) are placed into an Apex C/C++ view, you will need to set up various compile time and link time switches (options). Apex C/C++ will automatically take care of providing the compiler with the proper include path (-I) options so that C/C++ source files, compiled in any other view which imports the external library view, will be able to resolve references to the library's header files. However, if there are other compile time options (such as macro definitions) that need to be provided when compiling the library's header files, then they must be provided to each view that uses (explicitly or implicitly imports) the library's view as described in Using an External Library As Is. This involves setting the importing views' C_OPTIONS, C_PRE_OPTIONS, CPP_OPTIONS and CPP_PRE_OPTIONS context switches to the appropriate values.
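For instance, if the library's headers must be compiled with a particular macro defined, each importing view might carry that definition in its compile-time switches. The macro name below is hypothetical and the `NAME: value` layout illustrative:

```
C_OPTIONS: -DEXTLIB_REENTRANT
CPP_OPTIONS: -DEXTLIB_REENTRANT
```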

Since your external library does not contain any C/C++ source code bodies that need to be compiled directly in the library's view, you need to tell Apex C/C++ not to bother trying to compile anything in that view. This is accomplished by setting the BUILD_POLICY switch of the external library view's context switches as follows:

If your external library contains one or more archive libraries, then you will want to set the Apex C/C++ link contribution switches. These only have to be set in the external library view's context switches. They will cause the archive libraries to be linked into any main program in any other view that has the external library view in its import closure. If the external library contains an archive library called libext.a, then the following switch setting can be used:
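One plausible setting, using a switch name from the list below; which contribution switch applies to a plain archive is an assumption here, and the `NAME: value` layout is illustrative:

```
LINK_CONTRIBUTION_LIBRARY: libext.a
```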

Other link contribution switches which may be relevant to your product include:

LINK_CONTRIBUTION_PRE_OPTIONS
LINK_CONTRIBUTION_LIBRARY
LINK_CONTRIBUTION_SHARED_PRE_OPTIONS
LINK_CONTRIBUTION_SHARED_OPTIONS
LINK_CONTRIBUTION_SHARED_LIBRARY
LINK_CONTRIBUTION_DEFAULT_MODE
LINK_DEPENDENCIES


Migration Tools Overview

The migration tools were created to assist in transforming your current development environment into the Apex environment and methodology, thereby gaining greater control and automation of your software project's source versions, architecture and build processes. However, migrating legacy code is far from a trivial task. It requires learning how to use the migration tools and studying your source code and development tools to determine how best to perform the migration.

If there is just a small amount of code to migrate, it may take longer to figure out how to use the migration tools than to do the work manually. On the other hand, the greater the amount of legacy code, the greater the benefit of the migration tools in automating much of the work. However, more code often also implies greater complexity in your development environment. This complexity can introduce a number of special (unique or inconsistent) situations which the migration tools are not able to handle automatically, and which may therefore require some manual intervention to analyze, diagnose and remedy. In many cases the migration tools can deal with a special situation when given additional information; in other cases, the situation will have to be dealt with by other creative means.

Given the countless ways in which you can structure your source code and perform your builds, it is not possible to predict where the break-even point lies or whether using the migration tools will pay off in the end. The only real way to know is to try it. Keep in mind, however, that the migration tools can pay off in a big way if, after migrating the legacy code, you realize that you should have performed the migration differently: you can simply rerun the migration tools with different inputs. In fact, many migration problems can be resolved before any files are actually created or changed.

You should start out by learning how the migration tools work. There are some simple demonstrations of their use included with the product (in $APEX_HOME/migration/demos). Study them, try them out and read the documentation to become aware of the various migration tool options. As mentioned before, you should have a good understanding of how your legacy source code is organized and how its programs are built.

The migration tools have two basic modes, "migrate by copy" and "migrate by reference". The biggest decision you have to make is to choose which mode to use.

In "migrate by copy" mode, the Apex subsystems and views are logically and physically separated from your original source directories and files. All migrated source files are copied into the desired views. This mode provides the greatest flexibility in terms of restructuring the source code to fit into the Apex subsystem/view model. The legacy source files can be copied into whichever Apex subsystems you desire. However, this mode may require source code changes (to #include directives) and usually requires that your build process (such as makefiles) be significantly (if not totally) redesigned. Furthermore, since files are copied, this mode typically makes it more difficult to keep your current development activities in sync with the new Apex scheme while the migration task is being performed and perfected. The two copies of the source files may have a tendency to diverge if not properly managed.

In "migrate by reference" mode, the Apex subsystems and views are still logically separate from your original sources, but, physically, the source directories and files are shared between the Apex views and your original source directories. Since the source files are shared, there is no need to physically copy them to the Apex views. The "migrate by reference" mode sacrifices a good deal of flexibility in exchange for preserving a higher degree of compatibility between the old and new development processes. Few, if any, source files will need to be modified, and your original build process (makefiles) can be used as is. However, with this mode, Apex will not be able to automatically maintain your makefiles; that will have to be done the old fashioned way. Furthermore, the old legacy build tools (in particular, the C/C++ compiler) will continue to be used with this mode, as controlled by your original makefiles.

Another restriction is that, if a given legacy source directory is migrated to a particular Apex subsystem, then all of its subdirectories must also be migrated to the same subsystem with the same directory tree structure. This is because the new Apex view is actually a symbolic link back to the original source code directory (a special use of Apex's view storage mechanism). A final (potentially disagreeable) consequence of migrating your legacy source files by reference is that Apex will need to create a few special purpose directories and files within your original source directories.

Some special situations which may complicate the task of migrating legacy code include:

In both modes, the migration tools can be used to create Apex subsystems and views, define the appropriate view import relationships (including mutual import relationships) and place source files under Summit/CM control.

Once you have determined which migration mode to use, you need to perform the various migration phases as described in the following sections.


Migrating to Apex Views by Reference

This section describes how to use the migration tools in "migrate by reference" mode to get your legacy source files into Apex C/C++ views. Before going through the details, you should consider examining some of the general issues with using these tools presented in Migration Tools Overview.

Note: When migrating your sources by reference, you should use the special Apex C/C++ "migrate" model for your views. This model has no makefiles of its own so you can still use your original makefiles.

The migration process defined below involves four tools used in the following five phases:

1. Directory Migration Map Phase

With the assistance of the directory migration map tool, dirmig, you create a mapping of each of your original source directories onto some set of Apex subsystem/view directories. In "migrate by reference" mode, if a given source directory is mapped to a particular Apex view, then all of its subdirectories are mapped to the same view and with the same subdirectory structure. Therefore, it is sufficient to identify which original source directories will serve as the single, top-level directory of each Apex view. The resulting "directory migration map" serves as input to the File Migration Map Phase.

2. Makefile Analysis Phase

With the assistance of the makefile analysis tool, makemig, you can extract information from your original makefiles that can be used by the subsequent phases to resolve C/C++ #include directive file references. More specifically, this tool works much like the common make program to generate the command lines used to compile the C/C++ source files located in the directories to be migrated. These command lines often contain -I options which are useful in resolving the #include directives. The command lines generated, together with information identifying the source directories in which they would be invoked, serve as input to the next phase.

Note that this phase is optional since a similar result can often be achieved by using perfmig's -resolve_includes_by any_available_file option. Also, strictly speaking, include file dependencies do not have to be analyzed at all in order to determine view import relationships for "migrate by reference" mode. However, you are strongly encouraged to do so anyway since eventually you will want to make full use of Apex's architecture control features.

3. File Migration Map Phase

With the assistance of the file migration map tool, filemig, you create a more detailed mapping indicating where each of your original source files is to be migrated into the Apex subsystems and how each file is to be treated. In "migrate by reference" mode, you don't really have much say in choosing where individual files will be migrated since all of the files in a migrated source directory get placed into the same Apex view directory. You can, however, control which files are placed under version control and which files are analyzed as C/C++ source files. The resulting "file migration map" serves as input to the next phase.

4. Preview Migration Phase

With the perform migration tool, perfmig, you can test your file migration map by doing a dry run which attempts to simulate the execution of the commands that will be performed in the last phase. Every reasonable effort is taken in this phase to identify problems which may arise in actually performing the migration. The commands that would be executed in the last phase are displayed so that you can examine them for correctness.

5. Perform Migration Phase

With perfmig, you attempt to actually carry out the migration as specified in the detailed file migration map. The functionality of this tool is broken down into the following three steps that can be carried out either individually or together in a single invocation of the tool:

a. Subsystem Decomposition Step

This step creates the necessary subsystems and views. The subsysmig tool can also be used to perform this step.

b. Architectural Control Step

During this step, C/C++ files are examined to identify any main programs and to analyze their #include directives; import relationships may be defined and main programs may be registered.

The archmig tool can also be used to perform this step.

c. Version Control Step

This step places the desired source files under version control.

The vermig tool can also be used to perform this step.

If, after actually migrating your source files to Apex views, you change your mind and want to undo everything, you can use the cleanmig tool. This tool will destroy all the Apex views, version control databases and subsystems that you created in the Perform Migration Phase while leaving your original source directories and files intact.

There is one additional tool called duprefmig. This tool is useful for duplicating a tower of views that have been previously migrated by reference using perfmig. It duplicates the views as well as the original source directories associated with them. The new views and source directories are structured like a collection of "migrate by reference" views. This arrangement makes it possible for concurrent development to take place in both the old, original source directories and their associated old views as well as in the new source directories and their associated new views.


Migrating to Apex Views by Copy

This section describes how to use the migration tools in "migrate by copy" mode to move your legacy source files into Apex C/C++ views. Before going through the details, you should consider examining some of the general issues with using these tools presented in Migration Tools Overview.

The migration process defined below involves four tools used in the following five phases:

1 . Directory Migration Map Phase

With the assistance of the directory migration map tool, dirmig, you create a mapping of each of your original source directories onto some set of Apex subsystem/view directories and subdirectories. Some source directories may be ignored, some may be converted into a top-level subsystem directory and some may be treated as a subdirectory within a subsystem. This "directory migration map" serves as input to the File Migration Map Phase.

2 . Makefile Analysis Phase

With the assistance of the makefile analysis tool, makemig, you can extract information, from your original makefiles, which can be used by the subsequent phases to resolve C/C++ #include directive file references. More specifically, this tool works much like the common make program to generate the command lines used to compile the C/C++ source files located in the directories to be migrated. These command lines often contain -I options which are useful in resolving the #include directives. The command lines generated as well as information identifying the source directories in which the command lines would be invoked serve as input to the next phase.

Note that this phase is optional since a similar result can often be achieved by using perfmig's -resolve_includes_by any_available_file option.

3 . File Migration Map Phase

With the assistance of the file migration map tool, filemig, you create a more detailed mapping indicating where each of your original source files is to be migrated into the Apex subsystems and how each file is to be treated. This "file migration map" serves as input to the next phase. Some files may be ignored, some may be put under version control, some may be identified as C/C++ source files which may be converted to simplify name resolution, and some may be merely copied into the destination subsystem directories.

4 . Preview Migration Phase

With the perform migration tool, perfmig, you can test your file migration map by doing a dry run which attempts to simulate the execution of the commands that will be performed in the last phase. Every reasonable effort is taken in this phase to identify problems which may arise in actually performing the migration. The commands that would be executed in the last phase are displayed so that you can examine them for correctness.

5 . Perform Migration Phase

With perfmig, you attempt to actually carry out the migration as specified in the detailed file migration map. The functionality of this tool is broken down into the following three steps that can be carried out either individually or together in a single invocation of the tool:

a . Subsystem Decomposition Step

This step creates the necessary subsystems and views. The subsysmig tool can also be used to perform this step.

b . Architectural Control Step

During this step, source files may be copied; C/C++ files are examined to identify any main programs and may have their #include directives converted; import relationships may be defined; and main programs may be registered.

The archmig tool can also be used to perform this step.

c . Version Control Step

This step places the desired source files under version control.

The vermig tool can also be used to perform this step.

If, after actually migrating your source files to Apex views, you change your mind and want to undo everything, you can use the cleanmig tool. This tool will destroy all the Apex views, version control databases and subsystems that you created in the Perform Migration Phase while leaving your original source directories and files intact.


Migration Tools

dirmig - compose a directory migration map

Syntax

Parameters

This option is not supported in "migrate by reference" mode.

Default: none

  • -map_ss_tree src dest

    Indicates that the source directory src will be mapped to the destination directory dest and its mapping mode will be set to "Subsystem Tree". Both directories must be full pathnames. dest must NOT identify a subsystem explicitly with the .ss extension even though dest may be mapped to a subsystem if src matches the "Subsystem Tree" criteria. Thus, the options

    are illegal.

    Default: none

  • -migrate_by ( copy | reference )

    Specifies which type of migration is desired. If -migrate_by copy is given, then the Apex subsystems and views are logically and physically separate from the original source directories and files. If -migrate_by reference is given, then the Apex subsystems and views are still logically separate from the original sources, but they share the same physical file space. The value of this option is written to the directory migration map file so that it does not have to be specified again when filemig is used.

    Since, in "migrate by reference" mode, the original source directories are used as the view directories, there is considerably less flexibility in mapping source directories to Apex subsystems/views and their subdirectories than is available in "migrate by copy" mode. The loss of flexibility, however, allows the migration to proceed more quickly, and enables Apex to use the original makefiles to build the user's programs. It also permits the continued use of the user's original software development model and the new Apex model, in parallel, without having to worry about diverging source files.

    Default: reference

  • -outmap map_file

    Write directory migration map file to map_file.

    Default: standard output

  • -rename

    Indicates that if a subsystem is detected whose simple name matches that of a previously mapped subsystem, then it should be renamed to some other unique name. The name generated is a function of the subsystem's full pathname. For example, without this option, the following directory migration map might be generated:

    With this option, the map would look like this:

  • -single_ss directory_spec

    Identifies those source directories which will have their mapping mode changed to "Single Subsystem". It does not change the source directories' destination directories which can only be done with one of the -map options. directory_spec can contain a list of wildcarded specifications.

    Default: none

  • -source_dir directory_spec

    Identifies those source directories which will have their mapping mode changed to "Source Directory". It does not change the source directories' destination directories which can only be done with one of the -map options. directory_spec can contain a list of wildcarded specifications.

    This option is not supported in "migrate by reference" mode.

    Default: none

  • -ss_tree directory_spec

    Identifies those source directories which will have their mapping mode changed to "Subsystem Tree". It does not change the source directories' destination directories which can only be done with one of the -map options. directory_spec can contain a list of wildcarded specifications.

    This option is not supported in "migrate by reference" mode.

    Default: none

  • -tabs integer

    Indicates, in the output directory migration map, at what tab stop the destination directory will be positioned to make the map easier to read. If the source directory extends beyond the specified tab stop, then only a single space will be output between the two directories.

    Default: 0
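As an illustration only (this is not dirmig's code), the -tabs padding behavior can be modeled in a few lines, treating the tab stop as a column number (an assumption about how the tab stop is interpreted):

```python
def format_map_line(src, dest, tab_stop):
    # Sketch of the -tabs behavior: pad the source directory out to
    # the requested column; if it already extends past that column,
    # fall back to a single separating space.
    if len(src) >= tab_stop:
        return src + " " + dest
    return src.ljust(tab_stop) + dest
```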

    Description

    The general purpose of dirmig is to compose a migration mapping between a set of source directories and corresponding Apex subsystem/view directories. Dirmig does not actually modify any files; it merely generates a "directory mapping" which serves as input to the next stage of migration, filemig. The directory map may be manually changed before passing it to filemig, but such alterations should be kept to a minimum since it may take several attempts to arrive at the ideal migration map. The resultant directory map is output to standard output unless overridden with the -outmap option.

    In "migrate by copy" mode, the correspondence between source directories and Apex directories can be one-to-none, one-to-one or many-to-one, but not one-to-many. The overall structure of the subsystem directories need not match that of the source directories.

    In "migrate by reference" mode, the correspondence between directories must be either one-to-none or one-to-one since the overall directory structure is preserved.

    The source directories to be migrated and their associated destination subsystem directories are given by the -map_single_ss, -map_source_dir, and -map_ss_tree options, of which there should be at least one. In addition, these options indicate how the source directories should be migrated as given by the "mapping mode". In general, the given source directory and all of its descendant subdirectories are migrated into the corresponding destination directory, preserving the original subdirectory tree structure. Each such subdirectory normally "inherits" the mapping mode of its parent directory.

    There are actually four source directory mapping modes:

    Single Subsystem

    A source directory whose mapping mode is "Single Subsystem" is migrated to its own top-level subsystem. The mapping mode of its subdirectories, however, is implicitly set to "Source Directory". Thus, the given source directory and all of its subdirectories are migrated to a single subsystem with a similar internal subdirectory tree structure.

    Source Directory

    A source directory whose mapping mode is "Source Directory" is migrated to some subdirectory of a subsystem. The mapping mode of its subdirectories is also implicitly set to "Source Directory". Thus, the given source directory and all of its subdirectories are migrated to some subdirectory of a subsystem with a similar internal subdirectory tree structure.

    Subsystem Tree

    A source directory whose mapping mode is "Subsystem Tree" may or may not be migrated to its own top-level subsystem depending on what data files are in the directory. If there are any files in the directory other than those that match the file specifications given by any -file_exclusions options, then the source directory is migrated to its own subsystem. Otherwise, the source directory is migrated to a plain directory in the destination. In either case, the mapping mode of its subdirectories is implicitly set to "Subsystem Tree" as well. Thus, the source directory and its subdirectories may be migrated to one or more separate subsystems. This mapping mode is particularly useful when the source directory tree has many "intermediate node" high-level subdirectories that impose an organization on a collection of "leaf node" low-level subdirectories which contain mainly data files and few, if any, subdirectories. If a source directory with this mapping mode is not migrated to a subsystem, an entry for it is still generated in the output directory migration map, but it is commented out with the prefix "#N" to indicate that it has no useful files.

    Excluded

    A source directory whose mapping mode is "Excluded" is not migrated. The mapping mode of its subdirectories is also set to "Excluded". An entry for an excluded directory is still generated in the output directory migration map, but it is commented out with the prefix "#E".

    As mentioned previously, the -map_single_ss, -map_source_dir, and -map_ss_tree options explicitly define the destination and mapping mode for one or more source directories. This mapping mode implicitly defines the mapping mode of any subdirectories of the source directories. Other options can be used to explicitly override any (and only) implicitly defined mapping modes as follows:

    -dir_exclusions    changes mapping mode to "Excluded"
    -single_ss         changes mapping mode to "Single Subsystem"
    -source_dir        changes mapping mode to "Source Directory"
    -ss_tree           changes mapping mode to "Subsystem Tree"

    These options can change only the mapping mode and not a source directory's destination directory. Each of these options provides a directory specification which may identify a particular directory name or may include the wildcard characters "*" or "?" to refer to a collection of directories. Furthermore, the specifications may be relative pathnames or full pathnames. Full pathnames have to match the full pathname of a source subdirectory, whereas relative pathnames only have to match the "tail end" of a subdirectory's full pathname. If more than one of these mapping mode changing options is given, then the order in which they appear (left to right) is the order in which they will be compared to candidate subdirectory pathnames. The first specification that matches will indicate the new mapping mode.
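The matching rules just described (full pathnames match whole paths, relative specifications match only the tail end, first match wins) can be sketched in Python. This is an illustrative model only, not the tool's actual implementation:

```python
from fnmatch import fnmatch

def spec_matches(spec, pathname):
    # Full pathnames must match the candidate's full pathname;
    # relative specs need only match the tail end of it.
    if spec.startswith("/"):
        return fnmatch(pathname, spec)
    n = len(spec.split("/"))
    tail = "/".join(pathname.lstrip("/").split("/")[-n:])
    return fnmatch(tail, spec)

def new_mapping_mode(pathname, ordered_specs):
    # Specs are compared left to right; the first one that
    # matches decides the new mapping mode.
    for spec, mode in ordered_specs:
        if spec_matches(spec, pathname):
            return mode
    return None
```

For example, with the ordered specs [("tests", "Excluded"), ("*", "Source Directory")], the directory /src/old/tests would have its mode changed to "Excluded" because the relative spec "tests" matches its tail first.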

    In the course of generating a directory migration map, it is possible to define two or more different subsystems with the same simple name (such as /a/x.ss and /a/b/x.ss). Since these situations may potentially cause name resolution conflicts under Apex, these entries in the directory map are flagged with the trailing comment "##Dup". Users are encouraged to change the names of these subsystems if there is any chance that they will be imported by the same subsystem. Alternatively, the -rename option can be specified, which causes dirmig to automatically generate unique names for all subsystems.

    makemig - analyze makefiles for migration-relevant information

    Syntax

    Options

    The following options are currently accepted, but ignored, by makemig:

    -b, -k, -m, -o file, -p, -R resume, -q, -t, -v, -W file.

    Description

    Makemig is a significant, but incomplete, reimplementation of the popular automated software development tool, make. It processes makefiles much like the real make tool and accepts the same options and arguments. Although it does not exactly reproduce all the behaviors of make, makemig should be able to accept any makefile and generate all the make commands (rules) that are significant to the migration process. Please see the documentation on the real make program for more detailed information on the operation of makemig.

    Enhancements have been added to makemig to enable it to extract information about the build process that is useful to the migration process. In particular, its output is used by filemig to determine what include paths (-I options) are used to compile the various C/C++ source files that are being migrated. To do this, makemig must be run on all the makefiles used to build the original source programs. In addition, the makemig options -n -T -w must be specified and the resultant output captured in a file. This output is then passed on to filemig via its -use_make_output option.
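The kind of extraction filemig then performs on that captured output can be pictured with this sketch (illustrative only; the real tools are more thorough about compiler option syntax):

```python
import shlex

def include_paths(command_line):
    # Pull the -I include directories out of a compile command line,
    # accepting both the "-Idir" and "-I dir" forms.
    paths = []
    tokens = shlex.split(command_line)
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "-I" and i + 1 < len(tokens):
            paths.append(tokens[i + 1])
            i += 1
        elif tok.startswith("-I") and len(tok) > 2:
            paths.append(tok[2:])
        i += 1
    return paths
```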

    filemig - define a full file migration map

    Syntax

    Options

    Description

    The general purpose of filemig is to define a migration mapping between the data files in a set of source directories and corresponding files and directories within Apex subsystems. A given file can be mapped to at most one destination. Filemig does not actually modify any files; it merely generates a "full file mapping" which serves as input to the next stage of migration, perfmig. The file map may be manually changed before passing it to perfmig, but such alterations should be kept to a minimum since it may take several attempts to arrive at the ideal migration map. The resultant file map is output to the standard output unless overridden with the -outmap option.

    The source directories and associated files to be migrated are identified by one or more instances of the -map or -map_file options. The -map options used are typically generated by the dirmig tool and provided to filemig via the -options option. A -map option takes one of two forms:

    or

    For example:

    or

    The first form indicates that the data files in the source directory src_dir1 should be migrated into the Apex subsystem ss1 at its top-level directory. The second form indicates that the data files in source directory src_dir2 should be migrated into the subdirectory sub_dir of the Apex subsystem ss2. Notice that the view directories are not explicitly included in the destinations. The view is specified with the -view option and is automatically inserted by the migration tools when it is needed.

    The -control, -control_c, -uncontrol, -uncontrol_c, and -ignore options specify how the files in a given source directory are to be treated during migration. Each of these options provides a file specification which may identify a particular file name or may include the wildcard characters "*" or "?" to refer to a collection of files. Furthermore, the specifications may be relative pathnames or full pathnames. Full pathnames have to match the full pathname of a source file, whereas relative pathnames only have to match the "tail end" of a file's full pathname. If more than one of these file treatment options is given, then the order in which they appear (left to right) is the order in which they will be compared to candidate file pathnames. The first specification that matches indicates how the file should be treated.

    A source file matched by a -control option is copied to its destination subsystem directory and put under version control, whereas a match by an -uncontrol option only copies the file. The -control_c and -uncontrol_c options are identical to the -control and -uncontrol options, respectively, except that the file is also identified as a C or C++ source code file and may cause the files to be processed in a special manner depending on the use of the perfmig options -no_imports, -include_scheme, and -register_main_programs. A file matched by an -ignore option is not migrated but it is listed in the full file migration map unless the -omit_ignored option indicates otherwise.
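A compact way to picture the five treatments and the first-match rule is the following sketch; the option names are from the manual, but the code itself is purely illustrative:

```python
from fnmatch import fnmatch

# For each treatment: (copied, version-controlled, treated as C/C++).
TREATMENTS = {
    "-control":     (True,  True,  False),
    "-control_c":   (True,  True,  True),
    "-uncontrol":   (True,  False, False),
    "-uncontrol_c": (True,  False, True),
    "-ignore":      (False, False, False),
}

def treatment_for(pathname, rules):
    # rules is an ordered list of (file_spec, treatment) pairs; the
    # first spec that matches (full path for absolute specs, tail of
    # the path for relative specs) decides the treatment.
    for spec, treatment in rules:
        pattern = spec if spec.startswith("/") else "*/" + spec
        if fnmatch(pathname, pattern):
            return treatment
    return None
```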

    Finally, the -history, -include_paths, -model, -ss_storage, -view, and -view_storage options specify the initial global default values of various Apex subsystem, view, source directory and file object attributes. These attributes can also be defined for specific objects with the -set_attr option.

    See the perfmig command for a description of the file migration map.

    perfmig - test and perform migration steps

    Syntax

    Options

  • -config_file config_file

    If specified, build a configuration file named config_file containing the full pathnames of all the views created by perfmig. This file is used by perfmig to define the Apex import dependencies and can also be used by various other Apex commands, such as the show_status and compile commands.

    Default: Description.cfg, unless the -no_imports option is given, in which case no configuration file is built.

  • -confirm

    Must be specified to actually cause changes to any files.

    Default: make no changes to files

  • -external_includes list_of_source_directory_pathnames

    Specifies a list of source directories which contain C or C++ include files that are not intended to be migrated to an Apex subsystem but are still referenced by migrated source code files in an #include directive. This is only useful if the -no_imports option is not specified or the -include_scheme option is specified. Its purpose is to suppress the warning messages perfmig generates when it encounters an include file that is not migrated to an Apex subsystem. Any subdirectories of the specified source directories are also treated as external. The source directories must be full pathnames. Multiple uses of this option are additive.

    Note that "built-in" include directories are automatically treated as external directories and do not have to be explicitly specified as such. See the -builtin_includes option.

    Default: none

  • -ignore_includes included_file_spec including_file_spec

    Do not generate any warning messages for missing include files that match the included_file_spec within C/C++ source files that match the including_file_spec. The included_file_spec and including_file_spec can contain the wildcard characters "*" or "?" to refer to a collection of files. For example:

    will not report a warning if, while analyzing the C++ source file /src/tools/change.cpp an #include "xyz.h" or #include <xyz.h> directive is found, but the xyz.h file cannot be located. Other possible examples using wildcards are:

  • -inmap map_file

    Use map_file as the file migration map to drive the migration process.

    Default: standard input

  • -make_depend

    Execute the Apex dependencies command to calculate the make dependencies between the various C/C++ source files. This option is only meaningful if the "migrate by copy" mode is used and the -no_imports option is not specified. This option should only be used with models defined for foreign C/C++ compilers where the command translation tables for the C/C++ source dependencies commands have been appropriately defined. This option should not be used with the Rational Apex C/C++ compiler.

    Default: do not calculate the make dependencies

  • -missing_includes missing_output_file

    Whenever a missing include file is detected (one not already ignored via the -ignore_includes or -ignore_standard_includes options), output a line to missing_output_file in the form:

    Default: do not generate a file containing the missing include file references

  • -no_imports

    Do not derive the import relationships between subsystems from the #include directives in C/C++ source files.

    Default: do generate imports

  • -no_group_controls

    Do not group multiple controlled files into a single Apex control command. Instead, issue a separate Apex control command for each controlled file.

    Default: group multiple controlled files into a single Apex control command to improve performance

  • -no_usr_include

    Do not automatically append /usr/include to the end of the list of include directories specified by the -builtin_includes option.

    Default: automatically append /usr/include to the built-in include directories list

  • -options options_file

    Read command line options from the given file. This option is typically used to read in a set of -ignore_includes options that were generated through the use of the -missing_includes option. In an options file, the "#" character marks the beginning of a comment which is terminated by the end of line character (newline).

    Default: none

  • -register_main_programs

    Search all C/C++ source files; if a declaration of the "main" function is found, register the source file as a main program with Apex.

    Default: do not register main programs with Apex
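A crude approximation of that search is sketched below. The real tool's analysis is more precise than a regular expression; the pattern here is purely illustrative:

```python
import re

# Matches a line beginning (optionally) with "int" followed by
# "main(" -- a rough stand-in for a real declaration scan.
MAIN_RE = re.compile(r"^\s*(int\s+)?main\s*\(", re.MULTILINE)

def looks_like_main_program(source_text):
    return bool(MAIN_RE.search(source_text))
```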

  • -resolve_includes_by ( any_available_file | limited_search )

    This option specifies what method perfmig uses to resolve #include directives in the original C/C++ source files.

    The any_available_file value tells perfmig to assume that any file it finds among the original source files that matches the simple name of the include file will resolve the reference. If more than one file with the same simple name is found, then perfmig displays an error message (only the first time this particular conflict is encountered) and arbitrarily picks one of the possible choices to actually use for the include file. For example:

    Each of these includes could be resolved by ANY of these possible actual source files:

    The limited_search value tells perfmig that it has to resolve all #include directives using the include paths provided to it. Furthermore, it will only resolve include file references relative to the parent directory of each source file and NOT relative to the directory from which a source file was compiled. Perfmig gets its include paths from the following sources:

    a . INCLUDE_PATHS: attributes within the file migration map identified by the -inmap option. These are typically provided automatically via use of the makemig tool. Note that these can be manually augmented with filemig's -set_attr option.

    b . The -builtin_includes option.

    c . The -external_includes option.

    The resolution of include files is also affected by the -ignore_includes and -ignore_standard_includes options.

    Thus, at the cost of some accuracy, the any_available_file value can be used to resolve include file references without having to deal with makefiles and makemig.

    Default: any_available_file
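The any_available_file strategy amounts to indexing every migrated file by its simple name, as in this illustrative sketch (not perfmig code):

```python
from collections import defaultdict
from posixpath import basename

def build_include_index(migrated_files):
    # Map each simple file name to every migrated path that ends in it.
    index = defaultdict(list)
    for path in migrated_files:
        index[basename(path)].append(path)
    return index

def resolve_include(include_name, index):
    # Returns (chosen file, ambiguous?).  With more than one candidate,
    # the choice is arbitrary (here: the first one seen), mirroring the
    # behavior described above.
    candidates = index.get(basename(include_name), [])
    return (candidates[0] if candidates else None, len(candidates) > 1)
```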

  • -start_at step
    -stop_at step

    These options tell perfmig which migration steps to perform. The -start_at option indicates at which step to begin and the -stop_at option says at which step to end.

    The legal step values, their respective full names, and their general behaviors are as follows:

    subsystem (Subsystem Decomposition Step)
    This step creates the necessary subsystems and views. It must be the first step performed when a new migration is attempted.

    architecture (Architectural Control Step)
    During this step, source files may be copied; C/C++ files are examined to identify any main programs and may have their #include directives converted; import relationships may be defined; C/C++ dependency relationships are established; and main programs may be registered. The subsystem step must have been successfully performed before this step can be attempted.

    version (Version Control Step)
    This step places the desired source files under version control. In "migrate by copy" mode, the architecture step must have been successfully performed before this step can be attempted. In "migrate by reference" mode, the architecture step can be skipped.

    cleanup (Cleanup Step)
    This step assists the user in destroying the original migration tower that was created during the first three migration steps while leaving the original source files intact. This step can be performed even if the other steps were not completed successfully.

    Perfmig can perform the subsystem, architecture and version steps individually, in separate invocations of the tool, or in combination. The cleanup step can only be performed as a separate invocation of the tool. Thus, the following uses of these options are valid:

    but the following uses are illegal:

    Default for -start_at: subsystem, unless the -stop_at option is given, in which case its value is used as the default for the -start_at option

    Default for -stop_at: version, unless the -start_at option is given, in which case its value is used as the default for the -stop_at option
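The legal combinations can be summarized in a few lines; this sketch models the rules stated above and is not perfmig code:

```python
STEP_ORDER = ["subsystem", "architecture", "version"]

def valid_step_range(start_at, stop_at):
    # The three main steps may be run singly or as an in-order range;
    # cleanup is only legal as a separate invocation by itself.
    if "cleanup" in (start_at, stop_at):
        return start_at == stop_at == "cleanup"
    return (start_at in STEP_ORDER and stop_at in STEP_ORDER
            and STEP_ORDER.index(start_at) <= STEP_ORDER.index(stop_at))
```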

  • -sysd_file sysd_file

    If specified, build a system description file named sysd_file containing the full pathnames of all the subsystems created by perfmig along with all their respective imports. This file is used by perfmig to define the Apex import dependencies.

    Default: Description.sysd, unless the -no_imports option is given, in which case no system description file is built.

  • -verbose

    Display all commands which would change any files before changing them.

    Default: do not display commands

  • -verbose2

    This option is like the -verbose option, but it outputs more of the file migration map as it is processed, and in a different format.

    Default: do not display file migration map nor any commands

    Description

    Perfmig takes a file migration map, typically one generated by filemig, and generates (and optionally executes if the -confirm option is specified) the commands necessary to actually migrate a collection of source files into Apex subsystems, views and directories. These are generally commands to Apex to create subsystems and views, copy source files, place files under version control and register main programs. They may also be internal commands which are used to convert C/C++ source files under certain situations. By default, perfmig reads the map from standard input. This can be overridden with the -inmap option. It also outputs the commands to standard output as it processes them if the -verbose or -verbose2 options are given. If the -confirm option is not provided, perfmig attempts to simulate what the commands would do in order to identify as many error situations as possible.

    Perfmig performs the migration steps as specified by the -start_at and -stop_at options. The default behavior is to do the subsystem, architecture and version steps in sequence.

    The file migration map processed by perfmig has a simple structure controlled by a number of directives as is illustrated by this sample map:

    This map will cause the following operations to be performed, although not necessarily in this exact order:

    1 . Process the map in "migrate by copy" mode.

    2 . Create Apex subsystem /new/sys1.ss with its storage in /disk1/ss_store.

    3 . Create Apex view /new/sys1.ss/my.wrk with its storage in /disk2/view_store and based on the model in /apex/model.ss/sun4.

    4 . Create the subdirectory /new/sys1.ss/my.wrk/sub1.

    5 . Perform the following file copies:

    6 . Don't do anything with file /src/dir1/junk1.

    7 . Put file /new/sys1.ss/my.wrk/prog.c under version control with version history my_history.

    8 . If the -no_imports option is not given, then find all the #include directives in /src/dir1/prog.c and analyze them using the include paths /u/inc1 and ../inc2 to determine which original source directories the included files came from. Next figure out which views these source directories were migrated to and, finally, define import relationships between those views and /new/sys1.ss/my.wrk.

    9 . If the -include_scheme option has a value other than none, analyze /src/dir1/prog.c's include files as in step 8 above and use them to fix any name resolution problems. If this option has the values vdf_specific or vdf_general, try to create an Apex visibility description file to resolve the references. If its value is update_includes, change the name of the files in the #include directives if necessary for them to be properly resolved.

    10 . If the -register_main_programs option is specified, then examine the source code in /new/sys1.ss/my.wrk/prog.c. If it contains a declaration for the main function, then tell Apex to register it as a main program.
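A hypothetical map producing roughly these operations might read as follows. The attribute, SUBSYSTEM:, and SOURCE_DIRECTORY: directives are documented below; the MIGRATE_BY:, CONTROL:, and IGNORE: directive spellings are assumptions made purely for illustration:

```
# Hypothetical sketch; MIGRATE_BY:, CONTROL:, and IGNORE: are assumed names.
MIGRATE_BY: copy
MODEL: /apex/model.ss/sun4
VIEW: my.wrk
SUBSYSTEM: /new/sys1.ss
SS_STORAGE: /disk1/ss_store
VIEW_STORAGE: /disk2/view_store
SOURCE_DIRECTORY: /src/dir1
INCLUDE_PATHS: /u/inc1 ../inc2
HISTORY: my_history
CONTROL: prog.c          # copy, control, analyze includes
IGNORE: junk1            # not migrated
```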

    More Formal Description of a File Migration Map

    In a file migration map, spaces are optional except where needed to disambiguate adjacent arguments. All directives must be terminated by the end of line character (newline). Comments are introduced by a "#" character and continue till the end of the line. The order of the directives in a migration map is significant.

    The very first directive should indicate the mode of migration being attempted, such as:

    or

    If this directive is missing, then "migrate by reference" mode is assumed.

    The next set of directives provide values for various attributes of Apex subsystems, views and files. They are:

    The HISTORY: attribute directive specifies the name of the version history that will be used when a file is placed under version control. If none is given, then the default version history is used.

    The INCLUDE_PATHS: attribute directive provides the list of source directories that are used to resolve #include directives within C/C++ source code files if they need to be examined or converted. Perfmig follows the same search rules used by C/C++ compilers in resolving #include file directives.

    The MODEL: attribute directive identifies the model view that will be used whenever an Apex view is created. Its value defaults to that defined by the APEX_DEFAULT_MODEL session switch.

    The SS_STORAGE: attribute directive indicates where the data files associated with an Apex subsystem (such as its version control database) should actually be stored. By default, they are stored on the same file system as that of the subsystem's parent directory. This attribute is used whenever a new subsystem is created.

    The VIEW: attribute directive tells what name to use for any views that need to be created. Its default value is migrate.wrk. A view is always created as a working view.

    Finally, the VIEW_STORAGE: attribute directive specifies where the data files associated with an Apex view should actually be stored. Its value defaults to that of the view's associated subsystem. This attribute is ignored in "migrate by reference" mode.

    The values of the MODEL:, SS_STORAGE:, and VIEW_STORAGE: attribute directives must specify full pathnames. The HISTORY: and VIEW: attributes must give only simple names. The INCLUDE_PATHS: attribute may include both relative and complete pathnames.

    Although an attribute directive may appear anywhere in a file migration map, the position of the directive determines the objects to which it will apply.

    In short, the scope of file attributes is nested within that of file treatment attributes, which is nested within that of source directory attributes, which is nested within that of subsystem attributes, which is in turn nested within that of global attributes.

    The global attribute directives in a map, if any, are followed by zero or more SUBSYSTEM: directives. A SUBSYSTEM: directive specifies the full pathname of a new subsystem which is to be created along with its associated view. The current value of the SS_STORAGE: attribute directive applies to the creation of a subsystem. The current values of the MODEL:, VIEW:, and VIEW_STORAGE: attribute directives apply to the creation of a view. The subsystem and its view are not actually created until the first non-ignored file directive within the subsystem is found. Therefore, empty subsystems and views will not be created.

    Immediately following a SUBSYSTEM: directive may be zero or more subsystem attribute directives followed in turn by zero or more SOURCE_DIRECTORY: directives. A SOURCE_DIRECTORY: directive merely identifies the source directory from which data files will be migrated to the subsystem identified by the preceding SUBSYSTEM: directive.

    Immediately following a SOURCE_DIRECTORY: directive may be zero or more source directory attribute directives followed in turn by zero or more file treatment directives. A file treatment directive is one of CONTROLLED_C_FILES:, CONTROLLED_FILES:, UNCONTROLLED_C_FILES:, UNCONTROLLED_FILES:, or IGNORED_FILES: which indicate how a given source file is to be treated during the migration.

    Immediately following a file treatment directive may be zero or more file treatment attribute directives followed in turn by zero or more file directives.

    Immediately following a file directive may be zero or more file attribute directives. Only the HISTORY: and INCLUDE_PATHS: attribute directives are applicable to file objects.

    The list of file directives associated with the IGNORED_FILES: directive is present only for documentation purposes; perfmig does nothing with these files. The file directives associated with the other file treatment directives can take any one of the following forms:

    In all four forms src_file refers to the simple name by which the file is known in the source directory identified by the preceding SOURCE_DIRECTORY: directive. In the first form, the source file will be copied to a file of the same name in the top-level directory of the subsystem/view. In the second form, the source file will be copied into a file of the same name, but in subdirectory sub_dir of the subsystem/view. This subdirectory can be more than one level deep. In the third form, the source file will be copied to the top-level subsystem/view directory and renamed dst_file. In the last form, the source file will be copied to the sub_dir subdirectory of the subsystem/view and renamed dst_file.

    In "migrate by reference" mode, only the first and second forms are allowed and even these are subject to additional restrictions.
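    Putting the scoping and ordering rules together, a complete map might look roughly like the following sketch. The directive names come from this document, but the value spellings, the comments, and the use of the simplest (bare file name) file directive form are illustrative assumptions:

```
# Hypothetical migration map sketch; all values are illustrative.
MODEL: /apex/models/my_model.wrk      # global attribute
VIEW: migrate.wrk                     # global attribute

SUBSYSTEM: /new/sys1.ss               # subsystem/view to be created
SS_STORAGE: /bigdisk/apex_data        # subsystem attribute
SOURCE_DIRECTORY: /src/dir1           # migrate files from this directory
INCLUDE_PATHS: /src/dir1 /src/common  # source directory attribute
CONTROLLED_C_FILES:                   # file treatment directive
prog.c                                # simplest file directive form
util.c
UNCONTROLLED_FILES:
README                                # migrated but not version controlled
IGNORED_FILES:
prog.o                                # listed for documentation only
```

    Note how each attribute directive applies only to the objects that follow it within its enclosing scope, as described above.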

    subsysmig - test and perform subsystem decomposition step

    Syntax

    Options

    The following options are valid for subsysmig. See perfmig - test and perform migration steps for a complete description of these options.

    -confirm
    -inmap map_file
    -options options_file
    -verbose
    -verbose2

    Description

    Subsysmig performs the Subsystem Decomposition Step of the migration process.

    archmig - test and perform architectural control step

    Syntax

    Options

    The following options are valid for archmig. Please see perfmig - test and perform migration steps for a complete description of these options.

    -builtin_includes list_of_source_directory_pathnames
    -config_file config_file
    -confirm
    -external_includes list_of_source_directory_pathnames
    -ignore_includes included_file_spec including_file_spec
    -ignore_standard_includes
    -include_scheme ( none | vdf_general | vdf_specific | update_includes )
    -inmap map_file
    -make_depend
    -missing_includes missing_output_file
    -no_imports
    -no_usr_include
    -options options_file
    -register_main_programs
    -start_at step -stop_at step
    -sysd_file sysd_file
    -verbose
    -verbose2

    Description

    Archmig performs the Architectural Control Step of the migration process. See perfmig - test and perform migration steps for a complete description of this tool's behavior.

    vermig - test and perform version control step

    Syntax

    Options

    The following options are valid for vermig. See perfmig - test and perform migration steps for a complete description of these options.

    -confirm
    -inmap map_file
    -no_group_controls
    -options options_file
    -verbose
    -verbose2

    Description

    Vermig performs only the Version Control Step of the migration process. See perfmig - test and perform migration steps for a complete description of this tool's behavior.

    cleanmig - test and perform cleanup step

    Syntax

    Options

    The following options are valid for cleanmig. See perfmig - test and perform migration steps for a complete description of these options.

    -confirm
    -inmap map_file
    -options options_file
    -verbose
    -verbose2

    Description

    Cleanmig performs only the Cleanup Step of the migration process. See perfmig - test and perform migration steps for a complete description of this tool's behavior.

    duprefmig - duplicate a "migrate by reference" tower of views

    Syntax

    Options

    Note that in assigning new source directories care must be taken to avoid making changes that will break existing C/C++ source file #include directives. For this reason, it is advisable to change only the top level directory of a source tree.

    Default: none

  • -dup_view old_view new_view

    This option specifies where the new, duplicated versions of the old views should be placed. There must be one or more instances of this option. The old_view can identify a particular old view, or it can include the wildcard characters "*" or "?" to refer to a collection of old views. Furthermore, old_view can be a relative pathname or a full pathname. Full pathnames must match the full pathname of an old view, whereas relative pathnames need only match the "tail end" of a view's full pathname. If more than one of these options is given, they are compared to candidate old view pathnames in the order in which they appear (left to right); the first specification that matches identifies the new view name. For example, the option:

    would define the following new views:

    Default: none
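    The matching rules above can be sketched as follows. This is an illustrative model, not duprefmig's actual code; the function name, the example paths, and the treatment of wildcards via Python's fnmatch are assumptions:

```python
# Sketch of -dup_view matching: ordered specifications, first match wins.
# Full-pathname patterns must match the whole old view path; relative
# patterns need only match the trailing path components ("tail end").
from fnmatch import fnmatch

def map_view(old_path, dup_specs):
    """dup_specs: ordered list of (old_view_pattern, new_view) pairs."""
    for pattern, new_view in dup_specs:            # left-to-right order
        if pattern.startswith("/"):                # full pathname pattern
            if fnmatch(old_path, pattern):
                return new_view
        else:                                      # relative: match the tail
            n = len(pattern.split("/"))
            tail = "/".join(old_path.split("/")[-n:])
            if fnmatch(tail, pattern):
                return new_view
    return None                                    # no specification matched

specs = [("sys1.ss/rel.wrk", "/new/sys1.ss/rel.wrk"),
         ("*.wrk", "/new/other.wrk")]
print(map_view("/old/sys1.ss/rel.wrk", specs))     # /new/sys1.ss/rel.wrk
```

    Because the first matching specification wins, more specific patterns should be listed before catch-all wildcard patterns.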

  • -ignore_tar_permission_errors

    Duprefmig uses the tar program to duplicate source directories. If the read/search permissions on a file or subdirectory within a source directory prevent tar from reading or searching it, this option causes duprefmig to ignore the error and continue processing.

    Default: report tar permission errors when duplicating source directories

  • -new_config_file new_config_file

    Put the new configuration file built by duprefmig, containing the full pathnames of all the new views, in new_config_file. This file is used by duprefmig to define the Apex import dependencies and can also be used by various other Apex commands such as the show_status and compile commands.

    Default: New_Description.cfg

  • -old_config_file old_config_file

    The old_config_file contains the full pathnames of all the old views which are to be duplicated. This information is used by duprefmig to locate the associated old source directories. These old views and source directories are used to derive the new views and source directories as controlled by the -dup_source and -dup_view options. This configuration file must describe a complete "migrate by reference" tower of old views. That is, none of the old views can import another view that is not included in the configuration file. Furthermore, in general, all the old views that were created in a given migrate by reference attempt should be included in the configuration file. Duprefmig does not verify that these requirements are met.

    Default: none, this option is always required

  • -options options_file

    Read command line options from the given file. This option is typically used to read in a set of -dup_source or -dup_view options of which there can be more than one. In an options file, the "#" character marks the beginning of a comment which is terminated by the end of line character (newline).

    Default: none

  • -use_new_source

    If a specified new source directory already exists, use it instead of duplicating its associated old source directory.

    Default: do not use new source directories, if any. It is an error if they already exist

  • -use_new_view

    If a specified new view already exists, use it instead of duplicating its associated old view. This can be used to handle old views which were not migrated by reference or to preserve an old view in the new configuration.

    Default: do not use new views, if any. It is an error if they already exist

  • -verbose

    Display all commands which would change any files before changing them.

    Default: do not display commands

    Description

    Duprefmig takes a configuration file, containing a collection of views that were previously migrated by reference using perfmig, and produces a duplicate copy of these views and their associated original source directories. It accomplishes this in such a manner that concurrent development can take place in both the old, original source directories and their associated old views as well as the new source directories and their associated new views. All controlled files are maintained by the common Apex subsystems. By default, duprefmig just displays the Apex and shell commands it would perform to duplicate the information. The -confirm option must be specified for duprefmig to actually execute these commands.

    Duprefmig performs the following steps to duplicate a migrated by reference tower of views, although not necessarily in this exact order:

    1 . The configuration file specified by the -old_config_file option is read to find out the names of the old views to be duplicated. The old source directory associated with each view is acquired from the symbolic link that the old view pathname actually refers to. New source directory and new view pathnames are derived from the old view and source paths and the -dup_source and -dup_view options.

    By default, the new source directories cannot already exist. However, the -use_new_source option can be specified to cause duprefmig to use any existing new source directories it finds and to skip the following steps which test for checked out files and copy the directory. Thus, it is possible to duplicate the old source directories using other tools before running duprefmig to create the new views.

    Also, by default, the new views cannot already exist. But the -use_new_view option can be given to remove this restriction and cause duprefmig to use any existing new views it finds. If a new view exists, then the following steps, which test for checked out files, copy the directory, and create the new symbolic links, are not performed on it. This feature has two potential uses. First, it can be used to deal with views which were not migrated by reference. Duprefmig notices this situation when it examines the old view and displays an error to the effect that the old view was not migrated by reference. In this case, the views must be copied beforehand and appropriate -dup_view options must be provided to tell where the new views are located. Second, this feature can be used to tell duprefmig to use the old view as the new view (that is, to keep the old view in the new configuration). This is accomplished by providing a -dup_view option which specifies that the same name should be used for the new view. For example:

    Note that care must be exercised in using this feature to make sure that the new import relationships will make sense. In particular, if a given old and new view are identical, then that view cannot import a view which is not also identical in both the old and new configurations. It is also not recommended to use this option on a view that was migrated by reference.

    2 . A new configuration file is generated in the file identified by the -new_config_file option. This is done even if the -confirm option is not given. The new configuration file is used to make the necessary import changes later.

    3 . The old views are examined to make sure that they do not have any version controlled files that are checked out. Files can be checked out in other views which are not being duplicated and do not have to be up to date.

    4 . The old source directories are copied to the new source directories using the tar command. The -ignore_tar_permission_errors option can be used to ignore tar errors which are usually due to not being able to read a file or search a directory.

    5 . The new views are duplicated by creating a couple of special symbolic links. The new view itself is actually a symbolic link to its associated new source directory as is the case for any view which was migrated by reference.

    6 . Finally, the import relationships between the new views are updated using Apex's accept_import_changes command.
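    Steps 4 and 5 above boil down to a copy plus a symbolic link. The following throwaway sketch models that shape with temporary directories; duprefmig itself shells out to tar and operates on real Apex views, and all the names here are invented:

```python
# Model of duprefmig's steps 4 and 5 using temporary directories.
import os
import shutil
import tempfile

work = tempfile.mkdtemp()
old_src = os.path.join(work, "old_src")          # old source directory
os.makedirs(old_src)
with open(os.path.join(old_src, "prog.c"), "w") as f:
    f.write("int main() { return 0; }\n")

# Step 4: duplicate the old source directory (duprefmig uses tar for this).
new_src = os.path.join(work, "new_src")
shutil.copytree(old_src, new_src)

# Step 5: the new view is a symbolic link to the new source directory,
# just as any view migrated by reference is a link to its source directory.
new_view = os.path.join(work, "new_view")
os.symlink(new_src, new_view)

print(os.path.islink(new_view))                  # True
with open(os.path.join(new_view, "prog.c")) as f:
    print(f.read().strip())                      # int main() { return 0; }
```

    The key property, as the description notes, is that controlled files stay in the common Apex subsystems, so development can continue concurrently through both the old and new views.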


    Full Migration Map Syntax


    Apex Commands Used During Migration

    The following Apex commands are used during the migration process. A full description of each command can be found in the Command Reference Guide.

    Command                  Options

    abandon                  view
    accept_import_changes    -identical -source sysd_file config_file
    control                  [-control_history history -create_history]
                             -no_keyword_replacement file ...
    create_directory         directory
    create_export_set        -view view all_units
    create_subsystem         [-storage directory] subsystem
    create_working           [-model view] [-storage directory] view
    dependencies             view
    discard                  -recursive -force view
    discard                  subsystem
    maintain                 view
    migrate                  -into directory file
    register_main_program    program_unit
    remodel                  [-model model] -replace_switches view


  • Rational Software Corporation 
    http://www.rational.com
    support@rational.com
    techpubs@rational.com
    Copyright © 1993-2001, Rational Software Corporation. All rights reserved.