Migrating Legacy C/C++ Code to Apex

The following sections are included in this document:
- Introduction
- Third Party C/C++ Compiler Compatibility
- Using an External Library As Is
- Migrating an External Library to an Apex View
- Migration Tools Overview
- Migrating to Apex Views by Reference
- Migrating to Apex Views by Copy
- Migration Tools
Introduction

In this document, we discuss how to convert your existing C/C++ development environment into one that can benefit from the many features of the Apex C/C++ environment. This conversion is referred to as a migration since it is often an evolutionary process that gradually brings you closer to the ultimate goal of complete integration with Apex C/C++.
Although much of the following discussion focuses on the use of the Apex C/C++ compiling system, a great deal of this information is also applicable to the use of third party C/C++ compilers. This is especially true for those compilers for which an Apex model has been provided. It is also true, but to a lesser extent, for C/C++ compilers that have no specific model.
There are three major aspects to the migration process:
- 1 . Getting your source files (legacy C/C++ code and other data files) into Apex subsystems/views and under Change Management and Version Control (Summit/CM)
- 2 . Using Apex's automated build management facilities (makefile maintenance)
- 3 . Using the Apex C/C++ development tools (compiler, code browser, debugger) and architecture control feature
Extensive information about the Apex build management process can be found in the C++ Compiler Reference Manual.
In general, and for the purposes of this discussion, each aspect above depends on the previous ones in order to obtain the greatest benefit. Therefore, it is assumed that you will always want to put some or all of your legacy source files into Apex views and under Summit/CM control. However, under what circumstances might you NOT want to place your source files in an Apex view? Before answering that question, we need to discuss how the rest of your source files should be placed into Apex views.
Apex Architecture Control
Apex subsystems/views are the fundamental units of architecture control and the primary units for resource (such as object code or library) sharing. The larger the project, the more important it is to properly partition it into architecturally sound regions (subsystems/views). Each region should be as independent from the others as is reasonably possible, at least in a logical sense (high level design viewpoint) if not in a more physical sense (C/C++ source file dependencies), in order to maximize the degree of information hiding. This will increase the potential for reuse and decrease future maintenance costs. Apex C/C++ can enforce this more physical sense of independence through user defined view import/export relationships.

Regarding resource sharing, each development group (team or individual) typically has its own set of views for the subsystems that it is actively working on. Other groups may have their own views for the same as well as additional subsystems. There may be some subsystems, however, containing views which many groups want to share, either because the views change infrequently or because they are a stable part of the final product release (such as an integration area). Apex C/C++ can accommodate this type of sharing at the view level. However, it is important to keep in mind that if a given view is shared, then all of the views in its import closure (dependent views) must also be shared. This typically means that only lower level views in the dependency hierarchy are ever shared.
Given these features of Apex views, you need to analyze your source files and determine how best to structure them within the Apex subsystem/view architecture. Your current source files are probably stored in several directories containing multiple levels of subdirectories. Since Apex views can contain nested subdirectories, you could just migrate all of your files into a single subsystem/view while preserving the original subdirectory structure. While this scheme might circumvent a number of migration issues that would otherwise have to be addressed, it is not recommended for anything but the smallest of products. Such a structure will not be able to benefit from the advantages of Apex views. Instead, you should examine your source files and determine their architectural boundaries based on their original design and interdependence. It may also be constructive to take into account the group divisions that are responsible for the development of the various subcomponents of the product. Likewise, you should consider which subcomponents might be useful in some other product or might be expected to be completely replaced or removed from the current product sometime in the future. Note that you can use the migration tools described later in this document to analyze the existing C/C++ source file dependencies and then use that information to help you decide which Apex view architectures make the most sense.
Now that you have an idea of how to organize your Apex views, we can address the issue of when you shouldn't put your legacy C/C++ code into Apex views and under Summit/CM control. This situation arises when you want to use Apex C/C++ to compile your source code, but your code must use header files (and probably some related archive libraries) that are presently maintained in a source directory that you cannot move or modify in any way. Such is typically the case when your product uses a library developed by some third party for which you do not control (or even possess) all of the source code; such a library is called an external library. An external library does not contain any C/C++ source bodies that you need to compile into object files in your normal course of development. For more detailed information on how to set up Apex C/C++ to deal with this situation, see Using an External Library As Is. If you are permitted to copy the files in the external library or can create certain Apex related files and directories within the original external library directory, then it may be beneficial to migrate the external library source files into an Apex view. One such benefit is that you can then put the files under Summit/CM control and thereby track any changes that the third party makes to the external library over the life of your product. If this is a viable option for you, then see Migrating an External Library to an Apex View for the details. Note that in either case, you must verify that any external library object code files or archive libraries are compatible with the Apex C/C++ compiler as discussed in Third Party C/C++ Compiler Compatibility.
Apex Build Management
Having addressed the Apex architecture control aspect of the migration process, you should now be ready to tackle the issues involving the use of Apex's automated build management facilities. Using its standard, predefined models and build keys, Apex C/C++ automates the day-to-day aspects of build management and still provides a high degree of user customization (with consistency). It accomplishes this by defining a set of specially designed makefiles that are maintained and invoked (to compile or link a program) via the Apex environment. For example, it automatically updates its makefiles when a new C/C++ source file is created in a view or if a new source subdirectory is added. When linking programs, Apex automatically determines the link contributions (such as archive libraries) that other imported views need to provide. These features greatly simplify the cost of maintaining a reliable and consistent build process with the prospects of additional improvements in future releases of Apex C/C++.
You almost certainly have your own build process that you have invested a good deal of effort into perfecting, and it probably also uses makefile technology. Unfortunately, your makefiles will not be directly usable with Apex's standard C/C++ models and their makefiles. So, you will have to determine whether it is worthwhile to reimplement your build process to use Apex's. Naturally, you will gain the most benefit from the Apex C/C++ development environment if you use Apex's build management facilities. Fortunately, you do not have to convert your makefiles now if you do not want to. You can continue to use your existing build process and still have your source files under Summit/CM control. If you wish to exercise this option, then you should use the migration tools discussed below in their "migrate by reference" mode. See Migrating to Apex Views by Reference for more information.
Even if you do plan to convert your makefiles and use the Apex build management facilities, you may still wish to migrate your sources by reference, at least initially, because this will minimize the impact of the migration process on current development activity. Migrating a large development organization can take an appreciable amount of time to complete. You most likely will not want to stop all development work in order to migrate the source and retrain your personnel all at once. When properly administered, development can proceed in parallel in both the original legacy source directories (using your original build process) and the new Apex views, with each working off of the same source code base controlled by Summit/CM.
If you do have the option of completely migrating your sources and build process into the standard Apex C/C++ model, then you will want to use the migration tools in their "migrate by copy" mode. For the details on that, see Migrating to Apex Views by Copy. Note that you can initially migrate your legacy source files by reference and then, effectively migrate them by copy and convert your makefiles at a more leisurely pace while not upsetting on-going development.
Apex C/C++ Development Tools
If you want to use the Apex C/C++ compiler and its related development tools to build your product, you will need to use its standard Apex C/C++ model. Therefore, you will have to convert your original build process (and its makefiles) to use that defined by the standard model as described in Migrating to Apex Views by Copy. In some cases, the Apex environment provides models for other, third party C/C++ compiling systems. If you want to use those models, you will have to convert to the build process prescribed by those models which will also probably require migrating by copy.
If the Apex environment does not provide a model for the C/C++ compiling system that you need to use, then you can migrate your sources by reference as described in Migrating to Apex Views by Reference. Alternatively, you can migrate your sources by copy, but use the special Apex C/C++ "migrate" model that does not generate any makefiles and has no predefined build process. In this case, you will have to implement an appropriate build process yourself.
As mentioned previously, if your product requires an external library that was developed using some other C/C++ compiler, then, if it is to be used by the Apex C/C++ compiling system, it must be compatible as discussed in Third Party C/C++ Compiler Compatibility.
Third Party C/C++ Compiler Compatibility

Before you try to link a program built using the Apex C/C++ compiler with an external library that was built using a third party C/C++ compiler, you must verify that the code generated by the two compilers is compatible. This should normally not be a problem if the external library contains just C code and was compiled using the C compiler provided by the vendor who makes the platform on which you are running the Apex C/C++ compiler. However, even in this case, there is a potential for incompatibilities if the wrong versions of the vendor's C compiler or platform operating system were used to build the external library. For a given platform, all C compilers generally produce compatible object code, but they can differ in their function calling conventions and in their runtime libraries (different functions supported or with different argument lists).
If your external library is based on C++ code, it is more likely that it will not be compatible with the Apex C++ compiler. In addition to the above potential problems with C, C++ compilers depend more heavily on the implementation of their runtime support library. For example, different C++ compilers can use different mechanisms to implement static initialization, virtual function tables and exception handling. C++ compilers also vary in the algorithms they use to generate external link symbols for class member function entry points, commonly called name mangling. Both the caller and callee code must agree on how a function name is mangled to avoid undefined references at link time. Finally, C++ compilers can differ in how they handle template instantiations.
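As an illustration of the linkage issue (a sketch using generic Unix tools, not any Apex-specific facility; all file names here are invented), the following compiles one ordinary and one extern "C" function and inspects the resulting symbols with nm:

```shell
# Write a tiny C++ translation unit containing one ordinary (mangled)
# function and one extern "C" (unmangled) function.
cat > /tmp/mangle_demo.cpp <<'EOF'
// Ordinary C++ function: its link symbol is mangled, and the mangling
// algorithm varies between C++ compilers.
int mangled_add(int a, int b) { return a + b; }

// extern "C" suppresses mangling: the symbol is literally "plain_add",
// resolvable by any compiler that follows the platform's C conventions.
extern "C" int plain_add(int a, int b) { return a + b; }
EOF

${CXX:-c++} -c /tmp/mangle_demo.cpp -o /tmp/mangle_demo.o

# nm shows the difference: plain_add keeps its name, while mangled_add
# appears under a compiler-specific encoding (for example,
# _Z11mangled_addii under the Itanium C++ ABI used by g++ and clang).
nm /tmp/mangle_demo.o
```

Two object files link cleanly only if caller and callee agree on such encodings, which is one reason mixing C++ compilers is riskier than mixing C compilers.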
For help in determining if your particular external library will be compatible with Apex C/C++, talk to your Customer Service Representative.
Using an External Library As Is

This section discusses how you can use an existing external library in its original form with one or more Apex C/C++ views. The external library's files will not be placed under Summit/CM control. It is assumed that you have verified that the external library is compatible with Apex C/C++ as noted in Third Party C/C++ Compiler Compatibility. This information only applies when using the standard Apex C/C++ model and associated build key. If other models or build processes are used, you will have to figure out how to apply this information to those circumstances.
There are two potential issues that need to be resolved to access an external library that resides outside of an Apex C/C++ view: how to include its header files (and handle other compile time options) and how to link in its archive libraries (and handle other link time options).
All of these issues can be resolved a number of different ways with differing degrees of setup time, flexibility and ease of maintenance. The first approach presented is the one we recommend. It may require a bit more time to set up, but it provides a more flexible and maintainable solution and is more architecturally sound. The other two approaches are easier to set up (at least initially) at the cost of flexibility and architectural control. Hybrid approaches are also possible if you think they would better serve your needs.
All of the presented approaches assume that the header files in the external library will be referenced by your C/C++ code in #include directives using relative pathnames and not full (or absolute) pathnames. Appropriate include path (-I) compiler options will then be used to enable the Apex C/C++ compiler to resolve these relative pathnames. Full pathnames can be used to reference external library header files if your situation demands it, but this will significantly reduce the flexibility of the product and eliminate the ability to enforce any architecture controls. If you elect to do this, then the include path (-I) compiler options discussed below will not be needed.
In the examples below, your external library resides in the directory /extlib and contains an archive library called libext.a.
Recommended Approach
In this approach, only those Apex C/C++ views which need to use the external library will be granted access to it. In each such view, make the following changes to the view's context switch file (view/Policy/Switches).
For C code, add any include path options and other compile time options (such as macro definitions) to the C_OPTIONS (or C_PRE_OPTIONS) switch. Add any link time options to the C_LINK_OPTIONS (or C_LINK_PRE_OPTIONS) switch. Note that the compile time switches must be set in any view containing sources that reference the external library's header files. The link time switches only need to be set in those views containing main programs that must be linked with the external library.
For example, these switches might look something like this:
C_OPTIONS: -I/extlib -DFULL
C_LINK_OPTIONS: -L/extlib -lext
For C++ code, the following switches should be set for similar reasons:
CPP_OPTIONS: -I/extlib -DFULL
CPP_LINK_OPTIONS: -L/extlib -lext
Alternatively, the link options can be set using the link contribution switches as discussed in Migrating an External Library to an Apex View. In this case, these switches would be set in the views containing the code that was compiled against the external library and not necessarily in the views in which main programs are linked.
Apex C/C++ Model Based Approach
In this approach, all Apex C/C++ views which use your Apex C/C++ model will be able to access the external library whether they need to or not. In particular, all main programs will be linked with the external library. Naturally, if all of your views do need access to it, then this may be the best approach for your product.
In your Apex C/C++ model, make similar changes to the model's view context switch file (model_view/Policy/Switches) as were made in the previous approach for each individual view.
For example, these switches might look something like this for C code:

C_OPTIONS: -I/extlib -DFULL
C_LINK_OPTIONS: -L/extlib -lext

And like this for C++ code:

CPP_OPTIONS: -I/extlib -DFULL
CPP_LINK_OPTIONS: -L/extlib -lext
Whenever you make changes to your model's context switches, remember to propagate those changes to your views with the remodel command.
Apex C/C++ Build Key Based Approach
In this approach, all Apex C/C++ views which use all Apex C/C++ models which use your build key will be able to access the external library whether they need to or not. This approach has the advantage that the compiler and linker options only ever appear in a single file and are not copied into every view's context switches. However, tampering with a build key requires a more advanced degree of customization skills and is not recommended. In particular, future releases of the Apex C/C++ build keys will require you to reimplement your customizations.
In your build key directory, you will have to locate the scripts that are used to invoke the Apex C/C++ compiler and linker (typically called cc, CC, ld and LD). You will then have to edit these scripts to pass the appropriate compiler and linker options to the actual Apex program (typically rcc or RCC).
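The sketch below illustrates only the general shape of such an edit; the actual contents of the build key scripts are release-specific, and the option values simply reuse the /extlib settings from the earlier examples:

```shell
# Hypothetical fragment of a build key's cc wrapper script: inject the
# external library options ahead of whatever arguments Apex passes.
EXTLIB_OPTS="-I/extlib -DFULL"

# Compose (for illustration) the command line that would be handed to the
# underlying Apex compiler program, rcc.
build_cmd() { echo "rcc $EXTLIB_OPTS $*"; }

build_cmd -c main.c -o main.o
```

Because the options live in a single script rather than in every view's context switches, one edit affects every view built with this key, which is both the appeal and the risk of this approach.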
Migrating an External Library to an Apex View

This section discusses how to set up an Apex C/C++ view so that it can be used to contain an external library. This will enable your external library to benefit from Apex's architectural and version control features. It is assumed that you have verified that the external library is compatible with Apex C/C++ as noted in Third Party C/C++ Compiler Compatibility.
First of all, the source files (such as C/C++ header files and archive libraries) need to be migrated into an Apex C/C++ view. This should be done when you migrate all of your other legacy source files into Apex C/C++ views. If you do not want to change the original external library file directories in any way, you will have to copy them into the views as described in Migrating to Apex Views by Copy. Otherwise, you can migrate them by reference as outlined in Migrating to Apex Views by Reference. In either case, any reference in a C/C++ #include directive to an external library header file will have to use relative pathnames.
Once your external library's legacy files (header files and archive libraries) are placed into an Apex C/C++ view, you will need to set up various compile time and link time switches (options). Apex C/C++ will automatically take care of providing the compiler with the proper include path (-I) options so that C/C++ source files, compiled in any other view which imports the external library view, will be able to resolve references to the library's header files. However, if there are other compile time options (such as macro definitions) that need to be provided when compiling the library's header files, then they must be provided to each view that uses (explicitly or implicitly imports) the library's view as described in Using an External Library As Is. This involves setting the importing views' C_OPTIONS, C_PRE_OPTIONS, CPP_OPTIONS and CPP_PRE_OPTIONS context switches to the appropriate values.
Since your external library does not contain any C/C++ source code bodies that need to be compiled directly in the library's view, you need to tell Apex C/C++ not to bother trying to compile anything in that view. This is accomplished by setting the BUILD_POLICY switch of the external library view's context switches as follows:
BUILD_POLICY: external_library
If your external library contains one or more archive libraries, then you will want to set the Apex C/C++ link contribution switches. These only have to be set in the external library view's context switches. They will cause the archive libraries to be linked into any main program in any other view that has the external library view in its import closure. If the external library contains an archive library called libext.a, then the following switch setting can be used:
LINK_CONTRIBUTION_OPTIONS: -L<view> -Bstatic -lext
Other link contribution switches which may be relevant to your product include:
LINK_CONTRIBUTION_PRE_OPTIONS
LINK_CONTRIBUTION_LIBRARY
LINK_CONTRIBUTION_SHARED_PRE_OPTIONS
LINK_CONTRIBUTION_SHARED_OPTIONS
LINK_CONTRIBUTION_SHARED_LIBRARY
LINK_CONTRIBUTION_DEFAULT_MODE
LINK_DEPENDENCIES
Migration Tools Overview

The migration tools were created to assist in transforming your current development environment into the Apex environment and methodology, thereby gaining greater control and automation of your software project's source versions, architecture and build processes. However, migrating legacy code is far from a trivial task. It requires learning how to use the migration tools and studying your source code and development tools to determine how best to perform the migration. If there is just a small amount of code to migrate, then it may take longer to figure out how to use the migration tools than to do the work manually. On the other hand, the greater the amount of legacy code, the greater the benefit of the migration tools to automate much of the work.

However, more code often also implies greater complexity of your development environment. This complexity can introduce a number of special (unique or inconsistent) situations which the migration tools are not able to handle automatically, and thus may require some manual intervention to analyze, diagnose, and remedy. In many cases the migration tools may be able to deal with a special situation when given additional information. In other cases, the situation will have to be dealt with by other creative means.

Given the infinite number of ways in which you can structure your source code and perform your build processes, it is not possible to predict where the break-even point will be in order to determine if using the migration tools will pay off in the end. The only real way to know is to try it. Keep in mind, however, that using the migration tools can pay off in a big way if, after migrating the legacy code, you realize that you should have performed the migration differently; in that case, you can just rerun the migration tools with different inputs. In fact, many migration problems can be resolved before any files are actually created or changed.
You should start out by learning how the migration tools work. There are some simple demonstrations of their use included with the product (in $APEX_HOME/migration/demos). Study them, try them out and read the documentation to become aware of the various migration tool options. As mentioned before, you should have a good understanding of how your legacy source code is organized and how its programs are built.
The migration tools have two basic modes, "migrate by copy" and "migrate by reference". The biggest decision you have to make is to choose which mode to use.
In "migrate by copy" mode, the Apex subsystems and views are logically and physically separated from your original source directories and files. All migrated source files are copied into the desired views. This mode provides the greatest flexibility in terms of restructuring the source code to fit into the Apex subsystem/view model. The legacy source files can be copied into whichever Apex subsystems you desire. However, this mode may require source code changes (to #include directives) and usually requires that your build process (such as makefiles) be significantly (if not totally) redesigned. Furthermore, since files are copied, this mode typically makes it more difficult to keep your current development activities in sync with the new Apex scheme while the migration task is being performed and perfected. The two copies of the source files may have a tendency to diverge if not properly managed.
In "migrate by reference" mode, the Apex subsystems and views are still logically separate from your original sources, but, physically, the source directories and files are shared between the Apex views and your original source directories. Since the source files are shared, there is no need to physically copy them to the Apex views. The "migrate by reference" mode sacrifices a good deal of flexibility in exchange for preserving a higher degree of compatibility between the old and new development processes. Fewer, if any, source files will need to be modified and your original build process (makefiles) can be used as is. However, with this mode, Apex will not be able to automatically maintain your makefiles; that will have to be done the old fashioned way. Furthermore, the old legacy build tools (in particular, the C/C++ compiler) will continue to be used with this mode, as controlled by your original makefiles. Another restriction is that, if a given legacy source directory is migrated to a particular Apex subsystem, then all of its subdirectories must also be migrated to the same subsystem with the same directory tree structure. This is because the new Apex view is actually a symbolic link back to the original source code directory (a special use of Apex's view storage mechanism). A final (potentially disagreeable) consequence of migrating your legacy source files by reference is that Apex will need to create a few special purpose directories and files within your original source directories.
Some special situations which may complicate the task of migrating legacy code include:
- Having hard or symbolic links to files or directories within the legacy source directories.
- Having #include directives which reference parent directories or absolute pathnames as in these examples:
#include "../../dir/file.h"
#include "/product/common/file.h"
- Having source files with the same simple name migrated to the same subsystem/view (but to different subdirectories). This is usually only a problem with "migrate by copy" mode when using the standard Apex C/C++ models which flatten the header file name space in a view. Non-flattening models can be used to remedy this problem.
- Having source files whose names conflict with the few special purpose directories and files that Apex requires in all views.
In both modes, the migration tools can be used to create Apex subsystems and views, define the appropriate view import relationships (including mutual import relationships) and place source files under Summit/CM control.
Once you have determined which migration mode to use, you need to perform the various migration phases as described in the following sections.
Migrating to Apex Views by Reference

This section describes how to use the migration tools in "migrate by reference" mode to get your legacy source files into Apex C/C++ views. Before going through the details, you should consider examining some of the general issues with using these tools presented in Migration Tools Overview.
Note: When migrating your sources by reference, you should use the special Apex C/C++ "migrate" model for your views. This model has no makefiles of its own so you can still use your original makefiles.
The migration process defined below involves four tools used in the following five phases:
- 1 . Directory Migration Map Phase
With the assistance of the directory migration map tool, dirmig, you create a mapping of each of your original source directories onto some set of Apex subsystem/view directories. In "migrate by reference" mode, if a given source directory is mapped to a particular Apex view, then all of its subdirectories are mapped to the same view with the same subdirectory structure. Therefore, it is sufficient to identify which original source directories will serve as the single, top-level directory of each Apex view. The resulting "directory migration map" serves as input to the File Migration Map Phase.
- 2 . Makefile Analysis Phase
With the assistance of the makefile analysis tool, makemig, you can extract information, from your original makefiles, which can be used by the subsequent phases to resolve C/C++ #include directive file references. More specifically, this tool works much like the common make program to generate the command lines used to compile the C/C++ source files located in the directories to be migrated. These command lines often contain -I options which are useful in resolving the #include directives. The command lines generated as well as information identifying the source directories in which the command lines would be invoked serve as input to the next phase.
Note that this phase is optional since a similar result can often be achieved by using perfmig's -resolve_includes_by any_available_file option. Also, strictly speaking, include file dependencies do not have to be analyzed at all in order to determine view import relationships for "migrate by reference" mode. However, you are strongly encouraged to do so anyway since eventually you will want to make full use of Apex's architecture control features.
- 3 . File Migration Map Phase
With the assistance of the file migration map tool, filemig, you create a more detailed mapping indicating where each of your original source files is to be migrated into the Apex subsystems and how each file is to be treated. In "migrate by reference" mode, you don't really have much say in choosing where individual files will be migrated since all of the files in a migrated source directory get placed into the same Apex view directory. You can, however, control which files are placed under version control and which files are analyzed as C/C++ source files. The resulting "file migration map" serves as input to the next phase.
- 4 . Preview Migration Phase
With the perform migration tool, perfmig, you can test your file migration map by doing a dry run which attempts to simulate the execution of the commands that will be performed in the last phase. Every reasonable effort is taken in this phase to identify problems which may arise in actually performing the migration. The commands that would be executed in the last phase are displayed so that you can examine them for correctness.
- 5 . Perform Migration Phase
With perfmig, you attempt to actually carry out the migration as specified in the detailed file migration map. The functionality of this tool is broken down into the following three steps that can be carried out either individually or together in a single invocation of the tool:
- a . Subsystem Decomposition Step

This step creates the necessary subsystems and views. The subsysmig tool can also be used to perform this step.
- b . Architectural Control Step

During this step, C/C++ files are examined to identify any main programs and to analyze their #include directives; import relationships may be defined, and main programs may be registered. The archmig tool can also be used to perform this step.
- c . Version Control Step

This step places the desired source files under version control.
If, after actually migrating your source files to Apex views, you change your mind and want to undo everything, you can use the cleanmig tool. This tool will destroy all the Apex views, version control databases and subsystems that you created in the Perform Migration Phase while leaving your original source directories and files intact.
There is one additional tool called duprefmig. This tool is useful for duplicating a tower of views that have been previously migrated by reference using perfmig. It duplicates the views as well as the original source directories associated with them. The new views and source directories are structured like a collection of "migrate by reference" views. This arrangement makes it possible for concurrent development to take place in both the old, original source directories and their associated old views as well as in the new source directories and their associated new views.
Migrating to Apex Views by Copy
This section describes how to use the migration tools in "migrate by copy" mode to move your legacy source files into Apex C/C++ views. Before going through the details, you should consider examining some of the general issues with using these tools presented in Migration Tools Overview.
The migration process defined below involves four tools used in the following five phases:
- 1 . Directory Migration Map Phase
With the assistance of the directory migration map tool, dirmig, you create a mapping of each of your original source directories onto some set of Apex subsystem/view directories and subdirectories. Some source directories may be ignored, some may be converted into a top-level subsystem directory and some may be treated as a subdirectory within a subsystem. This "directory migration map" serves as input to the File Migration Map Phase.
- 2 . Makefile Analysis Phase
With the assistance of the makefile analysis tool, makemig, you can extract information from your original makefiles that can be used by the subsequent phases to resolve C/C++ #include directive file references. More specifically, this tool works much like the common make program to generate the command lines used to compile the C/C++ source files located in the directories to be migrated. These command lines often contain -I options which are useful in resolving the #include directives. The command lines generated, as well as information identifying the source directories in which they would be invoked, serve as input to the next phase.
Note that this phase is optional since a similar result can often be achieved by using perfmig's -resolve_includes_by any_available_file option.
- 3 . File Migration Map Phase
With the assistance of the file migration map tool, filemig, you create a more detailed mapping indicating where each of your original source files is to be migrated into the Apex subsystems and how each file is to be treated. Some files may be ignored, some may be put under version control, some may be identified as C/C++ source files which may be converted to simplify name resolution, and some may be merely copied into the destination subsystem directories. The resulting "file migration map" serves as input to the next phase.
- 4 . Preview Migration Phase
With the perform migration tool, perfmig, you can test your file migration map by doing a dry run which attempts to simulate the execution of the commands that will be performed in the last phase. Every reasonable effort is taken in this phase to identify problems which may arise in actually performing the migration. The commands that would be executed in the last phase are displayed so that you can examine them for correctness.
- 5 . Perform Migration Phase
With perfmig, you attempt to actually carry out the migration as specified in the detailed file migration map. The functionality of this tool is broken down into the following three steps that can be carried out either individually or together in a single invocation of the tool:
- a . Subsystem Decomposition Step
This step creates the necessary subsystems and views. The subsysmig tool can also be used to perform this step.
- b . Architectural Control Step
During this step, source files may be copied; C/C++ files are examined to identify any main programs and may have their #include directives converted; import relationships may be defined; and main programs may be registered.
The archmig tool can also be used to perform this step.
- c . Version Control Step
This step places the desired source files under version control.
If, after actually migrating your source files to Apex views, you change your mind and want to undo everything, you can use the cleanmig tool. This tool will destroy all the Apex views, version control databases and subsystems that you created in the Perform Migration Phase while leaving your original source directories and files intact.
Migration Tools
dirmig - compose a directory migration map
Syntax
dirmig {options}
Parameters
- -dir_exclusions directory_spec
Identifies those source directories which will have their mapping mode changed to "Excluded". directory_spec can contain the "*" or "?" wildcard characters with their normal shell interpretations. A list of directory specifications can be provided by surrounding the list in quotes and separating each spec with a space such as:
"*junk trash"
- -file_exclusions file_spec
Identifies those files which will not be considered in determining if a source directory should be mapped into a subsystem when in "Subsystem Tree" mapping mode. file_spec can contain a list of wildcarded specifications.
- -follow_dir_links directory_spec
Normally, when dirmig encounters a symbolic link to a directory via a relative pathname, it outputs a warning message to that effect and then ignores the link. This is because, most of the time, such links are merely duplicate references to actual directories that are also being migrated, and it would not be very useful to migrate the same physical directory more than once. This option tells dirmig not to ignore the symbolic links identified by the given directory_spec, which may contain a list of wildcarded specifications.
For example, suppose the source tree to be migrated, /root/src, contained the following "directories":
/root/src/dir              # Normal directory
/root/src/dir2 => ../dir   # Relative symbolic link to normal
                           # directory /root/src/dir
By default, dirmig would migrate /root/src/dir, but ignore /root/src/dir2. However, if the following option were provided:
-follow_dir_links dir2
then /root/src/dir2 would be migrated along with /root/src/dir.
In this case, the files in /root/src/dir2 would become duplicates of the files in /root/src/dir. That is, the link would effectively be broken.
- -map_single_ss src dest
Indicates that the source directory src will be mapped to the destination directory dest and its mapping mode will be set to "Single Subsystem". Both directories must be full pathnames. The last name in the dest path must identify a subsystem with the .ss extension. The view directory name must not be included in dest. Thus, the options

-map_single_ss /src/base /apex/base.ss
-map_single_ss /src/idle/junk /apex/idle/junk.ss

are valid, whereas the options

-map_single_ss /src/acts /apex/acts
-map_single_ss /src/card/date /apex/card.ss/date
-map_single_ss /src/exam/free /apex/exam.ss/free.ss

are not.
- -map_source_dir src dest
Indicates that the source directory src will be mapped to the destination directory dest and its mapping mode will be set to "Source Directory". Both directories must be full pathnames. The last name in the dest path must identify a subdirectory within a subsystem, excluding the view directory. The subsystem must be indicated with the .ss extension in dest. Thus, the options

-map_source_dir /src/card/date /apex/card.ss/date
-map_source_dir /src/keys/last/main /apex/keys/last.ss/main

are valid.
This option is not supported in "migrate by reference" mode.
- -map_ss_tree src dest
Indicates that the source directory src will be mapped to the destination directory dest and its mapping mode will be set to "Subsystem Tree". Both directories must be full pathnames. dest must NOT identify a subsystem explicitly with the .ss extension even though dest may be mapped to a subsystem if src matches the "Subsystem Tree" criteria. Thus, the options

-map_ss_tree /src/acts /apex/acts
-map_ss_tree /src/game/high /apex/game/high

are valid, whereas the options

-map_ss_tree /src/base /apex/base.ss
-map_ss_tree /src/card/date /apex/card.ss/date
-map_ss_tree /src/exam/free /apex/exam.ss/free.ss

are not.
- -migrate_by ( copy | reference )
Specifies which type of migration is desired. If -migrate_by copy is given, then the Apex subsystem and views are logically and physically separate from the original source directories and files. If -migrate_by reference is given, then the Apex subsystem and views are still logically separate from the original sources, but they share the same physical file space. The value of this option is written to the directory migration map file so that it does not have to be specified again when filemig is used.
Since, in "migrate by reference" mode, the original source directories are used as the view directories, there is considerably less flexibility in mapping source directories to Apex subsystems/views and their subdirectories than is available in "migrate by copy" mode. The loss of flexibility, however, allows the migration to proceed more quickly, and enables Apex to use the original makefiles to build the user's programs. It also permits the continued use of the user's original software development model and the new Apex model, in parallel, without having to worry about diverging source files.
- -outmap map_file
Write directory migration map file to map_file.
- -rename
Indicates that if a subsystem is detected whose simple name matches that of a previously mapped subsystem, then it should be renamed to some other unique name. The name generated is a function of the subsystem's full pathname. For example, without this option, the following directory migration map might be generated:
-map /src/game/test /apex/game/test.ss
-map /src/tool/test /apex/tool/test.ss  ##Dup
With this option, the map would look like this:
-map /src/game/test /apex/game/test.ss
-map /src/tool/test /apex/tool/tool_test.ss  ##Ren
- -single_ss directory_spec
Identifies those source directories which will have their mapping mode changed to "Single Subsystem". It does not change the source directories' destination directories, which can only be done with one of the -map options. directory_spec can contain a list of wildcarded specifications.
- -source_dir directory_spec
Identifies those source directories which will have their mapping mode changed to "Source Directory". It does not change the source directories' destination directories, which can only be done with one of the -map options. directory_spec can contain a list of wildcarded specifications.
This option is not supported in "migrate by reference" mode.
- -ss_tree directory_spec
Identifies those source directories which will have their mapping mode changed to "Subsystem Tree". It does not change the source directories' destination directories, which can only be done with one of the -map options. directory_spec can contain a list of wildcarded specifications.
This option is not supported in "migrate by reference" mode.
- -tabs integer
Indicates, in the output directory migration map, at what tab stop the destination directory will be positioned to make the map easier to read. If the source directory extends beyond the specified tab stop, then only a single space will be output between the two directories.
Description
The general purpose of dirmig is to compose a migration mapping between a set of source directories and corresponding Apex subsystem/view directories. Dirmig does not actually modify any files; it merely generates a "directory mapping" which serves as input to the next stage of migration, filemig. The directory map may be manually changed before passing it to filemig, but such alterations should be kept to a minimum, since it may take several attempts, and therefore several regenerations of the map, to arrive at the ideal migration map. The resultant directory map is written to standard output unless overridden with the -outmap option.
In "migrate by copy" mode, the correspondence between source directories and Apex directories can be one-to-none, one-to-one or many-to-one, but not one-to-many. The overall structure of the subsystem directories need not match that of the source directories.
In "migrate by reference" mode, the correspondence between directories must be either one-to-none or one-to-one since the overall directory structure is preserved.
The source directories to be migrated and their associated destination subsystem directories are given by the -map_single_ss, -map_source_dir, and -map_ss_tree options, of which there should be at least one. In addition, these options indicate how the source directories should be migrated as given by the "mapping mode". In general, the given source directory and all of its descendant subdirectories are migrated into the corresponding destination directory, preserving the original subdirectory tree structure. Each such subdirectory normally "inherits" the mapping mode of its parent directory.
There are actually four source directory mapping modes:
Single Subsystem
A source directory whose mapping mode is "Single Subsystem" is migrated to its own top-level subsystem. The mapping modes of its subdirectories, however, are implicitly set to "Source Directory". Thus, the given source directory and all of its subdirectories are migrated to a single subsystem with a similar internal subdirectory tree structure.
Source Directory
A source directory whose mapping mode is "Source Directory" is migrated to some subdirectory of a subsystem. The mapping modes of its subdirectories are also implicitly set to "Source Directory". Thus, the given source directory and all of its subdirectories are migrated to some subdirectory of a subsystem with a similar internal subdirectory tree structure.
Subsystem Tree
A source directory whose mapping mode is "Subsystem Tree" may or may not be migrated to its own top-level subsystem depending on what data files are in the directory. If there are any files in the directory other than those that match the file specifications given by any -file_exclusions options, then the source directory is migrated to its own subsystem. Otherwise, the source directory is migrated to a plain directory in the destination. In either case, the mapping modes of its subdirectories are implicitly set to "Subsystem Tree" as well. Thus, the source directory and its subdirectories may be migrated to one or more separate subsystems. This mapping mode is particularly useful when the source directory tree has many "intermediate node" high-level subdirectories that impose an organization on a collection of "leaf node" low-level subdirectories which contain mainly data files and few, if any, subdirectories. If a source directory with this mapping mode is not migrated to a subsystem, an entry for it is still generated in the output directory migration map, but it is commented out with the prefix "#N" to indicate that it has no useful files.
Excluded
A source directory whose mapping mode is "Excluded" is not migrated. The mapping modes of its subdirectories are also set to "Excluded". An entry for an excluded directory is still generated in the output directory migration map, but it is commented out with the prefix "#E".
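The inheritance rules and the "Subsystem Tree" decision described above can be summarized in a short sketch. The function names are illustrative only; this is not dirmig's actual implementation:

```python
import fnmatch

def implicit_child_mode(parent_mode):
    """Mapping mode a subdirectory inherits from its parent,
    per the four modes described above."""
    return {
        "Single Subsystem": "Source Directory",  # children become plain dirs
        "Source Directory": "Source Directory",
        "Subsystem Tree": "Subsystem Tree",      # each child is a candidate
        "Excluded": "Excluded",
    }[parent_mode]

def becomes_subsystem(filenames, exclusion_specs):
    """Subsystem Tree decision: a directory becomes its own subsystem
    only if it holds files beyond those matched by -file_exclusions."""
    return any(
        not any(fnmatch.fnmatch(f, spec) for spec in exclusion_specs)
        for f in filenames)

print(implicit_child_mode("Single Subsystem"))   # Source Directory
print(becomes_subsystem(["main.c"], ["*.log"]))  # True
print(becomes_subsystem(["run.log"], ["*.log"])) # False
```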
As mentioned previously, the -map_single_ss, -map_source_dir, and -map_ss_tree options explicitly define the destination and mapping mode for one or more source directories. This mapping mode implicitly defines the mapping mode of any subdirectories of the source directories. Other options (-single_ss, -source_dir, -ss_tree, and -dir_exclusions) can be used to explicitly override any (and only) implicitly defined mapping modes.
These options can change only the mapping mode and not a source directory's destination directory. Each of these options provides a directory specification, which may identify a particular directory name or may include the wildcard characters "*" or "?" to refer to a collection of directories. Furthermore, the specifications may be relative pathnames or full pathnames. Full pathnames have to match the full pathname of a source subdirectory, whereas relative pathnames only have to match the "tail end" of a subdirectory's full pathname. If more than one of these mapping mode changing options is given, then the order in which they appear (left to right) is the order in which they will be compared to candidate subdirectory pathnames. The first specification that matches will indicate the new mapping mode.
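The matching rule just described can be sketched as follows: a full-pathname spec must match a directory's entire pathname, a relative spec need only match its tail, and the first match (left to right) wins. This is illustrative only, not dirmig's actual code:

```python
import fnmatch

def new_mode(dir_path, specs):
    """specs: list of (directory_spec, mode) pairs in command-line order."""
    for spec, mode in specs:
        if spec.startswith("/"):
            # Full pathname: must match the whole directory path.
            if fnmatch.fnmatch(dir_path, spec):
                return mode
        else:
            # Relative spec: compare against the tail of the pathname,
            # using as many components as the spec itself has.
            tail = "/".join(dir_path.split("/")[-len(spec.split("/")):])
            if fnmatch.fnmatch(tail, spec):
                return mode
    return None  # no override; the inherited mode stands

specs = [("*junk", "Excluded"), ("/src/lib/*", "Source Directory")]
print(new_mode("/src/old/junk", specs))  # Excluded
print(new_mode("/src/lib/io", specs))    # Source Directory
```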
In the course of generating a directory migration map, it is possible to define two or more different subsystems with the same simple name (such as /a/x.ss and /a/b/x.ss). Since these situations may potentially cause name resolution conflicts under Apex, these entries in the directory map are flagged with the trailing comment "##Dup". Users are encouraged to change the names of these subsystems if there is any chance that they will be imported by the same subsystem. Alternatively, the -rename option can be specified, which causes dirmig to automatically generate unique names for all subsystems.
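One way to derive a unique name from the subsystem's full pathname, consistent with the tool_test.ss example above (the parent directory name is prefixed), can be sketched as follows. dirmig's actual naming scheme is not documented here, so treat this as a guess at the idea rather than the tool's algorithm:

```python
def unique_name(dest_path, taken):
    """Return a subsystem simple name not already in `taken`,
    derived from the subsystem's full destination pathname."""
    parts = dest_path.rstrip("/").split("/")
    simple = parts[-1]                 # e.g. "test.ss"
    if simple not in taken:
        return simple
    return parts[-2] + "_" + simple    # e.g. "tool_test.ss"

print(unique_name("/apex/tool/test.ss", {"test.ss"}))  # tool_test.ss
print(unique_name("/apex/game/test.ss", set()))        # test.ss
```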
makemig - analyze makefiles for migration-relevant information
Syntax
makemig {options} {target} {macro=value}

Options
- -A
Causes all possible targets to be brought up to date.
- -C directory
Causes the current working directory to be changed to directory before processing the makefiles.
- -d
Display the reasons why make chooses to rebuild a target.
- -D
Display the text of the makefiles read in.
- -e
Environment variables override assignments within makefiles.
- -f makefile
Read the make rules from makefile. Multiple uses of this option cause concatenation of the makefiles, in the order given.
- -F
Display elapsed times of analysis phases.
- -i
Ignore error codes returned by commands.
- -I directory
Add directory to the list of include paths that are used to resolve references to make include file directives.
- -n
No execution mode. Print commands, but do not execute them. Even lines beginning with an @ are printed. However, if a command line contains a reference to the $(MAKE) macro, that line is always executed.
- -P
Prints out some timing statistics.
- -r
Display the make rules from the makefiles read in.
- -s
Silent mode. Do not print command lines before executing them.
- -T
Treat all dependencies as if they are out of date with their target file.
- -U
Invoke an interactive makefile debugger (Unimplemented).
- -V
Invoke makefile debugger in visual mode (Unimplemented).
- -w
Prints out the current working directory whenever entering or leaving a nested invocation of make.
The following options are currently accepted, but ignored, by makemig:
-b, -k, -m, -o file, -p, -R resume, -q, -t, -v, -W file.
Description
Makemig is a significant, but incomplete, reimplementation of the popular automated software development tool, make. It processes makefiles much like the real make tool and accepts the same options and arguments. Although it does not exactly reproduce all the behaviors of make, makemig should be able to accept any makefile and generate all the make commands (rules) that are significant to the migration process. Please see the documentation on the real make program for more detailed information on the operation of makemig.
Enhancements have been added to makemig to enable it to extract information about the build process that is useful to the migration process. In particular, its output is used by filemig to determine what include paths (-I options) are used to compile the various C/C++ source files that are being migrated. To do this, makemig must be run on all the makefiles used to build the original source programs. In addition, the makemig options -n -T -w must be specified and the resultant output captured in a file. This output is then passed on to filemig via its -use_make_output option.
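The kind of extraction described above can be sketched as follows: scan the captured -n -w output for compile lines and collect their -I paths, keyed by the directory make was operating in. The input format shown is a typical make "Entering directory" trace; makemig's real parsing is more thorough than this illustration:

```python
import re

def include_paths(make_output):
    """Map each build directory to the -I paths used on its compile lines."""
    paths, cwd = {}, "."
    for line in make_output.splitlines():
        m = re.match(r".*make.*: Entering directory [`']([^']+)'", line)
        if m:
            cwd = m.group(1)          # track where make is working
            continue
        if re.match(r"\s*(cc|gcc|g\+\+|CC)\b", line):
            opts = re.findall(r"-I\s*(\S+)", line)
            paths.setdefault(cwd, []).extend(opts)
    return paths

out = """make[1]: Entering directory `/src/test'
cc -I../include -I/opt/scm -c main.c
"""
print(include_paths(out))  # {'/src/test': ['../include', '/opt/scm']}
```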
filemig - define a full file migration map
Syntax
filemig {options}
Options
- -control file_spec
Identify data files which will be copied to destination subsystems and placed under version control. file_spec can contain the "*" or "?" wildcard characters with their normal shell interpretations. A list of file specifications can be provided by surrounding the list in quotes and separating each specification with a space, such as: "*.h *.c".
- -control_c file_spec
Identify data files which will be copied to destination subsystems, placed under version control and processed as C or C++ source files when necessary. file_spec can contain a list of wildcarded specifications.
- -history version_history
Put all controlled files under the given version history. This defines the global default value of the HISTORY: attribute.
Default: whatever the view's default history is
- -ignore file_spec
Identify data files in the source directories which will be ignored (that is, neither copied nor controlled). file_spec can contain a list of wildcarded specifications. Such files are still listed in the output file migration map unless suppressed by the -omit_ignored option.
- -include_paths list_of_source_directory_pathnames
Specifies a list of source directories in which to search for C or C++ include files identified by #include directives. This is only useful if the perfmig tool is to be used later to generate the import relationships between the various subsystems or if perfmig is to convert the #include directives within C/C++ source files. This defines the global default value of the INCLUDE_PATHS: attribute.
At migration time (when using perfmig), any relative pathnames in an #include directive will be evaluated relative to the directory in which a source file resides. The include directories are searched in the order given by the list. Multiple directories can be specified by enclosing the list in quotes and separating the directories with spaces. As defined by C/C++, for #include "file" directives, the current working directory, ".", is always assumed to be at the head of the list, whether specified explicitly or not. For #include "file" and #include <file> directives, the "built-in" directories (as specified by perfmig's -builtin_includes option), such as /usr/include, are always assumed to be at the end of the list, whether specified explicitly or not.
This list should be identical to the list of -I options given to the C/C++ compiler when a source file was compiled in its original source directory. See the -use_make_output option to find out how to automate this process if makefiles are used to compile the source files.
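The search order stated above can be summarized in a small sketch: "." comes first for quoted includes, then the -include_paths list, then the built-in directories. Illustrative only; perfmig's resolution also handles relative-pathname evaluation and other details:

```python
def search_order(include_paths, builtin, quoted):
    """Directories searched, in order, when resolving an #include.
    quoted=True models #include "file"; quoted=False models #include <file>."""
    order = ["."] if quoted else []   # "." is implicit for quoted includes
    return order + list(include_paths) + list(builtin)

print(search_order(["../include", "/opt/scm"], ["/usr/include"], quoted=True))
# ['.', '../include', '/opt/scm', '/usr/include']
```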
- -map src_dir subsystem
Specifies that the files in source directory src_dir will be migrated into the subsystem directory subsystem. Both directories must be identified by full pathnames. subsystem must identify the subsystem's top-level directory (with the .ss extension), and may also select a subdirectory within the top-level. In no case, however, should the view directory be included since that is specified by the -view option. A given src_dir can appear in only one -map option.
For example, the options

-view myview.wrk
-map /src/test /apex/test.ss
-map /src/test/tools /apex/test.ss/tools
indicate that the files in /src/test should be migrated to /apex/test.ss/myview.wrk and the files in /src/test/tools should be migrated to /apex/test.ss/myview.wrk/tools.
- -map_file src_dir_spec dst_spec
This option is not supported in "migrate by reference" mode.
This option is used to modify the destination of a collection of source files. The destination file can be given a different name from that of the corresponding source file or a source file can be migrated to a destination directory that is different from that specified by the -map option for its parent source directory. In either case, however, a source file cannot be migrated to a different subsystem from that specified by its parent's -map option. For this reason, dst_spec must be a relative pathname. The src_dir_spec argument can be either a full or relative pathname and can contain wildcard characters to select a group of files.
The -map_file option can take any one of the following forms:
1. -map_file <src file spec> <dst file spec>
2. -map_file <src file spec> <dst dir spec>/
3. -map_file <src file spec> .ss/<dst file spec>
4. -map_file <src file spec> .ss/<dst dir spec>/
The first form indicates that the source file should be migrated to a specific destination file located relative to the source file's parent's destination directory. The second form indicates that the source file should be migrated to a destination file of the same name located in a directory relative to the source file's parent's destination directory. The third form indicates that the source file should be migrated to a specific destination file located relative to the source file's parent's destination subsystem/view. The final form indicates that the source file should be migrated to a destination file of the same name located in a directory relative to the source file's parent's destination subsystem/view.
There can be at most one source file mapped to a particular destination file.
For example, if the following source files exist:
/r/test/set/data
/r/test/set/jobs
/r/test/set/plan
/r/test/set/ship.old
/r/test/set/train.old
and the following options are given:

-map /r/test/set /apex/test.ss/set
-map_file data data.old
-map_file /r/test/set/jobs qa/reg/
-map_file plan .ss/plan2
-map_file *.old .ss/old/
-view m.wrk
then the following file mappings are defined:
/r/test/set/data      => /apex/test.ss/m.wrk/set/data.old
/r/test/set/jobs      => /apex/test.ss/m.wrk/set/qa/reg/jobs
/r/test/set/plan      => /apex/test.ss/m.wrk/plan2
/r/test/set/ship.old  => /apex/test.ss/m.wrk/old/ship.old
/r/test/set/train.old => /apex/test.ss/m.wrk/old/train.old
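The four destination forms can be sketched as a small resolver that reproduces the mappings above, assuming the parent directory /r/test/set maps to /apex/test.ss/set in view m.wrk. Illustrative only, not perfmig's code:

```python
import posixpath

def resolve(dst_spec, parent_dest, view_root, src_name):
    """parent_dest: the parent source directory's destination in the view;
    view_root: the subsystem/view directory (the ".ss/" anchor)."""
    if dst_spec.startswith(".ss/"):
        base, spec = view_root, dst_spec[len(".ss/"):]   # forms 3 and 4
    else:
        base, spec = parent_dest, dst_spec               # forms 1 and 2
    if spec.endswith("/"):
        return posixpath.join(base, spec, src_name)      # dir form: keep name
    return posixpath.join(base, spec)                    # file form: rename

view = "/apex/test.ss/m.wrk"
parent = view + "/set"
print(resolve("data.old", parent, view, "data"))
print(resolve("qa/reg/", parent, view, "jobs"))
print(resolve(".ss/plan2", parent, view, "plan"))
print(resolve(".ss/old/", parent, view, "ship.old"))
```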
A particularly useful purpose for this option is to rename any makefiles that exist in the source directories since Apex creates its own makefiles. Such an option is:
-map_file Makefile Makefile.conflict
This and other related conflict avoiding file mappings can be found in the file conflicts.opts included with the product. Use the -options option to add them to the list.
- -migrate_by ( copy | reference )
Specifies which type of migration is desired. If -migrate_by copy is given, then the Apex subsystem and views are logically and physically separate from the original source directories and files. If -migrate_by reference is given, then the Apex subsystem and views are still logically separate from the original sources, but they share the same physical source file space.
If the dirmig tool is used, this option should not be specified, since dirmig automatically provides the proper option value in the generated directory migration map.
Whether or not this option is used, its value is written out at the beginning of the file migration map in a MIGRATE_BY: directive for the benefit of the perfmig tool.
- -model view
Name of view to use as the model of newly created views. view must specify a full pathname. This defines the global default value of the MODEL: attribute.
Default: value of the APEX_DEFAULT_MODEL session switch
- -no_dup_file_errors
Do not generate errors when source or header files with the same simple name are being migrated into the same subsystem (but to different subdirectories). Duplicate file names cause errors in "migrate by copy" mode because the standard Apex models flatten the name space in views. If non-flattening models are being used, duplicate simple names may be used, provided this option is set.
Default: generate error when duplicate names are used
- -no_link_warnings
Do not display the messages that warn when two migrated source files appear to be links to the same physical file.
Normally, filemig tries to identify if a given source file is a link to some other source file (either via a hard link or a symbolic link) so that both source files reference the same physical file. If filemig detects such a situation, it displays a warning message so that the user can decide if both references of the file should be migrated or if one of the references should be ignored. If both references are migrated in "migrate by copy" mode, then two copies of the file will be made, effectively breaking the link. If both are migrated in "migrate by reference" mode and both files are controlled, then potential confusion may result when new versions of the controlled files are created. For these reasons, it is strongly recommended that only one reference to a given physical file be migrated.
Default: display link warnings
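The link detection just described rests on a standard POSIX fact: two pathnames refer to the same physical file when they share a device and inode number. A minimal sketch, not filemig's actual code:

```python
import os
import tempfile

def same_physical_file(a, b):
    """True if pathnames a and b reference the same physical file.
    os.stat follows symbolic links, so this catches both link kinds."""
    sa, sb = os.stat(a), os.stat(b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

# Demonstrate with a temporary file and a hard link to it.
d = tempfile.mkdtemp()
orig = os.path.join(d, "data")
open(orig, "w").close()
link = os.path.join(d, "data2")
os.link(orig, link)
print(same_physical_file(orig, link))  # True
```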
- -omit_ignored
Do not include the list of ignored files in the migration map.
Default: list any ignored files in the map
- -options options_file
Read command line options from the given file. This option is typically used to read in the directory migration map generated by dirmig. In an options file, the "#" character marks the beginning of a comment which is terminated by the end of line character (newline).
- -outmap map_file
Write detailed file migration map to map_file.
- -ss_storage directory
Location in which to actually store a subsystem's data files (such as its version control information). The data files in all the views within the subsystem are also stored here unless the views have their own view storage defined. This defines the global default value of the SS_STORAGE: attribute.
Default: store files in same file system as subsystem's parent directory.
- -set_attr obj attr_name attr_value
Set the value of attribute attr_name over the scope of object obj to attr_value.
As discussed more thoroughly in the section on the perfmig tool, a file migration map identifies a number of objects and associates with these objects a number of attributes which apply over a particular scope of the map. The filemig tool has a number of options (namely -history, -include_paths, -model, -ss_storage, -view and -view_storage) for defining the values of the global attributes whose scope encompasses the entire file migration map. This option provides additional flexibility in specifying the values of attributes associated with the scope of a subsystem, source directory or source file. That is, it provides the ability to set the values of the attributes for these types of objects. The object parameter, obj, must be given in one of these forms:
1. subsystem.ss
2. src_dir_spec/
3. src_file_spec
The first form is used to refer to subsystem objects and must include the subsystem .ss extension. Attributes associated with this object are placed immediately after the subsystem's respective SUBSYSTEM: directive. The second form refers to source directory objects and must have a trailing slash, "/". Attributes associated with this object are placed immediately after the source directory's SOURCE_DIRECTORY: directive. The final form refers to source file objects. Attributes associated with this object are placed right after the respective file's file directive. All object forms can be relative or full pathnames and can include the wildcard characters, "*" and "?". Thus, a single -set_attr option can specify the attributes for a collection of objects.
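The three object forms reduce to a simple classification, sketched here for illustration (not filemig's actual code): a ".ss" suffix names a subsystem, a trailing "/" names a source directory, and anything else names a source file.

```python
def object_kind(obj):
    """Classify a -set_attr object argument by its form."""
    if obj.endswith(".ss"):
        return "subsystem"
    if obj.endswith("/"):
        return "source directory"
    return "source file"

print(object_kind("/apex/test.ss"))     # subsystem
print(object_kind("/src/test/"))        # source directory
print(object_kind("/src/test/port.c"))  # source file
```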
The attribute name, attr_name, must be one of the following with the indicated restrictions:
HISTORY:
INCLUDE_PATHS:
MODEL: (subsystem objects only)
SS_STORAGE: (subsystem objects only)
VIEW: (subsystem objects only)
VIEW_STORAGE: (subsystem objects only)
The attribute value, attr_value, is one or more parameters as long as they do not begin with the option introducer character, "-".
To set the view of the /apex/test.ss subsystem to proj.wrk:
-set_attr /apex/test.ss VIEW: proj.wrk
To set the include paths of the source directory /src/test to ../include /opt/scm:
-set_attr /src/test/ INCLUDE_PATHS: ../include /opt/scm
To set the version history of the source file /src/test/port.c to sun4:
-set_attr /src/test/port.c HISTORY: sun4
- -uncontrol file_spec
Identify data files which will be copied to destination subsystems and NOT placed under version control. This is the default behavior for all files unless overridden by some other option. file_spec can contain a list of wildcarded specifications.
- -uncontrol_c file_spec
Identify data files which will be copied to destination subsystems, NOT placed under version control and processed as C or C++ source files when necessary. file_spec can contain a list of wildcarded specifications.
- -use_make_output make_output_file
Read the make_output_file (created by running makemig with the -n -T -w options on the original source's makefiles) to get the include paths that were used for each source file. Place the include path information found in the generated file migration map.
- -view view_simplename
Put sources within the specified view. The view is always treated as a working view. This defines the global default value of the VIEW: attribute.
- -view_storage directory
Location to actually store the view's data files. This defines the global default value of the VIEW_STORAGE: attribute. This attribute is ignored in "migrate by reference" mode.
Description
The general purpose of filemig is to define a migration mapping between the data files in a set of source directories and corresponding files and directories within Apex subsystems. A given file can be mapped to at most one destination. Filemig does not actually modify any files; it merely generates a "full file mapping" which serves as input to the next stage of migration, perfmig. The file map may be manually changed before passing it to perfmig, but such alterations should be kept to a minimum, since it may take several attempts to arrive at the ideal migration map and any manual changes would have to be reapplied after each regeneration. The resultant file map is written to standard output unless overridden with the -outmap option.
The source directories and associated files to be migrated are identified by one or more instances of the -map or -map_file options. The -map options used are typically generated by the dirmig tool and provided to filemig via the -options option. A -map option takes one of two forms:
-map src_dir1 ss1
-map src_dir2 ss2/sub_dir

For example:

-map /src/test /apex/test.ss
-map /src/test/tools /apex/test.ss/tools
-map /src/test/tools/data /apex/test.ss/tools/data
The first form indicates that the data files in the source directory src_dir1 should be migrated into the Apex subsystem ss1 at its top-level directory. The second form indicates that the data files in source directory src_dir2 should be migrated into the subdirectory sub_dir of the Apex subsystem ss2. Notice that the view directories are not explicitly included in the destinations. The view is specified with the -view option and is automatically inserted by the migration tools when it is needed.
The -control, -control_c, -uncontrol, -uncontrol_c, and -ignore options specify how the files in a given source directory are to be treated during migration. Each of these options provides a file specification which may identify a particular file name or may include the wildcard characters "*" or "?" to refer to a collection of files. Furthermore, the specifications may be relative pathnames or full pathnames. Full pathnames have to match the full pathname of a source file, whereas relative pathnames only have to match the "tail end" of a file's full pathname. If more than one of these file treatment options is given, then the order in which they appear (left to right) is the order in which they will be compared to candidate file pathnames. The first specification that matches determines how the file is treated.

A source file matched by a -control option is copied to its destination subsystem directory and put under version control, whereas a match by an -uncontrol option only copies the file. The -control_c and -uncontrol_c options are identical to the -control and -uncontrol options, respectively, except that the file is also identified as a C or C++ source code file, which may cause it to be processed in a special manner depending on the use of the perfmig options -no_imports, -include_scheme, and -register_main_programs. A file matched by an -ignore option is not migrated, but it is listed in the full file migration map unless the -omit_ignored option indicates otherwise.
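The first-match rule above can be sketched in Python (a hypothetical classify helper, not part of the Apex toolset), assuming fnmatch-style wildcard matching and treating a leading "/" as the mark of a full-pathname specification:

```python
from fnmatch import fnmatch

def classify(path, specs):
    """Return the treatment of the first specification matching `path`.

    `specs` is an ordered list of (treatment, pattern) pairs taken from
    the -control/-control_c/-uncontrol/-uncontrol_c/-ignore options,
    left to right.  A pattern beginning with "/" must match the full
    pathname; any other pattern only has to match the tail end of it.
    """
    for treatment, pattern in specs:
        if pattern.startswith("/"):
            if fnmatch(path, pattern):
                return treatment
        else:
            # Compare against the same number of trailing path components.
            tail = "/".join(path.split("/")[-len(pattern.split("/")):])
            if fnmatch(tail, pattern):
                return treatment
    return "uncontrol"  # default treatment for unmatched files

specs = [("ignore", "*.o"),
         ("control_c", "*.c"),
         ("control", "/src/test/Makefile")]
print(classify("/src/test/main.c", specs))    # -> control_c
print(classify("/src/test/main.o", specs))    # -> ignore
print(classify("/src/test/README", specs))    # -> uncontrol
```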
Finally, the -history, -include_paths, -model, -ss_storage, -view, and -view_storage options specify the initial global default values of various Apex subsystem, view, source directory and file object attributes. These attributes can also be defined for specific objects with the -set_attr option.
See the perfmig command for a description of the file migration map.
perfmig - test and perform migration steps
Syntax
perfmig {options}
Options
- -builtin_includes list_of_source_directory_pathnames
Specifies a list of source directories which contain the "built-in" include files for a given C/C++ compiler. In this context, "built-in" refers to any include file that a C/C++ compiler can find without having to provide a -I option to the compiler or without having to provide the full pathname to the include file in the #include directive. This typically refers to the "standard" header files that a C/C++ compiler supports. The directories may in fact be built into the compiler such as is often the case with the /usr/include directory, or they may be provided to the compiler through some environment variable, or they may be given to the compiler through an intermediate shell script.
This option is only useful if the -no_imports option is not specified or the -include_scheme option is specified. This option suppresses the warning messages that perfmig generates when it encounters an include file that it cannot locate using the information provided by the INCLUDE_PATHS: attributes, because the file is implicitly recognized by the C/C++ compiler. Any subdirectories of the specified source directories are also treated as built-in. The source directories must be full pathnames. Multiple uses of this option are additive.
Note that "built-in" include directories are automatically treated as external directories. See the -external_includes option.
Default: the include directory /usr/include is always automatically appended to the end of the list of values explicitly specified for this option unless the -no_usr_include option is also given. Therefore, to cause perfmig to use the built-in directories /CC_compiler/include and /usr/include, do the following:

-builtin_includes /CC_compiler/include
-config_file config_file If specified, build in config_file a configuration file containing the full pathnames of all the views created by perfmig. This file is used by perfmig to define the Apex import dependencies and can also be used by various other Apex commands, such as the show_status and compile commands.
Default: Description.cfg, unless the -no_imports option is given, in which case no configuration file is built.
-confirm Must be specified to actually cause changes to any files.
Default: make no changes to files
-external_includes list_of_source_directory_pathnames Specifies a list of source directories which contain C or C++ include files that are not intended to be migrated to an Apex subsystem but are still referenced by migrated source code files in an #include directive. This option is only useful if the -no_imports option is not specified or the -include_scheme option is specified. Its purpose is to suppress the warning messages perfmig generates when it encounters an include file that is not migrated to an Apex subsystem. Any subdirectories of the specified source directories are also treated as external. The source directories must be full pathnames. Multiple uses of this option are additive.
Note that "built-in" include directories are automatically treated as external directories and do not have to be explicitly specified as such. See the -builtin_includes option.
-ignore_includes included_file_spec including_file_spec Do not generate any warning messages for missing include files that match the included_file_spec within C/C++ source files that match the including_file_spec. The included_file_spec and including_file_spec can contain the wildcard characters "*" or "?" to refer to a collection of files. For example:
-ignore_includes xyz.h /src/tools/change.cpp
will not report a warning if, while analyzing the C++ source file /src/tools/change.cpp, an #include "xyz.h" or #include <xyz.h> directive is found but the xyz.h file cannot be located. Other possible examples using wildcards are:
-ignore_includes strange.h *.c
-ignore_includes tmod.h tools/*.h
-ignore_includes * /src/test/*.c*
-ignore_includes mt/mod.h *
-ignore_includes *.hxx *
- -ignore_standard_includes
Do not display a warning message if an include directive in the form #include <file> is found (as opposed to the #include "file" form), but the included file (file) cannot be located. This form of include directive is typically (but not always) used to identify standard C/C++ include files that are provided by the compiler.
Default: display missing standard include file warning messages
- -include_scheme ( none | vdf_general | vdf_specific | update_includes )
The update_includes value for this option is not supported in "migrate by reference" mode.
This option specifies what scheme Apex will use to resolve #include directives in C/C++ source files, after the source files have been migrated.
If none is specified, perfmig will do nothing and the user will have to fix any #include problems manually that might arise.
If update_includes is specified, #include directives in C/C++ source files are modified to correctly identify the included files if the files have been relocated during the migration process. This uses the conventional Apex C/C++ name resolution scheme where a reference to an include file, say c.h, in subsystem x.ss appears as #include "x/c.h" in a C/C++ source file. This include file naming convention provides a simple, effective and easy to maintain scheme which is well adapted to the Apex subsystem/view structure.
If vdf_specific or vdf_general is specified, no source files will be modified, but Apex's Visibility Description File feature will be used to resolve #include file directives. These values can be used in either "migrate by copy" or "migrate by reference" mode; however, they only affect the ability to compile source files with the Apex C/C++ compiler in "migrate by copy" mode.
The Visibility Description File feature places the following limitations on the types of #include file references supported:
- a. Full pathnames cannot be used to reference an include file within any view. For example:
#include "/usr/fred/inc/file.h"
- b. In general, the ".." notation cannot be used to refer to an include file in a different view. For example:
#include "../../fred/inc/file.h"
- c. Within a given view, the same relative include file directory path cannot be used to refer to different actual directories for different #include directives. For example:
#include "inc/file1.h"
#include "inc/file2.h"
This pair of #include file references is not supported if they are referenced by any source files within a given view either directly or indirectly (not necessarily within the same source file) and "inc/file1.h" is in a different directory from "inc/file2.h".
The differences between the option values vdf_specific and vdf_general have to do with the types of visibility description file entries they generate.
With vdf_specific, very specific entries will be produced that refer to particular subsystems and subdirectories within them. With vdf_general, an attempt is made to reduce the number of entries by using subsystem wildcards. For example, the following entries might be generated with vdf_specific:
/root/apex/fff.ss
/root/apex/fff.ss subdir1
/root/apex/xxx.ss
/root/apex/xxx.ss subdir2
/root/apex/xxx.ss subdir3
An entry such as "/root/apex/fff.ss subdir1" means use the C/C++ compiler option "-I/root/apex/fff.ss/view/subdir1", where view represents the actual simple view name.
With vdf_general, all the above entries will usually be reduced to one or both of the following entries:
* Links/Local
* .
The "* Links/Local" entry provides immediate access to all the header files in all the imported views, even ones in subdirectories of the views. The "* ." entry is a shorthand notation for indicating that every imported view's directory should be treated as an include directory (that is, for every view add the C/C++ compiler option "-I/view_path").
Thus, the vdf_specific option value provides finer selectivity over the include name space, while the vdf_general value provides greater maintainability since, if a new import is added, it is not necessary to change the visibility description file.
In the vdf_specific, vdf_general and update_includes cases, ALL #include directives are processed regardless of any conditional compilation directives. No macro processing is performed, so #include directives that use macros will generate warning messages.
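The update_includes rewrite convention described above can be sketched as follows (a hypothetical rewrite_include helper; the `migrated` lookup table is an illustrative stand-in for information the real tool derives from the file migration map):

```python
import re

def rewrite_include(directive, migrated):
    """Rewrite one quoted #include directive to the Apex naming
    convention, where a header migrated to subsystem x.ss is
    referenced as "x/<simple name>".

    `migrated` is a hypothetical table mapping a header's simple name
    to the simple name of its destination subsystem.
    """
    m = re.match(r'#include\s+"(.+)"', directive)
    if not m:
        return directive  # leave <...> includes and other lines alone
    simple = m.group(1).split("/")[-1]
    if simple not in migrated:
        return directive
    return '#include "%s/%s"' % (migrated[simple], simple)

print(rewrite_include('#include "c.h"', {"c.h": "x"}))  # -> #include "x/c.h"
```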
-inmap map_file Use map_file as the file migration map to drive the migration process.
-make_depend Execute the Apex dependencies command to calculate the make dependencies between the various C/C++ source files. This option is only meaningful if the "migrate by copy" mode is used and the -no_imports option is not specified. This option should only be used with models defined for foreign C/C++ compilers where the command translation tables for the C/C++ source dependencies commands have been appropriately defined. This option should not be used with the Rational Apex C/C++ compiler.
Default: do not calculate the make dependencies
-missing_includes missing_output_file Whenever a missing include file is detected (which is not already supposed to be ignored due to use of the -ignore_includes or -ignore_standard_includes options), output a line to the missing_output_file in the form:
-ignore_includes <missing include file> <source file>
This output can be passed to perfmig via the -options option in a subsequent invocation to cause perfmig to ignore the missing includes. The output should be reviewed to make sure that ignoring the includes is the proper action. Do not use the same file name in the -options and -missing_includes options.
Default: do not generate a file containing the missing include file references
-no_imports Do not derive the import relationships between subsystems from the #include directives in C/C++ source files.
-no_group_controls Do not group multiple controlled files into a single Apex control command. Instead, issue a separate Apex control command for each controlled file.
Default: group multiple controlled files into a single Apex control command to improve performance
-no_usr_include Do not automatically append /usr/include to the end of the list of include directories specified by the -builtin_includes option.
Default: automatically append /usr/include to the built-in include directories list
-options options_file Read command line options from the given file. This option is typically used to read in a set of -ignore_includes options that were generated through the use of the -missing_includes option. In an options file, the "#" character marks the beginning of a comment which is terminated by the end of line character (newline).
-register_main_programs Search all C/C++ source files and if a declaration is found for the "main" function, then register the source file as a main program with Apex.
Default: do not register main programs with Apex
-resolve_includes_by ( any_available_file | limited_search ) This option specifies what method perfmig uses to resolve #include directives in the original C/C++ source files.
The any_available_file value tells perfmig to assume that any file it finds among the original source files that matches the simple name of the include file will resolve the reference. If more than one file with the same simple name is found, then perfmig displays an error message (only the first time this particular conflict is encountered) and arbitrarily picks one of the possible choices to actually use for the include file. For example:
#include "stuff.h"
#include "more/stuff.h"
#include "../stuff.h"
Each of these includes could be resolved by ANY of these possible actual source files:
/root/src/stuff.h
/root/src/more/stuff.h
/root/src/still_more/stuff.h
The limited_search value tells perfmig that it has to resolve all #include directives using the include paths provided to it. Furthermore, it will only resolve include file references relative to the parent directory of each source file and NOT relative to the directory from which a source file was compiled. Perfmig gets its include paths from the following sources:
- a. INCLUDE_PATHS: attributes within the file migration map identified by the -inmap option. These are typically provided automatically via use of the makemig tool. Note that these can be manually augmented with filemig's -set_attr option.
- b. The -builtin_includes option.
- c. The -external_includes option.

The resolution of include files is also affected by the -ignore_includes and -ignore_standard_includes options.
Thus, at the cost of some accuracy, the any_available_file value can be used to resolve include file references without having to deal with makefiles and makemig.
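The trade-off between the two values can be sketched by modeling the original source tree as a flat list of pathnames (a hypothetical model; the helpers below are illustrative, not part of perfmig):

```python
import posixpath

# Hypothetical flat model of the original source tree.
SOURCE_FILES = ["/root/src/stuff.h",
                "/root/src/more/stuff.h",
                "/root/src/still_more/stuff.h"]

def any_available_file(include_name):
    """Resolve by simple name alone; ambiguity is an error the real
    tool reports once before arbitrarily picking one candidate."""
    simple = include_name.split("/")[-1]
    return [f for f in SOURCE_FILES if f.split("/")[-1] == simple]

def limited_search(include_name, source_file, include_paths):
    """Resolve relative to the including file's directory, then along
    the INCLUDE_PATHS: directories, taking the first hit."""
    dirs = [posixpath.dirname(source_file)] + include_paths
    for d in dirs:
        candidate = posixpath.normpath(posixpath.join(d, include_name))
        if candidate in SOURCE_FILES:
            return candidate
    return None

# "stuff.h" is ambiguous under any_available_file...
print(len(any_available_file("stuff.h")))                      # -> 3
# ...but resolves deterministically under limited_search.
print(limited_search("more/stuff.h", "/root/src/prog.c", []))  # -> /root/src/more/stuff.h
```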
-start_at step
-stop_at step

These options tell perfmig which migration steps to perform. The -start_at option indicates at which step to begin and the -stop_at option indicates at which step to end.
The legal step values and their respective full names are as follows:

subsystem       Subsystem Decomposition Step
architecture    Architectural Control Step
version         Version Control Step
cleanup         Cleanup Step
Perfmig can perform the subsystem, architecture and version steps individually, in separate invocations of the tool, or in combination. The cleanup step can only be performed as a separate invocation of the tool. Thus, the following uses of these options are valid:
-start_at subsystem
-start_at architecture -stop_at version
-start_at cleanup -stop_at cleanup
but, the following uses are illegal:
-start_at version -stop_at subsystem
-start_at version -stop_at cleanup
Default for -start_at: subsystem, unless the -stop_at option is given, in which case its value is used as the default for the -start_at option
Default for -stop_at: version, unless the -start_at option is given, in which case its value is used as the default for the -stop_at option
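The combination rules above can be summarized as a small validity check (a hypothetical valid_range helper, not part of the Apex toolset; the step names and their ordering are taken from the text):

```python
# Step names in the order perfmig performs them.
STEPS = ["subsystem", "architecture", "version", "cleanup"]

def valid_range(start, stop):
    """A -start_at/-stop_at pair is valid when start does not come
    after stop and cleanup is not combined with any other step."""
    i, j = STEPS.index(start), STEPS.index(stop)
    if i > j:
        return False                      # backwards range
    if "cleanup" in (start, stop) and start != stop:
        return False                      # cleanup runs only by itself
    return True

print(valid_range("architecture", "version"))  # -> True
print(valid_range("cleanup", "cleanup"))       # -> True
print(valid_range("version", "subsystem"))     # -> False
print(valid_range("version", "cleanup"))       # -> False
```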
-sysd_file sysd_file If specified, then build a system description file containing the full pathnames of all the subsystems created by perfmig along with all their respective imports in sysd_file. This file is used by perfmig to define the Apex import dependencies.
Default: Description.sysd, unless the -no_imports option is given, in which case no system description file is built.
-verbose Display all commands which would change any files before changing them.
Default: do not display commands
-verbose2 This option is like the -verbose option, but it outputs more of the file migration map as it is processed, and in a different format.
Default: do not display file migration map nor any commands
Description
Perfmig takes a file migration map, typically one generated by filemig, and generates (and optionally executes if the -confirm option is specified) the commands necessary to actually migrate a collection of source files into Apex subsystems, views and directories. These are generally commands to Apex to create subsystems and views, copy source files, place files under version control and register main programs. They may also be internal commands which are used to convert C/C++ source files under certain situations. By default, perfmig reads the map from standard input. This can be overridden with the -inmap option. It also outputs the commands to standard output as it processes them if the -verbose or -verbose2 options are given. If the -confirm option is not provided, perfmig attempts to simulate what the commands would do in order to identify as many error situations as possible.
Perfmig performs the migration steps as specified by the -start_at and -stop_at options. The default behavior is to do the subsystem, architecture and version steps in sequence.
The file migration map processed by perfmig has a simple structure controlled by a number of directives as is illustrated by this sample map:
# Beginning of map comment
MIGRATE_BY: copy                  # Side comment
HISTORY: my_history
INCLUDE_PATHS: /u/inc1 ../inc2
MODEL: /apex/model.ss/sun4
SS_STORAGE: /disk1/ss_store
VIEW: my.wrk
VIEW_STORAGE: /disk2/view_store

SUBSYSTEM: /new/sys1.ss
SOURCE_DIRECTORY: /src/dir1
CONTROLLED_C_FILES:
prog.c
IGNORED_FILES:
junk1

SOURCE_DIRECTORY: /src/dir2/dev
UNCONTROLLED_FILES:
sfile1 dfile1
sub1/sfile2
sfile3 sub1/sub2/dfile3

SUBSYSTEM: /new/sys2.ss
# etc., etc., ...
# End of map comment
This map will cause the following operations to be performed, although not necessarily in this exact order:
- 1 . Process the map in "migrate by copy" mode.
- 2 . Create Apex subsystem /new/sys1.ss with its storage in /disk1/ss_store.
- 3 . Create Apex view /new/sys1.ss/my.wrk with its storage in /disk2/view_store and based on the model in /apex/model.ss/sun4.
- 4 . Create the subdirectory /new/sys1.ss/my.wrk/sub1.
- 5 . Perform the following file copies:
/src/dir1/prog.c to /new/sys1.ss/my.wrk/prog.c
/src/dir2/dev/sfile1 to /new/sys1.ss/my.wrk/dfile1
/src/dir2/dev/sfile2 to /new/sys1.ss/my.wrk/sub1/sfile2
/src/dir2/dev/sfile3 to /new/sys1.ss/my.wrk/sub1/sub2/dfile3
- 6 . Don't do anything with file /src/dir1/junk1.
- 7 . Put file /new/sys1.ss/my.wrk/prog.c under version control with version history my_history.
- 8 . If the -no_imports option is not given, then find all the #include directives in /src/dir1/prog.c and analyze them using the include paths /u/inc1 and ../inc2 to determine which original source directories the included files came from. Next figure out which views these source directories were migrated to and, finally, define import relationships between those views and /new/sys1.ss/my.wrk.
- 9 . If the -include_scheme option has a value other than none, analyze /src/dir1/prog.c's include files as in step 8 above and use them to fix any name resolution problems. If this option has the values vdf_specific or vdf_general, try to create an Apex visibility description file to resolve the references. If its value is update_includes, change the name of the files in the #include directives if necessary for them to be properly resolved.
- 10 . If the -register_main_programs option is specified, then examine the source code in /new/sys1.ss/my.wrk/prog.c. If it contains a declaration for the main function, then tell Apex to register it as a main program.
More Formal Description of a File Migration Map
In a file migration map, spaces are optional except where needed to disambiguate adjacent arguments. All directives must be terminated by the end of line character (newline). Comments are introduced by a "#" character and continue to the end of the line. The order of the directives in a migration map is significant.
The very first directive should indicate the mode of migration being attempted, such as:
MIGRATE_BY: copy
MIGRATE_BY: reference
If this directive is missing, then "migrate by reference" mode is assumed.
The next set of directives provide values for various attributes of Apex subsystems, views and files. They are:
HISTORY:
INCLUDE_PATHS:
MODEL:
SS_STORAGE:
VIEW:
VIEW_STORAGE:
The HISTORY: attribute directive specifies the name of the version history that will be used when a file is placed under version control. If none is given, then the default version history is used.
The INCLUDE_PATHS: attribute directive provides the list of source directories that are used to resolve #include directives within C/C++ source code files if they need to be examined or converted. Perfmig follows the same search rules used by C/C++ compilers in resolving #include file directives.
The MODEL: attribute directive identifies the model view that will be used whenever an Apex view is created. Its value defaults to that defined by the APEX_DEFAULT_MODEL session switch.
The SS_STORAGE: attribute directive indicates where the data files associated with an Apex subsystem (such as its version control database) should actually be stored. By default, they are stored on the same file system as that of the subsystem's parent directory. This attribute is used whenever a new subsystem is created.
The VIEW: attribute directive tells what name to use for any views that need to be created. Its default value is migrate.wrk. A view is always created as a working view.
Finally, the VIEW_STORAGE: attribute directive specifies where the data files associated with an Apex view should actually be stored. Its value defaults to that of the view's associated subsystem. This attribute is ignored in "migrate by reference" mode.
The values of the MODEL:, SS_STORAGE:, and VIEW_STORAGE: attribute directives must specify full pathnames. The HISTORY: and VIEW: attributes must give only simple names. The INCLUDE_PATHS: attribute may include both relative and complete pathnames.
Although an attribute directive may appear anywhere in a file migration map, the position of the directive determines the objects to which it will apply.
- Attributes placed at the very beginning of a map are called global attributes and apply to any relevant object which follows, unless they are overridden by a subsequent attribute directive of the same name.
- Attribute directives which appear after a SUBSYSTEM: directive but before any following SOURCE_DIRECTORY: directives are called subsystem attributes and apply to any relevant object which follows up to but not including the next SUBSYSTEM: directive.
- Attribute directives which appear after a SOURCE_DIRECTORY: directive but before any following file treatment directives (such as CONTROLLED_FILES:) are called source directory attributes and apply to any relevant object which follows up to but not including the next SOURCE_DIRECTORY: or SUBSYSTEM: directive.
- Attribute directives which appear after one of the file treatment directives but before any specific file directives (such as prog.c) are called file treatment attributes and apply to any relevant object which follows up to but not including the next file treatment, SOURCE_DIRECTORY:, or SUBSYSTEM: directive.
- Finally, attribute directives which appear after a file directive are called file attributes and apply only to that single file.
All this is to say that the scope of file attributes is nested within that of file treatment attributes, which is nested within that of source directory attributes, which is nested within that of subsystem attributes, which is nested within that of global attributes.
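This nesting can be modeled as a chain of scopes searched innermost-first (a hypothetical lookup sketch; the scope contents are illustrative only):

```python
# Hypothetical scopes for one file, ordered from outermost (global)
# to innermost (file).
scopes = [
    {"HISTORY:": "my_history", "VIEW:": "my.wrk"},  # global
    {"VIEW:": "proj.wrk"},                          # subsystem
    {},                                             # source directory
    {},                                             # file treatment
    {"HISTORY:": "sun4"},                           # file
]

def lookup(attr, scopes):
    """Innermost definition of an attribute wins; outer scopes supply
    the value only when no inner scope overrides it."""
    for scope in reversed(scopes):
        if attr in scope:
            return scope[attr]
    return None

print(lookup("HISTORY:", scopes))  # -> sun4
print(lookup("VIEW:", scopes))     # -> proj.wrk
```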
The global attribute directives in a map, if any, are followed by zero or more SUBSYSTEM: directives. A SUBSYSTEM: directive specifies the full pathname of a new subsystem which is to be created along with its associated view. The current value of the SS_STORAGE: attribute directive applies to the creation of a subsystem. The current values of the MODEL:, VIEW:, and VIEW_STORAGE: attribute directives apply to the creation of a view. The subsystem and its view are not actually created until the first non-ignored file directive within the subsystem is found. Therefore, empty subsystems and views will not be created.
Immediately following a SUBSYSTEM: directive may be zero or more subsystem attribute directives followed in turn by zero or more SOURCE_DIRECTORY: directives. A SOURCE_DIRECTORY: directive merely identifies the source directory from which data files will be migrated to the subsystem identified by the preceding SUBSYSTEM: directive.
Immediately following a SOURCE_DIRECTORY: directive may be zero or more source directory attribute directives followed in turn by zero or more file treatment directives. A file treatment directive is one of CONTROLLED_C_FILES:, CONTROLLED_FILES:, UNCONTROLLED_C_FILES:, UNCONTROLLED_FILES:, or IGNORED_FILES: which indicate how a given source file is to be treated during the migration.
Immediately following a file treatment directive may be zero or more file treatment attribute directives followed in turn by zero or more file directives.
Immediately following a file directive may be zero or more file attribute directives. Only the HISTORY: and INCLUDE_PATHS: attribute directives are applicable to file objects.
The list of file directives associated with the IGNORED_FILES: directive are present only for documentation purposes. Perfmig does nothing with these files. The file directives associated with the other file treatment directives can take any one of the following forms:
1. src_file
2. sub_dir/src_file
3. src_file dst_file
4. src_file sub_dir/dst_file
In all four forms src_file refers to the simple name by which the file is known in the source directory identified by the preceding SOURCE_DIRECTORY: directive. In the first form, the source file will be copied to a file of the same name in the top-level directory of the subsystem/view. In the second form, the source file will be copied into a file of the same name, but in subdirectory sub_dir of the subsystem/view. This subdirectory can be more than one level deep. In the third form, the source file will be copied to the top-level subsystem/view directory and renamed dst_file. In the last form, the source file will be copied to the sub_dir subdirectory of the subsystem/view and renamed dst_file.
In "migrate by reference" mode, only the first and second forms are allowed and even these are subject to additional restrictions.
- Files associated with the CONTROLLED_C_FILES: and CONTROLLED_FILES: directives are placed under version control after they are copied. The current value of the HISTORY: attribute directive is used to determine the file's version history.
- Files associated with the UNCONTROLLED_FILES: and UNCONTROLLED_C_FILES: directives are copied to the appropriate subsystem/view directory, but are not placed under version control.
- Files associated with the CONTROLLED_C_FILES: and UNCONTROLLED_C_FILES: directives are processed as C or C++ source code files depending on the values of the -no_imports, -include_scheme, and -register_main_programs options. The current value of the INCLUDE_PATHS: attribute directive may be used to carry out the necessary processing.
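The destination implied by each of the four file directive forms can be sketched as follows (a hypothetical destination helper; `view_root` is an illustrative stand-in for the view created from the preceding SUBSYSTEM: directive, and only the destination side of the mapping is computed):

```python
import posixpath

def destination(directive, view_root):
    """Return the destination pathname for one file directive."""
    parts = directive.split()
    # With two tokens the second names the destination (forms 3 and 4);
    # with one token the file keeps its own, possibly subdirectory-
    # qualified, name (forms 1 and 2).
    dst = parts[1] if len(parts) == 2 else parts[0]
    return posixpath.join(view_root, dst)

root = "/new/sys1.ss/my.wrk"
print(destination("prog.c", root))                  # -> /new/sys1.ss/my.wrk/prog.c
print(destination("sub1/sfile2", root))             # -> /new/sys1.ss/my.wrk/sub1/sfile2
print(destination("sfile1 dfile1", root))           # -> /new/sys1.ss/my.wrk/dfile1
print(destination("sfile3 sub1/sub2/dfile3", root)) # -> /new/sys1.ss/my.wrk/sub1/sub2/dfile3
```

These four results match the copy operations shown in the sample map discussion earlier in this section.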
subsysmig - test and perform subsystem decomposition step
Syntax
subsysmig {options}
Options
The following options are valid for subsysmig. See perfmig - test and perform migration steps for a complete description of these options.
-confirm
-inmap map_file
-options options_file
-verbose
-verbose2

Description
Subsysmig performs the Subsystem Decomposition Step of the migration process.
archmig - test and perform architectural control step
Syntax
archmig {options}
Options
The following options are valid for archmig. Please see perfmig - test and perform migration steps for a complete description of these options.
-builtin_includes list_of_source_directory_pathnames
-config_file config_file
-confirm
-external_includes list_of_source_directory_pathnames
-ignore_includes included_file_spec including_file_spec
-ignore_standard_includes
-include_scheme ( none | vdf_general | vdf_specific | update_includes )
-inmap map_file
-make_depend
-missing_includes missing_output_file
-no_imports
-no_usr_include
-options options_file
-register_main_programs
-start_at step -stop_at step
-sysd_file sysd_file
-verbose
-verbose2

Description
Archmig performs the Architectural Control Step of the migration process. See perfmig - test and perform migration steps for a complete description of this tool's behavior.
vermig - test and perform version control step
Syntax
vermig {options}
Options
The following options are valid for vermig. See perfmig - test and perform migration steps for a complete description of these options.
-confirm
-inmap map_file
-no_group_controls
-options options_file
-verbose
-verbose2

Description
Vermig performs only the Version Control Step of the migration process. See perfmig - test and perform migration steps for a complete description of this tool's behavior.
cleanmig - test and perform cleanup step
Syntax
cleanmig {options}
Options
The following options are valid for cleanmig. See perfmig - test and perform migration steps for a complete description of these options.
-confirm
-inmap map_file
-options options_file
-verbose
-verbose2
Description
Cleanmig performs only the Cleanup Step of the migration process. See perfmig - test and perform migration steps for a complete description of this tool's behavior.
duprefmig - duplicate a "migrate by reference" tower of views
Syntax
duprefmig {<options>}
Options
-confirm
Must be specified to actually cause changes to any files. Furthermore, Apex must be running in batch mode within the shell that runs duprefmig and the environment variable APEX_CPP_ENABLED must be set to True.
Default: make no changes to files
-dup_source old_directory new_directory
This option is used to specify where the new, duplicated versions of the old source directories should be located. There must be one or more instances of this option. Both the old_directory and new_directory arguments must be full pathnames (that is, they must begin with a /). When duprefmig needs to locate the new source directory for a particular old source directory it examines the set of -dup_source options, in the order specified, until it finds the first one whose old_directory argument is a prefix of the given old source directory. It then uses the corresponding new_directory argument as the prefix for the new source directory. For example, the option:
-dup_source /src/old /my/new
would indicate that the following old source directories should be duplicated onto these respective new source directories:
/src/old/doc          => /my/new/doc
/src/old/test         => /my/new/test
/src/old/prog/collect => /my/new/prog/collect
/src/old/prog/analyze => /my/new/prog/analyze
Alternatively, this set of options:
-dup_source /src/old/test /my/new/qa
-dup_source /src/old/prog /my/new/forecast
-dup_source /src/old /my/new
would define the following new source directory destinations:
/src/old/doc          => /my/new/doc
/src/old/test         => /my/new/qa
/src/old/prog/collect => /my/new/forecast/collect
/src/old/prog/analyze => /my/new/forecast/analyze
Note that in assigning new source directories care must be taken to avoid making changes that will break existing C/C++ source file #include directives. For this reason, it is advisable to change only the top level directory of a source tree.
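The first-matching-prefix rule described above can be sketched in Python. The function name is ours, not part of duprefmig (which applies this rule internally); the directory pairs are the ones from the example:

```python
# Sketch of duprefmig's -dup_source resolution: scan the option pairs in
# order and take the first whose old_directory is a prefix (on a path
# component boundary) of the given old source directory.

def map_source_dir(old_dir, dup_source_options):
    """Return the new source directory for old_dir using the first
    matching (old_directory, new_directory) pair."""
    for old_prefix, new_prefix in dup_source_options:
        if old_dir == old_prefix or old_dir.startswith(old_prefix + "/"):
            # Replace the matched prefix, keep the remainder of the path.
            return new_prefix + old_dir[len(old_prefix):]
    raise ValueError("no -dup_source option matches " + old_dir)

# The three -dup_source options from the second example above.
options = [
    ("/src/old/test", "/my/new/qa"),
    ("/src/old/prog", "/my/new/forecast"),
    ("/src/old", "/my/new"),
]

print(map_source_dir("/src/old/doc", options))           # /my/new/doc
print(map_source_dir("/src/old/prog/collect", options))  # /my/new/forecast/collect
```

Note how ordering matters: because /src/old/test is listed before the catch-all /src/old, the test tree is redirected to /my/new/qa rather than /my/new/test.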
-dup_view old_view new_view
This option is used to specify where the new, duplicated versions of the old views should be placed. There must be one or more instances of this option. The old_view can identify a particular old view or it can include the wildcard characters "*" or "?" to refer to a collection of old views. Furthermore, old_view can be a relative pathname or a full pathname. Full pathnames have to match the full pathname of an old view, whereas relative pathnames only have to match the "tail end" of a view's full pathname. If more than one of these options is given, then the order in which they appear (left to right) is the order in which they will be compared to candidate old view pathnames. The first specification that matches will identify the new view name. For example, the option:
-dup_view "*" new.wrk
would cause all new views to have the name new.wrk, such as these:
/apex/doc.ss/v1.wrk          => /apex/doc.ss/new.wrk
/apex/test.ss/v2.wrk         => /apex/test.ss/new.wrk
/apex/prog/collect.ss/v2.wrk => /apex/prog/collect.ss/new.wrk
/apex/prog/analyze.ss/v3.wrk => /apex/prog/analyze.ss/new.wrk
Alternatively, this set of options:
-dup_view /apex/doc.ss/v1.wrk n1.wrk
-dup_view prog/*/*.wrk n2.wrk
-dup_view v2.wrk n3.wrk
would define the following new views:
/apex/doc.ss/v1.wrk          => /apex/doc.ss/n1.wrk
/apex/test.ss/v2.wrk         => /apex/test.ss/n3.wrk
/apex/prog/collect.ss/v2.wrk => /apex/prog/collect.ss/n2.wrk
/apex/prog/analyze.ss/v3.wrk => /apex/prog/analyze.ss/n2.wrk
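The -dup_view matching rule can likewise be sketched in Python. The function is ours, the view names come from the example, and the wildcard behavior is approximated with fnmatch-style patterns (duprefmig's exact wildcard semantics, e.g. whether "*" can cross a "/", may differ):

```python
# Sketch of duprefmig's -dup_view resolution: full-pathname patterns must
# match the whole old view pathname; relative patterns only have to match
# the same number of trailing path components ("tail end").
from fnmatch import fnmatchcase

def map_view(old_view, dup_view_options):
    """Return the new view name for old_view using the first matching
    (pattern, new_name) pair, scanned left to right."""
    for pattern, new_name in dup_view_options:
        if pattern.startswith("/"):
            if fnmatchcase(old_view, pattern):
                return new_name
        else:
            # Compare the pattern against the tail end of the pathname.
            parts = old_view.split("/")
            n = pattern.count("/") + 1
            if fnmatchcase("/".join(parts[-n:]), pattern):
                return new_name
    raise ValueError("no -dup_view option matches " + old_view)

# The three -dup_view options from the second example above.
options = [
    ("/apex/doc.ss/v1.wrk", "n1.wrk"),
    ("prog/*/*.wrk", "n2.wrk"),
    ("v2.wrk", "n3.wrk"),
]

print(map_view("/apex/test.ss/v2.wrk", options))          # n3.wrk
print(map_view("/apex/prog/collect.ss/v2.wrk", options))  # n2.wrk
```

As in the table above, /apex/test.ss/v2.wrk falls through to the v2.wrk rule because the prog/*/*.wrk tail does not match it, while both views under prog/ are caught by the wildcard rule first.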
-ignore_tar_permission_errors
Duprefmig uses the tar program to duplicate source directories. If the read/search permissions on a file or subdirectory within a source directory prohibit tar from reading or searching it, this option causes duprefmig to ignore the error and continue processing.
Default: report tar permission errors when duplicating source directories
-new_config_file new_config_file
Put the new configuration file built by duprefmig, containing the full pathnames of all the new views, in new_config_file. This file is used by duprefmig to define the Apex import dependencies and can also be used by various other Apex commands such as the show_status and compile commands.
-old_config_file old_config_file
The old_config_file contains the full pathnames of all the old views which are to be duplicated. This information is used by duprefmig to locate the associated old source directories. These old views and source directories are used to derive the new views and source directories as controlled by the -dup_source and -dup_view options. This configuration file must describe a complete "migrate by reference" tower of old views. That is, none of the old views can import another view that is not included in the configuration file. Furthermore, in general, all the old views that were created in a given migrate by reference attempt should be included in the configuration file. Duprefmig does not verify that these requirements are met.
Default: none; this option is always required
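As an illustration, an old_config_file for the tower used in the -dup_view examples might read as follows (the view pathnames are the ones from those examples; the one-full-pathname-per-line layout is an assumption about the Apex configuration file format):

```
/apex/doc.ss/v1.wrk
/apex/test.ss/v2.wrk
/apex/prog/collect.ss/v2.wrk
/apex/prog/analyze.ss/v3.wrk
```

Note that this set of views satisfies the completeness requirement only if none of them imports a view outside the list; duprefmig does not check this for you.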
-options options_file
Read command line options from the given file. This option is typically used to read in a set of -dup_source or -dup_view options, of which there can be more than one. In an options file, the "#" character marks the beginning of a comment, which is terminated by the end of line character (newline).
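Such an options file might look like this (the pathnames are the hypothetical ones from the earlier examples; "#" comments run to the end of the line as described above):

```
# duprefmig options file: map the old source tree and views
-dup_source /src/old/test /my/new/qa     # QA tree gets its own name
-dup_source /src/old/prog /my/new/forecast
-dup_source /src/old /my/new             # catch-all, must come last
-dup_view v2.wrk n3.wrk
```

Keeping the -dup_source and -dup_view options in a file makes their left-to-right ordering explicit and repeatable across dry runs and the final -confirm run.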
-use_new_source
If a specified new source directory already exists, use it instead of duplicating its associated old source directory.
Default: do not use existing new source directories; it is an error if any already exist
-use_new_view
If a specified new view already exists, use it instead of duplicating its associated old view. This can be used to handle old views which were not migrated by reference or to preserve an old view in the new configuration.
Default: do not use existing new views; it is an error if any already exist
-verbose
Display all commands which would change any files before changing them.
Default: do not display commands
Description
Duprefmig takes a configuration file, containing a collection of views that were previously migrated by reference using perfmig, and produces a duplicate copy of these views and their associated original source directories. It accomplishes this in such a manner that concurrent development can take place in both the old, original source directories and their associated old views as well as the new source directories and their associated new views. All controlled files are maintained by the common Apex subsystems. By default, duprefmig just displays the Apex and shell commands it would perform to duplicate the information. The -confirm option must be specified for duprefmig to actually execute these commands.
Duprefmig performs the following steps to duplicate a "migrate by reference" tower of views, although not necessarily in this exact order:
- 1 . The configuration file specified by the -old_config_file option is read to find out the names of the old views to be duplicated. The old source directory associated with each view is acquired from the symbolic link that the old view pathname actually refers to. New source directory and new view pathnames are derived from the old view and source paths and the -dup_source and -dup_view options.
By default, the new source directories cannot already exist. However, the -use_new_source option can be specified to cause duprefmig to use any existing new source directories it finds and to skip the following steps which test for checked out files and copy the directory. Thus, it is possible to duplicate the old source directories using other tools before running duprefmig to create the new views.
Also, by default, the new views cannot already exist. However, the -use_new_view option can be given to remove this restriction and cause duprefmig to use any existing new views it finds. If a new view exists, then the following steps, which test for checked out files, copy the directory and create the new symbolic links, are not performed on it. This feature has two potential uses. First, it can be used to deal with views which were not migrated by reference. Duprefmig notices this situation when it examines the old view and displays an error to the effect that the old view was not migrated by reference. In this case, the views must be copied beforehand and appropriate -dup_view options must be provided to tell where the new views are located. Second, this feature can be used to tell duprefmig to use the old view as the new view (that is, keep the old view in the new configuration). This is accomplished by providing a -dup_view option which specifies that the same name should be used for the new view. For example:
-dup_view /apex/doc.ss/v1.wrk v1.wrk
Note that care must be exercised in using this feature to make sure that the new import relationships will make sense. In particular, if a given old and new view are identical, then that view cannot import a view which is not also identical in both the old and new configurations. It is also not recommended to use this option on a view that was migrated by reference.
- 2 . A new configuration file is generated in the file identified by the -new_config_file option. This is done even if the -confirm option is not given. The new configuration file is used to make the necessary import changes later.
- 3 . The old views are examined to make sure that they do not have any version controlled files that are checked out. Files can be checked out in other views which are not being duplicated and do not have to be up to date.
- 4 . The old source directories are copied to the new source directories using the tar command. The -ignore_tar_permission_errors option can be used to ignore tar errors which are usually due to not being able to read a file or search a directory.
- 5 . The new views are duplicated by creating a couple of special symbolic links. The new view itself is actually a symbolic link to its associated new source directory, as is the case for any view which was migrated by reference.
- 6 . Finally, the import relationships between the new views are updated using Apex's accept_import_changes command.
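Putting the options together, a typical invocation of the steps above might look like the following (the file names old.cfg, new.cfg, and dup.opt are hypothetical; a first run without -confirm only displays the commands that would be executed):

```
duprefmig -old_config_file old.cfg -new_config_file new.cfg \
    -options dup.opt -verbose
duprefmig -old_config_file old.cfg -new_config_file new.cfg \
    -options dup.opt -confirm
```

Recall from the -confirm description that the second, confirming run also requires Apex to be running in batch mode in that shell with APEX_CPP_ENABLED set to True.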
Full Migration Map Syntax
<migration map> ::= [ <sep> ] [ <migration mode> ] { <global attr> } { <subsystem> }
<migration mode> ::= "MIGRATE_BY:" ( "copy" | "reference" ) <sep>
<global attr> ::= <attribute>
<subsystem> ::= "SUBSYSTEM:" <directory> <sep> { <subsystem attr> } { <source directory> }
<subsystem attr> ::= <attribute>
<source directory> ::= "SOURCE_DIRECTORY:" <directory> <sep> { <source dir attr> } { <file treatment> }
<source dir attr> ::= <attribute>
<file treatment> ::= <controlled c files> | <controlled files> | <uncontrolled c files> | <uncontrolled files> | <ignored files>
<controlled files> ::= "CONTROLLED_FILES:" <sep> { <file treatment attr> } { <used file> }
<controlled c files> ::= "CONTROLLED_C_FILES:" <sep> { <file treatment attr> } { <used file> }
<uncontrolled c files> ::= "UNCONTROLLED_C_FILES:" <sep> { <file treatment attr> } { <used file> }
<uncontrolled files> ::= "UNCONTROLLED_FILES:" <sep> { <file treatment attr> } { <used file> }
<file treatment attr> ::= <attribute>
<used file> ::= ( <file name> | <file path> | <file name> <file name> | <file name> <file path> ) <sep> { <file attr> }
<file attr> ::= <attribute>
<ignored files> ::= "IGNORED_FILES:" <sep>
{ <ignored file> }
<ignored file> ::= <file path> <sep>
<attribute> ::= <history> | <include paths> | <model> | <subsystem storage> | <view> | <view storage>
<history> ::= "HISTORY:" [ <version history> ] <sep>
<version history> ::= <simple name>
<include paths> ::= "INCLUDE_PATHS:" <directory list> <sep>
<directory list> ::= <directory> { " " <directory> }
<model> ::= "MODEL:" <directory> <sep>
<subsystem storage> ::= "SS_STORAGE:" <directory> <sep>
<view> ::= "VIEW:" <simple name> <sep>
<view storage> ::= "VIEW_STORAGE:" <directory> <sep>
<directory> ::= <string>
<file path> ::= <string>
<file name> ::= <simple name>
<simple name> ::= <a <string> which does not contain a />
<string> ** ::= <char> { <char> }
<char> ::= <any graphic character except space>
<sep> ::= ( <comment> | <EOL> ) { <comment> | <EOL> }
<comment> ::= "#" { <any character> } <EOL>
<EOL> ::= <one or more end of line characters>
(** => no spaces or tabs allowed between syntactic elements)
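As a worked instance of this grammar, a small "migrate by reference" map for one subsystem might read as follows (all directory and file names are hypothetical; each directive line ends with a <sep>, and "#" comments are part of <sep>):

```
# Hypothetical migration map conforming to the grammar above
MIGRATE_BY: reference
MODEL: /apex/base.ss/sun4_solaris2.rel

SUBSYSTEM: /apex/prog/collect.ss
VIEW: v1.wrk
SOURCE_DIRECTORY: /src/old/prog/collect
INCLUDE_PATHS: /src/old/include /src/old/prog
CONTROLLED_C_FILES:
collect.c
collect.h
UNCONTROLLED_FILES:
Makefile
IGNORED_FILES:
/src/old/prog/collect/collect.o
```

Here MODEL: serves as a <global attr>, VIEW: as a <subsystem attr>, INCLUDE_PATHS: as a <source dir attr>, and the three file-treatment sections classify the directory's contents as described earlier for the CONTROLLED_C_FILES:, UNCONTROLLED_FILES:, and IGNORED_FILES: directives.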
Apex Commands Used During Migration
The following Apex commands are used during the migration process. A full description of each command can be found in the Command Reference Guide.
Rational Software Corporation
http://www.rational.com
support@rational.com techpubs@rational.com
Copyright © 1993-2001, Rational Software Corporation. All rights reserved.