WebSphere Product Center: Support Guide

 

Version 5.2

 

 

 

 

 

 


Note! Before using this information and the product it supports, read the information in “Notices” at the end of this document.

23 March 2005

This edition of this document applies to WebSphere Product Center (5724-I68), version 5.2, and to all subsequent releases and modifications until otherwise indicated in new editions.

© Copyright International Business Machines Corporation 2001, 2005. All rights reserved.
US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Table of Contents

Ch 1 WebSphere Product Center Monitoring

    WebSphere Product Center services
        Obtain the short status of a service
        Obtain the long status of a service

    Database monitoring and management
        1. Allocate more space whenever necessary
        2. Apply Fix Packs / Patch Sets
        3. Startup and shutdown the database / db manager
        4. Analyze the database schema and collect the statistics
        5. Re-organize the tables and indexes
        6. Check the status of Backup jobs scheduled
        7. Restore and Recover the database
        8. Tune the database performance

Ch 2 WebSphere Product Center Performance

    Managing disk space
        Temporary files
    Caching web pages
    Hardware specifications

Ch 3 Database Administration

    Database user
    Database backup
        Physical backups    
        Logical backups
    Database health check
        Setup DB2 health center alerts
    Database management toolkit

Ch 4 Document Store

    Directories
    Architecture
    Managing tablespace
    Deleting files
    Optional GZIPing of BLOBs
    Defragmentation
    Document Store frequently asked questions

Ch 5 Backup and Recovery

    WebSphere Product Center backup
    Database backup
    Recovery

Ch 6 WebSphere Product Center Logger

    WebSphere Product Center services log configuration files
    Runtime generated logs
    Configure log files
        Change location
        Change file size
        Change file backup option
        Change conversion pattern
    Conversion specifiers
    Format modifiers
    Conversion characters
    WebSphere Product Center logging setup files

Ch 7 Enable Spell Check Feature

        Limitations
        Spell checking functions
    Enabling Spell Checker
        Requirements
        Configure WebSphere Product Center for WinterTree Spelling Engine Runtime Configuration

Ch 8 Security

    LDAP Integration
        Feature overview
        Functional overview
        Assumptions
        Limitations
        Impact on migrating pre-5.2
    Integrating LDAP with WebSphere Product Center
        1. Configure LDAP schema for users and roles
        2. Edit LDAP configuration file
        3. Restart the System

Ch 9 Troubleshooting

    Tools
    Application server issues
        Environment issues
        Common incorrect configuration file settings
    Application server unresponsive
    Database issues
        1. Character conversion during data exports / imports
        2. Database space allocation problems
        3. WebSphere Product Center slows once a running job is killed
        4. Redo log switch problems
        5. WebSphere Product Center middleware hangs and the GUI is frozen
        6. Analyze schema job hangs
    Monitor log files for errors
    Connectivity issues
        HTTP post errors
        FTP Fetch Error
        Test Java Connectivity
    Other Issues
        Stopping and restarting WebSphere Product Center

Ch 10 Migration Framework

    Migration from 4.2.0.x to 5.2
        Exporting a company
        Importing a company
        Impact on migration
    Migration from 4.2.1 to 5.2

Ch 11 Web Services Support

    Web Services Description Language (WSDL) Support
    Web Service User Interface
    Script Operations Supporting Web Services
    Support for Document/Literal Style

Ch 12 Command Line Job Interface

    Scheduler integration with IBM Tivoli Workload Scheduler
    Scheduler control through command line interface

Ch 13 Integration Best Practices

    Definitions and Acronyms
    Integration Dimensions
        WebSphere Product Center as source or target system
        Controlling system
        Protocol
        Format
        Size of data
        Types of communication
        Frequency
        Integration thread
        Acronyms
    Design Principles
        Re-usability
        Information sharing
        Information processing
        Event handling
        Change tracking
        Re-usable connectors
    Implementation
    Scaling the Implementation
    Performance Tuning
    Validation
        Stability
    Scalable Testing
    Visibility
    Reporting
    Documentation
    Top Ten Guidelines for WebSphere Product Center Integrations
        Use clear and common terminology to describe integrations
        Re-usability
        Visibility
        Mini-integrations
        Representative vs. complete environments
        Scalable process testing
        Performance
        Establish single thread early
        Design specs and documentation
            Single owner
    EAI Platform Integrations
        Approach
    Additional advantages

Notices

Ch 1 WebSphere Product Center Monitoring

WebSphere Product Center monitoring can be done through the use of the rootadmin and rmi_status scripts or through the GUI. There is no standalone monitoring tool provided with WebSphere Product Center.

The creation of a monitoring tool is beyond the scope of this document; however, several simple approaches are described in the following sections.


Obtain the short status of a service

To get the short status of a service, pass the following parameters:

-cmd=check -svc=<service name>

The short status returns one of the following conditions:

running: The service is running and responding to a "heartbeat" function.
not found: The service was not found. The service might not have been started or it might have crashed.
found but not responding: The service was found as being registered with the RMI registry, but it is not responding to the "heartbeat" function. The service might have to be restarted.

Obtain the long status of a service

To get the long status of a service, pass the following parameters to rootadmin.sh:

-cmd=status -svc=<service name>

This command produces an HTML file that can be viewed in any browser. On a terminal, use Lynx (or a similar tool) to format the output.

The status gives an overview of the different threads running in the service, as well as a status of the database connections currently taken by the service.

Example:

To get the status of the scheduler:

rootadmin.sh -cmd=status -svc=scheduler > /tmp/sch_status.html; lynx /tmp/sch_status.html

or

rootadmin.sh -cmd=status -svc=scheduler > /tmp/sch_status.html; lynx -dump /tmp/sch_status.html

Note: The ">" used in the example above directs the status details to a file output location.


Database monitoring and management

Since the relational database is the main storage for the bulk of the product information content, it is important to perform management tasks that prevent degradation and loss of performance.

Setting up alerts in WebSphere Product Center can provide notification of issues as they arise, so that they can be resolved before they become critical. A monitoring system should also be implemented to constantly monitor the WebSphere Product Center database.

The following tasks should be performed on a regular basis.

1. Allocate more space whenever necessary

Space management is an ongoing task for most administrators. Unless you have a completely static database, tables and indexes will regularly grow, or shrink, in size. You need to ensure that sufficient space is available for this to occur without interruption to the ongoing processing. You also need to ensure that the space is being used efficiently. You can use the DB2 Control Center to allocate space when needed, or use the command line interface to complete the same task.
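A minimal DB2 command-line sketch (the database and tablespace names are placeholders; consult the DB2 documentation for the exact syntax of your release):

db2 connect to <dbname>
db2 "LIST TABLESPACES SHOW DETAIL"                       # compare used pages against total pages
db2 "ALTER TABLESPACE <tablespace> EXTEND (ALL 100 M)"   # grow every container of a DMS tablespace by 100 MB
db2 terminate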

2. Apply Fix Packs / Patch Sets

Fix packs / patch sets are the database system vendor's mechanism for delivering fully tested and integrated product fixes on a regular basis. They provide bug fixes only; they do not include new functionality and do not require certification on the target system. It is very important to apply fix packs / patch sets when they become available to avoid known problems with the database system. Contact the database system vendor for more information on the fixes.

3. Startup and shutdown the database manager and the database

The database manager and database must be shut down and restarted as part of applying fixes, moving databases from one server to another, and similar tasks. Start up and shut down the database as needed.
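For example, on DB2 the database and database manager can be stopped and started from the command line as follows (a sketch; the database name is a placeholder):

db2 force application all          # disconnect any remaining applications
db2 deactivate database <dbname>
db2stop                            # stop the database manager
db2start                           # start the database manager
db2 activate database <dbname>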

4. Analyze the database schema and collect the statistics

The database schema is analyzed to collect the latest statistics about tables and indexes in the database. The cost-based optimization approach uses statistics to estimate the cost of each execution plan. Gather statistics on a regular basis to provide the optimizer with the best information about schema objects. For example, after loading a significant number of rows into a table, collect new statistics for the table.

To analyze the database schema, run the shell script analyze_schema.sh located in the $TOP/src/db/schema/util directory.
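For reference, this is roughly what statistics collection looks like when issued manually against a single table in DB2 (schema and table names are placeholders; under normal circumstances use the analyze_schema.sh script instead):

db2 connect to <dbname>
db2 "RUNSTATS ON TABLE <schema>.<table> WITH DISTRIBUTION AND DETAILED INDEXES ALL"
db2 terminate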

5. Re-organize the tables and indexes

It is recommended to re-organize the tables and indexes at regular intervals for better performance.

With today's databases growing faster than ever, the typical DBA must spend a significant amount of time performing space management and reorganization to achieve optimal performance.

Optimal performance means optimal response time, but it can degrade due to a number of space management issues. Most of these issues fall into three main areas: table-related issues, stagnated indexes, and I/O balancing and data partitioning.

Table-related issues are well known to most DBAs. They include underutilized space inside table blocks, chained rows, poor data proximity, and fragmented (overextended) tables.

The second major issue in the performance challenge is stagnated indexes: indexes that have become large and sparsely populated.

This condition can severely degrade the performance of index range scans. It can also waste a substantial amount of disk space.

The third major issue in the performance challenge is I/O balancing and data partitioning. When objects that are frequently accessed reside in the same data file, I/O bottlenecks can result. Tools like reorgchk in DB2 report which objects need to be reorganized. There are many methods and tools available to re-organize database objects; read the database system vendor's documentation on re-organizing tables and indexes.
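An illustrative DB2 sequence (schema and table names are placeholders): reorgchk flags the objects that need attention and REORG rebuilds them.

db2 connect to <dbname>
db2 "REORGCHK UPDATE STATISTICS ON TABLE ALL"        # report tables and indexes that need reorganization
db2 "REORG TABLE <schema>.<table>"                   # reorganize a flagged table
db2 "REORG INDEXES ALL FOR TABLE <schema>.<table>"   # rebuild the indexes of that table
db2 terminate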

6. Check the status of Backup jobs scheduled

Backups are an integral part of the restore and recovery process. Verify the status of all backup jobs to make sure they are running as scheduled.

Checking the status of a backup depends on how the backup procedure is defined and which tools are used to take backups. Refer to the database system vendor's documentation on backups for more information.
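If the backups are taken with the native DB2 BACKUP command, their status is recorded in the recovery history file and can be reviewed as follows (the database name is a placeholder):

db2 "LIST HISTORY BACKUP ALL FOR <dbname>"   # shows the timestamp, type, and completion status of each backup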

7. Restore and Recover the database

In the case of a database failure, determine the type and extent of failure. The analysis should dictate the steps taken to recover the system. Use the restore and recovery process as defined by your IT support group.

Restoring a physical backup means reconstructing it and making it available to the database server. Recovering a restored data file means updating it with redo records, that is, records of changes made to the database after the backup was taken.
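As a hedged DB2 sketch only (the database name, backup path, and timestamp are placeholders; always follow the procedure defined by your IT support group), a restore followed by a roll forward looks like this:

db2 "RESTORE DATABASE <dbname> FROM /backups TAKEN AT <timestamp>"
db2 "ROLLFORWARD DATABASE <dbname> TO END OF LOGS AND COMPLETE"   # reapply log records written after the backup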

8. Tune the database performance

One of the biggest responsibilities of a DBA is to ensure that the database is tuned properly. Any RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.

One should do performance tuning for the following reasons:

Optimize hardware usage to save money (companies are spending millions on hardware).

Refer to the product documentation provided with DB2 for more information on the methods available to tune database performance.

Ch 2 WebSphere Product Center Performance


Managing disk space

It is recommended to have 30-50 gigabytes of usable space for both the WebSphere Product Center middleware and temporary partitions.

In a clustered configuration, shared storage is necessary for the application servers. The static HTML and image files can be synchronized using a utility such as rsync, but shared storage for the web servers is also recommended.

For the application servers, $TOP, the ftp directory and the web server's document root (the location of static HTML and images) are typically on the shared device while supporting applications such as Apache, the JDK and the application server are installed on local storage. Logs can be kept on local storage or shared storage. The temp directory as specified in common.properties should be local.

Temporary files

The following directories hold temporary run-time generated files and are located on the file system:

Note: The temporary file directories may be different depending on the version of WebSphere Product Center installed.

$TOP/public_html/created_files/distributor

Example Using Linux

cd $TOP/public_html/created_files/distributor

find . -type f -mtime +7 -exec ls -l {} \;   # list the files that would be deleted

find . -type f -mtime +7 -exec rm -f {} \;   # delete the files

$TOP/public_html/suppliers/<company code>/aggregated_files/

$TOP/public_html/suppliers/<company code>/tmp_files

$TOP/logs


Caching web pages

The default installation of WebSphere Product Center is set to direct proxy servers NOT to cache pages. Directing proxies not to cache pages severely limits the use of the browser's Back button, producing error messages and expired pages. When the no-cache directive is in effect, use the GUI navigational features and avoid the Back button.

Edit the file: common.properties
Parameter: no_cache_directive=on/off

By default, the parameter is set to off.

If set to on, the response headers direct proxy servers not to cache the pages, which severely limits the use of the browser's Back button.

If set to off, the browser's Back button is functional and does not cause errors.


Hardware specifications

Hardware specification should be chosen based on best practices, past experience, and capacity requirements in order to derive optimal performance from WebSphere Product Center.

Application Server

A majority of data objects in WebSphere Product Center are stored in the database server. For this reason, disk storage on the application servers is only used to store the OS components, WebSphere Product Center executables, third-party components, WebSphere Product Center temporary work files, and WebSphere Product Center logs.

The WebSphere Product Center middleware utilizes several J2EE components, each of which can take a large amount of memory. An application server with 4 GB of memory is recommended, of which 2.5 GB would typically be utilized for an instance of the WebSphere Product Center middleware.

Database Server

The size of the database server depends on a variety of factors. These can include the number of catalog items, the number of attributes associated with each item, and the size of the catalog attributes.

A safe rule of thumb is to allocate 8 KB of space for each attribute. For example, a catalog with 500,000 items, each having 14 attributes, requires a minimum of 56 GB of database storage (500,000 items x 14 attributes x 8 KB).

This space does not include what is needed for the database binaries, undo segments, temporary table spaces, etc.

Recommended Architecture

Using a dedicated scheduler server to handle background transactions is recommended if WebSphere Product Center is used to handle large batch jobs.

Ch 3 Database Administration


Database user

The database user and password, which were created for the WebSphere Product Center installation, are defined in common.properties. Changing the password for the database user without updating the common.properties file will cause the WebSphere Product Center middleware to crash. If the password for the database user needs to be changed, be sure to update the property db_password in common.properties. Password authentication is at the operating system level in DB2.


Database backup

Backing up and recovering the database is one of the most critical operations that a database administrator (DBA) performs. For this reason, it is extremely important to implement a well-defined backup and recovery strategy. The following backup strategies are suggested to maintain optimal performance with WebSphere Product Center.

Physical backups

Taking a daily physical backup of the database is recommended. You can take an offline physical backup (image backup) or an online physical backup (hot backup) of the database, depending on the availability of system downtime. Most WebSphere Product Center databases are accessed 24/7, so there may be no downtime available for an offline backup. In DB2, the database must be running in logretain mode to take an online backup. Taking online backups enables you to recover the database to its state at a particular point in time. Refer to the DB2 product documentation for more information.
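A minimal DB2 sketch, assuming the database name and backup path are placeholders:

db2 "UPDATE DB CFG FOR <dbname> USING LOGRETAIN RECOVERY"   # enable logretain once; a full offline backup is required afterwards
db2 "BACKUP DATABASE <dbname> ONLINE TO /backups"           # daily online (hot) backup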

Logical backups

Logical backups store information about the schema objects created for a database. Using the db2move utility in DB2, you can selectively export specific objects for supplemental protection and flexibility in a database's backup strategy. Database exports are not a substitute for physical backups and cannot provide the same complete recovery advantages. Logical backups are, however, very handy for setting up QA or test instances with production data. The WebSphere Product Center DBM toolkit also has WebSphere Product Center specific instructions on taking a logical backup of the WebSphere Product Center database schema.
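For example, a logical export with db2move might look like the following (the database name and output directory are placeholders):

cd /backups/logical          # db2move writes its output files to the current directory
db2move <dbname> export      # exports table data in IXF format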


Database health check

Checking the health of the database system at regular intervals is key to high availability of the system.

Setup DB2 health center alerts

Use the DB2 Health Center to monitor the state of the database environment and make any necessary changes when needed. The health monitor continuously monitors a set of health indicators. If the current value of a health indicator is outside the acceptable operating range defined by its warning and alarm thresholds, the health monitor generates a health alert. DB2 comes with a set of predefined threshold values, which you can later customize.

The following are some of the key tasks that you can perform with the Health Center:


Database management toolkit

Several database management scripts are available to manage the WebSphere Product Center database. These scripts are put together in the form of a toolkit.

Different tasks covered in the toolkit for DB2 are:

Ch 4 Document Store

The Document Store is the area within WebSphere Product Center where every incoming and every outgoing file is stored. This includes import feeds, scripts, reports, and specification files.

The GUI structure provides hyperlinks to files that are stored in the database; the hyperlinks are essentially pointers to the location of the files.

Directories

The document store is displayed in a file structure manner. Files can be accessed from the following Document Store directories:

archives

public_html

eventprocessor

schedule_logs

feed_files

scripts

ftp

tmp

params

users

The ftp and public_html directories are file system directories that are mounted into the Document Store. They are defined in $TOP/etc/docstore_mount.xml, which provides the location of the various OS file system mount points.

The variables used are "$ftp_root_dir" and "$supplier_base_dir", which are defined in the common.properties configuration file.

Architecture

The database has a tablespace designated for the files stored in the Document store. When a file is stored in the document store, a new record in the DB is created. The database stores the file as a BLOB (Binary Large Object) file.

A BLOB file refers to any random large block of bits that needs to be stored in a database, such as a picture or sound file. The essential point about a BLOB is that it's an object that cannot be interpreted within the database itself.

The database stores BLOBs within a tablespace in the database itself. The advantage of this method is that the database protects the data, using the database server mechanisms that protect all other types of table data, such as backup-and-recovery and security mechanisms.

Managing tablespace

Space management is an ongoing task. The Document Store table will grow or shrink in size. Ensure that sufficient space is available to support the large binary files without interruption to the ongoing processing. Also, ensure that the space is being used efficiently.

Deleting files

When WebSphere Product Center deletes a BLOB file and the corresponding references, the database engine does not free the allocated space but rather reuses it for new files.

Thus, each file is stored in a storage block, and when the file is deleted, that block is reused as new files are added.

Optional GZIPing of BLOBs

To compress documents stored in BLOBs, do the following:

File to edit: common.properties

Parameter: gzip_blobs=true/false

  • Valid values are true and false
  • If absent, it defaults to false
  • If true, documents stored in blobs are compressed

Defragmentation

Due to the multiple additions and deletions of files in the Document Store, the memory blocks can become fragmented. Fragmentation occurs naturally when you use a disk frequently, creating, deleting, and modifying files.

At some point, the operating system needs to store parts of a file in noncontiguous clusters. This is entirely invisible to users, but it can slow down the speed at which data is accessed because the disk drive must search through different parts of the disk to put together a single file.

To improve Document Store performance, it is best to export and then import the DBL table using compress=y. This consolidates all of the files into one contiguous cluster, thus reducing the time needed to import files.

Note: Depending on the allocation of tablespace, defragmentation may not be needed regularly. Monitor the disk speed regularly to determine if defragmentation of the disk space is needed.

Document Store frequently asked questions

Problem: Once the BLOBs are deleted, does WebSphere Product Center performance continue to be impacted?

No. Once the rows are deleted, the slow Document Store pages improve.

Problem: Is the space still allocated causing slow exports/imports?

Yes. The only way to fix this is to export and import the DBL table with compress=y.

Ch 5 Backup and Recovery

The particular backup method and software employed is beyond the scope of this document; however, backup concepts are presented here.

Backing up WebSphere Product Center consists of two components: backing up the file system directories where WebSphere Product Center is installed and backing up the database.

WebSphere Product Center backup

To back up WebSphere Product Center, simply back up the $TOP directory as defined in common.properties. Because files change in these directories, daily backups are recommended. A backup schedule consisting of regular full backups and daily incremental backups is recommended.
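The backup software used is site-specific; as a simple illustration, a full archive of the installation directory can be taken with standard operating system tools:

tar czf /backup/wpc_top_$(date +%Y%m%d).tar.gz -C "$TOP" .   # full backup of the WebSphere Product Center installation directory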

Database backup

The manner of backing up the database is well beyond the scope of this document, particularly because of the variety of methods available: exports, hot backups, cold backups, mirroring, etc. Whatever manner is chosen, the WebSphere Product Center database user's schema, as defined in common.properties, is all that must be backed up.

Because the database must be available for WebSphere Product Center to run, it is recommended that daily online or 'hot' backups be used. Exports or offline backups should also be taken regularly.

Please see the section "Database backup" for more information on database backups.

Recovery

Recovery can be separated into two categories: recovery of WebSphere Product Center and its supporting files, and recovery of the database.

To recover WebSphere Product Center and supporting files, simply restore the missing files or directories to their original locations, and then start WebSphere Product Center.

To recover the database, take the following steps:

Ch 6 WebSphere Product Center Logger

WebSphere Product Center provides pre-configured files that generate logs, which can then be used to troubleshoot problems within WebSphere Product Center. This chapter provides an overview of the logging mechanism and explains how to set up the log files.


WebSphere Product Center services log configuration files

The following files control logging for the various subsystems within WebSphere Product Center. The location of the generated log is defined in each file.

Note: All paths are relative to $TOP

/etc/logs/eventprocessor.log.xml

/etc/logs/scheduler.log.xml

/etc/logs/system.log.xml

/etc/logs/appsvr.log.xml

/etc/logs/workflowengine.log.xml


Runtime generated logs

Runtime generated logs can be reviewed for errors, which helps to determine whether a problem is related to WebSphere Product Center or to the supporting infrastructure.

The log files generated by WebSphere Product Center are stored in $TOP/logs/*.log.

Configure log files

The properties of the WebSphere Product Center log files can be edited as needed (for example, location, maximum size, and format). The following sections describe the elements used to configure the logs and provide a list of values that may be used when configuring a WebSphere Product Center log file.

Change location

Note: Applies only to File and Rolling appenders

To change the location of a generated log file, change the parameters of the specified log configuration files.

For example:

<param name="File"   value="${TOP}/logs/webserver_db.log " />

Change file size

Note: Applies only to rolling appenders

The log file can be limited to a specified size; once the limit is reached, the file is rotated and the oldest output is purged. To control when the file begins to roll over, change the log file size parameter value.

For example:

<param name="maxFileSize" value="10MB" />

Change file backup option

Note: Applies only to rolling appenders

The logger can be configured to keep a specified number of backups of a log file. Once the maximum is reached, the oldest backup is discarded.

For example:

<param name="maxBackupIndex" value="2" />

Change conversion pattern

The layout of the logs can be changed by redefining the conversion pattern.

For example:

<param name="ConversionPattern" value=
"%d [%t] %-5p %c (%F:%L) %x- %m%n"/>

The conversion pattern is closely related to the conversion pattern of the printf function in C. A conversion pattern is composed of literal text and format control expressions called conversion specifiers.

Note: You are free to insert any literal text within the conversion pattern.


Conversion specifiers

Each conversion specifier starts with a percent sign "%" and is followed by optional format modifiers and a conversion character.

% (format modifiers)(conversion character)

For example,

%-5p [%t]: %m%n

By default the relevant information is output as is. However, with the aid of format modifiers it is possible to change the minimum field width, the maximum field width and justification.

The optional format modifier is placed between the percent sign and the conversion character. In the example the conversion specifier
%-5p means the priority of the logging event should be left justified to a width of five characters.

The first optional format modifier is the left justification flag which is just the minus (-) character. Then comes the optional minimum field width modifier. This is a decimal constant that represents the minimum number of characters to output. If the data item requires fewer characters, it is padded on either the left or the right until the minimum width is reached.

The default is to pad on the left (right justify) but you can specify right padding with the left justification flag. The padding character is space. If the data item is larger than the minimum field width, the field is expanded to accommodate the data. The value is never truncated.

This behavior can be changed using the maximum field width modifier, which is designated by a period followed by a decimal constant. If the data item is longer than the maximum field, then the extra characters are removed from the beginning of the data item and not from the end.

For example, if the maximum field width is eight and the data item is ten characters long, then the first two characters of the data item are dropped.

Note: This behavior deviates from the printf function in C where truncation is done from the end.

The following sections provide the values used to define the conversion specifiers.


Format modifiers

Below are various format modifier examples for the category conversion specifier.

Format modifier   Left justify   Min width   Max width   Comment

%20c              false          20          none        Left pad with spaces if the category name is shorter than 20 characters.

%-20c             true           20          none        Right pad with spaces if the category name is shorter than 20 characters.

%30c              n/a            none        30          Truncate from the beginning if the category name is longer than 30 characters.

%20.30c           false          20          30          Left pad with spaces if the category name is shorter than 20 characters; if it is longer than 30 characters, truncate from the beginning.

%-20.30c          true           20          30          Right pad with spaces if the category name is shorter than 20 characters; if it is longer than 30 characters, truncate from the beginning.


Conversion characters

The following conversion characters are recognized; each is listed below with its effect:

c

Used to output the category of the logging event. A precision specifier can optionally follow the category conversion specifier, which is a decimal constant in brackets.

If a precision specifier is given, then only the corresponding number of right most components of the category name will be printed. By default the category name is printed in full.

For example, for the category name "a.b.c" the pattern %c{2} will output "b.c".

d

Used to output the date of the logging event. A date format specifier enclosed between braces may follow the date conversion specifier.

For example, %d{HH:mm:ss,SSS} or %d{dd MMM yyyy HH:mm:ss,SSS}. If no date format specifier is given then ISO8601 format is assumed.

The date format specifier admits the same syntax as the time pattern string of the SimpleDateFormat. Although part of the standard JDK, the performance of SimpleDateFormat is quite poor.

For better results it is recommended to use the log4j date formatters. These can be specified using one of the strings "ABSOLUTE", "DATE" and "ISO8601" for specifying AbsoluteTimeDateFormat, DateTimeDateFormat and ISO8601DateFormat respectively. For example, %d{ISO8601} or %d{ABSOLUTE}.

These dedicated date formatters perform significantly better than SimpleDateFormat.

m

Used to output the WebSphere Product Center supplied message associated with the logging event.

n

Outputs the platform dependent line separator character or characters.

This conversion character offers practically the same performance as using non-portable line separator strings such as "\n", or "\r\n". Thus, it is the preferred way of specifying a line separator.

p

Used to output the priority of the logging event.

r

Used to output the number of milliseconds elapsed since the start of the WebSphere Product Center until the creation of the logging event.

t

Used to output the name of the thread that generated the logging event.

x

Used to output the NDC (nested diagnostic context) associated with the thread that generated the logging event.

%

The sequence %% outputs a single percent sign.


WebSphere Product Center logging setup files

The following examples demonstrate how WebSphere Product Center's log files are defined. The appender and category entries set the configuration of the WebSphere Product Center log files.

<!-- basic ASYNC appender -->
<appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
<appender-ref ref="DEFAULT"/>
</appender>

 

<!-- basic CONSOLE appender. This is the same as doing system.out-->
<appender name="STDOUT" class="org.apache.log4j.ConsoleAppender">
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value=

"[%t] %-5p %c (%F:%L) %x- %m%n"/>
</layout>
</appender>

<!-- simple FILE appender. The file will be opened and if append is true  -->
<!--                       it will not be truncated                       -->
<appender name="DEFAULT" class="org.apache.log4j.FileAppender">
   <param name="File"   value="${TOP}/logs/tomcat_default.log " />
   <param name="Append" value="true" />           
   <layout class="org.apache.log4j.PatternLayout">
     <param name="ConversionPattern" value=

"%d [%t] %-5p %c (%F:%L) %x- %m%n"/>
    </layout>       
</appender>

<!-- Rolling FILE appender. The file will be opened and if append is true  -->
<!--                        it will not be truncated                       -->
<!--                        maxFileSize: How big before you rotate         -->
<!--                        maxBackupIndex: How many backups do you keep?  -->
   <appender name="DB" class="org.apache.log4j.RollingFileAppender">
      <param name="File"   value="${TOP}/logs/tomcat_db.log " />
      <param name="Append" value="true" />           
<param name="maxFileSize" value="10MB" />
<param name="maxBackupIndex" value="2" />
<layout class="org.apache.log4j.PatternLayout">
  <param name="ConversionPattern" value=

"%d [%t] %-5p %c (%F:%L) %x- %m%n"/>
  </layout>       
</appender>

<!-- For the austin.db category, you want to have only a few logs kept hence -->
<!--  the rollingappender -->
<category name="austin.db" additivity=" false">
      <priority value="INFO" />
      <appender-ref ref="DB" />
</category>

<!-- ROOT CATEGORY -->
<!-- MUST ALWAYS BE LAST ENTRY AND HAVE AN APPENDER-->
<!-- If a logging event is not caught by any other logger it will be handled by this-->
<!-- rule. -->
<root>
      <priority value="error"/>
      <appender-ref ref="DEFAULT"/>
</root>

</log4j:configuration>

Ch 7 Enable Spell Check Feature

The spell checking functionality in WebSphere Product Center is made available by using the framework of the third-party product "Sentry Spelling Checker Engine" from WinterTree Software. For this reason, WebSphere Product Center is not bundled with any spell checker functionality, and the purchase of WinterTree Software's Sentry Spelling Checker Engine version 5.10 is required to enable spell checking.

Note: To use the spell checker feature in WebSphere Product Center, Wintertree Sentry Spelling Checker Engine Java SDK 5.10 is a prerequisite.

With the spell checker functionality enabled in this release, users can only perform spell checking within the Item Detail and Single Edit Content Authoring screens. Support for spell checker functionality in the Multi-Edit and Bulk-Edit screens will be available in a future release.

Limitations

Spell Checking functions


Enabling Spell Checker

This document describes the configuration setup required for WebSphere Product Center to work with WinterTree Software’s Spelling Service Engine version 5.10 at runtime.

Requirements

Configure WebSphere Product Center for WinterTree Spelling Engine Runtime Configuration

To configure WebSphere Product Center for WinterTree Spelling Engine Runtime configuration, three property files need to be changed:

NOTE: Once all property files have been changed, restart WebSphere Product Center to engage the Runtime Spelling Engine Configuration parameters.

common.properties

Location:  <WPC5.2_INSTALL_DIR>/etc/default/common.properties file

Values: Edit the common.properties file to include the following property values:

spell_check=true (enables the Spelling Engine)
spell_check_vendor=wintertree (sets the Spell Engine vendor to WinterTree SSCE)
spell_check_vendor_class=common.plugins.wintertree.WinterTreePlugin (sets the plugin for WinterTree SSCE)
spell_license=<license_key> (replace <license_key> with the license key of the purchased WinterTree Spelling Engine version 5.10 software)

spellservice.properties

Location: <WPC5.2_INSTALL_DIR>/etc/default/plugins/wintertree/spellservice.properties file

Values: Replace every occurrence of <WINTERTREE_INSTALL_DIR> in the MainLexicon<n> properties with the installed location of the WinterTree Spelling Engine software on your system. This configures the lexicons/dictionaries and the runtime properties of the spelling engine.

ccd.rc

Location: <WPC5.2_INSTALL_DIR>/setup/ccd.rc file

Create a symbolic link from <WPC_INSTALL_DIR>/jars/ssce.jar to the installed WinterTree jar file ssce.jar in <WINTERTREE_INSTALL_DIR>/runtime/lib, and add the following line, uncommented, to the ccd.rc file, as shown in the example below.

For example:

- AddJar $JARDIR/ssce.jar
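The symbolic link described above can be created as follows (the installation directories are placeholders, as in the text):

ln -s <WINTERTREE_INSTALL_DIR>/runtime/lib/ssce.jar <WPC_INSTALL_DIR>/jars/ssce.jar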

 

Ch 8 Security

LDAP Integration

LDAP (Lightweight Directory Access Protocol) integration improves WebSphere Product Center's security infrastructure with the introduction of three function points into WebSphere Product Center:


Feature overview

LDAP integration provides the ability to use third-party LDAP systems for authentication purposes. Given the complexity involved in using third-party LDAP capabilities for authorization, the existing authorization infrastructure within WebSphere Product Center 5.2 is used to authorize LDAP users, while authentication is handled by LDAP. The entitlements of LDAP users and roles in WPC are applied both at runtime and through user- or system-invoked script operations. An LDAP user in WebSphere Product Center is differentiated by the use of an LDAP flag.

The integration of LDAP with WebSphere Product Center provides an improved security authorization infrastructure that allows the support of 1000+ casual users that require authorization for a variety of (internal and external) roles. For example, Category Managers would be an internal role and an Assistant Brand Manager would be an external role.

For WebSphere Product Center 5.2, LDAP integration is only certified with IBM Tivoli Directory Server version 5.2 (supporting LDAP v3). However, the implementation is designed to be extendable to the following LDAP servers: Sun Java System Directory Server 5.2, WebLogic 8.1 Embedded LDAP Server, and Novell® eDirectory™ 8.7.3.

Note: There is no support for Single Sign-On capabilities in this release. Single Sign-On support is planned for a future release.

Functional overview

Assumptions

If a user is authenticated in a session, the user continues to be authenticated until the end of the session, even if the user identity changes during that period (i.e. a change in role, password, etc.).

Limitations

Locale-specific string extraction in LDAP entry searches has not been certified for this release.

Impact on migrating pre-5.2

There is a schema change to the WPC user entity as a result of introducing a new LDAP flag to differentiate between LDAP users and Product Center users.


Integrating LDAP with WebSphere Product Center

This section describes the tasks that need to be performed to integrate LDAP for IBM Tivoli Directory Server Version 5.2 with WebSphere Product Center 5.2. It is assumed that IBM Tivoli Directory Server Version 5.2 has been properly installed. The configuration of LDAP requires an LDAP schema configured for users and roles for IBM Tivoli Directory Server Version 5.2.

To integrate LDAP with WebSphere Product Center, do the following:

1. Locate configuration file packaged for LDAP configuration

2. Configure LDAP schema for users and roles for IBM Tivoli Directory Server Version 5.2

3. Edit LDAP configuration file

4. Enable the LDAP flag in WebSphere Product Center

5. Restart WebSphere Product Center

1. Locate the configuration file packaged for LDAP configuration

    <WPC5.2_INSTALL_DIR>/etc/default/ldap_config.xml

2. Configure LDAP schema for users and roles

Create a new realm

1. Create a new Realm from the IBM Tivoli Directory Server Web Administration Tool using the menu path Realms and Templates > Add Realm.

2. Complete all the required fields. 

3. Select the Object Class domain as the Parent DN.

For example:

Relative DN: cn=myrealm
Parent DN: dc=wpcdomain,dc=isl,dc=com

Create a new user template

1. Create a new User Template from the IBM Tivoli Directory Server Web Administration Tool by clicking Realms and Templates > Add User Template.

2. Key in the realm entry created above as the Parent DN. Select inetOrgPerson as the structural object class.

3. Edit the Required attribute tab to include the following required attributes:

4. Associate this User Template with the above created Realm using the menu path Realms and Templates > Manage Realms > Edit.

For example:

Parent DN: dc=wpcdomain,dc=isl,dc=com
Template DN: cn=mytemplate,dc=wpcdomain,dc=isl,dc=com


Create a new user

1. Create a new User from the IBM Tivoli Directory Server Web Administration Tool using the menu path Users and Groups > Add User.

2. Select the above-created realm as Realm for this user. 

3. Key in the "Required" attribute tab to include all the above-mentioned attributes.

Create a new group

1. Create a new Group from the IBM Tivoli Directory Server Web Administration Tool using the menu path Users and Groups > Add Group.

2. Select the previously created realm as Realm for this group. The Object class for the group is groupOfNames. 

3. Associate the Users to Groups.

3. Edit LDAP configuration file

The following LDAP configuration file is required to integrate LDAP with WebSphere Product Center:

<WPC5.2_INSTALL_DIR>/etc/default/ldap_config.xml

Edit the ldap_config.xml file for runtime LDAP authentication by replacing the values, shown in brackets, with the appropriate values of the LDAP installation.

<?xml version="1.0" encoding="UTF-8"?>
<LdapConfiguration>
    <connectionInfo>
        <connectionParam name = "java.naming.provider.url"> (Enter the LDAP server URL)</connectionParam>
        <connectionParam name = "java.naming.security.principal">(Enter username for logging into LDAP server)</connectionParam>
        <connectionParam name = "java.naming.security.credentials">(Enter password for logging into LDAP server)</connectionParam>
        <connectionParam name = "java.naming.security.authentication">simple</connectionParam>
        <connectionParam name = "java.naming.referral">follow</connectionParam>
        <connectionParam name = "java.naming.factory.initial">com.sun.jndi.ldap.LdapCtxFactory</connectionParam>
        <connectionParam name = "java.naming.ldap.version">3</connectionParam>
    </connectionInfo>

<RoleMapping>
    <Object name = "Role Class">groupOfNames</Object>
</RoleMapping>
<WPCUserCredentialMappings ParentDN="(Key in the base DN for the User Objects)" ObjectClass="inetOrgPerson">

<!-- For the current example the base DN is: cn=myrealm,dc=wpcdomain,dc=isl,dc=com -->

        <WPCUserParam name = "UserName">uid</WPCUserParam>
        <WPCUserParam name = "FirstName">cn</WPCUserParam>
        <WPCUserParam name = "LastName">sn</WPCUserParam>
        <WPCUserParam name = "Email">mail</WPCUserParam>
        <WPCUserParam name = "Address">postalAddress</WPCUserParam>
        <WPCUserParam name = "Phone">telephoneNumber</WPCUserParam>
        <WPCUserParam name = "Fax">TelexNumber</WPCUserParam>
    </WPCUserCredentialMappings>
</LdapConfiguration>

4. Enable LDAP

From the WebSphere Product Center common.properties file, enable the LDAP flag.

    For example,

    enable_ldap=true

5. Restart the System

After the completion of the previous four steps to configure LDAP, restart WebSphere Product Center.

Ch 9 Troubleshooting

Tools


Application server issues

Environment issues

The WebSphere Product Center pseudo-user on the application server must have the following environment variables configured prior to starting WebSphere Product Center:

Additionally, the shell script init_ccd_vars.sh must be sourced before WebSphere Product Center is started. This is usually done in the user's .bashrc file.

The CLASSPATH environment variable should not be modified after init_ccd_vars.sh is sourced.
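A minimal sketch of the pseudo-user's .bashrc; the path to init_ccd_vars.sh is an assumption and depends on where WebSphere Product Center is installed:

# Source the WebSphere Product Center environment before anything modifies CLASSPATH
. /opt/wpc/init_ccd_vars.sh   # path is illustrative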

Common incorrect configuration file settings

The most common error is an incorrect database specifier in common.properties. An incorrectly configured database has the following symptoms:

  • appsvr, eventprocessor, queuemanager, scheduler, and workflowengine will not start
  • Errors appear in the log files under logs/db_pool and logs/svc/

Another setting to check is smtp_address, which should point to an SMTP relay: either sendmail on the localhost, or another system capable of sending email out of the organization.

No services will start if the license file (WPC_license.xml) is missing or incorrect. This error is reflected in the log files under logs/svc.

Application server unresponsive

Scenario

The application server becomes extremely unresponsive. Although it is possible to ping the server, users cannot log into the environment and the administrator cannot log into the application server.

Things to look for:

Check to see if a user recently launched an unusually large job. If the job was intentional, check the script used by the job.


Database issues

1. Character conversion during data exports / imports

2. Database space allocation problems

3. Data blocks corruption and index corruption problems

4. Import or export hangs with no change in the status bar after a long period of time

5. After killing a running job, the application becomes very slow

6. Redo log switch problems

7. WebSphere Product Center middleware hangs and the GUI is frozen

8. Analyze schema job hangs

9. SQL connection automatically restarts


1. Character conversion during data exports / imports

Issue

When exporting/importing a database to create test environments using a copy of the database, error messages regarding the character set may appear.

Symptoms

For example, if a database using character set US7ASCII is exported, the following error message appears in the export log:

Export done in US7ASCII character set and UTF8 NCHAR character set server uses UTF8 character set (possible charset conversion)

Resolution

Whenever exporting/importing the database, set the NLS_LANG parameter to use the character set american_america.utf8.
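For example, in the shell used to run the export/import utilities (a minimal sketch):

export NLS_LANG=american_america.utf8   # make the client session use the UTF8 character set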

2. Database space allocation problems

Issue

Occasionally, import and export jobs fail because of insufficient space allocated for tables, indexes, rollback segments and temporary segments.

Symptom

If the rollback segment is full or the rollback segment tablespace is full, an error message similar to the following appears in the alert log file:

ORA-1650: unable to extend rollback segment RBS8 by 512 in tablespace RBS

Failure to extend rollback segment 9 because of 1650 condition
FULL status of rollback segment 9 set.

Resolution

3. WebSphere Product Center slows once a running job is killed

Issue

Whenever a job, such as an import or export, is killed, the database system has to roll back the entire transaction to bring the database to a consistent state. This rollback process consumes significant system resources, such as CPU time and memory.

Symptoms

The WebSphere Product Center middleware slows once a running job is killed.

Resolution

Wait until the rollback completes and the system returns to a normal state. Do not kill a running job unless it is necessary.

4. Redo log switch problems

Issue

Inadequate number/size of log files can cause the database system to wait a long time for a log switch.

Symptoms

The database system waits a very long time for a log switch because all the redo log files are active.

Resolution

5. WebSphere Product Center middleware hangs and the GUI is frozen

Issue

If errors appear when accessing the WebSphere Product Center middleware, it is possible that the connection to the database has been lost.

Symptoms

The WebSphere Product Center middleware freezes or is in a constant wait state. Errors appear when attempting to access the WebSphere Product Center middleware.

Resolution

6. Analyze schema job hangs

Issue

It is suggested to analyze the schema whenever you load a large amount of data into the database or delete/purge tables in the database.

The WebSphere Product Center middleware must be stopped before running analyze schema. If the middleware is not stopped, the analyze schema job may hang because the tables are being used by the middleware.

Symptoms

WebSphere Product Center hangs when running analyze schema.

Resolution

If the analyze schema job hangs, kill the job, stop the WebSphere Product Center middleware, run analyze schema again, and then start WebSphere Product Center.

Analyze the schema at regular intervals to collect the latest statistics about the data distribution in the database.


Monitor log files for errors

Monitoring and reviewing system log files can help diagnose and resolve many problems.

Note: This chapter will be extended in the next document version. More information on using log files and troubleshooting techniques will be provided.


Connectivity issues

HTTP post errors

When http post errors occur, consider the following:

1.    Can the WebSphere Product Center box see the target destination?

  • Use a Linux/Unix http browser such as "Lynx" and type in the WebSphere Product Center middleware URL to see if the target is accessible.
  • If a browser is not available from the WebSphere Product Center server, try telnetting to port 80 on the destination. For example, if the destination URL is http://myserver/<urlname>, type "telnet myserver 80" (port 80 is the default HTTP port on most web servers).

2.    If WebSphere Product Center can see the destination, is the WebSphere Product Center Distributor working correctly?

  • Check for the existence of new files under $TOP/public_html/created_files/distributor. Check to see if any file has the approximate timestamp of when you tried to push the file through.
  • It is possible that a runaway script had generated a bad output file. Check the file size. Does the file size correspond to what you were expecting? If the file is an XML or otherwise readable file, type it out. Does it contain the correct information that you were expecting?

3.    If the file exists, is the transfer in progress?

  • You can use different tools to see if an actual transfer is in progress. At the minimum, you will need to use a combination of "netstat" and "snoop" (under Solaris) or "tcpdump" (under Linux).
  • Manage your expectations. If the file size is 300 MB, and it is posting to a URL through the Internet, the file can only go at the top speed of the Internet connection.
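The checks described in the steps above can be combined into a quick diagnostic run from the WebSphere Product Center server; the host name, URL, and destination address below are illustrative:

lynx -dump http://myserver/<urlname>                 # can the target URL be reached?
telnet myserver 80                                   # is the destination's HTTP port open?
ls -l $TOP/public_html/created_files/distributor     # was an output file generated around the time of the post?
netstat -an | grep <destination address>             # is a transfer currently in progress?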

FTP Fetch Error

If WebSphere Product Center logs in to a target FTP server and fails to find the specified directory, the error "Unable to change to remote directory" occurs.

There are a couple of reasons for this error:

Test Java Connectivity

The JDBC URL is defined in the file common.properties. To test Java connectivity from the WebSphere Product Center middleware to the JDBC URL, use the following script:

$TOP/bin/test_java_db.sh

The script tries to connect to the database and run a simple 'select count(*) from dual'. If a connection is established, the results from the test script appear.


Other Issues

Stopping and restarting WebSphere Product Center

An issue has been reported when using the regular stop scripts under Linux/Solaris: WebSphere Product Center may not stop properly or smoothly. If this is the case, stop and start WebSphere Product Center using the following steps:

1. Attempt to gently stop WebSphere Product Center by executing the following script:

 $TOP/bin/go/stop_local.sh

2. Wait for approximately one minute, then type the following command:

ps -u <username>

3. If there are any active java processes, a scheduled job may still be in progress. If desired, let the job complete, otherwise stop it manually using the following script:

$TOP/bin/go/abort_local.sh

4. Wait for approximately thirty seconds, then type the following command:

ps -u <username>

5. If there continue to be active java processes, the JVM has most likely crashed. The java process must be killed manually using the following command:

kill `ps -u <username> | grep java | cut -b10-15`

Note: If any java processes still exist, the system may need to be restarted.

6. Once all java processes have been killed, restart the WebSphere Product Center using the following script:

$TOP/bin/go/start_local.sh

7. Wait approximately one minute and verify that WebSphere Product Center has started correctly. Run the script $TOP/bin/go/rmi_status.sh or log into the WebSphere Product Center environment.

Ch 10 Migration Framework 

A migration framework is available to migrate from WebSphere Product Center version 4.2.0.x to version 5.2. A migration framework to migrate from WebSphere Product Center 5.0 and 5.1 to 5.2 will be provided at a later time. Since there are very few known core changes between the 5.0 and 5.2 releases, it is possible to perform the migration manually if needed. Please contact your WebSphere Product Center representative for more information.

Migration from 4.2.0.x to 5.2

There are shell scripts in 4.2.0.x that aid in the export and import of all the objects for a particular company in WebSphere Product Center:

Exports

Imports

These scripts facilitate exporting all WPC objects in version 4.2.0.x as a zip file so that the same zip file can be imported into 5.2 to perform the migration.

Exporting a company

A company in WPC 4.2.0.x can be exported in two ways. 

1. Using a shell script utility named $TOP/bin/exportCompanyAsZip.sh

Usage: 

exportCompanyAsZip --company_code=<code> --script_path=<path/to/trigo/script>
where,

A sample script is given below.

envObjList = new EnvObjectList();
envObjList.addAllObjectsToExport("CATALOG");
envObjList.addAllObjectsToExport("HIERARCHY_MAPS");
envObjList.addAllObjectsToExport("MAPS");
envObjList.addAllObjectsToExport("FEEDS");
envObjList.addAllObjectsToExport("LOOKUP_TABLE");
envObjList.addAllObjectsToExport("ATTRIBUTE_COLS");
envObjList.addAllObjectsToExport("CONTAINER_ACCESSPRV");
envObjList.addAllObjectsToExport("HIERARCHY");
envObjList.addAllObjectsToExport("COMPANY_ATTRIBUTES");
envObjList.addAllObjectsToExport("SPEC");
envObjList.addAllObjectsToExport("DATASOURCE");
envObjList.addAllObjectsToExport("USERS");
envObjList.addAllObjectsToExport("ACG");
envObjList.addAllObjectsToExport("ROLES");
envObjList.addAllObjectsToExport("CATALOG_CONTENT");
envObjList.addAllObjectsToExport("HIERARCHY_CONTENT");
envObjList.addAllObjectsToExport("LOOKUP_TABLE_CONTENT");
envObjList.addAllObjectsToExport("DOC_STORE");
envObjList.addAllObjectsToExport("MY_SETTINGS");
envObjList.addAllObjectsToExport("DISTRIBUTION");
envObjList.addAllObjectsToExport("DOC_STORE");

sDocFilePath = "archives/company.zip";
exportEnv(envObjList, sDocFilePath);

2. Using the script provided above and running it directly in a WebSphere Product Center script-enabled environment (i.e. an import job, a report, or even directly in the Script Sandbox).

Certain predefined WPC objects can also be exported from a WPC environment as scripts using the $TOP/bin/exportCompany.sh shell script, which exports the objects as scripts; executing these scripts in another environment recreates the WPC objects. As part of the migration effort, however, this utility will not be used for exporting WPC objects because it is not capable of exporting WPC object content (such as item information or category information in a hierarchy).

Importing a company

Importing a company can be done in three ways.

1. Using the shell script, $TOP/bin/importCompanyFromZip.sh

Usage: importCompanyFromZip --company_code=<code> --zipfile_path=<path/to/import/archive>
where,

company_code is the company code of the company to be imported and 
zipfile_path is a location in the document store where a zip archive of a company exists.
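For example, an invocation might look like the following (the company code and archive location are illustrative only):

$TOP/bin/importCompanyFromZip.sh --company_code=acme --zipfile_path=archives/company.zip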

2. Using the WPC script operation importEnv(String sDocFilePath)
where,

sDocFilePath is a location in the document store where a zip archive of a company exists.
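A minimal script sketch of this operation is shown below; it assumes a company archive was previously exported to the document store path shown (the path itself is illustrative).

// import a company archive from the document store
sDocFilePath = "archives/company.zip";
importEnv(sDocFilePath);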

A WPC company can also be imported using the result of exportCompany.sh. However, this will not be used as part of the migration effort since exportCompany.sh cannot export WPC object content (for example, item information or category information in a hierarchy).

3. Using the Application GUI’s “Import Environment” option.

Importing Data using Application’s GUI

Impact on migration

As part of the existing import/export tool framework provided in WPC 4.2.0.x, the following WPC objects are not exported: 
Selections

The export facilities provided in 4.2.0.x will be updated to support export of these WPC objects as well.

In WPC 5.2, support is provided for Document/Literal based SOAP requests in addition to the RPC/Encoded style that already exists in the earlier version. This aspect of the migration will need to be tested.


Migration from 4.2.1 to 5.2

4.2.1 includes the official release of the Import/Export Tools. This feature provides a GUI entry point, "Import Environment", for importing data exported from a company in the same version of WebSphere Product Center into another company within WebSphere Product Center via a zip file.

An XML control file defines the order of imports. This control file is created and packaged into the zip file during the export. The recommended migration path for customers is to use exportCompanyAsZip.sh in 4.2.0.x to export all the company data. The output zip file of this script is then compatible with the "Import Environment" option or importCompanyFromZip.sh in 5.2.

Ch 11 Web Services Support

Web Services Description Language (WSDL) Support - Support for WSDL 1.2 and SOAP 1.2 request/response for simple request messages. The Collaboration Manager menu includes a Web Services module for the setup of services in the Web Service Console. Currently, SOAP over HTTP is the only supported protocol.

Definitions

WebSphere Product Center provides a scripting layer, which can be used as an API layer. These scripts can be further exposed as web-services. A web-service is created for every business function that needs to be exposed in WebSphere Product Center. A corresponding requester application is created to interact with the web service. The web service will execute one or more scripts on WebSphere Product Center, and also work with other web services to provide the desired business function.

The following diagram shows a use case of WSDL 1.2 and SOAP 1.2 request/response for simple request messages.

 

General WSDL use case

Web Services are set up using the Web Services Console. The following steps utilize a general WSDL use case.

1. Web Services Console – Click new and enter the required information for the following fields: 

2. Web Services Definition File

The web services definition file is uploaded from the Web Services Console and contains a description of the web service in WSDL 1.2 format. The web service uses SOAP 1.2 request/response encoding and the WSDL file includes the following:

Note: The Web Services definition file is published to the default HTTP server, which is the HTTP server for WebSphere Product Center. This is also where the Web Services Definition Script is published. The system provides assistance through the Help button.

3. Web Services implementation script - The Web Services implementation script is invoked by an incoming SOAP 1.2 request message complying with the above web service definition. Web Services Implementation Script undertakes the following:

4. Requestor Application Request Message - The administrator of the Requestor Application writes a process to create a SOAP 1.2 message in compliance with the web services definition described in Step #2.

5. Requestor Application Response Message - The administrator of the Requestor Application writes a process to receive and handle a SOAP 1.2 message in compliance with the web services definition described in Step #2.

Runtime events

After Web Services are set up, the following events occur:

1. User runs process in Requestor Application that triggers SOAP 1.2 message from Requestor Application to the WebSphere Product Center HTTP Server.

2. The request message only contains ID attributes such as GTIN or UPC, GLN, and Target Market.

3. WebSphere Product Center executes Web Services implementation script to parse the SOAP 1.2 request message, access product information, create a SOAP 1.2 response message, and transmit the response message.

4. (Optional) - System logs the request message and the response message in the Document Store. System logs a link to the message in the Document Store in a manner that can be accessed in the Message Console.

5. Requestor Application receives response message.

6. Requestor Application acts on response message.

WebSphere Product Center services log configuration files

The following files control various subsystems within the entire WebSphere Product Center. The location of the generated log is defined in each file.

Note: All paths are relative to $TOP

/etc/logs/eventprocessor.log.xml
/etc/logs/scheduler.log.xml
/etc/logs/system.log.xml
/etc/logs/appsvr.log.xml
/etc/logs/workflowengine.log.xml


Web Service User Interface

Web Services Console

The Web Service Console provides the ability to create and manage web services that are exposed by WebSphere Product Center. A WSDL document is written to define a service, and an implementation script is created to control how the service is executed.

Web Service console columns

The Web Service console contains the following columns:

Accessing the Web Service Console

To access the Web Services console, use the menu path:

Collaboration Manager > Web Services > Web Service Console.

Creating a new web service

To create a new web service, use the menu path: 

Collaboration Manager > Web Services > New Web Service

The "Web Service Detail" screen appears. Enter the necessary information to define the new web service. 

Web Service Detail screen details

This section defines each field of the Web Service Detail screen.

Note: Refer to the following section for changes that have been made in support for Document-Literal style for web services.

Web Service Name: Enter the name of the web service. This name becomes part of the URL of the SOAP service. It must not contain any white space. 

Web Service Description: Enter a description for the web service.

Protocol: The protocol used for the web service. Currently, SOAP over HTTP is the only supported protocol. The default value is “SOAP_HTTP”. 

Style: The style may be either “DOCUMENT_LITERAL” or “RPC_ENCODED”. The WPC script implementing a Document/Literal service is passed the entire request body and is expected to return an entire response body. The WPC script for an RPC/Encoded service is passed an array of string parameters and is expected to return a single string. RPC/Encoded services may be easier to use for simple applications, whereas Document/Literal services offer greater flexibility for more complex services.

URL: Provides the URL where the service may be accessed. This field is automatically populated after the web service has been saved. 

WSDL URL: The URL where the WSDL for the web service may be accessed. This field is automatically populated after the web service has been saved. 

WSDL: Enter WSDL for this service. A WSDL document is a description of the interface, URL and protocol of the service in XML format. You must enter this document manually, but we provide a sample WSDL document below. You must enter valid XML for the web service to save successfully. 

Implementation script: Enter a WPC script that implements this service. For an RPC/Encoded service, the incoming parameters for the service are available in the array variable "soapParams", and the return value for the service must be a string written to the “out” Writer variable. In the case of a Document/Literal service, the SOAP request body will be provided in the string variable “soapMessage”, and the response body must be written to the “out” Writer variable. For either style, write the fault code to the "soapFaultCode" Writer variable and write the fault message to the "soapFaultMsg" Writer variable to return a SOAP fault. A sample Document/Literal implementation script is provided below, and a brief RPC/Encoded sketch follows these field descriptions.

Store requests?: If this is checked, WPC will store the parameters of all incoming requests in the docstore. They will be available from the Transaction Console. 

Store replies?: If this is checked, WPC will store the content of all responses in the docstore. They will be available from the Transaction Console.

Deployed: If this is checked, the service will be deployed. Otherwise this service will not be available.
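For the RPC/Encoded style described under "Implementation script" above, the following is only a minimal sketch: it assumes the documented "soapParams", "out", "soapFaultCode" and "soapFaultMsg" variables, assumes array indexing syntax is available, and uses an illustrative "Client" fault code; it is not taken verbatim from the product samples.

// read the single string parameter passed by the caller
var ticker = soapParams[0];

// return the quote as the single string result, or raise a SOAP fault
if (ticker == "IBM") {
out.println("123.45");
}
else {
soapFaultCode.print("Client");
soapFaultMsg.print("Only quotes for IBM are supported");
}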

A sample implementation script and WSDL document

The following Document/Literal web service returns a stock quote for a given ticker symbol. This limited example will only return a value for the “IBM” ticker; all other arguments will result in a SOAP fault.

This web service endpoint is equivalent to a Java method with the following signature:

java.math.BigDecimal getStockQuote(String ticker);

Implementation Script

// parse the request document
var doc = new XmlDocument(soapMessage);

// get the ticker parameter
var ticker = parseXMLNode("ticker");

// we only give out ibm quotes around here...
if (ticker == "IBM") {
out.println("<ibm:getStockQuoteResponse xmlns:ibm=\"http://ibm.com/wpc/test/stockQuote\">");
out.println(" <ibm:response>123.45</ibm:response>");
out.println("</ibm:getStockQuoteResponse>");
}
else {
soapFaultMsg.print("Only quotes for IBM are supported");
}

WSDL

<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" 
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" 
xmlns:xs="http://www.w3.org/2001/XMLSchema" 
xmlns:y="http://ibm.com/wpc/test/stockQuote" 
targetNamespace="http://ibm.com/wpc/test/stockQuote">
<types>
<xs:schema targetNamespace="http://ibm.com/wpc/test/stockQuote" 
elementFormDefault="qualified">
<xs:element name="getStockQuote">
<xs:complexType>
<xs:sequence>
<xs:element name="ticker" type="xs:string" nillable="false"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="getStockQuoteResponse">
<xs:complexType>
<xs:sequence>
<xs:element name="response" type="xs:decimal"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
</types>
<message name="getStockQuoteRequest">
<part name="parameters" element="y:getStockQuote"/>
</message>
<message name="getStockQuoteResponse">
<part name="parameters" element="y:getStockQuoteResponse"/>
</message>
<portType name="StockQuotePortType">
<operation name="getStockQuote">
<input message="y:getStockQuoteRequest"/>
<output message="y:getStockQuoteResponse"/>
</operation>
</portType>
<binding name="StockQuoteBinding" type="y:StockQuotePortType">
<soap:binding style="document" 
transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="getStockQuote">
<soap:operation soapAction=""/>
<input>
<soap:body use="literal"/>
</input>
<output>
<soap:body use="literal"/>
</output>
</operation>
</binding>
<service name="StockQuoteService">
<port name="StockQuotePort" binding="y:StockQuoteBinding">
<soap:address 
location="http://example.wpc.ibm.com/services/StockQuoteService"/>
</port>
</service>
</definitions>

Managing transactions

Accessing the Transaction Console

To search all web service transactions, view the Transaction Console using the following menu path:

Collaboration Manager > Web Services > Transaction Console.

Viewing a web service transaction

1. From the Transaction Console, view the list of transactions from the Web Service Transactions table.

2. Click on the View button from the Response or Request columns. The transaction details appear in a new browser window.

Searching a web service transaction

1. From the Transaction Console, select a date range from the Arrival Date From and Arrival Date to fields of the "Web Service Transaction Search" table.

2. Click the Search button. All transaction results appear in the "Web Service Transactions" table below the search table.

Portal Server Integration

Supplier portal integration provides a number of benefits to retailers including:

WebSphere Product Center provides a web services framework to properly integrate with WebSphere Portal Server, which includes the following features:

Web services framework support for Portal Server

To integrate with WebSphere Portal Server, the Web Services capabilities include support for the following features:

Therefore, the Web Services framework includes the following features:

Script operations supporting Web Services 

The following script operations support WebSphere Product Center's Web Services functionality and are available in the Script Sandbox:

Note: For additional details (prototype and description) on each script operation, refer to the Script Sandbox in WebSphere Product Center.

· createWebService
· deleteWebService
· getDesc
· isDeployed
· getLoginString
· getImplScriptPath
· getName
· getProtocol
· getStoreIncoming
· getStoreOutgoing
· getStyle
· getUrl
· getWebServiceByName
· getWsdlDocPath
· getWsdlUrl
· invokeSoapServer
· saveWebService
· setDeployed
· setDesc
· setImplScriptPath
· setName
· setProtocol
· setStoreIncoming
· setStoreOutgoing
· setStyle
· setWsdlDocPath

Support for Document/Literal style

This section contains the details of Document/Literal style support for web services in WebSphere Product Center. The RPC/Encoded style of web services was already available in previous versions; however, RPC/Encoded web services only supported simple string types. To meet the demand for sending and receiving complex types, support for the Document/Literal style of web services is included in WebSphere Product Center.

How does the Document/Literal style web service work in WebSphere Product Center?

To deploy a Document/Literal style web service, a user creates a web service, which includes a WSDL defining the schema of the service and a WebSphere Product Center trigger script to invoke when a request is received. When saving the web service, the user must explicitly select that it be deployed. Upon deployment, WebSphere Product Center creates a URL for the web service where the deployed WSDL can also be accessed. The URL of the web service takes the following form:

http://<application-webserver>:<application-port-number>/services/<stored-webservice-name>

Appending the “?wsdl” string to the end of the URL yields the path to the stored WSDL for the web service.
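For example, for a hypothetical host and service name:

http://myhost.acme.com:9080/services/StockQuoteService
http://myhost.acme.com:9080/services/StockQuoteService?wsdl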

A request for a Document/Literal web service is enclosed in a SOAP envelope, and the body of the SOAP message includes the request document in its entirety. This request document must be well-formed XML and is passed to the WebSphere Product Center web service handler as-is. The caller creates this request based on the schema node of the stored WSDL for the web service being invoked.

The WebSphere Product Center web service handling mechanism receives this request and validates its contents against the WSDL schema for Document/Literal style requests. If the request does not adhere to the WSDL schema, an AxisFault is thrown. Otherwise, WebSphere Product Center removes the namespace references from the request body and passes the modified request to the WebSphere Product Center trigger script that was stored at deployment time. The namespace removal is required because the WebSphere Product Center script context cannot handle namespace-enabled XML documents. The trigger script takes the contents of the request and uses them as defined by the script author. The script must output its results as a valid response to the incoming request; the response is validated against the WSDL before the output is returned.

Example:

The Document/Literal schema would look like:

<element name="getStockQuote"/>
<complexType>
<sequence>
<element name="ticker" type="xsd:string"/>
</sequence>
</complexType>
</element>
<element name="getStockQuoteResponse"/>
<complexType>
<sequence>
<element name="response" type="xsd:decimal"/>
</sequence>
</complexType>
</element>

If the client invoked getStockQuote("IBM"), the flow would look like:

1. WebSphere Product Center receives a SOAP request from the Axis SOAP stack.

2. WebSphere Product Center validates the request message against the above schema.

3. WebSphere Product Center strips all namespace prefixes from the request body. This is not needed in this case, since the schema defines everything in the default namespace.

4. WebSphere Product Center invokes the web service trigger script. The input variables are:

- operationName = "getStockQuote"
- message = 
"<getStockQuote>
<ticker>IBM</ticker>
</getStockQuote>"

5. The trigger script writes the response to the "out" Writer:

- out = 
"<getStockQuoteResponse>
<response>83.76</response>
</getStockQuoteResponse>"

6. WebSphere Product Center validates the response against the above schema.

7. WebSphere Product Center sends the entire SOAP response back to the client through the Axis SOAP stack.

Changes done to support Document/Literal style

The following changes have been made to support the Document/Literal style.

What is the impact on migration from previous versions?

Depending on the WebSphere Product Center version, a minor database (DB2/Oracle) modification may need to be performed. Please consult your WebSphere Product Center representative for any migration issues.

Useful links about Document/Literal style of web services

http://java.sun.com/developer/technicalArticles/xml/jaxrpcpatterns/

 http://searchwebservices.techtarget.com/ateQuestionNResponse/0,289625,sid26_cid494324_tax289201,00.html

Known limitations

Namespace must be defined on schema node of WSDL due to DOM versions

Due to limitations of the XML parsing implementation shipped with WebSphere Product Center (provided by Xerces version 2.4.0), the namespace declaration must be defined locally on the schema node of the WSDL. This is noticed mostly when deploying Document/Literal style web services. For example, the following is a valid WSDL that would not be correctly recognized by WebSphere Product Center:

<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:y="http://ibm.com/wpc/test/stockQuote" targetNamespace="http://ibm.com/wpc/test/stockQuote">
<types>
<xs:schema targetNamespace="http://ibm.com/wpc/test/stockQuote" elementFormDefault="qualified">
<xs:element name="getStockQuote">
<xs:complexType>
<xs:sequence>
<xs:element name="ticker" type="xs:string" nillable="false"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="getStockQuoteResponse">
<xs:complexType>
<xs:sequence>
<xs:element name="response" type="xs:decimal"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
</types>
<message name="getStockQuoteRequest">
<part name="parameters" element="y:getStockQuote"/>
</message>
<message name="getStockQuoteResponse">
<part name="parameters" element="y:getStockQuoteResponse"/>
</message>
<portType name="StockQuotePortType">
<operation name="getStockQuote">
<input message="y:getStockQuoteRequest"/>
<output message="y:getStockQuoteResponse"/>
</operation>
</portType>
<binding name="StockQuoteBinding" type="y:StockQuotePortType">
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="getStockQuote">
<soap:operation soapAction=""/>
<input>
<soap:body use="literal"/>
</input>
<output>
<soap:body use="literal"/>
</output>
</operation>
</binding>
<service name="StockQuoteService">
<port name="StockQuotePort" binding="y:StockQuoteBinding">
<soap:address location="http://localhost/axis/services/StockQuoteService"/>
</port>
</service>
</definitions>

The WSDL would have to be written as follows to be parsed correctly:

<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:y="http://ibm.com/wpc/test/stockQuote" targetNamespace="http://ibm.com/wpc/test/stockQuote">
<types>
<xs:schema targetNamespace="http://ibm.com/wpc/test/stockQuote" xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
<xs:element name="getStockQuote">
<xs:complexType>
<xs:sequence>
<xs:element name="ticker" type="xs:string" nillable="false"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="getStockQuoteResponse">
<xs:complexType>
<xs:sequence>
<xs:element name="response" type="xs:decimal"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
</types>
<message name="getStockQuoteRequest">
<part name="parameters" element="y:getStockQuote"/>
</message>
<message name="getStockQuoteResponse">
<part name="parameters" element="y:getStockQuoteResponse"/>
</message>
<portType name="StockQuotePortType">
<operation name="getStockQuote">
<input message="y:getStockQuoteRequest"/>
<output message="y:getStockQuoteResponse"/>
</operation>
</portType>
<binding name="StockQuoteBinding" type="y:StockQuotePortType">
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="getStockQuote">
<soap:operation soapAction=""/>
<input>
<soap:body use="literal"/>
</input>
<output>
<soap:body use="literal"/>
</output>
</operation>
</binding>
<service name="StockQuoteService">
<port name="StockQuotePort" binding="y:StockQuoteBinding">
<soap:address location="http://localhost/axis/services/StockQuoteService"/>
</port>
</service>
</definitions>

Newly created Web services do not automatically deploy 

Case ID: P16473

Issue: Create a new web service and restart WebSphere Product Center. An error appears when attempting to invoke the newly created web service.

Workaround: Allow write access to the Axis configuration file "server-config.wsdd" under the "public_html/WEB-INF" directory. Additionally, for environments using WebLogic, the WebSphere Product Center instance must be deployed in expanded directory format. If this is not done, the auto-redeploy functionality of Axis will not deploy the web services created by WebSphere Product Center on restart, thus causing an error.
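A minimal sketch of the permission change, assuming the directory lives under $TOP and the file is owned by the user that runs WebSphere Product Center (adjust the path and ownership to your installation):

chmod u+w $TOP/public_html/WEB-INF/server-config.wsdd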

Enhanced WSDL: cannot change the style

Case ID: P16059

Issue: Create a web service using the DOCUMENT_LITERAL style. Save, go back to the newly created web service, change the style to RPC_ENCODED, and save again. The style DOCUMENT_LITERAL is still displayed.

This is a known limitation. A user cannot change the style of a web service that has been deployed.

The following walkthrough creates a web service of Document/Literal type and invokes it.

Running a document-literal style web service

1. Go to common.properties. Define a value for "soap_company" and "soap_user". This will be the company and user used by incoming SOAP requests to access the database and run scripts. Also, define a value for "wpc_web_url". 

For example: 

soap_company=acme
soap_user=Admin
wpc_web_url=http://myinstance.acme.com:1234/

2. Restart (bounce) WebSphere Product Center. Go to "Collaboration Manager > Web Services > New Web Service". Enter or select values as given below.

<?xml version="1.0" encoding="UTF-8"?>
<!-- edited with XMLSPY v2004 rel. 4 U (http://www.xmlspy.com) by Dave Marquard (IBM) -->
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:y="http://ibm.com/wpc/test/stockQuote" targetNamespace="http://ibm.com/wpc/test/stockQuote">
<types>
<xs:schema targetNamespace="http://ibm.com/wpc/test/stockQuote" elementFormDefault="qualified">
<xs:element name="getStockQuote">
<xs:complexType>
<xs:sequence>
<xs:element name="ticker" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="getStockQuoteResponse">
<xs:complexType>
<xs:sequence>
<xs:element name="response" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
</types>

<message name="getStockQuoteRequest">
<part name="parameters" element="y:getStockQuote"/>
</message>

<message name="getStockQuoteResponse">
<part name="parameters" element="y:getStockQuoteResponse"/>
</message>

<portType name="StockQuotePortType">
<operation name="getStockQuote">
<input message="y:getStockQuoteRequest"/>
<output message="y:getStockQuoteResponse"/>
</operation>
</portType>

<binding name="StockQuoteBinding" type="y:StockQuotePortType">
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="getStockQuote">
<soap:operation soapAction=""/>
<input>
<soap:body use="literal"/>
</input>
<output>
<soap:body use="literal"/>
</output>
</operation>
</binding>
<service name="StockQuoteService">
<port name="StockQuotePort" binding="y:StockQuoteBinding">
<soap:address location="http://localhost/axis/services/StockQuoteService"/>
</port>
</service>
</definitions>

// parse the request document
var doc = new XmlDocument(soapMessage);

// get the ticker parameter
var ticker = parseXMLNode("ibm:ticker");

// we only give out ibm quotes around here...
if (ticker == "IBM") {
out.println("<ibm:getStockQuoteResponse xmlns:ibm=\"http://ibm.com/wpc/test/stockQuote\">");
out.println("<ibm:response>123.45</ibm:response>");
out.println("</ibm:getStockQuoteResponse>");
}
else {
// optionally, a fault code can also be written to the soapFaultCode Writer
soapFaultMsg.print("Only quotes for IBM are supported");
}

3. Check "Store requests?" so that you can view the request history from the Transaction Console.

4. Check "Store replies?" so that you can view the response history from the Transaction Console.

5. Check "Deployed" to deploy the web service. The web service is not available unless this option is checked.

6. Invoke the Web Service using com.ibm.ccd.soap.test.StockQuoteTest.java

Usage: $JAVA_RT com.ibm.ccd.soap.test.StockQuoteTest <URL> <NUM_CASE>
Example: $JAVA_RT com.ibm.ccd.soap.test.StockQuoteTest http://trillian:9099/services/DocumentWebServiceTest 0

<NUM_CASE> can be any integer from 0 to 5
0 can be used to query IBM's stock quote. Refer to the StockQuoteTest.java for more details.

7. The response will be:

Called SOAP service at URL 'http://trillian:9099/services/TestingDocumentStyle14'
Request was '<ibm:getStockQuote xmlns:ibm="http://ibm.com/wpc/test/stockQuote">
<ibm:ticker>SNM</ibm:ticker>
</ibm:getStockQuote>'
SOAP call was successful.
Result was '<ibm:getStockQuoteResponse xmlns:ibm="http://ibm.com/wpc/test/stockQuote">
<ibm:response>123.45</ibm:response>
</ibm:getStockQuoteResponse>'

Ch 12 Command Line Job Invocation

A command line interface to WebSphere Product Center's scheduler has been implemented to allow integration with an external scheduler in an automated fashion if desired. Currently, this feature is certified with IBM Tivoli Workload Scheduler. It allows users to trigger a WebSphere Product Center import or export either through the command interface or through the Tivoli Workload Scheduler user interface.

Limitations: This feature is only supported for the "English" locale. Installations using "US English" as the language on the server will be able to use this feature. Other languages are not fully supported, as all messages displayed in the output console are in English. Support for all other Group 1 locales is planned for a future Fix Pack or major release of WebSphere Product Center.

Pre-requisite

There are two methods to execute WebSphere Product Center import and export jobs: the first is to use the command interface to trigger a job using the WebSphere Product Center scheduler, and the second is to use the Tivoli Workload Scheduler user interface. To use the second method, the following pre-requisites must be met:

1. Installation of Tivoli Workload Scheduler 8.2, Tivoli Management Framework 4.1 and Tivoli Management Framework Language support 4.1 on the application server.

2. Make sure the shell script "run_job_template.sh" is available under the "$TOP/bin/" folder in the environment where the scheduled jobs will run.

3. Install Tivoli Management Framework 4.1 on the desktop that will be used to run or schedule a job.

Scheduler integration with IBM Tivoli Workload Scheduler  

WebSphere Product Center's scheduler integrates with IBM Tivoli Workload Scheduler by way of a shell script provided with the WebSphere Product Center installation, $TOP/bin/run_job_template.sh. This shell script must be available in the environment where a scheduled job will run, and each job requires its own copy of the script.

For example, 

Name of Job    Associated shell script

Feed 1         run_job_feed1.sh
Feed 2         run_job_feed2.sh
DailyFeed3     run_job_dailyfeed3.sh
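For instance, a per-job script might be created by copying the template and then editing the copy as described in the next section (the file name is illustrative and matches the table above):

cp $TOP/bin/run_job_template.sh $TOP/bin/run_job_feed1.sh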

Modifying run_job_template.sh

The run_job_template.sh script requires modification for each test or production environment, according to the job to be run.

For example: 

Open run_job_feed1.sh; it contains the following snippet:

#export TOP=<Path to WPC Installation home directory> # E.g. /usr/appinstalls/wpc52
#WPC_INIT_VARS=$TOP/setup/init_ccd_vars.sh
#. $WPC_INIT_VARS

# Set the job related variables as needed and do not modify anything else after this
# CCD_JOB_NAME=<Job Name> # [Required]
# CCD_JOB_TYPE=<Job Type> # [Required, Valid values are import|export]
# CCD_COMPANY_CODE=<Company Code> # [Optional, Default Value is trigo] 
# CCD_USERNAME=<User Name> # [Optional, Default Value is Admin]
# CCD_DEBUG=<Debug on or off> # [Optional, Default Value is off]

Note: The default value for CCD_COMPANY_CODE is "trigo", which is the default company that gets created when we run create_schema.sh.

The above snippet of parameters would change to:

export TOP=/usr/trigo/wpc52_41/bin
WPC_INIT_VARS=$TOP/setup/init_ccd_vars.sh
. $WPC_INIT_VARS

# Set the job related variables as needed and do not modify anything else after this
CCD_JOB_NAME=Feed1 # [Required]
CCD_JOB_TYPE=import # [Required, Valid values are import|export]
CCD_COMPANY_CODE=test # [Optional, Default Value is trigo] 
CCD_USERNAME=m # [Optional, Default Value is Admin]
CCD_DEBUG=on # [Optional, Default Value is off]

Running a job using Tivoli Workload Scheduler

The Tivoli Workload scheduler user interface can be used to trigger a WebSphere Product Center import or export. The user interface uses a shell script to define which job to run or schedule. This section provides detailed steps on how to execute a job using the IBM Tivoli Workload Scheduler.

Before a WebSphere Product Center job can be executed using Tivoli Workload Scheduler, a task must be created to define the job.

Create a task

Creating a task will define the host that is to be used to run the scheduled job and the path to the required shell script file.

1. Open the Tivoli Desktop and, in the "Tivoli Management Environment" dialog box, enter the information for the server that contains the applications required for this scheduler integration: WebSphere Product Center 5.2, Tivoli Workload Scheduler 8.2, Tivoli Management Framework 4.1, and Tivoli Management Framework Language Support 4.1. When done, click OK.

2. Double-click the name of the host machine where the Tivoli Workload Scheduler is installed. The "Policy Region" dialog appears.

3. From the menu bar, select Create > Task Library; the Create Task Library dialog appears. Enter a name for the new Task Library and click “Create & Close”.

4. Double-click the task library to create a task. Creating a task is similar to creating a job in WebSphere Product Center.

5. From the menu bar, select Create > Task and enter a name for the task. For example, “Task for Feed 1”.

6. When editing the task, select the role that will be required to execute the task.

7. Select the platform where the task is to be executed. If the task is to run on an AIX platform, check the AIX option; the "AIX Executable for Task" section appears. Enter the path where the required shell script "run_job_template.sh" is available:

Note: Changes made to run_job_template.sh are not reflected dynamically in the task. From the "Edit Task" screen, uncheck and re-check the platform option and specify the path of the updated shell script to pick up the latest changes.

For information on making changes to the "run_job_template.sh", refer to the section "Modifying run_job_template.sh".

8. Click the button “Create & Close” to save the edits made to the task.

Execute a task

Once a task has been created in the Task Library, it can be executed manually or at a scheduled time. The task, when executed, will use its defined host and "run_job_template.sh" script file to start a WebSphere Product Center import/export.

1. From the Task Library screen, double-click on the desired task.

2. Check the option “Display on Desktop” to see the execution details and job completion status.

3. Under "Available Task Endpoints" select the appropriate host so that is available under "Selected task Endpoints"

4. Click on “Execute” and check the results in the Formatted Output screen. The desktop shows the job execution details, which can be saved to a file by clicking "Save to File...".

Scheduling a task 

Before a job can be scheduled, create a Task for the WebSphere Product Center job that needs to be scheduled.

Create a task 

1. From Tivoli desktop, click on a host and double-click on the desired Task Library to create a job.

2. Under Create menu option, select Task and enter a task name. 

3. Select the Task for which the Job needs to be scheduled.

4. Check the option “Display on Desktop” to see the execution details and job completion status.

5. Under "Available Task Endpoints" select the desired host so that it is available under "Selected Task Endpoints".

6. Click on "Create & Close".

Execute task

1. Go to the Task Library page and keep the Desktop open.

2. Drag the job “Job for Feed 1” and drop it on “Scheduler” on the desktop.

3. Enter a Job label. 

There are a few options for running the job, depending on the user's requirements. For example, the job can be scheduled to run indefinitely, or at regular time intervals. Depending on the requirement, the user can change the settings on the Add Scheduled Job page.

Assume that the user wants to schedule a job so that it runs three times at an interval of one hour (60 minutes). In such a scenario it is important to make sure the current time shows the exact server time for the time zone that is set.

4. Check "Repeat the Job" and enter "3".

5. Enter "60" against minutes.

6. To notify a particular group, check "Post Tivoli Notice", click "Available Groups", select a group from the Available Notice Groups list, and click Set to close the window.

7. Check "Post Status Dialog on Desktop" 

8. If email is already  set up in the users environment, choose to specify an email ID for notification.

9. Check "Log to File" and specify the log file path, specify the host name and the file path as desired.

10. Click on Schedule Job or Schedule Job & Close.

Checking job status

The status of a job can be viewed through the Notices feature of the IBM Tivoli Workload Scheduler. If desired, users can also log into WebSphere Product Center to check the status of a job.

1. On the Desktop page, double click on Notices. 

2. Select any specified group and click on Open.

Scheduler control through command line interface

If desired, a user can run an import or export through the WebSphere Product Center scheduler using the command line interface. This requires the user to have the necessary privileges to access the server where the scheduled jobs are to be controlled.

Running an import or export using the command line interface

To trigger a job, use the following command line:

$JAVA_RT com.ibm.ccd.scheduler.common.RunJob --job_name="aaaa" --job_type=import|export [--company_code=bbbb --username=cccc --debug=on / off]

For example:

$JAVA_RT com.ibm.ccd.scheduler.common.RunJob --job_name="Item Feed " --job_type=import --company_code=test --username=user1 --debug=on

This triggers an import job named "Item Feed" from the company called "test". 

Ch 13 Integration Best Practices

The purpose of this chapter is to summarize integration best practices in a WebSphere Product Center implementation. Using these best practices will help to achieve a high-quality and dependable integration between systems. To cover all aspects of integration, this chapter identifies the best practices associated with each aspect.

Key Elements of Integration:


Definitions and Acronyms

Integration Dimensions: The dimensions listed below can be used to understand the various kinds of integrations encountered in WebSphere Product Center implementations. The rest of the chapter will highlight which best practices or guidelines apply to which dimensions or kinds of implementations.

WebSphere Product Center as source or target system

The most obvious dimension is whether WebSphere Product Center is the source system or the destination system for the information being exchanged. A source system places its constraints on an integration, the most important of which are (a) the ability to do delta syndications, (b) the ability to initiate an integration, (c) the ability to receive a notification on the success/failure of the transmitted data and take appropriate action, and (d) the protocols and formats supported, as well as support of an EAI infrastructure.

Controlling system

We will define a controlling system as a system that takes action according to an internal trigger for an integration. An example would be WebSphere Product Center running a syndication on a scheduled basis as a job. Another example would be SAP triggering a WBI adaptor as the result of an item add. Whether WebSphere Product Center is the source or target system for an integration is completely independent of which system is the controlling system in an integration. Several cases are possible. When an intermediary, such as an FTP server or an EAI tool, is involved, both source and target systems could be controlling systems: a legacy system on a scheduled basis puts a file on an FTP server, while WebSphere Product Center on a scheduled basis picks up the file. An example of where WebSphere Product Center is a controlled destination system (i.e. it waits for something external to trigger an import of data) would be IBM WBI posting a message to WebSphere Product Center via the invoker with the message contents being an item update from Transora, let's say. An example of where WebSphere Product Center is a controlled source system (i.e. it waits for something external to trigger an export of data) is where IBM WBI polls a WebSphere Product Center queue periodically to see if there is a file ready to be picked up.

Protocol

There is a lot of confusion in WebSphere Product Center implementation teams and in customer resources between protocol, format and message vs. file-based integration. So, one goal of this document is to make sure that we've established a common nomenclature for these concepts. Examples of protocols are File Transfer Protocol (FTP), Hyper Text Transfer Protocol (HTTP), Simple Mail Transfer Protocol (SMTP, i.e. email), Java Message Service (JMS) and IBM WebSphere Message Queuing (IBM WebSphere MQ). A protocol defines such things as envelopes, encoding of data such as numbers, and expected responses, but has nothing to do with the contents being transmitted. In all integrations we should be quite clear on the protocols being used since there will always be at least one. In addition, the various stages of an integration might actually be using different protocols: the WBI adaptor in SAP might be transmitting data from SAP to the WBI InterChange Server (ICS) via HTTP, which then hands over to an IBM MQ queue manager for that data to be transmitted to another queue manager to which WebSphere Product Center is connected as an MQ client.

Format

The format in which the data is laid out is independent of the protocol. Examples of formats are Comma Separated Values (CSV), pipe delimited, eXtensible Markup Language (XML), or just some predefined field and record structure such as for Electronic Data Interchange (EDI) messages. Each format defines fields either through location/length parameters or through tags. It is important to keep in mind the encoding that might be needed for a particular format. For example, the fact that characters such as angle brackets ('<', '>') need to be escaped in XML, or that content may actually contain commas in CSV, is often forgotten in implementations.
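As an illustration (the value and field names here are made up), a single field value of 10 < 20, maybe would be carried differently in each format:

XML: <note>10 &lt; 20, maybe</note>
CSV: "10 < 20, maybe",next_field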

Size of data

This dimension is very often confused with "message-based" or "file-based" communication so it is important to get this right. "Message-based" integration is typically assumed to involve smaller exchanges of data, and properties like the following:

However, there is no clear line that can be drawn to distinguish a message-based from a file-based or batch integration, and it is important to define a clear set of dimensions and not confusing or overlapping ones. Thus, the overall size of data should be a more important dimension to consider than whether it is labeled as "message-based" or "batch" integration.

Types of communication

Another dimension to consider is the type of communication that will be involved in the integration. Synchronous communication gives direct feedback to a user or system of the result of a particular action. Using HTTP to communicate, for instance, gives automatic feedback to the system or user after an action has been posted. Asynchronous communication, on the other hand, uses more of a "fire-and-forget" strategy. If integration involves depositing a file on an FTP server, which will then be picked up by a system, for instance, there is no automatic feedback to the system depositing the file of the result of its action.

Frequency

Along with the "size of data" dimension, this "how often" dimension captures the total amount of data that will need to be processed on a periodic basis.

Integration thread

This intermediate systems and infrastructure dimension captures whether or not an EAI infrastructure is being used. In integrations with legacy systems, intermediate programs might also be written to upload data into, or extract data from, the legacy system. It is important to understand and document these intermediate systems or programs since they are most often the weakest link in the integration chain.

Especially in complex integrations that might require multiple hops (WebSphere Product Center to WBI to destination system, for example), non-standard means (such as direct database updates), multiple protocols, or other communication challenges (such as communicating through a firewall), establish a working single thread or complete path of integration early. This will identify issues and give other parties (such as network admins, or teams working on IBM WBI) ample time to resolve these connectivity issues in parallel.

The dimensions of an integration listed above should become the standard terminology for describing integration in WebSphere Product Center implementations. Documentation provided by PS teams in analysis/design stages should use these dimensions clearly and consistently.

Acronyms

Acronym    Definition

EAI        Enterprise Application Integration
FTP        File Transfer Protocol
HTTP       Hyper Text Transfer Protocol
MQ         IBM's message queuing middleware. Often referred to as IBM WebSphere MQ since all connectivity solutions are now under the WebSphere brand.
ICS        IBM's WBI InterChange Server
WBI        IBM's WebSphere Business Integration suite, the EAI suite from IBM.


Design Principles

Re-usability

The overall foundation underlying the implementation methodology of integration is re-usability. As WebSphere Product Center grows and more client implementations are done, it is necessary that we be able to quickly scale and solve integrations with both previously un-integrated systems and those that have been integrated with in previous implementations. To meet this requirement, it is necessary that we approach all integration efforts with re-usability in mind so that, should we ever need to integrate with the same system for another client, we can do so with the utmost efficiency.

Re-usability is accomplished through: (a) Leveraging EAI tools such as IBM WBI and its model of generic business objects, (b) choosing formats that are independent of the data model, (c) writing libraries of WebSphere Product Center scripts (acknowledgement, polling, etc.) that are independent of the data model and can be reused in other implementations.

Information sharing

Communication as a means to integration

Conceptually, integration can be thought of simply as a series of events that can be triggered by communication between a controlling system (for example, WebSphere Product Center) and controlled system(s). These events can be triggered through messages that are passed between the systems, automated processes that poll for content or files, or any other means of rudimentary communication. Communications would include, for instance, the type of change to be made (add, update, delete), a unique communication ID (for tracking / confirmation), and the relevant content for effecting the change either within WebSphere Product Center or within the integral system(s).

Reliability measures

In addition to passing information between systems in order to communicate changes, there should also be a means in place to communicate the success or failure of a particular transaction. Such hand-shaking communications can be implemented most intuitively with synchronous forms of communication, and allow the integrated systems to track whether a particular transaction may need to be re-sent due to failed reception at the other end and thus improve and ultimately ensure the integration's reliability.

Information formats

The specific format of these communications should be designed in a generic enough fashion that both the format and the processing functionality surrounding it can be re-used across implementations.

When considering the general format to utilize for communications between systems, it is important to note the usability of a format in light of the following needs:

Information processing

While it is conceivable that the format of information that is being sent between systems should be somewhat generic, it is understandable that not all implementations will be able to fit into a pre-defined format, especially if we are to view integration as a direct link between WebSphere Product Center and the integral system(s). To avoid a need for re-tooling of formats and mappings between formats and WebSphere Product Center specs at every implementation due to differences such as data models, it is recommended that we make use of re-usable mapping functionality between XML formats and WebSphere Product Center specs.

Using EAI platforms

One way of doing this is to make use of EAI platforms such as the WBI or webMethods suites, which allow us to build re-usable connectors so that WebSphere Product Center can, for instance, communicate via a single, completely re-usable message format (i.e. a single XML DTD). Differences that do arise due to an implementation's particulars can then be translated by WBI rather than requiring a re-tooling of WebSphere Product Center functionality to process the information. With no re-tooling of WebSphere Product Center functionality being required, the same functionality can be used across implementations.

Other options

Another factor to consider, however, is that particular clients may need to re-use a format already in use with other systems across their enterprise. In that case it is difficult to introduce a completely separate DTD that would then need to be translated for other systems in the enterprise to understand; it is better for WebSphere Product Center to make use of the DTD that already exists. In such cases, we should make use of re-usable functionality for translating between specs within WebSphere Product Center and the DTD.

It may also be that even with an EAI platform in use, this approach will be more useful to a particular client from the perspective of understanding the mapping of WebSphere Product Center -managed information to their internal DTD for communicating information, and could thus be a more ideal approach.

Event handling

Ideally an automated process within WebSphere Product Center will handle events. For instance, queuing functionality that was introduced in the WebSphere Product Center release could be used to handle both sending of messages (outbound queues) and receiving both responses and incoming messages (inbound queues). Queue processing scripts could then be utilized to handle the actual processing of the messages and thus, in effect, actually carry out the events that are to be triggered as a result of a specific message.

Event handling need not be tied directly to specific functionality or specific versions of WebSphere Product Center, however. Other means of handling events can include scheduled jobs within WebSphere Product Center that poll an FTP server, scheduled jobs that check a local file system for a file (via the Document Store), having an invoker-based trigger script fire events based on posted information, or other means. The method chosen should ultimately depend on careful consideration of the size of data and frequency dimension requirements of a particular integration.

Change tracking

To be able to implement a full synchronization between systems, there will need to be a means within WebSphere Product Center of tracking changes made to content and items that can furthermore be effectively tagged as communicated to the integral system(s) or not. For instance, if an item is deleted within WebSphere Product Center (as a source system), we likely want to be able to not only trigger a message being sent to the target system that notifies the target system it should delete the same item, but also track the success or failure of that particular communication so that WebSphere Product Center can be aware of whether the item was actually deleted from the target system.

Re-usable connectors

Connectors repository

As implementations progress and our partnerships evolve, we will gradually be building a repository of re-usable connectors to a variety of systems. Whenever possible, we should make every effort to re-use these connectors, as the functionality around processing items that flow through these connectors can then be re-used with little or no modification for a specific implementation. This will greatly speed up the execution of the implementation as a whole and enhance the overall reliability and stability of the connectors and of the implementations that use those connectors, as issues are found and resolved over time.

When integrating with systems which do not yet have connectors defined, an integration expert should be involved who can quickly build a re-usable connector that can then be used both for the integration on the specific implementation and can also be stored in the repository of connectors for later use should we ever need to integrate with the system on another implementation.

Connector usage

Connectors should be used such that any modifications that may need to be made are done via an EAI layer handling translations of any information that is being passed between systems. In other words, prior to rewriting any of the re-usable functionality within WebSphere Product Center for processing information passed through EAI, we should take advantage of the EAI platform's ability to do any necessary translations so that we do not need to re-write any functionality within WebSphere Product Center.


Implementation

Scaling the Implementation

Mini-integrations

The large task of an overall integration should be broken into much smaller, more easily manageable tasks. This can be done, for instance, by breaking a single, complete integration into much smaller integrations – from "separate" integrations for each item type (spec), to integrations for each container (catalog), all the way down to integrations for a group of attributes (if necessary). Once there is confidence that each of these "mini-integrations" works flawlessly, they can then be combined to form the single, complete integration.

Granularity of functionality

Careful attention should be paid to the levels at which integration between systems needs to occur. For instance, when sending changes to a target system one may want to be able to send all changes since a particular date, only those changes for a particular catalog since the last changes were sent, only changes that have occurred on a specific group of items, or any changes that have occurred on a specific attribute across all items. The specific requirements will be implementation-dependent, but it is important to take the granularity required into consideration early in the design process of the implementation so that it can be catered for appropriately.

Performance Tuning

General performance comments

Do not leave performance issues as an afterthought. While formats or other aspects of integration can be changed and fixed relatively easily late in the game, a performance bottleneck can require major redesign and sometimes engineering support. Put performance measurement hooks in the scripts in the due course of development.

Measuring performance

Assuming the mini-integration approach (as detailed in the Implementation section), performance should be measured at each step of the integration by measuring the total time required for each mini-integration task. Potential areas of poor performance can then be identified at an appropriately granular level and can thus be much more easily targeted for performance tweaking.

Tweaking performance

Once performance problem areas are identified, a detailed analysis should be done to determine the root cause of slow operation. Detailed analysis can be done by using tools like WebSphere Product Center's middleware profiling and the performance tab on the job detail screen. The analysis can then be applied to focus in on a particular area of a script or SQL query, and appropriate action can then be taken – modifying or rewriting the script, or involving Engineering to enhance a database query.

Validation

Stability

Advantages of mini-integrations

Implementing mini-integrations (as detailed in the Implementation section) should provide a much higher level of confidence that integration has been successful by providing a detailed listing of all areas where integration has been shown to be working. Without the visibility of mini-integrations, it is not only more difficult to provide proof of the details of integration working, but integration as a whole is also more likely to suffer from issues that are difficult to identify, diagnose and debug. Implementing mini-integrations will hence enhance the overall stability of integration.

Scalable Testing

Advantages of mini-integrations

Implementing mini-integrations (as detailed in the Implementation section) allows testing of integration to take place at a much finer level of detail so that any errors or problems that arise are not clouded by large amounts of (potentially irrelevant) complexity. Thus, as mentioned above, the processes of diagnosing, debugging, and resolving issues that are found should all be greatly sped up by this approach.

Representative vs. complete environments

Integration testing should be done on a representative environment with the same configuration as the final environment (same specs, validation rules, value rules, views), but with as few as possible representative entities (locales, catalogs, category trees, items, categories, organizations, users, roles). This should reduce the time it takes for tests to be run, screens to be loaded, and in general should speed up the time testing takes versus testing in an environment that is fully populated. All testing and debugging should be done in this environment.

Only after testing is complete in the representative environment and everything there appears to be working should integration then be verified in a complete and fully populated environment. This step should still be carried out, however, in order to ensure that no edge scenarios were accidentally ignored in the representative environment as well as to test the production-level performance of the integration.

Scalable process testing

Any schedulable job (e.g. imports, exports) should first be run with only a very small number of representative items (10 or fewer). This number should then be increased in proportion to the level of confidence gained in the script that is actually processing those items. This approach ensures that a massive job is not executed over a matter of hours, only for the person running it to find at the end that something went wrong which did not cause the entire process to fail outright within the first few minutes of running.

Only after there is full confidence in the operation of the script associated with the job should a job be run involving a complete set of data. As with the complete environment recommendation, this step should still be carried out in order to ensure that no edge scenarios were accidentally ignored in the smaller job runs as well as to test the production-level performance of the job.
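The following hypothetical Java sketch illustrates this ramp-up pattern. The runJob and selectRepresentativeItems methods are placeholders for whatever actually schedules and feeds the import or export job; they are not WebSphere Product Center API calls.

    // Hypothetical sketch only: ramp the number of items fed to a job as
    // confidence in the underlying script grows.
    import java.util.Collections;
    import java.util.List;

    public class ScalableJobTest {
        public static void main(String[] args) {
            int[] batchSizes = { 10, 100, 1000, 10000 };
            for (int size : batchSizes) {
                List<String> items = selectRepresentativeItems(size);
                if (!runJob(items)) {
                    System.out.println("Stop and debug before scaling past " + size + " items");
                    return;
                }
                System.out.println("Job succeeded with " + size + " items");
            }
        }

        // Placeholder: select a representative sample of item primary keys.
        static List<String> selectRepresentativeItems(int size) { return Collections.emptyList(); }

        // Placeholder: run the import/export job and report success or failure.
        static boolean runJob(List<String> items) { return true; }
    }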

Visibility

Reporting

Advantages of mini-integrations

Implementing mini-integrations (as detailed in the Implementation section) allows a more detailed level of reporting, because the integration is broken into smaller chunks that can be implemented more quickly. Compared to reporting on progress at the level of the entire integration, this finer level of reporting allows for more concrete and quantitative tracking of the implementation.

The mini-integrations can be listed and their relationship to the larger picture of the complete integration detailed in a chart, and then an accurate picture of the overall progress of implementation can easily be drawn from reporting on the progress of the mini-integration tasks.

Ownership

Even when working with multiple teams, assign ownership to a single person across the integration. This person's job is to ensure that the single thread is established early, that the teams are working according to the guidelines in this document, and that the incremental build/test cycle across (a) mini-integrations, (b) scalable process testing and (c) representative vs. complete environments is in sync across the various teams.

Documentation

Clearly identify formats and approach

When multiple teams are working on the integration, decide on a clear path for execution and clearly document any formats that will be used. The most common example is where a WebSphere Product Center team is working on exporting data from WebSphere Product Center, and a customer or SI team is working on uploading that data into a destination system. Do not begin work without specs for the common format, and keep this documentation up to date daily. This is an absolute requirement that the project manager must enforce.

This approach is not inconsistent with using representative environments and doing mini-integrations. Both teams should build and test incrementally to ensure steady, visible progress.


Top Ten Guidelines for WebSphere Product Center Integrations

Use clear and common terminology to describe integrations

All implementations should use the dimensions identified in the section Integration Dimensions.

Re-usability

The key to scaling lies in learning from prior integrations and packaging them with the re-usability principles described in the Re-usability section in mind.

Visibility

Establish an overall metric for reporting progress and provide a clear status update every few days at worst to the project manager.

Mini-integrations

Slice up the complexity of a large integration according to various dimensions (catalogs, attributes) that make sense for that integration. Focus on one mini-integration at a time and tie directly to the visibility metrics.

Representative vs. complete environments

Maintain a representative environment that is easy to debug and test with. Move to the complete environment only when there is confidence in the validity of scripts and specs. Tie this in with visibility metrics.

Scalable process testing

Test all jobs with small data sets to check correctness before rolling out to complete data sets. Tie this in with visibility metrics.

Performance

Without worrying about correctness of logic or formatting, run some performance tests early in the development cycle and regularly thereafter to identify issues.

Establish single thread early

Especially in complex integrations that might require multiple hops, multiple protocols, or non-standard means, establish a working single thread of integration early.

Design specs and documentation

Define and document a clear path for execution and clearly document any formats that will be used, especially when there are multiple teams working on integration.

Single owner

Even when working with multiple teams, assign ownership to a single person across the integration.


EAI Platform Integrations

Approach

Generic communications format

Whenever possible, a generic communication format should be designed or re-used from a previous project. The more general the format, the more systems can be involved in the integration without special re-working of formats required in order for all systems to communicate with each other. Of course, there can be trade-offs in performance the more general a format becomes and thus the right format for one project may not be the ideal choice for another. The Integration Dimensions should still be taken into account when determining a specific format to use.

Content mappings

As much as possible, mappings between the content model inside WebSphere Product Center and the model exposed through the communication format should be implemented through dynamically updateable means. Again, based on an investigation of the Integration Dimensions, certain project needs may dictate that these mappings cannot be made fully dynamically updateable, for instance when absolute maximum processing throughput is the highest priority. One way to implement such mappings is to use a category tree (representing an XML structure, for instance) with a single-node spec attached to it; each node of the category tree indicates the spec node path of the attribute it maps to in the WebSphere Product Center content model. A recursive processing script can then map an item into an XML file based on this category tree and its defined mappings, and with a little extra effort can even cater for nested multi-occurrences.
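The following hypothetical Java sketch illustrates the recursive mapping idea: each node of the mapping tree carries the name of the XML element it produces and, optionally, the spec node path whose value should be written into that element. The MappingNode and XmlMapper types are illustrative only and are not WebSphere Product Center APIs; a real implementation would typically be a script operating on the category tree and its single-node spec.

    // Hypothetical sketch: recursively walk a mapping tree and emit XML for one item.
    import java.util.List;
    import java.util.Map;

    class MappingNode {
        String xmlElementName;       // XML element this category tree node represents
        String specNodePath;         // spec node path in the content model (null for containers)
        List<MappingNode> children;  // nested elements
    }

    class XmlMapper {
        static String mapItem(MappingNode node, Map<String, String> itemValues) {
            StringBuilder xml = new StringBuilder("<" + node.xmlElementName + ">");
            if (node.specNodePath != null) {
                String value = itemValues.get(node.specNodePath);
                if (value != null) xml.append(value);
            }
            if (node.children != null) {
                for (MappingNode child : node.children) {
                    xml.append(mapItem(child, itemValues));  // recurse into nested structure
                }
            }
            return xml.append("</" + node.xmlElementName + ">").toString();
        }
    }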


Additional advantages

Information translation / transformation

Systems involved in integration should not need to handle, themselves, the information or content restrictions and requirements of the other systems in the integration. EAI platforms can readily handle this content translation and transformation. For instance, while WebSphere Product Center stores a FLAG value as "TRUE" or "FALSE", a system with which it integrates may store the value as "Y" or "N". The EAI platform can make these translations so that WebSphere Product Center always sends and receives TRUE/FALSE, while integrated systems always send and receive Y/N. If more systems are added to the integration later, no re-coding is required on account of those additional systems.
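As a minimal illustration, the following hypothetical Java sketch shows the kind of flag translation the EAI layer might perform; the class and method names are illustrative and are not part of any EAI product API.

    // Hypothetical sketch of the flag translation between WebSphere Product Center
    // ("TRUE"/"FALSE") and a target system ("Y"/"N").
    public class FlagTranslator {
        // Convert a WebSphere Product Center flag value to the target system's convention.
        static String toTargetSystem(String wpcFlag) {
            return "TRUE".equalsIgnoreCase(wpcFlag) ? "Y" : "N";
        }
        // Convert a target system flag value back to the WebSphere Product Center convention.
        static String toWebSphereProductCenter(String targetFlag) {
            return "Y".equalsIgnoreCase(targetFlag) ? "TRUE" : "FALSE";
        }
    }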

Client understanding

Because the integration re-uses a platform with which the client is likely already familiar, the client gains added confidence that the integration builds on known functionality, namely the EAI platform. Additionally, if a client-specific communication format already exists and is re-used for the WebSphere Product Center integration, client-side developers need little to no additional training to understand the communication format to which WebSphere Product Center will map.

Communications flexibility and reliability

Most EAI platforms have native functionality that allows communications to occur over a variety of protocols and that ensures communications are delivered by means of brokering. This allows WebSphere Product Center to focus on simply generating the document to be communicated. It does not have to support potentially different means of delivering that document to a variety of systems, nor track whether each system received the document; these concerns belong to the EAI layer and platform, and WebSphere Product Center need only be aware of them from an overall integration thread perspective.


Notices

IBM may not offer the products, services, or features discussed in this document in all countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing

IBM Corporation

North Castle Drive

Armonk, NY 10504-1785

U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law:

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Burlingame Laboratory

Director IBM Burlingame Laboratory

577 Airport Blvd., Suite 800

Burlingame, CA 94010

U.S.A

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not necessarily tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information may contain examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples may include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Programming interface information

Programming interface information, if provided, is intended to help you create application software using this program.

General-use programming interfaces allow you to write application software that obtain the services of this program's tools.

However, this information may also contain diagnosis, modification, and tuning information. Diagnosis, modification and tuning information is provided to help you debug your application software.

Warning: Do not use this diagnosis, modification, and tuning information as a programming interface because it is subject to change.

Trademarks and service marks

The following terms are trademarks or registered trademarks of International Business Machines Corporation in the United States or other countries, or both:

IBM
the IBM logo
AIX
CrossWorlds
DB2
DB2 Universal Database
Domino
Lotus
Lotus Notes
MQIntegrator
MQSeries
Tivoli
WebSphere

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

MMX, Pentium, and ProShare are trademarks or registered trademarks of Intel Corporation in the United States, other countries, or both.

Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Other company, product or service names may be trademarks or service marks of others.


IBM WebSphere Product Center contains certain Excluded Components (as defined 
in the relevant License Information document), to which the following 
additional terms apply. This software is licensed to You under the terms and 
conditions of the International Program License Agreement, subject to its 
Excluded Components provisions. IBM is required to provide the following 
notices to You in connection with this software:

i.) IBM WebSphere Product Center includes the following software that was 
licensed by IBM from the Apache Software Foundation under the terms and 
conditions of the Apache 2.0 license:

- Apache Regular Expression v1.2
- Apache Axis v1.1
- Apache XML4J v3.0.1
- Apache Log4j v1.1.1
- Apache Jakarta Commons DBCP Package v1.1
- Apache Jakarta Commons Pool Package v1.1
- Apache Jakarta Commons Collections Package v3.0

Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement You may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of Your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to Your work.

To apply the Apache License to Your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with Your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
You may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

ii.) IBM WebSphere Product Center includes the following software that was 
licensed by IBM from Scott Hudson, Frank Flannery and C. Scott Ananian under 
the following terms and conditions:

- Cup Parser Generator v0.10k

CUP Parser Generator Copyright Notice, License, and Disclaimer
Copyright 1996-1999 by Scott Hudson, Frank Flannery, C. Scott Ananian 
Permission to use, copy, modify, and distribute this software and its 
documentation for any purpose and without fee is hereby granted, provided that 
the above copyright notice appear in all copies and that both the copyright 
notice and this permission notice and warranty disclaimer appear in supporting 
documentation, and that the names of the authors or their employers not be 
used in advertising or publicity pertaining to distribution of the software 
without specific, written prior permission. The authors and their employers 
disclaim all warranties with regard to this software, including all implied 
warranties of merchantability and fitness. In no event shall the authors or 
their employers be liable for any special, indirect or consequential damages 
or any damages whatsoever resulting from loss of use, data or profits, whether 
in an action of contract, negligence or other tortious action, arising out of 
or in connection with the use or performance of this software. 

iii.) IBM WebSphere Product Center includes the following software that was 
licensed by IBM from Elliot Joel Berk and C. Scott Ananian under the following 
terms and conditions:

- JLex v1.2.6

JLEX COPYRIGHT NOTICE, LICENSE AND DISCLAIMER.
Copyright 1996-2003 by Elliot Joel Berk and C. Scott Ananian 
Permission to use, copy, modify, and distribute this software and its 
documentation for any purpose and without fee is hereby granted, provided that 
the above copyright notice appear in all copies and that both the copyright 
notice and this permission notice and warranty disclaimer appear in supporting 
documentation, and that the name of the authors or their employers not be used 
in advertising or publicity pertaining to distribution of the software without 
specific, written prior permission. The authors and their employers disclaim 
all warranties with regard to this software, including all implied warranties 
of merchantability and fitness. In no event shall the authors or their 
employers be liable for any special, indirect or consequential damages or any 
damages whatsoever resulting from loss of use, data or profits, whether in an 
action of contract, negligence or other tortious action, arising out of or in 
connection with the use or performance of this software. Java is a trademark 
of Sun Microsystems, Inc. References to the Java programming language in 
relation to JLex are not meant to imply that Sun endorses this product. 

iv.) IBM WebSphere Product Center includes the following software that was 
licensed by IBM from International Business Machines Corporation and others 
under the following terms and conditions:

- ICU4J v2.8

ICU License - ICU 1.8.1 and later
COPYRIGHT AND PERMISSION NOTICE

Copyright (c) 1995-2003 International Business Machines Corporation and others
All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, and/or sell copies of the Software, and to permit persons
to whom the Software is furnished to do so, provided that the above
copyright notice(s) and this permission notice appear in all copies of
the Software and that both the above copyright notice(s) and this
permission notice appear in supporting documentation.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
HOLDERS INCLUDED IN THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL
INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING
FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION
WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Except as contained in this notice, the name of a copyright holder
shall not be used in advertising or otherwise to promote the sale, use
or other dealings in this Software without prior written authorization
of the copyright holder.


All trademarks and registered trademarks mentioned herein are the property of 
their respective owners.