readme.txt
----------

This file explains the samples to support the CrossWorlds HA Solution on Solaris Using VERITAS.

PURPOSE:
--------

To demonstrate how the operational components of a running CrossWorlds system may be made Highly Available (HA) using a VERITAS Cluster Server on the Solaris operating system from Sun Microsystems.

SUMMARY:
--------

This sample walks through the process of setting up CrossWorlds software for the VERITAS Cluster Server (VCS).

IMPORTANT NOTE:
---------------

CrossWorlds Software provides these samples as one solution for making a CrossWorlds implementation HA. This solution has been tested and is supported by CrossWorlds only in the exact configuration described here. There are many possible variations of this configuration, however, and customers may find it necessary or desirable to substitute different hardware components and/or modify these samples to better suit local conditions. Therefore, CrossWorlds strongly encourages the customer to engage the services of professionals who have experience designing HA environments and can provide ongoing support to the customer for the HA solution they design and implement.

VERITAS is a trademark or registered trademark of VERITAS Software Corporation; Sun Microsystems and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. Neither VERITAS Software Corporation nor Sun Microsystems, Inc. has approved, authorized or been affiliated with the preparation of this readme file. The information contained in this readme file is subject to change at any time without notice. CrossWorlds makes no warranties, express, implied or otherwise, with respect to this readme file; CrossWorlds expressly disclaims any and all implied warranties regarding this readme file. In no event shall CrossWorlds be liable for any damages (including, without limitation, any indirect, special, incidental or consequential damages) relating to the use of this readme file or the information contained herein.

ASSUMPTIONS:
------------

1. Familiarity with CrossWorlds InterChange Server (ICS) and related products. You should be able to install and configure CrossWorlds products, and familiarity with CrossWorlds System Manager (CSM) is assumed. CrossWorlds is already installed and configured on your system.

2. The individuals using this feature are knowledgeable in VERITAS Cluster Server (VCS). They might achieve this knowledge, for example, by taking the VERITAS Cluster Administration training from VERITAS Educational Services.

3. CROSSWORLDS refers to the directory where your CrossWorlds system is installed.

4. All paths in this document are written for Solaris.

5. Familiarity with Solaris 7.

6. You have read the CrossWorlds 3.1 documentation and the VERITAS Cluster Server manuals.

HARDWARE CONFIGURATION:
-----------------------

a. Two Sun SPARC Enterprise Ultra-250 nodes (named vcluster1, vcluster2)

b. Two A1000 Storage Disk groups from Sun

c. One Sun workstation

d. Network connections (refer to the hardware configuration picture: CROSSWORLDS/Samples/HA/VERITAS/crossworlds.jpg)

The sample configuration has an active-active cluster with two virtual hosts. Active-active means that both hosts are active during normal operation. Both nodes have VisiBroker installed and the Smart Agent automatically started. One virtual host has the Oracle RDBMS and supporting software. The other virtual host has all the other parts of the CrossWorlds environment (ICS, connector agents, Webgateway, SNMP agent, and MQSeries). One virtual host is run on each node during non-degraded operations. Upon failure, both virtual hosts are run on the same physical host.

This is called active-active because both hosts perform some processing during normal operations. This can give the impression that all of the hardware is in use all the time, which is not exactly the case: you have to size both machines so that each can handle the entire load during a failure. Because there is no practical way to shed load during a failure, each machine must be sized to handle everything.

SOFTWARE REQUIREMENTS:
----------------------

a. Solaris 7

b. VERITAS Cluster Server 1.3 including VERITAS Volume Manager

c. VERITAS Cluster Agent HA-Oracle

d. CrossWorlds product release 3.1.1.7 or higher (ICS, connectors, SNMP agent, Webgateway, MQ (with patches), VisiBroker, and other third-party software)

e. Java (JVM) Version 1.2.2

CONFIGURATION:
--------------

The following configuration of the CrossWorlds software assumes that the components are being installed on a two-node Sun Ultra-250 cluster whose nodes are named cluster1 and cluster2.

1. VERITAS Cluster Server is installed and configured on both nodes.

2. Two disk groups (app_dg, ora_dg) for the cluster: one disk group (app_dg) for CrossWorlds and the other (ora_dg) for Oracle, both managed by VERITAS Cluster Server. That is, the VERITAS cluster software is configured and should be able to fail over each disk group between the nodes.

3. Install Oracle on ora_dg (Oracle disk group). That is:

  1. Install the Oracle Software server on cluster2 (refer to the CrossWorlds and Oracle installation manuals).
  2. Configure the Oracle VERITAS agent on the cluster2 machine and check whether it can fail over to cluster1.

 

4. Create the user account "cwadmin" on both nodes.

5. Install the CrossWorlds InterChange Server on cluster1 but on the shared disk (app_dg disk group).

6. Install VisiBroker on both nodes (cluster1, cluster2) and update the rc file so that osagent starts automatically whenever the server is rebooted. Configure the following on both nodes:

  1. edit the /opt/vbroker/adm/localaddr file on both nodes and add the virtual interface information (IP address) of app_grp to that file. You can find the interface information by typing the "ifconfig -a" command. An example of the /opt/vbroker/adm/localaddr file is:

  #entries of format <address> <subnet_mask> <broadcast address>
  10.5.1.244 255.255.255.0 10.5.1.255

  2. edit the /opt/vbroker/adm/agentaddr file on node1 and node2 and add the IP addresses of the client (e.g., CSM or other tools) machines
  3. edit the client (e.g., CSM or other tools) agentaddr file and add the virtual address of app_grp to that file
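The address-file edits in step 6 can be sketched as follows. The IP, netmask, and broadcast values are the ones from the example above; a scratch directory stands in for /opt/vbroker/adm so the sketch is safe to run anywhere. Substitute your own app_grp virtual address.

```shell
# Sketch: populate the VisiBroker address files for the app_grp virtual
# interface. ADM_DIR is a scratch stand-in for /opt/vbroker/adm.
ADM_DIR=$(mktemp -d)

# localaddr: tell the local osagent about the virtual interface of app_grp
cat >> "$ADM_DIR/localaddr" <<'EOF'
#entries of format <address> <subnet_mask> <broadcast address>
10.5.1.244 255.255.255.0 10.5.1.255
EOF

# agentaddr: addresses that clients (e.g., a CSM host) will use to reach
# the Smart Agent; here we add the app_grp virtual address
echo "10.5.1.244" >> "$ADM_DIR/agentaddr"

grep -c '10.5.1.244' "$ADM_DIR/localaddr" "$ADM_DIR/agentaddr"
```

On a real node, append the same lines to the files under /opt/vbroker/adm instead of the scratch directory.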

 

7. Install MQSeries on node 1 (cluster1) first on the shared disk, then install MQSeries on node 2 (cluster2) on the system disk and create soft links. For example, you can do the following (assume /cw1 is mounted on the shared disk app_dg):

  1. on node 1 (cluster1), make soft links from the local /opt/mqm and /var/mqm to the shared disk:

       ln -s /cw1/opt/mqm  /opt/mqm

       ln -s /cw1/var/mqm  /var/mqm

  2. install MQ 5.2 on node 1 (in the default location)
  3. start MQ on node 1 and make sure it is running fine, then stop MQ
  4. switch the app_grp to node 2
  5. on node 2, install MQ 5.2 in the default location (/opt/mqm and /var/mqm)
  6. on node 2, delete the newly installed /opt/mqm and /var/mqm directories and make soft links to the shared disk:

       ln -s /cw1/opt/mqm  /opt/mqm

       ln -s /cw1/var/mqm  /var/mqm

  7. restart MQ on this node and make sure it is running fine.
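The node-1 half of step 7 can be sketched as below. ROOT and SHARED are scratch stand-ins for / and the /cw1 shared-disk mount point, so the sketch is safe to run anywhere; on a real node you would create the links directly under /opt and /var as root.

```shell
# Sketch of relocating the MQSeries directories onto the shared disk.
# ROOT stands in for /; SHARED stands in for the shared-disk mount /cw1.
ROOT=$(mktemp -d)
SHARED="$ROOT/cw1"

mkdir -p "$SHARED/opt/mqm" "$SHARED/var/mqm" "$ROOT/opt" "$ROOT/var"

# Create the soft links BEFORE installing MQ, so the installer's writes
# to the default paths land on the shared storage.
ln -s "$SHARED/opt/mqm" "$ROOT/opt/mqm"
ln -s "$SHARED/var/mqm" "$ROOT/var/mqm"

# Anything written through the local path now lands on the shared disk:
touch "$ROOT/opt/mqm/installed.marker"
ls "$SHARED/opt/mqm"
```

Node 2 follows the same pattern, except that MQ is installed first and the real /opt/mqm and /var/mqm directories are deleted before the links are created.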

 

8. Configure the following CrossWorlds Components using CrossWorlds configuration tool:
    a. InterChange Server
    b. Connectors (Email, SAP, etc.)
    c. Webgateway_Controller
    d. SNMP

    Note:

  1. Make sure that in the InterchangeSystem.cfg file (in the [MESSAGING] section) you use the shared IP address (hostname) as the MQ hostname.
  2. Make sure to use the shared hostname for the Oracle DATA_SOURCE_NAME.

 

Start the InterChange Server on cluster1.

9. Configure all other CrossWorlds components as instructed in the CrossWorlds System Installation Guide for UNIX, and make sure they work on cluster1 before implementing the VERITAS cluster solution.

10. Fail over the app_dg disk group to the cluster2 node and make sure the CrossWorlds components work on the cluster2 node.

11. Once you've determined that the CrossWorlds components work on both nodes, move the app_dg disk group to the cluster1 node.

12. Log in as root to configure the VERITAS cluster.

13. Create the following directories in the /opt/VRTSvcs/bin directory:
a. ICS
b. Email (In this example we are installing the Email connector)
c. SNMP
d. Webgateway
e. MQ
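The five directories in step 13 can be created in a single loop. In this sketch, VCS_BIN is a scratch stand-in for /opt/VRTSvcs/bin so the commands are safe to run anywhere:

```shell
# Create one agent directory per CrossWorlds component under the VCS bin
# directory. VCS_BIN is a scratch stand-in for /opt/VRTSvcs/bin.
VCS_BIN=$(mktemp -d)

for d in ICS Email SNMP Webgateway MQ; do
  mkdir -p "$VCS_BIN/$d"
done

ls "$VCS_BIN"
```

On the real node, set VCS_BIN=/opt/VRTSvcs/bin (as root) and run the same loop.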

14. Copy the VERITAS script files to the primary node (cluster1) and change the file permissions to executable (for example: chmod 755 *):

cp /CROSSWORLDS/Samples/HA/VERITAS/ICS/* /opt/VRTSvcs/bin/ICS
cp /CROSSWORLDS/Samples/HA/VERITAS/Connector/* /opt/VRTSvcs/bin/Email
cp /CROSSWORLDS/Samples/HA/VERITAS/SNMP/* /opt/VRTSvcs/bin/SNMP
cp /CROSSWORLDS/Samples/HA/VERITAS/Webgateway/* /opt/VRTSvcs/bin/Webgateway
cp /CROSSWORLDS/Samples/HA/VERITAS/MQ/* /opt/VRTSvcs/bin/MQ

15. Modify the setup scripts for your configuration, such as changing the server name, connector names, Webgateway name, Webgateway configuration file, and so on.

Example:

Files: ICS_setup.sh, Webgateway_setup.sh, Email_setup.sh and SNMP_setup.sh 

Change the User, Home, Pword (password), Owner, ServerName, WebgatewayName, ConnectorName, SNMPName values in all the above files.

Example:

hares -modify ICS_agent User admin
hares -modify ICS_agent Home /cw1/crossworlds
hares -modify ICS_agent Pword null
hares -modify ICS_agent Owner cwadmin
hares -modify ICS_agent ServerName hacw

Change the node name to your system's node name in all the setup files.

Syntax:  hares -clear ICS_agent -sys <SYSTEM NAME>

Example:
hares -clear ICS_agent -sys vcluster1
hares -online ICS_agent -sys vcluster1

File: Connector_setup.sh

Change the following references in the Connector.sh file:
replace "Connector" with "Email" and "Connector_agent" with "Email_agent".

Change the resource dependencies in the MQ_setup.sh file according to your system:

hares -link MQ_agent app_cw2
hares -link MQ_agent app_cw1
hares -link MQ_agent app_vip

In this example, the MQ agent depends on the IP (app_vip) and Mount (app_cw1, app_cw2) resources on the vcluster1 system.

Do not change the following resource dependencies in the setup scripts: the ICS_agent resource depends on MQ_agent, while the Connector agents, SNMP_agent, and Webgateway_agent all depend on ICS_agent.

16. Create ICS, Email, SNMP, Webgateway, MQ directories on the cluster2 node under the /opt/VRTSvcs/bin directory.

17. Copy VERITAS script files to the cluster2 node:

rcp /opt/VRTSvcs/bin/ICS/* cluster2:/opt/VRTSvcs/bin/ICS
rcp /opt/VRTSvcs/bin/Email/* cluster2:/opt/VRTSvcs/bin/Email
rcp /opt/VRTSvcs/bin/SNMP/* cluster2:/opt/VRTSvcs/bin/SNMP
rcp /opt/VRTSvcs/bin/Webgateway/* cluster2:/opt/VRTSvcs/bin/Webgateway
rcp /opt/VRTSvcs/bin/MQ/* cluster2:/opt/VRTSvcs/bin/MQ
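Steps 16 and 17 repeat the same operation for each agent directory, so a loop avoids typos. This sketch is a dry run that only prints the copy commands, using the standby node name cluster2 from this sample:

```shell
# Dry run: print the rcp command for each agent directory instead of
# executing it. Remove the "echo" to perform the actual copies.
AGENT_DIRS="ICS Email SNMP Webgateway MQ"

for d in $AGENT_DIRS; do
  echo rcp /opt/VRTSvcs/bin/$d/* cluster2:/opt/VRTSvcs/bin/$d
done
```

The same loop with mkdir -p in place of the echo'd rcp also covers the directory creation in step 16.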

18. The default values of the RestartLimit, ToleranceLimit, and OnlineRetryLimit attributes (VERITAS agent attributes) are zero for the MQ agent. Do not change these defaults for the MQ agent in the MQ_setup.sh file.

19. Modify the PATH variable and export it:
PATH=$PATH:/opt/VRTSvcs/bin:/opt/VRTSvmsa/bin:/etc/VRTSvcs/conf/config
export PATH

20. Execute the following setup commands in the order below to set up the VERITAS Cluster Server for CrossWorlds:
a. /CROSSWORLDS/Samples/HA/VERITAS/MQ_setup.sh
b. /CROSSWORLDS/Samples/HA/VERITAS/ICS_setup.sh
c. /CROSSWORLDS/Samples/HA/VERITAS/Connector.sh
d. /CROSSWORLDS/Samples/HA/VERITAS/SNMP_setup.sh
e. /CROSSWORLDS/Samples/HA/VERITAS/Webgateway.sh

Example:
cd /CROSSWORLDS/samples/HA/VERITAS
./MQ_setup.sh

This will set up CrossWorlds components configured for the VERITAS Cluster Server. At this point, VERITAS should have brought online the InterChange Server, Email connector, SNMP, and Webgateway processes.

Note:

In the event of a process failure of the InterChange Server, connector agents, SNMP agent, or Webgateway, VERITAS restarts the process on the same node. In the case of an MQ process failure, VERITAS FAULTs the resource on the current cluster node and fails the app_grp over to the other node in the cluster (starting all processes there).

21. If there is any problem running the above setup scripts, you can use the following commands to delete the HA resources and rerun the setup scripts. Before deleting a resource, make sure to kill all of its processes on the UNIX system.

Example to delete ICS setup from HA VERITAS:
$haconf -makerw
$hares -delete ICS_agent
$hatype -delete ICS
$haconf -dump -makero

22. Run hagui (the VERITAS GUI tool) to verify the cluster configuration for CrossWorlds components.

RUNNING THE SAMPLE:
-------------------

1. Check that the CrossWorlds components are online on cluster1.

2. Test the offline and online option from the hagui tool for all CrossWorlds resources.

3. Test the failover to cluster2 and make sure CrossWorlds components are online on cluster2.

 

KNOWN ISSUES:
-------------

When a failover occurs, all the CrossWorlds components move from one cluster node to the other. If all your components (connector agents, Webgateway, etc.) are on the same HA machine, then all of them restart and should work normally after the restart. Pay attention, however, to remote clients (such as CSM, connector agents, and Webgateway agents) that are not on the same HA machine as ICS and MQ. In that case, some components need to be restarted and some do not. CSM and the tools need to be restarted after the failover. A connector agent that does not use MQ as its delivery transport does not need to be restarted; otherwise, the connector agent must be restarted because its MQ session is lost during the failover. To resolve this problem, you can use the "Auto restart Connector Agent by OAD" feature. With the OAD feature turned on, if the connector agent catches the MQ-connection-lost error and shuts itself down, the OAD automatically restarts the connector agent.

(c) 2001 CrossWorlds(R) Software, Inc.