Appendix A: Cluster Monitoring with Tivoli
This appendix contains instructions for making an HACMP cluster known to Tivoli in order to monitor and administer the cluster through the Tivoli management console. This process is described in the sections that follow.
Overview
You can monitor the state of an HACMP cluster and its components through your Tivoli Framework enterprise management system. Using various windows of the Tivoli interface, you can monitor the following aspects of your cluster:
Cluster state and substate
Configured networks and network state
Participating nodes and node state
Resource group location and state
Individual resource location (not state).
In addition, you can perform the following cluster administration tasks from within Tivoli:
Start cluster services on specified nodes.
Stop cluster services on specified nodes.
Bring a resource group online.
Bring a resource group offline.
Move a resource group to another node.
Setting up this monitoring requires a number of installation and configuration steps to make Tivoli aware of the HACMP cluster and to ensure proper monitoring of IP address takeover.
After you complete the installation, you can use Tivoli to monitor and administer your cluster as described in the section Monitoring Clusters with Tivoli Distributed Monitoring in Chapter 10: Monitoring an HACMP Cluster in the Administration Guide.
Before You Start
Before installing and configuring cluster monitoring with Tivoli, make sure you have fulfilled the prerequisite conditions.
Prerequisites and Considerations
When planning and configuring Cluster Monitoring with Tivoli, keep the following points in mind:
The Tivoli administrator must be set to root@persistent_label for each node in the cluster before you start installing HATivoli.
The Tivoli Management Region (TMR) should be located on an AIX 5L node outside the cluster.
The HACMP cluster nodes must be configured as managed nodes in Tivoli.
The Tivoli Framework, Distributed Monitoring, and AEF components must be installed on the Tivoli Management Region node and on each cluster node.
For proper monitoring of IP address takeover activity, the ideal configuration is to have a separate network dedicated to communication between the TMR and the cluster nodes. If you do not have a separate network dedicated to Tivoli, you must take additional steps to ensure that IPAT is monitored properly: you must use a persistent node IP label on each node and make sure it is on the same subnet as the TMR. For more information about these requirements, see the sections Using Persistent Node IP Labels with HATivoli and Subnet Considerations for Cluster Monitoring with Tivoli.
The verification utility does not check for the following conditions (you must do so manually):
Whether every node is installed with the Tivoli cluster monitoring software
Whether the oserv process is running on all of your nodes.
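A quick manual check can be run from the command line on each cluster node; this is a minimal sketch only (the fileset names correspond to those installed in Step 7):

lslpp -l "cluster.hativoli*"          # list the installed hativoli filesets, if any
ps -ef | grep -v grep | grep oserv    # show the Tivoli oserv process if it is running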
Memory and Disk Requirements for Cluster Monitoring with Tivoli
The memory required for individual Distributed Monitors for cluster components varies depending on the size of the cluster and the number of components being monitored. Consult your Tivoli documentation for more information.
Installation of the hativoli filesets requires 400 KB of disk space. Check your Tivoli documentation for additional disk space requirements for Tivoli.
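Because the hativoli scripts are installed under /usr/es/sbin/hativoli, you can confirm beforehand that the /usr filesystem has space to spare; a simple check:

df -k /usr    # the Free column shows available space in 1 KB blocks; at least 400 KB is needed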
Using Persistent Node IP Labels with HATivoli
In many cases, it is not possible to install separate physical adapters dedicated to communication with the non-cluster Tivoli Management Region node. Without a dedicated Tivoli network, proper monitoring of IPAT through Tivoli requires that each node in the cluster have a node-bound IP address defined that does not belong to any resource group and does not move between nodes.
You need to assign persistent node IP labels to a service or non-service interface on each node in HACMP, and configure Tivoli to use these persistent node IP labels. When the Tivoli oserv process starts, Tivoli uses the persistent node IP labels already configured in HACMP.
For information about how to configure persistent node IP labels in HACMP, see the section Configuring HACMP Persistent Node IP Labels/Addresses in Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended) in the Administration Guide.
Note: If you already have Tivoli set up, you may need to change the TMR’s IP address to match the subnet of the monitored nodes’ persistent node IP labels.
Subnet Considerations for Cluster Monitoring with Tivoli
For proper monitoring of IPAT, make sure the service address of the TMR node is on the same subnet as the persistent IP alias assigned to the monitored nodes. Depending on whether you are using IPAT via IP Replacement or IPAT via Aliases, other subnet considerations for the persistent node IP label vary as described in the following sections.
Subnet Requirements for Non-Aliased Networks
For non-aliased networks (networks in the cluster that use standard IPAT), the following requirements for subnets apply:
The subnet of a node’s persistent node IP label must be different from the subnet of the node’s non-service labels.
The subnet of a node’s persistent node IP label may be the same as the subnet of the node’s service labels.
For cluster monitoring through Tivoli, the subnet of the monitored node’s persistent node IP label must be the same as the subnet of the non-cluster TMR node.
Subnet Requirements for Aliased Networks
For aliased networks (networks in the cluster that use IPAT via IP Aliases), the following requirements for subnets apply:
The subnet of the persistent node IP label must be different from the subnet of the node’s non-service IP labels.
For cluster monitoring through Tivoli, the subnet of the monitored node’s persistent node IP label must be the same as the subnet of the non-cluster TMR node.
For details on IPAT via IP Aliases, see the section IP Address Takeover via IP Aliases in the chapter on Planning Cluster Network Connectivity in the Planning Guide.
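As a worked illustration of the shared requirement (all addresses and labels here are hypothetical, and a 255.255.255.0 subnet mask is assumed):

# TMR node service address:           192.168.10.2
# nodea persistent label nodea_per:   192.168.10.31   (same 192.168.10 subnet as the TMR -- OK)
# nodeb persistent label nodeb_per:   192.168.10.32   (same subnet -- OK)
# A persistent label on 192.168.20.x would not satisfy the requirement for Tivoli monitoring.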
Steps for Installing and Configuring Cluster Monitoring with Tivoli
Preparing to monitor a cluster with Tivoli involves several stages and prerequisite tasks.
The following table provides an overview of all of the steps you will take. Use this table to familiarize yourself with the “big picture” of the installation and configuration steps. Then refer to the sections that follow for details on each step.
This sequence of steps assumes an environment in which:
Tivoli has already been installed and set up.
The Tivoli configuration is being modified to monitor an HACMP cluster for the first time.
You do not have a separate network for monitoring the HACMP cluster and, therefore, need to take steps to ensure proper IPAT monitoring.
The following sections provide further details about each of the installation steps.
Step 1: Installing Required Tivoli Software
The following Tivoli software must be installed and running on the TMR node and on cluster nodes before installing the Tivoli-related HACMP filesets. See the IBM website for information on the latest supported versions:
Tivoli Managed Framework (on TMR and cluster nodes)
Tivoli Application Extension Facility (AEF) (on TMR only)
Tivoli Distributed Monitoring (on TMR and cluster nodes)
Unix Monitors
Universal Monitors.
Note: If you are doing a fresh installation of Tivoli, see the steps related to the node IP aliases. You may want to ensure now that the IP address of the Tivoli TMR is on the same subnet as the node IP aliases.
Step 2: Creating a Cluster Policy Region and Profile Manager
On the TMR, create a Policy Region and Profile Manager for HACMP monitoring to handle the HACMP cluster information.
Consult your Tivoli documentation or online help if you need instructions for performing these Tivoli tasks.
Step 3: Defining HACMP Cluster Nodes as Tivoli Managed Nodes
Configure each HACMP cluster node as a subscriber (client) node to an HACMP Profile on the Tivoli Management Region (TMR). Each configured node is then considered a “managed node” that appears in the Tivoli Policy Region window. Each managed node maintains detailed node information in its local Tivoli database, which the TMR accesses for updated node information.
Because the TMR does not recognize HACMP automatically, in the Add Clients window enter the name of an adapter known to the cluster node you are defining as a client.
Note: If you already have Tivoli configured before adding HACMP nodes, and want to monitor IP address takeover, change the IP address of the TMR node to match the subnet of the persistent node IP address aliases you assigned to the cluster nodes. For details, see the section Subnet Considerations for Cluster Monitoring with Tivoli.
Follow the procedure you would follow to install any nodes for Tivoli to manage. Refer to Tivoli documentation and online help for instructions.
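Once the nodes are added, you can verify from the TMR that each cluster node is registered as a managed node. This is a sketch only and assumes the Tivoli command-line environment has been set up (see Step 15):

. /etc/Tivoli/setup_env.sh    # set up the Tivoli command-line environment
wlookup -ar ManagedNode       # list the managed nodes registered in this TMR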
Step 4: Defining Administrators
Define the cluster nodes as Login Names in the Administrators panel. Consult your Tivoli documentation or online help if you need instructions for performing Tivoli tasks.
Step 5: Defining Other Managed Resources
At this stage, you define some other resources to be managed in addition to the cluster nodes, such as the profile manager and indicator collection, as follows:
1. In the TME Desktop initial window, click on the newly-created policy region. The Policy Region window appears.
2. From the Policy Region window, select Properties > Managed Resources. The Set Managed Resources window appears.
3. From the Available Resources list, double-click on the following items to move them to the Current Resources list:
ManagedNode
IndicatorCollection
ProfileManager
SentryProfile
TaskLibrary
4. Click Set & Close to continue.
Step 6: Adding Nodes as Subscribers to the Profile Manager
Subscribe the cluster nodes to the Tivoli Profile Manager.
1. Double-click on the new Profile Manager icon. The Profile Manager window appears.
2. Select ProfileManager > Subscribers...
3. In the Subscribers window, move your cluster node names from the Available to become Subscribers list to the Current Subscribers list.
4. Click Set Subscriptions & Close.
5. Return to the main TME Desktop window.
Step 7: Installing the HACMP Cluster Monitoring (hativoli) Filesets
If you have not done so already, install the HACMP software, and install the three cluster.hativoli filesets on both the Tivoli server node and the HACMP cluster nodes. The cluster.hativoli filesets that you can select during the SMIT installation are: cluster.hativoli.client, cluster.hativoli.server, and cluster.msg.en_US.hativoli.
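If you prefer the command line to SMIT, the same filesets can be installed with installp; this is a sketch only, and the installation device (/dev/cd0) is an assumption to be replaced with your own media location:

# Install the three hativoli filesets (device name is hypothetical)
installp -acgXd /dev/cd0 cluster.hativoli.client cluster.hativoli.server cluster.msg.en_US.hativoli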
Step 8: Configuring the HACMP Cluster
If you have not done so already, configure the HACMP cluster and synchronize. Make sure to configure a persistent node IP alias on each node in the cluster.
Step 9: Placing the Node IP Alias in the /etc/hosts and /etc/wlocalhost Files
Make sure each node has a persistent IP alias with a subnet that matches that of the TMR adapter. Make sure this alias is included in the /etc/hosts file, the ipaliases.conf file (see next section), and the Tivoli /etc/wlocalhost files. Note that if the /etc/wlocalhost file was not created earlier in Tivoli, create it now.
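For illustration only (the address, label, and subnet are hypothetical), the entries on a cluster node might look like the following, with the persistent alias on the same subnet as the TMR:

# /etc/hosts entry for the persistent node IP alias
192.168.10.31   nodea_per
# /etc/wlocalhost contains the label Tivoli uses for this node
nodea_per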
Note: At this point, you may need to change the IP address of the TMR so that the TMR can communicate with the alias IP address on the cluster nodes. Refer to your Tivoli documentation or customer support for additional help.
Step 10: Creating the ipaliases.conf File
To monitor IPAT without a dedicated network, create a file called /usr/es/sbin/hativoli/ipaliases.conf and copy it to each cluster node. This file must contain the network name you will be using for the IP aliases, and the name of each cluster node with its alias label. For example:
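The sketch below illustrates the kind of content described above; the network name, node names, and alias labels are hypothetical, and you should verify the exact layout against your HACMP documentation:

# /usr/es/sbin/hativoli/ipaliases.conf (hypothetical values)
# network name used for the IP aliases, then one line per node: <node name> <alias label>
net_ether_01
nodea   nodea_per
nodeb   nodeb_per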
Step 11: Starting the oserv Process
Start the Tivoli oserv process on all nodes. Note that the oserv process will not start if the alias IP address is not configured.
Note: The Tivoli oserv process must be running at all times in order to update the cluster information accurately. Set up a way to monitor the state of the oserv process. For information about defining an HACMP application monitor, see Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended) in the Administration Guide.
To start oserv, run the following command on each node:
/etc/Tivoli/oserv.rc start
Upon reintegration of a node, HACMP automatically sets the alias to an adapter (if applicable) and then starts the oserv process. For HACMP to do this, the failed node must be part of a non-concurrent resource group.
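To confirm that oserv came up, you can check for the process on each cluster node and, assuming the Tivoli environment has been sourced on the TMR, list the dispatchers that have connected; a sketch:

ps -ef | grep -v grep | grep oserv    # on each cluster node: the oserv daemon should be listed
odadmin odlist                        # on the TMR: lists the object dispatchers known to the region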
Step 12: Saving Previous Node Properties Customizations
If you previously customized the node properties displayed in the Tivoli Cluster Managed Node window, those customizations will be lost when the hativoli scripts are installed, so make sure they are saved first.
HACMP automatically saves a copy of your parent dialog. If you need to restore earlier customizations, find the saved file in /usr/es/sbin/hativoli/ParentDialog.dsl.save.
Step 13: Running Additional hativoli Install Scripts
You now run three additional install scripts as follows. Note the node(s) on which you run each script, and note that you synchronize cluster resources after step 1.
1. Run /usr/es/sbin/hativoli/bin/install on any ONE cluster node:
You are prompted to select the Region, the Profile Manager, and the Indicator Collection, which you set up earlier on the TMR. There may be a delay of up to several minutes while the system creates and distributes profiles and indicators.
2. Run /usr/es/sbin/hativoli/AEF/install on the TMR node.
3. Run /usr/es/sbin/hativoli/AEF/install_aef_client on ALL cluster nodes.
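To summarize the sequence described above (synchronize cluster resources after the first script, as noted):

# On any ONE cluster node:
/usr/es/sbin/hativoli/bin/install
#   ...then synchronize cluster resources
# On the TMR node:
/usr/es/sbin/hativoli/AEF/install
# On ALL cluster nodes:
/usr/es/sbin/hativoli/AEF/install_aef_client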
Step 14: Starting Cluster Services
Start cluster services on each cluster node.
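Cluster services are typically started through SMIT on each node; the fastpath below is a sketch and assumes the standard HACMP SMIT menus are available:

smit clstart    # opens the Start Cluster Services menu on the node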
Step 15: Starting Tivoli
If Tivoli is not already running, start Tivoli by performing these steps on the TMR node:
1. Make sure access control has been granted to remote nodes by running the xhost command with the plus sign (+) or with specified nodes. This will allow you to open a SMIT window from Tivoli.
If you want to grant access to all computers in the network, type:
xhost +
or, to grant access only to specific nodes, run xhost with the names of the nodes to be given access.
2. To ensure that SMIT windows can be displayed later, set DISPLAY=<TMR node>.
3. Run the command . /etc/Tivoli/setup_env.sh if it was not run earlier.
4. Type tivoli to start the application.
The Tivoli graphical user interface appears, showing the initial TME Desktop window.
Note that there may be a delay as Tivoli adds the indicators for the cluster.
Removing Cluster Monitoring with Tivoli
To discontinue cluster monitoring with Tivoli, perform the following steps to delete the HACMP-specific information from Tivoli:
1. Run an uninstall through the SMIT interface to remove the three hativoli filesets from all cluster nodes and the TMR.
2. If it is not already running, start Tivoli on the TMR:
Enter . /etc/Tivoli/setup_env.sh
Enter tivoli
3. From the Policy Region for the cluster, open the Modify HATivoli Properties task library.
4. A window appears containing task icons.
5. Select Edit > Select All to select all tasks, and then select Edit > Delete to delete them. The Operations Status window at the left shows the progress of the deletions.
6. Return to the Properties window and delete the Modify HATivoli Properties task icon.
7. Repeat steps 3 through 6 for the Cluster Services task library.
8. Open the Profile Manager.
9. Select Edit > Profiles > Select All to select all HACMP Indicators.
10. Select Edit > Profiles > Delete to delete the Indicators.
11. Unsubscribe the cluster nodes from the Profile Manager as follows:
In the Profile Manager window, select Subscribers.
Highlight each HACMP node on the left, and click to move it to the right side.
Click Set & Close to unsubscribe the nodes.
Where You Go from Here
If the installation procedure has been completed successfully, Tivoli can now begin monitoring your cluster.
For information about monitoring and administering an HACMP cluster through the Tivoli management console, see the chapter on Monitoring an HACMP Cluster, in the Administration Guide.