
Chapter 9: Creating a Basic HACMP Cluster


This chapter describes how to create a basic two-node cluster by using the Two-Node Cluster Configuration Assistant. This chapter contains the following sections:

  • Overview
  • Prerequisites
  • Planning a Two-Node Cluster
  • Using the Two-Node Cluster Configuration Assistant
  • Preventing Single Points of Failure
  • Where You Go from Here

    Overview

    The Two-Node Cluster Configuration Assistant enables you to configure a basic two-node HACMP cluster quickly, and using it requires very little experience with HACMP. The Assistant guides you through entering data into a few entry fields; HACMP then discovers the remaining configuration information for you.

    Note: Do not use the Assistant to reconfigure a cluster during HACMP migration.
    Note: You can also configure a cluster with a WebSphere, DB2 UDB, or Oracle application. For information, see the corresponding HACMP Smart Assist guide.

    The Two-Node Cluster Configuration Assistant is available in two forms:

  • A SMIT-based Assistant. For information about this application, see the section Using the SMIT Assistant.
  • A standalone Java application. For information about this application, see Using the Standalone Assistant.

    HACMP Cluster Definition

    The Two-Node Cluster Configuration Assistant creates an HACMP cluster definition with the following characteristics:

  • At least one network that connects the two cluster nodes. This network uses IP Address Takeover (IPAT) via Aliases, configured as a result of HACMP topology discovery. IPAT via Aliases allows a single NIC to support more than one service IP label: HACMP places a service IP label from the failed node onto a network interface of the takeover node as an IP alias, while keeping the interface’s original IP label and hardware address. IPAT uses the IP aliasing network capabilities of AIX 5L, as illustrated after this list.
    Note: The Assistant configures multiple networks if the subnets to support them exist.
  • A non-IP disk heartbeating network over an enhanced concurrent volume group (if available).
  • A single resource group that includes the following resources:
      • Two nodes, identified as localnode and remotenode. The localnode, the node from which you run the Assistant, is assigned the higher priority.
      • A service IP label that you specify in the Assistant
      • An application start script that you specify in the Assistant
      • An application stop script that you specify in the Assistant
      • An application server that you specify in the Assistant. This application server is associated with the start script and the stop script that you configure for the application.
      • All shareable volume groups discovered by HACMP. These volume groups are configured on at least one of the cluster nodes, and both cluster nodes share the disks.
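
    As a simple illustration of the underlying AIX 5L aliasing capability, the following commands add and remove an IP alias on a network interface manually. HACMP performs this aliasing automatically during fallover; the interface name and addresses shown here are hypothetical.

      ifconfig en0 alias 192.168.20.10 netmask 255.255.255.0   # add a service IP alias to en0
      ifconfig en0                                              # en0 now reports both addresses
      ifconfig en0 delete 192.168.20.10                         # remove the alias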

    For more information about resource groups, see the chapter Planning Resource Groups in the Planning Guide.
    Note: HACMP uses the application server name specified in the Assistant to create the cluster name and the resource group name. For example, if you name the application server db1, the cluster name is db1_cluster, and the resource group name is db1_group.

    Resource Group Policies

    The Assistant creates an HACMP cluster that has a single resource group. This resource group has the following policies:

  • Startup policy: The resource group goes online on the higher priority node.
  • Fallover policy: The resource group falls over to the next priority node.
  • Fallback policy: The resource group does not fall back.

    For more information about resource groups, see the chapter Planning Resource Groups in the Planning Guide.

    Prerequisites

    Before using the Two-Node Cluster Configuration Assistant to configure an HACMP cluster definition, make sure that:

  • The node running the Assistant has start and stop scripts for the application(s) to be made highly available.
  • Both nodes have TCP/IP connectivity to each other.
  • Both nodes are physically connected to all disks configured within the volume groups.
  • Both nodes have the HACMP software and the same version of the RSCT software.
  • Both nodes have a copy of the application that is to be highly available.
  • The /etc/hosts file on both nodes is configured with a service IP label/address to be specified in the Assistant.

    TCP/IP Connectivity

    To set up a cluster, the nodes require a network connection over TCP/IP.

    Copy of the Application

    The application should be configured and ready for use on each node.

    Ensure that the application licensing requirements are met. Some vendors require a unique license for each processor that runs an application, which means that you must license-protect the application by incorporating processor-specific information into the application when it is installed. As a result, even though the HACMP software processes a node failure correctly, it may be unable to restart the application on the takeover (remote) node because of a restriction on the number of licenses for that application available within the cluster. To avoid this problem, be sure that you have a license for each system unit in the cluster that may run the application.

    For additional information about potential licensing issues, see the section on Application Planning in the Planning Guide.

    Start Scripts and Stop Scripts

    When you run the Assistant, an application start script and an application stop script should be present on the node running the Assistant. If the scripts exist on only one node, the Assistant copies the start and stop scripts to the second node.

    If you want the scripts to be different on each node, create scripts on each node.
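
    For example, a minimal start and stop script pair might look like the following. The application command, script names, and paths are hypothetical; substitute the commands that start and stop your own application.

      #!/bin/ksh
      # /usr/local/hacmp/start_app.sh -- hypothetical start script called by HACMP
      /opt/myapp/bin/appctl start          # start the application
      exit $?                              # return the application's exit status to HACMP

      #!/bin/ksh
      # /usr/local/hacmp/stop_app.sh -- hypothetical stop script called by HACMP
      /opt/myapp/bin/appctl stop           # stop the application cleanly
      exit $?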

    Volume Groups

    Ensure that both nodes are physically connected to all disks that are configured within the volume groups on the node running the Assistant. These disks should appear in the AIX 5L configuration on each node.

    At least one of the shared volume groups should be an enhanced concurrent volume group. This way, HACMP can set up a non-IP network over the disk for heartbeating traffic.
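
    To check whether a shared volume group is enhanced concurrent capable, you can examine its attributes with lsvg. The volume group name below is hypothetical; an enhanced concurrent capable volume group typically reports "Enhanced-Capable" in its Concurrent attribute.

      lsvg app_vg | grep -i concurrent     # look for "Concurrent: Enhanced-Capable"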

    Note: The Assistant imports volume group definitions of any shareable volume group to both nodes.

    If the cluster does not have an enhanced concurrent mode volume group, establish a separate disk heartbeating network outside of the Assistant. After you finish running the Assistant, it displays a message if a disk heartbeating network could not be created. For information about heartbeating networks, see the chapter on Planning Cluster Network Connectivity in the Planning Guide.

    Service IP Label/Address

    Ensure that the /etc/hosts file on both nodes includes the service IP label required by the application server being configured. The service IP label is the IP label over which services are provided; HACMP keeps it available, and clients use it to access application programs. The service IP label should be on a different subnet than the IP label used at boot time on the interface.

    The /etc/hosts file on one of the nodes must contain all IP labels and associated IP addresses. The Assistant populates the /etc/hosts file on the second node if the file does not list the IP labels and associated IP addresses.
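
    For example, the /etc/hosts entries for a two-node cluster might look like the following. The labels, addresses, and subnets are hypothetical; note that the service IP label is on a different subnet than the boot-time labels.

      192.168.10.1    nodea_boot      # boot (base) address of the local node
      192.168.10.2    nodeb_boot      # boot (base) address of the takeover node
      192.168.20.10   app_svc         # service IP label kept highly available by HACMP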

    Note: The cluster configured by the Assistant uses AIX 5L support for multiple IP aliases on a NIC.

    HACMP Software

    Ensure that the HACMP software is installed on the two nodes to be included in the cluster.

    For information about installing HACMP, see Chapter 4: Installing HACMP on Server Nodes.

    Note: Running the Two-Node Cluster Configuration Assistant standalone application requires Java version 1.3 or higher. AIX 5L v.5.2 and up includes Java version 1.3 or higher.
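
    To confirm the Java level installed on a node before running the standalone application, you can query it from the command line; the exact output format varies by Java release.

      java -version        # reports the installed Java version, for example "1.3.1"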

    Planning a Two-Node Cluster

    Before you start the Two-Node Cluster Configuration Assistant, make sure you have the following information available. You can use the Two-Node Cluster Configuration Worksheet as described in the Planning Guide to record this information.

    Local Node
    The node on which the application that is to be highly available typically runs.
    You run the Two-Node Cluster Configuration Assistant from this node to set up the cluster definition.
    Takeover (Remote) Node
    The node on which the application will run should the local node be unable to run the application.
    Communication Path to Takeover Node
    A resolvable IP label (this may be the hostname), IP address, or fully qualified domain name on the takeover node. This path is used to initiate communication between the local node and the takeover node. Examples of communication paths are NodeA, 10.11.12.13, and NodeC.ibm.com.
    In the SMIT Assistant, the picklist displays the hostnames and addresses in the /etc/hosts file that are not already configured for HACMP.
    The Assistant uses this path for IP network discovery and automatic configuration of the HACMP topology.
    Application Server
    A label for the application that is to be made highly available. This label is associated with the names of the start script and the stop script for the application.
    The server name can include alphabetic and numeric characters and underscores.
    The application server name may contain no more than 24 characters. (Note that the number of characters is fewer than the number allowed in other SMIT fields to define an application. This is because the application server name in the Assistant is used as part of the name of the cluster and the resource group.)
    Application Start Script
    The name of the script that is called by the cluster event scripts to start the application server. Specify the full pathname of the script, followed by any arguments. This script must be in the same location on each cluster node that may start the server. However, the contents of the script can differ.
    The application start script name may contain no more than 256 characters.
    Application Stop Script
    The full pathname of the script (followed by arguments) that is called by the cluster event scripts to stop the server. This script must be in the same location on each cluster node that may stop the server. However, the contents of the script can differ.
    The application stop script name may contain no more than 256 characters.
    Service IP Label
    The IP label/IP address to be kept highly available.
    In the SMIT Assistant, the picklist displays the available IP labels/addresses.
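
    For example, a completed set of values for the Assistant might look like the following. All names and the service IP label are hypothetical; the application server name db1 would produce a cluster named db1_cluster and a resource group named db1_group.

      Communication Path to Takeover Node : nodeb
      Application Server                  : db1
      Application Start Script            : /usr/local/hacmp/start_app.sh
      Application Stop Script             : /usr/local/hacmp/stop_app.sh
      Service IP Label                    : app_svc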

    Using the Two-Node Cluster Configuration Assistant

    Run the Assistant from the node that is to initially run the application server to be made highly available. Both the Assistant standalone application and the SMIT-based Assistant provide detailed information to help you enter the correct information for the Assistant. The Assistant standalone application also provides tool tips for command buttons, panel controls, and data entry fields. As you complete panels in the Assistant, onscreen messages describe the progress of your configuration.

    Note: If the Assistant encounters errors during the configuration process, it does not save the configuration.
    If the Assistant encounters errors during synchronization and verification, it saves the cluster configuration even though it may be invalid. Typically, you can correct most verification errors within AIX 5L.

    Before you use the Assistant, consider completing a Two-Node Cluster Configuration Worksheet, as described in the Planning Guide, to make sure that you have the applicable information available.

    User Privileges

    Only users with root privilege can use the Assistant. Although a user who does not have root privilege can start the Assistant, that user cannot use the application to make configuration changes.

    Existing Clusters

    If there is an existing HACMP cluster on the node on which you are running the Assistant, the Assistant removes the cluster definition and saves a snapshot of the cluster definition that was removed. The snapshot file created by the Assistant uses the following naming convention:

    clustername_YYYYMMDDhhmm 
    

    For example, a snapshot named db2_cluster_200512011254 indicates that the snapshot of the configuration for the cluster db2_cluster was saved at 12:54 p.m. on December 1, 2005.

    The Assistant creates a snapshot only for valid HACMP configurations. Information about this activity is saved to the log file for the Configuration Assistant. If you are using the SMIT Assistant, a status message appears after you complete the entry fields to show that a snapshot was created and the full pathname of that file.

    The system running the Assistant requires sufficient disk space to save a snapshot. If you are using the SMIT Assistant, it prompts you for an alternate location for the file if there is insufficient disk space.

    For information about the log file for the Configuration Assistant, see the section Logging for the Two-Node Cluster Configuration Assistant.

    For information about using a cluster snapshot, see the Administration Guide.

    Using the Standalone Assistant

    If you completed a Two-Node Cluster Configuration Worksheet as part of the planning process, refer to that information when entering information in the Assistant.

    To use the Two-Node Cluster Configuration Assistant:

      1. Enter the cl_configassist command to start the Two-Node Cluster Configuration Assistant.
    The cl_configassist command resides in the /usr/es/sbin/cluster/utilities directory. For more information about this command, see its man page.
      2. Read the overview information on the HACMP Two-Node Cluster Configuration Assistant panel. Make sure that you have the information listed.
      3. Click Start.
      4. Enter values in the following panels and click Next on each panel when finished:
  • Step 1 of 4: Topology Configuration panel
  • Step 2 of 4: Application Server Configuration panel
  • Step 3 of 4: Resource Configuration panel
    For information about the values to enter, see the section Planning a Two-Node Cluster.
      5. On the Step 4 of 4: Verification and Synchronization panel, review the verification and synchronization messages.
    The verification and synchronization for the cluster in this step is the same as the standard verification and synchronization for an HACMP cluster.
      6. Click Finished if verification and synchronization complete successfully.
    or
    Click Exit if there are verification or synchronization errors. You can resolve any errors, then select the Back button to correct entries or run the Assistant again.
    Note: The Finished button is available if verification and synchronization complete successfully. The Exit button is available if there are verification and synchronization errors.

    Using the SMIT Assistant

    If you completed a Two-Node Cluster Configuration Worksheet, as described in the Planning Guide, refer to that information when entering information in the Assistant.

    To use the SMIT-based Assistant:

      1. Enter smit hacmp and select Initialization and Standard Configuration > Configuration Assistants > Two-Node Configuration Assistant.
    or
    Enter the fastpath smitty cl_configassist
    The Two-Node Configuration Assistant panel appears.
      2. Enter values for the following fields and press Enter:
  • Communication Path to Takeover Node
  • Application Server
  • Application Start Script
  • Application Stop Script
  • Service IP Label
    For information about the values to enter, see the section Planning a Two-Node Cluster.
    SMIT provides configuration status as each command is run. At command completion, SMIT automatically runs verification and synchronization. SMIT displays information about the verification and synchronization process.
    If verification and synchronization do not complete successfully, review the messages to determine the problem. Remedy any issues and then run the Assistant again.

    Logging for the Two-Node Cluster Configuration Assistant

    The Two-Node Cluster Configuration Assistant logs debugging information to the cl_configassist.log file. This log file is stored in the /var/hacmp/utilities/ directory by default and can be redirected to another location in the same way that you redirect other HACMP log files. For information about redirecting log files, see the Troubleshooting Guide.

    The Assistant stores up to 10 copies of the log file to assist with troubleshooting activities. The log files are numbered to differentiate one from the other, for example cl_configassist.log, cl_configassist.log.1, and cl_configassist.log.2.
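
    For example, to list the current and rotated Assistant log files in the default location and review the most recent entries:

      ls -lt /var/hacmp/utilities/cl_configassist.log*     # current log plus numbered copies
      tail -n 50 /var/hacmp/utilities/cl_configassist.log  # last lines of the most recent run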

    The clverify.log and smit.log files provide additional information for troubleshooting issues.

    Preventing Single Points of Failure

    A robust HACMP cluster requires configuring cluster components to prevent one from becoming a single point of failure. Review the following components to ensure that one does not become a single point of failure in your cluster:

  • Shared disks
    For information about preventing a disk from being a single point of failure, see the chapter Planning Shared Disk and Tape Devices in the Planning Guide.
  • Shared Logical Volume Manager components
    For information about preventing a volume group from being a single point of failure, see the chapter Planning Shared LVM Components in the Planning Guide.
  • Networks
    If possible, the Assistant creates a non-IP disk heartbeating network to ensure connectivity between cluster nodes. If the Assistant cannot create a disk heartbeating network, it displays a message to this effect. In this case, set up a non-IP network to transmit disk heartbeating messages.
    For information about preventing the network from being a single point of failure, see the chapter Planning Cluster Network Connectivity in the Planning Guide.

    Where You Go from Here

    After you have a basic cluster configuration in place, set up cluster security. For information about configuring cluster security, see the Administration Guide.

    You can also customize your cluster by:

  • Configuring cluster events
    See Chapter 8: Configuring AIX 5L for HACMP.
  • Setting up cluster monitoring
    See Chapter 10: Monitoring an HACMP Cluster in the Administration Guide.

