Chapter 3: Configuring an HACMP Cluster (Standard)
This chapter describes how to configure an HACMP cluster using the SMIT Initialization and Standard Configuration path.
Have your planning worksheets ready to help you through the configuration process. See the Planning Guide for details if you have not completed this step.
The main sections in this chapter include:
Overview
Steps for Configuring a Cluster Using the Initialization and Standard Configuration Path
Defining HACMP Cluster Topology (Standard)
Configuring HACMP Resources (Standard)
Configuring HACMP Resource Groups (Standard)
Configuring Resources in Resource Groups (Standard)
Verifying and Synchronizing the Standard Configuration
Viewing the HACMP Configuration
Additional Configuration Tasks
Testing Your Configuration
Overview
Using the options under the SMIT Initialization and Standard Configuration menu, you can add the basic components of a cluster to the HACMP Configuration Database (ODM) in a few steps. This HACMP configuration path significantly automates the discovery and selection of configuration information and chooses default behaviors.
If you are setting up a basic two-node cluster, use the Two-Node Cluster Configuration Assistant to simplify the process for configuring a two-node cluster. For more information, see the section on Using the Two-Node Cluster Configuration Assistant in the chapter on Creating a Basic HACMP Cluster in the Installation Guide.
You can also use the General Configuration Smart Assist to quickly set up your application. You are not limited to a two-node cluster with this Assist.
You can use either ASCII SMIT or WebSMIT to configure the cluster. For more information on WebSMIT, see Chapter 2: Administering a Cluster Using WebSMIT.
Prerequisite Tasks for Using the Standard Path
Before using the Standard Configuration path, HACMP must be installed on all the nodes, and connectivity must exist between the node where you are performing the configuration and all other nodes to be included in the cluster. That is, network interfaces must be both physically and logically configured (to AIX 5L) so that you can successfully communicate from one node to each of the other nodes. The HACMP discovery process runs on all server nodes, not just the local node.
Once you have configured and powered on all disks, communication devices, serial networks and also configured communication paths to other nodes in AIX 5L, HACMP automatically collects information about the physical and logical configuration and displays it in corresponding SMIT picklists, to aid you in the HACMP configuration process.
With the connectivity path established, HACMP can discover cluster information and you are able to access all of the nodes to perform any necessary AIX 5L administrative tasks. That is, you do not need to open additional windows or physically move to other nodes' consoles, and manually log in to each node individually. To ease this process, SMIT fastpaths to the relevant HACMP and/or AIX 5L SMIT screens on the remote nodes are available within the HACMP SMIT screen paths.
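For example, before running discovery you can confirm from the local node that each planned cluster node answers over its communication path. A minimal ksh sketch, assuming hypothetical node names NodeB and NodeC:

    #!/bin/ksh
    # Confirm basic connectivity from the local node to each planned
    # cluster node before starting the HACMP configuration.
    # NodeB and NodeC are placeholder names; substitute the hostnames
    # or IP addresses from your planning worksheets.
    for node in NodeB NodeC
    do
        if ping -c 1 $node >/dev/null 2>&1
        then
            print "$node: reachable"
        else
            print "$node: NOT reachable -- fix the AIX 5L network setup first"
        fi
    done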
HACMP uses all interfaces defined for the connectivity paths to populate the picklists. If you do not want a particular interface to be used in the HACMP cluster, use the HACMP Extended Configuration path to delete it from the HACMP cluster (this way, it will not show up in picklists and will not be available for selection).
By default, cluster heartbeats are sent through all discovered networks. Once a network has an HACMP IP label assigned and you synchronize the configuration, HACMP keeps that network highly available.
Assumptions and Defaults for the Standard Path
HACMP makes some assumptions regarding the environment, such as assuming all network interfaces on a physical network belong to the same HACMP network. Using these assumptions, HACMP supplies or automatically configures intelligent and default parameters to its configuration process in SMIT. This helps to minimize the number of steps it takes to configure the cluster.
HACMP makes the following basic assumptions:
Hostnames are used as node names.
HACMP automatically configures and monitors all network interfaces that can send a ping command to another network interface. Network interfaces that can ping each other without going through a router are placed on the same logical network, and HACMP names each logical network (a sketch for inspecting this grouping follows this list).
HACMP uses IP aliasing as the default mechanism for binding a service IP label/address to a network interface. For more information, see the chapter on planning the cluster networks in the Planning Guide.
Note: If you cannot use IP aliases because of hardware restrictions, such as a limited number of subnets allocated for cluster use, you will need to use IP replacement to bind your IP labels to network interfaces. For instance, an ATM network does not support IP aliasing. IP replacement can be configured only under the SMIT Extended Configuration path (where you disable IP aliasing).
IP Address Takeover via IP Aliases is configured for any logical network capable of taking over a service IP label as an alias. (In the Extended Configuration path, you can also configure IP Address Takeover that uses the IP replacement mechanism for binding IP labels/addresses to network interfaces.)
You can configure resource groups with any of the policies for startup, fallover, and fallback (without specifying fallback timer policies).
You can configure the application server start and stop scripts, but you need the Extended Configuration path to configure multiple monitors that track the health of each application server.
Serial (non-IP) networks and devices can be added, changed, or removed only through the Extended Configuration path, where you must manually define which pair of endpoints forms each point-to-point network.
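To preview how discovery is likely to group interfaces, you can list each node's configured interfaces and subnets; interfaces that share a subnet (reachable without a router) land on the same logical network. A sketch, with en0 as an example interface name:

    # Show configured interfaces, their networks, and addresses on this node
    netstat -in

    # Show the address and netmask attributes of one interface (en0 is an example)
    lsattr -El en0 -a netaddr -a netmask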
To manually configure any part of the cluster, or to add more details or customization to the cluster configuration, use the SMIT HACMP Extended Configuration path. See Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended) for information on those options.
Note: If you are using the Standard Configuration path and information that is required for configuration resides on remote nodes, HACMP automatically discovers the necessary cluster information for you.
Steps for Configuring a Cluster Using the Initialization and Standard Configuration Path
The sections that follow describe the steps to configure the typical cluster components.
Configuring a Two-Node Cluster, or Using Smart Assists
You can configure a basic two-node cluster with just a few configuration steps. For information, see the section on Using the Two-Node Cluster Configuration Assistant in the chapter on Creating a Basic HACMP Cluster in the Installation Guide.
If you are configuring a WebSphere, DB2 UDB or Oracle application, see the corresponding HACMP Smart Assist guide.
To configure other applications, you can use the General Configuration Smart Assist.
Limitations and Prerequisites
The initial requirements for using Smart Assists are:
The application must be installed on all cluster nodes where you want to run it.
The Smart Assist must be installed on all cluster nodes that run the application.
Configuring Applications with the General Configuration Smart Assist
To configure your installed application (other than DB2, WebSphere, or Oracle):
1. On a local node, enter smitty hacmp
2. Select Initialization and Standard Configuration > Configuration Assistants > Make Applications Highly Available > Add an Application to the HACMP Configuration and press Enter.
If the cluster is not yet configured, you are directed to go to the Configure HACMP Nodes and Cluster SMIT panel. Here you need to list the communication paths to all nodes in the cluster. Then continue to the next step.
3. If the cluster is configured, SMIT displays a list of applications installed on this node. Select Other Applications and press Enter.
4. Select General Application Smart Assist and press Enter.
5. Enter values for the following fields on the Add an Application to HACMP panel:
Application Server Name
Primary Node
Takeover Nodes
Application Server Start Script
Application Server Stop Script
Service IP Label
6. Press Enter after you have filled in the values. The configuration is then verified and synchronized automatically. (An example set of field values follows this procedure.)
7. (Optional) Return to the panel Make Applications Highly Available to select Test the HACMP Configuration and press Enter.
The Cluster Test Tool runs and displays results to the screen. If you get error messages, make the necessary corrections.
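For illustration, the values entered in step 5 might look like the following; every name and path here is hypothetical, not a default:

    Application Server Name          app1_server
    Primary Node                     NodeA
    Takeover Nodes                   NodeB
    Application Server Start Script  /usr/local/scripts/start_app1
    Application Server Stop Script   /usr/local/scripts/stop_app1
    Service IP Label                 app1_svc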
Defining HACMP Cluster Topology (Standard)
Complete the following procedures to define the cluster topology. You only need to perform these steps on one node. When you verify and synchronize the cluster topology, its definition is copied to the other nodes.
To configure the cluster topology:
1. Enter smit hacmp
2. In SMIT, select Initialization and Standard Configuration > Configure an HACMP Cluster and Nodes and press Enter.
3. Enter field values as follows:
Cluster Name: Enter an ASCII text string that identifies the cluster. The cluster name can include alphanumeric characters and underscores, but cannot have a leading numeric. Use no more than 32 characters. It can be different from the hostname. Do not use reserved names; for a list of reserved names, see Chapter 7: Verifying and Synchronizing an HACMP Cluster.
New Nodes (via selected communication paths): Enter (or add) one resolvable IP label (this may be the hostname), IP address, or fully qualified domain name for each new node in the cluster, separated by spaces. HACMP uses this path to initiate communication with the node. Example 1: 10.11.12.13 NodeC.ibm.com. Example 2: NodeA NodeB (where these are hostnames). The picklist displays the hostnames and/or addresses included in /etc/hosts that are not already HACMP-configured IP labels/addresses. You can add node names or IP addresses in any order. (An example of checking that entries resolve follows this procedure.)
Currently Configured Node(s): If nodes are already configured, they are displayed here.
4. Press Enter. Once communication paths are established, HACMP runs the discovery operation and prints results to the SMIT panel.
5. Verify that the results are reasonable for your cluster.
6. Return to the top level HACMP SMIT panel to continue with the configuration.
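Each entry supplied in the New Nodes field must resolve on the local node. One way to check from the command line, using the example names from the table above:

    # Verify that each planned communication path entry resolves locally
    host NodeA            # hostname entry: should print its IP address
    host 10.11.12.13      # address entry: reverse lookup, if configured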
Configuring HACMP Resources (Standard)
Using the Standard Configuration path, you can configure the types of resources that are most often included in HACMP clusters. Each resource must first be defined to the AIX 5L operating system on each node before HACMP can use it. Later, you group the associated resources into resource groups; you can add them all at once or separately, as you prefer.
This section shows how to configure the following components on all of the nodes defined to the cluster using a single network interface:
Application servers (a collection of start and stop scripts that HACMP uses for the application).
HACMP service IP labels/addresses. The service IP label/address is the IP label/address over which services are provided and which is kept highly available by HACMP.
Shared volume groups, logical volumes, and filesystems.
Concurrent volume groups, logical volumes, and filesystems.
Configuring Application Servers
An HACMP application server is a cluster resource used to control an application that must be kept highly available. It contains application start and stop scripts. Configuring an application server does the following:
Associates a meaningful name with the server application. For example, you could give the application you are using with the HACMP software a name such as apserv. You then use this name to refer to the application server when you add it as a resource to a resource group.
Points the cluster event scripts to the scripts that they call to start and stop the server application.
Allows you to configure application monitoring for that application server. In HACMP 5.2 and up, you can configure multiple application monitors for one application server. For more information, see Steps for Configuring Multiple Application Monitors in Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended).
Note: This section does not discuss how to write the start and stop scripts. See the vendor documentation for specific product information on starting and stopping a particular application.
Ensure that the server scripts exist on all nodes that participate as possible owners of the resource group where this application server resides.
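The scripts themselves are application-specific; the following is only a minimal ksh skeleton under assumed names and paths (app1d, /usr/local/scripts), not a template shipped with HACMP:

    #!/bin/ksh
    # /usr/local/scripts/start_app1 -- hypothetical start script.
    # Must exist, at the same path, on every node that can own the
    # resource group containing this application server.
    /opt/app1/bin/app1d &          # start the (placeholder) application daemon
    exit 0                         # a nonzero exit is treated as a failure

    #!/bin/ksh
    # /usr/local/scripts/stop_app1 -- hypothetical stop script.
    # The -stop flag and pid file are assumptions for this example;
    # use whatever shutdown method your application documents.
    /opt/app1/bin/app1d -stop || kill $(cat /var/run/app1.pid 2>/dev/null)
    exit 0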
To configure an application server on any cluster node:
1. Enter smit hacmp
2. In SMIT, select Initialization and Standard Configuration > Configure Resources to Make Highly Available > Configure Application Servers > Add an Application Server and press Enter.
3. SMIT displays the Add an Application Server panel. Enter the application server name and the full paths to the server start and stop scripts.
4. Press Enter to add this information to the HACMP Configuration Database on the local node. Return to previous HACMP SMIT panels to perform other configuration tasks.
Configuring HACMP Service IP Labels/Addresses
A service IP label/address is used to establish communication between client nodes and the server node. Services, such as a database application, are provided using the connection made over the service IP label. This connection can be node-bound or taken over by multiple nodes. For the Initialization and Standard Configuration SMIT path, HACMP assumes that the connection will allow IP Address Takeover (IPAT) via IP Aliases (this is the default).
When you use the standard configuration path and add node names, IP labels/addresses, or hostnames to launch the initialization process, HACMP automatically discovers the networks for you.
The /etc/hosts file on all nodes must contain all IP labels and associated IP addresses that you want to discover.
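For example, a fragment of /etc/hosts for a two-node cluster might look like this; all labels and addresses are invented for illustration:

    # Entries that HACMP discovery and the picklists can use (example values)
    10.11.12.1    NodeA        # base address, node A
    10.11.12.2    NodeB        # base address, node B
    10.11.13.10   app1_svc     # service IP label to be kept highly available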
Follow this procedure to define service IP labels for your cluster:
1. Enter smit hacmp
2. In SMIT, select Initialization and Standard Configuration > Configure Resources to Make Highly Available > Configure Service IP Labels/Addresses and press Enter.
3. Fill in the field values for the service IP label/address you are adding.
4. Press Enter after filling in all required fields. HACMP now checks the validity of the IP interface configuration.
5. Repeat the previous steps until you have configured all service IP labels for each network, as needed.
Note: To control the placement of the service IP label aliases on the physical network interface cards on the cluster nodes, see Steps to Configure Distribution Preference for Service IP Label Aliases in Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended).
Configuring Volume Groups, Logical Volumes, and Filesystems as Cluster Shared Resources
You must define and properly configure volume groups, logical volumes and filesystems to AIX 5L, before using them as shared resources in an HACMP cluster. For information, see the relevant chapter in the Installation Guide, and Chapter 11: Managing Shared LVM Components in this Guide.
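As a hedged sketch of that AIX 5L prerequisite, the commands below create a shared volume group, logical volume, and filesystem on one node and import them on another. The disk, names, and major number are placeholders; consult your planning worksheets and the Installation Guide for the real procedure:

    # On the node that currently owns the shared disk (names are examples):
    mkvg -n -V 45 -y app1vg hdisk2      # -n: no varyon at boot; -V: fixed major number
    mklv -y app1lv -t jfs2 app1vg 10    # logical volume of 10 logical partitions
    crfs -v jfs2 -d app1lv -m /app1data -A no   # filesystem; -A no: not mounted at boot
    varyoffvg app1vg                    # release the volume group

    # On each other node that can own the resource group:
    importvg -V 45 -y app1vg hdisk2     # import with the same major number
    chvg -a n app1vg                    # make sure it is not activated at boot
    varyoffvg app1vg                    # leave it offline; HACMP will vary it on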
Configuring Concurrent Volume Groups, Logical Volumes, and Filesystems
These components must be defined to AIX 5L and properly configured, for use as shared resources. For information, see the relevant chapter in the Installation Guide and Chapter 12: Managing Shared LVM Components in a Concurrent Access Environment.
Configuring HACMP Resource Groups (Standard)
Refer to the Concepts Guide for an overview of resource groups you can configure in HACMP 5.4. Refer to the chapter on planning resource groups in the Planning Guide for further planning information. You should have your planning worksheets in hand.
Using the standard path, you can configure resource groups that use different startup, fallover, and fallback policies.
Once the resource groups are configured, you can use the Extended Configuration path to change or refine the management policies of particular resource groups, if certain applications require it.
Configuring a resource group involves two phases:
1. Configuring the resource group name; the startup, fallover, and fallback policies; and the nodes that can own it (the nodelist for the resource group).
2. Adding the resources and additional attributes to the resource group.
Refer to your planning worksheets as you name the groups and add the resources to each one.
Creating HACMP Resource Groups Using the Standard Path
To create a resource group:
1. Enter smit hacmp
2. In SMIT, select Initialization and Standard Configuration > Configure HACMP Resource Groups > Add a Resource Group and press Enter.
3. Enter information in the following fields:
Resource Group Name: Enter the name for this group. The name of the resource group must be unique within the cluster and distinct from the service IP label and volume group names. It is helpful to relate the name to the application it serves, as well as to any corresponding device, such as websphere_service_address. Use no more than 32 alphanumeric characters or underscores; do not use a leading numeric. Do not use reserved words; see List of Reserved Words. Duplicate entries are not allowed.
Participating Node Names: Enter the names of the nodes that can own or take over this resource group. Enter the node with the highest priority for ownership first, followed by the nodes with lower priorities, in the desired order. Leave a space between node names, for example: NodeA NodeB NodeX.
Startup Policy: Select a value from the picklist that defines the startup policy of the resource group:
ONLINE ON HOME NODE ONLY. The resource group is brought online only on its home (highest priority) node during resource group startup. This requires the highest priority node to be available.
ONLINE ON FIRST AVAILABLE NODE. The resource group activates on the first node that becomes available.
ONLINE USING NODE DISTRIBUTION POLICY. If you select the node distribution policy, only one resource group is brought online on a node during startup. Note: Rotating resource groups migrated from HACMP 5.1 now have this node-based distribution policy.
ONLINE ON ALL AVAILABLE NODES. The resource group is brought online on all nodes. This is equivalent to concurrent resource group behavior. If you select this option, ensure that resources in this group can be brought online on multiple nodes simultaneously.
Fallover Policy: Select a value from the list that defines the fallover policy of the resource group:
FALLOVER TO NEXT PRIORITY NODE IN THE LIST. In the case of fallover, a resource group that is online on only one node at a time follows the default node priority order specified in the resource group's nodelist (it moves to the highest priority node currently available).
FALLOVER USING DYNAMIC NODE PRIORITY. If you select this option (with the Online on Home Node startup policy), you can choose one of the three predefined dynamic node priority policies. See Configuring Resource Group Runtime Policies in Chapter 5: Configuring HACMP Resource Groups (Extended).
BRING OFFLINE (ON ERROR NODE ONLY). Select this option to bring a resource group offline on a node during an error condition. This option represents the behavior of a concurrent resource group and ensures that if a particular node fails, the resource group goes offline on that node only but remains online on the other nodes. Selecting this option as the fallover preference when the startup preference is not Online On All Available Nodes may allow resources to become unavailable during error conditions; if you do so, HACMP issues an error.
Fallback Policy: Select a value from the list that defines the fallback policy of the resource group:
NEVER FALLBACK. The resource group does not fall back when a higher priority node joins the cluster.
FALLBACK TO HIGHER PRIORITY NODE IN THE LIST. The resource group falls back when a higher priority node joins the cluster.
4. Press Enter.
5. Return to the Add a Resource Group panel to continue adding all the resource groups you have planned for the HACMP cluster.
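To confirm the result from the command line, HACMP provides utilities under /usr/es/sbin/cluster/utilities; a brief sketch (output format varies by release):

    # List the names of all configured resource groups
    /usr/es/sbin/cluster/utilities/cllsgrp

    # Show the configured resources and policies of the resource groups
    /usr/es/sbin/cluster/utilities/clshowres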
Configuring Resources in Resource Groups (Standard)
After you have defined a resource group, assign resources to it. SMIT can list possible shared resources for the node if the node is powered on (helping you avoid configuration errors).
When you are adding or changing resources in a resource group, HACMP displays only valid choices for resources, based on the resource group management policies that you have selected.
Resource Group Configuration Considerations
Keep the following in mind as you prepare to define the resources in your resource group:
You cannot add resources to a resource group until you have completed the information on the Add a Resource Group panel. If you have not done so, refer back to the instructions under Configuring HACMP Resource Groups (Standard).
A resource group may include multiple service IP addresses. When a resource group configured with IPAT via IP Aliases is moved, all service labels in the resource group are moved as aliases to the available interfaces, according to the resource group management policies in HACMP. You can also specify the distribution preference for service IP labels. For more information, see Steps to Configure Distribution Preference for Service IP Label Aliases in Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended).
For information on how HACMP handles the resource groups configured with IPAT via IP Aliases see Appendix B: Resource Group Behavior during Cluster Events.
When you define a service IP label/address on a cluster node, the service label can be used in any non-concurrent resource group.
The IPAT function (both via IP Replacement and via IP Aliases) does not apply to concurrent resource groups (those with the startup policy Online on All Available Nodes).
Assigning Resources to Resource Groups (Standard)
To assign the resources for a resource group:
1. Enter smit hacmp
2. In SMIT, select Initialization and Standard Configuration > Configure HACMP Resource Groups > Change/Show Resources for a Resource Group and press Enter to display a list of defined resource groups.
3. Select the resource group you want to configure and press Enter. SMIT displays the panel that matches the type of resource group you selected, with the Resource Group Name, and Participating Node Names (Default Node Priority) fields filled in.
Note: SMIT displays only valid choices for resources, depending on the type of resource group that you selected.
If the participating nodes are powered on, you can press F4 to list the shared resources in the picklists. If a resource group/node relationship has not been defined, or if a node is not powered on, pressing F4 causes HACMP SMIT to display the appropriate warnings.
4. Enter the field values that assign your resources (such as service IP labels, volume groups, filesystems, and application servers) to this resource group.
Note: If you are configuring a resource group with the startup policy of Online on Home Node and the fallover policy Fallover Using Dynamic Node Priority, this SMIT panel displays the field where you can select which one of the three predefined dynamic node priority policies you want to use.
5. Press Enter to add the values to the HACMP Configuration Database.
Verifying and Synchronizing the Standard Configuration
After all resource groups have been configured, verify the cluster configuration on all nodes to ensure compatibility. If no errors are found, the configuration is then copied (synchronized) to each node of the cluster. If you synchronize from a node where Cluster Services are running, one or more resources may change state when the configuration changes take effect.
The Cluster Topology Summary
At the beginning of verification, before HACMP verifies the cluster topology, the Cluster Topology Summary is displayed listing any nodes, networks, network interfaces, and resource groups that are “unavailable” at the time that cluster verification is run. “Unavailable” refers to those that have failed and are considered offline by the Cluster Manager. These components are also listed in the /var/hacmp/clverify/clverify.log file.
The output from the verification is displayed in the SMIT Command Status window. If you receive error messages, make the necessary changes and run the verification procedure again.
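To look behind a failed verification, you can read the log named in this section directly; for example:

    # Follow verification progress as it runs
    tail -f /var/hacmp/clverify/clverify.log

    # Afterwards, pick out the error lines (the pattern is an assumption,
    # not a documented format)
    grep -i error /var/hacmp/clverify/clverify.log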
The output may take one of the following forms:
You may see warnings if the configuration has a limitation on its availability, for example, if only one interface per node per network is configured.
If no cluster topology components have failed, no summary is displayed to the user, but the clverify.log file records the following: <DATE/TIME> Verification detected that all cluster topology components are available.
If cluster components are unavailable, the utility providing the list of failed components puts similar information in the log file.
If the Cluster Manager is not running or is unavailable at the time verification is run, only the /var/hacmp/log/clutils.log file, not the user's display, is updated to include the following: Cluster Manager is unavailable on the local node. Failed components verification was not complete.
If verification detects failed components, a wall message is displayed on all available cluster nodes, listing the nodes, networks, network interfaces, and resource groups that are unavailable.
Procedure to Verify and Synchronize the HACMP Configuration
To verify and synchronize the cluster topology and resources configuration:
1. Enter smit hacmp
2. In SMIT, select Initialization and Standard Configuration > Verify and Synchronize HACMP Configuration and press Enter.
3. SMIT displays Are you sure? Press Enter again.
Viewing the HACMP Configuration
To display the HACMP cluster:
1. Enter smit hacmp
2. In SMIT, select Initialization and Standard Configuration > Display HACMP Configuration and press Enter.
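Command-line views are also available; for instance, the following HACMP utilities summarize the topology and an application-centered view of the configuration (a sketch; availability and output vary by HACMP release):

    # Summarize the configured cluster topology (cluster, nodes, networks)
    /usr/es/sbin/cluster/utilities/cltopinfo

    # Display an application-centered view of the cluster configuration
    /usr/es/sbin/cluster/utilities/cldisp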
Additional Configuration Tasks
After you have finished the tasks on the Initialization and Standard Configuration menu and have synchronized the cluster configuration, consider further customizing the cluster. For example, you can:
Define non-IP networks for heartbeats (highly recommended).
Refine the distribution of service IP alias placement on the nodes. For more information, see Steps to Configure Distribution Preference for Service IP Label Aliases in Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended).
Configure dependencies between resource groups. Consider this step if you are planning to include multi-tiered applications in the cluster, where the startup of one application depends on the successful startup of another application.
Refine resource group behavior by specifying the delayed fallback timer, the settling time, and the node distribution policy.
Configure a cluster snapshot so that it does not save log files.
Configure multiple monitors for an application server, to monitor the health of your applications.
Change runtime parameters and redirect log files for a node.
Customize cluster events.
Customize and configure different types of remote notification, such as pager, SMS messages, and email.
Configure HACMP File Collections.
Enable cluster verification to run corrective actions.
See Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended) to continue with these additional tasks.
Testing Your Configuration
After you configure a cluster, test it before using it in a production environment. For information about using the Cluster Test Tool to test your cluster, see Chapter 8: Testing an HACMP Cluster.