
Chapter 14: Managing the Cluster Resources


This chapter describes how to manage the resources in your cluster. The first part of the chapter describes the dynamic reconfiguration process. The second part of the chapter describes procedures for making changes to individual cluster resources.

This chapter includes these topics:

  • Dynamic Reconfiguration: Overview
  • Changing or Removing Application Monitors
  • Reconfiguring Service IP Labels as Resources in Resource Groups
  • Reconfiguring Communication Links
  • Reconfiguring Tape Drive Resources
  • Using NFS with HACMP
  • Reconfiguring Resources in Clusters with Dependent Resource Groups
  • Synchronizing Cluster Resources.
    For information on managing volume groups as resources, see Chapter 11: Managing Shared LVM Components and Chapter 12: Managing Shared LVM Components in a Concurrent Access Environment.

    For information on setting up NFS properly, see Chapter 5: Planning Shared LVM Components in the Planning Guide.

    Note: You can use either ASCII SMIT or WebSMIT to configure and manage the cluster. For more information on WebSMIT, see Chapter 2: Administering a Cluster Using WebSMIT.

    Dynamic Reconfiguration: Overview

    When you configure an HACMP cluster, configuration data is stored in HACMP-specific object classes in the ODM. The AIX 5L ODM object classes are stored in the default system configuration directory (DCD), /etc/objrepos.

    You can make certain changes to both the cluster topology and the cluster resources while the cluster is running (dynamic reconfiguration, or DARE). You can combine resource and topology changes in a single dynamic reconfiguration operation, which makes the overall operation faster, especially for complex configuration changes.

    Note: No automatic corrective actions take place during a DARE.

    Reconfiguring a Cluster Dynamically

    Warning: Do not make configuration changes or perform any action that affects a resource if any node in the cluster has cluster services stopped and its resource groups placed in an UNMANAGED state.

    At cluster startup, HACMP copies HACMP-specific ODM classes into a separate directory called the Active Configuration Directory (ACD). While a cluster is running, the HACMP daemons, scripts, and utilities reference the ODM data stored in the ACD.

    If you synchronize the cluster topology and cluster resources definition while the Cluster Manager is running on the local node, this action triggers a dynamic reconfiguration (DARE) event. In a dynamic reconfiguration event, the ODM data in the Default Configuration Directory (DCD) on all cluster nodes is updated and the ODM data in the ACD is overwritten with the new configuration data. The HACMP daemons are refreshed so that the new configuration becomes the currently active configuration.

    The dynamic reconfiguration operation (one that changes both resources and topology) progresses in the following order, which ensures proper handling of resources:

  • Releases any resources affected by the reconfiguration
  • Reconfigures the topology
  • Acquires and reacquires any resources affected by the reconfiguration operation.
    For information on DARE in clusters with dependent resource groups, see Reconfiguring Resources in Clusters with Dependent Resource Groups.
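
    The release/reconfigure/reacquire ordering can be sketched as shell stubs; this is a hedged illustration only, since the real work is performed by HACMP event scripts, and the function names here are invented:

```shell
# Hedged sketch of the DARE phase ordering; these stub functions only
# echo their phase (HACMP event scripts do the real work).
release_affected_resources()   { echo "release affected resources"; }
reconfigure_topology()         { echo "reconfigure topology"; }
reacquire_affected_resources() { echo "reacquire affected resources"; }

# The phases always run in this fixed order:
release_affected_resources
reconfigure_topology
reacquire_affected_resources
```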

    Requirements before Reconfiguring

    Before making changes to a cluster definition, ensure that:

  • The same version of HACMP is installed on all nodes
  • All nodes are up, running HACMP, and able to communicate with each other; no node may have cluster services stopped with resource groups in an UNMANAGED state. You must make changes from a node that is up so that the cluster can be synchronized.
  • The cluster is stable; the hacmp.out file does not contain recent event errors or config_too_long events.

    Dynamic Cluster Resource Changes

    DARE supports resource and topology changes done in one operation. Starting with HACMP 5.3, DARE is supported in HACMP/XD configurations.

    You can make the following changes to cluster resources in an active cluster, dynamically:

  • Add, remove, or change an application server.
  • Add, remove, or change application monitoring.
  • Add or remove the contents of one or more resource groups.
  • Add, remove, or change a tape resource.
  • Add or remove one or more resource groups.
  • Add, remove, or change the order of participating nodes in a resource group.
  • Change the node relationship of the resource group.
  • Change resource group processing order.
  • Add, remove or change the fallback timer policy associated with a resource group.
    The new value for the timer takes effect after you synchronize the cluster and after the resource group is released and restarted (on the same or a different node), due either to a cluster event or to your moving the group to another node.
  • Add, remove or change the settling time for resource groups.
  • Add or remove node distribution policy for resource groups.
  • Add, change, or remove parent/child or location dependencies for resource groups. (Some limitations apply. See the section Making Dynamic Changes to Dependent Resource Groups.)
  • Add, change, or remove inter-site management policy for resource groups.
  • Add or remove a replicated resource. (You cannot change a replicated resource to non-replicated or vice versa.)
  • Add, remove, or change pre- or post-events.

    Additional Considerations with Dynamic Reconfiguration Changes

    Depending on your configuration, you may need to take the following issues into account:

  • DARE changes to the settling time. The current settling time continues to be active until the resource group moves to another node or goes offline. A DARE operation may result in the release and re-acquisition of a resource group, in which case the new settling time values take effect immediately.
  • Changing the name of an application server, node, or resource group. You can include such a change in a dynamic reconfiguration; however, HACMP interprets these changes, particularly a name change, as defining a new cluster component rather than as changing an existing component. Such a change causes HACMP to stop the active component before starting the new component, causing an interruption in service. To avoid interrupting service, stop cluster services before making the name change.
  • Adding, removing, or changing a resource group with replicated resources. HACMP handles both the primary and secondary instances appropriately. For example, if you add a multi-site resource group to the configuration, HACMP will dynamically bring both primary and secondary instances online according to the site and node policies that you specify. You can also change a resource group’s site management policy from Ignore to another option. HACMP then adds the secondary instances.
  • Dynamic reconfiguration is not supported during a cluster migration to a new version of HACMP, or when any node in the cluster has resource groups in the UNMANAGED state.

    Reconfiguring Application Servers

    An application server is a cluster resource used to control an application that must be kept highly available. It includes start and stop scripts.

    Note that this section does not discuss how to write the start and stop scripts. See the vendor documentation for specific product information on starting and stopping a particular application.

    Note: If you intend to add an application server dynamically, it is very important to test the server scripts beforehand, as they will take effect during the dynamic reconfiguration operation.
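
    As a minimal, hedged sketch of what such server scripts do (all command names and paths are placeholders, not HACMP-supplied), start and stop handling might look like this, modeled as one function for brevity:

```shell
# Hedged sketch of an application-server start/stop wrapper.
# The commented-out commands are placeholders; substitute your own.
app_server() {
  case "$1" in
    start)
      # e.g. su - appuser -c '/usr/local/app/bin/start'   (placeholder)
      echo "application started"
      ;;
    stop)
      # e.g. su - appuser -c '/usr/local/app/bin/stop'    (placeholder)
      echo "application stopped"
      ;;
    *)
      echo "usage: app_server {start|stop}" >&2
      return 1
      ;;
  esac
  return 0
}

# Exercise both paths before placing the scripts under HACMP control:
app_server start
app_server stop
```

    Exercising both paths by hand, as above, is a simple way to satisfy the note's advice to test the scripts before the dynamic reconfiguration operation runs them.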

    This section contains:

  • Changing an Application Server
  • Removing an Application Server
  • Suspending and Resuming Application Monitoring
  • Changing the Configuration of an Application Monitor
  • Removing an Application Monitor.

    Changing an Application Server

    When you specify new start or stop scripts to be associated with an application server, the HACMP configuration database is updated but the application server is not configured or unconfigured dynamically; thus the application controlled by the application server is not stopped and restarted. The next time the application is stopped, HACMP calls the new stop script—not the stop script that was defined when the application server was originally started.

    Note: Changes to application server information are not automatically communicated to the application monitor configuration. Only the name of the server is updated on the SMIT panel for changing monitors. If you change an application server that has an application monitor defined, you must make the change separately to the application monitor as well. See Changing the Configuration of an Application Monitor.

    To change (or view) an application server, complete the following steps:

      1. Enter smit hacmp
      2. In SMIT, select Initialization and Standard Configuration > Configure Resources to Make Highly Available > Configure Application Servers and press Enter. You can also use the HACMP Extended Configuration SMIT path to configure, change or remove application servers.
      3. From this menu, select the Change/Show an Application Server option and press Enter. SMIT displays the application servers.
      4. Select the application server to change and press Enter. The Change/Show an Application Server panel appears, with the server name filled in.
      5. You can change the application name and/or the start and stop scripts.
      6. Press Enter to add this information to the HACMP configuration database on the local node.
      7. (Optional) Return to previous SMIT panels to perform other configuration tasks.
      8. In SMIT, select the Initialization and Standard Configuration or Extended Configuration menu and select Verification and Synchronization. If the Cluster Manager is running on the local node, synchronizing the cluster resources triggers a dynamic reconfiguration event.

    See Synchronizing Cluster Resources for more information.

    Removing an Application Server

    You can remove an application server from an active cluster dynamically. Before removing an application server, you must remove it from any resource group where it has been included as a resource. For more information, see Chapter 15: Managing Resource Groups in a Cluster.

    Note: If you remove an application server, HACMP checks all application monitors for that server, and if only this server (and no other servers) use the associated monitors, it also removes the monitors. HACMP sends a message if monitors have been removed or preserved as a result of removing an application server.
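
    The cleanup rule in the note above amounts to reference counting; a hedged sketch with invented monitor:server pairs (not HACMP data or commands):

```shell
# Hedged sketch of HACMP's cleanup rule when removing an application
# server: a monitor is removed only if no other server still uses it.
# The monitor:server pairs below are invented for illustration.
pairs="mon_db:server_a
mon_db:server_b
mon_web:server_a"

removed="server_a"   # the server being removed

# For each monitor, count the users remaining once $removed is gone.
result=$(echo "$pairs" | awk -F: -v gone="$removed" '
  { seen[$1] = 1 }
  $2 != gone { users[$1]++ }
  END {
    for (m in seen)
      print m, (users[m] > 0 ? "preserved" : "removed")
  }' | sort)

echo "$result"
```

    Here mon_db is preserved because server_b still uses it, while mon_web, used only by the removed server, is removed along with it.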

    To remove an application server, complete the following steps:

      1. Enter smit hacmp
      2. In SMIT, select Initialization and Standard Configuration > Configure Resources to Make Highly Available > Configure Application Servers > Remove an Application Server and press Enter.

    You can also use the HACMP Extended Configuration SMIT path to configure, change or remove application servers.

    SMIT displays the list of application servers.
      3. Select the server to remove and press Enter. HACMP asks if you are sure you want to remove the server.
      4. Press Enter again to confirm the removal. The server is removed from the HACMP configuration database on the local node.
      5. (Optional) Return to previous SMIT panels to perform other configuration tasks.
      6. Synchronize the cluster definition. If the Cluster Manager is running on the local node, synchronizing the cluster resources triggers a dynamic reconfiguration event.

    See Synchronizing Cluster Resources for more information.

    Changing or Removing Application Monitors

    If you have configured application monitoring, you may wish to suspend or remove the monitor at some point. You can also change some aspect of the monitoring you have set up (for instance, the processes to be monitored, the scripts to run, or the notify, cleanup, or restart methods).

    Note: This section discusses changing an existing application monitor. For information about adding a new application monitor, see Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended).

    Suspending and Resuming Application Monitoring

    You can suspend the monitoring of a specified application while the cluster is running. This suspension of monitoring is temporary. If a cluster event occurs that results in the affected resource group moving to a different node, application monitoring resumes automatically on the new node. Similarly, if a node has resource groups that are brought offline and then restarted, monitoring resumes automatically.

    Note: If you have multiple monitors configured for one application, and if a monitor with notify action is launched first, HACMP runs the notification methods for that monitor, and the remaining monitor(s) are shut down on that node. HACMP takes no actions specified in any other monitor. You can restart the fallover monitor using the Suspend/Resume Application Monitoring SMIT panel.

    To permanently stop monitoring of an application, do the steps in the section Removing an Application Monitor.

    To temporarily suspend application monitoring:

      1. Enter smit hacmp
      2. In SMIT, select HACMP System Management > Suspend/Resume Application Monitoring > Suspend Application Monitoring and press Enter.
    You are prompted to select the application server for which this monitor is configured. If you have multiple application monitors, they are all suspended until you choose to resume them or until a cluster event occurs to resume them automatically, as explained above.

    To resume monitoring after suspending it:

      1. Enter smit hacmp
      2. In SMIT, select HACMP System Management > Suspend/Resume Application Monitoring > Resume Application Monitoring.
    HACMP prompts you to select the application server that is associated with the suspended application monitor you want to resume.
      3. Select the server. All monitors resume, configured as they were prior to suspending them.
    Note: Do not make changes to the application monitor(s) configurations while they are in a suspended state.

    Changing the Configuration of an Application Monitor

    You can change the configuration details of an application monitor by editing the SMIT fields you defined when you configured the monitor initially.

    Note: When you configured application monitors originally, the Restart Method and Cleanup Method fields had default values. If you changed those fields, and now want to change back to the defaults, you must enter the information manually (by copying the scripts from the Change/Show an Application Server SMIT panel).

    To alter a defined application monitor, take the following steps.

      1. Enter smit hacmp
      2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure HACMP Applications > Configure HACMP Application Monitoring and press Enter.
      3. Depending on which type of monitor you are altering, select either:
    Define Process Application Monitor > Change/Show Process Application Monitor
    or
    Define Custom Application Monitor > Change/Show Custom Application Monitor.
      4. From the list of monitors, select the previously defined application monitor you want to change.
      5. Make changes in the SMIT panel fields and press Enter. Remember that default values are not restored automatically.
    The changes you enter take effect the next time the resource group containing the application is restarted.

    Removing an Application Monitor

    To permanently remove an application monitor:

      1. Enter smit hacmp
      2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure HACMP Applications > Configure HACMP Application Monitoring and press Enter.
      3. Depending on which type of monitor you are altering, select either:
    Define Process Application Monitor > Remove a Process Application Monitor
    or
    Define Custom Application Monitor > Remove a Custom Application Monitor.
      4. Select the monitor to remove.
      5. Press Enter. The selected monitor is deleted.

    If the monitor is currently running, it is not stopped until the next dynamic reconfiguration or synchronization occurs.

    Note: If you remove an application monitor, HACMP removes it from the server definition for all application servers that were using the monitor, and sends a message about the servers that will no longer use the monitor.
    If you remove an application server, HACMP removes that server from the definition of all application monitors that were configured to monitor the application. HACMP also sends a message about which monitors will no longer be used for the application. If you remove the last application server in use for a particular monitor (that is, if the monitor will no longer be used for any application), verification issues a warning that the monitor will no longer be used.

    Reconfiguring Service IP Labels as Resources in Resource Groups

    This section contains:

  • Steps for Changing the Service IP Labels/Addresses Definitions
  • Deleting Service IP Labels
  • Changing Distribution Preference for Service IP Label Aliases
  • Viewing Distribution Preference for Service IP Label Aliases.
    You must stop cluster services to change service IP label/address resources that are already included in a resource group.

    Remember to add any new service IP labels/addresses to the /etc/hosts file before using them. If you intend to change the names of existing labels, first create the new names and add them to the /etc/hosts file. Then make the name change in SMIT.

    Do not remove the previously used service IP label/address from the /etc/hosts file until after you have made the change in the cluster configuration. Once you make the change in the configuration and in the /etc/hosts file on the local node, make the change in the /etc/hosts files of the other nodes before you synchronize and restart the cluster.
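
    For illustration, hedged example /etc/hosts entries during such a rename (addresses and labels are invented):

```
# /etc/hosts -- both the old and the new service IP label are present
# until the change is made in SMIT and synchronized on all nodes.
192.168.10.10   app_svc1       # existing service IP label
192.168.10.10   app_svc1_new   # new name, added before the SMIT change
```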

    Steps for Changing the Service IP Labels/Addresses Definitions

    To change a service IP label/address definition, take the following steps:

      1. Stop cluster services on all nodes.
      2. On any cluster node, enter smit hacmp
      3. Select HACMP Initialization and Standard Configuration > Configure Resources to Make Highly Available > Configure Service IP Labels/Addresses > Change/Show a Service IP Label/Address.
    Note: In the Extended Cluster Configuration flow, the SMIT path is HACMP > Extended Configuration > HACMP Extended Resources Configuration > Configure Service IP Labels/Addresses > Change/Show a Service IP Label/Address.
      4. In the IP Label/Address to Change panel, select the IP Label/Address you want to change. The Change/Show a Service IP Label/Address panel appears.
      5. Make changes in the field values as needed.
      6. Press Enter after filling in all required fields. HACMP now checks the validity of the new configuration. You may receive warnings if a node cannot be reached, or if network interfaces are found to not actually be on the same physical network.
      7. On the local node, verify and synchronize the cluster.
    Return to the HACMP Standard or Extended Configuration SMIT panel and select the Verification and Synchronization option.
      8. Restart Cluster Services.

    Deleting Service IP Labels

    To delete an IP Label/Address definition, take the following steps:

      1. Stop cluster services on all nodes.
      2. On any cluster node, enter smit hacmp
      3. Select Initialization and Standard Configuration > Configure Resources to Make Highly Available > Configure Service IP Labels/Addresses > Delete a Service IP Label/Address. A panel appears with the list of IP labels/addresses configured to HACMP.
    Note: In the Extended Cluster Configuration flow, the SMIT path is Extended Configuration > Extended Resources Configuration > Configure Service IP Labels/Addresses > Delete a Service IP Label/Address.
      4. Select one or more labels that you want to delete from the list and press Enter.
      5. HACMP displays Are You Sure? If you press Enter, the selected labels/addresses are deleted.
      6. For maintenance purposes, delete the labels/addresses from the /etc/hosts file.

    After you delete service IP labels from the cluster configuration using SMIT, removing them from /etc/hosts is a good practice because it reduces the possibility of having conflicting entries if the labels are reused with different addresses in a future configuration.

    Changing AIX 5L Network Interface Names

    When you define communication interfaces by entering or selecting an HACMP IP label/address, HACMP discovers the associated AIX 5L network interface name. HACMP expects this relationship to remain unchanged. If you change the AIX 5L network interface name after configuring and synchronizing the cluster, HACMP will not function correctly.

    If this problem occurs, you can reset the communication interface name from the SMIT HACMP Cluster System Management (C-SPOC) menu.

    To reset the HACMP communication interface:

      1. Enter smit hacmp
      2. In SMIT, select Cluster System Management (C-SPOC) > HACMP Communications Interface Management > Update HACMP Communication Interface with AIX Settings and press Enter.
      3. Press F4 and select the communication interface that you want to reset from the HACMP picklist.
      4. Press Enter to complete the reset operation.
      5. On the local node, verify and synchronize the cluster. Return to the Extended or Standard Configuration SMIT panel and select the Verification and Synchronization option.

    See Synchronizing Cluster Resources for more information.

    Changing Distribution Preference for Service IP Label Aliases

    You can configure a distribution preference for the service IP label aliases that are placed under HACMP control. These are the service IP labels that are part of HACMP resource groups and that belong to IPAT via IP Aliasing networks.

    When you specify a new distribution preference to be associated with a network, the HACMP configuration database is updated, but the preference is not changed dynamically; that is, HACMP does not interrupt processing by relocating service IP labels at the time the preference is changed. Instead, the next time a cluster event, such as a fallover, takes place for a resource group that has service IP labels on the network, HACMP uses the new distribution preference when it allocates the service IP label alias on the network interface on the backup node.

    For information on types of distribution preferences, see Types of Distribution for Service IP Label Aliases in Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended).

    To change a defined distribution preference for service IP labels:

      1. Enter smit hacmp
      2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure Resource Distribution Preferences > Configure Service IP labels/addresses Distribution Preferences and press Enter.
    HACMP displays a list of networks that use IPAT via IP Aliases.
      3. From the list of networks, select the network for which you want to change the distribution preference and press Enter.
      4. Change the distribution preference and press Enter. Remember that default values are not restored automatically.
    The changes you enter take effect the next time the resource group containing the service IP label is restarted.

    Viewing Distribution Preference for Service IP Label Aliases

    Use the cltopinfo command to display the service IP label distribution preference specified for a particular network.

    For information on this command, see the Troubleshooting Guide.

    Alternatively, you can use the cllsnw -c command to display the service IP label distribution preference (sldp) specified for a particular network.

    The syntax is as follows:

    cllsnw -c
    #netname:attr:alias:monitor_method:sldp:

    Where sldp stands for the service label distribution preference.

    Example:

    net_ether_01:public:true:default:ppstest::sldp_collocation_with_persistent 
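
    Because the output is colon-delimited, it can be split with standard tools. A sketch that parses a captured sample line follows; the field layout is assumed from the example above, and the line is hard-coded because cllsnw itself is only available on an HACMP node:

```shell
# Parse a sample line of `cllsnw -c` output, captured here as a string.
line="net_ether_01:public:true:default:ppstest::sldp_collocation_with_persistent"

# Field 1 is the network name; the last field is the distribution
# preference (sldp), per the example in the text.
netname=$(echo "$line" | awk -F: '{print $1}')
sldp=$(echo "$line" | awk -F: '{print $NF}')

echo "$netname $sldp"
```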
    

    Reconfiguring Communication Links

    Highly available communication links can be of three types: SNA-over-LAN, X.25, or SNA-over-X.25.

    Changes to a communication link may involve changing the adapter information or changing link information such as the name, the ports or link stations, or the application server (start and stop) scripts. You can reconfigure communication links using the SMIT interface.

    To change the configuration of a highly available communication link, you may need to change both the adapter information and the link information.

    Note: When a resource group has a list of Service IP labels and Highly Available Communication Links with configured SNA resources, the first Service IP label in the list of Service IP labels defined in the resource group will be used to configure SNA.

    This section contains:

  • Changing Communication Adapter Information
  • Removing a Communication Adapter from HACMP
  • Changing Communication Link Information
  • Removing a Communication Link from HACMP.

    Changing Communication Adapter Information

    To change or view the configuration information for an X.25 communication adapter, complete the following steps:

      1. Enter smit hacmp
      2. In SMIT, select the Extended Configuration > Extended Cluster Resources > HACMP Extended Resources Configuration > Configure Communication Adapters and Links for HACMP > Configure Communication Adapters for HACMP > Change/Show Communications Adapter and press Enter.
      3. Select the adapter to change and press Enter.
      4. Make the changes as needed. To review the instructions for the field entries, refer to Configuring Highly Available Communication Links in Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended).
      5. Press Enter to add this information to the Configuration Database on the local node.
      6. Return to previous SMIT panels to perform other configuration tasks.
      7. To verify and synchronize the changes, return to the Extended Configuration SMIT panel and select the Verification and Synchronization option.

    If the Cluster Manager is running on the local node, synchronizing the cluster triggers a dynamic reconfiguration event. See Synchronizing Cluster Resources for more information.

    Removing a Communication Adapter from HACMP

    To remove a communication adapter, complete the following steps:

      1. Enter smit hacmp
      2. In SMIT, select the Extended Configuration > Extended Cluster Resources > HACMP Extended Resources Configuration > Configure Communication Adapters and Links for HACMP > Configure Communication Adapters for HACMP > Remove a Communications Adapter and press Enter.
      3. Select the adapter to remove and press Enter. A message asks if you are sure you want to remove the communication adapter.
      4. Press Enter again to confirm the removal. The adapter is removed from the Configuration Database object classes stored in the DCD on the local node.
      5. Return to previous SMIT panels to perform other configuration tasks.
      6. To verify and synchronize the changes, return to the Extended or Standard Configuration SMIT panel and select the Verification and Synchronization option.

    If the Cluster Manager is running on the local node, synchronizing the cluster triggers a dynamic reconfiguration event. See Synchronizing Cluster Resources for more information.

    Changing Communication Link Information

    To change or view a communication link, complete the following steps:

      1. Enter smit hacmp
      2. In SMIT, select the Extended Configuration > Extended Cluster Resources > HACMP Extended Resources Configuration > Configure Communication Adapters and Links for HACMP > Configure Highly Available Communication Links > Change/Show Highly Available Communication Link and press Enter.
    SMIT displays a list of links.
      3. Select the link to change and press Enter.
      4. Make the changes as needed. To review the instructions for the field entries, refer to Configuring Highly Available Communication Links in Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended).
      5. Press Enter to add this information to the Configuration Database stored in the DCD on the local node.
      6. When all changes have been made, return to previous SMIT panels to perform other configuration tasks.
      7. To verify and synchronize the changes, return to the Extended Configuration SMIT panel and select the Verification and Synchronization option.

    If the Cluster Manager is running on the local node, synchronizing the cluster triggers a dynamic reconfiguration event. See Synchronizing Cluster Resources for more information.

    Removing a Communication Link from HACMP

    You can remove a communication link from an active cluster dynamically. Before removing a communication link, you must remove it from any resource group where it is included as a resource.

    To remove a communication link, complete the following steps:

      1. Enter smit hacmp
      2. In SMIT, select Extended Configuration > Extended Resources Configuration > HACMP Extended Resource Group Configuration > Change/Show Resources/Attributes for a Resource Group and press Enter.
    SMIT displays a list of resource groups.
      3. Select the appropriate resource group, and in the Communication Links field, remove the link(s) from the list.
      4. Next, remove the link definition from HACMP. In SMIT, select the Extended Resources Configuration > HACMP Extended Resources Configuration > Configure Communication Adapters and Links for HACMP > Configure Highly Available Communication Links > Remove Highly Available Communication Link and press Enter.
    SMIT displays a list of links.
      5. Select the communication link you want to remove and press Enter. A message asks if you are sure you want to remove the communication link.
      6. Press Enter again to confirm the removal. The link is removed from the Configuration Database on the local node.
      7. Return to previous SMIT panels to perform other configuration tasks.
      8. To verify and synchronize the changes, return to the Extended Configuration SMIT panel and select the Verification and Synchronization option.

    If the Cluster Manager is running on the local node, synchronizing the cluster triggers a dynamic reconfiguration event. See Synchronizing Cluster Resources for more information.

    Reconfiguring Tape Drive Resources

    Using HACMP SMIT panels you can take the following actions to reconfigure tape drives:

  • Add tape drives as HACMP resources
  • Specify synchronous or asynchronous tape operations
  • Specify appropriate error recovery procedures
  • Change/Show tape drive resources
  • Remove tape drive resources
  • Add or remove tape drives to/from HACMP resource groups.
    To add tape drive resources, see Chapter 3: Configuring an HACMP Cluster (Standard).

    Changing a Tape Resource

    To change or show the current configuration of a tape drive resource, take the following steps:

      1. Enter smit hacmp
      2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure HACMP Tape Resources > Change/Show a Tape Resource and press Enter.
    SMIT returns a picklist of the configured tape drive resources.
      3. Select the tape resource you want to see or change.
    SMIT displays the current configuration for the chosen tape device.
      4. Change the field values as necessary.
      5. Press Enter.

    Removing a Tape Device Resource

    To remove a tape device resource, take the following steps:

      1. Enter smit hacmp
      2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resources Configuration > Configure HACMP Tape Resources > Remove a Tape Resource and press Enter.
    SMIT returns a picklist of the configured tape drive resources.
      3. Select the tape resource you want to remove.
    SMIT displays the message Are You Sure?
      4. Press Enter again to confirm the removal. The tape resource is removed from the Configuration Database on the local node.

    Using NFS with HACMP

    HACMP includes the following functionality:

  • Reliable NFS server capability that allows a backup processor to recover current NFS activity should the primary NFS server fail, preserving locks on NFS filesystems and the duplicate request cache (dupcache). This functionality is available for two-node clusters only.
  • Ability to specify a network for NFS mounting.
  • Ability to define NFS exports and mounts at the directory level.
  • Ability to specify export options for NFS-exported directories and filesystems.
    See the section on Using NFS with HACMP in Chapter 5: Planning Shared LVM Components in the Planning Guide for complete information on this subject, including an example of NFS cross-mounting.
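As an illustration of specifying export options at the directory level, export entries use the standard AIX exports-file syntax. The filesystem paths, node names, and netgroup below are purely hypothetical; consult the Planning Guide for the exports file location and options appropriate to your HACMP release:

```shell
# Illustrative exports-file entries (AIX /etc/exports syntax).
# Paths, node names, and the "clientnet" netgroup are examples only.

# Export a whole filesystem with root access for both cluster nodes.
/fs/shared1 -root=nodeA:nodeB,access=nodeA:nodeB

# Export a subdirectory read-only to a client netgroup.
/fs/shared1/app -ro,access=clientnet
```

Defining exports at the directory level, as in the second entry, lets different parts of one shared filesystem carry different access options.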

    Reconfiguring Resources in Clusters with Dependent Resource Groups

    If you have configured dependent resources in the cluster, dynamic reconfiguration (DARE) lets you:

  • Make changes to the cluster resources
  • Make changes to the cluster topology
  • Dynamically add or remove resource groups from the cluster configuration.
    When reconfiguring resources dynamically, HACMP ensures the availability of applications in resource groups. For resource groups that have dependencies between them, this means that HACMP allows changing resources only when it is safe to do so.

    This section describes the conditions under which HACMP performs dynamic reconfigurations in clusters with dependent resource groups.

    Reconfiguring Resources and Topology Dynamically

    Consider a cluster where resource group A (child) depends on resource group B (parent). In turn, resource group B depends on resource group C. Note that resource group B serves both as a parent for resource group A and as a child for resource group C.

    The following rules for DARE apply:

  • You can make changes to the cluster topology and cluster resources dynamically for a child resource group and for a parent resource group.
  • For a child resource group, if this resource group has no other groups that depend on it, HACMP runs the reconfiguration events and performs the requested changes. HACMP performs a dynamic reconfiguration of a child resource group without taking any other resource groups offline and online.
  • For a parent resource group, before proceeding with dynamic reconfiguration events, you must manually take offline all child resource groups that depend on the parent resource group. After the dynamic reconfiguration is complete, you can bring the child resource groups back online.
  • For instance, in an A > B > C dependency, where A is a child resource group that depends on B, and B is a child resource group that depends on C, to make changes to resource group C you must first take resource group A offline, then resource group B, and then perform a dynamic reconfiguration for resource group C. Once HACMP completes the event, you can bring resource group B online and then resource group A.
    If you attempt a dynamic reconfiguration event and HACMP detects that the resource group has dependent resource groups, the DARE operation fails and HACMP displays a message prompting you to take the child resource groups offline, before attempting to dynamically change resources or make topology changes in the parent resource group.
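The manual offline/online sequence described above might be driven from the command line as sketched below. The clRGmove path and flags reflect a typical HACMP 5.x installation, and the group names (rgA, rgB) and node name are hypothetical; verify the command syntax against your release's documentation before use:

```shell
# Take the child resource groups offline, innermost child first
# (rgA depends on rgB, which depends on rgC).
/usr/es/sbin/cluster/utils/clRGmove -g rgA -n node1 -d
/usr/es/sbin/cluster/utils/clRGmove -g rgB -n node1 -d

# ...change resource group C and synchronize (dynamic reconfiguration)...

# Bring the children back online in the reverse order.
/usr/es/sbin/cluster/utils/clRGmove -g rgB -n node1 -u
/usr/es/sbin/cluster/utils/clRGmove -g rgA -n node1 -u
```

The ordering matters: children come offline before their parents are changed and return online only after the parent-side change is complete, mirroring the rules HACMP enforces during DARE.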

    Making Dynamic Changes to Dependent Resource Groups

    If you have dependent resource groups configured, the following rules apply:

  • If you dynamically add a resource group to the cluster, HACMP processes this event without taking any resource groups offline or online.
  • If you dynamically remove a resource group from the cluster configuration and the resource group is included in a dependency with one or more resource groups, then:
  • If a resource group that you remove dynamically is a parent resource group, then before processing the dynamic reconfiguration event to remove the group, HACMP temporarily takes offline dependent (child) resource group(s). After the DARE event is complete, HACMP reacquires child resource groups.
    For instance, consider the following resource group dependency: A > B > C, where A (child) depends on B, and B depends on C (parent). B is a child to resource group C and is a parent to resource group A.

    In this case, if you dynamically remove resource group C from the cluster configuration, HACMP takes resource group A offline, then it takes resource group B offline, removes resource group C, and reacquires first resource group B and then resource group A.

    Cluster Processing During DARE in Clusters with Dependent Resource Groups

    As with cluster processing for other events, if you have dependencies or sites configured in the cluster, processing for dynamic reconfiguration is handled differently than in clusters without dependencies between resource groups. As a result, the sequence of events in the hacmp.out file shows a series of rg_move events.

    See the Job Types: Processing in Clusters with Dependent Resource Groups section in Chapter 2: Using Cluster Log Files in the Troubleshooting Guide, for information on how to interpret events that HACMP runs in clusters with dependencies.

    Synchronizing Cluster Resources

    Whenever you modify the configuration of cluster resources in the Configuration Database on one node, you must synchronize the change across all cluster nodes. You perform a synchronization by choosing the Verification and Synchronization option from either the Standard or the Extended HACMP configuration SMIT panel.

    Note: If the cluster is running, make sure no node has been stopped with its resource groups placed in UNMANAGED state when performing a synchronization.

    The processing performed in synchronization varies depending on whether the Cluster Manager is active on the local node:

  • If the Cluster Manager is not active on the local node when you select this option, the Configuration Database data in the DCD on the local node is copied to the Configuration Databases stored in the DCDs on all cluster nodes.
  • If the Cluster Manager is active on the local node, synchronization triggers a cluster-wide, dynamic reconfiguration event. In dynamic reconfiguration, the configuration data stored in the DCD is updated on each cluster node and, in addition, the new Configuration Database data replaces the Configuration Database data stored in the ACD on each cluster node. The cluster daemons are refreshed and the new configuration becomes the active configuration. In the HACMP log file, reconfig_resource_release, reconfig_resource_acquire, and reconfig_resource_complete events mark the progress of the dynamic reconfiguration.
    See Chapter 7: Verifying and Synchronizing an HACMP Cluster for complete information on the SMIT options.
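The DCD and ACD described above are ordinary ODM object repositories, so they can be inspected with the standard AIX odmget command by pointing ODMDIR at the appropriate directory. The ACD path shown is typical for HACMP 5.x installations and should be confirmed on your system:

```shell
# Query the staged (DCD) copy of an HACMP resource ODM class.
ODMDIR=/etc/objrepos odmget HACMPresource

# Query the active (ACD) copy used by the running Cluster Manager.
# (ACD path is typical for HACMP 5.x; confirm on your system.)
ODMDIR=/usr/es/sbin/cluster/etc/objrepos/active odmget HACMPresource
```

Comparing the two outputs before and after a synchronization is one way to confirm that a dynamic reconfiguration replaced the active configuration.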

    Notes

    In some cases, the verification uncovers errors that do not cause the synchronization to fail. HACMP reports the errors in the SMIT command status window so that you are aware of an area of the configuration that may be a problem. You should investigate any error reports, even when they do not interfere with the synchronization.

    If log files have been redirected from the default directory to a user-specified directory, the cluster verification utility still verifies them: it checks that each log file has the same pathname on every node in the cluster and reports an error if this is not the case.

