
Chapter 3: HACMP Resources and Resource Groups


This chapter introduces resource-related concepts and definitions that are used throughout the documentation and in the HACMP user interface.

The information in this chapter is organized as follows:

  • Cluster Resources: Identifying and Keeping Available
  • Types of Cluster Resources
  • Cluster Resource Groups
  • Resource Group Policies and Attributes
  • Resource Group Dependencies
  • Sites and Resource Groups.
    Cluster Resources: Identifying and Keeping Available

    The HACMP software provides a highly available environment by:

  • Identifying the set of cluster resources that are essential to processing.
  • Defining the resource group policies and attributes that dictate how HACMP manages resources to keep them highly available at different stages of cluster operation (startup, fallover and fallback).
    By identifying resources and defining resource group policies, the HACMP software makes numerous cluster configurations possible, providing tremendous flexibility in defining a cluster environment tailored to individual requirements.

    Identifying Cluster Resources

    Cluster resources can include both hardware and software:

  • Disks
  • Volume Groups
  • Logical Volumes
  • Filesystems
  • Service IP Labels/Addresses
  • Applications
  • Tape Resources
  • Communication Links
  • Fast Connect Resources, and other resources.
    A processor running HACMP owns a user-defined set of resources: disks, volume groups, filesystems, IP addresses, and applications. For the purpose of keeping resources highly available, sets of interdependent resources may be configured into resource groups.

    Resource groups allow you to combine related resources into a single logical entity for easier configuration and management. The Cluster Manager handles the resource group as a unit, thus keeping the interdependent resources together on one node, and keeping them highly available.
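    As a conceptual illustration only (HACMP itself is configured through SMIT, not through code), the following Python sketch models a resource group as one logical unit whose interdependent resources are acquired and released together; the group, node, and resource names are hypothetical.

        # Conceptual sketch only -- not HACMP code.
        # A resource group bundles interdependent resources so they are
        # acquired and released together on one node.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ResourceGroup:
            name: str
            nodelist: List[str]                      # participating nodes, highest priority first
            service_ip_labels: List[str] = field(default_factory=list)
            volume_groups: List[str] = field(default_factory=list)
            filesystems: List[str] = field(default_factory=list)
            application_servers: List[str] = field(default_factory=list)

            def resources(self) -> List[str]:
                """All resources that are activated or released as one unit."""
                return (self.service_ip_labels + self.volume_groups
                        + self.filesystems + self.application_servers)

        # Hypothetical example: a database group owned first by node "nodeA".
        db_group = ResourceGroup(
            name="db_rg",
            nodelist=["nodeA", "nodeB"],
            service_ip_labels=["db_svc"],
            volume_groups=["dbvg"],
            filesystems=["/db/data"],
            application_servers=["db_app_server"],
        )
        print(db_group.resources())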

    Types of Cluster Resources

    This section provides a brief overview of the resources that you can configure in HACMP and include in resource groups so that HACMP keeps them highly available.

    Volume Groups

    A volume group is a set of physical volumes that AIX 5L treats as a contiguous, addressable disk region. Volume groups are configured to AIX 5L, and can be included in resource groups in HACMP. In the HACMP environment, a shared volume group is a volume group that resides entirely on the external disks that are shared by the cluster nodes. Shared disks are those that are physically attached to the cluster nodes and logically configured on all cluster nodes.

    Logical Volumes

    A logical volume is a set of logical partitions that AIX 5L makes available as a single storage unit—that is, the logical volume is the “logical view” of a physical disk. Logical partitions may be mapped to one, two, or three physical partitions to implement mirroring.

    In the HACMP environment, logical volumes can be used to support a journaled filesystem (non-concurrent access), or a raw device (concurrent access). Concurrent access does not support filesystems. Databases and applications in concurrent access environments must access raw logical volumes (for example, /dev/rsharedlv).

    A shared logical volume must have a unique name within an HACMP cluster.

    Note: A shared volume group cannot contain an active paging space.

    Filesystems

    A filesystem is written to a single logical volume. Ordinarily, you organize a set of files as a filesystem for convenience and speed in managing data.

    Shared Filesystems

    In the HACMP system, a shared filesystem is a journaled filesystem that resides entirely in a shared logical volume.

    For non-concurrent access, plan shared filesystems so that they are placed on external disks shared by the cluster nodes. Data resides in filesystems on these external shared disks in order to be made highly available.

    For concurrent access, you cannot use journaled filesystems. Instead, use raw logical volumes.

    Journaled File System and Enhanced Journaled File System

    An Enhanced Journaled File System (JFS2) provides the capability to store much larger files than the Journaled File System (JFS). JFS2 is the default filesystem for the 64-bit kernel. You can choose to implement either JFS, which is the recommended filesystem for 32-bit environments, or JFS2, which offers 64-bit functionality.

    JFS2 is more flexible than JFS because it allows you to dynamically increase and decrease the number of files you can have in a filesystem. JFS2 also lets you include the filesystem log in the same logical volume as the data, instead of allocating a separate logical volume for logs for all filesystems in the volume group.

    For more information on JFS2, see the AIX 5L Differences Guide Version 5.3:

    http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg24743.pdf

    Applications

    The purpose of a highly available system is to ensure that critical services are accessible to users. Applications usually need no modification to run in the HACMP environment. Any application that can be successfully restarted after an unexpected shutdown is a candidate for HACMP.

    For example, all commercial DBMS products checkpoint their state to disk in some sort of transaction journal. In the event of a server failure, the fallover server restarts the DBMS, which reestablishes database consistency and then resumes processing.

    If you use Fast Connect to share resources with non-AIX workstations, you can configure it as an HACMP resource, making it highly available in the event of node or network interface card failure, and making its correct configuration verifiable.

    Applications are managed by defining the application to HACMP as an application server resource. The application server includes application start and stop scripts. HACMP uses these scripts when the application needs to be brought online or offline on a particular node, to keep the application highly available.

    Note: The start and stop scripts are the main points of control for HACMP over an application. It is very important that the scripts you specify operate correctly to start and stop all aspects of the application. If the scripts fail to properly control the application, other parts of the application recovery may be affected. For example, if the stop script you use fails to completely stop the application and a process continues to access a disk, HACMP will not be able to bring the volume group offline on the node that failed or recover it on the backup node.

    Add your application server to an HACMP resource group only after you have thoroughly tested your application start and stop scripts.
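    The sketch below suggests the general shape of an application control script that such a start/stop pair might follow; it is illustrative only, the command paths are hypothetical, and in practice these scripts are usually written as shell scripts. The key point is that the stop path must terminate every process that holds the shared disks, or HACMP cannot release the volume group.

        #!/usr/bin/env python3
        # Illustrative sketch of an application control script; command paths
        # are hypothetical. HACMP calls the start script to bring the
        # application online and the stop script to take it offline.
        import subprocess
        import sys

        APP_START_CMD = ["/opt/myapp/bin/startserver"]   # hypothetical
        APP_STOP_CMD = ["/opt/myapp/bin/stopserver"]     # hypothetical

        def start() -> int:
            # Return 0 only if the application really came up.
            return subprocess.call(APP_START_CMD)

        def stop() -> int:
            # Must terminate every process using the shared volume group;
            # otherwise HACMP cannot vary off the volume group and recovery
            # on the backup node fails.
            return subprocess.call(APP_STOP_CMD)

        if __name__ == "__main__":
            action = sys.argv[1] if len(sys.argv) > 1 else ""
            if action == "start":
                sys.exit(start())
            elif action == "stop":
                sys.exit(stop())
            else:
                print("usage: app_control.py start|stop", file=sys.stderr)
                sys.exit(2)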

    The resource group that contains the application server should also contain all the resources that the application depends on, including service IP addresses, volume groups, and filesystems. Once such a resource group is created, HACMP manages the entire resource group and, therefore, all the interdependent resources in it as a single entity. (Note that HACMP coordinates the application recovery and manages the resources in the order that ensures activating all interdependent resources before other resources.)

    In addition, HACMP includes application monitoring capability, whereby you can define a monitor to detect the unexpected termination of a process, or to periodically poll an application to confirm that it is still healthy, and to take automatic action when a problem is detected.

    You can configure multiple application monitors and associate them with one or more application servers. By supporting multiple monitors per application, HACMP can support more complex configurations. For example, you can configure one monitor for each instance of an Oracle parallel server in use. Or, you can configure a custom monitor to check the health of the database, and a process termination monitor to instantly detect termination of the database process.

    You can also specify a mode for an application monitor. A monitor can either check that the application keeps running once it is up (running mode) or check that the application has started successfully (application startup mode). Using a monitor to watch the application startup is especially useful for complex cluster configurations.
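    The following sketch shows what a custom monitor method might look like; the health-check command is hypothetical, and you should see the Administration Guide for the exact monitor method requirements in your release. The script's only job is to exit with status 0 when the application responds and nonzero otherwise, so that HACMP can trigger recovery on failure.

        #!/usr/bin/env python3
        # Sketch of a custom application monitor method; the health-check
        # command is hypothetical.
        import subprocess
        import sys

        HEALTH_CHECK_CMD = ["/opt/myapp/bin/dbping"]   # hypothetical health-check utility

        def application_is_healthy() -> bool:
            try:
                return subprocess.call(HEALTH_CHECK_CMD, timeout=30) == 0
            except Exception:
                return False

        if __name__ == "__main__":
            # Exit 0 if healthy; a nonzero exit lets the cluster software react.
            sys.exit(0 if application_is_healthy() else 1)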

    Service IP Labels/Addresses

    A service IP label is used to establish communication between client nodes and the server node. Services, such as a database application, are provided using the connection made over the service IP label.

    A service IP label can be placed in a resource group as a resource that allows HACMP to monitor its health and keep it highly available, either within a node or, if IP address takeover is configured, between the cluster nodes by transferring it to another node in the event of a failure.

    For more information about service IP labels, see Service IP Label/Address in Chapter 2: HACMP Cluster Nodes, Sites, Networks, and Heartbeating.

    Note: Certain subnet requirements apply for configuring service IP labels as resources in different types of resource groups. For more information, see the Planning Guide.

    Tape Resources

    You can configure a SCSI or a Fibre Channel tape drive as a cluster resource in a non-concurrent resource group, making it highly available to two nodes in a cluster. Management of shared tape drives is simplified by the following HACMP functionality:

  • Configuration of tape drives using the SMIT configuration tool
  • Verification of proper configuration of tape drives
  • Automatic management of tape drives during resource group start and stop operations
  • Reallocation of tape drives on node failure and node recovery
  • Controlled reallocation of tape drives on cluster shutdown
  • Controlled reallocation of tape drives during a dynamic reconfiguration of cluster resources.
    Communication Links

    You can define the following communication links as resources in HACMP resource groups:

  • SNA configured over LAN network interface cards
  • SNA configured over X.25
  • Pure X.25.
    By managing these links as resources in resource groups, HACMP ensures their high availability. Once defined as members of an HACMP resource group, communication links are protected in the same way other HACMP resources are. In the event of a LAN physical network interface or an X.25 link failure, or general node or network failures, a highly available communication link falls over to another available network interface card on the same node, or on a takeover node.

  • SNA configured over LAN. To be highly available, “SNA configured over LAN” resources need to be included in those resource groups that contain the corresponding service IP labels in them. These service IP labels, in turn, are defined on LAN network interface cards, such as Ethernet and Token Ring. In other words, the availability of the “SNA configured over LAN” resources is dependent upon the availability of service IP labels included in the resource group. If the NIC being used by the service IP label fails, and the service IP label is taken over by another interface, this interface will also take control over an “SNA configured over LAN” resource configured in the resource group.
  • SNA configured over X.25 links and pure X.25 links. These links are usually, although not always, used for WAN connections. They are used as a means of connecting dissimilar machines, from mainframes to dumb terminals. Because of the way X.25 networks are used, these physical network interface cards are really a different class of devices that are not included in the cluster topology and are not controlled by the standard HACMP topology management methods. This means that heartbeats are not used to monitor X.25 link status, and you do not define X.25-specific networks in HACMP. To summarize, you can include X.25 links as resources in resource groups, keeping in mind that the health and availability of these resources also relies on the health of X.25 networks themselves (which are not configured within HACMP.)
    Cluster Resource Groups

    To be made highly available by the HACMP software, each resource must be included in a resource group. Resource groups allow you to combine related resources into a single logical entity for easier management.

    This first section includes the basic terms and definitions for HACMP resource group attributes and contains the following topics:

  • Participating Nodelist
  • Default Node Priority
  • Home Node
  • Startup, Fallover and Fallback.
    Later sections of this chapter explain how HACMP uses resource groups to keep the resources and applications highly available.

    Participating Nodelist

    The participating nodelist defines a list of nodes that can host a particular resource group. You define a nodelist when you configure a resource group.

  • The participating nodelist for non-concurrent resource groups can contain some or all nodes in the cluster.
  • The participating nodelist for concurrent resource groups should contain all nodes in the cluster.
  • Typically, this list contains all nodes sharing the same data and disks.

    Default Node Priority

    Default node priority is identified by the position of a node in the nodelist for a particular resource group. The first node in the nodelist has the highest node priority; it is also called the home node for the resource group. A node listed earlier in the nodelist has a higher priority than the nodes listed after it.

    Depending on the fallback policy for a resource group, when a node with a higher priority for that resource group (which is currently controlled by a lower priority node) joins or reintegrates into the cluster, it takes control of the resource group. That is, the resource group moves from the lower priority node to the higher priority node.

    At any given time, the resource group can have a default node priority specified by the participating nodelist. However, various resource group policies you select can override the default node priority and “create” the actual node priority according to which a particular resource group would move in the cluster.

    Dynamic Node Priority

    Setting a dynamic node priority policy allows you to use an RSCT resource variable such as “lowest CPU load” to select the takeover node for a non-concurrent resource group. With a dynamic priority policy enabled, the order of the takeover nodelist is determined by the state of the cluster at the time of the event, as measured by the selected RSCT resource variable. You can set different policies for different groups or the same policy for several groups.
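    A minimal sketch of the idea behind dynamic node priority follows, assuming an invented per-node metric: the takeover node is selected by the state of the cluster at the time of fallover rather than by position in the nodelist.

        # Conceptual sketch of dynamic node priority: the takeover node is
        # chosen by an RSCT-style metric measured at fallover time. The metric
        # values here are invented for illustration.

        def takeover_node_dynamic(candidates, metric, prefer_lowest=True):
            """Pick the candidate node that best satisfies the metric
            (for example, lowest CPU load or most free memory)."""
            return min(candidates, key=metric) if prefer_lowest else max(candidates, key=metric)

        # Hypothetical snapshot of per-node CPU load at the moment of fallover.
        cpu_load = {"nodeA": 0.85, "nodeB": 0.20, "nodeC": 0.55}
        candidates = ["nodeB", "nodeC"]          # surviving nodes in the group's nodelist

        print(takeover_node_dynamic(candidates, lambda n: cpu_load[n]))   # -> nodeB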

    Home Node

    The home node (or the highest priority node for this resource group) is the first node that is listed in the participating nodelist for a non-concurrent resource group. The home node is a node that normally owns the resource group. A non-concurrent resource group may or may not have a home node—it depends on the startup, fallover and fallback behaviors of a resource group.

    Note that, due to changes in the cluster, the group may not always start on the home node. It is important to differentiate between the home node for a resource group and the node that currently owns it.

    The term home node is not used for concurrent resource groups as they are owned by multiple nodes.

    Startup, Fallover and Fallback

    HACMP ensures the availability of cluster resources by activating resource groups on a particular node, or on multiple nodes, at cluster startup, and by moving them to other nodes when conditions in the cluster change. These are the stages in a cluster lifecycle that affect how HACMP manages a particular resource group:

  • Cluster startup. Nodes are up and resource groups are distributed between them according to the resource group startup policy you selected.
  • Node failure. Resource groups that are active on this node fall over to another node.
  • Node recovery. A node reintegrates into the cluster and resource groups could be reacquired, depending on the resource group policies you select.
  • Resource failure and recovery. A resource group may fall over to another node, and be reacquired, when the resource becomes available.
  • Cluster shutdown. There are different ways of shutting down a cluster, one of which ensures that resource groups move to another node.
    During each of these cluster stages, the behavior of resource groups in HACMP is defined by the following:

  • Which node, or nodes, activate the resource group at cluster startup
  • How many resource groups are allowed to be acquired on a node during cluster startup
  • Which node takes over the resource group when the node that owned the resource group fails and HACMP needs to move a resource group to another node
  • Whether a resource group falls back to a node that has just joined the cluster or stays on the node that currently owns it.
    The resource group policies that you select determine which cluster node originally controls a resource group and which cluster nodes take over control of the resource group when the original node relinquishes control.

    Each combination of these policies allows you to specify varying degrees of control over which node, or nodes, control a resource group.

    To summarize, the focus of HACMP on resource group ownership makes numerous cluster configurations possible and provides tremendous flexibility in defining the cluster environment to fit the particular needs of the application. The combination of startup, fallover and fallback policies covers all the management policies available in previous releases, without requiring you to specify the set of options that modified the behavior of “predefined” group types.

    When defining resource group behaviors, keep in mind that a resource group can be taken over by one or more nodes in the cluster.

    Startup, fallover and fallback are specific behaviors that describe how resource groups behave at different cluster stages. It is important to keep in mind the difference between fallover and fallback. These terms appear frequently in discussion of the various resource group policies.

    Startup

    Startup refers to the activation of a resource group on a node (or multiple nodes) on which it currently resides, or on the home node for this resource group. Resource group startup occurs during cluster startup, or initial acquisition of the group on a node.

    Fallover

    Fallover refers to the movement of a resource group from the node that currently owns the resource group to another active node after the current node experiences a failure. The new owner is not a reintegrating or joining node.

    Fallover is valid only for non-concurrent resource groups.

    Fallback

    Fallback refers to the movement of a resource group from the node on which it currently resides (which is not a home node for this resource group) to a node that is joining or reintegrating into the cluster.

    For example, when a node with a higher priority for that resource group joins or reintegrates into the cluster, it takes control of the resource group. That is, the resource group falls back from nodes with lesser priorities to the higher priority node.

    Defining a fallback behavior is valid only for non-concurrent resource groups.
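    To make the fallover/fallback distinction concrete, here is an illustrative sketch for a non-concurrent resource group with a static default priority order; the node names and policies are hypothetical.

        # Conceptual sketch contrasting fallover and fallback for a
        # non-concurrent resource group with a static nodelist
        # (highest priority first).

        NODELIST = ["nodeA", "nodeB", "nodeC"]   # nodeA is the home node

        def fallover_target(failed_node, active_nodes):
            """On failure of the owning node, move the group to the
            next-priority active node; None means no node can host it."""
            for node in NODELIST:
                if node != failed_node and node in active_nodes:
                    return node
            return None

        def fallback_target(current_owner, joining_node, policy="fallback_to_higher_priority"):
            """When a node joins or reintegrates, decide whether the group moves back."""
            if policy == "never_fallback":
                return current_owner
            if NODELIST.index(joining_node) < NODELIST.index(current_owner):
                return joining_node          # higher priority node reclaims the group
            return current_owner

        print(fallover_target("nodeA", active_nodes={"nodeB", "nodeC"}))  # -> nodeB
        print(fallback_target("nodeB", "nodeA"))                          # -> nodeA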

    Resource Group Policies and Attributes

    In HACMP 5.2 and up, you configure resource groups to use specific startup, fallover and fallback policies.

    This section describes resource group attributes and scenarios, and helps you to decide which resource groups suit your cluster requirements.

    This section contains the following topics:

  • Overview
  • Resource Group Startup, Fallover and Fallback
  • Settling Time, Dynamic Node Priority and Fallback Timer
  • Distribution Policy
  • Cluster Networks and Resource Groups.
    Overview

    In HACMP 5.4, the policies for resource groups offer a wide variety of choices and can be tailored to your needs. This gives you greater control over resource group behavior, increases resource availability, and makes node maintenance easier to plan.

    The process of configuring a resource group is two-fold. First, you configure startup, fallover and fallback policies for a resource group. Second, you add specific resources to it. HACMP prevents you from configuring invalid combinations of behavior and resources in a resource group.

    In addition, using resource groups in HACMP 5.4 potentially increases availability of cluster resources:

  • You can configure resource groups to ensure that they are brought back online on reintegrating nodes during off-peak hours.
  • You can specify that a resource group that contains a certain application is the only one that will be given preference and be acquired during startup on a particular node. You do so by specifying the node distribution policy. This is relevant if multiple non-concurrent resource groups can potentially be acquired on a node, but a specific resource group owns an application that is more important to keep available.
  • You can specify that specific resource groups be kept together online on the same node, or kept apart online on different nodes, at startup, fallover, and fallback.
  • You can specify that specific replicated resource groups be maintained online on the same site when you have a cluster that includes nodes and resources distributed across a geographic distance. (Usually this means you have installed one of the HACMP/XD products.)
    For resource group planning considerations, see the chapter on planning resource groups in the Planning Guide.

    Resource Group Startup, Fallover and Fallback

    In HACMP 5.4, the following policies exist for individual resource groups:

    Startup
    • Online on Home Node Only. The resource group is brought online only on its home (highest priority) node during the resource group startup. This requires the highest priority node to be available (first node in the resource group’s nodelist).
    • Online on First Available Node. The resource group comes online on the first participating node that becomes available.
    • Online on All Available Nodes. The resource group is brought online on all nodes.
    • Online Using Distribution Policy. Only one resource group is brought online on each node.
    Fallover
    • Fallover to Next Priority Node in the List. The resource group follows the default node priority order specified in the resource group’s nodelist.
    • Fallover Using Dynamic Node Priority. Before selecting this option, select one of the three predefined dynamic node priority policies. These are based on RSCT variables, such as the node with the most memory available.
    • Bring Offline (on Error Node Only). Select this option to bring a resource group offline on a node during an error condition.
    Fallback
    • Fallback to Higher Priority Node in the List. A resource group falls back when a higher priority node joins the cluster. If you select this option, you can use the delayed fallback timer. If you do not configure a delayed fallback policy, the resource group falls back immediately when a higher priority node joins the cluster.
    • Never Fallback. A resource group does not fall back to a higher priority node when it joins the cluster.

    For more information on each policy, see the Planning and Administration Guides.
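    As a reading aid, the startup, fallover, and fallback choices listed above can be thought of as three independent selections that together define a resource group's behavior. The sketch below simply enumerates them and picks one common combination; the member names paraphrase the SMIT option text and this is not an HACMP data structure.

        # Sketch of the startup / fallover / fallback policy choices as simple
        # enumerations, with one example combination.
        from enum import Enum

        class Startup(Enum):
            HOME_NODE_ONLY = "Online on Home Node Only"
            FIRST_AVAILABLE_NODE = "Online on First Available Node"
            ALL_AVAILABLE_NODES = "Online on All Available Nodes"
            DISTRIBUTION_POLICY = "Online Using Distribution Policy"

        class Fallover(Enum):
            NEXT_PRIORITY_NODE = "Fallover to Next Priority Node in the List"
            DYNAMIC_NODE_PRIORITY = "Fallover Using Dynamic Node Priority"
            BRING_OFFLINE = "Bring Offline (on Error Node Only)"

        class Fallback(Enum):
            TO_HIGHER_PRIORITY_NODE = "Fallback to Higher Priority Node in the List"
            NEVER = "Never Fallback"

        # One common combination: start on the home node, fall over along the
        # nodelist, fall back when the higher priority node returns.
        policies = (Startup.HOME_NODE_ONLY,
                    Fallover.NEXT_PRIORITY_NODE,
                    Fallback.TO_HIGHER_PRIORITY_NODE)
        print([p.value for p in policies])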

    Settling Time, Dynamic Node Priority and Fallback Timer

    You can configure some additional parameters for resource groups that dictate how the resource group behaves at startup, fallover and fallback. They are:

  • Settling Time. You can configure a startup behavior of a resource group by specifying the settling time for a resource group that is currently offline. When the settling time is not configured, the resource group starts on the first available higher priority node that joins the cluster. If the settling time is configured, HACMP waits for the duration of the settling time period for a higher priority node to join the cluster before it activates a resource group. Specifying the settling time enables a resource group to be acquired on a node that has a higher priority, when multiple nodes are joining simultaneously. The settling time is a cluster-wide attribute that, if configured, affects the startup behavior of all resource groups in the cluster for which you selected Online on First Available Node startup behavior.
  • Distribution Policy. You can configure the startup behavior of a resource group to use the node-based distribution policy. This policy ensures that during startup, a node acquires only one resource group. See the following section for more information.
  • Dynamic Node Priority. You can configure a fallover behavior of a resource group to use one of three dynamic node priority policies. These are based on RSCT variables such as the most memory or lowest use of CPU. To recover the resource group HACMP selects the node that best fits the policy at the time of fallover.
  • Delayed Fallback Timer. You can configure the fallback behavior of a resource group to occur at one of the predefined recurring times (daily, weekly, monthly or yearly), or on a specific date and time, by specifying and assigning a delayed fallback timer. This is useful, for instance, for scheduling resource group fallbacks to occur during off-peak business hours. (The settling time and the delayed fallback timer are illustrated in the sketch that follows this list.)
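    The following sketch illustrates, under invented times and node names, how the settling time and the delayed fallback timer influence the decisions described above; HACMP performs these evaluations internally.

        # Conceptual sketch of the settling time and delayed fallback timer
        # attributes. Times and node names are invented for illustration.
        import datetime

        def startup_node_with_settling(joined_nodes_during_settling, nodelist):
            """With a settling time, the group starts on the highest priority
            node that joined during the settling period, instead of on the
            first node to appear."""
            for node in nodelist:                   # nodelist is ordered by priority
                if node in joined_nodes_during_settling:
                    return node
            return None

        def fallback_allowed_now(now, fallback_time):
            """A delayed fallback timer defers the fallback until the
            configured time (for example, off-peak hours)."""
            return now >= fallback_time

        nodelist = ["nodeA", "nodeB", "nodeC"]
        print(startup_node_with_settling({"nodeB", "nodeA"}, nodelist))        # -> nodeA
        print(fallback_allowed_now(datetime.datetime(2024, 1, 1, 14, 0),
                                   datetime.datetime(2024, 1, 1, 22, 0)))      # -> False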
    Distribution Policy

    On cluster startup, you can use a node-based distribution policy for resource groups. If you select this policy for several resource groups, HACMP tries to have each node acquire only one of those resource groups during startup. This lets you distribute your CPU-intensive applications on different nodes.
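    A minimal sketch of node-based distribution follows, with hypothetical group and node names: each node acquires at most one of the resource groups that use this startup policy, and any group left over stays offline until another node becomes available. HACMP's actual placement decisions also consider other factors.

        # Sketch of node-based distribution at startup.

        def distribute_at_startup(groups, nodes):
            """Assign at most one distribution-policy group per node."""
            placement = {}
            free_nodes = list(nodes)
            for group in groups:
                if free_nodes:
                    placement[group] = free_nodes.pop(0)
                else:
                    placement[group] = None       # stays offline until a node is free
            return placement

        print(distribute_at_startup(["app1_rg", "app2_rg", "app3_rg"], ["nodeA", "nodeB"]))
        # -> {'app1_rg': 'nodeA', 'app2_rg': 'nodeB', 'app3_rg': None}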

    For more information on this resource group distribution policy and how it is handled during migration from previous releases, see the Planning Guide.

    For configuration information, and for information on resource group management, see the Administration Guide.

    Cluster Networks and Resource Groups

    Starting with HACMP 5.2, all resource groups support service IP labels configured on either IPAT via IP replacement networks or on aliased networks.

    A service IP label can be included in any non-concurrent resource group—that resource group could have any of the allowed startup policies except Online on All Available Nodes.

    Resource Group Dependencies

    HACMP supports resource group ordering and customized serial processing of resources to accommodate cluster configurations where a dependency exists between applications residing in different resource groups. With customized serial processing, you can specify that a given resource group be processed before another resource group. HACMP offers an easy way to configure parent/child dependencies between resource groups (and applications that belong to them) to ensure proper processing during cluster events.

    Starting with HACMP 5.3, location dependency policies are available for you to configure resource groups so that they are distributed the way you expect, not only when you start the cluster, but also during fallover and fallback. You can configure dependencies so that specified groups come online on different nodes or on the same nodes. HACMP processes the dependent resource groups in the proper order using parallel processing where possible and serial processing as necessary. You do not have to customize the processing.

    You can configure different types of dependencies among resource groups:

  • Parent/child dependencies
  • Location dependencies.
    The dependencies between resource groups that you configure are:

  • Explicitly specified using the SMIT interface
  • Established cluster-wide, not just on the local node
  • Guaranteed to be honored in the cluster.
    Child and Parent Resource Group Dependencies

    Configuring a resource group parent/child dependency allows for easier cluster configuration and control for clusters with multi-tiered applications where one application depends on the successful startup of another application, and both applications are required to be kept highly available with HACMP.

    The following example illustrates the parent/child dependency behavior:

  • If resource group A depends on resource group B, upon node startup, resource group B must be brought online before resource group A is acquired on any node in the cluster. Upon fallover, the order is reversed: Resource group A must be taken offline before resource group B is taken offline.
  • In addition, if resource group A depends on resource group B, during a node startup or node reintegration, resource group A cannot be taken online before resource group B is brought online. If resource group B is taken offline, resource group A will be taken offline too, since it depends on resource group B.
    Dependencies between resource groups offer a predictable and reliable way of building clusters with multi-tier applications. For more information on typical cluster environments that can use dependent resource groups, see Cluster Configurations with Multi-Tiered Applications in Chapter 6: HACMP Cluster Configurations.

    These terms describe parent/child dependencies between resource groups:

  • A parent resource group has to be in an online state before the resource group that depends on it (child) can be started.
  • A child resource group depends on a parent resource group. It will get activated on any node in the cluster only after the parent resource group has been activated. Typically, the child resource group depends on some application services that the parent resource group provides.
  • Upon resource group release (during fallover or stopping cluster services, for instance) HACMP brings offline a child resource group before a parent resource group is taken offline.
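    The processing order implied by parent/child dependencies can be sketched as a simple topological ordering: parents before children when bringing groups online, children before parents when taking them offline. The group names below are hypothetical, and HACMP computes this ordering itself.

        # Conceptual sketch of parent/child processing order.

        def online_order(parents_of):
            """Return groups ordered so every parent precedes its children
            (a simple topological sort; HACMP forbids circular dependencies)."""
            ordered, seen = [], set()
            def visit(group):
                if group in seen:
                    return
                seen.add(group)
                for parent in parents_of.get(group, []):
                    visit(parent)
                ordered.append(group)
            for group in parents_of:
                visit(group)
            return ordered

        # app_rg (child) depends on db_rg (parent); web_rg depends on app_rg.
        deps = {"db_rg": [], "app_rg": ["db_rg"], "web_rg": ["app_rg"]}
        print(online_order(deps))                    # -> ['db_rg', 'app_rg', 'web_rg']
        print(list(reversed(online_order(deps))))    # offline order: children first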

    The following graphic illustrates the parent/child dependency relationship between resource groups.

    Figure: Example of Two and Three Levels of Dependencies between Resource Groups

    The example shows relationships that were structured under these guidelines and limitations:

  • You can configure a type of dependency where a parent resource group must be online on any node in the cluster before a child (dependent) resource group can be activated on a node.
  • A resource group can serve as both a parent and a child resource group, depending on which end of a given dependency link it is placed.
  • You can specify three levels of dependencies for resource groups.
  • You cannot specify circular dependencies between resource groups.
    These guidelines and limitations also apply to parent/child dependencies between resource groups:

  • You can add, change or delete a dependency between resource groups, while the cluster services are running.
  • When you delete a dependency between two resource groups, only the link between these resource groups is removed from the HACMP Configuration Database. The resource groups are not deleted.
  • During fallover of a parent resource group, a child resource group containing the application temporarily goes offline and then online on any available node. The application that belongs to the child resource group is also stopped and restarted.
    Resource Group Location Dependencies

    In addition to various policies for individual resource groups and parent/child dependencies, HACMP 5.4 offers policies to handle overall resource group interdependencies. HACMP recognizes these relationships and processes the resource groups in the proper order. You can configure resource groups so that:

  • Two or more specified resource groups will always be online on the same node. They start up, fall over, and fall back to the same node.
  • Two or more specified resource groups will always be online on different nodes. They start up, fall over, and fall back to different nodes. You assign priorities to the resource groups so that the most critical ones are handled first in case of fallover and fallback.
  • Two or more specified resource groups (with replicated resources) will always be online on the same site.
    Once you configure individual resource groups with a given location dependency, they form a set that is handled as a unit by the Cluster Manager. The following rules apply when you move a resource group explicitly with the clRGmove command (a simplified sketch of these checks follows the list):

  • If a resource group participates in an Online On Same Node Dependency set, then it can be brought online only on the node where all other resource groups from the same node set are currently online. (This is the same rule for the Cluster Manager.)
  • If a resource group participates in an Online On Same Site Dependency set, then you can bring it online only on the site where the other resource groups from the same site set are currently online. (This is the same rule for the Cluster Manager.)
  • If a resource group participates in an Online On Different Nodes Dependency set, then you can bring it online only on a node that does not host any other resource group in the different node dependency set. (This is the same rule for the Cluster Manager.) However, when you move a resource group that belongs to this set, priorities are treated as of equal value, whereas when HACMP brings these groups online it takes priorities into account.
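    The placement checks described in the rules above can be sketched roughly as follows; the set memberships, group locations, and node names are hypothetical, and the real Cluster Manager logic is considerably more involved.

        # Simplified sketch of the clRGmove placement checks for location
        # dependency sets.

        def can_move_to(group, target_node, same_node_set, different_node_set, location):
            """location maps each online group to the node it currently occupies."""
            # Same-node rule applies only if the group belongs to that set:
            # the target must be where the rest of the set is online.
            if group in same_node_set:
                peers = [g for g in same_node_set if g != group and g in location]
                if any(location[g] != target_node for g in peers):
                    return False
            # Different-nodes rule: the target must not already host another
            # group of the set.
            if group in different_node_set:
                others = [g for g in different_node_set if g != group and g in location]
                if any(location[g] == target_node for g in others):
                    return False
            return True

        location = {"db_rg": "nodeA", "app_rg": "nodeA", "qa_rg": "nodeB"}
        same_node = {"db_rg", "app_rg"}          # must stay together
        different_nodes = {"app_rg", "qa_rg"}    # must stay apart
        print(can_move_to("app_rg", "nodeB", same_node, different_nodes, location))  # -> False
        print(can_move_to("qa_rg", "nodeC", same_node, different_nodes, location))   # -> True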
    Sample Location Dependency Model

    Consider the following example, which the figure below illustrates: XYZ Publishing company follows a business continuity model that involves prioritizing the different platforms used to develop the web content. Location policies are used to keep some resource groups strictly on separate nodes and others together on the same node.

    The figure shows how Nodes 1, 2, and 3 are used for separate applications (production, system applications, and QA), while each application's database must be kept on the same node as the application.

    For more information on planning location dependencies between resource groups, the application behavior in dependent resource groups and configuring dependencies that work successfully, see the Planning Guide and Administration Guide.

    Sites and Resource Groups

    Most HACMP configurations do not include sites and use the default inter-site management policy IGNORE. If you have installed an HACMP/XD component for disaster recovery, you distribute the cluster nodes between geographically separated sites and select one of the inter-site management policies.

    You include the resources you want to replicate in resource groups. You define the startup, fallover, and fallback policies for the primary instance of a replicated resource group. The primary instance is where the resource group is online. The node with the primary instance of the resource group activates all the group’s resources. The secondary instance (the replication) is activated on a node on the other site as a backup. The inter-site management policy in combination with the resource group startup, fallover, fallback policies determines the site where the primary instance is first located, and how fallover and fallback between sites is handled.
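    The following sketch illustrates the primary/secondary instance idea with hypothetical site and node names: the primary instance activates the group's resources on a node at one site, while the secondary (backup) instance is placed at the other site.

        # Conceptual sketch of primary and secondary instances of a replicated
        # resource group across two sites.

        sites = {"siteA": ["a1", "a2"], "siteB": ["b1", "b2"]}

        def place_instances(primary_site, sites):
            """Place the primary instance on a node at the primary site and
            the secondary (backup) instance on a node at the other site."""
            secondary_site = next(s for s in sites if s != primary_site)
            return {"primary": (primary_site, sites[primary_site][0]),
                    "secondary": (secondary_site, sites[secondary_site][0])}

        print(place_instances("siteA", sites))
        # -> {'primary': ('siteA', 'a1'), 'secondary': ('siteB', 'b1')}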

    In HACMP 5.4, the following options exist for configuring resource group inter-site management policies:

  • Prefer Primary Site
  • Online On Either Site
  • Online On Both Sites.
    If you define sites for the cluster, then when you define the startup, fallover, and fallback policies for each resource group you want to replicate, you assign the resource group to a node on the primary site, and to a node at the other (secondary) site. The primary instance of the resource group runs on the primary site, the secondary instance runs at the secondary site.

    If you have a concurrent resource group, you define it to run on all nodes. In this case, you can select the inter-site management policy Online on Both Sites. Then, the instances on both sites are active (there are no secondary instances). You can also select the other inter-site management policies so that a concurrent resource group is online on all nodes at one site, and has backup instances on the other site.

    Starting with HACMP 5.3, you can also move the primary instance of a resource group across site boundaries with the clRGmove utility. HACMP then redistributes the peer secondary instances as necessary (or gives you a warning if the move is disallowed due to a configuration requirement).

    For more information on inter-site management policies and how they work with the startup, fallover, and fallback resource group policies, see the Planning Guide. See also Appendix B in the Administration Guide for detailed examples of resource group behavior with dependencies and sites configured.

