Chapter 11: Managing Shared LVM Components
This chapter explains how to maintain AIX 5L Logical Volume Manager (LVM) components shared by nodes in an HACMP cluster and provides procedures for managing volume groups, filesystems, logical volumes, and physical volumes using the HACMP Cluster-Single Point of Control (C-SPOC) utility.
The C-SPOC utility simplifies maintenance of shared LVM components in clusters of up to 32 nodes. C-SPOC commands provide comparable functions in a cluster environment to the standard AIX 5L commands that work on a single node. By automating repetitive tasks, C-SPOC eliminates a potential source of errors, and speeds up the cluster maintenance process.
In SMIT, you access C-SPOC using the System Management (C-SPOC) menu.
Although you can also use AIX 5L on each node to do these procedures, using the C-SPOC utility ensures that all commands are executed in the proper order. For specific information on AIX 5L commands and SMIT panels, see your AIX 5L System Management Guide.
The main topics in this chapter include:

- Overview
- Understanding C-SPOC
- Updating LVM Components in an HACMP Cluster
- Maintaining Shared Volume Groups
- Maintaining Logical Volumes
- Maintaining Shared Filesystems
- Maintaining Physical Volumes
- Managing Data Path Devices with C-SPOC
- Configuring Cross-Site LVM Mirroring.
Overview
A key element of any HACMP cluster is the data used by the highly available applications. This data is stored on AIX 5L LVM entities. HACMP clusters use the capabilities of the LVM to make this data accessible to multiple nodes.
In an HACMP cluster, the following definitions are used:
- A shared volume group is a volume group that resides entirely on the external disks shared by cluster nodes.
- A shared physical volume is a disk that resides in a shared volume group.
- A shared logical volume is a logical volume that resides entirely in a shared volume group.
- A shared filesystem is a filesystem that resides entirely in a shared logical volume.

Common Maintenance Tasks
As a system administrator of an HACMP cluster, you may be called upon to perform any of the following LVM-related tasks:
- Creating a new shared volume group
- Extending, reducing, changing, or removing an existing volume group
- Creating a new shared logical volume
- Extending, reducing, changing, or removing an existing logical volume
- Creating a new shared filesystem
- Extending, changing, or removing an existing filesystem
- Adding or removing physical volumes.

When performing any of these maintenance tasks on shared LVM components, keep in mind that ownership and permissions on logical volumes are reset when a volume group is exported and then re-imported. After exporting and importing, a volume group is owned by root and accessible by the system group. Applications, such as some database servers, that use raw logical volumes may be affected by this if they change the ownership of the raw logical volume device. You must restore the required ownership and permissions after this sequence, as in the sketch below.
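For example, a database that accesses a raw logical volume device may need its ownership restored after an export/import cycle. A minimal sketch, assuming a hypothetical logical volume datalv owned by user dbadmin and group dbgroup:

    # Restore ownership and permissions on the raw LV device
    # (all names here are hypothetical; substitute your own)
    chown dbadmin:dbgroup /dev/rdatalv
    chmod 660 /dev/rdatalv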
Understanding C-SPOC
The C-SPOC commands operate only on shared and concurrent LVM components that are defined as part of an HACMP resource group. When you use the C-SPOC panels in SMIT, the command is executed on the node that owns the LVM component (the node that has it varied on).
If you run a ps command to verify what processes are running during a C-SPOC LVM operation, such as creating, extending, mirroring or unmirroring a shared volume group, you see output similar to the following:
    ps -ef | grep vg
    root 11952 13522 0 08:56:25 - 0:00 ksh /usr/es/sbin/cluster/cspoc/cexec cllvmcmd -extendvg -f gdgpgogdhfhchgghdb gigegjhdgldbda

The encoded strings at the end are the C-SPOC encapsulation of the arguments and parameters sent to the remote nodes.
Understanding C-SPOC and Its Relation to Resource Groups
The C-SPOC commands that modify LVM components require a resource group name as an argument. The LVM component that is the target of the command must be configured in the resource group specified. C-SPOC uses the resource group information to determine on which nodes it must execute the operation specified.
Removing a Filesystem or Logical Volume
When removing a filesystem or logical volume using C-SPOC, the target filesystem or logical volume must not be configured as a resource in the resource group specified. You must remove the configuration for it from the resource group before removing the filesystem or logical volume.
Migrating a Resource Group
You can use the Resource Group Management utility under the Extended Configuration > System Management (C-SPOC) > HACMP Resource Group and Application Management menu in SMIT to perform resource group maintenance tasks. This utility enhances failure recovery capabilities of HACMP and allows you to change the status or the location of any type of resource group (along with its resources—IP addresses, applications, and disks), without stopping cluster services. For instance, you can use this utility to free a given node of any resource groups in order to perform system maintenance on that cluster node.
Non-concurrent resource group management tasks that you can perform using the Resource Group Management utility are:
- Dynamically move a specified non-concurrent resource group from the node on which it currently resides to a destination node that you specify.
- Take a non-concurrent resource group online or offline on one or all nodes in the cluster.

For more information on Resource Group Migration, see the section Resource Group Migration in Chapter 15: Managing Resource Groups in a Cluster.
Updating LVM Components in an HACMP Cluster
When you change the definition of a shared LVM component in a cluster, the operation updates the LVM data that describes the component on the local node and in the Volume Group Descriptor Area (VGDA) on the disks in the volume group. AIX 5L LVM enhancements allow all nodes in the cluster to be aware of changes to a volume group, logical volume, and filesystem, at the time the changes are made, rather than waiting for the information to be retrieved during a lazy update.
If for some reason a node is not updated via the C-SPOC enhanced utilities, due to an error condition (a node is down, for example), the volume group definition is updated later: the change is applied during execution of the clvaryonvg command.
If a node failure does occur during a C-SPOC operation, an error is displayed on the panel and the error messages are recorded in the C-SPOC log. (/tmp/cspoc.log is the default location of this log.) Other C-SPOC failures are also logged to cspoc.log but are not displayed. Check this log whenever a C-SPOC problem occurs.
If you change the name of a filesystem, or remove a filesystem and then perform a lazy update, lazy update does not run the imfs -lx command before running the imfs command. This may lead to a failure during fallover or prevent a successful restart of the HACMP cluster services.
To prevent this from occurring, use the C-SPOC utility to change or remove filesystems. This ensures that imfs -lx runs before imfs and that the changes are updated on all nodes in the cluster.
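In other words, C-SPOC ensures that the stale filesystem stanza is removed before the definition is re-imported. A manual sketch of that ordering on a single node, assuming a hypothetical logical volume sharedlv and using the flags named above:

    imfs -lx sharedlv   # remove the old /etc/filesystems stanza for this LV
    imfs -l sharedlv    # re-create the stanza from the logical volume control block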
Error reporting provides detailed information about inconsistency in volume group state across the cluster. If this happens, you must take manual corrective action. For example, if the filesystem changes are not updated on all nodes, update the nodes manually with this information.
Lazy Update Processing in an HACMP Cluster
For LVM components under the control of HACMP, you do not have to explicitly do anything to bring the other cluster nodes up to date. Instead, HACMP can perform an importvg -L when it activates the volume group during a fallover. (In a cluster, HACMP controls when volume groups are activated.) HACMP implements a function, called lazy update, by keeping a copy of the time stamp from the volume group’s VGDA. AIX 5L updates this time stamp whenever the LVM component is modified. When another cluster node attempts to vary on the volume group, HACMP compares its copy of the time stamp with the time stamp in the VGDA on the disk. If the values are different, the HACMP software exports and re-imports the volume group before activating it. If the time stamps are the same, HACMP activates the volume group without exporting and re-importing.
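Conceptually, the comparison works like the following shell sketch (an illustration only, not the actual HACMP implementation; the volume group and disk names are hypothetical):

    # saved_ts: HACMP's stored copy of the VGDA time stamp
    # disk_ts:  the time stamp currently in the VGDA on disk
    if [ "$saved_ts" != "$disk_ts" ]; then
        exportvg sharedvg              # discard the stale local definition
        importvg -L sharedvg hdisk3    # re-import the current definition
    fi
    varyonvg sharedvg                  # activate the volume group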
When a lazy update is performed in a cluster using the C-SPOC utility, all nodes specified from the node originating the command are updated accordingly.
Note: Starting with HACMP 5.2, HACMP does not require lazy update processing for enhanced concurrent volume groups, as it keeps all cluster nodes updated with the LVM information.
Forcing an Update before Fallover
In certain circumstances, you may want to update the LVM definition on remote cluster nodes before a fallover occurs. For example, if you rename a logical volume using C-SPOC, the LVM data describing the component is updated on the local node and is updated in the VGDA on the disks, as previously described. If you attempt to rename the logical volume a second time using C-SPOC, the operation fails if the LVM data on any other cluster node has not been updated.
When you use C-SPOC to update the LVM data on a remote node, it causes the specified remote node to run importvg -L to update the LVM data whether the time stamp associated with the LVM component is the same or different.
Note: The volume group must be varied off on all nodes accessing the shared LVM component before running the update.
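Run by hand on the remote node, the equivalent update would look like the following sketch (the volume group and disk names are hypothetical):

    varyoffvg sharedvg             # the volume group must be varied off here
    importvg -L sharedvg hdisk3    # re-read the LVM definition from the VGDA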
Maintaining Shared Volume Groups
While maintaining the HACMP cluster, you may need to perform the following administrative tasks with shared volume groups:

- Creating a shared volume group
- Importing a shared volume group
- Setting the characteristics of a shared volume group
- Mirroring and unmirroring a shared volume group
- Synchronizing volume group mirrors and the volume group definition.
Using C-SPOC simplifies the steps required for all tasks. Moreover, you do not have to stop and restart cluster services to do the tasks.
Enabling Fast Disk Takeover
HACMP automatically uses fast disk takeover for enhanced concurrent mode volume groups that are included as resources in shared resource groups residing on shared disks.
If you upgraded from previous releases and have existing volume groups included in non-concurrent resource groups, to use this functionality, convert these volume groups to enhanced concurrent volume groups using C-SPOC.
Fast disk takeover is supported when AIX 5L v.5.2 and greater is installed on all nodes in the cluster. If you include your enhanced concurrent volume groups in the shared resource groups and AIX 5L v.5.2 or greater is not detected as being installed on all nodes in the cluster, you receive an error upon cluster verification.
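A quick way to confirm this prerequisite is the standard AIX oslevel command, run on every cluster node:

    oslevel -r    # should report 5200-00 or later on each node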
To use fast disk takeover after an upgrade from a previous release, use C-SPOC to convert existing volume groups included in non-concurrent resource groups to enhanced concurrent volume groups.
For more information on fast disk takeover, see Chapter 4: Planning Shared LVM Components in the Planning Guide.
Understanding Active and Passive Varyon in Enhanced Concurrent Mode
An enhanced concurrent volume group can be made active on the node, or varied on, in two states: active or passive. Note that active or passive state varyons are done automatically by HACMP upon detection of the enhanced concurrent mode volume group, based on the state of the volume group and current cluster configuration.
Warning: All nodes in the cluster must be available before making any LVM changes. This ensures that all nodes have an accurate view of the state of the volume group. For more information about safely performing a forced varyon operation, and on instructions how to configure it in SMIT, see Forcing a Varyon of Volume Groups in Chapter 5: Configuring HACMP Resource Groups (Extended).
Active State Varyon
Active state varyon behaves as ordinary varyon, and makes the logical volumes normally available. When an enhanced concurrent volume group is varied on in active state on a node, it allows the following operations:
- Operations on filesystems, such as filesystem mounts
- Operations on applications
- Operations on logical volumes, such as creating logical volumes
- Synchronizing volume groups.

Passive State Varyon
When an enhanced concurrent volume group is varied on in passive state, the LVM provides an equivalent of fencing for the volume group at the LVM level.
Passive state varyon allows only a limited number of read-only operations on the volume group:
- LVM read-only access to the volume group's special file
- LVM read-only access to the first 4 KB of all logical volumes that are owned by the volume group.

The following operations are not allowed when a volume group is varied on in passive state:
- Operations on filesystems, such as mounting filesystems
- Any operations on logical volumes, such as having logical volumes open
- Synchronizing volume groups.

Using Active or Passive State Varyon in HACMP
HACMP detects when a volume group included in a shared resource group is converted to or defined as an enhanced concurrent mode volume group, and notifies the LVM which node currently owns the volume group. Based on this information, the LVM activates the volume group in the appropriate active or passive state depending on the node on which this operation takes place.
- Upon cluster startup, if the volume group currently resides on the node that owns the resource group, HACMP activates the volume group on this node in active state. HACMP activates the volume group in passive state on all other nodes in the cluster. Note that HACMP activates a volume group in active state on only one node at a time.
- Upon fallover, if a node releases a resource group, or if the resource group is being moved to another node for any other reason, HACMP switches the varyon state for the volume group from active to passive on the node that releases the resource group (if cluster services are still running), and activates the volume group in active state on the node that acquires the resource group. The volume group remains in passive state on all other nodes in the cluster.
- Upon node reintegration, this procedure is repeated: HACMP changes the varyon state of the volume group from active to passive on the node that releases the resource group and varies on the volume group in active state on the joining node. The volume group remains in passive state on all other nodes in the cluster.

Note: The switch between active and passive states is necessary to prevent mounting filesystems on more than one node at a time.
See Chapter 12: Managing Shared LVM Components in a Concurrent Access Environment for information on enhanced concurrent volume groups in concurrent resource groups.
Verification Checks for Shared Volume Groups Defined for Auto Varyon
Typically, shared volume groups listed within a resource group should have their auto-varyon attribute in the AIX 5L ODM set to No.
The HACMP verification checks that the volume group auto-varyon flag is set to No. If you use the interactive mode for verification, you will be prompted to set the auto-varyon flag to No on all the cluster nodes listed in the resource group.
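If verification flags a volume group, you can check and correct the attribute with standard AIX commands (the volume group name here is hypothetical):

    lsvg sharedvg | grep "AUTO ON"   # should show: AUTO ON: no
    chvg -a n sharedvg               # disable auto-varyon if it is set to yes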
Checking the Status of a Volume Group
As with regular cluster takeover operations, you can debug and trace the cluster activity for fast disk takeover using the information logged in the hacmp.out file. You may check the status of the volume group by issuing the lsvg command. Depending on your configuration, the lsvg command returns the following settings:
- VG STATE is active if the volume group is varied on either actively or passively.
- VG PERMISSION is read/write if the volume group is actively varied on on the node, or passive-only if it is passively varied on.
- CONCURRENT is either Capable or Enhanced-Capable (for concurrent volume groups).

Here is an example of lsvg output:
    # lsvg vg1
    VOLUME GROUP:   vg1                    VG IDENTIFIER:   00020adf00004c00000000f329382713
    VG STATE:       active                 PP SIZE:         16 megabyte(s)
    VG PERMISSION:  passive-only           TOTAL PPs:       542 (8672 megabytes)
    MAX LVs:        256                    FREE PPs:        521 (8336 megabytes)
    LVs:            3                      USED PPs:        21 (336 megabytes)
    OPEN LVs:       0                      QUORUM:          2
    TOTAL PVs:      1                      VG DESCRIPTORS:  2
    STALE PVs:      0                      STALE PPs:       0
    ACTIVE PVs:     1                      AUTO ON:         no
    Concurrent:     Enhanced-Capable       Auto-Concurrent: Disabled
    VG Mode:        Concurrent
    Node ID:        2                      Active Nodes:    1 4
    MAX PPs per PV: 1016                   MAX PVs:         32
    LTG size:       128 kilobyte(s)        AUTO SYNC:       no
    HOT SPARE:      no                     BB POLICY:       relocatable

Avoiding a Partitioned Cluster
When configuring enhanced concurrent volume groups in shared resource groups, ensure that multiple networks exist for communication between the nodes in the cluster, to avoid cluster partitioning. When fast disk takeover is used, the normal SCSI reserve is not set to prevent multiple nodes from accessing the volume group.
In a partitioned cluster, it is possible that nodes in each partition could accidentally vary on the volume group in active state. Because active state varyon of the volume group allows filesystem mounts and changes to the physical volumes, this state can potentially lead to different copies of the same volume group. Make sure that you configure multiple communication paths between the nodes in the cluster.
Collecting Information on Current Volume Group Configuration
HACMP can collect information about all shared volume groups that are currently available on the nodes in the cluster, and volume groups that can be imported to the other nodes in the resource group. HACMP filters out volume groups that are already included in any of the resource groups.
You can use this information to import discovered volume groups onto other nodes in the resource group that do not have these volume groups.
To collect the information about volume group configuration:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Discover HACMP-related Information from Configured Nodes and press Enter.
Importing Shared Volume Groups
When adding a volume group to the resource group, you may choose to manually import a volume group onto the destination nodes, or you may automatically import it onto all the destination nodes in the resource group.
Importing a Volume Group Automatically
You can set automatic import of a volume group in SMIT under the HACMP Extended Resource Group Configuration menu. It enables HACMP to automatically import shareable volume groups onto all the destination nodes in the resource group. Automatic import allows you to create a volume group and then add it to the resource group immediately, without manually importing it onto each of the destination nodes in the resource group.
Prior to importing volume groups automatically, make sure that you have collected the information on appropriate volume groups, using the HACMP Extended Configuration > Discover HACMP-related Information from Configured Nodes action in SMIT.
Note: Each volume group is assigned a major number when it is created. When HACMP automatically imports a volume group, the major number already assigned to the volume group will be used if it is available on all the destination nodes. Otherwise, any free major number will be used.
Prerequisites and Notes
In order for HACMP to import available volume groups, ensure the following conditions are met:
- Volume group names must be the same across cluster nodes and unique to the cluster.
- Logical volumes and filesystems must have unique names.
- All physical disks must be known to AIX 5L and have PVIDs assigned.
- The physical disks on which the volume group resides are available to all of the nodes in the resource group.

Procedure for Importing a Volume Group Automatically
To add volume groups to a resource group via automatic import:
1. Enter smit hacmp
2. In SMIT, select Extended Configuration > Extended Resource Configuration > HACMP Extended Resource Group Configuration > Change/Show Resources/Attributes for a Resource Group and press Enter.
3. On the next panel, select the resource group for which you want to define a volume group and press Enter. A panel appears with fields for all the types of resources applicable for the type of resource group you selected.
4. In the Volume Groups field, you can select the volume groups from the picklist or enter volume groups names.
If, prior to this procedure, you requested that HACMP collect information about the appropriate volume groups, then pressing the F4 key gives you a list of all volume groups collected cluster-wide, including all shared volume groups in the resource group and the volume groups that are currently available for import onto the resource group nodes. HACMP filters out from this list those volume groups that are already included in any of the resource groups.
Note: The list of volume groups will include only the non-concurrent capable volume groups. This list will not include rootvg and any volume groups already defined to another resource group.
5. Set the Automatically Import Volume Groups flag to True. (The default is False.)
6. Press Enter. HACMP determines whether the volume groups that you entered or selected in the Volume Groups field need to be imported to any of the nodes that you defined for the resource group and proceeds to import them as needed.
Final State of Automatically Imported Volume Groups
When HACMP automatically imports volume groups, their final state (varied on or varied off) depends on the initial state of the volume group and whether the resource group is online or offline when the import occurs.
In all cases, the volume group ends up varied on after the resource group is started or after the cluster resources are synchronized, even if it is varied off at some point during the import process.
Importing a Volume Group Manually
If you want to import your volume group manually upon adding it to the resource group, make sure that in SMIT the Automatically Import Volume Groups flag is set to False (this is the default) and use the AIX 5L importvg fastpath.
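A command-line sketch of the manual import, assuming a hypothetical volume group sharedvg on hdisk3 with major number 45:

    importvg -y sharedvg -V 45 hdisk3   # import, specifying the major number
    chvg -a n sharedvg                  # keep auto-varyon disabled
    varyoffvg sharedvg                  # leave it varied off for HACMP to manage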
Importing a Shared Volume Group with C-SPOC
To import a volume group using the C-SPOC utility:
1. Complete prerequisite tasks. The physical volumes (hdisks) in the volume group must be installed, configured, and available on all nodes that can own the volume group.
2. On any cluster node that can own the shared volume group (is in the participating nodes list for the resource group), vary on the volume group, using the SMIT varyonvg fastpath (if it is not varied on already).
3. On the source node, enter the fastpath smit cl_admin
4. In SMIT, select HACMP Logical Volume Management > Shared Volume Groups > Import a Shared Volume Group and press Enter.
A list of volume groups appears. (Enhanced concurrent volume groups are also included as choices in picklists for non-concurrent resource groups.)
5. Select a volume group and press Enter.
6. Select a physical volume and press Enter.
SMIT displays the Import a Shared Volume Group panel. Values for fields you have selected are displayed.
7. Enter or confirm values for the other fields as needed.
8. If this panel reflects the correct information, press Enter to import the shared volume group. All nodes in the cluster receive this updated information.
If you did this task from a cluster node that does not need the shared volume group varied on, vary off the volume group on that node.
Creating a Shared Volume Group with C-SPOC
Before creating a shared volume group for the cluster using C-SPOC, check that:
- All disk devices are properly attached to the cluster nodes.
- All disk devices are properly configured on all cluster nodes and the devices are listed as available on all nodes.
- Disks have a PVID. (A quick check is sketched below.)
- If you create concurrent SSA volume groups, you must assign unique non-zero node numbers with ssar on each cluster node. If you select the option for SSA fencing, HACMP automatically assigns unique node numbers during cluster synchronization.

Note: If you add a VPATH disk to a volume group made up of hdisks, the volume group will be converted to VPATHs on all nodes.
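A quick pre-check with standard AIX commands, run on each node:

    lsdev -Cc disk    # all shared disks should be in the Available state
    lspv              # the column after the disk name shows the PVID
                      # ("none" means no PVID is assigned); "None" in the
                      # volume group column means the disk is free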
To create a shared volume group for a selected list of cluster nodes:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Volume Groups > Create a Shared Volume Group (or Create a Shared Volume Group with Data Path Devices) and press Enter.
3. Select two or more nodes from the list and press Enter.
The system correlates a list of all free physical disks that are available to all nodes selected. (Free disks are those disks that currently are not part of a volume group and have PVIDs.) SMIT displays the list of free physical disks in a multi-picklist by PVIDs. If you are creating a shared volume group on Data Path Devices, only VPATH-capable disks are listed.
4. Select one or more disks from the list and press Enter.
5. Complete the selections as follows and press Enter.
Node Names
    The node(s) you selected.
PVID
    PVID of the selected disk.
VOLUME GROUP name
    The name of the volume group must be unique within the cluster and distinct from the service IP label/address and resource group names; it should relate to the application it serves, as well as to any corresponding device. For example, websphere_service_VG.
Physical partition SIZE in megabytes
    Accept the default.
Volume group MAJOR NUMBER
    The system displays the number that C-SPOC has determined to be correct. WARNING: Changing the volume group major number may result in the command's inability to execute on a node that does not have that major number currently available. Check for a commonly available major number on all nodes before changing this setting.
Enable Cross-Site LVM Mirroring
    Set this field to True to enable data mirroring between sites. The default is False. See Configuring Cross-Site LVM Mirroring for more information.
C-SPOC verifies communication paths and version compatibility and then executes the command on all of the selected nodes. If cross-site LVM mirroring is enabled, that configuration is verified as well.
Note: If the major number that you entered on the SMIT panel was not free at the time that the system attempted to make the volume group, HACMP displays an error for the node that did not complete the command, and continues to the other nodes. At the completion of the command the volume group will not be active on any node in the cluster.
6. Run the discovery process so that the new volume group is included in picklists for future actions.
Setting Characteristics of a Shared Volume Group
You can change the volume group’s characteristics by:
- Adding or removing a volume from the shared volume group
- Enabling or disabling the volume group for cross-site LVM mirroring.

Adding or Removing a Volume from a Shared Volume Group
To add or remove a volume to or from a shared volume group:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Volume Groups > Set Characteristics of a Shared Volume Group > Add (or Remove) a Volume from a Shared Volume Group and press Enter.
3. Select the volume group and press Enter.
4. Select the volume to add or remove from the list and press Enter.
5. Synchronize the cluster.
Enabling or Disabling Cross-Site LVM Mirroring
For more information, see Configuring Cross-Site LVM Mirroring.
To enable or disable cross-site LVM mirroring of a shared volume group:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Volume Groups > Set Characteristics of a Shared Volume Group > Enable/Disable a Shared Volume Group for Cross-site Mirroring and press Enter.
3. Select a volume group from the list.
4. Enable or disable the field Enable Cross-Site LVM Mirroring (True or False) and press Enter. If you disable cross-site mirroring, do this on each node that can own the volume group.
5. Synchronize the cluster if the cluster services are not running. If you have sites defined, you cannot synchronize while cluster services are running.
Mirroring a Volume Group Using C-SPOC
To mirror a shared volume group using the C-SPOC utility:
1. Complete prerequisite tasks. The physical volumes (hdisks) in the volume group must be installed, configured, and available.
2. On any cluster node that can own the shared volume group (is in the participating nodes list for the resource group), vary on the volume group, using the SMIT varyonvg fastpath (if it is not varied on already).
3. On the source node, enter smit cl_admin
4. In SMIT, select HACMP Logical Volume Management > Shared Volume Groups > Mirror a Shared Volume Group.
SMIT displays a list of resource groups and associated volume groups. (Enhanced concurrent volume groups are also included as choices in picklists for non-concurrent resource groups.)
5. Select a volume group and press Enter.
6. Select entries from the list of nodes and physical volumes (hdisks) and press Enter.
7. Enter or confirm values for the other fields as needed.
8. If this panel reflects the correct information, press Enter to mirror the shared volume group. All nodes in the cluster receive this updated information.
If you did this task from a cluster node that does not need the shared volume group varied on, vary off the volume group on that node.
Unmirroring a Volume Group Using C-SPOC
To unmirror a shared volume group using the C-SPOC utility:
1. Complete prerequisite tasks. The physical volumes (hdisks) in the volume group must be installed, configured, and available.
2. On any cluster node that can own the shared volume group (is in the participating nodes list for the resource group), vary on the volume group, using the SMIT varyonvg fastpath (if it is not varied on already).
3. On the source node, enter the fastpath smit cl_admin
4. In SMIT, select HACMP Logical Volume Management > Shared Volume Groups > Unmirror a Shared Volume Group and press Enter.
SMIT displays a list of resource groups and associated volume groups. (Enhanced concurrent volume groups are also included as choices in picklists for non-concurrent resource groups.)
5. Select a volume group and press Enter.
6. Select entries from the list of nodes and physical volumes (hdisks) and press Enter.
7. Enter or confirm values for the other fields as needed.
8. If this panel reflects the correct information, press Enter to unmirror the shared volume group. All nodes in the cluster receive this updated information.
If you did this task from a cluster node that does not need the shared volume group varied on, vary off the volume group on that node.
Synchronizing Volume Group Mirrors
To synchronize shared LVM Mirrors by volume group using the C-SPOC utility:
1. Complete prerequisite tasks. The physical volumes (hdisks) in the volume group must be installed, configured, and available.
2. On any cluster node that can own the shared volume group (is in the participating nodes list for the resource group), vary on the volume group, using the SMIT varyonvg fastpath (if it is not varied on already).
3. On the source node, enter the fastpath smit cl_admin
4. In SMIT, select HACMP Logical Volume Management > Synchronize Shared LVM Mirrors > Synchronize By Volume Group and press Enter.
SMIT displays a list of resource groups and associated volume groups. (Enhanced concurrent volume groups are also included as choices in picklists for non-concurrent resource groups.)
5. Select a volume group and press Enter.
6. Enter or confirm values for the other fields as needed.
7. If this panel reflects the correct information, press Enter to synchronize LVM mirrors by the shared volume group. All nodes in the cluster receive this updated information.
If you did this task from a cluster node that does not need the shared volume group varied on, vary off the volume group on that node.
Synchronizing a Shared Volume Group Definition
To synchronize a shared volume group definition using the C-SPOC utility:
1. Complete prerequisite tasks. The physical volumes (hdisks) in the volume group must be installed, configured, and available.
2. On the source node, enter the fastpath smit cl_admin
3. In SMIT, select HACMP Logical Volume Management > Synchronize a Shared Volume Group Definition and press Enter.
4. On the Synchronize a Shared Volume Group Definition panel, enter the name of the shared volume group to synchronize. Press F4 for a picklist. The picklist contains only volume groups that are not varied on anywhere in the cluster.
5. Select a volume group and press Enter.
The command runs. All nodes in the cluster receive this updated information.
Maintaining Logical Volumes
The administrative tasks that involve shared logical volumes are described in the sections that follow. You can perform all of these tasks using the C-SPOC utility.
Note: On RAID devices, increasing or decreasing the number of copies (mirrors) of a shared logical volume is not supported.
Adding a Logical Volume to a Cluster Using C-SPOC
To add a logical volume to a cluster using C-SPOC:
1. Enter the C-SPOC fastpath: smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Logical Volumes > Add a Shared Logical Volume and press Enter.
3. Select a resource group-volume group combination and press Enter.
4. Select a physical volume and press Enter. The Add a Shared Logical Volume panel appears, with the chosen fields filled in.
5. The default logical volume characteristics suit most configurations. Make changes if necessary for your system and press Enter. Other cluster nodes are updated with this information.
Setting Characteristics of a Shared Logical Volume Using C-SPOC
This section contains instructions for tasks that you can do for all cluster nodes from one node with C-SPOC.
Renaming a Shared Logical Volume Using C-SPOC
To rename a shared logical volume on all nodes in a cluster by executing a C-SPOC command on any node:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Logical Volumes > Set Characteristics of A Shared Logical Volume > Rename a Shared Logical Volume and press Enter. SMIT displays the Rename a Logical Volume on the Cluster panel.
3. Select a logical volume and press Enter. SMIT displays a panel with the Resource group name and Current logical volume name filled in.
4. Enter the new name in the NEW logical volume name field and press Enter. The C-SPOC utility changes the name on all cluster nodes.
Note: After completing this procedure, confirm your changes by initiating failures and verifying correct fallover behavior before resuming normal cluster operations.
Increasing the Size of a Shared Logical Volume Using C-SPOC
To increase the size of a shared logical volume on all nodes in a cluster:
1. On any node, enter the SMIT fastpath: smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Logical Volumes > Set Characteristics of A Shared Logical Volume > Increase the Size of a Shared Logical Volume and press Enter. SMIT displays a list of logical volumes arranged by resource group.
3. Select a logical volume from the picklist and press Enter.
4. Select a physical volume and press Enter. SMIT displays the Increase Size of a Shared Logical Volume panel with the Resource Group, Logical Volume, Reference Node and default fields filled.
5. Enter the new size in the Number of ADDITIONAL logical partitions field and press Enter. The C-SPOC utility changes the size of this logical volume on all cluster nodes.
Adding a Copy to a Shared Logical Volume Using C-SPOC
To add a copy to a shared logical volume on all nodes in a cluster:
1. On any node, enter the fastpath smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Logical Volumes > Set Characteristics of A Shared Logical Volume > Add a Copy to a Shared Logical Volume and press Enter. SMIT displays a list of logical volumes arranged by resource group.
3. Select a logical volume from the picklist and press Enter. SMIT displays a list of physical volumes.
4. Select a physical volume and press Enter. SMIT displays the Add a Copy to a Shared Logical Volume panel with the Resource Group, Logical Volume, Reference Node and default fields filled.
5. Enter the new number of mirrors in the NEW TOTAL number of logical partitions field and press Enter. The C-SPOC utility changes the number of copies of this logical volume on all cluster nodes.
Removing a Copy from a Shared Logical Volume Using C-SPOC
To remove a copy of a shared logical volume on all nodes in a cluster:
1. On any node, enter the fastpath smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Logical Volumes > Set Characteristics of A Shared Logical Volume > Remove a Copy from a Shared Logical Volume and press Enter. SMIT displays a list of logical volumes arranged by resource group.
3. Select the logical volume from the picklist and press Enter. SMIT displays a list of nodes and physical volumes.
4. Select the physical volumes from which you want to remove copies and press Enter. SMIT displays the Remove a Copy from a Shared Logical Volume panel with the Resource Group, Logical Volume name, Reference Node and Physical Volume names fields filled in.
5. Enter the new number of mirrors in the NEW maximum number of logical partition copies field, verify that the PHYSICAL VOLUME name(s) to remove copies from field is correct, and press Enter. The C-SPOC utility changes the number of copies of this logical volume on all cluster nodes.
6. To check the status of the C-SPOC command execution on all nodes, view the C-SPOC log file in /tmp/cspoc.log.
Changing a Shared Logical Volume
To change the characteristics of a shared logical volume on all nodes in a cluster:
1. On any node, enter the fastpath smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Logical Volumes > Change a Shared Logical Volume option and press Enter. SMIT displays a picklist of existing logical volumes.
3. Select the logical volume. SMIT displays the panel, with the values of the selected logical volume attributes filled in.
4. Enter data in the fields you want to change and press Enter. The C-SPOC utility changes the characteristics on the local node. The logical volume definition is updated on remote nodes.
5. To check the status of the C-SPOC command execution on all nodes, view the C-SPOC log file in /tmp/cspoc.log.
Note: After completing this procedure, confirm your changes by initiating failures and verifying correct fallover behavior before resuming normal cluster operations.
Removing a Logical Volume Using C-SPOC
Note: If the logical volume to be removed contains a filesystem, you first must remove the filesystem from any specified resource group before attempting to remove the logical volume. Afterwards, be sure to synchronize cluster resources on all cluster nodes.
To remove a logical volume on any node in a cluster:
1. On any node, enter the fastpath smit cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Logical Volumes > Remove a Shared Logical Volume and press Enter.
3. C-SPOC provides a list of shared logical volumes, organized by HACMP resource group. Select the logical volume you want to remove and press Enter. Remote nodes are updated.
4. To check the status of the C-SPOC command execution on other cluster nodes, view the C-SPOC log file in /tmp/cspoc.log.
Synchronizing LVM Mirrors by Logical Volume
To synchronize shared LVM mirrors by logical volume using the C-SPOC utility:
1. Complete prerequisite tasks. The physical volumes (hdisks) in the volume group must be installed, configured, and available.
2. On any cluster node that can own the shared volume group (is included in the participating nodes list for the resource group), vary on the volume group, using the SMIT varyonvg fastpath (if it is not varied on already).
3. On the source node, enter the fastpath smit cl_admin
4. In SMIT, select HACMP Logical Volume Management > Synchronize Shared LVM Mirrors > Synchronize By Logical Volume and press Enter.
5. Select a logical volume and press Enter.
6. Enter or confirm values for the other fields as needed.
7. If this panel reflects the correct information, press Enter to synchronize LVM mirrors by the shared logical volume. All nodes in the cluster receive this updated information.
8. If you did this task from a cluster node that does not need the shared volume group varied on, vary off the volume group on that node.
Maintaining Shared Filesystems
The administrative tasks that involve shared filesystems are described below. The sections explain how to use the C-SPOC utility to create, change, or remove a shared filesystem in a cluster.
Journaled Filesystem and Enhanced Journaled Filesystem
Enhanced Journaled Filesystem (JFS2) provides the capability to store much larger files than the Journaled File System (JFS). Additionally, it is the default filesystem for the 64-bit kernel. You can choose to implement either JFS, which is the recommended filesystem for 32-bit environments, or JFS2, which offers 64-bit functionality.
Note: Unlike the JFS filesystem, the JFS2 filesystem will not allow the link() API to be used on files of type directory. This limitation may cause some applications that operate correctly on a JFS filesystem to fail on a JFS2 filesystem.
See the AIX 5L documentation for more information.
The SMIT paths shown in the following sections of this chapter use the Journaled Filesystem; similar paths exist for the Enhanced Journaled Filesystem.
Reliable NFS Server and Enhanced Journaled Filesystem
You can use either JFS or JFS2 filesystems with the Reliable NFS Server functionality of HACMP. JFS2 is supported with a highly available NFS Server functionality only on AIX 5L 5.2 with one of the following minimum fileset levels installed:
Without these minimum fileset levels specified, you can specify JFS2 filesystems as HACMP NFS exported filesystems, but the highly available functionality of saving the dup cache for these filesystems is not supported. (The lock information transfer functionality is supported.)
Also, note that Reliable NFS Server functionality is supported only with JFS2 filesystems with external logs (jfs2logs). Filesystems with embedded logs are not supported.
Creating Shared Filesystems with C-SPOC
Before creating a journaled filesystem for the cluster using C-SPOC, check that:
- All disk devices are properly attached to the cluster nodes.
- All disk devices are properly configured and available on all cluster nodes.
- The volume group that will contain the filesystem is varied on, on at least one cluster node.

You can add a journaled filesystem to either:
- A shared volume group (no previously defined cluster logical volume)
- A previously defined cluster logical volume (on a shared volume group).

To add a filesystem where no logical volume is currently defined:
1. Enter the fastpath smitty cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Filesystems > Add a Journaled Filesystem and press Enter.
3. Select a filesystem type from the list.
4. Select the volume group where the filesystem will be added. SMIT displays the AIX 5L SMIT panel for selecting filesystem attributes.
5. Enter field values as needed.
6. Select the filesystem attributes and press Enter.
SMIT checks the nodelist for the resource group that contains the volume group, creates the logical volume (using an existing log logical volume if one is present; otherwise, it creates a new log logical volume), and adds the filesystem to the node where the volume group is varied on. All other nodes in the resource group run an importvg -L.
Adding the Filesystem to an HACMP Cluster Logical Volume
To add a filesystem to a previously defined cluster logical volume:
1. Enter the fastpath smitty cl_admin
2. In SMIT, select HACMP Logical Volume Management > Shared Filesystems > Add a Journaled Filesystem to a Previously Defined Logical Volume and press Enter. SMIT displays a list of filesystem types (Standard, Compressed, or Large File Enabled).
3. Select the filesystem type from the list. SMIT generates a list of all free logical volumes in the cluster and nodes they are on. SMIT reports a logical volume as free if:
- The logical volume is part of a parent volume group that is configured as a resource in the cluster
- The logical volume is varied on prior to, and during, the system polling the disk for logical volume information
- The logical volume does not have a filesystem mount point.

4. Select a logical volume where the filesystem will be added. SMIT displays the AIX 5L SMIT panel for selecting filesystem attributes.
5. Enter field values as needed.
6. Select the filesystem attributes and press Enter. SMIT checks the nodelist for the resource group that contains the volume group where the logical volume is located and adds the filesystem to the node where the volume group is varied on. All other nodes in the resource group will run an importvg -L.
Changing a Shared Filesystem in HACMP Using C-SPOC
As system administrator of an HACMP cluster, you may need to change the characteristics of an existing filesystem. Using the C-SPOC utility, you can change the characteristics of a shared filesystem on cluster nodes by executing a command on a single cluster node. The C-SPOC command changes the attributes of the shared filesystem on all the nodes in the resource group.
To change the characteristics of a shared filesystem:
1. Vary on the volume group, if needed, by using the varyonvg command. You can use the C-SPOC utility to change a filesystem even if the volume group on which it is defined is varied off.
2. Enter the SMIT fastpath smit cl_admin
3. In SMIT, select the HACMP Logical Volume Management > Shared Filesystems > Change/Show Characteristics of a Shared Filesystem in the Cluster and press Enter.
4. Select the filesystem to change.
5. Enter data in the fields to change and press Enter. The C-SPOC utility changes the filesystem characteristics on all nodes in the resource group.
6. (Optional) To check the status of the C-SPOC command execution on cluster nodes, view the C-SPOC log file in /tmp/cspoc.log.
Removing a Shared Filesystem Using C-SPOC
As system administrator of an HACMP cluster, you may need to remove a filesystem. You can optionally remove the filesystem’s mount point as part of the same operation. Using the following procedure, you can remove a shared filesystem on any node in a cluster.
C-SPOC deletes the shared filesystem on the node that currently has the shared volume group varied on. It removes both the shared logical volume on which the filesystem resides and the associated stanza in the /etc/filesystems file.
To remove a shared filesystem:
1. On the source node, vary on the volume group, if needed, using the SMIT varyonvg fastpath.
You can use the C-SPOC utility on a volume group that is varied off; however, you must specify the -f flag.
2. Enter the SMIT fastpath smit cl_admin
3. In SMIT, select HACMP Logical Volume Management > Shared Filesystems > Remove a Shared Filesystem and press Enter.
4. Press the F4 key to obtain a picklist of existing filesystems from which you may select one. Set the Remove Mount Point option to yes if you want to remove the mount point in the same operation. When you finish entering data, press Enter.
The C-SPOC utility removes the filesystem on the source (local) node. The filesystem is not removed on remote nodes until the volume group on which the filesystem is defined is activated.
5. To check the status of the C-SPOC command execution on both nodes, view the C-SPOC log file in /tmp/cspoc.log.
Maintaining Physical Volumes
Administrative tasks that involve shared physical volumes include:
Adding a Disk Definition to Cluster Nodes Using C-SPOC
The nodes must be configured as part of the cluster.
To add a raw disk on selected cluster nodes:
1. Enter the fastpath smitty cl_admin
2. In SMIT, select HACMP Physical Volume Management > Add a Disk to the Cluster and press Enter. SMIT displays a list of nodes in the cluster and prompts you to select the nodes where the disk definition should be added.
3. Select one or more nodes where you want to have the new disk configured. The system generates a list of available disk types based on those disk types known to the first node in the list.
4. Select the type of disk you want to add to the cluster.
- SCSI (various types listed)
- SSA (available types listed)

Adding a SCSI Disk
If you select a SCSI disk type to define, SMIT displays a list of parent adapter/node name pairs. You are prompted to select one parent adapter per node.
To add a SCSI disk:
1. Select a parent adapter for each node connected to the disk and accept the selection.
SMIT displays the AIX 5L SCSI Add a Disk panel with all entries filled in except Connection Address:
2. Select the connection address and press Enter.
Adding an SSA Disk
If you select an SSA disk to define, SMIT displays the AIX 5L Add an SSA Logical Disk panel.
To add an SSA disk:
1. Select the connection address and other attributes as needed and press Enter.
Removing a Disk Definition on Cluster Nodes Using C-SPOC
Before removing a disk from the cluster using C-SPOC, check that the disk to be removed is not currently part of an existing volume group. If it is, use the C-SPOC cl_reducevg command to remove a physical volume from a volume group.
To remove a configured disk on all selected nodes in the cluster:
1. Enter the fastpath smitty cl_admin
2. In SMIT, select HACMP Physical Volume Management > Remove a Disk from the Cluster and press Enter.
SMIT displays a list of nodes in the cluster that currently have the disk configured and prompts you to select the nodes where the disk should be removed.
3. Select one or more node names from which you want to remove the disk configuration. (You may have removed the cable from some of the nodes in the cluster and only want the disk configuration removed from those nodes.)
4. For the entry Keep the disk definition in database select yes to keep the definition in the database; select no to delete the disk from the database. Press Enter.
Using SMIT to Replace a Cluster Disk
The SMIT interface simplifies the process of replacing a failed disk. You can use this process with SCSI or SSA disks. Like the manual procedure for replacing a failed disk, SMIT uses C-SPOC commands.
Note: If you have VPATH devices configured, the procedure for replacing a cluster disk using C-SPOC requires additional steps. For instructions, see Replacing A Cluster Disk with a VPATH Device.
Prerequisites
Before you replace a disk, ensure that:
- You have root user privilege to perform the disk replacement.
- The volume group is mirrored.
- You have a replacement disk with an assigned PVID, configured on all nodes in the resource group to which the volume group belongs. If the PVID is not assigned, run chdev on all nodes in the resource group, as shown in the sketch after this list.
- To add a new disk, remove the old disk and put the new one in its place. Follow the steps in the section Adding a SCSI Disk or Adding an SSA Disk, as appropriate. Ensure that the ASSIGN physical volume identifier field is set to yes.
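A minimal sketch of assigning a PVID with chdev, assuming the replacement disk is hdisk5 (run this on every node in the resource group):

    chdev -l hdisk5 -a pv=yes   # assign a PVID to the disk
    lspv | grep hdisk5          # confirm the PVID is now shown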
Replacing a Cluster Disk
To replace a disk in the cluster:
1. Locate the failed disk. Make note of its PVID and volume group.
2. Enter smitty cl_admin
3. In SMIT, select HACMP Physical Volume Management > Cluster Disk Replacement and press Enter.
SMIT displays a list of disks that are members of volume groups contained in cluster resource groups. There must be two or more disks in the volume group where the failed disk is located. The list includes the volume group, the hdisk, the disk PVID, and the reference cluster node. (This node is usually the cluster node that has the volume group varied on.)
Note: This process requires the volume group to be mirrored, and the new disk that is available for replacement must have a PVID assigned to it on all nodes in the cluster. Use the chdev command to assign a PVID to the disk.
4. Select the disk for disk replacement (source disk) and press Enter.
SMIT displays a list of those available shared disk candidates that have a PVID assigned to them, to use for replacement. (Only a disk that is of the same capacity or larger than the failed disk is suitable to replace the failed disk.)
5. Select the replacement disk (destination disk) and press Enter.
6. Press Enter to continue or Cancel to terminate the disk replacement process.
SMIT warns you that continuing will delete any information you may have stored on the destination disk.
7. Press Enter to continue or Cancel to terminate.
If disk configuration fails and you wish to proceed with disk replacement, you must manually configure the destination disk. If you terminate the procedure at this point, be aware that the destination disk may be configured on more than one node in the cluster.
The replacepv utility updates the volume group in use in the disk replacement process (on the reference node only).
Note: SMIT displays the name of the recovery directory to use should replacepv fail. Make note of this information, as it is required in the recovery process.
8. If a node in the resource group fails to import the updated volume group, import it manually on that node.
C-SPOC does not remove the failed disk information (the hdisk and pdisk definitions) from the cluster nodes. You must do this manually.
For information on recovering from a failed disk replacement, see the Cluster Disk Replacement Process Fails section in Chapter 3: Investigating System Components and Solving Common Problems in the Troubleshooting Guide.
Managing Data Path Devices with C-SPOC
All VPATH disk operations currently supported on AIX 5L are now supported by C-SPOC. You can define and configure VPATH devices, add paths, configure defined VPATHs, and remove VPATH devices. You can also display VPATH device and adapter configuration and status.
You must have SDD 1.3.1.3 or greater installed.
This section describes the following tasks:

- Displaying data path device configuration
- Displaying data path device status
- Displaying data path device adapter status
- Defining and configuring all data path devices
- Adding paths to available data path devices
- Configuring a defined data path device
- Removing a data path device
- Converting an ESS hdisk device volume group to an SDD VPATH device volume group, and back
- Replacing a cluster disk with a VPATH device.
Displaying Data Path Device Configuration
To display data path device configuration:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Display Data Path Device Configuration and press Enter.
3. Select a node and press Enter.
4. SMIT displays the device configuration, as in the following example for node herbert:

    PVID: 000240bfd57e0746
      herbert: vpath9 (Avail pv shvg1) 10612027 = hdisk59 (Avail ) hdisk65 (Avail )
    PVID: 000240ffd5691fba
      herbert: vpath12 (Avail ) 10C12027 = hdisk62 (Avail pv ) hdisk68 (Avail pv )
    PVID: 000240ffd5693251
      herbert: vpath14 (Avail pv ) 10E12027 = hdisk64 (Avail ) hdisk70 (Avail )
    PVID: 000240ffd56957ce
      herbert: vpath11 (Avail ) 10812027 = hdisk67 (Avail pv ) hdisk71 (Avail pv )
    PVID: 0002413fef72a8f0
      herbert: vpath13 (Avail pv ) 10D12027 = hdisk63 (Avail ) hdisk69 (Avail )
    PVID: 0002413fef73d477
      herbert: vpath10 (Avail pv ) 10712027 = hdisk60 (Avail ) hdisk66 (Avail )

Displaying Data Path Device Status
To display data path device status:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Display Data Path Device Status and press Enter.
3. Select a node and press Enter.
4. SMIT displays the status as shown in the following example for node herbert:
    [TOP]
    herbert: Total Devices : 6
    PVID 000240bfd57e0746
    herbert:
    DEV#: 0  DEVICE NAME: vpath9  TYPE: 2105F20  SERIAL: 10612027
    POLICY: Optimized
    ================================================================
    Path#    Adapter/Hard Disk    State    Mode      Select    Errors
    0        fscsi1/hdisk59       OPEN     NORMAL    1696      0
    1        fscsi0/hdisk65       OPEN     NORMAL    1677      0
    PVID 000240ffd5691fba
    [MORE...57]

Displaying Data Path Device Adapter Status
To display data path device adapter status:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Display Data Path Device Adapter Status and press Enter.
3. Select a node and press Enter.
4. SMIT displays the status as shown in the following example for node herbert:
    herbert: Active Adapters : 2
    Adpt#    Adapter Name    State     Mode      Select    Errors    Paths    Active
    0        fscsi1          NORMAL    ACTIVE    2204      0         6        1
    1        fscsi0          NORMAL    ACTIVE    2213      0         6        1

Defining and Configuring all Data Path Devices
To define and configure data path devices:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Define and Configure all Data Path Devices and press Enter.
Adding Paths to Available Data Path Devices
To add paths to available data path devices:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Add Paths to Available Data Path Devices and press Enter.
3. Select one or more nodes, and press Enter.
Configuring a Defined Data Path Device
To configure a defined data path device:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Configure a Defined Data Path Device and press Enter.
3. Select one or more nodes, and press Enter.
4. Select a PVID and press Enter.
Removing a Data Path Device
To remove a data path device:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Remove a Data Path Device and press Enter.
3. Select a node and press Enter.
4. Choose whether to keep the device definition in the database.
5. Select one or more devices and press Enter.
Converting ESS hdisk Device Volume Group to an SDD VPATH Device Volume Group
To convert ESS hdisk volume groups to SDD VPATH device volume groups:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Convert ESS hdisk Device Volume Group to an SDD VPATH Device Volume Group and press Enter.
3. Select the ESS hdisk volume group to convert and press Enter.
4. Press Enter.
Converting SDD VPATH Device Volume Group to an ESS hdisk Device Volume Group
To convert SDD VPATH device volume groups to ESS hdisk device volume groups:
1. Enter the fastpath smit cl_admin
2. In SMIT, select HACMP Physical Volume Manager > Cluster Data Path Device Management > Convert SDD VPATH Device Volume Group to an ESS hdisk Device Volume Group and press Enter.
3. Select the volume group to convert and press Enter.
4. Press Enter.
5. The command runs and the command status is displayed on the panel.
Replacing A Cluster Disk with a VPATH Device
If you need to replace a cluster disk that has a VPATH device configured, before you use C-SPOC, move the PVID of the VPATH devices to the corresponding hdisks. This is done by converting the volume group from VPATH devices to hdisks. After converting, use the C-SPOC procedure to replace a disk.
Note: The C-SPOC disk replacement utility does not recognize VPATH devices. If you do not convert the volume group from VPATH to hdisk, then during the C-SPOC disk replacement procedure, HACMP returns a “no free disks” message, although unused VPATH devices are available for replacement.
To replace a cluster disk that has a VPATH device configured:
1. Convert the volume group from VPATHs to hdisks. For instructions, see Converting SDD VPATH Device Volume Group to an ESS hdisk Device Volume Group.
2. Use the C-SPOC procedure to replace a cluster disk. For instructions, see Using SMIT to Replace a Cluster Disk.
3. Convert the volume group back to VPATH devices. For instructions, see Converting ESS hdisk Device Volume Group to an SDD VPATH Device Volume Group.
Configuring Cross-Site LVM Mirroring
This section describes the following tasks:

- Configuring cross-site LVM mirroring
- Showing and changing the cross-site LVM mirroring definition
- Removing a disk from a cross-site LVM mirroring site definition
- Troubleshooting cross-site LVM mirroring.
Prerequisites
Before configuring cross-site LVM mirroring:
- Configure the sites and resource groups and run the HACMP cluster discovery process.
- Ensure that both sites have copies of the logical volumes and that forced varyon for a volume group is set to Yes if a resource group contains a volume group.

Steps to Configure Cross-Site LVM Mirroring
To configure cross-site LVM mirroring (cluster services are not running):
1. Enter the fastpath smit cl_xslvmm
2. Select Add Disk/Site Definition for Cross-Site LVM Mirroring.
3. Press F4 to see the list. Select a site from the list.
4. Select the physical volumes you want to assign to the selected site. Only disks connected to at least one node at two sites are displayed, as in the following example:
    0002411f95594596 hdisk1 nodeA
    0002411f95594595 hdisk2 nodeA
    0002411f95594597 hdisk3 nodeA
    0002411f95594595 hdisk4 nodeA

5. Repeat the steps to assign disks to the second site.
6. When you finish configuring the disks for cross-site LVM mirroring, create the shared volume groups that have the disks you included in the cross-site LVM mirror configuration.
These volume groups should have Enable Cross-Site LVM Mirroring set to True. This setting causes verification to check the remote mirror configuration of the shared volume groups. See Creating a Shared Volume Group with C-SPOC for the procedure (or Creating a Concurrent Volume Group on Cluster Nodes Using C-SPOC in Chapter 12: Managing Shared LVM Components in a Concurrent Access Environment if you are creating a concurrent volume group).
7. Add the volume groups to the appropriate resource groups. See Chapter 3: Configuring an HACMP Cluster (Standard).
8. Synchronize the cluster.
9. Start cluster services.
10. Add logical volumes to the volume groups. See Adding a Logical Volume to a Cluster Using C-SPOC or Adding a Concurrent Logical Volume to a Cluster in Chapter 12: Managing Shared LVM Components in a Concurrent Access Environment depending on the type of configuration (non-concurrent or concurrent).
Showing and Changing Cross-Site LVM Mirroring Definition
To display the current cross-site LVM mirroring definition:
1. Enter the fastpath smit cl_xslvmm
2. In SMIT, select Change/Show Disk Site Definition for Cross-Site LVM Mirroring and press Enter.
3. SMIT displays the current definition, as in the following example:
    000000687fc7db9b hdisk1 Node_Kiev_1 Site_1
    000000687fc7e1ce hdisk2 Node_Kiev_1 Site_2
    00001281f551e35d hdisk5 Node_Kiev_1 Site_2
    00001638ea883efc hdisk3 Node_Kiev_1 Site_2
    000031994e4bbe3c hdisk6 Node_Kiev_1 Site_1
    00004414d7723f69 hdisk7 Node_Kiev_1 Site_2

4. (Optional) Change the current site assignments for disks.
5. Synchronize the cluster if you made any change.
Removing a Disk from a Cross-Site LVM Mirroring Site Definition
To remove a disk from the current cross-site LVM mirroring mapping:
1. Enter the fastpath smit cl_xslvmm
2. In SMIT, select Remove a Disk from Site Definition for Cross-Site LVM Mirroring and press Enter.
3. Press F4 for the list, then select the disks to remove from the list.
    0002411f95594596 hdisk1 nodeA
    0002411f95594595 hdisk2 nodeA
    0002411f95594597 hdisk3 nodeA
    0002411f95594595 hdisk4 nodeA

4. Verify the selections when SMIT asks if you are sure.
5. Synchronize the cluster.
Troubleshooting Cross-Site LVM Mirroring
If resynchronization fails, examine the AIX 5L errorlog for possible causes. You can also set up an AIX 5L error notification method.
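For example, the standard AIX errpt command shows the logged errors:

    errpt | more      # one-line summaries of logged errors, newest first
    errpt -a | more   # detailed reports, including probable causes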
Manual intervention is required in case of disk or disk subsystem failure:
- Non-concurrent resource groups. When the disks become available, the logical volume mirrors must be synchronized using the Synchronize Shared LVM Mirrors SMIT menu. Do this whether quorum was lost or not.
- Concurrent resource groups. When the disks become available, the concurrent resource group must be brought ONLINE from the ERROR state if quorum was lost during the disk failure; the mirrors are synchronized automatically during the rg_move operation. If quorum was not lost, the resource group does not move to the ERROR state; in that case, synchronize the logical volume mirrors using the Synchronize Concurrent LVM Mirrors SMIT menu when the disks become available.

To manually resynchronize the logical volume mirrors, use the appropriate SMIT menu: Synchronize Concurrent LVM Mirrors or Synchronize Shared LVM Mirrors. A command-line sketch follows.
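These SMIT menus drive the standard AIX mirror-synchronization operation; a minimal command-line sketch, assuming a hypothetical volume group datavg:

    syncvg -v datavg    # synchronize all stale partitions in the volume group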
Note: When you disable the cross-site mirror characteristic for a volume group, this turns off the verification check for the cross-site mirror configuration. It is not necessary to resynchronize after making this change.