
Chapter 12: Managing Shared LVM Components in a Concurrent Access Environment


This chapter explains how to maintain shared LVM components in a concurrent access environment using the C-SPOC utility. This chapter includes specific procedures for managing volume groups, filesystems, logical volumes, physical volumes, and data path devices (VPATH disks).

When you add a concurrent capable volume group to a resource group, you can select the option to import the volume group onto all the destination nodes in the participating resource group. In addition, using SMIT, you can collect information about all volume groups available either on a local node or on all nodes in the cluster defined to the resource group, and later automatically import the appropriate volume groups to the destination nodes, if needed.

While the maintenance tasks for shared LVM components in concurrent access environments are similar to those of non-concurrent access environments, described in Chapter 11: Managing Shared LVM Components, there are some special considerations in a concurrent access environment.

You can use AIX 5L commands to do these procedures; however the C-SPOC utility is designed specifically to facilitate HACMP system management.

The main topics in this chapter include:

  • Overview
  • Understanding Concurrent Access and HACMP Scripts
  • Maintaining Concurrent Access Volume Groups
  • Maintaining Concurrent Volume Groups with C-SPOC
  • Maintaining Concurrent Logical Volumes

    Overview

    In a concurrent access environment, all cluster nodes can have simultaneous (concurrent) access to a volume group that resides on shared external disks.

    Keep the following points in mind:

  • When concurrent volume groups are created in AIX 5L, they are created as enhanced concurrent mode volume groups by default.
  • In AIX 5L 5.2 or greater, you cannot create new SSA concurrent mode volume groups. If you have an SSA concurrent (mode=16) volume group that was created on AIX 5L 5.1, you can use it on AIX 5L 5.2. You can also convert these volume groups to enhanced concurrent mode.

    If you are running AIX 5L 5.3, you must convert all volume groups to enhanced concurrent mode.

  • If one node in a concurrent resource group runs a 64-bit kernel, enhanced concurrent mode must be used for that volume group.
  • SSA concurrent mode is not supported on 64-bit kernels.
  • SSA disks with the 32-bit kernel can use SSA concurrent mode.
  • The C-SPOC utility does not work with RAID concurrent volume groups. You need to convert them to enhanced concurrent mode (otherwise, AIX 5L sees them as non-concurrent).
  • Note: You should convert both SSA and RAID concurrent volume groups to enhanced concurrent mode because AIX 5L enhanced concurrent mode provides improved functionality. For information about how to convert these types of volume groups to enhanced concurrent mode, see the section Converting Volume Groups to Enhanced Concurrent Mode.

    For more information on enhanced concurrent mode, see Chapter 5 in the Planning Guide. Also, see Chapter 11: Managing Shared LVM Components, in this guide, for information on Understanding Active and Passive Varyon in Enhanced Concurrent Mode and Enabling Fast Disk Takeover.

    You can define concurrent access volume groups and logical volumes on all the disk devices supported by the HACMP software.

    Note: You cannot define filesystems on a concurrent access volume group unless it is an enhanced concurrent mode volume group used as a serial resource.

    The chapter first describes how the HACMP scripts handle concurrent access LVM components and then describes how to perform specific administrative tasks for LVM components in a concurrent access environment.

    The final section of the chapter describes the Physical Volume Management HACMP SMIT panels, including how to manage VPATH disks.

    Most maintenance tasks can be performed using the HACMP C-SPOC utility.

    Understanding Concurrent Access and HACMP Scripts

    You should seldom, if ever, need to intervene in a concurrent access cluster. In a concurrent access environment, as in a non-concurrent environment, the HACMP event scripts control the actions taken by a node and coordinate the interactions between the nodes. However, as a system administrator, you should monitor the status of the concurrent access volume groups when HACMP events occur.

    When intervening in a cluster, you must understand how nodes in a concurrent access environment control their interaction with shared LVM components. For example, the HACMP node_up_local script may fail before varying on a volume group in concurrent mode. After fixing whatever problem caused the script to fail, you may need to manually vary on the volume group in concurrent access mode. The following sections describe the processing performed by these scripts.

    Nodes Join the Cluster

    A node joining a cluster calls the node_up_local script, which calls the cl_mode3 script to activate the concurrent capable volume group in concurrent access mode. If resource groups are processed in parallel, process_resources calls cl_mode3.

    The cl_mode3 script calls the varyonvg command with the -c flag. For more information about this command and its flags, see the section Activating a Volume Group in Concurrent Access Mode. If the concurrent capable volume group is defined on a RAID disk array device, the scripts use the convaryonvg command to vary on the concurrent volume groups in concurrent mode.
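    When intervening manually after a failed varyon, an administrator can first confirm whether the volume group is already in concurrent mode before repeating this step. The following is a minimal sketch of that check; the lsvg output is simulated with a heredoc so the fragment is portable, and the volume group name vg1 is a hypothetical example. On an AIX node you would pipe real `lsvg vg1` output into the same awk command.

```shell
# Check the "VG Mode" field of lsvg output to decide whether a concurrent
# varyon is still needed. The output is simulated with a heredoc here;
# on AIX:  mode=$(lsvg vg1 | awk -F': *' '/^VG Mode:/ {print $2}')
mode=$(awk -F': *' '/^VG Mode:/ {print $2}' <<'EOF'
VG STATE:       active
VG Mode:        Non-Concurrent
EOF
)
if [ "$mode" != "Concurrent" ]; then
    # The command that node_up_local would normally have run:
    echo "varyonvg -c vg1"
fi
```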

    Nodes Leave the Cluster

    Nodes leaving the cluster do not affect the concurrent access environment. They simply vary off the volume groups normally. The remaining nodes take no action to change the concurrent mode of the shared volume groups.

    When a node has cluster services stopped with resource groups brought offline, it executes the node_down_local script, which calls the cl_deactivate_vgs script. This script uses the varyoffvg command to vary off the concurrent volume groups.

    Maintaining Concurrent Access Volume Groups

    The LVM enables you to create concurrent access volume groups that can be varied on in either concurrent access mode or non-concurrent access mode. Maintaining these volume groups may require you to perform any of the following tasks:

  • Activating a Volume Group in Concurrent Access Mode
  • Determining the Access Mode of a Volume Group
  • Restarting the Concurrent Access Daemon (clvmd)
  • Verifying a Concurrent Volume Group

    The following sections describe how to perform these tasks. Tasks you can perform with C-SPOC are described in a section at the end of the chapter.

    Activating a Volume Group in Concurrent Access Mode

    As a system administrator, you may at times need to bring a resource group online, for example after correcting a failure that took it offline. To bring the resource group online, take the following steps:

      1. Enter smitty cl_admin
      2. In SMIT, select HACMP Resource Group and Application Management > Bring a Resource Group Online
      3. Select the resource group to bring online and press Enter.

    Activating Concurrent Access Volume Groups

    To activate a volume group in concurrent access mode, use the following procedure:

      1. Enter smit varyonvg
    The Activate a Volume Group SMIT panel appears; it has an additional field in a concurrent access environment.
      2. Enter the field values as follows:
    VOLUME GROUP name
    Specify the name of the volume group.
    RESYNCHRONIZE stale physical partitions?
    Set this field to no.
    Activate volume group in SYSTEM MANAGEMENT mode?
    Accept the default (no).
    FORCE activation of the Volume Group?
    Accept the default (no).
    Varyon VG in concurrent mode?
    Set to yes.
      3. Press Enter. The system prompts you to confirm. Press Enter again.

    Determining the Access Mode of a Volume Group

    To determine whether a volume group is a concurrent capable volume group and to determine its current mode, use the lsvg command specifying the name of the volume group as an argument. The lsvg command displays information about the volume group, as in the following example:

    # lsvg vg1
    VOLUME GROUP:   vg1                      VG IDENTIFIER:  00020adf00004c00000000f329382713
    VG STATE:       active                   PP SIZE:        16 megabyte(s)
    VG PERMISSION:  passive-only             TOTAL PPs:      542 (8672 megabytes)
    MAX LVs:        256                      FREE PPs:       521 (8336 megabytes)
    LVs:            3                        USED PPs:       21 (336 megabytes)
    OPEN LVs:       0                        QUORUM:         2
    TOTAL PVs:      1                        VG DESCRIPTORS: 2
    STALE PVs:      0                        STALE PPs:      0
    ACTIVE PVs:     1                        AUTO ON:        no
    Concurrent:     Enhanced-Capable         Auto-Concurrent: Disabled
    VG Mode:        Concurrent
    Node ID:        2                        Active Nodes:   1 4
    MAX PPs per PV: 1016                     MAX PVs:        32
    LTG size:       128 kilobyte(s)          AUTO SYNC:      no
    HOT SPARE:      no                       BB POLICY:      relocatable

    To determine whether the volume group is concurrent capable, check the value of the Concurrent field. The volume group in the example was created as an Enhanced-capable volume group, as indicated by the value of this field. If this volume group was not a concurrent capable volume group, the value of this field would be Non-Capable.

    To determine whether the volume group is activated in concurrent access mode, check the value of the VG Mode field. In the example, the volume group is activated in concurrent access mode. If this volume group had not been varied on in concurrent access mode, the value of this field would be Non-Concurrent.

    The Auto-Concurrent field indicates whether the volume group should be varied on in concurrent access mode when the volume group is started automatically at system reboot. The value of this field is determined by the value of the -x option to the mkvg command when the volume group was created. In an HACMP environment, this option should always be disabled; HACMP scripts control when the volume should be varied on.
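    Assuming output in the format shown above, the two fields of interest can be extracted with awk. In this sketch a heredoc stands in for real `lsvg vg1` output so the fragment runs anywhere; on AIX you would pipe the lsvg output directly into the same commands.

```shell
# Extract the concurrency attributes from lsvg-style output.
# The heredoc below is a stand-in for:  lsvg_out=$(lsvg vg1)
lsvg_out=$(cat <<'EOF'
Concurrent:     Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:        Concurrent
EOF
)
# "Concurrent:" is field 1, so the capability value is field 2.
capable=$(printf '%s\n' "$lsvg_out" | awk '/^Concurrent:/ {print $2}')
# "VG Mode:" spans fields 1-2, so the mode value is field 3.
mode=$(printf '%s\n' "$lsvg_out" | awk '/^VG Mode:/ {print $3}')
echo "concurrent-capable=$capable mode=$mode"
```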

    Restarting the Concurrent Access Daemon (clvmd)

    As a system administrator, you may, at times, need to restart the concurrent access daemon used by SSA concurrent mode. The clvmd daemon normally gets started by the varyonvg command when you vary on a volume group in concurrent access mode by specifying the -c flag. You can restart the clvmd by re-executing the varyonvg -c command on the already varied-on concurrent access volume group. You cannot vary on an already varied-on volume group in a different mode, however.

    You can also restart the clvmd daemon using the following SRC command:

    startsrc -s clvmd

    Enhanced concurrent mode uses the gsclvmd daemon, which starts when HACMP services are started. Beginning with HACMP 5.3, the Cluster Manager daemon explicitly controls the startup and shutdown of dependent software, such as RSCT and gsclvmd.

    Note: In HACMP 5.3 and up, the Cluster Manager starts out of inittab.

    Verifying a Concurrent Volume Group

    During cluster verification, HACMP runs a volume group consistency check on all nodes that participate in a resource group and have the volume group defined. This check ensures the following:

  • The concurrent attribute setting for the volume group is consistent across all related cluster nodes
  • The list of PVIDs for this volume group is identical on all related cluster nodes
  • An automatic corrective action of the cluster verification utility updates volume group definitions on all related cluster nodes
  • Any problems detected are reported as errors.
  • Note: The SSA node number check is not performed for enhanced concurrent volume groups that sit on SSA hdisks. Disks that make up enhanced concurrent volume groups do not have any SSA-specific numbers assigned to them.

    Maintaining Concurrent Volume Groups with C-SPOC

    C-SPOC uses the AIX 5L CLVM capabilities that allow changes to concurrent LVM components without stopping and restarting the cluster.

    See Chapter 11: Managing Shared LVM Components, for a general explanation of how C-SPOC works.

    You can use the C-SPOC utility to do the following concurrent volume group tasks:

  • Create a concurrent volume group on selected cluster nodes (using hdisks or data path devices)
  • Convert SSA concurrent or RAID concurrent volume groups to enhanced concurrent mode
  • List all concurrent volume groups in the cluster
  • Import a concurrent volume group
  • Extend a concurrent volume group
  • Reduce a concurrent volume group
  • Mirror a concurrent volume group
  • Unmirror a concurrent volume group
  • Synchronize concurrent LVM mirrors by volume group.
  • Note: The volume group must be varied on in concurrent mode in order to do these tasks.
    Warning: If you have specified a forced varyon attribute for volume groups in a resource group, all nodes of the cluster must be available before making LVM changes. This ensures that all nodes have the same view of the state of the volume group. For more information on when it is safe to perform a forced varyon operation, and on instructions how to specify it in SMIT, see Forcing a Varyon of Volume Groups in Chapter 5: Configuring HACMP Resource Groups (Extended).

    To perform concurrent resource group maintenance tasks, use the HACMP System Management (C-SPOC) > HACMP Resource Group and Application Management menu in SMIT.

    This utility allows you to take a concurrent resource group online or offline (along with its resources—IP addresses, applications, and disks)—without stopping cluster services. For more information on Resource Group Migration, see Requirements before Migrating a Resource Group in Chapter 15: Managing Resource Groups in a Cluster.

    Note: If you run a ps command during a C-SPOC LVM operation to verify what processes are running, you will see output similar to the following:
    # ps -ef | grep vg
    root 11952 13522 0 08:56:25 - 0:00 ksh /usr/es/sbin/cluster/cspoc/cexec cllvmcmd -extendvg -f gdgpgogdhfhchgghdb gigegjhdgldbda

    This output shows the C-SPOC encapsulation of arguments and parameters as they are sent to the remote nodes.

    Creating a Concurrent Volume Group on Cluster Nodes Using C-SPOC

    Using C-SPOC simplifies the procedure for creating a concurrent volume group on selected cluster nodes. By default, the concurrent volume group will be created in enhanced concurrent mode. If you change this default to false, the disks must support concurrent mode (currently this is only true for SSA disks).

    Note: For creating a concurrent volume path on VPATH disks, see Managing Data Path Devices with C-SPOC in Chapter 11: Managing Shared LVM Components. If you add a VPATH disk to a volume group made up of hdisks, the volume group will be converted to VPATHs on all nodes.

    Before creating a concurrent volume group for the cluster using C-SPOC, check that:

  • All disk devices are properly attached to the cluster nodes.
  • All disk devices are properly configured on all cluster nodes and listed as available on all nodes.
  • The cluster concurrent logical volume manager is installed.
  • All disks that will be part of the volume group are concurrent capable.
  • SSA disk subsystems have unique non-zero node numbers.

    To create a concurrent volume group for a selected list of cluster nodes:

      1. Enter the fastpath smit cl_admin
      2. In SMIT, select HACMP Concurrent Logical Volume Management > HACMP Concurrent Volume Groups > Create a Concurrent Volume Group (or Create a Concurrent Volume Group with Data Path Devices) and press Enter.

    SMIT displays a list of cluster nodes.

      3. Select one or more nodes from the list of cluster nodes and press Enter.
    The system correlates a list of all free concurrent-capable physical disks that are available to all nodes selected. (Free disks are those disks that currently are not part of a volume group and have PVIDs.) SMIT displays the list of free physical disks in a multi-picklist by PVIDs. If you are creating a volume group with data path devices, only disks capable of hosting them will be listed.
      4. Select one or more PVIDs from the list and press Enter.

    SMIT displays the cl_mkvg panel with a major number inserted into the Major Number data field. The system determines this free major number; do not change it.

      5. Enter field values as follows:
    Node Names
    Names of the selected nodes are displayed.
    VOLUME GROUP name
    Enter a name for this concurrent capable volume group.
    Physical partition SIZE in megabytes
    Accept the default.
    Volume Group MAJOR NUMBER
    The system displays the number C-SPOC has determined to be correct.
    WARNING: Changing the volume group major number may result in the command’s inability to execute on a node that does not have that major number currently available. Check for a commonly available major number on all nodes before changing this setting.
    Enhanced Concurrent Mode
    The default is true. If you select false, the disks must support concurrent mode (SSA concurrent mode).
    Enable/Disable a Concurrent Volume Group for Cross-Site LVM Mirroring
    Set this field to True to enable data mirroring between sites. The default is False.

    C-SPOC verifies communication paths and version compatibility and then runs the command on all the nodes you selected.

    If cross-site LVM mirroring is enabled, that configuration is verified when you verify and synchronize. For more information on configuring cross-site LVM mirroring, see Configuring Cross-Site LVM Mirroring in Chapter 11: Managing Shared LVM Components.
    Note: If the major number entered on the SMIT panel was not free at the time that the system attempted to make the volume group, the command displays an error for the node that did not complete the execution and continues to the other nodes. At the completion of the command, the volume group will not be active on any node in the cluster.
    Note: Verification issues an error if you run this command on some but not all nodes that own that resource group. All nodes in the same resource group must activate this resource group in enhanced concurrent mode.

    Converting Volume Groups to Enhanced Concurrent Mode

    It is highly recommended that you convert all concurrent volume groups, including RAID and SSA concurrent volume groups, to enhanced concurrent mode because:

  • RAID concurrent volume groups must be converted to enhanced concurrent mode for C-SPOC to handle them.
  • SSA concurrent volume groups can be varied on, but not created, in AIX 5L v.5.2 or greater.
    Before converting a volume group to enhanced concurrent mode, ensure that the volume group:

  • Is included in a resource group
  • Is varied off everywhere in the cluster.

    To convert a concurrent volume group to enhanced concurrent mode:

      1. Use the clRGmove utility to take the resource group offline.
      2. Enter the fastpath smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Volume Groups > Set Characteristics of a Concurrent Volume Group > Convert a Concurrent Volume Group to Enhanced Concurrent Mode and press Enter.
    SMIT displays the volume groups.
      4. Select the volume group you want to convert.
      5. Confirm the selected volume group is the one to convert.
    The volume group is converted to enhanced concurrent mode. If it does not meet the requirements for the conversion you will receive messages to that effect.
      6. Use the clRGmove utility to bring the resource group back online.

    Listing All Concurrent Volume Groups in the Cluster

    To list all concurrent volume groups in the cluster:

      1. On the source node, enter smit cl_admin
      2. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Volume Groups > List all Concurrent Volume Groups and press Enter.
    SMIT displays a message asking if you want to view only active concurrent volume groups.
      3. Select yes to see a list of active concurrent volume groups only, or select no to see a list of all concurrent volume groups.

    Importing a Concurrent Volume Group with C-SPOC

    The physical volumes (hdisks) in the volume group must be installed, configured, and available.

    To import a concurrent volume group using the C-SPOC utility:

      1. On any cluster node that can own the concurrent volume group (is in the participating nodes list for the resource group), vary on the volume group in concurrent mode, using the SMIT varyonvg fastpath (if it is not varied on already).
      2. On the source node, enter smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Volume Groups > Import a Concurrent Volume Group and press Enter.
    SMIT displays a list of volume groups.
      4. Select a volume group and press Enter.
    SMIT displays a list of physical volumes.
      5. Select a physical volume and press Enter.
    SMIT displays the Import a Concurrent Volume Group panel. Values for fields you have selected are displayed.
      6. Use the defaults or enter the appropriate values for your operation:
    VOLUME GROUP name
    The name of the volume group that you are importing.
    PHYSICAL VOLUME name
    The name of one of the physical volumes that resides in the volume group. This is the hdisk name on the reference node.
    Reference node
    The node from which the physical disk was retrieved.
    Volume Group MAJOR NUMBER
    If you are not using NFS, use the default (which is the next available number in the valid range). If you are using NFS, you must be sure to use the same major number on all nodes. Use the lvlstmajor command on each node to determine a free major number common to all nodes.
    Make this VG concurrent capable?
    The default is no. Change to yes for concurrent VGs.
    Make default varyon of VG concurrent?
    The default is no. Change to yes for concurrent VGs.
      7. If this panel reflects the correct information, press Enter to import the concurrent volume group. All nodes in the cluster receive this updated information immediately.
      8. If you did this task from a cluster node that does not need the concurrent volume group varied on, vary off the volume group on that node.
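    The common-major-number requirement for NFS (step 6 above) can be checked from the shell. The sketch below uses hypothetical free-major lists, one sorted number per line in a temporary file per node, as stand-ins for the output you would collect by running `lvlstmajor` on each node (its real output also includes ranges, which would need to be expanded first).

```shell
# Find a major number that is free on every node. The two files below are
# hypothetical stand-ins for per-node lvlstmajor results, already sorted
# with one free major number per line.
printf '43\n44\n45\n55\n' > /tmp/majors_nodeA
printf '44\n45\n46\n55\n' > /tmp/majors_nodeB
# comm -12 keeps only lines common to both sorted files.
common=$(comm -12 /tmp/majors_nodeA /tmp/majors_nodeB | head -n 1)
echo "first common free major number: $common"
```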

    Extending a Concurrent Volume Group with C-SPOC

    Note: If you add a VPATH disk to a volume group made up of hdisks, the volume group will be converted to VPATHs on all nodes.

    The physical volumes (hdisks) being added to the volume group must be installed, configured, and available.

    To add a physical volume to a concurrent volume group using C-SPOC:

      1. On any cluster node that can own the volume group (is in the participating nodes list for the resource group), vary on the volume group, using the SMIT varyonvg fastpath (if it is not varied on already in concurrent mode).
      2. On the source node, enter smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Volume Groups > Set Characteristics of a Concurrent Volume Group > Add a Physical Volume to a Concurrent Volume Group and press Enter.
    SMIT displays a list of volume groups.
      4. Select a volume group and press Enter.
    SMIT displays a list of those physical volumes that have PVIDs assigned to them.
      5. Select the PVIDs to add to the volume group and press Enter.
      6. SMIT displays the Add a Physical Volume to a Concurrent Volume Group panel, with the following entries filled in:
    Resource Group name
    The cluster resource group to which this concurrent volume group belongs.
    Volume Group name
    The name of the volume group where hdisks are to be added.
    Reference node
    The name of the node where the hdisks are found.
    Physical Volume names
    The names of the hdisks to be added to the volume group.
      7. If this panel reflects the correct information, press Enter to add the disks to the concurrent volume group. All nodes in the cluster receive this updated information.
      8. If you did this task from a cluster node that does not need the concurrent volume group varied on, vary off the volume group on that node.

    Enabling or Disabling Cross-Site LVM Mirroring

    To enable or disable cross-site LVM mirroring of a concurrent volume group:

      1. Enter the fastpath smit cl_admin
      2. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Volume Groups > Set Characteristics of a Concurrent Volume Group > Enable/Disable a Concurrent Volume Group for Cross-site Mirroring and press Enter.
    SMIT displays a list of volume groups.
      3. Select a volume group from the list.
      4. Enable or disable the field Enable Cross-Site LVM Mirroring (True or False) and press Enter. If you disable the cross-site mirror configuration, do this for each node that can own the volume group.
      5. Synchronize the cluster if you are enabling cross-site LVM mirroring and cluster services are not running. (If sites are configured, you can only run verification while cluster services are not running.)
    Note: For more information on configuring cross-site LVM Mirroring, see Configuring Cross-Site LVM Mirroring in Chapter 11: Managing Shared LVM Components.

    Removing a Physical Volume from a Concurrent Volume Group with C-SPOC

    The physical volumes (hdisks) in the volume group must be installed, configured, and available.

    To remove a physical volume from a concurrent volume group using the C-SPOC utility:

      1. On any cluster node that can own the concurrent volume group (is in the participating nodes list for the resource group), vary on the volume group in concurrent mode, using the SMIT varyonvg fastpath (if it is not varied on already).
      2. On the source node, enter smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Volume Groups > Set Characteristics of a Concurrent Volume Group > Remove a Physical Volume from a Concurrent Volume Group and press Enter.
    SMIT displays a list of volume groups.
      4. Select the desired volume group and press Enter.
    SMIT displays a list of physical volumes.
      5. Pick a physical volume and press Enter.
      6. SMIT displays the Remove a Volume from a Concurrent Volume Group panel, with the following entries filled in:
    VOLUME GROUP name
    The name of the volume group that you are reducing.
    Reference node
    The node from which the name of the physical disk was retrieved.
    PHYSICAL VOLUME name
    The name of the physical volume that you want to remove. This is the hdisk name on the reference node.
      7. If this panel reflects the correct information, press Enter to reduce the concurrent volume group. All nodes in the cluster receive this updated information immediately (before “lazy update”).
      8. If you did this task from a cluster node that does not need the concurrent volume group varied on, vary off the volume group on that node.

    Mirroring a Concurrent Volume Group Using C-SPOC

    The physical volumes (hdisks) in the volume group must be installed, configured, and available.

    To mirror a concurrent volume group using the C-SPOC utility:

      1. On any cluster node that can own the concurrent volume group (is in the participating nodes list for the resource group), vary on the volume group in concurrent mode, using the SMIT varyonvg fastpath (if it is not varied on already).
      2. On the source node, enter smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Volume Groups > Mirror a Concurrent Volume Group and press Enter.
    SMIT displays a list of volume groups.
      4. Select a volume group and press Enter.
    SMIT displays a list of physical volumes.
      5. Pick a physical volume and press Enter.
    SMIT displays the Mirror a Concurrent Volume Group panel. Values for fields you have selected are displayed.
      6. Enter values in other fields as follows:
    Resource Group Name
    The name of the resource group to which this concurrent volume group belongs is displayed.
    VOLUME GROUP name
    The name of the volume group that you want to mirror is displayed.
    Reference node
    The node from which the name of the physical disk was retrieved is displayed.
    PHYSICAL VOLUME names
    The name of a physical volume on the volume group that you want to mirror. This is the hdisk name on the reference node.
    Mirror sync mode
    Foreground is the default. Other choices are Background and No Sync.
    Number of COPIES of each logical partition
    The default is 2. You can also select 3.
    Keep Quorum Checking On?
    The default is no. You can also select yes.
    Create Exact LV Mapping?
    The default is no.
      7. If this panel reflects the correct information, press Enter to mirror the concurrent volume group. All nodes in the cluster receive this updated information.
      8. If you did this task from a cluster node that does not need the concurrent volume group varied on, vary off the volume group on that node.

    Unmirroring a Concurrent Volume Group Using C-SPOC

    The physical volumes (hdisks) in the volume group must be installed, configured, and available.

    To unmirror a concurrent volume group using the C-SPOC utility:

      1. On any cluster node that can own the concurrent volume group (is in the participating nodes list for the resource group), vary on the volume group in concurrent mode, using the SMIT varyonvg fastpath (if it is not varied on already).
      2. On the source node, enter smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Volume Groups > Unmirror a Concurrent Volume Group and press Enter.
    SMIT displays a list of volume groups.
      4. Select a volume group and press Enter.
    SMIT displays a list of physical volumes.
      5. Select a physical volume and press Enter.
    SMIT displays the Unmirror a Concurrent Volume Group panel, with the selected fields filled in.
      6. For other fields, use the defaults or enter the appropriate values:
    Resource Group Name
    The name of the resource group to which this concurrent volume group belongs is displayed.
    VOLUME GROUP name
    The name of the volume group that you want to unmirror is displayed.
    Reference node
    The node from which the name of the physical disk was retrieved is displayed.
    PHYSICAL VOLUME names
    The name of a physical volume on the volume group that you want to unmirror. This is the hdisk name on the reference node.
    Number of COPIES of each logical partition
    The default is 2. You can also select 3.
      7. If this panel reflects the correct information, press Enter to unmirror the concurrent volume group. All nodes in the cluster receive this updated information.
      8. If you did this task from a cluster node that does not need the concurrent volume group varied on, vary off the volume group on that node.

    Synchronizing Concurrent Volume Group Mirrors

    The physical volumes (hdisks) in the volume group must be installed, configured, and available.

    To synchronize concurrent LVM Mirrors by volume group using the C-SPOC utility:

      1. On any cluster node that can own the concurrent volume group (is in the participating nodes list for the resource group), vary on the volume group in concurrent mode, using the SMIT varyonvg fastpath (if it is not varied on already).
      2. On the source node, enter smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Synchronize Concurrent LVM Mirrors > Synchronize By Volume Group and press Enter.
    SMIT displays a list of volume groups.
      4. Select a volume group and press Enter.
    SMIT displays a list of physical volumes.
      5. Select a physical volume and press Enter.
    SMIT displays the Synchronize Concurrent LVM Mirrors by Volume Group panel, with the chosen entries filled in.
      6. For other fields, use the defaults or the appropriate values for your operation:
    Resource Group Name
    The name of the resource group to which this concurrent volume group belongs is displayed.
    VOLUME GROUP name
    The name of the volume group that you want to synchronize is displayed.
    Reference node
    The node from which the name of the physical disk was retrieved is displayed.
    Number of Partitions to Sync in Parallel
    Enter a value from 1 to 32.
    Synchronize All Partitions
    The default is no.
    Delay Writes to VG from other cluster nodes during this Sync
    The default is no.
      7. If this panel reflects the correct information, press Enter to synchronize LVM mirrors by the concurrent volume group. All nodes in the cluster receive this updated information.
      8. If you did this task from a cluster node that does not need the concurrent volume group varied on, vary off the volume group on that node.
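    The panel above maps to the base AIX 5L syncvg command. A hand-run equivalent might look like the following sketch (names and the parallelism value are placeholders):

```shell
# Vary on the volume group in concurrent mode, if needed
varyonvg -c concurrentvg1

# Synchronize stale partitions for the whole volume group,
# four logical partitions at a time (-P accepts 1 to 32)
syncvg -P 4 -v concurrentvg1
```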

    Maintaining Concurrent Logical Volumes

    You can use the C-SPOC utility to do many maintenance tasks on concurrent logical volumes. You can create a new logical volume or change the size of the logical volume. After you complete the procedure using SMIT, the other cluster nodes are updated with the new information.

    Tasks for concurrent logical volumes:

  • List all concurrent logical volumes by volume group
  • Add a concurrent logical volume to a volume group
  • Remove a concurrent logical volume
  • Make a copy of a concurrent logical volume
  • Remove a copy of a concurrent logical volume
  • Show the characteristics of a concurrent logical volume
  • Synchronize concurrent LVM mirrors by logical volume
  • Verify disk availability.

    Listing All Concurrent Logical Volumes in the Cluster

    To list all concurrent logical volumes in the cluster:

      1. Enter smit cl_admin
      2. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Logical Volumes > List all Concurrent Logical Volumes by Volume Groups and press Enter.
    SMIT prompts you to confirm that you want to view only active concurrent volume groups.
      3. Select yes to see a list of active concurrent volume groups only, or select no to see a list of all concurrent volume groups.
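    Outside of SMIT, the same information can be gathered per node with the base lsvg command (the volume group name is a placeholder):

```shell
# Volume groups currently varied on (active) on this node
lsvg -o

# Logical volumes in one concurrent volume group
lsvg -l concurrentvg1
```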

    Adding a Concurrent Logical Volume to a Cluster

    To add a concurrent logical volume to a cluster:

      1. Enter the fastpath smit cl_admin
      2. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Logical Volumes > Add a Concurrent Logical Volume and press Enter.
    SMIT displays a list of concurrent volume groups.
      3. Select a concurrent volume group and press Enter.
    SMIT displays a list of physical volumes.
      4. Select a physical volume and press Enter.
    The Add a Concurrent Logical Volume panel appears, with chosen fields filled in as shown in the sample below:

    Resource Group name
    ccur_rg
    VOLUME GROUP name
    concurrentvg1
    Reference node
    a1
    * Number of LOGICAL PARTITIONS
    []
    PHYSICAL VOLUME names
    hdisk16
    Logical volume NAME
    If you are using this logical volume in a cross-site LVM mirroring configuration, name this logical volume appropriately, for example, LV1site1.
    Logical volume TYPE
    jfs2 is recommended for cross-site LVM mirroring configurations.
    POSITION on physical volume
    middle
    RANGE of physical volumes
    minimum
    MAXIMUM NUMBER of PHYSICAL VOLUMES to use for allocation
    []
    Number of COPIES of each logical partition
    Use at least two copies for cross-site LVM mirror configurations.
    Mirror Write Consistency?
    no (for concurrent environments)
    Allocate each logical partition copy on a SEPARATE physical volume?
    It is recommended to set this field to superstrict if a forced varyon operation is specified for the volume group. For cross-site LVM mirror configurations, select either superstrict or yes.
    RELOCATE the logical volume during reorganization
    yes
    Logical volume LABEL
    []
    MAXIMUM NUMBER of LOGICAL PARTITIONS
    [512]
    Enable BAD BLOCK relocation?
    no
    SCHEDULING POLICY for writing logical partition copies
    parallel
    Enable WRITE VERIFY?
    no
    File containing ALLOCATION MAP
    []
    Stripe Size?
    [Not Striped]


    The default logical volume characteristics suit most configurations; however, if you are using cross-site LVM mirroring, use the recommended values noted above.

      5. Make changes if necessary for your system and press Enter. Other cluster nodes are updated with this information.
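    As a point of reference, the panel values shown above correspond roughly to flags of the base AIX 5L mklv command. A sketch with placeholder names and sizes, using the cross-site recommendations (two copies, superstrict allocation) and the concurrent-environment settings (no mirror write consistency, no bad block relocation):

```shell
# Create a 10-LP jfs2 logical volume with two copies spread
# across hdisk16 and hdisk17:
#   -c 2  two copies of each logical partition
#   -s s  superstrict allocation (copies kept on separate disks)
#   -w n  mirror write consistency off (concurrent environments)
#   -b n  bad block relocation off
mklv -y LV1site1 -t jfs2 -c 2 -s s -w n -b n concurrentvg1 10 hdisk16 hdisk17
```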

    Removing a Concurrent Logical Volume

    To remove a concurrent logical volume on any node in a cluster:

      1. Enter the fastpath smit cl_admin
      2. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Logical Volumes > Remove a Concurrent Logical Volume and press Enter.
    C-SPOC displays a list of concurrent logical volumes, organized by HACMP resource group.
      3. Select the logical volume you want to remove and press Enter.
      4. To check the status of the C-SPOC command execution on other cluster nodes, view the C-SPOC log file in /tmp/cspoc.log.
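    The removal itself corresponds to the base rmlv command (the logical volume name is a placeholder):

```shell
# Check that no node has the logical volume open, then remove it;
# rmlv prompts for confirmation before deleting
lslv LV1site1
rmlv LV1site1
```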

    Setting Characteristics of a Concurrent Logical Volume

    You can use C-SPOC to do the following tasks:

  • Add copies to a concurrent logical volume
  • Remove copies from a concurrent logical volume
  • Show the current characteristics of a concurrent logical volume
  • Synchronize the concurrent LVM mirrors by logical volume.

    Adding a Copy to a Concurrent Logical Volume Using C-SPOC

    To add a copy to a concurrent logical volume on all nodes in a cluster:

      1. Enter the fastpath smit cl_admin
      2. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Logical Volumes > Set Characteristics of A Concurrent Logical Volume > Add a Copy to a Concurrent Logical Volume and press Enter.
    SMIT displays a list of logical volumes arranged by resource group.
      3. Select a logical volume from the picklist and press Enter.
    SMIT displays a list of physical volumes.
      4. Select a physical volume and press Enter.
    SMIT displays the Add a Copy to a Concurrent Logical Volume panel with the Resource Group, Logical Volume, Reference Node and default fields filled.
      5. Enter the new number of mirrors in the NEW TOTAL number of logical partitions field and press Enter.
    The C-SPOC utility changes the number of copies of this logical volume on all cluster nodes.
      6. To check the status of the C-SPOC command execution on all nodes, view the C-SPOC log file in /tmp/cspoc.log.
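    The underlying operation is the base mklvcopy command; a hand-run sketch with placeholder names:

```shell
# Raise the total number of copies of the logical volume to 2,
# placing the new copy on hdisk17
mklvcopy LV1site1 2 hdisk17

# New copies are stale until synchronized
syncvg -l LV1site1
```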

    Removing a Copy from a Concurrent Logical Volume Using C-SPOC

    To remove a copy of a concurrent logical volume on all nodes in a cluster:

      1. Enter the fastpath smit cl_admin
      2. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Logical Volumes > Set Characteristics of A Concurrent Logical Volume > Remove a Copy from a Concurrent Logical Volume and press Enter.
    SMIT displays a list of logical volumes arranged by resource group.
      3. Select a logical volume from the picklist and press Enter.
    SMIT displays a list of physical volumes.
      4. Select the physical volumes from which you want to remove copies and press Enter.
    SMIT displays the Remove a Copy from a Concurrent Logical Volume panel with the Resource Group, Logical Volume name, Reference Node and Physical Volume names fields filled in.
      5. Enter the new number of mirrors in the NEW maximum number of logical partitions copies field, verify that the PHYSICAL VOLUME name(s) to remove copies from field is correct, and press Enter.
    The C-SPOC utility changes the number of copies of this logical volume on all cluster nodes.
      6. To check the status of the C-SPOC command execution on all nodes, view the C-SPOC log file in /tmp/cspoc.log.
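    The corresponding base command is rmlvcopy (placeholder names):

```shell
# Reduce the logical volume to a single copy, dropping the copy
# that resides on hdisk17
rmlvcopy LV1site1 1 hdisk17
```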

    Show Characteristics of a Concurrent Logical Volume Using C-SPOC

    To show the current characteristics of a concurrent logical volume:

      1. On the source node, vary on the volume group in concurrent mode, using the SMIT varyonvg fastpath.
      2. Enter the fastpath smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Concurrent Logical Volumes > Show Characteristics of A Concurrent Logical Volume and press Enter.
    SMIT displays a list of logical volumes arranged by resource group.
      4. Select a logical volume from the picklist and press Enter.
    SMIT displays the Show Characteristics of A Concurrent Logical Volume panel with the Resource Group, and Logical Volume name fields filled in. The Option list field offers three choices for the display.
      5. Select Status, Physical Volume map, or Logical Partition map in the Option list field and press Enter.
    SMIT displays the characteristics of a concurrent logical volume in the selected format.
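    The three display options correspond to forms of the base lslv command (logical volume and hdisk names are placeholders):

```shell
# Status: attributes of the logical volume
lslv LV1site1

# Logical Partition map: LP-to-PP placement of each copy
lslv -m LV1site1

# Physical Volume map: allocation of the LV on one disk
lslv -p hdisk16 LV1site1
```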

    Synchronizing LVM Mirrors by Logical Volume

    The physical volumes (hdisks) in the volume group must be installed, configured, and available.

    To synchronize concurrent LVM mirrors by logical volume:

      1. On any cluster node that can own the concurrent volume group (is in the participating nodes list for the resource group), vary on the volume group, using the SMIT varyonvg fastpath (if it is not varied on already).
      2. On the source node, enter smit cl_admin
      3. In SMIT, select HACMP Concurrent Logical Volume Management > Synchronize Concurrent LVM Mirrors > Synchronize By Logical Volume and press Enter.
    SMIT displays a list of logical volumes.
      4. Select a logical volume and press Enter.
    SMIT displays a list of physical volumes.
      5. Select a physical volume and press Enter.
    SMIT displays the Synchronize LVM Mirrors by Logical Volume panel, with the chosen entries filled in.
      6. Enter values in other fields as needed for your operation:
    Resource Group Name
    The name of the resource group to which this logical volume belongs is displayed.
    LOGICAL VOLUME name
    The name of the logical volume that you want to synchronize is displayed.
    Reference node
    The node from which the name of the physical disk was retrieved is displayed.
    Number of Partitions to Sync in Parallel
    Enter a value from 1 to 32.
    Synchronize All Partitions
    The default is no.
    Delay Writes to VG from other cluster nodes during this Sync
    The default is no.
      7. If this panel reflects the correct information, press Enter to synchronize LVM mirrors by the concurrent logical volume.
    All nodes in the cluster receive this updated information.
      8. If you did this task from a cluster node that does not need the concurrent volume group varied on, vary off the volume group on that node.
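    Synchronizing a single logical volume maps to the -l form of the base syncvg command; a sketch with placeholder names:

```shell
# Synchronize only this logical volume's stale copies,
# two partitions at a time
syncvg -P 2 -l LV1site1
```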

