
Appendix A: Script Utilities


This appendix describes the utilities called by the event and startup scripts supplied with HACMP. These utilities are general-purpose tools that can be called from any script or from the AIX 5L command line. The examples assume they are called from a script.

This appendix also includes the reference pages for Cluster Resource Group Information commands.

Highlighting

The following highlighting conventions are used in this appendix:

Bold
Identifies command words, keywords, files, directories, and other items whose actual names are predefined by the system.
Italics
Identifies parameters whose actual names or values are supplied by the user.
Monospace
Identifies examples of specific data values, examples of text similar to what you may see displayed, examples of program code similar to what you may write as a programmer, messages from the system, or information you should actually type.

Reading Syntax Diagrams

Usually, a command follows this syntax:

[]
Material within brackets is optional.
{}
Material within braces is required.
|
Indicates an alternative. Only one of the options can be chosen.
< >
Text within angle brackets is a variable.
...
Indicates that one or more of the kinds of parameters or objects preceding the ellipsis can be entered.

Note: Flags listed in syntax diagrams throughout this appendix are those recommended for use with the HACMP for AIX 5L software. Flags used internally by SMIT are not listed.

Utilities

The script utilities are stored in the /usr/es/sbin/cluster/events/utils directory. The utilities described in this appendix are grouped in the following categories:

  • Disk Utilities
  • RS/6000 SP Utilities
  • Filesystem and Volume Group Utilities
  • Logging Utilities
  • Network Utilities
  • Resource Group Move Utilities
  • Emulation Utilities
  • Security Utilities
  • Start and Stop Tape Resource Utilities
  • Cluster Resource Group Information Commands

    Disk Utilities

    cl_disk_available

    Syntax

    cl_disk_available diskname ... 
    

    Description

    Checks whether each disk named as an argument is currently available to the system and, if not, makes the disk available.

    Parameters

    diskname
    List of one or more disks to be made available; for example, hdisk1.

    Return Values

    0
    Successfully made the specified disk available.
    1
    Failed to make the specified disk available.
    2
    Incorrect or bad arguments were used.
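
    Example

    A minimal ksh sketch of how a script might call this utility and check the result (the hdisk names are hypothetical):

    # Make the shared disks available; abort if any cannot be made available.
    cl_disk_available hdisk3 hdisk4
    if [[ $? -ne 0 ]]
    then
        echo "cl_disk_available failed for hdisk3 hdisk4" >&2
        exit 1
    fi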

    cl_fs2disk

    Syntax

    cl_fs2disk [-lvip] mount_point 
    

    or

    cl_fs2disk -g volume_group 
    

    where -l returns the logical volume, -v returns the volume group, -i returns the physical volume ID, -p returns the physical volume, and -g returns the mount point of a filesystem (given a volume group).

    Description

    Checks the ODM for the specified logical volume, volume group, physical volume ID, and physical volume information.

    Parameters

    mount_point
    Mount point of the filesystem to check.
    volume_group
    Volume group to check.

    Return Values

    0
    Successfully retrieved filesystem information.
    1
    Failed to retrieve filesystem information.
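
    Example

    A sketch of how a script might capture the values, assuming each requested item is written to standard output (the filesystem and volume group names are hypothetical):

    # Map a filesystem to its volume group, PVID, and back to its mount point.
    VG=$(cl_fs2disk -v /sharedfs)     # volume group containing /sharedfs
    PVID=$(cl_fs2disk -i /sharedfs)   # physical volume ID
    MP=$(cl_fs2disk -g $VG)           # mount point, given the volume group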

    cl_get_disk_vg_fs_pvids

    Syntax

    cl_get_disk_vg_fs_pvids [filesystem_list volumegroup_list] 
    

    Description

    Given filesystems and/or volume groups, the function returns a list of the associated PVIDs.

    Parameters

    filesystem_list
    The filesystems to check.
    volumegroup_list
    The volume groups to check.

    Return Values

    0
    Success.
    1
    Failure.
    2
    Invalid arguments.

    cl_is_array

    Syntax

    cl_is_array diskname 
    

    Description

    Checks to see if a disk is a READI disk array.

    Parameters

    diskname
    Single disk to test; for example, hdisk1.

    Return Values

    0
    Disk is a READI disk array.
    1
    Disk is not a READI disk array.
    2
    An error occurred.

    cl_is_scsidisk

    Syntax

    cl_is_scsidisk diskname 
    

    Description

    Determines if a disk is a SCSI disk.

    Parameters

    diskname
    Single disk to test; for example, hdisk1.

    Return Values

    0
    Disk is a SCSI disk.
    1
    Disk is not a SCSI disk.
    2
    An error occurred.

    cl_raid_vg

    Syntax

    cl_raid_vg volume_group 
    

    Description

    Checks to see if the volume group is comprised of RAID disk arrays.

    Parameters

    volume_group
    Single volume group to check.

    Return Values

    0
    Successfully identified a RAID volume group.
    1
    Could not identify a RAID volume group; volume group must be SSA.
    2
    An error occurred. Mixed volume group identified.

    cl_scdiskreset

    Syntax

    cl_scdiskreset /dev/diskname ... 
    

    Description

    Issues a reset (SCSI ioctl) to each SCSI disk named as an argument.

    Parameters

    /dev/diskname
    List of one or more SCSI disks.

    Return Values

    0
    All specified disks have been reset.
    -1
    No disks have been reset.
    n
    Number of disks successfully reset.

    cl_scdiskrsrv

    Syntax

    cl_scdiskrsrv /dev/diskname ... 
    

    Description

    Reserves the specified SCSI disk.

    Parameters

    /dev/diskname
    List of one or more SCSI disks.

    Return Values

    0
    All specified disks have been reserved.
    -1
    No disks have been reserved.
    n
    Number of disks successfully reserved.

    cl_sync_vgs

    Syntax

    cl_sync_vgs {-b | -f} volume_group ... 
    

    Description

    Attempts to synchronize a volume group by calling syncvg for the specified volume group.

    Parameters

    volume_group
    Volume group list.
    -b
    Background sync.
    -f
    Foreground sync.

    Return Values

    0
    Successfully started syncvg for all specified volume groups.
    1
    The syncvg of at least one of the specified volume groups failed.
    2
    No arguments were passed.
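
    Example

    For example, to start a background synchronization of two shared volume groups (names hypothetical):

    cl_sync_vgs -b sharedvg1 sharedvg2
    if [[ $? -eq 1 ]]
    then
        echo "syncvg failed for at least one volume group" >&2
    fi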

    scdiskutil

    Syntax

    scdiskutil -t /dev/diskname 
    

    Description

    Tests and clears any pending SCSI disk status.

    Parameters

    -t
    Tests to see if a unit is ready.
    /dev/diskname
    Single SCSI disk.

    Return Values

    -1
    An error occurred or no arguments were passed.
    0
    The disk is not reserved.
    >0
    The disk is reserved.
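
    Example

    A sketch of how a script might branch on the reservation status (disk name hypothetical; note that the shell reports a -1 exit status as 255):

    scdiskutil -t /dev/hdisk3
    rc=$?
    if [[ $rc -eq 255 ]]        # -1: error or no arguments
    then
        echo "scdiskutil error" >&2
    elif [[ $rc -eq 0 ]]
    then
        echo "/dev/hdisk3 is not reserved"
    else
        echo "/dev/hdisk3 is reserved"
    fi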

    ssa_fence

    Syntax

    ssa_fence -e event pvid 
    

    Description

    Fences a node in or out.

    This command also relies on environment variables: the first node up fences out all other nodes of the cluster, regardless of their participation in the resource group.

    If it is not the first node up, then the remote nodes fence in the node coming up. The node joining the cluster will not do anything.

    If it is a node_down event, the remote nodes will fence out the node that is leaving. The node leaving the cluster will not do anything.

    The last node going down clears the fence register.

    Environment Variables

    PRE_EVENT_MEMBERSHIP
    Set by Cluster Manager.
    POST_EVENT_MEMBERSHIP
    Set by Cluster Manager.
    EVENT_ON_NODE
    Set by calling script.

    Parameters

    -e event
    1=up; 2=down.
    pvid
    Physical volume ID on which fencing will occur.

    Return Values

    0
    Success.
    1
    Failure. A problem occurred during execution. A message describing the problem is written to stderr and to the cluster log file.
    2
    Failure. Invalid number of arguments. A message describing the problem is written to stderr and to the cluster log file.
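
    Example

    A minimal sketch of a call from an event script (the PVID is hypothetical; PRE_EVENT_MEMBERSHIP and POST_EVENT_MEMBERSHIP are assumed to have been set by the Cluster Manager):

    export EVENT_ON_NODE=nodeA       # set by the calling script
    ssa_fence -e 1 000123456789abcd  # 1 = up event for the named node
    if [[ $? -ne 0 ]]
    then
        echo "ssa_fence failed; see the cluster log file" >&2
        exit 1
    fi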

    ssa_clear

    Syntax

    ssa_clear -x | -d pvid 
    

    Description

    Clears or displays the contents of the fence register. If -d is used, a list of fenced out nodes will be displayed. If -x is used, the fence register will be cleared.

    Note: This command puts the data integrity of a disk at risk by unconditionally clearing its fence register. It requires adequate operator controls and warnings, and should not be included within any takeover script.

    Return Values

    0
    Success.
    1
    Failure. A problem occurred during execution. A message describing the problem is written to stderr and to the cluster log file.
    2
    Failure. Invalid number of arguments. A message describing the problem is written to stderr and to the cluster log file.

    ssa_clear_all

    Syntax

    ssa_clear_all pvid1 pvid2 ... 
    

    Description

    Clears the fence register on multiple physical volumes.

    Return Values

    0
    Success.
    1
    Failure. A problem occurred during execution. A message describing the problem is written to stderr and to the cluster log file.
    2
    Failure. Invalid number of arguments. A message describing the problem is written to stderr and to the cluster log file.

    ssa_configure

    Syntax

    ssa_configure 
    

    Description

    Assigns unique node IDs to all the nodes of the cluster. It then configures and unconfigures all SSA pdisks and hdisks on all nodes, thus activating SSA fencing. This command is called from the SMIT panel during the sync of a node environment. If this command fails for any reason, that node should be rebooted.

    Return Values

    0
    Success.
    1
    Failure. A problem occurred during execution. A message describing the problem is written to stderr and to the cluster log file.

    RS/6000 SP Utilities

    cl_swap_HPS_IP_address

    Syntax

    cl_swap_HPS_IP_address {cascading | rotating} {acquire | release} interface 
    address old_address netmask  
    

    Description

    This script assigns an alias address to an SP Switch interface, or removes an alias address, during IP address takeover. Note that adapter swapping does not apply to the SP Switch, since all addresses are alias addresses on the same network interface.

    Parameters

    action
    acquire or release.
    IP label behavior
    rotating or cascading. Select rotating if an IP label should be placed on a boot interface; select cascading if an IP label should be placed on a backup interface on a takeover node.
    interface
    The name of the interface.
    address
    The new alias IP address.
    old_address
    The alias IP address you want to change.
    netmask
    The network mask.

    Return Values

    0
    Success.
    1
    The network interface could not be configured (using the ifconfig command) at the specified address.
    2
    Invalid syntax.

    Examples

    The following example replaces the alias 1.1.1.1 with 1.1.1.2:

    cl_swap_HPS_IP_address cascading acquire css0 1.1.1.2  
        1.1.1.1 255.255.255.128 
    

    Filesystem and Volume Group Utilities

    The descriptions noted here apply to serial processing of resource groups. For a full explanation of parallel processing and JOB_TYPES, see Tracking Resource Group Parallel and Serial Processing in the hacmp.out File in Chapter 2: Using Cluster Log Files.

    cl_activate_fs

    Syntax

    cl_activate_fs /filesystem_mount_point ... 
    

    Description

    Mounts the filesystems passed as arguments.

    Parameters

    /filesystem_mount_point
    A list of one or more filesystems to mount.

    Return Values

    0
    All filesystems named as arguments were either already mounted or were successfully mounted.
    1
    One or more filesystems failed to mount.
    2
    No arguments were passed.

    cl_activate_nfs

    Syntax

    cl_activate_nfs retry host /filesystem_mount_point ... 
    

    Description

    NFS-mounts the filesystems passed as arguments. The routine keeps trying to mount the specified NFS mount point until either the mount succeeds or the retry count is reached. If the Filesystem Recovery Method attribute for the resource group being processed is parallel, the mounts are tried in the background while event processing continues; if the recovery method is sequential, the mounts are tried in the foreground. Note that this routine assumes the filesystem is already mounted if any mounted filesystem has a matching name.

    Parameters

    retry
    Number of attempts. Sleeps 15 seconds between attempts.
    host
    NFS server host.
    /filesystem_mount_point
    List of one or more filesystems to activate.

    Return Values

    0
    All filesystems passed were either mounted or mounts were scheduled in the background.
    1
    One or more filesystems failed to mount.
    2
    No arguments were passed.
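
    Example

    For example, to try an NFS mount up to five times (roughly 75 seconds, given the 15-second sleep between attempts) from server nodeA (names hypothetical):

    cl_activate_nfs 5 nodeA /mnt/shared_nfs
    case $? in
    0)  : ;;                                   # mounted, or scheduled in background
    1)  echo "NFS mount failed" >&2 ;;
    2)  echo "usage error: no arguments" >&2 ;;
    esac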

    cl_activate_vgs

    Syntax

    cl_activate_vgs [-n] volume_group_to_activate ... 
    

    Description

    Initiates a varyonvg of the volume groups passed as arguments.

    Parameters

    -n
    Do not sync the volume group when varyon is called.
    volume_group_to_activate
    List of one or more volume groups to activate.

    Return Values

    0
    All of the volume groups are successfully varied on.
    1
    The varyonvg of at least one volume group failed.
    2
    No arguments were passed.
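
    Example

    For example, to vary on two shared volume groups without an automatic sync (names hypothetical):

    cl_activate_vgs -n sharedvg1 sharedvg2 || exit 1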

    cl_deactivate_fs

    Syntax

    cl_deactivate_fs /filesystem_mount_point ... 
    

    Description

    Attempts to unmount any filesystem passed as an argument that is currently mounted.

    Parameters

    /filesystem_mount_point
    List of one or more filesystems to unmount.

    Return Values

    0
    All filesystems were successfully unmounted.
    1
    One or more filesystems failed to unmount.
    2
    No arguments were passed.

    cl_deactivate_nfs

    Syntax

    cl_deactivate_nfs file_system_to_deactivate ... 
    

    Description

    Attempts a forced unmount (umount -f) of any NFS filesystem passed as an argument that is currently mounted.

    Parameters

    file_system_to_deactivate
    List of one or more NFS-mounted filesystems to unmount.

    Return Values

    0
    Successfully unmounted a specified filesystem.
    1
    One or more filesystems failed to unmount.
    2
    No arguments were passed.

    cl_deactivate_vgs

    Syntax

    cl_deactivate_vgs volume_group_to_deactivate ... 
    

    Description

    Initiates a varyoffvg of any volume group that is currently varied on and that was passed as an argument.

    Parameters

    volume_group_to_deactivate
    List of one or more volume groups to vary off.

    Return Values

    0
    All of the volume groups are successfully varied off.
    1
    The varyoffvg of at least one volume group failed.
    2
    No arguments were passed.

    cl_export_fs

    Syntax

    cl_export_fs hostname file_system_to_export ... 
    

    Description

    NFS-exports the filesystems given as arguments so that NFS clients can continue to work.

    Parameters

    hostname
    Hostname of host given root access.
    file_system_to_export
    List of one or more filesystems to NFS-export.

    Return Values

    0
    Successfully exported all filesystems specified.
    1
    A runtime error occurred: either the export failed or starting the NFS daemons (startsrc) failed.
    2
    No arguments were passed.

    cl_nfskill

    Syntax

    cl_nfskill [-k] [-t] [-u] directory ...  
    

    Description

    Lists the process numbers of local processes using the specified NFS directory, then finds and kills processes that are executables fetched from the NFS-mounted filesystem. Only the root user can kill a process of another user.

    If you specify the -t flag, all processes that have certain NFS module names within their stack will be killed.

    Warning: When using the -t flag it is not possible to tell which NFS filesystem the process is related to. This could result in killing processes that belong to NFS-mounted filesystems other than those that are cross-mounted from another HACMP node and under HACMP control. This could also mean that the processes found could be related to filesystems under HACMP control but not part of the current resources being taken. This flag should therefore be used with caution and only if you know you have a specific problem with unmounting the NFS filesystems.

    To help to control this, the cl_deactivate_nfs script contains the normal calls to cl_nfskill with the -k and -u flags and commented calls using the -t flag as well. If you use the -t flag, you should uncomment those calls and comment the original calls.

    Parameters

    -k
    Sends the SIGKILL signal to each local process.
    -u
    Provides the login name for local processes in parentheses after the process number.
    -t
    Finds and kills processes that have certain NFS module names within their stack (see the warning above).
    directory
    List of one or more NFS directories to check.

    Return Values

    None.
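
    Example

    A typical invocation, mirroring the default calls in cl_deactivate_nfs (directory name hypothetical):

    # Kill local processes holding the NFS directory, reporting login names.
    cl_nfskill -k -u /mnt/shared_nfs
    # The -t variant is normally left commented out; see the warning above.
    # cl_nfskill -k -t -u /mnt/shared_nfs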

    Logging Utilities

    cl_log

    Syntax

    cl_log message_id default_message variables 
    

    Description

    Logs messages to syslog and standard error.

    Parameters

    message_id
    Message ID for the messages to be logged.
    default_message
    Default message to be logged.
    variables
    List of one or more variables to be logged.

    Return Values

    0
    Successfully logged messages to syslog and standard error.
    2
    No arguments were passed.
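
    Example

    A minimal sketch (the message ID, message text, and filesystem name are hypothetical):

    FS=/sharedfs
    cl_log 1234 "cl_myscript: mount of $FS failed" $FS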

    cl_echo

    Syntax

    cl_echo message_id default_message variables 
    

    Description

    Logs messages to standard error.

    Parameters

    message_id
    Message ID for the messages to be displayed.
    default_message
    Default message to be displayed.
    variables
    List of one or more variables to be displayed.

    Return Values

    0
    Successfully displayed messages to standard error.
    2
    No arguments were passed.

    Network Utilities

    cl_swap_HW_address

    Syntax

    cl_swap_HW_address address interface 
    

    Description

    Checks to see if an alternate hardware address is specified for the address passed as the first argument. If so, it assigns the hardware address specified to the network interface.

    Parameters

    address
    Interface address or IP label.
    interface
    Interface name (for example, en0 or tr0).

    Return Values

    0
    Successfully assigned the specified hardware address to a network interface.
    1
    Could not assign the specified hardware address to a network interface.
    2
    Wrong number of arguments were passed.

    Note: This utility is used during adapter_swap and IP address takeover.
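
    Example

    For example, to assign the alternate hardware address configured for a service IP label to interface en0 (address and interface hypothetical):

    cl_swap_HW_address 1.1.1.1 en0 || exit 1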

    cl_swap_IP_address

    Syntax

    cl_swap_IP_address {cascading | rotating} {acquire | release} interface 
    new_address old_address netmask 
    cl_swap_IP_address swap_adapter swap interface1 address1 interface2 
    address2 netmask 
    

    Description

    This routine is used during adapter_swap and IP address takeover.

    In the first form, the routine sets the specified interface to the specified address:

    cl_swap_IP_address rotating acquire en0 1.1.1.1 255.255.255.128 
    

    In the second form, the routine sets two interfaces in a single call. An example where this is required is the case of swapping two interfaces.

    cl_swap_IP_address swap_adapter swap en0 1.1.1.1 en1 2.2.2.2 
    255.255.255.128 
    

    Parameters

    interface
    Interface name
    address
    IP address
    IP label behavior
    rotating/cascading. Select rotating if an IP label should be placed on a boot interface; select cascading if an IP label should be placed on a backup interface on a takeover node.
    netmask
    Network mask. Must be in decimal format.

    Return Values

    0
    Successfully swapped IP addresses.
    1
    ifconfig failed.
    2
    Wrong or incorrect number of arguments.

    This utility is used for swapping the IP address of either a standby network interface with a local service network interface (called adapter swapping), or a standby network interface with a remote service network interface (called masquerading). For masquerading, the cl_swap_IP_address routine should sometimes be called before processes are stopped, and sometimes after; this is application dependent. Some applications respond better if they shut down before the network connection is broken, and some respond better if the network connection is closed first.

    cl_unswap_HW_address

    Syntax

    cl_unswap_HW_address interface 
    

    Description

    Script used during adapter_swap and IP address takeover. It restores a network interface to its boot address.

    Parameters

    interface
    Interface name (for example, en0 or tr0).

    Return Values

    0
    Success.
    1
    Failure.
    2
    Invalid parameters.

    Resource Group Move Utilities

    clRGmove

    The clRGmove utility is stored in the /usr/es/sbin/cluster/utilities directory.

    Syntax

    The general syntax for migrating, starting, or stopping a resource group dynamically from the command line is this:

    clRGmove -g <groupname> [-n <nodename> | -r | -a] [-m | -u | -d] [-s 
    true|false ] [-p] [-i] 
    

    Description

    This utility communicates with the Cluster Manager to queue an rg_move event to bring a specified resource group offline or online, or to move a resource group to a different node. It provides the command-line interface to the Resource Group Migration functionality, which can also be accessed through SMIT: System Management (C-SPOC) > HACMP Resource Group and Application Management > Move a Resource Group to another Node/Site.

    You can also use this command from the command line, or include it in the pre- and post-event scripts.

    Parameters

    -a
    For concurrent resource groups only. This flag is interpreted as all nodes in the resource group when bringing the concurrent resource group offline or online.
    To bring a concurrent resource group online or offline on a single node, use the -n flag.
    -d
    Use this flag to bring the resource group offline.
    Cannot be used with -m or -u flags.
    -g <groupname>
    The name of the resource group to move.
    -i
    Displays the locations and states of all resource groups in the cluster after the migration has completed by calling the clRGinfo command.
    -m
    Specify this flag to move the resource group from its current node to a destination node that you specify.
    Cannot be used with -d or -u flags.
    -n <nodename>
    The name of the node to which the resource group will be moved. For a non-concurrent resource group this flag can only be used when bringing a resource group online or moving a resource group to another node. For a concurrent resource group this flag can be used to bring a resource group online or offline on a single node. Cannot be used with -r or -a flags.
    -p
    Use this flag to show the temporary changes in resource group behavior that occurred because of the resource group migration utility.

    -r
    This flag can only be used when bringing a non-concurrent resource group online or moving a non-concurrent resource group to another node.
    If this flag is specified, the command uses the highest priority node that is available as the destination node to which the specified resource group will be moved.
    Cannot be used with -n or -a flags.
    -s true | false
    Use this flag to specify actions on the primary or secondary instance of a resource group (if sites are defined). With this flag, you can take the primary or the secondary instance of the resource group offline or online, or move it to another node within the same site.
    -s true specifies actions on the secondary instance of a resource group.
    -s false specifies actions on the primary instance of a resource group.
    Use this flag with the -r, -d, -u, and -m flags.
    -u
    Use this flag to bring the resource group online.
    Cannot be used with -m or -d flags.

    Repeat this syntax on the command line for each resource group you want to migrate.

    Return Values

    0
    Success.
    1
    Failure.
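
    Examples

    Typical invocations (group and node names hypothetical):

    # Move a resource group to node bilbo and display the resulting locations.
    clRGmove -g rg1 -m -n bilbo -i

    # Bring a resource group offline on its current node.
    clRGmove -g rg1 -d

    # Bring a non-concurrent group online on its highest-priority available node.
    clRGmove -g rg1 -u -r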

    Emulation Utilities

    Emulation utilities are found in the /usr/es/sbin/cluster/events/emulate/driver directory.

    cl_emulate

    Syntax

    cl_emulate -e node_up -n nodename 
    cl_emulate -e node_down -n nodename {-f | -g | -t} 
    cl_emulate -e network_up -w networkname -n nodename 
    cl_emulate -e network_down -w networkname -n nodename 
    cl_emulate -e join_standby -n nodename -a ip_label 
    cl_emulate -e fail_standby -n nodename -a ip_label 
    cl_emulate -e swap_adapter -n nodename -w networkname -a ip_label 
    		-d ip_label 
    

    Description

    Emulates a specific cluster event and outputs the result of the emulation. The output is shown on the screen as the emulation runs, and is saved to an output file on the node from which the emulation was executed.

    The Event Emulation utility does not run customized scripts such as pre- and post-event scripts. In the output file the script is echoed and the syntax is checked, so you can predict possible errors in the script. However, if customized scripts exist, the outcome of running the actual event may differ from the outcome of the emulation.

    When emulating an event that contains a customized script, the Event Emulator uses the ksh flags -n and -v. The -n flag reads commands and checks them for syntax errors, but does not execute them. The -v flag indicates verbose mode. When writing customized scripts that may be accessed during an emulation, be aware that the other ksh flags may not be compatible with the -n flag and may cause unpredictable results during the emulation. See the ksh man page for flag descriptions.

    You can run only one instance of an event emulation at a time. If you attempt to start an emulation while an emulation is already running on a cluster, the integrity of the output cannot be guaranteed.

    Parameters

    -e eventname
    The name of the event to emulate: node_up, node_down, network_up, network_down, join_standby, fail_standby, swap_adapter.
    -n nodename
    The node name used in the emulation.
    -f
    Emulates stopping cluster services with the option to place resource groups in an UNMANAGED state. Cluster daemons terminate without running any local procedures.
    -g
    Emulates stopping cluster services with the option to bring resource groups offline.
    -t
    Emulates stopping cluster services with the option to move resource groups to another node.
    -w networkname
    The network name used in the emulation.
    -a ip_label
    The standby network interface address with which to switch.
    -d ip_label
    The service network interface to fail.

    Return Values

    0
    Success.
    1
    Failure.

    Note: The cldare command also provides an emulation feature for dynamic reconfiguration events.
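
    Examples

    Typical emulations (node, network, and IP label names hypothetical):

    # Emulate nodeA leaving the cluster with resource groups moved to another node.
    cl_emulate -e node_down -n nodeA -t

    # Emulate a swap_adapter event on network net_ether_01.
    cl_emulate -e swap_adapter -n nodeA -w net_ether_01 -a stby_label -d svc_label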

    Security Utilities

    HACMP security utilities include kerberos setup utilities.

    Kerberos Setup Utilities

    To simplify and automate the process of configuring Kerberos security on the SP, two scripts for setting up Kerberos service principals are provided with HACMP:

  • cl_setup_kerberos—extracts the HACMP network interface labels from an already configured node and creates a file, cl_krb_service, that contains all of the HACMP network interface labels and additional format information required by the add_principal Kerberos setup utility. Also creates the cl_adapters file that contains a list of the network interfaces required to extract the service principals from the authentication database.
  • cl_ext_krb—prompts the user to enter the Kerberos password to be used for the new principals, and uses this password to update the cl_krb_service file. Checks for a valid .k file and alerts the user if one does not exist. Once a valid .k file is found, the cl_ext_krb script runs the add_principal utility to add all the network interface labels from the cl_krb_service file into the authentication database; extracts the service principals and places them in a new Kerberos services file, cl_krb-srvtab; creates the cl_klogin file that contains additional entries required by the .klogin file; updates the .klogin file on the control workstation and all nodes in the cluster; and concatenates the cl_krb-srvtab file to each node’s /etc/krb-srvtab file.

    Start and Stop Tape Resource Utilities

    The following sample scripts are supplied with the software.

    tape_resource_start_example

    Syntax

    tape_resource_start_example  
    

    Description

    Rewinds a highly available tape resource.

    Parameters

    None.

    Return Values

    0
    Successfully started the tape resource.
    1
    Failure.
    2
    Usage error.

    tape_resource_stop_example

    Syntax

    tape_resource_stop_example 
    

    Description

    Rewinds the highly available tape resource.

    Parameters

    None.

    Return Values

    0
    Successfully stopped the tape resource.
    1
    Failure.
    2
    Usage error.

    Cluster Resource Group Information Commands

    The following commands are available for use by scripts or for execution from the command line:

  • clRMupdate
  • clRGinfo

    HACMP event scripts use the clRMupdate command to notify the Cluster Manager that it should process an event. It is not documented for end users; it should be used only in consultation with IBM support personnel.

    Users or scripts can execute the clRGinfo command to get information about resource group status and location.

    clRGinfo

    Syntax

    clRGinfo [-a] [-h] [-v] [-s | -c] [-p] [-t] [-d] [groupname1] 
    [groupname2] ... 
    

    Description

    Use the clRGinfo command to display the location and state of all resource groups.

    If clRGinfo cannot communicate with the Cluster Manager on the local node, it attempts to find a cluster node with the Cluster Manager running, from which resource group information may be retrieved. If clRGinfo fails to find at least one node with the Cluster Manager running, HACMP displays an error message.

    clRGinfo: Resource Manager daemon is unavailable 
    

    Parameters

    -s or -c
    Displays output in colon-separated (shortened) format.
    -a
    Displays the pre-event and post-event node locations of the resource group. (Recommended for use in pre- and post-event scripts in clusters with dependent resource groups.)
    -d
    Displays the name of the server that provided the information for the command.
    -p
    Displays the node that temporarily has the highest priority for this instance, as well as the state of the primary and secondary instances of the resource group. The command shows information about those resource groups whose locations were temporarily changed because of the resource group migration utility.
    -t
    Collects information only from the local node and displays the delayed fallback timer and the settling time settings for resource groups on the local node.
    Note: This flag can be used only if the Cluster Manager is running on the local node.
    -h
    Displays the usage message.
    -v
    Displays verbose output with the startup, fallover, and fallback policies for resource groups.

    Return Values

    0
    Success.
    1
    Operation could not be performed.

    Examples of clRGinfo Output

    clRGinfo

    $ /usr/es/sbin/cluster/utilities/clRGinfo  
    ------------------------------------------------------------------------ 
    Group Name        Group State     Node           Node State    
    ------------------------------------------------------------------------ 
    Group1            ONLINE          merry          ONLINE      
                                      samwise        OFFLINE                    
    Group2            ONLINE          merry          ONLINE      
    

    If you run the clRGinfo command with sites configured, that information is displayed as in the following example:

    $ /usr/es/sbin/cluster/utilities/clRGinfo 
    ------------------------------------------------------------------------ 
    Group Name        Group State     Node           Node State    
    ------------------------------------------------------------------------ 
    Colors            ONLINE          white@Site1    ONLINE      
                                      amber@Site1    OFFLINE                    
                                      yellow@Site1   OFFLINE     
                                      navy@Site2     ONLINE_SECONDARY  
                                      ecru@Site2     OFFLINE 
    

    clRGinfo -c -p

    If you run the clRGinfo -c -p command, it lists the output in colon-separated format, with fields indicating the status and location of each resource group.

    Possible States of a Resource Group

    ONLINE 
    OFFLINE 
    OFFLINE Unmet dependencies 
    OFFLINE User requested  
    UNKNOWN 
    ACQUIRING 
    RELEASING 
    ERROR 
    TEMPORARY ERROR 
    ONLINE SECONDARY 
    ONLINE PEER 
    ACQUIRING SECONDARY 
    RELEASING SECONDARY 
    ACQUIRING PEER 
    RELEASING PEER 
    ERROR SECONDARY 
    TEMPORARY ERROR SECONDARY 
    

    clRGinfo -a

    The clRGinfo -a command lets you know the pre-event location and the post-event location of a particular resource group. Because HACMP performs these calculations at event startup, this information will be available in pre-event scripts (such as a pre-event script to node_up), on all nodes in the cluster, regardless of whether the node where it is run takes any action on a particular resource group.

    Note: clRGinfo -a provides meaningful output only if you run it while a cluster event is being processed.

    In this example, resource group A is moving from the offline state to the online state on node B. The pre-event location is left blank; the post-event location is node B:

    :rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
    --------------------------------------------------------
    Group Name Resource Group Movement
    --------------------------------------------------------
    rgA PRIMARY=":nodeB"

    In this example, resource group B is moving from node B to the offline state. The pre-event location is node B; the post-event location is left blank:

    :rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
    --------------------------------------------------------
    Group Name Resource Group Movement
    --------------------------------------------------------
    rgB PRIMARY="nodeB:"

    In this example, resource group C is moving from node A to node B. The pre-event location is node A; the post-event location is node B:

    :rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
    --------------------------------------------------------
    Group Name Resource Group Movement
    --------------------------------------------------------
    rgC PRIMARY="nodeA:nodeB"

    In this example with sites, the primary instance of resource group C is moving from node A to node B, and the secondary instance stays on node C:

    :rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
    --------------------------------------------------------
    Group Name Resource Group Movement
    --------------------------------------------------------
    rgC PRIMARY="nodeA:nodeB"
    SECONDARY="nodeC:nodeC"

    With concurrent resource groups, the output indicates each node from which a resource group is moving online or offline. In the following example, both nodes release the resource group:

    :rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
    --------------------------------------------------------
    Group Name Resource Group Movement
    --------------------------------------------------------
    rgA "nodeA:"
    rgA "nodeB:"

    clRGinfo -p

    The clRGinfo -p command displays the node that temporarily has the highest priority for this instance, as well as the state of the primary and secondary instances of the resource group. The command shows information about those resource groups whose locations were temporarily changed because of the resource group migration utility.

    $ /usr/es/sbin/cluster/utilities/clRGinfo -p 
    Cluster Name: TestCluster  
    Resource Group Name: Parent 
    Primary instance(s): 
    The following node temporarily has the highest priority for this 
    instance: 
    user-requested rg_move performed on Wed Dec 31 19:00:00 1969 
    Node                         State             
    ---------------------------- ---------------  
    node3@s2 		OFFLINE  
    node2@s1		ONLINE  
    node1@s0		OFFLINE          
    Resource Group Name: Child 
    Node                         State             
    ---------------------------- ---------------  
    node3@s2		ONLINE  
    node2@s1		OFFLINE  
    node1@s0		OFFLINE  
    ---------------------------- ---------------  
    

    clRGinfo -p -t

    The clRGinfo -p -t command displays the node that temporarily has the highest priority for this instance and a resource group's active timers:

    /usr/es/sbin/cluster/utilities/clRGinfo -p -t        
    Cluster Name: MyTestCluster  
    Resource Group Name: Parent 
    Primary instance(s): 
    The following node temporarily has the highest priority for this 
    instance: 
    node4, user-requested rg_move performed on Fri Jan 27 15:01:18 2006 
    Node 		Primary State    Secondary State		Delayed Timers    
    ------------------------------- --------------- 	------------------- 
    node1@siteA 		OFFLINE          ONLINE SECONDARY  
    node2@siteA  	 	OFFLINE          OFFLINE                            
    node3@siteB 		OFFLINE          OFFLINE  
    node4@siteB  		ONLINE           OFFLINE  
    Resource Group Name: Child 
    Node 		State            Delayed Timers    
    ---------------------------- --------------- ------------------- 
    node2		ONLINE                               
    node1 		OFFLINE                              
    node4 		OFFLINE                              
    node3 		OFFLINE                              
    

    clRGinfo -s

    $ /usr/es/sbin/cluster/utilities/clRGinfo -s 
    Group1:ONLINE:merry:OHN:FNPN:FBHPN:ignore: : :  
    Group1:OFFLINE:samwise:OHN:FNPN:FBHPN:ignore: : :  
    Group2:ONLINE:merry:OAAN:BO:NFB:ignore: : :  
    Group2:ONLINE:samwise:OAAN:BO:NFB:ignore: : :  
    

    The -s flag prints the output in the following order:

    RGName:state:node:type:startup:fallover:fallback:site:POL:POL_SEC: 
    fallbackTime:settlingTime 
    $ /usr/es/sbin/cluster/utilities/clRGinfo -s 
    Group1:ONLINE:white::ONLINE:OHN:FNPN:FBHPN:PPS: : : :ONLINE:Site1: 
    Group1:OFFLINE:amber::OFFLINE:OHN:FNPN:FBHPN:PPS: : : :ONLINE:Site1 : 
    Group1:ONLINE:yellow::ONLINE:OAAN:BO:NFB:PPS: : : :ONLINE:Site1: 
    Group1:ONLINE:navy::ONLINE:OAAN:BO:NFB:PPS: : : :ONLINE:Site2: 
    Group1:ONLINE:ecru::ONLINE:OAAN:BO:NFB:PPS: : : :ONLINE:Site2: 
    

    where the resource group startup, fallover, and fallback preferences are abbreviated as follows:

  • Resource group startup policies:
  • OHN: Online On Home Node Only
  • OFAN: Online On First Available Node
  • OUDP: Online Using Distribution Policy
  • OAAN: Online On All Available Nodes
  • Resource group fallover policies:
  • FNPN: Fallover To Next Priority Node In The List
  • FUDNP: Fallover Using Dynamic Node Priority
  • BO: Bring Offline (On Error Node Only)
  • Resource group fallback policies:
  • FBHPN: Fallback To Higher Priority Node In The List
  • NFB: Never Fallback
  • Resource group intersite policies:
  • ignore: ignore
  • OES: Online On Either Site
  • OBS: Online Both Sites
  • PPS: Prefer Primary Site

    If an attribute is not available for a resource group, the command displays a colon and a blank instead of the attribute.

    clRGinfo -v

    $ /usr/es/sbin/cluster/utilities/clRGinfo -v 
    Cluster Name: myCLuster  
    Resource Group Name: Group1 
    Startup Policy: Online On Home Node Only 
    fallover Policy: fallover To Next Priority Node In The List 
    Fallback Policy: Fallback To Higher Priority Node In The List 
    Site Policy: ignore 
    Node		State            
    --------------- ---------------  
    merry            ONLINE           
    samwise          OFFLINE          
    Resource Group Name: Group2 
    Startup Policy: Online On All Available Nodes 
    fallover Policy: Bring Offline (On Error Node Only) 
    Fallback Policy: Never Fallback 
    Site Policy: ignore 
    

    Node             State            
    --------------- ---------------  
    merry            ONLINE           
    samwise          ONLINE          
    

