Appendix A: Script Utilities
This appendix describes the utilities called by the event and startup scripts supplied with HACMP. These utilities are general-purpose tools that can be called from any script or from the AIX 5L command line. The examples assume they are called from a script.
This appendix also includes the reference pages for Cluster Resource Group Information commands.
Highlighting
The following highlighting conventions are used in this appendix:
Reading Syntax Diagrams
Usually, a command follows this syntax:
Note: Flags listed in syntax diagrams throughout this appendix are those recommended for use with the HACMP for AIX 5L software. Flags used internally by SMIT are not listed.
Utilities
The script utilities are stored in the /usr/es/sbin/cluster/events/utils directory. The utilities described in this appendix are grouped in the following categories:
Disk Utilities
cl_disk_available
Syntax
Description
Checks to see if a disk named as an argument is currently available to the system and if not, makes the disk available.
Parameters
Return Values
0    Successfully made the specified disk available.
1    Failed to make the specified disk available.
2    Incorrect or bad arguments were used.
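For illustration, a minimal sketch of how a script might call this utility and act on the return code, assuming the disk is named directly as an argument (hdisk3 is a hypothetical disk name):

/usr/es/sbin/cluster/events/utils/cl_disk_available hdisk3
rc=$?
if [ $rc -ne 0 ]
then
    # 1 = could not make the disk available, 2 = bad arguments
    echo "cl_disk_available failed for hdisk3 (rc=$rc)" >&2
fi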
cl_fs2disk
Syntax
or
where -l identifies and returns the logical volume, -v returns the volume group, -i returns the physical volume ID, -p returns the physical volume, and -g is the mount point of a filesystem (given a volume group).
Description
Checks the ODM for the specified logical volume, volume group, physical volume ID, and physical volume information.
Parameters
Return Values
cl_get_disk_vg_fs_pvids
Syntax
Description
Given filesystems and/or volume groups, the function returns a list of the associated PVIDs.
Parameters
Return Values
cl_is_array
Syntax
Description
Checks to see if a disk is a READI disk array.
Parameters
Return Values
cl_is_scsidisk
Syntax
Description
Determines if a disk is a SCSI disk.
Parameters
Return Values
cl_raid_vg
Syntax
Description
Checks to see if the volume group consists of RAID disk arrays.
Parameters
Return Values
0    Successfully identified a RAID volume group.
1    Could not identify a RAID volume group; volume group must be SSA.
2    An error occurred. Mixed volume group identified.
cl_scdiskreset
Syntax
Description
Issues a reset (SCSI ioctl) to each SCSI disk named as an argument.
Parameters
Return Values
0     All specified disks have been reset.
-1    No disks have been reset.
n     Number of disks successfully reset.
cl_scdiskrsrv
Syntax
Description
Reserves the specified SCSI disk.
Parameters
Return Values
0     All specified disks have been reserved.
-1    No disks have been reserved.
n     Number of disks successfully reserved.
cl_sync_vgs
Syntax
Description
Attempts to synchronize a volume group by calling syncvg for the specified volume group.
Parameters
Return Values
0    Successfully started syncvg for all specified volume groups.
1    The syncvg of at least one of the specified volume groups failed.
2    No arguments were passed.
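A usage sketch, assuming volume group names are passed directly as arguments (vg01 is hypothetical):

/usr/es/sbin/cluster/events/utils/cl_sync_vgs vg01
if [ $? -eq 1 ]
then
    # syncvg failed for at least one of the specified volume groups
    echo "cl_sync_vgs: synchronization of vg01 failed" >&2
fi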
scdiskutil
Syntax
Description
Tests and clears any pending SCSI disk status.
Parameters
Return Values
-1    An error occurred or no arguments were passed.
0     The disk is not reserved.
>0    The disk is reserved.
ssa_fence
Syntax
Description
Fences a node in or out.
This command also relies on environment variables; the first node up fences out all other nodes of the cluster, regardless of their participation in the resource group.
If it is not the first node up, then the remote nodes fence in the node coming up. The node joining the cluster will not do anything.
If it is a node_down event, the remote nodes will fence out the node that is leaving. The node leaving the cluster will not do anything.
The last node going down clears the fence register.
Environment Variables
PRE_EVENT_MEMBERSHIP     Set by Cluster Manager.
POST_EVENT_MEMBERSHIP    Set by Cluster Manager.
EVENT_ON_NODE            Set by calling script.
Parameters
Return Values
ssa_clear
Syntax
Description
Clears or displays the contents of the fence register. If -d is used, a list of fenced out nodes will be displayed. If -x is used, the fence register will be cleared.
Note: This command puts the data integrity of a disk at risk, because it unconditionally clears the disk's fence register. It requires adequate operator controls and warnings, and should not be included within any takeover script.
Return Values
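A hedged sketch of both modes, assuming the utility takes the physical volume as its final argument (hdisk5 is a hypothetical disk name, and the argument form is an assumption):

# Display the list of fenced-out nodes for the disk
/usr/es/sbin/cluster/events/utils/ssa_clear -d hdisk5
# Unconditionally clear the fence register (see the Note above)
/usr/es/sbin/cluster/events/utils/ssa_clear -x hdisk5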
ssa_clear_all
Syntax
Description
Clears the fence register on multiple physical volumes.
Return Values
ssa_configure
Syntax
Description
Assigns unique node IDs to all the nodes of the cluster. It then configures and unconfigures all SSA pdisks and hdisks on all nodes, thus activating SSA fencing. This command is called from the SMIT panel during the synchronization of a node environment. If this command fails for any reason, that node should be rebooted.
Return Values
0    Success.
1    Failure. A problem occurred during execution. A message describing the problem is written to stderr and to the cluster log file.
RS/6000 SP Utilities
cl_swap_HPS_IP_address
Syntax
Description
This script is used to assign an alias address to an SP Switch interface, or to remove an alias address, during IP address takeover. Note that adapter swapping does not make sense for the SP Switch, since all addresses are alias addresses on the same network interface.
Parameters
Return Values
0    Success.
1    The network interface could not be configured (using the ifconfig command) at the specified address.
2    Invalid syntax.
Examples
The following example replaces the alias 1.1.1.1 with 1.1.1.2:
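A hedged reconstruction of such a call, assuming the interface, the new alias address, and the old alias address are passed as positional arguments (css0 is the usual SP Switch interface name; the argument order is an assumption):

/usr/es/sbin/cluster/events/utils/cl_swap_HPS_IP_address css0 1.1.1.2 1.1.1.1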
Filesystem and Volume Group Utilities
The descriptions noted here apply to serial processing of resource groups. For a full explanation of parallel processing and JOB_TYPES, see Tracking Resource Group Parallel and Serial Processing in the hacmp.out File in Chapter 2: Using Cluster Log Files.
cl_activate_fs
Syntax
Description
Mounts the filesystems passed as arguments.
Parameters
Return Values
0    All filesystems named as arguments were either already mounted or were successfully mounted.
1    One or more filesystems failed to mount.
2    No arguments were passed.
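A minimal sketch, assuming mount points are passed directly as arguments (/fs1 and /fs2 are hypothetical):

/usr/es/sbin/cluster/events/utils/cl_activate_fs /fs1 /fs2
[ $? -eq 1 ] && echo "cl_activate_fs: one or more filesystems failed to mount" >&2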
cl_activate_nfs
Syntax
Description
NFS-mounts the filesystems passed as arguments. The routine keeps trying to mount the specified NFS mount point until either the mount succeeds or the retry count is reached. If the Filesystem Recovery Method attribute for the resource group being processed is parallel, the mounts are tried in the background while event processing continues. If the recovery method is sequential, the mounts are tried in the foreground. Note that this routine assumes the filesystem is already mounted if any mounted filesystem has a matching name.
Parameters
retry    Number of attempts. Sleeps 15 seconds between attempts.
host     NFS server host.
/filesystem_mount_point    List of one or more filesystems to activate.
Return Values
0    All filesystems passed were either mounted or mounts were scheduled in the background.
1    One or more filesystems failed to mount.
2    No arguments were passed.
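Given the parameters above, a sketch of a call that retries an NFS mount up to 5 times (the server name nfssrv1 and mount point /mnt/shared are hypothetical):

/usr/es/sbin/cluster/events/utils/cl_activate_nfs 5 nfssrv1 /mnt/shared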
cl_activate_vgs
Syntax
Description
Initiates a varyonvg of the volume groups passed as arguments.
Parameters
-n    Do not sync the volume group when varyon is called.
volume_group_to_activate    List of one or more volume groups to activate.
Return Values
0    All of the volume groups are successfully varied on.
1    The varyonvg of at least one volume group failed.
2    No arguments were passed.
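A sketch combining the -n flag described above with a hypothetical volume group name:

# Vary on vg01 without synchronizing it
/usr/es/sbin/cluster/events/utils/cl_activate_vgs -n vg01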
cl_deactivate_fs
Syntax
Description
Attempts to unmount any filesystem passed as an argument that is currently mounted.
Parameters
Return Values
0    All filesystems were successfully unmounted.
1    One or more filesystems failed to unmount.
2    No arguments were passed.
cl_deactivate_nfs
Syntax
Description
Attempts a forced unmount (umount -f) of any filesystem passed as an argument that is currently mounted.
Parameters
Return Values
0    Successfully unmounted a specified filesystem.
1    One or more filesystems failed to unmount.
2    No arguments were passed.
cl_deactivate_vgs
Syntax
Description
Initiates a varyoffvg of any volume group that is currently varied on and that was passed as an argument.
Parameters
Return Values
0    All of the volume groups are successfully varied off.
1    The varyoffvg of at least one volume group failed.
2    No arguments were passed.
cl_export_fs
Syntax
Description
NFS-exports the filesystems given as arguments so that NFS clients can continue to work.
Parameters
hostname    Hostname of host given root access.
filesystem_to_export    List of one or more filesystems to NFS-export.
Return Values
0    Successfully exported all specified filesystems.
1    A runtime error occurred: an export failed or a startsrc call failed.
2    No arguments were passed.
cl_nfskill
Syntax
Description
Lists the process numbers of local processes using the specified NFS directory, then finds and kills processes that are executables fetched from the NFS-mounted filesystem. Only the root user can kill a process of another user.
If you specify the -t flag, all processes that have certain NFS module names within their stack will be killed.
Warning: When using the -t flag it is not possible to tell which NFS filesystem the process is related to. This could result in killing processes that belong to NFS-mounted filesystems other than those that are cross-mounted from another HACMP node and under HACMP control. This could also mean that the processes found could be related to filesystems under HACMP control but not part of the current resources being taken. This flag should therefore be used with caution and only if you know you have a specific problem with unmounting the NFS filesystems.
To help to control this, the cl_deactivate_nfs script contains the normal calls to cl_nfskill with the -k and -u flags and commented calls using the -t flag as well. If you use the -t flag, you should uncomment those calls and comment the original calls.
Parameters
Return Values
None.
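A hedged sketch based on the calls described above for cl_deactivate_nfs (the -k and -u flag usage is an assumption drawn from that description; /mnt/nfsfs is a hypothetical mount point):

/usr/es/sbin/cluster/events/utils/cl_nfskill -k -u /mnt/nfsfs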
Logging Utilities
cl_log
Syntax
Description
Logs messages to syslog and standard error.
Parameters
message_id         Message ID for the messages to be logged.
default_message    Default message to be logged.
variables          List of one or more variables to be logged.
Return Values
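An illustrative call following the parameter order above (the message ID and message text are hypothetical):

/usr/es/sbin/cluster/events/utils/cl_log 6123 "Filesystem %s failed to mount" /fs1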
cl_echo
Syntax
Description
Logs messages to standard error.
Parameters
message_id         Message ID for the messages to be displayed.
default_message    Default message to be displayed.
variables          List of one or more variables to be displayed.
Return Values
Network Utilities
cl_swap_HW_address
Syntax
Description
Checks to see if an alternate hardware address is specified for the address passed as the first argument. If so, it assigns the hardware address specified to the network interface.
Parameters
Return Values
0    Successfully assigned the specified hardware address to a network interface.
1    Could not assign the specified hardware address to a network interface.
2    The wrong number of arguments was passed.
cl_swap_IP_address
Syntax
cl_swap_IP_address cascading/rotating acquire/release interface new_address old_address netmask
cl_swap_IP_address swap_adapter swap interface1 address1 interface2 address2 netmask
Description
This routine is used during adapter_swap and IP address takeover.
In the first form, the routine sets the specified interface to the specified address:
In the second form, the routine sets two interfaces in a single call. An example where this is required is the case of swapping two interfaces.
Parameters
Return Values
This utility is used for swapping the IP address of either a standby network interface with a local service network interface (called adapter swapping), or a standby network interface with a remote service network interface (called masquerading). For masquerading, the cl_swap_IP_address routine should sometimes be called before processes are stopped, and sometimes after processes are stopped; this is application dependent. Some applications respond better if they shut down before the network connection is broken, and some respond better if the network connection is closed first.
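A sketch of the first syntax form shown above (the interface name, addresses, and netmask are illustrative):

# Acquire the takeover address on en0 during IP address takeover
/usr/es/sbin/cluster/events/utils/cl_swap_IP_address rotating acquire en0 100.50.200.2 100.50.200.1 255.255.255.0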
cl_unswap_HW_address
Syntax
Description
Script used during adapter_swap and IP address takeover. It restores a network interface to its boot address.
Parameters
Return Values
Resource Group Move Utilities
clRGmove
The clRGmove utility is stored in the /usr/es/sbin/cluster/utilities directory.
Syntax
The general syntax for migrating, starting, or stopping a resource group dynamically from the command line is this:
Description
This utility communicates with the Cluster Manager to queue an rg_move event to bring a specified resource group offline or online, or to move a resource group to a different node. This utility provides the command line interface to the Resource Group Migration functionality, which can be accessed through the SMIT System Management (C-SPOC) panel. To move specific resource groups to the specified location or state, use the System Management (C-SPOC) > HACMP Resource Group and Application Management > Move a Resource Group to another Node/Site SMIT menu, or the clRGmove command. See the man page for the clRGmove command.
You can also use this command from the command line, or include it in the pre- and post-event scripts.
Parameters
Repeat this syntax on the command line for each resource group you want to migrate.
Return Values
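A hedged sketch of a resource group move from the command line, assuming -g names the resource group, -n the target node, and -m requests a move (these flag meanings are assumptions; consult the clRGmove man page for the authoritative syntax):

/usr/es/sbin/cluster/utilities/clRGmove -g rg1 -n nodeB -m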
Emulation Utilities
Emulation utilities are found in the /usr/es/sbin/cluster/events/emulate/driver directory.
cl_emulate
Syntax
cl_emulate -e node_up -n nodename
cl_emulate -e node_down -n nodename {f|g|t}
cl_emulate -e network_up -w networkname -n nodename
cl_emulate -e network_down -w networkname -n nodename
cl_emulate -e join_standby -n nodename -a ip_label
cl_emulate -e fail_standby -n nodename -a ip_label
cl_emulate -e swap_adapter -n nodename -w networkname -a ip_label -d ip_label
Description
Emulates a specific cluster event and outputs the result of the emulation. The output is shown on the screen as the emulation runs, and is saved to an output file on the node from which the emulation was executed.
The Event Emulation utility does not run customized scripts such as pre- and post-event scripts. In the output file the script is echoed and its syntax is checked, so you can predict possible errors in the script. However, if customized scripts exist, the outcome of running the actual event may differ from the outcome of the emulation.
When emulating an event that contains a customized script, the Event Emulator uses the ksh flags -n and -v. The -n flag reads commands and checks them for syntax errors, but does not execute them. The -v flag indicates verbose mode. When writing customized scripts that may be accessed during an emulation, be aware that the other ksh flags may not be compatible with the -n flag and may cause unpredictable results during the emulation. See the ksh man page for flag descriptions.
You can run only one instance of an event emulation at a time. If you attempt to start an emulation while an emulation is already running on a cluster, the integrity of the output cannot be guaranteed.
Parameters
Return Values
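For example, to emulate a node_up event for a node named nodeA (using the first syntax form above; the node name is hypothetical):

/usr/es/sbin/cluster/events/emulate/driver/cl_emulate -e node_up -n nodeA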
Security Utilities
HACMP security utilities include Kerberos setup utilities.
Kerberos Setup Utilities
To simplify and automate the process of configuring Kerberos security on the SP, two scripts for setting up Kerberos service principals are provided with HACMP:
cl_setup_kerberos—extracts the HACMP network interface labels from an already configured node and creates a file, cl_krb_service, that contains all of the HACMP network interface labels and additional format information required by the add_principal Kerberos setup utility. Also creates the cl_adapters file that contains a list of the network interfaces required to extract the service principals from the authentication database.
cl_ext_krb—prompts the user to enter the Kerberos password to be used for the new principals, and uses this password to update the cl_krb_service file. Checks for a valid .k file and alerts the user if one does not exist. Once a valid .k file is found, the cl_ext_krb script runs the add_principal utility to add all the network interface labels from the cl_krb_service file into the authentication database; extracts the service principals and places them in a new Kerberos services file, cl_krb-srvtab; creates the cl_klogin file that contains additional entries required by the .klogin file; updates the .klogin file on the control workstation and all nodes in the cluster; and concatenates the cl_krb-srvtab file to each node's /etc/krb-srvtab file.
Start and Stop Tape Resource Utilities
The following sample scripts are supplied with the software.
tape_resource_start_example
Syntax
Description
Rewinds a highly available tape resource.
Parameters
Return Values
tape_resource_stop_example
Syntax
Description
Rewinds the highly available tape resource.
Parameters
None.
Return Values
Cluster Resource Group Information Commands
The following commands are available for use by scripts or for execution from the command line:
clRMupdate
clRGinfo
HACMP event scripts use the clRMupdate command to notify the Cluster Manager that it should process an event. It is not documented for end users; it should be used only in consultation with IBM support personnel.
Users or scripts can execute the clRGinfo command to get information about resource group status and location.
clRGinfo
Syntax
Description
Use the clRGinfo command to display the location and state of all resource groups.
If clRGinfo cannot communicate with the Cluster Manager on the local node, it attempts to find a cluster node with the Cluster Manager running, from which resource group information may be retrieved. If clRGinfo fails to find at least one node with the Cluster Manager running, HACMP displays an error message.
Parameters
Return Values
Examples of clRGinfo Output
clRGinfo
$ /usr/es/sbin/cluster/utilities/clRGinfo
------------------------------------------------------------------------
Group Name     Group State     Node        Node State
------------------------------------------------------------------------
Group1         ONLINE          merry       ONLINE
                               samwise     OFFLINE
Group2         ONLINE          merry       ONLINE

If you run the clRGinfo command with sites configured, that information is displayed as in the following example:
$ /usr/es/sbin/cluster/utilities/clRGinfo
------------------------------------------------------------------------
Group Name     Group State     Node          Node State
------------------------------------------------------------------------
Colors         ONLINE          white@Site1   ONLINE
                               amber@Site1   OFFLINE
                               yellow@Site1  OFFLINE
                               navy@Site2    ONLINE_SECONDARY
                               ecru@Site2    OFFLINE
                               samwise       ONLINE

clRGinfo -c -p
If you run the clRGinfo -c -p command, it lists the output in colon-separated format, with parameters indicating the status and location of the resource groups.
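Because the output is colon-separated it is easy to consume from scripts; a sketch that extracts the first two fields, assuming they hold the group name and state as in the -s format shown later:

/usr/es/sbin/cluster/utilities/clRGinfo -c -p | awk -F: '{print $1, $2}'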
Possible States of a Resource Group
ONLINE
OFFLINE
OFFLINE Unmet dependencies
OFFLINE User requested
UNKNOWN
ACQUIRING
RELEASING
ERROR
TEMPORARY ERROR
ONLINE SECONDARY
ONLINE PEER
ACQUIRING SECONDARY
RELEASING SECONDARY
ACQUIRING PEER
RELEASING PEER
ERROR SECONDARY
TEMPORARY ERROR SECONDARY

clRGinfo -a
The clRGinfo -a command lets you know the pre-event location and the post-event location of a particular resource group. Because HACMP performs these calculations at event startup, this information will be available in pre-event scripts (such as a pre-event script to node_up), on all nodes in the cluster, regardless of whether the node where it is run takes any action on a particular resource group.
Note: clRGinfo -a provides meaningful output only if you run it while a cluster event is being processed.
In this example, resource group A is moving from the offline state to the online state on node B. The pre-event location is left blank; the post-event location is Node B:

:rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
--------------------------------------------------------
Group Name     Resource Group Movement
--------------------------------------------------------
rgA            PRIMARY=":nodeB"

In this example, resource group B is moving from Node B to the offline state. The pre-event location is node B; the post-event location is left blank:

:rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
--------------------------------------------------------
Group Name     Resource Group Movement
--------------------------------------------------------
rgB            PRIMARY="nodeB:"

In this example, resource group C is moving from Node A to Node B. The pre-event location is node A; the post-event location is node B:

:rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
--------------------------------------------------------
Group Name     Resource Group Movement
--------------------------------------------------------
rgC            PRIMARY="nodeA:nodeB"

In this example with sites, the primary instance of resource group C is moving from Node A to Node B, and the secondary instance stays on node C:

:rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
--------------------------------------------------------
Group Name     Resource Group Movement
--------------------------------------------------------
rgC            PRIMARY="nodeA:nodeB" SECONDARY="nodeC:nodeC"

With concurrent resource groups, the output indicates each node from which a resource group is moving online or offline. In the following example, both nodes release the resource group:

:rg_move[112] /usr/es/sbin/cluster/utilities/clRGinfo -a
--------------------------------------------------------
Group Name     Resource Group Movement
--------------------------------------------------------
rgA            "nodeA:"
rgA            "nodeB:"

clRGinfo -p
The clRGinfo -p command displays the node that temporarily has the highest priority for this instance, as well as the state of the primary and secondary instances of the resource group. The command shows information about resource groups whose locations were temporarily changed by the resource group migration utility.
$ /usr/es/sbin/cluster/utilities/clRGinfo -p
Cluster Name: TestCluster

Resource Group Name: Parent
Primary instance(s):
The following node temporarily has the highest priority for this instance:
user-requested rg_move performed on Wed Dec 31 19:00:00 1969

Node                           State
----------------------------   ---------------
node3@s2                       OFFLINE
node2@s1                       ONLINE
node1@s0                       OFFLINE

Resource Group Name: Child

Node                           State
----------------------------   ---------------
node3@s2                       ONLINE
node2@s1                       OFFLINE
node1@s0                       OFFLINE

clRGinfo -p -t
The clRGinfo -p -t command displays the node that temporarily has the highest priority for this instance and a resource group's active timers:
/usr/es/sbin/cluster/utilities/clRGinfo -p -t
Cluster Name: MyTestCluster

Resource Group Name: Parent
Primary instance(s):
The following node temporarily has the highest priority for this instance:
node4, user-requested rg_move performed on Fri Jan 27 15:01:18 2006

Node            Primary State    Secondary State      Delayed Timers
--------------- ---------------- -------------------- --------------
node1@siteA     OFFLINE          ONLINE SECONDARY
node2@siteA     OFFLINE          OFFLINE
node3@siteB     OFFLINE          OFFLINE
node4@siteB     ONLINE           OFFLINE

Resource Group Name: Child

Node            State            Delayed Timers
--------------- ---------------- --------------
node2           ONLINE
node1           OFFLINE
node4           OFFLINE
node3           OFFLINE

clRGinfo -s
$ /usr/es/sbin/cluster/utilities/clRGinfo -s
Group1:ONLINE:merry:OHN:FNPN:FBHPN:ignore: : :
Group1:OFFLINE:samwise:OHN:FNPN:FBHPN:ignore: : :
Group2:ONLINE:merry:OAAN:BO:NFB:ignore: : :
Group2:ONLINE:samwise:OAAN:BO:NFB:ignore: : :

The -s flag prints the output in the following order:
RGName:state:node:type:startup:fallover:fallback:site:POL:POL_SEC:fallbackTime:settlingTime

$ /usr/es/sbin/cluster/utilities/clRGinfo -s
Group1:ONLINE:white::ONLINE:OHN:FNPN:FBHPN:PPS: : : :ONLINE:Site1:
Group1:OFFLINE:amber::OFFLINE:OHN:FNPN:FBHPN:PPS: : : :ONLINE:Site1:
Group1:ONLINE:yellow::ONLINE:OAAN:BO:NFB:PPS: : : :ONLINE:Site1:
Group1:ONLINE:navy::ONLINE:OAAN:BO:NFB:PPS: : : :ONLINE:Site2:
Group1:ONLINE:ecru::ONLINE:OAAN:BO:NFB:PPS: : : :ONLINE:Site2:

where the resource group startup, fallover, and fallback preferences are abbreviated as follows:
Resource group startup policies:
OHN: Online On Home Node Only
OFAN: Online On First Available Node
OUDP: Online Using Distribution Policy
OAAN: Online On All Available Nodes

Resource group fallover policies:
FNPN: Fallover To Next Priority Node In The List
FUDNP: Fallover Using Dynamic Node Priority
BO: Bring Offline (On Error Node Only)

Resource group fallback policies:
FBHPN: Fallback To Higher Priority Node In The List
NFB: Never Fallback

Resource group intersite policies:
ignore: ignore
OES: Online On Either Site
OBS: Online Both Sites
PPS: Prefer Primary Site

If an attribute is not available for a resource group, the command displays a colon and a blank instead of the attribute.
clRGinfo -v
$ /usr/es/sbin/cluster/utilities/clRGinfo -v
Cluster Name: myCLuster

Resource Group Name: Group1
Startup Policy: Online On Home Node Only
Fallover Policy: Fallover To Next Priority Node In The List
Fallback Policy: Fallback To Higher Priority Node In The List
Site Policy: ignore

Node            State
--------------- ---------------
merry           ONLINE
samwise         OFFLINE

Resource Group Name: Group2
Startup Policy: Online On All Available Nodes
Fallover Policy: Bring Offline (On Error Node Only)
Fallback Policy: Never Fallback
Site Policy: ignore