Appendix C: HACMP for AIX Commands
This appendix provides a quick reference to commands commonly used to obtain information about the cluster environment or to execute a specific function. It includes syntax diagrams and examples for using each command.
Overview of Contents
As system administrator, you often must obtain information about your cluster to determine if it is operating correctly. The commands you need to obtain this information are listed in alphabetical order in this appendix.
Highlighting
The following highlighting conventions are used in this appendix:
Reading Syntax Diagrams
Usually, a command follows this syntax:
Note: Flags listed in syntax diagrams throughout this appendix are those recommended for use with the HACMP for AIX software. Flags used internally by SMIT are not listed.
Related Information
For complete information on a command’s capabilities and restrictions, see the online man page. Man pages for HACMP for AIX 5L commands and utilities are installed in the /usr/share/man/cat1 directory. Use the following syntax to read man page information:
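man command-name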
where command-name is the actual name of the HACMP command or script. For example, type man clpasswd to obtain information about the HACMP user password command.
Data Collection Utilities
Use the AIX 5L snap command to collect data from HACMP clusters.
The -e flag collects the HACMP data. Using /usr/sbin/snap -e lets you properly document a problem with one simple command. This command gathers all files necessary for determining most HACMP problems. It provides output in a format ready to send to IBM Support personnel.
When you run the command, the output is placed in a newly created /tmp/ibmsupt subdirectory. Make sure at least 100 MB is available in /tmp before running the command; the output could require more space, depending on the number of nodes and the files actually collected.
See the AIX 5L man page for complete information.
RSCT Command for Testing Disk Heartbeat
The following RSCT command is useful for testing the link status of the disk heartbeating path. The pathname for this command is /usr/sbin/rsct/bin/dhb_read.
Run the command on both nodes. On one node, run the command with the -r flag, which indicates receive mode; the utility waits for a signal from the remote node, but does not wait indefinitely. On the other node, run the utility with the -t flag. If the remote node is still in receive mode, and the disk is set up correctly, you should immediately see a message on both nodes that says: Link operating normally.
dhb_read -p devicename
dhb_read -p devicename -r
dhb_read -p devicename -t
Example Disk Heartbeating Test
To test the disk heartbeating link on nodes A and B, where hdisk1 is the heartbeat path:
On Node A, enter: dhb_read -p hdisk1 -r
On Node B, enter: dhb_read -p hdisk1 -t
If the link is active, you see this message on both nodes:
Link operating normally.
The return code is 0 for success and -1 otherwise.
HACMP for AIX Common Commands
The following commands can be run from the command line to obtain information about your HACMP for AIX cluster environment. Syntax and descriptions of these commands are included in this appendix.
HACMP for AIX C-SPOC Common Commands
The following C-SPOC commands function in cluster environments and can be run from the command line to manage the cluster. For complete syntax and descriptions of these commands, see HACMP for AIX C-SPOC Commands later in this appendix.
HACMP for AIX Commands
cl_convert [-F] -v <release> [-s <simulation file>] [-i]
Upgrading HACMP software to the newest version involves converting the Configuration Database from a previous release to that of the current release. When you install HACMP, cl_convert is run automatically. However, if installation fails, you must run cl_convert from the command line. Root user privilege is required to run cl_convert.
The command copies the previous version's ODM data to the new version's ODM structure. If fields were deleted in the new version, the data is saved to /tmp/cl_convert_HACMP_OLD. The command then ensures that the data is in the correct form for the new version.
When the new version is installed, the install script adds the suffix OLD to the HACMPxxx classes stored in the /etc/objrepos directory, and it creates the new HACMPxxx classes for the new version. The install script issues the cl_convert command which converts the data in HACMPxxxOLD to the corresponding new classes in HACMPxxx.
You can run the cl_convert command from the command line, but it expects the HACMPxxx and HACMPxxxOLD ODMs to already exist.
You may want to run the cl_convert command with the -F option. If the option is not specified, the cl_convert command checks the new ODM class HACMPcluster for configured data; if data is present, the command exits without performing the conversion. If the -F option is specified, the command continues without checking for existing data.
Note that cl_convert copies the HACMPxxx and HACMPxxxOLD ODMs to a temporary file (/tmp/tmpodmdir) for processing before writing the final data to the HACMPxxx ODMs. If cl_convert encounters any kind of error, the HACMPxxx ODMs are not overwritten. If no error occurs, the HACMPxxx ODMs are overwritten and the install script removes the HACMPxxxOLD ODMs.
Note that you must be in the conversion directory to run this command:
/usr/es/sbin/cluster/conversion
Also, cl_convert assumes that the correct value for $ODMDIR is set. The results of cl_convert can be found in /tmp/clconvert.log.
Example
If a cluster is already configured for HACMP 5.1, the HACMP 5.4 installation script calls cl_convert as follows:
cl_convert -F -v 5.1
clconvert_snapshot -v release -s <snapshot file>
You can run clconvert_snapshot to upgrade cluster snapshots from a previous version (starting from version 5.1) of HACMP to the most recent version of HACMP. The command by default assumes you are converting to the latest version of the software.
The command copies the previous version's ODM data from the snapshot_file to the format of the new version's ODM structure. If fields were deleted in the new version, the data is saved to /tmp/cl_convert_HACMP_OLD. The command then ensures that the data is in the correct form for the new version.
Once a snapshot file has been upgraded, it keeps its original name and cannot be reverted to the previous version. A copy of the previous version of the snapshot is saved under the original name with a .old extension.
You must be in the /usr/es/sbin/cluster/conversion directory on the same node that took the snapshot to run the clconvert_snapshot command.
Once the snapshot file has been upgraded and all of the nodes in the cluster have the current level installed, the upgraded snapshot can be applied and then the cluster can be brought up.
The clconvert_snapshot script creates an old version of the ODMs and populates those ODMs with the values from the user-supplied snapshot file. It then calls the same commands that cl_convert uses to convert those ODMs to the current version. A new snapshot is taken from the upgraded ODMs and copied to the user-supplied snapshot file.
The clconvert_snapshot command is not run automatically during installation; it must always be run from the command line.
Example
clconvert_snapshot -v 5.1 -s mysnapshot
Converts an HACMP 5.1 snapshot to an HACMP 5.4 snapshot named “mysnapshot.”
The file “mysnapshot” is in turn placed in the directory specified by the $SNAPSHOTPATH environment variable. If a $SNAPSHOTPATH variable is not specified, the file is put in /usr/es/sbin/cluster/snapshots.
clfindres [-s] [resgroup1] [resgroup2]...
Finds a given resource group or groups in a cluster configuration.
Note: When you run clfindres, it calls clRGinfo, and the command output for clfindres is the same as it is for the clRGinfo command. Therefore, use the clRGinfo command to find the status and the location of the resource groups. See the following section or the man page for the clRGinfo command for more information.
clpasswd [-g resource group] user
Changes the current user's password on all nodes in a cluster, or on the nodes in a resource group.
The Cluster Password (clpasswd) utility lets users change their own password on all nodes in a cluster, or in a resource group as specified by the HACMP administrator, from a single node. Before users can change their password across cluster nodes, the HACMP administrator must add any users who do not have root privileges to the list of users allowed to change their password.
For information about giving users permission to change their own password, see the section Allowing Users to Change Their Own Passwords in Chapter 16: Managing User and Groups.
This Cluster Password utility can also replace the AIX 5L password utility from the SMIT fastpath cl_passwd.
The following table shows where a user’s password is changed based on the user’s authorization and the password utility that is active:
Example
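For example, to change the password for user smith (an illustrative user name) on all cluster nodes, you might enter:
clpasswd smith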
clRGinfo [-a][-h] [-v][-s|-c] [-p] [-t] [-d][resgroup1] [resgroup2]...
See the section Using the clRGinfo Command in Chapter 10: Monitoring an HACMP Cluster for usage and examples.
clgetaddr [-o odmdir] nodename
Returns a PINGable address for the specified node name.
Example
To get a PINGable address for the node seaweed, enter:
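clgetaddr seaweed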
The following address is returned: 2361059035
cllscf
Lists complete cluster topology information. This command has been superseded; see cltopinfo.
cllsdisk {-g Resource Group}
Lists PVIDs of accessible disks in a specified resource chain.
-g Resource Group    Specifies name of resource group to list.
Example
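cllsdisk -g grp3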
Lists PVIDs of disks accessible in resource group grp3.
cllsfs {-g resource group} [-n]
Lists shared filesystems contained in a resource group.
-g resource group    Specifies name of resource group for which to list filesystems.
-n    Lists the nodes that share the filesystem in the resource group.
Note: Do not run the cllsfs command from the command line. Use the SMIT interface to retrieve filesystem information, as explained in Chapter 11: Managing Shared LVM Components.
cllslv [-g resource group] [-n] [-v]
Lists the names of logical volumes accessible by nodes in a specified resource chain.
Examples
Example 1
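cllslv -g grp2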
Lists all shared logical volumes contained in resource group grp2.
Example 2
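cllslv -g grp2 -n -v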
Displays the nodes and those logical volumes that belong to currently varied-on volume groups in resource group grp2.
cllsgrp
Lists names of all resource groups configured in the cluster.
cllsnim [-d odmdir] [-c] [-n nimname]
Lists contents of HACMPnim Configuration Database class.
-d odmdir    Specifies an alternate ODM directory to /etc/objrepos.
-c    Specifies a colon output format.
-n nimname    Name of the network interface module for which to list information.
Examples
Example 1
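cllsnim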
Shows information for all configured network modules.
Example 2
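cllsnim -n ether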
Shows information for all configured Ethernet network modules.
cllsparam {-n nodename} [-c] [-s] [-d odmdir]
Lists runtime parameters.
Example
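cllsparam -n abalone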
Shows runtime parameters for node abalone.
cllsres [-g group] [-c] [-s] [-d odmdir] [-q query]
Sorts HACMP for AIX Configuration Database resource data by name and arguments.
Examples
Example 1
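cllsres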
Lists resource data for all resource groups.
Example 2
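cllsres -g grp1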
Lists resource data for resource group grp1.
Example 3
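cllsres -g grp1 -q"name = FILESYSTEM"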
Lists filesystem resource data for resource group grp1.
cllsserv [-c] [-h] [-n name] [-d odmdir]
Lists application servers by name.
-c    Specifies a colon output format.
-h    Specifies to print a header.
-n name    Specifies an application server for which to check information.
-d odmdir    Specifies an alternate ODM directory.
Examples
Example 1
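cllsserv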
Lists all application servers.
Example 2
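cllsserv -c -n test1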
Lists information in colon format for application server test1.
cllsvg {-g resource group} [-n] [-v] [-s | -c]
Lists volume groups shared by nodes in a cluster. A volume group is considered shared if it is accessible by all participating nodes in a configured resource group. Note that the volume groups listed may or may not be configured as a resource in any resource group. If neither -s nor -c is selected, then both shared and concurrent volume groups are listed.
Example
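cllsvg -g grp1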
Lists all shared volume groups in resource group grp1.
clshowres [-g group] [-n nodename] [-d odmdir]
Shows resource group information for a cluster or a node.
Examples
Example 1
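clshowres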
Lists all the resource group information for the cluster.
Example 2
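clshowres -n clam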
Lists the resource group information for node clam.
clstat [-c cluster ID | -n cluster name] [-i] [-r seconds] [-a] [-o] [-s]
Cluster Status Monitor (ASCII mode).
clstat [-a] [-c id | -n name] [-r tenths-of-seconds] [-s]
Cluster Status Monitor (X Windows mode).
Examples
Example 1
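clstat -n mycluster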
Displays the cluster information about the cluster named mycluster.
Example 2
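clstat -i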
Runs ASCII clstat in interactive mode, allowing multi-cluster monitoring.
Buttons on X Window System Display
cltopinfo [-c] [-i] [-n] [-w]
Shows complete topology information: the cluster name and the total number of networks and nodes configured in the cluster. Displays all the configured networks for each node and all the configured interfaces for each network. Also displays all the resource groups defined.
Examples
Example 1
To show all the nodes and networks defined in the cluster (nodes abby and polly):
# cltopinfo
Cluster Description of Cluster c10
Cluster Security Level Standard
There are 2 node(s) and 4 network(s) defined
NODE polly:
    Network net_ether_01
        polly_en1stby 192.168.121.7
        polly_en0boot 192.168.120.7
    Network net_ether_02
        Network net_ether_02 will collocate service label(s) with the persistent label (if any).
    Network net_rs232_01
    Network net_rs232_02
        polly_tty0_01 /dev/tty0
NODE abby:
    Network net_ether_01
        abby_en0boot 192.168.120.9
        abby_en1stby 192.168.121.9
        abby_en2boot 192.168.122.9
    Network net_ether_02
    Network net_rs232_01
    Network net_rs232_02
        abby_tty0_01 /dev/tty0
No resource groups defined
Example 2
To show the cluster name and current security mode:
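cltopinfo -c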
Example 3
To show all the interfaces defined in the cluster (nodes are nip and tuck):
# cltopinfo -i
If Name       Network       Type   Node  Address        If    Netmask
============  ============  =====  ====  =============  ====  =============
nip_en1stby   net_ether_01  ether  nip   192.168.121.7  en2   255.255.255.0
nip_en0boot   net_ether_01  ether  nip   192.168.120.7  en1   255.255.255.0
nip_tty0_01   net_rs232_02  rs232  nip   /dev/tty0      tty0
tuck_en0boot  net_ether_01  ether  tuck  192.168.120.9  en1   255.255.255.0
tuck_en1stby  net_ether_01  ether  tuck  192.168.121.9  en2   255.255.255.0
tuck_en2boot  net_ether_01  ether  tuck  192.168.122.9  en3   255.255.255.0
tuck_tty0_01  net_rs232_02  rs232  tuck  /dev/tty0      tty0
get_local_nodename
Returns the name of the local node.
clgetactivenodes [-n nodename] [-o odmdir] [-t timeout] [-v verbose]
Retrieves the names of all cluster nodes.
Example
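clgetactivenodes -n java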
Verifies that node java is active.
clresactive {-v volumegroup | -l logicalvolume | -f filesystem | -u user | -g group | -V HACMP version | -c [:cmd]}
Retrieves the names of all active cluster resources.
Example
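For example, to check whether the shared volume group sharedvg (an illustrative name) is among the active cluster resources, you might enter:
clresactive -v sharedvg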
HACMP for AIX C-SPOC Commands
The following C-SPOC commands can be executed from the command line and through SMIT. Error messages and warnings returned by the commands are based on the underlying AIX-related commands.
Note: While the AIX 5L commands underlying the C-SPOC commands allow you to specify flags in any order (even flags that require arguments), the C-SPOC commands require that arguments to a flag immediately follow the flag. See the cl_lsuser command for an example.
cl_clstop [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] -f
cl_clstop [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] -g [-s] [-y] [-N | -R | -B]
cl_clstop [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] -gr [-s] [-y] [-N | -R | -B]
Stops cluster daemons using the System Resource Controller (SRC) facility.
Note: Arguments associated with a particular flag must be specified immediately following the flag.
Examples
Example 1
To shut down the cluster services with resource groups brought offline on node1 (releasing the resources) with no warning broadcast to users before the cluster processes are stopped and resources are released, enter:
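cl_clstop -cspoc "-n node1" -gr -s -y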
Example 2
To shut down the cluster services and place resource groups in an UNMANAGED state on all cluster nodes (resources not released) with warning broadcast to users before the cluster processes are stopped, enter:
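cl_clstop -f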
Example 3
To shut down the cluster services with resource groups brought offline on all cluster nodes, broadcasting a warning to users before the cluster processes are stopped, enter:
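cl_clstop -gr -y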
cl_lsfs [-cspoc "[-f] [-g ResourceGroup | -n Nodelist]"] [-q] [-c | -l] [FileSystem]...
Displays the characteristics of shared filesystems.
Note: Arguments associated with a particular flag must be specified immediately following the flag.
Examples
Example 1
To display characteristics about all shared filesystems in the cluster, enter:
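cl_lsfs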
Example 2
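cl_lsfs -cspoc "-g resource_grp1"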
Display characteristics about the filesystems shared amongst the participating nodes in resource_grp1.
cl_lsgroup [-cspoc "[-f] [-g ResourceGroup | -n Nodelist]"] [-c | -f] [-a | -a List] {ALL | Group [,Group]...}
Displays attributes of groups that exist on an HACMP cluster.
Note: Arguments associated with a particular flag must be specified immediately following the flag.
Examples
Example 1
To display the attributes of the finance group from all cluster nodes enter:
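cl_lsgroup finance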
Example 2
To display in stanza format the ID, members (users), and administrators (adms) of the finance group from all cluster nodes, enter:
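cl_lsgroup -f -a id users adms finance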
Example 3
To display the attributes of all the groups from all the cluster nodes in colon-separated format, enter:
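cl_lsgroup -c ALL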
All the attribute information appears, with each attribute separated by a blank space.
cl_lslv [-cspoc "[-f] [-g ResourceGroup | -n Nodelist]"] [-l | -m] LogicalVolume
Displays shared logical volume attributes.
Note: Arguments associated with a particular flag must be specified immediately following the flag.
Examples
Example 1
To display information about the shared logical volume lv03, enter:
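cl_lslv lv03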
Information about logical volume lv03, its logical and physical partitions, and the volume group to which it belongs is displayed.
Example 2
To display information about a specific logical volume, using the identifier, enter:
All available characteristics and status of this logical volume are displayed.
cl_lsuser [-cspoc "[-f] [-g ResourceGroup | -n Nodelist]"] [-c | -f] [-a List] {ALL | Name [,Name]...}
Displays user account attributes for users that exist on an HACMP cluster.
Note: Arguments associated with a particular flag must be specified immediately following the flag.
Examples
Example 1
To display in stanza format the user ID and group-related information about the smith account from all cluster nodes, enter:
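cl_lsuser -f -a id pgrp groups smith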
Example 2
To display all attributes of user smith in the default format from all cluster nodes, enter:
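cl_lsuser smith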
All attribute information appears, with each attribute separated by a blank space.
Example 3
To display all attributes of all the users on the cluster, enter:
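cl_lsuser ALL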
All attribute information appears, with each attribute separated by a blank space.
cl_lsvg [-cspoc "[-f] [-g ResourceGroup | -n Nodelist]"] [-o] | [-l | -M | -p] VolumeGroup...
Displays information about shared volume groups.
Note: Arguments associated with a particular flag must be specified immediately following the flag.
Examples
Example 1
To display the names of all shared volume groups in the cluster, enter:
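cl_lsvg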
Example 2
To display the names of all active shared volume groups in the cluster, enter:
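cl_lsvg -o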
The cl_lsvg command lists only the node on which the volume group is varied on.
Example 3
To display information about the shared volume group vg02, enter:
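cl_lsvg vg02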
The cl_lsvg command displays the same data as the lsvg command, prefixing each line of output with the name of the node on which the volume group is varied on.
cl_nodecmd [-q] [-cspoc “[-f] [-n nodelist | -g resource group]” ] command args
Runs a given command in parallel on a given set of nodes.
Examples
Example 1
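cl_nodecmd lspv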
Runs the lspv command on all cluster nodes.
Example 2
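cl_nodecmd -q -cspoc "-n beaver,dam" lsvg rootvg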
Runs the lsvg rootvg command on nodes beaver and dam, suppressing standard output.
cl_rc.cluster [-cspoc "[-f] [-g ResourceGroup | -n NodeList]"] [-boot] [-b] [-i] [-N | -R | -B]
Sets up the operating system environment and starts the cluster daemons across cluster nodes.
Note: Arguments associated with a particular flag must be specified immediately following the flag.
Examples
Example 1
To start the cluster with clinfo running on all the cluster nodes, enter:
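cl_rc.cluster -boot -i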