
Appendix C: HACMP for AIX Commands


This appendix provides a quick reference to commands commonly used to obtain information about the cluster environment or to execute a specific function. It lists syntax diagrams and provides examples for using each command.

Overview of Contents

As system administrator, you often must obtain information about your cluster to determine if it is operating correctly. The commands you need to obtain this information are listed in alphabetical order in this appendix.

Highlighting

The following highlighting conventions are used in this appendix:

Bold
Identifies command words, keywords, files, directories, and other items whose actual names are predefined by the system.
Italics
Identifies parameters whose actual names or values are supplied by the user.
Monospace
Identifies examples of specific data values, examples of text similar to what you might see displayed, examples of program code similar to what you might write as a programmer, messages from the system, or information you should actually type.

Reading Syntax Diagrams

Usually, a command follows this syntax:

[]
Material within brackets is optional.
{}
Material within braces is required.
|
Indicates an alternative. Only one of the options can be chosen.
...
Indicates that one or more of the kinds of parameters or objects preceding the ellipsis can be entered.

Note: Flags listed in syntax diagrams throughout this appendix are those recommended for use with the HACMP for AIX software. Flags used internally by SMIT are not listed.

Related Information

For complete information on a command’s capabilities and restrictions, see the online man page. Man pages for HACMP for AIX 5L commands and utilities are installed in the /usr/share/man/cat1 directory. Use the following syntax to read man page information:

man command-name 

where command-name is the actual name of the HACMP command or script. For example, type man clpasswd to obtain information about the HACMP user password command.

Data Collection Utilities

Use the AIX 5L snap command to collect data from HACMP clusters.

The -e flag collects the HACMP data. Using /usr/sbin/snap -e lets you properly document a problem with one simple command. This command gathers all files necessary for determining most HACMP problems and provides output in a format ready to send to IBM Support personnel.

When you run the command, the output is placed in a newly created subdirectory, /tmp/ibmsupt. Make sure at least 100 MB is available in /tmp before running the command; the output can require more space depending on the number of nodes and the actual files collected.
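
For example, a minimal collection session might look like this (a sketch; df -m reports free space in megabytes on AIX 5L):

df -m /tmp           # confirm at least 100 MB free in /tmp
/usr/sbin/snap -e    # output is written to /tmp/ibmsupt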

See the AIX 5L man page for complete information.

RSCT Command for Testing Disk Heartbeat

The following RSCT command is useful for testing the link status of the disk heartbeating path. The pathname for this command is /usr/sbin/rsct/bin/dhb_read.

Note: The disk heartbeating network cannot be tested with this utility while the network is active.

Run the command on both nodes. On one node, run the command with the -r flag. This indicates it is in receive mode and will wait for a sign from the remote node. It will not wait indefinitely in this mode. On the other node, the utility must be run with the -t flag. If the remote node is still in receive mode, and the disk is set up correctly, you should immediately see a message on both nodes that says: Link operating normally.

dhb_read -p devicename     (dumps the disk heartbeat sector contents)

dhb_read -p devicename -r  (receives data over the disk heartbeat network)

dhb_read -p devicename -t  (transmits data over the disk heartbeat network)

Example Disk Heartbeating Test

To test the disk heartbeating link on nodes A and B, where hdisk1 is the heartbeat path:

On Node A, enter: dhb_read -p hdisk1 -r

On Node B, enter: dhb_read -p hdisk1 -t

If the link is active, you see this message on both nodes:

Link operating normally.

The return code is 0 for success and -1 otherwise.
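
Because the return code is reported, the test can be scripted. A minimal sketch for the transmitting node, assuming hdisk1 is the heartbeat device:

if /usr/sbin/rsct/bin/dhb_read -p hdisk1 -t; then
    echo "Disk heartbeat link is operating normally."
else
    echo "Disk heartbeat link test failed." >&2
fi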

HACMP for AIX Common Commands

The following commands can be run from the command line to obtain information about your HACMP for AIX cluster environment. Syntax and descriptions of these commands are included in this appendix.

cl_convert
Converts Configuration Database of previous HACMP release to Configuration Database of current release. Run from the command line only if installation fails. Otherwise, it runs automatically with installation.
clconvert_snapshot
Upgrades cluster snapshots.
clRGinfo
Displays the status and location of a given resource group in the cluster configuration.
clgetaddr
Returns IP address for the specified node name.
clpasswd
Changes a user’s password on each node in the cluster.
cllscf
Lists cluster topology information.
cllsdisk
Lists PVIDs of accessible disks in a specified resource chain.
cllsfs
Lists filesystems accessible in a specified resource chain.
cllslv
Lists the names of logical volumes accessible by nodes in a specified resource chain.
cllsgrp
Lists all resource groups.
cllsnim
Lists contents of the HACMPnim (network interface module) Configuration Database class.
cllsparam
Lists runtime parameters.
cllsres
Lists Configuration Database resource data by name and arguments.
cllsserv
Lists application servers by name.
cllsvg
Lists volume groups accessible in a specified resource chain.
clshowres
Shows node environment resources.
clstat
Monitors status of cluster.
cltopinfo
Lists all topology information: cluster, nodes, networks, interfaces.
get_local_nodename
Retrieves the name of the local node.
clgetactivenodes
Retrieves the names of all active cluster nodes.
clresactive
Retrieves the names of all active resources.

HACMP for AIX C-SPOC Common Commands

The following C-SPOC commands function in cluster environments and can be run from the command line to manage the cluster. For complete syntax and descriptions of these commands, see HACMP for AIX C-SPOC Commands later in this appendix.

cl_clstop
Stops cluster services on nodes running C-SPOC.
cl_lsfs
Displays shared filesystem attributes for all cluster nodes.
cl_lsgroup
Displays group attributes for all cluster nodes.
cl_lslv
Displays shared logical volume attributes for cluster nodes.
cl_lsuser
Displays user account attributes for all nodes.
cl_lsvg
Displays shared volume group attributes for cluster nodes.
cl_nodecmd
Runs a given command in parallel on a given set of nodes.
cl_rc.cluster
Sets the operating environment and starts cluster daemons.

HACMP for AIX Commands

cl_convert [-F] -v <release> [-s <simulation file>] [-i]

Upgrading HACMP software to the newest version involves converting the Configuration Database from a previous release to that of the current release. When you install HACMP, cl_convert is run automatically. However, if installation fails, you must run cl_convert from the command line. Root user privilege is required to run cl_convert.

The command copies the previous version's ODM data to the new version's ODM structure. If fields were deleted in the new version, the data is saved to /tmp/cl_convert_HACMP_OLD. The command then ensures that the data is in the correct form for the new version.

When the new version is installed, the install script adds the suffix OLD to the HACMPxxx classes stored in the /etc/objrepos directory, and it creates the new HACMPxxx classes for the new version. The install script issues the cl_convert command which converts the data in HACMPxxxOLD to the corresponding new classes in HACMPxxx.

You may run the cl_convert command from the command line, but it expects the HACMPxxx and HACMPxxxOLD ODMs to already exist.

You may want to run the cl_convert command with the -F option. If the option is not specified, the cl_convert command checks for configured data in the new ODM class HACMPcluster; if data is present, the command exits without performing the conversion. If the -F option is specified, the command continues without checking for existing data.

Note that cl_convert copies the HACMPxxx and HACMPxxxOLD ODMs to a temporary directory (/tmp/tmpodmdir) for processing before writing the final data to the HACMPxxx ODMs. If cl_convert encounters any kind of error, the HACMPxxx ODMs are not overwritten. If no error occurs, the HACMPxxx ODMs are overwritten and the install script removes the HACMPxxxOLD ODMs.

Note that you must be in the conversion directory to run this command:

/usr/es/sbin/cluster/conversion

Also, cl_convert assumes that the correct value for $ODMDIR is set. The results of cl_convert can be found in /tmp/clconvert.log.

-F
Force flag. Causes cl_convert to overwrite existing ODM object classes, regardless of the number of existing entries. Omitting this flag causes cl_convert to check for data in HACMPcluster (which will always contain data from the previous configuration) and exit if data is encountered.

-v
Release version flag. Indicates the release number of the old version.
WARNING: Do not use the cl_convert command unless you know the version from which you are converting.
-s <simulation_file>
Simulation flag. Indicates that instead of writing the resulting ODM data back to the new HACMPxxx ODMs, the command writes it to the specified file in text format.
-i
Ignore copy flag. Specifies not to copy the HACMPxxxOLD data to the new HACMPxxx ODMs, but to operate directly on the new HACMPxxx ODMs. This flag is used primarily by clconvert_snapshot.

Note: The AIX 5L environment variable ODMDIR must be set to the directory you wish to convert.

Example

If a cluster is already configured for HACMP 5.1, during the installation of HACMP 5.4 the install script calls cl_convert as follows:

cl_convert -F -v 5.1
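
If you must run the conversion manually after a failed installation, the full sequence might look like the following sketch, based on the notes above (adjust the release number to the version you are converting from):

export ODMDIR=/etc/objrepos
cd /usr/es/sbin/cluster/conversion
./cl_convert -F -v 5.1
cat /tmp/clconvert.log    # review the conversion results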

clconvert_snapshot -v <release> -s <snapshot file>

You can run clconvert_snapshot to upgrade cluster snapshots from a previous version (starting from version 5.1) of HACMP to the most recent version of HACMP. The command by default assumes you are converting to the latest version of the software.

The command copies the previous version's ODM data from the snapshot_file to the format of the new version's ODM structure. If fields were deleted in the new version, the data is saved to /tmp/cl_convert_HACMP_OLD. The command then ensures that the data is in the correct form for the new version.

Once a snapshot file has been upgraded, it keeps its original name and cannot be reverted to the previous version. A copy of the old version of the snapshot is saved with the original name plus the .old extension.

You must be in the /usr/es/sbin/cluster/conversion directory on the same node that took the snapshot to run the clconvert_snapshot command.

Once the snapshot file has been upgraded and all of the nodes in the cluster have the current level installed, the upgraded snapshot can be applied and then the cluster can be brought up.

The script clconvert_snapshot creates an old version of the ODMs and populates those ODMs with the values from the user-supplied snapshot file. It then calls the same commands that cl_convert uses to convert those ODMs to the current version. A new snapshot is taken from the upgraded ODMs and copied to the user supplied snapshot file.

The clconvert_snapshot is not run automatically during installation, and must always be run from the command line.

-v
Release version flag. Specifies the release number from which the conversion is to be performed.
WARNING: Do not use the clconvert_snapshot command unless you know the version from which you are converting.
-s
Snapshot file flag. Specifies the snapshot file to convert. If you do not specify a path for the snapshot file, the command uses the path specified in the $SNAPSHOTPATH variable. The default is /usr/es/sbin/cluster/snapshots.

Example

clconvert_snapshot -v 5.1 -s mysnapshot

Converts an HACMP 5.1 snapshot to an HACMP 5.4 snapshot named “mysnapshot.”

The file “mysnapshot” is in turn placed in the directory specified by the $SNAPSHOTPATH environment variable. If a $SNAPSHOTPATH variable is not specified, the file is put in /usr/es/sbin/cluster/snapshots.
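
If the snapshot is kept outside the default directory, you can point $SNAPSHOTPATH at it before running the command; a minimal sketch (the directory shown is illustrative):

export SNAPSHOTPATH=/home/hacmp/snapshots
cd /usr/es/sbin/cluster/conversion
./clconvert_snapshot -v 5.1 -s mysnapshot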

clfindres [-s] [resgroup1] [resgroup2]...

Finds a given resource group or groups in a cluster configuration.

Note: When you run clfindres, it calls clRGinfo, and the command output for clfindres is the same as it is for the clRGinfo command. Therefore, use the clRGinfo command to find the status and the location of the resource groups. See the following section or the man page for the clRGinfo command for more information.
-s
Requests abbreviated (location only) output.


clpasswd [-g resource group] user

Changes the current user's password on all nodes in a cluster, or on all nodes in a resource group.

The Cluster Password (clpasswd) utility lets users change their own password, from a single node, on all nodes in a cluster or in a resource group specified by the HACMP administrator. Before users can change their password across cluster nodes, the HACMP administrator adds any users who do not have root privileges to the list of users allowed to change their password.

For information about giving users permission to change their own password, see the section Allowing Users to Change Their Own Passwords in Chapter 16: Managing Users and Groups.

The Cluster Password utility can also replace the AIX 5L password utility; it is accessible through the SMIT fastpath cl_passwd.

The following summarizes where a user's password is changed, based on the user's authorization and which password utility is active:

When the system password utility is linked to clpasswd and /bin/passwd is invoked:
- User authorized to change password across cluster: the password is changed on all cluster nodes.
- User not authorized: the password is changed only on the local node.

When the system password utility is active (not linked to clpasswd):
- User authorized to change password across cluster: the password is changed on all cluster nodes.
- User not authorized: the password is not changed.

-g
Specifies the name of the resource group in which the user can change their password. The password is changed on each node in the specified resource group.
user
The username of the user who is changing their password.

Example

clpasswd -g rg1 myusername 

clRGinfo [-a][-h] [-v][-s|-c] [-p] [-t] [-d][resgroup1] [resgroup2]...

See the section Using the clRGinfo Command in Chapter 10: Monitoring an HACMP Cluster for usage and examples.

clgetaddr [-o odmdir] nodename

Returns a PINGable address for the specified node name.

-o
Specifies an alternate ODM directory.

Example

To get a PINGable address for the node seaweed, enter:

clgetaddr seaweed 

The following address is returned: 2361059035

cllscf

Lists complete cluster topology information. See cltopinfo for the updated command.

cllsdisk {-g Resource Group}

Lists PVIDs of accessible disks in a specified resource chain.

-g resource group
Specifies name of resource group to list.

Example

cllsdisk -g grp3 

Lists PVIDs of disks accessible in resource group grp3.

cllsfs {-g resource group} [-n]

Lists shared filesystems contained in a resource group.

-g resource group
Specifies name of resource group for which to list filesystems.
-n
Lists the nodes that share the filesystem in the resource group.

Note: Do not run the cllsfs command from the command line. Use the SMIT interface to retrieve filesystem information, as explained in Chapter 11: Managing Shared LVM Components.

cllslv [-g resource group] [-n] [-v]

Lists the names of logical volumes accessible by nodes in a specified resource chain.

-g resource group
Specifies name of resource group for which to list logical volumes.
-n
Lists the nodes in the resource group.
-v
Lists only those logical volumes that belong to volume groups that are currently varied on.

Examples

Example 1

cllslv -g grp2 

Lists all shared logical volumes contained in resource group grp2.

Example 2

cllslv -g grp2 -n -v 

Displays the nodes and those logical volumes that belong to currently varied-on volume groups in resource group grp2.

cllsgrp

Lists names of all resource groups configured in the cluster.

cllsnim [-d odmdir] [-c] [-n nimname]

Lists contents of HACMPnim Configuration Database class.

-d odmdir
Specifies an alternate ODM directory (the default is /etc/objrepos).
-c
Specifies a colon output format.
-n nimname
Name of the network interface module for which to list information.

Examples

Example 1

cllsnim 

Shows information for all configured network modules.

Example 2

cllsnim -n ether 

Shows information for all configured Ethernet network modules.

cllsparam {-n nodename} [-c] [-s] [-d odmdir]

Lists runtime parameters.

-n nodename
Specifies a node for which to list the information.
-c
Specifies a colon output format.
-s
Used along with the -c flag, specifies native language instead of English.
-d odmdir
Specifies an alternate ODM directory.

Example

cllsparam -n abalone 

Shows runtime parameters for node abalone.

cllsres [-g group] [-c] [-s] [-d odmdir] [-q query]

Sorts HACMP for AIX Configuration Database resource data by name and arguments.

-g group
Specifies name of resource group to list.
-c
Specifies a colon output format.
-s
Used with the -c flag, specifies native language instead of English.
-d odmdir
Specifies an alternate ODM directory.
-q query
Specifies search criteria for ODM retrieve. See the odmget man page for information on search criteria.

Examples

Example 1

cllsres 

Lists resource data for all resource groups.

Example 2

cllsres -g grp1  

Lists resource data for resource group grp1.

Example 3

cllsres -g grp1 -q "name = FILESYSTEM" 

Lists filesystem resource data for resource group grp1.

cllsserv [-c] [-h] [-n name] [-d odmdir]

Lists application servers by name.

-c
Specifies a colon output format.
-h
Specifies to print a header.
-n name
Specifies an application server for which to check information.
-d odmdir
Specifies an alternate ODM directory.

Examples

Example 1

cllsserv 

Lists all application servers.

Example 2

cllsserv -c -n test1  

Lists information in colon format for application server test1.

cllsvg {-g resource group} [-n] [-v] [-s | -c]

Lists volume groups shared by nodes in a cluster. A volume group is considered shared if it is accessible by all participating nodes in a configured resource group. Note that the volume groups listed may or may not be configured as a resource in any resource group. If neither -s nor -c is selected, then both shared and concurrent volume groups are listed.

-g resource group
Specifies name of resource group for which to list volume groups that are shared amongst nodes participating in that resource group.
-n nodes
Specifies all nodes participating in each resource group.
-v
Lists only volume groups that are varied on, and match other command line criteria.
-s
Lists only shared volume groups that also match other criteria.
-c
Lists only concurrent volume groups that also match other criteria.

Example

cllsvg -g grp1 

Lists all shared volume groups in resource group grp1.

clshowres [-g group] [-n nodename] [-d odmdir]

Shows resource group information for a cluster or a node.

-g group
Name of resource group to show.
-n nodename
Searches the resources Configuration Database from the specified node.
-d odmdir
Specifies odmdir as the ODM object repository directory instead of the default /etc/objrepos.

Examples

Example 1

clshowres 

Lists all the resource group information for the cluster.

Example 2

clshowres -n clam 

Lists the resource group information for node clam.

clstat [-c cluster ID | -n cluster name] [-i] [-r seconds] [-a] [-o] [-s]

Cluster Status Monitor (ASCII mode).

-c cluster id
Displays cluster information only about the cluster with the specified ID. If the specified cluster is not available, clstat continues looking for the cluster until the cluster is found or the program is canceled. May not be specified if the -i option is used.
-i
Runs ASCII clstat in interactive mode. Initially displays a list of all clusters accessible to the system. The user must select the cluster for which to display the detailed information. A number of functions are available from the detailed display.
-n name
Displays cluster information about the cluster with the specified name. May not be specified if the -i option is used.
-r seconds
Updates the cluster status display at the specified interval, in seconds. The default is 1 second; however, the display is updated only if the cluster state changes.
-a
Causes clstat to display in ASCII mode.
-o
(once) Provides a single snapshot of the cluster state and exits. This flag can be used to run clstat out of a cron job (see the example following this list). Must be run with -a; ignores the -i and -r options.
-s
Displays service labels and their state (up or down).
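
For example, to log a single snapshot of the cluster state every 15 minutes from cron, an entry such as the following might be used (a sketch; the clstat path and log file location are illustrative and depend on your installation):

0,15,30,45 * * * * /usr/es/sbin/cluster/clstat -a -o >> /var/hacmp/log/clstat.out 2>&1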

clstat [-a] [-c id | -n name] [-r tenths-of-seconds] [-s]

Cluster Status Monitor (X Windows mode).

-a
Runs clstat in ASCII mode.
-c id
Displays cluster information only about the cluster with the specified ID. If the specified cluster is not available, clstat continues looking for the cluster until the cluster is found or the program is canceled. May not be specified if the -n option is used.
-n name
Displays cluster information only about the cluster with the specified name.
-r tenths-of-seconds
The interval at which the clstat utility updates the display. For the graphical interface, this value is interpreted in tenths of seconds. By default, clstat updates the display every 0.10 seconds.
-s
Displays service labels and their state (up or down).

Examples

Example 1

clstat -n mycluster 

Displays the cluster information about the cluster named mycluster.

Example 2

clstat -i 

Runs ASCII clstat in interactive mode, allowing multi-cluster monitoring.

Buttons on X Window System Display

Prev
Displays previous cluster.
Next
Displays next cluster.
Name:Id
Refresh bar; pressing the bar causes clstat to refresh the display immediately.
Quit
Exits application.
Help
Pop-up help window shows the clstat manual page.

cltopinfo [-c] [-i] [-n] [-w]

Shows complete topology information: the cluster name and the total number of networks and nodes configured in the cluster. Displays all the configured networks for each node and all the configured interfaces for each network. Also displays all the resource groups defined.

-c
Shows the cluster name and the security mode (Standard or Enhanced)
-i
Shows all interfaces configured in the cluster. The information includes the interface label, the network it's attached to (if appropriate), the IP address, netmask, nodename and the device name.
-n
Shows all the nodes configured in the cluster. For each node, lists all the networks defined. For each network, lists all the interfaces defined and the distribution preference for service IP label aliases (if defined).
-w
Shows all the networks configured in the cluster. For each network, lists all the nodes attached to that network. For each node, lists all the interfaces defined and the distribution preference for service IP label aliases (if defined).

Examples

Example 1

To show all the nodes and networks defined in the cluster (nodes abby and polly):

# cltopinfo 
        Cluster Description of Cluster c10 
        Cluster Security Level Standard 
        There are 2 node(s) and 4 network(s) defined 
        NODE polly: 
                Network net_ether_01 
                        polly_en1stby    192.168.121.7 
                        polly_en0boot    192.168.120.7 
                Network net_ether_02 
		Network net_ether_02 will collocate service label(s) with 
the persistent label (if any). 
                Network net_rs232_01 
                Network net_rs232_02 
			polly_tty0_01    /dev/tty0 
        NODE abby: 
                Network net_ether_01 
			abby_en0boot    192.168.120.9 
			abby_en1stby    192.168.121.9 
			abby_en2boot    192.168.122.9 
                Network net_ether_02 
                Network net_rs232_01 
                Network net_rs232_02 
			abby_tty0_01    /dev/tty0 
        No resource groups defined 

Example 2

To show the cluster name and current security mode:

    # cltopinfo -c 
        Cluster Description of Cluster c10 
        Cluster Security Level Standard. 

Example 3

To show all the interfaces defined in the cluster (nodes are nip and tuck):

# cltopinfo -i 
	If Name        Network        Type   Node  Address        If    Netmask
	=============  =============  =====  ====  =============  ====  =============
	nip_en1stby    net_ether_01   ether  nip   192.168.121.7  en2   255.255.255.0
	nip_en0boot    net_ether_01   ether  nip   192.168.120.7  en1   255.255.255.0
	nip_tty0_01    net_rs232_02   rs232  nip   /dev/tty0      tty0
	tuck_en0boot   net_ether_01   ether  tuck  192.168.120.9  en1   255.255.255.0
	tuck_en1stby   net_ether_01   ether  tuck  192.168.121.9  en2   255.255.255.0
	tuck_en2boot   net_ether_01   ether  tuck  192.168.122.9  en3   255.255.255.0
	tuck_tty0_01   net_rs232_02   rs232  tuck  /dev/tty0      tty0

get_local_nodename

Returns the name of the local node.

clgetactivenodes [-n nodename] [-o odmdir] [-t timeout]
[-v verbose]

Retrieves the names of all active cluster nodes.

-n nodename
Determines if the specified node is active.
-o odmdir
Specifies odmdir as the ODM object repository directory instead of the default /etc/objrepos.
-t timeout
Specifies a maximum time interval for receiving information about active nodes.
-v verbose
Specifies that information about active nodes be displayed as verbose output.

Example

clgetactivenodes -n java  

Verifies that node java is active.

clresactive {-v volumegroup | -l logicalvolume | -f filesystem
| -u user | -g group | -V HACMP version | -c [:cmd]}

Retrieves the names of all active cluster resources.

-v volumegroup
Specifies the status of a volume group.
-l logicalvolume
Specifies the status of a logical volume.
-f filesystem
Specifies the status of a filesystem.
-u user
Specifies a user account.
-g group
Specifies a user group.
-V HACMP version
Specifies the current HACMP for AIX version.
-c::cmd
Specifies several commands to be executed simultaneously.

Example

clresactive -g finance 

HACMP for AIX C-SPOC Commands

The following C-SPOC commands can be executed from the command line and through SMIT. Error messages and warnings returned by the commands are based on the underlying AIX-related commands.

Note: While the AIX 5L commands underlying the C-SPOC commands allow you to specify flags in any order (even flags that require arguments), the C-SPOC commands require that arguments to command flags immediately follow the flag. See the cl_lsuser command for an example.
cl_clstop [-cspoc “[-f] [-g ResourceGroup | -n NodeList] “] -f
cl_clstop [-cspoc “[-f] [-g ResourceGroup | -n NodeList] “] -g [-s] [-y] 
[-N | -R | -B]
cl_clstop [-cspoc “[-f] [-g ResourceGroup | -n NodeList] “] -gr [-s] 
[-y] [-N | -R |-B] 

Stops Cluster daemons using the System Resource Controller (SRC) facility.

Note: Arguments associated with a particular flag must be specified immediately following the flag.
-cspoc
Argument used to specify one of the following C-SPOC options:
-f – Forces cl_stop to skip default verification. If this flag is set and a cluster node is not accessible, cl_clstop reports a warning and continues execution on the other nodes.
-g ResourceGroup – Generates the list of nodes participating in the resource group where the command will be executed.
-n NodeList Shuts down cluster services on the nodes specified in the nodelist.
-f
Cluster services are stopped and resource groups are placed in an UNMANAGED state. Cluster daemons should terminate without running any local events. Resources are not released.
-g
Cluster services are stopped with resource groups brought offline. Resources are not released.
-gr
Cluster services are stopped with resource groups moved to another node, if configured. The daemons should terminate gracefully, and the node should release its resources, which will then be taken over. A nodelist must be specified for cluster services to be stopped with resource groups moved to another node.
-s
Silent shutdown, specifies not to broadcast a shutdown message through /bin/wall. The default is to broadcast.
-y
Do not ask operator for confirmation before shutting down.
-N
Shuts down now.
-R
Stops on subsequent system restart (removes the inittab entry).
-B
Shuts down now and on subsequent system restart.


Examples

Example 1

To shut down the cluster services on node1 with resource groups moved to another node (releasing the resources), with no warning broadcast to users before the cluster processes are stopped and resources are released, enter:

cl_clstop -cspoc "-n node1" -gr -s -y 

Example 2

To shut down the cluster services and place resource groups in an UNMANAGED state on all cluster nodes (resources not released) with warning broadcast to users before the cluster processes are stopped, enter:

cl_clstop -f -y 

Example 3

To shut down the cluster services with resource groups brought offline on all cluster nodes, broadcasting a warning to users before the cluster processes are stopped, enter:

cl_clstop -g -y 

cl_lsfs [-cspoc “[-f] [-g ResourceGroup | -n Nodelist]”] [-q]
[-c | -l] [FileSystem...]

Displays the characteristics of shared filesystems.

Note: Arguments associated with a particular flag must be specified immediately following the flag.
-cspoc
Argument used to specify one of the following C-SPOC options:
-f – This option has no effect when used with the cl_lsfs command.
-g ResourceGroup – Generates the list of nodes participating in the resource group where the command will be executed.
-n nodelist – Runs the command on this list of nodes. If more than one node, separate nodes listed by commas.
-c
Specifies a different search pattern to determine if the underlying AIX 5L lsfs command returned data or not.
-l
Specifies that the output should be in list format.
-q
Queries the logical volume manager (LVM) for the logical volume size (in 512-byte blocks) and queries the JFS superblock for the filesystem size, the fragment size, the compression algorithm (if any), and the number of bytes per i-node (nbpi). This information is displayed in addition to other filesystem characteristics reported by the lsfs command.


Examples

Example 1

To display characteristics about all shared filesystems in the cluster, enter:

cl_lsfs 

Example 2

To display characteristics of the filesystems shared among the participating nodes in resource_grp1, enter:

cl_lsfs -cspoc "-g resource_grp1" 

cl_lsgroup [-cspoc “[-f] -g ResourceGroup | -n Nodelist”] [-c|-f] [-a | -a List] {ALL | Group [,Group] ...}

Displays attributes of groups that exist on an HACMP cluster.

Note: Arguments associated with a particular flag must be specified immediately following the flag.
-cspoc
Argument used to specify the following C-SPOC option:
-f—This option has no effect when used with the cl_lsgroup command.
-g ResourceGroup – Generates the list of nodes participating in the resource group where the command will be executed.
-n nodelist—Runs the command on this list of nodes. If more than one node, separate nodes listed by commas.
-a List
Specifies the attributes to display. The List parameter can include any attribute defined in the chgroup command, and requires a blank space between attributes. If you specify an empty list using only the -a flag, only the group names are listed.
-c
Displays the attributes for each group in colon-separated records, as follows:
# name: attribute1: attribute2:...
Group: value1: value2: ...
-f
Displays the group attributes in stanzas. Each stanza is identified by a group name. Each Attribute=Value pair is listed on a separate line:
group:
attribute1=value
attribute2=value
attribute3=value
ALL | group [group]...
All resource groups, or particular group or groups to display.


Examples

Example 1

To display the attributes of the finance group from all cluster nodes enter:

cl_lsgroup finance 

Example 2

To display in stanza format the ID, members (users), and administrators (adms) of the finance group from all cluster nodes, enter:

cl_lsgroup -f -a id users adms finance 

Example 3

To display the attributes of all the groups from all the cluster nodes in colon-separated format, enter:

cl_lsgroup -c ALL 

All the attribute information appears, with each attribute separated by a blank space.

cl_lslv [-cspoc “[-f] [-g ResourceGroup | -n Nodelist]”]
[-l | -m] LogicalVolume

Displays shared logical volume attributes.

Note: Arguments associated with a particular flag must be specified immediately following the flag.
-cspoc
Argument used to specify one of the following C-SPOC options:
-f – This option has no effect when used with the cl_lslv command.
-g ResourceGroup Generates the list of nodes participating in the resource group where the command will be executed.
-n Nodelist – Runs the command on this list of nodes. If more than one node, separate nodes listed by commas.
-l Logical Volume
Lists information for each physical volume in the shared logical volume. Refer to the lslv command for information about the fields displayed.
-m Logical Volume
Lists information for each logical partition. Refer to the lslv command for information about the fields displayed. If no flags are specified, information about the shared logical volume and its underlying shared volume group is displayed. Refer to the lslv command for the information about the fields displayed.


Examples

Example 1

To display information about the shared logical volume lv03, enter:

cl_lslv -cspoc "-g resource_grp1" lv03 

Information about logical volume lv03, its logical and physical partitions, and the volume group to which it belongs is displayed.

Example 2

To display information about a specific logical volume, using the identifier, enter:

cl_lslv -g resource_grp1 00000256a81634bc.2 

All available characteristics and status of this logical volume are displayed.

cl_lsuser [-cspoc “[-f] [-g ResourceGroup | -n Nodelist]”]
[-c | -f] [-a List] {ALL | Name [,Name]...}

Displays user account attributes for users that exist on an HACMP cluster.

Note: Arguments associated with a particular flag must be specified immediately following the flag.
-cspoc
Argument used to specify the following C-SPOC option:
-f – This option has no effect when used with the cl_lsuser command.
-g ResourceGroup – Generates the list of nodes participating in the resource group where the command will be executed.
-n Nodelist – Runs the command on this list of nodes. If more than one node, separate nodes listed by commas.
-a List
Specifies the attributes to display. The List variable can include any attribute defined in the chuser command and requires a blank space between attributes. If you specify an empty list, only the user names are displayed.
-c
Displays the user attributes in colon-separated records, as follows:
# name: attribute1: attribute2: ...
User: value1: value2: ...
-f
Displays the output in stanzas, with each stanza identified by a user name. Each Attribute=Value pair is listed on a separate line:
user:
attribute1=value
attribute2=value
attribute3=value
ALL | Name [name]...
Display information for all users or specified user or users.


Examples

Example 1

To display in stanza format the user ID and group-related information about the smith account from all cluster nodes, enter:

cl_lsuser -fa id pgrp groups admgroups smith 

Example 2

To display all attributes of user smith in the default format from all cluster nodes, enter:

cl_lsuser smith 

All attribute information appears, with each attribute separated by a blank space.

Example 3

To display all attributes of all the users on the cluster, enter:

cl_lsuser ALL 

All attribute information appears, with each attribute separated by a blank space.

cl_lsvg [-cspoc “[-f] [-g ResourceGroup | -n Nodelist]”] [-o] | [-l | -M | -p] VolumeGroup...

Displays information about shared volume groups.

Note: Arguments associated with a particular flag must be specified immediately following the flag.
-cspoc
Argument used to specify one of the following C-SPOC options:
-f – This option has no effect when used with the cl_lsvg command.
-g ResourceGroup Specifies the name of the resource group whose participating nodes share the volume group. The command executes on these nodes.
-n Nodelist – Runs the command on this list of nodes. If more than one node, separate nodes listed by commas.
-p
Lists the following information for each physical volume within the group specified by the VolumeGroup parameter:
- Physical volume: A physical volume within the group.
- PVstate: State of the physical volume.
- Total PPs: Total number of physical partitions on the physical volume.
- Free PPs: Number of free physical partitions on the physical volume.
- Distribution: The number of physical partitions allocated within each section of the physical volume: outer edge, outer middle, center, inner middle, and inner edge of the physical volume.
-l
Lists the following information for each logical volume within the group specified by the VolumeGroup parameter:
- LV: A logical volume within the volume group.
- Type: Logical volume type.
- LPs: Number of logical partitions in the logical volume.
- PPs: Number of physical partitions used by the logical volume.
- PVs: Number of physical volumes used by the logical volume.
-M
Lists the following fields for each logical volume on the physical volume:
- PVname: PPnum [LVname: LPnum [:Copynum] [PPstate]]
- PVname: Name of the physical volume as specified by the system.
- PPnum: Physical partition number. Physical partition numbers can range from 1 to 1016.
-o
Lists only the active volume groups (those that are varied on). An active volume group is one that is available for use. Refer to the lsvg command for the information displayed if no flags are specified.


Examples

Example 1

To display the names of all shared volume groups in the cluster, enter:

cl_lsvg 
nodeA: testvg 
nodeB: testvg 

Example 2

To display the names of all active shared volume groups in the cluster, enter:

cl_lsvg -o 
nodeA: testvg 

The cl_lsvg command lists only the node on which the volume group is varied on.

Example 3

To display information about the shared volume group testvg, enter:

cl_lsvg -cspoc testvg 

The cl_lsvg command displays the same data as the lsvg command, prefixing each line of output with the name of the node on which the volume group is varied on.

cl_nodecmd [-q] [-cspoc “[-f] [-n nodelist | -g resource group]” ] command args

Runs a given command in parallel on a given set of nodes.

-q
Specifies quiet mode. All standard output is suppressed.
-cspoc
Argument used to specify one of the following C-SPOC options:
-f – Forces cl_nodecmd to skip HACMP version compatibility checks and node accessibility verification.
-g resource group – Generates the list of nodes participating in the resource group where the command will be executed.
-n nodelist – Runs the command on this list of nodes. If more than one node, separate nodes listed by commas.
command
Specifies the command to be run on all nodes in the nodelist.
args
Specifies any arguments required for use with the cl_nodecmd command.

Examples

Example 1

cl_nodecmd lspv 

Runs the lspv command on all cluster nodes.

Example 2

cl_nodecmd -cspoc "-n beaver,dam" lsvg rootvg 

Runs the lsvg rootvg command on nodes beaver and dam.

cl_rc.cluster [-cspoc “[-f] [-g ResourceGroup | -n NodeList]”] [-boot] [-b] [-i] [-N | -R | -B]

Sets up the operating system environment and starts the cluster daemons across cluster nodes.

Note: Arguments associated with a particular flag must be specified immediately following the flag.
-cspoc
Argument used to specify the following C-SPOC option:
-f – Forces cl_rc.cluster to skip HACMP version compatibility checks and node accessibility verification.
-g ResourceGroup – Generates the list of nodes participating in the resource group where the command will be executed.
-n Nodelist – Executes underlying AIX 5L commands across nodes in the nodelist.
-boot
Configures the service network interface to be on its boot address if IPAT is enabled.
-i
Starts the Cluster Information (clinfoES) daemon with its default options.
-b
Broadcasts the startup.
-N
Starts the daemons immediately (no inittab change).
-R
Starts the HACMP daemons on system restart only (the HACMP startup command is added to inittab file).
-B
Starts the daemons immediately and adds the HACMP entry to the inittab file.
-f
Forced startup. Cluster daemons should initialize running local procedures.


Examples

Example 1

To start the cluster with clinfo running on all the cluster nodes, enter:

cl_rc.cluster -boot -i 

