Appendix A: Planning Worksheets
Print and use the paper planning worksheets from the PDF version of this guide. In the PDF version, each worksheet begins at the top of a page so that it prints cleanly. You may need more than one copy of some worksheets.
This appendix contains the following worksheets:
Worksheet | Purpose
Two-Node Cluster Configuration Worksheet | Use this worksheet to record the information required to complete the entries in the Two-Node Cluster Configuration Assistant.
TCP/IP Networks Worksheet | Use this worksheet to record the TCP/IP network topology for a cluster. Complete one worksheet per cluster.
TCP/IP Network Interface Worksheet | Use this worksheet to record the TCP/IP network interface cards connected to each node. You need a separate worksheet for each node defined in the cluster; print a worksheet for each node and fill in the node name on each worksheet.
Point-to-Point Networks Worksheet | Use this worksheet to record the point-to-point network topology for a cluster. Complete one worksheet per cluster.
Serial Network Interface Worksheet | Use this worksheet to record the serial network interface cards connected to each node. You need a separate worksheet for each node defined in the cluster; print a worksheet for each node and fill in the node name on each worksheet.
Fibre Channel Disks Worksheet | Use this worksheet to record information about Fibre Channel disks to be included in the cluster. Complete a separate worksheet for each cluster node.
Shared SCSI Disk Worksheet | Use this worksheet to record the shared SCSI disk configuration for the cluster. Complete a separate worksheet for each shared bus.
Shared IBM SCSI Disk Arrays Worksheet | Use this worksheet to record the shared IBM SCSI disk array configurations for the cluster. Complete a separate worksheet for each shared SCSI bus.
Shared IBM SCSI Tape Drive Worksheet | Use this worksheet to record the shared IBM SCSI tape drive configurations for the cluster. Complete a separate worksheet for each shared tape drive.
Shared IBM Fibre Tape Drive Worksheet | Use this worksheet to record the shared IBM Fibre tape drive configurations for the cluster. Complete a separate worksheet for each shared tape drive.
Shared IBM Serial Storage Architecture Disk Subsystems Worksheet | Use this worksheet to record the IBM 7131-405 or 7133 SSA shared disk configuration for the cluster.
Non-Shared Volume Group Worksheet (Non-Concurrent Access) | Use this worksheet to record the volume groups and filesystems that reside on a node’s internal disks in a non-concurrent access configuration. You need a separate worksheet for each volume group; print a worksheet for each volume group and fill in the node name on each worksheet.
Shared Volume Group and Filesystem Worksheet (Non-Concurrent Access) | Use this worksheet to record the shared volume groups and filesystems in a non-concurrent access configuration. You need a separate worksheet for each shared volume group; print a worksheet for each volume group and fill in the names of the nodes sharing the volume group on each worksheet.
NFS-Exported Filesystem or Directory Worksheet (Non-Concurrent Access) | Use this worksheet to record the filesystems and directories NFS-exported by a node in a non-concurrent access configuration. You need a separate worksheet for each node defined in the cluster; print a worksheet for each node and fill in the node name on each worksheet.
Non-Shared Volume Group Worksheet (Concurrent Access) | Use this worksheet to record the volume groups and filesystems that reside on a node’s internal disks in a concurrent access configuration. You need a separate worksheet for each volume group; print a worksheet for each volume group and fill in the node name on each worksheet.
Shared Volume Group and Filesystem Worksheet (Concurrent Access) | Use this worksheet to record the shared volume groups and filesystems in a concurrent access configuration. You need a separate worksheet for each shared volume group; print a worksheet for each volume group and fill in the names of the nodes sharing the volume group on each worksheet.
Application Worksheet | Use these worksheets to record information about applications in the cluster.
Fast Connect Worksheet | Use this worksheet to record Fast Connect resources.
Communication Links (SNA-Over-LAN) Worksheet | Use this worksheet to record information about SNA-over-LAN communications links in the cluster.
Communication Links (X.25) Worksheet | Use this worksheet to record information about X.25 communications links in the cluster.
Communication Links (SNA-Over-X.25) Worksheet | Use this worksheet to record information about SNA-over-X.25 communications links in the cluster.
Application Server Worksheet | Use these worksheets to record information about application servers in the cluster.
Application Monitor Worksheet (Process Monitor) | Use this worksheet to record information for configuring a process monitor for an application.
Application Monitor Worksheet (Custom Monitor) | Use this worksheet to record information for configuring a custom (user-defined) monitor method for an application.
Resource Group Worksheet | Use this worksheet to record the resource groups for a cluster.
Cluster Event Worksheet | Use this worksheet to record the planned customization for an HACMP cluster event.
Cluster Site Worksheet | Use this worksheet to record planned cluster sites.
HACMP File Collection Worksheet | Use this worksheet to record planned HACMP cluster file collections.
Two-Node Cluster Configuration Worksheet
Local Node | _______________
Takeover (Remote) Node | _______________
Communication Path to Takeover Node | _______________
Application Server | _______________
Application Start Script | _______________
Application Stop Script | _______________
Service IP Label | _______________
Sample Two-Node Cluster Configuration Worksheet
Local Node | nodea
Takeover (Remote) Node | nodeb
Communication Path to Takeover Node | 10.11.12.13
Application Server | appsrv1
Application Start Script | /usr/es/sbin/cluster/utils/start_app1
Application Stop Script | /usr/es/sbin/cluster/utils/stop_app1
Service IP Label | app1_svc
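The start and stop scripts recorded above must exist and be executable on both nodes before the assistant is run. As an illustration only, a minimal sketch of what a start script such as start_app1 might contain follows; the daemon path /opt/app1/bin/app1d is a hypothetical placeholder, not part of HACMP.

    #!/bin/ksh
    # Hypothetical contents of /usr/es/sbin/cluster/utils/start_app1.
    # HACMP runs this script to bring appsrv1 online; it should start the
    # application and exit 0 on success, nonzero on failure.
    /opt/app1/bin/app1d        # placeholder application daemon
    exit $?

The stop script (stop_app1) follows the same pattern: terminate the application cleanly and exit 0 so that takeover processing can continue.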
TCP/IP Networks Worksheet
Cluster Name | _______________

Network Name | Network Type | Netmask | Node Names | IPAT via IP Aliases | IP Address Offset for Heartbeating over IP Aliases
             |              |         |            |                     |
             |              |         |            |                     |
             |              |         |            |                     |
             |              |         |            |                     |
             |              |         |            |                     |
             |              |         |            |                     |
Sample TCP/IP Networks Worksheet
Cluster Name | _______________

Network Name | Network Type | Netmask | Node Names | IPAT via IP Aliases | IP Address Offset for Heartbeating over IP Aliases
ether1 | Ethernet | 255.255.255.0 | clam, mussel, oyster | enable |
token1 | Token-Ring | 255.255.255.0 | clam, mussel, oyster | enable |
fddi1 | FDDI | 255.255.255.0 | clam, mussel | disable |
atm1 | ATM | 255.255.255.0 | clam, mussel | unsupported |
TCP/IP Network Interface Worksheet
Node Name | _______________

IP Label/Address | IP Alias Distribution Preference | Network Interface | Network Name | Interface Function | IP Address | Netmask | Hardware Address
                 |                                  |                   |              |                    |            |         |
                 |                                  |                   |              |                    |            |         |
                 |                                  |                   |              |                    |            |         |
                 |                                  |                   |              |                    |            |         |
                 |                                  |                   |              |                    |            |         |
                 |                                  |                   |              |                    |            |         |
                 |                                  |                   |              |                    |            |         |
                 |                                  |                   |              |                    |            |         |
                 |                                  |                   |              |                    |            |         |
Sample TCP/IP Network Interface Worksheet
Node Name | nodea

IP Label/Address | IP Alias Distribution Preference | Network Interface | Network Name | Interface Function | IP Address | Netmask | Hardware Address
nodea-en0 | Anti-collocation (default) | en0 | ether1 | service | 100.10.1.10 | 255.255.255.0 | 0x08005a7a7610
nodea-nsvc1 | Anti-collocation (default) | en0 | ether1 | non-service | 100.10.1.74 | 255.255.255.0 |
nodea-en1 | Anti-collocation (default) | en1 | ether1 | non-service | 100.10.11.11 | 255.255.255.0 |
nodea-tr0 | collocation | tr0 | token1 | service | 100.10.2.20 | 255.255.255.0 | 0x42005aa8b57b
nodea-nsvc2 | Anti-collocation (default) | tr0 | token1 | non-service | 100.10.2.84 | 255.255.255.0 |
nodea-fi0 | collocation | fi0 | fddi1 | service | 100.10.3.30 | 255.255.255.0 |
nodea-svc | collocation | css0 | hps1 | service | | |
nodea-nsvc3 | Anti-collocation (default) | css0 | hps1 | non-service | | |
nodea-at0 | collocation | at0 | atm1 | service | 100.10.7.10 | 255.255.255.0 | 0x0020481a396500
nodea-nsvc1 | Anti-collocation (default) | at0 | atm1 | non-service | 100.10.7.74 | 255.255.255.0 |
Point-to-Point Networks Worksheet
Cluster Name | _______________

Network Name | Network Type | Node Names | Hdisk (for diskhb networks)
             |              |            |
             |              |            |
             |              |            |
             |              |            |
RS232, target mode SCSI, target mode SSA, and disk heartbeating links do not use the TCP/IP protocol and do not require a netmask or an IP address.
Miscellaneous Data
Record any extra information about devices used to extend point-to-point links (for example, modem number or extender information).
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
Sample Point-to-Point Networks Worksheet
Cluster Name | clus1

Network Name | Network Type | Node Names | Hdisk (for diskhb networks)
diskhb1 | diskhb | nodeb, nodec | hdisk2
tmscsi1 | Target Mode SCSI | nodea, nodeb | --
tmssa1 | Target Mode SSA | nodea, nodeb | --
RS232, target mode SCSI, target mode SSA, and disk heartbeating links do not use the TCP/IP protocol and do not require a netmask or an IP address.
Miscellaneous Data
Record any extra information about devices used to extend serial links (for example, modem number or extender information).
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
Serial Network Interface Worksheet
Node Name | _______________

Slot Number | Interface Name | Adapter Label | Network Name | Network Attribute | Adapter Function
            |                |               |              | serial            | service
            |                |               |              | serial            | service
            |                |               |              | serial            | service
            |                |               |              | serial            | service
            |                |               |              | serial            | service
            |                |               |              | serial            | service
            |                |               |              | serial            | service
            |                |               |              | serial            | service
            |                |               |              | serial            | service
Non-IP networks do not carry TCP/IP traffic. As a result, no non-service addresses, identifiers (IP addresses), or interface hardware addresses are required to maintain keepalives and control messages between nodes.
Sample Serial Network Interface Worksheet
Node Name | nodea

Slot Number | Interface Name | Adapter Label | Network Name | Network Attribute | Adapter Function
SS2 | /dev/tty1 | nodea_tty1 | rs232a | serial | service
08 | scsi2 | nodea_tmscsi2 | tmscsi1 | serial | service
01 | tmssa1 | nodea_tmssa1 | tmssa1 | serial | service
Fibre Channel Disks Worksheet
Node Name | _______________

Fibre Channel Adapter | Disks Associated with Adapter
                      |
                      |
                      |
                      |
                      |
Sample Fibre Channel Disks Worksheet
Node Name | nodea

Fibre Channel Adapter | Disks Associated with Adapter
fcs0 | hdisk2
     | hdisk3
     | hdisk4
     | hdisk5
     | hdisk6
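One way to gather this mapping is to query the device configuration on each node. The sketch below uses standard AIX commands; the device names fcs0 and fscsi0 are assumptions matching the sample above (fscsi0 being the protocol device that typically sits between an fcs adapter and its disks).

    #!/bin/ksh
    # List Fibre Channel adapters and the disks behind each one.
    lsdev -Cc adapter | grep fcs      # FC adapters, for example fcs0
    lsdev -Cc disk                    # all configured physical disks
    lsdev -p fscsi0                   # devices whose parent is fscsi0 (assumed name)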
Shared SCSI Disk Worksheet
Complete a separate worksheet for each shared SCSI bus.
Cluster Name | _______________

Host and Adapter Information
             | Node A | Node B | Node C | Node D
Node Name    |        |        |        |
Slot Number  |        |        |        |
Logical Name |        |        |        |

SCSI Device IDs on Shared Bus
                    | Node A | Node B | Node C | Node D
Adapter             |        |        |        |
First Shared Drive  |        |        |        |
Second Shared Drive |        |        |        |
Third Shared Drive  |        |        |        |
Fourth Shared Drive |        |        |        |
Fifth Shared Drive  |        |        |        |
Sixth Shared Drive  |        |        |        |

Shared Drives
Disk   | Size | Logical Device Name
       |      | Node A | Node B | Node C | Node D
First  |      |        |        |        |
Second |      |        |        |        |
Third  |      |        |        |        |
Fourth |      |        |        |        |
Fifth  |      |        |        |        |
Sixth  |      |        |        |        |
Sample Shared SCSI Disk Worksheet
Complete a separate worksheet for each shared SCSI bus.
Cluster Name | _______________

Host and Adapter Information
             | Node A | Node B | Node C | Node D
Node Name    | nodea  | nodeb  |        |
Slot Number  | 7      | 7      |        |
Logical Name | scsi1  | scsi1  |        |

SCSI Device IDs on Shared Bus
                    | Node A | Node B | Node C | Node D
Adapter             | 6      | 5      |        |
First Shared Drive  | 3      |        |        |
Second Shared Drive | 4      |        |        |
Third Shared Drive  | 5      |        |        |
Fourth Shared Drive |        |        |        |
Fifth Shared Drive  |        |        |        |
Sixth Shared Drive  |        |        |        |

Shared Drives
Disk   | Size | Logical Device Name
       |      | Node A | Node B | Node C | Node D
First  | 670  | hdisk2 | hdisk2 |        |
Second | 670  | hdisk3 | hdisk3 |        |
Third  | 670  | hdisk4 | hdisk4 |        |
Fourth |      |        |        |        |
Fifth  |      |        |        |        |
Sixth  |      |        |        |        |
Shared IBM SCSI Disk Arrays Worksheet
Complete a separate worksheet for each shared SCSI disk array.
Host and Adapter Information
             | Node A | Node B | Node C | Node D
Node Name    |        |        |        |
Slot Number  |        |        |        |
Logical Name |        |        |        |

SCSI Device IDs on Shared Bus
Adapter | Node A | Node B | Node C | Node D
        |        |        |        |

Shared Drives
Size | RAID Level
     |
     |
     |
     |
Sample Shared IBM SCSI Disk Arrays Worksheet
Complete a separate worksheet for each shared SCSI disk array.
Host and Adapter Information
             | Node A | Node B | Node C | Node D
Node Name    | nodea  | nodeb  |        |
Slot Number  | 2      | 2      |        |
Logical Name | scsi1  | scsi1  |        |

SCSI Device IDs on Shared Bus
Adapter | Node A | Node B | Node C | Node D
        | 14     | 15     |        |

Shared Drives
Size | RAID Level
2GB  | 5
2GB  | 3
2GB  | 5
2GB  | 5
Shared IBM SCSI Tape Drive Worksheet
Note: Complete a separate worksheet for each shared SCSI tape drive.
Host and Adapter Information
             | Node A | Node B
Node Name    |        |
Slot Number  |        |
Logical Name |        |

SCSI Tape Drive IDs on Shared Bus
           | Node A | Node B
Adapter    |        |
Tape Drive |        |

Shared Drives | Logical Device Name
              | Node A | Node B
              |        |
              |        |
              |        |
Sample Shared IBM SCSI Tape Drive Worksheet
This sample worksheet shows a shared SCSI tape drive configuration.
Host and Adapter Information
             | Node A | Node B
Node Name    | nodea  | nodeb
Slot Number  | 2      | 2
Logical Name | scsi1  | scsi1

SCSI Tape Drive IDs on Shared Bus
           | Node A | Node B
Adapter    | 5      | 6
Tape Drive | 2      |

Shared Drives             | Logical Device Name
                          | Node A    | Node B
5.0GB 8mm tape drive 10GB | /dev/rmt0 | /dev/rmt0
Shared IBM Fibre Tape Drive Worksheet
Host and Adapter Information
             | Node A | Node B
Node Name    |        |
Slot Number  |        |
Logical Name |        |

Fibre Device IDs on Shared Bus
                | Node A | Node B
SCSI ID         |        |
LUN ID          |        |
World Wide Name |        |

Shared Drives
Tape Drive Name | Logical Device Name
                | Node A | Node B
                |        |
                |        |
Sample Shared IBM Fibre Tape Drive Worksheet
This sample worksheet shows a shared Fibre tape drive configuration (recorded after the tape drive had been configured).
Host and Adapter Information
             | Node A | Node B
Node Name    | nodea  | nodeb
Slot Number  | 1P-18  | 04-02
Logical Name | fcs0   | fcs0

Fibre Device IDs on Shared Bus
                | Node A                     | Node B
SCSI ID         | scsi_id 0x26               | scsi_id 0x26
LUN ID          | lun_id 0x0                 | lun_id 0x0
World Wide Name | ww_name 0x5005076300404576 | ww_name 0x5005076300404576

Shared Drives
Tape Drive Name | Logical Device Name
                | Node A    | Node B
IBM 3590        | /dev/rmt0 | /dev/rmt0
Shared IBM Serial Storage Architecture Disk Subsystems Worksheet
Host and Adapter Information
                  | Node A | Node B | Node C | Node D
Node Name         |        |        |        |
SSA Adapter Label |        |        |        |
Slot Number       |        |        |        |
Dual-Port Number  |        |        |        |

SSA Logical Disk Drive
Logical Device Name: Node A | Node B | Node C | Node D
       |        |        |
       |        |        |
       |        |        |
       |        |        |

SSA Logical Disk Drive
Logical Device Name: Node A | Node B | Node C | Node D
       |        |        |
       |        |        |
       |        |        |
       |        |        |
Sample Shared IBM Serial Storage Architecture Disk Subsystems Worksheet
Host and Adapter Information
                  | Node A   | Node B   | Node C | Node D
Node Name         | clam     | mussel   |        |
SSA Adapter Label | ha1, ha2 | ha1, ha2 |        |
Slot Number       | 2, 4     | 2, 4     |        |
Dual-Port Number  | a1, a2   | a1, a2   |        |

SSA Logical Disk Drive
Logical Device Name: Node A | Node B | Node C | Node D
hdisk2 | hdisk2 |        |
hdisk3 | hdisk3 |        |
hdisk4 | hdisk4 |        |
hdisk5 | hdisk5 |        |

SSA Logical Disk Drive
Logical Device Name: Node A | Node B | Node C | Node D
hdisk2 | hdisk2 |        |
hdisk3 | hdisk3 |        |
hdisk4 | hdisk4 |        |
hdisk5 | hdisk5 |        |
Non-Shared Volume Group Worksheet (Non-Concurrent Access)
Node Name | _______________
Volume Group Name | _______________
Physical Volumes | _______________
                 | _______________
                 | _______________

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Filesystem Mount Point | _______________
Size (in 512-byte blocks) | _______________

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Filesystem Mount Point | _______________
Size (in 512-byte blocks) | _______________
Sample Non-Shared Volume Group Worksheet (Non-Concurrent Access)
Node Name | clam
Volume Group Name | localvg
Physical Volumes | hdisk1

Logical Volume Name | locallv
Number of Copies of Logical Partition | 1
On Separate Physical Volumes? | no
Filesystem Mount Point | /localfs
Size (in 512-byte blocks) | 100000

Logical Volume Name |
Number of Copies of Logical Partition |
On Separate Physical Volumes? |
Filesystem Mount Point |
Size (in 512-byte blocks) |
Shared Volume Group and Filesystem Worksheet (Non-Concurrent Access)
                         | Node A | Node B | Node C | Node D
Node Names               |        |        |        |
Shared Volume Group Name | _______________
Major Number             |        |        |        |
Log Logical Volume Name  | _______________
Physical Volumes         |        |        |        |
                         |        |        |        |
                         |        |        |        |
Cross-site LVM Mirror    |        |        |        |

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Filesystem Mount Point | _______________
Size (in 512-byte blocks) | _______________
Cross-site LVM Mirroring enabled | _______________

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Filesystem Mount Point | _______________
Size (in 512-byte blocks) | _______________
Cross-site LVM Mirroring enabled | _______________
Sample Shared Volume Group and Filesystem Worksheet (Non-Concurrent Access)
                         | Node A  | Node B  | Node C | Node D
Node Names               | trout   | guppy   |        |
Shared Volume Group Name | bassvg
Major Number             | 24      | 24      |        |
Log Logical Volume Name  | bassloglv
Physical Volumes         | hdisk6  | hdisk6  |        |
                         | hdisk7  | hdisk7  |        |
                         | hdisk13 | hdisk16 |        |
Cross-site LVM Mirror    | site1   | site2   |        |

Logical Volume Name | basslv
Number of Copies of Logical Partition | 3
On Separate Physical Volumes? | yes
Filesystem Mount Point | /bassfs
Size (in 512-byte blocks) | 200000
Cross-site LVM Mirroring enabled | yes

Logical Volume Name |
Number of Copies of Logical Partition |
On Separate Physical Volumes? |
Filesystem Mount Point |
Size (in 512-byte blocks) |
Cross-site LVM Mirroring enabled |
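Most of the values on this worksheet can be read from the LVM on a node where the shared volume group is varied on. A minimal sketch using standard AIX commands, with the volume group and logical volume names taken from the sample above:

    #!/bin/ksh
    # Gather shared volume group details for the worksheet.
    lspv                  # physical volumes (hdisk6, hdisk7, ...) and their volume groups
    ls -l /dev/bassvg     # the major number appears in the device file listing
    lsvg -l bassvg        # logical volumes, mount points, and state within the VG
    lslv basslv           # number of copies and allocation across physical volumes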
NFS-Exported Filesystem or Directory Worksheet (Non-Concurrent Access)
Resource Group | _______________
Network for NFS Mount | _______________
Filesystem Mounted before IP Configured? | _______________

Note: Export options include read-only, root access, and so on. For a full list of export options, see the exports man page.

Filesystem or Directory to Export | _______________
Export Options | _______________
               | _______________
               | _______________

Filesystem or Directory to Export | _______________
Export Options | _______________
               | _______________
               | _______________

Filesystem or Directory to Export | _______________
Export Options | _______________
               | _______________
               | _______________
Sample NFS-Exported Filesystem or Directory Worksheet (Non-Concurrent Access)
Resource Group | rg1
Network for NFS Mount | tr1
Filesystem Mounted before IP Configured? | true

Note: Export options include read-only, root access, and so on. For a full list of export options, see the exports man page.

Filesystem or Directory to Export | /fs1
Export Options | client access: client1
               | root access: node1, node2
               | mode: read/write

Filesystem or Directory to Export | /fs2
Export Options | client access: client2
               | root access: node3, node4
               | mode: read only

Filesystem or Directory to Export |
Export Options |
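For reference, the two sample entries above would correspond to /etc/exports lines along the following lines. This is an illustrative sketch of AIX export syntax using the sample's placeholder host names, not content from the worksheet itself; see the exports man page for the authoritative option list.

    # /etc/exports -- illustrative entries matching the sample worksheet
    /fs1 -access=client1,root=node1:node2
    /fs2 -ro,access=client2,root=node3:node4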
Non-Shared Volume Group Worksheet (Concurrent Access)
Node Name | _______________
Volume Group Name | _______________
Physical Volumes | _______________
                 | _______________
                 | _______________

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Filesystem Mount Point | _______________
Size (in 512-byte blocks) | _______________

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Filesystem Mount Point | _______________
Size (in 512-byte blocks) | _______________
Sample Non-Shared Volume Group Worksheet (Concurrent Access)
Node Name | clam
Volume Group Name | localvg
Physical Volumes | hdisk1

Logical Volume Name | locallv
Number of Copies of Logical Partition | 1
On Separate Physical Volumes? | no
Filesystem Mount Point | /localfs
Size (in 512-byte blocks) | 100000

Logical Volume Name |
Number of Copies of Logical Partition |
On Separate Physical Volumes? |
Filesystem Mount Point |
Size (in 512-byte blocks) |
Shared Volume Group and Filesystem Worksheet (Concurrent Access)
                         | Node A | Node B | Node C | Node D
Node Names               |        |        |        |
Shared Volume Group Name | _______________
Physical Volumes         |        |        |        |
                         |        |        |        |
                         |        |        |        |
                         |        |        |        |

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Size (in 512-byte blocks) | _______________

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Size (in 512-byte blocks) | _______________

Logical Volume Name | _______________
Number of Copies of Logical Partition | _______________
On Separate Physical Volumes? | _______________
Size (in 512-byte blocks) | _______________
Sample Shared Volume Group and Filesystem Worksheet (Concurrent Access)
                         | Node A  | Node B  | Node C | Node D
Node Names               | trout   | guppy   |        |
Shared Volume Group Name | bassvg
Physical Volumes         | hdisk6  | hdisk6  |        |
                         | hdisk7  | hdisk7  |        |
                         | hdisk13 | hdisk16 |        |

Logical Volume Name | basslv
Number of Copies of Logical Partition | 3
On Separate Physical Volumes? | yes
Size (in 512-byte blocks) | 200000

Logical Volume Name |
Number of Copies of Logical Partition |
On Separate Physical Volumes? |
Size (in 512-byte blocks) |

Logical Volume Name |
Number of Copies of Logical Partition |
On Separate Physical Volumes? |
Size (in 512-byte blocks) |
Application Worksheet
Application Name | _______________

Key Application Files
                      | Directory or Path | Filesystem | Location | Sharing
Executable Files      |                   |            |          |
Configuration Files   |                   |            |          |
Data Files or Devices |                   |            |          |
Log Files or Devices  |                   |            |          |

Cluster Name | _______________

Fallover Strategy (P = primary; T = takeover)
Node     |  |  |  |
Strategy |  |  |  |

Normal Start Commands and Procedures
_________________________________________________________________________
_________________________________________________________________________

Verification Commands and Procedures
_________________________________________________________________________
_________________________________________________________________________

Node Reintegration and Takeover Caveats
Node  | Caveats
One   |
Two   |
Three |
Application Worksheet (continued)
Normal Stop Commands and Procedures
_________________________________________________________________________
_________________________________________________________________________

Verification Commands and Procedures
_________________________________________________________________________
_________________________________________________________________________

Node Reintegration and Takeover Caveats
Node  | Caveats
One   |
Two   |
Three |
Sample Application Worksheet
Application Name | _______________

Key Application Files
                      | Directory or Path | Filesystem       | Location | Sharing
Executable Files      | /app1/bin         | /app1            | internal | non-shared
Configuration Files   | /app1/config/one  | /app1/config/one | external | shared
Data Files or Devices | /app1lv1          | NA               | external | shared
Log Files or Devices  | /app1loglv1       | NA               | external | shared

Cluster Name | tetra

Fallover Strategy (P = primary; T = takeover)
Node     | One | Two | Three | Four
Strategy | P   | NA  | T1    | T2

Normal Start Commands and Procedures
- Ensure that the app1 server group is running
- If the app1 server group is not running, as user app1_adm, execute app1start -Ione
- Ensure that the app1 server is running
- If Node Two is up, start (restart) app1_client on Node Two

Verification Commands and Procedures
- Run the following command: lssrc -g app1
- Ensure from the output that daemon1, daemon2, and daemon3 are “Active”
- Send notification if not “Active”

Node Reintegration and Takeover Caveats
Node  | Caveats
One   | NA
Two   | Must restart the current instance of app1 with app1start -Ione -Ithree
Three | Must restart the current instance of app1 with app1start -Ione -Ifour
Sample Application Worksheet (continued)
Normal Stop Commands and Procedures
- Ensure that the app1 server group is running
- If the app1 server group is running, stop it by executing app1stop as user app1_adm
- Ensure that the app1 server is stopped
- If the app1 server is still up, stop individual daemons with the kill command

Verification Commands and Procedures
- Run the following command: lssrc -g app1
- Ensure from the output that daemon1, daemon2, and daemon3 are Inoperative

Node Reintegration and Takeover Caveats
Node  | Caveats
One   | May want to notify app1_client users to log off
Two   | Must restart the current instance of app1 with app1start -Ithree
Three | Must restart the current instance of app1 with app1start -Ifour
In this sample worksheet, the server portion of the application, app1, normally runs on three of the four cluster nodes: nodes One, Three, and Four. Each of the three nodes is running its own app1 instance: one, three, or four. When a node takes over an app1 instance, the takeover node must restart the application server using flags for multiple instances. Also, because Node Two within this configuration runs the client portion associated with this instance of app1, the takeover node must restart the client when the client’s server instance is restarted.
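The verification procedure from the sample can be scripted. The sketch below checks each daemon individually with lssrc -s rather than scanning the group listing; the daemon names come from the sample, and the mail recipient is a hypothetical choice.

    #!/bin/ksh
    # Verify that the app1 subsystems are active; notify an operator if not.
    for d in daemon1 daemon2 daemon3
    do
        lssrc -s $d | grep -q active ||
            echo "$d is not active on $(hostname)" | mail -s "app1 check" root
    done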
Fast Connect Worksheet
Cluster Name | _______________

Resource Group | Nodes | Fast Connect Resources
               |       |
               |       |
               |       |
               |       |

Resource Group | Nodes | Fast Connect Resources
               |       |
               |       |
               |       |
               |       |
Sample Fast Connect Worksheet
Cluster Name | cluster2

Resource Group | Nodes | Fast Connect Resources
rg1 | NodeA, NodeC | FS1%f%/smbtest/fs1
    |              | LPT1%p%printq

Resource Group | Nodes | Fast Connect Resources
rg2 | NodeB, NodeD | FS2%f%/smbtest/fs2
    |              | LPT2%p%printq
Communication Links (SNA-Over-LAN) Worksheet
Cluster Name | _______________

Resource Group | _______________
Nodes | _______________
Communication Link Name | _______________
DLC Name | _______________
Port | _______________
Link Station | _______________
Application Service File | _______________

Resource Group | _______________
Nodes | _______________
Communication Link Name | _______________
DLC Name | _______________
Port | _______________
Link Station | _______________
Application Service File | _______________
Sample Communication Links (SNA-Over-LAN) Worksheet
Cluster Name | cluster1

Resource Group | rg1
Nodes | nodeA, nodeB
Communication Link Name | snalink1
DLC Name | snaprofile1
Port | snaport1
Link Station | snastation1
Application Service File | /tmp/service1.sh

Resource Group |
Nodes |
Communication Link Name |
DLC Name |
Port |
Link Station |
Application Service File |
Communication Links (X.25) Worksheet
Cluster Name | _______________

Resource Group | _______________
Nodes | _______________
Communication Link Name | _______________
Port | _______________
Address or NUA | _______________
Network ID | _______________
Country Code | _______________
Adapter Name(s) | _______________
Application Service File | _______________
Sample Communication Links (X.25) Worksheet
Cluster Name | mycluster

Resource Group | cascrg1
Nodes | nodeA, nodeB
Communication Link Name | x25link1
Port | sx25a2
Address or NUA | 241
Network ID | 5 (can be left blank)
Country Code | (system default automatically used for local country code)
Adapter Name(s) | adapterAsx25a2, adapterBsx25a2
Application Service File | /tmp/startx25.sh
Communication Links (SNA-Over-X.25) Worksheet
Cluster Name | _______________

Resource Group | _______________
Nodes | _______________
Communication Link Name | _______________
X.25 Port | _______________
X.25 Address or NUA | _______________
X.25 Network ID | _______________
X.25 Country Code | _______________
X.25 Adapter Name(s) | _______________
SNA DLC | _______________
SNA Port(s) | _______________
SNA Link Station(s) | _______________
Application Service File | _______________
Sample Communication Links (SNA-Over-X.25) Worksheet
Cluster Name | mycluster

Resource Group | casc_rg2
Nodes | nodeA, nodeB, nodeC
Communication Link Name | snax25link1
X.25 Port | sx25a0
X.25 Address or NUA | 241
X.25 Network ID | 5 (can be left blank)
X.25 Country Code | (system default automatically used for local country code)
X.25 Adapter Name(s) | adapterAsx25a0, adapterBsx25a0, adapterCsx25a0
SNA DLC | dlcprofile2
SNA Port(s) | snaport2
SNA Link Station(s) | snastation2
Application Service File | /tmp/snax25start.sh
Application Server Worksheet
Cluster Name | _______________

Note: Use full pathnames for all user-defined scripts.

Server Name | _______________
Start Script | _______________
Stop Script | _______________

Server Name | _______________
Start Script | _______________
Stop Script | _______________

Server Name | _______________
Start Script | _______________
Stop Script | _______________

Server Name | _______________
Start Script | _______________
Stop Script | _______________
Sample Application Server Worksheet
Cluster Name | cluster1

Note: Use full pathnames for all user-defined scripts.

Server Name | mydemo
Start Script | /usr/es/sbin/cluster/utils/start_mydemo
Stop Script | /usr/es/sbin/cluster/utils/stop_mydemo

Server Name |
Start Script |
Stop Script |

Server Name |
Start Script |
Stop Script |

Server Name |
Start Script |
Stop Script |
Application Monitor Worksheet (Process Monitor)
Cluster Name | _______________

Application Server Name | _______________
Can Application Be Monitored with Process Monitor? | Yes / No (If No, go to the Custom Monitor worksheet)
Processes to Monitor | _______________
Process Owner | _______________
Instance Count | _______________
Stabilization Interval | _______________
Restart Count | _______________
Restart Interval | _______________
Action on Application Failure | _______________
Notify Method | _______________
Cleanup Method | _______________
Restart Method | _______________
Sample Application Monitor Worksheet (Process Monitor)
Cluster Name | cluster1

Application Server Name | mydemo
Can Application Be Monitored with Process Monitor? | yes
Processes to Monitor | demo
Process Owner | root
Instance Count | 1
Stabilization Interval | 30
Restart Count | 3
Restart Interval | 95
Action on Application Failure | fallover
Notify Method | /usr/es/sbin/cluster/events/notify_demo
Cleanup Method | /usr/es/sbin/cluster/utils/events/stop_demo
Restart Method | /usr/es/sbin/cluster/utils/events/start_demo
Application Monitor Worksheet (Custom Monitor)
Cluster Name | _______________

Application Server Name | _______________
Monitor Method | _______________
Monitor Interval | _______________
Hung Monitor Signal | _______________
Stabilization Interval | _______________
Restart Count | _______________
Restart Interval | _______________
Action on Application Failure | _______________
Notify Method | _______________
Cleanup Method | _______________
Restart Method | _______________
Sample Application Monitor Worksheet (Custom Monitor)
Cluster Name | cluster1

Application Server Name | mydemo
Monitor Method | /usr/es/sbin/cluster/events/utils/monitor_mydemo
Monitor Interval | 60
Hung Monitor Signal | 9
Stabilization Interval | 30
Restart Count | 3
Restart Interval | 280
Action on Application Failure | notify
Notify Method | /usr/es/sbin/cluster/events/utils/notify_mydemo
Cleanup Method | /usr/es/sbin/cluster/events/utils/stop_mydemo
Restart Method | /usr/es/sbin/cluster/events/utils/start_mydemo
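A custom monitor method is a script that HACMP runs at each Monitor Interval; an exit status of 0 reports the application as healthy, and a nonzero status reports a failure. A minimal sketch of what the monitor_mydemo method above might contain, assuming the demo process name from the process-monitor sample:

    #!/bin/ksh
    # Hypothetical custom monitor method for the mydemo application server.
    # Exit 0 if healthy; nonzero tells HACMP the application has failed.
    ps -e -o comm | grep -qw demo || exit 1    # the demo process must be running
    exit 0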
Resource Group Worksheet
Cluster Name | _______________

Resource Group Name | _______________
Participating Node Names | _______________
Inter-Site Management Policy | _______________
Startup Policy | _______________
Fallover Policy | _______________
Fallback Policy | _______________
Delayed Fallback Timer | _______________
Settling Time | _______________

Runtime Policies:
Dynamic Node Priority Policy | _______________
Processing Order (Parallel, Serial, or Customized) | _______________
Service IP Label | _______________
Filesystems | _______________
Filesystems Consistency Check | _______________
Filesystems Recovery Method | _______________
Filesystems or Directories to Export | _______________
Filesystems or Directories to NFS Mount | _______________
Network for NFS Mount | _______________
Volume Groups | _______________
Concurrent Volume Groups | _______________
Raw Disk PVIDs | _______________
Fast Connect Services | _______________
Tape Resources | _______________
Application Servers | _______________
Highly Available Communication Links | _______________
Primary Workload Manager Class | _______________
Secondary WLM Class (only non-concurrent resource groups that do not have the Online Using Node Distribution Policy startup) | _______________
Miscellaneous Data | _______________
Auto Import Volume Groups | _______________
Disk Fencing Activated | _______________
Filesystems Mounted before IP Configured | _______________
Sample Resource Group Worksheet
Cluster Name | clus1

Resource Group Name | rotgrp1
Participating Node Names | clam, mussel, oyster
Inter-Site Management Policy | ignore
Startup Policy | Online Using Node Distribution Policy
Fallover Policy | Fallover to Next Priority Available Node
Fallback Policy | Never Fallback
Delayed Fallback Timer |
Settling Time |

Runtime Policies:
Dynamic Node Priority Policy |
Processing Order (Parallel, Serial, or Customized) |
Service IP Label | myname_svc
Filesystems | /sharedfs1
Filesystems Consistency Check | fsck
Filesystems Recovery Method | sequential
Filesystems or Directories to Export |
Filesystems or Directories to NFS Mount | /sharedvg1
Network for NFS Mount | ether1
Volume Groups | sharedvg
Concurrent Volume Groups |
Raw Disk PVIDs |
Fast Connect Services |
Tape Resources |
Application Servers | mydemo
Highly Available Communication Links |
Primary Workload Manager Class |
Secondary WLM Class (only non-concurrent resource groups that do not have the Online Using Node Distribution Policy startup) |
Miscellaneous Data |
Auto Import Volume Groups | false
Disk Fencing Activated | false
Filesystems Mounted before IP Configured | false
Cluster Event Worksheet
Use full pathnames for all Cluster Event Methods, Notify Commands, and Recovery Commands.
Cluster Name | _______________

Cluster Event Description | _______________
Cluster Event Method | _______________

Cluster Event Name | _______________
Event Command | _______________
Notify Command | _______________
Remote Notification Message Text | _______________
Remote Notification Message Location | _______________
Pre-Event Command | _______________
Post-Event Command | _______________
Event Recovery Command | _______________
Recovery Counter | _______________
Time Until Warning | _______________

Cluster Event Name | _______________
Event Command | _______________
Notify Command | _______________
Remote Notification Message Text | _______________
Remote Notification Message Location | _______________
Pre-Event Command | _______________
Post-Event Command | _______________
Event Recovery Command | _______________
Recovery Counter | _______________
Time Until Warning | _______________
Sample Cluster Event Worksheet
Use full pathnames for all user-defined scripts.
Cluster Name | bivalves

Cluster Event Description |
Cluster Event Method |

Cluster Event Name | node_down_complete
Event Command |
Notify Command |
Remote Notification Message Text |
Remote Notification Message Location |
Pre-Event Command |
Post-Event Command | /usr/local/wakeup
Event Recovery Command |
Recovery Counter |
Time Until Warning |

Cluster Event Name |
Event Command |
Notify Command |
Remote Notification Message Text |
Remote Notification Message Location |
Pre-Event Command |
Post-Event Command |
Event Recovery Command |
Recovery Counter |
Time Until Warning |
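HACMP runs a post-event command after processing for the named event completes, passing the event name and the event's arguments on the command line. A minimal sketch of what the /usr/local/wakeup script above might contain; the log file path and mail recipient are hypothetical choices.

    #!/bin/ksh
    # Hypothetical post-event script for node_down_complete.
    EVENT=$1; shift                     # first argument: the event name
    print "$(date): $EVENT complete, args: $*" >> /var/adm/wakeup.log
    echo "Cluster event $EVENT completed on $(hostname)" |
        mail -s "HACMP event" root      # placeholder notification target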
Cluster Site Worksheet
Site Name | _______________

Cluster Nodes in Site | _______________
                      | _______________
                      | _______________

For HAGEO Sites
Site Dominance | _______________
Site Backup Communication Method | _______________
Sample Cluster Site Worksheet
Site Name | Site_1

Cluster Nodes in Site | nodea
                      | nodeb

For HAGEO Sites
Site Dominance |
Site Backup Communication Method |
HACMP File Collection Worksheet
Cluster Name | _______________

File Collection Name | _______________
File Collection Description | _______________
Propagate Files before Verification | _______________
Propagate Files Automatically | _______________
Files to Include in This Collection | _______________
Automatic Check Time Limit | _______________
Sample HACMP File Collection Worksheet
Cluster Name | MyCluster

File Collection Name | Apache_Files
File Collection Description | Apache configuration files
Propagate Files before Verification | Yes
Propagate Files Automatically | Yes
Files to Include in This Collection | /usr/local/apache/conf/httpd.conf
                                    | /usr/local/apache/conf/ss/.conf
                                    | /usr/local/apache/conf/mime_types
Automatic Check Time Limit | 30 minutes