Chapter 6: Configuring Installed Hardware
This chapter describes how to ensure that network interface cards (NICs), shared external disk devices, and shared tape drives are ready to support an HACMP cluster.
Before you read this chapter, you should have already installed the devices following the instructions in the relevant AIX 5L documentation. For the latest information about the hardware supported for use with the version of AIX 5L that you are using, see the IBM website.
Note: For information on installing and configuring OEM disks, see Appendix B: OEM Disk, Volume Group, and Filesystems Accommodation.
Configuring Network Interface Cards
This section describes how to ensure that network interface cards (NICs) are configured properly to support the HACMP software. For each node, ensure that the settings of each NIC match the values on the completed copies of the TCP/IP Network Interface Worksheet, described in the Planning Guide. Special considerations for a particular NIC type are discussed in the following sections.
Ethernet, Token-Ring, and FDDI Interface Cards
Consider the following guidelines when configuring NICs:
When using the smit mktcpip fastpath to define a NIC, the value entered in the HOSTNAME field becomes the default hostname at system boot. For instance, if you configure the first NIC as n0-svc and then configure the second NIC as n0-nsvc, the default hostname at system boot is n0-nsvc. To avoid this problem, do not change the value in the HOSTNAME field when configuring subsequent NICs; enter only the information specific to each NIC. The hostname should match the NIC label of the primary network's service NIC, because some applications may depend on the hostname (although HACMP itself does not require this).
If you are using HACMP IP Address Takeover via IP Replacement, ensure that each NIC that will host a service IP label is set to use a non-service IP label at boot. Use the smit chinet or smit chghfcs fastpath to reconfigure these NICs, if necessary, to a non-service IP label. Refer to your completed copies of the TCP/IP Network Interface Worksheet. For more information, see the section Planning for IP Address Takeover via IP Aliases in the chapter on Planning Cluster Network Connectivity in the Planning Guide.
Completing the TCP/IP Network Interface Worksheets
After reviewing all network interfaces for a node, record the network interface names on that node's TCP/IP Network Interface Worksheet. To display a list of available and defined NICs for the node, enter the following command:
At this point, all interfaces used by the HACMP software should be available. List the NICs marked as Available in the Interface Name field on the TCP/IP Network Interface Worksheet.
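On AIX 5L, the NIC listing is typically produced with lsdev -Cc if (an assumption here, since the exact command was not preserved in this text). The filtering step, keeping only interfaces marked Available for the worksheet, can be sketched over a hypothetical sample of that output:

```shell
# Hypothetical sample of `lsdev -Cc if` output; real output varies by node.
sample='en0 Available Standard Ethernet Network Interface
en1 Defined Standard Ethernet Network Interface
tr0 Available Token Ring Network Interface'

# Keep only the NICs whose state (second column) is Available,
# as required for the TCP/IP Network Interface Worksheet.
printf '%s\n' "$sample" | awk '$2 == "Available" { print $1 }'
```

This prints en0 and tr0; en1 is excluded because it is Defined but not Available.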
When you initially add a new NIC to the cluster, HACMP discovers the NIC interface name from the AIX 5L configuration. However, if you later change any part of the existing NIC definition in HACMP, ensure that the NIC interface name known to HACMP is the same as the NIC definition in AIX 5L. If they do not match, change the NIC interface name in HACMP to match the definition in AIX 5L.
Configuring Point-to-Point Networks
A point-to-point network is an ideal way to connect the nodes in an HACMP cluster. The point-to-point network allows Cluster Managers to continue to exchange keepalive packets should the TCP/IP-based subsystem, networks, or network NICs fail. Thus, the point-to-point network prevents the nodes from becoming isolated and from attempting to take over shared resources. The HACMP software supports four types of point-to-point networks:
RS232
Disk heartbeating (over enhanced concurrent mode disks)
Target mode SCSI
Target mode SSA
For more information on point-to-point networks, see the chapter Planning Cluster Network Connectivity in the Planning Guide.
For information about configuring point-to-point networks in HACMP, see Chapter 4: Configuring HACMP Cluster Topology and Resources (Extended) in the Administration Guide.
Configuring RS232 Serial Connections
This section describes how to configure an RS232 serial cable as a serial network in an HACMP cluster. Use a serial network when connecting two nodes in an HACMP environment. For a cluster of more than two nodes, configure the serial network in a ring configuration (NodeA <-> NodeB, NodeB <-> NodeC, NodeC <-> NodeA).
Before configuring the RS232 serial connection, physically install the cables between the nodes. To connect the nodes, use a fully pinned out, 25-pin null-modem (turnaround), serial-to-serial cable. You can order an HACMP serial cable from IBM. The cable is available in the following lengths:
3.7 meter serial-to-serial port cable (FC3124)
8 meter serial-to-serial port cable (FC3125)
Note: Many systems have special serial port requirements. Refer to the documentation for your system. For more information on serial ports for HACMP, see the chapter on Planning Cluster Network Connectivity in the Planning Guide.
Configuring an RS232 Serial Connection in AIX 5L
To configure an RS232 serial connection:
1. Ensure that you have physically installed the RS232 serial cable between the two nodes before configuring it.
2. Use the following command to review the status of each serial port you intend to use after installing the RS232 serial cable:
If the tty device is neither Defined nor Available, it will not be listed by the lsdev command. Use the smit tty fastpath to define the device.
If the tty device is Defined but not Available, or if you have questions about its settings, use the rmdev command to delete the tty device:
3. Use the smit tty fastpath to define the device on each node that will be connected to the RS232 cable. Removing and then defining the tty device makes it available with the default settings, which are appropriate for the communication test described here.
4. Set the ENABLE login field to DISABLE to prevent getty processes from spawning on this device. Refer to the following section, Defining the tty Device.
5. Test communication over the serial cable after creating the tty device. For more information about testing serial networks, see the section Testing the Serial Connection.
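The status check in step 2 can be sketched as a filter over the tty listing. On AIX 5L the listing is typically produced with lsdev -Cc tty (an assumption here; the exact command was not preserved in this text). This sketch flags devices in the Defined state, which per the steps above should be removed with rmdev and redefined:

```shell
# Hypothetical sample of `lsdev -Cc tty` output; real output varies by node.
sample='tty0 Available 01-S1-00-00 Asynchronous Terminal
tty1 Defined 01-S2-00-00 Asynchronous Terminal'

# A tty that is Defined but not Available should be deleted (rmdev)
# and redefined via the smit tty fastpath, per steps 2 and 3 above.
printf '%s\n' "$sample" | awk '$2 == "Defined" { print $1 " is Defined; remove and redefine it" }'
```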
Defining the tty Device
To create a tty device on each node that is connected to the RS232 cable:
1. Enter smit tty
2. Select Add a TTY and press Enter. The SMIT Add a TTY panel appears, prompting you for a tty type.
3. Select tty rs232 Asynchronous Terminal and press Enter.
4. Select the parent adapter and press Enter.
5. Enter field values as follows:
6. Press Enter to commit the values.
Note: Regardless of the baud rate setting of the tty when it is created, all RS232 networks used by HACMP are brought up by RSCT with a default baud rate of 38400. Some RS232 networks that are extended to longer distances will require that the baud rate be lowered from the default of 38400.
For more information, see the section Changing an RS232 Network Module Baud Rate in the chapter on Managing the Cluster Topology in the Administration Guide.
Testing the Serial Connection
To ensure that the RS232 cable is properly configured and transmits data, run the following test after creating the tty device on both nodes.
Run this test while the tty device is not in use. If the cluster is active, remove the serial network dynamically from the configuration before running the test. Also, verify that the tty device is not in use by any other process.
To determine if the device is in use, run the fuser command:
The output lists the PID of any process which uses the device.
If the device is in use by RSCT, the output shows that a process hats_rs232_nim is accessing the device. After the network has been dynamically removed from the cluster configuration, no such process should exist.
In rare cases, the hats_rs232_nim process may not terminate during a dynamic removal of the network or a stop of the cluster services. In these cases, you should call IBM support. However, it is safe to terminate any leftover hats_rs232_nim process if the cluster is inactive on the local node.
Use the fuser command to terminate a process which accesses the tty device:
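The fuser output format, device name, a colon, then the PIDs of the processes using it, lends itself to a small extraction step so each PID can be inspected before anything is killed. The sample line below is hypothetical:

```shell
# Hypothetical `fuser /dev/tty1` output: the device, then the PIDs using it.
sample='/dev/tty1: 4060 7216'

# Extract just the PIDs, one per line, so each can be checked with
# `ps -p <pid>` before deciding to terminate it.
printf '%s\n' "$sample" | cut -d: -f2 | tr -s ' ' '\n' | sed '/^$/d'
```

This prints 4060 and 7216 on separate lines. On the real system, fuser -k <device> terminates the processes directly; run it only when the cluster is inactive on the local node.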
Running the stty Test
The stty test determines whether the serial connection allows the transmission of communications.
Running the stty Test on TTYs with RTS Flow Control Set
To perform the stty test:
1. On the receiving side, run:
2. On the sending side, run:
Running the stty Test on TTYs with XON or No Flow Control Set
To perform the stty test:
1. On the receiving side (node 2), run:
2. On the sending side, run:
If the nodes are able to communicate over the serial cable, both nodes display their tty settings and return to the prompt.
If the data is transmitted successfully from one node to another, then the text from the /etc/hosts file from the second node appears on the console of the first node. Note that you can use any text file for this test, and do not need to specifically use the /etc/hosts file.
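Conceptually, the data-transfer test above works like the following sketch, in which a named pipe stands in for the serial line (an analogy only; on the real cluster you redirect to the tty device, and the file names here are illustrative):

```shell
# A FIFO plays the role of the serial connection between the two nodes.
fifo=/tmp/serial_demo.$$
mkfifo "$fifo"

# "Receiving node": capture whatever arrives on the line.
cat "$fifo" > /tmp/received.$$ &

# "Sending node": push a text file across the line
# (the real test uses /etc/hosts, but any text file works).
printf 'host1 192.168.1.1\n' > "$fifo"
wait

# The received text appears on the first node's side.
cat /tmp/received.$$
rm -f "$fifo" /tmp/received.$$
```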
Defining the Serial Connection to HACMP
After you install and test the serial connection, define the connection as a point-to-point network to HACMP. For information about how to configure a serial network, see the Administration Guide.
Configuring an SP Switch
The SP Switch used by an SP node serves as a network device for configuring multiple clusters, and it can also connect clients. This switch is not required for an HACMP installation. When installed, the SP Switch default settings are sufficient to allow it to operate effectively in an HACMP cluster environment.
The HPS switch (the older version) differs from the SP Switch (the newer version): the SP Switch does not allow HACMP to control the Eprimary. You must upgrade to the SP Switch before installing HACMP. If you are currently running HACMP Eprimary management with an HPS switch, run the HACMP script to unmanage the Eprimary before upgrading the switch.
To ensure that Eprimary is set to be managed, enter the following:
If the switch is set to manage, before changing to the new switch, run the script:
Keep in mind the following points about the SP Switch in an HACMP configuration:
ARP must be enabled for the SP Switch network so that IP Address Takeover can work.
HACMP SP Switch base and service IP labels are alias addresses on the SP Switch css0 IP interface.
The css0 base IP address is unused and should not be configured for IP Address Takeover via IP Replacement. However, for IPAT via IP Aliases, the css0 base IP address should be configured as an HACMP base address.
Non-service IP labels are not allowed for SP Switch IP address takeover.
The alias service IP labels appear as ifconfig alias addresses on the css0 interface.
Service IP labels must be defined on a different subnet than the HACMP base IP label.
The netmask associated with the css0 base IP address is used as the netmask for all HACMP SP Switch network interfaces.
For more information on SP Switch networks, see the section on Planning for the SP Switch Network in the chapter on Planning Cluster Network Connectivity in the Planning Guide.
If you migrated a cluster that contains an SP Switch, see Chapter 3: Upgrading an HACMP Cluster.
Configuring for Asynchronous Transfer Mode (ATM)
Asynchronous Transfer Mode (ATM) is a networking technology and protocol suite based on packet switching. It is a connection-oriented protocol that uses virtual paths and channels to transfer data between devices.
HACMP supports two ATM protocols, Classic IP and LAN Emulation (LANE), for the configuration of cluster networks. Cluster networks defined on Classic IP are of cluster network type atm. Cluster networks defined on LANE are of the corresponding LAN type, that is, ether for LANE Ethernet, and token for LANE Token Ring.
ATM switches typically have inherent capabilities for fault tolerance. See the documentation for those products to determine how those recovery actions may integrate with HACMP.
Support of Classic IP and LANE on the Same Interface Card
ATM allows the configuration of multiple network interfaces and protocols over the same ATM device (atm#). HACMP allows multiple ATM clients to be configured on the same ATM device. Clients can belong to the Classic IP or LANE protocol types.
Note that interfaces that are configured over the same ATM device do not increase redundancy. To protect against single points of failure, each ATM network requires separate physical adapters for the service and non-service IP labels.
Configuring ATM Classic IP
An ATM Classic IP interface is either a Classic IP client or a Classic IP server. The server performs ATM address resolution for all clients and the connection setup between clients. Each logical IP subnet requires its own server. Clients maintain their own ARP cache. For packets sent to an IP address that is not contained in the ARP cache, the client sends a request to the server of its subnet, which sends a broadcast to determine the ATM address.
The current ATM Classic IP support in HACMP has the following restrictions:
A node belonging to the HACMP cluster cannot perform as a Classic IP server for a Classic IP network. Only Classic IP clients can be defined as HACMP interfaces.
The use of Alternate Hardware Addresses is not supported on ATM networks.
Configuring Classic IP for HACMP Cluster Networks
A cluster network consisting of service and non-service IP labels requires that you have two Classic IP servers configured, one for each IP subnet.
Before you can configure ATM Classic IP for cluster networks, the following must already be configured:
All device IP labels of a cluster network belong to the same IP subnet.
All non-service IP labels belong to a different IP subnet.
A Classic IP server exists for each IP subnet. The ATM addresses of the servers must be known.
Now you can configure ATM Classic IP clients on cluster nodes.
To configure ATM Classic IP clients on cluster nodes:
1. On each cluster node, configure the service and non-service ATM interfaces in AIX 5L to use the service and non-service Classic IP servers previously configured.
2. Test the configuration.
3. Define the ATM network to HACMP.
Testing the Configuration
If the ATM interface cards on which the interfaces have been configured are connected to the ATM network, and the ATM network is functional, the clients will register with the ATM switch and connect with the Classic IP server of the subnet to which they belong.
To test Classic IP client communication over the network, confirm the following:
1. The IP addresses on all nodes are reachable. Use the ping command to confirm this.
2. The ATM device is up and running. Use the ifconfig command to review the status of the ATM device.
> ifconfig at1
at1: flags=e000861<UP,NOTRAILERS,RUNNING,SIMPLEX,GROUPRT>
     inet 192.168.111.30 netmask 0xffffff00
If the RUNNING flag is set, the interface has connected with its Classic IP server and is operational.
3. The Classic IP client is registered with the Classic IP server. Use the arp command to confirm the registration of the client with its server. In the example output following step 4, the client at1 has registered with its server, server_192_168_111.
4. The ATM TCP/IP layer is functional. Use the arp command to confirm this.
The following example shows output of the arp command. The ATM layer is functional, since the first 13 bytes of the hardware address of the client at1 correspond to the address of the ATM switch.
> arp -t atm -a
SVC - at0 on device atm0 -
==========================
at0 (192.168.110.30) 47.0.5.80.ff.e1.0.0.0.f2.1a.39.65.42.20.48.1a.39.65.0
IP Addr              VPI:VCI Handle ATM Address
server_192_168_110 (192.168.110.99) 0:761 12 47.0.5.80.ff.e1.0.0.0.f2.1a.39.65.88.88.88.88.10.00.0
SVC - at1 on device atm1 -
==========================
at1 (192.168.111.30) 47.0.5.80.ff.e1.0.0.0.f2.1a.39.65.42.35.42.1f.36.22.1
IP Addr              VPI:VCI Handle ATM Address
server_192_168_111 (192.168.111.99) 0:772 23 47.0.5.80.ff.e1.0.0.0.f2.1a.39.65.88.88.88.88.11.00.1
Configuring ATM ARP Servers for Use by HACMP Nodes
Before configuring an ATM ARP server, install the ATM interfaces and the switch as described in your ATM product documentation. When installation is complete, do the following:
1. Configure an ATM ARP server for the HACMP Service subnet.
2. Configure an ATM ARP server for the HACMP Non-service subnet.
3. Determine the ATM server address for each ATM server.
Configuring ATM ARP Servers for Service Subnetworks
To configure an ARP server for the HACMP service subnetwork:
1. Enter smit chinet
2. Select at0 as the ATM network interface.
This interface will serve as the ARP server for the subnetwork 192.168.110 as shown in the following example of the fields on the Change/Show an ATM Interface panel.
Note: The Connection Type field is set to svc-s to indicate that the interface is used as an ARP server.
Configuring ATM ARP Servers for Non-Service Subnetworks
To configure an ARP server for an HACMP non-service subnetwork:
1. Enter smit chinet
2. Select at1 as the network interface.
The following example shows the values of the fields on the Change/Show an ATM Interface panel:
Note: The interface name is (at1) for the non-service interface; the Connection Type designates the interface as an ARP server, svc-s. The Alternate Device field is set to atm0. This setting puts at1 on atm0 with at0. The non-service subnet is 192.168.111.
Configuring ATM ARP Clients on Cluster Nodes
To configure ATM ARP clients on cluster nodes:
1. On each cluster node, configure the service and non-service ATM adapters in AIX 5L to use the service and non-service ATM ARP servers previously configured.
2. Test the configuration.
3. Define the ATM network to HACMP.
Configuring the Cluster Nodes as ATM ARP Clients
To configure HACMP cluster nodes as ATM ARP clients:
1. Use the smit chinet command to configure two ATM interfaces, one on each adapter:
at0 on atm0 for service
at1 on atm1 for non-service
2. To configure the service subnet, specify values for the following fields for these interfaces:
The Connection Type field is set to svc-c to indicate that the interface is used as an ATM ARP client. Because this ATM ARP client configuration is being used for the HACMP service subnet, the INTERNET ADDRESS must be in the 192.168.110 subnet. The ATM server address is the 20-byte address that identifies the ATM ARP server being used for the 192.168.110 subnet.
Note: If IPAT is enabled for the HACMP-managed ATM network, the INTERNET ADDRESS represents the non-service address. If IPAT is not enabled, the INTERNET ADDRESS represents the service address.
3. To configure the non-service subnet, specify values for the following fields for these interfaces:
The Connection Type field is set to svc-c to indicate that the interface is used as an ATM ARP client. Because this ATM ARP client configuration is being used for the HACMP non-service subnet, the INTERNET ADDRESS must be in the 192.168.111 subnet. The ATM server address is the 20-byte address that identifies the ATM ARP server being used for the 192.168.111 subnet.
Testing Communication over the Network
To test communication over the network after configuring ARP servers and clients:
1. Run the netstat -i command to make sure the ATM network is recognized. It is listed as at1.
2. Enter the following command on the first node:
where IP_address_of_other_node is the address in dotted decimal that you configured as the destination address for the other node.
3. Repeat Steps 1 and 2 on the second node, entering the destination address of the first node as follows:
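The interface check in step 1 can be sketched as a filter over the netstat -i listing. The sample output below is hypothetical; real output varies by system:

```shell
# Hypothetical `netstat -i` sample; column 1 after the header row
# holds the interface name.
sample='Name Mtu  Network     Address   Ipkts Opkts
en0  1500 192.168.1   node1     1200  900
at1  9180 192.168.111 node1-atm 300   250'

# Confirm the ATM interface at1 is present before running the ping test.
if printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }' | grep -q '^at1$'
then
    echo "at1 configured"
else
    echo "at1 missing"
fi
```

This prints "at1 configured" for the sample above; on the real node you would then proceed to step 2 and ping the other node's destination address.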
Defining the ATM Network to HACMP
After you have installed and tested an ATM network, define it to the HACMP cluster topology as a network. For information about how to define an ATM network in an HACMP cluster, see the Administration Guide.
Configuring ATM LAN Emulation
ATM LAN emulation provides an emulation layer between protocols such as Token-Ring or Ethernet and ATM. It allows these protocol stacks to run over ATM as if it were a LAN. You can use ATM LAN emulation to bridge existing Ethernet or Token-Ring networks—particularly switched, high-speed Ethernet—across an ATM backbone network.
LAN emulation servers reside in the ATM switch. Configuring the switch varies with the hardware being used. Once you have configured your ATM switch and a working ATM network, you can configure adapters for ATM LAN emulation.
To configure ATM LAN emulation:
1. Enter smit atmle_panel
2. Select Add an ATM LE Client.
3. Select one of the adapter types (Ethernet or Token-Ring). A popup appears with the adapter selected (Ethernet in this example). Press Enter.
4. SMIT displays the Add an Ethernet ATM LE Client panel. Make entries as follows:
5. Once you make these entries, press Enter. Repeat these steps for other ATM LE clients.
6. The ATM LE Clients should be visible to AIX 5L as network cards when you execute the lsdev -Cc adapter command.
7. Each virtual adapter has a corresponding interface that must be configured, just like a real adapter of the same type, and it should behave as such.
Defining the ATM LAN Emulation Network to HACMP
After you have installed and tested an ATM LE network, define it to the HACMP cluster topology as a public network. For information about how to define networks and interfaces in an HACMP cluster, see Chapter 1: Administering an HACMP Cluster in the Administration Guide.
You will define these virtual adapters to HACMP just as if they were real interfaces, except you cannot use Hardware Address Takeover (HWAT).
Special Considerations for Configuring ATM LANE Networks
Cluster verification does not check whether the IP Address is configured on the interface stored in the HACMPadapter object in the HACMP configuration database. When configuring a cluster adapter, the interface stanza in HACMPadapter is not specified. During topology synchronization or when applying a snapshot, the interface stanza gets assigned a value corresponding to the AIX 5L network configuration at that moment. If the IP Address is assigned to a different interface later, HACMPadapter no longer contains the correct information about the corresponding cluster adapter. Depending on the configuration, such an error may go unnoticed until a certain cluster event occurs, and then cause the cluster manager to exit fatally.
Therefore, after any changes to the cluster or AIX 5L network configuration, the cluster topology should be synchronized.
This mistake is likely to occur when configuring a cluster network over ATM. For example, you may try to correct the network configuration after a verification error or warning by deleting a network interface and moving an IP address to a different interface. A topology synchronization must be done to update the interface stanza in the HACMPadapter object in the HACMP configuration database.
Avoiding Default Gateways
When configuring TCP/IP for clients over ATM networks, no default gateways should be configured that would cause packets to be sent over networks other than ATM networks. If you have an incorrect ATM configuration or an ATM hardware failure, clients’ attempts to connect with their corresponding servers by sending packets out to the gateway would be unsuccessful. The generated traffic could severely impact the performance of the network that connects to the gateway and affect how your cluster performs.
Configuring ATM LANE Networks for HACMP
For HACMP networks that are configured over emulated LANs (ELAN), configure all interfaces of a given cluster network over clients that belong to the same ELAN.
Conceptually, all clients that belong to the same ELAN correspond to LAN adapters connected to the same wire. They can communicate with each other if they are on the same subnet. Multiple IP subnets can exist on the same ELAN.
Clients that belong to different ELANs generally cannot communicate with each other if bridging is not configured.
Bridged ELANs are not supported for an HACMP network configuration. If you intend to use a bridged ELAN configuration, ask an ATM network representative whether it conforms to the requirements of HACMP. If the interfaces of a cluster network do not belong to the same ELAN, the cluster may not generate a network-related event if there is a loss of connectivity. For example, if the interfaces on different nodes are assigned to LANE clients that belong to different ELANs, it is possible that no network-related cluster events would be generated indicating this configuration error, even though HACMP clients would not be able to connect to them.
The following figure shows a cluster consisting of two nodes (A, B) and three networks (net-1, net-2, net-3). One network is configured on clients belonging to ELAN 1. The adapters are on two subnets. Networks net-2 and net-3 are configured on clients belonging to ELAN 2.
Note: This configuration would not be supported by all ATM switches.
Switch Dependent Limitations for Emulated LANs in HACMP
Two clients that belong to the same ELAN and are configured over the same ATM device cannot register with the LAN Emulation Server (LES) at the same time. Otherwise, it cannot be determined when a client registered with the switch.
This is true for the A-MSS router, used standalone or in the IBM Nways 8260 or IBM Nways 8265 switching hub. The number of clients per ELAN and adapters that are allowed to register with the LAN Emulation server concurrently may be a user-configurable setting.
If this limitation is applicable to your configuration, avoid configuring multiple clients that belong to the same ELAN over the same ATM device. In particular, when configuring a cluster network over LANE, keep in mind the following:
No two cluster interfaces can be configured on clients that belong to the same ELAN and that are configured over the same ATM device. For any cluster interface that is configured over LANE, no other LANE client can belong to the same ELAN configured over the same ATM device. If this limitation is violated, a network related cluster event indicating a loss of connectivity may be generated, most likely after a cluster event that involves changes to clients on the same ATM network interface.
Configuring ELANs
When configuring ELANs, ensure the following:
All LANE clients in the entire ATM network (not only those used for HACMP) should be configured correctly and registered with the switch. The switch log should not indicate any errors.
It is not sufficient that each cluster interface configured over LANE is reachable from all other nodes. All interfaces of one cluster network should be configured on clients belonging to the correct ELAN.
Configuring Shared External Disk Devices
This section describes how to ensure that shared external disk devices are configured properly for the HACMP software. Separate procedures are provided for Fibre Channel, SCSI disk devices, IBM SCSI disk arrays, and IBM Serial Storage Architecture (SSA) disk subsystems.
Note: For information on installing and configuring OEM disks, see Appendix B: OEM Disk, Volume Group, and Filesystems Accommodation.
Configuring Shared Fibre Channel Disks
Use the following procedure to ensure access to Fibre Channel disks and to record the shared disk configuration on the Fibre Channel Disks Worksheet, as described in the Planning Guide. Use a separate worksheet for each node. You refer to the completed worksheets when you configure the cluster.
Note: The fileset ibm2105.rte must be installed on a node to properly display information for Fibre Channel devices in an IBM 2105 Enterprise Storage System.
To ensure that a node has access to shared Fibre Channel disks and to complete the Fibre Channel Disks Worksheet:
1. Record the name of the node in the Node Name field.
2. Ensure that the Fibre Channel adapters on the cluster nodes are available. Run the lsdev -Cc adapter command, for example:
lsdev -Cc adapter | grep fcs
fcs0 Available 31-08 FC Adapter
...
3. Record the names of the Fibre Channel adapters in the Fibre Channel Adapter field. Allow sufficient space to list the disks associated with each adapter.
4. Ensure that the disks associated with that adapter are available by running the lsdev -C and grep for the location shown in the output from the lsdev -Cc adapter command, for example:
lsdev -C | grep 31-08
fcs0 Available 31-08 FC Adapter
hdisk1 Available 31-08-01 Other FC SCSI Disk Drive
fscsi0 Available 31-08-01 FC SCSI I/O Controller Protocol Device
hdisk2 Available 31-08-01 Other FC SCSI Disk Drive
hdisk3 Available 31-08-01 Other FC SCSI Disk Drive
hdisk4 Available 31-08-01 Other FC SCSI Disk Drive
hdisk5 Available 31-08-01 Other FC SCSI Disk Drive
hdisk6 Available 31-08-01 Other FC SCSI Disk Drive
hdisk7 Available 31-08-01 Other FC SCSI Disk Drive
hdisk8 Available 31-08-01 Other FC SCSI Disk Drive
hdisk9 Available 31-08-01 Other FC SCSI Disk Drive
hdisk10 Available 31-08-01 Other FC SCSI Disk Drive
hdisk11 Available 31-08-01 Other FC SCSI Disk Drive
hdisk12 Available 31-08-01 Other FC SCSI Disk Drive
Output may vary depending on the disk device in use.
5. Record the names of the Fibre Channel disks associated with each Fibre Channel adapter in the Disks Associated with Adapter list.
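Steps 4 and 5 above amount to filtering the lsdev output down to the hdisks at the adapter's location code. A sketch over a few sample lines in that format:

```shell
# Sample lines in `lsdev -C` format, following the output shown above.
sample='fcs0 Available 31-08 FC Adapter
fscsi0 Available 31-08-01 FC SCSI I/O Controller Protocol Device
hdisk2 Available 31-08-01 Other FC SCSI Disk Drive
hdisk3 Available 31-08-01 Other FC SCSI Disk Drive'

# Keep only hdisk entries whose location matches the adapter's
# location code (31-08 in this example), for the worksheet.
printf '%s\n' "$sample" | awk '$1 ~ /^hdisk/ && $3 ~ /^31-08/ { print $1 }'
```

This prints hdisk2 and hdisk3; the adapter and protocol-device lines are excluded.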
If the list of disks is different than expected, or if disks are not available:
1. Make sure that the volume groups are configured correctly on the storage system.
2. Make sure that the Fibre Channel switch is set up correctly and that zoning is configured appropriately.
3. Check the World Wide Name by adapter:
lscfg -vpl fcs0
Look for an entry similar to the following:
Network Address.............10000000C92EB183
4. Ensure that the World Wide Name is configured correctly on both the disk device and the Fibre Channel switch.
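Extracting the World Wide Name from the lscfg -vpl output can be sketched as stripping the label and its dot leaders; the sample fragment below is hypothetical:

```shell
# Hypothetical fragment of `lscfg -vpl fcs0` output.
sample='        Network Address.............10000000C92EB183
        Device Specific.(Z0)........2002606D'

# Print only the bare WWN: drop everything up to and including
# the "Network Address" label and its run of dots.
printf '%s\n' "$sample" | sed -n 's/.*Network Address\.*//p'
```

This prints 10000000C92EB183, the value to verify against both the disk device and the Fibre Channel switch zoning.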
Configuring Shared SCSI Disks
As you review the installation, record the shared disk configuration on the Shared SCSI Disk Worksheet, as described in the Planning Guide. Use a separate worksheet for each set of shared SCSI disks. You refer to the completed worksheets when you configure the cluster.
To ensure that a SCSI disk is installed correctly and to complete the Shared SCSI Disk Worksheet:
1. Record the node name of each node that is connected to the shared SCSI bus in the Node Name field.
2. Enter the logical name of each adapter in the Logical Name field.
lscfg | grep scsi
In the command output, the first column lists the logical name of the SCSI adapters, for example scsi0, as shown in the following figure:
3. Record the I/O slot (physical slot) that each SCSI adapter uses in the Slot Number field. Use the lscfg -vpl command, for example:
lscfg -vpl scsi0
4. Record the SCSI ID of each SCSI adapter on each node in the Adapter field. To determine the SCSI IDs of the disk adapters, use the lsattr command, as in the following example to find the ID of the adapter scsi1:
lsattr -E -l scsi1 | grep id
Do not use wildcard characters or full pathnames on the command line for the device name designation.
In the resulting output, the first column lists the attribute names. The integer to the right of the id attribute is the adapter SCSI ID:
5. Record the SCSI IDs of the physical disks in the Shared Drive fields. Use the command lsdev -Cc disk -H
The third column of the command output is a numeric location with each row in the format AA-BB-CC-DD. The first digit (the first D) of the DD field is the SCSI ID.
Note: For SCSI disks, a comma follows the SCSI ID field of the location code since the IDs can require two digits, such as 00-07-00-12,0 for a disk with an ID of 12.
name    status    location    description
hdisk00 Available 00-07-00-00 73 GB SCSI Disk Drive
hdisk01 Available 00-07-00-10 73 GB SCSI Disk Drive
hdisk02 Available 00-07-00-20 73 GB SCSI Disk Drive
6. At this point, ensure that each SCSI device connected to the shared SCSI bus has a unique ID. A common configuration is to set the SCSI ID of the adapters on the nodes to be higher than the SCSI IDs of the shared devices. (Devices with higher IDs take precedence in SCSI bus contention.) For example, the adapter on one node can have SCSI ID 6 and the adapter on the other node can have SCSI ID 5, and the external disk SCSI IDs should be an integer from 0 through 4. Do not use SCSI ID 7, as this is the default ID given when a new adapter is installed.
7. Determine the logical names of the physical disks. The first column of the data generated by the lsdev -Cc disk -H command lists the logical names of the SCSI disks.
Record the name of each external SCSI disk in the Logical Device Name field, and the size in the Size field; the size is usually part of the description. Be aware that the nodes can assign different names to the same physical disk. Note these situations on the worksheet.
8. Ensure that all disks have a status of Available. The second column of the output generated by the lsdev -Cc disk -H command shows the status.
If a disk has a status of Defined (instead of Available) ensure that the cable connections are secure, and then use the mkdev command to make the disk available.
At this point, you have verified that the SCSI disk is configured properly for the HACMP environment.
Configuring IBM SCSI Disk Arrays
Ensure that the disk device is installed and configured correctly. Record the shared disk configuration on the Shared IBM SCSI Disk Arrays Worksheet, as described in the Planning Guide. Use a separate worksheet for each disk array. Refer to the completed worksheets when you define the cluster topology.
To confirm the configuration and complete the Shared IBM SCSI Disk Arrays Worksheet:
1. Record the node name of each node connected to the shared SCSI bus in the Node Name field.
2. Enter the logical name of each adapter in the Logical Name field.
lscfg | grep scsi
In the command output, the first column lists the logical name of the SCSI adapters, for example scsi0.
3. Record the I/O slot (physical slot) that each SCSI adapter uses in the Slot Number field. Use the lscfg -vpl command, for example:
lscfg -vpl scsi0
4. Record the SCSI ID of each SCSI adapter on each node in the Adapter field. To determine the SCSI IDs of the disk adapters, use the lsattr command, as in the following example to find the ID of the adapter scsi1:
lsattr -E -l scsi1 | grep id
Do not use wildcard characters or full pathnames on the command line for the device name designation.
In the resulting output, the first column lists the attribute names. The integer to the right of the id attribute is the adapter SCSI ID.
At this point, you have ensured that the disk array is configured properly for the HACMP software.
Configuring Target Mode SCSI Connections
This section describes how to configure a target mode SCSI connection between nodes sharing disks connected to a SCSI bus. Before you can configure a target mode SCSI connection, all nodes that share the disks must be connected to the SCSI bus, and all nodes and disks must be powered on.
Configuring the Status of SCSI Adapters and Disks
To define a target mode SCSI connection (tmscsi), each SCSI adapter on nodes that share disks on the SCSI bus must have a unique ID and must be Defined, known to the system but not yet Available. Additionally, all disks assigned to an adapter must also be Defined but not yet Available.
Note: The uniqueness of adapter SCSI IDs ensures that tmscsi devices created on a given node reflect the SCSI IDs of adapters on other nodes connected to the same bus.
To review the status of SCSI adapters you intend to use, enter the following:
lsdev -C | grep scsi
If an adapter is Defined, see Configuring Target Mode SCSI Devices in AIX 5L to configure the target mode connection.
To review the status of SCSI disks on the SCSI bus, enter the following:
lsdev -Cc disk
If either an adapter or disk is Available, follow the steps in the section Returning Adapters and Disks to a Defined State to return both the adapter (and its disks) to a Defined state so that they can be configured for target mode SCSI and set to an Available state.
Returning Adapters and Disks to a Defined State
For a SCSI adapter, use the rmdev command to return each Available disk associated with the adapter to a Defined state:
rmdev -l hdiskx
where hdiskx is the hdisk to be made Defined; for example:
rmdev -l hdisk3
Next, run the following command to return the SCSI adapter to a Defined state:
rmdev -l scsix
where scsix is the adapter to be made Defined.
If you are using an array controller, use the same rmdev command to return a router and a controller to a Defined state. Perform these steps after returning the disks to a Defined state and before changing the adapter.
When all controllers and disks are Defined, see the section Configuring Target Mode SCSI Devices in AIX 5L to enable the Target Mode connection.
Note: Target mode SCSI may be automatically configured depending on the SCSI adapter in use. In this case, skip ahead to the section on Defining the Target Mode SCSI Connection to HACMP.
Configuring Target Mode SCSI Devices in AIX 5L
To define a target mode SCSI device:
1. Enable the target mode interface for the SCSI adapter.
2. Configure (make available) the devices.
Complete both steps on one node, then on the second node.
Enabling the Target Mode Interface
To enable the target mode interface:
1. Enter smit devices
2. Select SCSI Adapter and press Enter.
3. Select Change/Show Characteristics of a SCSI Adapter and press Enter. SMIT prompts you to identify the SCSI adapter.
4. Set the Enable TARGET MODE interface field to yes to enable the target mode interface on the device (the default value is no). At this point, a target mode SCSI device is generated that points to the other cluster nodes that share the SCSI bus. Note, however, that the SCSI ID of the adapter on the node from which you enabled the interface will not be listed.
5. Press Enter to commit the value.
Configuring the Target Mode SCSI Device
After enabling the target mode interface, you must run cfgmgr to create the initiator and target devices and make them available.
To configure the devices and make them available:
1. Enter smit devices
2. Select Install/Configure Devices Added After IPL and press Enter.
3. Exit SMIT after the cfgmgr command completes.
4. Run the following command to ensure that the devices are paired correctly:
lsdev -Cc tmscsi
Repeat the procedures in the sections Enabling the Target Mode Interface and Configuring the Target Mode SCSI Device for other nodes connected to the SCSI bus.
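As a quick sanity check on that lsdev output, a sketch like the following (run against captured sample output; the device names and states here are hypothetical) flags any tmscsi device that is not yet Available:

```shell
# Sample `lsdev -Cc tmscsi` output (hypothetical sample data)
tmscsi_out='tmscsi1 Available 00-07-01-50 SCSI I/O Controller Initiator Device
tmscsi2 Defined 00-08-01-60 SCSI I/O Controller Initiator Device'

# Print any tmscsi device whose status (column 2) is not Available
printf '%s\n' "$tmscsi_out" | awk '$2 != "Available" { print $1, "is", $2 }'
# prints: tmscsi2 is Defined
```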
Configuring the target mode connection creates two target mode files in the /dev directory of each node:
/dev/tmscsinn.im. The initiator file that transmits data.
/dev/tmscsinn.tm. The target file that receives data.
Testing the Target Mode Connection
For the target mode connection to work, initiator and target devices must be paired correctly.
To ensure that devices are paired and that the connection is working after enabling the target mode connection on both nodes:
1. Enter the following command on a node connected to the bus.
cat < /dev/tmscsinn.tm
where nn must be the logical name representing the target node. (This command hangs and waits for the next command.)
2. On the target node, enter the following command:
cat filename > /dev/tmscsinn.im
where nn must be the logical name of the sending node and filename is a file.
The contents of the specified file are displayed on the node on which you entered the first command.
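The mechanics of this test can be sketched locally with a FIFO standing in for the /dev/tmscsinn.* device pair. This is only an illustration of the blocking cat handshake, not the real target mode devices:

```shell
# A FIFO stands in for the target mode device pair (illustration only)
fifo=/tmp/tmscsi_demo.$$
mkfifo "$fifo"

# "Target" side: block reading, as `cat < /dev/tmscsinn.tm` does
cat "$fifo" > /tmp/tmscsi_received.$$ &

# "Initiator" side: send data, as `cat filename > /dev/tmscsinn.im` does
echo "test message" > "$fifo"
wait

# The "target" side now holds the data the "initiator" sent
received=$(cat /tmp/tmscsi_received.$$)
echo "$received"
# prints: test message
rm -f "$fifo" /tmp/tmscsi_received.$$
```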
Note: Target mode SCSI devices are not always properly configured during the AIX 5L boot process. Ensure that all tmscsi initiator devices are available on all cluster nodes before bringing up the cluster. Use the lsdev -Cc tmscsi command to ensure that all devices are available. See the Troubleshooting Guide for more information regarding problems with target mode SCSI devices.
Note: If the SCSI bus is disconnected while running as a target mode SCSI network, shut down HACMP before reattaching the SCSI bus to that node. Never attach a SCSI bus to a running system.
Defining the Target Mode SCSI Connection to HACMP
After you install and test the target mode SCSI bus, define the target mode connection as a point-to-point network to HACMP. For information about how to configure a target mode SCSI network, see the Administration Guide.
Configuring Target Mode SSA Connections
This section describes how to configure a target mode SSA (tmssa) connection between HACMP nodes sharing disks connected to SSA on Multi-Initiator RAID adapters (FC 6215 and FC 6219). The adapters must be at Microcode Level 1801 or later.
You can define a point-to-point network to HACMP that connects all nodes on an SSA loop.
Changing Node Numbers on Systems in SSA Loop
By default, SSA node numbers on all systems are zero.
To configure the target mode devices:
1. Assign a unique non-zero SSA node number to all systems on the SSA loop.
Note: The ID on a given SSA node should match the HACMP node ID that is contained in the node_id field of the HACMPnode entry.
odmget -q "name = node_name" HACMPnode
2. To change the SSA node number use the following command:
chdev -l ssar -a node_number=number
3. To show the system’s SSA node number use the following command:
lsattr -El ssar
Configuring Target Mode SSA Devices in AIX 5L
After enabling the target mode interface, run cfgmgr to create the initiator and target devices and make them available.
To create the initiator and target devices:
1. Enter smit devices
2. Select Install/Configure Devices Added After IPL and press Enter.
3. Exit SMIT after the cfgmgr command completes.
4. Run the following command to ensure that the devices are paired correctly:
lsdev -C | grep tmssa
Repeat the procedures for enabling and configuring the target mode SSA devices for other nodes connected to the SSA adapters.
Configuring the target mode connection creates two target mode files in the /dev directory of each node:
/dev/tmssan.im, where n represents a number. The initiator file that transmits data.
/dev/tmssan.tm, where n represents a number. The target file that receives data.
Testing the Target Mode Connection
For the target mode connection to work, initiator and target devices must be paired correctly.
To ensure that devices are paired and that the connection is working after enabling the target mode connection on both nodes:
1. Enter the following command on a node connected to the SSA disks:
cat < /dev/tmssa#.tm
where # must be the number of the target node. (This command hangs and waits for the next command.)
2. On the target node, enter the following command:
cat filename > /dev/tmssa#.im
where # must be the number of the sending node and filename is any short ASCII file.
The contents of the specified file are displayed on the node on which you entered the first command.
3. You can also ensure that the tmssa devices are available on each system by using the following command:
lsdev -C | grep tmssa
Configuring Shared IBM SSA Disk Subsystems
When planning a shared IBM SSA disk subsystem, record the shared disk configuration on the Shared IBM Serial Storage Architecture Disk Subsystems Worksheet, as described in the Planning Guide. Use a separate worksheet for each set of shared IBM SSA disk subsystems.
To complete a Shared IBM Serial Storage Architecture Disk Subsystems Worksheet:
1. Record the node name of each node connected to the shared IBM SSA disk subsystem in the Node Name field.
2. Record the logical device name of each adapter in the Adapter Logical Name field.
lscfg | grep ssa
The first column of the command output lists the logical device names of the SSA adapters.
3. For each node, record the slot that each adapter uses in the Slot Number field. The slot number value is an integer value from 1 through 16. Use the lscfg -vpl command, for example:
lscfg -vpl ssa0
4. Determine the logical device name and size of each physical volume and record the values. On each node run the command:
lsdev -Cc disk | grep -i ssa
The first column of the command output lists the logical names of the disks.
Enter the name in the Logical Device Name field.
Record the size of each external disk in the Size field.
5. Ensure that all disks have a status of Available. The second column of the command output indicates the disk status.
If a disk has a status of Defined (instead of Available) ensure that the cable connections are secure, and then use the mkdev command to make the disk available. At this point, you have verified that the IBM SSA disk is configured properly for the HACMP software.
Defining the Target Mode SSA Connection to HACMP
After you install and test the SSA target mode connection, define the connection as a point-to-point network to HACMP. For information about how to configure a target mode network, see the Administration Guide.
Installing and Configuring Shared Tape Drives
HACMP supports both SCSI and Fibre Channel tape drives. Instructions for each are included here. For more general information, see the chapter on Planning Shared Disk and Tape Devices in the Planning Guide.
As you install an IBM tape drive, record the key information about the shared tape drive configuration on the Shared IBM SCSI or Fibre Channel Tape Drives Worksheet. Complete a separate worksheet for each shared tape drive.
Installing Shared SCSI Tape Drives
Complete the procedures in this section to install a shared IBM SCSI tape drive.
Prerequisites
Be sure to install the appropriate SCSI adapters. The installation procedures outlined in this chapter assume you have already installed these adapters. To install an adapter, if you have not done so, follow the procedure outlined in the documentation you received with the unit.
Installing an IBM SCSI Tape Drive
To install an IBM SCSI tape drive and complete a Shared IBM SCSI Tape Drive Worksheet, as described in the Planning Guide:
1. Review the shared tape configuration diagram you drew while planning tape drive storage needs.
2. Record the name of each node connected to this shared SCSI bus in the Node Name field.
3. Name each SCSI adapter used in this shared SCSI bus and record the name in the SCSI Adapter Label field of the configuration worksheet. For example, AIX 5L may name the adapters scsi0, scsi1, and so on.
4. Record the I/O slot of each SCSI adapter used in this shared SCSI bus in the Slot Number field of the configuration worksheet.
lscfg | grep scsi
In the command output, the second column lists the location code of the adapter in the format AA-BB. The last digit of that value (the last B) is the I/O slot number.
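As an illustration of this step, the following sketch extracts the slot number from a captured sample lscfg line (the adapter name and location code here are hypothetical):

```shell
# Sample `lscfg | grep scsi` line (hypothetical sample data); location code AA-BB in column 2
lscfg_out='scsi0 04-C0 Wide/Ultra-2 SCSI I/O Controller'

# The last character of the AA-BB location code is the I/O slot number
printf '%s\n' "$lscfg_out" | awk '{ print $1, substr($2, length($2), 1) }'
# prints: scsi0 0
```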
5. Record the logical device name of each adapter in the Logical Name field. The first column of the lscfg command output lists the logical name of the SCSI adapters.
6. Determine that each device connected to this shared SCSI bus has a unique SCSI ID.
The first time AIX 5L configures an adapter, it assigns the adapter card the SCSI ID 7, by default. Because each adapter on a shared SCSI bus must have a unique SCSI ID, you must change the SCSI ID of one or more of the adapters used in the shared SCSI bus. A common configuration is to let one of the nodes keep the default SCSI ID 7 and assign the adapters on the other cluster nodes the next higher SCSI IDs in sequence, such as 8 and 9. The tape drive SCSI IDs should later be set to an integer starting at 0 and going up. Make sure no SCSI device on the same bus has the same SCSI ID as any adapter. See step 8 for more information.
Note: You may want to set the SCSI IDs of the host adapters to 8 and 9 to avoid a possible conflict when booting one of the systems in service mode from a mksysb tape or other boot device, since this will always use an ID of 7 as the default. For limitations specific to a type of SCSI adapter, see the documentation for the device.
Note that the integer value in the logical device name (for example, the 1 in scsi1) is not a SCSI ID, but simply part of the name given to the configured device.
To determine the SCSI IDs of the tape drive adapters, use the lsattr command, specifying the logical name of the adapter as an argument. In the following example, the SCSI ID of the Fast/Wide adapter named scsi0 is obtained:
lsattr -E -l scsi0 | grep external_id
Do not use wildcard characters or full pathnames on the command line for the device name designation.
In the command output, the first column lists the attribute names. The integer to the right of the id (external_id) attribute is the adapter SCSI ID.
To change the ID of a SCSI adapter, power down all but one of the nodes along with all shared devices. On the powered-on node, use the chdev command to change the SCSI ID of the adapter from 7 to 6, as in the following example:
chdev -l 'scsi0' -a 'external_id=6' -P
Ensure that each SCSI adapter in the daisy chain has a unique ID and record these values in the SCSI Device ID: Adapter field of the configuration worksheet.
7. Shut down all nodes connected to the SCSI bus so that you can set the SCSI IDs for the tape drives and connect the cables. Use the shutdown command to shut down the nodes.
8. Assign each controller on the tape drive a SCSI ID that is unique on this shared SCSI bus.
Refer to your worksheet for the values previously assigned to the adapters. For example, if the host adapters have IDs of 8 and 9, you can assign the tape drives any SCSI ID from 0 through 6.
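A small sketch like the following (using hypothetical worksheet entries) can double-check that no SCSI ID is reused on the shared bus:

```shell
# Worksheet-style list of device names and SCSI IDs on one shared bus (hypothetical sample data)
ids='nodeA_scsi0 8
nodeB_scsi0 9
tape0 0
tape1 1'

# Flag any SCSI ID (column 2) that appears more than once
printf '%s\n' "$ids" | awk '{ if (seen[$2]++) { print "duplicate SCSI ID:", $2; bad = 1 } }
                            END { if (!bad) print "all SCSI IDs unique" }'
# prints: all SCSI IDs unique
```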
9. Record the SCSI ID of each tape drive in the SCSI Device ID: Tape Drive fields on the worksheet.
10. Connect the cables.
11. Power on the tape drive and all nodes; then reboot each node.
12. Run cfgmgr on one node at a time to complete the installation.
Logical Device Names
All nodes that are connected to the SCSI tape drive must have the same Logical Device Name (for example, /dev/rmt0). If the names differ (/dev/rmt0 and /dev/rmt1, for example), perform the steps in the following procedure.
To configure logical device names:
1. On the nodes with the lower numbers, put the tape device in the Defined state with the rmdev -l command. For example, run rmdev -l rmt0.
2. Enter smit tape
3. From the Tape Drive panel, select Change/Show Characteristics of a Tape Drive.
4. Select the drive you put in the Defined state and note its characteristics in the various fields.
5. From the Tape Drive panel, select Add a Tape Drive.
6. Use the information gathered to create a new Logical Device Name.
7. Review all nodes to ensure that the Logical Device Name is in the Available state and that the external tape drive has this same Logical Name.
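The name comparison behind this procedure can be sketched as follows (the per-node device names here are hypothetical sample data, as would be gathered from each node):

```shell
# Logical device names reported by each node for the same physical tape drive (sample data)
node1_name=rmt0
node2_name=rmt1

# The names must match on all nodes; otherwise one node's device must be redefined
if [ "$node1_name" = "$node2_name" ]; then
    result="names match"
else
    result="mismatch: rename the device on one node"
fi
echo "$result"
# prints: mismatch: rename the device on one node
```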
Installing and Configuring Shared Fibre Tape Drives
Complete the procedures in this section to install a shared IBM Fibre tape drive.
Prerequisites
Be sure to install the appropriate Fibre Channel adapters. The installation procedures outlined in this chapter assume you have already installed these adapters. To install an adapter, if you have not done so, follow the procedure outlined in the documentation you received with the unit.
The IBM Fibre Channel tape drive also requires the installation of IBM AIX 5L Enhanced Tape and Medium Changer Device Drives (Atape). Follow the installation directions that come with this device driver.
Installing an IBM Fibre Channel Tape Drive
To install an IBM Fibre Channel tape drive and complete a Shared IBM Fibre Tape Drive Worksheet, as described in the Planning Guide:
1. Review the shared tape configuration diagram you drew while planning tape drive storage.
2. Record the name of each node connected to this shared Fibre Channel bus in the Node Name field.
3. Name each Fibre Channel adapter used in this shared Fibre bus and record the name in the Fibre Adapter Label field. For example, AIX 5L may name the adapters fcs0, fcs1, and so on.
4. Record the I/O slot of each Fibre Channel adapter used in this shared Fibre bus in the Slot Number field.
To determine the slot number, use the lscfg command, as in the following example. Note that FC must be capital letters:
lscfg | grep FC
In the command output, the second column lists the location code of the adapter in the format AA-BB. The last digit of that value (the last B) is the I/O slot number.
5. Record the logical device name of each adapter in the Logical Name field. The first column of the output generated by the lscfg command lists the logical name of the Fibre channel adapters.
6. Connect the cables.
7. Power on the tape drive and all nodes; then reboot each node.
8. Run cfgmgr on one node at a time to complete the installation.
Logical Device Names
All nodes that are connected to the Fibre tape drive must have the same Logical Device Name (/dev/rmt0, for example). If they differ (/dev/rmt0 and /dev/rmt1, for example), perform the steps in the following procedure:
To configure logical device names:
1. On the node with the lower device number, put the tape device in the Defined state with the rmdev -l command. For example, run rmdev -l rmt0.
2. Enter smit tape
3. From the Tape Drive panel, select Change/Show Characteristics of a Tape Drive.
4. Select the drive you put in the Defined state and note its characteristics in the various fields.
5. From the Tape Drive panel, select Add a Tape Drive.
6. Use the information gathered to create a new Logical Device Name.
7. Review both nodes to assure that the Logical Device Name is in the Available state and that the external tape drive has this same Logical Name.
Configuring the Installation of a Shared Tape Drive
During the boot process, AIX 5L configures all the devices that are connected to the I/O bus, including the SCSI adapters. AIX 5L assigns each adapter a logical name of the form scsix or fscsix, where x is an integer. For example, an adapter could be named scsi0 or fscsi0 (Fast/Wide adapters are named scsix). After AIX 5L configures the SCSI adapter, it probes the SCSI bus and configures all target devices connected to the bus.
To confirm the installation of a tape drive:
1. Ensure that AIX 5L created the device definitions that you expected.
lsdev -Cc tape
For example, the lsdev command may return output that resembles the following:
Name  Status     Location     Description
rmt0  Available  00-02-00-00  Other SCSI Tape Device
2. Ensure that the logical device name for your tape device has a status of Available.
If the tape drive has a status of Defined (instead of Available) ensure that the cable connections are secure, and then use the mkdev command to make the tape available. Enter:
mkdev -l rmtx
where rmtx is the logical name of the defined tape drive.
At this point, your tape drive installation is complete.
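As an illustration of the status check above, the following sketch reads the status column for a tape device from captured sample lsdev output (sample data, not a live system):

```shell
# Sample `lsdev -Cc tape` output (hypothetical sample data)
tape_out='rmt0 Available 00-02-00-00 Other SCSI Tape Device'

# Extract the status column for rmt0; if it were Defined, `mkdev -l rmt0` would make it Available
status=$(printf '%s\n' "$tape_out" | awk '$1 == "rmt0" { print $2 }')
echo "$status"
# prints: Available
```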
For information on how to configure tape drives as resources in resource groups, see the Administration Guide.