Chapter 2: Creating Shared LVM Components
Setting up shared LVM components for an HACMP cluster depends on both of the following:
- The type of shared disk device
- The method of shared disk access.
This chapter describes how to define and configure these shared LVM components.
Note: If you are planning an IBM General Parallel File System (GPFS) cluster, see the network requirements in Appendix C: GPFS Cluster Configuration.
If you are planning to use OEM disks, volume groups, or filesystems in your cluster (including Veritas volumes) see Appendix B: OEM Disk, Volume Group, and Filesystems Accommodation.
Prerequisites
At this point, you should have completed the planning steps described in the Planning Guide.
You should also be familiar with how to use the Logical Volume Manager (LVM). For information about AIX 5L LVM, see the AIX 5L System Management Guide.
Logical Volumes
A logical volume is a set of logical partitions that AIX 5L makes available as a single storage unit—that is, the logical view of a disk. A logical partition is the logical view of a physical partition. Logical partitions may be mapped to one, two, or three physical partitions to implement mirroring.
In the HACMP environment, logical volumes can be used to support a journaled filesystem or a raw device.
Specify the superstrict disk allocation policy for the logical volumes in volume groups for which forced varyon is specified. This configuration:
- Guarantees that copies of a logical volume always reside on separate disks
- Increases the chances that forced varyon will be successful after a failure of one or more disks.
If you plan to use forced varyon for the logical volume, apply the superstrict disk allocation policy for disk enclosures in the cluster.
To specify the superstrict disk allocation policy in AIX 5L:
1. In SMIT, go to Add a Shared Logical Volume or Change a Shared Logical Volume.
2. Select Allocate each logical partition copy on a separate physical volume?
3. When using the superstrict disk allocation policy, specify the correct number of physical volumes for this logical volume. Do not use the default setting of 32 physical volumes.
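For reference, the same policy can be set from the command line. The volume group and logical volume names below are examples, and the flag values should be checked against the mklv documentation for your AIX 5L level:

```
# Hypothetical names (sharedvg, sharedlv); a command-line sketch of
# the SMIT steps above.
# -c 2  : two copies (mirrored)
# -s s  : superstrict allocation policy -- copies of a logical
#         partition never share a physical volume
# -u 2  : upper bound of physical volumes, set to the real number
#         used rather than the default of 32
mklv -y sharedlv -t jfs2 -c 2 -s s -u 2 sharedvg 100
```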
Default NFS Mount Options for HACMP
When performing NFS mounts, HACMP uses the default options soft, intr.
To set hard mounts or any other options on the NFS mounts:
1. Enter smit mknfsmnt
2. In the MOUNT now, add entry to /etc/filesystems or both? field, select the filesystems option.
3. In the /etc/filesystems entry will mount the directory on system RESTART field, accept the default value of no.
This procedure adds the options you have chosen to the /etc/filesystems entry created. The HACMP scripts then read this entry to pick up any options you may have selected.
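As an illustration, an /etc/filesystems entry created this way with hard mounts might look like the following (the mount point, device, and service label names are examples):

```
/localmnt:
        dev      = "/sharedfs"
        vfs      = nfs
        nodename = svc_label
        mount    = false
        options  = bg,hard,intr
        account  = false
```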
Creating and Configuring NFS Mount Points on Clients
An NFS mount point is required to mount a filesystem via NFS. In a non-concurrent resource group, all the nodes in the resource group NFS mount the filesystem. The NFS mount point must be outside the directory tree of the local mount point.
Once you create the NFS mount point on all nodes in the resource group, configure the NFS Filesystem to NFS Mount attribute for the resource group.
To create NFS mount points and to configure the resource group for the NFS mount:
1. On each node in the resource group, create an NFS mount point by executing the following command:
mkdir /mountpoint
where mountpoint is the name of the local NFS mount point over which the remote filesystem is mounted.
2. In the Change/Show Resources and Attributes for a Resource Group SMIT panel, the Filesystem to NFS Mount field must specify both mount points.
Specify the NFS mount point, then the local mount point, separating the two with a semicolon. For example:
/nfspoint1;/local1
/nfspoint2;/local2
3. (Optional) If there are nested mount points, nest the NFS mount points in the same manner as the local mount points so that they match up properly.
4. (Optional) When cross-mounting NFS filesystems, set the Filesystems Mounted before IP Configured field in SMIT for the resource group to true.
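The rule that the NFS mount point must lie outside the local mount point's directory tree can be checked with a small shell function; this is a sketch, and the path names are examples:

```shell
# Succeeds only when neither path lies inside the other's directory
# tree, mirroring the mount-point rule described above.
is_outside() {
    case "$1/" in "$2"/*) return 1 ;; esac
    case "$2/" in "$1"/*) return 1 ;; esac
    return 0
}

is_outside /nfspoint1 /local1 && echo "separate trees"
is_outside /local1/nfs /local1 || echo "nested: choose another mount point"
```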
Configuring HACMP to Use NFS Version 4
HACMP supports NFS protocol Version 4 (NFS V4). To ensure that HACMP properly identifies NFS filesystems mounted for NFS V4, you must:
1. Correctly set up NFS V4 configuration.
2. Make this configuration consistent on all nodes.
The fields needed to configure HACMP to use NFS V4 are described in this section. For more information about configuring NFS V4, see the AIX 5L documentation as listed in About This Guide.
To correctly configure HACMP for NFS V4, follow the steps in these sections:
Step 1: Configuring NFS and Changing to Version 4
Step 2: Configuring the NFS Local Domain
Step 3: Adding the NFS Directory to the Exports List
Step 4: Adding the NFS for Mounting
Step 5: Editing the /etc/filesystems file
Step 6: Editing the /usr/es/sbin/cluster/etc/exports file
Step 7: Removing Entries from /etc/exports
Step 1: Configuring NFS and Changing to Version 4
For HACMP to recognize NFS V4, change the NFS version in AIX 5L first on one node in the cluster, and then on the remaining nodes.
To change the NFS version on one node in the cluster:
1. Enter the fastpath smitty nfs
2. In SMIT, select Network File System (NFS) > Configure NFS on This System > Change Version 4 Server Root Node and press Enter.
3. Enter field values on the Change Version 4 Server Root Node panel as follows:
You must also change the NFS version on each node in the cluster in AIX 5L.
To change the NFS version on each node in the cluster:
1. Enter the fastpath smitty nfs
2. In SMIT, select Network File System (NFS) > Configure NFS on This System > Change Version 4 Server Public Node and press Enter.
3. Enter field values on the Change Version 4 Server Public Node panel as follows:
Step 2: Configuring the NFS Local Domain
To set the Local Domain on each node using AIX 5L SMIT:
1. Enter the fastpath smitty nfs
2. In SMIT, select Network File System (NFS) > Configure NFS on This System > Configure NFS Local Domain > Change NFS Local Domain and press Enter.
3. Enter the following field value on the Display Current NFS Local Domain panel:
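The same setting can be made with the chnfsdom command; the domain name here is an example:

```
# Set the NFS local domain (example name); run on every node.
chnfsdom cluster.example.com
# Running chnfsdom with no argument displays the current domain.
chnfsdom
```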
Step 3: Adding the NFS Directory to the Exports List
You must make available the local directories that Network File System (NFS) clients can mount.
To add the directory to the exports list on each node in the cluster:
1. Enter the fastpath smitty nfs
2. In SMIT, select Network File System (NFS) > Add a Directory to Exports List and press Enter.
3. Enter field values on the Add a Directory to Exports List panel as follows:
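From the command line, an equivalent export can be added with mknfsexp. The directory, addresses, and flag spellings below are assumptions for illustration; verify them against the mknfsexp documentation for your AIX 5L level:

```
# Export /fs/fs3big for NFS version 4 with sys security, read-write,
# granting root access to the listed addresses. -B both exports the
# directory now and adds the entry to /etc/exports.
# (Illustrative only -- confirm flags on your AIX level.)
mknfsexp -d /fs/fs3big -v 4 -S sys -t rw -r 192.168.20.1,192.168.20.2 -B
```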
Step 4: Adding the NFS for Mounting
Specify a filesystem to be appended to the /etc/filesystems file, making the filesystem available for mounting.
To add the filesystems for NFS mounting on each node in the cluster:
1. Enter the fastpath smitty nfs
2. In SMIT, select Network File System (NFS) > Add a File System for Mounting and press Enter.
3. Enter field values on the Add a File System for Mounting panel as follows:
Step 5: Editing the /etc/filesystems file
Modify the /etc/filesystems file on each node in the cluster. Do not copy the file to other nodes.
To modify the /etc/filesystems file on each HACMP cluster node:
1. Edit the /etc/filesystems file on the control workstation:
vi /etc/filesystems
2. Find the NFS mount point (/nfs1, /nfs2, /nfs3.1 in the following example).
3. Edit the nodename field, changing the hostname to the resource group service label (ether_svc_1 in the following example). In each stanza of the example below, nodename is set to the resource group service label.
/nfs1:
        dev       = "/fs/fs21"
        vfs       = nfs
        nodename  = ether_svc_1
        mount     = false
        options   = rw,bg,hard,intr,vers=4,sec=sys
        account   = false

/nfs2:
        dev       = "/fs/fse21"
        vfs       = nfs
        nodename  = ether_svc_1
        mount     = false
        options   = bg,hard,intr,vers=4,sec=sys
        account   = false

/nfs3.1:
        dev       = "/fs/fs31/fs31.1"
        vfs       = nfs
        nodename  = ether_svc_1
        mount     = false
        options   = bg,hard,intr,vers=4,sec=sys
        account   = false

Step 6: Editing the /usr/es/sbin/cluster/etc/exports file
Modify the HACMP /usr/es/sbin/cluster/etc/exports file on each node in the cluster to add the IP addresses for the network. You may edit the file on one node and copy it to other cluster nodes.
To modify the /usr/es/sbin/cluster/etc/exports file on each HACMP cluster node:
1. Edit the /usr/es/sbin/cluster/etc/exports file on the control workstation:
vi /usr/es/sbin/cluster/etc/exports
2. For each filesystem, there should be a line that looks like this:
/fs/fs3big -vers=4,sec=sys:krb5p:krb5i:krb5:dh:none,rw,root=192.168.20.1:192.168.20.1:192.168.20.2:192.168.20.3:192.168.21.1:192.168.21.2:192.168.21.3:192.168.30.1:192.168.30.2:192.168.30.3

Step 7: Removing Entries from /etc/exports
Modify the AIX 5L /etc/exports file (not the HACMP /usr/es/sbin/cluster/etc/exports file) on each HACMP cluster node by removing all the entries.
To remove all the entries in the /etc/exports file on each HACMP cluster node, run the command:
cat /dev/null > /etc/exports
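The same truncation technique can be exercised safely on a scratch copy first; the temporary file here stands in for /etc/exports:

```shell
# Demonstrate the truncation on a temporary stand-in for /etc/exports.
tmp=$(mktemp)
echo "/fs/fs3big -vers=4,sec=sys,rw" > "$tmp"    # a stand-in old entry
cat /dev/null > "$tmp"                           # empty the file
[ -s "$tmp" ] || echo "exports file is now empty"
rm -f "$tmp"
```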
Where You Go from Here
After you create your shared LVM components, complete the following steps to configure an HACMP server:
1. If you are upgrading an HACMP cluster configuration, see Chapter 3: Upgrading an HACMP Cluster.
2. If you need to install HACMP on your server nodes, see Chapter 4: Installing HACMP on Server Nodes.
3. Set up the Cluster Information Program.
Copy the clhosts.client file to each client node as /usr/es/sbin/cluster/etc/clhosts and edit the /usr/es/sbin/cluster/etc/clinfo.rc script as described in Chapter 5: Installing HACMP on Client Nodes.
4. Ensure that the network interfaces and shared external disk devices are ready to support an HACMP cluster.
5. Define shared LVM components, including creating shared volume groups, logical volumes, and filesystems for your cluster. Also define enhanced concurrent volume groups.
6. Customize your AIX 5L environment for HACMP.
7. If you want to set up a basic two-node configuration, use the Two-Node Cluster Configuration Assistant. For information about using the Two-Node Cluster Configuration Assistant, see Chapter 9: Creating a Basic HACMP Cluster.
8. (Optional) Configure the HACMP cluster using the Online Planning Worksheets application.
If you decided not to use the application, use the paper planning worksheets you filled out using the Planning Guide, and follow the steps in the Administration Guide to configure your cluster using SMIT or WebSMIT.
9. Test your configuration using the Cluster Test Tool.