

Appendix F. Configuring the Backup-Archive Client in a Microsoft Cluster Server Environment

You can install TSM locally on each node of a Microsoft Cluster Server (MSCS) environment cluster. You can also install and configure the TSM Scheduler Service for each cluster node to manage all local disks and each cluster group containing physical disk resources.

Note:
The TSM Journal Service is not supported in a Microsoft Cluster Server environment. Only one instance of the TSM Journal Service can be installed on a machine, and the service cannot dynamically start or stop monitoring shared disks.

For example, cluster mscs-cluster contains two nodes, node-1 and node-2, and two cluster groups containing physical disk resources, group-a and group-b. In this case, an instance of the TSM Backup-Archive Scheduler Service should be installed for node-1, node-2, group-a, and group-b. This ensures that proper resources are available to the TSM Backup-Archive client when disks move (or fail over) between cluster nodes.

The clusternode option ensures that TSM manages backup data logically, regardless of which cluster node backs up a cluster disk resource. Use this option for TSM nodes that process cluster disk resources, and not local resources. See Clusternode for more information.


Installing the TSM Backup-Archive Client

Install the TSM Backup-Archive client software on a local disk on each cluster node. The executables should reside in the same location on each local drive, for example:

   C:\Program Files\tivoli\tsm\baclient

Configuring TSM Backup-Archive Client To Process Local Nodes

To process local disk drives, edit the dsm.opt file on each local node and specify the following options:

nodename
If no value is specified, TSM uses the local machine name. See Nodename for more information.

domain
If no value is specified, TSM processes all local drives that are not owned by the cluster. See Domain for more information.

clusternode
Do not specify this option when processing local drives. See Clusternode for more information.
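
For example, a minimal dsm.opt for local backups on node-1 might contain entries similar to the following. The server address shown is an illustrative placeholder; substitute the values for your environment:

   * dsm.opt for local (non-clustered) backups on node-1
   * replace the server address with your TSM server
   tcpserveraddress  tsmserver.example.com
   nodename          node-1
   passwordaccess    generate
   * domain is omitted so that TSM processes all local drives
   * that are not owned by the cluster; clusternode is not specified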

You can configure the TSM Backup-Archive Scheduler Service to back up the local cluster nodes.
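
For example, a scheduler service for the local drives on node-1 could be installed with a command similar to the following. The service name, options file path, and password are illustrative; note that the /clusternode and /clustername options are not used for local backups:

   dsmcutil install SCHEDuler /name:"tsm scheduler service: node-1"
   /clientdir:"c:\Program Files\tivoli\tsm\baclient"
   /optfile:"c:\Program Files\tivoli\tsm\baclient\dsm.opt"
   /node:node-1 /password:nodepassword /autostart:yes /startnow:yes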


Configuring TSM Backup-Archive Client To Process Cluster Disk Resources

Ensure that TSM manages each cluster group that contains physical disk resources as a unique node. This ensures that TSM correctly manages all disk resources, regardless of which cluster node owns the resource at the time of backup.

Step 1: Identify the Cluster Groups to Manage

Use the Cluster Administrator program to determine which groups contain physical disk resources for TSM to process. Register a unique node name on the TSM server for each group. For example, cluster mscs-cluster contains two groups with physical disk resources: group-a, which owns drives q: and r:, and group-b.

In this example, the TSM administrator registers two node names: mscs-cluster-group-a and mscs-cluster-group-b. For example, to register mscs-cluster-group-a, the TSM administrator can enter the following command:

  register node mscs-cluster-group-a <password>
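
Similarly, to register mscs-cluster-group-b:

  register node mscs-cluster-group-b <password>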

Step 2: Configure the Client Options File

Configure the client options file dsm.opt for each cluster group. Locate the option file on one of the disk drives that are owned by the cluster group. For example, the option file for mscs-cluster-group-a should reside on either q: or r:. To configure the dsm.opt file for each cluster group, specify the following options:

nodename
Specify a unique name. For example:
  mscs-cluster-group-a

See Nodename for more information about this option.

domain
Specify the drive letters for the drives which are managed by the group. For example:
  q: r:

See Domain for more information about this option.

clusternode
Specify the Yes value. See Clusternode for more information about this option.

passwordaccess
Specify the generate value. See Passwordaccess for more information about this option.

errorlogname
Specify a unique error log name. See Errorlogname for more information about this option.
Note:
This is not the same errorlog file that the client uses for other operations. Ideally, this file should be stored on a cluster resource, but at the very least it should be stored in a location other than the client directory.

schedlogname
Specify a unique schedule log name. See Schedlogname for more information about this option.
Note:
This is not the same schedlog file that the client uses for other operations. Ideally, this file should be stored on a cluster resource, but at the very least it should be stored in a location other than the client directory.
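
Putting these options together, the dsm.opt file for mscs-cluster-group-a (stored, for example, as q:\tsm\dsm.opt) might contain entries similar to the following. The server address and log file locations are illustrative:

   * q:\tsm\dsm.opt - options for cluster group group-a
   * replace the server address with your TSM server
   tcpserveraddress  tsmserver.example.com
   nodename          mscs-cluster-group-a
   domain            q: r:
   clusternode       yes
   passwordaccess    generate
   errorlogname      q:\tsm\dsmerror.log
   schedlogname      q:\tsm\dsmsched.log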

Step 3: Configure the Scheduler Service

Configure a TSM Backup-Archive Scheduler Service for each cluster group using the TSM Client Service Configuration Utility, dsmcutil. Do not use the scheduler setup wizard to configure scheduler services in an MSCS environment.

Each service must have a unique name and must be available for failover (that is, it can be moved to the other nodes in the cluster).

To install the TSM Scheduler Service for group-a from machine node-1, ensure that node-1 currently owns group-a and issue the following command:

   dsmcutil install SCHEDuler /name:"tsm scheduler service: group-a"
   /clientdir:"c:\Program Files\tivoli\tsm\baclient"
   /optfile:q:\tsm\dsm.opt /node:mscs-cluster-group-a
   /password:nodepassword /validate:yes /autostart:yes /startnow:yes
   /clusternode:yes /clustername:mscs-cluster

This installs the service on node-1.

Note:
For more information about dsmcutil commands and options, see "Using the Dsmcutil Command".

Using Cluster Administrator, move group-a to node-2. From node-2, issue the same dsmcutil command above to install the service on node-2. Repeat this procedure for each cluster group.

Step 4: Create a Generic Service Resource for Failover

To add a Generic Service resource to each cluster group managed by TSM, use the Cluster Administrator as follows:

  1. Select the group-a folder under the MSCS-Cluster\Groups folder and select File > New > Resource from the dropdown menu.
  2. In the New Resource dialog, enter a name for the resource and select Generic Service as the resource type. Press Enter.
  3. In the Possible Owners dialog, ensure that all cluster nodes appear as possible owners. Press Enter.
  4. In the Dependencies dialog, add all physical disk resources as Resource Dependencies. Press Enter.
  5. In the Generic Service Parameters dialog, enter the service name you specified with the dsmcutil command, in the Service Name field. Leave the Startup Parameters field blank. Press Enter.
  6. In the Registry Replication dialog, add the registry key that corresponds to the node name. For example:

       SOFTWARE\IBM\ADSM\CurrentVersion\BackupClient\Nodes\mscs-cluster-group-a\servername

     Note:
     servername is the name of the TSM server that this client connects to. In this example, mscs-cluster-group-a is the node name.
  7. Select the new resource from the Cluster Administrator utility, and click File and then Bring Online from the dropdown menu.

Repeat this procedure for each cluster group managed by TSM.


Configuring the Web Client in an MSCS Environment

To use the TSM Web client in an MSCS environment, you must configure the native TSM client to run in an MSCS environment. See "Installing the TSM Backup-Archive Client" for detailed information about installing and configuring the native TSM client in an MSCS environment.

Configuring TSM Web Client To Process Cluster Disk Resources

After installing and configuring the native TSM client in an MSCS environment, perform the following steps.

Step 1: Identify the Cluster Groups to Manage

Please perform the steps under Step 1 of "Configuring TSM Backup-Archive Client To Process Cluster Disk Resources".

Step 2: Configure the Client Options File

Please perform the steps under Step 2 of "Configuring TSM Backup-Archive Client To Process Cluster Disk Resources".

In addition, specify the following option in the dsm.opt file for each cluster group:

httpport
Specify a unique TCP/IP port number that the web client uses to communicate with the client acceptor service associated with the cluster group. See Httpport for more information about this option.
Note:
It is unnecessary to specify the schedlogname option in the dsm.opt file for each cluster group.
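
For example, the dsm.opt file for each cluster group might add an entry similar to the following, with a different port for each group (the port numbers shown are illustrative):

   * in q:\tsm\dsm.opt for group-a
   httpport  1583

   * in the dsm.opt for group-b
   httpport  1584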

Step 3: Install a Client Acceptor Service and Client Agent

Install a unique client acceptor service and client agent for each cluster group and generate a password file.

To install the Client Acceptor Service for group-a from machine node-1, ensure that node-1 currently owns group-a and install a client acceptor service for the group with dsmcutil.
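
For example, a command similar to the following can be used. The service name matches the /partnername value of the client agent command below, and the options file is the one configured in Step 2; adjust the values for your environment:

   dsmcutil install cad /name:"tsm client acceptor: group-a"
   /optfile:q:\tsm\dsm.opt /node:mscs-cluster-group-a
   /password:nodepassword /autostart:no

This installs the service on node-1.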

To install the Client Agent Service for group-a from machine node-1, ensure that node-1 currently owns group-a and issue the following command:

   dsmcutil install remoteagent /name:"tsm client agent: group-a"
   /optfile:q:\tsm\dsm.opt /node:mscs-cluster-group-a
   /password:nodepassword /partnername:"tsm client acceptor: group-a"   
Note:
Do not use the /autostart:yes option.

For more information about dsmcutil commands and options, see "Using the Dsmcutil Command".

After installing the client acceptor service and client agent, generate a password file for the node name. For example, using the values from above, issue the command:

   dsmc query session -optfile="q:\tsm\dsm.opt" 
Note:
Do not use the -password option.

Using Cluster Administrator, move group-a to node-2. From node-2, issue the same commands above to install the services on node-2 and generate a password file. Repeat this procedure for each cluster group.

Step 4: Create a Network Name and IP Address Resource

Add a network name resource and an IP address resource for each group that is managed by the TSM client, using the Cluster Administrator.

To add an IP Address resource to each cluster group managed by TSM, use the Cluster Administrator as follows:

  1. Select the group-a folder under the MSCS-Cluster\Groups folder and select File > New > Resource from the dropdown menu.
  2. In the New Resource dialog, enter a name for the resource and select IP Address as the resource type. Press Enter.
  3. In the Possible Owners dialog, ensure that all cluster nodes appear as possible owners. Press Enter.
  4. In the Dependencies dialog, add all physical disk resources as Resource Dependencies. Press Enter.
  5. In the TCP/IP Address dialog, enter appropriate values for the address, subnet mask, and network. Press Enter.
  6. Select the new resource from the Cluster Administrator utility, and from the dropdown menu click File and then Bring Online.

To add a network name resource to each cluster group managed by TSM, use the Cluster Administrator as follows:

  1. Select the group-a folder under the MSCS-Cluster\Groups folder and select File > New > Resource from the dropdown menu.
  2. In the New Resource dialog, enter a name for the resource and select Network Name as the resource type. Press Enter.
  3. In the Possible Owners dialog, ensure that all cluster nodes appear as possible owners. Press Enter.
  4. In the Dependencies dialog, add the IP address resource and all physical disk resources as Resource Dependencies. Press Enter.
  5. In the Network Name Parameters dialog, enter a network name for GROUP-A. Press Enter.
  6. Select the new resource from the Cluster Administrator utility, and from the dropdown menu click File and then Bring Online.

The IP address and network name used to back up the disks in the cluster group are now resources in that group.

Repeat this procedure for each cluster group managed by TSM.

Step 6: Start the Web Client

  1. Start the TSM Client Acceptor Service for each resource group on each node.
  2. To start the Web client, point your browser at the IP address and httpport specified for the Resource Group. For example, if you used an IP address of 9.110.158.205 and specified an httpport value of 1583, open the web address: http://9.110.158.205:1583.

Alternatively, you may point your browser at the network name and httpport. For example, if you used a network name of cluster1groupa and specified an httpport value of 1583, open the web address: http://cluster1groupa:1583.

Note that the Web client connects to whichever machine currently owns the resource group. The Web client displays all of the local file spaces on that machine, but to ensure that files are backed up under the correct node name, you should back up only the files that belong to the resource group.

When failing back to the original node after a failover scenario, ensure that the remote agent service on the original machine is stopped. The remote agent may be stopped manually, or it will stop automatically after 20 to 25 minutes of inactivity. Because the remote agent is configured for manual startup, it will not start automatically if the machine on which it was running is rebooted.
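
For example, the remote agent service for group-a (using the service name from Step 3) can be stopped from a command prompt on the machine where it is running:

   net stop "tsm client agent: group-a"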

