Microsoft Cluster Server (MSCS) lets you join two Windows servers, or nodes, through a shared disk subsystem so that the nodes share data to provide high resource availability. The combination of these servers is called a cluster. Tivoli Storage Manager uses the failover capability of MSCS. Failover occurs when a software or hardware resource (for example, an application, a disk, or an IP address) fails. The resources migrate from the failed node to the remaining node, which takes over the TSM server resource group, restarts the TSM service, and provides access to administrators and clients.
The TSM server supports two cluster configurations:

- Active/passive: the cluster runs a single TSM server instance, which can fail over from one node to the other.
- Active/active: each node runs its own TSM server instance, and either node can take over the other's instance during a failure.
The active/passive configuration provides better performance during failover because the cluster runs only one TSM server instance. However, during normal operation, the active/active configuration uses resources more efficiently because both nodes do work. Clusters are costly if you use the second node only for failovers. In a TSM server-dedicated cluster whose nodes have comparable power, the performance of an active/active configuration degrades during failover because one node must manage both virtual TSM server instances. For example, if each node handles 100 clients in normal operation, one node must handle 200 clients during a failure.
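The capacity effect can be checked with simple arithmetic. The following Python sketch is purely illustrative (the function and client counts are assumptions for the example, not part of TSM); it estimates the per-node client load for each configuration in normal operation and during failover.

```python
def per_node_load(clients_per_instance, instances, nodes_up):
    """Total client sessions divided across the nodes still available."""
    return clients_per_instance * instances / nodes_up

# Active/active: two instances, 100 clients each (values from the example above).
print(per_node_load(100, instances=2, nodes_up=2))  # normal operation: 100.0 per node
print(per_node_load(100, instances=2, nodes_up=1))  # failover: 200.0 on the surviving node

# Active/passive: one instance, so the surviving node carries the same 100 clients either way.
print(per_node_load(100, instances=1, nodes_up=1))  # 100.0
```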
Server instances typically run on separate nodes. However, both instances can run on a single node, and it appears to users that they are accessing separate servers.
MSCS lets you place TSM server cluster resources into a virtual server. A virtual server is an MSCS resource group that looks like a Windows server. The virtual server has a network name, an IP address, one or more physical disks, and a service. A TSM server can be one of the virtual services provided by an MSCS virtual server. The virtual server name is independent of the name of the physical node on which the virtual server runs. The virtual server name and address migrate from node to node with the virtual server. Because virtual servers cannot share data, each virtual server has a separate database, recovery log, and set of storage pool volumes.
Each TSM server instance must have a private set of disk resources. Although nodes can share disk resources, only one node can actively control a disk at a time. You can run an instance of a TSM server on each node.
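To make these relationships concrete, the following Python sketch models a virtual server as a resource group whose identity (network name, IP address, private disks, and service) migrates between nodes as a unit. The class, node names, and addresses are illustrative assumptions, not MSCS or TSM interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualServer:
    """An MSCS-style resource group: the identity that clients see."""
    network_name: str            # e.g. "TSMSERVER1" (illustrative value)
    ip_address: str              # cluster IP address resource
    disks: list = field(default_factory=list)   # private disk resources (database, log, storage pools)
    service: str = "TSM Server"  # the clustered service resource
    owner_node: str = ""         # physical node currently hosting the group

    def fail_over(self, surviving_node: str) -> None:
        """Move the whole group to the surviving node.

        The network name, IP address, and disks migrate together,
        and the service restarts on the new owner. Clients keep
        using the same network name and address.
        """
        self.owner_node = surviving_node
        print(f"{self.network_name}: resources online on {surviving_node}, "
              f"restarting service '{self.service}'")

# Two virtual servers, each with its own private disks (virtual servers share no data).
tsmserver1 = VirtualServer("TSMSERVER1", "192.0.2.11", ["Disk E:", "Disk F:"], owner_node="NodeA")
tsmserver2 = VirtualServer("TSMSERVER2", "192.0.2.12", ["Disk G:", "Disk H:"], owner_node="NodeB")

# Node A fails: its resource group moves to Node B, which now hosts both instances.
tsmserver1.fail_over("NodeB")
```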
To enable TSM for high availability, run the TSM Cluster Configuration wizard. Figure 18 shows clustered TSM servers TSMSERVER1 and TSMSERVER2 running on Node A and Node B, respectively. Clients connect to TSMSERVER1 and TSMSERVER2 without knowing which node hosts each server. To a client, each TSM server appears to run on a virtual server (TSMSERVER1 or TSMSERVER2) rather than on a physical node.
Figure 18. Clustered Tivoli Storage Manager Servers
To connect to a TSM virtual server, clients use the virtual server name rather than the Windows server name. The virtual server name is implemented as a cluster network name resource and maps to the primary or backup node, depending on where the virtual server currently resides. Any client that uses WINS or directory services to locate servers automatically tracks the virtual server as it moves between nodes; no client modification or reconfiguration is required.
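For example, a backup-archive client option file points at the virtual server name rather than at a physical node name. The options below (COMMMETHOD, TCPSERVERADDRESS, TCPPORT, NODENAME) are standard client options; the values are illustrative and assume that TSMSERVER1 resolves through DNS or WINS and that the server listens on the default TCP/IP port 1500.

```
* dsm.opt - client options file (illustrative values)
* TCPSERVERADDRESS names the virtual server, not a physical node.
COMMMETHOD        TCPIP
TCPSERVERADDRESS  TSMSERVER1
TCPPORT           1500
NODENAME          WORKSTATION01
```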
In the example shown in Figure 19, Node A fails and Node B assumes the role of running TSMSERVER1. To a client, it appears that TSMSERVER1 was stopped and immediately restarted. Clients lose their connections to TSMSERVER1, and any active transactions are rolled back. The clients must then reconnect to TSMSERVER1.
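Conceptually, reconnecting after a failover is just retrying the same virtual server name until the service comes back online on the surviving node. The Python sketch below is an illustration only; the host name, port, and retry timing are assumptions, and the real backup-archive client manages its own reconnection. Any transaction that was in flight when the failover occurred is rolled back and must be resubmitted.

```python
import socket
import time

VIRTUAL_SERVER = "TSMSERVER1"   # virtual server name (illustrative)
PORT = 1500                     # assumed server TCP/IP port
RETRY_INTERVAL = 15             # seconds between attempts (assumption)
MAX_ATTEMPTS = 20

def reconnect():
    """Retry the virtual server name until the failover completes.

    The name resolves to the cluster IP address resource, so the same
    call works no matter which physical node currently owns the group.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            with socket.create_connection((VIRTUAL_SERVER, PORT), timeout=10):
                print(f"Reconnected to {VIRTUAL_SERVER} on attempt {attempt}")
                return True
        except OSError as exc:
            print(f"Attempt {attempt} failed ({exc}); retrying in {RETRY_INTERVAL}s")
            time.sleep(RETRY_INTERVAL)
    return False

if __name__ == "__main__":
    reconnect()
```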
Figure 19. Failover in a Tivoli Storage Manager Cluster