MSCS does not support the failover of tape devices. However, TSM can handle this type of failover pattern with the correct setup. TSM uses a shared SCSI bus for the tape devices. Each of the two nodes involved in the tape failover must contain an additional SCSI adapter card. The tape devices (library and drives) are connected to the shared bus. When failover occurs, the TSM server issues a SCSI bus reset during initialization. The bus reset is expected to clear any SCSI bus reserves held on the tape devices, which allows the TSM server to acquire the devices after the failover.
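The reserve-and-reset interaction above can be sketched as a minimal simulation. The following Python fragment is illustrative only: the class and method names are invented for this sketch, and TSM performs the equivalent steps internally when its server instance initializes on the surviving node.

```python
# Minimal simulation of SCSI reservation behavior during tape failover.
# All names here are illustrative, not TSM or MSCS identifiers.

class TapeDevice:
    """A device on the shared SCSI bus that honors SCSI reserves."""
    def __init__(self, name):
        self.name = name
        self.reserved_by = None   # node currently holding the reserve

    def reserve(self, node):
        if self.reserved_by not in (None, node):
            raise RuntimeError(f"{self.name} is reserved by {self.reserved_by}")
        self.reserved_by = node

class SharedScsiBus:
    """Shared bus connecting both cluster nodes to the tape devices."""
    def __init__(self, devices):
        self.devices = devices

    def reset(self):
        # A bus reset clears any reserves on attached devices, including
        # those held by a failed node that can no longer release them.
        for dev in self.devices:
            dev.reserved_by = None

# Node A holds the drives, then fails without releasing its reserves.
bus = SharedScsiBus([TapeDevice("drive0"), TapeDevice("drive1")])
for dev in bus.devices:
    dev.reserve("node-a")

# During initialization on node B, the TSM server issues a bus reset,
# which frees the stale reserves so node B can acquire the devices.
bus.reset()
for dev in bus.devices:
    dev.reserve("node-b")
```

Without the reset step, the second round of `reserve` calls would fail, which mirrors why the bus reset is essential to the failover.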
The following presents methods for terminating the shared SCSI bus. You must terminate the shared SCSI bus as part of the initial setup of SCSI tape failover, and again before you bring a server back online.
There are several different methods that can be used to terminate the shared SCSI bus:
SCSI controllers have internal termination that can be used to terminate the bus; however, this method is not recommended with Cluster Server. If a node is offline in this configuration, the SCSI bus will not be properly terminated and will not operate correctly.
Storage enclosures also have internal termination. This can be used to terminate the SCSI bus if the enclosure is at the end of the SCSI bus.
Y cables can be connected to devices if the device is at the end of the SCSI bus. A terminator can then be attached to one branch of the Y cable in order to terminate the SCSI bus. This method of termination requires either disabling or removing any internal terminators the device may have.
Trilink connectors can be connected to certain devices. If the device is at the end of the bus, a trilink connector can be used to terminate the bus. This method of termination requires either disabling or removing any internal terminators the device may have.
Whether you configure your system to include clusters depends on your business needs, and it requires a great deal of planning. Beyond ensuring the right type of hardware and the applicable software, the focus of clustering is the failover pattern. When a node fails or must be taken offline, which node or nodes in the cluster will pick up the transaction processing? In a two-node cluster, little planning is necessary. In a more complex arrangement, you must consider how your transaction processing is best handled. Account for a form of load balancing among your nodes so that you maintain peak performance. Also ensure that your customers see little lag and little drop in productivity.
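The failover-pattern question above can be made concrete with a small sketch. The following Python fragment uses hypothetical names (none are real MSCS or TSM identifiers) to show the idea of picking a surviving node for each virtual TSM server instance from an ordered preferred-owner list:

```python
# Hypothetical sketch of choosing a failover target from an ordered
# preferred-owner list, similar in spirit to how a cluster moves a
# resource group when a node fails. Names are invented for illustration.

def failover_target(preferred_owners, online_nodes):
    """Return the first preferred owner that is still online."""
    for node in preferred_owners:
        if node in online_nodes:
            return node
    return None  # no node available: the instance stays offline

# A four-node cluster running two virtual TSM server instances.
preferences = {
    "tsm-instance-1": ["node-a", "node-c", "node-d"],
    "tsm-instance-2": ["node-b", "node-d", "node-c"],
}

# node-a fails; both instances must land on surviving nodes.
online = {"node-b", "node-c", "node-d"}
placement = {inst: failover_target(p, online)
             for inst, p in preferences.items()}
# tsm-instance-1 moves to node-c; tsm-instance-2 stays on node-b.
```

Ordering the preference lists so that different instances fail over to different nodes is one simple way to express the load-balancing concern described above.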
MSCS requires each TSM server instance to have a private set of disk resources. Although nodes can share disk resources, only one node can actively control a disk at a time.
Is one configuration better than the other? To determine your best installation, you need to look at the differences in performance and cost. Assume you have a TSM server-dedicated cluster whose nodes have comparable power. During failover, performance may degrade because one node must manage both virtual TSM server instances. If each node handles 100 clients in normal operation, one node must handle 200 clients during a failure.
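The load arithmetic above can be written out explicitly. This small sketch mirrors the numbers in the example; the function name is made up for illustration and is not TSM terminology:

```python
# Illustrative load calculation for a clustered TSM configuration.
# The function name and the client counts are examples, not TSM output.

def clients_per_node(total_clients, total_nodes, failed_nodes=0):
    """Evenly spread clients across the nodes that remain online."""
    surviving = total_nodes - failed_nodes
    if surviving <= 0:
        raise ValueError("no surviving nodes to carry the load")
    return total_clients / surviving

normal = clients_per_node(200, 2)        # two nodes share 200 clients
degraded = clients_per_node(200, 2, 1)   # the survivor carries all 200
```

In the two-node case the per-node load doubles during failover, which is why node sizing should assume the degraded case rather than normal operation.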
The planning steps you might use and the end result of such planning are covered in the Tivoli Storage Manager for Windows Quick Start. Suffice it to say that clustering takes planning to ensure the optimal performance of your system.