IBM WebSphere XML Document Management Server, Version 7.0

Clustered deployment of IBM XDMS

Installation of IBM® XDMS in a clustered WebSphere® Application Server environment is the recommended deployment configuration.

Clustered deployments have additional requirements and considerations compared to a standalone deployment. The following requirements apply to clustered deployments:
  • A cluster must be created for each IBM XDMS application that you plan to install.
  • A data store must be configured for the JMS service integration bus (SIBus), as shown in the sketch after this list.
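
For reference, the SIBus and its data store can be defined with the wsadmin scripting client. The following is a minimal Jython sketch, not a definitive procedure: the bus name, cluster name, and data source JNDI name are placeholders, and the addSIBusMember data-store options should be verified against your WebSphere Application Server release.

    # Minimal wsadmin (Jython) sketch; bus, cluster, and JNDI names are
    # placeholders, not product defaults.
    AdminTask.createSIBus('[-bus XDMSBus -busSecurity false]')

    # Add the IBM XDMS cluster as a bus member whose messages are persisted
    # in a data store backed by an existing data source. Create the data
    # source and its database schema first, and verify these option names
    # against your WebSphere Application Server release.
    AdminTask.addSIBusMember('[-bus XDMSBus -cluster XDMSCluster -dataStore -createDefaultDatasource false -datasourceJndiName jdbc/sibus]')

    # Persist the configuration changes.
    AdminConfig.save()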

You should also consider the various hardware topologies that are available, paying special attention to the scalability considerations that are described in the topic Evaluating your hardware environment.

Cluster management

Because the IBM XDMS and the Aggregation Proxy are deployed as clusters, each must be fronted by a network element that routes requests to the cluster and handles request processing. For Session Initiation Protocol (SIP) traffic, the XDMS cluster must be fronted by a highly available set of WebSphere Application Server proxies that perform the request routing. This set of proxies is itself clustered and is fronted by an IP sprayer or load balancer.

For HTTP traffic destined for the Aggregation Proxy or the IBM XDMS, you can use either the WebSphere Application Server proxy or an equivalent HTTP router, such as IBM HTTP Server or an IP sprayer. In addition, if retry counts on failed authentication attempts are used, the proxy or router that fronts the Aggregation Proxy cluster must be configured to ensure affinity, for example based on client IP address.
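
In this context, affinity means that the router consistently maps a given client to the same back-end server, so that state such as failed-authentication retry counts accumulates in one place. The following Python sketch illustrates the idea with a simple hash of the client IP address; it is a conceptual illustration only, with placeholder host names, not the configuration mechanism of any particular proxy or load balancer.

    import hashlib

    # Hypothetical back-end list; a real deployment configures this in the
    # proxy or load balancer, not in application code.
    SERVERS = ["agp1.example.com", "agp2.example.com"]

    def pick_server(client_ip):
        # Hash the client IP so that the same client always reaches the
        # same Aggregation Proxy member.
        digest = hashlib.md5(client_ip.encode("utf-8")).hexdigest()
        return SERVERS[int(digest, 16) % len(SERVERS)]

    print(pick_server("192.0.2.10"))  # always the same member for this IP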

The following diagrams depict typical highly available configurations for Aggregation Proxy and IBM XDMS:
Figure 1. High availability Aggregation Proxy
Diagram of a high-availability Aggregation Proxy cluster, showing two Aggregation Proxy servers providing access to a set of XDMS clusters
Figure 2. High availability XDMS
High availability XDMS configuration, showing redundant WebSphere servers, XDM servers, and database servers

High availability and scaling

The IBM XDMS components use WebSphere Application Server clustering technology to achieve a highly available and scalable architecture. The Aggregation Proxy and the IBM XDMS are independently scalable because each is deployed in its own cluster.

In addition to WebSphere Application Server, both the Aggregation Proxy and the IBM XDMS provide core functionality that can enhance the scalability of the end-to-end solution. You can deploy clusters for the IBM XDMS in different combinations of domain and Application Unique ID (AUID).

Application Usage Identifier partitioning

All of the AUIDs that the system supports can be hosted on the same IBM XDMS, or subsets of them can be hosted on a dedicated IBM XDMS or IBM XDMS cluster. Partitioning AUIDs across separate clusters improves the overall scalability and supportable throughput of the system as a whole. For example, you can host your resource-lists documents on a separate high-availability cluster and database from your presence-rules documents. Aggregation Proxy request forwarding and routing controls which IBM XDMS receives each request.
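
Routing by AUID is possible because the AUID appears as the first path segment of every XCAP document URI, following the XCAP root (RFC 4825). The following Python sketch builds such URIs; the host name and user identity are illustrative placeholders.

    # Build an XCAP document URI: xcap-root / AUID / "users" / XUI / document.
    # The host and user values below are placeholders for illustration.
    def xcap_document_uri(xcap_root, auid, xui, document):
        return "%s/%s/users/%s/%s" % (xcap_root, auid, xui, document)

    # The Aggregation Proxy can inspect the AUID segment of the path and
    # forward the request to the cluster that hosts that application usage.
    print(xcap_document_uri("http://xcap.example.com",
                            "resource-lists", "sip:alice@example.com", "index"))
    print(xcap_document_uri("http://xcap.example.com",
                            "pres-rules", "sip:alice@example.com", "index"))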

Domain partitioning

Using Aggregation Proxy request forwarding, you can configure different IBM XDMS instances to support, or be dedicated to, certain subscriber domains. This capability makes it possible to support multiple user-space partitions when the total user population is too large for a single IBM XDMS cluster, or when different domains have different security requirements.

AUID partitioning can be used in conjunction with domain partitioning: the Aggregation Proxy first determines the subset of XDMS applications that support the AUID, and then selects from that subset the instance whose domain matches the XUI of the requested user.
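
As a hypothetical illustration of that two-step decision (the real Aggregation Proxy is driven by its forwarding configuration, not by user code), the following Python sketch first narrows the candidates by AUID and then matches the domain taken from the XUI, falling back to a default cluster.

    # Hypothetical routing table: (AUID, domain) -> cluster endpoint.
    # A domain of None marks the default cluster for that AUID.
    ROUTES = {
        ("resource-lists", "us"): "http://xdms-rl-us.example.com",
        ("resource-lists", None): "http://xdms-rl-default.example.com",
        ("pres-rules",     None): "http://xdms-pr.example.com",
    }

    def route(auid, xui):
        # Step 1: restrict to clusters that support the requested AUID.
        # Step 2: within that subset, match the domain part of the XUI,
        # falling back to the default (None) entry for the AUID.
        domain = xui.split("@", 1)[1] if "@" in xui else None
        return ROUTES.get((auid, domain), ROUTES.get((auid, None)))

    print(route("resource-lists", "sip:bob@us"))             # "us" cluster
    print(route("resource-lists", "sip:alice@example.com"))  # default cluster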

The following is a diagram of a typical deployment that uses both partitioning techniques for scaling:
Figure 3. Subscriber and AUID partitioning
Subscriber and AUID partitioning, showing clusters for the various applications (Shared List and Presence Rules)
In this example, pres-rules and resource-lists are partitioned across three different clusters, where resource-lists are additionally partitioned across two domains (a default domain and a specific domain called “us”).
Note: SIP clients, which can reside on the same physical device as the XML Configuration Access Protocol (XCAP) client, are aware of the specific IBM XDMS cluster endpoint address, whereas the XCAP client is aware of the Aggregation Proxy endpoint only.

Remember that the clustering techniques for high availability and scaling can also be combined with the partitioning techniques.

Installation prerequisites for clustered deployments

Before moving on to installation, make sure that you have met the following prerequisites:
  • WebSphere Application Server version 6.1.0.x or 7.0.0.1 is installed, with a deployment manager and at least one managed node.
  • You have federated all nodes into the deployment manager cell.
  • A WebSphere user account repository is created. A federated Lightweight Directory Access Protocol (LDAP) repository is recommended. Refer to the WebSphere Application Server 7.0 Information Center for more instructions.
  • DB2: The IBM DB2® Enterprise Server Edition version 9.5 FixPak 1 server is installed, along with the license for pureXML®.
  • DB2: The IBM DB2 Enterprise Server Edition version 9.5 FixPak 1 client JDBC library JAR files are installed in the same path on every WebSphere Application Server node in the cluster. This allows the WebSphere variables for the JDBC driver to be declared at the cell level.
  • Oracle: Oracle Database version 11.1.0.7 is installed.
  • Oracle: One of the following Oracle clients is installed on every WebSphere Application Server node in the cluster:
    • Oracle Database Client version 11.1.0.7
    • Oracle Database Instant Client Package - Basic, version 11.1.0.7
  • You have created a cluster for each IBM XDMS application that you plan to install (see the sketch following this list).
Note: Ensure that your application server node names do not contain the hyphen (-) character. If necessary, you can use the WebSphere Application Server renameNode command to change the names.
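
For reference, clusters and their members can be created with the wsadmin scripting client. The following is a minimal Jython sketch with placeholder cluster, node, and member names; adapt it to your cell topology.

    # Minimal wsadmin (Jython) sketch; cluster, node, and member names are
    # placeholders. Create one cluster per IBM XDMS application.
    for cluster in ['XDMSCluster', 'AggregationProxyCluster']:
        AdminTask.createCluster('[-clusterConfig [-clusterName %s]]' % cluster)

    # Add a member on a federated node; repeat on additional nodes for
    # redundancy.
    AdminTask.createClusterMember('[-clusterName XDMSCluster '
        '-memberConfig [-memberNode node01 -memberName xdms01]]')

    # Persist the configuration changes.
    AdminConfig.save()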


