Using JNDI in a Clustered Environment

WebLogic’s JNDI implementation can be used in a clustered environment. Indeed, it is JNDI that provides the bedrock of many of WebLogic’s clustered services. For instance, an EJB may be deployed to a number of servers in an object-tier cluster. Servlets in the web tier can look up the EJB in the object tier’s JNDI tree and obtain access to one of the servers hosting the EJB; context objects can even use load-balancing logic to choose a server instance. When an EJB is deployed to several servers in the clustered object tier, the JNDI tree is updated with a cluster-aware EJB stub that records the location of each server instance hosting the EJB. Moreover, this knowledge is automatically distributed throughout the JNDI trees in the cluster. This magic is implemented partly by WebLogic’s clustered JNDI implementation, which we will now examine.

Creating a Context in a Cluster

In a clustered environment, an object may be bound to the JNDI tree of an individual server, or it may be replicated and bound to all servers in the cluster. If the JNDI binding exists on one server only, a client must explicitly connect to that server when establishing the initial context, as discussed earlier. In most cases, though, the JNDI binding will be replicated to all servers in the cluster, in which case you need only specify a name representing the WebLogic cluster, not an individual member. When creating an initial context to a cluster, WebLogic automatically chooses among the members of the cluster and creates a single context bound to that one member.

The next example shows how a client can look up an object that is available to a cluster-wide JNDI tree:

Context ctx = null;
Hashtable env = new Hashtable( );
env.put(Context.INITIAL_CONTEXT_FACTORY,
        "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://mycluster:8001");

//connect to the cluster-wide JNDI tree 
ctx = new InitialContext(env);
//now use the JNDI Context to look up objects

In this case, the value for the PROVIDER_URL property uses the DNS name mycluster, which resolves to the addresses of each server in the cluster.

Typically, the address is represented as a DNS name. Alternatively, you could specify a comma-delimited list of server addresses to access the cluster-wide JNDI tree:

env.put(Context.PROVIDER_URL,"t3://ManagedServer1:7001,
          ManagedServer2:7002,ManagedServer3:7003");

If all the members of the cluster use the same T3 listen port, you also could set the same property as follows:

env.put(Context.PROVIDER_URL,"t3://ManagedServer1, 
     ManagedServer2,ManagedServer3:7001");

There are two crucial aspects to consider when establishing a JNDI context in a clustered environment:

At creation time, the context factory performs a round robin between available servers

If you supply multiple addresses, or a DNS name that maps to multiple addresses, successive requests to create a Context object round-robin among the supplied addresses before each context attaches to a particular cluster member. If one of the servers in the list becomes unavailable, requests to create a new Context object continue to round-robin among the remaining available servers in the list.

A lookup( ) on the Context object can transparently fail over to another live server

Even though the Context object is bound to a particular server in the list based on a round-robin scheme, it is cluster-aware — i.e., it is aware of the locations of all members participating in the cluster. If the server to which the Context object is bound fails, calls to the Context object will automatically fail over to another available server.

Let’s take the following code as an example, where we have two servers in a cluster, as well as a data source targeted to the cluster:

Hashtable env = new Hashtable( );
env.put(Context.INITIAL_CONTEXT_FACTORY,
         "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://10.0.10.14:7001,10.0.10.10:7001");
Context ctx = null;
for (int i = 0; i < 5000; i++) {
  ctx = new InitialContext(env);
  DataSource ds = (DataSource) ctx.lookup("myds");
  // Do some work
}

Because we create a Context object inside the loop, context creation will continue to round-robin between the two servers in the cluster. Thus, each time we look up the data source, it is returned alternately from the first and the second server. If we bring down one of the two servers, this round-robin behavior will end. The context factory will detect the failed server, and the data source will then be returned from the running server only.

Ordinarily, you would not create the context in a loop as we have done here, but rather, create it once, and then use this cached Context object multiple times. The example, however, is indicative of what happens when multiple calls are made to create an initial context. The behavior is slightly different if we change the code a little:

Hashtable env = new Hashtable( );
env.put(Context.INITIAL_CONTEXT_FACTORY,
         "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://10.0.10.14:7001,10.0.10.10:7001");
Context c = new InitialContext(env);
for (int i = 0; i < 5000; i++) {
  DataSource ds = (DataSource) c.lookup("DS");
  // Do some work
}

In this case, the context immediately will be bound to one of the servers supplied in the provider URL, again using a round-robin scheme. Now, each time we execute the JNDI lookup within the loop, the same data source will be returned. Because we’ve created the context only once, the resulting context will not switch over to another server unless there is a failure.

In both cases, if the server to which the context is bound fails, the context will switch to the next server using a round-robin algorithm, and continue to return the data source replica bound to that server.

J2EE Resources and RMI Stubs

WebLogic’s JNDI primarily serves as a naming service for J2EE resources. Each such service is represented by an RMI stub bound in the JNDI tree. For example, when you create and deploy an EJB, an RMI stub representing the EJB is bound in the JNDI tree, and thereby made available to other servers. The stub marshals client requests to the actual J2EE resource, typically using RMI, allowing clients to access J2EE resources on remote machines.
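For instance, a client obtains the cluster-aware stub for an EJB through an ordinary JNDI lookup. The following sketch assumes a hypothetical EJB bound under the JNDI name OrderHome with a remote home interface OrderHome; neither name is part of WebLogic:

Hashtable env = new Hashtable( );
env.put(Context.INITIAL_CONTEXT_FACTORY,
        "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://mycluster:8001");
Context ctx = new InitialContext(env);
//the lookup returns the cluster-aware RMI stub, not the EJB itself
Object obj = ctx.lookup("OrderHome");
//narrow the stub to the remote home interface before use
OrderHome home =
    (OrderHome) javax.rmi.PortableRemoteObject.narrow(obj, OrderHome.class);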

To be more precise, WebLogic binds cluster-aware RMI stubs for J2EE resources in the JNDI tree. A cluster-aware RMI stub knows the locations of all servers to which the J2EE resource is deployed. If the EJB is deployed to a cluster, the RMI stub bound in each JNDI tree records the locations of the servers hosting the actual resource, in this case the servers to which the EJB was deployed. RMI stubs are small, so while the actual RMI object may reside on a single server, its stub can be replicated cheaply across all members of the WebLogic cluster.

In a clustered environment, each member of the cluster maintains its own copy of the cluster-wide JNDI tree. Thus, when a new server joins a WebLogic cluster and resources are deployed to it, two things happen:

  • The cluster-aware RMI stubs on the other servers are updated to include the location of each object deployed on the new server. For instance, if an EJB is deployed to a cluster that includes the new server, then once the new server has deployed the EJB, the cluster-aware RMI stubs for that EJB on the other servers are updated to list the new server as a hosting location.

  • The new server builds its own copy of the JNDI tree after collecting information about all cluster-aware J2EE resources and all objects that are pinned to the server itself.

If an existing server fails, the RMI stubs bound to other servers are updated to reflect the server’s failure.

This behavior reinforces the way we think about initial contexts. When you create an initial context to a cluster, a single server instance is chosen and you interact with the JNDI tree on that server instance. However, because cluster-aware RMI stubs representing J2EE resources are replicated to the JNDI trees on each server, it shouldn’t matter which server instance you connected to when you created the context.

Binding Custom Objects

Usually, J2EE resources are represented by RMI objects, which are bound into the cluster-wide JNDI tree at deployment. You also can programmatically bind a custom, non-RMI object to a cluster-wide JNDI tree. By default, a custom object bound to the JNDI tree is replicated automatically across all members of the WebLogic cluster, which makes the JNDI binding available on all servers in the cluster. However, even though the object is replicated, if the original server used to bind the custom object fails, the object is removed from the JNDI trees of the remaining servers in the cluster. For this reason, WebLogic’s JNDI tree is not an ideal candidate for a distributed object cache implementation!

In addition, if you alter the state of an object that is already bound to the cluster-wide JNDI tree, those changes will not be replicated to the other servers in the cluster. The changes to the object’s state are broadcast to the other members of the WebLogic cluster only if you subsequently unbind and rebind the custom object. This further emphasizes that JNDI should not be used as a distributed object cache. We recommend using a third-party solution if such functionality is needed.
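For example, here is a minimal sketch of propagating a state change under the default replicated bindings, assuming ctx is an initial context created as shown earlier. The Counter class and the JNDI name myCounter are hypothetical, and the custom object must be serializable for the binding to be replicated:

Counter counter = new Counter( );
//the bind operation replicates the binding across the cluster
ctx.bind("myCounter", counter);

counter.increment( );
//the state change above is local only and is NOT replicated

//to broadcast the new state, unbind and rebind the object
ctx.unbind("myCounter");
ctx.bind("myCounter", counter);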

You can, however, alter the default behavior and disable the replication of JNDI bindings. The weblogic.jndi.WLContext interface supports properties that can be used when establishing the initial JNDI context in a clustered environment. The value of the REPLICATE_BINDINGS property determines whether any modifications to the server’s JNDI tree are replicated across other members of the WebLogic cluster. The following code sample shows how an object can be bound to the server’s JNDI tree, without replicating the binding to other servers in the cluster:

Hashtable env = new Hashtable( );
env.put(Context.INITIAL_CONTEXT_FACTORY,
        "weblogic.jndi.WLInitialContextFactory");
env.put(Context.PROVIDER_URL, "t3://ManagedServer1:7001");
env.put(WLContext.REPLICATE_BINDINGS, "false");
try {
  //connect to the server's JNDI tree 
  Context ctx = new InitialContext(env);
  //bind a custom object
  ctx.bind("foo_bar", myObject);
}
catch (NamingException e) {
  //handle JNDI exceptions
}

Here the Context.PROVIDER_URL property is assigned the address of a single server within a WebLogic cluster, and the REPLICATE_BINDINGS property is set to false. This means that any changes to the server’s JNDI tree as a result of the bind operation will not be propagated to other servers in the cluster. It also means that a client must look up the custom object from the server to which the object was bound. The custom object will not be visible in JNDI trees on other servers.

This behavior can be put to good use. As we have learned, custom objects that are replicated in the cluster-wide JNDI tree are owned by a particular hosting server; when that server goes down, they are removed from the cluster-wide JNDI tree. If you really need to make a custom object available to all members of the cluster and ensure that the binding is resistant to server failures, you should bind it separately to each server in the cluster, as in the previous example, without replicating the JNDI bindings.
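The following sketch illustrates this approach. The server addresses and the JNDI name foo_bar follow the earlier example, and myObject is the custom object to bind; the loop simply repeats the non-replicated bind against each member of the cluster:

String[] urls = {"t3://ManagedServer1:7001",
                 "t3://ManagedServer2:7001",
                 "t3://ManagedServer3:7001"};
for (int i = 0; i < urls.length; i++) {
  Hashtable env = new Hashtable( );
  env.put(Context.INITIAL_CONTEXT_FACTORY,
          "weblogic.jndi.WLInitialContextFactory");
  env.put(Context.PROVIDER_URL, urls[i]);
  env.put(WLContext.REPLICATE_BINDINGS, "false");
  try {
    //bind the object to this member's JNDI tree only
    Context ctx = new InitialContext(env);
    ctx.bind("foo_bar", myObject);
    ctx.close( );
  }
  catch (NamingException e) {
    //handle JNDI exceptions, e.g., a member that is down
  }
}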

To summarize, custom objects can be bound to a cluster-wide JNDI tree in two ways:

  • In the default mode, bind a custom, non-RMI object to the server’s JNDI tree and it is automatically replicated to other servers in the cluster. Any changes to the state of a custom object are not propagated unless you later unbind and rebind the object. If the original server that hosts the custom object fails, the object is removed from all other servers in the cluster.

  • Bind the custom object individually to each server in the cluster, while disabling the replication of JNDI bindings. In this way, the custom object is available across all members of the cluster, and remains accessible from the other servers even if one of the servers in the cluster fails. However, any changes to the custom object on a particular server will not be replicated to the other cluster members, even if you unbind and rebind the object. Your only recourse is to implement custom logic that updates the object bound to one server’s JNDI tree whenever any of its replicas on the other servers is updated.

Pinned Services

In certain situations, you may require a single instance of a service, which is made available to all members of a WebLogic cluster. For RMI objects, which behave differently from custom objects, you can simply deploy the object to an individual server. The default JNDI replication will make RMI stubs available to the JNDI trees on all the other servers. In this way, the RMI object is accessible from any member of the cluster, while residing on only a single member.

The situation is not as simple for non-RMI objects. To pin a custom object, you need to bind it to a server instance, ensuring that the REPLICATE_BINDINGS property is set to false. As a consequence, clients must contact the actual server that hosts the custom object when creating an initial JNDI context for a lookup of the custom object. To get around this rather clunky solution, you can instead implement an RMI object that serves as a remote proxy for the custom object. The custom object and the RMI object can be deployed on the same server, and the RMI stub can then be replicated across the cluster-wide JNDI tree. Using this trick, the custom object is accessible to all members of the WebLogic cluster via the proxy RMI stub. A very good example of this in action is WebLogic’s JMS implementation: a JMS server is a pinned object, yet it is accessible from all JNDI contexts throughout the cluster.
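A minimal sketch of such a proxy follows. The CustomRegistry class (assumed to expose a get(String) method) and the RegistryProxy interface are our own illustrations, not part of WebLogic, and a real deployment might use WebLogic's RMI tooling rather than plain java.rmi:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

//remote interface through which the cluster reaches the pinned object
public interface RegistryProxy extends Remote {
  Object get(String key) throws RemoteException;
}

//implementation deployed on the same server as the custom object;
//only its stub needs to travel to the other members of the cluster
public class RegistryProxyImpl extends UnicastRemoteObject
    implements RegistryProxy {
  private final CustomRegistry registry; //the pinned custom object

  public RegistryProxyImpl(CustomRegistry registry) throws RemoteException {
    this.registry = registry;
  }

  public Object get(String key) throws RemoteException {
    return registry.get(key); //delegate to the local custom object
  }
}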

Remember that a custom object will continue to be available only while the original server remains alive. A pinned object cannot provide automatic failover support if the original server fails. Instead, you need to guarantee high availability for the original server and the hardware that supports the pinned service(s). This solution should allow you to restart WebLogic in the event of a failure with little or no disruption to availability of the pinned object or service.
