Chapter 1. Messaging Basics

Over the years, systems have grown significantly in complexity and sophistication. The need for better reliability, increased scalability, and more flexibility than in the past has given rise to correspondingly complex and sophisticated architectures. In response to this demand for better and faster systems, architects, designers, and developers have been leveraging messaging as a way of solving these complex problems.

Messaging has come a long way since the first edition of this book was published in 2000, particularly with respect to the Java platform. Although the Java Message Service (JMS) API hasn’t changed significantly since its introduction in 1999, the way messaging is used has. Messaging is widely used to solve reliability and scalability issues, but it is also used to solve a host of other problems encountered with many business and nonbusiness applications.

Heterogeneous integration is one primary area where messaging plays a key role. Whether it be through mergers, acquisitions, business requirements, or simply a change in technology direction, more and more companies are faced with the problem of integrating heterogeneous systems and applications within and across the enterprise. It is not unusual to encounter a myriad of technologies and platforms within a single company or division consisting of Java EE, Microsoft .NET, Tuxedo, and yes, even CICS on the mainframe.

Messaging also offers the ability to process requests asynchronously, providing architects and developers with solutions for reducing or eliminating system bottlenecks, and increasing end user productivity and overall system scalability. Since messaging provides a high degree of decoupling between components, systems that utilize messaging also provide a high degree of architectural flexibility and agility.

Application-to-application messaging systems, when used in business systems, are generically referred to as enterprise messaging systems, or Message-Oriented Middleware (MOM). Enterprise messaging systems allow two or more applications to exchange information in the form of messages. A message, in this case, is a self-contained package of business data and network routing headers. The business data contained in a message can be anything—depending on the business scenario—and usually contains information about some business transaction. In enterprise messaging systems, messages inform an application of some event or occurrence in another system.

Using Message-Oriented Middleware, messages are transmitted from one application to another across a network. Enterprise middleware products ensure that messages are properly distributed among applications. In addition, these products usually provide fault tolerance, load balancing, scalability, and transactional support for enterprises that need to reliably exchange large quantities of messages.

Enterprise messaging vendors use different message formats and network protocols for exchanging messages, but the basic semantics are the same. An API is used to create a message, load the application data (message payload), assign routing information, and send the message. The same API is used to receive messages produced by other applications.

In all modern enterprise messaging systems, applications exchange messages through virtual channels called destinations. When a message is sent, it’s addressed to a destination (i.e., queue or topic), not a specific application. Any application that subscribes or registers an interest in that destination may receive the message. In this way, the applications that receive messages and those that send messages are decoupled. Senders and receivers are not bound to each other in any way and may send and receive messages as they see fit.

All enterprise messaging vendors provide application developers with an API for sending and receiving messages. While a messaging vendor implements its own networking protocols, routing, and administration facilities, the basic semantics of the developer API provided by different vendors are the same. This similarity in APIs makes the Java Message Service possible.

JMS is a vendor-agnostic Java API that can be used with many different enterprise messaging vendors. JMS is analogous to JDBC in that application developers reuse the same API to access many different systems. If a vendor provides a JMS-compliant service provider, the JMS API can be used to send messages to and receive messages from that vendor’s messaging system. For example, you can use the same JMS API to send messages with SonicMQ that you would use with IBM’s WebSphere MQ. It is the purpose of this book to explain how enterprise messaging systems work and, in particular, how JMS is used with these systems. The second edition of this book focuses on JMS 1.1, the latest version of the specification, which was introduced in March 2002.

The rest of this chapter explores enterprise messaging and JMS in more detail, so that you have a solid foundation with which to learn about the JMS API and messaging concepts in later chapters. The only assumption we make in this book is that you are already familiar with the Java programming language.

The Advantages of Messaging

As stated at the beginning of this chapter, messaging solves many architectural challenges such as heterogeneous integration, scalability, system bottlenecks, concurrent processing, and overall architecture flexibility and agility. This section describes the more common advantages and uses for JMS and messaging in general.

Heterogeneous Integration

The communication and integration of heterogeneous platforms is perhaps the most classic use case for messaging. Using messaging, you can invoke services from applications and systems that are implemented on completely different platforms. Many open source and commercial messaging systems provide seamless connectivity between Java and other languages and platforms by leveraging an integrated message bridge that converts a message sent using JMS to a common internal message format. Examples of these messaging systems include ActiveMQ (open source) and IBM WebSphere MQ (commercial). Both of these messaging systems support JMS, but they also expose a native API for use by non-Java messaging clients (such as C and C++). The key point here is that, depending on the vendor, it is possible to use JMS to communicate with non-Java or non-JMS messaging clients.

Historically, there have been many ways of tackling the issue of heterogeneous systems integration. Some earlier solutions involved the transfer of information through FTP or some other file transfer means, including the classic “sneakernet” method of carrying a diskette or tape from one machine to another. Using a database to share information between two heterogeneous systems or applications is another common approach that is still widely used today. Remote Procedure Call, or RPC, is yet another way of sharing both data and functionality between disparate systems. While each of these solutions has its advantages and disadvantages, only messaging provides a truly decoupled solution that allows both data and functionality to be shared across applications or subsystems. More recently, web services have emerged as another possible solution for integrating heterogeneous systems. However, their lack of reliability makes messaging a better integration choice in many cases.

Reduce System Bottlenecks

System and application bottlenecks occur whenever you have a process that cannot keep up with the rate of requests made to that process. A classic example of a system bottleneck is a poorly tuned database where applications and processes wait until database connections are available or database locks free up. At some point the system backs up, response time gets worse, and eventually requests start timing out.

A good analogy of a system bottleneck is pouring water into a funnel. The funnel becomes a bottleneck because it can only allow a certain amount of water to pass through. As the amount of water entering the funnel increases, the funnel eventually overflows because water cannot exit the funnel fast enough to handle the increased flow. IT systems work in much the same way: some components can only handle a limited number of requests and can quickly become bottlenecks.

Going back to our example, if a single funnel can “process” one liter of water per minute, but three liters of water are entering the funnel, the funnel will eventually back up and overflow. However, by adding two more funnels to the process, we can now theoretically “process” three liters of water per minute, thereby keeping up with the demand. Similarly, within IT systems messaging can be used to reduce or even eliminate system bottlenecks. Rather than have requests backing up one behind the other while a synchronous component is processing them, the requests are sent to a messaging system that distributes the requests to multiple message listener components. In this manner the bottlenecks experienced with a single synchronous point-to-point connection are reduced or in some cases completely eliminated.

Increase Scalability

In much the same way that messaging reduces system bottlenecks, it can also be used to increase the overall scalability and throughput of a system, effectively reducing response time as well. Scalability in messaging systems is achieved by introducing multiple message receivers that can process different messages concurrently. As messages stack up waiting to be processed, the number of messages in the queue, otherwise known as the queue depth, starts to increase. As the queue depth increases, system response time increases and throughput decreases. One way to increase the scalability of a system is to add multiple concurrent message listeners to the queue (similar to what we did in the funnel example previously) to process more requests concurrently.
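As a concrete (if simplified) illustration, the following sketch uses the JMS API introduced later in this chapter to attach several concurrent consumers to the same queue; the provider delivers each message to exactly one of them, spreading the load. The JNDI names ("ConnectionFactory", "orderQueue") are illustrative assumptions, not part of any particular product.

    import javax.jms.*;
    import javax.naming.InitialContext;

    // A minimal sketch of scaling a queue with concurrent consumers.
    // The JNDI names used here are assumptions; substitute whatever
    // objects your provider actually administers.
    public class ConcurrentReceivers {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory =
                    (ConnectionFactory) jndi.lookup("ConnectionFactory");
            Queue queue = (Queue) jndi.lookup("orderQueue");

            Connection connection = factory.createConnection();

            // Three sessions, each with its own consumer on the same queue.
            // Each message is handed to exactly one of the listeners, so the
            // work is spread across them.
            for (int i = 0; i < 3; i++) {
                Session session =
                        connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                final int id = i;
                consumer.setMessageListener(new MessageListener() {
                    public void onMessage(Message message) {
                        System.out.println("Listener " + id + " processing " + message);
                    }
                });
            }
            connection.start(); // begin asynchronous delivery to all listeners
        }
    }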

Another way to increase the overall scalability of a system is to make as much of the system asynchronous as possible. Decoupling components in this manner allows for systems to grow horizontally, with hardware resources being the main limiting factor. However, while this may seem like a silver bullet, the middleware can only be horizontally scaled within practical limits of another major system bottleneck—the database. You can have hundreds or even thousands of message listeners on a single queue providing the ability to process many messages at the same time, but the database may only be able to process a limited number of concurrent requests. Although there are complicated techniques for addressing the database bottleneck issue, the reality is that there will always be practical limits to how far you can scale the middleware layer.

Increase End User Productivity

The use of asynchronous messaging can also increase end user productivity. Consider the case where an end user makes a request from a web-based or desktop user interface that takes several minutes to complete. During that time the end user is left waiting for the results, unable to do any additional work. With asynchronous messaging, the end user makes a request to the system and gets an immediate response indicating that the request was accepted. The end user can then continue to do other work on the system while the long-running request executes. Once the request has completed, the end user is notified and the results are delivered. By using messaging, the end user gets more work done with less wait time, making that user more productive.

Many front-office trading systems use this sort of messaging strategy between the trading application and the backend systems. This type of messaging-based architecture allows the trader to perform other work without having to wait for a response from the system. The trade-off for this increased flexibility and productivity, however, is added complexity. A good architect will always look for opportunities to make various aspects of a system asynchronous, whether it be between a user interface and a system or between internal components within the system.

Architecture Flexibility and Agility

The use of messaging as part of an overall enterprise architecture solution allows for greater architectural flexibility and agility. These qualities are achieved through the use of abstraction and decoupling. With messaging, subsystems, components, and even services can be abstracted to the point where they can be replaced with little or no awareness on the part of the client components.

Architectural agility is the ability to respond quickly to a constantly changing environment. By using messaging to abstract and decouple components, one can quickly respond to changes in software, hardware, and even business changes. The ability to swap out one system for another, change a technology platform, or even change a vendor solution without affecting the client applications can be achieved through abstraction using messaging. Through messaging, the message producer, or client component, does not know which programming language or platform the receiving component is written in, where the component or service is located, what the component or service implementation name is, or even the protocol used to access that component or service. It is by means of these levels of abstraction that we are able to more easily replace components and subsystems, thereby increasing architectural agility.

Enterprise Messaging

Enterprise messaging is not a new concept. Messaging products such as IBM WebSphere MQ, SonicMQ, Microsoft Message Queuing (MSMQ), and TIBCO Rendezvous have been in existence for many years. Recently, several open source messaging products such as ActiveMQ have entered the market and are being used in enterprise production environments. Also, the introduction of Service-Oriented Architecture (SOA) has given rise to a new type of messaging product known as an Enterprise Service Bus (ESB). Although most ESBs allow for HTTP-based communications, messaging-based communication continues to remain the standard in most production enterprise systems.

A key concept of enterprise messaging is that messages are delivered asynchronously from one system to others over a network. To deliver a message asynchronously means the sender is not required to wait for the message to be received or handled by the recipient; it is free to send the message and continue processing. Asynchronous messages are treated as autonomous units—each message is self-contained and carries all of the data and state needed by the business logic that processes it.

In asynchronous messaging, applications use a simple API to construct a message, then hand it off to the Message-Oriented Middleware for delivery to one or more intended recipients (see Figure 1-1). A message is a package of business data that is sent from one application to another over the network. The message should be self-describing in that it should contain all the necessary context to allow the recipients to carry out their work independently.

Figure 1-1. Message-Oriented Middleware

Message-Oriented Middleware architectures of today vary in their implementation. The spectrum ranges from a centralized architecture that depends on a message server to perform routing, to a decentralized architecture that distributes the “server” processing out to the client machines. A varied array of protocols including TCP/IP, HTTP, SSL, and IP multicast are employed at the network transport layer. Some messaging products use a hybrid of both approaches, depending on the usage model.

It is important to explain what we mean by the term client. Messaging systems are composed of messaging clients and some kind of messaging middleware server. The clients send messages to the messaging server, which then distributes those messages to other clients. The client is a business application or component that is using the messaging API (in our case, JMS).

Centralized Architectures

Enterprise messaging systems that use a centralized architecture rely on a message server. A message server, also called a message router or broker, is responsible for delivering messages from one messaging client to other messaging clients. The message server decouples a sending client from other receiving clients. Clients see only the messaging server, not other clients, which allows clients to be added and removed without affecting the system as a whole.

Typically, a centralized architecture uses a hub-and-spoke topology. In a simple case, there is a centralized message server and all clients connect to it. As shown in Figure 1-2, the hub-and-spoke architecture lends itself to a minimal amount of network connections while still allowing any part of the system to communicate with any other part of the system.

Figure 1-2. Centralized hub-and-spoke architecture

In practice, the centralized message server may be a cluster of distributed servers operating as a logical unit.

Decentralized Architectures

All decentralized architectures currently use IP multicast at the network level. A messaging system based on multicasting has no centralized server. Some of the server functionality (persistence, transactions, security) is embedded as a local part of the client, while message routing is delegated to the network layer by using the IP multicast protocol.

IP multicast allows applications to join one or more IP multicast groups; each group uses an IP network address that will redistribute any messages it receives to all members in its group. In this way, applications can send messages to an IP multicast address and expect the network layer to redistribute the messages appropriately (see Figure 1-3). Unlike a centralized architecture, a decentralized architecture doesn’t require a server for the purposes of routing messages; the network handles routing automatically. However, other server-like functionality is still required to be included with each client, such as message persistence and message delivery semantics like once-and-only-once delivery.

Figure 1-3. Decentralized IP multicast architecture
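For readers unfamiliar with the mechanism, the short sketch below uses the standard java.net.MulticastSocket class, not JMS, to join a multicast group and receive a single datagram; decentralized messaging providers build their routing on top of this same network facility. The group address and port are arbitrary example values.

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    // A minimal sketch of IP multicast with plain Java. Every process that
    // joins the same group address receives a copy of each datagram sent to
    // it; no central server routes the traffic.
    public class MulticastMember {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.1"); // example group address
            MulticastSocket socket = new MulticastSocket(4446);     // example port
            socket.joinGroup(group);

            byte[] buffer = new byte[1024];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet); // blocks until a datagram arrives on the group
            System.out.println(new String(packet.getData(), 0, packet.getLength()));

            socket.leaveGroup(group);
            socket.close();
        }
    }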

Hybrid Architectures

A decentralized architecture usually implies that an IP multicast protocol is being used. A centralized architecture usually implies that the TCP/IP protocol is the basis for communication between the various components. A messaging vendor’s architecture may also combine the two approaches. Clients may connect to a daemon process using TCP/IP, which in turn communicates with other daemon processes using IP multicast groups.

Centralized Architecture As a Model

Both ends of the decentralized and centralized architecture spectrum have their place in enterprise messaging. The advantages and disadvantages of distributed versus centralized architectures are discussed in more detail in Chapter 10. In the meantime, we need a common model for discussing other aspects of enterprise messaging. To simplify discussions, this book uses a centralized architecture as a logical view of enterprise messaging. This is for convenience only and is not an endorsement of centralized over decentralized architectures. The term message server is frequently used in this book to refer to the underlying architecture that is responsible for routing and distributing messages. In centralized architectures, the message server is a middleware server or cluster of servers. In decentralized architectures, the server refers to the local server-like facilities of the client.

Messaging Models

JMS supports two types of messaging models: point-to-point and publish-and-subscribe. These messaging models are sometimes referred to as messaging domains. Point-to-point messaging and publish-and-subscribe messaging are frequently shortened to p2p and pub/sub, respectively. This book uses both the long and short forms throughout.

In the simplest sense, publish-and-subscribe is intended for a one-to-many broadcast of messages, while point-to-point is intended for one-to-one delivery of messages (see Figure 1-4).

Figure 1-4. JMS messaging domains

From a JMS perspective, messaging clients are called JMS clients, and the messaging system is called the JMS provider. A JMS application is a business system composed of many JMS clients and, generally, one JMS provider.

In addition, a JMS client that produces a message is called a message producer, while a JMS client that receives a message is called a message consumer. A JMS client can be both a message producer and a message consumer. When we use the term consumer or producer, we mean a JMS client that consumes messages or produces messages, respectively. We use this terminology throughout the book.

Point-to-Point

The point-to-point messaging model allows JMS clients to send and receive messages both synchronously and asynchronously via virtual channels known as queues. In the point-to-point model, message producers are called senders and message consumers are called receivers. The point-to-point messaging model has traditionally been a pull-based or polling-based model, where messages are requested from the queue instead of being pushed to the client automatically. One of the distinguishing characteristics of point-to-point messaging is that messages sent to a queue are received by one and only one receiver, even though there may be many receivers listening on a queue for the same message.

Point-to-point messaging supports asynchronous “fire and forget” messaging as well as synchronous request/reply messaging. Point-to-point messaging tends to be more coupled than the publish-and-subscribe model in that the sender generally knows how the message is going to be used and who is going to receive it. For example, a sender may send a stock trade order to a queue and wait for a response containing the trade confirmation number. In this case, the message sender knows that the message receiver is going to process the trade order. Another example would be an asynchronous request to generate a long-running report. The sender makes the request for the report, and when the report is ready, a notification message is sent to the sender. In this case, the sender knows the message receiver is going to pick up the message and create the report.

The point-to-point model supports load balancing, which allows multiple receivers to listen on the same queue, therefore distributing the load. As shown in Figure 1-4, the JMS provider takes care of managing the queue, ensuring that each message is consumed once and only once by the next available receiver in the group. The JMS specification does not dictate the rules for distributing messages among multiple receivers, although some JMS vendors have chosen to implement this as a load balancing capability. Point-to-point also offers other features, such as a queue browser that allows a client to view the contents of a queue prior to consuming its messages—this browser concept is not available in the publish-and-subscribe model. The point-to-point messaging model is covered in more detail in Chapter 4.

Publish-and-Subscribe

In the publish-and-subscribe model, messages are published to a virtual channel called a topic. Message producers are called publishers, whereas message consumers are called subscribers. Unlike the point-to-point model, messages published to a topic using the publish-and-subscribe model can be received by multiple subscribers. This technique is sometimes referred to as broadcasting a message. Every subscriber receives a copy of each message. The publish-and-subscribe messaging model is by and large a push-based model, where messages are automatically broadcast to consumers without them having to request or poll the topic for new messages.

The pub/sub model tends to be more decoupled than the p2p model in that the message publisher is generally unaware of how many subscribers there are or what those subscribers do with the message. For example, suppose a message is published to a topic every time an exception occurs in a Java application. The responsibility of the publisher is to simply broadcast that an exception occurred. The publisher does not know or generally care how that message will be used. For example, there may be subscribers that send an email to the development or support staff based on the exception, subscribers that accumulate counts of the various types of exceptions for reporting purposes, or even subscribers that use the information to page an on-call support person based on the exception type.

There are many different types of subscribers within the pub/sub messaging model. Nondurable subscribers are temporary subscriptions that receive messages only when they are actively listening on the topic. Durable subscribers, on the other hand, will receive a copy of every message published, even if they are “offline” when the message is published. There is also the notion of dynamic durable subscribers and administered durable subscribers. The publish-and-subscribe messaging model is discussed in greater detail in Chapters 2 and 5.
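As a brief illustration using the pub/sub interfaces described later in this chapter, the sketch below creates a durable subscription with the standard createDurableSubscriber call. The JNDI names, client ID, and subscription name are illustrative assumptions; what matters is that a client reconnecting with the same client ID and subscription name receives the messages published while it was offline.

    import javax.jms.*;
    import javax.naming.InitialContext;

    // A minimal sketch of a durable subscriber. The JNDI names, client ID,
    // and subscription name below are illustrative assumptions.
    public class DurableSubscriberExample {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            TopicConnectionFactory factory =
                    (TopicConnectionFactory) jndi.lookup("TopicConnectionFactory");
            Topic topic = (Topic) jndi.lookup("stockTopic");

            TopicConnection connection = factory.createTopicConnection();
            connection.setClientID("priceWatcher"); // identifies this client across restarts

            TopicSession session =
                    connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

            // "dailyPrices" names the durable subscription; messages published
            // while this client is offline are held until it reconnects and
            // re-creates a subscriber with the same name.
            TopicSubscriber subscriber =
                    session.createDurableSubscriber(topic, "dailyPrices");

            connection.start();
            Message message = subscriber.receive(); // blocks until a message arrives
            System.out.println("Received: " + message);
            connection.close();
        }
    }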

JMS API

JMS is an API for enterprise messaging created by Sun Microsystems through JSR-914. JMS is not a messaging system itself; it’s an abstraction of the interfaces and classes needed by messaging clients when communicating with messaging systems. In the same way that JDBC abstracts access to relational databases and JNDI abstracts access to naming and directory services, JMS abstracts access to messaging providers. Using JMS, an application’s messaging clients are portable across messaging server products.

The creation of JMS was an industry effort. Sun Microsystems took the lead on the spec and worked very closely with the messaging vendors throughout the process. The initial objective was to provide a Java API for connectivity to enterprise messaging systems. However, this changed to the wider objective of supporting messaging as a first-class Java distributed computing paradigm on equal footing with RPC-based systems such as CORBA and Enterprise JavaBeans. Mark Hapner, the JMS spec lead at Sun Microsystems, explained:

There were a number of MOM vendors that participated in the creation of JMS. It was an industry effort rather than a Sun effort. Sun was the spec lead and did shepherd the work but it would not have been successful without the direct involvement of the messaging vendors. Although our original objective was to provide a Java API for connectivity to MOM systems, this changed over the course of the work to a broader objective of supporting messaging as a first class Java distributed computing paradigm on equal footing with RPC.

The result is a best-of-breed, robust specification that includes a rich set of message delivery semantics, combined with a simple yet flexible API for incorporating messaging into applications. The intent was that in addition to new vendors, existing messaging vendors would support the JMS API.

The JMS API can be broken down into three main parts: the general API, the point-to-point API, and the publish-and-subscribe API. In JMS 1.1, the general API can be used to send and receive messages from either a queue or a topic. The point-to-point API is used solely for messaging with queues, and the publish-and-subscribe API is used solely for messaging using topics.

Within the JMS general API, there are seven main JMS API interfaces related to sending and receiving JMS messages:

  • ConnectionFactory

  • Destination

  • Connection

  • Session

  • Message

  • MessageProducer

  • MessageConsumer

Of these general interfaces, the ConnectionFactory and Destination must be obtained from the provider using JNDI (per the JMS specification). The other interfaces are created through factory methods on the various API interfaces. For example, once you have a ConnectionFactory, you can create a Connection. Once you have a Connection, you can create a Session. Once you have a Session, you can create a Message, a MessageProducer, and a MessageConsumer. The relationship between these seven primary JMS general API interfaces is illustrated in Figure 1-5.
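A minimal sketch of that chain using the general API is shown below. The JNDI names ("ConnectionFactory", "someDestination") are illustrative assumptions; everything else is standard JMS 1.1.

    import javax.jms.*;
    import javax.naming.InitialContext;

    // A minimal sketch of the JMS 1.1 general API creation chain.
    // The JNDI names used in the lookups are illustrative assumptions.
    public class GeneralApiSender {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();

            // Administered objects obtained via JNDI
            ConnectionFactory factory =
                    (ConnectionFactory) jndi.lookup("ConnectionFactory");
            Destination destination = (Destination) jndi.lookup("someDestination");

            // ConnectionFactory -> Connection -> Session -> producer and message
            Connection connection = factory.createConnection();
            Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(destination);

            TextMessage message = session.createTextMessage("Hello, messaging!");
            producer.send(message);

            connection.close();
        }
    }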

In JMS, the Session object holds the transactional unit of work for messaging, not the Connection object. This is different from JDBC, where the Connection object holds the transactional unit of work. This means that when using JMS, an application will typically have only a single Connection object but will have a pool of Session objects.
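Continuing the sketch above, a transacted Session shows where the unit of work lives; nothing sent on the session is delivered until the session commits. (The connection and destination here are the ones created in the previous example.)

    // A transacted session groups its sends into one unit of work.
    Session txSession = connection.createSession(true, Session.SESSION_TRANSACTED);
    MessageProducer txProducer = txSession.createProducer(destination);

    txProducer.send(txSession.createTextMessage("order line 1"));
    txProducer.send(txSession.createTextMessage("order line 2"));

    txSession.commit();   // both messages become visible to consumers together
    // txSession.rollback() would discard them instead.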

There are several other interfaces related to exception handling, message priority, and message persistence. These and other API interfaces are discussed in more detail throughout the book and also in Appendix A.

Figure 1-5. JMS general API core interfaces

Point-to-Point API

Once you gain an understanding of the JMS general API, the rest of the JMS API is fairly easy to infer and understand. The point-to-point messaging API refers specifically to the queue-based interfaces within the JMS API. The interfaces used for sending and receiving messages from a queue are as follows:

  • QueueConnectionFactory

  • Queue

  • QueueConnection

  • QueueSession

  • Message

  • QueueSender

  • QueueReceiver

As in the JMS general API, the QueueConnectionFactory and Queue objects must be obtained from the JMS provider via JNDI (per the JMS specification). Notice that most of the interface names simply add the word Queue before the general API interface name. The exceptions are the Destination interface, which is named Queue, and the MessageProducer and MessageConsumer interfaces, which are named QueueSender and QueueReceiver, respectively. Figure 1-6 illustrates the flow and relationship between the queue-based JMS API interfaces.

Applications using the point-to-point messaging model will typically use the queue-based API rather than the general API.

Figure 1-6. JMS point-to-point API core interfaces
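Under the same kind of illustrative JNDI-name assumptions as before, a minimal point-to-point exchange with these interfaces looks roughly like this:

    import javax.jms.*;
    import javax.naming.InitialContext;

    // A minimal point-to-point sketch: one sender, one receiver, one queue.
    // The JNDI names ("QueueConnectionFactory", "tradeQueue") are assumptions.
    public class PointToPointExample {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            QueueConnectionFactory factory =
                    (QueueConnectionFactory) jndi.lookup("QueueConnectionFactory");
            Queue queue = (Queue) jndi.lookup("tradeQueue");

            QueueConnection connection = factory.createQueueConnection();
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

            QueueSender sender = session.createSender(queue);
            QueueReceiver receiver = session.createReceiver(queue);
            connection.start(); // allow inbound delivery

            // Send a message to the queue...
            sender.send(session.createTextMessage("BUY 100 XYZ"));

            // ...and receive it. Even with many receivers listening on a queue,
            // each message is delivered to only one of them.
            TextMessage received = (TextMessage) receiver.receive(1000); // wait up to one second
            System.out.println(received == null ? "nothing received" : received.getText());

            connection.close();
        }
    }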

Publish-and-Subscribe API

The topic-based JMS API is similar to the queue-based API in that, in most cases, the word Queue is replaced with the word Topic. The interfaces used within the pub/sub messaging model are as follows:

  • TopicConnectionFactory

  • Topic

  • TopicConnection

  • TopicSession

  • Message

  • TopicPublisher

  • TopicSubscriber

Notice that the interfaces in the pub/sub domain have names similar to those of the p2p domain, with the exception of TopicPublisher and TopicSubscriber. The JMS API is very intuitive in this regard. As stated at the start of this chapter, pub/sub uses topics with publishers and subscribers, whereas p2p uses queues with senders and receivers. Notice how this terminology matches the API interface names. The relationship and flow of the topic-based JMS API interfaces are illustrated in Figure 1-7.
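A matching sketch for the pub/sub interfaces, again with illustrative JNDI names, publishes a single message and receives it asynchronously through a MessageListener:

    import javax.jms.*;
    import javax.naming.InitialContext;

    // A minimal publish-and-subscribe sketch: one publisher, one nondurable
    // subscriber. The JNDI names ("TopicConnectionFactory", "priceTopic")
    // are illustrative assumptions.
    public class PubSubExample {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            TopicConnectionFactory factory =
                    (TopicConnectionFactory) jndi.lookup("TopicConnectionFactory");
            Topic topic = (Topic) jndi.lookup("priceTopic");

            TopicConnection connection = factory.createTopicConnection();

            // One session for asynchronous consumption, a separate one for
            // publishing (a Session should not be used by two threads at once).
            TopicSession subSession =
                    connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicSubscriber subscriber = subSession.createSubscriber(topic);
            subscriber.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    System.out.println("Subscriber received: " + message);
                }
            });
            connection.start();

            // Every active subscriber gets its own copy of this message.
            TopicSession pubSession =
                    connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicPublisher publisher = pubSession.createPublisher(topic);
            publisher.publish(pubSession.createTextMessage("XYZ 101.25"));
        }
    }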

Real-World Scenarios

Until now, our discussion of enterprise messaging has been somewhat abstract. This section attempts to give some real-world scenarios to provide you with a better idea of the types of problems that enterprise messaging systems can solve.

Figure 1-7. JMS publish-and-subscribe API core interfaces

Service-Oriented Architecture

Service-Oriented Architecture (SOA) describes an architecture style that defines business services that are abstracted from the corresponding enterprise service implementations. SOA has given rise to a new breed of middleware known as an Enterprise Service Bus, or ESB. In the early days of SOA, most ESBs were implemented as message brokers, whereby components within the messaging layer were used to perform some sort of intelligent routing or message transformation before delivering the message. These earlier message brokers have evolved into sophisticated commercial and open source ESB products that use messaging at their core. Although some ESB products support a traditional non-JMS HTTP transport, most enterprise-wide production implementations still leverage messaging as the protocol for communication.

Messaging is an excellent means of building the abstraction layer within SOA needed to fully abstract a business service from its underlying implementation. Through messaging, the business service does not need to be concerned about where the corresponding implementation service is, what language it is written in, what platform it is deployed on, or even the name of the implementation service. Messaging also provides the scalability needed within an SOA environment, as well as a robust level of monitoring and control for requests coming into and out of an ESB. Almost all of the commercial and open source ESB products available today support JMS messaging as a communication protocol; the notable exception is the Microsoft line of messaging products (e.g., BizTalk and MSMQ).

The increased interest and use of SOA in the industry has in turn given rise to increased interest in and usage of messaging solutions in general. Although full-blown SOA implementations are continuing to evolve, many companies are turning to messaging solutions as a step toward SOA.

Event-Driven Architecture

Event-Driven Architecture (EDA) is an architecture style built on the premise that the orchestration of processes and events is dynamic and very complex, and therefore not feasible to control or implement through a central orchestration component. When an action takes place in a system, the process in which it occurred sends an event to the entire system stating that the action took place. That event may then kick off other processes, which in turn may kick off additional processes, all decoupled from each other.

Some good examples of EDA include the insurance domain and the defined benefits domain. Both of these industry domains are driven by events that happen in the system. For example, something as simple as changing your address can affect many aspects of the insurance domain, including policies, quotes, and customer records. In this case, the driving event in the insurance application is an address change. However, it is not the responsibility of the address change module to know everything that needs to happen as a result of that event. Therefore, the address change module sends an event message letting the system know that an address has changed. The quoting system will pick up that event and adjust any outstanding quotes that may be present for that customer. Simultaneously, the policy system will pick up the address change event and adjust the rates and policies for that customer.

Another example of EDA is within the defined benefits domain. Getting married or changing jobs triggers events in the system that qualify you for certain changes to your health and retirement benefits. Many of these systems use EDA to avoid using a large, complex, and unmaintainable central processing engine to control all of the actions associated with a particular “qualifying event.”

Messaging is the foundation for systems based on an Event-Driven Architecture. Events are typically implemented as empty payload messages containing some information about the event in the header of the message, although some pass the application data as part of the event. Not surprisingly, most architectures based on EDA leverage the pub/sub model as a means of broadcasting the events within a system.
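To make the idea concrete, here is a hedged sketch of publishing such an event: an otherwise empty Message whose properties describe what happened. The topic, property names, and values are illustrative assumptions.

    import javax.jms.*;
    import javax.naming.InitialContext;

    // A minimal sketch of publishing an "address changed" event as an
    // empty-payload message; the details travel as message properties.
    // The JNDI names and property names are illustrative assumptions.
    public class AddressChangedEventPublisher {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            TopicConnectionFactory factory =
                    (TopicConnectionFactory) jndi.lookup("TopicConnectionFactory");
            Topic topic = (Topic) jndi.lookup("customerEvents");

            TopicConnection connection = factory.createTopicConnection();
            TopicSession session =
                    connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicPublisher publisher = session.createPublisher(topic);

            Message event = session.createMessage(); // no body: header and properties only
            event.setStringProperty("eventType", "ADDRESS_CHANGED");
            event.setStringProperty("customerId", "42");

            // Quoting, policy, and any other interested subsystems subscribe to
            // the topic and react; the publisher neither knows nor cares who they are.
            publisher.publish(event);

            connection.close();
        }
    }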

Heterogeneous Platform Integration

Most companies, through a combination of mergers, acquisitions, migrations, or bad decisions, have a myriad of heterogeneous platforms, products, and languages supporting the business. Integrating these platforms can be a challenging task, particularly with standards continually changing and evolving. Messaging plays a key role in being able to make these heterogeneous platforms communicate with one another, whether it be Java EE and Microsoft .NET, Java EE and CICS, or Java EE and Tuxedo C++.

Although platforms such as Java can utilize the JMS API, other platforms such as .NET or C++ cannot (for obvious reasons). Many messaging vendors, both commercial and open source, support both the JMS API and a native API. These providers typically have a built-in messaging bridge that converts a JMS message into an internal message format and vice versa. Some platforms, such as .NET, may require an external messaging bridge to convert a JMS message into an MSMQ message (depending on the messaging provider you are using). For example, ActiveMQ provides a messaging bridge for converting MSMQ to JMS (and vice versa). This lower-level platform integration has given rise to a broader scope of integration, known as Enterprise Application Integration.

Enterprise Application Integration

Most mature organizations have both legacy and new applications that are implemented independently and cannot interoperate. In many cases, organizations have a strong desire to integrate these applications so that they can share information and cooperate in larger enterprise-wide operations. The integration of these applications is generally called Enterprise Application Integration (EAI).

A variety of vendor and home-grown solutions are used for EAI, but enterprise messaging systems are central to most of them. Enterprise messaging systems allow stovepipe applications (consisting of heterogeneous products, technologies, and components) to communicate events and to exchange data while remaining physically independent. Data and events can be exchanged in the form of messages via topics or queues, which provide an abstraction that decouples participating applications.

As an example, a messaging system might be used to integrate an Internet order processing system with an Enterprise Resource Planning (ERP) system like SAP. The Internet system uses JMS to deliver business data about new orders to a topic. An ERP gateway application, which accesses a SAP application via its native API, can subscribe to the order topic. As new orders are broadcast to the topic, the gateway receives the orders and enters them into the SAP application.

Business-to-Business

Historically, businesses exchanged data using Electronic Data Interchange (EDI) systems. Data was exchanged using rigid, fixed formats over proprietary Value-Added Networks (VANs). Cost of entry was high and data was usually exchanged in batch processes—not as real-time business events.

The Internet, XML, and modern messaging systems have radically changed how businesses exchange data and interact in what is now called Business-to-Business (B2B). The use of messaging systems is central to modern B2B solutions because it allows organizations to cooperate without requiring them to tightly integrate their business systems. In addition, it lowers the barriers to entry since finer-grained participation is possible. Businesses can join in and disengage from B2B exchanges depending on the queues and topics with which they interact.

A manufacturer, for example, can set up a topic for broadcasting requests for bids on raw materials. Suppliers can subscribe to the topic and respond by producing messages back to the manufacturer’s queue. Suppliers can be added and removed at will, and new topics and queues for different types of inventory and raw materials can be used to partition the systems appropriately.

Geographic Dispersion

These days many companies are geographically dispersed. Brick-and-mortar, click-and-mortar, and dot-coms all face problems associated with geographic dispersion of enterprise systems. Inventory systems in remote warehouses need to communicate with centralized back-office ERP systems at corporate headquarters. Sensitive employee data that is administered locally at each subsidiary needs to be synchronized with the main office. JMS messaging systems can ensure the safe and secure exchange of data across a geographically distributed business.

Information Broadcasting

Auction sites, stock quote services, and securities exchanges all have to push data out to huge populations of recipients in a one-to-many fashion. In many cases, the broadcast of information needs to be selectively routed and filtered on a per-recipient basis. While the outgoing information needs to be delivered in a one-to-many fashion, often the response to such information needs to be sent back to the broadcaster. This is another situation in which enterprise messaging is extremely useful, since pub/sub can be used to distribute the messages and p2p can be used for responses.

Choices in reliability of delivery are key in these situations. In the case of broadcasting stock quotes, for example, absolutely guaranteeing the delivery of information may not be critical, since another broadcast of the same ticker symbol will likely happen in another short interval of time. In the case where a trader is responding to a price quote with a buy order, however, it is crucial that the response is returned in a guaranteed fashion. In this case, you mix the reliability of messaging so that the pub/sub distribution of quotes is fast but unreliable, while the p2p delivery of buy orders from traders is very reliable. JMS and enterprise messaging provide these varying degrees of reliability for both the pub/sub and p2p models.
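In JMS terms, that mix is expressed through the delivery mode set on each producer, as the sketch below suggests. The destinations are assumed to have been set up as in the earlier examples; only the delivery-mode calls are the point here.

    import javax.jms.*;

    // A minimal sketch of mixing reliability: fast, unreliable quote broadcasts
    // versus guaranteed buy orders. The session, quoteTopic, and orderQueue are
    // assumed to have been created as in the earlier examples.
    public class ReliabilityMix {
        static void send(Session session, Topic quoteTopic, Queue orderQueue)
                throws JMSException {
            // Quotes: NON_PERSISTENT favors throughput; a lost quote is tolerable
            // because another broadcast of the same symbol follows shortly.
            MessageProducer quotePublisher = session.createProducer(quoteTopic);
            quotePublisher.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            quotePublisher.send(session.createTextMessage("XYZ 101.25"));

            // Buy orders: PERSISTENT makes the provider store the message until it
            // is successfully consumed, so it survives failures along the way.
            MessageProducer orderSender = session.createProducer(orderQueue);
            orderSender.setDeliveryMode(DeliveryMode.PERSISTENT);
            orderSender.send(session.createTextMessage("BUY 100 XYZ @ 101.25"));
        }
    }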

Building Dynamic Systems

In JMS, pub/sub topics and p2p queues are centrally administered and are referred to as JMS administered objects. Your application does not need to know the network location of topics or queues to communicate with other applications; it just uses topic and queue objects as identifiers. Using topics and queues provides JMS applications with a certain level of location transparency and flexibility that makes it possible to add and remove participants in an enterprise system.

For example, a system administrator can dynamically add subscribers to specific topics on an as-needed basis. A common scenario might be if you discover a need to add an audit-trail mechanism for certain messages and not others. Figure 1-8 shows you how to plug in a specialized auditing and logging JMS client whose only job is to track specific messages, just by subscribing to the topics you are interested in.

Figure 1-8. Dynamically adding auditing and logging using publish-and-subscribe

The ability to add and remove producers and consumers allows enterprise systems to dynamically alter the routing and re-routing of messages in an already deployed environment.

As another example, we can build on the EAI scenario discussed previously. In this case, a gateway accepts incoming purchase orders, converts them to the format appropriate for a legacy ERP system, and calls into the ERP system for processing (see Figure 1-9).

In Figure 1-9, other JMS applications (A and B) also subscribe to the purchase order topic and do their own independent processing. Application A might be a legacy application in the company, while application B may be another company’s business system, representing a B2B integration.

Figure 1-9. Integrating a purchase order system with an ERP system

Using JMS, it’s fairly easy to add and remove applications from this process. For example, if purchase orders need to be processed from two different sources, such as an Internet-based system and a legacy EDI system, you can simply add the legacy purchase order system to the mix (see Figure 1-10).

Figure 1-10. Integrating two different purchase order systems with an ERP system

What is interesting about this example is that the ERP gateway is unaware that it is receiving purchase order messages from two completely different sources. The legacy EDI system may be an older in-house system or it could be the main system for a business partner or a recently acquired subsidiary. In addition, the legacy EDI system would have been added dynamically without requiring the shutdown and retooling of the entire system. Enterprise messaging systems make this kind of flexibility possible, and JMS allows Java clients to access many different messaging systems using the same Java programming model.

RPC Versus Asynchronous Messaging

RPC (Remote Procedure Call) is a term commonly used to describe a distributed computing model that is used today by both the Java and .NET platforms. Component-based architectures such as Enterprise JavaBeans are built on top of this model. RPC-based technologies have been, and will continue to be, a viable solution for many applications. However, the enterprise messaging model is superior in certain types of distributed applications. In this section we will discuss the pros and cons of each model.

Tightly Coupled RPC

One of the most successful areas of the tightly coupled RPC model has been in building 3-tier, or n-tier, applications. In this model, a presentation layer (first tier) communicates using RPC with business logic on the middle tier (second tier), which accesses data housed on the backend (third tier). Sun Microsystems’ J2EE platform and Microsoft’s .NET platform are the most modern examples of this architecture.

With J2EE, JSP and servlets represent the presentation tier while Enterprise JavaBeans (EJB) is the middle tier. Regardless of the platform, the core technology used in these systems is RPC-based middleware with RPC being the defining communication paradigm.

RPC attempts to mimic the behavior of a system that runs in one process. When a remote procedure is invoked, the caller is blocked until the procedure completes and returns control to the caller. This synchronized model allows the developer to view the system as if it runs in one process. Work is performed sequentially, ensuring that tasks are completed in a predefined order. The synchronized nature of RPC tightly couples the client (the software making the call) to the server (the software servicing the call). The client cannot proceed—it is blocked—until the server responds.

The tightly coupled nature of RPC creates highly interdependent systems where a failure on one system has an immediate and debilitating impact on other systems. In J2EE, for example, the EJB server must be functioning properly if the servlets that use enterprise beans are expected to function.

RPC works well in many scenarios, but its synchronous, tightly coupled nature is a severe handicap in system-to-system processing where vertical applications are integrated together. In system-to-system scenarios, the lines of communication between vertical systems are many and multidirectional, as Figure 1-11 illustrates.

Figure 1-11. Tightly coupled with synchronous RPC

Consider the challenge of implementing this infrastructure using a tightly coupled RPC mechanism. There is the many-to-many problem of managing the connections between these systems. When you add another application to the mix, you have to go back and let all the other systems know about it. Also, systems can crash. Scheduled downtimes need to happen. Object interfaces need to be upgraded.

When one part of the system goes down, everything halts. When you post an order to an order entry system, it needs to make a synchronous call to each of the other systems. This causes the order entry system to block and wait until each system is finished processing the order.[1]

It is the synchronized, tightly coupled, interdependent nature of RPC systems that causes entire systems to fail as a result of failures in subsystems. When the tightly coupled nature of RPC is not appropriate, as in system-to-system scenarios, messaging provides an alternative.

Enterprise Messaging

Problems with the availability of subsystems are not an issue with Message-Oriented Middleware. A fundamental concept of messaging is that communication between applications is intended to be asynchronous. Code that is written to connect the pieces together assumes there is a one-way message that requires no immediate response from another application. In other words, there is no blocking. Once a message is sent, the messaging client can move on to other tasks; it doesn’t have to wait for a response. This is the major difference between RPC and asynchronous messaging, and it is critical to understanding the advantages offered by messaging systems.

In an asynchronous messaging system, each subsystem (Accounts Receivable, Inventory, etc.) is decoupled from the other systems (see Figure 1-12). They communicate through the messaging server, so that a failure in one does not impede the operation of the others.

Figure 1-12. JMS provides a loosely coupled environment where partial failure of system components does not impede overall system availability

Partial failure in a networked system is a fact of life. One of the systems may have an unpredictable failure or may need to be shut down at some time during its continuous operation. This can be further magnified by geographic dispersion of in-house and partner systems. In recognition of this, JMS provides guaranteed delivery, which ensures that intended consumers will eventually receive a message even if partial failure occurs.

Guaranteed delivery uses a store-and-forward mechanism, which means that the underlying message server will write the incoming messages out to a persistent store if the intended consumers are not currently available. When the receiving applications become available at a later time, the store-and-forward mechanism will deliver all of the messages that the consumers missed while unavailable (see Figure 1-13).

Figure 1-13. Underlying store-and-forward mechanisms guarantee delivery of messages

To summarize, JMS is not just another event service. It was designed to cover a broad range of enterprise applications, including EAI, B2B, push models, etc. Through asynchronous processing, store-and-forward, and guaranteed delivery, it provides high availability capabilities to keep business applications in continuous operation with uninterrupted service. It offers flexibility of integration by providing publish-and-subscribe and point-to-point functionality. Through location transparency and administrative control, it allows for a robust, service-based architecture. And most important, it is extremely easy to learn and use. In the next chapter we will take a look at how simple it is by building our first JMS application.



[1] Multithreading and looser RPC mechanisms like CORBA’s one-way call are options, but these solutions have their own complexities and require very sophisticated development. Threads are expensive when not used wisely, and CORBA one-way calls still require application-level error handling for failure conditions.
