Managing Threads at Runtime

In addition to changing the running state of your application threads, the Java API allows you to do some basic thread management at runtime. The functionality provided includes thread synchronization, organization of threads into thread groups, and influencing the thread scheduler by setting thread priorities. Before we see how all of these can come into play in a distributed application, let’s go over them briefly so that we have a feeling for what kinds of capabilities they provide.

Synchronizing Threads

When you have multiple threads in an application, it sometimes becomes necessary to synchronize them with respect to a particular method or block of code. This usually occurs when multiple threads are updating the same data asynchronously. To ensure that these changes are consistent throughout the application, we need to make sure that one thread can’t start updating the data before another thread is finished reading or updating the same data. If we let this occur, then the data will be left in an inconsistent state, and one or both threads will not get the correct result.

Java allows you to define critical regions of code using the synchronized statement. A method or block of code is synchronized on a class, object, or array, depending on the context of the synchronized keyword. If you use the synchronized modifier on a static method of a class, for example, then the Java virtual machine obtains an exclusive “lock” on the class before the method is executed. Any thread that attempts to enter this critical section has to acquire the lock first. If another thread is already executing in the critical section, the arriving thread blocks until the running thread exits the critical section and the lock on the class is released.
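As a minimal sketch (the HitCounter class and its field are illustrative names, not taken from the text), a synchronized static method acquires the lock on the class, so only one thread at a time can execute it:

    public class HitCounter {
        private static int hits = 0;

        // The JVM acquires the lock associated with the HitCounter class
        // before running this method, so concurrent updates are serialized.
        public static synchronized void increment() {
            hits++;
        }

        public static synchronized int getHits() {
            return hits;
        }
    }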

If a non-static method is declared synchronized, then the virtual machine obtains a lock on the object on which the method is invoked. If you define a synchronized block of code, then you have to specify the class, object, or array on which to synchronize.
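For instance (again a hedged sketch with made-up names), an instance method can be declared synchronized to lock on the object it is invoked on, while a synchronized block names the object to lock explicitly:

    public class Account {
        private double balance = 0.0;
        private final Object log = new Object();

        // Synchronizes on the Account instance the method is invoked on.
        public synchronized void deposit(double amount) {
            balance += amount;
        }

        public void record(String entry) {
            // Synchronizes on an explicitly named object rather than on "this",
            // so it doesn't exclude callers of deposit().
            synchronized (log) {
                // ... append the entry to a log guarded by the log object ...
            }
        }
    }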

Thread Groups

The Java API also lets you organize threads into groups, represented by the ThreadGroup class. A ThreadGroup can contain individual threads, or other thread groups, to create a thread hierarchy. The benefit of thread groups is a mixture of security and convenience. Thread groups offer some security because threads in a group can't access or modify their group's parent group, or the threads it contains. This allows you to isolate certain threads from other threads and prevent them from monitoring or modifying each other.

Convenience comes from the methods provided on the ThreadGroup class for performing “batch” operations on the group of threads. The interrupt() method on ThreadGroup interrupts all of the threads in the group, for example. Similar methods exist for suspending, resuming, and stopping the threads in the group.
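A brief sketch of how this looks (the group and thread names are illustrative): threads are placed in a group via the Thread constructor, and the group can then be operated on as a unit, for example to interrupt every thread it contains:

    public class GroupDemo {
        public static void main(String[] args) throws InterruptedException {
            ThreadGroup workers = new ThreadGroup("workers");

            for (int i = 0; i < 3; i++) {
                Runnable task = new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(60000);   // stand-in for real work
                        } catch (InterruptedException e) {
                            // fall through and exit when the group is interrupted
                        }
                    }
                };
                // Each thread is created as a member of the "workers" group.
                new Thread(workers, task, "worker-" + i).start();
            }

            Thread.sleep(100);     // let the workers get started
            workers.interrupt();   // one call interrupts every thread in the group
        }
    }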

Priorities

The Java virtual machine is a process running under the operating system for the platform it’s on. The operating system is responsible for allocating CPU time among the processes running on the system. When CPU time is allocated to the Java runtime, the virtual machine is responsible for allocating CPU time to each of the threads in the Java process. How much CPU time is given to a thread, and when, is determined by the virtual machine using a simple scheduling algorithm called fixed-priority scheduling.

When a Java process starts, there are one or more threads that are in the runnable state (i.e., not in the stopped state described earlier). These threads all need to use a CPU. The Java runtime chooses the highest-priority thread to run first. If all of the threads have the same priority, then a thread is chosen using a round-robin scheme. The currently running thread will continue to run until it yields the CPU, or a higher-priority thread becomes runnable (e.g., is created and its start() method is called), or until the CPU time slice allocated to the thread runs out (on systems that support thread time-slicing). When a thread loses the CPU, the next thread to run is chosen using the same algorithm that was used to pick the first thread: highest priority wins, or if there is more than one thread with the highest priority, one is picked in round-robin fashion.
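As an illustrative sketch (the thread variables here are hypothetical), priorities are set with setPriority(), using values between Thread.MIN_PRIORITY and Thread.MAX_PRIORITY; new threads default to Thread.NORM_PRIORITY:

    Thread background = new Thread(new Runnable() {
        public void run() {
            // long-running, low-urgency work
        }
    });
    // A hint to the scheduler; actual behavior still depends on the platform.
    background.setPriority(Thread.MIN_PRIORITY);
    background.start();

    Thread urgent = new Thread(new Runnable() {
        public void run() {
            // latency-sensitive work
        }
    });
    urgent.setPriority(Thread.MAX_PRIORITY);
    urgent.start();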

All this means that there is no guarantee that the highest priority thread is running at any given time during the life of a process. Even if you ensure that one thread in your process has a higher priority than all the others, that thread might lose control of the CPU if it’s suspended by some external agent, or if it yields the CPU itself, or if the underlying platform implements thread time-slicing and its time slice runs out. So thread priorities should only be used to influence the relative runtime behavior of the threads in your process, and shouldn’t be used to implement synchronized interactions between threads. If one thread has to finish before another one can complete its job, then you should implement some kind of completion flag for the second thread to check, or use wait() and notify() to synchronize the threads, rather than giving the first thread a higher priority than the second. Depending on the number of CPU cycles each thread needs to finish, and whether the Java runtime is running on a time-slicing system or not, the second thread could still finish before the first, even with its lower priority.
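For example (a minimal sketch with hypothetical names), a completion flag guarded by wait() and notify() makes the ordering explicit instead of relying on priorities:

    public class Completion {
        private boolean done = false;

        // Called by the first thread when its work is finished.
        public synchronized void markDone() {
            done = true;
            notifyAll();
        }

        // Called by the second thread before it proceeds.
        public synchronized void awaitDone() throws InterruptedException {
            while (!done) {
                wait();   // releases the lock and blocks until notified
            }
        }
    }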
