
PThreads Programming

Book Description

Computers are just as busy as the rest of us nowadays. They have lots of tasks to do at once, and need some cleverness to get them all done at the same time. That's why threads are seen more and more often as a new model for programming.

Threads have been available for some time. The Mach operating system, the Distributed Computing Environment (DCE), and Windows NT all feature threads. One advantage of most UNIX implementations, as well as DCE, is that they conform to a recently ratified POSIX standard (originally 1003.4a, now 1003.1c), which allows your programs to be portable between them. POSIX threads are commonly known as pthreads, after the prefix that begins the names of all the function calls. The standard is supported by Solaris, OSF/1, AIX, and several other UNIX-based operating systems.

The idea behind threads programming is to have multiple tasks running concurrently within the same program. They can share a single CPU as processes do, or take advantage of multiple CPUs when available. In either case, they provide a clean way to divide the tasks of a program while sharing data.

A window interface can read input on dozens of different buttons, each responsible for a separate task. A network server has to accept simultaneous calls from many clients, providing each with reasonable response time. A multiprocessor runs a number-crunching program on several CPUs at once, combining the results when all are done. All these kinds of applications can benefit from threads.

In this book you will learn not only what the pthread calls are, but when it is a good idea to use threads and how to make them efficient (which is the whole reason for using threads in the first place). The authors delve into performance issues, comparing threads to processes, contrasting kernel threads to user threads, and showing how to measure speed. They also describe in a simple, clear manner what all the advanced features are for, and how threads interact with the rest of the UNIX system.

Topics include:

  • Basic design techniques
  • Mutexes, conditions, and specialized synchronization techniques
  • Scheduling, priorities, and other real-time issues
  • Cancellation
  • UNIX libraries and re-entrant routines
  • Signals
  • Debugging tips
  • Measuring performance
  • Special considerations for the Distributed Computing Environment (DCE)

Table of Contents

  1. A Note Regarding Supplemental Files
  2. Examples
  3. Preface
    1. Organization
    2. Example Programs
      1. FTP
    3. Typographical Conventions
    4. Acknowledgments
  4. 1. Why Threads?
    1. What Are Pthreads?
    2. Potential Parallelism
    3. Specifying Potential Parallelism in a Concurrent Programming Environment
      1. UNIX Concurrent Programming: Multiple Processes
        1. Creating a new process: fork
      2. Pthreads Concurrent Programming: Multiple Threads
        1. Creating a new thread: pthread_create
        2. Threads are peers
    4. Parallel vs. Concurrent Programming
    5. Synchronization
      1. Sharing Process Resources
      2. Communication
      3. Scheduling
    6. Who Am I? Who Are You?
    7. Terminating Thread Execution
      1. Exit Status and Return Values
      2. Pthreads Library Calls and Errors
    8. Why Use Threads Over Processes?
    9. A Structured Programming Environment
    10. Choosing Which Applications to Thread
  5. 2. Designing Threaded Programs
    1. Suitable Tasks for Threading
    2. Models
      1. Boss/Worker Model
      2. Peer Model
      3. Pipeline Model
    3. Buffering Data Between Threads
    4. Some Common Problems
    5. Performance
    6. Example: An ATM Server
      1. The Serial ATM Server
        1. Handling asynchronous events: blocking with select
        2. Handling file I/O: blocking with read/write
      2. The Multithreaded ATM Server
        1. Model: boss/worker model
        2. The boss thread
        3. Dynamically detaching a thread
        4. A worker thread
        5. Synchronization: what’s needed
        6. Future enhancements
    7. Example: A Matrix Multiplication Program
      1. The Serial Matrix-Multiply Program
      2. The Multithreaded Matrix-Multiply Program
        1. Passing data to a new thread
        2. Synchronization in the matrix-multiply program
  6. 3. Synchronizing Pthreads
    1. Selecting the Right Synchronization Tool
    2. Mutex Variables
      1. Using Mutexes
      2. Error Detection and Return Values
      3. Using pthread_mutex_trylock
      4. When Other Tools Are Better
      5. Some Shortcomings of Mutexes
      6. Contention for a Mutex
      7. Example: Using Mutexes in a Linked List
      8. Complex Data Structures and Lock Granularity
      9. Requirements and Goals for Synchronization
      10. Access Patterns and Granularity
      11. Locking Hierarchies
      12. Sharing a Mutex Among Processes
    3. Condition Variables
      1. Using a Mutex with a Condition Variable
      2. When Many Threads Are Waiting
      3. Checking the Condition on Wake Up: Spurious Wake Ups
      4. Condition Variable Attributes
      5. Condition Variables and UNIX Signals
      6. Condition Variables and Cancellation
    4. Reader/Writer Locks
    5. Synchronization in the ATM Server
      1. Synchronizing Access to Account Data
      2. Limiting the Number of Worker Threads
      3. Synchronizing a Server Shutdown
    6. Thread Pools
      1. An ATM Server Example That Uses a Thread Pool
        1. Initializing a thread pool
        2. Checking for work
        3. Adding work
        4. Deleting a thread pool
        5. Adapting the atm_server_init and main routines
  7. 4. Managing Pthreads
    1. Setting Thread Attributes
      1. Setting a Thread’s Stack Size
      2. Specifying the Location of a Thread’s Stack
      3. Setting a Thread’s Detached State
      4. Setting Multiple Attributes
      5. Destroying a Thread Attribute Object
    2. The pthread_once Mechanism
      1. Example: The ATM Server’s Communication Module
        1. Using a statically initialized mutex
        2. Using the pthread_once mechanism
    3. Keys: Using Thread-Specific Data
      1. Initializing a Key: pthread_key_create
      2. Associating Data with a Key
      3. Retrieving Data from a Key
      4. Destructors
    4. Cancellation
      1. The Complication with Cancellation
      2. Cancelability Types and States
      3. Cancellation Points: More on Deferred Cancellation
      4. A Simple Cancellation Example
        1. The bullet_proof thread: no effect
        2. The ask_for_it thread: deferred cancellation
        3. The sitting_duck thread: asynchronous cancellation
      5. Cleanup Stacks
      6. Cancellation in the ATM Server
        1. Aborting a deposit
    5. Scheduling Pthreads
      1. Scheduling Priority and Policy
      2. Scheduling Scope and Allocation Domains
      3. Runnable and Blocked Threads
      4. Scheduling Priority
      5. Scheduling Policy
      6. Using Priorities and Policies
      7. Setting Scheduling Policy and Priority
      8. Inheritance
      9. Scheduling in the ATM Server
    6. Mutex Scheduling Attributes
      1. Priority Ceiling
      2. Priority Inheritance
      3. The ATM Example and Priority Inversion
  8. 5. Pthreads and UNIX
    1. Threads and Signals
      1. Traditional Signal Processing
        1. Sending signals and waiting for signals
        2. Using a signal mask to block signals
      2. Signal Processing in a Multithreaded World
        1. Synchronously generated signals
        2. Asynchronously generated signals
        3. Per-thread signal masks
        4. Per-process signal actions
        5. Putting it all together
      3. Threads in Signal Handlers
      4. A Simple Example
      5. Some Signal Issues
      6. Handling Signals in the ATM Example
    2. Threadsafe Library Functions and System Calls
      1. Threadsafe and Reentrant Functions
      2. Example of Thread-Unsafe and Threadsafe Versions of the Same Function
      3. Functions That Return Pointers to Static Data
      4. Library Use of errno
      5. The Pthreads Standard Specifies Which Functions Must Be Threadsafe
        1. Alternative interfaces for functions that return static data
        2. Additional routines for performance considerations
        3. File-locking functions for threads
        4. Where are the threadsafe functions?
      6. Using Thread-Unsafe Functions in a Multithreaded Program
    3. Cancellation-Safe Library Functions and System Calls
      1. Asynchronous Cancellation-Safe Functions
      2. Cancellation Points in System and Library Calls
    4. Thread-Blocking Library Functions and System Calls
    5. Threads and Process Management
      1. Calling fork from a Thread
        1. Fork-handling stacks
      2. Calling exec from a Thread
      3. Process Exit and Threads
    6. Multiprocessor Memory Synchronization
  9. 6. Practical Considerations
    1. Understanding Pthreads Implementation
      1. Two Worlds
      2. Two Kinds of Threads
      3. Who’s Providing the Thread?
        1. User-space Pthreads implementations
        2. Kernel thread–based Pthreads implementations
        3. Two-level scheduler Pthreads implementations: the best of both worlds
    2. Debugging
      1. Deadlock
      2. Race Conditions
      3. Event Ordering
      4. Less Is Better
      5. Trace Statements
      6. Debugger Support for Threads
      7. Example: Debugging the ATM Server
        1. Debugging a deadlock caused by a missing unlock
        2. Debugging a race condition caused by a missing lock
    3. Performance
      1. The Costs of Sharing Too Much—Locking
      2. Thread Overhead
        1. Thread context switches
      3. Synchronization Overhead
      4. How Do Your Threads Spend Their Time?
      5. Performance in the ATM Server Example
        1. Performance depends on input workload: increasing clients and contention
        2. Performance depends on a good locking strategy
        3. Performance depends on the type of work threads do
        4. Key performance issues between using threads and using processes
    4. Conclusion
  10. A. Pthreads and DCE
    1. The Structure of a DCE Server
    2. What Does the DCE Programmer Have to Do?
    3. Example: The ATM as a DCE Server
  11. B. Pthreads Draft 4 vs. the Final Standard
    1. Detaching a Thread
    2. Mutex Variables
    3. Condition Variables
    4. Thread Attributes
    5. The pthread_once Function
    6. Keys
    7. Cancellation
    8. Scheduling
    9. Signals
    10. Threadsafe System Interfaces
    11. Error Reporting
    12. System Interfaces and Cancellation-Safety
    13. Process-Blocking Calls
    14. Process Management
  12. C. Pthreads Quick Reference
  13. D. About the Authors
  14. Index
  15. About the Authors
  16. Colophon
  17. Copyright