Hands-On Concurrency with Rust

Book description

Get to grips with the demands of modern software by making effective use of Rust's powerful memory-safety guarantees.

About This Book
  • Measure and improve the sequential performance characteristics of your software
  • Understand the use of operating system processes in a high-scale concurrent system
  • Learn about the various coordination methods available in the standard library
Who This Book Is For

This book is aimed at software engineers with a basic understanding of Rust who want to safely exploit the parallel and concurrent nature of modern computing environments.

What You Will Learn
  • Probe your programs for performance and accuracy issues
  • Create your own threading and multi-processing environment in Rust
  • Use coarse locks from Rust's Standard library
  • Solve common synchronization problems or avoid synchronization using atomic programming
  • Build lock-free/wait-free structures in Rust and understand their implementations in the crates ecosystem
  • Leverage Rust's memory model and type system to build safety properties into your parallel programs
  • Understand the new features of the Rust programming language to ease the writing of parallel programs
In Detail

Most programming languages can really complicate things, especially with regard to unsafe memory access. The burden on you, the programmer, lies across two domains: understanding the modern machine and your language's pain points. This book will teach you how to manage program performance on modern machines and build fast, memory-safe, and concurrent software in Rust. It starts with the fundamentals of Rust and a discussion of machine architecture concepts. You will be taken through ways to systematically measure and improve the performance of Rust code and to write collections with confidence. You will learn about the Sync and Send traits as applied to threads, and coordinate thread execution with locks, atomic primitives, data parallelism, and more.

The book will show you how to efficiently combine Rust with C code and explore the functionality of various crates for multithreaded applications. It explores implementations in depth: you will learn how a mutex works and build several yourself. You will master the radically different approaches that exist in the ecosystem for structuring and managing high-scale systems.

By the end of the book, you will feel comfortable with designing safe, consistent, parallel, and high-performance applications in Rust.

Style and approach

Readers will be taken through various ways to improve the performance of their Rust code.


Table of contents

  1. Title Page
  2. Copyright and Credits
    1. Hands-On Concurrency with Rust
  3. Dedication
  4. Packt Upsell
    1. Why subscribe?
    2. PacktPub.com
  5. Contributors
    1. About the author
    2. About the reviewer
    3. Packt is searching for authors like you
  6. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
      1. Download the example code files
      2. Conventions used
    4. Get in touch
      1. Reviews
  7. Preliminaries – Machine Architecture and Getting Started with Rust
    1. Technical requirements
    2. The machine
      1. The CPU
      2. Memory and caches
        1. Memory model
    3. Getting set up
      1. The interesting part
      2. Debugging Rust programs
    4. Summary
    5. Further reading
  8. Sequential Rust Performance and Testing
    1. Technical requirements
    2. Diminishing returns
    3. Performance
      1. Standard library HashMap
      2. Naive HashMap
        1. Testing with QuickCheck
        2. Testing with American Fuzzy Lop
        3. Performance testing with Criterion
        4. Inspecting with the Valgrind Suite
        5. Inspecting with Linux perf
      3. A better naive HashMap
    4. Summary
    5. Further reading
  9. The Rust Memory Model – Ownership, References and Manipulation
    1. Technical requirements
    2. Memory layout
      1. Pointers to memory
        1. Allocating and deallocating memory
        2. The size of a type
        3. Static and dynamic dispatch
        4. Zero sized types
        5. Boxed types
        6. Custom allocators
    3. Implementations
      1. Option
      2. Cell and RefCell
      3. Rc
      4. Vec
    4. Summary
    5. Further reading
  10. Sync and Send – the Foundation of Rust Concurrency
    1. Technical requirements
    2. Sync and Send
      1. Racing threads
        1. The flaw of the Ring
        2. Getting back to safety
          1. Safety by exclusion
        3. Using MPSC
          1. A telemetry server
    3. Summary
    4. Further reading
  11. Locks – Mutex, Condvar, Barriers and RWLock
    1. Technical requirements
    2. Read many, write exclusive locks – RwLock
    3. Blocking until conditions change – condvar
    4. Blocking until the gang's all here – barrier
    5. More mutexes, condvars, and friends in action
      1. The rocket preparation problem
      2. The rope bridge problem
    6. Hopper – an MPSC specialization
      1. The problem
      2. Hopper in use
      3. A conceptual view of hopper
      4. The deque
      5. The Receiver
      6. The Sender
      7. Testing concurrent data structures
        1. QuickCheck and loops
        2. Searching for crashes with AFL
        3. Benchmarking
    7. Summary
    8. Further reading
  12. Atomics – the Primitives of Synchronization
    1. Technical requirements
    2. Linearizability
    3. Memory ordering – happens-before and synchronizes-with
      1. Ordering::Relaxed
      2. Ordering::Acquire
      3. Ordering::Release
      4. Ordering::AcqRel
      5. Ordering::SeqCst
    4. Building synchronization
      1. Mutexes
        1. Compare and set mutex
      2. An incorrect atomic queue
      3. Options to correct the incorrect queue
      4. Semaphore
      5. Binary semaphore, or, a less wasteful mutex
    5. Summary
    6. Further reading
  13. Atomics – Safely Reclaiming Memory
    1. Technical requirements
    2. Approaches to memory reclamation
      1. Reference counting
        1. Tradeoffs
      2. Hazard pointers
        1. A hazard-pointer Treiber stack
        2. The hazard of Nightly
        3. Exercising the hazard-pointer Treiber stack
        4. Tradeoffs
      3. Epoch-based reclamation
        1. An epoch-based Treiber stack
        2. crossbeam_epoch::Atomic
        3. crossbeam_epoch::Guard::defer
        4. crossbeam_epoch::Local::pin
        5. Exercising the epoch-based Treiber stack
        6. Tradeoffs
    3. Summary
    4. Further reading
  14. High-Level Parallelism – Threadpools, Parallel Iterators and Processes
    1. Technical requirements
    2. Thread pooling
      1. Slowloris – attacking thread-per-connection servers
        1. The server
        2. The client
        3. A thread-pooling server
      2. Looking into thread pool
      3. The Ethernet sniffer
      4. Iterators
        1. Smallcheck iteration
        2. rayon – parallel iterators
      5. Data parallelism and OS processes – evolving corewars players
        1. Corewars
      6. Feruscore – a Corewars evolver
        1. Representing the domain
        2. Exploring the source
          1. Instructions
          2. Individuals
          3. Mutation and reproduction
          4. Competition – calling out to pMARS
          5. Main
        3. Running feruscore
    3. Summary
    4. Further reading
  15. FFI and Embedding – Combining Rust and Other Languages
    1. Embedding C into Rust – feruscore without processes
      1. The MARS C interface
      2. Creating C-structs from Rust
      3. Calling C functions
        1. Managing cross-language ownership
        2. Running the simulation
        3. Fuzzing the simulation
        4. The feruscore executable
    2. Embedding Lua into Rust
    3. Embedding Rust
      1. Into C
        1. The Rust side
        2. The C side
      2. Into Python
      3. Into Erlang/Elixir
    4. Summary
    5. Further reading
  16. Futurism – Near-Term Rust
    1. Technical requirements
    2. Near-term improvements
      1. SIMD
      2. Hex encoding
      3. Futures and async/await
      4. Specialization
    3. Interesting projects
      1. Fuzzing
      2. Seer, a symbolic execution engine for Rust
    4. The community
    5. Should I use unsafe?
    6. Summary
    7. Further reading
  17. Other Books You May Enjoy
    1. Leave a review - let other readers know what you think

Product information

  • Title: Hands-On Concurrency with Rust
  • Author(s): Brian L. Troutwine
  • Release date: May 2018
  • Publisher(s): Packt Publishing
  • ISBN: 9781788399975