Network Algorithmics, 2nd Edition

Book description

Network Algorithmics: An Interdisciplinary Approach to Designing Fast Networked Devices, Second Edition takes an interdisciplinary approach to the efficient implementation of network devices, offering solutions to the problem of network implementation bottlenecks. In designing a network device, there are dozens of decisions that affect the speed with which it will perform, sometimes for better and sometimes for worse. The book provides a complete and coherent methodology for maximizing speed while meeting network design goals. It is uniquely focused on the seamless integration of data structures, algorithms, operating systems, and hardware/software co-design for high-performance routers/switches and network end systems.

Thoroughly updated based on courses taught by the authors over the past decade, the book lays out the bottlenecks most often encountered at four disparate levels of implementation: protocol, OS, hardware, and architecture. It then develops fifteen principles key to breaking these bottlenecks, systematically applying them to the bottlenecks found in endnodes, interconnect devices, and specialty functions located along the network. Later sections discuss the inherent challenges of modern cloud computing and data center networking.

  • Offers techniques that address common bottlenecks in network devices, including routers, bridges, gateways, endnodes, and Web servers
  • Presents many practical algorithmic concepts that students and readers can work with immediately
  • Revised and updated throughout to discuss the latest developments from the authors’ courses, including measurement algorithmics, randomization, regular expression matching, and software-defined networking
  • Includes a new, rich set of homework exercises and exam questions to facilitate classroom use

Table of contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Dedication
  6. Preface to the second edition
  7. Preface
    1. Audience
    2. What this book is about
    3. Organization of the book
    4. Features
    5. Usage
    6. Why this book was written
    7. Acknowledgments
  8. 15 principles used to overcome network bottlenecks
  9. Part 1: The rules of the game
    1. Introduction
    2. Chapter 1: Introducing network algorithmics
      1. Abstract
      2. 1.1. The problem: network bottlenecks
      3. 1.2. The techniques: network algorithmics
      4. 1.3. Exercise
    3. Chapter 2: Network implementation models
      1. Abstract
      2. 2.1. Protocols
      3. 2.2. Hardware
      4. 2.3. Network device architectures
      5. 2.4. Operating systems
      6. 2.5. Summary
      7. 2.6. Exercises
      8. References
    4. Chapter 3: Fifteen implementation principles
      1. Abstract
      2. 3.1. Motivating the use of principles—updating ternary content-addressable memories
      3. 3.2. Algorithms versus algorithmics
      4. 3.3. Fifteen implementation principles—categorization and description
      5. 3.4. Design versus implementation principles
      6. 3.5. Caveats
      7. 3.6. Summary
      8. 3.7. Exercises
      9. References
    5. Chapter 4: Principles in action
      1. Abstract
      2. 4.1. Buffer validation of application device channels
      3. 4.2. Scheduler for asynchronous transfer mode flow control
      4. 4.3. Route computation using Dijkstra's algorithm
      5. 4.4. Ethernet monitor using bridge hardware
      6. 4.5. Demultiplexing in the x-kernel
      7. 4.6. Tries with node compression
      8. 4.7. Packet filtering in routers
      9. 4.8. Avoiding fragmentation of LSPs
      10. 4.9. Policing traffic patterns
      11. 4.10. Identifying a resource hog
      12. 4.11. Getting rid of the TCP open connection list
      13. 4.12. Acknowledgment withholding
      14. 4.13. Incrementally reading a large database
      15. 4.14. Binary search of long identifiers
      16. 4.15. Video conferencing via asynchronous transfer mode
      17. References
  10. Part 2: Playing with endnodes
    1. Introduction
    2. Chapter 5: Copying data
      1. Abstract
      2. 5.1. Why data copies
      3. 5.2. Reducing copying via local restructuring
      4. 5.3. Avoiding copying using remote DMA
      5. 5.4. Broadening to file systems
      6. 5.5. Broadening beyond copies
      7. 5.6. Broadening beyond data manipulations
      8. 5.7. Conclusions
      9. 5.8. Exercises
      10. References
    3. Chapter 6: Transferring control
      1. Abstract
      2. 6.1. Why control overhead?
      3. 6.2. Avoiding scheduling overhead in networking code
      4. 6.3. Avoiding context-switching overhead in applications
      5. 6.4. Scalable I/O notification
      6. 6.5. Avoiding system calls or kernel bypass
      7. 6.6. Radical restructuring of operating systems
      8. 6.7. Reducing interrupts
      9. 6.8. Conclusions
      10. 6.9. Exercises
      11. References
    4. Chapter 7: Maintaining timers
      1. Abstract
      2. 7.1. Why timers?
      3. 7.2. Model and performance measures
      4. 7.3. Simplest timer schemes
      5. 7.4. Timing wheels
      6. 7.5. Hashed wheels
      7. 7.6. Hierarchical wheels
      8. 7.7. BSD implementation
      9. 7.8. Google Carousel implementation
      10. 7.9. Obtaining finer granularity timers
      11. 7.10. Conclusions
      12. 7.11. Exercises
      13. References
    5. Chapter 8: Demultiplexing
      1. Abstract
      2. 8.1. Opportunities and challenges of early demultiplexing
      3. 8.2. Goals
      4. 8.3. CMU/Stanford packet filter: pioneering packet filters
      5. 8.4. Berkeley packet filter: enabling high-performance monitoring
      6. 8.5. Pathfinder: factoring out common checks
      7. 8.6. Dynamic packet filter: compilers to the rescue
      8. 8.7. Conclusions
      9. 8.8. Exercises
      10. References
    6. Chapter 9: Protocol processing
      1. Abstract
      2. 9.1. Buffer management
      3. 9.2. Cyclic redundancy checks and checksums
      4. 9.3. Generic protocol processing
      5. 9.4. Reassembly
      6. 9.5. Conclusions
      7. 9.6. Exercises
      8. References
  11. Part 3: Playing with routers
    1. Introduction
    2. Chapter 10: Exact-match lookups
      1. Abstract
      2. 10.1. Challenge 1: Ethernet under fire
      3. 10.2. Challenge 2: wire speed forwarding
      4. 10.3. Challenge 3: scaling lookups to higher speeds
      5. 10.4. Summary
      6. 10.5. Exercise
      7. References
    3. Chapter 11: Prefix-match lookups
      1. Abstract
      2. 11.1. Introduction to prefix lookups
      3. 11.2. Finessing lookups
      4. 11.3. Non-algorithmic techniques for prefix matching
      5. 11.4. Unibit tries
      6. 11.5. Multibit tries
      7. 11.6. Level-compressed (LC) tries
      8. 11.7. Lulea-compressed tries
      9. 11.8. Tree bitmap
      10. 11.9. Binary search on ranges
      11. 11.10. Binary search on ranges with initial lookup table
      12. 11.11. Binary search on prefix lengths
      13. 11.12. Linear search on prefix lengths with hardware assist
      14. 11.13. Memory allocation in compressed schemes
      15. 11.14. Fixed-function lookup-chip models
      16. 11.15. Programmable lookup chips and P4
      17. 11.16. Conclusions
      18. 11.17. Exercises
      19. References
    4. Chapter 12: Packet classification
      1. Abstract
      2. 12.1. Why packet classification?
      3. 12.2. Packet-classification problem
      4. 12.3. Requirements and metrics
      5. 12.4. Simple solutions
      6. 12.5. Two-dimensional schemes
      7. 12.6. Approaches to general rule sets
      8. 12.7. Extending two-dimensional schemes
      9. 12.8. Using divide-and-conquer
      10. 12.9. Bit vector linear search
      11. 12.10. Cross-producting
      12. 12.11. Equivalenced cross-producting
      13. 12.12. Decision tree approaches
      14. 12.13. Hybrid algorithms
      15. 12.14. Conclusions
      16. 12.15. Exercises
      17. References
    5. Chapter 13: Switching
      1. Abstract
      2. 13.1. Router versus telephone switches
      3. 13.2. Shared-memory switches
      4. 13.3. Router history: from buses to crossbars
      5. 13.4. The take-a-ticket crossbar scheduler
      6. 13.5. Head-of-line blocking
      7. 13.6. Avoiding HOL blocking via output queuing
      8. 13.7. Avoiding HOL blocking via virtual output queuing
      9. 13.8. Input-queued switching as a bipartite matching problem
      10. 13.9. Parallel iterative matching (PIM)
      11. 13.10. Avoiding randomization with iSLIP
      12. 13.11. Computing near-optimal matchings via learning
      13. 13.12. Sample-and-compare: a stunningly simple adaptive algorithm
      14. 13.13. SERENA: an improved adaptive algorithm
      15. 13.14. The queue-proportional sampling strategy
      16. 13.15. QPS implementation
      17. 13.16. Small-batch QPS and sliding-window QPS
      18. 13.17. Combined input and output queueing
      19. 13.18. Scaling to larger and faster switches
      20. 13.19. Scaling to faster link speeds
      21. 13.20. Conclusions
      22. 13.21. Exercises
      23. References
    6. Chapter 14: Scheduling packets
      1. Abstract
      2. 14.1. Motivation for quality of service
      3. 14.2. Random early detection
      4. 14.3. Approximate fair dropping
      5. 14.4. Token bucket policing
      6. 14.5. Multiple outbound queues and priority
      7. 14.6. A quick detour into reservation protocols
      8. 14.7. Providing bandwidth guarantees
      9. 14.8. Schedulers that provide delay guarantees
      10. 14.9. Generalized processor sharing
      11. 14.10. Weighted fair queueing
      12. 14.11. Worst-case fair weighted fair queueing
      13. 14.12. The data structure and algorithm for efficient GPS clock tracking
      14. 14.13. Implementing WFQ and WF2Q
      15. 14.14. Quick fair queueing (QFQ)
      16. 14.15. Towards programmable packet scheduling
      17. 14.16. Scalable fair queuing
      18. 14.17. Summary
      19. 14.18. Exercises
      20. References
    7. Chapter 15: Routers as distributed systems
      1. Abstract
      2. 15.1. Internal flow control
      3. 15.2. Internal link striping
      4. 15.3. Distributed memory
      5. 15.4. Asynchronous updates
      6. 15.5. Conclusions
      7. 15.6. Exercises
      8. References
  12. Part 4: Endgame
    1. Introduction
    2. Chapter 16: Measuring network traffic
      1. Abstract
      2. 16.1. Why measurement is hard
      3. 16.2. Reducing SRAM width using DRAM backing store
      4. 16.3. A randomized counter scheme
      5. 16.4. Maintaining active counters using BRICK
      6. 16.5. Extending BRICK for maintaining associated states
      7. 16.6. Reducing counter width using approximate counting
      8. 16.7. Reducing counters using threshold aggregation
      9. 16.8. Reducing counters using flow counting
      10. 16.9. Reducing processing using sampled NetFlow
      11. 16.10. Reducing reporting using sampled charging
      12. 16.11. Correlating measurements using trajectory sampling
      13. 16.12. A concerted approach to accounting
      14. 16.13. Computing traffic matrices
      15. 16.14. Sting as an example of passive measurement
      16. 16.15. Generating better traffic logs via data streaming
      17. 16.16. Counting the number of distinct flows
      18. 16.17. Detection of heavy hitters
      19. 16.18. Estimation of flow-size distribution
      20. 16.19. The Tug-of-War algorithm for estimating F2
      21. 16.20. Conclusion
      22. 16.21. Exercises
      23. References
    3. Chapter 17: Network security
      1. Abstract
      2. 17.1. Searching for multiple strings in packet payloads
      3. 17.2. Approximate string matching
      4. 17.3. IP traceback via probabilistic marking
      5. 17.4. IP traceback via logging
      6. 17.5. Detecting worms
      7. 17.6. EarlyBird system for worm detection
      8. 17.7. Carousel: scalable logging for intrusion prevention systems
      9. 17.8. Conclusion
      10. 17.9. Exercises
      11. References
    4. Chapter 18: Conclusions
      1. Abstract
      2. 18.1. What this book has been about
      3. 18.2. What network algorithmics is about
      4. 18.3. Network algorithmics and real products
      5. 18.4. Network algorithmics: back to the future
      6. 18.5. The inner life of a networking device
      7. References
  13. Appendix A: Detailed models
    1. A.1. TCP and IP
    2. A.2. Hardware models
    3. A.3. Switching theory
    4. A.4. The interconnection network zoo
    5. References
  14. References
    1. References
  15. Index

Product information

  • Title: Network Algorithmics, 2nd Edition
  • Author(s): George Varghese, Jun Xu
  • Release date: November 2022
  • Publisher(s): Morgan Kaufmann
  • ISBN: 9780128099865