High Performance Parallelism Pearls Volume One

Book description

High Performance Parallelism Pearls shows how to leverage parallelism on processors and coprocessors with the same programming model, illustrating the most effective ways to tap the computational potential of systems that combine Intel Xeon Phi coprocessors with Intel Xeon or other multicore processors. The book includes examples of successful programming efforts drawn from industries and domains such as chemistry, engineering, and environmental science. Each chapter in this edited work explains in detail the programming techniques used while showing high performance results on both Intel Xeon Phi coprocessors and multicore processors. Dozens of new examples and case studies illustrate "success stories" that demonstrate not just the features of these powerful systems, but also how to leverage parallelism across heterogeneous systems.

  • Promotes consistent standards-based programming, showing in detail how to code for high performance on multicore processors and Intel® Xeon Phi™ coprocessors (a minimal sketch of this style follows the list)
  • Illustrates parallel optimizations that modernize real-world codes, with examples drawn from multiple vertical domains
  • Provides source code for download to facilitate further exploration
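
As a flavor of the standards-based style the book promotes, here is a minimal OpenMP sketch of our own (not code from the book): a single source file whose one directive expresses both thread parallelism and SIMD vectorization, so the same loop can be compiled for a multicore Intel Xeon processor or built natively for an Intel Xeon Phi coprocessor.

    /* saxpy.c - one standards-based source for both targets.
       Host build (classic Intel compiler):  icc -qopenmp -O2 saxpy.c
       Native Xeon Phi (KNC) build:          icc -qopenmp -mmic -O2 saxpy.c */
    #include <stdio.h>

    #define N 1000000

    static float x[N], y[N];

    int main(void) {
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* OpenMP 4.0: a single directive requests both thread parallelism
           and SIMD vectorization; the compiler and runtime map it onto
           however many cores and vector lanes the target provides. */
        #pragma omp parallel for simd
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]); /* expect 4.000000 */
        return 0;
    }

The point of the sketch is portability of effort rather than any particular kernel: tune the data layout and the directives once, and both the processor and the coprocessor benefit, which is the recurring theme of the chapters listed below.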

Table of contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Contributors
  6. Acknowledgments
  7. Foreword
    1. Humongous computing needs: Science years in the making
    2. Open standards
    3. Keen on many-core architecture
    4. Xeon Phi is born: Many cores, excellent vector ISA
    5. Learn highly scalable parallel programming
    6. Future demands grow: Programming models matter
  8. Preface
    1. Inspired by 61 cores: A new era in programming
  9. Chapter 1: Introduction
    1. Abstract
    2. Learning from successful experiences
    3. Code modernization
    4. Modernize with concurrent algorithms
    5. Modernize with vectorization and data locality
    6. Understanding power usage
    7. ISPC and OpenCL anyone?
    8. Intel Xeon Phi coprocessor specific
    9. Many-core, neo-heterogeneous
    10. No “Xeon Phi” in the title, neo-heterogeneous programming
    11. The future of many-core
    12. Downloads
  10. Chapter 2: From “Correct” to “Correct & Efficient”: A Hydro2D Case Study with Godunov’s Scheme
    1. Abstract
    2. Scientific computing on contemporary computers
    3. A numerical method for shock hydrodynamics
    4. Features of modern architectures
    5. Paths to performance
    6. Summary
  11. Chapter 3: Better Concurrency and SIMD on HBM
    1. Abstract
    2. The application: HIROMB-BOOS-Model
    3. Key usage: DMI
    4. HBM execution profile
    5. Overview of the optimization of HBM
    6. Data structures: Locality done right
    7. Thread parallelism in HBM
    8. Data parallelism: SIMD vectorization
    9. Results
    10. Profiling details
    11. Scaling on processor vs. coprocessor
    12. Contiguous attribute
    13. Summary
  12. Chapter 4: Optimizing for Reacting Navier-Stokes Equations
    1. Abstract
    2. Getting started
    3. Version 1.0: Baseline
    4. Version 2.0: ThreadBox
    5. Version 3.0: Stack memory
    6. Version 4.0: Blocking
    7. Version 5.0: Vectorization
    8. Intel Xeon Phi coprocessor results
    9. Summary
  13. Chapter 5: Plesiochronous Phasing Barriers
    1. Abstract
    2. What can be done to improve the code?
    3. What more can be done to improve the code?
    4. Hyper-Thread Phalanx
    5. What is nonoptimal about this strategy?
    6. Coding the Hyper-Thread Phalanx
    7. Back to work
    8. Data alignment
    9. The plesiochronous phasing barrier
    10. Let us do something to recover this wasted time
    11. A few “left to the reader” possibilities
    12. Xeon host performance improvements similar to Xeon Phi
    13. Summary
  14. Chapter 6: Parallel Evaluation of Fault Tree Expressions
    1. Abstract
    2. Motivation and background
    3. Example implementation
    4. Other considerations
    5. Summary
  15. Chapter 7: Deep-Learning Numerical Optimization
    1. Abstract
    2. Fitting an objective function
    3. Objective functions and principal components analysis
    4. Software and example data
    5. Training data
    6. Runtime results
    7. Scaling results
    8. Summary
  16. Chapter 8: Optimizing Gather/Scatter Patterns
    1. Abstract
    2. Gather/scatter instructions in Intel® architecture
    3. Gather/scatter patterns in molecular dynamics
    4. Optimizing gather/scatter patterns
    5. Summary
  17. Chapter 9: A Many-Core Implementation of the Direct N-Body Problem
    1. Abstract
    2. N-body simulations
    3. Initial solution
    4. Theoretical limit
    5. Reduce the overheads, align your data
    6. Optimize the memory hierarchy
    7. Improving our tiling
    8. What does all this mean to the host version?
    9. Summary
  18. Chapter 10: N-Body Methods
    1. Abstract
    2. Fast N-body methods and direct N-body kernels
    3. Applications of N-body methods
    4. Direct N-body code
    5. Performance results
    6. Summary
  19. Chapter 11: Dynamic Load Balancing Using OpenMP 4.0
    1. Abstract
    2. Maximizing hardware usage
    3. The N-body kernel
    4. The offloaded version
    5. A first processor combined with coprocessor version
    6. Version for processor with multiple coprocessors
  20. Chapter 12: Concurrent Kernel Offloading
    1. Abstract
    2. Setting the context
    3. Concurrent kernels on the coprocessor
    4. Force computation in PD using concurrent kernel offloading
    5. The bottom line
  21. Chapter 13: Heterogeneous Computing with MPI
    1. Abstract
    2. Acknowledgments
    3. MPI in modern clusters
    4. MPI task location
    5. Selection of the DAPL providers
    6. Summary
  22. Chapter 14: Power Analysis on the Intel® Xeon Phi™ Coprocessor
    1. Abstract
    2. Power analysis 101
    3. Measuring power and temperature with software
    4. Hardware-based power analysis methods
    5. Summary
  23. Chapter 15: Integrating Intel Xeon Phi Coprocessors into a Cluster Environment
    1. Abstract
    2. Acknowledgments
    3. Early explorations
    4. Beacon system history
    5. Beacon system architecture
    6. Intel MPSS installation procedure
    7. Setting up the resource and workload managers
    8. Health checking and monitoring
    9. Scripting common commands
    10. User software environment
    11. Future directions
    12. Summary
  24. Chapter 16: Supporting Cluster File Systems on Intel® Xeon Phi™ Coprocessors
    1. Abstract
    2. Network configuration concepts and goals
    3. Coprocessor file systems support
    4. Summary
  25. Chapter 17: NWChem: Quantum Chemistry Simulations at Scale
    1. Abstract
    2. Introduction
    3. Overview of single-reference CC formalism
    4. NWChem software architecture
    5. Engineering an offload solution
    6. Offload architecture
    7. Kernel optimizations
    8. Performance evaluation
    9. Summary
    10. Acknowledgments
  26. Chapter 18: Efficient Nested Parallelism on Large-Scale Systems
    1. Abstract
    2. Motivation
    3. The benchmark
    4. Baseline benchmarking
    5. Pipeline approach—flat_arena class
    6. Intel® TBB user-managed task arenas
    7. Hierarchical approach—hierarchical_arena class
    8. Performance evaluation
    9. Implications for NUMA architectures
    10. Summary
  27. Chapter 19: Performance Optimization of Black-Scholes Pricing
    1. Abstract
    2. Financial market model basics and the Black-Scholes formula
    3. Case study
    4. Summary
  28. Chapter 20: Data Transfer Using the Intel COI Library
    1. Abstract
    2. First steps with the Intel COI library
    3. COI buffer types and transfer performance
    4. Applications
    5. Summary
  29. Chapter 21: High-Performance Ray Tracing
    1. Abstract
    2. Background
    3. Vectorizing ray traversal
    4. The Embree ray tracing kernels
    5. Using Embree in an application
    6. Performance
    7. Summary
  30. Chapter 22: Portable Performance with OpenCL
    1. Abstract
    2. The dilemma
    3. A brief introduction to OpenCL
    4. A matrix multiply example in OpenCL
    5. OpenCL and the Intel Xeon Phi coprocessor
    6. Matrix multiply performance results
    7. Case study: Molecular docking
    8. Results: Portable performance
    9. Related work
    10. Summary
  31. Chapter 23: Characterization and Optimization Methodology Applied to Stencil Computations
    1. Abstract
    2. Introduction
    3. Performance evaluation
    4. Standard optimizations
    5. Summary
  32. Chapter 24: Profiling-Guided Optimization
    1. Abstract
    2. Matrix transposition in computer science
    3. Tools and methods
    4. “Serial”: Our original in-place transposition
    5. “Parallel”: Adding parallelism with OpenMP
    6. “Tiled”: Improving data locality
    7. “Regularized”: Microkernel with multiversioning
    8. “Planned”: Exposing more parallelism
    9. Summary
  33. Chapter 25: Heterogeneous MPI Application Optimization with ITAC
    1. Abstract
    2. Asian options pricing
    3. Application design
    4. Synchronization in heterogeneous clusters
    5. Finding bottlenecks with ITAC
    6. Setting up ITAC
    7. Unbalanced MPI run
    8. Manual workload balance
    9. Dynamic “Boss-Workers” load balancing
    10. Conclusion
  34. Chapter 26: Scalable Out-of-Core Solvers on a Cluster
    1. Abstract
    2. Introduction
    3. An OOC factorization based on ScaLAPACK
    4. Porting from NVIDIA GPU to the Intel Xeon Phi coprocessor
    5. Numerical results
    6. Conclusions and future work
    7. Acknowledgments
  35. Chapter 27: Sparse Matrix-Vector Multiplication: Parallelization and Vectorization
    1. Abstract
    2. Acknowledgments
    3. Background
    4. Sparse matrix data structures
    5. Parallel SpMV multiplication
    6. Vectorization on the Intel Xeon Phi coprocessor
    7. Evaluation
    8. Summary
  36. Chapter 28: Morton Order Improves Performance
    1. Abstract
    2. Improving cache locality by data ordering
    3. Improving performance
    4. Matrix transpose
    5. Matrix multiply
    6. Summary
  37. Author Index
  38. Subject Index

Product information

  • Title: High Performance Parallelism Pearls Volume One
  • Author(s): James Reinders, James Jeffers
  • Release date: November 2014
  • Publisher(s): Morgan Kaufmann
  • ISBN: 9780128021996