Hands-On GPU Computing with Python

Book description

Explore the capabilities of GPUs for solving high performance computational problems

Key Features

  • Understand effective synchronization strategies for faster processing using GPUs
  • Write parallel processing scripts with PyCUDA and PyOpenCL
  • Learn to use CUDA libraries such as cuDNN for deep learning on GPUs

Book Description

GPUs are proving to be excellent general-purpose parallel computing solutions for high-performance tasks such as deep learning and scientific computing.

This book will be your guide to getting started with GPU computing. It begins by introducing GPU computing and explaining the GPU architecture and programming models. You will learn, by example, how to perform GPU programming with Python, and look at using integrations such as PyCUDA, PyOpenCL, CuPy, and Numba with Anaconda for various tasks such as machine learning and data mining. In addition, you will get to grips with GPU workflows, management, and deployment using modern containerization solutions. Toward the end of the book, you will become familiar with the principles of distributed computing for training machine learning models and enhancing efficiency and performance.

By the end of this book, you will be able to set up a GPU ecosystem for running complex applications and data models that demand great processing capabilities, and to manage memory efficiently so that your applications compute effectively and quickly.
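As a taste of the GPU-from-Python workflow the description refers to, CuPy mirrors the NumPy API closely enough that the same array code can run on either device. The sketch below uses the common community `xp` alias idiom (an assumption, not taken from this book) and falls back to NumPy so it stays runnable without a GPU:

```python
import numpy as np

# CuPy mirrors the NumPy API, so the same array code can target a GPU
# when CuPy and a CUDA device are available, and fall back to the CPU
# otherwise. The try/except "xp" alias is a common convention.
try:
    import cupy as xp  # GPU-backed arrays (requires CUDA hardware)
except ImportError:
    xp = np            # CPU fallback keeps the sketch runnable anywhere

a = xp.arange(1_000_000, dtype=xp.float64)
total = float(xp.sum(a * 2.0))  # elementwise multiply, then a reduction
print(total)  # 2 * (0 + 1 + ... + 999999) = 999999000000.0
```

On a CUDA-capable machine with CuPy installed, the multiply and the sum both execute on the GPU with no changes to the code.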

What you will learn

  • Utilize Python libraries and frameworks for GPU acceleration
  • Set up a GPU-enabled programmable machine learning environment on your system with Anaconda
  • Deploy your machine learning system on cloud containers with illustrated examples
  • Explore PyCUDA and PyOpenCL and compare them with platforms such as CUDA, OpenCL, and ROCm
  • Perform data mining tasks with machine learning models on GPUs
  • Extend your knowledge of GPU computing in scientific applications
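One of the skills above is Numba-based GPU acceleration. As a brief illustration, Numba's `@vectorize` decorator compiles a scalar function into a NumPy-style ufunc, and on CUDA hardware the same decorator accepts `target="cuda"` to offload the work. This is a minimal sketch, not from the book, with a plain-NumPy fallback so it runs even where Numba is absent:

```python
import numpy as np

# Numba's @vectorize turns a scalar function into a NumPy-style ufunc;
# on a CUDA-capable machine, vectorize(..., target="cuda") offloads it
# to the GPU. Fall back to np.vectorize (slow, pure Python, but the
# same call shape) when Numba is not installed.
try:
    from numba import vectorize
    make_ufunc = vectorize(["float64(float64, float64)"])
except ImportError:
    make_ufunc = np.vectorize

@make_ufunc
def saxpy_like(a, x):
    # one scalar operation, broadcast over whole arrays by the ufunc
    return a * x + 1.0

xs = np.linspace(0.0, 1.0, 5)
print(saxpy_like(2.0, xs))  # [1.  1.5 2.  2.5 3. ]
```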

Who this book is for

Data scientists, machine learning enthusiasts, and professionals who want to get started with GPU computation and perform complex tasks with low latency will find this book useful. Intermediate knowledge of Python programming is assumed.

Table of contents

  1. Title Page
  2. Copyright and Credits
    1. Hands-On GPU Computing with Python
  3. Dedication
  4. About Packt
    1. Why subscribe?
    2. Packt.com
  5. Contributors
    1. About the author
    2. About the reviewer
    3. Packt is searching for authors like you
  6. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
      1. Download the example code files
      2. Download the color images
      3. Code in Action
      4. Conventions used
    4. Get in touch
      1. Reviews
  7. Section 1: Computing with GPUs Introduction, Fundamental Concepts, and Hardware
  8. Introducing GPU Computing
    1. The world of GPU computing beyond PC gaming
      1. What is a GPU?
    2. Conventional CPU computing – before the advent of GPUs
    3. How the gaming industry made GPU computing affordable for individuals
    4. The emergence of full-fledged GPU computing
      1. The rise of AI and the need for GPUs
    5. The simplicity of Python code and the power of GPUs – a dual advantage
      1. The C language – a short prologue
      2. From C to Python
      3. The simplicity of Python as a programming language – why many researchers and scientists prefer it
      4. The power of GPUs
        1. Ray tracing
        2. Artificial intelligence (AI)
        3. Programmable shading
          1. RTX-OPS
      5. Latest GPUs at the time of writing this book (can be subject to change)
        1. NVIDIA GeForce RTX 2070
        2. NVIDIA GeForce RTX 2080
        3. NVIDIA GeForce RTX 2080 Ti
        4. NVIDIA Titan RTX
        5. Radeon RX Vega 56
        6. Radeon RX Vega 64
        7. Radeon VII
      6. Significance of FP64 in GPU computing
      7. The dual advantage – Python and GPUs, a powerful combination
    6. How GPUs empower science and AI in current times
      1. Bioinformatics workflow management
      2. Magnetic Resonance Imaging (MRI) reconstruction techniques
      3. Digital-signal processing for communication receivers
      4. Studies on the brain – neuroscience research
      5. Large-scale molecular dynamics simulations
      6. GPU-powered AI and self-driving cars
      7. Research work posited by AI scientists
        1. Deep learning on commodity Android devices
        2. Motif discovery with deep learning
        3. Structural biology meets data science
        4. Heart-rate estimation on modern wearable devices
        5. Drug target discovery
        6. Deep learning for computational chemistry
    7. The social impact of GPUs
      1. Archaeological restoration/reconstruction
      2. Numerical weather prediction
      3. Composing music
      4. Real-time segmentation of sports players
      5. Creating art
      6. Security
      7. Agriculture
      8. Economics
    8. Summary
    9. Further reading
  9. Designing a GPU Computing Strategy
    1. Getting started with the hardware
      1. The significance of compatible hardware for your GPU
        1. Beginners
        2. Intermediate users
        3. Advanced users
      2. Motherboard
      3. Case
      4. Power supply unit (PSU)
      5. CPU
      6. RAM
      7. Hard-disk drive (HDD)
      8. Solid-state drive (SSD)
      9. Monitor
    2. Building your first GPU-enabled parallel computer – minimum system requirements
      1. Scope of hardware scalability
      2. Branded desktops
      3. Do it yourself (DIY) desktops
        1. Beginner range
        2. Mid range
        3. High-end range
    3. Liquid cooling – should you consider it?
      1. The temperature factor
      2. Airflow management
      3. Thermal paste
      4. Conventional air cooling
      5. Stock coolers
      6. Overclocking
      7. So, what are custom/aftermarket coolers?
      8. Liquid cooling
      9. The specific heat capacity of cooling agents
      10. Why is water the best liquid coolant?
    4. Branded GPU-enabled PCs
      1. Purpose
      2. Feasibility
      3. Upgradeability
      4. Refining an effective budget
      5. Warranty
      6. Bundled monitors
      7. Ready-to-deploy GPU systems
      8. GPU solutions for individuals
      9. Branded solutions in liquid cooling
    5. Why not DIY?
      1. GPU
      2. CPU
      3. Motherboard
      4. RAM
      5. Storage
      6. PSU
      7. Uninterrupted power supply (UPS)
      8. Thermal paste
      9. Heat sink
      10. Radiator
      11. Types of cooling fans
      12. Bottlenecking
      13. Estimating the build and performing compatibility checks
      14. Purpose
      15. Feasibility
      16. Upgradeability
      17. Refining an effective budget
      18. Warranty for individual components
      19. DIY solutions in liquid cooling
      20. Assembling your system
      21. Connecting all the power and case cables in place
      22. Installing CUDA on a fresh Ubuntu installation
    6. Entry-level budget
    7. Mid-range budget
    8. High-end budget
    9. Summary
    10. Further reading
  10. Setting Up a GPU Computing Platform with NVIDIA and AMD
    1. GPU manufacturers
      1. First generation
      2. Second generation
      3. Third generation
      4. Fourth generation
      5. Fifth generation
      6. Sixth generation
      7. Seventh generation and beyond
    2. Computing on NVIDIA GPUs
      1. GeForce platforms
      2. Quadro platforms
      3. Tesla platforms
      4. GPUDirect
      5. SXM and NVLink
      6. NVIDIA CUDA
    3. Computing on AMD APUs and GPUs
      1. Accelerated processing units (APUs)
      2. The GPU in the APU – the significance of APU design
      3. AMD GPUs – programmable platforms
        1. Radeon platforms
        2. Radeon Pro platforms
        3. Radeon Instinct platforms
      4. AMD ROCm
    4. Comparing GPU programmable platforms on NVIDIA and AMD
      1. GPUOpen
      2. The significance of double precision in scientific computing from a GPU perspective
    5. Current models from both brands that are ideal for GPU computing
      1. AMD Radeon VII GPU – the new people's champion
      2. NVIDIA Titan V GPU – raw compute power
    6. An enthusiast's guide to GPU computing hardware
    7. Summary
    8. Further reading
  11. Section 2: Hands-On Development with GPU Programming
  12. Fundamentals of GPU Programming
    1. GPU-programmable platforms
    2. Basic CUDA concepts
      1. Installing and testing
      2. Compute capability
      3. Threads, blocks, and grids
        1. Threads
        2. Blocks
        3. Grids
      4. Managing memory
      5. Unified Memory Access (UMA)
      6. Dynamic parallelism
      7. Predefined libraries
      8. OpenCL
    3. Basic ROCm concepts
      1. Installation procedure and testing
        1. Official deprecation notice for HCC from AMD
      2. Generating chips
      3. ROCm components (APIs), including OpenCL
      4. CUDA-like memory management with HIP
      5. hipify
      6. Predefined libraries
      7. OpenCL
    4. The Anaconda Python distribution for package management and deployment
      1. Installing the Anaconda Python distribution on Ubuntu 18.04
      2. Application-specific usage
    5. GPU-enabled Python programming
      1. The dual advantage
      2. PyCUDA
      3. PyOpenCL
      4. CuPy
      5. Numba (formerly Accelerate)
    6. Summary
    7. Further reading
  13. Setting Up Your Environment for GPU Programming
    1. Choosing a suitable IDE for your Python code
    2. PyCharm – an IDE exclusively made for Python
      1. Different versions of PyCharm
        1. The Community edition
        2. The Professional edition
        3. The Educational edition – PyCharm Edu
          1. Features for learners
          2. Features for educators
        4. PyCharm for Anaconda
    3. Installing PyCharm
      1. First run
      2. EduTools plugin for existing PyCharm users
    4. Alternative IDEs for Python – PyDev and Jupyter
    5. Installing the PyDev Python IDE for Eclipse
    6. Installing Jupyter Notebook and Jupyter Lab
    7. Summary
    8. Further reading
  14. Working with CUDA and PyCUDA
    1. Technical requirements
    2. Understanding how CUDA-C/C++ works via a simple example
    3. Installing PyCUDA for Python within an existing CUDA environment
      1. Anaconda-based installation of PyCUDA
      2. pip – system-wide Python-based installation of PyCUDA
    4. Configuring PyCUDA on your Python IDE
      1. Conda-based virtual environment
      2. pip-based system-wide environment
    5. How computing in PyCUDA works on Python
    6. Comparing PyCUDA to CUDA – an introductory perspective on reduction
      1. What is reduction?
    7. Writing your first PyCUDA programs to compute a general-purpose solution
    8. Useful exercise on computational problem solving
      1. Exercise
    9. Summary
    10. Further reading
  15. Working with ROCm and PyOpenCL
    1. Technical requirements
    2. Understanding how ROCm-C/C++ works with hipify, HIP, and OpenCL
      1. Converting CUDA code into cross-platform HIP code with hipify
      2. Understanding how ROCm-C/C++ works with HIP
        1. Output on an NVIDIA platform
        2. Output on an AMD platform
      3. Understanding how OpenCL works
    3. Installing PyOpenCL for Python (AMD and NVIDIA)
      1. Anaconda-based installation of PyOpenCL
      2. pip – system-wide Python base installation of PyOpenCL
    4. Configuring PyOpenCL on your Python IDE
      1. Conda-based virtual environment
      2. pip-based system-wide environment
    5. How computing in PyOpenCL works on Python
    6. Comparing PyOpenCL to HIP and OpenCL – revisiting the reduction perspective
      1. Reduction with HIP, OpenCL, and PyOpenCL
    7. Writing your first PyOpenCL programs to compute a general-purpose solution
    8. Useful exercise on computational problem solving
      1. Solution assistance
    9. Summary
    10. Further reading
  16. Working with Anaconda, CuPy, and Numba for GPUs
    1. Technical requirements
    2. Understanding how Anaconda works with CuPy and Numba
      1. Conda
      2. CuPy
      3. Numba
        1. GPU-accelerated Numba on Python
    3. Installing CuPy and Numba for Python within an existing Anaconda environment
      1. Coupling Python with CuPy
      2. Conda-based installation of CuPy
      3. pip-based installation of CuPy
      4. Coupling Python with Numba for CUDA and ROCm
        1. Installing Numba with Conda for NVIDIA CUDA GPUs
        2. Installing Numba with Conda for AMD ROC GPUs
        3. System-wide installation of Numba with pip (optional)
    4. Configuring CuPy on your Python IDE
    5. How computing in CuPy works on Python
      1. Implementing multiple GPUs with CuPy
    6. Configuring Numba on your Python IDE
    7. How computing in Numba works on Python
      1. Using vectorize
      2. Explicit kernels
    8. Writing your first CuPy and Numba enabled accelerated programs to compute GPGPU solutions
    9. Interoperability between CuPy and Numba within a single Python program
    10. Comparing CuPy to NumPy and CUDA
    11. Comparing Numba to NumPy, ROCm, and CUDA
    12. Useful exercise on computational problem solving
    13. Summary
    14. Further reading
  17. Section 3: Containerization and Machine Learning with GPU-Powered Python
  18. Containerization on GPU-Enabled Platforms
    1. Programmable environments
      1. Programmable environments – system-wide and virtual
      2. Specific situations of usage
        1. Preferring virtual over system-wide
        2. Preferring system-wide over virtual
    2. System-wide (open) environments
      1. $HOME directory
      2. System directories
      3. Advantages of open environments
      4. Disadvantages of open environments
    3. Virtual (closed) environments
      1. $HOME directory
      2. Virtual system directories
      3. Advantages of closed environments
      4. Disadvantages of closed environments
    4. Virtualization
      1. Virtualenv
        1. Installing virtualenv on Ubuntu Linux system
        2. Using Virtualenv to create and manage a virtual environment
        3. Key benefits of using Virtualenv
      2. VirtualBox
        1. Installing VirtualBox
        2. GPU passthrough
    5. Local containers
      1. Docker
        1. Installing Docker Community Edition (CE) on Ubuntu 18.04
        2. NVIDIA Docker
          1. Installing NVIDIA Docker
        3. ROCm Docker
      2. Kubernetes
    6. Cloud containers
      1. An overview on GPU computing with Google Colab
    7. Summary
    8. Further reading
  19. Accelerated Machine Learning on GPUs
    1. Technical requirements
    2. The significance of Python in AI – the dual advantage
      1. The need for big data management
      2. Using Python for machine learning
    3. Exploring machine learning training modules
      1. The advent of deep learning
    4. Introducing machine learning frameworks
      1. Tensors by example
    5. Introducing TensorFlow
      1. Dataflow programming
      2. Differentiable programming
      3. TensorFlow on GPUs
    6. Introducing PyTorch
      1. The two primary features of PyTorch
    7. Installing TensorFlow and PyTorch for GPUs
      1. Installing cuDNN
      2. Coupling Python with TensorFlow for GPUs
      3. Coupling Python with PyTorch for GPUs
    8. Configuring TensorFlow on PyCharm and Google Colab
      1. Using TensorFlow on PyCharm
      2. Using TensorFlow on Google Colab
    9. Configuring PyTorch on PyCharm and Google Colab
      1. Using PyTorch on PyCharm
      2. Using PyTorch on Google Colab
    10. Machine learning with TensorFlow and PyTorch
      1. MNIST
      2. Fashion-MNIST
      3. CIFAR-10
      4. Keras
      5. Dataset downloads
        1. Downloading Fashion-MNIST with Keras
        2. Downloading CIFAR-10 with PyTorch
    11. Writing your first GPU-accelerated machine learning programs
      1. Fashion-MNIST prediction with TensorFlow
      2. TensorFlow output on the PyCharm console
      3. Training Fashion-MNIST for 100 epochs
      4. CIFAR-10 prediction with PyTorch
      5. PyTorch output on a PyCharm console
    12. Revisiting our computational exercises with a machine learning approach
      1. Solution assistance
    13. Summary
    14. Further reading
  20. GPU Acceleration for Scientific Applications Using DeepChem
    1. Technical requirements
    2. Decoding scientific concepts for DeepChem
      1. Atom
      2. Molecule
      3. Protein molecule
      4. Biological cell
      5. Medicinal drug – a small molecule
      6. Ki
      7. Crystallographic structures
      8. Assays
      9. Histogram
      10. Open Source Drug Discovery (OSDD)
      11. Convolution
      12. Ensemble
      13. Random Forest (RF)
      14. Graph convolutional neural networks (GCN)
      15. One-shot learning
    3. Multiple ways to install DeepChem
      1. Installing Google Colab
      2. Conda on your local PyCharm IDE
      3. NVIDIA Docker-based deployment
    4. Configuring DeepChem on PyCharm
    5. Testing an example from the DeepChem repository
      1. How medicines reach their targets in our body
      2. Alzheimer's disease
      3. IC50
      4. The Beta-Site APP-Cleaving Enzyme (BACE)
      5. A DeepChem programming example
      6. Output on the PyCharm console
    6. Developing your own deep learning framework like DeepChem – a brief outlook
    7. Summary
    8. Final thoughts
    9. References
  21. Appendix A
    1. GPU-accelerated machine learning in Python – benchmark research
    2. GPU-accelerated machine learning with Python applied to cancer research
    3. Deep Learning with GPU-accelerated Python for applied computer vision – Pavement Distress
  22. Other Books You May Enjoy
    1. Leave a review - let other readers know what you think

Product information

  • Title: Hands-On GPU Computing with Python
  • Author(s): Avimanyu Bandyopadhyay
  • Release date: May 2019
  • Publisher(s): Packt Publishing
  • ISBN: 9781789341072