Chapter 8: Multi-GPU programming
Abstract
This chapter covers how to write code that utilizes multiple GPUs. There are many possible configurations of host processes and devices that one can use in multi-GPU code. In this chapter, we focus on two: (1) a single host process controlling multiple GPUs through CUDA's peer-to-peer capabilities, and (2) MPI, where each MPI process uses a separate GPU. As examples of these approaches, we implement peer-to-peer and MPI multi-GPU versions of the transpose example used in the previous chapter.
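As a brief illustration of the second configuration, the following is a minimal sketch (not the chapter's transpose example) in which each MPI rank selects its own GPU based on its rank. It assumes the cudafor module from the NVIDIA HPC SDK and a Fortran MPI binding; the program and variable names are illustrative.

program assignDevice
  use cudafor
  use mpi
  implicit none
  integer :: rank, nProcs, nDevices, istat, ierr

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nProcs, ierr)

  ! number of GPUs visible to this process
  istat = cudaGetDeviceCount(nDevices)

  ! map each rank to a device, wrapping around if there are
  ! more ranks than devices
  istat = cudaSetDevice(mod(rank, nDevices))

  write(*,'(a,i0,a,i0)') 'Rank ', rank, ' using device ', mod(rank, nDevices)

  call MPI_Finalize(ierr)
end program assignDevice

Launching one rank per GPU in this manner keeps device selection local to each process, mirroring configuration (2) above, where each MPI process uses a separate GPU.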
Keywords
Peer-to-peer; UVA (Unified Virtual Addressing); Direct transfer; Direct access; MPI (Message Passing Interface); Compute mode; nvidia-smi (NVIDIA System Management Interface)
There are many configurations ...