Chapter 9

Message Passing Interface

Abstract

Up to now, we have studied how to develop parallel code for shared-memory architectures on multicore and manycore (GPU) systems. However, as explained in the first chapter, many HPC systems such as clusters and supercomputers consist of several compute nodes interconnected through a network. Each node contains its own memory as well as several cores and/or accelerators whose compute capabilities can be exploited with the techniques presented in the previous chapters, but additional programming models are needed to use several nodes in the same program. The most common programming model for distributed-memory systems is message passing. The Message Passing Interface (MPI) is established ...
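The essence of the message-passing model is that processes have separate address spaces and must exchange data explicitly. The following sketch illustrates this idea using only Python's standard library rather than MPI itself: two operating-system processes communicate over a channel, analogous to how MPI ranks exchange data with send and receive calls. The function names (`worker`, `exchange`) are illustrative choices, not part of any MPI API.

```python
# A minimal sketch of the message-passing model (NOT MPI itself):
# two processes with separate address spaces exchange data explicitly
# over a channel, much like MPI ranks use send/receive operations.
from multiprocessing import Process, Pipe

def worker(conn):
    # The "receiver" process: waits for a message, transforms it, replies.
    data = conn.recv()                 # blocking receive, akin to MPI_Recv
    conn.send([x * 2 for x in data])   # explicit reply, akin to MPI_Send
    conn.close()

def exchange(payload):
    # Set up a two-ended channel and spawn a second process.
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(payload)           # explicit send to the other process
    result = parent_end.recv()         # blocking receive of the reply
    p.join()
    return result

if __name__ == "__main__":
    print(exchange([1, 2, 3]))         # -> [2, 4, 6]
```

Note that, unlike shared-memory threading, no data is visible to both processes implicitly; everything crosses the channel by explicit send and receive, which is exactly the discipline MPI imposes across nodes.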
