Chapter 13

Heterogeneous Computing with MPI

Jerome Vienne, Carlos Rosales, and Kent Milfeld (TACC, USA)

Abstract

The Message Passing Interface (MPI) standard has proven to be a valuable tool for sending messages (data) between processes over the last two decades. But as high-performance computing clusters have evolved, the hardware complexity of nodes and interconnects now requires users to pay closer attention to communication options in order to attain high performance as they scale up applications using parallelism.
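For readers less familiar with MPI, the following is a minimal sketch (illustrative only, not drawn from this chapter) of point-to-point message passing between two ranks using standard MPI_Send and MPI_Recv calls; it would be compiled with an MPI wrapper compiler such as mpicc and launched with at least two ranks:

/* Minimal MPI point-to-point example: rank 0 sends one double to rank 1. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    double payload = 3.14;
    if (rank == 0) {
        /* Rank 0 sends one double to rank 1 with message tag 0. */
        MPI_Send(&payload, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Rank 1 receives the matching message from rank 0. */
        MPI_Recv(&payload, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %f from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}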

In this chapter, we first discuss the hardware heterogeneity found in modern clusters. We then analyze a typical Intel® Xeon Phi™ coprocessor-accelerated node on the Stampede cluster at TACC, with a categorization ...
