Chapter 13

Heterogeneous Computing with MPI

Jerome Vienne; Carlos Rosales; Kent Milfeld    TACC, USA

Abstract

The Message Passing Interface (MPI) Standard has proven to be a valuable tool for sending messages (data) between processes for over two decades. But as high-performance computing clusters have evolved, the hardware complexity of nodes and interconnects now requires users to pay closer attention to communication options in order to attain high performance as they scale up parallel applications.

In this chapter, we will first discuss the hardware heterogeneity found in modern clusters. We then follow with an analysis of a typical Intel® Xeon Phi™ coprocessor-accelerated node on the Stampede cluster at TACC, with a categorization ...
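For readers who want a concrete reference point for the programming model the chapter builds on, the following is a minimal sketch of MPI point-to-point message passing: rank 0 sends an integer to rank 1 over MPI_COMM_WORLD. It is an illustrative example only, not code drawn from the chapter.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?          */

    if (rank == 0) {
        value = 42;                         /* data to send (arbitrary)     */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();                         /* shut down the MPI runtime    */
    return 0;
}

Compiled with an MPI wrapper compiler (for example, mpicc) and launched with at least two ranks (for example, mpirun -np 2 ./a.out), the program performs a single blocking send/receive pair; the chapter's discussion concerns how such communication behaves when ranks are placed on heterogeneous hardware such as host processors and coprocessors.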
