This chapter is an overview of a few of the more advanced features found in MPI. The goal of this chapter is not to make you an expert on any of these features but simply to make you aware that they exist. You should come away with a basic understanding of what they are and how they might be used. The four sections in this chapter describe additional MPI features that provide greater control for some common parallel programming tasks.
If you want more control when exchanging messages, the first section describes MPI commands that provide non-blocking and bidirectional communication.
If you want to investigate other collective communication strategies, the second section describes MPI commands for distributing data across the cluster or collecting it from all the nodes.
If you want to create custom communication groups, the third section describes how to do so.
If you want to group data to minimize communication overhead, the last section describes two alternatives—packed data and user-defined types.
While you may not need these features for simple programs, as your projects become more ambitious, these features can make life easier.
In Chapter 13, you were introduced to point-to-point communication, the communication between a pair of cooperating processes. The two most basic commands used for point-to-point communication are MPI_Send and MPI_Recv. Several variations on these commands that can be ...