
in the high-end computing systems are becoming faster and faster. For example, the latest Cray Gemini interconnect can sustain up to 20 GB/s [1].
(2) The issue with performing MPI collective operations in MPI-IO is not the sheer volume of data to exchange. Instead, the dominating factor that slows down application performance is the frequency of collective operations and the possibility of lock contention. Earlier work [4] shows that MPI_Bcast is called 314,800 times in the Chimera run, taking 25% of the wall-clock time.
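To make this point concrete, the following sketch (not taken from the cited work) times a loop of small MPI_Bcast calls in aggregate; the call count and payload size are hypothetical, chosen to show that with payloads this small, per-call latency rather than data volume dominates, which is exactly the effect described above.

/* A minimal sketch: many small broadcasts, timed in aggregate with
 * MPI_Wtime. Compile with an MPI C compiler, e.g. mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int ncalls = 10000;   /* hypothetical call count, far below the
                                   314,800 reported for the Chimera run */
    double value = 0.0;         /* tiny payload: per-call latency, not
                                   bandwidth, dominates at this size */

    MPI_Barrier(MPI_COMM_WORLD);            /* align ranks before timing */
    double t0 = MPI_Wtime();
    for (int i = 0; i < ncalls; i++)
        MPI_Bcast(&value, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0)
        printf("%d broadcasts took %.3f s (%.1f us per call)\n",
               ncalls, elapsed, 1e6 * elapsed / ncalls);

    MPI_Finalize();
    return 0;
}

Run at scale, the per-call cost grows with the process count, so the total overhead scales with the number of collective calls rather than with the bytes moved.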
(3) The collective operation in ADIOS is done in a very controlled manner. All MPI processes are ...