Bill Gropp, University of Illinois at Urbana-Champaign
Abstract:
MPI has long been the de facto standard for parallel programming. One of its primary strengths is its continuous evolution, which allows it to absorb and incorporate best practices in parallel computing in a standard and portable form.
form. The MPI Forum has recently announced the MPI-3 standard and is working on
the MPI-4 standard to extend traditional message passing into more dynamic, onesided
and fault tolerant communication capabilities. Nevertheless, given the
disruptive architectural trends for Exascale computing, there is room for more. In this
talk, I will first describe some of the capabilities that have been added in the recent
MPI-3 standard and those that are being considered for the upcoming MPI-4 standard.
Next, I will describe some of the key requirements of modern applications targeting
exascale, and research efforts within the U.S. Department of Energy's Exascale
Computing Project to extend MPI to work in massively multithreaded and
heterogeneous environments for highly dynamic and irregular applications.