MPI

The Message Passing Interface (MPI) is, strictly speaking, a ''standard'' describing a set of subroutines, functions, objects, ''etc.'', with which one can write parallel programs in a distributed memory environment. Many different ''implementations'' of the standard have been produced, such as Open MPI, MPICH, and MVAPICH. The standard describes how MPI should be called from the Fortran, C, and C++ languages, but unofficial "bindings" can be found for several other languages.
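As a quick illustration of the C bindings, the following is a minimal sketch of an MPI "hello world" program (an example added here for illustration, not part of the standard itself). Every MPI program brackets its MPI calls between <code>MPI_Init</code> and <code>MPI_Finalize</code>, and each process can query its own rank and the total number of processes:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* set up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's identifier */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* clean up before exiting */
    return 0;
}
</syntaxhighlight>

With most implementations such a program is compiled with a wrapper such as <code>mpicc</code> and launched with <code>mpirun</code> (for example, <code>mpirun -np 4 ./hello</code> to start four processes), though the exact commands depend on the implementation installed.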


Since MPI is an open, non-proprietary standard, an MPI program can easily be ported to many different computers. Applications that use it can run on a large number of cores at once, often with good parallel efficiency (see the [[Scalability | scalability page]] for more details). Given that memory is local to each process, some aspects of debugging are simplified: it isn't possible for one process to interfere with the memory of another, and if a program generates a segmentation fault the resulting core file can be processed by standard serial debugging tools. However, due to the need to manage communication and synchronization explicitly, MPI programs may appear more complex than programs written with tools that support implicit communication. Furthermore, in designing an MPI program one should take care to minimize communication overhead to achieve a good speed-up from the parallel computation.
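To make the communication-overhead point concrete, here is a hedged sketch (not from the original text) of computing a global sum with a single collective call, <code>MPI_Reduce</code>, instead of having every process send its partial result to process 0 one message at a time. Collective operations let the implementation combine values efficiently, typically in a logarithmic number of steps:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    double local, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = (double)(rank + 1);  /* stand-in for a locally computed partial result */

    /* One collective call replaces a loop of point-to-point messages;
       only process 0 (the "root") receives the combined result. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Global sum = %f\n", total);

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

Preferring collectives over hand-written point-to-point loops is one of the simplest ways to keep communication overhead low, since the implementation can exploit the network topology on the user's behalf.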


In the following we will highlight a few of these issues and discuss strategies to deal with them. Suggested references are presented at the end of this tutorial and the reader is encouraged to consult them for additional information.