<languages />
<translate>
== A Primer on Parallel Programming == <!--T:1-->
{{quote|To pull a bigger wagon it is easier to add more oxen than to find (or build) a bigger ox.|Gropp, Lusk & Skjellum|Using MPI}}
To build a house as quickly as possible, we do not look for the fastest person to do all the work but instead we hire as many people as required and spread the work among them so that various construction tasks are performed at the same time --- "in parallel". Computational problems are conceptually similar. Since there is a limit to how fast a single machine can perform, we attempt to divide up the computational problem at hand and assign work to be completed in parallel to multiple computers. This approach is important not only in speeding up computations but also in tackling problems requiring large amounts of memory.


<!--T:2-->
The most significant concept to master in designing and building parallel applications is ''communication''. Complexity arises due to communication requirements. In order for multiple workers to accomplish a task in parallel, they need to be able to communicate with one another. In the context of software, we have many processes each working on part of a solution, needing values that were computed ---or are yet to be computed!--- by other processes.


<!--T:3-->
There are two major models of computational parallelism: shared memory, and distributed memory.


<!--T:4-->
<!-- From Belaid: Probably we should use threads instead. As processes have their own private memory space - the term "ultimately" refers to the fact that the inter-process communication is via shared memory -->
In shared memory parallelism (commonly and casually abbreviated SMP), all processors see the same memory image, or to put it another way, all memory is globally addressable and all the processes can ultimately access it. Communication between processes on an SMP machine is implicit --- any process can ultimately read and write values to memory that can be subsequently accessed and manipulated directly by others. The challenge in writing these kinds of programs is data consistency: one should take extra care to ensure data is not modified by more than one process at a time.




<!--T:5-->
[[Image:Smp.png|frame|center|'''Figure 1''': ''A conceptual picture of a shared memory architecture'']]


<!--T:6-->
Distributed memory parallelism is equivalent to a collection of workstations linked by a dedicated network for communication: a cluster.  In this model, processes each have their own private memory, and may run on physically distinct machines. When processes need to communicate, they do so by sending ''messages''. A process typically invokes a function to send data and the destination process invokes a function to receive it. A major challenge in distributed memory programming is how to minimize communication overhead. Networks, even the fastest dedicated hardware interconnects, transmit data orders of magnitude slower than within a single machine. Memory access times are typically measured in ones to hundreds of nanoseconds, while network latency is typically expressed in microseconds.


<!--T:7-->
[[Image:Cluster.png|frame|center|'''Figure 2''': ''A conceptual picture of a cluster architecture'']]


<!--T:8-->
The remainder of this tutorial will consider distributed memory programming on a cluster using the Message Passing Interface.


== What is MPI? == <!--T:9-->


<!--T:10-->
The Message Passing Interface (MPI) is, strictly speaking, a ''standard'' describing a set of subroutines, functions, objects, ''etc.'', with which one can write parallel programs in a distributed memory environment. Many different ''implementations'' of the standard have been produced, such as Open MPI, MPICH, and MVAPICH. The standard describes how MPI should be called from Fortran, C, and C++ languages, but unofficial "bindings" can be found for several other languages.


<!--T:11-->
Since MPI is an open, non-proprietary standard, an MPI program can easily be ported to many different computers. Applications that use it can run on a large number of cores at once, often with good parallel efficiency (called "scalability"). Given that memory is local to each process, some aspects of debugging are simplified --- it isn't possible for one process to interfere with the memory of another, and if a program generates a segmentation fault the resulting core file can be processed by standard serial debugging tools. However, due to the need to manage communication and synchronization explicitly, MPI programs may appear more complex than programs written with tools that support implicit communication. Furthermore, in designing an MPI program one should take care to minimize communication overhead to achieve a good speed-up from the parallel computation.


<!--T:12-->
In the following we will highlight a few of these issues and discuss strategies to deal with them. Suggested references are presented at the end of this tutorial and the reader is encouraged to consult them for additional information.


== MPI Programming Basics == <!--T:13-->
This tutorial will present the development of an MPI code in C and Fortran, but the concepts apply to any language for which MPI bindings exist. For simplicity our goal will be to parallelize the venerable "Hello, World!" program, which appears below for reference.
</translate>
</tabs>
<translate>
<!--T:14-->
Compiling and running the program looks something like this:


  <!--T:15-->
  [~]$ vi hello.c
  [~]$ cc -Wall hello.c -o hello
  [~]$ ./hello 
  Hello, world!


=== SPMD Programming === <!--T:16-->
Parallel programs written using MPI make use of an execution model called Single Program, Multiple Data, or SPMD. The SPMD model involves running a number of ''copies'' of a single program. In MPI, each copy or "process" is assigned a unique number, referred to as the ''rank'' of the process, and each process can obtain its rank when it runs. When a process should behave differently, we usually use an "if" statement based on the rank of the process to execute the appropriate set of instructions.
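As a brief illustration of this idea, here is a minimal sketch of a complete SPMD program in C. It uses <code>MPI_Init</code>, <code>MPI_Comm_rank</code> and <code>MPI_Finalize</code>, which are introduced in the sections that follow, and the division of labour shown is purely illustrative:
<source lang="c">
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which copy am I? */

    if (rank == 0)
        printf("Rank 0: I will coordinate the work\n");
    else
        printf("Rank %d: I will do a share of the work\n", rank);

    MPI_Finalize();
    return 0;
}
</source>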


<!--T:17-->
[[Image:SPMD_model.png|frame|center|'''Figure 3''': ''SPMD model illustrating conditional branching to control divergent behaviour'']]


=== Framework === <!--T:18-->
Each MPI program must include the relevant header file (<tt>mpi.h</tt> for C/C++, <tt>mpif.h</tt> for Fortran), and be compiled and linked against the desired MPI implementation. Most MPI implementations provide a handy script, often called a ''compiler wrapper'', that handles all set-up issues with respect to <code>include</code> and <code>lib</code> directories, linking flags, ''etc.'' Our examples will all use these compiler wrappers, with a sample invocation shown after the list:
* C language wrapper: <tt>mpicc</tt>
* C++: <tt>mpiCC</tt>
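For example, assuming an MPI implementation and its compiler wrappers are available in your environment, compiling a C or C++ source file might look like the following; the file names and the <code>-Wall</code> flag are illustrative:
  [~]$ mpicc -Wall phello.c -o phello
  [~]$ mpiCC -Wall phello.cpp -o phello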


<!--T:19-->
The copies of an MPI program, once they start running, must coordinate with one another somehow. This cooperation starts when each one calls an initialization function before it uses any other MPI features. The prototype for this function appears below:
</translate>
</tabs>
<translate>
<!--T:20-->
The arguments to the C <code>MPI_Init</code> are pointers to the <code>argc</code> and <code>argv</code> variables that represent the command-line arguments to the program. Like all C MPI functions, the return value represents the error status of the function. Fortran MPI subroutines return the error status in an additional argument, <code>IERR</code>.
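As a minimal sketch of the C calling convention (with <code>MPI_Finalize</code>, discussed next, included for completeness), the check against <code>MPI_SUCCESS</code> is optional but illustrates that the return value is an error code:
<source lang="c">
#include <mpi.h>

int main(int argc, char **argv)
{
    /* Pass the addresses of argc and argv so the MPI library can
       inspect (and possibly remove) arguments it recognizes. */
    if (MPI_Init(&argc, &argv) != MPI_SUCCESS)
        return 1;               /* initialization failed */

    /* ... the rest of the program goes here ... */

    MPI_Finalize();             /* clean up before exiting */
    return 0;
}
</source>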


<!--T:21-->
Similarly, we must call a function <code>MPI_Finalize</code> to do any clean-up that might be required before our program exits. The prototype for this function appears below:
</translate>
</tabs>
<translate>
<!--T:22-->
As a rule of thumb, it is a good idea to call <code>MPI_Init</code> as the first statement of our program, and <code>MPI_Finalize</code> as its last statement.
Let's now modify our "Hello, world!" program accordingly.
</tabs>
<translate>
=== Rank and Size === <!--T:23-->
We could now run this program under control of MPI, but each process would only output the original string, which isn't very interesting. Let's instead have each process output its rank and how many processes are running in total. This information is obtained at run-time by the use of the following functions:
</translate>
<translate>

<!--T:24-->
<tt>MPI_Comm_size</tt> reports the number of processes running as part of this job by assigning it to the result parameter <tt>nproc</tt>.  Similarly, <tt>MPI_Comm_rank</tt> reports the rank of the calling process to the result parameter <tt>myrank</tt>. Ranks in MPI start from 0 rather than 1, so given N processes we expect the ranks to be 0..(N-1). The <tt>comm</tt> argument is a ''communicator'', which is a set of processes capable of sending messages to one another. For the purpose of this tutorial we will always pass in the predefined value <tt>MPI_COMM_WORLD</tt>, which is simply all the MPI processes started with the job. It is possible to define and use your own communicators, but that is beyond the scope of this tutorial and the reader is referred to the provided references for additional detail.
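As a short sketch of the calling convention in C (the variable names <code>nproc</code> and <code>myrank</code> are the same ones used in the description above):
<source lang="c">
int nproc, myrank;

MPI_Comm_size(MPI_COMM_WORLD, &nproc);   /* how many processes in total */
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);  /* my rank: 0 .. nproc-1 */
</source>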


<!--T:25-->
Let us incorporate these functions into our program, and have each process output its rank and size information. Note that since all processes are still performing identical operations, there are no conditional blocks required in the code.
</translate>
<translate>


<!--T:26-->
Compile and run this program using 2, 4 and 8 processes. Note that each running process produces output based on the values of its local variables. The stdout of all running processes is simply concatenated together. As you run the program using more processes, you may see that the output from the different processes does not appear in order of rank: you should make no assumptions about the order of output from different processes.
  [~]$ vi phello1.c 




<!--T:27-->
If you are using the Boost version, you should compile with: 
  [~]$ mpic++ --std=c++11 phello1.cpp -lboost_mpi-mt -lboost_serialization-mt -o phello1


=== Communication === <!--T:28-->
While we now have a parallel version of our "Hello, World!" program, it isn't very interesting as there is no communication between the processes. Let's fix this by having the processes send messages to one another.


<!--T:29-->
We'll have each process send the string "hello" to the one with the next higher rank number. Rank <tt>i</tt> will send its message to rank <tt>i+1</tt>, and we'll have the last process, rank <tt>N-1</tt>, send its message back to process <tt>0</tt> (a nice communication ring!). A short way to express this is ''process <tt>i</tt> sends to process <tt>(i+1)%N</tt>'', where there are <tt>N</tt> processes and % is the modulus operator.
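In C this arithmetic might be expressed as follows; the variable names <code>sendto</code> and <code>recvfrom</code> are illustrative, and adding <code>size</code> before taking the modulus keeps the left-neighbour calculation non-negative:
<source lang="c">
/* rank: this process's rank; size: total number of processes */
int sendto   = (rank + 1) % size;          /* right neighbour in the ring */
int recvfrom = (rank + size - 1) % size;   /* left neighbour in the ring  */
</source>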


<!--T:30-->
MPI provides a large number of functions for sending and receiving data of almost any composition in a variety of communication patterns (one-to-one, one-to-many, many-to-one, and many-to-many). But the simplest functions to understand are the ones that send a sequence of one or more instances of an atomic data type from one process to another, <tt>MPI_Send</tt> and <tt>MPI_Recv</tt>.


<!--T:31-->
A process sends data by calling the <tt>MPI_Send</tt> function. Referring to the following function prototypes, <tt>MPI_Send</tt> can be summarized as sending <tt>count</tt> contiguous instances of <tt>datatype</tt> to the process with the specified <tt>rank</tt>, and the data is in the buffer pointed to by <tt>message</tt>.  <tt>Tag</tt> is a programmer-specified identifier that becomes associated with the message, and can be used, for example, to organize the communication streams (e.g. to distinguish two distinct streams of interleaved data). Our examples do not require this, so we will pass in the value 0 for the <tt>tag</tt>. <tt>Comm</tt> is the communicator described above, and we will continue to use <tt>MPI_COMM_WORLD</tt>.
</translate>
</tabs>
<translate>
<!--T:32-->
Note that the <tt>datatype</tt> argument, specifying the type of data contained in the <tt>message</tt> buffer, is a variable. This is intended to provide a layer of compatibility between processes that could be running on architectures for which the native format for these types differs. It is possible to register new data types, but for this tutorial we will only use the predefined types provided by MPI. There is a predefined MPI type for each atomic data type in the source language (for C: <tt>MPI_CHAR</tt>, <tt>MPI_FLOAT</tt>, <tt>MPI_SHORT</tt>, <tt>MPI_INT</tt>, etc. and for Fortran: <tt>MPI_CHARACTER</tt>, <tt>MPI_INTEGER</tt>, <tt>MPI_REAL</tt>, etc.). You can find a full list of these types in the references provided below.


<!--T:33-->
<tt>MPI_Recv</tt> works in much the same way as <tt>MPI_Send</tt>. Referring to the function prototypes below, <tt>message</tt> is now a pointer to an allocated buffer of sufficient size to store <tt>count</tt> instances of <tt>datatype</tt>, to be received from process <tt>rank</tt>. <tt>MPI_Recv</tt> takes one additional argument, <tt>status</tt>, which should, in C, be a reference to an allocated <tt>MPI_Status</tt> structure, and, in Fortran, be an array of <tt>MPI_STATUS_SIZE</tt> integers. Upon return it will contain some information about the received message. Although we will not make use of it in this tutorial, the argument must be present.
</translate>
</tabs>
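To make the calling conventions concrete, here is a small illustrative fragment (not the tutorial program itself) in which rank 0 sends a short string to rank 1. The buffer sizes and ranks are arbitrary, <code>rank</code> is assumed to hold the calling process's rank, and error checking is omitted:
<source lang="c">
char outbuf[] = "hello";
char inbuf[64];                /* large enough for the incoming message */
MPI_Status status;

if (rank == 0) {
    /* Send 6 chars (including the trailing '\0') to rank 1 with tag 0. */
    MPI_Send(outbuf, 6, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
} else if (rank == 1) {
    /* Receive at most 64 chars from rank 0 with tag 0. */
    MPI_Recv(inbuf, 64, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
}
</source>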
<translate>
<!--T:34-->
With this simple use of <tt>MPI_Send</tt> and <tt>MPI_Recv</tt>, the sending process must know the rank of the receiving process, and the receiving process must know the rank of the sending process. In our example the following arithmetic is useful:
* <tt>(rank + 1) % size</tt> is the process to send to, and

<translate>


<!--T:35-->
Compile this program and run it using 2, 4, and 8 processes. While it certainly seems to be working as intended, there is a hidden problem here. The MPI standard does not ''guarantee'' that <tt>MPI_Send</tt> returns before the message has been delivered. Most implementations ''buffer'' the data from <tt>MPI_Send</tt> and return without waiting for it to be delivered. But if it were not buffered, the code we've written would deadlock: Each process would call <tt>MPI_Send</tt> and then wait for its neighbour process to call <tt>MPI_Recv</tt>. Since the neighbour would also be waiting at the <tt>MPI_Send</tt> stage, they would all wait forever. Clearly there ''is'' buffering in the libraries on our systems since the code did not deadlock, but it is poor design to rely on this. The code could fail if used on a system in which there is no buffering provided by the library. Even where buffering is provided, the call might still block if the buffer fills up.


  <!--T:36-->
  [~]$ mpicc -Wall phello2.c -o phello2
  [~]$ mpirun -np 4 ./phello2
  [P_0] process 3 said: "Hello, world! from process 3 of 4"]
  [P_3] process 2 said: "Hello, world! from process 2 of 4"]


=== Safe MPI === <!--T:37-->


<!--T:38-->
The MPI standard defines <tt>MPI_Send</tt> and <tt>MPI_Recv</tt> to be '''blocking calls'''. This means <tt>MPI_Send</tt> will not return until it is safe for the calling module to modify the contents of the provided message buffer.  Similarly, <tt>MPI_Recv</tt> will not return until the entire contents of the message are available in the message buffer the caller provides.


<!--T:39-->
It should be obvious that whether or not the MPI library provides buffering does not affect receive operations. As soon as the data is received, it will be placed directly in the message buffer provided by the caller and <tt>MPI_Recv</tt> will return; until then the call will be blocked. <tt>MPI_Send</tt> on the other hand need not block if the library provides an internal buffer. Once the message is copied out of the original data location, it is safe for the user to modify that location, so the call can return. This is why our parallel "Hello, world!" program doesn't deadlock as we have implemented it, even though all processes call <tt>MPI_Send</tt> first. Since the buffering is not required by the MPI standard and the correctness of our program relies on it, we refer to such a program as '''''unsafe''''' MPI.


<!--T:40-->
A '''''safe''''' MPI program is one that does not rely on a buffered implementation in order to function correctly. The following pseudo-code fragments illustrate this concept:

==== Deadlock ==== <!--T:41-->
</translate>
<source lang="c">
</source>
<translate>
<!--T:42-->
Receives are executed on both processes before the matching send; regardless of buffering, the processes in this MPI application will block on the receive calls and deadlock.

==== Unsafe ==== <!--T:43-->
</translate>
<source lang="c">
</source>
<translate>
<!--T:44-->
This is essentially what our parallel "Hello, world!" program was doing, and it ''may'' work if buffering is provided by the library. If the library is unbuffered, or if messages are simply large enough to fill the buffer, this code will block on the sends, and deadlock. To fix that we will need to do the following instead:

==== Safe ==== <!--T:45-->
</translate>
<source lang="c">
</source>
<translate>
<!--T:46-->
Even in the absence of buffering, the send here is paired with a corresponding receive between processes. While a process may block for a time until the corresponding call is made, it cannot deadlock.


<!--T:47-->
How do we rewrite our "Hello, World!" program to make it safe? A common solution to this kind of problem is to adopt an odd-even pairing and perform the communication in two steps. Since in our example communication is a rotation of data one rank to the right, we should end up with a safe program if all even ranked processes execute a send followed by a receive, while all odd ranked processes execute a receive followed by a send. The reader can easily verify that the sends and receives are properly paired, avoiding any possibility of deadlock.
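Schematically, the two-step ordering can be written as below; <code>outbuf</code>, <code>inbuf</code>, <code>BUFMAX</code>, <code>sendto</code>, <code>recvfrom</code> and <code>status</code> are illustrative names rather than those used in the program that follows:
<source lang="c">
if (rank % 2 == 0) {
    /* Even ranks send first, then receive. */
    MPI_Send(outbuf, BUFMAX, MPI_CHAR, sendto,   0, MPI_COMM_WORLD);
    MPI_Recv(inbuf,  BUFMAX, MPI_CHAR, recvfrom, 0, MPI_COMM_WORLD, &status);
} else {
    /* Odd ranks receive first, then send. */
    MPI_Recv(inbuf,  BUFMAX, MPI_CHAR, recvfrom, 0, MPI_COMM_WORLD, &status);
    MPI_Send(outbuf, BUFMAX, MPI_CHAR, sendto,   0, MPI_COMM_WORLD);
}
</source>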
</translate>
</tabs>
<translate>
<!--T:48-->
Is there still a problem here if the number of processes is odd? It might seem so at first, as process 0 (which is even) will be sending while process N-1 (also even) is trying to send to 0. But process 0 is originating a send that is correctly paired with a receive at process 1. Since process 1 (odd) begins with a receive, that transaction is guaranteed to complete. When it does, process 0 will proceed to receive the message from process N-1. There may be a (very small!) delay, but there is no chance of a deadlock.


  <!--T:49-->
  [~]$ mpicc -Wall phello3.c -o phello3
  [~]$ mpirun -np 16 ./phello3
  [P_1] process 0 said: "Hello, world! from process 0 of 16"]
  [P_11] process 10 said: "Hello, world! from process 10 of 16"]


<!--T:50-->
Note that many frequently-occurring communication patterns have been captured in the [http://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-1.1/node64.htm#Node64 collective communication] functions of MPI. If there is a collective function that matches the communication pattern you need, you should use it instead of implementing it yourself with <tt>MPI_Send</tt> and <tt>MPI_Recv</tt>.
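For example, distributing a single integer from rank 0 to every other process can be done with one call to <code>MPI_Bcast</code> rather than a hand-written loop of sends and receives (a minimal sketch; the variable name is illustrative):
<source lang="c">
int value = 0;
if (rank == 0)
    value = 42;   /* only the root knows the value initially */

/* After this call, every process in MPI_COMM_WORLD has value == 42. */
MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
</source>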


== Comments and Further Reading == <!--T:51-->
This tutorial presented some of the key syntax, semantics, and design concepts associated with MPI programming. There is still a wealth of material to be considered in designing any serious parallel program, including but not limited to:
* <tt>MPI_Send</tt>/<tt>MPI_Recv</tt> variants ([http://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-1.1/node40.htm#Node40 buffered, non-blocking, synchronous], etc.)
* [https://drive.google.com/file/d/0B4bveu7i2jOyeVR5VGlxV1g1MDQ/view Tutorial on Boost MPI (in French)]


=== Selected references === <!--T:52-->
* William Gropp, Ewing Lusk, and Anthony Skjellum. ''Using MPI: Portable Parallel Programming with the Message-Passing Interface (2e)''. MIT Press, 1999.
** Comprehensive reference covering Fortran, C and C++ bindings