=== Communication ===
While we now have a parallel version of our "Hello, World!" program, it isn't very interesting as there is no communication between the processes. Let's fix this by having the processes send messages to one another.
We'll have each process send the string "hello" to the one with the next higher rank number. Rank <tt>i</tt> will send its message to rank <tt>i+1</tt>, and we'll have the last process, rank <tt>N-1</tt>, send its message back to process <tt>0</tt>. A short way to express this is ''process <tt>i</tt> sends to process <tt>(i+1)%N</tt>'', where there are <tt>N</tt> processes and % is the modulus operator.
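To make this pattern concrete, here is a minimal sketch of the neighbour arithmetic, assuming <tt>rank</tt> and <tt>size</tt> have already been obtained from MPI as shown earlier:

<source lang="c">
/* Each process computes its two neighbours in the ring.
   With size = 4, rank 3 sends to (3 + 1) % 4 = 0 and
   receives from (3 + 4 - 1) % 4 = 2. */
int sendto   = (rank + 1) % size;
int recvfrom = (rank + size - 1) % size;
</source>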
MPI provides a large number of functions for sending and receiving data of almost any composition in a variety of communication patterns (one-to-one, one-to-many, many-to-one, and many-to-many). The simplest to understand, however, are the functions that send a sequence of one or more instances of an atomic data type from one process to another: <tt>MPI_Send</tt> and <tt>MPI_Recv</tt>.
A process sends data by calling the <tt>MPI_Send</tt> function. Referring to the following function prototypes, <tt>MPI_Send</tt> can be summarized as sending '''count''' contiguous instances of '''datatype''' to the process with the specified '''rank'''; the data is read from the buffer pointed to by '''message'''. '''Tag''' is a programmer-specified identifier that becomes associated with the message and can be used to organize the communication process (for example, to distinguish two distinct streams of interleaved data). Our examples do not require this, so we will pass in the value 0 for the '''tag'''. '''Comm''' is the communicator described above, and we will continue to use <tt>MPI_COMM_WORLD</tt>.
{| border="0" cellpadding="5" cellspacing="0" align="center"
! C
! Fortran
|- valign="top"
|<source lang="c">
int MPI_Send
(
    void *message,          /* reference to the data to be sent */
    int count,              /* number of items in the message */
    MPI_Datatype datatype,  /* type of each item in the message */
    int rank,               /* rank of the process to receive the message */
    int tag,                /* programmer-specified identifier */
    MPI_Comm comm           /* communicator */
);
</source>
|<source lang="fortran">
MPI_SEND(MESSAGE, COUNT, DATATYPE, RANK, TAG, COMM, IERR)
<type> MESSAGE(*)
INTEGER COUNT, DATATYPE, RANK, TAG, COMM, IERR
</source>
|}
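As an illustration, a process could send a short, null-terminated string to the process with rank 1 as follows (the buffer name <tt>greeting</tt> is just an example, and <tt>strlen</tt> requires <tt>string.h</tt>):

<source lang="c">
/* Send a null-terminated string to the process with rank 1;
   strlen(greeting) + 1 includes the trailing '\0' in the count. */
char greeting[] = "hello";
MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
</source>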
Note that the '''datatype''' argument, specifying the type of data contained in the '''message''' buffer, is a variable. This is intended to provide a layer of compatibility between processes that could be running on architectures for which the native format for these types differs. It is possible to register new data types, but for this tutorial we will only use the pre-defined types provided by MPI. There is a pre-defined MPI type for each atomic data type in the source language (for C: <tt>MPI_CHAR</tt>, <tt>MPI_FLOAT</tt>, <tt>MPI_SHORT</tt>, <tt>MPI_INT</tt>, etc., and for Fortran: <tt>MPI_CHARACTER</tt>, <tt>MPI_INTEGER</tt>, <tt>MPI_REAL</tt>, etc.). You can find a full list of these types in the references provided below.
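The same call structure works for any of these types; for example, sending ten integers instead of a string of characters only changes the '''count''' and '''datatype''' arguments (the array name <tt>values</tt> is illustrative):

<source lang="c">
/* Send 10 contiguous ints to the process with rank 1; MPI handles
   any difference in native int format between architectures. */
int values[10] = {0};
MPI_Send(values, 10, MPI_INT, 1, 0, MPI_COMM_WORLD);
</source>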
<tt>MPI_Recv</tt> works in much the same way as <tt>MPI_Send</tt>. Referring to the function prototypes below, '''message''' is now a pointer to an allocated buffer of sufficient size to hold '''count''' contiguous instances of '''datatype''', received from process '''rank'''. <tt>MPI_Recv</tt> takes one additional argument, '''status''', which in C should be a reference to an allocated <tt>MPI_Status</tt> structure and in Fortran an array of <tt>MPI_STATUS_SIZE</tt> integers. Upon return it is filled in with information about the received message. We will not make use of this argument in this tutorial, but it must be present.
{| border="0" cellpadding="5" cellspacing="0" align="center"
! C
! Fortran
|- valign="top"
|<source lang="c">
int MPI_Recv
(
    void *message,          /* reference to a buffer for the received data */
    int count,              /* maximum number of items to receive */
    MPI_Datatype datatype,  /* type of each item in the message */
    int rank,               /* rank of the process sending the message */
    int tag,                /* programmer-specified identifier */
    MPI_Comm comm,          /* communicator */
    MPI_Status *status      /* filled in with details of the received message */
);
</source>
|<source lang="fortran">
MPI_RECV(MESSAGE, COUNT, DATATYPE, RANK, TAG, COMM, STATUS, IERR)
<type> MESSAGE(*)
INTEGER COUNT, DATATYPE, RANK, TAG, COMM, IERR
INTEGER STATUS(MPI_STATUS_SIZE)
</source>
|}
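Continuing the earlier illustration, the matching receive for the greeting sent above might look like this (<tt>BUFMAX</tt> is an assumed buffer size; '''count''' is an upper bound, so the received message may be shorter):

<source lang="c">
/* Receive up to BUFMAX characters from the process with rank 0. */
char inbuf[BUFMAX];
MPI_Status status;
MPI_Recv(inbuf, BUFMAX, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
</source>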
The sending process must know the rank of the receiving process, and the receiving process must know the rank of the sending process. In our example, given its rank, each process can compute both where to send its data, <tt>(rank + 1) % size</tt>, and where to receive from, <tt>(rank + size - 1) % size</tt>. We'll now make the required modifications to our parallel "Hello, world!" program. As a first cut, we'll simply have each process first send its message, then receive what is being sent to it.
{| border="0" cellpadding="5" cellspacing="0" align="center"
|<source lang="c">
#include <stdio.h>
#include <mpi.h>

#define BUFMAX 81

int main(int argc, char *argv[])
{
    char outbuf[BUFMAX], inbuf[BUFMAX];
    int rank, size;
    int sendto, recvfrom;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* compose this process's message */
    sprintf(outbuf, "Hello, world! from process %d of %d", rank, size);

    /* compute the ring neighbours */
    sendto = (rank + 1) % size;
    recvfrom = (rank + size - 1) % size;

    /* send first, then receive (see the Safe MPI section below) */
    MPI_Send(outbuf, BUFMAX, MPI_CHAR, sendto, 0, MPI_COMM_WORLD);
    MPI_Recv(inbuf, BUFMAX, MPI_CHAR, recvfrom, 0, MPI_COMM_WORLD, &status);

    printf("[P_%d] process %d said: \"%s\"]\n", rank, recvfrom, inbuf);

    MPI_Finalize();
    return(0);
}
</source>
|}

When run with four processes, this produces output like the following (the order of the lines will generally vary from run to run):

 [P_0] process 3 said: "Hello, world! from process 3 of 4"]
 [P_1] process 0 said: "Hello, world! from process 0 of 4"]
 [P_2] process 1 said: "Hello, world! from process 1 of 4"]
 [P_3] process 2 said: "Hello, world! from process 2 of 4"]
=== Safe MPI ===