MPI

=== Communication ===
While we now have a parallel version of our "Hello, World!" program, it isn't very interesting as there is no communication between the processes. Let's fix this by having the processes send messages to one another.


We'll have each process send the string "hello" to the one with the next higher rank number. Rank <tt>i</tt> will send its message to rank <tt>i+1</tt>, and we'll have the last process, rank <tt>N-1</tt>, send its message back to process <tt>0</tt>. A short way to express this is ''process <tt>i</tt> sends to process <tt>(i+1)%N</tt>'', where there are <tt>N</tt> processes and % is the modulus operator.


MPI provides a large number of functions for sending and receiving data of almost any composition in a variety of communication patterns (one-to-one, one-to-many, many-to-one, and many-to-many). But the simplest to understand are the functions that send a sequence of one or more instances of an atomic data type from one process to one other process: <tt>MPI_Send</tt> and <tt>MPI_Recv</tt>.


A process sends data by calling the <tt>MPI_Send</tt> function. Referring to the following function prototypes, <tt>MPI_Send</tt> can be summarized as sending '''count''' contiguous instances of '''datatype''' to the process with the specified '''rank''', where the data is in the buffer pointed to by '''message'''. The '''tag''' is a programmer-specified identifier that becomes associated with the message, and can be used to organize the communication process (for example, to distinguish two distinct streams of interleaved data). Our examples do not require this, so we will pass in the value 0 for the '''tag'''. The '''comm''' argument is the communicator described above, and we will continue to use <tt>MPI_COMM_WORLD</tt>.


{| border="0" cellpadding="5" cellspacing="0" align="center"
|
 int MPI_Send(void *message, int count, MPI_Datatype datatype,
              int rank, int tag, MPI_Comm comm)
|}


Note that the '''datatype''' argument, specifying the type of data contained in the '''message''' buffer, is a variable. This is intended to provide a layer of compatibility between processes that could be running on architectures for which the native format for these types differs. It is possible to register new data types, but for this tutorial we will only use the pre-defined types provided by MPI. There is an MPI type pre-defined for all atomic data types in the source language (for C: <tt>MPI_CHAR</tt>, <tt>MPI_FLOAT</tt>, <tt>MPI_SHORT</tt>, <tt>MPI_INT</tt>, etc., and for Fortran: <tt>MPI_CHARACTER</tt>, <tt>MPI_INTEGER</tt>, <tt>MPI_REAL</tt>, etc.). You can find a full list of these types in the references provided below.


<tt>MPI_Recv</tt> works in much the same way as <tt>MPI_Send</tt>. Referring to the function prototypes below, '''message''' is now a pointer to an allocated buffer of sufficient size to receive '''count''' contiguous instances of '''datatype''', which is received from process '''rank'''. <tt>MPI_Recv</tt> takes one additional argument, '''status''', which in C should be a reference to an allocated <tt>MPI_Status</tt> structure, and in Fortran an array of <tt>MPI_STATUS_SIZE</tt> integers. Upon return it will be filled in with information related to the received message. We will not make use of this argument in this tutorial, but it must be present.


{| border="0" cellpadding="5" cellspacing="0" align="center"
|
 int MPI_Recv(void *message, int count, MPI_Datatype datatype,
              int rank, int tag, MPI_Comm comm, MPI_Status *status)
|}
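To make the prototypes concrete, here is a sketch of a matched send/receive pair. The partner ranks, the tag value 0, and the buffer size are illustrative assumptions, not part of the original tutorial.

```c
#include <mpi.h>
#include <string.h>

/* Sketch: rank 0 sends a string to rank 1, which receives it.
   Partner ranks, tag 0, and the 64-byte buffer are assumptions. */
void exchange(int rank)
{
    MPI_Status status;

    if (rank == 0) {
        char outbuf[] = "Hello, world!";
        /* count includes the terminating '\0' so a complete
           C string arrives at the receiver */
        MPI_Send(outbuf, (int)strlen(outbuf) + 1, MPI_CHAR,
                 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        char inbuf[64];
        MPI_Recv(inbuf, (int)sizeof(inbuf), MPI_CHAR,
                 0, 0, MPI_COMM_WORLD, &status);
    }
}
```

Note that the receiver's '''count''' (64 here) is an upper bound on the buffer size; the message actually received may be shorter.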


The sending process must know the rank of the receiving process, and the receiving process must know the rank of the sending process. In our example, we need to do some arithmetic such that, given its rank, each process knows both where to send its data, <tt>[(rank + 1) % size]</tt>, and where to receive from, <tt>[(rank + size - 1) % size]</tt>. We'll now make the required modifications to our parallel "Hello, world!" program. As a first cut, we'll simply have each process first send its message, then receive what is being sent to it.
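The ring arithmetic itself is plain integer math and can be checked outside of MPI; the function names below are illustrative:

```c
/* Rank that process `rank` sends to in a ring of `size` processes. */
int ring_sendto(int rank, int size)
{
    return (rank + 1) % size;
}

/* Rank that process `rank` receives from. Adding `size` before
   subtracting 1 keeps the value non-negative when rank is 0. */
int ring_recvfrom(int rank, int size)
{
    return (rank + size - 1) % size;
}
```

With four processes, rank 0 sends to rank 1 and receives from rank 3, while rank 3 sends to rank 0 and receives from rank 2.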


{| border="0" cellpadding="5" cellspacing="0" align="center"
     ...
     int sendto, recvfrom;
     MPI_Status status;
   
     MPI_Init(&argc, &argv);
     ...
     MPI_Finalize();
     return(0);
  }
|}

  ...
  [P_2] process 1 said: "Hello, world! from process 1 of 4"]
  [P_3] process 2 said: "Hello, world! from process 2 of 4"]
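The listing above is fragmentary; a minimal self-contained version of the send-then-receive ring, consistent with the surrounding text, might look like the following. The buffer sizes and exact message format are assumptions:

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int rank, size;
    int sendto, recvfrom;
    char outbuf[64], inbuf[64];   /* assumed sizes */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* ring neighbours: send "up" one rank, receive from "down" one */
    sendto   = (rank + 1) % size;
    recvfrom = (rank + size - 1) % size;

    snprintf(outbuf, sizeof(outbuf),
             "Hello, world! from process %d of %d", rank, size);

    /* first cut: each process sends first, then receives */
    MPI_Send(outbuf, (int)strlen(outbuf) + 1, MPI_CHAR,
             sendto, 0, MPI_COMM_WORLD);
    MPI_Recv(inbuf, (int)sizeof(inbuf), MPI_CHAR,
             recvfrom, 0, MPI_COMM_WORLD, &status);

    printf("[P_%d] process %d said: \"%s\"\n", rank, recvfrom, inbuf);

    MPI_Finalize();
    return 0;
}
```

Compile and run with an MPI toolchain, e.g. <tt>mpicc hello.c && mpirun -np 4 ./a.out</tt>.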


=== Safe MPI ===