<tabs>
<tab name="C">
{{File
|name=phello0.c
</source>
</tab>
<tab name="Fortran">
<source lang="fortran">
MPI_COMM_SIZE(COMM, NPROC, IERR)
Compile and run this program using 2, 4 and 8 processes. Note that each running process produces output based on the values of its local variables. The stdout of all running processes is simply concatenated together. As you run the program with more processes, you may see that the output from the different processes does not appear in order of rank: you should make no assumptions about the order of output from different processes.
[~]$ vi phello1.c
[~]$ mpicc -Wall phello1.c -o phello1
[~]$ mpirun -np 4 ./phello1
Hello, world! from process 0 of 4
Hello, world! from process 2 of 4
A process sends data by calling the <tt>MPI_Send</tt> function. Referring to the following function prototypes, <tt>MPI_Send</tt> can be summarized as sending <tt>count</tt> contiguous instances of <tt>datatype</tt>, taken from the buffer pointed to by <tt>message</tt>, to the process with the specified <tt>rank</tt>. <tt>Tag</tt> is a programmer-specified identifier that becomes associated with the message and can be used, for example, to organize the communication streams (e.g. to distinguish two distinct streams of interleaved data). Our examples do not require this, so we will pass the value 0 for the <tt>tag</tt>. <tt>Comm</tt> is the communicator described above, and we will continue to use <tt>MPI_COMM_WORLD</tt>.
<tabs>
<tab name="C">
<source lang="c">
int MPI_Send
(
    void *message,           /* data to be sent */
    int count,               /* number of items in message */
    MPI_Datatype datatype,   /* type of each item */
    int dest,                /* rank of the receiving process */
    int tag,                 /* programmer-specified identifier */
    MPI_Comm comm            /* communicator */
);
</source>
</tab>
<tab name="Fortran">
<source lang="fortran">
MPI_SEND(MESSAGE, COUNT, DATATYPE, DEST, TAG, COMM, IERR)
<type> :: MESSAGE(*)
INTEGER :: COUNT, DATATYPE, DEST, TAG, COMM, IERR
</source>
</tab>
</tabs>
Note that the <tt>datatype</tt> argument, specifying the type of data contained in the <tt>message</tt> buffer, is a variable. This is intended to provide a layer of compatibility between processes that could be running on architectures for which the native format for these types differs. It is possible to register new data types, but for this tutorial we will only use the predefined types provided by MPI. There is a predefined MPI type for each atomic data type in the source language (for C: <tt>MPI_CHAR</tt>, <tt>MPI_FLOAT</tt>, <tt>MPI_SHORT</tt>, <tt>MPI_INT</tt>, etc., and for Fortran: <tt>MPI_CHARACTER</tt>, <tt>MPI_INTEGER</tt>, <tt>MPI_REAL</tt>, etc.). You can find a full list of these types in the references provided below.
<tt>MPI_Recv</tt> works in much the same way as <tt>MPI_Send</tt>. Referring to the function prototypes below, <tt>message</tt> is now a pointer to an allocated buffer of sufficient size to store <tt>count</tt> instances of <tt>datatype</tt>, to be received from process <tt>rank</tt>. <tt>MPI_Recv</tt> takes one additional argument, <tt>status</tt>, which should, in C, be a reference to an allocated <tt>MPI_Status</tt> structure, and, in Fortran, be an array of <tt>MPI_STATUS_SIZE</tt> integers. Upon return it will contain some information about the received message. Although we will not make use of it in this tutorial, the argument must be present.
<tabs>
<tab name="C">
<source lang="c">
int MPI_Recv
(
    void *message,           /* buffer for the received data */
    int count,               /* capacity of the buffer, in items */
    MPI_Datatype datatype,   /* type of each item */
    int source,              /* rank of the sending process */
    int tag,                 /* programmer-specified identifier */
    MPI_Comm comm,           /* communicator */
    MPI_Status *status       /* information about the received message */
);
</source>
</tab>
<tab name="Fortran">
<source lang="fortran">
MPI_RECV(MESSAGE, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERR)
<type> :: MESSAGE(*)
INTEGER :: COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERR
</source>
</tab>
</tabs>
With this simple use of <tt>MPI_Send</tt> and <tt>MPI_Recv</tt>, the sending process must know the rank of the receiving process, and the receiving process must know the rank of the sending process. In our example the following arithmetic is useful:
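For instance, in a ring arrangement where each process sends to the next higher rank and receives from the next lower one, the neighbour ranks follow from modular arithmetic (a sketch; the helper names here are our own, not the tutorial's):

```c
/* In a ring of `size` processes, the right neighbour of `rank` wraps
 * from the last rank back to 0. */
int right_neighbour(int rank, int size)
{
    return (rank + 1) % size;
}

/* The left neighbour wraps from rank 0 back to size - 1; adding `size`
 * before taking the modulus keeps the result non-negative. */
int left_neighbour(int rank, int size)
{
    return (rank + size - 1) % size;
}
```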