=== Rank and Size ===
We could now run this program under the control of MPI, but each process would only output the original string, which is not very interesting. Let's have each process output its rank and how many processes are running in total. This information is obtained at run time using the following functions.
{| border="0" cellpadding="5" cellspacing="0" align="center" | {| border="0" cellpadding="5" cellspacing="0" align="center" | ||
Line 157: | Line 157: | ||
|} | |} | ||
<tt>MPI_Comm_size</tt> reports the number of processes running as part of this job by storing it in the result parameter <tt>nproc</tt>. Similarly, <tt>MPI_Comm_rank</tt> reports the rank of the calling process in the result parameter <tt>myrank</tt>. Ranks in MPI are counted from 0 rather than 1, so given N processes we expect the ranks to be 0..(N-1). The <tt>comm</tt> argument is a ''communicator'', which is a set of processes capable of sending messages to one another. For the purposes of this tutorial we will always pass the predefined value <tt>MPI_COMM_WORLD</tt>, which is simply all the processes started with the job. It is possible to define and use your own communicators, but that is beyond the scope of this tutorial; the reader is referred to the provided references for additional detail.
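Like other MPI functions, both calls also return an error code, which the examples in this tutorial ignore for brevity. The fragment below is a minimal sketch, not part of the tutorial's <tt>phello1.c</tt>, showing how the two queries could be checked against <tt>MPI_SUCCESS</tt>; the variable names <tt>nproc</tt> and <tt>myrank</tt> simply mirror the parameter names above.
 #include <stdio.h>
 #include <mpi.h>
 
 /* Illustrative only: the tutorial's own examples ignore these return codes. */
 int main(int argc, char *argv[])
 {
     int nproc, myrank;
 
     MPI_Init(&argc, &argv);
 
     /* Each call stores its result through the pointer argument and
        returns MPI_SUCCESS if it succeeded. */
     if (MPI_Comm_size(MPI_COMM_WORLD, &nproc) != MPI_SUCCESS ||
         MPI_Comm_rank(MPI_COMM_WORLD, &myrank) != MPI_SUCCESS) {
         fprintf(stderr, "Could not query communicator size or rank\n");
         MPI_Abort(MPI_COMM_WORLD, 1);
     }
 
     /* Ranks are numbered 0 .. (nproc - 1). */
     printf("Process %d of %d\n", myrank, nproc);
 
     MPI_Finalize();
     return(0);
 }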
Let us incorporate these functions into our program, and have each process output its rank and size information. Note that since all processes are still performing identical operations, there are no conditional blocks required in the code.
{| border="0" cellpadding="5" cellspacing="0" align="center" | {| border="0" cellpadding="5" cellspacing="0" align="center" | ||
Line 172: | Line 172: | ||
{ | { | ||
int rank, size; | int rank, size; | ||
MPI_Init(&argc, &argv); | MPI_Init(&argc, &argv); | ||
Line 182: | Line 181: | ||
MPI_Finalize(); | MPI_Finalize(); | ||
return(0); | return(0); | ||
} | } | ||
Line 205: | Line 203: | ||
|} | |} | ||
Compile and run this program on 2, 4 and 8 processors. Note that each running process produces output based on the values of its local variables. The stdout of all running processes is simply concatenated together. As you run the program on more processors, you may see that the output from the different processes does not appear in order of rank: you should make no assumptions about the order of output from different processes.
 [orc-login2 ~]$ vi phello1.c
 [orc-login2 ~]$ mpicc -Wall phello1.c -o phello1
 [orc-login2 ~]$ mpirun -np 4 ./phello1
 Hello, world! from process 0 of 4
 Hello, world! from process 2 of 4
 Hello, world! from process 1 of 4
 Hello, world! from process 3 of 4
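If rank-ordered output is genuinely wanted, one common workaround, not used anywhere in this tutorial, is to have the processes take turns printing, separated by calls to <tt>MPI_Barrier</tt>. The sketch below illustrates the idea; even this only makes ordered output likely rather than guaranteed, because the MPI launcher still buffers and forwards each process's stdout independently.
 #include <stdio.h>
 #include <mpi.h>
 
 /* Illustrative sketch only; not part of the tutorial's phello1.c. */
 int main(int argc, char *argv[])
 {
     int rank, size, turn;
 
     MPI_Init(&argc, &argv);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
 
     /* Each process prints only on its own turn; the barrier keeps
        the others waiting until that turn is over. */
     for (turn = 0; turn < size; turn++) {
         if (turn == rank) {
             printf("Hello, world! from process %d of %d\n", rank, size);
             fflush(stdout);
         }
         MPI_Barrier(MPI_COMM_WORLD);
     }
 
     MPI_Finalize();
     return(0);
 }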