We can now make the required modifications to our parallel "Hello, world!" program as shown below.
<tabs>
<tab name="C">
{{File
|name=phello2.c
|lang="c"
|contents=
#include <stdio.h>
#include <mpi.h>
...
return(0);
}
}}
</tab>
<tab name="Fortran">
{{File
|name=phello2.f90
|lang="fortran"
|contents=
program phello2
...
end program phello2
}}
</tab>
</tabs>
Compile this program and run it using 2, 4, and 8 processes. While it certainly seems to be working as intended, there is a hidden problem here. The MPI standard does not ''guarantee'' that <tt>MPI_Send</tt> returns before the message has been delivered. Most implementations ''buffer'' the data from <tt>MPI_Send</tt> and return without waiting for it to be delivered. But if it were not buffered, the code we've written would deadlock: Each process would call <tt>MPI_Send</tt> and then wait for its neighbour process to call <tt>MPI_Recv</tt>. Since the neighbour would also be waiting at the <tt>MPI_Send</tt> stage, they would all wait forever. Clearly there ''is'' buffering in the libraries on our systems since the code did not deadlock, but it is poor design to rely on this. The code could fail if used on a system in which there is no buffering provided by the library. Even where buffering is provided, the call might still block if the buffer fills up.
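One common way to avoid relying on buffering is to pair the two operations in a single <tt>MPI_Sendrecv</tt> call, which lets the MPI library complete the exchange safely even when nothing is buffered. The sketch below is illustrative only, assuming the ring-style exchange described above in which each process passes its rank to one neighbour and receives from the other; the file name and variable names here are placeholders, not part of the program shown earlier.
{{File
|name=phello_sendrecv.c
|lang="c"
|contents=
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, sendto, recvfrom, ourvalue, theirvalue;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Ring exchange: send our rank to the next process,
       receive from the previous one. */
    sendto = (rank + 1) % size;
    recvfrom = (rank + size - 1) % size;
    ourvalue = rank;

    /* The combined call completes both operations without requiring
       the library to buffer the outgoing message. */
    MPI_Sendrecv(&ourvalue, 1, MPI_INT, sendto, 0,
                 &theirvalue, 1, MPI_INT, recvfrom, 0,
                 MPI_COMM_WORLD, &status);

    printf("[P_%d] Hello, world! I received the rank of process %d.\n",
           rank, theirvalue);

    MPI_Finalize();
    return(0);
}
}}
Because the send and the receive are presented to the library together, no process has to wait for its neighbour to return from a blocking <tt>MPI_Send</tt> first, so the exchange completes regardless of how much buffering the implementation provides.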