We can now make the required modifications to our parallel "Hello, world!" program as shown below.


<tabs>
<tab name="C">
{{File
  |name=phello2.c
  |lang="c"
  |contents=
#include <stdio.h>
#include <mpi.h>

#define BUFMAX 81

int main(int argc, char *argv[])
{
   char outbuf[BUFMAX], inbuf[BUFMAX];
   int rank, size;
   int sendto, recvfrom;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &size);

   sprintf(outbuf, "Hello, world! from process %d of %d", rank, size);

   /* Send to the next process in the ring, receive from the previous one */
   sendto = (rank + 1) % size;
   recvfrom = (rank + size - 1) % size;

   MPI_Send(outbuf, BUFMAX, MPI_CHAR, sendto, 0, MPI_COMM_WORLD);
   MPI_Recv(inbuf, BUFMAX, MPI_CHAR, recvfrom, 0, MPI_COMM_WORLD, &status);

   printf("[P_%d] process %d said: \"%s\"\n", rank, recvfrom, inbuf);

   MPI_Finalize();
   return(0);
}
}}
</tab>
 
<tab name="Fortran">
{{File
  |name=phello2.f90
  |lang="fortran"
  |contents=
program phello2

   use mpi
   implicit none

   integer, parameter :: BUFMAX=81
   character(len=BUFMAX) :: outbuf, inbuf
   integer :: rank, num_procs, ierr
   integer :: sendto, recvfrom
   integer :: status(MPI_STATUS_SIZE)

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

   write(outbuf,'(a,i3,a,i3)') 'Hello, world! from process', rank, ' of', num_procs

   ! Send to the next process in the ring, receive from the previous one
   sendto = mod(rank + 1, num_procs)
   recvfrom = mod(rank + num_procs - 1, num_procs)

   call MPI_SEND(outbuf, BUFMAX, MPI_CHARACTER, sendto, 0, MPI_COMM_WORLD, ierr)
   call MPI_RECV(inbuf, BUFMAX, MPI_CHARACTER, recvfrom, 0, MPI_COMM_WORLD, status, ierr)

   print *, 'Process', rank, ': Process', recvfrom, ' said: ', trim(inbuf)

end program phello2
}}
</tab>
</tabs>
 


Compile this program and run it using 2, 4, and 8 processes. While it certainly seems to be working as intended, there is a hidden problem here. The MPI standard does not ''guarantee'' that <tt>MPI_Send</tt> returns before the message has been delivered. Most implementations ''buffer'' the data from <tt>MPI_Send</tt> and return without waiting for it to be delivered. But if it were not buffered, the code we've written would deadlock: each process would call <tt>MPI_Send</tt> and then wait for its neighbour to call <tt>MPI_Recv</tt>. Since the neighbour would also be waiting at the <tt>MPI_Send</tt> stage, they would all wait forever. Clearly there ''is'' buffering in the libraries on our systems, since the code did not deadlock, but it is poor design to rely on this. The code could fail on a system whose library provides no buffering, and even where buffering is provided, the call might still block if the buffer fills up.
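
With a typical MPI installation such as Open MPI or MPICH, the compile-and-run step above looks like the following (the exact wrapper and launcher names can vary between distributions and schedulers):

<source lang="bash">
mpicc phello2.c -o phello2
mpirun -np 8 ./phello2
</source>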
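
One standard way to make this kind of ring exchange safe regardless of buffering is <tt>MPI_Sendrecv</tt>, which pairs the send and the receive in a single call so the library can carry out the exchange without either operation blocking the other. The sketch below (the file name <tt>phello2_sendrecv.c</tt> is ours, not part of the original example) shows the C program rewritten this way:

{{File
  |name=phello2_sendrecv.c
  |lang="c"
  |contents=
#include <stdio.h>
#include <mpi.h>

#define BUFMAX 81

int main(int argc, char *argv[])
{
   char outbuf[BUFMAX], inbuf[BUFMAX];
   int rank, size;
   int sendto, recvfrom;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &size);

   sprintf(outbuf, "Hello, world! from process %d of %d", rank, size);

   sendto = (rank + 1) % size;
   recvfrom = (rank + size - 1) % size;

   /* The paired send/receive cannot deadlock the way separate
      MPI_Send/MPI_Recv calls can: the library is free to progress
      the receive while the send is still pending, whether or not
      the message is buffered. */
   MPI_Sendrecv(outbuf, BUFMAX, MPI_CHAR, sendto, 0,
                inbuf, BUFMAX, MPI_CHAR, recvfrom, 0,
                MPI_COMM_WORLD, &status);

   printf("[P_%d] process %d said: \"%s\"\n", rank, recvfrom, inbuf);

   MPI_Finalize();
   return(0);
}
}}

Another common fix is to break the symmetry by hand, for example by having even-numbered ranks send first and odd-numbered ranks receive first, so that every send is matched by a receive that has already been posted.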