<!--T:10-->
The Message Passing Interface (MPI) is, strictly speaking, a ''standard'' describing a set of subroutines, functions, objects, ''etc.'', with which one can write parallel programs in a distributed memory environment. Many different ''implementations'' of the standard have been produced, such as Open MPI, Intel MPI, MPICH, and MVAPICH. The standard describes how MPI is to be called from Fortran, C, and C++, but unofficial "bindings" can be found for several other languages. Note that the official C++ bindings were removed in MPI 3.0; from C++ you can instead call the C bindings directly, or use a third-party interface such as Boost.MPI.

<!--T:11-->
Each MPI program must include the relevant header file or use the relevant module (<tt>mpi.h</tt> for C/C++; for Fortran, <tt>mpif.h</tt>, <tt>use mpi</tt>, or <tt>use mpi_f08</tt>, where <tt>mpif.h</tt> is strongly discouraged and <tt>mpi_f08</tt> is recommended for new Fortran 2008 code), and be compiled and linked against the desired MPI implementation. Most MPI implementations provide a handy script, often called a ''compiler wrapper'', that handles all set-up issues with respect to <code>include</code> and <code>lib</code> directories, linking flags, ''etc.'' Our examples will all use these compiler wrappers; typical invocations are shown after the list:
* C language wrapper: <tt>mpicc</tt>
* Fortran: <tt>mpifort</tt> (recommended) or <tt>mpif90</tt>
* C++: <tt>mpiCC</tt>
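
For example, a program could be compiled and run as follows (a sketch: the file names and process count are illustrative, and the appropriate launcher, <tt>mpirun</tt> here, may differ on your cluster and scheduler):

 mpicc -o phello phello.c
 mpifort -o phello phello.f90
 mpirun -np 4 ./phello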

<!--T:33-->
<tt>MPI_Recv</tt> works in much the same way as <tt>MPI_Send</tt>. Referring to the function prototypes below, <tt>message</tt> is now a pointer to an allocated buffer of sufficient size to store <tt>count</tt> instances of <tt>datatype</tt>, to be received from process <tt>rank</tt>. <tt>MPI_Recv</tt> takes one additional argument, <tt>status</tt>, which should, in C, be a reference to an allocated <tt>MPI_Status</tt> structure, and, in Fortran, be an array of <tt>MPI_STATUS_SIZE</tt> integers or, for <tt>mpi_f08</tt>, a derived <tt>TYPE(MPI_Status)</tt> variable. Upon return it will contain some information about the received message. Although we will not make use of it in this tutorial, the argument must be present.
</translate>
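
As an illustration of the <tt>status</tt> argument, here is a minimal C sketch of our own (the file name and values are illustrative, and a run with at least two processes is assumed), in which process 1 receives a single integer from process 0 and then reads the sender's rank and the message tag from the status:
{{File
  |name=recv_status.c
  |lang="c"
  |contents=
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Send one integer to process 1 with tag 0. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        /* The status object records the actual source and tag of the message. */
        printf("Received %d from process %d with tag %d\n",
               value, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}
}}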
<tabs>

     use mpi
     implicit none

     integer, parameter :: BUFMAX=81

     call MPI_FINALIZE(ierr)
end program phello3
}}
</tab>
<tab name="Fortran 2008">
{{File
  |name=phello3.f90
  |lang="fortran"
  |contents=
program phello3
    use mpi_f08
    implicit none
    integer, parameter :: BUFMAX=81
    character(len=BUFMAX) :: outbuf, inbuf, tmp
    integer :: rank, num_procs
    integer :: sendto, recvfrom
    type(MPI_Status) :: status
    call MPI_Init()
    call MPI_Comm_rank(MPI_COMM_WORLD, rank)
    call MPI_Comm_size(MPI_COMM_WORLD, num_procs)
    outbuf = 'Hello, world! from process '
    write(tmp,'(i2)') rank
    outbuf = outbuf(1:len_trim(outbuf)) // tmp(1:len_trim(tmp))
    write(tmp,'(i2)') num_procs
    outbuf = outbuf(1:len_trim(outbuf)) // ' of ' // tmp(1:len_trim(tmp))
    sendto = mod((rank + 1), num_procs)
    recvfrom = mod(((rank + num_procs) - 1), num_procs)
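    ! Even ranks send first, then receive; odd ranks receive first, then send.
    ! Pairing the blocking calls this way prevents deadlock around the ring.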
    if (MOD(rank,2) == 0) then
        call MPI_Send(outbuf, BUFMAX, MPI_CHARACTER, sendto, 0, MPI_COMM_WORLD)
        call MPI_Recv(inbuf, BUFMAX, MPI_CHARACTER, recvfrom, 0, MPI_COMM_WORLD, status)
    else
        call MPI_Recv(inbuf, BUFMAX, MPI_CHARACTER, recvfrom, 0, MPI_COMM_WORLD, status)
        call MPI_Send(outbuf, BUFMAX, MPI_CHARACTER, sendto, 0, MPI_COMM_WORLD)
    endif
    print *, 'Process', rank, ': Process', recvfrom, ' said:', inbuf
    call MPI_Finalize()

end program phello3
}}
</tab>