Advanced MPI scheduling

Also, as you would expect, <code>srun</code> is fully coupled to Slurm. When you <code>srun</code> an application, a "job step" is started, the environment variables <code>SLURM_STEP_ID</code> and <code>SLURM_PROCID</code> are initialized correctly, and correct accounting information is recorded.
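
For instance, here is a minimal sketch of a job script (assuming a hypothetical <code>./mpi_app</code> binary and default site settings) showing job steps and the per-task variables:

<pre>
#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --time=0:05:00
#SBATCH --mem-per-cpu=1G

# Each srun invocation starts a new job step (step 0, step 1, ...),
# which you can inspect afterwards with: sacct -j $SLURM_JOB_ID
srun bash -c 'echo "step $SLURM_STEP_ID, rank $SLURM_PROCID of $SLURM_NTASKS"'

# A hypothetical MPI binary; this runs as the next job step.
srun ./mpi_app
</pre>
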
<!--T:22-->
For an example of some differences between <code>srun</code> and <code>mpiexec</code>, see [https://mail-archive.com/users@lists.open-mpi.org/msg31874.html this discussion] on the Open MPI support forum. Better performance might be achievable with <code>mpiexec</code> than with <code>srun</code> under certain circumstances, but using <code>srun</code> minimizes the risk that there will be a mismatch between the resources allocated by Slurm and those used by Open MPI.
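
As a rough sketch only (the exact behaviour depends on how Open MPI was built and on site configuration; <code>./mpi_app</code> is again hypothetical), the two launchers might be compared within the same allocation like this:

<pre>
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8

# srun reads the allocation directly from Slurm, so the task count
# and placement always match what Slurm granted.
srun ./mpi_app

# mpiexec discovers the Slurm allocation through Open MPI's own
# run-time support; explicit options such as -np can override it
# and introduce a mismatch with the allocated resources.
mpiexec -np 16 ./mpi_app
</pre>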