<languages />
<translate>
Most users should submit MPI or distributed memory parallel jobs following the example
=== Why srun instead of mpiexec or mpirun? ===
<code>mpirun</code> is a wrapper that enables communication between processes running on different machines. Modern schedulers already provide much of the information that <code>mpirun</code> needs. With Torque/Moab, for example, there is no need to pass <code>mpirun</code> the list of nodes on which to run, or the number of processes to launch; the scheduler supplies these automatically. With Slurm, task affinity is also resolved by the scheduler, so there is no need to specify options such as
 mpirun --map-by node:pe=4 -n 16 application.exe
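For comparison, a minimal sketch of an equivalent submission where <code>srun</code> picks up the node list, process count and binding directly from the job allocation (the resource values below are illustrative placeholders, not a recommendation):

 #!/bin/bash
 #SBATCH --ntasks=16          # 16 MPI processes in total (replaces -n 16)
 #SBATCH --cpus-per-task=4    # 4 cores per process (replaces --map-by node:pe=4)
 #SBATCH --mem-per-cpu=1024M  # memory per allocated core
 #SBATCH --time=0-00:30       # run time (DD-HH:MM)
 srun ./application.exe       # srun reads the task count and placement from Slurm

Because the allocation already describes how many tasks to start and where to place them, the <code>srun</code> line does not repeat that information.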
[[Category:SLURM]]
</translate>