Advanced MPI scheduling

  srun application.exe
This job is the same size as the last one: 16 tasks (that is, 16 MPI processes), each with 4 threads. The difference here is that we are sure of getting exactly 4 tasks on each of 4 different nodes. Recall that <code>--mem</code> requests memory ''per node'', so we use it instead of <code>--mem-per-cpu</code> for the reason described earlier.
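As a sketch of how these pieces fit together in a complete job script (the 16 GB per-node memory figure is an assumed placeholder, and <code>application.exe</code> stands in for your own program):

```shell
#!/bin/bash
# Request 4 whole nodes with 4 MPI tasks each: 16 tasks in total,
# each task with 4 CPU cores for its threads.
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=4
# --mem is per node, so this reserves 16 GB on each of the 4 nodes
# (the value is illustrative; size it to your application's needs).
#SBATCH --mem=16G
srun application.exe
```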
=== MPI and GPUs ===
To come


=== Why srun instead of mpiexec or mpirun? ===