Advanced MPI scheduling

  <!--T:9-->
  --nodes=2
  --ntasks-per-node=8
  --cpus-per-task=4
  --mem=96G
  srun application.exe
This job is the same size as the last one: 16 tasks (that is, 16 MPI processes), each with 4 threads. The difference here is that we are sure of getting exactly 2 whole nodes. Recall that <code>--mem</code> requests memory ''per node'', so we use it instead of <code>--mem-per-cpu</code> for the reason described earlier.
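Put together, these directives form a complete submission script. The following is a minimal sketch, not taken from this page: <code>application.exe</code> stands in for your MPI program, the run time is a placeholder, and <code>OMP_NUM_THREADS</code> is exported only on the assumption that each task's 4 CPUs run OpenMP threads.

  #!/bin/bash
  #SBATCH --nodes=2
  #SBATCH --ntasks-per-node=8
  #SBATCH --cpus-per-task=4
  #SBATCH --mem=96G
  #SBATCH --time=0-01:00                        # placeholder run time
  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # assumes the threads are OpenMP threads
  srun application.exe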


=== Why srun instead of mpiexec or mpirun? === <!--T:10-->