Advanced MPI scheduling

 --mem=96G
 srun --cpus-per-task=$SLURM_CPUS_PER_TASK application.exe
This job is the same size as the last one: 16 tasks (that is, 16 MPI processes), each with 4 threads. The difference here is that we are sure of getting exactly 2 whole nodes. Remember that <code>--mem</code> requests memory <i>per node</i>, so we use it instead of <code>--mem-per-cpu</code> for the reason described earlier.
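
For reference, a complete script for such a request could look like the sketch below. The <code>--nodes=2</code>, <code>--ntasks-per-node=8</code>, and <code>--cpus-per-task=4</code> directives are assumptions chosen to reproduce the 16-task, 4-threads-per-task layout described above, and <code>application.exe</code> stands in for your own hybrid MPI/OpenMP program.

 #!/bin/bash
 #SBATCH --nodes=2               # exactly 2 whole nodes
 #SBATCH --ntasks-per-node=8     # 8 MPI processes per node, 16 in total
 #SBATCH --cpus-per-task=4       # 4 threads per MPI process
 #SBATCH --mem=96G               # memory per node, not per CPU
 export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # assumes an OpenMP-threaded code
 srun --cpus-per-task=$SLURM_CPUS_PER_TASK application.exe

Setting <code>OMP_NUM_THREADS</code> from <code>SLURM_CPUS_PER_TASK</code> keeps the thread count consistent with the resource request if the application uses OpenMP; a program with its own threading mechanism may need a different setting.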


=== Why srun instead of mpiexec or mpirun? === <!--T:10-->