<!--T:9-->
--nodes=2
--ntasks-per-node=8
--cpus-per-task=4
--mem=96G
srun application.exe
This job is the same size as the last one: 16 tasks (that is, 16 MPI processes), each with 4 threads. The difference here is that we are sure of getting exactly 2 whole nodes. Recall that <code>--mem</code> requests memory ''per node'', so we use it instead of <code>--mem-per-cpu</code> for the reason described earlier.
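For context, these directives might appear in a full submission script along the following lines. This is only a sketch: the <code>--time</code> value is a placeholder, and the comments simply spell out the arithmetic (8 tasks × 4 CPUs = 32 CPUs per node, so 96G per node works out to 3G per CPU).

<pre>
#!/bin/bash
#SBATCH --nodes=2                # two whole nodes
#SBATCH --ntasks-per-node=8      # 8 MPI processes per node, 16 in total
#SBATCH --cpus-per-task=4        # 4 threads per MPI process, 64 CPUs in total
#SBATCH --mem=96G                # memory per node: 96G / (8 x 4) CPUs = 3G per CPU
#SBATCH --time=01:00:00          # placeholder walltime
srun application.exe
</pre>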
=== Why srun instead of mpiexec or mpirun? === <!--T:10-->