You may find this discussion of [https://slurm.schedmd.com/faq.html#cpu_count What exactly is considered a CPU?] in SLURM to be useful.
=== Examples of common MPI scenarios ===
 #!/bin/bash
 #SBATCH --ntasks=15
 #SBATCH --mem-per-cpu=3G
 srun application.exe
This will run 15 MPI processes. The cores could be allocated anywhere in the cluster. Since we don’t know ''a priori'' how many cores will reside on each node, any memory request should be made per CPU with <code>--mem-per-cpu</code> rather than per node.
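Assuming the directives above are saved in a submission script, with <code>mpi_job.sh</code> as a placeholder file name, the job would be submitted like so:

 sbatch mpi_job.sh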
If for some reason we need all cores in a single node (to avoid communication overhead, for example), then
 #!/bin/bash
 #SBATCH --nodes=1
 #SBATCH --ntasks-per-node=15
 #SBATCH --mem=45G
 srun application.exe
will give us what we need. In this case we could also say <code>--mem-per-cpu=3G</code>. The main difference is that with <code>--mem-per-cpu=3G</code>, the job will be canceled if any of the processes exceeds 3GB, while with <code>--mem=45G</code>, the memory consumed by each individual process doesn't matter, as long as all of them together don’t use more than 45GB.
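To see how much memory the processes actually used once a job has finished, Slurm's <code>sacct</code> accounting tool can report the maximum resident set size per job step; the job ID <code>1234567</code> below is only a placeholder:

 sacct -j 1234567 --format=JobID,MaxRSS,Elapsed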
=== Hybrid jobs: MPI and OpenMP, or MPI and threads ===