{{Draft}}
Most users should submit MPI or distributed memory parallel jobs as illustrated
at [[Running_jobs#MPI_job|Running jobs: MPI job]]. Simply request a number of
processes with <code>--ntasks</code> or <code>-n</code> and trust the scheduler
to allocate those processes in a way that balances the efficiency of your job
with the overall efficiency of the cluster.
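For instance, a minimal submission script in that style might look like the sketch below; the program name <code>mpi_program</code> and the task, memory and time values are placeholders rather than recommendations:

<pre>
#!/bin/bash
#SBATCH --ntasks=64          # number of MPI processes; placement is left to the scheduler
#SBATCH --mem-per-cpu=1024M  # memory per process
#SBATCH --time=0-01:00       # run time limit (DD-HH:MM)
srun ./mpi_program           # launch one copy of the program per task
</pre>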
If you need more detailed control over how your job is allocated, then read on
to learn about SLURM's [https://slurm.schedmd.com/sbatch.html <code>sbatch</code>]
command and how its numerous options constrain the placement of processes.

Relevant <code>sbatch</code> options include (see the example after this list):
* -N, --nodes=
* -n, --ntasks=
* --ntasks-per-core=
* --ntasks-per-node=
* -m, --distribution=[arbitrary|<block|cyclic|plane]
* --mem_bind=
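As an illustration of how these options constrain placement, the following sketch requests 64 processes as before, but forces them onto exactly 4 nodes with 16 tasks on each; the numbers are arbitrary examples:

<pre>
#!/bin/bash
#SBATCH --nodes=4             # use exactly 4 nodes
#SBATCH --ntasks-per-node=16  # 16 MPI processes per node, 64 in total
#SBATCH --mem-per-cpu=1024M
#SBATCH --time=0-01:00
srun ./mpi_program            # srun follows the geometry requested above
</pre>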
=== Hybrid jobs: MPI and OpenMP, or MPI and threads ===
To come | |||
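Pending that section, a minimal sketch of a hybrid MPI/OpenMP submission script might look like the following; the program name <code>hybrid_program</code> and the resource numbers are placeholders:

<pre>
#!/bin/bash
#SBATCH --ntasks=8            # 8 MPI processes
#SBATCH --cpus-per-task=4     # 4 cores for each process, to be used by OpenMP threads
#SBATCH --mem-per-cpu=1024M
#SBATCH --time=0-01:00
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK  # one OpenMP thread per allocated core
srun ./hybrid_program
</pre>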
=== MPI and GPUs ===
To come

=== Why srun instead of mpiexec or mpirun? ===
To come

=== External links ===
* [https://slurm.schedmd.com/sbatch.html sbatch] documentation
* [https://slurm.schedmd.com/srun.html srun] documentation
* [https://www.open-mpi.org/faq/?category=slurm Open MPI] and SLURM |