Advanced MPI scheduling

{{Draft}}


Most users should submit MPI or distributed memory parallel jobs as illustrated
at [[Running_jobs#MPI_job|Running jobs: MPI job]]. Simply request a number of
processes with <code>--ntasks</code> or <code>-n</code> and trust the scheduler
to allocate those processes in a way that balances the efficiency of your job
with the overall efficiency of the cluster.
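
For example, a minimal job script along these lines requests 64 processes and lets the scheduler place them where it sees fit. The program name and the resource figures below are placeholders, not recommendations:

<pre>
#!/bin/bash
#SBATCH --ntasks=64           # number of MPI processes
#SBATCH --mem-per-cpu=1024M   # memory per process
#SBATCH --time=0-01:00        # run time limit (D-HH:MM)
srun ./mpi_program            # srun launches one copy of the program per task
</pre>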
 
If you need more detailed control over how your job is allocated, then read on
to learn about SLURM's [https://slurm.schedmd.com/sbatch.html <code>sbatch</code>]
command and how its numerous options constrain the placement of processes.
 
<code>sbatch</code> options:
* -N, --nodes=
* -n, --ntasks=
* --ntasks-per-core=
* --ntasks-per-node=
* --ntasks-per-socket=
* --tasks-per-node=
* --threads-per-core=
* -c, --cpus-per-task=
* --mincpus=
* --cores-per-socket=
* --mem= and --mem-per-cpu=
* --exclusive[=user|mcs]
* --hint=[compute_bound|memory_bound|multithread|nomultithread]
* -m, --distribution=[arbitrary|<block|cyclic|plane]
* --mem_bind=
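
As an illustration of finer-grained control, the sketch below combines <code>--nodes</code> and <code>--ntasks-per-node</code> to place exactly 16 processes on each of two nodes; it assumes nodes with at least 16 cores and again uses a placeholder program name and placeholder resource figures:

<pre>
#!/bin/bash
#SBATCH --nodes=2             # ask for exactly two nodes
#SBATCH --ntasks-per-node=16  # start 16 MPI processes on each node
#SBATCH --mem-per-cpu=2048M   # memory per process
#SBATCH --time=0-03:00        # run time limit (D-HH:MM)
srun ./mpi_program            # 2 x 16 = 32 processes in total
</pre>
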
=== Hybrid jobs: MPI and OpenMP, or MPI and threads ===
To come
=== MPI and GPUs ===
To come
=== Why srun instead of mpiexec or mpirun? ===
To come
=== External links ===
* [https://slurm.schedmd.com/sbatch.html sbatch] documentation
* [https://slurm.schedmd.com/srun.html srun] documentation
* [https://www.open-mpi.org/faq/?category=slurm Open MPI] and SLURM
