Advanced MPI scheduling



=== Hybrid jobs: MPI and OpenMP, or MPI and threads === <!--T:7-->
It is important to understand that the number of ''tasks'' requested of Slurm is the number of ''processes'' that will be started by <code>srun</code>. So for a hybrid job that will use both MPI processes and lightweight processes such as OpenMP threads or POSIX threads, you should set the MPI process count with <code>--ntasks</code> or <code>--ntasks-per-node</code>, and set the thread count with <code>--cpus-per-task</code>.


 <!--T:8-->
 --ntasks=16
 --cpus-per-task=4
 --mem-per-cpu=3G
 srun application.exe
In this example a total of 64 cores will be allocated, but only 16 MPI processes (tasks) can and will be initialized. If the application also uses OpenMP, then each process will spawn 4 threads, one per core. Each process will be allocated 12GB of memory. The tasks, with 4 cores each, could be allocated anywhere, on anything from 2 up to 16 nodes.
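
If <code>application.exe</code> is an OpenMP program, the number of threads each process spawns is typically controlled with the <code>OMP_NUM_THREADS</code> environment variable. A minimal sketch, assuming <code>--cpus-per-task</code> was requested as above so that Slurm sets <code>SLURM_CPUS_PER_TASK</code> in the job environment, is to export it before the <code>srun</code> call:

 export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # one OpenMP thread per core allocated to each task
 srun application.exe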


 <!--T:9-->