Using GPUs with Slurm

<languages />
<translate>
== Introduction ==
To request one or more GPUs for a Slurm job, use this form:
  --gpus-per-node=[type:]number
The square-bracket notation means that you must specify the number of GPUs, and you may optionally specify the GPU type.  Choose a type from the "Available hardware" table below.  Here are two examples:
  --gpus-per-node=2
  --gpus-per-node=v100:1
The first example requests two GPUs per node, of any type available on the cluster.  The second example requests one GPU per node, with the GPU being of the V100 type.
The following form can also be used:
  --gres=gpu[[:type]:number]
This form is older; we expect that some future release of Slurm will no longer support it.  We recommend that you replace it in your scripts with the --gpus-per-node form shown above.
There are a variety of other directives that you can use to request GPUs and related resources: --gpus, --gpus-per-socket, --gpus-per-task, --mem-per-gpu, and --ntasks-per-gpu.  Please see the Slurm documentation for [https://slurm.schedmd.com/sbatch.html sbatch] for more about these.  Alliance staff have not tested many combinations of these, so if you try them and don't get the resources you expect or want, [[Technical support|contact support]].
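The same options are accepted by salloc for interactive sessions.  As a rough sketch (the account name is the same placeholder used in the scripts below, and the resource amounts are only illustrative), requesting one V100 GPU for an hour might look like:
  salloc --account=def-someuser --gpus-per-node=v100:1 --cpus-per-task=2 --mem=4000M --time=0-01:00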


<!--T:15-->
The compute capability is not a measure of performance.  It is relevant only if you are compiling your own GPU programs.  See the page on [[CUDA#.22Compute_Capability.22|CUDA programming]] for more.
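If you do compile your own CUDA code, the compute capability is what the compiler's architecture flag refers to.  For example, targeting a V100 (compute capability 7.0) might look roughly like this (the file names are only illustrative):
  nvcc -arch=sm_70 -o program program.cu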


== Mist == <!--T:38-->
[https://docs.scinet.utoronto.ca/index.php/Mist Mist] is a cluster comprised of IBM Power9 CPUs (not Intel x86!) and NVIDIA V100 GPUs. 
Users with access to Niagara can also access Mist.  To specify job requirements on Mist,
please see the specific instructions on the [https://docs.scinet.utoronto.ca/index.php/Mist#Submitting_jobs SciNet web site].
 
== Selecting the type of GPU to use == <!--T:16-->


<!--T:37-->
Some clusters have more than one GPU type available ([[Cedar]], [[Graham]], [[Hélios/en|Hélios]]), and some clusters only have GPUs on certain nodes ([[Béluga/en|Béluga]], [[Cedar]], [[Graham]]). You can select the type of GPU to use by supplying to Slurm the <i>type specifier</i> given in the table above, e.g.:

<!--T:39-->
 #SBATCH --gpus-per-node=p100:1


<!--T:40-->
If you do not supply a type specifier, Slurm may send your job to a node equipped with any type of GPU.
For certain workflows this may be undesirable.
For example, molecular dynamics code requires high double-precision performance, for which T4 GPUs are not appropriate.
In such a case, make sure you include a type specifier.
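For instance, on a cluster that offers both V100 and T4 GPUs, a double-precision-heavy job could be pinned to the V100 nodes with a typed request along these lines (the type name is taken from the hardware table above):
 #SBATCH --gpus-per-node=v100:1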


== Single-core job == <!--T:3-->
{{File
  |name=gpu_serial_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gpus-per-node=1         # Number of GPUs per node
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
./program                         # you can use 'nvidia-smi' for a test
}}


== Multi-threaded job == <!--T:4-->
For a GPU job which needs multiple CPUs in a single node:
{{File
  |name=gpu_threaded_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gpus-per-node=1         # Number of GPU(s) per node
#SBATCH --cpus-per-task=6         # CPU cores/threads
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program
}}
For each GPU requested on:
* Béluga, we recommend no more than 10 CPU cores.
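Following the Béluga guideline above, for example, a single-GPU multi-threaded job would request at most 10 cores per GPU; a minimal sketch of the relevant directives:
 #SBATCH --gpus-per-node=1
 #SBATCH --cpus-per-task=10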
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gpus=8                  # total number of GPUs
#SBATCH --ntasks-per-gpu=1        # total of 8 MPI processes
#SBATCH --cpus-per-task=6         # CPU cores per MPI process
#SBATCH --mem-per-cpu=5G          # host memory per CPU core
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=p100:2
#SBATCH --ntasks-per-node=32
#SBATCH --mem=127000M
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=p100:4
#SBATCH --ntasks-per-node=24
#SBATCH --exclusive
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=p100l:4
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24    # There are 24 CPU cores on P100 Cedar GPU nodes