Using GPUs with Slurm

For general advice on job scheduling, see Running jobs.

Available hardware

These are the GPUs currently available:

{| class="wikitable"
|-
! rowspan=2|Cluster !! rowspan=2| # of Nodes !! rowspan=2|Slurm type specifier !! colspan=3|Per node !! rowspan=2|GPU model !! rowspan=2|Compute Capability !! rowspan=2|GPU mem (GiB) !! rowspan=2|Notes
|-
!                              CPU cores !! CPU memory !! GPUs
|-
| Béluga            || 172 ||  v100 ||  40 || 191000M ||  4 || V100-SXM2 || 70 || 16 || All GPUs associated with the same CPU socket, connected via NVLink
|-
| rowspan=3|Cedar   || 114 ||  p100 ||  24 || 128000M ||  4 || P100-PCIE || 60 || 12 || Two GPUs per CPU socket
|-
|                      32   || p100l ||  24 || 257000M ||  4 || P100-PCIE || 60 || 16 || All GPUs associated with the same CPU socket
|-
|                      192  || v100l ||  32 || 192000M ||  4 || V100-SXM2 || 70 || 32 || Two GPUs per CPU socket; all GPUs connected via NVLink
|-
| rowspan=5|Graham  || 160 ||  p100 ||  32 || 127518M ||  2 || P100-PCIE || 60 || 12 || One GPU per CPU socket
|-
|                      7    ||  v100 ||  28 || 183105M ||  8 || V100-PCIE || 70 || 16 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
|                      2    || v100l ||  28 || 183105M ||  8 || V100-?    || 70 || 32 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
|                      30   ||  t4   ||  44 || 192000M ||  4 || Tesla T4  || 75 || 16 || Two GPUs per CPU socket
|-
|                      6    ||  t4   ||  16 || 192000M ||  4 || Tesla T4  || 75 || 16 ||
|-
| rowspan=2|Hélios  || 15  ||  k20   ||  20 || 110000M ||  8 || K20       || 35 ||  5 || Four GPUs per CPU socket
|-
|                      6    ||  k80  ||  24 || 257000M || 16 || K80       || 37 || 12 || Eight GPUs per CPU socket
|-
| Mist              || 54  || (none) ||  32 ||  256GiB ||  4 || V100-SXM2 || 70 || 32 || See [https://docs.scinet.utoronto.ca/index.php/Mist#Specifications Mist specifications]
|-
| Narval            || 159 ||  a100  ||  48 || 510000M ||  4 || A100      || 80 || 40 || Two GPUs per CPU socket; all GPUs connected via NVLink
|-
| Arbutus           ||  colspan=9 | Cloud resources are not schedulable via Slurm. See [[Cloud resources]] for details of available hardware.
|}

Specifying the type of GPU to use

Some clusters have more than one GPU type available (Cedar, Graham, Hélios), and some clusters only have GPUs on certain nodes (Béluga, Cedar, Graham). You can choose the type of GPU to use by supplying Slurm with the type specifier given in the table above, e.g.:

#SBATCH --gres=gpu:p100:1

If you do not supply a type specifier, Slurm may send your job to a node equipped with any type of GPU. For certain workflows this may be undesirable; for example, molecular dynamics codes require high double-precision performance, so T4 GPUs are not appropriate. In such cases, make sure you include a type specifier.
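
For instance, a minimal sketch of a job script that requests a single V100 GPU on Cedar by its type specifier; the account name, CPU, memory and time values below are placeholders rather than recommendations:

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:v100l:1        # one V100 (32 GiB) GPU, selected by type specifier
#SBATCH --cpus-per-task=8         # placeholder CPU request
#SBATCH --mem=32000M              # placeholder memory request (per node)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
./program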

Mist

Mist is a cluster composed of IBM Power9 CPUs (not Intel x86!) and NVIDIA V100 GPUs. Users with access to Niagara can also access Mist. To specify job requirements on Mist, please see the specific instructions on the SciNet web site.

Single-core job

If you need only a single CPU core and one GPU:

File : gpu_serial_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPUs (per node)
#SBATCH --mem=4000M               # memory (per node)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
./program                         # you can use 'nvidia-smi' for a test


Multi-threaded job

For GPU jobs asking for multiple CPUs on a single node:

File : gpu_threaded_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPU(s) per node
#SBATCH --cpus-per-task=6         # CPU cores/threads
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program


For each GPU requested on:

  • Béluga, we recommend no more than 10 CPU cores.
  • Cedar, we recommend no more than 6 CPU cores per P100 GPU (p100 and p100l) and no more than 8 CPU cores per V100 GPU (v100l).
  • Graham, we recommend no more than 16 CPU cores.
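
Following the Béluga recommendation above, for example, a minimal sketch of a single-GPU job that pairs the GPU with 10 CPU cores; the memory and time values are placeholders:

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # one GPU per node
#SBATCH --cpus-per-task=10        # at most 10 CPU cores per GPU recommended on Béluga
#SBATCH --mem=40G                 # placeholder memory request (per node)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program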

MPI job

File : gpu_mpi_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:4              # Number of GPUs per node
#SBATCH --nodes=2                 # Number of nodes
#SBATCH --ntasks=48               # Number of MPI processes
#SBATCH --cpus-per-task=1         # CPU cores per MPI process
#SBATCH --mem=120G                # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./program


Whole nodes

If your application can efficiently use an entire node and its associated GPUs, you will probably experience shorter wait times if you ask Slurm for a whole node. Use one of the following job scripts as a template.

Requesting a GPU node on Graham

File : graham_gpu_node_job.sh

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --ntasks-per-node=32
#SBATCH --mem=127000M
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi


Requesting a P100 GPU node on Cedar

File : cedar_gpu_node_job.sh

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:p100:4
#SBATCH --ntasks-per-node=24
#SBATCH --exclusive
#SBATCH --mem=125G
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi


Requesting a P100-16G GPU node on Cedar

There is a special group of GPU nodes on Cedar which have four Tesla P100 16GB cards each. (Other P100 GPUs in the cluster have 12GB and the V100 GPUs have 32GB.) The GPUs in a P100L node all use the same PCI switch, so inter-GPU communication latency is lower, but bandwidth between CPU and GPU is lower than on the regular GPU nodes. The nodes also have 256GB of RAM. These nodes may only be requested as whole nodes, so you must specify --gres=gpu:p100l:4. P100L GPU jobs on Cedar may run for up to 28 days.


File : p100l_gpu_job.sh

#!/bin/bash
#SBATCH --nodes=1 
#SBATCH --gres=gpu:p100l:4   
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24    # There are 24 CPU cores on P100 Cedar GPU nodes
#SBATCH --mem=0               # Request the full memory of the node
#SBATCH --time=3:00
#SBATCH --account=def-someuser
hostname
nvidia-smi


Packing single-GPU jobs within one Slurm job

If you need to run four single-GPU programs or two 2-GPU programs for longer than 24 hours, GNU Parallel is recommended. A simple example is given below:

cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'

In this example, the GPU ID is calculated by subtracting 1 from the slot ID {%}, and {#} is the sequence number of the job, starting from 1.

The params.input file should contain one set of input parameters per line, like this:

code1.py
code2.py
code3.py
code4.py
...

With this method, users can run multiple tasks in one submission. The -j4 parameter means GNU Parallel can run a maximum of four concurrent tasks, launching another as soon as each one ends. CUDA_VISIBLE_DEVICES is used to ensure that two tasks do not try to use the same GPU at the same time.
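
In practice, the parallel command is placed inside a job script that reserves a matching number of GPUs. A minimal sketch for four single-GPU tasks on one node is shown below; the CPU, memory and time values are placeholders, and the way GNU Parallel is made available (e.g. via a module) may differ between clusters:

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --nodes=1
#SBATCH --gres=gpu:4              # four GPUs, one per concurrent task
#SBATCH --cpus-per-task=4         # placeholder CPU request shared by the four tasks
#SBATCH --mem=64G                 # placeholder memory request (per node)
#SBATCH --time=2-00:00            # time (DD-HH:MM); a run longer than 24 hours, as above
# Each concurrent task is pinned to its own GPU via CUDA_VISIBLE_DEVICES, as explained above.
cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'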