Using GPUs with Slurm

<languages />
<translate>
= Introduction = <!--T:56-->
<!--T:57-->
To request one or more GPUs for a Slurm job, use this form:
  --gpus-per-node=[type:]number
<!--T:58-->
The square-bracket notation means that you must specify the number of GPUs, and you may optionally specify the GPU type.  Choose a type from the <i>Available GPUs</i> table below.  Here are two examples:
  --gpus-per-node=2
  --gpus-per-node=v100:1
<!--T:59-->
The first line requests two GPUs per node, of any type available on the cluster.  The second line requests one GPU per node, with the GPU being of the V100 type.
<!--T:60-->
The following form can also be used:
  --gres=gpu[[:type]:number]
This form is older, and we expect it will no longer be supported in some future release of Slurm.  We recommend that you replace it in your scripts with the <code>--gpus-per-node</code> form above.
<!--T:61-->
There are a variety of other directives that you can use to request GPU resources: <code>--gpus</code>, <code>--gpus-per-socket</code>, <code>--gpus-per-task</code>, <code>--mem-per-gpu</code>, and <code>--ntasks-per-gpu</code>.  Please see the Slurm documentation for [https://slurm.schedmd.com/sbatch.html sbatch] for more about these.  Our staff did not test all the combinations; if you don't get the result you expect, [[Technical support|contact technical support]].
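As an illustration only (these options are documented by Slurm, but our staff have not tested every combination), a job could ask for two GPUs anywhere on the cluster, one task per GPU, and memory allocated per GPU:
  #SBATCH --gpus=2
  #SBATCH --ntasks-per-gpu=1
  #SBATCH --mem-per-gpu=8G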


<!--T:15-->
For general advice on job scheduling, see [[Running jobs]].

= Available GPUs = <!--T:1-->
These are the GPUs currently available:
{| class="wikitable"
{| class="wikitable"
|-
|-
! # of Nodes !! Cluster !! Type specifier !! CPU cores !! CPU memory !! GPUs per node !! GPU model !! Topology
! rowspan=2|Cluster !! rowspan=2| # of nodes !! rowspan=2|Slurm type<br>specifier !! colspan=3|Per node !! rowspan=2|GPU model  !! rowspan=2|Compute<br>Capability(*) !! rowspan=2|GPU mem<br>(GiB) !! rowspan=2|Notes
|-
|-
| 172 || Béluga  || -    || 40 || 191000M ||  4 || V100-SXM2-16GB || All GPUs associated with the same CPU socket
!                              CPU cores !! CPU memory !! GPUs  
|-
|-
| 114 || Cedar  || P100 || 24 || 128000M ||  4 || P100-PCIE-12GB || Two GPUs per CPU socket
| Béluga            || 172 ||  v100 || 40 || 191000M ||  4 || V100-16gb || 70 || 16 || All GPUs associated with the same CPU socket, connected via NVLink and SXM2
|-
|-
| 32  || Cedar  || P100L || 24 || 257000M ||  4 || P100-PCIE-16GB || All GPUs associated with the same CPU socket
| rowspan=3|Cedar  || 114 ||  p100 || 24 || 128000M ||  4 || P100-12gb || 60 || 12 || Two GPUs per CPU socket, connected via PCIe
|-
|-
| 192 || Cedar  || V100L || 32 || 192000M ||  4 || V100-PCIE-32GB || Two GPUs per CPU socket; all GPUs connected via NVLink
|                     32  || p100l || 24 || 257000M ||  4 || P100-16gb || 60 || 16 || All GPUs associated with the same CPU socket, connected via PCIe
|-
|-
| 160 || Graham  || P100 || 32 || 127518M ||  2 || P100-PCIE-12GB || One GPU per CPU socket
|                     192 || v100l ||  32 || 192000M ||  4 || V100-32gb || 70 || 32 || Two GPUs per CPU socket; all GPUs connected via NVLink and SXM2
|-
|-
| 7  || Graham  || V100 || 28 || 183105M ||  8 || V100-PCIE-16GB || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
| rowspan=5|Graham  || 160 || p100 || 32 || 127518M ||  2 || P100-12gb || 60 || 12 || One GPU per CPU socket, connected via PCIe
|-
|-
| 30  || Graham || T4    || 44 || 192000M || 4 || Tesla T4 16GB  || Two GPUs per CPU socket
|                     7  || v100(**)    ||  28 || 183105M || 8 || V100-16gb || 70 || 16 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
|-
| 15 || Hélios || K20   || 20 || 110000M ||  8 || K20 5GB        || Four GPUs per CPU socket
|                     2  || v100(***) || 28 || 183105M ||  8 || V100-32gb || 70 || 32 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
|                     30 || t4   || 44 || 192000M ||  4 || T4-16gb  || 75 || 16 || Two GPUs per CPU socket
|-
|                      6  ||  t4  ||  16 || 192000M ||  4 || T4-16gb  || 75 || 16 || &nbsp;
|-
| Mist              || 54  || (none) || 32 ||  256GiB ||  4 || V100-32gb || 70 || 32 || See [https://docs.scinet.utoronto.ca/index.php/Mist#Specifications Mist specifications]
|-  
|-  
| 6   || Hélios  || K80  || 24 || 257000M || 16 || K80 12GB      || Eight GPUs per CPU socket
| Narval            || 159 || a100   || 48 || 510000M || 4 || A100-40gb || 80 || 40 || Two GPUs per CPU socket; all GPUs connected via NVLink
|-  
|-
| 54  || Mist    || -    || 32 ||  256GB ||  4 || V100-SXM2-32GB || See [https://docs.scinet.utoronto.ca/index.php/Mist#Specifications Mist specifications]
| Arbutus          ||  colspan=8 | Cloud resources are not schedulable via Slurm. See [[Cloud resources]] for details of available hardware.
|}
|}


<!--T:55-->
(*) <b>Compute Capability</b> is a technical term created by NVIDIA as a compact way to describe which hardware features are available on some GPU models and not on others.
It is not a measure of performance and is relevant only if you are compiling your own GPU programs.  See the page on [[CUDA#.22Compute_Capability.22|CUDA programming]] for more.
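For example, the V100 GPUs listed above have compute capability 7.0, so if you compile your own CUDA code for those nodes you might pass a matching architecture flag to <code>nvcc</code>. This is only an illustration; see the [[CUDA]] page for the recommended modules and flags:
  module load cuda
  nvcc -arch=sm_70 -o my_program my_program.cu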
<!--T:64-->
(**) To access the 16GB flavor of the V100 on Graham, use the following argument in your sbatch/salloc command: <code>--constraint=skylake,v100</code>.

<!--T:70-->
(***) To access the 32GB flavor of the V100 on Graham, use the following argument in your sbatch/salloc command: <code>--constraint=cascade,v100</code>.

== Mist == <!--T:38-->
[https://docs.scinet.utoronto.ca/index.php/Mist Mist] is a cluster composed of IBM Power9 CPUs (not Intel x86!) and NVIDIA V100 GPUs.
Users with access to Niagara can also access Mist.  To specify job requirements on Mist,
please see the specific instructions on the [https://docs.scinet.utoronto.ca/index.php/Mist#Submitting_jobs SciNet website].

== Multi-Instance GPU (MIG) on Narval == <!--T:71-->
MIG, a technology that allows a GPU to be partitioned into multiple instances, is currently enabled on the Narval cluster as a pilot project. For more information on how to use MIG on Narval, please see [[Multi-Instance_GPU]].

= Selecting the type of GPU to use = <!--T:16-->

<!--T:37-->
Some clusters have more than one GPU type available ([[Cedar]], [[Graham]]), and some clusters only have GPUs on certain nodes ([[Béluga/en|Béluga]], [[Cedar]], [[Graham]]).

<!--T:40-->
If you do not supply a type specifier, Slurm may send your job to a node equipped with any type of GPU.
For certain workflows this may be undesirable; for example, molecular dynamics code requires high double-precision performance, for which T4 GPUs are not appropriate.
In such a case, make sure you include a type specifier.
= Examples = <!--T:62-->

== Single-core job == <!--T:3-->
If you need only a single CPU core and one GPU:
{{File
  |name=gpu_serial_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gpus-per-node=1        # Number of GPUs (per node)
#SBATCH --mem=4000M              # memory per node
#SBATCH --time=0-03:00           # time (DD-HH:MM)
./program                        # you can use 'nvidia-smi' for a test
}}
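Submit the script with <code>sbatch</code>; when the job starts, the program's output (for a quick test, the output of <code>nvidia-smi</code>) appears in the usual Slurm output file:
  sbatch gpu_serial_job.sh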


== Multi-threaded job == <!--T:4-->
For a GPU job which needs multiple CPUs in a single node:
{{File
  |name=gpu_threaded_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gpus-per-node=1        # Number of GPU(s) per node
#SBATCH --cpus-per-task=6        # CPU cores/threads
#SBATCH --mem=4000M              # memory per node
#SBATCH --time=0-03:00           # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program
}}

<!--T:63-->
For each GPU requested, we recommend
* on Béluga, no more than 10 CPU cores;
* on Cedar,
** no more than 6 CPU cores per P100 GPU (p100 and p100l);
** no more than 8 CPU cores per V100 GPU (v100l);
* on Graham, no more than 16 CPU cores.


== MPI job == <!--T:5-->
{{File
  |name=gpu_mpi_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gpus=8                  # total number of GPUs
#SBATCH --ntasks-per-gpu=1        # total of 8 MPI processes
#SBATCH --cpus-per-task=6         # CPU cores per MPI process
#SBATCH --mem-per-cpu=5G          # host memory per CPU core
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun --cpus-per-task=$SLURM_CPUS_PER_TASK ./program
}}
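If you prefer to spell out the number of nodes instead of the total number of GPUs, an equivalent request might look like the following. This is a sketch only; we have not tested every combination of these directives:
  #SBATCH --nodes=2
  #SBATCH --gpus-per-node=4
  #SBATCH --ntasks-per-gpu=1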


== Whole nodes ==
If your application can efficiently use an entire node and its associated GPUs, you will probably experience shorter wait times if you ask Slurm for a whole node. Use one of the following job scripts as a template.

=== Requesting a GPU node on Graham ===
{{File
  |name=graham_gpu_node_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=p100:2
#SBATCH --ntasks-per-node=32
#SBATCH --mem=127000M
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi
}}

=== Requesting a P100 GPU node on Cedar ===
{{File
  |name=cedar_gpu_node_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=p100:4
#SBATCH --ntasks-per-node=24
#SBATCH --exclusive
#SBATCH --mem=125G
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi
}}

=== Requesting a P100-16G GPU node on Cedar ===


<!--T:10-->
There is a special group of GPU nodes on [[Cedar]] which have four Tesla P100 16GB cards each (other P100 GPUs on the cluster have 12GB and the V100 GPUs have 32GB). The GPUs in a P100L node all use the same PCI switch, so the inter-GPU communication latency is lower, but the bandwidth between CPU and GPU is lower than on the regular GPU nodes. The nodes also have 256GB RAM. You may only request these nodes as whole nodes, therefore you must request all four GPUs, i.e. <code>--gpus-per-node=p100l:4</code> (or the older <code>--gres=gpu:p100l:4</code>). P100L GPU jobs of up to 28 days can be run on Cedar.


<!--T:11-->
{{File
  |name=p100l_gpu_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gpus-per-node=p100l:4
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24    # There are 24 CPU cores on P100 Cedar GPU nodes
#SBATCH --mem=0               # Request the full memory of the node
#SBATCH --time=3:00
#SBATCH --account=def-someuser
hostname
nvidia-smi
}}

== Packing single-GPU jobs within one SLURM job ==


<!--T:13-->
If you need to run four single-GPU programs or two 2-GPU programs for longer than 24 hours, [[GNU Parallel]] is recommended. A simple example is:
<pre>
cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'
</pre>
In this example, the GPU ID is calculated by subtracting 1 from the slot ID {%}, and {#} is the job ID, starting from 1.


<!--T:14-->
A <code>params.input</code> file should include input parameters in each line, like this:
<pre>
code1.py
code2.py
code3.py
code4.py
...
</pre>
With this method, you can run multiple tasks in one submission. The <code>-j4</code> parameter means that GNU Parallel can run a maximum of four concurrent tasks, launching another as soon as one ends. CUDA_VISIBLE_DEVICES is used to ensure that two tasks do not try to use the same GPU at the same time.
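The <code>parallel</code> command above is meant to run inside an ordinary GPU job. As a sketch only (the file name, account and resource amounts below are placeholders to adapt), a submission script packing four single-GPU tasks on one node could look like this:
{{File
  |name=gpu_packing_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --nodes=1
#SBATCH --gpus-per-node=4        # one GPU per concurrent task
#SBATCH --cpus-per-task=4        # at least one CPU core per concurrent task
#SBATCH --mem=16G
#SBATCH --time=1-00:00
cat params.input {{!}} parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'
}}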
 
== Profiling GPU tasks == <!--T:65-->
 
<!--T:66-->
On [[Béluga/en|Béluga]] and [[Narval/en|Narval]], the
[https://developer.nvidia.com/dcgm NVIDIA Data Center GPU Manager (DCGM)]
must be disabled while you profile a GPU task, and this has to be requested when the job is submitted.
Building on the simplest example on this page, the <code>--export</code>
option is used to set the <code>DISABLE_DCGM</code> environment variable:
 
<!--T:67-->
{{File
  |name=gpu_profiling_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --export=ALL,DISABLE_DCGM=1
#SBATCH --gpus-per-node=1
#SBATCH --mem=4000M              # memory per node
#SBATCH --time=0-03:00
 
<!--T:68-->
# Wait until DCGM is disabled on the node
while [ ! -z "$(dcgmi -v {{!}} grep 'Hostengine build info:')" ]; do
  sleep 5;
done
 
<!--T:69-->
./profiler arg1 arg2 ...          # Edit this line. Nvprof can be used
}}
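For instance, with a CUDA program the placeholder line in the script above could be replaced by a call to <code>nvprof</code>. This is one possibility among several; newer CUDA toolkits provide the Nsight tools instead:
  nvprof ./program arg1 arg2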
For more details on profilers, see [[Debugging and profiling]].


<!--T:54-->
[[Category:SLURM]]
</translate>
