Using GPUs with Slurm
For general advice on job scheduling, see Running jobs.
Available hardware
These are the node types containing GPUs currently available on Béluga, Cedar, Graham, Hélios and Niagara:
# of Nodes | Node type | CPU cores | CPU memory | # of GPUs | NVIDIA GPU type | PCIe bus topology
---|---|---|---|---|---|---
172 | Béluga V100 GPU | 40 | 191000M | 4 | V100-SXM2-16GB | All GPUs associated with the same CPU socket
114 | Cedar P100 GPU | 24 | 128000M | 4 | P100-PCIE-12GB | Two GPUs per CPU socket
32 | Cedar P100L GPU | 24 | 257000M | 4 | P100-PCIE-16GB | All GPUs associated with the same CPU socket
192 | Cedar V100L GPU | 32 | 192000M | 4 | V100-PCIE-32GB | Two GPUs per CPU socket; all GPUs connected via NVLink
160 | Graham Base GPU | 32 | 127518M | 2 | P100-PCIE-12GB | One GPU per CPU socket
7 | Graham V100 GPU | 28 | 183105M | 8 | V100-PCIE-16GB | Four GPUs per CPU socket
36 | Graham T4 GPU | 16 | 196608M | 4 | Tesla T4 16GB | Two GPUs per CPU socket
15 | Hélios K20 | 20 | 110000M | 8 | K20 5GB | Four GPUs per CPU socket
6 | Hélios K80 | 24 | 257000M | 16 | K80 12GB | Eight GPUs per CPU socket
54 | Niagara IBM AC922 | 32 Power9 | 256GB | 4 | V100-SXM2-32GB | All GPUs connected via NVLink
Specifying the type of GPU to use
Most clusters have multiple types of GPUs available. You can specify which type to use by adding it to the --gres option, in the form --gres=gpu:[type]:[count]. The following types are available:
On Cedar
You can request a 12G P100 using
#SBATCH --gres=gpu:p100:1
or a 16G P100 using
#SBATCH --gres=gpu:p100l:1
or a 32G V100 using
#SBATCH --gres=gpu:v100l:1
Unless a type is specified, all GPU jobs requesting 125G of memory or less will run on the 12GB P100s.
On Graham
You can request a P100 using
#SBATCH --gres=gpu:p100:1
or a V100 using
#SBATCH --gres=gpu:v100:1
or a T4 using
#SBATCH --gres=gpu:t4:1
Unless a type is specified, all GPU jobs will run on a P100.
On Béluga
Béluga has only one type of GPU, so there is no need to add a type specifier.
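A single GPU can be requested there without a type, for example:
#SBATCH --gres=gpu:1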
On Hélios
You can request a K20 using
#SBATCH --gres=gpu:k20:1
or a K80 using
#SBATCH --gres=gpu:k80:1
Single-core job
If you need only a single CPU core and one GPU:
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1 # Number of GPUs (per node)
#SBATCH --mem=4000M # memory (per node)
#SBATCH --time=0-03:00 # time (DD-HH:MM)
./program # you can use 'nvidia-smi' for a test
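If this script is saved as, say, gpu_job.sh (the file name is only an example), it can be submitted with:
sbatch gpu_job.sh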
Multi-threaded job
For a GPU job that needs multiple CPU cores on a single node:
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1 # Number of GPU(s) per node
#SBATCH --cpus-per-task=6 # CPU cores/threads
#SBATCH --mem=4000M # memory per node
#SBATCH --time=0-03:00 # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program
For each GPU requested on:
- Béluga, we recommend no more than 10 CPU cores (see the sketch after this list).
- Cedar, we recommend no more than 6 CPU cores per P100 GPU (p100 and p100l) and no more than 8 CPU cores per V100 GPU (v100l).
- Graham, we recommend no more than 16 CPU cores.
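For example, a Béluga job following this guideline would pair each GPU with at most ten CPU cores; a minimal sketch of the relevant lines:
#SBATCH --gres=gpu:1          # one GPU
#SBATCH --cpus-per-task=10    # at most 10 CPU cores per GPU on Béluga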
MPI job
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:4 # Number of GPUs per node
#SBATCH --nodes=2 # Number of nodes
#SBATCH --ntasks=48              # Total number of MPI processes
#SBATCH --cpus-per-task=1 # CPU cores per MPI process
#SBATCH --mem=120G # memory per node
#SBATCH --time=0-03:00 # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./program
Whole nodes
If your application can efficiently use an entire node and its associated GPUs, you will probably experience shorter wait times if you ask Slurm for a whole node. Use one of the following job scripts as a template.
Requesting a GPU node on Graham
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --ntasks-per-node=32
#SBATCH --mem=127000M
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi
Requesting a P100 GPU node on Cedar
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:p100:4
#SBATCH --ntasks-per-node=24
#SBATCH --exclusive
#SBATCH --mem=125G
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi
Requesting a P100-16G GPU node on Cedar
There is a special group of GPU nodes on Cedar which have four Tesla P100 16GB cards each. (Other P100 GPUs in the cluster have 12GB and the V100 GPUs have 32GB.) The GPUs in a P100L node all use the same PCI switch, so inter-GPU communication latency is lower, but bandwidth between CPU and GPU is lower than on the regular GPU nodes. The nodes also have 256GB RAM. You may only request these nodes as whole nodes, therefore you must specify --gres=gpu:p100l:4. P100L jobs can run for up to 28 days on Cedar.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:p100l:4
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24 # There are 24 CPU cores on P100 Cedar GPU nodes
#SBATCH --mem=0 # Request the full memory of the node
#SBATCH --time=3:00
#SBATCH --account=def-someuser
hostname
nvidia-smi
Packing single-GPU jobs within one SLURM job
If you need to run four single-GPU programs or two 2-GPU programs for longer than 24 hours, GNU Parallel is recommended. A simple example is given below:
cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'
In this example the GPU ID is calculated by subtracting 1 from the slot ID {%}. {#} is the job ID, starting from 1.
The params.input file should contain one set of input parameters per line, like this:
code1.py
code2.py
code3.py
code4.py
...
With this method, users can run multiple tasks in one submission. The -j4 parameter means GNU Parallel can run a maximum of four concurrent tasks, launching another as soon as each one ends. CUDA_VISIBLE_DEVICES is used to ensure that two tasks do not try to use the same GPU at the same time.
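For the case of two 2-GPU programs mentioned above, the same pattern can be adapted so that each GNU Parallel slot is given a pair of GPUs; this is a sketch along the same lines, with the GPU pair derived from the slot ID:
cat params.input | parallel -j2 'CUDA_VISIBLE_DEVICES=$((({%}-1)*2)),$((({%}-1)*2+1)) python {} &> {#}.out'
Here slot 1 sees GPUs 0 and 1 while slot 2 sees GPUs 2 and 3, so the two tasks never share a GPU.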