Using GPUs with Slurm

For general advice on job scheduling, see Running jobs.

Available hardware

These are the node types containing GPUs currently available on Hélios, Béluga, Cedar and Graham:

# of Nodes | Node type       | CPU cores | CPU memory | # of GPUs | GPU type              | PCIe bus topology
172        | Béluga Base GPU | 40        | 191000M    | 4         | NVIDIA V100-SXM2-16GB | All GPUs associated with the same CPU socket
114        | Cedar Base GPU  | 24        | 128000M    | 4         | NVIDIA P100-PCIE-12GB | Two GPUs per CPU socket
32         | Cedar Large GPU | 24        | 257000M    | 4         | NVIDIA P100-PCIE-16GB | All GPUs associated with the same CPU socket
160        | Graham Base GPU | 32        | 127518M    | 2         | NVIDIA P100-PCIE-12GB | One GPU per CPU socket
7          | Graham Base GPU | 28        | 183105M    | 8         | NVIDIA V100-PCIE-16GB | Four GPUs per CPU socket
36         | Graham Base GPU | 16        | 196608M    | 4         | NVIDIA Tesla T4 16GB  | Two GPUs per CPU socket
15         | Hélios          | 20        | 110000M    | 8         | NVIDIA K20 5GB        | Four GPUs per CPU socket
6          | Hélios          | 24        | 257000M    | 16        | NVIDIA K80 12GB       | Eight GPUs per CPU socket
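The scripts in the following sections simply request a number of GPUs with --gres=gpu:<count> and let the scheduler place the job on any suitable node. Slurm can also accept a GPU type in the request where the cluster defines one; the type string used below (v100) is an assumption to be checked against the cluster's configuration, and the filename is arbitrary. A minimal sketch:

File : gpu_type_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:v100:1         # assumed type string: one V100 GPU, if the cluster defines this type
#SBATCH --mem=4000M               # memory (per node)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
nvidia-smi                        # report which GPU was assigned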

Single-core job

If you need only a single CPU core and one GPU:

File : gpu_serial_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPUs (per node)
#SBATCH --mem=4000M               # memory (per node)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
./program                         # you can use 'nvidia-smi' for a test
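
Once the script is saved, it is submitted and monitored with the usual Slurm commands; the commands below assume the filename used in the example above, and <jobid> is a placeholder for the job ID that sbatch prints.

sbatch gpu_serial_job.sh          # submit the job script
squeue -u $USER                   # list your jobs and their states
sacct -j <jobid>                  # after completion, show accounting information for the job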


Multi-threaded job

For GPU jobs that need multiple CPU cores on a single node:

File : gpu_threaded_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPU(s) per node
#SBATCH --cpus-per-task=6         # CPU cores/threads
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program


For each GPU requested on:

  • Béluga, we recommend no more than 10 CPU cores.
  • Cedar, we recommend no more than 6 CPU cores.
  • Graham, we recommend no more than 16 CPU cores.
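
These ratios scale with the number of GPUs requested: for example, a two-GPU multi-threaded job on Béluga would ask for at most 2 × 10 = 20 CPU cores. The sketch below adapts the threaded example above to that case; the account name and memory value are placeholders, not recommended settings.

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:2              # two GPUs on one node
#SBATCH --cpus-per-task=20        # at most 10 cores per GPU on Béluga
#SBATCH --mem=8000M               # memory per node (placeholder value)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program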

MPI job

File : gpu_mpi_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:4              # Number of GPUs per node
#SBATCH --nodes=2                 # Number of nodes
#SBATCH --ntasks=48               # Number of MPI processes
#SBATCH --cpus-per-task=1         # CPU cores per MPI process
#SBATCH --mem=120G                # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./program
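
If the MPI program does not select a GPU itself, each rank may need to be pinned to one of the node's GPUs. One common approach (a sketch, not part of the original example) is a small wrapper script that sets CUDA_VISIBLE_DEVICES from SLURM_LOCALID, the rank's index on its own node, which srun exports for every task:

#!/bin/bash
# gpu_bind.sh -- hypothetical wrapper: map each MPI rank to one GPU on its node
GPUS_PER_NODE=4                                    # matches --gres=gpu:4 above
export CUDA_VISIBLE_DEVICES=$(( SLURM_LOCALID % GPUS_PER_NODE ))
exec ./program "$@"

The job script would then call srun ./gpu_bind.sh instead of srun ./program. With 24 ranks per node and 4 GPUs, several ranks share each GPU; whether that is efficient depends on the application.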


Whole nodes

If your application can efficiently use an entire node and its associated GPUs, you will probably experience shorter wait times if you ask Slurm for a whole node. Use one of the following job scripts as a template.

Scheduling a GPU node at Graham

File : graham_gpu_node_job.sh

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --ntasks-per-node=32
#SBATCH --mem=127000M
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi


Scheduling a Base GPU node at Cedar

File : cedar_gpu_node_job.sh

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:4
#SBATCH --ntasks-per-node=24
#SBATCH --exclusive
#SBATCH --mem=125G
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi


Scheduling a Large GPU node at Cedar

There is a special group of large-memory GPU nodes at Cedar which have four Tesla P100 16GB cards each (other GPUs in the cluster have 12GB). The GPUs in a large-memory node all use the same PCI switch, so the inter-GPU communication latency is lower, but the bandwidth between CPU and GPU is lower than on the regular GPU nodes. These nodes also have 256GB of RAM instead of 128GB. They may only be requested as whole nodes, so you must specify --gres=gpu:lgpu:4. The maximum run time for the large-memory GPU nodes at Cedar used to be 24 hours; this limit no longer applies, and large GPU jobs of up to 28 days can be run on Cedar.


File : large_gpu_job.sh

#!/bin/bash
#SBATCH --nodes=1 
#SBATCH --gres=gpu:lgpu:4   
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24    # There are 24 CPU cores on Cedar GPU nodes
#SBATCH --mem=0               # Request the full memory of the node
#SBATCH --time=3:00
#SBATCH --account=def-someuser
hostname
nvidia-smi


Packing single-GPU jobs within one SLURM job

If you need to run four single-GPU programs or two 2-GPU programs for longer than 24 hours, GNU Parallel is recommended. A simple example is given below:

cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'

In this example the GPU ID is calculated by subtracting 1 from the slot ID {%}, and {#} is GNU Parallel's job sequence number, starting from 1, so each task writes to its own output file.

The params.input file should list one set of input parameters per line, for example:

code1.py
code2.py
code3.py
code4.py
...

With this method, users can run multiple tasks in one submission. The -j4 parameter means GNU Parallel can run a maximum of four concurrent tasks, launching another as soon as each one ends. CUDA_VISIBLE_DEVICES is used to ensure that two tasks do not try to use the same GPU at the same time.
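
To run this inside a batch job, the parallel command can be embedded in a submission script that reserves the GPUs and CPU cores for all four tasks. The sketch below assumes a single node with four GPUs and that GNU Parallel is available in the environment (on some clusters it must first be loaded as a module); the filename, memory, and time values are placeholders to adapt.

File : gpu_packing_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --nodes=1
#SBATCH --gres=gpu:4              # one GPU per concurrent task
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4         # placeholder: one CPU core per task
#SBATCH --mem=16000M              # placeholder: memory for all four tasks
#SBATCH --time=1-00:00            # time (DD-HH:MM)
# module load gnu-parallel        # uncomment if GNU Parallel is provided as a module
cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'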