Using GPUs with Slurm

For general information on job scheduling, see Running jobs.

Available nodes

The following table lists the GPU nodes currently available on Cedar and Graham.

{| class="wikitable"
|-
! # of Nodes !! Node type !! CPU cores !! CPU memory !! # of GPUs !! GPU type !! PCIe bus topology
|-
| 114 || Cedar Base GPU || 24 || 128GB || 4 || NVIDIA P100-PCIE-12GB || Two GPUs per CPU socket
|-
| 32 || Cedar Large GPU || 24 || 256GB || 4 || NVIDIA P100-PCIE-16GB || All GPUs under same CPU socket
|-
| 160 || Graham Base GPU || 32 || 128GB || 2 || NVIDIA P100-PCIE-12GB || One GPU per CPU socket
|}

Single-core job

If you need only a single CPU core and one GPU:

File : gpu_serial_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPUs (per node)
#SBATCH --mem=4000M               # memory (per node)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
./program
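
Like any other Slurm job script, the example above is submitted with sbatch:

sbatch gpu_serial_job.sh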


Multi-threaded job

For a GPU job that uses multiple CPU cores on a single node:

File : gpu_threaded_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPU(s) per node
#SBATCH --cpus-per-task=6         # CPU cores/threads
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program


On Cedar, we recommend that multi-threaded jobs use no more than 6 CPU cores for each GPU requested. On Graham, we recommend no more than 16 CPU cores for each GPU.
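
As an illustration of the Graham recommendation, the threaded example above could be adapted as follows. This is only a sketch: the file name is arbitrary, and the account and ./program are placeholders as in the other examples.

File : graham_gpu_threaded_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPU(s) per node
#SBATCH --cpus-per-task=16        # Up to 16 CPU cores per GPU on Graham
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program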

MPI job

File : gpu_mpi_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:4              # Number of GPUs per node
#SBATCH --nodes=2                 # Number of nodes
#SBATCH --ntasks=48               # Number of MPI processes
#SBATCH --cpus-per-task=1         # CPU cores per MPI process
#SBATCH --mem=120G                # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./program


Whole nodes

If your application can efficiently use an entire node and its associated GPUs, you will probably experience shorter wait times if you ask Slurm for a whole node. Use one of the following job scripts as a template.

Scheduling a GPU node at Graham

File : graham_gpu_node_job.sh

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --ntasks-per-node=32
#SBATCH --mem=128000M
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi


Scheduling a Base GPU node at Cedar

File : cedar_gpu_node_job.sh

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:4
#SBATCH --exclusive
#SBATCH --mem=125G
#SBATCH --time=3:00
#SBATCH --account=def-someuser
nvidia-smi


Scheduling a Large GPU node at Cedar

There is a special group of large-memory GPU nodes at Cedar which have four Tesla P100 16GB cards each. (Other GPUs in the cluster have 12GB.) These GPUs all sit on the same PCIe switch, so inter-GPU communication latency is lower, but bandwidth between CPU and GPU is lower than on the regular GPU nodes. The nodes also have 256GB of RAM instead of 128GB. To use these nodes you must request the lgpu GPU type, as in the script below. By-GPU requests on these nodes can only run for up to 24 hours.


File : large_gpu_job.sh

#!/bin/bash
#SBATCH --nodes=1 
#SBATCH --gres=gpu:lgpu:4   
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24    # There are 24 CPU cores on Cedar GPU nodes
#SBATCH --time=3:00
#SBATCH --account=def-someuser
hostname
nvidia-smi


Packing single-GPU jobs within one SLURM job

If a user needs to run four single-GPU programs (or two 2-GPU programs) on a node for longer than 24 hours, GNU Parallel is recommended. A simple example is given below:

cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'

The GPU id is computed from the slot id {%} minus 1, so the four slots map to GPUs 0 through 3; {#} is the sequence number of the job, starting from 1.

The params.input file should contain one input parameter per line, for example:

code1.py
code2.py
code3.py
code4.py
...
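
For example, when code1.py runs as the first job in the first slot, the command GNU Parallel executes expands to:

CUDA_VISIBLE_DEVICES=0 python code1.py &> 1.out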

With this method, users can run multiple programs in a single submission. In this example GNU Parallel runs at most four jobs at a time, launching the next one as soon as a running job finishes. Setting CUDA_VISIBLE_DEVICES ensures that each program uses only one GPU.
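
Putting everything together, a packing job on one Cedar base GPU node could look like the sketch below. The file name is arbitrary, the resource values (24 cores, 125G of memory) are taken from the whole-node Cedar example above, and the gnu-parallel module name is an assumption that may differ on your system.

File : gpu_packing_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --nodes=1
#SBATCH --gres=gpu:4              # all four GPUs of a base GPU node
#SBATCH --cpus-per-task=24        # whole-node CPU count on a Cedar base GPU node
#SBATCH --mem=125G                # memory per node
#SBATCH --time=2-00:00            # time (DD-HH:MM); longer than the 24-hour by-GPU limit
module load gnu-parallel          # assumed module name; adjust if needed
cat params.input | parallel -j4 'CUDA_VISIBLE_DEVICES=$(({%} - 1)) python {} &> {#}.out'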