Using GPUs with Slurm
This article is a draft
This is not a complete article: This is a draft, a work in progress that is intended to be published into an article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
GPU Hardware and Node Types
# of Nodes | Node Type | CPU Cores | CPU Memory | # of GPUs | GPU Type | PCIe Bus Topology
---|---|---|---|---|---|---
114 | Cedar Base GPU Node | 24 | 128 GB | 4 | NVIDIA P100-PCIE-12GB | Two GPUs per CPU socket
32 | Cedar Large GPU Node | 24 | 256 GB | 4 | NVIDIA P100-PCIE-16GB | All GPUs under the same CPU socket
160 | Graham Base GPU Node | 32 | 128 GB | 2 | NVIDIA P100-PCIE-12GB | One GPU per CPU socket
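The PCIe bus topology affects how quickly data moves between GPUs on the same node. As a quick check (not part of the tables above), the interconnect layout of the GPUs allocated to a job can be printed from inside that job; the salloc options below are illustrative values only:
salloc --account=def-someuser --gres=gpu:2 --time=0:30:0   # start a short interactive GPU job
nvidia-smi topo -m                                         # print the GPU/CPU affinity and interconnect matrix
exit                                                       # release the allocation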
Serial Jobs
For a GPU job that needs only a single CPU core:
File : gpu_serial_job.sh
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1 # Number of GPU(s) per node
#SBATCH --mem=4000M # memory per node
#SBATCH --time=0-05:00 # time (DD-HH:MM)
#SBATCH --output=%N-%j.out # %N for node name, %j for jobID
./program
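The placeholder ./program stands for any GPU-enabled executable. Once saved, the script is submitted and monitored with the usual Slurm commands, for example:
sbatch gpu_serial_job.sh   # submit the job script
squeue -u $USER            # check the job's state in the queue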
Multi-threaded Jobs
For a GPU job that uses multiple CPU cores on a single node:
File : gpu_threaded_job.sh
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1 # Number of GPU(s) per node
#SBATCH --cpus-per-task=6 # CPU cores/threads
#SBATCH --mem=4000M # memory per node
#SBATCH --time=0-05:00 # time (DD-HH:MM)
#SBATCH --output=%N-%j.out # %N for node name, %j for jobID
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program
On Cedar, we recommend that multi-threaded jobs use at most 6 CPU cores per GPU requested. On Graham, jobs can use up to 16 CPU cores per GPU requested.
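For example, a single-GPU, multi-threaded job that uses the full recommended CPU share on Graham would combine these two requests (values shown only to illustrate the ratio):
#SBATCH --gres=gpu:1         # one GPU
#SBATCH --cpus-per-task=16   # up to 16 CPU cores per GPU on Graham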
MPI Jobs
File : gpu_mpi_job.sh
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:4 # Number of GPUs per node
#SBATCH --nodes=2 # Number of Nodes
#SBATCH --ntasks=48 # Number of MPI ranks
#SBATCH --cpus-per-task=1 # CPU cores per MPI rank
#SBATCH --mem=120G # memory per node
#SBATCH --time=0-05:00 # time (DD-HH:MM)
#SBATCH --output=%N-%j.out # %N for node name, %j for jobID
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./program
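With the values above, srun launches 48 MPI ranks spread across the 2 nodes (24 ranks per node, matching the 24 cores of a Cedar base GPU node); how each rank selects one of the 4 GPUs on its node is left to the application, for example based on its node-local rank. As a quick, illustrative check of the rank layout, the placeholder ./program can be replaced with a simple command:
srun hostname | sort | uniq -c   # counts how many ranks ran on each node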