Using GPUs with Slurm


This article is a draft

This is not a complete article: it is a draft, a work in progress intended to be developed into a full article, and may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.



GPU Hardware and Node Types

# of Nodes | Node Type            | CPU Cores | CPU Memory | # of GPUs | GPU Type              | PCIe Bus Topology
114        | Cedar Base GPU Node  | 24        | 128GB      | 4         | NVIDIA P100-PCIE-12GB | Two GPUs per CPU socket
32         | Cedar Large GPU Node | 24        | 256GB      | 4         | NVIDIA P100-PCIE-16GB | All GPUs under same CPU socket
160        | Graham Base GPU Node | 32        | 128GB      | 2         | NVIDIA P100-PCIE-12GB | One GPU per CPU socket
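
The PCIe bus topology in the last column matters for multi-GPU jobs: GPUs attached to the same CPU socket can exchange data without crossing the inter-socket link. As a sketch (not part of the original draft), the layout of the node a job actually landed on can be checked from inside the job:

# Print the GPU/CPU connection matrix for the allocated node.
# Run this inside a job (batch script or interactive allocation).
nvidia-smi topo -m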

Single GPU Jobs

Serial Jobs

File : one_gpu_serial_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # request GPU "generic resource"
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
nvidia-smi                        # report the GPU allocated to the job
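
Once saved, the script is handed to the scheduler with sbatch, which queues it until a node with a free GPU is available:

# Submit the serial GPU job script shown above.
sbatch one_gpu_serial_job.sh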


Multi-threaded Jobs

File : one_gpu_threaded_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # request GPU "generic resource"
#SBATCH --cpus-per-task=6         # CPU cores/threads
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
nvidia-smi
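
Setting OMP_NUM_THREADS from SLURM_CPUS_PER_TASK keeps the thread count in step with the --cpus-per-task request. An OpenMP (or similarly threaded) GPU program can then be run directly; the binary name below is only a placeholder:

# "gpu_threaded_app" is a hypothetical binary used for illustration.
./gpu_threaded_app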


MPI Jobs

File : one_gpu_mpi_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPUs per node
#SBATCH --nodes=1
#SBATCH --ntasks=6                # Number of MPI ranks
#SBATCH --cpus-per-task=1         # CPU cores per MPI rank
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
nvidia-smi
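
Within the job script, the MPI ranks are normally launched with srun, which picks up the task count requested with --ntasks from the environment Slurm sets up; the program name below is a placeholder:

# Start 6 MPI ranks sharing the single allocated GPU.
# "mpi_gpu_app" is a hypothetical binary used for illustration.
srun ./mpi_gpu_app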


Whole Node(s) GPU Jobs
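
This section is still empty in the draft. As a placeholder sketch only, a job that wants an entire Cedar base GPU node would request all of the resources listed in the hardware table above (4 GPUs and 24 cores); the memory figure below deliberately leaves a little of the 128GB for the operating system and is an assumption, not a site recommendation.

File : whole_node_gpu_job.sh (illustrative sketch, not from the original draft)

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --nodes=1
#SBATCH --gres=gpu:4              # all four GPUs on a Cedar base GPU node
#SBATCH --exclusive               # do not share the node with other jobs
#SBATCH --cpus-per-task=24        # all CPU cores on the node
#SBATCH --mem=120G                # most of the 128GB; exact value is an assumption
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
nvidia-smi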

Using Cedar's Large GPU nodes
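
This section is also empty in the draft. Going only by the hardware table above, Cedar's large GPU nodes differ from the base nodes in having 256GB of memory and all four P100-PCIE-16GB cards under one CPU socket, which favours multi-GPU jobs that move data between GPUs. A request for such a node might look like the sketch below; note that schedulers often attach a node-type-specific string to the GPU gres on these nodes, so the plain gpu:4 request and the memory figure are assumptions.

File : cedar_large_gpu_job.sh (illustrative sketch, not from the original draft)

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --nodes=1
#SBATCH --gres=gpu:4              # four P100-PCIE-16GB cards; a type-specific gres string may be required (assumption)
#SBATCH --exclusive               # take the whole node
#SBATCH --cpus-per-task=24        # all CPU cores on a large GPU node
#SBATCH --mem=250G                # most of the 256GB; exact value is an assumption
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
nvidia-smi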