Using GPUs with Slurm

{{Draft}}

== Available hardware ==
These are the node types containing GPUs currently available on [[Cedar]] and [[Graham]]:

{| class="wikitable"
|-
! # of Nodes !! Node type !! CPU cores !! CPU memory !! # of GPUs !! GPU type !! PCIe bus topology
|-
| 114 || Cedar Base GPU || 24 || 128GB || 4 || NVIDIA P100-PCIE-12GB || Two GPUs per CPU socket
|-
| 32 || Cedar Large GPU || 24 || 256GB || 4 || NVIDIA P100-PCIE-16GB || All GPUs under same CPU socket
|-
| 160 || Graham Base GPU || 32 || 128GB || 2 || NVIDIA P100-PCIE-12GB || One GPU per CPU socket
|}
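If you want to see what one of these GPUs looks like before writing a batch script, you can request a GPU interactively and run <code>nvidia-smi</code> on it. A minimal sketch (the account name is a placeholder, as in the job scripts below):

<pre>
salloc --account=def-someuser --gres=gpu:1 --mem=4000M --time=0-00:30
nvidia-smi    # reports the GPU model, memory and driver visible to the job
exit
</pre>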


== Single-core job ==
If you need only a single CPU core and one GPU:
{{File
  |name=gpu_serial_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPUs (per node)
#SBATCH --mem=4000M               # memory (per node)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
./program
}}
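To run the example, submit the script with <code>sbatch</code> and monitor it with <code>squeue</code>:

<pre>
sbatch gpu_serial_job.sh
squeue -u $USER
</pre>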


== Multi-threaded job ==
For GPU jobs asking for multiple CPUs in a single node:
{{File
  |name=gpu_threaded_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPUs (per node)
#SBATCH --cpus-per-task=6         # CPU cores/threads
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program
}}
On Cedar, we recommend that multi-threaded jobs use no more than 6 CPU cores for each GPU requested. On Graham, we recommend no more than 16 CPU cores for each GPU.
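For example, a multi-threaded job that follows the Cedar guideline with two GPUs might look like the following sketch (the file name and memory figure are illustrative only, and the program itself must be able to use both GPUs):

{{File
  |name=gpu_threaded_2gpu_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:2              # 2 GPUs on a single node
#SBATCH --cpus-per-task=12        # 6 CPU cores per GPU, per the Cedar guideline
#SBATCH --mem=8000M               # memory per node (illustrative figure)
#SBATCH --time=0-03:00            # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program
}}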


== MPI job ==
{{File
  |name=gpu_mpi_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:4              # Number of GPUs per node
#SBATCH --nodes=2                 # Number of nodes
#SBATCH --ntasks=48               # Number of MPI processes
#SBATCH --cpus-per-task=1         # CPU cores per MPI process
#SBATCH --mem=120G                # memory per node
#SBATCH --time=0-03:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./program
}}
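In this example each node runs 24 MPI processes but has only four GPUs, so processes must share GPUs. Many GPU-enabled MPI programs select a device themselves; if yours does not, one possible workaround is to launch a small wrapper with <code>srun ./bind_gpu.sh</code> instead of <code>srun ./program</code> (a sketch only; <code>bind_gpu.sh</code> is a hypothetical name, it must be made executable with <code>chmod +x bind_gpu.sh</code>, and it assumes four GPUs per node and a program that simply uses the first GPU it is allowed to see):

{{File
  |name=bind_gpu.sh
  |lang="sh"
  |contents=
#!/bin/bash
# Hypothetical wrapper: restrict each MPI process to one of the node's four
# GPUs, chosen from its node-local rank, then start the real program.
export CUDA_VISIBLE_DEVICES=$(( SLURM_LOCALID % 4 ))
exec ./program "$@"
}}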


== Using Cedar's Large GPU nodes ==
The large-memory GPU nodes at [[Cedar]] each have four Tesla P100 16GB cards. All four GPUs sit behind the same PCIe switch, so inter-GPU communication latency is lower, but the bandwidth is also lower than on the regular GPU nodes. These nodes also have 256 GB of RAM instead of 128 GB. To use them you must request all four GPUs, that is, the whole node, and you must specify <code>lgpu</code>, as shown in this example:
{{File
  |name=large_gpu_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24    # There are 24 CPU cores on Cedar GPU nodes
#SBATCH --gres=gpu:lgpu:4    # Ask for 4 GPUs per node of the large-gpu node variety
#SBATCH --time=0-00:10
hostname
nvidia-smi
}}
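If you want to confirm the GPU layout from inside such a job, <code>nvidia-smi</code> can also print the interconnect matrix; a minimal check, appended after <code>nvidia-smi</code> in the script above, might be:

<pre>
nvidia-smi topo -m    # matrix of how each GPU reaches the other GPUs and the CPUs
</pre>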
