Using GPUs with Slurm
This is not a complete article: This is a draft, a work in progress that is intended to be published into an article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
Available hardware
These are the node types containing GPUs currently available on Cedar and Graham:
# of Nodes | Node type | CPU cores | CPU memory | # of GPUs | GPU type | PCIe bus topology
---|---|---|---|---|---|---
114 | Cedar Base GPU | 24 | 128 GB | 4 | NVIDIA P100-PCIE-12GB | Two GPUs per CPU socket
32 | Cedar Large GPU | 24 | 256 GB | 4 | NVIDIA P100-PCIE-16GB | All GPUs under the same CPU socket
160 | Graham Base GPU | 32 | 128 GB | 2 | NVIDIA P100-PCIE-12GB | One GPU per CPU socket
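To see how the GPUs in a given node are wired to the CPUs and to each other, one quick check (not part of the original text) is to start a short interactive job and ask nvidia-smi for the topology matrix; def-someuser is the same account placeholder used in the job scripts below.

salloc --account=def-someuser --gres=gpu:2 --time=0:30:00   # short interactive GPU allocation
nvidia-smi topo -m                                          # print the GPU/PCIe topology for the allocated GPUs
exit                                                        # release the allocation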
Single-core job
If you need only a single CPU core and one GPU:
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1 # Number of GPUs (per node)
#SBATCH --mem=4000M # memory (per node)
#SBATCH --time=0-03:00 # time (DD-HH:MM)
./program
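The script can be submitted and monitored with the usual Slurm commands; gpu_serial_job.sh is only a hypothetical name for the file containing the script above.

sbatch gpu_serial_job.sh   # submit the job script shown above
squeue -u $USER            # check its state in the queue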
Multi-threaded job
For GPU jobs asking for multiple CPUs in a single node:
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1 # Number of GPU(s) per node
#SBATCH --cpus-per-task=6 # CPU cores/threads
#SBATCH --mem=4000M # memory per node
#SBATCH --time=0-03:00 # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./program
On Cedar, we recommend that multi-threaded jobs use no more than 6 CPU cores for each GPU requested. On Graham, we recommend no more than 16 CPU cores for each GPU.
MPI job
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:4 # Number of GPUs per node
#SBATCH --nodes=2 # Number of nodes
#SBATCH --ntasks=48              # Number of MPI processes
#SBATCH --cpus-per-task=1 # CPU cores per MPI process
#SBATCH --mem=120G # memory per node
#SBATCH --time=0-03:00 # time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./program
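With 24 MPI ranks per node sharing four GPUs, each rank normally needs to be told which GPU to use. One common pattern (a sketch, not something prescribed by this page; gpu_bind.sh is a hypothetical file name) is a small wrapper that maps a rank's node-local ID onto a device, launched with srun ./gpu_bind.sh in place of srun ./program:

#!/bin/bash
# gpu_bind.sh - hypothetical wrapper: give each MPI rank on a node its own GPU.
# SLURM_LOCALID is the rank's index on its node; with 4 GPUs per node the
# modulo spreads the 24 local ranks across the 4 cards.
export CUDA_VISIBLE_DEVICES=$(( SLURM_LOCALID % 4 ))
exec ./program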
Whole nodes
If your application can efficiently use an entire node and its associated GPUs, you should use one of the following job scripts as a template.
Scheduling a GPU node at Graham
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --ntasks-per-node=32
#SBATCH --mem=128000M
#SBATCH --time=3:00
nvidia-smi
Scheduling a Base GPU node at Cedar
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:4
#SBATCH --exclusive
#SBATCH --mem=125G
#SBATCH --time=3:00
nvidia-smi
Scheduling a Large GPU node at Cedar
The large-memory GPU nodes at Cedar have four Tesla P100 16 GB cards each. These GPUs all use the same PCI switch, so inter-GPU communication latency is lower, but bandwidth is also lower than on the regular GPU nodes. The nodes also have 256 GB of RAM instead of 128 GB. In order to use these nodes you must request all four GPUs, that is, the whole node, and you must specify lgpu, as shown in this example:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:lgpu:4
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=24 # There are 24 CPU cores on Cedar GPU nodes
#SBATCH --time=3:00
hostname
nvidia-smi
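As an optional check (not part of the original example), you can confirm from inside the job that you were allocated the four 16 GB cards rather than the 12 GB ones:

nvidia-smi --query-gpu=index,name,memory.total --format=csv   # should list four P100 GPUs with ~16 GB each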