Using GPUs with Slurm



This article is a draft: a work in progress intended to become a full article, which may not yet be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.



== GPU Hardware and Node Types ==

{| class="wikitable"
! # of Nodes !! Node Type !! CPU Cores !! CPU Memory !! # of GPUs !! GPU Type !! PCIe Bus Topology
|-
| 114 || Cedar Base GPU Node || 24 || 128 GB || 4 || NVIDIA P100-PCIE-12GB || Two GPUs per CPU socket
|-
| 32 || Cedar Large GPU Node || 24 || 256 GB || 4 || NVIDIA P100-PCIE-16GB || All four GPUs under the same CPU socket
|-
| 160 || Graham Base GPU Node || 32 || 128 GB || 2 || NVIDIA P100-PCIE-12GB || One GPU per CPU socket
|}
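The PCIe bus topology matters for multi-GPU work because it determines how directly the GPUs can communicate with each other and with the CPUs. One way to inspect the topology of the node a job actually lands on is to print the device topology matrix from inside the job:

nvidia-smi topo -m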

== Serial Jobs ==

For GPU jobs that need only a single CPU core:

File : one_gpu_serial_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPU(s) per node
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
nvidia-smi
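As with any Slurm script, the job is submitted with sbatch and can be monitored with squeue, for example:

sbatch one_gpu_serial_job.sh
squeue -u $USER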


== Multi-threaded Jobs ==

For GPU jobs that need multiple CPU cores on a single node:

File : one_gpu_threaded_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1              # Number of GPU(s) per node
#SBATCH --cpus-per-task=6         # CPU cores/threads
#SBATCH --mem=4000M               # memory per node
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
nvidia-smi
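On clusters where GPUs are scheduled as a generic resource, Slurm normally exposes only the allocated device(s) to the job through the CUDA_VISIBLE_DEVICES environment variable. A quick sanity check that the job sees one GPU and six CPU threads could be appended to the script (a sketch; these echo lines are illustrative, not part of the original example):

echo "Visible GPU(s): $CUDA_VISIBLE_DEVICES"    # set by Slurm for the allocated GPU
echo "CPU threads:    $OMP_NUM_THREADS"         # matches --cpus-per-task above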


== MPI Jobs ==
For MPI jobs using multiple GPUs, here spread across two nodes:

File : one_gpu_mpi_job.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:4              # Number of GPUs per node
#SBATCH --nodes=2                 # Number of Nodes
#SBATCH --ntasks=48               # Number of MPI ranks
#SBATCH --cpus-per-task=1         # CPU cores per MPI rank
#SBATCH --mem=120G                # memory per node
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./program
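With 48 ranks over two nodes and four GPUs per node, several ranks share each GPU. If the application does not pick its own device, one common pattern (a sketch under that assumption; bind_gpu.sh is a hypothetical wrapper and ./program is the placeholder application from the script above) is to launch each rank through a wrapper that selects a GPU from the rank's node-local ID, replacing "srun ./program" with "srun ./bind_gpu.sh":

File : bind_gpu.sh

#!/bin/bash
# SLURM_LOCALID is set by srun to this task's rank within its node (0, 1, 2, ...).
# With 4 GPUs per node, the node's ranks map onto devices 0-3 round-robin.
export CUDA_VISIBLE_DEVICES=$(( SLURM_LOCALID % 4 ))
exec ./program "$@"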


== Using Cedar's Large GPU nodes ==
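This section has no content yet in this revision. From the hardware table above, each large GPU node offers 24 CPU cores, 256 GB of memory, and four P100-PCIE-16GB GPUs attached to the same CPU socket. A minimal whole-node sketch follows; it assumes that requesting more memory than the 128 GB available on a base GPU node is enough to steer the job onto a large node, since the exact selection mechanism (e.g. a partition or node feature) is not specified in this draft:

File : cedar_large_gpu_node.sh

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --nodes=1
#SBATCH --gres=gpu:4              # all four GPUs of the node
#SBATCH --cpus-per-task=24        # all 24 CPU cores
#SBATCH --mem=250G                # more than a 128 GB base node can offer (assumed steering)
#SBATCH --time=0-05:00            # time (DD-HH:MM)
#SBATCH --output=%N-%j.out        # %N for node name, %j for jobID
nvidia-smi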