Hyper-Q / MPS
This is not a complete article: This is a draft, a work in progress that is intended to be published into an article, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
Overview
Hyper-Q (or MPS) is a hardware and software feature of NVIDIA GPUs, available on GPUs with CUDA compute capability 3.5 and higher. On the Alliance clusters cedar, graham, beluga, and narval, it is available on P100 and newer GPUs.
According to NVIDIA,
MPS (Multi-Process Service; formerly known as Hyper-Q) enables multiple CPU cores to launch work on a single GPU simultaneously, thereby dramatically increasing GPU utilization and significantly reducing CPU idle times. MPS increases the total number of connections (work queues) between the host and the GPU by allowing multiple simultaneous, hardware-managed connections (compared to the single connection available with Fermi generation GPUs). Hyper-Q is a flexible solution that allows separate connections from multiple CUDA streams, from multiple Message Passing Interface (MPI) processes, or even from multiple threads within a process. Applications that previously encountered false serialization across tasks, thereby limiting achieved GPU utilization, can see a dramatic performance increase without changing any existing code.
In our tests, MPS increases the total GPU flop rate even when the GPU is shared by unrelated CPU processes ("GPU farming"). This makes MPS particularly useful for CUDA codes with relatively small problem sizes, which on their own cannot efficiently saturate modern GPUs with their thousands of cores.
MPS is not enabled by default, but enabling it is straightforward. If you use the GPU interactively, execute the following commands before running your CUDA code(s):
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log
nvidia-cuda-mps-control -d
If you are using a scheduler, you should submit a script which contains the above lines and then executes your code.
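For example, a batch job can enable MPS at the top of the job script and then run the CUDA code as usual. The following is a minimal sketch for the Slurm scheduler used on the Alliance clusters; the resource requests and the executable name ./my_cuda_app are placeholders to be adapted to your own code:

#!/bin/bash
#SBATCH --gpus-per-node=1
#SBATCH -t 0-01:00
#SBATCH --mem=16G
#SBATCH -c 4
# Enable MPS before launching any CUDA code
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log
nvidia-cuda-mps-control -d
# Run the code (placeholder name)
./my_cuda_app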
You can then take advantage of the MPS feature whenever more than one CPU thread or process accesses the GPU. This will happen if you run an MPI/CUDA or OpenMP/CUDA code, or multiple instances of a serial CUDA code (GPU farming).
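For example, with an MPI/CUDA code you would enable MPS once at the start of the job script (as above) and then launch the MPI ranks in the usual way, so that all ranks on the node share the single GPU through the MPS server. A minimal sketch, where the executable name ./my_mpi_cuda_code is a placeholder:

# MPS is already running (started above with nvidia-cuda-mps-control -d);
# all MPI ranks launched on this node share the GPU through it.
srun ./my_mpi_cuda_code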
Many additional details on MPS can be found in this document: Multi Process Service (MPS) - NVIDIA Documentation.
GPU farming
One situation where the MPS feature can be very useful is when you need to run multiple instances of a CUDA code that is too small to saturate a modern GPU on its own. In that case you can run several instances of the code sharing a single GPU, as long as there is enough GPU memory for all of the instances. In many cases this should significantly increase the collective throughput of all of your GPU processes.
Here is an example of a job script to set up GPU farming:
#!/bin/bash
#SBATCH --gpus-per-node=v100:1
#SBATCH -t 0-10:00
#SBATCH --mem=64G
#SBATCH -c 8
mkdir -p $HOME/tmp
export CUDA_MPS_LOG_DIRECTORY=$HOME/tmp
nvidia-cuda-mps-control -d
for ((i=0; i<8; i++))
do
    echo $i
    ./my_code $i &
done
wait
In the above example, we share a single V100 GPU among 8 instances of "my_code" (which takes a single argument, the loop index $i). We request 8 CPU cores (#SBATCH -c 8) for the farm, so there is one CPU core per code instance. The two important elements are the "&" on the code execution line, which sends each code process to the background, and the "wait" command at the end of the script, which ensures that the job keeps running until all background processes have finished.
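When all the instances have finished, the MPS control daemon can be shut down explicitly; in a batch job it is cleaned up automatically when the job ends, but in an interactive session you may want to stop it yourself. The standard way is to send the quit command to the control daemon:

echo quit | nvidia-cuda-mps-control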