==GPU farming==
One situation where the MPS feature can be very useful is when you need to run multiple instances of a CUDA application, but the application is too small to saturate a modern GPU. MPS allows you to run multiple instances of the application sharing a single GPU, as long as there is enough GPU memory for all of the instances. In many cases this should result in significantly increased throughput from all of your GPU processes.
Here is an example of a job script to set up GPU farming:
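What follows is a minimal sketch; the resource requests, the MPS directory locations (here placed under $SLURM_TMPDIR), and the executable name my_code are placeholders to adapt to your own application and cluster.

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --gpus-per-node=v100:1   # one V100 GPU shared by all instances
#SBATCH -c 8                     # one CPU core per application instance
#SBATCH --mem=32G                # placeholder memory request
#SBATCH -t 0-03:00               # placeholder time request

# Start the MPS control daemon so all instances can share the GPU
export CUDA_MPS_PIPE_DIRECTORY=$SLURM_TMPDIR/mps
export CUDA_MPS_LOG_DIRECTORY=$SLURM_TMPDIR/mps_log
nvidia-cuda-mps-control -d

# Launch 8 instances of my_code in the background, one per CPU core
for ((i=0; i<8; i++)); do
    ./my_code $i &
done

# Wait until all background processes have finished
wait
</syntaxhighlight>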
In the above example, we share a single V100 GPU among 8 instances of "my_code" (which takes a single argument, the loop index $i). We request 8 CPU cores (#SBATCH -c 8) so there is one CPU core per application instance. The two important elements are the "&" on the code execution line, which sends the code processes to the background, and the "wait" command at the end of the script, which ensures that the job runs until all background processes end.
[[Category:Software]]