<languages />
|}
== Single-core job ==
If you need only a single CPU core and one GPU:
{{File
  |name=gpu_serial.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --gres=gpu:1        # request one GPU
#SBATCH --mem=4000M         # memory per node (adjust to your program)
#SBATCH --time=0-03:00      # run time (DD-HH:MM)
./program
}}
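Submit the script in the usual way, e.g. with <code>sbatch gpu_serial.sh</code>.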
== Multi-threaded job ==
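For a program that uses several CPU threads alongside the GPU, request the cores with <code>--cpus-per-task</code>. A minimal sketch follows; the file name, core count, and resource values are illustrative and should be adjusted to your program:

{{File
  |name=gpu_threaded_job.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --gres=gpu:1        # request one GPU
#SBATCH --cpus-per-task=6   # CPU cores for the threaded program (illustrative)
#SBATCH --mem=4000M         # memory per node
#SBATCH --time=0-03:00      # run time (DD-HH:MM)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # match thread count to allocated cores
./program
}}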
=== Scheduling a Large GPU node at Cedar ===
There is a special group of large-memory GPU nodes at [[Cedar]], each with four Tesla P100 16GB cards (other GPUs in the cluster have 12GB). All four GPUs share the same PCI switch, so inter-GPU communication latency is lower, but bandwidth between CPU and GPU is lower than on the regular GPU nodes. These nodes also have 256GB of RAM instead of 128GB. To use them you must specify <code>lgpu</code> in your request. By-GPU requests can '''only run up to 24 hours'''.
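For example, a whole-node request might use a directive like <code>--gres=gpu:lgpu:4</code>; this GRES string is our reading of the <code>lgpu</code> requirement and should be verified against the cluster documentation. The script below shows such a request: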
{{File