Translations:Using GPUs with Slurm/2/fr: Difference between revisions

Revision as of 15:53, 31 October 2019

Message definition (Using GPUs with Slurm)
{| class="wikitable"
|-
! rowspan=2|Cluster !! rowspan=2| # of nodes !! rowspan=2|Slurm type<br>specifier !! colspan=3|Per node !! rowspan=2|GPU model  !! rowspan=2|Compute<br>Capability(*) !! rowspan=2|GPU mem<br>(GiB) !! rowspan=2|Notes
|-
!                              CPU cores !! CPU memory !! GPUs 
|-
| Béluga            || 172 ||  v100 ||  40 || 191000M ||  4 || V100-16gb || 70 || 16 || All GPUs associated with the same CPU socket, connected via NVLink and SXM2
|-
| rowspan=3|Cedar  || 114 ||  p100 ||  24 || 128000M ||  4 || P100-12gb || 60 || 12 || Two GPUs per CPU socket, connected via PCIe
|-
|                      32  || p100l ||  24 || 257000M ||  4 || P100-16gb || 60 || 16 || All GPUs associated with the same CPU socket, connected via PCIe
|-
|                      192 || v100l ||  32 || 192000M ||  4 || V100-32gb || 70 || 32 || Two GPUs per CPU socket; all GPUs connected via NVLink and SXM2
|-
| rowspan=5|Graham  || 160 ||  p100 ||  32 || 127518M ||  2 || P100-12gb || 60 || 12 || One GPU per CPU socket, connected via PCIe
|-
|                      7  || v100(**)    ||  28 || 183105M ||  8 || V100-16gb || 70 || 16 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
|                      2  || v100(***) ||  28 || 183105M ||  8 || V100-32gb || 70 || 32 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
|                      30  ||  t4  ||  44 || 192000M ||  4 || T4-16gb  || 75 || 16 || Two GPUs per CPU socket
|-
|                      6  ||  t4  ||  16 || 192000M ||  4 || T4-16gb  || 75 || 16 || &nbsp;
|-
| Mist              || 54  || (none) || 32 ||  256GiB ||  4 || V100-32gb || 70 || 32 || See [https://docs.scinet.utoronto.ca/index.php/Mist#Specifications Mist specifications]
|- 
| Narval            || 159 || a100  || 48 || 510000M ||  4 || A100-40gb || 80 || 40 || Two GPUs per CPU socket; all GPUs connected via NVLink 
|-
| Arbutus          ||  colspan=8 | Cloud resources are not schedulable via Slurm. See [[Cloud resources]] for details of available hardware.
|}
{| class="wikitable"
|-
! # of nodes !! Node type !! CPU cores !! CPU memory !! # of GPUs !! GPU type !! PCIe bus topology
|-
| 172 || ''Base GPU'', Béluga || 40 || 191000M || 4 || NVIDIA V100-SXM2-16GB || All GPUs associated with the same CPU socket
|-
| 114 || ''Base GPU'', Cedar || 24 || 128000M || 4 || NVIDIA P100-PCIE-12GB || Two GPUs per CPU socket
|-
| 32 || ''Large GPU'', Cedar || 24 || 257000M || 4 || NVIDIA P100-PCIE-16GB || All GPUs associated with the same CPU socket
|-
| 160 || ''Base GPU'', Graham || 32 || 127518M || 2 || NVIDIA P100-PCIE-12GB || One GPU per CPU socket
|-
| 7 || ''Base GPU'', Graham || 28 || 183105M || 8 || NVIDIA V100-PCIE-16GB || Four GPUs per CPU socket
|-
| 6 || ''Base GPU'', Graham || 16 || 196608M || 4 || NVIDIA Tesla T4 16GB || Two GPUs per CPU socket
|}
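The "Slurm type specifier" column in the table above is the label used to request a particular GPU model in a job script. A minimal sketch of such a request, targeting a Graham P100 node (the CPU, memory, and time values here are illustrative assumptions, not recommendations):

```shell
#!/bin/bash
# Request one P100 GPU via its Slurm type specifier from the table above.
#SBATCH --gres=gpu:p100:1     # GPU type and count
#SBATCH --cpus-per-task=16    # at most the per-node core count listed
#SBATCH --mem=63000M          # stay under the per-node CPU memory listed
#SBATCH --time=0-01:00
nvidia-smi                    # report the GPU(s) allocated to the job
```

Omitting the type (e.g. `--gres=gpu:1`) lets the scheduler place the job on any available GPU node.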