Translations:Using GPUs with Slurm/2/fr: Difference between revisions
Revision as of 19:39, 16 November 2017
Message definition (Using GPUs with Slurm)
{| class="wikitable"
|-
! rowspan=2|Cluster !! rowspan=2|# of nodes !! rowspan=2|Slurm type<br>specifier !! colspan=3|Per node !! rowspan=2|GPU model !! rowspan=2|Compute<br>Capability(*) !! rowspan=2|GPU mem<br>(GiB) !! rowspan=2|Notes
|-
! CPU cores !! CPU memory !! GPUs
|-
| Béluga || 172 || v100 || 40 || 191000M || 4 || V100-16gb || 70 || 16 || All GPUs associated with the same CPU socket, connected via NVLink and SXM2
|-
| rowspan=3|Cedar || 114 || p100 || 24 || 128000M || 4 || P100-12gb || 60 || 12 || Two GPUs per CPU socket, connected via PCIe
|-
| 32 || p100l || 24 || 257000M || 4 || P100-16gb || 60 || 16 || All GPUs associated with the same CPU socket, connected via PCIe
|-
| 192 || v100l || 32 || 192000M || 4 || V100-32gb || 70 || 32 || Two GPUs per CPU socket; all GPUs connected via NVLink and SXM2
|-
| rowspan=5|Graham || 160 || p100 || 32 || 127518M || 2 || P100-12gb || 60 || 12 || One GPU per CPU socket, connected via PCIe
|-
| 7 || v100(**) || 28 || 183105M || 8 || V100-16gb || 70 || 16 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
| 2 || v100(***) || 28 || 183105M || 8 || V100-32gb || 70 || 32 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
| 30 || t4 || 44 || 192000M || 4 || T4-16gb || 75 || 16 || Two GPUs per CPU socket
|-
| 6 || t4 || 16 || 192000M || 4 || T4-16gb || 75 || 16 ||
|-
| Mist || 54 || (none) || 32 || 256GiB || 4 || V100-32gb || 70 || 32 || See [https://docs.scinet.utoronto.ca/index.php/Mist#Specifications Mist specifications]
|-
| Narval || 159 || a100 || 48 || 510000M || 4 || A100-40gb || 80 || 40 || Two GPUs per CPU socket; all GPUs connected via NVLink
|-
| Arbutus || colspan=8 | Cloud resources are not schedulable via Slurm. See [[Cloud resources]] for details of available hardware.
|}
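The "Slurm type specifier" column above is the string used to request a particular GPU model in a job script. As a minimal sketch (assuming a placeholder account <code>def-someuser</code>, and a CPU and memory request sized to one GPU's share of a Béluga GPU node per the table), a job asking for a single V100 might look like:

<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder; replace with your allocation account
#SBATCH --gres=gpu:v100:1        # "v100" is the Slurm type specifier from the table
#SBATCH --cpus-per-task=10       # one quarter of the 40 cores on a Béluga GPU node
#SBATCH --mem=47750M             # roughly one quarter of the node's 191000M
#SBATCH --time=0-03:00           # D-HH:MM
nvidia-smi                       # report which GPU was actually allocated to the job
</pre>

Omitting the type specifier (<code>--gres=gpu:1</code>) lets the scheduler pick any available GPU model on the cluster.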
Message text at this revision:
{| class="wikitable"
|-
! # of Nodes !! Node type !! CPU cores !! CPU memory !! # of GPUs !! GPU type !! PCIe bus topology
|-
| 114 || Cedar Base GPU || 24 || 128GB || 4 || NVIDIA P100-PCIE-12GB || Two GPUs per CPU socket
|-
| 32 || Cedar Large GPU || 24 || 256GB || 4 || NVIDIA P100-PCIE-16GB || All GPUs under same CPU socket
|-
| 160 || Graham Base GPU || 32 || 128GB || 2 || NVIDIA P100-PCIE-12GB || One GPU per CPU socket
|}
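The PCIe bus topology column matters for multi-GPU jobs, since peer-to-peer transfers between GPUs on the same socket are faster than transfers that cross sockets. A sketch of one way to inspect the topology of the node a job actually lands on (again assuming the placeholder account <code>def-someuser</code>; <code>nvidia-smi topo -m</code> is part of the standard NVIDIA driver tools):

<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account
#SBATCH --gres=gpu:p100:4        # all four GPUs of a Cedar Base GPU node
#SBATCH --time=0-00:10
# Print the GPU/CPU interconnect matrix; link labels such as PIX, PHB,
# NODE, and SYS indicate how directly each pair of GPUs is connected.
nvidia-smi topo -m
</pre>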