Using GPUs with Slurm

{| class="wikitable"
|-
! rowspan=2|Cluster !! rowspan=2|# of nodes !! rowspan=2|Slurm type<br>specifier !! colspan=3|Per node !! rowspan=2|GPU model !! rowspan=2|Compute<br>capability !! rowspan=2|GPU memory (GiB) !! rowspan=2|Notes
|-
!                               CPU cores !! CPU memory !! GPUs
|-
| Béluga            || 172 ||  v100 ||  40 || 191000M ||  4 || V100-16gb || 70 || 16 || All GPUs associated with the same CPU socket, connected via NVLink and SXM2
|-
| rowspan=3|Cedar  || 114 ||  p100 ||  24 || 128000M ||  4 || P100-12gb || 60 || 12 || Two GPUs per CPU socket, connected via PCIe
|-
|                      32  || p100l ||  24 || 257000M ||  4 || P100-16gb || 60 || 16 || All GPUs associated with the same CPU socket, connected via PCIe
|-
|                      192 || v100l ||  32 || 192000M ||  4 || V100-32gb || 70 || 32 || Two GPUs per CPU socket; all GPUs connected via NVLink and SXM2
|-
| rowspan=5|Graham  || 160 ||  p100 ||  32 || 127518M ||  2 || P100-12gb || 60 || 12 || One GPU per CPU socket, connected via PCIe
|-
|                      7  || v100     ||  28 || 183105M ||  8 || V100-16gb || 70 || 16 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
|                      2  || v100(**) ||  28 || 183105M ||  8 || V100-32gb || 70 || 32 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
|                      30  ||  t4  ||  44 || 192000M ||  4 || T4-16gb  || 75 || 16 || Two GPUs per CPU socket
|-
|                      6  ||  t4  ||  16 || 192000M ||  4 || T4-16gb  || 75 || 16 ||  
|-
| Mist              || 54  || (none) || 32 ||  256GiB ||  4 || V100-32gb || 70 || 32 || See [https://docs.scinet.utoronto.ca/index.php/Mist#Specifications Mist specifications]
|-  
| Narval            || 159 || a100   || 48 || 510000M ||  4 || A100-40gb || 80 || 40 || Two GPUs per CPU socket; all GPUs connected via NVLink  
|-
| Arbutus            ||  colspan=8 | Cloud resources are not schedulable via Slurm. See [[Cloud resources]] for details of available hardware.
|}
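
To request one of these GPUs in a job, use the value from the ''Slurm type specifier'' column with Slurm's GPU options. Below is a minimal sketch of a batch script, assuming a placeholder account name <code>def-someuser</code>; adjust the GPU type, count, and other resources to match the cluster and node type you need.

 #!/bin/bash
 #SBATCH --account=def-someuser        # placeholder account name; replace with your own allocation
 #SBATCH --gpus-per-node=v100:1        # one GPU, selected by the Slurm type specifier from the table above
 #SBATCH --cpus-per-task=1             # CPU cores for the task
 #SBATCH --mem=4000M                   # CPU memory
 #SBATCH --time=0-01:00                # run time (D-HH:MM)
 nvidia-smi                            # report the GPU(s) allocated to this job

For example, on Cedar <code>--gpus-per-node=p100:1</code> requests a single P100-12gb GPU, while <code>--gpus-per-node=v100l:1</code> requests a V100-32gb GPU.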