Using GPUs with Slurm

| Cluster | # of nodes | Slurm type specifier | CPU cores per node | CPU memory per node | GPUs per node | GPU model | Compute capability(*) | GPU memory (GiB) | Notes |
|---|---|---|---|---|---|---|---|---|---|
| Béluga | 172 | v100 | 40 | 191000M | 4 | V100-16gb | 70 | 16 | All GPUs associated with the same CPU socket, connected via NVLink and SXM2 |
| Cedar | 114 | p100 | 24 | 128000M | 4 | P100-12gb | 60 | 12 | Two GPUs per CPU socket, connected via PCIe |
| Cedar | 32 | p100l | 24 | 257000M | 4 | P100-16gb | 60 | 16 | All GPUs associated with the same CPU socket, connected via PCIe |
| Cedar | 192 | v100l | 32 | 192000M | 4 | V100-32gb | 70 | 32 | Two GPUs per CPU socket; all GPUs connected via NVLink and SXM2 |
| Graham | 160 | p100 | 32 | 127518M | 2 | P100-12gb | 60 | 12 | One GPU per CPU socket, connected via PCIe |
| Graham | 7 | v100(**) | 28 | 183105M | 8 | V100-16gb | 70 | 16 | See Graham: Volta GPU nodes |
| Graham | 2 | v100(***) | 28 | 183105M | 8 | V100-32gb | 70 | 32 | See Graham: Volta GPU nodes |
| Graham | 30 | t4 | 44 | 192000M | 4 | T4-16gb | 75 | 16 | Two GPUs per CPU socket |
| Graham | 6 | t4 | 16 | 192000M | 4 | T4-16gb | 75 | 16 | |
| Mist | 54 | (none) | 32 | 256 GiB | 4 | V100-32gb | 70 | 32 | See Mist specifications |
| Narval | 159 | a100 | 48 | 510000M | 4 | A100-40gb | 80 | 40 | Two GPUs per CPU socket; all GPUs connected via NVLink |

Arbutus: cloud resources are not schedulable via Slurm. See Cloud resources for details of the available hardware.
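The Slurm type specifier in the table is what a job script uses to request a particular GPU model on clusters that offer more than one. Below is a minimal sketch of such a request, assuming a cluster where the v100 specifier is valid (e.g., Béluga); the account name def-someuser is a placeholder, and the CPU and memory figures are simply one quarter of a Béluga GPU node taken from the table, not required values:

```bash
#!/bin/bash
#SBATCH --account=def-someuser    # placeholder account name
#SBATCH --gpus-per-node=v100:1    # one GPU, selected by the type specifier from the table
#SBATCH --cpus-per-task=10        # a quarter of the 40 cores on a 4-GPU Béluga node
#SBATCH --mem=46000M              # roughly a quarter of the node's 191000M
#SBATCH --time=0-03:00            # walltime, D-HH:MM

nvidia-smi                        # show which GPU the job was allocated
```

Omitting the type specifier (e.g., --gpus-per-node=1) asks for any available GPU on the cluster, which is why Mist, with a single GPU model and no specifier listed, needs only a plain GPU count.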