Using GPUs with Slurm

{| class="wikitable"
! Number of nodes !! GPU type (gres) !! CPU cores per node !! CPU memory per node !! GPUs per node !! GPU model !! Compute Capability(*) !! GPU memory (GB) !! Notes
|-
| 7  || v100     || 28 || 183105M || 8 || V100-PCIE || 70 || 16 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
| 2  || v100(**) || 28 || 183105M || 8 || V100-?    || 70 || 32 || See [[Graham#Volta_GPU_nodes_on_Graham|Graham: Volta GPU nodes]]
|-
| 30 || t4       || 44 || 192000M || 4 || Tesla T4  || 75 || 16 || Two GPUs per CPU socket
|}
(*) "Compute Capability" is a technical term created by NVIDIA as a compact way to describe which hardware functions are available on some models of GPU and not on others.
It is not a measure of performance. It is relevant only if you are compiling your own GPU programs. See the page on [[CUDA#.22Compute_Capability.22|CUDA programming]] for more.
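
For example, when compiling CUDA code yourself with <code>nvcc</code>, the Compute Capability values in the table map onto the <code>-arch</code> option. A minimal sketch, assuming a CUDA toolkit module is available; the file names are placeholders:

<pre>
# Load a CUDA toolkit module (the module name/version may differ).
module load cuda

# Compute Capability 70 (V100) corresponds to sm_70; 75 (T4) would be sm_75.
# "kernel.cu" and "kernel" are placeholder file names.
nvcc -arch=sm_70 -O2 -o kernel kernel.cu
</pre>
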
(**) To access the large-memory V100 nodes on Graham, add the following argument to your sbatch/salloc command: "--constraint=cascade,v100".
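
For example, a minimal sketch of a job script targeting these 32GB V100 nodes; the account name, resource amounts, and run time are placeholders to adjust for your own job:

<pre>
#!/bin/bash
#SBATCH --account=def-someuser        # placeholder account name
#SBATCH --gres=gpu:v100:1             # request one V100 GPU
#SBATCH --constraint=cascade,v100     # restrict the job to the large-memory V100 nodes
#SBATCH --cpus-per-task=3             # placeholder CPU count
#SBATCH --mem=32000M                  # placeholder memory request
#SBATCH --time=0-03:00                # placeholder run time (D-HH:MM)
nvidia-smi                            # replace with your own program
</pre>
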


== Mist == <!--T:38-->