Graham (GP3)
Graham is a heterogeneous cluster, suitable for a variety of workloads, and located at the University of Waterloo. It is named after Wes Graham, the first director of the Computing Centre at Waterloo. It was previously known as "GP3" and is still identified as such in the 2017 RAC documentation.
The parallel filesystem and external persistent storage (NDC-Waterloo) are similar to Cedar's. The interconnect is different and there is a slightly different mix of compute nodes.
Attached Storage System
$HOME | Standard home directory
$SCRATCH (parallel high-performance filesystem) | Approximately 3 PB usable capacity for active or temporary (scratch) storage
$PROJECT (external persistent storage) | Provided by the NDC.
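The role of each filesystem determines where job data should go. The following minimal Python sketch illustrates one way a job might direct temporary and persistent output using environment variables named after the filesystems above; the SCRATCH and PROJECT variable names, fallback paths, and directory layout are assumptions for illustration, not part of the Graham documentation.

import os
import pathlib

# Assumed environment variables matching the storage table above; fall back to
# local defaults so the sketch also runs outside the cluster.
home = pathlib.Path(os.environ.get("HOME", "."))
scratch = pathlib.Path(os.environ.get("SCRATCH", "/tmp"))
project = pathlib.Path(os.environ.get("PROJECT", str(home)))

# Large, temporary job data goes to the scratch filesystem...
work_dir = scratch / "example_run"
work_dir.mkdir(parents=True, exist_ok=True)
(work_dir / "intermediate.dat").write_text("temporary data\n")

# ...while final results are kept on the persistent project space.
results_dir = project / "example_run"
results_dir.mkdir(parents=True, exist_ok=True)
(results_dir / "summary.txt").write_text("final results\n")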
High-performance interconnect
A low-latency, high-bandwidth InfiniBand fabric connects all nodes and scratch storage.
Graham is designed to support multiple simultaneous parallel jobs of up to 1024 cores each in a fully non-blocking manner.
Node types and characteristics
Processor type: all nodes have Intel E5-2683 v4 CPUs running at 2.1 GHz
GPU type: NVIDIA P100 with 12 GB of memory
"Base" compute nodes | 800 nodes | 16 cores/socket, 2 sockets/node, 128 GB of memory |
"Large" nodes | 56 nodes | 16 cores/socket, 2 sockets/node, 256 GB of memory. |
"Bigmem500" nodes | 24 nodes | 16 cores/socket, 2 sockets/node, 512 GB of memory. |
"Bigmem3000" nodes | 3 nodes | 16 cores/socket, 4 sockets/node, 3 TB of memory. |
"GPU" nodes | 160 nodes | 16 cores/socket, 2 sockets/node, 128 GB of memory, 2 NVIDIA P100 GPUs. |
All of the above nodes will have approximately 1 TB of local (on-node) storage, provided by SSD drives and available in /tmp.
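Node-local storage of this kind is typically used for I/O-intensive temporary files that exist only for the duration of a job. A minimal Python sketch, assuming the /tmp location described above (the file name and data size are illustrative, not from the Graham documentation):

import pathlib
import tempfile

# Create a per-job working directory on the node-local disk (/tmp, as described
# above) and remove it automatically when the block exits.
with tempfile.TemporaryDirectory(dir="/tmp") as local_dir:
    work = pathlib.Path(local_dir) / "fast_io.dat"
    work.write_bytes(b"\0" * 1024 * 1024)   # illustrative 1 MiB of temporary data
    data = work.read_bytes()
    print(f"wrote and read {len(data)} bytes from {work}")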
The delivery and installation schedule is not yet confirmed.