Graham (GP3)

GRAHAM is a heterogeneous cluster, suitable for a variety of workloads, and located at the University of Waterloo. It is named after Wes Graham, the first director of the Computing Centre at Waterloo. It was previously known as "GP3" and is still identified as such in the 2017 RAC documentation.

The parallel filesystem and external persistent storage (NDC-Waterloo) are similar to Cedar's. The interconnect is different and there is a slightly different mix of compute nodes.

Attached Storage System

$HOME
Standard home directory.
Small, standard quota.
Larger requests should go on $PROJECT.

$SCRATCH
Parallel high-performance filesystem.
Approximately 3 PB usable capacity for active or temporary (/scratch) storage.
Available to all nodes.
Not allocated.
Purged: inactive data will be purged.

$PROJECT
External persistent storage.
Provided by the NDC.
Available to compute nodes, but not designed for parallel I/O workloads.
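
For example, a job might keep its working files on $SCRATCH while it runs and copy only the final output to project space when it finishes. A minimal Python sketch of that pattern, assuming a SCRATCH environment variable that points at the user's /scratch directory and a placeholder project destination (neither name is confirmed for Graham):

  import os
  import shutil
  from pathlib import Path

  # Working area on the purgeable scratch filesystem; SCRATCH is an assumed
  # environment variable, with the home directory as a fallback so the
  # sketch still runs elsewhere.
  scratch = Path(os.environ.get("SCRATCH", Path.home()))
  work_dir = scratch / "example_job"
  work_dir.mkdir(parents=True, exist_ok=True)

  # Stand-in for the job's real output.
  result = work_dir / "result.txt"
  result.write_text("final output of the job\n")

  # Copy only the final result to persistent project space, since project
  # space is not designed for heavy parallel I/O during the run. The path
  # below is a hypothetical placeholder, not an actual allocation.
  project_dir = Path(os.environ.get("PROJECT", Path.home() / "project")) / "results"
  project_dir.mkdir(parents=True, exist_ok=True)
  shutil.copy2(result, project_dir / result.name)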

High-performance interconnect

A low-latency, high-bandwidth InfiniBand fabric connects all nodes and scratch storage.

Graham is designed to support multiple simultaneous parallel jobs of up to 1024 cores in a fully non-blocking manner.
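
As an illustration of such a job, the sketch below uses mpi4py to combine one value from every MPI rank with an allreduce; when ranks are spread across nodes, that exchange travels over the InfiniBand fabric. It assumes an MPI library and the mpi4py package are available on the cluster.

  from mpi4py import MPI  # assumes an MPI stack and the mpi4py package are installed

  # Each rank contributes one value; allreduce combines the values across
  # every rank in the job, including ranks running on other nodes, whose
  # messages cross the interconnect.
  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()
  size = comm.Get_size()

  local_value = rank + 1
  total = comm.allreduce(local_value, op=MPI.SUM)

  if rank == 0:
      print(f"{size} ranks, combined sum = {total}")

Launched with, for example, mpirun -n 1024 python allreduce_demo.py inside an allocation of that size, every rank takes part in the reduction; the script name and launch command here are illustrative only.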

Node types and characteristics

Processor type: All nodes have Intel E5-2683 v4 CPUs running at 2.1 GHz.

GPU type: NVIDIA P100 with 12 GB of memory.

"Base" compute nodes 800 nodes 16 cores/socket, 2 sockets/node, 128 GB of memory
"Large" nodes 56 nodes 16 cores/socket, 2 sockets/node, 256 GB of memory.
"Bigmem500" nodes 24 nodes 16 cores/socket, 2 sockets/node, 512 GB of memory.
"Bigmem3000" nodes 3 nodes 16 cores/socket, 4 sockets/node, 3 TB of memory.
"GPU" nodes 160 nodes 16 cores/socket, 2 sockets/node, 128 GB of memory, 2 NVIDIA P100 GPUs.

All of the above nodes will have approximately 1 TB of local (on-node) storage, provided by SSD drives and available in /tmp.
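
For I/O-intensive intermediate files, a job can therefore use the node-local SSD rather than the shared filesystems. A minimal sketch, assuming the /tmp mount described above and a SCRATCH environment variable for the shared destination (an assumption, with the home directory as fallback):

  import os
  import shutil
  import tempfile

  # Create a working directory on the node-local SSD mounted at /tmp.
  local_dir = tempfile.mkdtemp(dir="/tmp")
  intermediate = os.path.join(local_dir, "intermediate.dat")

  # Stand-in for I/O-heavy work that benefits from fast local storage.
  with open(intermediate, "wb") as f:
      f.write(os.urandom(1024 * 1024))

  # Copy only what must persist back to shared storage, then free the
  # node-local space.
  dest_dir = os.environ.get("SCRATCH", os.path.expanduser("~"))
  shutil.copy2(intermediate, os.path.join(dest_dir, "intermediate.dat"))
  shutil.rmtree(local_dir)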

The delivery and installation schedule is not yet confirmed.