Cedar



Cedar (GP2)

Cedar is a heterogeneous cluster, suitable for a variety of workloads, located at Simon Fraser University. It is named for the Western Red Cedar, B.C.'s official tree, which is of great spiritual significance to the region's First Nations people. It was previously known as "GP2" and is still identified as such in the 2017 RAC documentation.

The Cedar system is sold and supported by Scalar Decisions, Inc. The node manufacturer is Dell, the high performance temporary space is from DDN, and the interconnect is from Intel. It is entirely liquid cooled, using rear-door heat exchangers.

Attached Storage System

$HOME: standard home directory.
Not allocated. Small, standard quota; larger requests should go to $PROJECT.

$SCRATCH: parallel high-performance filesystem.
DDN storage subsystem with approximately 4 PB of usable capacity for temporary (/scratch) storage. Aggregate performance of 35 GB/s, available to all nodes. Not allocated. Purged: inactive data will be removed.

$PROJECT: external persistent storage.
Provided by the NDC. Available to compute nodes, but not designed for parallel I/O workloads.
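
The intended division of labour between these three spaces can be made concrete with a short sketch. The minimal Python example below assumes the cluster exports HOME, SCRATCH, and PROJECT environment variables pointing at the three filesystems; the variable names and paths are illustrative assumptions, not something this page specifies.

 import os
 from pathlib import Path
 
 # Assumed environment variables (illustrative; check the cluster's docs):
 #   HOME    - small, quota-limited personal space
 #   SCRATCH - large, fast, purged temporary space (the DDN /scratch filesystem)
 #   PROJECT - persistent NDC-backed space, not meant for parallel I/O
 home = Path(os.environ["HOME"])
 scratch = Path(os.environ["SCRATCH"])
 project = Path(os.environ["PROJECT"])
 
 config_file = home / "experiment.cfg"     # small, long-lived settings
 working_dir = scratch / "run_001"         # heavy intermediate I/O goes here
 archive_dir = project / "results"         # keep only final outputs here
 
 working_dir.mkdir(parents=True, exist_ok=True)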

High Performance Interconnect

Intel OmniPath (version 1) interconnect (100 Gbit/s bandwidth).

A low-latency high-performance fabric connecting all nodes and temporary storage.

Cedar is designed to support multiple simultaneous parallel jobs of up to 1024 cores each in a fully non-blocking manner. Larger jobs see a 2:1 blocking factor, so even jobs running on several thousand cores still get a high-performance interconnect.

Node types and characteristics:

Cedar will have a total of 27,696 CPU cores for computation and 584 GPU devices. Total theoretical peak double-precision performance is 936 teraflops for the CPUs plus 2,744 teraflops for the GPUs, yielding over 3.6 petaflops overall. Twenty-two fully connected "islands" of 32 base or large nodes will each have 1024 cores in a fully non-blocking topology, with each island expected to yield over 30 teraflops of measured double-precision performance (High Performance LINPACK). There is a 2:1 blocking factor between the 1024-core islands.
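
These headline figures can be sanity-checked against the node list below. The sketch assumes 16 double-precision FLOPs per core per cycle for Broadwell (AVX2 with two FMA units) and roughly 4.7 double-precision teraflops peak per P100; both are standard figures for this hardware but are not stated on this page.

 # Sanity check of the peak-performance arithmetic, using the node list below.
 # Assumptions (not from this page): Broadwell sustains 16 DP FLOPs/cycle/core;
 # a PCIe P100 peaks at about 4.7 DP teraflops.
 FLOPS_PER_CYCLE = 16
 nodes = [
     # (node count, cores per node, clock in GHz)
     (576, 32, 2.1),  # base
     (128, 32, 2.1),  # large
     (24,  32, 2.1),  # Bigmem500
     (24,  32, 2.1),  # Bigmem1500
     (114, 24, 2.2),  # GPU base
     (32,  24, 2.2),  # GPU large
     (4,   32, 2.1),  # Bigmem3000
 ]
 cores = sum(n * c for n, c, _ in nodes)
 cpu_tflops = sum(n * c * ghz * FLOPS_PER_CYCLE for n, c, ghz in nodes) / 1000.0
 gpus = (114 + 32) * 4
 gpu_tflops = gpus * 4.7
 print(cores, gpus)        # 27696 584
 print(round(cpu_tflops))  # ~936 teraflops
 print(round(gpu_tflops))  # ~2745 teraflops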

"Base" compute nodes: 576 nodes 128 GB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.
"Large" compute nodes: 128 nodes 256 GB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.
"Bigmem500" 24 nodes 0.5 TB (512 GB) of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.
"Bigmem1500" nodes 24 nodes 1.5 TB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.
"GPU base" nodes: 114 nodes 128 GB of memory, 12 cores/socket, 2 sockets/node, 4 NVIDIA P100 Pascal GPUs/node (12GB HBM2 memory), 2 GPUs/PCI root. Intel "Broadwell" CPUs at 2.2Ghz, model E5-2650 v4
"GPU large" nodes. 32 nodes 256 GB of memory, 12 cores/socket, 2 sockets/node, 4 NVIDIA P100 Pascal GPUs/node (16GB HBM2 memory), All GPUs on the same PCI root. E5-2650 v4
"Bigmem3000" nodes 4 nodes 3 TB of memory, 8 cores/socket, 4 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E7-4809 v4.

All of the above nodes will have local (on-node) temporary storage. GPU nodes will have a single 800 GB SSD; all other compute nodes will have dual 480 GB SSDs, for a total raw capacity of 960 GB.
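
The node-local SSDs are the fastest place for small-file or random-access I/O during a job. A common pattern, sketched below, is to stage inputs from $SCRATCH to the local disk, compute there, and copy results back; the $SLURM_TMPDIR variable is how Slurm-based systems typically expose the local disk, and is an assumption here since this page does not name the scheduler.

 # Stage data to the node-local SSD, compute there, then save results.
 # SLURM_TMPDIR and SCRATCH are assumed variable names (not from this page).
 import os, shutil
 from pathlib import Path
 
 local = Path(os.environ["SLURM_TMPDIR"])
 scratch = Path(os.environ["SCRATCH"])
 
 shutil.copy2(scratch / "inputs.dat", local / "inputs.dat")    # stage in
 # ... run the computation against the local copy ...
 shutil.copy2(local / "outputs.dat", scratch / "outputs.dat")  # stage out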

Temporary storage is a Lustre filesystem based on DDN ES14K technology. It includes 640 NL-SAS disk drives of 8 TB each, and dual redundant metadata controllers with SSD-based storage.
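
Since /scratch is Lustre, large-file bandwidth also depends on how a file is striped across the storage targets. The standard lfs client tool controls this; the sketch below stripes a directory across all available targets, which can help large sequential I/O. A stripe count of -1 means "use all targets"; whether wide striping pays off depends on the workload.

 # Stripe a /scratch directory across all Lustre storage targets.
 # lfs is the standard Lustre client tool; "-c -1" means all available OSTs.
 import os, subprocess
 
 big_dir = os.path.join(os.environ["SCRATCH"], "big_files")  # assumed layout
 os.makedirs(big_dir, exist_ok=True)
 subprocess.run(["lfs", "setstripe", "-c", "-1", big_dir], check=True)
 subprocess.run(["lfs", "getstripe", big_dir], check=True)   # verify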