Cedar (GP2)

Expected availability: June 2017 for opportunistic use. NOT YET AVAILABLE.
Login node: cedar.computecanada.ca
Globus endpoint: computecanada#cedar-dtn
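
Once the system opens up, interactive access is over SSH to the login node above. For scripted access from Python, a minimal sketch using the paramiko library might look like the following; the username is a placeholder and the SSH key or agent setup is assumed to already be in place, since this page does not cover authentication.

  import paramiko

  # Connect to the Cedar login node; "your_username" is a placeholder,
  # and an SSH key or agent is assumed to be configured already.
  client = paramiko.SSHClient()
  client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
  client.connect("cedar.computecanada.ca", username="your_username")

  # Run a trivial command to confirm the connection works.
  _, stdout, _ = client.exec_command("hostname")
  print(stdout.read().decode().strip())
  client.close()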

Cedar is a heterogeneous cluster suitable for a variety of workloads; it is located at Simon Fraser University. It is named for the Western Red Cedar, B.C.’s official tree, which is of great spiritual significance to the region's First Nations people. It was previously known as "GP2" and is still identified as such in the 2017 RAC documentation.

Cedar is sold and supported by Scalar Decisions, Inc. The node manufacturer is Dell, the high performance temporary space is from DDN, and the interconnect is from Intel. It is entirely liquid cooled, using rear-door heat exchangers.

Getting started with Cedar: https://docs.computecanada.ca/wiki/Getting_Started_with_the_new_National_Systems

Attached storage

Home space
  • Standard home directory.
  • Small, standard quota.
  • Not allocated via RAS or RAC. Larger requests go to Project space.

Scratch space (parallel high-performance filesystem)
  • For active or temporary (/scratch) storage.
  • Available to all nodes.
  • Not allocated.
  • Inactive data will be purged.
  • DDN storage with approximately 4 PB usable capacity.

Project space (external persistent storage)
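
The mount points for these spaces are not listed on this page; assuming conventional paths such as /home, /scratch and /project, a short sketch like the one below reports how full each filesystem is (note that shutil.disk_usage gives whole-filesystem figures, not your personal quota).

  import shutil

  # Paths are assumptions for illustration; the actual mount points on
  # Cedar may differ once the storage documentation is published.
  for name, path in [("home", "/home"), ("scratch", "/scratch"), ("project", "/project")]:
      try:
          usage = shutil.disk_usage(path)
          print(f"{name:8s} {usage.used / usage.total:6.1%} used of {usage.total / 1e12:.1f} TB")
      except FileNotFoundError:
          print(f"{name:8s} not mounted here")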

High-performance interconnect

Intel OmniPath (version 1) interconnect (100 Gbit/s bandwidth).

A low-latency high-performance fabric connecting all nodes and temporary storage.

By design, Cedar supports multiple simultaneous parallel jobs of up to 1,024 cores in a fully non-blocking manner. Larger jobs see a 2:1 blocking factor on the interconnect, so even jobs running on several thousand cores still benefit from a high-performance fabric.
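
To make the topology concrete, the sketch below restates the island arithmetic in code: a job that fits within one 32-node island (1,024 cores) sees a fully non-blocking fabric, while anything larger spans islands and sees the 2:1 blocking factor. This is an illustration of the description above, not something the scheduler exposes.

  import math

  CORES_PER_ISLAND = 32 * 32  # 32 base/large nodes x 32 cores each, per the text above

  def islands_spanned(job_cores: int) -> int:
      """Minimum number of 1,024-core islands a job of this size occupies."""
      return math.ceil(job_cores / CORES_PER_ISLAND)

  for cores in (256, 1024, 4096):
      n = islands_spanned(cores)
      fabric = "fully non-blocking" if n == 1 else "2:1 blocking between islands"
      print(f"{cores:5d} cores -> {n} island(s), {fabric}")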

Node types and characteristics

Cedar has a total of 27,696 CPU cores for computation and 584 GPU devices. Total theoretical peak double-precision performance is 936 teraflops for the CPUs plus 2,744 teraflops for the GPUs, for over 3.6 petaflops overall. Twenty-two fully connected "islands" of 32 base or large nodes each provide 1,024 cores in a fully non-blocking Omni-Path topology, with each island designed to yield over 30 teraflops of double-precision performance as measured with High Performance LINPACK. There is a 2:1 blocking factor between the 1,024-core islands.

"Base" compute nodes: 576 nodes 128 GB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.
"Large" compute nodes: 128 nodes 256 GB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.
"Bigmem500" 24 nodes 0.5 TB (512 GB) of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.
"Bigmem1500" nodes 24 nodes 1.5 TB of memory, 16 cores/socket, 2 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E5-2683 v4.
"GPU base" nodes: 114 nodes 128 GB of memory, 12 cores/socket, 2 sockets/node, 4 NVIDIA P100 Pascal GPUs/node (12GB HBM2 memory), 2 GPUs/PCI root. Intel "Broadwell" CPUs at 2.2Ghz, model E5-2650 v4
"GPU large" nodes. 32 nodes 256 GB of memory, 12 cores/socket, 2 sockets/node, 4 NVIDIA P100 Pascal GPUs/node (16GB HBM2 memory), All GPUs on the same PCI root. E5-2650 v4
"Bigmem3000" nodes 4 nodes 3 TB of memory, 8 cores/socket, 4 sockets/node. Intel "Broadwell" CPUs at 2.1Ghz, model E7-4809 v4.

All of the above nodes have local (on-node) temporary storage. GPU nodes have a single 800 GB SSD drive. All other compute nodes have two 480 GB SSD drives, for a total raw capacity of 960 GB.

Scratch storage is a Lustre filesystem based on DDN model ES14K technology. It includes 640 × 8 TB NL-SAS disk drives and dual redundant metadata controllers with SSD-based storage.
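
As a rough check, the drive count above corresponds to about 5.1 PB of raw capacity, against the roughly 4 PB usable quoted in the storage table; the difference is presumably parity and filesystem overhead (an assumption, since this page does not break it down).

  drives = 640
  tb_per_drive = 8
  raw_pb = drives * tb_per_drive / 1000   # 5.12 PB raw
  usable_pb = 4                           # approximate usable figure quoted above
  print(f"raw: {raw_pb:.2f} PB, usable: ~{usable_pb} PB ({usable_pb / raw_pb:.0%} of raw)")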