{| class="wikitable"
|-
| Availability: Compute RAC2017 allocations started June 30, 2017
|-
| Login node: <b>cedar.alliancecan.ca</b>
|-
| Globus endpoint: <b>computecanada#cedar-globus</b>
|-
| System Status Page: <b>https://status.alliancecan.ca/</b>
|}


<!--T:2-->
Cedar is a heterogeneous cluster suitable for a variety of workloads; it is located at Simon Fraser University. It is named for the [https://en.wikipedia.org/wiki/Thuja_plicata Western Red Cedar], B.C.’s official tree, which is of great spiritual significance to the region's First Nations people.
 
<br/>
<!--T:3-->
Cedar is sold and supported by Scalar Decisions, Inc. The node manufacturer is Dell, the high-performance temporary storage (the /scratch filesystem) is from DDN, and the interconnect is from Intel. It is entirely liquid-cooled, using rear-door heat exchangers.
<br/>
NOTE: Globus version 4 endpoints are no longer supported. The endpoint <b>computecanada#cedar-dtn</b> has been retired. Please use the version 5 endpoint <b>computecanada#cedar-globus</b>.


<!--T:25-->
[[Getting started|Getting started with Cedar]]<br>
[[Running_jobs|How to run jobs]]<br>
<!--T:26-->
[[Transferring_data|Transferring data]]<br>


==Storage== <!--T:4-->


<!--T:5-->
{| class="wikitable sortable"
|-
| <b>Home space</b><br /> 526TB total volume||
* Location of /home directories.
* Each /home directory has a small fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]].
* Not allocated via [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service RAS] or [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]. Larger requests go to the /project space.
* Has daily backup.
|-
| <b>Scratch space</b><br /> 5.4PB total volume<br />Parallel high-performance filesystem ||
* For active or temporary (scratch) storage.
* Not allocated.
* Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user.
* Inactive data will be [[Scratch purging policy|purged]].
|-
|<b>Project space</b><br />23PB total volume<br />External persistent storage
||
* Not designed for parallel I/O workloads. Use /scratch space instead.
* Large adjustable [[Storage and file management#Filesystem_quotas_and_policies|quota]] per project.
* Has daily backup.
|}


<!--T:18-->
The /scratch storage space is a Lustre filesystem based on DDN model ES14K technology. It includes 640 8TB NL-SAS disk drives, and dual redundant metadata controllers with SSD-based storage.

==High-performance interconnect== <!--T:19-->


<!--T:20-->
<i>Intel OmniPath (version 1) interconnect (100Gbit/s bandwidth).</i>


<!--T:21-->
A low-latency high-performance fabric connecting all nodes and temporary storage.


<!--T:22-->
By design, Cedar supports multiple simultaneous parallel jobs of up to 1024 Broadwell cores (32 nodes), 1536 Skylake cores (32 nodes), or 1536 Cascade Lake cores (32 nodes) in a fully non-blocking manner. For larger jobs the interconnect has a 2:1 blocking factor; even for jobs running on several thousand cores, Cedar provides a high-performance interconnect.
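As an illustrative sketch only (the account name and program are placeholders, and these options alone do not guarantee that the allocation lands entirely within one island), a parallel job sized to fit a single 1024-core Broadwell island could be requested like this:
<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account name
#SBATCH --nodes=32               # at most one 1024-core Broadwell island
#SBATCH --ntasks-per-node=32     # 32 cores per node, 1024 MPI ranks in total
#SBATCH --mem=0                  # request all available memory on each node
#SBATCH --constraint=broadwell   # restrict to the 32-core Broadwell nodes
#SBATCH --time=03:00:00

srun ./my_mpi_program            # placeholder executable
</pre>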


==Node characteristics== <!--T:6-->


<!--T:28-->
Cedar has 100,400 CPU cores for computation, and 1352 GPU devices. Turbo Boost is deactivated for all Cedar nodes.


<!--T:7-->
{| class="wikitable sortable"
! nodes !! cores !! available memory !! CPU !! storage !! GPU
|-
| 256 || 32 || 125G or 128000M || 2 x Intel E5-2683 v4 Broadwell @ 2.1GHz || 2 x 480G SSD || -
|-
| 256 || 32 || 250G or 257000M || 2 x Intel E5-2683 v4 Broadwell @ 2.1GHz || 2 x 480G SSD || -
|-
| 40 || 32 || 502G or 515000M || 2 x Intel E5-2683 v4 Broadwell @ 2.1GHz || 2 x 480G SSD || -
|-
| 16 || 32 || 1510G or 1547000M || 2 x Intel E5-2683 v4 Broadwell @ 2.1GHz || 2 x 480G SSD || -
|-
| 6 || 32 || 4000G or 4096000M || 2 x AMD EPYC 7302 @ 3.0GHz || 2 x 480G SSD || -
|-
| 2 || 40 || 6000G or 6144000M || 4 x Intel Gold 5215 Cascade Lake @ 2.5GHz || 2 x 480G SSD || -
|-
| 96 || 24 || 125G or 128000M || 2 x Intel E5-2650 v4 Broadwell @ 2.2GHz || 1 x 800G SSD || 4 x NVIDIA P100 Pascal (12G HBM2 memory)
|-
| 32 || 24 || 250G or 257000M || 2 x Intel E5-2650 v4 Broadwell @ 2.2GHz || 1 x 800G SSD || 4 x NVIDIA P100 Pascal (16G HBM2 memory)
|-
| 192 || 32 || 187G or 192000M || 2 x Intel Silver 4216 Cascade Lake @ 2.1GHz || 1 x 480G SSD || 4 x NVIDIA V100 Volta (32G HBM2 memory)
|-
| 608 || 48 || 187G or 192000M || 2 x Intel Platinum 8160F Skylake @ 2.1GHz || 2 x 480G SSD || -
|-
| 768 || 48 || 187G or 192000M || 2 x Intel Platinum 8260 Cascade Lake @ 2.4GHz || 2 x 480G SSD || -
|}
<!--T:29-->
Note that the amount of available memory is less than the <i>round number</i> suggested by the hardware configuration. For instance, <i>base</i> nodes do have 128 GiB of RAM, but some of it is permanently occupied by the kernel and the operating system. To avoid wasting time on swapping or paging, the scheduler will never allocate jobs whose memory requirements exceed the amount of <i>available</i> memory shown above.
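For example (a minimal sketch; the account name and program are placeholders), a job meant for a 128 GiB base node should request no more than the available memory listed in the table:
<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account name
#SBATCH --cpus-per-task=32       # a full 32-core Broadwell base node
#SBATCH --mem=125G               # equals 128000M and fits the available memory;
                                 #   --mem=128G would exclude the 128 GiB nodes
#SBATCH --time=01:00:00

./my_program                     # placeholder executable
</pre>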


<!--T:10-->
All nodes have local (on-node) temporary storage. Compute nodes (except GPU nodes) have two 480GB SSD drives, for a total raw capacity of 960GB. GPU nodes have either an 800GB or a 480GB SSD drive. Use node-local storage through the job-specific directory created by the scheduler, <code>$SLURM_TMPDIR</code>. See [[Using node-local storage]].
 
===Choosing a node type=== <!--T:27-->
A number of 48-core nodes are reserved for jobs that require whole nodes. There are no 32-core nodes set aside for whole-node processing. <b>Jobs that request fewer than 48 cores per node can end up sharing nodes with other jobs.</b><br>
Most applications will run on Broadwell, Skylake, or Cascade Lake nodes, and performance differences are expected to be small compared to job waiting times. We therefore recommend that you not select a specific node type for your jobs. If it is necessary, use <code>--constraint=cascade</code>, <code>--constraint=skylake</code> or <code>--constraint=broadwell</code>. If the requirement is for any AVX512 node, use <code>--constraint=[skylake|cascade]</code>.
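For example, a hedged sketch of a whole-node job restricted to the AVX512 (Skylake or Cascade Lake) node types using the constraints above; the account name and program are placeholders:
<pre>
#!/bin/bash
#SBATCH --account=def-someuser          # placeholder account name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48            # all 48 cores, so the node is not shared with other jobs
#SBATCH --mem=0                         # request all available memory on the node
#SBATCH --constraint=[skylake|cascade]  # any AVX512 node type
#SBATCH --time=12:00:00

srun ./my_program                       # placeholder executable
</pre>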


==Submitting and running jobs policy== <!--T:30-->
 
<!--T:31-->
As of <b>April 17, 2019</b>, jobs can no longer run in the <code>/home</code> filesystem. The policy was put in place to reduce the load on this filesystem and improve the responsiveness for interactive work. If you get the message <code>Submitting jobs from directories residing in /home is not permitted</code>, transfer the files either to your <code>/project</code> or <code>/scratch</code> directory and submit the job from there.
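For instance (directory and script names are illustrative, and <code>~/scratch</code> is assumed to be the usual symlink to your /scratch space):
<pre>
# Copy the job directory out of /home, then submit from /scratch.
cp -r ~/my_job_dir ~/scratch/my_job_dir
cd ~/scratch/my_job_dir
sbatch job_script.sh
</pre>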
 
==Performance== <!--T:17-->
The theoretical peak double-precision performance of Cedar is 6547 teraflops for CPUs plus 7434 teraflops for GPUs, for a total of almost 14 petaflops.
 
<!--T:32-->
Cedar's network topology is made up of <i>islands</i> with a 2:1 blocking factor between islands. Within an island the interconnect (Omni-Path fabric) is fully non-blocking.
<br>
Most islands contain 32 nodes:
* 16 islands with 32 Broadwell nodes, each with 32 cores, i.e., 1024 cores per island;
* 43 islands with 32 Skylake or Cascade Lake nodes, each with 48 cores, i.e., 1536 cores per island;
* 4 islands with 32 P100 GPU nodes;
* 6 islands with 32 V100 GPU nodes;
* 2 islands each with 32 big memory nodes; of these 64 nodes, 40 have 0.5TB, 16 have 1.5TB, 6 have 4TB, and 2 have 6TB of memory.


<!--T:16-->