| Availability: Compute RAC2017 allocations started June 30, 2017
|-
| Login node: '''cedar.alliancecan.ca'''
|-
| Globus endpoint: '''computecanada#cedar-dtn'''
|-
| System Status Page: '''http://status.alliancecan.ca/'''
|}


Cedar is a heterogeneous cluster suitable for a variety of workloads; it is located at Simon Fraser University. It is named for the [https://en.wikipedia.org/wiki/Thuja_plicata Western Red Cedar], B.C.’s official tree, which is of great spiritual significance to the region's First Nations people.
<br/>
Cedar is sold and supported by Scalar Decisions, Inc. The node manufacturer is Dell, the high-performance temporary storage <code>/scratch</code> filesystem is from DDN, and the interconnect is from Intel. It is entirely liquid-cooled, using rear-door heat exchangers.


[[Getting started with the new national systems| Getting started with Cedar]]
{| class="wikitable sortable"
|-
| '''Home space'''<br /> 526TB total volume||
* Location of /home directories.
* Each /home directory has a small fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]].
* Not allocated via [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service RAS] or [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]. Larger requests go to the /project space.
* Has daily backup.
|-
| '''Scratch space'''<br /> 5.4PB total volume<br />Parallel high-performance filesystem ||
* For active or temporary (<code>/scratch</code>) storage.
* Not allocated.
* Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user.
* Inactive data will be [[Scratch purging policy|purged]].
|-
|'''Project space'''<br />23PB total volume<br />External persistent storage
||
* Not designed for parallel I/O workloads. Use the /scratch space instead.
* Large adjustable [[Storage and file management#Filesystem_quotas_and_policies|quota]] per project.
* Has daily backup.
|}
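To see how much of each quota you are currently using, the <code>diskusage_report</code> utility described in [[Storage and file management]] can be run from a login node. The short session below is a sketch only; the exact output format may differ.
<pre>
# Summarize current usage and quotas on /home, /scratch and /project.
# (Illustrative invocation; see Storage and file management for details.)
diskusage_report
</pre>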


The /scratch storage space is a Lustre filesystem based on DDN model ES14K technology. It includes 640 8TB NL-SAS disk drives, and dual redundant metadata controllers with SSD-based storage.
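Because /scratch is a Lustre filesystem, the standard Lustre client tools can be used to inspect it from a login node; for example (illustration only), <code>lfs df</code> lists the metadata and object storage targets backing the filesystem.
<pre>
# List the Lustre metadata targets (MDTs) and object storage targets
# (OSTs) behind /scratch, with human-readable sizes.
lfs df -h /scratch
</pre>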


=High-performance interconnect=

''Intel OmniPath (version 1) interconnect (100Gbit/s bandwidth).''

A low-latency high-performance fabric connecting all nodes and temporary storage.
|}


Note that the amount of available memory is less than the "round number" suggested by the hardware configuration. For instance, "base" nodes do have 128 GiB of RAM, but some of it is permanently occupied by the kernel and OS. To avoid wasting time by swapping/paging, the scheduler will never allocate jobs whose memory requirements exceed the amount of "available" memory shown above.
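For example, a request of <code>--mem=128G</code> cannot be scheduled on a 128 GiB "base" node, while a request somewhat below that (such as <code>--mem=125G</code>) can. The job script fragment below is only a sketch; the account name, time limit and executable are placeholders.
<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account name
#SBATCH --time=01:00:00
# Ask for slightly less than the 128 GiB "round number" so the job
# fits within the memory actually available on a base node.
#SBATCH --mem=125G
./my_program                     # placeholder executable
</pre>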


All nodes have local (on-node) temporary storage. Compute nodes (except GPU nodes) have two 480GB SSD drives, for a total raw capacity of 960GB. GPU nodes have either an 800GB or a 480GB SSD drive. Best practice to access node-local storage is to use the directory generated by [[Running jobs|Slurm]], $SLURM_TMPDIR.
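A minimal job script sketch using <code>$SLURM_TMPDIR</code> (the account name, file names and executable are placeholders):
<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account name
#SBATCH --time=01:00:00
# Copy the input to the fast node-local SSD directory created by Slurm,
# run there, and copy the results back before the job ends.
cp input.dat $SLURM_TMPDIR/
cd $SLURM_TMPDIR
./my_program input.dat           # placeholder executable
cp results.dat $SLURM_SUBMIT_DIR/
</pre>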


== Choosing a node type ==
A number of 48-core nodes are reserved for jobs that require whole nodes. There are no 32-core nodes set aside for whole-node processing. '''Jobs that request fewer than 48 cores per node can end up sharing nodes with other jobs.'''<br>
Most applications will run on Broadwell, Skylake or Cascade Lake nodes, and performance differences are expected to be small compared to job waiting times. Therefore we recommend that you do not select a specific node type for your jobs. If it is necessary, use <code>--constraint=cascade</code>, <code>--constraint=skylake</code> or <code>--constraint=broadwell</code>. If the requirement is for any AVX512 node, use <code>--constraint=[skylake|cascade]</code>. See [[Running_jobs#Specifying_a_CPU_architecture|Specifying a CPU architecture]].
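As an illustration only, the directive below restricts a job to any AVX512-capable (Skylake or Cascade Lake) node; in most cases it should simply be omitted. The account name and executable are placeholders.
<pre>
#!/bin/bash
#SBATCH --account=def-someuser        # placeholder account name
#SBATCH --time=01:00:00
# Only constrain the CPU architecture when the application truly
# requires it; otherwise omit this line to reduce queue wait times.
#SBATCH --constraint=[skylake|cascade]
./my_program                          # placeholder executable
</pre>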


= Submitting and running jobs policy =

As of '''April 17, 2019''', jobs can no longer run in the <code>/home</code> filesystem. The policy was put in place to reduce the load on this filesystem and improve the responsiveness for interactive work. If you get the message <code>Submitting jobs from directories residing in /home is not permitted</code>, transfer the files either to your <code>/project</code> or <code>/scratch</code> directory and submit the job from there.
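For example, assuming the usual <code>/scratch/$USER</code> layout (directory and file names below are placeholders):
<pre>
# Move the job's files out of /home and submit from /scratch instead.
mkdir -p /scratch/$USER/myjob
cp ~/myjob/job_script.sh ~/myjob/input.dat /scratch/$USER/myjob/
cd /scratch/$USER/myjob
sbatch job_script.sh
</pre>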


= Performance =
The theoretical peak double-precision performance of Cedar is 6547 teraflops for CPUs, plus 7434 teraflops for GPUs, yielding almost 14 petaflops of theoretical peak double-precision performance.


Cedar's network topology is made up of "islands" with a 2:1 blocking factor between islands. Within an island the interconnect (Omni-Path fabric) is fully non-blocking.
<br>
Most islands contain 32 nodes: