Translations:Cedar/5/en: Difference between revisions

||
Provided by the [[National_Data_Cyberinfrastructure|NDC]].<br />
Available to compute nodes, but not designed for parallel I/O workloads.<br />
|-
|'''High performance interconnect'''
||
Low-latency high-performance fabric connecting all nodes and temporary storage.<br />
The design of Cedar is to support multiple simultaneous parallel jobs of at least 1024 cores in a fully non-blocking manner. Jobs larger than 1024 cores would be less well-suited for the topology.
|}

Message definition (Cedar)
{| class="wikitable sortable"
|-
| <b>Home space</b><br /> 526TB total volume||
* Location of /home directories.
* Each /home directory has a small fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]].
* Not allocated via [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service RAS] or [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]. Larger requests go to the /project space.
* Has daily backup.
|-
| <b>Scratch space</b><br /> 5.4PB total volume<br />Parallel high-performance filesystem ||
* For active or temporary (scratch) storage.
* Not allocated.
* Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user.
* Inactive data will be [[Scratch purging policy|purged]].
|-
|<b>Project space</b><br />23PB total volume<br />External persistent storage
||
* Not designed for parallel I/O workloads. Use /scratch space instead (see the sketch after this table).
* Large adjustable [[Storage and file management#Filesystem_quotas_and_policies|quota]] per project.
* Has daily backup.
|}
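
To make the guidance above concrete, here is a minimal sketch of how a job might separate its temporary and persistent output, assuming the cluster exports $SCRATCH and $PROJECT as environment variables (as in the table below); the directory and file names are hypothetical.

<syntaxhighlight lang="python">
import os
import shutil

# Assumed environment variables; fall back to safe defaults if they are unset.
scratch = os.environ.get("SCRATCH", "/tmp")
project = os.environ.get("PROJECT", os.path.expanduser("~"))

# Heavy, temporary I/O belongs on /scratch: large quota, high performance,
# but inactive data is purged and nothing is backed up.
workdir = os.path.join(scratch, "myrun")              # hypothetical job directory
os.makedirs(workdir, exist_ok=True)
checkpoint = os.path.join(workdir, "checkpoint.dat")
with open(checkpoint, "wb") as f:
    f.write(b"\0" * 1024 * 1024)                      # stand-in for large intermediate data

# Only results worth keeping are copied to /project: persistent and backed up,
# but not designed for parallel I/O workloads.
shutil.copy(checkpoint, os.path.join(project, "final_result.dat"))
</syntaxhighlight>

The only point being illustrated is the direction of data flow: bulk intermediate files live on the scratch filesystem, and only curated results land in the backed-up project space.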
{| class="wikitable sortable"
|-
| <b>$HOME</b><br />Standard home directory ||
* Not allocated.
* Small, standard quota.
* Larger requests should be on $PROJECT.
|-
| <b>$SCRATCH</b><br />Parallel high-performance filesystem ||
* Approximately 4PB usable capacity for temporary (/scratch) storage.
* Aggregate performance of approximately 40GB/s; available to all nodes (see the parallel write example after this table).
* Not allocated.
* Inactive data will be purged.
|-
| <b>$PROJECT</b><br />External persistent storage ||
* Provided by the NDC.
* Available to compute nodes, but not designed for parallel I/O workloads.
|-
| <b>High performance interconnect</b> ||
* Low-latency high-performance fabric connecting all nodes and temporary storage.
* The design of Cedar is to support multiple simultaneous parallel jobs of at least 1024 cores in a fully non-blocking manner. Jobs larger than 1024 cores would be less well-suited for the topology.
|}
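
Because $SCRATCH is the parallel high-performance filesystem and $PROJECT is not designed for parallel I/O workloads, a parallel job would aggregate its writes on /scratch. Below is a minimal MPI-IO sketch, assuming mpi4py and NumPy are available (neither is mentioned on this page); the file name and block size are hypothetical.

<syntaxhighlight lang="python">
import os
import numpy as np
from mpi4py import MPI                                # assumed available on the cluster

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank writes one contiguous, non-overlapping block of a shared file
# that lives on the parallel /scratch filesystem.
path = os.path.join(os.environ.get("SCRATCH", "/tmp"), "shared_output.bin")
block = np.full(1024, rank, dtype=np.float64)         # hypothetical per-rank data

fh = MPI.File.Open(comm, path, MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at(rank * block.nbytes, block)               # offsets chosen so ranks never overlap
fh.Close()
</syntaxhighlight>

Launched with, for example, <code>mpirun -n 4 python write_scratch.py</code>, every rank writes to the same shared file; the design point is simply that this file sits under $SCRATCH rather than $PROJECT.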