Translations:Cedar/5/en: Difference between revisions

From Alliance Doc

Latest revision as of 20:33, 9 August 2023

Message definition (Cedar)
{| class="wikitable sortable"
|-
| <b>Home space</b><br /> 526TB total volume||
* Location of /home directories.
* Each /home directory has a small fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]].
* Not allocated via [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service RAS] or [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]. Larger requests go to the /project space.
* Has daily backup.
|-
| <b>Scratch space</b><br /> 5.4PB total volume<br />Parallel high-performance filesystem ||
* For active or temporary (scratch) storage.
* Not allocated.
* Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user.
* Inactive data will be [[Scratch purging policy|purged]].
|-
|<b>Project space</b><br />23PB total volume<br />External persistent storage
||
* Not designed for parallel I/O workloads. Use /scratch space instead.
* Large adjustable [[Storage and file management#Filesystem_quotas_and_policies|quota]] per project.
* Has daily backup.
|}
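Since quotas and the purge policy are enforced per filesystem, it is useful to check usage from the command line. The sketch below uses only portable POSIX tools; the cluster-specific quota utilities mentioned in the comments (the Alliance `diskusage_report` helper and Lustre's `lfs quota`) are assumptions based on general Alliance/Lustre documentation, not on this table.

```shell
#!/bin/sh
# Portable sketch for inspecting storage use against the quotas above.
# On Alliance clusters, the site-provided `diskusage_report` utility and
# `lfs quota -u $USER /scratch` (on the Lustre scratch filesystem) report
# quota-aware numbers; the generic stand-ins below work on any POSIX system.

df -h "$HOME" | tail -1          # capacity and use of the filesystem holding /home
du -sh "$HOME" 2>/dev/null       # total size of your home directory
```

For /project and /scratch, substitute the corresponding paths (e.g. `du -sh /scratch/$USER`); on /scratch, remember that files untouched for an extended period are subject to the purge policy linked above.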