Translations:Storage and file management/12/fr: Difference between revisions

Backup copies of the project and ''home'' spaces are made nightly; they are kept for 30 days, and deleted files are retained for 60 days. To recover an earlier version of a file or directory, contact [[Technical support/fr|technical support]] with the full path and the date.<br />
To copy data from ''nearline'' storage to the project, ''home'' or ''scratch'' spaces, contact [[Technical support/fr|technical support]].
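Since a restore request needs the file's absolute path, `realpath` is a quick way to obtain it. A minimal sketch (the filename is only an example, not a real file on any cluster):

```shell
# -m resolves the path even if the file no longer exists,
# which is exactly the situation when requesting a restore
# of a deleted file. "results.csv" is a hypothetical name.
realpath -m results.csv
```

Include the printed path and the date of the version you need in your message to support.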

Revision as of 20:34, 10 July 2018

Message definition (Storage and file management)
<tabs>
<tab name="Cedar">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics 
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request, subject to the limitations that the minimum project space quota cannot be less than 1 TB and the sum over all four general-purpose clusters cannot exceed 43 TB. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|2 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for the project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service].  Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)]. 
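The footnoted limits on project-space increases (each cluster between 1 TB and 40 TB, at most 43 TB summed over the four general-purpose clusters) can be sketched as a small arithmetic check. This is a hypothetical helper in whole TB, not an official tool:

```shell
# Hypothetical helper (not a cluster command): verify that requested
# project-space quotas, one per general-purpose cluster in whole TB,
# respect the footnoted limits: 1 TB <= each <= 40 TB, sum <= 43 TB.
check_ras_limits() {
  local sum=0 q
  for q in "$@"; do
    if [ "$q" -lt 1 ] || [ "$q" -gt 40 ]; then
      echo "invalid request: ${q} TB"
      return 1
    fi
    sum=$((sum + q))
  done
  if [ "$sum" -gt 43 ]; then
    echo "total ${sum} TB exceeds the 43 TB cap"
    return 1
  fi
  echo "ok: total ${sum} TB"
}

check_ras_limits 40 1 1 1   # a maximal request on one cluster still fits
```

Anything beyond these limits goes through the annual RAC instead.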
</tab>
<tab name="Graham">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics 
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|No
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|10 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service].  Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].  
</tab>
<tab name="Béluga and Narval">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics 
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|1 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service].  Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].  
</tab>
<tab name="Niagara">
{| class="wikitable"
! location
!colspan="2"| quota
!align="right"| block size
! expiration time
! backed up
! on login nodes
! on compute nodes
|-
| $HOME
|colspan="2"| 100 GB per user
|align="right"| 1 MB
|
| yes
| yes
| read-only
|-
|rowspan="6"| $SCRATCH
|colspan="2"| 25 TB per user (dynamic per group)
|align="right" rowspan="6" | 16 MB
|rowspan="6"| 2 months
|rowspan="6"| no
|rowspan="6"| yes
|rowspan="6"| yes
|-
|align="right"|up to 4 users per group
|align="right"|50 TB
|-
|align="right"|up to 11 users per group
|align="right"|125 TB
|-
|align="right"|up to 28 users per group
|align="right"|250 TB
|-
|align="right"|up to 60 users per group
|align="right"|400 TB
|-
|align="right"|above 60 users per group
|align="right"|500 TB
|-
| $PROJECT
|colspan="2"| by group allocation (RRG or RPP)
|align="right"| 16 MB
|
| yes
| yes
| yes
|-
| $ARCHIVE
|colspan="2"| by group allocation
|align="right"| 
|
| dual-copy
| no
| no
|-
| $BBUFFER
|colspan="2"| 10 TB per user
|align="right"| 1 MB
| very short
| no
| yes
| yes
|}
<ul>
<li>[https://docs.scinet.utoronto.ca/images/9/9a/Inode_vs._Space_quota_-_v2x.pdf Inode vs. Space quota (PROJECT and SCRATCH)]</li>
<li>[https://docs.scinet.utoronto.ca/images/0/0e/Scratch-quota.pdf dynamic quota per group (SCRATCH)]</li>
<li>Compute nodes do not have local storage.</li>
<li>Archive (a.k.a. nearline) space is on [https://docs.scinet.utoronto.ca/index.php/HPSS HPSS]</li>
<li>Backup means a recent snapshot, not an archive of all data that ever was.</li>
<li><code>$BBUFFER</code> stands for [https://docs.scinet.utoronto.ca/index.php/Burst_Buffer Burst Buffer], a faster parallel storage tier for temporary data.</li></ul>
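The 16 MB block size noted in the table matters for small files: every file occupies at least one block, so many tiny files can consume far more space than their contents suggest. A minimal sketch with GNU `du`, run on a throwaway file (on an ordinary filesystem here, so the allocated size reflects the local block size, not Niagara's 16 MB):

```shell
# Compare a file's apparent size (its contents) with its allocated
# size (blocks actually reserved). On a 16 MB-block filesystem the
# allocated figure would be at least 16 MiB even for a 5-byte file.
f=$(mktemp)
printf 'hello' > "$f"
du -b  "$f"   # apparent size: 5 bytes
du -B1 "$f"   # allocated size: at least one filesystem block
rm -f "$f"
```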
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Cedar: yes; Graham: no (network filesystem)
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user<ref>Scratch space cannot currently be increased; a temporary additional quota of 100 TB per user applies on Graham and will be reduced to 20 TB once a technical solution is deployed.</ref>
|Yes
|No
|Yes, files older than 60 days are purged.<ref>Based on the file's ctime (see [[Scratch purging policy]]).</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 10 TB per group through the Rapid Access Service. The request must be made by the group's sponsoring PI by writing to [[technical support]].</ref>
|Yes
|Yes
|No
|Yes
|Yes
|}
<references />
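Since purging is driven by each file's ctime, one can list likely purge candidates ahead of time. A hedged sketch (`purge_candidates` is a name invented here, not a cluster command):

```shell
# List regular files whose ctime is more than 60 days old -- the same
# criterion the scratch purge applies.
purge_candidates() {
  find "$1" -type f -ctime +60 -print
}

# On a cluster you would typically run:
#   purge_candidates "$SCRATCH"
```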