Translations:Storage and file management/12/fr: Difference between revisions
From Alliance Doc
Line 23:
|oui
|non
- |oui, les fichiers de plus de 60 jours <ref> sont purgés.Voir [[Scratch purging policy/fr|Espace /scratch ː Purge automatique]]).</ref>
+ |oui, les fichiers de plus de 60 jours sont purgés. <ref>Voir [[Scratch purging policy/fr|Espace /scratch ː Purge automatique]].</ref>
|oui
|oui
Revision as of 23:15, 4 October 2018
Message definition (Storage and file management)
<tabs>
<tab name="Cedar">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request, subject to the limitations that the minimum project space per quota cannot be less than 1 TB and the sum over all four general-purpose clusters cannot exceed 43 TB. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|2 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for the project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].
</tab>
<tab name="Graham">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|No
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|10 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].
</tab>
<tab name="Béluga and Narval">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|1 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].
</tab>
<tab name="Niagara">
{| class="wikitable"
! location
!colspan="2"| quota
!align="right"| block size
! expiration time
! backed up
! on login nodes
! on compute nodes
|-
| $HOME
|colspan="2"| 100 GB per user
|align="right"| 1 MB
|
| yes
| yes
| read-only
|-
|rowspan="6"| $SCRATCH
|colspan="2"| 25 TB per user (dynamic per group)
|align="right" rowspan="6"| 16 MB
|rowspan="6"| 2 months
|rowspan="6"| no
|rowspan="6"| yes
|rowspan="6"| yes
|-
|align="right"|up to 4 users per group
|align="right"|50TB
|-
|align="right"|up to 11 users per group
|align="right"|125TB
|-
|align="right"|up to 28 users per group
|align="right"|250TB
|-
|align="right"|up to 60 users per group
|align="right"|400TB
|-
|align="right"|above 60 users per group
|align="right"|500TB
|-
| $PROJECT
|colspan="2"| by group allocation (RRG or RPP)
|align="right"| 16 MB
|
| yes
| yes
| yes
|-
| $ARCHIVE
|colspan="2"| by group allocation
|align="right"|
|
| dual-copy
| no
| no
|-
| $BBUFFER
|colspan="2"| 10 TB per user
|align="right"| 1 MB
| very short
| no
| yes
| yes
|}
<ul>
<li>[https://docs.scinet.utoronto.ca/images/9/9a/Inode_vs._Space_quota_-_v2x.pdf Inode vs. Space quota (PROJECT and SCRATCH)]</li>
<li>[https://docs.scinet.utoronto.ca/images/0/0e/Scratch-quota.pdf dynamic quota per group (SCRATCH)]</li>
<li>Compute nodes do not have local storage.</li>
<li>Archive (a.k.a. nearline) space is on [https://docs.scinet.utoronto.ca/index.php/HPSS HPSS]</li>
<li>Backup means a recent snapshot, not an archive of all data that ever was.</li>
<li><code>$BBUFFER</code> stands for [https://docs.scinet.utoronto.ca/index.php/Burst_Buffer Burst Buffer], a faster parallel storage tier for temporary data.</li>
</ul>
</tab>
</tabs>
<tabs>
<tab name="Cedar">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|/home
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|yes
|yes
|no
|yes
|yes
|-
|/scratch
|20 TB and 1M files per user
|yes
|no
|yes, files older than 60 days are purged.<ref>See [[Scratch purging policy/fr|/scratch space: automatic purging]].</ref>
|yes
|yes
|-
|/project
|1 TB and 500,000 files per group<ref>/project space can be increased to 10 TB per group through the Rapid Access Service. The request must be made by the principal investigator responsible for the group by writing to [[technical support]].</ref>
|yes
|yes
|no
|yes
|yes
|}
<references />
</tab>
</tabs>
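The tables above state that files in /scratch older than 60 days are purged. As a rough self-check before a purge cycle, GNU find's age filters can list files past that threshold; a minimal sketch demonstrated on a throwaway directory (the authoritative criteria are those on the Scratch purging policy page):

```shell
# Create a scratch-like file and backdate it past the 60-day window,
# then list purge candidates with find's access-time filter.
tmp=$(mktemp -d)
touch -d "90 days ago" "$tmp/old_results.txt"   # backdates atime and mtime
find "$tmp" -type f -atime +60 -print           # lists the backdated file
rm -rf "$tmp"
```

On a cluster one would point `find` at their own scratch directory instead, e.g. `find /scratch/$USER -type f -atime +60` (the exact scratch path varies by system, and the purge may weigh timestamps differently than this sketch).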