Revision as of 19:25, 27 November 2017
Message definition (Storage and file management)
<tabs>
<tab name="Cedar">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request, subject to the limitations that the minimum project space per quota cannot be less than 1 TB and the sum over all four general-purpose clusters cannot exceed 43 TB. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|2 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for the project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].
</tab>
<tab name="Graham">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|No
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|10 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].
</tab>
<tab name="Béluga and Narval">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|1 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service]. Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].
</tab>
<tab name="Niagara">
{| class="wikitable"
! location
!colspan="2"| quota
!align="right"| block size
! expiration time
! backed up
! on login nodes
! on compute nodes
|-
| $HOME
|colspan="2"| 100 GB per user
|align="right"| 1 MB
|
| yes
| yes
| read-only
|-
|rowspan="6"| $SCRATCH
|colspan="2"| 25 TB per user (dynamic per group)
|align="right" rowspan="6"| 16 MB
|rowspan="6"| 2 months
|rowspan="6"| no
|rowspan="6"| yes
|rowspan="6"| yes
|-
|align="right"|up to 4 users per group
|align="right"|50TB
|-
|align="right"|up to 11 users per group
|align="right"|125TB
|-
|align="right"|up to 28 users per group
|align="right"|250TB
|-
|align="right"|up to 60 users per group
|align="right"|400TB
|-
|align="right"|above 60 users per group
|align="right"|500TB
|-
| $PROJECT
|colspan="2"| by group allocation (RRG or RPP)
|align="right"| 16 MB
|
| yes
| yes
| yes
|-
| $ARCHIVE
|colspan="2"| by group allocation
|align="right"|
|
| dual-copy
| no
| no
|-
| $BBUFFER
|colspan="2"| 10 TB per user
|align="right"| 1 MB
| very short
| no
| yes
| yes
|}
<ul>
<li>[https://docs.scinet.utoronto.ca/images/9/9a/Inode_vs._Space_quota_-_v2x.pdf Inode vs. Space quota (PROJECT and SCRATCH)]</li>
<li>[https://docs.scinet.utoronto.ca/images/0/0e/Scratch-quota.pdf dynamic quota per group (SCRATCH)]</li>
<li>Compute nodes do not have local storage.</li>
<li>Archive (a.k.a. nearline) space is on [https://docs.scinet.utoronto.ca/index.php/HPSS HPSS]</li>
<li>Backup means a recent snapshot, not an archive of all data that ever was.</li>
<li><code>$BBUFFER</code> stands for [https://docs.scinet.utoronto.ca/index.php/Burst_Buffer Burst Buffer], a faster parallel storage tier for temporary data.</li>
</ul>
</tab>
</tabs>
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user
|Cedar: yes; Graham: no (network filesystem)
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB (100 TB on Graham) and 1M files per user<ref>Scratch space on Cedar can be increased to 100 TB per user on request; contact [[technical support]].</ref>
|Yes
|No
|Yes, files older than 60 days<ref>Based on the file's <tt>ctime</tt> value.</ref> may be purged.
|Yes
|Yes
|-
|Project Space
|1 TB and 5M files per group<ref>Project space can be increased to 10 TB per group on request; contact [[technical support]]. Requests from members of the same group are added together and cannot exceed 10 TB.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|5 TB per group
|No
|No
|No
|No
|No
|}
<references />
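Since the purge policy is based on each file's <tt>ctime</tt>, you can check which of your files are at risk before the 60-day cutoff. The sketch below uses standard GNU <code>stat</code> and <code>find</code>; the <code>$SCRATCH</code> variable and the fallback path are assumptions for illustration, not part of the official policy tooling.

```shell
# Sketch: spot files at risk of purging (ctime older than 60 days).
# SCRATCH is assumed to point at your scratch directory; the /tmp
# fallback here is only so the example runs anywhere.
DIR="${SCRATCH:-/tmp/scratch-demo}"
mkdir -p "$DIR"
touch "$DIR/example.txt"

# The purge policy uses ctime; GNU stat prints it with %z:
stat -c '%n  ctime: %z' "$DIR/example.txt"

# List files whose ctime is more than 60 days old (purge candidates):
find "$DIR" -type f -ctime +60 -print
```

Note that copying or touching a file updates its <tt>ctime</tt>, so recently accessed data will not match `-ctime +60`.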
Backups of the project and home spaces are made nightly and retained for 30 days.

Deleted files are retained for 60 days. To recover an earlier version of a file or directory, contact [[technical support]] with the full path and the date.

To copy data from nearline storage to the project, home or scratch spaces, contact [[technical support]].
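To see how your current usage compares with the quotas in the table above, the standard Lustre client command <code>lfs quota</code> can be used on Lustre-based spaces. This is a sketch only: the <code>/scratch</code> mount point is an assumption and varies by cluster, so run it on a login node and substitute the actual path.

```shell
# Sketch: check current usage against the quotas above.
# 'lfs quota' is the standard Lustre client command; the /scratch
# path is an assumption and differs between clusters.
if command -v lfs >/dev/null 2>&1; then
    lfs quota -u "$USER" /scratch
else
    echo "Lustre tools not found; run this on a cluster login node."
fi
```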