Translations:Storage and file management/12/fr: Difference between revisions

<tab name="Cedar">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Caractéristiques des systèmes de fichiers
! Espace
! Quota par défaut
! Basé sur Lustre
! Copié pour sauvegarde
! Purgé
! Disponible par défaut
! Monté sur des nœuds de calcul
|-
|/home
|50Go et 500K fichiers par utilisateur<ref>Ce quota est fixe et ne peut être changé.</ref>
|Oui
|Oui
|Non
|Oui
|Oui
|-
|/scratch
|20To et 1M fichiers par utilisateur
|Oui
|Non
|Les fichiers de plus de 60 jours sont purgés.<ref>Pour plus d'information, voir la [[Scratch purging policy/fr|politique de purge automatique]].</ref>
|Oui
|Oui
|-
|/project
|1To et 5M fichiers par groupe<ref>L'espace /project peut être augmenté à 10To par groupe en recourant au service d'accès rapide. La demande doit être faite par le chercheur principal responsable pour le groupe en s'adressant au [[Technical support/fr|soutien technique]].</ref>
|Oui
|Oui
|Non
|Oui
|Oui
|-
|Nearline Space
|2 TB and 5000 files per group
|Non
|N/A
|Non
|Oui
|Non
|}
<references />
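The table above notes that files in /scratch older than 60 days are purged. As an illustration only, here is a minimal Python sketch that lists files in a directory tree which have gone 60 days without access or modification; the target path and the atime/mtime test are assumptions made for this sketch, and the rules described in the [[Scratch purging policy/fr|politique de purge automatique]] remain authoritative.
<syntaxhighlight lang="python">
#!/usr/bin/env python3
# Minimal sketch (not the official purge tool): list files under a directory
# that have not been accessed or modified in the last 60 days, i.e. likely
# candidates under the /scratch purge policy described above. The path and
# the atime/mtime test are illustrative assumptions; the purge itself is
# applied by the system according to the published policy.
import os
import sys
import time

PURGE_AGE_DAYS = 60

def purge_candidates(root, age_days=PURGE_AGE_DAYS):
    """Yield paths whose atime and mtime are both older than age_days."""
    cutoff = time.time() - age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path, follow_symlinks=False)
            except OSError:
                continue  # file disappeared or unreadable; skip it
            if st.st_atime < cutoff and st.st_mtime < cutoff:
                yield path

if __name__ == "__main__":
    # Pass your scratch directory explicitly, e.g. the value of $SCRATCH.
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in purge_candidates(root):
        print(path)
</syntaxhighlight>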
</tab>
<tab name="Graham">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based?
! Backed up?
! Purged?
! Available by Default?
! Mounted on Compute Nodes?
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|No
|Oui
|Non
|Oui
|Oui
|-
|Scratch Space
|20 TB and 1M files per user
|Oui
|Non
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Oui
|Oui
|-
|Project Space
|1 TB and 500k files per group<ref>Project space can be increased to 10 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Oui
|Oui
|Non
|Oui
|Oui
|-
|Nearline Space
|2 TB and 5000 files per group
|No
|N/D
|Non
|Oui
|Non
|}
<references />
</tab>
<tab name="Béluga">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics
! Filesystem
! Default Quota
! Lustre-based?
! Backed up?
! Purged?
! Available by Default?
! Mounted on Compute Nodes?
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Oui
|Oui
|Non
|Oui
|Oui
|-
|Scratch Space
|20 TB and 1M files per user
|Oui
|Non
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Oui
|Oui
|-
|Project Space
|1 TB and 500k files per group<ref>Project space can be increased to 10 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Oui
|Oui
|Non
|Oui
|Oui
|-
|Nearline Space
|1 TB and 500K files per group
|Non
|N/D
|Non
|Oui
|Non
|}
<references />
</tab>
<tab name="Niagara">
{| class="wikitable"
! location
!colspan="2"| quota
!align="right"| block size
! expiration time
! backed up
! on login nodes
! on compute nodes
|-
| $HOME
|colspan="2"| 100 GB per user
|align="right"| 1 MB
|
| oui
| oui
| read-only
|-
|rowspan="6"| $SCRATCH
|colspan="2"| 25 TB per user (dynamic per group)
|align="right" rowspan="6" | 16 MB
|rowspan="6"| 2 months
|rowspan="6"| no
|rowspan="6"| yes
|rowspan="6"| yes
|-
|align="right"|up to 4 users per group
|align="right"|50TB
|-
|align="right"|up to 11 users per group
|align="right"|125TB
|-
|align="right"|up to 28 users per group
|align="right"|250TB
|-
|align="right"|up to 60 users per group
|align="right"|400TB
|-
|align="right"|above 60 users per group
|align="right"|500TB
|-
| $PROJECT
|colspan="2"| by group allocation (RRG or RPP)
|align="right"| 16 MB
|
| oui
| oui
| oui
|-
| $ARCHIVE
|colspan="2"| by group allocation
|align="right"|
|
| dual-copy
| non
| non
|-
| $BBUFFER
|colspan="2"| 10 TB per user
|align="right"| 1 MB
| very short
| non
| oui
| oui
|}
<ul>
<li>[https://docs.scinet.utoronto.ca/images/9/9a/Inode_vs._Space_quota_-_v2x.pdf Inode vs. Space quota (PROJECT and SCRATCH)]</li>
<li>[https://docs.scinet.utoronto.ca/images/0/0e/Scratch-quota.pdf dynamic quota per group (SCRATCH)]</li>
<li>Compute nodes do not have local storage.</li>
<li>Archive (a.k.a. nearline) space is on [https://docs.scinet.utoronto.ca/index.php/HPSS HPSS]</li>
<li>Backup means a recent snapshot, not an archive of all data that ever was.</li>
<li><code>$BBUFFER</code> stands for [https://docs.scinet.utoronto.ca/index.php/Burst_Buffer Burst Buffer], a faster parallel storage tier for temporary data.</li></ul>
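The dynamic per-group quota for <code>$SCRATCH</code> follows the group-size tiers listed in the table above. The short Python sketch below restates that mapping; the function name and the inclusive reading of "up to N users" are assumptions made for illustration.
<syntaxhighlight lang="python">
# Illustrative only: map a Niagara group's size to the dynamic $SCRATCH
# group quota tiers from the table above (values in TB).
SCRATCH_TIERS_TB = [
    (4, 50),    # up to 4 users per group  -> 50 TB
    (11, 125),  # up to 11 users per group -> 125 TB
    (28, 250),  # up to 28 users per group -> 250 TB
    (60, 400),  # up to 60 users per group -> 400 TB
]
MAX_GROUP_QUOTA_TB = 500  # above 60 users per group

def group_scratch_quota_tb(n_users: int) -> int:
    """Return the per-group $SCRATCH quota in TB for a group of n_users."""
    for max_users, quota_tb in SCRATCH_TIERS_TB:
        if n_users <= max_users:
            return quota_tb
    return MAX_GROUP_QUOTA_TB

# Example: a 12-user group falls in the "up to 28 users" tier, i.e. 250 TB.
assert group_scratch_quota_tb(12) == 250
</syntaxhighlight>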

Revision as of 20:50, 27 August 2019

Message definition (Storage and file management)
<tabs>
<tab name="Cedar">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics 
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request, subject to the limitations that the minimum project space per quota cannot be less than 1 TB and the sum over all four general-purpose clusters cannot exceed 43 TB. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|2 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for the project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service].  Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)]. 
</tab>
<tab name="Graham">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics 
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|No
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|10 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service].  Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].  
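For a rough sense of how a directory tree compares with the default project quota above (1 TB and 500K files per group), one can total file sizes and counts, as in the sketch below. The path is a hypothetical example, and the filesystem's own quota accounting (which counts usage by group ownership, not by directory) remains authoritative.
<syntaxhighlight lang="python">
# Minimal sketch: estimate how much of the default project quota
# (1 TB and 500K files per group in the table above) a directory tree uses.
# The path is a hypothetical example; st_size only approximates real usage,
# since quotas count allocated blocks and are tracked by group ownership.
import os

TB = 1000 ** 4  # quota sizes here are read as decimal terabytes (assumption)

def usage(root):
    """Return (total_bytes, file_count) for all regular files under root."""
    total_bytes = 0
    file_count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name), follow_symlinks=False)
            except OSError:
                continue  # unreadable or vanished file; skip it
            total_bytes += st.st_size
            file_count += 1
    return total_bytes, file_count

if __name__ == "__main__":
    root = "/project/my-group"  # hypothetical example path
    used_bytes, n_files = usage(root)
    print(f"{used_bytes / TB:.2f} TB of 1 TB, {n_files} of 500000 files")
</syntaxhighlight>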
</tab>
<tab name="Béluga and Narval">
{| class="wikitable" style="font-size: 95%; text-align: center;"
|+Filesystem Characteristics 
! Filesystem
! Default Quota
! Lustre-based
! Backed up
! Purged
! Available by Default
! Mounted on Compute Nodes
|-
|Home Space
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Scratch Space
|20 TB and 1M files per user
|Yes
|No
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref>
|Yes
|Yes
|-
|Project Space
|1 TB and 500K files per group<ref>Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref>
|Yes
|Yes
|No
|Yes
|Yes
|-
|Nearline Space
|1 TB and 5000 files per group
|Yes
|Yes
|No
|Yes
|No
|}
<references />
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service Rapid Access Service].  Quota changes larger than those permitted by RAS will require an application to the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition (RAC)].  
</tab>
<tab name="Niagara">
{| class="wikitable"
! location
!colspan="2"| quota
!align="right"| block size
! expiration time
! backed up
! on login nodes
! on compute nodes
|-
| $HOME
|colspan="2"| 100 GB per user
|align="right"| 1 MB
|
| yes
| yes
| read-only
|-
|rowspan="6"| $SCRATCH
|colspan="2"| 25 TB per user (dynamic per group)
|align="right" rowspan="6" | 16 MB
|rowspan="6"| 2 months
|rowspan="6"| no
|rowspan="6"| yes
|rowspan="6"| yes
|-
|align="right"|up to 4 users per group
|align="right"|50TB
|-
|align="right"|up to 11 users per group
|align="right"|125TB
|-
|align="right"|up to 28 users per group
|align="right"|250TB
|-
|align="right"|up to 60 users per group
|align="right"|400TB
|-
|align="right"|above 60 users per group
|align="right"|500TB
|-
| $PROJECT
|colspan="2"| by group allocation (RRG or RPP)
|align="right"| 16 MB
|
| yes
| yes
| yes
|-
| $ARCHIVE
|colspan="2"| by group allocation
|align="right"| 
|
| dual-copy
| no
| no
|-
| $BBUFFER
|colspan="2"| 10 TB per user
|align="right"| 1 MB
| very short
| no
| yes
| yes
|}
<ul>
<li>[https://docs.scinet.utoronto.ca/images/9/9a/Inode_vs._Space_quota_-_v2x.pdf Inode vs. Space quota (PROJECT and SCRATCH)]</li>
<li>[https://docs.scinet.utoronto.ca/images/0/0e/Scratch-quota.pdf dynamic quota per group (SCRATCH)]</li>
<li>Compute nodes do not have local storage.</li>
<li>Archive (a.k.a. nearline) space is on [https://docs.scinet.utoronto.ca/index.php/HPSS HPSS]</li>
<li>Backup means a recent snapshot, not an archive of all data that ever was.</li>
<li><code>$BBUFFER</code> stands for [https://docs.scinet.utoronto.ca/index.php/Burst_Buffer Burst Buffer], a faster parallel storage tier for temporary data.</li></ul>

</tabs>
