<tabs>
<tab name="Cedar"> | <tab name="Cedar"> | ||
{| class="wikitable" style="font-size: 95%; text-align: center;" | {| class="wikitable" style="font-size: 95%; text-align: center;" | ||
|+Caractéristiques des systèmes de fichiers | |+Caractéristiques des systèmes de fichiers | ||
! | ! | ||
! Quota par défaut | ! Quota par défaut | ||
! Basé sur Lustre | ! Basé sur Lustre | ||
Line 11: | Line 11: | ||
! Monté sur des nœuds de calcul | ! Monté sur des nœuds de calcul | ||
|- | |- | ||
| /home | |/home | ||
|50Go et 500K fichiers par utilisateur<ref>Ce quota est fixe et ne peut être | |50Go et 500K fichiers par utilisateur<ref>Ce quota est fixe et ne peut être changé.</ref> | ||
| | |Oui | ||
| | |Oui | ||
| | |Non | ||
| | |Oui | ||
| | |Oui | ||
|- | |- | ||
| /scratch | |/scratch | ||
|20To et 1M fichiers par utilisateur | |20To et 1M fichiers par utilisateur | ||
| | |Oui | ||
| | |Non | ||
| | |les fichiers de plus de 60 jours sont purgés.<ref>Pour plus d'information, voir la [[Scratch purging policy/fr politique de purge automatique]].</ref> | ||
| | |Oui | ||
| | |Oui | ||
|- | |- | ||
| /project | |/project | ||
|1To et 5M fichiers par groupe<ref>L'espace /project peut être augmenté à 10To par groupe en recourant au service d'accès rapide. La demande doit être faite par le chercheur principal responsable pour le groupe en s'adressant au [[Technical support/fr|soutien technique]].</ref> | |1To et 5M fichiers par groupe<ref> L'espace /project peut être augmenté à 10To par groupe en recourant au service d'accès rapide. La demande doit être faite par le chercheur principal responsable pour le groupe en s'adressant au [[Technical support/fr soutien technique]].</ref> | ||
|oui | | L'espace /project peut être augmenté à 10To par groupe en recourant au service d'accès rapide. La demande doit être faite par le chercheur principal responsable pour le groupe en s'adressant au soutien technique. | ||
|oui | |Oui | ||
|non | |Non | ||
|oui | |Oui | ||
|oui | |Oui | ||
|- | |||
|Nearline Space | |||
|2 TB and 5000 files per group | |||
|Non | |||
|N/A | |||
|Non | |||
|Oui | |||
|Non | |||
|} | |||
<references /> | |||
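To compare your current usage against these quotas, query the filesystems from a login node. A minimal sketch, assuming the <code>diskusage_report</code> helper installed on these clusters and Lustre's <code>lfs quota</code>; <code>def-professor</code> is a placeholder group name:

<pre>
# Summarize usage and quotas across /home, /scratch and /project.
diskusage_report

# Query the Lustre quotas directly for one user and one group.
lfs quota -u $USER /scratch
lfs quota -g def-professor /project
</pre>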
</tab>
<tab name="Graham"> | |||
{| class="wikitable" style="font-size: 95%; text-align: center;" | |||
|+Filesystem Characteristics | |||
! Filesystem | |||
! Default Quota | |||
! Lustre-based? | |||
! Backed up? | |||
! Purged? | |||
! Available by Default? | |||
! Mounted on Compute Nodes? | |||
|- | |||
|Home Space | |||
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref> | |||
|No | |||
|Oui | |||
|Non | |||
|Oui | |||
|Oui | |||
|- | |||
|Scratch Space | |||
|20 TB and 1M files per user | |||
|Oui | |||
|Non | |||
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref> | |||
|Oui | |||
|Oui | |||
|- | |||
|Project Space | |||
|1 TB and 500k files per group<ref>Project space can be increased to 10 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref> | |||
|Oui | |||
|Oui | |||
|Non | |||
|Oui | |||
|Oui | |||
|- | |||
|Nearline Space | |||
|2 TB and 5000 files per group | |||
|No | |||
|N/D | |||
|Non | |||
|Oui | |||
|Non | |||
|} | |||
<references /> | |||
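Because anything in /scratch older than 60 days is eligible for purging, it is worth previewing what the next purge could remove. A minimal sketch with GNU <code>find</code>, assuming the purge is keyed to file age as described in the [[Scratch purging policy]] and that your scratch tree is <code>/scratch/$USER</code>:

<pre>
# List scratch files not accessed in more than 60 days (purge candidates).
find /scratch/$USER -type f -atime +60

# Count them and total their size instead of listing each one.
find /scratch/$USER -type f -atime +60 -printf '%s\n' \
    | awk '{n++; s+=$1} END {printf "%d files, %.1f GiB\n", n, s/2^30}'
</pre>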
</tab>
<tab name="Béluga"> | |||
{| class="wikitable" style="font-size: 95%; text-align: center;" | |||
|+Filesystem Characteristics | |||
! Filesystem | |||
! Default Quota | |||
! Lustre-based? | |||
! Backed up? | |||
! Purged? | |||
! Available by Default? | |||
! Mounted on Compute Nodes? | |||
|- | |||
|Home Space | |||
|50 GB and 500K files per user<ref>This quota is fixed and cannot be changed.</ref> | |||
|Oui | |||
|Oui | |||
|Non | |||
|Oui | |||
|Oui | |||
|- | |||
|Scratch Space | |||
|20 TB and 1M files per user | |||
|Oui | |||
|Non | |||
|Files older than 60 days are purged.<ref>See [[Scratch purging policy]] for more information.</ref> | |||
|Oui | |||
|Oui | |||
|- | |||
|Project Space | |||
|1 TB and 500k files per group<ref>Project space can be increased to 10 TB per group by a RAS request. The group's sponsoring PI should write to [[technical support]] to make the request.</ref> | |||
|Oui | |||
|Oui | |||
|Non | |||
|Oui | |||
|Oui | |||
|- | |||
|Nearline Space | |||
|1 TB and 500K files per group | |||
|Non | |||
|N/D | |||
|Non | |||
|Oui | |||
|Non | |||
|} | |||
<references /> | |||
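The quotas above cap the number of files as well as the space used, so a tree of many small files can exhaust the 500K-file limit long before the space runs out. A minimal sketch for finding file-heavy directories and packing them down; the <code>~/projects/def-professor/$USER</code> layout is a placeholder for your own project path:

<pre>
# Count files per top-level directory, largest counts first.
for d in ~/projects/def-professor/$USER/*/; do
    printf '%8d  %s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn

# Packing many small files into one archive frees the file-count quota.
tar -czf dataset.tar.gz dataset/
</pre>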
</tab>
<tab name="Niagara"> | |||
{| class="wikitable" | |||
! location | |||
!colspan="2"| quota | |||
!align="right"| block size | |||
! expiration time | |||
! backed up | |||
! on login nodes | |||
! on compute nodes | |||
|- | |||
| $HOME | |||
|colspan="2"| 100 GB per user | |||
|align="right"| 1 MB | |||
| | |||
| oui | |||
| oui | |||
| read-only | |||
|- | |||
|rowspan="6"| $SCRATCH | |||
|colspan="2"| 25 TB per user (dynamic per group) | |||
|align="right" rowspan="6" | 16 MB | |||
|rowspan="6"| 2 months | |||
|rowspan="6"| no | |||
|rowspan="6"| yes | |||
|rowspan="6"| yes | |||
|- | |||
|align="right"|up to 4 users per group | |||
|align="right"|50TB | |||
|- | |||
|align="right"|up to 11 users per group | |||
|align="right"|125TB | |||
|- | |||
|align="right"|up to 28 users per group | |||
|align="right"|250TB | |||
|- | |||
|align="right"|up to 60 users per group | |||
|align="right"|400TB | |||
|- | |||
|align="right"|above 60 users per group | |||
|align="right"|500TB | |||
|- | |||
| $PROJECT | |||
|colspan="2"| by group allocation (RRG or RPP) | |||
|align="right"| 16 MB | |||
| | |||
| oui | |||
| oui | |||
| oui | |||
|- | |||
| $ARCHIVE | |||
|colspan="2"| by group allocation | |||
|align="right"| | |||
| | |||
| dual-copy | |||
| non | |||
| non | |||
|- | |||
| $BBUFFER | |||
|colspan="2"| 10 TB per user | |||
|align="right"| 1 MB | |||
| very short | |||
| non | |||
| oui | |||
| oui | |||
|} | |||
<ul>
<li>[https://docs.scinet.utoronto.ca/images/9/9a/Inode_vs._Space_quota_-_v2x.pdf Inode vs. Space quota (PROJECT and SCRATCH)]</li>
<li>[https://docs.scinet.utoronto.ca/images/0/0e/Scratch-quota.pdf Dynamic quota per group (SCRATCH)]</li>
<li>Compute nodes do not have local storage.</li>
<li>Archive (a.k.a. nearline) space is on [https://docs.scinet.utoronto.ca/index.php/HPSS HPSS]; a transfer sketch follows below.</li>
<li>Backup means a recent snapshot, not an archive of all data that ever was.</li>
<li><code>$BBUFFER</code> stands for [https://docs.scinet.utoronto.ca/index.php/Burst_Buffer Burst Buffer], a faster parallel storage tier for temporary data.</li></ul>
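Since <code>$ARCHIVE</code> lives on HPSS rather than on a regular filesystem, data moves in and out through the HPSS client tools instead of <code>cp</code>. A minimal sketch, assuming the <code>htar</code>/<code>hsi</code> clients described on the SciNet HPSS page; the archive name is illustrative, and on Niagara such transfers normally run through the dedicated archive queue rather than interactively:

<pre>
# Bundle a results directory into a tar archive stored on HPSS.
htar -cvf $ARCHIVE/results-2019.tar results/

# Inspect what is stored under the archive allocation.
hsi ls -l $ARCHIVE

# Recall a single member later without restoring the whole archive.
htar -xvf $ARCHIVE/results-2019.tar results/summary.csv
</pre>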
</tab>
</tabs>