<tabs>
<tab name="Cedar">
Filesystem | Default Quota | Lustre-based | Backed up | Purged | Available by Default | Mounted on Compute Nodes |
---|---|---|---|---|---|---|
Home Space | 50 GB and 500K files per user[1] | Yes | Yes | No | Yes | Yes |
Scratch Space | 20 TB and 1M files per user | Yes | No | Files older than 60 days are purged.[2] | Yes | Yes |
Project Space | 1 TB and 500K files per group[3] | Yes | Yes | No | Yes | Yes |
Nearline Space | 2 TB and 5000 files per group | Yes | Yes | No | Yes | No |
1. This quota is fixed and cannot be changed.
2. See the Scratch purging policy for more information.
3. Project space can be increased to 40 TB per group by a RAS request, subject to the limitations that the project space quota cannot be less than 1 TB and that the sum over all four general-purpose clusters cannot exceed 43 TB. The group's sponsoring PI should write to technical support to make the request.
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for the project and nearline spaces. For more details, see the "Storage" section at Rapid Access Service. Quota changes larger than those permitted by RAS will require an application to the annual Resource Allocation Competition (RAC).
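Each quota above limits both space and file count, so a directory can hit its limit on either dimension. As a rough, offline illustration (not the clusters' own accounting, which is done by the filesystem itself), the following hypothetical Python sketch walks a directory and tallies both against the home-space limits. Note that real quota systems usually charge allocated blocks rather than apparent file size, so treat the numbers as approximate.

```python
#!/usr/bin/env python3
"""Hypothetical helper: tally usage under a directory and compare it to
the 50 GB / 500K-file home quota in the table above. Illustration only;
real quota accounting is done by the filesystem."""
import os

HOME_BYTES_LIMIT = 50 * 10**9  # 50 GB quota from the table above
HOME_FILES_LIMIT = 500_000     # 500K files per user

def tally(path):
    """Return (total_bytes, file_count) for everything under `path`."""
    total_bytes = 0
    file_count = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                st = os.lstat(full)  # lstat: don't follow symlinks
            except OSError:
                continue             # file vanished or unreadable
            total_bytes += st.st_size
            file_count += 1
    return total_bytes, file_count

if __name__ == "__main__":
    used, nfiles = tally(os.path.expanduser("~"))
    print(f"space: {used / 10**9:.1f} GB of 50 GB "
          f"({100 * used / HOME_BYTES_LIMIT:.0f}%)")
    print(f"files: {nfiles} of {HOME_FILES_LIMIT} "
          f"({100 * nfiles / HOME_FILES_LIMIT:.0f}%)")
```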
</tab>
<tab name="Graham">
Filesystem | Default Quota | Lustre-based | Backed up | Purged | Available by Default | Mounted on Compute Nodes |
---|---|---|---|---|---|---|
Home Space | 50 GB and 500K files per user[1] | No | Yes | No | Yes | Yes |
Scratch Space | 20 TB and 1M files per user | Yes | No | Files older than 60 days are purged.[2] | Yes | Yes |
Project Space | 1 TB and 500K files per group[3] | Yes | Yes | No | Yes | Yes |
Nearline Space | 10 TB and 5000 files per group | Yes | Yes | No | Yes | No |
1. This quota is fixed and cannot be changed.
2. See the Scratch purging policy for more information.
3. Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to technical support to make the request.
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at Rapid Access Service. Quota changes larger than those permitted by RAS will require an application to the annual Resource Allocation Competition (RAC).
</tab>
<tab name="Béluga and Narval">
Filesystem | Default Quota | Lustre-based | Backed up | Purged | Available by Default | Mounted on Compute Nodes |
---|---|---|---|---|---|---|
Home Space | 50 GB and 500K files per user[1] | Yes | Yes | No | Yes | Yes |
Scratch Space | 20 TB and 1M files per user | Yes | No | Files older than 60 days are purged.[2] | Yes | Yes |
Project Space | 1 TB and 500K files per group[3] | Yes | Yes | No | Yes | Yes |
Nearline Space | 1 TB and 5000 files per group | Yes | Yes | No | Yes | No |
1. This quota is fixed and cannot be changed.
2. See the Scratch purging policy for more information.
3. Project space can be increased to 40 TB per group by a RAS request. The group's sponsoring PI should write to technical support to make the request.
Starting April 1, 2024, new Rapid Access Service (RAS) policies will allow larger quotas for project and nearline spaces. For more details, see the "Storage" section at Rapid Access Service. Quota changes larger than those permitted by RAS will require an application to the annual Resource Allocation Competition (RAC).
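Footnote 2 above means anything left in scratch can disappear after 60 days. Below is a hypothetical sketch for spotting files at risk; it flags a file only when its newest timestamp (access, modification, or inode change) is past the threshold, which is an assumption about how age is measured. The authoritative rule is the Scratch purging policy itself.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: list files under a scratch path older than the
60-day purge threshold. Assumption: a file is "old" only if its newest
timestamp (atime, mtime, ctime) is past the threshold; the actual purge
policy is defined by the site, not by this script."""
import os
import sys
import time

PURGE_AGE_DAYS = 60  # from footnote 2 above

def purge_candidates(top, age_days=PURGE_AGE_DAYS):
    """Yield (path, age_in_days) for files older than `age_days`."""
    now = time.time()
    cutoff = now - age_days * 86400
    for root, _dirs, files in os.walk(top):
        for name in files:
            full = os.path.join(root, name)
            try:
                st = os.lstat(full)
            except OSError:
                continue
            newest = max(st.st_atime, st.st_mtime, st.st_ctime)
            if newest < cutoff:
                yield full, (now - newest) / 86400

if __name__ == "__main__":
    top = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, age in purge_candidates(top):
        print(f"{age:7.1f} days  {path}")
```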
<tab name="Niagara">
location | quota | block size | expiration time | backed up | on login nodes | on compute nodes
---|---|---|---|---|---|---
$HOME | 100 GB per user | 1 MB | | yes | yes | read-only
$SCRATCH | 25 TB per user (dynamic per group) | 16 MB | 2 months | no | yes | yes
$PROJECT | by group allocation (RRG or RPP) | 16 MB | | yes | yes | yes
$ARCHIVE | by group allocation | | | dual-copy | no | no
$BBUFFER | 10 TB per user | 1 MB | very short | no | yes | yes

Dynamic $SCRATCH quota by group size:

group size | group quota
---|---
up to 4 users | 50 TB
up to 11 users | 125 TB
up to 28 users | 250 TB
up to 60 users | 400 TB
above 60 users | 500 TB
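The dynamic per-group $SCRATCH quota above is a simple step function of group size. A direct transcription of the tiers (illustrative only; the real quota is applied by the filesystem administration):

```python
def niagara_group_scratch_quota_tb(n_users: int) -> int:
    """Group $SCRATCH quota in TB as a function of group size,
    transcribed from the tier table above (illustrative only)."""
    if n_users <= 4:
        return 50
    if n_users <= 11:
        return 125
    if n_users <= 28:
        return 250
    if n_users <= 60:
        return 400
    return 500

# Example: a 15-user group falls in the "up to 28 users" tier.
assert niagara_group_scratch_quota_tb(15) == 250
```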
- Both inode (file-count) and space quotas are enforced on PROJECT and SCRATCH.
- The SCRATCH quota is dynamic per group (see the table above).
- Compute nodes do not have local storage.
- Archive (a.k.a. nearline) space is on HPSS.
- Backup means a recent snapshot, not an archive of all data that ever was.
$BBUFFER stands for Burst Buffer, a faster parallel storage tier for temporary data.
</tab>
</tabs>
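The block sizes listed above (1 MB on $HOME, 16 MB on $SCRATCH and $PROJECT) suggest that large sequential I/O, in multiples of the block size, fits these filesystems better than many tiny operations. A minimal sketch of block-sized copying, assuming "16 MB" means 16 MiB here:

```python
# Minimal sketch: copy a file in chunks matched to the 16 MB block size
# listed for $SCRATCH/$PROJECT above. Whether "MB" means MiB is an
# assumption; the point is simply to prefer large sequential writes
# over many small ones on large-block parallel filesystems.
import shutil

BLOCK_SIZE = 16 * 1024 * 1024  # 16 MiB

def copy_in_blocks(src: str, dst: str, block: int = BLOCK_SIZE) -> None:
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, length=block)  # one read/write per block
```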