Storage and file management
Overview
Compute Canada provides a wide range of storage options to cover the needs of our very diverse users. These storage solutions range from high-speed temporary local storage to different kinds of long-term storage, so you can choose the storage medium that best corresponds to your needs and usage patterns. In most cases the filesystems on Compute Canada systems are a shared resource and for this reason should be used responsibly - unwise behaviour can negatively affect dozens or hundreds of other users. These filesystems are also designed to store a limited number of very large files, typically binary rather than text files, i.e. they are not directly human-readable. You should therefore avoid storing thousands of small files, where small means less than a few megabytes, particularly in the same directory. A better approach is to use commands like tar or zip to convert a directory containing many small files into a single very large archive file.
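As a rough illustration, a directory of many small files can be bundled into a single compressed archive with tar; the directory name results_dir and the archive name results.tar.gz below are placeholders.

```bash
# Bundle a directory of many small files into one compressed archive
# (directory and archive names here are placeholders).
tar -czf results.tar.gz results_dir/

# Later, list or extract the contents without recreating thousands of
# small files until they are actually needed.
tar -tzf results.tar.gz     # list archive contents
tar -xzf results.tar.gz     # extract into the current directory
```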
It is also your responsibility to manage the age of your stored data: most of the filesystems are not intended to provide an indefinite archiving service so when a given file or directory is no longer needed, you need to move it to a more appropriate filesystem which may well mean your personal workstation or some other storage system under your control. Moving significant amounts of data between your workstation and a Compute Canada system or between two Compute Canada systems should generally be done using Globus.
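Globus transfers are usually started from the web portal, but as a minimal sketch, a transfer driven from the Globus command-line client might look like the following. The endpoint UUIDs and paths are placeholders, and the assumption that the CLI is installed and configured is ours, not the document's.

```bash
# Sketch of a Globus CLI transfer; endpoint UUIDs and paths are placeholders.
SRC_ENDPOINT="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
DST_ENDPOINT="11111111-2222-3333-4444-555555555555"

globus login                     # authenticate once per session
globus transfer \
    "$SRC_ENDPOINT:/project/my_dataset/" \
    "$DST_ENDPOINT:/backup/my_dataset/" \
    --recursive --label "dataset backup"
```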
Note that Compute Canada storage systems are not for personal use and should only be used to store research data.
Users can check available disk space and current disk utilization for the project, home and scratch filesystems with the command-line utility diskusage_report, available on both Cedar and Graham NDC. This utility is available to all users: log in to Cedar or Graham over SSH and run diskusage_report at the command prompt. Typical output of the utility:
```
# diskusage_report
                Description           Space      # of files
            Home (username)    280 kB/47 GB         25/500k
         Scratch (username)    4096 B/18 TB         1/1000k
  Project (def-username-ab)  4096 B/9536 GB         2/5000k
     Project (def-username)  4096 B/9536 GB         2/5000k
```
Storage Types
Unlike your personal computer, a Compute Canada system will typically have several storage spaces or filesystems, and you should ensure that you are using the right space for the right task. In this section we discuss the principal filesystems available on most Compute Canada systems, the intended use of each one, and its characteristics. Storage options are distinguished by the available hardware, the access mode and the underlying filesystem. Typically, most Compute Canada systems offer the following storage types:
- Network Filesystem (NFS): This type of storage is generally equally visible on both login and compute nodes. It is the appropriate place to put small but important files that are regularly used: source code, programs, job scripts and parameter files. This type of storage offers performance comparable to a conventional hard disk.
- Parallel Filesystem (Lustre, GPFS): This type of storage is also generally visible on both login and compute nodes. Combining multiple disk arrays and fast servers, it offers excellent performance for large files and large input/output operations. Two kinds of space are often distinguished on such systems: long-term storage and temporary storage (scratch). Performance is subject to variations caused by other users.
- Local Filesystem: This type of storage consists of a local hard drive attached to each compute node. Its performance is high because it is very rarely shared; typically only one user accesses a local drive at a time. However, you must copy your files back to another storage medium, such as the scratch or project space, before your job ends, because everything is cleaned after each job (see the job script sketch at the end of this section).
- RAM (memory) Filesystem: This filesystem exists within a compute node's RAM, so using it reduces the memory available for computations. It is very fast for small files and particularly faster than other storage types when file access is random. A RAM disk is always cleaned at the end of a job.
The following table summarizes the properties of these storage types.
| Type | Accessibility | Throughput | Latency | Longevity |
|---|---|---|---|---|
| Network Filesystem (NFS) | All nodes | Poor | High | Long term |
| Long-Term Parallel Filesystem | All nodes | Fair | High | Long term |
| Short-Term Parallel Filesystem | All nodes | Fair | High | Short term (periodically cleaned) |
| Local Filesystem | Local to the node | Fair | Medium | Very short term |
| Memory (RAM) Filesystem | Local to the node | Good | Very low | Very short term (cleaned after every job) |
Throughput describes the efficiency of the file system for large operations, such as those involving a megabyte or more per read or write.
Latency describes the efficiency of the file system for multiple small operations. Low latency is good; however, if one has a choice between a small number of large operations and a large number of small ones, it is almost always better to use a small number of large operations.
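As a sketch of the local filesystem workflow described above, a job script on clusters that expose node-local storage through the $SLURM_TMPDIR environment variable might look like the following; the paths, file names and program name are placeholders, and the use of Slurm and $SLURM_TMPDIR is an assumption about the cluster's configuration.

```bash
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# Sketch only: stage the input onto node-local storage, work there, then
# copy results back before the job ends, because local storage is cleaned
# after each job. Paths, file names and $SLURM_TMPDIR are assumptions.
cp ~/scratch/input.dat "$SLURM_TMPDIR/"
cd "$SLURM_TMPDIR"

# Run the (placeholder) analysis program against the local copy of the data.
~/bin/my_program input.dat > output.dat

# Copy results back to a long-term filesystem before the job finishes.
cp output.dat ~/project/results/
```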
Best practices
- Only use text format for files that are smaller than a few megabytes.
- As far as possible, use local storage for temporary files.
- If your program must search within a file, it is fastest to read the file completely into memory before searching, or to copy it to a RAM disk first (a shell sketch follows this list).
- Regularly clean up your data in the scratch and project spaces, because those filesystems are used for huge data collections.
- If you no longer use certain files but they must be retained, archive and compress them, and if possible copy them elsewhere.
- If your needs are not well served by the available storage options please contact us by sending an e-mail to Compute Canada support.
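As a rough illustration of the point about repeated searches, the sketch below copies a file to RAM-backed storage before scanning it several times; the use of /dev/shm as the RAM filesystem mount point, and the file and pattern names, are assumptions for illustration only.

```bash
# Sketch: avoid hammering the shared filesystem with many small reads by
# first copying the file to fast, RAM-backed storage.
# /dev/shm and the file/pattern names are assumptions.
cp ~/scratch/big_log.txt /dev/shm/

# Repeated searches now hit the fast local copy instead of the shared system.
grep -c "ERROR"   /dev/shm/big_log.txt
grep -c "WARNING" /dev/shm/big_log.txt

# Clean up the RAM copy when done, since it consumes node memory.
rm /dev/shm/big_log.txt
```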
Filesystem Quotas and Policies
In order to ensure that there is adequate space for all Compute Canada users, there are a variety of quotas and policy restrictions concerning backups and the automatic purging of certain filesystems. On a cluster, each user has access to the home and scratch spaces by default, and each group has access to 1 TB of project space by default. The nearline space is allocated through the annual Resource Allocation Competition (RAC), which can also increase a group's quota for the project space. You can see your current usage relative to your quotas on the various filesystems on Cedar and Graham with the command diskusage_report.
| Filesystem | Default Quota | Backed up? | Purged? | Available by Default? | Mounted on Compute Nodes? |
|---|---|---|---|---|---|
| Home Space | 50 GB and 500K files per user | Yes | No | Yes | Yes |
| Scratch Space | 20 TB and 1M files per user[1] | No | Yes, all files older than a certain number of days | Yes | Yes |
| Project Space | 1 TB and 5M files per group[2] | Yes | No | Yes | Yes |
| Nearline Space | 5 TB per group | No | No | No | No |
The backup policy for the home and project spaces is nightly backups, retained for 30 days; deleted files are retained for a further 60 days. If you wish to recover a previous version of a file or directory, write to Compute Canada support with the full path of the file(s) and the desired version (by date). To copy data from the nearline storage to the project, home or scratch space, you should also write to Compute Canada support.
- ↑ Scratch space can be increased to 100 TB per user upon request to Compute Canada support.
- ↑ Project space can be increased to 10 TB per group upon request to Compute Canada support and requests by different members of the same group will be summed together up to the ceiling of 10 TB.