Using nearline storage
Nearline is a filesystem virtualized onto tape
Nearline storage is a hybrid disk/tape filesystem with a layout like that of Project, but it uses its hybrid nature to take advantage of both the large amount of inexpensive storage that tape provides and the rapid access to data that disk offers. You can move your less frequently needed data to tape, where it will no longer count against your project space quota. If you later need these files, you can recall them from tape back to disk, with a delay ranging from a few minutes up to an hour or two.
This is useful because the capacity of our tape libraries is both large and expandable. When a file has been moved to tape (or virtualized), it still appears in the directory listing. If the virtual file is read, the reading process will block for some time, typically a few minutes, while the file contents are copied from tape back to disk.
You can tell whether a file is on tape or still on disk with the lfs hsm_state command:
# Here, <FILE> is still on disk
$ lfs hsm_state <FILE>
<FILE>: [...]: exists archived, [...]
# Here, <FILE> is archived on tape; there will be a lag when opening it.
$ lfs hsm_state <FILE>
<FILE>: [...]: released archived, [...]
"HSM" stands for "hierarchical storage manager". If you wish to ensure that the file is brought in from tape, you can use
lfs hsm_restore <FILE>
The difference from simply reading the file is that a read triggers the restore implicitly, so the reading process blocks until the recall completes, whereas hsm_restore issues the recall request explicitly.
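If you want to wait until a recall has completed before working with a file, you can poll its state. A minimal sketch, assuming a hypothetical file name mydata.tar:

# Request recall of a released file, then poll its HSM state
# until the "released" flag disappears, i.e. the disk copy is back:
lfs hsm_restore mydata.tar
while lfs hsm_state mydata.tar | grep -q released; do
    sleep 60    # recall from tape may take minutes up to an hour or two
done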
Using nearline
Because of the delay in reading from tape, nearline is not intended for use by jobs, where allocated compute time would be wasted waiting for recalls. It is only accessible as a directory on certain nodes of the clusters, and never on compute nodes.
Nearline is intended for use with relatively large files and should not be used for a large number of small files. In fact, files smaller than a certain threshold size may not be moved to tape at all.
- Files smaller than ~200MB should be combined into archive files (tarballs) using tar or a similar tool.
- Files larger than 300GB should be split into chunks of 100GB using the split command or a similar tool. Both operations are sketched in the example below.
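For instance (the directory and file names here are hypothetical):

# Bundle a directory of small files into a single tarball in nearline:
tar -cf ~/nearline/PROJECT/small_results.tar -C ~/project/results .

# Split a very large file into 100GB chunks:
split -b 100G huge_output.dat huge_output.dat.part_

# The chunks can be reassembled later with:
cat huge_output.dat.part_* > huge_output.dat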
The basic model for using nearline is that you put files there and later access them as on a normal filesystem, except that reading a file may involve a significant pause. You may also remove files from nearline. It's important to realize that nearline files can be in several different states:
- Immediately upon creation, the file is on disk, not tape.
- After a period (on the order of a day), the system will copy the file to tape. At this stage, the file will be on both disk and tape; it will behave just like a disk file, unless you modify it.
- After a further period, the disk copy will be dropped, and the file will only be on tape (our policy is two tape copies: one local and one remote). At this point, the file will be slow to read, since content must be recalled from tape.
- When such a file is recalled, it returns to the second state.
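These states are reflected in the output of lfs hsm_state. As a rough sketch (the exact flags and hexadecimal values shown here are typical of Lustre HSM but may vary):

# 1. New file, on disk only; no HSM flags set yet:
$ lfs hsm_state <FILE>
<FILE>: (0x00000000)

# 2. On both disk and tape:
$ lfs hsm_state <FILE>
<FILE>: (0x00000009) exists archived, [...]

# 3. On tape only; a read will block while the content is recalled:
$ lfs hsm_state <FILE>
<FILE>: (0x0000000d) released exists archived, [...]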
Access
Nearline is only accessible as a directory on login nodes and on DTNs (Data Transfer Nodes).
To use nearline, just put files into your ~/nearline/PROJECT directory. After a period of time (24 hours as of February 2019), they will be copied onto tape. If the file remains unchanged for another period (24 hours as of February 2019), the copy on disk will be removed, leaving the file virtualized on tape.
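For example, to archive a finished tarball (the file name is hypothetical, and PROJECT stands for your project name):

# Move the file into nearline; migration to tape then happens automatically:
mv ~/project/PROJECT/results.tar ~/nearline/PROJECT/
# After ~24 hours it is copied to tape; after a further ~24 hours unchanged,
# the disk copy is dropped and only the tape copy remains.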
If you accidentally (or deliberately) delete a file from ~/nearline, the tape copy will be retained for up to 60 days. To restore such a file, contact technical support with the full path of the file(s) and the desired version (by date), just as you would to restore a backup. Since you will need the full path, it is important to retain a copy of the complete directory structure of your nearline space. For example, run ls -R > ~/nearline_contents.txt from the ~/nearline/PROJECT directory so that you have a record of the location of all your files.
A nearline service similar to that on Graham will be available soon on other clusters.
HPSS is the nearline service on Niagara.
There are three methods to access the service:
1. By submitting the HPSS-specific commands htar or hsi to the Slurm scheduler as a job in one of the archive partitions; see the HPSS documentation for detailed examples, and the sketch after this list. Using job scripts offers the benefit of automating nearline transfers and is the best method if you use HPSS regularly. Your HPSS files can be found in the $ARCHIVE directory, which is like $PROJECT but with /project replaced by /archive.
2. To manage a small number of files in HPSS, you can use the VFS (Virtual File System) node, which is accessed with the command salloc --time=1:00:00 -pvfsshort; your files appear under the same $ARCHIVE directory there.
3. By using Globus for transfers to and from HPSS using the endpoint computecanada#hpss. This is useful for occasional usage and for transfers to and from other sites.
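As an illustration of the first method, a job script along the following lines could bundle a directory into HPSS. This is a sketch only: the partition name, time limit, and file names are assumptions, so consult the HPSS documentation for tested examples.

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --partition=archivelong
#SBATCH --job-name=htar-to-hpss

# Create a tar archive directly in HPSS from a scratch directory:
htar -cpf $ARCHIVE/results.tar ~/scratch/results

# Later, the archive's contents can be listed (e.g. in another job) with:
# htar -tf $ARCHIVE/results.tar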