Using nearline storage

==Nearline is a filesystem virtualized onto tape== <!--T:1-->
Nearline storage is a disk-tape hybrid system with a layout like [[Project layout|Project]], except that the system may "virtualize" files by moving them to tape based on criteria such as age and size, and move them back upon read or recall operations.  This is a way to manage less-used files.  On tape they do not consume your disk quota, but they can still be accessed, albeit more slowly.


<!--T:2-->
This is useful because the capacity of our tape libraries is both large and expandable.  When a file has been moved to tape (that is, "virtualized"), it will still appear in the directory listing.  If the virtual file is read, the reading process will block for some time, probably a few minutes, while the file contents are read from tape to disk.   


== Expected use and status == <!--T:3-->
Because of the delay in reading from tape, Nearline is not intended to be used by jobs, where the delay would waste allocated time. It is only accessible as a directory on certain nodes of each cluster; in particular, it is not available on the compute nodes.


Nearline is intended for use with relatively large files - do not use it for large numbers of small files.  In fact, files smaller than a certain threshold size may not be moved to tape at all.  Files smaller than ~200MB should be combined into archive files ("tarballs") using [[Archiving and compressing files|tar]] or a similar tool.
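For example, a directory containing many small files can be bundled into a single tarball before being placed on Nearline (the file and directory names here are only illustrative):

<pre>
# Bundle a directory of many small files into one tarball (illustrative names)
tar cvf small_files_2023.tar small_files_2023/
</pre>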
<tabs>
<tab name="General purpose clusters">
Nearline is only accessible as a directory on the login nodes and DTNs ("Data Transfer Nodes").
To use Nearline, just put files into your <tt>~/nearline/PROJECT</tt> directory. After a period of time (currently 24 hours), they'll be copied onto tape.  If a file remains unchanged for another period (also 24 hours), the copy on disk will be removed, making the file virtualized on tape.
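For example, assuming your project's group name is <tt>def-someuser</tt> (substitute your own), a tarball can be moved into Nearline from a login node:

<pre>
# Move a tarball into the project's Nearline directory (illustrative names)
mv ~/scratch/results_2023.tar ~/nearline/def-someuser/
</pre>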


</tab>
<tab name="Niagara">
There are three ways to access Nearline on Niagara:

1. By submitting HPSS-specific commands <tt>htar</tt> or <tt>hsi</tt> as an 'archive' job to SLURM; see [https://docs.scinet.utoronto.ca/index.php/HPSS the HPSS documentation] for detailed examples. Using job scripts offers the benefit of automating Nearline transfers, and is the best method if you use HPSS regularly; a minimal job-script sketch is shown after this list.

2. For managing small amounts of data in HPSS, you can use the VFS ("Virtual File System") node, which is accessed using the command <tt>salloc --time=1:00:00 -pvfsshort</tt>.

3. You can also use [[Globus]] for transfers to and from HPSS using the endpoint <b>computecanada#hpss</b>.  This is useful for occasional use and for transfers from other sites.

In usage modes 1 and 2, your HPSS files can be found in the <tt>$ARCHIVE</tt> directory, which is like <tt>$PROJECT</tt> but with <tt>/project</tt> replaced by <tt>/archive</tt>.
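For the first method, a minimal archive job script might look like the sketch below; the partition name is an assumption and the file names are placeholders, so consult [https://docs.scinet.utoronto.ca/index.php/HPSS the HPSS documentation] for the exact options to use:

<pre>
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH -p archivelong          # assumed archive partition; confirm in the HPSS documentation
#SBATCH --job-name=htar-to-hpss

# Create a tar archive in HPSS (under $ARCHIVE) from a directory in /project (placeholder names)
htar -cvf $ARCHIVE/results_2023.tar $PROJECT/results_2023
</pre>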
</tab>
</tabs>


</translate>