Using nearline storage

<!--T:30-->
Nearline is a tape-based filesystem intended for '''inactive data'''. Datasets which you do not expect to access for months are good candidates to be stored in /nearline.


== Restrictions and best practices == <!--T:33-->


Note that there is no need to compress the data that you will be copying to /nearline; the tape archive system automatically performs the compression using specialized circuitry.

=== Size of files === <!--T:34-->


<!--T:35-->
Retrieving small files from tape is inefficient, while extremely large files pose other problems. Please observe these guidelines when storing files in /nearline:


<!--T:9-->
*Files smaller than ~10GB should be combined into archive files (''tarballs'') using [[A tutorial on 'tar'|tar]] or a [[Archiving and compressing files|similar tool]].
*Files larger than 4TB should be split in chunks of 1TB using the [[A_tutorial_on_'tar'#Splitting_files|split command]] or a similar tool.
*'''DO NOT SEND SMALL FILES TO NEARLINE, except for indexes (see ''Creating an index'' below).'''
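The guidelines above can be sketched with standard <code>tar</code> and <code>split</code> commands. The directory names below are made-up placeholders, and a tiny chunk size is used so the sketch runs anywhere; for real data destined for /nearline you would use <code>--bytes=1T</code>.

```bash
set -e
# Sketch: bundle many small files into one tarball, then split an
# oversized tarball into fixed-size chunks. All paths and sizes are
# illustrative placeholders, not real /nearline locations.
mkdir -p demo/results
printf 'sample data\n' > demo/results/run1.txt
printf 'more sample data\n' > demo/results/run2.txt

# Combine small files into a single archive file (a tarball):
tar -cf demo/results.tar -C demo results

# Split a large archive into chunks; use --bytes=1T on real data,
# a 1 KiB chunk is used here only so the example is fast:
split --bytes=1K demo/results.tar demo/results.tar.part_

# Reassembling the chunks restores the original archive byte for byte:
cat demo/results.tar.part_* > demo/restored.tar
cmp demo/results.tar demo/restored.tar && echo "round trip OK"
```

The chunk suffixes (<code>part_aa</code>, <code>part_ab</code>, ...) sort lexically, so a shell glob reassembles them in the right order.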


=== Using tar or dar === <!--T:36-->


<!--T:37-->
Use [[A tutorial on 'tar'|tar]] or [[dar]] to create an archive file.
Keep the source files in their original filesystem. Do NOT copy the source files to /nearline before creating the archive.


<!--T:38-->
If you have hundreds of gigabytes of data, the <code>tar</code> options <code>-M (--multi-volume)</code> and <code>-L (--tape-length)</code> can be used to produce archive files of suitable size.
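As a sketch of how these options behave with GNU tar: giving several <code>-f</code> options names the successive volumes, and <code>-L</code> counts units of 1024 bytes. The 30 KiB input and 20 KiB volumes below are toy sizes chosen so the example runs quickly; for real data you would use volume sizes in the hundreds of gigabytes.

```bash
set -e
# Sketch: split an archive into fixed-size volumes at creation time
# using GNU tar's -M (--multi-volume) and -L (--tape-length) options.
# Sizes and paths are illustrative only.
mkdir -p mvdemo
dd if=/dev/zero of=mvdemo/big.dat bs=1K count=30 2>/dev/null

# Create a two-volume archive, 20 KiB per volume:
tar -c -M -L 20 -f mvdemo/vol1.tar -f mvdemo/vol2.tar -C mvdemo big.dat

# Extraction names the volumes in the same order:
mkdir -p mvdemo/out
tar -x -M -f mvdemo/vol1.tar -f mvdemo/vol2.tar -C mvdemo/out
cmp mvdemo/big.dat mvdemo/out/big.dat && echo "volumes OK"
```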


<!--T:39-->
If you are using <code>dar</code>, you can similarly use the <code>-s (--slice)</code> option.


=== Creating an index === <!--T:48-->
When you bundle files into an archive, it becomes harder to find an individual file later. To avoid having to restore an entire large collection from tape when you only need one or a few files from it, make an index of every archive file you create, as soon as you create it. For instance, you can save the output of tar with the <tt>verbose</tt> option when you create the archive, like this:
 
<!--T:49-->
{{Command|tar cvvf /nearline/def-sponsor/user/mycollection.tar /project/def-sponsor/user/something > /nearline/def-sponsor/user/mycollection.index}}
 
<!--T:50-->
If you have already created the archive (again using tar as an example), you can create an index from it like this:
 
<!--T:51-->
{{Command|tar tvvf /nearline/def-sponsor/user/mycollection.tar > /nearline/def-sponsor/user/mycollection.index}}
 
<!--T:52-->
Index files are an exception to the rule about small files: it's okay to store them in /nearline.
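The payoff of keeping an index is that you can pull a single member out of a large tarball instead of restoring everything. A minimal sketch, with made-up local paths standing in for the /nearline and /project paths above:

```bash
set -e
# Sketch: create an archive and its index in one pass, then use the
# index to locate and extract a single member. Paths are placeholders.
mkdir -p idxdemo/data
echo 'alpha' > idxdemo/data/a.txt
echo 'beta'  > idxdemo/data/b.txt

# tar's verbose listing goes to stdout when the archive is a file,
# so it can be redirected into an index:
tar -cvvf idxdemo/mycollection.tar -C idxdemo data > idxdemo/mycollection.index

# Find the member you need in the index...
grep 'b.txt' idxdemo/mycollection.index

# ...then extract only that member, not the whole collection:
tar -xf idxdemo/mycollection.tar -C idxdemo data/b.txt
cat idxdemo/data/b.txt
```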
 
=== No access from compute nodes === <!--T:40-->


<!--T:41-->
Because data retrieval from /nearline may take an uncertain amount of time (see ''How it works'' below), we do not permit reading from /nearline in a job context. /nearline is not mounted on compute nodes.


=== Use a data-transfer node if available === <!--T:42-->


<!--T:32-->
Creating a tar or dar file for a large volume of data can be resource-intensive. Please do this on a data-transfer node (DTN) instead of on a login node whenever possible.


== Why /nearline? == <!--T:43-->


<!--T:44-->
Tape as a storage medium has these advantages over disk and solid-state (SSD) media.
# Cost per unit of data stored is lower.
# The volume of data stored can be easily expanded by buying more tapes.
# Energy consumption per unit of data stored is effectively zero.


<!--T:45-->
Consequently we can offer much greater volumes of storage on /nearline than we can on /project. Also, keeping inactive data ''off'' of /project reduces the load and improves its performance.


== How it works == <!--T:46-->


<!--T:22-->
# When a file is first copied to (or created on) /nearline, the file exists only on disk, not tape.
# After a period (on the order of a day), and if the file meets certain criteria, the system will copy the file to tape. At this stage, the file will be on both disk and tape.
# After a further period the disk copy may be deleted, and the file will only be on tape.
# When such a file is recalled, it is copied from tape back to disk, returning it to the second state.
<!--T:2-->
When a file has been moved entirely to tape (that is, when it is ''virtualized'') it will still appear in the directory listing. If the virtual file is read, it will take some time for the tape to be retrieved from the library and copied back to disk. The process which is trying to read the file will block while this is happening. This may take from less than a minute to over an hour, depending on the size of the file and the demand on the tape system.
=== Transferring data from Nearline === <!--T:53-->
<!--T:54-->
While [[Transferring_data|transferring data]] with [[Globus]] or any other tool, data that was on tape is automatically restored to disk as it is read. Since tape access is relatively slow, each file restoration can stall the transfer for a few seconds to a few minutes, so expect longer transfer times from /nearline.
<!--T:58-->
For an overview of the state of all files saved on Nearline, '''some clusters''' support the following command:
{{Command|diskusage_report --nearline --per_user --all_users}}
<!--T:59-->
The possible <code>Location</code> values are:
* <code>On disk and tape</code>: this data is available on disk.
* <code>Modified, will be archived again</code>: the newest version of the data is on disk.
* <code>Archiving in progress</code>: the data is being copied or moved to tape.
* <code>On tape</code>: the data is only on tape.


<!--T:24-->
You can determine whether a given file has been moved to tape or is still on disk using the <tt>lfs hsm_state</tt> command. "hsm" stands for "hierarchical storage manager".


<!--T:47-->
<source lang="bash">
# Here, <FILE> is only on disk.
$ lfs hsm_state <FILE>
<FILE>:  (0x00000000)


<!--T:55-->
# Here, <FILE> is in the process of being copied to tape.
$ lfs hsm_state <FILE>
<FILE>: [...]: exists, [...]
 
<!--T:56-->
# Here, <FILE> is both on the disk and on tape.
$ lfs hsm_state <FILE>
<FILE>: [...]: exists archived, [...]


<!--T:57-->
# Here, <FILE> is on tape but no longer on disk. There will be a lag when opening it.
$ lfs hsm_state <FILE>
<FILE>: [...]: released exists archived, [...]
</source>


You can explicitly force a file to be recalled from tape without actually reading it with the command <code>lfs hsm_restore <FILE></code>.


== Cluster-specific information == <!--T:6-->


<!--T:10-->
<tabs>
<tab name="Béluga">
/nearline is only accessible as a directory on login nodes and on DTNs (''Data Transfer Nodes'').


<!--T:11-->
To use /nearline, just put files into your <tt>~/nearline/PROJECT</tt> directory. After a period of time (24 hours as of February 2019), they will be copied onto tape. If the file remains unchanged for another period (24 hours as of February 2019), the copy on disk will be removed, making the file virtualized on tape.


<!--T:8-->
If you accidentally (or deliberately) delete a file from <tt>~/nearline</tt>, the tape copy will be retained for up to 60 days. To restore such a file, contact [[technical support]] with the full path for the file(s) and desired version (by date), just as you would for restoring a [[Storage and file management#Filesystem quotas and policies|backup]]. Note that since you will need the full path for the file, it is important for you to retain a copy of the complete directory structure of your /nearline space. For example, you can run the command <tt>ls -R > ~/nearline_contents.txt</tt> from the <tt>~/nearline/PROJECT</tt> directory so that you have a copy of the location of all the files.
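A quick sketch of recording and later searching such a listing. The directory name <code>mytree</code> and the file names are placeholders standing in for <tt>~/nearline/PROJECT</tt> and your own data:

```bash
set -e
# Sketch: save a recursive listing of a directory tree, then search it
# to recover the full path of a deleted file. Names are placeholders.
mkdir -p mytree/expA mytree/expB
touch mytree/expA/run1.dat mytree/expB/run2.dat

# Record the complete directory structure:
( cd mytree && ls -R > ../nearline_contents.txt )

# Later, search the saved listing for the lost file; the listing shows
# which subdirectory it lived in:
grep -n 'run2.dat' nearline_contents.txt
```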
</tab>


<!--T:16-->
<tab name="Cedar">
/nearline service similar to that on Béluga.
</tab>
 
<!--T:20-->
<tab name="Graham">
/nearline service similar to that on Béluga.
</tab>
 
<!--T:60-->
<tab name="Narval">
/nearline service similar to that on Béluga.
</tab>


<!--T:17-->
<tab name="Niagara">
HPSS is the /nearline service on Niagara.<br/>
There are three methods to access the service:


<!--T:12-->
1. By submitting HPSS-specific commands <tt>htar</tt> or <tt>hsi</tt> to the Slurm scheduler as a job in one of the archive partitions; see [https://docs.scinet.utoronto.ca/index.php/HPSS the HPSS documentation] for detailed examples. Using job scripts offers the benefit of automating /nearline transfers and is the best method if you use HPSS regularly. Your HPSS files can be found in the $ARCHIVE directory, which is like $PROJECT but with ''/project'' replaced by ''/archive''.


<!--T:13-->
2. To manage a small number of files in HPSS, you can use the VFS (Virtual File System) node, which is accessed with the command <tt>salloc --time=1:00:00 -pvfsshort</tt>. Your HPSS files can be found in the $ARCHIVE directory, which is like $PROJECT but with ''/project'' replaced by ''/archive''.

3. By using [[Globus]] for transfers to and from HPSS using the endpoint <tt>computecanada#hpss</tt>. This is useful for occasional usage and for transfers to and from other sites.
</tab>


</tabs>


</translate>

Latest revision as of 19:08, 25 April 2024