Using node-local storage

This article is a draft: a work in progress that may not be complete or authoritative.

When Slurm starts a job, it creates a temporary directory on each node assigned to the job. It then sets the full path name of that directory in an environment variable called SLURM_TMPDIR.

Because this directory resides on local disk, input and output (I/O) to it is almost always faster than I/O to network storage (/project, /scratch, or /home). In particular, local disk handles frequent small I/O transactions much better than network storage does. Any job doing a lot of input and output (which is most jobs!) can expect to run more quickly if it uses $SLURM_TMPDIR instead of network storage.

The temporary character of $SLURM_TMPDIR makes it more trouble to use than network storage. Input must be copied from network storage to $SLURM_TMPDIR before it can be read, and output must be copied from $SLURM_TMPDIR back to network storage before the job ends to preserve it for later use.
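
The typical pattern is to stage input onto the local disk at the start of the job script, run the application against the local copy, and copy results back at the end. Below is a minimal sketch of such a job script; the account, time limit, paths, and application name are placeholders, not part of any standard setup.

#!/bin/bash
#SBATCH --account=def-someone
#SBATCH --time=01:00:00

# Stage input from network storage to node-local storage.
cp /project/def-someone/you/input.dat $SLURM_TMPDIR/

# Run the application, reading and writing on local disk.
my_application $SLURM_TMPDIR/input.dat $SLURM_TMPDIR/results.dat

# Copy results back to network storage before the job ends.
cp $SLURM_TMPDIR/results.dat /project/def-someone/you/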

Input

In order to read data from $SLURM_TMPDIR, you must first copy the data there. In the simplest case you can do this with cp or rsync:

cp /project/def-someone/you/input.files.* $SLURM_TMPDIR/

This may not work if the input is too large, or if it must be read by processes on different nodes. See "Amount of space" and "Multi-node jobs" below for more.
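
If the input consists of many small files, it is often faster to keep them in a single tar archive on network storage and unpack that archive onto local disk, turning many small network reads into one large sequential read. A sketch, where the archive name and path are placeholders:

# One large read from network storage, then fast local access inside the job.
tar -xf /project/def-someone/you/dataset.tar -C $SLURM_TMPDIR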

Executable files and libraries

A special case of input is the application code itself. In order to run the application, the shell started by Slurm must open at least the application's executable file, which it typically reads from network storage. Few applications these days consist of exactly one file, though; most also need several other files (such as libraries) in order to work.

We particularly find that using an application in a Python virtual environment generates a large number of small I/O transactions, more than it takes to create the virtual environment in the first place. This is why we recommend creating virtual environments inside your jobs, using $SLURM_TMPDIR.
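
For example, a job script can build its virtual environment on node-local storage at the start of the job. The sketch below assumes a Python module is available on the cluster and that your dependencies are listed in a requirements.txt file; the module version is only an example.

# Create and activate a virtual environment on node-local storage.
module load python/3.10
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate

# Install dependencies (assumes they are available as pre-built wheels on the cluster).
pip install --no-index --upgrade pip
pip install --no-index -r requirements.txt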

Output

Output data must be copied from $SLURM_TMPDIR back to some permanent storage before the job ends. If a job times out, the last few lines of the job script might not be executed. This can be addressed in two ways:

  • First, obviously, request enough run time to let the application finish. We understand that this isn't always possible.
  • Write checkpoints to network storage, not to $SLURM_TMPDIR.
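
A sketch of the end of a job script that copies output back in a single large transfer; the output directory and archive names are placeholders, and it assumes the application wrote its results into $SLURM_TMPDIR/results:

# Pack the output directory into one archive and write it to network storage
# as a single large transfer, rather than as many small files.
tar -cf /project/def-someone/you/results.tar -C $SLURM_TMPDIR results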

Multi-node jobs

If a job spans multiple nodes and some data is needed on every node, then a simple cp or tar -x will not suffice.

Copy files

Copy one or more files to the $SLURM_TMPDIR directory on every node allocated to the job:

[name@server ~]$ pdcp -w $(slurm_hl2hl.py --format PDSH) file [files...] $SLURM_TMPDIR

Or use GNU Parallel to do the same:

[name@server ~]$ parallel -S $(slurm_hl2hl.py --format GNU-Parallel) --env SLURM_TMPDIR --workdir $PWD --onall cp file [files...] ::: $SLURM_TMPDIR
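
Inside a multi-node job script, the copy is done once before the parallel application starts, and every process then reads the copy on its own node. A sketch; the application and input file names are placeholders:

# Distribute the input to the local disk of every allocated node.
pdcp -w $(slurm_hl2hl.py --format PDSH) input.dat $SLURM_TMPDIR

# Each process reads its node-local copy.
srun my_mpi_application $SLURM_TMPDIR/input.dat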

Compressed Archives

ZIP

Extract the archive into $SLURM_TMPDIR on every node:

[name@server ~]$ pdsh -w $(slurm_hl2hl.py --format PDSH) unzip archive.zip -d $SLURM_TMPDIR

Tarball

Extract the archive into $SLURM_TMPDIR on every node:

[name@server ~]$ pdsh -w $(slurm_hl2hl.py --format PDSH) tar -xvf archive.tar.gz -C $SLURM_TMPDIR
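
To confirm that the files have landed on every node, you can list the directory on each of them. A sketch:

pdsh -w $(slurm_hl2hl.py --format PDSH) ls -l $SLURM_TMPDIR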

Amount of space

At Niagara $SLURM_TMPDIR is implemented as "RAMdisk", so the amount of space available is limited by the memory on the node, less the amount of RAM used by your application. See Data management at Niagara for more.

At the general-purpose clusters Béluga, Cedar, and Graham, the amount of space available depends on the cluster and the node to which your job is assigned.

cluster    space in $SLURM_TMPDIR    size of disk
Béluga     370G                      480G
Cedar      840G                      960G
Graham     750G                      960G

The table above gives the amount of space in $SLURM_TMPDIR on the smallest node in each cluster. If your job reserves whole nodes then you can reasonably assume that this much space is available to you in $SLURM_TMPDIR on each node. However, if the job requests less than a whole node, then other jobs may also write to the same filesystem (but not the same directory!), reducing the space available to your job.

Some nodes at each site have more local disk than shown above. See "Node characteristics" at the appropriate page (Béluga, Cedar, Graham) for guidance.
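
If you are not sure how much local space is actually available to your job, you can check from inside the job itself. A sketch:

# Report the size and free space of the filesystem holding $SLURM_TMPDIR.
df -h $SLURM_TMPDIR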