Translations:Cedar/10/en

All nodes have local (on-node) temporary storage. Compute nodes (except GPU nodes) have two 480GB SSD drives, for a total raw capacity of 960GB. GPU nodes have either an 800GB or a 480GB SSD drive. Use node-local storage through the job-specific directory created by the scheduler, <code>$SLURM_TMPDIR</code>. See [[Using node-local storage]].
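
The sketch below illustrates this practice with a minimal Slurm batch script that stages data into <code>$SLURM_TMPDIR</code>, works on the local copy, and copies results back to persistent storage before the job ends. It is not taken from this page; the resource requests, file paths, and the program name <code>my_program</code> are hypothetical placeholders.

<pre>
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G

# Stage input data onto the fast node-local SSD (hypothetical file name and path).
cp ~/scratch/input_data.tar $SLURM_TMPDIR/
cd $SLURM_TMPDIR
tar -xf input_data.tar

# Run the computation against the local copy (hypothetical program name).
my_program --input $SLURM_TMPDIR/input_data --output $SLURM_TMPDIR/results

# Copy results back to persistent storage before the job ends,
# since the node-local directory is removed when the job finishes.
cp -r $SLURM_TMPDIR/results ~/scratch/
</pre>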
