Translations:Cedar/10/en: Difference between revisions

(Importing a new version from external source)
(Importing a new version from external source)
Line 1:
Previous text: All of the above nodes have local (on-node) temporary storage. GPU nodes have a single 800GB SSD drive. All other compute nodes have two 480GB SSD drives, for a total raw capacity of 960GB.
New text: All of the above nodes have local (on-node) temporary storage. GPU nodes have a single 800GB SSD drive. All other compute nodes have two 480GB SSD drives, for a total raw capacity of 960GB. The best practice for accessing node-local storage is to use the job-specific directory generated by [[Running jobs|Slurm]], $SLURM_TMPDIR.

Revision as of 19:36, 25 August 2017

Message definition (Cedar)
All nodes have local (on-node) temporary storage. Compute nodes (except GPU nodes) have two 480GB SSD drives, for a total raw capacity of 960GB. GPU nodes have either an 800GB or a 480GB SSD drive. Use node-local storage through the job-specific directory created by the scheduler, <code>$SLURM_TMPDIR</code>. See [[Using node-local storage]].
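
A minimal job-script sketch of this practice is shown below. The program name, file names, and resource requests are hypothetical placeholders, not part of the message text: the script stages input into <code>$SLURM_TMPDIR</code>, runs from that directory so temporary I/O stays on the node-local SSD, and copies the results back to persistent storage before the job ends (the contents of <code>$SLURM_TMPDIR</code> are typically removed when the job finishes).

<pre>
#!/bin/bash
#SBATCH --time=0:30:00        # example walltime; adjust for your job
#SBATCH --mem=4000M           # example memory request

# Stage input data into the job-specific node-local directory created by Slurm.
cp ~/scratch/input.dat "$SLURM_TMPDIR/"

# Run from node-local storage so temporary files use the local SSD.
cd "$SLURM_TMPDIR"
./my_program input.dat > output.dat   # hypothetical program

# Copy results back to persistent storage before the job ends.
cp output.dat ~/scratch/
</pre>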
