Translations:Cedar/10/en: Difference between revisions

(Importing a new version from external source)
 
(6 intermediate revisions by the same user not shown)
Previous revision:

All of the above nodes will have local (on-node) temporary storage. GPU nodes will have a single 800GB SSD drive. All other compute nodes will have dual 480GB SSD drives, for a total raw capacity of 960GB.

Latest revision as of 20:54, 9 August 2023:

All nodes have local (on-node) temporary storage. Compute nodes (except GPU nodes) have two 480GB SSD drives, for a total raw capacity of 960GB. GPU nodes have either an 800GB or a 480GB SSD drive. Use node-local storage through the job-specific directory created by the scheduler, <code>$SLURM_TMPDIR</code>. See [[Using node-local storage]].

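As a quick illustration of how a job might use that directory, the sketch below stages data through <code>$SLURM_TMPDIR</code> in a Slurm batch script. It is an assumption-laden example, not part of this page: the resource requests, file names, and <code>my_program</code> are placeholders, and only <code>$SLURM_TMPDIR</code> itself comes from the text above.

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --time=00:30:00      # placeholder walltime
#SBATCH --mem=4G             # placeholder memory request

# $SLURM_TMPDIR is the job-specific directory on the node-local SSD;
# the scheduler creates it at job start and removes it when the job ends.

# Stage a (hypothetical) input file onto the fast local disk.
cp "$SLURM_SUBMIT_DIR/input.dat" "$SLURM_TMPDIR/"

cd "$SLURM_TMPDIR"

# Run a (hypothetical) program against the local copy.
my_program input.dat > output.dat

# Copy results back to the shared filesystem before the job ends,
# because the contents of $SLURM_TMPDIR are deleted afterwards.
cp output.dat "$SLURM_SUBMIT_DIR/"
</syntaxhighlight>

See [[Using node-local storage]] for the recommended patterns on the clusters.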