Translations:Handling large collections of files/19/en
* [[Béluga/en | Béluga]] offers roughly 370GB of local disk on the CPU nodes; the GPU nodes have a 1.6TB NVMe disk, which helps with AI image datasets and their millions of small files (see the sketch after this list).
* [[Niagara]] does not have local storage on the compute nodes (but see [[Data_management_at_Niagara#.24SLURM_TMPDIR_.28RAM.29| Data management at Niagara]]).
* For other clusters, you can assume that at least 190GB of local disk is available.
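On most of these clusters, the scheduler exposes the node-local disk through the <code>$SLURM_TMPDIR</code> environment variable (on Niagara it is RAM-backed, as the link above explains). Below is a minimal sketch of a job script that unpacks an archive of many small files onto local disk before processing; the archive path, the <code>dataset</code> directory name, and <code>train.py</code> are hypothetical placeholders, not files provided by the clusters.

<syntaxhighlight lang="bash">
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=4G
#SBATCH --cpus-per-task=1

# Extract the archive of small files onto the node-local disk.
# $SLURM_TMPDIR points to that local storage on the allocated node.
tar -xf ~/scratch/dataset.tar -C "$SLURM_TMPDIR"   # dataset.tar is a placeholder

# Process the files from fast local storage rather than shared storage.
python train.py --data "$SLURM_TMPDIR/dataset"     # train.py is a placeholder

# Pack the results and copy them back to shared storage before the job
# ends; node-local disk is wiped when the job finishes.
tar -cf ~/scratch/results.tar -C "$SLURM_TMPDIR" results
</syntaxhighlight>

Working from <code>$SLURM_TMPDIR</code> avoids hammering the shared filesystem with millions of small-file operations, which is the main reason the local disk sizes listed above matter.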