Handling large collections of files

<!--T:19-->
* [[Béluga/en | Béluga]] offers roughly 370GB of local disk on its CPU nodes; the GPU nodes have a 1.6TB NVMe disk (to help with AI image datasets and their millions of small files).
* [[Niagara]] does not have local storage on the compute nodes (but see [[Data_management_at_Niagara#.24SLURM_TMPDIR_.28RAM.29| Data management at Niagara]]).
* For other clusters, you can assume that at least 190GB of local disk is available.
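
Since node-local disk is only visible from within a job and is cleared when the job ends, the usual pattern is to unpack an archive of many small files into <code>$SLURM_TMPDIR</code> at the start of the job and pack any results back to shared storage at the end. Below is a minimal sketch of such a job script; the archive name <code>dataset.tar</code>, the project path <code>~/projects/def-someuser</code>, and <code>my_program</code> are placeholders, not real names.

<source lang="bash">
#!/bin/bash
#SBATCH --time=01:00:00
#SBATCH --mem=4G

# Unpack the collection of small files onto node-local disk;
# $SLURM_TMPDIR points at per-job local scratch on the compute node.
tar -xf ~/projects/def-someuser/dataset.tar -C "$SLURM_TMPDIR"

# Work against the local copy (placeholder command).
my_program --input "$SLURM_TMPDIR/dataset" --output "$SLURM_TMPDIR/results"

# Local disk is wiped when the job terminates, so archive the
# results back to shared storage before exiting.
tar -cf ~/projects/def-someuser/results.tar -C "$SLURM_TMPDIR" results
</source>

On Niagara, where the compute nodes have no local disk, the same pattern works with the RAM-backed <code>$SLURM_TMPDIR</code> described in the page linked above.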

