A Mellanox InfiniBand EDR (100 Gb/s) network connects all nodes of the cluster. A central 324-port switch links the cluster's island topology with a maximum blocking factor of 5:1. The storage servers have a non-blocking connection. This architecture allows multiple parallel jobs of up to 640 cores (or more) to run on a non-blocking network; for jobs requiring greater parallelism the blocking factor is 5:1, but the interconnect remains high-performance even for jobs spanning several islands.
=Node characteristics=
Turbo mode is activated on all compute nodes of Béluga.
| 1 || 32 || 375G or 384000M || 2 x Intel Gold 6226R Cascade Lake @ 2.9 GHz || 2 x SSD 480G || 8 x NVidia T4 (16G memory)
|}
* To get a larger <code>$SLURM_TMPDIR</code> space, a job can be submitted with <code>--tmp=xG</code>, where <code>x</code> is a value between 350 and 2490.
* The 4 TB AMD nodes can be requested with <code>--partition=c-slarge</code>. Note: these nodes do not support AVX512 instructions.
* The T4 GPUs are not yet available via Slurm; only CPU cores are usable.
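The options above can be combined in an ordinary Slurm submission script. A minimal sketch follows; the account name, resource amounts, and workload line are placeholders for illustration, not site recommendations:

```shell
#!/bin/bash
# Hypothetical job script illustrating the options described above.
#SBATCH --account=def-someuser    # placeholder: replace with your allocation
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G
#SBATCH --time=03:00:00
#SBATCH --tmp=800G                # larger $SLURM_TMPDIR (any value from 350 to 2490)
##SBATCH --partition=c-slarge     # uncomment to target the 4 TB AMD nodes (no AVX512)

cd "$SLURM_TMPDIR"                # fast node-local scratch provided for the job
# ... run your workload here ...
```

Submitted with <code>sbatch</code>, such a script would receive a node-local <code>$SLURM_TMPDIR</code> of at least the requested size.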