For transferring data via Globus, you should use the endpoint <code>computecanada#beluga-dtn</code>, while for tools like rsync and scp you can use a login node.
=High-performance interconnect=
The [https://en.wikipedia.org/wiki/InfiniBand InfiniBand] [https://www.nvidia.com/en-us/networking/infiniband/qm8700/ Mellanox HDR] network links together all of the nodes of the cluster. Each hub of 40 HDR ports (200 Gb/s) can connect up to 66 nodes with HDR100 (100 Gb/s), with 33 of its HDR links each split in two by special cables. The seven (7) remaining HDR links connect the hub to a rack containing the seven (7) central HDR InfiniBand hubs. The islands of nodes are therefore connected with a maximum blocking factor of 33:7 (4.7:1). In contrast, the storage servers are connected by a non-blocking network.
In practice, the Narval racks contain islands of 48 or 56 regular CPU nodes. It is therefore possible to run parallel jobs using up to 3584 cores with a non-blocking network. For larger jobs, or jobs distributed in a fragmented manner across the network, the blocking factor is 4.7:1. The interconnect nonetheless remains high-performance.
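The figures above can be checked with a quick calculation. This sketch assumes 64 cores per regular CPU node, a figure not stated explicitly here but implied by 56 nodes yielding 3584 cores:

```python
# Verify the interconnect figures quoted above.
# Assumption: 64 cores per regular CPU node (implied by 56 * 64 = 3584).
cores_per_node = 64
nodes_per_island = 56   # largest island of regular CPU nodes on one hub
downlinks = 33          # HDR links split in two, serving up to 66 nodes
uplinks = 7             # remaining HDR links toward the central hubs

max_nonblocking_cores = nodes_per_island * cores_per_node
blocking_factor = downlinks / uplinks

print(max_nonblocking_cores)        # → 3584
print(round(blocking_factor, 1))    # → 4.7, i.e. the 33:7 (4.7:1) ratio
```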