Narval
(Created page with "For transferring data via Globus, you should use the endpoint <code>computecanada#beluga-dtn</code>, while for tools like rsync and scp you can use a login node.") |
(Created page with "=High-performance interconnect= The [InfiniBand] [https://www.nvidia.com/en-us/networking/infiniband/qm8700/ Mellanox HDR] network links together all of the nodes of the clust...") |
||
Line 55: | Line 55: | ||
For transferring data via Globus, you should use the endpoint <code>computecanada#beluga-dtn</code>, while for tools like rsync and scp you can use a login node. | For transferring data via Globus, you should use the endpoint <code>computecanada#beluga-dtn</code>, while for tools like rsync and scp you can use a login node. | ||
= | =High-performance interconnect= | ||
The [InfiniBand] [https://www.nvidia.com/en-us/networking/infiniband/qm8700/ Mellanox HDR] network links together all of the nodes of the cluster. Each hub of 40 HDR ports (200 Gb/s) can connect up to 66 nodes with HDR100 (100 Gb/s) with 33 HDR links divided in two (2) by special cables. The seven (7) remaining HDR links allow the hub to be connected to a rack containing the seven (7) cental HDR InfiniBand hubs. The islands of nodes are therefore connected by a maximum blocking factor of 33:7 (4.7:1). In contrast, the storage servers are connected by a non-blocking network. | |||
En pratique, les cabinets de Narval contiennent des îlots de 48 ou 56 nœuds CPU réguliers. Il est donc possible d'exécuter des tâches parallèles utilisant jusqu’à 3584 cœurs et une réseautique non bloquante. Pour des tâches plus imposantes ou plus fragmentées sur le réseau, le facteur de blocage est de 4.7:1. L’interconnexion reste malgré tout de haute performance. | En pratique, les cabinets de Narval contiennent des îlots de 48 ou 56 nœuds CPU réguliers. Il est donc possible d'exécuter des tâches parallèles utilisant jusqu’à 3584 cœurs et une réseautique non bloquante. Pour des tâches plus imposantes ou plus fragmentées sur le réseau, le facteur de blocage est de 4.7:1. L’interconnexion reste malgré tout de haute performance. |
Revision as of 17:17, 3 August 2021
Availability: September 2021
Login node: narval.computecanada.ca
Globus endpoint: computecanada#narval-dtn
Data transfer node (rsync, scp, sftp, ...): narval.computecanada.ca
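Access is over SSH; as a minimal sketch (the username is a placeholder):

```bash
# Log in to Narval's login node; replace "username" with your own account.
ssh username@narval.computecanada.ca
```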
Narval is a general purpose cluster designed for a variety of workloads and situated at the École de technologie supérieure in Montreal. The cluster is named in honour of the St. Lawrence River's narwhal population.
Site-specific policies
By policy, Narval's compute nodes cannot access the internet. If you need an exception to this rule, contact technical support with information about the IP address, port number(s) and protocol(s) needed, as well as the duration and a contact person.
Crontab is not offered on Narval.
Each job on Narval should have a duration of at least one hour (five minutes for test jobs) and a user cannot have more than 1000 jobs, running and queued, at any given moment. The maximum duration for a job on Narval is 7 days (168 hours).
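As a hedged illustration of these limits, here is a minimal sketch of a batch script for Slurm (the scheduler used on Compute Canada clusters) whose requested walltime falls between the one-hour minimum and the seven-day maximum; the account name is a placeholder:

```bash
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder; use your own allocation account
#SBATCH --time=24:00:00          # >= 1-hour minimum, <= 168-hour (7-day) maximum
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=4000M      # memory per core

echo "Job $SLURM_JOB_ID running on $(hostname)"
```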
Storage
HOME: Lustre filesystem, ~100 TB of space
SCRATCH: Lustre filesystem, ~5 PB of space
PROJECT: Lustre filesystem, ~15 PB of space
For transferring data via Globus, you should use the endpoint computecanada#narval-dtn, while for tools like rsync and scp you can use a login node.
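For example, a hedged sketch of an rsync transfer through the login node (the username and paths are placeholders):

```bash
# Copy a local directory to Narval over SSH; "username" and the
# destination path are placeholders.
rsync -avP ./results username@narval.computecanada.ca:~/scratch/results/
```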
High-performance interconnect
The Mellanox HDR InfiniBand network links together all of the nodes of the cluster. Each hub of 40 HDR ports (200 Gb/s) can connect up to 66 nodes with HDR100 (100 Gb/s) by splitting 33 of its HDR links in two with special cables. The seven remaining HDR links connect the hub to a rack containing the seven central HDR InfiniBand hubs. The islands of nodes are therefore connected with a maximum blocking factor of 33:7 (4.7:1). In contrast, the storage servers are connected to a non-blocking network.
In practice, Narval's racks contain islands of 48 or 56 regular CPU nodes. It is therefore possible to run parallel jobs that use up to 3584 cores (56 nodes × 64 cores each) with non-blocking networking. For jobs that are larger, or more fragmented across the network, the blocking factor is 4.7:1. The interconnect nonetheless remains high-performance.
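As an illustrative sketch not drawn from the original page: Slurm's --switches option asks the scheduler to place a job on at most a given number of leaf switches, which on Narval corresponds to keeping the job inside one non-blocking island; the time after the @ bounds how long the scheduler will wait for such a placement.

```bash
#!/bin/bash
#SBATCH --account=def-someuser     # placeholder allocation account
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=64       # 64 cores per regular CPU node
#SBATCH --time=12:00:00
#SBATCH --switches=1@02:00:00      # prefer a single island; wait at most 2 hours for it

srun ./my_mpi_app                  # hypothetical MPI application
```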
Node characteristics
nodes | cores | available memory | CPU | storage | GPU
---|---|---|---|---|---
1109 | 64 | ~256000M | 2 x AMD Rome 7532 @ 2.40 GHz, 256M L3 cache | 1 x 960G SSD | -
33 | 64 | ~2048000M | 2 x AMD Rome 7532 @ 2.40 GHz, 256M L3 cache | 1 x 960G SSD | -
158 | 48 | ~512000M | 2 x AMD Milan 7413 @ 2.65 GHz, 128M L3 cache | 1 x 3.84T SSD | 4 x NVidia A100 (40 GB memory)
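As a hedged sketch of requesting resources on one of the A100 nodes above (the account name is a placeholder, and the "a100" GRES type string is an assumption based on the table, not on the original page):

```bash
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder allocation account
#SBATCH --gres=gpu:a100:1        # one A100; the "a100" type string is an assumption
#SBATCH --cpus-per-task=12       # 48 cores / 4 GPUs = 12 cores per GPU
#SBATCH --mem=120G               # roughly a quarter of ~512000M, with headroom
#SBATCH --time=03:00:00

nvidia-smi                       # report the allocated GPU
```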