Narval
{| class="wikitable"
|-
| Availability || September 2021
|-
| Login node || <code>narval.computecanada.ca</code>
|-
| Globus endpoint || <code>computecanada#narval-dtn</code>
|-
| Data transfer node (rsync, scp, sftp, ...) || <code>narval.computecanada.ca</code>
|}
Narval is a general-purpose cluster designed for a variety of workloads and situated at the [http://www.etsmtl.ca/ École de technologie supérieure] in Montreal. The cluster is named in honour of the [https://en.wikipedia.org/wiki/Narwhal Narwhal], a whale which has sometimes been observed in the Gulf of St. Lawrence.
=Site-specific policies=
By policy, Narval's compute nodes cannot access the internet. If you need an exception to this rule, contact technical support with information about the IP address, port number(s) and protocol(s) needed as well as the duration and a contact person.
Crontab is not offered on Narval.
Each job on Narval should have a duration of at least one hour (five minutes for test jobs) and a user cannot have more than 1000 jobs, running and queued, at any given moment. The maximum duration for a job on Narval is 7 days (168 hours).
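As an illustration of these limits, here is a minimal sketch of a batch script, assuming the Slurm scheduler used on Alliance clusters; the account name and program are placeholders.

<pre>
#!/bin/bash
#SBATCH --account=def-someprof   # placeholder: replace with your own allocation
#SBATCH --time=03:00:00          # at least 1 hour, at most 7 days (168:00:00)
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=2G

./my_program                     # placeholder for the actual computation
</pre>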
=Storage=
{| class="wikitable"
|-
| HOME <br> Lustre filesystem, ~100 TB of space ||
* Location of home directories, each of which has a small fixed quota.
* You should use the <code>project</code> space for larger storage needs.
* There is a daily backup of the home directories.
|-
| SCRATCH <br> Lustre filesystem, ~5 PB of space ||
* Large space for storing temporary files during computations.
* No backup system in place.
* There is an [[Scratch_purging_policy | automated purge]] of older files in this space.
|-
| PROJECT <br> Lustre filesystem, ~15 PB of space ||
* This space is designed for sharing data among the members of a research group and for storing large amounts of data.
* There is a daily backup of the project space.
|}
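As a hypothetical sketch of how these spaces are typically combined, the commands below run a computation in the scratch space and keep only the final results in the project space; the <code>$SCRATCH</code> variable and the <code>~/projects/def-someprof</code> path follow the usual Alliance conventions, and the group name is a placeholder.

<pre>
# Work in scratch during the computation (large, purged, not backed up).
mkdir -p $SCRATCH/run_01 && cd $SCRATCH/run_01

# ... run the computation here, writing temporary files locally ...

# Copy only the results worth keeping to the project space (backed up daily).
cp results.dat ~/projects/def-someprof/$USER/
</pre>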
For transferring data via Globus, you should use the endpoint <code>computecanada#narval-dtn</code>, while for tools like rsync and scp you can use a login node.
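For example, a transfer to your scratch space with rsync might look like the following; the username and paths are placeholders.

<pre>
# Copy a local directory to scratch on Narval via a login node.
rsync -avP my_dataset/ username@narval.computecanada.ca:/scratch/username/my_dataset/
</pre>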
=High-performance interconnect=
A Mellanox InfiniBand HDR network links together all of the nodes of the cluster. Each hub of 40 HDR ports (200 Gb/s) can connect up to 66 nodes with HDR100 (100 Gb/s), using 33 HDR links that are each split in two by special cables. The seven remaining HDR links connect the hub to a rack containing the seven central HDR InfiniBand hubs. The islands of nodes are therefore connected with a maximum blocking factor of 33:7 (4.7:1). In contrast, the storage servers are connected to a non-blocking network.
In practice, the Narval racks contain islands of 48 or 56 regular CPU nodes. It is therefore possible to run parallel jobs of up to 3584 cores with a non-blocking network. For larger jobs, or for jobs scattered across the network, the blocking factor is 4.7:1. The interconnect nonetheless remains high-performance.
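If you want the scheduler to try to place a job entirely within one non-blocking island, Slurm's generic <code>--switches</code> option can express that preference; this is only a sketch, since it assumes the topology plugin on Narval exposes the islands, which is not stated here.

<pre>
#!/bin/bash
#SBATCH --account=def-someprof   # placeholder allocation
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=64
# Prefer at most one leaf switch (island); give up and run anyway after waiting 12 hours.
#SBATCH --switches=1@12:00:00
#SBATCH --time=06:00:00

srun ./my_mpi_program            # placeholder
</pre>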
=Node characteristics=
{| class="wikitable"
! nodes !! cores !! available memory !! CPU !! storage !! GPU
|-
| 1109 || rowspan="2" | 64 || ~256000M || rowspan="2" | 2 x AMD Rome 7532 @ 2.40 GHz, 256M cache L3 || rowspan="2" | 1 x 960 GB SSD || rowspan="2" | -
|-
| 33 || ~2048000M
|-
| 158 || 48 || ~512000M || 2 x AMD Milan 7413 @ 2.65 GHz, 128M cache L3 || 1 x 3.84 TB SSD || 4 x NVidia A100 (40 GB memory)
|}
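For instance, a job targeting one of the A100 nodes might request resources as in the sketch below; the GRES name <code>a100</code> and the account are assumptions, so verify them against the scheduler configuration. Requesting 12 cores per GPU matches the 48 cores shared by the 4 GPUs on these nodes.

<pre>
#!/bin/bash
#SBATCH --account=def-someprof    # placeholder allocation
#SBATCH --gpus-per-node=a100:1    # assumes the GPU type is exposed as "a100"
#SBATCH --cpus-per-task=12        # 48 cores / 4 GPUs
#SBATCH --mem=120G
#SBATCH --time=12:00:00

./my_gpu_program                  # placeholder
</pre>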