Narval/en

{| class="wikitable"
|-
| Availability: since October, 2021
|-
| Login node: '''narval.alliancecan.ca'''
|-
| Globus Collection: '''[https://app.globus.org/file-manager?origin_id=a1713da6-098f-40e6-b3aa-034efe8b6e5b Compute Canada - Narval]'''
|-
| Data transfer node (rsync, scp, sftp,...): '''narval.alliancecan.ca'''
|-
| Portal: https://portail.narval.calculquebec.ca/
|}


Narval is a general purpose cluster designed for a variety of workloads; it is located at the [https://www.etsmtl.ca/en/home École de technologie supérieure] in Montreal. The cluster is named in honour of the [https://en.wikipedia.org/wiki/Narwhal narwhal], a species of whale which has occasionally been observed in the Gulf of St. Lawrence.


==Site-specific policies==
By policy, Narval's compute nodes cannot access the internet. If you need an exception to this rule, contact [[Technical_support|technical support]] explaining what you need and why.

Crontab is not offered on Narval.

Each job on Narval should have a duration of at least one hour (five minutes for test jobs) and you cannot have more than 1000 jobs, running or queued, at any given moment. The maximum duration for a job on Narval is 7 days (168 hours).
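For example, a minimal batch script that respects these limits could look like the following sketch, where the account name and program are placeholders:

<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder: use your own allocation
#SBATCH --time=01:00:00          # at least one hour for a regular job
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G

./my_program                     # placeholder executable
</pre>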
 
==Storage==
{| class="wikitable sortable"
|-
| HOME <br> Lustre filesystem, 40 TB of space ||
*Location of home directories, each of which has a small fixed quota.
*You should use the <code>project</code> space for larger storage needs.
*Small per user [[Storage_and_file_management#Filesystem_quotas_and_policies|quota]].
*There is a daily backup of the home directories.
|-
| SCRATCH <br> Lustre filesystem, 5.5 PB of space ||
*Large space for storing temporary files during computations.
*No backup system in place.
*Large [[Storage_and_file_management#Filesystem_quotas_and_policies|quota]] per user.
*There is an [[Scratch_purging_policy|automated purge]] of older files in this space.
|-
| PROJECT <br> Lustre filesystem, 19 PB of space ||
*This space is designed for sharing data among the members of a research group and for storing large amounts of data.
*Large and adjustable per group [[Storage_and_file_management#Filesystem_quotas_and_policies|quota]].
*There is a daily backup of the project space.
|}
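To compare your current usage against these quotas, you can run the usage-reporting command available on Alliance clusters from any login node (shown here as a sketch; see [[Storage and file management]] for details):

<pre>
# Summary of your usage and quotas on the home, scratch and project filesystems.
diskusage_report
</pre>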


For transferring data via [[Globus]], you should use the endpoint specified at the top of this page, while for tools like [[Transferring_data#Rsync|rsync]] and [[Transferring_data#SCP|scp]] you can use a login node.
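For example, a small transfer with rsync over SSH might look like this (the source directory and destination path are placeholders):

<pre>
# Copy a local directory to your scratch space on Narval; -a preserves
# attributes, -v is verbose and -P shows progress and allows resuming.
rsync -avP mydata/ username@narval.alliancecan.ca:scratch/mydata/
</pre>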


==High-performance interconnect==
The [https://en.wikipedia.org/wiki/InfiniBand InfiniBand] [https://www.nvidia.com/en-us/networking/infiniband/qm8700/ Mellanox HDR] network links together all of the nodes of the cluster. Each 40-port HDR switch (200 Gb/s per port) can connect up to 66 nodes at HDR100 (100 Gb/s), using 33 HDR links each split in two by special cables. The seven (7) remaining HDR links connect the rack-level switch to each of the seven (7) central HDR InfiniBand switches. The islands of nodes are therefore connected with a maximum blocking factor of 33:7 (4.7:1). In contrast, the storage servers are connected with a much lower blocking factor in order to maximize performance.


In practice, the Narval racks contain islands of 48 or 56 regular CPU nodes. It is therefore possible to run parallel jobs using up to 3584 cores with a non-blocking network. For larger jobs, or jobs spread across the network in a fragmented way, the blocking factor is 4.7:1. The interconnect nonetheless remains high-performance.
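If a multi-node job benefits from staying within a single non-blocking island, Slurm's <code>--switches</code> option can express that preference. This is a sketch only; whether and how quickly such a placement is granted depends on the scheduler configuration, and the account name and program are placeholders:

<pre>
#!/bin/bash
#SBATCH --account=def-someuser    # placeholder
#SBATCH --time=03:00:00
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=64
#SBATCH --switches=1@02:00:00     # prefer a single leaf switch, waiting at most 2 hours

srun ./my_mpi_program             # placeholder executable
</pre>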


==Node characteristics==
{| class="wikitable sortable"
! nodes !! cores !! available memory !! CPU !! storage !! GPU
|-
| 1145 || rowspan="3"|64 || 249G or 255000M || rowspan="2"|2 x AMD Rome 7532 @ 2.40 GHz, 256M L3 cache || rowspan="3"|1 x 960G SSD || rowspan="3"|-
|-
|  33 || 2009G or 2057500M
|-
|    3 || 4000G or 4096000M || 2 x AMD Rome 7502 @ 2.50 GHz, 128M L3 cache
|-
| 159 || 48 || 498G or 510000M || 2 x AMD Milan 7413 @ 2.65 GHz, 128M L3 cache || 1 x 3.84T SSD || 4 x NVidia A100SXM4 (40 GB memory), connected via NVLink
|}
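As an illustration based on the table above, a whole A100 node could be requested roughly as follows. This is a sketch: the account, the GPU type string <code>a100</code> and the workload are assumptions or placeholders, so check the scheduler documentation for the exact syntax in force:

<pre>
#!/bin/bash
#SBATCH --account=def-someuser      # placeholder
#SBATCH --time=06:00:00
#SBATCH --cpus-per-task=48          # an A100 node has 48 cores
#SBATCH --mem=498G                  # stay within the node's available memory
#SBATCH --gpus-per-node=a100:4      # GPU type string is an assumption

nvidia-smi                          # placeholder workload
</pre>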
==AMD processors==
===Supported instruction sets===
Narval is equipped with 2nd and 3rd generation AMD EPYC processors which support the [https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#Advanced_Vector_Extensions_2 AVX2 instruction set]. This instruction set is the same as that found on the Intel processors on the nodes at [[Béluga/en#Node_characteristics|Béluga]], [[Cedar#Node_characteristics|Cedar]], [[Graham#Node_characteristics|Graham]] and [[Niagara#Node_characteristics|Niagara]].
However, Narval does not support the [https://en.wikipedia.org/wiki/AVX-512 AVX512] instruction set, in contrast to the nodes of Béluga and Niagara as well as certain nodes of Cedar and Graham.
AVX2 is supported on nodes with Broadwell CPUs, while both instruction sets (AVX2 and AVX512) are supported on nodes with [https://en.wikipedia.org/wiki/Skylake Skylake] or [https://en.wikipedia.org/wiki/Cascade_Lake_(microarchitecture) Cascade Lake] CPUs. Consequently, an application compiled on the Broadwell nodes of Cedar and Graham, including their login nodes, will run on Narval, but an application compiled on Béluga, on Niagara, or on a Skylake or Cascade Lake node of Cedar or Graham will not. Such an application must be recompiled (see ''Intel compilers'' below).
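If you are unsure which vector instruction sets a given node supports, you can inspect the CPU flags reported by the kernel. This is a minimal sketch, not an official tool:

<pre>
# Prints "avx2" on Narval; on an AVX-512 capable node it also prints "avx512f".
grep -m 1 '^flags' /proc/cpuinfo | grep -o -w -e 'avx2' -e 'avx512f' | sort -u
</pre>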
===Intel compilers===
Intel compilers can compile applications for Narval's AMD processors with AVX2 and earlier instruction sets. Use the <tt>-march=core-avx2</tt> option to produce executables which are compatible with both Intel and AMD processors.
However, if you have compiled a program on a system which uses Intel processors and you have used one or more options like <tt>-xXXXX</tt>, such as <tt>-xCORE-AVX2</tt>, the compiled program will not work on Narval because the Intel compilers add additional instructions in order to verify that the processor is an Intel product. On Narval, the options <tt>-xHOST</tt> and <tt>-march=native</tt> are equivalent to <tt>-march=pentium</tt> (the old 1993 Pentium) and should <b>not</b> be used.
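As a sketch (the module name and compiler driver shown are assumptions about what is installed, and <code>mycode.c</code> is a placeholder), a portable build on Narval could look like:

<pre>
module load intel              # exact module name/version is an assumption

# -march=core-avx2 produces AVX2 code that runs on both Intel and AMD CPUs.
icx -O2 -march=core-avx2 -o mycode mycode.c

# Avoid -xHOST, -xCORE-AVX2 and -march=native on Narval (see above).
</pre>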
===Software environments===
[[Standard software environments|StdEnv/2023]] is the standard software environment on Narval; previous versions (2016 and 2018) have been blocked intentionally. If you need an application only available with an older standard environment, please write to [[Technical support]].
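For example, you can confirm which standard environment is active in your session (a short sketch; <code>StdEnv/2023</code> is normally loaded by default at login):

<pre>
module load StdEnv/2023   # make sure the current standard environment is loaded
module list               # show the modules currently loaded in your session
</pre>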
===BLAS and LAPACK libraries===
The Intel MKL library works with AMD processors, although not in an optimal way. We now favour the use of the FlexiBLAS library. For more details, please consult the page on [[BLAS and LAPACK]].
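A minimal sketch of how this is typically used, assuming the usual FlexiBLAS conventions (<code>-lflexiblas</code> for linking and the <code>FLEXIBLAS</code> environment variable for backend selection; the backend name shown is an assumption about what is installed):

<pre>
# Link against FlexiBLAS rather than a specific BLAS implementation.
gcc -O2 -o mycode mycode.c -lflexiblas

# Select a backend at run time (backend name is an assumption for illustration).
export FLEXIBLAS=blis
./mycode
</pre>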
==Monitoring jobs==
From the [https://portail.narval.calculquebec.ca/ Narval portal], you can monitor your CPU and GPU jobs <b>in real time</b> or examine jobs that have run in the past. This can help you to optimize resource usage and shorten wait time in the queue.
You can monitor your usage of
* compute nodes,
* memory,
* GPU.
It is important to use the resources you request and to adjust your requests when compute resources are underused or not used at all. For example, if you request 4 cores (CPUs) but use only one, you should adjust your job script accordingly.
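One way to check this after a job has finished (a sketch; the job ID is a placeholder, and <code>seff</code> may not be available everywhere) is to compare requested and used resources with Slurm's accounting tools:

<pre>
# Summary of CPU and memory efficiency for a completed job.
seff 1234567

# More detailed accounting; adjust the format fields as needed.
sacct -j 1234567 --format=JobID,Elapsed,AllocCPUS,TotalCPU,MaxRSS
</pre>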
