Narval

Availability: since October 2021
Login node: narval.computecanada.ca
Globus Collection: Compute Canada - Narval
Data transfer node (rsync, scp, sftp,...): narval.computecanada.ca

Narval is a general-purpose cluster designed for a variety of workloads; it is located at the École de technologie supérieure in Montreal. The cluster is named in honour of the narwhal, a species of whale that has occasionally been observed in the Gulf of St. Lawrence.

Site-specific policies

By policy, Narval's compute nodes cannot access the internet. If you need an exception to this rule, contact technical support with the IP address(es), port(s) and protocol(s) needed, the duration, and a contact person.

Crontab is not offered on Narval.

Each job on Narval should have a duration of at least one hour (five minutes for test jobs) and you cannot have more than 1000 jobs, running or queued, at any given moment. The maximum duration for a job on Narval is 7 days (168 hours).
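
For reference, a minimal batch script that respects these limits might look like the sketch below; the account name def-someuser and the final command are placeholders.

  #!/bin/bash
  #SBATCH --account=def-someuser   # placeholder; use your own allocation
  #SBATCH --time=01:00:00          # between 1 hour and 7 days (168 hours)
  #SBATCH --mem-per-cpu=2G
  hostname                         # replace with your actual computation

Submit it with sbatch job.sh; jobs shorter than one hour (outside of testing) or exceeding the 1000-job limit may be refused by the scheduler.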

Storage

HOME
Lustre filesystem, 40 TB of space
  • Location of home directories, each of which has a small fixed quota.
  • You should use the project space for larger storage needs.
  • 50 GB of space and 500K files per user.
  • There is a daily backup of the home directories.
SCRATCH
Lustre filesystem, 5.5 PB of space
  • Large space for storing temporary files during computations.
  • No backup system in place.
  • 20 TB of space and 1M files per user.
PROJECT
Lustre filesystem, 19 PB of space
  • This space is designed for sharing data among the members of a research group and for storing large amounts of data.
  • 1 TB of space and 500K files per group.
  • There is a daily backup of the project space.
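
On Alliance clusters, these spaces are normally reached through the $HOME and $SCRATCH environment variables and the ~/projects directory; a quick way to check your usage, assuming the standard diskusage_report utility is available on Narval as on the other clusters, is:

  echo $HOME $SCRATCH      # locations of your home and scratch space
  ls ~/projects            # links to the project spaces of your groups
  diskusage_report         # summary of quota usage on all filesystems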

To transfer data via Globus, use the collection specified at the top of this page; for tools like rsync and scp, use a login node.
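For example, to push a local directory to your scratch space with rsync over ssh (the username and paths are placeholders):

  rsync -avP ./results/ username@narval.computecanada.ca:/scratch/username/results/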

High-performance interconnect

The Mellanox InfiniBand HDR network links together all nodes of the cluster. Each hub of 40 HDR ports (200 Gb/s) can connect up to 66 nodes with HDR100 (100 Gb/s) by splitting 33 of the HDR links in two with special cables. The seven remaining HDR links connect the hub to a rack containing the seven central HDR InfiniBand hubs. Islands of nodes are therefore connected with a maximum blocking factor of 33:7 (4.7:1). In contrast, the storage servers are connected with a much lower blocking factor in order to maximize performance.

In practice, Narval's racks contain islands of 48 or 56 regular CPU nodes. It is therefore possible to run parallel jobs using up to 3584 cores (56 nodes × 64 cores) with a non-blocking network. For larger jobs, or jobs distributed in a fragmented manner across the network, the blocking factor is 4.7:1. The interconnect nonetheless remains high-performance.
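
As an illustration, a job sized to fit within one 56-node island could request all of its cores as in the sketch below; note that these flags alone do not guarantee that Slurm places the job within a single island, and the application name is a placeholder.

  #!/bin/bash
  #SBATCH --nodes=56             # one full island of regular CPU nodes
  #SBATCH --ntasks-per-node=64   # 56 x 64 = 3584 MPI ranks
  #SBATCH --time=03:00:00
  srun ./my_mpi_app              # placeholder application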

Node characteristics

nodes | cores | available memory   | CPU                                           | storage       | GPU
1145  | 64    | 249G or 255000M    | 2 x AMD Rome 7532 @ 2.40 GHz, 256M L3 cache   | 1 x 960G SSD  | -
33    | 64    | 2009G or 2057500M  | 2 x AMD Rome 7532 @ 2.40 GHz, 256M L3 cache   | 1 x 960G SSD  | -
3     | 64    | 4000G or 4096000M  | 2 x AMD Rome 7502 @ 2.50 GHz, 128M L3 cache   | 1 x 960G SSD  | -
159   | 48    | 498G or 510000M    | 2 x AMD Milan 7413 @ 2.65 GHz, 128M L3 cache  | 1 x 3.84T SSD | 4 x NVidia A100 (40 GB memory)
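
For example, an interactive request for part of one of the A100 nodes could look like the following, using the usual Alliance Slurm conventions; the account name is a placeholder.

  salloc --account=def-someuser --gpus-per-node=a100:1 \
         --cpus-per-task=12 --mem=124G --time=01:00:00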

AMD processors

Supported instruction sets

Narval is equipped with 2nd and 3rd generation AMD EPYC processors which support the AVX2 instruction set. This instruction set is also supported by all CPUs at Béluga, Cedar, Graham and Niagara.

However, all nodes at Béluga and Niagara and some nodes at Cedar and Graham also support AVX512 instructions, which are not supported at Narval.

Nodes containing Intel Broadwell CPUs support AVX2 but not AVX512. Nodes containing Intel Skylake or Cascade Lake CPUs support both AVX2 and AVX512.

Consequently, an application compiled on Broadwell nodes at Cedar and Graham, including their login nodes, will run on Narval. An application compiled at Béluga or Niagara, or on a Skylake or Cascade Lake node at Cedar or Graham, will not run on Narval; such an application must be recompiled (see Intel compilers below).
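
A quick way to check which AVX-family instruction sets a node advertises is to inspect /proc/cpuinfo; on Narval the output should include avx2 but nothing from the avx512 family:

  grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u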


Intel compilers

Intel compilers can compile applications for Narval's AMD processors with AVX2 and earlier instruction sets. Use the -march=core-avx2 option to produce executables which are compatible with both Intel and AMD processors.

However, applications will not run on Narval if they were compiled with one or more -xXXXX options, such as -xCORE-AVX2, because the Intel compilers add extra instructions to verify that the processor is a genuine Intel one. Note that on Narval, -xHOST is equivalent to -march=core-avx2.
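
As a sketch, compiling with the classic Intel C compiler might look like this; the source file name is illustrative.

  icc -O2 -march=core-avx2 mycode.c -o mycode    # runs on Intel and AMD CPUs
  # icc -O2 -xCORE-AVX2 mycode.c -o mycode       # avoid: Intel-only CPU check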

Software environments

StdEnv/2020 is the standard software environment on Narval; previous versions have been blocked intentionally. If you need an application that is only available with an older standard environment, please contact technical support.
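
You can verify which environment and modules are active with the usual Lmod commands:

  module load StdEnv/2020   # normally loaded by default at login
  module list               # show the modules currently loaded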

BLAS and LAPACK libraries

The Intel MKL library works with AMD processors, although not optimally. We now favour the FlexiBLAS library; for more details, see the BLAS and LAPACK page.
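
As a sketch, a program linked against FlexiBLAS can select its BLAS backend at run time; the backend names below (imkl, blis) are assumptions based on typical Alliance installations and may differ on your system.

  gcc -O2 myprog.c -o myprog -lflexiblas   # link against FlexiBLAS
  FLEXIBLAS=imkl ./myprog                  # assumed backend name for Intel MKL
  FLEXIBLAS=blis ./myprog                  # assumed backend name for BLIS (AMD-tuned)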