Narval/en: Difference between revisions

From Alliance Doc
Revision as of 18:00, 10 January 2022

Availability: since October, 2021
Login node: narval.computecanada.ca
Globus Collection: Compute Canada - Narval
Data transfer node (rsync, scp, sftp,...): narval.computecanada.ca

Narval is a general purpose cluster designed for a variety of workloads; it is located at the École de technologie supérieure in Montreal. The cluster is named in honour of the narwhal, a species of whale which has occasionally been observed in the Gulf of St. Lawrence.

Limitations and known issues

As of December 2021, the following features and software are not fully operational:

  • Nearline (ETA: February 2022)
  • Some software that depends on CUDA (complete list here)
  • Some license servers (contact Technical support)
  • Globus access (network configuration problem)
  • Errors of the form collect2: fatal error: ld terminated with signal 7 [Bus error], caused by the use of mmap() in the ld.gold linker
    • (Workaround) compile your code with -fuse-ld=bfd, which uses the "bfd" linker instead

Solved problems:

  • pytorch (fixed as of Nov. 1st, 2021)
  • JupyterHub (fixed as of Nov. 10th, 2021)
  • Backup copies (fixed as of Nov. 18th, 2021)
  • gurobi (fixed as of Nov. 9th, 2021)
  • Bus error (ld.gold) (fixed as of Jan. 10th, 2022)

Site-specific policies

By policy, Narval's compute nodes cannot access the internet. If you need an exception to this rule, contact technical support with information about the IP address, port number(s) and protocol(s) needed as well as the duration and a contact person.

Crontab is not offered on Narval.

Each job on Narval should have a duration of at least one hour (five minutes for test jobs) and you cannot have more than 1000 jobs, running or queued, at any given moment. The maximum duration for a job on Narval is 7 days (168 hours).
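A sketch of a batch script that respects these limits (the account name and job body below are placeholders, not values from this page):

```shell
# Write a minimal Slurm script: at least one hour of walltime,
# well under the 7-day (168-hour) maximum.
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --time=01:00:00
#SBATCH --mem-per-cpu=2000M
echo "job body goes here"
EOF
# On Narval you would submit it with: sbatch job.sh
# Here we only verify the script contains its three #SBATCH directives.
grep -c '^#SBATCH' job.sh
```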

Storage

HOME
Lustre filesystem, 40 TB of space
  • Location of home directories, each of which has a small fixed quota.
  • You should use the project space for larger storage needs.
  • 50 GB of space and 500K files per user.
  • There is a daily backup of the home directories.
SCRATCH
Lustre filesystem, 5.5 PB of space
  • Large space for storing temporary files during computations.
  • No backup system in place.
  • 20 TB of space and 1M files per user.
PROJECT
Lustre filesystem, 19 PB of space
  • This space is designed for sharing data among the members of a research group and for storing large amounts of data.
  • 1 TB of space and 500K files per group.
  • There is a daily backup of the project space.

For transferring data via Globus, you should use the endpoint specified at the top of this page, while for tools like rsync and scp you can use a login node.

High-performance interconnect

The Mellanox HDR InfiniBand network links together all of the nodes of the cluster. Each hub of 40 HDR ports (200 Gb/s) can connect up to 66 nodes with HDR100 (100 Gb/s), using 33 HDR links each split in two by a special cable. The seven remaining HDR links connect the hub to a rack containing the seven central HDR InfiniBand hubs. The islands of nodes are therefore connected with a maximum blocking factor of 33:7 (4.7:1). In contrast, the storage servers are connected with a much lower blocking factor in order to maximize performance.

In practice, Narval's racks contain islands of 48 or 56 regular CPU nodes. It is therefore possible to run parallel jobs of up to 3584 cores with a non-blocking network. For larger jobs, or jobs spread across the network in a fragmented manner, the blocking factor is 4.7:1. The interconnect nonetheless remains a high-performance one.

Node characteristics

nodes | cores | available memory   | CPU                                          | storage         | GPU
1109  | 64    | 249G or 255000M    | 2 x AMD Rome 7532 @ 2.40 GHz, 256M L3 cache  | 1 x 960G SSD    | -
33    | 64    | 2009G or 2057500M  | 2 x AMD Rome 7532 @ 2.40 GHz, 256M L3 cache  | 1 x 960G SSD    | -
159   | 48    | 498G or 510000M    | 2 x AMD Milan 7413 @ 2.65 GHz, 128M L3 cache | 1 x 3.84 TB SSD | 4 x NVidia A100 (40 GB memory)

AMD processors

Supported instructions sets

Narval is equipped with 2nd and 3rd generation AMD EPYC processors which support the AVX2 instruction set. This instruction set is also supported by all CPUs at Béluga, Cedar, Graham and Niagara.

However, all nodes at Béluga and Niagara and some nodes at Cedar and Graham also support AVX512 instructions, which are not supported at Narval.

Nodes containing Intel Broadwell CPUs support AVX2 but not AVX512. Nodes containing Intel Skylake or Cascade Lake CPUs support both AVX2 and AVX512.

Consequently, an application compiled on Broadwell nodes at Cedar and Graham, including their login nodes, will run at Narval. An application compiled at Béluga or Niagara, or on a Skylake or Cascade Lake node at Cedar or Graham, will not run at Narval. Such an application must be recompiled (see Intel compilers below).
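Assuming a Linux system, one quick way to check which of these instruction sets the CPU you are building on advertises is to inspect /proc/cpuinfo:

```shell
# Report whether the current CPU advertises AVX2 and AVX-512
# (foundation subset, flag "avx512f"). Linux-specific.
for flag in avx2 avx512f; do
  if grep -q -m1 "\b$flag\b" /proc/cpuinfo; then
    echo "$flag: supported"
  else
    echo "$flag: not supported"
  fi
done
```

A binary built on a machine where avx512f is reported may fail with "illegal instruction" on Narval's AMD nodes, which is exactly the recompilation case described above.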


Intel compilers

Intel compilers can compile applications for Narval's AMD processors with AVX2 and earlier instruction sets. Use the -march=core-avx2 option to produce executables which are compatible with both Intel and AMD processors.

However, applications will not run on Narval if the code was compiled with one or more -xXXXX options, such as -xCORE-AVX2, because the Intel compilers add extra instructions to verify that the processor is a genuine Intel processor. Note that on Narval, -xHOST becomes equivalent to -march=core-avx2.

Software environments

StdEnv/2020 is the standard software environment on Narval; previous versions have been blocked intentionally. If you need an application only available with an older standard environment, please write to Technical support.

BLAS and LAPACK libraries

The Intel MKL library works with AMD processors, although not in an optimal way. We are presently examining the option of using the FlexiBLAS library when installing future software versions; this would automatically load the appropriate library, depending on the processor's manufacturer. If you think that your code is especially sensitive to the performance of BLAS or LAPACK, please report this to Technical support.