Niagara
Expected availability: Testing and configuration in March 2018. Resources allocated through the 2018 competition will be available in April 2018.

Niagara is a homogeneous cluster owned by the University of Toronto and operated by SciNet. Able to accommodate parallel jobs of 1024 cores and more, it is designed to efficiently handle the intensive throughput generated by a variety of data-intensive scientific applications. Its network and storage deliver excellent performance and large capacity. Niagara is also notably energy-efficient.

In general, the environment is similar to that of Cedar or Graham. As of February 2018, configuration work is still in progress and specific usage instructions are yet to come.

This cluster is part of the resources allocated through the 2018 competition; allocations take effect on April 4, 2018.

Video: Presentation of Niagara at the SciNet User Group Meeting of February 14, 2018

Video: Hardware installation

Technical specifications

  • 1500 nodes, each with 40 Intel Skylake cores at 2.4GHz, for a total of 60,000 cores.
  • 202 GB (188 GiB) of RAM per node.
  • EDR Infiniband network in a so-called 'Dragonfly+' topology.
  • 5PB of scratch, 5+2PB of project space (parallel file system: IBM Spectrum Scale, formerly known as GPFS).
  • 256 TB burst buffer (Excelero + IBM Spectrum Scale).
  • No local disks.
  • Rpeak of 4.61 PF.
  • Rmax of 3.0 PF.
  • 685 kW power consumption.

Attached storage systems

Home space
Parallel high-performance filesystem (IBM Spectrum Scale)
  • Location of home directories.
  • Available as the $HOME environment variable.
  • Each home directory has a small, fixed quota.
  • Not allocated, standard amount for each user. For larger storage requirements, use scratch or project.
  • Has daily backup.
Scratch space
5PB total volume
Parallel high-performance filesystem (IBM Spectrum Scale)
  • For active or temporary (/scratch) storage (~ 80 GB/s).
  • Available as the $SCRATCH environment variable.
  • Not allocated.
  • Large fixed quota per user and per group path.
  • Inactive data will be purged.
Burst buffer
256TB total volume
Parallel extra high-performance filesystem (Excelero+IBM Spectrum Scale)
  • For active fast storage during a job (160GB/s, and very high IOPS).
  • Data will be purged very frequently (i.e. soon after a job has ended).
  • Not allocated.
Project space
(5+2)PB total volume
External persistent storage
  • Allocated via RAC.
  • Available as the $PROJECT environment variable.
  • Quota set per user and per project path.
  • Backed up.
Archive Space
10PB total volume
High Performance Storage System (IBM HPSS)
  • Allocated via RAC.
  • Nearline accessible space intended for large datasets requiring offload from active file systems.
  • Available as the $ARCHIVE environment variable.
  • Large fixed quota per group.
  • Tape based backend (dual copy).
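
The environment variables listed above can be used directly in interactive shells and job scripts. A minimal sketch of a typical staging pattern, assuming the variables behave as described; the directory and file names are placeholders only:

  # Stage input data from project space to scratch before a run,
  # then copy results back to project space afterwards.
  cp -r "$PROJECT/input_data" "$SCRATCH/"
  cd "$SCRATCH/input_data"
  # ... run the computation here ...
  cp results.tar.gz "$PROJECT/"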

High-performance interconnect

The Niagara system has an EDR Infiniband network in a so-called 'Dragonfly+' topology, with four wings. Each wing (of 375 nodes) has 1-to-1 connections. Network traffic between wings is done through adaptive routing, which alleviates network congestion.

Node characteristics

  • CPU: 2 sockets with 20 Intel Skylake cores (2.4GHz, AVX512), for a total of 40 cores per node
  • Computational performance: 3 TFlops (theoretical maximum)
  • Network connection: 100Gb/s EDR
  • Memory: 202 GB (188 GiB) of RAM, i.e., a bit over 4 GiB per core.
  • Local disk: none.
  • Operating system: Linux CentOS 7
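
From a shell on a node, these characteristics can be confirmed with standard Linux commands (nothing Niagara-specific; the expected outputs follow from the specifications above):

  nproc                      # should report 40 cores
  free -h                    # should report roughly 188 GiB of memory
  cat /etc/centos-release    # CentOS Linux release 7.x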

Scheduling

The Niagara system will use the Slurm scheduler to run jobs. The basic scheduling commands will therefore be similar to those for Cedar and Graham, with a few differences:

  • Scheduling will be by node only. This means jobs must always request multiples of 40 cores.
  • Asking for specific amounts of memory will not be necessary and is discouraged; all nodes have the same amount of memory (202GB/188GiB minus some operating system overhead).

Details, such as how to request burst buffer usage in jobs, are still being worked out.
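
In the meantime, the following is only a sketch of what a by-node submission script could look like, based on standard Slurm syntax; the job name, task layout, and executable are placeholders, not a confirmed Niagara configuration:

  #!/bin/bash
  #SBATCH --nodes=2              # whole nodes only: 2 x 40 = 80 cores
  #SBATCH --ntasks-per-node=40   # one task per core
  #SBATCH --time=01:00:00
  #SBATCH --job-name=example
  # Note: no --mem request, since all nodes have the same amount of memory.

  cd $SLURM_SUBMIT_DIR
  mpirun ./my_program            # placeholder executable

Such a script would be submitted with sbatch, as on Cedar and Graham.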

Software

  • Module-based software stack.
  • Both the standard Compute Canada software stack and system-specific software tuned for Niagara will be available.
  • Unlike Cedar and Graham, no modules will be loaded by default, to prevent accidental version conflicts. There will be a simple mechanism to load the software stack that a user would see on Graham and Cedar, as sketched below.
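
The exact loading mechanism has not been announced; as a generic illustration of how a module-based stack is used on such systems (the module names here are assumptions, not confirmed Niagara modules):

  module avail               # list software available on the system
  module load gcc openmpi    # hypothetical module names, for illustration only
  module list                # show currently loaded modules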