{| class="wikitable"
{| class="wikitable"
|-
|-
| Expected availability: '''Testing and configuration: March 2017. 2018 RACs will be implemented in April, 2018.'''
| Availability: In production since April 2018
|-
| Login node: '''niagara.alliancecan.ca'''
|-
| Globus endpoint: '''computecanada#niagara'''
|-
| Data mover nodes (rsync, scp, ...): '''nia-dm2, nia-dm2''', see [[Data_management_at_Niagara#Moving_data|Moving data]]
|-
| System Status Page: '''https://docs.scinet.utoronto.ca'''
|-
| Portal : https://my.scinet.utoronto.ca
|}
|}


<!--T:2-->
Niagara is a homogeneous cluster, owned by the [https://www.utoronto.ca/ University of Toronto] and operated by [https://www.scinethpc.ca/ SciNet], intended to enable large parallel jobs of 1040 cores and more. It was designed to optimize throughput of a range of scientific codes running at scale, energy efficiency, and network and storage performance and capacity.


<!--T:4-->
The [[Niagara Quickstart]] has instructions specific to Niagara; the user experience on Niagara is similar to that on Graham and Cedar, but slightly different.

<!--T:29-->
Preliminary documentation about the GPU expansion to Niagara called "[https://docs.scinet.utoronto.ca/index.php/Mist Mist]" can be found on [https://docs.scinet.utoronto.ca/index.php/Mist  the SciNet documentation site].


<!--T:5-->
Niagara is an allocatable resource in the 2018 [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition] (RAC 2018), which came into effect on April 4, 2018.


<!--T:6-->
Niagara installation update at the SciNet User Group Meeting on February 14th, 2018

[https://www.youtube.com/watch?v=RgSvGGzTeoc Niagara installation time-lag video]


==Niagara hardware specifications== <!--T:3-->


<!--T:8-->
* 2024 nodes, each with 40 Intel "Skylake" cores at 2.4 GHz or 40 Intel "CascadeLake" cores at 2.5 GHz, for a total of 80,640 cores.
* 202 GB (188 GiB) of RAM per node.
* EDR InfiniBand network in a 'Dragonfly+' topology.
* 12.5 PB of scratch, 3.5 PB of project space (parallel filesystem: IBM Spectrum Scale, formerly known as GPFS).
* 256 TB burst buffer (Excelero + IBM Spectrum Scale).
* No local disks.
* No GPUs.
* Theoretical peak performance ("Rpeak") of 6.25 PF.
* Measured delivered performance ("Rmax") of 3.6 PF.
* 920 kW power consumption.


==Attached storage systems== <!--T:9-->
{| class="wikitable sortable"
|-
| '''Home''' <br>200 TB<br>Parallel high-performance filesystem (IBM Spectrum Scale) ||
* Backed up to tape
* Persistent
|-
| '''Scratch'''<br>12.5 PB (~100 GB/s write, ~120 GB/s read)<br>Parallel high-performance filesystem (IBM Spectrum Scale) ||
* Inactive data is purged.
|-
| '''Burst buffer'''<br>232 TB (~90 GB/s write, ~154 GB/s read)<br>Parallel extra high-performance filesystem (Excelero + IBM Spectrum Scale) ||
* Inactive data is purged.
|-
| '''Project'''<br>3.5 PB (~100 GB/s write, ~120 GB/s read)<br>Parallel high-performance filesystem (IBM Spectrum Scale) ||
* Backed up to tape
* Allocated through [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]
* Persistent
|-
| '''Archive'''<br>20 PB<br>High Performance Storage System (IBM HPSS) ||
* Tape-backed HSM
* Allocated through [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]
* Persistent
|}
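
As on the other Alliance clusters, these filesystems are normally reached through the <code>$HOME</code>, <code>$SCRATCH</code>, <code>$PROJECT</code> and <code>$ARCHIVE</code> environment variables. The sketch below assumes those variables are set as usual at login; the <code>diskUsage</code> helper in the last line is the SciNet-provided reporting tool and may change, so see [[Data_management_at_Niagara|Data management at Niagara]] for the current commands.

<source lang="bash">
# Show where the per-user storage areas live (the variables are set at login):
echo "home:    $HOME"
echo "scratch: $SCRATCH"
echo "project: $PROJECT"    # for groups with project space
echo "archive: $ARCHIVE"    # for groups with an HPSS (archive) allocation

# Report current usage and quotas (SciNet-specific helper; see Data management at Niagara):
diskUsage
</source>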


==High-performance interconnect== <!--T:10-->

<!--T:11-->
The Niagara cluster has an EDR InfiniBand network in a 'Dragonfly+' topology, with five wings. Each wing of at most 432 nodes (i.e., 17,280 cores) has 1-to-1 connections. Network traffic between wings is done through adaptive routing, which alleviates network congestion and yields an effective blocking of 2:1 between nodes of different wings.


==Node characteristics== <!--T:12-->

<!--T:13-->
* CPU: 2 sockets with 20 Intel Skylake cores (2.4 GHz, AVX-512), for a total of 40 cores per node.
* Computational performance: 3.07 TFlops theoretical peak per node (see the note below).
* Network connection: 100 Gb/s EDR Dragonfly+.
* Memory: 202 GB (188 GiB) of RAM, i.e. a bit over 4 GiB per core.
* Local disk: none.
* GPUs/Accelerators: none.
* Operating system: Linux CentOS 7.
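
For context, the per-node peak is consistent with a simple cores × clock × FLOPs-per-cycle estimate. The sketch below assumes two AVX-512 FMA units per core (32 double-precision FLOPs per cycle), which is not stated explicitly on this page:

<math>40~\mathrm{cores} \times 2.4~\mathrm{GHz} \times 32~\mathrm{FLOPs/cycle} \approx 3.07~\mathrm{TFlops}</math>

Multiplying by 2024 nodes gives a little over 6 PF, in line with the cluster Rpeak of 6.25 PF listed above (the CascadeLake nodes at 2.5 GHz contribute slightly more per node).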


==Scheduling== <!--T:14-->

<!--T:15-->
The Niagara cluster uses the [[Running jobs|Slurm]] scheduler to run jobs. The basic scheduling commands are therefore similar to those for Cedar and Graham, with a few differences:


<!--T:16-->
* Scheduling is by node only. This means jobs always need to use multiples of 40 cores per job (see the example script below).
* Asking for specific amounts of memory is not necessary and is discouraged; all nodes have the same amount of memory (202 GB/188 GiB, minus some operating system overhead).
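
To make the node-based scheduling concrete, here is a minimal sketch of a job script. The job name, module names and executable are illustrative placeholders; the authoritative examples are in the [[Niagara Quickstart]].

<source lang="bash">
#!/bin/bash
#SBATCH --nodes=2                # whole nodes only: 2 nodes = 80 cores
#SBATCH --ntasks-per-node=40     # use all 40 cores on each node
#SBATCH --time=01:00:00          # wall-clock time limit
#SBATCH --job-name=example_mpi   # placeholder job name
# Note: no --mem request is needed; every node offers the same 202 GB (188 GiB).

module load intel openmpi        # illustrative modules; check 'module avail' for versions

mpirun ./my_mpi_program          # placeholder executable, one MPI rank per core
</source>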


<!--T:17-->
Details, such as how to request burst buffer usage in jobs, are still being worked out.


==Software== <!--T:18-->

<!--T:19-->
* Module-based software stack.
* Both the standard Alliance software stack and cluster-specific software tuned for Niagara are available.
* In contrast with Cedar and Graham, no modules are loaded by default, to prevent accidental conflicts in versions. To load the software stack that a user would see on Graham and Cedar, load the "CCEnv" module (see the [[Niagara Quickstart]] and the example below).
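
For illustration, a typical sequence on a login node might look as follows; loading "StdEnv" alongside "CCEnv" is an assumption based on the standard Alliance environment, and the exact names and versions should be taken from the [[Niagara Quickstart]].

<source lang="bash">
# By default no modules are loaded; list what is available in the Niagara stack:
module avail

# Switch to the common Alliance (Graham/Cedar-style) software stack:
module load CCEnv
module load StdEnv         # standard environment; a specific version can be chosen

# From here on, software loads as it does on Graham and Cedar, e.g.:
module load gcc openmpi
</source>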
 
==Access to Niagara== <!--T:20-->
Access to Niagara is not enabled automatically for everyone with an Alliance account, but anyone with an active Alliance account can have it enabled.
If you have an active Alliance account but you do not have access to Niagara yet (e.g. because you are a new user and belong to a group whose primary PI does not have an allocation as granted in the annual [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition]), go to the [https://ccdb.computecanada.ca/services/opt_in opt-in page on the CCDB site].  After clicking the "Join" button on that page, it usually takes only one or two business days for access to be granted. 
 
<!--T:27-->
If at any time you require assistance, please do not hesitate to [mailto:niagara@tech.alliancecan.ca contact us].
 
===Getting started=== <!--T:25-->
 
<!--T:26-->
Please read the [[Niagara Quickstart]] carefully.  


<!--T:28-->
[[Category:Pages with video links]]
</translate>
