Niagara

Expected availability: April 2018
Login node: niagara.computecanada.ca
Globus endpoint: computecanada#niagara
Data mover nodes (rsync, scp, ...): nia-dm1, nia-dm2; see Moving data
System Status Page: https://docs.scinet.utoronto.ca
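
Logging in and moving data use these hostnames directly. A minimal sketch, assuming a Compute Canada username of "myccuser" (a placeholder) and the usual $SCRATCH environment variable; the data mover node name is taken from the list above, and the paths and remote host are illustrative only:

  # Log in to a Niagara login node (replace myccuser with your own username)
  ssh -Y myccuser@niagara.computecanada.ca

  # For large transfers, use a data mover node rather than a login node;
  # from a Niagara shell:
  ssh nia-dm1
  rsync -av $SCRATCH/results/ myccuser@remote.example.org:backup/results/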

Niagara is a homogeneous cluster, owned by the University of Toronto and operated by SciNet, intended to enable large parallel jobs of 1040 cores and more. It was designed to optimize the throughput of a range of scientific codes running at scale, as well as energy efficiency and network and storage performance and capacity.

The Niagara Quickstart has instructions specific to Niagara; the user experience on Niagara is similar to that on Graham and Cedar, but differs in some details.

Niagara is an allocatable resource in the 2018 Resource Allocation Competition (RAC 2018), which came into effect on April 4, 2018.

Niagara installation update at the SciNet User Group Meeting on February 14th, 2018

Niagara installation time-lapse video


Niagara hardware specifications

  • 1500 nodes, each with 40 Intel Skylake cores at 2.4GHz, for a total of 60,000 cores.
  • 202 GB (188 GiB) of RAM per node.
  • EDR Infiniband network in a so-called 'Dragonfly+' topology.
  • 6PB of scratch, 3PB of project space (parallel filesystem: IBM Spectrum Scale, formerly known as GPFS).
  • 256 TB burst buffer (Excelero + IBM Spectrum Scale).
  • No local disks.
  • No GPUs.
  • Rpeak (theoretical peak performance) of 4.61 PF (see the arithmetic sketched below).
  • Rmax (measured LINPACK performance) of 3.0 PF.
  • 685 kW power consumption.
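
As a check on the Rpeak figure: with 1500 nodes of 40 cores at 2.4GHz, the quoted peak corresponds to each Skylake core retiring 32 double-precision FLOPs per cycle (two AVX512 FMA units per core, an assumption not stated on this page):

  \[ R_{\text{peak}} = 1500 \times 40 \times 2.4\,\text{GHz} \times 32\,\text{FLOPs/cycle} = 4.608\,\text{PF} \approx 4.61\,\text{PF} \]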

Attached storage systems

Home: 200TB, parallel high-performance filesystem (IBM Spectrum Scale)
  • Backed up to tape
  • Persistent
Scratch: 7PB (~80 GB/s), parallel high-performance filesystem (IBM Spectrum Scale)
  • Inactive data is purged.
Burst buffer: 232TB (~160GB/s), parallel extra-high-performance filesystem (Excelero + IBM Spectrum Scale)
  • Inactive data is purged.
Project: 2PB (~80 GB/s), parallel high-performance filesystem (IBM Spectrum Scale)
  • Backed up to tape
  • Allocated through RAC
  • Persistent
Archive: 10PB, High Performance Storage System (IBM HPSS)
  • Tape-backed HSM
  • Allocated through RAC
  • Persistent
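
In practice these filesystems are usually reached through per-user environment variables rather than absolute paths. The variable names below are an assumption based on the Niagara Quickstart and other Compute Canada clusters; verify them on the system itself:

  echo $HOME      # backed up and persistent: source code and small files
  echo $SCRATCH   # large and fast, but inactive data is purged: active job input/output
  echo $PROJECT   # RAC-allocated and backed up: shared, relatively static datasets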

High-performance interconnect

The Niagara cluster has an EDR Infiniband network in a so-called 'Dragonfly+' topology, with four wings. Each wing (of maximally 432 nodes, i.e., 17,280 cores) has 1-to-1 connections. Network traffic between wings goes through adaptive routing, which alleviates network congestion and yields an effective 2:1 blocking ratio between nodes of different wings.

Node characteristics

  • CPU: 2 sockets with 20 Intel Skylake cores (2.4GHz, AVX512), for a total of 40 cores per node
  • Computational performance: 3 TFlops theoretical peak.
  • Network connection: 100Gb/s EDR Dragonfly+
  • Memory: 202 GB (188 GiB) of RAM, i.e., a bit over 4GiB per core.
  • Local disk: none.
  • GPUs/Accelerators: none.
  • Operating system: Linux CentOS 7
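
These per-node figures are consistent with the cluster totals above: 188 GiB shared among 40 cores is a bit over 4 GiB per core, and the per-node peak again assumes 32 FLOPs per cycle per core:

  \[ \frac{188\,\text{GiB}}{40\,\text{cores}} \approx 4.7\,\text{GiB per core}, \qquad 40 \times 2.4\,\text{GHz} \times 32\,\text{FLOPs/cycle} = 3.07\,\text{TFlops} \approx 3\,\text{TFlops} \]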

Scheduling

The Niagara cluster uses the Slurm scheduler to run jobs. The basic scheduling commands are therefore similar to those for Cedar and Graham, with a few differences:

  • Scheduling is by node only; jobs must therefore use multiples of 40 cores (see the example script at the end of this section).
  • Asking for specific amounts of memory is not necessary and is discouraged; all nodes have the same amount of memory (202GB/188GiB, minus some operating-system overhead).

Details, such as how to request burst buffer usage in jobs, are still being worked out.
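
As a concrete illustration of node-only scheduling, here is a minimal sketch of a Niagara job script. The module and program names are placeholders and should be adjusted following the Niagara Quickstart; only the whole-node request and the absence of a memory request reflect the rules above:

  #!/bin/bash
  #SBATCH --nodes=2                # whole nodes only: 2 x 40 = 80 cores
  #SBATCH --ntasks-per-node=40     # one MPI rank per core
  #SBATCH --time=01:00:00          # requested walltime
  #SBATCH --job-name=example_job
  # Note: no --mem request; every node has the same 202GB/188GiB of memory.

  module load CCEnv                # expose the standard Compute Canada stack (see Software below)
  module load StdEnv               # assumed default environment; check the Quickstart
  mpirun ./example_program         # placeholder executable

Submit it with "sbatch job_script.sh"; because scheduling is by node, this job always receives two full nodes.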

Software

  • Module-based software stack.
  • Both the standard Compute Canada software stack and cluster-specific software tuned for Niagara are available.
  • In contrast with Cedar and Graham, no modules are loaded by default, to prevent accidental version conflicts. To load the software stack that a user would see on Graham and Cedar, load the "CCEnv" module (see the Niagara Quickstart and the sketch below).
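
A minimal sketch of switching to the Compute Canada stack from a Niagara shell; only the "CCEnv" module name comes from this page, the rest is assumed and should be checked against the Niagara Quickstart:

  module avail            # Niagara-specific stack; nothing is loaded by default
  module load CCEnv       # make the standard Compute Canada modules visible
  module load StdEnv      # assumed default environment, as on Cedar and Graham
  module avail            # now lists the Compute Canada software stack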

Access to Niagara

New Users (without a previous SciNet account)

  • Those of you who are new to SciNet but have 2018 RAC allocations on Niagara have had accounts created for you; they are ready for you to log in.

  • New, non-RAC users: we are still working out the procedure for getting access. If you cannot wait, you can for now follow the old route of requesting a SciNet Consortium Account on the CCDB site, or write to support@scinet.utoronto.ca.

Getting started

See Niagara Quickstart.