Niagara
Availability: In production since April 2018
Login node: niagara.computecanada.ca
Globus endpoint: computecanada#niagara
Data mover nodes (rsync, scp, ...): nia-dm1, nia-dm2, see Moving data
System Status Page: https://docs.scinet.utoronto.ca
Niagara is a homogeneous cluster, owned by the University of Toronto and operated by SciNet, intended to enable large parallel jobs of 1040 cores and more. It was designed to optimize throughput of a range of scientific codes running at scale, energy efficiency, and network and storage performance and capacity.
The Niagara Quickstart has instructions specific to Niagara; the user experience on Niagara is similar to that on Graham and Cedar, but differs in a few respects.
Niagara is an allocatable resource in the 2018 Resource Allocation Competition (RAC 2018), which came into effect on April 4, 2018.
Niagara installation update at the SciNet User Group Meeting on February 14th, 2018
Niagara installation time-lag video
Niagara hardware specifications
- 1548 nodes, each with 40 Intel Skylake cores at 2.4GHz, for a total of 61,920 cores.
- 202 GB (188 GiB) of RAM per node.
- EDR Infiniband network in a so-called 'Dragonfly+' topology.
- 6PB of scratch, 3PB of project space (parallel filesystem: IBM Spectrum Scale, formerly known as GPFS).
- 256 TB burst buffer (Excelero + IBM Spectrum Scale).
- No local disks.
- No GPUs.
- Rpeak of 4.75 PF (see the worked calculation after this list).
- Rmax of 3.0 PF.
- 685 kW power consumption.
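The peak-performance figures above follow from the node count and the per-core capability; assuming 32 double-precision floating-point operations per core per cycle (two AVX-512 FMA units, as on these Skylake processors), the arithmetic is:

\[
40 \;\text{cores} \times 2.4\;\text{GHz} \times 32\;\frac{\text{FLOPs}}{\text{cycle}} = 3.072\;\text{TFlops per node},
\qquad
1548 \;\text{nodes} \times 3.072\;\text{TFlops} \approx 4.75\;\text{PF}.
\]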
Attached storage systems
- Home: 200TB, parallel high-performance filesystem (IBM Spectrum Scale)
- Scratch: 7PB (~100GB/s write, ~120GB/s read), parallel high-performance filesystem (IBM Spectrum Scale)
- Burst buffer: 232TB (~90GB/s write, ~154GB/s read), parallel extra-high-performance filesystem (Excelero + IBM Spectrum Scale)
- Project: 2PB (~100GB/s write, ~120GB/s read), parallel high-performance filesystem (IBM Spectrum Scale)
- Archive: 10PB, High Performance Storage System (IBM HPSS)
High-performance interconnect
The Niagara cluster has an EDR Infiniband network in a so-called 'Dragonfly+' topology with four wings. Each wing, of at most 432 nodes (i.e., 17,280 cores), has 1-to-1 (non-blocking) connections. Network traffic between wings goes through adaptive routing, which alleviates congestion and yields an effective 2:1 blocking factor between nodes in different wings.
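A quick consistency check of these figures, using only the numbers quoted above:

\[
432 \;\text{nodes/wing} \times 40 \;\text{cores/node} = 17{,}280 \;\text{cores/wing},
\qquad
4 \;\text{wings} \times 432 \;\text{nodes} = 1728 \ge 1548 \;\text{nodes}.
\]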
Node characteristics
- CPU: 2 sockets with 20 Intel Skylake cores (2.4GHz, AVX512), for a total of 40 cores per node
- Computational performance: 3.07 TFlops theoretical peak.
- Network connection: 100Gb/s EDR Dragonfly+
- Memory: 202 GB (188 GiB) of RAM, i.e., a bit over 4GiB per core.
- Local disk: none.
- GPUs/Accelerators: none.
- Operating system: Linux CentOS 7
Scheduling
The Niagara cluster uses the Slurm scheduler to run jobs. The basic scheduling commands are therefore similar to those for Cedar and Graham, with a few differences:
- Scheduling is by node only. This means jobs must always use multiples of 40 cores (see the example job script after this list).
- Asking for specific amounts of memory is not necessary and is discouraged; all nodes have the same amount of memory (202GB/188GiB, minus some operating-system overhead).
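As an illustration of node-based scheduling, below is a minimal job-script sketch for an MPI program using two full nodes (80 cores). The module names and executable are placeholders, not Niagara-specific values; consult the Niagara Quickstart for authoritative examples.

#!/bin/bash
#SBATCH --nodes=2                # scheduling is by node: whole nodes only
#SBATCH --ntasks-per-node=40     # use all 40 cores of each node
#SBATCH --time=01:00:00          # requested walltime
#SBATCH --job-name=example_job
# No --mem request: every node has the same memory (188 GiB usable).

module load intel openmpi        # placeholder modules; load whatever your code needs
mpirun ./my_application          # placeholder executable

Such a script would be submitted with sbatch (e.g. sbatch example_job.sh) and monitored with squeue, as on Cedar and Graham.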
Details, such as how to request burst buffer usage in jobs, are still being worked out.
Software
- Module-based software stack.
- Both the standard Compute Canada software stack as well as cluster-specific software tuned for Niagara are available.
- In contrast with Cedar and Graham, no modules are loaded by default to prevent accidental conflicts in versions. To load the software stack that a user would see on Graham and Cedar, one can load the "CCEnv" module (see Niagara Quickstart).
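For example, switching to the Compute Canada software stack in an interactive session might look like the following sketch; the StdEnv and NiaEnv module names are assumptions based on the Niagara Quickstart and should be verified there:

module load CCEnv      # make the standard Compute Canada software stack available
module load StdEnv     # assumed default environment of that stack (see the Quickstart)
module avail           # list the modules that can now be loaded
# The Niagara-tuned stack is provided by a separate module (NiaEnv, per the Quickstart).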
Access to Niagara
Access to Niagara is not enabled automatically for everyone with a Compute Canada account, but anyone with an active Compute Canada account can get their access enabled.
If you have an active Compute Canada account but you do not have access yet (e.g. because you are new to SciNet and belong to a group whose primary PI does not have an allocation granted in the annual Compute Canada RAC), go to the opt-in page on the CCDB site. After clicking the "Join" button on that page, it usually takes only one or two business days for access to be granted.
If at any time you require assistance, please do not hesitate to contact us.
Getting started
Please read the Niagara Quickstart carefully.