Niagara
Expected availability: April 2018
Login node: niagara.computecanada.ca
Globus endpoint: TBA
System Status Page: https://wiki.scinet.utoronto.ca/wiki/index.php/System_Alerts
Niagara is a homogeneous cluster, owned by the University of Toronto and operated by SciNet, intended to enable large parallel jobs of 1040 cores and more. It was designed to optimize throughput of a range of scientific codes running at scale, energy efficiency, and network and storage performance and capacity.
The user experience on Niagara will be similar to that on Graham and Cedar, with some differences. Specific instructions on how to use the Niagara cluster will be available in April 2018.
Niagara is an allocatable resource in the 2018 Resource Allocation Competition (RAC 2018), which comes into effect on April 4, 2018.
Niagara installation update at the SciNet User Group Meeting on February 14th, 2018
Niagara installation time-lapse video
Niagara hardware specifications
- 1500 nodes, each with 40 Intel Skylake cores at 2.4GHz, for a total of 60,000 cores.
- 202 GB (188 GiB) of RAM per node.
- EDR Infiniband network in a so-called 'Dragonfly+' topology.
- 6PB of scratch, 3PB of project space (parallel filesystem: IBM Spectrum Scale, formerly known as GPFS).
- 256 TB burst buffer (Excelero + IBM Spectrum Scale).
- No local disks.
- No GPUs.
- Rpeak of 4.61 PF.
- Rmax of 3.0 PF.
- 685 kW power consumption.
Attached storage systems
| Space         | Total volume | Filesystem                                                                  |
|---------------|--------------|-----------------------------------------------------------------------------|
| Home space    | 600 TB       | Parallel high-performance filesystem (IBM Spectrum Scale)                   |
| Scratch space | 6 PB         | Parallel high-performance filesystem (IBM Spectrum Scale)                   |
| Burst buffer  | 256 TB       | Parallel extra high-performance filesystem (Excelero + IBM Spectrum Scale)  |
| Project space | 3 PB         | Parallel high-performance filesystem (IBM Spectrum Scale)                   |
| Archive space | 10 PB        | High Performance Storage System (IBM HPSS)                                  |
High-performance interconnect
The Niagara cluster has an EDR Infiniband network in a so-called 'Dragonfly+' topology, with four wings. Each wing, of maximally 432 nodes (i.e., 17,280 cores), has 1-to-1 connections. Network traffic between wings is routed adaptively, which alleviates network congestion and yields an effective blocking of 2:1 between nodes of different wings.
Node characteristics
- CPU: 2 sockets with 20 Intel Skylake cores (2.4GHz, AVX512), for a total of 40 cores per node
- Computational performance: 3 TFlops theoretical peak per node (see the worked estimate after this list).
- Network connection: 100Gb/s EDR Dragonfly+
- Memory: 202 GB (188 GiB) of RAM, i.e., a bit over 4GiB per core.
- Local disk: none.
- Operating system: Linux CentOS 7
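As a rough cross-check of the quoted peak figures, assume each Skylake core can retire 32 double-precision floating-point operations per cycle (two AVX-512 FMA units, each operating on 8 doubles with a fused multiply-add); this per-core figure is an assumption about the CPU, not something stated on this page. The per-node and cluster-wide theoretical peaks then work out as:

```latex
% Assumed: 32 double-precision FLOP/cycle/core (2 AVX-512 FMA units x 8 lanes x 2 ops)
R_{\text{node}} = 40~\text{cores} \times 2.4~\text{GHz} \times 32~\tfrac{\text{FLOP}}{\text{cycle}}
                \approx 3.07~\text{TFlop/s}

R_{\text{peak}} = 1500~\text{nodes} \times R_{\text{node}} \approx 4.61~\text{PFlop/s}
```

This agrees with the per-node 3 TFlops and the cluster Rpeak of 4.61 PF quoted above.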
Scheduling
The Niagara cluster will use the Slurm scheduler to run jobs. The basic scheduling commands will therefore be similar to those for Cedar and Graham, with a few differences:
- Scheduling will be by node only. This means jobs will always need to request multiples of 40 cores.
- Asking for specific amounts of memory will not be necessary and is discouraged; all nodes have the same amount of memory (202GB/188GiB minus some operating system overhead).
Details, such as how to request burst buffer usage in jobs, are still being worked out.
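As a rough illustration of whole-node scheduling, the sketch below shows what a minimal MPI job script might look like. The job name, program name, and commented-out module names are placeholders, and Niagara-specific defaults (partitions, walltime limits, launch details) have not been published yet, so treat this as illustrative rather than official:

```bash
#!/bin/bash
#SBATCH --job-name=my_mpi_job      # descriptive job name (placeholder)
#SBATCH --nodes=2                  # whole nodes only on Niagara
#SBATCH --ntasks-per-node=40       # 40 cores per node, so 80 MPI ranks in total
#SBATCH --time=01:00:00            # walltime limit (HH:MM:SS)
# Note: no --mem request; all nodes have the same memory (188 GiB minus OS overhead).

# Load compiler/MPI modules from whatever stack the final Niagara setup provides
# (module names below are placeholders, not confirmed Niagara modules):
# module load intel openmpi

cd $SLURM_SUBMIT_DIR

# Launch one MPI rank per allocated core across both nodes.
srun ./my_mpi_program
```

As on Cedar and Graham, such a script would be submitted with `sbatch`.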
Software
- Module-based software stack.
- Both the standard Compute Canada software stack and cluster-specific software tuned for Niagara will be available.
- In contrast with Cedar and Graham, no modules will be loaded by default to prevent accidental conflicts in versions. There will be a simple mechanism to load the software stack that a user would see on Graham and Cedar.
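To make the last point concrete, a session on Niagara might start out like the sketch below. The specific module names, and the name of the module that switches to the Graham/Cedar software stack, are assumptions, since the mechanism has not been announced yet:

```bash
# Immediately after login, no modules are loaded:
module list          # expected to show no (or very few) loaded modules

# Browse what is available in the Niagara-specific stack:
module avail

# Load a compiler/MPI combination from the Niagara-tuned stack
# (names and versions are placeholders, not confirmed modules):
# module load intel/2018.1 openmpi/3.0

# Switch to the standard Compute Canada stack seen on Graham and Cedar,
# via the promised "simple mechanism" (module name is hypothetical):
# module load CCEnv
```

The `module list`, `module avail`, and `module load` commands themselves are the standard environment-modules commands, so they behave the same way as on Cedar and Graham.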