Niagara
Revision as of 01:16, 4 April 2018
Expected availability: April 2018
Login node: niagara.computecanada.ca
Globus endpoint: TBA
System Status Page: https://wiki.scinet.utoronto.ca/wiki/index.php/System_Alerts
Niagara is a homogeneous cluster, owned by the University of Toronto and operated by SciNet, intended to enable large parallel jobs of 1040 cores and more. It was designed to optimize throughput of a range of scientific codes running at scale, energy efficiency, and network and storage performance and capacity.
The user experience on Niagara will be similar to that on Graham and Cedar, though with some differences. Specific instructions on how to use the Niagara cluster will be available in April 2018.
Niagara is an allocatable resource in the 2018 Resource Allocation Competition (RAC 2018), which comes into effect on April 4, 2018.
Niagara installation update at the SciNet User Group Meeting on February 14th, 2018
Niagara installation time-lag video
Niagara hardware specifications
- 1500 nodes, each with 40 Intel Skylake cores at 2.4GHz, for a total of 60,000 cores.
- 202 GB (188 GiB) of RAM per node.
- EDR Infiniband network in a so-called 'Dragonfly+' topology.
- 6PB of scratch, 3PB of project space (parallel filesystem: IBM Spectrum Scale, formerly known as GPFS).
- 256 TB burst buffer (Excelero + IBM Spectrum Scale).
- No local disks.
- No GPUs.
- Rpeak of 4.61 PF.
- Rmax of 3.0 PF.
- 685 kW power consumption.
Attached storage systems
Home space: 600TB total volume; parallel high-performance filesystem (IBM Spectrum Scale)
- Location of home directories.
- Available as the $HOME environment variable.
- Each home directory has a small, fixed quota of 100GB.
- Not allocated, standard amount for each user. For larger storage requirements, use scratch or project.
- Has daily backup.

Scratch space: 6PB total volume; parallel high-performance filesystem (IBM Spectrum Scale)
- For active or temporary (/scratch) storage (~80 GB/s).
- Available as the $SCRATCH environment variable.

Burst buffer: 256TB total volume; parallel extra high-performance filesystem (Excelero + IBM Spectrum Scale)
- For active fast storage during a job (160GB/s, and very high IOPS).
- Not fully configured yet, but data will be purged very frequently (i.e. soon after a job has ended) and space on this storage tier will not be RAC allocatable.

Project space: 3PB total volume; parallel high-performance filesystem (IBM Spectrum Scale)
- Allocated via RAC.
- For active but low data turnover storage and relatively fixed datasets.

Archive space: 10PB total volume; High Performance Storage System (IBM HPSS)
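Home and scratch space are exposed through the $HOME and $SCRATCH environment variables listed above. As a minimal, purely illustrative shell snippet (the directory and file names are placeholders, not part of any official workflow):

    cd $SCRATCH                               # large, fast space for active data; not backed up
    mkdir -p example_run && cd example_run    # placeholder working directory
    cp $HOME/input.dat .                      # placeholder input file kept in the backed-up home space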
High-performance interconnect
The Niagara cluster has an EDR Infiniband network in a so-called 'Dragonfly+' topology, with four wings. Each wing (of maximally 432 nodes, i.e., 17,280 cores) has 1-to-1 connections. Network traffic between wings goes through adaptive routing, which alleviates network congestion and yields an effective 2:1 blocking between nodes in different wings.
Node characteristics
- CPU: 2 sockets with 20 Intel Skylake cores (2.4GHz, AVX512), for a total of 40 cores per node
- Computational performance: 3 PFlops (LINPACK), 4.61 PFlops theoretical peak (see the back-of-the-envelope check after this list).
- Network connection: 100Gb/s EDR Dragonfly+
- Memory: 202 GB (188 GiB) of RAM, i.e., a bit over 4GiB per core.
- Local disk: none.
- Operating system: Linux CentOS 7
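As a back-of-the-envelope check of the theoretical peak quoted above (assuming 32 double-precision FLOPs per core per cycle from two AVX-512 FMA units; this FLOPs-per-cycle figure is an assumption, not stated on this page):

    60,000 cores × 2.4 GHz × 32 FLOPs/cycle ≈ 4.61 × 10^15 FLOPs/s = 4.61 PFlops

which matches the Rpeak listed in the hardware specifications.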
Scheduling
The Niagara cluster will use the Slurm scheduler to run jobs. The basic scheduling commands will therefore be similar to those for Cedar and Graham, with a few differences:
- Scheduling will be by node only. This means jobs will always need to use multiples of 40 cores per job (see the example script below).
- Asking for specific amounts of memory will not be necessary and is discouraged; all nodes have the same amount of memory (202GB/188GiB minus some operating system overhead).
Details, such as how to request burst buffer usage in jobs, are still being worked out.
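Pending those details, here is a hedged sketch of what a full-node job script could look like under by-node Slurm scheduling. The walltime, job name, and executable are placeholders, and Niagara-specific partition or account options are deliberately omitted since they had not been announced at the time of this revision:

    #!/bin/bash
    #SBATCH --nodes=2                # whole nodes only: 2 nodes = 80 cores
    #SBATCH --ntasks-per-node=40     # use all 40 cores on each node
    #SBATCH --time=01:00:00          # placeholder walltime
    #SBATCH --job-name=example       # placeholder job name
    #SBATCH --output=example_%j.out
    # Note: no --mem request; all nodes have the same memory, so asking for
    # a specific amount is unnecessary and discouraged.

    cd $SCRATCH
    srun ./my_app                    # placeholder executable; 80 MPI ranks in total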
Software
- Module-based software stack.
- Both the standard Compute Canada software stack and cluster-specific software tuned for Niagara will be available.
- In contrast with Cedar and Graham, no modules will be loaded by default to prevent accidental conflicts in versions. There will be a simple mechanism to load the software stack that a user would see on Graham and Cedar.
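As a sketch of what that mechanism might look like in practice, assuming the usual Lmod module commands: the module name CCEnv is an assumption (a placeholder for whatever module ends up exposing the Cedar/Graham stack), and the package names are examples only.

    module avail                 # no modules are loaded by default
    module load CCEnv            # placeholder: module exposing the standard Compute Canada stack
    module load gcc openmpi      # example packages; exact names and versions to be confirmed
    module list                  # verify what ended up loaded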