Niagara
Revision as of 20:53, 27 February 2018
Expected availability: Testing and configuration: March 2018. 2018 RACs will be implemented in April 2018.
Niagara is a homogeneous cluster, owned by the University of Toronto and operated by SciNet, intended to enable large parallel jobs of 1024 cores and more. It was designed to optimize throughput of a range of scientific codes running at scale, energy efficiency, and network and storage performance and capacity.
The user experience on Niagara will be similar to that on Graham and Cedar, but specific instructions on how to use the system are still in preparation, since details of the setup remain in flux at present (February 2018).
Niagara is an allocatable resource in the 2018 Resource Allocation Competition (RAC 2018), which comes into effect on April 4, 2018.
Niagara installation update at the SciNet User Group Meeting on February 14th, 2018
Niagara installation time-lapse video
Niagara system specifications[edit]
- 1500 nodes, each with 40 Intel Skylake cores at 2.4GHz, for a total of 60,000 cores.
- 202 GB (188 GiB) of RAM per node.
- EDR Infiniband network in a so-called 'Dragonfly+' topology.
- 5PB of scratch, 5+2PB of project space (parallel file system: IBM Spectrum Scale, formerly known as GPFS).
- 256 TB burst buffer (Excelero + IBM Spectrum Scale).
- No local disks.
- Rpeak of 4.61 PF.
- Rmax of 3.0 PF.
- 685 kW power consumption.
Attached storage systems[edit]
Home space
Parallel high-performance filesystem (IBM Spectrum Scale).

Scratch space
5PB total volume. Parallel high-performance filesystem (IBM Spectrum Scale).
- Available as the $SCRATCH environment variable.
- Not allocated.
- Large fixed quota per user and per group path.
- Inactive data will be purged.

Burst buffer
256TB total volume. Parallel extra high-performance filesystem (Excelero + IBM Spectrum Scale).
- Not allocated.

Project space
External persistent storage.
- Allocated via RAC.
- Available as the $PROJECT environment variable.
- Quota set per user and per project path.
- Backed up.

Archive Space
10PB total volume. High Performance Storage System (IBM HPSS).
- Allocated via RAC.
- Intended for large datasets requiring offload from active file systems.
- Available as the $ARCHIVE environment variable.
- Large fixed quota per group.
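For orientation, a minimal shell sketch of how the scratch and project areas might be used follows; the directory and file names are placeholders, and the HPSS-based archive space is left out since it will likely require dedicated transfer tools rather than plain copy commands:

  # Run computations in scratch: fast, not backed up, inactive data purged.
  cd $SCRATCH
  mkdir -p my_run && cd my_run      # placeholder working directory

  # Copy results that must persist to the allocated, backed-up project space.
  cp results.dat $PROJECT/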
High-performance interconnect[edit]
The Niagara system has an EDR Infiniband network in a so-called 'Dragonfly+' topology, with four wings. Each wing (of 375 nodes) has 1-to-1 connections. Network traffic between wings is done through adaptive routing, which alleviates network congestion.
Node characteristics[edit]
- CPU: 2 sockets with 20 Intel Skylake cores (2.4GHz, AVX512), for a total of 40 cores per node
- Computational performance: 3 TFlops (theoretical maximum)
- Network connection: 100Gb/s EDR
- Memory: 202 GB (188 GiB) of RAM, i.e., a bit over 4 GiB per core.
- Local disk: none.
- Operating system: Linux CentOS 7
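These figures are mutually consistent under the assumption that each Skylake core has two AVX-512 fused multiply-add units, i.e. 32 double-precision FLOPs per cycle: 40 cores x 2.4 GHz x 32 FLOPs/cycle gives about 3.07 TFlops per node, and 1500 nodes x 3.07 TFlops gives about 4.6 PF, matching the Rpeak of 4.61 PF listed above.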
Scheduling[edit]
The Niagara system will use the Slurm scheduler to run jobs. The basic scheduling commands will therefore be similar to those for Cedar and Graham, with a few differences:
- Scheduling will be by node only. This means jobs must always request multiples of 40 cores.
- Asking for specific amounts of memory will not be necessary and is discouraged; all nodes have the same amount of memory (202GB/188GiB minus some operating system overhead).
Details, such as how to request burst buffer usage in jobs, are still being worked out.
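To illustrate the node-only model, here is a minimal sketch of a full-node MPI job script, assuming the same Slurm syntax as on Cedar and Graham; the job name, accounting group, and program are placeholders, and the exact directives accepted on Niagara may differ:

  #!/bin/bash
  #SBATCH --nodes=2                 # whole nodes only; cores come in multiples of 40
  #SBATCH --ntasks-per-node=40      # one MPI rank per core
  #SBATCH --time=01:00:00           # walltime limit
  #SBATCH --job-name=niagara_test   # placeholder job name
  #SBATCH --account=def-someuser    # placeholder accounting group
  # Note: no memory request, in line with the node-only scheduling policy.

  cd $SLURM_SUBMIT_DIR              # start in the submission directory
  srun ./my_mpi_program             # placeholder executable; 80 ranks in total

Such a script would be submitted with sbatch and monitored with squeue, as on the other clusters.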
Software[edit]
- Module-based software stack.
- Both the standard Compute Canada software stack and a system-specific stack tuned for Niagara will be available.
- Unlike on Cedar and Graham, no modules will be loaded by default, to prevent accidental version conflicts. There will be a simple mechanism to load the software stack that a user would see on Graham and Cedar.
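As a rough sketch of the module-based workflow, assuming the standard Environment Modules commands; the module names below are placeholders, and the mechanism for switching to the Graham/Cedar stack had not yet been announced at the time of writing:

  module avail                 # list software available on the system
  module load gcc openmpi      # placeholder names: load a compiler and an MPI library
  module list                  # confirm which modules are currently loaded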