National systems
Sharing: Public | Early Draft | Author: CC Migration team
Introduction
TBD: detailed documentation for the new systems is being written by the Research Support National Team.
Compute
Name | Description | Approximate Capacity | Availability
CC-Cloud Resources (GP1) | cloud | 7,640 cores | In production (integrated with west.cloud)
GP2 | general-purpose cluster (serial and small parallel jobs) | 25,000 cores | February 2017
GP3 | general-purpose cluster (serial and small parallel jobs) | 25,000 cores | May 2017
LP | a cluster designed for large parallel jobs | 60,000 cores | Summer 2017
Available software
National Storage Infrastructure
The National Storage Infrastructure will have two components:
1. Object Storage: a fully distributed object storage system spanning all four sites, with geo-redundancy, replication and universal access.
2. Long-term or Nearline File Storage ("Nearline").
Type | Location | Capacity | Availability | Comments
Long-term File Storage | Sites at SFU and Waterloo | 15 PB each | Interim: early November (delayed) |
Object Store | All 4 sites | Small to start (a few PB usable) | Late 2016 |
Special Purpose | All 4 sites | ~3.5 PB | Specialized migration plans |
- Patrick Mann (talk) 16:43, 20 October 2016 (UTC) Storage Building Blocks have been delivered and are being installed.
- Patrick Mann (talk) 15:26, 23 September 2016 (UTC) Note that due to the decommissioning of Silo on December 31, 2016, it will be necessary to provide interim storage while the NSI is developed. See Migration2016:Silo for details.