National systems

Status: early draft. Author: CC Migration team.

Introduction

TBD: detailed documentation for the new systems is being written by the Research Support National Team.

Compute

Name | Description | Approximate Capacity | Availability
CC-Cloud Resources (GP1) | Cloud | 7,640 cores | In production (integrated with west.cloud)
GP2 | General-purpose cluster (serial and small parallel jobs) | 25,000 cores | February 2017
GP3 | General-purpose cluster (serial and small parallel jobs) | 25,000 cores | May 2017
LP | Cluster designed for large parallel jobs | 60,000 cores | Summer 2017
Available software

National Data Cyberinfrastructure

Storage Components

The National Data Cyberinfrastructure will have two components:

1. Object Storage

A fully distributed object storage system spanning all four sites, with geo-redundancy, replication and universal access; a sketch of S3-style access follows these component descriptions.

2. Long-term or Nearline File Storage

Long-term file storage sites (similar to Silo) at SFU and the University of Waterloo. Each site will have tape backup.
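
The object store will provide S3 and file interfaces (see the table below). As a rough illustration only, the following Python sketch shows what S3-style access could look like via boto3; the endpoint URL, bucket name, object keys and credentials are placeholders, since the real service configuration has not yet been published.

```python
import boto3

# Placeholder endpoint and credentials: the real values will come from the
# Research Support National Team's documentation once the service is live.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.org",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload an observational data file to a project bucket (all names are placeholders).
s3.upload_file("observations.h5", "my-project-bucket", "2016/observations.h5")

# Because the store is fully distributed and universally accessible, the same
# object can be retrieved through the same interface from any site.
s3.download_file("my-project-bucket", "2016/observations.h5", "observations_copy.h5")
```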

Capacity and Availability

Type | Location | Capacity | Availability | Comments (listed under each row)
Long-term File Storage | Sites at SFU and Waterloo | 15 PB each | Interim: late autumn 2016
  • Tape backup
  • Silo replacement
  • Aiming for redundant tape backup across sites
Object Store | All 4 sites | Small to start (a few PB usable) | Late 2016
  • Fully distributed, redundant object storage
  • Accessible anywhere
  • Allows for redundant, high-availability architectures (see the failover sketch at the end of this section)
  • S3 and file interfaces
  • New service aimed at experimental and observational data
Special Purpose | All 4 sites | ~3.5 PB | Specialized migration plans
  • Special purpose for ATLAS, LHC, SNO+, CANFAR
  • dCache and other customized configurations

Notes
  • Patrick Mann, 20 October 2016: Storage building blocks have been delivered and are being installed.
  • Patrick Mann, 23 September 2016: Due to the Silo decommissioning on Dec. 31, 2016, it will be necessary to provide interim storage while the NSI is developed. See Migration2016:Silo for details.
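
Since the object store is replicated across all four sites, client code can also be arranged for high availability by failing over between sites. The sketch below assumes each site exposes its own S3 endpoint, which is an assumption on our part and not confirmed by the migration documentation; endpoint URLs and bucket names are placeholders.

```python
import boto3
from botocore.exceptions import BotoCoreError, ClientError

# Placeholder per-site endpoints; whether separate endpoints exist per site
# is an assumption, not a documented feature of the service.
ENDPOINTS = [
    "https://objectstore-site1.example.org",
    "https://objectstore-site2.example.org",
]

def fetch_with_failover(bucket, key, filename):
    """Try each site's endpoint in turn and download the object from the first one that responds."""
    last_error = None
    for url in ENDPOINTS:
        # Credentials are taken from the default boto3 credential chain in this sketch.
        s3 = boto3.client("s3", endpoint_url=url)
        try:
            s3.download_file(bucket, key, filename)
            return url  # report which site served the request
        except (BotoCoreError, ClientError) as err:
            last_error = err  # site unreachable or object missing here; try the next site
    raise last_error

# Example usage (placeholder names):
# fetch_with_failover("my-project-bucket", "2016/observations.h5", "observations.h5")
```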