National systems


Compute

Overview

GP2 and GP3 are nearly identical systems; they differ only slightly in their mix of large-memory, small-memory and GPU nodes.

Name: CC-Cloud Resources (GP1)
Description: Cloud
Approximate capacity: 7,640 cores
Availability: In production (integrated with west.cloud)

Name: GP2
Description: Heterogeneous, general-purpose cluster
  • Serial and small parallel jobs
  • Small cloud partition
Approximate capacity: 25,000 cores
Availability: April 2017

Name: GP3
Description: Heterogeneous, general-purpose cluster
  • Serial and small parallel jobs
  • Small cloud partition
Approximate capacity: 25,000 cores
Availability: April 2017

Name: LP
Description: Cluster designed for large parallel jobs
Approximate capacity: 60,000 cores
Availability: Late 2017

Note that GP1, GP2 and LP will all have large, high-performance attached storage.


National Data Cyberinfrastructure (NDC)

Type: Nearline (long-term) file storage
Location: SFU and Waterloo
  • NDC-SFU
  • NDC-Waterloo
Capacity: 15 PB each
Availability: Late autumn 2016
Comments:
  • Tape backup
  • Silo replacement
  • Aiming for redundant tape backup across sites

Type: Object store
Location: All four sites
  • NDC-Object
Capacity: Small to start (a few PB usable)
Availability: Late 2017
Comments:
  • Fully distributed, redundant object storage
  • Accessible anywhere
  • Allows for redundant, high-availability architectures
  • S3 and file interfaces (see the sketch after this table)
  • New service aimed at experimental and observational data

Type: Special purpose
Location: Various
Capacity: ~3.5 PB
Availability: Specialized migration plans
Comments:
  • Special purpose for ATLAS, LHC, SNO+ and CANFAR
  • dCache and other customized configurations
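
The S3 interface mentioned above is a standard object-storage protocol, so in principle any S3-compatible client should work with NDC-Object once it is in service. The following is a minimal sketch in Python using the boto3 client, under that assumption; the endpoint URL, bucket name, object keys and credentials are hypothetical placeholders, not published NDC details.

# Minimal sketch: accessing an S3-compatible object store such as the planned
# NDC-Object service. The endpoint, credentials and bucket below are
# hypothetical placeholders, not real NDC values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ndc-object.example.org",  # hypothetical endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",            # placeholder credential
    aws_secret_access_key="YOUR_SECRET_KEY",        # placeholder credential
)

# Upload an experimental data file into a project bucket (names are examples).
s3.upload_file("observations.h5", "my-project-bucket", "2017/run01/observations.h5")

# List the objects stored under that prefix.
response = s3.list_objects_v2(Bucket="my-project-bucket", Prefix="2017/run01/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])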

Note that because Silo will be decommissioned on December 31, 2016, interim storage will be needed while the NDC is developed. See Migration2016:Silo for details.