National systems


Sharing: Public | Early Draft | Author: CC Migration team

==Introduction==

TBD: detailed documentation for the new systems is being written by the Research Support National Team.

==Compute==

<table border="1">
<tr>
<th>Name</th><th>Description</th><th>Approximate Capacity</th><th>Availability</th>
</tr>
<tr>
<td>CC-Cloud Resources (GP1)</td><td>cloud</td><td>7,640 cores</td><td>In production (integrated with west.cloud)</td>
</tr>
<tr>
<td>GP2</td><td>general-purpose cluster (serial and small parallel jobs)</td><td>25,000 cores</td><td>February 2017</td>
</tr>
<tr>
<td>GP3</td><td>general-purpose cluster (serial and small parallel jobs)</td><td>25,000 cores</td><td>May 2017</td>
</tr>
<tr>
<td>LP</td><td>a cluster designed for large parallel jobs</td><td>60,000 cores</td><td>Summer 2017</td>
</tr>
</table>
==Available software==

==National Data Cyberinfrastructure==

<table border="1">
<tr>
<th>Type</th><th>Location</th><th>Capacity</th><th>Availability</th><th>Comments</th>
</tr>
<tr>
<td>Nearline (Long-term) File Storage</td>
<td>Sites at SFU and Waterloo:
* NDC-SFU
* NDC-Waterloo
</td>
<td>15 PB each</td>
<td>Late Autumn 2016</td>
<td>
* Tape backup
* Silo replacement
* Aiming for redundant tape backup across sites
</td>
</tr>
<tr>
<td>Object Store</td>
<td>All 4 sites:
* NDC-Object
</td>
<td>Small to start (a few PB usable)</td>
<td>Late 2016</td>
<td>
* Fully distributed, redundant object storage
* Accessible anywhere
* Allows for redundant, high-availability architectures
* S3 and File interfaces (see the access sketch below)
* New service aimed at experimental and observational data
</td>
</tr>
<tr>
<td>Special Purpose</td>
<td>All 4 sites</td>
<td>~3.5 PB</td>
<td>Specialized migration plans</td>
<td>
* Special purpose for ATLAS, LHC, SNO+, CANFAR
* dCache and other customized configurations
</td>
</tr>
</table>
Notes:
* Patrick Mann (talk) 16:43, 20 October 2016 (UTC): Storage Building Blocks have been delivered and are being installed.
* Patrick Mann (talk) 15:26, 23 September 2016 (UTC): Because Silo is being decommissioned on December 31, 2016, interim storage will need to be provided while the NSI is developed. See Migration2016:Silo for details.
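
The object store's S3 interface means data can be moved in and out with any standard S3 client. Below is a minimal access sketch using the boto3 Python library; the endpoint URL, credential profile, bucket name, and object keys are hypothetical placeholders, not published NDC-Object values.

<syntaxhighlight lang="python">
# Minimal sketch: reading and writing an S3-compatible object store such as NDC-Object.
# The endpoint URL, profile name, bucket, and keys below are hypothetical placeholders;
# use the values provided with your allocation.
import boto3

session = boto3.session.Session(profile_name="ndc-object")  # assumed credential profile
s3 = session.client(
    "s3",
    endpoint_url="https://object-store.example.ca",  # placeholder endpoint
)

# Upload a data file, then list what is stored under the same prefix.
s3.upload_file("observations.h5", "my-project-bucket", "raw/observations.h5")

listing = s3.list_objects_v2(Bucket="my-project-bucket", Prefix="raw/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
</syntaxhighlight>

Because the S3 protocol runs over HTTPS against a single endpoint, a script like this can in principle be run from any site with network access to the service, which is what "accessible anywhere" suggests for this storage type.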