National systems

Compute

Overview

Cedar (GP2) and Graham (GP3) are similar systems, with some differences in interconnect and in the number of large-memory, small-memory, and GPU nodes.

{| class="wikitable"
! Name !! Description !! Capacity !! Status
|-
| CC-Cloud Resources:<br />Arbutus/west.cloud (GP1)<br />east.cloud
| OpenStack IaaS cloud
| 7,640 cores
| In production<br />(integrated with west.cloud)
|-
| Cedar (GP2)
| heterogeneous, general-purpose cluster
* Serial and small parallel jobs
* GPU and big-memory nodes
* Small cloud partition
| 27,696 cores
| In production
|-
| Graham (GP3)
| heterogeneous, general-purpose cluster
* Serial and small parallel jobs
* GPU and big-memory nodes
* Small cloud partition
| 33,376 cores
| In production
|-
| GP4
| heterogeneous, general-purpose cluster
* May have GPUs, large memory, etc.
| Approx. 40,000 cores
| RFP closes in May 2018
|-
| Niagara (LP)
| homogeneous, large parallel cluster
* Designed for large parallel jobs > 1000 cores
| 60,000 cores
| In production
|}

All systems have large, high-performance attached storage. See National Data Cyberinfrastructure for an overview; follow the links above to the individual systems for details.

CCDB descriptions

General descriptions are also available on CCDB: