National systems

From Alliance Doc

Revision as of 19:11, 28 March 2019



Compute

Overview

Cedar, Graham and Béluga are similar systems with some differences in interconnect and the number of large memory, small memory and GPU nodes.

Name: Arbutus / east.cloud (CC-Cloud Resources)
Description: OpenStack IAAS Cloud (integrated with west.cloud)
Capacity: 7,640 cores
Status: In production

Name: Cedar
Description: heterogeneous, general-purpose cluster
  • Serial and small parallel jobs
  • GPU and big memory nodes
  • Small cloud partition
Capacity: 58,416 cores
Status: In production

Name: Graham
Description: heterogeneous, general-purpose cluster
  • Serial and small parallel jobs
  • GPU and big memory nodes
  • Small cloud partition
Capacity: 33,376 cores
Status: In production

Name: Béluga
Description: heterogeneous, general-purpose cluster
  • Serial and small parallel jobs
  • GPU and big memory nodes
Capacity: approximately 40,000 cores
Status: Beta-testing; available April 2019

Name: Niagara
Description: homogeneous, large parallel cluster
  • Designed for large parallel jobs (> 1000 cores)
Capacity: 60,000 cores
Status: In production
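To illustrate the distinction between "serial and small parallel jobs" and large parallel work, here is a minimal sketch of a batch-script submission with the Slurm scheduler used on these clusters. The account name (def-someuser) and the executable (my_mpi_program) are placeholders, not real values from this page; resource amounts are illustrative only.

```shell
# Write a minimal Slurm batch script for a small parallel job
# (4 tasks, the kind of workload Cedar/Graham/Béluga accept).
cat > small_parallel_job.sh <<'EOF'
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder allocation account
#SBATCH --ntasks=4               # small parallel job: 4 tasks
#SBATCH --mem-per-cpu=2G         # illustrative memory request
#SBATCH --time=01:00:00          # illustrative wall-time limit
srun ./my_mpi_program            # placeholder executable
EOF
echo "wrote small_parallel_job.sh"
```

On a cluster login node this script would be submitted with `sbatch small_parallel_job.sh`; a Niagara-scale job would instead request on the order of 1000+ tasks across whole nodes.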

All systems have large, high-performance attached storage. See National Data Cyberinfrastructure for an overview, and follow the links above to the individual systems for details.

CCDB descriptions

General descriptions are also available on CCDB: