National systems

Compute

Overview

Cedar, Graham and Béluga are similar systems; they differ mainly in their interconnects and in the number of large-memory, small-memory and GPU nodes.

{| class="wikitable"
! Name !! Description !! Capacity !! Status
|-
| CC-Cloud Resources (Arbutus cloud, East cloud, Cedar cloud, Graham cloud)
| OpenStack IaaS
| 17,272 cores
| In production
|-
| Béluga
| heterogeneous, general-purpose cluster
* Serial and small parallel jobs
* GPU and big memory nodes
| 34,880 cores
| In production
|-
| Cedar
| heterogeneous, general-purpose cluster
* Serial and small parallel jobs
* GPU and big memory nodes
* Small cloud partition
| 94,528 cores
| In production
|-
| Graham
| heterogeneous, general-purpose cluster
* Serial and small parallel jobs
* GPU and big memory nodes
* Small cloud partition
| 41,548 cores
| In production
|-
| Niagara
| homogeneous, large parallel cluster
* Designed for large parallel jobs > 1000 cores
| 80,960 cores
| In production
|}

All systems have large, high-performance attached storage; see the relevant cluster page for more details.
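
The table above distinguishes the clusters mainly by the kind of parallel work they target: serial and small parallel jobs on the general-purpose systems, and large parallel jobs of more than 1000 cores on Niagara. As an illustration of what such a parallel job runs, below is a minimal MPI program in C in which every process reports its rank and the node it runs on. It is a generic sketch, not taken from any cluster's documentation; compiler wrappers, module environments and scheduler options differ between systems and are assumed rather than specified here.

<syntaxhighlight lang="c">
/* Minimal MPI sketch: each process (rank) reports itself.
 * Illustrative only; build and submission details (e.g. "mpicc -O2
 * hello_mpi.c -o hello_mpi" and the scheduler options for a given
 * cluster) are assumptions, not taken from this page.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* this process's id          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total processes in the job */
    MPI_Get_processor_name(name, &name_len);  /* node this rank runs on     */

    printf("Rank %d of %d running on node %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
</syntaxhighlight>

The same program applies to both regimes: it can be launched with a handful of ranks as a small parallel job on a general-purpose cluster, or scaled to the >1000-core job sizes Niagara is designed for, with only the requested core count changing.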

CCDB descriptions

General descriptions are also available on CCDB: