National systems: Difference between revisions




Revision as of 16:09, 6 February 2019



Compute

Overview

Cedar (GP2) and Graham (GP3) are similar systems with some differences in interconnect and the number of large memory, small memory and GPU nodes.

CC-Cloud Resources (Arbutus/west.cloud (GP1) and east.cloud)
  Description: OpenStack IaaS cloud
  Capacity:    7,640 cores
  Status:      In production (integrated with west.cloud)

Cedar (GP2)
  Description: heterogeneous, general-purpose cluster
    • Serial and small parallel jobs
    • GPU and big memory nodes
    • Small cloud partition
  Capacity:    27,696 cores
  Status:      In production

Graham (GP3)
  Description: heterogeneous, general-purpose cluster
    • Serial and small parallel jobs
    • GPU and big memory nodes
    • Small cloud partition
  Capacity:    33,376 cores
  Status:      In production

GP4
  Description: heterogeneous, general-purpose cluster
    • May have GPUs, large memory, etc.
  Capacity:    approx. 40,000 cores
  Status:      RFP closes in May 2018

Niagara (LP)
  Description: homogeneous, large parallel cluster
    • Designed for large parallel jobs (> 1,000 cores)
  Capacity:    ~60,000 cores
  Status:      Vendor negotiations

All systems have large, high-performance attached storage. See National Data Cyberinfrastructure for an overview; follow the links above to the individual systems for details.

CCDB descriptions

General descriptions are also available on CCDB:
  • Cedar-Compute
  • Cedar-GPU
  • Graham-Compute
  • Graham-GPU
  • NDC-SFU
  • NDC-Waterloo