National systems

<languages />




==Compute==
===Overview===


Cedar (GP2) and Graham (GP3) are similar systems, with some differences in interconnect and in the number of large-memory, small-memory and GPU nodes.


{| class="wikitable"
|| OpenStack IAAS Cloud || 7,640 cores || In production<br /> (integrated with west.cloud)
|-
| [[Cedar|Cedar (GP2)]] ||
heterogeneous, general-purpose cluster
* Serial and small parallel jobs
|| 27,696 cores || In production
|-
| [[Graham|Graham (GP3)]] ||
heterogeneous, general-purpose cluster
* Serial and small parallel jobs
|| Approx 40,000 cores || RFP closes in May, 2018
|-
| [[Niagara|Niagara (LP)]] ||
homogeneous, large parallel cluster
* Designed for large parallel jobs > 1000 cores
|| 60,000 cores || In production
|}


All systems have large, high-performance attached storage. See [[National Data Cyberinfrastructure]] for an overview; for details on individual systems, follow the links above.


==CCDB descriptions==

General descriptions are also available on CCDB:
* [https://ccdb.computecanada.ca/resources/Cedar-Compute Cedar-Compute]
* [https://ccdb.computecanada.ca/resources/Cedar-GPU Cedar-GPU]
* [https://ccdb.computecanada.ca/resources/Graham-Compute Graham-Compute]
* [https://ccdb.computecanada.ca/resources/Graham-GPU Graham-GPU]
* [https://ccdb.computecanada.ca/resources/NDC-SFU NDC-SFU]
* [https://ccdb.computecanada.ca/resources/NDC-Waterloo NDC-Waterloo]




[[Category:Migration2016]]