National systems

<languages />
<translate>


==Compute clusters== <!--T:1-->


<!--T:3-->
A ''general-purpose'' cluster is designed to support a wide variety of jobs and is composed of a mixture of different nodes.  We broadly classify the nodes as:
* ''base'' nodes, typically containing about 4GB of memory per core;
* ''large-memory'' nodes, typically containing more than 8GB of memory per core;
* ''GPU'' nodes, which contain [https://en.wikipedia.org/wiki/Graphics_processing_unit graphics processing units].


<!--T:17-->
The ''large parallel'' cluster [[Niagara]] is designed to support multi-node parallel jobs requiring more than 1000 CPU cores, although jobs as small as a single node are also supported there.  Niagara is composed of nodes of a uniform design, with an interconnect optimized for large jobs.


<!--T:18-->
All clusters have large, high-performance storage attached.  For details about storage, memory, CPU model and count, GPU model and count, and the number of nodes at each site, please click on the cluster name in the table below.
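
As a rough illustration of how these node classes translate into scheduler requests, the hedged sketch below builds a small batch script for the Slurm scheduler used on these clusters and submits it from Python; the account name, time limit, resource values, and the single-GPU request are placeholder assumptions, not recommendations.

<syntaxhighlight lang="python">
import subprocess
import textwrap

# Minimal sketch of a Slurm batch script that targets the node classes
# described above.  The account name and resource values are placeholders.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --account=def-someuser
    #SBATCH --time=01:00:00
    #SBATCH --cpus-per-task=4
    # About 4GB per core fits a base node; asking for more memory per core
    # steers the job toward a large-memory node.
    #SBATCH --mem-per-cpu=4G
    # Requesting a GPU schedules the job on a GPU node.
    #SBATCH --gres=gpu:1
    nvidia-smi
    """)

# Submit from a cluster login node; sbatch also accepts the script on stdin.
result = subprocess.run(["sbatch"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())   # e.g. "Submitted batch job 123456"
</syntaxhighlight>

A large multi-node job destined for Niagara would typically request whole nodes instead, for example with <code>--nodes</code> and <code>--ntasks-per-node</code>, rather than individual cores.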


===List of compute clusters=== <!--T:14-->
<!--T:15-->
{| class="wikitable"
|-
! Name and link !! Type !! Sub-systems !! Status
|-
| [[Béluga/en|Béluga]]
| General-purpose
|
* beluga-compute
* beluga-gpu
* beluga-storage
| In production
|-
| [[Cedar|Cedar]]
| General-purpose
|
* cedar-compute
* cedar-gpu
* cedar-storage
| In production
|-
| [[Graham|Graham]]
| General-purpose
|
* graham-compute
* graham-gpu
* graham-storage
| In production
|-
| [[Narval/en|Narval]]
| General-purpose
|
* narval-compute
* narval-gpu
* narval-storage
| In production
|-
| [[Niagara|Niagara]]
| Large parallel
|
* niagara-compute
* niagara-storage
* hpss-storage
| In production
|}


==Cloud - Infrastructure as a Service== <!--T:16-->
Our cloud systems offer Infrastructure as a Service (IaaS) based on OpenStack.
 
<!--T:4-->
{| class="wikitable"
|-
! Name and link !! Sub-systems !! Description !! Status
|-
| [[Cloud_resources#Arbutus_cloud|Arbutus cloud]]
|
* arbutus-compute-cloud
* arbutus-persistent-cloud
* arbutus-dcache
|
* VCPU, VGPU, RAM
* Local ephemeral disk
* Volume and snapshot storage
* Shared filesystem storage (backed up)
* Object storage
* Floating IPs
* dCache storage
| In production
|-
| [[Cloud_resources#B.C3.A9luga_cloud|Béluga cloud]]
|
* beluga-compute-cloud
* beluga-persistent-cloud
|
* VCPU, RAM
* Local ephemeral disk
* Volume and snapshot storage
* Floating IPs
| In production
|-
| [[Cloud_resources#Cedar_cloud|Cedar cloud]]
|
* cedar-persistent-cloud
* cedar-compute-cloud
|
* VCPU, RAM
* Local ephemeral disk
* Volume and snapshot storage
* Floating IPs
| In production
|-
| [[Cloud_resources#Graham_cloud|Graham cloud]]
|
* graham-persistent-cloud
|
* VCPU, RAM
* Local ephemeral disk
* Volume and snapshot storage
* Floating IPs
| In production
|}
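
As a hedged sketch of how these OpenStack-based clouds can be driven programmatically, the example below boots a small virtual machine with the <code>openstacksdk</code> Python client. It assumes OpenStack credentials are already loaded in the environment (for instance from a downloaded RC file), and the image, flavor, network, and key-pair names are hypothetical placeholders.

<syntaxhighlight lang="python">
import openstack

# Connect using credentials already present in the environment
# (e.g. sourced from an OpenStack RC file).
conn = openstack.connect()

# The image, flavor, network and key-pair names below are examples only.
image = conn.compute.find_image("Ubuntu-22.04")
flavor = conn.compute.find_flavor("p2-3gb")
network = conn.network.find_network("my-tenant-network")

server = conn.compute.create_server(
    name="demo-instance",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="my-keypair",
)

# Block until the instance is ACTIVE, then report its status.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
</syntaxhighlight>

The same connection object can also manage the volume, snapshot, and floating-IP resources listed in the table above, for example through the <code>conn.block_storage</code> and <code>conn.network</code> interfaces.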
 
</translate>
