Infrastructure renewal


<translate>
=Major upgrade of our Advanced Research Computing infrastructure= <!--T:1-->
{{Draft}}


<!--T:2-->
Our Advanced Research Computing infrastructure is undergoing major changes in the winter of 2024-2025 and the spring of 2025 to provide better High Performance Computing (HPC) and Cloud services for Canadian researchers. This page will be updated regularly to keep you informed about the transition to the new equipment.


<!--T:31-->
The infrastructure renewal will replace the nearly 80% of our current equipment that is approaching end-of-life. The new equipment will offer faster processing speeds, greater storage capacity, and improved reliability.


<!--T:3-->
The systems involved are:
*[[Infrastructure renewal#Arbutus,_cloud|Arbutus, cloud]]
*[[Infrastructure renewal#Béluga,_compute_cluster_only_(not_cloud)|Béluga, compute cluster only (not cloud)]]
*[[Infrastructure renewal#Cedar,_compute_cluster_and_cloud|Cedar, compute cluster and cloud]]
*[[Infrastructure renewal#Graham,_compute_cluster_and_cloud|Graham, compute cluster and cloud]]
*[[Infrastructure renewal#Niagara,_compute_cluster|Niagara, compute cluster]]


=Technical specifications= <!--T:4-->
Technical specifications for each new system will be provided further down this page in future updates. Generally, they will be similar in architecture to the current systems, but with considerably increased capacity and performance.
For example, we expect to have fewer compute nodes, but each node will have a significant increase in the number of its cores, for an overall increase in the total number of CPU cores.
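As a purely illustrative calculation (these numbers are hypothetical, not specifications): a current system with 1,000 nodes of 32 cores each provides 32,000 cores, while a replacement with 700 nodes of 64 cores each provides 44,800 cores; fewer nodes, but roughly 40% more cores in total.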


=Impacts= <!--T:5-->


==System outages== <!--T:6-->
During the installation and the transition to the new systems, outages will be unavoidable due to constraints on space and electrical power.  
We recommend that you consider the possibility of outages when you plan research programs, graduate examinations, etc.


<!--T:29-->
{| class="wikitable"
|-
| '''Start Time''' || '''End Time'''  || '''System''' || '''Description'''
|-
| Nov 7, 2024 || Nov 8, 2024 (1 day) || Niagara || All systems and storage located at the SciNet Datacenter (Niagara, Mist, HPSS, Rouge, Teach, JupyterHub, Balam) will be unavailable from 7 a.m. to 5 p.m. ET. This outage is required to install new electrical equipment (UPS) for the upcoming systems refresh. The work is expected to be completed in one day. The scheduler will hold jobs that cannot finish before the start of the shutdown. Users are encouraged to submit small and short jobs, since the scheduler may be able to fit them in on otherwise idle nodes before the maintenance begins (see the example below this table).
|-
| Nov 7, 2024, 6 a.m. PST || Nov 8, 2024, 6 a.m. PST || Cedar || Cedar compute nodes will be unavailable during this period (jobs will not run). Cedar login nodes and storage, as well as the Cedar cloud, will remain online and are not affected by this work.
|}
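As a minimal illustrative sketch (the walltime and script name are placeholders, not recommendations), a short requested walltime makes it easier for the scheduler to backfill your job onto otherwise idle nodes before a maintenance window:
<pre>
# Illustrative only: a job with a short walltime is easier to backfill
# before a scheduled outage. "short_task.sh" stands in for your own job script.
sbatch --time=00:30:00 short_task.sh
</pre>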


==Resource Allocation Competition (RAC)== <!--T:7-->
The [https://www.alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition Resource Allocation Competition] will be impacted by this transition, but the application process remains the same. The application deadline this year is October 30, 2024.<br>
The 2024/25 allocations will remain in effect on the retiring clusters for as long as each cluster remains in service. The 2025/26 allocations will be implemented everywhere once all new clusters are in service.<br>
Because the old clusters will mostly be out of service before all of the new ones are available, if you hold both a 2024 and a 2025 RAC award you will experience a period when neither award is available to you. You will be able to compute with your default allocation (<code>def-xxxxxx</code>) on each new cluster as soon as it goes into service, but the 2025 RAC allocations will only become available when all new clusters are in service.
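For reference, here is a minimal sketch of a job script charged to a default allocation during that gap; the account name <code>def-xxxxxx</code> and the resource requests are placeholders to adapt to your own group and workload:
<pre>
#!/bin/bash
# Minimal illustrative job script; replace def-xxxxxx with your own default account.
#SBATCH --account=def-xxxxxx   # default allocation, available as soon as a new cluster is in service
#SBATCH --time=01:00:00        # requested walltime
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
echo "Running on $(hostname)"
</pre>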


=General progress updates= <!--T:8-->
{| class="wikitable"
|-
| '''Date''' || '''Update'''
|-
| Oct 7, 2024 || Details of the necessary infrastructure (power and cooling) upgrades are being worked out. Timelines are not yet available, but we expect some outages of a day or more in November.
|-
| Sep 13, 2024 || The RFP processes for all sites except for Rorqual (replacing Béluga) have been completed, and purchase orders have been sent to vendors. The Rorqual storage Request for Proposals is still open and is scheduled to complete on September 18.
All sites are working on infrastructure design (power and cooling) and implementation. We are expecting some outages throughout the fall for cabling and plumbing upgrades.
|-
| Sep 3, 2024 || All sites have completed their Requests for Proposals, and are working with the vendors on deliverables and purchase orders.   
|}


=Activities by system= <!--T:9-->
 


==Arbutus, cloud== <!--T:10-->
Details about the replacement of [[Cloud resources#Arbutus cloud|Arbutus]] are <i>coming soon</i>.


==Béluga, compute cluster only (not cloud)== <!--T:11-->


<!--T:30-->
The cluster replacing [[Beluga/en|Béluga]] will be named <b>Rorqual</b>. Specifications are available on [[Rorqual/en|this page]].


==Cedar, compute cluster and cloud== <!--T:12-->
The cluster replacing [[Cedar]] will be named <b>Fir</b>. Specifications are available on [[Fir/en|this page]].


==Graham, compute cluster and cloud== <!--T:13-->
Details about the replacement of [[Graham]] are <i>coming soon</i>.


==Niagara, compute cluster== <!--T:14-->
The cluster replacing [[Niagara]] and [[Mist]] in early 2025 will be named [[Trillium]]. Specifications are available on [[Trillium|this page]]. Hardware delivery starts in late 2024, and the new cluster will be available to users in the spring of 2025. To make room, half of Niagara will be decommissioned starting in December 2024 or January 2025. We will update this page when we have a better idea of Trillium's installation schedule.


= Frequently asked questions = <!--T:15-->  
{{Note|We are committed to providing the most up-to-date information. Please check back regularly as this section will be updated frequently to reflect any new developments}}


== Will my data be copied to its new system? == <!--T:16-->
Data migration to the new systems is the responsibility of each National Host Site, which will inform you of what you need to do.


== When will outages occur? == <!--T:17-->
Each National Host Site will have its own schedule for outages as the installation of, and transition to, the new equipment proceeds. As usual, specific outages will be described on [https://status.alliancecan.ca our system status web page]. We will provide more general updates on this wiki page, and you will periodically receive emails with updates and outage notices.


== Who can I contact for questions about the transition? == <!--T:18-->
Contact our [[technical support]] team; they will do their best to answer your questions.


== Will my jobs and applications still be able to run on the new system? == <!--T:19-->
Generally yes, but the new CPUs and GPUs may require recompilation or reconfiguration of some applications. More details will be provided as the transition unfolds.
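As a hedged sketch only (module names, versions and paths below are illustrative; check <code>module avail</code> on the new system once it is in service), recompiling your own code after the move could look like this:
<pre>
# Illustrative only: exact module names and versions will depend on what the new clusters offer.
module load StdEnv/2023 gcc    # load a standard software environment and a compiler
cd ~/projects/my_code          # hypothetical path to your own source code
make clean && make             # rebuild so the binaries target the new CPUs/GPUs
</pre>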


== Will the software from the current systems still be available? == <!--T:20-->
Yes, our [[Standard software environments|standard software environment]] will be available on the new systems.


== Will there be staggered outages? == <!--T:21-->
We will do our best to limit overlapping outages, but because we are very constrained by delivery schedules and funding deadlines, there will probably be periods when several of our systems are simultaneously offline. Outages will be announced as early as possible.


== Can I purchase old hardware after equipment upgrades? == <!--T:28-->
Most of the equipment is legally the property of the host institution. When the equipment is retired, the host institution manages its disposal following that institution's guidelines. This typically involves "e-cycling", that is, recycling the equipment rather than selling it. If you're looking to acquire old hardware, it's best to contact the host institution directly, as they may have specific policies or options for selling it.
</translate>