Trillium
{{Draft}}
<languages />
<translate>
<!--T:1-->
{| class="wikitable"
|-
| Availability: Spring 2025
|-
| Login node: to be determined
|-
| Globus endpoint: to be determined
|-
| Data transfer node (rsync, scp, sftp,...): to be determined
|-
| Portal: to be determined
|}


<!--T:2-->
This is the page for the large parallel cluster named Trillium hosted by SciNet at the University of Toronto.
 
<!--T:3-->
The Trillium cluster will be deployed in the spring of 2025. 
 
<!--T:4-->
This cluster, built by Lenovo Canada, will consist of:
 
<!--T:5-->
* 1,224 CPU nodes, each with
** Two 96-core AMD EPYC “Zen5” processors (192 cores per node).
** 768 GiB of DDR5 memory.
 
<!--T:6-->
* 60 GPU nodes, each with
** Four NVIDIA H100 SXM GPUs with 80 GB of memory each.
** One 96-core AMD EPYC “Zen4” processor.
** 768 GiB of DDR5 memory.
 
<!--T:7-->
* NVIDIA “NDR” InfiniBand network
** 400 Gbps network bandwidth for CPU nodes.
** 800 Gbps network bandwidth for GPU nodes.
** Fully non-blocking, meaning every node can talk to every other node at full bandwidth simultaneously.
 
<!--T:8-->
* Parallel storage: 29 petabytes of NVMe SSD-based storage from VAST Data.
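
Taken together, the per-node figures above amount to roughly 235,000 cores in the CPU partition and 240 H100 GPUs. The short Python sketch below simply multiplies out the counts already listed; it makes no assumptions about the system beyond those figures.

<syntaxhighlight lang="python">
# Illustrative totals derived only from the node counts listed above.
cpu_nodes = 1224
cores_per_cpu_node = 2 * 96         # two 96-core EPYC "Zen5" processors per node

gpu_nodes = 60
gpus_per_gpu_node = 4               # four NVIDIA H100 SXM 80GB per node
host_cores_per_gpu_node = 96        # one 96-core EPYC "Zen4" processor per node

print("CPU subsystem cores:", cpu_nodes * cores_per_cpu_node)            # 235008
print("GPU subsystem host cores:", gpu_nodes * host_cores_per_gpu_node)  # 5760
print("H100 GPUs:", gpu_nodes * gpus_per_gpu_node)                       # 240
</syntaxhighlight>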
</translate>
