<languages />
<translate>
<!--T:1-->
{| class="wikitable"
|-
|}

<!--T:2-->
This is the page for the large parallel cluster named Trillium, hosted by SciNet at the University of Toronto.

<!--T:3-->
The Trillium cluster will be deployed in the spring of 2025.

<!--T:4-->
This cluster, built by Lenovo Canada, will consist of:

<!--T:5-->
* 1,224 CPU nodes, each with
** Two 96-core AMD EPYC “Zen5” processors (192 cores per node).
** 768 GiB of DDR5 memory.

<!--T:6-->
* 60 GPU nodes, each with
** 4 x NVIDIA H100 SXM 80GB
** 768 GiB of DDR5 memory.

<!--T:7-->
* NVIDIA “NDR” InfiniBand network
** 400 Gbps network bandwidth for CPU nodes
** Fully non-blocking, meaning every node can talk to every other node at full bandwidth simultaneously.

<!--T:8-->
* Parallel storage: 29 petabytes of NVMe SSD-based storage from VAST Data.
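As a rough back-of-the-envelope check, the node counts above imply the following aggregate capacity. This is only a sketch using the per-node figures listed on this page; the GPU nodes' own CPU core and memory totals are not included here since their full specification is not given above.

```python
# Aggregate capacity implied by the node list above (illustrative arithmetic only).
cpu_nodes = 1224
cores_per_cpu_node = 2 * 96          # two 96-core AMD EPYC "Zen5" processors
mem_per_cpu_node_gib = 768           # DDR5 memory per CPU node
gpu_nodes = 60
gpus_per_node = 4                    # NVIDIA H100 SXM 80GB

total_cpu_cores = cpu_nodes * cores_per_cpu_node
total_cpu_mem_tib = cpu_nodes * mem_per_cpu_node_gib / 1024
total_gpus = gpu_nodes * gpus_per_node

print(total_cpu_cores)    # 235008 cores in the CPU partition
print(total_cpu_mem_tib)  # 918.0 TiB of memory in the CPU partition
print(total_gpus)         # 240 H100 GPUs
```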
</translate>