Béluga
Revision as of 21:30, 28 March 2019
Availability: March 2019
Login node: beluga.computecanada.ca
Globus endpoint: computecanada#beluga-dtn
Data transfer node (rsync, scp, sftp, ...): beluga.computecanada.ca
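Logging in follows the usual SSH pattern; a minimal sketch, where `someuser` is a hypothetical account name to replace with your own:

```shell
# Connect to a Béluga login node (replace "someuser" with your account name).
ssh someuser@beluga.computecanada.ca
```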
Béluga is a general purpose cluster designed for a variety of workloads and situated at the École de technologie supérieure in Montreal. The cluster is named in honour of the St. Lawrence River's Beluga whale population.
Site-specific policies
By policy, Béluga's compute nodes cannot access the internet. If you need an exception to this rule, contact technical support with information about the IP address, port number(s) and protocol(s) needed as well as the duration and a contact person.
Crontab is not offered on Béluga.
Each job on Béluga should have a duration of at least one hour, and a user cannot have more than 1000 jobs (running and queued combined) at any given moment.
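A minimal Slurm batch script respecting the one-hour minimum walltime might look like the following sketch; the allocation name `def-someuser` and the resource values are placeholders, not site-mandated settings:

```shell
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder allocation name
#SBATCH --time=01:00:00          # at least one hour, per Béluga policy
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G

echo "Running on $(hostname)"
```

Submit the script with `sbatch job.sh`; it will wait in the queue until resources are available.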
Storage
HOME: Lustre filesystem, 105 TB of space
SCRATCH: Lustre filesystem, 2.6 PB of space
PROJECT: Lustre filesystem, 8.9 PB of space
For transferring data via Globus, you should use the endpoint computecanada#beluga-dtn, while for tools like rsync and scp you can use a login node.
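As a sketch, copying a local directory to the cluster with rsync over SSH might look like this; the username and destination path are hypothetical and should be adjusted to your own account:

```shell
# Copy the local "results" directory to a hypothetical scratch path on Béluga.
# -a: archive mode, -v: verbose, -P: show progress and allow resuming.
rsync -avP results/ someuser@beluga.computecanada.ca:/scratch/someuser/results/
```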
High-performance interconnect
A Mellanox InfiniBand EDR (100 Gb/s) network connects all the nodes of the cluster. A central 324-port switch links the cluster's island topology with a maximum blocking factor of 5:1; the storage servers are connected without blocking. Within a single island, the non-blocking network permits parallel jobs of up to 640 cores (or more). Jobs requiring greater parallelism span several islands and are subject to the 5:1 blocking factor, but the interconnect remains high-performance even in that case.
Node types and characteristics
| Count | Type | Cores | Available Memory | Hardware details |
|---|---|---|---|---|
| 172 | small 96G | 40 | 92G or 95000M | Two Intel Gold 6148 Skylake processors at 2.4 GHz; 480 GB SSD at 6 Gb/s |
| 516 | base 192G | 40 | 185G or 190000M | same as small |
| 12 | large 768G | 40 | 752G or 771000M | same as small |
| 172 | GPU | 40 | 185G or 190000M | same as small, but with four NVIDIA V100 Volta GPUs (SXM2, 16 GB of HBM2 memory each) and a 1.6 TB NVMe SSD |
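The "Available Memory" column gives the usable per-node maximum to request from the scheduler. As a sketch using standard Slurm directives, a whole base 192G node could be requested like this (the values simply mirror the table above):

```shell
# Request one full "base 192G" node: all 40 cores and the usable memory maximum.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --mem=190000M   # matches the "185G or 190000M" usable maximum
```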