

[https://docs.computecanada.ca/wiki/Getting_Started_with_the_new_National_Systems Getting started with Cedar]
As part of the second phase of the CFI Cyberinfrastructure Challenge 2 program, Cedar will be considerably expanded. Initial discussions with the vendor are in progress, and the expansion is expected to be carried out in the winter of 2018. This should come close to doubling the capacity of Cedar.


=Attached storage=
* Has daily backup.
|}
Scratch storage is a Lustre filesystem based on DDN model ES14K technology. It includes 640 8TB NL-SAS disk drives, and dual redundant metadata controllers with SSD-based storage. 


=High-performance interconnect=
=Node types and characteristics=


Cedar has a total of 58,416 CPU cores for computation, and 584 GPU devices.  


{| class="wikitable sortable"
! Count !! Node type !! Cores !! Available memory !! Hardware detail
|-
| 576 || base "128G" || 32 || 125G or 128000M || two Intel E5-2683 v4 "Broadwell" at 2.1 GHz
|-
| 128 || large "256G" || 32 || 250G or 257000M || (same as base nodes)
|-
| 24 || large "512G" || 32 || 502G or 515000M || (same as base nodes)
|-
| 24 || bigmem1500 "1.5T" || 32 || 1510G or 1547000M || (same as base nodes)
|-
| 4 || bigmem3000 "3T" || 64 || 3022G or 3095000M || four Intel E7-4809 v4 "Broadwell" at 2.1 GHz
|-
| 114 || base GPU || 24 || 125G or 128000M || two Intel E5-2650 v4 "Broadwell" at 2.2 GHz + four NVIDIA P100 Pascal GPUs (12 GB HBM2 memory), two GPUs per PCI root
|-
| 32 || large GPU || 24 || 250G or 257000M || two Intel E5-2650 v4 "Broadwell" at 2.2 GHz + four NVIDIA P100 Pascal GPUs (16 GB HBM2 memory), all four GPUs on the same PCI root
|-
| 640 || Skylake || 48 || 187G or 192000M || two Intel Platinum 8160F "Skylake" at 2.1 GHz
|}


Note that the amount of available memory is less than the "round number" suggested by the hardware configuration. For instance, "base" nodes do have 128 GiB of RAM, but some of it is permanently occupied by the kernel and OS. To avoid wasting time by swapping/paging, the scheduler will never allocate jobs whose memory requirements exceed the amount of "available" memory shown above.
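For example, a whole-node job on a base node should request at most the available memory shown in the table, not the nominal 128 GiB. A minimal job-script sketch (the account name <code>def-someuser</code> and the program name are placeholders):

<pre>
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder account name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --mem=128000M            # 125 GiB: fits a base "128G" node; --mem=128G (131072M) would not
#SBATCH --time=0-01:00
srun ./my_program                # placeholder executable
</pre>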


All nodes have local (on-node) temporary storage. GPU nodes have a single 800GB SSD drive. All other compute nodes have two 480GB SSD drives, for a total raw capacity of 960GB. Best practice to access node-local storage is to use the directory generated by [[Running jobs|Slurm]], $SLURM_TMPDIR.
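A minimal sketch of staging data through the node-local SSD with <code>$SLURM_TMPDIR</code> (the input and output paths and the program name are placeholders):

<pre>
#!/bin/bash
#SBATCH --time=0-03:00
#SBATCH --mem=4G
# Copy the input onto the fast node-local SSD, run from there, and copy the
# results back to a network filesystem before the job ends, since the
# $SLURM_TMPDIR directory is removed when the job finishes.
cp ~/project/input.dat $SLURM_TMPDIR/
cd $SLURM_TMPDIR
./my_program input.dat > output.dat   # placeholder executable
cp output.dat ~/scratch/
</pre>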


== Choosing a node type ==
Most applications will run on either Broadwell or Skylake nodes, and performance differences are expected to be small compared to job waiting times. Therefore we recommend that you do not select a specific node type for your jobs. If it is necessary, use <code>--constraint=skylake</code> or <code>--constraint=broadwell</code>. See [[Running_jobs#Specifying_a_CPU_architecture|Specifying a CPU architecture]].
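A minimal sketch of both ways of passing the constraint (<code>my_job.sh</code> is a placeholder job script):

<pre>
# On the command line:
sbatch --constraint=skylake my_job.sh

# Or inside the job script itself:
#SBATCH --constraint=broadwell
</pre>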
== Performance ==
The theoretical peak double-precision performance of Cedar is 936 teraflops for the CPUs plus 2,744 teraflops for the GPUs, for a total of over 3.6 petaflops. Twenty-two fully connected "islands" of 32 base or large nodes (1,024 cores each) use a fully non-blocking topology (Omni-Path fabric), with each island designed to yield over 30 teraflops of double-precision performance as measured with the High-Performance LINPACK benchmark. There is a 2:1 blocking factor between the 1,024-core islands.


<noinclude>
</noinclude>