Cedar
GP2 (SFU)
The GP2 system evaluation is not yet completed (as of early November 2016). Anticipated specifications, based on SFU's RFP and bids received, include the following. This information is not guaranteed and might not be complete. It is provided for planning purposes.
$SCRATCH (parallel high-performance filesystem) | Approximately 4PB usable capacity for temporary (/scratch) storage. Aggregate performance of approximately 40GB/s. Available to all nodes. Not allocated; inactive data will be purged.
$PROJECT (external persistent storage: National Data Cyberinfrastructure - NDC-SFU) | See below - provided by the NDC.
High-performance interconnect | Low-latency, high-performance fabric connecting all nodes and temporary storage.
Node types and characteristics:
"Base" compute nodes: | Over 500 nodes | 128GB of memory each |
"Large" compute nodes: | Over 100 nodes | 256GB of memory each |
"Bigmem512" | 24 nodes | 512 GB of memory each |
"Bigmem1500" nodes | 24 nodes | 1.5TB of memory each |
All of the above nodes will have 16 cores/socket (32 cores/node), with an anticipated frequency of 2.1Ghz.
"GPU base" nodes: | Over 100 nodes with 4 GPUs each. | 12 cores/socket (24 cores/node) with an anticipated frequency of 2.2Ghz. GPUs on a dual PCI root. |
"GPU large" nodes. | Approximately 30 nodes | same configuration as "GPU base," but a single PCI root. |
"Bigmem3000" nodes | 4 nodes | each with 3TB of memory. These are 4-socket nodes with 8 cores/socket. |
All of the above nodes will have local (on-node) storage.
Compute Canada is not currently able to disclose the specific GPU model or specifications. The RFP used NVIDIA K80 as a baseline specification.
The total GP2 system is anticipated to have over 25,000 cores, 900 nodes, and 500 GPUs.
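As a sanity check, these totals can be compared against the per-type node and core counts listed above. A minimal Python sketch (the figures are taken from the node table; "over" and "approximately" qualifiers are read at their minimum values, and the dictionary keys are labels introduced here for illustration only):

    # (cores_per_node, minimum_node_count) per node type, from the table above
    node_types = {
        "base":       (32, 500),  # 16 cores/socket x 2 sockets, "over 500"
        "large":      (32, 100),  # "over 100"
        "bigmem512":  (32, 24),
        "bigmem1500": (32, 24),
        "gpu_base":   (24, 100),  # 12 cores/socket x 2 sockets, 4 GPUs each, "over 100"
        "gpu_large":  (24, 30),   # "approximately 30", 4 GPUs each
        "bigmem3000": (32, 4),    # 8 cores/socket x 4 sockets
    }

    total_nodes = sum(n for _, n in node_types.values())
    total_cores = sum(c * n for c, n in node_types.values())
    total_gpus = 4 * (node_types["gpu_base"][1] + node_types["gpu_large"][1])

    print(total_nodes, total_cores, total_gpus)  # 782 23984 520

At the minimum counts this gives 782 nodes, 23,984 cores, and 520 GPUs, so the quoted totals of over 25,000 cores and 900 nodes depend on the "over" node counts exceeding their stated minimums.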
The delivery and installation schedule is not yet known, but the procurement team is confident that the system will be in production for the start of the allocation year on April 1, 2017.
Name
For naming details of the new systems, see Migration2016:New_System_Names.