{| class="wikitable"
| Availability: Compute RAC2017 allocations started June 30, 2017
|-
| Login node: <b>cedar.alliancecan.ca</b>
|-
| Globus endpoint: <b>computecanada#cedar-dtn</b>
|-
| System Status Page: <b>http://status.alliancecan.ca/</b>
|}
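As on other Alliance clusters, you reach Cedar through its login node with any SSH client. A minimal sketch, where <code>username</code> is a placeholder for your own Alliance account name:

<pre>
# Connect to the Cedar login node (replace "username" with your account name)
ssh username@cedar.alliancecan.ca
</pre>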
Cedar is a heterogeneous cluster suitable for a variety of workloads; it is located at Simon Fraser University. It is named for the [https://en.wikipedia.org/wiki/Thuja_plicata Western Red Cedar], B.C.’s official tree, which is of great spiritual significance to the region's First Nations people.
<br/>
Cedar is sold and supported by Scalar Decisions, Inc. The node manufacturer is Dell, the high-performance temporary storage (the /scratch filesystem) is from DDN, and the interconnect is from Intel. It is entirely liquid-cooled, using rear-door heat exchangers.

<!--T:25-->
{| class="wikitable sortable" | {| class="wikitable sortable" | ||
|- | |- | ||
| | | <b>Home space</b><br /> 526TB total volume|| | ||
* Location of home directories. | * Location of /home directories. | ||
* Each home directory has a small fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]]. | * Each /home directory has a small fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]]. | ||
* Not allocated via [https:// | * Not allocated via [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/rapid-access-service RAS] or [https://alliancecan.ca/en/services/advanced-research-computing/accessing-resources/resource-allocation-competition RAC]. Larger requests go to the /project space. | ||
* Has daily backup | * Has daily backup | ||
|- | |- | ||
| | | <b>Scratch space</b><br /> 5.4PB total volume<br />Parallel high-performance filesystem || | ||
* For active or temporary ( | * For active or temporary (/scratch) storage. | ||
* Not allocated. | * Not allocated. | ||
* Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user. | * Large fixed [[Storage and file management#Filesystem_quotas_and_policies|quota]] per user. | ||
* Inactive data will be [[Scratch purging policy|purged]]. | * Inactive data will be [[Scratch purging policy|purged]]. | ||
|- | |- | ||
| | |<b>Project space</b><br />23PB total volume<br />External persistent storage | ||
|| | || | ||
* Not designed for parallel I/O workloads. Use | * Not designed for parallel I/O workloads. Use /scratch space instead. | ||
* Large adjustable [[Storage and file management#Filesystem_quotas_and_policies|quota]] per project. | * Large adjustable [[Storage and file management#Filesystem_quotas_and_policies|quota]] per project. | ||
* Has daily backup. | * Has daily backup. | ||
|}
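Since /scratch is a Lustre filesystem (see below), your current usage against the per-user quota can typically be checked with the standard Lustre client tools; a minimal sketch, assuming <code>lfs</code> is available on the login node:

<pre>
# Report your current usage and limits on the /scratch filesystem
lfs quota -u $USER /scratch
</pre>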
<!--T:18-->
The /scratch storage space is a Lustre filesystem based on DDN model ES14K technology. It includes 640 8TB NL-SAS disk drives and dual redundant metadata controllers with SSD-based storage.
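Because /scratch is Lustre, stripe settings can affect large-file I/O; a minimal sketch with the standard Lustre <code>lfs</code> tool, where the paths and stripe count are illustrative assumptions, not site recommendations:

<pre>
# Show the current stripe layout of a file or directory on /scratch
lfs getstripe /scratch/$USER/results

# Have new files in this directory striped across 4 storage targets
lfs setstripe -c 4 /scratch/$USER/large_files
</pre>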
=High-performance interconnect= <!--T:19-->

<!--T:20-->
<i>Intel OmniPath (version 1) interconnect (100Gbit/s bandwidth).</i>
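To confirm from a node that an Omni-Path host fabric interface is present, generic Linux PCI tools suffice; a minimal sketch (the exact device name in the output varies by hardware revision):

<pre>
# List PCI devices and filter for the Intel Omni-Path host fabric interface
lspci | grep -i omni
</pre>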
<!--T:21-->