Niagara
<!--T:4-->
The user experience on Niagara will be similar to that on Graham and Cedar, but slightly different.

For specific instructions for Niagara, see [[Niagara Quickstart]].


<!--T:5-->
Niagara is an allocatable resource in the 2018 [https://www.computecanada.ca/research-portal/accessing-resources/resource-allocation-competitions/ Resource Allocation Competition] (RAC 2018), which came into effect on April 4, 2018.


<!--T:6-->
[https://www.youtube.com/watch?v=RgSvGGzTeoc Niagara installation time-lag video]


[[Niagara Quickstart]]


=Niagara hardware specifications= <!--T:3-->


<!--T:15-->
The Niagara cluster uses the [[Running jobs|Slurm]] scheduler to run jobs. The basic scheduling commands are therefore similar to those for Cedar and Graham, with a few differences:


<!--T:16-->
* Scheduling is by node only. This means jobs always need to use multiples of 40 cores per job.
* Asking for specific amounts of memory is not necessary and is discouraged; all nodes have the same amount of memory (202GB/188GiB minus some operating system overhead).
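As an illustration, a by-node job script for Niagara might look like the following sketch (the job name, modules, and executable are hypothetical placeholders):

```shell
#!/bin/bash
#SBATCH --nodes=2               # whole nodes only: 2 x 40 = 80 cores
#SBATCH --ntasks-per-node=40    # use all 40 cores on each node
#SBATCH --time=1:00:00
#SBATCH --job-name=my_test_job  # hypothetical job name
# Note: no --mem request; all Niagara nodes have the same amount of memory.

module load intel openmpi       # hypothetical modules; check availability
mpirun ./my_program             # hypothetical MPI executable
```

Note that the core count is expressed per node rather than as a total, which matches the by-node scheduling policy above.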


<!--T:17-->
<!--T:19-->
* Module-based software stack.
* Both the standard Compute Canada software stack and cluster-specific software tuned for Niagara are available.
* In contrast with Cedar and Graham, no modules are loaded by default, to prevent accidental version conflicts. To load the software stack that a user would see on Graham and Cedar, one can load the "CCEnv" module (see [[Niagara Quickstart]]).
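For example, switching to the Compute Canada software stack on a Niagara login node might look like the following sketch (the module names after CCEnv, and their versions, are assumptions, not a definitive list):

```shell
# Load the Compute Canada software environment, as seen on Graham/Cedar.
module load CCEnv

# Then load modules as one would on Graham/Cedar (hypothetical examples):
module load StdEnv            # assumed standard-environment module
module load gcc/7.3.0         # hypothetical compiler version
module list                   # verify what is currently loaded
```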


= Access and Migration to Niagara =


== Migration for Existing Users of SciNet Systems ==


* Accounts, $HOME & $PROJECT of active GPC users have been transferred to Niagara (except dot-files in ~).
* Data stored in $SCRATCH has not been, and will not be, transferred automatically.
* Users are to clean up $SCRATCH on the GPC as much as possible (remember it's temporary data!). Then they can transfer what they need using datamover nodes. Let us know if you need help.
* To enable this transfer, there will be a short period during which you can have access to Niagara as well as to the GPC storage resources. This period will end no later than May 9, 2018.
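A transfer of cleaned-up $SCRATCH data might be sketched as follows; the hostname and paths below are hypothetical placeholders, so check with SciNet support for the actual datamover node names and your real scratch paths:

```shell
# From a GPC datamover node, copy a surviving project directory to
# the corresponding location on Niagara (hostname/paths are examples only).
rsync -av --progress \
    /scratch/g/mygroup/myuser/project_data/ \
    niagara.scinet.utoronto.ca:/scratch/g/mygroup/myuser/project_data/
```

Using rsync rather than a plain copy lets an interrupted transfer be resumed without re-sending files that already arrived intact.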


== New Users (without a previous SciNet account) ==


<ul>
<li><p>Those of you new to SciNet, but with 2018 RAC allocations on Niagara, have had your accounts created; they are ready for you to log in.</p></li>
<li><p>New, non-RAC users: we are still working out the procedure to get access. If you can't wait, for now, you can follow the old route of requesting a SciNet Consortium Account on the [https://ccsb.computecanada.ca CCDB site], or write to support@scinet.utoronto.ca.</p></li></ul>


== Getting started ==