Niagara Quickstart

New, non-RAC users: we are still working out the procedure to get access. If you can't wait, for now, you can follow the old route of requesting a SciNet Consortium Account on the [https://www.scinethpc.ca/getting-a-scinet-account/ CCDB site].


= Moving Data using Globus =
Please check the comprehensive documentation [https://docs.computecanada.ca/wiki/Globus here].

The Niagara endpoint is "computecanada#niagara".
 
= Storage and quotas = <!--T:39-->


<!--T:40-->
{| class="wikitable"
! location
!colspan="2"| quota
!align="right"| block size
! expiration time
! backed up
! on login nodes
! on compute nodes
|-
| $HOME
|colspan="2"| 100 GB per user
|align="right"| 1 MB
|
| yes
| yes
| read-only
|-
|rowspan="6"| $SCRATCH
|colspan="2"| 25 TB per user (dynamic per group)
|align="right" rowspan="6"| 16 MB
|rowspan="6"| 2 months
|rowspan="6"| no
|rowspan="6"| yes
|rowspan="6"| yes
|-
|align="right"|up to 4 users per group
|align="right"|50 TB
|-
|align="right"|up to 11 users per group
|align="right"|125 TB
|-
|align="right"|up to 28 users per group
|align="right"|250 TB
|-
|align="right"|up to 60 users per group
|align="right"|400 TB
|-
|align="right"|above 60 users per group
|align="right"|500 TB
|-
| $PROJECT
|colspan="2"| by group allocation
|align="right"| 16 MB
|
| yes
| yes
| yes
|-
| $ARCHIVE
|colspan="2"| by group allocation
|align="right"|
|
| dual-copy
| no
| no
|-
| $BBUFFER
|colspan="2"| 10 TB per user
|align="right"| 1 MB
| very short
| no
| yes
| yes
|}


<!--T:41-->
<ul>
<li>[https://docs.scinet.utoronto.ca/images/9/9a/Inode_vs._Space_quota_-_v2x.pdf Inode vs. Space quota (PROJECT and SCRATCH)]</li>
<li>[https://docs.scinet.utoronto.ca/images/0/0e/Scratch-quota.pdf dynamic quota per group (SCRATCH)]</li>
<li>Compute nodes do not have local storage.</li>
<li>Archive space is on [https://docs.scinet.utoronto.ca/index.php/HPSS HPSS].</li>
<li>Backup means a recent snapshot, not an archive of all data that ever was.</li>
<li><p><code>$BBUFFER</code> stands for the [https://docs.scinet.utoronto.ca/index.php/Burst_Buffer Burst Buffer], a faster parallel storage tier for temporary data.</p></li></ul>
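To gauge how close a directory is to these limits, a generic sketch using plain POSIX tools (not a Niagara-specific utility; the directory to inspect is up to you):

```shell
# Space used under $SCRATCH (compare against the block quota above):
du -sh "$SCRATCH"

# Rough file count; quotas also limit the number of inodes:
find "$SCRATCH" -type f | wc -l
```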
 
= Moving data = <!--T:42-->
 
<!--T:43-->
Use the scp or rsync commands to move data to either niagara.scinet.utoronto.ca or niagara.computecanada.ca. The transfer method depends on the size of the data you need to move.
 
<!--T:44-->
*To move less than 10GB, use the login nodes, which are the only Niagara nodes visible from the outside. Transfers done this way will time out for amounts larger than about 10GB.
 
<!--T:45-->
*To move more than 10GB, use the datamover nodes, which are not reachable from the outside. From a Niagara login node, first ssh into <code>nia-dm1</code> or <code>nia-dm2</code>, then initiate the transfer from the datamover. The other side of the transfer (e.g. your machine) must be reachable from the outside.
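The steps above can be sketched as follows (the username, all paths, and the destination host <code>myhost.example.org</code> are placeholders):

```shell
# 1. From your machine, log in to a Niagara login node:
ssh myuser@niagara.scinet.utoronto.ca

# 2. From the login node, hop to one of the datamovers:
ssh nia-dm1

# 3. Initiate the transfer from the datamover; the other end
#    (myhost.example.org here) must be reachable from the internet:
rsync -av /scratch/m/mygroup/myuser/bigdata/ myuser@myhost.example.org:/data/
```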
 
<!--T:46-->
If you often move data, consider using [[Globus]], a web-based tool for data transfer.
 
<!--T:47-->
You may also want to move data to [https://docs.scinet.utoronto.ca/index.php/HPSS HPSS/Archive/Nearline], SciNet's tape-based archive facility. Storage space on HPSS is allocated through the annual [https://www.computecanada.ca/research-portal/accessing-resources/resource-allocation-competitions Compute Canada RAC allocation].
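Archiving to HPSS goes through the scheduler; as a rough sketch only (the partition name, group and paths below are placeholders, and the exact queue names and tools should be checked against the HPSS documentation), a job script might look like:

```shell
#!/bin/bash
# Hypothetical HPSS archiving job; the partition, paths and htar
# invocation must be verified against the HPSS documentation.
#SBATCH --time=1:00:00
#SBATCH --job-name=to-hpss
#SBATCH -p archivelong
htar -cpf /archive/m/mygroup/myuser/results.tar /scratch/m/mygroup/myuser/results
```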


= Loading software modules = <!--T:48-->