Niagara Quickstart


System information

System Information on the Niagara supercomputer can be found on this page.

Logging in

As with all SciNet and CC (Compute Canada) compute systems, access to Niagara is via ssh (secure shell) only.

To access SciNet systems, first open a terminal window (e.g. MobaXterm on Windows).

Then ssh into the Niagara login nodes with your CC credentials:

$ ssh -Y MYCCUSERNAME@niagara.scinet.utoronto.ca

or

$ ssh -Y MYCCUSERNAME@niagara.computecanada.ca
  • The Niagara login nodes are where you develop, edit, compile, prepare and submit jobs.
  • These login nodes are not part of the Niagara compute cluster, but have the same architecture, operating system, and software stack.
  • The optional -Y is needed to open windows from the Niagara command-line onto your local X server.
  • To run on Niagara's compute nodes, you must submit a batch job.
  • For more information on user migration to Niagara, see the Migration to Niagara section below.

Migration to Niagara

Migration for Existing Users of the GPC

Niagara is replacing the General Purpose Cluster (GPC) and the Tightly Coupled Cluster (TCS) at SciNet. The TCS was decommissioned last fall, and GPC will be decommissioned very soon.

In fact, GPC-compute will be decommissioned on April 21, 2018, while GPC-storage will be decommissioned on May 9, 2018.

Active GPC users got access to the new system, Niagara, on April 9, 2018.

Users' home and project folders were last synced to Niagara on April 5, 2018, with the exception of files in their home directories whose names start with a period (those were never synced).

Data generated after April 5 is the users' responsibility to copy over, after cleaning up whatever they no longer need, of course.

Data stored in scratch has also not been transferred automatically. Users are asked to clean up their scratch space on the GPC as much as possible (remember, it is temporary data!). They can then transfer what they need using the datamover nodes. Contact us if you need help.

To enable this transfer, there will be a short period during which you can have access to Niagara as well as to the GPC storage resources. This period will end no later than May 9, 2018.

To copy substantial amounts of data (i.e., more than 10GB), use the datamovers of both the GPC (called gpc-logindm01 and gpc-logindm02) and of Niagara (called nia-dm1 and nia-dm2). For instance, to copy a directory abc from your GPC scratch to your Niagara scratch directory, you can do the following:

$ ssh CCUSERNAME@niagara.computecanada.ca
$ ssh nia-dm1
$ scp -r SCINETUSERNAME@gpc-logindm01:\$SCRATCH/abc $SCRATCH/abc

For many of you, CCUSERNAME and SCINETUSERNAME will be the same. If not, make sure you use the backslash (\) before the first $SCRATCH; it causes the value of $SCRATCH on the remote node (i.e., here, gpc-logindm01) to be used. Note that gpc-logindm01 will ask for your SciNet password.

You can also go the other way:

$ ssh SCINETUSERNAME@login.scinet.utoronto.ca
$ ssh gpc-logindm01
$ scp -r $SCRATCH/abc CCUSERNAME@nia-dm1:\$SCRATCH/abc

If you are using rsync, we advise you to refrain from using the -a flag, and if you are using scp, refrain from using the -p flag.

For Non-GPC Users

Those of you who are new to SciNet, but have a 2018 RAC allocation on Niagara, will have your accounts created and ready for you to log in.

New, non-RAC users: we are still working out the procedure to get access. If you can't wait, for now, you can follow the old route of requesting a SciNet Consortium Account on the CCDB site.

Storage Systems and Locations

Home and scratch

You have a home and scratch directory on the system, whose locations will be given by

$HOME=/home/g/groupname/myccusername

$SCRATCH=/scratch/g/groupname/myccusername

nia-login07:~$ pwd
/home/s/scinet/rzon

nia-login07:~$ cd $SCRATCH

nia-login07:rzon$ pwd
/scratch/s/scinet/rzon

Project location

Users from groups with a RAC allocation will also have a project directory.

$PROJECT=/project/g/groupname/myccusername

IMPORTANT: Future-proof your scripts

Use the environment variables (HOME, SCRATCH, PROJECT) instead of the actual paths! The paths may change in the future.

Storage Limits on Niagara

location    quota                block size  expiration time  backed up  on login  on compute
$HOME       100 GB               1 MB                         yes        yes       read-only
$SCRATCH    25 TB                16 MB       2 months         no         yes       yes
$PROJECT    by group allocation  16 MB                        yes        yes       yes
$ARCHIVE    by group allocation                               dual-copy  no        no
$BBUFFER    ?                    1 MB        very short       no         ?         ?
  • Compute nodes do not have local storage.
  • Archive space is on HPSS.
  • Backup means a recent snapshot, not an archive of all data that ever was.
  • $BBUFFER stands for the Burst Buffer, a functionality that is still being set up. This will be a faster parallel storage tier for temporary data.

Moving data

Move amounts less than 10GB through the login nodes.

  • Only the Niagara login nodes are visible from outside SciNet.
  • Use scp or rsync to niagara.scinet.utoronto.ca or niagara.computecanada.ca (there is no difference).
  • This will time out for amounts larger than about 10GB (a minimal sketch follows this list).
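
For example, to push a directory from your own computer to your Niagara scratch space, a minimal sketch (mydata is a hypothetical directory name; substitute your own group and user names in the destination path):

$ scp -r mydata MYCCUSERNAME@niagara.scinet.utoronto.ca:/scratch/g/groupname/myccusername/

rsync works in much the same way; whichever tool you use, keep the total amount under roughly 10GB so the transfer does not time out.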

Move amounts larger than 10GB through the datamover node.

  • From a Niagara login node, ssh to nia-datamover1 or nia-datamover2 (see the sketch after this list).
  • Transfers must originate from the datamovers.
  • The other side (e.g. your machine) must be reachable from the outside.
  • If you do this often, consider using Globus, a web-based tool for data transfer.
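
A minimal sketch of such a transfer (the remote host name and destination path are hypothetical; only the datamover name comes from this guide):

nia-login07:~$ ssh nia-datamover1
nia-datamover1:~$ scp -r $SCRATCH/results MYUSERNAME@myhost.example.com:/path/to/destination/

Remember that the transfer must be started on the datamover and that the other end must be reachable from the internet.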

Moving data to HPSS/Archive/Nearline using the scheduler (a rough sketch follows the points below).

  • HPSS is a tape-based storage solution, and is SciNet's nearline a.k.a. archive facility.
  • Storage space on HPSS is allocated through the annual Compute Canada RAC allocation.
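
As a rough illustration only, archiving could be driven from a job script along the following lines; the partition name and the htar invocation are assumptions here, not the documented SciNet procedure, so check the HPSS documentation before relying on anything like this:

#!/bin/bash
#SBATCH --job-name=archive_results
#SBATCH --time=1:00:00
#SBATCH -p archivelong            # assumed name of the HPSS/archive partition; check the HPSS docs

# htar bundles a scratch directory into a tar file stored in the HPSS ($ARCHIVE) space.
htar -cvf $ARCHIVE/results.tar $SCRATCH/results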

Software and Libraries

Modules

Once you are on one of the login nodes, what software is already installed?

  • Other than essentials, all installed software is made available using module commands.
  • These set environment variables (PATH, etc.)
  • Allows multiple, conflicting versions of a given package to be available.
  • module spider shows the available software.
nia-login07:~$ module spider
---------------------------------------------------
The following is a list of the modules currently av
---------------------------------------------------
  CCEnv: CCEnv

  NiaEnv: NiaEnv/2018a

  anaconda2: anaconda2/5.1.0

  anaconda3: anaconda3/5.1.0

  autotools: autotools/2017
    autoconf, automake, and libtool 

  boost: boost/1.66.0

  cfitsio: cfitsio/3.430

  cmake: cmake/3.10.2 cmake/3.10.3

  ...
  • module load <module-name>

    use particular software

  • module purge

    remove currently loaded modules

  • module spider

    (or module spider <module-name>)

    list available software packages

  • module avail

    list loadable software packages

  • module list

    list loaded modules

On Niagara, there are really two software stacks:

  1. A Niagara software stack tuned and compiled for this machine. This stack is available by default, but if it is not, it can be reloaded with

    module load NiaEnv
    
  2. The same software stack available on Compute Canada's General Purpose clusters Graham and Cedar, compiled (for now) for a previous generation of CPUs:

    module load CCEnv
    

    If you want the same default modules loaded as on Cedar and Graham, then afterwards also module load StdEnv.

Note: the *Env modules are sticky; they can only be removed with --force.
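
For example, to switch entirely from the Niagara stack to the Compute Canada stack and its defaults, a minimal sketch (the --force purge is needed precisely because the *Env modules are sticky):

$ module --force purge
$ module load CCEnv
$ module load StdEnv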

Tips for loading software

  • We advise against loading modules in your .bashrc.

    This could lead to very confusing behaviour under certain circumstances.

  • Instead, load modules by hand when needed, or by sourcing a separate script (see the sketch after this list).

  • Load run-specific modules inside your job submission script.

  • Short names give default versions; e.g. intel → intel/2018.2.

    It is usually better to be explicit about the versions, for future reproducibility.

  • Handy abbreviations:

        ml → module list
        ml NAME → module load NAME  # if NAME is an existing module
        ml X → module X
  • Modules sometimes require other modules to be loaded first.

Solve these dependencies by using module spider.
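
A minimal sketch of the separate-script approach (the file name is made up; the modules are ones used elsewhere in this guide): keep your module choices in a small script and source it whenever you need them.

$ cat ~/my_modules.sh
module load NiaEnv/2018a
module load intel/2018.2 gsl/2.4
$ source ~/my_modules.sh
$ module list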

Module spider

Oddly named, the module subcommand spider is the search-and-advice facility for modules.

nia-login07:~$ module load openmpi
Lmod has detected the error:  These module(s) exist but cannot be loaded as requested: "openmpi"
   Try: "module spider openmpi" to see how to load the module(s).
nia-login07:~$ module spider openmpi
------------------------------------------------------------------------------------------------------
  openmpi:
------------------------------------------------------------------------------------------------------
     Versions:
        openmpi/2.1.3
        openmpi/3.0.1
        openmpi/3.1.0rc3

------------------------------------------------------------------------------------------------------
  For detailed information about a specific "openmpi" module (including how to load the modules) use
  the module's full name.
  For example:

     $ module spider openmpi/3.1.0rc3
------------------------------------------------------------------------------------------------------
nia-login07:~$ module spider openmpi/3.1.0rc3
------------------------------------------------------------------------------------------------------
  openmpi: openmpi/3.1.0rc3
------------------------------------------------------------------------------------------------------
    You will need to load all module(s) on any one of the lines below before the "openmpi/3.1.0rc3"
    module is available to load.

      NiaEnv/2018a  gcc/7.3.0
      NiaEnv/2018a  intel/2018.2
nia-login07:~$ module load NiaEnv/2018a  intel/2018.2   # note: NiaEnv is usually already loaded
nia-login07:~$ module load openmpi/3.1.0rc3
nia-login07:~$ module list
Currently Loaded Modules:
  1) NiaEnv/2018a (S)   2) intel/2018.2   3) openmpi/3.1.0rc3

  Where:
   S:  Module is Sticky, requires --force to unload or purge

Can I Run Commercial Software?

  • Possibly, but you have to bring your own license for it.
  • SciNet and Compute Canada have an extremely large and broad user base of thousands of users, so we cannot provide licenses for everyone's favorite software.
  • Thus, the only commercial software installed on Niagara is software that can benefit everyone: Compilers, math libraries and debuggers.
  • That means no MATLAB, Gaussian, IDL, etc.
  • Open-source alternatives like Octave, Python, and R are available.
  • We are happy to help you to install commercial software for which you have a license.
  • In some cases, if you have a license, you can use software in the Compute Canada stack.

Compiling on Niagara: Example

nia-login07:~$ module list
Currently Loaded Modules:
  1) NiaEnv/2018a (S)
  Where:
   S:  Module is Sticky, requires --force to unload or purge

nia-login07:~$ module load intel/2018.2 gsl/2.4

nia-login07:~$ ls
main.c module.c

nia-login07:~$ icc -c -O3 -xHost -o main.o main.c
nia-login07:~$ icc -c -O3 -xHost -o module.o module.c
nia-login07:~$ icc  -o main module.o main.o -lgsl -mkl

nia-login07:~$ ./main

Testing

You really should test your code before you submit it to the cluster, both to check that it is correct and to find out what resources you need.

  • Small test jobs can be run on the login nodes.

    Rule of thumb: a couple of minutes, using at most about 1-2GB of memory and a couple of cores.

  • You can run the ddt debugger on the login nodes after module load ddt (see the sketch at the end of this list).

  • For short tests that do not fit on a login node, or for which you need a dedicated node, request an
    interactive debug job with the salloc command

    nia-login07:~$ salloc -pdebug --nodes N --time=1:00:00
    

    where N is the number of nodes. An interactive debug session can last at most one hour, use at most 4 nodes, and each user can have only one such session at a time.
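
For instance, to run the main program compiled in the example above under the ddt debugger, a minimal sketch (ddt opens a graphical interface, so make sure you logged in with ssh -Y):

$ module load ddt
$ ddt ./main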

Submitting jobs

  • Niagara uses SLURM as its job scheduler.

  • You submit jobs from a login node by passing a script to the sbatch command:

    nia-login07:~$ sbatch jobscript.sh
    
  • This puts the job in the queue. It will run on the compute nodes in due course.

  • Jobs will run under their group's RRG allocation, or, if the group has none, under a RAS allocation (previously called a "default" allocation).

Keep in mind:

  • Scheduling is by node, so in multiples of 40 cores.

  • Maximum walltime is 24 hours.

  • Jobs must write to your scratch or project directory (home is read-only on compute nodes).

  • Compute nodes have no internet access.

    Download data you need beforehand on a login node.

Scheduling by Node

  • All job resource requests on Niagara are scheduled as a multiple of nodes.

  • The nodes that your jobs run on are exclusively yours.
    • No other users are running anything on them.
    • You can ssh into them to see how things are going.
  • Whatever you request from the scheduler, it will always be translated into a multiple of nodes.

  • Memory requests to the scheduler are of no use.

    Your job gets N x 202GB of RAM if N is the number of nodes.

  • You should use all 40 cores on each of the nodes that your job uses.

    You will be contacted if you don't, and we will help you get more science done.

Hyperthreading: Logical CPUs vs. cores

  • Hyperthreading, a technology that leverages more of the physical hardware by pretending there are twice as many logical cores as real ones, is enabled on Niagara.
  • So the OS and scheduler see 80 logical cores.
  • 80 logical cores vs. 40 real cores typically gives about a 5-10% speedup (YMMV).

Because Niagara is scheduled by node, hyperthreading is actually fairly easy to use:

  • Ask for a certain number of nodes N for your jobs.

  • You know that you get 40xN cores, so you will use (at least) a total of 40xN MPI processes or threads.

    (mpirun, srun, and the OS will automatically spread these over the real cores)

  • But you should also test whether running 80xN MPI processes or threads gives you any speedup (see the sketch after this list).

  • Regardless, your usage will be counted as 40xNx(walltime in years).
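
For instance, within a one-node interactive debug session (see the Testing section above), you could time the same run with both settings; a minimal sketch using the mpi_example program from the MPI example below:

$ mpirun -np 40 ./mpi_example
$ mpirun -np 80 ./mpi_example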

Example submission script (OpenMP)

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=40
#SBATCH --time=1:00:00
#SBATCH --job-name openmp_job
#SBATCH --output=openmp_output_%j.txt

cd $SLURM_SUBMIT_DIR

module load intel/2018.2

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

./openmp_example
# or "srun ./openmp_example".

Submit this script (if it is called openmp_job.sh) with the command:

nia-login07:~$ sbatch openmp_job.sh
  • First line indicates that this is a bash script.
  • Lines starting with #SBATCH go to SLURM.
  • sbatch reads these lines as a job request (which it gives the name openmp_job).
  • In this case, SLURM looks for one node with 40 cores to be run inside one task, for 1 hour.
  • Once it has found such a node, it runs the script:
    • Changes to the submission directory;
    • Loads modules;
    • Sets an environment variable;
    • Runs the openmp_example application.

Example submission script (MPI)

#!/bin/bash 
#SBATCH --nodes=8
#SBATCH --ntasks=320
#SBATCH --time=1:00:00
#SBATCH --job-name mpi_job
#SBATCH --output=mpi_output_%j.txt

cd $SLURM_SUBMIT_DIR

module load intel/2018.2
module load openmpi/3.1.0rc3

mpirun ./mpi_example
# or "srun ./mpi_example"

Submit this script (if it is called mpi_job.sh) with the command:

nia-login07:~$ sbatch mpi_job.sh
  • First line indicates that this is a bash script.

  • Lines starting with #SBATCH go to SLURM.

  • sbatch reads these lines as a job request (which it gives the name mpi_job).

  • In this case, SLURM looks for 8 nodes with 40 cores on which to run 320 tasks, for 1 hour.

  • Once it has found these nodes, it runs the script:

    • Changes to the submission directory;
    • Loads modules;
    • Runs the mpi_example application.

Monitoring queued jobs

Once the job is incorporated into the queue, there are some commands you can use to monitor its progress.

  • squeue to show the job queue (squeue -u $USER for just your jobs);

  • squeue -j JOBID to get information on a specific job

    (alternatively, scontrol show job JOBID, which is more verbose).

  • squeue -j JOBID -o "%.9i %.9P %.8j %.8u %.2t %.10M %.6D %S" to get an estimate for when a job will run.

  • scancel -i JOBID to cancel the job.

  • sinfo -pcompute to look at available nodes.

  • More utilities like those that were available on the GPC are under development.

Data Management and I/O Tips

  • $HOME, $SCRATCH, and $PROJECT all use the parallel file system called GPFS.
  • Your files can be seen on all Niagara login and compute nodes.
  • GPFS is a high-performance file system which provides rapid reads and writes to large data sets in parallel from many nodes.
  • But accessing data sets that consist of many small files leads to poor performance.
  • Avoid reading and writing lots of small amounts of data to disk.
  • Many small files on the system waste space and are slower to access, read, and write.
  • Write data out in binary: it is faster and takes less space.
  • The burst buffer (to come) is better for I/O-heavy jobs and for speeding up checkpoints.