Getting started

From Alliance Doc

Revision as of 18:50, 6 February 2017

You have just received your Compute Canada account. Welcome! Now what do you do? This page is intended to help you find your way through the technical documentation on Compute Canada services and systems.

If you don't already have a Compute Canada account, see Apply for an account.

What do you want to do?

  • If you are an experienced HPC user and are ready to log onto a cluster, skip to the section What resources are available? below.
  • If you would like some training, you can
    • read about how to connect to our HPC systems with SSH;
    • read an introduction to Linux systems;
    • read about how to transfer files to and from Compute Canada systems;
    • look for a training event on this schedule.
  • If you want to know which software and hardware are available for a specific discipline, a series of discipline guides is in preparation. At this time, you can consult the guide on
  • If you have hundreds of gigabytes of data to move across the network, read about the Globus file transfer services.
  • If you want to experiment with software that doesn’t run well on our traditional HPC systems, please read about Compute Canada Cloud resources.
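As a concrete illustration of the connection and file-transfer steps listed above, the commands below show typical SSH and scp usage from a Linux or macOS terminal. The hostname cluster.example.ca and the username username are placeholders rather than real Compute Canada addresses, and the commands only work once you have an account on an actual system:

```shell
# Log in to a cluster's login node over SSH
# (replace the placeholder hostname and username with real ones):
ssh username@cluster.example.ca

# Copy a local file to your home directory on the cluster:
scp results.dat username@cluster.example.ca:

# Copy a whole directory back from the cluster into the current
# local directory; -r recurses into subdirectories:
scp -r username@cluster.example.ca:project_output .
```

For transfers of many gigabytes or more, the Globus service mentioned above is usually a better choice than scp.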

For any other questions, you might try the Search box in the upper right corner of this page, the main page for Compute Canada Documentation, or contact us by email.

What resources are available?

Compute Canada is currently installing several million dollars’ worth of new computers while simultaneously retiring many old computers. During the transition period (2016-2018), a changing mix of old and new computers will be available to you. You can read about the progress of the migration from old to new systems here.

New resources (deployed in 2016 or after)

Compute Canada began to renew its infrastructure in 2016. The first phase of the new deployment is composed of four new clusters, called Arbutus, Cedar, Graham, and Niagara.

Arbutus (formerly known as GP1) is an extension of the West cloud. Arbutus went into service in September 2016.

Cedar (GP2) and Graham (GP3) are general purpose clusters composed of a variety of nodes including large memory nodes and nodes with accelerators. They are expected to enter service in spring 2017.

Niagara (LP) will be a large parallel cluster with nodes interconnected by a fast network.

Legacy resources (deployed before 2016)

Computing and storage resources which were installed between 2004 and 2015 and are scheduled to be decommissioned in the next few years are referred to as legacy resources. The legacy resources are administered by the regional organizations: ACENET, the Centre for Advanced Computing, Calcul Québec, SciNet, SHARCNET, and WestGrid. To use a legacy resource you must have an account with one of these organizations; you can apply for one through CCDB. Resources deployed in 2016 or later will not require this step, nor will the two clouds.

Most legacy clusters are classified as either capacity clusters or capability clusters. Capacity clusters contain nodes connected to each other by a relatively slow Ethernet network, while the capability clusters have a fast network, usually InfiniBand. Large parallel jobs will run better on capability clusters than capacity clusters, while smaller jobs will run almost anywhere.

There are some specialty clusters among the legacy resources. Applications that need more than 512 GB of memory per node call for large shared-memory systems. Compute Canada has four such systems:

  • Hungabee hosted by WestGrid
  • M9000 hosted by the Centre for Advanced Computing
  • Guillimin-ScaleMP hosted by Calcul Québec
  • Fortier hosted by SHARCNET

Compute Canada also has clusters equipped with accelerators such as GPUs and Intel Xeon Phis. If your application can take advantage of such accelerators, you will find them on the following legacy systems:

  • Helios, Hades and Guillimin, hosted by Calcul Québec
  • Parallel, hosted by WestGrid
  • Angel and Monk, hosted by SHARCNET
  • Accelerator Research Cluster, hosted by SciNet

All of these have NVIDIA GPUs. Guillimin also has Intel Xeon Phis.

Finally, Compute Canada also hosts two clouds called East Cloud and West Cloud, as well as storage resources ranging from fast parallel filesystems to tape backup.

What resources should I use?

This question is hard to answer because of the range of needs Compute Canada serves and the enormous variety of resources available, especially during the 2016-2018 renewal period. If the descriptions above are insufficient, contact Compute Canada’s technical support or your regional support.

In order to identify the best resource to use, we may ask specific questions, such as:

  • What software do you want to use?
    • Does the software require a commercial license?
    • Can the software be used non-interactively? That is, can it be controlled from a file prepared prior to its execution rather than through the graphical interface?
    • Can it run on the Linux operating system?
  • How much memory, time, computing power, accelerators, storage, network bandwidth, and so forth are required by a typical job? Rough estimates are fine.
  • How frequently will you need to run this type of job?
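To make the non-interactive question above concrete, here is a minimal shell sketch of what "controlled from a file prepared prior to its execution" means. The file names and the use of awk are illustrative only, not part of any Compute Canada system:

```shell
# Prepare all of the program's input ahead of time, so nothing
# needs to be typed while the job runs:
printf '12 30\n5 7\n' > input.txt

# Run the computation non-interactively: awk reads input.txt and
# writes one sum per line to output.txt, needing no terminal or GUI.
awk '{ print $1 + $2 }' input.txt > output.txt

# Inspect the results after the run finishes.
cat output.txt
# prints:
# 42
# 12
```

A cluster’s job scheduler runs commands like these from a batch script with no terminal attached, which is why software that can be driven entirely from files is much easier to queue than software that requires a graphical interface.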

You may or may not know the answers to these questions. If you do not, our technical support team is there to help you find them, and will then be able to direct you to the most appropriate resources for your needs.