Revision as of 03:40, 10 December 2021


Chapel

Chapel is a general-purpose, compiled, high-level parallel programming language with built-in abstractions for shared- and distributed-memory parallelism. There are two styles of parallel programming in Chapel: (1) task parallelism, where parallelism is driven by programmer-specified tasks, and (2) data parallelism, where parallelism is driven by applying the same computation on subsets of data elements, which may be in the shared memory of a single node, or distributed over multiple nodes.
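The two styles can be sketched in a few lines of Chapel. This is a minimal illustrative example (the task count and array size are arbitrary), not taken from the page itself:

```chapel
// Task parallelism: coforall creates one task per iteration,
// and each task runs the loop body independently.
coforall tid in 1..4 do
  writeln("hello from task ", tid);

// Data parallelism: forall distributes the iterations over the
// available cores; a reduction then combines the partial results.
var A: [1..1000] real;
forall i in A.domain do
  A[i] = i:real;
const total = + reduce A;
writeln("sum = ", total);
```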

These high-level abstractions make Chapel well suited for novice HPC users learning parallel programming. Chapel strives to combine the ease of use of Python with the performance of traditional compiled languages such as C and Fortran. Parallel blocks that typically require tens of lines of MPI code can be expressed in only a few lines of Chapel. Chapel is open source and runs on any Unix-like operating system, on hardware ranging from laptops to large HPC systems.

Chapel has a relatively small user base, so many libraries that exist for C, C++ and Fortran have not yet been implemented in Chapel. Hopefully this will change in the coming years, if Chapel adoption continues to gain momentum in the HPC community.

For more information, please watch our three-part Chapel webinar.

Single-locale Chapel

Single-locale (single node, shared-memory only) Chapel on Compute Canada's general-purpose clusters is provided by the module chapel-multicore/. You can use salloc to test Chapel codes in serial:

[name@server ~]$ module load chapel-multicore
[name@server ~]$ salloc --time=0:30:0 --ntasks=1 --mem-per-cpu=3600 --account=def-someprof
[name@server ~]$ chpl test.chpl -o test
[name@server ~]$ ./test

or on multiple cores on the same node:

[name@server ~]$ module load chapel-multicore
[name@server ~]$ salloc --time=0:30:0 --ntasks=1 --cpus-per-task=3 --mem-per-cpu=3600 --account=def-someprof
[name@server ~]$ chpl test.chpl -o test
[name@server ~]$ ./test

For production jobs, please write a job submission script and submit it with sbatch.
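A production job script might look like the following sketch; the account name, resource requests, and file names are placeholders to adapt to your own code:

```shell
#!/bin/bash
#SBATCH --time=0:30:0            # adjust the walltime for your run
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=3        # number of cores on the node
#SBATCH --mem-per-cpu=3600
#SBATCH --account=def-someprof   # replace with your own account
module load chapel-multicore
chpl test.chpl -o test           # compile the code
./test                           # run it on the allocated cores
```

Submit it with <code>sbatch jobScript.sh</code>.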

Multi-locale Chapel

Installing multi-locale (distributed-memory) Chapel requires fine-tuning its launcher for the specific physical interconnect on a cluster. Since different Compute Canada clusters employ different physical interconnects, we do not have a single multi-locale Chapel build for all machines. Instead, multi-locale Chapel has been compiled in a separate directory as an experimental setup on each system. You can test this setup with the following Chapel code, which prints basic information about the nodes available inside your job:


File : probeLocales.chpl

use Memory.Diagnostics;
for loc in Locales do
  on loc {
    writeln("locale #", here.id, "...");
    writeln("  ...is named: ", here.name);
    writeln("  ...has ", here.numPUs(), " processor cores");
    writeln("  ...has ", here.physicalMemory(unit=MemUnits.GB, retType=real), " GB of memory");
    writeln("  ...has ", here.maxTaskPar, " maximum parallelism");
  }


Load Chapel and start an interactive job requesting four nodes and three cores on each node:

[name@server ~]$ source /home/razoumov/startMultiLocale.sh
[name@server ~]$ salloc --time=0:30:0 --nodes=4 --cpus-per-task=3 --mem-per-cpu=3500 --account=def-someprof


Once the interactive job starts, you can compile and run your code from the prompt on the first allocated compute node:

[name@server ~]$ chpl probeLocales.chpl -o probeLocales
[name@server ~]$ ./probeLocales -nl 4

For production jobs, please write a Slurm submission script and submit your job with sbatch instead.
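Such a script would mirror the interactive session above; this is a sketch with placeholder account and file names:

```shell
#!/bin/bash
#SBATCH --time=0:30:0            # adjust the walltime for your run
#SBATCH --nodes=4                # number of locales (nodes)
#SBATCH --cpus-per-task=3        # cores per node
#SBATCH --mem-per-cpu=3500
#SBATCH --account=def-someprof   # replace with your own account
source /home/razoumov/startMultiLocale.sh
chpl probeLocales.chpl -o probeLocales
./probeLocales -nl 4             # launch on all 4 locales
```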