Chapel
Chapel is a general-purpose, compiled, high-level parallel programming language with built-in abstractions for shared- and distributed-memory parallelism. There are two styles of parallel programming in Chapel: (1) task parallelism, in which parallelism is driven by programmer-specified tasks, and (2) data parallelism, in which parallelism is driven by computations over collections of data elements or their indices, which may reside in shared memory on a single node or be distributed across multiple nodes.
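For example, a minimal sketch (illustrative only, not taken from this page's materials) that uses both styles might look like the following, where cobegin launches two explicit tasks and forall divides loop iterations among the available cores:

// a short sketch of Chapel's two parallel styles (illustrative only)
config const n = 1000000;         // problem size; override at run time with --n=<value>
var a, b: [1..n] real;
cobegin {                         // task parallelism: the two statements run as concurrent tasks
  a = 1.0;
  b = 2.0;
}
forall i in 1..n do               // data parallelism: iterations are split among the cores
  a[i] += b[i];
writeln("total = ", + reduce a);  // parallel reduction; prints 3000000.0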
These high-level abstractions make Chapel well suited for a novice HPC user learning parallel programming. Chapel is highly intuitive, striving to combine the ease of use of Python with the performance of traditional compiled languages such as C and Fortran. Parallel constructs that typically require tens of lines of MPI code can often be expressed in just a few lines of Chapel. Chapel is open source and can run on any Unix-like operating system, with hardware support from laptops to large HPC systems.
Chapel has a relatively small user base, so many libraries that exist for C, C++, and Fortran have not yet been implemented in Chapel. Hopefully, this will change in the coming years if Chapel adoption continues to gain momentum in the HPC community.
Single-locale Chapel
On Cedar and Graham, single-locale (shared-memory) Chapel is implemented as a module. For testing, you can use salloc to run Chapel codes in serial:
module load gcc chapel-single/1.15.0
salloc --time=0:30:0 --ntasks=1 --mem-per-cpu=3500 --account=def-someprof
chpl test.chpl -o test
./test
or on multiple cores:
module load gcc chapel-single/1.15.0
salloc --time=0:30:0 --ntasks=1 --cpus-per-task=3 --mem-per-cpu=3500 --account=def-someprof
chpl test.chpl -o test
./test
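The file test.chpl used in these commands is not listed on this page; any small Chapel program will do. A minimal sketch of a shared-memory test code might look like this:

// test.chpl -- hypothetical contents; the actual test code is not shown on this page
config const n = 1000000;                    // set at run time with ./test --n=<value>
var total = 0.0;
forall i in 1..n with (+ reduce total) do    // parallel loop with a sum reduction
  total += 1.0 / (i:real)**2;
writeln("approximation of pi^2/6 = ", total);

In the serial job above this runs on a single core; with --cpus-per-task=3 the forall loop can use the three allocated cores.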
For production jobs, write a job submission script and submit it with sbatch.
Multi-locale Chapel
Since Compute Canada clusters Cedar and Graham employ two different physical interconnects, we do not have a single multi-locale Chapel module for both machines. Instead, multi-locale Chapel has been compiled in a separate directory on each system:
module unload chapel-single
. /home/razoumov/startMultiLocale.sh
salloc --time=0:30:0 --nodes=4 --cpus-per-task=3 --mem-per-cpu=3500 --account=def-someprof
Once the interactive job starts, you can compile and run your code from the prompt on the first allocated compute node:
chpl probeLocales.chpl -o probeLocales
./probeLocales -nl 4
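The contents of probeLocales.chpl are not reproduced on this page; a minimal sketch of a program that reports basic information about each locale (node) might look like this:

// probeLocales.chpl -- hypothetical contents; the actual file is not shown on this page
for loc in Locales do             // Locales is the built-in array of all locales in the job
  on loc {                        // execute this block on that locale
    writeln("locale #", here.id, " of ", numLocales);
    writeln("  ...is named ", here.name);
    writeln("  ...has ", here.numPUs(), " processing units");
    writeln("  ...can run ", here.maxTaskPar, " tasks in parallel");
  }

Run with -nl 4, it should print one such block from each of the four nodes in the allocation.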
For production jobs, please write a Slurm submission script and submit your job with sbatch instead.
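To go beyond probing locales, the distributed data parallelism mentioned at the top of this page can be tried with a block-distributed array. The following is a minimal sketch (hypothetical file name distTest.chpl, not part of the original page), using the standard BlockDist module with the dmapped syntax of Chapel 1.15-era releases; it is compiled and run the same way as probeLocales above, e.g. ./distTest -nl 4:

// distTest.chpl -- a hypothetical example of distributed data parallelism
use BlockDist;                               // standard module providing the Block distribution
config const n = 8;
const D = {1..n, 1..n} dmapped Block(boundingBox={1..n, 1..n});   // domain split across locales
var A: [D] int;                              // a block-distributed 2D array
forall (i, j) in D do                        // each iteration runs on the locale owning A[i,j]
  A[i, j] = here.id;
writeln(A);                                  // the printed values show which locale owns each block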