= Chapel =
<languages />
[[Category:Software]]


<translate>


<!--T:2-->
Chapel is a general-purpose, compiled, high-level parallel programming language with built-in abstractions for shared- and distributed-memory parallelism. There are two styles of parallel programming in Chapel: (1) <b>task parallelism</b>, where parallelism is driven by <i>programmer-specified tasks</i>, and (2) <b>data parallelism</b>, where parallelism is driven by applying the same computation on subsets of data elements, which may be in the shared memory of a single node, or distributed over multiple nodes.
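As a minimal sketch of the two styles (not part of the original page), a <code>forall</code> loop expresses data parallelism over an index set, while <code>coforall</code> spawns one explicit task per iteration:

```chapel
// Data parallelism: iterations of a forall loop are divided
// among the available cores; they must be order-independent.
var A: [1..8] real;
forall i in 1..8 do
  A[i] = i * 2.0;

// Task parallelism: coforall creates one explicit task per iteration.
coforall tid in 1..4 do
  writeln("hello from task ", tid);
```

A <code>forall</code> loop may use fewer tasks than iterations, whereas <code>coforall</code> guarantees one task per iteration and is meant for explicitly concurrent work.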


<!--T:3-->
These high-level abstractions make Chapel ideal for learning parallel programming for a novice HPC user. Chapel is incredibly intuitive, striving to merge the ease-of-use of [[Python]] and the performance of traditional compiled languages such as [[C]] and [[Fortran]]. Parallel blocks that typically take tens of lines of [[MPI]] code can be expressed in only a few lines of Chapel code. Chapel is open source and can run on any Unix-like operating system, with hardware support from laptops to large HPC systems.


<!--T:4-->
Chapel has a relatively small user base, so many libraries that exist for [[C]], [[C++]], and [[Fortran]] have not yet been implemented in Chapel. Hopefully, that will change in the coming years if Chapel adoption continues to gain momentum in the HPC community.
<!--T:5-->
For more information, please watch our [https://westgrid.github.io/trainingMaterials/programming/#chapel three-part Chapel webinar].


== Single-locale Chapel == <!--T:6-->
 
<!--T:7-->
Single-locale (single node; shared-memory only) Chapel on our general-purpose clusters is provided by the module <code>chapel-multicore</code>. You can use <code>salloc</code> to test Chapel codes either in serial:
</translate>
{{Commands
|module load gcc/9.3.0 chapel-multicore/1.31.0
|salloc --time{{=}}0:30:0 --ntasks{{=}}1 --mem-per-cpu{{=}}3600 --account{{=}}def-someprof
|chpl test.chpl -o test
|./test
}}
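The commands above compile a file called <code>test.chpl</code> whose contents the page does not show; a minimal hypothetical stand-in could be:

```chapel
// Hypothetical test.chpl: data-parallel sum of 1..n via a reduction
config const n = 10;                   // override at run time with ./test --n=1000
const total = + reduce [i in 1..n] i;  // parallel sum of the integers 1..n
writeln("sum of 1..", n, " = ", total);
```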
<translate>
<!--T:8-->
or on multiple cores on the same node:
</translate>
{{Commands
|module load gcc/9.3.0 chapel-multicore/1.31.0
|salloc --time{{=}}0:30:0 --ntasks{{=}}1 --cpus-per-task{{=}}3 --mem-per-cpu{{=}}3600 --account{{=}}def-someprof
|chpl test.chpl -o test
|./test
}}
<translate>
<!--T:9-->
For production jobs, please write a [[Running_jobs|job submission script]] and submit it with <code>sbatch</code>.
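For example, a single-locale submission script might look as follows (a sketch only: the account name, resources, and the executable name <code>test</code> are placeholders to adapt to your own job):

```shell
#!/bin/bash
#SBATCH --time=0:30:0           # walltime limit
#SBATCH --ntasks=1              # one Chapel process
#SBATCH --cpus-per-task=3       # cores available to Chapel tasks
#SBATCH --mem-per-cpu=3600      # memory per core (MB)
#SBATCH --account=def-someprof  # replace with your own account
module load gcc/9.3.0 chapel-multicore/1.31.0
./test                          # run the previously compiled executable
```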


== Multi-locale Chapel == <!--T:10-->


<!--T:11-->
Multi-locale (multiple nodes; hybrid shared- and distributed-memory) Chapel is provided by the <code>chapel-ofi</code> module (for the OmniPath interconnect on Cedar) and the <code>chapel-ucx</code> module (for the InfiniBand interconnect on Graham, Béluga, and Narval).


<!--T:20-->
Consider the following Chapel code, which prints basic information about the nodes available inside your job:
</translate>
{{File
  |name=probeLocales.chpl
  |lang="chapel"
  |contents=
use MemDiagnostics;
for loc in Locales do
  on loc {
    writeln("locale #", here.id, "...");
    writeln("  ...is named: ", here.name);
    writeln("  ...has ", here.numPUs(), " processor cores");
    writeln("  ...has ", here.physicalMemory(unit=MemUnits.GB, retType=real), " GB of memory");
    writeln("  ...has ", here.maxTaskPar, " maximum parallelism");
  }
}}
<translate>


<!--T:18-->
To run this code on [[Cedar]], you need to load the <code>chapel-ofi</code> module:
{{Commands
|module load gcc/9.3.0 chapel-ofi/1.31.0
|salloc --time{{=}}0:30:0 --nodes{{=}}4 --cpus-per-task{{=}}3 --mem-per-cpu{{=}}3500 --account{{=}}def-someprof
}}


<!--T:19-->
Once the [[Running_jobs#Interactive_jobs|interactive job]] starts, you can compile and run your code from the prompt on the first allocated compute node:
{{Commands
|chpl --fast probeLocales.chpl -o probeLocales
|./probeLocales -nl 4
}}
To run the same code on InfiniBand-based clusters (all those except Cedar), please use the <code>chapel-ucx</code> module.


<!--T:21-->
For production jobs, please write a [[Running_jobs|Slurm submission script]] and submit your job with <code>sbatch</code> instead.
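A hypothetical multi-locale script for the <code>probeLocales</code> example above might look like this (the account name and resource requests are placeholders; on Cedar load <code>chapel-ofi</code> instead):

```shell
#!/bin/bash
#SBATCH --time=0:30:0           # walltime limit
#SBATCH --nodes=4               # one locale per node
#SBATCH --cpus-per-task=3       # cores per locale
#SBATCH --mem-per-cpu=3500      # memory per core (MB)
#SBATCH --account=def-someprof  # replace with your own account
module load gcc/9.3.0 chapel-ucx/1.31.0   # use chapel-ofi/1.31.0 on Cedar
./probeLocales -nl 4            # launch on 4 locales
```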


</translate>

Latest revision as of 19:50, 1 May 2024
