ARM software
Revision as of 15:29, 28 February 2018
Introduction
ARM DDT (formerly known as Allinea DDT) is a powerful commercial parallel debugger with a graphical user interface. It can be used to debug serial, MPI, multi-threaded, and CUDA programs, or any combination of these, written in C, C++, and Fortran. MAP, an efficient parallel profiler, is another very useful tool from ARM (formerly Allinea).
This software is available on Graham as two separate modules:
- allinea-cpu, for CPU debugging and profiling;
- allinea-gpu, for GPU or mixed CPU/GPU debugging.
As this is a GUI application, you should log in using ssh -Y, and use an SSH client like MobaXTerm (Windows) or XQuartz (Mac) to ensure proper X11 tunnelling.
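Before launching the GUI, it can save time to confirm that X11 forwarding is actually active in your session; DDT and MAP cannot open their windows without a DISPLAY. A small generic sketch of such a check (nothing in it is specific to this cluster):

```shell
#!/bin/sh
# Report whether X11 forwarding appears to be active in this session.
check_x11() {
    if [ -n "$DISPLAY" ]; then
        echo "X11 forwarding active (DISPLAY=$DISPLAY)"
    else
        echo "No DISPLAY set: log in again with 'ssh -Y'" >&2
        return 1
    fi
}
```

Running check_x11 right after logging in (and again inside the salloc shell) confirms the GUI will be able to open before you spend time setting up the debug session.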
Both DDT and MAP are normally used interactively through their GUI, which is typically accomplished using the salloc command (see below for details). MAP can also be used non-interactively, in which case it can be submitted to the scheduler with the sbatch command.
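For the non-interactive case, MAP can wrap the MPI launch inside an ordinary batch job. The following is a hedged sketch, not a verified recipe for this cluster: the resource values and program path are placeholders, and the exact launcher MAP should wrap (srun versus mpirun) may differ on your system. MAP's --profile flag runs without a GUI and writes a .map results file that can be opened in the GUI later.

```shell
#!/bin/bash
#SBATCH --time=0-1:00
#SBATCH --mem-per-cpu=4G
#SBATCH --ntasks=4

# Load the same module used interactively (plus the older OpenMPI if required).
module load allinea-cpu

# --profile runs MAP non-interactively and writes a .map file in the working
# directory, which can later be opened with the map GUI on a login node.
map --profile srun ./program
```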
The current license limits the use of DDT/MAP to a maximum of 512 CPU cores across all users at any given time. DDT-GPU is likewise limited to 8 GPUs.
Usage
CPU-only code, no GPUs
Allocate the node or nodes on which to do the debugging or profiling with salloc, e.g.:
salloc --x11 --time=0-1:00 --mem-per-cpu=4G --ntasks=4
This will open a shell session on the allocated node. Then load the appropriate module:
module load allinea-cpu
This may fail with a suggestion to load an older version of OpenMPI first. If that happens, reload the OpenMPI module with the suggested command, and then reload the allinea-cpu module:
module load openmpi/2.0.2
module load allinea-cpu
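If you run these sessions often, the fallback can be scripted so the load works whether or not the default succeeds. A small sketch, assuming openmpi/2.0.2 is the version the error message suggests (substitute whatever version it actually names):

```shell
# Try the default load first; on failure, load the suggested OpenMPI
# version and retry. Adjust the version to match the error message.
load_allinea_cpu() {
    module load allinea-cpu 2>/dev/null && return 0
    module load openmpi/2.0.2 || return 1
    module load allinea-cpu
}
```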
You can then run the ddt or map command as:
ddt path/to/code
map path/to/code
Before pressing the Run button, make sure the MPI implementation shown in the Allinea application window is the default "OpenMPI". If it is not, press the Change button next to the "Implementation:" string and pick the correct option from the drop-down menu.
When done, exit the shell. This will terminate the allocation.
CUDA code
Allocate the node or nodes on which to do the debugging or profiling with salloc, e.g.:
salloc --x11 --time=0-1:00 --mem-per-cpu=4G --ntasks=1 --gres=gpu:1
This will open a shell session on the allocated node. Then load the appropriate module:
module load allinea-gpu
This may fail with a suggestion to load an older version of OpenMPI first. If that happens, reload the OpenMPI module with the suggested command, and then reload the allinea-gpu module:
module load openmpi/2.0.2
module load allinea-gpu
Ensure a cuda module is loaded:
module load cuda
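Before launching ddt on the allocated node, it can help to confirm that the CUDA toolchain is actually visible after the module loads. A tiny generic check (nothing here is specific to Graham; both tools should appear on PATH on a properly configured GPU node):

```shell
# Verify that the CUDA compiler and the driver utility are on PATH.
for tool in nvcc nvidia-smi; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing"
    fi
done
```

If either tool reports missing, re-check that the cuda module loaded cleanly and that the allocation actually includes a GPU.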
You can then run the ddt command as:
ddt path/to/code
When done, exit the shell. This will terminate the allocation.
Known issues
MPI DDT
- The debugger does not show queued MPI messages (e.g. when paused in an MPI deadlock).
OpenMP DDT
- Memory debugging module (which is off by default) doesn't work.
CUDA DDT
- Memory debugging module (which is off by default) doesn't work.
MAP
- MAP currently does not work correctly on Graham. We are working on resolving this issue. For now, the workaround is to request a SHARCNET account through your Compute Canada account (CCDB), and then run MAP on the development nodes of SHARCNET's legacy cluster orca, following these instructions.