ARM software

Introduction

Allinea's DDT (now called ARM DDT after Allinea was acquired by ARM) is a powerful commercial parallel debugger with a graphical interface. It can be used to debug serial, MPI, multi-threaded, and CUDA codes (and any combination of the above) written in C, C++, and Fortran. MAP, another very useful tool from Allinea (now ARM), is an efficient parallel profiler.

This software is available on Graham as two separate modules: allinea-cpu (for CPU debugging and profiling) and allinea-gpu (for GPU or mixed CPU/GPU debugging). As this is a GUI application, one has to log in to Graham using the "-Y" or "-X" ssh switch for proper X11 tunnelling. For Windows terminals, we recommend the free program MobaXterm (Home Edition), which has everything you need to run DDT: an SSH client and an X Windows server (also SFTP, VNC, and many other services). For Mac, you need to install the free X Windows server XQuartz.
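
For example, assuming the standard Graham login address and a placeholder username, one might connect from a local terminal with X11 forwarding enabled:

ssh -Y someuser@graham.computecanada.ca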

CPU-only (non-CUDA) code

First, allocate the node(s) for the debugging / profiling job with salloc (which accepts many of the sbatch arguments), e.g.:

salloc --x11 --time=0-1:00 --mem-per-cpu=4G --ntasks=4
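
The debugger and profiler work best on executables built with debugging symbols and little or no optimization. As a minimal sketch (the source file name and the MPI compiler wrapper are placeholders; adjust for your code and toolchain):

mpicc -g -O0 mycode.c -o mycode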

Once the resource is allocated, you will get a shell on the allocated node. There you have to load the corresponding module:

module load allinea-cpu

The above command can fail with a suggestion to load an older version of OpenMPI first. If that happens, you should reload the OpenMPI module with the suggested command, and then load the allinea-cpu module:

module load openmpi/2.0.2
module load allinea-cpu

You can then run the ddt or map command as usual:

ddt path/to/code
map path/to/code
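
Both commands also accept the target's arguments on the command line. As a minimal sketch, assuming your executable is ./mycode and your Allinea/Forge version supports the -n flag for pre-selecting the number of MPI processes:

ddt -n 4 ./mycode
map -n 4 ./mycode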

Make sure the MPI implementation is set to the default "OpenMPI" in the Allinea application window before pressing the Run button. If it is not, press the Change button next to the "Implementation:" string and pick the correct option from the drop-down menu.

When done, exit the shell (this will terminate the allocation).
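
For example, simply type the following at the prompt:

exit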

CUDA code

First, allocate the node(s) for the debugging job with salloc (which accepts many of the sbatch arguments), e.g.:

salloc --x11 --time=0-1:00 --mem-per-cpu=4G --ntasks=1 --gres=gpu:1

Once the resource is allocated, you will get a shell on the allocated node. There you have to load the corresponding module:

module load allinea-gpu

The above command can fail with a suggestion to load an older version of OpenMPI first. If that happens, you should reload the OpenMPI module with the suggested command, and then load the allinea-gpu module:

module load openmpi/2.0.2
module load allinea-gpu

You will also need to make sure a cuda module is loaded:

module load cuda
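
For the debugger to step inside GPU kernels, the CUDA code should be compiled with device debug symbols. A minimal sketch (the source file name is a placeholder):

nvcc -g -G mykernel.cu -o mykernel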

You can then run the ddt command as usual:

ddt path/to/code

When done, exit the shell (this will terminate the allocation).

Known issues

MPI

  • For some reason the debugger doesn't show queued MPI messages (e.g. when paused in an MPI deadlock).

OpenMP

  • Memory debugging module (which is off by default) doesn't work.

CUDA

  • Memory debugging module (which is off by default) doesn't work.