<!--T:10-->
module load ddt-cpu

<!--T:13-->
<!--T:14-->
ddt path/to/code
map path/to/code

<!--T:15-->
:: Make sure the MPI implementation is the default OpenMPI in the DDT/MAP application window before pressing the ''Run'' button. If it is not, press the ''Change'' button next to the ''Implementation:'' string and select the correct option from the drop-down menu. Also specify the desired number of CPU cores in this window.
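:: Note that DDT and MAP can only show source-level information if the code was compiled with debugging symbols. A minimal sketch, assuming an MPI code built with a GCC-based toolchain (the compiler wrapper and file names are illustrative):
 $ mpicc -g -O0 code.c -o code   # -g adds debug symbols; -O0 avoids optimizations that make stepping through the code confusing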
<!--T:16-->
4. When done, exit the shell to terminate the allocation.
IMPORTANT: The current versions of DDT and OpenMPI have a compatibility issue which breaks an important DDT feature: displaying message queues (available from the ''Tools'' drop-down menu). There is a workaround: before running DDT, execute the following command:
$ export OMPI_MCA_pml=ob1
Be aware that this workaround can make your MPI code run slower, so use it only when debugging.
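Putting these steps together, a minimal interactive debugging session might look like the sketch below. The salloc options and the executable path are placeholders; adjust them for your cluster and job (for example, the --x11 flag requires a Slurm build with X11 support):
$ salloc --time=1:00:00 --ntasks=4 --x11   # request an interactive allocation with X11 forwarding (illustrative options)
$ module load ddt-cpu
$ export OMPI_MCA_pml=ob1                  # message-queue workaround described above
$ ddt path/to/code                         # select the OpenMPI implementation and core count in the GUI, then press Run
$ exit                                     # terminate the allocation when done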
== CUDA code == <!--T:17-->
<!--T:21-->
module load ddt-gpu
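For DDT to show useful source-level information for the device code, the CUDA kernels should be compiled with debugging information. A minimal sketch, assuming the nvcc compiler (the file and executable names are placeholders):
$ nvcc -g -G code.cu -o code   # -g: host debug symbols; -G: device (kernel) debug symbols
$ ddt ./code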
<!--T:22-->