Allinea DDT is a powerful, easy-to-use graphical debugger capable of debugging:
- Single process and multithreaded software
- Parallel (MPI) software
- Heterogeneous software such as that written to use GPUs
- Hybrid codes mixing paradigms such as MPI + OpenMP, or MPI + CUDA
- Multi-process software of any form, including client-server applications
Allinea DDT includes static analysis that highlights potential problems in the source code, integrated memory debugging that can catch reads and writes outside of array bounds, integration with MPI message queues, and much more. It provides a complete solution for finding and fixing problems, whether on a single thread or across hundreds of thousands of processes. Allinea DDT supports all of the compiled languages found in mainstream and high-performance computing, including:
- C, C++, and all derivatives of Fortran, including Fortran 90.
- Parallel languages/models including MPI, UPC, and Fortran 2008 Co-arrays.
- GPU languages such as HMPP, OpenMP Accelerators, CUDA and CUDA Fortran.
Allinea DDT can be used for debugging on platforms from desktops to Petascale class machines running hundreds of thousands of processes.
Platforms and Locations
| Platform | Location | Notes |
| --- | --- | --- |
| x86_64 Linux | /usr/global/tools/ddt/default | Versions other than the current default can be found under /usr/global/tools/ddt/r |
| BG/Q | /usr/global/tools/ddt/default | Versions other than the current default can be found under /usr/global/tools/ddt/r |
Important: Only the bare essentials for getting started are provided here. DDT is a full-featured, sophisticated tool. Users will definitely want to review Allinea's documentation and/or tutorials.
x86_64 Linux Systems
1. On CHAOS 5 systems, the ddt command is not in your default path (on TOSS 3 it is), so you must first load the required dotkit package. For example, view the available packages, load the ddt package (currently there is only one choice), and confirm that ddt is in your path:
% use -l ddt
  debuggers/Allinea Software
  ----------
  ddt - DDT
% use ddt
Prepending: ddt (ok)
% which ddt
/usr/global/tools/ddt/default/bin/ddt
On TOSS 3, multiple versions are available, and you can switch between them via the allineaforge modules:
% module avail allineaforge

-------------------------- /usr/tce/modulefiles/Core ---------------------------
   allineaforge/6.0.5    allineaforge/6.1.1    allineaforge/7.0.3 (D)

  Where:
   D:  Default Module

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".

% module load allineaforge/6.1.1
% which ddt
/usr/tce/packages/allineaforge/forge-6.1.1/bin/ddt
2. Build your application being sure to specify the -g compiler flag (if you have not already done so).
3. Launch ddt with your application. The method depends on whether your application is serial or parallel, and whether it runs in the pdebug or pbatch queue. See below.
Serial Jobs
1. If you are on a login node, or on one of the pdebug or pbatch nodes via the mxterm utility, you can simply invoke DDT with the name of your executable:
% ddt myapp
2. You should then see DDT's splash screen, followed by two new DDT windows.
3. In the "Run" window, you should see the name of your executable. If you have arguments to add for your program, you can enter them here. Otherwise, just click the "Run" button to launch your serial program.
4. After your program launches, you should get a third window loaded with your serial program. You can now interact with your program (set breakpoints, run, examine data, etc.).
MPI Jobs in pbatch
1. For running on LC pbatch partition nodes, you can acquire your partition using LC's mxterm utility. For example:
% mxterm 4 64 60
will request 4 nodes with 64 tasks/cores for 60 minutes.
2. When your mxterm window appears, you will need to load the dotkit package:
% use ddt
3. Launch DDT with your application:
% ddt myapp
4. The next step is important: you need to tell DDT to use SLURM to manage the MPI job, and then specify the srun arguments and desired number of processes. For example, you might select generic (SLURM) with 32 processes and 4 nodes.
5. Click the "Run" button. DDT will attempt to establish your parallel MPI job as it connects to all processes.
6. After DDT starts your parallel job, you will get a new window with your job loaded. You can then begin debugging as usual.
MPI Jobs in pdebug
1. For pdebug partition jobs, you can acquire your partition either by running DDT on a login node or by using mxterm (as described for pbatch above).
2. To launch from the login node, first load the DDT package:
% use ddt
3. Then launch DDT with your application:
% ddt myapp
4. The next step is important: you need to tell DDT to use SLURM to manage the MPI job, and specify the srun arguments and desired number of processes. For example, you might select generic (SLURM) with 32 processes and 2 nodes in the pdebug partition.
5. Click on the "Run" button, and DDT will attempt to start your job. When successful, you will get a new window with your program loaded and ready for debugging (as described above for pbatch).
BG/Q Systems
1. Instructions for running DDT on BG/Q systems are the same as for LC's Linux clusters (above). The only difference is that when you launch DDT for MPI jobs, be sure to specify that you are using "BlueGene/Q (SLURM)", for example when debugging in the pdebug partition.
If you encounter problems using DDT:
- Consult the vendor documentation (below).
- Contact the LC Hotline to report a problem.
Documentation and References
- Allinea (aka Arm) provides complete documentation on their website: https://www.arm.com/products/development-tools/hpc-tools
- The DDT installation directories also contain documentation, including the DDT User Guide.