mpiP is a lightweight profiling library for MPI applications. Because it only collects statistical information about MPI functions, mpiP generates considerably less overhead and much less data than tracing tools. All the information captured by mpiP is task-local. It only uses communication during report generation, typically at the end of the experiment, to merge results from all of the tasks into one output file.

This page describes mpiP installations and functionality specific to LLNL systems. For general information on mpiP, please see the mpiP project page.

Platforms and Locations

Platform    Location                    Notes
x86_64      /usr/local/tools/mpip*      Multiple versions are available for mvapich and openmpi


x86_64 Linux Systems

1. Determine which mpiP build you want to use. This is done by listing the various dotkit packages with the command use -l mpip. For example:

% use -l mpip

performance/profile ----------
  mpip-mvapich2-1.7 - A lightweight MPI profiler.
  mpip-mvapich2-1.9 - A lightweight MPI profiler.
     mpip-mvapich2 - A lightweight MPI profiler.
      mpip-mvapich - A lightweight MPI profiler.
  mpip-openmpi-1.4.3 - A lightweight MPI profiler.
  mpip-openmpi-1.6.5 - A lightweight MPI profiler.
      mpip-openmpi - A lightweight MPI profiler.
    mpipview-3.1.2 - Mpip user interface (mpipview)
    mpipview-3.3.0 - Mpip user interface (mpipview)
          mpipview - Mpip user interface (mpipview)

2. Load the dotkit package of choice. For example:

% use mpip-mvapich
Prepending: mpip-mvapich (ok) 

3. Compile/link your application as usual, but be sure to include the -g flag.

4. Run your application using the srun-mpip command. Note that this command is a wrapper script for srun that allows you to use mpiP without explicitly linking the mpiP library when you build your application. The syntax is the same as for srun, as shown in the example below.

% srun-mpip -n 4 -p pdebug ./myprogram 

Note: the srun-mpip command is brought into your path after you load the mpiP package of choice (step 2 above). If you are running a batch job, your batch script will therefore need to include the package load in order to pick up the srun-mpip command.

5. As an alternative to using the srun-mpip command, you can explicitly link with the mpiP library of choice, and then run your application as usual. See the Linking section below.
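The batch-script requirement in the steps above can be sketched as follows. This is a minimal illustration, not LC-blessed boilerplate: the package name, partition, task count, and ./myprogram are all assumptions, and it presumes dotkit's use command is available in batch shells.

```shell
# Generate a minimal batch script that loads the mpiP dotkit package
# before invoking the srun-mpip wrapper (names are illustrative).
cat > mpip_job.sh <<'EOF'
#!/bin/sh
# Bring srun-mpip into PATH inside the batch job
use mpip-mvapich
# Launch through the mpiP wrapper; syntax is the same as srun
srun-mpip -n 4 -p pdebug ./myprogram
EOF
chmod +x mpip_job.sh
cat mpip_job.sh
```

The script can then be submitted with your usual batch submission command.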


After your application completes, mpiP will write its output file to the current directory. You will be notified of this by a message such as:

mpiP: Storing mpiP output in [./myjob.8.44380.1.mpiP].

The output file name follows the format [executable].[task count].[pid].[index].mpiP.
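If you need to sort or select reports in scripts, the fields of the name can be split with standard shell tools. A small sketch using the example name above (the name itself is only illustrative):

```shell
# Split an mpiP report name of the form
# [executable].[task count].[pid].[index].mpiP into its fields.
name="myjob.8.44380.1.mpiP"
exe=$(echo "$name" | cut -d. -f1)     # executable
ntasks=$(echo "$name" | cut -d. -f2)  # task count
pid=$(echo "$name" | cut -d. -f3)     # pid
index=$(echo "$name" | cut -d. -f4)   # report index
echo "executable=$exe tasks=$ntasks pid=$pid index=$index"
```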

mpiP's output file is divided into 5 sections:

  • Environment Information
  • MPI Time Per Task
  • Callsites
  • Aggregate Times of Top 20 Callsites
  • Callsite Statistics
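To skim a large report, you can pull out just the section headers. mpiP marks each section with a line beginning "@---" (an assumption worth verifying against your mpiP version); in this sketch a tiny synthetic report stands in for real output:

```shell
# List the section headers of an mpiP report. The "@---" marker and
# the sample report content below are illustrative stand-ins.
cat > sample.mpiP <<'EOF'
@--- MPI Time (seconds) -------------------------------------------
Task    AppTime    MPITime     MPI%
@--- Callsites: 2 -------------------------------------------------
ID Lev File/Address        Line Parent_Funct   MPI_Call
EOF
grep '^@---' sample.mpiP | sed 's/ -*$//'
```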

Examples and discussion of the mpiP output can be found on the mpiP project page.


Linking

Explicit linking with the mpiP library is required on BG/Q systems, but optional on LC's Linux clusters, where the srun-mpip wrapper command can be used to launch a job instead. Examples for linking on both types of systems are shown below.

-L/usr/local/tools/mpip-[MPI implementation]/lib -lmpiP

(Please note the lowercase "mpip"; the default MPI implementation is mvapich)
Use the use -l mpip command to see the list of mpiP build choices.
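A full link line, then, looks something like the sketch below. The compiler driver, source file, and explicit MPI library name (-lmpich for mvapich) are illustrative assumptions; the one firm rule, as the notes at the end of this page point out, is that -lmpiP must come before the MPI library so mpiP's wrappers intercept the MPI calls.

```shell
# Example link line with -lmpiP placed before the MPI library
# (mpicc, myapp.c, and -lmpich are illustrative assumptions).
LINK_CMD="mpicc -g -o myapp myapp.c -L/usr/local/tools/mpip-mvapich/lib -lmpiP -lmpich"
echo "$LINK_CMD"
# Sanity check: everything after -lmpiP should still contain -lmpich
case "${LINK_CMD#*-lmpiP}" in
  *-lmpich*) echo "ok: mpiP precedes the MPI library" ;;
  *)         echo "bad ordering" ;;
esac
```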


1. Compile your application using the -g flag:

% mpiicc -g -o myapp myapp.c

2. Load the mpiP dotkit package of choice:

% use mpip-mvapich
Prepending: mpip-mvapich (ok)

3. Run your application using the mpiP srun wrapper:

% srun-mpip -n 8 -ppdebug ./myapp

mpiP: mpiP V3.4.0 (Build Nov 14 2013/15:50:29)
mpiP: Direct questions and errors to
[program output]
mpiP: Storing mpiP output in [./myjob.8.44380.1.mpiP].

4. Examine the output file specified by mpiP:

% vi myjob.8.44380.1.mpiP

Run-time Options

mpiP provides some functionality that can be modified through flags set in the MPIP environment variable. The following table describes commonly used settings.

  -k #    Set the number of stack frames to unwind for each callsite.
          This is useful for determining various stack traces for a
          specific MPI call. It can significantly increase the number
          of callsites in the mpiP report.

  -t #.#  Set the reporting threshold. This value reduces the amount
          of information in the mpiP report by not printing individual
          process information for callsites with an MPI % less than
          the threshold.

  -g      Generate debugging information. This will produce
          information that may help diagnose problems you might be
          having with mpiP.


For example, to set these flags under bash or csh, respectively:

$ export MPIP="-t 10.0  -k 4"
% setenv MPIP "-t 10.0  -k 4"

For more information on mpiP run-time flags, please see the mpiP User Guide.


Troubleshooting

  • mpiP does have several platform-specific requirements. Please be sure your application is appropriately prepared.
  • When using mpiP with large jobs or applications with many callsites, it may take mpiP some time to generate the report. For jobs of thousands of processes, it may take 5 minutes or longer for the report to be generated.
  • If you are attempting to use mpiP by linking it with your application and you are not seeing the mpiP banner from within MPI_Init at run time, you most likely have not linked your application in the appropriate manner. Make sure that if you are specifying the MPI library on your link line, the mpiP library is placed before it.
  • If you are interested in getting more information from mpiP about what it is doing at run time, you can add the -g flag to the MPIP environment variable.
  • If you are not getting the callsite source information in your mpiP report, you likely did not compile with -g.
  • Please contact the LC Hotline to request assistance with mpiP or to report a problem.

Documentation and References

  • Sourceforge mpiP Project page:
  • Several documents included in the /doc installation directory, including the mpiP User Guide.