MPI Support Roadmap
Currently, only Cray-MPICH is supported on El Capitan systems. However, once MVAPICH and OpenMPI for Slingshot become available, they will be tested and added as alternative MPIs.
MPI Module
A cray-mpich module is available for every supported compiler, including:
- gcc
- cce
- rocmcc
Users must first select their compiler, then run module load cray-mpich; the latest version of MPI matching the already loaded compiler will be selected.
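As a sketch, that sequence looks like the following (the choice of gcc here is illustrative; these commands only work on a system with the Cray programming environment installed):

```shell
# Pick a compiler first; the matching cray-mpich build is then
# selected automatically when the MPI module is loaded.
module load gcc        # or cce, rocmcc
module load cray-mpich
module list            # verify which cray-mpich version was chosen
```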
Some compilers may have multiple versions of the cray-mpich library available. If you need to specify a particular MPI version:
- Load a compiler version, for example: module load cce/18.0.0-magic
- View the available cray-mpich versions: module avail cray-mpich
- Load the specific cray-mpich version, for example: module load cray-mpich/8.1.30
GPU-Aware MPI
Cray MPICH is also GPU-aware. In order to leverage accelerated GPU transports, the following steps are necessary:
- Load a ROCm compiler of your choice, or use the default with the command module load rocmcc
- The Cray MPICH module will be automatically loaded.
- Link the appropriate GTL (GPU Transport Layer) library, e.g., for gfx942:
LIBS="$PE_MPICH_GTL_DIR_amd_gfx942 $PE_MPICH_GTL_LIBS_amd_gfx942" LDFLAGS="-Wl,-rpath,${PE_MPICH_GTL_DIR_amd_gfx942:2}"
- When running your application, remember to export/set the environment variable MPICH_GPU_SUPPORT_ENABLED=1. Without this environment variable, the program will abort with: process_vm_readv: Bad address
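The GTL flags above can be sketched as follows. On El Capitan the two PE_MPICH_GTL_* variables are set by the cray-mpich module; the values below are hypothetical stand-ins so the shell logic can be shown in isolation:

```shell
# Hypothetical values; on the real system the cray-mpich module sets these.
PE_MPICH_GTL_DIR_amd_gfx942="-L/opt/cray/pe/mpich/8.1.30/gtl/lib"
PE_MPICH_GTL_LIBS_amd_gfx942="-lmpi_gtl_hsa"

# Link flags: library search path plus the GTL library itself.
LIBS="$PE_MPICH_GTL_DIR_amd_gfx942 $PE_MPICH_GTL_LIBS_amd_gfx942"

# ${var:2} is a bash substring expansion that strips the leading "-L",
# so the bare directory can be reused as an rpath entry.
LDFLAGS="-Wl,-rpath,${PE_MPICH_GTL_DIR_amd_gfx942:2}"

echo "LIBS=$LIBS"
echo "LDFLAGS=$LDFLAGS"
```

Note that the `:2` offset is what lets one variable serve both as a `-L` flag and as the path argument to `-Wl,-rpath`.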
Version Compatibility
HPE provides a very narrow compatibility statement regarding which versions of cce, rocm, and cray-mpich are guaranteed to function and be performant. Please see the known issues page for the officially supported versions and details on how version incompatibility manifests.