GPU—OpenACC/PGI Accelerator

Overview

OpenACC is an open, cross-platform API standard for programming accelerators and an alternative to NVIDIA's CUDA C language. It works on a range of APUs, GPUs, and many-core coprocessors and supports C, C++, and Fortran. OpenACC consists of a set of compiler directives that mark sections of application code to be offloaded from the host CPU and run on an attached accelerator. Currently (as of December 2015), the accelerators available at LC are NVIDIA GPU cards.
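
For illustration, below is a minimal sketch (hypothetical code, not taken from the LC documentation) of what a directive-based C program such as the vectoradd.c compiled in the Usage section might look like; the #pragma acc line is the only change from ordinary serial C code.

/* Hypothetical vector-add sketch: the pragma asks the compiler to offload the
   loop to the attached accelerator; without -acc it builds as ordinary serial C. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int i, n = 1000000;
    float *a = malloc(n * sizeof(float));
    float *b = malloc(n * sizeof(float));
    float *c = malloc(n * sizeof(float));

    for (i = 0; i < n; i++) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    /* Offload this loop; copyin/copyout manage host-to-GPU data movement */
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (i = 0; i < n; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f, c[%d] = %f\n", c[0], n - 1, c[n - 1]);
    free(a);
    free(b);
    free(c);
    return 0;
}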

Please read the LC GPU Technology web page to familiarize yourself with the LLNL GPU setup.

Environment

Machines and Versions

See the LC graphics software page. As of December 2015, PGI is the only supported OpenACC compiler on LC x86 systems with GPUs, i.e., surface, rzhasgpu, and max. Livermore Computing plans to provide non-PGI OpenACC compilers in the future.

Usage

Building without MPI

To build a program using OpenACC, compile and link using a compiler that supports OpenACC. For example, to compile a C program called vectoradd.c into a GPU program using the PGI 15.7 compiler:

use pgi-15.7-accelerator
pgcc -fast -acc -Minfo vectoradd.c -o vectoradd

The resulting program can then be run directly on a system with GPUs to take advantage of the accelerator parallelism on that machine.

Building with MPI

Users can build a hybrid program that uses both OpenACC and MPI by compiling their application with the appropriate LC MPI compiler wrapper for PGI. For example, to compile a C program called vectoradd.c that uses both MPI and OpenACC into a GPU program with the PGI 15.7 compiler and the MVAPICH2 2.1 MPI library:

use pgi-15.7-accelerator
use mvapich2-pgi-2.1
mpicc -fast -acc -Minfo vectoradd.c -o vectoradd

Note: We use mpicc instead of mpipgcc because the MPI support is built into the mvapich2-pgi-2.1 module. For questions about building with MPI, or if you have problems with the above instructions, please contact the LC Hotline at 2-4531.
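
As a rough sketch of what such a hybrid program can look like (hypothetical code, not part of the LC instructions), each MPI rank below offloads its own portion of a vector add to the GPU and the partial results are then combined with MPI:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, nranks, i;
    int n = 1000000;                      /* elements per rank */
    float *a, *b, *c;
    float local_sum = 0.0f, total = 0.0f;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    a = malloc(n * sizeof(float));
    b = malloc(n * sizeof(float));
    c = malloc(n * sizeof(float));
    for (i = 0; i < n; i++) {
        a[i] = (float)rank;
        b[i] = 1.0f;
    }

    /* Each rank offloads its own chunk of the vector add to the accelerator */
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (i = 0; i < n; i++)
        c[i] = a[i] + b[i];

    for (i = 0; i < n; i++)
        local_sum += c[i];

    /* Combine the per-rank results on rank 0 with MPI */
    MPI_Reduce(&local_sum, &total, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %f (from %d ranks)\n", total, nranks);

    free(a);
    free(b);
    free(c);
    MPI_Finalize();
    return 0;
}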

Running with MPI and OpenACC

To run hybrid applications on LC x86 GPU-enabled clusters, use the Slurm application launcher, srun. For hybrid jobs in which multiple MPI ranks on a node will access the same GPU, LC has installed the NVIDIA Multi-Process Service (MPS). MPS supports the use of a single GPU by multiple MPI ranks via Hyper-Q technology, and it can be enabled in a job script by adding the --mps flag to srun:

srun --mps progname args
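
For a quick sanity check (a hypothetical sketch, assuming the standard openacc.h runtime routines rather than anything LC-specific), each rank can report which accelerator the OpenACC runtime has assigned to it; under MPS, multiple ranks on a node will typically report the same device:

#include <stdio.h>
#include <mpi.h>
#include <openacc.h>

int main(int argc, char *argv[])
{
    int rank, ndev, dev;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Count visible NVIDIA devices and report the one this rank will use */
    ndev = acc_get_num_devices(acc_device_nvidia);
    dev = acc_get_device_num(acc_device_nvidia);
    printf("rank %d: %d device(s) visible, using device %d\n", rank, ndev, dev);

    MPI_Finalize();
    return 0;
}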

Debugging with MPI and OpenACC

MPI debugging with OpenACC is an advanced topic beyond the scope of this basic help guide and depends on the specific nature of your problem. Please contact the Hotline for assistance.

Help

Help is available from the LC Hotline: lc-hotline@llnl.gov, (925) 422-4531.

Download

There is no need to download any software to use GPUs on LC machines. Livermore Computing does not support installation of desktop OpenACC drivers. Please ask your desktop system administrator for assistance.


UCRL-MI-128467-REV-1