Numerous compilers are available to provide a rich programming environment for scientific and technical computing.

Platforms

A list of current platforms at LC can be found at hpc.llnl.gov/hardware/platforms. The subsections below outline some of the properties specific to a given platform type. To determine the platform type, users may run `echo $SYS_TYPE` on a given system.
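
For example, on a TOSS 4 machine this might report (the exact value varies by system; this output is illustrative):

$ echo $SYS_TYPE
toss_4_x86_64_ib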

LC platform types and their available compilers:

Platform        $SYS_TYPE                          Compilers and MPI Implementations          Notes
CORAL EA        blueos_3_ppc64le_ib                XL, GNU, PGI, NVCC, Clang; Spectrum MPI    Latest Documentation
CORAL           blueos_3_ppc64le_ib_p9             XL, GNU, PGI, NVCC, Clang; Spectrum MPI    Latest Documentation
TOSS 4          toss_4_x86_64_ib, toss_4_x86_64    GNU, Intel, Clang; MVAPICH, OpenMPI
El Capitan EA   toss_4_x86_64_ib_cray              Cray, AMD ROCmCC, GNU; Cray MPICH          Latest Documentation

Provided Compiler Versions

LC provides several versions of a given vendor's compilers (e.g., GCC 8.5.0 and GCC 10.3.1). Users are advised to use LMod (detailed below) to load a desired compiler version. On CORAL / CORAL EA systems, these compilers are "wrapped" to make them more robust for users in LC environments.

On the TOSS 4 and El Capitan EA systems, LC provides each version of a compiler in two forms:

  • Portable compilers: the compiler as it is delivered from the vendor (and as it would appear on other, non-LC platforms). These compilers can be accessed using LMod and have the naming convention vendor/version.
  • Magic compilers: a "wrapped" version of the compiler that makes it more robust for users in LC environments. This wrapped form is what LC has traditionally deployed (as seen on TOSS 3 and CORAL systems). It is best for complex build processes and ensures that resulting binaries do not depend on the state of a user's environment. It also ensures that executables can be run across similar LC platforms. These compilers are accessed using LMod and have the naming convention vendor/version-magic (see the short example after this list).
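
As a sketch of the naming convention (the versions shown also appear in the listings below; availability varies by system):

$ module load gcc/12.1.1          # portable compiler, as delivered by the vendor
$ module load gcc/12.1.1-magic    # LC "magic" compiler, wrapped for LC environments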

For example, the following listing shows several versions of the intel-classic compiler family, packaged both as an "un-wrapped" portable compiler and as the LC-specific "magic" compiler:

$ ml avail intel-classic

--------------------- /usr/tce/modulefiles/toolchains/Core ---------------------
   intel-classic/19.0.4-magic    intel-classic/2021.6.0-magic  (L,D)
   intel-classic/19.0.4          intel-classic/2021.6.0
   intel-classic/19.1.2-magic    intel-classic/2021.10.0-magic
   intel-classic/19.1.2          intel-classic/2021.10.0

LMod Modules

On TOSS 4 and CORAL EA systems, LC provides an LMod modules environment that allows users to switch between various compiler versions. To see a list of compiler versions available via modules, a user can run `ml keyword "Category : Compiler"`. Here is a snippet of the output on a TOSS 4 system:

$ ml keyword "Category : Compiler"
----------------------------------------------------------------------------
The following modules match your search criteria: "Category : Compiler"
----------------------------------------------------------------------------

  clang: clang/14.0.6-magic

  gcc: gcc/10.3.1-magic, gcc/10.3.1, gcc/11.2.1-magic, gcc/11.2.1, ...

  intel: intel/2022.1.0-magic, intel/2023.2.1-magic

  intel-classic: intel-classic/19.0.4-magic, intel-classic/19.1.2-magic, ...

----------------------------------------------------------------------------

For a truncated list like gcc in the example output above, a user may then run `ml avail gcc` to see all available versions. An example output snippet of that command is as follows:

$ ml avail gcc

--------------------- /usr/tce/modulefiles/toolchains/Core ---------------------
   gcc/10.3.1-magic (D)    gcc/11.2.1-magic    gcc/12.1.1-magic
   gcc/10.3.1              gcc/11.2.1          gcc/12.1.1

A user may then run `module load gcc/12.1.1-magic`, which will make the gcc, g++, and gfortran commands invoke the 12.1.1 version. To get the full path to the compiler command, a user may then use the `which` command (e.g., `which gcc` returns /usr/tce/packages/gcc/gcc-12.1.1-magic/bin/gcc). Due to the intricacies of shells, environments, and build systems, LC recommends using the full path to a compiler command to ensure getting the desired version.
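
A hypothetical session illustrating this workflow (the version, path, and source file name are illustrative):

$ module load gcc/12.1.1-magic
$ which gcc
/usr/tce/packages/gcc/gcc-12.1.1-magic/bin/gcc
$ /usr/tce/packages/gcc/gcc-12.1.1-magic/bin/gcc -O2 -o hello hello.c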

While LC may maintain “default” versions of each compiler within modules (as indicated by the “D” demarcation in the module avail output), LC does not generally advocate for any particular version as the best option. In general, users are advised to start with the newest version of a compiler to get the latest features and bug fixes.

More information about using modules is available at https://hpc.llnl.gov/software/modules-and-software-packaging.

MPI wrappers

Since TOSS 3 and the CORAL EA systems, LC no longer maintains compiler vendor and version specific compiler wrappers (i.e., we no longer have commands such as mpiifort-17.0.2). Rather, for each compiler we have separate builds of each MPI implementation with standard mpicc, mpicxx, mpif90, etc. commands. A user may use modules to switch between various compilers; for example, running `module load gcc/10.3.1-magic` will cause the mpicc, mpif90, etc. commands to use the GCC 10.3.1 compiler with LC "magic" (described above). More information about using MPI on CORAL systems can be found at lc.llnl.gov/confluence/display/CORALEA/MPI.
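
For instance, a sketch of building an MPI program with a specific compiler (the source file names are hypothetical):

$ module load gcc/10.3.1-magic    # mpicc, mpif90, etc. now use GCC 10.3.1
$ mpicc -o hello_mpi hello_mpi.c
$ mpif90 -o hello_mpi_f hello_mpi.f90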

Library Paths

The common use of shared libraries on Linux provides many benefits, but if the application developer is not careful, shared libraries can also be a source of vexing problems. The most common shared library problems are: 1) not finding the shared libraries at run time (preventing the application from running at all) and 2) the much worse case of silently picking up different (and possibly incompatible) shared libraries at run time. This section describes the recommended ways to ensure that your application finds and uses the expected shared libraries.

These shared library problems can occur more often on LC systems than on stand-alone Linux systems because LC often installs many different versions of the same compiler or library in order to give users the exact version they require. Although Linux provides methods for differentiating shared library versions, many of these compilers and libraries do not use this technology. As a result, on LC systems, there can be several shared libraries with exactly the same name that are actually different from, and possibly incompatible with, each other.

In order to make shared library version errors as visible as possible (i.e., dying at startup rather than silently getting the wrong library), LC intentionally puts no LC-specific paths in the default search path for shared libraries (e.g., in /etc/ld.so.conf). Our compilers and MPI wrappers have been modified to automatically include the appropriate rpaths (run-time paths) for the shared objects that the compilers or MPI automatically include. For any other shared library that your code links in and that is not in /usr/lib64, you probably need to specify an rpath.

Rpaths may be specified explicitly on the link line with one or more "-Wl,-rpath,<path>" arguments, or you can use the LC-specific linker option "-Wl,--auto_rpath" to help with this. If you specify "-Wl,--auto_rpath" on your link line, every -L<path> argument on the link line will automatically be added to your rpath, which is typically what is needed to pick up the proper shared library. Note that "-Wl,--auto_rpath" encodes all -L paths into your rpath, which may include paths LC does not control (such as /usr/gapps). (The "-Wl," prefix on these options tells the compiler to pass them to the linker without interpretation, and the "," after -rpath is replaced with a space.)
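
For example, a sketch of a link line for a library installed outside the default search path (the library name and path are hypothetical):

$ mpicc -o app app.c -L/usr/gapps/mylib/lib -lmylib -Wl,-rpath,/usr/gapps/mylib/lib

or, equivalently, using the LC-specific option:

$ mpicc -o app app.c -L/usr/gapps/mylib/lib -lmylib -Wl,--auto_rpath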

If your rpaths are not set properly, at runtime, you may get an error of the form:

./mixed_hello: error while loading shared libraries: libifport.so.5:
cannot open shared object file: No such file or directory

Although LD_LIBRARY_PATH can be used to specify where to search for shared objects, we strongly recommend encoding the paths you need into the executable instead, either by adding "-Wl,--auto_rpath" to your link line or by explicitly specifying paths with "-Wl,-rpath,<path>". By encoding the rpaths into the executable, you ensure that the executable will work as expected, no matter how LD_LIBRARY_PATH is set.

The RPATHs for an existing executable can be queried with "readelf -a <your_exe> | grep RPATH".

For example:

> readelf -a ./mixed_hello | grep RPATH
0x000000000000000f (RPATH) Library rpath:
[/usr/local/tools/icc-10.1.022/lib:/usr/local/tools/ifort-10.1.022/lib]

The SHARED LIBRARIES requested by an executable (or .so file) can be queried with "readelf -a <your_exe> | grep NEEDED". For example:

> readelf -a ./mixed_hello | grep NEEDED
0x0000000000000001 (NEEDED) Shared library: [libm.so.6]
0x0000000000000001 (NEEDED) Shared library: [libstdc++.so.6]
0x0000000000000001 (NEEDED) Shared library: [libgcc_s.so.1]
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x0000000000000001 (NEEDED) Shared library: [libdl.so.2]

The ACTUAL SHARED LIBRARIES used by your executable can be queried with "ldd <your_exe>".  This list usually is longer than the one above because shared libraries can pull in other shared libraries. For example:

> ldd ./mixed_hello
libm.so.6 => /lib64/libm.so.6 (0x00002aaaaacc6000)
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00002aaaaaf49000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002aaaab249000)
libc.so.6 => /lib64/libc.so.6 (0x00002aaaab458000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002aaaab7a8000)
/lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)

If your rpaths are not set properly, ldd will print out messages of the form:

> ldd ./mixed_hello2
libifport.so.5 => not found
libifcore.so.5 => not found

The ldd output is useful to determine that your application can find all its shared libraries and that it is picking up the versions you expect.

Additional Notes

SYS_TYPE note

Executables compiled on machines of one SYS_TYPE are not likely to run on machines of a different SYS_TYPE. The exception is that executables may run on both toss_4_x86_64_ib and toss_4_x86_64. Some software and libraries may be available for some SYS_TYPEs and not others. Some utilities and libraries may be in different places on machines of different SYS_TYPEs. You may need to modify your makefiles or build scripts when transitioning to a machine of a different SYS_TYPE.
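
For example, a build script can branch on $SYS_TYPE (a sketch; the compiler choices are illustrative):

case "$SYS_TYPE" in
    toss_4_x86_64*)    CC=gcc ;;   # TOSS 4 systems
    blueos_3_ppc64le*) CC=xlc ;;   # CORAL / CORAL EA systems
    *) echo "unsupported SYS_TYPE: $SYS_TYPE" >&2; exit 1 ;;
esac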

GNU compiler note

The default GNU compiler on TOSS 4 systems is an LC-built compiler rather than the Red Hat-provided one. If you would like to use the Red Hat-provided compiler, invoke it with its full path (e.g., /usr/bin/g++).
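
For example, to compare the two:

$ g++ --version             # the LC-built default selected via modules
$ /usr/bin/g++ --version    # the Red Hat-provided compiler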

CUDA note

The CUDA Toolkit is available on systems with compatible GPU accelerators. A tutorial on how to use GPUs on LC clusters is available, as is a CZ Confluence wiki with additional CUDA usage information.

Vectorization with the Intel Compiler note

Vectorization is becoming increasingly important for performance on modern processors with widening SIMD units. More information on how to vectorize with the Intel compilers can be found on LC's vectorization page.
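
For example, one way to ask the classic Intel compiler for a vectorization report (the source file name is hypothetical; see LC's vectorization page for fuller guidance):

$ icc -O3 -xHost -qopt-report=2 -qopt-report-phase=vec -c kernel.c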

Intel compiler mixed language note

When calling a Fortran routine compiled with ifort from C or C++, it is recommended that you call the Intel-specific initialization and finalization routines, for_rtl_init_ and for_rtl_finish_. This is particularly important when using ifort runtime options such as -check all. For example, you will first need to declare the following functions:

void for_rtl_init_(int *, char **);
int for_rtl_finish_();

If you are using C++, you will need to declare them as extern "C":

extern "C" {
    void for_rtl_init_(int *, char **);
    int for_rtl_ffffinish_();
};

From your C or C++ main you will then need to initialize and finalize with these functions, for example:

int main(int argc, char **argv)
{
    /* Initialize the Intel Fortran runtime before calling any Fortran code. */
    for_rtl_init_(&argc, argv);
    /* your code here... */
    /* Shut down the Fortran runtime, which flushes and closes Fortran units. */
    int io_status = for_rtl_finish_();
    return 0;
}

Because these routines are Intel-specific, you may want to encapsulate calls to them within #ifdef directives testing the Intel compiler's predefined macro __INTEL_COMPILER.

Threading Building Blocks (TBB) with the Intel compilers

TOSS 4 systems include system-installed TBB headers and libraries in /usr/include/tbb and /usr/lib64. These headers and libraries may conflict with the versions that are included with the Intel compilers. If you would like to use the TBB headers and libraries that are included with the Intel compilers, you will need to add the "-tbb" flag to your compile and link flags. Furthermore, we advise that you add the "-Wl,-rpath=/usr/tce/packages/intel-classic/intel-classic-19.1.2/tbb/lib/intel64/gcc4.8" flag to your link line (note the exact path may differ depending on which compiler version you are using). An alternative to the "-Wl,-rpath" flag is to set your LD_LIBRARY_PATH in the environment where you are running. You can run `ldd` on your executable to ensure that the proper library will be loaded.
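
For example, a sketch of compiling and linking with the compiler-provided TBB (the source file name is hypothetical, and the rpath should match your compiler version):

$ icpc -tbb -c my_tbb_code.cc
$ icpc -tbb -o my_tbb_app my_tbb_code.o -Wl,-rpath=/usr/tce/packages/intel-classic/intel-classic-19.1.2/tbb/lib/intel64/gcc4.8
$ ldd ./my_tbb_app | grep tbb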

Looking for older systems' compiler information?

Our deprecated compilers web page is here.