Computing Resources

Forms are provided for requests that are not available through the LC IdM System. Interactive HTML and PDF versions can be completed on the Web; non-interactive PDF versions must be printed out and completed by hand.

This page lists available online tutorials related to parallel programming and using LC's HPC systems. NOTE: archived tutorials are no longer updated and may contain broken links and other QA issues.

Both Slurm and IBM's CSM provide a way to launch tasks (i.e., Linux processes) of a user's application in parallel across the resources allocated to a job. For Slurm, the command is srun; for CSM, the command is jsrun. This page presents their similarities and their differences. It also details lrun, an LLNL-developed wrapper script for jsrun.
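As a rough sketch of the parallel-launch commands described above, the following illustrative invocations launch 8 tasks of a hypothetical application `./my_app` (the application name, node counts, and task counts are placeholders, not from the source):

```shell
# Slurm: srun launches tasks across the job's allocated resources.
# Here: 2 nodes, 8 total tasks.
srun -N 2 -n 8 ./my_app

# CSM: jsrun expresses the same request in terms of "resource sets".
# Here: 8 resource sets (-n 8) with 1 task each (-a 1).
jsrun -n 8 -a 1 ./my_app

# lrun, the LLNL wrapper around jsrun, accepts simpler srun-like flags:
# 2 nodes (-N 2) with 4 tasks per node (-T 4), i.e. 8 tasks total.
lrun -N 2 -T 4 ./my_app
```

These commands only run inside a job allocation on the respective systems; the exact flags and defaults are documented on the page this summary links to.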

Slurm is a combined batch scheduler and resource manager that allows users to run their jobs on Livermore Computing’s (LC) high performance computing (HPC) clusters.  This document describes the process for submitting and running jobs under the Slurm Workload Manager.
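A minimal Slurm batch script, as a sketch of the submission process described above (the job name, partition, resource counts, and application are illustrative placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=my_job      # name shown in the queue
#SBATCH --nodes=2              # number of nodes to allocate
#SBATCH --ntasks=8             # total number of tasks
#SBATCH --time=00:30:00        # wall-clock limit (HH:MM:SS)
#SBATCH --partition=pbatch     # partition/queue name (site-specific placeholder)

# Launch the application in parallel across the allocated resources.
srun ./my_app
```

Such a script would be submitted with `sbatch my_job.sh`; Slurm then schedules it and runs it when the requested resources become available.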

IBM Spectrum LSF is a batch scheduler that allows users to run their jobs on Livermore Computing's (LC) Sierra (CORAL) high performance computing (HPC) clusters. IBM Cluster System Management (CSM) is the resource manager for the Sierra systems.
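A comparable LSF job script for a Sierra-class system might look like the following sketch (the job name, queue, node count, and application are illustrative placeholders):

```shell
#!/bin/bash
#BSUB -J my_job        # job name
#BSUB -nnodes 2        # number of nodes (CORAL-specific LSF option)
#BSUB -W 30            # wall-clock limit in minutes
#BSUB -q pbatch        # queue name (site-specific placeholder)

# On CSM-managed systems, parallel tasks are launched with jsrun
# (or the lrun wrapper): 8 resource sets with 1 task each.
jsrun -n 8 -a 1 ./my_app
```

Such a script would be submitted with `bsub < my_job.sh`, with LSF handling scheduling and CSM managing the resources.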

This page provides a quick-start guide to Slurm, with examples of how to perform common tasks.
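The common tasks such a quick-start typically covers can be sketched with a handful of standard Slurm commands (the script name and job ID below are illustrative):

```shell
sinfo                  # show partitions and node states
sbatch my_job.sh       # submit a batch script; prints the assigned job ID
squeue -u $USER        # list your pending and running jobs
scancel 123456         # cancel a job by ID (illustrative ID)
sacct -j 123456        # show accounting records for a completed job
```

Each command supports many more options; the linked quick-start guide is the place to look for LC-specific usage.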

The Networking and Testbeds project provides research, performance testing, capability testing, and analysis for the file system, network, and interconnect subsystems in support of current and future systems and environments.

NICE DCV is a Virtual Network Computing (VNC) server that provides a securely authenticated and encrypted way for users to create and view a virtual desktop that persists even if no client is actually viewing it.

For a list of the current Sandia platforms, with links to documentation, please see https://computing.sandia.gov/platforms.

This page provides a brief overview of batch system concepts. It is provided as a foundation for understanding how jobs are run on the Sierra compute clusters.