Both Slurm and IBM's CSM provide a way to launch the tasks (i.e., Linux processes) of a user's application in parallel across the resources allocated to a job. For Slurm, the command is srun; for CSM, the command is jsrun. This page presents their similarities and differences. It also details lrun, an LLNL-developed wrapper script for jsrun.
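As a rough illustration of the correspondence between the two launchers, the sketch below shows comparable invocations; the application name `./my_app` and all resource counts are placeholders, and real jobs will need flags appropriate to the cluster and allocation:

```shell
# Slurm: launch 8 tasks across 2 nodes
# (-N = node count, -n = total task count).
srun -N 2 -n 8 ./my_app

# CSM: a roughly comparable jsrun launch using 8 resource sets,
# with 1 task (-a), 1 CPU (-c), and 1 GPU (-g) per resource set.
jsrun -n 8 -a 1 -c 1 -g 1 ./my_app

# lrun accepts srun-like options and translates them into a
# jsrun invocation, easing the transition between systems.
lrun -N 2 -n 8 ./my_app
```

Note that jsrun reasons in terms of "resource sets" (bundles of CPUs, GPUs, and memory) rather than nodes and tasks, which is the main conceptual difference the page above explores.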
This page provides a quick-start guide to Slurm, providing examples of how to perform common tasks.
This page lists available online tutorials related to parallel programming and using LC's HPC systems.
Our Development Environment Software consists of compilers and preprocessors, debugging software, memory-analysis tools, profiling tools, tracing tools, and performance analysis tools.
Slurm is a combined batch scheduler and resource manager that allows users to run their jobs on Livermore Computing’s (LC) high performance computing (HPC) clusters. This document describes the process for submitting and running jobs under the Slurm Workload Manager.
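A typical Slurm submission is a batch script whose `#SBATCH` directives request resources; the sketch below is a minimal example in which the partition name, resource counts, and application name are all illustrative and should be replaced with values appropriate to the target cluster:

```shell
#!/bin/bash
#SBATCH --nodes=2             # number of nodes requested
#SBATCH --ntasks=8            # total number of tasks
#SBATCH --time=00:30:00       # wall-clock time limit (HH:MM:SS)
#SBATCH --partition=pbatch    # partition/queue name (site-specific)
#SBATCH --output=job.%j.out   # stdout file; %j expands to the job ID

# Launch the tasks on the allocated resources.
srun -n 8 ./my_app
```

The script is submitted with `sbatch myscript.sh`, which prints the assigned job ID and returns immediately; the job then runs when the scheduler grants the allocation.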
This page provides a brief overview of batch system concepts. It is provided as a foundation for understanding the system used to run jobs on the Sierra compute clusters.
Livermore Computing (LC) provides a large variety of High Performance Computing (HPC) clusters. However, only two batch schedulers run user jobs on those clusters. Slurm is the batch scheduler and resource manager on almost all LC clusters; the exception is the IBM Sierra clusters (aka the CORAL systems), which run the Spectrum LSF scheduler.
This page contains a list of Slurm commands.
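For orientation, a few of the most commonly used Slurm commands are sketched below; the job ID `123456` and filename are placeholders:

```shell
sbatch job.sh              # submit a batch script; prints the job ID
squeue -u $USER            # list your pending and running jobs
scancel 123456             # cancel a job by ID
sinfo                      # show partition and node states
scontrol show job 123456   # show detailed information for one job
sacct -j 123456            # accounting data for a completed job
```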