
LSF User Manual

IBM Spectrum LSF is a batch scheduler that allows users to run their jobs on Livermore Computing’s (LC) Sierra (CORAL) high performance computing (HPC) clusters.  IBM Cluster System Manager (CSM) is the resource manager for the Sierra systems.

This document is intended to present the basics of Spectrum LSF.  For the complete guide to using LSF, see the on-line user manual.

Computing Resources

An HPC cluster is made up of a number of compute hosts, each with a complement of processors, memory, GPUs and burst buffers.  Spectrum LSF refers to compute nodes as hosts.  The user submits jobs that specify the application(s) they want to run along with a description of the computing resources needed to run the application(s).

The processing units on hosts are the cores.  With the advent of Simultaneous Multithreading (SMT) architectures, a single core can have multiple hardware threads (sometimes known as hyper-threads).  Each processing element is generically called a CPU.  For systems without SMT, a CPU is a core.  For systems with SMT available and enabled, a CPU is a hardware thread.

The Batch Scheduler and Resource Manager

The batch scheduler and resource manager work together to run jobs on an HPC cluster.  The batch scheduler, sometimes called a workload manager, is responsible for finding and allocating the resources that fulfill the job’s request at the soonest available time.  When a job is scheduled to run, the scheduler instructs the resource manager to launch the application(s) across the job's allocated resources. This is also known as “running the job”.

The user can specify conditions for scheduling the job.  One condition is the completion (successful or unsuccessful) of an earlier submitted job.  Other conditions include the availability of a specific license or access to a specific file system.
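
For example, a job can be held until an earlier job completes successfully by passing a dependency expression at submission time (a minimal sketch; 12345 stands in for the earlier job's ID):

bsub -w "done(12345)" < job.script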

Anatomy of a Batch Job

A batch job requests computing resources and specifies the application(s) to launch on those resources along with any input data/options and output directives.  The user submits the job, usually in the form of a batch job script, to the batch scheduler.

LSF provides #BSUB directives which, when placed at the top of a job script, convey any of the bsub command line options.  When the same option is specified both on the command line and as a #BSUB directive in the job script, the bsub command line option takes precedence.

The batch job script is composed of four main components:

  • The interpreter used to execute the script
  • #BSUB directives that convey submission options
  • The setting of environment and/or script variables (if necessary)
  • The application(s) to execute along with any input arguments and options

Here is an example of a batch script that requests 8 hosts under the “science” charge account and launches 32 tasks of myApp across the 8 allocated hosts:

#!/bin/bash
# Request 32 tasks (slots) in total...
#BSUB -n 32
# ...with 4 tasks per host, yielding 32/4 = 8 hosts
#BSUB -R "span[ptile=4]"
# Charge the job to the science account
#BSUB -G science
# Launch 32 tasks of myApp across the 8 allocated hosts
jsrun -N 8 -n 32 myApp

When the job is scheduled to run, the resource manager will execute the batch job script on a login host.  For details on how to write job scripts, see this page.

Batch Jobs

The bsub command is used to submit a batch script to LSF.  It is designed to reject the job at submission time if there are requests or constraints that LSF cannot fulfill as specified.  This gives the user the opportunity to examine the job request and resubmit it with the necessary corrections.

The behavior of LSF's bsub submission utility is fundamentally different from that of other schedulers' submission commands (e.g., msub and sbatch).  You submit a job script by redirecting it to bsub's standard input, not by specifying the job script as a bsub argument.  For example, do this:

bsub < job.script

not this:

bsub job.script

If you invoke bsub job.script, none of the #BSUB directives in the script will be recognized.

Interactive Jobs

An interactive job is a job that returns a command line prompt (instead of running a script) when the job runs.  The bsub -Is [bash|csh] command is used to submit an interactive job to LSF.  When the job runs, a command line prompt will appear and the user can launch their application(s) across the computing resources which have been allocated to the job.
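
For example, to request an interactive bash prompt charged to the science account (a sketch using the options introduced above):

bsub -G science -Is bash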

Xterm Jobs

An xterm job is a job that launches an xterm window when the job runs.  The bsub -XF xterm command is used to submit an xterm job to LSF.  When the job runs, an xterm window appears on the desktop of the user who invoked bsub -XF xterm.  At that point, the user can launch their application(s) from the xterm window across the computing resources which have been allocated to the job.

Execution Environment

For each job type above, the user has the ability to define the execution environment.  This includes environment variable definitions as well as shell limits (bash ulimit or csh limit).

Use the bsub -env option to convey specific environment variables to the execution environment.
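
For example, the submission environment can be passed through while overriding a single variable (a sketch; MY_DEBUG_LEVEL is a hypothetical variable):

bsub -env "all, MY_DEBUG_LEVEL=2" < job.script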

By default, LSF imposes a set of user shell soft limits in the job's execution environment.  The bsub -ul option overrides this default behavior and passes along the limits in effect in the shell where bsub is invoked.  Users also have a number of bsub options that individually insert soft limits into the execution environment.  A configuration variable is in effect that makes MB the default unit when specifying such limits.  For example, bsub -M 1 will yield a ulimit of "max memory size (kbytes, -m) 1024" in the execution environment.

A number of bsub options can insert individual process limits into the execution environment (for example, -M for a memory limit; see the bsub manual page for the full list).  Bear in mind that any limit conveyed must be less than or equal to the system's hard limits.

Environment Variables

LSF recognizes and provides a number of environment variables.

The first category consists of environment variables that LSF inserts into the job's execution environment.  These convey information to the job script and application, such as the job ID (LSB_JOBID) and the task ID (LS_JOBPID).  For the complete list, see Environment variables set for job execution.

The next category consists of environment variables that the user can set in their environment to convey default options for every job they submit.  These include options such as the wall clock limit.  For the complete list, see Environment variable reference.

Finally, LSF allows the user to customize the output of the bjobs command by setting the LSB_BJOBS_FORMAT variable in the environment from which you invoke bjobs.  For more info on customizing the output of bjobs, see the bjobs -o info page.
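
For example, to make bjobs display a chosen set of columns by default (a sketch; the field names follow the bjobs -o syntax):

export LSB_BJOBS_FORMAT="jobid stat user queue submit_time"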

Job Output

LSF merges the job's error and output by default and inserts job report information into the job's output. This information includes the submitting user and host, the execution host, the CPU time (user plus system time) used by the job, and the exit status.  When the job completes, the standard LSF behavior is to email the job's output to the user.

LC installs a job submit filter to automatically generate an output file by default instead of sending email.  You can always specify an output and error file to the bsub command using the -o and -e options respectively.  LSF will append the job's output to the specified file(s).  If you want the output to overwrite any existing files, use the -oo and -ee options instead.
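
For example, the following writes output and error files named after the job ID (%J is expanded by LSF to the job ID):

bsub -o myjob.%J.out -e myjob.%J.err < job.script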

If the final character of the -o and -e options is a slash (/), LSF will create a directory with that name and write the job's output and error files to that directory using the <job_ID>.out and <job_ID>.err naming formats.

Serial vs. Parallel jobs

Parallel jobs launch applications composed of many processes (aka tasks) that communicate with each other, typically over a high-speed switch.  Serial jobs launch one or more tasks that work independently on separate problems.

Parallel applications must be launched by the jsrun command.  Serial applications can also be launched with jsrun, but this is not required.

As ATS systems, the Sierra clusters are ideal for running parallel jobs.

Jobs and Job Steps

The job requests computing resources and when it runs, the scheduler selects and allocates those resources to the job.  The invocation of the application happens within the batch script, or at the command line for interactive and xterm jobs.

When an application is launched using jsrun, it is called a “job step”.  The jsrun command causes the simultaneous launching of multiple tasks of a single application.  Arguments to jsrun specify the number of tasks to launch as well as the number of hosts (and CPUs and memory) on which to launch the tasks.

Multiple jsrun invocations can run sequentially or in parallel (by backgrounding them).  Furthermore, the number of hosts specified by jsrun (the -N option) can be less than, but not more than, the number of hosts (and CPUs and memory) allocated to the job.
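
For example, two job steps can share a job's 8-host allocation by backgrounding their jsrun invocations (a sketch; appA and appB are hypothetical applications):

jsrun -N 4 -n 16 appA &
jsrun -N 4 -n 16 appB &
wait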

Job Queues

A cluster is typically busy running jobs and will probably not be able to run a job the moment it is submitted, so the job is placed in a queue.  Specific compute host resources are defined for every job queue.

Each queue can be configured with a set of limits which specify the requirements for every job that can run in that queue.  These limits include job size, wall clock limits, and the users who are allowed to run in that queue.

An LC convention is to have the following two queues on every cluster:

  • pbatch - the production queue for running production jobs.
  • pdebug - the debug queue providing quick turnaround for shorter and smaller jobs.

The bqueues command lists all the queues currently configured.  bqueues -l provides details about each queue.

The bjobs -u all command lists all the jobs currently in the system, one line per job.
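
For example, a short job can be submitted to the debug queue with the -q option:

bsub -q pdebug < job.script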

Quality of Service (QoS)

LSF does not support the QoS model.  Instead, additional queues have been added to deliver QoS:

  • pbatch (nominal priority and standard job size and wall clock time limits)
  • expedite (higher job priority and an exemption from job size and wall clock time limits)
  • exempt (normal job priority and an exemption from job size and wall clock time limits)
  • standby (below normal job priority and an exemption from job size and wall clock time limits)

Only certain users are granted the permission to submit jobs to the exempt and expedite queues.  Users are typically granted access to the pbatch and standby queues.

Charge Accounts

Users must request a charge (aka bank) account for each job they submit or have a valid charge account assigned by default.  If the user is not assigned to any charge accounts, they cannot submit a job to the batch system.  Computing resources allocated to a job are tracked and charged to the job’s specified charge account.

The user group serves as the charge account in LSF.  One specifies a user group using the bsub -G option.  LSF provides the LSB_DEFAULT_USERGROUP environment variable to convey a user group by default at bsub submission time.  If LSB_DEFAULT_USERGROUP is not set in the shell where bsub is invoked, users must specify a user group either using the bsub -G  <user_group> option or adding a #BSUB -G <user_group> directive to the start of their job script.
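
To convey a default charge account for every submission, the variable can be set in the login shell (a sketch, reusing the science account from the example above):

export LSB_DEFAULT_USERGROUP=science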

As with other LC systems, shares are assigned to each user group (bank account) and each user.  Users are nominally assigned one share of each user group they have membership in.  The  bugroup -l command provides a listing of all the defined user groups, their assigned shares and the users who belong to each user group.

Job Priority

Jobs are ordered in the queue of pending jobs based on a number of factors.  The scheduler always looks to schedule the job that is at the top of the queue.  The scheduler is also configured to schedule jobs lower in the queue if doing so does not delay the start of any higher priority job.  This is known as conservative backfill.

The active factors that contribute to a job’s priority can be seen by invoking the bjobs -aps command.  These factors include:

  • Fair-share:  a number derived from the difference between the shares of the cluster that have been allotted to a user for a specific charge account and the usage accrued to the user and charge account, as well as any parent charge accounts.
  • Job size:  a number proportional to the quantity of computing resources the job has requested.
  • Age:  a number proportional to the period of time that has elapsed since the job was submitted to the queue.  Note:  time during which a queued job is in a held state does not contribute to the age factor.

For a more detailed description of the algorithms for calculating job priority, see LSF Job Priority.

Job Status

Most of a job’s characteristics can be seen by invoking bjobs -l <jobID>.  LSF captures and reports the exit code of the job script (for bsub jobs), as well as the signal that terminated the job when the job was terminated by a signal.

A job’s record remains in LSF’s memory for 5 minutes after it completes.  bjobs -l <jobID> will return “Job <jobID> is not found” for a job that completed more than 5 minutes ago.  At that point, one must invoke the bhist command to retrieve the job’s record from the LSF database.

Modifying a Batch Job

Many of the batch job specifications can be modified after a batch job is submitted and before it runs.  Typical fields that can be modified include the job size (number of nodes), queue, and wall clock limit.  Job specifications cannot be modified by the user once the job enters the RUN state.

The bmod command is used to modify a job's specifications.  For example:

  • bjobs -l <jobID> displays all of a job's characteristics
  • bmod -G science <jobID> changes the job's account to the science account
  • bmod -q pbatch <jobID> changes the job's queue to the pbatch queue

Holding and Releasing a Batch Job

If a user's job is in the pending state waiting to be scheduled, the user can prevent the job from being scheduled by invoking the bstop <jobID> command to place the job into a PSUSP state.  Jobs in the held state do not accrue any job priority based on queue wait time.  Once the user is ready for the job to become a candidate for scheduling once again, they can release the job using the bresume <jobID> command.

Signalling and Cancelling a Batch Job

Pending jobs can be cancelled (withdrawn from the queue) using the bkill command (bkill <jobID>).  The bkill command can also be used to terminate a running job.  The default behavior is to issue the job a SIGTERM, wait 30 seconds, and if processes from the job continue to run, issue a SIGKILL.

The -s option of the bkill command (bkill -s <signal> <jobID>) allows the user to issue any signal to a running job.

Job States

The basic job states are these:

  • PEND - the job is in the queue, waiting to be scheduled
  • PSUSP - the job was submitted, but was put in the suspended state (ineligible to run)
  • RUN - the job has been granted an allocation.  If it’s a batch job, the batch script is executing
  • DONE - the job has completed successfully

For the complete list, see About job states.

Pending Reasons

A pending job can remain pending for a number of reasons:

  • Dependency - the pending job is waiting for another job to complete
  • Priority - the job is not high enough in the queue
  • Resources - the job is high in the queue, but there are not enough resources to satisfy the job’s request

For the complete list, see the "Pending jobs" section under About job states.

Displaying Computing Resources

As stated above, computing resources are hosts, CPUs, memory, and generic resources like GPUs.  The resources of each compute host can be seen by running the bhosts and lshosts commands.  The characteristics of each queue can be seen by running the bqueues command.  Finally, a load summary report for each host can be seen by running lsload.

User Permissions and Limits

The charge accounts each user is permitted to use can be seen by running the bugroup command.  In addition, the limits associated with the use of each queue can be seen by running bqueues -l.

Job Statistics and Accounting

The bhist command displays historical information about jobs.
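
For example, to retrieve the detailed record of a job that has left LSF's memory (a sketch; 12345 stands in for the job ID):

bhist -l 12345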

Time Remaining in an Allocation

If a running application overruns its wall clock limit, all of its work could be lost.  To prevent such an outcome, LSF can send a signal to the job shortly before its allocation expires.

Use the bsub -wa <signal> -wt <rem_time> option to request a signal (like USR1 or USR2) at rem_time number of seconds before the allocation expires.  The application must register a signal handler for the requested signal in order to receive it.  The handler then takes the necessary steps to write a checkpoint file and terminate gracefully.
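
Here is a minimal sketch of such a handler in a bash job script (myApp stands in for the user's application, and the checkpoint logic is application-specific):

#!/bin/bash
# Submitted with, e.g.: bsub -wa USR1 -wt <rem_time> < job.script
# Trap the warning signal and checkpoint before the allocation expires.
checkpoint() {
    echo "USR1 received: writing checkpoint before the allocation expires"
    # application-specific checkpoint logic goes here
    exit 0
}
trap checkpoint USR1
jsrun -N 8 -n 32 myApp &
wait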

Specifying a Host Count with bsub

You will notice in the bsub options that there does not appear to be a way to specify a host count request for the job.  LSF has traditionally only supported one job step per job.  LSF divides a cluster's compute resources into "slots", where each slot is (by default) one core and is allocated to one task.  In the LSF model, you specify to bsub the number of tasks for the job (job step), and LSF finds and allocates an equal number of slots, on however many hosts are needed.

The Sierra clusters present a new frontier for LSF whereby multiple job steps can be launched on a job's allocated resources.  For LSF to work on Sierra systems, specifying a task count with bsub is not as important as specifying the number of hosts to allocate to the job.  With multiple job steps running within a Sierra resource allocation, each with an independently specified number of tasks, the old model of specifying one task count for the entire job breaks down.  LSF will soon be releasing an enhancement to address this issue.

In the meantime...

Until then, a request for a given host count must be built from the task-count options, as in the example above: the -n option specifies the total number of tasks (slots), and -R "span[ptile=<n>]" specifies the number of tasks per host.  The number of hosts allocated is the total task count divided by the ptile value.
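
For example, to request 8 hosts (a sketch reusing the numbers from the earlier example):

bsub -n 32 -R "span[ptile=4]" -G science < job.script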