Batch System Cross-Reference Guides

On this page: Commands cross-reference | Submission cross-reference | LSF only | Environment Variables

Batch System Commands Cross-Reference

Command | LSF | Moab | Slurm
submit a job | bsub | msub | sbatch
submit an interactive job | bsub -Is [bash|csh] | n/a | salloc
submit an xterm job | bsub -XF xterm | mxterm | sxterm
launch parallel tasks | mpirun/jsrun | n/a | srun
modify a pending job | bmod <jobID> | mjobctl -m <jobID> | scontrol update job
hold a pending job | bstop <jobID> | mjobctl -h <jobID> | scontrol hold <jobID>
release a held job | bresume <jobID> | mjobctl -r <jobID> | scontrol release <jobID>
cancel a job | bkill <jobID> | canceljob <jobID> | scancel <jobID>
signal a job | bkill -s <signal> <jobID> | mjobctl -N signal=<signal> <jobID> | scancel -s <signal> <jobID>
show detailed job information | bjobs -l <jobID> | checkjob <jobID> | scontrol show job <jobID>
show job queue | bjobs -u all | showq | squeue
show historical jobs | bhist | n/a | sacct
show detailed historical job info | bhist -l <jobID> | n/a | sacct -l -j <jobID>
show job priorities | bjobs -aps | mdiag -p | sprio
show node resources | bhosts | mdiag -n | scontrol show node
show available queues | bqueues | mdiag -c | sinfo
show queue details | bqueues -l | mdiag -c -v | scontrol show partition
show charge accounts | bugroup | mshare | sshare
show configuration settings | bparams -a | mschedctl -l | scontrol show conf
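
For example, the same day-to-day monitoring tasks map across the three schedulers as follows; the job ID 12345 is a placeholder:

  # LSF
  bjobs -u all              # show job queue
  bjobs -l 12345            # detailed info for job 12345
  bkill 12345               # cancel job 12345

  # Moab
  showq                     # show job queue
  checkjob 12345            # detailed info for job 12345
  canceljob 12345           # cancel job 12345

  # Slurm
  squeue                    # show job queue
  scontrol show job 12345   # detailed info for job 12345
  scancel 12345             # cancel job 12345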

Batch System Submission Cross-Reference

Description | LSF bsub Option | Moab msub Option* | Slurm sbatch Option
Resource Specs
node count (range) | n/a | -l nodes=<count> | -N, --nodes=<range>
task count (processors on serial clusters) | -n <number> | -l ttc=<number> | -n, --ntasks=<number>
queue(s) | -q <queue(s)> | -q <queue> | -p, --partition=<queue(s)>
specific feature(s) | -R <res_req> | -l feature=<val> | -C, --constraint=<list>
memory required per node | n/a | -l mem=<val> | --mem=<MB>
memory required per CPU | n/a | -l dmem=<val> | --mem-per-cpu=<MB>
max memory per process | -M <mem_limit> | n/a | shell limits automatically propagated
max virtual memory per process | -v <swap_limit> | n/a | shell limits automatically propagated
generic resource(s) | n/a | -l gres=<list> | --gres=<list>
license(s) | -Lp ls_project_name | n/a | -L, --licenses=<license>
utilize reservation | -U <res_name> | -l advres=<res_name> | --reservation=<res_name>
request specific compute hosts | -m <host_list> | -l hostlist=<host_list> | -w, --nodelist=<host_list>
exclude specific compute hosts | -R 'select[hname!=<host_name>]' | n/a | -x, --exclude=<host_list>
utilize hostfile for task distribution | -hostfile <file_name> | n/a | -m, --distribution=arbitrary + SLURM_HOSTFILE variable
request exclusive node allocation | -x | n/a | --exclusive
Time Specs
wall-clock time limit | -W [hour:]minute | -l walltime=<val> | -t, --time=<time> / --time-min=<time>
run no sooner than specified time | -b [[[YY:]MM:]DD:]hh:mm | -a [[[YY]MM]DD]hhmm | --begin=[YYYY-MM-DDT]hh:mm
Associated Fields
bank account | -G <user_group> | -A | -A, --account=<account>
user-specified name for the job | -J <job_name> | -N | -J, --job-name=<job_name>
user-specified project name / comment field | -P <project_name> | -l var:Project=<project_name> | --comment=<string>
workload characterization key | -Jd <wckey> (i.e., job description) | -l wckey=<wckey> | --wckey=<wckey>
Quality of Service
exempt qos | n/a | -l qos=exempt | --qos=exempt
expedite qos | n/a | -l qos=expedite | --qos=expedite
standby qos | n/a | -l qos=standby | --qos=standby
allow the user to lower their job's priority | -sp <value> | -p | --nice[=value]
I/O
input file | -i <file_name> or -is <file_name> | n/a | -i, --input=<file_name>
output file | -o <file_name> | -o | -o, --output=<file_name>
error output file | -e <file_name> | -e | -e, --error=<file_name>
merge error output into output file | (default) | -j oe | (default)
append or overwrite error/output files | default append; -oo/-eo overwrite | n/a | --open-mode=append|truncate
copy files from submission to execution hosts | -f <local_file> <remote_file> | n/a | sbcast command
Mail
send mail at job start | -B | -m b | --mail-type=BEGIN
send mail at job completion | -N | -m e | --mail-type=END
specify user who receives mail | -u <user_name> | n/a | --mail-user=<user_name>
suppress mail when default is to send | n/a | -m n | n/a
Submit Behavior
submit an interactive job | -Is [bash|csh] | n/a | salloc command
submit job in held state | -H | -H | -H, --hold
submit job and wait for completion | -K | n/a | salloc command
submit a job array | -J job_name[index_list] | n/a | -a, --array=<indexes>
invoke "command" instead of submitting batch script | bsub "command" | echo "command" | msub | --wrap=<command>
dependent job | -w <dependency_expression> | -l depend=<job-ID> or -l depend=afterok:<job-ID> | -d, --dependency=<dependency_list>
Runtime Behavior
keep job running if a node fails | (default) | -l resfailpolicy=ignore | -k, --no-kill
do not re-queue job after node failure | -rn (default) | -l resfailpolicy=cancel | --no-requeue
re-queue job after node failure | -r | -l resfailpolicy=requeue | --requeue
specify the working directory | -cwd <directory> | n/a | -D, --workdir=<directory>
export environment variables to execution environment | -env <"none" | "all, [var_name[, var_name] ...]> | -V | --export=<environment variables | ALL | NONE>
propagate limits to execution environment | -ul (default) | n/a | --propagate[=rlimits]
signal at remaining time | -wa <signal> -wt <rem_time> | -l signal=<sig>@[rem_time] | --signal=<sig_num>[@<rem_time>]
Extra Info
help | -h | --help | -h, --help / --usage
enable verbose output | n/a | n/a | -v, --verbose
display batch system version | -V | n/a | scontrol show version

* To expedite the transition to Slurm, use the moab2slurm utility to convert Moab msub job scripts to their Slurm sbatch equivalents. See the moab2slurm man page on any TOSS3 machine for details.
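
As an illustration of how the options above combine in practice, the same minimal 4-task job is sketched below as an LSF script and as a Slurm script. The queue pbatch, bank mybank, job name testjob, and executable ./my_app are placeholder values, not site defaults.

  #!/bin/bash
  #BSUB -q pbatch          # queue
  #BSUB -G mybank          # bank account
  #BSUB -n 4               # task count
  #BSUB -W 30              # wall-clock limit ([hour:]minute)
  #BSUB -J testjob         # job name
  #BSUB -o testjob.%J.out  # output file (%J expands to the job ID)
  mpirun ./my_app

  #!/bin/bash
  #SBATCH -p pbatch           # partition (queue)
  #SBATCH -A mybank           # bank account
  #SBATCH -n 4                # task count
  #SBATCH -t 30               # time limit (minutes)
  #SBATCH -J testjob          # job name
  #SBATCH -o testjob.%j.out   # output file (%j expands to the job ID)
  srun ./my_app

The LSF version would typically be submitted with bsub < testjob.lsf, the Slurm version with sbatch testjob.sh.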

LSF Only

Description | LSF bsub Option
invoke application-specific file | -a <esub or epsub>
invoke application profile | -app <profile>
specify data requirements | -data <reqs>
specify user group for data access | -datagrp <user_group_name>
per-process (soft) core file size limit | -C <core_limit>
limit the total CPU time the job can use | -c [hour:]minute[/host_name | /host_model]
specify a per-process (soft) data segment size limit | -D <data_limit>
specify job pre-execution command | -E <command>
specify job post-execution command | -Ep <command>
specify a per-process (soft) file size limit | -F <file_limit>
submit to a job group | -g <job_group>
impose cgroups memory and swap containment | -hl
specify a JSDL file | -jsdl or -jsdl_strict <file_name>
specify a login shell | -L <shell>
create job output directory | -outdir <directory_name>
specify a limit to the number of processes | -p <process_limit>
submit a job pack | -pack <job_submission_file>
specify automatic job requeue exit values | -Q <exit_code(s)>
specify a per-process (soft) stack segment size limit | -S <stack_limit>
specify a signal when a queue-level run window closes | -s <signal>
specify a service class (not quite the same as QoS) | -sla <class_name>
specify a thread limit | -T <thread_limit>
specify a termination deadline | -t <time_limit>
enable orphan job termination | -ti
enable output/error messages for interactive jobs | -tty
provide a runtime estimate to the scheduler | -We
submit using SSH X11 forwarding | -XF
use spooled file as the command file for the job | -Zs
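
Several of these LSF-only options are commonly combined on a single command line. In the sketch below the queue pbatch, job group /myproject/tests, output directory, and executable are illustrative values only:

  bsub -q pbatch -n 8 -W 60 -We 45 -g /myproject/tests -outdir /usr/workspace/$USER/run1 -o run1.%J.out ./my_app

Here -We 45 gives the scheduler a 45-minute runtime estimate within the 60-minute limit, -g files the job under a job group, and -outdir creates the directory that will hold the job's output.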

Environment Variables

Description | LSF | Slurm
Input Variables
default project name | LSB_DEFAULTPROJECT | n/a
default queue | LSB_DEFAULTQUEUE | SBATCH_PARTITION
default user group (charge account) | LSB_DEFAULT_USERGROUP | n/a
custom fields for job display command | LSB_BJOBS_FORMAT | SQUEUE_FORMAT
reference link to more info | bsub | sbatch, salloc, srun
Output Variables
job ID | LSB_JOBID | SLURM_JOB_ID
job name | LSB_JOBNAME | SLURM_JOB_NAME
job array index | LSB_JOBINDEX | SLURM_ARRAY_TASK_ID
list of hosts allocated to the job | LSB_HOSTS | SLURM_JOB_NODELIST
directory from which job was submitted | LS_SUBCWD | SLURM_SUBMIT_DIR
host from which job was submitted | LSB_SUB_HOST | SLURM_SUBMIT_HOST
reference link to more info | bsub | sbatch, salloc, srun
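
Inside a batch script the output variables can be referenced like any other environment variable. This short Slurm sketch simply logs where and as what the job ran, with the corresponding LSF variables noted in comments:

  #!/bin/bash
  #SBATCH -J envdemo
  # SLURM_JOB_ID and SLURM_JOB_NAME correspond to LSB_JOBID and LSB_JOBNAME under LSF
  echo "Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) submitted from ${SLURM_SUBMIT_DIR}"   # LSF: LS_SUBCWD
  echo "Allocated nodes: ${SLURM_JOB_NODELIST}"                                       # LSF: LSB_HOSTS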