Batch System Cross-Reference Guides

On this page: Commands cross-reference | Submission cross-reference | LSF only | Environment Variables

Batch System Commands Cross-Reference

| Command | LSF | Moab | Slurm |
| --- | --- | --- | --- |
| submit a job | bsub | msub | sbatch |
| submit an interactive job | bsub -Is [bash\|csh] | | salloc |
| submit an xterm job | bsub -XF xterm | mxterm | sxterm |
| launch parallel tasks | mpirun/jsrun/lrun | | srun |
| modify a pending job | bmod jobid | mjobctl -m jobid | scontrol update job jobid |
| hold a pending job | bstop jobid | mjobctl -h jobid | scontrol hold jobid |
| release a held job | bresume jobid | mjobctl -r jobid | scontrol release jobid |
| cancel a job | bkill jobid | canceljob jobid | scancel jobid |
| signal a job | bkill -s signal jobid | mjobctl -N signal=signal jobid | scancel -s signal jobid |
| show detailed job information | bjobs -l jobid | checkjob jobid | scontrol show job jobid |
| show job queue | bjobs -u all | showq | squeue |
| show historical jobs | bhist | | sacct |
| show detailed historical job info | bhist -l jobid | | sacct -l -j jobid |
| show job priorities | bjobs -aps | mdiag -p | sprio |
| show node resources | bhosts | mdiag -n | scontrol show node |
| show available queues | bqueues | mdiag -c | sinfo |
| show queue details | bqueues -l | mdiag -c -v | scontrol show partition |
| show charge accounts | bugroup | mshare | sshare |
| show configuration settings | bparams -a | mschedctl -l | scontrol show conf |
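As a concrete illustration of the mappings above, the same submit-and-monitor workflow in LSF and Slurm might look like the following sketch; the script name and job ID are hypothetical:

```shell
# LSF: submit a batch script, inspect it, then cancel it
bsub < myscript.sh          # scheduler replies with a job ID, e.g. 12345
bjobs -l 12345              # detailed status of the job
bkill 12345                 # cancel the job

# Slurm equivalents
sbatch myscript.sh          # prints "Submitted batch job 12345"
scontrol show job 12345     # detailed status of the job
scancel 12345               # cancel the job
```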

Batch System Submission Cross-Reference

| Description | LSF bsub Option | Moab msub Option* | Slurm sbatch Option |
| --- | --- | --- | --- |
| **Resource Specs** | | | |
| node count (range) | -nnodes count | -l nodes=count | -N, --nodes=count[-count] |
| task count (processors on serial clusters) | -n number | -l ttc=number | -n, --ntasks=number |
| queue | -q queue | -q queue | -p, --partition=queue |
| specific feature(s) | -R res_req | -l feature=val | -C, --constraint=list |
| memory required per node | | -l mem=val | --mem=MB |
| memory required per CPU | | -l dmem=val | --mem-per-cpu=MB |
| max memory per process | -M mem_limit | | shell limits automatically propagated |
| max virtual memory per process | -v swap_limit | | shell limits automatically propagated |
| generic resource(s) | | -l gres=list | --gres=list |
| license(s) | -Lp ls_project_name | | -L, --licenses=license |
| utilize reservation | -U res_name | -l advres=res_name | --reservation=res_name |
| request specific compute hosts | -m host_list | -l hostlist=host_list | -w, --nodelist=host_list |
| exclude specific compute hosts | -R 'select[hname!=host_name]' | | -x, --exclude=host_list |
| utilize hostfile for task distribution | -hostfile file_name | | -m, --distribution=arbitrary + SLURM_HOSTFILE env variable |
| request exclusive node allocation | -x | | --exclusive |
| **Time Specs** | | | |
| wall-clock time limit | -W [hh:]mm | -l walltime=[[DD:]hh:]mm[:ss] | -t, --time=[[DD-]hh:]mm[:ss] / --time-min=[[DD-]hh:]mm[:ss] |
| run no sooner than specified time | -b [[[YY:]MM:]DD:]hh:mm | -a [[[YY]MM]DD]hhmm | --begin=[YYYY-MM-DDT]hh:mm |
| **Associated Fields** | | | |
| bank account | -G user_group | -A account | -A account, --account=account |
| user-specified name for their job | -J job_name | -N job_name | -J job_name, --job-name=job_name |
| user-specified project name / comment field | -Jd project_name (i.e., job description) | -l var:Project=project_name | --comment=string |
| workload characterization key | -P wckey | -l wckey=wckey | --wckey=wckey |
| **Quality of Service** | | | |
| exempt qos | -q exempt | -l qos=exempt | --qos=exempt |
| expedite qos | -q expedite | -l qos=expedite | --qos=expedite |
| standby qos | -q standby | -l qos=standby | --qos=standby |
| gives user the power to lower their job's priority | | -p | --nice[=value] |
| **I/O** | | | |
| input file | -i file_name, -is file_name | | -i, --input=file_name |
| output file | -o file_name, -oo file_name | -o file_name | -o file_name, --output=file_name |
| error output file | -e file_name, -eo file_name | -e file_name | -e file_name, --error=file_name |
| merge error output into file output | (default) | -j oe | (default) |
| append or overwrite error/output files | default append, -oo/-eo overwrites | | --open-mode=append\|truncate |
| copy files from submission to execution hosts | -f local_file remote_file | | sbcast command |
| **Mail** | | | |
| send mail at job start | -B | -m b | --mail-type=BEGIN |
| send mail at job completion | -N | -m e | --mail-type=END |
| specify user who receives mail | -u user_name | | --mail-user=user_name |
| suppress mail when default is to send | | -m n | |
| **Submit Behavior** | | | |
| submit an interactive job | -Is [bash\|csh] | | salloc [slurm arguments] |
| submit job in held state | -H | -H | -H, --hold |
| submit job and wait for completion | -K | | salloc command |
| submit a job array | -J job_name[index_list] | | -a, --array=indexes |
| invoke "command" instead of submitting batch script | bsub command | echo "command" \| msub | --wrap=command |
| dependent job | -w "ended(jobid)" | -l depend=jobid or -l depend=afterok:jobid | -d, --dependency=jobid |
| **Runtime Behavior** | | | |
| keep job running if a node fails | (default) | -l resfailpolicy=ignore | -k, --no-kill |
| do not re-queue job after node failure | -rn (default) | -l resfailpolicy=cancel | --no-requeue |
| re-queue job after node failure | -r | -l resfailpolicy=requeue | --requeue |
| specify the working directory | -cwd directory | | -D, --workdir=directory |
| export env variables to execution environment | -env "none" \| "all, [var_name[, var_name] ...]" | -V | --export=environment_variables \| ALL \| NONE |
| propagate limits to execution environment | -ul (default) | | --propagate[=rlimits] |
| signal at remaining time | -wa signal -wt rem_time | -l signal=signal[@rem_time] | --signal=signal[@rem_time] |
| **Extra Info** | | | |
| help | -h | --help | -h, --help / --usage |
| enable verbose output | | | -v, --verbose |
| display batch system version | -V | | scontrol show version |

* To expedite the transition to Slurm, use the moab2slurm utility to convert Moab msub job scripts to their Slurm sbatch equivalents. See the moab2slurm man page on any TOSS 3 machine for details.
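For scripts translated by hand rather than with moab2slurm, a minimal converted sbatch script might look like the following sketch; the partition, account, and application names are placeholders:

```shell
#!/bin/bash
# Moab directives and their Slurm equivalents (hypothetical job):
#   #MSUB -l nodes=4            ->  #SBATCH --nodes=4
#   #MSUB -l walltime=1:30:00   ->  #SBATCH --time=1:30:00
#   #MSUB -q pbatch             ->  #SBATCH --partition=pbatch
#   #MSUB -A myaccount          ->  #SBATCH --account=myaccount
#SBATCH --nodes=4
#SBATCH --time=1:30:00
#SBATCH --partition=pbatch
#SBATCH --account=myaccount
#SBATCH --output=myjob-%j.out    # %j expands to the job ID

# srun replaces the mpirun line used under Moab
srun -n 16 ./my_app
```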

LSF Only

| Description | bsub Option |
| --- | --- |
| invoke application-specific file | -a esub\|epsub |
| invoke application profile | -app profile |
| specify data requirements | -data reqs |
| specify user group for data access | -datagrp user_group_name |
| specify a per-process (soft) core file size limit | -C core_limit |
| limit the total CPU time the job can use | -c [hour:]minute[/host_name] |
| specify a per-process (soft) data segment size limit | -D data_limit |
| specify job pre-execution command | -E command |
| specify job post-execution command | -Ep command |
| specify a per-process (soft) file size limit | -F file_limit |
| submit to job group | -g job_group |
| impose cgroups memory and swap containment | -hl |
| specify a JSDL file | -jsdl or -jsdl_strict file_name |
| specify a login shell | -L shell |
| create job output directory | -outdir directory_name |
| specify a limit to the number of processes | -p process_limit |
| submit a job pack | -pack job_submission_file |
| specify automatic job requeue exit values | -Q exit_code(s) |
| specify a per-process (soft) stack segment size limit | -S stack_limit |
| specify a signal when a queue-level run window closes | -s signal |
| specify a service class (not quite the same as QoS) | -sla class_name |
| specify a thread limit | -T thread_limit |
| specify a termination deadline | -t time_limit |
| enable orphan job termination | -ti |
| enable output/error messages for interactive jobs | -tty |
| provide a runtime estimate to the scheduler | -We |
| submit using SSH X11 forwarding | -XF |
| use spooled file as the command file for the job | -Zs |
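A sketch of how a few of these LSF-only options can combine on one submission line; the paths, runtime estimate, and application name are placeholders:

```shell
# Hypothetical LSF submission using LSF-only options:
#   -E       run a pre-execution command before the job starts
#   -outdir  create a dedicated directory for job output
#   -We      give the scheduler a runtime estimate (30 minutes)
bsub -E "mkdir -p /tmp/$USER" \
     -outdir ./results \
     -We 30 \
     ./my_app
```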

Environment Variables

| Description | LSF | Slurm |
| --- | --- | --- |
| **Input Variables** | | |
| default project name | LSB_DEFAULTPROJECT | |
| default queue | LSB_DEFAULTQUEUE | SBATCH_PARTITION |
| default user group (charge account) | LSB_DEFAULT_USERGROUP | |
| custom fields for job display command | LSB_BJOBS_FORMAT | SQUEUE_FORMAT |
| reference link to more info | bsub | sbatch, salloc, srun |
| **Output Variables** | | |
| job ID | LSB_JOBID | SLURM_JOB_ID |
| job name | LSB_JOBNAME | SLURM_JOB_NAME |
| job array index | LSB_JOBINDEX | SLURM_ARRAY_TASK_ID |
| list of hosts allocated to the job | LSB_HOSTS | SLURM_JOB_NODELIST |
| directory from which job was submitted | LS_SUBCWD | SLURM_SUBMIT_DIR |
| host from which job was submitted | LSB_SUB_HOST | SLURM_SUBMIT_HOST |
| reference link to more info | bsub | sbatch, salloc, srun |
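A short sketch of how the Slurm output variables above can be read inside a running batch script (the job name and echoed text are illustrative; the LSF equivalents LSB_JOBID, LSB_JOBNAME, etc. are used the same way under LSF):

```shell
#!/bin/bash
#SBATCH --job-name=envdemo
# These variables are set by Slurm in the job's execution environment,
# so they are only defined once the job is running.
echo "Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME})"
echo "Submitted from ${SLURM_SUBMIT_HOST}:${SLURM_SUBMIT_DIR}"
echo "Allocated nodes: ${SLURM_JOB_NODELIST}"
```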