
Batch System Commands Cross-Reference

Command | Slurm | Flux* | LSF** | Moab**
submit a job | sbatch | flux batch | bsub | msub
submit an interactive job | salloc | flux alloc | bsub -Is [bash|csh] |
submit an xterm job | sxterm | flux submit xterm | bsub -XF xterm | mxterm
launch parallel tasks | srun | flux run / flux submit | mpirun/jsrun/lrun |
modify a pending job | scontrol update job jobid | | bmod jobid | mjobctl -m jobid
hold a pending job | scontrol hold jobid | flux job urgency jobid hold | bstop jobid | mjobctl -h jobid
release a held job | scontrol release jobid | flux job urgency jobid default | bresume jobid | mjobctl -r jobid
cancel a job | scancel jobid | flux job cancel jobid | bkill jobid | canceljob jobid
signal a job | scancel -s signal jobid | flux job kill -s signal jobid | bkill -s signal jobid | mjobctl -N signal=signal jobid
show detailed job information | scontrol show job jobid | flux job info jobid | bquery -l jobid | checkjob jobid
show job queue | squeue | flux jobs | bquery -u all | showq
show historical jobs | sacct | flux jobs -a | bhist |
show detailed historical job info | sacct -l -j jobid | flux job info jobid | bhist -l jobid |
show job priorities | sprio | flux jobs -ao '{id} {priority}' | bquery -aps | mdiag -p
show node resources | scontrol show node | flux resource list | bhosts | mdiag -n
show available queues | sinfo | flux resource list | bqueues | mdiag -c
show queue details | scontrol show partition | flux queue list | bqueues -l | mdiag -c -v
show charge accounts | sshare | flux account view-user | bugroup | mshare
show configuration settings | scontrol show conf | flux config get | bparams -a | mschedctl -l

* For more on running Flux, see the Flux tutorial.

**LC no longer uses Moab or LSF. These columns exist to help users update their scripts.
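
For example, a common submit / monitor / cancel sequence maps between Slurm and Flux as follows. Each command is taken from the table above; the script name myjob.sh and the job ID 1234 are placeholders:

    sbatch myjob.sh          # Slurm: submit a batch script
    flux batch myjob.sh      # Flux:  submit a batch script

    squeue -u $USER          # Slurm: show your jobs in the queue
    flux jobs                # Flux:  show your jobs

    scancel 1234             # Slurm: cancel job 1234
    flux job cancel 1234     # Flux:  cancel job 1234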

Batch System Submission Cross-Reference

Description | Slurm sbatch Option** | Flux batch Option | LSF bsub Option† | Moab msub Option*†
Resource Specs
node count (range) | -N, --nodes=number[-number] | -N, --nodes=number | -nnodes number | -l nodes=number
task count (processors on serial clusters) | -n, --ntasks=number | -n, --ntasks=number | -n number | -l ttc=number
cores per task | -c, --cpus-per-task=number | -c, --cores-per-task=number | specify in lrun / jsrun |
GPUs per task | -g, --gpus-per-task=number | -g, --gpus-per-task=number | specify in lrun / jsrun |
queue | -p, --partition=queue | -q, --queue=queue | -q queue | -q queue
specific feature(s) | -C, --constraint=list | --requires=host:properties | -R res_req | -l feature=val
memory required per node | --mem=MB | | | -l mem=val
memory required per CPU | --mem-per-cpu=MB | | | -l dmem=val
max memory per process | shell limits automatically propagated | | -M mem_limit |
max virtual memory per process | shell limits automatically propagated | | -v swap_limit |
generic resource(s) | --gres=list | | | -l gres=list
license(s) | -L, --licenses=license | | -Lp ls_project_name |
utilize reservation | --reservation=res_name | | -U res_name | -l advres=res_name
request specific compute hosts | -w, --nodelist=host_list | --requires=host:host_list | -m host_list | -l hostlist=host_list
exclude specific compute hosts | -x, --exclude=host_list | --requires=-host:host_list | -R 'select[hname!=host_name]' |
utilize hostfile for task distribution | -m, --distribution=arbitrary + SLURM_HOSTFILE env variable | --taskmap=hostfile:file_name | -hostfile file_name |
request exclusive node allocation | --exclusive | --exclusive | -x |
Time Specs
wall-clock time limit | -t, --time=[[DD-]hh:]mm[:ss] / --time-min=[[DD-]hh:]mm[:ss] | -t, --time-limit=minutes | -W [hh:]mm | -l walltime=[[DD:]hh:]mm[:ss]
run no sooner than specified time | --begin=[YYYY-MM-DDT]hh:mm | --begin-time=DATETIME | -b [[[YY:]MM:]DD:]hh:mm | -a [[[YY]MM]DD]hhmm
Associated Fields
bank account | -A, --account=account | -B, --bank=bank | -G user_group | -A account
user specified name for their job | -J, --job-name=job_name | --job-name=job_name | -J job_name | -N job_name
user specified project name / comment field | --comment=string | --setattr=user.comment=string | -Jd project_name (i.e., job description) | -l var:Project=project_name
workload characterization key | --wckey=wckey | | -P wckey | -l wckey=wckey
Quality of Service
exempt qos | --qos=exempt | | -q exempt | -l qos=exempt
expedite qos | --qos=expedite | | -q expedite | -l qos=expedite
standby qos | --qos=standby | | -q standby | -l qos=standby
gives user the power to lower their job's priority | --nice[=value] | --urgency=number | | -p
I/O
input file | -i, --input=file_name | --input=file_name | -i file_name, -is file_name |
output file | -o, --output=file_name | --output=template | -o file_name, -oo file_name | -o file_name
error output file | -e, --error=file_name | --error=template | -e file_name, -eo file_name | -e file_name
merge error output into file output | (default) | (default with --output=) | (default) | -j oe
append or overwrite error/output files | --open-mode=append|truncate | | default append, -oo/-eo overwrites |
copy files from submission to execution hosts | sbcast command | | -f local_file remote_file |
label output with task rank | -l, --label | -l, --label-io | |
Mail
send mail at job start | --mail-type=BEGIN | | -B | -m b
send mail at job completion | --mail-type=END | | -N | -m e
specify user who receives mail | --mail-user=user_name | | -u user_name |
suppress mail when default is to send | | | | -m n
Submit Behavior
submit an interactive job | salloc [slurm arguments] | flux alloc [flux arguments] | -Is [bash|csh] |
submit job in held state | -H, --hold | --urgency=0 | -H | -H
submit job and wait for completion | salloc command | --wait, --watch | -K |
submit a job array | -a, --array=indexes | flux bulksubmit | -J job_name[index_list] |
invoke "command" instead of submitting batch script | --wrap=command | --wrap | bsub command | echo "command" | msub
dependent job | -d, --dependency=jobid | --dependency=afterany:jobid | -w "ended(jobid)" | -l depend=jobid or -l depend=afterok:jobid
submit job to existing allocation / instance | srun --jobid=jobid ... | flux proxy jobid flux run ... | jsrun -J jobid ... |
Runtime Behavior
keep job running if a node fails | -k, --no-kill | | (default) | -l resfailpolicy=ignore
do not re-queue job after node failure | --no-requeue | | -rn (default) | -l resfailpolicy=cancel
re-queue job after node failure | --requeue | | -r | -l resfailpolicy=requeue
specify the working directory | -D, --workdir=directory | --cwd=path | -cwd directory |
export env variables to execution environment | --export=environment_variables|ALL|NONE | --env=rule, --env-remove=pattern, --env-file=file | -env "none"|"all[, var_name[, var_name] ...]" | -V
propagate limits to execution environment | --propagate[=rlimits] | | -ul (default) |
signal at remaining time | --signal=signal[@rem_time] | --sig=SIG@TIME | -wa signal -wt rem_time | -l signal=signal[@rem_time]
Extra Info
help | -h, --help / --usage | -h, --help | -h | --help
enable verbose output | -v, --verbose | -v, --verbose | |
display batch system version | scontrol show version | flux -V | -V |

* To expedite the transition to Slurm, use the moab2slurm utility to convert Moab msub job scripts to the Slurm sbatch equivalents. See the moab2slurm man page on any TOSS 3 machine for details.

** On Flux-only systems, the slurm2flux utility converts Slurm sbatch job scripts to their flux batch equivalents.

†LC no longer uses Moab or LSF. These columns exist to help users update their scripts.
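
For example, a Slurm submission that requests nodes, tasks, a time limit, a queue, a bank, and an output file maps to flux batch roughly as sketched below, using the options listed in the table above (the queue pbatch, bank mybank, script myjob.sh, and output file myjob.out are placeholders):

    # Slurm: 2 nodes, 8 tasks, 30-minute limit
    sbatch -N 2 -n 8 -t 30 -p pbatch -A mybank -o myjob.out myjob.sh

    # Flux: same request, per the table above (-N, -n, -t, -q, -B, --output)
    flux batch -N 2 -n 8 -t 30 -q pbatch -B mybank --output=myjob.out myjob.sh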

Environment Variables

Description | Slurm | LSF
Input Variables
default project name | | LSB_DEFAULTPROJECT
default queue | SBATCH_PARTITION | LSB_DEFAULTQUEUE
default user group (charge account) | | LSB_DEFAULT_USERGROUP
custom fields for job display command | SQUEUE_FORMAT | LSB_BQUERY_FORMAT
reference link to more info | sbatch, salloc, srun | bsub
Output Variables
job ID | SLURM_JOB_ID | LSB_JOBID
job name | SLURM_JOB_NAME | LSB_JOBNAME
job array index | SLURM_ARRAY_TASK_ID | LSB_JOBINDEX
list of hosts allocated to the job | SLURM_JOB_NODELIST | LSB_HOSTS
directory from which job was submitted | SLURM_SUBMIT_DIR | LS_SUBCWD
host from which job was submitted | SLURM_SUBMIT_HOST | LSB_SUB_HOST
reference link to more info | sbatch, salloc, srun | bsub
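
The output variables above can be mapped one-for-one when updating an old LSF script. A minimal sketch that tolerates either scheduler (the log file name job_history.log is a placeholder):

    #!/bin/bash
    # Fall back to the LSF variable when the Slurm one is not set.
    JOB_ID=${SLURM_JOB_ID:-$LSB_JOBID}
    JOB_NAME=${SLURM_JOB_NAME:-$LSB_JOBNAME}
    SUBMIT_DIR=${SLURM_SUBMIT_DIR:-$LS_SUBCWD}

    echo "job $JOB_NAME ($JOB_ID) submitted from $SUBMIT_DIR" >> job_history.log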