Batch System Cross-Reference Guides

On this page: Commands cross-reference | Submission cross-reference | LSF only | Environment Variables

Batch System Commands Cross-Reference

Command | LSF | Moab | Slurm | Flux*
------- | --- | ---- | ----- | -----
submit a job | bsub | msub | sbatch | flux mini batch
submit an interactive job | bsub -Is [bash|csh] |  | salloc | flux mini alloc
submit an xterm job | bsub -XF xterm | mxterm | sxterm | flux mini submit xterm
launch parallel tasks | mpirun/jsrun/lrun |  | srun | flux mini run / flux mini submit
modify a pending job | bmod jobid | mjobctl -m jobid | scontrol update job jobid | 
hold a pending job | bstop jobid | mjobctl -h jobid | scontrol hold jobid | flux job urgency jobid hold
release a held job | bresume jobid | mjobctl -r jobid | scontrol release jobid | flux job urgency jobid default
cancel a job | bkill jobid | canceljob jobid | scancel jobid | flux job cancel jobid
signal a job | bkill -s signal jobid | mjobctl -N signal=signal jobid | scancel -s signal jobid | flux job kill -s signal jobid
show detailed job information | bjobs -l jobid | checkjob jobid | scontrol show job jobid | flux job info jobid
show job queue | bjobs -u all | showq | squeue | flux jobs
show historical jobs | bhist |  | sacct | flux jobs -a
show detailed historical job info | bhist -l jobid |  | sacct -l -j jobid | flux job info jobid
show job priorities | bjobs -aps | mdiag -p | sprio | flux jobs -ao '{id} {priority}'
show node resources | bhosts | mdiag -n | scontrol show node | flux resource list / flux hwloc info
show available queues | bqueues | mdiag -c | sinfo | 
show queue details | bqueues -l | mdiag -c -v | scontrol show partition | 
show charge accounts | bugroup | mshare | sshare | 
show configuration settings | bparams -a | mschedctl -l | scontrol show conf | 

* For more on running Flux, see the new tutorial.
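
For example, pulling the hold and release rows out of the table above, pausing and then resuming a pending job looks like this under each system (1234 is a placeholder job ID):

    # hold a pending job
    bstop 1234                      # LSF
    mjobctl -h 1234                 # Moab
    scontrol hold 1234              # Slurm
    flux job urgency 1234 hold      # Flux

    # release the held job
    bresume 1234                    # LSF
    mjobctl -r 1234                 # Moab
    scontrol release 1234           # Slurm
    flux job urgency 1234 default   # Flux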

Batch System Submission Cross-Reference

Description | LSF bsub Option | Moab msub Option* | Slurm sbatch Option | Flux mini Option
----------- | --------------- | ----------------- | ------------------- | ----------------
Resource Specs |  |  |  | 
node count (range) | -nnodes number | -l nodes=number | -N, --nodes=number[-number] | -N, --nodes=number
task count (processors on serial clusters) | -n number | -l ttc=number | -n, --ntasks=number | -n, --ntasks=number
cores per task | specify in lrun / jsrun |  | -c, --cpus-per-task=number | -c, --cores-per-task=number
GPUs per task | specify in lrun / jsrun |  | --gpus-per-task=number | -g, --gpus-per-task=number
queue | -q queue | -q queue | -p, --partition=queue | --setattr=system.queue=queue
specific feature(s) | -R res_req | -l feature=val | -C, --constraint=list | 
memory required per node |  | -l mem=val | --mem=MB | 
memory required per CPU |  | -l dmem=val | --mem-per-cpu=MB | 
max memory per process | -M mem_limit |  | shell limits automatically propagated | 
max virtual memory per process | -v swap_limit |  | shell limits automatically propagated | 
generic resource(s) |  | -l gres=list | --gres=list | 
license(s) | -Lp ls_project_name |  | -L, --licenses=license | 
utilize reservation | -U res_name | -l advres=res_name | --reservation=res_name | 
request specific compute hosts | -m host_list | -l hostlist=host_list | -w, --nodelist=host_list | 
exclude specific compute hosts | -R 'select[hname!=host_name]' |  | -x, --exclude=host_list | 
utilize hostfile for task distribution | -hostfile file_name |  | -m, --distribution=arbitrary + SLURM_HOSTFILE env variable | 
request exclusive node allocation | -x |  | --exclusive | 
Time Specs |  |  |  | 
wall-clock time limit | -W [hh:]mm | -l walltime=[[DD:]hh:]mm[:ss] | -t, --time=[[DD-]hh:]mm[:ss] / --time-min=[[DD-]hh:]mm[:ss] | -t, --time=minutesm
run no sooner than specified time | -b [[[YY:]MM:]DD:]hh:mm | -a [[[YY]MM]DD]hhmm | --begin=[YYYY-MM-DDT]hh:mm | 
Associated Fields |  |  |  | 
bank account | -G user_group | -A account | -A, --account=account | 
user specified name for their job | -J job_name | -N job_name | -J, --job-name=job_name | 
user specified project name / comment field | -Jd project_name (i.e., job description) | -l var:Project=project_name | --comment=string | --setattr=user.comment=string
workload characterization key | -P wckey | -l wckey=wckey | --wckey=wckey | 
Quality of Service |  |  |  | 
exempt qos | -q exempt | -l qos=exempt | --qos=exempt | 
expedite qos | -q expedite | -l qos=expedite | --qos=expedite | 
standby qos | -q standby | -l qos=standby | --qos=standby | 
give the user the power to lower their job's priority |  | -p | --nice[=value] | --urgency=number
I/O |  |  |  | 
input file | -i file_name, -is file_name |  | -i, --input=file_name | --input=file_name
output file | -o file_name, -oo file_name | -o file_name | -o, --output=file_name | --output=template
error output file | -e file_name, -eo file_name | -e file_name | -e, --error=file_name | --error=template
merge error output into output file | (default) | -j oe | (default) | (default with --output=)
append or overwrite error/output files | default append; -oo/-eo overwrite |  | --open-mode=append|truncate | 
copy files from submission to execution hosts | -f local_file remote_file |  | sbcast command | 
label output with task rank |  |  | -l, --label | -l, --label-io
Mail |  |  |  | 
send mail at job start | -B | -m b | --mail-type=BEGIN | 
send mail at job completion | -N | -m e | --mail-type=END | 
specify user who receives mail | -u user_name |  | --mail-user=user_name | 
suppress mail when default is to send |  | -m n |  | 
Submit Behavior |  |  |  | 
submit an interactive job | -Is [bash|csh] |  | salloc [slurm arguments] | flux mini alloc [flux mini arguments]
submit job in held state | -H | -H | -H, --hold | --urgency=0
submit job and wait for completion | -K |  | salloc command | --wait, --watch
submit a job array | -J job_name[index_list] |  | -a, --array=indexes | flux mini bulksubmit
invoke "command" instead of submitting batch script | bsub command | echo "command"|msub | --wrap=command | --wrap
dependent job | -w "ended(jobid)" | -l depend=jobid or -l depend=afterok:jobid | -d, --dependency=jobid | 
Runtime Behavior |  |  |  | 
keep job running if a node fails | (default) | -l resfailpolicy=ignore | -k, --no-kill | 
do not re-queue job after node failure | -rn (default) | -l resfailpolicy=cancel | --no-requeue | 
re-queue job after node failure | -r | -l resfailpolicy=requeue | --requeue | 
specify the working directory | -cwd directory |  | -D, --workdir=directory | --setattr=system.cwd=path
export env variables to execution environment | -env "none"|"all, [var_name[, var_name] ...]" | -V | --export=environment_variables|ALL|NONE | --env=rule, --env-remove=pattern, --env-file=file
propagate limits to execution environment | -ul | (default) | --propagate[=rlimits] | 
signal at remaining time | -wa signal -wt rem_time | -l signal=signal[@rem_time] | --signal=signal[@rem_time] | 
Extra Info |  |  |  | 
help | -h | --help | -h, --help / --usage | -h, --help
enable verbose output |  |  | -v, --verbose | -v, --verbose
display batch system version | -V |  | scontrol show version | flux -V

* To expedite the transition to Slurm, use the moab2slurm utility to convert Moab msub job scripts to their Slurm sbatch equivalents. See the moab2slurm man page on any TOSS 3 machine for details.
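
As a minimal sketch of how these options map in practice, the two script headers below request the same two-node, 30-minute job under LSF and Slurm. The pdebug queue, myBank account, my_app executable, and task count are placeholder assumptions, not site defaults:

    #!/bin/bash
    ### LSF version -- submit with: bsub < job.lsf
    #BSUB -nnodes 2          # node count
    #BSUB -W 30              # wall-clock limit, [hh:]mm
    #BSUB -q pdebug          # queue (placeholder)
    #BSUB -G myBank          # bank account (placeholder)
    #BSUB -o myjob.%J.out    # output file (%J expands to the job ID)
    jsrun -n 8 ./my_app      # launch parallel tasks

    #!/bin/bash
    ### Slurm version -- submit with: sbatch job.slurm
    #SBATCH -N 2             # node count
    #SBATCH -t 30            # wall-clock limit, minutes
    #SBATCH -p pdebug        # partition/queue (placeholder)
    #SBATCH -A myBank        # bank account (placeholder)
    #SBATCH -o myjob.%j.out  # output file (%j expands to the job ID)
    srun -n 8 ./my_app       # launch parallel tasks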

LSF Only

Description | bsub Option
----------- | -----------
invoke application-specific file | -a esub|epsub
invoke application profile | -app profile
specify data requirements | -data reqs
specify user group for data access | -datagrp user_group_name
specify a per-process (soft) core file size limit | -C core_limit
limit the total CPU time the job can use | -c [hour:]minute[/host_name]
specify a per-process (soft) data segment size limit | -D data_limit
specify job pre-execution command | -E command
specify job post-execution command | -Ep command
specify a per-process (soft) file size limit | -F file_limit
submit to job group | -g job_group
impose cgroups memory and swap containment | -hl
specify a JSDL file | -jsdl or -jsdl_strict file_name
specify a login shell | -L shell
create job output directory | -outdir directory_name
specify a limit to the number of processes | -p process_limit
submit a job pack | -pack job_submission_file
specify automatic job requeue exit values | -Q exit_code(s)
specify a per-process (soft) stack segment size limit | -S stack_limit
specify a signal when a queue-level run window closes | -s signal
specify a service class (not quite the same as QoS) | -sla class_name
specify a thread limit | -T thread_limit
specify a termination deadline | -t time_limit
enable orphan job termination | -ti
enable output/error messages for interactive jobs | -tty
provide a runtime estimate to the scheduler | -We
submit using SSH X11 forwarding | -XF
use spooled file as the command file for the job | -Zs
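
Several of these options can be combined on one submission line; in this hypothetical example, pre_check.sh, cleanup.sh, and the /myproject job group are placeholders:

    # run a pre-execution check and a post-execution cleanup around the job,
    # place it in job group /myproject, and impose cgroup memory containment
    bsub -E ./pre_check.sh -Ep ./cleanup.sh -g /myproject -hl < job.lsf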

Environment Variables

Description | LSF | Slurm
----------- | --- | -----
Input Variables |  | 
default project name | LSB_DEFAULTPROJECT | 
default queue | LSB_DEFAULTQUEUE | SBATCH_PARTITION
default user group (charge account) | LSB_DEFAULT_USERGROUP | 
custom fields for job display command | LSB_BJOBS_FORMAT | SQUEUE_FORMAT
reference link to more info | bsub man page | sbatch, salloc, and srun man pages
Output Variables |  | 
job ID | LSB_JOBID | SLURM_JOB_ID
job name | LSB_JOBNAME | SLURM_JOB_NAME
job array index | LSB_JOBINDEX | SLURM_ARRAY_TASK_ID
list of hosts allocated to the job | LSB_HOSTS | SLURM_JOB_NODELIST
directory from which job was submitted | LS_SUBCWD | SLURM_SUBMIT_DIR
host from which job was submitted | LSB_SUB_HOST | SLURM_SUBMIT_HOST
reference link to more info | bsub man page | sbatch, salloc, and srun man pages
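
The output variables are typically read inside a running batch script, for example to label per-job artifacts. A minimal Slurm sketch follows; the LSF equivalents (LSB_JOBID, LSB_JOBNAME, LS_SUBCWD) drop in the same way:

    #!/bin/bash
    #SBATCH -J demo
    # record the job's identity, then create a per-job work directory
    echo "job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) submitted from ${SLURM_SUBMIT_DIR}"
    mkdir -p "${SLURM_SUBMIT_DIR}/run.${SLURM_JOB_ID}"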