On this page: Commands cross-reference | Submission cross-reference | LSF only | Environment Variables
Batch System Commands Cross-Reference
Command | LSF | Moab | Slurm | Flux* |
---|---|---|---|---|
submit a job | bsub | msub | sbatch | flux batch |
submit an interactive job | bsub -Is [bash|csh] | | salloc | flux alloc |
submit an xterm job | bsub -XF xterm | mxterm | sxterm | flux submit xterm |
launch parallel tasks | mpirun/jsrun/lrun | | srun | flux run / flux submit |
modify a pending job | bmod jobid | mjobctl -m jobid | scontrol update job jobid | |
hold a pending job | bstop jobid | mjobctl -h jobid | scontrol hold jobid | flux job urgency jobid hold |
release a held job | bresume jobid | mjobctl -r jobid | scontrol release jobid | flux job urgency jobid default |
cancel a job | bkill jobid | canceljob jobid | scancel jobid | flux job cancel jobid |
signal a job | bkill -s signal jobid | mjobctl -N signal=signal jobid | scancel -s signal jobid | flux job kill -s signal jobid |
show detailed job information | bjobs -l jobid | checkjob jobid | scontrol show job jobid | flux job info jobid |
show job queue | bjobs -u all | showq | squeue | flux jobs |
show historical jobs | bhist | | sacct | flux jobs -a |
show detailed historical job info | bhist -l jobid | | sacct -l -j jobid | flux job info jobid |
show job priorities | bjobs -aps | mdiag -p | sprio | flux jobs -ao '{id} {priority}' |
show node resources | bhosts | mdiag -n | scontrol show node | flux resource list |
show available queues | bqueues | mdiag -c | sinfo | flux resource list |
show queue details | bqueues -l | mdiag -c -v | scontrol show partition | flux queue list |
show charge accounts | bugroup | mshare | sshare | flux account view-user |
show configuration settings | bparams -a | mschedctl -l | scontrol show conf | flux config get |
* For more on running Flux, see the new tutorial.
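As a quick illustration of the command mapping above, a typical submit / monitor / cancel cycle might look like the following under Slurm and under Flux. The script name and job IDs here are hypothetical:

```bash
# Slurm
sbatch myjob.sh            # submit a batch script
squeue                     # show the job queue
scontrol show job 12345    # detailed info for job 12345
scancel 12345              # cancel job 12345

# Flux
flux batch myjob.sh        # submit a batch script
flux jobs                  # show my jobs
flux job info f3Rb2XQq     # detailed info (Flux job IDs are short alphanumeric strings)
flux job cancel f3Rb2XQq   # cancel the job
```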
Batch System Submission Cross-Reference
Description | LSF bsub Option | Moab msub Option* | Slurm sbatch Option** | Flux batch Option |
---|---|---|---|---|
Resource Specs | | | | |
node count (range) | -nnodes number | -l nodes=number | -N, --nodes=number[-number] | -N, --nodes=number |
task count (processors on serial clusters) | -n number | -l ttc=number | -n, --ntasks=number | -n, --ntasks=number |
cores per task | specify in lrun / jsrun | | -c, --cpus-per-task=number | -c, --cores-per-task=number |
GPUs per task | specify in lrun / jsrun | | -g, --gpus-per-task=number | -g, --gpus-per-task=number |
queue | -q queue | -q queue | -p, --partition=queue | -q, --queue queue |
specific feature(s) | -R res_req | -l feature=val | -C, --constraint=list | --requires=host:properties |
memory required per node | | -l mem=val | --mem=MB | |
memory required per CPU | | -l dmem=val | --mem-per-cpu=MB | |
max memory per process | -M mem_limit | | shell limits automatically propagated | |
max virtual memory per process | -v swap_limit | | shell limits automatically propagated | |
generic resource(s) | | -l gres=list | --gres=list | |
license(s) | -Lp ls_project_name | | -L, --licenses=license | |
utilize reservation | -U res_name | -l advres=res_name | --reservation=res_name | |
request specific compute hosts | -m host_list | -l hostlist=host_list | -w, --nodelist=host_list | --requires=host:host_list |
exclude specific compute hosts | -R 'select[hname!=host_name]' | | -x, --exclude=host_list | --requires=-host:host_list |
utilize hostfile for task distribution | -hostfile file_name | | -m, --distribution=arbitrary + SLURM_HOSTFILE env variable | --taskmap=hostfile:file_name |
request exclusive node allocation | -x | | --exclusive | --exclusive |
Time Specs | | | | |
wall-clock time limit | -W [hh:]mm | -l walltime=[[DD:]hh:]mm[:ss] | -t, --time=[[DD-]hh:]mm[:ss] / --time-min=[[DD-]hh:]mm[:ss] | -t, --time-limit=duration (e.g., 30m) |
run no sooner than specified time | -b [[[YY:]MM:]DD:]hh:mm | -a [[[YY]MM]DD]hhmm | --begin=[YYYY-MM-DDT]hh:mm | --begin-time=DATETIME |
Associated Fields | | | | |
bank account | -G user_group | -A account | -A, --account=account | -B, --bank=bank |
user specified name for their job | -J job_name | -N job_name | -J, --job-name=job_name | --job-name=job_name |
user specified project name / comment field | -Jd project_name (i.e., job description) | -l var:Project=project_name | --comment=string | --setattr=user.comment=string |
workload characterization key | -P wckey | -l wckey=wckey | --wckey=wckey | |
Quality of Service | | | | |
exempt qos | -q exempt | -l qos=exempt | --qos=exempt | |
expedite qos | -q expedite | -l qos=expedite | --qos=expedite | |
standby qos | -q standby | -l qos=standby | --qos=standby | |
gives user the power to lower their job’s priority | | -p | --nice[=value] | --urgency=number |
I/O | | | | |
input file | -i file_name, -is file_name | | -i, --input=file_name | --input=file_name |
output file | -o file_name, -oo file_name | -o file_name | -o, --output=file_name | --output=template |
error output file | -e file_name, -eo file_name | -e file_name | -e, --error=file_name | --error=template |
merge error output into file output | (default) | -j oe | (default) | (default with --output=) |
append or overwrite error/output files | default append, -oo/-eo overwrites | | --open-mode=append|truncate | |
copy files from submission to execution hosts | -f local_file remote_file | | sbcast command | |
label output with task rank | | | -l, --label | -l, --label-io |
send mail at job start | -B | -m b | --mail-type=BEGIN | |
send mail at job completion | -N | -m e | --mail-type=END | |
specify user who receives mail | -u user_name | | --mail-user=user_name | |
suppress mail when default is to send | | -m n | | |
Submit Behavior | | | | |
submit an interactive job | -Is [bash|csh] | | salloc [slurm arguments] | flux alloc [flux arguments] |
submit job in held state | -H | -H | -H, --hold | --urgency=0 |
submit job and wait for completion | -K | | salloc command | --wait, --watch |
submit a job array | -J job_name[index_list] | | -a, --array=indexes | flux bulksubmit |
invoke "command" instead of submitting batch script | bsub command | echo "command" | msub | --wrap=command | --wrap |
dependent job | -w "ended(jobid)" | -l depend=jobid or -l depend=afterok:jobid | -d, --dependency=jobid | --dependency=afterany:jobid |
submit job to existing allocation / instance | jsrun -J jobid ... | | srun --jobid=jobid ... | flux proxy jobid flux run ... |
Runtime Behavior | | | | |
keep job running if a node fails | (default) | -l resfailpolicy=ignore | -k, --no-kill | |
do not re-queue job after node failure | -rn (default) | -l resfailpolicy=cancel | --no-requeue | |
re-queue job after node failure | -r | -l resfailpolicy=requeue | --requeue | |
specify the working directory | -cwd directory | | -D, --workdir=directory | --cwd=path |
export env variables to execution environment | -env "none" | "all[, var_name, ...]" | -V | --export={ALL|NONE|variables} | --env=rule, --env-remove=pattern, --env-file=file |
propagate limits to execution environment | -ul (default) | | --propagate[=rlimits] | |
signal at remaining time | -wa signal -wt rem_time | -l signal=signal[@rem_time] | --signal=signal[@rem_time] | --sig=SIG@TIME |
Extra Info | | | | |
help | -h | --help | -h, --help / --usage | -h, --help |
enable verbose output | | | -v, --verbose | -v, --verbose |
display batch system version | -V | | scontrol show version | flux -V |
* To expedite the transition to Slurm, use the moab2slurm utility to convert Moab msub job scripts to the Slurm sbatch equivalents. See the moab2slurm man page on any TOSS 3 machine for details.
** On Flux-only systems, the slurm2flux utility converts Slurm sbatch job scripts to their flux batch equivalents.
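To make the option mapping concrete, here is the same small job written first as a Slurm batch script and then for flux batch. The queue, bank, node/task counts, and application name below are placeholders, not site defaults:

```bash
#!/bin/bash
# Slurm version (submit with: sbatch myjob.sh)
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --partition=pbatch
#SBATCH --account=myaccount
#SBATCH --time=30
#SBATCH --job-name=mytest
#SBATCH --output=mytest-%j.out
srun -n 8 ./myapp
```

And its flux batch equivalent, using `#flux:` directives in place of `#SBATCH`:

```bash
#!/bin/bash
# Flux version (submit with: flux batch myjob.sh)
#flux: -N 2
#flux: -n 8
#flux: -q pbatch
#flux: --bank=myaccount
#flux: -t 30
#flux: --job-name=mytest
#flux: --output=mytest-{{id}}.out
flux run -n 8 ./myapp
```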
LSF Only
Description | Option |
---|---|
invoke application-specific file | -a esub|epsub |
invoke application profile | -app profile |
specify data requirements | -data reqs |
specify user group for data access | -datagrp user_group_name |
per-process (soft) core file size limit | -C core_limit |
limit the total CPU time the job can use | -c [hour:]minute[/host_name] |
specify a per-process (soft) data segment size limit | -D data_limit |
specify job pre-execution command | -E command |
specify job post-execution command | -Ep command |
specify a per-process (soft) file size limit | -F file_limit |
submit to job group | -g job_group |
impose cgroups memory and swap containment | -hl |
specify a JSDL file | -jsdl or -jsdl_strict file_name |
specify a login shell | -L shell |
create job output directory | -outdir directory_name |
specify a limit to the number of processes | -p process_limit |
submit a job pack | -pack job_submission_file |
specify automatic job requeue exit values | -Q exit_code(s) |
specify a per-process (soft) stack segment size limit | -S stack_limit |
specify a signal when a queue-level run window closes | -s signal |
specify a service class (not quite the same as QoS) | -sla class_name |
specify a thread limit | -T thread_limit |
specify a termination deadline | -t time_limit |
enable orphan job termination | -ti |
enable output/error messages for interactive jobs | -tty |
provide a runtime estimate to the scheduler | -We |
submit using SSH X11 forwarding | -XF |
use spooled file as the command file for the job | -Zs |
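As an example of combining several of these options, the following hypothetical submission runs pre- and post-execution commands, caps the job's total CPU time, writes output to a dedicated directory, and enables X11 forwarding. The paths, scripts, and limit values are made up for illustration:

```bash
# Hypothetical example combining LSF-only options from the table above.
bsub -E ./pre_check.sh -Ep ./cleanup.sh \
     -c 120 \
     -outdir /p/lustre1/$USER/run42 \
     -XF ./myapp
```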
Environment Variables
Description | LSF | Slurm |
---|---|---|
Input Variables | | |
default project name | LSB_DEFAULTPROJECT | |
default queue | LSB_DEFAULTQUEUE | SBATCH_PARTITION |
default user group (charge account) | LSB_DEFAULT_USERGROUP | |
custom fields for job display command | LSB_BQUERY_FORMAT | SQUEUE_FORMAT |
reference link to more info | bsub | sbatch salloc srun |
Output Variables | | |
job ID | LSB_JOBID | SLURM_JOB_ID |
job name | LSB_JOBNAME | SLURM_JOB_NAME |
job array index | LSB_JOBINDEX | SLURM_ARRAY_TASK_ID |
list of hosts allocated to the job | LSB_HOSTS | SLURM_JOB_NODELIST |
directory from which job was submitted | LS_SUBCWD | SLURM_SUBMIT_DIR |
host from which job was submitted | LSB_SUB_HOST | SLURM_SUBMIT_HOST |
reference link to more info | bsub | sbatch salloc srun |
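Inside a batch script, the output variables above let a job label its own artifacts. Here is a minimal Slurm sketch; the results directory layout is made up for illustration:

```bash
#!/bin/bash
#SBATCH -N 1
# Input variables set defaults in the submitting shell (not here), e.g.:
#   export SBATCH_PARTITION=pdebug
# Output variables are set by Slurm inside the job:
echo "Job ${SLURM_JOB_ID} (${SLURM_JOB_NAME}) submitted from ${SLURM_SUBMIT_HOST}:${SLURM_SUBMIT_DIR}"
OUTDIR="${SLURM_SUBMIT_DIR}/results/${SLURM_JOB_ID}"
mkdir -p "${OUTDIR}"
srun ./myapp > "${OUTDIR}/myapp.log"
```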