How to interact with a container¶
1. How to execute a container¶
Executing a container with singularity exec
allows us to run commands inside the container with the syntax
singularity exec $(container_name) $(commands_we_want_to_run)
For example,
singularity exec my_julia.img julia
runs the julia binary inside the container created from the my_julia.img
image and brings up the Julia interpreter. Similarly,
singularity exec my_julia.img echo "hello"
instantiates a container from my_julia.img
and runs echo "hello" inside that container, printing “hello” to stdout.
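Anything after the image name is handed to the container verbatim, so command-line flags and arguments pass through as well. For instance (a hypothetical one-liner; `julia -e` evaluates an expression and exits):

```
singularity exec my_julia.img julia -e 'println("hello from julia")'
```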
2. How to shell into a container¶
We can also run commands inside the container by first opening a shell in the container:
singularity shell my_julia.img
In response, the shell prompt Singularity>
appears, at which point you can run, for example, echo "hello"
or julia.
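An interactive session might look like the following (the output shown is illustrative):

```
$ singularity shell my_julia.img
Singularity> echo "hello"
hello
Singularity> exit
$
```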
3. How to run a container¶
Singularity containers contain a “runscript” at the path /singularity
within the container. Running the container means executing this runscript. We can run the container created from my_julia.img
via either
./my_julia.img
or
singularity run my_julia.img
Either of these commands therefore yields the same result as
singularity exec my_julia.img /singularity
The runscript we see via singularity exec my_julia.img cat /singularity
is
#!/bin/sh
OCI_ENTRYPOINT=''
OCI_CMD='"julia"'
CMDLINE_ARGS=""
# prepare command line arguments for evaluation
for arg in "$@"; do
    CMDLINE_ARGS="${CMDLINE_ARGS} \"$arg\""
done
# ENTRYPOINT only - run entrypoint plus args
if [ -z "$OCI_CMD" ] && [ -n "$OCI_ENTRYPOINT" ]; then
    if [ $# -gt 0 ]; then
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${CMDLINE_ARGS}"
    else
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT}"
    fi
fi
# CMD only - run CMD or override with args
if [ -n "$OCI_CMD" ] && [ -z "$OCI_ENTRYPOINT" ]; then
    if [ $# -gt 0 ]; then
        SINGULARITY_OCI_RUN="${CMDLINE_ARGS}"
    else
        SINGULARITY_OCI_RUN="${OCI_CMD}"
    fi
fi
# ENTRYPOINT and CMD - run ENTRYPOINT with CMD as default args
# override with user provided args
if [ -n "$OCI_CMD" ] && [ -n "$OCI_ENTRYPOINT" ]; then
    if [ $# -gt 0 ]; then
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${CMDLINE_ARGS}"
    else
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${OCI_CMD}"
    fi
fi
# Evaluate shell expressions first and set arguments accordingly,
# then execute final command as first container process
eval "set ${SINGULARITY_OCI_RUN}"
exec "$@"
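The key trick in this runscript is the eval "set ..." line: because OCI_CMD stores its words with embedded quotes, eval re-splits them into the shell's positional parameters, which exec then launches as the container's first process. A minimal stand-alone sketch of that idiom (echo stands in for julia, and exec is omitted so the demonstrating shell survives):

```shell
#!/bin/sh
# Stand-alone sketch of the runscript's quoting idiom.
# OCI_CMD holds its words with embedded double quotes, exactly as in
# the generated runscript above (echo stands in for julia here).
OCI_CMD='"echo" "hello from CMD"'

# eval re-splits the quoted words into the positional parameters,
# so $1 becomes "echo" and $2 becomes "hello from CMD".
eval "set ${OCI_CMD}"

# "$@" now expands to those words; the real runscript calls exec "$@"
# instead, replacing the shell with this command.
"$@"   # prints: hello from CMD
```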
When we execute singularity run my_julia.img
, the Julia interpreter starts up in our terminal.
4. How to run a container via the batch scheduler¶
Running the container itself¶
Running a container via the queue/batch system in an HPC environment is as simple as passing your preferred run syntax — singularity run $(image_name)
or ./$(image_name)
— to the scheduler. For example, using Slurm, a call to srun
requesting 16 threads within your batch script might look like
srun -n16 ./my_updated_julia.img
or
srun -n16 singularity run my_updated_julia.img
Though you may need to specify a bank with #SBATCH -A
, an example submission script submit.slurm
that you could run on one of LC’s CTS systems (like Quartz) might contain the following:
#!/bin/bash
#SBATCH -J test_container
#SBATCH -p pdebug
#SBATCH -t 00:01:00
#SBATCH -N 1
srun -n16 ./my_updated_julia.img
Running sbatch submit.slurm
at the command line on Quartz (in the same directory where the container image lives) submits the job to the queue. Once this job has run, a slurm-*.out
file is written that will contain 16 different approximations to pi.
Running a binary that lives inside the container¶
You can also call software that lives inside the container from within a batch script, rather than relying on the container’s runscript.
For example, perhaps I just want access to the julia binary so that I can use it to run various scripts that live in my home directory. Let’s say that I have a file in my home directory called hello.jl
that simply contains the line println("hello world!")
. I can run this via the batch system using my container if I update submit.slurm
to read
#!/bin/bash
#SBATCH -J test_container
#SBATCH -p pdebug
#SBATCH -t 00:01:00
#SBATCH -N 1
srun -n16 singularity run my_updated_julia.img hello.jl
# Alternatively, the following line produces the same result:
# srun -n16 singularity exec my_updated_julia.img julia hello.jl
The output file created by the job run via sbatch submit.slurm
will contain “hello world!” 16 times.
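To spot-check the result, you can count the matching lines in the job's output file (slurm-1234.out below is a stand-in for whatever output file name Slurm actually writes for your job):

```shell
# Hypothetical check: count the "hello world!" lines in the job's
# output file; for the 16-task job above this should print 16.
grep -c 'hello world!' slurm-1234.out
```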