While we encourage using Podman as the primary containerization tool, Singularity is still supported and may be used as a backup option when necessary.

1. Pulling from a registry with Singularity

Often we are interested in containers that live in a registry somewhere, such as Docker Hub or the Singularity Library. In “pulling” a container, we download a pre-built image from a container registry.

Let us say we are interested in a Docker container like the one on Docker Hub at https://hub.docker.com/r/bitnami/python. We can now use the singularity pull command (more detail below) to grab this container with syntax that specifies both the user offering the container (here, bitnami) and the name of the container (here, python):

singularity pull docker://bitnami/python

By default, this creates a file with a .sif extension called python_latest.sif.

Similarly, we can pull the official image for the Julia programming language with singularity pull via

singularity pull docker://julia

and this generates a file called julia_latest.sif. The .sif extension indicates a Singularity Image Format file, a Singularity-specific wrapper around a SquashFS image (a compressed, read-only filesystem). Alternatively, we could rename the output file or create a different file type by adding an input argument to our call to singularity pull:

singularity pull my_julia.sif docker://julia 

This generates the container image file my_julia.sif.

We could instead create an .img file via singularity pull my_julia.img docker://julia, though the .sif extension is unique to Singularity. Whether you create a .sif or .img file, you can work with your output file using Singularity commands like singularity exec. (See later sections.) It no longer matters that this container originally came from Docker Hub.
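
For example, assuming the my_julia.sif file pulled above, we might confirm that the containerized Julia runs with

singularity exec my_julia.sif julia --version

which prints the version of the julia binary baked into the image.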

2. Building from a registry with Singularity

The command

singularity pull docker://julia

seen in the last section is in fact just an alias for

singularity build julia_latest.sif docker://julia

Accordingly, the command singularity build can be used instead of singularity pull to interact with a registry. Unlike singularity pull, however, singularity build can be used for a variety of tasks. In addition to grabbing a container image file from a container registry, you can use singularity build to convert between formats of a container image, as we'll see in the next section on building a container from a Dockerfile. With singularity build, you can also make a container writable or create a sandbox.

singularity build requires that you specify the name of the container image file to be created, and so it always takes at least two input arguments.

For example, we could use singularity build as follows:

singularity build my_julia.sif docker://julia
singularity build python_latest.sif docker://bitnami/python

to generate the output files my_julia.sif and python_latest.sif.
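
With the --sandbox flag, singularity build can also create a sandbox (an editable directory tree; see “How to change a container's runscript” below) directly from a registry. For example, something like

singularity build --sandbox julia_box/ docker://julia

creates an editable directory julia_box/ rather than a read-only .sif file. (The name julia_box/ here is just an illustrative choice.)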

3. Re-building with singularity build

Using the .tar created in the last step, you can build a Singularity-compatible .sif file with singularity build using the syntax

singularity build OUTPUT_SIF_FILENAME docker-archive:DOCKER_TAR_FILENAME

For example, singularity build ubuntu.sif docker-archive:ubuntuimage.tar produces ubuntu.sif from ubuntuimage.tar:


janeh@pascal7:~$ singularity build ubuntu.sif docker-archive:ubuntuimage.tar
INFO:    Starting build...
Getting image source signatures
Copying blob 40a154bd3352 done
Copying blob 9b11b519d681 done
Copying config 0ef15bc4f1 done
Writing manifest to image destination
Storing signatures
2022/01/10 14:56:00  info unpack layer: sha256:704fb613779082c6361b77dc933a7423f7b4fb6c5c4e2992397d4c1b456afbd8
2022/01/10 14:56:00  warn xattr{etc/gshadow} ignoring ENOTSUP on setxattr "user.rootlesscontainers"
2022/01/10 14:56:00  warn xattr{/var/tmp/janeh/build-temp-768716639/rootfs/etc/gshadow} destination filesystem does not support xattrs, further warnings will be suppressed
2022/01/10 14:56:01  info unpack layer: sha256:82711e644b0c78c67fcc42949cb625e71d0bed1e6db18e63e870f44dce60e583
INFO:    Creating SIF file...
INFO:    Build complete: ubuntu.sif

We can now use singularity shell to test that we can run the container image ubuntu.sif in Singularity and that it has the expected operating system:


janeh@pascal7:~$ singularity shell ubuntu.sif
Singularity> cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
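
In case you do not already have such a Docker archive on hand, one way to produce one (assuming Podman, as recommended above, and a locally available ubuntu image) is

podman save --format docker-archive -o ubuntuimage.tar ubuntu:latest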

4. Limitations in working with Dockerfiles in Singularity

Note that Dockerfiles can contain the USER instruction, which defines the user running the container. Singularity, on the other hand, has no concept of USER and provides no mechanism to specify or change the user running the container; a Singularity container always runs as the user who invoked it.
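
You can see this for yourself by comparing your identity on the host and inside a container; in an illustrative session (your username and image will differ):

whoami                                 # on the host: your username, e.g., janeh
singularity exec ubuntu.sif whoami     # inside the container: the same username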

5. How to execute a container

Executing a container with singularity exec allows us to run commands inside the container with the syntax

singularity exec $(container_name) $(commands_we_want_to_run)

For example,

singularity exec my_julia.sif julia

runs the julia binary inside the container created from the my_julia.sif image and brings up the Julia interpreter. Similarly,

singularity exec my_julia.sif echo "hello"

instantiates a container from my_julia.sif and runs echo "hello" inside that container, printing “hello” to stdout.

6. How to shell into a container

We can also run commands inside the container by first opening a shell in the container:

singularity shell my_julia.sif

In response, the shell prompt Singularity> pops up, at which you could run, for example, echo "hello" or julia.
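
An illustrative session (your prompt and output will differ) might look like

janeh@oslic9:~$ singularity shell my_julia.sif
Singularity> julia -e 'println("hello")'
hello
Singularity> exit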

7. How to run a container

Singularity containers contain a “runscript” at the path /singularity within the container. Running the container means calling or executing this runscript. We can run the container created from my_julia.sif via either

./my_julia.sif

or

singularity run my_julia.sif

Either of these commands therefore yields the same result as

singularity exec my_julia.sif /singularity

The runscript we see via singularity exec my_julia.sif cat /singularity is

#!/bin/sh
OCI_ENTRYPOINT=''
OCI_CMD='"julia"'
CMDLINE_ARGS=""
# prepare command line arguments for evaluation 
for arg in "$@"; do
    CMDLINE_ARGS="${CMDLINE_ARGS} \"$arg\""
done
# ENTRYPOINT only - run entrypoint plus args
if [ -z "$OCI_CMD" ] && [ -n "$OCI_ENTRYPOINT" ]; then
    if [ $# -gt 0 ]; then 
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${CMDLINE_ARGS}"
    else 
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT}"
    fi 
fi
# CMD only - run CMD or override with args
if [ -n "$OCI_CMD" ] && [ -z "$OCI_ENTRYPOINT" ]; then
    if [ $# -gt 0 ]; then 
        SINGULARITY_OCI_RUN="${CMDLINE_ARGS}"
    else 
        SINGULARITY_OCI_RUN="${OCI_CMD}"
    fi 
fi
# ENTRYPOINT and CMD - run ENTRYPOINT with CMD as default args
# override with user provided args
if [ -n "$OCI_ENTRYPOINT" ] && [ -n "$OCI_CMD" ]; then
    if [ $# -gt 0 ]; then
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${CMDLINE_ARGS}"
    else
        SINGULARITY_OCI_RUN="${OCI_ENTRYPOINT} ${OCI_CMD}"
    fi
fi
# Evaluate shell expressions first and set arguments accordingly, 
# then execute final command as first container process
eval "set ${SINGULARITY_OCI_RUN}"
exec "$@"

When we execute singularity run my_julia.sif, the Julia interpreter starts up in our terminal.
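
Note that because this image defines a CMD but no ENTRYPOINT, the runscript logic above means any arguments passed to singularity run replace the default command entirely. For example,

singularity run my_julia.sif julia --version

runs julia --version inside the container instead of starting the interpreter.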

8. Running & executing containers via the batch scheduler

A. Running the container itself

Running a container via the queue/batch system in an HPC environment is as simple as passing your preferred run syntax, singularity run $(image_name) or ./$(image_name), to the scheduler. For example, using Slurm, a call to srun requesting 16 tasks within your batch script might look like

srun -n16 ./my_updated_julia.sif

or

srun -n16 singularity run my_updated_julia.sif

Though you may need to specify a bank with #SBATCH -A, an example submission script submit.slurm that you could run on one of LC’s CTS systems (like Quartz) might contain the following

#!/bin/bash
#SBATCH -J test_container 
#SBATCH -p pdebug 
#SBATCH -t 00:01:00 
#SBATCH -N 1

srun -n16 ./my_updated_julia.sif

Running sbatch submit.slurm at the command line on Quartz (in the same directory where the container image lives) submits the job to the queue. Once this job has run, a slurm-*.out file is written that will contain 16 different approximations to pi. (The container my_updated_julia.sif, which prints an approximation of pi each time it runs, is built in “How to change a container's runscript” below.)

B. Running a binary that lives inside the container

You can also call software that lives inside the container from within a batch script, rather than relying on the container’s runscript.

For example, perhaps I just want access to the julia binary so that I can use it to run various scripts that live in my home directory. Let’s say that I have a file in my home directory called hello.jl that simply contains the line println("hello world!"). I can run this via the batch system using my container if I update submit.slurm to read

#!/bin/bash
#SBATCH -J test_container 
#SBATCH -p pdebug 
#SBATCH -t 00:01:00 
#SBATCH -N 1

srun -n16 singularity exec my_updated_julia.sif julia hello.jl

The output file created by the job run via sbatch submit.slurm will contain “hello world!” 16 times.
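
For reference, a suitable hello.jl can be created from the command line via

echo 'println("hello world!")' > hello.jl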

9. How to change the filesystems a container sees

By default, a Singularity container running on LC will see your home directory and its contents, but not other filesystems, such as our /p/lustre# filesystems and our /usr/workspace filesystems. For example,

janeh@oslic9:~/Singularity$ singularity shell my_julia.sif
Singularity> pwd
/g/g0/janeh/Singularity
Singularity> ls /p/lustre1/janeh
ls: cannot access '/p/lustre1/janeh': No such file or directory

You can change this by binding or mounting a particular directory path in your container via the --bind or -B flag with the syntax singularity shell -B <directory to mount> <container image>:

janeh@oslic9:~/Singularity$ singularity shell -B /p/lustre1/janeh my_julia.sif
Singularity> ls /p/lustre1/janeh
0_LC_AutoDelete  GaAs

You can also bind a filesystem or directory to a different location within the container with the syntax singularity shell -B <directory to mount>:<new location> <container image>. For example, we might do

janeh@oslic9:~/Singularity$ singularity shell -B /p/lustre1/janeh:/lustre1 my_julia.sif
Singularity> ls /lustre1
0_LC_AutoDelete  GaAs

Now the directory 0_LC_AutoDelete is at /lustre1/0_LC_AutoDelete within the container instead of /p/lustre1/janeh/0_LC_AutoDelete.

Note that to bind multiple directory paths in your container, you can repeat the -B flag, for example,

singularity shell -B /p/lustre1/janeh -B /usr/workspace/janeh my_julia.sif
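
or, equivalently, pass a single -B flag a comma-separated list of paths:

singularity shell -B /p/lustre1/janeh,/usr/workspace/janeh my_julia.sif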

The -B or --bind flag can follow any of the singularity commands mentioned above, and should precede the name of the container with which you are working. For example, you might use -B with singularity exec or singularity run as in

singularity exec --bind /p/lustre1/janeh my_julia.sif ls /p/lustre1/janeh

or

singularity run -B /p/lustre1/janeh my_julia.sif

Note: Symlinks can create ambiguity as to where to find a directory you might like to mount. For example, /usr/workspace/<username> on LC systems links either to /usr/WS1/<username> or /usr/WS2/<username>, depending on the username. Binding /usr/workspace/<your-username> to your container will work, but if you simply try to bind /usr/workspace, you may not be able to see your workspace directory. (Imagine your workspace lives in /usr/WS1 and binding /usr/workspace mounts /usr/WS2.)
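
One way to avoid this ambiguity is to check where the symlink resolves and then bind your per-user path explicitly; a sketch, assuming the realpath utility is available:

realpath /usr/workspace/$USER    # shows, e.g., /usr/WS1/<username> or /usr/WS2/<username>
singularity shell -B /usr/workspace/$USER my_julia.sif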

10. How to change a container's runscript

Sometimes we’ll find ourselves wanting to change the contents of a container or the behavior exhibited by our container when it is run. For example, maybe I want a container that uses Julia to remind me of the value of pi at runtime, rather than to simply start the interpreter.

Let’s create a file in our working directory called calc_pi.jl which contains

"""
     function calc_pi(N)

This function calculates pi with a Monte Carlo simulation using N samples.
"""
function calc_pi(N)
    # Generate `N` pairs of x,y coordinates on the grid defined
    # by the extrema (1, 1), (-1, -1), (1, -1), and (-1, 1)
    samples = rand([1, -1], N, 2) .* rand(N, 2)
    # Count how many of these sample points lie within the circle
    # of maximal size bounded by the same extrema
    samples_in_circle = sum([sqrt(samples[i, 1]^2 + samples[i, 2]^2) < 1.0 for i in 1:N])

    # The ratio of the circle's area to the square's is pi/4
    pi_estimate = 4 * samples_in_circle / N
end

# print the estimate of pi calculated with 10,000 samples 
println(calc_pi(10_000))
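
Before baking this script into a container, you can sanity-check it with the Julia image we already pulled (your home directory, where calc_pi.jl lives, is visible in the container by default):

singularity exec my_julia.sif julia calc_pi.jl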

Then update that file’s permissions so it’s broadly accessible for reading, writing, and executing via

chmod 777 calc_pi.jl


Now we’ll show a couple ways to build a container that runs a copy of this file by default.

A. Sandboxes

Using sandboxes is one way to edit a container and its behavior, though editing a container recipe (below) is the preferred method. In using a sandbox, we will typically

  1. Create a sandbox from an image with singularity build
  2. Change the contents of the container by editing within the sandbox
  3. Write a new image from the sandbox, again with singularity build

First, you can create a sandbox using singularity build with the --sandbox flag. For example,

singularity build --sandbox julia_sandbox/ my_julia.sif

creates a directory called julia_sandbox inside my working directory. Now,

ls julia_sandbox

returns the same results as

singularity exec my_julia.sif ls /

Second, let’s change the contents of the sandbox by copying calc_pi.jl into the sandbox’s /opt/ directory:

cp ./calc_pi.jl julia_sandbox/opt/

Next, we’ll update the runscript, which lives at julia_sandbox/singularity in the sandbox (the container path /singularity), to read

#!/bin/sh
# if there are no command line arguments, run the calc_pi.jl script 
if [ $# -eq 0 ]; then
    julia /opt/calc_pi.jl
# otherwise, command line arguments (such as a julia script) were provided
else
    julia "$@"
fi

so that we run calc_pi.jl and print our estimate of pi whenever we run this container without arguments. Finally, we can create an updated container image, my_updated_julia.sif, from the edited sandbox via

singularity build my_updated_julia.sif julia_sandbox/

Now, when I run the new container via ./my_updated_julia.sif, an approximation to pi prints to stdout.
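
Both branches of the new runscript can now be exercised, depending on whether we pass arguments (here reusing the hello.jl file from the batch-scheduler section):

./my_updated_julia.sif             # no arguments: prints an approximation of pi
./my_updated_julia.sif hello.jl    # with an argument: runs julia hello.jl instead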

B. Container recipes

Rather than creating a sandbox from an image, manually editing that sandbox, and then creating a new image from the altered sandbox, a better documented and more reproducible way to create a new container image is to use a container recipe. We recommend using a Dockerfile as your recipe file and building as described in "How to pull or build a container".