# How to pull or build a container
## 1. Pulling from a registry with Singularity

Often we are interested in containers that live in a registry somewhere, such as Docker Hub, the Singularity Library, or Singularity Hub. In "pulling" a container, we download a pre-built image from a container registry.
Let us say we are interested in a Docker container like the one on Docker Hub at https://hub.docker.com/r/bitnami/python. We can use the `singularity pull` command (more detail below) to grab this container with syntax that specifies both the user offering the container (here, `bitnami`) and the name of the container (here, `python`):

```
singularity pull docker://bitnami/python
```

By default, this creates a file with a `.sif` extension called `python_latest.sif`.
Similarly, we can pull the official image for the Julia programming language with `singularity pull` via

```
singularity pull docker://julia
```

and this generates a file called `julia_latest.sif`. The `.sif` extension indicates a Singularity-specific version of a SquashFS image, a type of compressed file. Alternatively, we could rename the output file or create a different file type by adding an input argument to our call to `singularity pull`:

```
singularity pull my_julia.img docker://julia
```

This generates the container image file `my_julia.img`.
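As a quick sanity check on those defaults, here is a small bash sketch that mimics the default naming convention described above (this is an assumption about the pattern `<name>_<tag>.sif`, not singularity's actual code):

```shell
# Hypothetical helper: reproduce singularity pull's default output name,
# <name>_<tag>.sif, where the tag defaults to "latest".
default_sif_name() {
  local ref="${1#docker://}"     # drop the registry prefix
  local name="${ref##*/}"        # keep only the final path component
  local tag="latest"
  case "$name" in
    *:*) tag="${name##*:}"; name="${name%%:*}" ;;   # explicit tag, e.g. julia:1.6
  esac
  printf '%s_%s.sif\n' "$name" "$tag"
}

default_sif_name docker://bitnami/python   # -> python_latest.sif
default_sif_name docker://julia            # -> julia_latest.sif
```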
Independent of whether you create a `.sif` or `.img` file, you can work with your output file using Singularity commands like `singularity exec`. (See later sections.) It no longer matters that this container originally came from Docker Hub.
## 2. Building from a registry with Singularity

Alternatively, we could use the command `singularity build` to interact with a registry. Because this command takes at least two input arguments, grabbing a container image file from a registry with `singularity build` looks more like our second example using `singularity pull`:

```
singularity build my_julia.img docker://julia
```

and similarly can generate an output file `my_julia.img`. Unlike `singularity pull`, `singularity build` can be used for a variety of tasks. In addition to grabbing a container image file from a container registry, you can use `singularity build` to convert between formats of a container image, as we'll see in the next section on building a container from a `Dockerfile`. (With `singularity build`, you can also make a container writeable or create a sandbox.)
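For instance, the sandbox variant just mentioned looks like the following. This sketch is a dry run: the `show` helper only prints each command, so it works even on a machine without singularity installed, and `my_julia_box/` is an example name.

```shell
# Dry run: print (rather than execute) the singularity commands.
show() { printf '+ %s\n' "$*"; }

# Build a sandbox: a writable directory tree instead of a read-only .sif.
show singularity build --sandbox my_julia_box/ docker://julia

# Open a writable shell in that sandbox to modify it in place.
show singularity shell --writable my_julia_box/
```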
## 3. Building from a Dockerfile with Podman

Our recommendation is that, if you want to build a container from scratch (e.g. using a `Dockerfile`), you first build that container using `podman build` and then use `singularity build` to change file formats. To do so, you'll want to

1. Build your container with podman.
2. Save the container to a docker `.tar` archive.
3. Re-build the container as a `.sif` file with `singularity build`.
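Put together, the three steps can be sketched as a dry-run script. The `run` helper only prints each command, so the sketch runs anywhere; `Dockerfile.ubuntu`, `ubuntuimage`, and `ubuntu.sif` are the example names used in the detailed walkthrough.

```shell
#!/bin/bash
# Dry run of the three-step workflow: "run" prints each command instead
# of executing it, so podman and singularity need not be installed.
run() { printf '+ %s\n' "$*"; }

run podman build -f Dockerfile.ubuntu -t ubuntuimage               # 1. build
run "podman save ubuntuimage > ubuntuimage.tar"                    # 2. save to a docker archive
run singularity build ubuntu.sif docker-archive://ubuntuimage.tar  # 3. convert to .sif
```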
In greater detail:
### 1. Building with podman
To build a container with `podman build`, you'll first have to request an allocation with the `--userns` flag; this flag makes sure you have the necessary privileges to build a container successfully. For example, to ask Slurm for a 1 node, 60 minute allocation, you would run

```
salloc -N 1 -t 60 --userns
```

Next, you'll want to configure your environment by running the `enable-podman.sh` script. On LC, it lives at `/collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh`, so in a bash terminal you would run

```
. /collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh
```
(Note that the default driver is `vfs`, but you can change the driver to `overlayfs` via `. /collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh overlay`.)
Now build the container with `podman build`, specifying the Dockerfile with `-f` and the tag of the container created with `-t`. Using the Dockerfile `Dockerfile.ubuntu` and tag `ubuntuimage`, we'd run

```
podman build -f Dockerfile.ubuntu -t ubuntuimage
```

To try this, you can grab the contents of `Dockerfile.ubuntu`.

After `podman build`, run `podman images` to see your container image.
The whole build might look something like this:
```
janeh@pascal83:~$ salloc -N 1 -t 10 --userns
salloc: Pending job allocation 1535428
salloc: job 1535428 queued and waiting for resources
salloc: job 1535428 has been allocated resources
salloc: Granted job allocation 1535428
salloc: Waiting for resource configuration
salloc: Nodes pascal43 are ready for job
janeh@pascal43:~$ . /collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh
janeh@pascal43:~$ podman build -f Dockerfile.ubuntu -t ubuntuimage
STEP 1: FROM ubuntu:18.04
Getting image source signatures
Copying blob 2f94e549220a done
Copying config 886eca19e6 done
Writing manifest to image destination
Storing signatures
STEP 2: COPY . /app
6ee326fc84d85e629b337aa35ec05fcde6706a859bed0ca3b922212b09499d51
STEP 3: CMD python /app/ubuntu-app.py
STEP 4: COMMIT ubuntuimage
7161b8fa255e24227b11a2b033fba8651ab513c3fb30e520b525220874434d23
7161b8fa255e24227b11a2b033fba8651ab513c3fb30e520b525220874434d23
janeh@pascal43:~$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/ubuntuimage latest 7161b8fa255e About a minute ago 4.22 GB
docker.io/library/ubuntu 18.04 886eca19e611 3 days ago 65.5 MB
```
### 2. Saving the container to a docker archive
By default, the container files created on a compute node are deleted when the allocation ends, so after building and before the allocation ends, you need to explicitly save the container! For example, if we terminate the allocation immediately after building the container above and then start a new allocation, `podman images` shows no remaining images:
```
janeh@pascal83:~$ salloc -N 1 -t 1 --userns
salloc: Pending job allocation 1535430
salloc: job 1535430 queued and waiting for resources
salloc: job 1535430 has been allocated resources
salloc: Granted job allocation 1535430
salloc: Waiting for resource configuration
salloc: Nodes pascal129 are ready for job
janeh@pascal129:~$ ./enable-podman.sh
janeh@pascal129:~$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
janeh@pascal129:~$
```
Instead, use the syntax `podman save TAG_NAME > OUTPUT_FILENAME` to save the container after building. For a container with tag `ubuntuimage`, we might run

```
podman save ubuntuimage > ubuntuimage.tar
```

after building, as below:
```
janeh@pascal7:~$ podman build -f Dockerfile.ubuntu -t ubuntuimage
STEP 1: FROM ubuntu:18.04
Getting image source signatures
Copying blob 2f94e549220a done
Copying config 886eca19e6 [======================================] 1.4KiB / 1.4KiB
Writing manifest to image destination
Storing signatures
STEP 2: COPY . /app
88d6d0751fd16798dc127cd5e5ae463083f875280dd270849286fb8d8e009c9f
STEP 3: CMD python /app/ubuntu-app.py
STEP 4: COMMIT ubuntuimage
0ef15bc4f175622468ad741aa7ee3cdf3cb78f93ef693f5463fc864905aa24f1
0ef15bc4f175622468ad741aa7ee3cdf3cb78f93ef693f5463fc864905aa24f1
janeh@pascal7:~$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/ubuntuimage latest 0ef15bc4f175 10 seconds ago 4.22 GB
docker.io/library/ubuntu 18.04 886eca19e611 3 days ago 65.5 MB
janeh@pascal7:~$ podman save ubuntuimage > ubuntuimage.tar
```
This creates the file `ubuntuimage.tar`:

```
janeh@pascal7:~$ ls ubuntuimage.tar
ubuntuimage.tar
```
### 3. Re-building with `singularity build`
Using the `.tar` created in the last step, you can build a singularity-compatible `.sif` file with `singularity build` using the syntax

```
singularity build OUTPUT_SIF_FILENAME docker-archive://DOCKER_TAR_FILENAME
```

For example, `singularity build ubuntu.sif docker-archive://ubuntuimage.tar` produces `ubuntu.sif` from `ubuntuimage.tar`:
```
janeh@pascal7:~$ singularity build ubuntu.sif docker-archive://ubuntuimage.tar
INFO: Starting build...
Getting image source signatures
Copying blob 40a154bd3352 done
Copying blob 9b11b519d681 done
Copying config 0ef15bc4f1 done
Writing manifest to image destination
Storing signatures
2022/01/10 14:56:00 info unpack layer: sha256:704fb613779082c6361b77dc933a7423f7b4fb6c5c4e2992397d4c1b456afbd8
2022/01/10 14:56:00 warn xattr{etc/gshadow} ignoring ENOTSUP on setxattr "user.rootlesscontainers"
2022/01/10 14:56:00 warn xattr{/var/tmp/janeh/build-temp-768716639/rootfs/etc/gshadow} destination filesystem does not support xattrs, further warnings will be suppressed
2022/01/10 14:56:01 info unpack layer: sha256:82711e644b0c78c67fcc42949cb625e71d0bed1e6db18e63e870f44dce60e583
INFO: Creating SIF file...
INFO: Build complete: ubuntu.sif
```
We can now use `singularity shell` to test that we can run the container image `ubuntu.sif` in singularity and that it has the expected operating system:
```
janeh@pascal7:~$ singularity shell ubuntu.sif
Singularity> cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
```