1. Building from a Dockerfile with Podman
If you want to build a container from scratch (e.g. from a Dockerfile), we recommend building that container with podman build. To do so, you'll want to
- A. Build your container with podman.
- B. Save the container image to a Docker `.tar` archive.
This guide covers building an image by hand. If you're interested in automating your build and persisting the image in LC's GitLab container registry, check out this guide.
In greater detail:
A. Building with podman
Using an allocated node
To build a container with podman build..., you'll first have to request an allocation from SLURM with the --userns flag; this flag ensures you have the privileges needed to build a container successfully. For example, to ask SLURM for a 1-node, 60-minute allocation, you would run
salloc -N 1 -t 60 --userns
Next, you'll want to configure your environment by running the enable-podman.sh script.
On LC systems, it lives at /collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh, so in a bash terminal you would run
/collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh
(Note that the default storage driver is vfs, but you can switch to the overlay driver by running /collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh overlay. For PowerPC systems like lassen and rzansel, however, the driver should remain vfs. Also keep in mind that changing the storage driver affects all systems you have access to, so switching between storage drivers can corrupt your image stores on other systems.)
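For example, selecting a driver and then confirming which one is active might look like the following (a minimal sketch; podman info is a standard podman command whose output includes the active graph driver, and sourcing with . matches the example transcript further below):

# default vfs storage driver
. /collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh
podman info | grep -i graphdriver

# overlay storage driver (avoid on PowerPC systems such as lassen and rzansel)
. /collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh overlay
podman info | grep -i graphdriver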
Now build the container with podman build, specifying the Dockerfile with -f and the tag for the resulting image with -t. Using the Dockerfile Dockerfile.ubuntu and the tag ubuntuimage, we'd run
podman build -f Dockerfile.ubuntu -t ubuntuimage
To try this yourself, you can grab the contents of Dockerfile.ubuntu.
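For reference, based on the build steps shown in the example transcript below, Dockerfile.ubuntu contains something along these lines (a sketch reconstructed from the STEP output rather than the exact file; ubuntu-app.py stands in for whatever script you copy into the image):

FROM ubuntu:18.04
COPY . /app
CMD python /app/ubuntu-app.py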
After podman build ..., run podman images to see your container image.
The whole build might look something like this:
janeh@pascal83:~$ salloc -N 1 -t 10 --userns
salloc: Pending job allocation 1535428
salloc: job 1535428 queued and waiting for resources
salloc: job 1535428 has been allocated resources
salloc: Granted job allocation 1535428
salloc: Waiting for resource configuration
salloc: Nodes pascal43 are ready for job
janeh@pascal43:~$ . /collab/usr/gapps/lcweg/containers/scripts/enable-podman.sh
janeh@pascal43:~$ podman build -f Dockerfile.ubuntu -t ubuntuimage
STEP 1: FROM ubuntu:18.04
Getting image source signatures
Copying blob 2f94e549220a done
Copying config 886eca19e6 done
Writing manifest to image destination
Storing signatures
STEP 2: COPY . /app
6ee326fc84d85e629b337aa35ec05fcde6706a859bed0ca3b922212b09499d51
STEP 3: CMD python /app/ubuntu-app.py
STEP 4: COMMIT ubuntuimage
7161b8fa255e24227b11a2b033fba8651ab513c3fb30e520b525220874434d23
7161b8fa255e24227b11a2b033fba8651ab513c3fb30e520b525220874434d23
janeh@pascal43:~$ podman images
REPOSITORY                TAG     IMAGE ID      CREATED             SIZE
localhost/ubuntuimage     latest  7161b8fa255e  About a minute ago  4.22 GB
docker.io/library/ubuntu  18.04   886eca19e611  3 days ago          65.5 MB
Without allocating a node
It is possible to build a container without requesting a node allocation. By incorporating fakeroot into your Dockerfile, you can simulate the actions that typically require root privileges. Note that for this method to work, the base image must be RHEL8-compatible.
First, in your Dockerfile, install the necessary packages by running
RUN dnf -y install epel-release && dnf -y install fakeroot
Then, for all subsequent dnf commands prepend fakeroot to simulate the necessary permissions. An example Dockerfile is shown below.
FROM almalinux:8
RUN dnf -y install epel-release && dnf -y install fakeroot
RUN fakeroot dnf upgrade -y && fakeroot dnf update -y
RUN fakeroot dnf -y install nginx wget curl hostname openssh-server
ENTRYPOINT ["/bin/bash", "-c"]
Now build the container normally with podman build, specifying the Dockerfile with -f and the tag for the resulting image with -t. For example, if the Dockerfile above were saved as Dockerfile.almalinux and we wanted the tag almalinuximage, we'd run
podman build -f Dockerfile.almalinux -t almalinuximage
B. Saving the container to a Docker archive
By default, the container files created on a compute node will be deleted when the allocation ends. After building and before the allocation ends, you need to explicitly save the container! For example, if we terminate the allocation immediately after building the container above and start a new allocation, we see no remaining images with podman images:
janeh@pascal83:~$ salloc -N 1 -t 1 --userns
salloc: Pending job allocation 1535430
salloc: job 1535430 queued and waiting for resources
salloc: job 1535430 has been allocated resources
salloc: Granted job allocation 1535430
salloc: Waiting for resource configuration
salloc: Nodes pascal129 are ready for job
janeh@pascal129:~$ ./enable-podman.sh
janeh@pascal129:~$ podman images
REPOSITORY  TAG  IMAGE ID  CREATED  SIZE
janeh@pascal129:~$
Instead, use the syntax
podman save TAG_NAME > OUTPUT_FILENAME
to save the container after building. For a container with tag ubuntuimage, we might run
podman save ubuntuimage > ubuntuimage.tar
after building, as below:
janeh@pascal7:~$ podman build -f Dockerfile.ubuntu -t ubuntuimage
STEP 1: FROM ubuntu:18.04
Getting image source signatures
Copying blob 2f94e549220a done
Copying config 886eca19e6 [======================================] 1.4KiB / 1.4KiB
Writing manifest to image destination
Storing signatures
STEP 2: COPY . /app
88d6d0751fd16798dc127cd5e5ae463083f875280dd270849286fb8d8e009c9f
STEP 3: CMD python /app/ubuntu-app.py
STEP 4: COMMIT ubuntuimage
0ef15bc4f175622468ad741aa7ee3cdf3cb78f93ef693f5463fc864905aa24f1
0ef15bc4f175622468ad741aa7ee3cdf3cb78f93ef693f5463fc864905aa24f1
janeh@pascal7:~$ podman images
REPOSITORY                TAG     IMAGE ID      CREATED         SIZE
localhost/ubuntuimage     latest  0ef15bc4f175  10 seconds ago  4.22 GB
docker.io/library/ubuntu  18.04   886eca19e611  3 days ago      65.5 MB
janeh@pascal7:~$ podman save ubuntuimage > ubuntuimage.tar
This creates the file ubuntuimage.tar:
janeh@pascal7:~$ ls ubuntuimage.tar
ubuntuimage.tar
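If you later need the image back in a podman image store (for example, in a fresh allocation after re-running enable-podman.sh), podman load, the standard counterpart to podman save, can restore it from the archive; depending on your workflow you might instead hand the .tar file to another container runtime:

podman load < ubuntuimage.tar
podman images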