Apptainer
This is not a complete article: it is a draft, a work in progress, which may or may not be ready for inclusion in the main wiki. It should not necessarily be considered factual or authoritative.
Official Apptainer Documentation
This page is not exhaustive and does not replace the official documentation; rather, it summarizes basic use, documents some aspects of using Apptainer on Alliance systems, and provides some relevant examples. We recommend that all users read the official Apptainer documentation for the features of Apptainer they are using.
If Currently Using Singularity
We strongly recommend that you start using Apptainer instead of Singularity. The Singularity project (up to v3.9.5) was adopted by the Linux Foundation and renamed Apptainer, with these changes:
- added experimental support for DMTCP checkpointing (not available in Singularity),
- removed support for the `--nvccli` command line option,
- removed support for `apptainer build --remote`,
- removed support for the SylabsCloud remote endpoint, replacing it with a DefaultRemote endpoint that has no defined server for `library://` URLs,
  - NOTE: if the SylabsCloud remote is needed, the previous default can be restored,
- renamed all executables, paths, etc. having `singularity` in their names to have `apptainer` in them,
  - e.g., instead of using the `singularity` command one uses the `apptainer` command,
  - e.g., the `~/.singularity` directory is now `~/.apptainer`,
- renamed all environment variables having `SINGULARITY` in their names to have `APPTAINER` in them.

Should you need to port scripts, etc. to Apptainer, know that Apptainer version 1 is backwards compatible with Singularity, so switching to Apptainer can be done incrementally.
Using Apptainer
In order to use Apptainer, one must first have a container image, e.g., a `.sif` file or a "sandbox" directory created previously. If you don't already have a container image, see the section on building an image below.
Loading an Apptainer Module
To use the default version of Apptainer available, run:
$ module load apptainer
To see the available versions of Apptainer that can be loaded run:
$ module spider apptainer
Running Programs Within a Container
Important Items
sudo
Many users ask about `sudo`, since documentation and web sites often discuss using it. Know that the ability to use `sudo` to obtain root permissions is not available on our clusters. Should you require `sudo`, consider the following options:
- Install Linux, Apptainer, and `sudo` in a virtual machine on a system you control, so that you have `sudo` access within it. Build your image(s) on that machine and upload them in order to use them on Alliance systems.
- If appropriate, submit a ticket asking if Alliance staff would be able to help build the image(s) that require `sudo`. (Understand that this may or may not be possible, but feel free to ask in a ticket if what you wish to achieve is beyond your means. Additionally, we may respond with other ways to achieve your goal, which may or may not involve Apptainer.)
- Apptainer version 1.1.x has improved support for using `--fakeroot` implicitly and explicitly, so some things may become possible that were not with Apptainer version 1.0.x and Singularity. This includes being able to build some images from definition files without needing to use `sudo`. That said, know that not all images can be built without `sudo` or real root.
Important Command Line Options
Software run inside a container uses a different environment, libraries, and tools than what is installed on the host system. It is, therefore, wise to run programs within containers without importing any environment settings or software defined outside of the container. By default, Apptainer adopts the shell environment of the host, but this can result in issues when running programs inside the container. To work around this when using `apptainer run`, `apptainer shell`, `apptainer exec`, and `apptainer instance`, consider using one of these options (with more preference to those listed earlier in the table below):

| Option | Description |
|---|---|
| `-C` | Isolates the running container from all file systems as well as the parent PID, IPC, and environment. Using this option will require using bind mounts if access to filesystems outside of the container is needed. |
| `-c` | Isolates the running container from most file systems, using only a minimal `/dev`, an empty `/tmp` directory, and an empty `/home` directory. Using this option will require using bind mounts if access to filesystems outside of the container is needed. |
| `-e` | Cleans (some) shell environment variables before running container commands and applies settings for increased OCI/Docker compatibility. Using this option also implies the use of these options: `--containall`, `--no-init`, `--no-umask`, `--writable-tmpfs`. |

When no options are used, the environment variables from the parent shell exist as-is inside the container (which can cause issues to occur), and (virtually) all filesystems are also present inside the container.
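To make the effect of these options concrete, here is a hedged sketch; the image name `myimage.sif` and the commands run inside it are made-up placeholders, not part of the official documentation:

```shell
# Default behaviour: the host's environment variables leak into the
# container, which can confuse software inside it.
apptainer exec myimage.sif env

# With -C, the environment and filesystems are isolated; far fewer
# variables are visible, and /home must be bind-mounted back in with -B
# if the program needs to read files from it.
apptainer exec -C -B /home myimage.sif env
```

Comparing the two outputs shows how much host state the default mode carries into the container.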
Another important option one should consider, and may need in order to use Apptainer successfully, is the `-W` (or `--workdir`) option. On Alliance clusters, as on most Linux systems, `/tmp` and similar filesystems use RAM, not disk space. Since jobs typically run on our clusters with limited amounts of RAM, this can result in jobs getting killed because they consume too much RAM relative to what was requested for the job. A suitable work-around is to tell Apptainer to use a real disk location for its "workdir". This is done by passing the `-W` option followed by a path to a disk location where Apptainer can read/write temporary files, etc. For example, suppose one wanted to run a command `myprogram` using an Apptainer container image called `myimage.sif` with its "workdir" set to `/path/to/a/workdir` in the filesystem:
$ mkdir -p /path/to/a/workdir
$ apptainer run -C -B /home -W /path/to/a/workdir myimage.sif myprogram
where:
- The workdir directory can be removed if there are no live containers using it.
- When using Apptainer in an `salloc`, in an `sbatch` job, or when using JupyterHub on our clusters, use `${SLURM_TMPDIR}` for the "workdir" location, e.g., `-W ${SLURM_TMPDIR}`.
  - ASIDE: one should not run programs (including Apptainer) on a login node; use an interactive `salloc` job instead.
- When using bind mounts, see the section on bind mounts below, since not all Alliance clusters are exactly the same concerning the exact bind mounts needed to access `/home`, `/project`, and `/scratch`.
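The advice above can be combined into a minimal Slurm batch script. This is a sketch only; the image name `myimage.sif`, the program `myprogram`, and the resource requests are hypothetical and should be replaced with your own:

```shell
#!/bin/bash
#SBATCH --time=0-01:00          # hypothetical resource requests
#SBATCH --mem-per-cpu=2000M
#SBATCH --cpus-per-task=1

module load apptainer

# Use node-local disk ($SLURM_TMPDIR) as Apptainer's workdir so that
# temporary files do not consume the job's RAM via /tmp.
apptainer run -C -W ${SLURM_TMPDIR} -B /home -B /project -B /scratch \
    myimage.sif myprogram
```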
Using GPUs
When running software inside a container that requires the use of GPUs, it is important to do the following:
- Ensure that you pass `--nv` (for NVIDIA hardware) or `--rocm` (for AMD hardware) to Apptainer commands.
  - These options ensure the appropriate `/dev` entries are bind-mounted inside the container.
  - These options locate and bind GPU-related libraries on the host (so they become bind-mounted inside the container) and set the `LD_LIBRARY_PATH` environment variable so that those libraries work inside the container.
- Ensure the application using the GPU inside the container was properly compiled to use the GPU and its libraries.
- When needing to use OpenCL inside the container, use the following bind mount in addition to the aforementioned options: `--bind /etc/OpenCL`.
An example of using NVIDIA GPUs within an Apptainer container appears later on this page.
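As a quick sanity check, the GPU options above can be exercised as follows. This is a hedged sketch: it assumes a job with an allocated NVIDIA GPU and a hypothetical image `myimage.sif` whose software stack expects CUDA:

```shell
module load apptainer

# --nv bind-mounts the host's NVIDIA driver libraries and /dev entries
# into the container; nvidia-smi should then list the allocated GPU(s).
apptainer exec --nv -B /home myimage.sif nvidia-smi
```

If `nvidia-smi` reports no devices, check that the job actually requested a GPU and that `--nv` was passed.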
Using MPI Programs
If you want to run MPI programs inside a container, some things need to be set up in the host environment for this to work. Please see the Running MPI Programs section below for an example of how to run MPI programs inside a container. The official Apptainer documentation has more information concerning how MPI programs can be run inside a container.
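As a preview, the commonly documented "hybrid" approach can be sketched as a batch script. This is an assumption-laden sketch, not the Alliance-endorsed recipe: the image name and program are hypothetical, and the MPI implementation inside the container must be compatible with the host's Slurm/PMI layer for the ranks to coordinate:

```shell
#!/bin/bash
#SBATCH --ntasks=4              # hypothetical resource requests
#SBATCH --mem-per-cpu=2000M

module load apptainer

# srun starts one container per MPI rank; the host's Slurm/PMI layer
# wires the ranks together, while the MPI library inside the image
# runs the program itself.
srun apptainer exec -C -B /home -W ${SLURM_TMPDIR} \
    myimage.sif ./my_mpi_program
```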
Container-Specific Help: apptainer run-help
Apptainer containers built from definition files often have a `%help` section. To see this section, run:
apptainer run-help your-container-name.sif
where:
- `your-container-name.sif` is the name of your container.
It is possible your container has "apps" defined in it; you can get help for those apps by running:
apptainer run-help --app appname your-container-name.sif
where:
- `appname` is the name of the app,
- `your-container-name.sif` is the name of your container.
To see a list of apps installed in your container (if there are any), run:
apptainer inspect --list-apps your-container-name.sif
where:
- `your-container-name.sif` is the name of your container.
Running Software (Preferred): apptainer run
The `apptainer run` command launches an Apptainer container, runs the `%runscript` defined for that container (if one is defined), and then runs the specified command (subject to the code in the `%runscript` script). Using this command is preferred over using the `apptainer exec` command (which directly runs a command within the specified container).
For example, suppose you want to run the `g++` compiler inside your container to compile a C++ program called `myprog.cpp`. To do this you might use this command:
apptainer run your-container-name.sif g++ -O2 -march=broadwell ./myprog.cpp
where:
- `your-container-name.sif` is the name of your SIF file,
- `g++ -O2 -march=broadwell ./myprog.cpp` is the command you want to run inside the container.
On our clusters, you will likely need to use a number of additional options (which appear after `run` and before `your-container-name.sif`). These include `-C`, `-c`, `-e`, and `-W`, as well as various bind mount options to make your disk space available to the programs that run in your container. For example, a more complete command might be:
apptainer run -C -W $SLURM_TMPDIR -B /home -B /project -B /scratch your-container-name.sif g++ -O2 -march=broadwell ./myprog.cpp
For more information on these options, see the relevant sections on this page as well as the official Apptainer documentation.
Interactively Running Software: apptainer shell
Text to come.
Running Software (Basic): apptainer exec
Text to come.
Running Daemons: apptainer instance
Text to come.
Bind Mounts and Persistent Overlays
Text to come.
Bind Mounts
Text to come.
Persistent Overlays
Text to come.
Building an Apptainer Container/Image
Overview
Apptainer "images" can be created in the following formats:
- as an SIF file, or
- as a "sandbox" directory.

SIF files can internally contain multiple parts, where each part is typically a squashfs filesystem (read-only and compressed). It is possible for SIF files to contain read-write filesystems and overlay images as well, but such is beyond the scope of this page: see the official Apptainer documentation on how to do this. Unless more advanced methods are used, the `apptainer build` command produces an SIF file containing a read-only squashfs filesystem. (This is the preferred option, since the read-only image remains as-is, and the compressed image is much smaller. Disk reads from such an image are also very fast.)

A "sandbox" directory is a normal directory in the filesystem that starts out empty; as Apptainer builds the image, it adds the needed files, etc. to that directory. The contents of a "sandbox" directory should only be accessed and updated through Apptainer. One might need a "sandbox" directory in situations where read-write access to the image itself is needed in order to update the container image. That said, if updates are infrequent, it is typically easier and better to use an SIF file; when updates need to be done, build a sandbox image from the SIF file, make the required changes, and then build a new SIF file, e.g.,
$ cd $HOME
$ apptainer build --sandbox mynewimage.dir myimage.sif
$ apptainer shell --writable mynewimage.dir
Apptainer> # Run commands to update mynewimage.dir here.
Apptainer> exit
$ apptainer build newimage.sif mynewimage.dir
$ rm -rf mynewimage.dir
Using an SIF image is recommended, as disk performance (reading from the container image) will be faster than storing each file separately on Alliance cluster filesystems (which are set up to handle large files and parallel I/O). Using an SIF file instead of a sandbox image will also consume a file count quota of only 1 instead of thousands (images typically contain thousands of files and directories).
Building a Sandbox Image
To build a "sandbox" directory instead of an SIF file, instead of providing an SIF file name, provide `--sandbox DIR_NAME` (or `-s DIR_NAME`), where `DIR_NAME` is the name of the to-be-created directory where you want your "sandbox" image. For example, if the `apptainer build` command to create an SIF file was:
$ apptainer build bb.sif docker://busybox
then change `bb.sif` to a directory name, e.g., `bb.dir`, and prefix it with `--sandbox`:
$ apptainer build --sandbox bb.dir docker://busybox
Differences between building a "sandbox" image and a (normal) SIF file are:
- the SIF file's image will be contained in a single file, compressed and read-only,
- the "sandbox" image will be placed in a directory, uncompressed, may contain thousands of files (depending on what exactly is in the image), and will be read-write.

Within an account, a "sandbox" directory consumes significant amounts of both disk space and file count quotas; thus, if read-write access to the underlying image is not normally required, you are advised to use an SIF file instead. Additionally, an SIF file will have higher disk access speeds to the content contained within it.
Building an SIF Image
NOTE: This section only discusses some basics of creating a simple compressed, read-only SIF container image. See the Apptainer documentation for more advanced aspects of building images.
Text to come.
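Until this section is written, here is a minimal, hedged sketch of a definition file; the base image, package, and help text are hypothetical choices for illustration:

```
Bootstrap: docker
From: ubuntu:22.04

%post
    apt-get -y update
    apt-get -y install g++

%help
    A small image with a C++ compiler (hypothetical example).
```

Saved as, e.g., `mydef.def`, it could then be built into an SIF file with `apptainer build mydef.sif mydef.def` on a machine where you have sufficient permissions (with `--fakeroot` or `sudo` if needed, as discussed above).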
Example Use Cases
Using Docker Images Within an Apptainer Container
Text to come.
Using Conda Within an Apptainer Container
Text to come.
Using Spack Within an Apptainer Container
Text to come.
Using NVIDIA GPUs Within an Apptainer Container
Text to come.
Running MPI Programs Inside an Apptainer Container
Text to come.
Creating an Apptainer Container From a Dockerfile
This section requires you to install and use Docker and Apptainer on a system where you have appropriate privileges. These instructions will not work on our compute clusters.
Unfortunately, some packages only provide instructions as a `Dockerfile` without a container image. A `Dockerfile` contains the instructions necessary for the Docker software to build that container. Our clusters do not have the Docker software installed. That said, if you have access to a system with both Docker and Apptainer installed, and sufficient access to Docker (e.g., `sudo` or root access, or membership in that system's `docker` group) and, if needed, to Apptainer (e.g., `sudo` or root access, or `--fakeroot` access), then you can follow the instructions below to use Docker and then Apptainer to obtain an Apptainer image on that system.
NOTE: Using Docker may fail if you are not in the `docker` group. Similarly, building some containers may fail with Apptainer without appropriate `sudo`, root, or `--fakeroot` permissions. It is your responsibility to ensure you have such access on the system where you run the commands below.
If you only have a Dockerfile and wish to create an Apptainer image, run the following on a computer with Docker and Apptainer installed (where you have sufficient permissions, etc.):
docker build -f Dockerfile -t your-tag-name .
docker save your-tag-name -o your-tarball-name.tar
docker image rm your-tag-name
apptainer build --fakeroot your-sif-name.sif docker-archive://your-tarball-name.tar
rm your-tarball-name.tar
where:
- `your-tag-name` is a name you make up that will identify the container created in Docker,
- `your-tarball-name.tar` is a filename you create, to which Docker will save the generated content of the container,
- `--fakeroot` is possibly optional (if so, omit it); if `sudo` is needed instead, then omit `--fakeroot` and prefix the line with `sudo`,
- `your-sif-name.sif` is the name of the Apptainer SIF file for the Apptainer container.
After this is done, the SIF file is an Apptainer container for the `Dockerfile`. Transfer the SIF file to the appropriate cluster(s) in order to use it.
NOTE: It is possible that building from the Dockerfile pulled in additional image layers, which you will have to delete manually to free up the associated disk space on the system you are using. Run `docker images` to list them, followed by `docker image rm ID` (where ID is the image ID output from the `docker images` command) for each leftover image.
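The cleanup step can be sketched as follows; note that the ID shown is a made-up placeholder, and that `docker image prune` removes all dangling (untagged) layers, which may be broader than you intend:

```shell
# List images so you can identify leftover layers.
docker images

# Remove a specific leftover image by its ID (hypothetical ID shown).
docker image rm 1a2b3c4d5e6f

# Or remove all dangling (untagged) layers in one step.
docker image prune -f
```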
FAQ
Text to come.