Singularity

<languages />
<translate>
=Overview= <!--T:1-->
Singularity<ref>Singularity Software Web Site: http://singularity.lbl.gov/</ref> is open source software created by Berkeley Lab:
* as a '''secure way''' to use Linux containers on Linux multi-user clusters,
* as a way to enable users to have full control of their environment, and,
* as a way to package scientific software and deploy such to different clusters having the same architecture.
i.e., it provides '''operating-system-level virtualization''' commonly called ''containers''.


<!--T:2-->
A ''container'' is different from a ''virtual machine'' in that a container:
* likely has less overhead, and,
* can only run programs capable of running in the same operating system (i.e., Linux when using Singularity) for the same hardware architecture.
(Virtual machines can run different operating systems and sometimes support running software designed for foreign CPU architectures.)


<!--T:3-->
Containers use Linux '''control groups''' (cgroups), kernel '''namespaces''', and an '''overlay filesystem''' where:
* cgroups '''limit, control, and isolate''' resource usage (e.g., RAM, disk I/O, CPU access)
* kernel namespaces '''virtualize and isolate''' operating system resources of a group of processes, e.g., process and user IDs, filesystems, network access; and,
* overlay filesystems can be used to enable the '''appearance''' of writing to otherwise read-only filesystems.


<!--T:4-->
Singularity is similar to other container solutions such as Docker<ref>Docker Software Web Site: https://www.docker.com/</ref> except Singularity was specifically designed to enable containers to be used securely without requiring any special permissions, especially on multi-user compute clusters.<ref>Singularity Security Documentation: http://singularity.lbl.gov/docs-security</ref>


=Singularity Availability= <!--T:5-->


<!--T:6-->
Singularity is available on Compute Canada clusters (e.g., [[Cedar]] and [[Graham]]) and some legacy cluster systems run by various Compute Canada members/consortia across Canada.


<!--T:7-->
Should you wish to use Singularity on your own computer systems, you will need to download and install Singularity per its documentation.<ref>Singularity Documentation: http://singularity.lbl.gov/all-releases</ref> You should be using a relatively recent version of some Linux distribution (e.g., ideally your kernel is v3.10.0 or newer).


=Using Singularity On Compute Canada Systems= <!--T:8-->


==Module Loading== <!--T:9-->


<!--T:10-->
To use Singularity, first load the specific module you would like to use, e.g.,
<source lang="console">$ module load singularity/2.5</source>


<!--T:11-->
To see all available versions of Singularity modules, run:
<source lang="console">$ module spider singularity</source>


==Creating Images== <!--T:12-->


<!--T:13-->
'''Before''' using Singularity, you will first need to '''create a (container) image'''. A Singularity image is either a file or a directory '''containing an installation of Linux'''. One can create a Singularity image by any of the following:
* downloading a container from '''Singularity Hub'''<ref>Singularity Hub Web Site: https://singularityhub.com/</ref>
* downloading a container from '''Docker Hub'''
* creating a tarball of an existing Linux installation, or,
* from a '''Singularity recipe file'''.


===Creating an Image Using Singularity Hub=== <!--T:14-->


<!--T:15-->
[https://singularity-hub.com/ Singularity Hub] provides a search interface for pre-built images. Suppose you find one you want to use, for instance [https://singularity-hub.org/collections/543 Ubuntu], then you would download the image by running:
<source lang="console">$ singularity pull shub://singularityhub/ubuntu</source>


===Creating an Image Using Docker Hub=== <!--T:16-->


<!--T:17-->
[https://hub.docker.com/ Docker Hub] provides an interface to search for images.


<!--T:18-->
Suppose the Docker Hub URL for a container you want is <tt>docker://ubuntu</tt>, then you would download the container by running:
<source lang="console">$ singularity pull docker://ubuntu</source>


===Creating a Tarball of Your Own Linux System=== <!--T:19-->


<!--T:20-->
If you already have a configured Intel-CPU-based 64-bit version of Linux installed, then you can create a tarball of your system using the <code>tar</code> command, similar to this:
<source lang="console">$ sudo tar -cvpf my-system.tar --exclude=./dev --exclude=./proc --exclude=./sys -C / .</source>
although you will probably want to exclude additional directories.
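To see how <code>-C</code> and <code>--exclude</code> interact before running this against a real root filesystem, the following sketch archives a toy directory tree instead (all directory and file names here are made up for the example):

```shell
# Sketch only: a toy tree stands in for a real root filesystem, so the
# effect of -C and --exclude can be observed without root access.
mkdir -p /tmp/sysdemo/etc /tmp/sysdemo/proc
echo "hello" > /tmp/sysdemo/etc/motd
touch /tmp/sysdemo/proc/ignore-me

# -f takes the archive name immediately after it; -C switches into the
# tree before archiving; --exclude drops the pseudo-filesystem directory.
tar -cpf /tmp/my-system.tar --exclude=./proc -C /tmp/sysdemo .

# Listing the archive shows ./etc/motd is present and ./proc is absent.
tar -tf /tmp/my-system.tar
```

Note that the archive name must directly follow <code>-f</code>; placing <code>-C /</code> between them would make <code>tar</code> treat <code>-C</code> as the archive file.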


<!--T:21-->
The created tarball will need to be converted into a Singularity image which is discussed [[#Creating an Image From a Tarball|later on this page]].


===Creating an Image From a Tarball=== <!--T:22-->


<!--T:23-->
If you have a tarball or a gzip-compressed tarball, a Singularity image can be made from it by using the Singularity '''build''' command:
<source lang="console">$ sudo singularity build my-image.simg my-system.tar</source>
or, without <code>sudo</code>:
<source lang="console">$ singularity build my-image.simg my-system.tar</source>
if you are using a Compute Canada system.


<!--T:24-->
The structure of the build command used to build an image from a tarball can be any one of the following:
<pre>singularity build IMAGE_FILE_NAME TARBALL_FILE_NAME
singularity build [OPTIONS] IMAGE_FILE_NAME TARBALL_FILE_NAME</pre>


<!--T:25-->
The full syntax of the build command can be obtained by running:
<source lang="console">$ singularity build --help</source>


<!--T:26-->
Singularity single-file image filenames typically have a <code>.simg</code> extension.


===Creating an Image From a Singularity Recipe=== <!--T:27-->


<!--T:28-->
'''NOTE:''' Singularity recipes require <code>root</code> permissions, thus, recipes can only be run on a computer where you can be the <code>root</code> user, e.g., your own Linux computer.
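For reference, a minimal recipe file might look like the following sketch (the bootstrap source and the package installed here are only examples, and <code>my-recipe</code> is a hypothetical filename):

<pre>Bootstrap: docker
From: ubuntu:16.04

%post
    apt-get -y update
    apt-get -y install gcc</pre>

On your own Linux computer, such a recipe would be built with <code>sudo singularity build my-image.simg my-recipe</code>.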


====Recipe: Creating a Singularity Image of the Local Filesystem==== <!--T:29-->


<!--T:30-->
If a recipe file containing the following:
<pre>Bootstrap: self</pre>
is built, the resulting image is created from the local filesystem. (Clearly such has to be run on your own Linux system and Singularity must already be installed on that system.)


<!--T:31-->
If you need to periodically re-generate your Singularity image from a script, then you might write a Singularity recipe such as this:
<pre>Bootstrap: localimage
From: ubuntu-16.04-x86_64.simg

%help
This is a modified Ubuntu 16.04 x86_64 Singularity container image.

%post
    apt-get -y update</pre>
This recipe would be built by running:
<source lang="console">$ sudo singularity build new-ubuntu-image.simg update-existing-container-recipe</source>


====Recipe: Creating a Singularity Image From a Docker URL==== <!--T:34-->


<!--T:35-->
The following Singularity recipe will download the latest [https://fenicsproject.org/ FEniCS] docker image and then run a series of installation commands to install a number of Python packages:
</translate>
}}
<translate>
<!--T:36-->
This recipe would be built by running:


<!--T:37-->
<pre>sudo singularity build an-image-name.simg FEniCS-From-Docker-With-Python-Tools-Singularity-Recipe</pre>


<!--T:38-->
and illustrates how one can easily make new images at later points in time.


===Is sudo Needed or Not Needed?=== <!--T:39-->


<!--T:40-->
Notice the difference between the two commands is whether or not <code>'''sudo'''</code> appears. The <code>sudo</code> command runs the command after it as the '''root''' user (i.e., superuser) of that system. On Compute Canada systems, no users have such access, so the '''sudo''' command cannot be used there. Presumably you do have '''root''' access on your own computer, so you can use '''sudo''' on it.


<!--T:41-->
It is entirely possible that you will not need to use the '''sudo''' command with your image. If <code>sudo</code> is not used, then the following will happen when you '''build''' the image:
* Singularity will output a warning that such may result in an image that does not work. This message is only a warning, though; the image will still be created.
If <code>sudo</code> is used, then all filesystem permissions will be kept as they are in the tarball.


<!--T:42-->
Typically one will not need to be concerned with retaining all filesystem permissions unless:
* one needs to regularly update/reconfigure the contents of the image, and,
If such occurs, then you will need to create your image using your own computer. If this is an issue, then request assistance with creating the Singularity image you want by opening a Compute Canada ticket (send an email to [mailto:support@computecanada.ca support@computecanada.ca]).


==Using Singularity== <!--T:43-->


<!--T:44-->
'''NOTE:''' The discussion below does not describe how to use Slurm to run interactive or batch jobs; it only describes how to use Singularity. For interactive and batch job information see the [[Running jobs]] page.


<!--T:45-->
Unlike when you created your Singularity image, you cannot (and do not need to) use <code>sudo</code> to run programs in your image on Compute Canada systems. There are a number of ways to run programs in your image:
# Running '''commands''' interactively in one Singularity session.
# Running a '''single command''' which executes and then stops running.
# Running a container instance in order to run '''daemons''' which may have '''backgrounded processes'''.


===Running Commands Interactively=== <!--T:46-->


<!--T:47-->
Singularity can be used interactively by using its <code>shell</code> command, e.g.,


<!--T:48-->
<source lang="console">$ singularity shell --help</source>
<source lang="console">$ singularity shell --help</source>


<!--T:49-->
will give help on shell command usage. The following:


<!--T:50-->
<source lang="console">$ singularity shell -B /home -B /project -B /scratch -B /localscratch myimage.simg</source>
<source lang="console">$ singularity shell -B /home -B /project -B /scratch -B /localscratch myimage.simg</source>


<!--T:51-->
will do the following within the container image <code>myimage.simg</code>:
* bind mount <code>/home</code> so that all home directories can be accessed (subject to your account's permissions)
* bind mount <code>/project</code> so that project directories can be accessed (subject to your account's permissions)
* bind mount <code>/scratch</code> so that the scratch directory can be accessed (subject to your account's permissions)
* bind mount <code>/localscratch</code> so that the localscratch directory can be accessed (subject to your account's permissions)
* run a shell (e.g., <code>/bin/bash</code>)


<!--T:52-->
If this command is successful, you can interactively run commands from within your container while still being able to access your files in home, project, scratch, and localscratch.
* NOTE: When done, type <code>exit</code> to exit the shell.


<!--T:53-->
In some cases, you will not want your Compute Canada shell environment variables to pollute the container's environment. You can run a "clean environment" shell by adding the <code>-e</code> option, e.g.,


<!--T:54-->
<source lang="console">$ singularity shell -e -B /home -B /project -B /scratch -B /localscratch myimage.simg</source>
<source lang="console">$ singularity shell -e -B /home -B /project -B /scratch -B /localscratch myimage.simg</source>


<!--T:55-->
but be aware that you may then need to define some shell environment variables such as <code>$USER</code>.


<!--T:56-->
Finally, if you are using Singularity interactively on your own machine, in order for your changes to the image to be written to the disk, you must:


<!--T:57-->
* be using a Singularity "sandbox" image (i.e., be using a directory, not the read-only .simg file),
* be using the <code>-w</code> option, and,
* be using <code>sudo</code>.


<!--T:58-->
e.g., first create your sandbox image:


<!--T:59-->
<source lang="console">$ sudo singularity build -s myimage-dir myimage.simg</source>
<source lang="console">$ sudo singularity build -s myimage-dir myimage.simg</source>


<!--T:60-->
and then engage with Singularity interactively:


<!--T:61-->
<source lang="console">$ sudo singularity shell -w myimage-dir</source>
<source lang="console">$ sudo singularity shell -w myimage-dir</source>


<!--T:62-->
When done, you can build a new/updated simg file with the command:


<!--T:63-->
<source lang="console">$ sudo singularity build myimage-new.simg myimage-dir/</source>
<source lang="console">$ sudo singularity build myimage-new.simg myimage-dir/</source>


<!--T:64-->
and upload <code>myimage-new.simg</code> to a cluster in order to use it.
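For example, the image could be copied to a cluster such as Cedar with <code>scp</code> (the username here is a placeholder):

<source lang="console">$ scp myimage-new.simg username@cedar.computecanada.ca:</source>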


===Running a Single Command=== <!--T:65-->


<!--T:66-->
When submitting jobs that invoke commands in Singularity containers, one will use either Singularity's <code>exec</code> or <code>run</code> commands.
* The <code>exec</code> command does not require any configuration.
* The <code>run</code> command requires configuring an application within a Singularity recipe file and is not discussed here.


<!--T:67-->
The Singularity <code>exec</code> command's options are almost identical to the <code>shell</code> command's options, e.g.,


<!--T:68-->
<source lang="console">$ singularity exec --help</source>
<source lang="console">$ singularity exec --help</source>


<!--T:69-->
When not asking for help, the <code>exec</code> command runs the command you specify within the container and then leaves the container, e.g.,


<!--T:70-->
<source lang="console">$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg ls /</source>
<source lang="console">$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg ls /</source>


<!--T:71-->
which will output the contents of the root directory within the container. The version of <code>ls</code> is the one installed within the container! For example, should GCC's <code>gcc</code> be installed in the myimage.simg container, then this command:


<!--T:72-->
<source lang="console">$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg gcc -v</source>
<source lang="console">$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg gcc -v</source>


<!--T:73-->
will output the version information of what is installed within the container whereas running at the normal Compute Canada shell prompt:


<!--T:74-->
<source lang="console">$ gcc -v</source>
<source lang="console">$ gcc -v</source>


<!--T:75-->
will output the version of GCC currently loaded on Compute Canada systems.


<!--T:76-->
If you need to run a single command from within your Singularity container in a job, then the <code>exec</code> command will suffice. Remember to [[#Bind Mounts|bind mount]] the directories you will need access to in order for your job to run successfully.
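As an illustration, a minimal batch job script using <code>exec</code> might look like the following sketch (the account name, resource requests, and <code>my-program</code> are placeholders, and <code>myimage.simg</code> is assumed to be in the submission directory):

<pre>#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --time=0-01:00
#SBATCH --mem=4G

module load singularity/2.5
singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg my-program</pre>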


===Running Container Instances=== <!--T:77-->


<!--T:78-->
Should you need to run daemons and backgrounded processes within your container, then do '''not''' use the Singularity <code>exec</code> command! Instead use Singularity's '''instance.start''' and '''instance.stop''' commands to create and destroy sessions (i.e., container instances). By using sessions, Singularity will ensure that your programs are terminated when your job ends, unexpectedly dies, is killed, etc.


<!--T:79-->
To start a Singularity session instance, decide on a name for this session, e.g., <code>quadrat5run</code>, and run the '''instance.start''' command specifying the image name, e.g., <code>myimage.simg</code>, and your session name:


<!--T:80-->
<source lang="console">$ singularity instance.start myimage.simg quadrat5run</source>
<source lang="console">$ singularity instance.start myimage.simg quadrat5run</source>


<!--T:81-->
A session (and all associated programs that are running) can be stopped (i.e., destroyed/killed) by running the '''instance.stop''' command, e.g.,


<!--T:82-->
<source lang="console">$ singularity instance.stop myimage.simg quadrat5run</source>
<source lang="console">$ singularity instance.stop myimage.simg quadrat5run</source>


<!--T:83-->
At any time you can obtain a list of all sessions you currently have running by running:


<!--T:84-->
<source lang="console">$ singularity instance.list</source>
<source lang="console">$ singularity instance.list</source>


<!--T:85-->
which will list the daemon name, its PID, and the path to the container's image.


<!--T:86-->
With a session started, programs can be run using Singularity's <code>shell</code>, <code>exec</code>, or <code>run</code> commands by specifying <code>instance://SESSION_NAME</code> in place of the image name, e.g.,


<!--T:87-->
<source lang="console">$ singularity instance.start mysessionname
<source lang="console">$ singularity instance.start mysessionname
$ singularity exec myimage.simg instance://mysessionname ps -eaf
$ singularity exec myimage.simg instance://mysessionname ps -eaf
Line 330: Line 400:
</source>
</source>


===Bind Mounts=== <!--T:88-->


<!--T:89-->
When running a program within a Singularity container, by default, it can only see the files within the container image and the current directory. Realistically your Singularity jobs will need to mount the various filesystems where your files are. This is done using the <code>-B</code> option to the Singularity <code>shell</code>, <code>exec</code>, or <code>run</code> commands, e.g.,


<!--T:90-->
<source lang="console">$ singularity shell -B /home -B /project -B /scratch -B /localscratch myimage.simg</source>
<source lang="console">$ singularity shell -B /home -B /project -B /scratch -B /localscratch myimage.simg</source>
<source lang="console">$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg ls /</source>
<source lang="console">$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg ls /</source>
<source lang="console">$ singularity run -B /home -B /project -B /scratch -B /localscratch myimage.simg some-program</source>
<source lang="console">$ singularity run -B /home -B /project -B /scratch -B /localscratch myimage.simg some-program</source>


<!--T:91-->
The previous three commands show how to bind mount the various filesystems on Compute Canada's systems, i.e., within the container image <code>myimage.simg</code> these commands bind mount:
* <code>/home</code> so that all home directories can be accessed (subject to your account's permissions)
* <code>/project</code> so that project directories can be accessed (subject to your account's permissions)
* <code>/scratch</code> so that the scratch directory can be accessed (subject to your account's permissions)
* <code>/localscratch</code> so that the localscratch directory can be accessed (subject to your account's permissions)


<!--T:92-->
In most cases, it is not recommended to directly mount each directory you need as this can cause access issues. Instead mount the top directory for the file system as shown above.


==HPC Issues With Singularity== <!--T:93-->


===Running MPI Programs From Within A Container=== <!--T:94-->


<!--T:95-->
If you are running MPI programs:


<!--T:96-->
* run the MPI programs completely '''within''' your Singularity container, and,
* ensure your jobs don't run across nodes (use whole-node allocation).


<!--T:97-->
Running jobs across nodes with Singularity+MPI has not been successfully done yet on Compute Canada systems.
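For illustration, a whole-node MPI job might be sketched as follows (the account name, node size, and <code>my-mpi-program</code> are placeholders; <code>mpirun</code> here is assumed to be the MPI launcher installed '''inside''' the container):

<pre>#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=0-01:00

module load singularity/2.5
singularity exec -B /home -B /project -B /scratch myimage.simg mpirun -np 32 my-mpi-program</pre>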


=See Also= <!--T:98-->
* SHARCNET General Interest Webinar, "Singularity", presented by Paul Preney on Feb. 14, 2018. See this [https://www.youtube.com/watch?v=C4va7d7GxjM YouTube Video] as well as the [https://www.sharcnet.ca/help/index.php/Online_Seminars SHARCNET Online Seminars] page for slides.


=References= <!--T:99-->
<references/>
</translate>

Revision as of 18:37, 15 June 2018

Overview

Singularity[1] is open source software created by Berkeley Lab:

  • as a secure way to use Linux containers on Linux multi-user clusters,
  • as a way to enable users to have full control of their environment, and,
  • as a way to package scientific software and deploy such to different clusters having the same architecture.

i.e., it provides operating-system-level virtualization commonly called containers.

A container is different from a virtual machine in that a container:

  • likely has less overhead, and,
  • can only run programs capable of running in the same operating system (i.e., Linux when using Singularity) for the same hardware architecture.

(Virtual machines can run different operating systems and sometimes support running software designed for foreign CPU architectures.)

Containers use Linux control groups (cgroups), kernel namespaces, and an overlay filesystem where:

  • cgroups limit, control, and isolate resource usage (e.g., RAM, disk I/O, CPU access)
  • kernel namespaces virtualize and isolate operating system resources of a group of processes, e.g., process and user IDs, filesystems, network access; and,
  • overlay filesystems can be used to enable the appearance of writing to otherwise read-only filesystems.

Singularity is similar to other container solutions such as Docker,[2] except that Singularity was specifically designed to enable containers to be used securely, without requiring any special permissions, especially on multi-user compute clusters.[3]

Singularity Availability

Singularity is available on Compute Canada clusters (e.g., Cedar and Graham) and on some legacy cluster systems run by the various members and consortia involved in Compute Canada across Canada.

Should you wish to use Singularity on your own computer systems, you will need to download and install Singularity per its documentation.[4] You should be using a relatively recent Linux distribution (ideally with kernel v3.10.0 or newer).
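As a quick sanity check before installing, you can compare your running kernel version against the suggested minimum. The sketch below is illustrative only (it is not an official Singularity check); the `kernel_at_least` helper is a hypothetical name, and the comparison relies on GNU `sort -V`:

```shell
# Informal sketch: check whether the running kernel meets the suggested
# v3.10.0 minimum. Returns success if version $1 >= version $2.
kernel_at_least() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Strip any distribution suffix, e.g. "4.15.0-96-generic" -> "4.15.0".
running=$(uname -r | cut -d- -f1)
if kernel_at_least "$running" "3.10.0"; then
    echo "Kernel $running should be recent enough for Singularity."
else
    echo "Kernel $running may be too old for Singularity."
fi
```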

Using Singularity On Compute Canada Systems

Module Loading

To use Singularity, first load the specific module you would like to use, e.g.,

$ module load singularity/2.5

Should you need to see all available versions of the Singularity module, then run:

$ module spider singularity

Creating Images

Before using Singularity, you will first need to create a (container) image. A Singularity image is either a file or a directory containing an installation of Linux. One can create a Singularity image by any of the following:

  • downloading a container from Singularity Hub[5]
  • downloading a container from Docker Hub[6]
  • from a container you already have,
  • from a tarball or a directory containing an installation of Linux, or,
  • from a Singularity recipe file.

Creating an Image Using Singularity Hub

Singularity Hub provides a search interface for pre-built images. Suppose you find one you want to use, for instance Ubuntu; you would then download the image by running:

$ singularity pull shub://singularityhub/ubuntu

Creating an Image Using Docker Hub

Docker Hub provides an interface to search for images.

Suppose the Docker Hub URL for a container you want is docker://ubuntu, then you would download the container by running:

$ singularity pull docker://ubuntu

Creating a Tarball of Your Own Linux System

If you already have a configured Intel-CPU-based 64-bit version of Linux installed, then you can create a tarball of your system using a tar command similar to this:

$ sudo tar -cvpf my-system.tar --exclude=/dev --exclude=/proc --exclude=/sys -C / .

although you will probably want to exclude additional directories.

The created tarball will need to be converted into a Singularity image which is discussed later on this page.
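Before converting the tarball, it can be worth confirming that it contains what you expect. The following toy sketch uses a small demo directory rather than a full system tarball (the `demo` names are hypothetical); it shows how `--exclude` behaves and how to list an archive's contents with `tar -tf`:

```shell
# Build a tiny demo tree, archive it while excluding one subdirectory,
# then list the archive's contents to verify what was captured.
mkdir -p demo/etc demo/proc
echo hello > demo/etc/hostname
tar -cpf demo.tar --exclude=demo/proc -C . demo

# List the archive: demo/etc/hostname is present, demo/proc is not.
tar -tf demo.tar
```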

Creating an Image From a Tarball

If you have a tarball or a gzip-compressed tarball, a Singularity image can be made from it by using the Singularity build command:

$ sudo singularity build my-image.simg my-system.tar

if you are using your own system, or,

$ singularity build my-image.simg my-system.tar

if you are using a Compute Canada system.

The structure of the build command used to build an image from a tarball can be any one of the following:

singularity build IMAGE_FILE_NAME TARBALL_FILE_NAME
singularity build [OPTIONS] IMAGE_FILE_NAME TARBALL_FILE_NAME

The full syntax of the build command can be obtained by running:

$ singularity build --help

Singularity single-file image filenames typically have a .simg extension.

Creating an Image From a Singularity Recipe

NOTE: Building from a Singularity recipe requires root permissions; thus, recipes can only be used on a computer where you can become the root user, e.g., your own Linux computer.

Recipe: Creating a Singularity Image of the Local Filesystem

If the following:

Bootstrap: self
Exclude: /boot /dev /home /lost+found /media /mnt /opt /proc /run /sys

is placed in a file, e.g., copy-drive-into-container-recipe, then it can be used to copy one's Linux system directly into a container (except for the excluded directories listed) by running:

$ sudo singularity build self.simg copy-drive-into-container-recipe

(Clearly such has to be run on your own Linux system and Singularity must already be installed on that system.)

If you need to periodically regenerate your Singularity image from a script, then you might write a Singularity recipe such as this:

Bootstrap: localimage
From: ubuntu-16.04-x86_64.simg

%help
This is a modified Ubuntu 16.04 x86_64 Singularity container image.

%post
  apt-get -y update
  apt-get -y upgrade
  apt-get -y install build-essential git
  apt-get -y install python-dev python-pip python-virtualenv python-numpy python-matplotlib
  apt-get -y install vim
  apt-get clean

The above recipe allows one to update and regenerate a Singularity image from an existing Singularity image. (The %post section runs as root during the build, so sudo is not needed there.) In this example, the recipe ensures all security updates are applied and that certain software packages are installed. If this recipe were in a file called update-existing-container-recipe and the image ubuntu-16.04-x86_64.simg already existed in the current directory, then the image could be updated by running:

$ sudo singularity build new-ubuntu-image.simg update-existing-container-recipe

Recipe: Creating a Singularity Image From a Docker URL

The following Singularity recipe will download the latest FEniCS docker image and then run a series of installation commands to install a number of Python packages:

File : FEniCS-From-Docker-With-Python-Tools-Singularity-Recipe

Bootstrap: docker
From: quay.io/fenicsproject/stable:latest

%post
  sudo apt-get -qq update
  sudo apt-get -y upgrade
  sudo apt-get -y install python-bitstring python3-bitstring
  sudo apt-get -y install python-certifi python3-certifi 
  sudo apt-get -y install python-cryptography python3-cryptography 
  sudo apt-get -y install python-cycler python3-cycler 
  sudo apt-get -y install cython cython3 
  sudo apt-get -y install python-dateutil python3-dateutil 
  sudo apt-get -y install python-deap python3-deap
  sudo apt-get -y install python-decorator python3-decorator
  sudo apt-get -y install python-ecdsa python3-ecdsa
  sudo apt-get -y install python-enum34
  sudo apt-get -y install python-funcsigs python3-funcsigs
  sudo apt-get -y install ipython ipython3 python-ipython-genutils python3-ipython-genutils
  sudo apt-get -y install python-jinja2 python3-jinja2
  sudo apt-get -y install python-jsonschema python3-jsonschema
  sudo apt-get -y install python-lockfile python3-lockfile
  sudo apt-get -y install python-markupsafe python3-markupsafe
  sudo apt-get -y install python-matplotlib python3-matplotlib
  sudo apt-get -y install python-mistune python3-mistune
  sudo apt-get -y install python-mock python3-mock
  sudo apt-get -y install python-mpmath python3-mpmath
  sudo apt-get -y install python-netaddr python3-netaddr
  sudo apt-get -y install python-netifaces python3-netifaces
  sudo apt-get -y install python-nose python3-nose
  sudo apt-get -y install ipython-notebook ipython3-notebook
  sudo apt-get -y install python-numpy python3-numpy
  sudo apt-get -y install python-pandas python3-pandas
  sudo apt-get -y install python-paramiko python3-paramiko
  sudo apt-get -y install python-path python3-path
  sudo apt-get -y install python-pathlib
  sudo apt-get -y install python-pbr python3-pbr
  sudo apt-get -y install python-pexpect python3-pexpect
  sudo apt-get -y install python-pickleshare python3-pickleshare
  sudo apt-get -y install python-prompt-toolkit python3-prompt-toolkit
  sudo apt-get -y install python-ptyprocess python3-ptyprocess
  sudo apt-get -y install python-pycryptopp
  sudo apt-get -y install python-pygments python3-pygments
  sudo apt-get -y install python-pyparsing python3-pyparsing
  sudo apt-get -y install python-zmq python3-zmq
  sudo apt-get -y install python-requests python3-requests
  sudo apt-get -y install python-scipy python3-scipy
  sudo apt-get -y install python-setuptools python3-setuptools
  sudo apt-get -y install python-simplegeneric python3-simplegeneric
  sudo apt-get -y install python-singledispatch python3-singledispatch
  sudo apt-get -y install python-six python3-six
  sudo apt-get -y install python-sympy python3-sympy
  sudo apt-get -y install python-terminado python3-terminado
  sudo apt-get -y install python-tornado python3-tornado
  sudo apt-get -y install python-traitlets python3-traitlets
  sudo apt-get clean
  sudo rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*


This recipe would be executed by running:

$ sudo singularity build an-image-name.simg FEniCS-From-Docker-With-Python-Tools-Singularity-Recipe

and illustrates how one can easily rebuild images at later points in time.

Is sudo Needed or Not Needed?

Notice that the difference between the two build commands shown earlier is whether or not sudo appears. The sudo command runs the command after it as the root user (i.e., superuser) of that system. On Compute Canada systems, no users have such access, so the sudo command cannot be used there. Presumably you do have root access on your own computer, so you can use sudo on it.

It is entirely possible that you will not need to use the sudo command with your image. If sudo is not used, then the following will happen when you build the image:

  • Singularity will output a warning that this may result in an image that does not work. This message is only a warning, though; the image will still be created.
  • All filesystem permissions will be collapsed to be the permissions of the Linux user and group that is running singularity build. (This is normally the user and group you are logged in as.)

If sudo is used, then all filesystem permissions will be kept as they are in the tarball.

Typically one will not need to be concerned with retaining all filesystem permissions unless:

  • one needs to regularly update/reconfigure the contents of the image, and,
  • tools used to update/reconfigure the contents of the image require those permissions to be retained.

For example, many Linux distributions make it easy to update or install new software using commands such as:

  • apt-get update && apt-get upgrade
  • apt-get install some-software-package
  • yum install some-software-package
  • dnf install some-software-package
  • etc.

It is possible that these and other commands may not run successfully unless filesystem permissions are retained. If this is of concern, then:

  1. Install Singularity on your own computer.
  2. Always build the Singularity image on your own computer using sudo.

If this is not a concern, then you may be able to build the Singularity image on a Compute Canada system without sudo; however, be aware that this might fail for any of the following reasons:

  • When using Lustre filesystems, e.g., /project, you may run out of quota. If this occurs, it is likely because there are too many small files causing all of your quota to be used. (Lustre is excellent for large files but stores small files very inefficiently.)
  • Sometimes image creation will fail due to various user restrictions placed on the node you are using. The login nodes, in particular, have a number of restrictions which may prevent one from successfully building an image.

If this occurs, then you will need to create your image using your own computer. If that is an issue, then request assistance with creating the Singularity image you want by opening a Compute Canada ticket (send an email to [1]).

Using Singularity

NOTE: The discussion below does not describe how to use Slurm to run interactive or batch jobs; it only describes how to use Singularity. For interactive and batch job information, see the Running jobs page.

Unlike when you created your Singularity image, you never need to (and cannot) use sudo to run programs in your image on Compute Canada systems. There are a number of ways to run programs in your image:

  1. Running commands interactively in one Singularity session.
  2. Running a single command which executes and then stops.
  3. Running a container instance in order to run daemons which may have backgrounded processes.

Running Commands Interactively

Singularity can be used interactively by using its shell command, e.g.,

$ singularity shell --help

will give help on shell command usage. The following:

$ singularity shell -B /home -B /project -B /scratch -B /localscratch myimage.simg

will do the following within the container image myimage.simg:

  • bind mount /home so that all home directories can be accessed (subject to your account's permissions)
  • bind mount /project so that project directories can be accessed (subject to your account's permissions)
  • bind mount /scratch so that the scratch directory can be accessed (subject to your account's permissions)
  • bind mount /localscratch so that the localscratch directory can be accessed (subject to your account's permissions)
  • run a shell (e.g., /bin/bash)

If this command is successful, you can interactively run commands from within your container while still being able to access your files in home, project, scratch, and localscratch. :-)

  • NOTE: When done, type "exit" to exit the shell.

In some cases, you will not want the pollution of shell environment variables from your Compute Canada shell. You can run a "clean environment" shell by adding a -e option, e.g.,

$ singularity shell -e -B /home -B /project -B /scratch -B /localscratch myimage.simg

but note that you may need to define some shell environment variables such as $USER.
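For instance, inside a clean-environment shell $USER may be unset. One illustrative sketch (not an official requirement, and the exact variables you need depend on your software) is to recreate it from the id command:

```shell
# Inside a shell started with `singularity shell -e`, login variables
# such as $USER may be missing; recreate $USER only if it is unset.
export USER=${USER:-$(id -un)}
echo "$USER"
```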

Finally, if you are using Singularity interactively on your own machine, in order for your changes to the image to be written to the disk, you must:

  • be using a Singularity "sandbox" image (i.e., be using a directory not the read-only .simg file)
  • be using the -w option, and,
  • be using sudo

e.g., first create your sandbox image:

$ sudo singularity build -s myimage-dir myimage.simg

and then engage with Singularity interactively:

$ sudo singularity shell -w myimage-dir

When done, you can build a new/updated simg file, with the command:

$ sudo singularity build myimage-new.simg myimage-dir/

and upload myimage-new.simg to a cluster in order to use it.

Running a Single Command

When submitting jobs that invoke commands in Singularity containers, one will use either Singularity's exec or run command.

  • The exec command does not require any configuration.
  • The run command requires configuring an application within a Singularity recipe file and is not discussed here.

The Singularity exec command's options are almost identical to the shell command's options, e.g.,

$ singularity exec --help

When not asking for help, the exec command runs the command you specify within the container and then leaves the container, e.g.,

$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg ls /

which will output the contents of the root directory within the container. The version of ls is the one installed within the container! For example, should GCC's gcc be installed in the myimage.simg container, then this command:

$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg gcc -v

will output the version information of what is installed within the container whereas running at the normal Compute Canada shell prompt:

$ gcc -v

will output the version of GCC currently loaded on Compute Canada systems.

If you need to run a single command from within your Singularity container in a job, then the exec command will suffice. Remember to bind mount the directories you will need access to in order for your job to run successfully.
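For example, a batch script for such a job might look like the following minimal sketch; the account name, resource values, image name, and the my-program command are all placeholders you would replace for your own work:

```shell
#!/bin/bash
#SBATCH --account=def-someuser   # placeholder: your allocation account
#SBATCH --time=0-01:00           # placeholder: 1 hour
#SBATCH --mem=4000M              # placeholder memory request
#SBATCH --cpus-per-task=1

module load singularity/2.5

# Bind mount the cluster filesystems, then run one command in the image.
# "my-program" is a hypothetical placeholder for your own executable.
singularity exec -B /home -B /project -B /scratch -B /localscratch \
    myimage.simg my-program
```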

Running Container Instances

Should you need to run daemons and backgrounded processes within your container, then do not use the Singularity exec command! Instead you want to use Singularity's instance.start and instance.stop commands to create and destroy sessions (i.e., container instances). By using sessions, Singularity will ensure that your programs are terminated when your job ends, unexpectedly dies, is killed, etc.

To start a Singularity session instance, decide on a name for this session, e.g., quadrat5run, and run the instance.start command specifying the image name, e.g., myimage.simg, and your session name:

$ singularity instance.start myimage.simg quadrat5run

A session (and all associated programs that are running) can be stopped (i.e., destroyed/killed) by running the instance.stop command, e.g.,

$ singularity instance.stop quadrat5run

At any time you can obtain a list of all sessions you currently have running by running:

$ singularity instance.list

which will list the daemon name, its PID, and the path to the container's image.

With a session started, programs can be run using Singularity's shell, exec, or run commands by specifying instance:// followed by the session name in place of an image name, e.g.,

$ singularity instance.start myimage.simg mysessionname
$ singularity exec instance://mysessionname ps -eaf
$ singularity shell instance://mysessionname
nohup find / -type d >dump.txt
exit
$ singularity exec instance://mysessionname ps -eaf
$ singularity instance.stop mysessionname

Bind Mounts

When running a program within a Singularity container, by default it can only see the files within the container image and the current directory. Realistically, your Singularity jobs will need to mount the various filesystems where your files are. This is done using the -B option to the Singularity shell, exec, or run commands, e.g.,

$ singularity shell -B /home -B /project -B /scratch -B /localscratch myimage.simg
$ singularity exec -B /home -B /project -B /scratch -B /localscratch myimage.simg ls /
$ singularity run -B /home -B /project -B /scratch -B /localscratch myimage.simg some-program

The previous three commands show how to bind mount the various filesystems on Compute Canada's systems, i.e., within the container image myimage.simg these commands bind mount:

  • /home so that all home directories can be accessed (subject to your account's permissions)
  • /project so that project directories can be accessed (subject to your account's permissions)
  • /scratch so that the scratch directory can be accessed (subject to your account's permissions)
  • /localscratch so that the localscratch directory can be accessed (subject to your account's permissions)

In most cases, it is not recommended to directly mount each directory you need, as this can cause access issues. Instead, mount the top directory of the file system as shown above.

HPC Issues With Singularity

Running MPI Programs From Within A Container

If you are running MPI programs:

  • run the MPI programs completely within your Singularity container, and,
  • ensure your jobs don't run across nodes (use whole-node allocation).

Running jobs across nodes with Singularity+MPI has not yet been run successfully on Compute Canada systems.
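A whole-node MPI job following the two points above might look like the following sketch; the account, core count, time limit, image name, and program path are placeholders, and the mpirun invoked must be the MPI runtime installed inside the container:

```shell
#!/bin/bash
#SBATCH --account=def-someuser    # placeholder: your allocation account
#SBATCH --nodes=1                 # a single whole node; do not span nodes
#SBATCH --ntasks-per-node=32      # placeholder: the node's full core count
#SBATCH --mem=0                   # request all of the node's memory
#SBATCH --time=0-03:00            # placeholder time limit

module load singularity/2.5

# Launch the MPI run entirely inside the container, using the mpirun
# installed in the image; "/path/to/mpi-program" is a placeholder.
singularity exec -B /home -B /project -B /scratch myimage.simg \
    mpirun -np "$SLURM_NTASKS" /path/to/mpi-program
```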

See Also

  • SHARCNET General Interest Webinar, "Singularity", presented by Paul Preney on Feb. 14, 2018. See this YouTube Video (https://www.youtube.com/watch?v=C4va7d7GxjM) as well as the SHARCNET Online Seminars page (https://www.sharcnet.ca/help/index.php/Online_Seminars) for slides.
References

  1. Singularity Software Web Site: http://singularity.lbl.gov/
  2. Docker Software Web Site: https://www.docker.com/
  3. Singularity Security Documentation: http://singularity.lbl.gov/docs-security
  4. Singularity Documentation: http://singularity.lbl.gov/all-releases
  5. Singularity Hub Web Site: https://singularityhub.com/
  6. Docker Hub Web Site: https://hub.docker.com/