<!--T:6-->
Singularity is available on the Alliance clusters.
<!--T:7-->
Should you wish to use Singularity on your own computer, you will need to download and install it per its documentation.<ref>Singularity Documentation: https://www.sylabs.io/docs/</ref> You should be using a relatively recent version of some Linux distribution (e.g., ideally your kernel is v3.10.0 or newer).
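You can check your kernel version with:
<source>$ uname -r</source>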
=Singularity on a cluster= <!--T:8-->

==Loading a module== <!--T:9-->
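A minimal sketch of loading the module (the exact module name and available versions may differ; <code>module spider singularity</code> will list what is installed):
<source>$ module load singularity</source>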
<source>$ SINGULARITY_TMPDIR="disk/location" singularity build IMAGE_NAME.sif docker://DOCKER-IMAGE-NAME</source>
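For example, a hypothetical build that places the temporary files on scratch space and pulls Ubuntu from Docker Hub (the path and image names here are illustrative):
<source>$ SINGULARITY_TMPDIR=/scratch/$USER/singularity-tmp singularity build ubuntu.sif docker://ubuntu</source>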
===Creating images on Alliance clusters=== <!--T:114-->

<!--T:115-->
If you decide to create an image on a cluster, be aware that you will '''never''' have <code>sudo</code> access, so the caveats of the previous section apply. Images can be created on any Alliance cluster or on a visualization computer, e.g., <code>gra-vdi.computecanada.ca</code>. Our image creation advice differs depending on which machine you use:
* <code>beluga.computecanada.ca</code>: Connect using [[SSH]]. Use a login node to create the image.
* <code>cedar.computecanada.ca</code>: Connect using [[SSH]]. Create the image in an interactive job (see the sketch after this list). Do '''not''' use a login node.
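For example, a sketch of requesting an interactive job before building an image (the time, memory, and account values here are illustrative):
<source>$ salloc --time=3:00:00 --mem-per-cpu=4000M --account=def-someuser</source>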
<!--T:40-->
The Singularity documentation on [https://sylabs.io/guides/3.5/user-guide/build_a_container.html building a container] uses the <code>sudo</code> command. This is because, in general, many uses of the <code>build</code> command require root, i.e., superuser, access on the system where it is run. On a cluster, regular users do not have root access, so the <code>sudo</code> command cannot be used. If you are building an image from a pre-built image on Singularity Hub or Docker Hub, you typically will not need <code>sudo</code> access. If you do need root access to build an image, then you will either need to [[Technical support|ask support]] for help, or install Linux and Singularity on your own computer to have root account access.
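For example, building directly from a Docker Hub image does not require <code>sudo</code> (the output file name and tag here are illustrative):
<source>$ singularity build myubuntu.sif docker://ubuntu:20.04</source>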
<!--T:41-->
<!--T:45-->
While you may have used <code>sudo</code> to create your Singularity image, you cannot use <code>sudo</code> to run programs in your image on a cluster. There are a number of ways to run programs in your image (the first two are sketched after this list):
# Running '''commands''' interactively in one Singularity session.
# Running a '''single command''' which executes and then stops.
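As a sketch of the first two approaches (assuming an image named <code>myimage.sif</code>):
<source>$ singularity shell myimage.sif            # interactive session inside the container
$ singularity exec myimage.sif ls /opt     # run a single command, then exit</source>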
<!--T:53-->
In some cases, you will not want your shell's environment variables polluting the container's environment. You can run a "clean environment" shell by adding the <code>-e</code> option, e.g.,
<!--T:54-->
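<source>$ singularity shell -e myimage.sif</source>
(The image name <code>myimage.sif</code> is illustrative; the <code>-e</code> option also works with <code>exec</code> and <code>run</code>.)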
<!--T:73-->
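For example (the image name <code>myimage.sif</code> and the presence of GCC inside it are assumptions for illustration), running:
<source>$ singularity exec myimage.sif gcc --version</source>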
will output the version information of what is installed within the container, whereas running at the normal shell prompt:
<!--T:74-->
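<source>$ gcc --version</source>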
<!--T:75-->
will output the version of GCC currently loaded on the cluster.
<!--T:91-->
The previous three commands show how to bind mount the various filesystems on our clusters, i.e., within the container image <code>myimage.simg</code> these commands bind mount the following (a combined sketch follows this list):
* <code>/home</code> so that all home directories can be accessed (subject to your account's permissions)
* <code>/project</code> so that project directories can be accessed (subject to your account's permissions)
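Putting these together, a combined invocation might look like the following sketch (additional filesystems can be bind mounted the same way with further <code>-B</code> options):
<source>$ singularity shell -B /home -B /project myimage.simg</source>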
* clusters using InfiniBand need UCX.
=== Using CUDA on a cluster === <!--T:129-->

<!--T:130-->
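A minimal sketch, assuming an image named <code>myimage.sif</code> run on a node with NVIDIA GPUs: the <code>--nv</code> option makes the host's NVIDIA driver libraries and GPU devices available inside the container, e.g.,
<source>$ singularity exec --nv myimage.sif nvidia-smi</source>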