==Running software: <code>apptainer run</code> or <code>apptainer exec</code>==

When the <code>apptainer run</code> command launches a container, it first runs the <code>%runscript</code> defined for that container (if there is one) and then runs the command you specified (subject to the code in the <code>%runscript</code> script). <br>
The <code>apptainer exec</code> command does not run the <code>%runscript</code>, even if one is defined in the container.

We suggest that you always use <code>apptainer run</code> rather than <code>apptainer exec</code> (which runs a command directly within the specified container).

For example, suppose you want to run the <code>g++</code> compiler inside your container to compile a C++ program called <code>myprog.cpp</code> and then run that program. To do this, you might use the following commands.
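A minimal sketch of the two steps follows; the image name <code>myimage.sif</code> is only a placeholder, and it is assumed that <code>g++</code> is available inside the image and that the image's <code>%runscript</code> passes its arguments through as the command to run.
<pre>
# myimage.sif is a placeholder image name; g++ must be installed in the image,
# and the %runscript must pass its arguments through as the command to run.
apptainer run myimage.sif g++ -o myprog myprog.cpp

# Run the compiled program inside the same container; the current directory is
# normally bind mounted, so myprog is also visible on the host.
apptainer run myimage.sif ./myprog
</pre>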
<b>NOTE:</b> When all MPI processes run on a single shared-memory node, there is no need to use the interconnect hardware, so there are no issues running MPI programs within an Apptainer container when all processes run on a single cluster node, e.g., when the Slurm option <code>--nodes=1</code> is used in an <code>sbatch</code> script. Unless you <b>explicitly</b> limit the job to a single cluster node, the scheduler can choose to run an MPI program over multiple nodes; if such a program runs from within an Apptainer container and has not been set up to use the interconnect properly, it may fail to run.
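For illustration only, a single-node job script might look like the sketch below; the image name <code>myimage.sif</code>, the program name <code>mympiprog</code>, and the resource requests are placeholders, and it is assumed that the MPI library inside the container can be launched by <code>srun</code> (e.g., through Slurm's PMI support).
<pre>
#!/bin/bash
#SBATCH --nodes=1              # keep all MPI ranks on one shared-memory node
#SBATCH --ntasks-per-node=4    # placeholder: number of MPI ranks
#SBATCH --mem-per-cpu=2G       # placeholder: memory per rank
#SBATCH --time=0-00:30         # placeholder: run-time limit

# srun launches one containerized process per MPI rank; because --nodes=1 is
# set, all ranks share one node and no interconnect setup is needed.
srun apptainer run myimage.sif ./mympiprog
</pre>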


More details are in preparation.

=Bind mounts and persistent overlays=
==Using Conda in Apptainer==

In preparation.

==Using Spack in Apptainer==

In preparation.

==Using NVIDIA GPUs in Apptainer==

In preparation.

==Running MPI programs in Apptainer==

In preparation.

==Creating an Apptainer container from a Dockerfile==