<!--T:95-->
If you are running MPI programs, nothing special needs to be done for jobs running on a single node.
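
For a single node, one straightforward approach is to launch all of the MPI ranks from inside the container, so that the OpenMPI used is the one installed in the image. The following is only a sketch: the image and program paths are placeholders, and the resource requests should be adjusted to your own job.
<source lang="bash">
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=2G
#SBATCH --time=0-01:00

# Make the singularity command available on the host.
module load singularity

# mpirun here is the OpenMPI installed inside the container, so the whole
# MPI run stays within the image; the paths below are placeholders.
singularity exec /path/to/your/singularity/image.sif \
    mpirun -np $SLURM_NTASKS /path/to/your-program
</source>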


<!--T:96-->
Running jobs across nodes with MPI requires the following (a sample job script and submission commands are sketched after this list):


<!--T:97-->
* Ensuring your MPI program is compiled using the OpenMPI installed inside your Singularity container.
** Ideally the version of OpenMPI inside the container is version 3 or 4. Version 2 may or may not work. Version 1 will not work.
* Ensuring your SLURM job script uses <code>srun</code> to run the MPI program. Do not use <code>mpirun</code> or <code>mpiexec</code>, e.g.,
<source lang="bash">
srun singularity exec /path/to/your/singularity/image.sif /path/to/your-program
</source>
* Ensuring there are no <code>module load</code> commands in your job script.
* Loading the following modules in the CC shell environment before submitting the job with <code>sbatch</code>:
** <code>singularity</code>
** <code>openmpi</code> (This does not need to match the OpenMPI version installed inside the container. Ideally use version 4 or version 3; version 2 may or may not work; version 1 will not work.)
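
Putting these requirements together, a submission might look like the following sketch. The image path, program path, and resource requests are placeholders to adapt to your own work; the script loads no modules, and only <code>srun</code> launches the program.
<source lang="bash">
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --mem-per-cpu=2G
#SBATCH --time=0-01:00

# No module load commands here: singularity and openmpi are loaded in the
# shell before the job is submitted.

# srun starts one MPI rank per task across the nodes; the program was
# compiled against the OpenMPI installed inside the container.
srun singularity exec /path/to/your/singularity/image.sif /path/to/your-program
</source>

The modules are then loaded in the CC shell environment and the job is submitted, e.g. (the script name <code>mpi-job.sh</code> is a placeholder):
<source lang="bash">
module load singularity
module load openmpi    # need not match the OpenMPI version inside the container
sbatch mpi-job.sh
</source>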


=See also= <!--T:98-->