<!--T:95-->
If you are running MPI programs, nothing special needs to be done for jobs running on a single node.
<!--T:97-->
Running jobs across nodes with MPI requires the following (a sample job script combining these steps is sketched after this list):
* Ensuring your MPI program is compiled using the OpenMPI installed inside your Singularity container.
** Ideally the version of OpenMPI inside the container is version 3 or 4. Version 2 may or may not work. Version 1 will not work.
* Ensuring your SLURM job script uses <code>srun</code> to run the MPI program. Do not use <code>mpirun</code> or <code>mpiexec</code>, e.g.,
<source lang="bash">
srun singularity exec /path/to/your/singularity/image.sif /path/to/your-program
</source>
* Ensuring there are no <code>module load</code> commands in your job script.
* Before submitting the job using <code>sbatch</code>, loading the following modules in the CC shell environment:
** <code>singularity</code>
** <code>openmpi</code> (This does not need to match the OpenMPI version installed inside the container. Ideally use version 4 or version 3; version 2 may or may not work; version 1 will not work.)
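Putting these requirements together, a minimal multi-node job script might look like the sketch below. The account name, node and task counts, memory request, and time limit are placeholder values, not recommendations from this page; adjust them for your own job and image.

<source lang="bash">
#!/bin/bash
#SBATCH --account=def-someuser    # placeholder account name
#SBATCH --nodes=2                 # placeholder: run across two nodes
#SBATCH --ntasks-per-node=4       # placeholder: 4 MPI ranks per node
#SBATCH --mem-per-cpu=2G          # placeholder memory request
#SBATCH --time=0-01:00            # placeholder time limit

# Note there are no module load commands in this script; the required
# modules are loaded in the shell environment before the job is submitted.
srun singularity exec /path/to/your/singularity/image.sif /path/to/your-program
</source>

The script would then be submitted from a shell in which the two modules have already been loaded, e.g. (the script name <code>mpi-job.sh</code> is a placeholder):

<source lang="bash">
module load singularity openmpi
sbatch mpi-job.sh
</source>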
=See also= <!--T:98-->