<languages />
<translate>
<!--T:31-->
__FORCETOC__
<!--T:1-->
[https://dedalus-project.org/ Dedalus] is a flexible framework for solving partial differential equations using modern spectral methods.


= Available versions = <!--T:2-->
Dedalus is available on our clusters as prebuilt Python packages (wheels). You can list available versions with <code>avail_wheels</code>.
{{Command
|avail_wheels dedalus
|result=
name     version    python    arch
-------  ---------  --------  ---------
dedalus  3.0.2      cp311     x86-64-v3
dedalus  3.0.2      cp310     x86-64-v3
}}
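
To list every available release rather than only the most recent one, <code>avail_wheels</code> also accepts the <code>--all-versions</code> flag:
{{Command|avail_wheels dedalus --all-versions}}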


= Installing Dedalus in a Python virtual environment = <!--T:3-->
1. Load Dedalus runtime dependencies.
{{Command|module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11}}

<!--T:4-->
2. Create and activate a Python virtual environment.
{{Commands
|virtualenv --no-download ~/dedalus_env
|source ~/dedalus_env/bin/activate
}}


<!--T:5-->
3. Install a specific version of Dedalus and its Python dependencies.
{{Commands
|pip install --no-index --upgrade pip
|pip install --no-index dedalus==X.Y.Z
}}
where <code>X.Y.Z</code> is the exact desired version, for instance <code>3.0.2</code>.
You can omit the version specifier to install the latest version available from the wheelhouse.
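For instance, to install the latest available wheel:
{{Command|pip install --no-index dedalus}}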


<!--T:6-->
4. Validate it.
{{Command
|python -c 'import dedalus'
}}
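To additionally confirm which release was installed, you can print the reported package version (a quick optional check):
{{Command|python -c 'import dedalus; print(dedalus.__version__)'}}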


<!--T:7-->
5. Freeze the environment and requirements set.
{{Command
|pip freeze --local > ~/dedalus-3.0.2-requirements.txt
}}


<!--T:8-->
6. Remove the local virtual environment.
{{Command
|deactivate && rm -r ~/dedalus_env
}}


= Running Dedalus = <!--T:9-->
You can run Dedalus distributed across multiple nodes or cores.
For efficient MPI scheduling, please see:
* [[Running jobs#MPI_job|MPI job]]
* [[Advanced MPI scheduling]]


<!--T:10-->
1. Write your job submission script.
<tabs>
<tab name="Distributed across cores">
{{File
|name=submit-dedalus-distributed.sh
|lang="bash"
|contents=
#!/bin/bash


<!--T:11-->
#SBATCH --account=def-someprof    # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=08:00:00           # adjust this to match the walltime of your job
#SBATCH --ntasks=4                # adjust this to match the number of tasks/processes to run
#SBATCH --mem-per-cpu=4G          # adjust this according to the memory you need per process


<!--T:12-->
# Run on cores across the system : https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling#Few_cores,_any_number_of_nodes


<!--T:13-->
# Load module dependencies.
module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11


<!--T:14-->
# create the virtual environment on each allocated node:
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate


<!--T:15-->
pip install --no-index --upgrade pip
pip install --no-index -r dedalus-3.0.2-requirements.txt
EOF


<!--T:16-->
# activate only on main node
source $SLURM_TMPDIR/env/bin/activate;


<!--T:17-->
export OMP_NUM_THREADS=1


<!--T:18-->
# srun exports the current env, which contains $VIRTUAL_ENV and $PATH variables
srun python $SCRATCH/myscript.py;
}}
</tab>


<!--T:19-->
<tab name="Whole nodes">
<tab name="Whole nodes">
{{File
{{File
Line 103: Line 122:
#!/bin/bash
#!/bin/bash


<!--T:20-->
#SBATCH --account=def-someprof    # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=08:00:00           # adjust this to match the walltime of your job
#SBATCH --nodes=2                 # adjust this to match the number of whole nodes
#SBATCH --ntasks-per-node=4       # adjust this to match the number of tasks/processes to run per node
#SBATCH --mem-per-cpu=4G          # adjust this according to the memory you need per process


<!--T:21-->
# Run on N whole nodes : https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling#Whole_nodes


<!--T:22-->
# Load module dependencies.
module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11


<!--T:23-->
# create the virtual environment on each allocated node:
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate


<!--T:24-->
pip install --no-index --upgrade pip
pip install --no-index -r dedalus-3.0.2-requirements.txt
EOF


<!--T:25-->
# activate only on main node
source $SLURM_TMPDIR/env/bin/activate;


<!--T:26-->
export OMP_NUM_THREADS=1


<!--T:27-->
# srun exports the current env, which contains $VIRTUAL_ENV and $PATH variables
srun python $SCRATCH/myscript.py;
}}
</tab>
</tabs>
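
The job scripts above run a Dedalus script located at <code>$SCRATCH/myscript.py</code>. As a point of reference only, here is a minimal sketch of what such a script could look like with the Dedalus v3 public API; the equation (a 2D heat equation on a doubly periodic box), the resolution and the parameters are illustrative assumptions, not part of the recipe above. Every MPI rank started by <code>srun</code> runs the same script, and Dedalus distributes the domain across the ranks.
{{File
|name=myscript.py
|lang="python"
|contents=
"""Illustrative Dedalus v3 sketch: 2D heat equation on a doubly periodic box."""
import numpy as np
import dedalus.public as d3
import logging
logger = logging.getLogger(__name__)

# Illustrative parameters
Lx, Lz = 1, 1
Nx, Nz = 256, 256
kappa = 1e-2
dtype = np.float64

# Doubly periodic 2D bases; the Distributor splits the grid across MPI ranks
coords = d3.CartesianCoordinates('x', 'z')
dist = d3.Distributor(coords, dtype=dtype)
xbasis = d3.RealFourier(coords['x'], size=Nx, bounds=(0, Lx), dealias=3/2)
zbasis = d3.RealFourier(coords['z'], size=Nz, bounds=(0, Lz), dealias=3/2)

# Field and problem: dt(u) - kappa*lap(u) = 0
u = dist.Field(name='u', bases=(xbasis, zbasis))
problem = d3.IVP([u], namespace=locals())
problem.add_equation("dt(u) - kappa*lap(u) = 0")

# Initial condition: Gaussian bump, set on the local portion of the grid
x, z = dist.local_grids(xbasis, zbasis)
u['g'] = np.exp(-((x - 0.5*Lx)**2 + (z - 0.5*Lz)**2) / 0.01)

# Time stepping with a fixed timestep
solver = problem.build_solver(d3.SBDF2)
solver.stop_sim_time = 1.0
timestep = 1e-3
while solver.proceed:
    solver.step(timestep)
    if solver.iteration % 100 == 0:
        logger.info('Iteration=%i, Time=%e' %(solver.iteration, solver.sim_time))
}}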


<!--T:28-->
2. Submit your job to the scheduler.
<!--T:29-->
Before submitting your job, it is important to test that your submission script will start without errors.
You can do a quick test in an [[Running_jobs#Interactive_jobs|interactive job]].
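For example, you could request a short interactive allocation and launch your script from there (the account, resources and walltime below are placeholders to adjust):
{{Command|salloc --account def-someprof --ntasks 4 --mem-per-cpu 4G --time 1:00:00}}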
<!--T:30-->
{{Command
|sbatch submit-dedalus.sh
}}
</translate>
