Dedalus

Dedalus is a flexible framework for solving partial differential equations using modern spectral methods.

Available versions

On our clusters, Dedalus is provided as Python wheels. To see which versions are available, run avail_wheels.

[name@server ~]$ avail_wheels dedalus --all-versions
name     version    python    arch
-------  ---------  --------  ---------
dedalus  3.0.2      cp311     x86-64-v3
dedalus  3.0.2      cp310     x86-64-v3

Installation in a Python virtual environment

1. Load the runtime dependencies for Dedalus.

[name@server ~]$ module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11

2. Create and activate a Python virtual environment.

[name@server ~]$ virtualenv --no-download ~/dedalus_env
[name@server ~]$ source ~/dedalus_env/bin/activate


3. Install a version of Dedalus and its Python dependencies.

(dedalus_env) [name@server ~] pip install --no-index --upgrade pip
(dedalus_env) [name@server ~] pip install --no-index dedalus==X.Y.Z

where X.Y.Z is the version you want (for example, 3.0.2). If no version number is given, the most recent available version is installed.
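For example, to pin the 3.0.2 wheel listed by avail_wheels above (the version shown here is only illustrative; use whichever version is available on your cluster):

(dedalus_env) [name@server ~] pip install --no-index dedalus==3.0.2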

4. Validate the installation.

(dedalus_env) [name@server ~] python -c 'import dedalus'

5. Freeze the environment and its requirements.

(dedalus_env) [name@server ~] pip freeze --local > ~/dedalus-3.0.2-requirements.txt

6. Remove the local virtual environment.

(dedalus_env) [name@server ~] deactivate && rm -r ~/dedalus_env

Running Dedalus

You can run Dedalus distributed across multiple nodes or cores. For efficient MPI scheduling, see Advanced MPI scheduling (https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling).

1. Write your job submission script.

File : submit-dedalus-distributed.sh

#!/bin/bash

#SBATCH --account=def-someprof    # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=08:00:00           # adjust this to match the walltime of your job
#SBATCH --ntasks=4                # adjust this to match the number of tasks/processes to run
#SBATCH --mem-per-cpu=4G          # adjust this according to the memory you need per process

# Run on cores across the system : https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling#Few_cores,_any_number_of_nodes

# Load module dependencies.
module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11

# create the virtual environment on each allocated node: 
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate

pip install --no-index --upgrade pip
pip install --no-index -r dedalus-3.0.2-requirements.txt
EOF

# activate only on main node
source $SLURM_TMPDIR/env/bin/activate;

export OMP_NUM_THREADS=1

# srun exports the current env, which contains $VIRTUAL_ENV and $PATH variables
srun python $SCRATCH/myscript.py;


File : submit-dedalus-whole-nodes.sh

#!/bin/bash

#SBATCH --account=def-someprof    # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=08:00:00           # adjust this to match the walltime of your job
#SBATCH --nodes=2                 # adjust this to match the number of whole nodes
#SBATCH --ntasks-per-node=4       # adjust this to match the number of tasks/processes to run per node
#SBATCH --mem-per-cpu=4G          # adjust this according to the memory you need per process

# Run on N whole nodes : https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling#Whole_nodes

# Load module dependencies.
module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11

# create the virtual environment on each allocated node: 
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate

pip install --no-index --upgrade pip
pip install --no-index -r dedalus-3.0.2-requirements.txt
EOF

# activate only on main node
source $SLURM_TMPDIR/env/bin/activate;

export OMP_NUM_THREADS=1

# srun exports the current env, which contains $VIRTUAL_ENV and $PATH variables
srun python $SCRATCH/myscript.py;


2. Submit the job to the scheduler.

Before submitting your job, it is important to test that your submission script will start without errors. You can do a quick test in an interactive job.
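A minimal sketch of such a test, assuming the requested resources match the directives in your submission script (the account name, resource values, and script name below are illustrative):

[name@server ~]$ salloc --account=def-someprof --ntasks=4 --mem-per-cpu=4G --time=1:00:00
[name@node ~]$ bash submit-dedalus-distributed.sh

Once the interactive run completes without errors, submit the job itself with sbatch.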

[name@server ~]$ sbatch submit-dedalus.sh