Dedalus

__FORCETOC__


[https://dedalus-project.org/ Dedalus] is a flexible framework for solving partial differential equations using modern spectral methods.


= Available versions =
Dedalus is available on our clusters as prebuilt Python packages (wheels). You can list the available versions with <code>avail_wheels</code>.
{{Command
|avail_wheels dedalus
|result=
name     version    python    arch
-------  ---------  --------  ---------
dedalus  3.0.2      cp311     x86-64-v3
dedalus  3.0.2      cp310     x86-64-v3
}}


= Installing Dedalus in a Python virtual environment =
1. Load the modules required to run Dedalus.
{{Command|module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11}}


2. Create and activate a Python virtual environment.
{{Commands
|virtualenv --no-download ~/dedalus_env
|source ~/dedalus_env/bin/activate
}}


3. Install a specific version of Dedalus and its Python dependencies.
{{Commands
|prompt=(dedalus_env) [name@server ~]
|pip install --no-index --upgrade pip
|pip install --no-index dedalus{{=}}{{=}}X.Y.Z
}}
where <code>X.Y.Z</code> is the exact desired version, for instance <code>3.0.2</code>.
If you do not specify a version, the latest one available in the wheelhouse is installed.


4. Validate the installation.
{{Command
|prompt=(dedalus_env) [name@server ~]
|python -c 'import dedalus'
}}
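Optionally, you can also confirm which version was installed by querying the package metadata; this extra check uses only the Python standard library, and the distribution name <code>dedalus</code> is the one listed by <code>avail_wheels</code> above.
{{Command
|prompt=(dedalus_env) [name@server ~]
|python -c 'from importlib.metadata import version; print(version("dedalus"))'
}}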


5. Freeze the environment and its set of requirements.
{{Command
|prompt=(dedalus_env) [name@server ~]
|pip freeze --local > ~/dedalus-3.0.2-requirements.txt
}}


6. Remove the local virtual environment.
{{Command
|prompt=(dedalus_env) [name@server ~]
|deactivate && rm -r ~/dedalus_env
}}


= Running Dedalus =
You can run Dedalus distributed across multiple nodes or cores.
For efficient MPI scheduling, please see
* [[Running_jobs#MPI_job]]
* [[Advanced_MPI_scheduling]]


1. Write your job submission script.
<tabs>
<tab name="Distributed">
{{File
|name=submit-dedalus-distributed.sh
|contents=
#!/bin/bash

#SBATCH --account=def-someprof    # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=08:00:00           # adjust this to match the walltime of your job
#SBATCH --ntasks=4                # adjust this to match the number of tasks/processes to run
#SBATCH --mem-per-cpu=4G          # adjust this according to the memory you need per process

# Run on cores across the system: https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling#Few_cores,_any_number_of_nodes

# Load module dependencies.
module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11

# Create the virtual environment on each allocated node:
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate

pip install --no-index --upgrade pip
pip install --no-index -r dedalus-3.0.2-requirements.txt
EOF

# Activate the environment only on the main node.
source $SLURM_TMPDIR/env/bin/activate;

export OMP_NUM_THREADS=1

# srun exports the current env, which contains the $VIRTUAL_ENV and $PATH variables
srun python $SCRATCH/myscript.py;
}}
</tab>

<tab name="Whole nodes">
{{File
|name=submit-dedalus-whole-nodes.sh
|contents=
#!/bin/bash

#SBATCH --account=def-someprof    # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=08:00:00           # adjust this to match the walltime of your job
#SBATCH --nodes=2                 # adjust this to match the number of whole nodes
#SBATCH --ntasks-per-node=4       # adjust this to match the number of tasks/processes to run per node
#SBATCH --mem-per-cpu=4G          # adjust this according to the memory you need per process

# Run on N whole nodes: https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling#Whole_nodes

# Load module dependencies.
module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11

# Create the virtual environment on each allocated node:
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate

pip install --no-index --upgrade pip
pip install --no-index -r dedalus-3.0.2-requirements.txt
EOF

# Activate the environment only on the main node.
source $SLURM_TMPDIR/env/bin/activate;

export OMP_NUM_THREADS=1

# srun exports the current env, which contains the $VIRTUAL_ENV and $PATH variables
srun python $SCRATCH/myscript.py;
}}
</tab>
</tabs>
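The submission scripts above run <code>$SCRATCH/myscript.py</code>, which stands in for your own Dedalus driver script and is not provided on this page. As a minimal sketch, assuming only the modules and wheels installed in the previous section (<code>mpi4py</code> and <code>dedalus</code>), a placeholder like the following can be used to confirm that every MPI rank is able to import Dedalus before you run a real problem.
{{File
|name=myscript.py
|contents=
# Hypothetical placeholder for $SCRATCH/myscript.py; replace it with your actual
# Dedalus solver script. It only verifies that Dedalus imports on every MPI rank.
from mpi4py import MPI

import dedalus

comm = MPI.COMM_WORLD
print(f"Rank {comm.rank} of {comm.size}: Dedalus imported successfully")
}}
Once your actual Dedalus script is in place, the submission scripts do not need to change.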


2. Submit your job to the scheduler.

Before submitting your job, it is important to test that your submission script will start without errors.
You can do a quick test in an [[Running_jobs#Interactive_jobs|interactive job]].

{{Command
|sbatch submit-dedalus.sh
}}
