__FORCETOC__
[https://dedalus-project.org/ Dedalus] is a flexible framework for solving partial differential equations using modern spectral methods.

= Available versions =
On our clusters, Dedalus is provided as Python <i>wheels</i>. To see which versions are available, run <code>avail_wheels</code>.
{{Command
|avail_wheels dedalus
|result=
name     version    python    arch
-------  ---------  --------  ---------
dedalus  3.0.2      cp311     x86-64-v3
dedalus  3.0.2      cp310     x86-64-v3
}}
= Installation in a Python virtual environment =
1. Load the modules required to run Dedalus.
{{Command|module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11}}
2. Create and activate a Python virtual environment.
{{Commands
|virtualenv --no-download ~/dedalus_env
|source ~/dedalus_env/bin/activate
}}
3. Install a version of Dedalus and its Python dependencies.
{{Commands
|prompt=(dedalus_env) [name@server ~]
|pip install --no-index --upgrade pip
|pip install --no-index dedalus{{=}}{{=}}X.Y.Z
}}
where <code>X.Y.Z</code> is the chosen version (for example, 3.0.2).
If no version number is given, the most recent available version will be installed.
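For instance, to pin the 3.0.2 wheel listed above:
{{Command
|prompt=(dedalus_env) [name@server ~]
|pip install --no-index dedalus{{=}}{{=}}3.0.2
}}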
4. Validate the installation.
{{Command
|prompt=(dedalus_env) [name@server ~]
|python -c 'import dedalus'
}}
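Optionally, print the installed version as well; this assumes the package exposes <code>__version__</code>, as recent Dedalus releases do:
{{Command
|prompt=(dedalus_env) [name@server ~]
|python -c 'import dedalus; print(dedalus.__version__)'
}}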
5. Freeze the environment and its required dependencies.
{{Command
|prompt=(dedalus_env) [name@server ~]
|pip freeze --local > ~/dedalus-3.0.2-requirements.txt
}}
6. Remove the local virtual environment.
{{Command
|prompt=(dedalus_env) [name@server ~]
|deactivate && rm -r ~/dedalus_env
}}
= Running =
Dedalus can be run in distributed mode over multiple nodes or cores.
For more information, see
* [[Running jobs/fr#Tâche_MPI|MPI job]]
* [[Advanced MPI scheduling/fr|Controlling scheduling with MPI]]

1. Prepare the job script.
<tabs>
<tab name="Distributed mode">
{{File
|name=submit-dedalus-distributed.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-someprof   # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=08:00:00          # adjust this to match the walltime of your job
#SBATCH --ntasks=4               # adjust this to match the number of tasks/processes to run
#SBATCH --mem-per-cpu=4G         # adjust this according to the memory you need per process

# Run on cores across the system: https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling#Few_cores,_any_number_of_nodes

# Load module dependencies.
module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11

# Create the virtual environment on each allocated node:
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install --no-index --upgrade pip
pip install --no-index -r dedalus-3.0.2-requirements.txt
EOF

# Activate the environment only on the main node.
source $SLURM_TMPDIR/env/bin/activate;
export OMP_NUM_THREADS=1

# srun exports the current env, which contains the $VIRTUAL_ENV and $PATH variables.
srun python $SCRATCH/myscript.py;
}}
</tab>
<tab name="Nœud entier"> | |||
<tab name=" | |||
{{File
|name=submit-dedalus-whole-nodes.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --account=def-someprof   # adjust this to match the accounting group you are using to submit jobs
#SBATCH --time=08:00:00          # adjust this to match the walltime of your job
#SBATCH --nodes=2                # adjust this to match the number of whole nodes
#SBATCH --ntasks-per-node=4      # adjust this to match the number of tasks/processes to run per node
#SBATCH --mem-per-cpu=4G         # adjust this according to the memory you need per process

# Run on N whole nodes: https://docs.alliancecan.ca/wiki/Advanced_MPI_scheduling#Whole_nodes

# Load module dependencies.
module load StdEnv/2023 gcc openmpi mpi4py/3.1.4 fftw-mpi/3.3.10 hdf5-mpi/1.14.2 python/3.11

# Create the virtual environment on each allocated node:
srun --ntasks $SLURM_NNODES --tasks-per-node=1 bash << EOF
virtualenv --no-download $SLURM_TMPDIR/env
source $SLURM_TMPDIR/env/bin/activate
pip install --no-index --upgrade pip
pip install --no-index -r dedalus-3.0.2-requirements.txt
EOF

# Activate the environment only on the main node.
source $SLURM_TMPDIR/env/bin/activate;
export OMP_NUM_THREADS=1

# srun exports the current env, which contains the $VIRTUAL_ENV and $PATH variables.
srun python $SCRATCH/myscript.py;
}}
</tab>
</tabs>
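The job scripts above run a Dedalus driver script at <code>$SCRATCH/myscript.py</code>, which is not shown on this page. As a starting point, here is a minimal sketch loosely adapted from the one-dimensional KdV-Burgers example in the Dedalus documentation; the file name, parameters, and initial condition are illustrative, and the Dedalus documentation remains the authoritative reference for the API.
{{File
|name=myscript.py
|lang="python"
|contents=
import numpy as np
import dedalus.public as d3

# Illustrative problem parameters
Lx, Nx = 10, 1024
a, b = 1e-4, 2e-4
dtype = np.float64

# 1D real Fourier basis on a periodic interval
xcoord = d3.Coordinate('x')
dist = d3.Distributor(xcoord, dtype=dtype)
xbasis = d3.RealFourier(xcoord, size=Nx, bounds=(0, Lx), dealias=3/2)

# Field and derivative substitution
u = dist.Field(name='u', bases=xbasis)
dx = lambda A: d3.Differentiate(A, xcoord)

# KdV-Burgers initial value problem
problem = d3.IVP([u], namespace=locals())
problem.add_equation("dt(u) - a*dx(dx(u)) - b*dx(dx(dx(u))) = -u*dx(u)")

# Localized initial pulse
x = dist.local_grid(xbasis)
n = 20
u['g'] = np.log(1 + np.cosh(n)**2 / np.cosh(n*(x - 0.2*Lx))**2) / (2*n)

# Build the solver and advance in time
solver = problem.build_solver(d3.SBDF2)
solver.stop_sim_time = 10
while solver.proceed:
    solver.step(2e-3)
}}
Because Dedalus distributes its domain over MPI internally, the same script runs serially or under <code>srun</code> with several tasks without modification.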
2. Submit the job to the scheduler.
Before submitting the job, it is important to test your script for possible errors; do a quick test with an [[Running_jobs/fr#Tâches_interactives|interactive job]], as sketched below. Replace <code>submit-dedalus.sh</code> with the name of the script you prepared, for example <code>submit-dedalus-distributed.sh</code>.
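For example, a quick interactive test could start with a request like the following (a sketch; the resources shown are assumptions to adjust to your case), after which you run the same module load, virtual environment, and <code>srun</code> steps as in the job script:
{{Command
|salloc --account{{=}}def-someprof --ntasks{{=}}4 --mem-per-cpu{{=}}4G --time{{=}}01:00:00
}}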
{{Command
|sbatch submit-dedalus.sh
}}
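Once the job is submitted, you can check its state in the queue with the standard Slurm command below (our clusters also provide the shorter <code>sq</code> wrapper):
{{Command
|squeue -u $USER
}}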