To see a list of installed versions and which other modules they depend on, you can use the <code>module spider</code> [[Using modules#Sub-command_spider|command]] or check the [[Available software]] page.
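For example, to list all available Amber versions and then see which prerequisite modules a specific version needs:
{{Command|module spider amber}}
{{Command|module spider amber/20.12-20.15}}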
== Loading modules == <!--T:42-->
<tabs>
<tab name="StdEnv/2020">
CPU-only modules provide all MD programs available in AmberTools/20 plus pmemd (serial) and pmemd.MPI (parallel). GPU modules add pmemd.cuda (single GPU) and pmemd.cuda.MPI (multi-GPU).
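For example, a GPU-enabled module could be loaded with a command along these lines (the toolchain versions shown are an assumption; run <code>module spider amber/20.12-20.15</code> to confirm the exact prerequisites):
{{Command|module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15}}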
=== Known issues === <!--T:41-->
1. The amber/20.12-20.15 module does not include the MMPBSA.py.MPI executable.
2. MMPBSA.py from the amber/18.10-18.11 and amber/18.14-18.17 modules cannot perform PB calculations. Use a more recent amber/20 module for this type of calculation.
==Job submission examples== <!--T:37-->
=== Single GPU job ===
For GPU-accelerated simulations on Narval, use amber/20.12-20.15. Modules compiled with CUDA versions earlier than 11.4 do not work on A100 GPUs. Below is an example submission script for a single-GPU job with amber/20.12-20.15.
{{File
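|name=pmemd_cuda_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2000M
#SBATCH --time=1:00:00

# The toolchain versions below are an assumption; verify the exact
# prerequisites with "module spider amber/20.12-20.15".
# Input file names (input.in, topol.parm7, ...) are placeholders.
module purge
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15

pmemd.cuda -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7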
}}
=== CPU-only parallel MPI job === <!--T:47-->
The example below requests four full nodes on Narval (64 tasks per node). If <code>--nodes=4</code> is omitted, Slurm will decide how many nodes to use based on availability.
{{File
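|name=pmemd_mpi_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64
#SBATCH --mem-per-cpu=2000M
#SBATCH --time=1:00:00

# The CPU module version and toolchain below are assumptions; run
# "module spider amber" to find the CPU-only builds available to you.
# Input file names are placeholders.
module purge
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15

srun pmemd.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7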
}}
=== QM/MM distributed multi-GPU job === <!--T:48-->
The example below requests eight GPUs.
{{File
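|name=qmmm_multi_gpu_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --nodes=2                 
#SBATCH --ntasks-per-node=4       
#SBATCH --gpus-per-node=4         
#SBATCH --mem-per-cpu=4000M
#SBATCH --time=1:00:00
# 2 nodes x 4 GPUs per node = 8 GPUs, one MPI task per GPU.

# The executable name (sander.quick.cuda.MPI, the QUICK-based QM/MM engine
# in AmberTools/20) and the toolchain versions are assumptions; verify both
# with "module spider amber/20.12-20.15". Input file names are placeholders.
module purge
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15

srun sander.quick.cuda.MPI -O -i qmmm.in -p topol.parm7 -c coord.rst7 -o output.mdout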
}}
=== Parallel MMPBSA job === <!--T:6-->
The example below uses 32 MPI processes. MMPBSA scales linearly because each trajectory frame is processed independently.
{{File
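|name=mmpbsa_job.sh
|lang="bash"
|contents=
#!/bin/bash
#SBATCH --ntasks=32               
#SBATCH --mem-per-cpu=4000M
#SBATCH --time=1:00:00
# 32 MPI processes; each works on its own subset of trajectory frames.

# Module versions are assumptions. MMPBSA.py.MPI is absent from
# amber/20.12-20.15 (see Known issues above), so load a module that
# provides it; verify with "module spider amber".
module purge
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15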
srun MMPBSA.py.MPI -O -i mmpbsa.in -o mmpbsa.dat -sp solvated_complex.parm7 -cp complex.parm7 -rp receptor.parm7 -lp ligand.parm7 -y trajectory.nc
}}
You can modify these scripts to fit your simulation's computing requirements. See [[Running jobs]] for more details.
</translate>