AMBER
== Introduction ==
Amber is the collective name for a suite of programs that allow users to perform molecular dynamics simulations, particularly on biomolecules. None of the individual programs carry this name, but the various parts work reasonably well together, and provide a powerful framework for many common calculations.
== Amber vs. AmberTools ==
We have modules for both Amber and AmberTools available in our software stack.
* AmberTools (module <code>ambertools</code>) contains a number of tools for preparing and analysing simulations, as well as <code>sander</code> to perform molecular dynamics simulations, all of which are free and open source.
* Amber (module <code>amber</code>) contains everything that is included in <code>ambertools</code>, but adds the advanced <code>pmemd</code> program for molecular dynamics simulations.
To see a list of installed versions and which other modules they depend on, you can use the <code>module spider</code> command or check the [[Available software]] page.
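For example, the following commands show how to query the available Amber modules (the output will vary by cluster):
<pre>
# list all installed versions of Amber and AmberTools
module spider amber
module spider ambertools

# show the exact modules that must be loaded first for a specific version
module spider amber/20.12-20.15
</pre>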
== Loading Amber and AmberTools modules ==
{| class="wikitable"
! AMBER version !! modules for running on CPUs !! modules for running on GPUs (CUDA) !! Notes
|-
| ambertools/21 || StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 ambertools/21 || StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 ambertools/21 || GCC, FlexiBLAS & FFTW
|-
| ambertools/21 || || StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 ambertools/21 || GCC, OpenBLAS & FFTW
|-
| amber/20.12-20.15 || StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.12-20.15 || StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15 || GCC, FlexiBLAS & FFTW
|-
| amber/20.9-20.15 || StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15 || StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 amber/20.9-20.15 || GCC, MKL & FFTW
|-
| amber/18.14-18.17 || StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/18.14-18.17 || StdEnv/2020 gcc/8.4.0 cuda/10.2 openmpi/4.0.3 || GCC, MKL
|}
{| class="wikitable"
! AMBER version !! modules for running on CPUs !! modules for running on GPUs (CUDA) !! Notes
|-
| amber/18 || StdEnv/2016 gcc/5.4.0 openmpi/2.1.1 scipy-stack/2019a amber/18 || StdEnv/2016 gcc/5.4.0 openmpi/2.1.1 cuda/9.0.176 scipy-stack/2019a amber/18 || GCC, MKL
|-
| amber/18.10-18.11 || StdEnv/2016 gcc/5.4.0 openmpi/2.1.1 scipy-stack/2019a amber/18.10-18.11 || StdEnv/2016 gcc/5.4.0 openmpi/2.1.1 cuda/9.0.176 scipy-stack/2019a amber/18.10-18.11 || GCC, MKL
|-
| amber/18.10-18.11 || StdEnv/2016 gcc/7.3.0 openmpi/3.1.2 scipy-stack/2019a amber/18.10-18.11 || StdEnv/2016 gcc/7.3.0 cuda/9.2.148 openmpi/3.1.2 scipy-stack/2019a amber/18.10-18.11 || GCC, MKL
|-
| amber/16 || StdEnv/2016.4 amber/16 || || Available only on Graham. Some Python functionality is not supported.
|}
== Using AMBER modules ==
=== AmberTools 21 ===
Currently, the AmberTools 21 module is available on all clusters. AmberTools provides the following MD engines: sander, sander.LES, sander.LES.MPI, sander.MPI, sander.OMP, sander.quick.cuda, and sander.quick.cuda.MPI. After loading the module, set the AMBER environment variables:
<pre>
source $EBROOTAMBERTOOLS/amber.sh
</pre>
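For example, a minimal sketch of setting up the AmberTools 21 environment in a job script or interactive session, assuming the CPU module combination from the table above:
<pre>
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh

# amber.sh sets the AMBER environment variables such as AMBERHOME;
# a quick check that the environment is ready:
echo $AMBERHOME
which sander
</pre>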
=== Amber 20 ===
There are two versions of the amber/20 module: 20.9-20.15 and 20.12-20.15. The first uses MKL and cuda/11.0, while the second uses FlexiBLAS and cuda/11.4. MKL libraries do not perform well on AMD CPUs; FlexiBLAS solves this problem by detecting the CPU type and using libraries optimized for that hardware. cuda/11.4 is required for running simulations on the A100 GPUs installed on Narval.
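As an illustration, a minimal sketch of inspecting the FlexiBLAS configuration after loading amber/20.12-20.15; the availability of the <code>flexiblas</code> utility and the backend name used below are assumptions that depend on the installation:
<pre>
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.12-20.15

# list the BLAS backends FlexiBLAS knows about and which one is the default
flexiblas list

# the backend can be overridden for a single run via an environment variable;
# 'blis' is only an example name, available backends depend on the installation
export FLEXIBLAS=blis
</pre>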
CPU-only modules provide all MD programs available in AmberTools/20, plus pmemd (serial) and pmemd.MPI (parallel). GPU modules add pmemd.cuda (single GPU) and pmemd.cuda.MPI (multi-GPU).
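For instance, a minimal sketch of a serial pmemd run, using the same placeholder file names as the job scripts below:
<pre>
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.12-20.15
pmemd -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
</pre>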
=== Known issues ===
1. The amber/20.12-20.15 module does not have the MMPBSA.py.MPI executable.
2. MMPBSA.py from the amber/18.10-18.11 and amber/18.14-18.17 modules cannot perform PB calculations. Use the more recent amber/20 modules for this type of calculation.
== Job submission examples ==
=== Single GPU job ===
For GPU-accelerated simulations on Narval, use amber/20.12-20.15. Modules compiled with CUDA versions older than 11.4 do not work on A100 GPUs. Below is an example submission script for a single-GPU job with amber/20.12-20.15.
<pre>
#!/bin/bash
#SBATCH --cpus-per-task=1
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2000
#SBATCH --time=10:0:0
module purge
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15
pmemd.cuda -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
</pre>
=== CPU-only parallel MPI job ===
The example below requests four full nodes on Narval (64 tasks per node). If <code>--nodes=4</code> is omitted, Slurm will decide how many nodes to use based on availability.
<pre>
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks=512
#SBATCH --mem-per-cpu=2000
#SBATCH --time=1:0:0
module purge
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15
srun pmemd.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
</pre>
=== QM/MM distributed multi-GPU job ===
The example below requests eight GPUs.
<pre>
#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=1
#SBATCH --gpus-per-task=1
#SBATCH --mem-per-cpu=4000
#SBATCH --time=1:00:00
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh
srun sander.quick.cuda.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
</pre>
=== Parallel MMPBSA job ===
The example below uses eight MPI processes. MMPBSA scales linearly with the number of MPI processes because each trajectory frame is processed independently.
<pre>
#!/bin/bash
#SBATCH --ntasks=8
#SBATCH --mem-per-cpu=4000
#SBATCH --time=1:00:00
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15 scipy-stack
srun MMPBSA.py.MPI -O -i mmpbsa.in -o mmpbsa.dat -sp solvated_complex.parm7 -cp complex.parm7 -rp receptor.parm7 -lp ligand.parm7 -y trajectory.nc
</pre>
You can modify these scripts to fit your job's requirements for computing resources. See [[Running jobs]].
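For example, after saving one of the scripts above (the file name below is only a placeholder), submit it with sbatch and check its status with squeue:
<pre>
sbatch amber_job.sh    # amber_job.sh is a hypothetical name for one of the scripts above
squeue -u $USER        # list your queued and running jobs
</pre>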
== Performance ==
Benchmarks of simulations with PMEMD on Compute Canada systems are available here. View benchmarks of QM/MM simulations with SANDER.QUICK here.