AMBER
Introduction

Amber is the collective name for a suite of programs that allow users to perform molecular dynamics simulations, particularly on biomolecules. None of the individual programs carry this name, but the various parts work reasonably well together, and provide a powerful framework for many common calculations.

Amber vs. AmberTools

We have modules for both Amber and AmberTools available in our software stack.

  • AmberTools (module ambertools) contains a number of tools for preparing and analysing simulations, as well as sander to perform molecular dynamics simulations, all of which are free and open source.
  • Amber (module amber) contains everything that is included in ambertools, but adds the advanced pmemd program for molecular dynamics simulations.

To see a list of installed versions and which other modules they depend on, you can use the module spider command or check the Available software page.
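
For example (the exact output will differ between clusters and software stacks):

module spider amber           # list available Amber versions
module spider ambertools/21   # show the modules needed to load ambertools/21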


Loading Amber and AmberTools modules

ambertools/21 (GCC, FlexiBLAS & FFTW)
  • modules for running on CPUs: StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 ambertools/21
  • modules for running on GPUs (CUDA): StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 ambertools/21

ambertools/21 (GCC, OpenBLAS & FFTW)
  • modules for running on GPUs (CUDA): StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 ambertools/21

amber/20.12-20.15 (GCC, FlexiBLAS & FFTW)
  • modules for running on CPUs: StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.12-20.15
  • modules for running on GPUs (CUDA): StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15

amber/20.9-20.15 (GCC, MKL & FFTW)
  • modules for running on CPUs: StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15
  • modules for running on GPUs (CUDA): StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 amber/20.9-20.15

Loading AmberTools 21

Currently, AmberTools 21 is available on all clusters. It provides the following MD engines: sander, sander.LES, sander.LES.MPI, sander.MPI, sander.OMP, sander.quick.cuda, and sander.quick.cuda.MPI. After loading the module, set the AMBER environment variables:

source $EBROOTAMBERTOOLS/amber.sh

CPU-only version

module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh

Provides the following MD engines: sander, sander.LES, sander.LES.MPI, sander.MPI, and sander.OMP.
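
As an illustrative sketch (not part of the original instructions), a parallel sander.MPI job with this module could be submitted with a script like the one below; the file names input.in, topol.parm7 and coord.rst7 are placeholders, and the resource requests should be adjusted to your system:

#!/bin/bash
#SBATCH --ntasks=8             # number of MPI ranks; adjust to your system
#SBATCH --mem-per-cpu=2G
#SBATCH --time=01:00:00
module purge
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh
srun sander.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7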

GPU version

module load StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh

Provides the following MD engines: sander, sander.LES, sander.LES.MPI, sander.MPI, sander.OMP, sander.quick.cuda, and sander.quick.cuda.MPI.
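
As a sketch only (not from the original page), a single-GPU job using the GPU-accelerated QM/MM engine sander.quick.cuda might be submitted as follows; the resource requests and the file names input.in, topol.parm7 and coord.rst7 are placeholder assumptions:

#!/bin/bash
#SBATCH --cpus-per-task=1 --gpus-per-node=1
#SBATCH --mem-per-cpu=2000
#SBATCH --time=01:00:00
module purge
module load StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh
sander.quick.cuda -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7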

Loading Amber 20

Currently, Amber 20 is available on all clusters. There are two versions of the amber/20 module: 20.9-20.15 and 20.12-20.15. The first uses MKL and cuda/11.0, while the second uses FlexiBLAS and cuda/11.4. MKL libraries do not perform well on AMD CPUs; FlexiBLAS solves this problem by detecting the CPU type and using libraries optimized for the hardware. cuda/11.4 is required for running simulations on the A100 GPUs installed on Narval.

Loading CPU-only versions

module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15

or

module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.12-20.15

These modules provide all MD programs available in AmberTools/20 plus pmemd (serial) and pmemd.MPI (parallel).
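
For illustration only, a minimal Slurm script for a parallel pmemd.MPI run with one of these modules could look like the following; the file names input.in, topol.parm7 and coord.rst7 are placeholders, and the CPU count should be chosen to match your cluster:

#!/bin/bash
#SBATCH --nodes=1 --ntasks-per-node=32  # one full node; adjust to the cluster's core count
#SBATCH --mem-per-cpu=2G
#SBATCH --time=01:00:00
module purge
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.12-20.15
srun pmemd.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7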

Loading GPU versions

module load StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 amber/20.9-20.15

or

module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15

These modules provide all MD programs available in AmberTools/20 plus pmemd (serial), pmemd.MPI (parallel), pmemd.cuda (single GPU), and pmemd.cuda.MPI (multi-GPU).

Submission of GPU-accelerated AMBER on Narval

AMBER modules compiled with a CUDA version older than 11.4 do not work on A100 GPUs. Use the amber/20.12-20.15 module on Narval.

Example submission script for a single-GPU job with amber/20.12-20.15:

#!/bin/bash
#SBATCH --cpus-per-task=1 --gpus-per-node=1 --mem-per-cpu=2000 --time=10:0:0  
module purge
module load StdEnv/2020  gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15
pmemd.cuda -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
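
For multi-GPU runs, a sketch along the same lines using pmemd.cuda.MPI (typically one MPI rank per GPU) might look like the script below; the GPU count and file names are illustrative only, and scaling should be benchmarked before requesting more GPUs:

#!/bin/bash
#SBATCH --ntasks=2 --gpus-per-node=2 --cpus-per-task=1 --mem-per-cpu=2000 --time=10:0:0
module purge
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15
srun pmemd.cuda.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7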

Known issues

The amber/20.12-20.15 module does not have the MMPBSA.py.MPI executable.

Loading Amber 18

Currently, versions 18 and 18.10-18.11 are available on all clusters.

Non-GPU versions

module load gcc/5.4.0 openmpi/2.1.1 amber/18 scipy-stack/2019a

or

module load gcc/5.4.0 openmpi/2.1.1 amber/18.10-18.11 scipy-stack/2019a

GPU versions

module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18 scipy-stack/2019a

or

module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18.10-18.11 scipy-stack/2019a

Known issues

1. MMPBSA.py programs from the amber/18.10-18.11 and amber/18.14-18.17 modules cannot perform PB calculations. Use the more recent amber/20 modules for PB calculations.

Loading Amber 16

Amber 16 is currently available on Graham only due to license restrictions. It was built with the previous system environment StdEnv/2016.4. Load StdEnv/2016.4 before loading amber/16 using the module command:

[name@server $] module load StdEnv/2016.4
[name@server $] module load amber/16 

This version does not support some Python functionality of Amber.

Job submission

For a general discussion about submitting jobs, see Running jobs.

In the examples below, change the module load command to one of those shown above if you wish to use a newer version.

The following example is a sander serial job script. The input files are in.md, crd.md.23, prmtop.

File : amber_serial.sh

#!/bin/bash
#SBATCH --ntasks=1             # 1 cpu, serial job
#SBATCH --mem-per-cpu=2G       # memory per cpu
#SBATCH --time=00-01:00        # time (DD-HH:MM)
#SBATCH --output=cytosine.log  # .log file from scheduler
module load StdEnv/2016.4
module load amber/16
sander -O -i in.md -c crd.md.23 -o cytosine.out


The following example is a sander.MPI parallel job script:

File : amber_parallel.sh

#!/bin/bash
#SBATCH --nodes=1 --ntasks-per-node=32  # 1 node with 32 cpus, MPI job
#SBATCH --mem-per-cpu=2G                # memory, should be less than 4G
#SBATCH --time=00-01:00                 # time (DD-HH:MM)
#SBATCH --output=sodium.log             # output .log file
module load StdEnv/2016.4
module load amber/16
srun sander.MPI -ng 2 -groupfile groups


You can modify the script to fit your job's requirements for compute resources. See Running jobs.
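
For example, the parallel script above would be submitted and monitored with the usual Slurm commands:

sbatch amber_parallel.sh   # submit the job
squeue -u $USER            # check its status in the queue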