AMBER

Introduction

Amber is the collective name for a suite of programs that allow users to perform molecular dynamics simulations, particularly on biomolecules. None of the individual programs carry this name, but the various parts work reasonably well together, and provide a powerful framework for many common calculations.

Amber vs. AmberTools

We have modules for both Amber and AmberTools available in our software stack.

  • AmberTools (module ambertools) contains a number of tools for preparing and analysing simulations, as well as sander for performing molecular dynamics simulations; all of these are free and open source.
  • Amber (module amber) contains everything that is included in ambertools, but adds the advanced pmemd program for molecular dynamics simulations.

To see a list of installed versions and which other modules they depend on, you can use the module spider command or check the Available software page.
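
For example, to list the installed Amber versions and then see which modules a specific version depends on:

module spider amber
module spider amber/20.9-20.15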


Loading AmberTools 21

Currently, AmberTools 21 is available on all clusters.

Non-GPU version

module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh

Provides the following MD engines: sander, sander.LES, sander.LES.MPI, sander.MPI, and sander.OMP.
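
sander.OMP is the OpenMP build and runs threads within a single task. A minimal sketch of launching it, assuming the module is loaded as above (the input file names are placeholders):

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}  # use the cores allocated to the job; fall back to 4 outside a job
sander.OMP -O -i in.md -p prmtop -c inpcrd -o mdout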

GPU version

module load StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh

Provides the following MD engines: sander, sander.LES, sander.LES.MPI, sander.MPI, sander.OMP, sander.quick.cuda, and sander.quick.cuda.MPI.
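
A quick way to test a GPU engine is an interactive job. The sketch below requests a single GPU and runs the QUICK-based engine on placeholder input files; the resource values are illustrative only:

salloc --ntasks=1 --gpus-per-node=1 --mem=4G --time=0-01:00
module load StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 ambertools/21
source $EBROOTAMBERTOOLS/amber.sh
sander.quick.cuda -O -i in.md -p prmtop -c inpcrd -o mdout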

Loading Amber 20

Currently, Amber 20 is available on all clusters.

CPU version linked with MKL

module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15

Provides all MD programs available in AmberTools/20 plus pmemd (serial) and pmemd.MPI (parallel).
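
As an illustration, a whole-node pmemd.MPI job could look like the following sketch; the core count, memory, and input file names are placeholders to adapt to your cluster and system:

#!/bin/bash
#SBATCH --nodes=1 --ntasks-per-node=32  # one full node; adjust to the cluster
#SBATCH --mem-per-cpu=2G
#SBATCH --time=00-01:00
module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15
srun pmemd.MPI -O -i in.md -p prmtop -c inpcrd -o mdout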

GPU version

module load StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 amber/20.9-20.15

Provides all MD programs available in ambertools/20 plus pmemd (serial), pmemd.MPI (parallel), pmemd.cuda (single GPU), and pmemd.cuda.MPI (multi-GPU).
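
pmemd.cuda runs on a single GPU, so one task and one GPU suffice; a minimal sketch with placeholder input file names:

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=4G
#SBATCH --time=00-01:00
module load StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 amber/20.9-20.15
pmemd.cuda -O -i in.md -p prmtop -c inpcrd -o mdout

To use several GPUs, launch pmemd.cuda.MPI with srun instead, typically with one MPI task per GPU.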

Loading Amber 18

Currently, versions 18 and 18.10-18.11 are available on all clusters.

Non-GPU versions

module load gcc/5.4.0 openmpi/2.1.1 amber/18 scipy-stack/2019a

or

module load gcc/5.4.0 openmpi/2.1.1 amber/18.10-18.11 scipy-stack/2019a

GPU versions

module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18 scipy-stack/2019a

or

module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18.10-18.11 scipy-stack/2019a

Known issues

The MMPBSA.py programs from the amber/18.10-18.11 and amber/18.14-18.17 modules cannot perform PB calculations; use the more recent amber/20 modules for PB calculations.

Loading Amber 16

Amber 16 is currently available only on Graham due to license restrictions. It was built with the previous software environment, StdEnv/2016.4, so load StdEnv/2016.4 before loading amber/16:

[name@server $] module load StdEnv/2016.4
[name@server $] module load amber/16 

This version does not support some Python functionality of Amber.

Job submission

For a general discussion about submitting jobs, see Running jobs.

In the examples below, replace the module load commands with those shown above if you wish to use a newer version.

The following example is a serial sander job script. The input files are in.md, crd.md.23, and prmtop.

File : amber_serial.sh

#!/bin/bash
#SBATCH --ntasks=1             # 1 CPU, serial job
#SBATCH --mem-per-cpu=2G       # memory per CPU
#SBATCH --time=00-01:00        # time (DD-HH:MM)
#SBATCH --output=cytosine.log  # .log file from scheduler
module load StdEnv/2016.4
module load amber/16
sander -O -i in.md -c crd.md.23 -o cytosine.out
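
With the input files in the submission directory, submit the script in the usual way:

sbatch amber_serial.sh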


The following example is a sander.MPI parallel job script:

File : amber_parallel.sh

#!/bin/bash
#SBATCH --nodes=1 --ntasks-per-node=32  # 1 node with 32 CPUs, MPI job
#SBATCH --mem-per-cpu=2G                # memory per CPU, should be less than 4G
#SBATCH --time=00-01:00                 # time (DD-HH:MM)
#SBATCH --output=sodium.log             # output .log file
module load StdEnv/2016.4
module load amber/16
srun sander.MPI -ng 2 -groupfile groups
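
Here sander.MPI runs in multisander mode: -ng 2 splits the MPI tasks into two groups, and each line of the groupfile holds the command-line options for one group. A minimal sketch of such a groups file, with placeholder file names:

-O -i in.md.1 -p prmtop -c crd.md.1 -o out.md.1
-O -i in.md.2 -p prmtop -c crd.md.2 -o out.md.2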


You can modify the script to fit your job's requirements for compute resources. See Running jobs.