AMBER

Introduction

Amber is the collective name for a suite of programs that allow users to perform molecular dynamics simulations, particularly on biomolecules. None of the individual programs carry this name, but the various parts work reasonably well together, and provide a powerful framework for many common calculations.

Running Amber 18

Currently, versions 18 and 18.10-18.11 are available on all clusters.
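
You can confirm which versions are installed on the cluster you are using by querying the module system:

[name@server ~]$ module spider amber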

Non-GPU versions:

module load gcc/5.4.0 openmpi/2.1.1 amber/18 scipy-stack/2019a

or

module load gcc/5.4.0 openmpi/2.1.1 amber/18.10-18.11 scipy-stack/2019a

GPU versions:

module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18 scipy-stack/2019a

or

module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18.10-18.11 scipy-stack/2019a
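
With a GPU module loaded, simulations can be run with pmemd.cuda, Amber's GPU-accelerated engine. A minimal sketch of a single-GPU job script follows; the input file names (md.in, prmtop, inpcrd) and the resource requests are placeholders to adapt to your own job.

File : amber_gpu.sh

#!/bin/bash
#SBATCH --ntasks=1             # 1 CPU to drive the GPU
#SBATCH --gres=gpu:1           # request 1 GPU
#SBATCH --mem-per-cpu=2G       # memory per CPU
#SBATCH --time=00-01:00        # time limit (DD-HH:MM)
#SBATCH --output=md.log        # log file from the scheduler
module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18 scipy-stack/2019a
pmemd.cuda -O -i md.in -p prmtop -c inpcrd -o md.out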



Running Amber 16

Amber 16 is currently installed only on Graham due to license restrictions. Load it using the module command:

[name@server ~]$ module load amber/16

This version does not support some of Amber's Python functionality.

Job submission

For a general discussion about submitting jobs, see Running jobs.

The following example is a sander serial job script. The input files are in.md, crd.md.23, prmtop.

File : amber_serial.sh

#!/bin/bash
#SBATCH --ntasks=1             # 1 CPU, serial job
#SBATCH --mem-per-cpu=2G       # memory per CPU
#SBATCH --time=00-01:00        # time limit (DD-HH:MM)
#SBATCH --output=cytosine.log  # log file from the scheduler
module load amber/16
sander -O -i in.md -c crd.md.23 -o cytosine.out  # prmtop is read by default as the topology file
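
Submit the script with sbatch:

[name@server ~]$ sbatch amber_serial.sh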


The following example is a sander.MPI parallel job script:

File : amber_parallel.sh

#!/bin/bash
#SBATCH --nodes=1 --ntasks-per-node=32  # 1 node with 32 CPUs, MPI job
#SBATCH --mem-per-cpu=2G                # memory per CPU, should be less than 4G
#SBATCH --time=00-01:00                 # time limit (DD-HH:MM)
#SBATCH --output=sodium.log             # log file from the scheduler
module load amber/16
srun sander.MPI -ng 2 -groupfile groups
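
Here sander.MPI runs in multisander mode: -ng 2 divides the MPI tasks into two groups, and each line of the groupfile supplies the command-line arguments for one group. A hypothetical groups file for two independent simulations might look like this (all file names are placeholders):

-O -i in.md.1 -p prmtop -c crd.md.1 -o out.md.1
-O -i in.md.2 -p prmtop -c crd.md.2 -o out.md.2

With 32 tasks and two groups, each simulation runs on 16 MPI tasks.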


You can modify these scripts to fit your job's compute resource requirements; see Running jobs.

Examples

Sample *.sh and input files can be found on Graham under

/home/jemmyhu/tests/test_Amber/
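
To try them out, copy the directory to your own space first, for example:

[name@server ~]$ cp -r /home/jemmyhu/tests/test_Amber ~/scratch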