AMBER
Introduction
Amber is the collective name for a suite of programs that allow users to perform molecular dynamics simulations, particularly on biomolecules. None of the individual programs carry this name, but the various parts work reasonably well together, and provide a powerful framework for many common calculations.
Running Amber 18
Currently, versions 18 and 18.10-18.11 are available on all clusters.
Non-GPU versions
module load gcc/5.4.0 openmpi/2.1.1 amber/18 scipy-stack/2019a
or
module load gcc/5.4.0 openmpi/2.1.1 amber/18.10-18.11 scipy-stack/2019a
GPU versions:
module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18 scipy-stack/2019a
or
module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18.10-18.11 scipy-stack/2019a
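For GPU runs, Amber provides the pmemd.cuda engine. Below is a minimal sketch of a single-GPU job script, not taken from this page itself: the resource values and the output file name gpu_job.out are placeholders, and the input file names follow the sander examples below.
#!/bin/bash
#SBATCH --ntasks=1                # pmemd.cuda runs on a single task
#SBATCH --gres=gpu:1              # request one GPU
#SBATCH --mem-per-cpu=2G          # memory per cpu
#SBATCH --time=00-01:00           # time (DD-HH:MM)
#SBATCH --output=gpu_job.log      # .log file from scheduler
module load gcc/5.4.0 cuda/9.0.176 openmpi/2.1.1 amber/18 scipy-stack/2019a
pmemd.cuda -O -i in.md -c crd.md.23 -p prmtop -o gpu_job.out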
Running Amber 16
Amber 16 is currently installed only on Graham due to license restrictions. Load it using the module command:
[name@server $] module load amber/16
This version does not support some Python functionality of Amber.
Job submission
For a general discussion about submitting jobs, see Running jobs.
In the examples below, change the module load command to one of those above if you wish to use a newer version.
The following example is a sander serial job script. The input files are in.md, crd.md.23, and prmtop.
#!/bin/bash
#SBATCH --ntasks=1 # 1 cpu, serial job
#SBATCH --mem-per-cpu=2G # memory per cpu
#SBATCH --time=00-01:00 # time (DD-HH:MM)
#SBATCH --output=cytosine.log # .log file from scheduler
module load amber/16
sander -O -i in.md -c crd.md.23 -o cytosine.out
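Assuming the script above is saved as serial_job.sh (a file name chosen here for illustration), it is submitted with sbatch:
sbatch serial_job.sh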
The following example is a sander.MPI parallel job script:
#!/bin/bash
#SBATCH --nodes=1 --ntasks-per-node=32 # 1 node with 32 cpus, MPI job
#SBATCH --mem-per-cpu=2G               # memory per cpu, should be less than 4G
#SBATCH --time=00-01:00 # time (DD-HH:MM)
#SBATCH --output=sodium.log # output .log file
module load amber/16
srun sander.MPI -ng 2 -groupfile groups
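The -ng 2 flag tells sander.MPI to run two independent groups, and the groupfile (here named groups) holds one line of sander command-line options per group. As a sketch, with hypothetical per-group input and output names, a two-group file could look like:
-O -i in.md.1 -p prmtop -c crd.md.1 -o group1.out
-O -i in.md.2 -p prmtop -c crd.md.2 -o group2.out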
You can modify the script to fit your job's requirements for compute resources. See Running jobs.
Examples
Sample *.sh and input files can be found on Graham under
/home/jemmyhu/tests/test_Amber/
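To try them out, copy the directory into your own space first, for example (the destination path here is just an illustration):
cp -r /home/jemmyhu/tests/test_Amber ~/scratch/test_Amber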