AMBER

Introduction

Amber is the collective name for a suite of programs that allow users to carry out molecular dynamics simulations, particularly on biomolecules. None of the individual programs carries this name, but the various parts work reasonably well together, and provide a powerful framework for many common calculations.

Running Amber 16 on Graham

Amber 16 is installed on Graham and available through the modules system. You can load it using

[name@server ~]$ module load amber/16
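
If you are not sure which Amber versions are installed, you can query the module system first (a standard Lmod command; the versions listed will depend on the cluster):

[name@server ~]$ module spider amber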

Job Submission

Graham uses the Slurm scheduler; for details about submitting jobs, see Running jobs.

The following example is a sander serial job script, mysub.sh (inputs: in.md, crd.md.23, prmtop).

#!/bin/bash
#SBATCH --ntasks=1           # 1 cpu, serial job
#SBATCH --mem-per-cpu=2G     # memory per cpu
#SBATCH --time=00-01:00        # time (DD-HH:MM)
#SBATCH --output=cytosine.log  # .log file from scheduler
module load amber/16
sander -O  -i in.md  -c crd.md.23  -o cytosine.out
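
Once the script is saved as mysub.sh, submit it to the scheduler with sbatch (standard Slurm usage; adjust the file name if your script is called something else):

[name@server ~]$ sbatch mysub.sh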

The following example is a sander.MPI parallel job script, mysub.sh (inputs: in.md, crd.md.23, prmtop, and a group file named groups).

#!/bin/bash
#SBATCH --nodes=1 --ntasks-per-node=32  # 1 node with 32 cpus, MPI job
#SBATCH --mem-per-cpu=2G                # memory per CPU; should be less than 4G
#SBATCH --time=00-01:00                 # time (DD-HH:MM)
#SBATCH --output=sodium.log             # output .log file
module load amber/16
srun sander.MPI -ng 2 -groupfile groups   # launch sander.MPI on the allocated MPI tasks, with two groups defined in the group file
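
The -ng 2 option runs two sander groups, each described by one line of the group file. A minimal sketch of such a file (here called groups), assuming hypothetical per-group input files in.md.1 and in.md.2, might look like:

-O -i in.md.1 -p prmtop -c crd.md.23 -o sodium.1.out
-O -i in.md.2 -p prmtop -c crd.md.23 -o sodium.2.out

Each line uses the same flags as a serial sander command; see the Amber manual for the options relevant to your simulation.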

You can modify these scripts to fit your job's compute-resource requirements.