AMBER

== Introduction ==
Amber is the collective name for a suite of programs that allow users to perform molecular dynamics simulations, particularly on biomolecules. None of the individual programs carries this name, but the various parts work reasonably well together and provide a powerful framework for many common calculations.

== Amber vs. AmberTools ==
We have modules for both Amber and AmberTools available in our [[Available software|software stack]].


* The [https://ambermd.org/AmberTools.php AmberTools] (module <code>ambertools</code>) contains a number of tools for preparing and analyzing simulations, as well as <code>sander</code> to perform molecular dynamics simulations, all of which are free and open source.
* [https://ambermd.org/AmberMD.php Amber] (module <code>amber</code>) contains everything that is included in <code>ambertools</code>, but adds the advanced <code>pmemd</code> program for molecular dynamics simulations.


To see a list of installed versions and which other modules they depend on, you can use the <code>module spider</code> [[Using modules#Sub-command_spider|command]] or check the [[Available software]] page.
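For example, to list the installed versions of Amber and see which modules a given version needs to be loaded first, you could run something like the following (illustrative commands using the standard <code>module spider</code> sub-command; the version shown is taken from the table below):

 [name@server $] module spider amber
 [name@server $] module spider amber/22.5-23.5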


 
== Loading modules ==
<tabs>
<tab name="StdEnv/2023">
{| class="wikitable sortable"
|-
! AMBER version !! modules for running on CPUs !! modules for running on GPUs (CUDA) !! Notes
|-
| amber/22.5-23.5 || <code> StdEnv/2023 gcc/12.3 openmpi/4.1.5 amber/22.5-23.5</code> || <code>StdEnv/2023 gcc/12.3 openmpi/4.1.5 cuda/12.2 amber/22.5-23.5</code> || GCC, FlexiBLAS & FFTW
|-
| ambertools/23.5 || <code> StdEnv/2023 gcc/12.3 openmpi/4.1.5 ambertools/23.5</code> || <code>StdEnv/2023 gcc/12.3 openmpi/4.1.5 cuda/12.2 ambertools/23.5</code> || GCC, FlexiBLAS & FFTW
|-
|}</tab>
<tab name="StdEnv/2020">
{| class="wikitable sortable"
|-
! AMBER version !! modules for running on CPUs !! modules for running on GPUs (CUDA) !! Notes
|-
| ambertools/21 || <code>StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 scipy-stack ambertools/21</code> || <code>StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 scipy-stack ambertools/21</code> || GCC, FlexiBLAS & FFTW
|-
| amber/20.12-20.15 || <code>StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.12-20.15</code> || <code>StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 amber/20.12-20.15</code> || GCC, FlexiBLAS & FFTW
|-
| amber/20.9-20.15 || <code>StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15</code> || <code>StdEnv/2020 gcc/9.3.0 cuda/11.0 openmpi/4.0.3 amber/20.9-20.15</code> || GCC, MKL & FFTW
|-
| amber/18.14-18.17 || <code>StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/18.14-18.17</code> || <code>StdEnv/2020 gcc/8.4.0 cuda/10.2 openmpi/4.0.3 amber/18.14-18.17</code> || GCC, MKL
|}</tab>
<tab name="StdEnv/2016">
{| class="wikitable sortable"
|-
! AMBER version !! modules for running on CPUs !! modules for running on GPUs (CUDA) !! Notes
|-
| amber/18 || <code> StdEnv/2016 gcc/5.4.0 openmpi/2.1.1 scipy-stack/2019a amber/18 </code> || <code> StdEnv/2016 gcc/5.4.0 openmpi/2.1.1 cuda/9.0.176 scipy-stack/2019a amber/18</code>  || GCC, MKL
|-
| amber/18.10-18.11 || <code> StdEnv/2016 gcc/5.4.0 openmpi/2.1.1 scipy-stack/2019a amber/18.10-18.11 </code> || <code> StdEnv/2016 gcc/5.4.0 openmpi/2.1.1 cuda/9.0.176 scipy-stack/2019a amber/18.10-18.11</code>  || GCC, MKL
|-
| amber/18.10-18.11 || <code>StdEnv/2016 gcc/7.3.0 openmpi/3.1.2 scipy-stack/2019a amber/18.10-18.11 </code> || <code> StdEnv/2016 gcc/7.3.0  cuda/9.2.148 openmpi/3.1.2 scipy-stack/2019a amber/18.10-18.11 </code>  || GCC, MKL
|-
| amber/16 || <code> StdEnv/2016.4 amber/16 </code> || <code> </code>  || Available only on Graham. Some Python functionality is not supported
|}</tab>
</tabs>
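
For example, to load the CPU build of the current Amber release from the StdEnv/2023 table above, the command would look like this (adjust the module combination to the row you need):

 [name@server $] module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 amber/22.5-23.5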


==Using modules==
===AmberTools 21===
Currently, the AmberTools 21 module is available on all clusters. AmberTools provides the following MD engines: sander, sander.LES, sander.LES.MPI, sander.MPI, sander.OMP, sander.quick.cuda, and sander.quick.cuda.MPI. After loading the module, set the AMBER environment variables:


 source $EBROOTAMBERTOOLS/amber.sh
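
For example, with the StdEnv/2020 module combination listed in the table above, a full session might look like this (an illustrative sketch; pick the combination that matches your environment):

 [name@server $] module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 scipy-stack ambertools/21
 [name@server $] source $EBROOTAMBERTOOLS/amber.sh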


===Amber 20===
There are two versions of the amber/20 module: 20.9-20.15 and 20.12-20.15. The first uses MKL and cuda/11.0, while the second uses FlexiBLAS and cuda/11.4. MKL libraries do not perform well on AMD CPUs; FlexiBLAS solves this problem by detecting the CPU type and using libraries optimized for the hardware. cuda/11.4 is required for running simulations on the A100 GPUs installed on Narval.


CPU-only modules provide all MD programs available in AmberTools 20 plus pmemd (serial) and pmemd.MPI (parallel). GPU modules add pmemd.cuda (single GPU) and pmemd.cuda.MPI (multi-GPU).
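
For example, the two CPU builds listed in the table above can be loaded with either of the following commands, depending on whether you want the FlexiBLAS or the MKL build:

 [name@server $] module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.12-20.15
 [name@server $] module load StdEnv/2020 gcc/9.3.0 openmpi/4.0.3 amber/20.9-20.15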


=== Known issues ===
1. The amber/20.12-20.15 module does not have the MMPBSA.py.MPI executable.


2. MMPBSA.py from the amber/18.10-18.11 and amber/18.14-18.17 modules cannot perform PB calculations. Use the more recent amber/20 modules for this type of calculation.


==Job submission examples==
=== Single GPU job ===
For GPU-accelerated simulations on Narval, use amber/20.12-20.15 or newer; modules compiled with a CUDA version older than 11.4 do not work on its A100 GPUs. Below is an example submission script for a single-GPU job.
{{File
  |name=pmemd_cuda_job.sh
  |lang="bash"
  |contents=
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2000
#SBATCH --time=10:00:00


module purge
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 cuda/12.2 amber/22


pmemd.cuda -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
}}
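
Submit the script to the scheduler in the usual way (see [[Running jobs]] for details), for example:

 [name@server $] sbatch pmemd_cuda_job.sh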


=== CPU-only parallel MPI job ===


<tabs>
<tab name="Graham">
{{File
  |name=pmemd_MPI_job_graham.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32
#SBATCH --mem-per-cpu=2000
#SBATCH --time=1:00:00


module purge
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 amber/22


srun pmemd.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
}}</tab>
<tab name="Cedar">
{{File
  |name=pmemd_MPI_job_cedar.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=48
#SBATCH --mem-per-cpu=2000
#SBATCH --time=1:00:00


module purge
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 amber/22


srun pmemd.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
}}</tab>
<tab name="Béluga">
{{File
  |name=pmemd_MPI_job_beluga.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=40
#SBATCH --mem-per-cpu=2000
#SBATCH --time=1:00:00


module purge
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 amber/22


srun pmemd.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
}}</tab>
<tab name="Narval">
{{File
  |name=pmemd_MPI_job_narval.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=64
#SBATCH --mem-per-cpu=2000
#SBATCH --time=1:00:00


module purge
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 amber/22


srun pmemd.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
}}</tab>
<tab name="Niagara">
{{File
  |name=pmemd_MPI_job_niagara.sh
  |lang="sh"
  |contents=
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=40
#SBATCH --mem-per-cpu=2000
#SBATCH --time=1:00:00


module purge
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 amber/22


srun pmemd.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
 
}}</tab>
</tabs>


=== QM/MM distributed multi-GPU job ===
 
The example below requests eight GPUs.
{{File
  |name=quick_MPI_job.sh
  |lang="bash"
  |contents=
#!/bin/bash
#SBATCH --ntasks=8 --cpus-per-task=1
#SBATCH --gpus-per-task=1
#SBATCH --mem-per-cpu=4000
#SBATCH --time=02:00:00

module purge
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 cuda/12.2 ambertools/23.5

srun sander.quick.cuda.MPI -O -i input.in -p topol.parm7 -c coord.rst7 -o output.mdout -r restart.rst7
}}


=== Parallel MMPBSA job ===
The example below uses 32 MPI processes. MMPBSA scales linearly because each trajectory frame is processed independently.
{{File
  |name=mmpbsa_job.sh
  |lang="bash"
  |contents=
#!/bin/bash
#SBATCH --ntasks=32
#SBATCH --mem-per-cpu=4000
#SBATCH --time=1:00:00

module purge
module load StdEnv/2023 gcc/12.3 openmpi/4.1.5 amber/22

srun MMPBSA.py.MPI -O -i mmpbsa.in -o mmpbsa.dat -sp solvated_complex.parm7 -cp complex.parm7 -rp receptor.parm7 -lp ligand.parm7 -y trajectory.nc
}}
You can modify these scripts to fit your simulation's requirements for computing resources. See [[Running jobs]] for more details.

==Performance and benchmarking==
A team at [https://www.ace-net.ca/ ACENET] has created a [https://mdbench.ace-net.ca/mdbench/ Molecular Dynamics Performance Guide] for Alliance clusters. It can help you determine optimal conditions for AMBER, GROMACS, NAMD, and OpenMM jobs. The present section focuses on AMBER performance.

View benchmarks of simulations with PMEMD [http://mdbench.ace-net.ca/mdbench/bform/?software_contains=PMEMD&software_id=&module_contains=&module_version=&site_contains=&gpu_model=&cpu_model=&arch=&dataset=6n4o].


View benchmarks of QM/MM simulations with SANDER.QUICK [http://mdbench.ace-net.ca/mdbench/bform/?software_contains=&software_id=&module_contains=&module_version=&site_contains=&gpu_model=&cpu_model=&arch=&dataset=4cg1].