GROMACS

General

GROMACS is a versatile package to perform molecular dynamics for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.

Strengths

  • GROMACS provides extremely high performance compared to all other programs.
  • Since GROMACS 4.6, it offers excellent CUDA-based GPU acceleration on GPUs with Nvidia compute capability >= 2.0 (e.g. Fermi or later).
  • GROMACS comes with a large selection of flexible tools for trajectory analysis.
  • GROMACS can be run in parallel, using either the standard MPI communication protocol or its built-in "Thread MPI" library for single-node workstations.
  • GROMACS is free software, available under the GNU Lesser General Public License (LGPL), version 2.1.

Weak points

  • To achieve its very high simulation speed, GROMACS does little additional analysis or data collection on the fly. It may therefore be a challenge to obtain somewhat non-standard information about the simulated system from a GROMACS simulation.
  • Different versions may have significant differences in simulation methods and default parameters. Reproducing results of older versions with a newer version may not be straightforward.
  • Additional tools and utilities that come with GROMACS are not always of the highest quality, may contain bugs and may implement poorly documented methods. Reconfirming the results of such tools with independent methods is always a good idea.

GPU support

The top part of any log file will describe the configuration, and in particular whether your version has GPU support compiled in. GROMACS will automatically use any GPUs it finds.

GROMACS uses both CPUs and GPUs; it relies on a reasonable balance between CPU and GPU performance.
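For example, a quick way to verify that the version you have loaded was built with GPU support is to inspect the version header (the same information appears at the top of the md.log file); the exact wording of the line may vary between versions:

$ gmx --version | grep -i "gpu support"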

The Verlet neighbor-list scheme introduced in version 4.6 is selected with the cutoff-scheme variable in the mdp file. The behaviour of older GROMACS versions (before 4.6) corresponds to cutoff-scheme = group, while in order to use GPU acceleration you must set cutoff-scheme = verlet, which has become the default in version 5.0.
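As an illustration, the relevant fragment of an .mdp file could look like the following sketch; the cut-off values are placeholders and must match your force field and system:

; select the Verlet neighbour-list scheme (required for GPU acceleration)
cutoff-scheme   = Verlet
rcoulomb        = 1.0
rvdw            = 1.0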

Quickstart guide

This section summarizes configuration details.

Environment modules

The following versions have been installed:

GROMACS version | modules for running on CPUs | modules for running on GPUs (CUDA) | Notes
gromacs/2020.2 | gcc/7.3.0 openmpi/3.1.2 gromacs/2020.2 | gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2 gromacs/2020.2 | GCC & MKL
gromacs/2019.6 | gcc/7.3.0 openmpi/3.1.2 gromacs/2019.6 | gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2 gromacs/2019.6 | GCC & MKL
gromacs/2019.3 | gcc/7.3.0 openmpi/3.1.2 gromacs/2019.3 | gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2 gromacs/2019.3 | GCC & MKL; double precision not available for AVX512
gromacs/2018.7 | gcc/7.3.0 openmpi/3.1.2 gromacs/2018.7 | gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2 gromacs/2018.7 | GCC & MKL
gromacs/2018.3 | gcc/6.4.0 openmpi/2.1.1 gromacs/2018.3 | gcc/6.4.0 cuda/9.0.176 openmpi/2.1.1 gromacs/2018.3 | GCC & FFTW
gromacs/2018.2 | gcc/6.4.0 openmpi/2.1.1 gromacs/2018.2 | gcc/6.4.0 cuda/9.0.176 openmpi/2.1.1 gromacs/2018.2 | GCC & FFTW
gromacs/2018.1 | gcc/6.4.0 openmpi/2.1.1 gromacs/2018.1 | gcc/6.4.0 cuda/9.0.176 openmpi/2.1.1 gromacs/2018.1 | GCC & FFTW
gromacs/2018 | gromacs/2018 | cuda/9.0.176 gromacs/2018 | Intel & MKL
gromacs/2016.5 | gcc/6.4.0 openmpi/2.1.1 gromacs/2016.5 | gcc/6.4.0 cuda/9.0.176 openmpi/2.1.1 gromacs/2016.5 | GCC & FFTW
gromacs/2016.3 | gromacs/2016.3 | cuda/8.0.44 gromacs/2016.3 | Intel & MKL
gromacs/5.1.5 | gromacs/5.1.5 | cuda/8.0.44 gromacs/5.1.5 | Intel & MKL
gromacs/5.1.4 | gromacs/5.1.4 | cuda/8.0.44 gromacs/5.1.4 | Intel & MKL
gromacs/5.0.7 | gromacs/5.0.7 | cuda/8.0.44 gromacs/5.0.7 | Intel & MKL
gromacs/4.6.7 | gromacs/4.6.7 | cuda/8.0.44 gromacs/4.6.7 | Intel & MKL

Versions 2018.7 and newer have been compiled with GCC compilers, MKL and Open MPI 3.1.2 libraries, as they run a bit faster. Older versions have been compiled either with GCC compilers and FFTW or with Intel compilers and Intel MKL, using Open MPI 2.1.1 libraries from the default environment, as indicated in the table above. CPU (non-GPU) versions are available in both single and double precision.

These modules can be loaded by using a module load command with the modules listed in the second column of the table above. For example:

$ module load  gcc/7.3.0 openmpi/3.1.2 gromacs/2020.2

These versions are also available with GPU support, albeit only in single precision. In order to load the GPU-enabled version, the cuda module needs to be loaded first. The modules needed are listed in the third column of the table above, e.g.:

$ module load  gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2  gromacs/2020.2
or
$ module load  cuda/8.0.44  gromacs/2016.3 

For more information on environment modules, please refer to the Using modules page.

Suffixes

GROMACS 5.x, 2016.x and newer

GROMACS 5 and newer releases consist of only four binaries that contain the full functionality. All GROMACS tools from previous versions have been implemented as sub-commands of these gmx binaries. Please refer to GROMACS 5.0 Tool Changes and the GROMACS documentation manual for your version. The binaries are (see the usage examples below):

  • gmx - single precision GROMACS with OpenMP (threading) but without MPI.
  • gmx_mpi - single precision GROMACS with OpenMP and MPI.
  • gmx_d - double precision GROMACS with OpenMP but without MPI.
  • gmx_mpi_d - double precision GROMACS with OpenMP and MPI.
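For example, using the same mdrun options as in the submission scripts below, a single-node job can use the thread-MPI binary gmx, while multi-node MPI jobs started with srun use gmx_mpi; the double-precision variants are used in the same way:

gmx mdrun -deffnm md              # single node: thread-MPI + OpenMP
srun gmx_mpi mdrun -deffnm md     # multiple nodes: started as an MPI job
srun gmx_mpi_d mdrun -deffnm md   # the same in double precision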

GROMACS 4.6.7

  • The double precision binaries have the suffix _d.
  • The parallel single and double precision mdrun binaries are:
    • mdrun_mpi
    • mdrun_mpi_d

Submission scripts

Please refer to the page Running jobs for help on using the SLURM workload manager.

Serial jobs

Here's a simple job script for serial mdrun:


File : serial_gromacs_job.sh

#!/bin/bash
#SBATCH --time=0-00:30        # time limit (D-HH:MM)
#SBATCH --mem-per-cpu=1000M   # memory per CPU (in MB)
module purge  
module load gcc/7.3.0 openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

gmx mdrun -deffnm em


This will run the simulation of the molecular system in the file em.tpr.
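The job script can then be submitted to the scheduler with sbatch:

$ sbatch serial_gromacs_job.sh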

Whole nodes

Often the systems simulated with GROMACS are so large that you will want to use a number of whole nodes for the simulation.

Generally, the product of --ntasks-per-node and --cpus-per-task has to match the number of CPU cores in the compute nodes of the cluster. Please see the section Performance considerations below.

File : gromacs_whole_node_graham.sh

#!/bin/bash
#SBATCH --nodes=1                # number of nodes
#SBATCH --ntasks-per-node=8      # request 8 MPI tasks per node
#SBATCH --cpus-per-task=4        # 4 OpenMP threads per MPI task => total: 8 x 4 = 32 CPUs/node
#SBATCH --mem=0                  # request all available memory on the node
#SBATCH --time=0-01:00           # time limit (D-HH:MM)
module purge  
module load gcc/7.3.0  openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

srun gmx_mpi mdrun -deffnm md
File : gromacs_whole_node_cedar.sh

#!/bin/bash
#SBATCH --nodes=1                        # number of nodes
#SBATCH --ntasks-per-node=12             # request 12 MPI tasks per node
#SBATCH --cpus-per-task=4                # 4 OpenMP threads per MPI task => total: 12 x 4 = 48 CPUs/node
#SBATCH --constraint="[skylake|cascade]" # restrict to AVX512 capable nodes.
#SBATCH --mem=0                          # request all available memory on the node
#SBATCH --time=0-01:00                   # time limit (D-HH:MM)
module purge
module load arch/avx512 StdEnv/2018.3  # switch architecture for up to 30% speedup
module load gcc/7.3.0  openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
 
srun gmx_mpi mdrun -deffnm md
File : gromacs_whole_node_beluga.sh

#!/bin/bash
#SBATCH --nodes=1                # number of nodes
#SBATCH --ntasks-per-node=10     # request 10 MPI tasks per node
#SBATCH --cpus-per-task=4        # 4 OpenMP threads per MPI task => total: 10 x 4 = 40 CPUs/node
#SBATCH --mem=0                  # request all available memory on the node
#SBATCH --time=0-01:00           # time limit (D-HH:MM)
module purge  
module load gcc/7.3.0  openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
 
srun gmx_mpi mdrun -deffnm md
File : gromacs_whole_node_niagara.sh

#!/bin/bash
#SBATCH --nodes=1                # number of nodes
#SBATCH --ntasks-per-node=10     # request 10 MPI tasks per node
#SBATCH --cpus-per-task=4        # 4 OpenMP threads per MPI task => total: 10 x 4 = 40 CPUs/node
#SBATCH --mem=0                  # request all available memory on the node
#SBATCH --time=0-01:00           # time limit (D-HH:MM)
module purge --force
module load CCEnv
module load StdEnv/2018.3
module load gcc/7.3.0  openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
 
srun gmx_mpi mdrun -deffnm md

GPU job

This is a job script for mdrun using 4 OpenMP threads and one GPU:

File : gpu_gromacs_job.sh

#!/bin/bash
#SBATCH --gres=gpu:1             # request 1 GPU as "generic resource"
#SBATCH --cpus-per-task 4        # number of OpenMP threads per MPI process
#SBATCH --mem-per-cpu 1000       # memory limit per CPU core (megabytes)
#SBATCH --time 0:30:00           # time limit (D-HH:MM:ss)
module purge  
module load gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

gmx mdrun -ntomp ${SLURM_CPUS_PER_TASK:-1} -deffnm md


GPU job - whole node

These are job scripts for mdrun using all GPUs and CPUs within a GPU node.

File : gromacs_job_GPU_MPI_Graham.sh

#!/bin/bash
#SBATCH --nodes=1                # number of nodes
#SBATCH --gres=gpu:2             # request 2 GPUs per node (Graham)
#SBATCH --ntasks-per-node=4      # request 4 MPI tasks per node
#SBATCH --cpus-per-task=8        # 8 OpenMP threads per MPI process
#SBATCH --mem=0                  # Request all available memory in the node
#SBATCH --time=1:00:00           # time limit (D-HH:MM:ss)
module purge  
module load gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

mpiexec gmx_mpi mdrun -deffnm md
File : gromacs_job_GPU_MPI_Cedar.sh

#!/bin/bash
#SBATCH --nodes=1                # number of nodes
#SBATCH --gres=gpu:4             # request 4 GPUs per node (Cedar)
#SBATCH --ntasks-per-node=4      # request 4 MPI tasks per node
#SBATCH --cpus-per-task=6        # 6 OpenMP threads per MPI process
#SBATCH --mem=0                  # Request all available memory in the node
#SBATCH --time=1:00:00           # time limit (D-HH:MM:ss)
module purge  
module load gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

mpiexec gmx_mpi mdrun -deffnm md
File : gromacs_job_GPU_MPI_Beluga.sh

#!/bin/bash
#SBATCH --nodes=1                # number of nodes
#SBATCH --gres=gpu:4             # request 4 GPUs per node (Beluga)
#SBATCH --ntasks-per-node=8      # request 8 MPI tasks per node
#SBATCH --cpus-per-task=5        # 5 OpenMP threads per MPI process
#SBATCH --mem=0                  # Request all available memory in the node
#SBATCH --time=1:00:00           # time limit (D-HH:MM:ss)
module purge  
module load gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2  gromacs/2019.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

srun gmx_mpi mdrun -deffnm md
Notes on running GROMACS on GPUs

  • The national clusters Cedar and Graham have differently configured GPU nodes:
    • Cedar has 4 GPUs and 24 CPU cores per node
    • Graham has 2 GPUs and 32 CPU cores per node
    Therefore one needs different settings to make use of all GPUs and CPU cores in a node:
    • Cedar: --gres=gpu:4 --ntasks-per-node=4 --cpus-per-task=6
    • Graham: --gres=gpu:2 --ntasks-per-node=4 --cpus-per-task=8
    Of course the simulated system needs to be large enough to utilize these resources.
  • GROMACS imposes a number of constraints on the choice of the number of GPUs, tasks (MPI ranks) and OpenMP threads. For GROMACS 2018.2 the constraints are:
    • The number of --ntasks-per-node always needs to be a multiple of the number of GPUs (--gres=gpu:).
    • GROMACS will not run GPU jobs with only one OpenMP thread, unless forced by setting the -ntomp option. According to the GROMACS developers, the optimum number of --cpus-per-task is between 2 and 6.
  • Avoid using a larger fraction of the CPUs and memory than the fraction of the GPUs you have requested in a node (see the sketch below).
  • While, according to the developers of the Slurm scheduler, using srun as a replacement for mpiexec/mpirun is the preferred way to start MPI jobs, we have seen evidence of jobs failing on startup when two jobs using srun are started on the same compute node. At this time we therefore recommend using mpiexec, especially when utilizing only partial nodes.
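As a sketch of the proportionality advice above, a single-GPU job on Cedar would request one of the four GPUs together with the matching quarter of the CPU cores; the memory value is only illustrative:

#SBATCH --gres=gpu:1          # 1 of the 4 GPUs in a Cedar GPU node
#SBATCH --ntasks-per-node=1   # a single MPI task (a multiple of the GPU count)
#SBATCH --cpus-per-task=6     # 6 of the 24 CPU cores, i.e. the same quarter of the node
#SBATCH --mem-per-cpu=4000    # keep the memory request proportional as well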

Usage

More content for this section will be added at a later time.

System preparation

In order to run a simulation, one needs to create a tpr file (portable binary run input file). This file contains the starting structure of the simulation, the molecular topology and all the simulation parameters.

Tpr files are created with the gmx grompp command (or simply grompp for versions older than 5.0). For this, the following files are needed (see the example below):

  • The coordinate file with the starting structure. GROMACS can read the starting structure from various file formats, such as .gro, .pdb or .cpt (checkpoint).
  • The (system) topology (.top) file. It defines which force field is used and how the force-field parameters are applied to the simulated system. Often the topologies for individual parts of the simulated system (e.g. molecules) are placed in separate .itp files and included in the .top file using a #include directive.
  • The run-parameter (.mdp) file. See the GROMACS user guide for a detailed description of the options.
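For example, with an input structure start.gro, a topology topol.top and run parameters in md.mdp (all three file names are placeholders), the run input file md.tpr is created with:

$ gmx grompp -f md.mdp -c start.gro -p topol.top -o md.tpr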

Tpr files are portable, that is, they can be grompp'ed on one machine, copied over to a different machine and used as an input file for mdrun there. However, one should always use the same GROMACS version for both grompp and mdrun. Although mdrun is able to use tpr files that have been created with an older version of grompp, this can lead to unexpected simulation results.

Running simulations

MD simulations often take much longer than the maximum walltime for a job and therefore need to be restarted. To minimize the time a job waits before it starts, you should maximize the number of nodes you have access to by choosing a shorter running time for your job. Requesting a walltime of 24 hours or 72 hours (three days) is often a good trade-off between waiting and running time.

You should use the mdrun parameter -maxh to tell the program the requested walltime so that it gracefully finishes the current timestep when reaching 99% of this walltime. This causes mdrun to create a new checkpoint file at this final timestep and gives it the chance to properly close all output-files (trajectories, energy- and log-files, etc.).

For example, use #SBATCH --time=24:00:00 along with gmx mdrun -maxh 24 ..., or #SBATCH --time=3-00:00 along with gmx mdrun -maxh 72 ....


File : gromacs_job.sh

#!/bin/bash
#SBATCH --nodes=1                # number of Nodes
#SBATCH --tasks-per-node=32      # number of MPI processes per node
#SBATCH --mem-per-cpu=4000       # memory limit per CPU (megabytes)
#SBATCH --time=24:00:00          # time limit (D-HH:MM:ss)
module purge
module load gcc/6.4.0 openmpi/2.1.1 gromacs/2018.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

srun  gmx_mpi  mdrun  -deffnm md  -maxh 24



Restarting simulations

You can restart a simulation by using the same mdrun command as the original simulation and adding the -cpi state.cpt parameter where state.cpt is the filename of the most recent checkpoint file. Mdrun will by default (since version 4.5) try to append to the existing files (trajectories, energy- and log-files, etc.). GROMACS will check the consistency of the output files and - if needed - discard timesteps that are newer than that of the checkpoint file.

Using the -maxh parameter ensures that the checkpoint and output files are written in a consistent state when the simulation reaches the time limit.

The GROMACS manual contains more detailed information [1] [2].


File : gromacs_job_restart.sh

#!/bin/bash
#SBATCH --nodes=1                # number of Nodes
#SBATCH --tasks-per-node=32      # number of MPI processes per node
#SBATCH --mem-per-cpu=4000       # memory limit per CPU (megabytes)
#SBATCH --time=24:00:00          # time limit (D-HH:MM:ss)
module purge
module load gcc/6.4.0 openmpi/2.1.1 gromacs/2018.3
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"

srun  gmx_mpi  mdrun  -deffnm md  -maxh 24.0  -cpi md.cpt


Performance considerations

Getting the best mdrun performance with GROMACS is not a straightforward task. The GROMACS developers maintain a long section of their user guide dedicated to mdrun performance[3], which explains all relevant options/parameters and strategies.

There is no "one size fits all": the best parameters to choose depend strongly on the size of the system (the number of particles as well as the size and shape of the simulation box) and on the simulation parameters (cut-offs, use of the Particle-Mesh-Ewald[4] (PME) method for long-range electrostatics).

GROMACS prints performance information and statistics at the end of the md.log file, which is helpful in identifying bottlenecks. This section often contains notes on how to further improve the performance.

The simulation performance is typically quantified by the number of nanoseconds of MD-trajectory that can be simulated within a day (ns/day).

Parallel scaling is a measure of how effectively the compute resources are used. It is defined as:

S = p_N / ( N * p_1 )

where p_1 is the performance using a single CPU core and p_N is the performance using N CPU cores.

Ideally, the performance increases linearly with the number of CPU cores ("linear scaling"; S = 1).
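For example, with hypothetical numbers: if one CPU core achieves p_1 = 10 ns/day and 32 cores achieve p_32 = 240 ns/day, then S = 240 / (32 * 10) = 0.75, i.e. the cores are used at 75% efficiency.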


MPI processes / Slurm tasks / Domain decomposition

The most straightforward way to improve performance is to increase the number of MPI processes (called MPI ranks in the GROMACS documentation) by using Slurm's --ntasks or --ntasks-per-node options in the job script.

GROMACS uses Domain Decomposition[4] (DD) to distribute the work of solving the non-bonded Particle-Particle (PP) interactions across multiple CPU cores. This is done by effectively cutting the simulation box along the X, Y and/or Z axes into domains and assigning each domain to one MPI process.

This works well until the time needed for communication becomes large compared to the size (in terms of both the number of particles and the volume) of the domains. In that case the parallel scaling drops significantly below 1, and in extreme cases the performance even drops when the number of domains is increased.

GROMACS can use Dynamic Load Balancing to shift the boundaries between domains to some extent, in order to avoid certain domains taking significantly longer to solve than others. The mdrun parameter -dlb auto is the default.

Domains cannot be smaller in any direction than the longest cut-off radius.
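As a sketch, you can check in the md.log file which decomposition mdrun has chosen, and, if needed, override it with the -dd option; the grid 4 x 2 x 2 below is only an example and its product must match the number of PP ranks, and the exact wording of the log line may vary between versions:

$ grep "Domain decomposition grid" md.log
srun gmx_mpi mdrun -deffnm md -dd 4 2 2   # force a 4 x 2 x 2 domain grid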


Long-range interactions with PME

The Particle-Mesh-Ewald method (PME) is often used to calculate the long-range non-bonded interactions (interactions beyond the cut-off radius). As PME requires global communication, performance can degrade quickly when many MPI processes are involved that each calculate both the short-range (PP) and the long-range (PME) interactions. This is avoided by having dedicated MPI processes that only perform PME (PME ranks).

By default, GROMACS mdrun uses heuristics to dedicate a number of MPI processes to PME when the total number of MPI processes is 12 or greater. The mdrun parameter -npme can be used to select the number of PME ranks manually.

In case there is a significant "Load Imbalance" between the PP and PME ranks (e.g. the PP ranks have more work per timestep than the PME ranks), one can shift work from the PP ranks to the PME ranks by increasing the cut-off radius. This will not affect the result, as the sum of short-range and long-range forces (or energies) is the same for a given timestep. Since version 4.6, mdrun will attempt to do this automatically unless the mdrun parameter -notunepme is used.

Since version 2018, PME can be offloaded to the GPU (see below); however, the implementation as of version 2018.1 still has several limitations[5], among them that only a single GPU rank can be dedicated to PME.
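As a minimal sketch, the number of dedicated PME ranks can be set by hand; here 4 of the job's MPI ranks are reserved for PME, which is only a starting point and should be checked against the statistics at the end of md.log:

srun gmx_mpi mdrun -deffnm md -npme 4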


OpenMP threads / CPUs-per-task

Once Domain Decomposition with MPI processes reaches the scaling limit (parallel scaling starts dropping), performance can be further improved by using OpenMP threads to spread the work of an MPI process (rank) over more than one CPU core. To use OpenMP threads, use Slurm's --cpus-per-task parameter in the job script and either set the OMP_NUM_THREADS variable with: export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}" (recommended) or the mdrun parameter -ntomp ${SLURM_CPUS_PER_TASK:-1}.

According to the GROMACS developers, the optimum is usually between 2 and 6 OpenMP threads per MPI process (cpus-per-task). However, for jobs running on a very large number of nodes it might be worth trying an even larger number of cpus-per-task.

Especially for systems that don't use PME, we don't have to worry about a PP-PME load imbalance. In those cases we can choose 2 or 4 for ntasks-per-node and set cpus-per-task to a value such that ntasks-per-node * cpus-per-task matches the number of CPU cores in a compute node, as sketched below.
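For example, on a cluster whose compute nodes have 48 CPU cores (as in the Cedar whole-node script above), one possible split would be 4 MPI tasks with 12 OpenMP threads each; whether this outperforms 12 tasks with 4 threads should be checked with a short benchmark:

#SBATCH --ntasks-per-node=4   # 4 MPI ranks per node
#SBATCH --cpus-per-task=12    # 12 OpenMP threads per rank: 4 x 12 = 48 cores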

CPU architecture

GROMACS uses optimised kernel functions to compute the real-space portion of short-range, non-bonded interactions. Kernel functions are available for a variety of SIMD instruction sets, such as AVX, AVX2, and AVX512. Kernel functions are chosen when compiling GROMACS, and should match the capabilities of the CPUs that will be used to run the simulations. This is done for you by the Compute Canada team: when you load a GROMACS module into your environment, an appropriate AVX/AVX2/AVX512 version is chosen depending on the architecture of the cluster. GROMACS reports what SIMD instruction set it supports in its log file, and will warn you if the selected kernel function is suboptimal.

However, certain clusters contain a mix of CPUs that have different levels of SIMD support. When that is the case, the smallest common denominator is used. For instance, if the cluster has Skylake (AVX/AVX2/AVX512) and Broadwell (AVX/AVX2) CPUs, as Cedar currently (May 2020) does, a version of GROMACS compiled for the AVX2 instruction set will be used. This means that you may end up with a suboptimal choice of kernel function, depending on which compute nodes the scheduler allocates for your job.

You can explicitly request nodes that support AVX512 with the --constraint=cascade|skylake SLURM option on clusters that offer these node types. (If working on the command-line, use quotes around this option ("--constraint=cascade|skylake") to protect the | character.) You can then explicitly request AVX512 software using module load arch/avx512 before loading any other module. For example, a simple job script could look like the following:

#!/bin/bash

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=48
#SBATCH --constraint=cascade|skylake
#SBATCH --time=24:00:00

module load arch/avx512
module load gcc/7.3.0
module load openmpi/3.1.2
module load gromacs/2019.3

srun gmx_mpi mdrun

In our measurements, going from AVX2 to AVX512 on Skylake or Cascade nodes resulted in a 20−30% performance increase. However, you should also consider that restricting yourself to only AVX512-capable nodes will result in longer wait times in the queue.

GPUs

Tips on how to use GPUs efficiently will be added soon.

Analyzing results

Common pitfalls

Related Modules

Gromacs-Plumed

PLUMED[6] is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines.

The gromacs-plumed modules are versions of GROMACS that have been patched with PLUMED's modifications, so that they can run meta-dynamics simulations.

GROMACS | PLUMED | modules for running on CPUs | modules for running on GPUs (CUDA)
v2019.6 | v2.5.4 | gcc/7.3.0 openmpi/3.1.2 gromacs-plumed/2019.6 | gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2 gromacs-plumed/2019.6
v2019.5 | v2.5.3 | gcc/7.3.0 openmpi/3.1.2 gromacs-plumed/2019.5 | gcc/7.3.0 cuda/10.0.130 openmpi/3.1.2 gromacs-plumed/2019.5
v2018.1 | v2.4.2 | gcc/6.4.0 openmpi/2.1.1 gromacs-plumed/2018.1 | gcc/6.4.0 cuda/9.0.176 openmpi/2.1.1 gromacs-plumed/2018.1
v2016.3 | v2.3.2 | intel/2016.4 openmpi/2.1.1 gromacs-plumed/2016.3 | intel/2016.4 cuda/8.0.44 openmpi/2.1.1 gromacs-plumed/2016.3

G_MMPBSA

G_MMPBSA[7] is a tool that calculates the components of the binding energy (except the entropic term) using the MM-PBSA method, as well as the energetic contribution of each residue to the binding using an energy decomposition scheme.

Development of that tool seems to have stalled in April 2016 and no changes have been made since then. Therefore it is only compatible with Gromacs 5.1.x.

The installed version can be loaded with module load gcc/5.4.0 g_mmpbsa/2016-04-19, which represents the most up-to-date version and consists of version 1.6 plus the change needed to make it compatible with GROMACS 5.1.x. It has been compiled with gromacs/5.1.5 and apbs/1.3.

Please be aware that G_MMPBSA uses implicit solvents and there have been studies[8] that conclude that there are issues with the accuracy of these methods for calculating binding free energies.

Links

Biomolecular simulation

References