OpenMM

Introduction

OpenMM[1] is a toolkit for molecular simulation. It can be used either as a standalone application for running simulations or as a library you call from your own code. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that make it unique among MD simulation packages.

Running a simulation with AMBER topology and restart files

Preparing the Python virtual environment

This example is for the openmm/7.7.0 module.
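To see which OpenMM versions are installed on the cluster you are using, you can query the module system (Alliance clusters use Lmod):

[name@server ~] module spider openmm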

1. Create and activate the Python virtual environment.

[name@server ~] module load python
[name@server ~] virtualenv $HOME/env-parmed
[name@server ~] source $HOME/env-parmed/bin/activate


2. Install ParmEd and netCDF4 Python modules.

(env-parmed)[name@server ~] pip install --no-index parmed==3.4.3 netCDF4
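
To verify that both packages are importable from the virtual environment, a quick check such as the following can be run (the printed version should match the one requested above):

(env-parmed)[name@server ~] python -c "import parmed, netCDF4; print(parmed.__version__)"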


Job submission

Below is a job script for a simulation using one GPU.


File : submit_openmm.cuda.sh

#!/bin/bash
#SBATCH --cpus-per-task=1 
#SBATCH --gpus=1
#SBATCH --mem-per-cpu=4000M
#SBATCH --time=0-01:00:00
# Usage: sbatch $0

# Start from a clean module environment and load OpenMM with its prerequisites
module purge
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3
module load python/3.8.10 openmm/7.7.0 netcdf/4.7.4 hdf5/1.10.6 mpi4py/3.0.3
# Activate the virtual environment containing ParmEd and netCDF4
source $HOME/env-parmed/bin/activate

python openmm_input.py


Here openmm_input.py is a Python script that loads the AMBER files, creates the OpenMM simulation system, sets up the integrator, and runs the dynamics. An example is available at https://mdbench.ace-net.ca/mdbench/idbenchmark/?q=129.
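
For orientation, below is a minimal sketch of what such a script can look like; the file names (prmtop.parm7, restart.rst7), cutoff, temperature, and step count are illustrative assumptions, not the settings of the linked example.

from openmm import LangevinMiddleIntegrator, Platform
from openmm.app import (AmberInpcrdFile, AmberPrmtopFile, HBonds, PME,
                        Simulation, StateDataReporter)
from openmm.unit import kelvin, nanometer, picosecond, picoseconds

prmtop = AmberPrmtopFile('prmtop.parm7')    # AMBER topology (illustrative name)
inpcrd = AmberInpcrdFile('restart.rst7')    # AMBER coordinates/velocities

# Build the system with PME electrostatics and constrained bonds to hydrogen
system = prmtop.createSystem(nonbondedMethod=PME,
                             nonbondedCutoff=1.0*nanometer,
                             constraints=HBonds)

# Langevin dynamics at 300 K with a 2 fs time step, on the CUDA platform
integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)
platform = Platform.getPlatformByName('CUDA')

simulation = Simulation(prmtop.topology, system, integrator, platform)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
    simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)

# Log basic state data every 1000 steps, then run 10000 steps (20 ps)
simulation.reporters.append(StateDataReporter('md.log', 1000, step=True,
                                              potentialEnergy=True,
                                              temperature=True, speed=True))
simulation.step(10000)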

Performance and benchmarking

A team at ACENET (https://www.ace-net.ca/) has created a Molecular Dynamics Performance Guide (https://mdbench.ace-net.ca/mdbench/) for Alliance clusters. It can help you determine optimal conditions for AMBER, GROMACS, NAMD, and OpenMM jobs. The present section focuses on OpenMM performance.

OpenMM on the CUDA platform requires only one CPU per GPU because it does not use CPUs for calculations. While OpenMM can use several GPUs in one node, the most efficient way to run simulations is to use a single GPU. As you can see from the Narval benchmarks (https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Narval&gpu_model=&cpu_model=&arch=&dataset=6n4o) and Cedar benchmarks (https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Cedar&gpu_model=V100-SXM2&cpu_model=&arch=&dataset=6n4o), on nodes with NVLink (where GPUs are connected directly) OpenMM runs slightly faster on multiple GPUs. Without NVLink there is very little speedup on P100 GPUs (Cedar benchmarks: https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Cedar&gpu_model=P100-PCIE&cpu_model=&arch=&dataset=6n4o).
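
Since OpenMM picks the fastest platform it can find when none is requested explicitly, and will quietly fall back to a much slower platform if CUDA is unavailable, it can be worth confirming which platforms an installation actually exposes, for example:

import openmm

# List the platforms compiled into this OpenMM installation;
# 'CUDA' should appear when a GPU and the CUDA toolkit are visible
for i in range(openmm.Platform.getNumPlatforms()):
    print(openmm.Platform.getPlatform(i).getName())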

  1. OpenMM home page: https://openmm.org/