OpenMM

From Alliance Doc

Revision as of 19:03, 2 February 2022


Introduction

OpenMM[1] is a toolkit for molecular simulation. It can be used either as a stand-alone application for running simulations or as a library you call from your own code. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that makes it unique among MD simulation packages.

Running a Simulation with AMBER Topology and Restart Files

Preparing a Python Virtual Environment

This example is for the openmm/7.7.0 module.

1. Create and activate a Python virtual environment

[name@server ~] module load python
[name@server ~] virtualenv $HOME/env-parmed
[name@server ~] source $HOME/env-parmed/bin/activate


2. Install the ParmEd and netCDF4 Python modules

(env-parmed)[name@server ~] pip install parmed==3.4.3 netCDF4


Job submission

Below is a job script for a simulation using one GPU.

File : submit_openmm.cuda.sh

#!/bin/bash
#SBATCH --cpus-per-task=1 
#SBATCH --gpus=1
#SBATCH --mem-per-cpu=4000M
#SBATCH --time=0-01:00:00
# Usage: sbatch $0

module purge
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3 
module load python/3.8.10 openmm/7.7.0 netcdf/4.7.4 hdf5/1.10.6 mpi4py/3.0.3
source $HOME/env-parmed/bin/activate

python openmm_input.py


Here openmm_input.py is a Python script that loads the AMBER files, creates the OpenMM simulation system, sets up the integrator, and runs the dynamics. An example openmm_input.py is available at https://mdbench.ace-net.ca/mdbench/idbenchmark/?q=129.
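A minimal sketch of such a script, using the standard OpenMM 7.x Python API. The file names (test.prmtop, test.rst7, md.log, md.dcd) and all run parameters below are illustrative placeholders, not taken from the linked example.

```python
# Minimal OpenMM script: load an AMBER topology/restart pair and run
# Langevin dynamics. File names and run parameters are placeholders.
from openmm import LangevinMiddleIntegrator, unit
from openmm.app import (AmberPrmtopFile, AmberInpcrdFile, Simulation,
                        PME, HBonds, StateDataReporter, DCDReporter)

prmtop = AmberPrmtopFile('test.prmtop')      # AMBER topology
inpcrd = AmberInpcrdFile('test.rst7')        # AMBER restart/coordinates

# Periodic system with PME electrostatics and constrained bonds to hydrogen
system = prmtop.createSystem(nonbondedMethod=PME,
                             nonbondedCutoff=1.0*unit.nanometer,
                             constraints=HBonds)

integrator = LangevinMiddleIntegrator(300*unit.kelvin,       # temperature
                                      1.0/unit.picosecond,   # friction
                                      0.002*unit.picoseconds)  # time step

simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:            # carry the box over from the restart
    simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)

simulation.minimizeEnergy()
simulation.reporters.append(StateDataReporter('md.log', 1000,
                                              step=True, temperature=True))
simulation.reporters.append(DCDReporter('md.dcd', 1000))
simulation.step(10000)                       # 10000 x 2 fs = 20 ps of dynamics
```

With defaults like these, OpenMM chooses the fastest available platform (CUDA on the GPU nodes) automatically.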

OpenMM on the CUDA platform requires only one CPU per GPU because it does not use the CPUs for calculations. While OpenMM can use several GPUs in one node, the most efficient way to run simulations is to use a single GPU. As you can see from the Narval benchmarks (https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Narval&gpu_model=&cpu_model=&arch=&dataset=6n4o) and the Cedar V100 benchmarks (https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Cedar&gpu_model=V100-SXM2&cpu_model=&arch=&dataset=6n4o), on nodes with NVLink, where GPUs are connected directly, OpenMM runs only slightly faster on multiple GPUs. Without NVLink there is very little speedup on P100 GPUs (Cedar benchmarks: https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Cedar&gpu_model=P100-PCIE&cpu_model=&arch=&dataset=6n4o).
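When comparing single- and multi-GPU runs, the platform and device can also be selected explicitly instead of relying on OpenMM's automatic choice. A sketch of the relevant API calls (the property values shown are illustrative):

```python
# Selecting the CUDA platform and a specific GPU explicitly (illustrative).
from openmm import Platform

platform = Platform.getPlatformByName('CUDA')
properties = {'DeviceIndex': '0',     # single GPU; '0,1' would request two
              'Precision': 'mixed'}   # the usual speed/accuracy trade-off

# The platform and its properties are then passed when the Simulation
# object is created:
#   Simulation(topology, system, integrator, platform, properties)
```

On the clusters, Slurm sets CUDA_VISIBLE_DEVICES for the job, so with --gpus=1 the single allocated GPU is always DeviceIndex 0.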

  1. OpenMM Homepage: https://openmm.org/