OpenMM
Introduction
OpenMM[1] is a toolkit for molecular simulation. It can be used either as a stand-alone application for running simulations or as a library you call from your own code. It provides a combination of extreme flexibility (through custom forces and integrators), openness, and high performance (especially on recent GPUs) that makes it unique among MD simulation packages.
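As one illustration of this flexibility, custom forces can be defined directly from an algebraic energy expression. The following is a minimal, self-contained sketch; the expression, spring constant, and particle parameters are arbitrary illustrations, not part of the example later on this page.

import openmm as mm

# A harmonic restraint toward a reference point, written as a custom force.
# The energy expression and the value of k are arbitrary illustrations.
force = mm.CustomExternalForce('0.5*k*((x-x0)^2 + (y-y0)^2 + (z-z0)^2)')
force.addGlobalParameter('k', 100.0)    # spring constant, kJ/mol/nm^2
force.addPerParticleParameter('x0')
force.addPerParticleParameter('y0')
force.addPerParticleParameter('z0')
force.addParticle(0, [1.0, 2.0, 3.0])   # restrain particle 0 to (1, 2, 3) nm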
Running a Simulation with AMBER Topology and Restart Files
Preparing a Python Virtual Environment
This example is for the openmm/7.7.0 module.
1. Create and activate a Python virtual environment.
[name@server ~]$ module load python
[name@server ~]$ virtualenv $HOME/env-parmed
[name@server ~]$ source $HOME/env-parmed/bin/activate
2. Install the ParmEd and netCDF4 Python modules.
(env-parmed) [name@server ~]$ pip install parmed==3.4.3 netCDF4
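Optionally, before submitting a job you can confirm that both packages import correctly by running python inside the activated environment; a quick check along these lines:

# Optional sanity check, run inside the activated virtual environment:
# confirm that ParmEd and netCDF4 import and report their versions.
import parmed
import netCDF4
print("ParmEd", parmed.__version__, "| netCDF4", netCDF4.__version__)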
Job submission
Below is a job script for a simulation using one GPU.
#!/bin/bash
#SBATCH --cpus-per-task=1    # OpenMM on CUDA needs only one CPU core per GPU
#SBATCH --gpus=1
#SBATCH --mem-per-cpu=4000M
#SBATCH --time=0-01:00:00
# Usage: sbatch $0
module purge
# Load the toolchain, then OpenMM and its dependencies
module load StdEnv/2020 gcc/9.3.0 cuda/11.4 openmpi/4.0.3
module load python/3.8.10 openmm/7.7.0 netcdf/4.7.4 hdf5/1.10.6 mpi4py/3.0.3
# Activate the virtual environment prepared above
source $HOME/env-parmed/bin/activate
python openmm_input.py
Here, openmm_input.py is a Python script that loads the AMBER files, creates the OpenMM simulation system, sets up the integrator, and runs the dynamics. An example openmm_input.py is available here.
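For orientation, below is a minimal sketch of what such a script might look like. The file names (topol.prmtop, restart.rst7, md.log, md.nc) and all simulation parameters (cutoff, temperature, time step, step count) are placeholder assumptions, not taken from the example script linked above.

import parmed as pmd
import openmm as mm
from openmm import app, unit
from parmed.openmm import NetCDFReporter

# Load the AMBER topology and restart files with ParmEd
parm = pmd.load_file('topol.prmtop', xyz='restart.rst7')

# Create the OpenMM system: PME electrostatics, 9 A cutoff, constrained H bonds
system = parm.createSystem(nonbondedMethod=app.PME,
                           nonbondedCutoff=9.0*unit.angstroms,
                           constraints=app.HBonds)

# Langevin dynamics at 300 K with a 2 fs time step
integrator = mm.LangevinMiddleIntegrator(300*unit.kelvin,
                                         1.0/unit.picosecond,
                                         0.002*unit.picoseconds)

# Run on the CUDA platform in mixed precision
platform = mm.Platform.getPlatformByName('CUDA')
properties = {'Precision': 'mixed'}

simulation = app.Simulation(parm.topology, system, integrator,
                            platform, properties)
simulation.context.setPositions(parm.positions)

# Report energies to a log file and coordinates to a NetCDF trajectory
simulation.reporters.append(app.StateDataReporter('md.log', 1000, step=True,
                            potentialEnergy=True, temperature=True))
simulation.reporters.append(NetCDFReporter('md.nc', 1000))

simulation.step(10000)  # 20 ps of dynamics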
Performance
OpenMM on the CUDA platform requires only one CPU core per GPU because it does not use CPUs for calculations. While OpenMM can use several GPUs in one node, the most efficient way to run simulations is to use a single GPU. As you can see from Narval benchmarks and Cedar benchmarks, on nodes with NVLink (where GPUs are connected directly) OpenMM runs slightly faster on multiple GPUs. Without NVLink there is very little speedup of simulations on P100 GPUs (Cedar benchmarks).
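If you do run on a node with NVLink and want OpenMM to use more than one GPU, the CUDA platform accepts a DeviceIndex property listing the devices to use. A minimal sketch follows; the device indices and precision setting are illustrative, and topology, system, and integrator are assumed to be set up as in the script above.

import openmm as mm
from openmm import app

# Select two GPUs on the CUDA platform; DeviceIndex is a documented
# CUDA-platform property, the indices below are illustrative.
platform = mm.Platform.getPlatformByName('CUDA')
properties = {'DeviceIndex': '0,1', 'Precision': 'mixed'}
# simulation = app.Simulation(topology, system, integrator, platform, properties)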
References
[1] OpenMM Homepage: https://openmm.org/