Here <code>openmm_input.py</code> is a Python script loading Amber files, creating the OpenMM simulation system, setting up the integration, and running dynamics. An example is available [https://mdbench.ace-net.ca/mdbench/idbenchmark/?q=129 here].
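For orientation, a minimal script of this kind might look like the sketch below. The file names <code>system.prmtop</code> and <code>system.inpcrd</code> and all simulation parameters are placeholders, not recommended settings; see the benchmark entry linked above for a production example.

<syntaxhighlight lang="python">
# Minimal OpenMM run script (illustrative sketch; file names and
# parameters are placeholders).
import sys
from openmm import LangevinMiddleIntegrator, Platform
from openmm.app import (AmberPrmtopFile, AmberInpcrdFile, Simulation,
                        StateDataReporter, PME, HBonds)
from openmm.unit import kelvin, picosecond, picoseconds, nanometer

# Load the Amber topology and coordinate files.
prmtop = AmberPrmtopFile('system.prmtop')
inpcrd = AmberInpcrdFile('system.inpcrd')

# Create the simulation system with PME electrostatics and
# bonds to hydrogen constrained.
system = prmtop.createSystem(nonbondedMethod=PME,
                             nonbondedCutoff=1*nanometer,
                             constraints=HBonds)

# Set up the integration: Langevin dynamics at 300 K with a 2 fs step.
integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond,
                                      0.002*picoseconds)

# Run dynamics on the CUDA platform.
platform = Platform.getPlatformByName('CUDA')
simulation = Simulation(prmtop.topology, system, integrator, platform)
simulation.context.setPositions(inpcrd.positions)
simulation.minimizeEnergy()
simulation.reporters.append(StateDataReporter(sys.stdout, 1000,
                            step=True, speed=True, temperature=True))
simulation.step(10000)
</syntaxhighlight>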
= Performance and benchmarking =
A team at [https://www.ace-net.ca/ ACENET] has created a [https://mdbench.ace-net.ca/mdbench/ Molecular Dynamics Performance Guide] for Alliance clusters. | |||
It can help you determine optimal conditions for AMBER, GROMACS, NAMD, and OpenMM jobs. The present section focuses on OpenMM performance. | |||
<!--T:11--> | |||
OpenMM on the CUDA platform requires only one CPU per GPU because it does not use CPUs for calculations. While OpenMM can use several GPUs in one node, the most efficient way to run simulations is to use a single GPU. As you can see from [https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Narval&gpu_model=&cpu_model=&arch=&dataset=6n4o Narval benchmarks] and [https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Cedar&gpu_model=V100-SXM2&cpu_model=&arch=&dataset=6n4o Cedar benchmarks], on nodes with NVLink (where GPUs are connected directly), OpenMM runs slightly faster on multiple GPUs. Without NVLink there is very little speedup of simulations on P100 GPUs ([https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Cedar&gpu_model=P100-PCIE&cpu_model=&arch=&dataset=6n4o Cedar benchmarks]).
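Which GPUs the CUDA platform uses can be controlled through platform properties passed when the <code>Simulation</code> is created. A short sketch, continuing from the script above (property values are illustrative):

<syntaxhighlight lang="python">
# Select GPUs for the CUDA platform; prmtop, system and integrator
# are the objects created in the sketch above.
from openmm import Platform
from openmm.app import Simulation

platform = Platform.getPlatformByName('CUDA')
# Run on a single GPU, usually the most efficient choice.
properties = {'DeviceIndex': '0', 'Precision': 'mixed'}
# To spread the simulation over two GPUs, e.g. on an NVLink node,
# use 'DeviceIndex': '0,1' instead.
simulation = Simulation(prmtop.topology, system, integrator,
                        platform, properties)
</syntaxhighlight>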
</translate>