OpenMM

Here openmm_input.py is a Python script that loads the AMBER files, creates the OpenMM simulation system, sets up the integrator, and runs dynamics. An example openmm_input.py is available [https://mdbench.ace-net.ca/mdbench/idbenchmark/?q=129 here].
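Below is a minimal sketch of what such a script can look like. It is not the script from the linked benchmark: the input file names (complex.parm7, complex.rst7) and all run parameters are placeholders, and on OpenMM versions older than 7.6 the same classes are imported from the simtk.openmm namespace instead of openmm.

<syntaxhighlight lang="python">
from openmm.app import (AmberPrmtopFile, AmberInpcrdFile, Simulation,
                        StateDataReporter, DCDReporter, PME, HBonds)
from openmm import LangevinMiddleIntegrator
from openmm.unit import kelvin, picosecond, picoseconds, nanometer

# Load the AMBER topology and coordinate files (placeholder names).
prmtop = AmberPrmtopFile('complex.parm7')
inpcrd = AmberInpcrdFile('complex.rst7')

# Create the OpenMM system with PME electrostatics and constrained H bonds.
system = prmtop.createSystem(nonbondedMethod=PME,
                             nonbondedCutoff=1.0*nanometer,
                             constraints=HBonds)

# Langevin dynamics at 300 K, 1/ps friction, 2 fs time step
# (LangevinMiddleIntegrator requires OpenMM 7.5 or newer).
integrator = LangevinMiddleIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)

simulation = Simulation(prmtop.topology, system, integrator)
simulation.context.setPositions(inpcrd.positions)
if inpcrd.boxVectors is not None:
    simulation.context.setPeriodicBoxVectors(*inpcrd.boxVectors)

# Minimize, then run dynamics, writing energies and a trajectory every 1000 steps.
simulation.minimizeEnergy()
simulation.reporters.append(StateDataReporter('output.log', 1000, step=True,
                                              potentialEnergy=True, temperature=True))
simulation.reporters.append(DCDReporter('trajectory.dcd', 1000))
simulation.step(50000)
</syntaxhighlight>

When no platform is requested explicitly, OpenMM picks the fastest available one, which is CUDA on a GPU node.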


OpenMM on the CUDA platform requires only one CPU core per GPU because it does not use CPUs for calculations. While OpenMM can use several GPUs in one node, the most efficient way to run simulations is to use a single GPU. As the [https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Narval&gpu_model=&cpu_model=&arch=&dataset=6n4o Narval] and [https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Cedar&gpu_model=V100-SXM2&cpu_model=&arch=&dataset=6n4o Cedar] benchmarks show, OpenMM runs only slightly faster on multiple GPUs on nodes where the GPUs are connected directly with NVLink. Without NVLink there is no advantage to using more than one V100 GPU ([https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Siku&gpu_model=&cpu_model=&arch=&dataset=6n4o Siku benchmarks]) and only a very small speedup on P100 GPUs ([https://mdbench.ace-net.ca/mdbench/bform/?software_contains=OPENMM.cuda&software_id=&module_contains=&module_version=&site_contains=Cedar&gpu_model=P100-PCIE&cpu_model=&arch=&dataset=6n4o Cedar benchmarks]).
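If you want to request the CUDA platform explicitly and choose which GPU(s) to use, this can be done through the platform properties passed to the Simulation object. The sketch below reuses the prmtop, system and integrator objects from the example above; 'DeviceIndex' and 'Precision' are standard CUDA platform properties.

<syntaxhighlight lang="python">
from openmm import Platform
from openmm.app import Simulation

# Request the CUDA platform explicitly and pin the run to a single GPU.
# 'DeviceIndex': '0,1' would instead spread the calculation over two GPUs,
# which only pays off when the GPUs are linked with NVLink (see the benchmarks above).
platform = Platform.getPlatformByName('CUDA')
properties = {'DeviceIndex': '0', 'Precision': 'mixed'}

simulation = Simulation(prmtop.topology, system, integrator, platform, properties)
</syntaxhighlight>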