LAMMPS
<languages />
[[Category:Software]][[Category:BiomolecularSimulation]]
<translate>


<!--T:1-->
''Parent page: [[Biomolecular simulation]]''

= General = <!--T:2-->


<!--T:3-->
'''LAMMPS''' is a classical molecular dynamics code. The name stands for '''L'''arge-scale '''A'''tomic / '''M'''olecular '''M'''assively '''P'''arallel '''S'''imulator. LAMMPS is distributed by [http://www.sandia.gov/ Sandia National Laboratories], a US Department of Energy laboratory.


<!--T:4-->
* Project web site: http://lammps.sandia.gov/
* Documentation: [http://lammps.sandia.gov/doc/Manual.html Online Manual].
* Mailing List: http://lammps.sandia.gov/mail.html


<!--T:5-->
LAMMPS is parallelized with [[MPI]] and [[OpenMP]], and can run on [[Using GPUs with Slurm|GPU]]s.


= Force fields = <!--T:6-->


<!--T:7-->
All supported force fields are listed on the [https://lammps.sandia.gov/doc/Intro_features.html#ff package web site],
classified by functional form (e.g. pairwise potentials, many-body potentials, etc.).
The large number of supported force fields makes LAMMPS suitable for many areas of application.
Here are some types of modelling and force fields suitable for each:


<!--T:8-->
* Biomolecules: CHARMM, AMBER, OPLS, COMPASS (class 2), long-range Coulombics via PPPM, point dipoles, ...
* Polymers: all-atom, united-atom, coarse-grain (bead-spring FENE), bond-breaking, ...
* Materials: EAM and MEAM for metals, Buckingham, Morse, Yukawa, Stillinger-Weber, Tersoff, EDIP, COMB, SNAP, ...
* Reactions: AI-REBO, REBO, ReaxFF, eFF
* Mesoscale: granular, DPD, Gay-Berne, colloidal, peri-dynamics, DSMC, ...

<!--T:9-->
Combinations of potentials can be used for hybrid systems, e.g. water on metal, polymer/semiconductor interfaces, colloids in solution, ...


= Versions and packages = <!--T:10-->


<!--T:11-->
To see which versions of LAMMPS are installed on Compute Canada systems, run <code>module spider lammps</code>. See [[Using modules]] for more about <code>module</code> subcommands.


<!--T:12-->
LAMMPS version numbers are based on their release dates, and have the format YYYYMMDD. You should run:


<!--T:57-->
 module avail lammps


<!--T:58-->
to see all the releases that are installed, so you can find the one which is most appropriate for you to use.
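
For example, to see exactly which other modules a particular build needs before it can be loaded, query it by its full module name (shown here for one of the builds listed below):

 $ module spider lammps-omp/20170331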


<!--T:59-->
For each release installed, one or more modules are available.
For example, the release of 31 March 2017 has three modules:


<!--T:13-->
* Built with MPI: <code>lammps/20170331</code>
* Built with USER-OMP support: <code>lammps-omp/20170331</code>
* Built with USER-INTEL support: <code>lammps-user-intel/20170331</code>


<!--T:14-->
These versions are also available with GPU support.
In order to use the GPU-enabled version, load the [[CUDA]] module before loading the LAMMPS module:


<!--T:15-->
 $ module load cuda
 $ module load lammps-omp/20170331
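
To submit a batch job that uses the GPU, you also need to request a GPU from the scheduler. The script below is only a minimal sketch: adapt the resource requests to your needs, and note that the <code>-sf gpu -pk gpu 1</code> switches (which ask LAMMPS to run its GPU-accelerated styles on one GPU) assume that the module you load was built with the GPU package.

 #!/bin/bash
 #SBATCH --ntasks=4             # number of MPI processes
 #SBATCH --gpus-per-node=1      # request one GPU
 #SBATCH --mem-per-cpu=2500M
 #SBATCH --time=0-00:30
 module load cuda
 module load lammps-omp/20170331
 # -sf gpu switches supported styles to their GPU variants; -pk gpu 1 uses one GPU per node.
 srun lmp -sf gpu -pk gpu 1 < lammps.in > lammps_output.txt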


<!--T:16-->
The name of the executable may differ from one version to another, but all prebuilt versions on Compute Canada clusters provide a symbolic link called <code>lmp</code>; no matter which module you pick, you can run LAMMPS by calling <code>lmp</code>.
 
<!--T:17-->
If you wish to see the original name of the executable for a given module, list the files in the <code>${EBROOTLAMMPS}/bin</code> directory. For example:
 
  <!--T:18-->
  $ module load lammps-omp/20170331
  $ ls ${EBROOTLAMMPS}/bin/
  lmp lmp_icc_openmpi


<!--T:19-->
In this example the executable is <code>lmp_icc_openmpi</code>, and <code>lmp</code> is the symbolic link to it.  
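
Another quick way to check which binary the <code>lmp</code> link resolves to, using only standard shell tools:

  $ module load lammps-omp/20170331
  $ readlink -f $(which lmp)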


<!--T:20-->
The reason there are different modules for the same release is the difference in the ''packages'' included. Recent versions of LAMMPS contain about 60 different packages that can be enabled or disabled when compiling the program. Not all packages can be enabled in a single executable. All [https://lammps.sandia.gov/doc/Packages.html packages] are documented on the official web page. If your simulation does not work with one module, it may be related to the fact that a necessary package was not enabled.


<!--T:21-->
For some LAMMPS modules we provide a file <code>list-packages.txt</code> listing the enabled ("Supported") and disabled ("Not Supported") packages. Once you have loaded a particular module, run <code>cat ${EBROOTLAMMPS}/list-packages.txt</code> to see the contents.


<!--T:67-->
If <code>list-packages.txt</code> is not found, you may be able to determine which packages are available by examining the [[EasyBuild]] recipe file, <code>$EBROOTLAMMPS/easybuild/LAMMPS*.eb</code>.  The list of enabled packages will appear in the block labelled <code>general_packages</code>.
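
For example, to quickly check whether a particular package was enabled in the module you have loaded (a hypothetical search for ReaxFF-related entries; substitute the package you are interested in):

 $ grep -i reax ${EBROOTLAMMPS}/list-packages.txt
 $ grep -i reax ${EBROOTLAMMPS}/easybuild/LAMMPS*.eb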


= Example of input file = <!--T:23-->


<!--T:24-->
The input file below can be used with either of the example job scripts.


<!--T:25-->
<tabs>


<!--T:26-->
<tab name="INPUT">
<tab name="INPUT">
{{File
{{File
Line 90: Line 107:
# 3d Lennard-Jones melt

<!--T:27-->
units           lj
atom_style      atomic

<!--T:28-->
lattice         fcc 0.8442
region          box block 0 15 0 15 0 15
create_box      1 box
create_atoms    1 box
mass            1 1.0

<!--T:29-->
velocity        all create 1.44 87287 loop geom

<!--T:30-->
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    delay 5 every 1

<!--T:31-->
fix             1 all nve
thermo          5
run             10000
write_data      config.end_sim

<!--T:32-->
# End of the Input file.
}}
</tab>


<!--T:33-->
<tab name="Serial job">
<tab name="Serial job">
{{File
{{File
Line 122: Line 146:
#!/bin/bash

<!--T:61-->
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=2500M      # memory; default unit is megabytes
#SBATCH --time=0-00:30           # time (DD-HH:MM)

<!--T:62-->
module load StdEnv/2020 intel/2020.1.217 openmpi/4.0.3 lammps-omp/20210929

<!--T:63-->
lmp < lammps.in > lammps_output.txt
}}
</tab>


<!--T:41-->
<tab name="MPI job">
<tab name="MPI job">
{{File
{{File
Line 149: Line 167:
#!/bin/bash

<!--T:64-->
#SBATCH --ntasks=4               # number of MPI processes
#SBATCH --mem-per-cpu=2500M      # memory; default unit is megabytes
#SBATCH --time=0-00:30           # time (DD-HH:MM)

<!--T:65-->
module load StdEnv/2020 intel/2020.1.217 openmpi/4.0.3 lammps-omp/20210929

<!--T:66-->
srun lmp < lammps.in > lammps_output.txt
}}
</tab>


<!--T:49-->
</tabs>
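
The <code>lammps-omp</code> builds can also run several OpenMP threads per MPI process. The script below is only a sketch of such a hybrid job, reusing the module and input file from the examples above; the <code>-sf omp</code> and <code>-pk omp</code> switches enable the OpenMP-accelerated styles and only apply to builds with USER-OMP support.

 #!/bin/bash
 #SBATCH --ntasks=4             # number of MPI processes
 #SBATCH --cpus-per-task=2      # OpenMP threads per MPI process
 #SBATCH --mem-per-cpu=2500M
 #SBATCH --time=0-00:30
 module load StdEnv/2020 intel/2020.1.217 openmpi/4.0.3 lammps-omp/20210929
 export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
 # -sf omp switches supported styles to their OpenMP variants; -pk omp sets the thread count.
 srun --cpus-per-task=${SLURM_CPUS_PER_TASK} lmp -sf omp -pk omp ${OMP_NUM_THREADS} < lammps.in > lammps_output.txt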


= Performance = <!--T:50-->


<!--T:51-->
Most of the CPU time for molecular dynamics simulations is spent in computing the pair interactions between particles. LAMMPS uses domain decomposition to split the work among the available processors by assigning a part of the simulation box to each processor. During the computation of the interactions between particles, communication between the processors is required. For a given number of particles, the more processors that are used, the more parts of the simulation box there are which must exchange data. Therefore, the communication time increases with increasing number of processors, eventually leading to low CPU efficiency.


<!--T:52-->
Before running extensive simulations for a given problem size or a size of the simulation box, you should run tests to see how the program's performance changes with the number of cores. Run short tests using different numbers of cores to find a suitable number of cores that will (approximately) maximize the efficiency of the simulation.
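
A simple way to do this is to submit the same short job several times, changing only the number of MPI tasks; options passed to <code>sbatch</code> on the command line override the <code>#SBATCH</code> directives inside the script. A minimal sketch, assuming the MPI script above is saved as <code>run_lmp_mpi.sh</code> and the input runs only for a short time:

 for n in 1 2 4 8 16; do
     # each job overwrites lammps_output.txt unless you change the output name per run
     sbatch --ntasks=${n} --job-name=lammps-scaling-${n} run_lmp_mpi.sh
 done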


<!--T:53-->
The following example shows the timing for a simulation of a system of 4000 particles using 12 MPI tasks. This is an example of very low efficiency: with 12 cores, the system of 4000 atoms was divided into 12 small boxes. The code spent 46.45% of the time computing pair interactions and 44.5% in communication between the processors. The large number of small boxes for such a small system is responsible for the large fraction of time spent in communication.


<!--T:54-->
{| class="wikitable" style="text-align: center; border-width: 2px;width: 100%;"
!colspan="6" style="text-align: left;"|Loop time of 15.4965 on 12 procs for 25000 steps with 4000 atoms.<br />
Performance: 696931.853 tau/day, 1613.268 timesteps/s. <br />
90.2% CPU use with 12 MPI tasks x 1 OpenMP threads.
|-
!Section
|'''min time'''
|'''avg time'''
|'''max time'''
|'''%varavg'''
|'''%total'''
|-
!Pair
|6.6964
|7.1974
|7.9599
|14.8
|'''46.45'''
|-
!Neigh
|0.94857
|1.0047
|1.0788
|4.3
|6.48
|-
!Comm
|6.0595
|6.8957
|7.4611
|17.1
|'''44.50'''
|-
!Output
|0.01517
|0.01589
|0.019863
|1.0
|0.10
|-
!Modify
|0.14023
|0.14968
|0.16127
|1.7
|0.97
|-
!Other
| --
|0.2332
| --
| --
|1.50
|}
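
This breakdown is printed at the end of each run in the LAMMPS log file, so you can extract it without opening the whole file; for example, assuming the default log file name <code>log.lammps</code>:

 $ grep -A 10 "Loop time" log.lammps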


<!--T:55-->
In the next example, we compare the percentage of time spent computing the pair interactions (Pairs) with the percentage spent in communication (Comm) for different system sizes:


<!--T:56-->
{| class="wikitable" style="text-align: center; border-width: 2px;width: 100%;"
!
| scope="row" colspan="2" | '''2048 atoms'''
| scope="row" colspan="2" | '''4000 atoms'''
| scope="row" colspan="2" | '''6912 atoms'''
| scope="row" colspan="2" | '''13500 atoms'''
|-
! Cores || Pairs  || Comm || Pairs || Comm || Pairs || Comm || Pairs || Comm
|-
!1  ||  73.68  || 1.36  || 73.70  || 1.28  || 73.66 || 1.27  || 73.72 || 1.29
|-
!2  ||  70.35  || 5.19  || 70.77  || 4.68  || 70.51 || 5.11  || 67.80 || 8.77
|-
!4  ||  62.77  || 13.98 || 64.93  || 12.19 || 67.52 || 8.99  || 67.74 || 8.71
|-
!8  ||  58.36  || 20.14 || 61.78  || 15.58 || 64.10 || 12.86 || 62.06 || 8.71
|-
!16 ||  56.69  || 20.18 || 56.70  || 20.18 || 56.97 || 19.80 || 56.41 || 20.38
|}


</translate>
