LAMMPS: Difference between revisions

From Alliance Doc
{{Draft}}
<languages />
[[Category:Software]][[Category:BiomolecularSimulation]]
<translate>
 
<!--T:1-->
''Parent page: [[Biomolecular simulation]]''


= General = <!--T:2-->


<!--T:3-->
'''LAMMPS''' is a classical molecular dynamics code. The name stands for '''L'''arge-scale '''A'''tomic / '''M'''olecular '''M'''assively '''P'''arallel '''S'''imulator. LAMMPS is distributed by [http://www.sandia.gov/  Sandia National Laboratories], a US Department of Energy laboratory.  


<!--T:4-->
* Project web site: http://lammps.sandia.gov/
* Documentation: [http://lammps.sandia.gov/doc/Manual.html Online Manual].
* Mailing List: http://lammps.sandia.gov/mail.html


<!--T:5-->
LAMMPS is parallelized with [[MPI]] and [[OpenMP]], and can run on [[Using GPUs with Slurm|GPU]]s.


= Force fields = <!--T:6-->
 
<!--T:7-->
All supported force fields are listed on the [https://lammps.sandia.gov/doc/Intro_features.html#ff package web site],
classified by functional form (e.g. pairwise potentials, many-body potentials, etc.).
The large number of supported force fields makes LAMMPS suitable for many areas of application.
Here are some types of modelling and force fields suitable for each:
 
<!--T:8-->
* Biomolecules: CHARMM, AMBER, OPLS, COMPASS (class 2), long-range Coulombics via PPPM, point dipoles, ...
* Polymers: all-atom, united-atom, coarse-grain (bead-spring FENE), bond-breaking, …
* Materials: EAM and MEAM for metals, Buckingham, Morse, Yukawa, Stillinger-Weber, Tersoff, EDIP, COMB, SNAP, ...
* Reactions: AI-REBO, REBO, ReaxFF, eFF
* Mesoscale: granular, DPD, Gay-Berne, colloidal, peri-dynamics, DSMC...


<!--T:9-->
Combinations of potentials can be used for hybrid systems, e.g. water on metal, polymer/semiconductor interfaces, colloids in solution, ...


= Versions and packages = <!--T:10-->


<!--T:11-->
To see which versions of LAMMPS are installed on Compute Canada systems, run <code>module spider lammps</code>. See [[Using modules]] for more about <code>module</code> subcommands.


<!--T:12-->
LAMMPS version numbers are based on their release dates, and have the format YYYYMMDD. You should run:


<!--T:57-->
 module avail lammps


<!--T:58-->
to see all the releases that are installed, so you can find the one which is most appropriate for you to use.
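As an illustration of the YYYYMMDD version scheme, a module version can be turned into a human-readable release date. This is just a convenience sketch; it assumes GNU <code>date</code> (standard on the clusters' Linux systems):

```shell
# Convert a LAMMPS module version (YYYYMMDD) into a readable release date.
# Requires GNU date; LC_ALL=C fixes the month name to English.
version=20170331
LC_ALL=C date -d "$version" +"%d %B %Y"   # -> 31 March 2017
```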


<!--T:59-->
For each release installed, one or more modules are available.
For example, the release of 31 March 2017 has three modules:


<!--T:13-->
* Built with MPI: <code>lammps/20170331</code>
* Built with USER-OMP support: <code>lammps-omp/20170331</code>
* Built with USER-INTEL support: <code>lammps-user-intel/20170331</code>


<!--T:14-->
These versions are also available with GPU support.
In order to use the GPU-enabled version, load the [[CUDA]] module before loading the LAMMPS module:


<!--T:15-->
$ module load cuda
$ module load lammps-omp/20170331
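A complete GPU batch script might look like the following sketch. The GPU request syntax (<code>--gpus-per-node=1</code>), the module versions, and the use of the <code>-sf gpu</code> suffix switch are assumptions for illustration, not a tested recipe; check <code>module spider lammps</code> for the modules actually available on your cluster:

```shell
#!/bin/bash
# Hypothetical GPU job script -- resource request and module versions are
# placeholders; adapt them to your cluster and the modules it provides.
#SBATCH --ntasks=1
#SBATCH --gpus-per-node=1
#SBATCH --mem-per-cpu=2500M
#SBATCH --time=0-00:30

module load cuda
module load lammps-omp/20170331

# -sf gpu asks LAMMPS to substitute GPU-accelerated styles where available.
lmp -sf gpu < lammps.in > lammps_output.txt
```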


<!--T:16-->
The name of the executable may differ from one version to another. All prebuilt versions on Compute Canada clusters have a symbolic link called <code>lmp</code>; this means that no matter which module you pick, you can execute LAMMPS by calling <code>lmp</code>.


<!--T:17-->
If you wish to see the original name of the executable for a given module, list the files in the <code>${EBROOTLAMMPS}/bin</code> directory. For example:


 <!--T:18-->
 $ module load lammps-omp/20170331
 $ ls ${EBROOTLAMMPS}/bin/
 lmp lmp_icc_openmpi


<!--T:19-->
In this example the executable is <code>lmp_icc_openmpi</code>, and <code>lmp</code> is the symbolic link to it.  


<!--T:20-->
The reason there are different modules for the same release is the difference in the ''packages'' included. Recent versions of LAMMPS contain about 60 different packages that can be enabled or disabled when compiling the program. Not all packages can be enabled in a single executable. All [https://lammps.sandia.gov/doc/Packages.html packages] are documented on the official web page. If your simulation does not work with one module, it may be because a necessary package was not enabled.


<!--T:21-->
For some LAMMPS modules we provide a file <code>list-packages.txt</code> listing the enabled ("Supported") and disabled ("Not Supported") packages. Once you have loaded a particular module, run <code>cat ${EBROOTLAMMPS}/list-packages.txt</code> to see the contents.
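For example, to check whether a specific package was enabled in the loaded module, you can search that file. The package name REAXFF below is only an illustration; the command assumes a LAMMPS module is already loaded, so that <code>EBROOTLAMMPS</code> is set:

```shell
# Search list-packages.txt for a package of interest (REAXFF is an example).
# Assumes "module load lammps-..." was run, which sets $EBROOTLAMMPS.
grep -i 'reaxff' "${EBROOTLAMMPS}/list-packages.txt"
```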


<!--T:67-->
If <code>list-packages.txt</code> is not found, you may be able to determine which packages are available by examining the [[EasyBuild]] recipe file, <code>$EBROOTLAMMPS/easybuild/LAMMPS*.eb</code>.  The list of enabled packages will appear in the block labelled <code>general_packages</code>.


= Example of input file = <!--T:23-->


<!--T:24-->
 
The input file below can be used with either of the example job scripts.


<!--T:25-->
<tabs>


<!--T:26-->
<tab name="INPUT">
{{File
|name=lammps.in
|lang="txt"
|contents=
# 3d Lennard-Jones melt

<!--T:27-->
units           lj
atom_style      atomic

<!--T:28-->
lattice         fcc 0.8442
region          box block 0 15 0 15 0 15
create_box      1 box
create_atoms    1 box
mass            1 1.0

<!--T:29-->
velocity        all create 1.44 87287 loop geom

<!--T:30-->
pair_style      lj/cut 2.5
pair_coeff      1 1 1.0 1.0 2.5
neighbor        0.3 bin
neigh_modify    delay 5 every 1

<!--T:31-->
fix             1 all nve
thermo          5
run             10000
write_data      config.end_sim

<!--T:32-->
# End of the Input file.
}}
</tab>


<!--T:33-->
<tab name="Serial job">
{{File
|name=run_lmp_serial.sh
|lang="bash"
|contents=
#!/bin/bash


<!--T:61-->
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=2500M      # memory; default unit is megabytes
#SBATCH --time=0-00:30           # time (DD-HH:MM)
 
<!--T:62-->
module load StdEnv/2020 intel/2020.1.217 openmpi/4.0.3 lammps-omp/20210929


<!--T:63-->
lmp < lammps.in > lammps_output.txt
}}
</tab>


<!--T:41-->
<tab name="MPI job">
{{File
|name=run_lmp_mpi.sh
|lang="bash"
|contents=
#!/bin/bash


<!--T:64-->
#SBATCH --ntasks=4               # number of MPI processes
#SBATCH --mem-per-cpu=2500M      # memory; default unit is megabytes
#SBATCH --time=0-00:30           # time (DD-HH:MM)


<!--T:65-->
module load StdEnv/2020 intel/2020.1.217 openmpi/4.0.3 lammps-omp/20210929


<!--T:66-->
srun lmp < lammps.in > lammps_output.txt
}}
</tab>


<!--T:49-->
</tabs>


= Performance = <!--T:50-->
 
<!--T:51-->
Most of the CPU time for molecular dynamics simulations is spent in computing the pair interactions between particles. LAMMPS uses domain decomposition to split the work among the available processors by assigning a part of the simulation box to each processor. During the computation of the interactions between particles, communication between the processors is required. For a given number of particles, the more processors that are used, the more parts of the simulation box there are which must exchange data. Therefore, the communication time increases with increasing number of processors, eventually leading to low CPU efficiency.  


<!--T:52-->
Before running extensive simulations for a given problem size or a size of the simulation box, you should run tests to see how the program's performance changes with the number of cores. Run short tests using different numbers of cores to find a suitable number of cores that will (approximately) maximize the efficiency of the simulation.
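One way to organize such a scan is a loop over core counts. The sketch below only prints the <code>sbatch</code> commands (a dry run); the script name <code>run_lmp_mpi.sh</code> refers to the MPI example on this page, and the short time limit is an assumption to adapt:

```shell
# Dry run: print one submission command per core count to be tested.
# Remove the "echo" to actually submit the short test jobs.
for n in 1 2 4 8 16; do
  echo "sbatch --ntasks=$n --time=0-00:15 run_lmp_mpi.sh"
done
```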


<!--T:53-->
The following example shows the timing for a simulation of a system of 4000 particles using 12 MPI tasks. This is an example of very low efficiency: with 12 cores, the system of 4000 atoms was divided into 12 small boxes. The code spent 46.45% of the time computing pair interactions and 44.5% in communication between the processors. The large number of small boxes for such a small system is responsible for the large fraction of time spent in communication.


<!--T:54-->
{| class="wikitable" style="text-align: center; border-width: 2px;width: 100%;"
!colspan="6" style="text-align: left;"|Loop time of 15.4965 on 12 procs for 25000 steps with 4000 atoms.<br />
Performance: 696931.853 tau/day, 1613.268 timesteps/s.<br />
90.2% CPU use with 12 MPI tasks x 1 OpenMP threads.
|-
! Section !! min time !! avg time !! max time !! %varavg !! %total
|-
| Pair || 6.6964 || 7.1974 || 7.9599 || 14.8 || 46.45
|-
| Neigh || 0.94857 || 1.0047 || 1.0788 || 4.3 || 6.48
|-
| Comm || 6.0595 || 6.8957 || 7.4611 || 17.1 || 44.50
|-
| Output || 0.01517 || 0.01589 || 0.019863 || 1.0 || 0.10
|-
| Modify || 0.14023 || 0.14968 || 0.16127 || 1.7 || 0.97
|-
| Other || -- || 0.2332 || -- || -- || 1.50
|}
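The percentages in this breakdown can be reproduced from the raw timings; for instance, the communication share is the average <code>Comm</code> time divided by the total loop time, as in this one-liner:

```shell
# Communication share of the run: avg Comm time / total loop time,
# using the numbers from the timing breakdown (6.8957 s of 15.4965 s).
awk 'BEGIN { printf "%.1f%%\n", 100 * 6.8957 / 15.4965 }'   # -> 44.5%
```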


<!--T:55-->
In the next example, we compare the time spent in communication and computing the pair interactions for different system sizes:


<!--T:56-->
{| class="wikitable" style="text-align: center; border-width: 2px;width: 100%;"
!
!colspan="2"|2048 atoms
!colspan="2"|4000 atoms
!colspan="2"|6912 atoms
!colspan="2"|13500 atoms
|-
! Cores !! Pairs !! Comm !! Pairs !! Comm !! Pairs !! Comm !! Pairs !! Comm
|-
| 1 || 73.68 || 1.36 || 73.70 || 1.28 || 73.66 || 1.27 || 73.72 || 1.29
|-
| 2 || 70.35 || 5.19 || 70.77 || 4.68 || 70.51 || 5.11 || 67.80 || 8.77
|-
| 4 || 62.77 || 13.98 || 64.93 || 12.19 || 67.52 || 8.99 || 67.74 || 8.71
|-
| 8 || 58.36 || 20.14 || 61.78 || 15.58 || 64.10 || 12.86 || 62.06 || 8.71
|-
| 16 || 56.69 || 20.18 || 56.70 || 20.18 || 56.97 || 19.80 || 56.41 || 20.38
|}
</translate>

Latest revision as of 19:43, 1 August 2023