Publications

Results 1–25 of 210

LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales

Computer Physics Communications

Thompson, Aidan P.; Aktulga, H.M.; Berger, Richard; Bolintineanu, Dan S.; Brown, W.M.; Crozier, Paul C.; in 't Veld, Pieter J.; Kohlmeyer, Axel; Moore, Stan G.; Nguyen, Trung D.; Shan, Ray; Stevens, Mark J.; Tranchida, Julien; Trott, Christian R.; Plimpton, Steven J.

Since the classical molecular dynamics simulator LAMMPS was released as an open source code in 2004, it has become a widely used tool for particle-based modeling of materials at length scales ranging from atomic to mesoscale to continuum. Reasons for its popularity are that it provides a wide variety of particle interaction models for different materials, that it runs on any platform from a single CPU core to the largest supercomputers with accelerators, and that it gives users control over simulation details, either via the input script or by adding code for new interatomic potentials, constraints, diagnostics, or other features needed for their models. As a result, hundreds of people have contributed new capabilities to LAMMPS and it has grown from fifty thousand lines of code in 2004 to a million lines today. In this paper several of the fundamental algorithms used in LAMMPS are described along with the design strategies which have made it flexible for both users and developers. We also highlight some capabilities recently added to the code which were enabled by this flexibility, including dynamic load balancing, on-the-fly visualization, magnetic spin dynamics models, and quantum-accuracy machine learning interatomic potentials.

Program Summary:
Program Title: Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)
CPC Library link to program files: https://doi.org/10.17632/cxbxs9btsv.1
Developer's repository link: https://github.com/lammps/lammps
Licensing provisions: GPLv2
Programming language: C++, Python, C, Fortran
Supplementary material: https://www.lammps.org
Nature of problem: Many science applications in physics, chemistry, materials science, and related fields require parallel, scalable, and efficient generation of long, stable classical particle dynamics trajectories. Within this common problem definition there lies a great diversity of use cases, distinguished by different particle interaction models and external constraints, as well as timescales and lengthscales ranging from atomic to mesoscale to macroscopic.
Solution method: The LAMMPS code uses parallel spatial decomposition, distributed neighbor lists, and parallel FFTs for long-range Coulombic interactions [1]. The time integration algorithm is based on the Størmer-Verlet symplectic integrator [2], which provides better stability than higher-order non-symplectic methods. In addition, LAMMPS supports a wide range of interatomic potentials, constraints, diagnostics, software interfaces, and pre- and post-processing features.
Additional comments including restrictions and unusual features: This paper serves as the definitive reference for the LAMMPS code.
References:
[1] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, J. Comp. Phys. 117 (1995) 1–19.
[2] L. Verlet, Computer experiments on classical fluids: I. Thermodynamical properties of Lennard-Jones molecules, Phys. Rev. 159 (1967) 98–103.
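
The program summary credits LAMMPS's long-trajectory stability to the Størmer-Verlet symplectic integrator. As a rough illustration of that scheme in its common velocity-Verlet form (a minimal Python sketch, not LAMMPS source code; the function and argument names are invented for the example):

```python
import numpy as np

def velocity_verlet_step(x, v, f, mass, dt, compute_forces):
    """One Stormer-Verlet (velocity-Verlet) timestep.

    x, v, f        -- (N, 3) arrays: positions, velocities, forces
    mass           -- (N, 1) array of particle masses
    compute_forces -- callable mapping positions to forces
    """
    v_half = v + 0.5 * dt * f / mass           # first half-kick
    x_new = x + dt * v_half                    # drift
    f_new = compute_forces(x_new)              # forces at new positions
    v_new = v_half + 0.5 * dt * f_new / mass   # second half-kick
    return x_new, v_new, f_new
```

Because the update is symplectic, energy errors stay bounded over very long runs rather than drifting, which is the stability property the summary contrasts with higher-order non-symplectic methods.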

Memo regarding the Final Review of FY21 ASC L2 Milestone 7840: Neural Mini-Apps for Future Heterogeneous HPC Systems

Oldfield, Ron A.; Plimpton, Steven J.; Laros, James H.; Poliakoff, David Z.; Sornborger, Andrew S.

The final review for the FY21 Advanced Simulation and Computing (ASC) Computational Systems and Software Environments (CSSE) L2 Milestone #7840 was conducted on August 25, 2021 at Sandia National Laboratories in Albuquerque, New Mexico. The review panel unanimously agreed that the milestone had been successfully completed, exceeding expectations on several of the key deliverables.

Rendezvous algorithms for large-scale modeling and simulation

Journal of Parallel and Distributed Computing

Plimpton, Steven J.; Knight, Christopher

Rendezvous algorithms encode a communication pattern that is useful when processors sending data do not know who the receiving processors should be, or vice versa. The idea is to define an intermediate decomposition where datums from different sending processors can "rendezvous" to perform a computation, in such a way that both the senders and the eventual receivers of the results can identify the appropriate rendezvous processor. Though originally designed for interpolating between overlaid grids with independent parallel decompositions (Plimpton et al., 2004), rendezvous algorithms have recently proven useful for a variety of operations in particle- or grid-based simulation codes when running large problems on large numbers of processors. In particular, we show they can perform well when a load-balanced intermediate decomposition is randomized and not spatial, requiring all-to-all communication to move data between processors. In this case rendezvous algorithms leverage the large bisection communication bandwidths which parallel machines provide. We describe how rendezvous algorithms work in a scientific computing context and give specific examples for molecular dynamics and Direct Simulation Monte Carlo codes which result in dramatic performance improvements versus simpler algorithms which do not scale as well. We explain how a generic rendezvous algorithm can be implemented, and also point out similarities with the MapReduce paradigm popularized by Google and Hadoop.
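
The generic pattern the abstract describes can be sketched with mpi4py. This is only an illustration of the idea, not the paper's implementation: keys are assumed to be integers (so `key % nprocs` deterministically picks the same rendezvous rank on every processor), and `reduce_fn` stands in for whatever computation happens at the intermediate decomposition:

```python
from mpi4py import MPI

def rendezvous(comm, items, reduce_fn):
    """Generic rendezvous exchange (a sketch under the assumptions above).

    items     -- list of (key, value) datums owned by this rank
    reduce_fn -- computation performed where datums sharing a key meet
    Returns the (key, result) pairs relevant to this rank's datums.
    """
    nprocs, rank = comm.Get_size(), comm.Get_rank()
    # 1. Each sender maps the key to a rendezvous rank; this is the
    #    randomized, load-balanced (non-spatial) intermediate decomposition.
    outbox = [[] for _ in range(nprocs)]
    for key, value in items:
        outbox[key % nprocs].append((key, value, rank))
    # 2. All-to-all exchange moves each datum to its rendezvous rank.
    inbox = comm.alltoall(outbox)
    # 3. Perform the computation where matching datums have met.
    by_key = {}
    for chunk in inbox:
        for key, value, src in chunk:
            by_key.setdefault(key, []).append((value, src))
    # 4. Route each result back to every rank that contributed a datum.
    results = [[] for _ in range(nprocs)]
    for key, contribs in by_key.items():
        result = reduce_fn(key, [v for v, _ in contribs])
        for _, src in contribs:
            results[src].append((key, result))
    returned = comm.alltoall(results)
    return [pair for chunk in returned for pair in chunk]
```

Steps 2 and 4 are the all-to-all exchanges that, as the abstract notes, exploit the machine's bisection bandwidth, and the map/group/reduce structure of steps 1–3 is where the resemblance to MapReduce comes from.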

Granular packings with sliding, rolling, and twisting friction

Physical Review E

Santos, Andrew P.; Bolintineanu, Dan S.; Grest, Gary S.; Lechman, Jeremy B.; Plimpton, Steven J.; Srivastava, Ishan; Silbert, Leonardo E.

Intuition tells us that a rolling or spinning sphere will eventually stop due to friction and other dissipative interactions. The resistance to rolling and to spinning or twisting torques that stops a sphere also changes the microstructure of a granular packing of frictional spheres by increasing the number of constraints on the degrees of freedom of motion. We perform discrete element modeling simulations to construct sphere packings implementing a range of frictional constraints under a pressure-controlled protocol. Mechanically stable packings are achievable at volume fractions and average coordination numbers as low as 0.53 and 2.5, respectively, when the particles experience high resistance to sliding, rolling, and twisting. Experimental volume fractions were reproduced only when the particle model included rolling and twisting friction.

Parallel algorithms for hyperdynamics and local hyperdynamics

Journal of Chemical Physics

Plimpton, Steven J.; Perez, Danny; Voter, Arthur F.

Hyperdynamics (HD) is a method for accelerating the timescale of standard molecular dynamics (MD). It can be used for simulations of systems with an energy potential landscape that is a collection of basins, separated by barriers, where transitions between basins are infrequent. HD enables the system to escape from a basin more quickly while allowing a statistically accurate renormalization of the simulation time, thus effectively boosting the timescale of the simulation. In the work of Kim et al. [J. Chem. Phys. 139, 144110 (2013)], a local version of HD was formulated, which exploits the intrinsic locality typical of most systems to mitigate the poor scaling of standard HD as the system size is increased. Here, we discuss how both HD and local HD can be formulated to run efficiently in parallel. We have implemented these ideas in the LAMMPS MD code, which means HD can be used with any interatomic potential LAMMPS supports. Together, these parallel methods allow simulations of any size to achieve the time acceleration offered by HD (which can be orders of magnitude), at a cost of 2–4× that of standard MD. As examples, we performed two simulations of a million-atom system, modeling the diffusion and clustering of Pt adatoms on a large patch of the Pt(100) surface for 80 μs and 160 μs.
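
The "statistically accurate renormalization of the simulation time" works by accruing boosted time at each MD step in proportion to exp(ΔV/kBT), where ΔV is the bias potential added inside a basin. A minimal sketch of that bookkeeping (an illustration of the standard HD time formula, not the LAMMPS implementation):

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def hyperdynamics_time(bias_energies, dt, temperature):
    """Accumulate the renormalized (boosted) simulation time.

    bias_energies -- per-step bias potential values dV(x) in eV,
                     evaluated at the configurations visited
    dt            -- MD timestep
    Returns (total boosted time, boost factor relative to plain MD).
    """
    beta = 1.0 / (KB_EV * temperature)
    t_hyper = sum(dt * math.exp(beta * dV) for dV in bias_energies)
    boost = t_hyper / (len(bias_energies) * dt)
    return t_hyper, boost
```

Near a barrier the bias vanishes and time accrues at the ordinary MD rate; deep in a basin the exponential factor is large, which is how microsecond trajectories like the 80 μs and 160 μs Pt examples above emerge from an affordable number of MD steps.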
