Publications

Results 1–25 of 73

Aleph Field Solver Challenge Problem Results Summary

Hooper, Russell H.; Moore, Stan G.

Aleph models continuum electrostatic and steady and transient thermal fields using a finite-element method. Much work has gone into expanding the core solver capability to support enriched modeling consisting of multiple interacting fields, special boundary conditions and two-way interfacial coupling with particles modeled using Aleph's complementary particle-in-cell capability. This report provides quantitative evidence for correct implementation of Aleph's field solver via order-of-convergence assessments on a collection of problems of increasing complexity. It is intended to provide Aleph with a pedigree and to establish a basis for confidence in results for more challenging problems important to Sandia's mission that Aleph was specifically designed to address.
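
As background on the verification approach described above, the sketch below shows how an observed order of convergence is typically extracted from errors measured on successively refined meshes; the mesh sizes and error values are hypothetical placeholders, not Aleph output.

```python
import math

# Hypothetical discretization errors measured on three successively refined
# meshes (element size halved each time); placeholder values, not Aleph results.
mesh_sizes = [0.1, 0.05, 0.025]          # characteristic element size h
errors     = [4.1e-3, 1.0e-3, 2.6e-4]    # e.g. L2 error against a manufactured solution

# Observed order of convergence between consecutive refinements:
# p = log(e_coarse / e_fine) / log(h_coarse / h_fine)
for (h1, e1), (h2, e2) in zip(zip(mesh_sizes, errors),
                              zip(mesh_sizes[1:], errors[1:])):
    p = math.log(e1 / e2) / math.log(h1 / h2)
    print(f"h: {h1:g} -> {h2:g}, observed order p = {p:.2f}")
```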


Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report

Thompson, Aidan P.; Schultz, Peter A.; Crozier, Paul C.; Moore, Stan G.; Swiler, Laura P.; Stephens, John A.; Trott, Christian R.; Foiles, Stephen M.; Tucker, Garritt J.

This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers and advanced processor architectures. Finally, we briefly describe the MSM method for efficient calculation of electrostatic interactions on massively parallel computers.
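
The fitting step described above reduces to a weighted linear least-squares problem in which the SNAP coefficients multiply per-configuration bispectrum components. The sketch below illustrates that regression with a synthetic design matrix; the bispectrum calculation itself lives in LAMMPS and is not reproduced, and the array shapes and weights here are illustrative assumptions, not FitSnap.py's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the SNAP design matrix: each row holds summed
# bispectrum components for one training row (energy, force, or stress
# entry); each column corresponds to one SNAP coefficient.
n_rows, n_coeffs = 200, 30
A = rng.normal(size=(n_rows, n_coeffs))
y = rng.normal(size=n_rows)      # QM reference values (synthetic here)
w = np.ones(n_rows)              # per-row weights (tunable hyperparameters in practice)

# Weighted linear least squares: minimize || sqrt(w) * (A @ c - y) ||^2
sw = np.sqrt(w)
coeffs, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
print("first few fitted coefficients:", coeffs[:5])
```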


Direct simulation Monte Carlo on petaflop supercomputers and beyond

Physics of Fluids

Plimpton, Steven J.; Moore, Stan G.; Borner, A.; Stagg, Alan K.; Koehler, T.P.; Torczynski, J.R.; Gallis, Michail A.

The gold-standard definition of the Direct Simulation Monte Carlo (DSMC) method is given in the 1994 book by Bird [Molecular Gas Dynamics and the Direct Simulation of Gas Flows (Clarendon Press, Oxford, UK, 1994)], which refined his pioneering earlier papers in which he first formulated the method. In the intervening 25 years, DSMC has become the method of choice for modeling rarefied gas dynamics in a variety of scenarios. The chief barrier to applying DSMC to more dense or even continuum flows is its computational expense compared to continuum computational fluid dynamics methods. The dramatic (nearly billion-fold) increase in speed of the largest supercomputers over the last 30 years has thus been a key enabling factor in using DSMC to model a richer variety of flows, due to the method's inherent parallelism. We have developed the open-source SPARTA DSMC code with the goal of running DSMC efficiently on the largest machines, both current and future. It is largely an implementation of Bird's 1994 formulation. Here, we describe algorithms used in SPARTA to enable DSMC to operate in parallel at the scale of many billions of particles or grid cells, or with billions of surface elements. We give a few examples of the kinds of fundamental physics questions and engineering applications that DSMC can address at these scales.
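
For readers unfamiliar with the method's core kernel, the sketch below illustrates Bird's no-time-counter collision selection for a single cell of hard-sphere particles; it is a textbook-style illustration under simplified assumptions, not SPARTA's parallel implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def ntc_collisions(vel, dt, cell_volume, f_num, diameter, sigma_g_max):
    """Bird-style no-time-counter pair selection within one cell (hard spheres).

    vel         : (N, 3) velocities of the simulated particles in the cell
    f_num       : number of real molecules represented by one simulated particle
    sigma_g_max : running estimate of max(cross_section * relative_speed)
    """
    n = len(vel)
    if n < 2:
        return vel
    sigma = np.pi * diameter**2                      # hard-sphere cross section
    # Candidate pairs: 0.5 * N * (N - 1) * Fnum * (sigma*g)_max * dt / Vcell
    n_cand = int(0.5 * n * (n - 1) * f_num * sigma_g_max * dt / cell_volume)
    for _ in range(n_cand):
        i, j = rng.choice(n, size=2, replace=False)
        g = np.linalg.norm(vel[i] - vel[j])          # relative speed
        if rng.random() < sigma * g / sigma_g_max:   # acceptance test
            # Isotropic post-collision scattering of the relative velocity
            vcm = 0.5 * (vel[i] + vel[j])
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = np.sqrt(1.0 - cos_t**2)
            phi = 2.0 * np.pi * rng.random()
            gnew = g * np.array([cos_t, sin_t * np.cos(phi), sin_t * np.sin(phi)])
            vel[i], vel[j] = vcm + 0.5 * gnew, vcm - 0.5 * gnew
    return vel
```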


DSMC simulations of turbulent flows at moderate Reynolds numbers

AIP Conference Proceedings

Gallis, Michail A.; Torczynski, J.R.; Bitter, Neal B.; Koehler, Timothy P.; Moore, Stan G.; Plimpton, Steven J.; Papadakis, G.

The Direct Simulation Monte Carlo (DSMC) method has been used for more than 50 years to simulate rarefied gases. The advent of modern supercomputers has brought higher-density near-continuum flows within range. This in turn has revived the debate as to whether the Boltzmann equation, which assumes molecular chaos, can be used to simulate continuum flows when they become turbulent. In an effort to settle this debate, two canonical turbulent flows are examined, and the results are compared to available continuum theoretical and numerical results for the Navier-Stokes equations.


Effect of an external field on capillary waves in a dipolar fluid

Physical Review E

Koski, Jason K.; Moore, Stan G.; Grest, Gary S.; Stevens, Mark J.

The role of an external field on capillary waves at the liquid-vapor interface of a dipolar fluid is investigated using molecular dynamics simulations. For fields parallel to the interface, the interfacial width squared increases linearly with respect to the logarithm of the size of the interface across all field strengths tested. The value of the slope decreases with increasing field strength, indicating that the field dampens the capillary waves. With the inclusion of the parallel field, the surface stiffness increases with increasing field strength faster than the surface tension. For fields perpendicular to the interface, the interfacial width squared is linear with respect to the logarithm of the size of the interface for small field strengths, and the surface stiffness is less than the surface tension. Above a critical field strength that decreases as the size of the interface increases, the interface becomes unstable due to the increased amplitude of the capillary waves.
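
For context, the analysis described above rests on the standard capillary-wave relation between the squared interfacial width, the lateral interface size L, and the surface stiffness; the form below is the textbook expression, not a formula quoted from the paper.

```latex
% Capillary-wave broadening of the intrinsic interfacial width w_0:
% the slope of w^2 versus ln L is inversely proportional to the surface
% stiffness \tilde{\gamma}, which reduces to the surface tension \gamma
% for an isotropic, field-free interface.
w^2 \;=\; w_0^2 \;+\; \frac{k_B T}{2\pi\,\tilde{\gamma}}\,\ln\!\left(\frac{L}{\ell_0}\right)
```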


EMPIRE-PIC: A performance portable unstructured particle-in-cell code

Communications in Computational Physics

Bettencourt, Matthew T.; Brown, Dominic A.S.; Cartwright, Keith L.; Cyr, Eric C.; Glusa, Christian A.; Lin, Paul T.; Moore, Stan G.; McGregor, Duncan A.O.; Pawlowski, Roger P.; Phillips, Edward G.; Roberts, Nathan V.; Wright, Steven A.; Maheswaran, Satheesh; Jones, John P.; Jarvis, Stephen A.

In this paper we introduce EMPIRE-PIC, a finite element method particle-in-cell (FEM-PIC) application developed at Sandia National Laboratories. The code has been developed in C++ using the Trilinos library and the Kokkos Performance Portability Framework to enable running on multiple modern compute architectures while only requiring maintenance of a single codebase. EMPIRE-PIC is capable of solving both electrostatic and electromagnetic problems in two and three dimensions to second-order accuracy in space and time. In this paper we validate the code against three benchmark problems: a simple electron orbit, an electrostatic Langmuir wave, and a transverse electromagnetic wave propagating through a plasma. We demonstrate the performance of EMPIRE-PIC on four different architectures: Intel Haswell CPUs, Intel's Xeon Phi Knights Landing, ARM ThunderX2 CPUs, and NVIDIA Tesla V100 GPUs attached to IBM POWER9 processors. This analysis demonstrates scalability of the code up to more than two thousand GPUs and greater than one hundred thousand CPUs.
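
As an illustration of a second-order-in-time particle update of the kind a PIC code employs, the sketch below shows a standard non-relativistic Boris push in Python; EMPIRE-PIC itself is implemented in C++ on unstructured finite elements with Kokkos, and this is not its code.

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Standard non-relativistic Boris update (second order in time); illustrative only.

    x, v : (3,) particle position and velocity
    E, B : (3,) electric and magnetic fields interpolated to the particle
    """
    qmdt2 = 0.5 * q * dt / m
    v_minus = v + qmdt2 * E                  # first half electric acceleration
    t = qmdt2 * B                            # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    v_new = v_plus + qmdt2 * E               # second half electric acceleration
    x_new = x + v_new * dt                   # leapfrog position update
    return x_new, v_new
```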


Highly scalable discrete-particle simulations with novel coarse-graining: accessing the microscale

Molecular Physics

Mattox, Timothy I.; Larentzos, James P.; Moore, Stan G.; Stone, Christopher P.; Ibanez, Daniel A.; Thompson, Aidan P.; Lísal, Martin; Brennan, John K.; Plimpton, Steven J.

Simulating energetic materials with complex microstructure is a grand challenge: until recently, an inherent gap in computational capabilities existed in modelling grain-scale effects at the microscale. We have enabled a critical capability in modelling the multiscale nature of the energy release and propagation mechanisms in advanced energetic materials by implementing, in the widely used LAMMPS molecular dynamics (MD) package, several novel coarse-graining techniques that also treat chemical reactivity. Our innovative algorithmic developments rooted within the dissipative particle dynamics framework, along with performance optimisations and application of acceleration technologies, have enabled extensions in both the length and time scales far beyond those ever realised by atomistic reactive MD simulations. In this paper, we demonstrate these advances by modelling a shockwave propagating through a microstructured material and comparing performance with the state-of-the-art in atomistic reactive MD techniques. As a result of this work, unparalleled explorations in energetic materials research are now possible.
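
For orientation, the dissipative particle dynamics framework mentioned above combines conservative, dissipative, and random pairwise forces; the sketch below shows a generic single-pair DPD force under standard assumptions, not the paper's reactive coarse-grain models or their LAMMPS implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def dpd_pair_force(r_ij, v_ij, a, gamma, kT, rc, dt):
    """Generic DPD force on particle i from particle j (illustrative only).

    r_ij, v_ij : (3,) relative position and velocity (i minus j)
    a, gamma   : repulsion and friction coefficients; kT sets the thermostat target
    rc         : pair cutoff; dt is the timestep used to scale the random force
    """
    r = np.linalg.norm(r_ij)
    if r >= rc:
        return np.zeros(3)
    e = r_ij / r                                 # unit vector along the pair
    w = 1.0 - r / rc                             # standard DPD weight function
    f_c = a * w * e                              # conservative soft repulsion
    f_d = -gamma * w**2 * np.dot(e, v_ij) * e    # dissipative (friction) force
    sigma = np.sqrt(2.0 * gamma * kT)            # fluctuation-dissipation relation
    f_r = sigma * w * rng.normal() / np.sqrt(dt) * e   # random force
    return f_c + f_d + f_r
```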


Integrated System and Application Continuous Performance Monitoring and Analysis Capability

Aaziz, Omar R.; Allan, Benjamin A.; Brandt, James M.; Cook, Jeanine C.; Devine, Karen D.; Elliott, James E.; Gentile, Ann C.; Hammond, Simon D.; Kelley, Brian M.; Lopatina, Lena L.; Moore, Stan G.; Olivier, Stephen L.; Pedretti, Kevin P.; Poliakoff, David Z.; Pawlowski, Roger P.; Regier, Phillip A.; Schmitz, Mark E.; Schwaller, Benjamin S.; Surjadidjaja, Vanessa S.; Swan, Matthew S.; Tucker, Nick T.; Tucker, Tom T.; Vaughan, Courtenay T.; Walton, Sara P.

Scientific applications run on high-performance computing (HPC) systems are critical for many national security missions within Sandia and the NNSA complex. However, these applications often face performance degradation and even failures that are challenging to diagnose. To provide unprecedented insight into these issues, the HPC Development, HPC Systems, Computational Science, and Plasma Theory & Simulation departments at Sandia crafted and completed their FY21 ASC Level 2 milestone entitled "Integrated System and Application Continuous Performance Monitoring and Analysis Capability." The milestone created a novel integrated HPC system and application monitoring and analysis capability by extending Sandia's Kokkos application portability framework, Lightweight Distributed Metric Service (LDMS) monitoring tool, and scalable storage, analysis, and visualization pipeline. The extensions to Kokkos and LDMS enable collection and storage of application data during run time, as it is generated, with negligible overhead. This data is combined with HPC system data within the extended analysis pipeline to present relevant visualizations of derived system and application metrics that can be viewed at run time or post run. This new capability was evaluated using several week-long, 290-node runs of Sandia's ElectroMagnetic Plasma In Realistic Environments (EMPIRE) modeling and design tool and resulted in 1 TB of application data and 50 TB of system data. EMPIRE developers remarked that this capability was incredibly helpful for quickly assessing application health and performance alongside system state. In short, this milestone work built the foundation for an expansive HPC system and application data collection, storage, analysis, visualization, and feedback framework that will increase the total scientific output of Sandia's HPC users.
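
As a hypothetical illustration of the kind of post-run analysis such a pipeline enables, the sketch below joins per-node application samples with system samples by node and timestamp to form a derived metric; the file names, column names, and integer-second timestamps are invented for the example and are not the milestone's actual schema.

```python
import pandas as pd

# Hypothetical CSV exports; the real pipeline stores Kokkos and LDMS data
# in its own formats and schema.
app = pd.read_csv("app_samples.csv")   # columns: node, timestamp (int s), kernel_time_s
sys = pd.read_csv("sys_samples.csv")   # columns: node, timestamp (int s), mem_used_gb

# Align application and system samples per node at one-second tolerance,
# then derive a combined metric for plotting or run-time dashboards.
merged = pd.merge_asof(
    app.sort_values("timestamp"),
    sys.sort_values("timestamp"),
    on="timestamp", by="node",
    tolerance=1, direction="nearest",
)
merged["mem_gb_per_kernel_second"] = merged["mem_used_gb"] / merged["kernel_time_s"]
print(merged.head())
```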


Integrated System and Application Continuous Performance Monitoring and Analysis Capability

Brandt, James M.; Cook, Jeanine C.; Aaziz, Omar R.; Allan, Benjamin A.; Devine, Karen D.; Elliott, James J.; Gentile, Ann C.; Hammond, Simon D.; Kelley, Brian M.; Lopatina, Lena L.; Moore, Stan G.; Olivier, Stephen L.; Pedretti, Kevin P.; Poliakoff, David Z.; Pawlowski, Roger P.; Regier, Phillip A.; Schmitz, Mark E.; Schwaller, Benjamin S.; Surjadidjaja, Vanessa S.; Swan, Matthew S.; Tucker, Tom T.; Tucker, Nick T.; Vaughan, Courtenay T.; Walton, Sara P.

Abstract not provided.
