Computational design-based optimization is a widely used tool in science and engineering. Our report documents the successful use of a particle sensitivity analysis for design-based optimization within Monte Carlo sampling-based particle simulation, a capability not previously available. Such a capability enables the particle simulation communities to go beyond forward simulation, and it promises to reduce the burden on overworked analysts by getting more done with less computation.
Plasma physics simulations are vital for a host of Sandia mission concerns, for fundamental science, and for clean energy in the form of fusion power. Sandia's most mature plasma simulation capabilities come in the form of particle-in-cell (PIC) models and magnetohydrodynamic (MHD) models. MHD models work well in denser plasma regimes, where there is enough material that the plasma approximates a fluid. PIC models, on the other hand, work well in lower-density regimes, in which the particle count remains tractable; the error in PIC decays only as the inverse square root of the number of particles, making high-accuracy simulations expensive. Real-world applications, however, almost always involve a transition region between the high-density regimes where MHD is appropriate and the low-density regimes suited to PIC. In such a transition region, a direct discretization of the Vlasov equation is appropriate. Such discretizations come with their own computational costs, however: the phase-space mesh for Vlasov can involve up to six dimensions (seven if time is included), and applying appropriate homogeneous boundary conditions in velocity space requires meshing a substantial padding region to ensure that the distribution remains sufficiently close to zero at the velocity boundaries. Moreover, for collisional plasmas, the right-hand side of the Vlasov equation is a collision operator, which is non-local in velocity space and may dominate the cost of the Vlasov solver. The present LDRD project endeavors to develop modern, foundational tools for continuum-kinetic Vlasov solvers: the discontinuous Petrov-Galerkin (DPG) methodology for discretization of the Vlasov equation, and machine-learning (ML) models to enable efficient evaluation of collision operators. DPG affords several key advantages. First, it has a built-in, robust error indicator, allowing us to adapt the mesh in a very natural way, enabling a coarse velocity-space mesh near the homogeneous boundaries and a fine mesh where the solution has fine features. Second, it is an inherently high-order method with high arithmetic intensity, requiring extra local computations to determine so-called optimal test functions, which makes it particularly suited to modern hardware in which floating-point throughput is increasing faster than memory bandwidth. Finally, DPG is a residual-minimizing method, which enables high-accuracy computation: in typical cases, the method delivers something very close to the $L^2$ projection of the exact solution. Meanwhile, the ML-based collision model we adopt affords a cost structure that scales as the square root of the cost of a standard direct evaluation. Moreover, we design our model to conserve mass, momentum, and energy by construction, and our approach to training is highly flexible: it can incorporate not only synthetic data from direct-simulation Monte Carlo (DSMC) codes but also experimental data. We have developed two DPG formulations for Vlasov-Poisson: a time-marching, backward-Euler discretization and a space-time discretization. We have conducted a number of numerical experiments to verify the approach in a 1D1V setting. In this report, we detail these formulations and experiments.
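For concreteness, a minimal statement of the 1D1V Vlasov-Poisson system targeted by these formulations; this is the standard form, and the report's particular normalization and sign conventions may differ:
\[
\partial_t f + v\,\partial_x f + \frac{q}{m}\,E\,\partial_v f = 0, \qquad \partial_x E = \frac{\rho}{\epsilon_0}, \qquad \rho = q\int f\,dv + \rho_b,
\]
where $f(x,v,t)$ is the species distribution function and $\rho_b$ a neutralizing background charge. In the time-marching variant, backward Euler replaces $\partial_t f$ with $(f^{n+1}-f^n)/\Delta t$, yielding a steady convection-reaction problem in phase space at each step, while the space-time variant treats $t$ as an additional mesh dimension.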
We also summarize new theoretical results developed as part of this project (previously published as papers): new analysis of DPG for the convection-reaction problem (of which the Vlasov equation is an instance), a new exponential integrator for DPG, and numerical exploration of various DPG-based time-marching approaches to the heat equation. As part of this work, we have contributed extensively to the Camellia open-source library; we also describe the new capabilities and their usage. We have also developed a well-documented methodology for single-species collision operators, which we applied to argon and demonstrated with numerical experiments. We summarize those results here, and describe at a high level a design extending the methodology to multi-species operators. We have released a new open-source library, MLC, under a BSD license; we include a summary of its capabilities as well.
Kinetic gas dynamics in rarefied and moderate-density regimes exhibit complex behavior associated with collisional processes. These processes are generally defined by convolution integrals over a high-dimensional space (as in the Boltzmann operator) or require evaluating complex auxiliary variables (as in the Rosenbluth potentials of Fokker-Planck operators) that are challenging to implement and computationally expensive to evaluate. In this work, we develop a data-driven neural network model that augments a simple and inexpensive BGK collision operator with a machine-learned correction term, improving the fidelity of the simple operator at small overhead to the overall runtime. The composite collision operator has tunable fidelity and, in this work, is trained on and tested against a direct-simulation Monte Carlo (DSMC) collision operator.
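As a sketch of the structure described above (the names here are illustrative, not the report's notation), the composite operator takes the form
\[
C(f) \;=\; C_{\mathrm{BGK}}(f) \;+\; N_\theta(f),
\]
where $N_\theta$ is the machine-learned correction with trainable parameters $\theta$. One common way to obtain conservation by construction is to project the network output onto the complement of the collision invariants, i.e., to require
\[
\int \bigl(1,\; v,\; |v|^2\bigr)^{\mathsf T}\, N_\theta(f)\, dv \;=\; 0,
\]
so that the mass, momentum, and energy moments are preserved regardless of the learned weights.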
This report describes the high-level accomplishments of the Plasma Science and Engineering Grand Challenge LDRD at Sandia National Laboratories. The Laboratory needs demonstrated predictive capabilities for modeling plasma phenomena in order to accelerate engineering development in several mission areas. The purpose of this Grand Challenge LDRD was to advance the fundamental models, methods, and algorithms, along with the supporting electrode science foundation, to enable a revolutionary shift toward predictive plasma engineering design principles. This project integrated the SNL knowledge base in computer science, plasma physics, materials science, applied mathematics, and relevant application engineering to establish new cross-laboratory collaborations on these topics. As an initial exemplar, the project focused on improving the multi-scale modeling capabilities used to predict electrical power delivery on large-scale pulsed power accelerators. Specifically, the LDRD was structured into three primary research thrusts that, when integrated, enable complex simulations of these devices: (1) the exploration of multi-scale models describing the desorption of contaminants from pulsed power electrodes, (2) the development of improved algorithms and code technologies to treat the multi-physics phenomena required to predict device performance, and (3) the creation of a rigorous verification and validation infrastructure to evaluate the codes and models across a range of challenge problems. These components were integrated into initial demonstrations of the largest simulations of multi-level vacuum power flow completed to date, executed on the leading HPC machines available in the NNSA complex. These preliminary studies indicate that relevant pulsed power engineering design simulations can now be completed in times on the order of several days, a significant improvement over pre-LDRD levels of performance.
Particle-in-cell (PIC) simulation methods are attractive for representing species distribution functions in plasmas. However, as a model, they introduce uncertain parameters, and for quantifying their prediction uncertainty it is useful to be able to assess the sensitivity of a quantity-of-interest (QoI) to these parameters. Such sensitivity information is likewise useful for optimization. However, computing sensitivity for PIC methods is challenging due to the chaotic particle dynamics, and sensitivity techniques remain underdeveloped compared to those for Eulerian discretizations. This challenge is examined from a dual particle–continuum perspective that motivates a new sensitivity discretization. Two routes to sensitivity computation are presented and compared: a direct fully-Lagrangian particle-exact approach provides sensitivities of each particle trajectory, while a new particle-pdf discretization is formulated from a continuum perspective but discretized by particles to take advantage of the same type of Lagrangian particle description leveraged by PIC methods. Since the sensitivity particles in this approach are only indirectly linked to the plasma-PIC particles, they can be positioned and weighted independently for efficiency and accuracy. The corresponding numerical algorithms are presented in mathematical detail. The advantage of the particle-pdf approach in avoiding the spurious chaotic sensitivity of the particle-exact approach is demonstrated for Debye shielding and sheath configurations. In essence, the continuum perspective makes implicit the distinctness of the particles, which circumvents the Lyapunov instability of the N-body PIC system. The cost of the particle-pdf approach is comparable to that of the baseline PIC simulation.
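Schematically, and under the simplification of a single scalar parameter $\alpha$ (the abstract's full formulation is more general), the particle-pdf approach evolves the sensitivity field $g = \partial f/\partial\alpha$, which satisfies the linearized Vlasov equation obtained by differentiating the governing equation with respect to $\alpha$:
\[
\partial_t g + v\,\partial_x g + \frac{q}{m}\,E\,\partial_v g \;=\; -\frac{q}{m}\,\frac{\partial E}{\partial \alpha}\,\partial_v f,
\]
with $\partial E/\partial\alpha$ obtained self-consistently from the Poisson equation driven by the charge moment of $g$. Because $g$ is a smooth field rather than a derivative of individual trajectories, it can be discretized with its own particles, independent of the plasma-PIC particles.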
Using random walk analyses, we explore diffusive transport on networks obtained from contacts between isotropically compressed, monodisperse, frictionless sphere packings generated over a range of pressures in the vicinity of the jamming transition p→0. For conductive particles in an insulating medium, conduction is determined by the particle contact network, with nodes representing particle centers and edges representing contacts between particles. The transition rates are not homogeneous; they are distributed inhomogeneously owing to the randomness of the packing and the concomitant disorder of the contact network, e.g., the distribution of the coordination number. A narrow-escape time scale is used to write a Markov process for random walks on the particle contact network. This stochastic process is analyzed in terms of the spectral density of the transition rate matrix, which is random, sparse, Euclidean, real-symmetric, and positive-semidefinite. Results show that network structures derived from jammed particles have properties similar to ordered, Euclidean lattices, but also some unique properties that distinguish them from other structures that are in some sense more homogeneous. In particular, the distribution of eigenvalues of the transition rate matrix follows a power law with spectral dimension 3. However, quantitative details of the statistics of the eigenvectors show subtle differences from homogeneous lattices and allow us to distinguish between topological and geometric sources of disorder in the network.
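To make the pipeline concrete, the following sketch assembles the transition rate matrix of a contact network and computes its spectrum. It is a minimal sketch under illustrative assumptions (uniform random points rather than a jammed packing, homogeneous placeholder rates rather than the narrow-escape rates), not the study's actual code; the spectral dimension $d_s$ is then read off from the low-eigenvalue power law of the empirical spectral density, $\rho(\lambda) \sim \lambda^{d_s/2 - 1}$.
```python
# Minimal sketch: spectral density of a contact-network transition rate matrix.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix

rng = np.random.default_rng(0)
n = 2000
centers = rng.random((n, 3))        # stand-in for jammed sphere centers
pairs = cKDTree(centers).query_pairs(r=0.08, output_type='ndarray')

i, j = pairs[:, 0], pairs[:, 1]
rate = np.ones(len(pairs))          # placeholder for narrow-escape rates k_ij
rows = np.concatenate([i, j])
cols = np.concatenate([j, i])
vals = np.concatenate([-rate, -rate])                  # off-diagonals: -k_ij
L = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
diag = -np.asarray(L.sum(axis=1)).ravel()              # L_ii = sum_j k_ij
L = L + coo_matrix((diag, (np.arange(n), np.arange(n))), shape=(n, n))

# Full spectrum of the symmetric, positive-semidefinite rate matrix;
# fit rho(lambda) ~ lambda^(d_s/2 - 1) at small lambda to estimate d_s.
eigvals = np.linalg.eigvalsh(L.toarray())
```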
Vibration sensing is critical for a variety of applications, from structural fatigue monitoring to understanding the modes of airplane wings. In particular, remote sensing techniques are needed for measuring the vibrations of multiple points simultaneously, assessing vibrations inside opaque metal vessels, and sensing through smoke clouds and other optically challenging environments. In this paper, we propose a method that measures high-frequency displacements remotely using changes in the magnetic field generated by permanent magnets. We leverage the unique nature of vibration tracking and use a calibrated local model technique developed specifically to improve frequency-domain estimation accuracy. The results show that two-dimensional local models surpass the dipole model in tracking high-frequency motions. A theoretical basis for understanding the effects of electronic noise and error due to correlated variables is developed in order to predict the performance of experiments prior to implementation. Simultaneous measurements of up to three independent vibrating components are shown. The accuracy of the magnet-based displacement tracking relative to video tracking ranges from 40 to 190 μm when the maximum displacements approach ±5 mm and sensor-to-magnet distances vary from 25 to 36 mm. Lastly, vibration sensing inside an opaque metal vessel and mode-shape changes due to damage on an aluminum beam are also studied using the wireless permanent-magnet vibration sensing scheme.
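For reference, the baseline point-dipole model referred to above is the standard far-field expression
\[
\mathbf{B}(\mathbf{r}) \;=\; \frac{\mu_0}{4\pi}\,\frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\,\hat{\mathbf{r}} - \mathbf{m}}{\lVert \mathbf{r}\rVert^{3}},
\]
where $\mathbf{m}$ is the magnetic moment and $\mathbf{r}$ the sensor-to-magnet displacement. The calibrated local models replace this global expression with data-driven fits of the measured field in a neighborhood of the operating point; the specific parameterization is particular to the paper.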
In many applications, the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. In this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. We conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.
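For orientation, the classical multiscale finite element construction solves, on each coarse element $K$, a fine-scale problem of the form
\[
\mathcal{L}\,\phi_i = 0 \ \text{in } K, \qquad \phi_i = \ell_i \ \text{on } \partial K,
\]
where $\mathcal{L}$ is the fine-scale operator and $\ell_i$ the trace of a standard coarse basis function; the resulting $\phi_i$ serve as the enriched macroscale basis. In the nonlocal setting sketched here (a schematic, not the study's precise formulation), $\mathcal{L}$ would be the peridynamic operator, with the boundary condition replaced by a volume constraint on an interaction layer surrounding $K$.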
The need to better represent material properties within the Earth's interior has driven the development of higher-fidelity physics, e.g., visco-tilted-transversely-isotropic (visco-TTI) elastic media and material interfaces such as the ocean bottom and salt boundaries. This is especially true for full waveform inversion (FWI), where one would like to reproduce real-world effects and invert on unprocessed raw data. Here we present a numerical formulation using a discontinuous Galerkin (DG) finite-element (FE) method, which incorporates the desired high-fidelity physics and material interfaces. To offset the additional costs of this material representation, we include a variety of techniques (e.g., non-conformal meshing and local polynomial refinement) that reduce the overall cost with little effect on solution accuracy.