Publications

Permutation-adapted complete and independent basis for atomic cluster expansion descriptors

Journal of Computational Physics

Goff, James M.; Sievers, C.; Wood, Mitchell A.; Thompson, Aidan P.

Atomic cluster expansion (ACE) methods provide a systematic way to describe local particle environments at arbitrary body order. For practical applications it is often required that the basis of cluster functions be symmetrized with respect to rotations and permutations. Existing methodologies yield sets of symmetrized functions that are over-complete. These methodologies thus require an additional numerical procedure, such as singular value decomposition (SVD), to eliminate redundant functions. In this work, it is shown that analytical linear relationships for subsets of cluster functions may be derived using recursion and permutation properties of generalized Wigner symbols. From these relationships, subsets (blocks) of cluster functions can be selected such that, within each block, functions are guaranteed to be linearly independent. It is conjectured that this block-wise independent set of permutation-adapted rotation and permutation invariant (PA-RPI) functions forms a complete, independent basis for ACE. Along with the first analytical proofs of block-wise linear dependence of ACE cluster functions and other theoretical arguments, numerical results are offered to demonstrate this. The utility of the method is demonstrated in the development of an ACE interatomic potential for tantalum. Using the new basis functions in combination with Bayesian compressive sensing sparse regression, some high-degree descriptors are observed to persist and help achieve high-accuracy models.

Accurate Compression of Tabulated Chemistry Models with Partition of Unity Networks

Combustion Science and Technology

Armstrong, Elizabeth A.; Hansen, Michael A.; Knaus, Robert C.; Trask, Nathaniel A.; Hewson, John C.; Sutherland, James C.

Tabulated chemistry models are widely used to simulate large-scale turbulent fires in applications including energy generation and fire safety. Tabulation via piecewise Cartesian interpolation suffers from the curse-of-dimensionality, leading to a prohibitive exponential growth in parameters and memory usage as more dimensions are considered. Artificial neural networks (ANNs) have attracted attention for constructing surrogates for chemistry models due to their ability to perform high-dimensional approximation. However, due to well-known pathologies regarding the realization of suboptimal local minima during training, in practice they do not converge and provide unreliable accuracy. Partition of unity networks (POUnets) are a recently introduced family of ANNs which preserve notions of convergence while performing high-dimensional approximation, discovering a mesh-free partition of space which may be used to perform optimal polynomial approximation. We assess their performance with respect to accuracy and model complexity in reconstructing unstructured flamelet data representative of nonadiabatic pool fire models. Our results show that POUnets can provide the desirable accuracy of classical spline-based interpolants with the low memory footprint of traditional ANNs while converging faster to significantly lower errors than ANNs. For example, we observe POUnets obtaining target accuracies in two dimensions with 40 to 50 times less memory and roughly double the compression in three dimensions. We also address the practical matter of efficiently training accurate POUnets by studying convergence over key hyperparameters, the impact of partition/basis formulation, and the sensitivity to initialization.
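
To make the partition-of-unity idea above concrete, the following minimal Python sketch evaluates a one-dimensional partition-of-unity approximant: softmax-normalized partition functions weight local polynomials, so the output is a convex combination of per-partition fits. The Gaussian partition parameterization, function names, and toy coefficients are illustrative assumptions, not the authors' POUnet implementation or training procedure.

```python
import numpy as np

def pou_predict(x, centers, widths, poly_coeffs):
    """Evaluate a 1D partition-of-unity approximant at points x.

    Each partition j has a softmax-normalized weight phi_j(x) and its own
    linear polynomial a_j + b_j * x; the output is sum_j phi_j(x) * p_j(x).
    """
    # Unnormalized log-weights: negative squared distance to each partition center.
    logits = -((x[:, None] - centers[None, :]) / widths[None, :]) ** 2
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)     # partition of unity: rows sum to 1
    polys = poly_coeffs[None, :, 0] + poly_coeffs[None, :, 1] * x[:, None]
    return (weights * polys).sum(axis=1)

# Toy usage: approximate sin(x) with 4 partitions and hand-set tangent-line coefficients.
x = np.linspace(0.0, 2.0 * np.pi, 200)
centers = np.linspace(0.0, 2.0 * np.pi, 4)
widths = np.full(4, 1.5)
coeffs = np.stack([np.sin(centers) - np.cos(centers) * centers, np.cos(centers)], axis=1)
print(np.max(np.abs(pou_predict(x, centers, widths, coeffs) - np.sin(x))))
```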

Enabling power measurement and control on Astra: The first petascale Arm supercomputer

Concurrency and Computation: Practice and Experience

Grant, Ryan E.; Hammond, Simon D.; Laros, James H.; Levenhagen, Michael J.; Olivier, Stephen L.; Ward, Lee; Younge, Andrew J.

Astra, deployed in 2018, was the first petascale supercomputer to utilize processors based on the Arm instruction set. The system was also the first under Sandia's Vanguard program, which seeks to provide an evaluation vehicle for novel technologies that, with refinement, could be utilized in demanding, large-scale HPC environments. In addition to Arm, several other important first-of-a-kind developments were used in the machine, including new approaches to cooling the datacenter and machine. This article documents our experiences building a power measurement and control infrastructure for Astra. While this is often beyond the control of users today, the accurate measurement, cataloging, and evaluation of power, as our experiences show, is critical to the successful deployment of a large-scale platform. While such systems exist in part for other architectures, Astra required new development to support the novel Marvell ThunderX2 processor used in its compute nodes. In addition to documenting the measurement of power during system bring-up and for subsequent ongoing routine use, we present results associated with controlling the power usage of the processor, an area of progressively greater interest as data centers and supercomputing sites look to improve compute/energy efficiency and find additional sources for full-system optimization.

Peridynamic Model for Single-Layer Graphene Obtained from Coarse-Grained Bond Forces

Journal of Peridynamics and Nonlocal Modeling

D'Elia, Marta D.; Silling, Stewart A.; You, Huaiqian; Yu, Yue; Fermen-Coker, Muge

An ordinary state-based peridynamic material model is proposed for single-sheet graphene. The model is calibrated using coarse-grained molecular dynamics simulations. The coarse-graining method allows the dependence of bond force on bond length, as well as the horizon, to be determined. The peridynamic model allows the horizon to be rescaled, providing a multiscale capability and allowing for substantial reductions in computational cost compared with molecular dynamics. The calibrated peridynamic model is compared to experimental data on the deflection and perforation of a graphene sheet by an atomic force microscope probe.

Not so HOT Triangulations

CAD Computer Aided Design

Mitchell, Scott A.; Knupp, Patrick; Mackay, Sarah; Deakin, Michael F.

We propose primal–dual mesh optimization algorithms that overcome shortcomings of the standard algorithm while retaining some of its desirable features. “Hodge-Optimized Triangulations” defines the “HOT energy” as a bound on the discretization error of the diagonalized Delaunay Hodge star operator. The HOT energy is a natural choice for an objective function, but it is unstable for both mathematical and algorithmic reasons: it has minima for collapsed edges, and its extrapolation to non-regular triangulations is inaccurate and has unbounded minima. We propose a different extrapolation with a stronger theoretical foundation, and avoid extrapolation by recalculating the objective just beyond the flip threshold. We propose new objectives, based on normalizations of the HOT energy, with barriers to edge collapses and other undesirable configurations. We propose mesh improvement algorithms that couple these objectives with discrete connectivity changes. When HOT optimization nearly collapses an edge, we actually collapse the edge. Otherwise, we use the barrier objective to update positions and weights and to remove vertices. By combining discrete connectivity changes with continuous optimization, we more fully explore the space of possible meshes and obtain higher-quality solutions.

What can simulation test beds teach us about social science? Results of the ground truth program

Computational and Mathematical Organization Theory

Naugle, Asmeret B.; Krofcheck, Daniel J.; Warrender, Christina E.; Lakkaraju, Kiran L.; Swiler, Laura P.; Verzi, Stephen J.; Emery, Benjamin F.; Murdock, Jaimie; Bernard, Michael L.; Romero, Vicente J.

The ground truth program used simulations as test beds for social science research methods. The simulations had known ground truth and were capable of producing large amounts of data. This allowed research teams to run experiments and ask questions of these simulations similar to social scientists studying real-world systems, and enabled robust evaluation of their causal inference, prediction, and prescription capabilities. We tested three hypotheses about research effectiveness using data from the ground truth program, specifically looking at the influence of complexity, causal understanding, and data collection on performance. We found some evidence that system complexity and causal understanding influenced research performance, but no evidence that data availability contributed. The ground truth program may be the first robust coupling of simulation test beds with an experimental framework capable of teasing out factors that determine the success of social science research.

Partitioning Communication Streams Into Graph Snapshots

IEEE Transactions on Network Science and Engineering

Wendt, Jeremy D.; Field, Richard V.; Phillips, Cynthia A.; Prasadan, Arvind P.; Wilson, Tegan; Soundarajan, Sucheta; Bhowmick, Sanjukta

We present EASEE (Edge Advertisements into Snapshots using Evolving Expectations) for partitioning streaming communication data into static graph snapshots. Given streaming communication events (A talks to B), EASEE identifies when enough events have accumulated to form a static graph (a snapshot). EASEE uses combinatorial statistical models to adaptively find when a snapshot is stable, while watching for significant data shifts that indicate a new snapshot should begin. If snapshots are not chosen carefully, they poorly represent the underlying data and downstream graph analytics fail; we show a community detection example. We demonstrate EASEE's strengths against several real-world datasets, and its accuracy against known-answer synthetic datasets. Results on the synthetic datasets show that (1) EASEE finds known-answer data shifts very quickly, and (2) ignoring these shifts drastically affects analytics on the resulting snapshots. We show that previous work misses these shifts. Further, we evaluate EASEE against seven real-world datasets (330 thousand to 2.5 billion events), and find snapshot-over-time behaviors missed by previous works. Finally, we show that the resulting snapshots' measured properties (e.g., graph density) are altered by how snapshots are identified from the communication event stream. In particular, EASEE's snapshots do not generally 'densify' over time, contradicting previous influential results that used simpler partitioning methods.
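
The snapshot-partitioning problem described above can be illustrated with a deliberately simple rate-change detector; this is not the EASEE algorithm (which uses combinatorial statistical models), and the window size and threshold below are arbitrary illustrative choices.

```python
import numpy as np

def partition_events(timestamps, window=500, change_threshold=4.0):
    """Split a sorted stream of event times into snapshots at large rate shifts.

    A snapshot is closed when the event rate in the newest window deviates
    from the snapshot's running rate by more than `change_threshold` (relative).
    """
    boundaries = [0]
    for end in range(window, len(timestamps), window):
        start = boundaries[-1]
        hist_span = timestamps[end - window] - timestamps[start]
        new_span = timestamps[end - 1] - timestamps[end - window]
        if hist_span <= 0 or new_span <= 0:
            continue
        rate_hist = (end - window - start) / hist_span
        rate_new = window / new_span
        if rate_hist > 0 and abs(rate_new - rate_hist) / rate_hist > change_threshold:
            boundaries.append(end - window)      # a data shift: start a new snapshot
    boundaries.append(len(timestamps))
    return list(zip(boundaries[:-1], boundaries[1:]))

# Toy stream whose event rate jumps by a factor of ten halfway through.
rng = np.random.default_rng(0)
t = np.cumsum(np.concatenate([rng.exponential(1.0, 5000), rng.exponential(0.1, 5000)]))
print(partition_events(t))
```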

Monotonic Gaussian Process for Physics-Constrained Machine Learning With Materials Science Applications

Journal of Computing and Information Science in Engineering

Laros, James H.; Maupin, Kathryn A.; Rodgers, Theron R.

Physics-constrained machine learning is emerging as an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting model requires significantly less data to train. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. The Gaussian process (GP) is perhaps one of the most common machine learning methods for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on three different materials datasets: one experimental and two computational. The monotonic GP is compared against the regular GP, and a significant reduction in the posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime the monotonic effect fades as one goes beyond the training dataset. Imposing monotonicity on the GP comes at a small accuracy cost compared to the regular GP. The monotonic GP is perhaps most useful in applications where data are scarce and noisy, and monotonicity is supported by strong physical evidence.

Monolithic Multigrid for a Reduced-Quadrature Discretization of Poroelasticity

SIAM Journal on Scientific Computing

Adler, James H.; He, Yunhui; Hu, Xiaozhe; Maclachlan, Scott; Ohm, Peter B.

Advanced finite-element discretizations and preconditioners for models of poroelasticity have attracted significant attention in recent years. The equations of poroelasticity offer significant challenges in both areas, due to the potentially strong coupling between unknowns in the system, saddle-point structure, and the need to account for wide ranges of parameter values, including limiting behavior such as incompressible elasticity. This paper was motivated by an attempt to develop monolithic multigrid preconditioners for the discretization developed in [C. Rodrigo et al., Comput. Methods Appl. Mech. Engrg., 341 (2018), pp. 467-484]; we show here why this is a difficult task and, as a result, we modify the discretization in [Rodrigo et al.] through the use of a reduced-quadrature approximation, yielding a more “solver-friendly” discretization. Local Fourier analysis is used to optimize parameters in the resulting monolithic multigrid method, allowing a fair comparison between the performance and costs of methods based on Vanka and Braess-Sarazin relaxation. Numerical results are presented to validate the local Fourier analysis predictions and demonstrate efficiency of the algorithms. Finally, a comparison to existing block-factorization preconditioners is also given.

Feedback density and causal complexity of simulation model structure

Journal of Simulation

Naugle, Asmeret B.; Verzi, Stephen J.; Lakkaraju, Kiran L.; Swiler, Laura P.; Warrender, Christina E.; Bernard, Michael L.; Romero, Vicente J.

Measures of simulation model complexity generally focus on outputs; we propose measuring the complexity of a model’s causal structure to gain insight into its fundamental character. This article introduces tools for measuring causal complexity. First, we introduce a method for developing a model’s causal structure diagram, which characterises the causal interactions present in the code. Causal structure diagrams facilitate comparison of simulation models, including those from different paradigms. Next, we develop metrics for evaluating a model’s causal complexity using its causal structure diagram. We discuss cyclomatic complexity as a measure of the intricacy of causal structure and introduce two new metrics that incorporate the concept of feedback, a fundamental component of causal structure. The first new metric introduced here is feedback density, a measure of the cycle-based interconnectedness of causal structure. The second metric combines cyclomatic complexity and feedback density into a comprehensive causal complexity measure. Finally, we demonstrate these complexity metrics on simulation models from multiple paradigms and discuss potential uses and interpretations. These tools enable direct comparison of models across paradigms and provide a mechanism for measuring and discussing complexity based on a model’s fundamental assumptions and design.
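
As an illustration of the kind of metric described above, the sketch below computes cyclomatic complexity (E - N + 2P) and one plausible cycle-based feedback-density measure (the fraction of edges lying on at least one directed cycle) for a small causal-structure graph using networkx. The exact definitions used in the article may differ.

```python
import networkx as nx

def cyclomatic_complexity(g):
    """E - N + 2P for a directed causal-structure graph (P = weakly connected components)."""
    p = nx.number_weakly_connected_components(g)
    return g.number_of_edges() - g.number_of_nodes() + 2 * p

def feedback_density(g):
    """Fraction of edges that lie on at least one directed cycle (one plausible definition)."""
    cyclic = set()
    for comp in nx.strongly_connected_components(g):
        if len(comp) > 1:
            cyclic.update(g.subgraph(comp).edges())   # every edge inside an SCC lies on a cycle
    cyclic.update(nx.selfloop_edges(g))               # self-loops are one-node cycles
    return len(cyclic) / max(g.number_of_edges(), 1)

# Toy causal structure: a feedback loop a -> b -> c -> a plus a dangling effect c -> d.
g = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
print(cyclomatic_complexity(g), feedback_density(g))   # 2 and 0.75
```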

The DPG Method for the Convection-Reaction Problem, Revisited

Computational Methods in Applied Mathematics

Demkowicz, Leszek F.; Roberts, Nathan V.; Munoz-Matute, Judit

We study both conforming and non-conforming versions of the practical DPG method for the convection-reaction problem. We determine that the most common approach for DPG stability analysis, construction of a local Fortin operator, is infeasible for the convection-reaction problem. We then develop a line of argument based on a direct proof of discrete stability; we find that employing a polynomial enrichment for the test space does not suffice for this purpose, motivating the introduction of a (two-element) subgrid mesh. The argument combines mathematical analysis with numerical experiments.

A Novel Partitioned Approach for Reduced Order Model - Finite Element Model (ROM-FEM) and ROM-ROM Coupling

Earth and Space 2022: Space Exploration, Utilization, Engineering, and Construction in Extreme Environments - Selected Papers from the 18th Biennial International Conference on Engineering, Science, Construction, and Operations in Challenging Environments

de Castro, Amy G.; Kuberry, Paul A.; Kalashnikova, Irina; Bochev, Pavel B.

Partitioned methods allow one to build a simulation capability for coupled problems by reusing existing single-component codes. In so doing, partitioned methods can shorten code development and validation times for multiphysics and multiscale applications. In this work, we consider a scenario in which one or more of the “codes” being coupled are projection-based reduced order models (ROMs), introduced to lower the computational cost associated with a particular component. We simulate this scenario by considering a model interface problem that is discretized independently on two non-overlapping subdomains. We then formulate a partitioned scheme for this problem that allows the coupling between a ROM “code” for one of the subdomains with a finite element model (FEM) or ROM “code” for the other subdomain. The ROM “codes” are constructed by performing proper orthogonal decomposition (POD) on a snapshot ensemble to obtain a low-dimensional reduced order basis, followed by a Galerkin projection onto this basis. The ROM and/or FEM “codes” on each subdomain are then coupled using a Lagrange multiplier representing the interface flux. To partition the resulting monolithic problem, we first eliminate the flux through a dual Schur complement. Application of an explicit time integration scheme to the transformed monolithic problem decouples the subdomain equations, allowing their independent solution for the next time step. We show numerical results that demonstrate the proposed method’s efficacy in achieving both ROM-FEM and ROM-ROM coupling.
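
The ROM construction step described above (POD of a snapshot ensemble followed by Galerkin projection) can be sketched in a few lines of numpy; the coupling via Lagrange multipliers and the Schur-complement elimination are omitted, and the toy diffusion operator and energy tolerance are illustrative assumptions, not the interface problem from the paper.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD reduced basis from a snapshot matrix whose columns are snapshots."""
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1   # modes needed to capture `energy`
    return u[:, :r]

def galerkin_rom(A, b, basis):
    """Project the linear full-order problem A x = b onto the POD basis."""
    return basis.T @ A @ basis, basis.T @ b

# Toy full-order model: 1D diffusion stiffness matrix; snapshots from a few forcings.
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
forcings = [np.sin((k + 1) * np.pi * np.linspace(0, 1, n)) for k in range(5)]
snapshots = np.column_stack([np.linalg.solve(A, f) for f in forcings])

phi = pod_basis(snapshots)
A_r, b_r = galerkin_rom(A, forcings[0], phi)
x_rom = phi @ np.linalg.solve(A_r, b_r)                # reconstructed full-order state
print(phi.shape[1], np.linalg.norm(x_rom - snapshots[:, 0]) / np.linalg.norm(snapshots[:, 0]))
```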

Combining Spike Time Dependent Plasticity (STDP) and Backpropagation (BP) for Robust and Data Efficient Spiking Neural Networks (SNN)

Wang, Felix W.; Teeter, Corinne M.

National security applications require artificial neural networks (ANNs) that consume less power, are fast and dynamic online learners, are fault tolerant, and can learn from unlabeled and imbalanced data. We explore whether two fundamentally different, traditional learning algorithms from artificial intelligence and the biological brain can be merged. We tackle this problem from two directions. First, we start from a theoretical point of view and show that the spike time dependent plasticity (STDP) learning curve observed in biological networks can be derived using the mathematical framework of backpropagation through time. Second, we show that transmission delays, as observed in biological networks, improve the ability of spiking networks to perform classification when trained using a backpropagation of error (BP) method. These results provide evidence that STDP could be compatible with a BP learning rule. Combining these learning algorithms will likely lead to networks more capable of meeting our national security missions.
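
For reference, the classical exponential STDP learning window from the literature, which the report relates to backpropagation through time, can be written as a short function; the amplitude and time-constant values below are illustrative, not those derived in the report.

```python
import numpy as np

def stdp_weight_change(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Classical exponential STDP window (dt = t_post - t_pre, in ms).

    Pre-before-post spikes (dt > 0) potentiate the synapse; post-before-pre
    spikes (dt < 0) depress it.  Parameter values here are illustrative.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0.0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

# Evaluate the window over a range of spike-time differences.
dts = np.linspace(-100.0, 100.0, 9)
print(np.round(stdp_weight_change(dts), 5))
```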

A Stochastic Reduced-Order Model for Statistical Microstructure Descriptors Evolution

Journal of Computing and Information Science in Engineering

Laros, James H.; Sun, Jing; Liu, Dehao; Wang, Yan; Wildey, Timothy M.

Integrated computational materials engineering (ICME) models have been a crucial building block for modern materials development, relieving heavy reliance on experiments and significantly accelerating the materials design process. However, ICME models are also computationally expensive, particularly with respect to time integration for dynamics, which hinders the ability to study statistical ensembles and thermodynamic properties of large systems for long time scales. To alleviate the computational bottleneck, we propose to model the evolution of statistical microstructure descriptors as a continuous-time stochastic process using a nonlinear Langevin equation, where the probability density function (PDF) of the statistical microstructure descriptors, which are also the quantities of interest (QoIs), is modeled by the Fokker-Planck equation. We discuss how to calibrate the drift and diffusion terms of the Fokker-Planck equation from the theoretical and computational perspectives. The calibrated Fokker-Planck equation can be used as a stochastic reduced-order model to simulate the evolution of the PDF of the statistical microstructure descriptors. Considering statistical microstructure descriptors in the microstructure evolution as QoIs, we demonstrate our proposed methodology in three ICME models: kinetic Monte Carlo, phase field, and molecular dynamics simulations.
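
A minimal sketch of the simulation step implied above: integrating a nonlinear Langevin equation for a scalar microstructure descriptor with the Euler-Maruyama scheme and estimating PDF statistics from the resulting ensemble. The drift and diffusion functions below are placeholders, not the calibrated Fokker-Planck terms from the paper.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_end, dt, n_paths=1000, seed=0):
    """Sample paths of dX = drift(X) dt + diffusion(X) dW via Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Illustrative drift/diffusion for a descriptor relaxing toward the value 1.0.
drift = lambda x: -2.0 * (x - 1.0)
diffusion = lambda x: 0.1 * np.sqrt(np.abs(x))
samples = euler_maruyama(drift, diffusion, x0=0.2, t_end=5.0, dt=1e-3)
print(samples.mean(), samples.std())   # empirical statistics of the descriptor's PDF
```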

A silicon singlet–triplet qubit driven by spin-valley coupling

Nature Communications

Jock, Ryan M.; Jacobson, Noah T.; Rudolph, Martin R.; Ward, Daniel R.; Carroll, Malcolm S.; Luhman, Dwight R.

Spin–orbit effects, inherent to electrons confined in quantum dots at a silicon heterointerface, provide a means to control electron spin qubits without the added complexity of on-chip, nanofabricated micromagnets or nearby coplanar striplines. Here, we demonstrate a singlet–triplet qubit operating mode that can drive qubit evolution at frequencies in excess of 200 MHz. This approach offers a means to electrically turn on and off fast control, while providing high logic gate orthogonality and long qubit dephasing times. We utilize this operational mode for dynamical decoupling experiments to probe the charge noise power spectrum in a silicon metal-oxide-semiconductor double quantum dot. In addition, we assess qubit frequency drift over longer timescales to capture low-frequency noise. We present the charge noise power spectral density up to 3 MHz, which exhibits a 1/f^α dependence consistent with α ~ 0.7, over 9 orders of magnitude in noise frequency.

Electron dynamics in extended systems within real-time time-dependent density-functional theory

MRS Communications

Kononov, Alina K.; Lee, Cheng W.; Dos Santos, Tatiane P.; Robinson, Brian; Yao, Yifan; Yao, Yi; Andrade, Xavier; Baczewski, Andrew D.; Constantinescu, Emil; Correa, Alfredo A.; Kanai, Yosuke; Modine, N.A.; Schleife, Andre

Due to a beneficial balance of computational cost and accuracy, real-time time-dependent density-functional theory has emerged as a promising first-principles framework to describe real-time electron dynamics. Here we discuss recent implementations of this approach, in particular in the context of complex, extended systems. Results include an analysis of the computational cost associated with numerical propagation and with the use of absorbing boundary conditions. We extensively explore the shortcomings in describing electron–electron scattering in real time and compare to many-body perturbation theory. Modern improvements of the description of exchange and correlation are reviewed. In this work, we specifically focus on the Qb@ll code, which we have mainly used for these types of simulations over recent years, and we conclude by pointing to further progress needed going forward.

Neural-network based collision operators for the Boltzmann equation

Journal of Computational Physics

Roberts, Nathan V.; Bond, Stephen D.; Cyr, Eric C.; Miller, Sean T.

Kinetic gas dynamics in rarefied and moderate-density regimes have complex behavior associated with collisional processes. These processes are generally defined by convolution integrals over a high-dimensional space (as in the Boltzmann operator), or require evaluating complex auxiliary variables (as in Rosenbluth potentials in Fokker-Planck operators) that are challenging to implement and computationally expensive to evaluate. In this work, we develop a data-driven neural network model that augments a simple and inexpensive BGK collision operator with a machine-learned correction term, which improves the fidelity of the simple operator with a small overhead to overall runtime. The composite collision operator has a tunable fidelity and, in this work, is trained using and tested against a direct-simulation Monte-Carlo (DSMC) collision operator.
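
The structure of the composite collision operator described above, a cheap BGK relaxation toward a Maxwellian plus an optional learned correction, can be sketched as follows for a 1D velocity grid; the placeholder correction hook and all numerical parameters are illustrative assumptions, not the trained network from the paper.

```python
import numpy as np

def maxwellian(v, density, velocity, temperature):
    """1D Maxwellian equilibrium distribution on a velocity grid."""
    return density / np.sqrt(2.0 * np.pi * temperature) * \
        np.exp(-(v - velocity) ** 2 / (2.0 * temperature))

def composite_collision(f, v, dv, tau, correction_net=None):
    """BGK relaxation toward the local Maxwellian plus an optional learned correction."""
    density = np.sum(f) * dv
    velocity = np.sum(f * v) * dv / density
    temperature = np.sum(f * (v - velocity) ** 2) * dv / density
    bgk = (maxwellian(v, density, velocity, temperature) - f) / tau
    correction = correction_net(f) if correction_net is not None else 0.0
    return bgk + correction

# Toy usage: a bimodal distribution, with no correction network attached yet.
v = np.linspace(-6.0, 6.0, 121)
dv = v[1] - v[0]
f = maxwellian(v, 0.5, -1.5, 0.5) + maxwellian(v, 0.5, 1.5, 0.5)
print(np.sum(composite_collision(f, v, dv, tau=1.0)) * dv)   # ~0: BGK conserves mass
```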

Combining DPG in space with DPG time-marching scheme for the transient advection–reaction equation

Computer Methods in Applied Mechanics and Engineering

Roberts, Nathan V.; Munoz-Matute, Judit; Demkowicz, Leszek

In this article, we present a general methodology to combine the Discontinuous Petrov–Galerkin (DPG) method in space and time in the context of methods of lines for transient advection–reaction problems. We first introduce a semidiscretization in space with a DPG method, redefining the ideas of optimal testing and practicality of the method in this context. Then, we apply the recently developed DPG-based time-marching scheme, which is of exponential type, to the resulting system of Ordinary Differential Equations (ODEs). We also discuss how to efficiently compute the action of the exponential of the matrix coming from the space semidiscretization without assembling the full matrix. Finally, we verify the proposed method for 1D+time advection–reaction problems, showing optimal convergence rates for smooth solutions and more stable results for linear conservation laws compared to classical exponential integrators.
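
Computing the action of a matrix exponential without forming the dense exponential, as mentioned above, can be illustrated with SciPy's expm_multiply on a sparse operator; the periodic upwind advection-reaction matrix below is a generic stand-in, not the DPG semidiscretization from the article.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Periodic upwind advection-reaction operator on a 1D grid (generic stand-in).
n, c, r = 400, 1.0, 0.5
dx, dt = 1.0 / n, 1e-3
A = (c / dx) * (sp.eye(n, k=-1) - sp.eye(n)) - r * sp.eye(n)
A = sp.csr_matrix(A + (c / dx) * sp.eye(n, k=n - 1))        # wrap-around entry for periodicity

x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)

# Action of the matrix exponential on u0, without ever forming exp(dt * A) densely.
u1 = spla.expm_multiply(dt * A, u0)
print(u1.shape, float(u1.max()))
```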

Nonlocal kernel network (NKN): A stable and resolution-independent deep neural network

Journal of Computational Physics

D'Elia, Marta D.; Silling, Stewart A.; Yu, Yue; You, Huaiqian; Gao, Tian

Neural operators [1–5] have recently become popular tools for designing solution maps between function spaces in the form of neural networks. Unlike classical scientific machine learning approaches that learn parameters of a known partial differential equation (PDE) for a single instance of the input parameters at a fixed resolution, neural operators approximate the solution map of a family of PDEs [6,7]. Despite their success, uses of neural operators have so far been restricted to relatively shallow neural networks and confined to learning hidden governing laws. In this work, we propose a novel nonlocal neural operator, which we refer to as the nonlocal kernel network (NKN), that is resolution independent, characterized by deep neural networks, and capable of handling a variety of tasks such as learning governing equations and classifying images. Our NKN stems from the interpretation of the neural network as a discrete nonlocal diffusion-reaction equation that, in the limit of infinite layers, is equivalent to a parabolic nonlocal equation, whose stability is analyzed via nonlocal vector calculus. The resemblance with integral forms of neural operators allows NKNs to capture long-range dependencies in the feature space, while the continuous treatment of node-to-node interactions makes NKNs resolution independent. The resemblance with neural ODEs, reinterpreted in a nonlocal sense, and the stable network dynamics between layers allow for generalization of NKN's optimal parameters from shallow to deep networks. This fact enables the use of shallow-to-deep initialization techniques [8]. Our tests show that NKNs outperform baseline methods in both learning governing equations and image classification tasks and generalize well to different resolutions and depths.

Development of Single Photon Sources in GaN

Mounce, Andrew M.; Wang, George W.; Schultz, Peter A.; Titze, Michael T.; Campbell, DeAnna M.; Lu, Ping L.; Henshaw, Jacob D.

The recent discovery of bright, room-temperature single photon emitters (SPEs) in GaN provides an appealing alternative to the best single photon emitters in diamond, given the widespread use and technological maturity of III-nitrides for optoelectronics (e.g., blue LEDs, lasers) and high-speed, high-power electronics. This discovery opens the door to on-chip and on-demand single photon sources integrated with detectors and electronics. Currently, little is known about the underlying defect structure, nor is there a sense of how such an emitter might be controllably created. A detailed understanding of the origin of the SPEs in GaN and a path to deterministically introduce them is required. In this project, we develop new experimental capabilities to investigate single photon emission from GaN nanowires and from both GaN and AlN wafers. We ion implant our wafers using focused ion beam nanoimplantation capabilities at Sandia, going beyond typical broad-beam implantation to create single photon emitting defects with nanometer precision. We have created light-emitting sources using Li+ and He+, but single photon emission has yet to be demonstrated. In parallel, we calculate the energy levels of defects and transition metal substitutions in GaN to gain a better understanding of the sources of single photon emission in GaN and AlN. The combined experimental and theoretical capabilities developed throughout this project will enable further investigation into the origins of single photon emission from defects in GaN, AlN, and other wide-bandgap semiconductors.

Sensitivity analysis of generic deep geologic repository with focus on spatial heterogeneity induced by stochastic fracture network generation

Advances in Water Resources

Brooks, Dusty M.; Swiler, Laura P.; Stein, Emily S.; Mariner, Paul M.; Basurto, Eduardo B.; Portone, Teresa P.; Eckert, Aubrey C.; Leone, Rosemary C.

The Geologic Disposal Safety Assessment Framework, developed by the United States Department of Energy, is a state-of-the-art simulation software toolkit for probabilistic post-closure performance assessment of systems for deep geologic disposal of nuclear waste. This paper presents a generic reference case and shows how it is being used to develop and demonstrate performance assessment methods within the Geologic Disposal Safety Assessment Framework that mitigate some of the challenges posed by high uncertainty and limited computational resources. Variance-based global sensitivity analysis is applied to assess the effects of spatial heterogeneity using graph-based summary measures for scalar and time-varying quantities of interest. Behavior of the system with respect to spatial heterogeneity is further investigated using ratios of water fluxes. This analysis shows that spatial heterogeneity is a dominant uncertainty in predictions of repository performance and that it can be identified in global sensitivity analysis using proxy variables derived from graph descriptions of discrete fracture networks. New quantities of interest defined using water fluxes proved useful for better understanding overall system behavior.
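
Variance-based global sensitivity analysis of the kind applied above can be illustrated with a basic pick-freeze Monte Carlo estimator of first-order Sobol indices on a cheap analytic test function; this generic sketch does not reproduce the GDSA analysis, its proxy variables, or its quantities of interest.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.

    `model` maps an (N, n_inputs) array of uniform [0, 1] inputs to N outputs.
    """
    rng = np.random.default_rng(seed)
    a = rng.random((n_samples, n_inputs))
    b = rng.random((n_samples, n_inputs))
    ya = model(a)
    var = ya.var()
    indices = []
    for i in range(n_inputs):
        ab = b.copy()
        ab[:, i] = a[:, i]                        # freeze input i at the "a" sample values
        indices.append(np.mean(ya * (model(ab) - model(b))) / var)
    return np.array(indices)

# Toy model: output dominated by x0, mildly affected by x1, independent of x2.
model = lambda x: 4.0 * x[:, 0] + 1.0 * x[:, 1] ** 2 + 0.0 * x[:, 2]
print(np.round(first_order_sobol(model, 3), 3))
```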

A fractional model for anomalous diffusion with increased variability: Analysis, algorithms and applications to interface problems

Numerical Methods for Partial Differential Equations

D'Elia, Marta D.; Glusa, Christian A.

Fractional equations have become the model of choice in several applications where heterogeneities at the microstructure result in anomalous diffusive behavior at the macroscale. In this work we introduce a new fractional operator characterized by a doubly-variable fractional order and possibly truncated interactions. Under certain conditions on the model parameters and on the regularity of the fractional order we show that the corresponding Poisson problem is well-posed. We also introduce a finite element discretization and describe an efficient implementation of the finite-element matrix assembly in the case of piecewise constant fractional order. Through several numerical tests, we illustrate the improved descriptive power of this new operator across media interfaces. Furthermore, we present one-dimensional and two-dimensional h-convergence results that show that the variable-order model has the same convergence behavior as the constant-order model.

Conflicting Information and Compliance With COVID-19 Behavioral Recommendations

Naugle, Asmeret B.; Rothganger, Fredrick R.; Verzi, Stephen J.; Doyle, Casey L.

The prevalence of COVID-19 is shaped by behavioral responses to recommendations and warnings. Available information on the disease determines the population’s perception of danger and thus its behavior; this information changes dynamically, and different sources may report conflicting information. We study the feedback between disease, information, and stay-at-home behavior using a hybrid agent-based-system dynamics model that incorporates evolving trust in sources of information. We use this model to investigate how divergent reporting and conflicting information can alter the trajectory of a public health crisis. The model shows that divergent reporting not only alters disease prevalence over time, but also increases polarization of the population’s behaviors and trust in different sources of information.

First-principles simulation of light-ion microscopy of graphene

2D Materials

Kononov, Alina K.; Olmstead, Alexandra L.; Baczewski, Andrew D.; Schleife, Andre

The extreme sensitivity of 2D materials to defects and nanostructure requires precise imaging techniques to verify the presence of desirable features and the absence of undesirable features in the atomic geometry. Helium-ion beams have emerged as a promising materials imaging tool, achieving up to 20 times higher resolution and 10 times larger depth-of-field than conventional or environmental scanning electron microscopes. Here, we offer first-principles theoretical insights to advance ion-beam imaging of atomically thin materials by performing real-time time-dependent density functional theory simulations of single impacts of 10-200 keV light ions in free-standing graphene. We predict that detecting electrons emitted from the back of the material (the side from which the ion exits) would result in up to three times higher signal and up to five times higher contrast images, making 2D materials especially compelling targets for ion-beam microscopy. This predicted superiority of exit-side emission likely arises from anisotropic kinetic emission. The charge induced in the graphene equilibrates on a sub-fs time scale, leading to only slight disturbances in the carbon lattice that are unlikely to damage the atomic structure for any of the beam parameters investigated here.

Comparison of exponential integrators and traditional time integration schemes for the shallow water equations

Applied Numerical Mathematics

Eldred, Christopher; Brachet, Matthieu; Debreu, Laurent

The time integration scheme is probably one of the most fundamental choices in the development of an ocean model. In this paper, we investigate several time integration schemes applied to the shallow water equations. This set of equations is accurate enough for modeling a shallow ocean and is also relevant to study because it is the system solved for the barotropic (i.e., vertically averaged) component of a three-dimensional ocean model. We analyze different time stepping algorithms for the linearized shallow water equations. High-order explicit schemes are accurate, but the time step is constrained by the Courant-Friedrichs-Lewy stability condition. Implicit schemes can be unconditionally stable but, in practice, lack accuracy when used with large time steps. In this paper we propose a detailed comparison of such classical schemes with exponential integrators. The accuracy and the computational costs are analyzed in different configurations.
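
The trade-off discussed above can be seen on a toy linearized 1D gravity-wave system: an exponential integrator applies exp(dt A) and is exact for the linear problem at any step size, while an explicit Runge-Kutta method becomes unstable once the step exceeds its stability limit. The grid, parameters, and step sizes below are illustrative choices, not the configurations studied in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Linearized 1D gravity-wave (shallow water) system for the state [h; u], periodic grid.
n, g, H, dx = 100, 9.81, 100.0, 1.0e3
D = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * dx)
D[0, -1], D[-1, 0] = -1.0 / (2.0 * dx), 1.0 / (2.0 * dx)    # periodic wrap-around
A = np.block([[np.zeros((n, n)), -H * D], [-g * D, np.zeros((n, n))]])

def rk4_step(A, u, dt):
    k1 = A @ u
    k2 = A @ (u + 0.5 * dt * k1)
    k3 = A @ (u + 0.5 * dt * k2)
    k4 = A @ (u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u0 = np.concatenate([np.exp(-((np.arange(n) - 50.0) ** 2) / 50.0), np.zeros(n)])
dt, n_steps = 200.0, 50                                     # dt exceeds RK4's stability limit here

u_exp = expm(n_steps * dt * A) @ u0                         # exact for the linear system, any dt
u_rk4 = u0.copy()
for _ in range(n_steps):
    u_rk4 = rk4_step(A, u_rk4, dt)
print(np.linalg.norm(u_exp), np.linalg.norm(u_rk4))         # the explicit solution blows up
```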

Viability of S3 Object Storage for the ASC Program at Sandia

Kordenbrock, Todd H.; Templet, Gary J.; Ulmer, Craig D.; Widener, Patrick

Recent efforts at Sandia such as DataSEA are creating search engines that enable analysts to query the institution’s massive archive of simulation and experiment data. The benefit of this work is that analysts will be able to retrieve all historical information about a system component that the institution has amassed over the years and make better-informed decisions in current work. As DataSEA gains momentum, it faces multiple technical challenges relating to capacity storage. From a raw capacity perspective, data producers will rapidly overwhelm the system with massive amounts of data. From an accessibility perspective, analysts will expect to be able to retrieve any portion of the bulk data, from any system on the enterprise network. Sandia’s Institutional Computing is mitigating storage problems at the enterprise level by procuring new capacity storage systems that can be accessed from anywhere on the enterprise network. These systems use the Simple Storage Service (S3) API for data transfers. While S3 uses objects instead of files, users can access it from their desktops or Sandia’s high-performance computing (HPC) platforms. S3 is particularly well suited for bulk storage in DataSEA, as datasets can be decomposed into objects that can be referenced and retrieved individually, as needed by an analyst. In this report we describe our experiences working with S3 storage and provide information about how developers can leverage Sandia’s current systems. We present performance results from two sets of experiments. First, we measure S3 throughput when exchanging data between four different HPC platforms and two different enterprise S3 storage systems on the Sandia Restricted Network (SRN). Second, we measure the performance of S3 when communicating with a custom-built Ceph storage system that was constructed from HPC components. Overall, while S3 storage is significantly slower than traditional HPC storage, it provides significant accessibility benefits that will be valuable for archiving and exploiting historical data. There are multiple opportunities that arise from this work, including enhancing DataSEA to leverage S3 for bulk storage and adding native S3 support to Sandia’s IOSS library.
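
A minimal boto3 sketch of the put/get access pattern described above follows; the endpoint URL, credentials, bucket, and object key are placeholders for illustration only and do not refer to any Sandia system.

```python
import boto3

# Endpoint, credentials, bucket, and key below are placeholders for illustration.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.gov",       # enterprise S3 endpoint (placeholder)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket, key = "simulation-archive", "run-0001/field-pressure.bin"

# Upload one object of a decomposed dataset...
s3.put_object(Bucket=bucket, Key=key, Body=b"\x00" * 1024)

# ...and later retrieve just that object, from any machine that can reach the endpoint.
blob = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
print(len(blob))
```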

Thermodynamically consistent versions of approximations used in modelling moist air

Quarterly Journal of the Royal Meteorological Society

Eldred, Christopher; Guba, Oksana G.; Taylor, Mark A.

Some existing approaches to modelling the thermodynamics of moist air make approximations that break thermodynamic consistency, such that the resulting thermodynamics does not obey the first and second laws or has other inconsistencies. Recently, an approach to avoid such inconsistency has been suggested: the use of thermodynamic potentials in terms of their natural variables, from which all thermodynamic quantities and relationships (equations of state) are derived. In this article, we develop this approach for unapproximated moist-air thermodynamics and two widely used approximations: the constant-κ approximation and the dry heat capacities approximation. The (consistent) constant-κ approximation is particularly attractive because it leads to, with the appropriate choice of thermodynamic variable, adiabatic dynamics that depend only on total mass and are independent of the breakdown between water forms. Additionally, a wide variety of material from different sources in the literature on thermodynamics in atmospheric modelling is brought together. It is hoped that this article provides a comprehensive reference for the use of thermodynamic potentials in atmospheric modelling, especially for the three systems considered here.
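
The central idea, deriving every thermodynamic quantity and equation of state from a single potential expressed in its natural variables, can be illustrated for a dry ideal gas with sympy; the moist-air potentials developed in the article are considerably more involved, and the reference-state conventions below are illustrative assumptions.

```python
import sympy as sp

T, p, T0, p0, cp, R = sp.symbols("T p T0 p0 c_p R", positive=True)

# Specific Gibbs potential g(T, p) for an ideal gas with constant heat capacity,
# with enthalpy and entropy taken to vanish at the reference state (T0, p0).
g = cp * (T - T0) - cp * T * sp.log(T / T0) + R * T * sp.log(p / p0)

# Every other quantity follows by differentiation of the potential:
s = -sp.diff(g, T)            # specific entropy
v = sp.diff(g, p)             # specific volume
h = g + T * s                 # specific enthalpy

print(sp.simplify(v - R * T / p))        # 0: recovers the ideal-gas equation of state
print(sp.simplify(h - cp * (T - T0)))    # 0: recovers h = c_p (T - T0)
```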

Understanding Phase and Interfacial Effects of Spall Fracture in Additively Manufactured Ti-5Al-5V-5Mo-3Cr

Branch, Brittany A.; Ruggles, Timothy R.; Miers, John C.; Massey, Caroline E.; Moore, David G.; Brown, Nathan B.; Duwal, Sakun D.; Silling, Stewart A.; Mitchell, John A.; Specht, Paul E.

Additively manufactured Ti-5Al-5V-5Mo-3Cr (Ti-5553) is being considered as an additive manufacturing (AM) repair material for engineering applications because of its superior strength compared to other titanium alloys. Here, we describe the failure mechanisms observed through computed tomography, electron backscatter diffraction (EBSD), and scanning electron microscopy (SEM) of spall damage resulting from tensile failure in as-built and annealed Ti-5553. We also investigate the phase stability in native powder, as-built, and annealed Ti-5553 through diamond anvil cell (DAC) and ramp compression experiments. We then explore the effect of tensile loading on a sample containing an interface between a Ti-6Al-4V (Ti-64) baseplate and an additively manufactured Ti-5553 layer. Post-mortem materials characterization showed spallation occurred in regions of initial porosity and that the interface provides a nucleation site for spall damage below the spall strength of Ti-5553. Preliminary peridynamics modeling of the dynamic experiments is described. Finally, we discuss further development of Stochastic Parallel PARticle Kinetic Simulator (SPPARKS) Monte Carlo (MC) capabilities to include the integration of alpha (α)-phase and microstructural simulations for this multiphase titanium alloy.

Embedded pairs for optimal explicit strong stability preserving Runge–Kutta methods

Journal of Computational and Applied Mathematics

Shadid, John N.

We construct a family of embedded pairs for optimal explicit strong stability preserving Runge–Kutta methods of order 2≤p≤4 to be used to obtain numerical solutions of spatially discretized hyperbolic PDEs. In this construction, the goals include the non-defective property, a large stability region, and small error values as defined in Dekker and Verwer (1984) and Kennedy et al. (2000). The new family of embedded pairs offers the ability for strong stability preserving (SSP) methods to adapt by varying the step size. Through several numerical experiments, we assess the overall effectiveness in terms of work versus precision while also taking into consideration accuracy and stability.
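
The mechanism of an embedded pair, a lower-order solution sharing stages with the main SSP method so that their difference drives step-size adaptation, is sketched below using the classical Shu-Osher SSPRK(3,3) with an SSPRK(2,2) embedded solution built from its first two stages. The optimal pairs constructed in the paper differ; this only illustrates how embedded adaptivity works.

```python
import numpy as np

def ssprk33_embedded_step(f, u, dt):
    """One Shu-Osher SSPRK(3,3) step with a simple embedded 2nd-order solution.

    The embedded solution is the SSPRK(2,2) result reusing the first two stage
    evaluations; the article's optimal pairs are constructed differently.
    """
    k0 = f(u)
    u1 = u + dt * k0
    k1 = f(u1)
    u2 = 0.75 * u + 0.25 * (u1 + dt * k1)
    u3 = u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))     # 3rd-order SSP solution
    u_hat = u + 0.5 * dt * (k0 + k1)                 # embedded 2nd-order solution
    return u3, np.max(np.abs(u3 - u_hat))            # step result and error estimate

def integrate_adaptive(f, u, t_end, dt, tol=1e-5):
    """Advance to t_end, adapting dt from the embedded error estimate."""
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        u_new, err = ssprk33_embedded_step(f, u, dt)
        if err <= tol:
            u, t = u_new, t + dt                     # accept the step
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0)))
    return u

# Toy test: u' = -u with exact solution exp(-t).
u_end = integrate_adaptive(lambda u: -u, np.array([1.0]), t_end=2.0, dt=0.5)
print(u_end, np.exp(-2.0))
```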

Super-Resolution Approaches in Three-Dimensions for Classification and Screening of Commercial-Off-The-Shelf Components

Polonsky, Andrew P.; Martinez, Carianne M.; Appleby, Catherine A.; Bernard, Sylvain R.; Griego, J.J.M.; Noell, Philip N.; Pathare, Priya R.

X-ray computed tomography is generally a primary step in the characterization of defective electronic components, but it is typically too slow to screen large lots of components. Super-resolution imaging approaches, in which higher-resolution data is inferred from lower-resolution images, have the potential to substantially reduce collection times for data volumes accessible via x-ray computed tomography. Here we seek to extend existing two-dimensional super-resolution approaches directly to three-dimensional computed tomography data. Multiple scan resolutions spanning half an order of magnitude in resolution were collected for four classes of commercial electronic components to serve as training data for a deep-learning super-resolution network. A modular Python framework for three-dimensional super-resolution of computed tomography data has been developed and trained over multiple classes of electronic components. Initial training and testing demonstrate the promise of these approaches, which have the potential for more than an order of magnitude reduction in collection time for electronic component screening.

Demonstrate multi-turbine simulation with hybrid-structured / unstructured-moving-grid software stack running primarily on GPUs and propose improvements for successful KPP-2

Bidadi, Shreyas; Brazell, Michael; Brunhart-Lupo, Nicholas; Henry De Frahan, Marc T.; Lee, Dong H.; Hu, Jonathan J.; Melvin, Jeremy; Mullowney, Paul; Vijayakumar, Ganesh; Moser, Robert D.; Rood, Jon; Sakievich, Philip S.; Sharma, Ashesh; Williams, Alan B.; Sprague, Michael A.

The goal of the ExaWind project is to enable predictive simulations of wind farms composed of many megawatt-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations in which the mesh resolves the geometry of the turbines, captures the thin boundary layers, and captures the rotation and large deflections of the blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources.

ATHENA: Analytical Tool for Heterogeneous Neuromorphic Architectures

Cardwell, Suma G.; Plagge, Mark P.; Hughes, Clayton H.; Rothganger, Fredrick R.; Agarwal, Sapan A.; Feinberg, Benjamin F.; Awad, Amro; Mcfarland, John; Parker, Luke G.

The ASC program seeks to use machine learning to improve efficiencies in its stockpile stewardship mission. Moreover, there is a growing market for technologies dedicated to accelerating AI workloads. Many of these emerging architectures promise to provide savings in energy efficiency, area, and latency when compared to traditional CPUs for these types of applications — neuromorphic analog and digital technologies provide both low-power and configurable acceleration of challenging artificial intelligence (AI) algorithms. If designed into a heterogeneous system with other accelerators and conventional compute nodes, these technologies have the potential to augment the capabilities of traditional High Performance Computing (HPC) platforms [5]. This expanded computation space requires not only a new approach to physics simulation, but the ability to evaluate and analyze next-generation architectures specialized for AI/ML workloads in both traditional HPC and embedded ND applications. Developing this capability will enable ASC to understand how this hardware performs in both HPC and ND environments, improve our ability to port our applications, guide the development of computing hardware, and inform vendor interactions, leading them toward solutions that address ASC’s unique requirements.

Microstructure-Sensitive Uncertainty Quantification for Crystal Plasticity Finite Element Constitutive Models Using Stochastic Collocation Methods

Frontiers in Materials

Laros, James H.; Wildey, Timothy M.; Lim, Hojun L.

Uncertainty quantification (UQ) plays a major role in verification and validation for computational engineering models and simulations, and establishes trust in the predictive capability of computational models. In the materials science and engineering context, where the process-structure-property-performance linkage is well known to be the only road map from manufacturing to engineering performance, numerous integrated computational materials engineering (ICME) models have been developed across a wide spectrum of length scales and time scales to relieve the burden of resource-intensive experiments. Within the structure-property linkage, crystal plasticity finite element method (CPFEM) models have been widely used since they are one of the few ICME toolboxes that allow numerical predictions, providing the bridge from microstructure to materials properties and performance. Several constitutive models have been proposed in the last few decades to capture the mechanics and plasticity behavior of materials. While some UQ studies have been performed, the robustness and uncertainty of these constitutive models have not been rigorously established. In this work, we apply a stochastic collocation (SC) method, which is mathematically rigorous and has been widely used in the field of UQ, to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning) and dislocation-density-based constitutive models, for three different crystal structures: face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close-packed (hcp) magnesium (Mg). Our numerical results not only quantify the uncertainty of these constitutive models in the predicted stress-strain curves, but also analyze the global sensitivity of the underlying constitutive parameters with respect to the initial yield behavior, which may be helpful for robust constitutive model calibration work in the future.
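
A minimal sketch of stochastic collocation as used above: propagate a normally distributed parameter through a model by evaluating it at Gauss-Hermite collocation points and forming weighted moments. The placeholder constitutive response and parameter values are illustrative assumptions, not the CPFEM models of the study.

```python
import numpy as np

def stochastic_collocation(model, mean, std, n_points=9):
    """Propagate a Gaussian input through `model` with Gauss-Hermite collocation.

    Returns the mean and variance of the model output, using the probabilists'
    Hermite rule (weights for a standard normal measure).
    """
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_points)
    weights = weights / np.sqrt(2.0 * np.pi)              # normalize to a probability measure
    samples = model(mean + std * nodes)                   # evaluate at collocation points
    mu = np.sum(weights * samples)
    var = np.sum(weights * (samples - mu) ** 2)
    return mu, var

# Placeholder "constitutive response": yield stress as a nonlinear map of a hardening parameter.
model = lambda h: 100.0 + 25.0 * np.tanh(h)
mu, var = stochastic_collocation(model, mean=0.5, std=0.2)
print(mu, np.sqrt(var))
```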

Unified Language Frontend for Physics-Informed AI/ML

Kelley, Brian M.; Rajamanickam, Sivasankaran R.

Artificial intelligence and machine learning (AI/ML) are becoming important tools for scientific modeling and simulation, as in several other fields such as image analysis and natural language processing. ML techniques can leverage the computing power available in modern systems and reduce the human effort needed to configure experiments, interpret and visualize results, draw conclusions from huge quantities of raw data, and build surrogates for physics-based models. Domain scientists in fields like fluid dynamics, microelectronics, and chemistry can automate many of their most difficult and repetitive tasks or improve design times by using faster ML surrogates. However, modern ML and traditional scientific high-performance computing (HPC) tend to use completely different software ecosystems. While ML frameworks like PyTorch and TensorFlow provide Python APIs, most HPC applications and libraries are written in C++. Direct interoperability between the two languages is possible but is tedious and error-prone. In this work, we show that a compiler-based approach can bridge the gap between ML frameworks and scientific software with less developer effort and better efficiency. We use the MLIR (multi-level intermediate representation) ecosystem to compile a pre-trained convolutional neural network (CNN) in PyTorch to freestanding C++ source code in the Kokkos programming model. Kokkos is a programming model widely used in HPC to write portable, shared-memory parallel code that can natively target a variety of CPU and GPU architectures. Our compiler-generated source code can be directly integrated into any Kokkos-based application with no dependencies on Python or cross-language interfaces.

Mathematical Foundations for Nonlocal Interface Problems: Multiscale Simulations of Heterogeneous Materials (Final LDRD Report)

D'Elia, Marta; Bochev, Pavel B.; Foster, John T.; Glusa, Christian A.; Gulian, Mamikon G.; Gunzburger, Max; Trageser, Jeremy T.; Kuhlman, Kristopher L.; Martinez, Mario A.; Najm, H.N.; Silling, Stewart A.; Tupek, Michael; Xu, Xiao

Nonlocal models provide a much-needed predictive capability for important Sandia mission applications, ranging from fracture mechanics for nuclear components to subsurface flow for nuclear waste disposal, where traditional partial differential equations (PDEs) models fail to capture effects due to long-range forces at the microscale and mesoscale. However, utilization of this capability is seriously compromised by the lack of a rigorous nonlocal interface theory, required for both application and efficient solution of nonlocal models. To unlock the full potential of nonlocal modeling we developed a mathematically rigorous and physically consistent interface theory and demonstrate its scope in mission-relevant exemplar problems.

Lossless Quantum Hard-Drive Memory Using Parity-Time Symmetry

Chatterjee, Eric N.; Soh, Daniel B.; Young, Steve M.

We theoretically studied the feasibility of building a long-term read-write quantum memory using the principle of parity-time (PT) symmetry, which has already been demonstrated for classical systems. The design consisted of a two-resonator system. Although both resonators would feature intrinsic loss, the goal was to apply a driving signal to one of the resonators such that it would become an amplifying subsystem, with a gain rate equal and opposite to the loss rate of the lossy resonator. Consequently, the loss and gain probabilities in the overall system would cancel out, yielding a closed quantum system. Upon performing detailed calculations on the impact of a driving signal on a lossy resonator, our results demonstrated that an amplifying resonator is physically unfeasible, thus forestalling the possibility of PT-symmetric quantum storage. Our finding serves to significantly narrow down future research into designing a viable quantum hard drive.

Quantum-Accurate Multiscale Modeling of Shock Hugoniots, Ramp Compression Paths, Structural and Magnetic Phase Transitions, and Transport Properties in Highly Compressed Metals

Wood, Mitchell A.; Nikolov, Svetoslav V.; Rohskopf, Andrew D.; Desjarlais, Michael P.; Cangi, Attila; Tranchida, Julien

Fully characterizing high energy density (HED) phenomena using pulsed power facilities (the Z machine) and coherent light sources is possible only with complementary numerical modeling for design, diagnostic development, and data interpretation. The exercise of creating numerical tests that match experimental conditions builds critical insight that is crucial for the development of a strong fundamental understanding of the physics behind HED phenomena and for the design of next-generation pulsed power facilities. The persistence of electron correlation in HED materials, arising from Coulomb interactions and the Pauli exclusion principle, is one of the greatest challenges for accurate numerical modeling and has hitherto impeded our ability to model HED phenomena across multiple length and time scales at sufficient accuracy. An exemplar is a ferromagnetic material like iron: while familiar and widely used, we lack a simulation capability to characterize the interplay of structure and magnetic effects that govern material strength, kinetics of phase transitions, and other transport properties. Herein we construct and demonstrate a Molecular-Spin Dynamics (MSD) simulation capability for iron from ambient to Earth-core conditions; all software advances are open source and presently available for broad usage. These methods are multiscale in nature: direct comparisons between high-fidelity density functional theory (DFT) and linear-scaling MSD simulations are made throughout this work, with advancements made to MSD allowing electronic structure changes to be reflected in the classical dynamics. Main takeaways from the project include insight into the role of magnetic spins on mechanical properties and thermal conductivity, development of accurate interatomic potentials paired with spin Hamiltonians, and characterization of the high-pressure melt boundary that is of critical importance to planetary modeling efforts.

GDSA Framework Development and Process Model Integration FY2022

Mariner, Paul M.; Debusschere, Bert D.; Fukuyama, David E.; Harvey, Jacob H.; LaForce, Tara; Leone, Rosemary C.; Laros, James H.; Swiler, Laura P.; Taconi, Anna M.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (Sassani et al. 2021). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media. This report describes fiscal year (FY) 2022 advances of the Geologic Disposal Safety Assessment (GDSA) performance assessment (PA) development groups of the SFWST Campaign. The common mission of these groups is to develop a geologic disposal system modeling capability for nuclear waste that can be used to assess probabilistically the performance of generic disposal options and generic sites. The modeling capability under development is called GDSA Framework (pa.sandia.gov). GDSA Framework is a coordinated set of codes and databases designed for probabilistically simulating the release and transport of disposed radionuclides from a repository to the biosphere for post-closure performance assessment. Primary components of GDSA Framework include PFLOTRAN to simulate the major features, events, and processes (FEPs) over time, Dakota to propagate uncertainty and analyze sensitivities, meshing codes to define the domain, and various other software for rendering properties, processing data, and visualizing results.

Large-Scale Atomistic Simulations [Slides]

Moore, Stan G.

This report investigates the free expansion of aluminum. The take-home message is that the physically realistic SNAP machine-learning potential captures liquid-vapor coexistence behavior for free expansion of aluminum at a level not generally accessible to hydrocodes.

More Details

Revealing conductivity of p-type delta layer systems for novel computing applications

Mamaluy, Denis M.; Mendez Granado, Juan P.

This project uses a quantum simulation technique to reveal the true conducting properties of novel atomic precision advanced manufacturing materials. With Moore's law approaching the limit of scaling for CMOS technology, it is crucial to provide the best computing power and resources to national security missions. Atomic precision advanced manufacturing-based computing systems can become the key to the design, use, and security of modern weapon systems, critical infrastructure, and communications. We will utilize state-of-the-art computational methodology to create a predictive simulator for p-type atomic precision advanced manufacturing systems, which may also find applications in counterfeit detection and anti-tamper.

More Details

Composing preconditioners for multiphysics PDE systems with applications to Generalized MHD

Tuminaro, Raymond S.; Crockatt, Michael M.; Robinson, Allen C.

New patch smoothers or relaxation techniques are developed for solving linear matrix equations coming from systems of discretized partial differential equations (PDEs). One key linear solver challenge for many PDE systems arises when the resulting discretization matrix has a near null space of large dimension, which can occur in generalized magnetohydrodynamic (GMHD) systems. Patch-based relaxation is highly effective for problems where the near null space can be spanned by a basis of locally supported vectors. The patch-based relaxation methods that we develop can be used either within an algebraic multigrid (AMG) hierarchy or as stand-alone preconditioners. These patch-based relaxation techniques are a form of well-known overlapping Schwarz methods where the computational domain is covered with a series of overlapping sub-domains (or patches). Patch relaxation then corresponds to solving a set of independent linear systems associated with each patch. In the context of GMHD, we also reformulate the underlying discrete representation used to generate a suitable set of matrix equations. In general, deriving a discretization that accurately approximates the curl operator and the Hall term while also producing linear systems with physically meaningful near null space properties can be challenging. Unfortunately, many natural discretization choices lead to a near null space that includes non-physical oscillatory modes and where it is not possible to span the near null space with a minimal set of locally supported basis vectors. Further discretization research is needed to understand the resulting trade-offs between accuracy, stability, and ease of solving the associated linear systems.
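To make the patch-relaxation idea concrete, the sketch below applies one sweep of damped additive overlapping Schwarz smoothing to a sparse 1D Poisson matrix. The patch construction and damping factor here are simplified illustrations of the general technique, not the GMHD-specific smoothers developed in this work:

```python
import numpy as np
import scipy.sparse as sp

# 1D Poisson test matrix as a stand-in for a discretized PDE system.
n = 20
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = np.zeros(n)

# Overlapping patches: contiguous index sets with one point of overlap.
patch_size, overlap = 4, 1
patches = [list(range(max(0, s - overlap), min(n, s + patch_size + overlap)))
           for s in range(0, n, patch_size)]

# One sweep of (damped) additive Schwarz relaxation: solve each local patch
# problem against the current residual and accumulate the corrections.
r = b - A @ x
correction = np.zeros(n)
for idx in patches:
    A_loc = A[idx, :][:, idx].toarray()   # extract the local patch submatrix
    correction[idx] += np.linalg.solve(A_loc, r[idx])
x += 0.5 * correction                     # damping for overlapped contributions

print("residual norm after one sweep:", np.linalg.norm(b - A @ x))
```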

More Details

GDSA Repository Systems Analysis Investigations in FY2022

LaForce, Tara; Basurto, Eduardo B.; Chang, Kyung W.; Ebeida, Mohamed S.; Eymold, William; Faucett, Christopher F.; Jayne, Richard S.; Kucinski, Nicholas; Leone, Rosemary C.; Mariner, Paul M.; Laros, James H.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy Office of Nuclear Energy, Office of Spent Fuel and Waste Disposition (SFWD), has been conducting research and development on generic deep geologic disposal systems (i.e., geologic repositories). This report describes specific fiscal year (FY) 2022 activities associated with the Geologic Disposal Safety Assessment (GDSA) Repository Systems Analysis (RSA) work package within the SFWST Campaign. The overall objective of the GDSA RSA work package is to develop generic deep geologic repository concepts and system performance assessment (PA) models in several host-rock environments, and to simulate and analyze these generic repository concepts and models using the GDSA Framework toolkit and other tools as needed.

More Details

Metrics for Intercomparison of Remapping Algorithms (MIRA) protocol applied to Earth system models

Geoscientific Model Development

Mahadevan, Vijay S.; Guerra, Jorge E.; Jiao, Xiangmin; Kuberry, Paul A.; Li, Yipeng; Ullrich, Paul; Marsico, David; Jacob, Robert; Bochev, Pavel B.; Jones, Philip

Strongly coupled nonlinear phenomena such as those described by Earth system models (ESMs) are composed of multiple component models with independent mesh topologies and scalable numerical solvers. A common operation in ESMs is to remap or interpolate component solution fields defined on their computational mesh to another mesh with a different combinatorial structure and decomposition, e.g., from the atmosphere to the ocean, during the temporal integration of the coupled system. Several remapping schemes are currently in use or available for ESMs. However, a unified approach to compare the properties of these different schemes has not been attempted previously. We present a rigorous methodology for the evaluation and intercomparison of remapping methods through an independently implemented suite of metrics that measure the ability of a method to adhere to constraints such as grid independence, monotonicity, global conservation, and local extrema or feature preservation. A comprehensive set of numerical evaluations is conducted based on a progression of scalar fields from idealized and smooth to more general climate data with strong discontinuities and strict bounds. We examine four remapping algorithms with distinct design approaches, namely ESMF Regrid, TempestRemap, generalized moving least squares (GMLS) with post-processing filters, and WLS-ENOR. By repeated iterative application of the high-order remapping methods to the test fields, we verify the accuracy of each scheme in terms of their observed convergence order for smooth data and determine the bounded error propagation using challenging, realistic field data on both uniform and regionally refined mesh cases. In addition to retaining high-order accuracy under idealized conditions, the methods also demonstrate robust remapping performance when dealing with non-smooth data. The traditional L2-minimization approaches used in ESMF and TempestRemap fail to maintain monotonicity, in contrast to the stable recovery achieved through the nonlinear filters used in both the meshless GMLS and hybrid mesh-based WLS-ENOR schemes. Local feature preservation analysis indicates that high-order methods perform better than low-order dissipative schemes for all test cases. The behavior of these remappers remains consistent when applied on regionally refined meshes, indicating mesh-invariant implementations. The MIRA intercomparison protocol proposed in this paper and the detailed comparison of the four algorithms demonstrate that the new schemes, namely GMLS and WLS-ENOR, are competitive with standard conservative minimization methods requiring computation of mesh intersections. The work presented in this paper provides a foundation that can be extended to include complex field definitions, realistic mesh topologies, and spectral element discretizations, thereby allowing for a more complete analysis of production-ready remapping packages.
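As a simplified illustration of the kind of metrics such a protocol evaluates, the snippet below computes a global conservation error and a crude bounds-preservation (monotonicity proxy) violation for a remapped field. The area weights and field arrays are placeholders, and this is not the MIRA implementation itself:

```python
import numpy as np

def conservation_error(src_field, src_area, tgt_field, tgt_area):
    # Relative mismatch of the area-weighted integrals before/after remap.
    src_int = np.sum(src_field * src_area)
    tgt_int = np.sum(tgt_field * tgt_area)
    return abs(tgt_int - src_int) / abs(src_int)

def bounds_violation(src_field, tgt_field):
    # Overshoot/undershoot beyond the source field's global bounds,
    # used here as a simple proxy for a monotonicity check.
    lo, hi = src_field.min(), src_field.max()
    return max(0.0, tgt_field.max() - hi) + max(0.0, lo - tgt_field.min())

# Toy example with hypothetical source and remapped data.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 1.0, 100)
src_w = np.full(100, 1.0 / 100)
tgt = src[:80] * 1.001            # pretend remapped values on a coarser mesh
tgt_w = np.full(80, 1.0 / 80)

print("conservation error:", conservation_error(src, src_w, tgt, tgt_w))
print("bounds violation:", bounds_violation(src, tgt))
```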

More Details

Entropy and its Relationship with Statistics

Lehoucq, Richard B.; Mayer, Carolyn D.; Tucker, James D.

The purpose of our report is to discuss the notion of entropy and its relationship with statistics. Our goal is to provide a way to think about entropy, its central role within information theory, and its relationship with statistics. We review various relationships between information theory and statistics; nearly all are well known but unfortunately are often not recognized. Entropy quantifies the "average amount of surprise" in a random variable and lies at the heart of information theory, which studies the transmission, processing, extraction, and utilization of information. For us, data is information. What is the distinction between information theory and statistics? Information theorists work with probability distributions, whereas statisticians work with samples. In so many words, information theory applied to samples is the practice of statistics.
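For reference, the central quantities can be written down directly. The discrete Shannon entropy and the Kullback-Leibler divergence below are the standard textbook definitions, included here only as orientation rather than as material from the report:

```latex
% Shannon entropy of a discrete random variable X with pmf p:
H(X) = -\sum_{x} p(x) \log p(x)

% Kullback--Leibler divergence between distributions p and q:
D_{\mathrm{KL}}(p \,\|\, q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}
```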

More Details

An introduction to developing GitLab/Jacamar runner analyst centric workflows at Sandia

Robinson, Allen C.; Swan, Matthew S.; Harvey, Evan C.; Klein, Brandon T.; Lawson, Gary L.; Milewicz, Reed M.; Laros, James H.; Schmitz, Mark E.; Warnock, Scott A.

This document provides very basic background information and initial enabling guidance for computational analysts to develop and utilize GitOps practices within the Common Engineering Environment (CEE) and High Performance Computing (HPC) computational environment at Sandia National Laboratories through GitLab/Jacamar runner based workflows.

More Details

Dynamics Informed Optimization for Resilient Energy Systems

Arguello, Bryan A.; Stewart, Nathan; Hoffman, Matthew J.; Nicholson, Bethany L.; Garrett, Richard A.; Moog, Emily R.

Optimal mitigation planning for highly disruptive contingencies to a transmission-level power system requires optimization with dynamic power system constraints, owing to the key role of dynamics in system stability under major perturbations. We formulate a generalized disjunctive program to determine optimal grid component hardening choices for protecting against major failures, with differential-algebraic constraints representing system dynamics (specifically, differential equations representing generator and load behavior and algebraic equations representing instantaneous power balance over the transmission system). We optionally allow stochastic optimal pre-positioning across all considered failure scenarios, and optimal emergency control within each scenario. This novel formulation allows, for the first time, analysis of the resilience interdependencies of mitigation planning, preventive control, and emergency control. Using all three strategies in concert is particularly effective at maintaining robust power system operation under severe contingencies, as we demonstrate on the Western System Coordinating Council (WSCC) 9-bus test system using synthetic multi-device outage scenarios. Towards integrating our modeling framework with real threats and more realistic power systems, we explore applying hybrid dynamics to power systems; this exploratory work is applied to basic RL circuits with the ultimate goal of using the methodology to model protective tripping schemes in the grid. Finally, we survey mitigation techniques for high-altitude electromagnetic pulse (HEMP) threats and describe a geographic information system (GIS) application developed to create threat scenarios in a grid with geographic detail.
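In generic form, and purely as an illustrative sketch of the problem class rather than the exact model formulated in this work, a disjunctive mitigation-planning problem with dynamic constraints can be written as:

```latex
% Sketch only: y are discrete hardening choices, u_s(t) are controls, x_s(t) are
% dynamic states obeying differential-algebraic dynamics in failure scenario s.
\min_{y,\, u,\, x} \; c^{\top} y
  + \mathbb{E}_{s}\!\left[ \int_{0}^{T} \ell\big(x_s(t), u_s(t)\big)\, dt \right]
\quad \text{s.t.} \quad
\dot{x}_s = f\big(x_s, u_s, y; s\big), \qquad
0 = g\big(x_s, u_s, y; s\big), \qquad
y \in \{0, 1\}^{m}.
```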

More Details

Sensitivity Analysis for Solutions to Heterogeneous Nonlocal Systems. Theoretical and Numerical Studies

Journal of Peridynamics and Nonlocal Modeling

Buczkowski, Nicole E.; Foss, Mikil D.; Parks, Michael L.; Radu, Petronela

The paper presents a collection of results on continuous dependence for solutions to nonlocal problems under perturbations of data and system parameters. The integral operators appearing in the systems capture interactions via heterogeneous kernels that exhibit different types of weak singularities, spatial dependence, and even regions of zero interaction. The stability results showcase explicit bounds involving the measure of the domain, the interaction collar size, the nonlocal Poincaré constant, and other parameters. In the nonlinear setting, the bounds quantify, in different Lp norms, the sensitivity of solutions under different nonlinearity profiles. The results are validated by numerical simulations showcasing discontinuous solutions, varying horizons of interaction, and symmetric and heterogeneous kernels.
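For readers unfamiliar with the setting, nonlocal operators of this kind act through an integral over a finite interaction horizon. The generic form below is standard in the nonlocal/peridynamics literature and is given only as orientation, not as the paper's exact operator:

```latex
% Generic nonlocal operator with kernel K and interaction horizon \delta:
\mathcal{L}_{\delta} u(x) = \int_{B_{\delta}(x)} \big( u(y) - u(x) \big)\, K(x, y)\, dy
```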

More Details

Deployment of Multifidelity Uncertainty Quantification for Thermal Battery Assessment Part I: Algorithms and Single Cell Results

Eldred, Michael S.; Adams, Brian M.; Geraci, Gianluca G.; Portone, Teresa P.; Ridgway, Elliott M.; Stephens, John A.; Wildey, Timothy M.

This report documents the results of an FY22 ASC V&V level 2 milestone demonstrating new algorithms for multifidelity uncertainty quantification. Part I of the report describes the algorithms, studies their performance on a simple model problem, and then deploys the methods to a thermal battery example from the open literature. Part II (restricted distribution) applies the multifidelity UQ methods to specific thermal batteries of interest to the NNSA/ASC program.
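As background on the flavor of estimator such multifidelity methods build upon, the sketch below implements a basic two-fidelity control-variate Monte Carlo estimate with hypothetical cheap and expensive model functions. It illustrates the general technique only, not the specific algorithms documented in the milestone:

```python
import numpy as np

rng = np.random.default_rng(2)

def model_hi(x):
    # Hypothetical expensive high-fidelity model.
    return np.sin(x) + 0.05 * x**2

def model_lo(x):
    # Hypothetical cheap low-fidelity surrogate, correlated with model_hi.
    return np.sin(x)

# A few expensive evaluations, many cheap ones.
x_hi = rng.normal(size=50)
x_lo = rng.normal(size=5000)

f_hi = model_hi(x_hi)
f_lo_paired = model_lo(x_hi)      # low-fidelity model at the same samples
f_lo_many = model_lo(x_lo)

# Control-variate weight from the sample covariance and variance.
alpha = np.cov(f_hi, f_lo_paired)[0, 1] / np.var(f_lo_paired)

# Two-fidelity control-variate estimate of the mean of model_hi.
estimate = f_hi.mean() + alpha * (f_lo_many.mean() - f_lo_paired.mean())
print("multifidelity estimate:", estimate, " plain MC estimate:", f_hi.mean())
```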

More Details