Recent Developments in LAMMPS
We use a nascent data-driven causal discovery method to find and compare causal relationships in observed data and climate model output. We consider ten features of the Arctic climate, collected from public databases of observational data and of Energy Exascale Earth System Model (E3SM) output. By identifying and analyzing the resulting causal networks, we make meaningful comparisons between observed and climate-model interdependencies. This work demonstrates that the PCMCI causal discovery algorithm can be applied to Arctic climate data, that there are noticeable similarities between observed and simulated Arctic climate dynamics, and that further work is needed to identify specific areas for improvement so that models better align with natural observations.
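As a concrete illustration of the workflow this abstract describes, the sketch below runs PCMCI on synthetic multivariate time series using the tigramite package, the reference implementation of the algorithm. The variable names, lag range, and significance threshold here are illustrative assumptions, not the ten features or settings used in the study.

```python
# Minimal PCMCI sketch using the tigramite package. The data and
# parameter choices below are illustrative placeholders.
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
try:
    from tigramite.independence_tests.parcorr import ParCorr  # tigramite >= 5
except ImportError:
    from tigramite.independence_tests import ParCorr          # older releases

# Toy stand-in for the Arctic feature time series (time steps x variables).
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 4))
dataframe = pp.DataFrame(data, var_names=["sea_ice", "sst", "slp", "t2m"])

pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci.run_pcmci(tau_max=6, pc_alpha=0.05)  # lags up to 6 steps

# p_matrix[i, j, tau] holds the significance of "variable i drives
# variable j at lag tau"; thresholding it yields the causal network.
print(results["p_matrix"].shape)
```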
This report presents the results of the “Foundations of Rigorous Cyber Experimentation” (FORCE) Laboratory Directed Research and Development (LDRD) project. This project is a companion to the “Science and Engineering of Cyber security through Uncertainty quantification and Rigorous Experimentation” (SECURE) Grand Challenge LDRD project. It leverages the offline, controlled nature of cyber experimentation technologies in general, and emulation testbeds in particular, to assess how uncertainties in network conditions affect uncertainties in key metrics. We conduct extensive experimentation using a Firewheel emulation-based cyber testbed model of Invisible Internet Project (I2P) networks to understand a de-anonymization attack previously presented in the literature. Our goals in this analysis are to determine whether emulation testbeds can produce reliably repeatable experimental networks at scale, to identify significant parameters influencing experimental results, to replicate the previous results, to quantify uncertainty associated with the predictions, and to apply multi-fidelity techniques to forecast results to real-world network scales. The I2P networks we study are up to three orders of magnitude larger than the networks studied in SECURE and present additional challenges in identifying significant parameters. The key contributions of this project are the application of SECURE techniques, such as uncertainty quantification (UQ), to a scenario of interest, and the scaling of those techniques to larger network sizes. This report describes the experimental methods and results of these studies in more detail. In addition, the process of constructing these large-scale experiments tested the limits of the Firewheel emulation-based technologies; another contribution of this work is therefore that it informed the Firewheel developers of scaling limitations, which were subsequently corrected.
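The sketch below shows the generic shape of the parameter-screening step described above: Latin hypercube sampling over experiment parameters followed by a rank-correlation screen for significant factors. The parameter names and the run_experiment() stand-in are hypothetical; in the study, each sample would drive a Firewheel I2P emulation run, not this toy function.

```python
# Generic UQ-style parameter screen: sample the parameter space, run the
# experiment at each point, and rank-correlate inputs against the metric.
import numpy as np
from scipy.stats import qmc, spearmanr

rng = np.random.default_rng(1)
params = ["num_routers", "bandwidth_mbps", "churn_rate"]   # hypothetical factors
lo, hi = [50, 1.0, 0.0], [5000, 100.0, 0.5]

X = qmc.scale(qmc.LatinHypercube(d=len(params), seed=1).random(n=64), lo, hi)

def run_experiment(x):
    # Placeholder for a single emulation run; in the study this would be a
    # Firewheel I2P experiment returning a metric such as attack success rate.
    return 1.0 / (1.0 + x[0] / 1000.0) + 0.1 * x[2] + rng.normal(0.0, 0.01)

y = np.array([run_experiment(x) for x in X])

# Rank correlation as a simple screen for significant parameters.
for j, name in enumerate(params):
    rho, p = spearmanr(X[:, j], y)
    print(f"{name:>15}: rho={rho:+.2f}  p={p:.3g}")
```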
Sandia National Laboratories has developed a capability to estimate parameters of epidemiological models from case reporting data to support responses to the COVID-19 pandemic. A differentiating feature of this work is the ability to simultaneously estimate county-specific disease transmission parameters in a nationwide model that considers mobility between counties. The approach is focused on estimating parameters in a stochastic SEIR model that considers mobility between model patches (i.e., counties) as well as additional infectious compartments. The inference engine developed by Sandia includes (1) reconstruction and (2) transmission parameter inference. Reconstruction involves estimating current population counts within each of the compartments in a modified SEIR model from reported case data. Reconstruction produces input for the inference formulations, and it provides initial conditions that can be used in other modeling and planning efforts. Inference involves the solution of a large-scale optimization problem to estimate the time profiles for the transmission parameters in each county. These provide quantification of changes in the transmission parameter over time (e.g., due to the impact of intervention strategies). This capability has been implemented in a Python-based software package, epi_inference, which makes extensive use of Pyomo [5] and IPOPT [10] to formulate and solve the inference formulations.
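A minimal single-patch sketch of the inference idea, using the same Pyomo/IPOPT stack the abstract cites: fit a constant transmission parameter beta in a discrete-time deterministic SEIR model to reported cases in least squares. The real epi_inference package estimates county-specific, time-varying parameters with inter-county mobility; the synthetic data, rates, and initial conditions below are illustrative assumptions.

```python
import pyomo.environ as pyo

cases = [5, 7, 10, 14, 19, 26, 35, 46, 60, 77]   # synthetic daily new cases
T = len(cases)
N = 1.0e5                                        # patch population (assumed)
sigma, gamma = 1 / 5.2, 1 / 4.3                  # 1/latent, 1/infectious period

m = pyo.ConcreteModel()
m.t = pyo.RangeSet(0, T)                         # time points 0..T
m.steps = pyo.RangeSet(0, T - 1)                 # transition steps
m.beta = pyo.Var(bounds=(0.0, 2.0), initialize=0.3)
m.S = pyo.Var(m.t, within=pyo.NonNegativeReals, initialize=N)
m.E = pyo.Var(m.t, within=pyo.NonNegativeReals, initialize=10.0)
m.I = pyo.Var(m.t, within=pyo.NonNegativeReals, initialize=10.0)

m.ic = pyo.ConstraintList()                      # assumed initial conditions
m.ic.add(m.S[0] == N - 20.0)
m.ic.add(m.E[0] == 10.0)
m.ic.add(m.I[0] == 10.0)

# Discrete-time SEIR dynamics as equality constraints.
m.s_step = pyo.Constraint(m.steps, rule=lambda m, t:
    m.S[t + 1] == m.S[t] - m.beta * m.S[t] * m.I[t] / N)
m.e_step = pyo.Constraint(m.steps, rule=lambda m, t:
    m.E[t + 1] == m.E[t] + m.beta * m.S[t] * m.I[t] / N - sigma * m.E[t])
m.i_step = pyo.Constraint(m.steps, rule=lambda m, t:
    m.I[t + 1] == m.I[t] + sigma * m.E[t] - gamma * m.I[t])

# Least-squares match of modeled incidence (sigma * E) to reported cases.
m.obj = pyo.Objective(expr=sum((sigma * m.E[t] - cases[t]) ** 2 for t in range(T)))

pyo.SolverFactory("ipopt").solve(m)              # requires an IPOPT install
print("estimated beta:", pyo.value(m.beta))
```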
Journal of Peridynamics and Nonlocal Modeling
The propagation of a wave pulse due to low-speed impact on a one-dimensional, heterogeneous bar is studied. Due to the dispersive character of the medium, the pulse attenuates as it propagates. This attenuation is studied over propagation distances that are much longer than the size of the microstructure. A homogenized peridynamic material model can be calibrated to reproduce the attenuation and spreading of the wave. The calibration consists of matching the dispersion curve for the heterogeneous material near the limit of long wavelengths. It is demonstrated that the peridynamic method reproduces the attenuation of wave pulses predicted by an exact microstructural model over large propagation distances.
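To make the calibration target concrete, the sketch below evaluates the standard 1D bond-based peridynamic dispersion relation, omega(k)^2 = (1/rho) * integral over [-delta, delta] of C(xi)(1 - cos(k xi)) dxi, for an assumed constant micromodulus, and checks that it approaches the classical bar wave speed in the long-wavelength limit. The material constants and kernel are illustrative, not the calibrated model from the paper.

```python
import numpy as np

rho, E, delta = 7800.0, 200.0e9, 1.0e-3   # density, effective modulus, horizon (assumed)
C0 = 3.0 * E / delta**3                   # constant micromodulus matching E as k -> 0

def omega(k, n=4001):
    # omega(k)^2 = (1/rho) * integral_{-delta}^{delta} C(xi) (1 - cos(k xi)) dxi
    xi = np.linspace(-delta, delta, n)
    return np.sqrt(np.trapz(C0 * (1.0 - np.cos(k * xi)), xi) / rho)

c = np.sqrt(E / rho)                      # classical (nondispersive) bar wave speed
for k in [1.0e1, 1.0e2, 1.0e3, 1.0e4]:    # wavenumbers, 1/m
    # Ratio -> 1 for long wavelengths; deviation from 1 quantifies dispersion.
    print(f"k={k:8.0f} 1/m   omega/(c k) = {omega(k) / (c * k):.4f}")
```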
Sandia National Laboratories is investigating scalable architectural simulation capabilities with a focus on simulating and evaluating highly scalable supercomputers for high performance computing applications. There is a growing demand for RTL model integration to provide the capability to simulate customized node architectures and heterogeneous systems. This report describes the first steps in integrating the ESSENTial Signal Simulation Enabled by Netlist Transforms (ESSENT) tool with the Structural Simulation Toolkit (SST). ESSENT emits C++ models from hardware descriptions written in FIRRTL, which can be used to automatically generate components. The integration workflow will automatically generate the SST component and the necessary interfaces to "plug" the ESSENT model into the SST framework.
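For orientation, the sketch below shows how such an auto-generated component might be instantiated from a standard SST Python configuration script. The sst module calls (sst.Component, addParams, sst.Link) are SST's normal configuration interface; the element library name "essent" and the component, port, and parameter names are hypothetical placeholders, not the actual generated artifacts.

```python
import sst

# RTL model wrapped by the generated SST component. The "essent" element
# library and "GeneratedModel" component names are hypothetical.
rtl = sst.Component("rtl0", "essent.GeneratedModel")
rtl.addParams({"clock": "1GHz", "verbose": 0})

# Hypothetical stimulus component driving the RTL model's inputs.
driver = sst.Component("driver0", "essent.TestDriver")
driver.addParams({"clock": "1GHz"})

# Standard SST link connecting the two components' ports.
link = sst.Link("io_link")
link.connect((driver, "out_port", "1ns"), (rtl, "in_port", "1ns"))
```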
With the rapid proliferation of additive manufacturing and 3D printing technologies, architected cellular solids, including truss-like 3D lattice topologies, offer the opportunity to program the effective material response through topological design at the mesoscale. The present report summarizes several of the key findings from a 3-year Laboratory Directed Research and Development program. The program set out to explore novel lattice topologies that can be designed to control, redirect, or dissipate energy from one or multiple insult environments relevant to Sandia missions, including crush, shock/impact, vibration, and thermal. In the first four sections, we document four novel lattice topologies stemming from this study: Coulombic lattices, multi-morphology lattices, interpenetrating lattices, and pore-modified gyroid cellular solids, each with unique properties that had not been achieved by existing cellular/lattice metamaterials. The fifth section explores how unintentional lattice imperfections stemming from the manufacturing process, primarily surface roughness in the case of laser powder bed fusion, cause stochastic response, but also how in some cases, such as elastic response, the stochastic behavior is homogenized through the adoption of lattices. In the sixth section, we explore a novel neural network screening process that allows such stochastic variability to be predicted. In the last three sections, we explore considerations in the computational design of lattices. Specifically, section 7 uses a novel generative optimization scheme to design Pareto-optimal lattices for multi-objective environments; section 8 uses computational design to optimize a metallic lattice structure to absorb impact energy for a 1000 ft/s impact; and section 9 develops a modified micromorphic continuum model to solve wave propagation problems in lattices efficiently.
Parallel Computing
Graph partitioning has long been an important tool for distributing work among processors to minimize communication cost and balance the workload. As accelerator-based supercomputers emerge as the standard, graph partitioning becomes even more important as applications rapidly move to these architectures. However, no distributed-memory-parallel, multi-GPU graph partitioner has been available for applications. We developed a spectral graph partitioner, Sphynx, using the portable, accelerator-friendly stack of the Trilinos framework. In Sphynx, we allow the use of different preconditioners and exploit their unique advantages. We use Sphynx to systematically evaluate the various algorithmic choices in spectral partitioning with a focus on GPU performance. We perform those evaluations on two distinct classes of graphs: regular (such as meshes and matrices from finite element methods) and irregular (such as social networks and web graphs), and show that different settings and preconditioners are needed for these graph classes. Experimental results on the Summit supercomputer show that Sphynx is the fastest alternative on irregular graphs in an application-friendly setting and obtains partitioning quality close to ParMETIS on regular graphs. When compared to nvGRAPH on a single GPU, Sphynx is faster and obtains better balance and better quality partitions. Sphynx provides a good and robust partitioning method across a wide range of graphs for applications looking for a GPU-based partitioner.
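The core spectral step underlying a partitioner like Sphynx is spectral bisection: compute the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian) and split vertices at its median. The self-contained SciPy sketch below is illustrative only; Sphynx itself runs distributed on GPUs via Trilinos with iterative eigensolvers and preconditioners.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian

# Toy graph: a path on 8 vertices, given as a symmetric adjacency matrix.
edges = [(i, i + 1) for i in range(7)]
rows = [i for i, j in edges] + [j for i, j in edges]
cols = [j for i, j in edges] + [i for i, j in edges]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(8, 8))

L = laplacian(A)                    # combinatorial graph Laplacian
vals, vecs = np.linalg.eigh(L.toarray())  # dense solve is fine at toy scale
fiedler = vecs[:, 1]                # eigenvector of the 2nd-smallest eigenvalue

# Bisect at the median of the Fiedler vector.
part = (fiedler > np.median(fiedler)).astype(int)
print("partition:", part)           # splits the path into its two halves
```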
Probabilistic and Bayesian neural networks have long been proposed as a method to incorporate uncertainty about the world (both in training data and in operation) into artificial intelligence applications. One approach to making a neural network probabilistic is to leverage a Monte Carlo sampling approach that samples a trained network while incorporating noise. Such sampling approaches for neural networks have not been extensively studied due to the prohibitive requirement of many computationally expensive samples. While the development of future microelectronics platforms that make this sampling more efficient is an attractive option, it has not been immediately clear how to sample a neural network or what the quality of random number generation should be. This research aimed to start addressing these two fundamental questions by examining how basic “off the shelf” neural networks can be sampled through a few different mechanisms (including synapse “dropout” and neuron “dropout”) and how these sampling approaches can be evaluated, both in terms of algorithm effectiveness and the required quality of random numbers.
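A minimal sketch of the Monte Carlo sampling idea described above: repeatedly evaluate a small network with random synapse "dropout" active at inference time and use the spread of the outputs as an uncertainty estimate. The weights below are random stand-ins for a trained model, and the network size and dropout rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
# Random stand-ins for trained weights of a tiny 4-16-1 network.
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def forward(x, p_drop=0.2):
    # Synapse dropout: zero each weight independently with prob p_drop,
    # rescaling survivors so the expected pre-activation is unchanged.
    mask1 = rng.random(W1.shape) > p_drop
    mask2 = rng.random(W2.shape) > p_drop
    h = np.maximum(0.0, (W1 * mask1) @ x / (1 - p_drop) + b1)  # ReLU layer
    return (W2 * mask2) @ h / (1 - p_drop) + b2

x = np.array([0.5, -1.0, 0.25, 2.0])
samples = np.array([forward(x) for _ in range(1000)])  # Monte Carlo samples
# Sample mean is the prediction; sample spread is the uncertainty estimate.
print(f"mean={samples.mean():.3f}  std={samples.std():.3f}")
```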
This project sought to develop a fundamental understanding of the mechanisms underlying a newly observed enhanced germanium (Ge) diffusion process in silicon germanium (SiGe) semiconductor nanostructures during thermal oxidation. Using a combination of oxidation-diffusion experiments, high resolution imaging, and theoretical modeling, a model for the enhanced Ge diffusion mechanism was proposed. Additionally, a nanofabrication approach utilizing this enhanced Ge diffusion mechanism was shown to be applicable to arbitrary 3D shapes, leading to the fabrication of stacked silicon quantum dots embedded in SiGe nanopillars. A new wet etch-based method for preparing 3D nanostructures for high-resolution imaging free of obscuring material or damage was also developed. These results enable a new method for the controlled and scalable fabrication of on-chip silicon nanostructures with sub-10 nm dimensions needed for next generation microelectronics, including low energy electronics, quantum computing, sensors, and integrated photonics.