Neural networks are becoming the cornerstone of national security prediction tasks. However, designing them requires significant research and trial and error, as they have many hyperparameters, including their computation graph ("architecture"). Neural architecture search (NAS) employs secondary optimizers to search for architectures that maximize objectives like accuracy. Evolutionary algorithms (EAs) are the most widely used class of optimizer for NAS. However, existing Python libraries for writing EAs limit the complexity of the experiments a user can design. In this project, we built ARENA, a Python framework for encoding complex, highly realistic EAs. ARENA collects detailed information as it runs and is flexible enough to encode non-EA search algorithms. We tested ARENA on four toy optimization problems by encoding three search algorithms for each: random search, an EA, and simulated annealing. We also designed an EA that performs NAS on the MNIST dataset. Our experiments suggest the potential for immediate mission impact through solving lab-wide optimization problems.
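To make concrete what "encoding a search algorithm" means here, the following is a minimal, self-contained sketch of a simple EA on a toy one-max problem; it is our own illustration and does not use ARENA's actual API.

```python
import random

def one_max(bits):                      # toy objective: count of 1-bits
    return sum(bits)

def evolve(n_bits=32, pop_size=20, generations=100, mut_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)        # rank by fitness
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mut_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=one_max)

print(one_max(evolve()))
```

Random search and simulated annealing fit the same loop structure, differing only in how candidates are proposed and accepted.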
The Daedalus ultrafast x-ray imager is the latest generation in Sandia's hybrid CMOS detector family. With three frames along an identical line of sight, 1 ns minimum integration time, a higher full well than Icarus, and added features, Daedalus brings exciting new capabilities to diagnostic applications in inertial confinement fusion and high energy density science. In this work, we present measurements of time response, dynamic range, spatial uniformity, pixel cross-talk, and absolute x-ray sensitivity using pulsed optical and x-ray sources. We report a measured 1.5 Me⁻ full well, a pixel sensitivity of 9.58 × 10⁻⁷ V/e⁻, and an estimated spatial uniformity of ∼5% across the sensor array.
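As a quick consistency check on the reported figures (our own arithmetic, not a measurement from the paper), the full well and pixel sensitivity together imply a signal swing of roughly 1.4 V at saturation:

```python
full_well = 1.5e6        # electrons (reported full well)
sensitivity = 9.58e-7    # V per electron (reported pixel sensitivity)
print(full_well * sensitivity)  # ~1.44 V implied signal swing at full well
```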
Ultimately, our experiment measures two quantities on an aluminum bar: motion (which modeling must predict) and temperature (which sets thermal boundary conditions). For motion, stereo digital image correlation (DIC) uses imaging data to measure displacements relative to a reference image with precision down to 1/100th of a pixel. For temperature, we use a calibrated infrared imaging method. We capture both datasets simultaneously and then register the temperature data in space to the same coordinate system as the displacement data. While we will later show that our experiments are repeatable, indicating that separate experiments for motion and temperature would provide similar data, the simultaneous, registered data removes test-to-test variability as a source of uncertainty for model calibration and reduces the number of time-consuming tests that must be performed.
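One plausible form of the registration step, assuming the infrared pixel coordinates have already been transformed into the DIC physical frame, is scattered-data interpolation of temperature onto the DIC measurement points; this sketch is our own illustration, not the paper's method:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical inputs: IR pixel locations already transformed into the DIC
# physical coordinate frame (e.g., via a calibrated mapping), plus the
# temperature value at each of those locations.
ir_xy = np.random.rand(500, 2) * 100.0      # stand-in for transformed IR points (mm)
ir_temp = 20.0 + 5.0 * ir_xy[:, 0] / 100.0  # stand-in temperature field (deg C)

# DIC measurement points where displacement is known.
dic_xy = np.random.rand(200, 2) * 100.0

# Resample temperature at the DIC points so each displacement sample
# carries a co-registered temperature.
temp_at_dic = griddata(ir_xy, ir_temp, dic_xy, method="linear")
```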
Uncertainty in severe accident evolution and outcome is driven by event bifurcations that pose distinctive challenges to defensive layers and tend to promote the emergence of discrete classes of core damage and accident risk. This discrete set of "attractor" states arises from the complex networks of competing physical phenomena and conditional event cascades occurring as the overall system degrades, a process that yields increasing degrees of freedom and accident progression pathways. Characterization of these event spaces has proven elusive to traditional data interrogation methods but proves tractable with more advanced data collection and machine learning (ML) approaches. Using these approaches, we demonstrate a conceptual framework that enables real-time, robust, risk-informed decision-making support to improve accident mitigation and encourage "graceful exits" during low-probability, extreme events, thereby limiting accident consequences. In this analysis, we simulated over 8,000 short-term station blackout (STSBO) accidents with the state-of-the-art integral severe accident code MELCOR and demonstrate the potential for ML approaches to predict simulation outcomes. We paired ML tools with interpretable, mechanistic event trees for the considered STSBO accident space to predict the likelihood of future event paths along the tree. In addition to the current state of the system, we use information from recent trajectories of temperature, pressure, and other physical features, combining the current state and past trajectories to forecast future event paths. Finally, we simulate the random injection of variable amounts of water to quantify the efficacy of available actions at reducing risks along the many branches of the event tree. We identify scenarios and windows of opportunity to mitigate risk, as well as scenarios in which such actions are unlikely to alter the accident end-state.
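As a hedged illustration of pairing state and trajectory features with a branch classifier (the feature layout, model choice, and labels below are hypothetical stand-ins, not the study's actual pipeline):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical feature matrix: current state (e.g., pressure, temperature,
# water level) concatenated with summary statistics of their recent
# trajectories, one row per simulated accident.
rng = np.random.default_rng(0)
X = rng.normal(size=(8000, 12))      # stand-in for 8,000 MELCOR runs
y = rng.integers(0, 3, size=8000)    # stand-in labels: next branch in the event tree

clf = GradientBoostingClassifier().fit(X[:6000], y[:6000])
branch_probs = clf.predict_proba(X[6000:])  # likelihood of each future event path
```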
The Photovoltaic (PV) Performance Modeling Collaborative (PVPMC) organized a blind PV performance modeling intercomparison that allowed PV modelers to test their models and modeling ability against real system data. Measured weather and irradiance data were provided, along with detailed descriptions of PV systems at two locations (Albuquerque, New Mexico, USA, and Roskilde, Denmark). Participants were asked to simulate the plane-of-array irradiance, module temperature, and DC power output of six systems and submit their results to Sandia for processing. The results showed an overall median mean bias (i.e., the average error per participant) of 0.6% in annual irradiation and −3.3% in annual energy yield. While most PV performance modeling results exhibit higher precision and accuracy compared with an earlier blind PV modeling study in 2010, human errors, modeling skill, and derates were found to still cause significant errors in the estimates.
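For readers unfamiliar with the headline statistic, the sketch below shows one plausible way to compute a per-participant mean bias and then the median across participants; the toy data and the exact definition used in the study may differ:

```python
import numpy as np

def mean_bias_percent(predicted, measured):
    # Per-participant mean bias: signed error relative to the measured total.
    return 100.0 * np.sum(np.asarray(predicted) - np.asarray(measured)) / np.sum(measured)

# Toy data: annual energy yield per system for three participants vs. measurements.
measured = np.array([100.0, 100.0, 100.0])
submissions = [np.array([99.2, 101.0, 100.5]),   # participant A
               np.array([95.0, 96.5, 97.0]),     # participant B
               np.array([102.0, 103.0, 101.5])]  # participant C

per_participant = [mean_bias_percent(p, measured) for p in submissions]
print(np.median(per_participant))  # the "median mean bias" statistic
```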
A near-net-shape coating is to be applied to the outer surface of a capped cylinder ("cake pan") substrate using thermal spray technology. A capped cylinder geometry is more complex than simple coupon-level substrates (e.g., flat panels, cylinders) and thus requires a more complex toolpath to deposit a uniform coating. This report documents a practical theoretical approach to calculating relative torch-to-substrate speeds for coating the cylindrical, corner, and cap regions of a rotating capped cylinder based on fundamental thermal spray toolpath principles. A preliminary experimental test deposited a thermal spray coating onto a mock substrate using toolpath speeds calculated by the proposed theoretical approach. The mock substrate was metallographically inspected to assess coating uniformity across the cylindrical, corner, and cap regions. Inspection revealed qualitatively uniform coating microstructure and thickness where theoretically predicted, demonstrating the viability of the proposed toolpath method and associated calculations. Pathways toward optimizing coating uniformity at the cap center are proposed as near-term future work.
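The core kinematic constraint on the cap is that a point at radius r moves under the torch at surface speed v = ωr, so holding the relative surface speed constant requires the rotation rate to grow as 1/r and diverge at the cap center. A minimal sketch of that calculation (our own illustration, with a hypothetical clamp value):

```python
import math

def rpm_for_constant_surface_speed(v_surface_mm_s, r_mm, rpm_max=600.0):
    """Rotation rate needed so a point at radius r passes the torch at
    v_surface; clamped, because omega = v / r diverges at the cap center."""
    if r_mm <= 0:
        return rpm_max
    omega = v_surface_mm_s / r_mm              # rad/s
    return min(omega * 60.0 / (2.0 * math.pi), rpm_max)

for r in (50.0, 25.0, 10.0, 2.0, 0.0):
    print(r, round(rpm_for_constant_surface_speed(500.0, r), 1))
```

The clamp is exactly where uniformity at the cap center becomes difficult, consistent with the future work proposed above.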
This paper is concerned with goal-oriented a posteriori error estimation for nonlinear functionals in the context of nonlinear variational problems solved with continuous Galerkin finite element discretizations. A two-level, or discrete, adjoint-based approach for error estimation is considered. The traditional method to derive an error estimate in this context requires linearizing both the nonlinear variational form and the nonlinear functional of interest, which introduces linearization errors into the error estimate. In this paper, we investigate these linearization errors. In particular, we develop a novel discrete goal-oriented error estimate that accounts for traditionally neglected nonlinear terms at the expense of greater computational cost. We demonstrate how this error estimate can be used to drive mesh adaptivity. We show that accounting for linearization errors in the error estimate can improve its effectivity for several nonlinear model problems and quantities of interest. We also demonstrate that an adaptive strategy based on the newly proposed estimate can lead to more accurate approximations of the nonlinear functional with fewer degrees of freedom when compared to uniform refinement and traditional adjoint-based approaches.
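For orientation, the standard first-order adjoint estimate and the place where the linearization remainder enters can be written as follows (notation ours, for illustration; sign conventions vary with the definition of the residual):

```latex
% Primal problem: find u with R(u; v) = 0 for all test functions v;
% quantity of interest J. The adjoint z solves the linearized dual problem
% R'[u_h](v, z) = J'[u_h](v), and
J(u) - J(u_h) \;=\; -R(u_h;\, z)
  \;+\; \underbrace{\mathcal{O}\big(\lVert u - u_h \rVert^{2}\big)}_{\text{linearization remainder}}
```

The remainder term is what the traditional estimate discards and what the proposed estimate accounts for.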
We present methods to estimate parameters for models of the incidence angle modifier used when simulating irradiance on a photovoltaic array. The incidence angle modifier quantifies the fraction of direct irradiance that is reflected away from the array's face as a function of the direct irradiance's angle of incidence. Parameters can be estimated from data, and the fitting method can be used to convert between models. We show that the model conversion procedure yields models that produce similar annual insolation on a fixed plane.
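One widely used single-parameter form is the ASHRAE model, IAM(θ) = 1 − b₀(1/cos θ − 1). The sketch below shows a least-squares fit of b₀ to hypothetical measured data; the same routine can fit one model's curve to another model's predictions, which is the essence of model conversion:

```python
import numpy as np
from scipy.optimize import curve_fit

def iam_ashrae(theta_deg, b0):
    """ASHRAE incidence angle modifier: fraction of direct irradiance
    transmitted at incidence angle theta (equals 1 at normal incidence)."""
    theta = np.radians(theta_deg)
    return 1.0 - b0 * (1.0 / np.cos(theta) - 1.0)

# Toy "measured" IAM data; real data would come from lab or field measurements.
theta = np.array([0.0, 20.0, 40.0, 60.0, 75.0])
iam_meas = np.array([1.0, 0.998, 0.985, 0.94, 0.83])

(b0_fit,), _ = curve_fit(iam_ashrae, theta, iam_meas, p0=[0.05])
print(b0_fit)
```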
Marine aerosol injections are key to furthering understanding both of the potential of deliberate injection for marine cloud brightening (MCB), a candidate climate intervention (CI) strategy, and of the aerosol-cloud interaction behaviors that currently constitute the largest uncertainty in global climate model (GCM) predictions of our climate. Since the rate of spread of aerosols in a marine environment directly determines how effectively aerosol injections can alter cloud radiative forcing, it is crucial to understand the spatial and temporal extent of injected-aerosol effects following direct injection into marine environments. The ubiquity of ship-injected aerosol tracks in satellite imagery makes observational validation of new parameterizations possible in 2D; 3D-compatible data, however, are scarcer, yet necessary for developing subgrid-scale parameterizations of aerosol-cloud interactions in GCMs. This report introduces two novel parameterizations of atmospheric aerosol injection behavior suitable for both 3D (GCM-compatible) and 2D (observation-related) modeling. Their applicability is highlighted using a wealth of observational data: small- and larger-scale salt-aerosol injection experiments conducted at SNL, 3D large eddy simulations of ship-injected aerosol tracks, and 2D satellite images of ship tracks. The power of experimental data to enhance knowledge of aerosol-cloud interactions is emphasized in particular by studying key aerosol microphysical and optical properties as observed through their mixing in cloud-like environments.
The research described here was performed as part of the DOE SciDAC project Coupling Approaches for Next Generation Architectures (CANGA). A framework was developed for the derivation of novel algorithms for the multirate time integration of two-component systems coupled across an interface between spatial domains. The multirate aspect means that each component integrator may use a different time step. The framework provides a way to construct multirate integrators with desirable properties related to stability, accuracy, and preservation of system invariants. This report describes the framework and summarizes the major results, examples, and research products.
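As a hedged illustration of the multirate idea only (not one of the CANGA-derived algorithms), the sketch below subcycles a fast component m times per slow macro step, with the slow state held frozen over the macro step as the simplest coupling choice:

```python
import numpy as np

def multirate_euler(f_slow, f_fast, y_s, y_f, H, m, n_steps):
    """Illustrative multirate forward Euler: the slow component takes steps
    of size H while the fast component subcycles with h = H/m, seeing the
    slow state frozen over each macro step."""
    h = H / m
    for _ in range(n_steps):
        y_s_old = y_s.copy()
        y_s = y_s + H * f_slow(y_s, y_f)           # one slow macro step
        for _ in range(m):                          # m fast micro steps
            y_f = y_f + h * f_fast(y_s_old, y_f)    # slow state held frozen
    return y_s, y_f

# Toy coupled system: a slow mode driven by a fast relaxing mode.
f_slow = lambda s, f: -0.1 * s + 0.01 * f
f_fast = lambda s, f: -10.0 * (f - s)
ys, yf = multirate_euler(f_slow, f_fast, np.array([1.0]), np.array([0.0]),
                         H=0.1, m=10, n_steps=100)
```

More sophisticated couplings interpolate the slow state across the macro step rather than freezing it, which is one of the property trade-offs such a framework must navigate.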
This special memorial issue pays tribute to James (Jim) A. Miller, a giant of combustion science who died in 2021, with a celebration of his enormous influence on the field. We were touched by the responses we received after we sent out the invitations for it. Jim inspired several generations of scientists, who viewed him as a mentor, a father figure, and a friend. Together with Nils Hansen and Peter Glarborg, we wrote a detailed account of his life and work, which appeared in this journal shortly after his death; here, therefore, we focus on the scientific areas he had interest in and influence on, and how they relate to the 34 papers in this issue. The topics of these papers span a variety of Jim's interests, including nitrogen chemistry, polycyclic aromatic hydrocarbon (PAH) chemistry, oxidation chemistry, energy transfer, prompt dissociations, and codes to facilitate combustion chemistry simulations.
Mechanical metamaterials are artificial materials with unique global properties arising from the structural geometry and material composition of their unit cell. Typically, mechanical metamaterial unit cells are designed such that, when tessellated, they exhibit unusual mechanical properties such as zero or negative Poisson's ratio and negative stiffness. Beyond these applications, mechanical metamaterials can be used to achieve tailorable nonlinear deformation responses. Computational methods such as gradient-based topology optimization (TO) and size/shape optimization (SSO) can be used to design these metamaterials. However, both methods can lead to suboptimal solutions or a lack of generalizability. Therefore, this research used deep reinforcement learning (DRL), a branch of machine learning in which an agent learns to complete tasks through interactive experience, to design mechanical metamaterials with specific nonlinear deformation responses in compression or tension. The agent learned to design the unit cells by sequentially adding material to a discrete design domain and being rewarded for achieving the desired deformation response. After training, the agent successfully designed unit cells that exhibit desired deformation responses not experienced during training. This work shows the potential of DRL as a high-level design tool for a wide array of engineering applications.
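To illustrate the sequential design loop described above (with a placeholder surrogate in place of the finite element analysis the actual work would require; all names below are hypothetical):

```python
import numpy as np

class UnitCellEnv:
    """Toy stand-in for the design environment: the agent adds material to
    one grid cell per step; a placeholder surrogate maps the design to a
    response curve, and the reward is the negative distance to the target."""

    def __init__(self, n=8, target=None, max_steps=20):
        self.n, self.max_steps = n, max_steps
        self.target = target if target is not None else np.linspace(0, 1, 10)

    def reset(self):
        self.design = np.zeros((self.n, self.n))
        self.steps = 0
        return self.design.copy()

    def _response(self):
        # Placeholder surrogate; the actual work would run an FE simulation
        # of the unit cell in compression or tension.
        stiffness = self.design.mean()
        return stiffness * np.linspace(0, 2, 10)

    def step(self, action):              # action: flat index of a grid cell
        self.design[np.unravel_index(action, self.design.shape)] = 1.0
        self.steps += 1
        reward = -np.linalg.norm(self._response() - self.target)
        done = self.steps >= self.max_steps
        return self.design.copy(), reward, done
```

Any standard DRL algorithm (e.g., DQN or PPO) can then be trained against such an environment to maximize the cumulative reward.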
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator and has been written to support the simulation needs of Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. (2) A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to develop new types of analysis without requiring the implementation of analysis-specific device models. (3) Device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase (a message-passing parallel implementation), which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory, and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
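For context on item (2), the DAE formulation of circuit equations is commonly written in the following generic form (our notation, for illustration, not a quotation from the Xyce theory documentation; sign conventions vary):

```latex
% x(t): node voltages and branch currents; q: charges and fluxes;
% f: static (resistive) contributions; b(t): independent sources.
\frac{d}{dt}\, q\big(x(t)\big) + f\big(x(t)\big) - b(t) = 0
```

Because device models only need to supply q and f (and their derivatives), the time-integration and nonlinear-solver layers can evolve independently of the model package.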