Machine Learned Interatomic Potential Development of W-ZrC for Fusion Divertor Microstructure and Thermomechanical Properties
Abstract not provided.
Adopting reduced-order models (ROMs) of springs lowers the computational cost of stronglink simulations; however, ROMs introduce currently unquantified error into such analyses. This study addresses that gap by comparing a hexahedral mesh to a commonly used ROM beam mesh. Two types of analyses were performed, a quasi-static displacement-controlled pull and a haversine shock, examining basic spring properties as well as dynamics and stress/strain data. Both tests showed good agreement between the hexahedral and beam meshes, especially in reaction-force and stress trends and maxima. Equivalent plastic strain results were less favorable, indicating that the beam model may be less likely to correctly predict spring failure. Although the ROM reduced computation times by over 48 hours in all shock cases, appropriate use of the ROM should carefully balance this advantage against its reduced accuracy, especially when examining spring failure and outputting variables such as equivalent plastic strain.
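For reference, the haversine shock input mentioned above is a smooth sin^2 acceleration pulse. The short sketch below is illustrative only: the peak amplitude, duration, and sampling are hypothetical values, not those used in the study, and it simply shows how such a pulse can be tabulated as a base-excitation time history for either mesh.

import numpy as np

def haversine_pulse(t, peak_accel, duration):
    """Haversine (sin^2) acceleration pulse: zero outside [0, duration],
    rising smoothly to peak_accel at duration/2 and back to zero."""
    pulse = peak_accel * np.sin(np.pi * t / duration) ** 2
    return np.where((t >= 0.0) & (t <= duration), pulse, 0.0)

# Hypothetical input: 500 g peak, 6 ms duration, 0.1 ms sample spacing
t = np.linspace(0.0, 0.01, 101)
accel = haversine_pulse(t, peak_accel=500.0 * 9.81, duration=0.006)

# The tabulated (t, accel) pairs could then be supplied as the shock
# time history for both the hexahedral and beam-element spring models.
for ti, ai in zip(t[::10], accel[::10]):
    print(f"{ti:8.4f} s  {ai:10.2f} m/s^2")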
Natural and man-made degraded visual environments pose major threats to national security. The random scattering and absorption of light by tiny particles suspended in the air reduces situational awareness and causes unacceptable downtime for critical systems and operations. To improve the situation, we have developed several approaches to interpret the information contained within scattered light to enhance sensing and imaging in scattering media. These approaches were tested at the Sandia National Laboratories Fog Chamber facility and with tabletop fog chambers. Computationally efficient light transport models were developed and leveraged for computational sensing. The models are based on a weak angular dependence approximation to the Boltzmann or radiative transfer equation that appears to be applicable in both the moderate and highly scattering regimes. After the new model was experimentally validated, statistical approaches for detection, localization, and imaging of objects hidden in fog were developed and demonstrated. A binary hypothesis test and the Neyman-Pearson lemma provided the highest theoretically possible probability of detection for a specified false alarm rate and signal-to-noise ratio (a minimal detection sketch follows this abstract). Maximum likelihood estimation was used to estimate the fog optical properties as well as the position, size, and reflection coefficient of an object in fog. A computational dehazing approach was implemented to reduce the effects of scatter on images, making object features more readily discernible. We have developed, characterized, and deployed a new Tabletop Fog Chamber capable of repeatably generating multiple unique fog analogues for optical testing in degraded visual environments. We characterized this chamber using both optical and microphysical techniques. In doing so, we explored the ability of droplet nucleation theory to describe the aerosols generated within the chamber, as well as Mie scattering theory to describe the attenuation of light by those aerosols, and correlated the aerosol microphysics with optical properties such as transmission and meteorological optical range (MOR). This chamber has proved highly valuable and has supported multiple efforts both within and outside this LDRD project to test optics in degraded visual environments. Circularly polarized light has been found to maintain its polarization state better than linearly polarized light when propagating through fog. This was demonstrated experimentally in both the visible and short-wave infrared (SWIR) by imaging targets made of different commercially available retroreflective films. It was found that active circularly polarized imaging can increase contrast and range compared to linearly polarized imaging. We have completed an initial investigation of the capability of machine learning methods to reduce the effects of light scattering when imaging through fog. Previously acquired experimental long-wave images were used to train a denoising autoencoder architecture. Overfitting was found to be a problem because of the lack of variability in object type in this data set. The lessons learned were used to collect a well-labeled dataset with much more variability using the Tabletop Fog Chamber, which will be available for future studies. We have developed several new sensing methods using speckle intensity correlations. First, the ability to image moving objects in fog was shown, establishing that our unique speckle imaging method can be implemented in dynamic scattering media.
Second, the speckle decorrelation over time was found to be sensitive to fog composition, implying extensions to fog characterization. Third, the ability to distinguish macroscopically identical objects on a far-subwavelength scale was demonstrated, suggesting numerous applications ranging from nanoscale defect detection to security. Fourth, we have shown the capability to simultaneously image and localize hidden objects, allowing the speckle imaging method to be effective without prior knowledge of object position. Finally, an interferometric effect was presented that illustrates a new approach for analyzing speckle intensity correlations and may lead to more effective ways to localize and image moving objects. All of these results represent significant developments that push the limits of speckle imaging and open important application spaces. A theory was developed and simulations were performed to assess the potential transverse resolution benefit of relative motion in structured illumination for radar systems. Results for a simplified radar system model indicate that significant resolution gains are possible by scanning a structured beam over the target and applying appropriate signal processing.
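The detection sketch referenced above is given here for the textbook case of a known signal in additive white Gaussian noise, where the Neyman-Pearson likelihood-ratio test reduces to thresholding a matched-filter output. The false-alarm rate and SNR values are illustrative assumptions, not parameters from the fog experiments.

import numpy as np
from scipy.stats import norm

def np_detector(pfa, snr_db):
    """Neyman-Pearson test for a known signal in white Gaussian noise.
    The threshold is set by the allowed false-alarm rate, and the
    resulting probability of detection depends only on the SNR."""
    d = np.sqrt(10.0 ** (snr_db / 10.0))   # deflection coefficient
    threshold = norm.isf(pfa)              # Q^{-1}(Pfa) for a unit-variance statistic
    pd = norm.sf(threshold - d)            # Pd = Q(threshold - d)
    return threshold, pd

for snr_db in (0, 5, 10):
    thr, pd = np_detector(pfa=1e-3, snr_db=snr_db)
    print(f"SNR {snr_db:2d} dB  threshold {thr:.2f}  Pd {pd:.3f}")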
The time dependence of phase diagrams and how to model rate-dependent transitions remain among the key unanswered questions in physics. When a material is loaded dynamically through equilibrium phase boundaries, it is the kinetics that determines the real-time expression of a phase transition. Here we report the atomic- and nanosecond-scale quantification of the kinetics of shock-driven phase transitions in multiple materials. We uniquely make use of both simple-shock and shock-and-hold loading pathways to compress different crystalline solids and induce structural phase transitions below melt. Coupling shock loading with time-resolved synchrotron x-ray diffraction (DXRD), we probe the structural transformations of these solids in the short-lived high-pressure and high-temperature states generated. The novelty and power of using DXRD for the assessment of phase transition kinetics lies in the ability to discover and identify new phases and to examine kinetics without prior knowledge of a material's phase diagram. Our results provide a quantified expression and a physics model of the kinetics of formation of high-pressure phases under shock loading: transition incubation time, evolution, completion time, and crystallization rate.
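The abstract does not specify the functional form of the kinetics model. One common illustrative choice is JMAK (Avrami) kinetics with an explicit incubation time; the sketch below fits that form to hypothetical transformed-fraction data (not the reported measurements) to recover the quantities named above.

import numpy as np
from scipy.optimize import curve_fit

def jmak_fraction(t, t_inc, k, n):
    """Transformed phase fraction vs. time: JMAK (Avrami) kinetics with
    incubation time t_inc, rate constant k, and exponent n."""
    dt = np.clip(t - t_inc, 0.0, None)
    return 1.0 - np.exp(-(k * dt) ** n)

# Hypothetical phase fractions at nanosecond delays (illustrative only)
t_ns = np.array([0, 2, 4, 6, 8, 10, 15, 20], dtype=float)
frac = np.array([0.0, 0.05, 0.25, 0.55, 0.75, 0.88, 0.97, 0.99])

popt, _ = curve_fit(jmak_fraction, t_ns, frac, p0=[1.0, 0.2, 2.0])
t_inc, k, n = popt
print(f"incubation ~{t_inc:.1f} ns, rate ~{k:.2f} 1/ns, exponent ~{n:.1f}")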
Journal of Plasma Physics
Significant variety is observed in spherical crystal x-ray imager (SCXI) data for the stagnated fuel–liner system created in Magnetized Liner Inertial Fusion (MagLIF) experiments conducted at the Sandia National Laboratories Z-facility. As a result, image analysis tasks involving, e.g., region-of-interest selection (i.e. segmentation), background subtraction and image registration have generally required tedious manual treatment leading to increased risk of irreproducibility, lack of uncertainty quantification and smaller-scale studies using only a fraction of available data. We present a convolutional neural network (CNN)-based pipeline to automate much of the image processing workflow. This tool enabled batch preprocessing of an ensemble of N_scans = 139 SCXI images across N_exp = 67 different experiments for subsequent study. The pipeline begins by segmenting images into the stagnated fuel and background using a CNN trained on synthetic images generated from a geometric model of a physical three-dimensional plasma. The resulting segmentation allows for a rules-based registration. Our approach flexibly handles rarely occurring artifacts through minimal user input and avoids the need for extensive hand labelling and augmentation of our experimental dataset that would be needed to train an end-to-end pipeline. We also fit background pixels using low-degree polynomials, and perform a statistical assessment of the background and noise properties over the entire image database. Our results provide a guide for choices made in statistical inference models using stagnation image data and can be applied in the generation of synthetic datasets with realistic choices of noise statistics and background models used for machine learning tasks in MagLIF data analysis. We anticipate that the method may be readily extended to automate other MagLIF stagnation imaging applications.
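As an illustration of the low-degree polynomial background fit described above, the sketch below performs a least-squares fit of a 2D polynomial to pixels flagged as background by a segmentation mask. The array names, the polynomial degree, and the usage lines are assumptions for illustration, not the pipeline's actual implementation.

import numpy as np

def fit_background(image, background_mask, degree=2):
    """Least-squares fit of a low-degree 2D polynomial to the pixels flagged
    as background by a segmentation mask; returns the fitted background image."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x = xx.ravel() / nx
    y = yy.ravel() / ny
    # Design matrix of monomials x^i * y^j with i + j <= degree
    terms = [x**i * y**j for i in range(degree + 1)
                          for j in range(degree + 1 - i)]
    A = np.column_stack(terms)
    m = background_mask.ravel()
    coeffs, *_ = np.linalg.lstsq(A[m], image.ravel()[m], rcond=None)
    return (A @ coeffs).reshape(ny, nx)

# Hypothetical usage with a CNN-derived mask (True = background pixel):
# bg = fit_background(scxi_image, mask, degree=2)
# residual = scxi_image - bg   # noise statistics can then be assessed on residuals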
The PRO-X program is actively supporting the design of nuclear systems by developing a framework to both optimize the fuel cycle infrastructure for advanced reactors (ARs) and minimize the potential for production of weapons-usable nuclear material. Three study topics are currently being investigated by Sandia National Laboratories (SNL) with support from Argonne National Laboratory (ANL). This multi-lab collaboration focuses on study topics that may offer proliferation resistance opportunities or advantages in the nuclear fuel cycle: 1) Transportation Global Landscape, 2) Transportation Avoidability, and 3) Parallel Modular Systems vs. Single Large System (Crosscutting Activity).
In this LDRD we investigated the application of machine learning methods to understand dimensionality reduction and evolution of the Rayleigh-Taylor instability (RTI). As part of the project, we undertook a significant literature review to understand current analytical theory and machine learning based methods for treating the evolution of this instability. We note that we chose to refocus on assessing the hydrodynamic RTI rather than the magneto-Rayleigh-Taylor instability originally proposed. This choice enabled us to utilize a wealth of analytic test cases and to work with relatively fast-running open-source simulations of single-mode RTI, which greatly facilitated external collaboration with URA summer fellowship student Theodore Broeren. In this project we studied the application of methods from dynamical systems learning and traditional regression to recover the behavior of RTI from the fully nonlinear to weakly nonlinear (wNL) regimes. Here we report on the two tested methods with which we had the most success: SINDy and a more traditional regression-based approach inspired by analytic wNL theory. We conclude with a discussion of potential future extensions to this work that may improve our understanding from both theoretical and phenomenological perspectives.
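As a minimal illustration of the SINDy approach mentioned above, the sketch below applies sequentially thresholded least squares (the sparse regression at the core of SINDy) to a generic saturating-velocity toy ODE. The equation, coefficients, and candidate library are hypothetical and are not the project's RTI model.

import numpy as np

def stlsq(Theta, dxdt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: repeatedly solve least squares
    over the candidate library and zero out small coefficients."""
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy data: a saturating-velocity ODE dv/dt = a - b*v**2 (hypothetical values)
a, b = 1.0, 2.0
t = np.linspace(0.0, 5.0, 2000)
v = np.sqrt(a / b) * np.tanh(np.sqrt(a * b) * t)   # analytic solution
dvdt = np.gradient(v, t)

# Candidate library: [1, v, v^2, v^3]
Theta = np.column_stack([np.ones_like(v), v, v**2, v**3])
print(stlsq(Theta, dvdt))   # expect roughly [1.0, 0, -2.0, 0]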
Frontiers in Materials
Uncertainty quantification (UQ) plays a major role in verification and validation for computational engineering models and simulations, and establishes trust in the predictive capability of computational models. In the materials science and engineering context, where the process-structure-property-performance linkage is well known to be the road map from manufacturing to engineering performance, numerous integrated computational materials engineering (ICME) models have been developed across a wide spectrum of length-scales and time-scales to relieve the burden of resource-intensive experiments. Within the structure-property linkage, crystal plasticity finite element method (CPFEM) models have been widely used because they are one of the few ICME tools that allow numerical predictions, providing the bridge from microstructure to materials properties and performance. Several constitutive models have been proposed over the last few decades to capture the mechanics and plasticity behavior of materials. While some UQ studies have been performed, the robustness and uncertainty of these constitutive models have not been rigorously established. In this work, we apply a stochastic collocation (SC) method, which is mathematically rigorous and has been widely used in the field of UQ, to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning) and dislocation-density-based constitutive models, for three different crystal structures: face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close-packed (hcp) magnesium (Mg). Our numerical results not only quantify the uncertainty of these constitutive models in the stress-strain response, but also analyze the global sensitivity of the underlying constitutive parameters with respect to the initial yield behavior, which may be helpful for future robust constitutive model calibration work.
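As a minimal illustration of the stochastic collocation idea, the sketch below propagates a single normally distributed constitutive parameter through a hypothetical yield-stress surrogate using Gauss-Hermite nodes. The surrogate form, parameter name, and numerical values are assumptions for illustration, not the CPFEM models studied here.

import numpy as np

def collocation_moments(model, mean, std, n_nodes=7):
    """Stochastic collocation with Gauss-Hermite nodes for one normally
    distributed input: evaluate the model at the quadrature nodes and combine
    with the weights to estimate the output mean and standard deviation."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)  # probabilists' Hermite
    w = weights / np.sqrt(2.0 * np.pi)                            # normalize to a PDF
    samples = mean + std * nodes
    y = np.array([model(s) for s in samples])
    mu = np.sum(w * y)
    var = np.sum(w * (y - mu) ** 2)
    return mu, np.sqrt(var)

# Hypothetical surrogate: initial yield stress vs. a slip-resistance parameter tau0 (MPa)
yield_surrogate = lambda tau0: 2.5 * tau0 + 0.01 * tau0 ** 2

mu, sigma = collocation_moments(yield_surrogate, mean=80.0, std=8.0)
print(f"yield stress: mean ~{mu:.1f} MPa, std ~{sigma:.1f} MPa")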