This project developed a novel statistical understanding of compression analytics (CA), which has challenged and clarified some core assumptions about CA and enabled the development of novel techniques that address vital national security challenges. Specifically, this project has yielded novel capabilities including (1) principled metrics for model selection in CA; (2) techniques for deriving and applying optimal classification rules and decision theory to supervised CA, including how to properly handle class imbalance and differing costs of misclassification (illustrated in the sketch following this abstract); (3) two techniques for handling nonlocal information in CA; (4) a novel technique for unsupervised CA that is agnostic with regard to the underlying compression algorithm; (5) a framework for semisupervised CA when a small number of labels are known in an otherwise large unlabeled dataset; and (6) through the academic alliance component of this project, a novel exemplar-based Bayesian technique for estimating variable-length Markov models (closely related to PPM [prediction by partial matching] compression techniques). We have developed examples illustrating the application of our work to text, video, genetic sequences, and unstructured cybersecurity log files.
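A minimal sketch of the cost-sensitive Bayes decision rule underlying item (2), assuming a generic posterior-probability classifier; the cost matrix and probabilities below are illustrative toy values, not the project's calibrated ones.

```python
import numpy as np

def decide(posteriors, cost):
    """Pick the class minimizing expected misclassification cost.

    posteriors: (n_samples, n_classes) class probabilities P(j | x)
    cost:       (n_actions, n_classes) cost[k, j] = cost of choosing k when truth is j
    """
    risk = posteriors @ cost.T          # expected cost of each action
    return np.argmin(risk, axis=1)

# Imbalanced two-class example: missing class 1 is 10x worse than a false alarm.
cost = np.array([[0.0, 10.0],   # choose 0: free if truth is 0, costly miss if truth is 1
                 [1.0,  0.0]])  # choose 1: false alarm costs 1
posteriors = np.array([[0.95, 0.05], [0.85, 0.15]])
print(decide(posteriors, cost))  # the 0.15 case is flagged despite its low probability
```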
Accurate event locations are important for many endeavors in seismology, and understanding the factors that contribute to uncertainties in those locations is complex. In this article, we present a case study that takes an in-depth look at the accuracy and precision possible for locating nine shallow earthquakes in the Rock Valley fault zone in southern Nevada. These events are targeted by the Rock Valley Direct Comparison phase of the Source Physics Experiment as candidates for the colocation of a chemical explosion with an earthquake hypocenter to directly compare earthquake and explosion sources. For this comparison, it is necessary to determine earthquake hypocenters as accurately as possible so that the different source types have nearly identical locations. Our investigations include uncertainty analysis from different sets of phase arrivals, stations, velocity models, and location algorithms. For a common set of phase arrivals and stations, we find that epicentral locations from different combinations of velocity models and algorithms are within 600 m of one another in most cases. Event depths exhibit greater uncertainties, but focusing on the S-P times at the nearest station allows for depth estimates within approximately 500 m (see the sketch below).
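A back-of-the-envelope version of the S-P depth constraint: for a station nearly above the hypocenter, the S-P time fixes the hypocentral distance. The velocities and time below are assumed placeholder values, not the study's velocity models or picks.

```python
vp, vs = 5.0, 2.9           # km/s, illustrative crustal P and S velocities
t_sp = 0.55                 # s, hypothetical S-P time at the nearest station

# t_sp = d * (1/vs - 1/vp)  =>  d = t_sp * vp * vs / (vp - vs)
d = t_sp * vp * vs / (vp - vs)
print(f"hypocentral distance ~ {d:.2f} km")  # ~ depth if the station is directly overhead
```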
As Machine Learning (ML) continues to advance, it is being integrated into more systems. Often, the ML component represents a significant portion of the system, reducing the burden on the end user or significantly improving task performance. However, the ML component represents a complex phenomenon that is learned from collected data rather than explicitly programmed. Despite the improvement in task performance, the models are often black boxes. Evaluating the credibility and vulnerabilities of ML models remains a gap in current test and evaluation practice. For high-consequence applications, the lack of testing and evaluation procedures represents a significant source of uncertainty and risk. To help reduce that risk, we present considerations for evaluating systems embedded with an ML component within a red-teaming-inspired methodology. We focus on (1) cyber vulnerabilities of an ML model, (2) evaluating performance gaps, and (3) adversarial ML vulnerabilities.
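A minimal sketch of the adversarial-ML vulnerability class in item (3), using the well-known fast-gradient-sign perturbation on a plain logistic model; the weights and input are toy values, and this is not the paper's evaluation procedure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([2.0, -1.0, 0.5]), 0.1   # toy "trained" model
x, y = np.array([0.4, 0.2, -0.3]), 1.0   # input correctly scored toward class 1

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w
x_adv = x + 0.25 * np.sign(grad_x)        # epsilon-bounded perturbation

print(f"score before: {p:.3f}, after: {sigmoid(w @ x_adv + b):.3f}")  # crosses 0.5
```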
The development of additively-manufactured (AM) 316L stainless steel (SS) using laser powder bed fusion (LPBF) has enabled near net shape components from a corrosion-resistant structural material. In this article, we present a multiscale study on the effects of processing parameters on the corrosion behavior of as-printed surfaces of AM 316L SS formed via LPBF. Laser power and scan speed of the LPBF process were varied across the instrument range known to produce parts with >99 % density, and the macroscale corrosion trends were interpreted via microscale and nanoscale measurements of porosity, roughness, microstructure, and chemistry. Porosity and roughness data showed that porosity φ decreased as volumetric energy density Ev increased due to a shift in the pore formation mechanism and that roughness Sq arose from melt track morphology and partially fused powder features. Cross-sectional and plan-view maps of chemistry and work function ϕs revealed an amorphous Mn-silicate phase enriched with Cr and Al that varied in both thickness and density depending on Ev. Finally, the macroscale potentiodynamic polarization experiments under full immersion in quiescent 0.6 M NaCl showed significant differences in breakdown potential Eb and metastable pitting. In general, samples with smaller φ and Sq values and larger ϕs values and homogeneity in the Mn-silicate exhibited larger Eb. The porosity and roughness effects stemmed from an increase in the overall number of initiation sites for pitting, and the oxide phase contributed to passive film breakdown by acting as a crevice former or creating a galvanic couple with the SS.
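For reference, the volumetric energy density commonly used to organize LPBF parameter sweeps is Ev = P / (v · h · t); the parameter values below are illustrative, not the settings used in this study.

```python
def volumetric_energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Return Ev in J/mm^3 from laser power, scan speed, hatch spacing, and layer thickness."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

print(volumetric_energy_density(200.0, 800.0, 0.1, 0.03))  # ~83 J/mm^3
```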
The United States Department of Energy’s (DOE) Office of Nuclear Energy’s Spent Fuel and Waste Science and Technology Campaign seeks to better understand the technical basis, risks, and uncertainty associated with the safe and secure disposition of spent nuclear fuel (SNF) and high-level radioactive waste. Commercial nuclear power generation in the United States has resulted in thousands of metric tons of SNF, the disposal of which is the responsibility of the DOE (Nuclear Waste Policy Act of 1982, as amended). Any repository licensed to dispose of SNF must meet requirements regarding the long-term performance of that repository. One event that may need to be considered in evaluating long-term repository performance is the SNF achieving a critical configuration during the postclosure period. Of particular interest is the potential behavior of SNF in dual-purpose canisters (DPCs), which are currently licensed and being used to store and transport SNF but were not designed for permanent geologic disposal.
The benefits of high-performance unidirectional carbon fiber composites are limited in many cost-driven industries due to the high cost relative to alternative reinforcement fibers. Low-cost carbon fibers have been previously proposed, but the longitudinal compressive strength continues to be a limiting factor, or studies are based on simplifications that warrant further analysis. A micromechanical model is used to (1) determine if the longitudinal compressive strength of composites can be improved with noncircular carbon fiber shapes and (2) characterize why some shapes are stronger than others in compression. In comparison to circular fibers, the results suggest that the strength can be increased by 10%–13% by using a specific six-lobe fiber shape and by 6%–9% for a three-lobe fiber shape. A slight increase is predicted in the compressive strength of the two-lobe fiber studied, but this case has the highest uncertainty and sensitivity to fiber orientation and misalignment direction. The underlying mechanism governing the compressive failure of the composites was linked to the unique stress fields created by the lobes, particularly the pressure stress in the matrix. This work provides mechanics-based evidence of strength improvements from noncircular fiber shapes and insight into how matrix yielding is altered with alternative fiber shapes.
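The connection between matrix yielding, misalignment, and compressive strength can be seen in the classical Budiansky-style kink-band estimate, sketched below with assumed material values; this is only a textbook baseline, not the micromechanical model used in the study.

```python
import numpy as np

def kink_strength(tau_y, phi0_deg, gamma_y):
    """Budiansky-style kink-band estimate: sigma_c = tau_y / (phi0 + gamma_y)."""
    return tau_y / (np.radians(phi0_deg) + gamma_y)

# Assumed matrix shear yield strength and yield strain; misalignment swept in degrees.
for phi0 in (1.0, 2.0, 3.0):
    print(f"phi0 = {phi0} deg -> sigma_c ~ {kink_strength(60e6, phi0, 0.01) / 1e9:.2f} GPa")
```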
Characterizing interface trap states in commercial wide bandgap devices using frequency-based measurements requires unconventionally high probing frequencies to account for both the fast and slow traps associated with wide bandgap materials. The C–ψS technique has been suggested as a viable quasi-static method for determining interface trap state densities in wide bandgap systems, but the results are shown to be susceptible to errors in the analysis procedure. This work explores the primary sources of error present in the C–ψS technique using an analytical model that describes the apparent response of wide bandgap MOS capacitor devices. Measurement noise is shown to greatly impact the linear fitting routine of the 1/CS*² vs ψS plot used to calibrate the additive constant in the surface potential/gate voltage relationship, and inexact knowledge of the oxide capacitance is also shown to impede interface trap state analysis near the band edge. In addition, a slight nonlinearity that is typically present throughout the 1/CS*² vs ψS plot hinders the accurate estimation of interface trap densities, which is demonstrated for a fabricated n-SiC MOS capacitor device. Methods are suggested to improve quasi-static analysis, including a novel method to determine an approximate integration constant without relying on a linear fitting routine.
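A hedged sketch of the standard quasi-static workflow this abstract critiques: Berglund integration of C(Vg) to obtain ψS up to an additive constant, followed by a Castagné-Vapaille-style extraction Dit = (1/q)[Cox·C/(Cox − C) − Cs]. The capacitance data and the ideal Cs(ψS) model below are synthetic placeholders, not measurements, and this is not the paper's improved method.

```python
import numpy as np

q = 1.602e-19             # C
cox = 3.45e-7             # F/cm^2, assumed oxide capacitance
vg = np.linspace(-2, 6, 200)
c = cox * (0.55 + 0.35 * np.tanh(0.8 * (vg - 2.0)))   # synthetic measured C(Vg)

# Berglund integral: psi_s(Vg) = integral of (1 - C/Cox) dVg + additive constant.
integrand = 1 - c / cox
psi_s = np.concatenate(([0.0],
                        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(vg))))

cs_theory = 0.6 * cox * np.exp(-0.5 * psi_s)   # placeholder ideal semiconductor Cs(psi_s)
dit = (cox * c / (cox - c) - cs_theory) / q    # trap density, states / (cm^2 eV)
print(dit[:3])
```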
Dingreville, Remi; Startt, Jacob K.; Elmslie, Timothy A.; Yang, Yang; Soto-Medina, Sujeily; Zappala, Emma; Meisel, Mark W.; Manuel, Michele V.; Frandsen, Benjamin A.; Hamlin, James J.
Magnetic properties of more than 20 Cantor alloy samples of varying composition were investigated over a temperature range of 5 K to 300 K and in fields of up to 70 kOe using magnetometry and muon spin relaxation. Two transitions are identified: a spin-glass-like transition that appears between 55 K and 190 K, depending on composition, and a ferrimagnetic transition that occurs at approximately 43 K in multiple samples with widely varying compositions. The magnetic signatures at 43 K are remarkably insensitive to chemical composition. A modified Curie-Weiss model was used to fit the susceptibility data and to extract the net effective magnetic moment for each sample. The resulting net effective moments were diminished with increasing Cr or Mn concentration and enhanced with decreasing Fe, Co, or Ni concentration. Beyond a sufficiently large effective moment, the magnetic ground state transitions from ferrimagnetism to ferromagnetism. The effective magnetic moments, together with the corresponding compositions, are used in a global linear regression analysis to extract element-specific effective magnetic moments, which are compared to values obtained from ab initio density functional theory calculations. Finally, these moments provide the information necessary to controllably tune the magnetic properties of Cantor alloy variants.
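A minimal modified Curie-Weiss fit of the kind described above; the susceptibility data here are synthetic, and the CGS conversion mu_eff = sqrt(8C) assumes C in emu·K/(mol·Oe).

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_cw(T, chi0, C, theta):
    """Modified Curie-Weiss law: chi(T) = chi0 + C / (T - theta)."""
    return chi0 + C / (T - theta)

# Synthetic susceptibility in the paramagnetic regime, well above theta.
T = np.linspace(100, 300, 60)
chi = modified_cw(T, 2e-4, 1.8, 40.0) + np.random.default_rng(1).normal(0, 1e-5, T.size)

(chi0, C, theta), _ = curve_fit(modified_cw, T, chi, p0=(1e-4, 1.0, 0.0))
print(f"mu_eff ~ {np.sqrt(8 * C):.2f} mu_B, theta = {theta:.1f} K")
```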
Systems engineering today faces a wide array of challenges, ranging from new operational environments to disruptive technologies, necessitating approaches to improve research and development (R&D) efforts. Emphasizing the Aristotelian argument that the “whole is greater than the sum of its parts” offers a conceptual foundation for creating new R&D solutions. Invoking the systems-theoretic concepts of emergence and hierarchy and the analytic characteristics of traceability, rigor, and comprehensiveness is potentially beneficial for guiding R&D strategy and development to bridge the gap between theoretical problem spaces and engineering-based solutions. In response, this article describes systems–theoretic process analysis (STPA) as an example of one such approach to aid early-systems R&D discussions. STPA—a ‘top-down’ process that abstracts real complex system operations into hierarchical control structures, functional control loops, and control actions—uses control loop logic to analyze how control actions (designed for desired system behaviors) may become violated and drive the complex system toward states of higher risk. By analyzing how needed controls are not provided (or are out of sequence or stopped too soon) and unneeded controls are provided (or engaged too long), STPA can help early-system R&D discussions by exploring how requirements and desired actions interact to either mitigate or potentially increase states of risk that can lead to unacceptable losses. This article demonstrates STPA's benefit for early-system R&D strategy and development discussions by describing such diverse use cases as cybersecurity, nuclear fuel transportation, and US electric grid performance. Together, the traceability, rigor, and comprehensiveness of STPA serve as useful tools for improving R&D strategy and development discussions. In conclusion, leveraging STPA and related systems engineering techniques can help early R&D planning and strategy development to better triangulate deeper theoretical meaning or evaluate empirical results, thereby better informing systems engineering solutions.
For reactive burn models in hydrocodes, an equilibrium closure assumption is typically made between the unreacted and product equations of state. In the CTH [1] (not an acronym) hydrocode, the assumption of density and temperature equilibrium is made by default, while other codes make a pressure and temperature equilibrium assumption. The main reason for this difference is the computational efficiency of the density-temperature assumption over the pressure-temperature one. With fitting to data, both assumptions can accurately predict reactive flow response using the various models, but the model parameters from one code cannot necessarily be used directly in a different code with a different closure assumption. A new framework is introduced in CTH to allow this assumption to be changed independently for each reactive material. Comparisons of the response and computational cost of the History Variable Reactive Burn (HVRB) reactive flow model with the different equilibrium assumptions are presented.
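To make the closure distinction concrete, here is a hedged sketch of a pressure-temperature equilibrium solve for a two-material mixed cell with stiffened-gas equations of state; it illustrates why P-T closure generally requires a nonlinear solve (and hence costs more than the density-temperature choice). All parameters are illustrative, and this is not CTH's actual implementation.

```python
import numpy as np
from scipy.optimize import brentq

# Stiffened-gas EOS per material: p = (gamma-1)*rho*cv*T - pi, with e = cv*T + pi*v.
mats = [dict(gamma=3.0, cv=1.0, pi=2.0),   # "unreacted" material (illustrative)
        dict(gamma=1.4, cv=2.0, pi=0.0)]   # "product" material (illustrative)
w = np.array([0.5, 0.5])                   # mass fractions (i.e., the reacted fraction)
v_cell, e_cell = 1.0, 5.0                  # mixed-cell specific volume and energy

def pressures(f):
    """Given volume fraction f of material 0, impose T1 = T2 and return (p1, p2)."""
    v = np.array([f * v_cell / w[0], (1 - f) * v_cell / w[1]])
    cvs = np.array([m["cv"] for m in mats])
    pis = np.array([m["pi"] for m in mats])
    gammas = np.array([m["gamma"] for m in mats])
    T = (e_cell - np.sum(w * pis * v)) / np.sum(w * cvs)   # common T from energy balance
    return (gammas - 1) * cvs * T / v - pis

# Root-find the volume partition that equalizes the two pressures.
f_eq = brentq(lambda f: np.subtract(*pressures(f)), 1e-3, 1 - 1e-3)
print(f"volume fraction = {f_eq:.4f}, pressures = {pressures(f_eq)}")
```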
A new capability for modeling graded density reactive flow materials in the shock physics hydrocode, CTH, is demonstrated here. Previously, materials could be inserted in CTH with graded material properties, but the sensitivity of the material was not adjusted based on these properties. Of particular interest are materials that are graded in density, sometimes due to pressing or other assembly operations. The sensitivity of explosives to both density and temperature has been well demonstrated in the literature, but to date, the material parameters for use in a simulation were fit to a single condition and applied to the entire material, or the material had to be inserted in sections with each section assigned a condition. The reactive flow model xHVRB has been extended to shift explosive sensitivity with initial density, so that sensitivity is also graded in the material. This capability is demonstrated in three examples. The first models detonation transfer in a graded density pellet of HNS, the second is a shaped charge with density gradients in the explosive, and the third is an explosively formed projectile.
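The actual xHVRB parameterization is not reproduced here; the sketch below only shows the bookkeeping idea of assigning each cell a sensitivity parameter interpolated from its local initial density, with hypothetical anchor values.

```python
import numpy as np

# Hypothetical calibration anchors: burn-rate coefficients at two pressed densities.
rho_anchor = np.array([1.40, 1.60])          # g/cc
log_k_anchor = np.log(np.array([0.8, 2.5]))  # illustrative rate constants

def rate_coefficient(rho0):
    """Log-linear interpolation of a burn-rate coefficient vs initial density."""
    return np.exp(np.interp(rho0, rho_anchor, log_k_anchor))

rho_field = np.linspace(1.40, 1.60, 5)       # graded-density pellet, cell by cell
print(rate_coefficient(rho_field))           # sensitivity now varies with the gradient
```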
Maximizing the production of heterologous biomolecules is a complex problem that can be addressed with a systems-level understanding of cellular metabolism and regulation. Specifically, growth-coupling approaches can increase product titers and yields and also enhance production rates. However, implementing these methods for non-canonical carbon streams is challenging due to gaps in metabolic models. Over four design-build-test-learn cycles, we rewire Pseudomonas putida KT2440 for growth-coupled production of indigoidine from para-coumarate. We explore 4,114 potential growth-coupling solutions and refine one design through laboratory evolution and ensemble data-driven methods. The final growth-coupled strain produces 7.3 g/L indigoidine at 77% maximum theoretical yield in para-coumarate minimal medium. The iterative use of growth-coupling designs and functional genomics with experimental validation was highly effective and agnostic to specific hosts, carbon streams, and final products and thus generalizable across many systems.
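A hedged sketch of how a growth-coupling candidate might be screened with COBRApy; the model file, knockout IDs, and exchange-reaction IDs below are hypothetical placeholders, not the identifiers used in the study.

```python
import cobra
from cobra.flux_analysis import production_envelope

model = cobra.io.read_sbml_model("p_putida_KT2440.xml")   # hypothetical model path

# Apply a candidate knockout set, then check whether product export is forced
# to rise with growth (the signature of a growth-coupled design).
for rxn_id in ["RXN_A", "RXN_B"]:                         # hypothetical knockouts
    model.reactions.get_by_id(rxn_id).knock_out()

env = production_envelope(model, ["EX_indigoidine_e"],    # hypothetical exchange ID
                          carbon_sources="EX_coumarate_e")
# Coupling holds if the minimum product flux stays above zero at maximum growth.
print(env[["flux_minimum", "flux_maximum"]].tail())
```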
Process variations within Field Programmable Gate Arrays (FPGAs) provide a rich source of entropy and are therefore well-suited for the implementation of Physical Unclonable Functions (PUFs). However, careful consideration must be given to the design of the PUF architecture as a means of avoiding undesirable localized bias effects that adversely impact randomness, an important statistical quality characteristic of a PUF. In this paper, we investigate a ring-oscillator (RO) PUF that leverages localized entropy from individual look-up table (LUT) primitives. A novel RO construction is presented that enables the individual paths through the LUT primitive to be measured and isolated at high precision, and an analysis is presented that demonstrates significant levels of localized design bias. The analysis demonstrates that delay-based PUFs that utilize LUTs as a source of entropy should avoid using FPGA primitives that are localized to specific regions of the FPGA; instead, a more robust PUF architecture can be constructed by distributing path delay components over a wider region of the FPGA fabric. Compact RO PUF architectures that utilize multiple configurations within a small group of LUTs are particularly susceptible to these types of design-level bias effects. The analysis is carried out on data collected from a set of identically designed, hard macro instantiations of the RO implemented on 30 copies of a Zynq 7010 SoC.
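A small sketch of the statistical quality checks implied above: uniformity (bias) and inter-device uniqueness via Hamming distances on PUF response bits. The response matrix here is random stand-in data, not measured LUT delays.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
responses = rng.integers(0, 2, size=(30, 256))   # 30 devices x 256 response bits

uniformity = responses.mean(axis=1)               # fraction of ones; ideal ~0.5 per device
inter_hd = [np.mean(a != b) for a, b in combinations(responses, 2)]

print(f"mean uniformity: {uniformity.mean():.3f}")        # bias check
print(f"mean inter-device HD: {np.mean(inter_hd):.3f}")   # uniqueness; ideal ~0.5
```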
Organic co-crystals have emerged as a promising class of semiconductors for next-generation optoelectronic devices due to their unique photophysical properties. This paper presents a joint experimental-theoretical study comparing the crystal structure, spectroscopy, and electronic structure of two charge transfer co-crystals. Reported herein is a novel co-crystal, Npe:TCNQ, formed from 4-(1-naphthylvinyl)pyridine (Npe) and 7,7,8,8-tetracyanoquinodimethane (TCNQ) via molecular self-assembly. This work also presents a revised study of the co-crystal composed of Npe and 1,2,4,5-tetracyanobenzene (TCNB) molecules, Npe:TCNB, herein reported with a higher-symmetry (monoclinic) crystal structure than previously published. Npe:TCNB and Npe:TCNQ dimer clusters are used as theoretical model systems for the co-crystals; the geometries of the dimers are compared to geometries of the extended solids, which are computed with density functional theory under periodic boundary conditions. UV-Vis absorption spectra of the dimers are computed with time-dependent density functional theory and compared to experimental UV-Vis diffuse reflectance spectra. Both Npe:TCNB and Npe:TCNQ are found to exhibit neutral character in the S0 state and ionic character in the S1 state. The high degree of charge transfer in the S1 state of both Npe:TCNB and Npe:TCNQ is rationalized by analyzing the changes in orbital localization associated with the S1 transitions.
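A hedged sketch of the dimer-cluster workflow: a TDA-TDDFT excitation calculation with PySCF. Formaldehyde stands in for the Npe:TCNB/TCNQ dimers, whose geometries are not reproduced here; the basis set and functional are assumptions, not necessarily those of the study.

```python
from pyscf import gto, dft, tddft

# Placeholder molecule (formaldehyde); a real run would use the dimer geometry.
mol = gto.M(atom="O 0 0 0; C 0 0 1.21; H 0 0.94 1.80; H 0 -0.94 1.80",
            basis="6-31g*")
mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()

td = tddft.TDA(mf)          # Tamm-Dancoff approximation for the lowest excited states
td.nstates = 3
td.kernel()
print(td.e * 27.2114)       # S0 -> Sn excitation energies in eV
```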
Accelerators that drive z-pinch experiments transport current densities in excess of 1 MA/cm2 in order to melt or ionize the target and implode it on axis. These high current densities stress the transmission lines upstream from the target, where rapid electrode heating causes plasma formation, melt, and possibly vaporization. These plasmas negatively impact accelerator efficiency by diverting some portion of the current away from the target, referred to as “current loss”. Simulations that are able to reproduce this behavior may be applied to improving the efficiency of existing accelerators and to designing systems operating at ever higher current densities. The relativistic particle-in-cell code CHICAGO® is the primary code for modeling power flow on Sandia National Laboratories’ Z accelerator. We report here on new algorithms that incorporate vaporization and melt into the standard power-flow simulation framework. Taking a hybrid approach, the CHICAGO® kinetic/multi-fluid treatment has been expanded to include vaporization, while the quasi-neutral equation-of-motion treatment has been updated for melt at high current densities. For vaporization, a new one-dimensional substrate model provides a more accurate calculation of electrode thermal, mass, and magnetic field diffusion as well as a means of emitting absorbed contaminants and vaporized metal ions. A quasi-fluid model has been implemented expressly to mimic the motion of imploding liners for accurate inductance histories. For melt, a multi-ion Hall-MHD option has been implemented and benchmarked against Alegra MHD. This new model is described with sufficient detail to reproduce these algorithms in any hybrid kinetic code. Physics results from the new code are also presented. A CHICAGO® Hall-MHD simulation of a radial transmission line demonstrates that Hall physics, not included in Alegra, has no significant impact on the diffusion of electrode material. When surface contaminant desorption is approximated as a hydrogen surface plasma, both the surface and bulk-material plasmas largely compress under the influence of the j × B force. Similar results are seen in Alegra, which also shows magnetic and material diffusion scaling with peak current. Test vaporization simulations using MagLIF and a power-flow experimental geometry show Fe+ ions diffuse only a few hundred µm from the electrodes, so present models of Z power flow remain valid.
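A hedged one-dimensional sketch of the magnetic-diffusion piece of a substrate model: explicit finite differences for dB/dt = (η/μ0) d²B/dx² with a ramped surface field. The resistivity and drive values are illustrative only, and this is not the CHICAGO® substrate model itself.

```python
import numpy as np

mu0, eta = 4e-7 * np.pi, 2e-8        # H/m; ohm-m, cold-copper-like resistivity
D = eta / mu0                         # magnetic diffusivity, m^2/s
nx, dx = 200, 1e-6                    # 200 um of electrode, 1 um cells
dt = 0.4 * dx**2 / (2 * D)            # explicit time step with a stability margin

B = np.zeros(nx)                      # field-free substrate; deep boundary held at 0
t, t_peak, B_peak = 0.0, 100e-9, 1000.0   # ~kT surface field over a 100 ns rise
while t < t_peak:
    B[0] = B_peak * t / t_peak        # ramped drive at the vacuum-electrode interface
    B[1:-1] += D * dt / dx**2 * (B[2:] - 2 * B[1:-1] + B[:-2])
    t += dt

skin = dx * np.argmax(B < 0.01 * B[0])   # depth where field falls below 1% of surface
print(f"field penetrates ~ {skin * 1e6:.1f} um in {t_peak * 1e9:.0f} ns")
```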
Predictive modeling typically relies on Bayesian model calibration to provide uncertainty quantification. Variational inference often utilizes fully independent (“mean-field”) Gaussian distributions as approximate probability density functions. This simplification is attractive since the number of variational parameters grows only linearly with the number of unknown model parameters. However, the resulting diagonal covariance structure and unimodal behavior can be too restrictive to provide useful approximations of intractable Bayesian posteriors that exhibit highly non-Gaussian behavior, including multimodality. High-fidelity surrogate posteriors for these problems can be obtained by considering the family of Gaussian mixtures. Gaussian mixtures are capable of capturing multiple modes and approximating any distribution to an arbitrary degree of accuracy, while maintaining some analytical tractability. Unfortunately, variational inference using Gaussian mixtures with full covariance structures suffers from a quadratic growth in variational parameters with the number of model parameters. The existence of multiple local minima, due to strong nonconvex trends in the loss functions often associated with variational inference, presents additional complications. These challenges motivate the need for robust initialization procedures to improve the performance and computational scalability of variational inference with mixture models. In this work, we propose a method for constructing an initial Gaussian mixture model approximation that can be used to warm-start the iterative solvers for variational inference. The procedure begins with a global optimization stage in model parameter space. In this step, local gradient-based optimization, globalized through multistart, is used to determine a set of local maxima, which we take to approximate the mixture component centers. Around each mode, a local Gaussian approximation is constructed via the Laplace approximation. Finally, the mixture weights are determined through constrained least squares regression. The robustness and scalability of the proposed methodology are demonstrated through application to an ensemble of synthetic tests using high-dimensional, multimodal probability density functions. The practical aspects of the approach are then demonstrated with inversion problems in structural dynamics.
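A minimal sketch of the three-stage warm-start construction described above, applied to a synthetic one-dimensional bimodal target; nonnegative least squares stands in for the constrained least-squares weight fit, and all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, nnls
from scipy.stats import norm

def log_target(x):
    # Synthetic multimodal "posterior": a mixture of two Gaussians.
    return np.log(0.3 * norm.pdf(x, -2.0, 0.5) + 0.7 * norm.pdf(x, 3.0, 1.0))

# Stage 1: multistart gradient-based optimization to locate the modes.
starts = np.random.default_rng(0).uniform(-6, 6, size=20)
modes = []
for x0 in starts:
    res = minimize(lambda x: -log_target(x[0]), [x0])
    if res.success and not any(abs(res.x[0] - m) < 1e-2 for m in modes):
        modes.append(res.x[0])

# Stage 2: Laplace approximation at each mode (variance = inverse Hessian of -log p).
h = 1e-4
covs = []
for m in modes:
    hess = -(log_target(m + h) - 2 * log_target(m) + log_target(m - h)) / h**2
    covs.append(1.0 / hess)

# Stage 3: mixture weights by nonnegative least squares on a grid, then normalized.
grid = np.linspace(-6, 6, 400)
Phi = np.column_stack([norm.pdf(grid, m, np.sqrt(c)) for m, c in zip(modes, covs)])
w, _ = nnls(Phi, np.exp(log_target(grid)))
w /= w.sum()
print("modes:", modes, "weights:", w)   # warm start for the VI solver
```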
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user’s guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
Work evaluating spent nuclear fuel (SNF) dry storage canister surface environments and canister corrosion progressed significantly in FY23, with the goal of developing a scientific understanding of the processes controlling the initiation and growth of stress corrosion cracking (SCC) in stainless steel canisters in relevant storage environments. The results of the work performed at Sandia National Laboratories (SNL) will guide future work and will contribute to the development of better tools for predicting potential canister penetration by SCC.
For the first time, the optimal local truncation error method (OLTEM) with 125-point stencils and unfitted Cartesian meshes has been developed in the general 3-D case for the Poisson equation for heterogeneous materials with smooth irregular interfaces. The 125-point stencil equations, which are similar to those for quadratic finite elements, are used for OLTEM. The interface conditions for OLTEM are imposed as constraints at a small number of interface points and do not require the introduction of additional unknowns, i.e., the sparse structure of the global discrete equations of OLTEM is the same for homogeneous and heterogeneous materials. The stencil coefficients of OLTEM are calculated by the minimization of the local truncation error of the stencil equations. These derivations include the use of the Poisson equation for the relationship between the different spatial derivatives. Such a procedure provides the maximum possible accuracy of the discrete equations of OLTEM. In contrast to known numerical techniques with quadratic elements and third order of accuracy on conforming and unfitted meshes, OLTEM with the 125-point stencils provides 11th order of accuracy, i.e., an increase in accuracy of eight orders for similar stencils. The numerical results show that OLTEM yields much more accurate results than high-order finite elements with much wider stencils. The increased numerical accuracy of OLTEM leads to an extremely large increase in computational efficiency. Additionally, a new post-processing procedure with the 125-point stencil has been developed for the calculation of the spatial derivatives of the primary function. The post-processing procedure includes the minimization of the local truncation error and the use of the Poisson equation. It is demonstrated that the use of the partial differential equation (PDE) for the 125-point stencils improves the accuracy of the spatial derivatives by six orders compared to post-processing without the use of the PDE as in existing numerical techniques. At an accuracy of 0.1% for the spatial derivatives, OLTEM reduces the number of degrees of freedom by 900 to 4×10⁶ times compared to quadratic finite elements. The developed post-processing procedure can be easily extended to unstructured meshes and can be independently used with existing post-processing techniques (e.g., with finite elements).
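A one-dimensional illustration of the idea behind minimizing local truncation error: choose stencil weights that annihilate as many Taylor-series terms as possible. OLTEM's actual 3-D, 125-point, PDE-constrained construction is far richer; this sketch only derives classical high-order weights for d²u/dx².

```python
import numpy as np
from math import factorial

offsets = np.array([-2, -1, 0, 1, 2], dtype=float)   # 5-point stencil, spacing h

# Require sum_j w_j * s_j^k / k! = delta_{k,2} for k = 0..4 (match Taylor terms),
# so the stencil reproduces u'' * h^2 with the leading truncation terms removed.
A = np.array([[s**k / factorial(k) for s in offsets] for k in range(5)])
b = np.zeros(5)
b[2] = 1.0
w = np.linalg.solve(A, b)
print(w)   # [-1/12, 4/3, -5/2, 4/3, -1/12]: 4th-order d2/dx2 weights (times 1/h^2)
```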