The Integrated Tiger Series (ITS) generates a database containing energy deposition data. This data, even when stored in an Exodus file, is not typically in a form suitable for finite element analysis within Sierra Mechanics. The its2sierra tool maps data from the ITS database to the Sierra database. This document provides information on the usage of its2sierra.
The effective management of plastic waste streams to prevent plastic pollution of land and water is a growing problem and one of the most important challenges in polymer science today. Polymer materials that are stable over their lifetime and can also be cheaply recycled or repurposed as desired could more easily be diverted from waste streams. However, this is difficult for most commodity plastics, and it is especially difficult to envision for intractable, cross-linked polymers such as rubbers. In this work, we explore the utility of microencapsulated Grubbs’ catalysts for the in-situ depolymerization and reprocessing of polybutadiene (PB) rubber. Second-generation Hoveyda-Grubbs catalyst (HG2) contained within glassy thermoplastic microspheres can be dispersed in PB rubber below the microsphere’s glass transition temperature (Tg) without adverse depolymerization, evidenced by rubber with and without these microspheres attaining comparable shear storage moduli of ≈16 and ≈28 kPa, respectively. The thermoplastic’s Tg can be used to tune the depolymerization temperature via release of HG2 into the rubber matrix. For example, poly(lactic acid) (PLA) and polysulfone encapsulants yield depolymerization temperatures of 85 and 162 °C, respectively. Liquefaction of rubber to a mixture of small molecules and oligomers is demonstrated at a 0.01 mol % catalyst loading with PLA as the encapsulant. In-situ depolymerization also proceeds to roughly twice the extent of two ex-situ approaches, including a conventional solvent-assisted method, at each given catalyst loading. In addition, depolymerization of the microsphere-loaded rubbers was demonstrated for samples stored under nitrogen for 23 days. Lastly, we show that the depolymerized products can be reprocessed back into solid rubber with a shear storage modulus of ≈32 kPa. Thus, we envision that this approach could be used to recycle and reuse cross-linked rubbers at the end of their product lifetime.
This report provides documentation for the Sandia Toolkit (STK) modules. STK modules are intended to provide infrastructure that assists the development of computational engineering software such as finite-element analysis applications. STK includes modules for unstructured-mesh data structures, reading/writing mesh files, geometric proximity search, and various utilities. This document contains a chapter for each module, and each chapter contains overview descriptions and usage examples. Usage examples are primarily code listings which are generated from working test programs that are included in the STK code-base. A goal of this approach is to ensure that the usage examples will not fall out of date.
A digital twin has intelligent modules that continuously monitor the condition of the individual components and the whole of a system. Digital twins can provide nuclear power plant (NPP) operators an unprecedented level of monitoring, control, supervision, and security by contributing a greater volume of data for more comprehensive data analysis and increased accuracy of insights and predictions for decision making throughout the entire NPP lifecycle. NPP operators and managers have historically relied on limited, secondhand, or incomplete data. With proper implementation, digital twins can provide a central hub of information that allows for a multidisciplinary view of an NPP. This equips operators and managers with more information and context for greater granularity during planning and decision making. Digital twins can support many activities, and several distinct concepts fall under the term. Across the various industry definitions, digital twins can be differentiated by their level of integration and automation, with three main models: the digital model, the digital shadow, and the digital twin. Digital twins offer many potential advancements to the nuclear industry that could reduce costs, improve designs, provide safer operation, and improve overall security.
This project has identified opportunities to bring further reductions in the mass and cost of modern wind turbine blades through the use of alternative material systems and manufacturing processes. The fiber reinforced polymer material systems currently used by the wind industry have stagnated as the technology continues to mature and as a means of reducing risk while introducing new products with continually increasing blade lengths. However, as blade lengths continue to increase, the challenge of controlling blade mass becomes even more critical to enabling the associated levelized cost of energy reductions. Stiffer and stronger reinforcement fibers can help to resolve the challenges of meeting the loading demands while limiting the increase in weight, but these materials are substantially more expensive than the traditional E-glass fiber systems. One goal of this project and associated work is to identify pathways that improve the cost-effectiveness of carbon fiber such that it is the reinforcement of choice in the primary structural elements of wind blades. The use of heavy-tow textile carbon fiber material systems has been shown to reduce the blade mass by 30-31% when used in the spar cap and by up to 7% when used in edgewise reinforcement. A pultrusion cost model was developed to enable a material cost comparison that includes an accurate estimate of the intermediate manufacturing step of pultrusion for the carbon fiber composite. Material cost reductions were found in most cases for the heavy-tow textile carbon fiber compared to infused fiberglass. The use of carbon fiber in the edgewise reinforcement produced the most notable material cost reduction, 33% for the heavy-tow textile carbon fiber. The mass and cost savings observed when using carbon fiber in edgewise reinforcement demonstrate a clear opportunity for this design approach. A carbon fiber conversion cost model was expanded to include a characterization of manufacturing costs when using advanced conversion processes with atmospheric plasma oxidation. This manufacturing approach was estimated to reduce the cost of carbon fiber material systems by greater than 10% and can be used with textile carbon systems or traditional carbon fiber precursors. The pultrusion cost model was also used to assess the opportunity for using pultruded fiberglass in wind blades, studying conventional E-glass fiber reinforcement. When using pultruded fiberglass as the spar cap material for two design classifications, the blade weight was reduced by 6% and 9% compared to infused fiberglass. However, due to the relatively large share of the pultrusion manufacturing cost compared to fiber cost, the spar cap material cost increased by 12% and 7%. When considering the system benefits of reduced blade mass and potentially lower blade manufacturing costs for pultruded composites, there may be opportunity for pultruded E-glass in wind blade spar caps, but further studies are needed. There is a clearer outcome for using pultruded fiberglass in the edgewise reinforcement, where it resulted in a blade mass reduction of 2% and an associated reinforcement material cost reduction of 1% compared to infused E-glass. The use of higher performing glass fibers, such as S-glass and H-glass systems, will produce greater mass savings, but a study is needed to assess the cost implications for these more expensive systems.
The most likely opportunity for these high-performance glass fibers is in the edgewise reinforcement, where the increased strength will reduce the damage accumulation of this fatigue-driven component. The blade design assessments in this project characterize the controlling material properties for the primary structural components in the flapwise and edgewise directions for modern wind blades. The observed trends with low and high wind speed turbine classifications for carbon and glass fiber reinforced polymer systems help to identify where cost reductions are needed, and where improvements in mechanical properties would help to reduce the material demands.
Recent events raise significant concern about the resilience of the U.S. electric grid and highlight the need for enhanced decision-making to address an increasingly wide range of complex system interactions and potential consequences. In response, this LDRD project produced a proof-of-concept evaluation called the Resilience and Hazard Assessment to Prioritize Security Operations for Decisions and Impacts (RHAPSODI) methodology as an agile and flexible analytic framework capable of addressing multiple, diverse threats to desired electric grid performance. After empirically grounding needs for the future of U.S. electric grid resilience, this project employed systems-theoretic process analysis (STPA) to develop a systems engineering risk model. The results of a completed feasibility study of a notional high voltage transmission system demonstrate an improved ability to incorporate both spatial (e.g., geographically distributed) and temporal (e.g., dynamic and time-dependent) elements of security risk to the grid. The success of this LDRD project provides the foundation for further evolution of the systems engineering risk model for the grid; derivation of quantitative approaches to evaluate risk and resilience performance; facilitation of agile experimentation on grid sensitivity to a range of vulnerabilities; and development of tools to assist decision-makers in enhancing U.S. electric grid resilience.
In this report, we investigate the effects of conductor losses in a multipole-based cable braid magnetic penetration model. Our multipole model uses a mesh of the actual cable geometry, which enables us to model more complicated structures. After summarizing the first principles model formulation, we consider a one-dimensional array of wires, for which an analytical solution is known in the lossless case. We extend this solution to the lossy case by using a complex-valued radius. We also model this structure analytically using a conformal-mapping solution. We then compare both the self-impedance and the transfer impedance results from our first principles cable braid electromagnetic penetration model to those obtained using the analytical solutions. An analysis for various frequencies (and skin depths) usually encountered in cable modeling is reported. The results are found to be in good agreement up to a radius to half-spacing ratio of about 0.7, demonstrating the robustness needed for many commercial and non-commercial cables.
Nonlinear modeling and optimization is a valuable tool for aiding decisions by engineering practitioners, but programming an optimization problem based on a complex electrical, mechanical, or chemical process is a time-consuming and error-prone activity. Therefore, there is a need for model analysis and debugging tools that can detect and diagnose modeling errors. One such tool is the Dulmage–Mendelsohn decomposition, which identifies structurally under- and over-determined subsets in systems of equations and variables by partitioning the bipartite graph of the system. This work provides the necessary background to understand the Dulmage–Mendelsohn decomposition and its application to the analysis of nonlinear optimization problems, demonstrates its use in diagnosing a variety of modeling errors, and introduces software implementations for analyzing nonlinear optimization problems in the Pyomo and JuMP algebraic modeling languages.
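As a concrete illustration, the sketch below builds a deliberately over-determined toy model and queries its Dulmage–Mendelsohn partition through Pyomo's pyomo.contrib.incidence_analysis module (a minimal sketch; the toy model is ours, and the partition attribute names reflect our reading of the current Pyomo API):

    import pyomo.environ as pyo
    from pyomo.contrib.incidence_analysis import IncidenceGraphInterface

    # Toy model with a structurally over-determined subsystem:
    # x appears in two conflicting equations, so {c1, c2} over-determine {x}.
    m = pyo.ConcreteModel()
    m.x = pyo.Var()
    m.y = pyo.Var()
    m.c1 = pyo.Constraint(expr=m.x == 1)
    m.c2 = pyo.Constraint(expr=m.x == 2)
    m.c3 = pyo.Constraint(expr=m.x + m.y == 3)

    igraph = IncidenceGraphInterface(m)
    var_part, con_part = igraph.dulmage_mendelsohn()

    # Variables and constraints in the over-determined block
    print([v.name for v in var_part.overconstrained])   # expect ['x']
    print([c.name for c in con_part.overconstrained])
    # An unmatched constraint is a certificate of structural singularity
    print([c.name for c in con_part.unmatched])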
Accurate targeting of radioisotope classifiers and estimators requires an understanding of the target problem space. This questionnaire was created to facilitate clear communication between practitioners and stakeholders about expected model behavior and performance on the stakeholders' problems. Stakeholder responses form the basis of a trained model as well as the start of usage requirements for the model as it is integrated with analysis processes or detection systems. This questionnaire may also be useful to machine learning practitioners and gamma spectroscopists developing new algorithms as a starting point for characterizing their problem space, especially if they are using PyRIID.
Additive manufacturing of metal components enables rapid fabrication of complex geometries. However, metal additive manufacturing also introduces new morphological and microstructural characteristics which might be detrimental to component performance. Here we report the pitting corrosion properties of wrought and additively manufactured 316L stainless steel after atmospheric exposure to coastal environments and laboratory-created environments. Qualitative visualization in combination with quantitative analysis of resulting pits provided an in-depth understanding of pitting differences between wrought and additively manufactured 316L stainless steel and between coastal and laboratory-based exposure. Optical and scanning electron microscopy were utilized for visualization, while white light interferometry measured pits across approximately 5 mm × 5 mm areas on each sample. Post-processing of the interferometry data enables quantification of pitting attack for each sample in terms of both pit depth and pit volume. The pitting analysis introduced herein offers a new technique to compare pitting attack between different manufacturing processes and materials.
Machine learning models are promising as surrogates in optimization when replacing difficult-to-solve equations or black-box models. This work demonstrates the viability of linear model decision trees as piecewise-linear surrogates in decision-making problems. Linear model decision trees can be represented exactly in mixed-integer linear programming (MILP) and mixed-integer quadratically constrained programming (MIQCP) formulations. Furthermore, they can represent discontinuous functions, bringing advantages over neural networks in some cases. We present several formulations using transformations from Generalized Disjunctive Programming (GDP) formulations and modifications of MILP formulations for gradient boosted decision trees (GBDT). We then compare the computational performance of these different MILP and MIQCP representations in an optimization problem and illustrate their use on engineering applications. We observe faster solution times for optimization problems with linear model decision tree surrogates when compared with GBDT surrogates using the Optimization and Machine Learning Toolkit (OMLT).
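To make the encoding concrete, here is a minimal sketch (our hypothetical depth-1 tree and big-M values, not OMLT's API) of how a linear model decision tree embeds exactly in a MILP: binary leaf indicators route the input, and big-M constraints activate the chosen leaf's affine model, discontinuity included:

    import pyomo.environ as pyo

    # Hypothetical depth-1 linear model tree over x in [0, 10]:
    #   leaf 0 (x <= 4): y = 2.0*x + 1.0
    #   leaf 1 (x >= 4): y = -0.5*x + 12.0   (discontinuous at x = 4)
    M = 100.0  # big-M constant, valid for these variable bounds

    m = pyo.ConcreteModel()
    m.x = pyo.Var(bounds=(0, 10))
    m.y = pyo.Var(bounds=(-M, M))
    m.z = pyo.Var([0, 1], domain=pyo.Binary)  # leaf indicator variables

    m.one_leaf = pyo.Constraint(expr=m.z[0] + m.z[1] == 1)
    # Route x to the active leaf (split point at x = 4)
    m.route0 = pyo.Constraint(expr=m.x <= 4 + M * (1 - m.z[0]))
    m.route1 = pyo.Constraint(expr=m.x >= 4 - M * (1 - m.z[1]))
    # Two-sided big-M constraints tie y to the active leaf's affine model
    m.lo0 = pyo.Constraint(expr=m.y >= 2.0 * m.x + 1.0 - M * (1 - m.z[0]))
    m.hi0 = pyo.Constraint(expr=m.y <= 2.0 * m.x + 1.0 + M * (1 - m.z[0]))
    m.lo1 = pyo.Constraint(expr=m.y >= -0.5 * m.x + 12.0 - M * (1 - m.z[1]))
    m.hi1 = pyo.Constraint(expr=m.y <= -0.5 * m.x + 12.0 + M * (1 - m.z[1]))

    m.obj = pyo.Objective(expr=m.y, sense=pyo.maximize)

A GDP formulation replaces the big-M pairs with disjuncts, and leaf-specific bounds in place of a single M generally tighten the relaxation.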
Jurisdictions around the world are enacting and enforcing an increasing number of policies to fight climate change, leading to higher penetration of variable renewable energy (VRE) and energy storage systems (ESSs) in the power grid. One of the biggest challenges associated with this process is the evaluation of the appropriate amount of ESS required to mitigate the variability of the VREs and achieve the decarbonization goals of a particular jurisdiction. This report presents methodologies developed and results obtained for determining the minimum amount of ESS required to adequately serve load in a system where fossil-fueled generators are being replaced by VREs over the next two decades. This technical analysis is performed by Sandia National Laboratories for the DOE Office of Electricity Energy Storage Program in collaboration with the Illinois Commerce Commission (ICC). The Illinois MISO Zone 4 is used as a case study. Several boundary conditions are investigated in this analysis, including capacity adequacy and energy adequacy, to determine the quantity of ESS required for MISO Zone 4. Multiple scenarios are designed and evaluated to incorporate the impact of varying capacity values of VREs on the resource adequacy of the system. Several retirement scenarios involving fossil-fueled assets are also considered. Based on the current plans of new additions and retirements of generating assets, the results of the technical analysis indicate that Illinois MISO Zone 4 will require a significant quantity of ESS to satisfy its electricity demand over the next two decades.
We describe a direct magneto-optical approach to measuring the magnetic field driven by a narrow pulse width (<10 ns), 20 kA electrical current flow in the transmission line of a high energy pulsed power accelerator. The magnetic field and electrical current are among the most important operating parameters in a pulsed power accelerator and are critical to understanding the properties of the radiation output. However, accurately measuring these fields and electrical currents using conventional pulsed power diagnostics is difficult due to the strength of ionizing radiation and electromagnetic interference. Our approach uses a fiber-coupled laser beam with a rare-earth-element sensing crystal that is highly resistant to electromagnetic interference and does not require external calibration. Here, we focus on device theory, operating parameters, results from an experiment on a high energy pulsed power accelerator, and comparison to a conventional electrical current shunt sensor.
The Sound Fixing and Ranging (SOFAR) channel in the ocean allows for low frequency sound to travel thousands of kilometers, making it particularly useful for detecting underwater nuclear explosions. Suggestions that an elevated SOFAR-like channel should exist in the stratosphere date back over half a century and imply that sources within this region can be reliably sensed at vast distances. However, this theory has not been supported with evidence of direct observations from sound within this channel. Here we show that an infrasound sensor on a solar hot air balloon recorded the first infrasound detection of a ground truth airborne source while within this acoustic channel, which we refer to as the AtmoSOFAR channel. Our results support the existence of the AtmoSOFAR channel, demonstrate that acoustic signals can be recorded within it, and provide insight into the characteristics of recorded signals. Results also show a lack of detections on ground-based stations, highlighting the advantages of using balloon-borne infrasound sensors to detect impulsive sources at altitude.
Freeplay is a common type of piecewise-smooth nonlinearity in dynamical systems, and it can cause discontinuity-induced bifurcations and other behaviors that may bring about undesirable and potentially damaging responses. Prior research has focused on piecewise-smooth systems with two or three distinct regions, but less attention is devoted to systems with more regions (i.e., multi-segmented systems). In this work, numerical analysis is performed on a dynamical system with multi-segmented freeplay, in which there are four stiffness transitions and five distinct regions in the phase space. The effects of the multi-segmented parameters are studied through bifurcation diagram evolution along with induced multi-stable behavior and different bifurcations. These phenomena are interrogated through various tools, such as harmonic balance, basins of attraction, phase planes, and Poincaré section analysis. Results show that among the three multi-segmented parameters, the asymmetry has the strongest effect on the response of the system.
The “Pioneer WEC” project is targeted at developing a wave energy generator for the Coastal Surface Mooring (CSM) system within the Ocean Observatories Initiative (OOI) Pioneer Array. The CSM utilizes solar photovoltaic and wind generation systems, along with rechargeable batteries, to power multiple sensors on the buoy and along the mooring line. This approach provides continuous power for essential controller functions and a subset of instruments, and meets the full power demand roughly 70% of the time. Sandia has been tasked with designing a wave energy system to provide additional electrical power and bring the CSM up-time for satisfying the full-power demand to 100%. This project is a collaboration between Sandia and Woods Hole Oceanographic Institution (WHOI), along with Evergreen Innovations, Monterey Bay Aquarium Research Institute (MBARI), East Carolina University (ECU), Johns Hopkins University (JHU), and the National Renewable Energy Laboratory (NREL). This report captures Phase I of an expected two-phase project and presents project scoping and concept design results.
Approximating differential operators defined on two-dimensional surfaces is an important problem that arises in many areas of science and engineering. Over the past ten years, localized meshfree methods based on generalized moving least squares (GMLS) and radial basis function finite differences (RBF-FD) have been shown to be effective for this task as they can give high orders of accuracy at low computational cost, and they can be applied to surfaces defined only by point clouds. However, there have yet to be any studies that perform a direct comparison of these methods for approximating surface differential operators (SDOs). The first purpose of this work is to fill that gap. For this comparison, we focus on an RBF-FD method based on polyharmonic spline kernels and polynomials (PHS+Poly) since they are most closely related to the GMLS method. Additionally, we use a relatively new technique for approximating SDOs with RBF-FD called the tangent plane method since it is simpler than previous techniques and natural to use with PHS+Poly RBF-FD. The second purpose of this work is to relate the tangent plane formulation of SDOs to the local coordinate formulation used in GMLS and to show that they are equivalent when the tangent space to the surface is known exactly. The final purpose is to use ideas from the GMLS SDO formulation to derive a new RBF-FD method for approximating the tangent space for a point cloud surface when it is unknown. For the numerical comparisons of the methods, we examine their convergence rates for approximating the surface gradient, divergence, and Laplacian as the point clouds are refined for various parameter choices. We also compare their efficiency in terms of accuracy per computational cost, both when including and excluding setup costs.
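To ground the comparison, the following minimal numpy sketch (our illustration; the stencil is assumed already rotated into the estimated tangent plane, as in the tangent plane method) computes PHS+Poly RBF-FD weights for the flat 2D Laplacian:

    import numpy as np

    def phs_laplacian_weights(pts, center, m=3):
        # RBF-FD weights for the 2D Laplacian at `center`, using
        # polyharmonic splines r^m (m odd) augmented with quadratics.
        n = len(pts)
        r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        A = r ** m                                    # PHS kernel matrix
        xs, ys = (pts - center).T                     # shift for conditioning
        P = np.column_stack([np.ones(n), xs, ys, xs**2, xs*ys, ys**2])
        rc = np.linalg.norm(pts - center, axis=1)
        b_phi = m**2 * rc ** (m - 2)                  # Laplacian of r^m in 2D
        b_pol = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 2.0])  # Laplacian of basis
        K = np.block([[A, P], [P.T, np.zeros((6, 6))]])
        w = np.linalg.solve(K, np.concatenate([b_phi, b_pol]))
        return w[:n]                                  # drop polynomial multipliers

Applying these weights to function values on the projected stencil yields the surface Laplacian approximation at the stencil center.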
Hydrogen diffusion in metals and alloys plays an important role in the discovery of new materials for fuel cell and energy storage technology. While analytic models use hand-selected features that have clear physical ties to hydrogen diffusion, they often lack accuracy when making quantitative predictions. Machine learning models are capable of making accurate predictions, but their inner workings are obscured, rendering it unclear which physical features are truly important. To develop interpretable machine learning models to predict the activation energies of hydrogen diffusion in metals and random binary alloys, we create a database for physical and chemical properties of the species and use it to fit six machine learning models. Our models achieve root-mean-squared errors between 98 and 119 meV on the testing data and accurately predict that elemental Ru has a large activation energy, while elemental Cr and Fe have small activation energies. By analyzing the feature importances of these fitted models, we identify relevant physical properties for predicting hydrogen diffusivity. While metrics for measuring the individual feature importances for machine learning models exist, correlations between the features lead to disagreement between models and limit the conclusions that can be drawn. Instead, grouped feature importances, formed by combining the features via their correlations, agree across the six models and reveal that the two groups containing the packing factor and electronic specific heat are particularly significant for predicting hydrogen diffusion in metals and random binary alloys. This framework allows us to interpret machine learning models and enables rapid screening of new materials with the desired rates of hydrogen diffusion.
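One generic way to form such grouped importances (a sketch of the general recipe, not necessarily the exact grouping procedure used in this work) is to cluster features by correlation and aggregate per-feature permutation importances within each cluster:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform
    from sklearn.inspection import permutation_importance

    def grouped_importance(fitted_model, X, y, corr_threshold=0.7):
        # Group features whose |correlation| exceeds the threshold,
        # then sum permutation importances within each group.
        corr = np.abs(np.corrcoef(X, rowvar=False))
        dist = squareform(1.0 - corr, checks=False)   # correlation distance
        labels = fcluster(linkage(dist, method="average"),
                          t=1.0 - corr_threshold, criterion="distance")
        imp = permutation_importance(fitted_model, X, y, n_repeats=10,
                                     random_state=0).importances_mean
        return {g: imp[labels == g].sum() for g in np.unique(labels)}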
Current approaches to securing high consequence facilities (HCF) and critical assets are linear and static and therefore struggle to adapt to emerging threats (e.g., unmanned aerial systems) and changing environmental conditions (e.g., decreasing operational control). The pace of change in technological, organizational, societal, and political dynamics necessitates a move toward codifying underlying scientific principles to better characterize the rich interactions observed between HCF security technology, infrastructure, digital assets, and human or organizational components. The promising results of Laboratory Directed Research and Development (LDRD) 20-0373—“Developing a Resilient, Adaptive, and Systematic Paradigm for Security Analysis”—suggest that when compared to traditional security analysis, invoking multilayer network (MLN) modeling for HCF security system components captures unexpected failure cases and unanticipated interactions.
Ships crossing the ocean are known to produce long, curvilinear features called ship tracks visible in satellite imagery via the Twomey effect; however, there has been little exploitation of satellite imagery for broad atmospheric studies or global monitoring of ship emissions due to the difficulty of automated ship track detection. Prior studies are either proof-of-concept, qualitatively assessed, or restricted to a certain time of day. We propose a statistical method for the automated identification of ship tracks and demonstrate it using GOES-West ABI data. We first present a human-assisted segmentation method, which we use to generate a ground truth dataset of 529 annotated ship tracks in GOES-West ABI products. We then describe a two-stage automated approach comprising a detection stage to generate ship track proposals and a classification stage to reduce false positives. For detection, we present a novel pipeline based around a z-score filtering technique, and for classification, we demonstrate several classifiers from the literature. In a final experiment, we quantitatively tune the detection parameters and train the classifier using the ground truth dataset, then test on a sequestered set of images; the detect-then-classify system had an overall probability of detection (Pd) of 0.68 and 0.80 for daytime and nighttime data, respectively, and the classifier reduced false positive detections by 67% and 75%.
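As an illustration of the detection stage, a local z-score filter flags pixels that are anomalously bright relative to their neighborhood (a minimal sketch of the general technique; the window size and threshold here are placeholders, not the tuned values from the paper):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def zscore_anomalies(img, win=25, z_thresh=2.5):
        # Standardize each pixel against its local window mean and
        # standard deviation, then threshold to get candidate pixels.
        f = img.astype(float)
        mu = uniform_filter(f, size=win)
        var = np.maximum(uniform_filter(f ** 2, size=win) - mu ** 2, 1e-12)
        z = (f - mu) / np.sqrt(var)
        return z > z_thresh   # boolean mask of ship-track proposals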
Static structure factors are computed for large-scale, mechanically stable, jammed packings of frictionless spheres (three dimensions) and disks (two dimensions) with broad, power-law size dispersity characterized by the exponent -β. The static structure factor exhibits diverging power-law behavior for small wave numbers, allowing us to identify a structural fractal dimension df. In three dimensions, df ≈ 2.0 for 2.5 ≤ β ≤ 3.8, such that each of the structure factors can be collapsed onto a universal curve. In two dimensions, we instead find 1.0 ≤ df ≤ 1.34 for 2.1 ≤ β ≤ 2.9. Furthermore, we show that the fractal behavior persists when rattler particles are removed, indicating that the long-wavelength structural properties of the packings are controlled by the large-particle backbone conferring mechanical rigidity to the system. A numerical scheme for computing structure factors for triclinic unit cells is presented and employed to analyze the jammed packings.
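For reference, the quantity computed is S(q) = |Σ_j exp(iq·r_j)|²/N evaluated on the reciprocal lattice of the simulation cell; a minimal numpy sketch of such a triclinic-compatible scheme (our illustration, not the report's code) is:

    import numpy as np

    def structure_factor(pos, box, n_max=10):
        # `box` holds the (possibly triclinic) lattice vectors as rows;
        # allowed wavevectors are integer combinations of the reciprocal
        # lattice vectors, q = 2*pi*(h, k, l) @ inv(box).T.
        recip = 2.0 * np.pi * np.linalg.inv(box).T
        hkl = np.array([(h, k, l)
                        for h in range(-n_max, n_max + 1)
                        for k in range(-n_max, n_max + 1)
                        for l in range(-n_max, n_max + 1)
                        if (h, k, l) != (0, 0, 0)])
        q = hkl @ recip
        rho = np.exp(1j * pos @ q.T).sum(axis=0)      # collective density modes
        s_q = np.abs(rho) ** 2 / len(pos)
        return np.linalg.norm(q, axis=1), s_q         # (|q|, S(q)) pairs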
The preliminary use of the in-situ curvature measurement technique for analyzing the planar stress evolution of controlled atmosphere plasma spray (CAPS) refractory metal deposits was demonstrated with SNL-NM org. 1834’s CAPS system. A porous refractory metal exemplar of Ta-Nb was sprayed onto Ni-200, Ti-6Al-4V, and Al 7075-T6 substrates using a constant plasma torch parameter setting and deposition toolpath. Residual stresses of the deposits were found to be largely influenced by the substrate coefficient of thermal expansion and were calculated to be 49, 90, and -136 MPa for Ni-200, Ti-6Al-4V, and Al 7075-T6, respectively. The “evolving stress” of the Ta-Nb deposits, which more accurately describes the mean intrinsic splat quenching stress of the spray material during deposition, was calculated to be 67, 92, and 129 MPa for Ni-200, Ti-6Al-4V, and Al 7075-T6, respectively. A notable difference in curvature measurement was observed for the first coating pass on the Al 7075-T6 substrate, with interface micrograph evidence suggesting potential softening and/or melting of the Al 7075-T6 substrate surface during deposition. Substrate temperature measurements prior to Ta-Nb deposition were used to calculate the thermal energy absorbed from the hot gas plume by the different substrates and were found to correlate with each substrate’s thermal effusivity. These calculated thermal energies were also found to be ~10 to 15% of the calculated energy output from the plasma torch’s nozzle exit for these experimental conditions.
Large rocket motors may violently explode when exposed to accidental fires. Even hot metal fragments from a nearby accident may penetrate the propellant and ultimately cause thermal ignition. A mechanistic understanding of how heated propellants progress to thermal runaway remains a major unsolved problem. Here we show that thermal ignition in propellants can be predicted using a universal cookoff model coupled to a micromechanics pressurization model. Our model predicts the time to thermal ignition in cookoff experiments with variable headspace volumes. We found that experiments with headspace volumes are more prone to deformation, which distorts pores and increases permeability as the propellant expands into this headspace. Delayed ignition with larger headspace volume correlates with lower headspace pressures during decomposition. We found that our predictions matched experimental measurements best when the initial propellant was impermeable to gas flow rather than permeable. Similar behavior is expected with other energetic materials with rubbery binders. Our model is validated using data from a separate laboratory. We also present an uncertainty analysis, based on Latin Hypercube Sampling (LHS), of thermal ignition caused by a steel fragment embedded in the propellant.
As Moore’s Law and Dennard Scaling come to an end, it is becoming increasingly important to develop non-von Neumann computing architectures that can perform low-power computing in the domains of scientific computing, artificial intelligence, embedded systems, and edge computing. Next-generation computing technologies, such as neuromorphic computing and quantum computing, have the potential to revolutionize computing. However, in order to make progress in these fields, it is necessary to fundamentally change the current computing paradigm by codesigning systems across all system levels, from materials to software. Because skilled labor is limited in the field of next-generation computing, we are developing artificial intelligence-enhanced tools to automate the codesign and co-discovery of next-generation computers. Here, we develop a method called Modular and Multi-level MAchine Learning (MAMMAL) which is able to perform analog codesign and co-discovery across multiple system levels, spanning devices to circuits. We prototype MAMMAL by using it to design simple passive analog low-pass filters. We also explore methods to incorporate uncertainty quantification into MAMMAL and to accelerate MAMMAL by using emerging technologies, such as crossbar arrays. Ultimately, we believe that MAMMAL will enable rapid progress in developing next-generation computers by automating the codesign and co-discovery of electronic systems.
Climate and its impacts on the natural environment, and on the ability of the natural environment to support population and the built environment, stand as a threat multiplier that impacts national and global security. The Water Intersections with Climate Systems Security (WICSS) Strategic Initiative is designed to improve understanding of water’s role in, among other topics, the connection of critical infrastructure to climate in light of competing national and global security interests (including transboundary issues and stability), and to identify research gaps aligned with Sandia and Federal agency priorities. With this impetus in mind, the WICSS Strategic Initiative team conceptualized a causal loop diagram (CLD) of the relationships among climate, the natural environment, population, and the built environment, with an understanding that any such regionally focused system must have externalities that influence the system from beyond its control, and metrics for better understanding the consequences of the set of interactions. These are discussed in light of a series of worldviews that focus on portions of the overall systems relationship. The relationships are described and documented in detail. A set of reinforcing and balancing loops is then highlighted within the context of the model. Finally, forward-looking actions are highlighted to describe how this conceptual model can be turned into modeling that addresses multiple problems described under the purview of the Strategic Initiative.
In the dynamic landscape of Operational Technology (OT), and specifically the emerging landscape for Advanced Reactors, the establishment of trust between digital assets emerges as a challenge for cybersecurity modernization. This report reviews existing approaches to authentication in Enterprise environments, and proposed methods for authentication in OT, and analyzes each for its applicability to future Advanced Reactor digital networks. Principles of authentication ranging from underlying cryptographic mechanisms to trust authorities are evaluated through the lens of OT. These facets emphasize the importance of mutual authentication in real-time environments, enabling a paradigm shift from the current approach of strong boundaries to a more malleable network that allows for flexible operation. This work finds that there is a need for evaluation and decision making by industry stakeholders, but current technologies and approaches can be adapted to fit needs and risk tolerances.
Plasma distribution in 3D space is heavily influenced by complex surfaces and the coupling interactions between plasma properties and interfacing material properties. For example, guided streamers that transition to surface ionization waves (SIWs) and propagate over structured dielectrics experience field enhancements that can lead to localized increases in ionization rates and complex 3D configurations that are difficult to analyze. Investigating these configurations requires techniques that can provide a more complete 3D picture. To help address this capability gap, a tomographic optical emission spectroscopy (tomo-OES) diagnostic system has been developed at Sandia National Laboratories that can resolve SIWs. The system includes four intensified cameras that measure the angular projections of the plasma light emission through bandpass filters. A dot calibration target co-registers each angular projection to the same voxel grid, and an algebraic reconstruction technique (ART) recovers the light intensity at each voxel. An atmospheric pressure plasma jet (APPJ), provided by Peter Bruggeman, has been investigated, and representative results are shown in Figure 1. Here, a bandpass filter was used to isolate emission from the N2 second positive system (SPS) at 337.1 nm to capture the transition of the streamer to a SIW on a planar dielectric surface (relative permittivity 3.3) located 3 mm below the APPJ [3]. The surface wave velocity was 3.5 × 10^4 m/s, consistent with measurements made by Steven Shannon. Characterization of this APPJ will support the group effort of standing up a reproducible APPJ across institutions for applications such as liquid treatment, catalysis, and plasma-aided combustion. Future work will investigate non-planar surfaces and eventually develop tomographic laser-induced fluorescence (tomo-LIF) approaches.
The International Database of Reference Gamma-Ray Spectra of Various Nuclear Matter is designed to hold curated gamma spectral data and will be hosted by the International Atomic Energy Agency on its public-facing website. Currently, Sandia provides the database to be hosted to the International Atomic Energy Agency. This document describes the application used by Sandia to load spectral data into a database.
Improved power take-off (PTO) controller design for wave energy converters is considered a critical component for reducing the cost of energy production. However, the device and control design process often remains sequential, with the space of possible final designs largely reduced before the controller has been considered. Control co-design, whereby the device and control design are considered concurrently, has resulted in improved designs in many industries, but remains rare in the wave energy community. In this paper we demonstrate the use of a new open-source code, WecOptTool, for control co-design of wave energy converters, with the aim of making the co-design approach more accessible and accelerating its adoption. Additionally, we highlight the importance of designing a wave energy converter to maximize electrical power, rather than mechanical power, and demonstrate the co-design process while modeling the PTO's components (i.e., drive-train and generator, and their dynamics). We also consider the design and optimization of causal fixed-structure controllers. The demonstration presented here considers the PTO design problem and finds the optimal PTO drive-train that maximizes annual electrical power production. The results show a 22% improvement in the optimal controller and drive-train co-design over the optimal controller for the nominal, as-built device design.
In this paper, an approach for 3D plasma structure diagnostics using tomographic optical emission spectroscopy (Tomo-OES) of a nanosecond pulsed atmospheric pressure plasma jet (APPJ) is presented. In contrast to the well-known Abel inversion, Tomo-OES does not require cylindrical symmetry to recover 3D distributions of plasma light emission. Instead, many 2D angular projections are measured with intensified cameras and the multiplicative algebraic reconstruction technique is used to recover the 3D distribution of light emission. This approach solves the line-of-sight integration problem inherent to optical diagnostics, allowing recovery of localized OES information within the plasma that can be used to better infer plasma parameters within complex plasma structures. Here, Tomo-OES was applied to investigate an APPJ operated with helium in ambient air and impinging on planar and structured dielectric surfaces. Surface charging caused the guided streamer from the APPJ to transition to a surface ionization wave (SIW) that propagated along the surface. The SIW experienced variable geometrical and electrical material properties as it propagated, leading to 3D configurations that were non-symmetric and spatially complex. Light emission from He, N2+, and N2 was imaged at ten angular projections and the respective time-resolved 3D emission distributions in the plasma were then reconstructed. The spatial resolution of each tomographic reconstruction was 7.4 µm and the temporal resolution was 5 ns, sufficient to observe the guided streamer and the effects of the structured surface on the SIW. Emission from He showed the core of the jet, and emission from N2+ and N2 indicated effects of entrainment of ambient air. Penning ionization of N2 created a ring or outer layer of N2+ that spatially converged to form the ‘plasma bullet’ or spatially diverged across a surface as part of a SIW. The SIW entered trenches of size 150 µm, leading to decreases in plasma light emission in regions above the trenches. The plasma light emission was higher in some regions with trenches, possibly due to effects of field enhancement.
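For context, the multiplicative algebraic reconstruction technique updates each voxel by the ratio of measured to re-projected intensity along every ray; a minimal dense-matrix sketch (our illustration, with relaxation and iteration counts as placeholder parameters) is:

    import numpy as np

    def mart(A, b, n_iter=50, relax=0.5):
        # A: (n_rays, n_vox) projection matrix; b: measured projections.
        # Returns non-negative voxel intensities x with A @ x ~ b.
        x = np.ones(A.shape[1])              # strictly positive initial guess
        for _ in range(n_iter):
            for i in range(A.shape[0]):      # loop over projection rays
                est = A[i] @ x
                if est <= 0.0 or b[i] <= 0.0:
                    continue
                # x_j <- x_j * (b_i / est)^(relax * A_ij)
                x *= (b[i] / est) ** (relax * A[i])
        return x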
This is a SAND Report on cross-correlation data collected at the Redmond Salt Mine. It discusses methods, as well as temporal variability and energy characteristics of the cross-correlation data.
This report presents our work to model the workloads of a linear electromagnetic application based on the method of moments in the frequency domain to effectively load balance the matrix assembly. This application is particularly challenging to load balance due to its lack of persistent iterative behavior, its operation under tight memory constraint (where the matrix may fill 80% of memory on each node), and the algorithmic complexity of the computational method. This report describes the first step in our work to apply an inspector-executor approach for load balancing workloads where key parameters are exposed during the inspector phase and a pre-trained model is applied to predict relative task weights for the load balancer.
The ability to track the concentrations of specific molecules in the body in real time would significantly improve our ability to study, monitor, and respond to diseases. To achieve this, we require sensors that can withstand the complex environment inside the body. Electrochemical aptamer-based sensors are particularly promising for in vivo sensing, as they are among the only generalizable sensing technologies that can achieve real-time molecular monitoring directly in blood and the living body. In this project, we first focused on extending the application space of aptamer sensors to support minimally-invasive wearable measurements. To achieve this, we developed individually-addressable sensors with commercial off-the-shelf microneedles. We demonstrated sensor function in buffer, blood, and porcine skin (a common proxy for human skin). In addition to the applied sensing project, we also worked to improve fundamental understanding of the aptamer sensing platform and how it responds to biomolecular interferents. Specifically, we explored the interfacial dynamics of biofouling – a process impacting sensors placed in complex fluids, such as blood.
Spectrally resolved signals in the short- to mid-wave infrared (SWIR/MWIR) bands at high temporal resolution are critical for many national security remote sensing missions. Currently available off-the-shelf technology can achieve either high temporal resolution or high spectral resolution, but rugged instruments that can achieve both simultaneously remain mostly in the realm of one-off R&D projects. This report documents efforts to demonstrate a new technique for designing and building high-resolution, high-framerate multichannel FTIR (MC-FTIR) spectrometers that operate in the SWIR/MWIR bands. The core optical element in a MC-FTIR spectrometer is an array of statically tuned lamellar grating interferometers (LGI). In the original MC-FTIR work these arrays were fabricated using a synchrotron x-ray lithography method. We proposed to instead fabricate these LGI arrays using multiphoton lithography (MPL), a 3D printing technique that can fabricate meso-scale structures with sub-micron precision. Although we were able to fabricate LGI arrays of sufficient size using MPL, the realized optical surfaces had unsuitably high optical form errors, precluding their use in a fieldable instrument. Further advancement in MPL technology may eventually enable fabrication of interferometer-grade LGI arrays.
A major difficulty in the analysis of molecular-level simulations is that macroscopic flow quantities are inherently noisy due to molecular fluctuations. An important example for turbulent flows is the kinetic energy dissipation rate. Traditionally, this quantity is calculated from gradients of the macroscopic velocity field, which exacerbates the noise problem. The inability to accurately compute the dissipation rate makes meaningful comparison of molecular-level and continuum simulation results a serious challenge. Herein, we extend previously developed coarse-graining theories to derive an exact molecular-level expression for the dissipation rate, which would circumvent the need to compute gradients of noisy fields. Although the exact expression cannot feasibly be implemented in Sandia’s direct simulation Monte Carlo (DSMC) code SPARTA, we utilize an approximate “hybrid” approach and compare it to the conventional gradient-based approach for planar Couette flow and the two-dimensional Taylor-Green vortex, demonstrating that the hybrid approach is significantly more accurate. Finally, we explore the possibility of adopting a Lagrangian approach to calculate the energy dissipation rate.
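For reference (a standard continuum form, our addition; the report's exact molecular-level expression is derived therein and not reproduced here), the conventional gradient-based dissipation rate for an incompressible flow with kinematic viscosity \nu is

    \varepsilon = 2\nu \,\langle s_{ij} s_{ij} \rangle, \qquad
    s_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right),

which makes explicit the dependence on velocity gradients that amplifies molecular noise.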
Numerical simulations are used to study the dynamics of a developing suspension Poiseuille flow with monodispersed and bidispersed neutrally buoyant particles in a planar channel, and machine learning is applied to learn the evolving stresses of the developing suspension. The particle stresses and pressure develop on a slower time scale than the volume fraction, indicating that once the particles reach a steady volume fraction profile, they rearrange to minimize the contact pressure on each particle. We consider the timescale for stress development and how the stress development connects to particle migration. For developing monodisperse suspensions, we present a new physics-informed Galerkin neural network that allows for learning the particle stresses when direct measurements are not possible. We show that when a training set of stress measurements is available, the MOR-physics operator learning method can also capture the particle stresses accurately.
Over the past decade, cybersecurity researchers have released multiple studies highlighting the insecure nature of I&C system communication protocols. In response, standards bodies have addressed the issue by adding the ability to encrypt communications to some protocols in some cases, while control system engineers have argued that encryption within these kinds of high consequence systems is in fact dangerous. Certainly, control system information exchanged between systems should be protected. But encrypting the information may not be the best way to do so. In fact, while in IT systems vendors are concerned with confidentiality, integrity, and availability, frequently in that order, in OT systems engineers are much more concerned with availability and integrity than confidentiality. In this paper, we counter specific arguments against encrypting control system traffic and present potential alternatives to encryption that support nuclear OT system needs more strongly than commodity IT system needs while still providing robust integrity and availability guarantees.
High Energy Arcing Faults (HEAFs) are hazardous events in which an electrical arc leads to the rapid release of energy in the form of heat, vaporized metal, and mechanical force. In nuclear power plants, these events are often accompanied by loss of essential power and complicated shutdowns. To confirm the probabilistic risk analysis (PRA) methodology in NUREG/CR-6850, which was formulated based on limited observational data, the NRC led an international experimental campaign from 2014 to 2016. The results of these experiments uncovered an unexpected hazard posed by aluminum components in or near electrical equipment and the potential for unanalyzed equipment failures. Sandia National Laboratories (SNL), in support of the NRC work, collaborated with NIST, BSI, KEMA, and NRC to support the full-scale HEAF test campaign in 2022. SNL provided high-speed visible and infrared video and data for ten tests of HEAFs originating on copper and aluminum buses inside switchgears and bus ducts. Part of the SNL scope was to place cameras with high-speed data collection at different vantage points within the test facility to provide NRC a more complete and granular view of the test events.
Tritium exhibits unique environmental behavior because of its potential interactions with water and organic substances. Modeling the environmental consequences of tritium releases can be relatively complex, and thus MACCS must be evaluated to understand what updates, if any, are needed to account for the behavior of tritium. We examine documented tritium releases and previous benchmarking assessments to perform a model intercomparison between MACCS and the state-of-practice tritium-specific codes UFOTRI and ETMOD, quantifying the differences between MACCS and state-of-practice models for assessing tritium consequences. Additionally, information to assist an analyst in judging whether a postulated tritium release is likely to lead to significant doses is provided.
As a follow-up to our more comprehensive report on Adversarial Machine Learning (AML), here we provide demonstrations of AML attacks against the Limbo image database of UF6 cylinders in a variety of orientations and amongst a variety of distractor images. We demonstrate the Carlini & Wagner AML attack against a subset of Limbo images, with a 100% attack success rate, meaning all attacked images were misclassified by a highly accurate trained model, yet the image changes were imperceptible to the human eye. We also demonstrate successful attacks against segmented images (images with more than one targeted object). Finally, we demonstrate the Fast Fourier Transform countermeasure that can be used to detect AML attacks on images. The intent of this and our previous report is to inform the IAEA and stakeholders of both the promise of machine learning, which could greatly improve the efficiency of surveillance monitoring, and the real threat of AML and potential defenses.
As large systems of Li-ion batteries are being increasingly deployed, the safety of such systems must be assessed. Due to the high cost of testing large systems, it is important to extract key safety information from any available experiments. Developing validated predictive models that can be exercised at larger scales offers an opportunity to augment experimental data. In this work, experiments were conducted on packs of three Li-ion pouch cells with different heating rates and states of charge (SOC) to assess the propagation behavior of a module undergoing thermal runaway. The variable heating rates represent slow or fast heating that a module may experience in a system. As the SOC decreases, propagation slows down and eventually becomes mitigated. It was found that the SOC boundary between propagation and mitigation was higher at a heating rate of 50 °C/min than at 10 °C/min for these cells. However, due to increased pre-heating at the lower heating rate, the propagation speed increased. Simulations were conducted with a new intra-particle diffusion-limited reaction model for a range of anode particle sizes. Propagation speeds and onset times were generally well predicted, and the variability in the propagation/mitigation boundary highlighted the need for greater uncertainty quantification of the predictions.
There is an increasing aspiration to utilize machine learning (ML) for various tasks of relevance to national security. ML models have thus far been mostly applied to tasks and domains that, while impactful, have sufficient volume of data. For predictive tasks of national security relevance, ML models of great capacity (ability to approximate nonlinear trends in input-output maps) are often needed to capture the complex underlying physics. However, scientific problems of relevance to national security are often accompanied by various sources of sparse and/or incomplete data, including experiments and simulations, across different regimes of operation, of varying degrees of fidelity, and include noise with different characteristics and/or intensity. State-of-the-art ML models, despite exhibiting superior performance on the task and domain they were trained on, may suffer detrimental loss in performance in such sparse data environments. This report summarizes the results of the Laboratory Directed Research and Development project entitled Trust-Enhancing Probabilistic Transfer Learning for Sparse and Noisy Data Environments. The objective of the project was to develop a new transfer learning (TL) framework that aims to adaptively blend the data across different sources in tackling one task of interest, resulting in enhanced trustworthiness of ML models for mission- and safety-critical systems. The proposed framework determines when it is worth applying TL and how much knowledge is to be transferred, despite uncontrollable uncertainties. The framework accomplishes this by leveraging concepts and techniques from the fields of Bayesian inverse modeling and uncertainty quantification, relying on strong mathematical foundations of probability and measure theories to devise new uncertainty-aware TL workflows.
Strongly charged polyelectrolytes (PEs) demonstrate complex solution behavior as a function of chain length, concentration, and ionic strength. The viscosity behavior is important to understand and is a core quantity for many applications, but aspects of it remain a challenge to model. Molecular dynamics simulations using implicit solvent coarse-grained (CG) models successfully reproduce structure, but are often inappropriate for calculating viscosities. To address the need for CG models that reproduce the viscoelastic properties of one of the most studied PEs, sodium polystyrene sulfonate (NaPSS), we report our recent efforts in using Bayesian optimization to develop CG models of NaPSS which capture both polymer structure and dynamics in aqueous solutions with explicit solvent. We demonstrate that our explicit solvent CG NaPSS model with the ML-BOP water model [Chan et al. Nat Commun 10, 379 (2019)] quantitatively reproduces NaPSS chain statistics and solution structure. The new explicit solvent CG model is benchmarked against diffusivities from atomistic simulations and experimental specific viscosities for short chains. We also show that our Bayesian-optimized CG model is transferable to larger chain lengths across a range of concentrations. Overall, this work provides a machine-learned model to probe the structural, dynamic, and rheological properties of polyelectrolytes such as NaPSS and aids in the design of novel, strongly charged polymers with tunable structural and viscoelastic properties.
Multifidelity uncertainty quantification (MF UQ) sampling approaches have been shown to significantly reduce the variance of statistical estimators while preserving the bias of the highest-fidelity model, provided that the low-fidelity models are well correlated. However, maintaining a high level of correlation can be challenging, especially when models depend on different uncertain input parameters, which drastically reduces the correlation. Existing MF UQ approaches do not adequately address this issue. In this work, we propose a new sampling strategy that exploits a shared space to improve the correlation among models with dissimilar parameterization. We achieve this by transforming the original coordinates onto an auxiliary manifold using the adaptive basis (AB) method (Tipireddy and Ghanem, 2014). The AB method has two main benefits: (1) it provides an effective tool to identify the low-dimensional manifold on which each model can be represented, and (2) it enables easy transformation of polynomial chaos representations from high- to low-dimensional spaces. This latter feature is used to identify a shared manifold among models without requiring additional evaluations. We present two algorithmic flavors of the new estimator to cover different analysis scenarios, including those with legacy and non-legacy high-fidelity (HF) data. We provide numerical results for analytical examples, a direct field acoustic test, and a finite element model of a nuclear fuel assembly. For all examples, we compare the proposed strategy against both single-fidelity and MF estimators based on the original model parameterization.
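For intuition on why inter-model correlation drives the variance reduction, consider the classic two-fidelity control-variate estimator (a minimal numpy sketch of the standard construction, not the adaptive-basis estimator proposed here):

    import numpy as np

    def cv_estimate(f_hi, f_lo, sampler, n_hi=50, n_lo=500):
        # Two-model control variate: correct the HF sample mean using
        # the LF model evaluated on shared plus additional samples.
        x_shared, x_extra = sampler(n_hi), sampler(n_lo)
        q_hi = np.array([f_hi(x) for x in x_shared])
        q_lo = np.array([f_lo(x) for x in x_shared])
        q_lo_all = np.concatenate([q_lo, [f_lo(x) for x in x_extra]])
        # Variance-optimal weight estimated from the shared samples
        alpha = np.cov(q_hi, q_lo)[0, 1] / np.var(q_lo, ddof=1)
        return q_hi.mean() + alpha * (q_lo_all.mean() - q_lo.mean())

In the limit of abundant low-fidelity samples, the optimal weight reduces the estimator variance by the factor (1 − ρ²), where ρ is the correlation between the models, which is why a shared manifold that restores correlation directly improves the estimator.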
A collection of MATLAB functions and class definitions called System Workflow Tools (SWFT) is available to semi-automate steps in the simulation process. Some of these steps are often simple and routine for smaller finite element models, but if done directly by an analyst they can quickly become labor intensive, cumbersome, and error prone for larger, system-level models. Some of SWFT’s capabilities demonstrated in this report include writing Sierra input decks and processing Quantities of Interest (QOI) from results files. SWFT also writes scripts in order to utilize other software programs such as Cubit (separating system-level CAD into subassemblies and components, creating nodesets and sidesets), DAKOTA (ensemble management), and ParaView (contour plots and animations). Detailed commands and workflows from mesh generation to report generation are provided as examples for analysts to utilize SWFT capabilities.
The methodology described in this article enables a type of holistic fleet optimization that simultaneously considers the composition and activity of a fleet through time as well as the design of individual systems within the fleet. Often, real-world system design optimization and fleet-level acquisition optimization are treated separately due to the prohibitive scale and complexity of each problem. This means that fleet-level schedules are typically limited to the inclusion of predefined system configurations and are blind to a rich spectrum of system design alternatives. Similarly, system design optimization often considers a system in isolation from the fleet and is blind to numerous, complex portfolio-level considerations. In reality, these two problems are highly interconnected. To properly address this system-fleet design interdependence, we present a general method for efficiently incorporating multi-objective system design trade-off information into a mixed-integer linear programming (MILP) fleet-level optimization. This work is motivated by the authors' experience with large-scale DOD acquisition portfolios. However, the methodology is general to any application where the fleet-level problem is a MILP and there exists at least one system having a design trade space in which two or more design objectives are parameters in the fleet-level MILP.
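Schematically, the coupling can be viewed as letting the fleet MILP select among precomputed Pareto-optimal design points, whose objective values enter the MILP as parameters (an illustrative formulation, not the authors' exact model):

\[
\min_{x}\; \sum_{d\in\mathcal{D}} c_{d}\, x_{d}
\quad\text{s.t.}\quad
\sum_{d\in\mathcal{D}} a_{k,d}\, x_{d} \ge r_{k} \;\;\forall k,
\qquad x_{d} \in \mathbb{Z}_{\ge 0},
\]

where each \(d \in \mathcal{D}\) is a nondominated system design from the trade space, \(c_{d}\) and \(a_{k,d}\) are its design objectives (e.g., cost and capability levels) appearing as MILP parameters, \(r_{k}\) are fleet-level requirements, and \(x_{d}\) counts how many units of design d the fleet acquires. Time-phased composition and activity variables extend the same pattern.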
This report documents the preliminary design phase of the Critical Experiment Design (CED-1) conducted as part of integral experiment request (IER) 523. The purpose of IER-523 is to determine critical configurations of 35 weight percent (wt%) enriched uranium dioxide-beryllium oxide (UO2-BeO) material with Seven Percent Critical Experiment (7uPCX) fuels at Sandia National Laboratories (Sandia). Preliminary experiment design concepts, neutronic analysis results, and proposed paths for continuing the CED process are presented. This report builds on the feasibility and justification of experimental need report (CED-0) completed in December 2021.
A new approach to analytically derive constitutive stress-strain relationships by modeling the work hardening behavior of alloys was developed for assessing the strength and ductility of the Ti-6Al-4V alloy. This approach is now successfully applied to assess the quasi-static stress-strain behavior of an additively manufactured 304L sample. The predictive capability of this modeling approach may then be extended to model material stress-strain behavior at higher strain rates of loading.
This project developed a novel statistical understanding of compression analytics (CA), which has challenged and clarified some core assumptions about CA and enabled the development of novel techniques that address vital challenges of national security. Specifically, this project has yielded novel capabilities including (1) principled metrics for model selection in CA; (2) techniques for deriving and applying optimal classification rules and decision theory to supervised CA, including how to properly handle class imbalance and differing costs of misclassification; (3) two techniques for handling nonlocal information in CA; (4) a novel technique for unsupervised CA that is agnostic with regard to the underlying compression algorithm; and (5) a framework for semisupervised CA when a small number of labels are known in an otherwise large unlabeled dataset. In addition, the academic alliance component of this project focused on the development of a novel exemplar-based Bayesian technique for estimating variable-length Markov models (closely related to PPM [prediction by partial matching] compression techniques). We have developed examples illustrating the application of our work to text, video, genetic sequences, and unstructured cybersecurity log files.
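One widely used CA primitive on which capabilities like these build is the normalized compression distance (NCD). A minimal Python sketch, using zlib as a stand-in compressor and a naive nearest-neighbor rule (illustrative only; the project's metrics and classification rules are more principled than this example):

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes (zlib as a stand-in compressor)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between byte strings x and y."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(query: bytes, labeled: list[tuple[bytes, str]]) -> str:
    """Assign the label of the training example closest to the query in NCD."""
    return min(labeled, key=lambda ex: ncd(query, ex[0]))[1]

train = [(b"GET /index.html HTTP/1.1", "web"), (b"ATGCGTACGTTAGC", "dna")]
print(classify(b"GET /login HTTP/1.1", train))  # -> "web"
```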
Accurate event locations are important for many endeavors in seismology, and understanding the factors that contribute to uncertainties in those locations is complex. In this article, we present a case study that takes an in-depth look at the accuracy and precision possible for locating nine shallow earthquakes in the Rock Valley fault zone in southern Nevada. These events are targeted by the Rock Valley Direct Comparison phase of the Source Physics Experiment, as candidates for the colocation of a chemical explosion with an earthquake hypocenter to directly compare earthquake and explosion sources. For this comparison, it is necessary to determine earthquake hypocenters as accurately as possible so that different source types have nearly identical locations. Our investigations include uncertainty analysis from different sets of phase arrivals, stations, velocity models, and location algorithms. For a common set of phase arrivals and stations, we find that epicentral locations from different combinations of velocity models and algorithms are within 600 m of one another in most cases. Event depths exhibit greater uncertainties, but focusing on the S-P times at the nearest station allows for estimates within approximately 500 m.
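The S-P depth constraint follows from the differential travel time over the hypocentral distance d; with illustrative crustal velocities (not the study's velocity models):

\[
t_{S} - t_{P} = d\left(\frac{1}{v_{S}} - \frac{1}{v_{P}}\right)
\;\Rightarrow\;
d = \frac{(t_{S} - t_{P})\, v_{P} v_{S}}{v_{P} - v_{S}},
\]

so for \(v_{P} \approx 6\) km/s and \(v_{P}/v_{S} \approx \sqrt{3}\), \(d \approx 8.2\,(t_{S}-t_{P})\) km. At a station very close to the epicenter this hypocentral distance is dominated by depth, which is why the nearest-station S-P time is so diagnostic for constraining event depth.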
As Machine Learning (ML) continues to advance, it is being integrated into more systems. Often, the ML component represents a significant portion of the system, reducing the burden on the end user or significantly improving task performance. However, the ML component represents an unknown, complex phenomenon that is learned from collected data without being explicitly programmed. Despite the improvement in task performance, the models are often black boxes. Evaluating the credibility and vulnerabilities of ML models poses a gap in current test and evaluation practice. For high-consequence applications, the lack of testing and evaluation procedures represents a significant source of uncertainty and risk. To help reduce that risk, we present considerations for evaluating systems with an embedded ML component within a red-teaming-inspired methodology. We focus on (1) cyber vulnerabilities of the ML model, (2) evaluating performance gaps, and (3) adversarial ML vulnerabilities.
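As a concrete instance of category (3), the fast gradient sign method (FGSM) is the canonical first probe of adversarial sensitivity. A self-contained numpy sketch on a logistic-regression "model" (illustrative only; the methodology above covers far more than this single attack):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method against a logistic-regression model.

    Perturbs input x by eps in the direction that increases the
    cross-entropy loss, a minimal probe of adversarial sensitivity.
    """
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w, b = rng.normal(size=16), 0.1
x, y = rng.normal(size=16), 1.0
x_adv = fgsm(x, y, w, b, eps=0.05)
print("clean p:", sigmoid(w @ x + b), "adversarial p:", sigmoid(w @ x_adv + b))
```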
The development of additively-manufactured (AM) 316L stainless steel (SS) using laser powder bed fusion (LPBF) has enabled near net shape components from a corrosion-resistant structural material. In this article, we present a multiscale study on the effects of processing parameters on the corrosion behavior of as-printed surfaces of AM 316L SS formed via LPBF. Laser power and scan speed of the LPBF process were varied across the instrument range known to produce parts with >99 % density, and the macroscale corrosion trends were interpreted via microscale and nanoscale measurements of porosity, roughness, microstructure, and chemistry. Porosity and roughness data showed that porosity φ decreased as volumetric energy density Ev increased due to a shift in the pore formation mechanism and that roughness Sq was due to melt track morphology and partially fused powder features. Cross-sectional and plan-view maps of chemistry and work function ϕs revealed an amorphous Mn-silicate phase enriched with Cr and Al that varied in both thickness and density depending on Ev. Finally, the macroscale potentiodynamic polarization experiments under full immersion in quiescent 0.6 M NaCl showed significant differences in breakdown potential Eb and metastable pitting. In general, samples with smaller φ and Sq values and larger ϕs values and homogeneity in the Mn-silicate exhibited larger Eb. The porosity and roughness effects stemmed from an increase to the overall number of initiation sites for pitting, and the oxide phase contributed to passive film breakdown by acting as a crevice former or creating a galvanic couple with the SS.
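For reference, the volumetric energy density in LPBF studies is conventionally defined from the scan parameters (the standard definition, assuming the usual parameter set):

\[
E_{v} = \frac{P}{v\,h\,t},
\]

with laser power P, scan speed v, hatch spacing h, and layer thickness t, so varying laser power and scan speed at fixed hatch spacing and layer thickness sweeps Ev directly.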
The United States Department of Energy’s (DOE) Office of Nuclear Energy’s Spent Fuel and Waste Science and Technology Campaign seeks to better understand the technical basis, risks, and uncertainty associated with the safe and secure disposition of spent nuclear fuel (SNF) and high-level radioactive waste. Commercial nuclear power generation in the United States has resulted in thousands of metric tons of SNF, the disposal of which is the responsibility of the DOE (Nuclear Waste Policy Act of 1982, as amended). Any repository licensed to dispose of SNF must meet requirements regarding the long-term performance of that repository. For an evaluation of the long-term performance of the repository, one of the events that may need to be considered is the SNF achieving a critical configuration during the postclosure period. Of particular interest is the potential behavior of SNF in dual-purpose canisters (DPCs), which are currently licensed and being used to store and transport SNF but were not designed for permanent geologic disposal.
The benefits of high-performance unidirectional carbon fiber composites are limited in many cost-driven industries due to their high cost relative to alternative reinforcement fibers. Low-cost carbon fibers have been previously proposed, but the longitudinal compressive strength continues to be a limiting factor, or studies are based on simplifications that warrant further analysis. A micromechanical model is used to (1) determine whether the longitudinal compressive strength of composites can be improved with noncircular carbon fiber shapes and (2) characterize why some shapes are stronger than others in compression. In comparison to circular fibers, the results suggest that the strength can be increased by 10%–13% using a specific six-lobe fiber shape and by 6%–9% for a three-lobe fiber shape. A slight increase is also predicted in the compressive strength of the studied two-lobe fiber, but this case has the highest uncertainty and sensitivity to fiber orientation and misalignment direction. The underlying mechanism governing the compressive failure of the composites was linked to the unique stress fields created by the lobes, particularly the pressure stress in the matrix. This work provides mechanics-based evidence of strength improvements from noncircular fiber shapes and insight into how matrix yielding is altered with alternative fiber shapes.
Characterizing interface trap states in commercial wide bandgap devices using frequency-based measurements requires unconventionally high probing frequencies to account for both fast and slow traps associated with wide bandgap materials. The C–ψS technique has been suggested as a viable quasi-static method for determining interface trap state densities in wide bandgap systems, but the results are shown to be susceptible to errors in the analysis procedure. This work explores the primary sources of error in the C–ψS technique using an analytical model that describes the apparent response of wide bandgap MOS capacitor devices. Measurement noise is shown to greatly impact the linear fitting routine of the 1/CS*² vs ψS plot used to calibrate the additive constant in the surface potential/gate voltage relationship, and inexact knowledge of the oxide capacitance is also shown to impede interface trap state analysis near the band edge. In addition, a slight nonlinearity that is typically present throughout the 1/CS*² vs ψS plot hinders accurate estimation of interface trap densities, which is demonstrated for a fabricated n-SiC MOS capacitor device. Methods are suggested to improve quasi-static analysis, including a novel method to determine an approximate integration constant without relying on a linear fitting routine.
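The sensitivity to the additive constant can be seen from the ideal depletion-regime relation (a textbook form; the paper's analytical model is more complete):

\[
C_{S} = \sqrt{\frac{q\,\varepsilon_{s} N_{D}}{2\,\psi_{S}}}
\;\;\Rightarrow\;\;
\frac{1}{C_{S}^{2}} = \frac{2}{q\,\varepsilon_{s} N_{D}}\,\psi_{S},
\]

so the 1/CS*² vs ψS plot is nominally linear, and the intercept of the fit calibrates the additive constant relating the measured surface-potential axis to the gate voltage. Noise in the measured capacitance and error in the oxide capacitance both perturb that intercept, propagating into the extracted trap densities.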
Magnetic properties of more than 20 Cantor alloy samples of varying composition were investigated over a temperature range of 5 K to 300 K and in fields of up to 70 kOe using magnetometry and muon spin relaxation. Two transitions are identified: a spin-glass-like transition that appears between 55 K and 190 K, depending on composition, and a ferrimagnetic transition that occurs at approximately 43 K in multiple samples with widely varying compositions. The magnetic signatures at 43 K are remarkably insensitive to chemical composition. A modified Curie-Weiss model was used to fit the susceptibility data and to extract the net effective magnetic moment for each sample. The resulting values for the net effective moment were either diminished with increasing Cr or Mn concentrations or enhanced with decreasing Fe, Co, or Ni concentrations. Beyond a sufficiently large effective moment, the magnetic ground state transitions from ferrimagnetism to ferromagnetism. The effective magnetic moments, together with the corresponding compositions, are used in a global linear regression analysis to extract element-specific effective magnetic moments, which are compared to values obtained by ab initio density functional theory calculations. Finally, these moments provide the information necessary to controllably tune the magnetic properties of Cantor alloy variants.
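In one common form of such a fit (illustrative; the paper's exact parameterization may differ), the susceptibility is modeled as

\[
\chi(T) = \chi_{0} + \frac{C}{T - \theta},
\qquad
\mu_{\mathrm{eff}} \approx \sqrt{8C}\;\mu_{B},
\]

where \(\chi_{0}\) absorbs temperature-independent contributions, \(\theta\) is the Weiss temperature, and the \(\mu_{\mathrm{eff}}\) expression assumes molar susceptibility in cgs units (C in emu K mol⁻¹).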
Systems engineering today faces a wide array of challenges, ranging from new operational environments to disruptive technologies, necessitating approaches to improve research and development (R&D) efforts. The Aristotelian argument that the "whole is greater than the sum of its parts" offers a conceptual foundation for creating new R&D solutions. Invoking the systems-theoretic concepts of emergence and hierarchy, along with the analytic characteristics of traceability, rigor, and comprehensiveness, is potentially beneficial for guiding R&D strategy and development to bridge the gap between theoretical problem spaces and engineering-based solutions. In response, this article describes systems-theoretic process analysis (STPA) as an example of one such approach to aid early-system R&D discussions. STPA, a "top-down" process that abstracts real complex system operations into hierarchical control structures, functional control loops, and control actions, uses control loop logic to analyze how control actions (designed for desired system behaviors) may be violated and drive the complex system toward states of higher risk. By analyzing how needed controls are not provided (or are out of sequence or stopped too soon) and how unneeded controls are provided (or engaged too long), STPA can help early-system R&D discussions by exploring how requirements and desired actions interact to either mitigate or potentially increase states of risk that can lead to unacceptable losses. This article demonstrates STPA's benefit for early-system R&D strategy and development discussions by describing such diverse use cases as cyber security, nuclear fuel transportation, and US electric grid performance. Together, the traceability, rigor, and comprehensiveness of STPA serve as useful tools for improving R&D strategy and development discussions. In conclusion, leveraging STPA and related systems engineering techniques in early R&D planning and strategy development can help triangulate deeper theoretical meaning and evaluate empirical results to better inform systems engineering solutions.
For reactive burn models in hydrocodes, an equilibrium closure assumption is typically made between the unreacted and product equations of state. In the CTH [1] (not an acronym) hydrocode the assumption of density and temperature equilibrium is made by default, while other codes make a pressure and temperature equilibrium assumption. The main reason for this difference is the computational efficiency of the density-temperature assumption over the pressure-temperature one. With fitting to data, both assumptions can accurately predict reactive flow response using the various models, but the model parameters from one code cannot necessarily be used directly in a different code with a different closure assumption. A new framework is introduced in CTH to allow this assumption to be changed independently for each reactive material. Comparisons of the response and computational cost of the History Variable Reactive Burn (HVRB) reactive flow model with the different equilibrium assumptions are presented.
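Schematically, for a partially reacted cell with product mass fraction \(\lambda\) and unreacted (r) and product (p) equations of state, the two closures read (a sketch of the general idea; CTH's exact mixture rules may differ):

\[
\text{density--temperature:}\quad \rho_{r} = \rho_{p} = \rho,\;\; T_{r} = T_{p} = T,\;\;
p = (1-\lambda)\,p_{r}(\rho,T) + \lambda\, p_{p}(\rho,T);
\]
\[
\text{pressure--temperature:}\quad p_{r} = p_{p} = p,\;\; T_{r} = T_{p} = T,\;\;
v = (1-\lambda)\,v_{r}(p,T) + \lambda\, v_{p}(p,T).
\]

The first form evaluates each equation of state directly at the shared cell state, while the second requires an iterative solve for the common p and T, which is the source of the efficiency difference noted above.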
A new capability for modeling graded density reactive flow materials in the shock physics hydrocode CTH is demonstrated here. Previously, materials could be inserted in CTH with graded material properties, but the sensitivity of the material was not adjusted based on these properties. Of particular interest are materials that are graded in density, sometimes due to pressing or other assembly operations. The sensitivity of explosives to both density and temperature has been well demonstrated in the literature, but to date the material parameters used in a simulation were either fit to a single condition and applied to the entire material, or the material had to be inserted in sections with each section assigned a condition. The reactive flow model xHVRB has been extended to shift explosive sensitivity with initial density, so that sensitivity is also graded in the material. This capability is demonstrated in three examples: the first models detonation transfer in a graded density pellet of HNS, the second is a shaped charge with density gradients in the explosive, and the third is an explosively formed projectile.
Maximizing the production of heterologous biomolecules is a complex problem that can be addressed with a systems-level understanding of cellular metabolism and regulation. Specifically, growth-coupling approaches can increase product titers and yields and also enhance production rates. However, implementing these methods for non-canonical carbon streams is challenging due to gaps in metabolic models. Over four design-build-test-learn cycles, we rewire Pseudomonas putida KT2440 for growth-coupled production of indigoidine from para-coumarate. We explore 4,114 potential growth-coupling solutions and refine one design through laboratory evolution and ensemble data-driven methods. The final growth-coupled strain produces 7.3 g/L indigoidine at 77% maximum theoretical yield in para-coumarate minimal medium. The iterative use of growth-coupling designs and functional genomics with experimental validation was highly effective and agnostic to specific hosts, carbon streams, and final products and thus generalizable across many systems.
Process variations within Field Programmable Gate Arrays (FPGAs) provide a rich source of entropy and are therefore well suited for the implementation of Physical Unclonable Functions (PUFs). However, careful consideration must be given to the design of the PUF architecture to avoid undesirable localized bias effects that adversely impact randomness, an important statistical quality characteristic of a PUF. In this paper, we investigate a ring-oscillator (RO) PUF that leverages localized entropy from individual look-up table (LUT) primitives. A novel RO construction is presented that enables the individual paths through the LUT primitive to be measured and isolated at high precision, and an analysis is presented that demonstrates significant levels of localized design bias. The analysis demonstrates that delay-based PUFs that utilize LUTs as a source of entropy should avoid using FPGA primitives that are localized to specific regions of the FPGA; instead, a more robust PUF architecture can be constructed by distributing path delay components over a wider region of the FPGA fabric. Compact RO PUF architectures that utilize multiple configurations within a small group of LUTs are particularly susceptible to these types of design-level bias effects. The analysis is carried out on data collected from a set of identically designed, hard macro instantiations of the RO implemented on 30 copies of a Zynq 7010 SoC.
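To make the bias mechanism concrete, the following synthetic Python sketch generates RO-PUF response bits by pairwise frequency comparison and shows how a placement-dependent frequency offset skews per-bit uniformity away from the ideal 0.5. All numbers are invented for illustration, not measurements from the Zynq dataset:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical measured RO frequencies: rows = devices, cols = oscillators.
# A monotone offset models the localized design bias discussed above.
n_dev, n_ro = 30, 64
bias = np.linspace(0.0, 2.0, n_ro)            # systematic, placement-dependent
freqs = 100.0 + bias + rng.normal(0, 0.5, (n_dev, n_ro))

# Classic RO-PUF response: compare disjoint oscillator pairs.
pairs = [(2 * i, 2 * i + 1) for i in range(n_ro // 2)]
bits = np.array([[int(f[a] > f[b]) for a, b in pairs] for f in freqs])

# Uniformity check: biased placements push bit means away from the ideal 0.5.
print("per-bit mean across devices:", bits.mean(axis=0))
```

Pairing oscillators from distant fabric regions, as the analysis recommends against for LUT-local designs, corresponds here to growing the systematic offset between pair members until the comparison outcome is fixed by design rather than by process variation.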
Organic co-crystals have emerged as a promising class of semiconductors for next-generation optoelectronic devices due to their unique photophysical properties. This paper presents a joint experimental-theoretical study comparing the crystal structure, spectroscopy, and electronic structure of two charge transfer co-crystals. Reported herein is a novel co-crystal Npe:TCNQ, formed from 4-(1-naphthylvinyl)pyridine (Npe) and 7,7,8,8-tetracyanoquinodimethane (TCNQ) via molecular self-assembly. This work also presents a revised study of the co-crystal composed of Npe and 1,2,4,5-tetracyanobenzene (TCNB) molecules, Npe:TCNB, herein reported with a higher-symmetry (monoclinic) crystal structure than previously published. Npe:TCNB and Npe:TCNQ dimer clusters are used as theoretical model systems for the co-crystals; the geometries of the dimers are compared to geometries of the extended solids, which are computed with density functional theory under periodic boundary conditions. UV-Vis absorption spectra of the dimers are computed with time-dependent density functional theory and compared to experimental UV-Vis diffuse reflectance spectra. Both Npe:TCNB and Npe:TCNQ are found to exhibit neutral character in the S0 state and ionic character in the S1 state. The high degree of charge transfer in the S1 state of both Npe:TCNB and Npe:TCNQ is rationalized by analyzing the changes in orbital localization associated with the S1 transitions.
Accelerators that drive z-pinch experiments transport current densities in excess of 1 MA/cm² in order to melt or ionize the target and implode it on axis. These high current densities stress the transmission lines upstream from the target, where rapid electrode heating causes plasma formation, melt, and possibly vaporization. These plasmas negatively impact accelerator efficiency by diverting some portion of the current away from the target, referred to as "current loss". Simulations that are able to reproduce this behavior may be applied to improving the efficiency of existing accelerators and to designing systems operating at ever higher current densities. The relativistic particle-in-cell code CHICAGO® is the primary code for modeling power flow on Sandia National Laboratories' Z accelerator. We report here on new algorithms that incorporate vaporization and melt into the standard power-flow simulation framework. Taking a hybrid approach, the CHICAGO® kinetic/multi-fluid treatment has been expanded to include vaporization while the quasi-neutral equation-of-motion has been updated for melt at high current densities. For vaporization, a new one-dimensional substrate model provides a more accurate calculation of electrode thermal, mass, and magnetic field diffusion as well as a means of emitting absorbed contaminants and vaporized metal ions. A quasi-fluid model has been implemented expressly to mimic the motion of imploding liners for accurate inductance histories. For melt, a multi-ion Hall-MHD option has been implemented and benchmarked against Alegra MHD. This new model is described with sufficient detail to reproduce these algorithms in any hybrid kinetic code. Physics results from the new code are also presented. A CHICAGO® Hall-MHD simulation of a radial transmission line demonstrates that Hall physics, not included in Alegra, has no significant impact on the diffusion of electrode material. When surface contaminant desorption is mocked in as a hydrogen surface plasma, both the surface and bulk-material plasmas largely compress under the influence of the j × B force. Similar results are seen in Alegra, which also shows magnetic and material diffusion scaling with peak current. Test vaporization simulations using MagLIF and a power-flow experimental geometry show Fe+ ions diffuse only a few hundred µm from the electrodes, so present models of Z power flow remain valid.
Predictive modeling typically relies on Bayesian model calibration to provide uncertainty quantification. Variational inference utilizing fully independent ("mean-field") Gaussian distributions is often used to construct approximate probability density functions. This simplification is attractive since the number of variational parameters grows only linearly with the number of unknown model parameters. However, the resulting diagonal covariance structure and unimodal behavior can be too restrictive to provide useful approximations of intractable Bayesian posteriors that exhibit highly non-Gaussian behavior, including multimodality. High-fidelity surrogate posteriors for these problems can be obtained by considering the family of Gaussian mixtures. Gaussian mixtures are capable of capturing multiple modes and approximating any distribution to an arbitrary degree of accuracy, while maintaining some analytical tractability. Unfortunately, variational inference using Gaussian mixtures with full covariance structures suffers from a quadratic growth in variational parameters with the number of model parameters. The existence of multiple local minima, due to the strongly nonconvex loss functions often associated with variational inference, presents additional complications. These challenges motivate the need for robust initialization procedures to improve the performance and computational scalability of variational inference with mixture models. In this work, we propose a method for constructing an initial Gaussian mixture model approximation that can be used to warm-start the iterative solvers for variational inference. The procedure begins with a global optimization stage in model parameter space. In this step, local gradient-based optimization, globalized through multistart, is used to determine a set of local maxima, which we take to approximate the mixture component centers. Around each mode, a local Gaussian approximation is constructed via the Laplace approximation. Finally, the mixture weights are determined through constrained least squares regression. The robustness and scalability of the proposed methodology are demonstrated through application to an ensemble of synthetic tests using high-dimensional, multimodal probability density functions. The practical aspects of the approach are then demonstrated with inversion problems in structural dynamics.
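The three-stage initialization maps naturally onto standard tools. A compact Python sketch on a toy two-mode target, with nonnegative least squares standing in for the constrained regression (the target density, probe points, and tolerances are illustrative stand-ins, not the paper's test problems):

```python
import numpy as np
from scipy.optimize import minimize, nnls
from scipy.stats import multivariate_normal

def log_post(x):
    """Hypothetical multimodal unnormalized log-posterior (stand-in target)."""
    return np.logaddexp(
        multivariate_normal.logpdf(x, mean=[-2, 0]),
        multivariate_normal.logpdf(x, mean=[2, 1]),
    )

def hessian_fd(f, x, h=1e-4):
    """Central-difference Hessian of a scalar function f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

rng = np.random.default_rng(0)

# 1) Multistart local optimization to locate the mixture component centers.
modes = []
for x0 in rng.uniform(-5, 5, size=(20, 2)):
    res = minimize(lambda x: -log_post(x), x0, method="L-BFGS-B")
    if not any(np.linalg.norm(res.x - m) < 1e-2 for m in modes):
        modes.append(res.x)

# 2) Laplace approximation: covariance = inverse Hessian of -log posterior.
covs = [np.linalg.inv(hessian_fd(lambda x: -log_post(x), m)) for m in modes]

# 3) Mixture weights via nonnegative least squares against posterior values
#    evaluated at a set of probe points.
probes = rng.uniform(-5, 5, size=(200, 2))
A = np.column_stack([multivariate_normal.pdf(probes, m, c)
                     for m, c in zip(modes, covs)])
w, _ = nnls(A, np.exp(log_post(probes)))
w /= w.sum()
print("modes:", modes, "weights:", w)
```

The resulting (weights, means, covariances) triple then seeds the full-covariance mixture passed to the variational solver.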
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user’s guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
Work evaluating spent nuclear fuel (SNF) dry storage canister surface environments and canister corrosion progressed significantly in FY23, with the goal of developing a scientific understanding of the processes controlling initiation and growth of stress corrosion cracking (SCC) cracks in stainless steel canisters in relevant storage environments. The results of the work performed at Sandia National Laboratories (SNL) will guide future work and will contribute to the development of better tools for predicting potential canister penetration by SCC.
For the first time, the optimal local truncation error method (OLTEM) with 125-point stencils and unfitted Cartesian meshes has been developed in the general 3-D case for the Poisson equation for heterogeneous materials with smooth irregular interfaces. The 125-point stencil equations, similar to those for quadratic finite elements, are used for OLTEM. The interface conditions for OLTEM are imposed as constraints at a small number of interface points and do not require the introduction of additional unknowns; i.e., the sparse structure of the global discrete equations of OLTEM is the same for homogeneous and heterogeneous materials. The stencil coefficients of OLTEM are calculated by minimizing the local truncation error of the stencil equations. These derivations include the use of the Poisson equation for the relationship between the different spatial derivatives. Such a procedure provides the maximum possible accuracy of the discrete equations of OLTEM. In contrast to known numerical techniques with quadratic elements and third order of accuracy on conforming and unfitted meshes, OLTEM with the 125-point stencils provides 11th order of accuracy, i.e., an extremely large increase in accuracy of 8 orders for similar stencils. The numerical results show that OLTEM yields much more accurate results than high-order finite elements with much wider stencils. The increased numerical accuracy of OLTEM leads to an extremely large increase in computational efficiency. Additionally, a new post-processing procedure with the 125-point stencil has been developed for the calculation of the spatial derivatives of the primary function. The post-processing procedure includes the minimization of the local truncation error and the use of the Poisson equation. It is demonstrated that the use of the partial differential equation (PDE) for the 125-point stencils improves the accuracy of the spatial derivatives by 6 orders compared to post-processing without the use of the PDE, as in existing numerical techniques. At an accuracy of 0.1% for the spatial derivatives, OLTEM reduces the number of degrees of freedom by 900 to 4×10⁶ times compared to quadratic finite elements. The developed post-processing procedure can be easily extended to unstructured meshes and can be used independently with existing post-processing techniques (e.g., with finite elements).
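At a high level, the stencil construction can be summarized as follows (a schematic, not the paper's full derivation):

\[
\sum_{i=1}^{125} k_{i}\, u(\mathbf{x}_{i}) \;-\; \sum_{i=1}^{125} b_{i}\, f(\mathbf{x}_{i}) \;=\; e_{\mathrm{loc}},
\]

where the coefficients \(k_{i}\) and \(b_{i}\) are chosen to minimize the local truncation error \(e_{\mathrm{loc}}\) obtained by Taylor expansion of the exact solution about the stencil center, with the Poisson equation \(\Delta u = f\) used to eliminate dependent derivative combinations order by order. The same minimization, applied to a derivative-reconstruction stencil, yields the post-processing procedure described above.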