This report analyzes data from multi-arm caliper (MAC) surveys taken at the Bryan Mound Strategic Petroleum Reserve site to determine baseline statistics for the original innermost cemented casing or the subsequently installed liner. In addition to analyzing the internal diameters from the MAC surveys, this analysis approximates casing weight, an important metric for determining the strength of well sections. Casing weight is calculated for each section, survey, and well. Results from the analysis show that most wells reflect the dimensions in the original as-built drawings. There are, however, a few exceptions. Some well sections have calculated wall thicknesses outside API tolerance. In addition, some well section depths differ from the as-built drawings. All results are discussed on a well-by-well basis. Where applicable, information from this report should be used to update as-built drawings and to aid in creating more accurate well models for future studies.
In recent years, seismicity rates in the US have risen dramatically due to increased activity in onshore oil and gas production. This project attempts to tie observations about induced seismicity to dehydration reactions in laumontite, a common mineral found in fault gouge in crystalline basement formations. It is the hypothesis of this study that, in addition to pressure-related changes in the in situ stress state, the injection of wastewater pushes new fluids into crystalline fault fracture networks that are not in chemical equilibrium with the mineral assemblages, particularly laumontite in fault gouge. Experiments were conducted under hydrothermal conditions in which samples of laumontite were exposed to NaCl brines at different pH values. After exposure to different fluid chemistries for 8 weeks at 90 °C, we did not observe substantial alteration of laumontite. In hydrostatic compaction experiments, all samples deformed similarly in the presence of different fluids. Pore pressure decreases were observed at the start of a 1-week hold at 85 °C in a 1M NaCl pH 3 solution, suggesting that acidic fluids might stabilize pore pressures in basement fault networks. Friction experiments on laumontite and kaolinite powders showed that both materials have similar coefficients of friction. Mixtures with partial kaolinite content showed a slight decrease in the coefficient of friction, which could be sufficient to trigger slip on critically stressed basement faults.
Quantum-size-controlled photoelectrochemical (QSC-PEC) etching, which uses quantum confinement effects to control size, can potentially enable the fabrication of epitaxial quantum nanostructures with unprecedented accuracy and precision across a wide range of materials systems. However, many open questions remain about this new technique, including its limitations and broader applicability. In this project, using an integrated experimental and theoretical modeling approach, we pursue a greater understanding of the time-dependent QSC-PEC etch process and seek to uncover the underlying mechanisms that determine its ultimate accuracy and precision. We also seek to broaden our understanding of its ultimate applicability in emerging nanostructures and nanodevices.
The retina plays an important role in animal vision: namely, it pre-processes visual information before sending it to the brain. The goal of this LDRD was to develop models of motion-sensitive retinal cells for the purpose of developing retinal-inspired algorithms to be applied to real-world data specific to Sandia's national security missions. We specifically focus on detection of small, dim moving targets amidst varying types of clutter or distractor signals. We compare a classic motion-sensitive model, the Hassenstein-Reichardt model, to a model of the OMS (object-motion-sensitive) cell, and find that the Reichardt model performs better under continuous clutter (e.g., white noise) but is very sensitive to particular stimulus conditions (e.g., target velocity). We also demonstrate that lateral inhibition, a ubiquitous characteristic of neural circuitry, can give rise to target-size tuning, specifically improving detection of small targets.
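Since the Hassenstein-Reichardt correlator is central to the comparison above, a minimal sketch may help; this is a generic textbook implementation (the time constant, sampling rate, and stimulus below are illustrative assumptions, not the project's parameters):

```python
import numpy as np

def lowpass(signal, dt, tau):
    """First-order low-pass filter, acting as the Reichardt delay element."""
    out = np.zeros_like(signal)
    alpha = dt / (tau + dt)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def reichardt(left, right, dt=1e-3, tau=50e-3):
    """Opponent Hassenstein-Reichardt correlator for two adjacent inputs.

    A positive summed output indicates net motion from 'left' to 'right':
    the delayed left signal coincides with the undelayed right signal.
    """
    return lowpass(left, dt, tau) * right - lowpass(right, dt, tau) * left

# Illustrative stimulus: a small bright target crosses the left input
# 50 ms before the right input (rightward motion).
t = np.arange(0.0, 1.0, 1e-3)
left = np.exp(-((t - 0.40) / 0.02) ** 2)
right = np.exp(-((t - 0.45) / 0.02) ** 2)
print(reichardt(left, right).sum() > 0)  # True: net rightward response
```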
This work characterizes the mechanical performance of the selected composites with four different overlap lengths: 0.25 in, 0.5 in, 0.75 in, and 1.0 in. The composite materials in this study were one carbon composite (AS4C/UF3662) and one glass composite (E-glass/UF3662). Both used the same UF3662 resin but different fibers: carbon AS4C and E-glass. The mechanical loading in this study was limited to quasi-static loading at 2 mm/min, equivalent to a strain rate of 5×10⁻⁴ s⁻¹. Digital cameras were set up to record images during the mechanical testing. The full-field deformation data obtained from Digital Image Correlation (DIC) and the side views of the specimens were used to understand the different failure modes of the composites. The maximum load and the ultimate strength, with consideration of the location of the failure for the different overlap lengths, were compared and plotted together to understand the effect of overlap length on the mechanical performance of the overlapped composites.
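For readers checking the numbers, the quoted rate conversion follows the usual crosshead-speed relation; the effective gauge length below is inferred from the stated values, not given in the original:

\[
\dot{\varepsilon} = \frac{v}{L_0} = \frac{2\ \mathrm{mm/min}}{L_0} = \frac{0.033\ \mathrm{mm/s}}{L_0} \approx 5\times10^{-4}\ \mathrm{s^{-1}}
\quad\Longrightarrow\quad L_0 \approx 67\ \mathrm{mm}.
\]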
Imaging diagnostics that utilize coherent light, such as digital in-line holography, are important for object sizing and tracking applications. However, in explosive, supersonic, or hypersonic environments, gas-phase shocks impart imaging distortions that obscure internal objects. To circumvent this problem, some research groups have conducted experiments in vacuum, which inherently alters the physical behavior. Other groups have utilized single-shot flash x-ray or high-speed synchrotron x-ray sources to image through shock-waves. In this work, we combine digital in-line holography with a phase conjugate mirror to reduce the phase distortions caused by shock-waves. The technique operates by first passing coherent light through the shock-wave phase-distortion and then through a phase-conjugate mirror. The phase-conjugate mirror is generated by a four-wave mixing process to produce a return beam with a phase-delay exactly opposite that of the forward beam. Therefore, by passing the return beam back through the phase-distortion, the phase delays picked up during the initial pass are canceled, thereby producing improved coherent imaging. In this work, we implement phase conjugate digital in-line holography (PCDIH) for the first time with a nanosecond pulse-burst laser and ultra-high-speed cameras. This technique enables accurate measurement of the three-dimensional position and velocity of objects through shock-wave distortions at video rates up to 5 MHz. The technology is applied to improve three-dimensional imaging in a variety of environments: imaging supersonic shock-waves through turbulence, sizing objects through laser-spark plasma-generated shock-waves, and tracking explosively generated hypersonic fragments. Theoretical foundations and additional capabilities of this technique are also discussed.
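The cancellation mechanism described above can be summarized in one line; this is the textbook idealization, assuming a lossless, stationary phase distortion φ(r):

\[
E_{\mathrm{fwd}} = A(\mathbf{r})\,e^{i\phi(\mathbf{r})}
\;\xrightarrow{\;\mathrm{PCM}\;}\;
E_{\mathrm{ret}} \propto A(\mathbf{r})\,e^{-i\phi(\mathbf{r})}
\;\xrightarrow{\;\mathrm{distortion}\;}\;
A(\mathbf{r})\,e^{-i\phi(\mathbf{r})}\,e^{i\phi(\mathbf{r})} = A(\mathbf{r}),
\]

so the second pass through the shock-wave phase distortion exactly cancels the phase picked up on the first pass, leaving only the object information in A(r).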
Existing models for most materials do not describe phase transformations and associated lattice dynamics (kinetics) under extreme conditions of pressure and temperature. Dynamic x-ray diffraction (DXRD) allows material investigations in situ on an atomic scale due to the correlation between solid-state structures and their associated diffraction patterns. In this LDRD project we have developed a nanosecond laser-compression and picosecond-to-nanosecond x-ray diffraction platform for dynamically-compressed material studies. A new target chamber in the Target Bay in building 983 was commissioned for the ns, kJ Z-Beamlet laser (ZBL) and the 0.1 ns, 250 J Z-Petawatt (ZPW) laser systems, which were used to create 8-16 keV plasma x-ray sources from thin metal foils. The 5 ns, 15 J Chaco laser system was converted to a high-energy laser shock driver to load material samples to GPa stresses. Since laser-to-x-ray energy conversion efficiency above 10 keV is low, we employed polycapillary x-ray lenses for a 100-fold fluence increase compared to a conventional pinhole aperture while simultaneously reducing the background significantly. Polycapillary lenses enabled diffraction measurements up to 16 keV with ZBL as well as diffraction experiments with ZPW. This x-ray diffraction platform supports experiments that are complementary to gas guns and the Z facility due to different strain rates. Ultimately, there is now a foundation to evaluate DXRD techniques and detectors in-house before transferring the technology to Z.
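The structure-to-pattern correlation that DXRD exploits is the standard Bragg condition (background, not a result of this project):

\[
n\lambda = 2d\sin\theta,
\]

so at a fixed source wavelength λ, compression-induced changes in the lattice spacing d appear as measurable shifts in the diffraction angle θ.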
Stochastic optimization deals with making highly reliable decisions under uncertainty. Chance constraints are a crucial tool of stochastic optimization for developing mathematical optimization models; they form the backbone of many important national security data science applications, including critical infrastructure resiliency, cyber security, power system operations, and disaster relief management. However, existing algorithms to solve chance-constrained optimization models are severely limited by problem size and structure. In this investigative study, we (i) develop new algorithms to approximate chance-constrained optimization models, (ii) demonstrate the application of chance constraints to a national security problem, and (iii) investigate related stochastic optimization problems. We believe our work will pave the way for new research in stochastic optimization as well as help secure national infrastructure against unforeseen attacks.
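For readers unfamiliar with the construct, a generic chance-constrained model has the form below; the notation is illustrative, not the study's own:

\[
\min_{x \in X}\; c^{\top}x
\quad\text{s.t.}\quad
\mathbb{P}\left[\,G(x,\xi) \le 0\,\right] \ge 1-\varepsilon,
\]

where ξ is a random vector and ε the allowed violation probability. The difficulty driving the algorithmic work is that this feasible set is generally nonconvex and the probability itself is expensive to evaluate.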
Modeling material and component behavior using finite element analysis (FEA) is critical for modern engineering. One key to a credible model is an accurate material model, with calibrated model parameters, which describes the constitutive relationship between the deformation and the resulting stress in the material. As such, identifying material model parameters is critical to accurate and predictive FEA. Traditional calibration approaches use only global data (e.g., extensometers and resultant force) and simplified geometries to find the parameters. However, the combination of rapidly maturing full-field characterization techniques (e.g., Digital Image Correlation (DIC)) with inverse techniques (e.g., the Virtual Fields Method (VFM)) provides a novel and improved method for parameter identification. This LDRD tested that idea: in particular, whether more parameters could be identified per test when using full-field data. The research described in this report successfully proves this hypothesis by comparing the VFM results with traditional calibration methods. Important products of the research include: verified VFM codes for identifying model parameters, a new look at parameter covariance in material model parameter estimation, new validation techniques to better utilize full-field measurements, and an exploration of optimized specimen design for improved data richness.
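As background on the inverse technique, the VFM rests on the principle of virtual work; in its quasi-static form with negligible body forces (illustrative notation):

\[
-\int_{V} \boldsymbol{\sigma}\!\left(\boldsymbol{\varepsilon}_{\mathrm{DIC}};\,\theta\right) : \boldsymbol{\varepsilon}^{*}\, dV
\;+\; \int_{\partial V} \mathbf{T}\cdot\mathbf{u}^{*}\, dS \;=\; 0,
\]

which must hold for every kinematically admissible virtual field u*. Evaluating this identity for several independent virtual fields, with strains supplied by DIC and tractions by the load cell, yields a system of equations (or a residual to minimize) for the material parameters θ.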
The high-level objective of this project is to solve national-security problems associated with petroleum use, cost, and environmental impacts by enabling more efficient use of natural-gas-fueled internal combustion engines. An improved science base on end-gas autoignition, or “knock,” is required to support engineering of more efficient engine designs through predictive modeling. An existing optical diesel engine facility is retrofitted for natural gas fueling with laser-spark-ignition combustion to provide in-cylinder imaging and pressure data under knocking combustion. Zero-dimensional chemical-kinetic modeling of autoignition, adiabatically constrained by the measured cylinder pressure, isolates the role of autoignition chemistry. OH* chemiluminescence imaging reveals six different categories of knock onset that depend on proximity to engine surfaces and the in-cylinder deflagration. Modeling results show excellent prediction regardless of the knock category, thereby validating state-of-the-art kinetic mechanisms. The results also provide guidance for future work to build a science base on the factors that affect the deflagration rate.
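One common way to constrain zero-dimensional kinetics with a measured pressure trace, and presumably close to the approach described, is the adiabatic-core approximation, in which the unburned-gas temperature is advanced isentropically:

\[
\frac{dT}{T} = \frac{\gamma-1}{\gamma}\,\frac{dp}{p},
\]

with the chemical mechanism then integrated along the resulting temperature-pressure history to predict autoignition timing independent of the deflagration details.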
Research interest in developing computing systems that represent logic states using quantum mechanical observables has only increased in the few decades since the field's inception. While quantum computers with Josephson-junction-based qubits have become commercially available in the last three years, there is also a significant research initiative to develop scalable quantum computers with so-called donor qubits. B.E. Kane first published a device implementation of a silicon-based quantum computer in 1998, which sparked a wave of follow-on advances due to the attractive nature of silicon-based computing [7]. Nearly all commercial computing systems using classical binary logic are fabricated on a silicon substrate, and silicon is inarguably the most mature material system for semiconductor devices, making it possible to couple classical and quantum bits on a single substrate. The process of growing and processing silicon crystals into wafers is extremely robust and leads to minimal impurities or structural defects.
This project focused on providing a fundamental mechanistic understanding of the complex degradation mechanisms associated with Pellet/Clad Debonding (PCD) through the use of a unique suite of approaches: novel synthesis of surrogate spent nuclear fuel, in-situ nanoscale experiments on surrogate interfaces, multi-modeling, and characterization of decommissioned commercial spent fuel. The understanding of degradation in a broad class of metal/ceramic interfaces developed within this project provides the technical basis related to the safety of high burn-up fuel, a problem of interest to the DOE.
This document archives the results of a Laboratory Directed Research and Development (LDRD) project sponsored by Sandia National Laboratories (SNL). In this work, SNL developed the first known high-energy hyperspectral computed tomography system for industrial and security applications. The main results include dramatic beam-hardening artifact reduction achieved by using the hyperspectral reconstruction as a bandpass filter, without the need for any other computation or pre-processing. Additionally, this work demonstrated the ability to apply supervised and unsupervised learning methods to the hyperspectral reconstruction data for materials characterization and identification, which is not possible using traditional computed tomography systems or approaches.
We report on the fabrication and characterization of nanocrystalline ZnO films for use as a random laser physical unclonable function (PUF). Correlations between processing conditions and film microstructure will be made to optimize the lasing properties and random response. We will specifically examine the repeatability and security of PUFs demonstrated in this novel system. This demonstration has promise to impact many of Sandia's core missions, including counterfeit detection.
Pressure losses and aerosol collection efficiencies were measured for fibrous filter materials at air-flow rates consistent with high efficiency filtration (hundreds of cubic feet per minute). Microfiber filters coated with nanofibers were purchased and fabricated into test assemblies for a 12-inch duct system designed to mimic high efficiency filtration testing of commercial and industrial processes. Standards and specifications for high efficiency filtration from a variety of institutions (e.g., DOE, ASHRAE, ASME) were studied to assess protocols for design, testing, operations and maintenance, and quality assurance. Three materials with varying Minimum Efficiency Reporting Values (MERV) were challenged with sodium chloride aerosol. Substantial filter loading was observed, with aerosol collection efficiencies and pressure losses increasing during the experiments. Filter designs will be optimized and characterized in subsequent years of this study. Additional testing will be performed with higher hazard aerosols at Oak Ridge National Laboratory.
This project targeted a full-field understanding of the conversion of plastic work into heat using advanced diagnostics (digital image correlation, DIC, combined with infrared, IR, imaging). This understanding will act as a catalyst for reformulating the prevalent simplistic model, which will ultimately transform Sandia's ability to design for and predict thermomechanical behavior, impacting national security applications including nuclear weapon assessments of accident scenarios. 304L stainless steel dogbone specimens are pulled in tension at quasi-static rates until failure, and full-field deformation and temperature data are captured while accounting for thermal losses. The IR temperature fields are mapped onto the DIC coordinate system (Lagrangian formulation). The resultant fields are used to calculate the Taylor-Quinney coefficient, β, at two strain rates (0.002 s⁻¹ and 0.08 s⁻¹) and two temperatures (room temperature, RT, and 250°C).
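In its usual local form (with thermal losses accounted for separately, as noted above), the Taylor-Quinney coefficient is the fraction of plastic work rate converted to heat:

\[
\beta = \frac{\rho\, c_p\, \dot{T}}{\boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^{p}},
\]

where ρ is the density, c_p the specific heat, Ṫ the heating rate, and σ : ε̇ᵖ the plastic work rate; the combined DIC/IR measurement supplies the numerator and denominator pointwise over the specimen surface.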
The Near-Field Scanning Optical Microscope (NSOM) was used to image a wide array of samples using a variety of standard and non-standard operating conditions on a custom system built in Org. 5625. The ability of this technique to produce high-quality images was assessed during this one-year LDRD. For details about the devices imaged, as well as the experimental details, please refer to the classified report from the project manager, Rich Dondero, or the NSP IA lead, Kristina Czuchlewski.
A coupled electrochemical/thermochemical cycle was investigated to produce hydrogen from renewable resources. Like a conventional thermochemical cycle, this cycle leverages chemical energy stored in a thermochemical working material that is reduced thermally by solar energy. However, in this concept, the stored chemical energy needs only to be partially capable of splitting steam to produce hydrogen. To push the reaction to completion, a proton-conducting membrane is employed to separate hydrogen as it is produced, thus shifting the thermodynamics toward further hydrogen production. This novel coupled-cycle concept provides several benefits. First, the required oxidation enthalpy of the reversible thermochemical material is reduced, enabling the process to occur at lower temperatures. Second, removing the requirement for spontaneous steam splitting widens the scope of materials compositions, allowing for less expensive/more abundant elements to be used. Lastly, thermodynamics calculations suggest that this concept can potentially reach higher efficiencies than photovoltaic-to-electrolysis hydrogen production methods. This Exploratory Express LDRD involved assessing the practical feasibility of the proposed coupled cycle. A test stand was designed and constructed, and proton-conducting membranes were synthesized. An LDRD plus-up of $10k enabled the remediation of a membrane sealing issue and testing with an improved membrane. However, the membrane proved too thick for efficient proton conduction, and there were insufficient funds to continue. While the full proof of concept was not achieved, the individual components of the experiment were validated and new capabilities that can be leveraged by a variety of programs were developed.
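The "shifting the thermodynamics" argument can be made concrete with the ideal-gas equilibrium condition for the hydrogen-producing oxidation step (a simplified statement, not the project's detailed model):

\[
\left.\frac{p_{\mathrm{H_2}}}{p_{\mathrm{H_2O}}}\right|_{\mathrm{eq}} = \exp\!\left(-\frac{\Delta G_{\mathrm{ox}}}{RT}\right),
\]

so continuously extracting H₂ through the proton-conducting membrane holds p_H₂ below its equilibrium value and drives further hydrogen production even when ΔG_ox is unfavorable.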
In this work we propose an approach for accelerating Uncertainty Quantification (UQ) analysis in the context of multifidelity applications. In the presence of complex multiphysics applications, which often require a prohibitive computational cost for each evaluation, multifidelity UQ techniques try to accelerate the convergence of statistics by leveraging the information collected from a larger number of lower-fidelity model realizations. However, at the state of the art, the performance of virtually all multifidelity UQ techniques is tied to the correlation between the high- and low-fidelity models. In this work we propose a multifidelity UQ framework based on the identification of independent important directions for each model. The main idea is that if the responses of each model can be represented in a common space, that space can be shared to enhance the correlation when the samples are drawn with respect to it instead of the original variables. Two additional advantages follow from this approach. First, the models might be correlated even if their original parametrizations are chosen independently. Second, if the shared space between models has a lower dimensionality than the original spaces, the UQ analysis might benefit from a dimension-reduction standpoint. In this work we designed this general framework and tested it on several test problems, ranging from analytical functions for verification purposes up to more challenging application problems such as an aero-thermo-structural analysis and a scramjet flow analysis.
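To make the correlation dependence explicit, recall the control-variate form underlying most multifidelity estimators (illustrative notation, assuming the low-fidelity statistic can be estimated cheaply to high accuracy):

\[
\hat{Q}^{\mathrm{MF}} = \hat{Q}_{\mathrm{HF}} + \alpha\left(\mu_{\mathrm{LF}} - \hat{Q}_{\mathrm{LF}}\right),
\qquad
\mathbb{V}\!\left[\hat{Q}^{\mathrm{MF}}\right] \approx \frac{\sigma_{\mathrm{HF}}^{2}}{N}\left(1-\rho^{2}\right),
\]

where ρ is the correlation between the high- and low-fidelity outputs and N the number of high-fidelity samples. Drawing samples through the shared low-dimensional space aims precisely at increasing ρ beyond what the original parametrizations provide.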
Verification results for Sierra/SM using inexact reference solutions have often exhibited unsatisfactory convergence behavior. With an understanding of the convergence behavior for these types of tests, one can avoid falsely attributing pathologies of the test to incorrectness of the code. Simple theoretical results highlight that for an inexact reference solution, two conditions must be met to observe asymptotic convergence. These conditions, and the resulting types of convergence behaviors, are further illustrated with graphical examples depicting the exact, inexact reference, and sequence of numerical solutions as vectors (in a function space). A stress concentration problem is adopted to contrast convergence behaviors when using inexact (classical linear elastic) and exact (manufactured) reference solutions. Convergence is not initially attained with the classical solution. Convergence with the manufactured solution indicates that the convergence failure with the classical reference did not result from code error and provides insight into how asymptotic convergence could be attained with the classical reference solution for this problem by modifying the computational models.
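The "two conditions" can be seen from a triangle-inequality split of the observed error against an inexact reference u_ref (an illustrative restatement):

\[
\left\|u_h - u_{\mathrm{ref}}\right\| \le \left\|u_h - u\right\| + \left\|u - u_{\mathrm{ref}}\right\|,
\]

so the measured error tracks the true discretization error ‖u_h − u‖ only while that term dominates the fixed reference error ‖u − u_ref‖; once the mesh is refined past that point, convergence plots stall at the reference-error floor.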
A reduced order modeling capability has been developed to reduce the computational burden associated with time-domain solutions of structural dynamic models with linear viscoelastic materials. The discretized equations of motion produce convolution integrals, resulting in a linear system with nonviscous damping forces. The challenge associated with the reduction of nonviscously damped linear systems is the selection and computation of the appropriate modal basis for modal projection. The system produces a nonlinear eigenvalue problem that is challenging to solve and requires the use of specialized algorithms not readily available in commercial finite element packages. This SAND report summarizes the LDRD discoveries of a reduction scheme developed for monolithic finite element models and provides preliminary investigations into extensions of the method using component mode synthesis. In addition, this report provides a background overview of structural dynamic modeling of structures with linear viscoelastic materials and an overview of a new code capability in Sierra Structural Dynamics to output the system-level matrices computed on multiple processors.
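The nonlinear eigenvalue problem mentioned above arises as follows (the standard form for nonviscous damping, with illustrative notation):

\[
M\ddot{x}(t) + \int_{0}^{t} G(t-\tau)\,\dot{x}(\tau)\,d\tau + Kx(t) = f(t)
\quad\Longrightarrow\quad
\left(\lambda^{2}M + \lambda\,\hat{G}(\lambda) + K\right)\boldsymbol{\phi} = \mathbf{0},
\]

where Ĝ is the Laplace transform of the relaxation kernel. Because Ĝ itself depends on λ, the eigenproblem is nonlinear in λ, which is why standard linear eigensolvers in commercial packages do not apply directly.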
There has been much interest in leveraging the topological order of materials for quantum information processing. Among the various solid-state systems, one-dimensional topological superconductors made out of strongly spin-orbit-coupled nanowires have been shown to be the most promising material platform. In this project, we investigated the feasibility of turning silicon, which is a non-topological semiconductor and has weak spin-orbit coupling, into a one-dimensional topological superconductor. Our theoretical analysis showed that it is indeed possible to create a sizable effective spin-orbit gap in the energy spectrum of a ballistic one-dimensional electron channel in silicon with the help of nano-magnet arrays. Experimentally, we developed magnetic materials needed for fabricating such nano-magnets, characterized the magnetic behavior at low temperatures, and successfully demonstrated the required magnetization configuration for opening the spin-orbit gap. Our results pave the way toward a practical topological quantum computing platform using silicon, one of the most technologically mature electronic materials.
This report summarizes the results of the LDRD Exploratory Express project 211666-01, titled "Coupled Magnetic Spin Dynamics and Molecular Dynamics in a Massively Parallel Framework".
Pressure-driven assembly of ligand-grafted gold nanoparticle superlattices is a promising approach for fabricating gold nanostructures, such as nanowires and nanosheets. However, optimizing this fabrication method requires an understanding of the mechanics of their complex hierarchical assemblies at high pressures. We use molecular dynamics simulations to characterize the response of alkanethiol-grafted gold nanoparticle superlattices to applied hydrostatic pressures up to 15 GPa, and demonstrate that the internal mechanics depend significantly on ligand length. At low pressures, intrinsic voids govern the mechanics of pressure-induced compaction, and the dynamics of collapse of these voids under pressure depend significantly on ligand length. These microstructural observations correlate well with the observed trends in bulk modulus and elastic constants. For the shortest ligands at high pressures, coating failure leads to gold core-core contact, a harbinger of irreversible response and eventual sintering. This behavior was unexpected under hydrostatic loading and was only observed for the shortest ligands.
Four tensile coupon designs of PH13-8Mo H950 steel were tested to failure at quasi-static rates to obtain data to calibrate the Xue-Wierzbicki failure model for ductile fracture. The tests recorded the force-displacement response, location of the first crack, displacement to fracture, area reduction, and crack propagation path. The test method and coupon designs were adopted from Tomasz Wierzbicki’s “Calibration and evaluation of seven fracture models” report. The Xue-Wierzbicki model predicts fracture based on accumulated equivalent plastic strain, stress triaxiality, and the deviatoric state parameter. Calibrating the Xue-Wierzbicki failure model required testing four coupon designs to calculate the four free parameters in the model. The coupon designs tested a range of stress triaxialities with two axisymmetric tests, one shear test, and one plane stress test. The data obtained and presented in this report can be used to develop a Xue-Wierzbicki fracture model for PH13-8Mo H950 steel.
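For context, models of this family are typically posed as an accumulated damage indicator (a generic form; the specific Xue-Wierzbicki fracture envelope and its four parameters are given in the cited report):

\[
D = \int_{0}^{\bar{\varepsilon}_p} \frac{d\bar{\varepsilon}_p}{\bar{\varepsilon}_f(\eta,\xi)},
\qquad \text{fracture when } D = 1,
\]

where η is the stress triaxiality, ξ the deviatoric state parameter, and ε̄_f(η, ξ) the strain-to-fracture envelope; the four coupon geometries probe different (η, ξ) combinations to pin down the envelope's free parameters.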
Acausal control of a wave energy converter (WEC) is a concept that has been popular since the birth of modern wave energy research in the 1970s. This concept has led to considerable research into wave prediction and feedforward WEC control algorithms. However, the findings in this report largely negate the need for wave prediction to improve WEC energy absorption and favor instead feedback-driven control strategies. Feedback control is shown to provide performance that rivals a prediction-based controller, which has been unrealistically assumed to have perfect prediction.
We present the relative timing and pulse-shape discrimination performance of an H1949-50 photomultiplier tube compared to a SensL ArrayX-B0B6_64S coupled to a SensL ArrayC-60035-64P-PCB Silicon Photomultiplier array. The goal of this work is to enable the replacement of photomultiplier readout of scintillators with Silicon Photomultiplier devices, which are more robust and have higher particle detection efficiency. The report quantifies the degradation of these performance parameters using commercial off-the-shelf summing circuits and motivates the development of an improved summing circuit: the pulse-shape discrimination figure-of-merit drops from 1.7 at 500 keVee to 1.4, and the timing resolution (σ) is 288 ps for the photomultiplier readout and approximately 1 ns for the Silicon Photomultiplier readout. A degradation of this size will have a large negative impact on any device that relies on timing coincidence or pulse-shape discrimination to detect neutron interactions, such as neutron kinematic imaging or multiplicity measurements.
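For reference, the pulse-shape discrimination figure-of-merit quoted above is conventionally defined as (standard definition, assumed here):

\[
\mathrm{FOM} = \frac{\left|\mu_{n}-\mu_{\gamma}\right|}{\mathrm{FWHM}_{n} + \mathrm{FWHM}_{\gamma}},
\]

the separation between the neutron and gamma peaks in the discrimination parameter divided by the sum of their widths, so the drop from 1.7 to 1.4 directly reflects increased overlap between the two populations.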
A new arms race is emerging among global powers: the hypersonic weapon. Hypersonics are flight vehicles that travel at Mach 5 (five times the speed of sound) or faster. They can cruise in the atmosphere, unlike traditional exo-atmospheric ballistic missiles, allowing stealth and maneuverability during midflight. Faster, lower, and stealthier means the missiles can better evade adversary defense systems. The U.S. has experimented with hypersonics for years, but current investments by Russia and China in their own offensive hypersonic systems may render U.S. missile defense systems ineffective. For the U.S. to avoid obsolescence in this strategically significant technology arena, hypersonics, combined with autonomy, need to be a force multiplier.
This report describes software tools that can be used to evaluate and mitigate potential glare and avian-flux hazards from photovoltaic and concentrating solar power (CSP) plants. Enhancements to the Solar Glare Hazard Analysis Tool (SGHAT) include new block-space receptor models, integration of PVWatts for energy prediction, and a 3D daily glare visualization feature. Tools and methods to evaluate avian-flux hazards at CSP plants with large heliostat fields are also discussed. Alternative heliostat standby aiming strategies were investigated to reduce the avian-flux hazard and minimize impacts to operational performance. Finally, helicopter flyovers were conducted at the National Solar Thermal Test Facility and at the Ivanpah Solar Electric Generating System to evaluate the alternative heliostat aiming strategies and to provide a basis for model validation. Results showed that the models generally overpredicted the measured results, but they were able to simulate the trends in irradiance values with distance. A heliostat up-aiming strategy is recommended to alleviate both glare and avian-flux hazards, but operational schemes are required to reduce the impact on heliostat slew times and plant performance. Future studies should consider the trade-offs and collective impacts on these three factors of glare, avian-flux hazards, and plant operations and performance.
We present a preliminary investigation of the use of Multi-Layer Perceptrons (MLPs) and Recurrent Neural Networks (RNNs) as surrogates of parameter-to-prediction maps of computationally expensive dynamical models. In particular, we target the approximation of Quantities of Interest (QoIs) derived from the solution of partial differential equations (PDEs) at different time instants. In order to limit the scope of our study while targeting a relevant application, we focus on the problem of computing variations in ice sheet mass (our QoI), which is a proxy for global mean sea-level changes. We present a number of neural network formulations and compare their performance with that of Polynomial Chaos Expansions (PCE) constructed on the same data.
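A minimal sketch of the surrogate construction follows; this is a generic scikit-learn illustration with synthetic data standing in for the ice-sheet model, and none of the sizes, architectures, or data here reproduce the study's actual setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: n_params model inputs mapped to a QoI evaluated
# at n_times time instants (e.g., mass change over a forecast window).
n_samples, n_params, n_times = 500, 4, 10
X = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
Y = np.stack(
    [np.sum(X**2, axis=1) * t for t in np.linspace(0.1, 1.0, n_times)],
    axis=1,
)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# One MLP emulating the full parameter-to-QoI map (multi-output regression);
# the trained surrogate replaces the expensive forward model in UQ loops.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                         random_state=0)
surrogate.fit(X_train, Y_train)
print("held-out R^2:", surrogate.score(X_test, Y_test))
```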
Integration of renewable power sources into electrical grids remains an active research and development area, particularly for less developed renewable energy technologies such as wave energy converters (WECs). High spatio-temporal resolution and accurate wave forecasts at a potential WEC (or WEC array) lease area are needed to improve WEC power prediction and to facilitate grid integration, particularly for microgrid locations. The availability of high quality measurement data from recently developed low-cost buoys allows for operational assimilation of wave data into forecast models at remote locations where real-time data have previously been unavailable. This work includes the development and assessment of a wave modeling framework with real-time data assimilation capabilities for WEC power prediction. Spoondrift wave measurement buoys were deployed off the coast of Yakutat, Alaska, a microgrid site with high wave energy resource potential. A wave modeling framework with data assimilation was developed and assessed, and was most effective when the incoming forecasted boundary conditions did not represent the observations well. In that case, assimilation of the wave height data using the ensemble Kalman filter reduced the wave height forecast normalized root mean square error from 27% to an average of 16% over a 12-hour period, which in turn reduced the wave power forecast error from 73% to 43%. In summary, the use of the low-cost wave buoy data assimilated into the wave modeling framework improved the forecast skill and will provide a useful development tool for the integration of WECs into electrical grids.
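The assimilation step referenced above follows the standard (perturbed-observation) ensemble Kalman filter update, in textbook notation:

\[
x_{i}^{a} = x_{i}^{f} + K\left(y + \eta_{i} - Hx_{i}^{f}\right),
\qquad
K = P^{f}H^{\top}\left(HP^{f}H^{\top} + R\right)^{-1},
\]

where the forecast covariance P^f is estimated from the ensemble, H maps the model state to the buoy observations y, and the perturbations η_i are drawn from the observation-error covariance R.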
In designing a security module for inverter communications in a DER environment, it is critical to consider the impact of the additional security on the environment, as well as what types of security are required for the various messages that must pass between the inverter and a utility. Also, since cyber security is more than just preventing an unauthorized user from viewing data, mechanisms for proving identity and for ensuring that data cannot be altered without such a modification being discovered are needed. This is where the security principles of confidentiality, integrity, and availability come into play. For different types of communications, each of these security principles may be essential or not needed at all. Furthermore, the cost and constraints of applying cryptography to secure DER communications must be considered to help determine what is feasible within this environment and what the impact and cost of applying common cryptographic protections to inverter communications will be.
Data-driven modeling, including machine learning methods, continues to play an increasing role in society. Data-driven methods impact decision making for applications ranging from everyday determinations about which news people see and control of self-driving cars to high-consequence national security situations related to cyber security and analysis of nuclear weapons reliability. Although modern machine learning methods have made great strides in model induction and show excellent performance in a broad variety of complex domains, uncertainty remains an inherent aspect of any data-driven model. In this report, we provide an update to the preliminary results on uncertainty quantification for machine learning presented in SAND2017-6776. Specifically, we improve upon the general problem definition and expand upon the experiments conducted for the earlier report. Most importantly, we summarize key lessons learned about how and when uncertainty quantification can inform decision making and provide valuable insights into the quality of learned models and potential improvements to them.
The Co-Decontamination (CoDCon) Demonstration experiment at Pacific Northwest National Laboratory (PNNL) is designed to test the separation of a mixed U and Pu product from dissolved spent nuclear fuel. The primary purpose of the project is to demonstrate control of the Pu/U ratio throughout the entire process without producing a pure Pu stream. In addition, the project is quantifying the accuracy and precision to which a Pu/U mass ratio can be achieved. The process includes an on-line spectroscopic monitoring system to track the ratios throughout the process. A dynamic model of the CoDCon flowsheet and the on-line monitoring system was developed to augment the experimental work. The model is based in MATLAB Simulink and provides the ability to expand the range of scenarios that can be examined for process control and to determine overall measurement uncertainty. Experimental results have been used to inform and benchmark the model so that it can accurately simulate various transient scenarios. The results of the experimental benchmarking are presented here, along with modeled scenarios that demonstrate the control and process monitoring of the system.
This SAND report fulfills the final report requirement for the Born Qualified Grand Challenge LDRD. Born Qualified was funded from FY16-FY18 with a total budget of ~$13M over the 3 years of funding. Overall 70+ staff, Post Docs, and students supported this project over its lifetime. The driver for Born Qualified was using Additive Manufacturing (AM) to change the qualification paradigm for low volume, high value, high consequence, complex parts that are common in high-risk industries such as ND, defense, energy, aerospace, and medical. AM offers the opportunity to transform design, manufacturing, and qualification with its unique capabilities. AM is a disruptive technology, allowing the capability to simultaneously create part and material while tightly controlling and monitoring the manufacturing process at the voxel level, with the inherent flexibility and agility in printing layer-by-layer. AM enables the possibility of measuring critical material and part parameters during manufacturing, thus changing the way we collect data, assess performance, and accept or qualify parts. It provides an opportunity to shift from the current iterative design-build-test qualification paradigm using traditional manufacturing processes to design-by-predictivity where requirements are addressed concurrently and rapidly. The new qualification paradigm driven by AM provides the opportunity to predict performance probabilistically, to optimally control the manufacturing process, and to implement accelerated cycles of learning. Exploiting these capabilities to realize a new uncertainty quantification-driven qualification that is rapid, flexible, and practical is the focus of this effort.
We report on work performed to measure the quenching factor of low-kinetic-energy germanium recoils, as a collaboration between Sandia National Laboratories (SNL) and Duke University. A small-mass, low-noise, high-purity germanium detector was irradiated by a mono-energetic pulsed neutron beam produced by the Triangle Universities Nuclear Laboratory (TUNL) Van de Graaff accelerator. Data were collected to determine the germanium quenching factor at 10 discrete recoil energy values in the range of ~0.8 to 5.0 keVnr. We describe the experiment, present the simulation and data processing for the 10 datasets, and discuss the quenching factor analysis result for one of them. This one result seems to indicate a somewhat large deviation from literature values, though it is too preliminary to claim the presence of a systematic bias in our data or analysis.
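For clarity, the quenching factor measured here is defined in the usual way:

\[
\mathrm{QF}(E_{\mathrm{nr}}) = \frac{E_{\mathrm{ee}}}{E_{\mathrm{nr}}},
\]

the ratio of the electron-equivalent (ionization) energy E_ee observed for a nuclear recoil of true energy E_nr, hence the keVnr units quoted above.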
This project explored coupling modeling and analysis methods from multiple domains to address complex hybrid (cyber and physical) attacks on mission critical infrastructure. Robust methods to integrate these complex systems are necessary to enable large trade-space exploration, including dynamic and evolving cyber threats and mitigations. Reinforcement learning employing deep neural networks, as in the AlphaGo Zero solution, was used to identify "best" (or approximately optimal) resilience strategies for operation of a cyber/physical grid model. A prototype platform was developed, and the machine learning (ML) algorithm was made to play itself in a game of 'Hurt the Grid'. This proof of concept shows that machine learning optimization can help us understand and control complex, multi-dimensional grid space. A simple yet high-fidelity model shows that the data have the spatial correlation necessary for any optimization or control. Our prototype analysis showed that the reinforcement learning successfully improved adversary and defender knowledge to manipulate the grid. When expanded to more representative models, this type of machine learning will inform grid operations and defense, supporting mitigation development to defend the grid from complex cyber attacks. This same research can be expanded to similar complex domains.
In this article, we describe a prototype cosimulation framework using Xyce, GHDL and CocoTB that can be used to analyze digital hardware designs in out-of-nominal environments. We demonstrate current software methods and inspire future work via analysis of an open-source encryption core design. Note that this article is meant as a proof-of-concept to motivate integration of general cosimulation techniques with Xyce, an open-source circuit simulator.
In this study, a Johnson–Cook model was used as an example to analyze the relationship between the compressive stress-strain responses of engineering materials obtained experimentally at constant engineering and constant true strain rates. There was minimal deviation between the stress-strain curves obtained at the same constant engineering and true strain rates. The stress-strain curves obtained at either constant engineering or constant true strain rates could be converted from one to the other, and both represented the intrinsic material response. There is no need to specify a testing requirement of constant engineering or constant true strain rate for material property characterization, provided that one or the other is attained during the experiment.
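For reference, the Johnson–Cook flow stress used as the example model has the standard form, and the engineering/true rate relation follows from the kinematics of uniform compression (both are textbook relations, not results of this study):

\[
\sigma = \left(A + B\varepsilon^{n}\right)\left(1 + C\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_{0}}\right)\left(1 - T^{*m}\right),
\qquad
\dot{\varepsilon}_{\mathrm{true}} = \frac{\dot{\varepsilon}_{\mathrm{eng}}}{1-\varepsilon_{\mathrm{eng}}}\ \ \text{(compression)},
\]

where T* is the homologous temperature; because rate enters only logarithmically, the modest difference between constant engineering and constant true rates is one reason the predicted curves deviate minimally.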
When making computational simulation predictions of multiphysics engineering systems, sources of uncertainty in the prediction need to be acknowledged and included in the analysis within the current paradigm of striving for simulation credibility. A thermal analysis of an aerospace geometry was performed at Sandia National Laboratories. For this analysis, a verification, validation, and uncertainty quantification (VVUQ) workflow provided structure for the analysis, resulting in the quantification of significant uncertainty sources, including spatial numerical error and material property parametric uncertainty. It was hypothesized that the parametric uncertainty and numerical errors were independent and separable for this application. This hypothesis was supported by performing uncertainty quantification (UQ) simulations at multiple mesh resolutions, while resource limits constrained the number of medium- and high-resolution simulations. Based on this supported hypothesis, a prediction including parametric uncertainty and a systematic mesh bias is used to make a margin assessment that avoids unnecessary uncertainty obscuring the results and optimizes the use of computing resources.
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution of a nonautonomous linear ODE by a one-step method is justified using real-valued, scalar, nonautonomous linear test equations. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and to establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed from the theoretical results.
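The QR technique underlying the spectral approximation computes growth rates from triangular factors of successive transition matrices (the standard discrete-QR form, with illustrative notation):

\[
\Phi_{k}Q_{k-1} = Q_{k}R_{k},
\qquad
\lambda_{i} = \lim_{N\to\infty}\frac{1}{t_{N}-t_{0}}\sum_{k=1}^{N}\ln\left(R_{k}\right)_{ii},
\]

so the diagonal entries of the R_k accumulate the exponential growth/decay rates whose conditioning is characterized by integral separation.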
We generalize the theory of underlying one-step methods to strictly stable general linear methods (GLMs) solving nonautonomous ordinary differential equations (ODEs) that satisfy a global Lipschitz condition. We combine this theory with the Lyapunov and Sacker-Sell spectral stability theory for one-step methods developed in [34, 35, 36] to analyze the stability of a strictly stable GLM solving a nonautonomous linear ODE. These results are applied to develop a stability diagnostic for the solution of nonautonomous linear ODEs by strictly stable GLMs.
Reliable engineering quality, safety, and performance are essential for a successful energy-storage project. The commercial energy-storage industry is entering its most formative period, which will shape the arc of the industry's development for years to come. Project announcements are increasing in both frequency and scale. Energy-storage systems (ESSs) are establishing themselves as a viable option for deployment across the entire electricity infrastructure, as grid-connected energy-storage assets or in combination with other grid assets, such as hybrid generators. How the industry will evolve, in direction and degree, will depend largely on building a firm foundation of sound engineering requirements into project expectations.
Liquid metal breakup processes are important for understanding a variety of physical phenomena including metal powder formation, thermal spray coatings, fragmentation in explosive detonations and metalized propellant combustion. Since the breakup behaviors of liquid metals are not well studied, we experimentally investigate the roles of higher density and fast elastic surface oxide formation on breakup morphology and droplet characteristics. This work compares the column breakup of water with Galinstan, a room-temperature eutectic liquid metal alloy of gallium, indium and tin. A shock tube is used to generate a step change in convective velocity and back-lit imaging is used to classify morphologies for Weber numbers up to 250. Digital in-line holography (DIH) is then used to quantitatively capture droplet size, velocity and three-dimensional position information. Differences in geometry between canonical spherical drops and the liquid columns utilized in this paper are likely responsible for observations of earlier transition Weber numbers and uni-modal droplet volume distributions. Scaling laws indicate that Galinstan and water share similar droplet size-velocity trends and root-normal volume probability distributions. However, measurements indicate that Galinstan breakup occurs earlier in non-dimensional time and produces more non-spherical droplets due to fast oxide formation.
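The Weber number used to classify breakup regimes is the usual aerodynamic one (standard definition; symbols illustrative):

\[
We = \frac{\rho_{g}\,u^{2}\,d}{\sigma},
\]

where ρ_g is the gas density, u the post-shock relative gas-liquid velocity, d the initial column diameter, and σ the liquid surface tension; it measures disruptive aerodynamic forces against the restoring effect of surface tension.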
Aerosol jet printing (AJP) has emerged as a promising method for microscale digital additive manufacturing using functional nanomaterial inks. While compelling capabilities have been demonstrated in the research community in recent years, the development and refinement of inks and process parameters largely follows empirical observations, with an extensive phase space over which to optimize. While this has led to general qualitative guidelines and ink- and machine-specific correlations, a more fundamental understanding based on principles of aerosol physics and fluid mechanics is lacking. This contrasts with more mature printing technologies, for which foundational physical principles have been rigorously examined. Presented here is a broad framework for describing the AJP process. Simple analytical models are employed to ensure generality and accessibility of the results, while experimental validation using a silver nanoparticle ink supports the physical relevance of the approach. This basic understanding enables a description of process limitations grounded in fundamental principles, as well as guidelines for improved printer design, ink formulation, and print parameter optimization.
Improving the sensitivity of infrared detectors is an essential step for future applications, including satellite- and terrestrial-based systems. We investigate nanoantenna-enabled detectors (NEDs) in the infrared, where the nanoantenna arrays play a fundamental role in enhancing the level of absorption within the active material of a photodetector. The design and optimization of nanoantenna-enabled detectors via full-wave simulations is a challenging task given the large parameter space to be explored. Here, we present a fast and accurate fully analytic circuit model of patch-based NEDs. This model allows for the inclusion of real metals, realistic patch thicknesses, non-absorbing spacer layers, the active detector layer, and absorption due to higher-order evanescent modes of the metallic array. We apply the circuit model to the design of NED devices based on Type II superlattice absorbers, and show that we can achieve absorption of ∼70% of the incoming energy in subwavelength (∼λ∕5) absorber layers. The accuracy of the circuit model is verified against full-wave simulations, establishing this model as an efficient design tool to quickly and accurately optimize NED structures.
Partial differential equation (PDE) constrained optimization is designed to solve control, design, and inverse problems with underlying physics. A distinguishing challenge of this technique is the handling of large numbers of optimization variables in combination with the complexities of discretized PDEs. Over the last several decades, advances in algorithms, numerical simulation, software design, and computer architectures have allowed for the maturation of PDE constrained optimization (PDECO) technologies, with subsequent solutions to complicated control, design, and inverse problems. This special journal edition, entitled “PDE-Constrained Optimization”, features eight papers that demonstrate new formulations, solution strategies, and innovative algorithms for a range of applications. In particular, these contributions demonstrate the impact of PDECO on our engineering and science communities. This paper offers brief remarks to provide some perspective and background for PDECO, in addition to summaries of the eight papers.
The role of power electronics in the utility grid is continually expanding. As converter design processes mature and new advanced materials become available, the pace of industry adoption is poised to accelerate. Looking forward, we can envision a future in which power electronics are as integral to grid functionality as the transformer is today. The Enabling Advanced Power Electronics Technologies for the Next Generation Electric Utility Grid Workshop was organized by Sandia National Laboratories and held in Albuquerque, New Mexico, July 17-18, 2018. The workshop helped attendees gain a broader understanding of power electronics R&D needs, from materials to systems, for the next generation electric utility grid. This report summarizes discussions and presentations from the workshop and identifies opportunities for future efforts.