Research interest in computing systems that represent logic states using quantum mechanical observables has grown steadily in the decades since the field's inception. While quantum computers based on Josephson-junction qubits have become commercially available in the last three years, there is also a significant research initiative to develop scalable quantum computers based on so-called donor qubits. B.E. Kane first published a device implementation of a silicon-based quantum computer in 1998, sparking a wave of follow-on advances due to the attractive nature of silicon-based computing [7]. Nearly all commercial computing systems using classical binary logic are fabricated on silicon substrates, and silicon is inarguably the most mature material system for semiconductor devices, which makes coupling classical and quantum bits on a single substrate possible. The process of growing and processing silicon crystals into wafers is extremely robust and leads to minimal impurities and structural defects.
This project focused on providing a fundamental mechanistic understanding of the complex degradation mechanisms associated with Pellet/Clad Debonding (PCD) through a unique suite of approaches: novel synthesis of surrogate spent nuclear fuel, in-situ nanoscale experiments on surrogate interfaces, multiscale modeling, and characterization of decommissioned commercial spent fuel. The understanding of degradation in a broad class of metal/ceramic interfaces developed within this project provides a technical basis for assessing the safety of high-burnup fuel, a problem of interest to the DOE.
This document archives the results of a Laboratory Directed Research and Development (LDRD) project sponsored by Sandia National Laboratories (SNL). In this work, SNL developed the first known high-energy hyperspectral computed tomography system for industrial and security applications. The main results include dramatic reduction of beam-hardening artifacts, achieved by using the hyperspectral reconstruction as a bandpass filter without any additional computation or pre-processing. This work also demonstrated the ability to apply supervised and unsupervised learning methods to the hyperspectral reconstruction data for materials characterization and identification, which is not possible with traditional computed tomography systems or approaches.
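To make the bandpass idea concrete, here is a minimal numpy sketch. It assumes a per-energy-bin reconstruction already exists; the array shapes, bin axis, and energy window are illustrative, not the system's actual interfaces or parameters.

```python
import numpy as np

# Hypothetical hyperspectral reconstruction: one CT volume per energy bin,
# indexed as (energy_bin, z, y, x), with bin centers in energies_keV.
rng = np.random.default_rng(0)
recon = rng.random((32, 16, 64, 64)).astype(np.float32)
energies_keV = np.linspace(20.0, 450.0, recon.shape[0])

# Beam hardening is driven by preferential attenuation of low-energy photons.
# Viewing only the bins inside a chosen window acts as a bandpass filter,
# with no extra computation or pre-processing beyond selecting bins.
lo, hi = 150.0, 300.0                        # illustrative window (keV)
band = (energies_keV >= lo) & (energies_keV <= hi)
bandpass_volume = recon[band].mean(axis=0)   # (z, y, x) volume for display

print(bandpass_volume.shape)
```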
We report on the fabrication and characterization of nanocrystalline ZnO films for use as a random-laser physical unclonable function (PUF). Correlations between processing conditions and film microstructure will be drawn to optimize the lasing properties and the randomness of the response. We will specifically examine the repeatability and security of PUFs demonstrated in this novel system. This demonstration promises to impact many of Sandia's core missions, including counterfeit detection.
Pressure losses and aerosol collection efficiencies were measured for fibrous filter materials at air-flow rates consistent with high efficiency filtration (hundreds of cubic feet per minute). Microfiber filters coated with nanofibers were purchased and fabricated into test assemblies for a 12-inch duct system designed to mimic high efficiency filtration testing of commercial and industrial processes. Standards and specifications for high efficiency filtration were studied from a variety of institutions to assess protocols for design, testing, operations and maintenance, and quality assurance (e.g., DOE, ASHRAE, ASME). Three materials with varying Minimum Efficiency Reporting Values (MERV) were challenged with sodium chloride aerosol. Substantial filter loading was observed, with aerosol collection efficiencies and pressure losses increasing over the course of the experiments. Filter designs will be optimized and characterized in subsequent years of this study. Additional testing will be performed with higher hazard aerosols at Oak Ridge National Laboratory.
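For reference, collection efficiency in tests of this kind is typically computed from upstream and downstream aerosol concentrations; the following is a schematic definition, not the specific procedure of the standards cited above:

\[
E = 1 - \frac{C_{\mathrm{down}}}{C_{\mathrm{up}}},
\]

where \(C_{\mathrm{up}}\) and \(C_{\mathrm{down}}\) are the challenge-aerosol concentrations measured upstream and downstream of the filter at a fixed volumetric flow rate, and the pressure loss is the corresponding static pressure difference across the filter.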
This project targeted a full-field understanding of the conversion of plastic work into heat using advanced diagnostics (digital image correlation, DIC, combined with infrared, IR, imaging). This understanding will act as a catalyst for reformulating the prevalent simplistic model, which will ultimately transform Sandia's ability to design for and predict thermomechanical behavior, impacting national security applications including nuclear weapon assessments of accident scenarios. 304L stainless steel dogbone specimens are pulled in tension at quasi-static rates until failure, and full-field deformation and temperature data are captured while accounting for thermal losses. The IR temperature fields are mapped onto the DIC coordinate system (Lagrangian formulation). The resultant fields are used to calculate the Taylor-Quinney coefficient, β, at two strain rates (0.002 s⁻¹ and 0.08 s⁻¹) and two temperatures (room temperature, RT, and 250°C).
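For context, one common local statement of the Taylor-Quinney coefficient, assuming thermal losses have been accounted for so that the measured heating rate reflects plastic dissipation (the report's exact formulation may differ), is

\[
\beta = \frac{\rho\, c_p\, \dot{T}}{\bar{\sigma}\, \dot{\bar{\varepsilon}}^{p}},
\]

where \(\rho\) is density, \(c_p\) specific heat, \(\dot{T}\) the temperature rate from the IR fields, and \(\bar{\sigma}\), \(\dot{\bar{\varepsilon}}^{p}\) the effective stress and effective plastic strain rate obtained from the DIC measurements.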
The Near-Field Scanning Optical Microscope (NSOM) was used to image a wide array of samples under a variety of standard and non-standard operating conditions on a custom system built in Org. 5625. The ability of this technique to produce high-quality images was assessed during this one-year LDRD. For details about the devices imaged, as well as the experimental details, please refer to the classified report from the project manager, Rich Dondero, or the NSP IA lead, Kristina Czuchlewski.
A coupled electrochemical/thermochemical cycle was investigated to produce hydrogen from renewable resources. Like a conventional thermochemical cycle, this cycle leverages chemical energy stored in a thermochemical working material that is reduced thermally by solar energy. In this concept, however, the stored chemical energy need only partially split steam to produce hydrogen. To push the reaction to completion, a proton-conducting membrane is employed to separate hydrogen as it is produced, shifting the thermodynamics toward further hydrogen production. This novel coupled-cycle concept provides several benefits. First, the required oxidation enthalpy of the reversible thermochemical material is reduced, enabling the process to occur at lower temperatures. Second, removing the requirement for spontaneous steam splitting widens the scope of material compositions, allowing less expensive, more abundant elements to be used. Lastly, thermodynamic calculations suggest that this concept can potentially reach higher efficiencies than photovoltaic-to-electrolysis hydrogen production methods. This Exploratory Express LDRD assessed the practical feasibility of the proposed coupled cycle. A test stand was designed and constructed, and proton-conducting membranes were synthesized. An LDRD plus-up of $10k enabled remediation of a membrane sealing issue and testing with an improved membrane. However, the membrane proved too thick for efficient proton conduction, and there were insufficient funds to continue. While the full proof of concept was not achieved, the individual components of the experiment were validated, and new capabilities were developed that can be leveraged by a variety of programs.
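Schematically, using a generic oxygen-deficient oxide MO\(_x\) as the working material (an illustration of this cycle class, not the specific materials tested here), the two steps are

\[
\mathrm{MO}_x \;\xrightarrow{\;\text{solar heat}\;}\; \mathrm{MO}_{x-\delta} + \tfrac{\delta}{2}\,\mathrm{O_2},
\qquad
\mathrm{MO}_{x-\delta} + \delta\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{MO}_x + \delta\,\mathrm{H_2}.
\]

In the coupled concept, the proton-conducting membrane continuously removes H\(_2\) from the oxidation side, so the second step need not be spontaneous at the bulk conditions; removing a product continuously drives the reaction forward.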
In this work we propose an approach for accelerating Uncertainty Quantification (UQ) analysis in multifidelity applications. For complex multiphysics applications, which often incur a prohibitive computational cost per evaluation, multifidelity UQ techniques accelerate the convergence of statistics by leveraging information collected from a larger number of lower-fidelity model realizations. However, the performance of virtually all state-of-the-art multifidelity UQ techniques is tied to the correlation between the high- and low-fidelity models. In this work we design a multifidelity UQ framework based on the identification of independent important directions for each model. The main idea is that if the responses of each model can be represented in a common space, that space can be shared to enhance the correlation when samples are drawn with respect to it rather than the original variables. Two additional advantages follow from this approach. First, the models may be correlated even if their original parametrizations are chosen independently. Second, if the shared space between models has a lower dimensionality than the original spaces, the UQ analysis may also benefit from dimension reduction. We design this general framework and test it on several problems, ranging from analytical functions for verification purposes to more challenging applications such as an aero-thermo-structural analysis and a scramjet flow analysis.
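As background for how low-fidelity samples accelerate a high-fidelity statistic, here is a minimal two-fidelity control-variate mean estimator in the generic multifidelity Monte Carlo style. The functions and sample counts are made up for illustration, and this is not the shared-space method proposed above; the role of that method is to increase the correlation this kind of estimator exploits.

```python
import numpy as np

# Generic two-fidelity control-variate estimator of E[f_hi]; the models
# below are illustrative stand-ins, not the report's applications.
def f_hi(x):
    return np.sin(x[:, 0]) + 0.1 * x[:, 1] ** 2   # "expensive" model

def f_lo(x):
    return np.sin(x[:, 0])                        # cheap, correlated model

rng = np.random.default_rng(1)
x_hi = rng.uniform(-1, 1, (50, 2))      # few expensive evaluations
x_lo = rng.uniform(-1, 1, (5000, 2))    # many cheap evaluations

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)                # low fidelity at the shared samples
y_lo_many = f_lo(x_lo)

# Control-variate weight from the paired sample covariance; the variance
# reduction grows with the correlation between the two models.
C = np.cov(y_hi, y_lo_paired)
alpha = C[0, 1] / C[1, 1]
estimate = y_hi.mean() + alpha * (y_lo_many.mean() - y_lo_paired.mean())
print(estimate)
```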
Verification results for Sierra/SM using inexact reference solutions have often exhibited unsatisfactory convergence behavior. With an understanding of the convergence behavior for these types of tests, one can avoid falsely attributing pathologies of the test to incorrectness of the code. Simple theoretical results highlight that for an inexact reference solution, two conditions must be met to observe asymptotic convergence. These conditions, and the resulting types of convergence behaviors, are further illustrated with graphical examples depicting the exact, inexact reference, and sequence of numerical solutions as vectors (in a function space). A stress concentration problem is adopted to contrast convergence behaviors when using inexact (classical linear elastic) and exact (manufactured) reference solutions. Convergence is not initially attained with the classical solution. Convergence with the manufactured solution indicates that the convergence failure with the classical reference did not result from code error, and it provides insight into how, for this problem, asymptotic convergence could be attained with the classical reference solution by modifying the computational models.
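A sketch of the underlying reasoning: writing \(u\) for the exact solution, \(u_{\mathrm{ref}}\) for the inexact reference, and \(u_h\) for the numerical solution, the triangle inequality gives

\[
\big|\,\|u_h - u\| - \|u - u_{\mathrm{ref}}\|\,\big| \;\le\; \|u_h - u_{\mathrm{ref}}\| \;\le\; \|u_h - u\| + \|u - u_{\mathrm{ref}}\|,
\]

so the measured error \(\|u_h - u_{\mathrm{ref}}\|\) tracks the true discretization error \(\|u_h - u\|\), and hence converges at the theoretical rate, only while \(\|u_h - u\|\) dominates the fixed reference error \(\|u - u_{\mathrm{ref}}\|\). Once the two become comparable, the measured error plateaus near \(\|u - u_{\mathrm{ref}}\|\) even for a correct code.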
A reduced order modeling capability has been developed to reduce the computational burden associated with time-domain solutions of structural dynamic models with linear viscoelastic materials. The discretized equations of motion contain convolution integrals, resulting in a linear system with nonviscous damping forces. The challenge associated with the reduction of nonviscously damped linear systems is the selection and computation of an appropriate modal basis for modal projection. The system produces a nonlinear eigenvalue problem that is challenging to solve and requires specialized algorithms not readily available in commercial finite element packages. This SAND report summarizes the LDRD discoveries of a reduction scheme developed for monolithic finite element models and provides preliminary investigations into extensions of the method using component mode synthesis. In addition, this report provides background on structural dynamic modeling of structures with linear viscoelastic materials and describes a new capability in Sierra Structural Dynamics to output the system-level matrices computed on multiple processors.
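To make the source of the nonlinear eigenvalue problem concrete, a schematic form of the discretized equations (notation illustrative, not the report's exact formulation) is

\[
M\ddot{u}(t) + \int_0^t G(t-\tau)\,\dot{u}(\tau)\,\mathrm{d}\tau + K u(t) = f(t),
\]

and seeking solutions \(u(t) = \varphi\, e^{\lambda t}\) yields

\[
\left(\lambda^2 M + \lambda\,\hat{G}(\lambda) + K\right)\varphi = 0,
\]

where \(\hat{G}(\lambda)\) is the Laplace transform of the viscoelastic kernel \(G(t)\). Because \(\hat{G}\) itself depends on \(\lambda\), the eigenproblem is nonlinear in \(\lambda\), unlike the quadratic eigenproblem of viscously damped systems.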
There has been much interest in leveraging the topological order of materials for quantum information processing. Among the various solid-state systems, one-dimensional topological superconductors made out of strongly spin-orbit-coupled nanowires have been shown to be the most promising material platform. In this project, we investigated the feasibility of turning silicon, which is a non-topological semiconductor and has weak spin-orbit coupling, into a one-dimensional topological superconductor. Our theoretical analysis showed that it is indeed possible to create a sizable effective spin-orbit gap in the energy spectrum of a ballistic one-dimensional electron channel in silicon with the help of nano-magnet arrays. Experimentally, we developed magnetic materials needed for fabricating such nano-magnets, characterized the magnetic behavior at low temperatures, and successfully demonstrated the required magnetization configuration for opening the spin-orbit gap. Our results pave the way toward a practical topological quantum computing platform using silicon, one of the most technologically mature electronic materials.
This report summarizes the result of the LDRD Exploratory Express project 211666-01, titled "Coupled Magnetic Spin Dynamics and Molecular Dynamics in a Massively Parallel Framework".
Pressure-driven assembly of ligand-grafted gold nanoparticle superlattices is a promising approach for fabricating gold nanostructures, such as nanowires and nanosheets. However, optimizing this fabrication method requires an understanding of the mechanics of their complex hierarchical assemblies at high pressures. We use molecular dynamics simulations to characterize the response of alkanethiol-grafted gold nanoparticle superlattices to applied hydrostatic pressures up to 15 GPa, and demonstrate that the internal mechanics significantly depend on ligand length. At low pressures, intrinsic voids govern the mechanics of pressure-induced compaction, and the dynamics of collapse of these voids under pressure depend significantly on ligand length. These microstructural observations correlate well with the observed trends in bulk modulus and elastic constants. For the shortest ligands at high pressures, coating failure leads to gold core-core contact, a harbinger of irreversible response and eventual sintering. This behavior was unexpected under hydrostatic loading and was only observed for the shortest ligands.
Four tensile coupon designs of PH13-8Mo H950 steel were tested to failure at quasi-static rates to obtain data for calibrating the Xue-Wierzbicki failure model for ductile fracture. The tests recorded the force-displacement response, location of the first crack, displacement to fracture, area reduction, and crack propagation path. The test method and coupon designs were adopted from Tomasz Wierzbicki's "Calibration and evaluation of seven fracture models" report. The Xue-Wierzbicki model predicts fracture based on accumulated equivalent plastic strain, stress triaxiality, and the deviatoric state parameter. Calibrating the model required testing four coupon designs to determine its four free parameters. The coupon designs covered a range of stress triaxialities with two axisymmetric tests, one shear test, and one plane stress test. The data obtained and presented in this report can be used to develop a Xue-Wierzbicki fracture model for PH13-8Mo H950 steel.
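Schematically, models of this family are exercised as a damage accumulation rule (the specific fracture-strain envelope and constants are those of the Xue-Wierzbicki model as given in the cited report):

\[
D = \int_0^{\bar{\varepsilon}^{p}} \frac{\mathrm{d}\bar{\varepsilon}^{p}}{\bar{\varepsilon}_f(\eta,\xi)}, \qquad \text{fracture predicted when } D = 1,
\]

where \(\bar{\varepsilon}^{p}\) is the equivalent plastic strain, \(\eta\) the stress triaxiality, \(\xi\) the deviatoric state parameter, and the four free parameters define the envelope \(\bar{\varepsilon}_f(\eta,\xi)\). The four coupon geometries probe different \((\eta,\xi)\) states so that the envelope is constrained across the relevant range.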
The idea of acausal control of a wave energy converter (WEC) has been popular since the birth of modern wave energy research in the 1970s and has led to considerable research into wave prediction and feedforward WEC control algorithms. However, the findings in this report largely negate the need for wave prediction to improve WEC energy absorption and instead favor feedback-driven control strategies. Feedback control is shown to provide performance that rivals a prediction-based controller, even when the latter is given the unrealistic assumption of perfect prediction.
We present the relative timing and pulse-shape discrimination performance of an H1949-50 photomultiplier tube compared to a SensL ArrayX-B0B6_64S coupled to a SensL ArrayC-60035-64P PCB Silicon Photomultiplier array. The goal of this work is to enable the replacement of photomultiplier readout of scintillators with Silicon Photomultiplier devices, which are more robust and have higher particle detection efficiency. The report quantifies the degradation of these performance parameters when using commercial off-the-shelf summing circuits and motivates the development of an improved summing circuit: the pulse-shape discrimination figure-of-merit drops from 1.7 at 500 keVee to 1.4, and the timing resolution (σ) is 288 ps for the photomultiplier readout and approximately 1 ns for the Silicon Photomultiplier readout. A degradation of this size will have a large negative impact on any device that relies on timing coincidence or pulse-shape discrimination to detect neutron interactions, such as neutron kinematic imaging or multiplicity measurements.
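For reference, the pulse-shape discrimination figure of merit quoted above is conventionally the separation of the gamma and neutron peaks in the PSD parameter divided by the sum of their FWHMs. A minimal sketch, with invented peak values for illustration only:

```python
import numpy as np

# PSD figure of merit: peak separation over summed FWHMs, assuming
# Gaussian peaks (FWHM = 2*sqrt(2*ln 2)*sigma). Values below are made up.
def psd_fom(mu_g, sigma_g, mu_n, sigma_n):
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0))
    return abs(mu_n - mu_g) / (fwhm * (sigma_g + sigma_n))

print(psd_fom(mu_g=0.18, sigma_g=0.010, mu_n=0.28, sigma_n=0.015))
```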
A new arms race is emerging among global powers: the hypersonic weapon. Hypersonics are flight vehicles that travel at Mach 5 (five times the speed of sound) or faster. They can cruise in the atmosphere, unlike traditional exo-atmospheric ballistic missiles, allowing stealth and maneuverability during midflight. Faster, lower, and stealthier means the missiles can better evade adversary defense systems. The U.S. has experimented with hypersonics for years, but current investments by Russia and China in their own offensive hypersonic systems may render U.S. missile defense systems ineffective. For the U.S. to avoid obsolescence in this strategically significant technology arena, hypersonics, combined with autonomy, needs to be a force multiplier.
This report describes software tools that can be used to evaluate and mitigate potential glare and avian-flux hazards from photovoltaic and concentrating solar power (CSP) plants. Enhancements to the Solar Glare Hazard Analysis Tool (SGHAT) include new block-space receptor models, integration of PVWatts for energy prediction, and a 3D daily glare visualization feature. Tools and methods to evaluate avian-flux hazards at CSP plants with large heliostat fields are also discussed. Alternative heliostat standby aiming strategies were investigated to reduce the avian-flux hazard and minimize impacts to operational performance. Finally, helicopter flyovers were conducted at the National Solar Thermal Test Facility and at the Ivanpah Solar Electric Generating System to evaluate the alternative heliostat aiming strategies and to provide a basis for model validation. Results showed that the models generally overpredicted the measured results, but they were able to simulate the trends in irradiance values with distance. A heliostat up-aiming strategy is recommended to alleviate both glare and avian-flux hazards, but operational schemes are required to reduce the impact on heliostat slew times and plant performance. Future studies should consider the trade-offs and collective impacts on these three factors of glare, avian-flux hazards, and plant operations and performance.
We present a preliminary investigation of the use of Multi-Layer Perceptrons (MLPs) and Recurrent Neural Networks (RNNs) as surrogates of parameter-to-prediction maps of computationally expensive dynamical models. In particular, we target the approximation of Quantities of Interest (QoIs) derived from the solution of Partial Differential Equations (PDEs) at different time instants. To limit the scope of our study while targeting a relevant application, we focus on the problem of computing variations in ice sheet mass (our QoI), which is a proxy for global mean sea-level changes. We present a number of neural network formulations and compare their performance with that of Polynomial Chaos Expansions (PCE) constructed on the same data.
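To illustrate the surrogate construction pattern, here is a minimal MLP sketch mapping a few uncertain parameters to a QoI evaluated at several time instants. The "model" is an analytic stand-in, not an ice-sheet simulation, and the architecture and sample counts are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative parameter-to-QoI surrogate: 3 uncertain parameters in,
# QoI at 10 time instants out. The true map below is a toy function.
rng = np.random.default_rng(2)
theta = rng.uniform(-1.0, 1.0, (500, 3))            # parameter samples
t = np.linspace(0.0, 1.0, 10)                       # output time instants
qoi = np.array([np.exp(-p[0] * t) * np.sin(p[1] + p[2] * t) for p in theta])

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
mlp.fit(theta[:400], qoi[:400])                     # train on 400 samples
print(mlp.score(theta[400:], qoi[400:]))            # R^2 on held-out samples
```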
Integration of renewable power sources into electrical grids remains an active research and development area, particularly for less developed renewable energy technologies such as wave energy converters (WECs). High spatio-temporal resolution and accurate wave forecasts at a potential WEC (or WEC array) lease area are needed to improve WEC power prediction and to facilitate grid integration, particularly for microgrid locations. The availability of high-quality measurement data from recently developed low-cost buoys allows for operational assimilation of wave data into forecast models at remote locations where real-time data have previously been unavailable. This work includes the development and assessment of a wave modeling framework with real-time data assimilation capabilities for WEC power prediction. Spoondrift wave measurement buoys were deployed off the coast of Yakutat, Alaska, a microgrid site with high wave energy resource potential. A wave modeling framework with data assimilation was developed and assessed; it was most effective when the incoming forecasted boundary conditions did not represent the observations well. In that case, assimilation of the wave height data using the ensemble Kalman filter reduced the normalized root mean square error of the wave height forecast from 27% to an average of 16% over a 12-hour period, corresponding to a reduction in wave power forecast error from 73% to 43%. In summary, the low-cost wave buoy data assimilated into the wave modeling framework improved forecast skill and will provide a useful development tool for the integration of WECs into electrical grids.
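For background, a minimal stochastic (perturbed-observation) ensemble Kalman filter analysis step is sketched below; the state dimension, ensemble size, observation operator, and error values are invented for illustration and are unrelated to the Yakutat configuration.

```python
import numpy as np

# Stochastic EnKF analysis step for scalar wave-height-like observations.
def enkf_update(X, y_obs, H, obs_var, rng):
    """X: (n_state, n_ens) forecast ensemble; H: (n_obs, n_state)."""
    n_obs, n_ens = H.shape[0], X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)           # ensemble perturbations
    HXp = H @ Xp
    P_hh = HXp @ HXp.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    P_xh = Xp @ HXp.T / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)                   # Kalman gain
    Y = y_obs[:, None] + rng.normal(0, np.sqrt(obs_var), (n_obs, n_ens))
    return X + K @ (Y - H @ X)                       # perturbed-obs update

rng = np.random.default_rng(3)
X = rng.normal(1.5, 0.3, (20, 40))                   # 20 states, 40 members
H = np.zeros((1, 20)); H[0, 5] = 1.0                 # observe one grid point
print(enkf_update(X, np.array([1.8]), H, 0.01, rng)[5].mean())
```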
In designing a security module for inverter communications in a DER environment, it is critical to consider the impact of the additional security on the environment, as well as what types of security are required for the various messages that must pass between the inverter and the utility. Since cyber security is more than just preventing an unauthorized user from viewing data, mechanisms are also needed for proving identity and for ensuring that data cannot be altered without such a modification being discovered. This is where the security principles of confidentiality, integrity, and availability come into play; for different types of communications, some of these principles may be critical and others unnecessary. Furthermore, the cost and constraints of applying cryptography to DER communications must be considered to determine what is feasible in this environment and what the impact and cost of applying common cryptographic protections to inverter communications will be.
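As one concrete illustration of an integrity and authenticity protection, here is a minimal sketch using the Python standard library; the key handling, message format, and device name are placeholders, not a DER standard or the module discussed above.

```python
import hmac, hashlib, os

# Integrity/authenticity for an inverter status message via HMAC-SHA-256
# with a pre-shared key. Real deployments need key management, replay
# protection (e.g., the timestamp), and possibly encryption as well.
key = os.urandom(32)                       # pre-shared secret, one per device
message = b'{"device":"inv-042","p_kw":4.8,"ts":1533081600}'

tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver side: recompute the tag and compare in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
print(ok)
```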
Data-driven modeling, including machine learning methods, continues to play an increasing role in society. Data-driven methods impact decision making for applications ranging from everyday determinations about which news people see and control of self-driving cars to high-consequence national security situations related to cyber security and analysis of nuclear weapons reliability. Although modern machine learning methods have made great strides in model induction and show excellent performance in a broad variety of complex domains, uncertainty remains an inherent aspect of any data-driven model. In this report, we provide an update to the preliminary results on uncertainty quantification for machine learning presented in SAND2017-6776. Specifically, we improve upon the general problem definition and expand upon the experiments conducted for the earlier report. Most importantly, we summarize key lessons learned about how and when uncertainty quantification can inform decision making and provide valuable insights into the quality of learned models and potential improvements to them.
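As a simple, generic example of one way uncertainty can be attached to a learned model (an illustration of ensemble-based uncertainty, not necessarily the methods studied in the report):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Bootstrap ensemble: train B models on resampled data and use the
# spread of their predictions as an uncertainty estimate.
rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, (300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 300)

x_test = np.linspace(-3, 3, 50)[:, None]
preds = []
for b in range(50):
    idx = rng.integers(0, len(X), len(X))            # bootstrap resample
    model = DecisionTreeRegressor().fit(X[idx], y[idx])
    preds.append(model.predict(x_test))

preds = np.array(preds)
print(preds.mean(axis=0)[:3])                        # ensemble prediction
print(preds.std(axis=0)[:3])                         # ensemble spread
```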
The Co-Decontamination (CoDCon) Demonstration experiment at Pacific Northwest National Laboratory (PNNL) is designed to test the separation of a mixed U and Pu product from dissolved spent nuclear fuel. The primary purpose of the project is to demonstrate control of the Pu/U ratio throughout the entire process without producing a pure Pu stream. In addition, the project is quantifying the accuracy and precision to which a Pu/U mass ratio can be achieved. The system includes an on-line monitoring system using spectroscopy to monitor the ratios throughout the process. A dynamic model of the CoDCon flowsheet and the on-line monitoring system was developed to augment the experimental work. The model is implemented in MATLAB Simulink and provides the ability to expand the range of scenarios that can be examined for process control and to determine overall measurement uncertainty. Experimental results have been used to inform and benchmark the model so that it can accurately simulate various transient scenarios. The results of the experimental benchmarking are presented here, along with modeled scenarios that demonstrate the control and process monitoring of the system.
We report on work performed to measure the quenching factor of low kinetic energy germanium recoils, as a collaboration between Sandia National Laboratories (SNL) and Duke University. A small-mass, low-noise, high-purity germanium detector was irradiated by a mono-energetic pulsed neutron beam produced by the Triangle Universities Nuclear Laboratory (TUNL) Van de Graaff accelerator. Data were collected to determine the germanium quenching factor at 10 discrete recoil energy values in the range of approximately 0.8 to 5.0 keVnr. We describe the experiment, present the simulation and data processing for the 10 datasets, and discuss the quenching factor analysis result for one of them. This one result seems to indicate a somewhat large deviation from literature values, though it is too preliminary to claim the presence of a systematic bias in our data or analysis.
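For context, the quenching factor relates the measured ionization signal of a nuclear recoil to that of an electron recoil of the same energy, which is why the data are reported in both keVnr and keVee:

\[
Q(E_{\mathrm{nr}}) = \frac{E_{\mathrm{ee}}}{E_{\mathrm{nr}}},
\]

where \(E_{\mathrm{nr}}\) is the true nuclear recoil energy and \(E_{\mathrm{ee}}\) is the electron-equivalent energy inferred from the ionization yield.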
This project explored coupling modeling and analysis methods from multiple domains to address complex hybrid (cyber and physical) attacks on mission critical infrastructure. Robust methods to integrate these complex systems are necessary to enable large trade-space exploration, including dynamic and evolving cyber threats and mitigations. Reinforcement learning employing deep neural networks, as in the AlphaGo Zero solution, was used to identify "best" (or approximately optimal) resilience strategies for operation of a cyber/physical grid model. A prototype platform was developed, and the machine learning (ML) algorithm was made to play itself in a game of 'Hurt the Grid'. This proof of concept shows that machine learning optimization can help us understand and control the complex, multi-dimensional grid space. A simple yet high-fidelity model shows that the data exhibit spatial correlation, which is necessary for any optimization or control. Our prototype analysis showed that reinforcement learning successfully improved both adversary and defender knowledge of how to manipulate the grid. When expanded to more representative models, this type of machine learning will inform grid operations and defense, supporting mitigation development to defend the grid from complex cyber attacks. This same research can be expanded to similar complex domains.
In this article, we describe a prototype cosimulation framework using Xyce, GHDL and CocoTB that can be used to analyze digital hardware designs in out-of-nominal environments. We demonstrate current software methods and inspire future work via analysis of an open-source encryption core design. Note that this article is meant as a proof-of-concept to motivate integration of general cosimulation techniques with Xyce, an open-source circuit simulator.
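To show the shape of the digital half of such a cosimulation, here is a minimal cocotb testbench skeleton of the kind that drives an HDL simulator such as GHDL; the port names (clk, rst, din, dout) and timing are hypothetical and would need to match the actual design under test, and the coupling to Xyce for the analog side is not shown.

```python
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

# Minimal smoke test: reset the design, drive one input, and check that
# the output resolves after two clock cycles. Port names are placeholders.
@cocotb.test()
async def smoke_test(dut):
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    dut.rst.value = 1
    for _ in range(2):
        await RisingEdge(dut.clk)
    dut.rst.value = 0

    dut.din.value = 0xA5
    await RisingEdge(dut.clk)
    await RisingEdge(dut.clk)
    assert dut.dout.value.is_resolvable  # output resolved (no X/Z)
```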
In this study, a Johnson-Cook model was used as an example to analyze the relationship between compressive stress-strain responses of engineering materials obtained experimentally at constant engineering and constant true strain rates. There was minimal deviation between the stress-strain curves obtained at the same constant engineering and true strain rates. The stress-strain curves obtained at either constant engineering or constant true strain rates could be converted from one to the other, and both represent the intrinsic material response. There is thus no need to specify a testing requirement of constant engineering or constant true strain rate for material property characterization, provided that one or the other is attained during the experiment.
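For uniaxial loading, the two descriptions are related by the standard kinematic identities (written here in the tension sign convention; compression follows with signed strains):

\[
\varepsilon_t = \ln\!\left(1+\varepsilon_e\right), \qquad
\dot{\varepsilon}_t = \frac{\dot{\varepsilon}_e}{1+\varepsilon_e},
\]

so a test at constant engineering strain rate has a continuously varying true strain rate and vice versa. Since both protocols sample the same intrinsic response, either measured curve can be remapped to the other pointwise.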
Approximation theory for Lyapunov and Sacker-Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. This justifies the use of real-valued, scalar, nonautonomous linear test equations to study the stability of one-step methods applied to nonautonomous linear ODEs. The analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and to establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. Based on these theoretical results, a time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge-Kutta methods according to time-dependent stiffness are developed.
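The scalar test equations referred to above take the standard form

\[
\dot{x}(t) = \lambda(t)\,x(t), \qquad x(t) = x(t_0)\,\exp\!\left(\int_{t_0}^{t}\lambda(s)\,\mathrm{d}s\right),
\]

so exponential growth or decay over an interval is governed by the time average of \(\lambda(t)\), and the associated Lyapunov exponent is recovered as \(\chi = \limsup_{t\to\infty} \frac{1}{t-t_0}\int_{t_0}^{t}\lambda(s)\,\mathrm{d}s\).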
We generalize the theory of underlying one-step methods to strictly stable general linear methods (GLMs) solving nonautonomous ordinary differential equations (ODEs) that satisfy a global Lipschitz condition. We combine this theory with the Lyapunov and Sacker-Sell spectral stability theory for one-step methods developed in [34, 35, 36] to analyze the stability of a strictly stable GLM solving a nonautonomous linear ODE. These results are applied to develop a stability diagnostic for the solution of nonautonomous linear ODEs by strictly stable GLMs.
Reliable engineering quality, safety, and performance are essential for a successful energy-storage project. The commercial energy-storage industry is entering its most formative period, which will impact the arc of the industry's development for years to come. Project announcements are increasing in both frequency and scale. Energy-storage systems (ESSs) are establishing themselves as a viable option for deployment across the entire electricity infrastructure as grid-connected energy-storage assets or in combination with other grid assets, such as hybrid generators. How the industry will evolve, in direction and degree, will depend largely on building a firm foundation of sound engineering requirements into project expectations.
Liquid metal breakup processes are important for understanding a variety of physical phenomena including metal powder formation, thermal spray coatings, fragmentation in explosive detonations and metalized propellant combustion. Since the breakup behaviors of liquid metals are not well studied, we experimentally investigate the roles of higher density and fast elastic surface-oxide formation in breakup morphology and droplet characteristics. This work compares the column breakup of water with Galinstan, a room-temperature eutectic liquid metal alloy of gallium, indium and tin. A shock tube is used to generate a step change in convective velocity and back-lit imaging is used to classify morphologies for Weber numbers up to 250. Digital in-line holography (DIH) is then used to quantitatively capture droplet size, velocity and three-dimensional position information. Differences in geometry between canonical spherical drops and the liquid columns utilized in this paper are likely responsible for observations of earlier transition Weber numbers and uni-modal droplet volume distributions. Scaling laws indicate that Galinstan and water share similar droplet size-velocity trends and root-normal volume probability distributions. However, measurements indicate that Galinstan breakup occurs earlier in non-dimensional time and produces more non-spherical droplets due to fast oxide formation.
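For reference, the breakup regimes above are classified by the gas-phase Weber number, which compares aerodynamic forcing to the restoring effect of surface tension (here taking the column diameter as the characteristic length):

\[
\mathrm{We} = \frac{\rho_g\, u^2\, d}{\sigma},
\]

where \(\rho_g\) is the gas density, \(u\) the post-shock convective velocity, \(d\) the column diameter, and \(\sigma\) the liquid surface tension.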
Aerosol jet printing (AJP) has emerged as a promising method for microscale digital additive manufacturing using functional nanomaterial inks. While compelling capabilities have been demonstrated in the research community in recent years, the development and refinement of inks and process parameters largely follows empirical observations, with an extensive phase space over which to optimize. While this has led to general qualitative guidelines and ink- and machine-specific correlations, a more fundamental understanding based on principles of aerosol physics and fluid mechanics is lacking. This contrasts with more mature printing technologies, for which foundational physical principles have been rigorously examined. Presented here is a broad framework for describing the AJP process. Simple analytical models are employed to ensure generality and accessibility of the results, while experimental validation using a silver nanoparticle ink supports the physical relevance of the approach. This basic understanding enables a description of process limitations grounded in fundamental principles, as well as guidelines for improved printer design, ink formulation, and print parameter optimization.
Improving the sensitivity of infrared detectors is an essential step for future applications, including satellite- and terrestrial-based systems. We investigate nanoantenna-enabled detectors (NEDs) in the infrared, where the nanoantenna arrays play a fundamental role in enhancing the level of absorption within the active material of a photodetector. The design and optimization of nanoantenna-enabled detectors via full-wave simulations is a challenging task given the large parameter space to be explored. Here, we present a fast and accurate fully analytic circuit model of patch-based NEDs. This model allows for the inclusion of real metals, realistic patch thicknesses, non-absorbing spacer layers, the active detector layer, and absorption due to higher-order evanescent modes of the metallic array. We apply the circuit model to the design of NED devices based on Type II superlattice absorbers, and show that we can achieve absorption of ∼70% of the incoming energy in subwavelength (∼λ∕5) absorber layers. The accuracy of the circuit model is verified against full-wave simulations, establishing this model as an efficient design tool to quickly and accurately optimize NED structures.
Partial differential equation (PDE) constrained optimization is designed to solve control, design, and inverse problems with underlying physics. A distinguishing challenge of this technique is the handling of large numbers of optimization variables in combination with the complexities of discretized PDEs. Over the last several decades, advances in algorithms, numerical simulation, software design, and computer architectures have allowed PDE constrained optimization (PDECO) technologies to mature, with subsequent solutions to complicated control, design, and inverse problems. This special journal edition, entitled "PDE-Constrained Optimization", features eight papers that demonstrate new formulations, solution strategies, and innovative algorithms for a range of applications. In particular, these contributions demonstrate the impact of PDECO on the engineering and science communities. This paper offers brief remarks to provide perspective and background for PDECO, in addition to summaries of the eight papers.
The role of power electronics in the utility grid is continually expanding. As converter design processes mature and new advanced materials become available, the pace of industry adoption is poised to accelerate. Looking forward, we can envision a future in which power electronics are as integral to grid functionality as the transformer is today. The Enabling Advanced Power Electronics Technologies for the Next Generation Electric Utility Grid Workshop was organized by Sandia National Laboratories and held in Albuquerque, New Mexico, July 17-18, 2018. The workshop helped attendees to gain a broader understanding of power electronics R&D needs—from materials to systems—for the next generation electric utility grid. This report summarizes discussions and presentations from the workshop and identifies opportunities for future efforts.
The following discussion contains a detailed description of how to interface with and operate the Dosimetry Processor v1.0 software application. It describes the input required from the user to process dosimetry data and the actions to take for troubleshooting.
The performance of energetic materials (EMs) varies significantly across production lots due to the inability of current production methods to yield consistent morphology and size. Lot-to-lot variation and the inability to reproduce the characteristics that meet specification are costly, increase uncertainty, and create additional risk in programs using these materials. There is thus a pressing need to more reliably formulate EMs with greater control of morphology. The goal of this project is to use surfactant-assisted self-assembly to generate EM particles with well-defined sizes and external morphologies, using triaminotrinitrobenzene (TATB) and hexanitrohexaazaisowurtzitane (CL-20), as these EMs are both prevalent in the stockpile and present interesting and urgent reprocessing challenges. We intend to understand the fundamental science of how molecular packing influences EM morphology, and we are developing scaled-up fabrication of EM particles with controlled morphology, promising to eliminate inconsistent performance by providing a trusted and reproducible method to improve EMs for NW applications.
The generalized linear Boltzmann equation (GLBE) is a recently developed framework, based on non-classical transport theory, for modeling the expected value of particle flux in an arbitrary stochastic medium. Provided with a non-classical cross-section for a given statistical description of a medium, any transport problem in that medium may be solved. Previous work has considered only one-dimensional media without finite boundary conditions and discrete binary mixtures of materials. In this work, the solution approach for the GLBE in multidimensional media with finite boundaries is outlined. The discrete ordinates method with an implicit discretization of the pathlength variable is used to leverage sweeping methods for the transport operator. In addition, several convenient approximations for non-classical cross-sections are introduced. The solution approach is verified against random realizations of a Gaussian process medium in a square enclosure.
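For isotropic scattering, the GLBE can be written (following the non-classical transport literature; the exact statement used in this work may differ in details) as

\[
\frac{\partial \psi}{\partial s} + \Omega \cdot \nabla \psi + \Sigma_t(s)\,\psi(x,\Omega,s)
= \frac{\delta(s)}{4\pi}\left[\, c \int_{4\pi}\!\int_0^{\infty} \Sigma_t(s')\,\psi(x,\Omega',s')\,\mathrm{d}s'\,\mathrm{d}\Omega' + Q(x) \right],
\]

where \(s\) is the path length traveled since the previous interaction, \(\Sigma_t(s)\) the non-classical cross-section, and \(c\) the scattering ratio. It is the extra pathlength variable \(s\) that the implicit discretization above treats, allowing standard sweeping methods to be reused for the spatial-angular transport operator.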
This report documents and describes the tabulation and analysis of historical pulse operation data from the Annular Core Research Reactor (ACRR) at Sandia National Laboratories (SNL). The pulse data were obtained from a combination of pulse log files generated at the control console and pulse diagnostics system data. The pulses presented were performed between April 2003 and December 2017. A brief analysis of the data is included to characterize the aggregate behavior of ACRR pulses with respect to theoretical treatments based on the point reactor kinetics model. It is expected that the data presented will provide an organized and consolidated resource for referencing historical pulse data at the ACRR for use in analyses, verification and validation, and general understanding of the machine. A comprehensive set of data is presented in the appendices.
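For reference, the point reactor kinetics model referred to above is the standard system (here with six delayed-neutron precursor groups):

\[
\frac{\mathrm{d}n}{\mathrm{d}t} = \frac{\rho(t)-\beta}{\Lambda}\,n(t) + \sum_{i=1}^{6}\lambda_i C_i(t),
\qquad
\frac{\mathrm{d}C_i}{\mathrm{d}t} = \frac{\beta_i}{\Lambda}\,n(t) - \lambda_i C_i(t),
\]

where \(n\) is the neutron population, \(\rho(t)\) the reactivity, \(\beta = \sum_i \beta_i\) the delayed neutron fraction, \(\Lambda\) the neutron generation time, and \(\lambda_i\), \(C_i\) the decay constants and populations of the precursor groups.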