Additive manufacturing has ushered in a new paradigm of bottom-up materials-by-design of spatially non-uniform materials. Functionally graded materials have locally tailored compositions to provide optimized global properties and performance. In this letter, we propose an opportunity for the application of graded magnetic materials as lens elements for charged particle optics. A Hiperco50/Hymu80 (FeCo-2V/Fe-80Ni-5Mo) graded magnetic alloy was successfully additively manufactured via Laser Directed Energy Deposition, yielding spatially varying magnetic properties. The measured compositional gradient was then incorporated into computational simulations to demonstrate how a tailored material can enhance the magnetic performance of a critical, image-forming component of a transmission electron microscope.
Resonant plate shock testing techniques have been used for mechanical shock testing at Sandia for several decades. A mechanical shock qualification test is often performed as three separate uniaxial tests on a resonant plate to simulate one shock event. Multi-axis mechanical shock tests, in which shock specifications are simultaneously met in different directions during a single laboratory shock event, are not always repeatable and depend greatly on the fixture used during testing. This chapter provides insights, from a modeling and simulation point of view based on finite element modal analysis, into various designs of a concept fixture, comprising both a resonant plate and an angle bracket, for multi-axis shock testing. Initial model validation and testing show substantial excitation of the system under test, as the fundamental modes drive the response in all three directions. The response also shows that higher-order modes influence the system, that the axial and transverse responses are highly coupled, and that tunability is difficult to achieve. By varying the material properties, changing thicknesses, adding masses, and moving the location of the fixture on the resonant plate, the response can be changed significantly. The goal of this work is to identify the parameters that have the greatest influence on the response of the system when using the angle bracket fixture for a mechanical shock test, with the aim of making the system tunable.
Interim dry storage of spent nuclear fuel involves storing the fuel in welded stainless-steel canisters. Under certain conditions, the canisters could be subjected to environments that promote stress corrosion cracking, leading to a risk of breach and release of aerosol-sized particulate from the interior of the canister to the external environment through the crack. Research is currently under way at several laboratories to better understand the formation and propagation of stress corrosion cracks; however, little work has been done to quantitatively assess the potential aerosol release. The purpose of the present work is to introduce a reliable, generic numerical model for predicting aerosol transport, deposition, and plugging in leak paths similar to stress corrosion cracks. The model is dynamic (the leak path geometry changes as deposited particles plug it) and relies on a one-dimensional finite-difference solution of the aerosol transport equation. The model’s capabilities were also incorporated into a Graphical User Interface (GUI) developed to enhance user accessibility. Model validation efforts presented in this paper compare the model’s predictions with recent experimental data from Sandia National Laboratories (SNL) and results available in the literature. We expect this model to improve the accuracy of consequence assessments and reduce the uncertainty of radiological consequence estimations in the remote event of a through-wall breach in dry cask storage systems.
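To make the numerical approach concrete, below is a minimal sketch of an explicit upwind finite-difference scheme for one-dimensional advection with a first-order deposition sink. The geometry, velocity, and deposition rate are illustrative assumptions, not the report's model or values.

```python
import numpy as np

# Minimal 1-D aerosol transport sketch: advection through a crack-like
# channel with a first-order deposition sink. Illustrative only; the
# grid, velocity, and deposition rate are assumed, not the report's values.
L = 0.01           # leak path length (m)
nx = 200           # grid points
dx = L / nx
u = 1.0            # carrier gas velocity (m/s)
lam = 50.0         # deposition rate constant (1/s)
dt = 0.4 * dx / u  # explicit upwind stability (CFL < 1)

c = np.zeros(nx)          # aerosol concentration along the channel
deposited = np.zeros(nx)  # mass accumulated on the walls (plugging proxy)
c_in = 1.0                # inlet concentration (normalized)

for step in range(5000):
    c_up = np.concatenate(([c_in], c[:-1]))        # upwind (inlet-side) values
    c = c - dt * u * (c - c_up) / dx - dt * lam * c
    deposited += dt * lam * c                      # track wall deposition

print(f"outlet/inlet concentration ratio: {c[-1] / c_in:.3f}")
```

At steady state this scheme approaches the analytical penetration fraction exp(-lam * L / u) ≈ 0.61 for the assumed values; a dynamic plugging model would additionally shrink the channel cross-section as `deposited` grows.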
Closed-loop geothermal systems (CLGSs) rely on circulation of a heat transfer fluid in a closed-loop design without penetrating the reservoir to extract subsurface heat and bring it to the surface. We developed and applied numerical models to study u-shaped and coaxial CLGSs in hot-dry-rock over a more comprehensive parameter space than has been studied before, including water and supercritical CO2 (sCO2) as working fluids. An economic analysis of each realization was performed to evaluate the levelized cost of heat (LCOH) for direct heating applications and the levelized cost of electricity (LCOE) for electrical power generation. The results of the parameter study, composed of 2.5 million simulations, combined with a plant and economic model form the backbone of a publicly accessible web application that can be used to query, analyze, and plot outlet states, thermal and mechanical power output, and LCOH/LCOE, thereby facilitating feasibility studies led by potential developers, geothermal scientists, or the general public (https://gdr.openei.org/submissions/1473). Our results indicate that competitive LCOH can be achieved; however, competitive LCOE cannot be achieved without significant reductions in drilling costs. We also present a site-based case study for multi-lateral systems and discuss how our comprehensive single-lateral analyses can be applied to approximate multi-lateral CLGSs. Looking beyond hot-dry-rock, we detail CLGS studies in permeable wet rock, albeit for a more limited parameter space, indicating that reservoir permeability of greater than 250 mD is necessary to significantly improve CLGS power production, and that reservoir temperatures greater than 200 °C, achieved by going to greater depths (∼3–4 km), may significantly enhance power production.
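For reference, LCOH and LCOE in such analyses follow the standard discounted levelized-cost definition (a textbook form; the web application's plant and economic model adds detail beyond this):

\[
\mathrm{LCOH} \;=\; \frac{\sum_{t=0}^{T} C_t\,(1+r)^{-t}}{\sum_{t=0}^{T} Q_t\,(1+r)^{-t}},
\]

where \(C_t\) is the total cost incurred in year \(t\), \(Q_t\) the heat delivered in year \(t\), \(r\) the discount rate, and \(T\) the project lifetime; LCOE has the same form with electrical energy \(E_t\) in place of \(Q_t\).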
Optimization is a key tool for scientific and engineering applications; however, in the presence of models affected by uncertainty, the optimization formulation needs to be extended to consider statistics of the quantity of interest. Optimization under uncertainty (OUU) addresses this need and requires uncertainty quantification analyses at several design locations; i.e., its overall computational cost is proportional to the cost of performing a forward uncertainty analysis at each design location. An OUU workflow has two main components: an inner loop strategy for the computation of statistics of the quantity of interest, and an outer loop optimization strategy tasked with finding the optimal design, given a merit function based on the inner loop statistics. In this work, we propose to alleviate the cost of the inner loop uncertainty analysis by leveraging the so-called multilevel Monte Carlo (MLMC) method, which is able to allocate resources over multiple models with varying accuracy and cost. The resource allocation problem in MLMC is formulated by minimizing the computational cost given a target variance for the estimator. We consider MLMC estimators for statistics usually employed in OUU workflows and solve the corresponding allocation problem. For the outer loop, we consider a derivative-free optimization strategy implemented in the SNOWPAC library; our novel strategy is implemented and released in the Dakota software toolkit. We discuss several numerical test cases to showcase the features and performance of our approach with respect to its single-fidelity Monte Carlo counterpart.
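For context, the classical MLMC estimator of an expectation and the solution of the cost-versus-variance allocation problem take the following standard form (the paper extends the allocation to other statistics used in OUU):

\[
\hat{Q}^{\mathrm{ML}} \;=\; \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{i=1}^{N_\ell} \Big( Q_\ell^{(i)} - Q_{\ell-1}^{(i)} \Big), \qquad Q_{-1} \equiv 0,
\]

and, minimizing the total cost \(\sum_\ell N_\ell C_\ell\) subject to a target estimator variance \(\sum_\ell V_\ell / N_\ell \le \varepsilon^2\), the optimal sample allocation is

\[
N_\ell \;=\; \varepsilon^{-2} \sqrt{V_\ell / C_\ell}\, \sum_{k=0}^{L} \sqrt{V_k C_k},
\]

where \(V_\ell\) and \(C_\ell\) are the variance and cost of the level-\(\ell\) difference.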
This article aims to discover unknown variables in a system through data analysis. The main idea is to use the time of data collection as a surrogate variable and try to identify the unknown variables by modeling gradual and sudden changes in the data. We use Gaussian process modeling and a sparse representation of the sudden changes to efficiently estimate the large number of parameters in the proposed statistical model. The method is tested on a realistic dataset generated using a one-dimensional implementation of a Magnetized Liner Inertial Fusion (MagLIF) simulation model, and encouraging results are obtained.
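A toy two-stage analogue of this decomposition (not the paper's joint estimator): sparse sudden changes are fit with a lasso over a step-function dictionary, a Gaussian process captures the gradual component, and a few backfitting passes separate the two. All data and tuning values below are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy two-stage decomposition: gradual drift (GP) + sparse sudden changes
# (lasso over a step dictionary). Data, kernel, and penalties are assumed.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 300)              # collection time (surrogate variable)
y = np.sin(2 * np.pi * t)                   # gradual drift
y[t > 0.6] += 1.5                           # one sudden change
y += 0.1 * rng.standard_normal(t.size)

steps = (t[:, None] >= t[None, :]).astype(float)   # candidate jump at each time
gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=0.05, optimizer=None)
lasso = Lasso(alpha=0.01, max_iter=50_000)

jumps = np.zeros_like(t)
for _ in range(3):                          # backfit smooth and sparse parts
    smooth = gp.fit(t[:, None], y - jumps).predict(t[:, None])
    lasso.fit(steps, y - smooth)
    jumps = lasso.predict(steps)

# The dominant lasso weight should land near the true change at t = 0.6.
print("largest detected jump near t =", t[np.argmax(np.abs(lasso.coef_))])
```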
Comparison of pure sinusoidal vibration to random vibration, or to combinations of the two, is an important and useful subject in dynamic testing. The objective of this chapter is to succinctly document the technical background for converting a sine-sweep test specification into an equivalent random vibration test specification. The information can also be used in reverse, i.e., to compare a random vibration specification with a sine-sweep, although that is less common in practice. Because of the inherent assumptions involved in such conversions, it is always preferable to test to the original specifications and to perform this conversion only when other options are impractical. This chapter outlines the theoretical premise and relevant equations. An example of implementation with hypothetical but realistic data is provided that captures the conversion of a sinusoid to an equivalent ASD. The example also demonstrates how to relate the rate of the sine-sweep to the duration of the random vibration. A significant portion of this chapter is the discussion of the statistical distribution of peaks in a narrow-band random signal and its consequences for the damage imparted to a structure. Numerical simulations were carried out to capture the effect of various combinations of narrow-band random and pure sinusoidal signals superimposed on each other. The results are captured to provide guidance on accuracy and conservatism.
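One common single-degree-of-freedom route for such a conversion uses Miles' equation (a schematic version in our notation; the chapter's exact equations may differ). For a resonator with natural frequency \(f_n\) and quality factor \(Q\) excited by a flat ASD \(G\), the RMS response is given by Miles' equation, and equating an \(n\)-sigma random response peak to the resonant sine response \(QA\) (with \(A\) the sine amplitude at \(f_n\)) gives an equivalent ASD level:

\[
\sigma = \sqrt{\tfrac{\pi}{2}\, f_n\, Q\, G(f_n)}, \qquad Q A = n\,\sigma \;\Longrightarrow\; G(f_n) = \frac{2\,Q\,A^2}{\pi\, n^2 f_n}.
\]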
Diesel generators (gensets) are often the lowest-cost electric generation for reliable supply in remote microgrids. The development of converter-dominated diesel-backed microgrids requires accurate dynamic modeling to ensure power quality and system stability. The dynamic response derived using original genset system models often does not match that observed in field experiments. This paper presents the experimental system identification of a frequency dynamics model for a 400 kVA diesel genset. The genset is perturbed via active power load changes, and a linearized dynamics model is fit based on power and frequency measurements using moving horizon estimation (MHE). The method is first simulated using a detailed genset model developed in MATLAB/Simulink. The simulation model is then validated against the frequency response obtained from a real 400 kVA genset system at the Power System Integration (PSI) Lab at the University of Alaska Fairbanks (UAF). The simulation and experimental results had model errors of 3.17% and 11.65%, respectively. The resulting genset model can then be used in microgrid frequency dynamic studies, such as for the integration of renewable energy sources.
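A simplified batch analogue of the identification step (MHE adds a moving window, constraints, and online updates): fit the inertia and damping of a swing-equation-type frequency model to power and frequency data by least squares. The model form and all numbers below are illustrative assumptions, not the paper's genset parameters.

```python
import numpy as np
from scipy.optimize import least_squares

# Per-unit swing-equation analogue: 2H * df/dt = -dP - D * f,
# where f is the frequency deviation and dP the active power load step.
def simulate(H, D, dP, dt, n):
    f = np.zeros(n)
    for k in range(n - 1):
        f[k + 1] = f[k] + dt * (-dP[k] - D * f[k]) / (2 * H)
    return f

dt, n = 0.01, 1000
dP = np.zeros(n); dP[100:] = 0.2            # 20% load step at t = 1 s
rng = np.random.default_rng(1)
f_meas = simulate(1.5, 1.0, dP, dt, n) + 0.001 * rng.standard_normal(n)

# Batch least-squares fit of (H, D) to the measured frequency trajectory.
res = least_squares(lambda p: simulate(p[0], p[1], dP, dt, n) - f_meas,
                    x0=[1.0, 0.5], bounds=([0.1, 0.0], [10.0, 10.0]))
print("estimated H, D:", res.x)             # should recover ~[1.5, 1.0]
```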
The importance of user-accessible multiple-input/multiple-output (MIMO) control methods has been highlighted in recent years. Several user-created control laws have been integrated into Rattlesnake, an open-source MIMO vibration controller developed at Sandia National Laboratories. Much of the effort to date has focused on stationary random vibration control. However, there are many field environments that are not well captured by stationary random vibration testing, for example, shock, sine, or arbitrary waveform environments. This work details a time waveform replication technique that uses frequency-domain deconvolution, including a theoretical overview and implementation details. Example usage is demonstrated using a simple structural dynamics system and complicated control waveforms at multiple degrees of freedom.
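A minimal sketch of the deconvolution step, assuming measured FRFs from drive voltages to control responses. The array shapes and the Tikhonov regularization level are illustrative choices, not Rattlesnake's implementation.

```python
import numpy as np

# Frequency-domain deconvolution for time waveform replication: given
# FRFs H(f) from drive voltages to control responses and desired response
# waveforms x(t), solve V(f) = H(f)^+ X(f) line by line, then invert.
def twr_drives(H, x_desired, lam=1e-6):
    """H: (n_freq, n_resp, n_drive) FRF matrix on the rfft frequency lines.
    x_desired: (n_samples, n_resp) target time histories."""
    X = np.fft.rfft(x_desired, axis=0)             # (n_freq, n_resp)
    V = np.zeros((X.shape[0], H.shape[2]), dtype=complex)
    for k in range(X.shape[0]):
        Hk = H[k]
        # Tikhonov-regularized least squares handles ill-conditioned lines
        V[k] = np.linalg.solve(Hk.conj().T @ Hk + lam * np.eye(Hk.shape[1]),
                               Hk.conj().T @ X[k])
    return np.fft.irfft(V, n=x_desired.shape[0], axis=0)  # drive waveforms v(t)
```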
Heat waves are increasing in severity, duration, and frequency. The Multi-Scenario Extreme Weather Simulator (MEWS) models this increase using historical data, climate model outputs, and heat wave multipliers. In this study, MEWS is applied to the planning of a community resilience hub in Hau’ula, Hawaii. The hub will have a normal-operations mode and a resilience-operations mode, both of which were modeled using EnergyPlus. The resilience-operations mode cuts off air conditioning to many spaces to decrease power requirements during emergencies. Results were simulated for 300 future weather files generated by MEWS for 2020, 2040, 2060, and 2080, using Shared Socioeconomic Pathways SSP2-4.5, SSP3-7.0, and SSP5-8.5. The resilience-operations mode results show a two- to six-fold increase in hours of exceedance beyond 32.2 °C relative to present conditions, depending on climate scenario and future year. This decrease in thermal resilience is the trade-off for an average 26% reduction in energy use intensity, which shows little sensitivity to climate change. The decreased thermal resilience predicted for the future is undesirable but was not severe enough to require a more energy-intensive resilience mode. Instead, planning is needed to ensure that vulnerable individuals are given prioritized access to air-conditioned parts of the hub if worst-case heat waves occur.
This chapter will show the results of a study in which component-based transfer path analysis was used to translate vibration environments between versions of the round-robin structure. This was done to evaluate a hybrid approach in which the responses were measured experimentally but the frequency response functions were derived analytically. This work will describe the test setup, the force estimation process, and the response prediction on the new system, and will show comparisons between the predicted and measured responses. Observations will also be made on the applicability of this hybrid approach to more complex systems.
For turbulent reacting flow systems, identification of low-dimensional representations of the thermo-chemical state space is vitally important, primarily to significantly reduce the computational cost of device-scale simulations. Principal component analysis (PCA) and its variants are a widely employed class of methods. Recently, an alternative technique that focuses on higher-order statistical interactions, co-kurtosis PCA (CoK-PCA), has been shown to effectively provide a low-dimensional representation by capturing the stiff chemical dynamics associated with spatiotemporally localized reaction zones. Whereas its effectiveness had previously been demonstrated only through a priori analyses with linear reconstruction, in this work we employ nonlinear techniques to reconstruct the full thermo-chemical state and evaluate the efficacy of CoK-PCA compared to PCA. Specifically, we combine CoK-PCA-/PCA-based dimensionality reduction (encoding) with artificial neural network (ANN) based reconstruction (decoding) and examine, a priori, the reconstruction errors of the thermo-chemical state. In addition, we evaluate the errors in species production rates and heat release rates, which are nonlinear functions of the reconstructed state, as a measure of the overall accuracy of the dimensionality reduction technique. We employ four datasets to assess CoK-PCA/PCA coupled with ANN-based reconstruction: a zero-dimensional (homogeneous) reactor for autoignition of an ethylene/air mixture with conventional single-stage ignition kinetics, a dimethyl ether (DME)/air mixture with two-stage (low- and high-temperature) ignition kinetics, a one-dimensional freely propagating premixed ethylene/air laminar flame, and a two-dimensional dataset representing turbulent autoignition of ethanol in a homogeneous charge compression ignition (HCCI) engine. Results from the analyses demonstrate the robustness of the CoK-PCA-based low-dimensional manifold with ANN reconstruction in accurately capturing the data, particularly in the reaction zones.
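A schematic of the encode/decode pipeline, using ordinary PCA for the linear encoder (CoK-PCA would substitute the leading eigenvectors of the co-kurtosis tensor) and a small ANN decoder. Random placeholder data stand in for the (samples, temperature + species) state matrix.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Linear encoding + nonlinear ANN decoding of the thermo-chemical state.
# Random placeholder data; a real study would load DNS/reactor snapshots.
rng = np.random.default_rng(0)
state = rng.random((5000, 10))             # (samples, T + species)

scaler = StandardScaler().fit(state)
z = scaler.transform(state)

encoder = PCA(n_components=3).fit(z)       # low-dimensional manifold
scores = encoder.transform(z)              # encoded representation

decoder = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=0).fit(scores, z)
z_rec = decoder.predict(scores)            # nonlinear reconstruction

err = np.linalg.norm(z - z_rec) / np.linalg.norm(z)
print(f"relative reconstruction error: {err:.3f}")
```

In an a priori study, derived quantities such as species production rates and heat release rates would then be evaluated from `z_rec` (after unscaling) and compared against the original data.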
The primary goal of any laboratory test is to expose the unit under test to a conservative, realistic representation of a field environment. Satisfying this objective is not always straightforward due to laboratory equipment constraints. For vibration and shock tests performed on shakers, over-testing and unrealistic failures can result because the control is a base acceleration and mechanical shakers have nearly infinite impedance. Force limiting and response limiting are relatively standard practices to reduce over-test risks in random-vibration testing. Shaker controller software generally has response limiting as a built-in capability, and it is applied without much user intervention since vibration control is a closed-loop process. Limiting in shaker shocks is done for the same reasons, but because the duration of a shock is only a few milliseconds, limiting is a pre-planned, user-in-the-loop process. Shaker shock response limiting has been used for at least 30 years at Sandia National Laboratories, but it seems to be little known or used in industry. The objective of this paper is to re-introduce response limiting for shaker shocks to the aerospace community. The process is demonstrated on the BARBECUE testbed.
The proliferation of small uncrewed aerial systems (UAS) poses many threats to airspace systems and critical infrastructure. In this paper, we apply deep reinforcement learning (DRL) to intercept rogue UAS in urban airspaces. We train a group of homogeneous friendly UAS, referred to in this paper as agents, to pursue and intercept a faster UAS evading capture while navigating through crowded airspace with several moving non-cooperating interacting entities (NCIEs). The problem is formulated as a multi-agent Markov decision process, and we develop the Proximal Policy Optimization based Advantage Actor-Critic (PPO-A2C) method to solve it, in which the actor and critic networks are trained on a centralized server and the derived actor network is distributed to the agents to generate the optimal action based on their observations. The simulation results show that, compared to the traditional method, PPO-A2C fosters collaboration among agents, achieving the highest probability of capturing the evader while maintaining a low collision rate with other agents and NCIEs in the environment.
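A schematic of the centralized-training, distributed-execution architecture described above, in PyTorch. Dimensions and network sizes are illustrative assumptions, and the PPO update itself is omitted.

```python
import torch
import torch.nn as nn

# Schematic PPO-style actor-critic for the pursuit task: each agent runs
# a copy of the actor on its own observation; a centralized critic sees
# the joint observation during training. Dimensions are assumed.
OBS_DIM, ACT_DIM, N_AGENTS = 16, 4, 3

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(),
                                 nn.Linear(64, ACT_DIM))
    def forward(self, obs):                       # per-agent observation
        return torch.distributions.Categorical(logits=self.net(obs))

class CentralCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM * N_AGENTS, 64), nn.Tanh(),
                                 nn.Linear(64, 1))
    def forward(self, joint_obs):                 # concatenated observations
        return self.net(joint_obs).squeeze(-1)

actor, critic = Actor(), CentralCritic()
obs = torch.randn(N_AGENTS, OBS_DIM)
actions = actor(obs).sample()                     # one action per agent
value = critic(obs.reshape(1, -1))                # centralized value estimate
```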
Accurate measurement of frequency response functions (FRFs) is essential for system identification, model updating, and structural health monitoring. However, sensor noise and leakage cause variance and systematic errors in estimated FRFs. Low-noise sensors, windowing techniques, and intelligent experiment design can mitigate these effects but are often limited by practical considerations. This chapter is a guide to implementation of local modeling methods for FRF estimation, which have been extensively researched but are seldom used in practice. Theoretical background is presented, and a procedure for automatically selecting a parameterization and model order is proposed. Computational improvements are discussed that make local modeling feasible for systems with many input and output channels. The methods discussed herein are validated on a simulation example and two experimental examples: a multi-input, multi-output system with three inputs and 84 outputs and a nonlinear beam assembly. They are shown to significantly outperform the traditional H1 and HSVD estimators.
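A minimal SISO sketch of one such local modeling method, the local polynomial method, assuming input/output DFT spectra U and Y. The window half-width and polynomial order below are fixed assumptions; the chapter's proposed procedure would select them automatically.

```python
import numpy as np

# Local polynomial method (LPM) sketch for SISO FRF estimation: around
# each DFT line k, the FRF G and the transient/leakage term T are modeled
# as low-order polynomials in the line offset r and fit by least squares;
# the constant terms give G(k) and T(k).
def lpm_frf(U, Y, order=2, half_width=5):
    n = len(U)
    G = np.zeros(n, dtype=complex)
    r = np.arange(-half_width, half_width + 1)
    for k in range(half_width, n - half_width):
        # Columns: [U * r^0 .. U * r^order, r^0 .. r^order]
        A = np.hstack([(U[k + r] * (r ** p))[:, None] for p in range(order + 1)]
                      + [(r ** p).astype(complex)[:, None] for p in range(order + 1)])
        theta, *_ = np.linalg.lstsq(A, Y[k + r], rcond=None)
        G[k] = theta[0]            # constant polynomial term = G(Omega_k)
    return G
```

Because the transient term is estimated and removed per line, no window function is needed, which is the mechanism by which these methods suppress leakage bias relative to the H1 estimator.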
Multi-axis testing has become a popular test method because it provides a more realistic simulation of a field environment when compared to traditional vibration testing. However, field data may not be available to derive the multi-axis environment. This means that methods are needed to generate “virtual field data” that can be used in place of measured field data. Transfer path analysis (TPA) has been suggested as a method to do this since it can be used to estimate the excitation forces on a legacy system and then apply these forces to a new system to generate virtual field data. This chapter will provide a review of using TPA methods to do this. It will include a brief background on TPA, discuss the benefits of using TPA to compute virtual field data, and delve into the areas for future work that could make TPA more useful in this application.
The U.S. Department of Energy is funding research to study the consequences of postclosure criticality on the performance of a generic repository by (1) identifying the features, events, and processes (FEPs) that need to be considered in such an analysis, (2) developing the tools needed to model the relevant FEPs in a postclosure performance assessment, and (3) conducting analyses both with and without the occurrence of a postclosure criticality and comparing the results. This paper describes progress in this area of research and presents the results to date of analyzing the consequences of a postulated steady-state criticality in a hypothetical saturated shale repository. Preliminary results indicate that postclosure criticality would not affect repository performance.
In many applications, one can only access inexact gradients and inexact Hessian-vector products. Thus, it is essential to consider algorithms that can handle such inexact quantities with guaranteed convergence to a solution. An inexact, adaptive, and provably convergent semismooth Newton method is considered to solve constrained optimization problems. The focus is on dynamic optimization problems, which are known to be computationally expensive. A memory-efficient semismooth Newton algorithm is introduced for these problems. The source of both efficiency and inexactness is randomized matrix sketching. Applications to optimization problems constrained by partial differential equations are also considered.
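To illustrate randomized matrix sketching as a source of controlled inexactness, here is a generic Gaussian sketch-and-solve least-squares demo (not the paper's semismooth Newton algorithm; sizes are arbitrary):

```python
import numpy as np

# Gaussian sketch-and-solve: compress a tall least-squares problem with a
# random sketch operator S and solve in the smaller space. The sketched
# solution is inexact but much cheaper, mirroring the paper's trade-off.
rng = np.random.default_rng(0)
m, n, s = 20000, 50, 500            # rows, columns, sketch size (assumed)
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n) + 0.01 * rng.standard_normal(m)

S = rng.standard_normal((s, m)) / np.sqrt(s)   # Gaussian sketch operator
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

rel = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(f"relative error introduced by sketching: {rel:.2e}")
```

An adaptive algorithm of the kind described would monitor such error bounds and enlarge the sketch only when the inexactness threatens convergence.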
The siting of nuclear waste is a process that requires consideration of public concerns. This report demonstrates the significant potential for natural language processing techniques to gain insights into public narratives around “nuclear waste.” Specifically, the report highlights that the general discourse regarding “nuclear waste” within the news media has fluctuated in prevalence compared to “nuclear” topics broadly over recent years, with commonly mentioned entities reflecting a limited variety of geographies and stakeholders. General sentiments within the “nuclear waste” articles appear to use neutral language, suggesting that a scientific or “facts-only” framing of “waste”-related issues dominates coverage; however, the exact nuances should be further evaluated. The implications of a number of these insights about how nuclear waste is framed in traditional media (e.g., regarding emerging technologies, historical events, and specific organizations) are discussed. This report lays the groundwork for larger, more systematic research using, for example, transformer-based techniques and covariance analysis to better understand relationships among “nuclear waste” and other nuclear topics, sentiments toward specific entities, and patterns across space and time (including in particular regions). By identifying priorities and knowledge needs, these data-driven methods can complement and inform engagement strategies that promote dialogue and mutual learning regarding nuclear waste.
Multifidelity (MF) uncertainty quantification (UQ) seeks to leverage and fuse information from a collection of models to achieve greater statistical accuracy with respect to a single-fidelity counterpart, while maintaining an efficient use of computational resources. Despite many recent advancements in MF UQ, several challenges remain and these often limit its practical impact in certain application areas. In this manuscript, we focus on the challenges introduced by nondeterministic models to sampling MF UQ estimators. Nondeterministic models produce different responses for the same inputs, which means their outputs are effectively noisy. MF UQ is complicated by this noise since many state-of-the-art approaches rely on statistics, e.g., the correlation among models, to optimally fuse information and allocate computational resources. We demonstrate how the statistics of the quantities of interest, which impact the design, effectiveness, and use of existing MF UQ techniques, change as functions of the noise. With this in hand, we extend the unifying approximate control variate framework to account for nondeterminism, providing for the first time a rigorous means of comparing the effect of nondeterminism on different multifidelity estimators and analyzing their performance with respect to one another. Numerical examples are presented throughout the manuscript to illustrate and discuss the consequences of the presented theoretical results.
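To make the noise effect concrete: if two model outputs \(f\) and \(g\) are observed with independent zero-mean noises \(\varepsilon\) and \(\delta\), the correlation available to the estimator is attenuated from its noise-free value \(\rho\) (an elementary identity, shown here for illustration):

\[
\tilde{\rho} \;=\; \frac{\mathrm{Cov}(f,g)}{\sqrt{(\sigma_f^2+\sigma_\varepsilon^2)(\sigma_g^2+\sigma_\delta^2)}} \;=\; \rho \left[\left(1+\frac{\sigma_\varepsilon^2}{\sigma_f^2}\right)\!\left(1+\frac{\sigma_\delta^2}{\sigma_g^2}\right)\right]^{-1/2}.
\]

Since approximate control variate estimators allocate resources based on such correlations, nondeterminism directly degrades the achievable variance reduction, which is the effect the extended framework accounts for.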
In this work, we evaluate the usefulness of nonsmooth basis functions for representing the periodic response of a nonlinear system subject to contact/impact behavior. As with sine and cosine basis functions for classical Fourier series, which have C∞ smoothness, nonsmooth counterparts with C0 smoothness are defined to develop a nonsmooth functional representation of the solution. Some properties of these basis functions are outlined, such as periodicity, derivatives, and orthogonality, which are useful for functional series applied via the Galerkin method. Least-squares fits of the classical Fourier series and nonsmooth basis functions are presented and compared using goodness-of-fit metrics for time histories from vibro-impact systems with varying contact stiffnesses. This formulation has the potential to significantly reduce the computational cost of harmonic balance solvers for nonsmooth dynamical systems. Rather than requiring many harmonics to capture a system response using classical, smooth Fourier terms, the frequency domain discretization could be captured by a combination of a finite Fourier series supplemented with nonsmooth basis functions to improve convergence of the solution for contact-impact problems.
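For concreteness, one standard C0 pair follows Pilipchuk's nonsmooth time transformation (our notation; the chapter's basis definitions may differ):

\[
\tau(t) = \frac{2}{\pi}\arcsin\!\Big(\sin\frac{\pi t}{2}\Big), \qquad e(t) = \dot{\tau}(t) = \operatorname{sgn}\!\Big(\cos\frac{\pi t}{2}\Big),
\]

where \(\tau\) is a period-4 triangle wave and \(e\) the associated square wave satisfying \(e^2 = 1\) almost everywhere, so that products and derivatives of series in \((\tau, e)\) remain within the same algebra, which is the property that makes Galerkin projections tractable.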
While recent research has greatly improved our ability to test and model nonlinear dynamic systems, it is rare that these studies quantify the effect that the nonlinearity would have on failure of the structure of interest. While several notable exceptions exist, such as the work of Hollkamp et al. on the failure of geometrically nonlinear skin panels for high-speed vehicles (see, e.g., Gordon and Hollkamp, "Reduced-Order Models for Acoustic Response Prediction," Technical Report AFRL-RB-WP-TR-2011-3040, Air Force Research Laboratory, Dayton, 2011), other studies have given little consideration to failure. This work studies the effect of common nonlinearities on the failure (and failure margins) of components that undergo durability testing in dynamic environments. This context differs from many engineering applications because one usually assumes that any nonlinearities have been fully exercised during the test.
Ionic liquid (IL) pretreatment methods show considerable promise for the efficient conversion of lignocellulosic feedstocks to fuels and chemicals. Because ILs have low vapor pressures, distillation-based methods of extracting them from biomass after pretreatment have historically been ignored in favor of alternative methods. We demonstrate a process to distill four acetate-based ionic liquids ([EthA][OAc], [PropA][OAc], [MAEthA][OAc], and [DMAEthA][OAc]) at low pressure and high purity that overcomes some disadvantages of “water washing” and “one pot” recovery methods. Of the four tested ILs, ethanolamine acetate ([EthA][OAc]) is shown to have the most favorable conversion metrics for commercial bioconversion processes, achieving 73.6% and 51.4% of theoretical glucose and xylose yields, respectively, and >85% recovery rates. Our process metrics are factored into a techno-economic analysis (TEA) in which [EthA][OAc] distillation is compared to other recovery methods, as well as to ethanolamine pretreatment, at both milliliter and liter scales. Although our TEA shows [EthA][OAc] distillation underperforming against other processes, we show a step-by-step avenue to reduce sugar production cost below the wholesale dextrose price at scale.
Wave energy converters (WECs) are designed to produce useful work from ocean waves. This useful work can take the form of electrical power or even pressurized water for, e.g., desalination. This report details the findings from a wave tank test focused on the production of useful work. To that end, the experimental system and test were specifically designed to validate models for power transmission throughout the WEC system. Additionally, the validity of co-design-informed changes to the power take-off (PTO) was assessed, and the changes were shown to provide the expected improvements in system performance.
Analytic relations that describe crack growth are vital for modeling experiments and building a theoretical understanding of fracture. Upon constructing an idealized model system for the crack and applying the principles of statistical thermodynamics, it is possible to formulate the rate of thermally activated crack growth as a function of load, but the result is analytically intractable. Here, an asymptotically correct theory is used to obtain analytic approximations of the crack growth rate from the fundamental theoretical formulation. These crack growth rate relations are compared to those that exist in the literature and are validated with respect to Monte Carlo calculations and experiments. The success of this approach is encouraging for future modeling endeavors that might consider more complicated fracture mechanisms, such as inhomogeneity or a reactive environment.
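Schematically, the quantity being approximated is a thermally activated rate of the generic Bell/Arrhenius type (an illustrative form only; the paper derives asymptotic approximations to the full, analytically intractable statistical-thermodynamic expression):

\[
k(f) \;\sim\; \nu \exp\!\left(-\frac{\Delta E_0 - f\,\Delta x}{k_B T}\right),
\]

where \(\nu\) is an attempt frequency, \(f\) the applied load, \(\Delta E_0\) the unloaded energy barrier, and \(\Delta x\) an activation length.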
Traditional Monte Carlo methods for particle transport utilize source iteration to express the solution, the flux density, of the transport equation as a Neumann series. Our contribution is to show that the particle paths simulated within source iteration are associated with the adjoint flux density and the adjoint particle paths are associated with the flux density. We make our assertion rigorous through the use of stochastic calculus by representing the particle path used in source iteration as a solution to a stochastic differential equation (SDE). The solution to the adjoint Boltzmann equation is then expressed in terms of the same SDE, and the solution to the Boltzmann equation is expressed in terms of the SDE associated with the adjoint particle process. An important consequence is that the particle paths used within source iteration simultaneously provide Monte Carlo samples of the flux density and adjoint flux density in the detector and source regions, respectively. The significant practical implication is that particle trajectories can be reused to obtain both forward and adjoint quantities of interest. To the best of our knowledge, the reuse of entire particle paths has not appeared in the literature. Monte Carlo simulations are presented to support the reuse of the particle paths.
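The duality underlying this path reuse can be stated compactly (the standard adjoint identity; the paper's contribution is making it rigorous at the level of particle paths via SDEs). With transport operator \(L\), source \(q\), and detector response \(\sigma_d\):

\[
L\varphi = q, \qquad L^{\dagger}\varphi^{\dagger} = \sigma_d, \qquad \langle \varphi,\, \sigma_d \rangle \;=\; \langle L\varphi,\, \varphi^{\dagger} \rangle \;=\; \langle q,\, \varphi^{\dagger} \rangle,
\]

so a single family of simulated paths can tally both the detector reading \(\langle \varphi, \sigma_d\rangle\) and the source-weighted adjoint \(\langle q, \varphi^{\dagger}\rangle\).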
Highlights:
- Novel protocol for extracting knowledge from previously performed finite element corrosion simulations using machine learning.
- Accurate predictions of corrosion current obtained five orders of magnitude faster than finite element simulations.
- An accurate machine-learning-based model capable of performing an effective and efficient search over the multi-dimensional input space to identify areas/zones where corrosion is more (or less) noticeable.
Different data pipelines and statistical methods are applied to photovoltaic (PV) performance datasets to quantify the performance loss rate (PLR). Since the real values of PLR are unknown, a variety of unvalidated values are reported. As such, the PV industry commonly assumes PLR based on statistically extracted ranges from the literature. However, the accuracy and uncertainty of PLR depend on several parameters including seasonality, local climatic conditions, and the response of a particular PV technology. In addition, the specific data pipeline and statistical method used affect the accuracy and uncertainty. To provide insights, a framework of ≈200 million synthetic simulations of PV performance datasets using data from different climates is developed. Time series with known PLR and data quality are synthesized, and large parametric studies are conducted to examine the accuracy and uncertainty of different statistical approaches over the contiguous US, with an emphasis on the publicly available and “standardized” library, RdTools. In the results, it is confirmed that PLRs from RdTools are unbiased on average, but the accuracy and uncertainty of individual PLR estimates vary with climate zone, data quality, PV technology, and choice of analysis workflow. Best practices and improvement recommendations based on the findings of this study are provided.
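A minimal year-on-year PLR estimate with RdTools, assuming rdtools ≥ 2 and a performance-normalized energy series (see the rdtools normalization helpers); the synthetic series below embeds a known −0.5%/yr trend for illustration.

```python
import pandas as pd
import rdtools

# Synthetic performance-normalized energy series with a known -0.5 %/yr
# linear degradation trend, sampled hourly over five years.
idx = pd.date_range("2015-01-01", "2020-01-01", freq="1h", tz="UTC")
years = (idx - idx[0]).total_seconds() / (365.25 * 24 * 3600)
energy_normalized = pd.Series(1.0 - 0.005 * years, index=idx)

# Year-on-year (YOY) degradation estimate with bootstrap confidence interval.
rd, rd_ci, info = rdtools.degradation_year_on_year(energy_normalized)
print(f"estimated PLR: {rd:.2f} %/yr, CI: {rd_ci}")
```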
Multiple-input/multiple-output (MIMO) vibration control often relies on a least-squares solution utilizing a matrix pseudo-inverse. While this is simple and effective for many cases, it lacks flexibility in assigning preference to specific control channels or degrees of freedom (DOFs). For example, the user may have some DOFs where accuracy is very important and other DOFs where accuracy is less important. This chapter shows a method for assigning weighting to control channels in the MIMO vibration control process. These weights can be constant or frequency-dependent functions depending on the application. An algorithm is presented for automatically selecting DOF weights based on a frequency-dependent data quality metric to ensure the control solution is only using the best, linear data. An example problem is presented to demonstrate the effectiveness of the weighted solution.
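A sketch of the weighted solution at a single frequency line; the FRF matrix, desired responses, and weights below are arbitrary illustrative values, and in practice the weights may vary with frequency or be driven by a data quality metric.

```python
import numpy as np

# Weighted MIMO control solution at one frequency line: instead of the
# plain pseudo-inverse X = H^+ R, weight each control channel so that
# errors on important DOFs cost more in the least-squares fit.
def weighted_input_estimate(H, R, w):
    """H: (n_ctl, n_drive) FRF matrix; R: (n_ctl,) desired responses;
    w: (n_ctl,) nonnegative channel weights."""
    W = np.diag(w)
    A = H.conj().T @ W @ H                  # weighted normal equations
    return np.linalg.solve(A, H.conj().T @ W @ R)

H = np.array([[1.0 + 0.2j, 0.3], [0.1, 0.8 - 0.1j], [0.4, 0.2]])
R = np.array([1.0, 0.5, 0.2])
print(weighted_input_estimate(H, R, w=np.array([10.0, 1.0, 0.1])))
```

This is the standard weighted least-squares solution \(X = (H^H W H)^{-1} H^H W R\); setting all weights equal recovers the ordinary pseudo-inverse result.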
We consider the problem of decentralized control of reactive power provided by distributed energy resources for voltage support in the distribution grid. We assume that the reactance matrix of the grid is unknown and potentially time-varying. We present a decentralized adaptive controller in which the reactive power at each inverter is set using a potentially heterogeneous droop curve, and we analyze the stability and the steady-state error of the resulting system. The effectiveness of the controller is validated in simulations using modified versions of the IEEE 13-bus and 8500-node test systems.
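A generic saturated droop law of the kind analyzed can be written as follows (our notation; the paper's adaptive scheme additionally adjusts the response online without knowledge of the reactance matrix):

\[
q_i \;=\; \Pi_{[\underline{q}_i,\;\overline{q}_i]}\!\left(-k_i\,\big(v_i - v_i^{\mathrm{ref}}\big)\right),
\]

where \(k_i > 0\) is inverter \(i\)'s droop slope (possibly heterogeneous across inverters), \(v_i\) its local voltage measurement, and \(\Pi\) the projection onto the inverter's reactive-power limits.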
Physical experiments are often expensive and time-consuming. Test engineers must certify the compatibility of aircraft and their weapon systems before they can be deployed in the field, but the required testing is time-consuming, expensive, and resource-limited. Adopting Bayesian adaptive designs is a promising way to borrow from the successes seen in the clinical trials domain. The use of predictive probability (PP) to stop testing early and make faster decisions is particularly appealing given the aforementioned constraints. Given the high-consequence nature of the tests performed in the national security space, a strong understanding of new methods is required before they are deployed. Although PP has been thoroughly studied for binary data, there is less work with continuous data, where many reliability studies are interested in certifying the specification limits of components. A simulation study evaluating the robustness of this approach indicates that early stopping based on PP is reasonably robust to minor assumption violations, especially when only a few interim analyses are conducted. The simulation study also compares PP to conditional power, showing their relative strengths and weaknesses. A post-hoc analysis exploring whether the release requirements of a weapon system from an aircraft are within specification with the desired reliability resulted in stopping the experiment early and saving 33% of the experimental runs.
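In generic notation, the predictive probability monitored at an interim analysis is (a standard definition; the paper's specifics for continuous data differ in detail):

\[
\mathrm{PP} \;=\; \int \mathbf{1}\!\left\{\Pr\!\big(\theta \in \mathcal{S} \mid y_{1:n},\, y_{\mathrm{new}}\big) \ge \gamma\right\} p\big(y_{\mathrm{new}} \mid y_{1:n}\big)\, dy_{\mathrm{new}},
\]

i.e., the posterior-predictive probability that, once the remaining observations \(y_{\mathrm{new}}\) are collected, the posterior probability of the success region \(\mathcal{S}\) will reach the threshold \(\gamma\); testing stops early when PP crosses a prespecified efficacy or futility boundary.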
Plenoptic background-oriented schlieren is a diagnostic technique that enables the measurement of three-dimensional refractive gradients by combining background-oriented schlieren with a plenoptic light field camera. This plenoptic camera is a modification of a traditional camera via the insertion of an array of microlenses between the imaging lens and the digital sensor. This allows the collection of both spatial and angular information on the incoming light rays and therefore provides three-dimensional information about the imaged scene. Background-oriented schlieren requires a relatively simple experimental configuration, including only a camera viewing a patterned background through the density field of interest. By using a plenoptic camera to capture background-oriented schlieren images, the optical distortion created by density gradients in three dimensions can be measured. This chapter is intended to review critical developments in plenoptic background-oriented schlieren imaging and provide an outlook for future applications of this measurement technique.
Exploding bridgewire detonators (EBWs) containing pentaerythritol tetranitrate (PETN) exposed to high temperatures may not function following discharge of the design electrical firing signal from a charged capacitor. Knowing the functionality of these arbitrarily oriented EBWs is crucial when making safety assessments of detonators in accidental fires. Orientation effects are only significant when the PETN is partially melted; the melting temperature can be measured with a differential scanning calorimeter. Nonmelting EBWs will be fully functional provided the detonator never exceeds 406 K (133 °C) for at least 1 h. Conversely, EBWs will not be functional once the average input pellet temperature exceeds 414 K (141 °C) for at least 1 min, which is long enough to cause the PETN input pellet to completely melt. Functionality of EBWs at temperatures between 406 and 414 K depends on orientation and can be predicted using a stratification model for downward-facing detonators, but is more complex for arbitrary orientations. A conservative rule of thumb is to assume that the EBWs are fully functional unless the PETN input pellet has completely melted.
Porous liquids (PLs) are attractive materials for gas separation and carbon sequestration due to their permanent internal porosity and high adsorption capacity. PLs containing zeolitic imidazolate frameworks (ZIFs), such as ZIF-8, form through the exclusion of aqueous solvents from the framework pores due to their hydrophobicity. The gas adsorption sites in ZIF-8-based PLs have not previously been identified; gas molecules could be captured in the ZIF-8 pore or adsorb at the ZIF-8 interface. To address this question, ab initio molecular dynamics was used to predict CO2 binding sites in a PL composed of a ZIF-8 particle solvated in a water, ethylene glycol, and 2-methylimidazole solvent system. The results show that CO2 energetically prefers to reside inside the ZIF-8 pore aperture due to strong van der Waals interactions with the terminal imidazoles. However, this binding site can be blocked by larger solvent molecules that have stronger adsorption interactions. In the simulations, CO2 molecules were unable to diffuse into the ZIF-8 pore, with adsorption instead occurring through binding at the ZIF-8 surface. Therefore, future design of ZIF-based PLs for enhanced CO2 adsorption should be based on the strength of gas binding at the solvated particle surface.
Brillouin scattering spectroscopy has been used to obtain an accurate (<1%) ρ-P equation of state (EOS) of 1:1 and 9:1 H2-He molar mixtures from 0.5 to 5.4 GPa at 296 K. Our calculated equations of state are in close agreement with the experimental data up to the freezing pressure of hydrogen at 5.4 GPa. The measured velocities agree, on average, to within 0.5% with an ideal mixing model. The ρ-P EOSs presented have a standard deviation of under 0.3% from the measured densities and deviate by under 1% from ideal mixing. Furthermore, a detailed discussion of the accuracy, precision, and sources of error in the measurement and analysis of our equations of state is presented.
Accurate distribution system models are becoming increasingly critical for grid modernization tasks, and inaccurate phase labels are one type of modeling error that can have broad impacts on analyses using the distribution system models. This work demonstrates a phase identification methodology that leverages advanced metering infrastructure (AMI) data and additional data streams from sensors (relays in this case) placed throughout the medium-voltage sector of distribution system feeders. Intuitive confidence metrics are employed to increase the credibility of the algorithm predictions and reduce the incidence of false-positive predictions. The method is first demonstrated on a synthetic dataset under known conditions for robustness testing with measurement noise, meter bias, and missing data. Then, four utility feeders are tested, and the algorithm’s predictions are proven to be accurate through field validation by the utility. Lastly, the ability of the method to increase the accuracy of simulated voltages using the corrected model compared to actual measured voltages is demonstrated through quasi-static time-series (QSTS) simulations. The proposed methodology is a good candidate for widespread implementation because it is accurate on both the synthetic and utility test cases and is robust to measurement noise and other issues.
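A toy correlation-based sketch of the assignment-with-confidence idea (illustrative only, not the paper's algorithm): each customer's AMI voltage series is compared to reference series from the medium-voltage sensors for phases A/B/C, the best-correlated phase is assigned, and the confidence is the margin over the runner-up.

```python
import numpy as np

# Correlation-based phase identification with a simple confidence metric.
def identify_phase(v_customer, v_refs):
    """v_customer: (n_samples,) customer voltage series;
    v_refs: (3, n_samples) reference series for phases A, B, C."""
    corrs = np.array([np.corrcoef(v_customer, ref)[0, 1] for ref in v_refs])
    order = np.argsort(corrs)[::-1]
    phase = "ABC"[order[0]]
    confidence = corrs[order[0]] - corrs[order[1]]   # separation margin
    return phase, confidence

rng = np.random.default_rng(0)
refs = rng.standard_normal((3, 2000))
cust = refs[1] + 0.3 * rng.standard_normal(2000)     # noisy phase-B meter
print(identify_phase(cust, refs))                    # ('B', ...)
```

A low confidence margin flags an assignment for review rather than committing it to the model, which is the mechanism by which such metrics suppress false-positive predictions.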
High-entropy alloys (HEAs) represent an interesting alloying strategy that can yield exceptional performance properties needed across a variety of technology applications, including hydrogen storage. Examples include ultrahigh volumetric capacity materials (BCC alloys → FCC dihydrides) with improved thermodynamics relative to conventional high-capacity metal hydrides (like MgH2), but still further destabilization is needed to reduce operating temperature and increase system-level capacity. In this work, we demonstrate efficient hydride destabilization strategies by synthesizing two new Al0.05(TiVNb)0.95-xMox (x = 0.05, 0.10) compositions. We specifically evaluate the effect of molybdenum (Mo) addition on the phase structure, microstructure, hydrogen absorption, and desorption properties. Both alloys crystallize in a bcc structure with decreasing lattice parameters as the Mo content increases. The alloys can rapidly absorb hydrogen at 25 °C with capacities of 1.78 H/M (2.79 wt %) and 1.79 H/M (2.75 wt %) with increasing Mo content. Pressure-composition isotherms suggest a two-step reaction for hydrogen absorption to a final fcc dihydride phase. The experiments demonstrate that increasing Mo content results in a significant hydride destabilization, which is consistent with predictions from a gradient boosting tree data-driven model for metal hydride thermodynamics. Furthermore, improved desorption properties with increasing Mo content and reversibility were observed by in situ synchrotron X-ray diffraction, in situ neutron diffraction, and thermal desorption spectroscopy.
As the prospect of exceeding global temperature targets set forth in the Paris Agreement becomes more likely, methods of climate intervention are increasingly being explored. With this increased interest there is a need for an assessment process to understand the range of impacts across different scenarios against a set of performance goals in order to support policy decisions. The methodology and tools developed for Performance Assessment (PA) of nuclear waste repositories share many similarities with the needs and requirements of a framework for climate intervention. Using PA, we outline and test an evaluation framework for climate intervention, called Performance Assessment for Climate Intervention (PACI), with a focus on Stratospheric Aerosol Injection (SAI). We define a set of key technical components for the example PACI framework, which include identifying performance goals, defining the extent of the system, and identifying which features, events, and processes are relevant and impactful to the model output for the system given the performance goals. Having identified a set of performance goals, the performance of the system, including uncertainty, can then be evaluated against these goals. Using the Geoengineering Large Ensemble (GLENS) scenario, we develop a set of performance goals for monthly temperature, precipitation, drought index, soil water, solar flux, and surface runoff. The assessment assumes that targets may be framed in a risk-risk context via a risk ratio: the ratio of the risk of exceeding the performance goal under the SAI scenario to the risk of exceeding the performance goal under the emissions scenario. From regional responses across multiple climate variables, it is then possible to assess which pathway carries lower risk relative to the goals. The assessment is not comprehensive but rather a demonstration of the evaluation of an SAI scenario. Future work is needed to develop a more complete assessment that would provide additional simulations to cover parametric and aleatory uncertainty, enable a deeper understanding of impacts, inform scenario selection, and allow further refinements to the approach.
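In symbols, the risk ratio for a given performance goal \(x_{\mathrm{goal}}\) restates the definition above:

\[
\mathrm{RR} \;=\; \frac{\Pr\!\big(X > x_{\mathrm{goal}} \mid \mathrm{SAI}\big)}{\Pr\!\big(X > x_{\mathrm{goal}} \mid \mathrm{emissions}\big)},
\]

so \(\mathrm{RR} < 1\) indicates the SAI pathway carries lower risk of exceeding that goal than the uncontrolled emissions pathway, and \(\mathrm{RR} > 1\) the reverse.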
pvlib python is a community-developed, open-source software toolbox for simulating the performance of solar photovoltaic (PV) energy components and systems. It provides reference implementations of over 100 empirical and physics-based models from the peer-reviewed scientific literature, including solar position algorithms, irradiance models, thermal models, and PV electrical models. In addition to individual low-level model implementations, pvlib python provides high-level workflows that chain these models together like building blocks to form complete “weather-to-power” photovoltaic system models. It also provides functions to fetch and import a wide variety of weather datasets useful for PV modeling. pvlib python has been developed since 2013 and follows modern best practices for open-source python software, with comprehensive automated testing, standards-based packaging, and semantic versioning. Its source code is developed openly on GitHub and releases are distributed via the Python Package Index (PyPI) and the conda-forge repository. pvlib python’s source code is made freely available under the permissive BSD-3 license. Here we (the project’s core developers) present an update on pvlib python, describing capability and community development since our 2018 publication (Holmgren, Hansen, & Mikofski, 2018).
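A minimal "weather-to-power" example using the high-level ModelChain workflow. The module and inverter records are examples from the SAM databases bundled with pvlib; the site, time range, and mounting parameters are arbitrary choices.

```python
import pandas as pd
import pvlib

# Minimal weather-to-power workflow with ModelChain, using clear-sky
# irradiance as the weather input for simplicity.
location = pvlib.location.Location(35.05, -106.54, tz="US/Mountain",
                                   altitude=1619, name="Albuquerque")
times = pd.date_range("2023-06-01", periods=24, freq="1h", tz=location.tz)
weather = location.get_clearsky(times)        # ghi/dni/dhi columns

module = pvlib.pvsystem.retrieve_sam("SandiaMod")["Canadian_Solar_CS5P_220M___2009_"]
inverter = pvlib.pvsystem.retrieve_sam("cecinverter")["ABB__MICRO_0_25_I_OUTD_US_208__208V_"]
temp_params = pvlib.temperature.TEMPERATURE_MODEL_PARAMETERS["sapm"]["open_rack_glass_glass"]

system = pvlib.pvsystem.PVSystem(surface_tilt=30, surface_azimuth=180,
                                 module_parameters=module,
                                 inverter_parameters=inverter,
                                 temperature_model_parameters=temp_params)

mc = pvlib.modelchain.ModelChain(system, location)
mc.run_model(weather)                         # chains the component models
print(mc.results.ac.max())                    # peak AC power (W)
```

ModelChain infers appropriate submodels (here the Sandia array performance model) from the supplied parameter sets, which is the "building blocks" behavior the text describes.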
The thorium fuel cycle is emerging as an attractive alternative to conventional nuclear fuel cycles, as it does not require the enrichment of uranium for long-term sustainability. The operating principle of this fuel cycle is the irradiation of 232Th to produce 233U, which is fissile and sustains the fission chain reaction. 233U poses unique challenges for nuclear safeguards, as it is associated with a uniquely extreme γ-ray environment from 232U contamination, which limits the feasibility of γ-ray-based assay, as well as more conservative accountability requirements than those set for 235U by the International Atomic Energy Agency. Consequently, instrumentation used for safeguarding 235U in traditional fuel cycles may be inapplicable. It is essential that the nondestructive signatures of 233U be characterized so that nuclear safeguards can be applied to thorium fuel-cycle facilities as they come online. In this work, a set of 233U3O8 plates, containing 984 g of 233U, was measured at the National Criticality Experiments Research Center. A high-pressure 4He gaseous scintillation detector, which is insensitive to γ-rays, was used to perform a passive fast neutron spectral signature measurement of 233U3O8, and was used in conjunction with a pulsed deuterium-tritium neutron generator to demonstrate the differential die-away signature of this material. Furthermore, an array of 3He detectors was used in conjunction with the same neutron generator to measure the delayed neutron time profile of 233U, which is unique to this nuclide. These measurements provide a benchmark for future nondestructive assay instrumentation development and demonstrate a set of key neutron signatures to be leveraged for nuclear safeguards in the thorium fuel cycle.
Absolute measurements of solid-material compressibility by magnetically driven shockless dynamic compression experiments to multi-megabar pressures have the potential to greatly improve the accuracy and precision of pressure calibration standards for use in diamond anvil cell experiments. To this end, we apply characteristics-based inverse Lagrangian analysis (ILA) to 11 sets of ramp-compression data on pure platinum (Pt) metal and then reduce the resulting weighted-mean stress-strain curve to the principal isentrope and room-temperature isotherm using simple models for yield stress and Grüneisen parameter. We introduce several improvements to methods for ILA and quasi-isentrope reduction, the latter including calculation of corrections in wave speed instead of stress and pressure to render results largely independent of initial yield stress while enforcing thermodynamic consistency near zero pressure. More importantly, we quantify in detail the propagation of experimental uncertainty through ILA and model uncertainty through quasi-isentrope reduction, considering all potential sources of error except the electrode and window material models used in ILA. Compared to previous approaches, we find larger uncertainty in longitudinal stress. Monte Carlo analysis demonstrates that uncertainty in the yield-stress model constitutes by far the largest contribution to uncertainty in quasi-isentrope reduction corrections. We present a new room-temperature isotherm for Pt up to 444 GPa, with 1-sigma uncertainty at that pressure of just under ±1.2%; the latter is about a factor of three smaller than uncertainty previously reported for multi-megabar ramp-compression experiments on Pt. The result is well represented by a Vinet-form compression curve with (isothermal) bulk modulus K0 = 270.3 ± 3.8 GPa, pressure derivative K0′ = 5.66 ± 0.10, and correlation coefficient R(K0, K0′) = −0.843.
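For reference, the Vinet compression curve has the standard form:

\[
P(V) \;=\; 3K_0\,\frac{1-x}{x^{2}}\,\exp\!\big[\eta\,(1-x)\big], \qquad x=\left(\frac{V}{V_0}\right)^{1/3},\quad \eta=\tfrac{3}{2}\big(K_0'-1\big),
\]

where \(V_0\) is the zero-pressure volume, \(K_0\) the isothermal bulk modulus, and \(K_0'\) its pressure derivative; the fitted values above fully specify the reported isotherm.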
As additive manufacturing (AM) has become a reliable method for creating complex and unique hardware rapidly, the quality assurance of printed parts remains a priority. In situ process monitoring offers an approach for performing quality control while simultaneously minimizing post-production inspection. For extrusion printing processes, direct linkages between extrusion pressure fluctuations and print defects can be established by integrating pressure sensors onto the print head. In this work, the sensitivity of process monitoring is tested using engineered spherical defects. Pressure and force sensors located near an ink reservoir and just before the nozzle are shown to assist in identification of air bubbles, changes in height between the print head and build surface, clogs, and particle aggregates with a detection threshold of 60–70% of the nozzle diameter. Visual evidence of printed bead distortion is quantified using optical image analysis and correlated to pressure measurements. Importantly, this methodology provides an ability to monitor the quality of AM parts produced by extrusion printing methods and can be accomplished using commonly available pressure-sensing equipment.