Robot manipulation of the environment often uses force-feedback control approaches such as impedance control. Impedance controllers can be designed to be passive and to work well while coupled to a variety of dynamic environments. However, in the presence of a high gear ratio and compliance in the manipulator links, non-passive system properties may result in force-feedback instabilities when coupled to certain environments. This necessitates an approach that ensures stability when using impedance control methods to interact with a wide range of environments. We propose a method for improving the stability and steady-state convergence of an impedance controller by using a deep neural network to adapt its damping parameter. In this paper, a dynamic model and an impedance-controlled simulated system are presented and used to analyze the coupled dynamic behavior in worst-case environments. This simulation environment is used for Nyquist and closed-loop stability analyses to algorithmically determine updated impedance damping parameters that secure stability and the desired performance. The deep neural network takes as inputs the present impedance control parameters and the environmental dynamic properties and outputs an updated value of damping that improves performance. In a data set of 10,000 combinations of control parameters and environmental dynamics, 20.3% of the cases result in instability or fail to meet the convergence criterion. Our deep neural network reduces the rate of instability and failed control performance to 2.29%. The network architecture designed to achieve this improvement is presented and compared to other architectures and their respective performances.
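As a concrete illustration, the sketch below shows a small network of this kind in PyTorch. The input layout, layer sizes, and positivity constraint on the output are illustrative assumptions, not the architecture reported above; training targets would come from the Nyquist and closed-loop stability analyses.

```python
# Hypothetical sketch: an MLP mapping current impedance parameters and
# environment properties to an updated damping value. All sizes are assumed.
import torch
import torch.nn as nn

class DampingNet(nn.Module):
    def __init__(self, n_inputs=5, n_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1), nn.Softplus(),  # keeps the damping output positive
        )

    def forward(self, x):
        return self.net(x)

# Assumed input layout: [controller stiffness, controller damping,
#                        environment stiffness, environment damping, environment mass]
model = DampingNet()
x = torch.tensor([[500.0, 10.0, 2000.0, 5.0, 1.0]])
b_new = model(x)  # training targets would come from the stability analyses above
print(b_new.item())
```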
Chemistry tabulation is a common approach in practical simulations of turbulent combustion at engineering scales. Linear interpolants have traditionally been used for accessing precomputed multidimensional tables but suffer from large memory requirements and discontinuous derivatives. Higher-degree interpolants address some of these restrictions but are similarly limited to relatively low-dimensional tabulation. Artificial neural networks (ANNs) can overcome these limitations but cannot guarantee the same accuracy as interpolants and introduce challenges in reproducibility and reliable training. These challenges grow as the complexity of the physics represented within the tabulation increases. In this manuscript, we assess the efficiency, accuracy, and memory requirements of Lagrange polynomials, tensor-product B-splines, and ANNs as tabulation strategies. We analyze results in the context of nonadiabatic flamelet modeling, where higher dimension counts are necessary. While ANNs do not require structured data, a benefit for representing complex physics, interpolation approaches often rely on some structuring of the table. Interpolation using structured table inputs that are not directly related to the variables transported in a simulation can incur additional query costs; this is demonstrated in the present implementation of heat losses. We show that ANNs, despite being difficult to train and reproduce, can be advantageous for the high-dimensional, unstructured datasets relevant to nonadiabatic flamelet models. We also demonstrate that Lagrange polynomials deliver significant speedup at similar accuracy compared to B-splines.
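For reference, the snippet below sketches a one-dimensional Lagrange-polynomial table lookup using SciPy's barycentric form on a Chebyshev grid. The interpolated function is a stand-in for a tabulated chemistry quantity; the grid, degree, and function are illustrative assumptions, and the tables discussed above are multidimensional.

```python
# Minimal 1-D sketch of Lagrange-polynomial tabulation; the function f is a
# stand-in for, e.g., a tabulated source term (illustrative, not from the paper).
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda z: np.exp(-5.0 * (z - 0.4) ** 2)
n = 12
nodes = 0.5 * (1 - np.cos(np.pi * np.arange(n + 1) / n))  # Chebyshev points on [0, 1]
interp = BarycentricInterpolator(nodes, f(nodes))         # degree-n Lagrange interpolant

z = np.linspace(0.0, 1.0, 1001)
print("max abs error:", np.max(np.abs(interp(z) - f(z))))
```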
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as “tail-slapping,” or impingement of the rear of the projectile on the cavitation vapor-water interface that envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes’ capabilities to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front-face chamfer angle has the largest influence on stability, with low angles being more stable.
Previous research has provided strong evidence that CO2 and H2O gasification reactions can provide non-negligible contributions to the consumption rates of pulverized coal (pc) char during combustion, particularly in oxy-fuel environments. Fully quantifying the contribution of these gasification reactions has proven difficult, due to the dearth of knowledge of gasification rates at the elevated particle temperatures associated with typical pc char combustion processes, as well as the complex interaction of oxidation and gasification reactions. Gasification reactions tend to become more important at higher char particle temperatures (because of their high activation energy), and they tend to reduce pc oxidation through their endothermicity (i.e., a cooling effect). The work reported here attempts to quantify the influence of the CO2 gasification reaction in a rigorous manner by combining experimental measurements of the particle temperatures and consumption rates of size-classified pc char particles in tailored oxy-fuel environments with simulations from a detailed reacting porous particle model. The results demonstrate that a specific gasification reaction rate relative to the oxidation rate (within an accuracy of approximately +/- 20% of the pre-exponential value) is consistent with the experimentally measured char particle temperatures and burnout rates in oxy-fuel combustion environments. Conversely, the results also show, in agreement with past calculations, that it is extremely difficult to construct a set of kinetics that does not substantially overpredict the particle temperature increase in strongly oxygen-enriched N2 environments. This latter result is believed to stem from deficiencies in standard oxidation mechanisms that fail to account for falloff in char oxidation rates at high temperatures.
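The temperature argument above can be made concrete with Arrhenius rate constants. The parameter values below are hypothetical, not the calibrated kinetics from this work; they only illustrate how a higher activation energy makes the gasification rate grow faster with particle temperature than the oxidation rate.

```python
# Illustrative Arrhenius comparison (all parameter values assumed): a higher
# activation energy makes the CO2 gasification rate increase faster with
# particle temperature, so its relative contribution grows at high T.
import numpy as np

R = 8.314                      # J/mol-K
A_ox, Ea_ox = 1.0e5, 1.3e5     # oxidation pre-exponential and activation energy
A_ga, Ea_ga = 1.0e8, 2.5e5     # gasification (higher activation energy)

for T in (1600.0, 2000.0, 2400.0):   # char particle temperatures, K
    k_ox = A_ox * np.exp(-Ea_ox / (R * T))
    k_ga = A_ga * np.exp(-Ea_ga / (R * T))
    print(f"T = {T:.0f} K  k_gasification/k_oxidation = {k_ga / k_ox:.3f}")
```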
We propose a set of benchmark tests for current-voltage (IV) curve fitting algorithms. Benchmark tests enable transparent and repeatable comparisons among algorithms and allow algorithm improvement to be measured over time. The absence of such tests contributes to the proliferation of fitting methods and inhibits consensus on best practices. The benchmarks include simulated curves with known parameter solutions, with and without simulated measurement error. We implement the reference tests on an automated scoring platform and invite algorithm submissions in an open competition for accurate and performant algorithms.
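A minimal sketch of the benchmark construction, assuming the single-diode model (parameter values below are illustrative): simulate an IV curve from known parameters via the standard explicit Lambert-W solution, add synthetic measurement error, and score a candidate fitting algorithm against the known solution.

```python
# Simulate a single-diode IV curve with known parameters (values assumed).
import numpy as np
from scipy.special import lambertw

def single_diode_iv(v, IL, I0, Rs, Rsh, a):
    """Current from I = IL - I0*(exp((V+I*Rs)/a) - 1) - (V+I*Rs)/Rsh,
    solved explicitly with the Lambert W function."""
    c = 1.0 + Rs / Rsh
    b = IL + I0 - v / Rsh
    k = I0 * Rs / (a * c)
    m = v / a + Rs * b / (a * c)
    u = m - lambertw(k * np.exp(m)).real
    return (a * u - v) / Rs

v = np.linspace(0.0, 44.0, 200)
i_true = single_diode_iv(v, IL=6.0, I0=1e-9, Rs=0.5, Rsh=300.0, a=2.0)
rng = np.random.default_rng(0)
i_meas = i_true + rng.normal(0.0, 0.01, i_true.shape)  # simulated measurement error
# A benchmark would hand (v, i_meas) to a candidate fitting algorithm and
# compare its recovered (IL, I0, Rs, Rsh, a) to the known values above.
```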
The DevOps movement, which aims to accelerate the continuous delivery of high-quality software, has taken a leading role in reshaping the software industry. Likewise, there is growing interest in applying DevOps tools and practices in the domains of computational science and engineering (CSE) to meet the ever-growing demand for scalable simulation and analysis. Translating insights from industry to research computing, however, remains an ongoing challenge; DevOps for science and engineering demands adaptation and innovation in those tools and practices. There is a need to better understand the challenges faced by DevOps practitioners in CSE contexts in bridging this divide. To that end, we conducted a participatory action research study to collect and analyze the experiences of DevOps practitioners at a major US national laboratory through the use of storytelling techniques. We share lessons learned and present opportunities for future investigation into DevOps practice in the CSE domain.
The structure-property linkage is, alongside the process-structure linkage, one of the two most important relationships in materials science, especially for metals and polycrystalline alloys. The stochastic nature of microstructures calls for a robust approach to reliably address this linkage, and uncertainty quantification (UQ) therefore plays an essential role. To probe the structure-property linkage, many multi-scale integrated computational materials engineering (ICME) tools have been proposed and developed over the last decade to accelerate the material design process in the spirit of the Materials Genome Initiative (MGI), notably crystal plasticity finite element models (CPFEM) and phase-field simulations. Machine learning (ML) methods, including deep learning and physics-informed/-constrained approaches, can also be applied to approximate the computationally expensive ICME models, allowing one to navigate both structure and property spaces efficiently. Since UQ also plays a crucial role in verification and validation for both ICME and ML models, it must be part of the picture. In this paper, we summarize a few of our recent research efforts addressing UQ aspects of homogenized properties using CPFEM in a big-picture context.
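As a hedged sketch of the surrogate idea, the snippet below fits a Gaussian-process surrogate to a toy one-dimensional function standing in for an expensive CPFEM evaluation; the surrogate returns both a prediction and an uncertainty estimate, which is what makes it directly useful for UQ. The function, kernel, and training design are illustrative assumptions.

```python
# Toy surrogate-with-UQ sketch (function, kernel, and design are assumed).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):            # stand-in for a CPFEM homogenized property
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0, 2, 8).reshape(-1, 1)     # a few "expensive" evaluations
y_train = expensive_model(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)
X_query = np.array([[0.37], [1.62]])
mean, std = gp.predict(X_query, return_std=True)  # prediction plus uncertainty
print(mean, std)
```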
The current interest in hypersonic flows and the growing importance of plasma applications necessitate the development of diagnostics for high-enthalpy flow environments. Reliable and novel experimental data at relevant conditions will drive engineering and modeling efforts forward significantly. This study demonstrates the use of nanosecond Coherent Anti-Stokes Raman Scattering (CARS) to measure temperature in an atmospheric, high-temperature (> 5500 K) air plasma. The experimental configuration is of interest because the plasma is close to thermodynamic equilibrium and the setup is a test-bed for heat shield materials. The determination of the non-resonant background at such high temperatures is explored, and rotational-vibrational equilibrium temperatures of the N2 ground state are determined via fits of theory to measured spectra. Results show that the accuracy of the temperature measurements is affected by slow periodic variations in the plasma, causing sampling error. Moreover, depending on the experimental configuration, the measurements can be affected by two-beam interaction, which causes a bias towards lower temperatures, and by stimulated Raman pumping, which causes a bias towards higher temperatures. The successful demonstration of CARS at the present conditions, and the exploration of its sensitivities, paves the way towards more complex measurements, e.g., close to interfaces in high-enthalpy plasma flows.
Subhourly changes in solar irradiance can lead to energy models being biased high if realistic distributions of irradiance values are not reflected in the resource data and model. This is particularly true in solar facility designs with high inverter loading ratios (ILRs). When resource data with sufficient temporal and spatial resolution is not available for a site, synthetic variability can be added to the data that is available in an attempt to address this issue. In this work, we demonstrate the use of anonymized commercial resource datasets with synthetic variability and compare results with previous estimates of model bias due to inverter clipping and increasing ILR.
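The bias mechanism can be seen in a few lines of numpy (the power values below are synthetic, not the paper's data): clipping an hourly-averaged DC power loses less energy than clipping each subhourly value, so the hourly model reads high.

```python
# Average-then-clip vs. clip-then-average for inverter output (synthetic data).
import numpy as np

ac_limit = 0.8                                           # inverter limit, per-unit of DC rating
p_dc_subhourly = np.array([1.0, 0.4, 1.0, 0.5, 0.95, 0.45])  # variable DC power within an hour
p_ac_subhourly = np.minimum(p_dc_subhourly, ac_limit).mean() # clip, then average
p_ac_hourly = min(p_dc_subhourly.mean(), ac_limit)           # average, then clip

print(f"subhourly model: {p_ac_subhourly:.3f}  hourly model: {p_ac_hourly:.3f}")
# 0.625 vs 0.717 here: the hourly-resolution model is biased high.
```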
Machining-based deformation processing is used to produce metal foil and flat wire (strip) with suitable properties and quality for electrical power and renewable energy applications. In contrast to conventional multistage rolling, the strip is produced in a single step and with much less process energy. Examples are presented from metal systems of varied workability and strip products of varied size and production rate. By utilizing the large-strain deformation intrinsic to cutting, bulk strip with an ultrafine-grained microstructure and a crystallographic shear texture favourable for formability is achieved. Implications for the production of commercial strip for electric motor applications and battery electrodes are discussed.
Inverse problems constrained by partial differential equations (PDEs) play a critical role in model development and calibration. In many applications, there are multiple uncertain parameters in a model that must be estimated. However, the high dimensionality of the parameters and the computational complexity of the PDE solves make such problems challenging. A common approach is to reduce the dimension by fixing some parameters (which we call auxiliary parameters) to a best estimate and using techniques from PDE-constrained optimization to estimate the others. In this article, hyper-differential sensitivity analysis (HDSA) is used to assess the sensitivity of the solution of the PDE-constrained optimization problem to changes in the auxiliary parameters. Foundational assumptions for HDSA require satisfaction of the optimality conditions, which is not always practically feasible as a result of ill-posedness in the inverse problem. We introduce novel theoretical and computational approaches that justify and enable HDSA for ill-posed inverse problems by projecting the sensitivities onto likelihood-informed subspaces and defining a posteriori updates. Our proposed framework is demonstrated on a nonlinear multiphysics inverse problem motivated by the estimation of spatially heterogeneous material properties in the presence of spatially distributed parametric modeling uncertainties.
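A toy finite-dimensional illustration of the underlying principle (not the paper's PDE-constrained setting): for z*(theta) = argmin_z J(z, theta), the implicit function theorem gives the sensitivity dz*/dtheta = -(d2J/dz2)^(-1) d2J/(dz dtheta), which the sketch below verifies against finite differences for a quadratic objective.

```python
# Hyper-differential sensitivity of an optimum, quadratic toy problem.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)); A = A @ A.T + 4 * np.eye(4)  # Hessian d2J/dz2 (SPD)
B = rng.normal(size=(4, 2))                               # couples z and theta
b = rng.normal(size=4)

def z_star(theta):
    # J(z, theta) = 0.5 z^T A z - z^T (b + B theta); optimality: A z = b + B theta
    return np.linalg.solve(A, b + B @ theta)

sens = np.linalg.solve(A, B)          # dz*/dtheta, the hyper-differential sensitivity
theta = np.zeros(2); d = np.array([1e-6, 0.0])
fd = (z_star(theta + d) - z_star(theta)) / 1e-6   # finite-difference check
print(np.allclose(fd, sens[:, 0]))
```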
We present single-event upset (SEU) sensitivity and single-event latchup (SEL) results from proton and heavy-ion testing performed on NVIDIA Xavier NX and AMD Ryzen V1605B GPU devices in both static and dynamic operation.
This is an investigation of two experimental datasets of laminar hypersonic flows over a double-cone geometry, acquired in the Calspan-University at Buffalo Research Center's Large Energy National Shock (LENS)-XX expansion tunnel. These datasets have yet to be modeled accurately. A previous paper suggested that this could partly be due to mis-specified inlet conditions. In that work, the authors solved a Bayesian inverse problem to infer the inlet conditions of the LENS-XX test section and found that in one case they lay outside the uncertainty bounds specified in the experimental dataset; however, the inference was performed using approximate surrogate models. In this paper, the experimental datasets are revisited and inversions for the tunnel test-section inlet conditions are performed with a Navier-Stokes simulator. The inversion is deterministic and can provide uncertainty bounds on the inlet conditions under a Gaussian assumption. It was found that the deterministic inversion yields inlet conditions that do not agree with those stated in the experiments. An a posteriori method is also presented to check the validity of the Gaussian assumption for the posterior distribution. This paper contributes to ongoing work on the assessment of datasets from challenging experiments conducted in extreme environments, where the experimental apparatus is pushed to the margins of its design and performance envelopes.
As the width and depth of quantum circuits implemented by state-of-the-art quantum processors rapidly increase, circuit analysis and assessment via classical simulation are becoming infeasible. It is crucial, therefore, to develop new methods to identify significant error sources in large and complex quantum circuits. In this work, we present a technique that pinpoints the sections of a quantum circuit that affect the circuit output the most and thus helps identify the most significant sources of error. The technique requires no classical verification of the circuit output and is thus a scalable tool for debugging large quantum programs in the form of circuits. We demonstrate the practicality and efficacy of the proposed technique by applying it to example algorithmic circuits implemented on IBM quantum machines.
The Big Hill SPR site has a rich data set consisting of multi-arm caliper (MAC) logs collected from the cavern wells. This data set provides insight into the ongoing casing deformation at the Big Hill site. This report summarizes the MAC surveys for each well and presents well longevity estimates where possible. Included in the report is an examination of the well twins for each cavern and a discussion of what may or may not be responsible for the different levels of deformation between some of the well twins. The report also takes a systematic view of the MAC data, presenting spatial patterns of casing deformation and deformation orientation in an effort to better understand the underlying causes. The conclusions present a hypothesis that the small-scale variations in casing deformation are attributable to similar-scale variations in the character of the salt-caprock interface. These variations do not appear directly related to shear zones or faults.
The error detection performance of cyclic redundancy check (CRC) codes combined with bit framing in digital serial communication systems is evaluated. Advantages and disadvantages of the combined method are treated in light of the probability of undetected errors. It is shown that bit framing can increase the burst error detection of the CRC but can also adversely affect CRC random error detection performance. To quantify the effect of bit framing on CRC error detection, the concept of error "exposure" is introduced. Our investigations lead us to propose resilient generator polynomials that, when combined with bit framing, can result in improved CRC error detection performance at no additional implementation cost. Example results are generated for short codewords, showing that proper choice of the CRC generator polynomial can improve error detection performance when combined with bit framing. The implication is that CRC combined with bit framing can reduce the probability of undetected errors even under high error rate conditions.
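For readers unfamiliar with the mechanics, the sketch below shows basic CRC generation and checking in Python. The degree-4 generator polynomial is an illustrative choice, not one of the resilient polynomials proposed here, and framing effects are not modeled.

```python
# CRC generation and checking by binary polynomial division (MSB-first).
def _divide(bits, poly, nbits):
    """Divide the bit list by the generator 'poly'; return the remainder bits."""
    bits = bits[:]
    for i in range(len(bits) - nbits + 1):
        if bits[i]:
            for j in range(nbits):
                bits[i + j] ^= (poly >> (nbits - 1 - j)) & 1
    return bits[-(nbits - 1):]

POLY, NBITS = 0b10011, 5                 # CRC-4: x^4 + x + 1 (illustrative choice)
msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
crc = _divide(msg + [0] * (NBITS - 1), POLY, NBITS)  # remainder becomes the CRC field
codeword = msg + crc

rx = codeword[:]
rx[3] ^= 1; rx[4] ^= 1                   # inject a 2-bit burst error in transit
print("detected:", any(_divide(rx, POLY, NBITS)))    # nonzero remainder -> detected
```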
Advanced finite-element discretizations and preconditioners for models of poroelasticity have attracted significant attention in recent years. The equations of poroelasticity pose significant challenges in both areas, due to the potentially strong coupling between unknowns in the system, the saddle-point structure, and the need to account for wide ranges of parameter values, including limiting behavior such as incompressible elasticity. This paper was motivated by an attempt to develop monolithic multigrid preconditioners for the discretization developed in [C. Rodrigo et al., Comput. Methods Appl. Mech. Engrg., 341 (2018), pp. 467-484]; we show here why this is a difficult task and, as a result, we modify the discretization in [Rodrigo et al.] through the use of a reduced-quadrature approximation, yielding a more “solver-friendly” discretization. Local Fourier analysis is used to optimize parameters in the resulting monolithic multigrid method, allowing a fair comparison between the performance and costs of methods based on Vanka and Braess-Sarazin relaxation. Numerical results are presented to validate the local Fourier analysis predictions and to demonstrate the efficiency of the algorithms. Finally, a comparison to existing block-factorization preconditioners is also given.
Criticality Control Overpack (CCO) containers are being considered for the disposal of defense-related nuclear waste at the Waste Isolation Pilot Plant (WIPP).
This paper describes the methodology of designing a replacement blade tip and winglet for a wind turbine blade to demonstrate the potential of additive manufacturing for wind energy. The team will later field-demonstrate this additively manufactured, system-integrated tip (AMSIT) on a wind turbine. The blade tip aims to reduce the cost of wind energy by improving aerodynamic performance and reliability while reducing transportation costs. This paper focuses on the design and modeling of a winglet for increased power production while maintaining acceptable structural loads on the original Vestas V27 blade design. A free-wake vortex model, WindDVE, was used for the winglet design analysis. A summary of the aerodynamic design process is presented along with a case study of a specific design.
This work developed a methodology for transmission-line modeling of cable installations to predict the propagation of conducted high-altitude electromagnetic pulses in a substation or generating plant. The methodology was applied to a termination cabinet example that was modeled with SPICE transmission-line elements, with parameters drawn from electromagnetic field modeling and with validation against experimental data. The experimental results showed reasonable agreement with the modeled propagating pulse, and the methodology can be applied to other installation structures in the future.
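As a back-of-envelope sketch (all values below are assumed for illustration), the per-unit-length inductance and capacitance extracted from field modeling determine the two parameters a SPICE lossless transmission-line element needs: the characteristic impedance Z0 and the one-way delay TD.

```python
# Transmission-line element parameters from per-unit-length L and C (assumed values).
import math

L = 250e-9      # H/m, from field modeling of the cable installation (assumed)
C = 100e-12     # F/m (assumed)
length = 30.0   # m of cable to the termination cabinet (assumed)

Z0 = math.sqrt(L / C)             # characteristic impedance, ohms
TD = length * math.sqrt(L * C)    # one-way propagation delay, seconds
print(f"Z0 = {Z0:.1f} ohm, TD = {TD * 1e9:.1f} ns")
# e.g., a SPICE element: T1 in 0 out 0 Z0=50 TD=150n  (syntax varies by simulator)
```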
Filamentous fungi can synthesize a variety of nanoparticles (NPs), a process referred to as mycosynthesis that requires little energy input, does not require harsh chemicals, occurs at near-neutral pH, and does not produce toxic byproducts. While NP synthesis involves reactions between metal ions and exudates produced by the fungi, the chemical and biochemical parameters underlying this process remain poorly understood. Here, the roles of fungal species and precursor salt in the mycosynthesis of zinc oxide (ZnO) NPs are investigated. These data demonstrate that all five fungal species tested are able to produce ZnO structures that can be morphologically classified into i) well-defined NPs, ii) coalesced/dissolving NPs, and iii) micron-sized square plates. Further, species-dependent preferences for these morphologies are observed, suggesting potential differences in the profile or concentration of the biochemical constituents of their individual exudates. These data also demonstrate that mycosynthesis of ZnO NPs is independent of the anion species, with nitrate, sulfate, and chloride showing no effect on NP production. Finally, these results enhance the understanding of factors controlling the mycosynthesis of ceramic NPs, supporting future studies that can enable control over the physical and chemical properties of NPs formed through this “green” synthesis method.
Phosphor thermometry has become an established remote sensing technique for acquiring the temperature of surfaces and gas-phase flows. Often, phosphors are excited by a light source (typically emitting in the UV region), and their temperature-sensitive emission is captured. Temperature can be inferred from shifts in the emission spectra or from the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic during x-ray excitation, the radiative decay lifetime was not investigated. The focus of the present study is to characterize the decay lifetime of the phosphor Gd2O2S:Tb for temperature sensitivity after excitation by a pulsed x-ray source. These results are compared to the decay lifetimes found for this phosphor when excited using a pulsed UV laser. Results show that the lifetime of this phosphor exhibits comparable temperature sensitivity under both excitation sources over a temperature range from 21 °C to 140 °C in increments of 20 °C. This work introduces a novel method of thermometry for researchers to implement when employing x-rays for diagnostics.
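A minimal sketch of the lifetime-based measurement, assuming a single-exponential decay on synthetic data (the decay constant and noise level below are illustrative, not the Gd2O2S:Tb values measured here): fit the emission trace to extract the lifetime, then map the fitted lifetime to temperature through a calibration curve.

```python
# Fit a single-exponential decay to a synthetic phosphor emission trace.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, tau, c):
    return A * np.exp(-t / tau) + c

rng = np.random.default_rng(2)
t = np.linspace(0.0, 5e-3, 500)                  # s
tau_true = 6e-4                                  # s, synthetic lifetime (assumed)
signal = decay(t, 1.0, tau_true, 0.02) + rng.normal(0, 0.01, t.size)

(A, tau, c), _ = curve_fit(decay, t, signal, p0=(1.0, 1e-3, 0.0))
print(f"fitted lifetime: {tau * 1e3:.3f} ms")
# In practice, a calibration of lifetime vs. known temperature (e.g., an
# interpolated lookup) converts the fitted tau to a temperature reading.
```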