With the recent surge in big data analytics for hyperdimensional data, there is renewed interest in dimensionality reduction techniques. For these methods to yield performance gains and improve understanding of the underlying data, a proper metric must be identified. This step is often overlooked, and metrics are typically chosen without regard for the underlying geometry of the data. In this paper, we present a method for incorporating elastic metrics into t-distributed stochastic neighbor embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP). We apply our method to functional data, which is uniquely characterized by rotations, parameterization, and scale. If these properties are ignored, they can lead to incorrect analysis and poor classification performance. Through our method, we demonstrate improved performance on shape identification tasks for three benchmark data sets (MPEG-7, the Car data set, and the Plane data set of Thakoor), where we achieve F1 scores of 0.77, 0.95, and 1.00, respectively.
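For readers who want to experiment with the general idea, the sketch below shows one common way to inject a non-Euclidean metric into t-SNE: precompute a pairwise distance matrix and pass it via metric="precomputed". The toy_elastic_distance function here is a hypothetical placeholder, not the elastic metric of the paper.

```python
# Minimal sketch, assuming a placeholder "elastic-style" distance between
# curves; the paper's actual elastic metric would replace this function.
import numpy as np
from sklearn.manifold import TSNE

def toy_elastic_distance(f, g):
    """Placeholder stand-in: L2 distance between derivative (velocity)
    profiles of two sampled curves."""
    return np.linalg.norm(np.gradient(f) - np.gradient(g))

# 50 synthetic 1-D functional observations sampled on 100 points
rng = np.random.default_rng(0)
curves = np.cumsum(rng.normal(size=(50, 100)), axis=1)

# Precompute the pairwise distance matrix once...
n = len(curves)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = toy_elastic_distance(curves[i], curves[j])

# ...and hand it to t-SNE (init must be "random" for precomputed distances).
embedding = TSNE(metric="precomputed", init="random", perplexity=10).fit_transform(D)
print(embedding.shape)  # (50, 2)
```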
Reactive classical molecular dynamics simulations of sodium silicate glasses, xNa2O–(100 − x)SiO2 (x = 10–30), under quasi-static loading were performed to analyze molecular-scale fracture mechanisms. Mechanical properties of the sodium silicate glasses were consistent with experimentally reported values, and the amount of crack propagation varied with reported fracture toughness values. Crack propagation was greatest in the NS20 systems (20 mol% Na2O) compared with the other simulated compositions. Dissipation via two mechanisms, the first through sodium migration as a lower activation energy process and the second through structural rearrangement as a higher activation energy process, was calculated and accounted for the energy that was neither stored elastically nor associated with the formation of new fracture surfaces. A correlation between crack propagation and energy dissipation was identified: systems with higher crack propagation exhibited less energy dissipation. Sodium silicate glass compositions with lower energy dissipation also exhibited the most sodium movement and structural rearrangement within 10 Å of the crack tip during loading. High sodium mobility near the crack tip may therefore enable energy dissipation without requiring the formation of structural defects. Hence, the varying mobilities of network modifiers near crack tips influence the brittleness and the crack growth rate of modified amorphous oxide systems.
Data processing adds substantial soft costs to distributed energy systems. These costs are incurred primarily as the labor necessary to collect, normalize, store, and communicate data. The open-source Orange Button data exchange standard comprises data taxonomies, common data sources, and interoperable software tools that together can dramatically reduce these costs and thereby accelerate the deployment of distributed energy systems. We describe the data taxonomies and datasets, as well as the software these capabilities enable.
A large-scale numerical computation of five wind farms was performed as part of the American WAKE experimeNt (AWAKEN). This high-fidelity computation used the ExaWind/AMR-Wind LES solver to simulate a 100 km × 100 km domain containing 541 turbines under unstable atmospheric conditions matching previous measurements. The turbines were represented by Joukowski and OpenFAST-coupled actuator disk models. Results of a qualitative comparison with these measurements illustrate the interactions of wind farms with large-scale atmospheric boundary layer (ABL) structures in the flow, as well as the extent of downstream wake penetration and blockage effects around the wind farms.
Chemistry tabulation is a common approach in practical simulations of turbulent combustion at engineering scales. Linear interpolants have traditionally been used for accessing precomputed multidimensional tables but suffer from large memory requirements and discontinuous derivatives. Higher-degree interpolants address some of these restrictions but are similarly limited to relatively low-dimensional tabulation. Artificial neural networks (ANNs) can be used to overcome these limitations but cannot guarantee the same accuracy as interpolants and introduce challenges in reproducibility and reliable training. These challenges grow as the complexity of the physics represented within the tabulation increases. In this manuscript, we assess the efficiency, accuracy, and memory requirements of Lagrange polynomials, tensor-product B-splines, and ANNs as tabulation strategies. We analyze results in the context of nonadiabatic flamelet modeling, where higher dimension counts are necessary. While ANNs do not require structuring of the data, providing benefits for representing complex physics, interpolation approaches often rely on some structuring of the table. Interpolation using structured table inputs that are not directly related to the variables transported in a simulation can incur additional query costs, as demonstrated in the present implementation of heat losses. We show that ANNs, despite being difficult to train and reproduce, can be advantageous for the high-dimensional, unstructured datasets relevant to nonadiabatic flamelet models. We also demonstrate that Lagrange polynomials show significant speedup at similar accuracy compared to B-splines.
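As a toy illustration of querying the two interpolant families discussed above (in 1-D, not the multidimensional flamelet tables of the study), the following sketch builds a Lagrange polynomial and a cubic B-spline over the same table with SciPy and compares their pointwise errors.

```python
# Minimal 1-D sketch (not the paper's implementation): a Lagrange polynomial
# vs. a cubic B-spline on a sharp, flamelet-like toy "table".
import numpy as np
from scipy.interpolate import lagrange, make_interp_spline

x = np.linspace(0.0, 1.0, 9)                 # table grid points
y = np.tanh(20.0 * (x - 0.5))                # sharp toy response

poly = lagrange(x, y)                        # degree-8 Lagrange polynomial
spl = make_interp_spline(x, y, k=3)          # cubic B-spline

xq = np.linspace(0.0, 1.0, 1001)
truth = np.tanh(20.0 * (xq - 0.5))
err_poly = np.max(np.abs(poly(xq) - truth))
err_spl = np.max(np.abs(spl(xq) - truth))
print(f"Lagrange max error: {err_poly:.3e}, B-spline max error: {err_spl:.3e}")
```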
The current interest in hypersonic flows and the growing importance of plasma applications necessitate the development of diagnostics for high-enthalpy flow environments. Reliable and novel experimental data at relevant conditions will drive engineering and modeling efforts forward significantly. This study demonstrates the use of nanosecond coherent anti-Stokes Raman scattering (CARS) to measure temperature in an atmospheric, high-temperature (>5500 K) air plasma. The experimental configuration is of interest as the plasma is close to thermodynamic equilibrium and the setup is a test-bed for heat shield materials. The determination of the non-resonant background at such high temperatures is explored, and rotational-vibrational equilibrium temperatures of the N2 ground state are determined via fits of the theory to measured spectra. Results show that the accuracy of the temperature measurements is affected by slow periodic variations in the plasma, causing sampling error. Moreover, depending on the experimental configuration, the measurements can be affected by two-beam interaction, which causes a bias towards lower temperatures, and stimulated Raman pumping, which causes a bias towards higher temperatures. The successful demonstration of CARS at the present conditions, and the exploration of its sensitivities, paves the way towards more complex measurements, e.g., close to interfaces in high-enthalpy plasma flows.
Awile, Omar; Knight, James C.; Nowotny, Thomas; Aimone, James B.; Diesmann, Markus; Schurmann, Felix
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published that same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic "Python in neuroscience" by Muller et al. triggered and documented a revolution in the neuroscience community, namely the adoption of the scripting language Python as a common language for interfacing with simulation codes and connecting applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open-source and community-standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred which has been barely visible in neuroscientific circles beyond the community of simulator developers: supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (10^13 floating point operations per second) in the early 2000s to above 1 ExaFLOPS (10^18 floating point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years. Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18–24 months) explains part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations, and under the hood simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It is actually quite remarkable that, apart from the change in semantics for the parallelization, this has mostly happened without the users knowing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
Springs play important roles in many mechanisms, including critical safety components employed by Sandia National Laboratories. Due to the nature of these safety component applications, serious concerns arise if their springs become damaged or unhook from their posts. Finite element analysis (FEA) is one technique employed to ensure such adverse scenarios do not occur. Ideally, a very fine spring mesh would be used to make the simulation as accurate as possible with respect to mesh convergence. While this method yields the best results, it is also the most time-consuming and therefore the most computationally expensive. In some situations, reduced order models (ROMs) can be adopted to lower this cost at the expense of some accuracy. This study quantifies the error between a fine, solid-element mesh and a reduced order spring beam model, with the aim of finding the best balance between low computational cost and high accuracy. Two types of analyses were performed: a quasi-static displacement-controlled pull and a haversine shock. The first used implicit methods to examine basic properties as the elastic limit of the spring material was reached; this analysis was also used to study the convergence and residual tolerance of the models. The second used explicit dynamics methods to investigate spring dynamics and stress/strain properties, as well as to examine the impact of the chosen friction coefficient. Both the implicit displacement-controlled pull test and the explicit haversine shock test showed good agreement between the hexahedral and beam meshes. The results were especially favorable when comparing reaction force and stress trends and maximums. However, the equivalent plastic strain (EQPS) results were not as favorable. This could be due to differences in how the shear stress is calculated in the two models, and future studies will need to investigate the exact causes. The data indicate that the beam model may be less likely to correctly predict spring failure, defined as inappropriate application of tension and/or compressive forces to a larger assembly. Additionally, this study quantified the computational cost advantage of using a reduced order model beam mesh: in the transverse haversine shock case, the hexahedral mesh took over three days with 228 processors to solve, compared to under 10 hours for the ROM on a single processor. Depending on the required use case for the results, using the beam mesh will significantly improve the speed of workflows, especially when integrated into larger safety component models. However, appropriate use of the ROM should carefully balance these optimized run times against its reduced accuracy, especially when examining spring failure and outputting variables such as EQPS. Current investigations are broadening the scope of this work to include a validation study comparing the beam ROM to physical testing data.
A comprehensive control strategy is necessary to safely and effectively operate particle-based concentrating solar power (CSP) technologies. Particle-based CSP with thermal energy storage (TES) is an emerging technology with the potential to decarbonize power and process heat applications. The high-temperature nature of particle-based CSP technologies and daily solar transients present challenges for system control to prevent equipment damage and ensure operator safety. An operational control strategy, with safety interlocks, for a tower-based particle CSP system during steady-state and transient conditions is described in this paper. Control of a solar-heated particle recirculation loop, TES, and a supercritical carbon dioxide (sCO2) cooling loop designed to reject 1 MW of thermal power is considered, and the associated operational limitations and their influence on the control strategy are discussed.
We demonstrate coherent anti-Stokes Raman scattering (CARS) detection of CO and N2 molecules in the reaction layer of a graphite material sample exposed to the 5000–6000 K plume of an inductively-coupled plasma torch operating on air. CO is a dominant product of the surface oxidative reaction of graphite and lighter-weight carbon-based thermal-protection-system materials. A standard nanosecond CARS approach using an Nd:YAG laser and a single broadband dye laser with ~200 cm-1 spectral width is employed for demonstration measurements, with the CARS volume located less than 1 mm from an ablating graphite sample. Quantitative measurements of both temperature and the CO/N2 ratio are obtained from model fits to CARS spectra averaged over 5 laser shots. The results indicate that CARS can be used for space- and time-resolved detection of CO in high-temperature ablation tests near atmospheric pressure.
Earth and Space 2022: Space Exploration, Utilization, Engineering, and Construction in Extreme Environments - Selected Papers from the 18th Biennial International Conference on Engineering, Science, Construction, and Operations in Challenging Environments
Analysis of radiation effects on electrical circuits requires computationally efficient compact radiation models. Currently, development of such models is dominated by analytic techniques that rely on empirical assumptions and physical approximations to render the governing equations solvable in closed form. In this paper we demonstrate an alternative numerical approach for the development of a compact delayed photocurrent model for a pn-junction device. Our approach combines a system identification step with a projection-based model order reduction step to obtain a small discrete time dynamical system describing the dynamics of the excess carriers in the device. Application of the model amounts to a few small matrix-vector multiplications having minimal computational cost. We demonstrate the model using a radiation pulse test for a synthetic pn-junction device.
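A minimal sketch of the kind of model described above: once system identification and projection yield small reduced matrices, each evaluation is a few matrix-vector products per time step. The matrices here are random placeholders rather than a fitted device model.

```python
# Illustrative sketch, assuming placeholder reduced matrices (A_r, B_r, C_r);
# a real compact model would obtain these from system identification +
# projection-based model order reduction.
import numpy as np

r = 8                                                       # reduced state size (assumed)
rng = np.random.default_rng(1)
A_r = 0.95 * np.eye(r) + 0.01 * rng.normal(size=(r, r))     # stable-ish dynamics
B_r = rng.normal(size=(r, 1))                               # radiation input map
C_r = rng.normal(size=(1, r))                               # photocurrent output map

def step_response(u, x0=None):
    """March the reduced discrete-time model x_{k+1} = A_r x_k + B_r u_k,
    y_k = C_r x_k through an input sequence u."""
    x = np.zeros((r, 1)) if x0 is None else x0
    y = []
    for u_k in u:
        y.append((C_r @ x).item())
        x = A_r @ x + B_r * u_k
    return np.array(y)

pulse = np.concatenate([np.ones(10), np.zeros(90)])         # synthetic radiation pulse
print(step_response(pulse)[:5])
```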
Here we examine models for particle curtain dispersion using drag-based formalisms and their connection to streamwise pressure-difference closures. Focusing on drag models, we demonstrate that the scaling arguments developed by DeMauro et al. [1] using early-time drag modeling can be extended to include late-time particle curtain dispersion behavior by weighting the dynamic portion of the drag relative velocity by the inverse of the particle volume fraction to the 1/4 power. The additional parameter α introduced in this scaling is related to the model drag parameters by employing an early-time/late-time matching argument. Comparison with the scaled measurements of DeMauro et al. suggests that the proposed modification is an effective formalism. Next, the connection between drag-based models and streamwise pressure-difference-based expressions is explored by formulating simple analytical models that verify an empirical upstream-downstream expression (Daniel and Wagner [2]). Though simple, these models provide physics-based approaches for describing shock-particle curtain interaction behavior.
With increasing penetration of variable renewable generation, battery energy storage systems (BESS) are becoming important for power system stability due to their operational flexibility. In this paper, we propose a method for determining the minimum BESS rated power that guarantees security constraints in a grid subject to disturbances induced by variable renewable generation. The proposed framework leverages sensitivity-based inverse uncertainty propagation where the dynamical responses of the states are parameterized with respect to random variables. Using this approach, the original nonlinear optimization problem for finding the security-constrained uncertainty interval may be formulated as a quadratically constrained linear program. The resulting estimated uncertainty interval is utilized to find the BESS rated power required to satisfy grid stability constraints.
Power-flow studies on the 30-MA, 100-ns Z facility at Sandia National Labs have shown that plasmas in the facility's magnetically insulated transmission lines can result in a loss of current to the load [1]. During the current pulse, electrode heating causes neutral surface contaminants (water, hydrogen, hydrocarbons, etc.) to desorb, ionize, and form plasmas in the anode-cathode gap [2]. Shrinking typical electrode thicknesses (∼1 cm) to thin foils (5–200 μm) produces observable amounts of plasma on smaller pulsed-power drivers (<1 MA) [3]. We suspect that as the electrode material bulk thickness decreases relative to the skin depth (50–100 μm for a 100–500 ns pulse in aluminum), the thermal energy delivered to the neutral surface contaminants increases, and the contaminants thus desorb faster from the current-carrying surface.
Event-based sensors are a novel sensing technology that captures the dynamics of a scene via pixel-level change detection. This technology operates with high speed (>10 kHz), low latency (10 µs), low power consumption (<1 W), and high dynamic range (120 dB). Compared to conventional, frame-based architectures that consistently report data for each pixel at a given frame rate, event-based sensor pixels only report data if a change in pixel intensity occurs. This affords the possibility of dramatically reducing the data reported in bandwidth-limited environments (e.g., remote sensing), and thus the data that must be processed, while still recovering significant events. Degraded visual environments, such as those generated by fog, often hinder situational awareness by decreasing optical resolution and transmission range via random scattering of light. To respond to this challenge, we present the deployment of an event-based sensor in a controlled, experimentally generated, well-characterized degraded visual environment (a fog analogue) for detection of a modulated signal, and we compare the data collected by the event-based sensor with data from a traditional framing sensor.
7th IEEE Electron Devices Technology and Manufacturing Conference: Strengthen the Global Semiconductor Research Collaboration After the Covid-19 Pandemic, EDTM 2023
Accurate characterization of electrical device behavior is a key component of developing accurate electrical models and assessing reliability. Measurements characterizing an electrical device can be produced from current-voltage (I-V) sweeps. We introduce the pairwise midpoint method (PMM) for estimating the mean of a functional data set and apply it to I-V sweeps from a Zener diode. Comparisons indicate that the PMM is a viable method for describing the mean behavior of a functional data set.
An inherited containment vessel design that has been used in the past to contain items in an environmental testing unit was brought to the Explosives Applications Lab to be analyzed and modified. The goal was to modify the vessel to contain an explosive event of 4 g TNT equivalence at least once without failure or significant girth expansion while maintaining a seal. A total of ten energetic tests were performed on multiple vessels. In these tests, the 7075-T6 aluminum vessels were instrumented with thin-film resistive strain gauges and both static and dynamic pressure gauges to study their ability to withstand an oversize explosive charge of 8 g. Additionally, high-precision girth (pi tape) measurements were taken before and after each test to measure the plastic growth of the vessel due to the event. Concurrent with this explosive testing, hydrocode modeling of the containment vessel and charge was performed, and the modeling results were shown to agree with the results measured in the explosive field testing. Based on the data obtained during this testing, this vessel design can be safely used at least once to contain explosive detonations of 8 g at the center of the chamber for a charge that will not result in damaging fragments.
Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. While approximate algorithms to MAXCUT offer attractive theoretical guarantees and demonstrate compelling empirical performance, such approximation approaches can shift the dominant computational cost to the stochastic sampling operations. Neuromorphic computing, which uses the organizing principles of the nervous system to inspire new parallel computing architectures, offers a possible solution. One ubiquitous feature of natural brains is stochasticity: the individual elements of biological neural networks possess an intrinsic randomness that serves as a resource enabling their unique computational capacities. By designing circuits and algorithms that make use of randomness similarly to natural brains, we hypothesize that the intrinsic randomness in microelectronics devices could be turned into a valuable component of a neuromorphic architecture enabling more efficient computations. Here, we present neuromorphic circuits that transform the stochastic behavior of a pool of random devices into useful correlations that drive stochastic solutions to MAXCUT. We show that these circuits perform favorably in comparison to software solvers and argue that this neuromorphic hardware implementation provides a path for scaling advantages. This work demonstrates the utility of combining neuromorphic principles with intrinsic randomness as a computational resource for new computational architectures.
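As a point of reference for the sampling-driven approach described above, here is a plain software sketch of a stochastic local-search MAXCUT solver. It is emphatically not the neuromorphic circuit itself, only an illustration of how random flips, accepted stochastically, drive the search.

```python
# Software baseline sketch: stochastic local search for MAXCUT, where random
# bit flips play the role that intrinsic device randomness plays in hardware.
import math
import random

def stochastic_maxcut(edges, n, steps=20000, temp=0.5, seed=0):
    rng = random.Random(seed)
    side = [rng.random() < 0.5 for _ in range(n)]      # random initial partition

    def cut_value():
        return sum(1 for u, v in edges if side[u] != side[v])

    best, best_side = cut_value(), side[:]
    for _ in range(steps):
        i = rng.randrange(n)
        # change in cut value from flipping node i
        delta = 0
        for u, v in edges:
            if u == i or v == i:
                j = v if u == i else u
                delta += 1 if side[i] == side[j] else -1
        # accept improving flips always, worsening flips with small probability
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            side[i] = not side[i]
            val = cut_value()
            if val > best:
                best, best_side = val, side[:]
    return best, best_side

# 5-cycle graph: the optimal cut is 4
print(stochastic_maxcut([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 5)[0])
```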
Electronic control systems used for quantum computing have become increasingly complex as multiple qubit technologies employ larger numbers of qubits with higher fidelity targets. Whereas the control systems for different technologies share some similarities, parameters such as pulse duration, throughput, real-time feedback, and latency requirements vary widely depending on the qubit type. In this article, we evaluate the performance of modern system-on-chip (SoC) architectures in meeting the control demands associated with performing quantum gates on trapped-ion qubits, focusing particularly on communication within the SoC. A principal focus of this article is the data-transfer latency and throughput of several high-speed on-chip mechanisms on Xilinx multiprocessor SoCs, including those that utilize direct memory access (DMA). These are measured and evaluated to determine an upper bound on the time required to reconfigure a gate parameter. Worst-case and average-case bandwidth requirements for a custom gate sequencer core are compared with the experimental results. The lowest-variability, highest-throughput data-transfer mechanism is DMA between the real-time processing unit (RPU) and the programmable logic, where bandwidths up to 19.2 GB/s are possible. For context, this enables the reconfiguration of qubit gates in less than 2 μs, comparable to the fastest gate time. Though this article focuses on trapped-ion control systems, the gate abstraction scheme and measured communication rates are applicable to a broad range of quantum computing technologies.
While significant investments have been made in the exploration of ethics in computation, recent advances in high performance computing (HPC) and artificial intelligence (AI) have reignited a discussion of more responsible and ethical computing with respect to the design and development of pervasive sociotechnical systems within the context of existing and evolving societal norms and cultures. The ubiquity of HPC in everyday life presents complex sociotechnical challenges for all who seek to practice responsible computing and ethical technological innovation. The present paper provides guidelines that scientists, researchers, educators, and practitioners alike can employ to become more aware of one's personal values system that may unconsciously shape one's approach to computation and ethics.
We demonstrate the use of a low-temperature-grown GaAs (LT-GaAs) metasurface as an ultrafast photoconductive switching element gated with 1550 nm laser pulses. The metasurface is designed to enhance weak two-step photon absorption at 1550 nm, enabling THz pulse detection.
Kolmogorov's theory of turbulence assumes that the small-scale turbulent structures in the energy cascade are universal and are determined by the energy dissipation rate and the kinematic viscosity alone. However, thermal fluctuations, absent from the continuum description, terminate the energy cascade near the Kolmogorov length scale. Here, we propose a simple superposition model to account for the effects of thermal fluctuations on small-scale turbulence statistics. For compressible Taylor-Green vortex flow, we demonstrate that the superposition model in conjunction with data from direct numerical simulation of the Navier-Stokes equations yields spectra and structure functions that agree with the corresponding quantities computed from the direct simulation Monte Carlo method of molecular gas dynamics, verifying the importance of thermal fluctuations in the dissipation range.
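For context, the Kolmogorov length scale referenced above is the standard dimensional estimate built from the dissipation rate and the kinematic viscosity (textbook background, not the superposition model itself):

```latex
% Standard dimensional estimate: the Kolmogorov length scale at which the
% cascade is conventionally cut off, built from the energy dissipation rate
% \varepsilon and the kinematic viscosity \nu.
\eta = \left( \frac{\nu^{3}}{\varepsilon} \right)^{1/4}
```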
Spent nuclear fuel repository simulations are currently not able to incorporate detailed fuel matrix degradation (FMD) process models due to their computational cost, especially when large numbers of waste packages breach. The current paper uses machine learning to develop artificial neural network and k-nearest neighbor regression surrogate models that approximate the detailed FMD process model while being computationally much faster to evaluate. Using fuel cask temperature, dose rate, and the environmental concentrations of CO3^2-, O2, Fe^2+, and H2 as inputs, these surrogates show good agreement with the FMD process model predictions of the UO2 degradation rate for conditions within the range of the training data. A demonstration in a full-scale shale repository reference case simulation shows that the incorporation of the surrogate models captures local and temporal environmental effects on fuel degradation rates while retaining good computational efficiency.
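A hedged sketch of the k-nearest-neighbor surrogate idea follows, with synthetic stand-ins for the six inputs and a made-up rate function; the actual surrogates are trained on FMD process-model runs.

```python
# Minimal sketch, assuming synthetic data and a placeholder "degradation rate"
# response; the real surrogate maps temperature, dose rate, and CO3^2-, O2,
# Fe^2+, H2 concentrations to the UO2 degradation rate.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n = 2000
X = rng.uniform(size=(n, 6))          # stand-ins for the six model inputs
y = np.exp(X @ rng.normal(size=6))    # placeholder positive "rate" response

surrogate = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))
surrogate.fit(X[:1500], y[:1500])

# cheap evaluation in place of the expensive process model
pred = surrogate.predict(X[1500:])
print("mean relative error:", float(np.mean(np.abs(pred - y[1500:]) / y[1500:])))
```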
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard for a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be within a reasonable degree of agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. This analysis, using the best available tools, suggests that the offset distance for the ignition hazard from a fireball will be of the same order as the offset distance for blast damage. This suggests that the buy-down of risk from considering the fireball is minimal relative to the blast hazards. Assessment tools for fireball predictions are not particularly mature, and ways to improve them for higher-fidelity estimates are noted.
The National Aeronautics and Space Administration's (NASA) Artemis program seeks to establish the first long-term presence on the Moon as part of a larger goal of sending the first astronauts to Mars. To accomplish this, the Artemis program is designed to develop, test, and demonstrate many technologies needed for deep space exploration and supporting life on another planet. Long-term operations on the lunar base include habitation, science, logistics, and in-situ resource utilization (ISRU). In this paper, a Lunar DC microgrid (LDCMG) structure serves as the backbone of the energy distribution, storage, and utilization infrastructure. The method used to analyze the LDCMG power distribution network and energy storage system (ESS) design is Hamiltonian surface shaping and power flow control (HSSPFC). The ISRU system will include a networked three-microgrid system, with a photovoltaic (PV) array (generation) on one sub-microgrid and water extraction (loads) on the other two microgrids. A reduced-order model (ROM) of the system will be used to create a closed-form analytical model. Ideal ESS devices will be placed alongside each state of the ROM. The ideal ESS devices determine the response needed to conform to a specific operating scenario and system specifications.
IEEE International Symposium on Applications of Ferroelectrics, ISAF 2023, International Symposium on Integrated Functionalities, ISIF 2023 and Piezoresponse Force Microscopy Workshop, PFM 2023, Proceedings
Radio frequency (RF) magnetic devices are key components in RF front ends. However, they are difficult to miniaturize and remain the bulkiest components in RF systems. Acoustically driven ferromagnetic resonance (ADFMR) offers a route towards the miniaturization of RF magnetic devices. The ADFMR literature thus far has focused predominantly on the dynamics of the coupling process, with relatively little work on device optimization. In this work, we present an optimized 2 GHz ADFMR device utilizing relaxed SPUDT transducers in lithium tantalate. We report an insertion loss of -13.7 dB and an ADFMR attenuation constant of -71.7 dB/mm, making this device one of the best-performing ADFMR devices to date.
Measurements of gas-phase temperature and pressure in hypersonic flows are important for understanding gas-phase fluctuations, which can drive dynamic loading on model surfaces, and for studying fundamental compressible flow turbulence. To achieve this capability, femtosecond coherent anti-Stokes Raman scattering (fs CARS) is applied in Sandia National Laboratories' cold-flow hypersonic wind tunnel facility. Measurements were performed for tunnel freestream temperatures of 42–58 K and pressures of 1.5–2.2 Torr. The CARS measurement volume was translated in the flow direction during a 30-second tunnel run using a single computer-controlled translation stage. After broadband femtosecond laser excitation, the rotational Raman coherence was probed twice, once at an early time where the collisional environment has not yet affected the Raman coherence, and again at a later time after the collisional environment has led to significant dephasing of the Raman coherence. The gas-phase temperature was obtained primarily from the early-probe CARS spectra, while the gas-phase pressure was obtained primarily from the late-probe CARS spectra. Challenges in implementing fs CARS in this facility, such as changes in the nonresonant spectrum at different measurement locations, are discussed.
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges [1]. However, the underlying mechanisms are not sufficiently characterized to be fully utilized [2]. We present an investigation to characterize the effects of photoemission on plasma breakdown for different reduced electric fields, laser intensities, and photon energies. We perform Townsend breakdown experiments assisted by high-speed imaging and employ a quantum model of photoemission along with a 0D discharge model [3], [4] to interpret the experimental measurements.
Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE
Laros, James H.; Davis, Jacob; Sharman, Krish; Tom, Nathan; Husain, Salman
Experiments were conducted on a wave tank model of a bottom-raised oscillating surge wave energy converter (OSWEC) in regular waves. The model shape was a thin rectangular flap, which was allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave maker's frequency range. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed-form expressions for the added mass and radiation damping coefficients, along with the excitation force and torque; these formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study in which system parameters, including damping and added mass values, are varied. The likely contributors to the differences between predictions and experiments are tank reflections, standing waves that can occur in long, narrow wave tanks, and the thin-plate assumption employed in the analytical approach.
Criticality Control Overpack (CCO) containers are being considered for the disposal of defense-related nuclear waste at the Waste Isolation Pilot Plant (WIPP).
Low-loss silicon nitride ring resonator reflectors provide feedback to a III/V gain chip, achieving single-mode lasing at 772 nm. The Si3N4 is fabricated in a CMOS-foundry-compatible process that achieves loss values of 0.036 dB/cm.
The Reynolds-averaged Navier–Stokes (RANS) equations remain a workhorse technology for simulating compressible fluid flows of practical interest. Due to model-form errors, however, RANS models can yield erroneous predictions that preclude their use on mission-critical problems. This work presents a data-driven turbulence modeling strategy aimed at improving RANS models for compressible fluid flows. The strategy outlined has three core aspects: (1) prediction of the discrepancy in the Reynolds stress tensor and turbulent heat flux via machine learning (ML), (2) estimation of uncertainties in ML model outputs via out-of-distribution detection, and (3) multi-step training strategies to improve feature-response consistency. Results are presented across a range of cases publicly available on NASA's turbulence modeling resource involving wall-bounded flows, jet flows, and hypersonic boundary layer flows with cold walls. We find that one ML turbulence model is able to provide consistent improvements for numerous quantities of interest across all cases.
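The following is a conceptual sketch, under assumed placeholder features and responses, of the first two ingredients of such a strategy: a regressor for a turbulence-closure discrepancy plus a crude distance-based out-of-distribution flag. It is not the paper's model.

```python
# Conceptual sketch only: placeholder features/responses, a random-forest
# discrepancy regressor, and a nearest-neighbor-distance OOD flag.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
X_train = rng.normal(size=(5000, 8))                    # e.g., mean-flow features
y_train = np.sin(X_train[:, 0]) + 0.1 * X_train[:, 1]   # placeholder discrepancy

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
nn = NearestNeighbors(n_neighbors=1).fit(X_train)

def predict_with_ood(X_query, radius=2.5):
    """Return predictions plus a boolean OOD mask where the nearest training
    point is farther than `radius` in feature space."""
    dist, _ = nn.kneighbors(X_query)
    return model.predict(X_query), dist[:, 0] > radius

X_query = np.vstack([rng.normal(size=(3, 8)),            # in-distribution
                     10.0 + rng.normal(size=(3, 8))])    # far outside training data
pred, ood = predict_with_ood(X_query)
print(ood)  # expected: first three False, last three True
```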
High-altitude electromagnetic pulse events are a growing concern for electric power grid vulnerability assessments and mitigation planning, and accurate modeling of surge arrester mitigations installed on the grid is necessary to predict pulse effects on existing equipment and to plan future mitigation. While some models of surge arresters at high frequency have been proposed, experimental backing for any given model has not been shown. This work examines a ZnO lightning surge arrester modeling approach previously developed for accurate prediction of nanosecond-scale pulse response. Four ZnO metal-oxide varistor pucks with different sizes and voltage ratings were tested for voltage and current response on a conducted electromagnetic pulse testbed. The measured clamping response was compared to SPICE circuit models to compare the electromagnetic pulse response and validate model accuracy. Results showed good agreement between simulation results and the experimental measurements, after accounting for stray testbed inductance between 100 and 250 nH.
Austenitic stainless steels are used in high-pressure hydrogen containment infrastructure for their resistance to hydrogen embrittlement. Applications for the use of austenitic stainless steels include pressure vessels, tubing, piping, valves, fittings, and other piping components. Despite their resistance to brittle behavior in the presence of hydrogen, austenitic stainless steels can exhibit degraded fracture performance. The mechanisms of hydrogen-assisted fracture, however, remain elusive, which has motivated continued research on these alloys. There are two principal approaches to evaluating the influence of gaseous hydrogen on mechanical properties: internal and external hydrogen. The austenite phase has high solubility and low diffusivity of hydrogen at room temperature, which enables introduction of hydrogen into the material through thermal precharging at elevated temperature and pressure, a condition referred to as internal hydrogen. H-precharged material can subsequently be tested in ambient conditions. Alternatively, mechanical testing can be performed while test coupons are immersed in gaseous hydrogen, thereby evaluating the effects of external hydrogen on property degradation. The slow diffusivity of hydrogen in austenite at room temperature can often be a limiting factor in external hydrogen tests and may not properly characterize lower-bound fracture behavior in components exposed to hydrogen for long periods. In this study, the differences between internal and external hydrogen environments are evaluated in the context of fracture resistance measurements. Fracture testing was performed on two forged austenitic stainless steel alloys (304L and XM-11) in three environments: (1) non-charged and tested in gaseous hydrogen at a pressure of 1,000 bar (external H2), (2) hydrogen-precharged and tested in air (internal H), and (3) hydrogen-precharged and tested in 1,000 bar H2 (internal H + external H2). For all environments, elastic-plastic fracture measurements were conducted to establish J-R curves following the methods of ASTM E1820. Following fracture testing, fracture surfaces were examined to reveal the predominant fracture mechanisms for the different conditions and to characterize differences (and similarities) in the macroscale fracture processes associated with these environmental conditions.
A high altitude electromagnetic pulse (HEMP) caused by a nuclear explosion has the potential to severely impact the operation of large-scale electric power grids. This paper presents a top-down mitigation design strategy that considers grid-wide dynamic behavior during a simulated HEMP event - and uses optimal control theory to determine the compensation signals required to protect critical grid assets. The approach is applied to both a standalone transformer system and a demonstrative 3-bus grid model. The performance of the top-down approach relative to conventional protection solutions is evaluated, and several optimal control objective functions are explored. Finally, directions for future research are proposed.
This is an investigation of two experimental datasets of laminar hypersonic flows over a double-cone geometry, acquired in the Calspan-University at Buffalo Research Center's Large Energy National Shock (LENS)-XX expansion tunnel. These datasets have yet to be modeled accurately. A previous paper suggested that this could be partly due to mis-specified inlet conditions. The authors previously solved a Bayesian inverse problem to infer the inlet conditions of the LENS-XX test section and found that in one case they lay outside the uncertainty bounds specified in the experimental dataset; however, that inference was performed using approximate surrogate models. In this paper, the experimental datasets are revisited and inversions for the tunnel test-section inlet conditions are performed with a Navier–Stokes simulator. The inversion is deterministic and can provide uncertainty bounds on the inlet conditions under a Gaussian assumption. It was found that the deterministic inversion yields inlet conditions that do not agree with those stated in the experiments. An a posteriori method is also presented to check the validity of the Gaussian assumption for the posterior distribution. This paper contributes to ongoing work on the assessment of datasets from challenging experiments conducted in extreme environments, where the experimental apparatus is pushed to the margins of its design and performance envelopes.
In recent years, high-altitude infrasound sensing has become more prolific, demonstrating enormous value especially when utilized over regions inaccessible to traditional ground-based sensing. Similar to ground-based infrasound detectors, airborne sensors take advantage of the fact that impulsive atmospheric events such as explosions can generate low-frequency acoustic waves, also known as infrasound. Due to negligible attenuation, infrasonic waves can travel over long distances and provide important clues about their source. Here, we report infrasound detections of the Apollo detonation carried out on 29 October 2020 as part of the Large Surface Explosion Coupling Experiment in Nevada, USA. Infrasound sensors attached to solar hot air balloons floating in the stratosphere detected the signals generated by the explosion at distances of 170–210 km. Three distinct arrival phases seen in the signals are indicative of multipathing caused by small-scale perturbations in the atmosphere. We also found that the local acoustic environment at these altitudes is more complex than previously thought.
A wall-modeled large-eddy simulation of a Mach 14 boundary layer flow over a flat plate was carried out for the conditions of the Arnold Engineering Development Complex Hypervelocity Tunnel 9. Adequate agreement of the mean velocity and temperature, as well as Reynolds stress profiles, with a reference direct numerical simulation is obtained at much reduced grid resolution. The normalized root-mean-square optical path difference obtained from the present wall-modeled large-eddy simulations and the reference direct numerical simulation are in good agreement with each other but below a prediction obtained from a semi-analytical relationship by Notre Dame University. This motivates an evaluation of the underlying assumptions of the Notre Dame model at high Mach number. For the analysis, recourse is taken to previously published wall-modeled large-eddy simulations of a Mach 8 turbulent boundary layer. The analysis of the underlying assumptions focuses on the root-mean-square fluctuations of the thermodynamic quantities, the strong Reynolds analogy, two-point correlations, and the linking equation. It is found that with increasing Mach number, the pressure fluctuations increase and the strong Reynolds analogy over-predicts the temperature fluctuations. In addition, the peak of the correlation length shifts towards the boundary layer edge.
Creation of a Sandia internally developed, shock-hardened Recoverable Data Recorder (RDR) necessitated experimentation by ballistically firing the device into water targets at velocities up to 5,000 ft/s. The resultant mechanical environments were very severe, routinely achieving peak accelerations in excess of 200 kG and changes in pseudo-velocity greater than 38,000 inch/s. High-quality projectile deceleration datasets were obtained through high-speed imaging during the impact events. The datasets were then used to calibrate and validate computational models in both CTH and EPIC. Hydrodynamic stability in these environments was confirmed to differ from aerodynamic stability; projectile stability is maintained through a phenomenon known as "tail-slapping," or impingement of the rear of the projectile on the cavitation vapor-water interface that envelops the projectile. As the projectile slows, the predominant forces undergo a transition that is outside the codes' capabilities to calculate accurately; however, CTH and EPIC both predict the projectile trajectory well in the initial hypervelocity regime. Stable projectile designs and the achievable acceleration space are explored through a large parameter sweep of CTH simulations. Front-face chamfer angle has the largest influence on stability, with low angles being more stable.
Well-skipping radical-radical reactions can provide a chain-propagating pathway for formation of polycyclic radicals implicated in soot inception. Here we use controlled pyrolysis in a microreactor to isolate and examine the role of well-skipping channels in the phenyl (C6H5) + propargyl (C3H3) radical-radical reaction at temperatures of 800–1600 K and pressures near 25 Torr. The temperature and concentration dependence of the closed-shell (C9H8) and radical (C9H7) products are observed using electron-ionization mass spectrometry. The flow in the reactor is simulated using a boundary layer model employing a chemical mechanism based on recent rate coefficient calculations. Comparison between simulation and experiment shows reasonable agreement, within a factor of 3, while suggesting possible improvements to the model. In contrast, eliminating the well-skipping reactions from the chemistry mechanism causes a much larger discrepancy between simulation and experiment in the temperature dependence of the radical concentration, revealing that the well-skipping pathways, especially to form indenyl radical, are significant at temperatures of 1200 K and higher. While most C9H7 forms by well-skipping at 25 Torr, an additional simulation indicates that the well-skipping channels only contribute around 3% of the C9Hx yield at atmospheric pressure, thus indicating a negligible role of the well-skipping pathways at atmospheric and higher pressures.
The Synchronic Web is a highly scalable notary infrastructure that provides tamper-evident data provenance for historical web data. In this document, we describe the applicability of this infrastructure for web archiving across three envisioned stages of adoption. We codify the core mechanism enabling the value proposition: a procedure for splitting and merging cryptographic information fluidly across blockchain-backed ledgers. Finally, we present preliminary performance results that indicate the feasibility of our approach for modern web archiving scales.
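The sketch below is a generic hash-chain illustration of why blockchain-backed ledgers make archived records tamper-evident; it is not the Synchronic Web's split/merge procedure itself.

```python
# Generic illustration (not the Synchronic Web protocol): a minimal
# hash-chained ledger in which altering any archived entry breaks every
# subsequent link, making tampering detectable.
import hashlib

def link(prev_hash: str, record: bytes) -> str:
    return hashlib.sha256(prev_hash.encode() + record).hexdigest()

def build_ledger(records):
    chain, h = [], "genesis"
    for rec in records:
        h = link(h, rec)
        chain.append(h)
    return chain

def verify(records, chain):
    h = "genesis"
    for rec, expected in zip(records, chain):
        h = link(h, rec)
        if h != expected:
            return False
    return True

snapshots = [b"page-v1", b"page-v2", b"page-v3"]   # archived web snapshots
chain = build_ledger(snapshots)
print(verify(snapshots, chain))                     # True
snapshots[1] = b"page-v2-tampered"
print(verify(snapshots, chain))                     # False: tampering detected
```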
Sandia National Laboratories has conducted geomechanical analysis to evaluate the performance of the Strategic Petroleum Reserve by modeling the viscoplastic, or creep, behavior of the salt in which its oil-storage caverns reside. The operation-driven imbalance between fluid pressure within the salt cavern and in-situ stress acting on the surrounding salt can cause the salt to creep, potentially leading to a loss of cavern volume and consequently deformation of borehole casings. Therefore, the effect of salt creep on borehole casings needs to be better understood to inform cavern operations decisions. To evaluate potential casing damage mechanisms with variation in geological constraints (e.g., material characteristics of the salt or caprock) or physical mechanisms of cavern leakage, we developed a generic model with a layered, domal geometry including nine caverns, rather than using a specific field-site model, to save computational cost. The geomechanical outputs, such as cavern volume changes, vertical strain along the dome and caprock above the cavern, and vertical displacement at the surface or cavern top, quantify the impact of material parameters and cavern locations, as well as multiple operations in multiple caverns, on the stability of an individual cavern.
Extreme meteorological events, such as hurricanes and floods, cause significant infrastructure damage and, as a result, prolonged grid outages. To mitigate the negative effects of these outages and enhance the resilience of communities, microgrids consisting of solar photovoltaics (PV), energy storage (ES) technologies, and backup diesel generation are being considered. Furthermore, it is necessary to take into account how the extreme event affects the systems' performance during the outage, often referred to as black-sky conditions. In this paper, an optimization model is introduced to properly size ES and PV technologies to meet various durations of grid outages for selected critical infrastructure while considering black-sky conditions. A case study of the municipality of Villalba, Puerto Rico is presented to identify several potential microgrid configurations that increase the community's resilience. Sensitivity analyses are performed around grid outage durations and black-sky conditions to better determine what factors should be considered when scoping potential microgrids for community resilience.
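To make the sizing problem concrete, here is a toy brute-force version under stated assumptions (illustrative load and PV profiles, costs, and a black-sky derate); the paper's optimization model is more general.

```python
# Toy sizing sketch: search for the least-cost PV + storage pair that rides
# through a 48-h outage, with PV output derated by a "black-sky" factor.
# All profiles, costs, and the derate below are illustrative placeholders.
import numpy as np

hours = 48
load = np.full(hours, 100.0)                              # kW critical load
pv_shape = np.clip(np.sin(np.linspace(0, 2 * np.pi * 2, hours)), 0, None)
black_sky_derate = 0.5                                    # e.g., storm cloud cover

def survives(pv_kw, batt_kwh, eff=0.9):
    """Simulate battery state of charge; return True if load is always met."""
    soc = batt_kwh                                        # start fully charged
    for t in range(hours):
        net = pv_kw * pv_shape[t] * black_sky_derate - load[t]
        soc = min(batt_kwh, soc + eff * net) if net > 0 else soc + net
        if soc < 0:
            return False
    return True

cost_pv, cost_batt = 1000.0, 400.0                        # $/kW, $/kWh (assumed)
best = min(((cost_pv * p + cost_batt * e, p, e)
            for p in range(0, 2001, 50)
            for e in range(0, 20001, 250)
            if survives(p, e)),
           default=None)
print(best)  # (total cost, PV kW, battery kWh)
```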
Computational engineering models often contain unknown entities (e.g., parameters, initial and boundary conditions) that require estimation from other measured observable data. Estimating such unknown entities is challenging when they involve spatio-temporal fields, because such functional variables often require an infinite-dimensional representation. We address this problem by transforming an unknown functional field using Alpert wavelet bases and truncating the resulting spectrum. The problem then reduces to the estimation of a few coefficients, which can be performed using common optimization methods. We apply this method to a one-dimensional heat transfer problem where we estimate the heat source field varying in both time and space. The observable data comprise temperatures measured at several thermocouples in the domain, which is composed of either copper or stainless steel. The optimization using our wavelet-based method estimates the heat source with an error between 5% and 7%. We analyze the effect of the domain material and the number of thermocouples, as well as the sensitivity to the initial guess of the heat source. Finally, we estimate the unknown heat source using a different approach based on deep learning techniques, where we consider the input and output of a multi-layer perceptron in wavelet form. We find that this deep learning approach is more accurate than the optimization approach, with errors below 4%.
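A simplified 1-D sketch of the truncated-spectrum estimation idea is given below, with two loudly labeled substitutions: PyWavelets' Daubechies basis stands in for the Alpert basis, and a smoothing operator stands in for the heat-transfer forward model.

```python
# Minimal sketch, assuming: (a) pywt's "db4" basis as a stand-in for Alpert
# wavelets, and (b) smoothing + subsampling as a stand-in forward model that
# maps a source field to a set of "thermocouple" readings.
import numpy as np
import pywt
from scipy.optimize import least_squares
from scipy.ndimage import gaussian_filter1d

n = 128
x = np.linspace(0.0, 1.0, n)
true_source = np.exp(-200.0 * (x - 0.4) ** 2)        # "unknown" source field

def forward(source):
    """Placeholder forward model: diffusion-like smoothing, sampled at 32
    sensor locations."""
    return gaussian_filter1d(source, sigma=5)[::4]

data = forward(true_source)

# Represent the source by a truncated wavelet spectrum (coarse levels only).
template = pywt.wavedec(true_source, "db4", level=4)
kept = 2                                              # keep cA4 and cD4
sizes = [len(c) for c in template[:kept]]

def coeffs_to_field(theta):
    coeffs, pos = [], 0
    for s in sizes:
        coeffs.append(theta[pos:pos + s])
        pos += s
    coeffs += [np.zeros_like(c) for c in template[kept:]]
    return pywt.waverec(coeffs, "db4")[:n]

fit = least_squares(lambda t: forward(coeffs_to_field(t)) - data,
                    x0=np.zeros(sum(sizes)))
err = (np.linalg.norm(coeffs_to_field(fit.x) - true_source)
       / np.linalg.norm(true_source))
print(f"relative reconstruction error: {err:.1%}")   # depends on truncation level
```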
Physics-based reduced order models (ROMs) tend to rely on projection-based reduction. This family of approaches utilizes a series of responses of the full-order model to assemble a suitable basis, subsequently employed to formulate a set of equivalent, low-order equations through projection. However, in a nonlinear setting, physics-based ROMs require an additional approximation to circumvent the bottleneck of projecting and evaluating the nonlinear contributions on the reduced space. This scheme is termed hyper-reduction and enables substantial reductions in computational time. The hyper-reduction scheme implies a trade-off, relying on a necessary sacrifice in the accuracy of the nonlinear terms' mapping to achieve rapid or even real-time evaluations of the ROM framework. Since time is essential, especially for digital twin representations in structural health monitoring applications, the hyper-reduction approximation serves as both a blessing and a curse. Our work scrutinizes the possibility of exploiting machine learning (ML) tools in place of hyper-reduction to derive more accurate surrogates of the nonlinear mapping. By retaining the POD-based reduction and introducing the machine-learning-boosted surrogate(s) directly on the reduced coordinates, we aim to substitute the projection and update process of the nonlinear terms when integrating forward in time in the low-order dimension. Our approach explores a proof-of-concept case study based on a Nonlinear Auto-regressive neural network with eXogenous Inputs (NARX-NN), aiming to derive a superior physics-based ROM in terms of efficiency, suitable for (near) real-time evaluations. The proposed ML-boosted ROM (N3-pROM) is validated on a multi-degree-of-freedom shear frame under ground motion excitation featuring hysteretic nonlinearities.
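As a proof-of-concept stand-in (not the N3-pROM itself), the sketch below trains an MLP as a NARX-style map from lagged reduced coordinates and an exogenous input to a synthetic nonlinear term, i.e., the role the ML surrogate plays in place of hyper-reduction.

```python
# Hypothetical stand-in for the ML-boosted surrogate: an MLP mapping lagged
# reduced coordinates + exogenous input to a placeholder nonlinear term.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# synthetic training data: q = reduced-coordinate history, u = ground motion
q = np.cumsum(rng.normal(size=2000)) * 0.01
u = rng.normal(size=2000)
nonlinear_term = np.tanh(5 * q) + 0.1 * u            # placeholder hysteresis-like map

lags = 3
X = np.column_stack([q[i:len(q) - lags + i] for i in range(lags)] + [u[lags:]])
y = nonlinear_term[lags:]

narx = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X[:1500], y[:1500])
print("test R^2:", narx.score(X[1500:], y[1500:]))
```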
Accurately measuring the aero-optical properties of non-equilibrium gases is critical for characterizing compressible flow dynamics and plasmas. At thermochemical non-equilibrium conditions, excited molecules begin to dissociate, causing optical distortion and non-constant Gladstone-Dale behavior. These regions typically occur behind a strong shock at high temperatures and pressures. Currently, no experimental data exist in the literature, due to the small number of facilities capable of reaching such conditions and a lack of diagnostic techniques that can measure index of refraction across large, nearly-discrete gradients. In this work, a quadrature fringe imaging interferometer is applied at the Sandia free-piston high-temperature shock tube for high-temperature, high-pressure Gladstone-Dale measurements. This diagnostic resolves high-gradient density changes using a narrowband analog quadrature and broadband reference fringes. Initial simulations for target conditions show large deviations from constant Gladstone-Dale coefficient models and good agreement with high-temperature, high-pressure Gladstone-Dale models above 5000 K. Experimental results at 7653 K and 7.87 bar indicate that the index of refraction approaches high-temperature, high-pressure theory, but significant flow bifurcation effects are noted in the reflected shock.
Raffaelle, Patrick R.; Wang, George T.; Shestopalov, Alexander A.
The focus of this study was to demonstrate the vapor-phase halogenation of Si(100) and subsequently evaluate the inhibiting ability of the halogenated surfaces toward atomic layer deposition (ALD) of aluminum oxide (Al2O3). Hydrogen-terminated silicon ⟨100⟩ (H–Si(100)) was halogenated using N-chlorosuccinimide (NCS), N-bromosuccinimide (NBS), and N-iodosuccinimide (NIS) in a vacuum-based chemical process. The composition and physical properties of the prepared monolayers were analyzed using X-ray photoelectron spectroscopy (XPS) and contact angle (CA) goniometry. These measurements confirmed that all three reagents were more effective at halogenating H–Si(100) than OH–Si(100) in the vapor phase. The stability of the modified surfaces in air was also tested, with the chlorinated surface showing the greatest resistance to monolayer degradation and silicon oxide (SiO2) generation within the first 24 h of exposure to air. XPS and atomic force microscopy (AFM) measurements showed that the succinimide-derived Hal–Si(100) surfaces exhibited blocking ability superior to that of H–Si(100), a commonly used ALD resist. This halogenation method provides a dry-chemistry alternative for creating halogen-based ALD resists on Si(100) in near-ambient environments.
This research investigates novel techniques to enhance supply chain security via the addition of configuration management controls to protect the Instrumentation and Control (I&C) systems of a Nuclear Power Plant (NPP). A secure element (SE) is integrated into a proof-of-concept testbed by means of a commercially available smart card, which provides tamper-resistant key storage and a cryptographic coprocessor. The secure element simplifies the setup and establishment of a secure communications channel between the configuration management and verification system and the I&C system (running OpenPLC). This secure channel can be used to provide copies of commands and configuration changes of the I&C system for analysis.
Previous research has provided strong evidence that CO2 and H2O gasification reactions can provide non-negligible contributions to the consumption rates of pulverized coal (pc) char during combustion, particularly in oxy-fuel environments. Fully quantifying the contribution of these gasification reactions has proven difficult, due to the dearth of knowledge of gasification rates at the elevated particle temperatures associated with typical pc char combustion processes, as well as the complex interaction of oxidation and gasification reactions. Gasification reactions tend to become more important at higher char particle temperatures (because of their high activation energy), and they tend to reduce pc oxidation due to their endothermicity (i.e., a cooling effect). The work reported here attempts to quantify the influence of the CO2 gasification reaction in a rigorous manner by combining experimental measurements of the particle temperatures and consumption rates of size-classified pc char particles in tailored oxy-fuel environments with simulations from a detailed reacting porous particle model. The results demonstrate that a specific gasification reaction rate relative to the oxidation rate (within an accuracy of approximately ±20% of the pre-exponential value) is consistent with the experimentally measured char particle temperatures and burnout rates in oxy-fuel combustion environments. Conversely, the results also show, in agreement with past calculations, that it is extremely difficult to construct a set of kinetics that does not substantially overpredict the particle temperature increase in strongly oxygen-enriched N2 environments. This latter result is believed to stem from deficiencies in standard oxidation mechanisms, which fail to account for the falloff in char oxidation rates at high temperatures.
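The temperature sensitivity described above follows directly from Arrhenius kinetics. Writing both rates as

\[ k_i = A_i \exp(-E_{a,i}/RT), \]

the ratio of gasification to oxidation rates scales as \((A_{gas}/A_{ox})\exp[-(E_{a,gas}-E_{a,ox})/RT]\), which grows with temperature because \(E_{a,gas} > E_{a,ox}\); this is why gasification matters most for the hottest char particles.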
A wind tunnel test from AEDC Tunnel 9 of a hypersonic turbulent boundary layer is analyzed using several fidelities of numerical simulation, including Wall-Modeled Large Eddy Simulation (WMLES), Large Eddy Simulation (LES), and Direct Numerical Simulation (DNS). The DNS was forced to transition to turbulence using a broad spectrum of planar, slow acoustic waves based on the freestream spectrum measured in the tunnel. Results show that the flow transitions through a reasonably natural process, with several second-mode wave packets advecting downstream and eventually breaking down into turbulent flow at modest friction Reynolds numbers. The surface shear stress and heat flux agree well with a transitional RANS simulation. Comparisons of DNS data to experimental data show reasonable agreement with regard to mean surface quantities as well as amplitudes of boundary layer disturbances. The DNS does show early transition relative to the experimental data. Several interesting aspects of the DNS and other numerical simulations are discussed. The DNS data are also analyzed through several common methods, such as cross-correlations and coherence of the fluctuating surface pressure.
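A sketch of the surface-pressure analyses mentioned in the last sentence, assuming two fluctuating wall-pressure signals from streamwise-separated sensors (sampling rate, spacing, and the synthetic signals are placeholders, not the Tunnel 9 configuration):

```python
# Cross-correlation (convective delay) and magnitude-squared coherence
# between two fluctuating wall-pressure signals p1, p2.
import numpy as np
from scipy.signal import coherence

fs = 1.0e6                      # sampling rate [Hz] (placeholder)
dx = 5.0e-3                     # streamwise sensor spacing [m] (placeholder)
rng = np.random.default_rng(1)
p1 = rng.standard_normal(2**16)
p2 = np.roll(p1, 12) + 0.5 * rng.standard_normal(2**16)  # delayed, noisy copy

# Cross-correlation: the lag of the peak gives the convective time delay.
p1z, p2z = p1 - p1.mean(), p2 - p2.mean()
xcorr = np.correlate(p2z, p1z, mode="full")
lag = np.argmax(xcorr) - (len(p1) - 1)
u_conv = dx / (lag / fs)        # convection velocity estimate

# Welch-averaged magnitude-squared coherence between the two sensors.
f, Cxy = coherence(p1, p2, fs=fs, nperseg=2048)
print(f"lag = {lag} samples, estimated convection velocity = {u_conv:.1f} m/s")
```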
This study investigated the durability of four high-temperature coatings for use as a Gardon gauge foil coating. Failure modes and effects analysis has identified the Gardon gauge foil coating as a critical component for the development of a robust flux gauge for high-intensity flux measurements. Degradation of coating optical properties and physical condition alters flux gauge sensitivity, resulting in flux measurement errors. In this paper, four coatings were exposed to solar and thermal cycles to simulate real-world aging. Solar simulator and box furnace facilities at the National Solar Thermal Test Facility (NSTTF) were utilized in separate test campaigns. Coating absorptance and emissivity were measured and combined into a figure of merit (FOM) to characterize the optical property stability of each coating, and physical coating degradation was assessed qualitatively using microscope images. Results suggest that rapid high-temperature cycling did not significantly impact coating optical properties or physical state. In contrast, prolonged exposure to high temperatures degraded both. Coatings degraded after 1 hour of exposure at temperatures above 400 °C and stabilized after 6-24 hours of exposure. It is concluded that the combination of high temperatures and prolonged exposure provides the energy necessary to sustain coating surface reactions and alter optical and physical coating properties. Results also suggest that flux gauge foil coatings could benefit from long-duration high-temperature curing (>400 °C) prior to sensor calibration to stabilize coating properties and increase measurement reliability in high-flux, high-temperature applications.
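The study's exact FOM definition is not reproduced here; one plausible form (an assumption for illustration, not necessarily the one used) combines the two measured optical properties as a ratio,

\[ \mathrm{FOM} = \alpha_s / \varepsilon_t, \]

with \(\alpha_s\) the solar absorptance (governing gauge sensitivity) and \(\varepsilon_t\) the thermal emittance (governing radiative loss), so that drift in either property appears as FOM instability.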
Recently, stochastic control methods such as deep reinforcement learning (DRL) have proven to be efficient, quickly converging methods for providing localized grid voltage control. Because of the random dynamical characteristics of grid reactive loads and bus voltages, such stochastic control methods are particularly useful for accurately predicting future voltage levels and minimizing associated cost functions. Although DRL is capable of quickly inferring future voltage levels given specific voltage control actions, it is prone to high variance when the learning rate or discount factors are set for rapid convergence in the presence of bus noise. Evolutionary learning is also capable of minimizing a cost function and can be leveraged for localized grid control, but it does not infer future voltage levels given specific control inputs; instead, it simply selects the control actions that result in the best voltage control. For this reason, evolutionary learning is better suited than DRL for voltage control in noisy grid environments. To illustrate this, using a cyber adversary to inject random noise, we compare evolutionary learning and DRL for autonomous voltage control (AVC) under noisy control conditions and show that a genetic algorithm (GA) can achieve high mean voltage control. We show that the GA can additionally provide AVC superior to DRL with comparable computational efficiency. We illustrate that the superior noise immunity of evolutionary learning makes it a good choice for implementing AVC in noisy environments or in the presence of random cyber-attacks.
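A minimal sketch of the GA-based voltage control idea, under the assumption of a linearized voltage-reactive-power sensitivity model (the grid model, parameters, and names below are illustrative stand-ins, not the study's test system):

```python
# Each individual is a vector of reactive-power setpoints; fitness penalizes
# deviation of the resulting bus voltages from 1.0 p.u.
import numpy as np

rng = np.random.default_rng(2)
n_bus, n_ctrl, pop_size = 10, 4, 40
S = rng.uniform(0.01, 0.05, size=(n_bus, n_ctrl))   # placeholder V-Q sensitivities
v_base = 1.0 + 0.03 * rng.standard_normal(n_bus)    # noisy uncontrolled voltages

def fitness(q):
    v = v_base + S @ q                               # linearized voltage response
    return -np.mean((v - 1.0) ** 2)                  # maximize: minimize deviation

pop = rng.uniform(-1, 1, size=(pop_size, n_ctrl))
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
    children = (parents[rng.integers(0, len(parents), pop_size - len(parents))]
                + 0.1 * rng.standard_normal((pop_size - len(parents), n_ctrl)))
    pop = np.vstack([parents, children])                        # mutate offspring

best = pop[np.argmax([fitness(ind) for ind in pop])]
```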
Uncertainty quantification (UQ) plays a critical role in verifying and validating forward integrated computational materials engineering (ICME) models. Among numerous ICME models, the crystal plasticity finite element method (CPFEM) is a powerful tool that enables one to assess microstructure-sensitive behaviors and thus bridge material structure to performance. Nevertheless, given the nature of its constitutive model form and the randomness of microstructures, CPFEM is exposed to both aleatory uncertainty (microstructural variability) and epistemic uncertainty (parametric and model-form error). Therefore, the observations are often corrupted by microstructure-induced uncertainty as well as ICME approximation and numerical errors. In this work, we highlight several ongoing research topics in UQ, optimization, and machine learning applications for CPFEM to efficiently solve forward and inverse problems. The first aspect of this work addresses the UQ of constitutive models for epistemic uncertainty, where the quantities of interest (QoIs) are related to the initial yield behaviors. We apply a stochastic collocation (SC) method to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning) and dislocation-density-based constitutive models, for three different crystal structures: face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close-packed (hcp) magnesium (Mg). The second aspect addresses the aleatory and epistemic uncertainty across multiple mesh resolutions and multiple constitutive models via the multi-index Monte Carlo method, where the QoI is related to homogenized material properties. We present a unified approach that accounts for various fidelity parameters, such as mesh resolutions, integration time-steps, and constitutive models, simultaneously. We illustrate how multilevel sampling methods, such as multilevel Monte Carlo (MLMC) and multi-index Monte Carlo (MIMC), can be applied to assess the impact of variations in the microstructure of polycrystalline materials on the predictions of macroscopic mechanical properties. The third aspect addresses a crystallographic texture study of a single void in a cube. Using a parametric reduced-order model (also known as parametric proper orthogonal decomposition) with a global orthonormal basis as a model reduction technique, we demonstrate that the localized dynamic stress and strain fields can be predicted as a spatiotemporal problem.
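As background for the second aspect, the MLMC estimator telescopes the QoI across levels \(\ell\) (e.g., mesh resolutions):

\[ \mathbb{E}[Q_L] \approx \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{n=1}^{N_\ell} \big( Q_\ell^{(n)} - Q_{\ell-1}^{(n)} \big), \qquad Q_{-1} \equiv 0, \]

so most samples are taken on cheap, coarse levels while only a few expensive fine-level corrections are needed; MIMC generalizes \(\ell\) to a multi-index spanning mesh resolution, time step, and constitutive model simultaneously.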
Distribution systems may experience fast voltage swings within seconds due to distributed energy resources, such as Wind Turbine Generators (WTGs) and Photovoltaic (PV) inverters, because of their dependence on variable and intermittent wind speed and solar irradiance. This work proposes a WTG reactive power controller for fast voltage regulation. The controller is tested on a simulation model of a real distribution system using real wind speed, solar irradiance, and load consumption data. The controller is based on a deep reinforcement learning Deep Deterministic Policy Gradient (DDPG) model that determines optimal control actions to avoid significant voltage deviations across the system, and it has access to voltage measurements at all system buses. Results show that the proposed WTG reactive power controller significantly reduces system-wide voltage deviations across a large number of generation scenarios, complying with standardized voltage tolerances.
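For reference, DDPG trains a deterministic actor \(\mu_\theta(s)\) (here, mapping voltage measurements to reactive-power setpoints) by ascending the deterministic policy gradient supplied by a learned critic \(Q\):

\[ \nabla_\theta J \approx \mathbb{E}_s \big[ \nabla_a Q(s, a) \big|_{a = \mu_\theta(s)} \, \nabla_\theta \mu_\theta(s) \big], \]

which is what allows the controller to act in a continuous reactive-power action space rather than over a discretized set of setpoints.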
Albany is a parallel C++ finite element library for solving forward and inverse problems involving partial differential equations (PDEs). In this paper we introduce PyAlbany, a newly developed Python interface to the Albany library. PyAlbany can be used to drive Albany effectively, enabling fast and easy analysis and post-processing of PDE-based applications that are pre-implemented in Albany. PyAlbany relies on the PyBind11 library to bind Python with the C++ Albany code. Here we detail the implementation of PyAlbany and showcase its capabilities through a number of examples targeting a heat-diffusion problem. In particular, we consider the following: (1) the generation of samples for a Monte Carlo application, (2) a scalability study, (3) a study of the effect of parameters on the performance of a linear solver, and (4) a tool for performing eigenvalue decompositions of matrix-free operators for a Bayesian inference application.
Several studies have shown that ducted fuel injection (DFI) reduces soot emissions in compression-ignition engines. Nevertheless, no comprehensive study has investigated how DFI performs over a load range in combination with low-net-carbon fuels. In this study, optical-engine experiments were performed with four different fuels (conventional diesel and three low-net-carbon fuels) at low and moderate load, to measure emissions levels and performance. The 1.7-liter single-cylinder optical engine was equipped with a high-speed camera to capture natural luminosity images of the combustion event. Conventional diesel and DFI combustion were investigated at four different dilution levels (to simulate exhaust-gas recirculation effects), from 14 to 21 mol% oxygen in the intake. At a given dilution level, with commercial diesel fuel, DFI reduced soot by 82% at medium load and 75% at low load without increasing NOx. The results further show how DFI with dilution reduces soot and NOx without compromising engine performance or other emission types, especially when combined with low-net-carbon fuels. DFI with the oxygenated low-net-carbon blend HEA67 simultaneously reduced soot and NOx by as much as 93% and 82%, respectively, relative to conventional diesel combustion with commercial diesel fuel. These soot and NOx reductions occurred while lifecycle CO2 was reduced by at least 70% when using low-net-carbon fuels instead of conventional diesel. All emissions changes were compared with future emissions regulations for different vehicle sectors to investigate how DFI can be used to facilitate achievement of the regulations. Finally, the results show that the DFI cases fall below several future emissions regulation levels, reducing the need for aftertreatment systems and potentially lowering the cost of ownership.
Here, we introduce a mathematically rigorous formulation for a nonlocal interface problem with jumps and propose an asymptotically compatible finite element discretization for the weak form of the interface problem. After proving the well-posedness of the weak form, we demonstrate that solutions to the nonlocal interface problem converge to the corresponding local counterpart when the nonlocal data are appropriately prescribed. Several numerical tests in one and two dimensions show the applicability of our technique, its numerical convergence to exact nonlocal solutions, its convergence to the local limit when the horizons vanish, and its robustness with respect to the patch test.
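For orientation, a generic nonlocal diffusion operator with horizon \(\delta\) and kernel \(\gamma\) takes the form

\[ \mathcal{L}_\delta u(x) = 2 \int_{B_\delta(x)} \big( u(y) - u(x) \big)\, \gamma(x,y)\, dy, \]

and with a suitably scaled kernel \(\mathcal{L}_\delta u \to \Delta u\) as \(\delta \to 0\); the interface problem above augments such operators with jump conditions across the interface, so the local limit recovers the classical transmission problem. (This is a schematic form, not the paper's exact operator.)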
Here we present a new method for coupled linear elasticity problems whose finite element discretization may lead to spatially non-coincident discretized interfaces. Our approach combines the classical Dirichlet–Neumann coupling formulation with a new set of discretized interface conditions obtained through Taylor series expansions. We show that these conditions ensure linear consistency of the coupled finite element solution. We then formulate an iterative solution method for the coupled discrete system and apply the new coupling approach to two representative settings for which we also provide several numerical illustrations. The first setting is a mesh-tying problem in which both coupled structures have the same Lamé parameters whereas the second setting is an interface problem for which the Lamé parameters in the two coupled structures are different.
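Schematically, if a node \(x\) on one discretized interface is offset from the opposing interface by a gap vector \(d\), the transferred field can be evaluated by a first-order expansion

\[ u(x + d) \approx u(x) + \nabla u(x)\, d, \]

and using such expanded values in the Dirichlet and Neumann exchange is what restores linear consistency when the discretized interfaces do not coincide. (A sketch of the idea; the paper's conditions may include additional terms.)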
Many applications require minimizing the sum of smooth and nonsmooth functions. For example, basis pursuit denoising problems in data science require minimizing a measure of data misfit plus an $\ell^1$-regularizer. Similar problems arise in the optimal control of partial differential equations (PDEs) when sparsity of the control is desired. Here, we develop a novel trust-region method to minimize the sum of a smooth nonconvex function and a nonsmooth convex function. Our method is unique in that it permits and systematically controls the use of inexact objective function and derivative evaluations. When using a quadratic Taylor model for the trust-region subproblem, our algorithm is an inexact, matrix-free proximal Newton-type method that permits indefinite Hessians. We prove global convergence of our method in Hilbert space and demonstrate its efficacy on three examples from data science and PDE-constrained optimization.
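The problem class and its key primitive can be written compactly: we minimize

\[ \min_{x} \; f(x) + \varphi(x), \qquad \operatorname{prox}_{t\varphi}(v) = \arg\min_{x} \Big\{ \varphi(x) + \tfrac{1}{2t} \lVert x - v \rVert^2 \Big\}, \]

with \(f\) smooth (possibly nonconvex) and \(\varphi\) convex but nonsmooth; for \(\varphi = \lVert \cdot \rVert_1\) the proximal operator is the cheap componentwise soft-thresholding map, which is what keeps proximal Newton-type steps matrix-free and inexpensive.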
Satellite imagery can detect temporary cloud trails, or ship tracks, formed from aerosols emitted by large ships traversing our oceans, a phenomenon that global climate models cannot directly reproduce. Ship tracks are observable examples of marine cloud brightening, a potential solar climate intervention that shows promise in helping combat climate change. In this paper, we demonstrate a simulation-based approach to learning the behavior of ship tracks based on a novel stochastic emulation mechanism. Our method uses wind fields to determine the movement of aerosol-cloud tracks and a stochastic partial differential equation (SPDE) to model their persistence behavior. The SPDE incorporates a drift term and a diffusion term, which describe the movement of aerosol particles via wind and their diffusivity through the atmosphere, respectively. We first present our proposed approach with examples using simulated wind fields and ship paths. We then demonstrate our tool by applying the approximate Bayesian computation–sequential Monte Carlo (ABC-SMC) method for data assimilation.
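A generic form consistent with the drift and diffusion terms described above is the stochastic advection-diffusion equation

\[ \partial_t c = -\nabla \cdot (\mathbf{v}\, c) + \nabla \cdot (D \nabla c) + \sigma \dot{W}(x,t), \]

where \(c\) is the aerosol-cloud track concentration, \(\mathbf{v}\) the wind field, \(D\) the diffusivity, and \(\dot{W}\) a space-time noise driving the stochastic persistence behavior. (Schematic; the paper's SPDE may differ in its details.)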
Interfacial segregation and chemical short-range ordering influence the behavior of grain boundaries in complex concentrated alloys. In this study, we use atomistic modeling of a NbMoTaW refractory complex concentrated alloy to provide insight into the interplay between these two phenomena. Hybrid Monte Carlo and molecular dynamics simulations are performed on columnar grain models to identify equilibrium grain boundary structures. Our results reveal extended near-boundary segregation zones that are much larger than traditional segregation regions and that exhibit chemical patterning bridging the interfacial and grain interior regions. Furthermore, structural transitions pertaining to an A2-to-B2 transformation are observed within these extended segregation zones. Both grain size and temperature are found to significantly alter the widths of these regions. An analysis of chemical short-range order indicates that not all pairwise elemental interactions are equally affected by the presence of a grain boundary, as only a subset of elemental clustering types are more likely to reside near certain boundaries. The results emphasize the increased chemical complexity that is associated with near-boundary segregation zones and demonstrate the unique nature of interfacial segregation in complex concentrated alloys.
Li-metal batteries (LMBs) employing conversion cathode materials (e.g., FeF3) are a promising way to prepare inexpensive, environmentally friendly batteries with high energy density. Pseudo-solid-state ionogel separators harness the energy density and safety advantages of solid-state LMBs, while alleviating key drawbacks (e.g., poor ionic conductivity and high interfacial resistance). In this work, a pseudo-solid-state conversion battery (Li-FeF3) is presented that achieves stable, high rate (1.0 mA cm–2) cycling at room temperature. The batteries described herein contain gel-infiltrated FeF3 cathodes prepared by exchanging the ionic liquid in a polymer ionogel with a localized high-concentration electrolyte (LHCE). The LHCE gel merges the benefits of a flexible separator (e.g., adaptation to conversion-related volume changes) with the excellent chemical stability and high ionic conductivity (~2 mS cm–1 at 25 °C) of an LHCE. The latter property is in contrast to previous solid-state iron fluoride batteries, where poor ionic conductivities necessitated elevated temperatures to realize practical power levels. Importantly, the stable, room-temperature Li-FeF3 cycling performance obtained with the LHCE gel at high current densities paves the way for exploring a range of architectures including flexible, three-dimensional, and custom shape batteries.
Automatic differentiation (AD) is a well-known technique for evaluating analytic derivatives of calculations implemented on a computer, with numerous software tools available for incorporating AD technology into complex applications. However, a growing challenge for AD is the efficient differentiation of parallel computations implemented on emerging manycore computing architectures such as multicore CPUs, GPUs, and accelerators as these devices become more pervasive. In this work, we explore forward-mode, operator-overloading-based differentiation of C++ codes on these architectures using the widely available Sacado AD software package. In particular, we leverage Kokkos, a C++ tool providing APIs for implementing parallel computations that are portable to a wide variety of emerging architectures. We describe the challenges that arise when differentiating code for these architectures using Kokkos, and two approaches for overcoming them that ensure optimal memory access patterns as well as expose additional dimensions of fine-grained parallelism in the derivative calculation. We describe the results of several computational experiments that demonstrate the performance of the approach on a few contemporary CPU and GPU architectures. We then conclude with applications of these techniques to the simulation of discretized systems of partial differential equations.
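To illustrate the operator-overloading forward mode that Sacado implements in C++, here is a toy dual-number sketch in Python (a teaching example, not Sacado's API): each value carries a derivative component that propagates through arithmetic.

```python
# Toy forward-mode AD via operator overloading: value + derivative pairs.
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [x * sin(x)] at x = 1.2: seed the derivative component with 1.
x = Dual(1.2, 1.0)
y = x * sin(x)
print(y.val, y.der)   # derivative equals sin(x) + x*cos(x)
```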
Wind turbine applications that leverage nacelle-mounted Doppler lidar are hampered by several sources of uncertainty in the lidar measurement, affecting both bias and random errors. Two problems encountered especially with nacelle-mounted lidar are solid interference, due to intersection of the line of sight with solid objects behind, within, or in front of the measurement volume, and spectral noise, due primarily to limited photon capture. These two uncertainties, especially that due to solid interference, can be reduced with high-fidelity retrieval techniques (i.e., including both quality assurance/quality control and subsequent parameter estimation). Our work compares three such techniques, namely conventional thresholding, advanced filtering, and a novel application of supervised machine learning with ensemble neural networks, based on their ability to reduce the uncertainty introduced by the two observed nonideal spectral features while keeping data availability high. The approach leverages data from a field experiment involving a continuous-wave (CW) SpinnerLidar from the Technical University of Denmark (DTU) that provided scans of a wide range of flows, both unwaked and waked by a field turbine. Independent measurements from an adjacent meteorological tower within the sampling volume permit experimental validation of the instantaneous velocity uncertainty remaining after retrieval that stems from solid interference and strong spectral noise, a validation that has not been performed previously. All three methods perform similarly for non-interfered returns, but the advanced filtering and machine learning techniques perform better when solid interference is present, producing overall standard deviations of error between 0.2 and 0.3 m s−1, a 1%–22% improvement over the conventional thresholding technique, over the rotor height for the unwaked cases. Between the two improved techniques, the advanced filtering produces 3.5% higher overall data availability, while the machine learning offers a faster runtime (i.e., 1/41 s to evaluate) that is more commensurate with the requirements of real-time turbine control. The retrieval techniques are described in terms of application to CW lidar, though they are also relevant to pulsed lidar. Previous work by the authors (Brown and Herges, 2020) explored a novel attempt to quantify uncertainty in the output of a high-fidelity lidar retrieval technique using simulated lidar returns; this article provides true uncertainty quantification against independent measurements, and does so for three techniques rather than one.
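An illustrative sketch of the conventional thresholding retrieval named above, assuming a generic Doppler spectrum (bin values, peak location, and the noise model are placeholders, not the SpinnerLidar processing chain): estimate the noise floor, mask bins below a threshold, then take the power-weighted centroid as the line-of-sight velocity.

```python
# Conventional thresholding retrieval on a synthetic Doppler spectrum.
import numpy as np

rng = np.random.default_rng(3)
v_bins = np.linspace(-20, 20, 256)                   # velocity bins [m/s]
signal = np.exp(-0.5 * ((v_bins - 6.5) / 1.2) ** 2)  # Doppler peak at 6.5 m/s
spectrum = signal + 0.05 * rng.random(256)           # broadband noise

noise_floor = np.median(spectrum)
mad = np.median(np.abs(spectrum - noise_floor))      # robust noise estimate
mask = spectrum > noise_floor + 5.0 * mad            # threshold test

# Power-weighted centroid of the surviving bins.
v_los = np.sum(v_bins[mask] * spectrum[mask]) / np.sum(spectrum[mask])
```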
This report is the revised (Revision 9) Task F specification for DECOVALEX-2023. Task F is a comparison of the models and methods used in deep geologic repository performance assessment. The task proposes to develop a reference case for a mined repository in a fractured crystalline host rock (Task F1) and a reference case for a mined repository in a salt formation (Task F2). Teams may choose to participate in the comparison for either or both reference cases. For each reference case, a common set of conceptual models and parameters describing features, events, and processes that impact performance will be given, and teams will be responsible for determining how best to implement and couple the models. The comparison will be conducted in stages, beginning with a comparison of key outputs of individual process models, followed by a comparison of a single deterministic simulation of the full reference case, and moving on to uncertainty propagation and uncertainty and sensitivity analysis. This report provides background information, a summary of the proposed reference cases, and a staged plan for the analysis.
Clem, Paul G.; Nieves, Cesar A.; Yuan, Mengxue; Ogrinc, Andrew L.; Furman, Eugene; Kim, Seong H.; Lanagan, Michael T.
Ionic conduction in silicate glasses is mainly influenced by the nature, concentration, and mobility of the network-modifying (NWM) cations. Electrical conduction in soda-lime silicate (SLS) glass is dominated by the ionic migration of sodium from the anode to the cathode. The activation energy for this conduction process was calculated to be 0.82 eV, in good agreement with previously reported values. The conduction process associated with the leakage current and the thermally stimulated depolarization current (TSDC) relaxation peak in high-purity fused silica (HPFS) is attributed to conduction between nonbridging oxygen hole centers (NBOHCs). It is suggested that ≡Si−OH → ≡Si−O− + H0 under thermo-electric poling, promoting hole or proton injection from the anode and accounting for the 1.5 eV relaxation peak. No previous TSDC data have been found to corroborate this mechanism. The higher activation energy and lower current intensity for the coated HPFS might be attributed to a lower concentration of NBOHCs after heat treatment (≡Si−OH + HO−Si≡ → ≡Si−O−Si≡ + H2O). This could explain the TSDC signal around room temperature for the coated HPFS. Another possible explanation could be a redox reaction at the anode region dominating the current response.
This report provides a summary of measurement results used to compare the performance of the PHDS Fulcrum40h and Ortec Detective-X High Purity Germanium (HPGe) detector systems. Specifically, the measurement data collected was used to assess each detector system for gamma efficiency and resolution, gamma angular response and efficiency for an in-situ surface distribution, neutron efficiency, gamma pulse-pileup response, and gamma to neutron crosstalk.
Cemented annulus fractures are a major leakage path in a wellbore system, and their permeability plays an important role in the behavior of fluid flow through a leaky wellbore. The permeability of these fractures is affected by changing conditions, including the external stresses acting on the fracture and the fluid pressure within it. Laboratory gas flow experiments were conducted in a triaxial cell to evaluate the permeability of a wellbore cement fracture under a wide range of confining stress and pore pressure conditions. For the first time, an effective stress law that considers the simultaneous effect of confining stress and pore pressure was defined for wellbore cement fracture permeability. The results showed that the effective stress coefficient (λ) for permeability increased linearly with the Terzaghi effective stress (σ − p), with an average of λ = 1 over the range of applied pressures. The relationship between effective stress and fracture permeability was examined using two physics-based models widely used for rock fractures. The results from the experimental work were incorporated into numerical simulations to estimate the impact of effective stress on the interpreted hydraulic aperture and leakage behavior through a fractured annular cement. Accounting for effective stress-dependent permeability along the wellbore length significantly increased the leakage rate at the wellhead compared with the assumption of a constant cemented annulus permeability.
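In the form commonly used for such effective stress laws, the permeability-controlling stress is

\[ \sigma_{\mathrm{eff}} = \sigma_c - \lambda\, p, \]

with \(\sigma_c\) the confining stress, \(p\) the pore pressure, and \(\lambda\) the effective stress coefficient; \(\lambda = 1\) recovers the Terzaghi effective stress \(\sigma_c - p\), consistent with the average value measured here.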
The development of empirical data to support realistic, science-based input to safety regulations and transportation standards is a critical need for the hazardous material (HM) transportation industry. Current regulations and standards are based on the TNT equivalency model. However, real-world experience indicates that use of the TNT equivalency model to predict the potential energy release of a composite overwrapped pressure vessel (COPV) is unrealistically conservative. The purpose of this report is to characterize and quantify rupture events involving damaged COPVs of the type used in HM transportation regulated by the Department of Transportation (DOT). This was accomplished using a series of five tests: two COPV tests with compressed natural gas (CNG), two with hydrogen, and one with nitrogen. Measured overpressures from these tests were compared with overpressures predicted from a TNT equivalence model and blast curves. Comparison between the measurements and predictions shows that the predictions are generally conservative, and that the extent of conservatism is dominated by predictions of the chemical contribution to overpressure from fuel within the COPVs.
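In its simplest form, the TNT equivalency model assigns an equivalent charge mass from the ratio of released energy to TNT's specific detonation energy (≈4.2 MJ/kg by convention):

\[ W_{\mathrm{TNT}} = \frac{E_{\mathrm{released}}}{\Delta H_{\mathrm{TNT}}}, \]

where, for a COPV rupture, \(E_{\mathrm{released}}\) can include both the stored mechanical (compression) energy and a chemical contribution from the fuel; the comparisons above suggest the chemical term is the dominant source of conservatism.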
National security applications require artificial neural networks (ANNs) that consume less power, are fast and dynamic online learners, are fault tolerant, and can learn from unlabeled and imbalanced data. We explore whether two fundamentally different, traditional learning algorithms, one from artificial intelligence and one from the biological brain, can be merged. We tackle this problem from two directions. First, starting from a theoretical point of view, we show that the spike-timing-dependent plasticity (STDP) learning curve observed in biological networks can be derived using the mathematical framework of backpropagation through time. Second, we show that transmission delays, as observed in biological networks, improve the ability of spiking networks to perform classification when trained using a backpropagation of error (BP) method. These results provide evidence that STDP could be compatible with a BP learning rule. Combining these learning algorithms will likely lead to networks more capable of meeting our national security missions.
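The STDP curve referred to above is conventionally written as a double exponential in the spike-timing difference \(\Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}}\):

\[ \Delta w = \begin{cases} A_+ \, e^{-\Delta t / \tau_+}, & \Delta t > 0, \\ -A_- \, e^{\Delta t / \tau_-}, & \Delta t < 0, \end{cases} \]

strengthening a synapse when the presynaptic spike precedes the postsynaptic one and weakening it otherwise; the first result shows that this shape can be recovered from backpropagation through time.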
Kim, Anthony D.; Curwen, Christopher A.; Wu, Yu; Reno, John L.; Addamane, Sadhvikas J.; Williams, Benjamin S.
Terahertz (THz) external-cavity lasers based on quantum-cascade (QC) metasurfaces are emerging as widely tunable, single-mode sources with the potential to cover the 1–6 THz range in discrete bands with milliwatt-level output power. By operating with an ultra-short cavity, with a length on the order of the wavelength, the QC vertical-external-cavity surface-emitting-laser (VECSEL) architecture enables continuous, broadband tuning while producing high-quality beam patterns and scalable output power. The methods and challenges for designing the metasurface at different frequencies are discussed. As the QC-VECSEL is scaled below 2 THz, the primary challenges are reduced gain from the QC active region, increased metasurface quality factor and its effect on tunable bandwidth, and larger power consumption due to a correspondingly scaled metasurface area. At frequencies above 4.5 THz, challenges arise from a reduced metasurface quality factor and the excess absorption caused by proximity to the Reststrahlen band. The results of four different devices, with center frequencies of 1.8, 2.8, 3.5, and 4.5 THz, are reported. Each device demonstrated at least 200 GHz of continuous single-mode tuning, with the largest range being 650 GHz around 3.5 THz. The limitations of the tuning range are well modeled by a Fabry-Pérot cavity model that accounts for the reflection phase of the metasurface and the effect of the metasurface quality factor on laser threshold. Lastly, the effect of different output couplers on device performance is studied, demonstrating a significant trade-off between slope efficiency and tuning bandwidth.
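The tuning model referred to above is essentially the round-trip phase condition of a short external cavity of length \(L\),

\[ \frac{4\pi L}{\lambda} + \phi_{\mathrm{ms}}(\lambda) + \phi_{\mathrm{oc}}(\lambda) = 2\pi m, \]

where \(\phi_{\mathrm{ms}}\) and \(\phi_{\mathrm{oc}}\) are the reflection phases of the metasurface and output coupler; because \(L\) is on the order of \(\lambda\), moving the output coupler sweeps a single longitudinal mode continuously until the metasurface quality factor pushes the laser below threshold.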
In order to meet 2025 goals for enhanced peak power (100 kW), specific power (50 kW/L), and reduced cost ($3.3/kW) in a motor that can operate at ≥20,000 rpm, improved soft magnetic materials must be developed. Better-performing soft magnetic materials will also enable rare-earth-free electric motors. In fact, replacement of permanent magnets with soft magnetic materials was highlighted in the Electrical and Electronics Technical Team (EETT) Roadmap as an R&D pathway for meeting 2025 targets. Eddy current losses in conventional soft magnetic materials, such as silicon steel, begin to significantly impact motor efficiency as rotational speed increases. Soft magnetic composites (SMCs), which combine magnetic particles with an insulating matrix to boost electrical resistivity (ρ) and decrease eddy current losses even at higher operating frequencies (or rotational speeds), are an attractive solution. Today, SMCs are being fabricated with values of ρ ranging between 10⁻³ and 10⁻¹ μΩ·m, significantly higher than that of 3% silicon steel (~0.05 μΩ·m). The isotropic nature of SMCs is ideally suited to motors with 3D flux paths, such as axial flux motors. Additionally, the manufacturing cost of SMCs is low, and they are highly amenable to advanced manufacturing and net-shaping into complex geometries, which further reduces manufacturing costs. There is still significant room for advancement in SMCs, and therefore for additional improvements in electrical machine performance. For example, despite the inclusion of a non-magnetic insulating material, the electrical resistivities of SMCs remain far below those of soft ferrites (10–10⁸ μΩ·m).
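The motivation for high ρ follows from the classical eddy current loss in a thin lamination (or, approximately, an insulated particle) of thickness \(t\) under a sinusoidal flux density of amplitude \(B_p\) and frequency \(f\):

\[ P_e \approx \frac{\pi^2 f^2 B_p^2 t^2}{6 \rho}, \]

so the loss grows quadratically with frequency (i.e., rotational speed) and is suppressed inversely by resistivity, which is exactly the margin SMCs exploit at ≥20,000 rpm.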
More efficient power conversion devices are able to transmit greater electrical power across larger distances to satisfy growing global electrical needs. A critical requirement for more efficient power conversion is the soft magnetic materials used as core materials in transformers, inductors, and motors. To that end, it is well known that non-equilibrium microstructures, such as nanocrystalline or single-phase solid-solution microstructures, can yield the high saturation magnetic polarization and high electrical resistivity necessary for more efficient soft magnetic materials. In this work, we synthesized CoFe–P soft magnetic alloys containing nanocrystalline, single-phase solid-solution microstructures and studied the effect of a secondary intermetallic phase on the saturation magnetic polarization and electrical resistivity of the consolidated alloy. Single-phase solid-solution CoFe–P alloys were prepared by mechanically alloying metal powders, and phase decomposition was observed after subsequent consolidation via spark plasma sintering (SPS) at various temperatures. The secondary intermetallic phase was identified as orthorhombic (CoxFe1−x)2P, and its magnetic properties were found to be detrimental to the soft magnetic properties of the targeted CoFe–P alloy.
Clays are known for their small particle sizes and complex layer stacking. We show here that the limited dimension of clay particles arises from the lack of long-range order in low-dimensional systems. Because of its weak interlayer interaction, a clay mineral can be treated as two separate low-dimensional systems: a 2D system for individual phyllosilicate layers and a quasi-1D system for layer stacking. The layer stacking or ordering in an interstratified clay can be described by a 1D Ising model while the limited extension of individual phyllosilicate layers can be related to a 2D Berezinskii–Kosterlitz–Thouless transition. This treatment allows for a systematic prediction of clay particle size distributions and layer stacking as controlled by the physical and chemical conditions for mineral growth and transformation. Clay minerals provide a useful model system for studying a transition from a 1D to 3D system in crystal growth and for a nanoscale structural manipulation of a general type of layered materials.
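In the 1D Ising picture of layer stacking, each layer position \(i\) carries a spin-like label \(s_i = \pm 1\) (e.g., the two layer types of an interstratified clay) with Hamiltonian

\[ H = -J \sum_i s_i s_{i+1}, \]

where \(J > 0\) favors segregated stacking and \(J < 0\) favors alternation; since the 1D Ising model has no phase transition at finite temperature, long-range stacking order is absent, consistent with the limited coherent stacking observed in clays.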