Physics-Based Reduced Order Models (ROMs) tend to rely on projection-based reduction. This family of approaches utilizes a series of responses of the full-order model to assemble a suitable basis, subsequently employed to formulate a set of equivalent, low-order equations through projection. However, in a nonlinear setting, physics-based ROMs require an additional approximation to circumvent the bottleneck of projecting and evaluating the nonlinear contributions on the reduced space. This scheme is termed hyper-reduction and enables substantial reductions in computational time. Hyper-reduction implies a trade-off, however: accuracy in the mapping of the nonlinear terms is necessarily sacrificed to achieve rapid or even real-time evaluations of the ROM framework. Since time is essential, especially for digital twin representations in structural health monitoring applications, the hyper-reduction approximation serves as both a blessing and a curse. Our work scrutinizes the possibility of exploiting machine learning (ML) tools in place of hyper-reduction to derive more accurate surrogates of the nonlinear mapping. By retaining the POD-based reduction and introducing the ML-boosted surrogate(s) directly on the reduced coordinates, we aim to substitute the projection and update process of the nonlinear terms when integrating forward in time in the low-order dimension. Our approach explores a proof-of-concept case study based on a Nonlinear Auto-regressive neural network with eXogenous Inputs (NARX-NN), aiming to derive a physics-based ROM that is superior in terms of efficiency and suitable for (near) real-time evaluations. The proposed ML-boosted ROM (N3-pROM) is validated on a multi-degree-of-freedom shear frame under ground motion excitation featuring hysteretic nonlinearities.
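The POD-plus-surrogate idea above can be sketched in a few lines. Everything below is an illustrative assumption, not the N3-pROM implementation: the toy snapshot matrix, the sizes, and especially the two-lag linear ARX fit, which stands in for the paper's NARX-NN.

```python
import numpy as np

# Sketch: POD basis from snapshots, then a two-lag linear ARX surrogate
# (a stand-in for the NARX-NN) fit on the reduced coordinates.
rng = np.random.default_rng(0)
n_dof, n_snap, r = 50, 200, 4            # full order, snapshots, reduced order

# Toy snapshot matrix (columns = states in time); a real study would use
# responses of the full-order model.
t = np.linspace(0.0, 10.0, n_snap)
modes = rng.standard_normal((n_dof, r))
snapshots = modes @ np.sin(np.outer(np.arange(1, r + 1), t))

# POD basis via SVD, then projection onto the reduced coordinates
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]
q = V.T @ snapshots                      # reduced coordinates, shape (r, n_snap)
u = np.sin(2.0 * t)                      # exogenous input (e.g., ground motion)

# ARX surrogate: q_{k+1} ~ A1 q_k + A2 q_{k-1} + b u_k, fit by least squares
X = np.vstack([q[:, 1:-1], q[:, :-2], u[1:-1][None, :]])
Y = q[:, 2:]
theta, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
A1, A2, b = theta[:r].T, theta[r:2 * r].T, theta[2 * r]

# One-step surrogate prediction replaces projecting and updating the
# nonlinear terms during forward integration on the reduced space.
q_pred = A1 @ q[:, 1:-1] + A2 @ q[:, :-2] + np.outer(b, u[1:-1])
err = np.linalg.norm(q_pred - Y) / np.linalg.norm(Y)
print(f"relative one-step error: {err:.2e}")
```

For the purely sinusoidal toy data a two-lag linear model is already exact; the point of the NARX-NN in the paper is to capture mappings (e.g., hysteretic restoring forces) that such a linear fit cannot.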
When exposed to mechanical environments such as shock and vibration, electrical connections may experience increased levels of contact resistance associated with the physical characteristics of the electrical interface. A phenomenon known as electrical chatter occurs when these vibrations are large enough to interrupt the electric signals. It is critical to understand the root causes behind these events because electrical chatter may result in unexpected performance or failure of the system. The root causes span a variety of fields, such as structural dynamics, contact mechanics, and tribology; therefore, a wide range of analyses is required to fully explore the physical phenomenon. This paper intends to provide a better understanding of the relationship between structural dynamics and electrical chatter events. Specifically, an electrical contact assembly composed of a cylindrical pin and a bifurcated structure was studied using high-fidelity simulations. Structural dynamic simulations were performed with both linear and nonlinear reduced-order models (ROMs) to replicate the relevant structural dynamics. Subsequent multi-physics simulations are discussed to relate the contact mechanics associated with the dynamic interactions between the pin and receptacle to the chatter. Each simulation method was parametrized by data from a variety of dynamic experiments. Both structural dynamics and electrical continuity were observed in both the simulation and experimental approaches, so that the relationship between the two can be established.
We report on a two-step technique for post-bond III-V substrate removal involving precision mechanical milling and selective chemical etching. We show results on GaAs, GaSb, InP, and InAs substrates and from mm-scale chips to wafers.
A wind tunnel test from AEDC Tunnel 9 of a hypersonic turbulent boundary layer is analyzed using several fidelities of numerical simulation, including Wall-Modeled Large Eddy Simulation (WMLES), Large Eddy Simulation (LES), and Direct Numerical Simulation (DNS). The DNS was forced to transition to turbulence using a broad spectrum of planar, slow acoustic waves based on the freestream spectrum measured in the tunnel. Results show that the flow transitions through a reasonably natural process, with several second-mode wave packets advecting downstream and eventually breaking down into turbulent flow at modest friction Reynolds numbers. The surface shear stress and heat flux agree well with a transitional RANS simulation. Comparisons of DNS data to experimental data show reasonable agreement with regard to mean surface quantities as well as amplitudes of boundary layer disturbances. The DNS does show early transition relative to the experimental data. Several interesting aspects of the DNS and other numerical simulations are discussed. The DNS data are also analyzed through several common methods, such as cross-correlations and coherence of the fluctuating surface pressure.
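One of the surface-pressure analyses named above, the cross-correlation, is simple to illustrate. The sketch below uses synthetic signals with an assumed convection delay; nothing here comes from the Tunnel 9 or DNS data.

```python
import numpy as np

# Sketch: normalized cross-correlation between fluctuating surface pressure
# at two stations. The signals and convection delay are synthetic
# assumptions; the lag of the correlation peak recovers the assumed delay.
rng = np.random.default_rng(1)
n = 4096                                 # number of pressure samples
delay = 25                               # assumed convection delay (samples)

source = rng.standard_normal(n + delay)
p1 = source[delay:]                      # pressure fluctuation, station 1
p2 = source[:n] + 0.3 * rng.standard_normal(n)  # delayed copy plus noise

def xcorr(a, b):
    """Normalized cross-correlation c[k] = <a[j + k] b[j]> over all lags k."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    lags = np.arange(-len(a) + 1, len(a))
    return lags, np.correlate(a, b, mode="full")

lags, rho = xcorr(p1, p2)
k_peak = lags[np.argmax(rho)]
print(f"peak correlation {rho.max():.2f} at lag {k_peak} samples")
```

The peak's lag magnitude equals the assumed delay, which is how such correlations yield convection velocities when divided into the sensor spacing.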
Helium or neopentane can be used as a surrogate gas fill for deuterium (D2) or deuterium-tritium (DT) in laser-plasma interaction studies. Surrogates are convenient to avoid flammability hazards or the integration of cryogenics into an experiment. To test the degree of equivalency between deuterium and helium, experiments were conducted in the Pecos target chamber at Sandia National Laboratories. Observables such as laser propagation and signatures of laser-plasma instabilities (LPI) were recorded for multiple laser and target configurations. It was found that some observables can differ significantly despite the apparent similarity of the gases with respect to molecular charge and weight. While the qualitative behavior of the interaction may very well be studied by finding a suitable compromise of laser absorption, electron density, and LPI cross sections, a quantitative investigation of expected values for deuterium fills at high laser intensities is not likely to succeed with surrogate gases.
The Sliding Scale of Cybersecurity is a framework for understanding the actions that contribute to cybersecurity. The model consists of five categories that provide varying value towards cybersecurity and incur varying implementation costs. These categories range from offensive cybersecurity measures providing the least value and incurring the greatest cost, to architecture providing the greatest value and incurring the least cost. This paper presents an application of the Sliding Scale of Cybersecurity to the Tiered Cybersecurity Analysis (TCA) of digital instrumentation and control systems for advanced reactors. The TCA consists of three tiers. Tier 1 is design and impact analysis. In Tier 1 it is assumed that the adversary has control over all digital systems, components, and networks in the plant, and that the adversary is only constrained by the physical limitations of the plant design. The plant’s safety design features are examined to determine whether the consequences of an attack by this cyber-enabled adversary are eliminated or mitigated. Accident sequences that are not eliminated or mitigated by security by design features are examined in Tier 2 analysis. In Tier 2, adversary access pathways are identified for the unmitigated accident sequences, and passive measures are implemented to deny system and network access to those pathways wherever feasible. Any systems with remaining susceptible access pathways are then examined in Tier 3. In Tier 3, active defensive cybersecurity architecture features and cybersecurity plan controls are applied to deny the adversary the ability to conduct the tasks needed to cause a severe consequence. Earlier application of the TCA in the design process provides greater opportunity for an efficient graded approach and defense-in-depth.
Accurately measuring the aero-optical properties of non-equilibrium gases is critical for characterizing compressible flow dynamics and plasmas. At thermochemical non-equilibrium conditions, excited molecules begin to dissociate, causing optical distortion and non-constant Gladstone-Dale behavior. These regions typically occur behind a strong shock at high temperatures and pressures. Currently, no experimental data exist in the literature due to the small number of facilities capable of reaching such conditions and a lack of diagnostic techniques that can measure the index of refraction across large, nearly discrete gradients. In this work, a quadrature fringe imaging interferometer is applied at the Sandia free-piston high-temperature shock tube for high-temperature, high-pressure Gladstone-Dale measurements. This diagnostic resolves high-gradient density changes using narrowband analog quadrature and broadband reference fringes. Initial simulations for target conditions show large deviations from constant Gladstone-Dale coefficient models and good agreement with high-temperature, high-pressure Gladstone-Dale models above 5000 K. Experimental results at 7653 K and 7.87 bar indicate that the index of refraction approaches high-temperature, high-pressure theory, but significant flow bifurcation effects are noted in the reflected shock.
TFLN/silicon photonic modulators featuring active silicon photonic components are reported with a Vπ of 3.6 V·cm. This hybrid architecture utilizes the bottom of the buried oxide as the bonding surface, which features minimal topology.
Scientific discovery increasingly relies on interoperable, multimodular workflows generating intermediate data. The complexity of managing intermediate data may cause performance losses or unexpected costs. This paper defines an approach to composing these scientific workflows on cloud services, focusing on workflow data orchestration, management, and scalability. We demonstrate the effectiveness of our approach with the SOMOSPIE scientific workflow that deploys machine learning (ML) models to predict high-resolution soil moisture using an HPC service (LSF) and an open-source cloud-native service (K8s) and object storage. Our approach enables scientists to scale from coarse-grained to fine-grained resolution and from a small to a larger region of interest. Using our empirical observations, we generate a cost model for the execution of workflows with hidden intermediate data on cloud services.
Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE
Foulk, James W.; Davis, Jacob; Sharman, Krish; Tom, Nathan; Husain, Salman
Experiments were conducted on a wave tank model of a bottom-raised oscillating surge wave energy converter (OSWEC) in regular waves. The OSWEC model shape was a thin rectangular flap, which was allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave frequency range of the wave maker. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed-form expressions for the added mass and radiation damping coefficients, along with the excitation force and torque. These formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study in which system parameters, including damping and added mass values, are varied. The likely contributors to the differences between predictions and experiments are tank reflections, standing waves that can occur in long, narrow wave tanks, and the thin-plate assumption employed in the analytical approach.
A quantum-cascade-laser-absorption-spectroscopy (QCLAS) diagnostic was used to characterize post-detonation fireballs of RP-80 detonators via measurements of temperature, pressure, and CO column pressure at a repetition rate of 1 MHz. Scanned-wavelength direct-absorption spectroscopy was used to measure CO absorbance spectra near 2008.5 cm−1 which are dominated by the P(0,31), P(2,20), and P(3,14) transitions. Line-of-sight (LOS) measurements were acquired 51 and 91 mm above the detonator surface. Three strategies were employed to facilitate interpretation of the LAS measurements in this highly nonuniform environment and to evaluate the accuracy of four post-detonation fireball models: (1) High-energy transitions were used to deliberately bias the measurements to the high-temperature outer shell, (2) a novel dual-zone absorption model was used to extract temperature, pressure, and CO measurements in two distinct regions of the fireball at times where pressure variations along the LOS were pronounced, and (3) the LAS measurements were compared with synthetic LAS measurements produced using the simulated distributions of temperature, pressure, and gas composition predicted by reactive CFD modeling. The results indicate that the QCLAS diagnostic provides high-fidelity data for evaluating post-detonation fireball models, and that assumptions regarding thermochemical equilibrium and carbon freeze-out during expansion of detonation gases have a large impact on the predicted chemical composition of the fireball.
This research investigates novel techniques to enhance supply chain security via the addition of configuration management controls to protect the Instrumentation and Control (I&C) systems of a Nuclear Power Plant (NPP). A secure element (SE) is integrated into a proof-of-concept testbed by means of a commercially available smart card, which provides tamper-resistant key storage and a cryptographic coprocessor. The secure element simplifies the setup and establishment of a secure communications channel between the configuration manager and verification system and the I&C system (running OpenPLC). This secure channel can be used to provide copies of commands and configuration changes of the I&C system for analysis.
Earth and Space 2022: Space Exploration, Utilization, Engineering, and Construction in Extreme Environments - Selected Papers from the 18th Biennial International Conference on Engineering, Science, Construction, and Operations in Challenging Environments
Analysis of radiation effects on electrical circuits requires computationally efficient compact radiation models. Currently, development of such models is dominated by analytic techniques that rely on empirical assumptions and physical approximations to render the governing equations solvable in closed form. In this paper we demonstrate an alternative numerical approach for the development of a compact delayed photocurrent model for a pn-junction device. Our approach combines a system identification step with a projection-based model order reduction step to obtain a small discrete time dynamical system describing the dynamics of the excess carriers in the device. Application of the model amounts to a few small matrix-vector multiplications having minimal computational cost. We demonstrate the model using a radiation pulse test for a synthetic pn-junction device.
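The end product described above, a small discrete-time dynamical system evaluated with a few matrix-vector products per time step, can be sketched as follows. The matrices, carrier time scales, and dose-rate pulse below are illustrative assumptions, not the identified model from the paper.

```python
import numpy as np

# Sketch: evaluating a compact delayed-photocurrent model as a small
# discrete-time system x_{k+1} = A x_k + B u_k, i_k = C x_k. In the paper's
# workflow A, B, C would come from system identification plus projection-based
# model order reduction; here they are illustrative stand-ins.
r = 3                                    # reduced state dimension
dt = 1e-9                                # time step (s), assumed

taus = np.array([2e-9, 10e-9, 50e-9])    # assumed excess-carrier time scales
A = np.diag(np.exp(-dt / taus))          # stable, decaying reduced dynamics
B = np.ones((r, 1))                      # coupling of dose rate into each mode
C = np.array([[1.0, 0.5, 0.1]])          # map excess carriers to photocurrent

# Radiation pulse test: a short square dose-rate pulse
n_steps = 400
u = np.zeros(n_steps)
u[10:30] = 1.0

x = np.zeros((r, 1))
current = np.empty(n_steps)
for k in range(n_steps):                 # a few matrix-vector products per step
    current[k] = (C @ x)[0, 0]
    x = A @ x + B * u[k]

print(f"peak photocurrent (arb. units): {current.max():.3f}")
```

The per-step cost is a handful of small matrix-vector multiplications, which is what makes such a compact model cheap enough to embed in circuit-level radiation-effects analysis.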
Recently, stochastic control methods such as deep reinforcement learning (DRL) have proven to be efficient, quickly converging methods for providing localized grid voltage control. Because of the random dynamical characteristics of grid reactive loads and bus voltages, such stochastic control methods are particularly useful for accurately predicting future voltage levels and minimizing associated cost functions. Although DRL is capable of quickly inferring future voltage levels given specific voltage control actions, it is prone to high variance when the learning rate or discount factors are set for rapid convergence in the presence of bus noise. Evolutionary learning is also capable of minimizing a cost function and can be leveraged for localized grid control, but it does not infer future voltage levels given specific control inputs and instead simply selects the control actions that result in the best voltage control. For this reason, evolutionary learning is better suited than DRL for voltage control in noisy grid environments. To illustrate this, using a cyber adversary to inject random noise, we compare the use of evolutionary learning and DRL in autonomous voltage control (AVC) under noisy control conditions and show that it is possible to achieve a high mean voltage control using a genetic algorithm (GA). We show that the GA can additionally provide superior AVC to DRL with comparable computational efficiency. We illustrate that the superior noise immunity properties of evolutionary learning make it a good choice for implementing AVC in noisy environments or in the presence of random cyber-attacks.
Two-dimensional (2D) layered oxides have recently attracted wide attention owing to the strong coupling among charges, spins, lattice, and strain, which allows great flexibility and opportunities in structure design as well as multifunctionality exploration. In parallel, plasmonic hybrid nanostructures exhibit exotic localized surface plasmon resonance (LSPR), providing a broad range of applications in nanophotonic devices and sensors. A hybrid material platform combining unique multifunctional 2D layered oxides and plasmonic nanostructures brings optical tuning to a new level. In this work, a novel self-assembled Bi2MoO6 (BMO) 2D layered oxide incorporated with plasmonic Au nanoinclusions has been demonstrated via a one-step pulsed laser deposition (PLD) technique. Comprehensive microstructural characterizations, including scanning transmission electron microscopy (STEM), differential phase contrast (DPC) imaging, and STEM tomography, have demonstrated the high epitaxial quality and particle-in-matrix morphology of the BMO-Au nanocomposite film. DPC-STEM imaging clarifies the magnetic domain structures of the BMO matrix. Three different BMO structures, including a layered supercell (LSC) and superlattices, have been revealed, which is attributed to the variable strain states throughout the BMO-Au film. Owing to the combination of plasmonic Au and the layered structure of BMO, the nanocomposite film exhibits a typical LSPR in the visible wavelength region and strong anisotropy in its optical and ferromagnetic properties. This study opens a new avenue for developing novel 2D layered complex oxides incorporated with plasmonic metal or semiconductor phases, showing great potential for applications in multifunctional nanoelectronic devices.
The V31 containment vessel was procured by the US Army Recovered Chemical Material Directorate (RCMD) as a third-generation EDS containment vessel. It is the fifth EDS vessel to be fabricated under Code Case 2564 of the 2019 ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the code case, is 24 lb (11 kg) TNT-equivalent for up to 1092 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico to qualify the vessel for field operations. There were three design-basis configurations for qualification testing. Qualification test (1) consisted of a simulated M55 rocket motor and warhead assembly of 24 lb (11 kg) of Composition C-4 (30 lb [14 kg] TNT equivalent). This test was considered the maximum load case, based on modeling and simulation performed by Sandia prior to the vessel design phase. Qualification test (2) consisted of a regular, right circular cylinder, unitary charge of 19.2 lb (8.72 kg) of Composition C-4 (24 lb [11 kg] TNT equivalent), located at the center of the vessel interior. Qualification test (3) consisted of a 12-pack of regular, right circular cylinders of 2 lb (908 g) each, distributed evenly inside the vessel (totaling 19.2 lb [8.72 kg] of C-4, or 24 lb [11 kg] TNT equivalent). All vessel acceptance criteria were met.
Bao, Jichao; Lee, Jonghyun; Yoon, Hongkyu; Pyrak-Nolte, Laura
Characterization of geologic heterogeneity at an enhanced geothermal system (EGS) is crucial for cost-effective stimulation planning and reliable heat production. With recent advances in computational power and sensor technology, large-scale fine-resolution simulations of coupled thermal-hydraulic-mechanical (THM) processes have become available. However, traditional large-scale inversion approaches have limited utility for sites with complex subsurface structures unless one can afford the high, often prohibitive, computational cost. The key computational burdens are predominantly associated with the large number of large-scale coupled numerical simulations required and with large dense matrix multiplications arising from fine discretization of the field site domain and a large number of THM and chemical (THMC) measurements. In this work, we present deep-generative-model-based Bayesian inversion methods for the computationally efficient and accurate characterization of EGS sites. Deep generative models are used to learn the approximate subsurface property (e.g., permeability, thermal conductivity, and elastic rock properties) distribution from multipoint-geostatistics-derived training images or discrete fracture network models as a prior, and accelerated stochastic inversion is performed on the low-dimensional latent space in a Bayesian framework. Numerical examples using synthetic permeability fields with fracture inclusions and THM data sets based on the Utah FORGE geothermal site will be presented to test the accuracy, speed, and uncertainty quantification capability of our proposed joint data inversion method.
Despite state-of-the-art deep learning-based computer vision models achieving high accuracy on object recognition tasks, x-ray screening of baggage at checkpoints is largely performed by hand. Part of the challenge in automation of this task is the relatively small amount of available labeled training data. Furthermore, realistic threat objects may have forms or orientations that do not appear in any training data, and radiographs suffer from high amounts of occlusion. Using deep generative models, we explore data augmentation techniques to expand the intra-class variation of threat objects synthetically injected into baggage radiographs using openly available baggage x-ray datasets. We also benchmark the performance of object detection algorithms on raw and augmented data.
Snow is a significant challenge for PV plants at northern latitudes, and snow-related power losses can exceed 30% of annual production. Accurate loss estimates are needed for resource planning and to validate mitigation strategies, but this requires accurate snow detection at the inverter level. In this study, we propose and validate a framework for detecting snow in time-series inverter data. We identify four distinct snow-related power loss modes based on the inverter's operating points and the electrical properties of the inverter and PV arrays. We validate these modes and identify their associated physical snow conditions using site images. Finally, we examine the relative frequencies of the snow power loss modes and their contributions to total power loss.
Lithium metal is an ideal anode for high-energy-density batteries; however, the implementation of lithium metal anodes remains challenging. Beyond the development of highly efficient electrolytes, degradation processes restrict cycle life and reduce practical energy density. Herein, lithium volumetric expansion and degradation pathways are studied in half cells through coupling electrochemical analysis with cross-sectional imaging of the intact electrode stack using a cryogenic laser plasma focused ion beam and scanning electron microscope. We find that the volumetric capacity is compromised as early as the first cycle, at best reaching values only half the theoretical capacity (1033 vs 2045 mAh cm−3). By the 101st electrodeposition, the practical volumetric capacity decreases to values ranging from 143 to 343 mAh cm−3.
Residential solar photovoltaic (PV) systems are interconnected with the distribution grid at low-voltage secondary network locations. However, computational models of these networks are often over-simplified or non-existent, which makes it challenging to determine the operational impacts of new PV installations at those locations. In this work, a model-free locational hosting capacity analysis algorithm is proposed that requires only smart meter measurements at a given location to calculate the maximum PV size that can be accommodated without exceeding voltage constraints. The proposed algorithm was evaluated on two different smart meter datasets measuring over 2,700 total customer locations and was compared against results obtained from conventional model-based methods for the same smart meter datasets. Compared to the model-based results, the model-free algorithm had a mean absolute error (MAE) of less than 0.30 kW, was equally sensitive to measurement noise, and required much less computation time.
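The model-free idea above can be illustrated with a simple sensitivity-based sketch. This is not the paper's algorithm, only a toy version of the concept: regress voltage on net real power from smart meter data to obtain a local dV/dP sensitivity, then find the PV injection at which the worst-case voltage first reaches the limit. All data, the sensitivity value, and the 1.05 p.u. limit are illustrative assumptions.

```python
import numpy as np

# Sketch of a model-free, voltage-constrained hosting-capacity estimate from
# smart meter measurements alone (synthetic data; illustrative method).
rng = np.random.default_rng(2)
n = 2000                                  # smart meter samples (assumed)

p_load = 2.0 + 1.5 * rng.random(n)        # net real power drawn (kW)
dv_dp_true = -0.004                       # p.u. voltage change per kW (assumed)
v = 1.01 + dv_dp_true * p_load + 0.001 * rng.standard_normal(n)

# Local sensitivity from a least-squares fit v ~ a + s * p
s, a = np.polyfit(p_load, v, 1)

# PV injection shifts net power to (p_load - p_pv); limit 1.05 p.u. (assumed)
v_limit = 1.05
v_worst = a + s * p_load.min()            # highest voltage seen in the data fit
p_pv_max = (v_limit - v_worst) / (-s)     # kW of PV before hitting the limit
print(f"sensitivity {s:.4f} p.u./kW, hosting capacity ~ {p_pv_max:.1f} kW")
```

Only the location's own measurements enter the estimate, which is the property that lets such methods sidestep inaccurate or missing network models.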
Through atomistic simulations, we uncover the dynamic properties of the Cantor alloy under shock-loading conditions and characterize its equation of state over a wide range of densities and pressures, along with its spall strength at ultra-high strain rates. Simulation results reveal the role of local phase transformations during the development of the shock wave in the alloy's high spall strength. The simulated shock Hugoniot results are in remarkable agreement with experimental data, validating the predictability of the model. These mechanistic insights, along with the quantification of dynamical properties, can drive further advancements in various applications of this class of alloys under extreme environments.
The novel Hydromine harvests energy from flowing water with no external moving parts, resulting in a robust system with minimal environmental impact. Here two deployment scenarios are considered: an offshore floating platform configuration to capture energy from relatively steady ocean currents at megawatt-scale, and a river-based system at kilowatt-scale mounted on a pylon. Hydrodynamic and techno-economic models are developed. The hydrodynamic models are used to maximize the efficiency of the power conversion. The techno-economic models optimize the system size and layout and ultimately seek to minimize the levelized-cost-of-electricity produced. Parametric and sensitivity analyses are performed on the models to optimize performance and reduce costs.
Geomagnetic disturbances (GMDs) give rise to geomagnetically induced currents (GICs) on the Earth's surface, which find their way into power systems via grounded transformer neutrals. The quasi-dc nature of the GICs results in half-cycle saturation of power grid transformers, which in turn results in transformer failure, life reduction, and other adverse effects. Therefore, transformers need to be more resilient to dc excitation. This paper sets forth dc immunity metrics for transformers. Furthermore, it proposes a novel transformer architecture and a design methodology that employs the dc immunity metrics to make the transformer more resilient to dc excitation. This is demonstrated using a time-stepping 2D finite element analysis (FEA) simulation. It was found that a relatively small change in the core geometry significantly increases transformer resiliency with respect to dc excitation.
Risk and resilience assessments for critical infrastructure focus on myriad objectives, from natural hazard evaluations to optimizing investments. Although research has started to characterize externalities associated with current or possible future states, incorporation of equity priorities at project inception is increasingly being recognized as critical for planning related activities. However, there is no standard methodology that guides development of equity-informed quantitative approaches for infrastructure planning activities. To address this gap, we introduce a logic model that can be tailored to capture nuances about specific geographies and community priorities, effectively incorporating them into different mathematical approaches for quantitative risk assessments. Specifically, the logic model uses a graded, iterative approach to clarify specific equity objectives as well as inform the development of equations being used to support analysis. We demonstrate the utility of this framework using case studies spanning aviation fuel, produced water, and microgrid electricity infrastructures. For each case study, the use of the logic model helps clarify the ways that local priorities and infrastructure needs are used to drive the types of data and quantitative methodologies used in the respective analyses. The explicit consideration of methodological limitations (e.g., data mismatches) and stakeholder engagements serves to increase the transparency of the associated findings as well as effectively integrate community nuances (e.g., ownership of assets) into infrastructure assessments. Such integration will become increasingly important to ensure that planning activities (which occur throughout the lifecycle of the infrastructure projects) lead to long-lasting solutions to meet both energy and sustainable development goals for communities.
The widespread adoption of residential solar PV requires distribution system studies to ensure that the addition of solar PV at a customer location does not violate system constraints; the largest such addition is referred to as the locational hosting capacity (HC). These model-based analyses are prone to error due to their dependence on the accuracy of the system information. Model-free approaches to estimating the solar PV hosting capacity for a customer can be a good alternative, as their accuracy does not depend on detailed system information. In this paper, an Adaptive Boosting (AdaBoost) algorithm is deployed that utilizes the statistical properties (mean, minimum, maximum, and standard deviation) of the customer's historical data (real power, reactive power, voltage) as inputs to estimate the voltage-constrained PV HC for the customer. A baseline comparison approach is also built that utilizes just the maximum voltage of the customer to predict PV HC. The results show that the ensemble-based AdaBoost algorithm outperforms the baseline approach. The developed methods are also compared against and validated with existing state-of-the-art model-free PV HC estimation methods.
Researchers at Sandia National Laboratories, in conjunction with the Nuclear Energy Institute and Light Water Reactor Sustainability Programs, have conducted testing and analysis to reevaluate and redefine the minimum passable opening size through which a person can effectively pass and navigate. Physical testing with a representative population has been performed on both simple two-dimensional (rectangular and circular cross sections up to 91.4 cm in depth) and more complex three-dimensional (circular cross sections of longer lengths up to 9.1 m and changes in direction) opening configurations. The primary impact of this effort is to define the physical designs in which an adversary could successfully pass through a potentially complex opening, as well as the designs in which an adversary would not be expected to successfully traverse a complex opening. These data can then be used to support risk-informed decision making.
The near-wake flow field associated with hypersonic blunt bodies is characterized by complex physical phenomena resulting in both steady and time-dependent pressure loadings on the base of the vehicle. Here, we focus on the unsteady fluid dynamic pressure fluctuation behavior as a vibratory input loading. Typically, these flows are characterized by a locally low-pressure, separated flow region with an unsteady formation of vortical cells that are locally produced and convected downstream into the far-field wake. This periodic production and transport of vortical elements is well known from classical incompressible fluid mechanics and is usually termed the (von) Kármán vortex street. While traditionally discussed within the scope of incompressible flow, the periodic vortex shedding phenomenon is known for compressible flows as well. To support vehicle vibratory loading design computations, we examine a suite of analytical and high-fidelity computational models supported by dedicated experimental measurements. While large-scale simulation approaches offer very high-quality results, they are impractical for design-level decisions, implying that analytically derived reduced-order models are essential. The major portions of this effort include an examination of the DeChant-Smith Power Spectral Density (PSD) [1] model to better understand both the overall Root Mean Square (RMS) magnitude and the functional maximum associated with a critical vortex shedding phenomenon. The critical frequency is examined using computations, experiments, and an analytical shear-layer frequency model. Finally, the PSD magnitude maximum is studied using a theory-based approach connecting the PSD to the spatial correlation, which strongly supports the DeChant-Smith PSD model behavior. These results combine to demonstrate that the currently employed PSD models provide plausible reduced-order closures for turbulent base pressure fluctuations for high Reynolds number flows over a range of Mach numbers.
Access to a reliable base pressure fluctuation model then permits simulation of bluff body vibratory input.
Polymers are widely used as damping materials in vibration and impact applications. Liquid crystal elastomers (LCEs) are a unique class of polymers that may offer the potential for enhanced energy absorption capacity under impact conditions over conventional polymers due to their ability to align the nematic phase during loading. Because LCEs are relatively new materials, their high strain rate compressive properties have been minimally studied. Here, we investigated the high strain rate compression behavior of different solid LCEs, including cast polydomain and 3D-printed, preferentially oriented monodomain samples. Direct ink write (DIW) 3D printed samples allow unique sample designs, namely, a specific orientation of mesogens with respect to the loading direction. Loading the sample in different orientations can induce mesogen rotation during mechanical loading and subsequently different stress-strain responses under impact. We also used a reference polymer, bisphenol-A (BPA) cross-linked resin, to contrast LCE behavior with conventional elastomer behavior.
Computational simulations of high-speed flow play an important role in the design of hypersonic vehicles, for which experimental data are scarce; however, high-fidelity simulations of hypersonic flow are computationally expensive. Reduced order models (ROMs) have the potential to make many-query problems, such as design optimization and uncertainty quantification, tractable for this domain. Residual minimization-based ROMs, which formulate the projection onto a reduced basis as an optimization problem, are one promising candidate for model reduction of large-scale fluid problems. This work analyzes whether specific choices of norms and objective functions can improve the performance of ROMs of hypersonic flow. Specifically, we investigate the use of dimensionally consistent inner products and modifications designed for convective problems, including ℓ1 minimization and constrained optimization statements to enforce conservation laws. Particular attention is paid to accuracy for problems with strong shocks, which are common in hypersonic flow and challenging for projection-based ROMs. We demonstrate that these modifications can improve the predictability and efficiency of a ROM, though the impact of such formulations depends on the quantity of interest and problem considered.
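The residual-minimization projection described in the abstract above can be illustrated on a toy problem. The following sketch (an illustrative stand-in, not the paper's hypersonic-flow formulation) builds a POD-style basis from synthetic snapshots and then solves the reduced optimization problem min_q ||r(Φq)|| for a small nonlinear system; all names and sizes are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy full-order "model": residual of a nonlinear system r(x) = A x + 0.1 x**3 - b
rng = np.random.default_rng(0)
n, k = 50, 3                                 # full and reduced dimensions (illustrative)
A = np.diag(np.linspace(1.0, 2.0, n))
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 0.1 * x_true**3

def residual(x):
    return A @ x + 0.1 * x**3 - b

# Reduced basis from "snapshots" (here: random perturbations of the solution)
snaps = np.column_stack([x_true * (1 + 0.1 * rng.standard_normal(n)) for _ in range(10)])
Phi, _, _ = np.linalg.svd(snaps, full_matrices=False)
Phi = Phi[:, :k]                              # POD-style truncated basis

# Residual-minimization ROM: find reduced coordinates q minimizing ||r(Phi q)||_2
sol = least_squares(lambda q: residual(Phi @ q), x0=np.zeros(k))
x_rom = Phi @ sol.x
rel_err = np.linalg.norm(x_rom - x_true) / np.linalg.norm(x_true)
```

Swapping the 2-norm objective for an ℓ1 objective or adding conservation constraints, as the abstract investigates, amounts to changing the objective/constraints of this same reduced optimization problem.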
Reinforcement learning (RL) may enable fixed-wing unmanned aerial vehicle (UAV) guidance to achieve more agile and complex objectives than typical methods. However, RL has thus far struggled to achieve even minimal success on this problem; fixed-wing flight with RL-based guidance has only been demonstrated in the literature with reduced state and/or action spaces. In order to achieve full 6-DOF RL-based guidance, this study begins training with imitation learning from classical guidance, a method known as warm-starting (WS), before further training using Proximal Policy Optimization (PPO). We show that warm-starting is critical to successful RL performance on this problem. PPO alone achieved a 2% success rate in our experiments. Warm-starting alone achieved 32% success. Warm-starting plus PPO achieved 57% success over all policies, with 40% of policies achieving 94% success.
This study presents a method for constructing machine learning-based reduced order models (ROMs) that accurately simulate nonlinear contact problems while quantifying epistemic uncertainty. These purely non-intrusive ROMs significantly lower computational costs compared to traditional full order models (FOMs). The technique utilizes adversarial training combined with an ensemble of Barlow twins reduced order models (BT-ROMs) to maximize the information content of the nonlinear reduced manifolds. These lower-dimensional manifolds are equipped with Gaussian error estimates, allowing for quantifying epistemic uncertainty in the ROM predictions. The effectiveness of these ROMs, referred to as UQ-BT-ROMs, is demonstrated in the context of contact between a rigid indenter and a hyperelastic substrate under finite deformations. The ensemble of BT-ROMs improves accuracy and computational efficiency compared to existing alternatives. The relative error between the UQ-BT-ROM and FOM solutions ranges from approximately 3% to 8% across all benchmarks. Remarkably, this high level of accuracy is achieved at a significantly reduced computational cost compared to FOMs. For instance, the online phase of the UQ-BT-ROM takes only 0.001 seconds, while a single FOM evaluation requires 63 seconds. Furthermore, the error estimate produced by the UQ-BT-ROMs reasonably captures the errors in the ROMs, with increasing accuracy as training data increases. The UQ-BT-ROMs provide a cost-effective solution with significantly reduced computational times while maintaining a high level of accuracy.
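The ensemble-based epistemic uncertainty estimate described above can be sketched generically: train several surrogates on resampled data and use the spread across members as a Gaussian-style error bar. This is a minimal toy illustration (polynomial surrogates on synthetic data, not the paper's BT-ROM architecture); all functions and sizes are assumptions.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
# Toy "FOM" response and noisy training snapshots (illustrative stand-in)
f = lambda p: np.sin(3 * p)
p_train = rng.uniform(0, 1, 40)
y_train = f(p_train) + 0.05 * rng.standard_normal(p_train.size)

# Ensemble of surrogates trained on bootstrap resamples; the member-to-member
# spread provides a Gaussian-style epistemic error estimate.
members = []
for _ in range(20):
    idx = rng.integers(0, p_train.size, p_train.size)
    members.append(Polynomial.fit(p_train[idx], y_train[idx], deg=5))

p_test = np.linspace(0.05, 0.95, 91)
preds = np.stack([m(p_test) for m in members])
mean, std = preds.mean(axis=0), preds.std(axis=0)   # prediction and uncertainty
```

A prediction interval such as mean ± 2·std then plays the role of the error estimate the abstract says "reasonably captures the errors in the ROMs."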
The block version of GMRES (BGMRES) is most advantageous over the single right hand side (RHS) counterpart when the cost of communication is high while the cost of floating point operations is not. This is the particular case on modern graphics processing units (GPUs), while it is generally not the case on traditional central processing units (CPUs). In this paper, experiments on both GPUs and CPUs are shown that compare the performance of BGMRES against GMRES as the number of RHS increases, with a particular focus on GPU performance. The experiments indicate that there are many cases in which BGMRES is slower than GMRES on CPUs, but faster on GPUs. Furthermore, when varying the number of RHS on the GPU, there is an optimal number of RHS where BGMRES is clearly most advantageous over GMRES. A computational model for the GPU is developed using hardware specific parameters, providing insight towards how the qualitative behavior of BGMRES changes as the number of RHS increases, and this model also helps explain the phenomena observed in the experiments.
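The latency-versus-flops trade-off behind the abstract's observation can be sketched with a crude cost model. The parameters below are hypothetical round numbers, not the paper's measured hardware constants, and the model simplifies by holding total flops fixed; it only illustrates why blocking RHS helps on latency-dominated devices.

```python
# Illustrative cost model (hypothetical parameters, not the paper's model):
# per-iteration time = flop time + fixed launch/communication latency.
# Blocking s RHS amortizes the fixed latency over s vectors.
def iter_time(n, s, flop_rate, latency, n_kernels):
    flops = 2.0 * n * n * s          # matvec-dominated cost for s RHS
    return flops / flop_rate + n_kernels * latency

gpu = dict(flop_rate=1e13, latency=5e-6, n_kernels=10)   # latency-dominated
cpu = dict(flop_rate=1e11, latency=1e-7, n_kernels=10)   # flop-dominated

def speedup(n, s, p):
    # Time to advance s RHS sequentially (s iterations) vs blocked (one iteration)
    return s * iter_time(n, 1, **p) / iter_time(n, s, **p)
```

Under these assumed numbers, `speedup(10_000, 8, gpu)` exceeds 2 while `speedup(10_000, 8, cpu)` is essentially 1, mirroring the qualitative GPU/CPU contrast the experiments report.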
We demonstrate an InAs-based nonlinear dielectric metasurface, which can generate terahertz (THz) pulses with opposite phase in comparison to an unpatterned InAs layer. It enables binary phase THz metasurfaces for generation and focusing of THz pulses.
Visualization of mode shapes is a crucial step in modal analysis. However, the methods to create the test geometry, which typically require arduous hand measurements and approximations of rotation matrices, are crude. This leads to a lengthy test set-up process and a test geometry with potentially high measurement errors. Test and analysis delays can also be experienced if the orientation of an accelerometer is documented incorrectly, which happens more often than engineers would like to admit. To mitigate these issues, a methodology has been created to generate the test geometry (coordinates and rotation matrices) with probe data from a portable coordinate measurement machine (PCMM). This methodology has led to significant reductions in the test geometry measurement time, reductions in test geometry measurement errors, and even reduced test times. Simultaneously, a methodology has also been created to use the PCMM to easily identify desired measurement locations, as specified by a model. This paper will discuss the general framework of these methods and the realized benefits, using examples from actual tests.
IEEE International Symposium on Applications of Ferroelectrics, ISAF 2023, International Symposium on Integrated Functionalities, ISIF 2023 and Piezoresponse Force Microscopy Workshop, PFM 2023, Proceedings
Radio frequency (RF) magnetic devices are key components in RF front ends. However, they are difficult to miniaturize and remain the bulkiest components in RF systems. Acoustically driven ferromagnetic resonance (ADFMR) offers a route towards the miniaturization of RF magnetic devices. The ADFMR literature thus far has focused predominantly on the dynamics of the coupling process, with relatively little work done on the device optimization. In this work, we present an optimized 2 GHz ADFMR device utilizing relaxed SPUDT transducers in lithium tantalate. We report an insertion loss of -13.7 dB and an ADFMR attenuation constant of -71.7 dB/mm, making this device one of the best performing ADFMR devices to date.
Sustainable use of water resources continues to be a challenge across the globe. This is in part due to the complex set of physical and social behaviors that interact to influence water management from local to global scales. Analyses of water resources have been conducted using a variety of techniques, including qualitative evaluations of media narratives. This study aims to augment these methods by leveraging computational and quantitative techniques from the social sciences focused on text analyses. Specifically, we use natural language processing methods to investigate a large corpus (approx. 1.8M) of newspaper articles spanning approximately 35 years (1982–2017) for insights into human-nature interactions with water. Focusing on local and regional United States publications, our analysis demonstrates important dynamics in water-related dialogue, ranging from drinking water and pollution to connections with other critical infrastructures, such as energy, across different parts of the country. Our assessment, which looks at water as a system, also highlights key actors and sentiments surrounding water. Extending these analytical methods could help us further improve our understanding of the complex roles of water in current society that should be considered in emerging activities to mitigate and respond to resource conflicts and climate change.
There is a global interest in decarbonizing the existing natural gas infrastructure by blending the natural gas with hydrogen. However, hydrogen is known to embrittle pipeline and pressure vessel steels used in gas transportation and storage applications. Thus, assessing the structural integrity of vintage (pre-1970s) pipelines in the presence of gaseous hydrogen is a critical step towards successful implementation of hydrogen blending into existing infrastructure. To this end, fatigue crack growth (FCG) behavior and fracture resistance of several vintage X52 pipeline steels were evaluated in high purity gaseous hydrogen environments at pressures of 210 bar (3,000 psi) and 34 bar (500 psi). The base metal and seam weld microstructures were characterized using optical microscopy, scanning electron microscopy (SEM) and Vickers hardness mapping. The base metals consisted of ferrite-pearlite banded microstructures, whereas the weld regions contained ferrite and martensite. In one case, a hook-like crack was observed in an electric resistance (seam) weld, whereas hard spots were observed near the bond line of a double-submerged arc (seam) weld. For a given hydrogen gas pressure, comparable FCG rates were observed for the different base metal and weld microstructures. Generally, the higher strength microstructures had lower fracture resistance in hydrogen. In particular, lower fracture resistance was measured when local hard spots were observed in the approximate region of the crack plane of the weld. Samples tested in lower H2 pressure (34 bar) exhibited lower FCG rates (in the lower ∆K regime) and greater fracture resistance when compared to the respective high-pressure (210 bar) hydrogen tests. The hydrogen-assisted fatigue and fracture surfaces were qualitatively characterized using SEM to rationalize the influence of microstructure on the dominant fracture mechanisms in gaseous hydrogen environments.
We demonstrate the use of a low-temperature-grown GaAs (LT-GaAs) metasurface as an ultrafast photoconductive switching element gated with 1550 nm laser pulses. The metasurface is designed to enhance a weak two-step photon absorption at 1550 nm, enabling THz pulse detection.
This is an investigation of two experimental datasets of laminar hypersonic flows, over a double-cone geometry, acquired in Calspan—University at Buffalo Research Center’s Large Energy National Shock (LENS)-XX expansion tunnel. These datasets have yet to be modeled accurately. A previous paper suggested that this could partly be due to mis-specified inlet conditions. The authors of this paper solved a Bayesian inverse problem to infer the inlet conditions of the LENS-XX test section and found that in one case they lay outside the uncertainty bounds specified in the experimental dataset. However, the inference was performed using approximate surrogate models. In this paper, the experimental datasets are revisited and inversions for the tunnel test-section inlet conditions are performed with a Navier–Stokes simulator. The inversion is deterministic and can provide uncertainty bounds on the inlet conditions under a Gaussian assumption. It was found that deterministic inversion yields inlet conditions that do not agree with what was stated in the experiments. An a posteriori method is also presented to check the validity of the Gaussian assumption for the posterior distribution. This paper contributes to ongoing work on the assessment of datasets from challenging experiments conducted in extreme environments, where the experimental apparatus is pushed to the margins of its design and performance envelopes.
In this work, we introduce several methods for determining the horizon profile at a PV site and compare their results, use cases, and limitations. The methods in this paper include horizon detection from time-series irradiance or performance data, modeling from GIS topology data, manual theodolite measurements, and camera-based horizon detection. We compare various combinations of these methods using data from 4 Regional Test Center sites in the US and 3 World Bank sites in Nepal. The results show many differences between these methods, and we recommend the most practical solutions for various use cases.
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard for a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be within a reasonable degree of agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. This analysis, using the best available tools, suggests the offset distance for the ignition hazard from a fireball will be on the same order as the offset distance for the blast damage. This suggests the buy-down of risk by considering the fireball is minimal when considering the blast hazards. Assessment tools for the fireball predictions are not particularly mature, and ways to improve them for a higher-fidelity estimate are noted.
Austenitic stainless steels have been extensively tested in hydrogen environments; however, limited information exists for the effects of hydrogen on the fatigue life of high-strength grades of austenitic stainless steels. Moreover, fatigue life testing of finished product forms (such as tubing and welds) is challenging. A novel test method for evaluating the influence of internal hydrogen on fatigue of orbital tube welds was reported, where a cross hole in a tubing specimen is used to establish a stress concentration analogous to circumferentially notched bar fatigue specimens for constant-load, axial fatigue testing. In that study (Kagay et al., ASME PVP2020-8576), annealed 316L tubing with a cross hole displayed similar fatigue performance as more conventional materials test specimens. A similar cross-hole tubing geometry is adopted here to evaluate the fatigue crack initiation and fatigue life of XM-19 austenitic stainless steel with a high concentration of internal hydrogen. XM-19 is a nitrogen-strengthened Fe-Cr-Ni-Mn austenitic stainless steel that offers higher strength than conventional 3XX series stainless steels. A uniform hydrogen concentration in the test specimen is achieved by thermal precharging (exposure to high-pressure hydrogen at elevated temperature for two weeks) prior to testing in air to simulate the equilibrium hydrogen concentration near a stress concentration in gaseous hydrogen service. Specimens are also instrumented for direct current potential difference measurements to identify crack initiation. After accounting for the strengthening associated with thermal precharging, the fatigue crack initiation and fatigue life of XM-19 tubing were virtually unchanged by internal hydrogen.
Earth and Space 2022: Space Exploration, Utilization, Engineering, and Construction in Extreme Environments - Selected Papers from the 18th Biennial International Conference on Engineering, Science, Construction, and Operations in Challenging Environments
Partitioned methods allow one to build a simulation capability for coupled problems by reusing existing single-component codes. In so doing, partitioned methods can shorten code development and validation times for multiphysics and multiscale applications. In this work, we consider a scenario in which one or more of the “codes” being coupled are projection-based reduced order models (ROMs), introduced to lower the computational cost associated with a particular component. We simulate this scenario by considering a model interface problem that is discretized independently on two non-overlapping subdomains. We then formulate a partitioned scheme for this problem that allows the coupling between a ROM “code” for one of the subdomains with a finite element model (FEM) or ROM “code” for the other subdomain. The ROM “codes” are constructed by performing proper orthogonal decomposition (POD) on a snapshot ensemble to obtain a low-dimensional reduced order basis, followed by a Galerkin projection onto this basis. The ROM and/or FEM “codes” on each subdomain are then coupled using a Lagrange multiplier representing the interface flux. To partition the resulting monolithic problem, we first eliminate the flux through a dual Schur complement. Application of an explicit time integration scheme to the transformed monolithic problem decouples the subdomain equations, allowing their independent solution for the next time step. We show numerical results that demonstrate the proposed method’s efficacy in achieving both ROM-FEM and ROM-ROM coupling.
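The POD/Galerkin construction the abstract describes for each ROM "code" can be sketched on a toy linear system: collect snapshots, extract a reduced basis via SVD, project the operator, and integrate the reduced equations explicitly before lifting back to the full space. This is a minimal single-domain illustration (no interface coupling); the diagonal operator and sizes are assumptions for the sake of a self-contained example.

```python
import numpy as np

# Sketch of POD + Galerkin projection for a linear ODE  x' = A x  (toy system)
n, k = 200, 5
rng = np.random.default_rng(2)
A = -np.diag(np.linspace(0.5, 5.0, n))          # stable full-order operator

# Snapshot ensemble: exact solutions x(t) = exp(diag(A) t) * x0 at sample times
x0 = rng.standard_normal(n)
times = np.linspace(0.0, 1.0, 20)
snaps = np.column_stack([np.exp(np.diag(A) * t) * x0 for t in times])

# POD: leading left singular vectors of the snapshot matrix
Phi, svals, _ = np.linalg.svd(snaps, full_matrices=False)
Phi = Phi[:, :k]

# Galerkin projection: reduced operator and initial condition
A_r = Phi.T @ A @ Phi
q = Phi.T @ x0

# Integrate the ROM with explicit (forward Euler) time stepping, as in the
# partitioned scheme, then lift back to the full space
dt = 1e-3
for _ in range(1000):
    q = q + dt * (A_r @ q)
x_rom = Phi @ q
x_ref = np.exp(np.diag(A) * 1.0) * x0
rel_err = np.linalg.norm(x_rom - x_ref) / np.linalg.norm(x_ref)
```

In the partitioned setting of the abstract, two such subdomain models (ROM or FEM) would additionally exchange a Lagrange-multiplier interface flux each explicit step, eliminated via the dual Schur complement.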
The design of thermal protection systems (TPS), including heat shields for reentry vehicles, relies increasingly on computational simulation tools for design optimization and uncertainty quantification. Since high-fidelity simulations are computationally expensive for full vehicle geometries, analysts primarily use reduced-physics models instead. Recent work has shown that projection-based reduced-order models (ROMs) can provide accurate approximations of high-fidelity models at a lower computational cost. ROMs are preferable to alternative approximation approaches for high-consequence applications due to the presence of rigorous error bounds. The following paper extends our previous work on projection-based ROMs for ablative TPS by considering hyperreduction methods, which yield further reductions in computational cost, and demonstrating the approach for simulations of a three-dimensional flight vehicle. We compare the accuracy and potential performance of several different hyperreduction methods and mesh sampling strategies. This paper shows that, with the correct implementation, hyperreduction can make ROMs 1–3 orders of magnitude faster than the full order model by evaluating the residual at only a small fraction of the mesh nodes.
While significant investments have been made in the exploration of ethics in computation, recent advances in high performance computing (HPC) and artificial intelligence (AI) have reignited a discussion for more responsible and ethical computing with respect to the design and development of pervasive sociotechnical systems within the context of existing and evolving societal norms and cultures. The ubiquity of HPC in everyday life presents complex sociotechnical challenges for all who seek to practice responsible computing and ethical technological innovation. The present paper provides guidelines which scientists, researchers, educators, and practitioners alike, can employ to become more aware of one’s personal values system that may unconsciously shape one’s approach to computation and ethics.
Mann, James B.; Mohanty, Debapriya P.; Kustas, Andrew B.; Stiven Puentes Rodriguez, B.; Issahaq, Mohammed N.; Udupa, Anirudh; Sugihara, Tatsuya; Trumble, Kevin P.; M'Saoubi, Rachid; Chandrasekar, Srinivasan
Machining-based deformation processing is used to produce metal foil and flat wire (strip) with suitable properties and quality for electrical power and renewable energy applications. In contrast to conventional multistage rolling, the strip is produced in a single step and with much less process energy. Examples are presented from metal systems of varied workability, and strip product scale in terms of size and production rate. By utilizing the large-strain deformation intrinsic to cutting, bulk strip with an ultrafine-grained microstructure and a crystallographic shear texture favourable for formability is achieved. Implications for production of commercial strip for electric motor applications and battery electrodes are discussed.
We present a new algorithm for infinite-dimensional optimization with general constraints, called ALESQP. In short, ALESQP is an augmented Lagrangian method that penalizes inequality constraints and solves equality-constrained nonlinear optimization subproblems at every iteration. The subproblems are solved using a matrix-free trust-region sequential quadratic programming (SQP) method that takes advantage of iterative, i.e., inexact linear solvers, and is suitable for large-scale applications. A key feature of ALESQP is a constraint decomposition strategy that allows it to exploit problem-specific variable scalings and inner products. We analyze convergence of ALESQP under different assumptions. We show that strong accumulation points are stationary. Consequently, in finite dimensions ALESQP converges to a stationary point. In infinite dimensions we establish that weak accumulation points are feasible in many practical situations. Under additional assumptions we show that weak accumulation points are stationary. We present several infinite-dimensional examples where ALESQP shows remarkable discretization-independent performance in all of its iterative components, requiring a modest number of iterations to meet constraint tolerances at the level of machine precision. Also, we demonstrate a fully matrix-free solution of an infinite-dimensional problem with nonlinear inequality constraints.
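The augmented Lagrangian outer loop that ALESQP builds on can be sketched in a few lines. This is the generic textbook scheme on a toy equality-constrained problem, not ALESQP itself (no SQP subproblem solver, inequality penalization, or constraint decomposition); the problem, tolerances, and update factors are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Generic augmented Lagrangian loop (illustration, not ALESQP):
# minimize f(x) subject to h(x) = 0 via a sequence of unconstrained subproblems
#   min_x  f(x) + lam*h(x) + 0.5*mu*h(x)**2
f = lambda x: x[0]**2 + 2 * x[1]**2
h = lambda x: x[0] + x[1] - 1.0            # single equality constraint

x, lam, mu = np.zeros(2), 0.0, 10.0
for _ in range(20):
    obj = lambda z: f(z) + lam * h(z) + 0.5 * mu * h(z)**2
    x = minimize(obj, x, method="BFGS").x   # inner (sub)problem solve
    lam += mu * h(x)                        # first-order multiplier update
    mu *= 2.0                               # penalty parameter update
```

For this toy problem the analytic solution is x = (2/3, 1/3); ALESQP replaces the BFGS inner solve with a matrix-free trust-region SQP method suited to inexact, large-scale linear algebra.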
Chang, Chun; Nakagawa, Seiji; Kibikas, William M.; Kneafsey, Timothy; Dobson, Patrick; Samuel, Abraham; Otto, Michael; Bruce, Stephen; Kaargeson-Loe, Nils
Although enhancing permeability is vital for successful development of an Enhanced Geothermal System (EGS) reservoir, high-permeability pathways between injection and production wells can lead to short-circuiting of the flow, resulting in inefficient heat exchange with the reservoir rock. For this reason, the permeability of such excessively permeable paths needs to be reduced. Controlling the reservoir permeability away from wells, however, is challenging, because the injected materials need to form solid plugs only after they reach the target locations. To control the timing of the flow-diverter formation, we are developing a technology to deliver one or more components of the diverter-forming chemicals in microparticles (capsules) with a thin polymer shell. The material properties of the shell are designed so that it can withstand moderately high temperatures (up to ~200°C) of the injected fluid for a short period of time (up to ~30 minutes), but thermally degrades and releases the reactants at higher reservoir temperatures. A microfluidic system has been developed that can continuously produce reactant-encapsulating particles. The diameter of the produced particles is in the range of ~250-650 μm, which can be controlled by using capillary tubes with different diameters and by adjusting the flow rates of the encapsulated fluid and the UV-curable epoxy resin for the shell. Preliminary experiments have demonstrated that (1) microcapsules containing chemical activators for flow-diverter (silicate gel or metal silicate) formation can be produced, (2) the durability of the shell can be made to satisfy the required conditions, and (3) thermal degradation of the shell allows for release of the reaction activators and control of reaction kinetics in silica-based diverters.
Laser absorption spectroscopy (LAS) was used to measure temperature and XH2O at a rate of 500 kHz in post-detonation fireballs of solid explosives. A 25 g hemisphere of pentaerythritol tetranitrate (PETN) was initiated with an exploding-bridgewire detonator to produce a post-detonation fireball that traveled radially toward a hardened optical probe. The probe contained a pressure transducer and the near-infrared optics needed to measure H2O absorption transitions near 7185.6 cm-1 and 6806 cm-1 using peak-picking scanned-wavelength modulation-spectroscopy with first-harmonic-normalized second-harmonic detection (scanned-WMS-2f/1f). The two lasers were scanned across the peak of an absorption line at 500 kHz and modulated at either 35 MHz for the laser near 7185.6 cm-1 or 45.5 MHz for the laser near 6806 cm-1. This enabled measurements of temperature and XH2O at 500 kHz in the shock-heated air and trailing post-detonation fireball. Time histories of pressure, temperature, and XH2O were acquired at multiple standoff distances in order to quantify the temporal evolution of these quantities in the post-detonation environment produced by PETN.
Renewable microgrids are sustainable, resilient solutions to mitigate and adapt to climate change. Making electric loads nearly 100% available (i.e., power remains on) during outages increases cost. Near 100% availability is required when human life or high-cost assets are involved, but availability can be reduced for less consequential loads, leading to lower costs. This study analyses costs for photo-voltaic and lithium-ion battery microgrids with availability ranging from 0–99%. We develop a methodology to analyse three Puerto Rican coastal communities. We consider power outage effects for hurricanes, earthquakes, and everyday outages. The results show cost versus availability from 0–99%. There is a 27–31% cost reduction at 80% availability in comparison to 99% availability. A regression model of microgrid availability versus three ratios, namely 1) the annual generation to demand ratio, 2) the storage to interruption energy ratio, and 3) the peak storage to load ratio, produced a coefficient of determination of 0.99949 with 70% of the data used for training and 30% for testing. The results can therefore be extended to other coastal Puerto Rican communities of varying sizes that have ratios within the ranges analysed in this study. This can empower decision makers to rapidly analyse designs that have availabilities well below 100%.
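The regression described above, availability against three design ratios with a 70/30 train/test split, can be sketched with ordinary least squares. The data below are synthetic stand-ins (the study's actual ratios and availabilities are not reproduced here), so the fitted coefficients are purely illustrative.

```python
import numpy as np

# Illustrative sketch (synthetic data, not the study's): regress microgrid
# availability on three design ratios via ordinary least squares.
rng = np.random.default_rng(3)
N = 200
# columns: [generation/demand, storage/interruption-energy, peak-storage/load]
X = rng.uniform(0.5, 3.0, (N, 3))
true_w = np.array([0.15, 0.20, 0.10])                  # assumed ground truth
y = 0.2 + X @ true_w + 0.01 * rng.standard_normal(N)   # availability (fraction)

# 70/30 train/test split, as in the abstract
n_tr = int(0.7 * N)
Xtr, Xte, ytr, yte = X[:n_tr], X[n_tr:], y[:n_tr], y[n_tr:]
A = np.column_stack([np.ones(n_tr), Xtr])
w, *_ = np.linalg.lstsq(A, ytr, rcond=None)            # [intercept, w1, w2, w3]

# Coefficient of determination on the held-out 30%
pred = np.column_stack([np.ones(N - n_tr), Xte]) @ w
r2 = 1 - np.sum((yte - pred)**2) / np.sum((yte - yte.mean())**2)
```

With ratios inside the fitted range, evaluating `pred` for a new community is a single matrix-vector product, which is what makes the rapid design screening the abstract mentions possible.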
7th IEEE Electron Devices Technology and Manufacturing Conference: Strengthen the Global Semiconductor Research Collaboration After the Covid-19 Pandemic, EDTM 2023
This paper presents an assessment of electrical device measurements using functional data analysis (FDA) on a test case of Zener diode devices. We employ three techniques from FDA to quantify the variability in device behavior, primarily due to production lot, and demonstrate that this has a significant effect in our data set. We also argue for the expanded use of FDA methods in providing principled, quantitative analysis of electrical device data.
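One common FDA technique, functional principal component analysis, can be sketched as an SVD of mean-centered measurement curves; lot-to-lot variability then shows up as separation in the leading scores. The device curves below are synthetic sigmoids (hypothetical stand-ins for diode I-V characteristics), and this is only one of the three FDA techniques the abstract alludes to.

```python
import numpy as np

# Functional PCA sketch on synthetic "device curves"; production-lot
# variability appears as separation in the first principal component scores.
rng = np.random.default_rng(4)
v = np.linspace(0, 1, 50)                       # normalized voltage grid

def curve(lot_shift):
    # sigmoid turn-on curve with a lot-dependent shift plus measurement noise
    return 1 / (1 + np.exp(-20 * (v - 0.5 - lot_shift))) + 0.01 * rng.standard_normal(v.size)

lots = [0.0] * 30 + [0.05] * 30                 # two production lots
curves = np.stack([curve(s) for s in lots])

centered = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[0]                       # first functional PC scores
```

Comparing `scores[:30]` against `scores[30:]` (e.g., with a two-sample test) is one principled way to quantify whether production lot has a significant effect.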
Transient stability is often the limiting factor in power transmission. Due to lack of inertia, high penetration of inverter-based generation may exacerbate such instabilities. Emerging energy storage systems (ESSs) present a unique opportunity for addressing transient stability via real-power modulation. Such control could help avoid instabilities or the need for remedial actions such as load shedding. Some promising ESS transient-stability control strategies proposed in the literature utilize real-time wide-area generator speeds as the fundamental feedback signals. A potential proxy for the generator speed is electrical frequency obtained from a phasor-measurement unit (PMU) incorporated into a wide-area measurement system. This paper examines the impact of using PMU-based frequency measurements for such applications.
Complex angle theory can offer new fundamental insights into refraction at an absorptive interface. In this work we propose a new method to induce isofrequency opening via the addition of scattering in a dual-interface system.
Austenitic stainless steels are used in high-pressure hydrogen containment infrastructure for their resistance to hydrogen embrittlement. Applications for the use of austenitic stainless steels include pressure vessels, tubing, piping, valves, fittings and other piping components. Despite their resistance to brittle behavior in the presence of hydrogen, austenitic stainless steels can exhibit degraded fracture performance. The mechanisms of hydrogen-assisted fracture, however, remain elusive, which has motivated continued research on these alloys. There are two principal approaches to evaluate the influence of gaseous hydrogen on mechanical properties: internal and external hydrogen. The austenite phase has high solubility and low diffusivity of hydrogen at room temperature, which enables introduction of hydrogen into the material through thermal precharging at elevated temperature and pressure; a condition referred to as internal hydrogen. H-precharged material can subsequently be tested in ambient conditions. Alternatively, mechanical testing can be performed while test coupons are immersed in gaseous hydrogen, thereby evaluating the effects of external hydrogen on property degradation. The slow diffusivity of hydrogen in austenite at room temperature can often be a limiting factor in external hydrogen tests and may not properly characterize lower bound fracture behavior in components exposed to hydrogen for long time periods. In this study, the differences between internal and external hydrogen environments are evaluated in the context of fracture resistance measurements. Fracture testing was performed on two different forged austenitic stainless steel alloys (304L and XM-11) in three different environments: 1) non-charged and tested in gaseous hydrogen at a pressure of 1,000 bar (external H2), 2) hydrogen precharged and tested in air (internal H), 3) hydrogen precharged and tested in 1,000 bar H2 (internal H + external H2).
For all environments, elastic-plastic fracture measurements were conducted to establish J-R curves following the methods of ASTM E1820. Following fracture testing, fracture surfaces were examined to reveal predominant fracture mechanisms for the different conditions and to characterize differences (and similarities) in the macroscale fracture processes associated with these environmental conditions.
In this study, we develop an end-to-end deep learning-based inverse design approach to determine the scatterer shape necessary to achieve a target acoustic field. This approach integrates non-uniform rational B-splines (NURBS) into a convolutional autoencoder (CAE) architecture while concurrently leveraging (in a weak sense) the governing physics of the acoustic problem. By utilizing prior physical knowledge and NURBS parameterization to regularize the ill-posed inverse problem, this method does not require enforcing any geometric constraint on the inverse design space, hence allowing the determination of scatterers with potentially any arbitrary shape (within the set allowed by NURBS). A numerical study is presented to showcase the ability of this approach to identify physically-consistent scatterer shapes capable of producing user-defined acoustic fields.
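As a concrete illustration of the NURBS parameterization underlying the design space, the following minimal sketch evaluates a point on a NURBS curve via the Cox–de Boor recursion. The control points, weights, and knot vector are illustrative assumptions, not values from the study:

```python
import numpy as np

def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p.
    Uses half-open knot spans, so t at the very end of the domain returns 0."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (t - knots[i]) / denom * bspline_basis(i, p - 1, t, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + p + 1] - t) / denom * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def nurbs_point(t, ctrl, weights, knots, p=2):
    """Evaluate a NURBS curve point: rational (weighted) combination of bases."""
    ctrl = np.asarray(ctrl, float)
    w = np.asarray(weights, float)
    N = np.array([bspline_basis(i, p, t, knots) for i in range(len(ctrl))])
    return (N * w) @ ctrl / (N * w).sum()

# Illustrative quadratic curve with three control points and unit weights
# (a clamped knot vector makes this the Bezier special case of NURBS):
pt = nurbs_point(0.5, [[0, 0], [1, 2], [2, 0]], [1, 1, 1], [0, 0, 0, 1, 1, 1])
```

Varying the control points and weights continuously deforms the shape, which is what makes this parameterization a convenient, low-dimensional design space for the inverse problem.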
Piezoelectric acoustic devices that are integrated with semiconductors can leverage the acoustoelectric effect, allowing functionalities such as gain and isolation to be achieved in the acoustic domain. This could lead to performance improvements and miniaturization of radio-frequency electronic systems. However, acoustoelectric amplifiers that offer a large acoustic gain with low power consumption and noise figure at microwave frequencies in continuous operation have not yet been developed. Here we report non-reciprocal acoustoelectric amplifiers that are based on a three-layer heterostructure consisting of an indium gallium arsenide (In0.53Ga0.47As) semiconducting film, a lithium niobate (LiNbO3) piezoelectric film, and a silicon substrate. The heterostructure can continuously generate 28.0 dB of acoustic gain (4.0 dB net radio-frequency gain) for 1 GHz phonons with an acoustic noise figure of 2.8 dB, while dissipating 40.5 mW of d.c. power. We also create a device with an acoustic gain of 37.0 dB (11.3 dB net gain) at 1 GHz with 19.6 mW of d.c. power dissipation and a non-reciprocal transmission of over 55 dB.
Awile, Omar; Knight, James C.; Nowotny, Thomas; Aimone, James B.; Diesmann, Markus; Schurmann, Felix
At the turn of the millennium the computational neuroscience community realized that neuroscience was in a software crisis: software development was no longer progressing as expected and reproducibility declined. The International Neuroinformatics Coordinating Facility (INCF) was inaugurated in 2007 as an initiative to improve this situation. The INCF has since pursued its mission to help the development of standards and best practices. In a community paper published that same year, Brette et al. tried to assess the state of the field and to establish a scientific approach to simulation technology, addressing foundational topics, such as which simulation schemes are best suited for the types of models we see in neuroscience. In 2015, a Frontiers Research Topic “Python in neuroscience” by Muller et al. triggered and documented a revolution in the neuroscience community, namely in the usage of the scripting language Python as a common language for interfacing with simulation codes and connecting between applications. The review by Einevoll et al. documented that simulation tools have since further matured and become reliable research instruments used by many scientific groups for their respective questions. Open source and community standard simulators today allow research groups to focus on their scientific questions and leave the details of the computational work to the community of simulator developers. A parallel development has occurred, which has been barely visible in neuroscientific circles beyond the community of simulator developers: supercomputers used for large and complex scientific calculations have increased their performance from ~10 TeraFLOPS (10^13 floating point operations per second) in the early 2000s to above 1 ExaFLOPS (10^18 floating point operations per second) in the year 2022. This represents a 100,000-fold increase in our computational capabilities, or almost 17 doublings of computational capability in 22 years.
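The growth figures quoted above can be checked with a line of arithmetic:

```python
import math

# ~10 TFLOPS (1e13 FLOPS) in the early 2000s to ~1 EFLOPS (1e18 FLOPS) in 2022
factor = 1e18 / 1e13                  # 100,000-fold increase
doublings = math.log2(factor)         # ~16.6, i.e. "almost 17 doublings"
years_per_doubling = 22 / doublings   # ~1.3 years per doubling
```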
Moore's law (the observation that it is economically viable to double the number of transistors in an integrated circuit every 18–24 months) explains a part of this; our ability and willingness to build and operate physically larger computers explains another part. It should be clear, however, that such a technological advancement requires software adaptations, and under the hood, simulators had to reinvent themselves and change substantially to embrace this technological opportunity. It is actually quite remarkable that (apart from the change in semantics for the parallelization) this has mostly happened without the users noticing. The current Research Topic was motivated by the wish to assemble an update on the state of neuroscientific software (mostly simulators) in 2022, to assess whether we can see more clearly which scientific questions can (or cannot) be asked due to our increased capability of simulation, and also to anticipate whether and for how long we can expect this increase of computational capabilities to continue.
Electronic control systems used for quantum computing have become increasingly complex as multiple qubit technologies employ larger numbers of qubits with higher-fidelity targets. Whereas the control systems for different technologies share some similarities, parameters such as pulse duration, throughput, real-time feedback, and latency requirements vary widely depending on the qubit type. In this article, we evaluate the performance of modern system-on-chip (SoC) architectures in meeting the control demands associated with performing quantum gates on trapped-ion qubits, particularly focusing on communication within the SoC. A principal focus of this article is the data transfer latency and throughput of several high-speed on-chip mechanisms on Xilinx multiprocessor SoCs, including those that utilize direct memory access (DMA). These mechanisms are measured and evaluated to determine an upper bound on the time required to reconfigure a gate parameter. Worst-case and average-case bandwidth requirements for a custom gate sequencer core are compared with the experimental results. The lowest-variability, highest-throughput data-transfer mechanism is DMA between the real-time processing unit (RPU) and the programmable logic, where bandwidths up to 19.2 GB/s are possible. For context, this enables the reconfiguration of qubit gates in less than 2 μs, comparable to the fastest gate time. Though this article focuses on trapped-ion control systems, the gate abstraction scheme and measured communication rates are applicable to a broad range of quantum computing technologies.
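To put the quoted 19.2 GB/s DMA bandwidth in context, here is a back-of-the-envelope sketch of the transfer-time component of gate reconfiguration. The payload size is an assumed illustrative value, not the sequencer core's actual parameter-record size:

```python
# Sustained DMA bandwidth between the RPU and programmable logic (from the abstract).
BANDWIDTH_BYTES_PER_S = 19.2e9

def transfer_time_us(payload_bytes, bandwidth=BANDWIDTH_BYTES_PER_S):
    """Time, in microseconds, to move a payload at a sustained bandwidth."""
    return payload_bytes / bandwidth * 1e6

# A hypothetical 16 KiB gate-parameter block moves in under a microsecond,
# consistent with reconfiguration in less than 2 us.
t_us = transfer_time_us(16 * 1024)
```

Real reconfiguration latency also includes software and interconnect overheads, which is why the measured upper bound matters more than raw bandwidth alone.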
High reliability (Hi-Rel) electronics for mission critical applications are handled with extreme care; stress testing upon full assembly can increase the likelihood of degrading these systems before their deployment. Moreover, parts based on novel materials, such as wide bandgap semiconductor devices, tend to have more complicated fabrication processing needs, which can ultimately result in larger part variability or potential defects. Therefore, an intelligent screening and inspection technique for electronic parts, in particular gallium nitride (GaN) power transistors, is presented in this paper. We present a machine-learning-based non-intrusive technique that can enhance part-selection decisions by categorizing part samples against the population's expected electrical characteristics. This technique provides relevant information about GaN HEMT device characteristics without having to operate all of these devices at the high-current region of the transfer and output characteristics, lowering the risk of damaging the parts prematurely. The proposed non-intrusive technique injects a small-signal pulse width modulation (PWM) of various frequencies, ranging from 10 kHz to 500 kHz, into the transistor terminals; the corresponding output signals are observed and used as the training dataset. Unsupervised clustering with K-means and feature dimensionality reduction through principal component analysis (PCA) have been used to correlate a population of GaN HEMT transistors to the expected mean of the devices' electrical characteristic performance.
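The clustering pipeline described (dimensionality reduction followed by K-means) can be sketched in a self-contained way with NumPy. This is a generic illustration on synthetic data, not the study's GaN measurement pipeline:

```python
import numpy as np

def pca(X, n_components=2):
    """Project rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, n_iter=50):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1).min(1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Synthetic stand-in for device response features: two well-separated
# populations, which PCA + K-means should recover as two clusters.
rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.1, (20, 3))
B = rng.normal(5.0, 0.1, (20, 3))
Z = pca(np.vstack([A, B]), 2)
labels, _ = kmeans(Z, 2)
```

In practice one would use a library implementation (e.g. scikit-learn) with multiple restarts; the sketch only illustrates the reduce-then-cluster structure of the screening approach.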
We analyze the regression accuracy of convolutional neural networks assembled from encoders, decoders, and skip connections and trained with multifidelity data. Besides requiring significantly fewer trainable parameters than equivalent fully connected networks, encoder, decoder, encoder-decoder, or decoder-encoder architectures can learn the mapping from inputs to outputs of arbitrary dimensionality. We demonstrate their accuracy when trained on a few high-fidelity and many low-fidelity data generated from models ranging from one-dimensional functions to Poisson equation solvers in two dimensions. We finally discuss a number of implementation choices that improve the reliability of the uncertainty estimates generated by Monte Carlo DropBlocks, and compare uncertainty estimates among low-, high-, and multifidelity approaches.
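For intuition on multifidelity training, here is a minimal additive-correction surrogate in NumPy: a cheap low-fidelity model is corrected with a discrepancy term learned from only a handful of high-fidelity samples. This is a generic sketch of the multifidelity idea under illustrative toy functions, not the convolutional architecture analyzed in the paper:

```python
import numpy as np

def f_hi(x):
    """'Expensive' high-fidelity truth (toy example)."""
    return np.sin(2 * np.pi * x) + 0.3 * x

def f_lo(x):
    """Cheap, biased low-fidelity approximation (toy example)."""
    return np.sin(2 * np.pi * x)

# Only a few high-fidelity samples are available; fit the discrepancy
# between fidelities with a simple linear model.
x_hi = np.linspace(0.0, 1.0, 5)
delta = np.polyfit(x_hi, f_hi(x_hi) - f_lo(x_hi), deg=1)

def f_mf(x):
    """Multifidelity prediction: low-fidelity model plus learned correction."""
    return f_lo(x) + np.polyval(delta, x)
```

The same structure (cheap surrogate plus a correction trained on scarce high-fidelity data) underlies multifidelity networks, with the linear fit replaced by a learned map.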
Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. While approximation algorithms for MAXCUT offer attractive theoretical guarantees and demonstrate compelling empirical performance, such approaches can shift the dominant computational cost to the stochastic sampling operations. Neuromorphic computing, which uses the organizing principles of the nervous system to inspire new parallel computing architectures, offers a possible solution. One ubiquitous feature of natural brains is stochasticity: the individual elements of biological neural networks possess an intrinsic randomness that serves as a resource enabling their unique computational capacities. By designing circuits and algorithms that make use of randomness similarly to natural brains, we hypothesize that the intrinsic randomness in microelectronic devices could be turned into a valuable component of a neuromorphic architecture enabling more efficient computations. Here, we present neuromorphic circuits that transform the stochastic behavior of a pool of random devices into useful correlations that drive stochastic solutions to MAXCUT. We show that these circuits perform favorably in comparison to software solvers and argue that this neuromorphic hardware implementation provides a path for scaling advantages. This work demonstrates the utility of combining neuromorphic principles with intrinsic randomness as a computational resource for new computational architectures.
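The stochastic-sampling kernel that dominates such approximate MAXCUT solvers can be sketched in a few lines: a uniformly random bipartition cuts each edge with probability 1/2, so the expected cut is |E|/2 (the classic 0.5-approximation guarantee). This toy software sampler only illustrates the operation the neuromorphic circuits accelerate:

```python
import random

def random_maxcut(edges, n_nodes, n_samples=200, seed=0):
    """Sample random bipartitions of the node set and keep the best cut found."""
    rng = random.Random(seed)
    best_cut, best_side = -1, None
    for _ in range(n_samples):
        side = [rng.randrange(2) for _ in range(n_nodes)]          # random bipartition
        cut = sum(side[u] != side[v] for u, v in edges)            # edges crossing it
        if cut > best_cut:
            best_cut, best_side = cut, side
    return best_cut, best_side

# A 4-cycle: the optimal cut severs all four edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best, side = random_maxcut(edges, 4)
```

The cost of such solvers is dominated by drawing and scoring random assignments, which is exactly where hardware with intrinsic randomness could help.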
Power-flow studies on the 30-MA, 100-ns Z facility at Sandia National Labs have shown that plasmas in the facility's magnetically insulated transmission lines can result in a loss of current to the load.1 During the current pulse, electrode heating causes neutral surface contaminants (water, hydrogen, hydrocarbons, etc.) to desorb, ionize, and form plasmas in the anode-cathode gap.2 Shrinking typical electrode thicknesses (∼1 cm) to thin foils (5-200 μm) produces observable amounts of plasma on smaller pulsed power drivers (<1 MA).3 We suspect that as electrode bulk thickness decreases relative to the skin depth (50-100 μm for a 100-500-ns pulse in aluminum), the thermal energy delivered to the neutral surface contaminants increases, causing the contaminants to desorb faster from the current-carrying surface.
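The quoted skin depths can be reproduced with a standard order-of-magnitude estimate for magnetic diffusion into a conductor during a pulse of duration t, delta ~ sqrt(rho*t/mu0). The exact prefactor varies by convention, and the room-temperature aluminum resistivity used here is an assumed textbook value:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
RHO_AL = 2.7e-8        # aluminum resistivity at room temperature, ohm*m (assumed)

def skin_depth_um(t_seconds, rho=RHO_AL):
    """Order-of-magnitude magnetic-diffusion (skin) depth, in micrometers."""
    return math.sqrt(rho * t_seconds / MU0) * 1e6

d_100ns = skin_depth_um(100e-9)   # ~46 um for a 100 ns pulse
d_500ns = skin_depth_um(500e-9)   # ~104 um for a 500 ns pulse
```

Both values land in the 50-100 μm range cited in the abstract, which is why 5-200 μm foils sit in the regime where bulk thickness is comparable to or below the skin depth.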
This paper introduces a new microprocessor-based system that is capable of detecting faults via the Traveling Wave (TW) generated from a fault event. The fault detection system comprises a commercially available Digital Signal Processing (DSP) board capable of accurately sampling signals at high speeds, performing the Discrete Wavelet Transform (DWT) decomposition to extract features from the TW, and a detection algorithm that makes use of the extracted features to determine the occurrence of a fault. Results show that this inexpensive fault detection system's performance is comparable to commercially available TW relays, as accurate sampling and fault detection are achieved within 150 microseconds. A detailed analysis of the execution times of each part of the process is provided.
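The feature-extraction stage can be illustrated with a single-level Haar DWT, the simplest wavelet decomposition: a traveling-wave front shows up as a spike in the detail-coefficient magnitude. The threshold below is illustrative, not the system's calibrated value:

```python
import numpy as np

def haar_dwt(signal):
    """Single-level Haar DWT: returns (approximation, detail) coefficients."""
    s = np.asarray(signal, float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)   # pairwise averages (low frequency)
    d = (s[0::2] - s[1::2]) / np.sqrt(2)   # pairwise differences (high frequency)
    return a, d

def detect_transient(signal, threshold):
    """Flag a traveling-wave-like transient when detail magnitude spikes."""
    _, d = haar_dwt(signal)
    return bool(np.max(np.abs(d)) > threshold)
```

Production TW relays use deeper, more selective wavelet decompositions, but the principle is the same: sharp fronts concentrate energy in the detail coefficients, while smooth load signals do not.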
Mega-ampere-class pulsed power machines drive intense currents into small volumes to study high-energy-density environments. Power loss during these events is a difficult and important problem to solve. For example, facilities such as Sandia National Laboratories’ Z machine experience meaningful power loss, which can be linked to nonlinear ohmic heating at high currents (i.e., 26 MA on Z) leading to thermal desorption of contaminants and subsequent shunt plasma formation. Characterizing and understanding this type of thermal desorption is key to the design optimizations necessary to minimize current loss, which will be even more important for next-generation pulsed power. This type of characterization requires the ability to identify and determine the concentration of analytes with nanosecond resolution, given that the pulse width of Z is on the order of 100 ns. This report summarizes progress on a small exploratory project focused on investigating options to meet this challenge using mass spectrometry. The main effort utilized an Energy and Velocity Analyzer for Distributions of Electric Rockets with the aim of determining how quickly transient data could be resolved. This probe combines an electrostatic analyzer with a Wien velocity filter (ExB) to obtain ion energy and velocity distributions. Primary results from this exploratory project indicate significant additional work is needed to demonstrate a nanosecond-timescale mass spectrometer for this application, and also highlight that alternative detection methods, such as laser-based diagnostics, should be considered to meet the need for ultra-fast detection.
Radiation Portal Monitors (RPMs) were deployed throughout the port and border infrastructure of the United States (U.S.) beginning in 2003 to monitor for the possible presence of uncontrolled radiological and nuclear materials. Since that time, the U.S. Government (USG) has learned much about the operational challenges faced in the field. Principal among the shortcomings has been the lack of flexibility afforded the USG when all Internet Protocol (IP) rights and interfaces of the system are owned by the Original Equipment Manufacturer (OEM).
Neural ordinary differential equations (NODEs) have recently regained popularity as large-depth limits of a large class of neural networks. In particular, residual neural networks (ResNets) are equivalent to an explicit Euler discretization of an underlying NODE, where the transition from one layer to the next is one time step of the discretization. The relationship between continuous and discrete neural networks has been of particular interest. Notably, analysis from the ordinary differential equation viewpoint can potentially lead to new insights for understanding the behavior of neural networks in general. In this work, we take inspiration from differential equations to define the concept of stiffness for a ResNet via the interpretation of a ResNet as the discretization of a NODE. We then examine the effects of stiffness on the ability of a ResNet to generalize, via computational studies on example problems coming from climate and chemistry models. We find that penalizing stiffness does have a unique regularizing effect, but we see no benefit to penalizing stiffness over L2 regularization (penalization of network parameter norms) in terms of predictive performance.
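A common quantitative proxy for the stiffness of dx/dt = f(x), and hence, under the NODE interpretation, of a ResNet's layer map, is the spread of the Jacobian's eigenvalue real parts. Note that this classical indicator is an illustrative assumption here; the paper's precise stiffness definition may differ:

```python
import numpy as np

def stiffness_ratio(jacobian):
    """Classical stiffness indicator for dx/dt = f(x): ratio of the largest to
    the smallest nonzero magnitude among the real parts of the Jacobian's
    eigenvalues. Large ratios mean widely separated time scales."""
    re = np.abs(np.linalg.eigvals(jacobian).real)
    re = re[re > 1e-12]            # ignore (numerically) zero modes
    return re.max() / re.min()

# Classic stiff linear test system: decay time scales of 1 and 1/1000.
J = np.array([[-1.0,    0.0],
              [ 0.0, -1000.0]])
```

An explicit Euler step (i.e., a ResNet layer) is only stable when the step size resolves the fastest mode, which is why stiffness is a natural quantity to penalize or monitor during training.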
The Big Hill SPR site has a rich data set consisting of multi-arm caliper (MAC) logs collected from the cavern wells. This data set provides insight into the ongoing casing deformation at the Big Hill site. This report summarizes the MAC surveys for each well and presents well longevity estimates where possible. Included in the report is an examination of the well twins for each cavern and a discussion of what may or may not be responsible for the different levels of deformation between some of the well twins. The report also takes a systematic view of the MAC data, presenting spatial patterns of casing deformation and deformation orientation in an effort to better understand the underlying causes. The conclusions present a hypothesis suggesting the small-scale variations in casing deformation are attributable to similar-scale variations in the character of the salt-caprock interface. These variations do not appear directly related to shear zones or faults.
This report documents the development of an arc flash hazard model to calculate the incident energy and zone of influence from high energy arcing faults (HEAFs) involving aluminum. The NRC has identified the potential for HEAFs involving aluminum to increase the damage zone beyond what is currently postulated in fire probabilistic risk assessment (PRA) methodologies. To estimate the hazard from HEAFs involving aluminum, an arc flash model was developed. Differences between the initial model and nuclear power plant (NPP) fire PRA scenarios were identified. Modification of the initial model, established from existing literature and test data, was used to minimize these differences. The developed model was evaluated against NRC datasets to understand the model predictions and relative uncertainties. Finally, a range of fire PRA zones of influence (ZOIs) was developed based on the developed model, target fragility estimates, and an updated HEAF PRA methodology. The results were developed to support an NRC LIC-504 evaluation in tandem with other modeling efforts. The report documents the effort and provides a reference for any future advancements in arc flash modeling.
On May 16-20, 2022, federal mission partners (e.g., DOE Consequence Management, CDC, FDA, FBI, DHS) as well as integrated state, local, tribal, and territorial governments took part in Cobalt Magnet 22 (CM22), a large-scale, week-long radiological incident exercise in Austin, Texas, that linked several important national assets (National Search Program, Radiological Assistance Program, and Consequence Management [CM] personnel) into a single response effort. The exercise had nine (9) overarching Objectives and an additional 162 associated Critical Tasks for all the participating organizations. In total, 13 National Core Capabilities spanning 5 Mission Areas were represented in the final exercise. This exercise made it possible to field a full range of capabilities together, examine the operational connections between major assets, discover any resource shortages associated with conducting multiple mission areas simultaneously or in close succession, and identify any challenges related to leadership. This report summarizes nearly 100 successes and observations provided by players and controllers supporting the LA Division, Fly Away Laboratory (FAL), and Gamma Spectroscopist operations. The observations were categorized to align with the FRMAC programmatic functional areas to consider for future improvements: Logistics, CBRN Responder, Laboratory Analysis, Sampling and Monitoring, Health and Safety, Gamma Spectroscopist Operations, Fly Away Laboratory, and the FRMAC Interdivision Interoperability Group (FIIG).
A natural clinoptilolite sample near the Nevada National Security Site was obtained to study adsorption and retardation effects on gas transport. Of interest is understanding the competition for adsorption sites that may reduce tracer gas adsorption relative to single-component measurements, which may be affected by the multi-scale pore structure of clinoptilolite. Clinoptilolite has three distinct domains of pore size distributions ranging from nanometers to micrometers: micropores with 0.4–0.7 nm diameters, measured on powders by CO2 adsorption at 273 K, representing the zeolite cages; mesopores with 4–200 nm diameters, observed using liquid nitrogen adsorption at 77 K; and macropores with 300–1000 nm diameters, measured by mercury injection on rock chips (~ 100 mesh), likely representing the microfractures. These pore size distributions are consistent with X-ray computed tomography (CT) and focused ion beam scanning electron microscope (FIB-SEM) images, which are used to construct the three-dimensional (3D) pore network to be used in future gas transport modeling. To quantify tracer gas adsorption in this multi-scale pore structure and multicomponent gas species environment, natural zeolite samples initially in equilibrium in air were exposed to a mixture of tracer gases. As the tracer gases diffuse and adsorb in the sample, the remaining tracer gases outside the sample fractionate. Using a quadrupole mass spectrometer to quantify this fractionation, the degree of adsorption of tracer gases in the multicomponent gas environment and multi-scale pore structure is assessed. The major finding is that Kr reaches equilibrium much faster than Xe in the presence of ambient air, which leads to more Kr uptake than Xe over limited exposure periods. When the clinoptilolite chips were exposed to humid air, the adsorption capability decreased significantly for both Xe and Kr at relative humidity (RH) as low as 3%. Both Xe and Kr reach equilibrium faster at higher RH.
The different, unexpected adsorption behaviors of Xe and Kr arise because their kinetic diameters are similar to the micropore diameters in clinoptilolite, which makes the micropores harder for Xe to access than for Kr.
We evaluate neural radiance fields (NeRFs) as a method for reconstructing 3D volumetric scenes from low Earth orbit satellite imagery. We leverage commercial satellite data to reconstruct a scene using existing software tools. In doing so, we identify difficulties in these mapping datasets for NeRF generation. We propose potential applications in geospatial intelligence for context and improved image interpretation.
There are several U.S. government-sponsored programs with significant experience engaging with foreign government and industry partners to support capacity-building in export controls. This work seeks to answer the question: How can the outreach experience of the U.S. government-sponsored export control capacity-building programs (ECCBP) inform best practices for engaging with advanced reactor vendors in the domain of international nuclear safeguards? To answer this question, we interviewed export control subject matter experts with experience working for the U.S. ECCBPs – the Bureau of Industry and Security (BIS), the Export Control and Related Border Security (EXBS) program, and the International Nonproliferation Export Control Program (INECP) – and developed a set of recommendations for industry engagement based on the collective experience of interviewees.
Earth’s environment can be considered especially harsh due to the cyclic exposure of heat, moisture, oxygen, and ultraviolet (UV) and visible light. Polymer-derived materials subjected to these conditions over time often exhibit symptoms of degradation and deterioration, ultimately leading to accelerated material failure. To combat this, chemical additives known as antioxidants are often used to delay the onset of weathering and oxidative degradation. Phenol-derived antioxidants have been used for decades due to their excellent performance and stability; unfortunately, concerns regarding their toxicity and leaching susceptibility have driven researchers to identify novel solutions to replace phenolic antioxidants. Herein, we report on the antioxidant efficacy of organoborons, which have been known to exhibit antioxidant activity in plants and animals. Four different organoboron molecules were formulated into epoxy materials at various concentrations and subsequently cured into thermoset composites. Their antioxidant performance was subsequently analyzed via thermal, colorimetric, and spectroscopic techniques. Generally, thermal degradation and oxidation studies proved inconclusive and ambiguous. However, aging studies performed under thermal and UV-intensive conditions showed moderate to extreme color changes, suggesting poor antioxidant performance of all organoboron additives. Infrared spectroscopic analysis of the UV-aged samples showed evidence of severe material oxidation, while the thermally aged samples showed only slight material oxidation. Solvent extraction experiments showed that even moderately high organoboron concentrations exhibit negligible leaching susceptibility, confirming previously reported results. This finding may be beneficial in applications where additive leaching could degrade sensitive materials, such as microelectronics.
Criticality Control Overpack (CCO) containers are being considered for the disposal of defense-related nuclear waste at the Waste Isolation Pilot Plant (WIPP). At WIPP, these containers would be placed in underground disposal rooms, which will naturally close and compact the containers closer to one another over several centuries. This report details simulations to predict the final container configuration as an input to nuclear criticality assessments. Each container was discretely modeled, including the plywood and stainless steel pipe inside the 55-gallon drum, in order to capture its complex mechanical behavior. Although these high-fidelity simulations were computationally intensive, several different material models were considered in an attempt to reasonably bound the horizontal and vertical compaction percentages. When exceptionally strong materials were used for the containers, the horizontal and vertical closure respectively stabilized at 43.9 % and 93.7 %. At the other extreme, when the containers completely degraded and the clay seams between the salt layers were glued, the horizontal and vertical closure reached respective final values of 48.6 % and 100 %.
Two novel LiCl·DMSO polymer structures were created by combining dry LiCl salt with dimethyl sulfoxide (DMSO), namely, catena-poly[[chloridolithium(I)]-μ-(dimethyl sulfoxide)-κ2O:O-[chloridolithium(I)]-di-μ-(dimethyl sulfoxide)-κ4O:O], [Li2Cl2(C2H6OS)3]n, and catena-poly[lithium(I)-μ-chlorido-μ-(dimethyl sulfoxide)-κ2O:O], [LiCl(C2H6OS)]n. The initial synthesized phase had very small block-shaped crystals (<0.08 mm) with monoclinic symmetry and a 2 LiCl: 3 DMSO ratio. As the solution evaporated, a second phase formed with a plate-shaped crystal morphology. After about 20 minutes, large (>0.20 mm) octahedron-shaped crystals formed. The plate crystals and the octahedron crystals share the same tetragonal structure with a 1 LiCl: 1 DMSO ratio. These structures are reported and compared to other known LiCl·solvent compounds.
This report documents the development of the Blue Canyon Dome (BCD) testbed, including test site selection, development, instrumentation, and logistical considerations. The BCD testbed was designed for small-scale explosive tests (~5 kg TNT equivalence maximum) for the purpose of comparing diagnostic signals from different types of explosives, the assumption being that different chemical explosives would generate different signatures on geophysical and other monitoring tools. The BCD testbed is located at the Energetic Materials Research and Testing Center near Socorro, New Mexico. Instrumentation includes an electrical resistivity tomography array, geophones, distributed acoustic sensing, gas samplers, distributed temperature sensing, pressure transducers, and high-speed cameras. This SAND report is a reference for BCD testbed development that can be cited in future publications.
This report identifies current best understanding of federal agencies that are responsible for the safe transportation and handling of nuclear materials during various phases of space launch activities and how they interact. It explores the following questions: (1) Which federal agencies have roles, responsibilities, and statutory authorities related to the launch, orbit, and reentry of nuclear materials and components? (2) What relevant current/recent activities are those federal agencies involved in?
Pursuant to Section 20.6.2.3104 of the New Mexico Administrative Code (NMAC), the U.S. Department of Energy/National Nuclear Security Administration/Sandia Field Office (DOE/NNSA/SFO) as owner and National Technology & Engineering Solutions of Sandia, LLC (NTESS) as operator are submitting this annual monitoring and discharge report for calendar year (CY) 2022 as required in Discharge Permit 530 (DP-530), issued by the New Mexico Environment Department (NMED) for the Pulsed Power Development Facilities Evaporation Lagoons located at Sandia National Laboratories/New Mexico (SNL/NM) Technical Area IV (TA-IV).
The V31 containment vessel was procured by the US Army Recovered Chemical Materiel Directorate (RCMD) as a third-generation EDS containment vessel. It is the fifth EDS vessel to be fabricated under Code Case 2564 of the 2019 ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the code case, is twenty-four (24) pounds TNT-equivalent for up to 1092 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico to qualify the vessel for field operations use. There were three design basis configurations for qualification testing. Qualification test (1) consisted of a simulated M55 rocket motor and warhead assembly with 24 lb of Composition C-4 (30 lb TNT equivalent). This test was considered the maximum load case, based on modeling and simulation methods performed by Sandia prior to the vessel design phase. Qualification test (2) consisted of a regular, right circular cylinder, unitary charge, located central to the vessel interior, of 19.2 lb of Composition C-4 (24 lb TNT equivalent). Qualification test (3) consisted of a 12-pack of regular, right circular cylinders of 2 lb each, distributed evenly inside the vessel (totaling 19.2 lb of C-4, or 24 lb TNT equivalent). All vessel acceptance criteria were met.
Direct air capture (DAC) of CO2 is a negative emission technology under development to limit the impacts of climate change. The dilute concentration of CO2 in the atmosphere (~400 ppm) requires new carbon capture materials with increased CO2 selectivity, a requirement not met by current carbon capture materials. Porous liquids (PLs) are an emerging candidate for carbon capture and consist of a combination of solvents and porous hosts that creates a liquid with permanent porosity. The fundamental mechanisms of carbon capture in a PL are relatively unknown. To uncover these mechanisms, PLs were synthesized consisting of three different zeolitic-imidazolate framework (ZIF-8, ZIF-67, or ZIF-69) porous hosts in a water/glycol/2-methylimidazole solvent. The most stable composition was based on ZIF-8 and exhibited carbon capture following exposure to CO2. Density functional theory identified a three-step carbon capture mechanism based on (i) reaction of OH- with ethylene glycol in the solution, followed by (ii) formation of 2-hydroxyethyl carbonate, which (iii) further reacts with OH- to form a carbonate species. This mechanism was validated with experimental nuclear magnetic resonance (NMR) spectroscopy to identify the dissolved carbonate phases and the decrease in pH during CO2 exposure. Deuterated samples of the ZIF-8 PLs were synthesized and analyzed via neutron diffraction at the Spallation Neutron Source at Oak Ridge National Laboratory. Results identified differences in diffraction for PLs pre- and post-CO2 exposure that will be combined with ab initio molecular dynamics data of the same PL composition to identify how the presence of solvent/porous-host interfaces results in carbon capture.
Albany is a parallel C++ finite element library for solving forward and inverse problems involving partial differential equations (PDEs). In this paper we introduce PyAlbany, a newly developed Python interface to the Albany library. PyAlbany can be used to drive Albany effectively, enabling fast and easy analysis and post-processing of applications based on PDEs that are pre-implemented in Albany. PyAlbany relies on the library PyBind11 to bind Python with C++ Albany code. Here we detail the implementation of PyAlbany and showcase its capabilities through a number of examples targeting a heat-diffusion problem. In particular, we consider the following: (1) the generation of samples for a Monte Carlo application, (2) a scalability study, (3) a study of the effect of parameters on the performance of a linear solver, and finally (4) a tool for performing eigenvalue decompositions of matrix-free operators for a Bayesian inference application.
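Example (4), eigenvalue decomposition of matrix-free operators, can be illustrated independently of Albany: power iteration needs only a function that applies the operator to a vector, never the assembled matrix. The 1D discrete Laplacian below stands in for a heat-diffusion operator; none of this uses PyAlbany's actual API:

```python
import numpy as np

def power_iteration(apply_op, dim, n_iter=500, seed=0):
    """Dominant eigenpair of a symmetric matrix-free operator, given only a
    function apply_op(v) that returns the operator applied to vector v."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = apply_op(v)
        v = w / np.linalg.norm(w)
    return v @ apply_op(v), v          # Rayleigh quotient and eigenvector

def laplacian_1d(v):
    """Matrix-free 1D discrete Laplacian with Dirichlet ends (stencil -1, 2, -1)."""
    w = 2.0 * v
    w[:-1] -= v[1:]
    w[1:] -= v[:-1]
    return w

lam, vec = power_iteration(laplacian_1d, 10)
```

In practice one would use a Krylov method (e.g. Lanczos) for more than the dominant mode, but the matrix-free interface, a callable in place of a matrix, is the same idea PyAlbany exposes for its Bayesian inference application.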
Several studies have shown that ducted fuel injection (DFI) reduces soot emissions for compression-ignition engines. Nevertheless, no comprehensive study has investigated how DFI performs over a load range in combination with low-net-carbon fuels. In this study, optical-engine experiments were performed with four different fuels—conventional diesel and three low-net-carbon fuels—at low and moderate load, to measure emissions levels and performance. The 1.7-liter single-cylinder optical engine was equipped with a high-speed camera to capture natural luminosity images of the combustion event. Conventional diesel and DFI combustion were investigated at four different dilution levels (to simulate exhaust-gas recirculation effects), from 14 to 21 mol% oxygen in the intake. At a given dilution level, with commercial diesel fuel, DFI reduced soot by 82% at medium load, and 75% at low load without increasing NOx. The results further show how DFI with dilution reduces soot and NOx without compromising engine performance or other emission types, especially when combined with low-net-carbon fuels. DFI with the oxygenated low-net-carbon blend HEA67 simultaneously reduced soot and NOx by as much as 93% and 82%, respectively, relative to conventional diesel combustion with commercial diesel fuel. These soot and NOx reductions occurred while lifecycle CO2 was reduced by at least 70% when using low-net-carbon fuels instead of conventional diesel. All emissions changes were compared with future emissions regulations for different vehicle sectors to investigate how DFI can be used to facilitate achievement of the regulations. Finally, the results show how the DFI cases fall below several future emissions regulation levels, reducing the need for aftertreatment systems and potentially lowering the cost of ownership.
Here, we introduce a mathematically rigorous formulation for a nonlocal interface problem with jumps and propose an asymptotically compatible finite element discretization for the weak form of the interface problem. After proving the well-posedness of the weak form, we demonstrate that solutions to the nonlocal interface problem converge to the corresponding local counterpart when the nonlocal data are appropriately prescribed. Several numerical tests in one and two dimensions show the applicability of our technique, its numerical convergence to exact nonlocal solutions, its convergence to the local limit when the horizons vanish, and its robustness with respect to the patch test.
Many applications require minimizing the sum of smooth and nonsmooth functions. For example, basis pursuit denoising problems in data science require minimizing a measure of data misfit plus an $\ell^1$-regularizer. Similar problems arise in the optimal control of partial differential equations (PDEs) when sparsity of the control is desired. Here, we develop a novel trust-region method to minimize the sum of a smooth nonconvex function and a nonsmooth convex function. Our method is unique in that it permits and systematically controls the use of inexact objective function and derivative evaluations. When using a quadratic Taylor model for the trust-region subproblem, our algorithm is an inexact, matrix-free proximal Newton-type method that permits indefinite Hessians. We prove global convergence of our method in Hilbert space and demonstrate its efficacy on three examples from data science and PDE-constrained optimization.
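The composite smooth-plus-nonsmooth objective described above can be illustrated with a minimal proximal-gradient sketch for the ℓ1-regularized least-squares (basis pursuit denoising) case. This is a simplified stand-in, not the paper's inexact proximal Newton trust-region method; the step size and problem data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (componentwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=50):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a gradient
    # step on the smooth part with the prox of the nonsmooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)  # gradient of the smooth misfit term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

The prox step is what distinguishes this family of methods from plain gradient descent: the ℓ1 term is handled exactly through its closed-form proximal map, which is also what produces sparse iterates.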
Here we present a new method for coupled linear elasticity problems whose finite element discretization may lead to spatially non-coincident discretized interfaces. Our approach combines the classical Dirichlet–Neumann coupling formulation with a new set of discretized interface conditions obtained through Taylor series expansions. We show that these conditions ensure linear consistency of the coupled finite element solution. We then formulate an iterative solution method for the coupled discrete system and apply the new coupling approach to two representative settings for which we also provide several numerical illustrations. The first setting is a mesh-tying problem in which both coupled structures have the same Lamé parameters whereas the second setting is an interface problem for which the Lamé parameters in the two coupled structures are different.
Satellite imagery can detect temporary cloud trails, or ship tracks, formed from aerosols emitted by large ships traversing our oceans, a phenomenon that global climate models cannot directly reproduce. Ship tracks are observable examples of marine cloud brightening, a potential solar climate intervention that shows promise in helping combat climate change. In this paper, we demonstrate a simulation-based approach to learning the behavior of ship tracks based upon a novel stochastic emulation mechanism. Our method uses wind fields to determine the movement of aerosol-cloud tracks and a stochastic partial differential equation (SPDE) to model their persistence behavior. This SPDE incorporates a drift term and a diffusion term, which describe, respectively, the movement of aerosol particles via wind and their diffusivity through the atmosphere. We first present our proposed approach with examples using simulated wind fields and ship paths. We then demonstrate our tool by applying the approximate Bayesian computation-sequential Monte Carlo (ABC-SMC) method for data assimilation.
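As a rough illustration of the drift-diffusion structure described above, the sketch below advances a 1D advection-diffusion equation with an explicit finite-difference step on a periodic grid. The grid spacing, wind speed, and diffusivity are hypothetical, and the emulator's stochastic forcing is omitted; this is only the deterministic skeleton of such an SPDE.

```python
import numpy as np

def advect_diffuse(c, u, D, dx, dt, steps):
    # Explicit update for dc/dt = -u*dc/dx + D*d2c/dx2 on a periodic
    # grid: the drift term moves the aerosol track with the wind, the
    # diffusion term spreads it through the atmosphere
    for _ in range(steps):
        dcdx = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)
        d2c = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (-u * dcdx + D * d2c)
    return c
```

With periodic boundaries and central differences, the total aerosol mass is conserved, which is a convenient sanity check on any such scheme.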
Interfacial segregation and chemical short-range ordering influence the behavior of grain boundaries in complex concentrated alloys. In this study, we use atomistic modeling of a NbMoTaW refractory complex concentrated alloy to provide insight into the interplay between these two phenomena. Hybrid Monte Carlo and molecular dynamics simulations are performed on columnar grain models to identify equilibrium grain boundary structures. Our results reveal extended near-boundary segregation zones that are much larger than traditional segregation regions, which also exhibit chemical patterning that bridges the interfacial and grain interior regions. Furthermore, structural transitions pertaining to an A2-to-B2 transformation are observed within these extended segregation zones. Both grain size and temperature are found to significantly alter the widths of these regions. An analysis of chemical short-range order indicates that not all pairwise elemental interactions are affected by the presence of a grain boundary equally, as only a subset of elemental clustering types are more likely to reside near certain boundaries. The results emphasize the increased chemical complexity that is associated with near-boundary segregation zones and demonstrate the unique nature of interfacial segregation in complex concentrated alloys.
Li-metal batteries (LMBs) employing conversion cathode materials (e.g., FeF3) are a promising way to prepare inexpensive, environmentally friendly batteries with high energy density. Pseudo-solid-state ionogel separators harness the energy density and safety advantages of solid-state LMBs, while alleviating key drawbacks (e.g., poor ionic conductivity and high interfacial resistance). In this work, a pseudo-solid-state conversion battery (Li-FeF3) is presented that achieves stable, high rate (1.0 mA cm–2) cycling at room temperature. The batteries described herein contain gel-infiltrated FeF3 cathodes prepared by exchanging the ionic liquid in a polymer ionogel with a localized high-concentration electrolyte (LHCE). The LHCE gel merges the benefits of a flexible separator (e.g., adaptation to conversion-related volume changes) with the excellent chemical stability and high ionic conductivity (~2 mS cm–1 at 25 °C) of an LHCE. The latter property is in contrast to previous solid-state iron fluoride batteries, where poor ionic conductivities necessitated elevated temperatures to realize practical power levels. Importantly, the stable, room-temperature Li-FeF3 cycling performance obtained with the LHCE gel at high current densities paves the way for exploring a range of architectures including flexible, three-dimensional, and custom shape batteries.
Automatic differentiation (AD) is a well-known technique for evaluating analytic derivatives of calculations implemented on a computer, with numerous software tools available for incorporating AD technology into complex applications. However, a growing challenge for AD is the efficient differentiation of parallel computations implemented on emerging manycore computing architectures such as multicore CPUs, GPUs, and accelerators as these devices become more pervasive. In this work, we explore forward mode, operator overloading-based differentiation of C++ codes on these architectures using the widely available Sacado AD software package. In particular, we leverage Kokkos, a C++ tool providing APIs for implementing parallel computations that is portable to a wide variety of emerging architectures. We describe the challenges that arise when differentiating code for these architectures using Kokkos, and two approaches for overcoming them that ensure optimal memory access patterns as well as expose additional dimensions of fine-grained parallelism in the derivative calculation. We describe the results of several computational experiments that demonstrate the performance of the approach on a few contemporary CPU and GPU architectures. We then conclude with applications of these techniques to the simulation of discretized systems of partial differential equations.
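The operator-overloading forward mode that Sacado provides for C++ can be sketched with a minimal dual-number class; this toy Python version supports only addition and multiplication and is meant only to show how the derivative propagates alongside the value, not to reflect Sacado's actual API.

```python
class Dual:
    # Minimal forward-mode AD via operator overloading: each value
    # carries its derivative ("dot") and arithmetic propagates both
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the input with dot = 1 and read off df/dx from the output
    return f(Dual(x, 1.0)).dot
```

Seeding the input's derivative component with 1.0 and reading the output's derivative component is exactly the forward-mode pattern; the manycore challenge discussed in the paper comes from laying out these extra derivative components efficiently in memory across parallel threads.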
Wind turbine applications that leverage nacelle-mounted Doppler lidar are hampered by several sources of uncertainty in the lidar measurement, affecting both bias and random errors. Two problems encountered especially for nacelle-mounted lidar are solid interference due to intersection of the line of sight with solid objects behind, within, or in front of the measurement volume and spectral noise due primarily to limited photon capture. These two uncertainties, especially that due to solid interference, can be reduced with high-fidelity retrieval techniques (i.e., including both quality assurance/quality control and subsequent parameter estimation). Our work compares three such techniques, including conventional thresholding, advanced filtering, and a novel application of supervised machine learning with ensemble neural networks, based on their ability to reduce uncertainty introduced by the two observed nonideal spectral features while keeping data availability high. The approach leverages data from a field experiment involving a continuous-wave (CW) SpinnerLidar from the Technical University of Denmark (DTU) that provided scans of a wide range of flows both unwaked and waked by a field turbine. Independent measurements from an adjacent meteorological tower within the sampling volume permit experimental validation of the instantaneous velocity uncertainty remaining after retrieval that stems from solid interference and strong spectral noise, a validation that has not been performed previously. All three methods perform similarly for non-interfered returns, but the advanced filtering and machine learning techniques perform better when solid interference is present, which allows them to produce overall standard deviations of error between 0.2 and 0.3 m s-1, or a 1%-22% improvement versus the conventional thresholding technique, over the rotor height for the unwaked cases.
Between the two improved techniques, the advanced filtering produces 3.5% higher overall data availability, while the machine learning offers a faster runtime (i.e., 1/41 s to evaluate) that is therefore more commensurate with the requirements of real-time turbine control. The retrieval techniques are described in terms of application to CW lidar, though they are also relevant to pulsed lidar. Previous work by the authors (Brown and Herges, 2020) explored a novel attempt to quantify uncertainty in the output of a high-fidelity lidar retrieval technique using simulated lidar returns; this article provides true uncertainty quantification versus independent measurement and does so for three techniques rather than one.
It is hoped that quantum computers will offer advantages over classical computers for combinatorial optimization. Here, we introduce a feedback-based strategy for quantum optimization, where the results of qubit measurements are used to constructively assign values to quantum circuit parameters. We show that this procedure results in an estimate of the combinatorial optimization problem solution that improves monotonically with the depth of the quantum circuit. Importantly, the measurement-based feedback enables approximate solutions to the combinatorial optimization problem without the need for any classical optimization effort, as would be required for the quantum approximate optimization algorithm. We demonstrate this feedback-based protocol on a superconducting quantum processor for the graph-partitioning problem MaxCut, and present a series of numerical analyses that further investigate the protocol's performance.
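For context, MaxCut asks for a bipartition of a graph's vertices that maximizes the number of edges crossing the cut. A brute-force classical reference, of the kind the quantum protocol's estimates could be compared against, is easy to state (illustrative only; its cost grows exponentially with the number of vertices):

```python
from itertools import product

def maxcut(edges, n):
    # Exhaustively try all 2^n vertex bipartitions (bits[i] is the
    # partition label of vertex i) and count edges crossing the cut
    best = 0
    for bits in product([0, 1], repeat=n):
        cut = sum(1 for i, j in edges if bits[i] != bits[j])
        best = max(best, cut)
    return best
```

A feedback-based or variational quantum approach produces approximate cuts whose quality can be benchmarked against this exact optimum on small instances.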
This report is the revised (Revision 9) Task F specification for DECOVALEX-2023. Task F is a comparison of the models and methods used in deep geologic repository performance assessment. The task proposes to develop a reference case for a mined repository in a fractured crystalline host rock (Task F1) and a reference case for a mined repository in a salt formation (Task F2). Teams may choose to participate in the comparison for either or both reference cases. For each reference case, a common set of conceptual models and parameters describing features, events, and processes that impact performance will be given, and teams will be responsible for determining how best to implement and couple the models. The comparison will be conducted in stages, beginning with a comparison of key outputs of individual process models, followed by a comparison of a single deterministic simulation of the full reference case, and moving on to uncertainty propagation and uncertainty and sensitivity analysis. This report provides background information, a summary of the proposed reference cases, and a staged plan for the analysis.
Parke, Tyler; Silva-Quis, Dhamelyz; Wang, George T.; Teplyakov, Andrew V.
As atomic layer deposition (ALD) emerges as a method to fabricate architectures with atomic precision, emphasis is placed on understanding surface reactions and nucleation mechanisms. ALD of titanium dioxide with TiCl4 and water has been used to investigate deposition processes in general, but the effect of surface termination on the initial TiO2 nucleation lacks needed mechanistic insights. Here, this work examines the adsorption of TiCl4 on Cl–, H–, and HO– terminated Si(100) and Si(111) surfaces to elucidate the general role of different surface structures and defect types in manipulating surface reactivity of growth and non-growth substrates. The surface sites and their role in the initial stages of deposition are examined by X-ray photoelectron spectroscopy (XPS) and atomic force microscopy (AFM). Density functional theory (DFT) computations of the local functionalized silicon surfaces suggest oxygen-containing defects are primary drivers of selectivity loss on these surfaces.
Clem, Paul; Nieves, Cesar A.; Yuan, Mengxue; Ogrinc, Andrew L.; Furman, Eugene; Kim, Seong H.; Lanagan, Michael T.
Ionic conduction in silicate glasses is mainly influenced by the nature, concentration, and mobility of the network-modifying (NWM) cations. The electrical conduction in soda-lime silicate (SLS) glass is dominated by the ionic migration of sodium moving from the anode to the cathode. The activation energy for this conduction process was calculated to be 0.82 eV, in good agreement with values previously reported. The conduction process associated with the leakage current and relaxation peak in thermally stimulated depolarization current (TSDC) measurements for high-purity fused silica (HPFS) is attributed to conduction between nonbridging oxygen hole centers (NBOHC). It is suggested that ≡Si-OH = ≡Si-O- + H0 under thermo-electric poling, promoting hole or proton injection from the anode and responsible for the 1.5 eV relaxation peak. No previous TSDC data have been found to corroborate this mechanism. The higher activation energy and lower current intensity for the coated HPFS might be attributed to a lower concentration of NBOHC after heat treatment (Si-OH + OH-Si = Si-O-Si + H2O). This could explain the TSDC signal around room temperature for the coated HPFS. Another possible explanation could be a redox reaction at the anode region dominating the current response.
The development of empirical data to support realistic, science-based input to safety regulations and transportation standards is a critical need for the hazardous material (HM) transportation industry. Current regulations and standards are based on the TNT equivalency model. However, real-world experience indicates that use of the TNT equivalency model to predict composite overwrapped pressure vessel (COPV) potential energy release is unrealistically conservative. The purpose of this report is to characterize and quantify rupture events involving damaged COPVs of the type used in HM transportation regulated by the Department of Transportation (DOT). This was accomplished using a series of five tests: two COPV tests with compressed natural gas (CNG), two with hydrogen, and one with nitrogen. Measured overpressures from these tests were compared to predicted overpressures from a TNT equivalence model and blast curves. Comparison between the measurements and predictions shows that the predictions are generally conservative, and that the extent of conservatism is dominated by predictions of the chemical contribution to overpressure from fuel within the COPVs.
This report provides a summary of measurement results used to compare the performance of the PHDS Fulcrum40h and Ortec Detective-X High Purity Germanium (HPGe) detector systems. Specifically, the measurement data collected was used to assess each detector system for gamma efficiency and resolution, gamma angular response and efficiency for an in-situ surface distribution, neutron efficiency, gamma pulse-pileup response, and gamma to neutron crosstalk.
Cemented annulus fractures are a major leakage path in a wellbore system, and their permeability plays an important role in the behavior of fluid flow through a leaky wellbore. The permeability of these fractures is affected by changing conditions including the external stresses acting on the fracture and the fluid pressure within the fracture. Laboratory gas flow experiments were conducted in a triaxial cell to evaluate the permeability of a wellbore cement fracture under a wide range of confining stress and pore pressure conditions. For the first time, an effective stress law that considers the simultaneous effect of confining stress and pore pressure was defined for the wellbore cement fracture permeability. The results showed that the effective stress coefficient (λ) for permeability increased linearly with the Terzaghi effective stress (σ − p), with an average of λ = 1 in the range of applied pressures. The relationship between the effective stress and fracture permeability was examined using two physics-based models widely used for rock fractures. The results from the experimental work were incorporated into numerical simulations to estimate the impact of effective stress on the interpreted hydraulic aperture and leakage behavior through a fractured annular cement. Accounting for effective stress-dependent permeability through the wellbore length significantly increased the leakage rate at the wellhead compared with the assumption of a constant cemented annulus permeability.
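The effective stress law above can be sketched in a few lines. The first function is the law itself with the study's finding λ ≈ 1 (the Terzaghi case) as a default; the second is an illustrative exponential stress-sensitivity model of the kind widely used for rock fractures, with purely hypothetical parameters rather than the paper's fitted values.

```python
import math

def effective_stress(confining, pore, lam=1.0):
    # sigma_eff = sigma_c - lam * p; the study found lam ~= 1
    # (Terzaghi effective stress) over the applied pressure range
    return confining - lam * pore

def fracture_permeability(k0, sigma_eff, sigma_ref):
    # Illustrative exponential decay of fracture permeability with
    # effective stress (k0 and sigma_ref are hypothetical parameters)
    return k0 * math.exp(-sigma_eff / sigma_ref)
```

The qualitative behavior matters for the leakage simulations: raising pore pressure lowers the effective stress, opening the fracture and increasing permeability, which is why a constant-permeability assumption underestimates leakage at the wellhead.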
National security applications require artificial neural networks (ANNs) that consume less power, are fast and dynamic online learners, are fault tolerant, and can learn from unlabeled and imbalanced data. We explore whether two fundamentally different, traditional learning algorithms from artificial intelligence and the biological brain can be merged. We tackle this problem from two directions. First, we start from a theoretical point of view and show that the spike time dependent plasticity (STDP) learning curve observed in biological networks can be derived using the mathematical framework of backpropagation through time. Second, we show that transmission delays, as observed in biological networks, improve the ability of spiking networks to perform classification when trained using a backpropagation of error (BP) method. These results provide evidence that STDP could be compatible with a BP learning rule. Combining these learning algorithms will likely lead to networks more capable of meeting our national security missions.
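The STDP learning curve referenced above is commonly modeled as a pair of exponentials in the pre/post spike-time difference: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. A sketch with illustrative amplitudes and time constants (not parameters from this work):

```python
import math

def stdp_weight_change(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    # Canonical exponential STDP window: dt = t_post - t_pre (ms).
    # Pre-before-post (dt > 0) strengthens the synapse; post-before-pre
    # weakens it, with influence decaying over the timescale tau.
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

It is this asymmetric, exponentially decaying window that the paper shows can be recovered from the mathematical framework of backpropagation through time.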
Kim, Anthony D.; Curwen, Christopher A.; Wu, Yu; Reno, John L.; Addamane, Sadhvikas J.; Williams, Benjamin S.
Terahertz (THz) external-cavity lasers based on quantum-cascade (QC) metasurfaces are emerging as widely-tunable, single-mode sources with the potential to cover the 1--6 THz range in discrete bands with milliwatt-level output power. By operating on an ultra-short cavity with a length on the order of the wavelength, the QC vertical-external-cavity surface-emitting-laser (VECSEL) architecture enables continuous, broadband tuning while producing high quality beam patterns and scalable power output. The methods and challenges for designing the metasurface at different frequencies are discussed. As the QC-VECSEL is scaled below 2 THz, the primary challenges are reduced gain from the QC active region, increased metasurface quality factor and its effect on tunable bandwidth, and larger power consumption due to a correspondingly scaled metasurface area. At frequencies above 4.5 THz, challenges arise from a reduced metasurface quality factor and the excess absorption that occurs from proximity to the Reststrahlen band. The results of four different devices — with center frequencies 1.8 THz, 2.8 THz, 3.5 THz, and 4.5 THz — are reported. Each device demonstrated at least 200 GHz of continuous single-mode tuning, with the largest being 650 GHz around 3.5 THz. The limitations of the tuning range are well modeled by a Fabry-Pérot cavity which accounts for the reflection phase of the metasurface and the effect of the metasurface quality factor on laser threshold. Lastly, the effect of different output couplers on device performance is studied, demonstrating a significant trade-off between the slope efficiency and tuning bandwidth.
Several studies suggest that metal ordering within metal-organic frameworks (MOFs) is important for understanding how MOFs behave in relevant applications; however, these siting trends can be difficult to determine experimentally. To garner insight into the energetic driving forces that may lead to nonrandom ordering within heterometallic MOFs, we employ density functional theory (DFT) calculations on several bimetallic metal-organic crystals composed of Nd and Yb metal atoms. We also investigate the metal siting trends for a newly synthesized MOF. Our DFT-based energy of mixing results suggest that Nd will likely occupy sites with greater access to electronegative atoms and that local homometallic domains within a mixed-metal Nd-Yb system are favored. We also explore the use of less computationally extensive methods such as classical force fields and cluster expansion models to understand their feasibility for large system sizes. This study highlights the impact of metal ordering on the energetic stability of heterometallic MOFs and crystal structures.
We have measured, analyzed, and simulated the ground state valence photoelectron spectrum, x-ray absorption (XA) spectrum, x-ray photoelectron (XP) spectrum as well as normal and resonant Auger-Meitner electron (AE) spectrum of oxazole at the carbon, oxygen, and nitrogen K-edge in order to understand its electronic structure. Experimental data are compared to theoretical calculations performed at the coupled cluster, restricted active space perturbation theory to second-order and time-dependent density functional levels of theory. We demonstrate (1) that both N and O K-edge XA spectra are sensitive to the amount of dynamical electron correlation included in the theoretical description and (2) that for a complete description of XP spectra, additional orbital correlation and orbital relaxation effects need to be considered. The normal AE spectra are dominated by a singlet excitation channel and well described by theory. The resonant AE spectra, however, are more complicated. While the participator decay channels, dominating at higher kinetic energies, are well described by coupled cluster theory, spectator channels can only be described satisfactorily using a method that combines restricted active space perturbation theory to second order for the bound part and a one-center approximation for the continuum.
A laser-strike detection system includes an imaging sensor mounted on a platform, and a computing device. The imaging sensor outputs image frames that are each representative of a portion of the platform at a different time, during which a laser may be striking the platform. The computing device receives the image frames, and computes a delay map that indicates time-of-arrival delays of the laser beam at points on the portion of the platform. The computing device converts the delay map to a path-length variation map by multiplying the delay map by the propagation speed of light. The computing device fits a plane to the path-length variation map constrained by a topological model of the platform. The computing device computes angular deflections in x- and y-directions based upon the fit, which angular deflections define a direction from the platform to an emitter of the laser beam.
In order to meet 2025 goals for enhanced peak power (100 kW), specific power (50 kW/L), and reduced cost ($3.3/kW) in a motor that can operate at ≥ 20,000 rpm, improved soft magnetic materials must be developed. Better performing soft magnetic materials will also enable rare earth free electric motors. In fact, replacement of permanent magnets with soft magnetic materials was highlighted in the Electrical and Electronics Technical Team (EETT) Roadmap as an R&D pathway for meeting 2025 targets. Eddy current losses in conventional soft magnetic materials, such as silicon steel, begin to significantly impact motor efficiency as rotational speed increases. Soft magnetic composites (SMCs), which combine magnetic particles with an insulating matrix to boost electrical resistivity (ρ) and decrease eddy current losses, even at higher operating frequencies (or rotational speeds), are an attractive solution. Today, SMCs are being fabricated with values of ρ ranging between 10⁻³ and 10⁻¹ μΩ·m, which is significantly higher than 3% silicon steel (~0.05 μΩ·m). The isotropic nature of SMCs is ideally suited for motors with 3D flux paths, such as axial flux motors. Additionally, the manufacturing cost of SMCs is low and they are highly amenable to advanced manufacturing and net-shaping into complex geometries, which further reduces manufacturing costs. There is still significant room for advancement in SMCs, and therefore additional improvements in electrical machine performance. For example, despite the inclusion of a non-magnetic insulating material, the electrical resistivities of SMCs are still far below that of soft ferrites (10 to 10⁸ μΩ·m).
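The link between resistivity and eddy-current loss can be made concrete with the classical thin-lamination estimate P = (π B f t)² / (6 ρ). This is a textbook approximation for thin sheets, offered here only to illustrate the scaling, not a model from the roadmap or from SMC design practice.

```python
import math

def eddy_loss_density(B_peak, f, t, rho):
    # Classical thin-sheet eddy-current loss per unit volume [W/m^3]:
    # P = (pi * B_peak * f * t)^2 / (6 * rho), with B_peak the peak
    # flux density [T], f the frequency [Hz], t the sheet (or particle)
    # thickness [m], and rho the electrical resistivity [ohm*m]
    return (math.pi * B_peak * f * t) ** 2 / (6.0 * rho)
```

The scaling explains the motivation above: loss grows with the square of frequency, so high rotational speeds punish low-resistivity materials, while doubling ρ halves the loss and shrinking the insulated particle size reduces it quadratically.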
More efficient power conversion devices are able to transmit greater electrical power across larger distances to satisfy growing global electrical needs. A critical requirement to achieve more efficient power conversion are the soft magnetic materials used as core materials in transformers, inductors, and motors. To that effect it is well known that the use of non-equilibrium microstructures, which are, for example, nanocrystalline or consist of single phase solid solutions, can yield high saturation magnetic polarization and high electrical resistivity necessary for more efficient soft magnetic materials. In this work, we synthesized CoFe – P soft magnetic alloys containing nanocrystalline, single phase solid solution microstructures and studied the effect of a secondary intermetallic phase on the saturation magnetic polarization and electrical resistivity of the consolidated alloy. Single phase solid solution CoFe – P alloys were prepared through mechanically alloying metal powders and phase decomposition was observed after subsequent consolidation via spark plasma sintering (SPS) at various temperatures. The secondary intermetallic phase was identified as the orthorhombic (CoxFe1−x)2P phase and the magnetic properties of the (CoxFe1−x)2P intermetallic phase were found to be detrimental to the soft magnetic properties of the targeted CoFe – P alloy.
The effect of crystallography on transgranular chloride-induced stress corrosion cracking (TGCISCC) of arc welded 304L austenitic stainless steel is studied on >300 grains along crack paths. Schmid and Taylor factor mismatches across grain boundaries (GBs) reveal that cracks propagate either from a hard to soft grain, which can be explained merely by mechanical arguments, or from a soft to hard grain. In the latter case, finite element analysis reveals that TGCISCC will arrest at GBs without sufficient mechanical stress, favorable crystallographic orientations, or crack tip corrosion. GB type does not play a significant role in determining TGCISCC cracking behavior or susceptibility. TGCISCC crack behaviors at GBs are discussed in the context of the competition between mechanical, crystallographic, and corrosion factors.
The method-of-moments implementation of the electric-field integral equation (EFIE) yields many code-verification challenges due to the various sources of numerical error and their possible interactions. Matters are further complicated by singular integrals, which arise from the presence of a Green's function. To address these singular integrals, an approach is presented wherein both the solution and the Green's function are manufactured. Because the arising equations are poorly conditioned, they are reformulated as a set of constraints for an optimization problem that selects the solution closest to the manufactured solution. In this paper, we demonstrate how, for such practically singular systems of equations, computing the truncation error by inserting the exact solution into the discretized equations cannot detect certain orders of coding errors. On the other hand, the discretization error from the optimal solution is a more sensitive metric that can detect orders less than those of the expected convergence rate.
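The convergence-rate reasoning above rests on comparing an observed order of accuracy against the expected one. The observed order is computed from discretization errors on successively refined meshes; a generic sketch of that calculation (standard verification practice, not code from the EFIE implementation):

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    # Observed order of accuracy p from errors on two meshes related by
    # the given refinement ratio: e ~ C*h^p implies
    # p = log(e_coarse / e_fine) / log(refinement)
    return math.log(e_coarse / e_fine) / math.log(refinement)
```

A coding error that degrades the scheme shows up as an observed order below the expected convergence rate, which is exactly the signal the discretization-error metric in the paper is sensitive to.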
Clays are known for their small particle sizes and complex layer stacking. We show here that the limited dimension of clay particles arises from the lack of long-range order in low-dimensional systems. Because of its weak interlayer interaction, a clay mineral can be treated as two separate low-dimensional systems: a 2D system for individual phyllosilicate layers and a quasi-1D system for layer stacking. The layer stacking or ordering in an interstratified clay can be described by a 1D Ising model while the limited extension of individual phyllosilicate layers can be related to a 2D Berezinskii–Kosterlitz–Thouless transition. This treatment allows for a systematic prediction of clay particle size distributions and layer stacking as controlled by the physical and chemical conditions for mineral growth and transformation. Clay minerals provide a useful model system for studying a transition from a 1D to 3D system in crystal growth and for a nanoscale structural manipulation of a general type of layered materials.
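The 1D Ising description of layer stacking admits an exact transfer-matrix solution. The sketch below computes the free energy per layer from the largest transfer-matrix eigenvalue, with J an interlayer coupling and h a field-like bias between the two layer types; the parameters and units are illustrative, not fitted to any clay system.

```python
import numpy as np

def ising_1d_free_energy(J, h, T, k_B=1.0):
    # Transfer-matrix solution of the 1D Ising model: the free energy
    # per site follows from the largest eigenvalue of the 2x2 transfer
    # matrix, f = -k_B*T * ln(lambda_max)
    beta = 1.0 / (k_B * T)
    Tm = np.array([[np.exp(beta * (J + h)), np.exp(-beta * J)],
                   [np.exp(-beta * J), np.exp(beta * (J - h))]])
    lam_max = np.linalg.eigvalsh(Tm).max()
    return -np.log(lam_max) / beta
```

At h = 0 this reduces to the textbook result f = -k_B T ln(2 cosh(J / k_B T)); the sign of J then distinguishes segregated from alternating (interstratified) layer stacking tendencies.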
This study explores the effect of surface re-finishing on the corrosion behavior of electron beam manufactured (EBM) Ti-G5 (Ti-6Al-4V), including the novel application of an electron beam surface remelting (EBSR) technique. Specifically, the relationship between material surface roughness and corrosion resistance was examined. Surface roughness was tested in the as-printed (AP), mechanically polished (MP), and EBSR states and compared to wrought (WR) counterparts. Electrochemical measurements were performed in chloride-containing media. It was observed that surface roughness, rather than differences in the underlying microstructure, played a more significant role in the general corrosion resistance in the environment explored here. While both MP and EBSR methods reduced surface roughness and enhanced corrosion resistance, mechanical polishing has many known limitations. The EBSR process explored herein demonstrated positive preliminary results. The surface roughness (Ra) of the EBM-AP material was reduced by 82%. Additionally, the measured corrosion current density in 0.6 M NaCl for the EBSR sample is 0.05 µA cm−2, roughly five times lower than the value obtained for the EBM-AP specimen (0.26 µA cm−2).
The growing demand for bandwidth makes photonic systems a leading candidate for future telecommunication and radar technologies. Integrated photonic systems offer ultra-wideband performance within a small footprint, which can naturally interface with fiber-optic networks for signal transmission. However, it remains challenging to realize narrowband (∼MHz) filters needed for high-performance communications systems using integrated photonics. In this paper, we demonstrate all-silicon microwave-photonic notch filters with 50× higher spectral resolution than previously realized in silicon photonics. This enhanced performance is achieved by utilizing optomechanical interactions to access long-lived phonons, greatly extending available coherence times in silicon. We use a multi-port Brillouin-based optomechanical system to demonstrate ultra-narrowband (2.7 MHz) notch filters with high rejection (57 dB) and frequency tunability over a wide spectral band (6 GHz) within a microwave-photonic link. We accomplish this with an all-silicon waveguide system, using CMOS-compatible fabrication techniques.
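A phenomenological single-pole (Lorentzian) notch model, using the quoted 2.7 MHz linewidth and 57 dB rejection as inputs, shows how depth and bandwidth follow from the resonance parameters; the functional form is an assumption for illustration, not the paper's device model.

```python
import math

def notch_db(f, f0=0.0, gamma=2.7, rejection_db=57.0):
    """Power transmission (dB) of a unity passband minus a Lorentzian
    resonance of FWHM `gamma` (MHz, centered at f0). `rejection_db`
    sets the residual transmission at line center. Parameters follow
    the values quoted in the abstract but are otherwise illustrative."""
    a = 1.0 - 10.0 ** (-rejection_db / 20.0)          # coupling depth
    h = 1.0 - a * (gamma / 2.0) / complex(gamma / 2.0, f - f0)
    return 20.0 * math.log10(abs(h))                  # |h|^2 in dB

depth = notch_db(0.0)          # rejection at line center (dB)
f, step = 0.0, 1e-4
while notch_db(f) < -3.0:      # walk out to the -3 dB edge of the dip
    f += step
width = 2.0 * f                # full -3 dB notch width (MHz)
```

For a deep notch of this form, the -3 dB width essentially equals the phonon linewidth, which is why long-lived phonons translate directly into MHz-scale spectral resolution.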
The extreme miniaturization of a cold-atom interferometer accelerometer requires the development of novel technologies and architectures for the interferometer subsystems. Here, we describe several component technologies and a laser system architecture to enable a path to such miniaturization. We developed a custom, compact titanium vacuum package containing a microfabricated grating chip for a tetrahedral grating magneto-optical trap (GMOT) using a single cooling beam. In addition, we designed a multi-channel photonic-integrated-circuit-compatible laser system implemented with a single seed laser and single sideband modulators in a time-multiplexed manner, reducing the number of optical channels connected to the sensor head. In a compact sensor head containing the vacuum package, sub-Doppler cooling in the GMOT produces 15 μK temperatures, and the GMOT can operate at a 20 Hz data rate. We validated the atomic coherence with Ramsey interferometry using microwave spectroscopy, then demonstrated a light-pulse atom interferometer in a gravimeter configuration for a 10 Hz measurement data rate and T = 0–4.5 ms interrogation time, resulting in Δg/g = 2.0 × 10−6. This work represents a significant step towards deployable cold-atom inertial sensors under large amplitude motional dynamics.
Spin–orbit effects, inherent to electrons confined in quantum dots at a silicon heterointerface, provide a means to control electron spin qubits without the added complexity of on-chip, nanofabricated micromagnets or nearby coplanar striplines. Here, we demonstrate a singlet–triplet qubit operating mode that can drive qubit evolution at frequencies in excess of 200 MHz. This approach offers a means to electrically turn on and off fast control, while providing high logic gate orthogonality and long qubit dephasing times. We utilize this operational mode for dynamical decoupling experiments to probe the charge noise power spectrum in a silicon metal-oxide-semiconductor double quantum dot. In addition, we assess qubit frequency drift over longer timescales to capture low-frequency noise. We present the charge noise power spectral density up to 3 MHz, which exhibits a 1/f^α dependence consistent with α ~ 0.7, over 9 orders of magnitude in noise frequency.
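A 1/f^α fit of this kind amounts to a least-squares slope estimate of log(PSD) vs. log(f); the sketch below runs it on a synthetic spectrum with α = 0.7 built in and mild scatter standing in for measured data (the data and noise model are illustrative assumptions, not the experiment's).

```python
import math
import random

def fit_alpha(freqs, psd):
    """Least-squares slope of log10(PSD) vs log10(f); for S(f) ~ 1/f^alpha
    the spectral exponent alpha is the negative of that slope."""
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(p) for p in psd]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return -num / den

# synthetic PSD spanning ~9 decades of frequency (log-spaced points),
# with mild multiplicative noise standing in for measurement scatter
rng = random.Random(1)
freqs = [10.0 ** (k / 10.0) for k in range(90)]
psd = [f ** -0.7 * (1.0 + 0.05 * (rng.random() - 0.5)) for f in freqs]
alpha = fit_alpha(freqs, psd)
```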
Although gold remains a preferred surface finish for components used in high-reliability electronics, rapid developments in this area have left a gap in the fundamental understanding of solder joint gold (Au) embrittlement. Furthermore, as electronic designs scale down in size, the effect of Au content on increasingly small solder interconnections is not well understood, so previous findings may have limited applicability. The current study addressed these gaps by investigating the interfacial microstructure that evolves in 63Sn-37Pb solder joints as a function of Au layer thickness and correlating those findings with the mechanical performance of the joints. Increasing the initial Au concentration decreased the mechanical strength of a joint, but only to a limited degree. Kirkendall voids were the primary contributor to low-strength joints, while brittle fracture within the intermetallic compound (IMC) layers was less of a factor. The Au embrittlement mechanism appears to be self-limiting, but only after mechanical integrity has already been degraded: sufficient void evolution prevents continued diffusion of the remaining Au.
Heavy metals released from kerogen into produced water during oil/gas extraction have caused major environmental concerns. To curtail water usage and produced-water volumes and to use the same process for carbon sequestration, supercritical CO2 (scCO2) has been suggested as a fracking fluid or an oil/gas recovery agent. It has been shown previously that injection of scCO2 into a reservoir may cause several chemical and physical changes to reservoir properties, including pore surface wettability, gas sorption capacity, and transport properties. Using molecular dynamics simulations, we demonstrate here that injection of scCO2 might lead to desorption of physically adsorbed metals from kerogen structures. On one hand, this process may degrade the quality of produced water. On the other, it may enhance metal recovery if the process is used for in-situ extraction of critical metals from shale or other organic carbon-rich formations such as coal.
Migration of seismic events to greater depths along basement faults over time has been observed at wastewater injection sites and can be correlated, both spatially and temporally, with the propagation or retardation of pressure fronts and the corresponding poroelastic response to a given operational history. The seismicity rate model has been suggested as a physical indicator of the potential for earthquake nucleation along faults, quantifying the poroelastic response to multiple well operations. Our field-scale model indicates that the migrating patterns of the 2015–2018 seismicity observed near Venus, TX are likely attributable to the spatio-temporal evolution of the Coulomb stressing rate as constrained by fault permeability. Even after injection volumes were reduced beginning in 2015, pore pressure continues to diffuse, and the steady transfer of elastic energy to the deep fault zone consistently increases the stressing rate, which can induce more frequent earthquakes at large distances. Sensitivity tests varying fault permeability show that (1) slow diffusion along a low-permeability fault confines earthquake nucleation near the injection interval, whereas (2) rapid relaxation of pressure buildup within a high-permeability fault, following reduced injection volumes, may promptly mitigate the seismic potential.
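The delayed deep-fault loading described above can be illustrated with a minimal 1D pressure-diffusion sketch (an assumed toy model, not the field-scale simulator; the diffusivity, grid, and boundary pressures are illustrative): pressure at a distal node keeps rising for some time after the injection-end pressure is cut back.

```python
def fault_pressure_history(D=1.0, dx=100.0, dt=2000.0, n=150,
                           steps_on=2000, steps_off=2000, distal=50):
    """Explicit finite-difference solution of dp/dt = D d2p/dx2 along a
    1D fault (D in m^2/s, dx in m, dt in s). Pressure is held at p = 1
    at the injection end while the well is active, then dropped to 0.2
    to mimic reduced injection volumes; returns the normalized pressure
    at a node `distal` grid points (here 5 km) from the well over time."""
    r = D * dt / dx ** 2          # must be <= 0.5 for stability (here 0.2)
    p = [0.0] * n
    history = []
    for step in range(steps_on + steps_off):
        p[0] = 1.0 if step < steps_on else 0.2
        new = p[:]
        for i in range(1, n - 1):
            new[i] = p[i] + r * (p[i + 1] - 2.0 * p[i] + p[i - 1])
        p = new
        history.append(p[distal])
    return history

h = fault_pressure_history()
# h[1999] is the distal pressure at the moment injection is cut back;
# h[-1], a comparable time later, is higher: the pressure front keeps
# loading the deep fault despite the reduced injection.
```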
Advances in machine learning (ML) have enabled interatomic potentials that promise the accuracy of first-principles methods at the low cost and parallel efficiency of empirical potentials. However, ML-based potentials struggle to achieve transferability, i.e., to provide consistent accuracy across configurations that differ from those used during training. To realize the promise of ML-based potentials, systematic and scalable approaches for generating diverse training sets are needed. This work creates a diverse training set for tungsten in an automated manner using an entropy optimization approach. Multiple polynomial and neural network potentials were then trained on the entropy-optimized dataset, and a corresponding set of potentials was trained on an expert-curated tungsten dataset for comparison. The models trained on the entropy-optimized data exhibited superior transferability, whereas the models trained on the expert-curated set exhibited a significant decrease in performance when evaluated on out-of-sample configurations.
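A simple diversity-driven selection proxy, farthest-point (maximin) sampling in descriptor space, hints at how entropy-style training-set optimization spreads samples over configuration space; this is a stand-in illustration under that assumption, not the entropy-optimization method used in the work.

```python
import math
import random

def farthest_point_selection(descriptors, k, seed=0):
    """Greedy maximin selection: repeatedly add the candidate whose
    minimum distance to the already-selected set is largest. A simple
    diversity proxy, standing in for entropy-driven dataset design."""
    rng = random.Random(seed)
    selected = [rng.randrange(len(descriptors))]
    while len(selected) < k:
        best_i, best_d = -1, -1.0
        for i, d in enumerate(descriptors):
            if i in selected:
                continue
            dmin = min(math.dist(d, descriptors[j]) for j in selected)
            if dmin > best_d:
                best_i, best_d = i, dmin
        selected.append(best_i)
    return selected

# toy descriptor pool: two tight clusters plus two isolated outliers;
# a diverse selection should cover both clusters and the outliers
rng = random.Random(2)
pool = ([(rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)) for _ in range(50)]
        + [(rng.gauss(5.0, 0.1), rng.gauss(5.0, 0.1)) for _ in range(50)]
        + [(10.0, 0.0), (0.0, 10.0)])
picked = farthest_point_selection(pool, 6)
```

Random sampling would almost always miss the two outliers; maximin selection reaches them immediately, which is the behavior that improves out-of-sample coverage.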
The interaction of an intense laser with a solid foil target can drive ∼TV/m electric fields, accelerating ions to MeV energies. In this study, we experimentally observe that structured targets can dramatically enhance proton acceleration in the target normal sheath acceleration regime. At the Texas Petawatt Laser facility, we compared proton acceleration from a 1 μm flat Ag foil to that from a microtube structure 3D printed on the front side of the same foil type. A pulse length (140–450 fs) and intensity ((4–10) × 10²⁰ W/cm²) study found an optimum laser configuration (140 fs, 4 × 10²⁰ W/cm²) in which microtube targets increase the proton cutoff energy by 50% and the yield of highly energetic protons (>10 MeV) by a factor of 8. When the laser intensity reaches 10²¹ W/cm², the prepulse shutters the microtubes with an overcritical plasma, degrading their performance. 2D particle-in-cell simulations are performed, with and without the preplasma profile imported, to better understand the coupling of laser energy to the microtube targets. The simulations are in qualitative agreement with the experimental results and show that the prepulse must be accounted for when the laser intensity is sufficiently high.
Shuttling ions at high speed and with low motional excitation is essential for realizing fast and high-fidelity algorithms in many trapped-ion-based quantum computing architectures. Achieving such performance is challenging due to the sensitivity of an ion to electric fields and the unknown and imperfect environmental and control variables that create them. Here we implement a closed-loop optimization of the voltage waveforms that control the trajectory and axial frequency of an ion during transport in order to minimize the final motional excitation. The resulting waveforms realize fast round-trip transport of a trapped ion across multiple electrodes at speeds of 0.5 electrodes per microsecond (35 m·s−1 for a one-way transport of 210 μm in 6 μs) with a maximum of 0.36 ± 0.08 mean quanta gain. This sub-quanta gain is independent of the phase of the secular motion at the distal location, obviating the need for an electric field impulse or time delay to eliminate the coherent motion.
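A classical toy model shows why waveform shape controls residual excitation (this sketch is not the closed-loop optimizer in the paper; the trap frequency, transport distance, and duration are illustrative, in arbitrary units): a trajectory with smooth start and stop leaves far less motional energy than a linear ramp of the same duration.

```python
import math

def residual_energy(traj, T, omega=2.0 * math.pi, dt=1e-3):
    """Velocity-Verlet integration of a unit-mass ion in a moving
    harmonic trap, x'' = -omega^2 (x - traj(t)); returns the motional
    energy left in the trap frame at the end of transport."""
    x, v = traj(0.0), 0.0
    a = -omega ** 2 * (x - traj(0.0))
    n = int(round(T / dt))
    for k in range(n):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -omega ** 2 * (x - traj((k + 1) * dt))
        v += 0.5 * (a + a_new) * dt
        a = a_new
    xi = x - traj(T)  # displacement from the final trap center
    return 0.5 * v ** 2 + 0.5 * omega ** 2 * xi ** 2

T, d = 6.5, 1.0                                   # duration, distance (arb. units)
linear = lambda t: d * t / T                      # abrupt start and stop
smooth = lambda t: d * (t / T - math.sin(2.0 * math.pi * t / T) / (2.0 * math.pi))
E_linear = residual_energy(linear, T)
E_smooth = residual_energy(smooth, T)
```

The excitation is set by the spectral content of the trap acceleration at the secular frequency, which is why optimizing the waveform, rather than applying a corrective kick afterwards, can suppress the coherent motion at its source.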
Zandanel, Amber; Sauer, Kirsten B.; Rock, Marlena; Caporuscio, Florie A.; Telfeyan, Katherine; Matteo, Edward N.
Direct disposal of dual-purpose canisters (DPCs) has been proposed to streamline the disposal of spent nuclear fuel. However, in some scenarios direct disposal of DPCs may result in temperatures in excess of the specified upper limits for some engineered barrier system (EBS) materials, which may cause alteration within the EBS depending on local conditions such as host rock composition, the chemistry of the saturating groundwaters, and interactions between the barrier materials themselves. Here we report the results of hydrothermal experiments reacting EBS materials—bentonite buffer and steel—with an analogue crystalline host rock and groundwater at 250 °C. The experimental series explored the effect of reaction time on the final products and the effects of the mineral and fluid reactants on different steel types. Post-mortem X-ray diffraction, electron microprobe, and scanning electron microscopy analyses showed characteristic alteration of both bentonite and steel, including the formation of secondary zeolite and calcium silicate hydrate minerals within the bentonite matrix and of iron-bearing clays and metal oxides at the steel surfaces. Swelling clays in the bentonite matrix were not quantitatively altered to non-swelling clay species under the hydrothermal conditions. The combined results of the solution chemistry over time and the post-mortem mineralogy suggest that EBS alteration is more sensitive to initial groundwater chemistry than to the presence of host rock: the limited potassium concentration in solution inhibits conversion of the smectite minerals in the bentonite matrix to non-swelling clay species.