Simulation of the interaction of light with matter, including at the few-photon level, is important for understanding the optical and optoelectronic properties of materials and for modeling next-generation nonlinear spectroscopies that use entangled light. At the few-photon level the quantum properties of the electromagnetic field must be accounted for with a quantized treatment of the field, and such simulations quickly become intractable, especially if the matter subsystem must be modeled with a large number of degrees of freedom, as can be required to accurately capture many-body effects and quantum noise sources. Motivated by this, we develop a quantum simulation framework for such light-matter interactions on platforms with controllable bosonic degrees of freedom, such as vibrational modes in the trapped-ion platform. The key innovation in our work is a scheme for simulating interactions with a continuum field using only a few discrete bosonic modes, which is enabled by a Green's function (response function) formalism. We develop the simulation approach, sketch how the simulation can be performed using trapped ions, and then illustrate the method with numerical examples. Our work expands the reach of quantum simulation to important light-matter interaction models and illustrates the advantages of extracting dynamical quantities such as response functions from quantum simulations.
We propose a set of benchmark tests for current-voltage (IV) curve fitting algorithms. Benchmark tests enable transparent and repeatable comparisons among algorithms and allow algorithm improvement to be measured over time. The absence of such tests contributes to the proliferation of fitting methods and inhibits consensus on best practices. The benchmarks include simulated curves with known parameter solutions, with and without simulated measurement error. We implement the benchmark tests on an automated scoring platform and invite algorithm submissions in an open competition for accurate and performant algorithms.
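A minimal sketch, assuming a single-diode cell model, of how a benchmark IV curve with a known parameter solution could be synthesized, with and without simulated measurement error; the parameter values and noise level below are illustrative assumptions, not values from the benchmark suite.

```python
# Synthesize a benchmark IV curve from a single-diode model with known parameters,
# then add simulated measurement error. Parameters and noise level are assumptions.
import numpy as np
from scipy.optimize import brentq

def single_diode_current(v, iph, i0, rs, rsh, n, vt=0.02569):
    """Solve the implicit single-diode equation for the current at terminal voltage v."""
    def residual(i):
        return iph - i0 * (np.exp((v + i * rs) / (n * vt)) - 1.0) - (v + i * rs) / rsh - i
    return brentq(residual, -1.0, 2.0 * iph)

# Assumed "true" parameters that a fitting algorithm should recover.
params = dict(iph=6.0, i0=5e-10, rs=0.02, rsh=300.0, n=1.2)

voltages = np.linspace(0.0, 0.70, 100)
i_clean = np.array([single_diode_current(v, **params) for v in voltages])

rng = np.random.default_rng(0)
i_noisy = i_clean + rng.normal(scale=0.01, size=i_clean.size)  # simulated measurement error
```

A fitting algorithm under test would be scored on how well it recovers the assumed parameters from either the clean or the noisy curve.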
A comprehensive study of the mechanical response of a 316 stainless steel is presented. The split-Hopkinson bar technique was used to evaluate the mechanical behavior at dynamic strain rates of 500 s⁻¹, 1500 s⁻¹, and 3000 s⁻¹ and temperatures of 22 °C and 300 °C under tension and compression loading, while the Drop-Hopkinson bar was used to characterize the tension behavior at an intermediate strain rate of 200 s⁻¹. The experimental results show that the tension and compression flow stress are reasonably symmetric, exhibit positive strain rate sensitivity, and are inversely dependent on temperature. The true failure strain was determined by measuring the minimum diameter of the post-test tension specimen. The 316 stainless steel exhibited a ductile response, and the true failure strain increased with increasing temperature and decreased with increasing strain rate.
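A minimal sketch of the standard reduction-of-area relation typically used to convert a post-test minimum diameter into a true failure strain, assuming volume conservation, i.e. eps_f = ln(A0/Af) = 2 ln(d0/df); the diameters below are illustrative, not measurements from this study.

```python
# True failure strain from post-test minimum (necked) diameter; diameters are assumed values.
import math

d0 = 5.00   # initial gauge-section diameter, mm (assumed)
df = 3.10   # post-test minimum diameter, mm (assumed)

true_failure_strain = 2.0 * math.log(d0 / df)   # eps_f = ln(A0/Af) = 2 ln(d0/df)
print(f"true failure strain = {true_failure_strain:.3f}")
```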
A wind tunnel test from AEDC Tunnel 9 of a hypersonic turbulent boundary layer is analyzed using several fidelities of numerical simulation including Wall-Modeled Large Eddy Simulation (WMLES), Large Eddy Simulation (LES), and Direct Numerical Simulation (DNS). The DNS was forced to transition to turbulence using a broad spectrum of planar, slow acoustic waves based on the freestream spectrum measured in the tunnel. Results show that the flow transitions through a reasonably natural process, developing into turbulent flow as several second-mode wave packets advect downstream and eventually break down into turbulence at modest friction Reynolds numbers. The surface shear stress and heat flux agree well with a transitional RANS simulation. Comparisons of DNS data to experimental data show reasonable agreement with regard to mean surface quantities as well as amplitudes of boundary layer disturbances. The DNS does show early transition relative to the experimental data. Several interesting aspects of the DNS and other numerical simulations are discussed. The DNS data are also analyzed through several common methods such as cross-correlations and coherence of the fluctuating surface pressure.
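A minimal sketch of the surface-pressure post-processing named above: cross-correlation between two probe signals to estimate a convective time delay, and magnitude-squared coherence versus frequency. The synthetic signals, sampling rate, and probe delay are placeholders for DNS probe data.

```python
# Cross-correlation and coherence of two fluctuating surface-pressure signals (placeholder data).
import numpy as np
from scipy.signal import coherence, correlate

fs = 2.0e6                              # assumed sampling frequency, Hz
t = np.arange(0, 0.01, 1 / fs)
rng = np.random.default_rng(1)

# Two probes seeing a convecting disturbance: same signal with a time delay plus noise.
base = rng.normal(size=t.size)
delay = 25                              # samples (assumed convective delay)
p1 = base + 0.3 * rng.normal(size=t.size)
p2 = np.roll(base, delay) + 0.3 * rng.normal(size=t.size)

# Cross-correlation: the lag of the peak estimates the convective time delay between probes.
p1c, p2c = p1 - p1.mean(), p2 - p2.mean()
xcorr = correlate(p2c, p1c, mode="full")
lags = np.arange(-t.size + 1, t.size)
lag_at_peak = lags[np.argmax(xcorr)] / fs   # seconds

# Magnitude-squared coherence as a function of frequency.
f, gamma2 = coherence(p1, p2, fs=fs, nperseg=1024)
```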
A jet is formed from the venting gases of lithium-ion batteries during thermal runaway. Heat fluxes to surrounding surfaces from vented gases are calculated with simulations of an impinging jet in a narrow gap, and heat transfer correlations for the impinging jet are used as a point of reference. Three cases of different gap sizes and jet velocities are investigated and the associated safety hazards are assessed. Local and global safety hazard issues are addressed based on average heat flux, average temperature, and average temperature rise in a cell. The results show that about 40% to 70% of the venting gases' energy can leave the module gap, where it can be transferred to other modules or cause combustion at the end of the gap if suitable conditions are satisfied. This work shows that multiple venting events are needed to raise the temperatures of the other modules' cells enough to drive them into thermal runaway. This work is a preliminary assessment for future analysis that will consider heat transfer to the adjacent modules from multiple venting events.
Two-dimensional (2D) layered oxides have recently attracted wide attention owing to the strong coupling among charges, spins, lattice, and strain, which allows great flexibility and opportunities in structure design as well as multifunctionality exploration. In parallel, plasmonic hybrid nanostructures exhibit exotic localized surface plasmon resonance (LSPR), providing a broad range of applications in nanophotonic devices and sensors. A hybrid material platform combining the unique multifunctional 2D layered oxides and plasmonic nanostructures brings optical tuning to a new level. In this work, a novel self-assembled Bi2MoO6 (BMO) 2D layered oxide incorporated with plasmonic Au nanoinclusions has been demonstrated via a one-step pulsed laser deposition (PLD) technique. Comprehensive microstructural characterizations, including scanning transmission electron microscopy (STEM), differential phase contrast (DPC) imaging, and STEM tomography, have demonstrated the high epitaxial quality and particle-in-matrix morphology of the BMO-Au nanocomposite film. DPC-STEM imaging clarifies the magnetic domain structures of the BMO matrix. Three different BMO structures, including a layered supercell (LSC) and superlattices, have been revealed, which is attributed to the variable strain states throughout the BMO-Au film. Owing to the combination of plasmonic Au and the layered structure of BMO, the nanocomposite film exhibits a typical LSPR in the visible wavelength region and strong anisotropy in terms of its optical and ferromagnetic properties. This study opens a new avenue for developing novel 2D layered complex oxides incorporated with plasmonic metal or semiconductor phases, showing great potential for applications in multifunctional nanoelectronic devices.
A wall-modeled large-eddy simulation of a Mach 14 boundary layer flow over a flat plate was carried out for the conditions of the Arnold Engineering Development Complex Hypervelocity Tunnel 9. Adequate agreement of the mean velocity and temperature, as well as Reynolds stress profiles, with a reference direct numerical simulation is obtained at much reduced grid resolution. The normalized root-mean-square optical path difference obtained from the present wall-modeled large-eddy simulations and the reference direct numerical simulation are in good agreement with each other but below a prediction obtained from a semi-analytical relationship by Notre Dame University. This motivates an evaluation of the underlying assumptions of the Notre Dame model at high Mach number. For the analysis, recourse is taken to previously published wall-modeled large-eddy simulations of a Mach 8 turbulent boundary layer. The analysis of the underlying assumptions focuses on the root-mean-square fluctuations of the thermodynamic quantities, on the strong Reynolds analogy, two-point correlations, and the linking equation. It is found that with increasing Mach number, the pressure fluctuations increase and the strong Reynolds analogy over-predicts the temperature fluctuations. In addition, the peak of the correlation length shifts towards the boundary layer edge.
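A minimal sketch, on placeholder data, of how a root-mean-square optical path difference can be formed from density fluctuations along wall-normal rays via the Gladstone-Dale relation; the density field, grid, aperture size, and the choice of integration direction are assumptions for illustration, not the paper's exact procedure.

```python
# OPD(x) = K_GD * integral of rho'(x, y) along the optical ray, followed by piston removal and rms.
import numpy as np

K_GD = 2.27e-4                  # Gladstone-Dale constant for air, m^3/kg (approximate)
ny = 200
y = np.linspace(0.0, 0.05, ny)  # wall-normal coordinate, m (assumed)
dy = y[1] - y[0]
rng = np.random.default_rng(2)

# Placeholder density-fluctuation field rho'(x, y) at 64 streamwise stations across the aperture.
rho_prime = 1e-3 * rng.normal(size=(64, ny))

# Integrate density fluctuations along each ray (wall-normal direction here).
opd = K_GD * np.sum(rho_prime * dy, axis=1)

# Remove the instantaneous spatial mean over the aperture (piston), then take the rms.
opd_rms = np.sqrt(np.mean((opd - opd.mean()) ** 2))
```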
TFLN/silicon photonic modulators featuring active silicon photonic components are reported with a VπL of 3.6 V·cm. This hybrid architecture utilizes the bottom of the buried oxide as the bonding surface, which features minimal topology.
Experiments were conducted in a wave tank on a model of a bottom-raised oscillating surge wave energy converter (OSWEC) in regular waves. The OSWEC model shape was a thin rectangular flap, which was allowed to pitch in response to incident waves about a hinge located at the intersection of the flap and the top of the supporting foundation. Torsion springs were added to the hinge in order to position the pitch natural frequency at the center of the wave frequency range of the wave maker. The flap motion as well as the loads at the base of the foundation were measured. The OSWEC was modeled analytically using elliptic functions in order to obtain closed-form expressions for the added mass and radiation damping coefficients, along with the excitation force and torque. These formulations were derived and reported in a previous publication by the authors. While analytical predictions of the foundation loads agree very well with experiments, large discrepancies are seen in the pitch response close to resonance. These differences are analyzed by conducting a sensitivity study, in which system parameters, including damping and added mass values, are varied. The differences between predictions and experiments are attributed to tank reflections, standing waves that can occur in long, narrow wave tanks, and the thin-plate assumption employed in the analytical approach.
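A minimal sketch of the spring-sizing logic implied above: choose the hinge torsional stiffness so that the undamped pitch natural frequency lands at a target wave frequency. All numerical values (flap inertia, added inertia, hydrostatic stiffness, target frequency) are illustrative assumptions, not values from the experiment.

```python
# Size the hinge torsion spring so the pitch natural frequency sits at a target wave frequency.
import math

I55 = 4.0        # flap pitch mass moment of inertia about the hinge, kg m^2 (assumed)
A55 = 6.0        # pitch added inertia near the target frequency, kg m^2 (assumed)
K_hyd = 30.0     # hydrostatic (buoyancy) pitch stiffness, N m/rad (assumed)

f_target = 0.8                       # center of the wave-maker frequency range, Hz (assumed)
w_target = 2.0 * math.pi * f_target

# Undamped pitch natural frequency: w_n = sqrt((K_spring + K_hyd) / (I55 + A55)).
K_spring = (I55 + A55) * w_target**2 - K_hyd
print(f"required torsional spring stiffness: {K_spring:.1f} N m/rad")
```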
As the electric grid becomes increasingly cyber-physical, it is important to characterize its inherent cyber-physical interdependencies and explore how that characterization can be leveraged to improve grid operation. It is crucial to investigate what data features are transferred at the system boundaries, how disturbances cascade between the systems, and how planning and/or mitigation measures can leverage that information to increase grid resilience. In this paper, we explore several numerical analysis and graph decomposition techniques that may be suitable for modeling these cyber-physical system interdependencies and for understanding their significance. An augmented WSCC 9-bus cyber-physical system model is used as a small use case to assess these techniques and their ability to characterize different events within the cyber-physical system. These initial results are then analyzed to formulate a high-level approach for characterizing cyber-physical interdependencies.
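A minimal sketch of one graph-decomposition technique of the kind surveyed here: represent the coupled cyber-physical system as a graph and use spectral partitioning (the Fiedler vector) to expose weakly coupled blocks. The toy topology below only loosely mimics an augmented 9-bus model and is purely illustrative.

```python
# Spectral partitioning of a toy cyber-physical graph (assumed topology, for illustration only).
import networkx as nx

G = nx.Graph()
buses = [f"bus{i}" for i in range(1, 10)]
G.add_edges_from(zip(buses, buses[1:] + buses[:1]))       # assumed ring of physical buses
for i, b in enumerate(buses, start=1):
    G.add_edge(b, f"rtu{i}")                              # one cyber node (RTU) per bus (assumed)
    G.add_edge(f"rtu{i}", "control_center")               # all RTUs report to a control center

# The sign pattern of the Fiedler vector splits the graph into two weakly coupled groups.
fiedler = nx.fiedler_vector(G)
partition = {node: int(component > 0) for node, component in zip(G.nodes(), fiedler)}
```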
Phosphor thermometry has become an established remote sensing technique for acquiring the temperature of surfaces and gas-phase flows. Often, phosphors are excited by a light source (typically emitting in the UV region), and their temperature-sensitive emission is captured. Temperature can be inferred from shifts in the emission spectra or from the radiative decay lifetime during relaxation. While recent work has shown that the emission of several phosphors remains thermographic during x-ray excitation, the radiative decay lifetime was not investigated. The focus of the present study is to characterize the lifetime decay of the phosphor Gd2O2S:Tb for temperature sensitivity after excitation from a pulsed x-ray source. These results are compared to the lifetime decays found for this phosphor when excited using a pulsed UV laser. Results show that the lifetime of this phosphor exhibits comparable temperature sensitivity for both excitation sources over a temperature range from 21 °C to 140 °C in increments of 20 °C. This work introduces a novel method of thermometry for researchers to implement when employing x-rays for diagnostics.
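A minimal sketch of lifetime-based phosphor thermometry as described above: fit a single-exponential decay to the post-excitation emission to extract the lifetime, then map the lifetime to temperature through a calibration curve. The synthetic decay trace and calibration points are illustrative assumptions, not data from this study.

```python
# Fit a single-exponential decay to a (synthetic) emission trace and convert lifetime to temperature.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, c):
    return a * np.exp(-t / tau) + c

# Synthetic decay trace with an assumed true lifetime of 0.5 ms plus noise.
t = np.linspace(0, 3e-3, 400)                        # s
rng = np.random.default_rng(3)
signal = decay(t, 1.0, 5e-4, 0.02) + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(decay, t, signal, p0=(1.0, 1e-3, 0.0))
tau_fit = popt[1]

# Assumed calibration: lifetimes measured at known temperatures, interpolated to read temperature.
cal_T = np.array([21.0, 60.0, 100.0, 140.0])          # deg C
cal_tau = np.array([6.0e-4, 5.2e-4, 4.1e-4, 2.8e-4])  # s, decreasing with temperature (assumed)
temperature = np.interp(tau_fit, cal_tau[::-1], cal_T[::-1])
```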
An inherited containment vessel design that has been used in the past to contain items in an environmental testing unit was brought to the Explosives Applications Lab to be analyzed and modified. The goal was to modify the vessel to contain an explosive event of 4 g TNT equivalence at least once without failure or significant girth expansion while maintaining a seal. A total of ten energetic tests were performed on multiple vessels. In these tests, the 7075-T6 aluminum vessels were instrumented with thin-film resistive strain gauges and both static and dynamic pressure gauges to study their ability to withstand an oversize explosive charge of 8 g. Additionally, high-precision girth (pi tape) measurements were taken before and after each test to measure the plastic growth of the vessel due to the event. Concurrent with this explosive testing, hydrocode modeling of the containment vessel and charge was performed. The modeling results were shown to agree with the results measured in the explosive field testing. Based on the data obtained during this testing, this vessel design can be safely used at least once to contain explosive detonations of 8 g at the center of the chamber for a charge that will not result in damaging fragments.
Physics-Based Reduced Order Models (ROMs) tend to rely on projection-based reduction. This family of approaches utilizes a series of responses of the full-order model to assemble a suitable basis, subsequently employed to formulate a set of equivalent, low-order equations through projection. However, in a nonlinear setting, physics-based ROMs require an additional approximation to circumvent the bottleneck of projecting and evaluating the nonlinear contributions on the reduced space. This scheme is termed hyper-reduction and enables substantial reduction of computational time. The hyper-reduction scheme implies a trade-off, relying on a necessary sacrifice in the accuracy of the nonlinear terms' mapping to achieve rapid or even real-time evaluations of the ROM framework. Since time is essential, especially for digital twin representations in structural health monitoring applications, the hyper-reduction approximation serves as both a blessing and a curse. Our work scrutinizes the possibility of exploiting machine learning (ML) tools in place of hyper-reduction to derive more accurate surrogates of the nonlinear mapping. By retaining the POD-based reduction and introducing the machine learning-boosted surrogate(s) directly on the reduced coordinates, we aim to substitute the projection and update process of the nonlinear terms when integrating forward in time in the low-order dimension. Our approach explores a proof-of-concept case study based on a Nonlinear Auto-regressive neural network with eXogenous Inputs (NARX-NN), aiming to derive a superior physics-based ROM in terms of efficiency, suitable for (near) real-time evaluations. The proposed ML-boosted ROM (N3-pROM) is validated on a multi-degree-of-freedom shear frame under ground motion excitation featuring hysteretic nonlinearities.
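A minimal sketch of the idea described above, using placeholder data: build a POD basis from full-order snapshots, project onto reduced coordinates, and train a simple NARX-style regressor that maps lagged reduced coordinates plus the exogenous ground-motion input to the reduced nonlinear term. The snapshot data, lag depth, and regressor choice are assumptions, not the paper's exact N3-pROM formulation.

```python
# POD reduction plus a NARX-style ML surrogate for the reduced nonlinear term (placeholder data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n_dof, n_snap, r, lags = 40, 500, 4, 3

U_snap = rng.normal(size=(n_dof, n_snap))      # placeholder full-order displacement snapshots
f_nl = np.tanh(U_snap)                         # placeholder nonlinear restoring-force snapshots
u_g = rng.normal(size=n_snap)                  # placeholder exogenous (ground motion) input

# POD basis from the snapshot SVD, truncated to r modes.
Phi, _, _ = np.linalg.svd(U_snap, full_matrices=False)
Phi = Phi[:, :r]

q = Phi.T @ U_snap                             # reduced coordinates, shape (r, n_snap)
g = Phi.T @ f_nl                               # reduced nonlinear term to be learned

# NARX-style features: lagged reduced coordinates plus the current input value.
X, y = [], []
for k in range(lags, n_snap):
    feats = np.concatenate([q[:, k - d] for d in range(1, lags + 1)] + [[u_g[k]]])
    X.append(feats)
    y.append(g[:, k])
X, y = np.array(X), np.array(y)

# The trained surrogate would replace the projected nonlinear-term evaluation during time integration.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)
g_pred = surrogate.predict(X)
```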
Two relatively under-reported facets of fuel storage fire safety are examined in this work for a 250,000-gallon two-tank storage system. Ignition probability is linked to the radiative flux from a presumed fire. First, based on observed features of existing designs, fires are expected to be largely contained within a designed footprint that will hold the full spilled contents of the fuel. The influence of the walls and the shape of the tanks on the magnitude of the fire is not a well-described aspect of conventional fire safety assessment utilities. Various resources are herein used to explore the potential hazard for a contained fire of this nature. Second, an explosive attack on the fuel storage has not been widely considered in prior work. This work explores some options for assessing this hazard. The various methods for assessing the constrained conventional fires are found to be within a reasonable degree of agreement. This agreement contrasts with the hazard from an explosive dispersal. Best available assessment techniques are used, which highlight some inadequacies in the existing toolsets for making predictions of this nature. This analysis, using the best available tools, suggests the offset distance for the ignition hazard from a fireball will be on the same order as the offset distance for the blast damage. This suggests that the buy-down of risk from considering the fireball is minimal relative to the blast hazards. Assessment tools for the fireball predictions are not particularly mature, and ways to improve them for a higher-fidelity estimate are noted.
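A minimal sketch of a generic point-source radiation screening estimate of the kind used to tie ignition probability to radiative flux: given a radiated power and a flux threshold, solve q = chi_r * E_dot / (4 pi R^2) for the offset distance. All inputs are illustrative assumptions, not values or tools from this study.

```python
# Point-source radiation screening: offset distance at which flux drops to an ignition threshold.
import math

E_dot = 2.0e9          # assumed total heat release rate of the fire, W
chi_r = 0.3            # assumed radiant fraction
q_threshold = 12.5e3   # assumed ignition-hazard flux threshold, W/m^2

R_offset = math.sqrt(chi_r * E_dot / (4.0 * math.pi * q_threshold))
print(f"offset distance for {q_threshold / 1e3:.1f} kW/m^2: {R_offset:.0f} m")
```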
The Synchronic Web is a highly scalable notary infrastructure that provides tamper-evident data provenance for historical web data. In this document, we describe the applicability of this infrastructure for web archiving across three envisioned stages of adoption. We codify the core mechanism enabling the value proposition: a procedure for splitting and merging cryptographic information fluidly across blockchain-backed ledgers. Finally, we present preliminary performance results that indicate the feasibility of our approach for modern web archiving scales.
Springs play important roles in many mechanisms, including critical safety components employed by Sandia National Laboratories. Due to the nature of these safety component applications, serious concerns arise if their springs become damaged or unhook from their posts. Finite element analysis (FEA) is one technique employed to ensure such adverse scenarios do not occur. Ideally, a very fine spring mesh would be used to make the simulation as accurate as possible with respect to mesh convergence. While this method does yield the best results, it is also the most time-consuming and therefore most computationally expensive process. In some situations, reduced order models (ROMs) can be adopted to lower this cost at the expense of some accuracy. This study quantifies the error present between a fine, solid element mesh and a reduced order spring beam model, with the aim of finding the best balance of a low computational cost and high accuracy analysis. Two types of analyses were performed: a quasi-static displacement-controlled pull and a haversine shock. The first used implicit methods to examine basic properties as the elastic limit of the spring material was reached. This analysis was also used to study the convergence and residual tolerance of the models. The second used explicit dynamics methods to investigate spring dynamics and stress/strain properties, as well as examine the impact of the chosen friction coefficient. Both the implicit displacement-controlled pull test and explicit haversine shock test showed good agreement between the hexahedral and beam meshes. The results were especially favorable when comparing reaction force and stress trends and maximums. However, the equivalent plastic strain (EQPS) results were not quite as favorable. This could be due to differences in how the shear stress is calculated in the two models, and future studies will need to investigate the exact causes. The data indicate that the beam model may be less likely to correctly predict spring failure, defined as inappropriate application of tension and/or compressive forces to a larger assembly. Additionally, this study was able to quantify the computational cost advantage of using a reduced order model beam mesh. In the transverse haversine shock case, the hexahedral mesh took over three days with 228 processors to solve, compared to under 10 hours for the ROM using just a single processor. Depending on the required use case for the results, using the beam mesh will significantly improve the speed of workflows, especially when integrated into larger safety component models. However, appropriate use of the ROM should carefully balance these optimized run times with its reduction in accuracy, especially when examining spring failure and outputting variables such as EQPS. Current investigations are broadening the scope of this work to include a validation study comparing the beam ROM to physical testing data.
Holography is an effective diagnostic for the three-dimensional imaging of multiphase and particle-laden flows. Traditional digital in-line holography (DIH), however, is subject to distortions from phase delays caused by index-of-refraction changes. This prevents DIH from being implemented in extreme conditions where shockwaves and significant thermal gradients are present. To overcome this challenge, multiple techniques have been developed to correct for the phase distortions. In this work, several holography techniques for distortion removal are discussed, including digital off-axis holography, phase conjugate digital in-line holography, and electric field techniques. Then, a distortion-cancelling off-axis holography configuration is implemented for distortion removal and a high-magnification phase conjugate system is evaluated. Finally, both diagnostics are applied to study extreme pyrotechnic igniter environments.
This paper introduces a new microprocessor-based system that is capable of detecting faults via the Traveling Wave (TW) generated by a fault event. The fault detection system comprises a commercially available Digital Signal Processing (DSP) board capable of accurately sampling signals at high speeds, performing the Discrete Wavelet Transform (DWT) decomposition to extract features from the TW, and a detection algorithm that uses the extracted features to determine the occurrence of a fault. Results show that this inexpensive fault detection system's performance is comparable to that of commercially available TW relays, as accurate sampling and fault detection are achieved within 150 microseconds. A detailed analysis of the execution times of each part of the process is provided.
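A minimal sketch of the detection idea described above: sample a signal at high speed, apply a discrete wavelet transform, and flag a fault when the finest-scale detail coefficients exceed a threshold. The wavelet family, decomposition level, and threshold rule are assumptions for illustration, not the paper's exact algorithm.

```python
# DWT-based traveling-wave detection on a synthetic signal with an injected transient.
import numpy as np
import pywt

fs = 1.0e6                                   # assumed sampling rate, Hz
t = np.arange(0, 2e-3, 1 / fs)
rng = np.random.default_rng(6)

signal = np.sin(2 * np.pi * 60 * t) + 0.01 * rng.normal(size=t.size)   # nominal waveform + noise
signal[1000:] += 0.4 * np.exp(-(t[1000:] - t[1000]) * 2e4)             # traveling-wave-like transient

coeffs = pywt.wavedec(signal, "db4", level=4)
d1 = coeffs[-1]                              # finest-scale detail coefficients

threshold = 5.0 * np.median(np.abs(d1)) / 0.6745    # robust noise-based threshold (assumed rule)
fault_detected = bool(np.any(np.abs(d1) > threshold))
```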
As the width and depth of quantum circuits implemented by state-of-the-art quantum processors rapidly increase, circuit analysis and assessment via classical simulation are becoming infeasible. It is crucial, therefore, to develop new methods to identify significant error sources in large and complex quantum circuits. In this work, we present a technique that pinpoints the sections of a quantum circuit that affect the circuit output the most and thus helps to identify the most significant sources of error. The technique requires no classical verification of the circuit output and is thus a scalable tool for debugging large quantum programs in the form of circuits. We demonstrate the practicality and efficacy of the proposed technique by applying it to example algorithmic circuits implemented on IBM quantum machines.
A medium-scale (30 cm diameter) methanol pool fire was simulated using Sandia National Laboratories’ Sierra/Fuego low-Mach number multi-physics turbulent reacting flow code. Large Eddy Simulation (LES) with subgrid turbulent kinetic energy closure was used as the turbulence model. Combustion was modeled using a strained laminar flamelet library approach. Radiative heat transfer was modeled using the gray-gas approximation. This paper details analysis done to support a validation study for the fire model. In this analysis, integral quantities were primarily examined. The radiant fraction was computed and used as a model calibration parameter. Integrated buoyancy flux was calculated and compared to an engineering correlation. Entrainment rate was computed with and without a mixture fraction threshold filter and compared to engineering correlations. Turbulent kinetic energy was computed and the effect of mesh size on the subgrid and total turbulent kinetic energy was examined. Flame height was calculated using an intermittency definition with two input parameters. A sensitivity study was then conducted to determine the sensitivity of the estimated flame height to the input parameters. This analysis aided in achieving the primary validation study objectives by providing model calibration and expanding the scope of the validation effort. In addition, the range of physics examined was increased, enhancing the understanding of the model's overall performance and of the relationship between phenomena.
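A minimal sketch of an intermittency-based flame height calculation of the kind mentioned above: at each elevation, the intermittency is the fraction of time a flame marker exceeds a threshold, and the flame height is the elevation at which the intermittency falls to a chosen cutoff. The marker field, threshold, and cutoff below are placeholder assumptions illustrating the two input parameters.

```python
# Flame height from an intermittency definition with two input parameters (placeholder data).
import numpy as np

rng = np.random.default_rng(5)
z = np.linspace(0.0, 1.5, 150)            # elevation above the pool surface, m (assumed)
n_times = 2000

# Placeholder centerline time series of a flame marker (e.g., temperature), decaying with height
# plus turbulent fluctuations.
marker = 1400.0 * np.exp(-z / 0.5)[None, :] + 200.0 * rng.normal(size=(n_times, z.size))

marker_threshold = 500.0                  # input parameter 1: flame-present threshold (assumed)
intermittency_cutoff = 0.5                # input parameter 2: intermittency level defining the tip (assumed)

intermittency = np.mean(marker > marker_threshold, axis=0)
flame_height = z[np.argmax(intermittency < intermittency_cutoff)]
```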