Here, we show that a laser at threshold can be used to generate the class of coherent and transform-limited waveforms (vt − z)^m e^{i(kz − ωt)} at optical frequencies. We derive these properties analytically and demonstrate them in semiclassical time-domain laser simulations. We then use these waveforms to expand other waveforms with high modulation frequencies and demonstrate theoretically the feasibility of complex-frequency coherent absorption at optical frequencies, with efficient energy transduction and cavity loading. This approach has potential applications in quantum computing, photonic circuits, and biomedicine.
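As an illustrative sketch (not the simulation code used in this work), the waveform family above can be evaluated numerically; all parameter values below are arbitrary choices for demonstration:

```python
import numpy as np

def cf_waveform(z, t, m, k=2 * np.pi, omega=2 * np.pi, v=1.0):
    """Evaluate the polynomial-envelope waveform (v*t - z)^m * exp(i*(k*z - omega*t)).

    For m > 0, the carrier oscillates at omega while the envelope grows
    polynomially along the characteristic v*t - z, the hallmark of a
    complex-frequency (non-stationary) waveform.
    """
    return (v * t - z) ** m * np.exp(1j * (k * z - omega * t))

# Sample the m = 2 member at z = 0 over one carrier period.
t = np.linspace(0.0, 1.0, 5)
field = cf_waveform(0.0, t, m=2)
envelope = np.abs(field)  # equals (v*t)^2 at z = 0
```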
Alkali metals are among the most desirable negative electrodes for long-duration energy storage due to their extremely high capacities. Currently, only high-temperature (>250 °C) batteries have successfully used alkali electrodes in commercial applications, due to limitations imposed by solid electrolytes, such as low conductivity at moderate temperatures and susceptibility to dendrites. Toward enabling the next generation of grid-scale, long-duration batteries, we aim to develop molten sodium (Na) systems that operate with commercially attractive performance metrics, including high current density (>100 mA cm⁻²), low temperature (<200 °C), and long discharge times (>12 h). In this work, we focus on the performance of NaSICON solid electrolytes in sodium symmetric cells at 110 °C. Specifically, we use a tin (Sn) coating on NaSICON to reduce interfacial resistance by a factor of 10, enabling molten Na symmetric cell operation with “discharge” durations up to 23 h at 100 mA cm⁻² and 110 °C. Unidirectional galvanostatic testing shows a 70% overpotential reduction, and electrochemical impedance spectroscopy (EIS) highlights the reduction in interfacial resistance due to the Sn coating. Detailed scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS) show that Sn-coated NaSICON enables current densities of up to 500 mA cm⁻² at 110 °C by suppressing dendrite formation at the plating interface (Mode I). This analysis also provides a mechanistic understanding of dendrite formation at current densities up to 1000 mA cm⁻², highlighting the importance of effective coatings that will enable advanced battery technologies for long-term energy storage.
Because of the high-risk nature of emergencies and illegal activities at sea, it is critical that algorithms designed to detect anomalies from maritime traffic data be robust. However, there exist no publicly available maritime traffic data sets with real-world expert-labeled anomalies. As a result, most anomaly detection algorithms for maritime traffic are validated without ground truth. We introduce the HawaiiCoast_GT data set, the first publicly available automatic identification system (AIS) data set with a large corresponding set of true anomalous incidents. This data set—cleaned and curated from raw Bureau of Ocean Energy Management (BOEM) and National Oceanic and Atmospheric Administration (NOAA) AIS data—covers Hawaii’s coastal waters for four years (2017–2020) and contains 88,749,176 AIS points for a total of 2622 unique vessels. This includes 208 labeled tracks corresponding to 154 rigorously documented real-world incidents.
We demonstrate an order of magnitude reduction in the sensitivity to optical crosstalk for neighboring trapped-ion qubits during simultaneous single-qubit gates driven with individual-addressing beams. Gates are implemented via two-photon Raman transitions, where crosstalk is mitigated by offsetting the drive frequencies for each qubit to avoid first-order crosstalk effects from inter-beam two-photon resonance. The technique is simple to implement, and we find that phase-dependent crosstalk due to optical interference is reduced on the most impacted neighbor from a maximal fractional rotation error of 0.185(4) without crosstalk mitigation to ≤ 0.006 with the mitigation strategy. Furthermore, we characterize first-order crosstalk in the two-qubit gate and avoid the resulting rotation errors for the arbitrary-axis Mølmer-Sørensen gate via a phase-agnostic composite gate. Finally, we demonstrate holistic system performance by constructing a composite CNOT gate using the improved single-qubit gates and phase-agnostic two-qubit gate. This work is done on the Quantum Scientific Computing Open User Testbed; however, our methods are widely applicable for individual-addressing Raman gates and impose no significant overhead, enabling immediate improvement for quantum processors that incorporate this technique.
As global temperatures continue to rise, climate mitigation strategies such as stratospheric aerosol injections (SAI) are increasingly discussed, but the downstream effects of these strategies are not well understood. As such, there is interest in developing statistical methods to quantify the evolution of climate variable relationships during the time period surrounding an SAI. Feature importance applied to echo state network (ESN) models has been proposed as a way to understand the effects of SAI using a data-driven model. This approach depends on the ESN fitting the data well; if the fit is poor, importance may be placed on features that are not representative of the underlying relationships. Typically, time series prediction models such as ESNs are assessed using out-of-sample performance metrics that divide the time series into separate training and testing sets. However, this model assessment approach is geared towards forecasting applications and not scenarios such as the motivating SAI example, where the objective is to use a data-driven model to capture variable relationships. In this paper, we demonstrate a novel use of climate model replicates to investigate the applicability of the commonly used repeated hold-out model assessment approach for the SAI application. Simulations of an SAI are generated using a simplified climate model, and different initialization conditions are used to provide independent training and testing sets containing the same SAI event. The climate model replicates enable out-of-sample measures of model performance, which are compared to the single time series hold-out validation approach. For our case study, it is found that the repeated hold-out sample performance is comparable to, though more conservative than, the replicate out-of-sample performance when the training set contains enough time after the aerosol injection.
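The contrast between the two assessment schemes can be sketched with a toy stand-in for the ESN (a ridge regression on lagged values; the series generator, event, lag, and regularization below are illustrative assumptions, not the models or data of this study):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_replicate(n=600, event=300):
    """Hypothetical stand-in for a climate-model replicate: an AR(1) series
    whose mean shifts after an 'injection' at index `event`."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = 0.8 * x[i - 1] + rng.normal(scale=0.3)
    x[event:] -= 1.0  # downstream effect of the simulated injection
    return x

def lag_matrix(y, lag=3):
    """Next-step regression data: columns of lagged values, next value as target."""
    X = np.column_stack([y[i : len(y) - lag + i] for i in range(lag)])
    return X, y[lag:]

def ridge_fit(y, lag=3, alpha=1e-2):
    """Crude proxy for the ESN readout: ridge regression on lagged values."""
    X, tgt = lag_matrix(y, lag)
    return np.linalg.solve(X.T @ X + alpha * np.eye(lag), X.T @ tgt)

def rmse(y, w, lag=3):
    X, tgt = lag_matrix(y, lag)
    return float(np.sqrt(np.mean((X @ w - tgt) ** 2)))

train, replicate = make_replicate(), make_replicate()
w = ridge_fit(train[:450])            # training window spans the event at index 300
holdout_err = rmse(train[450:], w)    # repeated hold-out style: tail of the same series
replicate_err = rmse(replicate, w)    # replicate style: independent series, same event
```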
Efficient solution of the Vlasov equation, which can be up to six-dimensional, is key to the simulation of many difficult problems in plasma physics. The discontinuous Petrov-Galerkin (DPG) finite element methodology provides a framework for the development of stable (in the sense of Ladyzhenskaya–Babuška–Brezzi conditions) finite element formulations, with built-in mechanisms for adaptivity. While DPG has been studied extensively in the context of steady-state problems and to a lesser extent with space-time discretizations of transient problems, relatively little attention has been paid to time-marching approaches. In the present work, we study a first application of time-marching DPG to the Vlasov equation, using backward Euler for a Vlasov-Poisson discretization. We demonstrate adaptive mesh refinement for two problems: the two-stream instability problem, and a cold diode problem. We believe the present work is novel both in its application of unstructured adaptive mesh refinement (as opposed to block-structured adaptivity, which has been studied previously) in the context of Vlasov-Poisson and in its application of DPG to the Vlasov-Poisson system. We also discuss extensive additions to the Camellia library in support of both the present formulation and extensions to higher dimensions, Maxwell equations, and space-time formulations.
Thermochemical air separation to produce high-purity N2 was demonstrated in a vertical tube reactor via a two-step reduction–oxidation cycle with an A-site substituted perovskite Ba0.15Sr0.85FeO3–δ (BSF1585). BSF1585 particles were synthesized and characterized in terms of their chemical, morphological, and thermophysical properties. A thermodynamic cycle model and sensitivity analysis using computational heat and mass transfer models of the reactor were used to select the system operating parameters for a concentrating solar thermal-driven process. Thermal reduction up to 800 °C in air and temperature-swing air separation from 800 °C to minimum temperatures between 400 and 600 °C were performed in the reactor containing a 35 g packed bed of BSF1585. The reactor was characterized for dispersion, and air separation was characterized via mass spectrometry. Gas measurements indicated that the reactor produced N2 with O2 impurity concentrations as low as 0.02 % for > 30 min of operation. A parametric study of air flow rates suggested that differences in observed and thermodynamically predicted O2 impurities were due to imperfect gas transport in the bed. Temperature swing reduction/oxidation cycling experiments between 800 and 400 °C in air were conducted with no statistically significant degradation in N2 purity over 50 cycles.
We investigate the interplay between the quantum Hall (QH) effect and superconductivity in InAs surface quantum well (SQW)/NbTiN heterostructures using a quantum point contact (QPC). We use the QPC to control the proximity of the edge states to the superconductor. By measuring the upstream and downstream resistances of the device, we investigate the efficiency of Andreev conversion at the InAs/NbTiN interface. Our experimental data are analyzed using the Landauer-Büttiker formalism, generalized to allow for Andreev reflection processes. We show that by varying the voltage of the QPC, VQPC, the average Andreev reflection, A, at the QH-SC interface can be tuned from 50% to ∼10%. The evolution of A with VQPC extracted from the measurements exhibits plateaus separated by regions in which A varies continuously with VQPC. The presence of plateaus suggests that for some ranges of VQPC, the QPC may be almost completely pinching off some of the edge modes from the QH-SC interface. Our work demonstrates an experimental setup to control and advance the understanding of the complex interplay between superconductivity and the QH effect in two-dimensional electron gas systems.
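In a simplified, fully equilibrated single-interface limit of the generalized Landauer-Büttiker picture, the downstream resistance is often written as R_down = (h/(ν e²))(1 − 2A), so A can be read off from a measured R_down. The sketch below assumes that relation (an illustrative simplification, not the device-specific analysis of this work):

```python
import numpy as np

H_OVER_E2 = 25812.807  # von Klitzing constant h/e^2, in ohms

def andreev_from_downstream(r_down, nu=2):
    """Infer the average Andreev conversion probability A from a downstream
    resistance, assuming the single-interface relation
        R_down = (h / (nu * e^2)) * (1 - 2 * A)
    at filling factor nu (an assumed simplified limit, not the paper's model).
    """
    return 0.5 * (1.0 - np.asarray(r_down) * nu / H_OVER_E2)

# Perfect normal reflection (A = 0) leaves R_down quantized at h/(nu e^2);
# A = 0.5 gives zero downstream resistance; perfect Andreev conversion
# (A = 1) gives a negative downstream resistance of the same magnitude.
r = np.array([H_OVER_E2 / 2, 0.0, -H_OVER_E2 / 2])
A = andreev_from_downstream(r, nu=2)
```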
The size of a pressure transducer is known to affect the accuracy of measurements of wall-pressure fluctuations beneath a turbulent boundary layer because of spatial averaging over the sensing area of the transducer. In this paper, the effect of finite transducer size is investigated by applying spatial averaging or wavenumber filters to a database of hypersonic wall pressure generated from a direct numerical simulation (DNS) of the turbulent portion of the boundary layer over a sharp 7° half-angle cone at nominally Mach 8. A good comparison between the DNS and the experiment in the Sandia Hypersonic Wind Tunnel at Mach 8 is achieved after spatial averaging is applied to the DNS data over an area similar to the sensing area of the transducer. The study shows that a finite sensor size similar to that of the PCB132 transducer can cause significant attenuation in the root-mean-square and power spectral density (PSD) of wall-pressure fluctuations, and that the attenuation effect is identical between cone and flat-plate configurations at the same friction Reynolds number. The Corcos theory is found to successfully compensate for the attenuated high-frequency components of the wall-pressure PSD.
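The spatial-averaging step can be illustrated on synthetic data: convolving a wall-pressure snapshot with a normalized disk-shaped footprint attenuates the rms, as a real finite-size sensor would (the grid size, footprint radius, and the white-noise "pressure" field below are arbitrary stand-ins for the DNS database):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a DNS wall-pressure snapshot: zero-mean broadband noise
# on a periodic 2-D grid (streamwise x spanwise).
n = 128
p = rng.normal(size=(n, n))
p -= p.mean()

# Disk-shaped sensing footprint of radius r grid cells, normalized to unit
# sum, mimicking the finite sensing area of a transducer.
r = 6
y, x = np.mgrid[-n // 2 : n // 2, -n // 2 : n // 2]
disk = (x**2 + y**2 <= r**2).astype(float)
disk /= disk.sum()

# Periodic convolution via FFT applies the spatial average at every point.
p_avg = np.real(np.fft.ifft2(np.fft.fft2(p) * np.fft.fft2(np.fft.ifftshift(disk))))

rms_point = np.sqrt(np.mean(p**2))   # "ideal" point-sensor rms
rms_avg = np.sqrt(np.mean(p_avg**2)) # attenuated by the finite footprint
```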
Color centers in diamond are one of the most promising tools for quantum information science. Of particular interest is the use of single-crystal diamond membranes with nanoscale thickness as hosts for color centers. Indeed, such structures guarantee better integration with a variety of other quantum materials or devices, which can aid the development of diamond-based quantum technologies, from nanophotonics to quantum sensing. A common approach for membrane production is the “smart-cut” process, in which membranes are exfoliated from a diamond substrate after the creation of a thin sub-surface amorphous carbon layer by He+ implantation. Due to the high ion fluence required, this process can be time-consuming. In this work, we demonstrate the production of thin diamond membranes by neon implantation of diamond substrates. With the target of obtaining membranes of ~200 nm thickness and finding the critical damage threshold, we implanted different diamonds with 300 keV Ne+ ions at different fluences. We characterized the structural properties of the implanted diamonds and the resulting membranes through SEM, Raman spectroscopy, and photoluminescence spectroscopy. We also found that a SRIM model based on a two-layer diamond/sp2-carbon target better describes the ion implantation, allowing us to estimate the diamond critical damage threshold for Ne+ implantation. Compared to He+ smart-cut, the use of a heavier ion like Ne+ results in a ten-fold decrease in the ion fluence required to obtain diamond membranes and allows shallower smart-cuts, i.e., thinner membranes, at the same ion energy.
We propose a method to extract the upper laser level’s (ULL’s) excess electronic temperature from analysis of the maximum light output power (Pmax) and current dynamic range ΔJd = (Jmax – Jth) of terahertz quantum cascade lasers (THz QCLs). We validated this method, both through simulation and experiment, by applying it to THz QCLs supporting a clean three-level system. Detailed knowledge of electronic excess temperatures is of utmost importance for achieving high-temperature performance of THz QCLs. Our method is simple and easily implemented, so the excess electron temperature can be extracted without intensive experimental effort. This knowledge should pave the way toward improving the temperature performance of THz QCLs beyond the state of the art.
We offer a comprehensive analysis of the factors that could enable terahertz quantum cascade lasers (THz QCLs) to achieve room-temperature performance, examining and integrating the latest findings from recent studies in the field. Drawing new conclusions from prior works, we demonstrate that the key to enhancing THz QCL temperature performance involves not only optimizing interface quality but also strategically managing doping density, its spatial distribution, and its profile. This conclusion is based on our results from different structures, including two experimentally demonstrated devices: the split-well resonant-phonon and the two-well injector direct-phonon schemes for THz QCLs, which allow efficient isolation of the laser levels from excited and continuum states. In these schemes, the doping profile has a setback that lessens the overlap of the doped region with the active laser states. Our work serves as a resource for researchers seeking a deeper understanding of the evolving landscape of THz technology, and we present a strategy for future endeavors that should pave the way toward temperatures beyond the latest Tmax records for THz QCLs.
Intermediate verification languages like Why3 and Boogie have made it much easier to build program verifiers, transforming the process into a logic compilation problem rather than a proof automation one. Why3 in particular implements a rich logic for program specification with polymorphism, algebraic data types, recursive functions and predicates, and inductive predicates; it translates this logic to over a dozen solvers and proof assistants. Accordingly, it serves as a backend for many tools, including Frama-C, EasyCrypt, and GNATProve for Ada SPARK. But how can we be sure that these tools are correct? The alternate foundational approach, taken by tools like VST and CakeML, provides strong guarantees by implementing the entire toolchain in a proof assistant, but these tools are harder to build and cannot directly take advantage of SMT solver automation. As a first step toward enabling automated tools with similar foundational guarantees, we give a formal semantics in Coq for the logic fragment of Why3. We show that our semantics are useful by giving a correct-by-construction natural deduction proof system for this logic, using this proof system to verify parts of Why3's standard library, and proving the soundness of two of Why3's transformations used to convert terms and formulas into the simpler logics supported by the backend solvers.
Understanding pure H2 and H2/CH4 adsorption and diffusion in earth materials is one vital step toward successful and safe H2 storage in depleted gas reservoirs. Despite recent research efforts, such understanding is far from complete. In this work, we first use Nuclear Magnetic Resonance (NMR) experiments to study the NMR response of H2 injected into Duvernay shale and Berea sandstone samples, representing materials in confining and storage zones. Then we use molecular simulations to investigate H2/CH4 competitive adsorption and diffusion in kerogen, a common component of shale. Our results indicate that in shale there are two H2 populations, i.e., free H2 and adsorbed H2, that yield very distinct NMR responses. In contrast, only free gas is present in sandstone, which yields a H2 NMR response similar to that of bulk H2. About 10 % of injected H2 can be lost due to adsorption/desorption hysteresis in shale, whereas no H2 loss (no hysteresis) is observed in sandstone. Our molecular simulation results support our NMR finding that there are two H2 populations in nanoporous materials (kerogen). The simulation results also indicate that CH4 outcompetes H2 in adsorption onto kerogen, due to stronger CH4-kerogen interactions than H2-kerogen interactions. Nevertheless, in a depleted gas reservoir with low CH4 gas pressure, ∼30 % of residual CH4 can be desorbed upon H2 injection. The simulation results also predict that H2 diffusion in porous kerogen is about one order of magnitude faster than that of CH4 and CO2. This work provides an understanding of H2/CH4 behaviors in depleted gas reservoirs upon H2 injection and predictions of H2 loss and CH4 desorption during H2 storage.
Gallium nitride (GaN)-based nanoscale vacuum electron devices, which offer advantages of both traditional vacuum tube operation and modern solid-state technology, are attractive for radiation-hard applications due to the inherent radiation hardness of vacuum electron devices and the high radiation tolerance of GaN. Here, we investigate the radiation hardness of top-down fabricated n-GaN nanoscale vacuum electron diodes (NVEDs) irradiated with 2.5-MeV protons (p) at various doses. We observe a slight decrease in forward current and a slight increase in reverse leakage current as a function of cumulative proton fluence due to a dopant compensation effect. The NVEDs overall show excellent radiation hardness, with no major change in electrical characteristics up to a cumulative fluence of 5E14 p/cm2, which to our knowledge is significantly higher than for existing state-of-the-art radiation-hardened devices. The results show promise for a new class of GaN-based nanoscale vacuum electron devices for use in harsh radiation environments and space applications.
Interim dry storage of spent nuclear fuel involves storing the fuel in welded stainless-steel canisters. Under certain conditions, the canisters could be subjected to environments that may promote stress corrosion cracking, leading to a risk of breach and release of aerosol-sized particulate from the interior of the canister to the external environment through the crack. Research is currently under way by several laboratories to better understand the formation and propagation of stress corrosion cracks; however, little work has been done to quantitatively assess the potential aerosol release. The purpose of the present work is to introduce a reliable, generic numerical model for predicting aerosol transport and deposition in leak paths similar to stress corrosion cracks, while accounting for potential plugging from particle deposition. The model is dynamic (the leak path geometry changes due to plugging) and relies on the numerical solution of the aerosol transport equation in one dimension using finite differences. The model’s capabilities were also incorporated into a Graphical User Interface (GUI) developed to enhance user accessibility. Model validation efforts presented in this paper compare the model’s predictions with recent experimental data from Sandia National Laboratories (SNL) and results available in the literature. We expect this model to improve the accuracy of consequence assessments and reduce the uncertainty of radiological consequence estimations in the remote event of a through-wall breach in dry cask storage systems.
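A minimal sketch of the one-dimensional finite-difference idea (advection with a first-order deposition sink on a fixed geometry; it omits the dynamic plugging and size-resolved physics of the actual model, and every parameter value below is illustrative):

```python
import numpy as np

def advect_deposit(c0, u, lam, dx, dt, steps):
    """Explicit first-order upwind solution of
        dc/dt + u * dc/dx = -lam * c
    a 1-D sketch of aerosol advection along a leak path with a deposition
    sink of rate lam. Assumes u > 0 and a CFL number u*dt/dx <= 1.
    """
    c = c0.copy()
    cfl = u * dt / dx
    assert cfl <= 1.0, "CFL condition violated"
    for _ in range(steps):
        # Upwind advection plus deposition loss, fixed unit inlet concentration.
        c[1:] = c[1:] - cfl * (c[1:] - c[:-1]) - dt * lam * c[1:]
        c[0] = 1.0
    return c

# 200-cell leak path; at steady state the profile decays roughly like
# exp(-lam * x / u) along the path.
c = advect_deposit(np.zeros(200), u=1.0, lam=0.5, dx=0.01, dt=0.005, steps=2000)
```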
In this work, we evaluate the usefulness of nonsmooth basis functions for representing the periodic response of a nonlinear system subject to contact/impact behavior. As with sine and cosine basis functions for classical Fourier series, which have C∞ smoothness, nonsmooth counterparts with C0 smoothness are defined to develop a nonsmooth functional representation of the solution. Some properties of these basis functions are outlined, such as periodicity, derivatives, and orthogonality, which are useful for functional series applied via the Galerkin method. Least-squares fits of the classical Fourier series and nonsmooth basis functions are presented and compared using goodness-of-fit metrics for time histories from vibro-impact systems with varying contact stiffnesses. This formulation has the potential to significantly reduce the computational cost of harmonic balance solvers for nonsmooth dynamical systems. Rather than requiring many harmonics to capture a system response using classical, smooth Fourier terms, the frequency domain discretization could be captured by a combination of a finite Fourier series supplemented with nonsmooth basis functions to improve convergence of the solution for contact-impact problems.
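To make the idea concrete, here is a small least-squares fit using C0 triangle-wave analogs of cosine and sine (the clipped-sine "impact" signal and the number of basis terms are illustrative assumptions, not the vibro-impact systems studied in this work):

```python
import numpy as np

def tri(t):
    """C0 'nonsmooth cosine': period-1 triangle wave with tri(0) = 1, tri(0.5) = -1."""
    return 4.0 * np.abs((t % 1.0) - 0.5) - 1.0

# Signal with a contact-like kink: a sine clipped where it would penetrate a stop.
t = np.linspace(0.0, 1.0, 400, endpoint=False)
signal = np.minimum(np.sin(2.0 * np.pi * t), 0.4)

# Basis: constant plus K harmonics of the triangle wave in two phases
# (the quarter-period shift plays the role of the sine term).
K = 5
cols = [np.ones_like(t)]
for k in range(1, K + 1):
    cols += [tri(k * t), tri(k * t - 0.25)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
rms_residual = float(np.sqrt(np.mean((A @ coef - signal) ** 2)))
```

The same least-squares machinery applied with smooth sines and cosines gives the classical Fourier comparison described above.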
The diesel-piloted dual-fuel compression ignition combustion strategy is well-suited to accelerate the decarbonization of transportation by adopting hydrogen as a renewable energy carrier into the existing internal combustion engine with minimal engine modifications. Despite the simplicity of engine modification, many questions remain unanswered regarding the optimal pilot injection strategy for reliable ignition with minimum pilot fuel consumption. The present study uses a single-cylinder heavy-duty optical engine to explore the phenomenology and underlying mechanisms governing the pilot fuel ignition and the subsequent combustion of a premixed hydrogen-air charge. The engine is operated in a dual-fuel mode with hydrogen premixed into the engine intake charge with a direct pilot injection of n-heptane as a diesel pilot fuel surrogate. Optical diagnostics used to visualize in-cylinder combustion phenomena include high-speed IR imaging of the pilot fuel spray evolution as well as high-speed HCHO* and OH* chemiluminescence as indicators of low-temperature and high-temperature heat release, respectively. Three pilot injection strategies are compared to explore the effects of pilot fuel mass, injection pressure, and injection duration on the probability and repeatability of successful ignition. The thermodynamic and imaging data analysis, supported by zero-dimensional chemical kinetics simulations, revealed a complex interplay between the physical and chemical processes governing the pilot fuel ignition process in a hydrogen-containing charge. Hydrogen strongly inhibits the ignition of pilot fuel mixtures and therefore requires a longer injection duration to create zones with sufficiently high pilot fuel concentration for successful ignition. Results show that ignition typically relies on stochastic pockets of high pilot fuel concentration, which leads to poor combustion repeatability and frequent misfiring.
This work improves the understanding of how the unique chemical properties of hydrogen pose a challenge for maximizing hydrogen's energy share in hydrogen dual-fuel engines and highlights a potential mitigation pathway.
Tabulated chemistry models are widely used to simulate large-scale turbulent fires in applications including energy generation and fire safety. Tabulation via piecewise Cartesian interpolation suffers from the curse of dimensionality, leading to a prohibitive exponential growth in parameters and memory usage as more dimensions are considered. Artificial neural networks (ANNs) have attracted attention for constructing surrogates for chemistry models due to their ability to perform high-dimensional approximation. However, due to well-known pathologies in which training realizes suboptimal local minima, in practice they often fail to converge and provide unreliable accuracy. Partition of unity networks (POUnets) are a recently introduced family of ANNs which preserve notions of convergence while performing high-dimensional approximation, discovering a mesh-free partition of space which may be used to perform optimal polynomial approximation. We assess their performance with respect to accuracy and model complexity in reconstructing unstructured flamelet data representative of nonadiabatic pool fire models. Our results show that POUnets can provide the desirable accuracy of classical spline-based interpolants with the low memory footprint of traditional ANNs while converging faster to significantly lower errors than ANNs. For example, we observe POUnets obtaining target accuracies in two dimensions with 40 to 50 times less memory and roughly double the compression in three dimensions. We also address the practical matter of efficiently training accurate POUnets by studying convergence over key hyperparameters, the impact of partition/basis formulation, and the sensitivity to initialization.
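The core partition-of-unity idea can be sketched in a few lines: normalized window functions form a partition of unity, and each window carries a local polynomial fitted by least squares. In the actual POUnet the partition itself is learned; here it is prescribed (the 1-D Gaussian windows, the sine stand-in for flamelet data, and all sizes below are illustrative assumptions):

```python
import numpy as np

def pou_fit(x, y, centers, width, degree=1):
    """Partition-of-unity least-squares fit in 1-D: normalized Gaussian
    windows (the partition) each carry a local polynomial. This is a
    fixed-partition sketch of the POUnet idea; the network additionally
    learns the partition, which is prescribed here.
    """
    phi = np.exp(-(((x[:, None] - centers[None, :]) / width) ** 2))
    phi /= phi.sum(axis=1, keepdims=True)  # partition of unity: rows sum to 1
    # Design matrix: each window times each monomial 1, x, ..., x^degree.
    A = np.column_stack([phi[:, j] * x**d
                         for j in range(len(centers))
                         for d in range(degree + 1)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

x = np.linspace(0.0, 1.0, 300)
y = np.sin(4.0 * np.pi * x)  # stand-in for a 1-D slice of flamelet table data
yhat = pou_fit(x, y, centers=np.linspace(0.0, 1.0, 16), width=0.06)
rms_err = float(np.sqrt(np.mean((yhat - y) ** 2)))
```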
In this work, the frequency response of a simplified shaft-bearing assembly is studied using numerical continuation. Roller-bearing clearances give rise to contact behavior in the system, and past research has focused on the nonlinear normal modes of the system and its response to shock-type loads. A harmonic balance method (HBM) solver is applied instead of a time integration solver, and numerical continuation is used to map out the system’s solution branches in response to a harmonic excitation. Stability analysis is used to understand the bifurcation behavior and possibly identify numerical or system-inherent anomalies seen in past research. Continuation is also performed with respect to the forcing magnitude, resulting in what are known as S-curves, in an effort to detect isolated solution branches in the system response.
The primary goal of any laboratory test is to expose the unit-under-test to conservative, realistic representations of a field environment. Satisfying this objective is not always straightforward due to laboratory equipment constraints. For vibration and shock tests performed on shakers, over-testing and unrealistic failures can result because the control is a base acceleration and mechanical shakers have nearly infinite impedance. Force limiting and response limiting are relatively standard practices to reduce over-test risks in random-vibration testing. Shaker controller software generally has response limiting as a built-in capability, and it is applied without much user intervention since vibration control is a closed-loop process. Limiting in shaker shocks is done for the same reasons, but because the duration of a shock is only a few milliseconds, limiting is a pre-planned, user-in-the-loop process. Shaker shock response limiting has been used for at least 30 years at Sandia National Laboratories, but it seems to be little known or used in industry. The objective of this paper is to re-introduce response limiting for shaker shocks to the aerospace community. The process is demonstrated on the BARBECUE testbed.
Disposal of commercial spent nuclear fuel in a geologic repository is studied. In situ heater experiments in underground research laboratories provide a realistic representation of subsurface behavior under disposal conditions. This study describes process model development and modeling analysis for a full-scale heater experiment in Opalinus Clay host rock. The results of a thermal-hydrology simulation solving coupled nonisothermal multiphase flow are presented and compared with experimental data. The modeling results closely match the experimental data.
The siting of nuclear waste is a process that requires consideration of public concerns. This report demonstrates the significant potential for natural language processing techniques to gain insights into public narratives around “nuclear waste.” Specifically, the report highlights that the general discourse regarding “nuclear waste” within the news media has fluctuated in prevalence compared to “nuclear” topics broadly over recent years, with commonly mentioned entities reflecting a limited variety of geographies and stakeholders. General sentiments within the “nuclear waste” articles appear to use neutral language, suggesting that a scientific or “facts-only” framing of “waste”-related issues dominates coverage; however, the exact nuances should be further evaluated. The implications of a number of these insights about how nuclear waste is framed in traditional media (e.g., regarding emerging technologies, historical events, and specific organizations) are discussed. This report lays the groundwork for larger, more systematic research using, for example, transformer-based techniques and covariance analysis to better understand relationships among “nuclear waste” and other nuclear topics, sentiments of specific entities, and patterns across space and time (including in a particular region). By identifying priorities and knowledge needs, these data-driven methods can complement and inform engagement strategies that promote dialogue and mutual learning regarding nuclear waste.