Simulations and diagnostics of high-energy-density plasmas and warm dense matter rely on models of material response properties, both static and dynamic (frequency-dependent). Here, we systematically investigate variations in dynamic electron-ion collision frequencies ν(ω) in warm dense matter using data from a self-consistent-field average-atom model. We show that including the full quantum density of states, strong collisions, and inelastic collisions leads to significant changes in ν(ω). These changes result in red shifts and broadening of the plasmon peak in the dynamic structure factor, an effect observable in x-ray Thomson scattering spectra, and modify stopping powers around the Bragg peak. Together, these corrections improve the agreement of computationally efficient average-atom models with first-principles time-dependent density functional theory in warm dense aluminum, carbon, and deuterium.
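The qualitative effect described above, a red shift and broadening of the plasmon peak with increasing collision frequency, can be illustrated with a simple Drude model (not the average-atom model of the paper). The loss function Im(−1/ε) peaks near the plasma frequency, and a larger ν pulls the peak down in frequency and smears it out. All numerical values below are illustrative, in units of the plasma frequency ωp:

```python
import numpy as np

def drude_loss_function(omega, omega_p, nu):
    """Energy-loss function Im(-1/eps) for the Drude dielectric function
    eps(w) = 1 - w_p^2 / (w * (w + i*nu))."""
    eps = 1.0 - omega_p**2 / (omega * (omega + 1j * nu))
    return (-1.0 / eps).imag

# frequencies in units of the plasma frequency omega_p (start above 0 to avoid division by zero)
omega = np.linspace(0.01, 3.0, 3000)

# plasmon peak position for a weak and a strong collision rate
peak_weak = omega[np.argmax(drude_loss_function(omega, 1.0, 0.1))]
peak_strong = omega[np.argmax(drude_loss_function(omega, 1.0, 0.5))]
# the larger collision frequency red-shifts (and broadens) the plasmon peak
```

In a full treatment ν itself is frequency-dependent, which is what the paper computes; the constant-ν Drude form here only shows the direction of the effect.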
The notion of plane shock waves is a macroscopic, very fruitful idealization of a near-discontinuous disturbance propagating at supersonic speed. Such a picture is comparable to that of a shoreline seen from a very high altitude. When viewed at the grain scale, where the structure of solids is inherently heterogeneous and stochastic, the features of shock waves are non-laminar, and field variables such as particle velocity and pressure fluctuate. This paper reviews selected aspects of such fluctuating nonequilibrium features of plane shock waves in solids, with a focus on grain-scale phenomena, and argues that a paradigm change is needed to achieve a deeper understanding of plane shock waves in solids.
In recent years, the Engine Combustion Network (ECN) has developed into a worldwide reference for understanding and describing engine combustion processes, successfully bringing together experimental and numerical efforts. Since experiments and numerical simulations both target the same boundary conditions, an accurate characterization of the stratified environment that is inevitably present in experimental facilities is required. The difference between the core temperature and the pressure-derived bulk temperature of pre-burn combustion vessels has been addressed in various previous publications. Additionally, thermocouple measurements have provided initial data on the boundary layer close to the injector nozzle, showing a transition to reduced ambient temperatures. The conditions at the start of fuel injection influence physicochemical properties of a fuel spray, including near-nozzle mixing, heat release computations, and combustion parameters. To address the temperature stratification in more detail, thermocouple measurements at larger distances from the spray axis have been conducted. Both the temperature field prior to the pre-combustion event that preconditions the high-temperature, high-pressure ambient and the stratification at the moment of fuel injection were studied. To resolve the cold boundary layer near the injector with better spatial resolution, Rayleigh scattering experiments and thermocouple measurements at various distances close to the nozzle have been carried out. The impact of the boundary layers and temperature stratification is illustrated and quantified using numerical simulations at Spray A conditions. In addition to a reference simulation with a uniform temperature field, six different stratified temperature distributions have been generated. These distributions were based on the mean experimental temperature with a superimposed randomized variance, likewise derived from the experiments.
The results show that an asymmetric flame structure arises in the computations when the stratified temperature input is used. In these predictions, first-stage ignition is advanced by 24 μs, while second-stage ignition is delayed by 11 μs. At the same time, a lift-off length difference of up to 1.1 mm between the top and the bottom of the spray is observed, and the lift-off length is less stable over time. Given this dependency, the temperature data are made available along with the vessel geometry data as a recommended basis for future numerical simulations.
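A minimal sketch of the stratified-field construction described above, a mean temperature profile with a superimposed spatially correlated random fluctuation of experimentally derived variance, might look as follows. The profile shape, the nominal ambient level, the 10 K standard deviation, and the correlation length are placeholders for illustration, not the measured ECN data:

```python
import numpy as np

rng = np.random.default_rng(0)

def stratified_temperature(z, t_mean, sigma, corr_len=5.0):
    """Mean temperature profile t_mean(z) plus a spatially correlated random
    fluctuation with pointwise standard deviation sigma(z)."""
    noise = rng.standard_normal(z.size)
    # smooth white noise with a Gaussian kernel to impose a correlation length
    kernel = np.exp(-0.5 * (np.arange(-3 * corr_len, 3 * corr_len + 1) / corr_len) ** 2)
    smooth = np.convolve(noise, kernel / kernel.sum(), mode="same")
    smooth /= smooth.std()  # rescale to unit variance after smoothing
    return t_mean + sigma * smooth

z = np.linspace(0.0, 100.0, 201)            # hypothetical axial grid [mm]
t_mean = 900.0 - 50.0 * np.exp(-z / 10.0)   # cooler boundary layer near the nozzle
sigma = 10.0 * np.ones_like(z)              # placeholder fluctuation level [K]
field = stratified_temperature(z, t_mean, sigma)
```

Drawing several such realizations (the paper uses six) gives an ensemble of stratified initial fields that share the experimental mean and variance while differing in the random fluctuation pattern.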
This paper presents a methodology for simultaneous fault detection, classification, and topology estimation for adaptive protection of distribution systems. The methodology estimates the probability of occurrence of each of these events using a hybrid structure that combines three sub-systems: a convolutional neural network for topology estimation, a fault detector based on predictive residual analysis, and a standard support vector machine with probabilistic output for fault classification. The input to all three sub-systems is the local voltage and current measurements. The convolutional neural network uses these local measurements in the form of sequential data to extract features and estimate the topology conditions. The fault detector is constructed with a Bayesian stage (a multitask Gaussian process) that computes a predictive distribution (assumed to be Gaussian) of the residuals from the input. Since the distribution is known, these residuals can be transformed to a standard normal distribution, whose values are then fed into a one-class support vector machine. This structure allows a one-class support vector machine to be used without parameter cross-validation, so the fault detector is fully unsupervised. Finally, a support vector machine uses the input to classify the fault types. All three sub-systems can operate in parallel for both performance and computational efficiency. We test all three sub-systems on a modified IEEE 123-bus system and compare and evaluate the results against standard approaches.
The domain wall-magnetic tunnel junction (DW-MTJ) is a versatile device that can simultaneously store data and perform computations. These three-terminal devices are promising for digital logic due to their nonvolatility, low-energy operation, and radiation hardness. Here, we augment the DW-MTJ logic gate with voltage-controlled magnetic anisotropy (VCMA) to improve the reliability of logical concatenation in the presence of realistic process variations. VCMA creates potential wells that allow for reliable and repeatable localization of domain walls (DWs). The DW-MTJ logic gate supports different fanouts, allowing for multiple inputs and outputs for a single device without affecting the area. We simulate a systolic array of DW-MTJ multiply-accumulate (MAC) units with 4-bit and 8-bit precision, which uses the nonvolatility of DW-MTJ logic gates to enable fine-grained pipelining and high parallelism. The DW-MTJ systolic array provides comparable throughput and efficiency to state-of-the-art CMOS systolic arrays while being radiation-hard. These results improve the feasibility of using DW-based processors, especially for extreme-environment applications such as space.
Hybrid-bonded silicon nitride thin-film lithium niobate (TFLN) Mach-Zehnder modulators (MZMs) at 1310 nm were designed with metal coplanar waveguide electrodes buried in the silicon-on-insulator (SOI) chip. The MZM devices showed greatly improved performance compared to earlier devices of a similar design, and performance similar to that of comparable MZM devices with gold electrodes fabricated on top of the TFLN layer. Both devices achieve a 3-dB electro-optic bandwidth greater than 110 GHz and voltage-driven optical extinction ratios greater than 28 dB. Half-wave voltage-length products (VπL) of 2.8 and 2.5 V·cm were measured for the 0.5 cm long buried-metal and 0.4 cm long top-gold-electrode MZMs, respectively.
A long-standing area of research for Eulerian shock wave physics codes has been the treatment of strength and damage in materials. Here we present a method that aids the analysis of strength and failure in shock physics applications where excessive diffusion of critical variables can occur and control the solution outcome. Eulerian methods excel at large-deformation simulations in general but are inaccurate in capturing structural behavior; Lagrangian methods provide better structural response, but their finite element meshes can become tangled. Therefore, a technique for merging Lagrangian and Eulerian treatments of material response within a single numerical framework was implemented in the Multiple Component computational shock physics hydrocode. The capability is a Lagrangian/Eulerian Particle Method (LEPM) that uses particles to interface a Lagrangian treatment of material strength with a more traditional Eulerian treatment of the equation of state (EOS). Lagrangian numerical methods avoid the advection diffusion found in Eulerian methods, which typically strongly affects the internal variables of strength constitutive laws, such as equivalent plastic strain, porosity, and/or damage. The Lagrangian capability enhances existing capabilities and permits accurate predictions of high-rate, large-deformation, and/or shock loading of mechanical structures.
Earthquake location algorithms typically require travel-time calculation. Despite advances in algorithm efficiency and computational power, performing this calculation in 3D can still be prohibitively expensive in terms of compute and storage. Implementing high-resolution 3D models in routine earthquake location would be a significant step forward in most of the world. Machine learning algorithms have the potential to act as substitutes for travel-time calculation algorithms or stored travel-time tables. We investigate EikoNet, a physics-informed neural network that estimates travel times very quickly with negligible memory overhead. Specifically, we apply EikoNet to the Wasatch Fault Community Velocity Model (WFCVM), a highly detailed and complex 3D velocity model of the Salt Lake City, UT region. While routine locations in the area and studies of the 2020 Magna, UT earthquake sequence used a 1D velocity model, a 3D model may improve our understanding of the structure of the major fault in the region. Our primary goal was to test the speed, memory requirements, and accuracy of EikoNet compared to a reference eikonal solver. We find that while EikoNet is exceedingly fast and requires little memory overhead, achieving acceptable accuracy in the estimated travel times is difficult and requires extensive computational resources.
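The physics-informed constraint behind a network like EikoNet is the eikonal equation |∇T(x)| = 1/v(x): the network's travel-time output is trained so that this residual vanishes everywhere. A minimal numerical check of the residual, using finite differences and the analytic travel time for a homogeneous medium (EikoNet itself differentiates the network with automatic differentiation rather than finite differences), might look like this; the velocity value and test points are arbitrary:

```python
import numpy as np

def eikonal_residual(travel_time_fn, x, v, h=1e-4):
    """Physics-informed residual |grad T(x)| - 1/v, with the gradient taken
    by central finite differences."""
    grad = np.array([
        (travel_time_fn(x + h * e) - travel_time_fn(x - h * e)) / (2.0 * h)
        for e in np.eye(x.size)
    ])
    return np.linalg.norm(grad) - 1.0 / v

# in a homogeneous medium, the exact travel time from a source at the origin
# is T(x) = |x| / v, so the eikonal residual must vanish
v = 3.5  # hypothetical homogeneous P-wave speed [km/s]

def exact_travel_time(x):
    return np.linalg.norm(x) / v

points = np.array([[1.0, 2.0, 3.0], [0.5, -1.0, 2.0], [-2.0, 1.0, 0.5]])
residuals = [eikonal_residual(exact_travel_time, x, v) for x in points]
```

For a heterogeneous model such as the WFCVM there is no closed-form T(x), which is why the residual itself becomes the training loss: driving it toward zero over sampled source-receiver pairs is how the network learns the travel-time field.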