Bayesian Framework for Forecasting Dynamical Systems in the Presence of Corrupted Data
Abstract not provided.
Combustion and Flame
Surface mass loss rates due to sublimation and oxidation at temperatures of 3000–7000 K have been measured in a shock tube for graphite and carbon black (CB) particles. Diagnostics are presented for measuring surface mass loss rates by diffuse backlit illumination extinction imaging and thermal emission. The surface mass loss rate is found by regression fitting of the extinction and emission signals under an independent spherical primary particle assumption. Measured graphite sublimation and oxidation rates are an order of magnitude greater than the corresponding CB rates. It is speculated that the difference between the CB and graphite surface mass loss rates is largely due to the primary particle assumption of the presented technique, which misrepresents the effective surface area of an aggregate particle in which primary particles overlap and shield inner particles. Measured sublimation rates are compared to sublimation models in the literature; graphite shows fair agreement with the models, while the CB rates fall below the model predictions, likely because the particle shielding effect is not accounted for in the sublimation model.
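As a hedged illustration of the spherical primary particle assumption referenced above (not the paper's regression code), the sketch below converts an assumed diameter history D(t) into a surface mass flux using m = ρπD³/6 and A = πD²; the density value and the diameter trace are purely illustrative.

```python
# Hedged sketch: surface mass loss rate from an assumed spherical primary
# particle whose diameter D(t) is inferred (e.g., by regressing extinction
# and emission signals, as described in the abstract). Numbers are illustrative.
import numpy as np

rho = 1860.0                              # kg/m^3, assumed bulk density of the carbon
t = np.linspace(0.0, 1e-3, 200)           # s, illustrative time base
D = 100e-9 * np.exp(-t / 5e-4)            # m, illustrative shrinking diameter

# For a sphere, m = rho*pi*D^3/6 and A = pi*D^2, so the surface mass flux is
#   j = -(dm/dt)/A = -(rho/2) * dD/dt   [kg m^-2 s^-1]
dD_dt = np.gradient(D, t)
flux = -(rho / 2.0) * dD_dt

print(f"peak surface mass loss rate ~ {flux.max():.3e} kg/m^2/s")
```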
IEEE Journal of Photovoltaics
Stereo high-speed video of photovoltaic modules undergoing laboratory hail tests was processed using digital image correlation to determine module surface deformation during and immediately following impact. The purpose of this work was to demonstrate a methodology for characterizing module impact response differences as a function of construction and incident hail parameters. Video capture and digital image analysis were able to resolve out-of-plane module deformation to ±0.1 mm at 11 kHz on an in-plane grid of 10 × 10 mm over the area of a 1 × 2 m commercial photovoltaic module. With lighting and optical adjustments, the technique was adaptable to arbitrary module designs, including size, backsheet color, and cell interconnection. Impacts were observed to produce an initially localized dimple in the glass surface, with peak deflection proportional to the square root of incident energy. Subsequent deformation propagation and dissipation were also captured, along with behavior for instances when the module glass fractured. Natural frequencies of the module were identifiable by analyzing module oscillations post-impact. Limitations of the measurement technique were that the impacting ice ball obscured the data field immediately surrounding the point of contact, and both ice and glass fracture events occurred within 100 μs, which was not resolvable at the chosen frame rate. Increasing the frame rate and imaging the back surface of the module could avoid these issues. Applications for these data include validating computational models for hail impacts, identifying the natural frequencies of a module, and identifying damage initiation mechanisms.
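In the spirit of the post-impact oscillation analysis mentioned above, the hedged sketch below estimates a dominant frequency from a deflection trace by FFT; the 11 kHz sampling rate comes from the abstract, but the signal itself is synthetic.

```python
# Hedged sketch: estimating a module natural frequency from a post-impact
# out-of-plane deflection trace via FFT. Synthetic decaying oscillation only.
import numpy as np

fs = 11000.0                                          # Hz, frame rate from the abstract
t = np.arange(0, 0.2, 1.0 / fs)
w = np.exp(-t / 0.05) * np.sin(2 * np.pi * 37.0 * t)  # synthetic 37 Hz ring-down

spec = np.abs(np.fft.rfft(w))
freqs = np.fft.rfftfreq(len(w), 1.0 / fs)
print(f"dominant frequency ~ {freqs[np.argmax(spec[1:]) + 1]:.1f} Hz")  # skip the DC bin
```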
Abstract not provided.
Quantum Science and Technology
Junctions are fundamental elements that support qubit locomotion in two-dimensional ion trap arrays and enhance connectivity in emerging trapped-ion quantum computers. In surface ion traps, they have typically been implemented by shaping radio frequency (RF) electrodes in a single plane to minimize the disturbance to the pseudopotential. However, this method introduces issues related to RF lead routing that can increase power dissipation and the likelihood of voltage breakdown. Here, we propose and simulate a novel two-layer junction design incorporating two perpendicularly rotoreflected (rotated, then reflected) linear ion traps. The traps are vertically separated and create a trapping potential between their respective planes. The orthogonal orientation of the RF electrodes of each trap relative to the other provides perpendicular axes of confinement that can be used to realize transport in two dimensions. While this design introduces manufacturing and operating challenges, since two separate structures must be precisely positioned relative to each other in the vertical direction and optical access from the top is obscured, it obviates the need to route RF leads below the top surface of the trap and eliminates the pseudopotential bumps that occur in typical junctions. In this paper, the stability of idealized ion transfer in the new configuration is demonstrated, both by solving the Mathieu equation analytically to identify the stable regions and by numerically modeling ion dynamics. Our novel junction layout has the potential to enhance the flexibility of microfabricated ion trap control, enabling large-scale trapped-ion quantum computing.
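As a hedged illustration of the kind of Mathieu-equation stability check described above (the parameters here are illustrative, not the paper's trap values), the sketch below classifies a point (a, q) of u'' + (a - 2q cos 2t) u = 0 as stable or unstable from the trace of its monodromy matrix over one period.

```python
# Hedged sketch: Mathieu-equation stability via Floquet analysis.
# |trace(M)| < 2 over one period (pi) implies bounded, stable motion.
import numpy as np
from scipy.integrate import solve_ivp

def monodromy(a, q):
    def rhs(t, y):
        return [y[1], -(a - 2.0 * q * np.cos(2.0 * t)) * y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):              # two independent initial conditions
        sol = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.column_stack(cols)

def is_stable(a, q):
    return abs(np.trace(monodromy(a, q))) < 2.0

print(is_stable(0.0, 0.3))   # True: inside the first stability region
print(is_stable(0.0, 1.0))   # False: beyond the familiar q ~ 0.908 boundary at a = 0
```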
Abstract not provided.
Materials Characterization
High-throughput image segmentation of atomic resolution electron microscopy data poses an ongoing challenge for materials characterization. In this paper, we investigate the application of the polyhedral template matching (PTM) method, a technique widely employed for visualizing three-dimensional (3D) atomistic simulations, to the analysis of two-dimensional (2D) atomic resolution electron microscopy images. This technique is complementary to other atomic resolution data reduction techniques, such as the centrosymmetry parameter, that use the measured atomic peak positions as the starting input. Furthermore, since the template matching process also gives a measure of the local rotation, the method can be used to segment images based on local orientation. We begin by presenting a 2D implementation of the PTM method suitable for atomic resolution images. We then demonstrate the technique's application to atomic resolution scanning transmission electron microscopy images from close-packed metals, providing examples of the analysis of twins and other grain boundaries in FCC gold and of martensite phases in 304L austenitic stainless steel. Finally, we discuss factors, such as positional errors in the image peak locations, that can affect the accuracy and sensitivity of the structural determinations.
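A hedged, much-simplified sketch of the template-matching idea (not the paper's implementation) is shown below: a candidate atom's six nearest-neighbor vectors are scored against an ideal 2D hexagonal template by a brute-force rotation scan, returning both an RMSD and the local orientation used for segmentation.

```python
# Hedged sketch of a 2D template-matching step in the spirit of PTM:
# score measured neighbour vectors against a hexagonal template.
import numpy as np

def hex_template():
    ang = np.deg2rad(60.0 * np.arange(6))
    return np.column_stack([np.cos(ang), np.sin(ang)])

def match_hex(neighbors):
    """neighbors: (6, 2) displacement vectors to the 6 nearest peaks."""
    nb = neighbors / np.mean(np.linalg.norm(neighbors, axis=1))  # scale out the lattice constant
    tmpl = hex_template()
    best_rmsd, best_phi = np.inf, 0.0
    for phi in np.deg2rad(np.arange(0.0, 60.0, 0.25)):           # hexagonal symmetry: scan 0-60 deg
        c, s = np.cos(phi), np.sin(phi)
        rot = tmpl @ np.array([[c, s], [-s, c]])
        # nearest-neighbour assignment of template points to measured points
        d = np.linalg.norm(rot[:, None, :] - nb[None, :, :], axis=2)
        rmsd = np.sqrt(np.mean(d.min(axis=1) ** 2))
        if rmsd < best_rmsd:
            best_rmsd, best_phi = rmsd, phi
    return best_rmsd, np.rad2deg(best_phi)

# Example: a perfect hexagon rotated by 10 degrees gives RMSD ~ 0 and angle ~ 10
th = np.deg2rad(10.0)
R = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
print(match_hex(hex_template() @ R))
```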
Geothermics
Treatment of lost circulation can represent anywhere from 5 to 25 % of the cost of drilling geothermal wells. The cost of the materials used for lost circulation treatment is less important than their effectiveness at reducing fluid losses. In geothermal systems, the high temperatures (>90 °C) are expected to degrade many commonly used lost circulation materials over time. This degradation could compromise different materials' ability to mitigate fluid loss, creating more non-productive time as multiple treatments are needed, but may result in recovering desired permeability zones within the reservoir section over time. This research aimed to study how thermal degradation of eight different lost circulation materials affected their properties relevant to sealing loss zones in geothermal wells. Mass loss experiments were conducted with each material at temperatures of 90–250 °C for 1–42 days to measure the breakdown of the material at geothermal conditions, with gases collected during several experiments to determine the waste produced during degradation. Compaction experiments were conducted with the degraded materials to show how temperature reduced the rigidity and increased packing of the materials. Viscosity tests were conducted to show the impact of different materials on drilling fluid rheology. Microscope observations were conducted to characterize the alterations to each material due to thermal degradation. Organic materials tend to degrade more than inorganic materials, with organics like microcellulose, cotton seed hulls, and sawdust losing 30–50 % of their mass after 1 day of heating at 200 °C, while inorganics like magma fiber lose only ∼5–10 % of their mass under the same conditions. Granular materials are the strongest when compacted despite any mass loss, while fibrous and flaky materials are fairly weak and break down easily under stress. The materials do not generally affect fluid rheology unless they have a viscosifying agent as part of the mixture. Microscopic analysis showed that more rigid materials like microcellulose and cedar fiber degrade in a brittle manner, splitting and fracturing, while others like cotton seed hulls degrade in a more ductile manner, forming meshes or clumps of material. The thermal breakdown of the lost circulation materials tested suggests that each material should also be classified by its degree of thermal degradability, as at certain temperatures the materials can lose the capability to bridge loss zones around the wellbore.
Journal of Materials Science
Recent experimentally validated alloy design theories have demonstrated nanocrystalline binary alloys that are stable against thermally induced grain growth. An open question is whether such thermal stability also translates to stability under irradiation. In this study, we investigate the response to heavy ion irradiation of a nanocrystalline platinum-gold alloy that is known from previous studies to be thermally stable. Heavy ion irradiation was conducted at both room temperature and elevated temperatures on films of nanocrystalline platinum and platinum-gold. Using scanning/transmission electron microscopy equipped with energy-dispersive spectroscopy and automated crystallographic orientation mapping, we observe substantial grain growth in the irradiated area compared to the control region beyond the range of the heavy ions, as well as compositional redistribution under these conditions, and we discuss mechanisms underpinning this instability. These findings highlight that grain boundary stability against one external stimulus, such as heat, does not always translate into grain boundary stability under other stimuli, such as displacement damage.
Abstract not provided.
Journal of Computational Physics
In a computational fluid model of the atmosphere, the advective transport of trace species, or tracers, can be computationally expensive. For efficiency, models often use semi-Lagrangian advection methods. High-order interpolation semi-Lagrangian (ISL) methods, in particular, can be extremely efficient if the problem of property preservation specific to them can be addressed. Atmosphere models often use geometrically and logically nonuniform grids for efficiency and, as a result, element-based discretizations. Such grids and discretizations make stability a particular problem for ISL methods. Generally, high-order, element-based ISL methods that use the natural polynomial interpolant associated with a nodal finite-element discretization are unstable. We derive new bases having order of accuracy up to nine, with positive nodal weights, that stabilize the element-based ISL method. We use these bases to construct the linear advection operator in the property-preserving Interpolation Semi-Lagrangian Element-based Transport (Islet) method. We then discuss key software implementation details. Finally, we show performance results for the Energy Exascale Earth System Model's atmosphere dynamical core, comparing the original and new transport methods. These simulations used up to 27,600 graphics processing units (GPUs) on the Oak Ridge Leadership Computing Facility's Summit supercomputer.
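To make the ISL idea concrete, here is a hedged toy sketch, assuming a uniform periodic 1D grid and a constant wind: each step traces departure points and interpolates the field there with a local cubic Lagrange interpolant. The paper's element-based stabilized bases and property-preservation machinery are not represented.

```python
# Hedged sketch: one interpolation semi-Lagrangian (ISL) step on a uniform
# periodic grid with 4-point (cubic) Lagrange interpolation.
import numpy as np

def isl_step(q, u, dt, dx):
    n = len(q)
    x = np.arange(n) * dx
    xd = (x - u * dt) % (n * dx)                  # departure points for constant wind u
    j = np.floor(xd / dx).astype(int)             # grid index just left of each departure point
    s = xd / dx - j                               # normalized offset in [0, 1)
    idx = np.array([(j - 1) % n, j % n, (j + 1) % n, (j + 2) % n])
    w = np.array([-s * (s - 1) * (s - 2) / 6,     # Lagrange weights for nodes -1, 0, 1, 2
                  (s + 1) * (s - 1) * (s - 2) / 2,
                  -(s + 1) * s * (s - 2) / 2,
                  (s + 1) * s * (s - 1) / 6])
    return np.sum(w * q[idx], axis=0)

# Advect a Gaussian once around a periodic unit domain
n, dx, u, dt = 200, 1.0 / 200, 1.0, 0.8 / 200
q = np.exp(-((np.arange(n) * dx - 0.5) / 0.05) ** 2)
for _ in range(int(round(1.0 / (u * dt)))):
    q = isl_step(q, u, dt, dx)
print("max after one revolution:", q.max())
```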
Journal of Physical Chemistry C
Charging a Li-ion battery requires Li-ion transport between the cathode and the anode. This Li-ion transport is dependent on (among other factors) the electrostatic environment that the ion encounters within the solid electrolyte interphase (SEI), which separates the anode from the surrounding electrolyte. A previous first-principles work has illuminated the reaction barriers through likely atomistic SEI environments but has had difficulty accurately reflecting the larger electrostatic potential landscape that an ion encounters moving through the SEI. In this work, we apply the recently developed quantum continuum approximation (QCA) technique to provide an equilibrium electronic potentiostat for first-principles interface calculations. Using QCA, we calculate the potential barrier for Li-ion transport through LiF, Li2O, and Li2CO3 SEIs along with LiF-LiF and LiF-Li2O grain boundaries, all paired with Li metal anodes. We demonstrate that the SEI potential barrier is dependent on the electrochemical potentials of the anode in each system. Finally, we use these techniques to estimate the change in the diffusion barrier for a Li ion moving in a LiF SEI as a function of the anode potential. We find that properly accounting for interface and electronic voltage effects significantly lowers reaction barriers compared with previous literature results.
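As a hedged, generic illustration (standard transition-state theory, not the paper's QCA workflow) of why the anode-potential dependence of the barrier matters, the sketch below shows how strongly a change in the migration barrier shifts the Li-ion hop rate; the attempt frequency and barrier values are assumptions.

```python
# Hedged, generic transition-state estimate: k = nu * exp(-Ea / (kB * T)).
# Illustrative numbers only, not results from the paper.
import numpy as np

kB, T, nu = 8.617e-5, 300.0, 1e13          # eV/K, K, assumed attempt frequency in Hz
for Ea in (0.6, 0.4):                      # illustrative barriers in eV
    print(f"Ea = {Ea:.1f} eV -> hop rate ~ {nu * np.exp(-Ea / (kB * T)):.2e} Hz")
```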
Acta Materialia
Density-functional theory (DFT) is used to identify phase equilibria in multi-principal-element and high-entropy alloys (MPEAs/HEAs), including duplex-phase and eutectic microstructures. A combination of composition-dependent formation energy and electronic-structure-based ordering parameters was used to identify a transition from FCC- to BCC-favoring mixtures, and these predictions were experimentally validated in the Al-Co-Cr-Cu-Fe-Ni system. A sharp crossover in lattice structure and dual-phase stability as a function of composition was predicted via DFT and validated experimentally. The impact of solidification kinetics and thermodynamic stability was explored experimentally using a range of techniques, from slow (castings) to rapid (laser remelting) solidification, which showed a decoupling of phase fraction from thermal history, i.e., phase fraction was found to be solidification-rate independent, enabling tuning of a multi-modal cell and grain size ranging from the nanoscale through the macroscale. Strength and ductility tradeoffs for select processing parameters were investigated via uniaxial tension and small-punch testing on specimens manufactured via powder-based additive manufacturing (directed-energy deposition). This work establishes a pathway for design and optimization of next-generation multiphase superalloys via tailoring of structural and chemical ordering in concentrated solid solutions.
Journal of Materials Science Research
The analysis of the work hardening variation with stress provides insight into the operative stress-strain mechanisms in material systems. The onset of plasticity can be assessed and related to the ensuing plastic deformation up to the structural instability using one constitutive relationship that incorporates both rapid work hardening (Stage 3) and the asymptotic leveling of stress (Stage 4). Results are presented for the mechanical behavior analysis of Ti-6Al-4V, wherein the work hardening variations of Stages 3 and 4 are found to: be linked through a constitutive relationship; be useful in a Hall-Petch formulation of yield strength; and provide the basis for a two point-slope fit method to model the experimental work hardening and stress-strain behavior.
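As a hedged illustration (not necessarily the paper's exact formulation), one widely used constitutive relationship that captures rapid Stage 3 hardening decaying toward an asymptotic Stage 4 stress is the Voce/Kocks-Mecking form, shown here alongside the standard Hall-Petch dependence of yield strength on grain size d:

```latex
% Illustrative forms only; sigma_s is the saturation (Voce) stress, theta_0 the
% initial Stage 3 hardening rate, theta_IV the small residual Stage 4 rate,
% and k the Hall--Petch coefficient.
\[
  \theta \equiv \frac{d\sigma}{d\varepsilon}
        = \theta_0\left(1 - \frac{\sigma}{\sigma_s}\right) + \theta_{\mathrm{IV}},
  \qquad
  \sigma_y = \sigma_0 + k\,d^{-1/2}.
\]
```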
npj Spintronics (Online)
Skyrmions and antiskyrmions are nanoscale swirling textures of magnetic moments formed by chiral interactions between atomic spins in magnetic noncentrosymmetric materials and multilayer films with broken inversion symmetry. These quasiparticles are of interest for use as information carriers in next-generation, low-energy spintronic applications. To develop skyrmion-based memory and logic, we must understand skyrmion-defect interactions with two main goals—determining how skyrmions navigate intrinsic material defects and determining how to engineer disorder for optimal device operation. Here, we introduce a tunable means of creating a skyrmion-antiskyrmion system by engineering the disorder landscape in FeGe using ion irradiation. Specifically, we irradiate epitaxial B20-phase FeGe films with 2.8 MeV Au4+ ions at varying fluences, inducing amorphous regions within the crystalline matrix. Using low-temperature electrical transport and magnetization measurements, we observe a strong topological Hall effect with a double-peak feature that serves as a signature of skyrmions and antiskyrmions. These results are a step towards the development of information storage devices that use skyrmions and antiskyrmions as storage bits, and our system may serve as a testbed for theoretically predicted phenomena in skyrmion-antiskyrmion crystals.
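For context on how a topological Hall signature is typically isolated (a standard decomposition, not necessarily the paper's exact fitting procedure), the Hall resistivity is commonly written as an ordinary term, an anomalous term proportional to the magnetization, and the topological contribution that carries the skyrmion/antiskyrmion signal:

```latex
% Standard decomposition used to extract a topological Hall signal; R_0 and
% R_s are the ordinary and anomalous Hall coefficients.
\[
  \rho_{xy}(H) = R_0\,\mu_0 H + R_s\,M(H) + \rho_{xy}^{\mathrm{THE}}(H).
\]
```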
Cryptography
Recent evaluations of counter-based periodic testing strategies for fault detection in microprocessors (μPs) have shown that only a small set of counters is needed to provide complete coverage of severe faults. Severe faults are defined as faults that leak sensitive information, e.g., an encryption key on the output of a serial port. Alternatively, fault detection can be accomplished by executing instructions that periodically test the control and functional units of the μP. In this paper, we propose a fault detection method that utilizes an 'engineered' executable program combined with a small set of strategically placed counters in pursuit of a hardware periodic built-in self-test (PBIST). We analyze two distinct methods for generating such a binary: the first uses an automatic test pattern generation (ATPG)-based methodology, and the second uses a process whereby existing counter-based node-monitoring infrastructure is utilized. We show that complete coverage of all leakage faults is possible using relatively small binaries with low latency to fault detection and by utilizing only a few strategically placed counters in the μP.
Journal of Instrumentation
The Single Volume Scatter Camera (SVSC) Collaboration aims to develop portable neutron imaging systems for a variety of applications in nuclear non-proliferation. Conventional double-scatter neutron imagers are composed of several separate detector volumes organized in at least two planes. A neutron must scatter in two of these detector volumes for its initial trajectory to be reconstructed. As such, these systems typically have a large footprint and poor geometric efficiency. We report on the design and characterization of a prototype monolithic neutron scatter camera that is intended to significantly improve upon the geometrical shortcomings of conventional neutron cameras. The detector consists of a 50 mm × 56 mm × 60 mm monolithic block of EJ-204 plastic scintillator instrumented on two faces with arrays of 64 Hamamatsu S13360-6075PE silicon photomultipliers (SiPMs). The electronic crosstalk is limited to <5% between adjacent channels and <0.1% between all other channel pairs. SiPMs introduce a significantly elevated dark count rate relative to PMTs, as well as correlated noise from after-pulsing and optical crosstalk. In this article, we characterize the dark count rate and optical crosstalk and present a modified event reconstruction likelihood function that accounts for them. We find that the average dark count rate per SiPM is 4.3 MHz, with a standard deviation of 1.5 MHz among devices. The analysis method we employ to measure internal optical crosstalk also naturally yields the mean and width of the single-electron pulse height. We calculate separate contributions to the width of the single-electron pulse height from electronic noise and avalanche fluctuations. We demonstrate a single-photon timing resolution of (128 ± 4) ps. Finally, a coincidence analysis is employed to measure external (pixel-to-pixel) optical crosstalk. We present a map of the average external crosstalk probability between 2×4 groups of SiPMs, as well as the in situ timing characteristics extracted from the coincidence analysis. Further work is needed to characterize the performance of the camera in reconstructing single- and double-site interactions, as well as in image reconstruction.
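A hedged, schematic stand-in for the kind of modified likelihood described above (not the collaboration's actual reconstruction code): per-SiPM counts are modeled as Poisson with a mean that combines scintillation light, an optical-crosstalk excess, and dark counts accumulated over the integration window. The crosstalk probability and window length are assumptions; the 4.3 MHz dark rate is the reported average.

```python
# Hedged sketch of a per-channel Poisson likelihood that folds in dark counts
# and an optical-crosstalk excess. Parameter values are illustrative.
import numpy as np
from scipy.stats import poisson

def log_likelihood(counts, mu_scint, p_xtalk=0.2, dark_rate=4.3e6, window=100e-9):
    """counts, mu_scint: arrays over SiPM channels (observed PE, expected scintillation PE)."""
    mu = (1.0 + p_xtalk) * mu_scint + dark_rate * window   # crosstalk excess + dark counts in window
    return np.sum(poisson.logpmf(counts, mu))

counts = np.array([3, 5, 0, 1])
mu_scint = np.array([2.5, 4.0, 0.2, 0.8])
print(log_likelihood(counts, mu_scint))
```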
PRX Energy
Abstract not provided.
Micro and Nano Engineering
A thermally driven, micrometer-scale switch technology has been created that utilizes the ErH3/Er2O3 materials system. The technology comprises novel thin film switches, interconnects, on-board micro-scale heaters for passive thermal environment sensing, and on-board micro-scale heaters for individualized switch actuation. Switches undergo a thermodynamically stable reduction/oxidation reaction leading to a multi-decade (>11 orders of magnitude) change in resistance. The resistance contrast remains after cooling to room temperature, making them suitable as thermal fuses. An activation energy of 290 kJ/mol was calculated for the switch reaction, and a thermo-kinetic model was employed to determine switch times of 120 ms at 560 °C, with the potential to scale to 1 ms at 680 °C.
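As a hedged back-of-the-envelope check, assuming a simple Arrhenius scaling of the switching time with the reported 290 kJ/mol activation energy (which may not capture the paper's full thermo-kinetic model), the 120 ms time at 560 °C extrapolates to the same order as the quoted ~1 ms at 680 °C:

```python
# Hedged Arrhenius scaling of switch time: t2 = t1 * exp[(Ea/R) * (1/T2 - 1/T1)]
import numpy as np

Ea, R = 290e3, 8.314                              # J/mol, J/(mol K)
t1, T1, T2 = 120e-3, 560 + 273.15, 680 + 273.15   # s, K, K
t2 = t1 * np.exp((Ea / R) * (1.0 / T2 - 1.0 / T1))
print(f"predicted switch time at 680 C: {t2 * 1e3:.2f} ms")   # ~0.6 ms, same order as ~1 ms
```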
International Journal of Electrical Power and Energy Systems
This manuscript presents a complete framework for the development and verification of physics-informed neural networks with application to the alternating-current power flow (ACPF) equations. Physics-informed neural networks (PINNs) have received considerable interest within the power systems community for their ability to harness the underlying physical equations to produce simple neural network architectures that achieve high accuracy using limited training data. The methodology developed in this work builds on existing methods and explores important new aspects of implementing PINNs, including: (i) obtaining operationally relevant training data, (ii) efficiently training PINNs and using pruning techniques to reduce their complexity, and (iii) globally verifying the worst-case predictions given known physical constraints. The methodology is applied to the IEEE 14- and 118-bus systems, where PINNs show substantially improved accuracy in a data-limited setting and attain better guarantees with respect to worst-case predictions.
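A hedged sketch of the physics term such a PINN could penalize (illustrative only, not the paper's implementation): the AC power flow mismatch at each bus, evaluated from predicted voltage magnitudes and angles and added to the usual data loss.

```python
# Hedged sketch: AC power flow residual for a PINN-style physics loss.
#   P_i = V_i * sum_j V_j (G_ij cos(th_i - th_j) + B_ij sin(th_i - th_j))
#   Q_i = V_i * sum_j V_j (G_ij sin(th_i - th_j) - B_ij cos(th_i - th_j))
import numpy as np

def acpf_residual(V, theta, G, B, P_spec, Q_spec):
    dth = theta[:, None] - theta[None, :]
    P = V * np.sum(V[None, :] * (G * np.cos(dth) + B * np.sin(dth)), axis=1)
    Q = V * np.sum(V[None, :] * (G * np.sin(dth) - B * np.cos(dth)), axis=1)
    return np.concatenate([P - P_spec, Q - Q_spec])

# In a PINN, loss = data_mse(predictions, labels) + w * mean(acpf_residual(...) ** 2)

# Tiny 2-bus check: at a flat start with zero injections the residual vanishes.
y = 1.0 / complex(0.01, 0.1)
Y = np.array([[y, -y], [-y, y]])
print(acpf_residual(np.ones(2), np.zeros(2), Y.real, Y.imag, np.zeros(2), np.zeros(2)))
```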
Abstract not provided.
Neuromorphic Computing and Engineering
As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited. In this paper, we focus our attention on Simulation Tool for Asynchronous Cortical Streams (STACS), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++. We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.
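As a hedged illustration of the kind of parallel data format referred to above (a ParMETIS-style distributed CSR with vtxdist/xadj/adjncy arrays is the classic example used by graph partitioners; the exact STACS extension may differ), here is a tiny made-up four-neuron network split across two partitions, with a per-edge array standing in for synapse state.

```python
# Hedged sketch: distributed CSR layout of a 4-neuron network on 2 partitions.
import numpy as np

# Global neuron ownership: partition p owns neurons vtxdist[p] .. vtxdist[p+1]-1
vtxdist = np.array([0, 2, 4])

# Partition 0 owns neurons 0 and 1
xadj_p0   = np.array([0, 2, 3])        # row pointers into adjncy (len(adjncy) == xadj[-1])
adjncy_p0 = np.array([1, 2, 3])        # neuron 0 -> {1, 2}, neuron 1 -> {3}

# Partition 1 owns neurons 2 and 3
xadj_p1   = np.array([0, 1, 2])
adjncy_p1 = np.array([0, 1])           # neuron 2 -> {0}, neuron 3 -> {1}

# A synapse-state extension adds per-edge arrays aligned with adjncy, e.g. weights or delays.
weights_p0 = np.array([0.4, -0.1, 0.7])
print(vtxdist, xadj_p0, adjncy_p0, weights_p0)
```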
Journal of Quantitative Spectroscopy and Radiative Transfer
Monte Carlo simulations are at the heart of many high-fidelity simulations and analyses for radiation transport systems. As is the case with any complex computational model, it is important to propagate sources of input uncertainty and characterize how they affect model output. Unfortunately, uncertainty quantification (UQ) is made difficult by the stochastic variability that Monte Carlo transport solvers introduce. The standard method to avoid corrupting the UQ statistics with the transport solver noise is to increase the number of particle histories, resulting in very high computational costs. In this contribution, we propose and analyze a sampling estimator based on the law of total variance to compute UQ variance even in the presence of residual noise from Monte Carlo transport calculations. We rigorously derive the statistical properties of the new variance estimator, compare its performance to that of the standard method, and demonstrate its use on neutral particle transport model problems involving both attenuation and scattering physics. We illustrate, both analytically and numerically, the estimator's statistical performance as a function of available computational budget and the distribution of that budget between UQ samples and particle histories. We show analytically and corroborate numerically that the new estimator is unbiased, unlike the standard approach, and is more accurate and precise than the standard estimator for the same computational budget.
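A hedged sketch of the law-of-total-variance idea on a synthetic stand-in problem (not the paper's transport model or exact estimator): the sample variance of the noisy per-sample Monte Carlo estimates is debiased by subtracting the average estimated solver variance per sample.

```python
# Hedged sketch of a law-of-total-variance correction:
#   Var_theta[Q(theta)] ~ sample_var(Qhat_i) - mean_i(shat_i^2 / N_hist)
import numpy as np

rng = np.random.default_rng(0)
n_uq, n_hist = 200, 500

Q_true = rng.normal(1.0, 0.2, n_uq)                                  # QoI vs. uncertain input
tallies = Q_true[:, None] + rng.normal(0.0, 1.0, (n_uq, n_hist))     # noisy per-history tallies

Qhat = tallies.mean(axis=1)                          # noisy MC estimate per UQ sample
shat2 = tallies.var(axis=1, ddof=1)                  # per-sample transport variance estimate

naive = Qhat.var(ddof=1)                             # biased high by the solver noise
corrected = naive - np.mean(shat2 / n_hist)          # law-of-total-variance correction
print(f"true {0.2**2:.4f}  naive {naive:.4f}  corrected {corrected:.4f}")
```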
JOM
The crystal plasticity finite element method (CPFEM) has been an integrated computational materials engineering (ICME) workhorse for studying materials behaviors and structure-property relationships over the last few decades. These relations are mappings from the microstructure space to the materials properties space. Due to the stochastic and random nature of microstructures, there is always some uncertainty associated with materials properties, for example, in homogenized stress-strain curves. For critical applications with strong reliability needs, it is often desirable to quantify the microstructure-induced uncertainty in the context of structure-property relationships. However, this uncertainty quantification (UQ) problem often incurs a large computational cost because many statistically equivalent representative volume elements (SERVEs) are needed. In this article, we apply a multi-level Monte Carlo (MLMC) method to CPFEM to study the uncertainty in stress-strain curves, given an ensemble of SERVEs at multiple mesh resolutions. By using the information at coarse meshes, we show that it is possible to approximate the response at fine meshes with a much reduced computational cost. We focus on problems where the model output is multi-dimensional, which requires us to track multiple quantities of interest (QoIs) at the same time. Our numerical results show that MLMC can accelerate UQ tasks by around 2.23× compared to the classical Monte Carlo (MC) method, which is widely known as ensemble averaging in the CPFEM literature.
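A hedged, minimal two-level MLMC sketch on a synthetic surrogate (not CPFEM): many cheap coarse-level samples estimate the bulk of the mean, and a small number of coupled fine/coarse pairs correct the coarse-level bias.

```python
# Hedged sketch of a two-level MLMC mean estimator:
#   E[Q_fine] ~ mean(Q_coarse) + mean(Q_fine - Q_coarse)   (coupled samples)
import numpy as np

rng = np.random.default_rng(1)

def coarse(x):          # cheap, slightly biased "coarse mesh" surrogate
    return np.sin(x) + 0.05 * np.cos(3 * x)

def fine(x):            # expensive "fine mesh" response
    return np.sin(x)

x_coarse = rng.uniform(0, np.pi, 10000)      # many cheap coarse samples
x_couple = rng.uniform(0, np.pi, 100)        # few coupled fine/coarse samples

mlmc = coarse(x_coarse).mean() + (fine(x_couple) - coarse(x_couple)).mean()
mc   = fine(rng.uniform(0, np.pi, 100)).mean()   # plain MC with the same fine budget
print(f"MLMC estimate {mlmc:.4f}, plain MC estimate {mc:.4f}, exact {2/np.pi:.4f}")
```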