Treatment of lost circulation can represent anywhere from 5 to 25 % of the cost of drilling geothermal wells. The cost of the materials used for lost circulation treatment is less important than their effectiveness at reducing fluid losses. In geothermal systems, the high temperatures (>90 °C) are expected to degrade many commonly used lost circulation materials over time. This degradation could compromise the materials' ability to mitigate fluid loss, creating more non-productive time as multiple treatments are needed, but may also allow desired permeability zones within the reservoir section to be recovered over time. This research aimed to study how thermal degradation of eight different lost circulation materials affected their properties relevant to sealing loss zones in geothermal wells. Mass loss experiments were conducted with each material at temperatures of 90–250 °C for 1–42 days to measure the breakdown of the material at geothermal conditions, collecting gases during several experiments to determine the waste produced during degradation. Compaction experiments were conducted with the degraded materials to show how temperatures reduced the rigidity and increased packing of the materials. Viscosity tests were conducted to show the impact of different materials on drilling fluid rheology. Microscope observations were conducted to characterize the alterations to each material due to thermal degradation. Organic materials tend to degrade more than inorganic materials, with organics like microcellulose, cotton seed hulls, and sawdust losing 30–50 % of their mass after 1 day of heating at 200 °C, while inorganics like magma fiber lose only ∼5–10 % of their mass under the same conditions. Granular materials are the strongest when compacted despite any mass loss, while fibrous and flaky materials are fairly weak and break down easily under stress.
The materials do not generally affect fluid rheology unless they have a viscosifying agent as part of the mixture. Microscopic analysis showed that more rigid materials like microcellulose and cedar fiber degrade in brittle manners, with splitting and fracturing, while others like cotton seed hulls degrade in more ductile manners, forming meshes or clumps of material. The thermal breakdown of the lost circulation materials tested suggests that each material should also be classified by its degree of thermal degradability, since at certain temperatures a material can lose the capability to bridge loss zones around the wellbore.
Radiation source localization is important for nuclear nonproliferation and can be obtained using time-encoded imaging systems with unsegmented detectors. A scintillation crystal can be used with a moving coded-aperture mask to vary the detected count rate produced from radiation sources in the far field. The modulation of observed counts over time can be used to reconstruct an image with the known coded-aperture mask pattern. Current time-encoded imaging systems incorporate cylindrical coded-aperture masks and have limits to their fully coded imaging field-of-view. This work focuses on expanding the field-of-view to 4π by using a novel spherical coded-aperture mask. A regular icosahedron is used to approximate a spherical mask. This icosahedron consists of 20 equilateral triangles; the faces of which are each subdivided into four equilateral triangle-shaped voxels which are then projected onto a spherical surface, creating an 80-voxel coded-aperture mask. These polygonal voxels can be made from high-Z materials for gamma-ray modulation and/or low-Z materials for neutron modulation. In this work, we present Monte Carlo N-Particle (MCNP) simulations and simple models programmed in Mathematica to explore image reconstruction capabilities of this 80-voxel coded-aperture mask.
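The 80-voxel construction described above lends itself to a short numerical sketch. The following Python snippet is an illustrative reconstruction (not the authors' code): it builds the 12-vertex icosahedron, recovers its 20 faces, subdivides each face into four triangles via edge midpoints, and projects the vertices onto the unit sphere.

```python
import itertools
import math

PHI = (1 + math.sqrt(5)) / 2

# 12 icosahedron vertices: cyclic permutations of (0, +/-1, +/-phi)
verts = []
for a, b in itertools.product((-1, 1), (-PHI, PHI)):
    verts += [(0, a, b), (a, b, 0), (b, 0, a)]

# Faces are exactly the vertex triples whose pairwise distances all
# equal the minimum edge length (2.0 for this vertex set).
edge = min(math.dist(p, q) for p, q in itertools.combinations(verts, 2))
faces = [t for t in itertools.combinations(verts, 3)
         if all(abs(math.dist(p, q) - edge) < 1e-9
                for p, q in itertools.combinations(t, 2))]
assert len(faces) == 20

def normalize(p):
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

# Subdivide each face into 4 triangles via edge midpoints, then project
# every vertex onto the unit sphere to form the spherical mask voxels.
voxels = []
for v0, v1, v2 in faces:
    m01, m12, m20 = midpoint(v0, v1), midpoint(v1, v2), midpoint(v2, v0)
    for tri in ((v0, m01, m20), (v1, m12, m01),
                (v2, m20, m12), (m01, m12, m20)):
        voxels.append(tuple(normalize(p) for p in tri))

print(len(voxels))  # 80 spherical-triangle voxels
```

Each voxel is a spherical triangle; in a real mask design, a subset of these 80 positions would be filled with high-Z or low-Z material according to the coded-aperture pattern.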
In order to make design decisions, engineers may seek to identify regions of the design domain that are acceptable in a computationally efficient manner. A design is typically considered acceptable if its reliability with respect to parametric uncertainty exceeds the designer’s desired level of confidence. Despite major advancements in reliability estimation and in design classification via decision boundary estimation, the current literature still lacks a design classification strategy that incorporates parametric uncertainty and desired design confidence. To address this gap, this work offers a novel interpretation of the acceptance region by defining the decision boundary as the hypersurface which isolates the designs that exceed a user-defined level of confidence given parametric uncertainty. This work addresses the construction of this novel decision boundary using computationally efficient algorithms that were developed for reliability analysis and decision boundary estimation. The proposed approach is verified on two physical examples from structural and thermal analysis using Support Vector Machines and Efficient Global Optimization-based contour estimation.
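A minimal Monte Carlo sketch can make the acceptance criterion concrete. Here the limit-state function, the standard-normal parametric uncertainty, and the 0.95 confidence level are hypothetical stand-ins, not the paper's examples: a design is acceptable when its sampled reliability meets the desired confidence.

```python
import random

def limit_state(design, theta):
    """Hypothetical performance function: positive means the design
    survives this realization of the uncertain parameter theta."""
    return design - theta

def reliability(design, n=20000, seed=3):
    """Monte Carlo estimate of P(limit_state > 0) under theta ~ N(0, 1)."""
    rng = random.Random(seed)
    ok = sum(limit_state(design, rng.gauss(0.0, 1.0)) > 0 for _ in range(n))
    return ok / n

CONFIDENCE = 0.95
# For this toy problem the decision boundary is the design value where
# reliability crosses 0.95, i.e. the 95th standard-normal percentile.
for d in (1.0, 2.0):
    print(d, reliability(d) >= CONFIDENCE)
```

A classifier such as a Support Vector Machine would then be trained on such accept/reject labels to approximate the decision boundary without exhaustively sampling the design domain.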
Nop, Gavin N.; Smith, Jonathan D.H.; Paudyal, Durga; Stick, Daniel L.
Junctions are fundamental elements that support qubit locomotion in two-dimensional ion trap arrays and enhance connectivity in emerging trapped-ion quantum computers. In surface ion traps they have typically been implemented by shaping radio frequency (RF) electrodes in a single plane to minimize the disturbance to the pseudopotential. However, this method introduces issues related to RF lead routing that can increase power dissipation and the likelihood of voltage breakdown. Here, we propose and simulate a novel two-layer junction design incorporating two perpendicularly rotoreflected (rotated, then reflected) linear ion traps. The traps are vertically separated and create a trapping potential between their respective planes. The orthogonal orientation of the RF electrodes of each trap relative to the other provides perpendicular axes of confinement that can be used to realize transport in two dimensions. While this design introduces manufacturing and operating challenges, as now two separate structures have to be precisely positioned relative to each other in the vertical direction and optical access from the top is obscured, it obviates the need to route RF leads below the top surface of the trap and eliminates the pseudopotential bumps that occur in typical junctions. In this paper the stability of idealized ion transfer in the new configuration is demonstrated, both by solving the Mathieu equation analytically to identify the stable regions and by numerically modeling ion dynamics. Our novel junction layout has the potential to enhance the flexibility of microfabricated ion trap control to enable large-scale trapped-ion quantum computing.
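The Mathieu-equation stability check mentioned above is a standard Floquet computation, sketched below with assumed (a, q) values rather than the authors' trap parameters: the motion is bounded when the trace of the monodromy matrix over one period satisfies |tr M| < 2.

```python
import math

def mathieu_stable(a, q, steps=2000):
    """Floquet stability test for u'' + (a - 2 q cos(2 t)) u = 0.

    Integrates two independent solutions over one period (pi) with RK4
    to form the monodromy matrix; the motion is bounded iff |trace| < 2.
    """
    def deriv(t, y):
        u, v = y
        return (v, -(a - 2 * q * math.cos(2 * t)) * u)

    def integrate(y):
        t, h = 0.0, math.pi / steps
        for _ in range(steps):
            k1 = deriv(t, y)
            k2 = deriv(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
            k3 = deriv(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
            k4 = deriv(t + h, [y[i] + h * k3[i] for i in range(2)])
            y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(2)]
            t += h
        return y

    col1 = integrate([1.0, 0.0])   # solution with u(0)=1, u'(0)=0
    col2 = integrate([0.0, 1.0])   # solution with u(0)=0, u'(0)=1
    return abs(col1[0] + col2[1]) < 2

print(mathieu_stable(0.0, 0.3))   # inside the first stability region
print(mathieu_stable(0.0, 1.2))   # beyond the q ~ 0.908 boundary at a = 0
```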
In a computational fluid model of the atmosphere, the advective transport of trace species, or tracers, can be computationally expensive. For efficiency, models often use semi-Lagrangian advection methods. High-order interpolation semi-Lagrangian (ISL) methods, in particular, can be extremely efficient, if the problem of property preservation specific to them can be addressed. Atmosphere models often use geometrically and logically nonuniform grids for efficiency and, as a result, element-based discretizations. Such grids and discretizations make stability a particular problem for ISL methods. Generally, high-order, element-based ISL methods that use the natural polynomial interpolant associated with a nodal finite-element discretization are unstable. We derive new bases having order of accuracy up to nine, with positive nodal weights, that stabilize the element-based ISL method. We use these bases to construct the linear advection operator in the property-preserving Interpolation Semi-Lagrangian Element-based Transport (Islet) method. Then we discuss key software implementation details. Finally, we show performance results for the Energy Exascale Earth System Model's atmosphere dynamical core, comparing the original and new transport methods. These simulations used up to 27,600 Graphics Processing Units (GPUs) on the Oak Ridge Leadership Computing Facility's Summit supercomputer.
High-throughput image segmentation of atomic resolution electron microscopy data poses an ongoing challenge for materials characterization. In this paper, we investigate the application of the polyhedral template matching (PTM) method, a technique widely employed for visualizing three-dimensional (3D) atomistic simulations, to the analysis of two-dimensional (2D) atomic resolution electron microscopy images. This technique is complementary with other atomic resolution data reduction techniques, such as the centrosymmetry parameter, that use the measured atomic peak positions as the starting input. Furthermore, since the template matching process also gives a measure of the local rotation, the method can be used to segment images based on local orientation. We begin by presenting a 2D implementation of the PTM method, suitable for atomic resolution images. We then demonstrate the technique's application to atomic resolution scanning transmission electron microscopy images from close-packed metals, providing examples of the analysis of twins and other grain boundaries in FCC gold and martensite phases in 304 L austenitic stainless steel. Finally, we discuss factors, such as positional errors in the image peak locations, that can affect the accuracy and sensitivity of the structural determinations.
Daniel, Kyle; Willhardt, Colton; Glumac, Nick; Chen, Damon; Guildenbecher, Daniel
Surface mass loss rates due to sublimation and oxidation at temperatures of 3000–7000 K have been measured in a shock tube for graphite and carbon black (CB) particles. Diagnostics are presented for measuring surface mass loss rates by diffuse backlit illumination extinction imaging and thermal emission. The surface mass loss rate is found by regression fitting extinction and emission signals with an independent spherical primary particle assumption. Measured graphite sublimation and oxidation rates are reported to be an order of magnitude greater than CB sublimation and oxidation rates. It is speculated that the difference between CB and graphite surface mass loss rates is largely due to the primary particle assumption of the presented technique, which misrepresents the effective surface area of an aggregate particle where primary particles overlap and shield inner particles. Measured sublimation rates are compared to sublimation models in the literature; graphite shows fair agreement with the models, while the measured CB rates fall below model predictions, likely a result of the particle shielding effect not being considered in the sublimation model.
Prior to every ion implantation experiment, a simulation of the ion range and other relevant parameters is performed using Monte Carlo-based codes. Although increasing computing power has improved the speed of these calculations, the demands on Monte Carlo codes are also increasing, requiring evaluation of the optimal number of simulations while ensuring accuracy within threshold bounds. We evaluate the “Stopping and Range of Ions in Matter” (SRIM) code due to its widespread usage. We show how dividing simulations into multiple parallel simulations with different random seeds can lead to calculation speedup and find lower bounds for the required number of ion traces simulated based on an exemplar system of a Ga focused ion beam and a high energy C beam as used in high linear energy transfer testing. Our results indicate simulations can yield results within the underlying data accuracy of SRIM at 10X and 100X shorter simulation time than the SRIM default values.
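The seed-splitting idea can be illustrated with a toy stand-in for SRIM (the Gaussian range distribution below is hypothetical): each worker simulates a fraction of the ion budget with a distinct seed, and the spread of per-worker means gives a direct estimate of the statistical error.

```python
import random
import statistics

def simulate_ranges(n_ions, seed, mean_nm=50.0, straggle_nm=8.0):
    """Toy stand-in for a SRIM run: draw n_ions ion ranges from a
    hypothetical straggling distribution (values in nm are illustrative)."""
    rng = random.Random(seed)
    return [rng.gauss(mean_nm, straggle_nm) for _ in range(n_ions)]

def parallel_mean_range(n_ions_total, n_workers):
    """Split the ion budget across workers with distinct seeds and pool
    the per-worker means -- statistically equivalent to one long run."""
    per_worker = n_ions_total // n_workers
    means = [statistics.fmean(simulate_ranges(per_worker, seed))
             for seed in range(n_workers)]
    pooled = statistics.fmean(means)
    # Standard error of the pooled mean from the spread of worker means.
    sem = statistics.stdev(means) / n_workers ** 0.5
    return pooled, sem

pooled, sem = parallel_mean_range(n_ions_total=10000, n_workers=10)
print(f"mean range: {pooled:.1f} nm, standard error ~ {sem:.2f} nm")
```

Stopping once the standard error falls below a chosen threshold is one way to pick the minimum number of ion traces rather than defaulting to a fixed count.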
Charging a Li-ion battery requires Li-ion transport between the cathode and the anode. This Li-ion transport is dependent on (among other factors) the electrostatic environment that the ion encounters within the solid electrolyte interphase (SEI), which separates the anode from the surrounding electrolyte. A previous first-principles work has illuminated the reaction barriers through likely atomistic SEI environments but has had difficulty accurately reflecting the larger electrostatic potential landscape that an ion encounters moving through the SEI. In this work, we apply the recently developed quantum continuum approximation (QCA) technique to provide an equilibrium electronic potentiostat for first-principles interface calculations. Using QCA, we calculate the potential barrier for Li-ion transport through LiF, Li2O, and Li2CO3 SEIs along with LiF-LiF and LiF-Li2O grain boundaries, all paired with Li metal anodes. We demonstrate that the SEI potential barrier is dependent on the electrochemical potentials of the anode in each system. Finally, we use these techniques to estimate the change in the diffusion barrier for a Li ion moving in a LiF SEI as a function of the anode potential. We find that properly accounting for interface and electronic voltage effects significantly lowers reaction barriers compared with previous literature results.
Density-functional theory (DFT) is used to identify phase-equilibria in multi-principal-element and high-entropy alloys (MPEAs/HEAs), including duplex-phase and eutectic microstructures. A combination of composition-dependent formation energy and electronic-structure-based ordering parameters was used to identify a transition from FCC to BCC favoring mixtures, and these predictions were experimentally validated in the Al-Co-Cr-Cu-Fe-Ni system. A sharp crossover in lattice structure and dual-phase stability as a function of composition were predicted via DFT and validated experimentally. The impact of solidification kinetics and thermodynamic stability was explored experimentally using a range of techniques, from slow (castings) to rapid (laser remelting), which showed a decoupling of phase fraction from thermal history, i.e., phase fraction was found to be solidification rate-independent, enabling tuning of a multi-modal cell and grain size ranging from nanoscale through macroscale. Strength and ductility tradeoffs for select processing parameters were investigated via uniaxial tension and small-punch testing on specimens manufactured via powder-based additive manufacturing (directed-energy deposition). This work establishes a pathway for design and optimization of next-generation multiphase superalloys via tailoring of structural and chemical ordering in concentrated solid solutions.
The analysis of the work hardening variation with stress reveals insight into operative stress-strain mechanisms in material systems. The onset of plasticity can be assessed and related to ensuing plastic deformation up to the structural instability using one constitutive relationship that incorporates both behaviors of rapid work hardening (Stage 3) and the asymptotic leveling of stress (Stage 4). Results are presented for the mechanical behavior analysis of Ti-6Al-4V wherein the work hardening variation of Stages 3 and 4 are found to: be dependent through a constitutive relationship; be useful in a Hall-Petch formulation of yield strength; and provide the basis for a two point-slope fit method to model the experimental work hardening and stress-strain behavior.
Skyrmions and antiskyrmions are nanoscale swirling textures of magnetic moments formed by chiral interactions between atomic spins in magnetic noncentrosymmetric materials and multilayer films with broken inversion symmetry. These quasiparticles are of interest for use as information carriers in next-generation, low-energy spintronic applications. To develop skyrmion-based memory and logic, we must understand skyrmion-defect interactions with two main goals—determining how skyrmions navigate intrinsic material defects and determining how to engineer disorder for optimal device operation. Here, we introduce a tunable means of creating a skyrmion-antiskyrmion system by engineering the disorder landscape in FeGe using ion irradiation. Specifically, we irradiate epitaxial B20-phase FeGe films with 2.8 MeV Au4+ ions at varying fluences, inducing amorphous regions within the crystalline matrix. Using low-temperature electrical transport and magnetization measurements, we observe a strong topological Hall effect with a double-peak feature that serves as a signature of skyrmions and antiskyrmions. These results are a step towards the development of information storage devices that use skyrmions and antiskyrmions as storage bits, and our system may serve as a testbed for theoretically predicted phenomena in skyrmion-antiskyrmion crystals.
This manuscript presents a complete framework for the development and verification of physics-informed neural networks with application to the alternating-current power flow (ACPF) equations. Physics-informed neural networks (PINNs) have received considerable interest within power systems communities for their ability to harness underlying physical equations to produce simple neural network architectures that achieve high accuracy using limited training data. The methodology developed in this work builds on existing methods and explores important new aspects around the implementation of PINNs including: (i) obtaining operationally relevant training data, (ii) efficiently training PINNs and using pruning techniques to reduce their complexity, and (iii) globally verifying the worst-case predictions given known physical constraints. The methodology is applied to the IEEE-14 and 118 bus systems where PINNs show substantially improved accuracy in a data-limited setting and attain better guarantees with respect to worst-case predictions.
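The "physics" term of such a PINN loss is the squared mismatch of the AC power balance equations. The sketch below (a hypothetical two-bus network, not one of the IEEE test systems) shows how that residual is evaluated for a candidate voltage prediction.

```python
import math

# Hypothetical 2-bus network: admittance matrix G + jB (per unit).
G = [[ 1.0, -1.0],
     [-1.0,  1.0]]
B = [[-10.0,  10.0],
     [ 10.0, -10.0]]

def acpf_residuals(V, theta, P_spec, Q_spec):
    """Power-flow mismatch used as the physics term of a PINN loss:
    the summed squared residual of the AC power balance at every bus."""
    n = len(V)
    loss = 0.0
    for i in range(n):
        P = sum(V[i] * V[k] * (G[i][k] * math.cos(theta[i] - theta[k])
                               + B[i][k] * math.sin(theta[i] - theta[k]))
                for k in range(n))
        Q = sum(V[i] * V[k] * (G[i][k] * math.sin(theta[i] - theta[k])
                               - B[i][k] * math.cos(theta[i] - theta[k]))
                for k in range(n))
        loss += (P - P_spec[i]) ** 2 + (Q - Q_spec[i]) ** 2
    return loss

# A flat-start prediction has zero mismatch for a zero-injection case:
print(acpf_residuals([1.0, 1.0], [0.0, 0.0], P_spec=[0, 0], Q_spec=[0, 0]))
```

During training this residual would be evaluated at the network's predicted voltages and angles and added to the ordinary data-fit loss, which is what lets a PINN remain accurate with limited labeled data.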
A thermally driven, micrometer-scale switch technology has been created that utilizes the ErH3/Er2O3 materials system. The technology comprises novel thin film switches, interconnects, on-board micro-scale heaters for passive thermal environment sensing, and on-board micro-scale heaters for individualized switch actuation. Switches undergo a thermodynamically stable reduction/oxidation reaction leading to a multi-decade (>11 orders) change in resistance. The resistance contrast remains after cooling to room temperature, making them suitable as thermal fuses. An activation energy of 290 kJ/mol was calculated for the switch reaction, and a thermo-kinetic model was employed to determine switch times of 120 ms at 560 °C with the potential to scale to 1 ms at 680 °C.
Monte Carlo simulations are at the heart of many high-fidelity simulations and analyses for radiation transport systems. As is the case with any complex computational model, it is important to propagate sources of input uncertainty and characterize how they affect model output. Unfortunately, uncertainty quantification (UQ) is made difficult by the stochastic variability that Monte Carlo transport solvers introduce. The standard method to avoid corrupting the UQ statistics with the transport solver noise is to increase the number of particle histories, resulting in very high computational costs. In this contribution, we propose and analyze a sampling estimator based on the law of total variance to compute UQ variance even in the presence of residual noise from Monte Carlo transport calculations. We rigorously derive the statistical properties of the new variance estimator, compare its performance to that of the standard method, and demonstrate its use on neutral particle transport model problems involving both attenuation and scattering physics. We illustrate, both analytically and numerically, the estimator's statistical performance as a function of available computational budget and the distribution of that budget between UQ samples and particle histories. We show analytically and corroborate numerically that the new estimator is unbiased, unlike the standard approach, and is more accurate and precise than the standard estimator for the same computational budget.
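The estimator can be sketched in a few lines; the exponential response and noise model below are synthetic stand-ins for a transport calculation, not the paper's test problems.

```python
import math
import random
import statistics

def noisy_qoi(x, histories, rng):
    """Stand-in for a Monte Carlo transport tally: the true response
    exp(-x) plus zero-mean solver noise whose variance shrinks as
    1/histories (all numbers here are illustrative)."""
    noise_sd = 0.5 / histories ** 0.5
    return math.exp(-x) + rng.gauss(0.0, noise_sd), noise_sd ** 2

def uq_variance(n_samples, histories, seed=0):
    """Law-of-total-variance estimator: the total sample variance minus
    the mean solver-noise variance isolates the parametric (UQ) variance."""
    rng = random.Random(seed)
    qois, noise_vars = [], []
    for _ in range(n_samples):
        x = rng.uniform(0.5, 1.5)             # uncertain input parameter
        q, nv = noisy_qoi(x, histories, rng)
        qois.append(q)
        noise_vars.append(nv)
    return statistics.variance(qois) - statistics.fmean(noise_vars)

est = uq_variance(n_samples=20000, histories=100)
print(f"estimated parametric variance: {est:.4f}")
```

For this synthetic problem the true parametric variance of exp(-X) with X uniform on [0.5, 1.5] is about 0.012; subtracting the mean noise variance removes the bias that the raw sample variance would otherwise carry, without requiring the history count to be driven up until the noise is negligible.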
Precise control of light-matter interactions at the nanoscale lies at the heart of nanophotonics. However, experimental examination at this length scale is challenging since the corresponding electromagnetic near-field is often confined within volumes below the resolution of conventional optical microscopy. In semiconductor nanophotonics, electromagnetic fields are further restricted within the confines of individual subwavelength resonators, limiting access to critical light-matter interactions in these structures. In this work, we demonstrate that photoelectron emission microscopy (PEEM) can be used for polarization-resolved near-field spectroscopy and imaging of electromagnetic resonances supported by broken-symmetry silicon metasurfaces. We find that the photoemission results, enabled through an in situ potassium surface layer, are consistent with full-wave simulations and far-field reflectance measurements across visible and near-infrared wavelengths. In addition, we uncover a polarization-dependent evolution of collective resonances near the metasurface array edge taking advantage of the far-field excitation and full-field imaging of PEEM. Here, we deduce that coupling between eight resonators or more establishes the collective excitations of this metasurface. All told, we demonstrate that the high-spatial-resolution hyperspectral imaging and far-field illumination of PEEM can be leveraged for the metrology of collective, non-local, optical resonances in semiconductor nanophotonic structures.
Bayesian inference with a simple Gaussian error model is used to efficiently compute prediction variances for energies, forces, and stresses in the linear SNAP interatomic potential. The prediction variance is shown to have a strong correlation with the absolute error over approximately 24 orders of magnitude. Using this prediction variance, an active learning algorithm is constructed to iteratively train a potential by selecting the structures with the most uncertain properties from a pool of candidate structures. The relative importance of the energy, force, and stress errors in the objective function is shown to have a strong impact upon the trajectory of their respective net error metrics when running the active learning algorithm. Batched training of different batch sizes is also tested against singular structure updates, and it is found that batches can be used to significantly reduce the number of retraining steps required with only minor impact on the active learning trajectory.
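The core computation can be sketched with a generic two-descriptor linear model (a stand-in for the SNAP descriptor set, not the paper's implementation): the posterior covariance of the weights yields a predictive variance for each candidate, and the active-learning step greedily selects the most uncertain structure.

```python
import random

def posterior_cov(X, noise_var=0.01, prior_var=100.0):
    """Posterior covariance of 2-D linear-model weights under a Gaussian
    error model: inv(X^T X / s2 + I / tau2), written out for the 2x2 case."""
    a = sum(x[0] * x[0] for x in X) / noise_var + 1.0 / prior_var
    b = sum(x[0] * x[1] for x in X) / noise_var
    d = sum(x[1] * x[1] for x in X) / noise_var + 1.0 / prior_var
    det = a * d - b * b
    return [[d / det, -b / det], [-b / det, a / det]]

def prediction_variance(cov, x):
    """Predictive variance x^T Sigma x for a candidate descriptor x."""
    return (x[0] * (cov[0][0] * x[0] + cov[0][1] * x[1])
            + x[1] * (cov[1][0] * x[0] + cov[1][1] * x[1]))

# Hypothetical training descriptors clustered in [0, 1]^2 and a small
# candidate pool containing one extrapolative structure.
rng = random.Random(1)
train = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(30)]
pool = [[0.5, 0.5], [3.0, -2.0], [0.2, 0.8]]

cov = posterior_cov(train)
# Active-learning step: pick the candidate with the largest variance.
best = max(pool, key=lambda x: prediction_variance(cov, x))
print(best)  # the extrapolative candidate, far from the training data
```

Batched training, as discussed in the abstract, corresponds to selecting the top-k candidates by this variance before retraining, trading a small deviation in the learning trajectory for far fewer retraining steps.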
Photonic Doppler Velocimetry (PDV) is a fiber-based measurement amenable to a wide range of experimental conditions. Interference between two optical signals—one Doppler shifted and the other not—is the essential principle in these measurements. A confluence of commercial technologies, largely driven by the telecommunication industry, makes PDV particularly convenient at near-infrared wavelengths. This discussion considers how measurement time scales of interest relate to the design, operation, and analysis of a PDV measurement, starting from the steady state through nanosecond resolution. Benefits and outstanding challenges of PDV are summarized, with comparisons to related diagnostics.
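The underlying relation is the beat frequency f = 2v/λ produced by mixing Doppler-shifted and reference light; a minimal sketch at the common 1550 nm telecom wavelength:

```python
# Interfering Doppler-shifted and reference light at wavelength LAM
# produces a beat at f = 2 v / LAM, so surface velocity follows
# directly from the measured beat frequency.
LAM = 1550e-9  # m, typical telecom-band PDV wavelength

def beat_frequency(velocity):
    """Beat frequency (Hz) for a surface moving at `velocity` (m/s)."""
    return 2 * velocity / LAM

def velocity_from_beat(f_beat):
    """Invert the beat frequency back to a surface velocity (m/s)."""
    return f_beat * LAM / 2

f = beat_frequency(1000.0)      # a 1 km/s target
print(f / 1e9)                  # beat frequency in GHz (~1.29)
print(velocity_from_beat(f))    # recovers 1000.0 m/s
```

The GHz-scale beat for km/s velocities is what sets the digitizer bandwidth and, together with the analysis window length, the velocity and time resolution trade-off discussed in the paper.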
The Single Volume Scatter Camera (SVSC) Collaboration aims to develop portable neutron imaging systems for a variety of applications in nuclear non-proliferation. Conventional double-scatter neutron imagers are composed of several separate detector volumes organized in at least two planes. A neutron must scatter in two of these detector volumes for its initial trajectory to be reconstructed. As such, these systems typically have a large footprint and poor geometric efficiency. We report on the design and characterization of a prototype monolithic neutron scatter camera that is intended to significantly improve upon the geometrical shortcomings of conventional neutron cameras. The detector consists of a 50 mm × 56 mm × 60 mm monolithic block of EJ-204 plastic scintillator instrumented on two faces with arrays of 64 Hamamatsu S13360-6075PE silicon photomultipliers (SiPMs). The electronic crosstalk is limited to <5% between adjacent channels and <0.1% between all other channel pairs. SiPMs introduce a significantly elevated dark count rate over PMTs, as well as correlated noise from after-pulsing and optical crosstalk. In this article, we characterize the dark count rate and optical crosstalk and present a modified event reconstruction likelihood function that accounts for them. We find that the average dark count rate per SiPM is 4.3 MHz with a standard deviation of 1.5 MHz among devices. The analysis method we employ to measure internal optical crosstalk also naturally yields the mean and width of the single-electron pulse height. We calculate separate contributions to the width of the single-electron pulse height from electronic noise and avalanche fluctuations. We demonstrate a single-photon timing resolution of (128 ± 4) ps. Finally, coincidence analysis is employed to measure external (pixel-to-pixel) optical crosstalk.
We present a map of the average external crosstalk probability between 2×4 groups of SiPMs, as well as the in-situ timing characteristics extracted from the coincidence analysis. Further work is needed to characterize the camera's performance at reconstructing single- and double-site interactions, as well as its image reconstruction capabilities.
In this paper we extend the DGiT multirate framework, developed in Connors and Sockwell (2022) for scalar transmission problems, to a solid–solid interaction (SSI) problem involving two coupled elastic solids and a coupled air–sea model with the rotating, thermal shallow water equations. In so doing we aim to demonstrate the broad applicability of the mathematical theory and governing principles established in Connors and Sockwell (2022) to coupled problems characterized by subproblems evolving at different temporal scales. Multirate time integration algorithms employing different time steps, optimized for the dynamics of each subproblem, can significantly improve simulation efficiency for such coupled problems. However, development of multirate algorithms is a highly non-trivial task due to the coupling, which can impact accuracy, stability or other desired properties such as preservation of system invariants. DGiT provides a general template for multirate time integration that can achieve these properties. To elucidate the manner in which DGiT accomplishes this task, we fully detail each step in the application of the framework to the SSI and air–sea coupled problems. Numerical examples illustrate key properties of the resulting multirate schemes for both problems.
Artificial intelligence (AI) and machine learning (ML) are near-ubiquitous in day-to-day life, from cars with automated driver-assistance to recommender systems, generative content platforms, and large language chatbots. Implementing AI as a tool for international safeguards could significantly decrease the burden on safeguards inspectors and nuclear facility operators. The use of AI would allow inspectors to complete their in-field activities more quickly, identifying patterns and anomalies and freeing inspectors to focus on the uniquely human component of inspections. Sandia National Laboratories has spent the past two and a half years developing on-device machine learning for both a digital and a robotic assistant. This combined platform, which we term INSPECTA, has numerous on-device machine learning capabilities that have been demonstrated at the laboratory scale. This work describes early successes implementing AI/ML capabilities to reduce the burden of tedious inspector tasks such as seal examination, information recall, note taking, and more.
This data documentation report describes geologic and hydrologic laboratory analysis and data collected in support of site characterization of the Physical Experiment 1 (PE1) testbed, Aqueduct Mesa, Nevada. The documentation includes a summary of laboratory tests performed, discussion of sample selection for assessing heterogeneity of various testbed properties, methods, and results per data type.
The use of structural mechanics models during the design process often leads to the development of models of varying fidelity. Low-fidelity models are efficient to simulate but lack accuracy, while their high-fidelity counterparts are accurate but less efficient. This paper presents a multifidelity surrogate modeling approach that combines the accuracy of a high-fidelity finite element model with the efficiency of a low-fidelity model to train an even faster surrogate model that parameterizes the design space of interest. The objective of these models is to predict the nonlinear frequency backbone curves of the Tribomechadynamics research challenge benchmark structure, which exhibits simultaneous nonlinearities from frictional contact and geometric nonlinearity. The surrogate model consists of an ensemble of neural networks that learn the mapping between low- and high-fidelity data through nonlinear transformations. Bayesian neural networks are used to assess the surrogate model’s uncertainty. Once trained, the multifidelity neural network is used to perform sensitivity analysis to assess the influence of the design parameters on the predicted backbone curves. Additionally, Bayesian calibration is performed to update the input parameter distributions to correlate the model parameters to the collection of experimentally measured backbone curves.
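The low-to-high-fidelity mapping can be illustrated with a deliberately simple additive-discrepancy surrogate; the 1-D models and least-squares fit below are stand-ins for the paper's finite element models and neural networks.

```python
# Multifidelity sketch (illustrative stand-ins, not the paper's models):
# treat the surrogate as the low-fidelity output plus a learned
# discrepancy, here fit with a 1-D least-squares line instead of a
# neural network ensemble.
def low_fid(x):
    return x ** 2                  # cheap, biased model

def high_fid(x):
    return x ** 2 + 0.5 * x + 0.3  # expensive "truth"

xs = [0.1 * i for i in range(11)]
d = [high_fid(x) - low_fid(x) for x in xs]   # discrepancy samples

# Ordinary least-squares line through the discrepancy samples.
n = len(xs)
sx, sy = sum(xs), sum(d)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, d))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def surrogate(x):
    return low_fid(x) + slope * x + intercept

err = abs(surrogate(0.55) - high_fid(0.55))
print(err < 1e-9)  # discrepancy model recovers the high-fidelity curve
```

The paper's approach replaces the linear discrepancy with nonlinear neural-network transformations and adds Bayesian networks to quantify uncertainty, but the structure -- cheap model plus learned correction, trained on a handful of expensive evaluations -- is the same.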
Lehoucq, Richard B.; Mckinley, Scott A.; Miles, Christopher E.; Ding, Fangyuan
Many imaging techniques for biological systems—like fixation of cells coupled with fluorescence microscopy—provide sharp spatial resolution in reporting locations of individuals at a single moment in time but also destroy the dynamics they intend to capture. These snapshot observations contain no information about individual trajectories, but still encode information about movement and demographic dynamics, especially when combined with a well-motivated biophysical model. The relationship between spatially evolving populations and single-moment representations of their collective locations is well-established with partial differential equations (PDEs) and their inverse problems. However, experimental data is commonly a set of locations whose number is insufficient to approximate a continuous-in-space PDE solution. Here, motivated by popular subcellular imaging data of gene expression, we embrace the stochastic nature of the data and investigate the mathematical foundations of parametrically inferring demographic rates from snapshots of particles undergoing birth, diffusion, and death in a nuclear or cellular domain. Toward inference, we rigorously derive a connection between individual particle paths and their presentation as a Poisson spatial process. Using this framework, we investigate the properties of the resulting inverse problem and study factors that affect quality of inference. One pervasive feature of this experimental regime is the presence of cell-to-cell heterogeneity. Rather than being a hindrance, we show that cell-to-cell geometric heterogeneity can increase the quality of inference on dynamics for certain parameter regimes. Altogether, the results serve as a basis for more detailed investigations of subcellular spatial patterns of RNA molecules and other stochastically evolving populations that can only be observed for single instants in their time evolution.
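One piece of the framework admits a very small sketch: at steady state, the snapshot count of a birth-death population is Poisson with mean β/δ, so pooling counts across snapshots gives the maximum-likelihood estimate of that ratio. The rates below are hypothetical, and diffusion and spatial structure are ignored.

```python
import math
import random

def simulate_snapshot(beta, delta, rng):
    """Draw one snapshot count: Poisson with mean beta/delta, sampled
    by inversion (the stdlib has no Poisson sampler)."""
    mean = beta / delta
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

# Hypothetical rates: births at beta = 20/hr, deaths at delta = 2/hr,
# so each snapshot count is Poisson with mean 10.
rng = random.Random(7)
counts = [simulate_snapshot(beta=20.0, delta=2.0, rng=rng)
          for _ in range(500)]
ratio_mle = sum(counts) / len(counts)   # MLE of beta / delta
print(round(ratio_mle, 1))
```

Note that snapshot counts alone identify only the ratio β/δ, not the individual rates; in the paper, spatial information and the diffusive profile of particle positions are what break that degeneracy.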
As modern neuroscience tools acquire more details about the brain, the need to move towards biological-scale neural simulations continues to grow. However, effective simulations at scale remain a challenge. Beyond just the tooling required to enable parallel execution, there is also the unique structure of the synaptic interconnectivity, which is globally sparse but has relatively high connection density and non-local interactions per neuron. There are also various practicalities to consider in high performance computing applications, such as the need for serializing neural networks to support potentially long-running simulations that require checkpoint-restart. Although acceleration on neuromorphic hardware is also a possibility, development in this space can be difficult as hardware support tends to vary between platforms and software support for larger scale models also tends to be limited. In this paper, we focus our attention on Simulation Tool for Asynchronous Cortical Streams (STACS), a spiking neural network simulator that leverages the Charm++ parallel programming framework, with the goal of supporting biological-scale simulations as well as interoperability between platforms. Central to these goals is the implementation of scalable data structures suitable for efficiently distributing a network across parallel partitions. Here, we discuss a straightforward extension of a parallel data format with a history of use in graph partitioners, which also serves as a portable intermediate representation for different neuromorphic backends. We perform scaling studies on the Summit supercomputer, examining the capabilities of STACS in terms of network build and storage, partitioning, and execution. We highlight how a suitably partitioned, spatially dependent synaptic structure introduces a communication workload well-suited to the multicast communication supported by Charm++.
We evaluate the strong and weak scaling behavior for networks on the order of millions of neurons and billions of synapses, and show that STACS achieves competitive levels of parallel efficiency.
Additive manufacturing (AM) has established itself as advantageous beyond small-scale prototyping, now supporting full-scale production of components for a variety of applications. Despite its integration across industries, marine renewable energy technology is one largely untapped application with potential to bolster clean energy production on the global scale. Wave energy converters (WECs) are one specific facet within this realm that could benefit from AM. As such, wire arc additive manufacturing (WAAM) has been identified as a practical method to produce larger-scale marine energy components by leveraging cost-effective and readily available A36 steel feedstock material. The flexibility associated with WAAM can benefit WEC production by enabling more complex structural geometries that are challenging to produce traditionally. Additionally, for large components where fine details are less critical, the high deposition rate of WAAM in comparison to traditional wrought techniques could reduce build times by an order of magnitude. In this context of building and supporting WECs, which experience harsh marine environments, performance under large loads and in corrosive environments must be understood. Hence, WAAM and wrought A36 steel tensile samples were manufactured, and their mechanical properties compared under both dry and corroded conditions. The unique microstructure created via the WAAM process was found to correlate directly with the increased ultimate tensile and yield strength compared to the wrought condition. Static corrosion testing in a simulated saltwater environment, in parallel with electrochemical testing, showed that corroded WAAM A36 steel outperformed wrought, despite a slightly higher corrosion rate. Ultimately, this study shows how marine energy systems may benefit from additively manufactured components and provides a foundation for future applications of WAAM A36 steel.
In this paper, we present a first-order Stress-Hybrid Virtual Element Method (SH-VEM) on six-noded triangular meshes for linear plane elasticity. We adopt the Hellinger–Reissner variational principle to construct a weak equilibrium condition and a stress based projection operator. In each element, the stress projection operator is expressed in terms of the nodal displacements, which leads to a displacement based formulation. This stress-hybrid approach assumes a globally continuous displacement field while the stress field is discontinuous across each element. The stress field is initially represented by divergence-free tensor polynomials based on Airy stress functions, but we also present a formulation that uses a penalty term to enforce the element equilibrium conditions, referred to as the Penalty Stress-Hybrid Virtual Element Method (PSH-VEM). Numerical results are presented for PSH-VEM and SH-VEM, and we compare their convergence to the composite triangle FEM and B-bar VEM on benchmark problems in linear elasticity. The SH-VEM converges optimally in the L2 norm of the displacement, energy seminorm, and the L2 norm of hydrostatic stress. Furthermore, the results reveal that PSH-VEM converges in most cases at a faster rate than the expected optimal rate, but it requires the selection of a suitably chosen penalty parameter.
The goal of this work is to provide a database of quality-checked seismic parameters which can be integrated with the Geologic Framework Model (GFM) for the LYNM-PE1 (Low Yield Nuclear Monitoring – Physical Experiment 1) testbed. We integrated data from geophysical borehole logs, tabletop measurements on collected core, and laboratory measurements.
Somoye, Idris O.; Plusquellic, Jim; Mannos, Tom M.; Dziki, Brian
Recent evaluations of counter-based periodic testing strategies for fault detection in microprocessors (μP) have shown that only a small set of counters is needed to provide complete coverage of severe faults. Severe faults are defined as faults that leak sensitive information, e.g., an encryption key on the output of a serial port. Alternatively, fault detection can be accomplished by executing instructions that periodically test the control and functional units of the μP. In this paper, we propose a fault detection method that utilizes an 'engineered' executable program combined with a small set of strategically placed counters in pursuit of a hardware Periodic Built-In Self-Test (PBIST). We analyze two distinct methods for generating such a binary; the first uses an Automatic Test Pattern Generation (ATPG)-based methodology, and the second uses a process whereby existing counter-based node-monitoring infrastructure is utilized. We show that complete coverage of all leakage faults is possible using relatively small binaries with low latency to fault detection and by utilizing only a few strategically placed counters in the μP.
The crystal plasticity finite element method (CPFEM) has been an integrated computational materials engineering (ICME) workhorse for studying material behaviors and structure-property relationships over the last few decades. These relations are mappings from the microstructure space to the materials properties space. Due to the stochastic nature of microstructures, there is always some uncertainty associated with materials properties, for example, in homogenized stress-strain curves. For critical applications with strong reliability needs, it is often desirable to quantify the microstructure-induced uncertainty in the context of structure-property relationships. However, this uncertainty quantification (UQ) problem often incurs a large computational cost because many statistically equivalent representative volume elements (SERVEs) are needed. In this article, we apply a multi-level Monte Carlo (MLMC) method to CPFEM to study the uncertainty in stress-strain curves, given an ensemble of SERVEs at multiple mesh resolutions. By using information from the coarse meshes, we show that it is possible to approximate the response at fine meshes at a much reduced computational cost. We focus on problems where the model output is multi-dimensional, which requires us to track multiple quantities of interest (QoIs) at the same time. Our numerical results show that MLMC can accelerate UQ tasks by a factor of about 2.23 compared to the classical Monte Carlo (MC) method, widely known as ensemble averaging in the CPFEM literature.
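The coarse-to-fine trick behind MLMC can be sketched with a toy scalar model standing in for a CPFEM evaluation: the estimator is a telescoping sum E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], and because the level differences have small variance, few samples are needed at the expensive fine levels. The toy model and function names below are illustrative, not the authors' code.

```python
import math
import random

def model(theta, level):
    # Toy stand-in for a CPFEM response: higher `level` means finer mesh,
    # smaller discretization bias, and (notionally) higher cost.
    bias = 2.0 ** (-level)
    return math.sin(theta) + bias * theta

def mlmc_estimate(levels, samples_per_level, rng):
    # Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    # The same random input is used on both levels of each difference,
    # which is what makes the correction terms low-variance.
    total = 0.0
    for level, n in zip(levels, samples_per_level):
        acc = 0.0
        for _ in range(n):
            theta = rng.gauss(0.0, 1.0)
            fine = model(theta, level)
            coarse = model(theta, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total
```

Note how the sample counts decrease with level: most of the work is done on the cheap coarse model, with only a few fine-level corrections.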
Hydrogen energy storage can be used to achieve goals of national energy security, renewable energy integration, and grid resilience. Adapting underground natural gas storage facility (UNGSF) infrastructure for underground hydrogen storage (UHS) is one method of storing large quantities of hydrogen that has already largely been proven to work for natural gas. Some underground salt caverns in the United States are currently being used for hydrogen storage by commercial entities, but UHS remains a fairly new concept: it has not been widely deployed, nor has it been demonstrated in other geologic formations such as depleted hydrocarbon reservoirs. Assessments of UHS systems can help identify and evaluate risks to people both working within the facility and residing nearby. This report provides example risk assessment methodologies and analyses for generic wellhead and processing facility configurations, specifically in the context of the risks of unintentional hydrogen releases into the air. Assessment of hydrogen containment in the subsurface is also critically important for a safety assessment of a UHS facility, but those geomechanical assessments are not included in this report.
We describe a data-driven, multiscale technique to model reactive wetting of a silver–aluminum alloy on a Kovar™ (Fe-Ni-Co alloy) surface. We employ molecular dynamics simulations to elucidate the dependence of surface tension and wetting angle on the drop's composition and temperature. A design of computational experiments is used to efficiently generate training data of surface tension and wetting angle from a limited number of molecular dynamics simulations. The simulation results are used to parameterize models of the material's wetting properties and compute the uncertainty in the models due to limited data. The data-driven models are incorporated into an engineering-scale (continuum) model of a silver–aluminum sessile drop on a Kovar™ substrate. Model predictions of the wetting angle are compared with experiments of pure silver spreading on Kovar™ to quantify the model-form errors introduced by the limited training data versus the simplifications inherent in the molecular dynamics simulations. The paper presents innovations in the determination of "convergence" of noisy MD simulations before they are used to extract the wetting angle and surface tension, and in the construction of their models, which approximate physico-chemical processes that are left unresolved by the engineering-scale model. Together, these constitute a multiscale approach that integrates molecular-scale information into continuum-scale models.
Fluid flow through fractured media is typically governed by the distribution of fracture apertures, which are in turn governed by stress. Consequently, understanding subsurface stress is critical for understanding and predicting subsurface fluid flow. Although laboratory-scale studies have established a sensitive relationship between effective stress and bulk electrical conductivity in crystalline rock, that relationship has not been extensively leveraged to monitor stress evolution at the field scale using electrical or electromagnetic geophysical monitoring approaches. In this paper, we demonstrate the use of time-lapse three-dimensional (4D) electrical resistivity tomography to image perturbations in the stress field generated by pressurized borehole packers deployed during shear-stimulation attempts in a 1.25 km deep metamorphic crystalline rock formation.
GaN/InGaN microLEDs are a very promising technology for next-generation displays. Switching control transistors and their integration are key components in achieving high-performance, efficient displays. Monolithic integration of microLEDs with GaN switching devices provides an opportunity to control microLED output power with capacitive (voltage)-controlled rather than current-controlled schemes. This approach can greatly reduce system complexity for the driver circuit arrays while maintaining device opto-electronic performance. In this work, we demonstrate a 3-terminal GaN micro-light emitting transistor that combines a GaN/InGaN blue tunneling-based microLED with a GaN n-channel FET. The integrated device exhibits excellent gate control, drain current control, and optical emission control. This work provides a promising pathway for future monolithic integration of GaN FETs with microLED to enable fast switching, high-efficiency microLED display and communication systems.
Electrical polarization and defect transport are examined in 0.8BaTiO3–0.2BiZn0.5Ti0.5O3, an attractive capacitor material for high power electronics. Oxygen vacancies are suggested to be the majority charge carrier at or below 250°C with a grain conduction hopping activation energy of 0.97 eV and 0.92 eV for thermally stimulated depolarization current (TSDC) and impedance spectroscopy measurements, respectively. At higher temperature, thermally generated electronic conduction with an activation energy of 1.6 eV is dominant. Significant oxygen vacancy concentration is indicated (up to ~1%) due to cation vacancy formation (i.e., acceptor defects) from observed Bi (and likely Zn) volatility. Oxygen vacancy diffusivity is estimated to be 10^-12.8 cm^2/s at 250°C. Low diffusivity and high activation energies are indicative of significant defect interactions. Dipolar oxygen vacancy defects are also indicated, with an activation energy of 0.59 eV from TSDC measurements. In conclusion, the large oxygen vacancy content leads to a short lifetime during high voltage (30 kV/cm), high temperature (250°C) direct current (DC) electrical measurements.
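Given the reported diffusivity (~10^-12.8 cm^2/s at 250°C) and hopping activation energy (0.97 eV), a single-process Arrhenius assumption lets one extrapolate the diffusivity to other temperatures. The sketch below is illustrative only; the extrapolated values and the choice of 150°C are not from the paper, and the abstract itself cautions that strong defect interactions may complicate such a simple picture.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def arrhenius_diffusivity(T_kelvin, D_ref, T_ref, E_a):
    # D(T) = D_ref * exp(-(E_a / k_B) * (1/T - 1/T_ref)),
    # assuming a single thermally activated hopping process.
    return D_ref * math.exp(-(E_a / K_B) * (1.0 / T_kelvin - 1.0 / T_ref))

# Values reported in the abstract: D ≈ 10^-12.8 cm^2/s at 250 °C, E_a ≈ 0.97 eV
D_250 = 10.0 ** -12.8
T_250 = 250.0 + 273.15

# Illustrative extrapolation to 150 °C (a hypothetical temperature, not measured)
D_150 = arrhenius_diffusivity(150.0 + 273.15, D_250, T_250, 0.97)
```

As expected for a thermally activated process, the extrapolated diffusivity drops by more than two orders of magnitude over this 100 °C decrease.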
A new strategy is presented for computing anharmonic partition functions for the motion of adsorbates relative to a catalytic surface. Importance sampling is compared with conventional Monte Carlo and found to be significantly more efficient. The new approach is applied to CH3* on Ni(111) as a test case. The motion of methyl relative to the nickel surface is found to be anharmonic, with significantly higher entropy compared to the standard harmonic oscillator model. The new method is freely available as part of the Minima-Preserving Neural Network within the AdTherm package.
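The efficiency gain from importance sampling can be illustrated on a one-dimensional configurational integral Z = ∫ exp(−βV(x)) dx, with a harmonic well standing in for the adsorbate potential energy surface. This is a generic sketch of the technique, not the AdTherm implementation: a proposal density matched to the Boltzmann factor concentrates samples where the integrand is large, collapsing the estimator variance relative to uniform sampling.

```python
import math
import random

def V(x):
    # Toy 1-D harmonic potential standing in for the adsorbate PES
    return 0.5 * x * x

def z_uniform(beta, n, L, rng):
    # Conventional Monte Carlo: uniform samples on [-L, L]
    total = sum(math.exp(-beta * V(rng.uniform(-L, L))) for _ in range(n))
    return 2.0 * L * total / n

def z_importance(beta, n, sigma, rng):
    # Importance sampling with Gaussian proposal q(x) = N(0, sigma^2);
    # each sample is weighted by the Boltzmann factor over the proposal density.
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        q = math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
        total += math.exp(-beta * V(x)) / q
    return total / n
```

For beta = 1 the exact integral (over the full line) is sqrt(2π) ≈ 2.507; a proposal with sigma = 1 matches the Boltzmann factor exactly here, so the importance-sampled estimate is essentially exact even with very few samples, while the uniform estimator retains appreciable scatter.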
Nanoporous, gas-selective membranes have shown encouraging results for the removal of CO2 from flue gas, yet the optimal design for such membranes is often unknown. Therefore, we used molecular dynamics simulations to elucidate the behavior of CO2 within aqueous and ionic liquid (IL) systems ([EMIM][TFSI] and [OMIM][TFSI]), both confined individually and as an interfacial aqueous/IL system. We found that within aqueous systems the mobility of CO2 is reduced due to interactions between the CO2 oxygens and hydroxyl groups on the pore surface. Within the IL systems, we found that confinement has a greater effect on the [EMIM][TFSI] system as opposed to the [OMIM][TFSI] system. Paradoxically, the larger and more asymmetrical [OMIM]+ molecule undergoes less efficient packing, resulting in weaker confinement effects. Free energy surfaces of the nanoconfined aqueous/IL interface demonstrate that CO2 will transfer spontaneously from the aqueous to the IL phase.
Lasa, Ane; Park, Jae-Sun; Lore, Jeremy; Blondel, Sophie; Bernholdt, David E.; Canik, John M.; Cianciosa, Mark; Coburn, Jonathan D.; Curreli, Davide; Elwasif, Wael; Guterl, Jerome; Hoffman, Josh; Park, Jim M.; Sinclair, Gregory; Wirth, Brian D.
Integrated modeling of plasma-surface interactions provides a comprehensive and self-consistent description of the system, moving the field closer to developing predictive and design capabilities for plasma facing components. One such workflow, including descriptions for the scrape-off-layer plasma, ion-surface interactions, and the sub-surface evolution, was previously used to address steady-state scenarios and has recently been extended to incorporate time-dependence and two-way information flow. The new model can address dynamic recycling in transient scenarios, such as the application presented in this paper: the evolution of W samples pre-damaged by helium and exposed to ELMy H-mode plasmas in the DIII-D DiMES. A first set of simulations explored the effect of ELM frequency; this study was discussed in detail in this conference's proceedings and is summarized here. The second set of simulations, the focus of this paper, explores the effect of code-coupling frequency. These simulations include initial SOLPS solutions converged to the inter-ELM state, ion impact energies (Ein) and angles (Ain) calculated by hPIC2, and an improved heat transfer description in Xolotl. The model predicts increases in particle fluxes and decreases in heat fluxes by 10%–20% with the coupling time-step. Compared with the first set of simulations, the steeper impact angle leads to smaller reflection rates and significant D implantation. The higher fraction of implanted flux (and deeper implantation), in particular during ELMs, increases the accumulated D content in the W near-surface region. Future expansion of the workflow includes coupling to hPIC2 and GITR to ensure accurate descriptions of Ein and Ain, and of W impurity transport.
Experiments offer incredible value to science, but results must always come with an uncertainty quantification to be meaningful. This requires grappling with sources of uncertainty and how to reduce them. In wind energy, field experiments are sometimes conducted with a control and treatment. In this scenario, uncertainty due to bias errors can often be neglected, as it impacts both control and treatment approximately equally. However, uncertainty due to random errors propagates such that the uncertainty in the difference between the control and treatment is always larger than the random uncertainty in the individual measurements if the sources are uncorrelated. As random uncertainties are usually reduced with additional measurements, there is a need to know the minimum duration of an experiment required to reach acceptable levels of uncertainty. We present a general method to simulate a proposed experiment, calculate uncertainties, and determine both the measurement duration and the experiment duration required to produce statistically significant and converged results. The method is then demonstrated as a case study with a virtual experiment that uses real-world wind resource data and several simulated tip extensions to parameterize results by the expected difference in power. With the method demonstrated herein, experiments can be better planned by accounting for specific details such as controller switching schedules, wind statistics, and post-processing binning procedures. Their impacts on uncertainty can then be predicted, and the measurement duration needed to achieve statistically significant and converged results can be determined before the experiment.
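The propagation argument in this abstract reduces to two standard formulas: for uncorrelated random errors, the uncertainty of the control-treatment difference adds in quadrature, and the uncertainty of an N-sample mean scales as sigma/sqrt(N), which can be inverted to estimate the required measurement duration. The sketch below is a generic illustration of those two relations, not the authors' simulation method, and the example numbers in the note are assumptions.

```python
import math

def diff_uncertainty(sigma_control, sigma_treatment):
    # Uncorrelated random errors add in quadrature, so the difference is
    # always noisier than either individual measurement.
    return math.hypot(sigma_control, sigma_treatment)

def samples_needed(sigma_sample, target_uncertainty):
    # Random uncertainty of an N-sample mean scales as sigma / sqrt(N);
    # invert for the minimum N achieving a target standard uncertainty.
    return math.ceil((sigma_sample / target_uncertainty) ** 2)
```

For instance, if each averaging bin carries ~5% random scatter on both control and treatment (illustrative numbers), the per-bin scatter on their difference is ≈7.1%, and reaching a 0.5% standard uncertainty on the difference requires on the order of 200 bins. Correlated sources, controller switching schedules, and binning details would modify this simple estimate, which is the point of simulating the full experiment.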
Stress corrosion cracking behavior of stainless steel 304L was investigated under full immersion in evaporated artificial sea salt brines (ASW) at 55 °C. It was observed that brines representative of thermodynamically stable brines at lower relative humidity (40% RH, MgCl2-dominant) had a faster crack growth rate than high relative humidity brines (76% RH, NaCl-dominant). Observed crack growth rates (da/dt) under constant stress intensity (K) conditions were determined to be independent of transitioning procedure (rising K or decreasing frequency) regardless of the solutions investigated for the orientation presented. Further, positive strain rates had little to no impact on the observed da/dt. The observed behavior suggests an anodic dissolution enhanced hydrogen embrittlement mechanism for SS304L in concentrated ASW environments at 55 °C. Additional explorations further examined environmental influences on da/dt. Nitrate additions to 40% ASW solutions at 55 °C were shown to decrease measured da/dt, and further additions stopped measurable crack growth. After sufficient nitrate had been added to fully stifle crack growth, a temperature increase to 75 °C induced cracking again, and a subsequent decrease to 55 °C once again stopped measurable crack growth. These tests demonstrate the importance of ascertaining both brine-specific chemical and dynamic environmental influences on da/dt.
Neural operators, which can act as implicit solution operators of hidden governing equations, have recently become popular tools for learning the responses of complex real-world physical systems. Nevertheless, most neural operator applications have thus far been data-driven and neglect the intrinsic preservation of fundamental physical laws in data. In this work, we introduce a novel integral neural operator architecture called the Peridynamic Neural Operator (PNO) that learns a nonlocal constitutive law from data. This neural operator provides a forward model in the form of state-based peridynamics, with objectivity and momentum balance laws automatically guaranteed. As applications, we demonstrate the expressivity and efficacy of our model in learning complex material behaviors from both synthetic and experimental data sets. We also compare the performances with baseline models that use predefined constitutive laws. We show that, owing to its ability to capture complex responses, our learned neural operator achieves improved accuracy and efficiency. Moreover, by preserving the essential physical laws within the neural network architecture, the PNO is robust in treating noisy data. The method shows generalizability to different domain configurations, external loadings, and discretizations.
Warecki, Zoey; Ferrari, Victoria C.; Robinson, Donald A.; Sugar, Joshua D.; Lee, Jonathan; Ievlev, Anton V.; Kim, Nam S.; Stewart, David M.; Lee, Sang B.; Albertus, Paul; Rubloff, Gary; Talin, A.A.
We show that the deposition of the solid-state electrolyte LiPON onto films of V2O5 leads to their uniform lithiation of up to 2.2 Li per V2O5, without affecting the Li concentration in the LiPON and its ionic conductivity. Our results indicate that Li incorporation occurs during LiPON deposition, in contrast to earlier mechanisms proposed to explain postdeposition Li transfer between LiPON and LiCoO2. We use our discovery to demonstrate symmetric thin film batteries with a capacity of >270 mAh/g, at a rate of 20C, and 1600 cycles with only 8.4% loss in capacity. We also show how autolithiation can simplify fabrication of Li iontronic transistors attractive for emerging neuromorphic computing applications. Our discovery that LiPON deposition results in autolithiation of the underlying insertion oxide has the potential to substantially simplify and enhance the fabrication process for thin film solid state Li ion batteries and emerging lithium iontronic neuromorphic computing devices.