Electromagnetic (EM) methods are among the original techniques for subsurface characterization in exploration geophysics because of their particular sensitivity to the earth's electrical conductivity, a physical property of rocks distinct from yet complementary to density, magnetization, and strength. However, this sensitivity also extends to metallic artifacts (infrastructure such as pipes, cables, and other forms of cultural clutter) whose EM footprint often far exceeds their small physical size relative to the bulk rock itself. In the hunt for buried treasure or unexploded ordnance, this is an advantage; in the long-term monitoring of mature oil fields after decades of production, it is a serious complication. Here we consider the latter through the lens of an evolving energy-industry landscape in which the traditional EM characterization methods of the exploration geophysicist are applied to emergent problems in well-casing integrity, carbon capture and storage, and overall situational awareness in the oil field. We introduce case studies from these exemplars, showing how signals from metallic artifacts can dominate those from the target itself and impose significant burdens on the requisite simulation complexity. We also show how recent advances in numerical methods mitigate the computational explosion of infrastructure modeling, providing feasible, real-time analysis tools for the desktop geophysicist. Lastly, we demonstrate through comparison of field data and simulation results that incorporating infrastructure into the analysis of such geophysical data is, in a growing number of cases, a requisite but now manageable step.
A data analysis automation interface incorporating machine learning (ML) has been developed to improve productivity, efficiency, and consistency in identifying and defining critical load values (or other values associated with optically identifiable characteristics) of a coating when a scratch test is performed. In this specific program, the ML component has been trained to identify the Critical Load 2 (LC2) value by analyzing images of the scratch tracks created in each test. At present, a human operator determines where this value occurs by optical examination of the scratch. However, the vagueness of the governing standard has led to varying interpretations and nonuniform usage by operators at different laboratories where the test is implemented, resulting in multiple definitions of the desired parameter. By building the dataset from a standard set of training and validation images, the automation interface allows the critical load to be identified consistently across laboratories without requiring the training of human operators. When the model was used in conjunction with an instrument manufacturer's scratch test software, it produced accurate and repeatable results and defined LC2 values in as little as half the time required by a human operator. When combined with a program that automates other aspects of the scratch testing process usually conducted by a human operator, scratch testing and analysis can proceed with little to no human intervention beyond initial setup, freeing operators to complete other work in the lab.
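As an illustration only, the minimal sketch below shows one way such an image-based workflow could be structured: a patch classifier is trained on labeled crops of scratch tracks, and the first patch flagged as coating failure is mapped back to a load through an assumed linear load ramp. The classifier choice, feature handling, and function names are hypothetical and are not the interface described above.

```python
# Illustrative sketch only (not the interface described above): locate the LC2
# transition by classifying image patches along the scratch track.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_patch_classifier(patches, labels):
    """patches: (n, h, w) grayscale crops of scratch tracks; labels: 1 where
    LC2-type coating failure is visible, 0 otherwise."""
    X = patches.reshape(len(patches), -1) / 255.0
    return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)

def locate_lc2(clf, track_patches, start_load_N, end_load_N):
    """Scan patches in order along one scratch track; the first patch predicted
    as failed is mapped to a load via an assumed linear load ramp."""
    X = track_patches.reshape(len(track_patches), -1) / 255.0
    pred = clf.predict(X)
    if not pred.any():
        return None  # no LC2-type failure detected along the track
    idx = int(np.argmax(pred))  # index of the first predicted failure patch
    frac = idx / (len(track_patches) - 1)
    return start_load_N + frac * (end_load_N - start_load_N)
```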
The self-magnetic pinch (SMP) diode is a type of radiographic diode used to generate an intense electron beam for radiographic applications. At Sandia National Laboratories, the SMP diode was the load for the six-cavity radiographic integrated test stand (RITS-6) inductive voltage adder (IVA) driver, operated in a magnetically insulated transmission line (MITL) configuration. The MITL contributes a flow current in addition to the current generated within the diode itself. Extensive experiments with a MITL of 40 Ω load impedance [T. J. Renk et al., Phys. Plasmas 29, 023105 (2022)] indicate that the additional flow current leads to results similar to what might be expected from a conventional high-voltage interface driver, where flow current is not present. However, when the MITL flow impedance was increased to 80 Ω, qualitatively different diode behavior was observed. This includes large retrapping waves suggestive of an initial coupling to low impedance, as well as diode current decreasing with time even as the total current does not. A key observation is that the driver generates total current (flow + diode) consistent with the flow impedance of the MITL used. The case is made in this paper that the 80 Ω MITL experiments detailed here can only be understood when the IVA-MITL-SMP diode is considered as a total system. The constraint of fixed total current plus the relatively high flow impedance limits the ability of the diode (whether SMP or another type) to act as an independent load. An unexpected new result is that, in tracking the behavior of the electron strike angle on the converter as a function of time, we observed that the conventional cIV^x "radiographic" radiation scaling (where x ∼ 2.2) begins to break down for voltages above 8 MV, and cubic scaling is required to recover accurate angle tracking.
Fast detection and isolation of faults in a DC microgrid is of particular importance. Fast tripping protection (i) increases the lifetime of power electronics (PE) switches by avoiding high fault-current magnitudes and (ii) enhances the controllability of PE converters. This paper proposes a traveling wave (TW) based scheme for fast tripping protection of DC microgrids. The proposed scheme utilizes a discrete wavelet transform (DWT) to calculate the high-frequency components of DC fault currents. Multiresolution analysis (MRA) using the DWT is utilized to detect TW components in different frequency ranges. The Parseval energy of the MRA coefficients is then calculated to establish a quantitative relationship between the fault-current signal energy and the coefficients' energy. The calculated Parseval energy values are used to train a Support Vector Machine classifier to identify the fault type and a Gaussian Process regression engine to estimate the fault location on the DC cables. The proposed approach is verified by simulating two microgrid test systems in PSCAD/EMTDC.
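A minimal sketch of this kind of pipeline is given below, assuming simulated fault-current records (e.g., exported from PSCAD/EMTDC) are already available; the wavelet family, decomposition level, and data layout are illustrative assumptions rather than the settings used in the paper.

```python
# Hypothetical sketch: wavelet-energy features feeding an SVM fault-type
# classifier and a Gaussian Process fault locator. Feature extraction follows
# the DWT/Parseval-energy idea described above; all parameter choices here are
# illustrative assumptions, not the authors' implementation.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

def parseval_energies(i_fault, wavelet="db4", level=4):
    """Return the energy of each DWT sub-band of one fault-current record."""
    coeffs = pywt.wavedec(i_fault, wavelet, level=level)
    # For an orthogonal wavelet, Parseval's relation splits the signal energy
    # into the sub-band coefficient energies, giving one feature per band.
    return np.array([np.sum(c**2) for c in coeffs])

# i_records: (n_events, n_samples) simulated fault currents
# fault_type: (n_events,) integer labels; fault_dist_km: (n_events,) cable location
def train_protection_models(i_records, fault_type, fault_dist_km):
    X = np.vstack([parseval_energies(i) for i in i_records])
    clf = SVC(kernel="rbf").fit(X, fault_type)               # fault-type classifier
    loc = GaussianProcessRegressor().fit(X, fault_dist_km)   # fault-location estimator
    return clf, loc
```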
Radiographic diodes focus an intense electron beam to a small spot size to minimize the source area of energetic photons for radiographic interrogation. The self-magnetic pinch (SMP) diode has been developed as such a source and operated as a load for the six-cavity radiographic integrated test stand (RITS-6) inductive voltage adder driver. While experiments support the generally accepted conclusion that a 1:1 aspect-ratio diode (cathode diameter equal to the anode-cathode gap) delivers optimum SMP performance, such experiments also show that reducing the cathode diameter, while reducing spot size, also reduces the radiation dose, by as much as 50%, and degrades shot reproducibility. Analysis of the effective electron impingement angle on the anode converter as a function of time, made possible by a newly developed dose-rate array diagnostic, indicates that fast-developing oscillations of the angle are correlated with early termination of the radiation pulse on many of the smaller-diameter SMP shots. This behavior as a function of relative cathode size persists through experiments with output voltages and currents up to 11.5 MV and 225 kA, respectively, and with spot sizes below approximately a few millimeters. Since simulations to date have not predicted such oscillatory behavior, considerable discussion of the angle behavior of SMP shots is presented to lend credence to the inference. There is clear anecdotal evidence that DC heating of the SMP diode region stabilizes this oscillatory behavior. This is the first of two papers on the performance of the SMP diode on the RITS-6 accelerator.
The progression of wind turbine technology has led to highly optimized machines that often approach their theoretical maximum production capabilities. When placed together in arrays to form wind farms, however, turbines are subject to wake interference that greatly reduces downstream turbines' power production, increases structural loading and maintenance, reduces their lifetimes, and ultimately increases the levelized cost of energy. Developing techniques to manage wakes and operate ever-larger arrays of turbines more efficiently is now a crucial field of research. Herein, four wake management techniques in various states of development are reviewed: axial induction control, wake steering, a combination of the two, and active wake control. Each is reviewed in terms of its control strategies and its use for power maximization, load reduction, and ancillary services. By evaluating existing research, several directions for future research are suggested.
Cookoff experiments of powdered and pressed TATB-based plastic bonded explosives (PBXs) have been modeled using a pressure-dependent universal cookoff model (UCM) in combination with a micromechanics pressurization (MMP) model described in a companion paper. The MMP model is based on the accumulation of decomposition gases at nucleation sites that load the surrounding TATB crystals and binder. This is the first cookoff model to use an analytical mechanics solution for compressibility and thermal expansion to describe internal pressurization caused by both temperature and decomposition occurring within closed-pore explosives. This approach produces more accurate predictions of ignition time and pressurization within high-density explosives than simple equation-of-state models. The current paper gives details of the reaction chemistry, model parameters, predicted uncertainty, and validation using experiments from multiple laboratories with errors less than 6%. The UCM/MMP model framework gives more accurate thermal ignition predictions for high-density explosives that are initially impermeable to decomposition gases.
Stainless steels are susceptible to localized forms of corrosion attack, such as pitting. The size and lifetime of a nucleated pit can vary, depending on a critical potential or current density criterion, which determines whether the pit repassivates or continues growing. This work uses finite element method (FEM) modeling to compare the critical pit radii predicted by thermodynamic and kinetic repassivation criteria. Experimental electrochemical boundary conditions are used to capture the active pit kinetics. Geometric and environmental parameters, such as the pit shape and size (analogous to additively manufactured lack-of-fusion pores), solution concentration, and water layer thickness, were considered to assess their impact on the pit repassivation criterion. The critical pit radius (the transition point from stable growth to repassivation) predicted for a hemispherical pit was larger when using the repassivation potential (Erp) criterion than when using the current density criterion (pit stability product). Incorporating both the pit stability product and Erp into its calculations, the analytical maximum pit model predicted a critical radius a factor of two more conservative than the FEM approach under the conditions studied herein. The complex pits representing lack-of-fusion pores were shown to have minimal impact on the critical radius under atmospheric conditions.
Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
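For context, the sketch below shows a conventional CPU version of the computation that is mapped onto spiking hardware: an ensemble of discrete-time Markov chain random walkers whose position histogram approximates a diffusing density. The grid spacing, step rule, and walker count are illustrative choices, not those of the TrueNorth or Loihi implementations.

```python
# Conventional (non-neuromorphic) sketch of a random-walk approximation of
# diffusion via a symmetric discrete-time Markov chain on a 1D lattice.
import numpy as np

def diffusion_by_random_walk(n_walkers=100_000, n_steps=1_000, dx=1.0, dt=1.0, seed=0):
    """Monte Carlo estimate of a diffusing density released from a point source."""
    rng = np.random.default_rng(seed)
    positions = np.zeros(n_walkers)
    # Each walker hops left or right by dx with probability 1/2 at every step,
    # i.e., a symmetric discrete-time Markov chain on the integer lattice.
    for _ in range(n_steps):
        positions += rng.choice([-dx, dx], size=n_walkers)
    # The walker histogram approximates the diffusion Green's function with
    # D = dx**2 / (2 * dt), evaluated at time t = n_steps * dt.
    hist, edges = np.histogram(positions, bins=101, density=True)
    return hist, edges

density, bin_edges = diffusion_by_random_walk()
```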
Triangle counting is a fundamental building block in graph algorithms. In this article, we propose a block-based triangle counting algorithm to reduce data movement during both sequential and parallel execution. Our block-based formulation makes the algorithm naturally suitable for heterogeneous architectures. The problem of partitioning the adjacency matrix of a graph is well-studied. Our task decomposition goes one step further: it partitions the set of triangles in the graph. By streaming these small tasks to compute resources, we can solve problems that do not fit on a device. We demonstrate the effectiveness of our approach by providing an implementation on a compute node with multiple sockets, cores and GPUs. The current state-of-the-art in triangle enumeration processes the Friendster graph in 2.1 seconds, not including data copy time between CPU and GPU. Using that metric, our approach is 20 percent faster. When copy times are included, our algorithm takes 3.2 seconds. This is 5.6 times faster than the fastest published CPU-only time.
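As a rough illustration of the decomposition (not the article's implementation, which operates on sparse, partitioned data across CPU sockets and GPUs), the dense NumPy sketch below splits the identity triangles = trace(A^3)/6 into independent per-block-triple tasks; each task touches only three blocks of the adjacency matrix, which is what allows tasks to be streamed to a device that cannot hold the whole graph.

```python
# Minimal dense sketch of block-based triangle counting. Block size and the
# dense 0/1 adjacency matrix are illustrative assumptions.
import numpy as np

def count_triangles_blocked(A, block=256):
    """Count triangles in an undirected graph given a symmetric 0/1 integer
    adjacency matrix A with zero diagonal, one block triple at a time."""
    n = A.shape[0]
    starts = range(0, n, block)
    total = 0
    for i in starts:
        for j in starts:
            for k in starts:
                A_ij = A[i:i + block, j:j + block]
                A_jk = A[j:j + block, k:k + block]
                A_ik = A[i:i + block, k:k + block]
                # Wedges i-j-k whose closing edge i-k exists in this block triple.
                total += np.sum((A_ij @ A_jk) * A_ik)
    return total // 6  # each triangle is counted once per ordered vertex triple
```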
In 2010, nuclear weapon effects experts at Sandia National Laboratories (SNL) were asked to provide a quick-reference document containing estimated prompt nuclear effects. This report is an update to the 2010 document that includes updated model assumptions. This report addresses only the prompt effects associated with a nuclear detonation (e.g., blast, thermal fluence, and prompt ionizing radiation). The potential medium- and longer-term health effects associated with nuclear fallout are not considered, in part because of the impracticality of making generic estimates given the high dependency of fallout predictions on the local meteorological conditions at the time of the event. The results in this report also do not consider the urban environment (e.g., shielding by or collapse of structures), which may affect the extent of prompt effects. It is important to note that any operational recommendations made using the estimates in this report are limited by the generic assumptions considered in the analysis and should not replace analyses made for a specific scenario or device. Furthermore, nuclear effects experts (John Hogan, SNL, and Byron Ristvet, Defense Threat Reduction Agency (DTRA)) have indicated that effects predictions below 0.5 kilotons (kT), or 500 tons, of nuclear yield carry greater uncertainty because of the limited data available for prompt effects in this regime. The Specialized Hazard Assessment Response Capability (SHARC) effects prediction tool was used for these analyses; specifically, the NUKE model within SHARC 2021 Version 10.2. NUKE models only the prompt effects following a nuclear detonation. The algorithms for predicting range-to-output data contained within the NUKE model are based primarily on nuclear test effects data. Probits have been derived from nuclear test data and the U.S. Environmental Protection Agency (EPA) protective action guides. A probit relates the probability of a hazard (e.g., fatality or injury) to the magnitude of a given insult (e.g., overpressure, thermal fluence, or dose level). Several probits have been built into SHARC to determine the fatality and injury probabilities associated with a given level of insult, and some of these probits vary with yield. Such probits were used to develop the tables and plots in this report.
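For readers unfamiliar with the probit formulation, the sketch below evaluates a generic probit relation of the standard form Y = a + b·ln(insult) with P = Φ(Y − 5); the coefficients shown are purely illustrative placeholders and are not the probits implemented in SHARC/NUKE.

```python
# Generic probit dose-response relation of the kind described above. The
# coefficients a and b below are illustrative assumptions only.
from math import erf, log, sqrt

def probit_probability(insult, a, b):
    """Probability of a hazard (e.g., fatality) at a given insult level,
    using the conventional form Y = a + b*ln(insult), P = Phi(Y - 5)."""
    Y = a + b * log(insult)
    return 0.5 * (1.0 + erf((Y - 5.0) / sqrt(2.0)))

# Example: a hypothetical probit (a = -10.0, b = 2.0) evaluated at several
# insult levels to produce a probability-vs-insult curve.
curve = {x: probit_probability(x, a=-10.0, b=2.0) for x in (500, 1000, 2000, 4000)}
```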
Magnetized Liner Inertial Fusion (MagLIF) [Slutz et al., Phys. Plasmas 17, 056303 (2010)] experiments driven by the Z machine produce >10¹³ deuterium-deuterium fusion reactions [Gomez et al., Phys. Rev. Lett. 125, 155002 (2020)]. Simulations indicate high yields and gains (1000) with increased current and deuterium-tritium layers for burn propagation [Slutz et al., Phys. Plasmas 23, 022702 (2016)]. Such a layer also isolates the metal liner from the gaseous fuel, which should reduce mixing of liner material into the fuel. However, the vapor density at the triple point is only 0.3 kg/m³, which is not high enough for MagLIF operation. We present two solutions to this problem. First, a fuel-wetted low-density plastic foam can be used to form a layer on the inside of the liner; the desired vapor density can then be obtained by controlling the temperature. This does, however, introduce carbon into the layer, which will enhance radiation losses. Simulations indicate that this wetted-foam layer can contribute significantly to the fusion yield when the foam density is less than 35 kg/m³. Second, we show that a pure frozen fuel layer can first be formed on the inside of the liner, and low-temperature gaseous fuel can then be introduced just before the implosion without melting a significant amount of the ice layer. This approach is the most promising for MagLIF to produce high yield and gain.
The Multi-Fidelity Toolkit (MFTK) is a simulation tool being developed at Sandia National Laboratories for aerodynamic predictions of compressible flows over a range of physics fidelities and computational speeds. The models include the Reynolds-averaged Navier-Stokes (RANS) equations, the Euler equations, and the modified Newtonian aerodynamics (MNA) equations, and they can be invoked independently or coupled through hierarchical Kriging to interpolate between high-fidelity simulations using lower-fidelity data. However, as with any new simulation capability, verification and validation are necessary to gather credibility evidence. This work describes formal code- and solution-verification activities as well as model validation with uncertainty considerations. Code verification is performed on the MNA model by comparison with an analytical solution for flat-plate and inclined-plate geometries. Solution-verification activities include grid-refinement studies, at all model fidelities, of the HIFiRE-1 wind tunnel case, whose measurements are used for validation. A thorough treatment of the validation comparison, including prediction error and validation uncertainty, is also presented.
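As context for the code-verification exercise, the sketch below evaluates the modified Newtonian relation Cp = Cp,max·sin²θ with Cp,max from the Rayleigh pitot (stagnation-point) formula for a windward flat plate; the Mach number, deflection angle, and function names are illustrative assumptions, not MFTK's implementation or the exact verification case.

```python
# Illustrative modified Newtonian aerodynamics (MNA) evaluation for an
# inclined flat plate; parameters are assumed for the example.
import math

def cp_max_stagnation(mach, gamma=1.4):
    """Stagnation-point pressure coefficient behind a normal shock (Rayleigh pitot formula)."""
    ratio = ((gamma + 1.0) ** 2 * mach**2) / (4.0 * gamma * mach**2 - 2.0 * (gamma - 1.0))
    p02_p1 = ratio ** (gamma / (gamma - 1.0)) * (1.0 - gamma + 2.0 * gamma * mach**2) / (gamma + 1.0)
    return 2.0 * (p02_p1 - 1.0) / (gamma * mach**2)

def cp_modified_newtonian(mach, deflection_deg, gamma=1.4):
    """Modified Newtonian surface pressure coefficient, Cp = Cp_max * sin^2(theta),
    on a panel inclined at deflection_deg to the freestream (0 if shadowed)."""
    theta = math.radians(deflection_deg)
    if theta <= 0.0:
        return 0.0  # leeward/shadowed surface carries no Newtonian pressure
    return cp_max_stagnation(mach, gamma) * math.sin(theta) ** 2

# Example: a flat plate at 10 degrees incidence in a Mach 7 freestream.
cp_plate = cp_modified_newtonian(mach=7.0, deflection_deg=10.0)
```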
Progress and status reviews allow teams to provide updates and targeted information designed to inform the customer of progress and to help the customer understand current risks and challenges. Both presenters and the customer should have well-calibrated expectations for the level of content and information. However, what needs to be covered in systems-level management reviews is too often poorly defined. These unclear expectations can lead teams to overprepare or to guess what information the customer considers most critical. This aspect of the review process is stressful, disruptive, and bad for morale, and time spent overpreparing reports is time not spent on the technical work necessary to stay on schedule. To define and address these issues, this effort was designed to observe various aspects of development program coordination and review activities for NNSA and Navy customers and then to conduct unbiased, independent Human Factors observation and analysis from an outside perspective. The report concludes with suggestions and recommendations for improving the efficiency of information flow related to reviews, with the goals of increasing productivity and benefitting both Sandia and the customer.
De Lucia, Frank C.; Giri, Lily; Pesce-Rodriguez, Rose A.; Wu, Chi C.; Dean, Steven W.; Tovar, Trenton M.; Sausa, Rosario C.; Wainwright, Elliot R.; Gottfried, Jennifer L.
We characterized nine commercial aluminum (Al) powders using several particle characterization and thermal analysis methods, with the goal of understanding how these parameters influence energy release. Although it is well known that lot-to-lot variations in commercial nanoparticles are common, the Al powders were more heterogeneous than anticipated, with regard to both particle size distributions and impurities. Manufacturer specifications, often quoted in the literature without confirmation, were not always accurate for the specific sample lots we investigated. In several cases, different conclusions could be drawn from individual particle-sizing techniques; a combination of multiple techniques provides a clearer picture of the powder properties. Thorough characterization of Al powders is therefore required prior to interpretation of experimental results from a variety of applications. In particular, previous studies have shown contradictory results on the influence of Al on detonation performance, perhaps partially due to insufficient characterization of the Al powders themselves.