The supercritical carbon dioxide (sCO2) Brayton cycle is a promising candidate for future nuclear reactors due to its ability to improve power cycle energy conversion efficiency. The sCO2 Brayton cycle can operate with an efficiency of 45–50% at operating temperatures of 550–700 °C. One of the greatest hurdles currently faced by sCO2 Brayton cycles is the extreme corrosivity of sCO2. This affects the longevity of the power cycle and thus the levelized cost of electricity. Past studies have shown that sCO2 corrosion occurs through the formation of metal carbonates, oxide layers, and carburization, and that alloys containing Cr, Mo, and Ni generally exhibit less corrosion. While stainless steels may offer sufficient corrosion resistance at the lower end of the temperature range seen by sCO2 Brayton cycles, more expensive alloys such as Inconel and Haynes are typically needed for the higher-temperature regions. This study investigates the effects of corrosion on the Haynes 230 alloy, focusing on changes in its mechanical properties.
This paper explores the systems engineering structure, strategies, and tools for real-world scenarios involving accident response planning. A systems engineering approach must be taken by the technical teams to prepare for a successful response and to design the technical systems that support the operations. The scope of this project is focused on laying out the foundation of the systems engineering approach taken to help the teams develop an accident response strategy and identify new engineering designs in support of these operations for the black box systems. This Master's project involved several interdisciplinary teams and stakeholders. Identifying the proper tools to use was key to addressing the big-picture needs of the multiple stakeholders. The integrated project work primarily took place over the course of eight weeks via integrated team meetings. Other work in support of this project was conducted offline as needed by the project lead. Details on the prospective timeline, milestones, key dates, and work scope can be referenced in this paper. Key systems engineering methodologies and tools used in support of this project included, but were not limited to: market surveys and interviews, a project charter, a feasibility study, a swim lane diagram, and a knowledge management plan. A full suite of tools, the details regarding the application of these tools, and the results of this study are provided in this report.
Eldridge, Brent; Castillo, Anya; Knueven, Bernard; Garcia, Manuel J.
This document is an online supplement for "Sparse, Dense, and Compact Linearizations of the AC OPF." Here we present complete derivations of the formulations examined, details of the lazy constraint algorithm, and the full computational results supporting that paper.
Macrohomogeneous battery models are widely used to predict battery performance, necessarily relying on effective electrode properties, such as specific surface area, tortuosity, and electrical conductivity. While these properties are typically estimated using ideal effective medium theories, in practice they exhibit highly non-ideal behaviors arising from their complex mesostructures. In this paper, we computationally reconstruct electrodes from X-ray computed tomography of 16 nickel-manganese-cobalt-oxide electrodes, manufactured using various material recipes and calendering pressures. Due to imaging limitations, a synthetic conductive binder domain (CBD) consisting of binder and conductive carbon is added to the reconstructions using a binder bridge algorithm. Reconstructed particle surface areas are significantly smaller than standard approximations predicted, as the majority of the particle surface area is covered by CBD, affecting electrochemical reaction availability. Finite element effective property simulations are performed on 320 large electrode subdomains to analyze trends and heterogeneity across the electrodes. Significant anisotropy of up to 27% in tortuosity and 47% in effective conductivity is observed. Electrical conductivity increases up to 7.5× with particle lithiation. We compare the results to traditional Bruggeman approximations and offer improved alternatives for use in cell-scale modeling, with Bruggeman exponents ranging from 1.62 to 1.72 rather than the theoretical value of 1.5. We also conclude that the CBD phase alone, rather than the entire solid phase, should be used to estimate effective electronic conductivity. This study provides insight into mesoscale transport phenomena and results in improved effective property approximations founded on realistic, image-based morphologies.
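As an illustration of the Bruggeman-type approximation referenced above, the block below states the standard scaling of effective transport properties with phase volume fraction; this generic form and its symbols are assumptions added for context rather than notation taken from the paper.

```latex
% Bruggeman-type effective-property scaling (standard assumed form):
% \varepsilon is the phase volume fraction, \alpha the Bruggeman exponent.
\begin{equation*}
  \kappa_{\mathrm{eff}} = \kappa_{\mathrm{bulk}}\,\varepsilon^{\alpha},
  \qquad
  \tau = \varepsilon^{1-\alpha}.
\end{equation*}
% Classical theory takes \alpha = 1.5; the study above reports fitted
% exponents of roughly 1.62--1.72 for the imaged electrodes.
```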
Since its introduction 25 years ago, the probe-fed U-slot patch antenna has remained popular. Recently, Characteristic Mode Analysis (CMA) revealed that these devices are governed by Coupled Mode Theory (CMT). Although this principle is conceptually simple, arriving at this understanding is only possible through a systematic analysis using CMA. This paper uses the U-slot patch to illustrate a general process for analyzing electrically small antennas using CMA with the software package FEKO.
Component coupling is a crucial part of climate models, such as DOE's E3SM (Caldwell et al., 2019). A common coupling strategy in climate models is for their components to exchange flux data from the previous time-step. This approach effectively performs a single step of an iterative solution method for the monolithic coupled system, which may lead to instabilities and loss of accuracy. In this paper we formulate an Interface-Flux-Recovery (IFR) coupling method which improves upon the conventional coupling techniques in climate models. IFR starts from a monolithic formulation of the coupled discrete problem and then uses a Schur complement to obtain an accurate approximation of the flux across the interface between the model components. This decouples the individual components and allows one to solve them independently by using schemes that are optimized for each component. To demonstrate the feasibility of the method, we apply IFR to a simplified ocean–atmosphere model for heat-exchange coupled through the so-called bulk condition, common in ocean–atmosphere systems. We then solve this model on matching and non-matching grids to estimate numerically the convergence rates of the IFR coupling scheme.
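To make the interface-flux-recovery idea concrete, the sketch below writes a generic monolithic coupled system with the interface flux as an explicit unknown and eliminates the component states via a Schur complement; the block notation is an illustrative assumption, not the paper's discretization.

```latex
% Generic monolithic system: u_1, u_2 are component states, \lambda the interface flux.
\begin{equation*}
\begin{bmatrix}
  A_1 & 0   & B_1 \\
  0   & A_2 & B_2 \\
  C_1 & C_2 & D
\end{bmatrix}
\begin{bmatrix} u_1 \\ u_2 \\ \lambda \end{bmatrix}
=
\begin{bmatrix} f_1 \\ f_2 \\ g \end{bmatrix}
\end{equation*}
% Eliminating u_1 and u_2 yields a Schur-complement equation for the flux, which
% each component can then take as boundary data and be solved independently:
\begin{equation*}
  \bigl(D - C_1 A_1^{-1} B_1 - C_2 A_2^{-1} B_2\bigr)\lambda
  = g - C_1 A_1^{-1} f_1 - C_2 A_2^{-1} f_2 .
\end{equation*}
```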
Functional variables are often used as predictors in regression problems. A commonly used parametric approach, called scalar-on-function regression, uses the L2 inner product to map functional predictors into scalar responses. This method can perform poorly when predictor functions contain undesired phase variability, causing phases to have disproportionately large influence on the response variable. One past solution has been to perform phase–amplitude separation (as a pre-processing step) and then use only the amplitudes in the regression model. Here we propose a more integrated approach, termed elastic functional regression model (EFRM), where phase-separation is performed inside the regression model, rather than as a pre-processing step. This approach generalizes the notion of phase in functional data, and is based on the norm-preserving time warping of predictors. Due to its invariance properties, this representation provides robustness to predictor phase variability and results in improved predictions of the response variable over traditional models. We demonstrate this framework using a number of datasets involving gait signals, NMR data, and stock market prices.
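For orientation, the block below contrasts the standard scalar-on-function model with one plausible form of the elastic variant described above; the square-root-velocity representation and the optimization over warpings are stated as assumptions rather than as the authors' exact definitions.

```latex
% Standard scalar-on-function regression via the L2 inner product:
\begin{equation*}
  y_i = \alpha + \int_0^1 x_i(t)\,\beta(t)\,dt + \epsilon_i .
\end{equation*}
% Plausible elastic variant (illustrative): predictors are mapped to square-root
% velocity functions q_i = \dot{x}_i/\sqrt{|\dot{x}_i|}, on which a warping \gamma
% acts by the norm-preserving map (q_i,\gamma) \mapsto (q_i\circ\gamma)\sqrt{\dot{\gamma}},
% and the warping is handled inside the model rather than as pre-processing:
\begin{equation*}
  y_i = \alpha + \sup_{\gamma \in \Gamma}
        \int_0^1 (q_i\circ\gamma)(t)\,\sqrt{\dot{\gamma}(t)}\;\beta(t)\,dt + \epsilon_i .
\end{equation*}
```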
Lean operation of Spark-Ignition engines can provide higher thermal efficiency compared to standard stoichiometric operation. However, for a homogeneous lean mixture, the associated reduction of flame speeds becomes an important issue from the perspective of robust ignition and fast flame spread throughout the charge. This study is focused on the use of a lean partial fuel stratification strategy that can stabilize the deflagration, while sufficiently fast combustion is ensured via the use of end-gas autoignition. The engine has a spray-guided Direct-Injection Spark-Ignition combustion system and was fueled with either a high-octane certification gasoline or E85. Partial fuel stratification was achieved using several fuel injections during the intake stroke in combination with a small pilot-injection concurrent with the Spark-Ignition. The results reveal that partial fuel stratification enables very stable combustion, offering higher thermal efficiency for parts of the load range in comparison to well-mixed lean and stoichiometric combustion. The heat release and flame imaging demonstrate that the combustion often has three distinct stages. The combustion of the pilot-injected fuel, ignited by the normal spark, acts as a “super igniter,” ensuring a very repeatable initiation of combustion, and flame incandescence reveals locally rich conditions. The second stage is mainly composed of blue flame propagation in a well-mixed lean mixture. The third stage is the compression autoignition of a well-mixed and typically very lean end-gas. The end-gas autoignition is critical for achieving high combustion efficiency, high thermal efficiency, and stable combustion. Partial fuel stratification enables very effective combustion-phasing control, which is critical for controlling the occurrence and intensity of end-gas autoignition. Comparing the gasoline and E85 fuels, it is noted that achieving end-gas autoignition for the higher octane E85 requires a more aggressive compression of the end-gas via the use of a more advanced combustion phasing or higher intake-air temperature.
Kustas, Jessica; Hoffman, Jacob B.; Alonso, David; Reed, Julian H.; Gonsalves, Andrew E.; Oh, Junho; Hong, Sungmin; Jo, Kyoo D.; Dana, Catherine E.; Alleyne, Marianne; Cropek, Donald M.
Cicada wings exhibit several intriguing properties that arise from a combination of nanopillar structures and chemical constituents, including superhydrophobicity, as well as antimicrobial, antireflective, and self-cleaning functions. While the physical dimensions of the nanofeatures are relatively simple to characterize through microscopy, the chemicals that cover these pillars are more difficult to characterize due to the variety and complexity of the mixture. Here, we compared the extractable chemicals from the wing surfaces of two different cicada species using both gas chromatography time-of-flight mass spectrometry (GC-TOFMS) and two-dimensional gas chromatography time-of-flight mass spectrometry (GC × GC-TOFMS) platforms. Chemical extracts from Neotibicen pruinosus and Magicicada septendecim cicada wings were separated and analyzed. The GC × GC-TOFMS platform was able to isolate and identify roughly three times the number of constituents as the GC-TOFMS platform at a signal-to-noise ratio (SNR) ≥10.0 and spectral similarity ≥800. When comparing the two cicada species' wing extracts, the two-dimensional platform was able to expose differences in the chemical composition that were undetectable by the one-dimensional technique. GC × GC-TOFMS revealed nearly four times the number of unique species-specific compounds as compared to the number identified by GC-TOFMS. Further, surface chemicals were identified that are likely xenobiotics and can pinpoint the location where the cicada was collected and the contamination present there. While the advantages of GC × GC-TOFMS over GC-TOFMS have been documented in the past, our work presents a powerful biological application of GC × GC-TOFMS, with promise to reveal species-specific biomarkers while providing insight into the environmental conditions of individual organisms.
The study of thermal effects, both classical and quantum, at cryogenic temperatures requires the use of on-chip, local, high-sensitivity thermometry. Carbon-platinum composites fabricated using focused ion beam (FIB) assisted deposition form a granular structure which is shown in this study to be uniquely suited to this application. Carbon-platinum thermometers deposited using a 24 pA ion beam current have high sensitivities below 1 K, comparable to the best cryogenic thermometers. In addition, these thermometers can be accurately placed to within tens of nanometers on the chip using a mask-free process. They also have a weak magnetic field dependence, with less than a 3% change in resistance for applied magnetic fields from 0 to 8 T. Finally, these thermometers can be integrated into a variety of nanoscale devices due to the existing widespread use of FIB.
We demonstrate a Bayesian method for the “real-time” characterization and forecasting of a partially observed COVID-19 epidemic. Characterization is the estimation of infection spread parameters using daily counts of symptomatic patients. The method is designed to help guide medical resource allocation in the early epoch of the outbreak. The estimation problem is posed as one of Bayesian inference and solved using a Markov chain Monte Carlo technique. The data used in this study were sourced before the arrival of the second wave of infection in July 2020. The proposed modeling approach, when applied at the country level, generally provides accurate forecasts at the regional, state, and country levels. The epidemiological model detected the flattening of the curve in California after public health measures were instituted. The method also detected different disease dynamics when applied to specific regions of New Mexico.
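A minimal sketch of the kind of Markov chain Monte Carlo machinery described above is given below; the curve model, priors, and step sizes are illustrative assumptions and are not the parameterization used in the paper.

```python
# Hedged sketch: random-walk Metropolis sampling of parameters of a simple
# epidemic-curve model from daily symptomatic-case counts (Poisson likelihood).
import numpy as np

def log_posterior(theta, counts):
    """Log-quadratic mean curve with a Poisson likelihood (illustrative model)."""
    a, b, c = theta
    t = np.arange(len(counts))
    lam = np.exp(a + b * t + c * t**2)                # expected daily cases
    if not np.all(np.isfinite(lam)):
        return -np.inf
    loglik = np.sum(counts * np.log(lam) - lam)       # Poisson log-likelihood (up to a constant)
    logprior = -0.5 * np.sum((theta / np.array([10.0, 1.0, 0.1])) ** 2)  # weak Gaussian priors
    return loglik + logprior

def metropolis(counts, n_steps=20000):
    rng = np.random.default_rng(0)
    step = np.array([0.05, 0.01, 0.001])              # proposal widths
    theta = np.array([np.log(max(counts[0], 1.0)), 0.1, 0.0])
    lp = log_posterior(theta, counts)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(3)
        lp_prop = log_posterior(prop, counts)
        if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# usage: posterior samples of the curve parameters support forecast bands
daily_counts = np.array([2, 3, 5, 8, 13, 20, 31, 45, 70, 98], dtype=float)
chain = metropolis(daily_counts)
print(chain[len(chain) // 2:].mean(axis=0))           # posterior means after burn-in
```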
X-ray phase contrast imaging (XPCI) is a nondestructive evaluation technique that enables high-contrast detection of low-attenuation materials that are largely transparent in traditional radiography. Extending a grating-based Talbot-Lau XPCI system to three-dimensional imaging with computed tomography (CT) imposes two motion requirements: the analyzer grating must translate transverse to the optical axis to capture image sets for XPCI reconstruction, and the sample must rotate to capture angular data for CT reconstruction. The acquisition algorithm determines the order of movement and positioning of the two stages, and its choice is instrumental in collecting high-fidelity data for reconstruction. We investigate how data acquisition influences XPCI CT by comparing two simple data acquisition algorithms and determine that capturing a full phase-stepping image set for a CT projection before rotating the sample results in higher quality data.
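The two acquisition orderings compared in the study can be summarized as the nested loops below; the stage and detector calls are hypothetical placeholders standing in for the actual instrument interface.

```python
# Hedged sketch of the two XPCI CT acquisition orderings.
def rotate_sample(theta):
    pass  # placeholder: command the CT rotation stage

def move_grating(xg):
    pass  # placeholder: translate the analyzer grating transverse to the optical axis

def acquire_image():
    return None  # placeholder: read out one detector frame

def stepping_then_rotate(angles, grating_steps):
    """Ordering found to give higher-quality data: a complete phase-stepping
    image set is captured at each projection angle before rotating."""
    data = {}
    for theta in angles:                  # outer loop: CT rotation
        rotate_sample(theta)
        frames = []
        for xg in grating_steps:          # inner loop: analyzer-grating translation
            move_grating(xg)
            frames.append(acquire_image())
        data[theta] = frames
    return data

def rotate_then_stepping(angles, grating_steps):
    """Alternative ordering: a full CT rotation is performed at each grating position."""
    data = {}
    for xg in grating_steps:              # outer loop: grating translation
        move_grating(xg)
        for theta in angles:              # inner loop: CT rotation
            rotate_sample(theta)
            data.setdefault(theta, []).append(acquire_image())
    return data
```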
We present an effort to port the nonhydrostatic atmosphere dynamical core of the Energy Exascale Earth System Model (E3SM) to run efficiently on a variety of architectures, including conventional CPU, many-core CPU, and GPU. We specifically target cloud-resolving resolutions of 3 km and 1 km. To express on-node parallelism we use the C++ library Kokkos, which allows us to achieve a performance-portable code in a largely architecture-independent way. Our C++ implementation is at least as fast as the original Fortran implementation on IBM Power9 and Intel Knights Landing processors, showing that the code refactor did not compromise efficiency on CPU architectures. When using GPUs, our implementation achieves 0.97 Simulated Years Per Day running on the full Summit supercomputer. To the best of our knowledge, this is the highest throughput achieved to date by any global atmosphere dynamical core running at such resolutions.
Ebrish, Mona A.; Anderson, Travis J.; Koehler, Andrew D.; Foster, Geoffrey M.; Gallagher, James C.; Kaplar, Robert; Gunning, Brendan P.; Hobart, Karl D.
GaN is a favorable material for future efficient high-voltage power switches. GaN has not yet dominated the power electronics market, however, due to immature substrates, homoepitaxial growth, and processing technology. Understanding the impact of the substrate and homoepitaxial growth on device performance is crucial for boosting the performance of GaN. In this work, we studied vertical GaN PiN diodes that were fabricated on non-homogeneous Hydride Vapor Phase Epitaxy (HVPE) substrates from two different vendors. We show that defects stemming from the growth techniques manifest themselves as leakage hubs. Different non-homogeneous substrates showed different spatial distributions of those defects, with the lower-quality substrates grouping the defects into clusters that cause premature breakdown. Energetically, these defects are mostly mid-gap, around 1.8 eV, with light emission spanning from 450 nm to 700 nm. Photon emission spectrometry and hyperspectral electroluminescence were used to locate these defects spatially and energetically.
In shale gas production, gas composition may vary over time. To understand this phenomenon, we use molecular dynamics simulations to study the permeation of CH4, C2H6, and their mixture from a source container through a pyrophyllite nanopore driven by a pressure gradient. For a pure gas, the flow rate of CH4 is always higher than that of C2H6, regardless of pore size. For a 1:1 C2H6:CH4 mixture, however, the C2H6:CH4 flow rate ratio is higher than the compositional ratio in the container (i.e., 1:1) when the pore size is smaller than ~1.8 nm. The selective transport is caused by the competitive adsorption of C2H6 over CH4 in the nanopore. The selectivity is also determined by the interplay between the surface diffusion of the adsorbed molecules and the viscous flow in the center of the pore, and it diminishes as the viscous flow comes to dominate the surface diffusion when the pore size becomes larger than ~1.8 nm. Our work shows that compositional differentiation of shale gas during production is a consequence of nanopore confinement and therefore a key characteristic of an unconventional reservoir. The related compositional information can potentially be used for monitoring the status of a production well, such as its recovery rate.
Digital computing is nearing its physical limits as computing needs and energy consumption rapidly increase. Analogue-memory-based neuromorphic computing can be orders of magnitude more energy efficient at data-intensive tasks like deep neural networks, but has been limited by the inaccurate and unpredictable switching of analogue resistive memory. Filamentary resistive random access memory (RRAM) suffers from stochastic switching due to the random kinetic motion of discrete defects in the nanometer-sized filament. In this work, this stochasticity is overcome by incorporating a solid electrolyte interlayer, in this case, yttria-stabilized zirconia (YSZ), toward eliminating filaments. Filament-free, bulk-RRAM cells instead store analogue states using the bulk point defect concentration, yielding predictable switching because the statistical ensemble behavior of oxygen vacancy defects is deterministic even when individual defects are stochastic. Both experiments and modeling show bulk-RRAM devices using TiO2-X switching layers and YSZ electrolytes yield deterministic and linear analogue switching for efficient inference and training. Bulk-RRAM solves many outstanding issues with memristor unpredictability that have inhibited commercialization, and can, therefore, enable unprecedented new applications for energy-efficient neuromorphic computing. Beyond RRAM, this work shows how harnessing bulk point defects in ionic materials can be used to engineer deterministic nanoelectronic materials and devices.
Proceedings of INDIS 2020: Innovating the Network for Data-Intensive Science, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Priority-based Flow Control (PFC), RDMA over Converged Ethernet (RoCE) and Enhanced Transmission Selection (ETS) are three enhancements to Ethernet networks which allow increased performance and may make Ethernet attractive for systems supporting a diverse scientific workload. We constructed a 96-node testbed cluster with a 100 Gb/s Ethernet network configured as a tapered fat tree. Tests representing important network operating conditions were completed and we provide an analysis of these performance results. RoCE running over a PFC-enabled network was found to significantly increase performance for both bandwidth-sensitive and latency-sensitive applications when compared to TCP. Additionally, a case study of interfering applications showed that ETS can prevent starvation of network traffic for latency-sensitive applications running on congested networks. We did not encounter any notable performance limitations for our Ethernet testbed, but we found that practical disadvantages still tip the balance towards traditional HPC networks unless a system design is driven by additional external requirements.
Potential performance gains from optimal (non-causal) impedance-matching control of wave energy devices in irregular ocean waves depend on deterministic wave elevation prediction techniques that work well in practical applications. Although a number of devices are designed for operation in intermediate water depths, little work has been reported on deterministic wave prediction at such depths. Investigated in this paper is a deterministic wave-prediction technique based on an approximate propagation model that leads to an analytical formulation, which may be convenient to implement in practice. To improve accuracy, an approach to combining predictions based on multiple up-wave measurement points is evaluated. The overall method is tested using experimental time-series measurements recorded in the U.S. Navy MASK basin in Carderock, MD, USA. For comparison, an alternative prediction approach based on Fourier coefficients is also tested with the same data. Comparison of the prediction approaches with direct measurements suggests room for improvement. Possible sources of error, including tank reflections, are estimated, and potential mitigation approaches are discussed.
Proceedings of ExaMPI 2020: Exascale MPI Workshop, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
We present the execution model of Virtual Transport (VT), a new Asynchronous Many-Task (AMT) runtime system that provides unprecedented integration and interoperability with MPI. We have developed VT in conjunction with large production applications to provide a highly incremental, high-value path to AMT adoption in the dominant ecosystem of MPI applications, libraries, and developers. Our aim is that the 'MPI+X' model of hybrid parallelism can smoothly extend to become 'MPI+VT+X'. We illustrate a set of design and implementation techniques that have been useful in building VT. We believe that these ideas and the code embodying them will be useful to others building similar systems, and perhaps provide insight into how MPI might evolve to better support them. We motivate our approach with two applications that are adopting VT and have begun to benefit from increased asynchrony and dynamic load balancing.
Proceedings of PMBS 2020: Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis
Dominguez-Trujillo, Jered; Haskins, Keira; Khouzani, Soheila J.; Leap, Christopher; Tashakkori, Sahba; Wofford, Quincy; Estrada, Trilce; Bridges, Patrick G.; Widener, Patrick
Performance variation deriving from hardware and software sources is common in modern scientific and data-intensive computing systems, and synchronization in parallel and distributed programs often exacerbates its impacts at scale. The decentralized and emergent effects of such variation are, unfortunately, also difficult to systematically measure, analyze, and predict; modeling assumptions stringent enough to make analysis tractable frequently cannot be guaranteed at meaningful application scales, and longitudinal methods at such scales can require the capture and manipulation of impractically large amounts of data. This paper describes a new, scalable, and statistically robust approach for effective modeling, measurement, and analysis of large-scale performance variation in HPC systems. Our approach avoids the need to reason about complex distributions of runtimes among large numbers of individual application processes by focusing instead on the maximum length of distributed workload intervals. We describe this approach and its implementation in MPI, which makes it applicable to a diverse set of HPC workloads. We also present evaluations of these techniques for quantifying and predicting performance variation carried out on large-scale computing systems, and discuss the strengths and limitations of the underlying modeling assumptions.
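A minimal numerical sketch of the interval-maximum viewpoint is given below: for a bulk-synchronous phase, the step time is governed by the slowest rank, i.e., the maximum of the per-rank interval lengths. The lognormal fit is an illustrative assumption, not the paper's model.

```python
# Hedged sketch: predict bulk-synchronous step time as E[max of per-rank intervals].
import numpy as np

def predict_step_time(interval_samples, n_ranks, n_draws=100_000, seed=0):
    """Monte Carlo estimate of the expected maximum interval length across ranks,
    using a lognormal fit to observed per-rank interval samples (illustrative)."""
    rng = np.random.default_rng(seed)
    logs = np.log(interval_samples)
    mu, sigma = logs.mean(), logs.std()
    draws = rng.lognormal(mu, sigma, size=(n_draws, n_ranks))
    return draws.max(axis=1).mean()       # the slowest rank sets the step time

# usage: a few measured interval lengths (seconds) from one rank
samples = np.array([1.02, 0.98, 1.10, 1.05, 0.95, 1.20, 1.01])
for n in (16, 256, 4096):
    print(n, round(predict_step_time(samples, n), 3))   # grows slowly with scale
```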
Arm processors have been explored in HPC for several years; however, there has not yet been a demonstration of viability for supporting large-scale production workloads. In this paper, we offer a retrospective on the process of bringing up Astra, the first petascale supercomputer based on 64-bit Arm processors, and validating its ability to run production HPC applications. Through this process several immature technology gaps were addressed, including software stack enablement, Linux bugs at scale, thermal management issues, power management capabilities, and advanced container support. From this experience, several lessons learned are formulated that contributed to the successful deployment of Astra. These insights can help accelerate the deployment and maturation of other first-seen HPC technologies. With Astra now supporting many users running a diverse set of production applications at multi-thousand-node scales, we believe this constitutes strong evidence that Arm is a viable technology for even the largest-scale supercomputer deployments.
Shale is characterized by the predominant presence of nanometer-scale (1-100 nm) pores. The behavior of fluids in those pores directly controls shale gas storage and release in the shale matrix and ultimately the wellbore production in unconventional reservoirs. Recently, it has been recognized that a fluid confined in nanopores can behave dramatically differently from the corresponding bulk phase due to nanopore confinement. CO2 and H2O, either preexisting or introduced, are two major components that coexist with shale gas (predominantly CH4) during hydrofracturing and gas extraction. Note that liquid or supercritical CO2 has been suggested as an alternative fluid for subsurface fracturing such that CO2-enhanced gas recovery can also serve as a CO2 sequestration process. Limited data indicate that CO2 may preferentially adsorb in nanopores (particularly those in kerogen) and therefore displace CH4 in shale. Similarly, the presence of water moisture seems able to displace or trap CH4 in the shale matrix. Therefore, fundamental understanding of CH4-CO2-H2O behavior and their interactions in shale nanopores is of great importance for gas production and the related CO2 sequestration. This project focuses on the systematic study of CH4-CO2-H2O interactions in shale nanopores under high-pressure and high-temperature reservoir conditions. The proposed work will help develop new stimulation strategies to enable efficient resource recovery from fewer and less environmentally impactful wells.
The goal of the ExaWind project is to enable predictive simulations of wind farms comprised of many megawatt-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations for which the mesh resolves the geometry of the turbines and captures the rotation and large deflections of blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources. The primary physics codes in the ExaWind simulation environment are Nalu-Wind, an unstructured-grid solver for the acoustically incompressible Navier-Stokes equations, AMR-Wind, a block-structured-grid solver with adaptive mesh refinement capabilities, and OpenFAST, a wind-turbine structural dynamics solver. The Nalu-Wind model consists of the mass-continuity Poisson-type equation for pressure and Helmholtz-type equations for transport of momentum and other scalars. For such modeling approaches, simulation times are dominated by linear-system setup and solution for the continuity and momentum systems. For the ExaWind challenge problem, the moving meshes greatly affect overall solver costs as reinitialization of matrices and recomputation of preconditioners is required at every time step. The choice of overset-mesh methodology to model the moving and non-moving parts of the computational domain introduces constraint equations in the elliptic pressure-Poisson solver. The presence of constraints greatly affects the performance of algebraic multigrid preconditioners.
Quantum Monte Carlo (QMC) methods are useful for studies of strongly correlated materials because they are many body in nature and use the physical Hamiltonian. Typical calculations assume as a starting point a wave function constructed from single-particle orbitals obtained from one-body methods, e.g., density functional theory. However, mean-field-derived wave functions can sometimes lead to systematic QMC biases if the mean-field result poorly describes the true ground state. Here, we study the accuracy and flexibility of QMC trial wave functions using variational and fixed-node diffusion QMC estimates of the total spin density and lattice distortion of antiferromagnetic iron oxide (FeO) in the ground state B1 crystal structure. We found that for relatively simple wave functions the predicted lattice distortion was controlled by the choice of single-particle orbitals used to construct the wave function, rather than by subsequent wave function optimization techniques within QMC. By optimizing the orbitals with QMC, we then demonstrate starting-point independence of the trial wave function with respect to the method by which the orbitals were constructed by demonstrating convergence of the energy, spin density, and predicted lattice distortion for two qualitatively different sets of orbitals. The results suggest that orbital optimization is a promising method for accurate many-body calculations of strongly correlated condensed phases.
Coherent anti-Stokes Raman scattering (CARS) is a valuable spectroscopic tool for the measurement of temperature and species concentration. In recent years, multi-dimensional CARS has seen focused development and is especially important in reacting flows. An important aspect of multi-dimensional CARS is the phase-matching scheme used. Historically, collinear and BOXCARS phase-matching schemes have been used to achieve phase matching over a broad spectral range. For 1-D and 2-D CARS imaging, two-beam or counter-propagating beam arrangements are necessary. The two-beam arrangement offers many advantages, but introduces a phase mismatch which limits the spectral response of the measurement. This work explores the tradeoffs in spatial resolution, spectral bandwidth, and CARS intensity in 2-D CARS arrangements. Calculations are made for two-beam and counter-propagating beam CARS.
Mishra, Umakant; Gautam, Sagar; Riley, William J.; Hoffman, Forrest M.
Various approaches of differing mathematical complexity are being applied for spatial prediction of soil properties. Regression kriging is a widely used hybrid approach for spatial prediction that combines correlation between soil properties and environmental factors with spatial autocorrelation between soil observations. In this study, we compared four machine learning approaches (gradient boosting machine, multivariate adaptive regression splines, random forest, and support vector machine) with regression kriging to predict the spatial variation of surface (0–30 cm) soil organic carbon (SOC) stocks at 250-m spatial resolution across the northern circumpolar permafrost region. We combined 2,374 soil profile observations (calibration datasets) with georeferenced datasets of environmental factors (climate, topography, land cover, bedrock geology, and soil types) to predict the spatial variation of surface SOC stocks. We evaluated the prediction accuracy at randomly selected sites (validation datasets) across the study area. We found that different techniques inferred different numbers of environmental factors and their relative importance for prediction of SOC stocks. Regression kriging produced lower prediction errors in comparison to multivariate adaptive regression splines and support vector machine, and comparable prediction accuracy to gradient boosting machine and random forest. However, the ensemble median prediction of SOC stocks obtained from all four machine learning techniques showed the highest prediction accuracy. Although the choice of approach for spatial prediction of soil properties will depend on the availability of soil and environmental datasets and computational resources, we conclude that the ensemble median prediction obtained from multiple machine learning approaches provides greater spatial detail and produces the highest prediction accuracy. Thus, an ensemble prediction approach can be a better choice than any single prediction technique for predicting the spatial variation of SOC stocks.
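The ensemble-median idea can be sketched with off-the-shelf learners as below; the models and data are scikit-learn stand-ins (multivariate adaptive regression splines is not available in scikit-learn, so an extra-trees regressor is substituted purely for illustration).

```python
# Hedged sketch: per-location median of predictions from several regressors.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_cal, X_val, y_cal, y_val = train_test_split(X, y, random_state=0)

models = [
    GradientBoostingRegressor(random_state=0),
    RandomForestRegressor(random_state=0),
    ExtraTreesRegressor(random_state=0),   # stand-in for the MARS model
    SVR(C=100.0),
]
preds = np.column_stack([m.fit(X_cal, y_cal).predict(X_val) for m in models])
ensemble_median = np.median(preds, axis=1)  # ensemble median across learners
rmse = np.sqrt(np.mean((ensemble_median - y_val) ** 2))
print(f"ensemble-median RMSE: {rmse:.2f}")
```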
Time-resolved x-ray diffraction (XRD) was used to examine the behavior of Ce under shock loading to stress states up to 22 GPa that span the shock-melt transition. Experiments reported here observed Ce held at a steady state for ∼500 ns prior to being uniaxially released to ambient pressure. Time-resolved XRD shows a constant diffraction pattern over the duration of the steady state with rapid solidification occurring on release. Cerium was found to remain crystalline as Poisson's ratio (ν) increases in the α-phase with incipient melt observed in XRD once ν reaches 0.5. Diffraction results along with sound speed measurements limit melt completion to be between 12 and 14 GPa, significantly lower than previously expected. The XRD results add confidence to previous methods used to define incipient melt and help to define a method to constrain the melt region along the Hugoniot independent of a light source.
We describe a set of precise single-ion conducting polymers that form self-assembled percolated ionic aggregates in glassy polymer matrices and have decoupled transport of metal cations. These precise single-ion conductors (SICs), synthesized by a scalable ring-opening metathesis polymerization, consist of a polyethylene backbone with a sulfonated phenyl group pendant on every fifth carbon and are fully neutralized by a counterion X+ (Li+, Na+, or Cs+). Experimental X-ray scattering measurements and fully atomistic molecular dynamics (MD) simulations are in good agreement. The MD simulations show that the ionic groups nanophase separate from the polymer backbone to form percolating ionic aggregates. Using graph theory, we find that within the Li+- and Na+-neutralized polymers the percolated aggregates exhibit planar and ribbon-like configurations at intermediate length scales, while the percolated aggregates within the Cs+-neutralized polymers are more isotropic. Electrical impedance spectroscopy measurements show that the ionic conductivities exhibit Arrhenius behavior, with conductivities of 10^-7 to 10^-6 S/cm at 180 °C. In the MD simulations, the cations move between sulfonate groups in the percolated aggregates, larger ions travel further, and overall cations travel further than the polymer backbones, indicating a decoupled ion-transport mechanism. Thus, the percolated ionic aggregates in these polymers can serve as pathways to facilitate decoupled ion motion through a glassy polymer matrix.
The availability of repair garage infrastructure for hydrogen fuel cell vehicles is becoming increasingly important for future industry growth. Ventilation requirements for hydrogen fuel cell vehicles can affect both retrofitted and purpose-built repair garages and the costs associated with these requirements can be significant. A hazard and operability (HAZOP) study was performed to identify risk-significant scenarios related to light-duty hydrogen vehicles in a repair garage. Detailed simulations and modeling were performed using appropriate computational tools to estimate the location, behavior, and severity of hydrogen release based on key HAZOP scenarios. Here, this work compares current fire code requirements to an alternate ventilation strategy to further reduce potential hazardous conditions. Modeling shows that position, direction, and velocity of ventilation have a significant impact on the amount of instantaneous flammable mass in the domain.
Tensor decomposition is a well-known tool for multiway data analysis. This work proposes using stochastic gradients for efficient generalized canonical polyadic (GCP) tensor decomposition of large-scale tensors. GCP tensor decomposition is a recently proposed version of tensor decomposition that allows for a variety of loss functions such as Bernoulli loss for binary data or Huber loss for robust estimation. Here, the stochastic gradient is formed from randomly sampled elements of the tensor and is efficient because it can be computed using the sparse matricized-tensor times Khatri-Rao product tensor kernel. For dense tensors, we simply use uniform sampling. For sparse tensors, we propose two types of stratified sampling that give precedence to sampling nonzeros. Numerical results demonstrate the advantages of the proposed approach and its scalability to large-scale problems.
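A minimal sketch of a stochastic gradient step for a three-way GCP decomposition follows; the Huber loss and uniform element sampling are illustrative choices (the paper also proposes stratified sampling for sparse tensors), and the scaling factor makes the sampled gradient an unbiased estimate of the full gradient.

```python
# Hedged sketch: stochastic gradient for generalized CP (GCP) decomposition.
import numpy as np

def huber_deriv(x, m, delta=1.0):
    """Derivative with respect to the model value m of the Huber loss against data x."""
    d = m - x
    return np.where(np.abs(d) <= delta, d, delta * np.sign(d))

def stochastic_gcp_gradient(X, A, B, C, n_samples, rng):
    """Sample tensor elements uniformly and scatter loss derivatives into factor gradients."""
    I, J, K = X.shape
    i = rng.integers(0, I, n_samples)
    j = rng.integers(0, J, n_samples)
    k = rng.integers(0, K, n_samples)
    m = np.einsum('sr,sr,sr->s', A[i], B[j], C[k])            # model values at sampled entries
    y = huber_deriv(X[i, j, k], m) * (I * J * K / n_samples)  # unbiased scaling for uniform sampling
    GA, GB, GC = np.zeros_like(A), np.zeros_like(B), np.zeros_like(C)
    np.add.at(GA, i, y[:, None] * (B[j] * C[k]))
    np.add.at(GB, j, y[:, None] * (A[i] * C[k]))
    np.add.at(GC, k, y[:, None] * (A[i] * B[j]))
    return GA, GB, GC

# usage: one SGD step on a small random tensor with rank-5 factors
rng = np.random.default_rng(0)
X = rng.random((30, 40, 50))
R = 5
A, B, C = (rng.standard_normal((n, R)) * 0.1 for n in X.shape)
GA, GB, GC = stochastic_gcp_gradient(X, A, B, C, n_samples=2000, rng=rng)
A -= 1e-3 * GA; B -= 1e-3 * GB; C -= 1e-3 * GC
```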
Spurgeon, Steven R.; Ophus, Colin; Jones, Lewys; Kalinin, Sergei V.; Olszta, Matthew J.; Dunin-Borkowski, Rafal E.; Salmon, Norman; Hattar, Khalid M.; Yang, Wei-Chang D.; Sharma, Renu; Du, Yingge; Chiaramonti, Ann; Zheng, Haimei; Buck, Edgar C.; Kovarik, Libor; Penn, R.L.; Li, Dongsheng; Zhang, Xin; Murayama, Mitsuhiro; Taheri, Mitra L.
Electron microscopy touches on nearly every aspect of modern life, underpinning materials development for quantum computing, energy and medicine. We discuss the open, highly integrated and data-driven microscopy architecture needed to realize transformative discoveries in the coming decade.
A semantic understanding of the environment is needed to enable high level autonomy in robotic systems. Recent results have demonstrated rapid progress in underlying technology areas, but few results have been reported on end-to-end systems that enable effective autonomous perception in complex environments. In this paper, we describe an approach for rapidly and autonomously mapping unknown environments with integrated semantic and geometric information. We use surfel-based RGB-D SLAM techniques, with incremental object segmentation and classification methods to update the map in realtime. Information theoretic and heuristic measures are used to quickly plan sensor motion and drive down map uncertainty. Preliminary experimental results in simple and cluttered environments are reported.
In this paper, we present an optimization-based coupling method for local and nonlocal continuum models. Our approach couches the coupling of the models in a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of Local-to-Nonlocal diffusion coupling. Numerical examples illustrate the theoretical properties of the approach.
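The control problem described above can be stated schematically as follows; the notation is illustrative, with u_n the nonlocal state driven by volume-constraint data θ_n, u_l the local state driven by boundary data θ_l, and Ω_o the overlap of the two domains.

```latex
% Schematic optimization-based coupling (illustrative notation):
\begin{equation*}
  \min_{\theta_n,\;\theta_l}\;
  \tfrac{1}{2}\,\bigl\| u_n(\theta_n) - u_l(\theta_l) \bigr\|_{L^2(\Omega_o)}^{2}
  \quad\text{subject to the nonlocal and local state equations,}
\end{equation*}
% where the virtual controls \theta_n and \theta_l are the nonlocal volume
% constraint and the local boundary condition, respectively.
```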
As self-sustained oscillators, lasers possess the unusual ability to spontaneously synchronize. These nonlinear dynamics are the basis for a simple yet powerful stabilization technique known as injection locking, in which a laser's frequency and phase can be controlled by an injected signal. Because of its inherent simplicity and favorable noise characteristics, injection locking has become a workhorse for coherent amplification and high-fidelity signal synthesis in applications ranging from precision atomic spectroscopy to distributed sensing. Within integrated photonics, however, these injection-locking dynamics remain relatively untapped, despite significant potential for technological and scientific impact. Here, we demonstrate injection locking in a silicon photonic Brillouin laser. Injection locking of this monolithic device is remarkably robust, allowing us to tune the laser emission by a significant fraction of the Brillouin gain bandwidth. Harnessing these dynamics, we demonstrate amplification of small signals by more than 23 dB. Moreover, we demonstrate that the injection-locking dynamics of this system are inherently nonreciprocal, yielding unidirectional control and backscatter immunity in an all-silicon system. This device physics opens the door to strategies for phase-noise reduction, low-noise amplification, and backscatter immunity in silicon photonics.
Here, we study a setting where a group of agents, each receiving partially informative private signals, seek to collaboratively learn the true underlying state of the world (from a finite set of hypotheses) that generates their joint observation profiles. To solve this problem, we propose a distributed learning rule that differs fundamentally from existing approaches, in that it does not employ any form of “belief-averaging”. Instead, agents update their beliefs based on a min-rule. Under standard assumptions on the observation model and the network structure, we establish that each agent learns the truth asymptotically almost surely. As our main contribution, we prove that with probability 1, each false hypothesis is ruled out by every agent exponentially fast, at a network-independent rate that is strictly larger than existing rates. We then develop a computationally-efficient variant of our learning rule that is provably resilient to agents who do not behave as expected (as represented by a Byzantine adversary model) and deliberately try to spread misinformation.
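One plausible instantiation of a min-based update is sketched below for intuition; the exact rule in the paper, and its Byzantine-resilient variant, differ in details, so this is an assumption-laden illustration rather than the authors' algorithm.

```python
# Hedged sketch: local Bayesian update followed by an entrywise minimum over
# neighbors' beliefs (no belief averaging).
import numpy as np

def min_rule_step(beliefs, likelihoods, adjacency):
    """One synchronous update for all agents.

    beliefs:     (n_agents, n_hyp) current beliefs, rows sum to 1
    likelihoods: (n_agents, n_hyp) likelihood of each agent's new private signal
                 under every hypothesis
    adjacency:   (n_agents, n_agents) boolean neighbor matrix, self-loops included
    """
    local = beliefs * likelihoods                      # local Bayesian update
    local /= local.sum(axis=1, keepdims=True)
    new = np.empty_like(beliefs)
    for i in range(beliefs.shape[0]):
        new[i] = local[adjacency[i]].min(axis=0)       # entrywise min over neighbors
    return new / new.sum(axis=1, keepdims=True)

# usage: agent 0 receives informative binary signals, agents 1 and 2 do not,
# yet all agents concentrate on the true hypothesis (index 0) via the network
rng = np.random.default_rng(0)
beliefs = np.full((3, 2), 0.5)
adjacency = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=bool)
p_obs = np.array([0.9, 0.5, 0.5])                      # P(signal = 1 | true state), per agent
for _ in range(100):
    signals = rng.random(3) < p_obs
    lk = np.column_stack([np.where(signals, p_obs, 1 - p_obs),  # likelihood under H0
                          np.full(3, 0.5)])                      # likelihood under H1
    beliefs = min_rule_step(beliefs, lk, adjacency)
print(beliefs.round(3))
```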
Marianno, C.M.; Ordonez, E.A.; King, J.W.; Suh, R.
Polyvinyl toluene (PVT) based detectors are used in radiation portal monitors (RPMs) worldwide to detect the trafficking of illicit nuclear material. PVT scintillators, which are normally optically clear, have been observed to suffer internal fogging throughout their volume due to prolonged exposure to varying environmental conditions. These changes could lead to reduced performance for RPMs that utilize plastic scintillators. In this research, a proof-of-concept system, consisting of different color light-emitting diodes (LEDs) and a single optical sensor (OS), was used to examine the change in light transmission through a PVT scintillator. This Optical Monitoring System (OMS), coupled with an environmentally exposed PVT detector, was tested in an environmental chamber where it was subjected to changes in temperature and humidity ranging from 55 °C at 100% relative humidity to –20 °C at 40% relative humidity. At temperatures below 10 °C, light transmission was reduced by 81% ± 8% for blue LEDs, 84% ± 5% for yellow LEDs, and 49% ± 4% for green LEDs. Similar reductions in detected light were not recorded when the OMS was tested with only air between the LEDs and OS. Therefore, the significant reductions in transmitted light were attributed to changes occurring within the PVT scintillator. These results indicate that a significant increase in PVT opacity occurs due to wide environmental changes. A device such as the OMS could be used to track these changes and provide users an early indication that a portal monitor is suffering from reduced performance.
Elinburg, Jessica K.; Hyre, Ariel S.; Mcneely, James; Alam, Todd M.; Klenner, Steffen; Pottgen, Rainer; Rheingold, Arnold L.; Arnold, Leon E.; Doerrer, Linda H.
The synthesis and characterization of a series of Sn(ii) and Sn(iv) complexes supported by the highly electron-withdrawing dianionic perfluoropinacolate (pinF) ligand are reported herein. Three analogs of [SnIV(pinF)3]2- with NEt3H+ (1), K+ (2), and {K(18C6)}+ (3) counter cations and two analogs of [SnII(pinF)2]2- with K+ (4) and {K(15C5)2}+ (5) counter cations were prepared and characterized by standard analytical methods, single-crystal X-ray diffraction, and 119Sn Mössbauer and NMR spectroscopies. The six-coordinate SnIV(pinF) complexes display 119Sn NMR resonances and 119Sn Mössbauer spectra similar to SnO2 (cassiterite). In contrast, the four-coordinate SnII(pinF) complexes, featuring a stereochemically-active lone pair, possess low 119Sn NMR chemical shifts and relatively high quadrupolar splitting. Furthermore, the Sn(ii) complexes are unreactive towards both Lewis bases (pyridine, NEt3) and acids (BX3, Et3NH+). Calculations confirm that the Sn(ii) lone pair is localized within the 5s orbital and reveal that the Sn 5px LUMO is energetically inaccessible, which effectively abates reactivity.
The impressive performance that deep neural networks demonstrate on a range of seismic monitoring tasks depends largely on the availability of event catalogs that have been manually curated over many years or decades. However, the quality, duration, and availability of seismic event catalogs vary significantly across the range of monitoring operations, regions, and objectives. Semisupervised learning (SSL) enables learning from both labeled and unlabeled data and provides a framework to leverage the abundance of unreviewed seismic data for training deep neural networks on a variety of target tasks. We apply two SSL algorithms (mean-teacher and virtual adversarial training) as well as a novel hybrid technique (exponential average adversarial training) to seismic event classification to examine how unlabeled data with SSL can enhance model performance. In general, we find that SSL can perform as well as supervised learning with fewer labels. We also observe in some scenarios that almost half of the benefits of SSL are the result of the meaningful regularization enforced through SSL techniques and may not be attributable to unlabeled data directly. Lastly, the benefits from unlabeled data scale with the difficulty of the predictive task when we evaluate the use of unlabeled data to characterize sources in new geographic regions. In geographic areas where supervised model performance is low, SSL significantly increases the accuracy of source-type classification using unlabeled data.
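For readers unfamiliar with the SSL algorithms named above, the block below sketches the mean-teacher update in PyTorch; the toy architecture, noise-based augmentation, and hyperparameters are illustrative assumptions rather than the configuration used in the study.

```python
# Hedged sketch: mean-teacher semisupervised training step.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model():
    return nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 3))

student, teacher = make_model(), make_model()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)                      # teacher is never trained directly

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
ema_decay, consistency_weight = 0.99, 1.0

def train_step(x_lab, y_lab, x_unlab):
    # supervised loss on the (small) labeled batch
    sup = F.cross_entropy(student(x_lab), y_lab)
    # consistency loss on unlabeled data: student vs. EMA teacher predictions,
    # with independent noise standing in for the augmentations used in practice
    noisy_s = x_unlab + 0.1 * torch.randn_like(x_unlab)
    noisy_t = x_unlab + 0.1 * torch.randn_like(x_unlab)
    with torch.no_grad():
        t_prob = F.softmax(teacher(noisy_t), dim=1)
    cons = F.mse_loss(F.softmax(student(noisy_s), dim=1), t_prob)
    loss = sup + consistency_weight * cons
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                        # teacher = EMA of student weights
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(ema_decay).add_(sp, alpha=1 - ema_decay)
    return loss.item()

# usage with random stand-in feature batches (e.g., waveform-derived features)
x_lab, y_lab = torch.randn(16, 40), torch.randint(0, 3, (16,))
x_unlab = torch.randn(128, 40)
for _ in range(5):
    print(train_step(x_lab, y_lab, x_unlab))
```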
While elastic metasurfaces offer a remarkable and very effective approach to the subwavelength control of stress waves, their use in practical applications is severely hindered by intrinsically narrow band performance. In applications to electromagnetic and photonic metamaterials, some success in extending the operating dynamic range was obtained by using nonlocality. However, while electronic properties in natural materials can show significant nonlocal effects, even at the macroscales, in mechanics, nonlocality is a higher-order effect that becomes appreciable only at the microscales. This study introduces the concept of intentional nonlocality as a fundamental mechanism to design passive elastic metasurfaces capable of an exceptionally broadband operating range. The nonlocal behavior is achieved by exploiting nonlocal forces, conceptually akin to long-range interactions in nonlocal material microstructures, between subsets of resonant unit cells forming the metasurface. These long-range forces are obtained via carefully crafted flexible elements, whose specific geometry and local dynamics are designed to create remarkably complex transfer functions between multiple units. The resulting nonlocal coupling forces enable achieving phase-gradient profiles that are functions of the wavenumber of the incident wave. The identification of relevant design parameters and the assessment of their impact on performance are explored via a combination of semianalytical and numerical models. The nonlocal metasurface concept is tested, both numerically and experimentally, by embedding a total-internal-reflection design in a thin-plate waveguide. Results confirm the feasibility of the intentionally nonlocal design concept and its ability to achieve a fully passive and broadband wave control.
The goals of an electron beam-driven radiographic source are the focusing of high current at high voltage to a minimal spot size with excellent shot-to-shot reproducibility. The Self-Magnetic Pinch (SMP) diode makes use of such an intense electron beam impinging on a high-atomic-weight (tantalum) converter, a counter-streaming ion beam to help minimize the spot size, and operation in a magnetic field-free diode region which further encourages small spot size. Through a series of diode development experiments, output voltages up to 12.5 MV and output currents up to 225 kA have been characterized, with resulting spot sizes below a few mm. Scaling studies with parameter variation, such as diode aspect ratio and anode-cathode (A-K) gap variation, give systematic validation to what has heretofore been noted anecdotally by other research groups. While the lack of an embedded magnetic field helps minimize the SMP spot size, a secondary result may be the generation of beam instabilities which can terminate the radiation pulse. There is anecdotal evidence that in-situ DC heating of the diode region can help stabilize the beam pinch. Clear experimental evidence exists that DC heating/RF cleaning results in better control over the counter-streaming ion population. Expanded use of spatial dose-rate detection is shown to yield new insights into electron beam dynamics in the SMP diode. An attendant study of the SMP diode as a load for an Inductive Voltage Adder (IVA) driver leads to insights into the behavior of the IVA-SMP diode configuration, viewed as a total system, and yields constraints on the overall impedance behavior of the SMP diode load.
There is an emerging consciousness in India of the importance of nuclear security and safety. Motivated by a combination of rapid growth in its civil nuclear sector, heightened scrutiny of recent nuclear accidents around the world, and the deteriorating regional security environment, India has pushed to adopt measures to strengthen and enhance its nuclear security and safety governance structures. India recognizes that the various recent global nuclear security initiatives are in its own best interest and has been an enthusiastic participant in the Nuclear Security Summit process. Today, India demonstrates a greater willingness to showcase its nuclear security arrangements before the public and has undertaken many institutional, legal, and operational reforms to maintain international regime compliance. This study takes a comprehensive look at India's approach to nuclear security and critically examines the physical security measures the country has put in place. Particular focus is placed on the evolution and strengths, as well as weaknesses, of the country's nuclear security institutions, instruments, practices, and culture. Given that the strengthening of India's nuclear security governance is an ongoing endeavor, the paper also puts forward a number of policy recommendations.
Pulsed power drivers such as the Z generator of Sandia National Laboratories typically deliver high current (>20 MA) to single experiments. This project is intended to develop and assess ways to simultaneously drive multiple targets on a single pulsed power driver (specifically a neutron-producing and an x-ray-producing target driven in a single experiment). The combined x-ray/neutron environment produced will then be used to investigate potential synergistic effects in integrated circuits. A prerequisite for being able to design and study multiple targets on Z is first adapting simulation tools to be able to model them effectively. This will enable us to assess the tradeoffs between the different ways multiple targets can be combined, and to better understand how existing and future pulsed power machines can be used to generate combined testing environments. This report is limited to documenting the initial development of a parallel load modeling capability that is presently being applied to design experiments to produce combined neutron/x-ray environments on Z.
The objective of this study was to conduct a series of tests examining the deposition and resuspension of aerosol particles deposited onto multiple representative substrate surfaces for a range of particle sizes under varying environmental conditions. The benefit of this study is to provide additional insight into the understanding of early-time resuspension from different mechanisms and to compare against existing literature. The resuspension methods utilized in this study were full-scale, and the substrates were representative of real-world ground-level conditions. Multiple experiments were conducted to assess the impact on resuspension from the varying substrates and mechanisms. The results of this study show variations in the size distribution of the aerosol as a function of height from the source. Additionally, the aerosolized mass concentration and resuspension factor were evaluated. The maximum resuspension factor was found to be on the order of 10^-4 m^-1, which is higher than most resuspension factors found in the literature but represents idealized conditions due to the well-constrained experimental setup.
The potential advantages of additive manufacturing (AM), e.g., weight reduction and novel geometries, are well understood within a systems context. However, adoption of AM at the system level has been slow due to the relative uncertainty in the final material properties, which leaves capabilities and/or performance gains unrealized. By utilizing remelt strategies, it may be possible to expand the available process window to provide densities and microstructures beyond what is achievable with standard scan strategies. This work explored remelting strategies for 316L stainless steel to tailor grain size and increase density. Twelve scan strategies were explored experimentally and computationally to understand the limitations of remelt strategies and the robustness of the current simulation package. Results show that tailoring of grain size, density, and texture is achievable through remelting, and several key lessons were learned for improving the texture evaluation through simulation.
Natural events and human activity often generate acoustic waves capable of traveling tens to tens of thousands of kilometers across the globe. Ground-based acoustic sensors are limited to dry land and often suffer from wind noise. In contrast, balloon-borne acoustic sensors can cross oceans, polar ice caps, and other inhospitable areas, greatly expanding sensor coverage. Since they move with the mean wind speed, their background noise levels are exceptionally low. In the last six years, such sensors have recorded sounds from colliding ocean waves, surface and buried chemical explosions, thunder, wind/mountain interactions, wind turbines, aircraft, and possibly meteors and the aurora. These results have led to new insights on acoustic heating of the upper atmosphere, the detectability of underground explosions, and directional sound fields generated by ocean waves.
Germanium–antimony–telluride has emerged as a nonvolatile phase change memory material due to the large resistivity contrast between amorphous and crystalline states, rapid crystallization, and cyclic endurance. Improving thermal phase stability, however, has necessitated further alloying with optional addition of a quaternary species (e.g., C). In this work, the thermal transport implications of this additional species are investigated using frequency-domain thermoreflectance in combination with structural characterization derived from x-ray diffraction and Raman spectroscopy. Specifically, the room temperature thermal conductivity and heat capacity of (Ge2Sb2Te5)1–xCx are reported as a function of carbon concentration (x ≤ 0.12) and anneal temperature (T ≤ 350 °C) with results assessed in reference to the measured phase, structure, and electronic resistivity. Phase stability imparted by the carbon comes with comparatively low thermal penalty as materials exhibiting similar levels of crystallinity have comparable thermal conductivity despite the addition of carbon. The additional thermal stability provided by the carbon does, however, necessitate higher anneal temperatures to achieve similar levels of structural order.
With inverter-based distributed energy resources (DERs) becoming more prevalent in grid-connected or islanded distribution feeders, a better understanding of the performance of these devices is needed. Increasing the amount of inverter-based generation, and therefore reducing conventional generation (i.e., rotating machines and synchronous generators), decreases the generation sources with well-known characteristic responses to unbalanced and transient fault conditions. This paper experimentally tests the performance of commercial grid-forming inverters under fault and unbalanced conditions and provides a comparison between grid-forming inverters and their grid-following counterparts.
Changes in the demand profile and a growing role for renewable and distributed generation are leading to rapid evolution of the electric grid. These changes are beginning to considerably strain the transmission and distribution infrastructure. Utilities are increasingly recognizing that the integration of energy storage in the grid infrastructure will help manage intermittency and improve grid reliability. This recognition, coupled with the proliferation of state-level renewable portfolio standards and rapidly declining lithium-ion (Li-ion) battery costs, has led to a surge in the deployment of battery energy storage systems (BESSs). Additionally, although BESSs represented less than 1% of grid-scale energy storage in the United States in 2019, they are the preferred technology to meet growing demand because they are modular, scalable, and easy to deploy across diverse use cases and geographic locations.