In this paper we investigate the utility of one-dimensional convolutional neural network (CNN) models in epidemiological forecasting. Deep learning models, in particular variants of recurrent neural networks (RNNs), have been studied for ILI (Influenza-Like Illness) forecasting and have achieved higher forecasting skill than conventional models such as ARIMA. In this study, we adapt two neural networks that employ one-dimensional temporal convolutional layers as a primary building block—temporal convolutional networks and simple neural attentive meta-learners—for epidemiological forecasting. We then test them with influenza data from the US collected over 2010-2019. We find that epidemiological forecasting with CNNs is feasible and that their forecasting skill is comparable to, and at times superior to, plain RNNs. Thus CNNs and RNNs bring the power of nonlinear transformations to purely data-driven epidemiological models, a capability that heretofore has been limited to more elaborate mechanistic/compartmental disease models.
The National Nuclear Security Administration (NNSA) initiated the Minority Serving Institutions Partnership Program (MSIPP) to 1) align investments in university capacity and workforce development with the NNSA mission, developing the skills and talent needed for NNSA’s enduring technical workforce at the laboratories and production plants, and 2) enhance research and education at under-represented colleges and universities. Out of this effort, MSIPP launched a new consortium in early FY17 focused on Tribal Colleges and Universities (TCUs), known as the Advanced Manufacturing Network Initiative (AMNI). This consortium has been extended for FY20 and FY21. This report summarizes the status of the effort during this quarter.
This project was a follow-on to the Sandia National Laboratories (SNL) and Laboratory for Laser Energetics (LLE) ARPA-E ALPHA project entitled “Demonstrating Fuel Magnetization and Laser Heating Tools for Low-Cost Fusion Energy”. The primary purpose of this follow-on project was to obtain additional data at the OMEGA facility to help better understand how MagLIF, a platform that has already demonstrated the scientific viability of magneto-inertial fusion, scales across a factor of 1000 in driver energy. A secondary aspect of this project was to extend simulations and analysis at SNL to cover a wider magneto-inertial fusion (MIF) parameter space and to test scaling of those models across this wide range of input energies and target conditions. This work was successful in improving understanding of how key physics elements of MIF scale and in increasing confidence in setting requirements for fusion gain with larger drivers. The OMEGA experiments at the smaller scale verified the hypothesis that preheating the fuel plays a significant role in introducing wall contaminants that mix into the fuel and significantly degrade fusion performance. This contamination impacts not only target performance but also the optimal input conditions for the target. Analysis at the Z scale, however, showed that target performance at high preheat levels is limited by the Nernst effect, which advects magnetic flux from the hot spot, reducing magnetic insulation and consequently the temperature of the fuel. The combination of MagLIF experiments at the disparate scales of OMEGA and Z, along with a multiscale 3D simulation analysis, has led to new insight into the physical mechanisms responsible for limiting target performance and provides important benchmarks for assessing target scaling more generally for MIF schemes. Finally, in addition to the MagLIF-related work, a semi-analytic model of a liner-driven Field Reversed Configuration (FRC) was developed that predicts the fusion gain for such systems. This model was also validated with 2D radiation magneto-hydrodynamic simulations and predicts that fusion gains near unity could be driven by the Z machine.
The MELCOR Accident Consequence Code System (MACCS) is used by the Nuclear Regulatory Commission (NRC) and various national and international organizations for probabilistic consequence analysis of nuclear power plant accidents. This User Guide is intended to assist analysts in understanding the MACCS/WinMACCS model and to provide information regarding the code. This version of the User Guide describes MACCS Version 4.0; features added to MACCS in subsequent versions are described in separate documentation. This User Guide provides a brief description of the model history, explains how to set up and execute a problem, and informs the user of the definition of the various input parameters and any constraints placed on them. This report is part of a series of reports documenting MACCS. Other reports include the MACCS Theory Manual, the MACCS Verification Report, and Technical Bases for Consequence Analyses Using MACCS, as well as documentation for the preprocessor codes SecPop, MelMACCS, and COMIDA2.
The costs associated with the increasing maintenance and surveillance needs of aging structures are rising at an unexpected rate. Multi-site fatigue damage, hidden cracks in hard-to-reach locations, disbonded joints, erosion, impact, and corrosion are among the major flaws encountered in today’s extensive fleet of aging aircraft and space vehicles. Aircraft maintenance and repairs represent about a quarter of a commercial fleet’s operating costs. The application of Structural Health Monitoring (SHM) systems using distributed sensor networks can reduce these costs by facilitating rapid and global assessments of structural integrity. The use of in-situ sensors for real-time health monitoring can overcome inspection impediments stemming from accessibility limitations, complex geometries, and the location and depth of hidden damage. Reliable structural health monitoring systems can automatically process data, assess structural condition, and signal the need for human intervention. The ease of monitoring an entire on-board network of distributed sensors means that structural health assessments can occur more often, allowing operators to be even more vigilant with respect to flaw onset. SHM systems also allow condition-based maintenance practices to be substituted for the current time-based or cycle-based maintenance approach, thus optimizing maintenance labor. The Federal Aviation Administration (FAA) has conducted a series of SHM validation and certification programs intended to comprehensively support the evolution and adoption of SHM practices into routine aircraft maintenance. This report presents one of those programs, a joint Sandia Labs-aviation industry effort to move SHM into routine use for aircraft maintenance. The Airworthiness Assurance NDI Validation Center (AANC) at Sandia Labs, in conjunction with Sikorsky, Structural Monitoring Systems Ltd., Anodyne Electronics Manufacturing Corp., Acellent Technologies Inc., and the FAA, carried out a trial validation and certification program to evaluate Comparative Vacuum Monitoring (CVM) and Piezoelectric Transducers (PZT) as structural health monitoring solutions for specific rotorcraft applications. Validation tasks were designed to address the SHM equipment, the health monitoring task, the resolution required, the sensor interrogation procedures, the conditions under which the monitoring will occur, the potential inspector population, the adoption of CVM and PZT systems into rotorcraft maintenance programs, and the document revisions necessary to allow for their routine use as an alternate means of performing periodic structural inspections. This program addressed formal SHM technology validation and certification issues so that the full spectrum of concerns, including design, deployment, performance, and certification, was appropriately considered. Sandia Labs designed, implemented, and analyzed the results from a focused and statistically relevant experimental effort to quantify the reliability of a CVM system applied to a Sikorsky S-92 fuselage frame application and a PZT system applied to an S-92 main gearbox mount beam application. The applications included both local and global damage detection assessments. All factors that affect SHM sensitivity were included in this program: flaw size, shape, orientation, and location relative to the sensors, as well as operational and environmental variables.
Statistical methods were applied to performance data to derive Probability of Detection (POD) values for SHM sensors in a manner that agrees with current nondestructive inspection (NDI) validation requirements and is acceptable to both the aviation industry and regulatory bodies. The validation work completed in this program demonstrated the ability of both CVM and PZT SHM systems to detect cracks in rotorcraft components. It proved the ability to use final system response parameters to provide a Green Light/Red Light (“GO” – “NO GO”) decision on the presence of damage. In addition to quantifying the performance of each SHM system for the trial applications on the S-92 platform, this study also identified specific methods that can be used to optimize damage detection, guidance on deployment scenarios that can affect performance, and considerations that must be made to properly apply CVM and PZT sensors. These results support the main goal of safely integrating SHM sensors into rotorcraft maintenance programs. Additional benefits from deploying rotorcraft Health and Usage Monitoring Systems (HUMS) may be realized when structural assessment data collected by an SHM system are also used to detect structural damage, complementing the operational environment monitoring. The use of in-situ sensors for health monitoring of rotorcraft structures can be a viable option for both flaw detection and maintenance planning activities. This formal SHM validation will allow aircraft manufacturers and airlines to confidently make informed decisions about the proper utilization of CVM and PZT technology. It will also streamline future regulatory actions and formal certification measures needed to assure the safe application of SHM solutions.
The Primary Standards Lab (PSL) employs guardbanding methods to reduce the risk of false acceptance in calibration when test uncertainty ratios are low. Similarly, production agencies guardband their requirements to reduce false-accept rates in product acceptance. The root-sum-square guardbanding method is recommended by PSL, but many other guardbanding methods have been proposed in the literature or implemented in commercial software. This report analyzes the false-accept and false-reject rates resulting from the most common guardbanding methods. It is shown that the root-sum-square method and the Dobbert Managed Guardband strategy are similar and that both are suitable for calibration and product acceptance work in the NSE.
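For orientation, a common statement of the root-sum-square guardband is sketched below with generic symbols (tolerance limit T, 95% expanded uncertainty U, test uncertainty ratio TUR = T/U); this is a textbook form given as background and the report's exact conventions may differ.

```latex
% Root-sum-square (RSS) guardband, one common form (assumed notation):
% the acceptance limit A is pulled inside the tolerance limit T according to
% the root-sum-square of T and the 95% expanded uncertainty U.
\[
  A \;=\; \sqrt{T^{2} - U^{2}} \;=\; T\sqrt{1 - \frac{1}{\mathrm{TUR}^{2}}},
  \qquad \text{guardband } G \;=\; T - A .
\]
```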
Glass–ceramics have received recent attention for use in glass–ceramic-to-metal hermetic seals. Due to their heterogeneous microstructure, these materials exhibit a number of advantageous responses compared to conventional glass-based seals. Key amongst them is the possibility of a controllable thermal strain response and apparent coefficient of thermal expansion, which may be used to minimize thermally induced residual stresses in such seals. These behaviors result from an inorganic glass matrix and a variety of crystalline ceramic phases, including silica polymorph(s) that may undergo reversible solid-to-solid transformations with associated inelastic strain. Correspondingly, these materials exhibit complex thermomechanical responses associated with multiple inelastic mechanisms (viscoelasticity and phase transformation). While modeling these behaviors is essential for developing and analyzing the corresponding applications, no such model exists. Therefore, in this work a three-dimensional continuum constitutive model for glass–ceramic materials combining these various inelastic mechanisms is developed via an internal state variable approach. A corresponding fully implicit three-dimensional numerical formulation is also proposed and implemented. The model is used to simulate existing experiments and validate the proposed formalism. As an example, the simple seal problem of a glass–ceramic seal inside a concentric metal shell is explored. Finally, the impact of cooling rates, viscoelastic shift factors, and inelastic strain on the final residual stress state is investigated and the differing contributions are highlighted.
The Lasserre Hierarchy [18, 19] is a set of semidefinite programs which yield increasingly tight bounds on optimal solutions to many NP-hard optimization problems. The hierarchy is parameterized by levels, with a higher level corresponding to a more accurate relaxation. Higher-level programs have proven to be invaluable components of approximation algorithms for many NP-hard optimization problems [3, 7, 26]. There is a natural analogous quantum hierarchy [5, 8, 24], which is also parameterized by level and provides a relaxation of many (QMA-hard) quantum problems of interest [5, 6, 9]. In contrast to the classical case, however, there is only one approximation algorithm which makes use of higher levels of the hierarchy [5]. Here we provide the first use of the level-2 hierarchy in an approximation algorithm for a particular QMA-complete problem, so-called Quantum Max Cut [2, 9]. We obtain modest improvements on state-of-the-art approximation factors for this problem, and we demonstrate that the level-2 hierarchy satisfies many physically motivated constraints that the level-1 hierarchy does not. Indeed, this observation is at the heart of our analysis and indicates that higher levels of the quantum Lasserre Hierarchy may be very useful tools in the design of approximation algorithms for QMA-complete problems.
The reliable design of magnetically insulated transmission lines (MITLs) for very high current pulsed power machines will increasingly require a variety of sophisticated modeling tools. The complexity of the required models is high, and the number of sub-models and approximations is large. The potential for significant analyst error when using a single tool is correspondingly large, with possible reliability issues arising from the plasma modeling tools themselves or from the approach chosen by the analyst to solve a given problem. We report on a software infrastructure design that provides a workable framework for building self-consistent models and for constraining feedback to limit analyst error. The framework and associated tools aid the development of physical intuition, the development of increasingly sophisticated models, and the comparison of performance results. The work lays the computational foundation for designing state-of-the-art pulsed-power experiments. The design and useful features of this environment are described. We discuss the utility of the Git source code management system and a GitLab interface for project management that extends beyond software development tasks.
The prevalent use of organic materials in manufacturing is a fire safety concern and motivates the need for predictive thermal decomposition models. A critical component of predictive modeling is numerical inference of kinetic parameters from bench-scale data. Currently, an active area of computational pyrolysis research focuses on identifying efficient, robust methods for this optimization. This paper demonstrates that kinetic parameter calibration problems can be solved successfully using classical gradient-based optimization. We explore calibration examples that exhibit characteristics of concern: high nonlinearity, high dimensionality, complicated reaction schemes, overlapping reactions, noisy data, and poor initial guesses. The examples demonstrate that a simple, non-invasive change to the problem formulation can simultaneously avoid local minima, avoid computation of derivative matrices, achieve a computational speedup of 10x, and make the optimization robust to perturbations of parameter components. Techniques from the mathematical optimization and inverse problem communities are employed. By re-examining gradient-based algorithms, we highlight opportunities to develop kinetic parameter calibration methods that should outperform those currently in use.
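As a concrete illustration of this kind of gradient-based calibration, the minimal sketch below fits a single first-order Arrhenius reaction to synthetic mass-loss data with SciPy. The rescaled (log) parameterization is an assumed example of a non-invasive reformulation, not necessarily the change used in the paper, and all names and numerical values are hypothetical.

```python
# Hypothetical sketch: gradient-based calibration of Arrhenius kinetics from
# synthetic TGA-like conversion data. Values and parameterization are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

R = 8.314            # J/(mol K)
beta = 10.0 / 60.0   # heating rate, K/s (10 K/min)
T0 = 300.0           # initial temperature, K
t_obs = np.linspace(0.0, 2100.0, 150)  # observation times, s

def conversion(theta):
    # theta = [log10(A [1/s]), E [kJ/mol]]: a non-invasive rescaling that keeps
    # both components of similar magnitude and avoids ill-conditioned gradients.
    logA, E_kJ = theta
    A, E = 10.0 ** logA, 1.0e3 * E_kJ
    rhs = lambda t, a: A * np.exp(-E / (R * (T0 + beta * t))) * (1.0 - a)
    sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), [0.0], t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

# Synthetic "bench-scale" data with noise, generated from a known truth.
truth = np.array([12.0, 160.0])
data = conversion(truth) + 0.01 * np.random.default_rng(0).normal(size=t_obs.size)

# Classical gradient-based least-squares fit (finite-difference Jacobian)
# starting from a deliberately poor initial guess.
fit = least_squares(lambda th: conversion(th) - data, x0=[9.0, 120.0],
                    bounds=([5.0, 50.0], [20.0, 300.0]))
print(fit.x)  # typically close to [12, 160], up to the usual A/E compensation
```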
Previous studies have shown that atmospheric models with a spectral element grid can benefit from putting physics calculations on a relatively coarse finite volume grid. Here we demonstrate an alternative high-order, element-based mapping approach used to implement a quasi-equal-area, finite volume physics grid in E3SM. Unlike similar methods, the new method in E3SM requires only topology data local to each spectral element, which trivially allows for regional mesh refinement. Simulations with physics grids defined by 2 × 2, 3 × 3, and 4 × 4 divisions of each element are shown to verify that the alternative physics grid does not qualitatively alter the model solution. Model performance benefits substantially from the reduced number of physics columns when using the 2 × 2 grid, which can increase the throughput of physics calculations by roughly 60%–120%, depending on whether the computational resources are configured to maximize throughput or efficiency. A pair of regionally refined cases are also shown to highlight the refinement capability.
The electric discharge across a varistor-granule-filled air gap under a fast-rising voltage pulse was investigated for surge protection applications. The effects of temperature and pressure on the arc and on the electrical conduction were analyzed through the characteristic changes in voltage waveforms triggered by a fast-rising high-voltage pulse. Experimental results show that, in addition to the gap size, competing mechanisms among arc conduction, conduction through the varistor granule network, thermionic emission from Joule heating at granule-to-granule contact points, and the magnitude of the switching voltage dictate the maximum surge protection voltage for the filled air gap. Experimental evidence indicated that accumulated degradation was created at small contact points between varistor granules by repeated application of longer-duration, high-voltage pulses. The uniqueness of using varistor granules, rather than other dielectric granules, in an air gap for surge protection is identified and discussed.
In the face of increasing natural disasters and an aging grid, utilities need to choose optimal investments in existing infrastructure to promote resiliency. This paper presents a new investment decision optimization model to minimize unserved load over the recovery time and improve grid resilience to extreme weather event scenarios. Our optimization model includes a network power flow model that decides generator status and dispatch, optimal transmission switching (OTS) during the multi-time-period recovery process, and an investment decision model subject to a given budget. Investment decisions include the hardening of transmission lines, generators, and substations. Our model uses a second-order cone programming (SOCP) relaxation of the AC power flow model and is compared to the classic DC power flow approximation. A case study on the 73-bus RTS-GMLC test system for various investment budgets and multiple hurricane scenarios highlights the difference in optimal investment decisions between the SOCP model and the DC model and demonstrates the advantages of OTS in resiliency settings. Results indicate that the network models yield different optimal investments, unit commitment, and OTS decisions, and an AC feasibility study indicates that our SOCP resiliency model is more accurate than the DC model.
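For readers unfamiliar with SOCP relaxations of AC power flow, one standard form is sketched below in generic notation (the branch-flow/DistFlow variant; this is background material and not necessarily the exact formulation used in the paper).

```latex
% Branch-flow (DistFlow) SOCP relaxation, generic symbols (assumed notation):
% v_i = |V_i|^2, l_ij = |I_ij|^2, and P_ij + jQ_ij is the power entering line (i,j).
% The nonconvex quadratic equality is relaxed to an inequality, which defines a
% rotated second-order cone and is therefore convex.
\[
  P_{ij}^{2} + Q_{ij}^{2} \;=\; v_{i}\,\ell_{ij}
  \quad\longrightarrow\quad
  P_{ij}^{2} + Q_{ij}^{2} \;\le\; v_{i}\,\ell_{ij}.
\]
```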
Austenitic stainless steels are the standard materials for containment of hydrogen and tritium because of their resistance to mechanical property degradation in those environments. The mechanical performance of the primary containment material is critical for tritium handling, processing, and storage; thus, a comprehensive understanding of the processes of tritium embrittlement is an enabling capability for fusion energy. This work describes an investigation of the effects of low levels of tritium-decay helium ingrowth on 304L tubes. Long-term aging with tritium leads to high helium contents in austenitic stainless steels and can reduce fracture toughness by 95%, but the behavior at low helium contents is not as well characterized. Here, we present results from tensile testing of tritium-precharged 304L tube specimens with a variety of starting microstructures that all contain a low level of helium. The results for the tritium exposed-and-aged materials are compared to previously reported results on similar specimens tested in an unexposed condition as well as in a hydrogen-precharged condition. Tritium precharging and aging for a short duration resulted in increased yield strengths, increased ultimate tensile strengths, and slightly increased elongation to failure, comparable to higher concentrations of hydrogen precharging.
It was long thought impossible to measure the electric field inside a physically isolated volume, especially inside an electrically shielded space, because a conventional electric-field sensor can only measure the electric field at the location of the sensor, and when an electric-field source is screened by conductive materials, no leakage electric field can be detected. For the first time, we experimentally demonstrated that electrically neutral particles, neutrons, can be used to measure and image the electric field behind a physical barrier. This work enables a new measurement capability that can visualize electric-field-related properties inside a studied sample or detection target for scientific research and engineering applications.
Laplace, T.A.; Goldblum, B.L.; Manfredi, J.J.; Brown, J.A.; Bleuel, D.L.; Brand, C.A.; Gabella, G.; Gordon, J.; Brubaker, Erik B.
Background: Organic scintillators are widely used for neutron detection in both basic nuclear physics and applications. While the proton light yield of organic scintillators has been extensively studied, measurements of the light yield from neutron interactions with carbon nuclei are scarce. Purpose: Demonstrate a new approach for the simultaneous measurement of the proton and carbon light yield of organic scintillators. Provide new carbon light yield data for the EJ-309 liquid and EJ-204 plastic organic scintillators. Method: A 33-MeV H2+ beam from the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory was impinged upon a 3-mm-thick Be target to produce a high-flux, broad-spectrum neutron beam. The double time-of-flight technique was extended to simultaneously measure the proton and carbon light yields of the organic scintillators, wherein the light output associated with the recoil particle was determined using np and nC elastic scattering kinematics. Results: The proton and carbon light yield relations of the EJ-309 liquid and EJ-204 plastic organic scintillators were measured over recoil energy ranges of approximately 0.3 to 1 MeV and 2 to 5 MeV, respectively, for EJ-309, and 0.2 to 0.5 MeV and 1 to 4 MeV, respectively, for EJ-204. Conclusions: These data provide new insight into the ionization quenching effect in organic scintillators and key input for simulation of the response of organic scintillators for both basic science and a broad range of applications.
This report represents the milestone deliverable M4SF-21SN010309021, “Modeling Activities Related to Waste Form Degradation: Progress Report,” which describes the progress of ongoing R&D modeling investigations, specifically on nuclear waste glass degradation, Density Functional Theory (DFT) studies of clarkeite structure and stability, and electrochemical modeling of spent nuclear fuel (SNF). These activities are part of the newly created Waste Form Testing, Modeling, and Performance work package at Sandia National Laboratories (SNL). This work package is part of the “Inventory and Waste Form Characteristics and Performance” control account, which includes various experimental and modeling activities on nuclear waste degradation conducted at Oak Ridge National Laboratory (ORNL), SNL, Argonne National Laboratory (ANL), and Pacific Northwest National Laboratory (PNNL).
The Sandia National Laboratories (SNL) staff is meeting the requirements of the Nuclear Fuel Cycle and Supply Chain (NFCSC) Quality Assurance Program Document (QAPD). Each of the 5 NFCSC FY20 packages for SNL was reviewed. None of the 5 packages had incorrect QRL categories, and no major corrective actions are assigned. Additional minor PICS:NE checkbox errors may exist, but none were identified in this assessment. Training now also encourages WPMs to take credit for optional technical reviews by considering setting the QRL level to 3, because when the level is set to 4, the PICS:NE system does not provide fields for review information. Finally, as in the past, training in the NFCSC QAPD has been delivered to WPMs.
Ramp-compression experiments have been performed on the “Z” pulsed-power facility to investigate the strengths of Be and a lead-antimony alloy. Yield strength and shear stress near peak pressure were obtained from measurements of the sound speed on release, using the Asay self-consistent method. Two S-65 grade Be samples, from batches that showed a significant difference in yield strength at ambient conditions, were found to have nearly identical yield strengths, which were also in agreement with similar earlier measurements on S-200 grade Be. The yield strength of the Pb4Sb alloy at ∼120 GPa was 1.35 GPa, while a National Ignition Facility experiment by Krygier et al. [Phys. Rev. Lett. 123, 205701 (2019)] found 3.8 GPa at ∼400 GPa pressure. Our result is intermediate between the ambient value and that of Krygier et al., but the significantly increased strength is probably not associated with the transition to the high-pressure bcc phase of lead.
High penetration of solar photovoltaics (PV) can have a significant impact on the power flows and voltages in distribution systems. In order to support distribution grid planning, control, and optimization, it is imperative for utilities to maintain an accurate database of the locations and sizes of PV systems. This paper extends previous work on methods to estimate the location of PV systems based on knowledge of the distribution network model and the availability of voltage magnitude measurement streams. The proposed method leverages the expected impact of solar injection variations on the circuit voltage and takes into account the operation and impact of changes in voltage due to discrete voltage regulation equipment (VRE). The estimation model enables determining the most likely location of PV systems, as well as voltage regulator tap changes and switched capacitor state changes. The method has been tested for individual and multiple PV systems, using the chi-square test as a metric to evaluate the goodness of fit. Simulations on the IEEE 13-bus and IEEE 123-bus distribution feeders demonstrate the ability of the method to provide consistent estimates of PV locations as well as VRE actions.
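A minimal sketch of how a chi-square goodness-of-fit metric can rank candidate PV locations is given below; the function names and the assumption that the network model supplies a predicted voltage-change pattern for each candidate bus are illustrative and not taken from the paper.

```python
# Illustrative sketch (not the paper's algorithm): rank candidate PV buses by a
# chi-square goodness-of-fit between observed voltage-magnitude changes and the
# change predicted by the network model for a PV injection at each candidate bus.
import numpy as np

def chi_square(observed, expected, sigma):
    # Sum of squared, measurement-noise-normalized residuals.
    return float(np.sum(((observed - expected) / sigma) ** 2))

def rank_candidates(delta_v_obs, predicted_patterns, sigma=0.001):
    # predicted_patterns: dict mapping candidate bus -> predicted delta-V vector.
    scores = {bus: chi_square(delta_v_obs, dv, sigma)
              for bus, dv in predicted_patterns.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])  # best (lowest) first
```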
Currently, spent nuclear fuel (SNF) is stored in on-site independent spent-fuel storage installations (ISFSIs) at seventy-three (73) nuclear power plants (NPPs) in the US. Because a geologic repository for permanent disposal of SNF has not been constructed, the SNF will remain in dry storage significantly longer than planned. During this time, the ISFSIs, and potentially consolidated storage facilities, will experience earthquakes of different magnitudes. The dry storage systems are designed and licensed to withstand large seismic loads; however, there are few data on the response of the SNF assemblies contained within them when they experience seismic loads. The Spent Fuel Waste Disposition (SFWD) program is planning to conduct a full-scale seismic shake table test to close the gap related to the seismic loads on the fuel assemblies in dry storage systems. This test will allow for quantifying the strains and accelerations on surrogate fuel assembly hardware and cladding during earthquakes of different magnitudes and frequency content. The main component of the test unit will be a full-scale NUHOMS 32PTH2 dry storage canister. The canister will be loaded with three surrogate fuel assemblies and twenty-nine dummy assemblies. Two dry storage configurations will be tested – horizontal and vertical above-ground concrete overpacks. These configurations cover 91% of the current dry storage configurations. The major inputs into the shake table test are the seismic excitations, or earthquake ground motions – acceleration time histories in two horizontal directions and one vertical direction that will be applied to the shake table surface during the tests. The shake table surface represents the top of the concrete pad on which a dry storage system is placed. The goal of the ground motion task is to develop ground motions that are representative of the range of seismotectonic and other conditions that any site in the Western US (WUS) or Central and Eastern US (CEUS) might entail. This task is challenging because of the large number of ISFSI sites, the variety of seismotectonic and site conditions, and the effects that soil amplification, soil-structure interaction, and pad flexibility may have on the ground motions.
A two-step solar thermochemical cycle was considered for air separation to produce N2 based on (Ba,La)xSr1-xFeO3-δ perovskite reduction/oxidation (redox) reactions for A-site fractions of 0 ≤ x ≤ 0.2. The cycle steps encompassed (1) thermal reduction and O2 release via concentrated solar input and (2) re-oxidation with air to take up O2 and produce high-purity N2. Thermogravimetry at temperatures between 400 and 1100 °C in atmospheres of 0.005 to 90% O2/Ar at 1 bar was performed to measure equilibrium nonstoichiometries. The compound energy formalism was applied to model redox thermodynamics for both Ba2+ and La3+ substitution. Nonlinear regression was used to determine the empirical parameters based on the thermogravimetric measurements. The model was used to define partial molar reaction enthalpies and entropies and to predict equilibrium oxygen nonstoichiometry as a function of oxide stoichiometry, site fraction, temperature, and O2 partial pressure. The thermodynamic analysis showed that the materials are appealing for air separation at temperatures below 800 °C.
These points are covered in this presentation: Distributed GPU stencil, non-contiguous data; Equivalence of strided datatypes and minimal representation; GPU communication methods; Deploying on managed systems; Large messages and MPI datatypes; Translation and canonicalization; Automatic model-driven transfer method selection; and Interposed library implementation.
The efficient condition assessment of engineered systems requires the coupling of high-fidelity models with data extracted from the state of the system ‘as-is’. To enable this task, this paper implements a parametric Model Order Reduction (pMOR) scheme for nonlinear structural dynamics, in the particular case of material nonlinearity. A physics-based parametric representation is developed, incorporating dependencies on system properties and/or excitation characteristics. The pMOR formulation relies on a Proper Orthogonal Decomposition applied to a series of snapshots of the nonlinear dynamic response. A new approach to manifold interpolation is proposed, with interpolation taking place on the reduced coefficient matrix mapping local bases to a global one. We demonstrate the performance of this approach first on the simple example of a shear-frame structure, and second on the more complex 3D numerical case study of a wind turbine tower under ground motion excitation. Parametric dependence pertains to structural properties, as well as to the temporal and spectral characteristics of the applied excitation. The developed parametric Reduced Order Model (pROM) can be exploited for a number of tasks including monitoring and diagnostics, control of vibrating structures, and residual life estimation of critical components.
Matthews, Bethany E.; Sassi, Michel; Barr, Christopher; Ophus, Colin; Kaspar, Tiffany C.; Jiang, Weilin; Hattar, Khalid M.; Spurgeon, Steven R.
Mastery of order-disorder processes in highly nonequilibrium nanostructured oxides has significant implications for the development of emerging energy technologies. However, we are presently limited in our ability to quantify and harness these processes at high spatial, chemical, and temporal resolution, particularly in extreme environments. Here, we describe the percolation of disorder at the model oxide interface LaMnO3/SrTiO3, which we visualize during in situ ion irradiation in the transmission electron microscope. We observe the formation of a network of disorder during the initial stages of ion irradiation and track the global progression of the system to full disorder. We couple these measurements with detailed structural and chemical probes, examining possible underlying defect mechanisms responsible for this unique percolative behavior.
The high theoretical lithium storage capacity of Sn makes it an enticing anode material for Li-ion batteries (LIBs); however, its large volumetric expansion during Li–Sn alloying must be addressed. Combining Sn with metals that are electrochemically inactive to lithium leads to intermetallics that can alleviate volumetric expansion issues and still enable high capacity. Here, we present the cycling behavior of a nanostructured MnSn2 intermetallic used in LIBs. Nanostructured MnSn2 is synthesized by reducing Sn and Mn salts using a hot injection method. The resulting MnSn2 is characterized by x-ray diffraction and transmission electron microscopy and then is investigated as an anode for LIBs. The MnSn2 electrode delivers a stable capacity of 514 mAh g-1 after 100 cycles at a C/10 current rate with a Coulombic efficiency >99%. Unlike other Sn-intermetallic anodes, an activation overpotential peak near 0.9 V versus Li is present from the second lithiation and in subsequent cycles. We hypothesize that this effect is likely due to electrolyte reactions with segregated Mn from MnSn2. To prevent these undesirable Mn reactions with the electrolyte, a 5 nm TiO2 protection layer is applied onto the MnSn2 electrode surface via atomic layer deposition. The TiO2-coated MnSn2 electrodes do not exhibit the activation overpotential peak. The protection layer also increases the capacity to 612 mAh g-1 after 100 cycles at a C/10 current rate with a Coulombic efficiency >99%. This higher capacity is achieved by suppressing the parasitic reaction of Mn with the electrolyte, as is supported by x-ray photoelectron spectroscopy analysis.
Recent experimental and simulation studies have shown that polymer-nanoparticle (NP) composites (PNCs) with ultra-high NP loading (>50%) exhibit remarkable mechanical properties and dramatic increases in polymer glass-transition temperature, viscosity, and thermal stability compared to the bulk polymer. These deviations in macroscopic properties suggest a slowdown in both segmental and chain-scale polymer dynamics due to confinement. In this work, we examine the polymer conformations and dynamics in these PNCs using molecular dynamics simulations of both unentangled and entangled coarse-grained polymers in random-close-packed NP packings with varying polymer fill fractions. We find that the changes in the polymer dynamics depend on the number of NPs in contact with a polymer segment. Using the number of polymer-NP contacts and different polymer chain conformations as criteria for categorization, we further examine the polymer dynamics at multiple length scales to show the high level of dynamic heterogeneity in PNCs with ultra-high NP loading.
We consider an optimal control synthesis problem for a class of control-affine nonlinear systems. We propose a Sum-of-Squares-based computational framework for optimal control synthesis. The proposed framework relies on a convex formulation of the optimal control problem in the dual space of densities. This convex formulation is based on duality results in the stability theory of dynamical systems. We use the Sum-of-Squares-based computational framework for the finite-dimensional approximation of the convex optimization problem. The efficacy of the developed framework is demonstrated using simulation results.
We present progress on the synthesis of semimetal Cd3As2 by metal–organic chemical-vapor deposition (MOCVD). Specifically, we have optimized the growth conditions needed to obtain technologically useful growth rates and acceptable thin-film microstructures, with our studies evaluating the effects of varying the temperature, pressure, and carrier-gas type for MOCVD of Cd3As2 when performed using dimethylcadmium and tertiary-butylarsine precursors. In the course of the optimization studies, exploratory Cd3As2 growths are attempted on GaSb substrates, strain-relaxed InAs buffer layers grown on GaSb substrates, and InAs substrates. Notably, only the InAs-terminated substrate surfaces yield desirable results. Extensive microstructural studies of Cd3As2 thin films on InAs are performed by using multiple advanced imaging microscopies and x-ray diffraction modalities. The studied films are 5–75 nm in thickness and consist of oriented, coalesced polycrystals with lateral domain widths of 30–80 nm. The most optimized films are smooth and specular, exhibiting a surface roughness as low as 1.0 nm rms. Under cross-sectional imaging, the Cd3As2-InAs heterointerface appears smooth and abrupt at a lower film thickness, ~30 nm, but becomes quite irregular as the average thickness increases to ~55 nm. The films are strain-relaxed with a residual biaxial tensile strain (ϵxx = +0.0010) that opposes the initially compressive lattice-mismatch strain of Cd3As2 coherent on InAs (ϵxx = - 0.042). Importantly, phase-identification studies find a thin-film crystal structure consistent with the P42/nbc space group, placing MOCVD-grown Cd3As2 among the Dirac semimetals of substantial interest for topological quantum materials studies.
A general problem when designing functional nanomaterials for energy storage is the lack of control over the stability and reactivity of metastable phases. Using the high-capacity hydrogen storage candidate LiAlH4 as an exemplar, we demonstrate an alternative approach to the thermodynamic stabilization of metastable metal hydrides by coordination to nitrogen binding sites within the nanopores of N-doped CMK-3 carbon (NCMK-3). The resulting LiAlH4@NCMK-3 material releases H2 at temperatures as low as 126 °C with full decomposition below 240 °C, bypassing the usual Li3AlH6 intermediate observed in bulk. Moreover, >80% of LiAlH4 can be regenerated under 100 MPa H2, a feat previously thought to be impossible. Nitrogen sites are critical to these improvements, as no reversibility is observed with undoped CMK-3. Density functional theory predicts a drastically reduced Al-H bond dissociation energy and supports the observed change in the reaction pathway. The calculations also provide a rationale for the solid-state reversibility, which derives from the combined effects of nanoconfinement, Li adatom formation, and charge redistribution between the metal hydride and the host.
We consider the development of multifluid models for partially ionized multispecies plasmas. The models are composed of a standard set of five-moment fluid equations for each species plus a description of electromagnetics. The most general model considered utilizes a full set of fluid equations for each charge state of each atomic species, plus a set of fluid equations for electrons. The fluid equations are coupled through source terms describing electromagnetic coupling, ionization, recombination, charge exchange, and elastic scattering collisions in the low-density coronal limit. The form of each of these source terms is described in detail, and references for required rate coefficients are identified for a diverse range of atomic species. Initial efforts have been made to extend these models to incorporate some higher-density collisional effects, including ionization potential depression and three-body recombination. Some reductions of the general multifluid model are considered. First, a reduced multifluid model is derived which averages over all of the charge states (including neutrals) of each atomic species in the general multifluid model. The resulting model maintains full consistency with the general multifluid model from which it is derived by leveraging a quasi-steady-state collisional ionization equilibrium assumption to recover the ionization fractions required to make use of the general collision models. Further reductions are briefly considered to derive certain components of a single-fluid magnetohydrodynamics (MHD) model. In this case, a generalized Ohm's law is obtained, and the standard MHD resistivity is expressed in terms of the collisional models used in the general multifluid model. A number of numerical considerations required to obtain robust implementations of these multifluid models are discussed. First, an algebraic flux correction (AFC) stabilization approach for a continuous Galerkin finite element discretization of the multifluid system is described in which the characteristic speeds used in the stabilization of the fluid systems are synchronized across all species in the model. It is demonstrated that this synchronization is crucial in order to obtain a robust discretization of the multifluid system. Additionally, several different formulations are considered for describing the electromagnetics portion of the multifluid system using nodal continuous Galerkin finite element discretizations. The formulations considered include a parabolic divergence cleaning method and an implicit projection method for the traditional curl formulation of Maxwell's equations, a purely-hyperbolic potential-based formulation of Maxwell's equations, and a mixed hyperbolic-elliptic potential-based formulation of Maxwell's equations. Some advantages and disadvantages of each formulation are explored to compare solution robustness and the ease of use of each formulation. Numerical results are presented to demonstrate the accuracy and robustness of various components of our implementation. Analytic solutions for a spatially homogeneous damped plasma oscillation are derived in order to verify the implementation of the source terms for electromagnetic coupling and elastic collisions between fluid species. Ionization balance as a function of electron temperature is evaluated for several atomic species of interest by comparing to steady-state calculations using various sets of ionization and recombination rate coefficients.
Several test problems in one and two spatial dimensions are used to demonstrate the accuracy and robustness of the discretization and stabilization approach for the fluid components of the multifluid system. This includes standard test problems for electrostatic and electromagnetic shock tubes in the two-fluid and ideal shock-MHD limits, a cylindrical diocotron instability, and the GEM challenge magnetic reconnection problem. A one-dimensional simplified prototype of an argon gas puff configuration as deployed on Sandia's Z-machine is used as a demonstration to exercise the full range of capabilities associated with the general multifluid model.
Laishram, Ricky; Hozhabrierdi, Pegah; Wendt, Jeremy D.; Soundarajan, Sucheta
In many network applications, it may be desirable to conceal certain target nodes from detection by a data collector who is using a crawling algorithm to explore a network. For example, in a computer network, the network administrator may wish to protect those computers (target nodes) with sensitive information from discovery by a hacker who has exploited vulnerable machines and entered the network. These networks are often protected by hiding the machines (nodes) from external access and allowing only fixed entry points into the system (protection against external attacks). However, in this protection scheme, once one of the entry points is breached, the safety of all internal machines is jeopardized (i.e., the external attack turns into an internal attack). In this paper, we view this problem from the perspective of the data protector. We propose the Node Protection Problem: given a network with known entry points, which edges should be removed or added so as to protect as many target nodes from the data collector as possible? A trivial way to solve this problem would be to simply disconnect either the entry points or the target nodes, but that would make the network non-functional. Accordingly, we impose certain constraints: for each node, only a (1 - r) fraction of its edges can be removed, and the resulting network must not be disconnected. We propose two novel scoring mechanisms - the Frequent Path Score and the Shortest Path Score. Using these scores, we propose NetProtect, an algorithm that selects edges to be removed or added so as to best impede the progress of the data collector. We show experimentally that NetProtect outperforms baseline node protection algorithms across several real-world networks. In some datasets, with 1% of the edges removed by NetProtect, the data collector requires up to 6 (4) times the budget of the next best baseline in order to discover 5 (50) target nodes.
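As an illustration of the stated constraints (not of the NetProtect scoring itself), the hedged sketch below checks whether a proposed set of edge removals keeps at least an r fraction of every node's original edges and leaves an undirected graph connected; the function name and default r are hypothetical.

```python
# Hypothetical helper illustrating the constraints described above: an edge
# removal set is admissible only if every node keeps at least r * deg(v) of its
# original edges and the resulting (undirected) graph remains connected.
import networkx as nx

def admissible_removal(G, edges_to_remove, r=0.8):
    H = G.copy()
    H.remove_edges_from(edges_to_remove)
    for v in G.nodes():
        if G.degree(v) > 0 and H.degree(v) < r * G.degree(v):
            return False  # node v would lose more than a (1 - r) fraction of its edges
    return nx.is_connected(H)
```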
Large-scale, high-throughput computational science faces an accelerating convergence of software and hardware. Software container-based solutions have become common in cloud-based datacenter environments, and are considered promising tools for addressing heterogeneity and portability concerns. However, container solutions reflect a set of assumptions which complicate their adoption by developers and users of scientific workflow applications. Nor are containers a universal solution for deployment in high-performance computing (HPC) environments which have specialized and vertically integrated scheduling and runtime software stacks. In this paper, we present a container design and deployment approach which uses modular layering to ease the deployment of containers into existing HPC environments. This layered approach allows operating system integrations, support for different communication and performance monitoring libraries, and application code to be defined and interchanged in isolation. We describe in this paper the details of our approach, including specifics about container deployment and orchestration for different HPC scheduling systems. We also describe how this layering method can be used to build containers for two separate applications, each deployed on clusters with different batch schedulers, MPI networking support, and performance monitoring requirements. Our experience indicates that the layered approach is a viable strategy for building applications intended to provide similar behavior across widely varying deployment targets.
Detailed finite element models of a 60-cell crystalline silicon photovoltaic module undergoing ±1.0 and ±2.4 kPa pressure loads were simulated to compare differences created by a constrained frame boundary condition versus replicating manufacturer-recommended rack mounting. Module deflection, interconnect strain, and first principal stresses on cell volumes were used as comparison metrics to assess how internal module damage was affected. Averaged across all load scenarios, constraining the frame of the module to its initial unloaded plane reduced peak deflections by approximately 13%, interconnect strains by 11%, and first principal stress by 11% when compared to a module with correctly modeled racking. Analysis based on damage metrics indicated that the constrained boundary condition reduced interconnect stress at most locations and increased fatigue life by an average of 34%, and likewise reduced the average probability of cell fracture by 82%, though individual results were highly variable. Nonetheless, location-specific trends were generally consistent across constraint methodologies, indicating that the constraint simplification can be applied successfully if corrected for with increased load, additional test cycles, or an informed interpretation of results. The goal of this work was to exercise a methodology for quantifying differences created by a simplified test constraint setup, since expedient experimental simplifications are often used or considered to reduce the complexity of exploratory mechanical tests not related to standards qualification.
Cell cracking in PV modules can lead to a variety of changes in module operation, with vastly different performance degradation based on the type and severity of the cracks. In this work, we demonstrate automated measurement of cell crack properties from electroluminescence images and correlate these properties with current-voltage curve features on 35 four-cell Al-BSF and PERC mini-modules showing a range of crack types and severity. Power loss in PERC modules was associated with greater total crack length, resulting in electrical isolation of cell areas and mild shunting and recombination. Many of the Al-BSF modules suffered catastrophic power loss due to crack-related shunts. Mild power loss in Al-BSF modules was not as strongly correlated with total crack length; instead, crack angles and branching were better indicators of module performance for this cell type.
Dynamic operation of electric power switches in microgrid mode allows distributed photovoltaic (PV) systems to support a critical load and enables the transfer of electrical power to non-critical loads. Instead of relying on an expensive system that includes a constant generation source (e.g., fossil-fuel-based generators), this work assesses the potential balance of load and PV generation to properly charge a critical-load battery while also supporting non-critical loads during the day. This work assumes that the battery is sized to support only the critical load and that the PV at the critical load is undersized. To compensate for the limited power capacity, a battery charging algorithm predicts and defines battery demand throughout the day, and a particle swarm optimization (PSO) scheme connects and disconnects switch sections inside a distribution system with the objective of minimizing the difference between load and generation. The PSO reconfiguration scheme allows for continuous operation of a critical load as well as inclusion of non-critical loads.
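The sketch below is a minimal, hypothetical illustration of the reconfiguration idea: a binary PSO chooses which non-critical switch sections to close so that served load best matches PV generation at one time step. The load values, swarm parameters, and variable names are invented and this is not the paper's implementation.

```python
# Hypothetical sketch of PSO-based switch reconfiguration: choose which
# non-critical sections to connect so that served load (critical-load battery
# charging demand plus connected non-critical loads) best matches PV output.
import numpy as np

rng = np.random.default_rng(1)
pv_forecast = 120.0                                            # kW available from PV
battery_demand = 40.0                                          # kW battery charging demand
noncritical_loads = np.array([15.0, 25.0, 10.0, 30.0, 20.0])   # kW per switchable section

def mismatch(states):
    served = battery_demand + states @ noncritical_loads
    return abs(pv_forecast - served)

# Binary PSO: continuous positions are squashed through a sigmoid to sample on/off states.
n_particles, n_switch, iters = 20, noncritical_loads.size, 50
x = rng.normal(size=(n_particles, n_switch))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.full(n_particles, np.inf)
gbest_x, gbest_states, gbest_val = x[0].copy(), np.zeros(n_switch), np.inf

for _ in range(iters):
    states = (1.0 / (1.0 + np.exp(-x)) > rng.random(x.shape)).astype(float)
    vals = np.array([mismatch(s) for s in states])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    if vals.min() < gbest_val:
        gbest_val = vals.min()
        gbest_x = x[vals.argmin()].copy()
        gbest_states = states[vals.argmin()].copy()
    v = 0.7 * v + 1.5 * rng.random(x.shape) * (pbest - x) \
        + 1.5 * rng.random(x.shape) * (gbest_x - x)
    x = x + v

print("switch states:", gbest_states, "| load-generation mismatch (kW):", gbest_val)
```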
PV system reliability analyses often depend on production data to evaluate the system state. However, using this information alone leads to incomplete assessments, since contextual information about potential sources of data quality issues is lacking (e.g., missing data from offline communications vs. offline production). This paper introduces a new Python-based software capability (called pvOps) for fusing production data with readily available text-based maintenance information to improve reliability assessments. In addition to details about the package development process, the general capabilities to gain actionable insights using field data are presented through a case study. These findings highlight the significant potential for continued advancements in operational assessments.
Conference Record of the IEEE Photovoltaic Specialists Conference
Venkat, Sameera N.; Liu, Jiqi; Wegmueller, Jakob; Yu, Ben; Gould, Brian; Li, Xinjun; Braid, Jennifer L.; Bruckman, Laura S.; French, Roger H.
Network structural equation modeling has been used for degradation modeling of glass/backsheet (GB) and double-glass (DG) PERC PV minimodules made by CSI and CWRU. The encapsulants used were ethylene vinyl acetate (EVA) and polyolefin elastomer (POE). The exposures included modified damp heat (80°C and 85% relative humidity), with and without full-spectrum light. Each exposure cycle consists of 2520 hours, in 5 steps of 504 hours each. Data from I-V and Suns-Voc measurements were used in the analysis. We observe that most DG minimodules exhibit stable power with exposure time, while the GB minimodules made by CWRU showed a power loss of 5-6% on average due to corrosion.
Smoke from wildfires results in air pollution that can impact the performance of solar photovoltaic (PV) plants. Production is impacted by factors including the proximity of the fire to a site of interest, the extent of the wildfire, wind direction, and ambient weather conditions. We construct a model that quantifies the relationships among weather, wildfire-induced pollution, and PV production for utility-scale and distributed generation sites located in the western USA. The regression model identified a 9.4%-37.8% reduction in solar PV production on smoky days. This model can be used to determine expected production losses at impacted sites. We also present an analysis of factors that contribute to solar photovoltaic energy production impacts from wildfires. This work will inform anticipated production changes for more accurate grid planning and operational considerations.
Conference Record of the IEEE Photovoltaic Specialists Conference
Curran, Alan J.; Colvin, Dylan; Iqbal, Nafis; Davis, Kris O.; Moran, Thomas; Huey, Bryan D.; Brownell, Brent; Yu, Ben; Braid, Jennifer L.; Bruckman, Laura S.; French, Roger H.
To assess the reliability of PERC cells compared to Al-BSF cells in a commercial setting, minimodules with different cell and encapsulant combinations were compared in accelerated exposure. In both modified damp heat and modified damp heat with full-spectrum light exposures, white EVA samples showed a higher susceptibility to metallization corrosion degradation than all other encapsulants. Al-BSF cells in particular showed higher power loss than PERC cells with white EVA. The degree of degradation was observed to depend strongly on the manufacturer of the white EVA encapsulant. In both exposures the encapsulant was a much stronger predictor of degradation than the cell type. For modules with the same encapsulant, PERC cells showed higher performance than, or performance comparable to, Al-BSF cells in all but one case.
Conference Record of the IEEE Photovoltaic Specialists Conference
Nihar, Arafath; Curran, Alan J.; Karimi, Ahmad M.; Braid, Jennifer L.; Bruckman, Laura S.; Koyuturk, Mehmet; Wu, Yinghui; French, Roger H.
We present the application of FAIR principles to photovoltaic time series data to increase their reusability within the photovoltaic research community. The main requirements for a "FAIRified" dataset are a clearly defined data format and metadata that are accessible to both humans and machines. To achieve FAIRification, we implement a data model that separates the photovoltaic data from its metadata. The metadata and their descriptions are registered in a data repository in a human- and machine-readable format using JSON-LD. In addition, secure APIs are developed to access the photovoltaic data. This approach offers long-term scalability and maintainability.
Grid support functionalities from advanced PV inverters are increasingly being utilized to help regulate grid conditions and enable high PV penetration levels. To ensure a high degree of reliability, it is paramount that protective devices respond properly to a variety of fault conditions. However, while the fault response of PV inverters operating at unity power factor has been well documented, less work has been done to characterize the fault contributions and impacts of advanced inverters with grid support enabled under conditions like voltage sags and phase angle jumps. To address this knowledge gap, this paper presents experimental results of a three-phase photovoltaic inverter's response during and after a fault to investigate how PV systems behave under fault conditions when operating with and without a grid support functionality (autonomous Volt-Var) enabled. Simulations were then conducted to quantify the potential impact of the experimental findings on protection systems. It was observed that fault current magnitudes across several protective devices were impacted by non-unity power factor operating conditions, suggesting that protection settings may need to be studied and updated whenever grid support functions are enabled or modified.
The advent of bifacial PV systems drives new requirements for irradiance measurement at PV projects for monitoring and assessment purposes. While there are several approaches, there is still no uniform guidance for what irradiance parameters to measure and for the optimal selection and placement of irradiance sensors at bifacial arrays. Standards are emerging to address these topics but are not yet available. In this paper we review approaches to bifacial irradiance monitoring which are being discussed in the research literature and pursued in early systems, to provide a preliminary guide and framework for developers planning bifacial projects.
In order to address recent severe weather-related energy events, electricity production is experiencing an important transition from conventional fossil-fuel-based resources to Distributed Energy Resources (DER) that provide clean and renewable energy. These DERs make use of power-electronic devices that perform the energy conversion required to interface with utility grids. For cases where DC/AC conversion is required, grid-forming inverters (GFMI) are gaining popularity over their grid-following (GFLI) counterparts because GFMI do not require a dedicated Phase Locked Loop (PLL) to synchronize with the grid. The absence of a PLL allows GFMI to operate in stand-alone (off-grid) mode when needed. Inverter manufacturers are already offering several products with grid-forming capabilities. However, modeling the dynamics of commercially available GFMI under heavy load or fault scenarios has become a critical task, not only for stability studies but also for coordination and protection schemes in power grids (or microgrids) experiencing steady growth in their levels of DERs. Based upon experimental low-impedance fault tests performed on a commercially available GFMI, this paper presents a modeling effort to replicate the dynamics of such inverters under these abnormal scenarios. The proposed approach modifies previously developed GFMI models, adding the appropriate dynamics, to match the current and voltage transient behavior under low-impedance fault scenarios. For the first inverter tested, a modified CERTS GFMI model provides transient dynamics under fault scenarios that match the experimental results from the commercially available inverter.
As conventional generation sources continue to be replaced with inverter-based resources, the traditional fixed overcurrent protection schemes used at the distribution level will no longer be valid. Adaptive protection provides the ability to update the protection scheme in near real time to ensure reliability and increase the resilience of the grid. However, it is still not well understood when protection parameters calculated by an adaptive protection algorithm should be pushed to the relays, so that unnecessary communication can be avoided. The proposed method uses a sensitivity analysis to determine when it is necessary to issue new parameters to the relays. The results show that settings do not need to be issued at each available time step. Instead, the proposed sensitivity analysis method can be used to ensure that only the essential protection parameters are communicated to the relay, allowing for more efficient utilization of the communication channel. The results show that the sensitivity analysis reduces the settings communicated to the devices by 93% over the year.
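The decision logic behind such a sensitivity threshold can be sketched in a few lines; the function, tolerance, and settings below are hypothetical and only illustrate the idea of issuing settings when the newly computed values differ enough from the active ones.

```python
# Illustrative sketch (not the paper's algorithm): push newly computed relay
# settings only when they differ from the active settings by more than a
# chosen relative tolerance, reducing unnecessary communication.

def settings_to_issue(active, computed, rel_tol=0.05):
    """Return the relays whose computed setting changed enough to re-issue."""
    updates = {}
    for relay, new_value in computed.items():
        old_value = active.get(relay)
        if old_value is None or old_value == 0 or abs(new_value - old_value) / abs(old_value) > rel_tol:
            updates[relay] = new_value
    return updates

active_settings = {"R1": 400.0, "R2": 250.0, "R3": 180.0}    # pickup currents (A), hypothetical
computed_settings = {"R1": 410.0, "R2": 310.0, "R3": 181.0}  # from the adaptive algorithm

print(settings_to_issue(active_settings, computed_settings))  # only R2 exceeds the 5% tolerance
```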
DC microgrids envisioned with high bandwidth communications may well expand their application range by considering autonomous strategies as resiliency contingencies. In most cases, these strategies are based on the droop control method, seeking low voltage regulation and proportional load sharing. Control challenges arise when coordinating the output of multiple DC microgrids composed of several Distributed Energy Resources. This paper proposes an autonomous control strategy for transactional converters when multiple DC microgrids are connected through a common bus. The control seeks to match the external bus voltage with the internal bus voltage while balancing power. Three case scenarios are considered: standalone operation of each DC microgrid, excess generation, and generation deficit in one DC microgrid. Results obtained using the Sandia National Laboratories Secure Scalable Microgrid Simulink library are compared with models developed in MATLAB.
In this work, a model predictive dispatch framework is proposed to utilize Energy Storage Systems (ESSs) for voltage regulation in distribution systems. The objective is to utilize ESS resources to assist with voltage regulation while reducing the utilization of legacy devices such as on-load tap changers (OLTCs), capacitor banks, etc. The proposed framework is part of a two-stage solution where a secondary layer computes the ESS dispatch every 5-min based on 1-hr generation and load forecasts while a primary layer would handle the real-time uncertainties. In this paper, the secondary layer to dispatch the ESS is formulated. Simulation results show that dispatching ESSs by providing active and reactive support can minimize the OLTC movement in distribution networks thus increasing the lifetime of legacy mechanical devices.
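A minimal sketch of the secondary-layer dispatch idea, assuming a linearized voltage-sensitivity model and purely illustrative numbers (the actual formulation, network model, and objective terms in the paper will differ), is shown below using cvxpy.

```python
# Minimal receding-horizon ESS dispatch sketch: keep a (linearized) voltage near
# 1 pu over a 1-hour horizon of 5-minute steps, subject to state-of-charge and
# inverter apparent-power limits. Sensitivities and forecasts are illustrative.
import cvxpy as cp
import numpy as np

T = 12                                                        # 12 x 5-min steps
v_forecast = 1.0 + 0.03 * np.sin(np.linspace(0, np.pi, T))    # voltage without ESS (pu)
sp, sq = 0.02, 0.05                                           # assumed dV/dP, dV/dQ (pu/pu)

p = cp.Variable(T)            # ESS active power (pu, + = discharge)
q = cp.Variable(T)            # ESS reactive power (pu)
soc = cp.Variable(T + 1)      # state of charge (fraction of energy capacity)

v = v_forecast + sp * p + sq * q
objective = cp.Minimize(cp.sum_squares(v - 1.0) + 0.01 * cp.sum_squares(p))
constraints = [
    soc[0] == 0.5,
    soc[1:] == soc[:-1] - (5 / 60) * p,                       # simple energy balance
    soc >= 0.1, soc <= 0.9,
    cp.norm(cp.vstack([p, q]), axis=0) <= 0.5,                # apparent-power limit
]

cp.Problem(objective, constraints).solve()
print(np.round(p.value, 3), np.round(q.value, 3))
```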
In this paper, the development of a mathematical model for an islanding detection method based on the concept of a digital twin is presented. The model estimates the grid impedance seen by a distributed energy resource. The proposed algorithm has characteristics of both passive and active islanding detection methods. Using a discrete state-space representation of a dq0-axis power system as equality constraints, a digital twin is optimized to match the power system of interest. The concept is to use the estimated grid impedance as the parameter that distinguishes normal operation from islanding scenarios. Starting from arbitrary initial values, the digital twin approximates the response of the actual system and therefore provides an estimate of the system impedance. Results indicate that the proposed method has the potential to estimate the grid impedance at the point of common coupling.
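As a simplified, single-phase analogue of the impedance-estimation step (the paper's digital twin uses a full dq0 state-space model), the sketch below fits a series R-L grid impedance to sampled voltage and current waveforms by least squares; all signals and values are synthetic.

```python
# Fit R and L in v(t) = R*i(t) + L*di/dt from sampled waveforms via least squares.
import numpy as np

dt = 1e-4
t = np.arange(0.0, 0.1, dt)
R_true, L_true = 0.5, 2e-3                       # ohm, henry (synthetic ground truth)
i = 10 * np.sin(2 * np.pi * 60 * t)              # "measured" current (A)
di_dt = np.gradient(i, dt)
v = R_true * i + L_true * di_dt                  # "measured" voltage (V)

A = np.column_stack([i, di_dt])
(R_est, L_est), *_ = np.linalg.lstsq(A, v, rcond=None)
print(f"R ~ {R_est:.3f} ohm, L ~ {L_est * 1e3:.3f} mH")
```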
Recent trends in PV economics and advanced inverter functionalities have contributed to the rapid growth in PV adoption; PV modules have become much cheaper and advanced inverters can deliver a range of services in support of grid operations. However, these phenomena also provide conditions for PV curtailment, where high penetrations of distributed PV often necessitate the use of advanced inverter functions with VAR priority to address abnormal grid conditions like over- and under-voltages. This paper presents a detailed energy loss analysis, using a combination of open-source PV modeling tools and high-resolution time-series simulations, to place the magnitude of clipped and curtailed PV energy in context with other operational sources of PV energy loss. The simulations were conducted on a realistic distribution circuit, modified to include utility load data and 341 modeled PV systems at 25% of the customer locations. The results revealed that the magnitude of clipping losses often overshadows that of curtailment but, on average, both were among the lowest contributors to total annual PV energy loss. However, combined clipping and curtailment losses are likely to become more prevalent as these trends continue.
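For readers unfamiliar with the clipping bookkeeping, the toy calculation below shows the basic idea: DC energy above the inverter AC rating is counted as clipped. Values are synthetic and do not reflect the paper's circuit or simulations.

```python
# Toy clipping-loss calculation: DC power above the inverter AC rating is clipped.
import numpy as np

dc_power_kw = np.array([0.0, 1.2, 3.8, 5.6, 6.1, 5.9, 4.2, 2.0, 0.0])  # hourly DC output
ac_rating_kw = 5.0

ac_power_kw = np.minimum(dc_power_kw, ac_rating_kw)   # ignoring conversion losses
clipped_kwh = np.sum(dc_power_kw - ac_power_kw)       # 1-hour steps
print(f"Clipped energy: {clipped_kwh:.2f} kWh "
      f"({100 * clipped_kwh / dc_power_kw.sum():.1f}% of available DC energy)")
```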
Due to the increased penetration of Distributed Energy Resources (DERs), especially Photovoltaic (PV) systems, voltage and frequency regulation has become a topic of interest. Utilities have been requesting DER voltage and frequency support for almost two decades, and their request was addressed by standards such as IEEE Std 1547-2018. With the continuous improvements in inverters' ability to control their output voltage, power, and frequency, a group of advanced techniques to support the grid is now required by the interconnection standard. These techniques are known as Grid Support Functions (GSFs), and they allow the inverter to provide voltage and frequency support to the grid as well as the ability to ride through abnormal events. Understanding how a GSF behaves is challenging, especially when multiple GSFs are combined to help the utility control the system voltage and frequency. This paper evaluates the effects of GSFs on the IEEE Std 1547.1-2020 Unintentional Islanding Test 5B by comparing simulation results from a developed PV inverter model and experimental results from a Power Hardware-in-the-Loop platform.
Detailed finite element models of a 60-cell crystalline silicon photovoltaic module undergoing ±1.0 and ±2.4 kPa pressure loads were simulated to compare differences created by a constrained frame boundary condition versus replicating manufacturer-recommended rack mounting. Module deflection, interconnect strain, and first principal stresses on cell volumes were used as comparison metrics to assess how internal module damage was affected. Average results across all load scenarios showed that constraining the frame of the module to its initial unloaded plane reduced peak deflections by approximately 13%, interconnect strains by 11%, and first principal stress by 11% when compared to a module with correctly modeled racking. Analysis of results based on damage metrics indicated that the constrained boundary condition reduced interconnect stress at most locations and increased fatigue life by an average of 34%, and likewise reduced the average probability of cell fracture by 82%, though individual results were highly variable. Nonetheless, location-specific trends were generally consistent across constraint methodologies, indicating that the constraint simplification can be applied successfully if corrected for with increased load, additional test cycles, or an informed interpretation of results. The goal of this work was to exercise a methodology for quantifying differences created by a simplified test constraint setup, since expedient experimental simplifications are often used or considered to reduce the complexity of exploratory mechanical tests not related to standards qualification.
High penetration of distributed energy resources presents challenges for monitoring and control of power distribution systems. Some of these problems might be solved through accurate monitoring of distribution systems, such as what can be achieved with distribution system state estimation (DSSE). With the recent large-scale deployment of advanced metering infrastructure associated with existing SCADA measurements, DSSE may become a reality in many utilities. In this paper, we present a sensitivity analysis of DSSE with respect to phase mislabeling of single-phase service transformers, another class of errors distribution system operators are faced with regularly. The results show DSSE is more robust to phase label errors than a power flow-based technique, which would allow distribution engineers to more accurately capture the impacts and benefits of distributed PV.
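The estimation machinery underlying DSSE is weighted least squares; the toy example below shows the idea for a linear measurement model (the real problem uses nonlinear AC power-flow measurement functions and an iterative solver), with all matrices and noise values invented for illustration.

```python
# Toy weighted-least-squares state estimate for a linear model z = H x + e.
import numpy as np

H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0]])                               # hypothetical measurement model
x_true = np.array([1.02, 0.98])                           # "true" states (e.g., voltages, pu)
z = H @ x_true + np.array([0.002, -0.001, 0.003])         # noisy measurements
W = np.diag([1 / 0.002**2, 1 / 0.002**2, 1 / 0.005**2])   # inverse error variances

x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)         # WLS normal equations
print(np.round(x_hat, 4))
```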
PV system reliability analyses often depend on production data to evaluate the system state. However, using this information alone leads to incomplete assessments, since contextual information about potential sources of data quality issues is lacking (e.g., missing data from offline communications vs. offline production). This paper introduces a new Python-based software capability (called pvOps) for fusing production data with readily available text-based maintenance information to improve reliability assessments. In addition to details about the package development process, the general capabilities to gain actionable insights using field data are presented through a case study. These findings highlight the significant potential for continued advancements in operational assessments.
Renewable energy has become a viable solution for reducing the harmful effects that fossil fuels have on our environment, prompting utilities to replace traditional synchronous generators (SG) with more inverter-based devices that can provide clean energy. One of the biggest challenges utilities are facing is that by replacing SG, there is a reduction in the systems' mechanical inertia, making them vulnerable to frequency instability. Grid-forming inverters (GFMI) have the ability to create and regulate their own voltage reference in a manner that helps stabilize system frequency. As an emerging technology, there is a need for understanding their dynamic behavior when subjected to abrupt changes. This paper evaluates the performance of a GFMI when subjected to voltage phase jump conditions. Experimental results are presented for the GFMI subjected to both balanced and unbalanced voltage phase jump events in both P/Q and V/f modes.
This work presents a 3-Port acoustoelectric switch design for surface acoustic wave signal processing. Using a multistrip coupler, the input acoustic wave at Port 1 is split into two parallel and electrically cross-linked acoustoelectric delay lines, where an applied voltage can alter the gain and attenuation in each delay line based on the voltage polarity. The switch is demonstrated using a 270 MHz Leaky SAW mode on an InGaAs on 41° Y-cut lithium niobate heterostructure. Applying a +40 V voltage pulse results in an insertion loss (IL) of -12.5 dB and -57.5 dB in the gain and isolation switch paths, respectively. This leads to a 45 dB difference in signal strength at the output ports.
Corrective maintenance strategies are important for safeguarding optimum photovoltaic (PV) performance while also minimizing downtimes due to failures. In this work, a complete operation and maintenance (O&M) decision support system (DSS) was developed for corrective maintenance. The DSS operates entirely on field measurements and incorporates technical asset and financial management features. It was validated experimentally on a large-scale PV system installed in Greece, and the results demonstrated the financial benefits of performing corrective actions in the case of failures and reversible loss mechanisms. Reduced response and resolution times of corrective actions could improve the PV power production of the test PV plant by up to 2.41%. Even for a 1% energy yield improvement from corrective actions, a DSS is recommended for large-scale PV plants (with a peak capacity of at least 250 kWp).
Thermal spray processes involve the repeated impact of millions of discrete particles, whose melting, deformation, and coating-formation dynamics occur at microsecond timescales. The accumulated coating that evolves over minutes is comprised of complex, multiphase microstructures, and the timescale difference between the individual particle solidification and the overall coating formation represents a significant challenge for analysts attempting to simulate microstructure evolution. In order to overcome the computational burden, researchers have created rule-based models (similar to cellular automata methods) that do not directly simulate the physics of the process. Instead, the simulation is governed by a set of predefined rules, which do not capture the fine-details of the evolution, but do provide a useful approximation for the simulation of coating microstructures. Here, we introduce a new rules-based process model for microstructure formation during thermal spray processes. The model is 3D, allows for an arbitrary number of material types, and includes multiple porosity-generation mechanisms. Example results of the model for tantalum coatings are presented along with sensitivity analyses of model parameters and validation against 3D experimental data. The model's computational efficiency allows for investigations into the stochastic variation of coating microstructures, in addition to the typical process-to-structure relationships.
We analyze experimentally and theoretically the transport spectra of a gated lateral GaAs double quantum dot containing two holes. The strong spin-orbit interaction present in the hole subband lifts the Pauli spin blockade and allows us to map out the complete spectra of the two-hole system. By performing measurements in both source-drain voltage directions, at different detunings and magnetic fields, we carry out quantitative fitting to a Hubbard two-site model accounting for the tunnel coupling to the leads and the spin-flip relaxation process. We extract the singlet-triplet gap and the magnetic field corresponding to the singlet-triplet transition in the double-hole ground state. Additionally, at the singlet-triplet transition we find a resonant enhancement of current (in the blockaded direction) and a suppression of current (in the conduction direction). The current enhancement stems from the multiple resonance of two-hole levels, opening several conduction channels at once. The current suppression arises from the quantum interference of spin-conserving and spin-flipping tunneling processes.
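For orientation, such fits are to a two-site Hubbard model of the generic form below (notation is generic; the model used in the paper additionally includes spin-orbit-induced spin-flip tunneling and the coupling to the leads):

$$
H = \sum_{i=1,2}\sum_{\sigma} \varepsilon_i\, n_{i\sigma}
  + U \sum_{i=1,2} n_{i\uparrow} n_{i\downarrow}
  - t \sum_{\sigma} \left( c^{\dagger}_{1\sigma} c_{2\sigma} + \mathrm{h.c.} \right),
$$

where the detuning $\varepsilon_1 - \varepsilon_2$ is controlled by the gate voltages.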
In this paper we present an alternative approach to the representation of simulation particles for unstructured electrostatic and electromagnetic PIC simulations. In our modified PIC algorithm we represent particles as having a smooth shape function limited by some specified finite radius, r0. A unique feature of our approach is the representation of this shape by surrounding simulation particles with a set of virtual particles with delta shape, with fixed offsets and weights derived from Gaussian quadrature rules and the value of r0. As the virtual particles are purely computational, they provide the additional benefit of increasing the arithmetic intensity of traditionally memory bound particle kernels. The modified algorithm is implemented within Sandia National Laboratories' unstructured EMPIRE-PIC code, for electrostatic and electromagnetic simulations, using periodic boundary conditions. We show results for a representative set of benchmark problems, including electron orbit, a transverse electromagnetic wave propagating through a plasma, numerical heating, and a plasma slab expansion. Good error reduction across all of the chosen problems is achieved as the particles are made progressively smoother, with the optimal particle radius appearing to be problem-dependent.
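A one-dimensional illustration of how virtual-particle offsets and weights can be generated from a Gaussian quadrature rule and the shape radius r0 is given below; the EMPIRE-PIC implementation is multi-dimensional and uses its own shape function, so this is only a sketch of the quadrature idea.

```python
# Illustrative 1D virtual-particle construction: Gauss-Legendre nodes scaled to
# the shape radius r0, with weights normalized so the virtual particles carry
# the parent particle's total weight.
import numpy as np

def virtual_particles(r0, order=3):
    nodes, weights = np.polynomial.legendre.leggauss(order)  # nodes/weights on [-1, 1]
    offsets = r0 * nodes                                      # scale offsets to [-r0, r0]
    weights = weights / weights.sum()                         # normalize to unit total weight
    return offsets, weights

offsets, weights = virtual_particles(r0=0.02, order=3)
print(offsets, weights)
```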
Silva-Quinones, Dhamelyz; Butera, Robert E.; Wang, George T.; Teplyakov, Andrew V.
The reactions of boric acid and 4-fluorophenylboronic acid with H- and Cl-terminated Si(100) surfaces in solution were investigated. X-ray photoelectron spectroscopy (XPS) studies reveal that both molecules react preferentially with Cl-Si(100) and not with H-Si(100) under identical conditions. On Cl-Si(100), the reactions introduce boron onto the surface, forming a Si-O-B structure. The quantification of boron surface coverage demonstrates that the 4-fluorophenylboronic acid leads to ∼2.8 times higher boron coverage compared to that of boric acid on Cl-Si(100). Consistent with these observations, density functional theory studies show that the reaction of boric acid and 4-fluorophenylboronic acid is more favorable with the Cl- versus H-terminated surface and that on Cl-Si(100) the reaction with 4-fluorophenylboronic acid is ∼55.3 kJ/mol more thermodynamically favorable than the reaction with boric acid. The computational studies were also used to demonstrate the propensity of the overall approach to form high-coverage monolayers on these surfaces, with implications for selective-area boron-based monolayer doping.
Schmalbach, Kevin M.; Lin, Albert C.; Bufford, Daniel C.; Wang, Chenguang; Sun, Changquan C.; Mara, Nathan A.
Nanoindentation provides a convenient and high-throughput means for mapping mechanical properties and for measuring the strain rate sensitivity of a material. Here, nanoindentation was applied to the study of microcrystalline cellulose. Constant strain rate nanoindentation revealed a depth dependence of nanohardness and modulus, mostly attributed to material densification. Nanomechanical maps of storage modulus and hardness resolved the shape and size of voids present in larger particles. In smaller, denser particles, however, where storage modulus varied little spatially, there was still some spatial dependence of hardness, which can be explained by cellulose’s structural anisotropy. Additionally, hardness changed with the indentation strain rate in strain rate jump tests. The resulting strain rate sensitivity values were found to be in agreement with those obtained by other techniques in the literature.
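For reference, the strain rate sensitivity extracted from such jump tests is commonly defined as the logarithmic derivative of hardness with respect to indentation strain rate:

$$
m = \frac{\partial \ln H}{\partial \ln \dot{\varepsilon}} .
$$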
A new vertically aligned nanocomposite (VAN) structure based on two-dimensional (2D) layered oxides has been designed and self-assembled on both LaAlO3 (001) and SrTiO3 (001) substrates. The new VAN structure consists of epitaxially grown Co3O4 nanopillars embedded in a Bi2WO6 matrix with a unique 2D layered structure, as evidenced by microstructural analysis. Physical property measurements show that the new Bi2WO6-Co3O4 VAN structure exhibits strong ferromagnetic and piezoelectric responses at room temperature as well as an anisotropic permittivity response. This work demonstrates a new approach to processing multifunctional VAN structures based on layered oxide systems for future nonlinear optics, ferromagnets, and multiferroics.
This paper presents a new high-gain, multilevel, bidirectional DC-DC converter for interfacing battery energy storage systems (BESS) with the distribution grid. The proposed topology employs a current-fed structure on the low-voltage (LV) BESS side to obtain high voltage gain during battery-to-grid operation without requiring a large turns-ratio isolation transformer. The high-voltage (HV) side of the converter is a voltage-doubler network comprising two half-bridge circuits with an intermediary bidirectional switch that reconfigures the two bridges in series connection to enhance the boost ratio. A seamless commutation of the transformer leakage inductor current is ensured by the phase-shift modulation of the HV-side devices. The modulating duty cycle of the intermediary bidirectional devices generates a multilevel voltage at twice the switching frequency at the grid-side dc link, which significantly reduces the filter size. The presented modulation strategy ensures zero current switching (ZCS) of the LV devices and zero voltage switching (ZVS) of the HV devices to achieve high power conversion efficiency. Design and operation of the proposed converter are explained with modal analysis and further verified by detailed simulation results.
The dramatic 50% improvement in energy density that Li-metal anodes offer in comparison to graphite anodes in conventional lithium (Li)-ion batteries cannot be realized with current cell designs because of cell failure after a few cycles. Often, failure is caused by Li dendrites that grow through the separator, leading to short circuits. Here, we used a new characterization technique, cryogenic femtosecond laser cross-sectioning with subsequent scanning electron microscopy, to observe how the electroplated Li-metal morphology and the accompanying solid electrolyte interphase (SEI) extend into and through the separator of an intact coin cell, gradually opening pathways for the soft-short circuits that cause failure. We found that separator penetration by the SEI guided the growth of Li dendrites through the cell. A short-circuit mechanism via SEI growth at high current density within the separator is provided. These results will inform future efforts for separator and electrolyte design for Li-metal anodes.
Dechent, Philipp; Epp, Alexander; Jost, Dominik; Preger, Yuliya; Attia, Peter M.; Li, Weihan; Sauer, Dirk U.
Due to their impressive energy density, power density, lifetime, and cost, lithium-ion batteries have become the most important electrochemical storage system, with applications including consumer electronics, electric vehicles, and stationary energy storage. However, each application has unique, often conflicting product specifications, requiring a balanced overall assessment. The Ragone plot is commonly used to compare the energy and power of lithium-ion battery chemistries, but important parameters including cost, lifetime, and temperature sensitivity are not considered in such plots. Overall, a standardized and balanced reporting and visualization of specifications would greatly help an informed cell selection process.
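A Ragone plot of the kind mentioned above is simply specific energy versus specific power on logarithmic axes; the sketch below uses invented values for three unnamed chemistries purely to illustrate the visualization.

```python
# Minimal Ragone-style plot: specific energy vs. specific power on log-log axes.
import matplotlib.pyplot as plt

cells = {
    "Chemistry A": (250, 300),    # (Wh/kg, W/kg) -- hypothetical values
    "Chemistry B": (180, 1500),
    "Chemistry C": (120, 5000),
}

for name, (energy_wh_kg, power_w_kg) in cells.items():
    plt.loglog(power_w_kg, energy_wh_kg, "o", label=name)

plt.xlabel("Specific power (W/kg)")
plt.ylabel("Specific energy (Wh/kg)")
plt.legend()
plt.show()
```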
The self-assembly of binary polymer-grafted nanoparticles (NPs) in a selective solvent is investigated using coarse-grained simulations. Simulations are performed using theoretically informed Langevin dynamics (TILD), a particle-based method that employs a particle-to-mesh scheme to efficiently calculate the nonbonded interactions. The particles are densely grafted with two immiscible polymers, A and B, that are permanently bound to the NP either at random grafting sites (random-grafted) or with all the A chains on one hemisphere of the NP and all the B chains on the other hemisphere (Janus-grafted). For NPs with random grafting, the polymers phase-separate on the surface of the NP to form Janus-type structures in dilute solution, even though some of the chains have to stretch around the particle to form the Janus structure. When the solvent quality is sufficiently poor for the solvophobic chains, the binary grafted NPs assemble into various structures, including double-walled vesicles. In particular, vesicles are formed when the solvophilic volume fraction is between 0.2 and 0.3, in a similar range to that required for vesicle formation in diblock copolymers in a selective solvent. For mixed-grafted NPs, there is considerable variation in the structure of each individual NP, but nevertheless, these NPs form ordered vesicles, similar to those formed by Janus-grafted NPs.
Functional data registration is a necessary processing step for many applications. The observed data can be inherently noisy, often due to measurement error or natural process uncertainty, which most functional alignment methods cannot handle. A pair of functions can also have multiple optimal alignment solutions, which is not addressed in the current literature. In this paper, a flexible Bayesian approach to functional alignment is presented, which appropriately accounts for noise in the data without any pre-smoothing required. Additionally, by running parallel MCMC chains, the method can account for multiple optimal alignments via the multi-modal posterior distribution of the warping functions. To most efficiently sample the warping functions, the approach relies on a modification of the standard Hamiltonian Monte Carlo to be well-defined on the infinite-dimensional Hilbert space. In this work, this flexible Bayesian alignment method is applied to both simulated and real data sets to show its efficiency in handling noisy functions and successfully accounting for multiple optimal alignments in the posterior, characterizing the uncertainty surrounding the warping functions.
Modeling the degradation of cement-based infrastructure due to aqueous environmental conditions continues to be a challenge. In order to develop a capability to predict concrete infrastructure failure due to chemical degradation, we created a chemomechanical model of the effects of long-term water exposure on cement paste. The model couples the mechanical static equilibrium balance with reactive–diffusive transport and incorporates fracture and failure via peridynamics (a meshless simulation method). The model includes fundamental aspects of degradation of ordinary Portland cement (OPC) paste, including the observed softening, reduced toughness, and shrinkage of the cement paste, and increased reactivity and transport with water induced degradation. This version of the model focuses on the first stage of cement paste decalcification, the dissolution of portlandite. Given unknowns in the cement paste degradation process and the cost of uncertainty quantification (UQ), we adopt a minimally complex model in two dimensions (2D) in order to perform sensitivity analysis and UQ. We calibrate the model to existing experimental data using simulations of common tests such as flexure, compression and diffusion. Then we calculate the global sensitivity and uncertainty of predicted failure times based on variation of eleven unique and fundamental material properties. We observed particularly strong sensitivities to the diffusion coefficient, the reaction rate, and the shrinkage with degradation. Also, the predicted time of first fracture is highly correlated with the time to total failure in compression, which implies fracture can indicate impending degradation induced failure; however, the distributions of the two events overlap so the lead time may be minimal. Extension of the model to include the multiple reactions that describe complete degradation, viscous relaxation, post-peak load mechanisms, and to three dimensions to explore the interactions of complex fracture patterns evoked by more realistic geometry is straightforward and ongoing.
Terry steam turbines are widely used in various industries because of their robust design. Within the nuclear power generation industry, they are used in the Reactor Core Isolation Cooling System to remove decay heat during reactor isolation events. During the Fukushima Daiichi nuclear power station disaster in Japan in 2011, the Reactor Core Isolation Cooling System and associated Terry turbine operated for over 70 hours in Unit 2; this runtime is well beyond the expected operating duration. Theories suggest the turbine was subjected to a two-phase inlet flow, which could degrade the turbine performance. In this work, an experimental test rig was constructed to test a full-scale Terry model GS-2 steam turbine under two-phase air/water flows. Steady-state efficiency and torque performance maps of the turbine were developed over a range of turbine inlet pressures (1.38–4.83 bar or 20–70 psia), air mass fractions (0.05–1.0) and rotational speeds up to 4000 RPM. Turbine performance followed expected trends with torque varying linearly and efficiency varying quadratically with rotational speed. In addition, high-speed images of the two-phase flow entering the turbine were also analyzed to understand how changes in inlet pressure and air mass fraction affect the flow regime and homogenization. The present tests with air–water two-phase mixtures are an important step towards providing an understanding of the full-scale Terry turbine's behavior and performance curves under two-phase conditions. The results of this work will be combined with air/water and steam/water data gathered using a small-scale Terry ZS-1 steam turbine in order to understand the scaling relationship between large and small size Terry turbines and fluid pairs. The combined data set will enable further development of analytical models over a wide range of conditions and may be used to provide technical justification for expanded use of the Terry turbines in nuclear power plant safety systems and other systems.
Single Image SICD-Based Automatic Object Processing (SIS-AOP) is an automatic object identification tool for SAR imagery. It ingests a SAR image in the standard SICD format and runs a suite of algorithms to cue possible vehicle detections, cull those detections, and then ultimately label them either as detections only or, where possible, expand them to a class-level ID or a vehicle-type ID. The SIS-AOP results are given in an XML (Extensible Markup Language) output format. This document defines the elements in the SISAOPR XML output format.
Approximately 93% of the US total energy supply is dependent on wellbores in some form, and the industry will drill more wells in the next ten years than in the last 100 years (King, 2014). The global well population is around 1.8 million, of which approximately 35% show some signs of leakage (i.e., sustained casing pressure). Around 5% of offshore oil and gas wells “fail” early, more with age and most with maturity, and 8.9% of “shale gas” wells in the Marcellus play have experienced failure (120 out of 1,346 wells drilled in 2012) (Ingraffea et al., 2014). Current methods for identifying wells that are at highest priority for increased monitoring and/or at highest risk for failure consist of “hand” analysis of multi-arm caliper (MAC) well logging data and geomechanical models. Machine learning (ML) methods are of interest for increasing analysis efficiency and/or enhancing detection of precursors to failure (e.g., deformations). MAC datasets were used to train ML algorithms, and preliminary tests for “predicting” casing collar locations achieved above 90% accuracy in classifying and identifying casing collar locations.
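The classification step can be sketched as a standard supervised-learning problem on features extracted from MAC log windows; the features, labels, and model below are synthetic stand-ins, not the study's actual pipeline.

```python
# Hypothetical sketch: classify MAC-log windows as containing a casing collar or not.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                      # e.g., mean radius, ovality, gradient, variance
y = (X[:, 2] + 0.5 * X[:, 3] > 1.0).astype(int)  # synthetic "collar present" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```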
The adsorption of AlCl3 on Si(100) and the effect of annealing the AlCl3-dosed substrate were studied to reveal key surface processes for the development of atomic-precision, acceptor-doping techniques. This investigation was performed via scanning tunneling microscopy (STM), X-ray photoelectron spectroscopy (XPS), and density functional theory (DFT) calculations. At room temperature, AlCl3 readily adsorbed to the Si substrate dimers and dissociated to form a variety of species. Annealing the AlCl3-dosed substrate at temperatures below 450 °C produced unique chlorinated aluminum chains (CACs) elongated along the Si(100) dimer row direction. An atomic model for the chains is proposed with supporting DFT calculations. Al was incorporated into the Si substrate upon annealing at 450 °C and above, and Cl desorption was observed for temperatures beyond 450 °C. Al-incorporated samples were encapsulated in Si and characterized by secondary ion mass spectrometry (SIMS) depth profiling to quantify the Al atom concentration, which was found to be in excess of 1020 cm-3 across a ∼2.7 nm-thick δ-doped region. The Al concentration achieved here and the processing parameters utilized promote AlCl3 as a viable gaseous precursor for novel acceptor-doped Si materials and devices for quantum computing.
The Smoothed Particle Hydrodynamics (SPH) package within LAMMPS is explored as a possible tool for simulating the motion of bubbles in a vibrating liquid-filled container. As an initial test case, the unphysical but computationally less intense situation of a two-dimensional single bubble rising in a quiescent liquid under the influence of gravity is considered herein. Although physically plausible behavior was obtained under certain conditions, this behavior depends strongly on the system parameters. Moreover, the large density ratio between the liquid and bubble requires extremely small timesteps, which make the simulations undesirably computationally expensive. Ultimately, it was determined that this method is not feasible for providing quantitatively accurate results for the desired application.
The credibility of an engineering model is of critical importance in large-scale projects. How concerned should an engineer be when reusing someone else's model when they may not know the author or be familiar with the tools that were used to create it? In this report, the authors advance engineers' capabilities for assessing models through examination of the underlying semantic structure of a model--the ontology. This ontology defines the objects in a model, types of objects, and relationships between them. In this study, two advances in ontology simplification and visualization are discussed and are demonstrated on two systems engineering models. These advances are critical steps toward enabling engineering models to interoperate, as well as assessing models for credibility. For example, results of this research show an 80% reduction in file size and representation size, dramatically improving the throughput of graph algorithms applied to the analysis of these models. Finally, four future problems are outlined in ontology research toward establishing credible models--ontology discovery, ontology matching, ontology alignment, and model assessment.
The critical pitting temperature (CPT) of selective laser melted (SLM) 316 L stainless steel in 1.0 M NaCl was measured and compared with a commercial wrought alloy. Potentiostatic measurements determined a mean CPT value of 16 ± 0.7 °C, 27.5 ± 0.8 °C and 31 ± 1 °C for the wrought alloy, the SLM alloy normal to the build direction and parallel to the build direction, respectively. The lead-in pencil electrode technique was used to study the pit chemistry of the two alloys and to explain the higher CPT values observed for the SLM alloy. A lower critical current density required for passivation in a simulated pit solution was measured for the SLM alloy. Moreover, the ratio of the critical concentration to saturated concentration of dissolving metal cations was found to be higher for the SLM alloy, which was related to its different salt film properties, possibly as a result of the SLM's distinct microstructure.
We present the results of large scale molecular dynamics simulations aimed at understanding the origins of high friction coefficients in pure metals, and their concomitant reduction in alloys and composites. We utilize a series of targeted simulations to demonstrate that different slip mechanisms are active in the two systems, leading to differing frictional behavior. Specifically, we show that in pure metals, sliding occurs along the crystallographic slip planes, whereas in alloys shear is accommodated by grain boundaries. In pure metals, there is significant grain growth induced by the applied shear stress and the slip planes are commensurate contacts with high friction. However, the presence of dissimilar atoms in alloys suppresses grain growth and stabilizes grain boundaries, leading to low friction via grain boundary sliding.
Distributed controllers play a prominent role in electric power grid operation. The coordinated failure or malfunction of these controllers is a serious threat whose mechanisms and consequences are not yet well understood or planned against. If certain controllers are maliciously compromised by an adversary, they can be manipulated to drive the system to an unsafe state. The authors present a strategy for distributed controller defence (SDCD) for improved grid tolerance under conditions of distributed controller compromise. The authors first formalise the roles that distributed controllers play and their control support groups using controllability analysis techniques. With these formally defined roles and groups, the authors then present defence strategies for maintaining or regaining system control during such an attack. A general control response framework is presented for the compromise or failure of distributed controllers using the remaining operational set. The SDCD approach is successfully demonstrated with a 7-bus system and the IEEE 118-bus system for single and coordinated distributed controller compromise; the results indicate that SDCD is able to significantly reduce system stress and mitigate compromise consequences.
We present a new method to discriminate between earthquakes and buried explosions using observed seismic data. The method is different from previous seismic discrimination algorithms in two main ways. First, we use seismic spatial gradients, as well as the wave attributes estimated from them (referred to as gradiometric attributes), rather than the conventional three-component seismograms recorded on a distributed array. The primary advantage of this is that a gradiometer is only a fraction of a wavelength in aperture compared with a conventional seismic array or network. Second, we use the gradiometric attributes as input data into a machine learning algorithm. The resulting discrimination algorithm uses the norms of truncated principal components obtained from the gradiometric data to distinguish the two classes of seismic events. Using high-fidelity synthetic data, we show that the data and gradiometric attributes recorded by a single seismic gradiometer perform as well as a conventional distributed array at the event-type discrimination task.
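The discrimination idea can be sketched as follows: project windows of gradiometric attributes onto a truncated set of principal components and feed the magnitudes of those projections to a classifier. The data below are synthetic stand-ins, and the classifier choice is an assumption for illustration only.

```python
# Sketch: truncated-PCA projection magnitudes as features for event-type classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
earthquakes = rng.normal(0.0, 1.0, size=(100, 50))   # attribute windows (synthetic)
explosions = rng.normal(0.5, 1.2, size=(100, 50))
X = np.vstack([earthquakes, explosions])
y = np.array([0] * 100 + [1] * 100)

pca = PCA(n_components=5).fit(X)
features = np.abs(pca.transform(X))                  # per-component projection magnitudes
clf = LogisticRegression(max_iter=1000).fit(features, y)
print("training accuracy:", round(clf.score(features, y), 3))
```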