Fluctuating boundary layer pressures are an important loading component for reentry bodies. These loads are often characterized through cross-spectral-density-based quantities such as longitudinal and lateral coherence, spatial correlation, and the frequency power spectral density. The widely utilized Corcos separable coherence model functional form is employed in this study. While the classical Corcos-style model, which uses a self-similar frequency-spacing variable (e.g., ωΔx/U_c, where the subscript denotes a dimensional convection velocity), has been used effectively for low-speed simulations, high-speed problems often require a model that involves both the self-similar variable and the sensor spacing Δx. Here we examine longitudinal coherence formulations that include explicit Δx behavior as well as the self-similar variable. Examination of an analytical model/synthetic pressure-fluctuation correlation function developed here clearly demonstrates that the self-similar form may need to be supplemented by non-similar information. Using the synthetic space-time correlation expression, a coherence model that uses self-similar variables and explicit (but continuous) spatial information is proposed. Estimates for the parameters in the coherence model are derived using asymptotic arguments available from the synthetic result. Further, relationships are derived to estimate coherence model parameters and their connection to longitudinal correlation behavior assuming exponential auto-spectral density models. Comparison of these expressions with wind tunnel test and DNS simulation data shows good agreement. Measurements from flight tests that deviate greatly from the classical self-similar form can be successfully described using the extended model, although the coherence model parameters must be modified. In summary, an extended coherence model is developed that provides good explanations of longitudinal coherence and correlation behavior.
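For reference, the classical separable Corcos longitudinal coherence alluded to above is commonly written as (this is the standard textbook form, not the extended model developed in the paper):

$$ \gamma(\Delta x,\omega)\;=\;\exp\!\left(-\alpha\,\frac{\omega\,\Delta x}{U_c}\right) $$

where Δx is the sensor separation, ω the angular frequency, U_c the convection velocity, and α an empirical decay constant; the coherence depends only on the self-similar variable ωΔx/U_c. The extended model discussed in the abstract additionally retains explicit dependence on the spacing Δx itself.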
Incident response is an area within cyber defense that is responsible for detecting, mitigating, and preventing threats within a given network. Like other areas of cyber security, incident response is experiencing a shortage of qualified workers, which has led to technological development aimed at alleviating labor-related pressures on organizations. A cognitive task analysis was conducted with incident response experts to capture expertise requirements, and an existing construct was used to help prioritize the development of new technology. Findings indicated that current software development incorporates factors such as analyst efficiency and consistency. Gaps were identified regarding communication and team navigation that are inherent to dynamic team environments. This research identified which expertise areas are needed at lower-tier levels of incident response and which of those areas current automation platforms are addressing. These gaps help focus future studies by bridging expertise research to development efforts.
Second-order optimizers hold intriguing potential for deep learning, but suffer from increased cost and sensitivity to the non-convexity of the loss surface as compared to gradient-based approaches. We introduce a coordinate descent method to train deep neural networks for classification tasks that exploits global convexity of the cross-entropy loss in the weights of the linear layer. Our hybrid Newton/Gradient Descent (NGD) method is consistent with the interpretation of hidden layers as providing an adaptive basis and the linear layer as providing an optimal fit of the basis to data. By alternating between a second-order method to find globally optimal parameters for the linear layer and gradient descent to train the hidden layers, we ensure an optimal fit of the adaptive basis to data throughout training. The size of the Hessian in the second-order step scales only with the number of weights in the linear layer and not the depth and width of the hidden layers; furthermore, the approach is applicable to arbitrary hidden layer architectures. Previous work applying this adaptive basis perspective to regression problems demonstrated significant improvements in accuracy at reduced training cost, and this work can be viewed as an extension of this approach to classification problems. We first prove that the resulting Hessian matrix is symmetric semi-definite, and that the Newton step realizes a global minimizer. By studying classification of manufactured two-dimensional point cloud data, we demonstrate both an improvement in validation error and a striking qualitative difference in the basis functions encoded in the hidden layer when trained using NGD. Application to image classification benchmarks for both dense and convolutional architectures reveals improved training accuracy, suggesting gains of second-order methods over gradient descent. A Tensorflow implementation of the algorithm is available at github.com/rgp62/.
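As a rough illustration of the linear-layer Newton step that NGD alternates with gradient descent on the hidden layers, the sketch below fits a softmax/cross-entropy linear layer to fixed hidden-layer features. This is a minimal NumPy sketch under our own simplifications (dense Hessian assembly and a small damping term added for numerical stability), not the authors' TensorFlow implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def newton_step_linear_layer(Phi, Y, W, damping=1e-6):
    """One Newton step for the convex cross-entropy fit of the linear layer,
    holding the hidden-layer features Phi fixed.
    Phi: (n, d) hidden-layer outputs; Y: (n, k) one-hot labels; W: (d, k) weights."""
    n, d = Phi.shape
    k = Y.shape[1]
    P = softmax(Phi @ W)                                   # predicted class probabilities
    g = (Phi.T @ (P - Y)).reshape(d * k, order="F")        # gradient w.r.t. vec(W)
    H = np.zeros((d * k, d * k))                           # Hessian (positive semi-definite)
    for i in range(n):
        S = np.diag(P[i]) - np.outer(P[i], P[i])
        H += np.kron(S, np.outer(Phi[i], Phi[i]))
    H += damping * np.eye(d * k)                           # damping for numerical stability
    step = np.linalg.solve(H, g)
    return W - step.reshape((d, k), order="F")

# Toy usage with made-up features standing in for hidden-layer outputs.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 5))
Y = np.eye(3)[rng.integers(0, 3, 200)]
W = np.zeros((5, 3))
for _ in range(10):
    W = newton_step_linear_layer(Phi, Y, W)
```

In a full NGD loop, this step would be interleaved with ordinary gradient-descent updates of the hidden-layer parameters that produce Phi.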
Emery, Benjamin F.; Niles, Meredith T.; Wiltshire, Serge; Brown, Molly E.; Fisher, Brendan; Ricketts, Taylor H.
It is widely anticipated that climate change will negatively affect both food security and diet diversity. Diet diversity is especially critical for children as it correlates with macro- and micronutrient intake important for child development. Despite these anticipated links, little empirical evidence has demonstrated a relationship between diet diversity and climate change, especially across large datasets spanning multiple global regions and with more recent climate data. Here we use survey data from 19 countries and more than 107 000 children, coupled with 30 years of precipitation and temperature data, to explore the relationship of climate to child diet diversity while controlling for other agroecological, geographic, and socioeconomic factors. We find that higher long-term temperatures are associated with decreases in overall child diet diversity, while higher rainfall in the previous year, compared to the long-term average rainfall, is associated with greater diet diversity. Examining six regions (Asia, Central America, North Africa, South America, Southeast Africa, and West Africa) individually, we find that five have significant reductions in diet diversity associated with higher temperatures while three have significant increases in diet diversity associated with higher precipitation. In West Africa, increasing rainfall appears to counterbalance the effect of rising temperatures on diet diversity. In some regions, the statistical effect of climate on diet diversity is comparable to, or greater than, that of other common development efforts, including those focused on education, improved water and toilets, and poverty reduction. These results suggest that warming temperatures and increasing rainfall variability could have profound short- and long-term impacts on child diet diversity, potentially undermining widespread development interventions aimed at improving food security.
Direct Numerical Simulation (DNS) data of Moderate or Intense Low-oxygen Dilution (MILD) combustion are analysed to identify the contributions of the autoignition and flame modes. This is performed using an extended Chemical Explosive Mode Analysis (CEMA) which accounts for diffusion effects, allowing it to discriminate between deflagration and autoignition. This analysis indicates that in premixed MILD combustion conditions, the main combustion mode is ignition for all dilution and turbulence levels and for the two reactant temperature conditions considered. In non-premixed conditions, the prevalence of the ignition mode was observed to depend on the axial location and mixture fraction stratification. With a large mixture fraction lengthscale, ignition predominates in the early part of the domain while the deflagrative mode increases further downstream. On the other hand, when the mixture fraction lengthscale is small, sequential autoignition is observed. Finally, the various combustion modes are observed to correlate strongly with mixture fraction: lean mixtures are more likely to autoignite, while stoichiometric and rich mixtures are more likely to react as deflagrative structures.
In the present work, three-dimensional turbulent non-premixed oblique slot-jet flames impinging on a wall were investigated using direct numerical simulation (DNS). Two cases are considered, with the Damköhler number (Da) of case A being twice that of case B. A 17-species, 73-step mechanism for methane combustion was employed in the simulations. It was found that flame extinction in case B is more prominent than in case A. Reignition in the lower branch of combustion for case A occurs when the scalar dissipation rate relaxes, while no reignition occurs in the lower branch for case B due to excessive scalar dissipation rate. A method was proposed to identify the flame quenching edges of turbulent non-premixed flames in wall-bounded flows based on the intersections of mixture fraction and OH mass fraction iso-surfaces. The flame/wall interactions were examined in terms of the quenching distance and the wall heat flux along the quenching edges. There is essentially no flame/wall interaction in case B due to the extinction caused by excessive turbulent mixing. In contrast, significant interactions between flames and the wall are observed in case A. The quenching distance is found to be negatively correlated with wall heat flux, as previously reported for turbulent premixed flames. The influence of chemical reactions and the wall on flow topologies was identified. The FS/U and FC/U topologies are found near flame edges, and the NNN/U topology appears when reignition occurs. The vortex-dominant topologies, FC/U and FS/S, play an increasingly important role as the jet turbulence develops.
The interaction of multiple injections in a diesel engine involves a complex interplay between freshly introduced fuel, previous combustion products, and overall combustion. To improve understanding of the relevant processes, high-speed Planar Laser-Induced Fluorescence (PLIF) with 355-nm excitation of formaldehyde and Polycyclic Aromatic Hydrocarbon (PAH) soot precursors is applied to multiple injections of n-dodecane from Engine Combustion Network Spray D, characterized by a converging 189-µm nozzle. High-speed schlieren imaging is applied simultaneously with 50-kHz PLIF excitation to visualize the spray structures, jet penetration, and ignition processes. For the first injection, formaldehyde (as an indicator of low-temperature chemistry) is first found in the jet periphery, after which it quickly propagates through the center of the jet towards the jet head prior to high-temperature ignition. At second-stage ignition, downstream formaldehyde is consumed rapidly and upstream formaldehyde develops into a quasi-steady structure for as long as the momentum flux from the injector continues. Since the first injection in this work is relatively short, differences from a single long injection are readily observed, ultimately resulting in high-temperature combustion and PAH structures appearing farther upstream after the end of injection. For the second injection in this work, the first formaldehyde signal is significantly advanced because of the entrained high-temperature combustion products, and an obvious premixed burn event does not occur. The propensity for combustion recession after the end of the first injection changes significantly with ambient temperature, thereby affecting the level of interaction between the first and second injections.
A key strategy for protecting municipal water supplies is the use of sensors to detect the presence of contaminants in associated water distribution systems. Deploying a contamination warning system involves placing a limited number of sensors so as to maximize the level of protection afforded. Researchers have proposed several models and algorithms for generating such placements, each optimizing with respect to a different design objective. The use of disparate design objectives raises several questions: (1) What is the relationship between optimal sensor placements for different design objectives? and (2) Is there any risk in focusing on specific design objectives? To answer these questions, we model the sensor placement problem via a mixed-integer programming formulation of the well-known p-median problem from facility location theory. Our model can express a broad range of design objectives. Using three large test networks, we show that optimal solutions with respect to one design objective are often highly sub-optimal with respect to other design objectives. However, it is sometimes possible to construct solutions that are simultaneously near-optimal with respect to a range of design objectives. The design of contamination warning systems thus requires careful and simultaneous consideration of multiple, disparate design objectives.
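For reference, the standard p-median formulation mentioned above can be stated as follows; the reading of the index sets for the sensor-placement setting is our gloss, not a quotation of the paper's model:

$$ \min_{x,y}\ \sum_{i}\sum_{j} w_i\,d_{ij}\,x_{ij}\quad\text{s.t.}\quad \sum_{j} x_{ij}=1\ \ \forall i,\qquad x_{ij}\le y_j\ \ \forall i,j,\qquad \sum_{j} y_j=p,\qquad x_{ij},y_j\in\{0,1\} $$

where i indexes contamination incidents (with weights w_i), j indexes candidate sensor locations, d_ij is the impact incurred if incident i is first detected by a sensor at location j, y_j indicates whether a sensor is placed at j, and p is the number of sensors; different design objectives correspond to different choices of the impact coefficients d_ij.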
Distribution systems with high levels of solar PV may experience notable changes due to external conditions, such as temperature or solar irradiation. Fault detection methods must be developed that can accommodate these changing conditions. This paper develops a method for fast detection, location, and classification of faults in a system with a high level of solar PV. The method uses the Continuous Wavelet Transform (CWT) technique to detect the traveling waves produced by fault events. The CWT coefficients of the current waveform at the traveling-wave arrival time provide a fingerprint that is characteristic of each fault type and location. Two Convolutional Neural Networks are trained to classify any new fault event. The method relies on several protection devices and does not require communication between them. The results show that, for multiple fault scenarios and solar PV conditions, high accuracy for both location and type classification can be obtained.
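A minimal sketch of the CWT "fingerprint" idea described above, using a synthetic current waveform and the PyWavelets package; the sampling rate, wavelet choice, arrival-time heuristic, and window size are illustrative assumptions, and the resulting scale-time patch is what would be passed to the CNN classifiers:

```python
import numpy as np
import pywt

# Synthetic feeder current: steady 60 Hz with a high-frequency transient at the
# (hypothetical) traveling-wave arrival time.  Sampling rate is illustrative.
fs = 1.0e6
t = np.arange(0, 2e-3, 1 / fs)
arrival_true = len(t) // 2
current = np.sin(2 * np.pi * 60 * t)
tw = t[arrival_true:] - t[arrival_true]
current[arrival_true:] += 0.3 * np.exp(-2e4 * tw) * np.sin(2 * np.pi * 5e4 * tw)

# CWT of the current; the scale-time coefficient patch around the arrival time
# serves as the fingerprint image fed to the CNN classifiers.
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(current, scales, 'morl', sampling_period=1 / fs)
arrival = int(np.argmax(np.abs(coeffs).sum(axis=0)))     # crude arrival estimate
fingerprint = np.abs(coeffs[:, arrival - 32:arrival + 32])
```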
A computational simulation of various mixing laws for gaseous equations of state, using planar traveling shocks for multiple mixtures in three dimensions, is analyzed against nominal experimental data. Numerical simulations use the Sandia National Laboratories shock hydrodynamics code CTH and other codes, including the thermochemical equilibrium code TIGER and the uncertainty quantification and sensitivity analysis code DAKOTA. The mixtures are 1:1 and 1:3 molar mixtures of helium and sulfur hexafluoride. The mixing laws analyzed are the ideal-gas law, Amagat’s law, Dalton’s law, the Becker–Kistiakowsky–Wilson equation of state (EOS), the exponential-6 EOS, and the Jacobs-Cowperthwaite-Zwisler EOS. Examination of the experimental data with TIGER revealed that the shock should not be strong enough to drive the mixture nonideal, because the compressibility factor z was essentially unity (z ≈ 1.02). Experimental results show that none of the equations of state are able to accurately predict the properties of the shocked mixture; similar discrepancies have been observed in previous works. Kinetic molecular theory appears to introduce a parameter that offers a possible explanation for the discrepancies. Implementation of the kinetic molecular theory parameter into the EOS is left for future work.
In the Nuclear Security Enterprise (NSE), many high-reliability components must be stored for long periods of time before being called on to function a single time. During dormant storage, changes in the performance of these components may occur due to environmental exposures. These exposures may enhance the natural degradation of materials or result in shifts in the performance of electronics. Ongoing assessment of these components is necessary to inform the need for upgrades or replacements to ensure high-reliability requirements are being maintained. This paper presents several assessment methodologies that are currently used or have been proposed for this problem. We also present the methods that we believe to be most appropriate for the assessment of nuclear weapons components subjected to dormant storage.
The Source Physics Experiment (SPE) Phase I conducted six underground chemical explosions at the same experimental pad with the goal of characterizing underground explosions to enhance the United States (U.S.) ability to detect and discriminate underground nuclear explosions (UNEs). A fully polarimetric synthetic aperture radar (PolSAR) collected imagery in VideoSAR mode during the fifth and sixth explosions in the series (SPE-5 and SPE-6). Previously, we reported the prompt PolSAR surface changes caused by the SPE-5 and SPE-6 explosions within seconds or minutes of the underground chemical explosions, including a drop in spatial coherence and polarimetric scattering changes. Therein it was hypothesized that surface changes occurred when surface particles experienced upward acceleration greater than 1 g. Because the SPE site was instrumented with surface accelerometers, we explore that hypothesis and report our findings in this article. We compare explosion-caused prompt surface expressions measured by PolSAR with the prompt surface movement measured by accelerometers. We tie these findings to UNE detection by comparing the PolSAR and accelerometer results to empirical ground motion predictions derived from accelerometer recordings of UNEs collected prior to the cessation of U.S. nuclear testing. We find that the single-threshold greater-than-1-g hypothesis is not correct, as it does not explain the PolSAR results. Our findings show that the PolSAR surface coherence spatial extent is highly correlated with surface velocity, both measured and predicted, and the resulting surface deformation extent is corroborated by accelerometer records and the predicted lateral spall extent. The PolSAR scattering changes measured during SPE-6 are created by the prompt surface displacement being larger than the spall gap.
Two key mechanical processes exist in the formation of powder compacts: the complex kinematics of particle rearrangement as the powder is densified, and particle deformation leading to mechanical failure and fragmentation. Experiments measuring the time-varying forces across a densifying powder bed have been performed in powders of microcrystalline cellulose with mean particle sizes between 0.4 and 1.2 mm. In these experiments, diagnostics measured the applied and transmitted loads and the bulk powder density. Any insight into the particle behavior must be inferred from deviations in the smoothly increasing stress-density compaction relationship. By incorporating a window in the compaction die body, simultaneous images of particle rearrangement and fracture at the confining window are captured. The images are post-processed in MATLAB® to track individual particle motion during compression. Complementary discrete element method (DEM) simulations are presented and compared to experiment. The comparison provides insight into applying DEM methods for simulating large or permanent particle deformation and suggests areas for future study.
Harris, Trevor; Bolin, Anthony W.; Steiger, Nathan J.; Smerdon, Jason E.; Narisetty, Naveen
Climate field reconstructions (CFRs) attempt to estimate spatiotemporal fields of climate variables in the past using climate proxies such as tree rings, ice cores, and corals. Data assimilation (DA) methods are a recent and promising new means of deriving CFRs that optimally fuse climate proxies with climate model output. Despite the growing application of DA-based CFRs, little is understood about how much the assimilated proxies change the statistical properties of the climate model data. To address this question, we propose a robust and computationally efficient method, based on functional data depth, to evaluate differences in the distributions of two spatiotemporal processes. We apply our test to study global and regional proxy influence in DA-based CFRs by comparing the background and analysis states, which are treated as two samples of spatiotemporal fields. We find that the analysis states are significantly altered from the climate-model-based background states due to the assimilation of proxies. Moreover, the difference between the analysis and background states increases with the number of proxies, even in regions far beyond proxy collection sites. Our approach allows us to characterize the added value of proxies, indicating where and when the analysis states are distinct from the background states. Supplementary materials for this article are available online.
Linear elastic fracture mechanics theory predicts a parabolic crack opening profile. However, direct observation of crack tip shape in situ for brittle materials is challenging due to the small size of the active crack tip region. By leveraging advances in optical microscopy techniques and using a soft brittle hydrogel material, we can measure crack geometry on the micron scale. For glasses and ceramics, expected crack opening displacements are on the order of nanometers. However, for hydrogels, we can achieve crack opening displacements on the order of hundreds of microns or larger while maintaining brittle fracture behavior. Knowing the elastic properties, we can use crack geometry to calculate the stress intensity factor, K, and energy release rate, G, during propagation. Assuming the gel is hyperelastic, we can also approximate the size of the nonlinear region ahead of the crack tip. Geometric measurement of fracture properties eliminates the need to measure complex boundary and loading conditions, allowing us to explore new methods of inducing crack propagation. Further, this allows us to define measures of fracture resistance in materials that do not fit the traditionally defined theories of fracture mechanics.
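For context, the parabolic crack-opening profile predicted by LEFM, and its inversion for the stress intensity factor, follow from the standard Mode I near-tip solution (plane stress and plane strain distinguished by the effective modulus):

$$ \delta(r)=\frac{8K_I}{E'}\sqrt{\frac{r}{2\pi}},\qquad E'=E\ \text{(plane stress)},\quad E'=\frac{E}{1-\nu^2}\ \text{(plane strain)},\qquad G=\frac{K_I^2}{E'} $$

so a measured opening profile δ(r) a distance r behind the tip, together with the elastic constants, yields K_I and hence G, which is the geometric route to fracture properties described above.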
University research is a strong focus of the Office of Nuclear Energy within the Department of Energy. This research complements existing work in the various program areas and provides support and training for students entering the field. Four university projects have provided support to the Material Protection Accounting and Controls Technologies (MPACT) 2020 milestone focused on safeguards for electrochemical processing facilities. The University of Tennessee, Knoxville, has examined data fusion of NDA measurements such as Hybrid K-Edge Densitometry and Cyclic Voltammetry. Oregon State University and Virginia Polytechnic Institute have examined the integration of accountancy data with process monitoring data for safeguards. The Ohio State University and the University of Utah have developed a Ni-Pt SiC Schottky diode capable of high-temperature alpha spectroscopy for actinide detection in molten salts. Finally, the University of Colorado has developed a key enabling technology for the use of microcalorimetry.
Several recent workshops conducted by the DOE Advanced Scientific Computing Research program have established that the complexity of developing applications and executing them on high-performance computing (HPC) systems is rising at a rate that will make it nearly impossible to continue to achieve higher levels of performance and scalability. Absent an alternative approach to managing this ever-growing complexity, HPC systems will become increasingly difficult to use. A more holistic approach to designing and developing applications and managing system resources is required. This paper outlines a research strategy for managing this increasing complexity by providing the programming environment, software stack, and hardware capabilities needed for autonomous resource management of HPC systems. Developing portable applications for a variety of HPC systems of varying scale requires a paradigm shift from the current approach, where applications are painstakingly mapped to individual machine resources, to an approach where machine resources are automatically mapped and optimized to applications as they execute. Achieving such automated resource management for HPC systems is a daunting challenge that requires significant sustained investment in exploring new approaches and novel capabilities in software and hardware that span the spectrum from programming systems to device-level mechanisms. This paper provides an overview of the functionality needed to enable autonomous resource management and optimization and describes the components currently being explored at Sandia National Laboratories to help support this capability.
Physical vapor deposition (PVD) of high explosives can produce energetic samples with unique microstructure and morphology compared to traditional powder processing techniques, but challenges may exist in fabricating explosive films without defects. Deposition conditions and substrate material may promote microcracking and other defects in the explosive films. In this study, we investigate effects of engineered microscale defects (gaps) on detonation propagation and failure for pentaerythritol tetranitrate (PETN) films using ultra-high-speed refractive imaging and hydrocode modelling. Observations of the air shock above the gap reveal significant instabilities during gap crossing and re-ignition.
Oxiranes are a class of cyclic ethers formed in abundance during low-temperature combustion of hydrocarbons and biofuels, either via chain-propagating steps that occur from unimolecular decomposition of β-hydroperoxyalkyl radicals (β-̇QOOH) or from reactions of HOȮ with alkenes. Ethyloxirane is one of four alkyl-substituted cyclic ether isomers produced as an intermediate from n-butane oxidation. While rate coefficients for β-̇QOOH → ethyloxirane + ȮH are reported extensively, subsequent reaction mechanisms of the cyclic ether are not. As a result, chemical kinetics mechanisms commonly adopt simplified chemistry to describe ethyloxirane consumption by convoluting several elementary reactions into a single step, which may introduce mechanism truncation error—uncertainty derived from missing or incomplete chemistry. The present work provides fundamental insight on reaction mechanisms of ethyloxirane in support of ongoing efforts to minimize mechanism truncation error. Reaction mechanisms are inferred from the detection of products during chlorine atom-initiated oxidation experiments using multiplexed photoionization mass spectrometry conducted at 10 Torr and temperatures of 650 K and 800 K. To complement the experiments, calculations of stationary point energies were conducted using the ccCA-PS3 composite method on ̇R + O2 potential energy surfaces for the four ethyloxiranyl radical isomers, which produced barrier heights for 24 reaction pathways. In addition to products from ̇QOOH → cyclic ether + ȮH and ̇R + O2 → conjugate alkene + HOȮ, both of which were significant pathways and are prototypical to alkane oxidation, other species were identified from ring-opening of both ethyloxiranyl and ̇QOOH radicals. The latter occurs when the unpaired electron is localized on the ether group, causing the initial ̇QOOH structure to ring-open and form a resonance-stabilized ketohydroperoxide-type radical. The present work provides the first analysis of ethyloxirane oxidation chemistry, which reveals that consumption pathways are complex and may require an expansion of submechanisms to increase the fidelity of chemical kinetics mechanisms.
We introduce a new algorithm, Construction of dIfferentially Private Empirical Distributions from a low-order marginals set tHrough solving linear Equations with l2 Regularization (CIPHER), that produces differentially private empirical joint distributions from a set of low-order marginals. CIPHER is conceptually simple and requires no more than decomposing joint probabilities via basic probability rules to construct a linear equation set and subsequently solving the equations. Compared to the full-dimensional histogram (FDH) sanitization, CIPHER has drastically lower requirements on computational storage and memory, which is practically attractive especially considering that the high-order signals preserved by the FDH sanitization are likely just sample randomness and rarely of interest. Our experiments demonstrate that CIPHER outperforms the multiplicative weighting exponential mechanism in preserving original information and has similar or superior cost-normalized utility to FDH sanitization at the same privacy budget.
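A schematic illustration of the marginal-to-joint reconstruction step (not the CIPHER algorithm itself, whose equation construction and privacy mechanism are more involved): each observed low-order marginal contributes linear equations in the joint cell probabilities, and the system is solved with l2 (ridge) regularization. The Laplace perturbation below is only a stand-in for whatever sanitized marginals would be supplied.

```python
import numpy as np
from itertools import product

# Toy joint distribution over three binary variables; only (noisy) one- and
# two-way marginals are observed, and the joint is reconstructed from them.
rng = np.random.default_rng(0)
cells = list(product([0, 1], repeat=3))
idx = {c: i for i, c in enumerate(cells)}
true_p = rng.dirichlet(np.ones(len(cells)))

def marginal_rows(vars_):
    """Linear-equation rows: joint cells consistent with each configuration of
    `vars_` must sum to the corresponding marginal probability."""
    rows = []
    for cfg in product([0, 1], repeat=len(vars_)):
        row = np.zeros(len(cells))
        for c in cells:
            if all(c[v] == x for v, x in zip(vars_, cfg)):
                row[idx[c]] = 1.0
        rows.append(row)
    return np.array(rows)

A_blocks, b_blocks = [], []
for vars_ in [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]:
    A = marginal_rows(vars_)
    noise = 0.01 * rng.laplace(size=A.shape[0])   # stand-in for DP-sanitized marginals
    A_blocks.append(A)
    b_blocks.append(A @ true_p + noise)
A, b = np.vstack(A_blocks), np.concatenate(b_blocks)

# Ridge (l2-regularized) least squares, then clip and renormalize to a valid pmf.
lam = 1e-3
p_hat = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
p_hat = np.clip(p_hat, 0.0, None)
p_hat /= p_hat.sum()
print(np.round(p_hat, 3), np.round(true_p, 3))
```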
Synthetic Aperture Radar (SAR) projects a 3-D scene’s reflectivity into a 2-D image. In doing so, it generally focuses the image to a surface, usually a ground plane. Consequently, scatterers above or below the focal/ground plane typically exhibit some degree of distortion, manifesting as geometric displacement and misfocusing or smearing. Limits to acceptable misfocusing define a Height of Focus (HOF), analogous to Depth of Field in optical systems. This may be exacerbated by the radar’s flightpath during the synthetic aperture data collection. It might also be exploited for target height estimation and offer insight into other height estimation techniques.
The National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories New Mexico (SNL/NM) developed this Project Execution Plan (PEP) to document its process for executing, monitoring, controlling, and closing out Phase 3 of the Gen 3 Particle Pilot Plant (G3P3). This plan serves as a resource for stakeholders who wish to be knowledgeable of project objectives and how they will be accomplished. The plan is intended to be used by the development partners, principal investigator, and the federal project director. Project objectives are derived from the mission needs statement, and an integrated project team assists in development of the PEP. This plan is a living document and will be updated throughout the project to describe current and future processes and procedures. The scope of the PEP covers cost, schedule, and scope; project reporting; the staffing plan; the quality assurance plan; and environment, safety, security, and health. This document is a tailored approach for the Facilities Management and Operations Center (FMOC) to meet the project management principles of DOE Order 413.3B, Program and Project Management for the Acquisition of Capital Assets, and DOE G 413.3-15, DOE Guide for Project Execution Plans. This document will elaborate on content as knowledge of the project is gained or refined.
Technical procedures systematically describe a series of steps for the operation, maintenance, or testing of systems or components. They are widely used as a method for ensuring consistency, reducing human error, and improving the quality of the end-product. This guide provides specific guidance to procedure writers to help them generate high-quality technical procedures. The guidance is aimed at reducing confusion or ambiguity on the part of the operator, thereby increasing efficiency and reducing errors and rework. The appendices to this document define key terms associated with the creation of technical procedures, list common error traps, and define a set of action verbs that should be used in technical procedures.
Heat waves have catastrophic effects, causing mortality, air quality loss, grid failures, infrastructure damage, and increases in electricity consumption. The literature indicates that heat waves are growing in intensity, duration, and frequency. This paper documents a heat wave study of the Sandia National Laboratories (SNL) California site. The analysis involves: 1) projection of a heat wave based on historical data and NEX-DCP30 climate projections, 2) classification of peak electricity load points that represent the site on workdays, Fridays, and weekends, 3) regression of the peak load data to produce confidence bounds for the analysis, and 4) calibration and projection of building energy models (BEMs) to the heat wave scenario. This approach worked well for the previous NM site analysis of meter data and 97 representative BEMs. For the CA site, the BEM calibration procedure was unsuccessful without individual BEM calibrations. Many of the 23 California BEMs required calibration at the building level rather than for the entire site. This was found to be due to many of the BEMs having significantly different electric demand profiles than their meter data, whereas the NM BEMs were much more accurate. Unlike the NM site, the CA site did not distinguish Friday operations clearly, and the associated K-means clustering algorithm that worked for the NM site did not add value for the CA site. The regression analyses produced estimates of site-wide increases to daily peak loads with 95% confidence bounds that were much wider than in the NM analysis. The CA site was found to have a higher average peak load sensitivity of 1.07%/°C (0.59%/°F) in comparison to the NM site with 0.61%/°C (0.34%/°F). Even so, the larger sensitivity is counteracted by a milder projection for future heat waves from NEX-DCP30 downscaled climate projections. The expected heat wave maximum temperature of 45.1°C (113.2°F) did not even break the current record of 46.1°C (115.0°F) in Livermore, CA, and only had total heating energy of 28°C·day (51°F·day) from baseline 2019 weather in comparison to NM's 38°C·day (68°F·day). This work emphasizes issues that can aid development of future guidelines for application of BEM and meter data to heat-wave scenarios.
The Arroyo Seco Improvement Program is being carried out at Sandia National Laboratories, California, in order to address erosion and other streambed instability issues in the Arroyo Seco as it crosses the site. The work involves both repair of existing eroded areas and habitat enhancement. This work is being carried out under the requirements of Army Corps of Engineers permit 2006-400195S and California Regional Water Quality Control Board, San Francisco Bay Region Water Quality Certification Site No. 02-01-C0987.
Neuromorphic computing with spintronic devices has been of interest due to the limitations of CMOS-driven von Neumann computing. Domain wall-magnetic tunnel junction (DW-MTJ) devices have been shown to be able to intrinsically capture biological neuron behavior. Edgy-relaxed behavior, where a frequently firing neuron experiences a lower action potential threshold, may provide additional artificial neuronal functionality when executing repeated tasks. In this letter, we demonstrate that this behavior can be implemented in DW-MTJ artificial neurons via three alternative mechanisms: shape anisotropy, magnetic field, and current-driven soft reset. Using micromagnetics and analytical device modeling to classify the Optdigits handwritten digit dataset, we show that edgy-relaxed behavior improves both classification accuracy and classification rate for ordered datasets while sacrificing little to no accuracy for a randomized dataset. This letter establishes methods by which artificial spintronic neurons can be flexibly adapted to datasets.
Fuel-lean combustion using late injection during the compression stroke can result in increased soot emissions due to excessive wall-wetting and locally unfavorable air-fuel mixtures caused by spray collapse. Multi-hole injectors, which are the most commonly used, can worsen both problems when their sprays collapse. Hence, it is of interest to study the contribution of spray collapse to wall-wetting to understand how it can be avoided. This optical-engine study reveals spray characteristics and the associated wall-wetting for collapsing and non-collapsing sprays when systematically changing the intake pressure, injection duration, and timing. High-speed imaging of Mie-scattered light was used to observe changes in the spray structure, and a refractive index matching (RIM) technique was utilized to detect and quantify the area of fuel-film patterns on the bottom of the piston bowl. E30 (gasoline blended with 30% ethanol by volume) was used throughout the experiments. E30 is known to be more susceptible to spray collapse, and the high heat of vaporization of ethanol tends to exacerbate fuel-film formation. These experimental results highlight the impact of in-cylinder ambient conditions on spray morphology and the influence of spray behavior on fuel films. Analysis of the spray images reveals that spray collapse is a strong function of in-cylinder density and its evolution, in spite of the changes in in-cylinder pressure, temperature, and flow at the operating condition used in this study. This explains similarities in the degree of spray collapse and resultant wall-wetting from various injection timings and intake pressures. It is also found that at operating conditions where the spray undergoes a transition from non-collapsing to collapsing during an injection event, both the fuel-film area and the variability in the fuel-film pattern increased.
In this study, we present spectral equivalence results for higher-order tensor product edge-, face- and interior-based finite elements. Specifically, we show for certain choices of shape functions that the mass and stiffness matrices of the higher-order elements are spectrally equivalent to those for an assembly of lowest-order elements on the associated Gauss-Lobatto-Legendre mesh. Based on this equivalence, efficient preconditioners can be designed with favorable computational complexity. Numerical results are presented which confirm the theory and demonstrate the benefits of the equivalence results for overlapping Schwarz preconditioners.
PixelStorm is a software application for displaying native high-performance applications from remote cloud environments. It is tailored for remote sensing missions that require high framerates, high resolutions, and minimal loss of quality. PixelStorm utilizes hardware-accelerated video compression on graphics processing units and a Sandia-developed streaming network protocol. Using our architecture, we can demonstrate interactive native applications running across two 4K monitors at 60 frames per second while maintaining the visual fidelity required by our missions. This technology allows for the migration of mission critical desktop applications to cloud environments.
This report presents the results of a collaborative effort under the Verification, Validation, and Uncertainty Quantification (VVUQ) thrust area of the North American Energy Resilience Model (NAERM) program. The goal of the effort described in this report was to integrate the Dakota software with the NAERM software framework to demonstrate sensitivity analysis of a co-simulation for NAERM.
The intentional spread of disinformation is not a new challenge for the scientific world. We have seen it perpetuate the idea of a flat earth, convince communities that vaccines are more dangerous than helpful, and even suggest a connection between the “5G” communication infrastructure and COVID-19. Nor is disinformation a new phenomenon in the weapons inspection arena. Weapons inspectors themselves are often forced to sift through alternative narratives of events and inconsistent reporting, and they regularly see their credibility and conclusions questioned in the face of government politics or public biases. But certain recent disinformation campaigns have become so overwhelmingly comprehensive and effective that they constitute a new kind of threat. By preventing accountability for clear violations of international law, these campaigns have created a challenge to the survival of arms control treaties themselves. If weapons inspectors cannot regain the trust of the international community in the face of this challenge, it will be increasingly difficult to ensure compliance with arms control and disarmament treaties going forward. In this essay, I will briefly discuss one of the most comprehensive disinformation efforts of the past decade: the disinformation campaign used to prevent accountability for Syria's repeated use of chemical weapons. After this discussion, I will propose one possible approach to help protect the credibility of disarmament experts and weapons inspectors in the face of pervasive disinformation. This approach will require a concerted effort to connect and support compliance experts and to understand and explain their expertise across cultural, political, national, economic, and religious divides.
Two surface chemical explosive tests were observed for the Large Surface Explosion Coupling Experiment (LSECE) at the Nevada National Security Site in October 2020. The tests consisted of two one-ton explosions, one occurring before dawn and one occurring mid-afternoon. LSECE was performed in the same location as previous underground tests and aimed to explore the relationship between surface and underground explosions in support of global nonproliferation efforts. Several pieces of remote sensing equipment were deployed from a trailer 2.02 km from ground zero, including high-speed cameras, radiometers, and a spectrometer. The data collected from these tests will increase knowledge of large surface chemical explosive signatures.
The purpose of this paper is to study a Helmholtz problem with a spectral fractional Laplacian, instead of the standard Laplacian. Recently, it has been established that such a fractional Helmholtz problem better captures the underlying behavior in geophysical electromagnetics. We establish the well-posedness and regularity of this problem. We introduce a hybrid spectral-finite element approach to discretize it and show well-posedness of the discrete system. In addition, we derive a priori discretization error estimates. Finally, we introduce an efficient solver that scales as well as the best possible solver for the classical integer-order Helmholtz equation. We conclude with several illustrative examples that confirm our theoretical findings.
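One common way to write such a problem, with the spectral fractional Laplacian defined through the Dirichlet eigenpairs of the standard Laplacian, is shown below; the signs and coefficients are illustrative rather than a quotation of the paper's exact formulation:

$$ (-\Delta)^s u - k^2 u = f \ \ \text{in } \Omega,\qquad (-\Delta)^s u := \sum_{j=1}^{\infty} \lambda_j^{\,s}\,(u,\varphi_j)_{L^2(\Omega)}\,\varphi_j,\qquad 0<s<1 $$

where (λ_j, φ_j) are the Dirichlet eigenpairs of -Δ on Ω; setting s = 1 recovers the classical Helmholtz operator.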
This paper offers new insights into a partial fuel stratification (PFS) combustion strategy that has proven to be effective at stabilizing overall lean combustion in direct injection spark ignition engines. To this aim, high spatial and temporal resolution optical diagnostics were applied in an optically accessible engine working in PFS mode for two fuels and two different durations of pilot injection at the time of spark: 210 μs and 330 μs for E30 (gasoline blended with ethanol by 30% volume fraction) and gasoline, respectively. In both conditions, early injections during the intake stroke were used to generate a well-mixed lean background. The results were compared to rich, stoichiometric and lean well-mixed combustion with different spark timings. In the PFS combustion process, it was possible to detect a non-spherical and highly wrinkled blue flame, coupled with yellow diffusive flames due to the combustion of rich zones near the spark plug. The initial flame spread for both PFS cases was faster compared to any of the well-mixed cases (lean, stoichiometric and rich), suggesting that the flame propagation for PFS is enhanced by both enrichment and enhanced local turbulence caused by the pilot injection. Different spray evolutions for the two pilot injection durations were found to strongly influence the flame kernel inception and propagation. PFS with pilot durations of 210 μs and 330 μs showed some differences in terms of shapes of the flame front and in terms of extension of diffusive flames. Yet, both cases were highly repeatable.
Falling particle receivers (FPRs) are being studied in concentrating solar power applications to enable high temperatures for supercritical CO2 (sCO2) Brayton power cycles. The falling particles are introduced into the cavity receiver via a linearly actuated slide gate and irradiated by concentrated sunlight. The thickness of the particle curtain associated with the slide-gate opening dimension dictates the mass flow rate of the particle curtain. A thicker, higher mass flow rate particle curtain would typically be associated with a smaller temperature rise through the receiver, and a thinner, lower mass flow rate particle curtain would result in a larger temperature rise. Using the receiver outlet temperature as the process variable and the linearly actuated slide gate as the input parameter, a proportional-integral-derivative (PID) controller was implemented to control the temperature of the particles leaving the receiver. The PID parameters were tuned to respond in a quick and stable manner. The PID-controlled slide gate was tested using the 1 MW receiver at the National Solar Thermal Test Facility (NSTTF). The receiver outlet temperature was ramped from ambient to 800°C and then maintained at the setpoint temperature. After reaching steady state, perturbations of 15%-20% of the initial power were applied by removing heliostats to simulate passing clouds. The PID controller reacted to the change in the input power by adjusting the mass flow rate through the receiver to maintain a constant receiver outlet temperature. A goal of ±2σ ≤ 10°C in the outlet temperature for the 5 minutes following the perturbation was achieved.
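The control logic is the textbook discrete PID loop; the sketch below shows that structure with a toy first-order plant standing in for the receiver outlet-temperature response. All gains, limits, and plant constants are made-up illustrative values, not the tuned G3P3 parameters.

```python
class PID:
    """Textbook discrete PID with output clamping and a simple anti-windup rule."""
    def __init__(self, kp, ki, kd, dt, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, error):
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        trial = self.kp * error + self.ki * (self.integral + error * self.dt) + self.kd * deriv
        if self.u_min <= trial <= self.u_max:
            self.integral += error * self.dt        # freeze the integral while saturated
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        return min(max(u, self.u_min), self.u_max)

# Toy plant: outlet temperature relaxes toward a value set by the slide-gate opening
# (opening the gate admits more particle flow and lowers the outlet temperature).
T, setpoint, dt = 25.0, 800.0, 1.0
pid = PID(kp=0.002, ki=0.0005, kd=0.0, dt=dt)
for _ in range(3600):
    # Reverse-acting process: when T exceeds the setpoint, open the gate further.
    opening = pid.update(T - setpoint)
    T += dt * 0.02 * ((900.0 - 500.0 * opening) - T)
print(round(T, 1))   # settles near the 800 degC setpoint
```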
Falling particle receiver (FPR) systems are a rapidly developing technology for concentrating solar power applications. Solid particles are used as both the heat transfer fluid and the system thermal energy storage media. Through the direct irradiation of the solid particles, the flux and temperature limitations of tube-bundle receivers can be overcome, leading to higher operating temperatures and energy conversion efficiencies. Candidate particles for FPR systems must be resistant to changes in optical properties during long-term exposure to high temperatures and thermal cycling under highly concentrated solar irradiance. Five candidate particles were tested using simulated solar flux cycling and tube furnace thermal aging: CARBOBEAD HSP 40/70, CARBOBEAD CP 40/100, and three novel particles, CARBOBEAD MAX HD 35, CARBOBEAD HD 350, and WanLi Diamond Black. Each particle candidate was exposed for 10 000 cycles (simulating the exposure of a 30-year lifetime) using a shutter to attenuate the solar simulator flux. Feedback from a pyrometer temperature measurement of the irradiated particle surface was used to control the maximum temperatures of 775 °C and 975 °C. Particle solar-weighted absorptivity and emissivity were measured at 2000-cycle intervals. Particle thermal degradation was also studied by heating particles to 800 °C, 900 °C, and 1000 °C for 300 hours in a tube furnace purged with bottled unpurified air. Here, particle absorptivity and emissivity were measured at 100-hour intervals. Measurements taken after irradiance cycling and thermal aging were compared to measurements taken from as-received particles. WanLi Diamond Black particles had the highest initial value of solar-weighted absorptance, 96%, but degraded by up to 4% in irradiance cycling and 6% in thermal aging. CARBOBEAD HSP 40/70 particles, currently in use in the prototype FPR at the National Solar Thermal Test Facility, had an initial value of 95% solar absorptance with up to a 1% drop after irradiance cycling and a 4% drop after 1000 °C thermal aging.
Solving dense systems of linear equations is essential in applications encountered in physics, mathematics, and engineering. This paper describes our current efforts toward the development of the ADELUS package for current and next-generation distributed, accelerator-based, high-performance computing platforms. The package solves dense linear systems using partial-pivoting LU factorization on distributed-memory systems with CPUs/GPUs. The matrix is block-mapped onto distributed memory on CPUs/GPUs and is solved as if it were torus-wrapped for an optimal balance of computation and communication. A permutation operation is performed to restore the results so that the torus-wrap distribution is transparent to the user. This package targets performance portability by leveraging the abstractions provided in the Kokkos and Kokkos Kernels libraries. Comparison of the performance gains versus the state-of-the-art SLATE and DPLASMA GESV functionalities on the Summit supercomputer is provided. Preliminary performance results from large-scale electromagnetic simulations using ADELUS are also presented. The solver achieves 7.7 Petaflops on 7600 GPUs of the Sierra supercomputer, translating to 16.9% efficiency.
The spatial and temporal locations of autoignition depend on fuel chemistry and the temperature, pressure, and mixing trajectories in the fuel jets. Dual-fuel systems can provide insight into fuel-chemistry aspects through variation of the proportions of fuels with different reactivities, and engine operating condition variations can provide information on physical effects. In this context, the spatial and temporal progression of two-stage autoignition of a diesel-fuel surrogate, n-heptane, in a lean-premixed charge of synthetic natural gas (NG) and air is imaged in an optically accessible heavy-duty diesel engine. The lean-premixed charge of NG is prepared by fumigation upstream of the engine intake manifold. Optical diagnostics include: infrared (IR) imaging for quantifying both the in-cylinder NG concentration and the pilot-jet penetration rate and spreading angle, high-speed cool-flame chemiluminescence imaging as an indicator of low-temperature heat release (LTHR), and high-speed OH* chemiluminescence imaging as an indicator of high-temperature heat release (HTHR). To aid interpretation of the experimental observations, zero-dimensional chemical kinetics simulations provide further understanding of the underlying interplay between the physical and chemical processes of mixing (pilot fuel-jet entrainment) and autoignition (two-stage ignition chemistry). Increasing the premixed NG concentration prolongs the ignition delay of the pilot fuel and increases the combustion duration. Due to the relatively short pilot-fuel injections utilized, the transient increase in entrainment near the end of injection (entrainment wave) plays an important role in mixing. To achieve desired combustion characteristics, i.e., ignition and combustion timing (e.g., for combustion phasing) and location (e.g., for reducing wall heat transfer or tailoring charge stratification), injection parameters can be suitably selected to yield the necessary mixing trajectories that potentially help offset changes in fuel ignition chemistry, which could be a valuable tool for combustion design.
We analyze the coupling into a slotted cylindrical cavity operating at fundamental cavity modal frequencies overlapping with the slot’s first resonance frequency through an unmatched formulation that accounts for the slot’s absorption and radiation processes. The model is validated through full-wave simulations and experimental data. We then couple the unmatched formulation to a perturbation theory model to investigate an absorber within the cavity to reduce the interior field strength, also validated with full-wave simulations and experiments. These models are pivotal to understanding the physical processes involved in the electromagnetic penetration through slots, and may constitute design tools to mitigate electromagnetic interference effects within cavities.
The National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories is conducting research on a Generation 3 Particle Pilot Plant (G3P3) that uses falling sandlike particles as the heat transfer medium. G3P3 proposes a system with 6 MWh of thermal energy storage in cylindrical steel bins that will be insulated internally using multiple layers of refractory materials [1]. The refractory materials can be applied by stacking pre-cast panels in a cylindrical arrangement or by spraying refractory slurry onto the walls (shotcrete). A study of the two methods determined that shotcrete would be the preferred approach in order to minimize geometric tolerance issues in the pre-cast panels, improve repairability, and more closely resemble commercial-scale construction methods. Testing and analysis were conducted which showed that shotcrete refractories could be applied with minimal damage and acceptable heat loss.
This paper describes the development of a facility for evaluating the performance of small-scale particle-to-sCO2 heat exchangers, which includes an isobaric sCO2 flow loop and an electrically heated particle flow loop. The particle flow loop is capable of delivering up to 60 kW of heat at a temperature of 600°C and flow rate of 0.4 kg/s. The loop was developed to facilitate long duration off-sun testing of small prototype heat exchangers to produce model validation data at steady-state operating conditions. Lessons learned on instrumentation, control, and system integration from prior testing of larger heat exchangers with solar thermal input were used to guide the design of the test facility. In addition, the development and testing of a novel 20-kWt moving packed-bed particle-to-sCO2 heat exchanger using the integrated flow loops is reported. The prototype heat exchanger implements many novel features for increasing thermal performance and reducing pressure drop which include integral porting of the sCO2 flow, unique bond/braze manufacturing, narrow plate spacing, and pure counter-flow arrangement. The experimental data collected for the prototype heat exchanger was compared to model predictions to verify the sizing, thermal performance, and pressure drop which will be extended to multi-megawatt heat exchanger designs in the future.
IS&T International Symposium on Electronic Imaging Science and Technology
Livingston, Mark A.; Matzen, Laura E.; Brock, Derek; Harrison, Andre; Decker, Jonathan W.
Expert advice and conventional wisdom say that important information within a statistical graph should be more salient than the other components. If readers are able to find relevant information quickly, in theory, they should perform better on corresponding response tasks. To our knowledge, this premise has not been thoroughly tested. We designed two types of salient cues to draw attention to task-relevant information within statistical graphs. One type primarily relied on text labels and the other on color highlights. The utility of these manipulations was assessed with groups of questions that varied from easy to hard. We found main effects from the use of our salient cues. Error and response time were reduced, and the portion of eye fixations near the key information increased. An interaction between the cues and the difficulty of the questions was also observed. In addition, participants were given a baseline skills test, and we report the corresponding effects. We discuss our experimental design, our results, and implications for future work with salience in statistical graphs.
The Online Waste Library (OWL) provides a consolidated source of information on Department of Energy-managed radioactive waste likely to require deep geologic disposal. With the release of OWL Version 1.0 in fiscal year 2019 (FY2019), much of the FY2020 work involved developing the OWL change control process and the OWL release process. These two processes (in draft form) were put into use for OWL Version 2.0, which was released in early FY2021. With the knowledge gained, the OWL team refined and documented the two processes in two separate reports. This report focuses on the change control process and discusses the following: (1) definitions and system components; (2) roles and responsibilities; (3) origin of changes; (4) the change control process including the Change List, Task List, activity categories, implementation examples, and checking and review; and (5) the role of the release process in ensuring changes in the Change List are incorporated into a public release.
The Online Waste Library (OWL) provides one consolidated source of information on Department of Energy-managed wastes likely to require deep geologic disposal. With the release of OWL Version 1.0 in fiscal year (FY) 2019, much of the FY2020 work involved developing the OWL change control process and the OWL release process. These two processes (in draft form) were put into use for OWL Version 2.0, which was released in early FY2021. With the knowledge gained, the OWL team refined and documented the two processes in two separate reports. This report addresses the release process starting with a definition of release management in Section 2. Section 3 describes the Information Technology Infrastructure Library (ITIL) framework, part of which includes the three different environments used for release management. Section 4 presents the OWL components existing in the different environments and provides details on the release schedule and procedures.
The 30 cm drop is the remaining NRC normal conditions of transport (NCT) regulatory requirement (10 CFR 71.71) for which there are no data on the response of spent fuel. While obtaining data on the spent fuel is not a direct requirement, it allows for quantifying the risk of fuel breakage resulting from a cask drop from a height of 30 cm or less. Because a full-scale cask and impact limiters are very expensive, three consecutive drop tests were conducted to obtain strains on a full-scale surrogate 17x17 PWR assembly. The first step was a 30 cm drop of a 1/3-scale cask loaded with dummy assemblies. The second step was a 30 cm drop test of a full-scale dummy assembly. The third step was a 30 cm drop of a full-scale surrogate assembly. The results of this final test, conducted in May 2020, are presented in this paper. The acceleration pulses on the surrogate assembly were in good agreement with the expected pulses derived from steps 1 and 2. This confirmed that during the 30 cm drop the surrogate assembly experienced the same conditions as it would have if it had been dropped in a full-scale cask with impact limiters. The surrogate assembly was instrumented with 27 strain gauges. Pressure paper was inserted between the rods within the two long and two short spacer grid spans in order to register the pressure in case of rod-to-rod contact. The maximum observed peak strain on the surrogate assembly was 1,724 microstrain at the bottom end of the assembly. The pressure paper sheets from the two short spans were blank. The pressure paper sheets from the two long spans, except a few middle ones, showed marks indicating rod-to-rod contact. The maximum estimated contact pressure was 4,100 psi. The longitudinal bending stress corresponding to the maximum observed strain value (calculated from the stress-strain curve for low-burnup cladding) was 22,230 psi. Both values are significantly below the yield strength of the cladding. The major conclusion is that the fuel rods will maintain their integrity following a 30 cm drop inside of a transportation cask.
In this project, ceramic encapsulation materials were studied for high-temperature (>~500 °C) applications where typical polymer encapsulants are unstable. A new low-temperature (<~200 °C) method of processing ceramics, the cold sintering process, was examined. Additionally, commercially available high-temperature ceramic cements were investigated. In both cases, the mechanical strengths of available materials are less than desired (i.e., desired strengths similar to Si3N4), limiting applicability. Composite designs to increase mechanical strength are suggested. Additionally, non-uniformities in stresses and densification while embedding alumina sheets in encapsulants via cold sintering with uniaxial pressing led to fracture of the sheets, and an alternative isostatic pressing approach is recommended for future studies.
Sandia National Laboratories (SNL) conducted an independent assessment of three different certified N95 respirators for the State of New Mexico Department of Homeland Security and Emergency Management. The testing conducted under this effort mimicked traditional NIOSH certification testing methodologies, where possible (NIOSH 2019). This included the use of a commercially available off-the-shelf (COTS) instrument typically used in industry for N95 respirator certification (ATI 2018). The COTS system, an Air Techniques International 100Xs automated filter tester, was used for all the testing reported in this document. It is important to note that SNL is NOT a certification laboratory, and all quantitative results are for informational purposes only. Additional technical information on N95-related efforts conducted by this team may be found in Omana et al. (2020a), Omana et al. (2020b), and Celina et al. (2020).
Digital Instrumentation and Control Systems (ICSs) have replaced analog control systems in nuclear power plants, raising cybersecurity concerns. To study and understand the cybersecurity risks of nuclear power plants, both high-fidelity models of the plant physics and controllers must be created and a framework to test and evaluate cybersecurity events must be established. A testing and evaluation framework for cybersecurity events consists of a method of interfering with control systems, a simulation of the plant network, and a network packet capture and recording tool. Sandia National Labs (SNL), in collaboration with the University of New Mexico’s Institute for Space and Nuclear Power Studies (UNM-ISNPS), is developing such a cybersecurity testing framework.
The accurate construction of a surrogate model is an effective and efficient strategy for performing Uncertainty Quantification (UQ) analyses of expensive and complex engineering systems. Surrogate models are especially powerful whenever the UQ analysis requires the computation of statistics which are difficult and prohibitively expensive to obtain via direct sampling of the model, e.g., high-order moments and probability density functions. In this paper, we discuss the construction of a polynomial chaos expansion (PCE) surrogate model for radiation transport problems for which quantities of interest are obtained via Monte Carlo simulations. In this context, it is imperative to account for the statistical variability of the simulator as well as the variability associated with the uncertain parameter inputs. More formally, in this paper we focus on understanding the impact of the Monte Carlo transport variability on the recovery of the PCE coefficients. We are able to identify the contribution of both the number of uncertain parameter samples and the number of particle histories simulated per sample to the PCE coefficient recovery. Our theoretical results indicate an accuracy improvement when using fewer Monte Carlo histories per random sample (and correspondingly more parameter samples) relative to configurations with an equivalent computational cost. These theoretical results are numerically illustrated for a simple synthetic example and two configurations of a one-dimensional radiation transport problem in which a slab is represented by means of materials with uncertain cross sections.
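The sampling trade-off described above can be illustrated with a minimal synthetic sketch (not the authors' code): each evaluation of a stand-in "transport" quantity of interest carries Monte Carlo noise whose variance scales like 1/N_histories, and the Legendre PCE coefficients are recovered by least squares under two cost-equivalent splits of the total particle-history budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_qoi(xi, n_histories):
    """Stand-in 'transport' QoI: smooth true response plus Monte Carlo noise
    whose variance shrinks like 1/n_histories (hypothetical noise model)."""
    true = np.exp(0.5 * xi)
    return true + rng.normal(0.0, 1.0 / np.sqrt(n_histories), size=xi.shape)

def fit_pce(order, n_samples, n_histories):
    """Least-squares recovery of Legendre PCE coefficients for xi ~ U(-1, 1)."""
    xi = rng.uniform(-1.0, 1.0, n_samples)
    y = noisy_qoi(xi, n_histories)
    V = np.polynomial.legendre.legvander(xi, order)   # design matrix of Legendre polynomials
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    return coef

# Equal total cost (n_samples * n_histories), split two different ways.
budget = 100_000
few_hist = fit_pce(order=4, n_samples=1_000, n_histories=budget // 1_000)
many_hist = fit_pce(order=4, n_samples=100, n_histories=budget // 100)
print("many samples / few histories :", np.round(few_hist, 3))
print("few samples / many histories :", np.round(many_hist, 3))
```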
This report describes the high-level accomplishments from the Plasma Science and Engineering Grand Challenge LDRD at Sandia National Laboratories. The Laboratory has a need to demonstrate predictive capabilities to model plasma phenomena in order to rapidly accelerate engineering development in several mission areas. The purpose of this Grand Challenge LDRD was to advance the fundamental models, methods, and algorithms, along with a supporting electrode science foundation, to enable a revolutionary shift towards predictive plasma engineering design principles. This project integrated the SNL knowledge base in computer science, plasma physics, materials science, applied mathematics, and relevant application engineering to establish new cross-laboratory collaborations on these topics. As an initial exemplar, this project focused efforts on improving multi-scale modeling capabilities that are utilized to predict the electrical power delivery on large-scale pulsed power accelerators. Specifically, this LDRD was structured into three primary research thrusts that, when integrated, enable complex simulations of these devices: (1) the exploration of multi-scale models describing the desorption of contaminants from pulsed power electrodes, (2) the development of improved algorithms and code technologies to treat the multi-physics phenomena required to predict device performance, and (3) the creation of a rigorous verification and validation infrastructure to evaluate the codes and models across a range of challenge problems. These components were integrated into initial demonstrations of the largest simulations of multi-level vacuum power flow completed to date, executed on the leading HPC machines available in the NNSA complex today. These preliminary studies indicate that relevant pulsed power engineering design simulations can now be completed in on the order of several days, a significant improvement over pre-LDRD levels of performance.
Numerical simulations of Greenland and Antarctic ice sheets involve the solution of large-scale highly nonlinear systems of equations on complex shallow geometries. This work is concerned with the construction of Schwarz preconditioners for the solution of the associated tangent problems, which are challenging for solvers mainly because of the strong anisotropy of the meshes and wildly changing boundary conditions that can lead to poorly constrained problems on large portions of the domain. Here, two-level GDSW (Generalized Dryja–Smith–Widlund) type Schwarz preconditioners are applied to different land ice problems, i.e., a velocity problem, a temperature problem, as well as the coupling of the former two problems. We employ the MPI-parallel implementation of multi-level Schwarz preconditioners provided by the package FROSch (Fast and Robust Schwarz) from the Trilinos library. The strength of the proposed preconditioner is that it yields out-of-the-box scalable and robust preconditioners for the single-physics problems. To our knowledge, this is the first time two-level Schwarz preconditioners have been applied to the ice sheet problem and a scalable preconditioner has been used for the coupled problem. The preconditioner for the coupled problem differs from previous monolithic GDSW preconditioners in the sense that decoupled extension operators are used to compute the values in the interior of the subdomains. Several approaches for improving the performance, such as reuse strategies and shared-memory OpenMP parallelization, are explored as well. In our numerical study we consider both uniform meshes of varying resolution for the Antarctic ice sheet and non-uniform meshes for the Greenland ice sheet. We present several weak and strong scaling studies confirming the robustness of the approach and the parallel scalability of the FROSch implementation. Among the highlights of the numerical results are a weak scaling study for up to 32K processor cores (8K MPI ranks and 4 OpenMP threads) and 566M degrees of freedom for the velocity problem as well as a strong scaling study for up to 4K processor cores (and MPI ranks) and 68M degrees of freedom for the coupled problem.
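For readers unfamiliar with the approach, the sketch below shows a toy two-level additive Schwarz preconditioner for a 1D Laplacian, using overlapping subdomain solves plus a simple piecewise-constant coarse space (a stand-in for the GDSW coarse space; FROSch, Trilinos, and the ice sheet physics are not represented):

```python
import numpy as np

n, n_sub, overlap = 400, 8, 4
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian, Dirichlet BCs
b = np.ones(n)

size = n // n_sub
subsets = [np.arange(max(i * size - overlap, 0), min((i + 1) * size + overlap, n))
           for i in range(n_sub)]

# Coarse space: one piecewise-constant basis function per (non-overlapping) subdomain.
Phi = np.zeros((n, n_sub))
for i in range(n_sub):
    Phi[i * size:(i + 1) * size, i] = 1.0
A_c = Phi.T @ A @ Phi

def precond(r):
    """Two-level additive Schwarz: local Dirichlet solves plus a coarse correction."""
    z = np.zeros_like(r)
    for s in subsets:
        z[s] += np.linalg.solve(A[np.ix_(s, s)], r[s])
    return z + Phi @ np.linalg.solve(A_c, Phi.T @ r)

def pcg(A, b, M, tol=1e-8, maxiter=500):
    """Preconditioned conjugate gradients; returns the iteration count."""
    x, r = np.zeros_like(b), b.copy()
    z = M(r); p = z.copy(); rz = r @ z
    for k in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p; r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return k
        z = M(r); rz_new = r @ z
        p = z + (rz_new / rz) * p; rz = rz_new
    return maxiter

print("no preconditioner :", pcg(A, b, M=lambda r: r))
print("two-level Schwarz :", pcg(A, b, M=precond))
```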
Objectives of the project include: (1) enable the use of high-strength steel hydrogen pipelines, as significant cost savings can result from implementing high-strength steels compared to lower-strength pipes; (2) demonstrate that girth welds in high-strength steel pipe exhibit fatigue performance similar to lower-strength steels in high-pressure hydrogen gas; and (3) identify pathways for developing high-strength pipeline steels by establishing the relationship between microstructure constituents and hydrogen-accelerated fatigue crack growth (HA-FCG).
The supercritical carbon dioxide (sCO2) Brayton cycle is a promising candidate for future nuclear reactors due to its ability to improve power cycle energy conversion efficiency. The sCO2 Brayton cycle can operate with an efficiency of 45-50% at operating temperatures of 550-700 °C. One of the greatest hurdles currently faced by sCO2 Brayton cycles is the corrosivity of sCO2 and the lack of long-term alloy corrosion and mechanical performance data, as these will be key to enhancing the longevity of the system, and thus the levelized cost of electricity. Past studies have shown that sCO2 corrosion occurs through the formation of metal carbonates, oxide layers, and carburization, and alloys with Cr, Mo, and Ni generally exhibit less corrosion. While stainless steels may offer sufficient corrosion resistance at the lower range of temperatures seen by sCO2 Brayton cycles, more expensive nickel-based alloys are typically needed for the higher-temperature regions. This study investigates the effects of corrosion on the Haynes 230 alloy, with a preliminary view of changes in the mechanical properties. High-temperature CO2 is used for this study as the corrosion products are similar to those of supercritical CO2, allowing for an estimation of the susceptibility toward corrosion without the need for high-pressure experimentation.
This study investigates the issues and challenges surrounding energy storage project and portfolio valuation and provides insights into improving visibility into the process for developers, capital providers, and customers so they can make more informed choices. Energy storage project valuation methodology is typical of power sector projects, evaluating various revenue and cost assumptions in a project economic model. The difference is that energy storage projects have many more design and operational variables to incorporate, and the governing market rules that control these variables are still evolving. This makes project valuation for energy storage more difficult. As the number of operating projects grows, portfolios of these projects are being developed, garnering the interest of larger investors. Valuation of these portfolios can be even more challenging, as market-role and geographical diversity can actually exacerbate the variability rather than mitigate it. By promoting additional visibility into key factors and drivers for industry participants, the US DOE can reduce investment risk, expanding both the number and types of investors and helping emerging energy storage technologies reach sustained commercialization.
Since grid energy storage is still evolving rapidly, it is often difficult to obtain project-specific capital costs for various energy storage technologies. This information is necessary to evaluate the profitability of the facility, as well as to compare different energy storage technology options. The goal of this report is to summarize energy storage capital costs that were obtained from industry pricing surveys. The methodology breaks down the cost of an energy storage system into the following component categories: the storage module; the balance of system; the power conversion system; the energy management system; and the engineering, procurement, and construction costs. By evaluating each of the different component costs separately, a synthetic system cost can be developed that provides internal pricing consistency between different project sizes using the same technology, as well as between different technologies that utilize similar components.
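As a simple illustration of this bottoms-up costing approach (with purely hypothetical prices, not values from the survey data), the component categories priced per unit of energy and per unit of power can be rolled up into a synthetic system cost:

```python
# Hypothetical per-category prices for a 10 MW / 40 MWh system (illustrative numbers only).
components = {
    "storage module ($/kWh)": 200.0,
    "balance of system ($/kWh)": 40.0,
    "power conversion system ($/kW)": 90.0,
    "energy management system ($/kW)": 15.0,
    "EPC ($/kWh)": 60.0,
}
power_kw, energy_kwh = 10_000, 40_000
per_kwh = sum(v for k, v in components.items() if "$/kWh" in k)   # energy-scaled categories
per_kw = sum(v for k, v in components.items() if "$/kW)" in k)    # power-scaled categories
total = per_kwh * energy_kwh + per_kw * power_kw
print(f"synthetic system cost: ${total:,.0f} (${total / energy_kwh:.0f}/kWh)")
```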
This report presents a framework to evaluate the impact of a high-altitude electromagnetic pulse (HEMP) event on a bulk electric power grid. This report limits itself to modeling the impact of EMP E1 and E3 components. The co-simulation of E1 and E3 is presented in detail, and the focus of the paper is on the framework rather than actual results. This approach is highly conservative as E1 and E3 are not maximized with the same event characteristics and may only slightly overlap. The actual results shown in this report are based on a synthetic grid with synthetic data and a limited exemplary EMP model. The framework presented can be leveraged and used to analyze the impact of other threat scenarios, both manmade and natural disasters. This report describes a Monte Carlo based methodology to probabilistically quantify the transient response of the power grid to a HEMP event. The approach uses multiple fundamental steps to characterize the system response to HEMP events, focused on the E1 and E3 components of the event. 1) Obtain component failure data related to HEMP events by testing components and creating component failure models; use the component failure models to create a component failure conditional probability density function (PDF) that is a function of the HEMP-induced terminal voltage. 2) Model HEMP scenarios and calculate the E1 coupled voltage profiles seen by all system components; model the same HEMP scenarios and calculate the transformer reactive power consumption profiles due to E3. 3) Sample each component failure PDF to determine which grid components will fail, due to the E1 voltage spike, for each scenario. 4) Perform dynamic simulations that incorporate the predicted component failures from E1 and the reactive power consumption at each transformer affected by E3. These simulations allow secondary transients to affect the relays/protection remaining in service, which can lead to cascading outages. 5) Identify the locations and amount of load lost for each scenario through grid dynamic simulation. This can be an indication of the immediate grid impacts from a HEMP event. In addition, perform more detailed analysis to determine critical nodes and system trends. 6) To help assess the longer-term impacts, run a security-constrained alternating current optimal power flow (ACOPF) to maximize critical load served. This report describes a modeling framework to assess the systemic grid impacts due to a HEMP event. This stochastic simulation framework generates a large amount of data for each Monte Carlo replication, including HEMP location and characteristics, relay and component failures, E3 GIC profiles, cascading dynamics including voltage and frequency over time, and the final system state. This data can then be analyzed to identify trends, e.g., unique system behavior modes or critical components whose failure is more likely to cause serious systemic effects. The proposed analysis process is demonstrated on a representative system. In order to draw realistic conclusions about the impact of a HEMP event on the grid, a significant amount of work remains with respect to modeling the impact on various grid components.
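Steps 1-3 of the methodology can be sketched with a small Monte Carlo example; the fragility curve, voltage distribution, and component counts below are hypothetical placeholders, and the downstream dynamic-simulation and ACOPF steps are not represented:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def failure_probability(v_kv, v50=300.0, beta=0.25):
    """Hypothetical lognormal fragility: P(failure | E1-induced terminal voltage in kV)."""
    return norm.cdf(np.log(v_kv / v50) / beta)

def one_replication(n_components=50):
    # Stand-in for step 2: sample the E1 coupled voltage seen by each component.
    v = rng.lognormal(mean=np.log(150.0), sigma=0.6, size=n_components)
    # Step 3: sample each component's failure from its conditional fragility.
    return (rng.random(n_components) < failure_probability(v)).sum()

failures = np.array([one_replication() for _ in range(2000)])
print(f"mean failed components per event: {failures.mean():.1f}, "
      f"95th percentile: {np.percentile(failures, 95):.0f}")
```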
Explosively driven ferroelectric generators (FEG) are used as pulsed power sources in many applications that require a compact design that delivers a short high-voltage and high-current pulse. A mechanical shock applied to ferroelectrics releases bound electrical charge through a combination of piezoelectric, domain reorientation, and phase transformation effects. Lead-zirconate-titanate (PZT) 95/5 lies near the ferroelectric (FE)-antiferroelectric (AF) phase boundary and readily transforms to the AF phase under compression because AF has a smaller unit volume. This makes it a popular choice for FEGs as the FE-AF transformation completely releases all the stored dipole charge. The complexity of piezoelectric, domain reorientation, and phase transformation behaviors under high deviatoric stress makes modeling this FE to AF transformation and the accompanying charge release challenging. The mode and direction of domain reorientation and phase transformation vary significantly with different deviatoric and hydrostatic stress states. Microstructure changes due to domain reorientation and phase transformation alter the piezoelectric properties of the material. Inaccuracies in modeling any one of these phenomena can result in inaccurate electrical response. This work demonstrates a material model that accurately captures the linear piezoelectric, domain reorientation, and phase transformation phenomena by using a micromechanical approach to approximate the changes in domain structure.
Cybersecurity for internet-connected Distributed Energy Resources (DER) is essential for the safe and reliable operation of the US power system. Many facets of DER cybersecurity are currently being investigated within different standards development organizations, research communities, and industry committees to address this critical need. This report covers DER access control guidance compiled by the Access Controls Subgroup of the SunSpec/Sandia DER Cybersecurity Workgroup. The goal of the group was to create a consensus-based technical framework to minimize the risk of unauthorized access to DER systems. The subgroup set out to define a strict control environment where users are authorized to access DER monitoring and control features through three steps: (a) the user is identified using a proof-of-identity, (b) the user is authenticated against a managed database, and (c) the user is authorized for a specific level of access. DER access control also provides accountability and nonrepudiation within the power system control environment that can be used for forensic analysis and attribution in the event of a cyber-attack. This paper covers foundational requirements for a DER access control environment as well as offering a collection of possible policy, model, and mechanism implementation approaches for IEEE 1547-mandated communication protocols.
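A minimal sketch of the identify / authenticate / authorize sequence is shown below; the user database, credentials, access levels, and action names are all hypothetical and are not drawn from the report or from IEEE 1547:

```python
import hashlib
import hmac
import secrets

# Hypothetical user store: identity -> (salted password hash, authorization level).
_SALT = secrets.token_bytes(16)

def _hash(pw: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), _SALT, 100_000)

USER_DB = {"operator-01": (_hash("control-level-pw"), "control"),
           "viewer-07": (_hash("monitoring-only-pw"), "read")}

def authorize_der_request(identity: str, password: str, requested_action: str) -> bool:
    """(a) identify, (b) authenticate against the managed database, (c) authorize."""
    record = USER_DB.get(identity)                                  # (a) proof-of-identity lookup
    if record is None:
        return False
    stored_hash, level = record
    if not hmac.compare_digest(stored_hash, _hash(password)):       # (b) authenticate
        return False
    allowed = {"read": {"read_telemetry"},
               "control": {"read_telemetry", "set_power_limit"}}[level]
    return requested_action in allowed                              # (c) authorize by access level

print(authorize_der_request("viewer-07", "monitoring-only-pw", "set_power_limit"))  # False
```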
Microchannel heat exchangers have seen increasing adoption in many high-pressure applications in recent decades but are subject to particulate fouling owing to the relatively small channel size compared to traditional designs. Typical cleaning methods require process shutdown, heat exchanger removal, cleaning, then reassembly. The objective of this project was to refine and transfer technology to enable header design improvements for Cleaning-in-Place (CIP), allowing for reduced or negligible process interruption during cleaning. The technology transfer was from Sandia National Laboratories (Sandia) to Vacuum Process Engineering, Inc. (VPE). The primary purpose of CIP was pursued while also considering channel flow uniformity and heat exchanger cost. The project phases were to 1) capture and define potential improvement options, 2) evaluate options with both simulation and experiments, and 3) transfer design knowledge to the industry partner. These efforts resulted in improved header designs from the first known focused effort in this area. The improved designs will help the entire microchannel heat exchanger field, which has applications in supercritical CO2 power cycles, hydrogen (fuel cell) vehicle fueling, liquefied natural gas processing, and more.
Solving classification problems with machine learning often entails laborious manual labeling of test data, requiring valuable time from a subject matter expert (SME). This process can be even more challenging when each sample is multidimensional. In the case of an anomaly detection system, a standard two-class problem, the dataset is likely imbalanced with few anomalous observations and many “normal” observations (e.g., credit card fraud detection). We propose a unique methodology that quickly identifies individual samples for SME tagging while automatically classifying commonly occurring samples as normal. In order to facilitate such a process, the relationships among the dimensions (or features) must be easily understood by both the SME and system architects such that tuning of the system can be readily achieved. The resulting system demonstrates how combining human knowledge with machine learning can create an interpretable classification system with robust performance.
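One simple, interpretable way to realize this triage (a sketch with synthetic data, not the proposed system itself) is to score samples by their Mahalanobis distance from the bulk of the data and route only the most unusual ones to the SME, auto-labeling the rest as normal:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic imbalanced data: many "normal" observations, a few anomalies.
normal = rng.normal(0, 1, size=(5000, 4))
anomalies = rng.normal(4, 1, size=(25, 4))
X = np.vstack([normal, anomalies])

# Score each sample by Mahalanobis distance from the bulk (per-feature contributions stay interpretable).
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.einsum("ij,jk,ik->i", X - mu, cov_inv, X - mu))

# Route only the most unusual samples to the SME; auto-label the rest as normal.
threshold = np.quantile(d, 0.99)
to_sme = np.where(d > threshold)[0]
print(f"{len(to_sme)} of {len(X)} samples flagged for SME review")
```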
This report explores the effects of tubing size reductions on natural gas flow from representative depleted-reservoir underground storage wells and fields using basic models for coupled reservoir and pipe flow. This work was motivated by interest at the U.S. Department of Transportation, Pipeline and Hazardous Materials Safety Administration, in evaluating the effects of tubing and packers as a potential safety upgrade to implement double-barrier systems in existing underground natural gas storage wells. Reservoir and well flow models were developed from widely accepted industry equations, verified against a commercial process simulator model, and validated against field data. The study utilized U.S. operator survey data to provide context and assure that modeling parameters including average deliverability rates for wells and fields, operating pressures, well depths, and storage capacities were all carefully considered to keep the modeling relevant to the known range of U.S. operations. The models generally found that wells and fields with inherently low deliverability were relatively insensitive to reductions in tubing diameter, primarily because the hydraulics in those cases were controlled by reservoir properties. For the high-producing wells and fields, the models found that reducing tubing diameter could produce significant reductions in deliverability, both at the field and well level. When put into context with occurrence data regarding average deliverability of fields and wells, it appears that most wells and most fields across the U.S. would experience deliverability reductions on the low end of what was simulated here, generally below 20%. For fields with moderate to high deliverability, reductions were generally larger, and could reach as high as 60% for the highest-producing wells and fields.
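The qualitative finding, that low-deliverability wells are reservoir-limited while high-deliverability wells are tubing-limited, can be reproduced with a toy coupled model; the backpressure coefficients, pressures, and tubing sizes below are illustrative assumptions (rates in arbitrary units), not parameters or equations from the report:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters only.
p_res = 2000.0       # reservoir pressure, psia
p_wh = 800.0         # wellhead pressure, psia
C, n_exp = 0.05, 0.85  # backpressure deliverability coefficients

def reservoir_inflow(p_wf):
    """Backpressure-style inflow: rate grows as bottomhole pressure drops."""
    return C * max(p_res**2 - p_wf**2, 0.0)**n_exp

def tubing_capacity(p_wf, d_in):
    """Weymouth-style approximation: capacity ~ d^(8/3) * sqrt(p_wf^2 - p_wh^2)."""
    return 0.8 * d_in**(8.0 / 3.0) * np.sqrt(max(p_wf**2 - p_wh**2, 0.0))

def deliverability(d_in):
    # Operating point: bottomhole pressure where reservoir inflow matches tubing capacity.
    f = lambda p_wf: reservoir_inflow(p_wf) - tubing_capacity(p_wf, d_in)
    p_op = brentq(f, p_wh + 1e-6, p_res - 1e-6)
    return reservoir_inflow(p_op)

q_base = deliverability(4.0)       # flow through the original, larger bore
q_small = deliverability(2.875)    # after installing smaller tubing
print(f"deliverability reduction: {100 * (1 - q_small / q_base):.1f}%")
```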
To meet the extreme compute demands for deep learning across commercial and scientific applications, dataflow accelerators are becoming increasingly popular. While these “domain-specific” accelerators are not fully programmable like CPUs and GPUs, they retain varying levels of flexibility with respect to data orchestration, i.e., dataflow and tiling optimizations to enhance efficiency. There are several challenges when designing new algorithms and mapping approaches to execute the algorithms for a target problem on new hardware. Previous works have addressed these challenges individually. To address these challenges as a whole, in this work we present a HW-SW codesign ecosystem for spatial accelerators called Union within the popular MLIR compiler infrastructure. Our framework allows exploring different algorithms and their mappings on several accelerator cost models. Union also includes a plug-and-play library of accelerator cost models and mappers which can easily be extended. The algorithms and accelerator cost models are connected via a novel mapping abstraction that captures the map space of spatial accelerators, which can be systematically pruned based on constraints from the hardware, workload, and mapper. We demonstrate the value of Union for the community with several case studies which examine offloading different tensor operations (CONV/GEMM/Tensor Contraction) on diverse accelerator architectures using different mapping schemes.
In this study, we present spectral equivalence results for high-order tensor product edge- and face-based finite elements for the H(curl) and H(div) function spaces. Specifically, we show for certain choices of shape functions that the mass and stiffness matrices of the high-order elements are spectrally equivalent to those for an assembly of low-order elements on the associated Gauss-Lobatto-Legendre mesh. Based on this equivalence, efficient preconditioners can be designed with favorable computational complexity. Numerical results are presented which confirm the theory and demonstrate the benefits of the equivalence results for overlapping Schwarz preconditioners.
The Quantum Scientific Computing Open User Testbed (QSCOUT) at Sandia National Laboratories is a trapped-ion qubit system designed to evaluate the potential of near-term quantum hardware in scientific computing applications for the U.S. Department of Energy and its Advanced Scientific Computing Research program. Similar to commercially available platforms, it offers quantum hardware that researchers can use to perform quantum algorithms, investigate noise properties unique to quantum systems, and test novel ideas that will be useful for larger and more powerful systems in the future. However, unlike most other quantum computing testbeds, the QSCOUT allows both quantum circuit and low-level pulse control access to study new modes of programming and optimization. The purpose of this article is to provide users and the general community with details of the QSCOUT hardware and its interface, enabling them to take maximum advantage of its capabilities.
The Sodium Chemistry (NAC) package in MELCOR has been developed to enhance application to sodium-cooled fast reactors. The models in the NAC package have been assessed through benchmark analyses. The F7-1 sodium pool fire experimental analysis is conducted within the framework of the U.S.-Japan collaboration under the Civil Nuclear Energy Research and Development Working Group. This study assesses the capability of the improved models proposed for the sodium pool fire in MELCOR through comparison with the F7-1 experimental data and the results of the SPHINCS code developed in Japan. Pool heat transfer, pool oxide layer, and pool spreading models are improved in this study to mitigate the deviations exhibited in the previous study, where the original CONTAIN-LMR models were used: the overestimation of the combustion rate and associated temperature during the initial phase of the sodium fire relative to the experimental data and SPHINCS results, and their underestimation during the later phase. The analytical result of the improved sodium pool fire model agrees well with the experimental data and SPHINCS results over the entire course of the sodium fire. This study illustrates these enhanced capabilities for MELCOR to reliably simulate sodium pool fire events.
Weaver, Wayne W.; Wilson, David G.; Robinett, Rush D.; Young, Joseph
Typical Type-4 wind turbines use DC-link inverters to couple the electrical machine to the power grid. Each wind turbine has two power conversion steps; therefore, an N-turbine farm will have 2N power converters. This work presents a DC bus collection system for a Type-4 wind farm that reduces the overall required number of converters and minimizes the energy storage system (ESS) requirements. This approach requires one conversion step per turbine, one converter for the ESS, and a single grid coupling converter, which leads to N + 2 converters for the wind farm and results in significant cost savings. However, one of the trade-offs for a DC collection system is the need for increased energy storage to filter the power variations and improve power quality to the grid. This paper presents a novel approach to an effective DC bus collection system design. The DC collection for the wind farm implements a power phasing control method between turbines that filters the power variations and improves power quality while minimizing the need for added energy storage system hardware. The phasing control takes advantage of a novel power packet network concept with nonlinear power flow control design techniques that guarantees both stability and enhanced dynamic performance. This paper presents the theoretical design of the DC collection and phasing control. To demonstrate the efficacy of this approach, detailed numerical simulation examples are presented.
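The converter-count saving is easy to quantify; the small sketch below simply compares the conventional two-stage-per-turbine arrangement with the proposed N + 2 DC collection described above:

```python
def converter_counts(n_turbines: int) -> dict:
    """Converter count for a conventional AC collection (two conversion steps per turbine)
    versus the DC bus collection (one per turbine, plus one ESS and one grid-tie converter)."""
    return {"conventional (2N)": 2 * n_turbines, "DC collection (N + 2)": n_turbines + 2}

for n in (10, 50, 100):
    print(n, converter_counts(n))
```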
In the Republic of Uganda, it is estimated that nearly 28 million people lack access to clean and safe drinking water [1]. The authors developed a model by performing multiple linear regression using air temperature, irradiance, and wind speed as predictors in two rural areas to determine the suitability of a proposed system to assist. The system consists of a commercial-grade vertical axis wind turbine (VAWT) with embedded solar cells capable of providing water pumping and small-scale electricity generation. A suite of equations was used alongside the regression models to determine how to adjust the mechanical properties of the turbine such that the solar and wind energy function as mutually redundant drivers for both a small-scale electricity generator and a water pumping system. The results are consistent with the following: solar output variation of 20% (from 4.5 to 5.5 W/m2) [2], [3]; wind speeds of 3.7 m/s to 6 m/s required to operate the turbine; and water pumping work of 3748.54 to 42314.72 ft-lbs per hour. The primary researcher for this project has applied for and received a provisional patent to advance the VAWT/SES technology. This year a utility patent was filed to move the complete energy renewal system toward commercialization.
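A minimal sketch of the regression step (with synthetic stand-in measurements, not the Ugandan site data) fits a combined-output response to the three predictors by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-ins for the site measurements (units and coefficients illustrative only).
air_temp = rng.uniform(15, 35, 200)       # deg C
irradiance = rng.uniform(3.5, 6.5, 200)   # insolation proxy
wind_speed = rng.uniform(2, 8, 200)       # m/s
# Hypothetical response: combined pumping/electrical output of the VAWT + solar system.
output = 0.8 * irradiance + 1.5 * wind_speed + 0.05 * air_temp + rng.normal(0, 0.3, 200)

# Multiple linear regression via ordinary least squares.
X = np.column_stack([np.ones_like(air_temp), air_temp, irradiance, wind_speed])
coef, *_ = np.linalg.lstsq(X, output, rcond=None)
print("intercept, temp, irradiance, wind coefficients:", np.round(coef, 3))
```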
Proceedings of the European Wave and Tidal Energy Conference
Beaujean, Pierre P.; Murray, Bryan; Gunawan, Budi G.; Driscoll, Frederick
This paper reports on the development of a self-synchronizing underwater acoustic network developed for remote monitoring of mooring loads in Wave Energy Converters (WECs). This network uses Time Division Multiple Access and operates in a self-contained manner, with the ability for users to remotely transmit commands to the network as needed. Each node is a self-contained unit, consisting of a protocol adaptor board, an FAU-DPAM underwater acoustic modem, and a battery pack. A node can be connected to a load cell, to a topside user, or to the WEC. Every node is swappable. The protocol adaptor board, named Protocol Adaptor for Digital LOad Cell (PADLOC), supports a variety of digital load cell message formats (CAN, MODBUS, custom ASCII) and underwater acoustic modem serial formats. PADLOC enables topside users to connect to separate load cells through a user-specific command. This is especially important if the user is monitoring multiple load cells during deployment or maintenance, when the primary data system may be offline. Each PADLOC board handles formatting and buffering and has a one-to-one serial connection with each pair (node) of a digital load cell and acoustic modem. In addition, each PADLOC board handles the timekeeping and power-saving features for each node. The only limitations are the data bit rate and delay associated with the underwater acoustic modem. A four-node self-synchronizing network has been developed to demonstrate the load cell monitoring capability using the PADLOC technology on the CalWave WEC.
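The TDMA scheduling idea can be sketched as follows; the node names, slot length, and four-node layout are illustrative assumptions rather than the deployed PADLOC configuration:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    slot: int          # TDMA slot index assigned to this node

# Hypothetical 4-node network: three load-cell nodes plus a topside/WEC gateway.
SLOT_S = 5.0           # slot duration, seconds (in practice set by modem data rate and latency)
nodes = [Node("gateway", 0), Node("loadcell-A", 1), Node("loadcell-B", 2), Node("loadcell-C", 3)]

def transmitting_node(t_seconds: float) -> Node:
    """Who owns the channel at time t: slots repeat every len(nodes) * SLOT_S seconds."""
    frame_pos = int(t_seconds // SLOT_S) % len(nodes)
    return next(n for n in nodes if n.slot == frame_pos)

for t in (0, 7, 12, 21):
    print(f"t={t:>4}s -> {transmitting_node(t).name}")
```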
The National Nuclear Security Administration (NNSA) initiated the Minority Serving Institutions Partnership Program (MSIPP) to 1) align investments in university capacity and workforce development with the NNSA mission to develop the needed skills and talent for NNSA’s enduring technical workforce at the laboratories and production plants, and 2) enhance research and education at under-represented colleges and universities. Out of this effort, MSIPP launched a new consortium in early FY17 focused on Tribal Colleges and Universities (TCUs) known as the Advanced Manufacturing Network Initiative (AMNI). This consortium has been extended for FY20 and FY21. The following report summarizes the status update for this quarter.
Both the data science and scientific computing communities are embracing GPU acceleration for their most demanding workloads. For scientific computing applications, the massive volume of code and diversity of hardware platforms at supercomputing centers has motivated a strong effort toward performance portability. This property of a program, denoting its ability to perform well on multiple architectures and varied datasets, is heavily dependent on the choice of parallel programming model and which features of the programming model are used. In this paper, we evaluate performance portability in the context of a data science workload in contrast to a scientific computing workload, evaluating the same sparse matrix kernel on both. Among our implementations of the kernel in different performance-portable programming models, we find that many struggle to consistently achieve performance improvements using the GPU compared to simple one-line OpenMP parallelization on high-end multicore CPUs. We show one that does, and its performance approaches and sometimes even matches that of vendor-provided GPU math libraries.