The explosive BTF (benzotrifuroxan) is an interesting molecule for sub-millimeter studies of initiation and detonation. Because it contains no hydrogen, its detonation products contain no water, resulting in a high temperature in the reaction zone. The material has an impact sensitivity comparable to or less than that of PETN (pentaerythritol tetranitrate) and slightly greater than that of RDX, HMX, and CL-20. Physical vapor deposition (PVD) can be used to grow high-density films of pure explosives with precise control over geometry, and we apply this technique to BTF to study detonation and initiation behavior as a function of sample thickness. The geometrical effects on detonation and corner-turning behavior are studied with the critical detonation thickness experiment and the micromushroom test, respectively. Initiation behavior is studied with the high-throughput initiation experiment. Vapor-deposited films of BTF show detonation failure, corner turning, and initiation consistent with a heterogeneous explosive. Scaling of failure thickness to failure diameter shows that BTF has a very small failure diameter.
Multi-axis testing has become a popular test method because it provides a more realistic simulation of a field environment when compared to traditional vibration testing. However, field data may not be available to derive the multi-axis environment. This means that methods are needed to generate “virtual field data” that can be used in place of measured field data. Transfer path analysis (TPA) has been suggested as a method to do this since it can be used to estimate the excitation forces on a legacy system and then apply these forces to a new system to generate virtual field data. This chapter will provide a review of using TPA methods to do this. It will include a brief background on TPA, discuss the benefits of using TPA to compute virtual field data, and delve into the areas for future work that could make TPA more useful in this application.
We consider the problem of decentralized control of reactive power provided by distributed energy resources for voltage support in the distribution grid. We assume that the reactance matrix of the grid is unknown and potentially time-varying. We present a decentralized adaptive controller in which the reactive power at each inverter is set using a potentially heterogeneous droop curve and analyze the stability and the steady-state error of the resulting system. The effectiveness of the controller is validated in simulations using modified versions of the IEEE 13-bus and 8500-node test systems.
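As a hedged illustration of the droop-based idea above, the following minimal Python sketch iterates a linearized grid model v = v_nom + Xq with a saturated droop update at each inverter. The reactance matrix, gains, and voltages are hypothetical stand-ins, and the damped fixed-point update is a simplification of the adaptive controller analyzed in the paper.

```python
import numpy as np

def droop(v, v_ref, k, q_max):
    """Per-inverter droop curve: inject vars when voltage sags, absorb when it swells."""
    return np.clip(-k * (v - v_ref), -q_max, q_max)

rng = np.random.default_rng(0)
n = 5
X = 0.02 * rng.random((n, n))
X = 0.5 * (X + X.T) + 0.05 * np.eye(n)       # symmetric stand-in reactance matrix
v_nom = 1.0 + 0.03 * rng.standard_normal(n)  # uncontrolled bus voltages (p.u.)
v_ref, k, q_max, alpha = 1.0, 2.0, 0.5, 0.3

q = np.zeros(n)
for _ in range(100):                         # damped measurement -> droop iteration
    v = v_nom + X @ q                        # linearized voltage response to q
    q = (1 - alpha) * q + alpha * droop(v, v_ref, k, q_max)
print(np.round(v, 4))                        # voltages pulled toward v_ref
```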
Measurements of the oxidation rates of various forms of carbon (soot, graphite, coal char) have often shown an unexplained attenuation with increasing temperatures in the vicinity of 2000 K, even when accounting for diffusional transport limitations and gas-phase chemical effects (e.g., CO2 dissociation). With the development of oxy-fuel combustion approaches for pulverized coal utilization with carbon capture, high particle temperatures are readily achieved in sufficiently oxygen-enriched environments. In this work, a new semi-global intrinsic kinetics model for high temperature carbon oxidation is created by starting with a previously developed 5-step mechanism that was shown to reproduce all major known trends in carbon oxidation, except for its high temperature kinetic falloff, and incorporating a recently discovered surface oxide decomposition step. The predictions of this new model are benchmarked by deploying the kinetic model in a steady-state reacting particle code (SKIPPY) and comparing the simulated results against a carefully measured set of pulverized coal char combustion temperature measurements over a wide range of oxygen concentrations in N2 and CO2 environments. The results show that the inclusion of the spontaneous surface oxide decomposition reaction step significantly improves predictions at high particle temperatures. Furthermore, the simulations reveal that O atoms released from the oxide decomposition step enhance the radical pool in the near-surface region and within the particle interior itself. Incorporation of literature rates for O and OH reactions with the carbon surface results in a reduction in the predicted radical pool concentrations and a very minor enhancement of the overall carbon oxidation rate.
Deep neural networks (DNNs) achieve state-of-the-art performance in video anomaly detection. However, the usage of DNNs is limited in practice due to their computational overhead, generally requiring significant resources and specialized hardware. Further, despite recent progress, current evaluation criteria for video anomaly detection algorithms are flawed, preventing meaningful comparisons among algorithms. In response to these challenges, we propose (1) a compression-based technique referred to as Spatio-Temporal N-Gram Prediction by Partial Matching (STNG PPM) and (2) simple modifications to current evaluation criteria for improved interpretation and broader applicability across algorithms. STNG PPM does not require specialized hardware, has few parameters to tune, and is competitive with DNNs on multiple benchmark data sets in video anomaly detection.
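To make the compression-based idea concrete, here is a toy sketch of a prediction-by-partial-matching-style n-gram scorer on generic symbol sequences. The real STNG PPM operates on quantized spatio-temporal video features and uses PPM escape probabilities, which the Laplace smoothing below only approximates.

```python
from collections import defaultdict
import math

class NGramScorer:
    """Score sequences by average code length; poorly predicted (high-bit) data is anomalous."""

    def __init__(self, n=3, alphabet_size=16):
        self.n, self.V = n, alphabet_size
        self.ctx = defaultdict(lambda: defaultdict(int))  # context -> symbol counts

    def train(self, seq):
        for i in range(len(seq) - self.n + 1):
            context, sym = tuple(seq[i:i + self.n - 1]), seq[i + self.n - 1]
            self.ctx[context][sym] += 1

    def bits(self, seq):
        """Average bits/symbol under the learned model (simplified PPM via Laplace smoothing)."""
        total = 0.0
        for i in range(len(seq) - self.n + 1):
            context, sym = tuple(seq[i:i + self.n - 1]), seq[i + self.n - 1]
            counts = self.ctx[context]
            p = (counts[sym] + 1) / (sum(counts.values()) + self.V)
            total -= math.log2(p)
        return total / max(1, len(seq) - self.n + 1)

scorer = NGramScorer()
scorer.train([0, 1, 2, 3] * 50)          # "normal" pattern
print(scorer.bits([0, 1, 2, 3] * 5))     # low bits/symbol -> normal
print(scorer.bits([3, 3, 0, 2] * 5))     # higher bits/symbol -> anomalous
```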
Deep neural networks for automatic target recognition (ATR) have been shown to be highly successful for a large variety of Synthetic Aperture Radar (SAR) benchmark datasets. However, the black box nature of neural network approaches raises concerns about how models come to their decisions, especially in high-stakes scenarios. Accordingly, a variety of techniques are being pursued that seek to offer understanding of machine learning algorithms. In this paper, we first provide an overview of explainability and interpretability techniques, introducing their concepts and the insights they produce. Next, we summarize several methods for computing specific approaches to explainability and interpretability as well as for analyzing their outputs. Finally, we demonstrate the application of several attribution map methods and apply both attribution analysis metrics and localization interpretability analysis to six neural network models trained on the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset to illustrate the insights these methods offer for analyzing SAR ATR performance.
Accuracy-optimized convolutional neural networks (CNNs) have emerged as highly effective models at predicting neural responses in brain areas along the primate ventral stream, but it is largely unknown whether they effectively model neurons in the complementary primate dorsal stream. We explored how well CNNs model the optic flow tuning properties of neurons in dorsal area MSTd and we compared our results with the Non-Negative Matrix Factorization (NNMF) model, which successfully models many tuning properties of MSTd neurons. To better understand the role of computational properties in the NNMF model that give rise to optic flow tuning that resembles that of MSTd neurons, we created additional CNN model variants that implement key NNMF constraints – non-negative weights and sparse coding of optic flow. While the CNNs and NNMF models both accurately estimate the observer's self-motion from purely translational or rotational optic flow, NNMF and the CNNs with non-negative weights yield substantially less accurate estimates than the other CNNs when tested on more complex optic flow that combines observer translation and rotation. Despite its poor accuracy, NNMF gives rise to tuning properties that align more closely with those observed in primate MSTd than any of the accuracy-optimized CNNs. This work offers a step toward a deeper understanding of the computational properties and constraints that describe the optic flow tuning of primate area MSTd.
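For readers unfamiliar with the NNMF component, the sketch below factorizes a non-negative encoding of synthetic optic flow using scikit-learn. The rectified positive/negative channel split is our assumption for keeping the inputs non-negative, not the published model's MT-like motion-energy encoding.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_samples, n_units = 500, 100
flow = rng.standard_normal((n_samples, 2 * n_units))        # stand-in (vx, vy) flow fields
X = np.hstack([np.maximum(flow, 0), np.maximum(-flow, 0)])  # split into non-negative channels

model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # sparse, non-negative activations ("MSTd-like" unit responses)
H = model.components_        # non-negative basis flow patterns
print(W.shape, H.shape, round(model.reconstruction_err_, 3))
```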
Accurate understanding of the behavior of commercial-off-the-shelf electrical devices is important in many applications. This paper discusses methods for the principled statistical analysis of electrical device data. We present several recent successful efforts and describe two current areas of research that we anticipate will produce widely applicable methods. Because much electrical device data is naturally treated as functional, and because such data introduces some complications in analysis, we focus on methods for functional data analysis.
Disposal of commercial spent nuclear fuel in a geologic repository is studied. In situ heater experiments in underground research laboratories provide a realistic representation of subsurface behavior under disposal conditions. This study describes process model development and modeling analysis for a full-scale heater experiment in the Opalinus Clay host rock. Results of thermal-hydrology simulations, solving coupled nonisothermal multiphase flow, are presented and compared with experimental data. The modeling results closely match the experimental data.
Emerging hydrogen technologies span a diverse range of operating environments. High-pressure storage for mobility applications has become commonplace up to about 1,000 bar, whereas transmission of gaseous hydrogen can occur at hydrogen partial pressures of a few bar when blended into natural gas. In the former case, cascade storage is utilized to manage hydrogen-assisted fatigue, and the Boiler and Pressure Vessel Code, Section VIII, Division 3 includes fatigue design curves for fracture mechanics design of hydrogen vessels at a pressure of 1,030 bar (using a Paris Law formulation). Recent research on hydrogen-assisted fatigue crack growth has shown that a diverse range of ferritic steels exhibit similar fatigue crack growth behavior in gaseous hydrogen environments, including low-carbon steels (e.g., pipeline steels) as well as quenched and tempered Cr-Mo and Ni-Cr-Mo pressure vessel steels with tensile strength less than 915 MPa. However, measured fatigue crack growth is sensitive to hydrogen partial pressure, and fatigue crack growth can be accelerated in hydrogen at pressures as low as 1 bar. The effect of hydrogen partial pressure from 1 to 1,000 bar can be quantified through a simple semi-empirical correction factor to the fatigue crack growth design curves. This paper documents the technical basis for the pressure-sensitive fatigue crack growth rules for gaseous hydrogen service in ASME B31.12 Code Case 220 and for revision of ASME VIII-3 Code Case 2938-1, including the range of applicability of these fatigue design curves in terms of environmental, materials, and mechanics variables.
A single Synthetic Aperture Radar (SAR) image is a 2-dimensional projection of a 3-dimensional scene, with very limited ability to estimate surface topography. However, multiple SAR images collected from suitably different geometries may be compared using multilateration calculations to estimate characteristics of the missing dimension. The ability to employ effective multilateration algorithms is highly dependent on the geometry of the data collections and can be cast as a least-squares exercise. A measure of Dilution of Precision (DOP) can be used to compare the relative merits of various collection geometries.
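The DOP computation for a least-squares multilateration geometry can be summarized in a few lines: with G built from unit look vectors, the position error covariance scales with (GᵀG)⁻¹, so DOP = sqrt(trace((GᵀG)⁻¹)). The collector positions below are hypothetical.

```python
import numpy as np

def dop(collectors, target):
    """Dilution of Precision for a set of collection positions viewing one scene point."""
    los = collectors - target
    G = los / np.linalg.norm(los, axis=1, keepdims=True)  # unit line-of-sight rows
    return np.sqrt(np.trace(np.linalg.inv(G.T @ G)))

target = np.zeros(3)
wide = np.array([[10e3, 0, 5e3], [0, 10e3, 5e3],
                 [-10e3, 0, 5e3], [0, -10e3, 5e3]])        # diverse look directions
narrow = np.array([[10e3, 1e3, 5e3], [10.4e3, 1e3, 5e3],
                   [10e3, 1.4e3, 5e3], [10e3, 1e3, 5.4e3]])  # clustered look directions
print(dop(wide, target), dop(narrow, target))              # diverse geometry -> smaller DOP
```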
The importance of user-accessible multiple-input/multiple-output (MIMO) control methods has been highlighted in recent years. Several user-created control laws have been integrated into Rattlesnake, an open-source MIMO vibration controller developed at Sandia National Laboratories. Much of the effort to date has focused on stationary random vibration control. However, there are many field environments that are not well captured by stationary random vibration testing, for example shock, sine, or arbitrary waveform environments. This work details a time waveform replication technique that uses frequency domain deconvolution, including a theoretical overview and implementation details. Example usage is demonstrated using a simple structural dynamics system and complicated control waveforms at multiple degrees of freedom.
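A minimal single-axis sketch of the deconvolution idea follows, using a stand-in SDOF FRF; MIMO replication replaces the scalar division with a per-frequency-line pseudoinverse of the FRF matrix. The waveform, system, and regularization threshold are illustrative.

```python
import numpy as np

fs, n = 4096, 4096
f = np.fft.rfftfreq(n, 1 / fs)
wn, zeta = 2 * np.pi * 200.0, 0.05
# stand-in single-degree-of-freedom FRF (drive -> response)
H = 1.0 / (wn**2 - (2 * np.pi * f)**2 + 2j * zeta * wn * (2 * np.pi * f))

t = np.arange(n) / fs
target = np.sin(2 * np.pi * 150 * t) * np.hanning(n)   # desired response waveform
Y = np.fft.rfft(target)
X = np.where(np.abs(H) > 1e-12, Y / H, 0.0)            # regularized frequency-domain deconvolution
drive = np.fft.irfft(X, n)                             # drive signal sent to the exciter
achieved = np.fft.irfft(np.fft.rfft(drive) * H, n)     # sanity check: should match target
print(np.max(np.abs(achieved - target)))
```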
A method for battery state of charge (SoC) estimation that compensates for input noise using an adaptive square-root unscented Kalman filter (ASRUKF) is presented in this paper. In contrast to traditional state estimation approaches that consider deterministic system inputs, this method can improve the accuracy of the battery state estimator by considering that the measurements of the control input variable of the filter, the cell currents, are subject to noise. This paper also presents two estimators for input and output noise covariance. The proposed method consists of initialization, state correction, sigma point calculation, state prediction, and covariance estimation steps and is demonstrated using simulations. We simulate two battery cycling protocols of three series-connected batteries whose SoC is estimated by the proposed method. The results show that the improved ASRUKF closely tracks the states and achieves a 20.63% reduction in SoC estimation error when compared to a benchmark that does not consider input noise.
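The following greatly simplified scalar filter illustrates only the input-noise compensation idea: the current-sensor variance is propagated into the process covariance of a coulomb-counting estimator. The paper's ASRUKF additionally uses square-root sigma-point updates and adaptive covariance estimation; all parameters here are hypothetical.

```python
import numpy as np

dt, Q_cell = 1.0, 3600.0       # time step (s), capacity in A*s (1 Ah)
sig_i, sig_v = 0.05, 5e-3      # current- and voltage-sensor standard deviations
docv = 0.7                     # assumed local OCV slope dOCV/dSoC (V per unit SoC)

def ocv(soc):                  # toy open-circuit-voltage curve
    return 3.2 + docv * soc

soc_hat, P = 0.9, 1e-4
soc_true = 0.95
rng = np.random.default_rng(2)
for k in range(600):
    i_true = 1.0                               # 1 A discharge
    i_meas = i_true + rng.normal(0, sig_i)     # the filter only sees the noisy input
    soc_true -= i_true * dt / Q_cell
    # predict: input noise enters the process covariance via (dt/Q)^2 * sig_i^2
    soc_hat -= i_meas * dt / Q_cell
    P += (dt / Q_cell) ** 2 * sig_i ** 2
    # correct with a voltage measurement through the OCV curve
    v_meas = ocv(soc_true) + rng.normal(0, sig_v)
    K = P * docv / (docv * P * docv + sig_v ** 2)
    soc_hat += K * (v_meas - ocv(soc_hat))
    P *= (1 - K * docv)
print(round(soc_true, 4), round(soc_hat, 4))
```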
We demonstrate evanescently coupled waveguide integrated silicon photonic avalanche photodiodes designed for single photon detection for quantum applications. Simulation results, high responsivity, and record-low dark currents for evanescently coupled devices are presented.
There has always been a desire to port high-fidelity reactive flow models from one code to another. For example, the AWE reactive burn model known as CREST has been or is being implemented in several of the U.S. Department of Energy hydrocodes. Those involved with reactive burn model implementation recognize the challenges immediately, e.g., Eulerian versus Lagrangian frameworks, the form of the equation of state, the closure relations, etc. In this work, we report the development of the CREST reactive burn model in CTH, a multidimensional, multi-material hydrocode developed by Sandia National Laboratories, following an earlier implementation shown at the last International Detonation Symposium. Results include code-to-code comparisons between CTH and the AWE hydrocode PERUSE, focusing on the simulated particle velocity histories during a shock-to-detonation transition and corresponding to previous gas gun impact experiments as well as new model verification studies. Lessons learned are provided, including discussions of the numerical accuracy and of the role of artificial viscosity and artificial viscous work. Finally, simulation results comparing the Snowplough and P-Alpha porosity model options are shown.
In this work, we evaluate the usefulness of nonsmooth basis functions for representing the periodic response of a nonlinear system subject to contact/impact behavior. As with sine and cosine basis functions for classical Fourier series, which have C∞ smoothness, nonsmooth counterparts with C0 smoothness are defined to develop a nonsmooth functional representation of the solution. Some properties of these basis functions are outlined, such as periodicity, derivatives, and orthogonality, which are useful for functional series applied via the Galerkin method. Least-squares fits of the classical Fourier series and nonsmooth basis functions are presented and compared using goodness-of-fit metrics for time histories from vibro-impact systems with varying contact stiffnesses. This formulation has the potential to significantly reduce the computational cost of harmonic balance solvers for nonsmooth dynamical systems. Rather than requiring many harmonics to capture a system response using classical, smooth Fourier terms, the frequency domain discretization could be captured by a combination of a finite Fourier series supplemented with nonsmooth basis functions to improve convergence of the solution for contact-impact problems.
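A small least-squares comparison in the spirit of the study above, using an illustrative C0 triangle-wave basis alongside smooth Fourier terms; the signal and the basis choice are ours, not the paper's exact formulation.

```python
import numpy as np

t = np.linspace(0, 1, 400, endpoint=False)
tri = 2 * np.abs(2 * (t % 1) - 1) - 1             # C0 triangle wave, period 1 (nonsmooth basis)
signal = 0.8 * tri + 0.2 * np.sin(2 * np.pi * t)  # response with a nonsmooth "impact" component

def rms_fit_error(basis, y):
    """Least-squares fit of y onto the given basis; returns RMS residual."""
    A = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((A @ coef - y) ** 2))

fourier = [np.ones_like(t)] + [f(2 * np.pi * k * t) for k in range(1, 6)
                               for f in (np.sin, np.cos)]
print("Fourier only       RMS:", rms_fit_error(fourier, signal))
print("Fourier + triangle RMS:", rms_fit_error(fourier + [tri], signal))
```

Adding the single nonsmooth term captures the kink that many smooth harmonics approximate only slowly, which is the convergence benefit the abstract describes.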
Structural materials used in combustion or power generation systems need to have both environmental and temperature resistance to ensure long-term performance. As the energy sector transitions to hydrogen, there is a need to ensure compatibility of highly-alloyed austenitic steels and nickel-based alloys with hydrogen over a range of temperatures. Hydrogen embrittlement of these alloy systems is often considered most detrimental near ambient and low temperatures, although there is some evidence in the literature that hydrogen can affect creep behavior at elevated temperature. In the intermediate temperature range (e.g., 100–400 °C), it is uncertain whether hydrogen degradation of mechanical properties will be of concern. In this study, three alloys (304L, IN625, Hastelloy X) commonly used in power generation systems were thermally precharged with hydrogen and subsequently tensile tested to failure in air at temperatures ranging from 20 °C to 200 °C. At 20 °C, the hydrogen-precharged condition for all materials exhibited a loss in ductility, with relative reduction of area ranging between 32% and 57%. The three alloys exhibited different trends with temperature but, in general, the relative reduction of area improved with increasing temperature, tending towards non-charged behavior. Tests were performed at a nominal strain rate of 2 × 10⁻³ s⁻¹ in order to minimize loss of hydrogen during elevated temperature testing. Hydrogen contents from the grip sections were measured both before and after testing and remained within 10% of the starting content for the 100 °C tests and within 8–23% for the 200 °C tests.
The development of multi-axis force sensing capabilities in elastomeric materials has enabled new types of human motion measurement with many potential applications. In this work, we present a new soft insole that enables mobile measurement of ground reaction forces (GRFs) outside of a laboratory setting. This insole is based on hybrid shear and normal force detecting (SAND) tactile elements (taxels) consisting of optical sensors optimized for shear sensing and piezoresistive pressure sensors dedicated to normal force measurement. We develop polynomial regression and deep neural network (DNN) GRF prediction models and compare their performance to ground-truth force plate data during two walking experiments. Utilizing a 4-layer DNN, we demonstrate accurate prediction of the anterior-posterior (AP), medial-lateral (ML), and vertical components of the GRF with normalized mean absolute errors (NMAE) of less than 5.1%, 4.1%, and 4.5%, respectively. We also demonstrate the durability of the hybrid SAND insole construction through more than 20,000 cycles of use.
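A hedged sketch of the polynomial-regression baseline on synthetic stand-in data (the real models are trained against force-plate ground truth, and the channel count and mapping below are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
taxels = rng.random((2000, 12))                              # 12 stand-in insole channels
true_grf = taxels @ rng.random((12, 3)) + 0.1 * taxels[:, :3] ** 2  # synthetic 3-axis GRF

# map taxel readings to the three GRF components with a degree-2 polynomial model
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(taxels[:1500], true_grf[:1500])
pred = model.predict(taxels[1500:])

# normalized mean absolute error per GRF component, as in the abstract's metric
nmae = np.mean(np.abs(pred - true_grf[1500:]), axis=0) / np.ptp(true_grf[1500:], axis=0)
print(np.round(100 * nmae, 2), "% NMAE per component (AP, ML, vertical)")
```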
Underground caverns in salt formations are promising geologic features for storing hydrogen (H2) because of salt's extremely low permeability and self-healing behavior. Successful salt-cavern H2 storage schemes must maximize the efficiency of cyclic injection-production while minimizing H2 loss through adjacent damaged salt. The salt cavern storage community, however, has not fully understood the geomechanical behaviors of salt rocks driven by quick operational cycles of H2 injection-production, which may significantly impact the cost-effective storage-recovery performance. Our field-scale generic model captures the impact of combined drag and back stressing on the salt creep behavior corresponding to cycles of compression and extension, which may lead to substantial loss of cavern volume over time and diminish the cavern performance for H2 storage. Our preliminary findings indicate that it is essential to develop a new salt constitutive model based on geomechanical tests of site-specific salt rock to probe the cyclic behaviors of salt both beneath and above the dilatancy boundary, including reverse (inverse transient) creep, the Bauschinger effect, and fatigue.
This paper provides a summary of planning work for experiments that will be necessary to address the long-term model validation needs required to meet offshore wind energy deployment goals. Conceptual experiments are identified and laid out in a validation hierarchy for both wind turbine and wind plant applications. Instrumentation needs that will be required for the offshore validation experiments to be impactful are then listed. The document concludes with a nominal vision for how these experiments can be accomplished.
Different data pipelines and statistical methods are applied to photovoltaic (PV) performance datasets to quantify the performance loss rate (PLR). Since the real values of PLR are unknown, a variety of unvalidated values are reported. As such, the PV industry commonly assumes PLR based on statistically extracted ranges from the literature. However, the accuracy and uncertainty of PLR depend on several parameters including seasonality, local climatic conditions, and the response of a particular PV technology. In addition, the specific data pipeline and statistical method used affect the accuracy and uncertainty. To provide insights, a framework of (≈200 million) synthetic simulations of PV performance datasets using data from different climates is developed. Time series with known PLR and data quality are synthesized, and large parametric studies are conducted to examine the accuracy and uncertainty of different statistical approaches over the contiguous US, with an emphasis on the publicly available and “standardized” library, RdTools. In the results, it is confirmed that PLRs from RdTools are unbiased on average, but the accuracy and uncertainty of individual PLR estimates vary with climate zone, data quality, PV technology, and choice of analysis workflow. Best practices and improvement recommendations based on the findings of this study are provided.
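The core ingredient of such a framework — a performance time series with a known, imposed PLR — can be synthesized simply. The seasonality, noise model, and naive log-linear recovery below are illustrative simplifications of the climate-driven simulations and RdTools analysis workflows used in the study.

```python
import numpy as np
import pandas as pd

plr_true = -0.5 / 100 / 365.25                  # impose -0.5 %/yr, expressed per day
days = pd.date_range("2015-01-01", periods=10 * 365, freq="D")
n = len(days)
seasonal = 1 + 0.05 * np.sin(2 * np.pi * np.arange(n) / 365.25)   # toy seasonality
noise = 1 + 0.02 * np.random.default_rng(4).standard_normal(n)    # measurement noise
perf = (1 + plr_true) ** np.arange(n) * seasonal * noise
series = pd.Series(perf, index=days, name="normalized_energy")

# naive recovered PLR from a log-linear fit; real pipelines handle seasonality,
# filtering, and uncertainty far more carefully
slope = np.polyfit(np.arange(n), np.log(series.values), 1)[0]
print(f"imposed {plr_true * 365.25 * 100:.2f} %/yr, fitted {slope * 365.25 * 100:.2f} %/yr")
```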
This paper builds upon previous research in developing a SiC Drift Step Recovery Diode (DSRD) model in Silvaco Victory Device. For this research, the DSRD is based on an N-type substrate for improved manufacturability. The model described in this paper was developed by characterizing DSRD devices under DC and transient conditions. The details of the pulsed power testbed developed for the transient characterization are outlined in this paper. The goal of this model is to enable the rapid development of future pulsed power systems and further optimization of the device structure.
While recent research has greatly improved our ability to test and model nonlinear dynamic systems, it is rare that these studies quantify the effect that the nonlinearity would have on failure of the structure of interest. While several very notable exceptions certainly exist, such as the work of Hollkamp et al. on the failure of geometrically nonlinear skin panels for high speed vehicles (see, e.g., Gordon and Hollkamp, Reduced-order models for acoustic response prediction, Technical Report AFRL-RB-WP-TR-2011-3040, Air Force Research Laboratory, Dayton, 2011), other studies have given little consideration to failure. This work studies the effect of common nonlinearities on the failure (and failure margins) of components that undergo durability testing in dynamic environments. This context differs from many engineering applications because one usually assumes that any nonlinearities have been fully exercised during the test.
Control volume analysis models physics via the exchange of generalized fluxes between subdomains. We introduce a scientific machine learning framework adopting a partition of unity architecture to identify physically-relevant control volumes, with generalized fluxes between subdomains encoded via Whitney forms. The approach provides a differentiable parameterization of geometry which may be trained in an end-to-end fashion to extract reduced models from full field data while exactly preserving physics. The architecture admits a data-driven finite element exterior calculus allowing discovery of mixed finite element spaces with closed form quadrature rules. An equivalence between Whitney forms and graph networks reveals that the geometric problem of control volume learning is equivalent to an unsupervised graph discovery problem. The framework is developed for manifolds in arbitrary dimension, with examples provided for H(div) problems in R^2 establishing convergence and structure preservation properties. Finally, we consider a lithium-ion battery problem where we discover a reduced finite element space encoding transport pathways from high-fidelity microstructure-resolved simulations. The approach reduces a 5.89M-element finite element simulation to 136 elements while reproducing pressure to under 0.1% error and preserving conservation.
Numerous types of pulsed power driven inertial confinement fusion (ICF) and high energy density (HED) systems rely on implosion stability to achieve desired temperatures, pressures, and densities. Sandia National Laboratories Pulsed Power Sciences Center’s main ICF platform, Magnetized Liner Inertial Fusion (MagLIF), suffers from implosion instabilities which limit attainable fuel conditions and can compromise fuel confinement. This Truman Fellowship research primarily focused on computationally exploring (a) methods for improving our understanding of hydrodynamic and magnetohydrodynamic instabilities that form during cylindrical liner implosions, (b) methods for mitigating implosion instabilities, particularly those that degrade performance of MagLIF targets, and (c) novel MagLIF target designs intended to improve target performance primarily via enhanced implosion stability. Several multi-dimensional computational tools were used, including the magnetohydrodynamics code ALEGRA, the radiation-magnetohydrodynamics code HYDRA, and the magnetohydrodynamics code KRAKEN. This research succeeded in executing and analyzing simulations of automagnetizing liner implosions, shockless MagLIF implosions, dynamic screw pinch driven cylindrical liner implosions, and cylindrically convergent HED instability studies. The methods and tools explored and developed in this Truman Fellowship research have been published in several peer-reviewed journal articles and will serve as useful contributions to the fields of pulsed power science and engineering, particularly pertaining to pulsed power ICF and HED science.
For the cylindrically symmetric targets that are normally fielded on the Z machine, two-dimensional axisymmetric MHD simulations provide the backbone of our target design capability. These simulations capture the essential operation of the target and allow for a wide range of physics to be addressed at a substantially lower computational cost than 3D simulations. This approach, however, makes some approximations that may impact its ability to accurately provide insight into target operation. As an example, in 2D simulations, targets are able to stagnate directly to the axis in a way that is not entirely physical, leading to uncertainty in the impact of the dynamical instabilities that are an important source of degradation for ICF concepts. In this report, we have performed a series of 3D calculations in order to assess the importance of this higher fidelity treatment on MagLIF target performance.
A technique using the photon kerma cross section for a material in combination with the number fractions from a photon energy spectrum has been developed to determine the estimated subzone dimension needed to provide an energy deposition profile in radiation transport calculations. The technique was verified using the ITS code for monoenergetic photon sources and a selection of photon spectra. A Python script was written to use the CEPXS cross-section file with a Rapture-calculated transmission spectrum to provide the dimensional estimates in a rapid fashion. The script is available to SNL users through the corporate GitLab server.
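As a loose illustration of the weighting idea (not the report's actual script), one can average a material's energy-deposition coefficient over the spectrum's number fractions and size subzones against the resulting deposition length. All coefficients and fractions below are made up.

```python
import numpy as np

energies = np.array([0.1, 0.5, 1.0, 2.0])     # MeV bins of the source spectrum
number_frac = np.array([0.2, 0.4, 0.3, 0.1])  # fraction of photons in each bin
mu_en = np.array([4.2, 0.9, 0.55, 0.35])      # stand-in deposition coefficients (1/cm)

mu_eff = np.sum(number_frac * mu_en)          # spectrum-averaged coefficient
depth_90 = -np.log(0.1) / mu_eff              # depth capturing ~90% of deposition
subzone = depth_90 / 20                       # resolve the profile with ~20 subzones
print(f"mu_eff = {mu_eff:.3f} 1/cm, suggested subzone ~ {subzone * 10:.2f} mm")
```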
The nonlinear viscoelastic Spectacular model is calibrated to the thermo-mechanical behavior of 828/D230/Alox with an alox volume fraction of 20 %. Legacy experimental data from Sandia's polymer properties database (PPD) is used to calibrate the model. Based on the known densities of the epoxy 828/D230 and the alox filler, the alox volume fractions listed in the PPD were likely reported incorrectly. The alox volume fractions are recalculated here. Using the recalculated alox volume fractions, the PPD contains experimental data for 828/D230/Alox with alox volume fractions of 16 %, 24 %, and 33 %, so the thermo-mechanical behavior at 20 % alox volume fraction is estimated by interpolating between the bounding cases of 16 % and 24 %. Because the Spectacular model can be fairly challenging to calibrate, the calibration procedure is described in detail. Several of the calibration steps involve inverse parameter identification, where an experiment is simulated and parameters are iteratively updated until the model response matches the experimental data. As the PPD does not fully describe all experimental procedures, the experimental simulations use assumed thermal and mechanical loading rates that are typical for the viscoelastic characterization of epoxies. Spectacular uses four independent relaxation functions related to volumetric (ƒ1), shear (ƒ2), thermal strain (ƒ3), and thermal relaxations (ƒ4). The previous SPEC model form, also known as the universal_polymer model, uses two independent relaxation functions related to volumetric and thermal relaxation (ƒν = ƒ1 = ƒ3 = ƒ4) and shear relaxation (ƒs = ƒ2). The two constitutive choices are briefly evaluated here, where it is found that the four-relaxation-function approach of Spectacular was better suited for fitting the coefficient of thermal expansion during both heating and cooling.
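The inverse-identification step is generic enough to sketch: simulate the experiment with trial parameters and iterate until the response matches the data. The stand-in model below is a simple Prony-series relaxation function, not the Spectacular constitutive model, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.logspace(-2, 3, 60)  # relaxation times sampled over several decades (s)

def prony(params, t):
    """Stand-in one-term Prony-series relaxation function."""
    g1, tau1, g_inf = params
    return g_inf + g1 * np.exp(-t / tau1)

truth = np.array([0.6, 10.0, 0.4])
data = prony(truth, t) * (1 + 0.01 * np.random.default_rng(5).standard_normal(t.size))

def residual(params):
    # "simulate the experiment" and compare with measured data
    return prony(params, t) - data

fit = least_squares(residual, x0=[0.5, 1.0, 0.5], bounds=([0, 1e-3, 0], [1, 1e4, 1]))
print(np.round(fit.x, 3))  # recovered (g1, tau1, g_inf)
```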
The Storage Sizing and Placement Simulation (SSIM) application allows a user to define the possible sizes and locations of energy storage elements on an existing grid model defined in OpenDSS. Given these possibilities, the software will automatically search through them and attempt to determine which configurations result in the best overall grid performance. This quick-start guide will go through, in detail, the creation of an SSIM model based on a modified version of the IEEE 34-bus test feeder system. There are two primary parts of this document. The first is a complete list of instructions with little-to-no explanation of the meanings of the actions requested. The second is a detailed description of each input and action stating the intent and effect of each. There are links between the two sections.
Optimization is a key tool for scientific and engineering applications; however, in the presence of models affected by uncertainty, the optimization formulation needs to be extended to consider statistics of the quantity of interest. Optimization under uncertainty (OUU) deals with this endeavor and requires uncertainty quantification analyses at several design locations; i.e., its overall computational cost is proportional to the cost of performing a forward uncertainty analysis at each design location. An OUU workflow has two main components: an inner loop strategy for the computation of statistics of the quantity of interest, and an outer loop optimization strategy tasked with finding the optimal design, given a merit function based on the inner loop statistics. In this work, we propose to alleviate the cost of the inner loop uncertainty analysis by leveraging the so-called multilevel Monte Carlo (MLMC) method, which is able to allocate resources over multiple models with varying accuracy and cost. The resource allocation problem in MLMC is formulated by minimizing the computational cost given a target variance for the estimator. We consider MLMC estimators for statistics usually employed in OUU workflows and solve the corresponding allocation problem. For the outer loop, we consider a derivative-free optimization strategy implemented in the SNOWPAC library; our novel strategy is implemented and released in the Dakota software toolkit. We discuss several numerical test cases to showcase the features and performance of our approach with respect to its single-fidelity Monte Carlo counterpart.
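For the standard mean estimator, the allocation problem has the well-known closed-form solution N_l = ε⁻² √(V_l/C_l) Σ_k √(V_k C_k), obtained by minimizing total cost Σ N_l C_l subject to estimator variance Σ V_l/N_l ≤ ε². The sketch below evaluates it for illustrative per-level variances and costs.

```python
import numpy as np

V = np.array([1.0e-2, 2.5e-3, 6.0e-4])   # variance of the level differences Y_l = Q_l - Q_{l-1}
C = np.array([1.0, 8.0, 64.0])           # cost per sample at each level
eps = 5e-3                               # target standard deviation of the MLMC estimator

# optimal per-level sample counts (rounded up, so the variance target is met)
N = np.ceil(eps**-2 * np.sqrt(V / C) * np.sum(np.sqrt(V * C))).astype(int)
total_cost = np.sum(N * C)
achieved_var = np.sum(V / N)
print(N, total_cost, achieved_var <= eps**2)
```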
This report describes work originally performed in FY19 that assembled a workflow enabling formal verification of high-consequence digital controllers. The approach builds on an engineering analysis strategy using multiple abstraction levels (Model-Based Design) and performs exhaustive formal analysis of appropriate levels – here, state machines and C code – to assure always/never properties of digital logic that cannot be verified by testing alone. The operation of the workflow is illustrated using example models and code, including expected failures of verification when properties are violated.
The International Database of Reference Gamma-Ray Spectra of Various Nuclear Matter is designed to hold curated gamma spectral data and is hosted by the International Atomic Energy Agency on its public-facing website. The database used to hold the spectral data was designed by Sandia National Labs under the auspices of the State Department's Support Program. This document describes the tables and entity relationships that make up the database.
Long-term stable sealing elements are a basic component in the safety concept for a possible repository for heat-emitting radioactive waste in rock salt. The sealing elements will be part of the closure concept for drifts and shafts. They will be made from a well-defined crushed salt employing a specific manufacturing process. The use of crushed salt as a geotechnical barrier, as required by the German Site Selection Act from 2017 /STA 17/, represents a paradigm change in the safety function of crushed salt, since this material was formerly only considered as stabilizing backfill for the host rock. The demonstration of the long-term stability and impermeability of crushed salt is crucial for its use as a geotechnical barrier. The KOMPASS-II project is a follow-up to the KOMPASS-I project and continues the work with a focus on improving the understanding of the thermal-hydraulic-mechanical (THM) coupled processes in crushed salt compaction, with the objective of enhancing the scientific competence for using crushed salt for the long-term isolation of high-level nuclear waste within rock salt repositories. The project strives for an adequate characterization of the compaction process and the essential influencing parameters, as well as a robust and reliable long-term prognosis using validated constitutive models. For this purpose, experimental studies on long-term compaction tests are combined with microstructural investigations and numerical modeling. The long-term compaction tests in this project focused on the effect of mean stress, deviatoric stress, and temperature on the compaction behavior of crushed salt. A laboratory benchmark was performed, identifying variability in compaction behavior. Microstructural investigations were executed with the objective of characterizing the influence of the pre-compaction procedure, humidity content, and grain size/grain size distribution on the overall compaction process of crushed salt with respect to the deformation mechanisms. The created database was used for benchmark calculations aiming at the improvement and optimization of a large number of constitutive models available for crushed salt. The models were calibrated, and the improvement process was made visible by applying the virtual demonstrator.
Infrasound, low-frequency sound below 20 Hz, is generated by both natural and anthropogenic sources. Infrasound sensors measure pressure fluctuations only in the vertical plane and are single channel. However, the most robust infrasound signal detection methods rely on stations with multiple sensors (arrays), despite the fact that such stations are sparse. Automated methods developed for seismic data, such as the short-term average to long-term average ratio (STA/LTA), often have a high false alarm rate when applied to infrasound data. Leveraging single-channel infrasound stations has the potential to decrease signal detection limits, though this cannot be done without a reliable detection method. Therefore, this report presents initial results using (1) a convolutional neural network (CNN) to detect infrasound signals and (2) unsupervised learning to gain insight into source type.
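For reference, the STA/LTA baseline mentioned above amounts to thresholding the ratio of short- and long-window moving averages of signal power; window lengths, threshold, and data below are illustrative.

```python
import numpy as np

def sta_lta(x, fs, sta_s=1.0, lta_s=30.0):
    """Ratio of short-term to long-term moving averages of signal power."""
    power = x.astype(float) ** 2
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    kernel = lambda n: np.ones(n) / n
    sta = np.convolve(power, kernel(sta_n), mode="same")
    lta = np.convolve(power, kernel(lta_n), mode="same")
    return sta / np.maximum(lta, 1e-20)

fs = 100.0
t = np.arange(0, 120, 1 / fs)
x = np.random.default_rng(6).standard_normal(t.size)        # background noise
x[6000:6500] += 5 * np.sin(2 * np.pi * 5 * t[6000:6500])    # transient "signal"
ratio = sta_lta(x, fs)
print("detections:", np.flatnonzero(ratio > 4.0).size > 0)  # trips on the transient
```

Because impulsive noise also trips this detector, the false alarm behavior motivates the learned approaches examined in the report.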
This report summarizes the collaboration between Sandia National Laboratories (SNL) and the Nuclear Regulatory Commission (NRC) to improve the state of knowledge on chloride induced stress corrosion cracking (CISCC). The foundation of this work relied on using SNL's CISCC computer code to assess the current state of knowledge for probabilistically modeling CISCC on stainless steel canisters. This work is presented as three tasks. The first task is exploring and independently comparing crack growth rate (CGR) models typically used in CISCC modeling by the research community. The second task is implementing two of the more conservative CGR models from the first task into SNL's full CISCC code to understand the impact of the different CGR models on a full probabilistic analysis while studying uncertainty from three key input parameters. The combined work of the first two tasks showed that properly measuring salt deposition rates is impactful in reducing uncertainty when modeling CISCC. The work in Task 2 also showed how probabilistic CGR models can be more appropriate for capturing aleatory uncertainty when modeling SCC. Lastly, appropriate and realistic input parameters relevant for CISCC modeling were documented in the last task as a product of the simulations considered in the first two tasks.
Accurately locating seismoacoustic sources with geophysical observations helps to monitor natural and anthropogenic phenomena. Sparsely deployed infrasound arrays can readily locate large sources thousands of kilometers away, but small events typically produce signals observable at only local to regional distances. At such distances, accurate location efforts rely on observations across smaller regional or temporary deployments, which often consist of single-channel infrasound sensors that cannot record direction of arrival. Event locations can also be aided by the inclusion of ground coupled airwaves (GCA). This study demonstrates how we can robustly locate a catalog of seismoacoustic events using infrasound, GCA, and seismic arrival times at local to near-regional distances. We employ a probabilistic location framework using simplified forward models. Our results indicate that both single-channel infrasound and GCA arrival times can provide accurate estimates of event location in the absence of array-based observations, even when using simple models. However, one must carefully choose model uncertainty bounds to avoid underestimation of confidence intervals.
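A toy version of arrival-time-based location follows: a grid search with constant-velocity forward models for seismic and infrasound/GCA phases. The study's probabilistic framework and forward models are more sophisticated, and the velocities and station layout here are hypothetical.

```python
import numpy as np

v_seis, v_acou = 3500.0, 340.0                     # assumed phase velocities (m/s)
stations = np.array([[0, 0], [20e3, 5e3], [8e3, 18e3], [-12e3, 9e3]])
phases = np.array([0, 0, 1, 1])                    # 0 = seismic, 1 = infrasound/GCA
src_true, t0_true = np.array([5e3, 7e3]), 0.0

def predict(src, t0):
    """Constant-velocity arrival-time forward model for each station/phase."""
    d = np.linalg.norm(stations - src, axis=1)
    v = np.where(phases == 0, v_seis, v_acou)
    return t0 + d / v

obs = predict(src_true, t0_true) + 0.1 * np.random.default_rng(7).standard_normal(4)

xs = ys = np.linspace(-20e3, 25e3, 91)
best, best_cost = None, np.inf
for x in xs:
    for y in ys:
        pred = predict(np.array([x, y]), 0.0)
        shift = np.mean(obs - pred)                # best origin time for this grid node
        cost = np.sum((obs - pred - shift) ** 2)
        if cost < best_cost:
            best, best_cost = (x, y), cost
print("true:", src_true, "estimated:", np.round(best))
```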
This work was conducted in support of the American Made Geothermal Prize. The following data summary report presents the testing conducted at Sandia National Labs to validate the performance of the Ultra-High Temperature Seismic Tool for Geothermal Wells. The goal of the testing was to measure the sensitivity of the device to seismic vibrations and the reliability of the instrument at elevated temperatures. To this end, two tests were conducted: 1) Ambient Temperature Seismic Testing, which measured the response of the tool to a sweep of frequencies from 1 to 1000 Hz, and 2) Elevated Temperature Survivability Testing, which measured the voltage response of the device at 225°C over a month-long testing window. The details of the testing methodology and a summary of the tests are presented herein.