Accurate prediction of the ductile behavior of structural alloys up to and including failure is essential in component or system failure assessment, which is necessary for the nuclear weapons alteration and life extension programs of Sandia National Laboratories. Modeling such behavior requires computational capabilities to robustly capture strong nonlinearities (geometric and material), rate-dependent and temperature-dependent properties, and ductile failure mechanisms. This study's objective is to validate numerical simulations of a high-deformation crush of a stainless steel can. The process consists of identifying a suitable can geometry and loading conditions, conducting the laboratory testing, developing a high-quality Sierra/SM simulation, and then drawing comparisons between model and measurement to assess the fitness of the simulation with regard to the material model (plasticity), the finite element model construction, and the failure model. Following previous material model calibration, a J2 plasticity model with a microstructural BCJ failure model is employed to model the test specimen made of 304L stainless steel. Simulated results are verified and validated through mesh and mass-scaling convergence studies, parameter sensitivity studies, and comparison to experimental data. The converged configuration uses a mesh discretization of 140,372 elements and mass scaling with a target time increment of 1.0e-6 seconds and a time step scale factor of 0.5. Results from the coupled thermal-mechanical explicit dynamic analysis are comparable to the experimental data. The simulated global force versus displacement (F/D) response predicts key features of the experimental F/D response, such as yield, ultimate load, and kinks. Furthermore, the final deformed shape of the can and the field data predicted by the analysis are similar to those of the deformed can, as measured by 3D optical CMM scans and DIC data from the experiment.
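For context, the explicit-dynamics mass scaling described above is commonly implemented as follows (a generic relation assumed here, not taken from the report): the stable time increment of each element is estimated from its characteristic length L_e and wave speed c_e, and elements whose stable increment falls below the target are made artificially heavier until they just meet it, with the applied increment then taken as the time step scale factor (here 0.5) times the resulting stable estimate.

\[
\Delta t_e \approx \frac{L_e}{c_e}, \qquad
m_e \leftarrow m_e \left( \frac{\Delta t_{\mathrm{target}}}{\Delta t_e} \right)^{2} \quad \text{if } \Delta t_e < \Delta t_{\mathrm{target}}.
\]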
International safeguards currently rely on material accountancy to verify that declared nuclear material is present and unmodified. Although effective, material accountancy for large bulk facilities can be expensive to implement due to the high-precision instrumentation required to meet regulatory targets. Process monitoring has long been considered as a means to improve material accountancy. However, efforts to integrate process monitoring effectively have met with mixed results. Given the large successes in other domains, machine learning may present a solution for process monitoring integration. Past work has shown that unsupervised approaches struggle due to measurement error. Although not studied in depth for a safeguards context, supervised approaches often generalize poorly to unseen classes of data (e.g., unseen material loss patterns). This work shows that engineered datasets, when used for training, can improve the generalization of supervised approaches. Further, the underlying models needed to generate these datasets need only accurately model certain high-importance features.
Particle heat exchangers are a critical enabling technology for next-generation concentrating solar power (CSP) plants that use supercritical carbon dioxide (sCO2) as a working fluid. This report covers the design, manufacturing, and testing of a prototype particle-to-sCO2 heat exchanger targeting the thermal performance levels required to meet commercial-scale cost targets. In addition, the design and assembly of integrated particle and sCO2 flow loops for heat exchanger performance testing are detailed. The prototype heat exchanger was tested to particle inlet temperatures of 500 °C at 17 MPa, which resulted in overall heat transfer coefficients of approximately 300 W/m2-K at the design point and in cases using high approach temperature, with peak values as high as 400 W/m2-K.
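For reference, the overall heat transfer coefficients quoted above are conventionally defined from the measured duty, the heat transfer area, and the log-mean temperature difference (notation assumed here; the abstract does not state the exact basis used):

\[
U = \frac{\dot{Q}}{A \, \Delta T_{\mathrm{lm}}}, \qquad
\Delta T_{\mathrm{lm}} = \frac{\Delta T_1 - \Delta T_2}{\ln(\Delta T_1 / \Delta T_2)}.
\]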
In this study, we experimentally investigate the high strain rate and spall behavior of the Cantor high-entropy alloy (HEA), CoCrFeMnNi. First, the Hugoniot equation of state (EOS) for the samples is determined using laser-driven CoCrFeMnNi flyers launched into known lithium fluoride (LiF) windows. Photon Doppler velocimetry (PDV) recordings of the velocity profiles yield the EOS coefficients using an impedance mismatch technique. Following this set of measurements, laser-driven aluminum flyer plates are accelerated to velocities of 0.5–1.0 km/s using a high-energy pulsed laser. Upon impact with the CoCrFeMnNi samples, the shock response is found through PDV measurements of the free surface velocities. From this second set of measurements, the spall strength of the alloy is found for pressures up to 5 GPa and strain rates in excess of 10^6 s^-1. Further analysis of the failure mechanisms behind the spallation is conducted using fractography, revealing ductile fracture at voids presumed to be caused by chromium oxide deposits created during the manufacturing process.
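Spall strengths inferred from free-surface velocity histories are conventionally estimated with the acoustic approximation (assumed here for illustration; the abstract does not state the exact reduction used):

\[
\sigma_{\mathrm{sp}} \approx \tfrac{1}{2} \, \rho_0 \, c_b \, \Delta u_{\mathrm{fs}},
\]

where \(\rho_0\) is the ambient density, \(c_b\) the bulk sound speed, and \(\Delta u_{\mathrm{fs}}\) the pullback between the peak free-surface velocity and the first velocity minimum.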
Broadly applicable solutions to multimodal and multisensory fusion problems across domains remain a challenge because effective solutions often require substantial domain knowledge and engineering. The chief questions that arise in data fusion are when to share information from different data sources and how to accomplish the integration of that information. The solutions explored in this work remain agnostic to input representation and terminal decision fusion approaches by sharing information through the learning objective as a compound objective function. The objective function used in this work assumes a one-to-one learning paradigm within a one-to-many domain, which allows consistency to be enforced across the one-to-many dimension. The domains and tasks we explore in this work include multi-sensor fusion for seismic event location and multimodal hyperspectral target discrimination. We find that our domain-informed consistency objectives are challenging to implement in stable and successful learning because of interactions between inherent data complexity and practical parameter optimization. While multimodal hyperspectral target discrimination was not enhanced by the fusion strategies put forward in this work across a range of experiments, seismic event location benefited substantially, though only in label-limited scenarios.
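A minimal sketch of such a compound objective, assuming two sensor branches and hypothetical names (this is not the report's implementation), couples the branches only through a consistency penalty on their predictions:

```python
# Hypothetical sketch: a compound objective that fuses two branches only
# through a consistency term, leaving input representations and terminal
# decision fusion untouched.
import torch.nn.functional as F

def compound_loss(pred_a, pred_b, target, weight=0.1):
    task_a = F.mse_loss(pred_a, target)        # branch-A task objective
    task_b = F.mse_loss(pred_b, target)        # branch-B task objective
    consistency = F.mse_loss(pred_a, pred_b)   # enforce agreement across the
                                               # one-to-many dimension
    return task_a + task_b + weight * consistency
```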
Quantifying the sensitivity (how a quantity of interest (QoI) varies with respect to a parameter) and the response (the representation of a QoI as a function of a parameter) of a computer model of a parametric dynamical system is an important and challenging problem. Traditional methods fail in this context since sensitive dependence on initial conditions implies that the sensitivity and response of a QoI may be ill-conditioned or not well-defined. If a chaotic model has an ergodic attractor, then ergodic averages of QoIs are well-defined quantities and their sensitivity can be used to characterize model sensitivity. The response theorem gives sufficient conditions under which the local forward sensitivity (the derivative with respect to a given parameter) of an ergodic average of a QoI is well-defined. We describe a method based on ergodic and response theory for computing the sensitivity and response of a given QoI with respect to a given parameter in a chaotic model with an ergodic and hyperbolic attractor. This method does not require computation of ensembles of the model with perturbed parameter values. The method is demonstrated and some of the computations are validated on the Lorenz 63 and Lorenz 96 models.
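In the notation assumed here (state trajectory u(t; s), parameter s, quantity of interest J), the objects described above are the ergodic average of the QoI and its local forward sensitivity,

\[
\langle J \rangle(s) = \lim_{T \to \infty} \frac{1}{T} \int_0^T J\big(u(t; s)\big)\, dt,
\qquad \frac{d \langle J \rangle}{d s},
\]

which are well-defined on an ergodic attractor even when the sensitivity of the instantaneous QoI is not.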
The U.S. Strategic Petroleum Reserve (SPR) is a crude oil storage system administered by the U.S. Department of Energy. The reserve consists of 60 active storage caverns located in underground salt domes spread across four sites in Louisiana and Texas, near the Gulf of Mexico. Beginning in 2016, the SPR started executing Congressionally mandated oil sales. The configuration of the reserve, with a total capacity of greater than 700 million barrels (MMB), requires that unsaturated water (referred to herein as "raw" water) be injected into the storage caverns to displace oil for sales, exchanges, and drawdowns. As such, oil sales will produce cavern growth to the extent that raw water contacts the salt cavern walls and dissolves (leaches) the surrounding salt before reaching brine saturation. The SPR injected a total of over 45 MMB of raw water into twenty-six caverns as part of oil sales in CY21. Leaching effects were monitored in these caverns to understand how the sales operations may impact the long-term integrity of the caverns. While frequent sonars are the most direct means to monitor changes in cavern shape, they can be resource intensive for the number of caverns involved in sales and exchanges. An intermediate option is to model the leaching effects and see if any concerning features develop. The leaching effects were modeled here using the Sandia Solution Mining Code, SANSMIC. The modeling results indicate that leaching-induced features do not raise concern for the majority of the caverns, 15 of 26. Eleven caverns, BH-107, BH-110, BH-112, BH-113, BM-109, WH-11, WH-112, WH-114, BC-17, BC-18, and BC-19, have features that may grow with additional leaching and should be monitored as leaching continues in those caverns. Additionally, BH-114, BM-4, and BM-106 were identified in previous leaching reports with a recommendation for monitoring. Nine caverns had pre- and post-leach sonars that were compared with SANSMIC results. Overall, SANSMIC was able to capture the leaching well. A deviation between the SANSMIC and sonar cavern shapes was observed near the cavern floor in caverns with significant floor rise, a process not captured by SANSMIC. These results validate that SANSMIC continues to serve as a useful tool for monitoring changes in cavern shape due to leaching effects related to sales and exchanges.
This report summarizes research performed in the context of a REHEDS LDRD project that explores methods for measuring electrical properties of vessel joints. These properties, which include contact points and associated contact resistance, are “hidden” in the sense that they are not apparent from a computer-assisted design (CAD) description or visual inspection. As is demonstrated herein, the impact of this project is the development of electromagnetic near-field scanning capabilities that allow weapon cavity joints to be characterized with high spatial and/or temporal resolution. Such scans provide insight on the hidden electrical properties of the joint, allowing more detailed and accurate models of joints to be developed, and ultimately providing higher fidelity shielding effectiveness (SE) predictions. The capability to perform high-resolution temporal scanning of joints under vibration is also explored, using a multitone probing concept, allowing time-varying properties of joints to be characterized and the associated modulation to SE to be quantified.
The paper presents a collection of results on continuous dependence for solutions to nonlocal problems under perturbations of data and system parameters. The integral operators appearing in these systems capture interactions via heterogeneous kernels that exhibit different types of weak singularities, space dependence, and even regions of zero interaction. The stability results showcase explicit bounds involving the measure of the domain and of the interaction collar size, the nonlocal Poincaré constant, and other parameters. In the nonlinear setting, the bounds quantify, in different L^p norms, the sensitivity of solutions under different nonlinearity profiles. The results are validated by numerical simulations showcasing discontinuous solutions, varying horizons of interaction, and symmetric and heterogeneous kernels.
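A representative form of the integral operators described above (notation assumed here, with interaction horizon \(\delta\) and kernel \(K\)) is

\[
\mathcal{L}_\delta u(x) = \int_{B_\delta(x)} K(x, y)\, \big( u(y) - u(x) \big)\, dy,
\]

where the kernel may be weakly singular, spatially dependent, or vanish on subregions of the interaction neighborhood.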
This document provides very basic background information and initial enabling guidance for computational analysts to develop and utilize GitOps practices within the Common Engineering Environment (CEE) and High Performance Computing (HPC) computational environment at Sandia National Laboratories through workflows based on GitLab and Jacamar runners.
The purpose of our report is to discuss the notion of entropy and its relationship with statistics. Our goal is to provide a way to think about entropy, its central role within information theory, and its relationship with statistics. We review various relationships between information theory and statistics; nearly all are well known but unfortunately are often not recognized. Entropy quantifies the "average amount of surprise" in a random variable and lies at the heart of information theory, which studies the transmission, processing, extraction, and utilization of information. For us, data is information. What is the distinction between information theory and statistics? Information theorists work with probability distributions, whereas statisticians work with samples. In so many words, information theory using samples is the practice of statistics.
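Concretely, for a discrete random variable X with distribution p, the entropy is

\[
H(X) = -\sum_x p(x) \log p(x) = \mathbb{E}\big[ -\log p(X) \big],
\]

the expected value of the "surprise" \(-\log p(x)\) of each outcome.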
Due to significant computational expense, discrete element method simulations of jammed packings of size-dispersed spheres with size ratios greater than 1:10 have remained elusive, limiting the correspondence between simulations and real-world granular materials with large size dispersity. Invoking a recently developed neighbor binning algorithm, we generate mechanically stable jammed packings of frictionless spheres with power-law size distributions containing up to nearly 4,000,000 particles with size ratios up to 1:100. By systematically varying the width and exponent of the underlying power laws, we analyze the role of particle size distributions on the structure of jammed packings. The densest packings are obtained for size distributions that balance the relative abundance of large-large and small-small particle contacts. Although the proportion of rattler particles and the mean coordination number strongly depend on the size distribution, the mean coordination of nonrattler particles attains the frictionless isostatic value of six in all cases. The size distribution of nonrattler particles that participate in the load-bearing network exhibits no dependence on the width of the total particle size distribution beyond a critical particle size for low-magnitude exponent power laws. This signifies that only particles with sizes greater than the critical particle size contribute to the mechanical stability. However, for high-magnitude exponent power laws, all particle sizes participate in the mechanical stability of the packing.
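A minimal sketch of generating such particle size distributions (assumed notation, not the study's code) draws diameters from a truncated power law by inverting its cumulative distribution:

```python
# Hypothetical sketch: sample particle diameters from P(d) ~ d^(-beta)
# truncated to [d_min, d_max] (e.g., a 1:100 size ratio).
import numpy as np

def sample_powerlaw_diameters(n, d_min, d_max, beta, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    if np.isclose(beta, 1.0):
        # special case: P(d) ~ 1/d integrates to a logarithm
        return d_min * (d_max / d_min) ** u
    a = 1.0 - beta
    return (d_min**a + u * (d_max**a - d_min**a)) ** (1.0 / a)

# Example: ~4,000,000 diameters spanning a 1:100 size ratio, exponent 3
diameters = sample_powerlaw_diameters(4_000_000, 0.01, 1.0, 3.0, seed=0)
```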
This report summarizes the needs, challenges, and opportunities associated with carbon-free energy and energy storage for manufacturing and industrial decarbonization. Energy needs and challenges for different manufacturing and industrial sectors (e.g., cement/steel production, chemicals, materials synthesis) are identified. Key issues for industry include the need for large, continuous on-site capacity (tens to hundreds of megawatts), compatibility with existing infrastructure, cost, and safety. Energy storage technologies that can potentially address these needs, which include electrochemical, thermal, and chemical energy storage, are presented along with key challenges, gaps, and integration issues. Analysis tools to value energy storage technologies in the context of manufacturing and industrial decarbonization are also presented. Material is drawn from the Energy Storage for Manufacturing and Industrial Decarbonization (Energy StorM) Workshop, held February 8-9, 2022. The objective was to identify research opportunities and needs for the U.S. Department of Energy as part of its Energy Storage Grand Challenge program.
This report documents a method for the quantitative identification of radionuclides of potential interest for accident consequence analysis involving advanced nuclear reactors. Based on previous qualitative assessments of radionuclide inventories for advanced reactors, coupled with the review of a radiological inventory developed for a heat pipe reactor, a 1 Ci airborne activity release was calculated for 137 radionuclides using the MACCS 4.1 code suite. Several assumptions regarding release conditions were made and are discussed herein. The potential release of a heat pipe reactor inventory was also modeled following the same assumptions. Results provide an estimate of the relative EARLY and CHRONC phase dose contributions from advanced reactor radionuclides, normalized to doses from equivalent releases of I-131 and Cs-137, respectively. Ultimately, a list of 69 radionuclides with EARLY or CHRONC dose contributions at least 1/100th that of I-131 or Cs-137, respectively (48 of which are currently considered for LWR consequence analyses), was identified as being of potential importance for analyses involving a heat pipe reactor.
This document is intended to be used with the Equipment Test Environment (ETE) being developed, and it provides a standard process by which the ETE can be validated. The ETE is being developed with the intent of establishing cyber intrusion and data collection capabilities and, through automation, providing objective goals that ensure repeatability. This testing process is being developed to interface with the Technical Area V physical protection system. The document gives an overview of the testing structure, interfaces, device and network logging, and data capture. Additionally, it covers the testing procedure, criteria, and constraints necessary to properly capture data and logs and record them for experimental data capture and analysis.
The parallel strong-scaling of iterative methods is often determined by the number of global reductions at each iteration. Low-synch Gram–Schmidt algorithms are applied here to the Arnoldi algorithm to reduce the number of global reductions and therefore to improve the parallel strong-scaling of iterative solvers for nonsymmetric matrices, such as the GMRES and Krylov–Schur iterative methods. In the Arnoldi context, the QR factorization is "left-looking" and processes one column at a time. Among the methods for generating an orthogonal basis for the Arnoldi algorithm, the classical Gram–Schmidt algorithm with reorthogonalization (CGS2) requires three global reductions per iteration. A new variant of CGS2 that requires only one reduction per iteration is presented and applied to the Arnoldi algorithm. Delayed CGS2 (DCGS2) employs the minimum number of global reductions per iteration (one) for a one-column-at-a-time algorithm. The main idea behind the new algorithm is to group global reductions by rearranging the order of operations. DCGS2 must be carefully integrated into an Arnoldi expansion or a GMRES solver. Numerical stability experiments assess robustness for Krylov–Schur eigenvalue computations. Performance experiments on the ORNL Summit supercomputer then establish the superiority of DCGS2 over CGS2.
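For reference, a minimal sketch of a single Arnoldi step orthogonalized with CGS2 (assumed notation, not the paper's code) makes the three global reductions explicit; DCGS2 rearranges these operations so that only one reduction per iteration remains:

```python
# Illustrative CGS2 step: each Q.T @ w product and the norm is a block dot
# product, i.e., one global reduction in a distributed-memory setting.
import numpy as np

def arnoldi_step_cgs2(A, Q):
    """Q is an n-by-j orthonormal basis; returns the next basis vector and
    the new column of the Hessenberg matrix."""
    w = A @ Q[:, -1]
    h = Q.T @ w               # reduction 1: first projection
    w = w - Q @ h
    c = Q.T @ w               # reduction 2: reorthogonalization
    w = w - Q @ c
    h = h + c
    beta = np.linalg.norm(w)  # reduction 3: normalization
    return w / beta, np.append(h, beta)
```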
Using a newly developed coupling of the ElectroMagnetic Plasma In Realistic Environments (EMPIRE) code with the Integrated Tiger Series (ITS) code, radiation environment calculations have been performed. The effort was completed as part of the Saturn Recapitalization (Recap) program that represents activities to upgrade and modernize the Saturn accelerator facility. The radiation environment calculations performed provide baseline results with current or planned hardware in the facility. As facility design changes are proposed and implemented as part of Saturn Recap, calculations of the radiation environment will be performed to understand how the changes impact the output of the Saturn accelerator.
Fractured media models contain discontinuities at multiple length scales (e.g., fracture lengths and apertures, wellbore area) that are small, spanning millimeter-scale fractures to centimeter-scale wellbores, in comparison to the extent of the field of interest, and they challenge conventional discretization methods by imposing extremely fine meshing and formidably large numerical cost. By utilizing recent developments in the finite element analysis of electromagnetics that allow material properties to be represented on a hierarchical geometry, this project develops computational capabilities to model fluid flow, heat conduction, transport, and induced polarization in large-scale geologic environments that possess geometrically complex fractures and man-made infrastructure without prohibitive computational cost. The computational efficiency and robustness of this multi-physics modeling tool are demonstrated by considering various highly realistic, complex geologic environments that are common in many energy and national security related engineering problems.
The goal of this workshop is to role-play and walk through various UAS incursion scenarios to: 1. Recognize the complex interactions between physical protection, response, and UAS technologies in a nuclear security event; 2. Identify potential regulatory and legal complications in dealing with UAS as aircraft; 3. Identify communication/coordination touch points with facility security and law enforcement; 4. Identify possible physical security and response strategies to help mitigate UAS impact.
Time-resolved X-ray thermometry is an enabling technology for measuring temperature and phase change of components. However, current diagnostic methods are limited due to the invasive nature of probes or the requirement of coatings and optical access to the component. Our proposed developments overcome these challenges by utilizing X-rays to directly measure the object's temperature. Variable-Temperature X-ray Diffraction (VT-XRD) was performed on several materials over a wide range of temperatures and diffraction angles to analyze the bulk diffraction patterns for temperature sensitivity. "High-speed" VT-XRD was then performed for a single material over a small range of diffraction angles to determine how quickly the experiments could be performed while still maintaining peaks large enough for analysis.
This report details model development, theory, and a literature review focusing on the emission of contaminants on solid substrates in fires. This is the final report from a 2-year Nuclear Safety Research and Development (NSRD) project. The work represents progress toward the goal of having modeling and simulation capabilities that are sufficiently mature and accurate that they can be utilized in place of physical tests for determining safe handling practices. At present, the guidelines for safety are largely empirically based, derived from a survey of existing datasets. This particular report details the development, verification, and calibration of a number of code improvements that have been implemented in the SIERRA suite of codes, and the application of those codes to three different experimental scenarios that have been the subject of prior tests. The first scenario involves a contaminated PMMA slab that is exposed to heat; the modeling involved a novel method for simulating the viscous diffusion of the particles in the slab. The second scenario involved a small pool fire of contaminated combustible liquid mimicking historical tests and finds that the release of contaminants depends strongly on the height of the liquid in the container. The third scenario involves the burning of a contaminated tray of shredded cellulose. A novel release mechanism was formulated based on the predicted progress of cellulose decomposition, and while the model was found to produce release that can be tuned to match the experiments, some modifications to the model are desirable to achieve quantitative accuracy.
Computed tomography (CT) resolution has become high enough to monitor morphological changes due to aging in materials in long-term applications. We explored the utility of the critic of a generative adversarial network (GAN) to automatically detect such changes. The GAN was trained with images of pristine Pharmatose, which is used as a surrogate energetic material. Images of the material with altered morphology were used only during the test phase. The GAN-generated images visually reproduced the microstructure of Pharmatose well, although some unrealistic particle fusion was seen. Calculated morphological metrics (volume fraction, interfacial line length, and local thickness) for the synthetic images also showed good agreement with the training data, albeit with signs of mode collapse in the interfacial line length. While the critic exposed changes in particle size, it showed limited ability to distinguish images by particle shape. The detection of shape differences was also a more challenging task for the selected morphological metrics related to energetic material performance. We further tested the critic with images of aged Pharmatose, in which the subtle changes due to aging are difficult for a human analyst to detect. Both the critic and the morphological metrics analysis differentiated these images.
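A minimal sketch of the detection step (hypothetical names; not the study's code) scores images with the trained critic and compares score distributions between pristine and altered or aged material:

```python
# Hypothetical sketch: a critic trained only on pristine microstructure
# assigns scores to CT images; a shift in the score distribution on new
# images flags morphological change.
import torch

@torch.no_grad()
def critic_scores(critic, images, batch_size=64):
    """Return per-image critic scores for an image tensor of shape (N, 1, H, W)."""
    scores = []
    for start in range(0, images.shape[0], batch_size):
        scores.append(critic(images[start:start + batch_size]).flatten())
    return torch.cat(scores)

# Example usage (threshold is an assumed tuning parameter):
# shift = critic_scores(critic, aged_images).mean() - critic_scores(critic, pristine_images).mean()
# change_detected = shift.abs() > threshold
```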