Developing a pressure system to extract pressure coefficients of electrical measurement standards
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Particle heat exchangers are a critical enabling technology for next-generation concentrating solar power (CSP) plants that use supercritical carbon dioxide (sCO2) as a working fluid. This report covers the design, manufacturing, and testing of a prototype particle-to-sCO2 heat exchanger targeting the thermal performance levels required to meet commercial-scale cost targets. In addition, the design and assembly of integrated particle and sCO2 flow loops for heat exchanger performance testing are detailed. The prototype heat exchanger was tested to particle inlet temperatures of 500 °C at 17 MPa, which resulted in overall heat transfer coefficients of approximately 300 W/m2-K at the design point and in high-approach-temperature cases, with peak values as high as 400 W/m2-K.
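As a hedged illustration of how an overall heat transfer coefficient of this kind is obtained, the following sketch backs U out of the rate equation Q = U·A·ΔT_lm using assumed, illustrative values for duty, area, and terminal temperatures (not the report's measured data):

```python
# Illustrative sketch (assumed values, not the report's data): back out an
# overall heat transfer coefficient U from a heat duty Q, heat-transfer
# area A, and the log-mean temperature difference of a counterflow exchanger.
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counterflow arrangement."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if math.isclose(dt1, dt2):
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

Q = 100e3            # heat duty, W (assumed)
A = 4.0              # heat-transfer area, m^2 (assumed)
dT_lm = lmtd(500.0, 380.0, 300.0, 420.0)   # terminal temperatures in deg C
U = Q / (A * dT_lm)  # overall heat transfer coefficient, W/m^2-K
print(f"LMTD = {dT_lm:.1f} K, U = {U:.0f} W/m^2-K")
```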
Abstract not provided.
Abstract not provided.
Structural health monitoring of an engineered component in a harsh environment is critical for multiple DOE missions, including the nuclear fuel cycle, subsurface energy production/storage, and energy conversion. Supported by a seed Laboratory Directed Research & Development (LDRD) project, we have explored a new concept for structural health monitoring that introduces a self-sensing capability into structural components. The concept is based on two recent technological advances: metamaterials and additive manufacturing. A self-sensing capability can be engineered by embedding a metastructure, for example, a sheet of electromagnetic resonators, either metallic or dielectric, into a material component. This embedment can now be realized using 3-D printing. The precise geometry of the embedded metastructure determines how the material interacts with an incident electromagnetic wave. Any change in the structure of the material (e.g., straining, degradation, etc.) would inevitably perturb the embedded metastructure or metasurface array and therefore alter the electromagnetic response of the material, resulting in a frequency shift of the reflection spectrum that can be detected passively and remotely. This new sensing approach eliminates the complicated environmental shielding, in-situ power supply, and wire routing that are generally required by existing active-circuit-based sensors. The work documented in this report has preliminarily demonstrated the feasibility of the proposed concept. The work has established the simulation tools and experimental capabilities needed for future studies.
The National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories New Mexico (SNL/NM) developed this Life Cycle Management Plan (LCMP) to document its process for executing, monitoring, controlling, and closing out Phase 3 of the Gen 3 Particle Pilot Plant (G3P3). This plan serves as a resource for stakeholders who wish to understand the project objectives and how they will be accomplished.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Solar Thermal Ammonia Production has the potential to synthesize ammonia in a green, renewable process that can greatly reduce the carbon footprint left by the conventional Haber-Bosch process. Ternary nitrides in the family A3BxN (A = Co, Ni, Fe; B = Mo; x = 2, 3) have been identified as potential candidates for NH3 production. Experiments with Co3Mo3N in the Ammonia Synthesis Reactor demonstrate cyclable NH3 production from the bulk nitride under pure H2. Production rates were fairly flat in all the reduction steps, with no evident dependence on the consumed solid-state nitrogen, as would be expected from a catalytic Mars-van Krevelen mechanism. The material can be re-nitrided under pure N2. Bulk nitrogen consumed per reduction step averages between 25% and 40% of the total solid-state nitrogen. Selectivity to NH3 stabilized at 55–60% per cycle. Production rates (NH3 and N2) become apparent above 600 °C at P(H2) = 0.5–2 bar. The optimal operating point that keeps selectivity high without compromising NH3 rates is currently estimated at 650 °C and 1.5–2 bar. The next steps are to optimize production rates, examine the effect of N2 addition on the NH3 synthesis reaction, and test additional ternary nitrides.
International safeguards currently rely on material accountancy to verify that declared nuclear material is present and unmodified. Although effective, material accountancy for large bulk facilities can be expensive to implement due to the high-precision instrumentation required to meet regulatory targets. Process monitoring has long been considered as a way to improve material accountancy; however, effective integration of process monitoring has met with mixed results. Given its large successes in other domains, machine learning may present a solution for process monitoring integration. Past work has shown that unsupervised approaches struggle due to measurement error. Although not studied in depth in a safeguards context, supervised approaches often generalize poorly to unseen classes of data (e.g., unseen material loss patterns). This work shows that engineered datasets, when used for training, can improve the generalization of supervised approaches. Further, the underlying models needed to generate these datasets need only accurately model certain high-importance features.
This report details model development, theory, and a literature review focusing on the emission of contaminants on solid substrates in fires. This is the final report from a 2-year Nuclear Safety Research and Development (NSRD) project. The work represents progress towards a goal of having modeling and simulation capabilities that are sufficiently mature and accurate that they can be utilized in place of physical tests for determining safe handling practices. At present, the guidelines for safety are largely empirically based, derived from a survey of existing datasets. This particular report details the development, verification, and calibration of a number of code improvements that have been implemented in the SIERRA suite of codes, and the application of those codes to three different experimental scenarios that have been the subject of prior tests. The first scenario involves a contaminated PMMA slab exposed to heat; the modeling involved a novel method for simulating the viscous diffusion of the particles in the slab. The second scenario involved a small pool fire of contaminated combustible liquid mimicking historical tests and finds that contaminant release depends strongly on the height of the liquid in the container. The third scenario involves the burning of a contaminated tray of shredded cellulose. A novel release mechanism was formulated based on the predicted progress of the decomposition of the cellulose, and while the model was found to result in release that can be tuned to match the experiments, some modifications to the model are desirable to achieve quantitative accuracy.
Abstract not provided.
This document is intended to be used with the Equipment Test Environment (ETE) being developed to provide a standard process by which the ETE can be validated. The ETE is being developed to support cyber intrusion testing and data collection and, through automation, to provide objective, repeatable test goals. This testing process is being developed to interface with the Technical Area V physical protection system. The document provides an overview of the testing structure, interfaces, device and network logging, and data capture. Additionally, it covers the testing procedure, criteria, and constraints necessary to properly capture data and logs and record them for experimental analysis.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Physical Review E
Due to significant computational expense, discrete element method simulations of jammed packings of size-dispersed spheres with size ratios greater than 1:10 have remained elusive, limiting the correspondence between simulations and real-world granular materials with large size dispersity. Invoking a recently developed neighbor binning algorithm, we generate mechanically stable jammed packings of frictionless spheres with power-law size distributions containing up to nearly 4 000 000 particles with size ratios up to 1:100. By systematically varying the width and exponent of the underlying power laws, we analyze the role of particle size distributions on the structure of jammed packings. The densest packings are obtained for size distributions that balance the relative abundance of large-large and small-small particle contacts. Although the proportion of rattler particles and mean coordination number strongly depend on the size distribution, the mean coordination of nonrattler particles attains the frictionless isostatic value of six in all cases. The size distribution of nonrattler particles that participate in the load-bearing network exhibits no dependence on the width of the total particle size distribution beyond a critical particle size for low-magnitude exponent power laws. This signifies that only particles with sizes greater than the critical particle size contribute to the mechanical stability. However, for high-magnitude exponent power laws, all particle sizes participate in the mechanical stability of the packing.
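As a minimal sketch, with an assumed exponent and assumed size bounds, of drawing particle radii from the kind of truncated power-law size distribution studied here (inverse-CDF sampling):

```python
# Minimal sketch (assumed parameters): draw particle radii from a truncated
# power-law distribution p(r) ~ r**(-beta) on [r_min, r_max] by inverse-CDF
# sampling, the type of size distribution underlying the packings above.
import numpy as np

def sample_powerlaw_radii(n, r_min, r_max, beta, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    if np.isclose(beta, 1.0):
        # Special case: p(r) ~ 1/r has a logarithmic CDF.
        return r_min * (r_max / r_min) ** u
    a = 1.0 - beta
    return (r_min**a + u * (r_max**a - r_min**a)) ** (1.0 / a)

radii = sample_powerlaw_radii(100_000, r_min=0.01, r_max=1.0, beta=2.5, seed=0)
print(f"size ratio = 1:{radii.max() / radii.min():.0f}, "
      f"mean radius = {radii.mean():.4f}")
```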
Abstract not provided.
Abstract not provided.
Using a newly developed coupling of the ElectroMagnetic Plasma In Realistic Environments (EMPIRE) code with the Integrated Tiger Series (ITS) code, radiation environment calculations have been performed. The effort was completed as part of the Saturn Recapitalization (Recap) program that represents activities to upgrade and modernize the Saturn accelerator facility. The radiation environment calculations performed provide baseline results with current or planned hardware in the facility. As facility design changes are proposed and implemented as part of Saturn Recap, calculations of the radiation environment will be performed to understand how the changes impact the output of the Saturn accelerator.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Gaining a proper understanding of how Earth structure and other near-source properties affect estimates of explosion yield is important to the nonproliferation mission. The yields of explosion sources are often based on seismic moment or waveform amplitudes. Quantifying how the seismic waveforms, or estimates of the source characteristics derived from those waveforms, are influenced by natural or man-made structures within the near-source region, where the wavefield behaves nonlinearly, is required to understand the full range of uncertainty in those yield estimates. We simulate tamped chemical explosions using a nonlinear, shock physics code and couple the ground motions beyond the elastic radius to a linear elastic, full waveform seismic simulation algorithm through 3D media. In order to isolate the effects of simple small-scale 3D structures on the seismic wavefield and linear seismic source estimates, we embed spheres and cylinders close to the fully-tamped source location within an otherwise homogeneous half-space. The 3 m diameter spheres, given their small size compared to the predominant wavelengths investigated, not surprisingly are virtually invisible, with only negligible perturbations to the far-field waveforms and resultant seismic source time functions. Similarly, the 11 m diameter basalt sphere has a larger, but still relatively minor, impact on the wavefield. However, the 11 m diameter air-filled sphere has the largest impact on both waveforms and the estimated seismic moment of any of the investigated cases, with a reduction of ~25% compared to the tamped moment. This significant reduction is likely due in large part to the cavity collapsing from the shock instead of being solely due to diffraction effects. Although the cylinders have the same diameters as the 3 m spheres, their length of interaction with the wavefield produces noticeable changes to the seismic waveforms and estimated source terms, with reductions in the peak seismic moment on the order of 10%. Both the cylinders and 11 m diameter spheres generate strong shear waves that appear to emanate from body force sources.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Time-resolved X-ray thermometry is an enabling technology for measuring the temperature and phase change of components. However, current diagnostic methods are limited by the invasive nature of probes or by the need for coatings and optical access to the component. Our proposed developments overcome these challenges by using X-rays to directly measure the object's temperature. Variable-Temperature X-ray Diffraction (VT-XRD) was performed on several materials over a wide range of temperatures and diffraction angles to analyze the patterns of the bulk materials for sensitivity. "High-speed" VT-XRD was then performed for a single material over a small range of diffraction angles to determine how quickly the experiments could be performed while still maintaining peaks large enough for analysis.
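A worked illustration, under an assumed linear thermal-expansion model with illustrative material constants, of the physical basis for X-ray thermometry: the Bragg peak position shifts as the lattice spacing expands with temperature.

```python
# Worked illustration (assumed values, not measured data): how a Bragg peak
# shifts with temperature through thermal expansion of the lattice spacing,
# the basis for inferring temperature from a diffraction pattern.
import math

lam = 1.5406e-10      # Cu K-alpha wavelength, m (common lab source)
d0 = 2.0e-10          # reference lattice spacing at T0, m (assumed)
alpha = 1.2e-5        # linear thermal-expansion coefficient, 1/K (assumed)
T0 = 25.0             # reference temperature, deg C

def two_theta_deg(d):
    """Bragg's law, first order: lambda = 2 d sin(theta)."""
    return 2.0 * math.degrees(math.asin(lam / (2.0 * d)))

for T in (25.0, 300.0, 600.0):
    d = d0 * (1.0 + alpha * (T - T0))  # linear expansion of lattice spacing
    print(f"T = {T:6.1f} C  ->  2theta = {two_theta_deg(d):.3f} deg")
```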
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single-layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single-layer metasurfaces of size 100λ x 100λ with aperture density λ^-2 achieve ~96% testing accuracy on the MNIST dataset for an optimized distance of ~100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple-layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.
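A conceptual sketch, on toy data rather than the paper's metasurface model, of the role of the photodetection nonlinearity: a fixed linear (complex) optical map followed by intensity detection |Wx|^2 yields a nonlinear feature map that a simple linear readout can then classify.

```python
# Conceptual sketch (toy data, assumed random "optics"): a fixed complex
# matrix W stands in for linear free-space propagation; the detector measures
# intensity |W x|^2, a nonlinear feature map despite the linear optics.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data that is not linearly separable in the input space.
n, d = 400, 2
X = rng.normal(size=(n, d))
y = (np.linalg.norm(X, axis=1) > 1.2).astype(int)   # ring vs. core

# "Optics": fixed random complex linear map; "detector": intensity readout.
m = 64
W = (rng.normal(size=(m, d)) + 1j * rng.normal(size=(m, d))) / np.sqrt(d)
features = np.abs(X @ W.T) ** 2                      # |Wx|^2 per detector

# Simple linear least-squares readout trained on the intensity features.
A = np.hstack([features, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
pred = (A @ w > 0).astype(int)
print(f"training accuracy on intensity features: {(pred == y).mean():.2f}")
```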
Abstract not provided.
Abstract not provided.
In this work we present a novel method for improving the high-temperature performance of silicon photomultipliers (SiPMs) via focused ion beam (FIB) modification of individual microcells. The literature suggests that most of the dark count rate (DCR) in a SiPM is contributed by a small percentage (<5%) of microcells. By using a FIB to electrically deactivate this relatively small number of microcells, we believe we can greatly reduce the overall DCR of the SiPM at the expense of a small reduction in overall photodetection efficiency, thereby improving its high temperature performance. In this report we describe our methods for characterizing the SiPM to determine which individual microcells contribute the most to the DCR, preparing the SiPM for FIB, and modifying the SiPM using the FIB to deactivate the identified microcells.
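A back-of-the-envelope sketch, using an assumed heavy-tailed distribution of per-microcell dark count rates rather than measured data, of the trade described above: a large DCR reduction in exchange for a small photodetection-efficiency penalty.

```python
# Back-of-the-envelope sketch (assumed dark-count statistics, not measured
# data): if a small fraction of microcells dominates the dark count rate,
# deactivating them removes most of the DCR at a small efficiency cost.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 10_000

# Assumed heavy-tailed per-cell dark count rates (counts per second).
dcr = rng.lognormal(mean=2.0, sigma=2.5, size=n_cells)

k = int(0.05 * n_cells)                  # deactivate the worst 5% of cells
worst = np.argsort(dcr)[-k:]
dcr_removed = dcr[worst].sum() / dcr.sum()

print(f"total DCR removed by disabling 5% of cells: {dcr_removed:.1%}")
print(f"approximate photodetection-efficiency penalty: {k / n_cells:.1%}")
```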
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The composition and phase fraction of the intergranular phase of 94ND10 ceramic are determined, and the intergranular material is fabricated ex situ. The fraction of each phase is 85.96 vol% Al2O3 bulk phase, 9.46 vol% Mg-rich intergranular phase, 4.36 vol% Ca/Si-rich intergranular phase, and 0.22 vol% voids. The Ca/Si-rich phase consists of 0.628 at% Mg, 12.59 at% Si, 10.24 at% Ca, 17.23 at% Al, and balance O. The Mg-rich phase consists of 14.17 at% Mg, 0.066 at% Si, 0.047 at% Ca, 28.69 at% Al, and balance O. XRD of the ex situ intergranular material made from mixed oxides, based on the above phase and element fractions, yielded 92 vol% MgAl2O4 phase and 8 vol% CaAl2Si2O8 phase. The formation of the MgAl2O4 phase is consistent with prior XRD of 94ND10, while the CaAl2Si2O8 phase may exist in 94ND10 but at a concentration not readily detected with XRD. The MgAl2O4 and CaAl2Si2O8 phases determined from XRD are expected to match the elemental compositions of the Mg-rich and Ca/Si-rich phases above, with differences attributable to cation substitutions (e.g., some Mg substituted for by Ca in the Mg-rich phase) and to impurity phases not detectable with XRD.
Abstract not provided.
The purpose of our report is to discuss the notion of entropy and its relationship with statistics. Our goal is to provide a way to think about entropy, its central role within information theory, and its relationship with statistics. We review various relationships between information theory and statistics; nearly all are well known but unfortunately are often not recognized. Entropy quantifies the "average amount of surprise" in a random variable and lies at the heart of information theory, which studies the transmission, processing, extraction, and utilization of information. For us, data is information. What is the distinction between information theory and statistics? Information theorists work with probability distributions; statisticians, by contrast, work with samples. In so many words, information theory applied to samples is the practice of statistics.
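A small illustration of the distinction drawn here: entropy is defined on a probability distribution, while a statistician estimates it from samples (below with a simple plug-in estimator; the distribution and sample size are assumed for the example).

```python
# Shannon entropy of a known distribution versus a plug-in estimate of the
# same quantity computed from samples, illustrating the distribution/sample
# distinction between information theory and statistics.
import numpy as np

def entropy_bits(p):
    """Shannon entropy of a discrete distribution, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p = np.array([0.5, 0.25, 0.125, 0.125])           # known distribution
print(f"H(p) = {entropy_bits(p):.3f} bits")        # exactly 1.75 bits

rng = np.random.default_rng(0)
samples = rng.choice(len(p), size=10_000, p=p)     # what a statistician sees
p_hat = np.bincount(samples, minlength=len(p)) / samples.size
print(f"plug-in estimate from samples = {entropy_bits(p_hat):.3f} bits")
```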
Abstract not provided.
Abstract not provided.
Optimal mitigation planning for highly disruptive contingencies to a transmission-level power system requires optimization with dynamic power system constraints, due to the key role of dynamics in system stability to major perturbations. We formulate a generalized disjunctive program to determine optimal grid component hardening choices for protecting against major failures, with differential algebraic constraints representing system dynamics (specifically, differential equations representing generator and load behavior and algebraic equations representing instantaneous power balance over the transmission system). We optionally allow stochastic optimal pre-positioning across all considered failure scenarios, and optimal emergency control within each scenario. This novel formulation allows, for the first time, analyzing the resilience interdependencies of mitigation planning, preventive control, and emergency control. Using all three strategies in concert is particularly effective at maintaining robust power system operation under severe contingencies, as we demonstrate on the Western System Coordinating Council (WSCC) 9-bus test system using synthetic multi-device outage scenarios. Towards integrating our modeling framework with real threats and more realistic power systems, we explore applying hybrid dynamics to power systems. Our work is applied to basic RL circuits with the ultimate goal of using the methodology to model protective tripping schemes in the grid. Finally, we survey mitigation techniques for HEMP threats and describe a GIS application developed to create threat scenarios in a grid with geographic detail.
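A minimal sketch of the hybrid RL-circuit setting mentioned above, with assumed parameter values and an assumed fault-like source step, showing how a continuous flow plus a discrete protective trip can be simulated together:

```python
# Minimal sketch (assumed parameters) of a hybrid RL-circuit model:
# continuous dynamics L di/dt = V - R i, plus a discrete protective trip
# that opens the circuit when the current exceeds a threshold.
import numpy as np

R, L, V = 0.5, 0.1, 10.0       # ohms, henries, volts (assumed)
i_trip = 15.0                   # protective trip threshold, amperes (assumed)
dt, t_end = 1e-4, 0.5

i, closed = 0.0, True
history = []
for step in range(int(t_end / dt)):
    t = step * dt
    v_src = V if t < 0.2 else 3.0 * V        # assumed fault-like overvoltage
    if closed:
        i += dt * (v_src - R * i) / L        # forward-Euler continuous flow
        if i > i_trip:                       # discrete event: breaker trips
            closed = False
    else:
        i = 0.0                              # circuit open after trip
    history.append((t, i, closed))

trip_times = [t for t, _, c in history if not c]
print(f"breaker tripped at t = {trip_times[0]:.3f} s" if trip_times
      else "no trip")
```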
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report summarizes the needs, challenges, and opportunities associated with carbon-free energy and energy storage for manufacturing and industrial decarbonization. Energy needs and challenges for different manufacturing and industrial sectors (e.g., cement/steel production, chemicals, materials synthesis) are identified. Key issues for industry include the need for large, continuous on-site capacity (tens to hundreds of megawatts), compatibility with existing infrastructure, cost, and safety. Energy storage technologies that can potentially address these needs, which include electrochemical, thermal, and chemical energy storage, are presented along with key challenges, gaps, and integration issues. Analysis tools to value energy storage technologies in the context of manufacturing and industrial decarbonization are also presented. Material is drawn from the Energy Storage for Manufacturing and Industrial Decarbonization (Energy StorM) Workshop, held February 8-9, 2022. The workshop's objective was to identify research opportunities and needs for the U.S. Department of Energy as part of its Energy Storage Grand Challenge program.
Abstract not provided.
IEEE Transactions on Device and Materials Reliability
This paper describes a new non-charge-based data storing technique in NAND flash memory, called watermark, that encodes read-only data in the form of physical properties of flash memory cells. Unlike the traditional charge-based data storing method in flash memory, the proposed technique is resistant to total ionizing dose (TID) effects. To evaluate its resistance to irradiation effects, we analyze data stored in several commercial single-level-cell (SLC) flash memory chips from different vendors and technology nodes. These chips are irradiated using a Co-60 gamma-ray source array for up to 100 krad(Si) at Sandia National Laboratories. Experimental evaluation performed on a flash chip from Samsung shows that the intrinsic bit error rate (BER) of the watermark increases from ~0.8% for TID = 0 krad(Si) to ~1% for TID = 100 krad(Si). Conversely, the BER of charge-based data stored on the same chip increases from 0% at TID = 0 krad(Si) to 1.5% at TID = 100 krad(Si). The results imply that the proposed technique may potentially offer significant improvements in data integrity relative to traditional charge-based data storage for very high radiation (TID > 100 krad(Si)) environments. These gains in data integrity relative to charge-based data storage are useful in radiation-prone environments, but they come at the cost of increased write times and higher BERs before irradiation.
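For reference, the bit error rate reported above is the fraction of read-back bits that differ from the originally programmed bits; a minimal sketch with synthetically injected errors (the 1% flip rate is an assumption for illustration, not measured data):

```python
# Bit error rate: fraction of read-back bits differing from the written bits.
import numpy as np

def bit_error_rate(written, read):
    written = np.asarray(written, dtype=np.uint8)
    read = np.asarray(read, dtype=np.uint8)
    return float(np.count_nonzero(written != read)) / written.size

rng = np.random.default_rng(0)
written = rng.integers(0, 2, size=1_000_000, dtype=np.uint8)
# Assumed error injection: flip ~1% of bits to mimic a post-irradiation read.
flips = rng.random(written.size) < 0.01
read = written ^ flips.astype(np.uint8)

print(f"BER = {bit_error_rate(written, read):.4%}")
```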
Abstract not provided.
Accurate prediction of the ductile behavior of structural alloys up to and including failure is essential in component or system failure assessment, which is necessary for the nuclear weapons alteration and life extension programs of Sandia National Laboratories. Modeling such behavior requires computational capabilities to robustly capture strong nonlinearities (geometric and material), rate-dependent and temperature-dependent properties, and ductile failure mechanisms. This study's objective is to validate numerical simulations of a high-deformation crush of a stainless steel can. The process consists of identifying a suitable can geometry and loading conditions, conducting the laboratory testing, developing a high-quality Sierra/SM simulation, and then drawing comparisons between model and measurement to assess the fitness of the simulation with regard to the material model (plasticity), finite element model construction, and failure model. Following previous material model calibration, a J2 plasticity model with a microstructural BCJ failure model is employed to model the test specimen made of 304L stainless steel. Simulated results are verified and validated through mesh and mass-scaling convergence studies, parameter sensitivity studies, and a comparison to experimental data. The converged mesh and degree of mass scaling are a mesh discretization with 140,372 elements and a mass scaling with a target time increment of 1.0e-6 seconds and a time step scale factor of 0.5, respectively. Results from the coupled thermal-mechanical explicit dynamic analysis are comparable to the experimental data. The simulated global force vs. displacement (F/D) response predicts key points such as yield, ultimate, and kinks of the experimental F/D response. Furthermore, the final deformed shape of the can and field data predicted from the analysis are similar to those of the deformed can, as measured by 3D optical CMM scans and DIC data from the experiment.
I started my internship in January 2022, but the research on measuring dispersion and loss of 355 nm light from a silicon oxide waveguide, which is the focus of this paper, began in August 2022. The motivation of this project is to determine whether it is possible to use pulsed 355 nm light in an integrated waveguide within an ion trap chip. To begin the project, light from the 355 nm Coherent Paladin laser was coupled into a fiber, referred to as the "source fiber." After coupling into this fiber, loss and dispersion measurements could be performed, as the fiber was used to deliver light to each of the experiments, which are covered in detail in the following paragraphs.
The U.S. Strategic Petroleum Reserve (SPR) is a crude oil storage system administered by the U.S. Department of Energy. The reserve consists of 60 active storage caverns located in underground salt domes spread across four sites in Louisiana and Texas, near the Gulf of Mexico. Beginning in 2016, the SPR started executing Congressionally mandated oil sales. The configuration of the reserve, with a total capacity of greater than 700 million barrels (MMB), requires that unsaturated water (referred to herein as "raw" water) is injected into the storage caverns to displace oil for sales, exchanges, and drawdowns. As such, oil sales will produce cavern growth to the extent that raw water contacts the salt cavern walls and dissolves (leaches) the surrounding salt before reaching brine saturation. SPR injected a total of over 45 MMB of raw water into twenty-six caverns as part of oil sales in CY21. Leaching effects were monitored in these caverns to understand how the sales operations may impact the long-term integrity of the caverns. While frequent sonars are the most direct means to monitor changes in cavern shape, they can be resource intensive for the number of caverns involved in sales and exchanges. An intermediate option is to model the leaching effects and see if any concerning features develop. The leaching effects were modeled here using the Sandia Solution Mining Code, SANSMIC. The modeling results indicate that leaching-induced features do not raise concern for the majority of the caverns, 15 of 26. Eleven caverns, BH-107, BH-110, BH-112, BH-113, BM-109, WH-11, WH-112, WH-114, BC-17, BC-18, and BC-19, have features that may grow with additional leaching and should be monitored as leaching continues in those caverns. Additionally, BH-114, BM-4, and BM-106 were identified in previous leaching reports for recommendation of monitoring. Nine caverns had pre- and post-leach sonars that were compared with SANSMIC results. Overall, SANSMIC was able to capture the leaching well. A deviation in the SANSMIC and sonar cavern shapes was observed near the cavern floor in caverns with significant floor rise, a process not captured by SANSMIC. These results validate that SANSMIC continues to serve as a useful tool for monitoring changes in cavern shape due to leaching effects related to sales and exchanges.
Parallel Computing
The parallel strong-scaling of iterative methods is often determined by the number of global reductions at each iteration. Low-synch Gram–Schmidt algorithms are applied here to the Arnoldi algorithm to reduce the number of global reductions and therefore to improve the parallel strong-scaling of iterative solvers for nonsymmetric matrices such as the GMRES and the Krylov–Schur iterative methods. In the Arnoldi context, the QR factorization is “left-looking” and processes one column at a time. Among the methods for generating an orthogonal basis for the Arnoldi algorithm, the classical Gram–Schmidt algorithm, with reorthogonalization (CGS2) requires three global reductions per iteration. A new variant of CGS2 that requires only one reduction per iteration is presented and applied to the Arnoldi algorithm. Delayed CGS2 (DCGS2) employs the minimum number of global reductions per iteration (one) for a one-column at-a-time algorithm. The main idea behind the new algorithm is to group global reductions by rearranging the order of operations. DCGS2 must be carefully integrated into an Arnoldi expansion or a GMRES solver. Numerical stability experiments assess robustness for Krylov–Schur eigenvalue computations. Performance experiments on the ORNL Summit supercomputer then establish the superiority of DCGS2 over CGS2.
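As a rough illustration of where the global reductions arise, the following sketch (plain NumPy with an assumed random test matrix; the DCGS2 reordering itself is not reproduced here) performs one CGS2 orthogonalization step of an Arnoldi expansion and annotates the three inner-product stages that would each require a global reduction in a distributed setting:

```python
# One classical Gram-Schmidt step with reorthogonalization (CGS2) inside an
# Arnoldi iteration, annotated with where the global reductions occur.
import numpy as np

def arnoldi_cgs2_step(A, Q, j):
    """Orthogonalize A @ Q[:, j] against Q[:, :j+1] using CGS2."""
    w = A @ Q[:, j]
    h = Q[:, : j + 1].T @ w          # global reduction 1: projection
    w = w - Q[:, : j + 1] @ h
    c = Q[:, : j + 1].T @ w          # global reduction 2: reorthogonalization
    w = w - Q[:, : j + 1] @ c
    h = h + c
    beta = np.linalg.norm(w)         # global reduction 3: normalization
    return w / beta, np.append(h, beta)

rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=(n, n))          # assumed nonsymmetric test matrix
Q = np.zeros((n, 2))
q0 = rng.normal(size=n)
Q[:, 0] = q0 / np.linalg.norm(q0)
Q[:, 1], h_col = arnoldi_cgs2_step(A, Q, 0)
print("orthogonality |q1 . q0| =", abs(Q[:, 1] @ Q[:, 0]))
```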
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Quantifying the sensitivity (how a quantity of interest, or QoI, varies with respect to a parameter) and the response (the representation of a QoI as a function of a parameter) of a computer model of a parametric dynamical system is an important and challenging problem. Traditional methods fail in this context since sensitive dependence on initial conditions implies that the sensitivity and response of a QoI may be ill-conditioned or not well-defined. If a chaotic model has an ergodic attractor, then ergodic averages of QoIs are well-defined quantities and their sensitivity can be used to characterize model sensitivity. The response theorem gives sufficient conditions such that the local forward sensitivity (the derivative with respect to a given parameter) of an ergodic average of a QoI is well-defined. We describe a method based on ergodic and response theory for computing the sensitivity and response of a given QoI with respect to a given parameter in a chaotic model with an ergodic and hyperbolic attractor. This method does not require computation of ensembles of the model with perturbed parameter values. The method is demonstrated and some of the computations are validated on the Lorenz 63 and Lorenz 96 models.
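As a minimal illustration of the kind of quantity whose sensitivity is analyzed, the following sketch (standard Lorenz 63 parameters assumed; the sensitivity and response computation itself is not reproduced) approximates an ergodic average of the QoI z(t) by a long time integration:

```python
# Ergodic average of z(t) for the Lorenz 63 system at standard parameters,
# computed by discarding a transient and time-averaging a long trajectory.
import numpy as np

def lorenz63_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def ergodic_average_z(rho=28.0, dt=1e-3, t_spinup=10.0, t_avg=100.0):
    state = np.array([1.0, 1.0, 1.0])

    def step(s):
        # Classical fourth-order Runge-Kutta step.
        k1 = lorenz63_rhs(s, rho=rho)
        k2 = lorenz63_rhs(s + 0.5 * dt * k1, rho=rho)
        k3 = lorenz63_rhs(s + 0.5 * dt * k2, rho=rho)
        k4 = lorenz63_rhs(s + dt * k3, rho=rho)
        return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    for _ in range(int(t_spinup / dt)):      # discard the transient
        state = step(state)
    total, n = 0.0, int(t_avg / dt)
    for _ in range(n):
        state = step(state)
        total += state[2]
    return total / n

print(f"ergodic average of z: {ergodic_average_z():.2f}")  # roughly 23-24
```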
Abstract not provided.