Particle heat exchangers are a critical enabling technology for next-generation concentrating solar power (CSP) plants that use supercritical carbon dioxide (sCO2) as a working fluid. This report covers the design, manufacturing, and testing of a prototype particle-to-sCO2 heat exchanger targeting the thermal performance levels required to meet commercial-scale cost targets. In addition, the design and assembly of integrated particle and sCO2 flow loops for heat exchanger performance testing are detailed. The prototype heat exchanger was tested at particle inlet temperatures up to 500 °C and pressures of 17 MPa, which resulted in overall heat transfer coefficients of approximately 300 W/m2-K at the design point, with peak values as high as 400 W/m2-K in high-approach-temperature cases.
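As a minimal sketch of how an overall heat transfer coefficient like those quoted above is backed out from test data, the measured duty can be divided by the surface area and the log-mean temperature difference (LMTD). The duty, area, and terminal temperature differences below are illustrative assumptions, not values from the test campaign.

```python
# Hedged sketch: backing out an overall heat transfer coefficient U from a
# measured heat duty using the log-mean temperature difference (LMTD).
# All numeric inputs are illustrative assumptions, not test data.
import math

def overall_htc(q_w, area_m2, dt_hot_end, dt_cold_end):
    """U = Q / (A * LMTD) for a counter-flow particle-to-sCO2 exchanger."""
    lmtd = (dt_hot_end - dt_cold_end) / math.log(dt_hot_end / dt_cold_end)
    return q_w / (area_m2 * lmtd)

# Assumed example: 100 kW duty, 10 m^2 surface, 40 °C and 25 °C terminal
# temperature differences, giving a U in the range reported above.
u = overall_htc(100e3, 10.0, 40.0, 25.0)
print(round(u))  # → 313 (W/m2-K)
```

This is the standard LMTD rating relation; in practice the sCO2-side property variation near the critical point would motivate a discretized (segment-by-segment) analysis instead.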
Gaining a proper understanding of how Earth structure and other near-source properties affect estimates of explosion yield is important to the nonproliferation mission. The yields of explosion sources are often based on seismic moment or waveform amplitudes. Quantifying how the seismic waveforms, or estimates of the source characteristics derived from those waveforms, are influenced by natural or man-made structures within the near-source region, where the wavefield behaves nonlinearly, is required to understand the full range of uncertainty in those yield estimates. We simulate tamped chemical explosions using a nonlinear, shock physics code and couple the ground motions beyond the elastic radius to a linear elastic, full waveform seismic simulation algorithm through 3D media. In order to isolate the effects of simple small-scale 3D structures on the seismic wavefield and linear seismic source estimates, we embed spheres and cylinders close to the fully tamped source location within an otherwise homogeneous half-space. The 3 m diameter spheres, given their small size compared to the predominant wavelengths investigated, are, not surprisingly, virtually invisible, with only negligible perturbations to the far-field waveforms and resultant seismic source time functions. Similarly, the 11 m diameter basalt sphere has a larger, but still relatively minor, impact on the wavefield. However, the 11 m diameter air-filled sphere has the largest impact on both waveforms and the estimated seismic moment of any of the investigated cases, with a reduction of ~25% compared to the tamped moment. This significant reduction is likely due in large part to the cavity collapsing from the shock rather than being solely due to diffraction effects.
Although the cylinders have the same diameters as the 3 m spheres, their length of interaction with the wavefield produces noticeable changes to the seismic waveforms and estimated source terms with reductions in the peak seismic moment on the order of 10%. Both the cylinders and 11 m diameter spheres generate strong shear waves that appear to emanate from body force sources.
This document provides very basic background information and initial enabling guidance for computational analysts to develop and utilize GitOps practices within the Common Engineering Environment (CEE) and High Performance Computing (HPC) computational environment at Sandia National Laboratories through GitLab/Jacamar runner based workflows.
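A GitOps workflow of the kind described above is typically expressed as a pipeline definition stored alongside the code. The following is a hypothetical minimal `.gitlab-ci.yml` sketch for a Jacamar-backed runner; the runner tag, module name, and scheduler parameters are site-specific assumptions, not actual CEE/HPC configuration.

```yaml
# Hypothetical minimal .gitlab-ci.yml for a GitLab pipeline dispatched through
# a Jacamar batch executor on an HPC cluster. Tag names, modules, and
# SCHEDULER_PARAMETERS are illustrative assumptions, not site configuration.
stages:
  - build
  - test

build-job:
  stage: build
  tags: [hpc-batch]          # assumed tag routing the job to a Jacamar runner
  variables:
    SCHEDULER_PARAMETERS: "--nodes=1 --time=00:30:00"  # forwarded to the scheduler
  script:
    - module load cmake      # typical HPC environment-module usage
    - cmake -B build && cmake --build build

test-job:
  stage: test
  tags: [hpc-batch]
  script:
    - ctest --test-dir build
```

Because the pipeline definition is versioned in the repository itself, changes to the computational workflow are reviewed, merged, and reproduced the same way as changes to the code, which is the core of the GitOps practice the document describes.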
This document is intended to be utilized with the Equipment Test Environment (ETE) being developed, to provide a standard process by which the ETE can be validated. The ETE is developed with the intent of establishing cyber-intrusion testing and data collection and, through automation, providing objective, repeatable test goals. This testing process is being developed to interface with the Technical Area V physical protection system. The document overviews the testing structure, interfaces, device and network logging, and data capture. Additionally, it covers the testing procedure, criteria, and constraints necessary to properly capture data and logs and record them for experimental data capture and analysis.
Aero-optics refers to optical distortions due to index-of-refraction gradients that are induced by aerodynamic density gradients. At hypersonic flow conditions, the bulk velocity is many times the speed of sound, and density gradients may originate from shock waves, compressible turbulent structures, acoustic waves, thermal variations, etc. Due to the combination of these factors, aero-optic distortions are expected to differ from those common to subsonic and lower supersonic speeds. This report summarizes the results from a 2019-2022 Laboratory Directed Research and Development (LDRD) project led by Sandia National Laboratories in collaboration with the University of Notre Dame, New Mexico State University, and the Georgia Institute of Technology. Efforts extended experimental and simulation methodologies for the study of turbulent hypersonic boundary layers. Notable experimental advancements include development of spectral de-aliasing techniques for high-speed wavefront measurements, a Spatially Selective Wavefront Sensor (SSWFS) technique, new experimental data at Mach 8 and 14, a Quadrature Fringe Imaging Interferometer (QFII) technique for time-resolved index-of-refraction measurements, and application of QFII to shock-heated air. At the same time, model advancements include aero-optic analysis of several Direct Numerical Simulation (DNS) datasets from Mach 0.5 to 14 and development of wall-modeled Large Eddy Simulation (LES) techniques for aero-optic predictions. At Mach 8, measured and predicted root mean square Optical Path Differences agree within confidence bounds but are higher than semi-empirical trends extrapolated from lower Mach conditions. Overall, results show that aero-optic effects in the hypersonic flow regime are not simple extensions from prior knowledge at lower speeds and instead reflect the added complexity of compressible hypersonic flow physics.
The parallel strong-scaling of iterative methods is often determined by the number of global reductions at each iteration. Low-synch Gram–Schmidt algorithms are applied here to the Arnoldi algorithm to reduce the number of global reductions and therefore to improve the parallel strong-scaling of iterative solvers for nonsymmetric matrices such as the GMRES and Krylov–Schur iterative methods. In the Arnoldi context, the QR factorization is “left-looking” and processes one column at a time. Among the methods for generating an orthogonal basis for the Arnoldi algorithm, the classical Gram–Schmidt algorithm with reorthogonalization (CGS2) requires three global reductions per iteration. A new variant of CGS2 that requires only one reduction per iteration is presented and applied to the Arnoldi algorithm. Delayed CGS2 (DCGS2) employs the minimum number of global reductions per iteration (one) for a one-column-at-a-time algorithm. The main idea behind the new algorithm is to group global reductions by rearranging the order of operations. DCGS2 must be carefully integrated into an Arnoldi expansion or a GMRES solver. Numerical stability experiments assess robustness for Krylov–Schur eigenvalue computations. Performance experiments on the ORNL Summit supercomputer then establish the superiority of DCGS2 over CGS2.
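To make the reduction count concrete, one left-looking CGS2 step can be sketched as follows. In a distributed setting, each multiplication by the transposed basis and the final norm each require one global reduction, which is where the three reductions per iteration cited above come from. This is an illustrative serial sketch, not the DCGS2 implementation from the paper.

```python
# Minimal NumPy sketch of one left-looking CGS2 step (classical Gram-Schmidt
# with reorthogonalization), as used in an Arnoldi expansion. Each Q.T @ w
# product and the final norm would each be one global reduction in parallel,
# hence three reductions per column for CGS2.
import numpy as np

def cgs2_step(Q, w):
    """Orthogonalize w against the columns of Q; returns (q, h, beta) with
    w = Q @ h + beta * q and Q.T @ q ~ 0."""
    h1 = Q.T @ w               # reduction 1: first projection
    w = w - Q @ h1
    h2 = Q.T @ w               # reduction 2: reorthogonalization pass
    w = w - Q @ h2
    beta = np.linalg.norm(w)   # reduction 3: normalization
    return w / beta, h1 + h2, beta

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((8, 3)))   # orthonormal basis
q, h, beta = cgs2_step(Q, rng.standard_normal(8))
print(np.allclose(Q.T @ q, 0))  # → True: new vector is orthogonal to the basis
```

DCGS2 reaches one reduction per column by delaying the normalization and reorthogonalization of column k so their inner products can be batched with the projection for column k+1; the grouping itself is the paper's contribution and is not reproduced here.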
This report details model development, theory, and a literature review focusing on the emission of contaminants from solid substrates in fires. This is the final report from a 2-year Nuclear Safety Research and Development (NSRD) project. The work represents progress towards a goal of having modeling and simulation capabilities that are sufficiently mature and accurate that they can be utilized in place of physical tests for determining safe handling practices. At present, the guidelines for safety are largely empirically based, derived from a survey of existing datasets. This particular report details the development, verification, and calibration of a number of code improvements that have been implemented in the SIERRA suite of codes, and the application of those codes to three different experimental scenarios that have been the subject of prior tests. The first scenario involves a contaminated PMMA slab exposed to heat. The modeling involved a novel method for simulating the viscous diffusion of the particles in the slab. The second scenario involved a small pool fire of contaminated combustible liquid mimicking historical tests and found that the release of contaminants depends strongly on the height of the liquid in the container. The third scenario involves the burning of a contaminated tray of shredded cellulose. A novel release mechanism was formulated based on the predicted progress of the decomposition of the cellulose, and while the model was found to produce release rates that can be tuned to match the experiments, some modifications to the model are desirable to achieve quantitative accuracy.
The methyl radical plays a central role in plasma-assisted hydrocarbon chemistry but is challenging to detect due to its high reactivity and strongly pre-dissociative electronically excited states. We report the development of a photo-fragmentation laser-induced fluorescence (PF-LIF) diagnostic for quantitative 2D imaging of methyl profiles in a plasma. This technique provides temporally and spatially resolved measurements of local methyl distributions, including in near-surface regions that are important for plasma-surface interactions such as plasma-assisted catalysis. The technique relies on photo-dissociation of methyl by the fifth harmonic of a Nd:YAG laser at 212.8 nm to produce CH fragments. These photofragments are then detected with LIF imaging by exciting a transition in the B-X(0, 0) band of CH with a second laser at 390 nm. Fluorescence from the overlapping A-X(0, 0), A-X(1, 1), and B-X(0, 1) bands of CH is detected near 430 nm with the A-state populated by collisional B-A electronic energy transfer. This non-resonant detection scheme enables interrogation close to a surface. The PF-LIF diagnostic is calibrated by producing a known amount of methyl through photo-dissociation of acetone vapor in a calibration gas mixture. We demonstrate PF-LIF imaging of methyl production in methane-containing nanosecond pulsed plasmas impinging on dielectric surfaces. Absolute calibration of the diagnostic is demonstrated in a diffuse, plane-to-plane discharge. Measured profiles show a relatively uniform distribution of up to 30 ppm of methyl. Relative methyl measurements in a filamentary plane-to-plane discharge and a plasma jet reveal highly localized intense production of methyl. The utility of the PF-LIF technique is further demonstrated by combining methyl measurements with formaldehyde LIF imaging to capture spatiotemporal correlations between methyl and formaldehyde, which is an important intermediate species in plasma-assisted oxidative coupling of methane.
White, Rebekah D.; Alexanderian, A.; Karbalaeisadegh, Y.; Bekele-Maxwell, K.; Banks, H.T.; Talmant, M.; Grimal, Q.; Muller, M.
In this work we infer the underlying distribution on pore radius in human cortical bone samples using ultrasonic attenuation data. We first discuss how to formulate polydisperse attenuation models using a probabilistic approach and the Waterman–Truell model for scattering attenuation. We then compare the Independent Scattering Approximation and the higher-order Waterman–Truell models’ forward predictions for total attenuation in polydisperse samples. Following this, we formulate an inverse problem under the Prohorov Metric Framework coupled with variational regularization to stabilize this inverse problem. We then use experimental attenuation data taken from human cadaver samples and solve inverse problems resulting in nonparametric estimates of the probability density function on pore radius. We compare these estimates to the “true” microstructure of the bone samples determined via microCT imaging. We find that our methodology allows us to reliably estimate the underlying microstructure of the bone from attenuation data.
Quantifying the sensitivity (how a quantity of interest (QoI) varies with respect to a parameter) and response (the representation of a QoI as a function of a parameter) of a computer model of a parametric dynamical system is an important and challenging problem. Traditional methods fail in this context since sensitive dependence on initial conditions implies that the sensitivity and response of a QoI may be ill-conditioned or not well-defined. If a chaotic model has an ergodic attractor, then ergodic averages of QoIs are well-defined quantities and their sensitivity can be used to characterize model sensitivity. The response theorem gives sufficient conditions such that the local forward sensitivity (the derivative with respect to a given parameter) of an ergodic average of a QoI is well-defined. We describe a method based on ergodic and response theory for computing the sensitivity and response of a given QoI with respect to a given parameter in a chaotic model with an ergodic and hyperbolic attractor. This method does not require computation of ensembles of the model with perturbed parameter values. The method is demonstrated and some of the computations are validated on the Lorenz 63 and Lorenz 96 models.
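The ergodic-average QoI at the heart of the approach above can be illustrated on Lorenz 63: the long-time average of z is a well-defined quantity even though individual trajectories diverge chaotically. The parameter values and integration settings below are standard textbook choices; re-running this with perturbed rho and differencing the averages is the naive ensemble approach that the paper's ensemble-free method avoids.

```python
# Illustrative sketch: the ergodic (long-time) average of z for Lorenz 63 is a
# well-defined QoI despite chaos. Forward Euler with a small step is used for
# simplicity; this is not the paper's sensitivity algorithm.

def lorenz63_avg_z(rho, sigma=10.0, beta=8.0 / 3.0, dt=0.005, n=200_000):
    """Integrates Lorenz 63 and returns the time average of z after spin-up."""
    x, y, z = 1.0, 1.0, 1.0
    total, count = 0.0, 0
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i > n // 10:  # discard the transient before averaging
            total += z
            count += 1
    return total / count

avg = lorenz63_avg_z(28.0)
print(round(avg, 1))  # literature value of <z> at rho = 28 is roughly 23.5
```

Because the attractor is ergodic, this average converges as the integration time grows, which is what makes its parametric derivative a meaningful sensitivity even when pointwise trajectory sensitivities blow up.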