Publications

Multi-fidelity information fusion and resource allocation

Jakeman, John D.; Eldred, Michael; Geraci, Gianluca; Seidl, D.T.; Smith, Thomas M.; Gorodetsky, Alex A.; Pham, Trung; Narayan, Akil; Zeng, Xiaoshu; Ghanem, Roger

This project created and demonstrated a framework for the efficient and accurate prediction of complex systems with only a limited amount of highly trusted data. These next-generation computational multi-fidelity tools fuse multiple information sources of varying cost and accuracy to reduce the computational and experimental resources needed for designing and assessing complex multi-physics/scale/component systems. These tools have already been used to substantially improve the computational efficiency of simulation-aided modeling activities, from assessing thermal battery performance to predicting material deformation. This report summarizes the work carried out during a two-year LDRD project. Specifically, we present our technical accomplishments; project outputs such as publications, presentations, and professional leadership activities; and the project’s legacy.

Modeling Urban Acoustic Noise in the Las Vegas, NV Region

Wynn, Nora C.R.; Dannemann Dugick, Fransiska K.

Ambient infrasound noise in quiet, rural environments has been extensively studied and well-characterized through noise models for several decades. More recently, creating noise models for high-noise rural environments has also become an area of active research. However, far less work has been done to create generalized low-frequency noise models for urban areas. The high ambient noise levels expected in cities and other highly populated areas mean that these environments are regarded as poor locations for acoustic sensors, and historically, sensor deployment in urban areas was avoided for this reason. However, there are several advantages to placing sensors in urban environments, including convenience of deployment and maintenance, and increasingly, necessity, as more previously rural areas become populated. This study seeks to characterize trends in low-frequency urban noise by creating a background noise model for Las Vegas, NV, using the Las Vegas Infrasound Array (LVIA): a network of eleven infrasound sensors deployed throughout the city. Data included in this study span from 2019 to 2021 and provide a largely uninterrupted record of noise levels in the city from 0.1–500 Hz, with only minor discontinuities on individual stations. We organize raw data from the LVIA sensors into hourly power spectral density (PSD) averages for each station and select from these PSDs to create frequency distributions for time periods of interest. These frequency distributions are converted into probability density functions (PDFs), which are then used to evaluate variations in frequency and amplitude over daily to seasonal timescales. In addition to PDFs, the median, 5th percentile, and 95th percentile amplitude values are calculated across the entire frequency range. This methodology follows a well-established process for noise model creation.
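The processing chain described above (hourly PSDs, per-frequency amplitude distributions, percentile curves) follows a standard noise-model recipe that can be sketched in a few lines. The following is an illustrative NumPy sketch on synthetic data, not the LVIA processing code; the function names and parameter choices are assumptions:

```python
import numpy as np

def periodogram_db(seg, fs):
    """Simple one-sided periodogram in dB (stand-in for a Welch PSD)."""
    spec = np.fft.rfft(seg * np.hanning(len(seg)))
    psd = (np.abs(spec) ** 2) / (fs * len(seg))
    return np.fft.rfftfreq(len(seg), d=1.0 / fs), 10.0 * np.log10(psd + 1e-20)

def build_noise_model(waveform, fs, n_hours, percentiles=(5, 50, 95)):
    """Hourly PSDs -> per-frequency amplitude distributions ->
    median/5th/95th percentile curves."""
    n = int(fs * 3600)  # samples per hour
    psds = []
    for h in range(n_hours):
        freqs, psd_db = periodogram_db(waveform[h * n:(h + 1) * n], fs)
        psds.append(psd_db)
    psds = np.array(psds)  # shape (n_hours, n_freqs)
    # Percentile amplitude curves across the hourly PSD ensemble
    curves = {p: np.percentile(psds, p, axis=0) for p in percentiles}
    return freqs, psds, curves
```

In practice a Welch-style averaged PSD (e.g. SciPy's `signal.welch`) would replace the bare periodogram used here.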

Extension of Interferometric Synthetic Aperture Radar to Multiple Phase-Centers (Midyear LDRD Final Report – second edition)

Bickel, Douglas L.; Delaurentis, John M.

This document contains the final report for the midyear LDRD titled "Extension of Interferometric Synthetic Aperture Radar to Multiple Phase-Centers." This report presents an overview of several methods for approaching the problem of two targets in layover that arises in interferometric synthetic aperture radar systems. Simulation results for one of the methods are presented. In addition, a new direct approach is introduced.

Sensitivity Analyses for Monte Carlo Sampling-Based Particle Simulations

Bond, Stephen D.; Franke, Brian C.; Lehoucq, Rich; Mckinley, Scott A.

Computational design-based optimization is a well-used tool in science and engineering. Our report documents the successful use of a particle sensitivity analysis for design-based optimization within Monte Carlo sampling-based particle simulation, a capability that was not previously available. Such a capability enables the particle simulation communities to go beyond forward simulation and promises to reduce the burden on overworked analysts by getting more done with less computation.

System Response Characterization for a d–t Neutron Radiography System

Sweany, Melinda D.; Weinfurther, Kyle J.; Sjoberg, Kurt C.; Marleau, P.

We report the system response of a pixelated associated particle imaging (API) neutron radiography system. The detector readout currently consists of a 2 × 2 array of organic glass scintillator detectors, each with an 8 × 8 array of optically isolated pixels that match the size and pitch of the ARRAYJ-60035-64P-PCB silicon photomultiplier (SiPM) array from SensL/onsemi with 6 × 6 mm² SiPMs. The alpha screen of the API deuterium-tritium neutron generator is read out with the S13361-3050AE-08 from Hamamatsu, which is an 8 × 8 array of 3 × 3 mm² SiPMs. Data from the 320-channel system are acquired with the TOFPET2-based readout system. We present the predicted imaging capability of an eventual 5 × 5 detector array, the waveform-based energy and pulse shape characterization of the individual detectors, and the timing and energy response of the TOFPET2 system.

Probabilistic Nanomagnetic Memories for Uncertain and Robust Machine Learning

Bennett, Christopher; Xiao, Tianyao P.; Liu, Samuel; Humphrey, Leonard; Incorvia, Jean A.; Debusschere, Bert; Ries, Daniel; Agarwal, Sapan

This project evaluated the use of emerging spintronic memory devices for robust and efficient variational inference schemes. Variational inference (VI) schemes, which constrain the distribution of each weight to be a Gaussian with a mean and standard deviation, are a tractable method for calculating posterior distributions of weights in a Bayesian neural network, such that the network can still be trained using the powerful backpropagation algorithm. Our project focuses on domain-wall magnetic tunnel junctions (DW-MTJs), a powerful multi-functional spintronic synapse design that can achieve low-power switching while also opening a pathway toward repeatable, analog operation using fabricated notches. Our initial efforts to employ DW-MTJs as an all-in-one stochastic synapse encoding both a mean and a standard deviation did not meet the quality metrics for hardware-friendly VI; in the future, new device stacks and methods for expressive anisotropy modification may yet make this approach possible. As a fallback that immediately satisfies our requirements, we invented and detailed how the combination of a DW-MTJ synapse encoding the mean and a probabilistic Bayes-MTJ device, programmed via a ferroelectric or ionically modifiable layer, can robustly and expressively implement VI. This design includes a physics-informed small-circuit model that was scaled up to demonstrate rigorous uncertainty quantification applications, up to and including small convolutional networks on a grayscale image classification task and larger (residual) networks implementing multi-channel image classification. Lastly, because these results all depend on an inference application in which weights (spintronic memory states) remain non-volatile, the retention of these synapses in the notched case was further interrogated. These investigations revealed and emphasized the importance of both notch geometry and anisotropy modification for further enhancing the endurance of written spintronic states. In the near future, these results will be mapped to effective predictions for DW-MTJ memory retention at room and elevated operating temperatures, and experimentally verified when devices become available.
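The mean-plus-standard-deviation weight encoding at the heart of such VI schemes is typically trained with the reparameterization trick, so that gradients can flow through the sampling step. A generic NumPy sketch (illustrative only; it does not model the DW-MTJ hardware):

```python
import numpy as np

def sample_weights(mu, rho, rng):
    """Reparameterization trick: w = mu + sigma * eps, with
    sigma = softplus(rho) > 0, so gradients with respect to mu and rho
    can flow through backpropagation."""
    sigma = np.log1p(np.exp(rho))        # softplus keeps sigma positive
    eps = rng.standard_normal(mu.shape)  # noise independent of parameters
    return mu + sigma * eps, sigma

# Monte Carlo evaluation of a stochastic linear layer y = x . w
rng = np.random.default_rng(1)
mu, rho = np.array([0.5, -0.2]), np.array([-3.0, -3.0])
x = np.array([1.0, 2.0])
outs = [x @ sample_weights(mu, rho, rng)[0] for _ in range(2000)]
# The sample mean of `outs` approaches the mean-weight output x @ mu
```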

A Model of Narrative Reinforcement on a Dual-Layer Social Network

Emery, Benjamin; Ting, Christina; Gearhart, Jared L.; Tucker, J.D.

Widespread integration of social media into daily life has fundamentally changed the way society communicates and, as a result, how individuals develop attitudes, personal philosophies, and worldviews. The excessive spread of disinformation and misinformation due to this increased connectedness and streamlined communication has been extensively studied, simulated, and modeled. Less studied is the interaction of many pieces of misinformation and the resulting formation of attitudes. We develop a framework for the simulation of attitude formation based on exposure to multiple cognitions. We allow a set of cognitions with some implicit relational topology to spread on a social network, which is defined with separate layers to specify online and offline relationships. An individual’s opinion on each cognition is determined by a process inspired by the Ising model of ferromagnetism. We conduct experiments using this framework to test the effect of topology, connectedness, and social media adoption on the ultimate prevalence of and exposure to certain attitudes.
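An Ising-inspired opinion update on a two-layer network can be sketched as follows; this is a minimal heat-bath (Glauber) illustration under assumed layer weights, not the paper's exact dynamics:

```python
import numpy as np

def glauber_step(spins, adj_online, adj_offline, beta, rng,
                 w_online=1.0, w_offline=1.0):
    """One heat-bath update: a random node aligns with the weighted field
    of its neighbors on both layers, with probability given by a logistic
    rule at inverse temperature beta (high beta -> strong conformity)."""
    i = rng.integers(len(spins))
    field = (w_online * adj_online[i] @ spins
             + w_offline * adj_offline[i] @ spins)
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
    spins[i] = 1 if rng.random() < p_up else -1
    return spins

# On a small fully connected network, repeated updates drive consensus
rng = np.random.default_rng(2)
n = 10
adj = np.ones((n, n)) - np.eye(n)   # same topology on both layers here
spins = rng.choice([-1, 1], size=n)
for _ in range(500):
    spins = glauber_step(spins, adj, adj, beta=3.0, rng=rng)
```

In the paper's setting the two adjacency matrices would differ (online vs. offline ties) and one spin variable would be tracked per cognition.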

Towards Z-Next: The Integration of Theory, Experiments, and Computational Simulation in a Bayesian Data Assimilation Framework

Maupin, Kathryn A.; Foulk, James W.; Knapp, P.F.; Joseph, V.R.; Wu, C.F.J.; Glinsky, Michael E.; Valaitis, Sonata M.

Making reliable predictions in the presence of uncertainty is critical to high-consequence modeling and simulation activities, such as those encountered at Sandia National Laboratories. Surrogate or reduced-order models are often used to mitigate the expense of performing quality uncertainty analyses with high-fidelity, physics-based codes. However, phenomenological surrogate models do not always adhere to important physics and system properties. This project develops surrogate models that integrate physical theory with experimental data through a maximally informative framework that accounts for the many uncertainties present in computational modeling problems. Correlations between relevant outputs are preserved through the use of multi-output or co-predictive surrogate models; known physical properties (specifically monotonicity) are also preserved; and unknown physics and phenomena are detected using a causal analysis. By endowing surrogate models with key properties of the physical system being studied, their predictive power is arguably enhanced, allowing for reliable simulations and analyses at a reduced computational cost.
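Monotonicity, mentioned above as a preserved property, can be imposed on one-dimensional surrogate predictions by a least-squares monotone fit. A minimal sketch of the classic pool-adjacent-violators algorithm (an illustration of the general idea, not this project's implementation):

```python
def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    out = []  # stack of [block_mean, block_size]
    for v in y:
        out.append([float(v), 1])
        # Merge adjacent blocks while the monotone constraint is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2 = out.pop()
            m1, w1 = out.pop()
            out.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fit = []
    for m, w in out:
        fit.extend([m] * w)
    return fit
```

For example, `pava([1, 3, 2, 4])` pools the violating pair into its average, returning `[1.0, 2.5, 2.5, 4.0]`.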

Measuring nonlinearities of a cantilever beam using a low-cost efficient wireless intelligent sensor for strain (LEWIS-S)

Engineering Research Express

Robbins, E.; Kuether, Robert J.; Moreu, F.

In the context of experimental vibration data, strain gauges can obtain linear and nonlinear dynamic measurements. However, measuring strain can be discouraging and expensive due to the complexity of data acquisition systems, lack of portability, and high costs. This research introduces a low-cost efficient wireless intelligent sensor for strain (LEWIS-S), based on a portable-sensor-design platform that streamlines strain sensing. The softening behavior of a cantilever beam with geometric and inertial nonlinearities is characterized by the LEWIS-S under high-level force inputs. Two experiments were performed on a nonlinear cantilever beam with measurements obtained by the LEWIS-S sensor and an accelerometer. First, a sine sweep test was performed through the fundamental resonance of the system; then, a ring-down test was performed from a large initial static deformation. Good agreement was observed in quantities of interest such as frequency response functions, continuous wavelet transforms, and softening behavior in the backbone curves.

Improving Predictive Capability in REHEDS Simulations with Fast, Accurate, and Consistent Non-Equilibrium Material Properties

Hansen, Stephanie B.; Baczewski, Andrew D.; Gomez, Thomas; Hentschel, T.W.; Jennings, Christopher A.; Kononov, Alina K.; Nagayama, Taisuke; Adler, Kelsey; Cangi, A.; Cochrane, Kyle; Foulk, James W.; Schleife, A.

Predictive design of REHEDS experiments with radiation-hydrodynamic simulations requires knowledge of material properties (e.g., equations of state (EOS), transport coefficients, and radiation physics). Interpreting experimental results requires accurate models of diagnostic observables (e.g., detailed emission, absorption, and scattering spectra). In conditions of Local Thermodynamic Equilibrium (LTE), these material properties and observables can be pre-computed with relatively high accuracy and subsequently tabulated on simple temperature-density grids for fast look-up by simulations. When radiation and electron temperatures fall out of equilibrium, however, non-LTE effects can profoundly change material properties and diagnostic signatures. Accurately and efficiently incorporating these non-LTE effects has been a longstanding challenge for simulations. At present, most simulations include non-LTE effects by invoking highly simplified inline models. These inline non-LTE models are both much slower than table look-up and significantly less accurate than the detailed models used to populate LTE tables and diagnose experimental data through post-processing or inversion. Because inline non-LTE models are slow, designers avoid them whenever possible, which leads to known inaccuracies from using tabular LTE. Because inline models are simple, they are inconsistent with tabular data from detailed models, leading to inaccuracies of poorly known magnitude, and they cannot generate detailed synthetic diagnostics suitable for direct comparisons with experimental data. This project addresses the challenge of generating and utilizing efficient, accurate, and consistent non-equilibrium material data along three complementary but relatively independent research lines.
First, we have developed a relatively fast and accurate non-LTE average-atom model based on density functional theory (DFT) that provides a complete set of EOS, transport, and radiative data, and have rigorously tested it against more sophisticated first-principles multi-atom DFT models, including time-dependent DFT. Next, we have developed a tabular scheme and interpolation methods that compactly capture non-LTE effects for use in simulations and have implemented these tables in the GORGON magneto-hydrodynamic (MHD) code. Finally, we have developed post-processing tools that use detailed tabulated non-LTE data to directly predict experimental observables from simulation output.

Full 3D Kinetic Modeling and Quantification of Positive Streamer Evolution in an Azimuthally Swept Pin-to-Plane Wedge Geometry

Jindal, Ashish K.; Moore, Christopher H.; Fierro, Andrew S.; Hopkins, Matthew M.

Cathode-directed streamer evolution in near-atmospheric air is modeled in 3D pin-to-plane geometries using a 3D kinetic Particle-In-Cell (PIC) code that simulates particle-particle collisions via the Direct Simulation Monte Carlo (DSMC) method. Due to the computational challenges associated with a complete 360° volumetric domain, a practical alternative was achieved using a wedge domain, and a range of azimuthal angles (5°, 15°, 30°, and 45°) was explored to study possible effects of the finite wedge angle on streamer growth and propagation. A DC voltage of 6 kV is applied to a hemispherical anode of radius 100 μm, with a planar cathode held at ground potential, generating an over-volted state with an electric field of 4 MV/m across a 1500 μm gap. The domain is seeded with an initial ion and electron density of 10¹⁸ m⁻³ at 1 eV temperature, confined to a spherical region of radius 100 μm centered at the tip of the anode. The air chemistry model [1] includes standard Townsend breakdown mechanisms (electron-neutral elastic, excitation, ionization, attachment, and detachment collision chemistry and secondary electron emission) as well as streamer mechanisms (photoionization and ion-neutral collisions) by tracking excited-state neutrals, which can then either quench via collisions or spontaneously emit a photon based on specific Einstein A coefficients [2, 3]. In this work, positive streamer dynamics are formally quantified for each wedge angle in terms of electron velocity and density as temporal functions of coordinates r, Φ, and z. Applying a random plasma seed for each simulation, particles of interest are tracked with near-femtosecond temporal resolution out to 1.4 ns and spatially binned. This process is repeated six times and results are averaged. Prior 2D studies have shown that the reduced electric field, E/n, can significantly impact streamer evolution [4].
We extend the analysis to 3D wedge geometries (chosen to limit computational costs) and examine the wedge angle’s effect on streamer branching, propagation, and velocity. Results indicate that the smallest wedge angle producing an acceptably converged solution is 30°. The potential effects that a mesh under-resolved with respect to the Debye length can impart on streamer dynamics and numerical heating were not investigated; we explicitly note that the smallest cell size was approximately 10 times the minimum λD in the streamer channel at late times. This constraint on cell size was the result of computational limitations on total mesh count.

Gen 3 Particle Pilot Plant (G3P3) Life Cycle Management Plan (SAND report)

Sment, Jeremy N.I.; Ho, Clifford K.

The National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories New Mexico (SNL/NM) developed this Life Cycle Management Plan (LCMP) to document its process for executing, monitoring, controlling, and closing out Phase 3 of the Gen 3 Particle Pilot Plant (G3P3). This plan serves as a resource for stakeholders who wish to be knowledgeable of project objectives and how they will be accomplished.

Carboxylate binding prefers two cations to one

Physical Chemistry Chemical Physics

Stevens, Mark J.; Rempe, Susan

Almost all studies of specific ion binding by carboxylates (–COO⁻) have considered only a single cation, but clustering of ions and ligands is a common phenomenon. We apply density functional theory to investigate how variations in the number of acetate ligands binding to two monovalent cations affect ion binding preferences. We study a series of monovalent (Li+, Na+, K+, Cs+) ions relevant to experimental work on many topics, including ion channels, battery storage, water purification, and solar cells. We find that the preferred optimal structure has 3 acetates, except for Cs+, which has 2 acetates. The optimal coordination of the cation by the carboxylate O atoms is 4 for both Na+ and K+, and 3 for Li+ and Cs+. For Li+, there is a 4-fold coordination minimum just a few kcal mol⁻¹ higher than the optimal 3-fold structure. For two cations, multiple minima occur in the vicinity of the lowest free energy state. We find that, for Li+, Na+, and K+, the preferred optimal structure with two cations is favored over a mixture of single-cation complexes, providing a basis for understanding ionic cluster formation that is relevant for engineering proteins and other materials for rapid, selective ion transport.

Accelerating Multiscale Materials Modeling with Machine Learning

Modine, Normand A.; Stephens, John A.; Swiler, Laura P.; Thompson, A.P.; Vogel, Dayton J.; Cangi, Attila; Feilder, Lenz; Rajamanickam, Sivasankaran

The focus of this project is to accelerate and transform the workflow of multiscale materials modeling by developing an integrated toolchain seamlessly combining DFT, SNAP, LAMMPS (shown in Figure 1-1), and a machine-learning (ML) model that will more efficiently extract information from a smaller set of first-principles calculations. Our ML model enables us to accelerate first-principles data generation by interpolating existing high-fidelity data, and to extend the simulation scale by extrapolating high-fidelity data (10² atoms) to the mesoscale (10⁴ atoms). It encodes the underlying physics of atomic interactions on the microscopic scale by adapting a variety of ML techniques, such as deep neural networks (DNNs) and graph neural networks (GNNs). We developed a new surrogate model for density functional theory using deep neural networks. The developed ML surrogate is demonstrated in a workflow to generate accurate band energies, total energies, and densities of the 298 K and 933 K aluminum systems. Furthermore, the models can be used to predict these quantities of interest for systems with more atoms than the training data set. We have demonstrated that the ML model can be used to compute the quantities of interest for systems with 100,000 Al atoms. Compared with the 2000-atom Al system, the new surrogate model is as accurate as DFT but three orders of magnitude faster. We also explored optimal experimental design techniques to choose the training data and novel graph neural networks to train on smaller data sets. These are promising methods that need to be explored in the future.
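The extrapolation step (train on ~10² atoms, evaluate at much larger sizes) relies on models whose total energy is a sum of per-atom contributions. A toy linear stand-in for the DNN/SNAP surrogates illustrates the idea; the descriptors and fitting form here are invented for illustration:

```python
import numpy as np

def fit_peratom_model(descriptor_sets, energies, lam=1e-10):
    """Ridge-regress total energy on per-atom descriptors summed over
    each system; linearity in per-atom terms lets the fitted model be
    evaluated on systems larger than any seen in training."""
    X = np.array([d.sum(axis=0) for d in descriptor_sets])  # sum over atoms
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ np.asarray(energies))

def predict_energy(coef, descriptors):
    """Total energy for one system from its (n_atoms, n_features) array."""
    return float(descriptors.sum(axis=0) @ coef)
```

Trained on a handful of small (e.g. 8-atom) systems, such a model can be evaluated on a 1000-atom system at negligible cost; the real project replaces the linear map with a neural network.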

Fluid-Kinetic Coupling: Advanced Discretizations for Simulations on Emerging Heterogeneous Architectures (LDRD FY20-0643)

Roberts, Nathan V.; Bond, Stephen D.; Miller, Sean T.; Cyr, Eric C.

Plasma physics simulations are vital for a host of Sandia mission concerns, for fundamental science, and for clean energy in the form of fusion power. Sandia's most mature plasma physics simulation capabilities come in the form of particle-in-cell (PIC) models and magnetohydrodynamics (MHD) models. MHD models for a plasma work well in denser plasma regimes, when there is enough material that the plasma approximates a fluid. PIC models, on the other hand, work well in lower-density regimes, in which the number of particles to simulate remains tractable; error in PIC scales as the inverse square root of the number of particles, making high-accuracy simulations expensive. Real-world applications, however, almost always involve a transition region between the high-density regimes where MHD is appropriate and the low-density regimes for PIC. In such a transition region, a direct discretization of Vlasov is appropriate. Such discretizations come with their own computational costs, however; the phase-space mesh for Vlasov can involve up to six dimensions (seven if time is included), and applying appropriate homogeneous boundary conditions in velocity space requires meshing a substantial padding region to ensure that the distribution remains sufficiently close to zero at the velocity boundaries. Moreover, for collisional plasmas, the right-hand side of the Vlasov equation is a collision operator, which is non-local in velocity space and which may dominate the cost of the Vlasov solver. The present LDRD project endeavors to develop modern, foundational tools for the development of continuum-kinetic Vlasov solvers, using the discontinuous Petrov-Galerkin (DPG) methodology for discretization of Vlasov, and machine-learning (ML) models to enable efficient evaluation of collision operators. DPG affords several key advantages.
First, it has a built-in, robust error indicator, allowing us to adapt the mesh in a very natural way, enabling a coarse velocity-space mesh near the homogeneous boundaries, and a fine mesh where the solution has fine features. Second, it is an inherently high-order, high-intensity method, requiring extra local computations to determine so-called optimal test functions, which makes it particularly suited to modern hardware in which floating-point throughput is increasing at a faster rate than memory bandwidth. Finally, DPG is a residual-minimizing method, which enables high-accuracy computation: in typical cases, the method delivers something very close to the $L^2$ projection of the exact solution. Meanwhile, the ML-based collision model we adopt affords a cost structure that scales as the square root of a standard direct evaluation. Moreover, we design our model to conserve mass, momentum, and energy by construction, and our approach to training is highly flexible, in that it can incorporate not only synthetic data from direct-simulation Monte Carlo (DSMC) codes, but also experimental data. We have developed two DPG formulations for Vlasov-Poisson: a time-marching, backward-Euler discretization and a space-time discretization. We have conducted a number of numerical experiments to verify the approach in a 1D1V setting. In this report, we detail these formulations and experiments. We also summarize some new theoretical results developed as part of this project (published as papers previously): some new analysis of DPG for the convection-reaction problem (of which the Vlasov equation is an instance), a new exponential integrator for DPG, and some numerical exploration of various DPG-based time-marching approaches to the heat equation. As part of this work, we have contributed extensively to the Camellia open-source library; we also describe the new capabilities and their usage. 
We have also developed a well-documented methodology for single-species collision operators, which we applied to argon and demonstrated with numerical experiments. We summarize those results here, as well as describing at a high level a design extending the methodology to multi-species operators. We have released a new open-source library, MLC, under a BSD license; we include a summary of its capabilities as well.

Comprehensive uncertainty quantification (UQ) for full engineering models by solving probability density function (PDF) equation

Kolla, Hemanth; De, Saibal; Jones, Reese E.; Hansen, Michael A.; Plews, Julia A.

This report details a new method for propagating parameter uncertainty (forward uncertainty quantification) in partial differential equation (PDE) based computational mechanics applications. The method provides full-field quantities of interest by solving the joint probability density function (PDF) equations that are implied by the PDEs with uncertain parameters. Full-field uncertainty quantification enables the design of complex systems where quantities of interest, such as failure points, are not known a priori. The method, motivated by the well-known probability density function (PDF) propagation method of turbulence modeling, uses an ensemble of solutions to provide the joint PDF of desired quantities at every point in the domain. A small subset of the ensemble is computed exactly, and the remainder of the samples are computed with an approximation of the driving (dynamics) term of the PDEs based on those exact solutions. Although the proposed method has commonalities with traditional interpolatory stochastic collocation methods applied directly to quantities of interest, it is distinct and exploits the parameter dependence and smoothness of the dynamics term of the governing PDEs. The efficacy of the method is demonstrated by applying it to two target problems: solid mechanics explicit dynamics with uncertain material model parameters, and reacting hypersonic fluid mechanics with uncertain chemical kinetic rate parameters. A minimally invasive implementation of the method for representative codes SPARC (reacting hypersonics) and NimbleSM (finite-element solid mechanics) and associated software details are described. For the solid mechanics demonstration problems, the method shows orders of magnitude improvement in accuracy over traditional stochastic collocation.
For the reacting hypersonics problem, the method is implemented as a streamline integration and results show very good accuracy for the approximate sample solutions of re-entry flow past the Apollo capsule geometry at Mach 30.

Combining Physics and Machine Learning for the Next Generation of Molecular Simulation

Rackers, Joshua R.

Simulating molecules and atomic systems at quantum accuracy is a grand challenge for science in the 21st century. Quantum-accurate simulations would enable the design of new medicines and the discovery of new materials. The defining problem in this challenge is that quantum calculations on large molecules, like proteins or DNA, are fundamentally impossible with current algorithms. In this work, we explore a range of different methods that aim to make large, quantum-accurate simulations possible. We show that using advanced classical models, we can accurately simulate ion channels, an important biomolecular system. We show how advanced classical models can be implemented in an exascale-ready software package. Lastly, we show how machine learning can learn the laws of quantum mechanics from data and enable quantum electronic structure calculations on thousands of atoms, a feat that is impossible for current algorithms. Altogether, this work shows that by combining advances in physics models, computing, and machine learning, we are moving closer to the reality of accurately simulating our molecular world.

Efficient approach to kinetic simulation in the inner magnetically insulated transmission line on Z

Evstatiev, Evstati G.; Hess, Mark H.

This project explores the idea of performing kinetic numerical simulations of the Z inner magnetically insulated transmission line (inner MITL) with reduced physics models, such as a guiding-center drift-kinetic approximation for particles and electrostatic and magnetostatic approximations for the fields. The basic problem explored herein is the generation, formation, and evolution of vortices by electron space-charge-limited (SCL) emission. The results indicate that for values of peak current and pulse length relevant to Z, these approximations are excellent, while also providing tens to hundreds of times reduction in the computational load. The benefits could be enormous: implementation of these reduced physics models in present particle-in-cell (PIC) codes could enable them to be routinely used for experimental design while still capturing essential non-thermal (kinetic) physics.

Neuromorphic Information Processing by Optical Media

Leonard, Francois; Fuller, Elliot J.; Teeter, Corinne M.; Vineyard, Craig M.

Classification of features in a scene typically requires conversion of the incoming photonic field into the electronic domain. Recently, an alternative approach has emerged whereby passive structured materials can perform classification tasks by directly using free-space propagation and diffraction of light. In this manuscript, we present a theoretical and computational study of such systems and establish the basic features that govern their performance. We show that system architecture, material structure, and input light field are intertwined and need to be co-designed to maximize classification accuracy. Our simulations show that a single-layer metasurface can achieve classification accuracy better than conventional linear classifiers, with an order of magnitude fewer diffractive features than previously reported. For a wavelength λ, single-layer metasurfaces of size 100λ × 100λ with aperture density λ⁻² achieve ~96% testing accuracy on the MNIST dataset, for an optimized distance of ~100λ to the output plane. This is enabled by an intrinsic nonlinearity in photodetection, despite the use of linear optical metamaterials. Furthermore, we find that once the system is optimized, the number of diffractive features is the main determinant of classification performance. The slow asymptotic scaling with the number of apertures suggests a reason why such systems may benefit from multiple-layer designs. Finally, we show a trade-off between the number of apertures and fabrication noise.

Model-Form Epistemic Uncertainty Quantification for Modeling with Differential Equations: Application to Epidemiology

Foulk, James W.; Portone, Teresa; Dandekar, Raj; Rackauckas, Chris; Bandy, Rileigh J.; Huerta, Jose G.; Dytzel, India

Modeling real-world phenomena to any degree of accuracy is a challenge that the scientific research community has navigated since its foundation. Lack of information and limited computational and observational resources necessitate modeling assumptions which, when invalid, lead to model-form error (MFE). The work reported herein explored a novel method to represent model-form uncertainty (MFU) that combines Bayesian statistics with the emerging field of universal differential equations (UDEs). The fundamental principle behind UDEs is simple: use known equational forms that govern a dynamical system when you have them; then incorporate data-driven approaches – in this case neural networks (NNs) – embedded within the governing equations to learn the interacting terms that were underrepresented. Utilizing epidemiology as our motivating exemplar, this report will highlight the challenges of modeling novel infectious diseases while introducing ways to incorporate NN approximations to MFE. Prior to embarking on a Bayesian calibration, we first explored methods to augment the standard (non-Bayesian) UDE training procedure to account for uncertainty and increase robustness of training. In addition, it is often the case that uncertainty in observations is significant; this may be due to randomness or lack of precision in the measurement process. This uncertainty typically manifests as “noisy” observations which deviate from a true underlying signal. To account for such variability, the NN approximation to MFE is endowed with a probabilistic representation and is updated using available observational data in a Bayesian framework. By representing the MFU explicitly and deploying an embedded, data-driven model, this approach enables an agile, expressive, and interpretable method for representing MFU. 
In this report we provide evidence that Bayesian UDEs show promise as a novel framework for science-based, data-driven MFU representation, while emphasizing that significant advances must be made in the calibration of Bayesian NNs to ensure a robust calibration procedure.
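The UDE idea described above, a known mechanistic form augmented by an embedded neural-network residual, can be sketched with a toy SIR epidemic model. The network here is deliberately untrained, with arbitrary random weights; in practice it would be calibrated against data, for example in the Bayesian framework the report describes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small MLP standing in for the unknown/underrepresented interaction term.
W1, b1 = rng.normal(size=(8, 2)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)

def nn(s, i):
    """Neural-network residual term, a function of the current state."""
    h = np.tanh(W1 @ np.array([s, i]) + b1)
    return (W2 @ h + b2)[0]

def sir_ude_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1):
    """One Euler step of an SIR model whose transmission term combines the
    known mechanistic form (beta*s*i) with a learned NN correction."""
    new_inf = beta * s * i + nn(s, i)   # known form + data-driven residual
    ds = -new_inf
    di = new_inf - gamma * i
    dr = gamma * i
    return s + dt * ds, i + dt * di, r + dt * dr

s, i, r = 0.99, 0.01, 0.0
for _ in range(100):
    s, i, r = sir_ude_step(s, i, r)
```

Note that the structure of the equations enforces population conservation (the three rates sum to zero) regardless of what the network outputs, which is exactly the appeal of UDEs: known physics is preserved while the NN absorbs only the model-form error.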

More Details

Fractal-Fin, Dimpled Solar Heat Collector with Solar Glaze

Rodriguez, Salvador B.

Exterior solar glaze was added to a 3 ft × 3 ft × 3 ft aluminum solar collector that had six triangular dimpled fins for enhanced heat transfer. The interior vertical wall on the south side was also dimpled. The solar glaze was added to compare its solar collection performance with unglazed solar collector experiments conducted at Sandia in 2021. The east, west, front, and top sides of the solar collector were encased with solar glaze glass. Because the solar incident heat on the north and bottom sides was minimal, they were insulated to retain the heat collected by the other four sides. The advantages of the solar glaze include the entrapment of more solar heat, as well as insulation from the wind. The disadvantages are that it increases the cost of the solar collector and is structurally fragile compared to the aluminum walls. Nevertheless, prior to conducting experiments with the glazed solar collector, it was not clear whether the benefits outweighed the disadvantages. These issues are addressed herein, with the conclusion that the additional heat collected by the glaze justifies the additional cost. The solar collector glaze design, experimental data, and costs and benefits are documented in this report.

More Details

Performance Evaluation of a Prototype Moving Packed-Bed Particle/sCO2 Heat Exchanger

Albrecht, Kevin; Laubscher, Hendrik F.; Bowen, Christopher P.; Ho, Clifford K.

Particle heat exchangers are a critical enabling technology for next generation concentrating solar power (CSP) plants that use supercritical carbon dioxide (sCO2) as a working fluid. This report covers the design, manufacturing, and testing of a prototype particle-to-sCO2 heat exchanger targeting thermal performance levels required to meet commercial scale cost targets. In addition, the design and assembly of integrated particle and sCO2 flow loops for heat exchanger performance testing are detailed. The prototype heat exchanger was tested at particle inlet temperatures of 500 °C and sCO2 pressures of 17 MPa, which resulted in overall heat transfer coefficients of approximately 300 W/m2-K at the design point, with peak values as high as 400 W/m2-K in high-approach-temperature cases.
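The overall heat transfer coefficient quoted above follows the standard definition U = Q / (A · ΔT_lm), with ΔT_lm the log-mean temperature difference. A minimal sketch, using made-up temperatures and duty rather than the prototype's actual data:

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counterflow heat exchanger."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if abs(dt1 - dt2) < 1e-12:   # balanced case: LMTD reduces to dt1
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def overall_htc(q_watts, area_m2, dt_lm):
    """Overall heat transfer coefficient U = Q / (A * dT_lm), in W/m2-K."""
    return q_watts / (area_m2 * dt_lm)

# Illustrative (hypothetical) numbers, not the prototype's measured data:
dt = lmtd(t_hot_in=500.0, t_hot_out=350.0,
          t_cold_in=300.0, t_cold_out=450.0)        # balanced -> 50 K
U = overall_htc(q_watts=15_000.0, area_m2=1.0, dt_lm=dt)   # -> 300 W/m2-K
```

Backing U out of measured duty, area, and terminal temperatures in this way is how design-point and high-approach-temperature cases are compared on a common basis.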

More Details

Octane Requirements of Lean Mixed-Mode Combustion in a Direct-Injection Spark-Ignition Engine

Energy and Fuels

Kim, Namho K.; Vuilleumier, David; Singh, Eshan; Sjoberg, Carl M.

This study investigates the octane requirements of a hybrid flame propagation and controlled autoignition mode referred to as mixed-mode combustion (MMC), which allows for strong control over combustion parameters via a spark-initiated deflagration phase. Due to the throughput limitations associated with both experiments and 3-D computational fluid dynamics calculations, a hybrid 0-D and 1-D modeling methodology was developed, supported by experimental validation data. This modeling approach relied on 1-D, two-zone engine simulations to predict bulk in-cylinder thermodynamic conditions over a range of engine speeds, compression ratios, intake pressures, trapped residual levels, fueling rates, and spark timings. Those predictions were then transferred to a 0-D chemical kinetic model, which was used to evaluate the autoignition behavior of fuels when subjected to temperature-pressure trajectories of interest. Finally, the predicted autoignition phasings were screened relative to the progress of the modeled deflagration-based combustion in order to determine if an operating condition was feasible or infeasible due to knock or stability limits. The combined modeling and experimental results reveal that MMC has an octane requirement similar to modern stoichiometric spark-ignition engines in that fuels with high research octane number (RON) and high octane sensitivity (S) enable higher loads. Experimental trends with varying RON and S were well predicted by the model for 1000 and 1400 rpm, confirming its utility in identifying the compatibility of a fuel's autoignition behavior with an engine configuration and operating strategy. However, the model was not effective in predicting (nor designed to predict) operability limits due to cycle-to-cycle variations, which experimentally inhibited operation of some fuels at 2000 rpm. 
Placing the operable limits and efficiencies of MMC in the context of a state-of-the-art engine, MMC showed superior efficiencies over the range investigated, demonstrating the potential to further improve fuel economy.
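The finding that both high RON and high octane sensitivity S enable higher loads is consistent with the standard octane-index correlation OI = RON - K*S (Kalghatgi), in which K is negative at beyond-RON conditions so that sensitivity adds to knock resistance. This correlation is background context, not a result taken from the article:

```python
def octane_index(ron, s, k):
    """Octane index OI = RON - K*S.

    K depends on the engine's temperature-pressure trajectory; at
    "beyond-RON" conditions K < 0, so high RON and high sensitivity S
    both raise the effective knock resistance.
    """
    return ron - k * s

# Hypothetical fuel: RON 98, sensitivity 10, at a condition with K = -0.5.
oi = octane_index(ron=98.0, s=10.0, k=-0.5)   # -> 103.0
```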

More Details

Development of self-sensing materials for extreme environments based on metamaterial concept and additive manufacturing

Wang, Yifeng

Structural health monitoring of an engineered component in a harsh environment is critical for multiple DOE missions including nuclear fuel cycle, subsurface energy production/storage, and energy conversion. Supported by a seeding Laboratory Directed Research & Development (LDRD) project, we have explored a new concept for structural health monitoring by introducing a self-sensing capability into structural components. The concept is based on two recent technological advances: metamaterials and additive manufacturing. A self-sensing capability can be engineered by embedding a metastructure, for example, a sheet of electromagnetic resonators, either metallic or dielectric, into a material component. This embedment can now be realized using 3-D printing. The precise geometry of the embedded metastructure determines how the material interacts with an incident electromagnetic wave. Any change in the structure of the material (e.g., straining, degradation, etc.) would inevitably perturb the embedded metastructure or metasurface array and therefore alter the electromagnetic response of the material, resulting in a frequency shift in the reflection spectrum that can be detected passively and remotely. This new sensing approach eliminates the complicated environmental shielding, in-situ power supply, and wire routing that are generally required by existing active-circuit-based sensors. The work documented in this report has preliminarily demonstrated the feasibility of the proposed concept. The work has established the needed simulation tools and experimental capabilities for future studies.
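The sensing principle, strain perturbing an embedded resonator and shifting its resonance, can be sketched with a lumped LC model of a single resonator; the inductance, capacitance, and strain-induced change below are hypothetical, chosen only to illustrate the mechanism:

```python
import math

def resonant_freq(L, C):
    """Resonant frequency of an LC resonator: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical split-ring-style parameters: 1 nH and 1 pF -> f0 ~ 5 GHz.
L0, C0 = 1.0e-9, 1.0e-12
f0 = resonant_freq(L0, C0)

# Suppose strain stretches the resonator gap and drops its capacitance by 1%;
# the resonance shifts upward, and that shift is what the reflection
# spectrum measurement would pick up remotely.
f_strained = resonant_freq(L0, 0.99 * C0)
shift = f_strained - f0   # positive, on the order of tens of MHz here
```

A real metasurface array would be modeled with full-wave electromagnetic simulation, but the lumped picture captures why geometry changes map directly to measurable frequency shifts.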

More Details

Computational Response Theory for Dynamics

Steyer, Andrew J.

Quantifying the sensitivity, how a quantity of interest (QoI) varies with respect to a parameter, and the response, the representation of a QoI as a function of a parameter, of a computer model of a parametric dynamical system is an important and challenging problem. Traditional methods fail in this context since sensitive dependence on initial conditions implies that the sensitivity and response of a QoI may be ill-conditioned or not well-defined. If a chaotic model has an ergodic attractor, then ergodic averages of QoIs are well-defined quantities and their sensitivity can be used to characterize model sensitivity. The response theorem gives sufficient conditions such that the local forward sensitivity, the derivative with respect to a given parameter, of an ergodic average of a QoI is well-defined. We describe a method based on ergodic and response theory for computing the sensitivity and response of a given QoI with respect to a given parameter in a chaotic model with an ergodic and hyperbolic attractor. This method does not require computation of ensembles of the model with perturbed parameter values. The method is demonstrated and some of the computations are validated on the Lorenz 63 and Lorenz 96 models.
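The notion of a well-defined ergodic average of a QoI can be illustrated on the Lorenz 63 model: the time average of a QoI along a single long trajectory converges to an attractor-wide value, independent of the initial condition. This sketch covers only the averaging step, not the report's response-theory sensitivity method:

```python
import numpy as np

def lorenz63(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 system at standard chaotic parameters."""
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def rk4_step(u, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz63(u)
    k2 = lorenz63(u + 0.5 * dt * k1)
    k3 = lorenz63(u + 0.5 * dt * k2)
    k4 = lorenz63(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def ergodic_average(qoi, u0, dt=0.01, n_transient=1000, n_steps=20000):
    """Long-time average of a QoI along one trajectory; on an ergodic
    attractor this average is well-defined, and its parameter derivative
    is the quantity the response theorem addresses."""
    u = np.asarray(u0, dtype=float)
    for _ in range(n_transient):      # discard the transient approach to the attractor
        u = rk4_step(u, dt)
    total = 0.0
    for _ in range(n_steps):
        u = rk4_step(u, dt)
        total += qoi(u)
    return total / n_steps

z_bar = ergodic_average(lambda u: u[2], u0=[1.0, 1.0, 1.0])
```

A naive finite-difference sensitivity of an instantaneous QoI would blow up with the Lyapunov exponent; differentiating the ergodic average z_bar with respect to, say, rho is what remains well-posed.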

More Details