The following discussion contains a detailed description of how to interface with and operate the Dosimetry Processor v1.0 software application. It describes the input required from the user to process dosimetry data and the actions to take for troubleshooting.
The performance of energetic materials (EM) varies significantly across production lots due to the inability of current production methods to yield consistent morphology and size. Lot-to-lot variations, and the inability to reproduce the characteristics needed to meet specification, are costly, increase uncertainty, and create additional risk in programs using these materials. There is thus a pressing need to formulate EMs more reliably, with greater control of morphology. The goal of this project is to use surfactant-assisted self-assembly to generate EM particles with well-defined size and external morphologies using triaminotrinitrobenzene (TATB) and hexanitrohexaazaisowurtzitane (CL-20), as these EMs are both prevalent in the stockpile and present interesting and urgent reprocessing challenges. We intend to develop a fundamental scientific understanding of how molecular packing influences EM morphology, and to scale up fabrication of EM particles with controlled morphology, promising to eliminate inconsistent performance by providing a trusted and reproducible method to improve EMs for NW applications.
The generalized linear Boltzmann equation (GLBE) is a recently developed framework based on non-classical transport theory for modeling the expected value of particle flux in an arbitrary stochastic medium. Provided with a non-classical cross-section for a given statistical description of a medium, any transport problem in that medium may be solved. Previous work has considered only one-dimensional media without finite boundary conditions and discrete binary mixtures of materials. In this work the solution approach for the GLBE in multidimensional media with finite boundaries is outlined. The discrete ordinates method with an implicit discretization of the pathlength variable is used to leverage sweeping methods for the transport operator. In addition, several convenient approximations for non-classical cross-sections are introduced. The solution approach is verified against random realizations of a Gaussian process medium in a square enclosure.
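For reference, a representative statement of the GLBE, following the non-classical transport literature, treats the angular flux as a function of the free-path variable s accumulated since the last collision; normalization conventions vary by author, so the form below is indicative rather than definitive:

```latex
\frac{\partial\psi}{\partial s} + \mathbf{\Omega}\cdot\nabla\psi + \Sigma_t(s)\,\psi(\mathbf{x},\mathbf{\Omega},s)
= \delta(s)\left[ c\int_{4\pi} P(\mathbf{\Omega}'\cdot\mathbf{\Omega}) \int_0^\infty \Sigma_t(s')\,\psi(\mathbf{x},\mathbf{\Omega}',s')\,ds'\,d\Omega' + Q(\mathbf{x},\mathbf{\Omega}) \right]
```

Here Σt(s) is the non-classical cross-section, c the scattering ratio, and P the scattering kernel; the implicit pathlength discretization mentioned above treats the ∂/∂s term so that each pathlength step reduces to a standard transport sweep.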
This report documents and describes the tabulation and analysis of historical pulse operation data from the Annular Core Research Reactor (ACRR) at Sandia National Laboratories (SNL). The pulse data were obtained from a combination of pulse log files generated at the control console and pulse diagnostics system data. The pulses presented were performed between April 2003 and December 2017. A brief analysis of the data is included to characterize the aggregate behavior of ACRR pulses with respect to theoretical treatments based on the point reactor kinetics model. It is expected that the data presented will provide an organized and consolidated resource to reference historical pulse data at the ACRR for use in analyses, verification and validation, and general understanding of the machine. A comprehensive set of data is presented to the reader in the appendices.
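For context, the standard point-kinetics treatment for such pulses is the adiabatic Nordheim-Fuchs model, in which a prompt-supercritical reactivity insertion above prompt critical is quenched by energy-dependent feedback:

```latex
\frac{dP}{dt} = \frac{\rho(t)-\beta}{\Lambda}\,P, \qquad
\rho(t) = \rho_0 - \alpha E(t), \qquad
\frac{dE}{dt} = P
\;\;\Longrightarrow\;\;
P_{\max} = \frac{(\rho_0-\beta)^2}{2\alpha\Lambda}, \qquad
E_{\mathrm{tot}} = \frac{2(\rho_0-\beta)}{\alpha}
```

where P is power, E is released energy, Λ is the prompt-neutron generation time, β is the delayed neutron fraction, and α is the energy feedback coefficient. Comparing measured pulse yields and peak powers against these relations is the natural way to characterize the aggregate pulse behavior reported here.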
The Vanguard program informally began in January 2017 with the submission of a white paper entitled "Sandia's Vision for a 2019 Arm Testbed" to NNSA headquarters. The program proceeded in earnest in May 2017 with an announcement by Doug Wade (Director, Office of Advanced Simulation and Computing and Institutional R&D at NNSA) that Sandia National Laboratories (Sandia) would host the first Advanced Architecture Prototype platform based on the Arm architecture. In August 2017, Sandia formed a Tri-lab team chartered to develop a robust HPC software stack for Astra to support the Vanguard program goal of demonstrating the viability of Arm in supporting ASC production computing workloads.
Computational modeling and simulation are paramount to modern science. Computational models often replace physical experiments that are prohibitively expensive, dangerous, or occur at extreme scales. Thus, it is critical that these models accurately represent reality and can be used as replacements for physical experiments. This paper provides an analysis of metrics that may be used to determine the validity of a computational model. While some metrics have a direct physical meaning and a long history of use, others, especially those that compare probabilistic data, are more difficult to interpret. Furthermore, the process of model validation is often application-specific, making the procedure itself challenging and the results difficult to defend. We therefore provide guidance and recommendations as to which validation metric to use, as well as how to use and decipher the results. An example is included that compares interpretations of various metrics and demonstrates the impact of model and experimental uncertainty on validation processes.
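As a concrete illustration of a probabilistic validation metric, the sketch below computes the area validation metric, the area between the empirical CDFs of model and experimental samples, whose value carries the units of the quantity of interest. This is a generic example, not this paper's specific case study, and the sample data are synthetic:

```python
import numpy as np

def area_validation_metric(model_samples, exp_samples, grid_points=1000):
    """Area between the empirical CDFs of model and experimental samples.

    The result has the units of the quantity of interest, which makes it
    easier to interpret than a dimensionless score."""
    lo = min(model_samples.min(), exp_samples.min())
    hi = max(model_samples.max(), exp_samples.max())
    x = np.linspace(lo, hi, grid_points)
    cdf_model = np.searchsorted(np.sort(model_samples), x, side="right") / len(model_samples)
    cdf_exp = np.searchsorted(np.sort(exp_samples), x, side="right") / len(exp_samples)
    return np.trapz(np.abs(cdf_model - cdf_exp), x)

# Synthetic example: a slightly biased model vs. sparse, noisy experiments
rng = np.random.default_rng(0)
model = rng.normal(10.0, 1.0, 5000)      # many cheap model evaluations
experiment = rng.normal(10.5, 1.2, 50)   # few expensive measurements
print(area_validation_metric(model, experiment))
```

Note how both model uncertainty (spread of the model samples) and experimental uncertainty (spread and scarcity of the measurements) feed directly into the metric's value.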
We present a direct numerical simulation of a temporal jet between n-dodecane and diluted air undergoing spontaneous ignition at conditions relevant to low-temperature diesel combustion. The jet thermochemical conditions were selected to result in two-stage ignition. Reaction rates were computed using a 35-species reduced mechanism which included both the low- and high-temperature reaction pathways. The aim of this study is to elucidate the mechanisms by which low-temperature reactions promote high-temperature ignition under turbulent, non-premixed conditions. We show that low-temperature heat release in slightly rich fuel regions initiates multiple cool flame kernels that propagate towards very rich fuel regions through a reaction-diffusion mechanism. Although low-temperature ignition is delayed by imperfect mixing, the propagation speed of the cool flames is high: as a consequence, high-temperature reactions in fuel-rich regions become active early during the ignition transient. Because of this early start, high-temperature ignition, which occurs in fuel-rich regions, is faster than homogeneous ignition. Following ignition, the high-temperature kernels expand and engulf the stoichiometric mixture-fraction iso-surface, which in turn establishes edge flames that propagate along the iso-surface. The present results indicate the preponderance of flame folding of existing burning surfaces, and that ignition due to edge-flame propagation is of lesser importance. Finally, a combustion mode analysis that extends an earlier classification [1] is proposed to conceptualize the multi-stage and multi-mode nature of diesel combustion and to provide a framework for reasoning about the effects of different ambient conditions on diesel combustion.
Freight transportation represents about 9.5% of U.S. GDP, is responsible for about 8% of greenhouse gas emissions, and supports the import and export of about $3.6 trillion in international trade. It is therefore important that the national freight transportation system be designed and operated efficiently. Hence, this paper develops a mathematical model to estimate international and domestic freight flows across ocean, rail, and truck modes, which can be used to study the impacts of changes in our infrastructure, as well as the imposition of new user fees and changes in operating policies. The model integrates a user equilibrium-based logit argument for path selection with a system-optimal argument for rail network operations. This leads to the development of a unique solution procedure that is demonstrated in a large-scale analysis focused on all intercity freight and U.S. export/import containerized freight. The model results are compared with the reported flow volumes. The model is applied to two case studies: (1) a disruption of the seaports of Los Angeles and Long Beach (LA and LB) similar to the impacts that would be felt in an earthquake; and (2) implementation of new user fees at the California ports.
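The logit path-selection ingredient can be illustrated in a few lines: each origin-destination pair splits its demand across candidate paths in proportion to an exponential function of generalized path cost. The sketch below is generic; the cost values and the sensitivity parameter theta are hypothetical, not the paper's calibrated ones:

```python
import numpy as np

def logit_path_shares(path_costs, theta):
    """Multinomial logit split of freight demand across alternative paths.

    theta sets cost sensitivity; costs are generalized (transit time,
    handling, user fees) in consistent units."""
    u = -theta * np.asarray(path_costs, dtype=float)
    u -= u.max()                 # subtract max for numerical stability
    w = np.exp(u)
    return w / w.sum()

# Hypothetical generalized costs ($) for three ocean/rail/truck paths
costs = [1200.0, 1350.0, 1500.0]
print(logit_path_shares(costs, theta=0.01))  # shares favor cheaper paths
```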
Active participation in international R&D is crucial for achieving the Spent Fuel Waste Science & Technology (SFWST) long-term goals of conducting "experiments to fill data needs and confirm advanced modeling approaches" and of having a "robust modeling and experimental basis for evaluation of multiple disposal system options" (by 2020). DOE's Office of Nuclear Energy (NE) has developed a strategic plan to advance cooperation with international partners. The international collaboration on the evaluation of crystalline disposal media at Sandia National Laboratories (SNL) in FY18 focused on the collaboration through the Development of Coupled Models and their Validation against Experiments (DECOVALEX-2019) project. The DECOVALEX project is an international research and model comparison collaboration, initiated in 1992, for advancing the understanding and modeling of coupled thermo-hydro-mechanical-chemical (THMC) processes in geological systems. SNL has been participating in three tasks of the DECOVALEX project: Task A. Modeling gas injection experiments (ENGINEER), Task C. Modeling groundwater recovery experiment in tunnel (GREET), and Task F. Fluid inclusion and movement in the tight rock (FINITO). FY18 work focused on Task C and on preparing the interim reports for the three tasks in which SNL has been involved. The major accomplishments are summarized in this report.
MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission. MELCOR is a fully integrated code (encompassing the reactor coolant system and the containment building) that models the progression of postulated accidents in light water reactor power plants. It provides a capability for independently auditing analyses submitted by reactor manufacturers and utilities. In order to assess the adequacy of containment thermal-hydraulic modeling incorporated in the MELCOR code, a key containment test facility was analyzed. This report documents MELCOR code calculations for simulating steam-water blowdown tests performed in the Heissdampfreaktor (HDR) decommissioned containment facility located near Frankfurt, Germany. These tests are a series of blowdown experiments in a large-scale test facility, including some tests with the addition of hydrogen release, which are intended to simulate a variety of postulated breaks inside large containment buildings. The key objectives of this MELCOR assessment are to study: (1) the expansion and transport of high-energy steam-water releases, (2) heat and mass transfer to structural passive heat sinks, and (3) containment gas mixing and stratification. Moreover, MELCOR results are compared to those of the CONTAIN code for the same tests.
Applications of the severe accident analysis code MELCOR, developed for the U.S. Nuclear Regulatory Commission (NRC) by Sandia National Laboratories (SNL), have been supported by the graphical user-interface and post-processing suite Symbolic Nuclear Analysis Package (SNAP), developed for the NRC by Applied Programming Technology (APT). With the release of MELCOR 2.2, new user functionality and models have been introduced, and an update to the SNAP MELCOR plugin user interface is necessary to access these new features. This document describes all new features introduced into MELCOR for the development team at APT as well as for the NRC.
Material equation-of-state (EOS) models, generally providing the pressure and internal energy for a given density and temperature, are required to close the equations of hydrodynamics. As a result they are an essential piece of physics used to simulate inertial confinement fusion (ICF) implosions. Historically, EOS models based on different physical/chemical pictures of matter have been developed for ICF-relevant materials such as the deuterium (D2) or deuterium-tritium (DT) fuel, as well as candidate ablator materials such as polystyrene (CH), glow-discharge polymer (GDP), beryllium (Be), carbon (C), and boron carbide (B4C). The accuracy of these EOS models can directly affect the reliability of ICF target design and understanding, as shock timing and material compressibility are essentially determined by the EOS models used in ICF simulations. Systematic comparisons of current EOS models, benchmarked against experiments, not only help us understand what the model differences are and why they occur, but also identify the state-of-the-art EOS models for ICF target designers to use. For this purpose, the first Equation-of-State Workshop, supported by the US Department of Energy's ICF program, was held at the Laboratory for Laser Energetics (LLE), University of Rochester, on 31 May–2 June 2017. This paper presents a detailed review of the findings from this workshop: (1) 5–10% model-model variations exist throughout the relevant parameter space, and can be much larger in regions where ionization and dissociation are occurring; (2) the D2 EOS is particularly uncertain, with no single model able to match the available experimental data, and this drives similar uncertainties in the CH EOS; and (3) new experimental capabilities such as Hugoniot measurements around 100 Mbar and high-quality temperature measurements are essential to reducing EOS uncertainty.
The On-Line Waste Library is a website that contains information regarding United States Department of Energy-managed high-level waste, spent nuclear fuel, and other wastes that are likely candidates for deep geologic disposal, with links to supporting documents for the data. This report provides supporting information for the data for which an already published source was not available.
In order to increase neutron yield in fusion experiments on the Magnetized Liner Inertial Fusion (MagLIF) platform, it is important to maximize the energy coupled to the fuel during the laser-preheat stage. However, laser-energy coupling is limited by laser-plasma instabilities (LPI). In this regard, the Pecos facility at Sandia National Laboratories uses the Z-Beamlet laser to test and study the effects of LPI on MagLIF-relevant targets. In particular, stimulated Raman scattering (SRS) is measured at Pecos using two photo-diodes and a near-beam imager. The measurements from the photo-diodes are processed using a synthetic spectrum based on a Gaussian model. With this relatively simple model, the mean wavelength and intensity of backscattered light can be deduced. Our measurements show trends similar to those given by time-resolved spectrometer data. Hence, this model provides a simple way to approximate time-resolved spectra of light backscattered by SRS.
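To make the photo-diode inference concrete, the sketch below shows one way a Gaussian synthetic-spectrum model could recover a mean wavelength and amplitude from two filtered diode signals. The filter curves, spectral width, and wavelengths here are hypothetical placeholders, not the Pecos instrument values:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical transmission curves for the two filtered photo-diodes
wl = np.linspace(400.0, 700.0, 601)                  # wavelength grid (nm)
filt1 = np.exp(-0.5 * ((wl - 500.0) / 30.0) ** 2)
filt2 = np.exp(-0.5 * ((wl - 600.0) / 30.0) ** 2)

def diode_signals(amplitude, mean_wl, width=20.0):
    """Signals a Gaussian backscatter spectrum produces on each diode."""
    spectrum = amplitude * np.exp(-0.5 * ((wl - mean_wl) / width) ** 2)
    return np.array([np.trapz(spectrum * filt1, wl),
                     np.trapz(spectrum * filt2, wl)])

measured = diode_signals(1.0, 545.0)                 # synthetic "measurement"
fit = least_squares(lambda p: diode_signals(p[0], p[1]) - measured,
                    x0=[0.5, 520.0])
print(fit.x)  # recovered amplitude and mean wavelength
```

Repeating this fit at each time sample of the diode traces is what yields an approximate time-resolved spectrum.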
This is a work planning document that describes technical and programmatic goals for disposition of spent nuclear fuel (SNF) that is currently in dry storage in dual-purpose canisters (DPCs), or will be in the foreseeable future. It then describes how those goals can be promoted by a research and development (R&D) program. The needed R&D is compared to the ongoing work supported by the U.S. Department of Energy in FY18, and planned for FY19 and beyond. Some additional R&D activities are recommended, and plans are presented for technical integration activities that address the efficacy of the Direct Disposal of DPCs program (WBS 1.08.01.03.05), and integration with the overall Disposal Research program (WBS 1.08.01.03). The planned deliverable for this work package in FY19 (M2SF-1951\1010305051-Analysis of Solutions for DPC Disposal; 6/19/19) will be the product of this workplan. The deliverable will evaluate technical options for DPC direct disposal, taking into account the range of past and current DPC designs in the existing fleet. It will describe a set of goals for successful disposition of spent fuel in DPCs. It will analyze the scope and timing of needed R&D activities (R&D Plan), and discuss the uses of generic and site-specific analyses. Where appropriate, it will use alternative management cases to represent how DPC direct disposal could be incorporated in the overall geologic disposal program, given uncertainties in program direction and funding.
The Federal Radiological Monitoring and Assessment Center (FRMAC) relies on accurate and defensible analytical laboratory data to support its mission. FRMAC Laboratory Analysis personnel are responsible for (1) receiving samples, (2) managing samples, and (3) providing data quality assurance. Currently, the RadResponder software application does not meet all these needs. With some modifications, RadResponder could meet the needs for sample receiving functions, but it does not meet the needs of sample management and data quality assurance functions. The FRMAC Laboratory Analysis team has discussed and reviewed the following options moving forward:

Option 1: Make minor revisions to RadResponder to improve sample receiving capability, purchase and configure a commercial laboratory information management system (LIMS) to perform sample management and data quality assurance, and build an interface between RadResponder and the commercial-off-the-shelf LIMS.

Option 2: Make major revisions to RadResponder for all FRMAC Laboratory Analysis functions to support required sample management and data quality assurance activities.

Option 3: Create a custom-built LIMS to interface with RadResponder.

Note: All three options will require the development of a Laboratory Analysis web portal and will require funding for ongoing maintenance and training.

The FRMAC Laboratory Analysis team highly recommends Option 1 as the best and most efficient path forward. Commercial-off-the-shelf LIMS products have been proven successful in the laboratory community for decades. Option 1 leverages these proven technologies and takes advantage of RadResponder's current strengths.
This milestone presents a demonstration of an average surface mapping model that maps single-phase average wall temperatures from STAR-CCM+ to Cobra-TF using a multiplier that is linearly dependent on axial and azimuthal coordinates of the Cobra-TF mesh. The work presented herein lays the foundation for adding greater complexity to the average surface mapping model such as fluid property dependence. This average surface mapping model will be incorporated into the surface mapping model developed by Lindsay Gilkey to map fluctuations from the mean surface temperatures.
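A minimal sketch of how such a linearly varying multiplier might be fit is shown below, assuming paired average wall temperatures are available at matching axial (z) and azimuthal (theta) locations of the Cobra-TF mesh; the function names and data here are hypothetical, not the milestone's actual implementation:

```python
import numpy as np

def fit_linear_multiplier(z, theta, t_cfd, t_ctf):
    """Least-squares fit of m(z, theta) = a + b*z + c*theta such that
    t_ctf * m approximates the STAR-CCM+ average wall temperature."""
    A = np.column_stack([np.ones_like(z), z, theta])
    ratio = t_cfd / t_ctf                    # target multiplier per node
    coef, *_ = np.linalg.lstsq(A, ratio, rcond=None)
    return coef                              # [a, b, c]

# Hypothetical paired data on the Cobra-TF mesh
rng = np.random.default_rng(1)
z = rng.uniform(0.0, 3.7, 200)               # axial position (m)
theta = rng.uniform(0.0, 2 * np.pi, 200)     # azimuthal angle (rad)
t_ctf = 560.0 + 20.0 * z / 3.7               # coarse subchannel wall T (K)
t_cfd = t_ctf * (1.01 + 0.004 * z + 0.002 * theta)  # CFD with linear bias
print(fit_linear_multiplier(z, theta, t_cfd, t_ctf))
```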
Objectives for Multi-Physics Simulations include:

- Provide a systematic framework for multi-process modeling: Conduct parallel model development efforts that cover the technical areas needed to support criticality consequence screening in performance assessment (PA) and that will be more closely integrated as development proceeds.

- Investigate separate effects: Allow partitioning of the overall waste package (WP) internal criticality multi-physics modeling effort during development activities, for study of specific processes that can later be coupled if warranted by interpretation of results.

- Study scaling and bounding approaches: Where possible, represent criticality consequences in PA using simplification of uncertain criticality event frequency and magnitude, bounding of consequences for screening purposes, and scaling of consequences to multiple WPs.

- Integration among participants: Multiple modeling teams (mainly SNL and ORNL, and their collaborators) will work on different parts of the in-package criticality phenomenology. Insights generated this way will be combined for more realistic coupled modeling, and for validation.
This SAND Report Guide offers support to authors, technical writers, principal investigators, and others involved in the process of creating, formatting, or refining a SAND Report. It details what you need to know before you begin compiling a SAND Report, directs you to the SAND Report templates, outlines the order of elements in a SAND Report, and explains what to do when your report is completed and ready for Review and Approval and subsequent distribution. Supporting information is provided in the appendix, such as guidance on styles, where to get technical assistance, trademarks, Microsoft Word, and equations.
Understanding performance is the key to risk management in energy storage project financing. Technical performance underlies both capital and operating costs, directly impacting the system's economic performance. Since project development is an exercise in risk management, financing costs are the clearest view into how lenders perceive a project's riskiness. Addressing this perception is the challenge facing the energy storage industry today. Growth in the early solar market was hindered until OEMs and project developers used verifiable performance to allay lenders' apprehension about the long-term viability of those projects. The energy storage industry is similarly laying the groundwork for sustained growth through better technical standards and best practices. However, the storage industry remains far more complex than other markets, so lenders need better data, analytical tools, and performance metrics to invest not only to maximize returns, but also safely, by incorporating more precise performance metrics into the project's documents.
This document details the computational fluid dynamics and system-level modeling, including a mechanistic representation, of a Terry turbopump. Data and modeling results from this effort show that a Terry turbine, flowing air (or steam) at a certain rate, can develop the same power at two very different speeds; this has large implications for understanding how a boiling water reactor's reactor core isolation cooling system, or a pressurized water reactor's turbine-driven auxiliary feedwater system, would respond to a loss of electrical power for Terry turbine speed governing. This work has provided insights into modeling uncertainties and provides confirmation for the experimental efforts on the Terry turbopump expanded operating band being conducted at Texas A&M University.
This document summarizes research performed under the Laboratory Directed Research and Development (LDRD) project titled Developing Fugitive Emissions Sensor Networks: New Optimization Algorithms for Monitoring, Measurement and Verification. The purpose of this project is to develop methods and software to enhance detection programs through optimal design of the sensor network. This project includes both software development and field work. While this project is focused on methane emissions, the sensor placement optimization framework can be applied to a wide range of applications, including the placement of water quality sensors, surveillance cameras, and fire and chemical detectors. This research has the potential to improve national security by improving the way sensors are deployed in the field.
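While the project's own software is described elsewhere, the core sensor-placement idea can be sketched generically: choose sensor locations that maximize the number of leak scenarios detected. The detection matrix below is synthetic, and greedy selection is just one simple heuristic for this coverage objective:

```python
import numpy as np

def greedy_placement(coverage, n_sensors):
    """Greedy maximum-coverage sensor placement.

    coverage[i, j] = 1 if candidate location i detects leak scenario j.
    Greedy selection carries a (1 - 1/e) guarantee for this submodular
    coverage objective."""
    covered = np.zeros(coverage.shape[1], dtype=bool)
    chosen = []
    for _ in range(n_sensors):
        gains = (coverage.astype(bool) & ~covered).sum(axis=1)
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= coverage[best].astype(bool)
    return chosen, covered.mean()

# Synthetic detection matrix: 30 candidate sites x 100 leak scenarios
rng = np.random.default_rng(2)
cov = (rng.random((30, 100)) < 0.1).astype(int)
sites, frac = greedy_placement(cov, n_sensors=5)
print(sites, f"{frac:.0%} of scenarios detected")
```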
The state of stress in the earth is complicated, and it is difficult to determine all three components and directions of the stress. However, the state of stress affects all activities that take place in the earth, from causing earthquakes on critically stressed faults, to affecting production from hydraulically fractured shale reservoirs, to determining closure rates around a subterranean nuclear waste repository. Current state-of-the-art methods commonly have errors in magnitude and direction of up to 40%. This is especially true for the intermediate principal stress. This project seeks to better understand the methods used to determine the state of stress in the earth and to improve upon current methods to decrease the uncertainty in the measurement. This is achieved by a multipronged experimental investigation closely coupled with advanced constitutive and numerical modeling.
A new technical basis on the mechanics of energetic materials at the individual particle scale has been developed. Despite these particles being in most of our Sandia non-nuclear explosive components, we have historically lacked any understanding of particle behavior. Through the novel application of nanoindentation methods to single-crystal films and single particles of energetic materials with complex shapes, discovery data has been collected elucidating phenomena of particle strength, elastic and plastic deformation, and fracture. This work specifically developed the experimental techniques and analysis methodologies to distill data into relationships suitable for future integration into particle-level simulations of particle reassembly. This project utilized experimental facilities at CINT and the Explosive Components Facility to perform ex-situ and in-situ nanoindentation experiments with simultaneous scanning electron microscope (SEM) imaging. Data collected under an applied axial compressive load in either force control or displacement control were well represented by Hertzian contact theory for linear elastic materials. Particle fracture phenomenology was effectively modeled by an empirical damage model.
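The Hertzian fit referenced above relates applied load to indentation depth. For a spherical contact of effective radius R pressed to depth δ:

```latex
F = \frac{4}{3}\,E^{*}\sqrt{R}\,\delta^{3/2}, \qquad
\frac{1}{E^{*}} = \frac{1-\nu_1^{2}}{E_1} + \frac{1-\nu_2^{2}}{E_2}
```

where E* is the reduced modulus of the particle/indenter pair and ν are the Poisson's ratios. Fitting measured force-displacement curves to this 3/2-power law is the standard route from raw nanoindentation data to elastic properties.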
Stochastic optimization is a fundamental field of research for machine learning. Stochastic gradient descent (SGD) and related methods provide a feasible means to train complicated prediction models over large datasets. SGD, however, does not explicitly address the problem of overfitting, which can lead to predictions that perform poorly on new data. This difference between loss performance on unseen testing data versus that on training data defines the generalization gap of a model. We introduce a new computational kernel called Stochastic Hessian Projection (SHP) that uses a maximum likelihood framework to simultaneously estimate gradient noise covariance and local curvature of the loss function. Our analysis illustrates that these quantities affect the evolution of parameter uncertainty and therefore generalizability. We show how these computations allow us to predict the generalization gap without requiring holdout data. Explicitly assessing this metric for generalizability during training may improve machine learning predictions when data is scarce and understanding prediction variability is critical.
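The SHP kernel itself is specific to this work, but one of its ingredients, estimating the gradient noise covariance from minibatch gradients, can be sketched generically. The data below are synthetic, and the routine is only the empirical-covariance step, not the full maximum likelihood kernel:

```python
import numpy as np

def gradient_noise_covariance(per_batch_grads):
    """Empirical mean and covariance of minibatch gradients.

    per_batch_grads: (n_batches, n_params) array; the scatter of the
    rows about their mean estimates the gradient noise covariance."""
    g = np.asarray(per_batch_grads)
    mean_grad = g.mean(axis=0)
    centered = g - mean_grad
    cov = centered.T @ centered / (len(g) - 1)
    return mean_grad, cov

# Synthetic: 64 minibatch gradients for a 10-parameter model
rng = np.random.default_rng(3)
grads = np.linspace(-1.0, 1.0, 10) + rng.normal(0.0, 0.2, size=(64, 10))
mean_grad, cov = gradient_noise_covariance(grads)
print(mean_grad.shape, cov.shape)   # (10,), (10, 10)
```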
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
This document is a user's guide for capabilities that are not considered mature but are available in Sierra/SolidMechanics (Sierra/SM) for early adopters. The maturity of a capability is determined by many aspects: regression- and verification-level testing, documentation of functionality and syntax, and usability are such considerations. Capabilities in this document are lacking in one or more of these aspects.
In this work we examine approaches for using implementation diversity to disrupt or disable hardware trojans. We explore a variety of general frameworks for building diverse variants of circuits in voting architectures, and examine the impact of these on attackers and defenders mathematically and empirically. This work is augmented by analysis of a new majority voting technique. We also describe several automated approaches for generating diverse variants of a circuit and empirically study the overheads associated with these. We then describe a general technique for targeting functional circuit modifications to hardware trojans, present several specific implementations of this technique, and study the impact that they have on trojanized benchmark circuits.
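As a generic illustration of voting over diverse variants (not the paper's specific new technique), the sketch below shows why diversity helps: a trojan that triggers in a single variant is outvoted so long as a majority of variants remain untampered:

```python
from collections import Counter

def majority_vote(outputs):
    """Bitwise-identical majority vote over diverse circuit variant outputs.

    Returns the winning value and whether it held a strict majority."""
    value, n = Counter(outputs).most_common(1)[0]
    return value, n > len(outputs) // 2

# Three diverse implementations; a trojan flips bits in one variant
clean, trojaned = 0b1011, 0b0011
print(majority_vote([clean, clean, trojaned]))  # (0b1011, True)
```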
In this article, we describe a prototype cosimulation framework using Xyce, GHDL and CocoTB that can be used to analyze digital hardware designs in out-of-nominal environments. We demonstrate current software methods and inspire future work via analysis of an open-source encryption core design. Note that this article is meant as a proof-of-concept to motivate integration of general cosimulation techniques with Xyce, an open-source circuit simulator.
The goal of the DOE OE Energy Storage System Safety Roadmap is to foster confidence in the safety and reliability of energy storage systems. There are three interrelated objectives to support the realization of that goal: research, codes and standards (C/S), and communication/coordination. The objective focused on C/S is "to apply research and development to support efforts focused on ensuring that codes and standards are available to enable the safe implementation of energy storage systems in a comprehensive, non-discriminatory and science-based manner."
The SNL Engineered Barrier System (EBS) International activities were focused on two main collaborative efforts for FY18: 1) benchmarking semi-analytical codes used for thermal analysis, and 2) benchmarking of reactive transport codes (including PFLOTRAN) used for chemical evolution of cementitious EBS components. The former topic was completed over the course of FY18, while the latter began in the second half of FY18 under the aegis of additional appropriations, scoped as "Additional FY18 Activities". This report contains a complete summary of Item #1, as well as a status update on the progress of Item #2.
Photodetectors sensitive to the ultraviolet spectrum were demonstrated using an AlGaN high electron mobility transistor with a GaN nanodot optical floating gate. A peak responsivity of 2 × 10^9 A/W was achieved with a gain-bandwidth product > 1 GHz at a cut-on energy of 4.10 eV. Similar devices exhibited visible-blind rejection ratios > 10^6. The photodetection mechanism for β-Ga2O3 was also investigated. It was concluded that Schottky barrier lowering by self-trapped holes enables photodetector gain.
Cheap and efficient ion conducting separators are needed to improve efficiency and lifetime in fuel cells, batteries, and electrolyzers. Current state-of-the-art polymeric separators are made from Nafion, which is too expensive to be competitive with other technologies. Sandia has developed unique polymer separators that have lower cost and equivalent or superior ion transport compared to Nafion. These membranes consist of sulfonated Diels-Alder poly(phenylene) (SDAPP), a completely hydrocarbon polymer that conducts protons when hydrated. SDAPP membranes are thermally and chemically robust, with conductivities rivaling those of Nafion at high sulfonation levels. However, rational design of new separators requires molecular-level knowledge, currently unknown, of how polymer morphology affects transport. Here we describe the use of multiple computational and experimental techniques to understand the nanoscale morphology and water/proton transport properties in a series of sulfonated SDAPP membranes over a wide range of temperature, hydration, and sulfonation conditions.
“Heat waves” is a colloquial term used to describe convective currents in air formed when different objects in an area are at different temperatures. In the context of Digital Image Correlation (DIC) and other optical-based image processing techniques, imaging an object of interest through heat waves can significantly distort the apparent location and shape of the object. There are many potential heat sources in DIC experiments, including but not limited to lights, cameras, hot ovens, and sunlight, yet error caused by heat waves is often overlooked. This paper first briefly presents three practical situations in which heat waves contributed significant error to DIC measurements to motivate the investigation of heat waves in more detail. Then the theoretical background of how light is refracted through heat waves is presented, and the effects of heat waves on displacements and strains computed from DIC are characterized in detail. Finally, different filtering methods are investigated to reduce the displacement and strain errors caused by imaging through heat waves. The overarching conclusions from this work are that errors caused by heat waves are significantly higher than typical noise floors for DIC measurements, and that the errors are difficult to filter because the temporal and spatial frequencies of the errors are in the same range as those of typical signals of interest. Therefore, eliminating or mitigating the effects of heat sources in a DIC experiment is the best solution to minimizing errors caused by heat waves.
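To make the filtering difficulty concrete, the sketch below applies the simplest possible temporal filter, a moving average along the frame axis of a DIC displacement series. The data are synthetic, and the point is qualitative: averaging suppresses fast heat-wave fluctuations but also smooths any real signal occupying the same frequency band:

```python
import numpy as np

def temporal_moving_average(displacements, window=15):
    """Moving-average filter along the frame axis (axis 0) of a DIC
    displacement series, intended to suppress quasi-random apparent
    motion caused by heat waves."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda u: np.convolve(u, kernel, mode="same"), 0, displacements)

# Synthetic: slow displacement ramp plus heat-wave-like fluctuations
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 300)
true = 0.05 * t[:, None] * np.ones((1, 10))            # true motion (mm)
noisy = true + 0.01 * rng.standard_normal((300, 10))   # apparent motion
print(np.abs(temporal_moving_average(noisy) - true).mean())
```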
Fibers doped with Yb3+ serve as optical amplification elements in many high-power amplification systems, and there is interest in significantly extending the capabilities of rare-earth-doped fiber amplifiers to space-based systems. We investigate the effects of gamma-radiation-induced photodarkening on the performance of such fibers, in both passive and active configurations. With an emphasis on low total ionizing doses, passive irradiations were found to show increased absorption across the visible and IR spectrum. Furthermore, continuous pumping of an Yb3+-doped fiber amplifier in a gamma radiation environment was found to produce significantly greater degradation than a similar intermittently pumped irradiated amplifier for low total ionizing doses of under 10 krad(Si) [100 Gy(Si)]. We discuss the implications of the data, which provide insight into energy-transfer mechanisms in the fibers and into the relationship between gamma-radiation-induced photodarkening and the pump-radiation-induced photodarkening associated with the observed fiber degradation.
The 2018 NRC HEAF tests were conducted in Chalfont, Pennsylvania, at KEMA High Power Laboratory during the week of September 10th. These scoping tests were executed to determine the most effective measurement methodologies for future tests. The goal of Sandia's Photometrics group was to provide high-speed quantitative and qualitative imaging of the arcing fault tests for the Nuclear Regulatory Commission. The measurement methods included visible high-speed imaging, high-speed high-dynamic-range visible imaging, thermal imaging, and quantitative flow imaging. In addition, data fusion products were generated to visualize instrumentation data and imaging measurements. All imaging has been time-synchronized to the start of the arcing event.
This milestone presents a demonstration of a surface mapping model that maps single-phase surface temperatures from STAR-CCM+ to Cobra-TF using average temperature data. This model can be used to generate high-resolution surface temperature data, accomplished with linear equations or with an alternative non-linear model. Improvements and a path forward for applying the surface mapping model to two-phase temperature mappings are also laid out in this milestone report.
The overall goal of this work was to perform an in-depth analysis of resilience schemes adapted to the Asynchronous Many-Task (AMT) programming and execution model with the goal of informing the Sandia Advanced Simulation and Computing (ASC) program's application development strategy for next generation platforms (NGPs).
The biotransport of intravascular nanoparticles (NPs) is influenced by both the complex cellular flow environment and the NP characteristics. Being able to computationally simulate such intricate transport phenomena with high efficiency is of far-reaching significance to the development of nanotherapeutics, yet challenging due to the large length-scale discrepancy between NPs and red blood cells (RBCs) as well as the complexity of nanoscale particle dynamics. Recently, a lattice-Boltzmann (LB) based multiscale simulation method has been developed to capture both NP-scale and cell-level transport phenomena at high efficiency. The basic components of this method include the LB treatment of the fluid phase, a spectrin-link method for RBCs, and a Langevin dynamics (LD) approach to capturing the motion of the suspended NPs. Comprehensive two-way coupling schemes are established to capture accurate interactions between each component. The accuracy and robustness of the LB-LD coupling method are demonstrated through the relaxation of a single NP with initial momentum and the self-diffusion of NPs. This approach is then applied to study the migration of NPs in micro-vessels under physiological conditions. It is shown that Brownian motion is most significant for the NP distribution in 20 μm venules. For 1–100 nm particles, Brownian diffusion is the dominant radial diffusive mechanism compared to RBC-enhanced diffusion. For ~500 nm particles, Brownian diffusion and RBC-enhanced diffusion are comparable drivers of the particle radial diffusion process.
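The LD component can be illustrated in isolation. The sketch below integrates a Langevin equation for one suspended NP with Stokes drag toward the local fluid velocity and a thermal kick satisfying the fluctuation-dissipation theorem; the LB fluid solve and the two-way coupling are omitted, and the parameter values are merely plausible placeholders:

```python
import numpy as np

kB, T = 1.380649e-23, 310.0           # Boltzmann constant (J/K), temperature (K)
a, rho = 50e-9, 1.05e3                # NP radius (m) and density (kg/m^3)
mu = 1.2e-3                           # fluid viscosity (Pa*s)
m = rho * (4.0 / 3.0) * np.pi * a**3  # NP mass (kg)
zeta = 6.0 * np.pi * mu * a           # Stokes friction coefficient (kg/s)

def langevin_step(v, u_fluid, dt, rng):
    """One explicit step of m dv = -zeta (v - u) dt + sqrt(2 kB T zeta) dW.

    Drag couples the NP to the local fluid velocity u_fluid; the random
    kick magnitude follows the fluctuation-dissipation theorem."""
    drag = -zeta * (v - u_fluid)
    kick = np.sqrt(2.0 * kB * T * zeta / dt) * rng.standard_normal(3)
    return v + dt * (drag + kick) / m

rng = np.random.default_rng(5)
v = np.zeros(3)
for _ in range(1000):                 # dt well below the m/zeta relaxation time
    v = langevin_step(v, u_fluid=np.zeros(3), dt=1e-10, rng=rng)
print(v)  # thermalized velocity, components ~ sqrt(kB*T/m)
```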
State-of-the-art semiconductor processes have created high-performance and low-power-consumption technologies. There is a drive to use these technologies for commercial space, defense, and infrastructure applications.
In October 2017, Sandia broke ground for a new computing center dedicated to High Performance Computing. The east expansion of Building 725 was entirely conceived of, designed, and built in less than 18 months and is a certified LEED Gold building, the first of its kind for a data center in the State of New Mexico. This 15,000 square-foot building, with novel energy- and water-saving technologies, will house Astra, the first in a new generation of Advanced Architecture Prototype Systems to be deployed by the NNSA and the first of many HPC systems in Building 725 East.
Demonstrating the thermodynamic efficiency of hydrogen conversion processes using various materials is a critical step in developing new technologies for storing concentrated solar energy, and is largely accomplished by using a thermodynamic model derived from experimental data. A main goal of this project is to calculate the uncertainty of the thermodynamic efficiency by calculating the uncertainty of the components that feed into the efficiency. Many different models and data sets were used to test the workflow. First, the models were fit to the data using Bayesian inference and a method called Markov chain Monte Carlo (MCMC), which found the maximum a posteriori parameters and a posterior probability distribution of the parameters. Next, the different models were compared to each other using model evidence values. It was found that for cleaner data sets, overfitting had not yet been reached and the most complicated model was ideal, but on the noisier data sets the less complex models were favored because the more complicated models resulted in overfitting. Next, forward propagation was used to calculate the enthalpy change and its associated uncertainty. A few variations on the models were tried, such as fitting in a different variable, producing negligible or negative effects on the fits of the models. Thus, the original models were used. A sensitivity analysis was performed and used to calculate the model error. On the cleaner data sets there was very minimal experimental noise, and thus all resulting error was from the model. With consideration of the model error, the models fit the data very well, and the simpler model had a high model error, as expected. All these components will then be used to calculate the thermodynamic efficiency of the different materials.
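A minimal sketch of the MCMC step is shown below: a random-walk Metropolis sampler applied to a toy linear enthalpy model. The model form, data, and step sizes are hypothetical, not the project's actual materials models:

```python
import numpy as np

def metropolis(log_post, x0, n_steps=20000, step=0.1, seed=0):
    """Random-walk Metropolis sampler; returns the chain of samples.

    The chain's high-density region characterizes parameter uncertainty,
    and its best sample approximates the maximum a posteriori point."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy data: enthalpy H(T) = a + b*(T - 1000) with unit Gaussian noise
rng = np.random.default_rng(6)
T_data = np.linspace(600.0, 1400.0, 30)
H_data = 80.0 + 0.03 * (T_data - 1000.0) + rng.normal(0.0, 1.0, 30)
log_post = lambda p: -0.5 * np.sum((H_data - (p[0] + p[1] * (T_data - 1000.0))) ** 2)
chain = metropolis(log_post, x0=[75.0, 0.0], step=np.array([0.3, 0.001]))
print(chain[len(chain) // 2:].mean(axis=0))       # posterior mean after burn-in
```

Forward propagation then amounts to pushing the post-burn-in samples through the enthalpy (or efficiency) expression and reading off the spread of the results.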
We present novel stochastic optimization models to improve power systems resilience to extreme weather events. We consider proactive redispatch, transmission line hardening, and transmission line capacity increases as alternatives for mitigating expected load shed due to extreme weather. Our model is based on linearized or "DC" optimal power flow, similar to models in widespread use by independent system operators (ISOs) and regional transmission operators (RTOs). Our computational experiments indicate that proactive redispatch alone can reduce the expected load shed by as much as 25% relative to standard economic dispatch. This resiliency enhancement strategy requires no capital investment and is implementable by ISOs and RTOs solely through operational adjustments. We additionally demonstrate that transmission line hardening and increases in transmission capacity can, in limited quantities, be effective strategies to further enhance power grid resiliency, although at significant capital investment cost. We perform a cross-validation analysis to demonstrate the robustness of the proposed recommendations. Our proposed model can be augmented to incorporate a variety of other operational and investment resilience strategies, or combinations of such strategies.
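A toy version of the underlying DC optimal power flow with load shedding can be written as a small linear program. The three-bus network, line limits, and costs below are invented for illustration; in the paper's setting the hardening and capacity decisions would be first-stage variables optimized across weather scenarios:

```python
import numpy as np
from scipy.optimize import linprog

# 3-bus DC-OPF with load shedding. Variables: x = [g1, g2, shed3, th2, th3];
# bus 1 is the slack (th1 = 0), one line between each bus pair.
B, cap, d3 = 10.0, 0.4, 1.0        # susceptance, line limit, bus-3 load (p.u.)

# Nodal balance (flows f_ij = B*(th_i - th_j)):
A_eq = np.array([
    [1, 0, 0,    B,    B],         # bus 1: g1 = f12 + f13
    [0, 1, 0, -2*B,    B],         # bus 2: g2 = -f12 + f23
    [0, 0, 1,    B, -2*B],         # bus 3: f13 + f23 + shed3 = d3
])
b_eq = np.array([0.0, 0.0, d3])

# Line flow limits |B * angle difference| <= cap, both directions
A_ub = np.array([
    [0, 0, 0, -B,  0], [0, 0, 0,  B,  0],    # line 1-2
    [0, 0, 0,  0, -B], [0, 0, 0,  0,  B],    # line 1-3
    [0, 0, 0,  B, -B], [0, 0, 0, -B,  B],    # line 2-3
])
b_ub = np.full(6, cap)

c = np.array([0.01, 0.02, 1.0, 0.0, 0.0])    # shedding costs far more than generating
bounds = [(0, 0.7), (0, 0.5), (0, d3), (None, None), (None, None)]
res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
print(res.x[:3])  # dispatch g1, g2 and shed at bus 3 (~0.2 p.u. here)
```

Raising cap (a transmission capacity increase) drives the shed term to zero, which is exactly the trade-off such a stochastic model prices against the capital cost of the upgrade.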