Sandia National Laboratories (SNL) was contracted by the Defense Threat Reduction Agency (DTRA), through KBRwyle, to perform testing and evaluation of the SNL Smart Pre-concentrator (SPC) system and a COTS FTIR system procured by DTRA through KBRwyle. Two common chemical warfare agent simulants, dimethyl methylphosphonate and triethyl phosphate, were selected as the compounds of interest. SNL tested both systems using a COTS vapor generation system capable of delivering known concentrations of specific chemical compounds to both detection systems, with Sandia responsible for the SPC system. Both systems were benchmarked against COTS sorbent collection tubes analyzed by SNL with a laboratory GCMS system. Concentrations measured from tubes upstream of the FTIR system differed from the expected concentrations, while downstream tubes were mostly within 20% of the target concentration.
Dragonflies are known to be highly successful hunters (achieving a 90-95% success rate in nature) that implement a guidance law similar to proportional navigation to intercept their prey. This project tested the hypothesis that dragonflies are able to implement proportional navigation using prey-image translation on their eyes. The model dragonfly presented here calculates changes in pitch and yaw to maintain the prey's image at a designated location (the fovea) on a two-dimensional screen (the model's eyes). When the model also uses self-knowledge of its own maneuvers as an error signal to adjust the location of the fovea, its interception trajectory becomes equivalent to proportional navigation. I also show that this model can be applied successfully (in a limited number of scenarios) against maneuvering prey. My results provide a proof-of-concept demonstration of the potential of using the dragonfly nervous system to design a robust interception algorithm for implementation on a man-made system.
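For reference, proportional navigation can be stated compactly as a turn-rate command proportional to the rotation rate of the line of sight to the prey (a generic textbook form, not notation taken from the report):

    \dot{\gamma} = N \, \dot{\lambda},

where \gamma is the pursuer's heading, \lambda is the line-of-sight angle to the prey, and N > 1 is the navigation constant. The claim above is that foveating the prey image while correcting the foveal location with self-motion information reproduces this law.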
A finite element numerical analysis model of the Big Hill (BH) Strategic Petroleum Reserve (SPR) site has been upgraded. The model consists of a realistic mesh capturing the site geometry, uses the multi-mechanism deformation (M-D) salt constitutive model, and incorporates daily data on the wellhead pressure and the level of the oil-brine interface. The upgraded model includes the shear zone so that interbed behavior can be examined in a realistic manner. The salt creep rate is not uniform in the salt dome, and creep test data for BH salt are limited; therefore, a model calibration is necessary to simulate the geomechanical behavior of the salt dome. Cavern volumetric closures of SPR caverns calculated from sonar survey reports are used as the field baseline measurement. The structure factor, A2, and the transient strain limit factor, K0, in the M-D constitutive model are used for model calibration. An A2 value obtained experimentally from BH salt and the K0 value of WIPP salt are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and of the salt drawdown layer of elements surrounding each SPR cavern have been determined through a number of back-fitting analyses. The trendlines of the predictions and sonar data match well for BH 101, 103, 104, 106, 110, 111, 112, and 113. The prediction curves are close to the sonar data for BH 102 and 114. However, the prediction curves for BH 105, 107, 108, and 109 are not close to the sonar data. An inconsistency was found in the sonar data for BH 101, 104, 106, 107, and 112: in some time intervals the volume measured later is larger than the volume measured earlier, even after the leached volume is taken into account. Project discussions are needed to identify how these issues might be resolved and to determine the best path forward for future computer modeling attempts.
Adversarial machine learning is an active field of research that investigates the security of machine learning methods against cyber-attacks. An important branch of this field is adversarial examples, which seek to trick machine learning models into misclassifying inputs by maliciously tampering with input data. Because machine learning models are pervasive in diverse areas such as computer vision, health care, and national security, this vulnerability is a rapidly growing threat. With the increasing use of AI solutions, threats against AI must be considered before deploying systems in a contested space. Adversarial machine learning is a problem strongly tied to software security: like other, more common software vulnerabilities, it exploits a weakness in software, in this case in the components of machine learning models. During this project, we surveyed and replicated several adversarial machine learning techniques with the goal of developing capabilities for Sandia to advise on and defend against these threats. To accomplish this, we scanned state-of-the-art research for robust defenses against adversarial examples and applied them to a machine learning problem.
In this short article, we summarize a step-by-step methodology to forecast power output from a photovoltaic solar generator using hourly auto-regressive moving average (ARMA) models. We illustrate how to build an ARMA model, to use statistical tests to validate it, and construct hourly samples. The resulting model inherits nice properties for embedding it into more sophisticated operation and planning models, while at the same time showing relatively good accuracy. Additionally, it represents a good forecasting tool for sample generation for stochastic energy optimization models.
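A minimal sketch of the kind of hourly ARMA workflow summarized above, using statsmodels; the series, model order, and lag choices below are placeholders rather than the article's actual data or specification:

    # Minimal ARMA workflow sketch; placeholder data and orders, not the article's.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.stats.diagnostic import acorr_ljungbox

    rng = np.random.default_rng(0)
    power = rng.normal(size=500)                 # stand-in for a detrended hourly PV power series

    model = ARIMA(power, order=(2, 0, 1)).fit()  # ARMA(2,1) is ARIMA with d = 0
    print(model.summary())

    # Validation: residuals should behave like white noise (large Ljung-Box p-values).
    print(acorr_ljungbox(model.resid, lags=[24]))

    # Hourly sample generation for use in stochastic operation/planning models.
    samples = model.simulate(nsimulations=24, repetitions=100)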
Approximation algorithms for constraint satisfaction problems (CSPs) are a central direction of study in theoretical computer science. In this work, we study classical product state approximation algorithms for a physically motivated quantum generalization of Max-Cut, known as the quantum Heisenberg model. This model is notoriously difficult to solve exactly, even on bipartite graphs, in stark contrast to the classical setting of Max-Cut. Here we show, for any interaction graph, how to classically and efficiently obtain approximation ratios of 0.649 (anti-ferromagnetic XY model) and 0.498 (anti-ferromagnetic Heisenberg XYZ model). These are almost optimal; we show that the best possible ratios achievable by a product state for these models are 2/3 and 1/2, respectively.
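For concreteness, one common way to write the objective (up to an overall normalization; this is illustrative, not a restatement of the paper's exact definition) is, on an interaction graph G = (V, E) with non-negative edge weights w_{ij},

    H = \sum_{(i,j) \in E} w_{ij} \left( I - X_i X_j - Y_i Y_j - Z_i Z_j \right),

with the XY model obtained by dropping the Z_i Z_j terms, and the goal being a state maximizing \langle \psi | H | \psi \rangle. On a single edge the maximizer is the singlet (energy 4w for XYZ, 3w for XY), while the best product state achieves 2w, which is the origin of the 1/2 and 2/3 product-state limits quoted above.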
Here we present the development of the building blocks of a Josephson parametric amplifier (JPA), namely the superconducting quantum interference device (SQUID) and the inductive pick-up coil that permits current coupling from a quantum dot into the SQUID. We also discuss our efforts in making depletion-mode quantum dots using delta-doped GaAs quantum wells. Because quantum dot based spin qubits use very low-level (~10-100 pA), short-duration (1 μs to 1 ms) current signals for state preparation and readout, these systems require close-proximity cryogenic amplification to prevent signal corruption. Common amplification methods in these semiconductor quantum dots rely on heterojunction bipolar transistors (HBTs) and high electron mobility transistors (HEMTs) to amplify the readout signal from a single qubit. State-of-the-art HBTs and HEMTs produce approximately 10 µW of power when operating at high bandwidths. For few-qubit systems this level of heat dissipation is acceptable. However, when scaling up to several hundred or a thousand qubits, the heat load produced in a one-to-one amplifier-to-qubit arrangement would overload the cooling capacity of a common dilution refrigerator, which typically has a cooling power of ~100 µW at its base temperature. Josephson parametric amplifiers have been shown to dissipate ~1 pW of power with current sensitivities on par with HBTs and HEMTs and with bandwidths 30 times those of HBTs and HEMTs, making them attractive for multi-qubit platforms. In this report we describe in detail the fabrication process flow for developing inductive pick-up coils and the fabrication and measurement of NbTiN and Al/AlOx/Al SQUIDs.
Due to its balance of accuracy and computational cost, density functional theory has become the method of choice for computing the electronic structure and related properties of materials. However, present-day semi-local approximations to the exchange-correlation energy of density functional theory break down for materials containing d and f electrons. In this report we summarize the results of our research efforts within the LDRD 200202 titled "Making density functional theory work for all materials" in addressing this issue. Our efforts are grouped into two research thrusts. In the first thrust, we develop an exchange-correlation functional (BSC functional) within the subsystem functional formalism. It enables us to capture bulk, surface, and confinement physics with a single, semi-local exchange-correlation functional in density functional theory calculations. We present the analytical properties of the BSC functional and demonstrate that the BSC functional is able to capture confinement physics more accurately than standard semi-local exchange-correlation functionals. The second research thrust focuses on developing a database for transition metal binary compounds. The database consists of materials properties (formation energies, ground-state energies, lattice constants, and elastic constants) of 26 transition metal elements and 89 transition metal alloys. It serves as a reference for benchmarking computational models (such as lower-level modeling methods and exchange-correlation functionals). We expect that our database will significantly impact the materials science community. We conclude with a brief discussion on the future research directions and impact of our results.
Under the Department of Energy (DOE), Office of Nuclear Energy (NE), Gateway for Accelerated Innovation in Nuclear (GAIN) program, Sandia National Laboratories (SNL) was awarded DOE-NE GAIN voucher GA-19SN020107, "Risk-informed mechanistic source term calculations for a sodium fast reactor." Under this GAIN voucher, SNL supported the industry partner's development in preparation for licensing and commercialization by providing subject matter expertise on heat pipe technologies, providing computer code training and support, and performing first-of-a-kind experiments demonstrating the safety/risk impacts of heat pipe breach failures. The experiments had two primary goals: measure the peak heat fluxes that lead to heat pipe dry out and subsequent wall breach, and observe the consequences of catastrophic failure of a heat pipe wall. Intentional breaching of the heat pipe walls took advantage of heat pipe physics and operating limits. Large and nearly instantaneous heat fluxes were applied to the heat pipe to first cause localized dry out at the evaporator section, which then led to melting of the heat pipe wall. The hourglass heat pipe (Test 1) experienced dry out at 112 W/cm2; after 45 seconds, wall temperatures measured about 1,280°C and intentional failure of the heat pipe wall was achieved. The cylindrical heat pipe (Test 2) experienced dry out at 125 W/cm2; after 65 seconds, wall temperatures exceeded 1,400°C and intentional failure of the heat pipe wall was achieved. Both experiments characterize the parameters needed to cause heat pipe wall failure. Furthermore, the failure of the heat pipes characterizes the safety/risk impacts from the sodium-oxygen reactions that occur following the intentional failure. There were two major conclusions of these intentional failure tests: the heat pipes were able to continue operating beyond expected performance limits, and the failure behavior validated decades of operational experience.
This report summarizes the accomplishments and challenges of a two-year LDRD effort focused on improving design-to-simulation agility. The central bottleneck in most solid mechanics simulations is the process of taking CAD geometry and creating a discretization of suitable quality, i.e., the "meshing" effort. This report revisits meshfree methods and documents some key advancements that allow their use on problems with complex geometries, low quality meshes, nearly incompressible materials, or fracture. The resulting capability was demonstrated to be an effective part of an agile simulation process by enabling rapid discretization techniques without increasing the time to obtain a solution of a given accuracy. The first enhancement addressed boundary-related challenges associated with meshfree methods. When point clouds and Euclidean metrics are used to construct approximation spaces, boundary information is lost, which results in low accuracy solutions for non-convex geometries and material interfaces. This also complicates the application of essential boundary conditions. The solution involved the development of conforming window functions, which use graph and boundary information to directly incorporate boundaries into the approximation space.
Sandia's Z Pulsed Power Facility is able to dynamically compress matter to extreme states with exceptional uniformity, duration, and size, which are ideal for investigations of fundamental material properties at high energy density conditions. X-ray diffraction (XRD) is a key atomic-scale probe since it provides direct observation of the compression and strain of the crystal lattice, and is used to detect, identify, and quantify phase transitions. Because of the destructive nature of Z-Dynamic Materials Properties (DMP) experiments and the low signal-to-background emission levels of XRD, it is very challenging to detect the XRD pattern close to the Z-DMP load and to recover the data. We developed a new Spherical Crystal Diffraction Imager (SCDI) diagnostic to relay and image the diffracted x-ray pattern away from the load debris field. The SCDI diagnostic utilizes the Z-Beamlet laser to generate 6.2-keV Mn-Heα x-rays to probe a shock-compressed sample on the Z-DMP load. A spherically bent crystal composed of highly oriented pyrolytic graphite is used to collect and focus the diffracted x-rays into a 1-inch-thick tungsten housing, where an image plate is used to record the data. We performed experiments to implement the SCDI diagnostic on Z to measure the XRD pattern of shock-compressed beryllium samples at pressures of 1.8-2.2 Mbar.
This document details the development of modeling and simulation for existing plant security regimes, using identified target sets to link dynamic assessment methodologies by combining reactor system-level modeling with force-on-force modeling and 3D visualization for developing table-top scenarios. This work leverages an existing hypothetical example used for international physical security training, the Lone Pine nuclear power plant facility, for target sets and modeling.
The CTH multiphysics hydrocode is used in a wide variety of important calculations. An essential part of ensuring hydrocode accuracy and credibility is thorough code verification and validation (V&V). In the past, CTH V&V work (particularly verification) has not been consistently well documented. In FY19, we made substantial progress towards addressing this need. In this report, we present a new CTH V&V test suite composed of traditional hydrocode verification problems used by similar ASC codes as well as validation problems for some of the most frequently used materials models and capabilities in CTH. For the verification problems, we present not only results and computed errors, but also convergence rates. Validation problems include mesh refinement studies, providing evidence that results are converging.
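For reference, the convergence rates reported for such verification problems are typically the observed order of accuracy computed from errors on two successively refined meshes (a standard definition, not specific to CTH):

    p = \frac{\ln(e_{2h} / e_{h})}{\ln 2},

where e_{2h} and e_{h} are discrete error norms against the exact solution on the coarse and fine meshes for a refinement ratio of 2; the computed p is then compared with the scheme's formal order.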
This Laboratory Directed Research and Development (LDRD) effort performed fundamental research and development (R&D) to create a robust radar processing algorithm capable of distinguishing an Unmanned Aerial System (UAS) from a biological target such as a bird based solely on mathematics applied to the polarized radar returns of the target object. The current threat of using such a UAS as a delivery platform for a host of destructive components is a major concern for the protection of various assets. Most recently, on 14 September 2019, dozens of suicide or kamikaze drones (UAV-X) were used in a coordinated attack on two Saudi oil facilities that demonstrated the potential to disrupt global oil supplies. While radar-based UAS detection systems can detect UAS at ranges greater than 1 km, the issue of excessive Nuisance/False Alarm Rates (NAR/FAR) from natural sources (birds in particular) has not been sufficiently addressed. In this effort we describe and utilize Adaptive Polarization Difference Imaging (APDI) algorithms for the detection and automatic non-visual assessment of Unmanned Aerial Systems. Originally developed for optical imaging and sensing of polarization information in nature, the algorithms are modified here to serve target detection purposes in counter-UAS (cUAS) environments. We exploit the polarization statistics of the observed scene to detect and identify changes within the scene and use these changes for UAS/bird classification. Several cases are considered from independent data sources, including numerically generated data, anechoic chamber data, and experimental radar data, to show the applicability of the techniques developed here. The methods developed in this effort are designed for cUAS setups but have shown promise for a multitude of other radar-based classification uses as well.
Dai, Steve X.; Gao, Min; Tang, Xiao; Leung, Chung M.; Viswan, Ravindranath; Li, Jiefang; Viehland, Dwight D.
(Pb0.98, La0.02)(Zr0.95, Ti0.05)O3 (PLZT) thin films of 300 nm thickness were epitaxially deposited on (100), (110), and (111) SrTiO3 single crystal substrates by pulsed laser deposition. X-ray diffraction line and reciprocal space mapping scans were used to determine the crystal structure. Tetragonal ((001) PLZT) and monoclinic MA ((011) and (111) PLZT) structures were found, which influenced the stored energy density. Electric field-induced antiferroelectric to ferroelectric (AFE→FE) phase transitions were found to have a large reversible energy density of up to 30 J/cm3. With increasing temperature, an AFE to relaxor ferroelectric (AFE→RFE) transition was found. The RFE phase exhibited lower energy loss and an improved energy storage efficiency. The results are discussed from the perspective of crystal structure, dielectric phase transitions, and energy storage characteristics. In addition, unipolar drive was also performed, providing notably higher energy storage efficiency values due to low energy losses.
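For context, the recoverable energy density and efficiency quoted for such films are conventionally extracted from the measured polarization-electric field loops via the standard definitions (not restated from the paper):

    W_{rec} = \int_{P_r}^{P_{max}} E \, dP, \qquad \eta = \frac{W_{rec}}{W_{rec} + W_{loss}},

where P_r and P_{max} are the remanent and maximum polarizations and W_{loss} is the hysteresis loss per cycle; the lower loss of the RFE phase and of unipolar drive raises \eta directly.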
This communication is the final report for the project Utilizing Highly Scattered Light for Intelligence through Aerosols, funded by the Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories and lasting six months in 2019. Aerosols such as fog reduce visibility and cause downtime that is unacceptable for critical systems or operations. Information is lost due to the random scattering and absorption of light by tiny particles. Computational diffuse optical imaging methods show promise for interpreting the light transmitted through fog, enabling sensing and imaging to improve situational awareness at depths 10 times greater than current methods allow. Developing this capability first requires verification and validation of diffusion models of light propagation in fog. For this reason, analytical models were developed and compared to experimental data captured at the Sandia National Laboratories Fog Chamber facility. A methodology was developed to incorporate the propagation of scattered light through the imaging optics to a pixel array. The diffusion approximation to the radiative transfer equation was found to predict light propagation in fog under the appropriate conditions.
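For reference, the diffusion approximation referred to here replaces the radiative transfer equation with a diffusion equation for the fluence rate \Phi (standard form, valid when scattering dominates absorption):

    \frac{1}{c}\frac{\partial \Phi}{\partial t} - \nabla \cdot \left( D \nabla \Phi \right) + \mu_a \Phi = S, \qquad D = \frac{1}{3(\mu_a + \mu_s')},

where \mu_a and \mu_s' are the absorption and reduced scattering coefficients of the fog and S is the source term.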
An international safeguards mentoring program was established by Sandia National Laboratories (SNL) for early career university faculty. The inaugural year of the program focused on course material development and connecting faculty to experts at national laboratories. Two faculty members were selected for participation; one developed a safeguards-by-design course, and the other created lecture material related to unmanned robotic systems in safeguards for integration into existing courses. Faculty members were paired with SNL subject matter experts based on the topic of their individual projects. The program also included a two-week visit to SNL. The structure of this program not only supported the development of new course material, but also provided junior faculty members with an opportunity to make connections and build collaborations in the field of international safeguards. Programs like this are important for the professional development of faculty members and help strengthen connections between universities and the national laboratories.
SpinDX, often referred to as a "lab-on-a-disk," is a portable medical diagnostic platform that uses a disposable, centrifugal disk with microfluidic flow paths to manipulate a biological sample. This allows multiple tests to be carried out on a single sample with no preparation required. The device operates by distributing drops of raw, unprocessed samples into different channels that function as dozens of tiny test tubes. When the disk spins, the samples interact with test reagents inside the channels. If there is a chemical interaction between the sample and the reagent, the tip of the channel produces a fluorescent glow indicating that an infectious agent is present. The data are then transferred to a software interface that displays the test results.
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
The SAFER Sandbox is a web application for testing various types of data visualization tools that analyze safety data across national laboratories. By enabling users to explore these data through visual and textual analyses, the SAFER Sandbox helps facilitate research and support for the National Nuclear Security Administration (NNSA) Office of Safety, Infrastructure and Operations (NA-50). Safety data across the national laboratory complex are not currently analyzed in one location, which prevents administrators from viewing safety data trends and making large-scale decisions based on a holistic view of the data. By implementing data analytics web services such as representative text summary, noun phrase extraction, and graph exploration, users can view and explore safety data based on search queries. The data visualizations in the SAFER Sandbox reveal trends and point to items of concern in the safety data. The SAFER Sandbox is a testing ground for these visualizations.
This document archives the results developed by a Laboratory Directed Research and Development (LDRD) project sponsored by Sandia National Laboratories (SNL). In this work, a numerical study was performed to show the feasibility of approximating the non-linear operator of SNL's unique high-energy hyperspectral computed tomography (CT) system as a sequence of linear operators. The four main results of this work are: the development of a simulation test-bed using a particle-transport Monte Carlo approach; the demonstration that a linear operator of almost-arbitrary resolution can be assembled for a given narrow energy window; the development of a spline-based compression approach that dramatically reduces the size of the linear operator; and the demonstration of using the linear operator to process x-ray data, in this case through the development of an iterative reconstruction method. This numerical study indicates that if these results can be replicated on the SNL system, the improved performance could be revolutionary, because this approach to approximating the nonlinear operator of a hyperspectral CT system is not feasible on a traditional CT system.
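As an illustration of what an iterative reconstruction built on an assembled linear operator can look like, the sketch below uses a generic Landweber-type iteration; it is not the report's specific method, and the operator A and data y are simply assumed to be a dense matrix and a measured sinogram for one narrow energy window:

    # Generic Landweber iteration sketch, not the report's algorithm: iterative
    # reconstruction given a precomputed linear operator A and measured data y.
    import numpy as np

    def landweber(A, y, n_iter=200):
        """Iteratively solve y ~= A @ x for x, starting from zero."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # stable step: 1 / sigma_max(A)^2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x += step * A.T @ (y - A @ x)        # gradient step on 0.5 * ||y - A x||^2
            x = np.maximum(x, 0.0)               # attenuation values are non-negative
        return x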
Quantifying in-situ subsurface stresses and predicting fracture development are critical to reducing risks of induced seismicity and improving modern energy activities in the subsurface. In this work, we developed a novel integration of controlled mechanical failure experiments coupled with microCT imaging, acoustic sensing, modeling of fracture initiation and propagation, and machine learning for event detection and waveform characterization. Through additive manufacturing (3D printing), we were able to produce bassanite-gypsum rock samples with repeatable physical, geochemical, and structural properties. With these "geoarchitected" rocks, we identified the role of mineral texture orientation in fracture surface roughness. The impact of poroelastic coupling on induced seismicity was systematically investigated to improve mechanistic understanding of the post-shut-in surge of induced seismicity. This research lays the groundwork for characterizing seismic waveforms using multiphysics and machine learning approaches and for improving the detection of low-magnitude seismic events, leading to the discovery of hidden fault/fracture systems.
This work is a follow-on guide to running the Weather Research and Forecasting (WRF) model from Aur et al. (2018), Building and Running 1 DAAPS Models: IFRF Postdictions. This guide details running WRF in a nudged configuration, where the u and v wind components, temperature, and moisture within a specified spatial and temporal window are adjusted toward observations (radiosonde observations in this case) using WRF's observation nudging technique. The primary modifications to the methodology of Aur et al. (2018) are the use of the OBSGRID program to generate the nudging files and the updates to the namelist.input file. These steps, combined with those outlined in Aur et al. (2018), will generate a nudged WRF hindcast (or postdiction) simulation.
The following topics are considered in this presentation: (i) Overview of evidence theory, (ii) Representation of loss of assured safety (LOAS) with evidence theory for a 1 SL, 1 WL system, (iii) Description of the 2 SLs and 1 WL used for illustration, (iv) Plausibility and belief for LOAS and associated sampling-based verification calculations for a 2 SL, 1 WL system, (v) Plausibility and belief for margins associated with LOAS for a 2 SL, 1 WL system, (vi) Plausibility and belief for LOAS for a 2 SL, 2 WL system, (vii) Incorporation of evidence spaces for link temperature curves into LOAS calculations, (viii) Plausibility and belief for LOAS for WL/SL systems with SL subsystems, and (ix) Sampling-based procedures for the estimation of plausibility and belief.
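For readers less familiar with evidence theory, the plausibility and belief referred to throughout are the standard Dempster-Shafer measures built from a basic probability assignment m over focal sets:

    Bel(A) = \sum_{B \subseteq A} m(B), \qquad Pl(A) = \sum_{B \cap A \neq \emptyset} m(B), \qquad Bel(A) \le Pl(A),

so that belief and plausibility bound the probability of an event such as LOAS from below and above.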
Reed, B.W.; Moghadam, A.A.; Bloom, R.S.; Park, S.T.; Monterrosa, A.M.; Price, Patrick M.; Barr, C.M.; Briggs, S.A.; Hattar, Khalid M.; Mckeown, J.T.; Masiel, D.J.
We present kilohertz-scale video capture rates in a transmission electron microscope, using a camera normally limited to hertz-scale acquisition. An electrostatic deflector rasters a discrete array of images over a large camera, decoupling the acquisition time per subframe from the camera readout time. Total-variation regularization allows features in overlapping subframes to be correctly placed in each frame. Moreover, the system can be operated in a compressive-sensing video mode, whereby the deflections are performed in a known pseudorandom sequence. Compressive sensing in effect performs data compression before the readout, such that the reconstructed video can have substantially more total pixels than were read from the camera. This allows, for example, 100 frames of video to be encoded and reconstructed using only 15 captured subframes in a single camera exposure. We demonstrate experimental tests including laser-driven melting/dewetting, sintering, and grain coarsening of nanostructured gold, with reconstructed video rates up to 10 kHz. The results exemplify the power of the technique by showing that it can be used to study the fundamentally different temporal behavior of the three different physical processes. Both sintering and coarsening exhibited self-limiting behavior, whereby the process essentially stopped even while the heating laser continued to strike the material. We attribute this to changes in laser absorption and to processes inherent to thin-film coarsening. In contrast, the dewetting proceeded at a relatively uniform rate after an initial incubation time consistent with the establishment of a steady-state temperature profile.
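One generic way to write the compressive-sensing reconstruction described above (illustrative notation, not necessarily the authors' exact formulation): with the camera exposure y formed by superimposing T video frames x_t at known pseudorandom offsets via placement operators S_t, the video is recovered as

    \min_{x_1, \ldots, x_T} \Big\| y - \sum_{t=1}^{T} S_t x_t \Big\|_2^2 + \lambda \sum_{t=1}^{T} \mathrm{TV}(x_t),

where TV is the total-variation regularizer mentioned above and T may exceed the number of captured subframes.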
In the summer of 2020, the National Aeronautics and Space Administration (NASA) plans to launch a spacecraft as part of the Mars 2020 mission. The rover on the proposed spacecraft will use a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. The MMRTG uses radioactive plutonium dioxide. NASA is preparing a Supplemental Environmental Impact Statement (SEIS) for the mission in accordance with the National Environmental Policy Act. This Nuclear Risk Assessment addresses the responses of the MMRTG option to potential accident and abort conditions during the launch opportunity for the Mars 2020 mission and the associated consequences. This information provides the technical basis for the radiological risks discussed in the SEIS.
In this investigation, a series of small-scale tests was conducted, sponsored by the Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research (RES) and performed at Sandia National Laboratories (SNL). These tests were designed to better understand localized particle dispersion phenomena resulting from electrical arcing faults. The purpose of these tests was to better characterize the aluminum particle size distribution, rates of production, and morphology (agglomeration) resulting from electrical arc faults. More specifically, this effort characterized the ejected particles and high-energy dispersion, along with the high energy arcing fault (HEAF) electrical characteristics, particle movement/distributions, and morphology near the arc. The results and measurement techniques from this investigation will be used to inform an energy balance model to predict additional energy from aluminum involvement in the arc fault. The experimental setup was developed based on prior work by KEMA and SNL for phase-to-ground and phase-to-phase electrical circuit faults. The small-scale test results should not be expected to be scalable to the hazards associated with full-scale HEAF events. The test voltages consist of four levels: 480 V, 4160 V, 6900 V, and 10 kV, based on those realized in nuclear power plant (NPP) HEAF events.
Often, the presence of cracks in manufactured components is detrimental to their overall performance. In this report, we develop a workflow and tools using CUBIT and Sierra/SM for generating and modeling crack defects to better understand their impact on such components. To this end, we provide a CUBIT library of various prototypical crack defects embedded in pipes and plates that can be readily used in a wide range of simulations, with specific application to those used in Gas Transfer Systems (GTS). We verify the accuracy of the J-integral post-processing capability in Sierra against solutions available in the existing literature for the cracks and geometries of interest within the context of linear elastic fracture mechanics, and describe ongoing efforts to quantify and assess numerical errors. Through this process, we outline overall suggestions and recommendations to the user based on the proposed workflow.
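For the linear elastic cases used in these verification comparisons, the J-integral reduces to the energy release rate and relates to the mode-I stress intensity factor through the standard identity

    J = \frac{K_I^2}{E'}, \qquad E' = E \ \text{(plane stress)}, \qquad E' = \frac{E}{1 - \nu^2} \ \text{(plane strain)},

which is the usual way computed J values are checked against handbook stress intensity factor solutions.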
The thermal performance of commercial spent nuclear fuel dry storage casks is evaluated through detailed numerical analysis. These modeling efforts are completed by the vendor to demonstrate performance and regulatory compliance. The calculations are then independently verified by the Nuclear Regulatory Commission (NRC). Canistered dry storage cask systems rely on ventilation between the inner canister and the overpack to convect heat away from the canister to the surrounding environment for both horizontal and vertical configurations. Recent advances in dry storage cask designs have significantly increased the maximum thermal load allowed in a canister in part by increasing the efficiency of internal conduction pathways and by increasing the internal convection through greater canister helium pressure. Carefully measured data sets generated from testing of full-sized casks or smaller cask analogs are widely recognized as vital for validating these models. While several testing programs have been previously conducted, these earlier validation studies did not integrate all the physics or components important in a modern, horizontal dry cask system. The purpose of the present investigation is to produce data sets that can be used to benchmark the codes and best practices presently used to determine cladding temperatures and induced cooling air flows in modern horizontal dry storage systems. The horizontal dry cask simulator (HDCS) has been designed to generate this benchmark data and add to the existing knowledge base. The objective of the HDCS investigation is to capture the dominant physics of a commercial dry storage system in a well-characterized test apparatus for any given set of operational parameters. The close coupling between the thermal response of the canister system and the resulting induced cooling air flow rate is of particular importance.
Chemical Risk Management is a system or process to control safety and security risks associated with hazardous or toxic chemicals. Chemical Risk Management includes the management of both chemical safety and chemical security. There are five pillars that make up chemical security management (Figure 1). Each of the five pillars is a key component of the implementation of a chemical security risk management system. In this paper, we review the "Material Control and Accountability" pillar and how an academic institution can implement this principle using a chemical inventory management system.
The work presented in this report applies the MELCOR code to evaluate potential accidents in non-reactor nuclear facilities, focusing on Design Basis Accidents. Ten accident scenarios were modeled using NRC's best-estimate severe accident analysis code, MELCOR 2.2. The accident scenarios simulated a range of explosions and/or fires related to a nuclear fuel reprocessing facility. The objective was to evaluate the radionuclide source term to the environment following initiating explosion and/or fire events. The simulations were performed using a MELCOR model of the Barnwell Nuclear Fuel Plant, which was decommissioned before beginning reprocessing operations. Five of the accident scenarios were based on the Class 5 Design Basis Accidents from the Final Safety Analysis Report. Three of the remaining accident scenarios include sensitivity studies on smaller solvent fires. The final two accidents included a fire induced by an initial explosion. The radionuclide inventory was developed from ORIGEN calculations of spent PWR fuel with an initial enrichment of 4.5% U-235 by weight. The fuel aged for five years after a final 500-day irradiation cycle. The burn-up was conservatively increased to 60 GWd/MTU to bound current US operations. The results are characterized in terms of activity release to the environment and the building decontamination factor, which is related to the leak path factor used in Department of Energy safety analyses. The MELCOR 2.2 results consider adverse consequences to the filters, ventilation system, and structures as a result of the explosions and fires. The calculations also include best-estimate models for aerosol transport, agglomeration, and deposition. The new calculations illustrate best-estimate approaches for predicting the source term from a reprocessing facility accident.
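For clarity, the building decontamination factor (DF) and the leak path factor (LPF) used in DOE safety analyses are reciprocals (standard definitions, not specific to this report):

    DF = \frac{\text{activity made airborne inside the building}}{\text{activity released from the building to the environment}}, \qquad LPF = \frac{1}{DF}.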
In cases where building infrastructure is both costly and possibly hazardous, it is advantageous to seek new methods of transferring and transmitting electrical power. The goal of this report is to survey the currently available technologies for transmitting power with no physical connection between the source and load, and to discuss their feasibility, reliability, efficiency, and safety requirements for use in the field.
This report outlines the fiscal year (FY) 2019 status of an ongoing multi-year effort to develop a general, microstructurally-aware, continuum-level model for representing the dynamic response of materials with complex microstructures. This work has focused on accurately representing the response of both conventionally wrought-processed and additively manufactured (AM) 304L stainless steel (SS) as a test case. Additive manufacturing, or 3D printing, is an emerging technology capable of enabling shortened design and certification cycles for stockpile components through rapid prototyping. However, it is not yet understood how the complex and unique microstructures of AM materials affect their mechanical response at high strain rates. To achieve our project goal, an upscaling technique was developed to bridge the gap between the microstructural and continuum scales by representing AM microstructures on a Finite Element (FE) mesh. This process involves simulating the additive process using the Sandia-developed kinetic Monte Carlo (KMC) code SPPARKS. These SPPARKS microstructures are characterized using clustering algorithms from machine learning and used to populate the quadrature points of an FE mesh. Additionally, a spall kinetic model (SKM) was developed to more accurately represent the dynamic failure of AM materials. Validation experiments were performed using both pulsed power machines and projectile launchers. These experiments have provided equation of state (EOS) and flow strength measurements of both wrought and AM 304L SS to above Mbar pressures. In some experiments, multi-point interferometry was used to quantify the variation in observed material response of the AM 304L SS. Analysis of these experiments is ongoing, but preliminary comparisons of our upscaling technique and SKM to experimental data were performed as a validation exercise. Moving forward, this project will advance and further validate our computational framework using advanced theory and additional high-fidelity experiments.
Recent news reports coming from Asia and the UK have highlighted the emerging threats of Non-Traditional Agents (NTAs) to national security. The UK incident underscores how NTAs may linger in the environment at trace levels. Building on Sandia's extensive analytical chemistry work in this field, a polysilphenylene analog of Sandia's proprietary DKAP polymer coatings was synthesized and evaluated for high-temperature operation. Initial test results are inconclusive as to the improved thermal stability of the new polymer, with TGA/DSC results indicating a lower glass transition temperature for the new "Hot DKAP" material and a similar to slightly lower onset of mass loss for "Hot DKAP", but a slower degradation rate in clean dry air. Additional testing with a TGA-MS system to identify the fragments lost as a function of temperature is still needed to fully characterize the material's thermal properties. In addition, the material still needs to be evaluated for thermodynamic properties for analytes of interest using either GC or SPC coated devices.
The electrothermal instability (ETI) is driven by Joule heating and arises from the dependence of resistivity on temperature. ETI may drive azimuthally correlated surface density variations that seed magneto-Rayleigh-Taylor (MRT) instability growth. Liner implosion studies suggest that dielectric surface coatings reduce the amplitude of ETI-driven perturbations. Furthermore, previous fundamental physics studies suggest that non-metallic inclusions within the metal can seed ETI growth. In this project, we aimed to (1) determine how dielectric coatings modify ETI growth by varying the coating thickness and the surface structure of the underlying metal, and (2) study overheating from engineered defects, i.e., designed lattices of micron-scale pits. Engineered pits divert current density and drive local overheating in a way that can be compared with 3D MHD simulations. All experiments were executed at the Sandia Mykonos Facility. Facility and diagnostic investments enabled high-quality data to be gathered in support of project deliverables.
Mixing of cold, higher-Z elements into the fuel region of an inertial confinement fusion target spoils the fusion burn efficiency. This mixing process is driven by both "turbulent" and "atomic" mixing processes, the latter being modeled through transport corrections to the basic hydrodynamic models. Recently, there has been a surge in the development of dense plasma transport modeling and the associated transport coefficients; however, experimental validation remains in its infancy.
Food, energy, and water (FEW) are primary resources required for human populations and ecosystems. Availability of the raw resources is essential, but equally important are the services that deliver resources to human populations, such as adequate access to safe drinking water, electricity, and sufficient food. Any failure in either resource availability or FEW-related services will have an impact on human health. The ability of countries to intervene and overcome the challenges in the FEW domain depends on governance, education, and economic capacities. We distinguish between FEW resources, FEW services, and FEW health outcomes to develop an analysis framework for evaluating interrelationships among these critical resources. The framework is applied using a data-driven approach for sub-Saharan African countries, a region with notable FEW insecurity challenges. The data-driven approach, using a cross-validated stepwise regression analysis, indicates that limited governance and socioeconomic capacity in sub-Saharan African countries, rather than lack of the primary resources, more significantly impact access to FEW services and associated health outcomes. The proposed framework helps develop a cohesive approach for evaluating FEW metrics and could be applied to other regions of the world to continue improving our understanding of the FEW nexus.
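A minimal sketch of a cross-validated stepwise (sequential) regression of the type described, using scikit-learn; the predictors and outcome below are placeholders, not the study's actual country-level indicators:

    # Sketch of cross-validated stepwise regression; placeholder variables,
    # not the study's data (e.g., governance, income, education -> access to safe water).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(48, 6))                                        # candidate country-level predictors
    y = 0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=48)  # FEW service/health outcome

    selector = SequentialFeatureSelector(
        LinearRegression(), n_features_to_select=2, direction="forward", cv=5
    ).fit(X, y)
    chosen = selector.get_support(indices=True)

    scores = cross_val_score(LinearRegression(), X[:, chosen], y, cv=5)
    print("selected predictor columns:", chosen)
    print("cross-validated R^2:", scores.mean())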
During the development of new seismic data processing methods, the verification of potential events and associated signals can present a nontrivial obstacle to the assessment of algorithm performance, especially as detection thresholds are lowered, resulting in the inclusion of significantly more anthropogenic signals. Here, we present two 14-day seismic event catalogs: a local-scale catalog developed using data from the University of Utah Seismograph Stations network, and a global-scale catalog developed using data from the International Monitoring System. Each catalog was built manually to comprehensively identify events from all sources that were locatable using phase arrival timing and directional information from seismic network stations, resulting in significant increases compared to existing catalogs. The new catalogs additionally contain challenging event sequences (prolific aftershocks and small events at the detection and location threshold) and novel event types and sources (e.g., infrasound-only events and long-wall mining events) that make them useful for algorithm testing and development, as well as valuable for the unique tectonic and anthropogenic event sequences they contain.
Quantum materials have long promised to revolutionize everything from energy transmission (high temperature superconductors) to both quantum and classical information systems (topological materials). However, their discovery and application has proceeded in an Edisonian fashion due to both an incomplete theoretical understanding and the difficulty of growing and purifying new materials. This project leverages Sandia's unique atomic precision advanced manufacturing (APAM) capability to design small-scale tunable arrays (designer materials) made of donors in silicon. Their low-energy electronic behavior can mimic quantum materials, and can be tuned by changing the fabrication parameters for the array, thereby enabling the discovery of materials systems which cannot yet be synthesized. In this report, we detail three key advances we have made towards development of designer quantum materials. First are advances in both APAM technique and the underlying mechanisms required to realize high-yielding donor arrays. Second is the first-ever observation of distinct phases in this material system, manifest in disordered 2D sheets of donors. Third are advances in modeling the electronic structure of donor clusters and regular structures incorporating them, which are critical to understanding whether an array is expected to show interesting physics. Combined, these establish the baseline knowledge required to manifest the strongly correlated phases of the Mott-Hubbard model in donor arrays, the first step toward deploying APAM donor arrays as analogues of quantum materials.
In stochastic optimization, probabilities naturally arise as cost functionals and chance constraints. Unfortunately, these functions are difficult to handle both theoretically and computationally. The buffered probability of failure and its subsequent extensions were developed as numerically tractable, conservative surrogates for probabilistic computations. In this manuscript, we introduce the higher-moment buffered probability. Whereas the buffered probability is defined using the conditional value-at-risk, the higher-moment buffered probability is defined using higher-moment coherent risk measures. In this way, the higher-moment buffered probability encodes information about the magnitude of tail moments, not simply the tail average. We prove that the higher-moment buffered probability is closed, monotonic, quasi-convex and can be computed by solving a smooth one-dimensional convex optimization problem. These properties enable smooth reformulations of both higher-moment buffered probability cost functionals and constraints.
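For orientation, the ordinary buffered probability of exceedance at threshold x admits the well-known one-dimensional minimization formula

    \bar{p}_x(X) = \min_{\lambda \ge 0} \mathbb{E}\big[ \lambda (X - x) + 1 \big]^{+},

which makes it the inverse of the conditional value-at-risk with respect to the threshold; the higher-moment buffered probability introduced in the manuscript is obtained by replacing the conditional value-at-risk in this construction with a higher-moment coherent risk measure, yielding an analogous one-dimensional convex problem.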
We present a new method for reducing parallel applications’ communication time by mapping their MPI tasks to processors in a way that lowers the distance messages travel and the amount of congestion in the network. Assuming geometric proximity among the tasks is a good approximation of their communication interdependence, we use a geometric partitioning algorithm to order both the tasks and the processors, assigning task parts to the corresponding processor parts. In this way, interdependent tasks are assigned to “nearby” cores in the network. We also present a number of algorithmic optimizations that exploit specific features of the network or application to further improve the quality of the mapping. We specifically address the case of sparse node allocation, where the nodes assigned to a job are not necessarily located in a contiguous block nor within close proximity to each other in the network. However, our methods generalize to contiguous allocations as well, and results are shown for both contiguous and non-contiguous allocations. We show that, for the structured finite difference mini-application MiniGhost, our mapping methods reduced communication time by up to 75% relative to MiniGhost’s default mapping on 128K cores of a Cray XK7 with sparse allocation. For the atmospheric modeling code E3SM/HOMME, our methods reduced communication time by up to 31% on 16K cores of an IBM BlueGene/Q with contiguous allocation.
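A highly simplified sketch of the core geometric idea (order both tasks and cores with recursive coordinate bisection, then assign corresponding parts); this is illustrative only and is not the paper's parallel implementation or its network-specific optimizations:

    # Illustrative sketch of geometry-based task mapping: order tasks and processors
    # by recursive coordinate bisection (RCB), then map the i-th task in that order
    # to the i-th core. Assumes one task per core (equal counts).
    import numpy as np

    def rcb_order(coords):
        """Return point indices ordered by recursive coordinate bisection."""
        coords = np.asarray(coords, dtype=float)
        def recurse(ids):
            if len(ids) <= 1:
                return list(ids)
            spans = coords[ids].max(axis=0) - coords[ids].min(axis=0)
            dim = int(np.argmax(spans))                       # cut along the widest dimension
            order = ids[np.argsort(coords[ids, dim])]
            half = len(order) // 2
            return recurse(order[:half]) + recurse(order[half:])
        return recurse(np.arange(len(coords)))

    def map_tasks_to_cores(task_coords, core_coords):
        """Map each task to a core so that geometrically nearby tasks land on nearby cores."""
        tasks = rcb_order(task_coords)
        cores = rcb_order(core_coords)
        return {int(t): int(c) for t, c in zip(tasks, cores)}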