Dragonflies are known to be highly successful hunters (achieving 90-95% success rates in nature) that implement a guidance law like proportional navigation to intercept their prey. This project tested the hypothesis that dragonflies are able to implement proportional navigation using prey-image translation on their eyes. The model dragonfly presented here calculates changes in pitch and yaw to maintain the prey's image at a designated location (the fovea) on a two-dimensional screen (the model's eyes). When the model also uses self-knowledge of its own maneuvers as an error signal to adjust the location of the fovea, its interception trajectory becomes equivalent to proportional navigation. I also show that this model can be applied successfully (in a limited number of scenarios) against maneuvering prey. My results provide a proof-of-concept demonstration of the potential of using the dragonfly nervous system to design a robust interception algorithm for implementation on a man-made system.
A finite element numerical analysis model of the Big Hill (BH) Strategic Petroleum Reserve (SPR) site has been upgraded. The model consists of a realistic mesh capturing the site geometry, uses the multi-mechanism deformation (M-D) salt constitutive model, and incorporates daily data on wellhead pressure and the level of the oil-brine interface. The upgraded model includes the shear zone so that interbed behavior can be examined in a realistic manner. The salt creep rate is not uniform in the salt dome, and creep test data for BH salt are limited; therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. Cavern volumetric closures of SPR caverns calculated from sonar survey reports are used as the field baseline measurement. The structure factor, A2, and the transient strain limit factor, K0, in the M-D constitutive model are used for model calibration. An A2 value obtained experimentally from BH salt and the K0 value of WIPP salt are used as the baseline values. To adjust the magnitudes of A2 and K0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and of the salt drawdown layer of elements surrounding each SPR cavern have been determined through a number of back-fitting analyses. The trendlines of the predictions and the sonar data match well for BH 101, 103, 104, 106, 110, 111, 112, and 113. The prediction curves are close to the sonar data for BH 102 and 114. However, the prediction curves for BH 105, 107, 108, and 109 are not close to the sonar data. An inconsistency was found in the sonar data for BH 101, 104, 106, 107, and 112: in some time intervals the volume measured later is larger than that measured earlier, even after the leached volume is taken into account. Project discussions are needed to determine how to resolve these issues and to determine the best path forward for future computer modeling.
Adversarial machine learning is an active field of research that seeks to investigate the security of machine learning methods against cyber-attacks. An important branch of this field is adversarial examples, which seek to trick machine learning models into misclassifying inputs by maliciously tampering with input data. As a result of the pervasiveness of machine learning models in diverse areas such as computer vision, health care, and national security, this vulnerability is a rapidly growing threat. With the increasing use of AI solutions, threats against AI must be considered before deploying systems in a contested space. Adversarial machine learning is a problem strongly tied to software security, and just like other more common software vulnerabilities, it exploits weaknesses in software components, in this case machine learning models. During this project, we attempted to survey and replicate several adversarial machine learning techniques with the goal of developing capabilities for Sandia to advise on and defend against these threats. To accomplish this, we surveyed state-of-the-art research for robust defenses against adversarial examples and applied them to a machine learning problem.
In this short article, we summarize a step-by-step methodology to forecast power output from a photovoltaic solar generator using hourly auto-regressive moving average (ARMA) models. We illustrate how to build an ARMA model, validate it with statistical tests, and construct hourly samples. The resulting model inherits nice properties for embedding it into more sophisticated operation and planning models, while at the same time showing relatively good accuracy. Additionally, it represents a good forecasting tool for sample generation for stochastic energy optimization models.
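A minimal Python sketch of this workflow using statsmodels (the data, model orders, and lag choices below are hypothetical placeholders, not those used in the article):

```python
# Sketch of the build / validate / sample workflow for an hourly ARMA model.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# Hypothetical hourly PV power series (~90 days of data).
rng = np.random.default_rng(0)
power = pd.Series(rng.gamma(2.0, 1.0, size=24 * 90))

# 1) Build an ARMA(p, q) model (ARMA = ARIMA with d = 0; orders illustrative).
result = ARIMA(power, order=(2, 0, 1)).fit()

# 2) Validate: Ljung-Box test on the residuals. Large p-values suggest the
#    residuals are white noise, i.e., the model captured the serial structure.
print(acorr_ljungbox(result.resid, lags=[24], return_df=True))

# 3) Construct hourly samples for stochastic optimization by simulating
#    forward from the end of the fitted series.
samples = result.simulate(nsimulations=24, repetitions=100, anchor="end")
```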
Approximation algorithms for constraint satisfaction problems (CSPs) are a central direction of study in theoretical computer science. In this work, we study classical product state approximation algorithms for a physically motivated quantum generalization of Max-Cut, known as the quantum Heisenberg model. This model is notoriously difficult to solve exactly, even on bipartite graphs, in stark contrast to the classical setting of Max-Cut. Here we show, for any interaction graph, how to classically and efficiently obtain approximation ratios of 0.649 (antiferromagnetic XY model) and 0.498 (antiferromagnetic Heisenberg XYZ model). These are almost optimal; we show that the best possible ratios achievable by a product state for these models are 2/3 and 1/2, respectively.
Here we present the development of the building blocks of a Josephson parametric amplifier (JPA), namely the superconducting quantum interference device (SQUID) and the inductive pick-up coil that permits current coupling from a quantum dot into the SQUID. We also discuss our efforts in making depletion-mode quantum dots using delta-doped GaAs quantum wells. Because quantum dot based spin qubits utilize very low-level (~10-100 pA), short-duration (1 μs to 1 ms) current signals for state preparation and readout, these systems require close-proximity cryogenic amplification to prevent signal corruption. Common amplification methods in these semiconductor quantum dots rely on heterojunction bipolar transistors (HBTs) and high electron mobility transistors (HEMTs) to amplify the readout signal from a single qubit. State-of-the-art HBTs and HEMTs produce approximately 10 µW of power when operating at high bandwidths. For few-qubit systems this level of heat dissipation is acceptable. However, for scaling up the number of qubits to several hundred or a thousand, the heat load produced in a one-to-one amplifier-to-qubit arrangement would overload the cooling capacity of a common dilution refrigerator, which typically has a cooling power of ~100 µW at its base temperature. Josephson parametric amplifiers have been shown to dissipate ~1 pW of power with current sensitivities on par with HBTs and HEMTs and with bandwidths 30 times those of HBTs and HEMTs, making them attractive for multi-qubit platforms. In this report we describe in detail the fabrication process flow for developing inductive pick-up coils and the fabrication and measurement of NbTiN and Al/AlOx/Al SQUIDs.
Due to its balance of accuracy and computational cost, density functional theory has become the method of choice for computing the electronic structure and related properties of materials. However, present-day semi-local approximations to the exchange-correlation energy of density functional theory break down for materials containing d and f electrons. In this report we summarize the results of our research efforts within the LDRD 200202 titled "Making density functional theory work for all materials" in addressing this issue. Our efforts are grouped into two research thrusts. In the first thrust, we develop an exchange-correlation functional (BSC functional) within the subsystem functional formalism. It enables us to capture bulk, surface, and confinement physics with a single, semi-local exchange-correlation functional in density functional theory calculations. We present the analytical properties of the BSC functional and demonstrate that the BSC functional is able to capture confinement physics more accurately than standard semi-local exchange-correlation functionals. The second research thrust focuses on developing a database for transition metal binary compounds. The database consists of materials properties (formation energies, ground-state energies, lattice constants, and elastic constants) of 26 transition metal elements and 89 transition metal alloys. It serves as a reference for benchmarking computational models (such as lower-level modeling methods and exchange-correlation functionals). We expect that our database will significantly impact the materials science community. We conclude with a brief discussion on the future research directions and impact of our results.
Under the Department of Energy (DOE), Office of Nuclear Energy (NE), Gateway for Accelerated Innovation in Nuclear (GAIN) program, Sandia National Laboratories (SNL) was awarded DOE-NE GAIN voucher GA-19SN020107, "Risk-informed mechanistic source term calculations for a sodium fast reactor." Under this GAIN voucher, SNL supported the industry partner's preparation for licensing and commercialization by providing subject matter expertise on heat pipe technologies, providing computer code training and support, and performing first-of-a-kind experiments demonstrating the safety/risk impacts of heat pipe breach failures. The experiments that were performed had two primary goals: measure the peak heat fluxes that lead to heat pipe dry-out and subsequent wall breach, and observe the consequences that result from catastrophic failure of a heat pipe wall. Intentional breaching of the heat pipe walls took advantage of heat pipe physics and operating limits. Large and nearly instantaneous heat fluxes were applied to the heat pipe to first cause localized dry-out at the evaporator section, which then leads to melting of the heat pipe wall. The hourglass heat pipe (Test 1) experienced dry-out at 112 W/cm2; after 45 seconds, wall temperatures measured about 1,280°C and intentional failure of the heat pipe wall was achieved. The cylindrical heat pipe (Test 2) experienced dry-out at 125 W/cm2; after 65 seconds, wall temperatures exceeded 1,400°C and intentional failure of the heat pipe wall was achieved. Both experiments characterize the parameters needed to cause heat pipe wall failure. Furthermore, the failure of the heat pipes characterizes the safety/risk impacts from the sodium-oxygen reactions that occur following the intentional failure. There were two major conclusions from these intentional failure tests: the heat pipes were able to continue operating beyond expected performance limits, and the failure behavior validated decades of operational experience.
This report summarizes the accomplishments and challenges of a two-year LDRD effort focused on improving design-to-simulation agility. The central bottleneck in most solid mechanics simulations is the process of taking CAD geometry and creating a discretization of suitable quality, i.e., the "meshing" effort. This report revisits meshfree methods and documents some key advancements that allow their use on problems with complex geometries, low-quality meshes, nearly incompressible materials, or fracture. The resulting capability was demonstrated to be an effective part of an agile simulation process by enabling rapid discretization techniques without increasing the time to obtain a solution of a given accuracy. The first enhancement addressed boundary-related challenges associated with meshfree methods. When using point clouds and Euclidean metrics to construct approximation spaces, the boundary information is lost, which results in low-accuracy solutions for non-convex geometries and material interfaces. This also complicates the application of essential boundary conditions. The solution involved the development of conforming window functions which use graph and boundary information to directly incorporate boundaries into the approximation space.
Sandia's Z Pulsed Power Facility is able to dynamically compress matter to extreme states with exceptional uniformity, duration, and size, which are ideal for investigations of fundamental material properties at high energy density conditions. X-ray diffraction (XRD) is a key atomic-scale probe since it provides direct observation of the compression and strain of the crystal lattice, and is used to detect, identify, and quantify phase transitions. Because of the destructive nature of Z-Dynamic Materials Properties (DMP) experiments and the low signal-to-background emission levels of XRD, it is very challenging to detect the XRD pattern close to the Z-DMP load and to recover the data. We developed a new Spherical Crystal Diffraction Imager (SCDI) diagnostic to relay and image the diffracted x-ray pattern away from the load debris field. The SCDI diagnostic utilizes the Z-Beamlet laser to generate 6.2-keV Mn-Heα x-rays to probe a shock-compressed sample on the Z-DMP load. A spherically bent crystal composed of highly oriented pyrolytic graphite is used to collect and focus the diffracted x-rays into a 1-inch-thick tungsten housing, where an image plate is used to record the data. We performed experiments to implement the SCDI diagnostic on Z to measure the XRD pattern of shock-compressed beryllium samples at pressures of 1.8-2.2 Mbar.
This document details the development of modeling and simulations for existing plant security regimes, using identified target sets to link dynamic assessment methodologies by combining reactor system-level modeling with force-on-force modeling and 3D visualization to develop table-top scenarios. This work leverages an existing hypothetical example used for international physical security training, the Lone Pine nuclear power plant facility, for target sets and modeling.
The CTH multiphysics hydrocode is used in a wide variety of important calculations. An essential part of ensuring hydrocode accuracy and credibility is thorough code verification and validation (V&V). In the past, CTH V&V work (particularly verification) has not been consistently well documented. In FY19, we have made substantial progress towards addressing this need. In this report, we present a new CTH V&V test suite composed of traditional hydrocode verification problems used by similar ASC codes as well as validation problems for some of the most frequently used materials models and capabilities in CTH. For the verification problems, we present not only results and computed errors, but also convergence rates. Validation problems include mesh refinement studies, providing evidence that results are converging.
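For context, the convergence rates reported for the verification problems are conventionally estimated by comparing errors on successively refined meshes (a standard Richardson-type estimate; the notation below is generic, not taken from the report):

\[
p \;\approx\; \frac{\log\big(\lVert e_{2h} \rVert / \lVert e_{h} \rVert\big)}{\log 2},
\]

where \(e_h\) is the difference between the computed and exact solutions at mesh spacing \(h\); agreement of the observed order \(p\) with the scheme's theoretical order is the kind of verification evidence described above.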
This Laboratory Directed Research and Development (LDRD) effort performed fundamental research and development (R&D) to develop a robust radar processing algorithm capable of distinguishing between an Unmanned Aerial System (UAS) and a biological target such as a bird, based solely on mathematics applied to the polarized radar returns of the target object. The threat of using a UAS as a delivery platform for a host of destructive components is a major concern for the protection of various assets. Most recently, on 14 September 2019, dozens of suicide or kamikaze drones (UAV-X) were used in a coordinated attack on two Saudi oil facilities, demonstrating the potential to disrupt global oil supplies. While radar-based UAS detection systems can detect UAS at ranges greater than 1 km, the issue of excessive Nuisance/False Alarm Rates (NAR/FAR) from natural sources (birds in particular) has not been sufficiently addressed. In this effort we describe and utilize Adaptive Polarization Difference Imaging (APDI) algorithms for the detection and automatic non-visual assessment of Unmanned Aerial Systems. Originally developed for optical imaging and sensing of polarization information in nature, the algorithms developed here are modified to serve target detection purposes in counter-UAS (cUAS) environments. We exploit the polarization statistics of the observed scene to detect and identify changes within the scene and use these changes for UAS/bird classification. Several cases are considered from independent data sources, including numerically generated data, anechoic chamber data, and experimental radar data, to show the applicability of the techniques developed here. The methods developed in this effort are designed to be used in cUAS setups but have shown promise for a multitude of other radar-based classification uses as well.
The work presented in this report applies the MELCOR code to evaluate potential accidents in non-reactor nuclear facilities, focusing on Design Basis Accidents. Ten accident scenarios were modeled using the NRC's best-estimate severe accident analysis code, MELCOR 2.2. The accident scenarios simulated a range of explosions and/or fires related to a nuclear fuel reprocessing facility. The objective was to evaluate the radionuclide source term to the environment following initiating explosion and/or fire events. The simulations were performed using a MELCOR model of the Barnwell Nuclear Fuel Plant, which was decommissioned before beginning reprocessing operations. Five of the accident scenarios were based on the Class 5 Design Basis Accidents from the Final Safety Analysis Report. Three of the remaining accident scenarios are sensitivity studies on smaller solvent fires. The final two accidents included an induced fire from an initial explosion. The radionuclide inventory was developed from ORIGEN calculations of spent PWR fuel with an initial enrichment of 4.5% U-235 by weight. The fuel was aged for five years after a final 500-day irradiation cycle. The burn-up was conservatively increased to 60 GWd/MTU to bound current US operations. The results are characterized in terms of activity release to the environment and the building decontamination factor, which is related to the leak path factor used in Department of Energy safety analyses. The MELCOR 2.2 results consider adverse consequences to the filters, ventilation system, and structures as a result of the explosions and fires. The calculations also include best-estimate models for aerosol transport, agglomeration, and deposition. The new calculations illustrate best-estimate approaches for predicting the source term from a reprocessing facility accident.
In cases where building infrastructure is both costly and potentially hazardous, it is advantageous to seek new methods of transferring and transmitting electrical power. The goal of this report is to survey the currently available technologies for transmitting power with no physical connection between the source and load, and to discuss their feasibility, reliability, efficiency, and safety requirements for use in the field.
This report outlines the fiscal year (FY) 2019 status of an ongoing multi-year effort to develop a general, microstructurally-aware, continuum-level model for representing the dynamic response of materials with complex microstructures. This work has focused on accurately representing the response of both conventionally wrought processed and additively manufactured (AM) 304L stainless steel (SS) as a test case. Additive manufacturing, or 3D printing, is an emerging technology capable of enabling shortened design and certification cycles for stockpile components through rapid prototyping. However, there is not yet an understanding of how the complex and unique microstructures of AM materials affect their mechanical response at high strain rates. To achieve our project goal, an upscaling technique was developed to bridge the gap between the microstructural and continuum scales and to represent AM microstructures on a Finite Element (FE) mesh. This process involves simulating the additive process using the Sandia-developed kinetic Monte Carlo (KMC) code SPPARKS. These SPPARKS microstructures are characterized using clustering algorithms from machine learning and used to populate the quadrature points of an FE mesh. Additionally, a spall kinetic model (SKM) was developed to more accurately represent the dynamic failure of AM materials. Validation experiments were performed using both pulsed-power machines and projectile launchers. These experiments have provided equation of state (EOS) and flow strength measurements of both wrought and AM 304L SS to above Mbar pressures. In some experiments, multi-point interferometry was used to quantify the variation in observed material response of the AM 304L SS. Analysis of these experiments is ongoing, but preliminary comparisons of our upscaling technique and SKM to experimental data were performed as a validation exercise. Moving forward, this project will advance and further validate our computational framework using advanced theory and additional high-fidelity experiments.
Recent news reports from Asia and the UK have highlighted the emerging threats of Non-Traditional Agents (NTAs) to national security. The UK incident underscores how NTAs may linger in the environment at trace levels. Building on Sandia's extensive analytical chemistry work in this field, a polysilphenylene analog of Sandia's proprietary DKAP polymer coatings was synthesized and evaluated for high temperature operation. Initial test results are inconclusive as to the improved thermal stability of the new polymer: TGA/DSC results indicate a lower glass transition temperature for the new "Hot DKAP" material and a similar to slightly lower onset of mass loss for "Hot DKAP", but a slower degradation rate in clean dry air. Additional testing with a TGA-MS system to identify the fragments lost as a function of temperature is still needed to fully characterize the material's thermal properties. In addition, the material still needs to be evaluated for thermodynamic properties of analytes of interest using either GC or SPC-coated devices.
The electrothermal instability (ETI) is driven by Joule heating and arises from the dependence of resistivity on temperature. ETI may drive azimuthally correlated surface density variations which seed magneto-Rayleigh-Taylor (MRT) instability growth. Liner implosion studies suggest that dielectric surface coatings reduce the amplitude of ETI-driven perturbations. Furthermore, previous fundamental physics studies suggest that non-metallic inclusions within the metal can seed ETI growth. In this project, we aimed to (1) determine how dielectric coatings modify ETI growth by varying the coating thickness and the surface structure of the underlying metal, and (2) study overheating from engineered defects, i.e., designed lattices of micron-scale pits. Engineered pits divert current density and drive local overheating in a way that can be compared with 3D MHD simulations. All experiments were executed on the Sandia Mykonos Facility. Facility and diagnostic investments enabled high-quality data to be gathered in support of project deliverables.
Mixing of cold, higher-Z elements into the fuel region of an inertial confinement fusion target spoils the fusion burn efficiency. This mixing process is driven by both "turbulent" and "atomic" mixing processes, the latter being modeled through transport corrections to the basic hydrodynamic models. Recently, there has been a surge in the development of dense plasma transport modeling and the associated transport coefficients; however, experimental validation remains in its infancy.
Food, energy, and water (FEW) are primary resources required for human populations and ecosystems. Availability of the raw resources is essential, but equally important are the services that deliver resources to human populations, such as adequate access to safe drinking water, electricity, and sufficient food. Any failures in either resource availability or FEW resources-related services will have an impact on human health. The ability of countries to intervene and overcome the challenges in the FEW domain depends on governance, education, and economic capacities. We distinguish between FEW resources, FEW services, and FEW health outcomes to develop an analysis framework for evaluating interrelationships among these critical resources. The framework is applied using a data-driven approach for sub-Saharan African countries, a region with notable FEW insecurity challenges. The data-driven approach using a cross-validated stepwise regression analysis indicates that limited governance and socioeconomic capacity in sub-Saharan African countries, rather than lack of the primary resources, more significantly impact access to FEW services and associated health outcomes. The proposed framework helps develop a cohesive approach for evaluating FEW metrics and could be applied to other regions of the world to continue improving our understanding of the FEW nexus.
During the development of new seismic data processing methods, the verification of potential events and associated signals can present a nontrivial obstacle to the assessment of algorithm performance, especially as detection thresholds are lowered, resulting in the inclusion of significantly more anthropogenic signals. Here, we present two 14-day seismic event catalogs: a local-scale catalog developed using data from the University of Utah Seismograph Stations network, and a global-scale catalog developed using data from the International Monitoring System. Each catalog was built manually to comprehensively identify events from all sources that were locatable using phase arrival timing and directional information from seismic network stations, resulting in significant increases compared to existing catalogs. The new catalogs additionally contain challenging event sequences (prolific aftershocks and small events at the detection and location threshold) and novel event types and sources (e.g., infrasound-only events and long-wall mining events) that make them useful for algorithm testing and development, as well as valuable for the unique tectonic and anthropogenic event sequences they contain.
Quantum materials have long promised to revolutionize everything from energy transmission (high temperature superconductors) to both quantum and classical information systems (topological materials). However, their discovery and application has proceeded in an Edisonian fashion due to both an incomplete theoretical understanding and the difficulty of growing and purifying new materials. This project leverages Sandia's unique atomic precision advanced manufacturing (APAM) capability to design small-scale tunable arrays (designer materials) made of donors in silicon. Their low-energy electronic behavior can mimic quantum materials, and can be tuned by changing the fabrication parameters for the array, thereby enabling the discovery of materials systems that cannot yet be synthesized. In this report, we detail three key advances we have made towards the development of designer quantum materials. First are advances both in APAM technique and in the underlying mechanisms required to realize high-yielding donor arrays. Second is the first-ever observation of distinct phases in this material system, manifest in disordered 2D sheets of donors. Third are advances in modeling the electronic structure of donor clusters and regular structures incorporating them, critical to understanding whether an array is expected to show interesting physics. Combined, these establish the baseline knowledge required to manifest the strongly correlated phases of the Mott-Hubbard model in donor arrays, the first step to deploying APAM donor arrays as analogues of quantum materials.
In stochastic optimization, probabilities naturally arise as cost functionals and chance constraints. Unfortunately, these functions are difficult to handle both theoretically and computationally. The buffered probability of failure and its subsequent extensions were developed as numerically tractable, conservative surrogates for probabilistic computations. In this manuscript, we introduce the higher-moment buffered probability. Whereas the buffered probability is defined using the conditional value-at-risk, the higher-moment buffered probability is defined using higher-moment coherent risk measures. In this way, the higher-moment buffered probability encodes information about the magnitude of tail moments, not simply the tail average. We prove that the higher-moment buffered probability is closed, monotonic, quasi-convex and can be computed by solving a smooth one-dimensional convex optimization problem. These properties enable smooth reformulations of both higher-moment buffered probability cost functionals and constraints.
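For reference, a minimal sketch of the underlying definitions, using standard forms from the risk literature (the manuscript's exact notation may differ): the conditional value-at-risk and the higher-moment coherent risk measures are

\[
\mathrm{CVaR}_{\alpha}(X) = \min_{c \in \mathbb{R}} \Big\{ c + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X-c)_{+}\big] \Big\},
\qquad
\mathrm{HMCR}_{p,\alpha}(X) = \min_{c \in \mathbb{R}} \Big\{ c + \tfrac{1}{1-\alpha}\, \big\lVert (X-c)_{+} \big\rVert_{p} \Big\},\quad p \ge 1,
\]

so the buffered probability of exceeding a threshold \(z\) can be read off as \(1-\alpha\), where \(\alpha\) solves \(\mathrm{CVaR}_{\alpha}(X) = z\); the higher-moment buffered probability replaces CVaR with \(\mathrm{HMCR}_{p,\alpha}\), thereby encoding tail moments of order \(p\) rather than only the tail average.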
We present a new method for reducing parallel applications’ communication time by mapping their MPI tasks to processors in a way that lowers the distance messages travel and the amount of congestion in the network. Assuming geometric proximity among the tasks is a good approximation of their communication interdependence, we use a geometric partitioning algorithm to order both the tasks and the processors, assigning task parts to the corresponding processor parts. In this way, interdependent tasks are assigned to “nearby” cores in the network. We also present a number of algorithmic optimizations that exploit specific features of the network or application to further improve the quality of the mapping. We specifically address the case of sparse node allocation, where the nodes assigned to a job are not necessarily located in a contiguous block nor within close proximity to each other in the network. However, our methods generalize to contiguous allocations as well, and results are shown for both contiguous and non-contiguous allocations. We show that, for the structured finite difference mini-application MiniGhost, our mapping methods reduced communication time by up to 75% relative to MiniGhost’s default mapping on 128K cores of a Cray XK7 with sparse allocation. For the atmospheric modeling code E3SM/HOMME, our methods reduced communication time by up to 31% on 16K cores of an IBM BlueGene/Q with contiguous allocation.
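A toy illustration of the geometric mapping idea (not the paper's algorithm, which uses recursive geometric partitioning; here a Morton/Z-order space-filling curve stands in as a simple way to co-order tasks and cores geometrically):

```python
# Order tasks and cores along the same space-filling curve, then pair them,
# so geometrically nearby tasks land on geometrically nearby cores.
def morton2d(x, y, bits=16):
    """Interleave the bits of integer coordinates (x, y) into a Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return key

def map_tasks_to_cores(task_coords, core_coords):
    """Pair the i-th task in Z-order with the i-th core in Z-order."""
    assert len(task_coords) == len(core_coords)
    tasks = sorted(range(len(task_coords)),
                   key=lambda t: morton2d(*task_coords[t]))
    cores = sorted(range(len(core_coords)),
                   key=lambda c: morton2d(*core_coords[c]))
    return {t: c for t, c in zip(tasks, cores)}

# Hypothetical example: 4 tasks on a 2x2 grid, 4 cores at network coordinates.
print(map_tasks_to_cores([(0, 0), (0, 1), (1, 0), (1, 1)],
                         [(5, 5), (5, 6), (6, 5), (6, 6)]))
```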
This paper describes the design and implementation of a proof-of-concept Pacific dc Intertie (PDCI) wide area damping controller and includes system test results on the North American Western Interconnection (WI). To damp inter-area oscillations, the controller modulates the power transfer of the PDCI, a ±500 kV dc transmission line in the WI. The control system utilizes real-time phasor measurement unit (PMU) feedback to construct a commanded power signal which is added to the scheduled power flow for the PDCI. After years of design, simulations, and development, this controller has been implemented in hardware and successfully tested in both open and closed-loop operation. The most important design specifications were safe, reliable performance; no degradation of any system modes under any circumstances; and improved damping of the controllable modes in the WI. The main finding is that the controller adds significant damping to the modes of the WI and does not adversely affect the system response in any of the test cases. The primary contribution of this paper to the state of the art is the design methods and test results of the first North American real-time control system that uses wide-area PMU feedback.
In recent years the use of security gateways (SGs) located within the electrical grid distribution network has become pervasive. SGs in substations and renewable distributed energy resource aggregators (DERAs) protect power distribution control devices from cyber and cyber-physical attacks. When encrypted communications are used within a DER network, TCP/IP packet inspection is restricted to packet header behavioral analysis, which in most cases only allows the SG to perform anomaly detection on blocks of time-series data (event windows). Packet header anomaly detection calculates the probability of the presence of a threat within an event window, but fails in cases where the unreadable encrypted payload contains the attack content. The SG system log (syslog) is a time-series record of the behavioral patterns of network users and processes accessing and transferring data through the SG network interfaces. Threatening behavioral patterns in the syslog are measurable using both anomaly detection and graph theory. In this paper it is shown that it is possible to efficiently detect the presence of, and classify, a potential threat within an SG syslog using lightweight anomaly detection and graph theory.
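A hypothetical sketch of window-based syslog screening in this spirit (the field names and the scoring rule below are illustrative stand-ins, not the paper's algorithm):

```python
# Build a baseline graph of (user, destination) edges from historical syslog,
# then score each event window by the fraction of edges never seen before.
from collections import Counter

def window_score(events, baseline_edges):
    """Fraction of (user, dest) occurrences in this window whose edge is
    absent from the baseline graph; many novel edges suggest anomalous use."""
    edges = Counter((e["user"], e["dest"]) for e in events)
    novel = sum(n for edge, n in edges.items() if edge not in baseline_edges)
    return novel / max(1, sum(edges.values()))

baseline = {("operator", "relay-7"), ("scada", "historian")}
window = [{"user": "operator", "dest": "relay-7"},
          {"user": "guest", "dest": "relay-7"},     # novel edge
          {"user": "guest", "dest": "historian"}]   # novel edge
print(window_score(window, baseline))  # -> 0.666...
```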
A generic constant-efficiency energy flow model is commonly used in techno-economic analyses of grid energy storage systems. In practice, charge and discharge efficiencies of energy storage systems depend on state of charge, temperature, and charge/discharge powers. Furthermore, the operating characteristics of energy storage devices are technology specific. Therefore, generic constant-efficiency energy flow models do not accurately capture the system performance. In this work, we propose to use technology-specific nonlinear energy flow models based on nonlinear operating characteristics of the storage devices. These models are incorporated into an optimization problem to find the optimal market participation of energy storage systems. We develop a dynamic programming method to solve the optimization problem and perform two case studies for maximizing the revenue of a vanadium redox flow battery (VRFB) and a Li-ion battery system in Pennsylvania New Jersey Maryland (PJM) interconnection's energy and frequency regulation markets.
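A minimal dynamic-programming sketch of the dispatch optimization described above (the prices, state grid, and efficiency curve are hypothetical; the report's models are technology-specific and include the market structure):

```python
# Backward dynamic program over a discretized state of charge (SOC) grid,
# with a toy SOC-dependent one-way efficiency.
import numpy as np

prices = [20.0, 15.0, 40.0, 55.0]     # $/MWh for each hour (illustrative)
soc_grid = np.linspace(0.0, 1.0, 21)  # SOC as a fraction of a 1 MWh device
p_max = 0.25                          # max energy moved per hour (MWh)

def efficiency(soc):
    """Toy one-way efficiency that degrades at high state of charge."""
    return 0.95 - 0.10 * soc

value = np.zeros(len(soc_grid))       # terminal value function
for price in reversed(prices):
    new_value = np.full_like(value, -np.inf)
    for i, soc in enumerate(soc_grid):
        for j, soc_next in enumerate(soc_grid):
            delta = soc_next - soc    # energy moved into (+) or out of (-) storage
            if abs(delta) > p_max:
                continue
            eta = efficiency(soc)
            # Charging draws delta/eta from the grid; discharging sells delta*eta.
            cash = -price * delta / eta if delta > 0 else -price * delta * eta
            new_value[i] = max(new_value[i], cash + value[j])
    value = new_value
print(f"optimal revenue from empty start: ${value[0]:.2f}")
```

A full implementation would also record the maximizing decision at each state and hour to recover the dispatch schedule, and would add the frequency-regulation revenue terms.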
Two perspectives are used to reframe Simonton’s recent three-factor definition of creative outcome. The first perspective is functional: that creative ideas are those that add significantly to knowledge by providing both utility and learning. The second perspective is calculational: that learning can be estimated by the change in probabilistic beliefs about an idea’s utility before and after it has played out in its environment. The results of the reframing are proposed conceptual and mathematical definitions of (a) creative outcome as the product of two overarching factors (utility and learning) and (b) learning as a function of two subsidiary factors (blindness reduction and surprise). Learning will be shown to depend much more strongly on surprise than on blindness reduction, so creative outcome may then also be defined as “implausible utility.”
Leakage along wellbores is of concern for a variety of applications, including sub-surface fluid storage facilities, geothermal wells, and CO2 storage wells. We have investigated whether corroded casing is permeable to gas and can serve as a leakage pathway along wellbores. Three specimens were prepared from laboratory steel plates corroded using different mechanisms to reflect different possible field conditions and produce a variety of corrosion rates. Single-phase gas flow measurements were made under a range of gas pressures to investigate flow in both the viscous and visco-inertial flow regimes. Tests were conducted at different confining stresses (ranging from 3.45 to 13.79 MPa) following both loading and unloading paths. The gas flow test results suggest corroded casing can serve as a significant leakage path along the axis of a wellbore. Transmissivity was found to be sensitive to the variation in confining stress, suggesting that the corrosion product is deformable. Gas slip factors and the coefficients of inertial resistance of the corrosion product were comparable to those available in the literature for other porous media. Post-test examination of the corrosion product revealed it to be a heterogeneous, mesoporous material with mostly non-uniform slit-type porosity. There was no discernible difference in the composition of corrosion product from specimens corroded by different mechanisms.
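The visco-inertial regime and the gas-slip behavior referred to above are conventionally described by the Forchheimer and Klinkenberg relations; a sketch of the standard forms (generic notation, not taken from the report):

\[
-\frac{dP}{dx} = \frac{\mu}{k}\, v + \beta\, \rho\, v^{2},
\qquad
k_{a} = k_{\infty} \left( 1 + \frac{b}{P_{m}} \right),
\]

where \(\beta\) is the coefficient of inertial resistance, \(b\) the gas slip (Klinkenberg) factor, \(P_m\) the mean gas pressure, and \(k_\infty\) the intrinsic permeability; the quadratic term captures the departure from Darcy (viscous) flow at higher velocities.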
Stress intensity factors (SIFs) are used in continuum fracture mechanics to quantify the stress fields surrounding a crack in a homogeneous material in the linear elastic regime. Critical values of the SIFs define an intrinsic measure of the resistance of a material to propagate a crack. At atomic scales, however, fracture occurs as a series of atomic bonds breaking, differing from the continuum description. As a consequence, a formal analog of the continuum SIFs calculated from atomistic simulations can have spatially localized, microstructural contributions that originate from varying bond configurations. The ability to characterize fracture at the atomic scale in terms of the SIFs offers both an opportunity to probe the effects of chemistry, as well as how the addition of a microstructural component affects the accuracy. We present a novel numerical method to determine SIFs from molecular dynamics (MD) simulations. The accuracy of this approach is first examined for a simple model, and then applied to atomistic simulations of fracture in amorphous silica. MD simulations provide time and spatially dependent SIFs, with results that are shown to be in good agreement with experimental values for fracture toughness in silica glass.
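As a reminder of the continuum quantity being recovered, the mode-I SIF is defined from the near-tip stress field (this is the standard linear elastic fracture mechanics definition that the atomistic analog approximates, not the paper's numerical method itself):

\[
K_{I} = \lim_{r \to 0} \sqrt{2\pi r}\; \sigma_{yy}(r, \theta = 0),
\]

where \((r, \theta)\) are polar coordinates centered at the crack tip and \(\sigma_{yy}\) is the opening stress directly ahead of the crack.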
We present a comprehensive physics investigation of electrothermal effects in III-V heterojunction bipolar transistors (HBTs) via extensive Technology Computer Aided Design (TCAD) simulation and modeling. We show for the first time that the negative differential resistances of the common-emitter output responses in InGaP/GaAs HBTs are caused not only by the well-known carrier mobility reduction, but more importantly also by the increased base-to-emitter hole back injection, as the device temperature increases from self-heating. Both self-heating and impact ionization can cause fly-backs in the output responses under constant base-emitter voltages. We find that the fly-back behavior is due to competing processes of carrier recombination and self-heating or impact ionization induced carrier generation. These findings will allow us to understand and potentially improve the safe operating areas and circuit compact models of InGaP/GaAs HBTs.
We parallelize the LU factorization of a hierarchical low-rank matrix (H-matrix) on a distributed-memory computer. This is much more difficult than the H-matrix-vector multiplication due to the dataflow of the factorization, and it is much harder than the parallelization of a dense matrix factorization due to the irregular hierarchical block structure of the matrix. The block low-rank (BLR) format gets rid of the hierarchy and simplifies the parallelization, often increasing concurrency. However, this comes at the price of losing the near-linear complexity of the H-matrix factorization. In this work, we propose to factorize the matrix using a “lattice H-matrix” format that generalizes the BLR format by storing each of the blocks (both diagonal and off-diagonal) in the H-matrix format. These blocks stored in the H-matrix format are referred to as lattices. Thus, this lattice format aims to combine the parallel scalability of the BLR factorization with the near-linear complexity of the H-matrix factorization. We first compare factorization performances using the H-matrix, BLR, and lattice H-matrix formats under various conditions on a shared-memory computer. Our performance results show that the lattice format has storage and computational complexities similar to those of the H-matrix format, and hence a much lower cost of factorization than BLR. We then compare the BLR and lattice H-matrix factorizations on distributed-memory computers. Our performance results demonstrate that, compared with BLR, the lattice format with its lower cost of factorization may lead to faster factorization on the distributed-memory computer.
The emergence of deep learning as a leading computational workload for machine learning tasks on large-scale cloud infrastructure installations has led to a plethora of accelerator hardware releases. However, the reduced precision and range of the floating-point numbers on these new platforms makes it a non-trivial task to leverage these unprecedented advances in computational power for numerical linear algebra operations that come with a guarantee of robust error bounds. In order to address these concerns, we present a number of strategies that can be used to increase the accuracy of limited-precision iterative refinement. By limited precision, we mean 16-bit floating-point formats implemented in modern hardware accelerators that are not necessarily compliant with the IEEE half-precision specification. We include the explanation of a broader context and connections to established IEEE floating-point standards and existing high-performance computing (HPC) benchmarks. We also present a new formulation of LU factorization that we call signed square root LU, which produces more numerically balanced L and U factors and directly addresses the limited range of the low-precision storage formats. The experimental results indicate that it is possible to recover substantial amounts of the accuracy in the system solution that would otherwise be lost. Previously, this could only be achieved by using iterative refinement based on single-precision floating-point arithmetic. The discussion also explores the numerical stability issues that are important for robust linear solvers on these new hardware platforms.
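A minimal numpy sketch of the iterative refinement idea discussed above (float32 stands in for the 16-bit accelerator formats, and the signed square root LU variant is not shown; this is the textbook scheme, not the paper's exact implementation):

```python
# Factor/solve in low precision, compute residuals in high precision,
# and iterate corrections until the solution recovers high-precision accuracy.
import numpy as np

def refine(A, b, iters=5):
    A_lo = A.astype(np.float32)        # "low precision" copy used for all solves
    x = np.linalg.solve(A_lo, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                  # residual computed in float64
        d = np.linalg.solve(A_lo, r.astype(np.float32)).astype(np.float64)
        x += d                         # apply the correction
    return x                           # (a real code would reuse the LU factors)

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well-conditioned test
x_true = rng.standard_normal(200)
b = A @ x_true
x = refine(A, b)
# For a well-conditioned A, the relative error should approach float64 levels.
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```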
Triangle counting is a representative graph problem that shows the challenges of improving graph algorithm performance using algorithmic techniques and adapting graph algorithms to new architectures. In this paper, we describe an update to the linear-algebraic formulation of the triangle counting problem. Our new approach relies on fine-grained tasking based on a tile layout. We adapt this task-based algorithm to heterogeneous architectures (CPUs and GPUs) for up to a 10.8x speedup over the past year's graph challenge submission. This implementation also achieves the fastest kernel time known at time of publication for real-world graphs like twitter (3.7 seconds) and friendster (1.8 seconds) on GPU accelerators when the graph is GPU resident. This is a 1.7x and 1.2x improvement over the previous state-of-the-art triangle counting on GPUs. We also improved end-to-end execution time by overlapping computation and communication of the graph to the GPUs. In terms of end-to-end execution time, our implementation also achieves the fastest end-to-end times due to very low overhead costs.
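For reference, the linear-algebraic formulation referred to above can be stated compactly: with L the strictly lower-triangular part of the adjacency matrix, the triangle count is the sum of the entries of (L x L) masked by L. A dense numpy illustration (a stand-in for the sparse, tiled GPU kernels of the paper):

```python
# Each nonzero of (L @ L) * L is a wedge i->k->j closed by the edge (i, j),
# and with strictly lower-triangular L every triangle is counted exactly once.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])          # 4-node graph containing 2 triangles
L = np.tril(A, k=-1)                  # strictly lower-triangular part
print(int(((L @ L) * L).sum()))       # -> 2
```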
(Pb0.98, La0.02)(Zr0.95, Ti0.05)O3 (PLZT) thin films of 300 nm thickness were epitaxially deposited on (100), (110), and (111) SrTiO3 single crystal substrates by pulsed laser deposition. X-ray diffraction line and reciprocal space mapping scans were used to determine the crystal structure. Tetragonal ((001) PLZT) and monoclinic MA ((011) and (111) PLZT) structures were found, which influenced the stored energy density. Electric field-induced antiferroelectric to ferroelectric (AFE→FE) phase transitions were found to have a large reversible energy density of up to 30 J/cm3. With increasing temperature, an AFE to relaxor ferroelectric (AFE→RFE) transition was found. The RFE phase exhibited lower energy loss and an improved energy storage efficiency. The results are discussed from the perspective of crystal structure, dielectric phase transitions, and energy storage characteristics. In addition, unipolar drive measurements were performed, which provided notably higher energy storage efficiency values due to low energy losses.
This communication is the final report for the project Utilizing Highly Scattered Light for Intelligence through Aerosols, funded by the Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories and lasting six months in 2019. Aerosols such as fog reduce visibility and cause downtime that is unacceptable for critical systems and operations. Information is lost due to the random scattering and absorption of light by tiny particles. Computational diffuse optical imaging methods show promise for interpreting the light transmitted through fog, enabling sensing and imaging to improve situational awareness at depths 10 times greater than current methods. Developing this capability first requires verification and validation of diffusion models of light propagation in fog. For this reason, analytical models were developed and compared to experimental data captured at the Sandia National Laboratories Fog Chamber facility. A methodology was developed to incorporate the propagation of scattered light through the imaging optics to a pixel array. The diffusion approximation to the radiative transfer equation was found to predict light propagation in fog under the appropriate conditions.
An international safeguards mentoring program was established by Sandia National Laboratories (SNL) for early-career university faculty. The inaugural year of the program focused on course material development and connecting faculty to experts at national laboratories. Two faculty members were selected for participation; one developed a safeguards-by-design course, and the other created lecture material related to unmanned robotic systems in safeguards for integration into existing courses. Faculty members were paired with SNL subject matter experts based on the topic of their individual projects. The program also included a two-week visit to SNL. The structure of this program not only supported the development of new course material, but also provided junior faculty members with an opportunity to make connections and build collaborations in the field of international safeguards. Programs like this are important for the professional development of faculty members and help strengthen connections between universities and the national laboratories.
SpinDX, often referred to as a "lab-on-a-disk," is a portable medical diagnostic platform that uses a disposable, centrifugal disk with microfluidic flow paths to manipulate a biological sample. This allows multiple tests to be carried out on a single sample with no preparation required. The device operates by distributing drops of raw, unprocessed samples into different channels that function as dozens of tiny test tubes. When the disk spins, the samples interact with test reagents inside the channels. If there is a chemical interaction between the sample and the reagent, the tip of the channel will produce a fluorescent glow indicating that an infectious agent is present. The data are then transferred to a software interface that displays the test results.
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
The SAFER Sandbox is a web application for testing various types of data visualization tools that analyze safety data across national laboratories. By enabling the user to visualize this data through visual and textual analyses, the SAFER Sandbox helps to facilitate the research and support for the National Nuclear Security Administration (NNSA) Office of Safety, Infrastructure and Operations (NA-50). Safety data across the national laboratory complex is not currently analyzed in one location. This prohibits administrators from viewing safety data trends and making large scale decisions based on a holistic view of the data. By implementing data analytics web services such as representative text summary, noun phrase extraction and graph exploration, users can view and explore safety data based on search queries. The data visualizations in the SAFER Sandbox reveal trends and point to items of concern based on safety data. The SAFER Sandbox is a testing ground for these visualizations.
This document archives the results developed by the Laboratory Directed Research and Development (LDRD) project sponsored by Sandia National Laboratories (SNL). In this work, a numerical study was performed to show the feasibility of approximating the non-linear operator of SNL's unique high-energy hyperspectral computed tomography (CT) system as a sequence of linear operators. The four main results of this work are: the development of a simulation test-bed using a particle-transport Monte Carlo approach; the demonstration that a linear operator of almost-arbitrary resolution can be assembled for a given narrow energy window; the development of a spline-based compression approach that dramatically reduces the size of the linear operator; and the demonstration that the linear operator can be used to process x-ray data, in this case through the development of an iterative reconstruction method. This numerical study indicates that if these results can be replicated on the SNL system, the improved performance could be revolutionary, as this method of approximating the nonlinear operator for a hyperspectral CT system is not feasible on a traditional CT system.
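To illustrate the windowed linearization (standard Beer-Lambert attenuation; the notation is generic and not taken from the report): within a narrow energy window centered at \(E_k\), the measured transmission becomes linear in the attenuation image after a log transform,

\[
y(E_k) = y_{0}(E_k)\, \exp\!\Big( -\!\int_{\ell} \mu(E_k, \mathbf{r})\, d\ell \Big)
\;\;\Longrightarrow\;\;
-\log \frac{y(E_k)}{y_{0}(E_k)} \approx A\, x_{k},
\]

where \(x_k\) collects the attenuation coefficients \(\mu(E_k, \cdot)\) on the reconstruction grid and \(A\) is the energy-independent system matrix of ray-path lengths, yielding one linear operator per narrow energy window.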
Quantifying in-situ subsurface stresses and predicting fracture development are critical to reducing the risks of induced seismicity and improving modern energy activities in the subsurface. In this work, we developed a novel integration of controlled mechanical failure experiments coupled with microCT imaging, acoustic sensing, modeling of fracture initiation and propagation, and machine learning for event detection and waveform characterization. Through additive manufacturing (3D printing), we were able to produce bassanite-gypsum rock samples with repeatable physical, geochemical, and structural properties. With these "geoarchitected" rocks, we probed the role of mineral texture orientation in fracture surface roughness. The impact of poroelastic coupling on induced seismicity was systematically investigated to improve mechanistic understanding of the post-shut-in surge of induced seismicity. This research will set the groundwork for characterizing seismic waveforms using multiphysics and machine learning approaches and improve the detection of low-magnitude seismic events, leading to the discovery of hidden fault/fracture systems.
This work is a follow-on guide to running the Weather Research and Forecasting (WRF) model from Aur et al. (2018), Building and Running 1 DAAPS Models: IFRF Postdictions. This guide details running WRF in a nudged configuration, where the u and v wind components, temperature, and moisture within a specified spatial and temporal window are adjusted toward observations (radiosonde observations in this case) using WRF's observation nudging technique. The primary modifications to this methodology from Aur et al. (2018) are the use of the OBSGRID program to generate the nudging files and the updates to the namelist.input file. These steps, combined with those outlined in Aur et al. (2018), will generate a nudged WRF hindcast (or postdiction) simulation.
The following topics are considered in this presentation: (i) Overview of evidence theory, (ii) Representation of loss of assured safety (LOAS) with evidence theory for a 1 SL, 1 WL system, (iii) Description of the 2 SLs and 1 WL used for illustration, (iv) Plausibility and belief for LOAS and associated sampling-based verification calculations for a 2 SL, 1 WL system, (v) Plausibility and belief for margins associated with LOAS for a 2 SL, 1 WL system, (vi) Plausibility and belief for LOAS for a 2 SL, 2 WL system, (vii) Incorporation of evidence spaces for link temperature curves into LOAS calculations, (viii) Plausibility and belief for LOAS for WL/SL systems with SL subsystems, and (ix) Sampling-based procedures for the estimation of plausibility and belief.
We present kilohertz-scale video capture rates in a transmission electron microscope, using a camera normally limited to hertz-scale acquisition. An electrostatic deflector rasters a discrete array of images over a large camera, decoupling the acquisition time per subframe from the camera readout time. Total-variation regularization allows features in overlapping subframes to be correctly placed in each frame. Moreover, the system can be operated in a compressive-sensing video mode, whereby the deflections are performed in a known pseudorandom sequence. Compressive sensing in effect performs data compression before the readout, such that the video resulting from the reconstruction can have substantially more total pixels than were read from the camera. This allows, for example, 100 frames of video to be encoded and reconstructed using only 15 captured subframes in a single camera exposure. We demonstrate experimental tests including laser-driven melting/dewetting, sintering, and grain coarsening of nanostructured gold, with reconstructed video rates up to 10 kHz. The results exemplify the power of the technique by showing that it can be used to study the fundamentally different temporal behavior for the three different physical processes. Both sintering and coarsening exhibited self-limiting behavior, whereby the process essentially stopped even while the heating laser continued to strike the material. We attribute this to changes in laser absorption and to processes inherent to thin-film coarsening. In contrast, the dewetting proceeded at a relatively uniform rate after an initial incubation time consistent with the establishment of a steady-state temperature profile.
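The reconstruction described above can be summarized as a standard total-variation-regularized inverse problem (generic notation, not the paper's):

\[
\hat{x} = \arg\min_{x}\; \lVert y - \Phi x \rVert_{2}^{2} + \lambda\, \mathrm{TV}(x),
\]

where \(y\) are the captured subframes, \(\Phi\) encodes the known pseudorandom deflection sequence, \(\mathrm{TV}\) is the total-variation penalty, and \(\lambda\) balances data fidelity against regularization; the compression arises because \(x\) (the video) has many more pixels than \(y\) (the readout).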
In the summer of 2020, the National Aeronautics and Space Administration (NASA) plans to launch a spacecraft as part of the Mars 2020 mission. The rover on the proposed spacecraft will use a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. The MMRTG uses radioactive plutonium dioxide. NASA is preparing a Supplemental Environmental Impact Statement (SEIS) for the mission in accordance with the National Environmental Policy Act. This Nuclear Risk Assessment addresses the responses of the MMRTG option to potential accident and abort conditions during the launch opportunity for the Mars 2020 mission and the associated consequences. This information provides the technical basis for the radiological risks discussed in the SEIS.
In this investigation, a series of small-scale tests was conducted, sponsored by the Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research (RES) and performed at Sandia National Laboratories (SNL). These tests were designed to better understand localized particle dispersion phenomena resulting from electrical arcing faults. The purpose of these tests was to better characterize the aluminum particle size distribution, rates of production, and morphology (agglomeration) of electrical arc faults. More specifically, this effort characterized ejected particles and high-energy dispersion, including high-energy arc fault (HEAF) electrical characteristics, particle movement/distributions, and morphology near the arc. The results and measurement techniques from this investigation will be used to inform an energy balance model to predict additional energy from aluminum involvement in the arc fault. The experimental setup was developed based on prior work by KEMA and SNL for phase-to-ground and phase-to-phase electrical circuit faults. The small-scale test results should not be expected to scale to the hazards associated with full-scale HEAF events. The test voltages consisted of four levels: 480 V, 4160 V, 6900 V, and 10 kV, based on those realized in nuclear power plant (NPP) HEAF events.
Often, the presence of cracks in manufactured components is detrimental to their overall performance. In this report we develop a workflow and tools, using CUBIT and Sierra/SM, for generating and modeling crack defects to better understand their impact on such components. To this end, we provide a CUBIT library of various prototypical crack defects embedded in pipes and plates that can be readily used in a wide range of simulations, with specific application to those used in Gas Transfer Systems (GTS). We verify the accuracy of the J-integral post-processing capability in Sierra against solutions available in the existing literature for the cracks and geometries of interest within the context of linear elastic fracture mechanics, and describe ongoing efforts to quantify and assess numerical errors. Through this process, we outline overall suggestions and recommendations to the user based on the proposed workflow.
The thermal performance of commercial spent nuclear fuel dry storage casks is evaluated through detailed numerical analysis. These modeling efforts are completed by the vendor to demonstrate performance and regulatory compliance. The calculations are then independently verified by the Nuclear Regulatory Commission (NRC). Canistered dry storage cask systems rely on ventilation between the inner canister and the overpack to convect heat away from the canister to the surrounding environment for both horizontal and vertical configurations. Recent advances in dry storage cask designs have significantly increased the maximum thermal load allowed in a canister in part by increasing the efficiency of internal conduction pathways and by increasing the internal convection through greater canister helium pressure. Carefully measured data sets generated from testing of full-sized casks or smaller cask analogs are widely recognized as vital for validating these models. While several testing programs have been previously conducted, these earlier validation studies did not integrate all the physics or components important in a modern, horizontal dry cask system. The purpose of the present investigation is to produce data sets that can be used to benchmark the codes and best practices presently used to determine cladding temperatures and induced cooling air flows in modern horizontal dry storage systems. The horizontal dry cask simulator (HDCS) has been designed to generate this benchmark data and add to the existing knowledge base. The objective of the HDCS investigation is to capture the dominant physics of a commercial dry storage system in a well-characterized test apparatus for any given set of operational parameters. The close coupling between the thermal response of the canister system and the resulting induced cooling air flow rate is of particular importance.
The SNL EBS International activities were focused on two main collaborative efforts for FY19: 1) developing analytical tools to study and better understand multi-phase flow and coupled process physics in engineered barrier materials and at the interface between EBS materials and host media, and 2) benchmarking of reactive transport codes (including PFLOTRAN) used for chemical evolution of cementitious EBS components. Topic 1 is being studied as part of the SKB EBS Task Force, while Topic 2 is being pursued as a collaboration with researchers from Vanderbilt University and NRG in the Netherlands.
A preliminary study on the microstructural characteristics and stress corrosion cracking (SCC) susceptibility of a friction stir welded (FSW) 304L stainless steel plate was carried out. The weld examined was characterized by several typical microstructural features of friction stir welds, including a gradient of dynamically recrystallized microstructure with distinct material flow patterns reflective of the complex distribution of thermomechanical histories. Evidence of process-induced microstructural sensitization was lacking. Immersion testing of the friction stir welded plate in boiling magnesium chloride solution indicated the FSW region was more susceptible to SCC than the base 304L material, especially along the weld toes. The microstructural origins of this SCC susceptibility are not clear, but it is likely driven by residual stress imparted by the welding process. Future work will focus on direct examination of the SCC-damaged microstructure and residual stress of the weld zone to further clarify the operative characteristics controlling SCC susceptibility.
This report summarizes the fiscal year 2019 (FY19) status of the borehole heater test in salt funded by the US Department of Energy Office of Nuclear Energy (DOE-NE) Spent Fuel and Waste Science & Technology (SFWST) campaign. This report satisfies SFWST level-three milestone report M3SF-19SN010303033 and updates the April 2019 level-two milestone report M2SF-19SNO10303031 of the same name, adding as-built details to reflect the nearly complete implementation of the borehole heater test at the time of writing (August 2019). The report discusses the FY19 design, implementation, and preliminary data interpretation plan for a set of borehole heater tests called the brine availability tests in salt (BATS), which is funded by DOE-NE at the Waste Isolation Pilot Plant (WIPP), a DOE Office of Environmental Management (DOE-EM) site. The organization of BATS is outlined in Project Plan: Salt In-Situ Heater Test (SNL, 2018). An early design of the field test is laid out in Kuhlman et al. (2017), including extensive references to previous field tests that illustrate aspects of the present test. The previous test plan by Stauffer et al. (2015) places BATS in the context of a multi-year testing strategy involving tests at multiple scales and of multiple processes, eventually culminating in a drift-scale disposal demonstration.
Single-photon detectors have historically consisted of macroscopic-sized materials but recent experimental and theoretical progress suggests new approaches based on nanoscale and molecular electronics. Here, we present a theoretical study of photodetection in a system composed of a quantum electronic transport channel functionalized by a photon absorber. Notably, the photon field, absorption process, transduction mechanism, and measurement process are all treated as part of one fully coupled quantum system, with explicit interactions. Using nonequilibrium, time-dependent quantum transport simulations, we reveal the unique temporal signatures of the single-photon detection process, and show that the system can be described using optical Bloch equations, with a new nonlinearity as a consequence of time-dependent detuning caused by the back-action from the transport channel via the dynamical Stark effect. We compute the photodetector signal-to-noise ratio and demonstrate that single-photon detection at high count rate is possible for realistic parameters by exploiting a unique nonequilibrium control of back-action.
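For orientation, the optical Bloch equations for a driven two-level absorber take a standard form such as the one below (sign conventions vary); in the model above, the detuning becomes time dependent through the channel back-action (the dynamical Stark shift), which is the source of the new nonlinearity. The symbols (Rabi frequency Omega, decay rate Gamma, detuning Delta) are generic, not the paper's specific parameter values.

    \dot{\rho}_{ee} = -\Gamma \rho_{ee} + (i\Omega/2)(\rho_{ge} - \rho_{eg}),
    \dot{\rho}_{eg} = -[\Gamma/2 + i\Delta(t)] \rho_{eg} + (i\Omega/2)(\rho_{gg} - \rho_{ee}),
    \Delta(t) = \Delta_0 + \delta_{Stark}(t).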
This report describes an improved computer code for two-photon opacity. The new code incorporates many recent advances and is ready to be tested against experiments. It incorporates the difficult mathematical techniques required for handling free states and free-free matrix elements.
This report is based on discussions held during an unclassified workshop hosted by Sandia National Laboratories (SNL) and the Council on Strategic Risks (CSR) on August 29, 2019. The first in a planned series, this workshop brought together experts from government, national laboratories, academia, industry, and the policy and entrepreneur communities to examine the potential to use strategy, technology advances, policy, and other tools to make bioweapons obsolete. The workshop provided participants with a rare opportunity to step back from their day-to-day jobs and think strategically about how to achieve this goal more effectively and rapidly. The conversation was held under the Chatham House Rule. The objective was to generate and share ideas and identify questions that will be critical to answer in pursuit of making bioweapons obsolete. Its purpose was not to create consensus. This report does not represent consensus among participants, nor does it assign specific perspectives to any individual participant or represent the official views of any United States (U.S.) government agency or the organizing institutions, namely SNL and CSR.
Luketa, Anay; Blanchat, Thomas K.; Lord, David; Hogge, Joseph; Cruz-Cabrera, Alvaro A.; Allen, Ray
This report describes an experimental study of physical, chemical, and combustion characteristics of selected North American crude oils, and how these associate with thermal hazard distances resulting from pool fires and fireballs. The emergence of large volumes of tight oils within the North American transportation system over the last decade, coupled with several high-profile train accidents involving crude oils, has raised questions about the role of oil properties in general, and tight oils in particular, in affecting the severity of hazard outcomes in related crude oil fires. The objective of the pool fire experiments is to measure parameters necessary for hazard evaluation, namely, burn rate, surface emissive power, flame height, and heat flux to an engulfed object. To carry out this objective, a series of 2-m diameter indoor and 5-m diameter outdoor experiments were performed. The objective of the fireball experiments is to measure parameters required for hazard evaluation, which include fireball maximum diameter, height at maximum diameter, duration, and surface emissive power, using 400 gallons of crude oil per test. The crude oil samples used for the experiments were obtained from several U.S. locations, including "tight" oils from the Bakken region of North Dakota and the Permian region of Texas, and a conventionally produced oil from the U.S. Strategic Petroleum Reserve stockpile. These samples spanned a measurable range of vapor pressure (VPCRx(T)) and light ends content representative of U.S. domestic conventional and tight crudes. The results indicate that all the oils tested here have comparable thermal hazard distances and that the measured properties are consistent with other alkane-based hydrocarbon liquids. The similarity of pool fire and fireball burn characteristics pertinent to thermal hazard outcomes of the three oils studied indicates that vapor pressure is not a statistically significant factor in affecting these outcomes. Thus, the results from this work do not support creating a distinction for crude oils based on vapor pressure with regard to these combustion events.
The study of both linear and nonlinear structural vibrations routinely circles back to the concise yet complex problem of choosing a set of coordinates that yields simple equations of motion. In both experimental and mathematical methods, that choice is a difficult one because of measurement, computational, and interpretation difficulties. Oftentimes, researchers choose to solve their problems in terms of linear, undamped mode shapes because they are easy to obtain; however, this is known to give rise to complicated phenomena such as mode coupling and internal resonance. This work considers the nature of mode coupling and internal resonance in systems containing non-proportional damping, linear detuning, and cubic nonlinearities through the method of multiple scales as well as instantaneous measures of effective damping. The energy decay observed in the structural modes is well approximated by the slow-flow equations in terms of the modal amplitudes, and it is shown how mode coupling enhances the damping observed in the system. Moreover, in the presence of a 3:1 internal resonance between two modes, the nonlinearities not only enhance the dissipation but can allow for the exchange and transfer of energy between the resonant modes. However, this exchange depends on the resonant phase between the modes and is proportional to the energy in the lowest mode. The results of the analysis tie together interpretations used by both experimentalists and theoreticians to study such systems and provide a more concrete way to interpret these phenomena.
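A schematic two-mode system of the kind analyzed here (coefficients generic, not the paper's values) is

    \ddot{q}_1 + 2\zeta_1\omega_1\dot{q}_1 + \omega_1^2 q_1 + \alpha_1 q_1^3 + \alpha_2 q_1^2 q_2 = 0,
    \ddot{q}_2 + 2\zeta_2\omega_2\dot{q}_2 + \omega_2^2 q_2 + \beta_1 q_2^3 + \beta_2 q_1^3 = 0,
    with \omega_2 \approx 3\omega_1.

The q_1^2 q_2 and q_1^3 coupling terms are resonant because q_1^3 oscillates near 3\omega_1 \approx \omega_2; the method of multiple scales then yields slow-flow equations for the amplitudes a_i(t) and for a resonant phase \gamma = \phi_2 - 3\phi_1, which governs the direction of the energy exchange described in the abstract.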
IEEE Transactions on Geoscience and Remote Sensing
Krishnamoorthy, Siddharth; Lai, Voon H.; Komjathy, Attila; Pauken, Michael T.; Cutts, James A.; Garcia, Raphael F.; Mimoun, David; Jackson, Jennifer M.; Bowman, Daniel B.; Kassarian, Ervan; Martire, Leo; Sournac, Anthony; Cadu, Alexandre
Seismology on Venus has long eluded planetary scientists due to extreme temperature and pressure conditions on its surface, which most electronics cannot withstand for the mission durations required for ground-based seismic studies. Here, we show that infrasonic (low-frequency) pressure fluctuations, generated by ground motion from an artificial seismic source known as a seismic hammer and recorded using sensitive microbarometers deployed on a tethered balloon, are able to replicate the frequency content of the ground motion. We also show that weak, artificial seismic activity thus produced may be geolocated using multiple airborne barometers. The success of this technique paves the way for balloon-based aero-seismology, leading to a potentially revolutionary method to perform seismic studies from a remote airborne station on Earth and on solar system objects with substantial atmospheres, such as Venus and Titan.
Magnetically launched flyer plates were used to investigate the shock response of beryllium between 90 and 300 GPa. Solid aluminum flyer plates drove steady shocks into polycrystalline beryllium to constrain the Hugoniot from 90 to 190 GPa. Multilayered copper/aluminum flyer plates generated a shock followed by an overtaking rarefaction, which was used to determine the sound velocity in both solid and liquid beryllium between 130 and 300 GPa. Disappearance of the longitudinal wave was used to identify the onset of melt along the Hugoniot, and measurements were compared to density functional theory calculations to explore the proposed hcp-bcc transition at high pressure. The onset of melt along the Hugoniot was identified at ∼205 GPa, which is in good agreement with theoretical predictions. These results show no clear indication of an hcp-bcc transition prior to melt along the beryllium Hugoniot. Rather, the shear stress, determined from the release wave profiles, was found to gradually decrease with stress and eventually vanish at the onset of melt.
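For reference, plate-impact Hugoniot states of this kind are conventionally reduced with the standard Rankine-Hugoniot jump conditions (quoted here for context, not specific to this work):

    P - P_0 = \rho_0 U_s u_p,
    \rho = \rho_0 U_s / (U_s - u_p),
    E - E_0 = (1/2)(P + P_0)(1/\rho_0 - 1/\rho),

where U_s is the shock velocity and u_p the particle velocity; the overtaking rarefaction then probes the Eulerian sound velocity of the shocked state, whose longitudinal-to-bulk transition marks melt.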
Buchholz, Stuart; Keffeler, Evan; Lipp, Karla; Devries, Kerry; Hansen, Francis
The 10th US/German Workshop on Salt Repository Research, Design, and Operation was hosted by RESPEC and South Dakota School of Mines & Technology, both located in the Black Hills of South Dakota. Over 60 registered participants representing Germany, the United States, the Netherlands, and the United Kingdom availed themselves of the excellent facilities on the School of Mines campus. As the 10th annual workshop, this occasion is a milestone of the modern era of collaboration between the US and Germany, which has extended to other countries with potential for radioactive waste disposal in salt. Thrust areas covered in the annual workshops are selected by participants and typically include elements of continuity from year to year. This year's themes included siting, modeling challenges, seal systems and materials, operational safety, and special topics. Two major breakout sessions addressed test sample conditioning and natural closure of salt openings. When the new-generation US/German workshops were conceived, one goal was to identify challenging issues related to salt repository sciences and then conduct open discussions in special breakout sessions. These timely sessions achieve the workshop paradigm and provide in-depth dialogue on important salt repository considerations. In many respects, general themes of these salt repository workshops reflect advances in the scientific basis for nuclear-waste disposal in salt formations and develop naturally as a consequence of unremitting attention, largely a result of the workshop commitment. Technical capabilities in the laboratory and field continue to improve in concert with accumulating experience. Because we are working together, mechanical deformation at the micro-scale can be interpreted at a large scale, which is fundamental to predictive modeling of salt repository evolution. This document records the Proceedings of the 2019 gathering of salt repository nations and has been compiled by RESPEC, demonstrating an adopted protocol in which the host organization creates that year's Proceedings. To assist with organization and compilation, individual chapters are summarized by volunteer subject-matter experts from the participatory audience; these contributors are recognized in the Acknowledgements. All formal presentations are included in this document, thus providing a resource for referencing and many excellent photographic images. Appendices include the agenda, list of participants, abstracts, and presentations. A primary purpose for recording workshop activities is to create and sustain an accessible record of salt repository research. These archives also illustrate transparent development of research areas each year.
The ITER divertor will feature tungsten monoblocks as the plasma-facing component (PFC) that will be subject to extreme temperature and radiation environments. This paper reports the development of surface morphologies on tungsten under helium bombardment at high temperatures, which has important implications for safety, retention, and PFC erosion. Polycrystalline tungsten samples were implanted in the Dual Advanced Ion Simultaneous Implantation Experiment, a dual-beam ion implantation facility at the University of Wisconsin-Madison, with He-only and simultaneous He-D implantation at incidence angles of 55 deg, ion energies of 30 keV, and surface temperatures of 900°C to 1100°C. Morphologies resulting from angled incidence conditions differed from those produced under normal incidence bombardment at similar energy and temperature conditions in previous work. A variety of ordered and disordered morphologies dependent on grain orientation were observed for fluences up to 6 × 10¹⁸ He cm⁻². These morphologies displayed dependencies on crystal orientation at low fluences and incident beam directions at higher fluences. These structures appeared, with variation, under both single-species He and mixed He-D implantations.
The fundamental understanding of photoexcitation landscape and dynamics in hybrid organic-inorganic perovskites is essential for improving their performance in solar cells and other applications. The dual emission features from the orthorhombic phase in perovskites have been the focus of numerous recent studies, and yet their underlying molecular origin remains elusive. We use optical two-dimensional coherent spectroscopy to study the carrier dynamics and coupling of the dual emissions in a methylammonium lead iodide film at 115 K. The two-dimensional spectra reveal an ultrafast redistribution of the photoexcited carriers into the two emission resonances within 250 fs. The high-energy resonance is a short-lived transient state, and the low-energy emission state interacts with coherent phonons. The observed carrier dynamics provide important experimental evidence that can be compared with potential theoretical models and contribute to the understanding of the dual emissions as well as the overall energy level structure in hybrid organic-inorganic perovskites.
Tallman, Aaron E.; Stopka, Krzysztof S.; Swiler, Laura P.; Wang, Yan; Kalidindi, Surya R.; Mcdowell, David L.
Data-driven tools for finding structure–property (S–P) relations, such as the Materials Knowledge System (MKS) framework, can accelerate materials design once the costly and technical calibration process has been completed. A three-model method is proposed to reduce the expense of S–P relation model calibration: (1) direct simulations are performed according to (2) a Gaussian process-based data collection model, in order to calibrate (3) an MKS homogenization model, in an application to α-Ti. The new method compares favorably with expert texture selection on the performance of the so-calibrated MKS models. Benefits for the development of new and improved materials are discussed.
We present various approximations to joint chance constraints arising in two-stage stochastic programming models. Our approximations are derived from three classical inequalities: Markov's inequality, Chebyshev's inequality, and Chernoff's bound. We provide preliminary computational results illustrating the quality of our approximations using a two-stage joint-chance-constrained stochastic program from the literature. We also briefly introduce other alternatives for constructing approximations for joint-chance-constrained two-stage programs.
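For a scalar random quantity Z = G(x, ξ) that must satisfy P(Z > 0) ≤ ε, the three classical inequalities bound the violation probability as follows (a schematic statement; the paper's joint, two-stage constructions are more involved):

    Markov:    P(Z \ge a) \le E[Z]/a                 for Z \ge 0, a > 0,
    Chebyshev: P(|Z - \mu| \ge a) \le \sigma^2/a^2   for a > 0,
    Chernoff:  P(Z \ge a) \le e^{-ta} E[e^{tZ}]      for any t > 0.

Replacing the probability with the corresponding bound yields a conservative, deterministic restriction of the chance constraint that can be embedded directly in the two-stage model.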
We propose a method that exploits sparse representation of potential energy surfaces (PES) on a polynomial basis set selected by compressed sensing. The method is useful for studies involving large numbers of PES evaluations, such as the search for local minima, transition states, or integration. We apply this method to estimating zero-point energies and frequencies of molecules using a three-step approach. In the first step, we interpret the PES as a sparse tensor on a polynomial basis and determine its entries by a compressed-sensing-based algorithm using only a few PES evaluations. Then, we implement a rank-reduction strategy to compress this tensor into a suitable low-rank canonical tensor format using standard tensor compression tools. This allows representing a high-dimensional PES as a small sum of products of one-dimensional functions. Finally, a low-dimensional Gauss–Hermite quadrature rule is used to integrate the product of the sparse canonical low-rank representation of the PES and the Green's function in the second-order diagrammatic vibrational many-body Green's function theory (XVH2) for estimation of zero-point energies and frequencies. Numerical tests on the molecules considered in this work suggest a more efficient scaling of computational cost with molecular size as compared to other methods.
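As a toy illustration of the first step only, a sparse tensor-product Hermite representation of a low-dimensional surface can be recovered from a handful of evaluations with an off-the-shelf l1 solver; the surrogate PES, basis degree, and scikit-learn Lasso below are assumed stand-ins for the paper's compressed-sensing algorithm, not its code.

    import numpy as np
    from itertools import product
    from numpy.polynomial.hermite_e import hermeval
    from sklearn.linear_model import Lasso

    def pes(x):                        # stand-in 2-D "potential energy surface"
        return 0.5 * (x[:, 0]**2 + 2.0 * x[:, 1]**2) + 0.1 * x[:, 0] * x[:, 1]**2

    rng = np.random.default_rng(1)
    d, deg, n = 2, 4, 40               # dimensions, max degree per axis, sample budget
    X = rng.normal(size=(n, d))        # a few PES evaluations, as in the abstract

    # Design matrix of tensor-product probabilists' Hermite polynomials He_k.
    multis = list(product(range(deg + 1), repeat=d))
    Phi = np.column_stack([
        np.prod([hermeval(X[:, j], np.eye(deg + 1)[k[j]]) for j in range(d)], axis=0)
        for k in multis])

    # The l1 penalty drives most coefficients to zero: a sparse PES tensor.
    fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(Phi, pes(X))
    kept = np.sum(np.abs(fit.coef_) > 1e-8)
    print(f"{kept} of {len(multis)} basis coefficients retained")

The retained coefficient tensor would then be passed to the rank-reduction and Gauss–Hermite quadrature steps described above.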
Hadland, Erik; Falmbigl, Matthias; Medlin, Douglas L.; Johnson, David C.
A significant experimental challenge in testing proposed relationships between structure and properties is the synthesis of targeted structures with atomistic control over both the structure and the composition. SnSe2(MoSe2)1.32 was synthesized to test the hypothesis that the low-temperature synthesis of two interleaved structures would result in complete turbostratic disorder and that the disorder would result in ultralow thermal conductivity. SnSe2(MoSe2)1.32 was prepared by depositing elements to form a precursor containing Sn|Se and Mo|Se bilayers, each containing the number of atoms required to form single dichalcogenide planes. The nanoarchitecture of alternating Sn and Mo layers is preserved as the dichalcogenide planes self-assemble at low temperatures. The resulting compound contains well-formed dichalcogenide planes that closely resemble those found in the binary compounds, along with extensive turbostratic disorder. As expected from proposed structure-property relationships, the thermal conductivity of SnSe2(MoSe2)1.32 is ultralow, ∼0.05 W m⁻¹ K⁻¹.
We report a novel approach whereby cross-linked polybutadiene (PB) networks can be depolymerized in situ based on thermally activated alkene metathesis. A commercially available latent Ru catalyst, HeatMet, was compared to the common second-generation Hoveyda-Grubbs catalyst, HG2, in the metathetic depolymerization of PB. HeatMet was found to possess exceptional stability and negligible activity toward PB under ambient conditions, in solution and in bulk. This enabled cross-linked networks to be prepared containing homogeneously distributed Ru catalyst. The dynamic mechanical properties of networks containing HeatMet and cross-linked using alcohol-isocyanate or thiol-ene chemistry were evaluated during cross-linking and post-cross-linking under isothermal and nonisothermal heating. In both cases, above minimum catalyst loadings ranging from 0.004 to 0.024 mol %, the networks exhibited rapid degelation into a soluble oil upon heating to 100 °C. At these temperatures, extensive depolymerization of the PB segments through ring-closing metathesis of 1,4/1,2 diads by the activated HeatMet introduced network defects in significantly greater proportion than the original number of cross-links. The in situ depolymerization of cross-linked PB networks through latent catalysis, as described here, may enable facile disposal and recycling of PB encapsulants and adhesives, among other applications.
This report presents the code verification of EMPIRE-PIC against the analytic solution for a cold diode first derived by Jaffe. The cold diode was simulated using EMPIRE-PIC, and the error norms were computed based on the Jaffe solution. The diode geometry is one-dimensional and uses the EMPIRE electrostatic field solver. After a transient start-up phase as the electrons first cross the anode-cathode gap, the simulations reach an equilibrium where the electric potential and electric field are approximately steady. The expected spatial orders of convergence for the potential, electric field, and particle velocity are observed.
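The convergence statement can be checked with a standard observed-order computation; the error values below are placeholders to show the arithmetic, not the report's data.

    import numpy as np

    # Hypothetical discretization errors e measured on successively refined
    # meshes with spacings h (e.g., L2 norms against the Jaffe solution).
    h = np.array([1/16, 1/32, 1/64, 1/128])
    e = np.array([3.1e-2, 7.9e-3, 2.0e-3, 5.1e-4])

    # Observed order between consecutive refinements: p = log(e1/e2) / log(h1/h2).
    p = np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
    print(p)   # values near 2 would indicate second-order spatial convergence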
Iodine detection is crucial for nuclear waste clean-up and first responder activities. For ease of use and durability of response, robust active materials that enable the direct electrical detection of I2 are needed. Herein, a large reversible electrical response is demonstrated as I2 is controllably and repeatedly adsorbed and desorbed from a series of metal-organic frameworks (MOFs) MFM-300(X), each possessing a different metal center (X = Al, Fe, In, or Sc) bridged by biphenyl-3,3′,5,5′-tetracarboxylate linkers. Impedance spectroscopy is used to evaluate how the different metal centers influence the electrical response upon cycling of I2 gas, ranging from a 10× to a 10⁶× decrease in resistance upon I2 adsorption in air. This large variation in electrical response is attributed not only to the differing structural characteristics of the MOFs but also to the differing MOF morphologies and how this influences the degree of reversibility of I2 adsorption. Interestingly, MFM-300(Al) and MFM-300(In) displayed the largest changes in resistance (up to 10⁶×) yet lost much of their adsorption capacity after five I2 adsorption cycles in air. On the other hand, MFM-300(Fe) and MFM-300(Sc) revealed more moderate changes in resistance (10× to 100×), maintaining most of their original adsorption capacity after five cycles. This work demonstrates how changes in MOFs can profoundly affect the magnitude and reversibility of the electrical response of sensor materials. Tuning both the intrinsic (resistivity and adsorption capacity) and extrinsic (surface area and particle morphology) properties is necessary to develop highly reversible, large signal-generating MOF materials for direct electrical readout for I2 sensing.
Computational prediction of ductile failure remains a challenging and important problem, as demonstrated by the recent Sandia Fracture Challenges. In addition to emphasizing the complexity of such problems, the variety of solution strategies also highlighted the number of possible approaches to this problem. A common engineering approach for such efforts is to use a failure model in conjunction with element deletion. In the second Sandia Fracture Challenge, for instance, nine of the fourteen teams used some form of element deletion. For such schemes, a critical decision pertains to the selection of the appropriate failure model, of which many may be found in the literature (see the review of Corona and Reedlunn). The variety may also be observed in the aforementioned second Sandia Fracture Challenge, in which at least eight different failure criteria are listed for the nine element-deletion-based approaches. The selection of the appropriate failure model is a difficult challenge that depends on the material being considered, and such criteria can variously depend on stress state (i.e., triaxiality, Lode angle) and loading conditions (i.e., strain rate, temperature). Separate implementations of each criterion with different plasticity models can be a repetitive and cumbersome process which may limit the models available to an engineering analyst. To mitigate this issue, an effort was pursued to flexibly implement failure models such that different failure models can be specified and utilized within the same elastic-plastic constitutive routine by simply changing the input syntax. Similarly, the same models are implemented across a suite of elastic-plastic formulations, enabling consistent definitions. As will be discussed later, a specific "modular failure" model is also implemented which allows for the selection or specification of different dependencies depending on the current need. At this stage, this effort is limited to defining failure models; progression/damage evolution in the constitutive model is not treated and is left to future efforts.
Glass-ceramics are a unique class of materials in which the growth of one or more ceramic phases may be induced in an inorganic glass, resulting in a microstructurally heterogeneous material with both glass and ceramic phases. This specialized processing is often referred to as "ceramming". A wide variety of such materials have been developed through the use of different initial glass compositions and thermomechanical processing routes, enabling applications in dentistry, consumer kitchenware, and telescope mirrors. These materials may also exhibit large apparent coefficients of thermal expansion, making them attractive for consideration in glass-ceramic seals. These large apparent coefficients of thermal expansion often arise from silica polymorphs, such as cristobalite, undergoing solid-to-solid phase transformations that produce additional non-linearity in the effective material response.
In order to determine a material's hydrogen storage potential, capacity measurements must be robust, reproducible, and accurate. Commonly, research reports focus on the gravimetric capacity, and oftentimes the volumetric capacity is not reported. Determining volumetric capacities is not as straightforward, especially for amorphous materials. This is the first study to compare measurement reproducibility across laboratories for excess and total volumetric hydrogen sorption capacities based on the packing volume. The use of consistent measurement protocols, common analyses, and figures of merit for reporting data in this study enables the comparison of the results for two different materials. Importantly, the results show good agreement for excess gravimetric capacities among the laboratories. Irreproducibility for excess and total volumetric capacities is attributed to real differences in the measured packing volume of the material.
We present a new, distributed-memory parallel algorithm for detection of degenerate mesh features that can cause singularities in ice sheet mesh simulations. Identifying and removing mesh features such as disconnected components (icebergs) or hinge vertices (peninsulas of ice detached from the land) can significantly improve the convergence of iterative solvers. Because the ice sheet evolves during the course of a simulation, it is important that the detection algorithm can run in situ with the simulation, running in parallel and taking a negligible amount of computation time, so that degenerate features (e.g., calving icebergs) can be detected as they develop. We present a distributed-memory, BFS-based label-propagation approach to degenerate feature detection that is efficient enough to be called at each step of an ice sheet simulation, while correctly identifying all degenerate features of an ice sheet mesh. Our method finds all degenerate features in a mesh with 13 million vertices in 0.0561 seconds on 1536 cores in the MPAS Albany Land Ice (MALI) model. Compared to the previously used serial pre-processing approach, we observe a 46,000× speedup for our algorithm and provide additional capability to do dynamic detection of degenerate features in the simulation.
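A serial sketch of the component-labeling half of the idea is below; the actual implementation partitions the mesh over MPI ranks, exchanges labels at partition boundaries, and additionally detects hinge vertices, none of which is shown here.

    from collections import deque

    def connected_components(n_verts, edges):
        """BFS label propagation: every vertex takes the label of the root of
        its component, so each label names one component (e.g., a drifting
        iceberg disconnected from grounded ice)."""
        adj = [[] for _ in range(n_verts)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        label = list(range(n_verts))
        seen = [False] * n_verts
        for s in range(n_verts):
            if seen[s]:
                continue
            queue, seen[s] = deque([s]), True
            while queue:
                u = queue.popleft()
                label[u] = s                 # propagate the root's label
                for w in adj[u]:
                    if not seen[w]:
                        seen[w] = True
                        queue.append(w)
        return label

Components whose label differs from that of the grounded region would then be flagged as degenerate and removed before the solve.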
This report represents completion of milestone deliverable M2SF-19SNO10309013, "Online Waste Library (OWL) and Waste Forms Characteristics Annual Report," which reports annual status on fiscal year (FY) 2019 activities for the work package SF-19SN01030901 and is due on August 2, 2019. The online waste library (OWL) has been designed to contain information regarding United States (U.S.) Department of Energy (DOE)-managed high-level waste (DHLW), spent nuclear fuel (SNF), and other wastes that are likely candidates for deep geologic disposal, with links to the current supporting documents for the data (when possible; note that no classified or official-use-only (OUO) data are planned to be included in OWL). There may be up to several hundred different DOE-managed wastes that are likely to require deep geologic disposal. This annual report on FY2019 activities includes evaluations of waste form characteristics and waste form performance models, updates to the OWL development, and descriptions of the management processes for the OWL. Updates to the OWL include an updated user's guide, additions to the OWL database content for wastes and waste forms, the results of beta testing, and the changes implemented in response. Also added are descriptions of the management/control processes for the OWL development, version control, and archiving. These processes have been implemented as part of the full production release of OWL (i.e., OWL Version 1.0), which has been developed on, and will be hosted and managed on, Sandia National Laboratories (SNL) systems. The version control/update processes will be implemented for updates to the OWL in the future. Additionally, another process covering methods for interfacing with the DOE SNF Database (DOE 2007) at Idaho National Laboratory (INL) on the numerous entries for DOE-managed SNF (DSNF) has been pushed forward by defining data exchanges and is planned to be implemented sometime in FY2020. The INL database is also sometimes referred to as the Spent Fuel Database or the SFDB, which is the acronym that will be used in this report. Once fully implemented, this integration effort will serve as a template for interfacing with additional databases throughout the DOE complex.
This report is a summary of the international collaboration and laboratory work funded by the US Department of Energy (DOE) Office of Nuclear Energy Spent Fuel and Waste Science & Technology (SFWST) campaign as part of the Sandia National Laboratories Salt R&D work package. This report satisfies level-four milestone M4SF-19SNO10303064. Several stand-alone sections make up this summary report, each completed by the participants. The first sections discuss international collaborations on geomechanical benchmarking exercises (WEIMOS), granular salt reconsolidation (KOMPASS), engineered barriers (RANGERS), and documentation of Features, Events, and Processes (FEPs).
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the United States Department of Energy (DOE) National Nuclear Security Administration. The National Nuclear Security Administration's Sandia Field Office administers the contract and oversees contractor operations at Sandia National Laboratories, New Mexico. Activities at the site support research and development programs with a wide variety of national security missions, resulting in technologies for nonproliferation, homeland security, energy and infrastructure, and defense systems and assessments. DOE and its management and operating contractor for Sandia are committed to safeguarding the environment, reassessing sustainability practices, and ensuring the validity and accuracy of the monitoring data presented in this Annual Site Environmental Report. This report summarizes the environmental protection and monitoring programs in place at Sandia National Laboratories, New Mexico, during calendar year 2018. Environmental topics include air quality, ecology, environmental restoration, oil storage, site sustainability, terrestrial surveillance, waste management, water quality, and implementation of the National Environmental Policy Act. This report is prepared in accordance with and as required by DOE O 231.1B, Admin Change 1, Environment, Safety, and Health Reporting and has been approved for public distribution.
The 2018 Predictive Engineering Science Panel (PESP) is pleased with Sandia's response to our 2018 PESP Final Report recommendations. We have read the response memorandum, applaud the overall Electromagnetic Radiation (EMR) Action Plan and Path Forward, and compliment Sandia on its progress to date. The panel identifies no pressing issues and proposes no course corrections. The panel does offer two high-level suggestions for the Advanced Simulation and Computing (ASC) Program Office.
Scientific computing is no longer purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes in both the software applications used in scientific computation and the ways in which scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project provides the necessary research and infrastructure for scientific discovery in this new computational ecosystem by addressing four interlocking challenges: emerging processor technology, in situ integration, usability, and proxy analysis. This report reviews the accomplishments of the XVis project to prepare scientific visualization for exascale computing.
The results of a computational analysis of self-shielding factors are presented. The analysis highlights the total self-shielding, which is a combination of energy and spatial self-shielding, associated with different neutron detection materials. The Monte Carlo N-Particle (MCNP) transport code was used in conjunction with the Evaluated Nuclear Data File (ENDF) and the International Reactor Dosimetry and Fusion Files (IRDFF). This analysis was done with neutron activation analysis in mind, and therefore is modeled and presented in a similar fashion.
Loc003D is a software tool that computes geographical locations for seismic events at regional to global scales. This software has a rich set of features, including the ability to use custom 3D velocity models, correlated observations, and master event locations. The Loc003D software is especially useful for research related to seismic monitoring applications, since it allows users to easily explore a variety of location methods and scenarios and is compatible with the CSS3.0 format used in monitoring applications. The Loc003D software is available on the web at www.sandia.gov/salsa3d/Software.html. The software is packaged with this user's manual and a set of example datasets, the use of which is described in this manual.
The primary goal of this project is to gather and analyze data supporting development of diagnostic tests and vaccines to mitigate contagious caprine pleuropneumonia in Pakistan. This disease of primarily goats and sheep is a substantial burden to agricultural productivity and survival of farmers and families in Pakistan. During this phase of the project, we have collected and analyzed (genome sequencing) clinical samples from a broad affected region in Pakistan. Preliminary results show that there are at least two distinct clades (variants) of the organism prevalent in different regions of the country. This information, combined with our planned efforts in the coming year, will help identify better methods to diagnose and vaccinate herds against this pathogen. Our vision is that this work will improve critical livestock health in Pakistan, and consequently economic and social stability.
The ability to print polymeric materials at a high volume rate (~1000 in³/hr) has been demonstrated by Oak Ridge National Laboratory's (ORNL) Manufacturing Demonstration Facility (MDF) and shows promise for new opportunities in Additive Manufacturing (AM), particularly in the rapid fabrication of tooling equipment for prototyping. However, in order to be effective, the polymeric materials require a metallic coating akin to tool steels to survive the mechanical and thermal environments of their intended application. Thus, the goal of this project was to demonstrate a pathway for metallizing Big Area Additive Manufactured (BAAM) polymers using a Twin Wire Arc (TWA) spray coating process. Key problems addressed in this study were the adhesion of sprayed layers to the BAAM polymer substrates and demonstration of hardness and compression testing of the metallized layers.
One promising method for solar energy storage is Solar Thermochemical Hydrogen (STCH) production. This two-step thermochemical process utilizes nonstoichiometric metal oxides to convert solar energy into hydrogen gas. The oxide first undergoes reduction via exposure to heat generated from concentrated solar power. When subsequently exposed to steam, the reduced oxide splits water molecules through its re-oxidation process, thus producing hydrogen gas. The viability of STCH depends on identifying redox-active materials that have fast redox kinetics, structural stability, and low reduction temperatures. Complex perovskite oxides show promise for more efficient hydrogen production at lower reduction temperatures than current materials. In this work, a stagnation flow reactor was used to characterize the water splitting capabilities of BaCe0.25Mn0.75O3 (BCM). In the future, the method outlined will be used to characterize structural analogues of BCM, to provide insight into the effect of material composition on water splitting behavior and ultimately guide the synthesis of more efficient STCH materials.
The strength of brittle porous media is of concern in numerous applications, for example, earth penetration, crater formation, and blast loading; thus it is important to possess techniques that allow for constitutive model calibration within the laboratory setting. The goal of the present work is to demonstrate an experimental technique allowing for strength assessment that can be implemented into pressure-dependent yield surfaces within numerical simulation schemes. As a case study, the deviatoric strength of distended α-SiO2 has been captured in a tamped Richtmyer-Meshkov instability (RMI) environment at a pressure regime of 4-10 GPa. In contrast to traditional RMI studies used to infer strength in solids, the approach described herein probes the behavior of the porous tamp media backing the corrugated solid surface. Hydrocode simulation has been used to interpret the experiment, and a resulting pressure-dependent yield surface akin to the often-employed Modified Drucker-Prager model has been calibrated via the coupled experiment and simulation. The simulations indicate that the resulting jet length generated by the RMI is highly sensitive to the porous media strength, thereby providing a feasible experimental platform capable of capturing pressurized granular deviatoric response. Additionally, a Mach lens loading environment has also been implemented as a validation case study, demonstrating good agreement between experiment and simulation within an alternative loading environment. Calibration and validation of the pressure-dependent yield surface give confidence in the model form, thereby providing a framework for future porous media strength studies.
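For context, the classic Drucker-Prager surface bounds the deviatoric stress by a linear function of pressure, and "modified" variants flatten the cone at high pressure. Schematically (coefficients generic; the calibrated functional form in this work may differ):

    f(p, J_2) = \sqrt{J_2} - (a_0 + a_1 p) \le 0,

where J_2 is the second invariant of the deviatoric stress and p is the pressure; calibration then amounts to choosing the a_i so that simulated jet lengths reproduce the measured ones.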
This knowledge guide was developed as a training and reference manual for Federal Radiological Monitoring and Assessment Center (FRMAC) gamma spectroscopists. The knowledge guide is geared towards applied High Purity Germanium (HPGe) gamma spectroscopy with an emphasis on examples. As such, the knowledge guide generally provides a limited but sufficient discussion of physics concepts. For more detailed information on the physics concepts discussed in this guide, please refer to the references listed.
This report documents the completion of milestone STPRO4-13, "Documented Kokkos API," which is part of the Exascale Computing Project (ECP). The goal of this milestone was to generate documentation for the Kokkos programming model that is accessible to the open HPC community, beyond what was available via the tutorials. The total documentation for Kokkos now contains the equivalent of about 250 pages in textbook format. About a third of it is written in a textbook-like style, as in the Kokkos Programming Guide, while most of the rest is an API reference modeled after popular C++ reference webpages. On the order of 175 pages were newly generated as part of the work for this milestone.
The Pipe Overpack Container (POC) was developed at Rocky Flats to transport plutonium residues with higher levels of plutonium than standard transuranic (TRU) waste to the Waste Isolation Pilot Plant (WIPP) for disposal. In 1996 Sandia National Laboratories (SNL) conducted a series of tests to determine the degree of protection POCs provided during storage accident events. One of these tests exposed four of the POCs to a 30-minute engulfing pool fire. This test resulted in one of the POCs generating sufficient internal pressure to pop off its drum lid and expose the top of the pipe container (PC) to the fire environment. The initial contents of the POCs were inert materials that would not generate large internal pressure within the PC if heated. However, POCs are now being used to store combustible TRU waste at Department of Energy (DOE) sites. At the request of DOE's Office of Environmental Management (EM) and National Nuclear Security Administration (NNSA), SNL started conducting a new series of fire tests in 2015 to examine whether PCs with combustibles would reach a temperature that could result in: (1) decomposition of inner contents and (2) subsequent generation of sufficient gas to cause the PC to over-pressurize and release its inner contents. In 2016, Phase II of the tests showed that POCs tested in a pool fire failed within 3 minutes of ignition, with the POC lid ejecting. These POC lids were fitted with a NUCFIL-019DS filter; the tests revealed that this specific filter did not relieve sufficient pressure to prevent lid ejection. In the fall of 2017, Phase II-A was conducted to expose POCs to a 30-minute pool fire in configurations similar to those tested in Phase II, except that the POC lids were fitted with an UltraTech (UT) 9424S filter instead. That specific filter was chosen because it is designed to help relieve internal pressure during the fire and thus prevent lid ejection. In Phase II-A, however, setups of two POCs stacked upon one another were never tested, which led to this phase of tests, Phase II-B. This report describes the various tests conducted in Phase II-B, presents results from these tests, and discusses the implications for the POCs based on the test results.
The purpose of this document is to provide an advance copy of a "Build Guide" for a Generic Runnable System (GRS) of the Geophysical Monitoring Systems' (GMS) common source code. This guide includes a list of software dependencies and licenses, hardware specifications, and related instructions for how to build the system from the source code. The document is written for individuals who are experienced as administrators of Linux systems. The intention is to support preparation activities prior to the open source release so that dependencies may be in place to build and run the system. This document will be updated and provided with the open source release on GitHub, accompanied by a "Run Guide" for the system. An additional "Configuration Guide" will also be provided after the open source release so that users may explore system configuration options.
pCalc is a software tool that computes travel-time predictions and ray path geometry and performs model queries. This software has a rich set of features, including the ability to use custom 3D velocity models to compute predictions for a variety of geometries. The pCalc software is especially useful for research related to seismic monitoring applications. The pCalc software is available on the web at www.sandia.gov/salsa3d/Software.html. The software is packaged with this user's manual and a set of example datasets, the use of which is described in this manual.
Entropy stable numerical methods for compressible flow have been demonstrated to exhibit better robustness than purely linearly stable methods and need less overall artificial dissipation for long simulations in subsonic and transonic flows. In this work we seek to extend these benefits to multicomponent, multitemperature flows in thermochemical nonequilibrium such as combustion and hypersonic flight. We first derive entropy functions that symmetrize the governing equations and allow stability proofs for such systems. The impact of diffusion model selection on provable entropy stability is considered in detail, including both rigorous models of irreversible thermodynamics and simplified models of greater practical interest. Based on the proven entropy functions we develop affordable, entropy conservative two-point flux functions for solution in conservation form. We derive entropy conservative fluxes for calorically and thermally perfect mixtures, with heat capacities described by either polynomials of the temperature or formulas from statistical thermodynamics.
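The defining property of the two-point fluxes referenced above is Tadmor's entropy-conservation condition: for entropy variables v = ∂η/∂u and entropy potential ψ = vᵀf(u) − q(u), a numerical flux f*(u_L, u_R) is entropy conservative when

    (v_R - v_L)^T f^*(u_L, u_R) = \psi_R - \psi_L,

which reduces the construction to algebra on the chosen entropy function; the contribution here is carrying this out for multicomponent, multitemperature mixtures in thermochemical nonequilibrium.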
Agencies that monitor for underground nuclear tests are interested in techniques that automatically characterize earthquake aftershock sequences to reduce the human analyst effort required to produce high-quality event bulletins. Waveform correlation is effective in detecting similar seismic waveforms from repeating earthquakes, including aftershock sequences. We report the results of an experiment that uses waveform templates recorded by multiple stations of the Comprehensive Nuclear-Test-Ban Treaty International Monitoring System during the first twelve hours after a mainshock to detect and identify aftershocks that occur during the subsequent week. We discuss approaches for station and template selection, threshold setting, and event detection that are specialized for aftershock processing for a sparse, global network. We apply the approaches to three aftershock sequences to evaluate the potential for establishing a set of standards for aftershock waveform correlation processing that can be effective for operational monitoring systems with a sparse network. We compare candidate events detected with our processing methods to the Reviewed Event Bulletin of the International Data Center to develop an intuition about potential reduction in analyst workload.
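A minimal single-station sketch of the core detection step is below; operational aftershock processing additionally stacks correlations over multiple IMS stations and uses per-template thresholds, and the 0.7 threshold and function names here are illustrative assumptions.

    import numpy as np

    def detect(template, stream, threshold=0.7):
        """Slide a waveform template along a continuous data stream and flag
        sample offsets where the normalized cross-correlation exceeds a
        fixed threshold (candidate repeating events/aftershocks)."""
        m = len(template)
        t = (template - template.mean()) / template.std()
        hits = []
        for i in range(len(stream) - m + 1):
            w = stream[i:i + m]
            s = w.std()
            if s == 0:
                continue                          # skip flat (dead) windows
            cc = np.dot(t, (w - w.mean()) / s) / m  # Pearson correlation in [-1, 1]
            if cc >= threshold:
                hits.append((i, cc))
        return hits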
We report on the verification of elastic collisions in EMPIRE-PIC and EMPIRE-Fluid in support of the ATDM L2 V&V Milestone. The thermalization verification problem and the theory behind it are presented, along with an analytic solution for the temperature of each species over time. The problem is run with both codes under multiple parameter regimes. The temperature over time is compared between the two codes and the theoretical results. A preliminary convergence analysis is performed on the results from EMPIRE-PIC and EMPIRE-Fluid, showing the rate at which the codes converge to the analytic solution in time (EMPIRE-Fluid) and particles (EMPIRE-PIC).
Mimetic methods discretize divergence by restricting the Gauss theorem to mesh cells. Because point clouds lack such geometric entities, construction of a compatible meshfree divergence is a challenge. In this work, we define an abstract Meshfree Mimetic Divergence (MMD) operator on point clouds by contraction of field and virtual face moments. This MMD satisfies a discrete divergence theorem, provides a discrete local conservation principle, and is first-order accurate.
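The construction mimics the cell-restricted Gauss theorem that defines mesh-based mimetic divergences:

    \int_K \nabla \cdot \mathbf{u} \, dV = \oint_{\partial K} \mathbf{u} \cdot \mathbf{n} \, dA
    \Rightarrow (\mathrm{DIV}\,\mathbf{u})_K \, |K| = \sum_{f \in \partial K} u_f \, |f|,

with the MMD replacing the face integrals u_f |f| by moments of virtual faces assembled from the point cloud (a schematic restatement, not the paper's full construction); the discrete divergence theorem and local conservation cited above are the point-cloud analogues of this identity.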
Hydrogen is increasingly being used in the public sector as a fuel for vehicles. Due to the high density of hydrogen in its liquid phase, fueling stations that receive deliveries of and store hydrogen as a liquid are more practical for high-volume stations. There is a critical need for validated models to assess the risk at hydrogen fueling stations with cryogenic hydrogen on-site. In this work, a cryogenic hydrogen release experiment generated controlled releases of cryogenic hydrogen in the laboratory. We measured the maximum ignition distance, flame length, and radiative heat flux, and developed correlations to calculate the ignition distance and the radiative heat flux. We also measured the concentration and temperature fields of releases under unignited conditions and used these measurements to validate a model for these cryogenic conditions. This study provides critical information for the development of models to inform the safety codes and standards of hydrogen infrastructure.
The workshop on hydrogen rail applications was attended by representatives from over 40 organizations across academia, government, and industry. The workshop agenda is provided in Appendix A, and a list of workshop organizations is provided in Appendix B. The first day of the workshop focused on domestic and international government agency perspectives. The second day highlighted technology status and development, R&D topics, and industry perspectives on hydrogen rail activities. Topic sessions were followed by panel discussions on relative challenges and issues. This report captures the key themes discussed by the workshop participants and provides details on specific recommendations and collaborative opportunities. The report includes presentation overviews, panel discussion summaries, and a summary of major outcomes, recommendations, and envisioned pathways forward in the development and deployment of hydrogen rail technology and international collaboration.
This article presents an example by which design loads for a wave energy converter (WEC) might be estimated through the various stages of the WEC design process. Unlike previous studies, this study considers structural loads, for which an accurate assessment is crucial to the optimization and survival of a WEC. Three levels of computational fidelity are considered. The first set of design load approximations are made using a potential flow frequency-domain boundary-element method with generalized body modes. The second set of design load approximations are made using a modified version of the linear-based time-domain code WEC-Sim. The final set of design load simulations are realized using computational fluid dynamics coupled with finite element analysis to evaluate the WEC's loads in response to both regular and focused waves. This study demonstrates an efficient framework for evaluating loads through each of the design stages. In comparison with experimental and high-fidelity simulation results, the linear-based methods can roughly approximate the design loads and the sea states at which they occur. The high-fidelity simulations for regular wave responses correspond well with experimental data and appear to provide reliable design load data. The high-fidelity simulations of focused waves, however, result in highly nonlinear interactions that are not predicted by the linear-based most-likely extreme response design load method.
This document will detail a field demonstration test procedure for the Module OT device developed for the joint NREL-SNL DOE CEDS project titled "Modular Security Apparatus for Managing Distributed Cryptography for Command & Control Messages on Operational Technology (OT) Networks." The aim of this document is to create the testing and evaluation procedure for field demonstration of the device; this includes primarily functional testing and implementation testing at Public Service Company of New Mexico's (PNM's) Prosperity solar site environment. Specifically, the Module OT devices will be integrated into the Prosperity solar site system; traffic will be encrypted between several points of interest at the site (e.g., inverter micrologger and switch). The tests described in this document will be performed to assess the impact and effectiveness of the encryption capabilities provided by the Module OT device.
We measure the resonant frequency of a niobium microstrip resonator as a function of temperature from 1.4 to 8.4 K. In a 2-micrometer-wide half-wave resonator, we find that the resonant frequency changes by a factor of 7 over this temperature range. From the resonant frequencies, we extract the inductance per unit length, characteristic impedance, and propagation velocity (group velocity). We discuss how these results relate to superconducting electronics. Over the 2 K to 6 K temperature range where superconducting electronic circuits operate, the inductance shows a 19% change, and both the impedance and the propagation velocity show an 11% change.
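A minimal sketch of the kind of extraction described, under the common assumption that the capacitance per unit length is set by geometry and approximately temperature independent, so the frequency shift maps onto the (kinetic) inductance; values and names are illustrative.

```python
import math

def line_params_from_halfwave(f1, length, C_per_m):
    """Extract transmission-line parameters from the fundamental
    resonance f1 (Hz) of a half-wave resonator of physical length (m),
    given the capacitance per unit length C_per_m (F/m). A half-wave
    resonator satisfies f1 = v / (2 * length)."""
    v = 2.0 * length * f1                # propagation velocity, m/s
    L_per_m = 1.0 / (C_per_m * v ** 2)   # inductance per length, H/m
    Z0 = math.sqrt(L_per_m / C_per_m)    # characteristic impedance, ohm
    return L_per_m, Z0, v
```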
The complex environments that characterize combustion systems can influence the distribution of gas-phase species, the relative importance of various growth mechanisms, and the chemical and physical characteristics of the soot precursors generated. In order to provide molecular insights into the effect of combustion environments on the formation of gas-phase species, in this paper we study the temporal and spatial dependence of soot-precursor growth mechanisms in an ethylene/oxygen/argon counterflow diffusion flame. Our computational tools of investigation included fluid dynamics simulations and stochastic discrete modeling. Results show the relative importance of various reaction pathways in the flame, with the hydrogen-abstraction-acetylene-addition (HACA) mechanism contributing to the formation of pure hydrocarbons near the stagnation plane, and oxygen chemistry prevailing near the maximum-temperature region, where the concentration of atomic oxygen reaches its peak and phenols, ethers, and furan-embedded species are formed. The computational results show excellent agreement with measurements obtained using aerosol mass spectrometry coupled with vacuum-ultraviolet photoionization. Knowledge acquired in this study can be used to predict the type of compounds formed in various locations of the flame and eventually provide insights into the environmental parameters that influence the growth of soot precursors. Additionally, the results reported in this paper highlight the importance of modeling counterflow flames in two or three dimensions to capture the spatial dependence of the growth mechanisms of soot precursors.
Furth, Paul M.; Veerabathini, Anurag; Saifullah, Z.M.; Rivera, Derrick T.; Elkanishy, Abdelrahman; Badawy, Abdel H.A.; Michael, Christopher M.
Towards the goal of enhanced hardware security, this work proposes compact supervisory circuits to perform low-frequency monitoring of a communication SoC. The communication RF output is monitored through an integrated RF envelope detector. The supply to the transceiver block of the SoC is delivered by an integrated linear voltage regulator with output current monitoring. These two supervisory circuits are inexpensively fabricated in a 0.6-μm technology. The useful bandwidth of the envelope detector is measured as 1-6 GHz at a supply of 3.3 VDC and a quiescent current of 2.65 mA. The linear regulator generates 3.3 VDC from a 5 VDC input, with a quiescent current of 1.83 mA and load currents from 1-120 mA. Static and transient load current tests demonstrate linear output current monitoring.
Timeseries power and voltage data recorded by electricity smart meters in the US have been shown to provide immense value to utilities when coupled with advanced analytics. However, Advanced Metering Infrastructure (AMI) has diverse characteristics depending on the utility implementing the meters. Currently, there are no specific guidelines for which data-collection parameters, such as the measurement interval, are optimal, and this continues to be an active area of research. This paper aims to review different grid-edge, delay-tolerant algorithms that use AMI data and to identify the minimum granularity and type of data required to apply these algorithms to improve distribution system models. The primary focus of this paper is distribution system secondary circuit topology and parameter estimation (DSPE).
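As an illustration of one common DSPE formulation (an assumption for illustration, not necessarily a method reviewed in the paper), the series impedance between a service transformer and a meter can be estimated by least squares from interval voltage and power data using a linearized voltage-drop model:

```python
import numpy as np

def estimate_service_impedance(v_meter, p_kw, q_kvar, v_nom=240.0):
    """Least-squares estimate of the effective series resistance R and
    reactance X (ohms) between a service transformer and one meter,
    from AMI interval data, using the linearized voltage-drop model
    v ~= v_src - (R*P + X*Q) / v_nom."""
    A = np.column_stack([np.ones_like(p_kw),
                         -p_kw * 1e3 / v_nom,     # watts
                         -q_kvar * 1e3 / v_nom])  # vars
    (v_src, R, X), *_ = np.linalg.lstsq(A, v_meter, rcond=None)
    return v_src, R, X
```

Coarser measurement intervals average away the load variation this regression relies on, which is one way the minimum data granularity enters the analysis.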
We report on the fabrication and characterization of Nb/Ta-N/Nb Josephson junctions grown by room temperature magnetron sputtering on 150-mm diameter Si wafers. Junction characteristics depend upon the Ta-N barrier composition, which was varied by adjusting the N2 flow during film deposition. Higher N2 flow rates raise the barrier resistance and increase the junction critical current. This work demonstrates the viability of Ta-N as an alternative barrier to aluminum oxide, with the potential for large scale integration.
The ability to localize defects in order to understand failure mechanisms in complex superconducting electronics circuits, while operating at low temperature, does not yet exist. This work applies thermally-induced voltage alteration (TIVA) to a biased superconducting electronics (SCE) circuit at ambient temperature. TIVA is a commonly used, laser-based failure analysis technique developed for silicon-based microelectronics. The non-operational circuit consisted of an arithmetic logic unit (ALU) in a high-frequency test bed designed at HYPRES and fabricated by MIT Lincoln Laboratory using their SFQ5ee process. Localized TIVA signals were correlated with reflected-light images at the surface, and these sites were further investigated by scanning electron microscopy imaging of focused ion-beam cross-sections. The areas investigated, where prominent TIVA signals were observed, showed seams in the Nb wiring layers at contacts to Josephson junctions or inductors and/or disrupted junction morphologies. These results suggest that the TIVA technique can be used at ambient temperature to diagnose fabrication defects that may cause low-temperature circuit failure.
Laser diagnostics are essential for time-resolved studies of solid rocket propellant combustion and small explosive detonations. Digital in-line holography (DIH) is a powerful tool for three-dimensional particle tracking in multiphase flows. By combining DIH with complementary diagnostics, particle temperatures and soot/smoke properties can be identified.
Control of the atomic structure, as measured by the extent of the embrittling, chemically ordered B2 phase, is demonstrated in intermetallic alloys through additive manufacturing (AM) and characterized using high-fidelity neutron diffraction. As a layer-by-layer rapid solidification process, AM was employed to suppress the extent of the chemically ordered B2 phase in a soft ferromagnetic Fe-Co alloy, a model material system of interest for electromagnetic applications. The extent of atomic ordering was found to be insensitive to spatial location within specimens, suggesting that the thermal conditions within only a few AM layers were most influential in controlling the microstructure, in agreement with predictions from a thermal model for welding. Analysis of process-parameter effects on ordering found that suppression of the B2 phase was the result of an increased average cooling rate during processing. AM processing parameters, namely interlayer interval time and build velocity, were used to systematically control the relative fraction of ordered B2 phase in specimens from 0.49 to 0.72. The hardness of AM specimens was more than 150% higher than that of conventionally processed bulk material. Implications for tailoring the microstructures of intermetallic alloys are discussed.
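For reference, the extent of atomic ordering in such alloys is conventionally quantified by a long-range order parameter obtained from the ratio of superlattice to fundamental diffraction peak intensities; a minimal sketch of that standard relation (not necessarily the exact analysis used in this work) follows.

```python
def long_range_order(I_sup_obs, I_fund_obs, I_sup_calc, I_fund_calc):
    """Long-range order parameter S of a B2 phase from diffraction:
    the observed superlattice-to-fundamental peak intensity ratio,
    normalized by the same ratio calculated for a perfectly ordered
    (S = 1) structure, equals S^2."""
    S_squared = (I_sup_obs / I_fund_obs) / (I_sup_calc / I_fund_calc)
    return S_squared ** 0.5
```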
Proceedings - 2019 IEEE Symposium on High-Performance Interconnects, HOTI 2019
Jha, Saurabh; Patke, Archit; Brandt, James M.; Gentile, Ann C.; Showerman, Mike; Roman, Eric; Kalbarczyk, Zbigniew T.; Kramer, Bill; Iyer, Ravishankar K.
Network congestion in high-speed interconnects is a major source of application runtime performance variation. Recent years have witnessed a surge of interest from both academia and industry in the development of novel approaches for congestion control at the network level and in application placement, mapping, and scheduling at the system level. However, these studies are based on proxy applications and benchmarks that are not representative of the field-congestion characteristics of high-speed interconnects. To address this gap, we present (a) an end-to-end monitoring and analysis framework to support long-term field-congestion characterization studies, and (b) an empirical study of network congestion in petascale systems across two different interconnect technologies: (i) Cray Gemini, which uses a 3-D torus topology, and (ii) Cray Aries, which uses the Dragonfly topology.
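As an illustration of the kind of derived metric such a monitoring framework might compute from periodically sampled link counters (counter names and semantics here are hypothetical, not the actual Gemini/Aries counters):

```python
import numpy as np

def percent_time_stalled(stall_cycles, timestamps, clock_hz):
    """Percent-time-stalled congestion metric per sampling interval,
    from a monotonically increasing per-link stall-cycle counter
    sampled at the given timestamps (s). Counter semantics are
    illustrative; real Gemini/Aries counters differ."""
    d_stall = np.diff(stall_cycles)            # stalled cycles/interval
    d_cycles = np.diff(timestamps) * clock_hz  # total cycles/interval
    return 100.0 * d_stall / d_cycles
```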
Natural gas hydrate is often found in marine sediment in heterogeneous distributions in different sediment types. Diffusion may be a dominant mechanism for methane migration and affect hydrate distribution. We use a 1-D advection-diffusion-reaction model to understand hydrate distribution in and surrounding thin coarse-grained layers to examine the sensitivity of four controlling factors in a diffusion-dominant gas hydrate system. These factors are the particulate organic carbon content at seafloor, the microbial reaction rate constant, the sediment grading pattern, and the cementation factor of the coarse-grained layer. We use available data at Walker Ridge 313 in the northern Gulf of Mexico where two ~3-m-thick hydrate-bearing coarse-grained layers were observed at different depths. The results show that the hydrate volume and the total amount of methane within thin, coarse-grained layers are most sensitive to the particulate organic carbon of fine-grained sediments when deposited at the seafloor. The thickness of fine-grained hydrate free zones surrounding the coarse-grained layers is most sensitive to the microbial reaction rate constant. Moreover, it may be possible to estimate microbial reaction rate constants at other locations by studying the thickness of the hydrate free zones using the Damköhler number. In addition, we note that sediment grading patterns have a strong influence on gas hydrate occurrence within coarse-grained layers.
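For readers unfamiliar with the Damköhler number mentioned above, it compares the reaction rate to the diffusive transport rate over a length scale of interest; a minimal sketch with purely illustrative values:

```python
def damkohler(k, L, D):
    """Damkohler number for a first-order reaction with rate constant
    k (1/s) acting over length scale L (m) against diffusion with
    diffusivity D (m^2/s). Da >> 1: reaction dominates over the scale
    L; Da << 1: diffusion dominates and smooths gradients away."""
    return k * L ** 2 / D

# Illustrative values only: k = 1e-12 1/s, L = 10 m, D = 1e-9 m^2/s
print(damkohler(1e-12, 10.0, 1e-9))  # Da = 0.1
```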
Battery energy storage is being installed behind-the-meter to reduce electrical bills while improving power system efficiency and resiliency. This paper demonstrates the development and application of an advanced optimal control method for battery energy storage systems to maximize these benefits. We combine methods for accurately modeling the state-of-charge, temperature, and state-of-health of lithium-ion battery cells into a model predictive controller to optimally schedule charge/discharge, air-conditioning, and forced-air convection power to shift an electric customer's consumption and hence reduce their electric bill. While linear state-of-health models produce linear relationships between battery usage and degradation, a non-linear, stress-factor model accounts for the compounding improvements in lifetime that can be achieved by reducing several stress factors at once. Applying this controller to a simulated system shows significant benefits from cooling-in-the-loop control and that relatively small sacrifices in bill-reduction performance can yield large increases in battery life. This trade-off function is highly dependent on the battery's degradation mechanisms and the model used to represent them.
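A minimal sketch of the optimization at the core of such a controller, using a generic convex-programming formulation with a linear throughput proxy for degradation (the controller described above also models temperature, HVAC, and a nonlinear stress-factor health model; all parameter values below are illustrative):

```python
import cvxpy as cp

def schedule_battery(load_kw, price, dt=0.25, e_max=100.0, p_max=50.0,
                     eta=0.95, wear_cost=0.02):
    """Schedule charge/discharge over one horizon to minimize the
    energy bill plus a linear throughput proxy for degradation
    (wear_cost, $/kWh). load_kw and price are arrays over the horizon;
    prices are assumed nonnegative."""
    T = len(load_kw)
    p_ch = cp.Variable(T, nonneg=True)    # charging power, kW
    p_dis = cp.Variable(T, nonneg=True)   # discharging power, kW
    e = cp.Variable(T + 1)                # stored energy, kWh
    cons = [e[0] == 0.5 * e_max, e >= 0.1 * e_max, e <= 0.9 * e_max,
            p_ch <= p_max, p_dis <= p_max]
    for t in range(T):
        cons.append(e[t + 1] == e[t] + dt * (eta * p_ch[t] - p_dis[t] / eta))
    net = load_kw + p_ch - p_dis                     # grid import, kW
    bill = dt * cp.sum(cp.multiply(price, cp.pos(net)))
    wear = wear_cost * dt * cp.sum(p_ch + p_dis)     # throughput proxy
    prob = cp.Problem(cp.Minimize(bill + wear), cons)
    prob.solve()
    return p_ch.value, p_dis.value
```

Swapping the linear wear term for a nonlinear stress-factor model is what makes the bill-versus-battery-life trade-off in the paper nonconvex and model dependent.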