Publications
Water Network Tool for Resilience (WNTR). User Manual, Version 0.2.3
Klise, Katherine; Hart, David; Bynum, Michael; Hogge, Joseph W.; Haxton, Terranna; Murray, Regan; Burkhardt, Jonathan
The Water Network Tool for Resilience (WNTR, pronounced winter) is a Python package designed to simulate and analyze resilience of water distribution networks. Here, a network refers to the collection of pipes, pumps, valves, junctions, tanks, and reservoirs that make up a water distribution system. WNTR has an application programming interface (API) that is flexible and allows for changes to the network structure and operations, along with simulation of disruptive incidents and recovery actions. WNTR is based upon EPANET, which is a tool to simulate the movement and fate of drinking water constituents within distribution systems. Users are encouraged to be familiar with EPANET and/or to have background knowledge in hydraulics and pressurized pipe network modeling before using WNTR. EPANET has a graphical user interface that might be a useful tool to facilitate the visualization of the network and the associated analysis results. Information on EPANET can be found at https://www.epa.gov/water-research/epanet. WNTR is compatible with EPANET 2.00.12 [Ross00]. In addition, users should have experience using Python, including the installation of additional Python packages. General information on Python can be found at https://www.python.org/.
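For readers new to the package, a minimal sketch of a typical WNTR workflow is shown below; the INP file path is illustrative, and the class and method names (wntr.network.WaterNetworkModel, wntr.sim.EpanetSimulator, run_sim) follow the public WNTR API rather than anything specific to this manual.

```python
# Minimal WNTR workflow sketch: load an EPANET model, run a hydraulic
# simulation, and inspect node pressures. The INP file path is illustrative.
import wntr

# Build a network model from an EPANET input file
wn = wntr.network.WaterNetworkModel('networks/Net3.inp')

# Run a hydraulic simulation using the EPANET engine
sim = wntr.sim.EpanetSimulator(wn)
results = sim.run_sim()

# Pressure time series for all junctions (a pandas DataFrame)
pressure = results.node['pressure']
print(pressure.head())
```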
Quantum Super-resolution Bioimaging using Massively Entangled Multimode Squeezed Light
This report presents a new method for realizing super-resolution quantum imaging using massively entangled multimode squeezed light (MEMSL). Each branch of the entangled multimode light interacts with the sample and bears the spatially varying optical phase delay. When imaging optics with finite pupil sizes are used, information is lost. Thanks to analyticity in the Fourier plane, a noiseless measurement would recover the lost information and accomplish super-resolution imaging beating the Rayleigh diffraction limit. I rigorously proved, in a fully quantum formalism, that (1) such information recovery is possible and (2) the information recovery can be accomplished with far fewer resources when MEMSL is used than are needed in any non-entangled or non-squeezed classical imaging method. Furthermore, the action of optical loss in the imaging system, which degrades the imaging performance, is also rigorously analyzed and presented. Several bioimaging applications that could benefit tremendously from the proposed quantum imaging scheme are also suggested.
Features Events and Processes Relevant to DPC Disposal Criticality Analysis
Alsaed, Halim; Price, Laura L.
The Department of Energy is evaluating the technical feasibility of disposal of spent nuclear fuel in dual-purpose canisters in various geologies. As part of ongoing research and development, the effect of potential post-closure criticality events on repository performance is being studied. Many different features, events, and processes (FEPs) could affect the potential for criticality or the extent of a criticality event. Additionally, a criticality event could affect other FEPs. This report uses existing lists of FEPs as a starting point to evaluate the FEPs that could affect or be affected by an in-package criticality event. The evaluation indicates that most of the FEPs associated with the waste form, the waste, or the EBS have some effect on post-closure criticality and/or are affected by the consequences of post-closure criticality. In addition, FEPs not previously considered are identified for further development.
ALEGRA/Sceptre Code Coupling [Brief]
Researchers at Sandia have developed an advanced radiation hydrodynamic simulation capability by coupling the ALEGRA and Sceptre codes.
Application and Certification of Comparative Vacuum Monitoring Sensors for Structural Health Monitoring of 737 Wing Box Fittings
Multi-site fatigue damage, hidden cracks in hard-to-reach locations, disbonded joints, erosion, impact, and corrosion are among the major flaws encountered in today's extensive fleet of aging aircraft and space vehicles. The use of in-situ sensors for real-time health monitoring of aircraft structures is a viable option to overcome inspection impediments stemming from accessibility limitations, complex geometries, and the location and depth of hidden damage. Reliable structural health monitoring (SHM) systems can automatically process data, assess structural condition, and signal the need for human intervention. Prevention of unexpected flaw growth and structural failure can be improved if on-board health monitoring systems are used to continuously assess structural integrity. Such systems are able to detect incipient damage before catastrophic failure occurs. Condition-based maintenance practices could be substituted for the current time-based maintenance approach. Other advantages of on-board distributed sensor systems are that they can eliminate costly, and potentially damaging, disassembly, improve sensitivity through optimum placement of sensors, and decrease maintenance costs by eliminating more time-consuming manual inspections. This report presents a Sandia Labs-aviation industry effort to move SHM into routine use for aircraft maintenance. This program addressed formal SHM technology validation and certification issues so that the full spectrum of concerns, including design, deployment, performance, and certification, was appropriately considered. The Airworthiness Assurance NDI Validation Center (AANC) at Sandia Labs, in conjunction with Boeing, Delta Air Lines, Structural Monitoring Systems Ltd., Anodyne Electronics Manufacturing Corp., and the Federal Aviation Administration (FAA), carried out a certification program to formally introduce Comparative Vacuum Monitoring (CVM) as a structural health monitoring solution to a specific aircraft wing box application. Validation tasks were designed to address the SHM equipment, the health monitoring task, the resolution required, the sensor interrogation procedures, the conditions under which the monitoring will occur, the potential inspector population, adoption of CVM into an airline maintenance program, and the document revisions necessary to allow for routine use of CVM as an alternate means of performing periodic structural inspections. To carry out the validation process, knowledge of aircraft maintenance practices was coupled with an unbiased, independent evaluation. Sandia Labs designed, implemented, and analyzed the results from a focused and statistically relevant experimental effort to quantify the reliability of the CVM system applied to the Boeing 737 wing box fitting application. All factors that affect SHM sensitivity were included in this program: flaw size, shape, orientation, and location relative to the sensors, as well as operational and environmental variables. Statistical methods were applied to performance data to derive Probability of Detection (POD) values for CVM sensors in a manner that agrees with current nondestructive inspection (NDI) validation requirements and is also acceptable to both the aviation industry and regulatory bodies.
This report presents the use of several different statistical methods, some of them adapted from NDI performance assessments and some proposed to address the unique nature of damage detection via SHM systems, and discusses how they can converge to produce a confident quantification of SHM performance. An important element in developing SHM validation processes is a clear understanding of the regulatory measures needed to adopt SHM solutions, along with knowledge of the structural and maintenance characteristics that may impact the operational performance of an SHM system. This report describes the major elements of an SHM validation approach and differentiates the SHM elements from those found in NDI validation. The activities conducted in this program demonstrated the feasibility of routine SHM usage in general and CVM in particular for the application selected. They also helped establish an optimum OEM-airline-regulator process and determined how to safely adopt SHM solutions. This formal SHM validation will allow aircraft manufacturers and airlines to confidently make informed decisions about the proper utilization of CVM technology. It will also streamline the regulatory actions and formal certification measures needed to assure the safe application of SHM solutions.
Inverting infrasound data for the seismoacoustic source time functions and surface spall at the Source Physics Experiments Phase II: Dry Alluvium Geology
Poppeliers, Christian P.; Preston, Leiph A.
This report presents the infrasound data recorded as part of the Source Physics Experiment - Phase 2, Dry Alluvium Geology. This experiment, also known colloquially as DAG, consisted of four underground chemical explosions at the Nevada National Security Site. We focus our analysis on only the fourth explosion (DAG-4) as we determined that this was the only event that produced clear source-generated infrasound energy as recorded by the DAG sensors. We analyze the data using two inversion methods. The first method is designed to estimate the point-source seismoacoustic source time functions, and the second inversion method is designed to estimate the first-order characteristics (e.g. horizontal dimensions and maximum amplitude) of the actual spall surface. For both analysis methods, we are able to fit the data reasonably well, with various assumptions of the source model. The estimated seismoacoustic source appears to be a combination of a buried, isotropic explosion with a maximum amplitude of ~2 × 10⁹ N·m and a vertically oriented force, applied to the Earth's surface, with a maximum amplitude of 4 × 10⁷ N. We use the vertically oriented force to simulate surface spall. The estimated spall surface has an approximate radius of ~40 m with a maximum acceleration magnitude in the range of 0.8 to 1.5 m/s². These estimates are approximately similar to the measured surface acceleration at the site.
Evaluation of Component Reliability in Photovoltaic Systems using Field Failure Statistics
Gunda, Thushara G.; Homan, Rachel
Ongoing operations and maintenance (O&M) are needed to ensure photovoltaic (PV) systems continue to operate and meet production targets over the lifecycle of the system. Although average costs to operate and maintain PV systems have been decreasing over time, reported costs can vary significantly at the plant level. Estimating O&M costs accurately is important for informing financial planning and tracking activities, and subsequently lowering the levelized cost of electricity (LCOE) of PV systems. This report describes a methodology for improving O&M planning estimates by using empirically-derived failure statistics to capture component reliability in the field. The report also summarizes failure patterns observed for specific PV components and local environmental conditions observed in Sandia's PV Reliability, Operations & Maintenance (PVROM) database, a collection of field records across 800+ systems in the U.S. Where system-specific or fleet-specific data are lacking, PVROM-derived failure distribution values can be used to inform cost modeling and other reliability analyses to evaluate opportunities for performance improvements.
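As a rough illustration of how empirically derived failure statistics feed such analyses, the sketch below fits a Weibull distribution to hypothetical component time-to-failure data with SciPy; the data values and the choice of distribution are assumptions for illustration, not values drawn from the PVROM database.

```python
# Illustrative only: fit a Weibull distribution to hypothetical component
# time-to-failure data (days) and estimate the probability of failure
# within the first year. Data values are made up.
import numpy as np
from scipy import stats

times_to_failure = np.array([310., 480., 520., 700., 910., 1150., 1400.])

# Fit a two-parameter Weibull (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(times_to_failure, floc=0.0)

# Probability a component fails within 365 days under the fitted model
p_fail_1yr = stats.weibull_min.cdf(365.0, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, scale={scale:.1f} days, P(fail < 1 yr)={p_fail_1yr:.2%}")
```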
Multimodal Deep Learning for Flaw Detection in Software Programs
Heidbrink, Scott H.; Rodhouse, Kathryn N.; Dunlavy, Daniel D.
We explore the use of multiple deep learning models for detecting flaws in software programs. Current, standard approaches for flaw detection rely on a single representation of a software program (e.g., source code or a program binary). We illustrate that, by using techniques from multimodal deep learning, we can simultaneously leverage multiple representations of software programs to improve flaw detection over single representation analyses. Specifically, we adapt three deep learning models from the multimodal learning literature for use in flaw detection and demonstrate how these models outperform traditional deep learning models. We present results on detecting software flaws using the Juliet Test Suite and Linux Kernel.
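The paper's specific architectures are not reproduced here; as a generic illustration of late-fusion multimodal learning, the PyTorch sketch below concatenates embeddings from a source-code feature branch and a binary feature branch before classification. The layer sizes, encoders, and feature dimensions are placeholders, not the authors' models.

```python
# Generic two-branch (late-fusion) multimodal classifier sketch in PyTorch.
# The modality encoders and sizes are illustrative, not the paper's models.
import torch
import torch.nn as nn

class LateFusionFlawDetector(nn.Module):
    def __init__(self, src_dim=512, bin_dim=256, hidden=128):
        super().__init__()
        self.src_encoder = nn.Sequential(nn.Linear(src_dim, hidden), nn.ReLU())
        self.bin_encoder = nn.Sequential(nn.Linear(bin_dim, hidden), nn.ReLU())
        # Fused representation feeds a binary flaw / no-flaw classifier
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, src_feats, bin_feats):
        fused = torch.cat([self.src_encoder(src_feats),
                           self.bin_encoder(bin_feats)], dim=-1)
        return self.classifier(fused)

model = LateFusionFlawDetector()
logits = model(torch.randn(4, 512), torch.randn(4, 256))  # batch of 4
print(logits.shape)  # torch.Size([4, 2])
```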
Securing machine learning models
Skryzalin, Jacek S.; Goss, Kenneth G.; Jackson, Benjamin C.
We discuss the challenges and approaches to securing numeric computation against adversaries who may want to discover hidden parameters or values used by the algorithm. We discuss techniques that are both cryptographic and non-cryptographic in nature. Cryptographic solutions are either not yet algorithmically feasible or currently require more computational resources than are reasonable to have in a deployed setting. Non-cryptographic solutions may be computationally faster, but these cannot stop a determined adversary. For one such non-cryptographic solution, mixed Boolean arithmetic, we suggest a number of improvements that may protect the obfuscated calculation against current automated deobfuscation methods.
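For readers unfamiliar with mixed Boolean arithmetic (MBA), the sketch below demonstrates a classic MBA identity, x + y == (x ^ y) + 2*(x & y) for fixed-width integers; it is a textbook example of the obfuscation idea, not the specific rewrites or improvements proposed in the report.

```python
# Classic mixed Boolean-arithmetic (MBA) identity used for obfuscation:
# x + y is equivalent to (x ^ y) + 2*(x & y) for fixed-width integers.
# This is a textbook example, not the report's specific rewrites.
import random

def obfuscated_add(x: int, y: int, bits: int = 32) -> int:
    mask = (1 << bits) - 1
    return ((x ^ y) + 2 * (x & y)) & mask

for _ in range(5):
    a, b = random.getrandbits(32), random.getrandbits(32)
    assert obfuscated_add(a, b) == (a + b) & 0xFFFFFFFF
print("MBA identity holds for sampled 32-bit inputs")
```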
Sandia's Research in Support of COVID-19 Pandemic Response: Computing and Information Sciences
Bauer, Travis L.; Beyeler, Walter E.; Finley, Patrick D.; Jeffers, Robert F.; Laird, Carl D.; Makvandi, Monear M.; Outkin, Alexander V.; Safta, Cosmin S.; Simonson, Katherine M.
This report summarizes the goals and findings of eight research projects conducted under the Computing and Information Sciences (CIS) Research Foundation and related to the COVID-19 pandemic. The projects were all formulated in response to Sandia's call for proposals for rapid-response research with the potential to have a positive impact on the global health emergency. Six of the projects in the CIS portfolio focused on modeling various facets of disease spread, resource requirements, testing programs, and economic impact. The two remaining projects examined the use of web-crawlers and text analytics to allow rapid identification of articles relevant to specific technical questions, and categorization of the reliability of content. The portfolio has collectively produced methods and findings that are being applied by a range of state, regional, and national entities to support enhanced understanding and prediction of the pandemic's spread and its impacts.
Performance of CsI:Tl Crystal with a Spectrum Matching Photomultiplier Tube
Yang, Pin Y.; Laros, James H.; Harmon, Charles D.
This report documents an effort to improve the energy resolution for a thallium doped cesium iodide (CsI:Tl) scintillator paired with a spectrum matching photomultiplier tube (PMT). A comparison of the differences in the pulse height spectra from thallium doped (CsI:Tl) and sodium doped (CsI:Na) single crystals with PMTs of different spectral responses was performed. Results show that the energy resolution of the detector improves only 0.5% at room temperature when these scintillators are coupled with a spectrum matching PMT. With a spectrum matching PMT, the best energy resolutions obtained are 7.39% and 7.88% for CsI:Tl and CsI:Na scintillators, respectively. The improvement is primarily attributed to the increase in photon statistics from the larger number of photons (N) detected by the spectrum matching PMT. Other factors that can affect the energy resolution, such as optical quantum yield and the non-proportionality of the CsI:Tl and CsI:Na crystals, were also studied and reported. The results indicate that although the use of a spectrum matching PMT enhances the photon statistics, it also exacerbates the non-proportionality response. Consequently, the improvement in energy resolution expected from photon statistics alone was not fully realized.
Resiliency of Degraded Built Infrastructure
Infrastructure resiliency depends on the ability of infrastructure systems to withstand, adapt, and recover from chronic and extreme stresses. In this white paper, we address the resiliency of infrastructure assets and discuss improving infrastructure stability through development of our understanding of cement and concrete degradation. The resiliency of infrastructure during extreme events relies on the condition, adaptability, and recoverability of built infrastructure (roads, bridges, dams), which serves as the backbone of existing infrastructure systems. Much of the built infrastructure in the US has consistently been rated D+ by the American Society of Civil Engineers (ASCE). Aged infrastructure introduces risk to the system, since unreliable infrastructure increases the likelihood of failures under chronic and extreme stress and is particularly concerning when extreme events occur. To understand and account for this added risk from poor infrastructure quality, more research is needed on (i) how the changing environment alters the aging of new and existing built infrastructure and (ii) how degradation causes unique failure mechanisms. The aging of built infrastructure is driven by degradation of the structural materials, such as concrete and steel supports, which causes failure. Current work in cement/concrete degradation is based on (i) the development of high-strength and degradation-resistant concrete mixtures, (ii) methods of assessing the age and reliability of existing structures, and (iii) modeling of structural stability and the microstructural evolution of concrete/cement from degradation mechanisms (sulfide attack, carbonation, decalcification). Sandia National Laboratories (SNL) has made several investments in studying the durability and degradation of cement-based materials, including using SNL-developed codes and methodologies (peridynamics, PFLOTRAN) to focus on chemo-mechanical fracture of cement for energy applications. Additionally, a recent collaboration with the University of Colorado Boulder has included fracture of concrete gravity dams, scaling the existing work to applications in full-sized infrastructure problems. Ultimately, SNL has the experience in degradation of cementitious materials to extend the current research portfolio and answer concerns about the resilience of aging built infrastructure.
Image Processing Algorithms for Tuning Quantum Devices and Nitrogen-Vacancy Imaging
Monical, Cara P.; Lewis, Phillip J.; Agron, Abrielle; Larson, K.W.; Mounce, Andrew M.
Semiconductor quantum dot devices can be challenging to configure into a regime where they are suitable for qubit operation. This challenge arises from variations in gate control of quantum dot electron occupation and tunnel coupling between quantum dots on a single device or across several devices. Furthermore, a single control gate usually has capacitive coupling to multiple quantum dots and tunnel barriers between dots. If the device operator, be it human or machine, has quantitative knowledge of how gates control the electrostatic and dynamic properties of multiqubit devices, the operator can more quickly and easily navigate the multidimensional gate space to find a qubit operating regime. We have developed and applied image analysis techniques to quantitatively detect where charge offsets from different quantum dots intersect, so-called anticrossings. In this document we outline the details of our algorithm for detecting single anticrossings, which has been used to fine-tune the inter-dot tunnel rates for a three quantum dot system. Additionally, we show that our algorithm can detect multiple anticrossings in the same dataset, which can aid in coarse tuning of the electron occupation of multiple quantum dots. We also include an application of cross correlation to the imaging of magnetic fields using nitrogen vacancies.
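The report's algorithm is more involved, but the general idea of locating a feature such as an anticrossing signature in a 2D charge-stability map via cross-correlation with a template can be sketched as follows; the synthetic image, template, and peak-picking step are placeholders.

```python
# Sketch: locate a template-like feature (e.g., an anticrossing signature)
# in a 2D charge-stability map using cross-correlation. The synthetic data,
# template, and threshold are placeholders, not the report's algorithm.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.1, size=(200, 200))
image[90:100, 110:120] += 1.0            # synthetic "feature"

template = np.ones((10, 10))             # crude template for that feature
template -= template.mean()              # zero-mean so flat regions score ~0

score = correlate2d(image - image.mean(), template, mode='same')
row, col = np.unravel_index(np.argmax(score), score.shape)
print(f"strongest match near (row={row}, col={col})")
```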
RSVP - Flu Like Illness and Respiratory Syndromes COVID-19 Syndromic Reporting Tool Prototype
Caskey, Susan A.; Finley, Melissa F.; Makvandi, Monear M.; Bynum, Leo B.; Edgar, Pablo A.
Individuals infected with SARS-CoV-2, the virus that causes COVID-19, may be infectious 1-3 days prior to symptom onset. People may delay seeking medical care after symptom development due to multiple determinants of health-seeking behavior, such as availability of testing, accessibility of providers, and ability to pay. Therefore, understanding symptoms in the general public is important to better predict and inform resource management plans and engage in reopening. As the influenza season looms, the ability to differentiate between the clinical presentation of COVID-19 and seasonal influenza will also be important to health providers and public health response efforts. This project has developed an algorithm that, when used with captured syndromic trends, can help differentiate among various influenza-like illnesses (ILI) as well as provide public health decision makers a better understanding of spatial and temporal trends. This effort has also developed a web-based tool to allow for the capture of generalized syndromic trends and provide both spatial and temporal outputs on these trends.
Terry Turbopump Expanded Operating Band Modeling and Simulation Efforts in Fiscal Year 2020 - Progress Report
Beeny, Bradley A.; Gilkey, Lindsay N.; Solom, Matthew A.; Luxat, David L.
The Terry Turbine Expanded Operating Band Project is currently conducting testing at Texas A&M University as part of a revised experimental program meant to supplant previous full-scale testing plans under the headings of Milestone 5 and Milestone 6. In consultation with Sandia National Laboratories technical staff and with modeling and simulation support from the same, the hybrid Milestone 5&6 plan is moving forward with experiments aimed at addressing knowledge gaps regarding scale, working fluid, and turbopump self-regulation. Modeling and simulation efforts at Sandia National Laboratories in FY20 fell under the broad umbrella of Milestone 7 and consisted exclusively of MELCOR-related tasks aimed at: 1) constructing/improving input models of Texas A&M University experiments, 2) constructing a generic boiling water reactor input model according to best practices with systems-level Terry turbine capabilities, and 3) adding code capability in order to leverage experimental data/findings, address bugs, and improve general code robustness. Project impacts of the COVID-19 pandemic have fortunately been minimal thus far but are mentioned as necessary when discussing the hybrid Milestone 5&6 progress as well as the corresponding Milestone 7 modeling and simulation progress.
Research Needs for Trusted Analytics in National Security Settings
Stracuzzi, David J.; Speed, Ann S.
As artificial intelligence, machine learning, and statistical modeling methods become commonplace in national security applications, the drive to create trusted analytics becomes increasingly important. The goal of this report is to identify areas of research that can provide the foundational understanding and technical prerequisites for the development and deployment of trusted analytics in national security settings. Our review of the literature covered several disjoint research communities, including computer science, statistics, human factors, and several branches of psychology and cognitive science, which tend not to interact with one another or cite each other's literatures. As a result, there exists no agreed-upon theoretical framework for understanding how various factors influence trust and no well-established empirical paradigm for studying these effects. This report therefore takes three steps. First, we define several key terms in an effort to provide a unifying language for trusted analytics and to manage the scope of the problem. Second, we outline an empirical perspective that identifies key independent, moderating, and dependent variables in assessing trusted analytics. Though not a substitute for a theoretical framework, the empirical perspective does support research and development of trusted analytics in the national security domain. Finally, we discuss several research gaps relevant to developing trusted analytics for the national security mission space.
HSolo: Homography from a single affine aware correspondence
Gonzales, Antonio G.; Monical, Cara P.; Perkins, Tony P.
The performance of existing robust homography estimation algorithms is highly dependent on the inlier rate of feature point correspondences. In this paper, we present a novel procedure for homography estimation that is particularly well suited for inlier-poor domains. By utilizing the scale and rotation byproducts created by affine aware feature detectors such as SIFT and SURF, we obtain an initial homography estimate from a single correspondence pair. This estimate allows us to filter the correspondences to an inlier-rich subset for use with a robust estimator. Especially at low inlier rates, our novel algorithm provides dramatic performance improvements.
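The core idea, forming a planar transform hypothesis from one keypoint's position, scale, and orientation and then keeping only correspondences consistent with it, can be sketched as below. The use of a plain similarity transform and the inlier tolerance are simplifying assumptions, not the paper's exact formulation.

```python
# Sketch: form an initial planar transform from a single affine-aware
# correspondence (position, scale, orientation from e.g. SIFT), then keep
# only correspondences consistent with it. Thresholds are illustrative.
import numpy as np

def similarity_from_single_match(p1, p2, s1, s2, a1, a2):
    """3x3 similarity mapping image-1 points to image-2 points."""
    scale = s2 / s1
    dtheta = a2 - a1
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = scale * np.array([[c, -s], [s, c]])
    t = np.asarray(p2) - R @ np.asarray(p1)
    H = np.eye(3)
    H[:2, :2], H[:2, 2] = R, t
    return H

def filter_inliers(H, pts1, pts2, tol=10.0):
    ones = np.ones((len(pts1), 1))
    proj = (H @ np.hstack([pts1, ones]).T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - pts2, axis=1) < tol

# Toy usage with made-up keypoints
H0 = similarity_from_single_match((10, 20), (40, 65), 2.0, 3.0, 0.1, 0.6)
pts1 = np.array([[10., 20.], [30., 40.], [5., 5.]])
pts2 = (H0 @ np.hstack([pts1, np.ones((3, 1))]).T).T[:, :2]
print(filter_inliers(H0, pts1, pts2))  # all True for consistent points
```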
Prediction of Circuit Response to an Electromagnetic Environment (ASC IC FY2020 Milestone 7179)
Mei, Ting M.; Huang, Andy H.; Thornquist, Heidi K.; Sholander, Peter E.; Verley, Jason V.
This report covers the work performed in support of the ASC Integrated Codes FY20 Milestone 7179. For the Milestone, Sandia's Xyce analog circuit simulator was enhanced to enable a loose coupling to Sandia's EIGER electromagnetic (EM) simulation tool. A device was added to Xyce that takes as its input network parameters (representing the impedance response) and short-circuit current induced in a wire or other element, as calculated by an EM simulator such as EIGER. Simulations were performed in EIGER and in Xyce (using Harmonic Balance analysis) for a variety of linear and nonlinear circuit problems, including various op amp circuits. Results of those simulations are presented and future work is also discussed.
Digital Signal Processing of Radar Pulse Echoes
Modern high-performance radar systems employ ever more digital signal processing (DSP), replacing formerly analog components. Precisely predicting the performance of digital filters and correlators requires an awareness of some of the finer points and characteristics of digital filters. We examine a representative radar receiver DSP chain that processes a Linear Frequency Modulated (LFM) chirp.
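As a concrete example of the processing chain discussed, the sketch below generates a baseband LFM chirp and applies a matched filter (pulse compression) with NumPy; the bandwidth, pulse width, and sample rate are arbitrary illustrative values.

```python
# Sketch: pulse compression of a baseband LFM chirp with a matched filter.
# Bandwidth, pulse width, and sample rate are arbitrary illustrative values.
import numpy as np

fs = 100e6          # sample rate, Hz
T = 10e-6           # pulse width, s
B = 20e6            # swept bandwidth, Hz
t = np.arange(0, T, 1 / fs)
k = B / T           # chirp rate, Hz/s

chirp = np.exp(1j * np.pi * k * (t - T / 2) ** 2)             # baseband LFM pulse
echo = np.concatenate([np.zeros(500), chirp, np.zeros(500)])  # delayed echo

# Matched filter = correlation with the conjugated, time-reversed chirp
mf_out = np.convolve(echo, np.conj(chirp[::-1]), mode='same')
peak = np.argmax(np.abs(mf_out))
print(f"compressed peak at sample {peak}")   # near the echo's center
```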
AniMACCS User Guide
Laros, James H.; Bixler, Nathan E.; Leute, Jennifer E.; Whitener, Dustin H.; Eubanks, Lloyd L.
AniMACCS is a utility code in the MELCOR Accident Consequence Code System (MACCS) software suite that allows for certain MACCS output information to be visually displayed and overlaid onto a geospatial map background. AniMACCS was developed by Sandia National Laboratories for the U.S. Nuclear Regulatory Commission. MACCS is designed to calculate health and economic consequences following a release of radioactive material in the atmosphere. MACCS accomplishes this by modeling the atmospheric dispersion, deposition, and consequences of the release, which depend on several factors including the source term, weather, population, economic, and land-use characteristics of the impacted geographical area. From these inputs, MACCS determines the characteristics of the plume, as well as ground and air concentrations as a function of time and radionuclide.
CephFS experiments on stria.sandia.gov
This report is an institutional record of experiments conducted to explore performance of a vendor installation of CephFS on the SNL stria cluster. Comparisons between CephFS, the Lustre parallel file system, and NFS were done using the IOR and MDTEST benchmarking tools, a test program which uses the SEACAS/Trilinos IOSS library, and the checkpointing activity performed by the LAMMPS molecular dynamics simulation.
Wind Turbine Lightning Mitigation System Radar Cross Section Reduction
Modern wind turbines employ Lightning Mitigation Systems (LMSs) in order to reduce costly damage caused by lightning strikes. Lightning strikes on wind turbines occur frequently, making LMS configurations a necessity. An LMS for a single turbine includes, among other equipment, cables running inside each blade along the entire blade length. These cables are connected to various metallic receptors on the outside surface of the blades. The LMS cables can act as significant electromagnetic scatterers which may cause interference to radar systems. This interference may be mitigated by reducing the Radar Cross Section (RCS) of the wind turbine's LMS. This report investigates proposed modifications to LMS cables in order to reduce the RCS when illuminated by Relocatable Over-the-Horizon Radar (ROTHR) systems, which operate in the HF band (3 - 30 MHz). The proposed modifications include breaking up the LMS cables using spark gap connections and changing the orientation of the LMS cable within the turbine blade. Simulated analyses of these RCS mitigation techniques are provided, along with recommendations on further research.
A Bezier Curve Informed Melt Pool Geometry to Model Additive Manufacturing Microstructures Using SPPARKS
Trageser, Jeremy T.; Mitchell, John A.
Additive manufacturing is a transformative technology with the potential to manufacture designs which traditional subtractive machining methods cannot. Additive manufacturing offers fast builds at near final desired geometry; however, material properties and variability from part to part remain a challenge for certification and qualification of metallic components. AM-induced metallic microstructures are spatially heterogeneous and highly process dependent. Engineering properties such as strength and toughness are significantly affected by microstructure morphologies resulting from the manufacturing process. Linking process parameters to microstructures and ultimately to the dynamic response of AM materials is critical to certifying and qualifying AM built parts and components and improving the performance of AM materials. The AM fabrication process is characterized by building parts layer by layer using a selective laser melt process guided by a computer. A laser selectively scans and melts metal according to a designated geometry. As the laser scans, metal melts, fuses, and solidifies forming the final geometry in a layerwise fashion. As the laser heat source moves away, the metal cools and solidifies forming metallic microstructures. This work describes a microstructure modeling application implemented in the SPPARKS kinetic Monte Carlo computational framework for simulating the resulting microstructures. The application uses Bézier curves and surfaces to model the melt pool surface and spatial temperature profile induced by the moving laser heat source; it simulates the melting and fusing of metal at the laser hot spot and microstructure formation and evolution when the laser moves away. The geometry of the melt pool is quite flexible, and we explore the effects of variations in model parameters on simulated microstructures.
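To make the geometric ingredient concrete, the sketch below evaluates a cubic Bézier curve with de Casteljau's algorithm, the kind of curve that can parameterize a melt-pool boundary; the control points are arbitrary, and this is not the SPPARKS implementation.

```python
# Sketch: evaluate a cubic Bezier curve with de Casteljau's algorithm.
# Such a curve can parameterize a 2D melt-pool boundary; the control
# points below are arbitrary and this is not the SPPARKS implementation.
import numpy as np

def de_casteljau(control_points, u):
    """Evaluate a Bezier curve at parameter u in [0, 1]."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - u) * pts[:-1] + u * pts[1:]
    return pts[0]

# Cubic curve sketching one side of a tear-drop-shaped melt pool (microns)
ctrl = [(0.0, 0.0), (30.0, 25.0), (80.0, 25.0), (120.0, 0.0)]
boundary = np.array([de_casteljau(ctrl, u) for u in np.linspace(0, 1, 11)])
print(boundary.round(1))
```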
Asset Management: Beyond 2020
This Asset Management Beyond 2020 document provides: (1) an introduction to asset management (AM) and an asset management system (AMS), along with insights to next steps, (2) an overview of the International Standards Organization (ISO) 55000 series documents, (3) an overview of the ISO 55001 gap analysis of Center 4700/4800 ("Facilities") organizations at Sandia National Laboratories, hereafter referred to as Sandia, with observations, gaps, and gap closure recommendations, and (4) an asset management architecture (AMA) recommendation aligned with ISO 55001. The AMS and the AM are different but related. ISO 55000 cites AMS as "a management system for AM" to "direct, coordinate, and control AM activities." ISO 55000 cites AM as translating "the organizational objectives into technical and financial decisions, plans and activities." Essentially, the AMS begins with all levels of leadership to enable a structured and consistent approach to AM, culminating in lifecycle asset management excellence.
Physical Security Model Development of an Electrochemical Facility
Parks, Mancel J.; Noel, Todd G.; Stromberg, Benjamin
Nuclear facilities in the U.S. and around the world face increasing challenges in meeting evolving physical security requirements while keeping costs reasonable. The addition of security features after a facility has been designed and without attention to optimization (the approach of the past) can easily lead to cost overruns. Instead, security should be considered at the beginning of the design process in order to provide robust, yet efficient physical security designs. The purpose of this work is to demonstrate how modeling and simulation can be used to optimize the design of physical protection systems. A suite of tools, including Scribe3D and Blender, were used to model a generic electrochemical reprocessing facility. Physical protection elements such as sensors, portal monitors, barriers, and guard forces were added to the model based on best practices for physical security. Two theft scenarios (an outsider attack and insider diversion) as well as a sabotage scenario were examined in order to optimize the security design. Security metrics are presented. This work fits into a larger Virtual Facility Distributed Test Bed 2020 Milestone in the Material Protection, Accounting, and Control Technologies (MPACT) program through the Department of Energy (DOE). The purpose of the milestone is to demonstrate how a series of experimental and modeling capabilities across the DOE complex provide the capabilities to demonstrate complete Safeguards and Security by Design (SSBD) for nuclear facilities.
Cyber Resilience as a Deterrence Strategy
Hammer, Ann H.; Miller, Trisha H.; Uribe, Eva U.
This paper was written by the Cyber Deterrence and Resilience Strategic Initiative in partnership with the Resilience Energy Systems Strategic Initiative. Resilience and deterrence are both part of a comprehensive cyber strategy where tactics may overlap across defense, resilience, deterrence, and other strategic spaces. This paper explores how building resiliency in cyberspace can not only serve to strengthen the defender's posture and capabilities in a general sense but also deter adversaries from attacking.
Big-Data-Driven Geo-Spatiotemporal Correlation Analysis between Precursor Pollen and Influenza and its Implication to Novel Coronavirus Outbreak
Although studies of many respiratory viruses and pollens are often framed by both seasonal and health-related perspectives, pollen has yet to be extensively examined as an important covariate to seasonal respiratory viruses (SRVs) in any context, including a causal one. This study contributes to those goals through an investigation of SRVs and pollen counts at selected regions across the Western Hemisphere. Two complementary decadal-scale geospatial profiles were developed. One laterally spanned the US and was anchored by detailed pollen information for Albuquerque, New Mexico. The other straddled the equator to include Fortaleza, Brazil. We found that the geospatial and climatological patterns of pollen advancement and decline across the US every year presented a statistically significant correlation to the subsequent emergence and decline of SRVs. Other significant covariates included winds, temperatures, and atmospheric moisture. Our study indicates that areas of the US with lower geostrophic wind baselines are typically areas of persistently higher and earlier influenza-like illness (ILI) cases. In addition to that continental-scale contrast, many sites indicated seasonal highs of geostrophic winds and ILI which were closely aligned. These observations suggest extensive scale-dependent connectivity of viruses to geostrophic circulation. Pollen emergence and its own scale-dependent circulation may contribute to the geospatial and seasonal patterns of ILI. We explore some uncertainties associated with this investigation, and consider the possibility that in a temperate climate, following a spring pollen emergence, a resulting increase in pollen-triggered human immunoglobulin E (IgE) antibodies may suppress ILIs for several months.
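One elementary building block of such an analysis, the lagged correlation between a pollen-count series and an ILI series, can be sketched as follows; the series below are synthetic placeholders and the lag range is arbitrary.

```python
# Sketch: lagged Pearson correlation between a weekly pollen-count series
# and a weekly ILI series. The series below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
weeks = 150
pollen = np.clip(np.sin(2 * np.pi * np.arange(weeks) / 52), 0, None) + \
         0.1 * rng.normal(size=weeks)
ili = np.roll(pollen, 8) + 0.2 * rng.normal(size=weeks)   # ~8-week lag

def lagged_corr(x, y, lag):
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(x, y)[0, 1]

lags = range(0, 16)
best = max(lags, key=lambda L: lagged_corr(pollen, ili, L))
print(f"strongest correlation at a lag of {best} weeks")
```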
Modeling a ring magnet in ALEGRA
Niederhaus, John H.; Pacheco, Jose L.; Wilkes, John; Hooper, Russell H.; Siefert, Christopher S.; Goeke, Ronald S.
We show here that Sandia's ALEGRA software can be used to model a permanent magnet in 2D and 3D, with accuracy matching that of the open-source commercial software FEMM. This is done by conducting simulations and experimental measurements for a commercial-grade N42 neodymium alloy ring magnet with a measured magnetic field strength of approximately 0.4 T in its immediate vicinity. Transient simulations using ALEGRA and static simulations using FEMM are conducted. Comparisons are made between simulations and measurements, and amongst the simulations, for sample locations in the steady-state magnetic field. The comparisons show that all models capture the data to within 7%. The FEMM and ALEGRA results agree to within approximately 2%. The most accurate solutions in ALEGRA are obtained using quadrilateral or hexahedral elements. In the case where iron shielding disks are included in the magnetized space, ALEGRA simulations are considerably more expensive because of the increased magnetic diffusion time, but FEMM and ALEGRA results are still in agreement. The magnetic field data are portable to other software interfaces using the Exodus file format.
Learning Hidden Structure in Multi-Fidelity Information Sources for Efficient Uncertainty Quantification (LDRD 218317)
Jakeman, John D.; Eldred, Michael S.; Geraci, Gianluca G.; Smith, Thomas M.; Gorodetsky, Alex A.
This report summarizes the work done under the Laboratory Directed Research and Development (LDRD) project entitled "Learning Hidden Structure in Multi-Fidelity Information Sources for Efficient Uncertainty Quantification". In this project we investigated multi-fidelity strategies for fusing data from information sources of varying cost and accuracy. Most existing strategies exploit hierarchical relationships between models, for example those that occur when different models are generated by refining a numerical discretization parameter. In this work we focused on encoding the relationships between information sources using directed acyclic graphs. The resulting multi-fidelity networks can have general structure and represent a significantly greater variety of modeling relationships than the recursive structures used in the current literature. Numerical results show that a non-hierarchical multi-fidelity Monte Carlo strategy can reduce the cost of estimating uncertainty in predictions of a model of plasma expanding in a vacuum by almost two orders of magnitude.
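For intuition, the sketch below shows a simple two-model control-variate Monte Carlo estimator, the hierarchical special case that the project's graph-based, non-hierarchical estimators generalize; the toy models, correlation, and sample counts are arbitrary.

```python
# Sketch: two-model control-variate Monte Carlo estimator. A cheap low-
# fidelity model reduces the variance of the high-fidelity mean estimate.
# The toy models and sample counts are arbitrary.
import numpy as np

rng = np.random.default_rng(2)

def f_hi(x):   # expensive high-fidelity model (toy)
    return np.sin(x) + 0.1 * x ** 2

def f_lo(x):   # cheap correlated low-fidelity model (toy)
    return np.sin(x)

x_small = rng.uniform(0, 2, size=100)       # few high-fidelity samples
x_large = rng.uniform(0, 2, size=10000)     # many low-fidelity samples

y_hi, y_lo = f_hi(x_small), f_lo(x_small)
alpha = np.cov(y_hi, y_lo)[0, 1] / np.var(y_lo, ddof=1)   # control-variate weight

q_mf = y_hi.mean() + alpha * (f_lo(x_large).mean() - y_lo.mean())
print(f"plain MC estimate:       {y_hi.mean():.4f}")
print(f"multi-fidelity estimate: {q_mf:.4f}")
```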
Increasing the Lifetime of Epoxy Components with Antioxidant Stabilizers
Narcross, Hannah N.; Redline, Erica M.; Celina, Mathias C.; Bowman, Ashley M.
Epoxy thermoset resins are ubiquitous materials with extensive applications as encapsulants, composites, and adhesive/staking compounds used to secure sensitive components. Epoxy resins are inherently sensitive to thermo-oxidative aging, especially at elevated temperatures, which changes the bulk properties of the material and can lead to component failure, for example by cracking due to embrittlement or by adhesion failure between the epoxy and filler material in a composite. This project investigated the effects of three commercial antioxidants (Irganox® 1010 (I-102), butylated hydroxytoluene (BHT), or Chisorb® 770 (HALS)) at two different loadings (2.5 and 5 wt%) on the mechanical and chemical aging of a model epoxy system (EPON™ 828 / Jeffamine® T-403) at ambient conditions and at 65, 95, and 110 °C. Additionally, synthetic routes towards an antioxidant capable of being covalently bound to the resin so as to prevent leaching were explored, with one such molecule being successfully synthesized and purified. One commercial antioxidant (Irganox® 1010) was found to reduce the degree of thermo-oxidatively induced damage in the system.
Incorporating physical constraints into Gaussian process surrogate models (LDRD Project Summary)
Swiler, Laura P.; Gulian, Mamikon G.; Frankel, Ari L.; Jakeman, John D.; Safta, Cosmin S.
This report summarizes work done under the Laboratory Directed Research and Development (LDRD) project titled "Incorporating physical constraints into Gaussian process surrogate models." In this project, we explored a variety of strategies for constraint implementations. We considered bound constraints, monotonicity and related convexity constraints, Gaussian processes which are constrained to satisfy linear operator constraints which represent physical laws expressed as partial differential equations, and intrinsic boundary condition constraints. We wrote three papers and are currently finishing two others. We developed initial software implementations for some approaches. This report summarizes the work done under this LDRD.
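One of the constraint classes mentioned, linear operator (e.g., PDE) constraints, rests on the standard fact that a linear operator applied to a Gaussian process yields another Gaussian process; the identity below is written for reference and is not the project's specific formulation.

```latex
% If f ~ GP(m(x), k(x, x')) and L is a linear operator acting on functions
% of x, then the transformed process is again Gaussian:
\mathcal{L}f \;\sim\; \mathcal{GP}\!\big(\mathcal{L}m,\; \mathcal{L}_{x}\mathcal{L}_{x'}k(x,x')\big)
% "Observations" of Lf (e.g., a PDE residual forced to zero) can therefore be
% conditioned on jointly with ordinary observations of f.
```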
Using Neural Architecture Search for Improving Software Flaw Detection in Multimodal Deep Learning Models
Cooper, Alexis C.; Zhou, Xin Z.; Dunlavy, Daniel D.; Heidbrink, Scott H.
Software flaw detection using multimodal deep learning models has been demonstrated as a very competitive approach on benchmark problems. In this work, we demonstrate that even better performance can be achieved using neural architecture search (NAS) combined with multimodal learning models. We adapt a NAS framework aimed at investigating image classification to the problem of software flaw detection and demonstrate improved results on the Juliet Test Suite, a popular benchmarking data set for measuring performance of machine learning models in this problem domain.
Evaluation of the Geotech GS-13BH Borehole Seismic Sensor
Sandia National Laboratories has tested and evaluated the Geotech GS-13BH borehole sensor. The sensor provides a response similar to that of the standard GS-13 short-period seismic sensor, which is intended for pier installations, but in a borehole package. The purpose of this seismometer evaluation was to determine its measured sensitivity, amplitude and phase response, self-noise and dynamic range, passband, and the acceleration response of its calibration coil.
What Questions Would a Systems Engineer Ask to Assess Systems Engineering Models as Credible
Carroll, Edward R.; Malins, Robert J.
Digital Systems Engineering strategies typically call for digital Systems Engineering models to be retained in repositories and certified as an authoritative source of truth (enabling model reuse, qualification, and collaboration). In order for digital Systems Engineering models to be certified as authoritative (credible), they need to be assessed - verified and validated - with the amount of uncertainty in the model quantified (consider reusing someone else's model without knowing the author). Digital Systems Engineering models of complex systems can contain millions of nodes and edges. Due to this increasing model complexity, the authors assert that traditional human-based methods for validation, verification, and uncertainty quantification - such as human-based peer-review sessions - cannot sufficiently establish that a digital Systems Engineering model of a complex system is credible. The authors assert that this level of detail is beyond the ability of any group of humans - even working for weeks at a time - to discern and catch every minor model infraction. In contrast, computers are highly effective at discerning infractions within massive amounts of information. The authors suggest that a better approach might be to focus the humans on what model patterns should be assessed and enable the computer to assess the massive details in accordance with those patterns - by running through perhaps 100,000 test loops. In anticipation of future projects to implement and automate the assessment of models at Sandia National Laboratories, a study was initiated to elicit input from a group of 25 Systems Engineering experts. The authors' positioning query began with: "What questions would a Systems Engineer ask to assess Systems Engineering models for credibility?" This report documents the results of that survey.
Response of GaN-Based Semiconductor Devices to Ion and Gamma Irradiation
Aguirre, Brandon A.; King, Joseph K.; Manuel, Jack E.; Vizkelethy, Gyorgy V.; Bielejec, Edward S.; Griffin, Patrick J.
GaN has electronic properties that make it an excellent material for the next generation of power electronics; however, its radiation hardening still needs further understanding before it is used in radiation environments. In this work we explored the response of commercial InGaN LEDs to two different radiation environments: ion and gamma irradiations. For ion irradiations we performed two types of irradiations at the Ion Beam Lab (IBL) at Sandia National Laboratories (SNL): high energy and end of range (EOR) irradiations. For gamma irradiations we fielded devices at the gamma irradiation facility (GIF) at SNL. The response of the LEDs to radiation was investigated by IV, light output, and light output vs frequency measurements. We found that dose levels up to 500 krads do not degrade the electrical properties of the devices and that devices exposed to ion irradiations exhibit a linear and non-linear dependence with fluence for two different ranges of fluence levels. We also performed current injection annealing studies to explore the annealing properties of InGaN LEDs.
Coupling CTH to Linear Acoustic Propagation across an Air-Earth Interface
Preston, Leiph A.; Eliassi, Mehdi E.; Poppeliers, Christian P.
The interface between the Earth and the atmosphere forms a strong contrast in material properties. As such, numerical issues can arise when simulating an elastic wavefield across such a boundary when using a numerical simulation scheme. This is exacerbated when two different simulation codes are coupled straddling that interface. In this report we document how we implement the coupling of CTH, a nonlinear shock physics code, to a linearized elastic/acoustic wave propagation algorithm, axiElasti, across the air-earth interface. We first qualitatively verify that this stable coupling between the two algorithms produces expected results with no visible effects of the coupling interface. We then verify the coupling interface quantitatively by checking consistency with results from previous work and with coupled acoustic-elastic seismo-acoustic source inversions in three earth materials.
Multiscale Approach to Fast ModSim for Laser Processing of Metals for Future Nuclear Deterrence Environments
Moser, Daniel M.; Martinez, Mario J.; Johnson, Kyle J.; Rodgers, Theron R.
Predicting performance of parts produced using laser-metal processing remains an outstanding challenge. While many computational models exist, they are generally too computationally expensive to simulate the build of an engineering-scale part. This work develops a reduced order thermal model of a laser-metal system using analytical Green's function solutions to the linear heat equation, representing a step towards achieving a full part performance prediction in an "overnight" time frame. The developed model is able to calculate a thermal history for an example problem 72 times faster than a traditional FEM method. The model parameters are calibrated using a non-linear solution, and microstructures and residual stresses are calculated and compared to the non-linear case. The calibrated model shows promising agreement with the non-linear solution.
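The classical Rosenthal solution for a moving point heat source, a Green's-function-type solution of the linear heat equation, conveys the flavor of such a reduced order thermal model; the sketch below uses generic placeholder material parameters and is not the calibrated Sandia model.

```python
# Sketch: classical Rosenthal moving point-source solution to the linear
# heat equation (a Green's-function-type reduced-order thermal model).
# Material values are generic placeholders, not the calibrated model.
import numpy as np

def rosenthal_T(x, y, z, t, Q=200.0, v=0.5, T0=300.0, k=20.0, alpha=5e-6):
    """Quasi-steady temperature (K) around a point source moving in +x."""
    xi = x - v * t                       # coordinate in the moving frame
    R = np.sqrt(xi ** 2 + y ** 2 + z ** 2) + 1e-12
    return T0 + Q / (2.0 * np.pi * k * R) * np.exp(-v * (xi + R) / (2.0 * alpha))

# Temperature a short distance behind and ahead of the source at t = 1 s
print(rosenthal_T(0.5 - 1e-3, 0.0, 0.0, 1.0))   # just behind: hot
print(rosenthal_T(0.5 + 1e-3, 0.0, 0.0, 1.0))   # just ahead: near ambient
```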
Particle Sensitivity Analysis
Lehoucq, Richard B.; Franke, Brian C.; Bond, Stephen D.; Mckinley, Scott A.
We propose to develop a computational sensitivity analysis capability for Monte Carlo sampling-based particle simulation relevant to Aleph, Cheetah-MC, Empire, Emphasis, ITS, SPARTA, and LAMMPS codes. These software tools model plasmas, radiation transport, low-density fluids, and molecular motion. Our report demonstrates how adjoint optimization methods can be combined with Monte Carlo sampling-based adjoint particle simulation. Our goal is to develop a sensitivity analysis to drive robust design-based optimization for Monte Carlo sampling-based particle simulation - a currently unavailable capability.
Advancing the science of explosive fragmentation and afterburn fireballs through experiments and simulations at the benchtop scale
Guildenbecher, Daniel R.; Dallman, Ann R.; Munz, Elise D.; Halls, Benjamin R.; Jones, Elizabeth M.; Kearney, S.P.; Marinis, Ryan T.; Murzyn, Christopher M.; Richardson, Daniel R.; Perez, Francisco; Reu, Phillip L.; Thompson, Andrew D.; Welliver, Marc W.; Mazumdar, Yi C.; Brown, Alex; Pourpoint, Timothee L.; White, Catriona M.L.; Balachandar, S.; Houim, Ryan W.
Detonation of explosive devices produces extremely hazardous fragments and hot, luminous fireballs. Prior experimental investigations of these post-detonation environments have primarily considered devices containing hundreds of grams of explosives. While relevant to many applications, such large-scale testing also significantly restricts experimental diagnostics and provides limited data for model validation. As an alternative, the current work proposes experiments and simulations of the fragmentation and fireballs from commercial detonators with less than a gram of high explosive. As demonstrated here, reduced experimental hazards and increased optical access significantly expand the viability of advanced imaging and laser diagnostics. Notable developments include the first known validation of MHz-rate optical fragment tracking and the first ever Coherent Anti-Stokes Raman Scattering (CARS) measurements of post-detonation fireball temperatures. While certainly not replacing the need for full-scale verification testing, this work demonstrates new opportunities to accelerate development of diagnostics and predictive models of post-detonation environments.
Hanging String Cuts in SPR Caverns: Modeling Investigation and Comparison with Sonar Data
Zeitler, Todd Z.; Chojnicki, Kirsten C.
Investigation of leaching for oil sales includes looking closely at cavern geometries. Anomalous cavern "features" have been observed near the foot of some caverns subsequent to partial drawdowns. One potential mitigation approach to reducing further growth of preexisting features is based on the hypothesis that reducing the brine string length via a "string cut" would serve to move the zone associated with additional leaching to a location higher up in the cavern and thus away from the preexisting feature. Cutting of the hanging string is expected to provide a control of leaching depth that could be used to "smooth" existing features and thus reduce geomechanical instability in that region of the cavern. The SANSMIC code has been used to predict cavern geometry changes (i.e., the extent of cavern growth with depth) based on variable input parameters for four caverns: West Hackberry 11 (WH11), West Hackberry 113 (WH113), Big Hill 104 (BH104), and Big Hill 114 (BH114). By comparing the initial sonar geometry with resultant geometries calculated by the SANSMIC code, conclusions may be drawn about the potential impact of these variables on future cavern growth. Ultimately, these conclusions can be used to assess possible mitigation strategies such as the potential advantage of cutting versus not cutting a brine string. This work has resulted in a recommendation that a hanging string cut of 80 ft in WH11 would be beneficial to future cavern geometry, while there would be little to no benefit to string cuts in the other three caverns investigated here. The WH11 recommendation was followed in 2019, resulting in an operational string cut. A sonar performed after the string cut showed no adverse leaching in the area of the preexisting flare, as expected from the results of the preliminary SANSMIC runs described in this report. Additional SANSMIC modeling of the actual amount of injected raw water resulted in good agreement with the post-cut sonar.
HEMP Testing of Substation Yard Circuit Breaker Control and Protective Relay Circuits
Baughman, Alfred N.; Bowman, Tyler B.; Guttromson, Ross G.; Halligan, Matthew H.; Minteer, Tim; Mooney, Travis; Vorse, Chad
There are concerns about the effects of High-Altitude Electromagnetic Pulses (HEMP) on the electric power grid. Activities to date have tested and analyzed the vulnerability of digital protective relays (DPRs) used in power substations, but the effect of HEMP on the greater substation environment is not well known. This work establishes a method of testing the vulnerability of circuit breaker control and protective relay circuits to the radiated E1 pulse associated with HEMP based on coupling to the cables in a substation yard. Two DPRs from Schweitzer Engineering Laboratories, Inc. were independently tested. The test setup also included a typical cable in a substation yard with a return plane to emulate the ground grid and other ground conductors near the yard cable, cabinetry housing the installed DPRs, a station battery and battery charger, terminal block elements, and a breaker simulator to emulate a substation yard configuration. The DPRs were powered from the station battery, and the transformer inputs were energized with a three-phase source to maintain typical operating conditions during the tests. Vulnerability testing consisted of a conducted E1 pulse injected into the center of the yard cable of the DPR circuits. Current measurements on the yard cable and DPR inputs indicated significant attenuation of the conducted pulse arriving at the control house equipment from the emulated substation yard. This reduction was quantified with respect to the equivalent open-circuit voltage on the yard cable. No equipment damage or undesired operation occurred on the tested circuits for values below 180 kV, which is significantly higher than the anticipated coupling to a substation yard cable.
Arctic Tipping Points Triggering Global Change (LDRD Final Report)
Peterson, Kara J.; Powell, Amy J.; Kalashnikova, Irina; Roesler, Erika L.; Nichol, Jeffrey N.; Peterson, Matthew G.; Davis, Warren L.; Jakeman, John D.; Stracuzzi, David J.; Bull, Diana L.
The Arctic is warming, and feedbacks in the coupled Earth system may be driving the Arctic toward tipping events that could have critical downstream impacts for the rest of the globe. In this project we have focused on analyzing sea ice variability and loss in the coupled Earth system. Summer sea ice loss is happening rapidly, and although the loss may be smooth and reversible, it has significant consequences for other Arctic systems as well as geopolitical and economic implications. Accurate seasonal predictions of sea ice minimum extent and long-term estimates of timing for a seasonally ice-free Arctic depend on a better understanding of the factors influencing sea ice dynamics and variation in this strongly coupled system. Under this project we have investigated the most influential factors in accurate predictions of September Arctic sea ice extent using machine learning models trained separately on observational data and on simulation data from five E3SM historical ensembles. Monthly averaged data from June, July, and August for a selection of ice, ocean, and atmosphere variables were used to train a random forest regression model. Gini importance measures were computed for each input feature with the testing data. We found that sea ice volume is most important earlier in the season (June) and sea ice extent became a more important predictor closer to September. Results from this study provide insight into how feature importance changes with forecast length and illustrate differences between observational data and simulated Earth system data. We have additionally performed a global sensitivity analysis (GSA) using a fully coupled ultra-low resolution configuration of E3SM. To our knowledge, this is the first global sensitivity analysis involving the fully-coupled E3SM Earth system model. We have found that parameter variations show significant impact on the Arctic climate state and that atmospheric parameters related to cloud parameterizations are the most significant. We also find significant interactions between parameters from different components of E3SM. The results of this study provide invaluable insight into the relative importance of various parameters from the sea ice, atmosphere, and ocean components of the E3SM (including cross-component parameter interactions) on various Arctic-focused quantities of interest (QOIs).
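The workflow described, training a random forest regressor on monthly predictors and reading off feature importances, can be sketched with scikit-learn as below; the feature names and data are synthetic placeholders, and the impurity-based importances shown are computed on the training fit rather than exactly as in the study.

```python
# Sketch: random forest regression of September sea ice extent on monthly
# predictors, with impurity-based (Gini) feature importances. The data are
# synthetic placeholders for the observational/E3SM inputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 200
features = ['June_ice_volume', 'July_ice_extent', 'Aug_ice_extent', 'Aug_SST']
X = rng.normal(size=(n, len(features)))
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.1 * rng.normal(size=n)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("test R^2:", round(rf.score(X_te, y_te), 3))
for name, imp in zip(features, rf.feature_importances_):
    print(f"{name:18s} importance = {imp:.3f}")
```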
Diversified Therapeutic Phage Cocktails from Close Relatives of the Target Bacterium
This project tackles the antibiotic resistance crisis by developing a new method for discovering numerous efficacious bacteriophages for therapeutic cocktails against bacterial pathogens. The phage therapy approach to infectious disease, recently rekindled in U.S. medicine, requires numerous phages for each bacterial pathogen. Our approach 1) uses Sandia-unique software to identify dormant phages (prophages) integrated into bacterial chromosomes, 2) identifies prophage-laden bacteria that are close relatives of the target pathogenic strain to be killed, and 3) engineers away properties of these phages that are undesirable for therapy. We have perfected our phage-finding software, implemented our phage therapy strategy by targeting the pathogen Pseudomonas aeruginosa, and prepared new software to assist the phage engineering. We then turned toward Burkholderia pathogens, aiming to overcome the difficulty of transforming these bacteria with a novel phage conjugation approach. Our work demonstrates the validity of a new approach to phage therapy for killing antibiotic-resistant pathogens.
Regression Based Approach for Robust Finite Element Analysis on Arbitrary Grids. LDRD Final Report
Kuberry, Paul A.; Bochev, Pavel B.; Koester, Jacob K.; Trask, Nathaniel A.
This report summarizes the work performed under a one-year LDRD project aimed at enabling accurate and robust numerical simulation of partial differential equations on meshes of poor quality. Traditional finite element methods use the mesh both to discretize the geometric domain and to define the finite element shape functions. The latter creates a dependence between the quality of the mesh and the properties of the finite element basis that may adversely affect the accuracy of the discretized problem. In this project, we propose a new approach for defining finite element shape functions that breaks this dependence and separates mesh quality from discretization quality. At the core of the approach is a meshless definition of the shape functions, which limits the purpose of the mesh to representing the geometric domain and integrating the basis functions, without any role in their approximation quality. The resulting non-conforming space can be utilized within a standard discontinuous Galerkin framework, providing a rigorous foundation for solving partial differential equations on low-quality meshes. We present a collection of numerical experiments demonstrating our approach in a wide range of settings: strongly coercive elliptic problems, linear elasticity in the compressible regime, and the stationary Stokes problem. We demonstrate convergence for all problems and stability for element pairs that usually require inf-sup compatibility in conforming methods, and we note a minor modification, available through the symmetric interior penalty Galerkin framework, for stabilizing element pairs that would otherwise be unstable. Mesh robustness is particularly critical for elasticity, and we provide an example in which our approach yields a greater than 5x improvement in accuracy and allows an 8x larger stable timestep for a highly deformed mesh, compared to the continuous Galerkin finite element method. The report concludes with a brief summary of ongoing projects and collaborations that utilize or extend the products of this work.
A Review of Sandia Energy Storage Research Capabilities and Opportunities (2020 to 2030)
Ho, Clifford K.; Atcitty, Stanley A.; Bauer, Stephen J.; Borneo, Daniel R.; Byrne, Raymond H.; Chalamala, Babu C.; Lamb, Joshua H.; Lambert, Timothy N.; Schenkman, Benjamin L.; Spoerke, Erik D.; Zimmerman, Jonathan A.
Large-scale integration of energy storage on the electric grid will be essential to enabling greater penetration of intermittent renewable energy sources, modernizing the grid for increased flexibility, security, reliability, and resilience, and enabling cleaner forms of transportation. The purpose of this report is to summarize Sandia's research and capabilities in energy storage and to provide a preliminary roadmap for future efforts in this area that can address the ongoing program needs of DOE and the nation. Mission and vision statements are first presented, followed by an overview of the organizational structure at Sandia that provides support and activities in energy storage. Then, a summary of Sandia's energy storage capabilities is presented by technology, including battery storage and materials, power conversion and electronics, subsurface-based energy storage, thermal/thermochemical energy storage, hydrogen storage, data analytics/systems optimization/controls, safety of energy storage systems, and testing/demonstrations/model validation. A summary of identified gaps and needs is also presented for each technology and capability.
Bistatic Synthetic Aperture Radar - Issues Analysis and Design
The physical separation of the transmitter from the receiver, perhaps into separate flight vehicles with separate flight paths, in a bistatic synthetic aperture radar (SAR) system adds considerable complexity to an already complex system. Synchronization of waveform parameters and timing attributes becomes problematic, and notions of even the synthetic aperture itself take on a new level of abstractness. Consequently, a high-performance, fine-resolution, and reliable bistatic SAR system really needs to be engineered from the ground up, with tighter specifications on a number of parameters and entirely new functionality in other areas. Nevertheless, such a bistatic SAR system appears viable.
Modeling of Atom Interferometer Accelerometer
Soh, Daniel B.; Lee, Jongmin L.; Schwindt, Peter S.
This report presents the theoretical effort to model and simulate an atom-interferometer accelerometer operating in a highly mobile environment. A multitude of non-idealities may occur in such a rapidly changing environment, where a large acceleration changes quickly in both amplitude and direction. We studied the undesired effects of high mobility on the atom-interferometer accelerometer using a detailed model and a simulator. The undesired effects include the atom cloud's movement during Raman pulses, the Doppler effect due to relative movement between the atom cloud and the supporting platform, the finite atom cloud temperature, and the lateral movement of the atom cloud. We present relevant feed-forward mitigation strategies for each identified non-ideality to neutralize its impact and obtain accurate acceleration measurements.
Dispersion Validation for Flow Involving a Large Structure Revisited: 45 Degree Rotation
Brown, Alexander B.; Lance, Blake L.; Clemenson, Michael D.; Jones, Samuel T.; Benson, Michael J.; Elkins, Chris
The atmospheric dispersion of contaminants in the wake of a large urban structure is a challenging fluid mechanics problem of interest to the scientific and engineering communities. Magnetic Resonance Velocimetry (MRV) and Magnetic Resonance Concentration (MRC) are relatively new techniques that leverage diagnostic equipment used primarily by the medical field to make 3D engineering measurements of flow and contaminant dispersal. SIERRA/Fuego, a computational fluid dynamics (CFD) code at Sandia National Laboratories, is employed to make detailed comparisons against the dataset to evaluate the quantitative and qualitative accuracy of the model. This work is the second in a series of scenarios. In the prior work, a single large building in an array of similar buildings was considered with the wind perpendicular to a building face. In this work, the geometry is rotated by 45 degrees and improved studies are performed for simulation credibility. The comparison exercise shows conditionally good agreement between the model and experiment. Model uncertainties are assessed through parametric variations. Various methods of quantifying the agreement between the simulations and the experimental data are examined, and a three-dimensional analysis of accuracy is performed. The effort helped identify deficiencies in the techniques used to make these comparisons, and further methods development therefore becomes one of the main recommendations for follow-on work.
Feasibility Study of Replacing the R/V Robert Gordon Sproul with a Hybrid Vessel Employing Zero-emission Propulsion Technology
Klebanoff, Leonard E.; Caughlan, Sean A.M.; Madsen, Robert T.; Leach, Timothy S.; Conard, Cody J.; Appelgate Jr., Bruce
This project is a natural "follow-on" to the 2017 MARAD-funded project establishing the technical, regulatory, and economic feasibilities of a zero-emission hydrogen fuel-cell coastal research vessel named the Zero-V. In this follow-on project, we examine the applicability of hydrogen fuel-cell propulsion technology for a different kind of vessel, namely a smaller coastal/local research vessel targeted as a replacement for the Scripps Institution of Oceanography (SIO) R/V Robert Gordon Sproul, which is approaching the end of its service life.
Effects of EMP Testing on Residential DC/AC Microinverters
Fierro, Andy; Le, Ken; Sanabria, David E.; Guttromson, Ross G.; Halligan, Matthew H.; Lehr, J.M.
Electromagnetic pulse (EMP) coupling into electronic devices can be destructive to components, potentially causing device malfunction or failure. The large electromagnetic field generated by an EMP can induce large voltages and currents in components. As such, the effects of EMP on different devices need to be understood to elucidate the effect of EMP on potentially vulnerable systems. This report presents test results for small-scale residential DC-to-AC solar panel microinverters that were subjected to high-voltage impulses and currents. The impulses were intended to emulate an EMP coupling event on the AC and DC sides of the microinverter. State-of-health measurements were conducted to characterize device performance before and after each test.
Sandia's Integrated Methodology for Energy and Infrastructure Resilience Analysis
Wachtel, Amanda; Jones, Katherine A.; Baca, Michael J.; O'Neill-Carrillo, Efrain O.; Demenno, Mercy B.
Sandia National Laboratories' (Sandia) Resilient Energy Systems (RES) Strategic Initiative is establishing a strategic vision for U.S. energy systems' resilience through threat-informed research and development, enabling energy and interdependent infrastructure systems to successfully adapt in an environment of accelerating change. A key challenge in promoting energy systems resilience lies in developing rigorous resilience analysis methodologies to quantify system performance. Resilience analysis methodologies should enable evaluation of the consequences of various disruptions and the relative effectiveness of potential mitigations. To address this challenge, RES synthesized the common components of Sandia's resilience frameworks into an integrated methodology for energy and infrastructure resilience analysis. This report documents, demonstrates, and extends this methodology.
Microstructural Changes to Thermally Sprayed Materials Subjected to Dynamic Compression
McCoy, C.A.; Moore, Nathan W.; Vackel, Andrew V.
Dynamic compression of materials can induce a variety of microstructural changes. As thermally sprayed materials have highly complex microstructures, the expected pressure at which changes occur cannot be predicted a priori. In addition, typical in-situ measurements such as velocimetry are unable to adequately diagnose microstructural changes such as failure or pore collapse. Quasi-isentropic compression experiments with sample recovery were conducted to examine microstructural changes in thermally sprayed tantalum and tantalum-niobium blends at pressures up to 8 GPa. Spall fracture was observed in all tests, and post-shot pore volume decreased relative to the initial state. The blended material exhibited larger spall planes, with fracture occurring at interphase boundaries. The pressure at which pore collapse is complete was estimated to be ~26 GPa for pure tantalum and ~19 GPa for the tantalum-niobium blend under these loading conditions.
ALEGRA Parallel Scaling for Shock in a Heterogeneous Structure
We investigate the strong and weak parallel scaling performance of the ALEGRA multiphysics finite element program when solving a problem involving shock propagation through a heterogeneous material. We determine that ALEGRA scales well over a wide range of problem sizes, cores, and element sizes, and that scaling generally improves as the minimum element size in the mesh increases.
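For readers unfamiliar with the terminology, the standard metrics used when reporting strong and weak scaling results are given below; these are general definitions, not formulas quoted from this report.

```latex
% T(p): wall-clock time on p cores; for weak scaling, the work per core is held fixed.
\[
S_{\mathrm{strong}}(p) = \frac{T(1)}{T(p)}, \qquad
E_{\mathrm{strong}}(p) = \frac{T(1)}{p\,T(p)}, \qquad
E_{\mathrm{weak}}(p) = \frac{T(1)}{T(p)}.
\]
```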
Efficacy and Delivery of Novel FAST Agents for Coronaviruses
We proposed to test and develop advanced delivery for novel agents from our collaborators' Facile Accelerated Specific Therapeutics (FAST) platform to reduce coronavirus replication. Sachi Bioworks Inc., Prof. Anushree Chatterjee, and Prof. Prashant Nagpal at the University of Colorado Boulder have developed a bioinformatics and synthesis pipeline to produce sequence-specific theranostic agents (agents that can be therapies and/or diagnostics) that are inherently transported into the cytoplasm of mammalian host cells and sequence-specifically interfere in nucleic acid replication. The agent comprises a small nanoparticle (2-5 nm), chosen for ideal cellular transport and/or imaging, conjugated to a short, synthetic DNA analog oligomer designed to bind one or more target viral sequences. The sequence-specific, high-affinity binding of the FAST agent to its target prevents nucleic acid replication. While the small nanoparticle facilitates delivery in vitro, we plan to package the FAST agents into a larger nanoparticle (80-300 nm) for future in vivo delivery applications. Our team at Sandia has expertise encapsulating biomolecules, including protein, DNA, and RNA, into solid lipid nanoparticles (LNP) and lipid-coated mesoporous silica nanoparticles (LC-MSN) and has shown successful delivery in mouse models to multiple tissues. Our team focused on formulation parameters for loading FAST agents into LNPs and LC-MSNs for enhanced delivery and/or efficacy and in vivo translation. We used lipid formulas that have been shown in the literature to facilitate in vitro and, more importantly, in vivo delivery. In the work discussed below, we successfully demonstrate loading and release of FAST agents on silica cores and stable LC-MSNs in a size range reasonable for in vivo testing.
Joint Analysis of Program Data Representations using Machine Learning for Improved Software Assurance and Development Capabilities
Heidbrink, Scott H.; Rodhouse, Kathryn N.; Dunlavy, Daniel D.; Cooper, Alexis C.; Zhou, Xin Z.
We explore the use of multiple deep learning models for detecting flaws in software programs. Current standard approaches for flaw detection rely on a single representation of a software program (e.g., source code or a program binary). We illustrate that, by using techniques from multimodal deep learning, we can simultaneously leverage multiple representations of software programs to improve flaw detection over single-representation analyses. Specifically, we adapt three deep learning models from the multimodal learning literature for use in flaw detection and demonstrate how these models outperform traditional deep learning models. We present results on detecting software flaws using the Juliet Test Suite and the Linux kernel.
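As an illustration of the general idea of fusing multiple program representations, the sketch below shows a simple late-fusion network in PyTorch. It is a generic example under assumed inputs (token and byte sequences with hypothetical vocabulary sizes), not one of the three models adapted in the report.

```python
import torch
import torch.nn as nn

class LateFusionFlawDetector(nn.Module):
    """Two encoders embed different program representations; a joint head predicts a flaw logit."""
    def __init__(self, src_vocab=5000, bin_vocab=256, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)    # source-code token embedding
        self.bin_emb = nn.Embedding(bin_vocab, emb)    # binary byte embedding
        self.src_lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.bin_lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, src_tokens, bin_bytes):
        _, (h_src, _) = self.src_lstm(self.src_emb(src_tokens))
        _, (h_bin, _) = self.bin_lstm(self.bin_emb(bin_bytes))
        fused = torch.cat([h_src[-1], h_bin[-1]], dim=-1)  # concatenate per-modality embeddings
        return self.head(fused).squeeze(-1)                # one flaw logit per program

# Example with random token/byte sequences standing in for paired program representations.
model = LateFusionFlawDetector()
logits = model(torch.randint(0, 5000, (8, 200)), torch.randint(0, 256, (8, 400)))
```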
Resistive heating in an electrified domain with a spherical inclusion: an ALEGRA verification study
Rodriguez, Angel E.; Siefert, Christopher S.; Niederhaus, John H.
A verification study is conducted for the ALEGRA software using the problem of an electrified medium with a spherical inclusion, paying special attention to resistive heating. We do so by extending an existing analytic solution for this problem to include both conducting and insulating inclusions, and we examine the effects of mesh resolution and mesh topology, considering both body-fitted and rectangular meshes containing mixed cells. We present observed rates of convergence with respect to mesh refinement for four electromagnetic quantities: electric potential, electric field, current density, and Joule power.
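For context, the observed rate of convergence reported in verification studies of this kind is conventionally computed from error norms on two successive mesh refinements; this is the general definition, not a formula quoted from this report.

```latex
% e_h: error norm on a mesh with characteristic spacing h.
\[
p_{\mathrm{obs}} = \frac{\log\left(e_{h_{\mathrm{coarse}}} / e_{h_{\mathrm{fine}}}\right)}
                        {\log\left(h_{\mathrm{coarse}} / h_{\mathrm{fine}}\right)}.
\]
```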
Partitioning of Complex Fluids at Mineral Surfaces
Greathouse, Jeffery A.; Long, Daniel M.; Xu, Guangping X.; Yoon, Hongkyu Y.; Kim, Iltai; Jungjohann, Katherine L.
This report summarizes the results obtained during the LDRD project entitled "Partitioning of Complex Fluids at Mineral Interfaces." This research addressed fundamental aspects of such interfaces, which are relevant to energy-water applications in the subsurface, including fossil energy extraction and carbon sequestration. The project directly addresses the problem of selectivity of complex fluid components at mineral-fluid interfaces, where complex fluids are defined as mixtures of hydrophobic and hydrophilic components (e.g., water, aqueous ions, and polar/nonpolar organic compounds). Specifically, the project investigates how adsorption selectivity varies with surface properties and fluid composition. Both experimental and molecular modeling techniques were used to better understand trends in surface wettability on mineral surfaces. The experimental techniques spanned the macroscale (contact angle measurements) to the nanoscale (cryogenic electron microscopy and vibrational spectroscopy). We focused on an anionic surfactant and a well-characterized mineral phase representative of clay phases present in oil- and gas-producing shale deposits. Collectively, the results consistently demonstrate that the presence of surfactant in the aqueous fluid significantly affects the mineral-fluid interfacial structure. Experimental and molecular modeling results reveal details of the surfactant structure at the interface, and how this structure varies with surfactant coverage and fluid composition.
Understanding Microstructural Effects on Dynamic Performance Towards the Development of Shock Metamaterials
Branch, Brittany A.; Specht, Paul E.; Ruggles, Timothy R.; Moore, David G.; Jared, Bradley H.
With recent advances in additive manufacturing (AM), long-range periodic lattice assemblies with unique geometric and topological structures are being developed as vibration and shock mitigation components for aerospace and military applications. There has been extensive work on understanding the static properties associated with the varying topology of these lattice architectures, but there is almost no understanding of microstructural effects in such structures under high-strain-rate dynamic loading conditions. Here we report the shock behavior of lattices with varying intrinsic grain structures achieved by post-process annealing. High-resolution 316L stainless steel lattices were 3D printed on a laser powder bed fusion machine and characterized by computed tomography. Subsequent annealing produced stress-relieved and recrystallized lattices. Overall, the lattices had a strong cubic texture aligned with the x-, y-, and z-directions of the build, with a preference outside the build direction (z). The recrystallized sample had more equiaxed polygonal grains and a layer of BCC ferrite, approximately one grain thick, at the surface of the structure. Upon dynamic compression, the as-deposited lattice showed steady compaction behavior, while the heat-treated lattices exhibited negative velocity behavior indicative of failure. We attribute this to the stiffer BCC ferrite in the annealed lattices becoming damaged and fragmenting during compression.
Arctic Coastal Erosion: Modeling and Experimentation
Bull, Diana L.; Bristol, Emily M.; Brown, Eloise; Choens, Robert C.; Connolly, Craig T.; Flanary, Christopher; Frederick, Jennifer M.; Jones, Benjamin M.; Jones, Craig A.; Ward Jones, Melissa; Mcclelland, James W.; Mota, Alejandro M.; Kalashnikova, Irina
Increasing Arctic coastal erosion rates have put critical infrastructure and native communities at risk while also mobilizing ancient organic carbon into modern carbon cycles. Although the Arctic comprises one-third of the global coastline and has some of the fastest eroding coasts, current tools for quantifying permafrost erosion are unable to explain the episodic, storm-driven erosion events. Our approach, mechanistically coupling oceanographic predictions with a terrestrial model to capture the thermo-mechanical dynamics of erosion, enables this much needed treatment of transient erosion events. The Arctic Coastal Erosion (ACE) Model consists of oceanographic and atmospheric boundary conditions that force a coastal terrestrial permafrost environment modeled in Albany (a multi-physics-based finite element code). An oceanographic modeling suite (consisting of WAVEWATCH III, Delft3D-FLOW, and Delft3D-WAVE) produced time-dependent surge and run-up boundary conditions for the terrestrial model. In the terrestrial model, a coupling framework unites the mechanical and thermal aspects of erosion: 3D stress/strain fields develop in response to a plasticity model of the permafrost that is controlled by the frozen water content, which in turn is determined by modeling 3D heat conduction and solid-liquid phase change. This modeling approach allows failure to arise from any allowable deformation (block failure, slumping, etc.). Extensive experimental work has underpinned the ACE Model development, including field campaigns to measure in situ ocean and erosion processes, strength properties derived from thermally driven geomechanical experiments, and extensive physical composition and geochemical analyses. Combined, this work offers the most comprehensive and physically grounded treatment of Arctic coastal erosion available in the literature. The ACE Model and experimental results can be used to inform scientific understanding of coastal erosion processes, contribute to estimates of geochemical and sediment land-to-ocean fluxes, and facilitate infrastructure susceptibility assessments.
Coupling of Laminar-Turbulent Transition with RANS Computational Fluid Dynamics
Wagnild, Ross M.; Fike, Jeffrey A.; Kucala, Alec K.; Krygier, Michael K.; Bitter, Neal
This project combines several new concepts to create a boundary layer transition prediction capability that is suitable for analyzing modern hypersonic flight vehicles. The first new concept is the use of "optimization" methods to detect the hydrodynamic instabilities that cause boundary layer transition; the use of this method removes the need for many limiting assumptions of other methods and enables quantification of the interactions between boundary layer instabilities and the flow field imperfections that generate them. The second new concept is the execution of transition analysis within a conventional hypersonics CFD code, using the same mesh and numerical schemes for the transition analysis and the laminar flow simulation. This feature enables rapid execution of transition analysis with less user oversight required and no interpolation steps needed.
Multi-Axis Resonant Plate Shock Testing Evaluation and Test Specification Development
Sisemore, Carl; Babuska, Vit B.; Flores, Robert X.
Resonant plate testing is a shock test method that is frequently used to simulate pyroshock events in the laboratory. Recently, it was discovered that if the unit under test is installed at an off-center location, a tri-axial accelerometer would record a shock response in three directions and the resulting shock response spectra implied that the test may have qualified the component in three directions simultaneously. The purpose of this research project was to evaluate this idea of multi-axis shock testing to determine if it was truly a multi-axis shock environment and if such a test could be used as an equivalent component qualification test. A study was conducted using generic, additively manufactured components tested on a resonant plate, along with an investigation of plate motion to evaluate the component response to off-center plate excitation. The data obtained here along with the analytical simulations performed indicate that off-center resonant plate tests are actually not three-axis shock tests, but rather single axis shocks at an arbitrary angle dictated by the location of the unit under test on the plate. This conclusion is supported by the fact that only one vectored shock input is provided to the component in a resonant plate test. Thus, the output response is a coupled response of the transverse plate vibration and the rotational motion of the component on the plate. Additionally, a multi-axis shock test defined by three single axis test environments always results in a significant component over-test in one direction.
How Low Can You Go? Using Synthetic 3D Imagery to Drastically Reduce Real-World Training Data for Object Detection
Gastelum, Zoe N.; Shead, Timothy M.
Deep convolutional neural networks (DCNNs) currently provide state-of-the-art performance on image classification and object detection tasks, and there are many global security mission areas where such models could be extremely useful. Crucially, the success of these models is driven in large part by the widespread availability of high-quality open source data sets such as ImageNet, Common Objects in Context (COCO), and KITTI, which contain millions of images with thousands of unique labels. However, exemplar images of global security-relevant objects of interest can be difficult to obtain: relevant events are low frequency and high consequence; the content of relevant images is sensitive; and adversaries and proliferators seek to obscure their activities. For these cases where exemplar data is hard to come by, even fine-tuning an existing model with available data can be effectively impossible. Recent work demonstrated that models can be trained using a combination of real-world and synthetic images generated from 3D representations; that such models can exceed the performance of models trained using real-world data alone; and that the generated images need not be perfectly realistic (Tremblay, et al., 2018). However, this approach still required hundreds to thousands of real-world images for training and fine-tuning, which for sparse, global security-relevant datasets can be an unrealistic hurdle. In this research, we validate the performance and behavior of DCNN models as we drive the number of real-world images used for training object detection tasks down to a minimal set. We perform multiple experiments to identify the best approach to train DCNNs from an extremely small set of real-world images. In doing so, we: develop state-of-the-art, parameterized 3D models based on real-world images and sample from their parameters to increase the variance in synthetic image training data; use machine learning explainability techniques to highlight and correct, through targeted training, the biases that result from training using completely synthetic images; and validate our results by comparing the performance of the models trained on synthetic data to one another and to a control model created by fine-tuning an existing ImageNet-trained model with a limited number (hundreds) of real-world images.
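The control condition mentioned above (fine-tuning an ImageNet-trained model with a limited number of real-world images) follows a standard transfer learning recipe. The sketch below is illustrative only: it uses an image classification head for brevity rather than an object detection head, and the class count and data are placeholders rather than the study's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3  # hypothetical number of object classes

# Load an ImageNet-pretrained backbone (requires a recent torchvision) and freeze it.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new, trainable head for the target classes.
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One fine-tuning step on a (small) batch of real-world images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with dummy tensors standing in for a small labeled batch.
loss = finetune_step(torch.randn(4, 3, 224, 224), torch.randint(0, num_classes, (4,)))
```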
Assessing the Vulnerability of Unmanned Aircraft Systems to Directed Acoustic Energy
The increasingly large payloads of Unmanned Aircraft Systems (UASs) are dramatically increasing the threat to the nuclear enterprise. Current mitigation using RF interference is effective, but it is not feasible for fully autonomous systems and is prohibited in many areas. A new approach to UAS threat mitigation is needed that does not create radio interference but is effective against any type of vehicle. At present there is no commercial counter-UAS system that directly assaults the MEMS gyros and accelerometers in the Inertial Measurement Unit (IMU) on the aircraft, but lab testing has revealed resonances in some IMUs that make them susceptible to moderate-amplitude acoustic monotones. Sandia's energetic materials facility has enabled a quick and thorough exploration of UAS vulnerability to directed acoustic energy by using intense acoustic impulses to destabilize or down a UAS. We have: 1) detonated/deflagrated explosive charges of various sizes; 2) accurately measured impulse pressure and pulse duration; 3) determined what magnitude of acoustic insult to the IMU disrupts flight and for how long; and 4) determined whether the air blast/shock wave on the aircraft/propellers disrupts flight.
FY20Q4 report for ATDM AD projects to ECP [Kokkos, etc.]
Activities, accomplishments, next steps and outreach are reported, primarily related to the Kokkos project.
Summary Report for the NEPA Impact Analysis. Revision 1
Zeitler, Todd Z.; Brunell, Sarah B.; Feng, Lianzhong M.; Kicker, Dwayne C.; Kim, Sungtae K.; Long, Jennifer J.; Rechard, Robert P.; Hansen, Clifford H.; Wagner, Stephen W.
The Waste Isolation Pilot Plant (WIPP), located in southeastern New Mexico, has been developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of defense-related transuranic (TRU) waste. Containment of TRU waste at the WIPP facility is derived from standards set forth in Title 40 of the Code of Federal Regulations (CFR), Part 191. The DOE assesses compliance with the containment standards according to the Certification Criteria in Title 40 CFR Part 194 by means of Performance Assessment (PA) calculations performed by Sandia National Laboratories (SNL). WIPP PA calculations estimate the probability of radionuclide releases from the repository to the accessible environment for a regulatory period of 10,000 years after facility closure. The DOE Carlsbad Field Office (CBFO) has initiated a National Environmental Policy Act (NEPA) action for a proposal to excavate and use additional TRU waste disposal panels at the WIPP facility. This report documents an analysis undertaken as part of an effort to evaluate the potential environmental consequences of the proposed action. Although not explicitly required for a NEPA analysis, evaluations of a dose indicator for hypothetical members of the public after final facility closure are presented in this report. The analysis is carried out in two stages: first, PA calculations quantify the potential releases to the accessible environment over a 10,000-year post-closure period; second, doses are evaluated for three hypothetical exposure pathways using the conservative radionuclide concentrations assumed to be released to the accessible environment.
ISO 55001 Asset Management Gap Analysis - Final Results [Spreadsheet]
Otero, Pete; Foster, Birgitta T.; Clark, Waylon T.; Evans, Christopher A.; Zavadil, John Z.; Michaels, Jeremy M.; Sholtis, Diane; Martinez, Gabriel
A spreadsheet showing the final results from the ISO 55001 Section Alignment ratings is shown, including gap analysis and IAM maturity ratings.
FY20Q4 Report for ATDM AD Projects to ECP [SPARC, etc.]
Sections include: performance to plan, components, exceeds, and lessons learned. Updated areas include: SPARC, UMR, EMPIRE, Panzer.
Review of the Nuclear Energy Agency (NEA) Ancillary Thermodynamic Database (TDB) Volume (DRAFT REV. 0)
Jove Colon, Carlos F.; Sanchez, Amanda C.
The Nuclear Energy Agency (NEA) Ancillary data volume comprises thermodynamic data for mineral and aqueous species that, in addition to the Auxiliary Data (as referred to in previous NEA thermodynamic data volumes), are necessary for calculations of chemical interactions relevant to radioactive waste management and nuclear energy. This SAND report is a review of the NEA Ancillary data volume of critically reviewed thermodynamic data parameters. The review given in this report mainly involves data comparison with other thermodynamic data assessments, analysis of thermodynamic parameters, and examination of data sources. Only new and updated data parameters were considered in this review. Overall, no major inconsistencies or errors were found within the scope of the comparisons conducted in this review. Some remarks were noted, for example, on the consideration of relevant studies and on comparisons bearing on the analysis and retrieval of thermodynamic data parameters not cited in the respective sections.
Pre-Symptomatic COVID Screening
Temperature checks for fever are extensively used for preliminary COVID screening but are ineffective during the incubation stage of infection, when a person is asymptomatic. Researchers at the European Centre for Disease Prevention and Control concluded that approximately 75% of passengers infected with COVID-19 and traveling from affected Chinese cities would not be detected by early screening. Core body temperature is normally kept within a narrow range and has the smallest relative standard deviation of all vital signs. Heat in the body is prioritized around internal organs at the expense of the periphery by controlling blood flow; in fact, blood flow to the skin may vary by a factor of 100 depending on thermal conditions. This adaptation causes rapid temperature fluctuations in different skin regions, driven by changes in cardiac output, metabolism, and likely cytokine diffusion during inflammation, that would not be seen in average core body temperature. Current IR and thermal scanners used for temperature checks are not necessarily reflective of core body temperature and require cautious interpretation, as they frequently result in false positive and false negative diagnoses. Handheld thermometers measure average skin temperatures and can give readings that differ from core body temperature by as much as 7°. Rather than focusing on a core body temperature threshold assessment, we believe that the variability of temperature patterns measured with a novel wearable transdermal microneedle sensor will be more sensitive to infections in the incubation stage. We therefore propose to develop a wearable transdermal temperature sensor using established Sandia microneedle technology for pre-symptomatic COVID screening that can additionally be used to monitor disease progression at later stages.
Predicting Future Disease Burden in a Rapidly Changing Climate
Powell, Amy J.; Kalashnikova, Irina; Davis, Warren L.; Peterson, Kara J.; Rempe, Susan R.; Smallwood, Chuck R.; Roesler, Erika L.
The interplay of a rapidly changing climate and infectious disease occurrence is emerging as a critical topic, requiring investigation of possible direct, as well as indirect, connections between disease processes and climate-related variation and phenomena. First, we introduce and overview three infectious disease exemplars (dengue, influenza, valley fever) representing different transmission classes (insect-vectored, human-to-human, environmentally transmitted) to illuminate the complex and significant interplay between climate and disease processes, as well as to motivate discussion of how Sandia can transform the field and change our understanding of climate-driven infectious disease spread. We also review state-of-the-art epidemiological and climate modeling approaches, together with data analytics and machine learning methods, potentially relevant to climate and infectious disease studies. We synthesize the modeling and disease exemplar information, suggesting initial avenues for research and development (R&D) in this area, and propose potential sponsors for this work. Whether directly or indirectly, it is certain that a rapidly changing climate will alter the global disease burden. The trajectory of climate change is an important control on this burden, from local to regional and global scales. The efforts proposed herein respond to the National Research Council's call for the creation of a multidisciplinary institute that would address critical aspects of these interlocking, cascading crises.
Asynchronous Ballistic Reversible Computing using Superconducting elements
Lewis, Rupert; Missert, Nancy A.; Henry, Michael D.; Frank, Michael P.
Computing uses energy. At the bare minimum, erasing information in a computer increases entropy: Landauer calculated that approximately kBT ln(2) joules are dissipated per bit of information erased. While the success of Moore's law has allowed increasing computing power and efficiency for many years, these improvements are coming to an end. This project asks whether there is a way to continue those gains by circumventing Landauer's limit through reversible computing. We explore a new reversible computing paradigm, asynchronous ballistic reversible computing (ABRC). The ballistic nature of data in ABRC matches well with superconductivity, which provides a low-loss environment and a quantized bit encoding, the fluxon. We discuss both of these as well as our development of a superconducting fabrication process at Sandia. We describe a fully reversible 1-bit memory cell based on fluxon dynamics. Building on this model, we propose several other gates which may also offer reversible operation.
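For scale, the Landauer bound referenced above evaluates to the following energies per erased bit (computed from k_B = 1.38 x 10^-23 J/K; the 4 K case is shown as a representative superconducting operating temperature, not a figure taken from the report):

```latex
% Landauer bound per erased bit, E_min = k_B T ln 2.
\[
E_{\min} = k_B T \ln 2 \approx
\begin{cases}
2.9 \times 10^{-21}\ \mathrm{J}, & T = 300\ \mathrm{K} \text{ (room temperature)},\\
3.8 \times 10^{-23}\ \mathrm{J}, & T = 4\ \mathrm{K} \text{ (typical superconducting operation)}.
\end{cases}
\]
```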
Rapid Assessment of Autoignition Propensity in Novel Fuels and Blends
Sheps, Leonid S.; Buras, Zachary B.; Zador, Judit Z.; Au, Kendrew; Safta, Cosmin S.
We developed a computational strategy to correlate bulk combustion metrics of novel fuels and blends in the low-temperature autoignition regime with measurements of key combustion intermediates in a small-volume, dilute, high-pressure reactor. We used neural net analysis of a large simulation dataset to obtain an approximate correlation and proposed experimental and computational steps needed to refine such a predictive correlation. We also designed and constructed a high-pressure laboratory apparatus to conduct the proposed measurements and demonstrated its performance on three canonical fuels: n-heptane, i-octane, and dimethyl ether.
A Quantum Analog Coprocessor for Correlated Electron Systems Simulation
Baczewski, Andrew D.; Brickson, Mitchell I.; Campbell, Quinn C.; Jacobson, Noah T.; Maurer, Leon
Analog quantum simulation is an approach for studying physical systems that might otherwise be computationally intractable to simulate on classical high-performance computing (HPC) systems. The key idea behind analog quantum simulation is the realization of a physical system with a low-energy effective Hamiltonian that is the same as the low-energy effective Hamiltonian of some target system to be studied. Purpose-built nanoelectronic devices are a natural candidate for implementing the analog quantum simulation of strongly correlated materials that are otherwise challenging to study using classical HPC systems. However, realizing devices that are sufficiently large to study the properties of a non-trivial material system (e.g., those described by a Fermi-Hubbard model) will eventually require the fabrication, control, and measurement of at least O(10) quantum dots, or other engineered quantum impurities. As a step toward large-scale analog or digital quantum simulation platforms based on nanoelectronic devices, we propose a new approach to analog quantum simulation that makes use of the large Hilbert space dimension of the electronic baths that are used to adjust the occupancy of one or a few engineered quantum impurities. This approach to analog quantum simulation allows us to study a wide array of quantum impurity models. We can further augment the computational power of such an approach by combining it with a classical computer to facilitate dynamical mean-field theory (DMFT) calculations. DMFT replaces the solution of a lattice problem with the solution of a family of localized impurity problems with bath couplings that are adjusted to satisfy a self-consistency condition between the two models. In DMFT, the computationally challenging task is the high-accuracy solution of an instance of a quantum impurity model that is determined self-consistently in coordination with a mean-field calculation. We propose using one or a few engineered quantum impurities with adjustable couplings to baths to realize an analog quantum coprocessor that effects the solution of such a model through measurements of a physical quantum impurity, operating in coordination with a classical computer to achieve a self-consistent solution to a DMFT calculation. We focus on implementation details relevant to a number of technologies for which Sandia has design, fabrication, and measurement expertise. The primary technical advances outlined in this report concern the development of a supporting modeling capability. As with all analog quantum simulation platforms, the successful design and operation of individual devices depend critically on one's ability to predict the effective low-energy Hamiltonian governing their dynamics. Our project has made this possible and lays the foundation for future experimental implementations.
Handheld Biosensor for COVID-19 Screening
Branch, Darren W.; Hayes, Dulce C.
We have made significant progress toward the development of an integrated nucleic acid amplification system for Autonomous Medical Devices Incorporated's (AMDI's) Optikus handheld diagnostic device. In this effort, we developed a set of loop-mediated isothermal amplification (LAMP) primers for SARS-CoV-2 and then demonstrated amplification directly on a surface acoustic wave (SAW) sensor. We built the associated hardware and developed C code to control the amplification process. The goal of this project was to develop a nucleic acid amplification assay that is compatible with SAW sensors to enable both nucleic acid and serological testing in a single handheld diagnostic device. Toward this goal, AMDI is collaborating with Sandia National Laboratories to develop a rapid, portable diagnostic screening device that utilizes Sandia's unique SAW biosensor for COVID-19 detection. Previously, the Sandia-AMDI SAW sensor has successfully detected multiple high-profile bacteria and viruses, including Ebola, HIV, Sin Nombre, and anthrax. Over the last two years, AMDI and Sandia have significantly improved the sensitivity and detection capability of the SAW biosensor and have also developed a modular, handheld, portable platform called the Optikus, which uses CD microfluidics and handheld instrumentation to automate all sample preparation, reagent introduction, sample delivery, and measurement for a number of different assay targets. We propose to use this platform for the development of a rapid (<30 minutes), point-of-care diagnostic test for detection of COVID-19 from nasal swab samples.
Neuromorphic scaling advantages for energy-efficient random walk computations
Smith, John D.; Hill, Aaron J.; Reeder, Leah; Franke, Brian C.; Lehoucq, Richard B.; Parekh, Ojas D.; Severa, William M.; Aimone, James B.
Computing stands to be radically improved by neuromorphic computing (NMC) approaches inspired by the brain's incredible efficiency and capabilities. Most NMC research, which aims to replicate the brain's computational structure and architecture in man-made hardware, has focused on artificial intelligence; however, less explored is whether this brain-inspired hardware can provide value beyond cognitive tasks. We demonstrate that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well-suited to implementing random walks via discrete-time Markov chains. Such random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Additionally, we show how the mathematical basis for a probabilistic solution involving a class of stochastic differential equations can leverage those simulations to provide solutions for a range of broadly applicable computational tasks. Despite being at an early development stage, we find that NMC platforms, at a sufficient scale, can drastically reduce the energy demands of high-performance computing platforms.
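A conventional (non-neuromorphic) version of the computation described above, Monte Carlo simulation of a discrete-time Markov chain, can be sketched in a few lines for comparison. The walk below is a simple 1-D chain with absorbing ends and illustrative parameters, not a problem instance taken from the report.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, start, p_right = 21, 10, 0.5     # 1-D chain with absorbing states 0 and n_states - 1
n_walkers, max_steps = 20_000, 100_000

absorbed_right = 0
for _ in range(n_walkers):
    state = start
    for _ in range(max_steps):
        state += 1 if rng.random() < p_right else -1   # one Markov-chain transition
        if state == 0:
            break                                      # absorbed at the left boundary
        if state == n_states - 1:
            absorbed_right += 1                        # absorbed at the right boundary
            break

# For a symmetric walk the exact absorption probability is start / (n_states - 1) = 0.5.
print("estimated P(absorb right) =", absorbed_right / n_walkers)
```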
Bioscience COVID Rapid Response Report
The COVID-19 disease outbreak and its impact on global health and economies have highlighted the national security threat posed by pathogens with pandemic potential and the need for rapid development of effective diagnostics and medical countermeasures. The Bioscience IA selected for funding rapid COVID LDRD project proposals that addressed critical R&D gaps in pandemic response and could be accomplished in 1-3 months with the requested funding. In total, the Bioscience IA funded nine rapid projects that addressed 1) rapid and accurate methods for SARS-CoV-2 RNA detection, 2) modeling tools to help prioritize populations for diagnostic testing, 3) bioinformatic tools to track SARS-CoV-2 genomic sequence changes over time, 4) molecular inhibitors of SARS-CoV-2 cellular infection, and 5) a method for rapid staging of COVID-19 disease to enable administration of more effective treatments. In addition, LDRD funded one larger project, to be completed in FY21, that leverages Sandia capabilities to address the need for platform diagnostics and therapeutics that can be rapidly tailored against emerging pathogen targets.
Experimental and Theoretical Studies of Ultrafast Vibrational Energy Transfer Dynamics in Energetic Materials
Ramasesha, Krupa R.; Wood, Mitchell A.; Cole-Filipiak, Neil C.; Knepper, Robert
Energy transfer through anharmonically coupled vibrations influences the earliest chemical steps in shockwave-induced detonation of energetic materials. A mechanistic description of vibrational energy transfer is therefore necessary to develop predictive models of energetic material behavior. We performed transient broadband infrared spectroscopy on timescales of hundreds of femtoseconds to hundreds of picoseconds, as well as density functional theory and molecular dynamics simulations, to investigate the evolution of vibrational energy distribution in thin-film samples of pentaerythritol tetranitrate (PETN), 1,3,5-trinitroperhydro-1,3,5-triazine (RDX), and 2,4,6-triamino-1,3,5-trinitrobenzene (TATB). Experimental results show dynamics on multiple timescales, providing strong evidence for coupled vibrations in these systems, as well as material-dependent evolution on timescales of tens to hundreds of picoseconds. Theoretical results also reveal pathways and distinct timescales for energy transfer through coupled vibrations in the three investigated materials, providing further insight into the mechanistic underpinnings of energy transfer dynamics and energetic material sensitivity.
Noise Erasure in Quantum-Limited Current Amplifiers
Harris, Charles T.; Lu, Tzu-Ming L.; Bethke, Donald T.; Lewis, Rupert; Skinner Ramos, Sueli D.
Superconducting quantum interference devices (SQUIDs) are extraordinarily sensitive to magnetic flux and thus make excellent current amplifiers for cryogenic applications. One such application of high interest to Sandia is the set-up and state read-out of quantum dot based qubits, where a qubit state is read out from a short current pulse (microseconds to milliseconds long) of approximately 100 pA, a signal that is easily corrupted by noise in the environment. A parametric SQUID amplifier can offer high bandwidth (in the GHz range) and low power dissipation (less than 1 pW), and it can be easily incorporated into multi-qubit systems. In this SAIL LDRD, we will characterize the noise performance of the parametric amplifier front end, the SQUID, in an architecture specific to current readout for spin qubits. Noise is a key metric in amplification, and identifying noise sources will allow us to optimize the system to reduce their effects, resulting in higher fidelity readout. This effort represents a critical step in creating the building blocks of a high-speed, low-power, parametric SQUID current amplifier that will be needed in the near term as quantum systems with many qubits begin to come online in the next few years.
Efficient Scalable Tomography of Many-Qubit Quantum Processors
Quantum computing has the potential to realize powerful and revolutionary applications. A quantum computer can, in theory, solve certain problems exponentially faster than its classical counterparts. The current state-of-the-art devices, however, are too small and noisy to practically realize this goal. An important tool for the advancement of quantum hardware, called model-based characterization, seeks to learn what types of noise are exhibited in a quantum processor. This technique, however, is notoriously difficult to scale up to even modest numbers of qubits and until now has been limited to just 2 qubits. In this report, we present a novel method for performing model-based characterization, or tomography, on a many-qubit quantum processor. We consider up to 10 qubits, but the technique is expected to scale to even larger systems.
A Multi-Instance learning Framework for Seismic Detectors
Ray, Jaideep R.; Wang, Fulton W.; Young, Christopher J.
In this report, we construct and test a framework for fusing the predictions of an ensemble of seismic wave detectors. The framework is drawn from multi-instance learning and is meant to improve the predictive skill of the ensemble beyond that of the individual detectors. We show how the framework allows the use of multiple features derived from the seismogram to detect seismic wave arrivals, as well as how it allows only the most informative features to be retained in the ensemble. The computational cost of the "ensembling" method is linear in the size of the ensemble, allowing a scalable method for monitoring multiple features/transformations of a seismogram. The framework is tested on teleseismic and regional P-wave arrivals at the IMS (International Monitoring System) station at Warramunga, NT, Australia, and at the PNSU station in the University of Utah's monitoring network.
Applying Compression-Based Metrics to Seismic Data in Support of Global Nuclear Explosion Monitoring
Matzen, Laura E.; Ting, Christina T.; Field, Richard V.; Morrow, J.D.; Brogan, Ronald; Young, Christopher J.; Zhou, Angela; Trumbo, Michael C.; Coram, Jamie L.
The analysis of seismic data for evidence of possible nuclear explosion testing is a critical global security mission that relies heavily on human expertise to identify and mark seismic signals embedded in background noise. To assist analysts in making these determinations, we adapted two compression distance metrics for use with seismic data. First, we demonstrated that the Normalized Compression Distance (NCD) metric can be adapted for use with waveform data and can identify the arrival times of seismic signals. Then we tested an approximation for the NCD called Sliding Information Distance (SLID), which can be computed much faster than NCD. We assessed the accuracy of the SLID output by comparing it to both the Akaike Information Criterion (AIC) and the judgments of expert seismic analysts. Our results indicate that SLID effectively identifies arrival times and provides analysts with useful information that can aid their analysis process.
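The NCD metric adapted in this work has a standard definition that is easy to compute with an off-the-shelf compressor. The sketch below is a generic illustration using zlib on quantized synthetic waveforms; it is not the adaptation, the SLID approximation, or the seismic data used in the study.

```python
import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the length of the zlib-compressed input."""
    cx, cy = len(zlib.compress(x, 9)), len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

def quantize(w: np.ndarray) -> bytes:
    """Map a 1-D waveform onto 8-bit samples so it can be fed to a byte-level compressor."""
    scaled = (w - w.min()) / (w.max() - w.min() + 1e-12)
    return (scaled * 255).astype(np.uint8).tobytes()

# Synthetic example: compare a noise-only window against another noise window and
# against a window containing a sinusoidal "arrival" riding on the noise.
rng = np.random.default_rng(0)
noise_a, noise_b = rng.normal(size=1000), rng.normal(size=1000)
arrival = rng.normal(size=1000) + 5.0 * np.sin(0.2 * np.arange(1000))

print("noise vs. noise  :", round(ncd(quantize(noise_a), quantize(noise_b)), 3))
print("noise vs. arrival:", round(ncd(quantize(noise_a), quantize(arrival)), 3))
```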
Simulation Analysis of Geometry and Material Effects for Dropkinson Bar
Brif, Constantin B.; Stershic, Andrew J.
The reported research is motivated by the need to address a key issue affecting the Dropkinson bar apparatus. This unresolved issue is the interference of the stress wave reflected from the bar-beam boundary with the measurement of the stress-strain response of a material tested in the apparatus. The purpose of the wave beam that is currently connected to the bar is to dissipate the stress wave, but the portion of the wave reflected from the bar-beam boundary is still significant. First, we focused on understanding which parameters affect the reflected wave's arrival time at a strain gauge. Specifically, we used finite-element numerical simulations with the Sierra/SM module to study the effects of various bar-beam connection fixities, alternative wave beam materials, and alternative geometries of the Dropkinson bar system based on a monolithic design. The conclusion of this study is that a partial reflection always occurs at the bar-beam boundary (or, for a monolithic design, at a point where the bar geometry changes). Therefore, given a fixed total length of the bar, it is impossible to increase the reflected wave's arrival time by any significant amount. After reaching this conclusion, we focused instead on trying to minimize the energy of the reflected stress wave circulating up and down through the bar over a relatively long period of time (10 ms). Once again, we used numerical simulations with the Sierra/SM module to investigate the effects of various bar-beam connection fixities, alternative wave beam materials, and parameters of an asymmetric monolithic design of the bar-and-beam system. This study demonstrated that various parameters can significantly affect the energy of the wave reflections, with the difference between best and worst configurations being about one order of magnitude in terms of energy. Based on the obtained results, we conclude with concrete takeaways for Dropkinson bar users and propose potential directions for future research and optimization.
Conditional Generative Adversarial Networks for Solving Heat Transfer Problems
Martinez, Matthew T.; Heiner, Olivia N.
Generative Adversarial Networks (GANs) have been used as a deep learning approach to solving physics and engineering problems. Using deep learning for these problems is attractive in that reasonably accurate models can be inferred from raw data alone, eliminating the need to define the exact physical equations governing a problem. We expand on previous work using GANs to generate steady-state solutions to the two-dimensional heat equation. Using a basic conditional GAN (cGAN), we generate accurate solutions for rectangular domains conditioned on four edge boundary conditions (MAE < 0.5%). For finding steady-state solutions over arbitrary two-dimensional domains (not constrained to rectangles), we use a cGAN designed for image-to-image translation. We train this GAN on various types of geometric domains (circles, squares, triangles, and shapes with one circular or rectangular hole), achieving accurate results on test data made up of geometries similar to those in training (MAE < 1%). For both of these GANs, we experiment with different loss function terms, showing that a term using the gradients of the solution images significantly improves the basic cGAN but not the image-to-image GAN. Lastly, we show that the image-to-image GAN performs poorly when applied to two-dimensional geometries that vary in structure from the training data (MAE < 8% for shapes with multiple holes or differently shaped holes). This demonstrates the cGAN's lack of generalizability. While the cGAN is an accurate and computationally efficient method when trained and tested on similarly structured data, it is a much less reliable method when applied to data that differs slightly in structure from the training data.
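The gradient-based loss term described above can be illustrated with finite differences of the solution images. The sketch below is a generic formulation with placeholder weighting coefficients (lam_pixel, lam_grad), not the exact loss or weights used in the report.

```python
import torch
import torch.nn.functional as F

def gradient_loss(pred, target):
    """L1 difference between finite-difference gradients of predicted and reference
    solution images; pred and target have shape (batch, 1, H, W)."""
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dx_t = target[..., :, 1:] - target[..., :, :-1]
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dy_t = target[..., 1:, :] - target[..., :-1, :]
    return F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)

def generator_loss(adversarial_term, pred, target, lam_pixel=100.0, lam_grad=10.0):
    """Hypothetical combined generator objective: adversarial term plus pixel-wise
    and gradient-based reconstruction terms (weights are placeholders)."""
    return adversarial_term + lam_pixel * F.l1_loss(pred, target) + lam_grad * gradient_loss(pred, target)

# Example with dummy tensors standing in for generated and reference temperature fields.
pred, target = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(generator_loss(torch.tensor(0.7), pred, target))
```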
Mosaics, The Best of Both Worlds: Analog devices with Digital Spiking Communication to build a Hybrid Neural Network Accelerator
Aimone, James B.; Bennett, Christopher H.; Cardwell, Suma G.; Dellana, Ryan A.; Xiao, Tianyao X.
Neuromorphic architectures have seen a resurgence of interest in the past decade owing to 100x-1000x efficiency gains over conventional von Neumann architectures. Digital neuromorphic chips like Intel's Loihi have shown efficiency gains compared to GPUs and CPUs and can be scaled to build larger systems. Analog neuromorphic architectures promise even further savings in energy efficiency, area, and latency than their digital counterparts. Neuromorphic analog and digital technologies provide both low-power and configurable acceleration of challenging artificial intelligence (AI) algorithms. We present a hybrid analog-digital neuromorphic architecture that can amplify the advantages of both high-density analog memory and spike-based digital communication while mitigating the limitations of each approach.
Hydrogen Risk Assessment Models (HyRAM) (Version 3.0 Technical Reference Manual)
Ehrhart, Brian D.; Hecht, Ethan S.
The HyRAM software toolkit provides a basis for conducting quantitative risk assessment and consequence modeling for hydrogen infrastructure and transportation systems. HyRAM is designed to facilitate the use of state-of-the-art science and engineering models to conduct robust, repeatable assessments of hydrogen safety, hazards, and risk. HyRAM includes generic probabilities for hydrogen equipment failures, probabilistic models for the impact of heat flux on humans and structures, and computationally and experimentally validated first-order models of hydrogen release and flame physics. HyRAM integrates deterministic and probabilistic models for quantifying accident scenarios, predicting physical effects, characterizing hydrogen hazards (thermal effects from jet fires, overpressure effects from deflagrations), and assessing impacts on people and structures. HyRAM is developed at Sandia National Laboratories for the U.S. Department of Energy to increase access to technical data about hydrogen safety and to enable the use of that data to support development and revision of national and international codes and standards. HyRAM is research software in active development, and thus the models and data may change; this report will be updated at appropriate developmental intervals. This document provides a description of the methodology and models contained in HyRAM version 3.0. HyRAM 3.0 adds the ability to model cryogenic hydrogen releases from liquid hydrogen systems, using a different property calculation method and different equations of state. Other changes include modifications to the ignition probability calculations, component leak frequency calculations, and the addition of default impulse data.
Tuning the critical Li intercalation concentrations for MoX2 bilayer phase transitions using classical and machine learning approaches
Spataru, Dan C.; Witman, Matthew; Jones, Reese E.
Transition metal dichalcogenides (TMDs) such as MoX2 are known to undergo a structural phase transformation as well as a change in electronic conductivity upon Li intercalation. These properties make them candidates for charge-tunable ion-insertion materials that could be used in electrochemical devices for neuromorphic computing applications. In this work we study the phase stability and electronic structure of Li-intercalated bilayer MoX2 with X = S, Se, or Te. Using first-principles calculations in combination with classical and machine learning modeling approaches, we find that the energy needed to stabilize the conductive phase decreases with increasing atomic mass of the chalcogen atom X. A similar decreasing trend is found for the threshold Li concentration at which the structural phase transition takes place. While the electronic conductivity increases with increasing ion concentration at low concentrations, we do not observe a conductivity jump at the phase transition point.
Measuring and Extracting Activity from Time Series Data
Stracuzzi, David J.; Peterson, Matthew G.; Popoola, Gabriel A.
This report summarizes the results of an LDRD focused on developing and demonstrating statistically rigorous methods for analyzing and comparing complex activities from remote sensing data. Identifying activity from remote sensing data, particularly those that play out over time and span multiple locations, often requires extensive manual effort because of the variety of features that describe the activity and the required domain expertise. Our results suggest that there are some hidden challenges in extracting and representing activities in sensor data. In particular, we found that the variability in the underlying behaviors can be difficult to overcome statistically, and the report identifies several examples of the issue. We discuss key lessons learned in the context of the project, and finally conclude with recommendations on next steps and future work.
Language Independent Static Analysis (LISA)
Ghormley, Douglas P.; Reedy, Geoffrey E.; Landin, Kirk T.
Software is becoming increasingly important in nearly every aspect of global society and therefore in nearly every aspect of national security as well. While there have been major advancements in recent years in formally proving properties of program source code during development, such approaches are still in the minority among development teams, and the vast majority of code in this software explosion is produced without such properties. In these cases, the source code must be analyzed in order to establish whether the properties of interest hold. Because of the volume of software being produced, automated approaches to software analysis are necessary to meet the need. However, this software boom is not occurring in just one language. There are a wide range of languages of interest in national security spaces, including well-known languages such as C, C++, Python, Java, Javascript, and many more. But recent years have produced a wide range of new languages, including Nim (2008), Go (2009), Rust (2010), Dart (2011), Kotlin (2011), Elixir (2011), Red (2011), Julia (2012), Typescript (2012), Swift (2014), Hack (2014), Crystal (2014), Ballerina (2017), and more. Historically, automated software analyses have been implemented as tools that intermingle the analysis question at hand with target-language dependencies throughout their code, making re-use of components for different analysis questions or different target languages impractical. This project seeks to explore how mission-relevant, static software analyses can be designed and constructed in a language-independent fashion, dramatically increasing the reusability of software analysis investments.
Sandia National Laboratories Early Career University Faculty Mentoring Program in International Safeguards
Solodov, Alexander A.; Peter-Stein, Natacha P.; Hartig, Kyle C.; Padilla, Eduardo A.; Di Fulvio, Angela; Shoman, Nathan
Recent years have seen a significantly increased focus on knowledge retention and mentoring of junior staff within the U.S. national laboratory complex. To involve the university community in this process as well, an international safeguards mentoring program was established by Sandia National Laboratories (SNL) for early career university faculty. After a successful experience during 2019, the program continued into 2020 with two new faculty members, who were paired with SNL subject matter experts based on the topics of their individual projects: one working on advanced laboratory work in the physics, technology, and policy of nuclear safeguards and nonproliferation, and the other on machine learning applied to international safeguards and nonproliferation. The program has a two-pronged purpose: fostering the development of educational resources for international safeguards and exploring new research topics stemming from the mentor-mentee exchange. Further, the program allows junior faculty members to establish and expand a professional network within international safeguards, and programs such as this build stronger connections between the academic and national laboratory communities. Because the junior faculty members now have new connections into the laboratory community and the potential for future collaborative projects with the laboratories, safeguards knowledge can grow well beyond individual student engagement through this new and efficient avenue.
Characterizing Shielded Special Nuclear Material by Neutron Capture Gamma-Ray Multiplicity Counting
O'Brien, Sean; Hamel, Michael C.
We present a new neutron multiplicity counting analysis and measurement method for neutron-shielded fissile material using neutron-capture gamma rays. Neutrons absorbed in shielding produce characteristic gamma rays that preserve the otherwise lost neutron multiplicity signature. Neutron multiplicity counting provides estimates of fission parameters, such as neutron leakage multiplication, spontaneous fissioner (e.g. Pu-240) mass, and (α,n) ratio. Standard neutron multiplicity counting can incorporate the new neutron-capture gamma-ray multiplicity counting technique to characterize previously degenerate or intractable source configurations by maximizing the multiplicity signature. The new method decouples neutron source-detector interferences, such as reflection and thermalization time in the detector, that could improve measurements of the mean neutron lifetime. We also develop a detector prototype for the multiplicity counting of neutron-capture gamma rays and present detector design considerations, such as detection material and shielding, to optimize the detection of the 2.2 MeV hydrogen capture gamma ray. We simulate the prototype neutron-capture gamma-ray multiplicity counter against the BeRP ball in polyethylene shells to inform future measurements.
Time Series Dimension Reduction for Surrogate Models of Port Scanning Cyber Emulations
Laros, James H.; Swiler, Laura P.; Pinar, Ali P.
Surrogate model development is a key resource in the scientific modeling community, providing computational expedience when simulating complex systems without a great loss of fidelity. The initial step in developing a surrogate model is identifying the primary governing components of the system. Principal component analysis (PCA) is a widely used data science technique for inspecting such driving factors when the objective is to capture the greatest sources of variance inherent to a dataset. Although an efficient linear dimension reduction tool, PCA makes the fundamental assumption that the data are continuous and normally distributed, and thus performs best when these conditions are met. In the cyber emulations that provide realizations of a port scanning scenario, the data to be modeled follow a discrete time series composed of monotonically increasing piece-wise constant steps, and the sources of variance are related to the timing and magnitude of these steps. Therefore, we consider XPCA, an extension of PCA to continuous and discrete random variates. This report documents the trade-offs between the PCA and XPCA linear dimension reduction algorithms for the purpose of identifying the key components of greatest variance in our time series data. These components will ultimately provide the basis for future surrogate models of port scanning cyber emulations.
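A minimal sketch of the baseline idea described above, applying ordinary PCA to synthetic, monotonically increasing piece-wise constant time series; the jump rates and magnitudes are hypothetical stand-ins, not the report's emulation data.

```python
# Illustrative only: PCA on synthetic staircase-like time series to inspect
# how much variance the leading components capture.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_series, n_steps = 200, 500

# Each series: sparse random jump magnitudes accumulated over time, giving a
# monotone piece-wise constant "staircase" similar in shape to scan hit counts.
jumps = rng.poisson(lam=0.05, size=(n_series, n_steps)) * rng.uniform(1, 5, (n_series, n_steps))
series = np.cumsum(jumps, axis=1)

pca = PCA(n_components=10)
scores = pca.fit_transform(series)           # low-dimensional representation
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```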
Generic Spiking Architecture (GenSA)
Rothganger, Fredrick R.; Rodrigues, Arun
Neuromorphic devices are a rapidly growing area of interest in industry, with machines in production by IBM and Intel, among others. These devices promise to reduce size, weight, and power (SWaP) costs while increasing resilience and facilitating high-performance computing (HPC). Each device will favor some set of algorithms, but this relationship has not been thoroughly studied. The field of neuromorphic computing is so new that existing devices were designed with merely estimated use-cases in mind. To better understand the fit between neuromorphic algorithms and machines, a simulated machine can be configured to any point in the design space. This will identify better choices of devices, and perhaps guide the market in new directions. The design of a generic spiking machine generalizes existing examples while also looking forward to devices that have not been built yet. Each parameter is specified, along with the approach or mechanism by which the relevant component is implemented in the simulator.
Designing Catalysts for Dehydrogenation of Methane for Reducing Greenhouse Gas during Natural Gas Extraction
Catalytic conversion of methane (CH4) into useful products is critical for maximizing the utility of natural gas output and for reducing greenhouse gas release associated with flaring (burning off CH4 at natural gas extraction sites). One particularly useful technique is dry reforming of methane (DRM), which involves the chemical reaction of CH4 with carbon dioxide (CO2) to generate carbon monoxide (CO), hydrogen gas (H2), and subsequently other useful products. New and improved catalysts are required to facilitate efficient dry methane reforming. In this report, we apply the Density Functional Theory (DFT) computational technique to investigate a catalyst consisting of small nickel clusters (Ni_n, n < 10) supported on ceria (CeO2 (111)) surfaces. One main thrust of this project is to study the initial CH4 and CO2 reactions with the catalyst. We find that CH4 exhibits barrierless reactive adsorption onto the catalyst; in other words, this step is likely not the rate-determining step. A second thrust is to perform detailed studies of the catalyst itself and examine the role of oxygen vacancies. Using a specific DFT method and a hypothesis about the absence of the Ce(III) redox state, we obtain predictions about oxygen vacancies in good agreement with experimental observations.
Aerial Crosspolarized NQR-NMR: Buried Explosive Detection From a Safe Distance
Nuclear quadrupole resonance is a non-destructive detection and inspection technique with potential as a non-destructive test (NDT) tool. Establishment of the capability opens the door to its use in furthering the mission of the labs. There are many possible uses of the capability: explosive detection and stress/strain detection in epoxies are two of the more obvious and are the main results of this work. Enhancement of the signal-to-noise ratio (SNR) and improvements in the acquisition time of the experiment were key focuses of this work. These were achieved by combining special spin-lock pulse sequences with cross-polarization (CP) schemes to improve the signals with shorter acquisition times. A novel rotating magnetic field device was created to facilitate CP in the field. Implementation of these schemes provided a significant improvement in SNR per unit acquisition time.
Diagnosing Field Strengths and Plasma Conditions in Magnetically Insulated Transmission Lines Using Active Dopant Spectroscopy
Patel, Sonal P.; Hutsel, Brian T.; Steiner, Adam M.; Perea, L.; Jaramillo, Deanna M.
Experimental validation data are needed to inform simulations of large pulsed power devices, which are in development to understand and improve existing accelerators and to inform future pulsed power capabilities. Using current spectroscopic techniques on the Z-machine, we have been unable to reliably diagnose plasma conditions and electric and magnetic fields within power flow regions. Laser ablation of a material produces a low density plasma, resulting in narrow spectroscopic line widths. By introducing a laser-ablated plasma to the anode-cathode gap of the Mykonos accelerator, we can monitor how the line shapes change due to the current pulse by comparing these line shapes to spectral measurements taken without power flow. In this report we show several examples of measurements conducted on Mykonos on various dopant materials. We also show a negligible effect on power flow due to the presence of the ablation plasma for a range of parameters.
Theoretical study of various nonlinear phenomena in plasma systems and scaling of magneto-inertial-fusion targets
Plasma physics is an exciting field of study with a wide variety of nonlinear processes that come into play. Examples of such processes include the interaction of small-scale turbulence with large-scale plasma structures and the nonlinear saturation of plasma instabilities, for example those of magneto-hydrodynamical nature. During this Truman LDRD project, I studied a collection of nonlinear problems that are of interest to the field of plasma physics. This LDRD report summarizes four main research accomplishments. First, a new statistical model for describing inhomogeneous drift-wave turbulence interacting with zonal flows was developed. This new model includes the effects of nonlinear wave-wave collisions, which are expected to change the spectrum of the underlying DW turbulence and therefore the generation of zonal flows. Second, a new mathematical formalism was proposed to systematically apply the nonlinear WKB approximation to general field theories, including those often used in fluid dynamics. This formalism represents an interesting tool for studying physical systems that show an explicit scale separation. Third, a weakly nonlinear model was developed to describe the magneto-Rayleigh-Taylor instability. This instability is of paramount importance to understand as it can reduce the performance of magneto-inertial-fusion (MIF) platforms. The developed model captures the effects of harmonic generation and saturation of the linear growth of the instability. Finally, a framework was proposed for scaling magneto-inertial fusion (MIF) targets to larger pulsed-power drivers. From this framework, a set of scaling rules was derived that conserves the physical regimes of MIF systems when scaling up in peak current. By doing so, deleterious nonlinear processes that affect MIF performance may be kept at bay.
Computational Modeling To Adapt Neutralizing Antibody
Monoclonal antibodies (mAbs) are a leading therapy for viral infections because they provide immediate protection and can be administered at higher levels than occur in a natural immune response. Finding mAbs that neutralize a broad spectrum of viral targets has proven difficult because many species and strains exist, and blanket targeting is a slow and laborious process requiring experimental screening of roughly 10^8 variants. A new method is needed to rapidly redesign mAbs for homologous targets. This project speeds up redesign using structure-based computational design to reduce the mAb search space to a manageable level and to screen mutants at a much higher rate than is possible experimentally. Computation will also provide critical knowledge about the fundamental interactions. The project will adapt S230, a human antibody that neutralizes SARS-CoV, to neutralize SARS-CoV-2.
Neural Network Interatomic Potentials
Saavedra, Gary J.; Thompson, Aidan P.
In this project, we investigate the use of neural networks for the prediction of molecular properties, namely the interatomic potential. We use the machine learning package Tensorflow to build a variety of neural networks and compare their performance with a popular Fortran package, Atomic Energy Networks (aenet). There are two primary goals for this work: (1) use the wide availability of different optimization techniques in Tensorflow to outperform aenet, and (2) use new descriptors that can outperform Behler descriptors.
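As a rough illustration of the approach described above, the following is a minimal Tensorflow sketch of a per-atom network whose outputs are summed to a total configuration energy. The descriptor length, layer sizes, and random inputs are placeholders, not the project's actual descriptors or training setup.

```python
# Sketch only: per-atom energies from precomputed descriptor vectors
# (e.g. Behler-type symmetry functions, assumed already evaluated).
import numpy as np
import tensorflow as tf

n_descriptors = 32          # hypothetical descriptor length per atom

# Per-atom network: descriptor vector -> scalar energy contribution.
atomic_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),
])

def total_energy(descriptors):
    """descriptors: (n_atoms, n_descriptors) array for one configuration."""
    per_atom_energy = atomic_net(descriptors)       # shape (n_atoms, 1)
    return tf.reduce_sum(per_atom_energy)

# Toy usage with random descriptors for a 10-atom configuration.
x = np.random.rand(10, n_descriptors).astype("float32")
print(float(total_energy(x)))
```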
COVID-19 Infection Prevention through Natural Product Molecules
Corbin, William C.; Negrete, Oscar N.; Saada, Edwin A.
This project evaluates natural product molecules with the potential to prevent 2019-nCoV infection. The molecules theoretically work by blocking the ACE2 protein active site in human airways. Previous work focused on modeling candidate natural compounds, but this work examined baicalin, hesperetin, glycyrrhizin, and scutellarin in experimental in vitro studies, which included recombinant protein inhibition assays, cell culture virus inhibition assays, and cytotoxicity assays. The project delivered selectivity indices (a ratio that measures the window between cytotoxicity and antiviral activity) of the four natural compounds that will help guide the direction of SARS-CoV-2 therapeutic development.
Physics-Informed Machine Learning for Epidemiological Models
Martinez, Carianne M.; Jones, Jessica E.; Levin, Drew L.; Trask, Nathaniel A.; Finley, Patrick D.
One challenge of using compartmental SEIR models for public health planning is the difficulty of manually tuning parameters to capture behavior reflected in real-world data. This team conducted initial, exploratory analysis of a novel technique that uses physics-informed machine learning tools to rapidly develop data-driven models for physical systems. This machine learning approach may be used to perform data assimilation for compartment models that account for unknown interactions between geospatial domains (i.e., diffusion processes coupling across neighborhoods, counties, states, etc.). The results presented here are early, proof-of-concept demonstrations of using a physically informed neural network (PINN) model to assimilate data in a compartmental epidemiology model; they show initial success and warrant further research and development.
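A minimal, hypothetical sketch of the PINN idea described above, written in PyTorch and not taken from the team's code: a small network represents the S, E, I, and R compartments as functions of time, and the loss combines the SEIR ODE residuals with a misfit to stand-in infection "observations". The rate parameters and data below are illustrative assumptions.

```python
# Proof-of-concept PINN sketch for a normalized SEIR model (illustrative only).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 4), torch.nn.Softplus(),    # S, E, I, R kept non-negative
)
log_beta = torch.nn.Parameter(torch.tensor(-1.0))   # transmission rate (learned)
sigma, gamma = 1 / 5.2, 1 / 10.0                    # assumed incubation/recovery rates

t_col = torch.linspace(0, 120, 200).reshape(-1, 1).requires_grad_(True)  # collocation times
t_obs = torch.linspace(0, 120, 25).reshape(-1, 1)
i_obs = 0.05 * torch.exp(-((t_obs - 60) / 25) ** 2)                      # stand-in "data"

opt = torch.optim.Adam(list(net.parameters()) + [log_beta], lr=1e-3)

def ode_residual(t):
    y = net(t)
    S, E, I, R = y[:, 0:1], y[:, 1:2], y[:, 2:3], y[:, 3:4]
    dS, dE, dI, dR = [torch.autograd.grad(c, t, torch.ones_like(c), create_graph=True)[0]
                      for c in (S, E, I, R)]
    beta = torch.exp(log_beta)
    rS = dS + beta * S * I
    rE = dE - beta * S * I + sigma * E
    rI = dI - sigma * E + gamma * I
    rR = dR - gamma * I
    return (rS**2 + rE**2 + rI**2 + rR**2).mean()

for step in range(1000):
    opt.zero_grad()
    loss = ode_residual(t_col) + ((net(t_obs)[:, 2:3] - i_obs) ** 2).mean()
    loss.backward()
    opt.step()
print("final loss:", float(loss))
```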
Developing inductively driven diagnostic X-ray sources to enable transformative radiography and diffraction capabilities on Z
Myers, Clayton E.; Gomez, Matthew R.; Lamppa, Derek C.; Webb, Timothy J.; Yager-Elorriaga, David A.; Hutsel, Brian T.; Jennings, Christopher A.; Knapp, Patrick K.; Kossow, Michael R.; Lucero, Larry M.; Obregon, Robert J.; Steiner, Adam M.; Sinars, Daniel S.
Penetrating X-rays are one of the most effective tools for diagnosing high energy density experiments, whether through radiographic imaging or X-ray diffraction. To expand the X-ray diagnostic capabilities at the 26-MA Z Pulsed Power Facility, we have developed a new diagnostic X-ray source called the inductively driven X-pinch (IDXP). This X-ray source is powered by a miniature transmission line that is inductively coupled to fringe magnetic fields in the final power feed. The transmission line redirects a small amount of Z's magnetic energy into a secondary cavity where 150+ kA of current is delivered to a hybrid X-pinch. In this report, we describe the multi-stage development of the IDXP concept through experiments both on Z and in a surrogate setup on the 1 MA Mykonos facility. Initial short-circuit experiments to verify power flow on Z are followed by short-circuit and X-ray source development experiments on Mykonos. The creation of a radiography-quality X-pinch hot spot is verified through a combination of X-ray diode traces, laser shadowgraphy, and source radiography. The success of the IDXP experiments on Mykonos has resulted in the design and fabrication of an IDXP for an upcoming Z experiment that will be the first-ever X-pinch fielded on Z. We have also pursued the development of two additional technologies. First, the extended convolute post (XCP) has been developed as an alternate method for powering diagnostic X-pinches on Z. This concept, which directly couples the current flowing in one of the twelve Z convolute posts to an X-pinch, greatly increases the amount of available current relative to an IDXP (900 kA versus 150 kA). Initial short-circuit XCP experiments have demonstrated the efficacy of power flow in this geometry. The second technology pursued here is the inductively driven transmission line (IDTL) current monitor. These low-current IDTLs seek to measure the current in the final power feed with high fidelity. After three generations of development, IDTL current monitors frequently return cleaner current measurements than the standard B-dot sensors that are fielded on Z. This is especially true on high-inductance experiments where the harshest conditions are created in the final power feed.
Improvements to the New CTH Code Verification & Validation Test Suite (FY2020)
Duncan-Reynolds, Gabrielle C.; Key, Christopher T.
The CTH multiphysics hydrocode, which is used for a wide range of important calculations, has in recent years undertaken an overhaul of its software quality and testing processes. A key part of this effort entailed building a new, robust V&V test suite made up of traditional hydrocode verification problems, such as those listed in the ASC Tri-Lab Test Suite and the Enhanced Tri-Lab Test Suite, as well as validation problems for some of CTH's most frequently used equations of state, materials models, and other key capabilities. Substantial progress toward this goal was made in FY19. In FY20, this test suite has been expanded to include verification and validation tests of the Sesame and JWL equation of state models, the Mader verification problem from the Tri-Lab Test Suite, and the Blake verification problem, a linear elastic analog to the Hunter problem from the Enhanced Tri-Lab Test Suite. This report documents CTH performance on the new test suite problems. Verification test results are compared to analytic solutions and, for most tests, convergence results are presented. Validation test results are compared to experimental data, and mesh refinement studies are included. CTH performs well overall on the new test problems. Convergence rates for the Blake and Mader problems are comparable to those for similar ASC codes. The JWL and Sesame verification tests show good agreement with analytic solutions. Likewise, CTH simulation results show good agreement with experimental validation data for the Sesame and JWL equations of state for the materials tested. Future V&V work will focus on adding tests for other key capabilities such as fracture and high explosive models.
A novel, magnetically driven convergent Richtmyer-Meshkov platform
Physics of Plasmas
In this paper, we introduce a novel experimental platform for the study of the Richtmyer-Meshkov instability in a cylindrically converging geometry using a magnetically driven cylindrical piston. Magnetically driven solid liner implosions are used to launch a shock into a liquid deuterium working fluid and, ultimately, into an on-axis rod with a pre-imposed perturbation. The shock front trajectory is tracked through the working fluid and up to the point of impacting the rod through the use of on-axis photonic Doppler velocimetry. This configuration allows for precise characterization of the shock state as it impacts the perturbed rod interface. Monochromatic x-ray radiography is used to measure the post-shock interface evolution and rod density profile. The ALEGRA MHD model is used to simulate the dynamics of the experiment in one dimension. We show that late in time the perturbation growth becomes nonlinear, as evidenced by the observation of high-order harmonics, up to n = 5. Two-dimensional simulations performed using a combination of the GORGON MHD code and the xRAGE radiation hydrodynamics code suggest that the late-time nonlinear growth is modified by convergence effects as the bubbles and spikes experience differences in the pressure of the background flow.
High pressure/high temperature multiphase simulations of dodecane injection to nitrogen: Application on ECN Spray-A
Fuel
Koukouvinis, Phoevos; Vidal-Roncero, Alvaro; Rodriguez, Carlos; Pickett, Lyle M.
The present work investigates the complex phenomena associated with high-pressure/high-temperature dodecane injection for the Engine Combustion Network (ECN) Spray-A case, employing more elaborate thermodynamic closures to avoid well-known deficiencies in density and speed-of-sound prediction with traditional cubic models. A tabulated thermodynamic approach is proposed here, based on log10(p)-T tables, providing very high accuracy across a large range of pressures, spanning from 0 to 2500 bar, with only a small number of interpolation points. The tabulation approach is directly extensible to any thermodynamic model, existing or to be developed in the future. Here NIST REFPROP properties are used, combined with PC-SAFT vapor-liquid equilibrium to identify the liquid penetration in mixtures, hence avoiding the use of an arbitrary threshold for mass fraction. The identified liquid and vapor penetrations are compared against experimental data from the ECN database, showing good agreement: within approximately 3–8% for axial penetration of liquid, 2% for vapor axial penetration, and within experimental uncertainty for the radial distribution of mass fraction. Analysis of the vortex evolution indicates that the driving mechanisms behind the jet break-up are vortex tilting/stretching, then baroclinic torque, leading to Rayleigh-Taylor instabilities, closely followed by vortex dilation and finally viscous effects.
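A small sketch of the tabulated-property lookup idea described above: a regular grid in (log10 p, T) carries a stored property that is interpolated at query states. The grid bounds and the placeholder "density" below are illustrative and do not use REFPROP or PC-SAFT data.

```python
# Illustrative tabulated lookup in (log10 p, T) with placeholder property values.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log10_p = np.linspace(np.log10(1e3), np.log10(2.5e8), 60)   # ~0.01 bar .. 2500 bar, in Pa
T = np.linspace(300.0, 1200.0, 60)                          # K

# Placeholder property table (ideal-gas-like density), shape (len(log10_p), len(T)).
P, TT = np.meshgrid(10.0**log10_p, T, indexing="ij")
rho_table = P / (287.0 * TT)

rho = RegularGridInterpolator((log10_p, T), rho_table)
print(rho([[np.log10(6e7), 900.0]]))   # query at roughly 600 bar, 900 K
```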
Survey and Assessment of Computational Capabilities for Advanced (Non-LWR) Reactor Mechanistic Source Term Analysis
Clark, Andrew C.; Luxat, David L.; Laros, James H.
A vital part of the licensing process for advanced (non-LWR) nuclear reactor developers in the United States is the assessment of the reactor’s source term, i.e., the potential release of radionuclides from the reactor system to the environment during normal operations and accident sequences. In comparison to source term assessments which follow a bounding approach with conservative assumptions, a mechanistic approach to modeling radionuclide transport, which realistically accounts for transport and retention phenomena, is expected to be used for advanced reactor systems. As the designs of advanced reactors increase in maturity and progress towards licensing, there is a need to advance modeling and simulation capabilities in analyzing the mechanistic source term (MST) of a prospective reactor concept. In the present work, a survey is provided of existing computational capabilities for the modeling of advanced reactor MSTs. The following reactors are considered: high temperature gas reactors (HTGR); molten salt reactors (MSR), which include salt-fueled reactors and fluoride salt-cooled high temperature reactors (FHR); and sodium- and lead-cooled fast reactors (SFR, LFR). A review of relevant codes which may be useful in providing information to MST analyses is also completed, including codes that have been used for source term analyses of LWRs, as well as those being developed for other aspects of advanced reactor system modeling such as reactor physics, thermal hydraulics, and chemistry. A discussion of MST modeling capabilities for each reactor type is provided with additional focus on important phenomena and functional requirements. Additionally, a comprehensive survey is provided of tools for consequence modeling such as atmospheric transport and dispersion (ATD).
Large Bubbles in Vibrated Liquid Are Levitated by Wall Motions
Torczynski, J.R.; Koehler, Timothy P.
Abstract not provided.
Electrically Detected Magnetic Resonance Study of High-Field Stress Induced Si/SiO2 Interface Defects
Moxim, Stephen J.; Lenahan, Patrick M.; Sharov, Fedor V.; Haase, Gad S.; Hughart, David R.
It is widely accepted that the breakdown of SiO2 gate dielectrics is caused by the buildup of stress-induced defects over time. Although several physical mechanisms have been proposed for the generation of these defects, very little direct experimental evidence as to the chemical and physical identity of these defects has been generated in the literature thus far. Here, we present electrically detected magnetic resonance (EDMR) measurements obtained via spin-dependent recombination currents at the interface of high-field stressed Si/SiO2 metal-oxide-semiconductor field effect transistors (MOSFETs).
ISO 35001 Implementation at SNL: Analysis Question Set 2
Abstract not provided.
Graph Theory and IC Component Design Analysis
Proceedings - 2020 3rd International Conference on Artificial Intelligence for Industries, AI4I 2020
Obert, James O.; Turner, Sean D.; Hamlet, Jason H.
Graph analysis in large integrated circuit (IC) designs is an essential tool for verifying design logic and timing via dynamic timing analysis (DTA). IC designs resemble graphs with each logic gate as a vertex and the conductive connections between gates as edges. Using DTA digital statistical correlations, graph condensation, and graph partitioning, it is possible to identify high-entropy component centers and paths within an IC design. Identification of high-entropy component centers (HECC) enables focused DTA, effectively lowering the computational complexity of DTA on large integrated circuit graphs. In this paper, a devised methodology termed IC layout subgraph component center identification (CCI) is described. CCI lowers DTA computational complexity by condensing IC graphs into reduced subgraphs in which dominant logic functions are verified.
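To make the graph-based workflow above concrete, the following hedged sketch (not the CCI implementation itself) treats logic gates as vertices and wires as directed edges, condenses strongly connected components with networkx, and ranks the condensed nodes by betweenness as a crude proxy for candidate component centers. The netlist is invented.

```python
# Illustration of graph condensation on a toy gate-level connectivity graph.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("in0", "g1"), ("in1", "g1"), ("g1", "g2"), ("g2", "g3"),
    ("g3", "g1"),                       # feedback loop -> one SCC {g1, g2, g3}
    ("g3", "g4"), ("g4", "out0"), ("in2", "g4"),
])

C = nx.condensation(G)                  # DAG of strongly connected components
centrality = nx.betweenness_centrality(C)
for node, members in C.nodes(data="members"):
    print(sorted(members), "betweenness:", round(centrality[node], 3))
```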
High-resolution surface topographic change analyses to characterize a series of underground explosions
Remote Sensing of Environment
Schultz-Fellenz, Emily S.; Swanson, Erika M.; Sussman, Aviva J.; Coppersmith, Ryan T.; Kelley, Richard E.; Miller, Elizabeth D.; Crawford, Brandon M.; Lavadie-Bulnes, Anita F.; Cooley, James R.; Townsend, Margaret J.; Larotonda, Jennifer M.
The understanding of subsurface events that cannot be directly observed is dependent on the ability to relate surface-based observations to subsurface processes. This is particularly important for nuclear explosion monitoring, as any future clandestine tests will likely be underground. We collected ground-based lidar and optical imagery using remote, very-low-altitude unmanned aerial system platforms, before and after several underground high explosive experiments. For the lidar collections, we used a terrestrial lidar scanner to obtain high-resolution point clouds and create digital elevation models (DEMs). For the imagery collections, we used structure-from-motion photogrammetry techniques and a dense grid of surveyed ground control points to create high-resolution DEMs. Comparisons between the pre- and post-experiment DEMs indicate changes in surface topography that vary between explosive experiments with varying yield and depth parameters. Our work shows that the relationship between explosive yield and the extent of observable surface change differs from the standard scaled-depth-of-burial model. This suggests that the surface morphological change from underground high explosive experiments can help constrain the experiments' yield and depth, and may impact how such activities are monitored and verified.
Multi-morphology lattices lead to improved plastic energy absorption
Materials and Design
Alberdi, Ryan A.; Dingreville, Remi P.; Robbins, Joshua R.; Walsh, Timothy W.; White, Benjamin C.; Jared, Bradley H.; Boyce, Brad B.
While lattice metamaterials can achieve exceptional energy absorption by tailoring periodically distributed heterogeneous unit cells, relatively little focus has been placed on engineering heterogeneity above the unit-cell level. In this work, the energy-absorption performance of lattice metamaterials with a heterogeneous spatial layout of different unit cell architectures was studied. Such multi-morphology lattices can harness the distinct mechanical properties of different unit cells while being composed of a single base material. A rational design approach was developed to explore the design space of these lattices, inspiring a non-intuitive design which was evaluated alongside designs based on mixture rules. Fabrication was carried out using two different base materials: 316L stainless steel and Vero White photopolymer. Results show that multi-morphology lattices can be used to achieve higher specific energy absorption than homogeneous lattice metamaterials. Additionally, it is shown that a rational design approach can inspire multi-morphology lattices which exceed rule-of-mixtures expectations.
Faceted Branched Nickel Nanoparticles with Tunable Branch Length for High-Activity Electrocatalytic Oxidation of Biomass
Angewandte Chemie - International Edition
Poerwoprajitno, Agus R.; Gloag, Lucy; Watt, John; Cychy, Steffen; Cheong, Soshan; Kumar, Priyank V.; Benedetti, Tania M.; Deng, Chen; Wu, Kuang H.; Marjo, Christopher E.; Huber, Dale L.; Muhler, Martin; Gooding, J.J.; Schuhmann, Wolfgang; Da Wang, Wei; Tilley, Richard D.
Controlling the formation of nanosized branched nanoparticles with high uniformity is one of the major challenges in synthesizing nanocatalysts with improved activity and stability. Using a cubic-core hexagonal-branch mechanism to form highly monodisperse branched nanoparticles, we vary the length of the nickel branches. Lengthening the nickel branches, with their high coverage of active facets, is shown to improve activity for electrocatalytic oxidation of 5-hydroxymethylfurfural (HMF), as an example for biomass conversion.
3D Hybrid Plasmonic Framework with Au Nanopillars Embedded in Nitride Multilayers Integrated on Si
Advanced Materials Interfaces
Integration of nanoscale photonic and plasmonic components on Si substrates is a critical step toward Si-based integrated nanophotonic devices. In this work, a set of unique complex 3D metamaterials with intercalated nanolayered and nanopillar structures with tunable plasmonic and optical properties on Si substrates is designed. More specifically, the 3D metamaterials combine metal (Au) nanopillars and alternating metal-nitride (Au-TiN and Au-TaN) nanolayers, epitaxially grown on Si substrates. The ultrafine Au nanopillars (d ≈ 3 nm) grow continuously throughout all the nanolayers with high epitaxial quality. Novel optical properties are demonstrated, including strong optical anisotropy, high absorbance covering the entire visible spectrum, and hyperbolic behavior in the visible regime. Furthermore, a waveguide based on a silicon nitride (Si3N4) ridge with a multilayer structure is successfully fabricated. The demonstration of 3D nanoscale metamaterial design integrated on Si opens up a new route toward tunable metamaterial nanostructure designs with versatile material selection for various optical components in Si integrated photonics.
GMRES with embedded ensemble propagation for the efficient solution of parametric linear systems in uncertainty quantification of computational models
Computer Methods in Applied Mechanics and Engineering
Liegeois, Kim; Boman, Romain; Phipps, Eric T.; Wiesner, Tobias A.; Arnst, Maarten
In a previous work, embedded ensemble propagation was proposed to improve the efficiency of sampling-based uncertainty quantification methods for computational models on emerging computational architectures. It consists of evaluating the model for a subset of samples simultaneously, instead of evaluating them individually. A first approach introduced to solve parametric linear systems with ensemble propagation is ensemble reduction. In Krylov methods, for example, this reduction consists of coupling the samples together using an inner product that sums the sample contributions. Ensemble reduction has the advantages of being able to use optimized implementations of BLAS functions and having a stopping criterion that involves only one scalar. However, the reduction potentially decreases the rate of convergence due to the gathering of the spectra of the samples. In this paper, we investigate a second approach: ensemble propagation without ensemble reduction in the case of GMRES. This second approach solves each sample simultaneously but independently to improve convergence compared to ensemble reduction. This raises two new issues, which are solved in this paper: optimized implementations of BLAS functions can no longer be used, and ensemble divergence, whereby individual samples within an ensemble must follow different code execution paths, can occur. We tackle those issues by implementing a high-performing ensemble GEMV and by using masks. The proposed ensemble GEMV leads to a similar cost per GMRES iteration for both approaches, i.e., with and without reduction. For illustration, we study the performance of the new linear solver in the context of a mesh tying problem. This example demonstrates improved ensemble propagation speed-up without reduction.
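The following toy numpy sketch only illustrates what an "ensemble GEMV" does conceptually: the per-sample matrix-vector products are formed in one fused pass over contiguously stored ensemble data rather than as separate GEMV calls. It is an illustration of the data-layout idea, not the authors' high-performance implementation.

```python
# Conceptual ensemble GEMV: y_s = A_s x_s for all samples s in one fused pass.
import numpy as np

s, n = 8, 1000                       # ensemble size, system size
A = np.random.rand(s, n, n)          # one matrix per sample, stored contiguously
x = np.random.rand(s, n)             # one vector per sample

y_ensemble = np.einsum("sij,sj->si", A, x)     # fused ensemble GEMV

# Reference: per-sample GEMVs evaluated independently.
y_loop = np.stack([A[k] @ x[k] for k in range(s)])
print(np.allclose(y_ensemble, y_loop))
```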
Stochastic simulation of cloud-aerosol tracks
Patel, Lekha P.; Shand, Lyndsay S.
Abstract not provided.
Fast Frequency Support in Low Inertia Power System Using Energy Storage
Abstract not provided.
Scalable Geometric Modeler for Overlap Detection and Resolution (ASC IC L2 Milestone 7181 FY2020 Final Review)
Clark, Brett W.; Laros, James H.; Moore, Jacquelyn R.; Kensek, Ronald P.; Hoffman, Edward L.; Ibanez-Granados, Daniel A.
The final review for the FY20 Advanced Simulation and Computing (ASC) Integrated Codes (IC) L2 Milestone #7181 was conducted on August 31, 2020 at Sandia National Laboratories in Albuquerque, New Mexico. The review panel unanimously agreed that the milestone has been successfully completed. Roshan Quadros (1543) led the milestone team, and various members of the team presented the results. The review panel comprised staff from Sandia National Laboratories in Albuquerque and California who are involved in computational engineering modeling and analysis, with expertise in solid modeling, discretization, meshing, simulation workflows, and computational analysis: Brett Clark (1543, Chair); Jay Foulk (8363); Jackie Moore (1553); Ron Kensek (1341); Ed Hoffman (8753); and Dan Ibanez (1443). The presentation documented the technical approach of the team and summarized the results with sufficient detail to demonstrate both the value and the completion of the milestone. A separate SAND report was also generated with more detail to supplement the presentation. The purpose of the milestone was to advance capabilities for automatically finding, displaying, and resolving geometric overlaps in CAD models.
Ionization wave propagation in a He plasma jet in a controlled gas environment
Journal of Applied Physics
Lietz, Amanda M.; Barnat, Edward V.; Foster, John E.; Kushner, Mark J.
Characterizing ionization wave propagation in low temperature plasma jets is critical to predicting production of reactive species and plasma-surface interactions for biomedical applications and surface functionalization. In this paper, results from optical emission and laser induced fluorescence measurements of the ionization wave in a He plasma jet operating in a controlled gas environment are discussed and used for comparison with numerical modeling. The ionization wave was observed using ICCD (Intensified Charge Coupled Device) imaging and characterized by time and spatially resolved electron density measurements using laser-collision-induced fluorescence. The plasma jet was initially characterized using pure He (nominally at 200 Torr), while varying pressure and voltage. When operating in pure He, the ionization wave broadly expands exiting the plasma tube. Increasing the operating pressure reduces the speed and isotropic expansion of the ionization wave. The jet operated with a humid He shroud was also studied. The humid He shroud results in the electron density increasing and having an annular profile due to the lower ionization potential of H2O compared to He and localized photoionization in the mixing region. Numerical modeling highlighted the importance of resonance radiation emitted by excited states of He, photoelectron emission from the quartz tube, and the kinetic behavior of the electrons produced by photoionization ahead of the ionization front.
Assessing atomically thin delta-doping of silicon using mid-infrared ellipsometry
Journal of Materials Research
Katzenmeyer, Aaron M.; Luk, Ting S.; Bussmann, Ezra B.; Young, Steve M.; Anderson, Evan M.; Marshall, Michael T.; Ohlhausen, J.A.; Kotula, Paul G.; Lu, Ping L.; Campbell, DeAnna M.; Lu, Tzu-Ming L.; Liu, Peter Q.; Ward, Daniel R.; Misra, Shashank M.
Hydrogen lithography has been used to template phosphine-based surface chemistry to fabricate atomic-scale devices, a process we abbreviate as atomic precision advanced manufacturing (APAM). Here, we use mid-infrared variable angle spectroscopic ellipsometry (IR-VASE) to characterize single-nanometer thickness phosphorus dopant layers (δ-layers) in silicon made using APAM compatible processes. A large Drude response is directly attributable to the δ-layer and can be used for nondestructive monitoring of the condition of the APAM layer when integrating additional processing steps. The carrier density and mobility extracted from our room temperature IR-VASE measurements are consistent with cryogenic magneto-transport measurements, showing that APAM δ-layers function at room temperature. Finally, the permittivity extracted from these measurements shows that the doping in the APAM δ-layers is so large that their low-frequency in-plane response is reminiscent of a silicide. However, there is no indication of a plasma resonance, likely due to reduced dimensionality and/or low scattering lifetime.
Physical Properties and Gas Hydrate at a Near-Seafloor Thrust Fault, Hikurangi Margin, New Zealand
Geophysical Research Letters
Cook, Ann E.; Paganoni, Matteo; Clennell, Michael B.; Mcnamara, David D.; Nole, Michael A.; Wang, Xiujuan; Han, Shuoshuo; Bell, Rebecca E.; Solomon, Evan A.; Saffer, Demian M.; Barnes, Philip M.; Pecher, Ingo A.; Wallace, Laura M.; Levay, Leah J.; Petronotis, Katerina E.
The Pāpaku Fault Zone, drilled at International Ocean Discovery Program (IODP) Site U1518, is an active splay fault in the frontal accretionary wedge of the Hikurangi Margin. In logging-while-drilling data, the 33-m-thick fault zone exhibits mixed modes of deformation associated with a trend of downward decreasing density, P-wave velocity, and resistivity. Methane hydrate is observed from ~30 to 585 m below seafloor (mbsf), including within and surrounding the fault zone. Hydrate accumulations are vertically discontinuous and occur throughout the entire logged section at low to moderate saturation in silty and sandy centimeter-thick layers. We argue that the hydrate distribution implies that the methane is not sourced from fluid flow along the fault but instead by local diffusion. This, combined with geophysical observations and geochemical measurements from Site U1518, suggests that the fault is not a focused migration pathway for deeply sourced fluids and that the near-seafloor Pāpaku Fault Zone has little to no active fluid flow.
Ultrahigh temperature in situ transmission electron microscopy based bicrystal coble creep in zirconia I: Nanowire growth and interfacial diffusivity
Acta Materialia
Vikrant, K.S.N.; Grosso, Robson L.; Feng, Lin; Muccillo, Eliana N.S.; Muche, Dereck N.F.; Jawaharram, Gowtham S.; Barr, Christopher M.; Monterrosa, Anthony M.; Castro, Ricardo H.R.; Garcia, R.E.; Hattar, Khalid M.; Dillon, Shen J.
This study demonstrates novel in situ transmission electron microscopy-based microscale single grain boundary Coble creep experiments used to grow nanowires through a solid-state process in cubic ZrO2 between ≈ 1200 °C and ≈ 2100 °C. Experiments indicate Coble creep drives the formation of nanowires from asperity contacts during tensile displacement, which is confirmed by phase field simulations. The experiments also facilitate efficient measurement of grain boundary diffusivity and surface diffusivity. 10 mol% Sc2O3 doped ZrO2 is found to have a cation grain boundary diffusivity of $D_{gb} = (0.056 \pm 0.05)\exp\left(\frac{-380{,}000 \pm 41{,}000}{RT}\right)\,\mathrm{m^2\,s^{-1}}$ and a surface diffusivity of $D_s = (0.10 \pm 0.27)\exp\left(\frac{-380{,}000 \pm 28{,}000}{RT}\right)\,\mathrm{m^2\,s^{-1}}$.
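As a quick worked example, the nominal Arrhenius fits quoted above can be evaluated at a temperature inside the reported 1200-2100 °C experimental window; the activation energies are assumed to be in J mol^-1 (as implied by the RT denominator), uncertainties are ignored, and the chosen temperature is arbitrary.

```python
# Numeric evaluation of the fitted Arrhenius expressions (nominal values only).
import numpy as np

R = 8.314                      # J mol^-1 K^-1
T = 1500.0 + 273.15            # K, arbitrary point within the experimental range
D_gb = 0.056 * np.exp(-380000.0 / (R * T))
D_s = 0.10 * np.exp(-380000.0 / (R * T))
print(f"D_gb ~ {D_gb:.2e} m^2/s, D_s ~ {D_s:.2e} m^2/s")
```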
Ultrahigh temperature in situ transmission electron microscopy based bicrystal coble creep in Zirconia II: Interfacial thermodynamics and transport mechanisms
Acta Materialia
Grosso, Robson L.; Vikrant, K.S.N.; Feng, Lin; Muccillo, Eliana N.S.; Muche, Dereck N.F.; Jawaharram, Gowtham S.; Barr, Christopher M.; Monterrosa, Anthony M.; Castro, Ricardo H.R.; Garcia, R.E.; Hattar, Khalid M.; Dillon, Shen J.
This work uses a combination of stress dependent single grain boundary Coble creep and zero-creep experiments to measure interfacial energies, along with grain boundary point defect formation and migration volumes in cubic ZrO2. These data, along with interfacial diffusivities measured in a companion paper, are then applied to analyzing two-particle sintering. The analysis presented here indicates that the large activation volume primarily derives from a large migration volume and suggests that the grain boundary rate limiting defects are delocalized, possibly due to electrostatic interactions between charge compensating defects. The discrete nature of the sintering and creep process observed in the small-scale experiments supports the hypothesis that grain boundary dislocations serve as sources and sinks for grain boundary point defects and facilitate strain during sintering and Coble creep. Model two-particle sintering experiments demonstrate that initial-stage densification follows interface reaction rate-limited kinetics.
Post-detonation fireball thermometry via femtosecond-picosecond coherent anti-Stokes Raman Scattering (CARS)
Proceedings of the Combustion Institute
Richardson, Daniel R.; Kearney, S.P.; Guildenbecher, Daniel R.
Accurate knowledge of post-detonation fireball temperatures is important for understanding device performance and for validation of numerical models. Such measurements are difficult to make even under controlled laboratory conditions. Here, temperature measurements were performed in the fireball of a commercial detonator (RP-80, Teledyne RISI). The explosion and fragments were contained in a plastic enclosure with glass windows for optical access. A hybrid femtosecond-picosecond (fs-ps) rotational coherent anti-Stokes Raman scattering (CARS) instrument was used to perform gas-phase thermometry along a one-dimensional measurement volume in a single laser shot. The 13-mm-thick windows on the explosive-containment housing introduced significant nonlinear chirp on the fs laser pulses, which reduced the Raman excitation bandwidth and did not allow for efficient excitation of high-J Raman transitions populated at flame temperatures. To overcome this, distinct pump and Stokes pulses were used in conjunction with spectral focusing, achieved by varying the relative timing between the pump and Stokes pulses to preferentially excite Raman transitions relevant to flame thermometry. Light scattering from particulate matter and solid fragments was a significant challenge and was mitigated using a new polarization scheme to isolate the CARS signal. Fireball temperatures were measured 35–40 mm above the detonator, 12–25 mm radially outward from the detonator centerline, and at 18 and 28 µs after initiation. At these locations and times, significant mixing between the detonation products and ambient air had occurred, thus increasing the nitrogen-based CARS thermometry signal. Initial measurements show a distribution of fireball temperatures in the range 300–2000 K with higher temperatures occurring 28 µs after detonation.
Effects of Tethered Polymers on Dynamics of Nanoparticles in Unentangled Polymer Melts
Macromolecules
Ge, Ting; Grest, Gary S.; Rubinstein, Michael
Polymer-tethered nanoparticles (NPs) are commonly added to a polymer matrix to improve the material properties. Critical to the fabrication and processing of such composites is the mobility of the tethered NPs. Here, we study the motion of tethered NPs in unentangled polymer melts using molecular dynamics simulations, which offer a precise control of the grafted chain length Ng and the number z of grafted chains per particle. As Ng increases, there is a crossover from particle-dominated to tethered-chain-dominated terminal diffusion of NPs with the same z. The mean squared displacement of loosely tethered NPs in the case of tethered-chain-dominated terminal diffusion exhibits two subdiffusive regimes at intermediate time scales for small z. The first one at shorter time scales arises from the dynamical coupling of the particle and matrix chains, while the one at longer time scales is due to the participation of the particle in the dynamics of the tethered chains. The friction of loosely grafted chains in unentangled melts scales linearly with the total number of monomers in the chains, as the friction of individual monomers is additive in the absence of hydrodynamic coupling. As more chains are grafted to a particle, hydrodynamic interactions between grafted chains emerge. As a result, there is a nondraining layer of hydrodynamically coupled chain segments surrounding the bare particle. Outside the nondraining layer is a free-draining layer of grafted chain segments with no hydrodynamic coupling. The boundary of the two layers is the stick surface where the shear stress due to the relative melt flow is balanced by the friction between the grafted and melt chains in the interpenetration layer. The stick surface is located further away from the bare surface of the particle with higher grafting density.
Nanoconfinement of Molecular Magnesium Borohydride Captured in a Bipyridine-Functionalized Metal-Organic Framework
ACS Nano
Schneemann, Andreas; Wan, Liwen F.; Lipton, Andrew S.; Liu, Yi S.; Snider, Jonathan S.; Baker, Alexander A.; Sugar, Joshua D.; Spataru, Dan C.; Guo, Jinghua; Autrey, Tom S.; Jorgensen, Mathias; Jensen, Torben R.; Wood, Brandon C.; Allendorf, Mark D.; Stavila, Vitalie S.
The lower limit of metal hydride nanoconfinement is demonstrated through the coordination of a molecular hydride species to binding sites inside the pores of a metal-organic framework (MOF). Magnesium borohydride, which has a high hydrogen capacity, is incorporated into the pores of UiO-67bpy (Zr6O4(OH)4(bpydc)6 with bpydc2- = 2,2′-bipyridine-5,5′-dicarboxylate) by solvent impregnation. The MOF retained its long-range order, and transmission electron microscopy and elemental mapping confirmed the retention of the crystal morphology and revealed a homogeneous distribution of the hydride within the MOF host. Notably, the B-, N-, and Mg-edge XAS data confirm the coordination of Mg(II) to the N atoms of the chelating bipyridine groups. In situ 11B MAS NMR studies helped elucidate the reaction mechanism and revealed that complete hydrogen release from Mg(BH4)2 occurs as low as 200 °C. Sieverts and thermogravimetric measurements indicate an increase in the rate of hydrogen release, with the onset of hydrogen desorption as low as 120 °C, which is approximately 150 °C lower than that of the bulk material. Furthermore, density functional theory calculations support the improved dehydrogenation properties and confirm the drastically lower activation energy for B-H bond dissociation.
ISO 35001 Implementation at SNL: Analysis Question Set 1
Abstract not provided.
Complementary Measurements of Residual Stresses Before and After Base Plate Removal in an Intricate Additively-Manufactured Stainless-Steel Valve Housing
Additive Manufacturing
Clausen, Bjorn; D'Elia, C.R.; Prime, Michael B.; Laros, James H.; Bishop, Joseph E.; Johnson, Kyle J.; Jared, Bradley H.; Allen, K.M.; Balch, Dorian K.; Roach, A.; Brown, Donald W.
Residual stress measurements using neutron diffraction and the contour method were performed on a valve housing made from 316L stainless steel powder with intricate three-dimensional internal features using laser powder-bed fusion additive manufacturing. The measurements captured the evolution of the residual stress fields from a state where the valve housing was attached to the base plate to a state where the housing was cut free from the base plate. By making use of this cut, which rendered the technique effectively non-destructive in this application, the contour method mapped the residual stress component normal to the cut plane (a stress field that is completely relieved by cutting) over the whole cut plane, as well as the change in all stresses in the entire housing due to the cut. The non-destructive nature of the neutron diffraction measurements enabled measurements of residual stress at various points in the build prior to cutting and again after cutting. Good agreement was observed between the two measurement techniques, which showed large, tensile build-direction residual stresses in the outer regions of the housing. The contour results showed large changes in multiple stress components upon removal of the build from the base plate in two distinct regions: near the plane where the build was cut free from the base plate and near the internal features that act as stress concentrators. These observations should be useful in understanding the driving mechanisms for builds cracking near the base plate and in identifying regions of concern for structural integrity. Neutron diffraction measurements were also used to show that the shear stresses near the base plate were significantly lower than the normal stresses, an important assumption for the contour method because of the asymmetric cut.
Investigation of the ignition processes of a multi-injection flame in a Diesel engine environment using the flamelet model
Proceedings of the Combustion Institute
Wen, Xu; Rieth, Martin R.; Han, Wang; Chen, Jacqueline H.; Hasse, Christian
In this paper, the first flamelet analysis is conducted of a highly resolved DNS of a multi-injection flame with both auto-ignition and ignition induced by flame-flame interaction. A novel method is proposed to identify the different combustion modes of ignition processes using generalized flamelet equations. The state-of-the-art DNS database generated by Rieth et al. (US National Combustion Meeting, 2019) for a multi-injection flame in a Diesel engine environment is investigated. Three-dimensional flamelets are extracted from the DNS at different time instants with a focus on auto-ignition and interaction-ignition processes. The influences of mixture field interactions and the scalar dissipation rate on the ignition process are investigated by varying the species composition boundary conditions of the transient flamelet equations. Budget analyses of the generalized flamelet equations show that the transport along the mixture fraction iso-surface is insignificant during the auto-ignition process, but becomes important when interaction-ignition occurs, which is further confirmed through a flamelet regime classification method.
Recovery from plasma etching-induced nitrogen vacancies in p-type gallium nitride using UV/O3 treatments
Applied Physics Letters
Foster, Geoffrey M.; Koehler, Andrew; Ebrish, Mona; Gallagher, James E.; Anderson, Travis M.; Noesges, Brenton; Brillson, Leonard; Gunning, Brendan P.; Hobart, Karl D.; Kub, Francis
Plasma etching of p-type GaN creates n-type nitrogen vacancy (VN) defects at the etched surface, which can be detrimental to device performance. In mesa isolated diodes, etch damage on the sidewalls degrades the ideality factor and leakage current. A treatment was developed to recover both the ideality factor and leakage current, which uses UV/O3 treatment to oxidize the damaged layers followed by HF etching to remove them. Temperature-dependent I-V measurements show that the reverse leakage transport mechanism is dominated by Poole-Frenkel emission at room temperature through the etch-induced VN defect. Depth-resolved cathodoluminescence confirms that the damage is limited to the first several nanometers and is consistent with the VN defect.
ALO-NMF: Accelerated Locality-Optimized Non-negative Matrix Factorization
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Moon, Gordon E.; Ellis, John E.; Sukumaran-Rajam, Aravind; Parthasarathy, Srinivasan; Sadayappan, P.
Non-negative Matrix Factorization (NMF) is a key kernel for unsupervised dimension reduction used in a wide range of applications, including graph mining, recommender systems and natural language processing. Due to the compute-intensive nature of applications that must perform repeated NMF, several parallel implementations have been developed. However, existing parallel NMF algorithms have not addressed data locality optimizations, which are critical for high performance since data movement costs greatly exceed the cost of arithmetic/logic operations on current computer systems. In this paper, we present a novel optimization method for parallel NMF algorithm based on the HALS (Hierarchical Alternating Least Squares) scheme that incorporates algorithmic transformations to enhance data locality. Efficient realizations of the algorithm on multi-core CPUs and GPUs are developed, demonstrating a new Accelerated Locality-Optimized NMF (ALO-NMF) that obtains up to 2.29x lower data movement cost and up to 4.45x speedup over existing state-of-the-art parallel NMF algorithms.
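For reference, the following is a plain-numpy sketch of the HALS update scheme that ALO-NMF builds on; this baseline has none of the locality optimizations described above, and the matrix sizes and rank are arbitrary.

```python
# Baseline HALS NMF: factorize a non-negative matrix X ~= W @ H.T of rank r.
import numpy as np

def hals_nmf(X, r, n_iter=200, eps=1e-10):
    m, n = X.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((n, r))
    for _ in range(n_iter):
        # Update columns of W with H fixed.
        A, B = X @ H, H.T @ H
        for k in range(r):
            W[:, k] = np.maximum(eps, W[:, k] + (A[:, k] - W @ B[:, k]) / B[k, k])
        # Update columns of H with W fixed (same step applied to X.T).
        A, B = X.T @ W, W.T @ W
        for k in range(r):
            H[:, k] = np.maximum(eps, H[:, k] + (A[:, k] - H @ B[:, k]) / B[k, k])
    return W, H

X = np.abs(np.random.default_rng(1).random((500, 300)))
W, H = hals_nmf(X, r=20)
print("relative error:", np.linalg.norm(X - W @ H.T) / np.linalg.norm(X))
```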
Multi-fidelity machine-learning with uncertainty quantification and Bayesian optimization for materials design: Application to ternary random alloys
Journal of Chemical Physics
Laros, James H.; Wildey, Timothy M.; Tranchida, Julien G.; Thompson, Aidan P.
We present a scale-bridging approach based on a multi-fidelity (MF) machine-learning (ML) framework leveraging Gaussian processes (GP) to fuse atomistic computational model predictions across multiple levels of fidelity. Through the posterior variance of the MFGP, our framework naturally enables uncertainty quantification, providing estimates of confidence in the predictions. Density functional theory is used as the high-fidelity prediction, while an ML interatomic potential is used as the low-fidelity prediction. Practical materials design efficiency is demonstrated by reproducing the ternary composition dependence of a quantity of interest (bulk modulus) across the full aluminum-niobium-titanium ternary random alloy composition space. The MFGP is then coupled to a Bayesian optimization procedure, and the computational efficiency of this approach is demonstrated by performing an on-the-fly search for the global optimum of bulk modulus in the ternary composition space. The framework presented in this manuscript is the first application of MFGP to atomistic materials simulations fusing predictions between density functional theory and classical interatomic potential calculations.
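To make the multi-fidelity GP idea concrete, here is a minimal two-level sketch using a low-fidelity GP plus a discrepancy GP. This is a simplified recursive scheme, not the exact formulation in the paper; the functions, sample sizes, and kernels are hypothetical stand-ins for the DFT and ML-potential data, and the variance combination assumes independence.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1D fidelities (hypothetical stand-ins for DFT and an ML potential).
def low_fidelity(x):  return np.sin(8 * x)
def high_fidelity(x): return 1.2 * np.sin(8 * x) + 0.3 * x

rng = np.random.default_rng(0)
x_lf = rng.uniform(0, 1, 50)[:, None]   # cheap model: many samples
x_hf = rng.uniform(0, 1, 8)[:, None]    # expensive model: few samples

gp_lf = GaussianProcessRegressor(RBF() + WhiteKernel(1e-6), normalize_y=True)
gp_lf.fit(x_lf, low_fidelity(x_lf).ravel())

# Discrepancy GP: models high-fidelity minus the low-fidelity GP prediction.
delta = high_fidelity(x_hf).ravel() - gp_lf.predict(x_hf)
gp_delta = GaussianProcessRegressor(RBF() + WhiteKernel(1e-6), normalize_y=True)
gp_delta.fit(x_hf, delta)

# Multi-fidelity prediction and (approximate) uncertainty at new points.
x_new = np.linspace(0, 1, 5)[:, None]
mu_lf, sd_lf = gp_lf.predict(x_new, return_std=True)
mu_d, sd_d = gp_delta.predict(x_new, return_std=True)
print(mu_lf + mu_d, np.sqrt(sd_lf**2 + sd_d**2))
```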
A combined SECM and electrochemical AFM approach to probe interfacial processes affecting molecular reactivity at redox flow battery electrodes
Journal of Materials Chemistry A
Watkins, Tylan W.; Sarbapalli, Dipobrato; Counihan, Michael J.; Danis, Andrew S.; Zhang, Jingjing; Zhang, Lu; Zavadil, Kevin R.; Rodriguez-Lopez, Joaquin
Redox flow batteries are attractive technologies for grid energy storage since they use solutions of redox-active molecules that enable a superior scalability and the decoupling of power and energy density. However, the reaction mechanisms of the redox active components at RFB electrodes are complex, and there is currently a pressing need to understand how interfacial processes impact the kinetics and operational reversibility of RFB systems. Here, we developed a combined electrochemical imaging methodology rooted in scanning electrochemical microscopy (SECM) and atomic force microscopy (AFM) for exploring the impact of electrode structure and conditioning on the electron transfer properties of model redox-active dialkoxybenzene derivatives, 2,5-di-tert-butyl-1,4-bis(2-methoxyethoxy)benzene (C1) and 2,3-dimethyl-1,4-dialkoxybenzene (C7). Using AFM and secondary-ion mass spectrometry (SIMS), we observed the formation of interfacial films with distinct mechanical properties compared to those of cleaved graphitic surfaces, and exclusively during reduction of electrogenerated radical cations. These films had an impact on the median rate and distribution of the electron transfer rate constant at the basal plane of multilayer and single layer graphene electrodes, displaying kinetically-limited values that did not yield the activation expected per the Butler-Volmer model with a transfer coefficient ∼0.5. These changes were dependent on redoxmer structure: SECM showed strong attenuation of C7 kinetics by a surface layer on MLG and SLG, while C1 kinetics were only affected by SLG. SECM and AFM results together show that these limiting films operate exclusively on the basal plane of graphite, with the edge plane showing a relative insensitivity to cycling and operation potential. This integrated electrochemical imaging methodology creates new opportunities to understand the unique role of interfacial processes on the heterogeneous reactivity of redoxmers at electrodes for RFBs, with a future role in elucidating phenomena at high active concentrations and spatiotemporal variations in electrode dynamics.
Calculating Interval Uncertainties for Calibration Standards That Drift with Time
NCSLI Measure
Delker, Collin J.; Solomon, Otis M.; Auden, Elizabeth C.
Calibrated values of many devices exhibit predictable drift over time. To provide an uncertainty statement valid over the entire calibration interval, one must account for drift. In this article, a method of accounting for drift is proposed based on guidance in the Guide to Expression of Uncertainty in Measurement. An additional uncertainty term is computed using a linear regression of historical measurement data, which is included along with the time-of-test uncertainty. This method is evaluated by analyzing its average out-of-tolerance (OOT) rate using a Monte Carlo simulation, which results in the desired 5% average OOT rate when the total uncertainty is expanded to a 95% confidence interval.
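A minimal numerical sketch of the general approach described above: an additional drift-uncertainty term obtained from a linear regression of historical calibration data, combined with the time-of-test uncertainty. All numbers and variable names are hypothetical, and the article's Monte Carlo OOT evaluation is omitted.

```python
import numpy as np

# Hypothetical calibration history: days since calibration vs. measured error.
t_hist = np.array([0.0, 90.0, 180.0, 270.0, 365.0])
y_hist = np.array([0.002, 0.011, 0.018, 0.031, 0.040])  # device drift (units)

# Linear regression with parameter covariance (slope b, intercept a).
(b, a), cov = np.polyfit(t_hist, y_hist, 1, cov=True)

def drift_uncertainty(t):
    """Standard uncertainty of the predicted drift at time t after calibration."""
    # Var[a + b*t] = Var[a] + t^2 Var[b] + 2 t Cov[a, b]
    return np.sqrt(cov[1, 1] + t**2 * cov[0, 0] + 2 * t * cov[0, 1])

u_cal = 0.005           # time-of-test standard uncertainty (hypothetical)
t_end = 365.0           # end of the calibration interval (days)
u_total = np.sqrt(u_cal**2 + drift_uncertainty(t_end)**2)
print("k=2 expanded uncertainty over the interval:", 2 * u_total)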
Near-field probing of strong light-matter coupling in single IR antennae
Proceedings of SPIE - The International Society for Optical Engineering
Mitrofanov, Oleg; Wang, Chih-Feng; Habteyes, Terefe G.; Luk, Ting S.; Klem, John F.; Brener, Igal B.; Chen, Hou-Tong
Quantum well intersubband polaritons are traditionally studied in large scale ensembles, over many wavelengths in size. In this presentation, we demonstrate that it is possible to detect and investigate intersubband polaritons in a single sub-wavelength nanoantenna in the IR frequency range. We observe polariton formation using a scattering-type near-field microscope and nano-FTIR spectroscopy. In this work, we will discuss near-field spectroscopic signatures of plasmonic antennae with and without coupling to the intersubband transition in quantum wells located underneath the antenna. Evanescent field amplitude spectra recorded on the antenna surface show a mode anti-crossing behavior in the strong coupling case. We also observe a corresponding strong-coupling signature in the phase of the detected field. We anticipate that this near-field approach will enable explorations of strong and ultrastrong light-matter coupling in the single nanoantenna regime, including investigations of the elusive effect of ISB polariton condensation.
Structural and dynamical properties of potassium dodecahydro-monocarba-closo-dodecaborate: KCB11H12
Journal of Physical Chemistry C
MCB11H12 (M: Li, Na) dodecahydro-monocarba-closo-dodecaborate salt compounds are known to have stellar superionic Li+ and Na+ conductivities in their high-temperature disordered phases, making them potentially appealing electrolytes in all-solid-state batteries. Nonetheless, it is of keen interest to search for other related materials with similar conductivities while at the same time exhibiting even lower (more device-relevant) disordering temperatures, a key challenge for this class of materials. With this in mind, the unknown structural and dynamical properties of the heavier KCB11H12 congener were investigated in detail by X-ray powder diffraction, differential scanning calorimetry, neutron vibrational spectroscopy, nuclear magnetic resonance, quasielastic neutron scattering, and AC impedance measurements. This salt indeed undergoes an entropy-driven, reversible, order-disorder transformation, with a lower onset temperature (348 K upon heating and 340 K upon cooling) in comparison to the lighter LiCB11H12 and NaCB11H12 analogues. The K+ cations in both the low-T ordered monoclinic (P21/c) and high-T disordered cubic (Fm-3m) structures occupy octahedral interstices formed by CB11H12- anions. In the low-T structure, the anions orient themselves so as to avoid close proximity between their highly electropositive C-H vertices and the neighboring K+ cations. In the high-T structure, the anions are orientationally disordered, although to best avoid the K+ cations, the anions likely orient themselves so that their C-H axes are aligned in one of eight possible directions along the body diagonals of the cubic unit cell. Across the transition, anion reorientational jump rates change from 6.2 × 10^6 s^-1 in the low-T phase (332 K) to 2.6 × 10^10 s^-1 in the high-T phase (341 K). In tandem, K+ conductivity increases by about 30-fold across the transition, yielding a high-T phase value of 3.2 × 10^-4 S cm^-1 at 361 K. However, this is still about 1 to 2 orders of magnitude lower than that observed for LiCB11H12 and NaCB11H12, suggesting that the relatively larger K+ cation is much more sterically hindered than Li+ and Na+ from diffusing through the anion lattice via the network of smaller interstitial sites.
Dynamic Simulation Technoeconomic Model for Power Generation
Brady, Patrick V.; Middleton, Bobby M.
Sandia National Laboratories has built and successfully tested a dynamic simulation technoeconomic model of the Palo Verde Generating Station that is now being updated to help other US power plants improve operations. Palo Verde, located west of Phoenix, Arizona, is the largest electricity generator in the US at 4 GW. Palo Verde uses ~60 million gallons per day of treated wastewater from Phoenix to cool reactors, and disposes of blowdown in evaporation ponds. The model built for Palo Verde numerically evaluates the economic impact of changes in, for example, cooling technologies, water usage and treatment, and influent water chemistry, and is based on detailed accounting of mass, energy, and cash flows.
MIRaGE: Design Software for Metamaterials
Metamaterials are artificial optical structures that allow control of light in ways not found in, or offered by, naturally occurring materials. Sandia's Multiscale Inverse Rapid Group-theory for Engineered-metamaterials (MIRaGE) software, which won an R&D100 award in 2019, allows researchers to deterministically design and produce metamaterials with unique characteristics. MIRaGE also provides powerful autonomous optimization techniques for real-world performance in a rigorous, robust, and accurate manner.
All-Epitaxial Integration of Long-Wavelength Infrared Plasmonic Materials and Detectors for Enhanced Responsivity
ACS Photonics
Nordin, Leland; Kamboj, Abhilasha; Petluru, Priyanka; Shaner, Eric A.; Wasserman, Daniel
Infrared detectors using monolithically integrated doped semiconductor "designer metals" are proposed and experimentally demonstrated. We leverage the "designer metal" ground planes to form resonant cavities with enhanced absorption tuned across the long-wave infrared (LWIR). Detectors are designed with two target absorption enhancement wavelengths: 8 and 10 μm. The cores of our detectors are quantum-engineered LWIR type-II superlattice p-i-n detectors with total thicknesses of only 1.42 and 1.80 μm for the 8 and 10 μm absorption enhancement devices, respectively. Our 8 and 10 μm structures show peak external quantum efficiencies of 45 and 27%, which are 4.5× and 2.7× enhanced, respectively, compared to control structures. We demonstrate the clear advantages of this detector architecture, both in terms of ease of growth/fabrication and enhanced device performance. The proposed architecture is absorber- and device-structure agnostic, much thinner than state-of-the-art LWIR T2SLs, and offers the opportunity for the integration of low dark current LWIR detector architectures for significant enhancement of IR detectivity.
Looking at the bigger picture: Identifying the photoproducts of pyruvic acid at 193 nm
Journal of Chemical Physics
Samanta, Bibek R.; Fernando, Ravin; Rösch, Daniel; Reisler, Hanna; Osborn, David L.
Here, photodissociation of pyruvic acid (PA) was studied in the gas-phase at 193 nm using two complementary techniques. The time-sliced velocity map imaging arrangement was used to determine kinetic energy release distributions of fragments and estimate dissociation timescales. The multiplexed photoionization mass spectrometer setup was used to identify and quantify photoproducts, including isomers and free radicals, by their mass-to-charge ratios, photoionization spectra, and kinetic time profiles. Using these two techniques, it is possible to observe the major dissociation products of PA photodissociation: CO2, CO, H, OH, HCO, CH2CO, CH3CO, and CH3. Acetaldehyde and vinyl alcohol are minor primary photoproducts at 193 nm, but products that are known to arise from their unimolecular dissociation, such as HCO, H2CO, and CH4, are identified and quantified. A multivariate analysis that takes into account the yields of the observed products and assumes a set of feasible primary dissociation reactions provides a reasonable description of the photoinitiated chemistry of PA despite the necessary simplifications caused by the complexity of the dissociation. These experiments offer the first comprehensive description of the dissociation pathways of PA initiated on the S3 excited state. Most of the observed products and yields are rationalized on the basis of three reaction mechanisms: (i) decarboxylation terminating in CO2 + other primary products (~50%); (ii) Norrish type I dissociation typical of carbonyls (~30%); and (iii) O—H and C—H bond fission reactions generating the H atom (~10%). The analysis shows that most of the dissociation reactions create more than two products. This observation is not surprising considering the high excitation energy (~51 800 cm–1) and fairly low energy required for dissociation of PA. We find that two-body fragmentation processes yielding CO2 are minor, and the expected, unstable primary co-fragment, methylhydroxycarbene, is not observed because it probably undergoes fast secondary dissociation and/or isomerization. Norrish type I dissociation pathways generate OH and only small yields of CH3CO and HOCO, which have low dissociation energies and further decompose via three-body fragmentation processes. Experiments with d1-PA (CH3COCOOD) support the interpretations. The dissociation on S3 is fast, as indicated by the products’ recoil angular anisotropy, but the roles of internal conversion and intersystem crossing to lower states are yet to be determined.
Performance Portable Supernode-based Sparse Triangular Solver for Manycore Architectures
ACM International Conference Proceeding Series
Yamazaki, Ichitaro Y.; Rajamanickam, Sivasankaran R.; Ellingwood, Nathan D.
A sparse triangular solver is an important kernel in many computational applications. However, a fast, parallel sparse triangular solver on a manycore architecture such as a GPU has been an open issue in the field for several years. In this paper, we develop a sparse triangular solver that takes advantage of the supernodal structures of the triangular matrices that come from the direct factorization of a sparse matrix. We implemented our solver using Kokkos and Kokkos Kernels such that our solver is portable to different manycore architectures. This has the additional benefit of allowing our triangular solver to use the team-level kernels and take advantage of the hierarchical parallelism available on the GPU. We compare the effects of different scheduling schemes on the performance and also investigate an algorithmic variant called the partitioned inverse. Our performance results on an NVIDIA V100 or P100 GPU demonstrate that our implementation can be 12.4× or 19.5× faster than the vendor-optimized implementation in NVIDIA's cuSPARSE library.
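As background for the kernel being optimized, a serial reference implementation of a sparse lower-triangular solve is sketched below (Python/SciPy, illustrative only); the supernode-based, level-scheduled GPU formulation that is the paper's contribution is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp

def lower_triangular_solve_csr(L, b):
    """Row-oriented forward substitution for a sparse lower-triangular matrix."""
    L = L.tocsr()
    x = np.zeros_like(b, dtype=float)
    for i in range(L.shape[0]):
        start, end = L.indptr[i], L.indptr[i + 1]
        cols, vals = L.indices[start:end], L.data[start:end]
        diag = vals[cols == i][0]
        # Subtract contributions of already-computed unknowns, then divide.
        s = vals[cols < i] @ x[cols[cols < i]]
        x[i] = (b[i] - s) / diag
    return x

# Example: random sparse matrix with a safe diagonal, lower factor taken by tril.
A = sp.random(200, 200, density=0.02, random_state=0) + 5 * sp.eye(200)
L = sp.tril(A).tocsr()
b = np.ones(200)
x = lower_triangular_solve_csr(L, b)
print(np.linalg.norm(L @ x - b))
```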
Analysis of Tempered Fractional Operators
D'Elia, Marta D.; Olson, Hayley
Tempered fractional operators provide an improved predictive capability for modeling anomalous effects that cannot be captured by standard partial differential equations. These effects include subdiffusion and superdiffusion (i.e., the mean square displacement in a diffusion process is proportional to a fractional power of the time), which often occur in, e.g., geoscience and hydrology. We analyze tempered fractional operators within the nonlocal vector calculus framework in order to assimilate them to the rigorous mathematical structure developed for nonlocal models. First, we show that they are special instances of generalized nonlocal operators by means of a proper choice of the nonlocal kernel. Then, we present a plan for showing that tempered fractional operators are equivalent to truncated fractional operators. These truncated operators are useful because they are less computationally intensive than the tempered operators.
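A brief sketch of the kernel choice alluded to above (notation and constants are illustrative, not taken from the report): in the nonlocal vector calculus setting one writes a nonlocal diffusion operator as an integral against a kernel, and tempering or truncating that kernel gives the two families of operators being compared.

```latex
% Nonlocal diffusion operator with kernel gamma (illustrative notation):
\mathcal{L}u(x) = 2\int_{\mathbb{R}^n} \big(u(y) - u(x)\big)\,\gamma(x,y)\,dy
% Tempered fractional kernel (s in (0,1), tempering parameter lambda >= 0):
\gamma_{\mathrm{temp}}(x,y) = \frac{c_{n,s}}{2}\,
  \frac{e^{-\lambda |x - y|}}{|x - y|^{\,n + 2s}}
% Truncated fractional kernel (interaction horizon delta):
\gamma_{\mathrm{trunc}}(x,y) = \frac{c_{n,s}}{2}\,
  \frac{\mathbf{1}_{\{|x-y| \le \delta\}}}{|x - y|^{\,n + 2s}}
```

Setting the tempering parameter to zero (or the horizon to infinity) recovers the standard fractional Laplacian; both the exponential tempering and the sharp truncation suppress long-range interactions, which underlies the equivalence argument mentioned above.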
Development of a heterogeneous nanostructure through abnormal recrystallization of a nanotwinned Ni superalloy
Acta Materialia
Bahena, Joel A.; Heckman, Nathan H.; Barr, Christopher M.; Hattar, Khalid M.; Boyce, Brad B.; Hodge, Andrea M.
This work explores the development of a heterogeneous nanostructured material through leveraging abnormal recrystallization, which is a prominent phenomenon in coarse-grained Ni-based superalloys. Through synthesis of a sputtered Inconel 725 film with a heterogeneous distribution of stored energy and subsequent aging treatments at 730°C, a unique combination of grain sizes and morphologies was observed throughout the thickness of the material. Three distinct domains are formed in the aged microstructure, where abnormally large grains are observed in-between a nanocrystalline and a nanotwinned region. In order to investigate the transitions towards a heterogeneous structure, crystallographic orientation and elemental mapping at interval aging times up to 8 h revealed the microstructural evolution and precipitation behavior. From the experimental observations and the detailed analysis of this study, the current methodology can be utilized to further expand the design space of current heterogeneous nanostructured materials.
Axial-torsion behavior of superelastic tubes: Part I, proportional isothermal experiments
International Journal of Solids and Structures
Reedlunn, Benjamin R.; Lepage, William S.; Daly, Samantha H.; Shaw, John A.
The tensile response of superelastic shape memory alloys (SMAs) has been widely studied, but detailed experimental studies under multi-axial loading are relatively rare. In Part I, we present the isothermal responses of commercially-available superelastic NiTi tubes for a series of proportional stretch-twist controlled histories, spanning pure tension to simple torsion to pure compression. These axial-shear responses are used to quantify the onset and saturation during forward (loading) and reverse (unloading) stress-induced transformations for the first time. Each of the four transformation surfaces is well-captured by a smooth (three-parameter) ellipse in both strain and stress space. A simple Gibbs free energy model is presented to show how the driving force for phase transformation is approximately constant across all proportional strain paths and how the stress and strain transformation surfaces are conjugate to one another. In addition, transformation kinetics and surface strain morphologies are characterized by stereo digital image correlation (DIC). Under extension at low amounts of twist, stress-induced transformation involves strain localization in helical bands that evolve into axial propagation of ring-like transformation fronts with fine criss-crossing fingers (similar to those seen by Q. P. Sun and co-workers in pure extension). However, at large amounts of twist, including simple torsion and pure torsion, we report a new transformation morphology, involving strain localization along nearly longitudinal bands in the tube. The sequel (Part II) will address the response to non-proportional stretch-twist paths. Together, these detailed multi-axial results advance the scientific understanding of superelasticity and inform efforts to develop high-fidelity SMA constitutive models and simulation tools.
X-ray topography characterization of gallium nitride substrates for power device development
Journal of Crystal Growth
Raghothamachar, Balaji; Liu, Yafei; Peng, Hongyu; Ailihumaer, Tuerxun; Dudley, Michael; Shahedipour-Sandvik, F.S.; Jones, Kenneth A.; Armstrong, Andrew A.; Allerman, A.A.; Han, Jung; Fu, Houqiang; Fu, Kai; Zhao, Yuji
Gallium nitride substrates grown by the hydride vapor phase epitaxy (HVPE) method using a patterned growth process have been characterized by synchrotron monochromatic beam X-ray topography in the grazing incidence geometry. Images reveal a starkly heterogeneous distribution of dislocations with areas as large as 0.3 mm^2 containing threading dislocation densities below 10^3 cm^−2 in between a grid of strain centers with higher threading dislocation densities (>10^4 cm^−2). Basal plane dislocation densities in these areas are as low as 10^4 cm^−2. By comparing the recorded images of dislocations with ray tracing simulations of expected dislocations in GaN, the Burgers vectors of the dislocations have been determined. The distribution of threading screw/mixed dislocations (TSDs/TMDs), threading edge dislocations (TEDs) and basal plane dislocations (BPDs) is discussed with implications for fabrication of power devices.
Numerical simulations of enhanced ion current losses in the inner magnetically insulated transmission line of the Z accelerator
Physical Review Accelerators and Beams
Rose, David V.; Waisman, Eduardo M.; Desjarlais, Michael P.; Cuneo, M.E.; Hutsel, Brian T.; Welch, Dale R.; Bennett, Nichelle L.; Laity, George R.
Two-dimensional electromagnetic (EM) particle-in-cell (PIC) simulations of a radial magnetically insulated transmission line (MITL) are presented and compared to the model of E. M. Waisman, M. P. Desjarlais, and M. E. Cuneo [Phys. Rev. Accel. Beams 22, 030402 (2019)] in the “high-enhancement” (WDC-HE) limit. The simulations use quasi-equilibrium current and voltage values based on the Sandia National Laboratories Z accelerator, with prescribed injection of an electron sheath that gives electron density profiles qualitatively similar to those used in the WDC-HE model. We find that the WDC-HE model accurately predicts the quasi-equilibrium ion current losses in the EM PIC simulations for a wide range of current and voltage values. For the case of two ion species where one is magnetically insulated by the ambient magnetic field and the other is not, the charge of the lighter insulated species in the anode-cathode gap can modify the electric field profile, reducing the ion current density enhancement for the heavier ion species. On the other hand, for multiple ion species, when the lighter ions are not magnetically insulated and are a significant fraction of the anode plasma, they dominate the current loss, producing loss currents which are a significant fraction of the lighter ion WDC values. The observation of this effect in the present work is new to the field and may significantly impact the analysis of ion current losses in the Z machine inner MITL and convolute.
A terminology for in situ visualization and analysis systems
International Journal of High Performance Computing Applications
Childs, Hank; Ahern, Sean D.; Ahrens, James; Bauer, Andrew C.; Bennett, Janine C.; Bethel, E.W.; Bremer, Peer-Timo; Brugger, Eric; Cottam, Joseph; Dorier, Matthieu; Dutta, Soumya; Favre, Jean M.; Fogal, Thomas; Frey, Steffen; Garth, Christoph; Geveci, Berk; Godoy, William F.; Hansen, Charles D.; Harrison, Cyrus; Insley, Joseph; Johnson, Chris R.; Klasky, Scott; Knoll, Aaron; Kress, James; Laros, James H.; Lofstead, Gerald F.; Ma, Kwan-Liu; Malakar, Preeti; Meredith, Jeremy; Moreland, Kenneth D.; Navratil, Paul; O'Leary, Patrick; Parashar, Manish; Pascucci, Valerio; Patchett, John; Peterka, Tom; Petruzza, Steve; Pugmire, David; Rasquin, Michel; Rizzi, Silvio; Rogers, David M.; Sane, Sudhanshu; Sauer, Franz; Sisneros, Johnny R.; Shen, Han-Wei; Usher, Will; Vickery, Rhonda; Vishwanath, Venkatram; Wald, Ingo; Wang, Ruonan; Weber, Gunther H.; Whitlock, Brad; Wolf, Matthew; Yu, Hongfeng; Ziegeler, Sean B.
The term “in situ processing” has evolved over the last decade to mean both a specific strategy for visualizing and analyzing data and an umbrella term for a processing paradigm. The resulting confusion makes it difficult for visualization and analysis scientists to communicate with each other and with their stakeholders. To address this problem, a group of over 50 experts convened with the goal of standardizing terminology. This paper summarizes their findings and proposes a new terminology for describing in situ systems. An important finding from this group was that in situ systems are best described via multiple, distinct axes: integration type, proximity, access, division of execution, operation controls, and output type. Here, they discuss these axes, evaluate existing systems within the axes, and explore how currently used terms relate to the axes.
Efficient optimization method for finding minimum energy paths of magnetic transitions
Journal of Physics Condensed Matter
Tranchida, Julien G.; Ivanov, A.V.; Dagbartsson, D.; Uzdin, V.M.; Jonsson, H.
Efficient algorithms for the calculation of minimum energy paths of magnetic transitions are implemented within the geodesic nudged elastic band (GNEB) approach. While an objective function is not available for GNEB and a traditional line search can, therefore, not be performed, the use of limited memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) and conjugate gradient algorithms in conjunction with orthogonal spin optimization (OSO) approach is shown to greatly outperform the previously used velocity projection and dissipative Landau-Lifschitz dynamics optimization methods. The implementation makes use of energy weighted springs for the distribution of the discretization points along the path and this is found to improve performance significantly. The various methods are applied to several test problems using a Heisenberg-type Hamiltonian, extended in some cases to include Dzyaloshinskii-Moriya and exchange interactions beyond nearest neighbours. Minimum energy paths are found for magnetization reversals in a nano-island, collapse of skyrmions in two-dimensional layers and annihilation of a chiral bobber near the surface of a three-dimensional magnet. The LBFGS-OSO method is found to outperform the dynamics based approaches by up to a factor of 8 in some cases.
In Situ TEM Study of Radiation Resistance of Metallic Glass–Metal Core–Shell Nanocubes
ACS Applied Materials and Interfaces
Kiani, Mehrdad T.; Hattar, Khalid M.; Gu, X.W.
Radiation damage can cause significantly more surface damage in metallic nanostructures than in bulk materials. Structural changes from displacement damage compromise the performance of nanostructures in radiation environments such as nuclear reactors and outer space, or when used in radiation therapy for biomedical treatments. As such, it is important to develop strategies to prevent this from occurring if nanostructures are to be incorporated into these applications. In this work, in situ transmission electron microscope ion irradiation was used to investigate whether a metallic glass (MG) coating mitigates sputtering and morphological changes in metallic nanostructures. Dislocation-free Au nanocubes and Au nanocubes coated with a Ni–B MG were bombarded with 2.8 MeV Au^4+ ions. The formation of internal defects in bare Au nanocubes was observed at a fluence of 7.5 × 10^11 ions/cm^2 (0.008 dpa), and morphological changes such as surface roughening, rounding of corners, and formation of nanofilaments began at 4 × 10^12 ions/cm^2 (0.04 dpa). In contrast, the Ni–B MG-coated Au nanocubes (Au@NiB) showed minimal morphological changes at a fluence of 1.9 × 10^13 ions/cm^2 (0.2 dpa). Finally, the MG coating maintains its amorphous nature under all irradiation conditions investigated.
A Curated Experimental Compilation Analyzed by Theory Is More than a Review
Macromolecules
Winey, Karen I.; Frischknecht, Amalie F.
Macromolecules is an exceptional resource in the field of polymer science and now publishes more than 1000 original articles a year that set the standard for scientific rigor and creative insights. Over the years, these individual contributions have combined to build the foundation of polymer science, broadly and inclusively defined. In addition to the individual articles, many of which are being celebrated in this series of editorials, Macromolecules has published invaluable reviews and perspectives. These scholarly contributions integrate the insights and results from numerous sources into a unified whole and often recommend future directions for the field. Novices and experts alike benefit from these works that capture topics from emerging discoveries to long-pondered topics and everything in between. To explore the importance of Macromolecules’ reviews and perspectives, we considered their influence on the field and found the 1994 review by Fetters et al. entitled “Connection between Polymer Molecular Weight, Density, Chain Dimensions, and Melt Viscoelastic Properties”1 to be a singularity. This review expertly curates and compiles a trove of data to build robust correlations between molecular characteristics and macroscopic viscoelastic properties of polymer melts, in the context of the tube model of entanglements.
Imagery Applications for Advanced Event Analytics (WBS 24.3.1.3.3-IDC FY2020 Final Project Report)
Miller, Sarah E.; Lavadie-Bulnes, Anita; Schultz-Fellenz, Emily; Bynum, Leo B.; Slater, Jonathon T.; Sussman, Aviva J.
Accurate event locations and replicability of location analyses are essential for assessing the nature of an event, its context, ambient site conditions, and proximity to relevant facilities and infrastructure. Additionally, accurate event locations provide valuable information that reduce uncertainties, improve confidence in event analyses, and inform in-field verification activities. However, event location/relocation and replicability are difficult due to a number of factors, including spatially-sparse network coverage in some areas of the globe and variability in seismic data processing. This team proposed that the incorporation of high-fidelity imagery as a data backbone to the analytical assessment of a suspected underground explosion and/or an advanced seismic event bulletin produced by the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO PrepCom) could reduce uncertainties and improve confidence in analyses. Specifically, temporally-separated images can reduce uncertainty by identifying areas where change has occurred (e.g., building construction or demolition, road or facilities improvements). The primary goal of this project was to develop an automated geospatial processing script for imagery change detection to better reflect needs of the technical community (including the IDC) and to make the use of such a tool accessible in a variety of settings across platforms. Technical experts at Los Alamos National Laboratory successfully built GAIA: the Geospatial Automated Imagery Analysis tool, to fill this need. GAIA combines five tool components to produce orthorectified time-separated imagery and imagery change detection maps. Our toolkit (1) reduces error by providing a standardized workflow for image analyses and (2) significantly reduces processing time from between 7 and 24+ hours to approximately 5 minutes. Technical experts at Sandia National Laboratories supported GAIA via beta-testing and by introducing a web-based system approach for increased applicability. To test the function, performance, broad application, and ease-of-use of GAIA, we applied it to four separate test cases. The results of this preliminary investigation show promise in reducing uncertainty in seismic event locations: if satellite imagery can show regions where operations that produce seismic activity likely occurred, then pursuing imagery to locate epicenters of seismic nuclear events could reduce the time needed to find the true epicenter location.
A multirate mass transfer model to represent the interaction of multicomponent biogeochemical processes between surface water and hyporheic zones (SWAT-MRMT-R 1.0)
Geoscientific Model Development
Fang, Yilin; Chen, Xingyuan; Velez, Jesus G.; Zhang, Xuesong; Duan, Zhuoran; Hammond, Glenn E.; Goldman, Amy E.; Garayburu-Caruso, Vanessa A.; Graham, Emily B.
Surface water quality along river corridors can be modulated by hyporheic zones (HZs) that are ubiquitous and biogeochemically active. Watershed management practices often ignore the potentially important role of HZs as a natural reactor. To investigate the effect of hydrological exchange and biogeochemical processes on the fate of nutrients in surface water and HZs, a novel model, SWAT-MRMT-R, was developed by coupling the Soil and Water Assessment Tool (SWAT) watershed model and the reaction module from a flow and reactive transport code (PFLOTRAN). SWAT-MRMT-R simulates concurrent nonlinear multicomponent biogeochemical reactions in both the channel water and its surrounding HZs, connecting the channel water and HZs through hyporheic exchanges using a multirate mass transfer (MRMT) representation. Within the model, HZs are conceptualized as transient storage zones with distinguished exchange rates and residence times. The biogeochemical processes within HZs are different from those in the channel water. Hyporheic exchanges are modeled as multiple first-order mass transfers between the channel water and HZs. As a numerical example, SWAT-MRMT-R is applied to the Hanford Reach of the Columbia River, a large river in the United States, focusing on nitrate dynamics in the channel water. Major nitrate contaminants entering the Hanford Reach include those from the legacy waste, irrigation return flows (irrigation water that is not consumed by crops and runs off as point sources to the stream), and groundwater seepage resulting from irrigated agriculture. A two-step reaction sequence for denitrification and an aerobic respiration reaction is assumed to represent the biogeochemical transformations taking place within the HZs. The spatially variable hyporheic exchange rates and residence times in this example are estimated with the basin-scale Networks with EXchange and Subsurface Storage (NEXSS) model. Our simulation results show that (1) given a residence time distribution, the way the exchange fluxes to HZs are approximated when using MRMT can significantly change the amount of nitrate consumption in HZs through denitrification, and (2) the source locations of nitrate have different impacts on surface water quality due to the spatially variable hyporheic exchanges.
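Schematically, the multirate mass transfer coupling described above can be written as follows; the notation is illustrative and simplified relative to the model documentation.

```latex
% Channel concentration C exchanging with N hyporheic storage zones C_j
% (alpha_j: first-order exchange rates, beta_j: capacity ratios,
%  A(C): in-stream routing/transport, R and R_j: reaction networks):
\frac{\partial C}{\partial t} = \mathcal{A}(C) + R(C)
   + \sum_{j=1}^{N} \alpha_j \beta_j \,\big(C_j - C\big)
\frac{d C_j}{d t} = \alpha_j \big(C - C_j\big) + R_j(C_j), \qquad j = 1,\dots,N
```

Each storage zone carries its own exchange rate and reaction network, which is how the model lets denitrification proceed in the HZs while different chemistry operates in the channel.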
Parallel algorithms for hyperdynamics and local hyperdynamics
Journal of Chemical Physics
Plimpton, Steven J.; Perez, Danny; Voter, Arthur F.
Hyperdynamics (HD) is a method for accelerating the timescale of standard molecular dynamics (MD). It can be used for simulations of systems with an energy potential landscape that is a collection of basins, separated by barriers, where transitions between basins are infrequent. HD enables the system to escape from a basin more quickly while enabling a statistically accurate renormalization of the simulation time, thus effectively boosting the timescale of the simulation. In the work of Kim et al. [J. Chem. Phys. 139, 144110 (2013)], a local version of HD was formulated, which exploits the intrinsic locality characteristic typical of most systems to mitigate the poor scaling properties of standard HD as the system size is increased. Here, we discuss how both HD and local HD can be formulated to run efficiently in parallel. We have implemented these ideas in the LAMMPS MD code, which means HD can be used with any interatomic potential LAMMPS supports. Together, these parallel methods allow simulations of any size to achieve the time acceleration offered by HD (which can be orders of magnitude), at a cost of 2-4× that of standard MD. As examples, we performed two simulations of a million-atom system to model the diffusion and clustering of Pt adatoms on a large patch of the Pt(100) surface for 80 μs and 160 μs.
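The time renormalization that makes HD a "boosted" method can be summarized in a few lines. The sketch below (Python, with hypothetical bias-energy values) only illustrates the bookkeeping of the boosted clock, not the LAMMPS implementation or the construction of the bias potential.

```python
import numpy as np

def hyperdynamics_time(bias_energies, dt, temperature_K):
    """Accumulate the boosted (hyper) time from per-step bias energies.

    In hyperdynamics each MD step of length dt advances the physical clock by
    dt * exp(dV_b / kT), where dV_b is the bias potential evaluated at the
    current configuration (zero outside the biased region near a minimum).
    """
    kB = 8.617333262e-5          # Boltzmann constant, eV/K
    boosts = np.exp(np.asarray(bias_energies) / (kB * temperature_K))
    return dt * np.sum(boosts), np.mean(boosts)

# Hypothetical example: 1e6 steps of 1 fs with a modest average bias energy.
rng = np.random.default_rng(0)
dV = np.clip(rng.normal(0.15, 0.05, size=1_000_000), 0.0, None)  # eV
t_hyper, boost = hyperdynamics_time(dV, dt=1.0e-15, temperature_K=400.0)
print(f"boost factor ~{boost:.1f}, simulated time ~{t_hyper*1e9:.2f} ns")
```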
Improved reference system for the corrected rigid spheres equation of state model
Journal of Applied Physics
Cowen, Benjamin J.; Carpenter, John H.
The Corrected Rigid Spheres (CRIS) equation of state (EOS) model [Kerley, J. Chem. Phys. 73, 469 (1980); 73, 478 (1980); 73, 487 (1980)], developed from fluid perturbation theory using a hard sphere reference system, has been successfully used to calculate the EOS of many materials, including gases and metals. The radial distribution function (RDF) plays a pivotal role in choosing the sphere diameter, through a variational principle, as well as the thermodynamic response. Despite its success, the CRIS model has some shortcomings in that it predicts too large a temperature for liquid-vapor critical points, can break down at large compression, and is computationally expensive. We first demonstrate that an improved analytic representation of the hard sphere RDF does not alleviate these issues. Relaxing the strict adherence of the RDF to hard spheres allows an accurate fit to the isotherms and vapor dome of the Lennard-Jones fluid using an arbitrary reference system. The second order correction is eliminated, limiting the breakdown at large compression and significantly reducing the computation cost. The transferability of the new model to real systems is demonstrated on argon, with an improved vapor dome compared to the original CRIS model.
Quasi-equilibrium predictions of water desorption kinetics from rapidly-heated metal oxide surfaces
Journal of Physics Condensed Matter
Leung, Kevin L.; Criscenti, Louise J.
Controlling sub-microsecond desorption of water and other impurities from electrode surfaces at high heating rates is crucial for pulsed power applications. Despite the short time scales involved, quasi-equilibrium ideas based on transition state theory (TST) and Arrhenius temperature dependence have been widely applied to fit desorption activation free energies. In this work, we apply molecular dynamics (MD) simulations in conjunction with equilibrium potential-of-mean-force (PMF) techniques to directly compute the activation free energies (ΔG∗) associated with desorption of intact water molecules from Fe2O3 and Cr2O3 (0001) surfaces. The desorption free energy profiles are diffuse, without maxima, and have substantial dependences on temperature and surface water coverage. Incorporating the predicted ΔG∗ into an analytical form gives rate equations that are in reasonable agreement with non-equilibrium molecular dynamics desorption simulations. We also show that different ΔG∗ analytical functional forms which give similar predictions at a particular heating rate can yield desorption times that differ by up to a factor of four or more when the ramp rate is extrapolated by 8 orders of magnitude. This highlights the importance of constructing a physically-motivated ΔG∗ functional form to predict fast desorption kinetics.
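A minimal numerical sketch of the kind of rate equation discussed above: first-order desorption with an Arrhenius/TST-style rate during a linear temperature ramp. The constant activation free energy, prefactor, and ramp rates below are illustrative assumptions; the paper's ΔG∗ is temperature- and coverage-dependent.

```python
import numpy as np

def desorption_fraction(ramp_rate_K_per_s, dG_eV=0.6, prefactor_Hz=1e13,
                        T0=300.0, T_end=2000.0, n_steps=200000):
    """Integrate d(theta)/dt = -k(T) * theta over a linear temperature ramp."""
    kB = 8.617333262e-5  # eV/K
    t_end = (T_end - T0) / ramp_rate_K_per_s
    dt = t_end / n_steps
    theta = 1.0                                  # initial surface coverage
    for i in range(n_steps):
        T = T0 + ramp_rate_K_per_s * (i * dt)
        k = prefactor_Hz * np.exp(-dG_eV / (kB * T))
        theta *= np.exp(-k * dt)                 # exact step for a first-order process
        if theta < 1e-6:
            break
    return 1.0 - theta                           # fraction desorbed by T_end

for ramp in (1e6, 1e10, 1e14):                   # K/s, from slow to pulsed-power heating
    print(ramp, desorption_fraction(ramp))
```

Even this toy model shows the point made in the abstract: the predicted desorbed fraction at a given final temperature depends strongly on the ramp rate and on the assumed form of the activation free energy.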
California Native Species Field Survey Form - American Badger
Abstract not provided.
Fully transparent GaN homojunction tunnel junction-enabled cascaded blue LEDs
Applied Physics Letters
Jamal-Eddine, Zane; Hasan, Syed M.N.; Gunning, Brendan P.; Chandrasekar, Hareesh; Jung, Hyemin; Crawford, Mary H.; Armstrong, Andrew A.; Arafin, Shamsul; Rajan, Siddharth
A sidewall activation process was optimized for buried magnesium-doped p-GaN layers, yielding a significant reduction in tunnel junction-enabled light emitting diode (LED) forward voltage. This buried activation enabled the realization of cascaded blue LEDs with fully transparent GaN homojunction tunnel junctions. The initial optimization of buried p-GaN activation was performed on PN junctions grown by metal organic chemical vapor deposition (MOCVD) buried under hybrid tunnel junctions grown by MOCVD and molecular beam epitaxy. Next, the activation process was implemented in cascaded blue LEDs emitting at 450 nm, which were enabled by fully transparent GaN homojunction tunnel junctions. The tunnel junction-enabled multi-active region blue LEDs were grown monolithically by MOCVD. This work demonstrates a state-of-the-art tunnel junction-enabled cascaded LED utilizing homojunction tunnel junctions that do not contain any heterojunction interface.
Generation of reactive species in water film dielectric barrier discharges sustained in argon, helium, air, oxygen and nitrogen
Journal of Physics. D, Applied Physics
Mohades, Soheila; Lietz, Amanda M.; Kushner, Mark J.
Activation of liquids with atmospheric pressure plasmas is being investigated for environmental and biomedical applications. When activating the liquid using gas plasma produced species (as opposed to plasmas sustained in the liquid), a rate limiting step is transport of these species into the liquid. To first order, the efficiency of activating the liquid is improved by increasing the ratio of the surface area of the water in contact with the plasma compared to its volume, often called the surface-to-volume ratio (SVR). Maximizing the SVR then motivates the plasma treatment of thin films of liquids. In this paper, results are discussed from a computational investigation using a global model of atmospheric pressure plasma treatment of thin water films by a dielectric barrier discharge (DBD) sustained in different gases (Ar, He, air, N2, O2). The densities of reactive species in the plasma activated water (PAW) are evaluated. The residence time of the water in contact with the plasma is increased by recirculating the PAW in the plasma reactor. Longer lived species such as H2O2aq and NO3-aq accumulate over time (aq denotes an aqueous species). DBDs sustained in Ar and He are the most efficient at producing H2O2aq; DBDs sustained in Ar produce the largest density of NO3-aq with the lowest pH; and discharges sustained in O2 and air produce the highest densities of O3aq. Finally, comparisons to experiments by others show agreement in the trends in densities in PAW including O3aq, OHaq, H2O2aq and NO3-aq, and highlight the importance of controlling desolvation of species from the activated water.
Identifying errors in service transformer connections
IEEE Power and Energy Society General Meeting
Blakely, Logan; Reno, Matthew J.
Distribution system models play a critical role in the modern grid, driving distributed energy resource integration through hosting capacity analysis and providing insight into critical areas of interest such as grid resilience and stability. Thus, the ability to validate and improve existing distribution system models is also critical. This work presents a method for identifying service transformers which contain errors in specifying the customers connected to the low-voltage side of that transformer. Pairwise correlation coefficients of the smart meter voltage time series are used to detect when a customer is not in the transformer grouping that is specified in the model. The proposed method is demonstrated both on synthetic data as well as a real utility feeder, and it successfully identifies errors in the transformer labeling in both datasets.
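A toy sketch of the correlation-based screening described above (Python/pandas, synthetic data, hypothetical helper names); the utility-scale method presumably includes preprocessing, thresholds, and statistics beyond this simple comparison of within-group and cross-group correlations.

```python
import numpy as np
import pandas as pd

def flag_possible_mislabels(voltages: pd.DataFrame, labels: pd.Series):
    """Flag meters whose voltage profile correlates more strongly with another
    transformer group than with its own labeled group.

    voltages: DataFrame, one column per customer meter (voltage time series).
    labels:   Series mapping each customer column name to a transformer ID.
    """
    corr = voltages.corr()                      # pairwise Pearson correlations
    flagged = []
    for cust in voltages.columns:
        same = [c for c in voltages.columns if c != cust and labels[c] == labels[cust]]
        other = [c for c in voltages.columns if labels[c] != labels[cust]]
        if not same or not other:
            continue                            # cannot evaluate this meter
        if corr.loc[cust, other].max() > corr.loc[cust, same].mean():
            flagged.append(cust)
    return flagged

# Tiny synthetic example: meters A and B share transformer T1, C and D share T2,
# but C is mislabeled as T1 in the model.
rng = np.random.default_rng(0)
t1, t2 = rng.normal(240, 2, 500), rng.normal(240, 2, 500)
df = pd.DataFrame({"A": t1 + rng.normal(0, 0.2, 500),
                   "B": t1 + rng.normal(0, 0.2, 500),
                   "C": t2 + rng.normal(0, 0.2, 500),
                   "D": t2 + rng.normal(0, 0.2, 500)})
labels = pd.Series({"A": "T1", "B": "T1", "C": "T1", "D": "T2"})
print(flag_possible_mislabels(df, labels))      # expected to flag "C"
```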
Models and analysis of fuel switching generation impacts on power system resilience
IEEE Power and Energy Society General Meeting
Wilches-Bernal, Felipe; Knueven, Ben; Staid, Andrea S.; Watson, Jean-Paul W.
This paper presents model formulations for generators that have the ability to use multiple fuels and to switch between them if necessary. These models are used to generate different scenarios of fuel switching penetration from a test power system. With these scenarios, for a severe disruption in the fuel supply to multiple generators, the paper analyzes the effect that fuel switching has on the resilience of the power system. Load not served is used as the proxy metric to evaluate power system resilience. The paper shows that the presence of generators with fuel switching capabilities considerably reduces the amount and duration of the load shed by the system facing the fuel disruption.
Opportunities and Trends for Energy Storage plus Solar in CAISO: 2014-2018
IEEE Power and Energy Society General Meeting
Byrne, Raymond H.; Nguyen, Tu A.; Headley, Alexander H.; Wilches-Bernal, Felipe; Concepcion, Ricky J.; Trevizan, Rodrigo D.
The state of California is leading the nation with respect to solar energy and storage. The California Energy Commission has mandated that starting in 2020 all new homes must be solar powered. In 2010 the California state legislature adopted an energy storage mandate, AB 2514. This required California's three largest utilities to contract for an additional 1.3 GW of energy storage by 2020, coming online by 2024. Therefore, there is keen interest in the potential advantages of deploying solar combined with energy storage. This paper formulates the optimization problem to identify the maximum potential revenue from pairing storage with solar and participating in the California Independent System Operator (CAISO) day ahead market for energy. Using the optimization formulation, five years of historical market data (2014-2018) for 2,172 price nodes were analyzed to identify trends and opportunities for the deployment of solar plus storage.
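For illustration, a heavily simplified version of such a revenue-maximization problem is sketched below as a linear program (Python/cvxpy). The hourly prices, solar profile, device parameters, and the PV-only-charging assumption are hypothetical and are not taken from the paper's formulation.

```python
import numpy as np
import cvxpy as cp

# Hypothetical 24-hour day-ahead prices ($/MWh) and solar forecast (MW).
prices = np.array([22, 20, 19, 19, 21, 25, 35, 45, 40, 32, 28, 26,
                   25, 26, 30, 38, 55, 80, 70, 50, 40, 32, 27, 24], float)
solar = np.clip(4.0 * np.sin(np.pi * (np.arange(24) - 6) / 12), 0, None)

P, E, eta = 2.0, 8.0, 0.9          # MW power rating, MWh energy, one-way efficiency
ch = cp.Variable(24, nonneg=True)  # charging power (MW)
dis = cp.Variable(24, nonneg=True) # discharging power (MW)
soc = cp.Variable(25, nonneg=True) # state of charge (MWh)

cons = [soc[0] == 0, soc <= E, ch <= P, dis <= P, ch <= solar]  # charge from PV only
cons += [soc[t + 1] == soc[t] + eta * ch[t] - dis[t] / eta for t in range(24)]
revenue = prices @ (solar - ch + dis)      # 1-hour intervals, so MW == MWh
prob = cp.Problem(cp.Maximize(revenue), cons)
prob.solve()
print(f"day-ahead revenue: ${prob.value:,.0f}")
```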
A testbed for synthetic inertia control design using point-on-wave frequency estimates
IEEE Power and Energy Society General Meeting
Hill Balliet, W.; Wilches-Bernal, Felipe; Wold, Josh
Practical implementations of synthetic inertia (SI) control require fast and accurate frequency estimation from measurable data. While previous research has shown that SI can be effective at improving inertial and primary frequency regulation when accurate frequency measurements are available, these works typically do not include the technicalities of frequency estimation in their analysis. This paper presents a testbed that allows the user to examine such estimation methods and their effect on SI control performance. An SI control compensator is then designed and implemented for a test case. The compensator is shown to reduce many of the limitations imposed by point-on-wave based frequency estimation on the SI control action.
Evaluation of curtailment associated with PV system design considerations
IEEE Power and Energy Society General Meeting
Azzolini, Joseph A.; Reno, Matthew J.; Horowitz, Kelsey A.W.
Distributed photovoltaic (PV) systems equipped with advanced inverters can control real and reactive power output based on grid and atmospheric conditions. The Volt-Var control method allows inverters to regulate local grid voltages by producing or consuming reactive power. Based on their power ratings, the inverters may need to curtail real power to meet the reactive power requirements, which decreases their total energy production. To evaluate the expected curtailment associated with Volt-Var control, yearlong quasi-static time-series (QSTS) simulations were conducted on a realistic distribution feeder under a variety of PV system design considerations. Overall, this paper found that the amount of curtailed energy is low (< 0.55%) compared to the total PV energy production in a year but is affected by several PV system design considerations.
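A minimal sketch of how Volt-Var reactive power can force real-power curtailment under an inverter apparent-power limit (Python); the curve breakpoints and the reactive-power-priority convention are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def volt_var_q(v_pu, s_rated_kva):
    """Piecewise-linear Volt-Var curve (illustrative setpoints, not a standard):
    capacitive injection at low voltage, a deadband near 1.0 pu, and inductive
    absorption at high voltage."""
    v_pts = np.array([0.90, 0.95, 0.99, 1.01, 1.05, 1.10])
    q_pts = np.array([0.44, 0.44, 0.00, 0.00, -0.44, -0.44]) * s_rated_kva
    return np.interp(v_pu, v_pts, q_pts)

def curtailed_power(p_avail_kw, v_pu, s_rated_kva):
    """Real power after enforcing S^2 >= P^2 + Q^2 with reactive-power priority
    (a common, but not universal, convention)."""
    q = volt_var_q(v_pu, s_rated_kva)
    p_max = np.sqrt(np.maximum(s_rated_kva**2 - q**2, 0.0))
    p_out = np.minimum(p_avail_kw, p_max)
    return p_out, p_avail_kw - p_out        # delivered power and curtailment

# One sample: high PV availability during a high-voltage excursion.
p, lost = curtailed_power(p_avail_kw=10.0, v_pu=1.06, s_rated_kva=10.0)
print(f"output {p:.2f} kW, curtailed {lost:.2f} kW")
```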
Overall capacity assessment of distribution feeders with different electric vehicle adoptions
IEEE Power and Energy Society General Meeting
Jones, Christian B.; Lave, Matthew S.; Darbali-Zamora, Rachid
An overall capacity assessment and an analysis of the systems' X/R ratios for six actual distribution feeders were conducted to characterize the voltage response to various levels of distributed Electric Vehicle Supply Equipment (EVSE). The evaluation identified the capacity of the system at which a voltage violation occurred. This included a review of the uncontrolled and controlled cases to quantify the value of injecting reactive power as the grid voltage decreases. The evaluation found that the implementation of a Volt-Var curve with a global voltage reference provided a notable increase in capacity. A local reference voltage, measured at the point of common coupling, did not increase the capacity of every feeder in the experiment. The review of the X/R line properties using a Principal Component Analysis (PCA) identified groups within the six feeders that corresponded with each system's voltage response rate. This suggests the X/R ratios provide a direct prediction of the feeder's ability to avoid voltage violations while charging EVs.
Molecular Statics Analyses of Thermodynamics and Kinetics of Hydrogen Cottrell Atmosphere Formation Around Edge Dislocations in Aluminum
JOM
Zhou, Xiaowang Z.; Spataru, Dan C.; Chu, Kevin; Sills, Ryan B.
Aluminum alloys are being explored as lightweight structural materials for use in hydrogen-containing environments. To understand hydrogen effects on deformation, we perform molecular statics studies of the hydrogen Cottrell atmosphere around edge dislocations in aluminum. First, we calculate the hydrogen binding energies at all interstitial sites in a periodic aluminum crystal containing an edge dislocation dipole. This allows us to use the Boltzmann equation to quantify the hydrogen Cottrell atmosphere. Based on these binding energies, we then construct a continuum model to study the kinetics of the hydrogen Cottrell atmosphere formation. Finally, we compare our results with existing theories and discuss the effects of hydrogen on deformation of aluminum-based alloys.
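As a rough illustration of the Boltzmann-statistics step mentioned above, the sketch below converts per-site binding energies into equilibrium hydrogen occupancies (Python). The Fermi-Dirac-like form, the sign convention, and all numerical values are assumptions for illustration, not values from the paper.

```python
import numpy as np

def atmosphere_occupancy(binding_energies_eV, c_bulk, temperature_K):
    """Equilibrium hydrogen occupancy of interstitial sites near a dislocation.

    Uses the saturating form of the Boltzmann relation often applied to
    Cottrell atmospheres: c_i / (1 - c_i) = c0 / (1 - c0) * exp(E_b,i / kT),
    where E_b,i > 0 denotes a site that binds hydrogen more strongly than bulk.
    """
    kB = 8.617333262e-5  # eV/K
    w = (c_bulk / (1.0 - c_bulk)) * np.exp(np.asarray(binding_energies_eV)
                                           / (kB * temperature_K))
    return w / (1.0 + w)

# Hypothetical binding energies (eV) for sites near and far from the core.
E_b = np.array([0.25, 0.15, 0.05, 0.0, -0.02])
print(atmosphere_occupancy(E_b, c_bulk=1e-6, temperature_K=300.0))
```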
Predicting System Response at Unmeasured Locations
Experimental Techniques
Mayes, R.L.; Ankers, L.; Daborn, P.
Traditional techniques to derive dynamic specifications for components have a great deal of uncertainty. One of the major sources of uncertainty is that the number of response measurements in the operational system environment is insufficient to determine the component motion. This inadequacy is due to logistical limitations for data recording in field testing and space limitations for accelerometers, strain gages and associated wiring. Available measurements are often some distance from the component and therefore do not represent component motion. Typical straight-line envelopes of these unrepresentative measurements guarantee an increase in the uncertainty. In this paper, multiple methods are evaluated for expanding a sparse set of field test measurements on a system to responses of interest that cannot be measured in the field due to these limitations. Proof of concept is demonstrated on the Modal Analysis Test Vehicle (MATV). The responses of interest, known as “truth responses”, are measured in a system vibration environment along with an optimized sparse set of 30 field responses. Methods to expand the field responses to the truth responses are demonstrated by comparing the acceleration spectral density of the expanded response to the measured response. Two methods utilize a validated finite element model of the MATV. One is developed from purely experiment-based frequency response functions of a laboratory pre-test. These approaches are designed to drastically reduce the uncertainty of the component in-service motion as a basis for developing specifications that are guaranteed to be conservative with a known (instead of unknown) conservatism.
Data-consistent inversion for stochastic input-to-output maps
Inverse Problems
Wildey, Timothy M.; Butler, Troy; Yen, Tian Y.
Data-consistent inversion is a recently developed measure-theoretic framework for solving a stochastic inverse problem involving models of physical systems. The goal is to construct a probability measure on model inputs (i.e., parameters of interest) whose associated push-forward measure matches (i.e., is consistent with) a probability measure on the observable outputs of the model (i.e., quantities of interest). Previous implementations required the map from parameters of interest to quantities of interest to be deterministic. This work generalizes this framework for maps that are stochastic, i.e., contain uncertainties and variation not explainable by variations in uncertain parameters of interest. Generalizations of previous theorems of existence, uniqueness, and stability of the data-consistent solution are provided while new theoretical results address the stability of marginals on parameters of interest. A notable aspect of the algorithmic generalization is the ability to query the solution to generate independent identically distributed samples of the parameters of interest without requiring knowledge of the so-called stochastic parameters. This work therefore extends the applicability of the data-consistent inversion framework to a much wider class of problems. This includes those based on purely experimental and field data where only a subset of conditions are either controllable or can be documented between experiments while the underlying physics, measurement errors, and any additional covariates are either uncertain or not accounted for by the researcher. Numerical examples demonstrate application of this approach to systems with stochastic sources of uncertainties embedded within the modeling of a system and a numerical diagnostic is summarized that is useful for determining if a key assumption is verified among competing choices of stochastic maps.
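A toy sketch of the core density-ratio (rejection-sampling) update used in data-consistent inversion, here applied to a simple stochastic map; the map, distributions, and sample sizes are illustrative and are not the paper's algorithm verbatim.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm, uniform

# Toy stochastic map: Q(lam, xi) = lam + 0.1*xi, where xi is unmodeled noise.
rng = np.random.default_rng(0)
n = 20000
lam_prior = uniform(loc=-1, scale=2).rvs(n, random_state=rng)   # initial density
q_samples = lam_prior + 0.1 * rng.standard_normal(n)            # push-forward samples

pf = gaussian_kde(q_samples)            # predicted (push-forward) density on Q
obs = norm(loc=0.25, scale=0.15)        # observed density on Q

# Rejection sampling with the ratio r = observed(Q) / predicted(Q).
r = obs.pdf(q_samples) / pf(q_samples)
accept = rng.uniform(0, 1, n) < r / r.max()
lam_update = lam_prior[accept]

# Diagnostic: the mean of r should be close to 1 when the predictability
# assumption (observed density dominated by the push-forward) holds.
q_update = lam_update + 0.1 * rng.standard_normal(lam_update.size)
print("E[r] ~", r.mean(), " accepted:", lam_update.size)
print("updated push-forward mean/std:", q_update.mean(), q_update.std())
```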
Validation of finite-element models using full-field experimental data: Levelling finite-element analysis data through a digital image correlation engine
Strain
Lava, Pascal; Jones, Elizabeth M.; Wittevrongel, Lukas; Pierron, Fabrice
Full-field data from digital image correlation (DIC) provide rich information for finite-element analysis (FEA) validation. However, there are several inherent inconsistencies between FEA and DIC data that must be rectified before meaningful, quantitative comparisons can be made, including strain formulations, coordinate systems, data locations, strain calculation algorithms, spatial resolutions and data filtering. In this paper, we investigate two full-field validation approaches: (1) the direct interpolation approach, which addresses the first three inconsistencies by interpolating the quantity of interest from one mesh to the other, and (2) the proposed DIC-levelling approach, which addresses all six inconsistencies simultaneously by processing the FEA data through a stereo-DIC simulator to ‘level' the FEA data to the DIC data in a regularisation sense. Synthetic ‘experimental' DIC data were generated based on a reference FEA of an exemplar test specimen. The direct interpolation approach was applied, and significant strain errors were computed, even though there was no model form error, because the filtering effect of the DIC engine was neglected. In contrast, the levelling approach provided accurate validation results, with no strain error when no model form error was present. Next, model form error was purposefully introduced via a mismatch of boundary conditions. With the direct interpolation approach, the mismatch in boundary conditions was completely obfuscated, while with the levelling approach, it was clearly observed. Finally, the ‘experimental' DIC data were purposefully misaligned slightly from the FEA data. Both validation techniques suffered from the misalignment, thus motivating continued efforts to develop a robust alignment process. In summary, direct interpolation is insufficient, and the proposed levelling approach is required to ensure that the FEA and the DIC data have the same spatial resolution and data filtering. Only after the FEA data have been ‘levelled' to the DIC data can meaningful, quantitative error maps be computed.
Comparison of continuum and cross-core theories of dynamic strain aging
Journal of the Mechanics and Physics of Solids
Epperly, E.N.; Sills, Ryan B.
Dynamic strain aging (DSA) is the process of solute atoms segregating around dislocations on the timescale of loading. Continuum theories of DSA derived from elasticity theory have been shown to severely overpredict both the timescale and strengthening of DSA. Recently, cross-core theory was developed to reconcile this gap, invoking a special single-atomic-hop diffusion mechanism across the core of an extended dislocation. In this work, we show that the classical continuum theory expression for the rate of solute segregation is in error. After correcting this error, we show that continuum theory predictions match cross-core theory when the elevated diffusivity near the dislocation core is accounted for. Our findings indicate that continuum theory is still a useful tool for studying dislocation-solute interactions.
An active learning high-throughput microstructure calibration framework for solving inverse structure–process problems in materials informatics
Acta Materialia
Laros, James H.; Mitchell, John A.; Swiler, Laura P.; Wildey, Timothy M.
Determining a process–structure–property relationship is the holy grail of materials science, where both computational prediction in the forward direction and materials design in the inverse direction are essential. Problems in materials design are often considered in the context of process–property linkage by bypassing the materials structure, or in the context of structure–property linkage as in microstructure-sensitive design problems. However, there has been comparatively little research on materials design problems in the context of process–structure linkage, which has great implications for reverse engineering. In this work, given a target microstructure, we propose an active-learning, high-throughput microstructure calibration framework to derive a set of processing parameters that produces an optimal microstructure statistically equivalent to the target microstructure. The proposed framework is formulated as a noisy multi-objective optimization problem, where each objective function measures a deterministic or statistical difference of the same microstructure descriptor between a candidate microstructure and the target microstructure. Furthermore, to significantly reduce the physical wall-clock waiting time, we enable the high-throughput feature of the microstructure calibration framework by adopting an asynchronously parallel Bayesian optimization scheme that exploits high-performance computing resources. Case studies in additive manufacturing and grain growth are used to demonstrate the applicability of the proposed framework, where kinetic Monte Carlo (kMC) simulation is used as a forward predictive model, such that, for a given target microstructure, the target processing parameters that produced this microstructure are successfully recovered.
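A compact (synchronous) Bayesian-optimization loop conveys the calibration idea; the placeholder function standing in for the kMC forward model, the two processing parameters, and the scalarized descriptor discrepancy are assumptions for illustration, and the asynchronous parallel batching described above is omitted.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    # Placeholder forward model: processing parameters -> scalarized
    # discrepancy between candidate and target microstructure descriptors.
    def descriptor_discrepancy(x):
        return np.sum((np.atleast_2d(x) - 0.3) ** 2, axis=1)

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(10, 2))            # initial designs (2 process params)
    y = descriptor_discrepancy(X)

    for _ in range(20):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)

        # Expected improvement over a dense random candidate set.
        cand = rng.uniform(size=(2000, 2))
        mu, sigma = gp.predict(cand, return_std=True)
        best = y.min()
        z = (best - mu) / np.maximum(sigma, 1e-12)
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, descriptor_discrepancy(x_next))

    print("recovered processing parameters:", X[np.argmin(y)])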
An additive manufacturing design approach to achieving high strength and ductility in traditionally brittle alloys via laser powder bed fusion
Additive Manufacturing
Babuska, Tomas F.; Johnson, Kyle J.; Verdonik, Trevor; Subia, Samuel R.; Krick, Brandon A.; Susan, D.F.; Kustas, Andrew K.
Additive Manufacturing (AM) presents unprecedented opportunities to enable design freedom in parts that are unachievable via conventional manufacturing. However, AM-processed components generally lack the necessary performance metrics for widespread commercial adoption. We present a novel AM processing and design approach using removable heat-sink artifacts to tailor the mechanical properties of traditionally low-strength, low-ductility alloys. The design approach is demonstrated with the Fe-50 at.% Co alloy, a model material of interest for electromagnetic applications. AM-processed components exhibited unprecedented performance, with a 300% increase in strength and an order-of-magnitude improvement in ductility relative to conventional wrought material. These results are discussed in the context of product performance, production yield, and manufacturing implications toward enabling the design and processing of high-performance, next-generation components and alloys.
Tunable, room-temperature multiferroic Fe-BaTiO3 vertically aligned nanocomposites with perpendicular magnetic anisotropy
Materials Today Nano
Room-temperature ferromagnetic materials with perpendicular magnetic anisotropy are widely sought after for spintronics, magnetic data storage devices, and stochastic computing. To address this need, a new Fe-BaTiO3 vertically aligned nanocomposite (VAN) has been fabricated, combining the strong room-temperature ferromagnetic properties of Fe nanopillars with the strong room-temperature ferroelectric properties of the BaTiO3 matrix. Furthermore, the Fe-BaTiO3 VAN allows for highly anisotropic magnetic properties with tunable magnetization and coercivity. In addition, to demonstrate the multiferroic properties of the Fe-BaTiO3 system, the new metal-oxide hybrid material has been incorporated into a multilayer stack. This new multiferroic VAN system possesses great potential for magnetic anisotropy and property tuning and demonstrates a new material family of oxide-metal hybrid systems for room-temperature multiferroic material designs.
Optimal Investments to Improve Grid Resilience Considering Initial Transient Response and Long-Term Restoration
2020 International Conference on Probabilistic Methods Applied to Power Systems, PMAPS 2020 - Proceedings
Pierre, Brian J.; Arguello, Bryan A.; Garcia, Manuel J.
This paper presents a multi-time-period, two-stage stochastic mixed-integer linear optimization model that determines the optimal hardening investments to improve power system resilience to natural disaster threat scenarios. The input to the optimization model is a set of scenarios for specific natural disaster events based on historical data. The objective of the optimization model is to minimize the expected weighted load shed from the initial impact and the restoration process over all scenarios. The optimization model considers the initial impact of the severe event by using electromechanical transient dynamic simulations. The initial-impact weighted load shed is determined by the transient simulation, which allows for secondary transients from protection devices and cascading failures. The rest of the event, after the initial shock, is modeled in the optimization with a multi-time-period DC optimal power flow (DCOPF) that is initialized with the solution from the dynamic simulation. The first stage of the optimization model determines the optimal investments. The second stage, given the investments, determines the optimal unit commitment, generator dispatch, and transmission line switching during the multi-time-period restoration process to minimize the weighted load shed over all scenarios. Note that an investment will change the transient simulation result, and therefore change the initialization of the DCOPF restoration model. The investment optimization model encompasses both the initial impact (dynamic transient simulation results) and the restoration period (DCOPF) of the event, as components come back online. The model is tested on the IEEE RTS-96 system.
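The investment-versus-expected-load-shed structure of such a two-stage model can be illustrated with a toy mixed-integer program; the line set, scenario probabilities, load-shed data, costs, and budget below are made up, and the transient-simulation coupling and DCOPF physics described above are not represented.

    import pulp

    lines = ["L1", "L2", "L3"]
    scenarios = {"hurricane": 0.6, "ice_storm": 0.4}          # probabilities
    # Load (MW) shed in each scenario if the given line is not hardened.
    shed_if_unhardened = {("hurricane", "L1"): 40, ("hurricane", "L2"): 25,
                          ("hurricane", "L3"): 0,  ("ice_storm", "L1"): 10,
                          ("ice_storm", "L2"): 0,  ("ice_storm", "L3"): 30}
    harden_cost = {"L1": 5.0, "L2": 3.0, "L3": 4.0}
    budget = 8.0

    m = pulp.LpProblem("resilience_investment", pulp.LpMinimize)

    # First stage: binary hardening decisions.
    x = pulp.LpVariable.dicts("harden", lines, cat="Binary")
    # Second stage: load shed per scenario and line.
    keys = [(s, l) for s in scenarios for l in lines]
    shed = pulp.LpVariable.dicts("shed", keys, lowBound=0)

    # Objective: expected weighted load shed over all scenarios.
    m += pulp.lpSum(scenarios[s] * shed[(s, l)] for s, l in keys)

    # Hardening a line avoids its load shed in every scenario.
    for s, l in keys:
        m += shed[(s, l)] >= shed_if_unhardened[(s, l)] * (1 - x[l])

    # First-stage investment budget.
    m += pulp.lpSum(harden_cost[l] * x[l] for l in lines) <= budget

    m.solve(pulp.PULP_CBC_CMD(msg=0))
    print({l: int(x[l].value()) for l in lines})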
Phase identification using co-association matrix ensemble clustering
IET Smart Grid
Blakely, Logan; Reno, Matthew J.
Calibrating distribution system models to improve the accuracy of simulations such as hosting capacity analysis is increasingly important as more distributed energy resources are integrated into the grid. The recent availability of smart meter data is enabling the use of machine learning tools to automate model calibration tasks. This research focuses on applying machine learning to the phase identification task, using a co-association matrix-based, ensemble spectral clustering approach. The proposed method leverages voltage time series from smart meters and does not require existing or accurate phase labels. This work demonstrates the success of the proposed method on both synthetic and real data, surpassing the accuracy of other phase identification research.
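A minimal sketch of the co-association idea, building the matrix from repeated k-means runs on random time windows and then clustering it spectrally; the synthetic voltage data, window length, and number of runs are illustrative assumptions, not the settings of the method above.

    import numpy as np
    from sklearn.cluster import KMeans, SpectralClustering

    rng = np.random.default_rng(2)

    # Synthetic voltage time series: 90 meters x 1000 samples on three phases.
    true_phase = np.repeat([0, 1, 2], 30)
    base = rng.normal(size=(3, 1000)).cumsum(axis=1)
    volts = base[true_phase] + 0.3 * rng.normal(size=(90, 1000))

    n_meters, n_runs, window = volts.shape[0], 50, 100
    co_assoc = np.zeros((n_meters, n_meters))

    for _ in range(n_runs):
        start = rng.integers(0, volts.shape[1] - window)
        segment = volts[:, start:start + window]
        labels = KMeans(n_clusters=3, n_init=10).fit_predict(segment)
        # Count the pairs of meters that landed in the same cluster.
        co_assoc += (labels[:, None] == labels[None, :]).astype(float)

    co_assoc /= n_runs    # fraction of runs in which each pair co-clustered

    phase_labels = SpectralClustering(
        n_clusters=3, affinity="precomputed").fit_predict(co_assoc)
    print(phase_labels)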
Robustness of the tokamak error field correction tolerance scaling
Plasma Physics and Controlled Fusion
Logan, N.C.; Park, J.K.; Hu, Q.; Paz-Soldan, C.; Markovic, T.; Wang, H.H.; In, Y.; Piron, L.; Piovesan, P.; Myers, Clayton E.; Maraschek, M.; Wolfe, S.M.; Strait, E.J.; Munaretto, S.
This paper presents the subtleties of obtaining robust experimental scaling laws for the core resonant error field threshold that leads to field penetration, locked modes, and disruptions. Recent progress in attempts to project this threshold to new machines has focused on advances in the metric used to quantify the dangerous error fields, incorporating the ideal MHD plasma response in a metric referred to as the 'dominant mode overlap'. However, the scaling of this or any quantity with experimental parameters known to be important for the complicated tearing layer physics requires regressions performed on databases that, for historical reasons, unevenly sample the available parametric space. This paper presents the distribution of the existing international n = 1 database, describes biases in the available sampling, and details the sensitivity of ITER projections to simple least-squares regressions. Downsampling and a simple kernel-density-estimation (KDE) weighted regression are used here to demonstrate the difference in projections that acknowledging the machine sampling bias can make, resulting in more robust projections to parameters far from the 'usual' devices built thus far. Two multi-device, multi-parameter scalings of the error field threshold in Ohmic and powered plasmas are presented, projecting the threshold to ITER and investigating the impact of sampling biases.
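The density-weighted regression idea can be sketched as follows; the synthetic two-parameter 'database', the power-law form, and the choice of weights (the inverse of a Gaussian KDE over the sampled parameters) are illustrative assumptions rather than the international database or the exact procedure above.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(3)

    # Synthetic, unevenly sampled database: two log-parameters and a
    # threshold following a power law with scatter.
    log_p1 = rng.normal(0.0, 0.3, size=300)   # heavily sampled near 'usual' values
    log_p2 = rng.normal(0.5, 0.2, size=300)
    log_thr = 1.1 * log_p1 - 0.8 * log_p2 + rng.normal(0.0, 0.1, size=300)

    X = np.column_stack([np.ones_like(log_p1), log_p1, log_p2])

    # Ordinary least squares.
    beta_ols, *_ = np.linalg.lstsq(X, log_thr, rcond=None)

    # KDE-weighted least squares: down-weight densely sampled regions so the
    # regression is not dominated by the most common operating points.
    pts = np.vstack([log_p1, log_p2])
    weights = 1.0 / gaussian_kde(pts)(pts)
    sw = np.sqrt(weights)[:, None]
    beta_kde, *_ = np.linalg.lstsq(X * sw, log_thr * sw[:, 0], rcond=None)

    print("OLS exponents:", beta_ols[1:], "KDE-weighted exponents:", beta_kde[1:])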
Heterogeneous polymer dynamics explored using static 1H NMR spectra
International Journal of Molecular Sciences
Alam, Todd M.; Allers, Joshua P.; Jones, Brad H.
NMR spectroscopy continues to provide important molecular-level details of dynamics in different polymer materials, ranging from rubbers to highly crosslinked composites. It has been argued that thermoset polymers containing dynamic and chemical heterogeneities can be fully cured at temperatures well below the final glass transition temperature (Tg). In this paper, we describe the use of static solid-state 1H NMR spectroscopy to measure the activation of different chain dynamics as a function of temperature. Near Tg, increasing polymer segmental chain fluctuations lead to dynamic averaging of the local homonuclear proton-proton (1H-1H) dipolar couplings, as reflected in the reduction of the NMR line shape second moment (M2) when motions are faster than the magnitude of the dipolar coupling. In general, distributions of dynamic correlation times are commonly expected for polymer systems. To help identify the limitations and pitfalls of M2 analyses, the impact of activation-energy (or, equivalently, correlation-time) distributions on the analysis of the 1H NMR M2 temperature variation is explored. It is shown, using normalized reference curves, that distributions of dynamic activation energies can be measured from the M2 temperature behavior. An example of the M2 analysis for a series of thermosetting polymers with systematically varied dynamic heterogeneity is presented and discussed.
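A crude fast/slow two-state illustration shows how a distribution of activation energies broadens the M2 drop with temperature; the Arrhenius prefactor, the Gaussian form of the distribution, the residual second moment, and the criterion that a segment is motionally narrowed once its hop rate exceeds the rigid-lattice linewidth are all simplifying assumptions, not the normalized-reference-curve analysis above.

    import numpy as np

    R = 8.314e-3                      # gas constant, kJ/(mol K)
    tau0 = 1e-13                      # s, assumed Arrhenius prefactor
    M2_rigid, M2_res = 1.0, 0.05      # normalized rigid and residual second moments
    linewidth = 2 * np.pi * 30e3      # rad/s, assumed rigid-lattice dipolar linewidth

    Ea = np.linspace(10.0, 120.0, 600)            # activation energies, kJ/mol

    def gaussian(Ea, mean, sigma):
        g = np.exp(-0.5 * ((Ea - mean) / sigma) ** 2)
        return g / np.trapz(g, Ea)

    def M2_of_T(T, mean_Ea, sigma_Ea):
        g = gaussian(Ea, mean_Ea, sigma_Ea)
        tau = tau0 * np.exp(Ea / (R * T))         # Arrhenius correlation times
        fast = (1.0 / tau) > linewidth            # segments already narrowed
        f_fast = np.trapz(g * fast, Ea)
        return f_fast * M2_res + (1.0 - f_fast) * M2_rigid

    T = np.linspace(150.0, 450.0, 100)
    narrow = [M2_of_T(t, 60.0, 5.0) for t in T]   # narrow Ea distribution: sharp drop
    broad = [M2_of_T(t, 60.0, 20.0) for t in T]   # broad Ea distribution: gradual drop
    print(narrow[::20], broad[::20])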
Dynamic x-ray diffraction and nanosecond quantification of kinetics of formation of β -zirconium under shock compression
Physical Review B
Laros, James H.; Brown, Justin L.; Specht, Paul E.; Root, Seth R.; White, Melanie; Smith, Jesse S.
We report the atomic- and nanosecond-scale quantification of the kinetics of a shock-driven phase transition in Zr metal. We uniquely make use of a multiple shock-and-release loading pathway to shock Zr into the β phase and to create a quasi-steady pressure and temperature state shortly thereafter. Coupling shock loading with in situ, time-resolved synchrotron x-ray diffraction, we probe the structural transformation of Zr in the steady state. Our results provide a quantified expression of the kinetics of formation of the β-Zr phase under shock loading: transition incubation time, completion time, and crystallization rate.
Code-verification techniques for hypersonic reacting flows in thermochemical nonequilibrium
Journal of Computational Physics
Freno, Brian A.; Carnes, Brian C.; Weirs, Vincent G.
The study of hypersonic flows and their underlying aerothermochemical reactions is particularly important in the design and analysis of vehicles exiting and reentering Earth's atmosphere. Computational physics codes can be employed to simulate these phenomena; however, code verification is necessary to certify their credibility. To date, few approaches have been presented for verifying codes that simulate hypersonic flows, especially flows reacting in thermochemical nonequilibrium. In this work, we present our code-verification techniques for verifying the spatial accuracy and the thermochemical source term in hypersonic reacting flows in thermochemical nonequilibrium. Additionally, we demonstrate the effectiveness of these techniques on the Sandia Parallel Aerodynamics and Reentry Code (SPARC).
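Two standard code-verification ingredients can be sketched for a toy 1-D advection-reaction equation (a stand-in for the governing equations and the SPARC implementation above): a manufactured source term derived symbolically, and the observed order of accuracy from a grid-refinement study.

    import numpy as np
    import sympy as sp

    # Toy steady 1-D advection-reaction equation: u * dc/dx + k * c = S(x).
    x, u, k = sp.symbols("x u k", positive=True)
    c_manufactured = sp.sin(sp.pi * x) + 2               # chosen manufactured solution
    S = sp.simplify(u * sp.diff(c_manufactured, x) + k * c_manufactured)
    source = sp.lambdify((x, u, k), S, "numpy")          # source term fed to the solver
    exact = sp.lambdify(x, c_manufactured, "numpy")

    def solve(n, u_val=1.0, k_val=2.0):
        """First-order upwind discretization on n cells (illustrative solver)."""
        h = 1.0 / n
        xc = (np.arange(n) + 0.5) * h
        c = np.zeros(n)
        for i in range(n):
            upstream = exact(0.0) if i == 0 else c[i - 1]
            c[i] = (source(xc[i], u_val, k_val) + u_val * upstream / h) / (u_val / h + k_val)
        return np.max(np.abs(c - exact(xc)))             # discretization error, inf norm

    # Observed order of accuracy from successive refinements (expect about 1).
    errors = [solve(n) for n in (32, 64, 128, 256)]
    orders = [np.log(errors[i] / errors[i + 1]) / np.log(2.0) for i in range(3)]
    print(orders)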
Bayesian Analysis Techniques Part 2: Deep Learning in the Inference Loop
Laros, James H.; Knapp, Patrick K.; Gomez, Matthew R.; Harvey-Thompson, Adam J.; Schmit, Paul S.; Slutz, Stephen A.; Ampleford, David A.
Abstract not provided.
Vision for magnetized HED fundamental science experiments on Z
Abstract not provided.
Understanding the Warm Absorber Photoionized Plasma Experiment at Z
Mayes, D.C.; Mancini, R.C.; Swanson, K.J.; Bailey, James E.; Loisel, Guillaume P.
Abstract not provided.
Statistical data analysis for Z opacity data
Abstract not provided.
Cyber Deterrence and Resilience Strategic Initiative
Johnson, David J.; Mangerian, Nicholas B.
Abstract not provided.
Liquid-vapor coexistence & critical point of Mg2SiO4 from ab-initio simulations
Abstract not provided.
Photoionized plasma experiment for accretion-powered sources
Abstract not provided.
Breakout session: Test of time-dependent effects on opacity measurements
Abstract not provided.
Achieving Versatile Energy Efficiency With the WANDERER Biped Robot
Buerger, Stephen B.; Hobart, Clinton G.; Spencer, Steven; Kuehl, Michael K.; Mazumdar, Anirban; Quigley, Morgan; Smith, Jesper; Bertrand, Sylvain; Pratt, Jerry
Abstract not provided.
Introduction to Remote Sensing Detection
Abstract not provided.
Hypersonic Guidance and Control via Deep Reinforcement Learning Methods
Furfaro, Roberto
Abstract not provided.
Update on MagLIF preheat experiments
Harvey-Thompson, Adam J.; Geissel, Matthias G.; Weis, Matthew R.; Galloway, B.R.; Fein, Jeffrey R.; Awe, Thomas J.; Crabtree, Jerry A.; Ampleford, David A.; Bliss, David E.; Glinsky, Michael E.; Gomez, Matthew R.; Hanson, Joseph C.; Harding, Eric H.; Jennings, Christopher A.; Kimmel, Mark W.; Perea, L.; Peterson, Kyle J.; Porter, James D.; Rambo, Patrick K.; Robertson, Grafton K.; Ruiz, Daniel E.; Schwarz, Jens S.; Shores, Jonathon S.; Slutz, Stephen A.; Smith, Ian C.; York, Adam Y.; Paguio, R.R.; Smith, G.E.; Maudlin, M.; Pollock, B.
Abstract not provided.
Nonlinear Model Predictive Control for Hypersonic Vehicles
Park, Hyeongjun
Abstract not provided.
Agile Methodologies Redux
Willenbring, James M.; Heroux, Michael A.; Bernholdt, David
Abstract not provided.
New inflow boundary conditions for relativistic and Newtonian fluids
Roberds, Nicholas R.; Beckwith, Kristian B.; Bettencourt, Matthew T.
Abstract not provided.
Experimental Validation of Dense Plasma Transport Models using the Z-Machine
Abstract not provided.
Improvements in Mg Line Shape Model
Abstract not provided.
Breakout session: Radiative recombination continuum test
Abstract not provided.
PIC V&V Problems
Abstract not provided.
Accelerating phase-field based predictions via surrogate models trained by machine learning methods
Dingreville, Remi P.; Montes de Oca Zapiain, David M.; Stewart, James A.
Abstract not provided.
Quantum Monte Carlo for Ab-Initio Equation of States
Abstract not provided.