New Reserve Products to Improve Primary Frequency Response
Abstract not provided.
Abstract not provided.
Achieving efficient learning for AI systems was identified as a major challenge in the DOE's recently released AI for Science report. The human brain is capable of efficient, low-power learning, and it is likely that implementing brain-like principles will lead to more efficient AI systems. In this LDRD, I aim to contribute to this goal by creating a foundation for implementing and studying a brain phenomenon termed short-term plasticity (STP) in spiking artificial neural networks within Sandia. First, data collected by the Allen Institute for Brain Science (AIBS) were analyzed to see whether STP could be classified into distinct types. Although the data were inadequate at the time, AIBS has since updated its database and created models that could be utilized in the future. Second, I began creating a software package to assess the ability of a Boltzmann machine utilizing STP to sample from national security data.
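The short-term plasticity dynamics referred to above are often described with the Tsodyks-Markram model of synaptic facilitation and depression. The sketch below is a minimal, hypothetical illustration of that model; the function name, parameter values, and update ordering are assumptions made for illustration and are not taken from the LDRD software.

```python
import numpy as np

def tsodyks_markram(spike_times, U=0.2, tau_f=0.6, tau_d=0.3):
    """Relative synaptic efficacy u*R at each presynaptic spike.

    U      -- baseline release probability (facilitation increment)
    tau_f  -- facilitation recovery time constant (s)
    tau_d  -- depression (resource) recovery time constant (s)
    """
    u, R = U, 1.0              # utilization and available resources
    efficacies, last_t = [], None
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / tau_f)       # facilitation decays toward U
            R = 1.0 + (R - 1.0) * np.exp(-dt / tau_d)   # resources recover toward 1
        u = u + U * (1.0 - u)      # facilitation jump at the spike
        efficacies.append(u * R)   # effective synaptic strength for this spike
        R = R * (1.0 - u)          # resources consumed by transmitter release
        last_t = t
    return efficacies

# Example: efficacies along a regular 20 Hz spike train show the competing
# effects of facilitation and depression.
print(np.round(tsodyks_markram(np.arange(0.0, 0.5, 0.05)), 3))
```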
Abstract not provided.
Abstract not provided.
Hole spin qubits confined to lithographically defined lateral quantum dots in Ge/SiGe heterostructures show great promise. One reason for this is the intrinsic spin-orbit coupling that allows all-electric control of the qubit. That same feature can be exploited as a coupling mechanism to coherently link spin qubits to a photon field in a superconducting resonator, which could, in principle, be used as a quantum bus to distribute quantum information. The work reported here advances the knowledge and technology required for such a demonstration. We discuss the device fabrication and characterization of different quantum dot designs and the demonstration of single-hole occupation in multiple devices. Superconducting resonators fabricated by an outside vendor were found to have adequate performance, and a path toward flip-chip integration with quantum devices is discussed. The results of an optical study exploring aspects of using implanted Ga as quantum memory in a Ge system are presented.
Abstract not provided.
The purpose of this report is to document improvements in the simulation of commercial vacuum drying procedures at the Nuclear Energy Work Complex at Sandia National Laboratories. Validation of the extent of water removal in a dry spent nuclear fuel storage system based on drying procedures used at nuclear power plants is needed to close existing technical gaps. Operational conditions leading to incomplete drying may have potential impacts on the fuel, cladding, and other components in the system. A general lack of data suitable for model validation of commercial nuclear canister drying processes necessitates additional, well-designed investigations of drying process efficacy and water retention. Scaled tests that incorporate relevant physics and well-controlled boundary conditions are essential to provide insight and guidance to the simulation of prototypic systems undergoing drying processes.
Physical protection of public buildings has long been a concern of police and security services where a balance of facility security and personnel safety is vital. Due to the nature of public spaces, the use of permanently installed and deploy-on-demand physical barrier systems must be safe for the legitimate occupants and visitors of that space. Such systems must seek to mitigate the personal and organizational consequences of unintentionally seriously injuring or killing an innocent bystander by slamming a heavy, rigid, and quick-deploying barrier into place. Consideration and implementation of less-than-lethal technologies is necessary to reduce risk to visitors and building personnel. One potential barrier solution is a fast-acting, high-strength, composite airbag barrier system for doorways and hallways to quickly deploy a less-than-lethal barrier at entry points as well as isolate intruders who have already gained access. This system is envisioned to be stored within an architecturally attractive selectively frangible shell that could be permanently installed at a facility or installed in remote or temporary locations as dictated by risk. The system would be designed to be activated remotely (hardwired or wireless) from a Central Alarm Station (CAS) or other secure location.
Our primary aim in this work is to understand how to efficiently obtain reliable uncertainty quantification in automatic learning algorithms with limited training datasets. Standard approaches rely on cross-validation to tune hyperparameters. Unfortunately, when our datasets are too small, holdout datasets become unreliable (albeit unbiased) measures of prediction quality due to the lack of adequate sample size. We should not place confidence in holdout estimators under conditions wherein the sample variance is both large and unknown. More pointedly, our training experiments on limited data (Duersch and Catanach, 2021) show that even if we could improve estimator quality under these conditions, the typical training trajectory may never even encounter generalizable models.
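As a toy illustration of why small holdout sets are unreliable (this is not code from Duersch and Catanach, 2021; the model, sample sizes, and noise level are assumptions), the Monte Carlo sketch below repeatedly fits a linear model and scores it on a five-point holdout set: the holdout mean squared error is unbiased on average, but its spread is comparable to its mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def holdout_mse(n_train, n_holdout, noise=0.5):
    """Fit a line to noisy linear data and score it on a small holdout set."""
    x = rng.uniform(-1.0, 1.0, n_train + n_holdout)
    y = 2.0 * x + rng.normal(0.0, noise, x.size)
    coef = np.polyfit(x[:n_train], y[:n_train], deg=1)
    pred = np.polyval(coef, x[n_train:])
    return np.mean((pred - y[n_train:]) ** 2)

# Repeat the experiment many times: the holdout MSE is unbiased on average,
# but with only 5 holdout points its spread is comparable to its mean, so a
# single holdout score says little about the model's true quality.
scores = [holdout_mse(n_train=20, n_holdout=5) for _ in range(2000)]
print("mean MSE:", np.mean(scores), "std of the MSE estimate:", np.std(scores))
```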
Within the past half-decade, it has become overwhelmingly clear that suppressing the spread of deliberately false and misleading information is of the utmost importance for protecting democratic institutions. Disinformation has been found to come from both foreign and domestic actors, and the effects of either can be disastrous. From the simple encouragement of unwarranted distrust to conspiracy theories promoting violence, the results of disinformation have put the functionality of American democracy under direct threat. Present scientific challenges posed by this problem include detecting disinformation, quantifying its potential impact, and preventing its amplification. We present a model on which we can experiment with possible strategies toward the third challenge: the prevention of amplification. This is a social contagion network model, decomposed into layers to represent physical ("offline") interactions as well as virtual interactions on a social media platform. Along with the topological modifications to the standard contagion model, we use state-transition rules designed specifically for disinformation and distinguish between contagious and non-contagious infected nodes. We use this framework to explore the effect of grassroots social movements on the size of disinformation cascades by simulating these cascades in scenarios where a proportion of the agents remove themselves from the social platform. We also test the efficacy of strategies that could be implemented at the administrative level by the online platform to minimize such spread. These top-down strategies include banning agents who disseminate false information, or providing corrective information to individuals exposed to false information to decrease their probability of believing it. We find an abrupt transition to smaller cascades when a critical number of random agents are removed from the platform, as well as steady decreases in cascade size with increasingly convincing corrective information. Finally, we compare simulated cascades on this framework with real cascades of disinformation recorded on WhatsApp surrounding the 2019 Indian election. We find a set of hyperparameter values that produces a distribution of cascades matching the scaling exponent of the distribution of actual cascades recorded in the dataset. We conclude by outlining future directions for improving the performance of the framework and validation methods, as well as ways to extend the model to capture additional features of social contagion.
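A minimal sketch of the kind of experiment described above is given below. It uses a single-layer network, a simple independent-cascade belief rule, and removal of a random fraction of agents from the platform; the graph model, probabilities, and function names are illustrative assumptions, not the report's actual multilayer model or parameters.

```python
import random
import networkx as nx

def cascade_size(G, p_believe=0.3, removed_fraction=0.0, seed_node=0, rng=None):
    """Simulate one belief cascade and return the number of 'infected' nodes."""
    rng = rng or random.Random(1)
    n_removed = int(removed_fraction * G.number_of_nodes())
    removed = set(rng.sample(list(G.nodes), n_removed))   # agents who left the platform
    infected = {seed_node} - removed
    frontier = list(infected)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in infected and v not in removed and rng.random() < p_believe:
                    infected.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(infected)

# Sweep the fraction of agents removed and watch the mean cascade size drop.
G = nx.barabasi_albert_graph(2000, 3, seed=42)
for frac in (0.0, 0.2, 0.4, 0.6):
    sizes = [cascade_size(G, removed_fraction=frac, rng=random.Random(k)) for k in range(50)]
    print(f"removed {frac:.0%}: mean cascade size {sum(sizes) / len(sizes):.1f}")
```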
The properties of materials can change dramatically at the nanoscale, where new and useful properties can emerge. An example is the paramagnetism of iron oxide magnetic nanoparticles. Using magnetically sensitive nitrogen-vacancy (NV) centers in diamond, we developed a platform to study electron spin resonance of nanoscale materials. To implement the platform, diamond substrates were prepared with NV centers near the surface, and nanoparticles were placed on the surface using a drop-casting technique. Using optical and microwave pulsing techniques, we demonstrated T1 relaxometry and double electron-electron resonance techniques for measuring the local electron spin resonance. The diamond NV platform developed in this project provides a combination of good magnetic field sensitivity and high spatial resolution and will be used for future investigations of nanomaterials and quantum materials.
The Material Protection, Accounting, and Control Technologies program utilizes modeling and simulation to assess Material Control and Accountability (MC&A) concerns for a variety of nuclear facilities. Single-analyst tools allow for rapid design and evaluation of advanced approaches for new and existing nuclear facilities. A low enriched uranium (LEU) fuel conversion and fabrication facility simulator is developed to assist with MC&A for existing facilities. Measurements are added to the model, consistent with current best practices, along with material balance calculations and statistical tests. In addition, scoping work is performed for developing a single-stage aqueous reprocessing model. Preliminary results are presented and discussed, and next steps are outlined.
Nanomaterials
A series of nanopillar compression tests were performed on tungsten as a function of temperature using in situ transmission electron microscopy with localized laser heating. A surface oxide was observed to form on the pillars and grow in thickness with increasing temperature. Deformation between 850 °C and 1120 °C is facilitated by long-range diffusional transport from the tungsten pillar onto adjacent regions of the Y2O3-stabilized ZrO2 indenter. The constraint imposed by the surface oxide is hypothesized to underlie this mechanism of localized plasticity, which generally corresponds to the so-called whisker growth mechanism. The results are discussed in the context of the tungsten fuzz growth mechanism in He plasma-facing environments. The two processes exhibit similar morphological features, and the conditions under which fuzz evolves appear to satisfy the conditions necessary to induce whisker growth.
Abstract not provided.
Abstract not provided.
Most earth materials are anisotropic with regard to seismic wave speeds, especially materials such as shales or where oriented fractures are present. However, the base assumption for many numerical simulations is to treat earth materials as isotropic media. This is done for simplicity, because of the apparent weakness of anisotropy in the far field, and because of the lack of well-characterized anisotropic material properties for input into numerical simulations. One approach for addressing the higher complexity of actual geologic regions is to model the material as an orthorhombic medium. We have developed an explicit time-domain, finite-difference (FD) algorithm for simulating three-dimensional (3D) elastic wave propagation in a heterogeneous orthorhombic medium. The objective of this research is to investigate the errors and biases that result from modeling a non-isotropic medium as an isotropic medium. This is done by computing “observed data” from synthetic, anisotropic simulations that assume an orthorhombic, anisotropic earth model. Green’s functions for an assumed isotropic earth model are computed and then used in an inversion designed to estimate moment tensors from the “observed” data. One specific area of interest is how shear waves, which are introduced in an anisotropic model even for an isotropic explosion, affect the characterization of seismic sources when isotropic earth assumptions are made. This work is done in support of the modeling component of the Source Physics Experiment (SPE), a series of underground chemical explosions at the Nevada National Security Site (NNSS).
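At its core, the moment tensor estimation step described above is a linear least-squares problem d = Gm, where the columns of G are Green's-function responses for the independent moment tensor components. The sketch below illustrates that step with synthetic Green's functions and an explosion-like isotropic source; the matrix sizes and noise level are assumptions for illustration and do not represent the SPE data or codes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "Green's functions": each column is the stacked waveform produced
# at all receivers/time samples by a unit moment in one tensor component.
n_samples, n_components = 500, 6
G = rng.normal(size=(n_samples, n_components))

# True source: a purely isotropic (explosion-like) moment tensor.
m_true = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # Mxx, Myy, Mzz, Mxy, Mxz, Myz

# "Observed" data; additive noise stands in for the anisotropic/isotropic
# earth-model mismatch studied in the report.
d_obs = G @ m_true + 0.1 * rng.normal(size=n_samples)

# Least-squares moment tensor estimate using the assumed Green's functions.
m_est, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
print("estimated moment tensor components:", np.round(m_est, 3))
```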
Abstract not provided.
Abstract not provided.
This report summarizes the findings and outcomes of the LDRD-express project titled “Fluid models of charged species transport: numerical methods with mathematically guaranteed properties”. The primary motivation of this project was the computational and mathematical exploration of ideas aimed at improving the state of the art in numerical methods for one-fluid Euler-Poisson models and at gaining some understanding of the Euler-Maxwell model. Euler-Poisson and Euler-Maxwell, by themselves, are not the most technically relevant PDE plasma models. However, both of them are elementary building blocks of the PDE models used in actual technical applications and include most (if not all) of their mathematical difficulties. Outside the classical ideal MHD models, rigorous mathematical and numerical understanding of one-fluid models is still a quite undeveloped research area, and the treatment and understanding of boundary conditions is minimal (borderline nonexistent) at this point in time. This report focuses primarily on the bulk behavior of the Euler-Poisson model, touching on boundary conditions only tangentially.
Machine-learned models, specifically neural networks, are increasingly used as “closures” or “constitutive models” in engineering simulators to represent fine-scale physical phenomena that are too computationally expensive to resolve explicitly. However, these neural net models of unresolved physical phenomena tend to fail unpredictably and are therefore not used in mission-critical simulations. In this report, we describe new methods to authenticate them, i.e., to determine the (physical) information content of their training datasets, to qualify the scenarios where they may be used, and to verify that the neural net, as trained, adheres to physics theory. We demonstrate these methods with a neural net closure for turbulent phenomena used in the Reynolds-Averaged Navier-Stokes equations. We show the types of turbulent physics extant in our training datasets and, using a test flow of an impinging jet, identify the exact locations where the neural network would be extrapolating, i.e., where it would be used outside the feature space where it was trained. Using Generalized Linear Mixed Models, we also generate explanations of the neural net (à la Local Interpretable Model-agnostic Explanations) at prototypes placed in the training data and compare them with approximate analytical models from turbulence theory. Finally, we verify our findings by reproducing them using two different methods.
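One simple way to flag the extrapolation locations mentioned above is to test whether a query point's features fall outside the per-feature range (or, more strictly, the convex hull) of the training data. The sketch below implements the per-feature range check; it is an illustrative stand-in rather than the report's actual procedure, and the feature names and ranges are hypothetical.

```python
import numpy as np

def extrapolation_mask(X_train, X_query):
    """True where a query point lies outside the per-feature min/max range of
    the training data, i.e., where the closure would be extrapolating."""
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    return np.any((X_query < lo) | (X_query > hi), axis=1)

# Toy example with two hypothetical flow features (the names and ranges are
# illustrative only, not the report's actual feature space).
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(1000, 2))
X_query = np.array([[0.5, 0.5],    # inside the training range: interpolation
                    [1.3, 0.2]])   # outside the training range: extrapolation
print(extrapolation_mask(X_train, X_query))   # [False  True]
```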
We describe a novel pulsed magnetic gradiometer based on the optical interference of sidebands generated using two spatially separated alkali vapor cells. In contrast to traditional magnetic gradiometers, our approach provides a direct readout of the gradient field without the intermediate step of subtracting the outputs of two spatially separated magnetometers. Operation of the gradiometer in multiple field orientations is discussed. The noise floor is measured to be as low as $25\ \mathrm{fT}/(\mathrm{cm}\,\sqrt{\mathrm{Hz}})$ in a room without magnetic shielding.
Scientific applications run on high-performance computing (HPC) systems are critical for many national security missions within Sandia and the NNSA complex. However, these applications often face performance degradation and even failures that are challenging to diagnose. To provide unprecedented insight into these issues, the HPC Development, HPC Systems, Computational Science, and Plasma Theory & Simulation departments at Sandia crafted and completed their FY21 ASC Level 2 milestone entitled "Integrated System and Application Continuous Performance Monitoring and Analysis Capability." The milestone created a novel integrated HPC system and application monitoring and analysis capability by extending Sandia's Kokkos application portability framework, Lightweight Distributed Metric Service (LDMS) monitoring tool, and scalable storage, analysis, and visualization pipeline. The extensions to Kokkos and LDMS enable collection and storage of application data during run time, as it is generated, with negligible overhead. This data is combined with HPC system data within the extended analysis pipeline to present relevant visualizations of derived system and application metrics that can be viewed at run time or post run. This new capability was evaluated using several week-long, 290-node runs of Sandia's ElectroMagnetic Plasma In Realistic Environments (EMPIRE) modeling and design tool and resulted in 1 TB of application data and 50 TB of system data. EMPIRE developers remarked that this capability was incredibly helpful for quickly assessing application health and performance alongside system state. In short, this milestone work built the foundation for an expansive HPC system and application data collection, storage, analysis, visualization, and feedback framework that will increase the total scientific output of Sandia's HPC users.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Cybersecurity for industrial control systems is an important consideration that advanced reactor designers will need to address. How cyber risk is managed is the subject of ongoing research and debate in the nuclear industry. This report seeks to identify potential cyber risks for advanced reactors. Identified risks are divided into absorbed risk and licensee-managed risk to clearly show how cyber risks for advanced reactors can potentially be transferred. Absorbed risks are risks that originate external to the licensee but may unknowingly propagate into the plant. Insights include (1) the need for unification of safety, physical security, and cybersecurity risk assessment frameworks to ensure optimal coordination of risk, (2) a quantitative risk assessment methodology used in conjunction with qualitative assessments may be useful in efficiently and sufficiently managing cyber risks, and (3) cyber risk management techniques should align with a risk-informed regulatory framework for advanced reactors.
Computing in Science and Engineering
State-of-the-art engineering and science codes have grown dramatically in complexity over the last two decades. Application teams have adopted more sophisticated development strategies, leveraging third-party libraries, deploying comprehensive testing, and using advanced debugging and profiling tools. In today's environment of diverse hardware platforms, these applications also desire performance portability, avoiding the need to duplicate work for various platforms. The Kokkos EcoSystem provides that portable software stack. Based on the Kokkos Core Programming Model, the EcoSystem provides math libraries, interoperability capabilities with Python and Fortran, and Tools for analyzing, debugging, and optimizing applications. In this article, we overview the components, discuss some specific use cases, and highlight how co-designing these components enables a more developer-friendly experience.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Seven generation III+ and generation IV nuclear reactor types, based on twelve reactor concepts surveyed, are examined using functional decomposition to extract relevant operational technology (OT) architecture information. This information is compared to the OT architectures of existing nuclear power plants (NPPs) to highlight novel and emergent cyber risks associated with next-generation NPPs. These insights can help inform operational technology architecture requirements that will be unique to a given reactor type. Next-generation NPPs have streamlined OT architectures relative to the current generation II commercial NPP fleet. Overall, without compensatory measures that provide sufficient and efficient cybersecurity controls, next-generation NPPs will have increased cyber risk. Verification and validation of cyber-physical testbeds and cyber risk assessment methodologies may be an important next step to reduce cyber risk in the OT architecture design and testing phase. Coordination with safety requirements can result in OT architecture design becoming an iterative process.
Abstract not provided.
Abstract not provided.
The AXIOM-Unfold application is a computational code for performing spectral unfolds along with uncertainty quantification of the photon spectrum. While this code was principally designed for spectral unfolds on the Saturn source, it is also relevant to other radiation sources such as Pithon. This code is a component of the AXIOM project, which was undertaken in order to measure the time-resolved spectrum of the Saturn source; to support this, the AXIOM-Unfold code is able to process time-dependent dose measurements in order to obtain a time-resolved spectrum. This manual contains a full description of the algorithms used by the code, and the code features are fully documented along with several worked examples.
The DOE-NE NWM Cloud was designed to be a generic set of tools and applications for any nuclear waste management program. As policymakers continue to consider approaches that emphasize consolidated interim storage and transportation of spent nuclear fuel, a gap analysis comparing the tools and applications provided for spent nuclear fuel and high-level radioactive waste disposal with those needed for siting, licensing, and developing a consolidated interim storage facility and/or for a transportation campaign will help prepare DOE for implementing such potential policy direction. This report evaluates the points of alignment and potential gaps between the applications on the NWM Cloud that supported the SNF disposal project and the applications needed to address QA requirements and other project support needs of an SNF storage project.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Journal of Verification, Validation and Uncertainty Quantification
There is a dearth of literature on how to capture the uncertainty generated by material surface evolution in thermal modeling. This leads to inadequate or highly variable uncertainty representations for material properties, specifically emissivity, when minimal information is available. Inaccurate understandings of prediction uncertainties may lead decision makers to incorrect conclusions, so best engineering practices should be developed for this domain. In order to mitigate the aforementioned issues, this study explores different strategies to better capture the thermal uncertainty response of engineered systems exposed to fire environments via defensible emissivity uncertainty characterizations that can be easily adapted to a variety of use cases. Two unique formulations (one physics-informed and one mathematically based) are presented. The formulations and methodologies presented herein are not exhaustive but rather a starting point, giving the reader a basis for how to customize their uncertainty definitions for differing fire scenarios and materials. Finally, the impact of using this approach versus other commonly used strategies, and the usefulness of adding rigor to material surface evolution uncertainty, is demonstrated.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Applied Physics Express
Ultra-low voltage drop tunnel junctions (TJs) were utilized to enable multi-active-region blue light-emitting diodes (LEDs) with up to three active regions in a single device. The multi-active-region blue LEDs were grown monolithically by metal-organic chemical vapor deposition (MOCVD) without growth interruption. This is the first demonstration of an MOCVD-grown triple-junction LED. Optimized TJ design enabled near-ideal scaling of voltage and external quantum efficiency (EQE) with the number of junctions. This work demonstrates that with proper TJ design, improvements in wall-plug efficiency at high output power operation are possible by cascading multiple III-nitride based LEDs.
Downscaling of silicon metal-oxide-semiconductor field-effect transistor technology is expected to reach a fundamental limit soon, and a paradigm shift in computing is occurring. Spin field-effect transistors are considered a candidate architecture for next-generation microelectronics. Being able to leverage the existing infrastructure for silicon, a spin field-effect transistor technology based on group IV heterostructures would have unparalleled technical and economic advantages. For the same material-platform reason, germanium hole quantum dots are also considered a competitive architecture for semiconductor-based quantum technology. In this project, we investigated several approaches to creating hole devices in germanium-based materials as well as injecting hole spins into such structures. We also explored the role of hole injection in wet chemical etching of germanium. Our main results include the demonstration of germanium metal-oxide-semiconductor field-effect transistors operated at cryogenic temperatures, ohmic current-voltage characteristics in germanium/silicon-germanium heterostructures with ferromagnetic contacts at deep cryogenic temperatures and high magnetic fields, evaluation of the effects of surface preparation on carrier mobility in germanium/silicon-germanium heterostructures, and hole spin polarization through integrated permanent magnets. These results serve as essential components for fabricating next-generation germanium-based devices for microelectronics and quantum systems.
Abstract not provided.
Although many software teams across the laboratories comply with yearly software quality engineering (SQE) assessments, the practice of introducing quality into each phase of the software lifecycle, or into the team's processes, may vary substantially. Even with the support of a quality engineer, many teams struggle to adapt and right-size software engineering best practices in quality to fit their context, and these activities are not framed in a way that motivates teams to take action. In short, software quality is often a “check the box for compliance” activity instead of a cultural practice that both values software quality and knows how to achieve it. In this report, we present the results of our 6600 VISTA Innovation Tournament project, "Incentivizing and Motivating High Confidence and Research Software Teams to Adopt the Practice of Quality." We present our findings and a roadmap for future work based on (1) a rapid review of relevant literature, (2) lessons learned from an internal design thinking workshop, and (3) an external Collegeville 2021 workshop. These activities provided an opportunity for team ideation and community engagement and feedback. Based on our findings, we believe a coordinated effort (e.g., a strategic communication campaign) aimed at diffusing the innovation of the practice of quality across Sandia National Laboratories could, over time, effect meaningful organizational change. As such, our roadmap addresses strategies for motivating and incentivizing individuals ranging from early career to seasoned software developers and scientists.
This document describes the Power and Energy Storage Systems Toolbox for MATLAB, abbreviated as PSTess. This computing package is a fork of the Power Systems Toolbox (PST). PST was originally developed at Rensselaer Polytechnic Institute (RPI) and later upgraded by Dr. Graham Rogers at Cherry Tree Scientific Software. While PSTess shares a common lineage with PST Version 3.0, it is a substantially different application. This document supplements the main PST manual by describing the features and models that are unique to PSTess. As the name implies, the main distinguishing characteristic of PSTess is its ability to model inverter-based energy storage systems (ESS). The model that enables this is called ess.m, and it serves the dual role of representing ESS operational constraints and the generator/converter interface. As in the WECC REGC_A model, the generator/converter interface is modeled as a controllable current source with the ability to modulate both real and reactive current. The model ess.m permits four-quadrant modulation, which allows it to represent a wide variety of inverter-based resources beyond energy storage when paired with an appropriate supplemental control model. Examples include utility-scale photovoltaic (PV) power plants, Type 4 wind plants, and static synchronous compensators (STATCOM). This capability is especially useful for modeling hybrid plants that combine energy storage with renewable resources or FACTS devices.
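The generator/converter interface described above, a controllable current source that modulates real and reactive current, amounts to computing an injected current phasor from the power commands and the terminal voltage. The Python snippet below is a minimal, hypothetical illustration of that relationship only; it is not the ess.m model and omits current limits, converter dynamics, and the supplemental controls.

```python
import numpy as np

def injected_current(p_cmd, q_cmd, v_term):
    """Current phasor (pu) for a four-quadrant current-source interface.

    p_cmd, q_cmd -- commanded real and reactive power (pu, generator convention)
    v_term       -- complex terminal voltage phasor (pu)
    Uses S = V * conj(I), so I = conj(S / V).
    """
    s_cmd = complex(p_cmd, q_cmd)
    return np.conj(s_cmd / v_term)

# Example: inject 0.8 pu real power while absorbing 0.2 pu reactive power
# at a terminal voltage of 1.02 pu at 5 degrees.
v = 1.02 * np.exp(1j * np.deg2rad(5.0))
i = injected_current(0.8, -0.2, v)
print(abs(i), np.rad2deg(np.angle(i)))
```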
Abstract not provided.
Abstract not provided.
Abstract not provided.
This document describes the Cybersecurity Research, Development and Demonstration (RD&D) Program, established by the Department of Energy Office of Nuclear Energy (NE) to provide science-based methods and technologies necessary for cost-effective, cyber-secure digital instrumentation, control, and communication, in collaboration with nuclear energy stakeholders. It provides an overview of program goals, objectives, linkages to organizational strategies, management structure, and stakeholder and cross-program interfaces.
New Journal of Physics
We present a simple and powerful technique for finding a good error model for a quantum processor. The technique iteratively tests a nested sequence of models against data obtained from the processor, and keeps track of the best-fit model and its wildcard error (a metric of the amount of unmodeled error) at each step. Each best-fit model, along with a quantification of its unmodeled error, constitutes a characterization of the processor. We explain how quantum processor models can be compared with experimental data and to each other. We demonstrate the technique by using it to characterize a simulated noisy two-qubit processor.
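The iterative testing of a nested sequence of models can be sketched generically with a likelihood-ratio acceptance rule, as below. This is only a loose analogue under stated assumptions: the wildcard error metric itself is not reproduced here, and the model list, acceptance threshold, and toy data are all hypothetical.

```python
import numpy as np
from scipy import stats

def select_nested_model(data, models, alpha=0.05):
    """Walk a nested model sequence (increasing parameter count) and keep the
    last model that improves the fit significantly over its predecessor.

    models -- list of (name, n_params, log_likelihood_fn) tuples, ordered from
              most constrained to least constrained.
    """
    best, prev_ll, prev_k = None, None, None
    for name, k, loglike_fn in models:
        ll = loglike_fn(data)
        if prev_ll is None:
            accept = True
        else:
            lam = 2.0 * (ll - prev_ll)                    # likelihood-ratio statistic
            accept = stats.chi2.sf(lam, df=k - prev_k) < alpha
        if accept:
            best = (name, k, ll)
        prev_ll, prev_k = ll, k
    return best

# Toy usage: Gaussian data tested against fixed-mean vs. free-mean models.
rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, 200)
models = [
    ("fixed mean 0", 1, lambda d: np.sum(stats.norm.logpdf(d, 0.0, np.sqrt(np.mean(d**2))))),
    ("free mean",    2, lambda d: np.sum(stats.norm.logpdf(d, d.mean(), d.std()))),
]
print(select_nested_model(x, models))
```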
Parallel Computing
Graph partitioning has been an important tool for dividing work among processors to minimize communication cost and balance the workload. As accelerator-based supercomputers emerge as the standard, the use of graph partitioning becomes even more important because applications are rapidly moving to these architectures. However, there is no distributed-memory-parallel, multi-GPU graph partitioner available for applications. We developed a spectral graph partitioner, Sphynx, using the portable, accelerator-friendly stack of the Trilinos framework. In Sphynx, we allow the use of different preconditioners and exploit their unique advantages. We use Sphynx to systematically evaluate the various algorithmic choices in spectral partitioning with a focus on GPU performance. We perform those evaluations on two distinct classes of graphs: regular (such as meshes and matrices from finite element methods) and irregular (such as social networks and web graphs), and show that different settings and preconditioners are needed for these graph classes. The experimental results on the Summit supercomputer show that Sphynx is the fastest alternative on irregular graphs in an application-friendly setting and obtains a partitioning quality close to ParMETIS on regular graphs. When compared to nvGRAPH on a single GPU, Sphynx is faster and obtains better balance and better quality partitions. Sphynx provides a good and robust partitioning method across a wide range of graphs for applications looking for a GPU-based partitioner.
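The core of spectral partitioning can be sketched in a few lines: form the graph Laplacian, compute the eigenvector associated with its second-smallest eigenvalue (the Fiedler vector), and split the vertices around its median. The serial, CPU-only sketch below uses a dense eigensolver for clarity; it does not reflect Sphynx's Trilinos-based, preconditioned, multi-GPU implementation, and the toy graph is an assumption for illustration.

```python
import numpy as np
import scipy.sparse as sp

def spectral_bisection(A):
    """Split a graph into two parts using the Fiedler vector.

    A -- symmetric sparse adjacency matrix (scipy.sparse), no self loops.
    Returns an array of 0/1 part labels, one per vertex.  A production
    partitioner uses a preconditioned iterative eigensolver here instead
    of the dense factorization used for this small demo.
    """
    degrees = np.asarray(A.sum(axis=1)).ravel()
    L = sp.diags(degrees) - A                   # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L.toarray())    # eigenpairs in ascending order
    fiedler = vecs[:, 1]                        # eigenvector of 2nd-smallest eigenvalue
    # Splitting at the median balances the two parts.
    return (fiedler > np.median(fiedler)).astype(int)

# Toy usage: a 100-vertex ring splits into two (nearly) contiguous halves.
n = 100
rows = np.arange(n)
A = sp.coo_matrix((np.ones(n), (rows, (rows + 1) % n)), shape=(n, n))
A = (A + A.T).tocsr()
print(spectral_bisection(A))
```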
Abstract not provided.
Journal of Peridynamics and Nonlocal Modeling
The propagation of a wave pulse due to low-speed impact on a one-dimensional, heterogeneous bar is studied. Due to the dispersive character of the medium, the pulse attenuates as it propagates. This attenuation is studied over propagation distances that are much longer than the size of the microstructure. A homogenized peridynamic material model can be calibrated to reproduce the attenuation and spreading of the wave. The calibration consists of matching the dispersion curve for the heterogeneous material near the limit of long wavelengths. It is demonstrated that the peridynamic method reproduces the attenuation of wave pulses predicted by an exact microstructural model over large propagation distances.
Abstract not provided.
Abstract not provided.
Abstract not provided.
IEEE Transactions on Nuclear Science
High-performance radiation detection materials are an integral part of national security, medical imaging, and nuclear physics applications. Those that offer compositional and manufacturing versatility are of particular interest. Here, we report a new family of radiological particle-discriminating scintillators containing bis(9,9-dimethyl-9H-fluoren-2-yl)diphenylsilane (compound 'P2') and in situ polymerized vinyltoluene (PVT) that is phase stable and mechanically robust at any blend ratio. The gamma-ray light yield increases nearly linearly across the composition range, to 16 400 photons/MeV at 75 wt.% P2. These materials are also capable of performing γ/n pulse shape discrimination (PSD), and between 20% and 50% P2 loading their PSD quality is competitive with that of commercially available plastic scintillators. The 137Cs scintillation rise and decay times are sensitive to P2 loading and approach the values for 'pure' P2. Additionally, the radiation detection performance of P2-PVT blends can be made stable in 60 °C air for at least 1.5 months with the application of a thin film of poly(vinyl alcohol) to the scintillator surfaces.
Abstract not provided.
Journal of Computational Physics
In this paper we analyze the noise in macro-particle methods used in plasma physics and fluid dynamics, leading to approaches for minimizing the total error, focusing on electrostatic models in one dimension. We begin by describing kernel density estimation for continuous values of the spatial variable x, expressing the kernel in a form in which its shape and width are represented separately. The covariance matrix C(x,y) of the noise in the density is computed, first for uniform true density. The bandwidth of the covariance matrix is related to the width of the kernel. A feature that stands out is the presence of constant negative terms in the elements of the covariance matrix both on and off-diagonal. These negative correlations are related to the fact that the total number of particles is fixed at each time step; they also lead to the property ∫C(x,y)dy=0. We investigate the effect of these negative correlations on the electric field computed by Gauss's law, finding that the noise in the electric field is related to a process called the Ornstein-Uhlenbeck bridge, leading to a covariance matrix of the electric field with variance significantly reduced relative to that of a Brownian process. For non-constant density, ρ(x), still with continuous x, we analyze the total error in the density estimation and discuss it in terms of bias-variance optimization (BVO). For some characteristic length l, determined by the density and its second derivative, and kernel width h, having too few particles within h leads to too much variance; for h that is large relative to l, there is too much smoothing of the density. The optimum between these two limits is found by BVO. For kernels of the same width, it is shown that this optimum (minimum) is weakly sensitive to the kernel shape. We repeat the analysis for x discretized on a grid. In this case the charge deposition rule is determined by a particle shape. An important property to be respected in the discrete system is the exact preservation of total charge on the grid; this property is necessary to ensure that the electric field is equal at both ends, consistent with periodic boundary conditions. We find that if the particle shapes satisfy a partition of unity property, the particle charge deposited on the grid is conserved exactly. Further, if the particle shape is expressed as the convolution of a kernel with another kernel that satisfies the partition of unity, then the particle shape obeys the partition of unity. This property holds for kernels of arbitrary width, including widths that are not integer multiples of the grid spacing. We show results relaxing the approximations used to do BVO optimization analytically, by doing numerical computations of the total error as a function of the kernel width, on a grid in x. The comparison between numerical and analytical results shows good agreement over a range of particle shapes. We discuss the practical implications of our results, including the criteria for design and implementation of computationally efficient particle shapes that take advantage of the developed theory.
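The partition-of-unity property discussed above can be stated concretely: for a particle shape S and grid spacing dx, the weights deposited on the grid by a particle at x_p, namely S(x_j - x_p) dx summed over grid points j, must equal 1 for every x_p. The sketch below checks this numerically for the linear (hat) shape whose half-width equals the grid spacing; it is an illustrative check with assumed names and values, not the paper's code.

```python
import numpy as np

def hat_shape(r, width):
    """Triangular (linear B-spline) particle shape with unit integral."""
    return np.maximum(0.0, 1.0 - np.abs(r) / width) / width

def deposited_weights(x_p, grid, width):
    """Fraction of a particle's charge deposited on each grid point."""
    dx = grid[1] - grid[0]
    return hat_shape(grid - x_p, width) * dx

# A hat shape whose half-width equals the grid spacing satisfies the
# partition of unity, so the deposited weights sum to 1 for any particle
# position and the total charge on the grid is conserved exactly.
grid = np.arange(0.0, 10.0, 0.1)
for x_p in (3.00, 3.037, 3.05):
    w = deposited_weights(x_p, grid, width=0.1)
    print(x_p, w.sum())   # ~1.0 in every case
```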
Abstract not provided.
Abstract not provided.
This SAND report fulfills the completion requirements for the ASC Physics and Engineering Modeling Level 2 Milestone 7836 during Fiscal Year 2021. The Sandia Simplified potential energy clock (SPEC) nonlinear viscoelastic constitutive model was developed to predict a wide range of polymer glass physical behaviors in order to provide a tool to assess the effects of stress on these materials over their lifecycle. Polymer glasses are used extensively in applications such as electronics packaging, where encapsulants and adhesives can be critical to device performance. In this work, the focus is on assessing the performance of the model in predicting material evolution associated with long-term physical aging, an area in which the model has not been fully vetted. These predictions are key to utilizing models to help demonstrate electronics packaging component reliability over decades-long service lives, a task that is very costly and time consuming to execute experimentally. The initiating hypothesis for the work was that a model calibration process can be defined that enables confidence in physical aging predictions under ND-relevant environments and timescales without sacrificing other predictive capabilities. To test the hypothesis, an extensive suite of calibration and aging data was assembled from a combination of prior work and collaborating projects (Aging and Lifetimes as well as the DoD Joint Munitions Program) for two mission-relevant epoxy encapsulants, 828DGEBA/DEA and 828DGEBA/T403. Multiple model calibration processes were developed and evaluated against the entire set of data for each material. A qualitative assessment of each calibration's ability to predict the wide range of aging responses was key to ranking the calibrations against each other. During this evaluation, predictions that were identified as non-physical, i.e., that demonstrated something qualitatively different from known material behavior, were heavily weighted against the calibration performance. Thus, unphysical predictions for one aspect of aging response could generate a lower overall rating for a calibration process even if that process generated better quantitative predictions for another aspect of aging response. This assurance that all predictions are qualitatively correct is important to the overall aim of utilizing the model to predict residual stress evolution, which will depend on the interplay among the different material aging responses. The DSC-focused calibration procedure generated the best all-around aging predictions for both materials, demonstrating material models that can qualitatively predict the whole host of different physical aging responses that have been measured. This step forward in predictive capability comes from an unanticipated source: utilization of calorimetry measurements to specify model parameters. The DSC-focused calibration technique performed better than compression-focused techniques that more heavily weigh measurements more closely related to the structural responses to be predicted. Indeed, the DSC-focused calibration procedure was only possible due to the recent incorporation of the enthalpy and heat capacity features into SPEC, which was newly verified during this L2 milestone. Fundamentally similar aspects of the two material model calibrations, as well as parametric studies to assess sensitivities of the aging predictions, are discussed within the report. A perspective on the next steps toward the overall goal of residual stress evolution predictions under stockpile conditions closes the report.
Abstract not provided.
Defects in materials are an ongoing challenge for quantum bits, so-called qubits. Solid-state qubits, both spins in semiconductors and superconducting qubits, suffer from losses and noise caused by two-level-system (TLS) defects thought to reside on surfaces and in amorphous materials. Understanding and reducing the number of such defects is an ongoing challenge for the field. Superconducting resonators couple to TLS defects and provide a handle that can be used to better understand them. We develop noise measurements of superconducting resonators at temperatures that are very low (20 mK) compared to the resonant frequency, and at low powers, down to single-photon occupation.
In this LDRD project, we developed a versatile capability for high-resolution measurements of electron scattering processes in gas-phase molecules, such as ionization, dissociation, and electron attachment/detachment. This apparatus is designed to advance fundamental understanding of these processes and to inform predictions of plasmas associated with applications such as plasma-assisted combustion, neutron generation, re-entry vehicles, and arcing that are critical to national security. We use innovative coupling of electron-generation and electron-imaging techniques that leverages Sandia’s expertise in ion/electron imaging methods. Velocity map imaging provides a measure of the kinetic energies of electrons or ion products from electron scattering in an atomic or molecular beam. We designed, constructed, and tested the apparatus. Tests include dissociative electron attachment to O2 and SO2, as well as a new method for studying laser-initiated plasmas. This capability sets the stage for new studies in dynamics of electron scattering processes, including scattering from excited-state atoms and molecules.
Abstract not provided.
Abstract not provided.
An active source experiment using an accelerated weight drop was conducted in Rock Valley, Nevada National Security Site, during the spring of 2021 in order to characterize the shallow seismic structure of the region. P-wave first arrival travel times picked from this experiment were used to construct a preliminary 3-D compressional wave speed model over an area that is roughly 4 km wide east-west and 8 km north-south to a depth of about 500-600 m below the surface, but with primary data concentration along the transects of the experimental lines. The preliminary model shows good correlation with basic geology and surface features, but geological interpretation is not the focus of this report. We describe the methods used in the tomographic inversion of the data and show results from this preliminary P-wave model.
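In its simplest linearized form, first-arrival travel-time tomography of the kind summarized above solves t = Ls, where t holds picked travel times, s holds cell slownesses, and L holds ray path lengths through each cell. The sketch below solves a damped least-squares version of that system with straight rays and synthetic data; the model sizes, noise level, and regularization are assumptions, and nothing here reflects the actual inversion code or the Rock Valley dataset.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: 5 cells with slowness s (s/m); velocity increases with depth.
n_cells = 5
s_true = 1.0 / np.linspace(800.0, 2500.0, n_cells)

# Each "ray" samples the cells with known path lengths (rows of L, in meters).
n_rays = 40
L = rng.uniform(0.0, 100.0, size=(n_rays, n_cells))

# Synthetic picked first-arrival travel times with 2 ms of pick noise.
t_obs = L @ s_true + rng.normal(0.0, 0.002, n_rays)

# Damped least squares: minimize ||L s - t||^2 + eps * ||s||^2.
eps = 1e-6
s_est = np.linalg.solve(L.T @ L + eps * np.eye(n_cells), L.T @ t_obs)
print("true velocities (m/s):     ", np.round(1.0 / s_true))
print("estimated velocities (m/s):", np.round(1.0 / s_est))
```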
Abstract not provided.
Seismic source modeling allows researchers both to simulate how a source that induces seismic waves interacts with the Earth to produce observed seismograms and, inversely, to infer the time histories, sizes, and force distributions of a seismic source given observed seismograms. In this report, we discuss improvements made in FY21 to our software as it applies to both the forward and inverse seismic source modeling problems. For the forward portion of the problem, we have added the ability to use full 3-D nonlinear simulations by implementing 3-D time-varying boundary conditions within Sandia’s linear seismic code Parelasti. Second, on the inverse source modeling side, we have developed software that allows us to invert seismic gradiometer-derived observations in conjunction with standard translational-motion seismic data to infer properties of the source that may improve characterization in certain circumstances. We first describe the basic theory behind each software enhancement and then demonstrate the software in action with some simple examples.
This project aimed to identify the performance-limiting mechanisms in mid- to far-infrared (IR) sensors by probing photogenerated free carrier dynamics in model detector materials using scanning ultrafast electron microscopy (SUEM). SUEM is a recently developed method based on using ultrafast electron pulses in combination with optical excitations in a pump-probe configuration to examine charge dynamics with high spatial and temporal resolution and without the need for microfabrication. Five material systems were examined using SUEM in this project: polycrystalline lead zirconium titanate (a pyroelectric), polycrystalline vanadium dioxide (a bolometric material), GaAs (near IR), InAs (mid IR), and the Si/SiO2 system as a prototypical system for interface charge dynamics. The report provides detailed results for the Si/SiO2 and lead zirconium titanate systems.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
As the seismic monitoring community advances toward detecting, identifying, and locating ever-smaller natural and anthropogenic events, the need is constantly increasing for higher resolution, higher fidelity data, models, and methods for accurately characterizing events. Local-distance seismic data provide robust constraints on event locations, but also introduce complexity due to the significant geologic heterogeneity of the Earth’s crust and upper mantle, and the relative sparsity of data that often occurs with small events recorded on regional seismic networks. Identifying the critical characteristics for improving local-scale event locations and the factors that impact location accuracy and reliability is an ongoing challenge for the seismic community. Using Utah as a test case, we examine three data sets of varying duration, finesse, and magnitude to investigate the effects of local earth structure and modeling parameters on local-distance event location precision and accuracy. We observe that the most critical elements controlling relocation precision are azimuthal coverage and local-scale velocity structure, with tradeoffs based on event depth, type, location, and range.
Abstract not provided.
We present our research findings on the novel NDN protocol. In this work, we defined key attack scenarios for possible exploitation and detailed software security testing procedures to evaluate the security of the NDN software. This work was done in the context of distributed energy resources (DER). The software security testing included execution of unit tests and static code analyses to better understand the software rigor and the security that has been implemented. The results from the penetration testing are presented, and recommendations are discussed to provide additional defense for secure end-to-end NDN communications.
Wellbore integrity is a significant problem in the U.S. and worldwide, which has serious adverse environmental and energy security consequences. Wells are constructed with a cement barrier designed to last about 50 years. Indirect measurements and models are commonly used to identify wellbore damage and leakage, often producing subjective and even erroneous results. The research presented herein focuses on new technologies to improve monitoring and detection of wellbore failures (leaks) by developing a multi-step machine learning approach to localize two types of thermal defects within a wellbore model, a prototype mechatronic system for automatically drilling small diameter holes of arbitrary depth to monitor the integrity of oil and gas wells in situ, and benchtop testing and analyses to support the development of an autonomous real-time diagnostic tool to enable sensor emplacement for monitoring wellbore integrity. Each technology was supported by experimental results. This research has provided tools to aid in the detection of wellbore leaks and significantly enhanced our understanding of the interaction between small-hole drilling and wellbore materials.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
U.S. critical infrastructure assets are often designed to operate for decades, and yet long-term planning practices have historically ignored climate change. With the current pace of changing operational conditions and severe weather hazards, research is needed to improve our ability to translate complex, uncertain risk assessment data into actionable inputs that improve decision-making for infrastructure planning. Decisions made today need to explicitly account for climate change: the chronic stressors, the evolution of severe weather events, and the wide-ranging uncertainties. If done well, decision making with climate in mind will result in increased resilience and decreased impacts to our lives, economies, and national security. We present a three-tier approach to create the research products needed in this space, bringing together climate projection data, severe weather event modeling, asset-level impacts, and context-specific decision constraints and requirements. At each step, it is crucial to capture uncertainties and to communicate those uncertainties to decision-makers. While many components of the necessary research are mature (e.g., climate projection data), there has been little effort to develop proven tools for long-term planning in this space. The combination of chronic and acute stressors, spatial and temporal uncertainties, and interdependencies among infrastructure sectors coalesces into a complex decision space. By applying known methods from decision science and data analysis, we can work to demonstrate the value of an interdisciplinary approach to climate-hazard decision making for long-term infrastructure planning.
Abstract not provided.
Abstract not provided.
Corrosion
The localized corrosion of laser surface melted (LSM) 316L stainless steel is investigated by a combination of potentiodynamic anodic polarization in 0.1 M HCl and microscopic investigation of the initiation and propagation of localized corrosion. The pitting potential of LSM 316L is significantly lower than the pitting potential of wrought 316L. The LSM microstructure is highly banded as a consequence of the high laser power density and high linear energy density. The bands are composed of zones of changing modes of solidification, cycling between very narrow regions of primary austenite solidification and very wide regions of primary ferrite solidification. Pits initiate in the outer edge of each band where the mode of solidification is primary austenite plane front solidification and primary austenite cellular solidification. The primary austenite regions have low chromium concentration (and possibly low molybdenum concentration), which explains their susceptibility to pitting corrosion. The ferrite is enriched in chromium, which explains the absence of pitting in the primary ferrite regions. The presence of the low chromium regions of primary austenite solidification explains the lower pitting resistance of LSM 316L relative to wrought 316L. The influence of banding on localized corrosion is applicable to other rapidly solidified processes such as additive manufacturing.