Zhang, Xiang; Wang, Fei; Yan, Xueliang; Li, Xing Z.; Hattar, Khalid M.; Cui, Bai
A nanostructured oxide-dispersion-strengthened (ODS) CoCrFeMnNi high-entropy alloy (HEA) is synthesized by a powder metallurgy process. The thermal stability, including the grain size and crystal structure of the HEA matrix and oxide dispersions, is carefully investigated by X-ray diffraction (XRD) and electron microscopy characterization after annealing at 900 °C. The limited grain growth may be attributed to Zener pinning by yttria dispersions, which impede grain boundary mobility and diffusivity. The high hardness is caused by both the fine grain size and the yttria dispersions, both of which are retained after annealing at 900 °C. Herein, it is suggested that the combination of the ODS and HEA concepts may provide a new design strategy for the development of thermally stable nanostructured alloys for extreme environments.
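The Zener pinning mechanism invoked above can be made concrete with the classical limiting-grain-size estimate; the dispersoid radius and volume fraction below are illustrative values, not measurements from this study.

```python
# Classical Zener estimate of the limiting grain size pinned by dispersoids:
#   d_lim = (4/3) * r / f
# where r is the mean dispersoid radius and f the dispersoid volume fraction.
# The numbers below are hypothetical, chosen only to show the scaling.

def zener_limit(radius_nm: float, volume_fraction: float) -> float:
    """Limiting grain diameter (nm) from the classical Zener pinning relation."""
    return (4.0 / 3.0) * radius_nm / volume_fraction

# e.g. 10 nm oxide particles at 1 vol% pin grains near ~1.3 um
d = zener_limit(10.0, 0.01)
```

Finer dispersoids or higher volume fractions lower the limiting grain size, consistent with the retained nanostructure reported after annealing.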
A six-month research effort has advanced the hybrid kinetic-fluid modeling capability required for developing non-thermal warm x-ray sources on Z. The three particle treatments (quasi-neutral, multi-fluid, and kinetic) are demonstrated in 1D simulations of an Ar gas puff. The simulations determine the resolutions required by the advanced implicit solution techniques and debug the hybrid particle treatments with equation-of-state and radiation transport. The kinetic treatment is used in a preliminary analysis of the non-Maxwellian nature of a gas target. It also demonstrates the sensitivity to the cyclotron and collision frequencies in determining the transition from thermal to non-thermal particle populations. Finally, a 2D Ar gas puff simulation of a Z shot demonstrates the readiness to proceed with realistic target configurations. The results put us on a very firm footing to proceed to a full LDRD, which includes continued development of transition criteria and x-ray yield calculations.
This work focuses on estimation of unknown states and parameters in a discrete-time, stochastic SEIR model using reported case counts and mortality data. An SEIR model classifies individuals with respect to their status in the progression of the disease: S is the number of individuals who remain susceptible to the disease, E is the number of individuals who have been exposed to the disease but are not yet infectious, I is the number of individuals who are currently infectious, and R is the number of recovered individuals. For convenience, we include in our notation the number of infections or transmissions, T, which represents the number of individuals transitioning from compartment S to compartment E over a particular interval. Similarly, we use C to represent the number of reported cases.
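As a rough illustration of such a model (a generic sketch, not the authors' estimation scheme), one step of a discrete-time stochastic SEIR model can be written with binomial transitions between compartments; all rates and population sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def seir_step(S, E, I, R, beta, sigma, gamma, N, rng):
    """One step of a discrete-time stochastic SEIR model.
    T (new transmissions S -> E) is drawn binomially from the per-susceptible
    infection probability; E -> I and I -> R transitions are drawn similarly.
    The rates beta, sigma, gamma are illustrative, not fitted values."""
    p_inf = 1.0 - np.exp(-beta * I / N)     # prob. a susceptible is infected
    T = rng.binomial(S, p_inf)              # transmissions this interval
    new_I = rng.binomial(E, 1.0 - np.exp(-sigma))
    new_R = rng.binomial(I, 1.0 - np.exp(-gamma))
    return S - T, E + T - new_I, I + new_I - new_R, R + new_R, T

S, E, I, R = 990, 0, 10, 0
for _ in range(20):
    S, E, I, R, T = seir_step(S, E, I, R, 0.3, 0.2, 0.1, 1000, rng)
```

Because each draw is bounded by its source compartment, the total population is conserved exactly at every step.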
Evstatiev, E.G.; Finn, J.M.; Shadwick, B.A.; Hengartner, N.
In this paper we analyze the noise in macro-particle methods used in plasma physics and fluid dynamics, leading to approaches for minimizing the total error, focusing on electrostatic models in one dimension. We begin by describing kernel density estimation for continuous values of the spatial variable x, expressing the kernel in a form in which its shape and width are represented separately. The covariance matrix C(x,y) of the noise in the density is computed, first for a uniform true density. The bandwidth of the covariance matrix is related to the width of the kernel. A feature that stands out is the presence of constant negative terms in the elements of the covariance matrix, both on and off the diagonal. These negative correlations are related to the fact that the total number of particles is fixed at each time step; they also lead to the property ∫C(x,y)dy=0. We investigate the effect of these negative correlations on the electric field computed by Gauss's law, finding that the noise in the electric field is related to a process called the Ornstein-Uhlenbeck bridge, leading to a covariance matrix of the electric field whose variance is significantly reduced relative to that of a Brownian process. For a non-constant density ρ(x), still with continuous x, we analyze the total error in the density estimation and discuss it in terms of bias-variance optimization (BVO). For some characteristic length l, determined by the density and its second derivative, and kernel width h, having too few particles within h leads to too much variance; for h that is large relative to l, there is too much smoothing of the density. The optimum between these two limits is found by BVO. For kernels of the same width, it is shown that this optimum (minimum) is weakly sensitive to the kernel shape. We repeat the analysis for x discretized on a grid, in which case the charge deposition rule is determined by a particle shape.
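The continuous-x kernel density estimate, with shape and width represented separately as K_h(x) = shape(x/h)/h, can be sketched as follows; the Gaussian shape and the uniform test density are illustrative choices, not the paper's specific kernels.

```python
import numpy as np

def kde(x_eval, particles, h,
        shape=lambda u: np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)):
    """Kernel density estimate with the kernel's shape and width separated:
    K_h(x) = shape(x/h)/h. The Gaussian shape is one illustrative choice;
    any normalized shape function can be substituted."""
    u = (x_eval[:, None] - particles[None, :]) / h
    return shape(u).sum(axis=1) / (h * len(particles))

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, 4000)      # samples from a uniform true density
xs = np.linspace(0.2, 0.8, 50)         # evaluate away from the edges
rho = kde(xs, pts, h=0.05)             # fluctuates around the true value 1
```

Shrinking h reduces smoothing bias but raises the variance from having few particles within h, which is exactly the trade-off the bias-variance optimization resolves.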
An important property to be respected in the discrete system is the exact preservation of total charge on the grid; this property is necessary to ensure that the electric field is equal at both ends, consistent with periodic boundary conditions. We find that if the particle shapes satisfy a partition of unity property, the particle charge deposited on the grid is conserved exactly. Further, if the particle shape is expressed as the convolution of a kernel with another kernel that satisfies the partition of unity, then the particle shape obeys the partition of unity. This property holds for kernels of arbitrary width, including widths that are not integer multiples of the grid spacing. We show results relaxing the approximations used to perform the BVO analytically, by computing numerically the total error as a function of the kernel width, on a grid in x. The comparison between numerical and analytical results shows good agreement over a range of particle shapes. We discuss the practical implications of our results, including criteria for the design and implementation of computationally efficient particle shapes that take advantage of the developed theory.
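A minimal sketch of the partition-of-unity property, assuming the common linear (cloud-in-cell) particle shape rather than any specific shape from the paper: the two grid weights for each particle always sum to one, so the charge deposited on a periodic grid is conserved exactly.

```python
import numpy as np

def deposit_cic(positions, charge, n_cells, length):
    """Deposit particle charge on a periodic grid with the linear
    (cloud-in-cell) shape. That shape satisfies the partition of unity:
    for every particle, (1 - w) + w = 1, so the total charge on the grid
    equals the total particle charge exactly."""
    dx = length / n_cells
    rho = np.zeros(n_cells)
    for x in positions:
        s = x / dx
        i = int(np.floor(s))
        w = s - i                          # fractional distance to left node
        rho[i % n_cells] += charge * (1.0 - w)
        rho[(i + 1) % n_cells] += charge * w
    return rho

rng = np.random.default_rng(2)
xs = rng.uniform(0.0, 1.0, 1000)
rho = deposit_cic(xs, charge=1.0, n_cells=32, length=1.0)
```

The conservation holds for arbitrary particle positions, including positions that are not aligned with the grid, mirroring the arbitrary-width statement above.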
Additively manufactured (AM) stainless steels (SSs) exhibit numerous microstructural differences compared to their wrought counterparts, such as Cr-enriched dislocation cell structures. The influence these unique features have on an SS's corrosion resistance is still under investigation, with most current work limited to laboratory experiments. The work herein presents the first documented study of AM 304L and 316L exposed to a severe marine environment on the eastern coast of Florida, with comparisons made to wrought counterparts. Coupons were exposed for 21 months, and significant pitting corrosion initiated after 1 month of exposure for all conditions. At all times, the AM coupons exhibited lower average and maximum pit depths than their wrought counterparts. After 21 months, pits were on average 4 μm deep for the AM 316L specimens and 8 μm deep for the wrought specimens. Pits on the wrought samples tended to be nearly hemispherical and polished, with some pits showing crystallographic attack, while pits on AM coupons exhibited preferential attack at melt pool boundaries and the cellular microstructure.
Trujillo, Natasha; Rose-Coss, Dylan; Heath, Jason; Dewers, Thomas D.; Ampomah, William; Mozley, Peter S.; Cather, Martha
Leakage pathways through caprock lithologies for underground storage of CO2 and/or enhanced oil recovery (EOR) include intrusion into nano-pore mudstones, flow within fractures and faults, and larger-scale sedimentary heterogeneity (e.g., stacked channel deposits). To assess the multiscale sealing integrity of the caprock system that overlies the Morrow B sandstone reservoir, Farnsworth Unit (FWU), Texas, USA, we combine pore-to-core observations, laboratory testing, well logging results, and noble gas analysis. A cluster analysis of gamma ray, compressional slowness, and other logs, combined with caliper responses and triaxial rock mechanics testing, was used to define eleven lithologic classes across the upper Morrow shale and Thirteen Finger limestone caprock units, with estimates of dynamic elastic moduli and fracture breakdown pressures (minimum horizontal stress gradients) for each class. Mercury porosimetry determinations of CO2 column heights in sealing formations yield values exceeding reservoir height. Noble gas profiles provide a “geologic time-integrated” assessment of fluid flow across the reservoir-caprock system, with Morrow B reservoir measurements consistent with decades-long EOR water-flooding, and upper Morrow shale and lower Thirteen Finger limestone values consistent with long-term geohydrologic isolation. Together, these data suggest excellent sealing capacity for the FWU and provide limits on injection pressure increases accompanying carbon storage activities.
Aria is a Galerkin finite element based program for solving coupled-physics problems described by systems of PDEs and is capable of solving nonlinear, implicit, transient and direct-to-steady-state problems in two and three dimensions on parallel architectures. The suite of physics currently supported by Aria includes thermal energy transport, species transport, and electrostatics, as well as generalized scalar, vector and tensor transport equations. Additionally, Aria includes support for manufacturing process flows via the incompressible Navier-Stokes equations specialized to a low Reynolds number (Re < 1) regime. Enhanced modeling of manufacturing processes is made possible through use of either arbitrary Lagrangian-Eulerian (ALE) or level-set-based free and moving boundary tracking, in conjunction with quasi-static nonlinear elastic solid mechanics for mesh control. Coupled-physics problems are solved in several ways, including a fully-coupled Newton's method with analytic or numerical sensitivities, fully-coupled Newton-Krylov methods, and a loosely-coupled nonlinear iteration about subsets of the system that are solved using combinations of the aforementioned methods. Error estimation, uniform and dynamic h-adaptivity, and dynamic load balancing are some of Aria's more advanced capabilities.
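As a sketch of one of the coupling strategies named above, a fully-coupled Newton iteration with numerical (finite-difference) sensitivities might look like the following; the two-equation residual is a toy stand-in, not an Aria physics set.

```python
import numpy as np

def newton_coupled(residual, u0, tol=1e-10, max_iter=20, eps=1e-7):
    """Fully-coupled Newton iteration: the Jacobian of the coupled residual
    is built by finite-difference (numerical) sensitivities, and all unknowns
    are updated simultaneously from one linear solve per iteration."""
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((len(u), len(u)))
        for j in range(len(u)):
            du = np.zeros_like(u)
            du[j] = eps
            J[:, j] = (residual(u + du) - r) / eps   # forward difference
        u -= np.linalg.solve(J, r)
    return u

# Toy coupled residual standing in for two interacting physics fields.
f = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
u = newton_coupled(f, [1.0, 1.0])   # converges to (1, 2)
```

Loosely-coupled alternatives would instead iterate Newton solves over subsets of the unknowns, trading the cost of the full Jacobian for more outer iterations.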
With the rapid proliferation of additive manufacturing and 3D printing technologies, architected cellular solids including truss-like 3D lattice topologies offer the opportunity to program the effective material response through topological design at the mesoscale. The present report summarizes several of the key findings from a 3-year Laboratory Directed Research and Development program. The program set out to explore novel lattice topologies that can be designed to control, redirect, or dissipate energy from one or multiple insult environments relevant to Sandia missions, including crush, shock/impact, vibration, thermal, etc. In the first 4 sections, we document four novel lattice topologies stemming from this study: coulombic lattices, multi-morphology lattices, interpenetrating lattices, and pore-modified gyroid cellular solids, each with unique properties that had not been achieved by existing cellular/lattice metamaterials. The fifth section explores how unintentional lattice imperfections stemming from the manufacturing process, primarily surface roughness in the case of laser powder bed fusion, cause stochastic response, but also how in some cases, such as the elastic response, the stochastic behavior is homogenized through the adoption of lattices. In the sixth section we explore a novel neural network screening process that allows such stochastic variability to be predicted. In the last three sections, we explore considerations in the computational design of lattices. Specifically, in section 7 we use a novel generative optimization scheme to design novel Pareto-optimal lattices for multi-objective environments. In section 8, we use computational design to optimize a metallic lattice structure to absorb impact energy for a 1000 ft/s impact. And in section 9, we develop a modified micromorphic continuum model to solve wave propagation problems in lattices efficiently.
We present a new experimental methodology for detailed experimental investigations of depolymerization reactions over solid catalysts. This project aims to address a critical need in fundamental research on chemical upcycling of polymers – the lack of rapid, sensitive, isomer-selective probing techniques for the detection of reaction intermediates and products. Our method combines a heterogeneous catalysis reactor for the study of multiphase (gas/polymer melt/solid) systems, coupled to a vacuum UV photoionization time-of-flight mass spectrometer. This apparatus draws on our expertise in probing complex gas-phase chemistry and enables high-throughput, detailed chemical speciation measurements of the gas phase above the catalyst, providing valuable information on the heterogeneous catalytic reactions. Using this approach, we investigated the depolymerization of high-density polyethylene (HDPE) over Ir-doped zeolite catalysts. We showed that the product distribution was dominated by low-molecular-weight alkenes with terminal C=C double bonds and revealed the presence of many methyl-substituted alkenes and alkanes, suggesting extensive methyl radical chemistry. In addition, we investigated the fundamental reactivity of model oligomer molecules n-butane and isobutane over ZSM-5 zeolites. We demonstrated the first direct detection of methyl radical intermediates, confirming the key role of methyl in zeolite-catalyzed activation of alkanes. Our results show the potential of this experimental method to achieve deep insight into the complex depolymerization reactions and pave the way for detailed mechanistic studies, leading to increased fundamental understanding of key processes in chemical upcycling of polymers.
This report summarizes the findings and outcomes of the LDRD-express project titled “Fluid models of charged species transport: numerical methods with mathematically guaranteed properties”. The primary motivation of this project was the computational/mathematical exploration of ideas advanced to improve the state of the art in numerical methods for one-fluid Euler-Poisson models and to gain some understanding of the Euler-Maxwell model. Euler-Poisson and Euler-Maxwell are not, by themselves, the most technically relevant PDE plasma models. However, both are elementary building blocks of PDE models used in actual technical applications and include most (if not all) of their mathematical difficulties. Outside the classical ideal MHD models, rigorous mathematical and numerical understanding of one-fluid models is still a quite undeveloped research area, and the treatment/understanding of boundary conditions is minimal (borderline non-existent) at this point in time. This report focuses primarily on the bulk behavior of the Euler-Poisson model, touching on boundary conditions only tangentially.
Identification and characterization of underground events from surface or remote data requires a thorough understanding of the rock material properties. However, material properties usually come from borehole data, which are expensive and not always available. A potential alternative is to use topographic characteristics to approximate the strength, but this has never before been done quantitatively. Here we present the results from the first steps toward this goal. We have found that there are strong correlations between compressive and tensile strengths and slopes, but these correlations vary depending on the details of the data analysis. Rugosity may be better correlated to strength than slope values. More comprehensive analyses are needed to fully understand the best method of predicting strength from topography for this area. We also found that misalignment of multiple GIS datasets can have a large influence on the ability to make interpretations. Lastly, these results will require further study under a variety of climatic conditions before being applicable to other sites.
This report summarizes work completed under the Laboratory Directed Research and Development (LDRD) project "Uncertainty Quantification of Geophysical Inversion Using Stochastic Differential Equations." Geophysical inversions often require computationally expensive algorithms to find even one solution, let alone to propagate uncertainties through to the solution domain. The primary purpose of this project was to find more computationally efficient means of approximating solution uncertainty in geophysical inversions. We found multiple computationally efficient methods of propagating Earth model uncertainty into uncertainties in solutions of full waveform seismic moment tensor inversions. However, the optimum method of approximating the uncertainty in these seismic source solutions was to use the Karhunen-Loève theorem with data misfit residuals. This method was orders of magnitude more computationally efficient than traditional Monte Carlo methods and yielded estimates of uncertainty that closely approximated those of Monte Carlo. We summarize the various methods we evaluated for estimating uncertainty in seismic source inversions, as well as work toward this goal in the realm of 3-D seismic tomographic inversion uncertainty.
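A minimal sketch of the Karhunen-Loève idea applied to misfit residuals, using synthetic data rather than the project's seismic inversions: eigendecompose the sample covariance of the residuals and keep the dominant modes, which capture most of the uncertainty at a fraction of the cost of Monte Carlo sampling.

```python
import numpy as np

def kl_modes(residuals, n_modes):
    """Karhunen-Loeve (principal-component) decomposition of data-misfit
    residuals: eigendecompose the sample covariance and return the
    dominant eigenvalues and modes. The residual matrix here is synthetic,
    standing in for an ensemble of misfit vectors."""
    X = residuals - residuals.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(vals)[::-1]            # reorder to descending
    return vals[order][:n_modes], vecs[:, order[:n_modes]]

rng = np.random.default_rng(3)
# 200 synthetic residual vectors of length 10 with decaying mode strengths
R = rng.normal(size=(200, 10)) @ np.diag(np.linspace(3.0, 0.1, 10))
vals, vecs = kl_modes(R, n_modes=3)
```

Truncating to a few dominant modes is what makes the approach orders of magnitude cheaper than propagating full Monte Carlo ensembles.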
Sandia National Laboratories has performed testing on several Hyperion 5313A infrasound sensors in order to determine the length of time it takes for the sensors to thermally equilibrate under a variety of environmental conditions. The motivation for performing these tests is to aid in determining suitable procedures for station operators to follow when installing these sensors. The desired outcome is for the station operators to be able to determine more quickly and reliably whether the sensors are performing correctly at the time of installation.
A new generation of concentrating solar power (CSP) technologies is under development to provide dispatchable renewable power generation and reduce the levelized cost of electricity (LCOE) to 6 cents/kWh by leveraging heat transfer fluids (HTFs) capable of operation at higher temperatures and coupling with higher efficiency power conversion cycles. The U.S. Department of Energy (DOE) has funded three pathways for Generation 3 CSP (Gen3CSP) technology development to leverage solid, liquid, and gaseous HTFs to transfer heat to a supercritical carbon dioxide (sCO2) Brayton cycle. This paper presents the design and off-design capabilities of a 1 MWth sCO2 test system that can provide sCO2 coolant to the primary heat exchangers (PHX) coupling the high-temperature HTFs to the sCO2 working fluid of the power cycle. This system will demonstrate design, performance, lifetime, and operability at a scale relevant to commercial CSP. A dense-phase high-pressure canned motor pump is used to supply up to 5.3 kg/s of sCO2 flow to the primary heat exchanger at pressures up to 250 bar and temperatures up to 715 °C with ambient air as the ultimate heat sink. Key component requirements for this system are presented in this paper.
Nonlocal models naturally handle a range of physics of interest to SNL, but discretization of their underlying integral operators poses mathematical challenges to realize the accuracy and robustness commonplace in discretizations of local counterparts. This project focuses on the concept of asymptotic compatibility, namely preservation of the limit of the discrete nonlocal model to a corresponding well-understood local solution. We address challenges that have traditionally troubled nonlocal mechanics models, primarily related to consistency guarantees and boundary conditions. For simple problems such as diffusion and linear elasticity we have developed a complete error analysis theory providing consistency guarantees. We then take these foundational tools to develop new state-of-the-art capabilities for: lithiation-induced failure in batteries, ductile failure of problems driven by contact, blast-on-structure-induced failure, and brittle/ductile failure of thin structures. We also summarize ongoing efforts using these frameworks in data-driven modeling contexts. This report provides a high-level summary of all publications that followed from these efforts.
This project aimed to identify the performance-limiting mechanisms in mid- to far-infrared (IR) sensors by probing photogenerated free carrier dynamics in model detector materials using scanning ultrafast electron microscopy (SUEM). SUEM is a recently developed method based on using ultrafast electron pulses in combination with optical excitations in a pump-probe configuration to examine charge dynamics with high spatial and temporal resolution and without the need for microfabrication. Five material systems were examined using SUEM in this project: polycrystalline lead zirconium titanate (a pyroelectric), polycrystalline vanadium dioxide (a bolometric material), GaAs (near IR), InAs (mid IR), and the Si/SiO2 system as a prototypical system for interface charge dynamics. The report provides detailed results for the Si/SiO2 and the lead zirconium titanate systems.
This report summarizes a series of SIERRA/Fuego validation efforts for turbulent flow models on canonical wall-bounded configurations. In particular, direct numerical simulation (DNS) and large eddy simulation (LES) turbulence models are tested on a periodic channel, a periodic pipe, and an open jet, with results compared to velocity profiles obtained theoretically or experimentally. Velocity inlet conditions for channel and pipe flows are developed for application to practical simulations. To demonstrate this capability, LES is performed over complex terrain in the form of two natural hills and the results are compared with those of other flow solvers. The practical purpose of the report is to document the creation of inflow boundary conditions of fully developed turbulent flows for other LES calculations where the role of inflow turbulence is critical.
We present an experimental and numerical study of a terahertz metamaterial with a nonlinear response that is controllable via the relative structural arrangement of two stacked split ring resonator arrays. The first array is fabricated on an n-doped GaAs substrate, and the second array is fabricated vertically above the first using a polyimide spacer layer. Due to GaAs carrier dynamics, the on-resonance terahertz transmission at 0.4 THz varies in a nonlinear manner with incident terahertz power. The second resonator layer dampens this nonlinear response. In samples where the two layers are aligned, the resonance disappears, and the total nonlinear modulation of the on-resonance transmission decreases. The nonlinear modulation is restored in samples where an alignment offset is imposed between the two resonator arrays. Structurally tunable metamaterials and metasurfaces can therefore act as a design template for tunable nonlinear THz devices by controlling the coupling of confined electric fields to nonlinear phenomena in a complex material substrate or inclusion.
The June 15, 1991 Mt. Pinatubo eruption is simulated in E3SM by injecting 10 Tg of SO2 gas in the stratosphere, turning off prescribed volcanic aerosols, and enabling E3SM to treat stratospheric volcanic aerosols prognostically. This experimental prognostic treatment of volcanic aerosols in the stratosphere results in some realistic behaviors (SO2 evolves into H2SO4 which heats the lower stratosphere), and some expected biases (H2SO4 aerosols sediment out of the stratosphere too quickly). Climate fingerprinting techniques are used to establish a Mt. Pinatubo fingerprint based on the vertical profile of temperature from the E3SMv1 DECK ensemble. By projecting reanalysis data and preindustrial simulations onto the fingerprint, the Mt. Pinatubo stratospheric heating anomaly is detected. Projecting the experimental prognostic aerosol simulation onto the fingerprint also results in a detectable heating anomaly, but, as expected, the duration is too short relative to reanalysis data.
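The fingerprint projection step can be sketched as a dot product of anomaly profiles with a unit-normalized pattern; the altitude grid and Gaussian warming pattern below are synthetic illustrations, not the E3SM-derived fingerprint.

```python
import numpy as np

def project_onto_fingerprint(anomalies, fingerprint):
    """Project vertical temperature-anomaly profiles onto a unit-normalized
    fingerprint pattern. Detection corresponds to projections that exceed
    the spread of projections from a control (e.g., preindustrial) run.
    All data here are synthetic."""
    f = fingerprint / np.linalg.norm(fingerprint)
    return anomalies @ f

z = np.linspace(0.0, 40.0, 41)                 # altitude in km (illustrative)
fp = np.exp(-((z - 20.0) / 5.0) ** 2)          # toy lower-stratospheric warming pattern
anomaly = 2.0 * fp                             # a profile matching the fingerprint
proj = project_onto_fingerprint(anomaly[None, :], fp)
```

A profile aligned with the fingerprint projects strongly, while uncorrelated variability projects near zero, which is what makes the stratospheric heating anomaly detectable against noise.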
Although many software teams across the laboratories comply with yearly software quality engineering (SQE) assessments, the practice of introducing quality into each phase of the software lifecycle, or the team processes, may vary substantially. Even with the support of a quality engineer, many teams struggle to adapt and right-size software engineering best practices in quality to fit their context, and these activities aren’t framed in a way that motivates teams to take action. In short, software quality is often a “check the box for compliance” activity instead of a cultural practice that both values software quality and knows how to achieve it. In this report, we present the results of our 6600 VISTA Innovation Tournament project, "Incentivizing and Motivating High Confidence and Research Software Teams to Adopt the Practice of Quality." We present our findings and roadmap for future work based on 1) a rapid review of relevant literature, 2) lessons learned from an internal design thinking workshop, and 3) an external Collegeville 2021 workshop. These activities provided an opportunity for team ideation and community engagement/feedback. Based on our findings, we believe a coordinated effort (e.g. strategic communication campaign) aimed at diffusing the innovation of the practice of quality across Sandia National Laboratories could over time effect meaningful organizational change. As such, our roadmap addresses strategies for motivating and incentivizing individuals ranging from early career to seasoned software developers/scientists.
Garg, Raveesh; Qin, Eric; Martinez, Francisco M.; Guirado, Robert; Jain, Akshay; Abadal, Sergi; Abellan, Jose L.; Acacio, Manuel E.; Alarcon, Eduard; Rajamanickam, Sivasankaran R.; Krishna, Tushar
Graph Neural Networks (GNNs) have garnered a lot of recent interest because of their success in learning representations from graph-structured data across several critical applications in cloud and HPC. Owing to their unique compute and memory characteristics that come from an interplay between dense and sparse phases of computations, the emergence of reconfigurable dataflow (aka spatial) accelerators offers promise for acceleration by mapping optimized dataflows (i.e., computation order and parallelism) for both phases. The goal of this work is to characterize and understand the design-space of dataflow choices for running GNNs on spatial accelerators in order for the compilers to optimize the dataflow based on the workload. Specifically, we propose a taxonomy to describe all possible choices for mapping the dense and sparse phases of GNNs spatially and temporally over a spatial accelerator, capturing both the intra-phase dataflow and the inter-phase (pipelined) dataflow. Using this taxonomy, we do deep-dives into the cost and benefits of several dataflows and perform case studies on implications of hardware parameters for dataflows and value of flexibility to support pipelined execution.
Physical protection of public buildings has long been a concern of police and security services where a balance of facility security and personnel safety is vital. Due to the nature of public spaces, the use of permanently installed and deploy-on-demand physical barrier systems must be safe for the legitimate occupants and visitors of that space. Such systems must seek to mitigate the personal and organizational consequences of unintentionally seriously injuring or killing an innocent bystander by slamming a heavy, rigid, and quick-deploying barrier into place. Consideration and implementation of less-than-lethal technologies is necessary to reduce risk to visitors and building personnel. One potential barrier solution is a fast-acting, high-strength, composite airbag barrier system for doorways and hallways to quickly deploy a less-than-lethal barrier at entry points as well as isolate intruders who have already gained access. This system is envisioned to be stored within an architecturally attractive selectively frangible shell that could be permanently installed at a facility or installed in remote or temporary locations as dictated by risk. The system would be designed to be activated remotely (hardwired or wireless) from a Central Alarm Station (CAS) or other secure location.
Graph partitioning has long been an important tool to partition work among several processors, minimizing communication cost and balancing the workload. As accelerator-based supercomputers emerge as the standard, graph partitioning becomes even more important as applications rapidly move to these architectures. However, no distributed-memory-parallel, multi-GPU graph partitioner has been available for applications. We developed a spectral graph partitioner, Sphynx, using the portable, accelerator-friendly stack of the Trilinos framework. In Sphynx, we allow using different preconditioners and exploit their unique advantages. We use Sphynx to systematically evaluate the various algorithmic choices in spectral partitioning with a focus on GPU performance. We perform those evaluations on two distinct classes of graphs: regular (such as meshes and matrices from finite element methods) and irregular (such as social networks and web graphs), and show that different settings and preconditioners are needed for these graph classes. Experimental results on the Summit supercomputer show that Sphynx is the fastest alternative on irregular graphs in an application-friendly setting and obtains partitioning quality close to that of ParMETIS on regular graphs. When compared to nvGRAPH on a single GPU, Sphynx is faster and obtains better balance and better-quality partitions. Sphynx provides a good and robust partitioning method across a wide range of graphs for applications looking for a GPU-based partitioner.
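A minimal sketch of the spectral bisection at the heart of such partitioners, assuming a dense eigensolver for clarity (Sphynx itself relies on preconditioned iterative eigensolvers on GPUs):

```python
import numpy as np

def spectral_bisect(adj):
    """Spectral bisection: split a graph by the median of the Fiedler
    vector, i.e., the eigenvector of the second-smallest eigenvalue of
    the graph Laplacian L = D - A. A dense eigensolver is used here for
    illustration only; production partitioners use iterative solvers."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj
    vals, vecs = np.linalg.eigh(lap)        # eigenvalues ascending
    fiedler = vecs[:, 1]                    # second-smallest eigenpair
    return fiedler >= np.median(fiedler)    # boolean part assignment

# 6-node path graph: the balanced cut separates {0,1,2} from {3,4,5}
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
part = spectral_bisect(A)
```

The median split enforces balance; cut quality follows from the Fiedler vector varying smoothly across well-connected regions of the graph.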
Virtual prototyping in engineering design relies on modern numerical models of contacting structures with accurate resolution of interface mechanics, which strongly affect the system-level stiffness and energy dissipation due to frictional losses. High-fidelity modeling within the localized interfaces is required to resolve local quantities of interest that may drive design decisions. The high-resolution finite element meshes necessary to resolve inter-component stresses tend to be computationally expensive, particularly when the analyst is interested in response time histories. The Hurty/Craig-Bampton (HCB) transformation is a widely used method in structural dynamics for reducing the interior portion of a finite element model while retaining all nonlinear contact degrees of freedom (DOF) in physical coordinates. These models may still require many DOF to adequately resolve the kinematics of the interface, limiting the achievable reduction and computational savings. This study proposes a novel interface reduction method to overcome these challenges by means of system-level characteristic constraint (SCC) modes and proper orthogonal interface modal derivatives (POIMDs) for transient dynamic analyses. Both SCC modes and POIMDs are computed using the reduced HCB mass and stiffness matrices, which can be obtained directly from many commercial finite element analysis codes. Comparison of time history responses to an impulse-type load in a mechanical beam assembly indicates that the interface-reduced model correlates well with the HCB truth model. Localized features like slip and contact area are well represented in the time domain when the beam assembly is loaded with a broadband excitation. The proposed method also yields reduced-order models with greater critical timestep lengths for explicit integration schemes.
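The HCB transformation itself can be sketched as follows, assuming an identity mass matrix for brevity (a simplification, not the authors' formulation): interface DOF stay in physical coordinates while interior DOF are condensed onto static constraint modes plus a few fixed-interface modes.

```python
import numpy as np

def hcb_reduce(K, b_dofs, n_modes):
    """Hurty/Craig-Bampton reduction of a stiffness matrix, assuming M = I
    for brevity. Boundary (interface) DOF are kept physical; interior DOF
    are represented by static constraint modes plus n_modes fixed-interface
    modes."""
    n = K.shape[0]
    i_dofs = [d for d in range(n) if d not in b_dofs]
    Kii = K[np.ix_(i_dofs, i_dofs)]
    Kib = K[np.ix_(i_dofs, b_dofs)]
    vals, vecs = np.linalg.eigh(Kii)           # fixed-interface modes (M = I)
    Phi = vecs[:, :n_modes]                    # keep the lowest few modes
    Psi = -np.linalg.solve(Kii, Kib)           # static constraint modes
    nb = len(b_dofs)
    T = np.zeros((n, n_modes + nb))            # maps q = [modal; boundary] -> u
    T[np.ix_(i_dofs, list(range(n_modes)))] = Phi
    T[np.ix_(i_dofs, list(range(n_modes, n_modes + nb)))] = Psi
    T[np.ix_(b_dofs, list(range(n_modes, n_modes + nb)))] = np.eye(nb)
    return T.T @ K @ T

# 6-DOF spring chain with unit stiffness; interface DOF at the two ends
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K_red = hcb_reduce(K, b_dofs=[0, 5], n_modes=2)
```

A useful property of the construction is that the reduced stiffness decouples the fixed-interface modes from the retained boundary DOF exactly.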
Current methods for stochastic media transport are either computationally expensive or, by nature, approximate. Moreover, none of the well-developed, benchmarked approximate methods can compute the variance caused by the stochastic mixing, a quantity especially important to safety calculations. Therefore, we derive and apply a new conditional probability function (CPF) for use in the recently developed stochastic media transport algorithm Conditional Point Sampling (CoPS), which 1) leverages the full intra-particle memory of CoPS to yield errorless computation of stochastic media outputs in 1D, binary, Markovian-mixed media, and 2) leverages the full inter-particle memory of CoPS and the recently developed Embedded Variance Deconvolution method to yield computation of the variance in transport outputs caused by stochastic material mixing. Numerical results with the new CPF demonstrate errorless stochastic media transport, as compared to reference benchmark solutions, for this class of stochastic mixing, as well as the ability to compute the variance caused by the stochastic mixing via CoPS. Using previously derived, non-errorless CPFs, CoPS is further found to be more accurate than the atomic mix approximation, Chord Length Sampling (CLS), and most of the memory-enhanced versions of CLS surveyed. In addition, we study the compounding behavior of CPF error as a function of cohort size (where a cohort is a group of histories that share intra-particle memory) and recommend that small cohorts be used when computing the variance in transport outputs caused by stochastic mixing.
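For context, Chord Length Sampling, one of the approximate methods CoPS is compared against, is simple to sketch: material chord lengths are sampled on the fly from the Markovian mixing statistics, with no memory of previously sampled material boundaries. The sketch below is a minimal, purely absorbing 1D illustration with assumed cross sections and mean chord lengths; it is not the CoPS algorithm itself.

```python
import math
import random

def cls_transmission(L, sigma=(1.0, 0.1), lam=(0.5, 1.5), n_hist=20000, seed=0):
    """Chord Length Sampling estimate of transmission through a 1D binary
    Markovian-mixed slab of thickness L, with purely absorbing materials.
    sigma: total (absorption) cross sections; lam: mean chord lengths."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        # Material at the slab face sampled by Markovian volume fraction
        x = 0.0
        mat = 0 if rng.random() < lam[0] / (lam[0] + lam[1]) else 1
        while True:
            d_coll = -math.log(rng.random()) / sigma[mat]  # distance to absorption
            d_chord = -math.log(rng.random()) * lam[mat]   # distance to interface
            step = min(d_coll, d_chord)
            if x + step >= L:
                transmitted += 1
                break
            x += step
            if d_coll <= d_chord:
                break            # absorbed inside the slab
            mat = 1 - mat        # crossed into the other material, no memory kept
    return transmitted / n_hist
```

The "no memory kept" line is exactly the approximation that the intra- and inter-particle memory of CoPS removes.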
Typical approaches to classify scenes from light convert the light field to electrons to perform the computation in the digital electronic domain. This conversion and downstream computational analysis require significant power and time. Diffractive neural networks have recently emerged as unique systems to classify optical fields at lower energy and high speeds. Previous work has shown that a single layer of diffractive metamaterial can achieve high performance on classification tasks. In analogy with electronic neural networks, it is anticipated that multilayer diffractive systems would provide better performance, but the fundamental reasons for the potential improvement have not been established. In this work, we present extensive computational simulations of two-layer diffractive neural networks and show that they can achieve high performance with fewer diffractive features than single-layer systems.
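The forward model underlying such simulations is typically free-space propagation between phase-modulating layers, commonly computed with the angular-spectrum method. A minimal sketch of the propagation step follows; the grid size, pixel pitch, and wavelength in the usage are assumed values, not parameters from the study.

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Propagate a complex 2D optical field a distance z using the
    angular-spectrum method, the standard forward model for simulating
    diffraction between layers of a diffractive neural network."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

A two-layer diffractive classifier would then apply a learned phase mask (`field * np.exp(1j * phase1)`), propagate, apply a second mask, propagate again, and read out detector-region intensities.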
Supplementing an existing high-quality seismic monitoring network with openly available station data could improve coverage and decrease magnitudes of completeness; however, this can present challenges when varying levels of data quality exist. Without discerning the quality of openly available data, using it poses significant data management, analysis, and interpretation issues. Incorporating additional stations without properly identifying and mitigating data quality problems can degrade overall monitoring capability. If openly available stations are to be used routinely, a robust, automated data quality assessment for a wide range of quality control (QC) issues is essential. To meet this need, we developed Pycheron, a Python-based library for QC of seismic waveform data. Pycheron was initially based on the Incorporated Research Institutions for Seismology's Modular Utility for STAtistical kNowledge Gathering but has been expanded to include more functionality. Pycheron can be implemented at the beginning of a data processing pipeline or can process stand-alone data sets. Its objectives are to (1) identify specific QC issues; (2) automatically assess data quality and instrumentation health; (3) serve as a basic service that all data processing builds on by alerting downstream processing algorithms to any quality degradation; and (4) improve our ability to process orders of magnitudes more data through performance optimizations. This article provides an overview of Pycheron, its features, basic workflow, and an example application using a synthetic QC data set.
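To give a flavor of what automated waveform QC involves, the sketch below implements toy gap, dead-channel, and spike checks on a single trace. This is a generic illustration of the kinds of metrics such a pipeline computes; it is not Pycheron's actual API, and the thresholds are arbitrary.

```python
import numpy as np

def basic_qc(trace, spike_z=8.0, dead_rms=1e-3):
    """Toy waveform QC checks in the spirit of gap/spike/dead-channel
    metrics (illustrative only -- not Pycheron's actual interface)."""
    issues = []
    if np.isnan(trace).any():
        issues.append("gaps")              # NaN samples mark masked data gaps
    x = np.nan_to_num(trace)
    rms = np.sqrt(np.mean(x**2))
    if rms < dead_rms:
        issues.append("dead_channel")      # essentially flat trace
    z = np.abs(x - x.mean()) / (x.std() + 1e-12)
    if (z > spike_z).any():
        issues.append("spikes")            # isolated extreme amplitudes
    return issues
```

In a pipeline such per-trace checks would run at ingest time, with the returned flags alerting downstream processing to quality degradation.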
The properties of materials can change dramatically at the nanoscale, where new and useful properties can emerge. An example is the paramagnetism of iron oxide magnetic nanoparticles. Using magnetically sensitive nitrogen-vacancy (NV) centers in diamond, we developed a platform to study electron spin resonance of nanoscale materials. To implement the platform, diamond substrates were prepared with NV centers near the surface. Nanoparticles were placed on the surface using a drop-casting technique. Using optical and microwave pulsing techniques, we demonstrated T1 relaxometry and double electron-electron resonance techniques for measuring the local electron spin resonance. The diamond NV platform developed in this project provides a combination of good magnetic field sensitivity and high spatial resolution and will be used for future investigations of nanomaterials and quantum materials.
Recent work has shown that artificial opsonins stimulate the targeted destruction of bacteria by phagocyte immune cells. Artificial opsonization has the potential to direct the innate immune system to target novel antigens, potentially even viral pathogens. Furthermore, the engagement of innate immunity presents a potential means of slowing the spread of a pandemic when a vaccine is unavailable or ineffective. Funded by the LDRD late-start bioscience pandemic response program, we tested whether artificial opsonins can be developed to target viral pathogens using phage MS2 and a SARS-CoV-2 surrogate. To direct opsonization against these viruses, we purified antibody-derived viral targeting motifs and attempted the same chemical conjugation strategies that produced bacteria-targeting artificial opsonins. However, the viral targeting motifs proved challenging to conjugate using these methods, frequently resulting in precipitation and loss of product. Future studies may be successful with this approach if a smaller and more soluble viral-targeting peptide can be used.
Thermographic phosphors have been employed for temperature sensing in challenging environments, such as on surfaces or within solid samples exposed to dynamic heating, because of the high temporal and spatial resolution that can be achieved using this approach. Typically, UV light sources are employed to induce temperature-sensitive spectral responses from the phosphors. However, it would be beneficial to explore x-rays as an alternate excitation source to facilitate simultaneous x-ray imaging of material deformation and temperature of heated samples and to reduce UV absorption within solid samples being investigated. The phosphors BaMgAl10O17:Eu (BAM), Y2SiO5:Ce, YAG:Dy, La2O2S:Eu, ZnGa2O4:Mn, Mg3F2GeO4:Mn, Gd2O2S:Tb, and ZnO were excited in this study using incident synchrotron x-ray radiation. These materials were chosen to include conventional thermographic phosphors as well as x-ray scintillators (with crossover between these two categories). X-ray-induced thermographic behavior was explored through the measurement of visible spectral response with varying temperature. The incident x-rays were observed to excite the same electronic energy level transitions in these phosphors as UV excitation. Similar shifts in the spectral response of BAM, Y2SiO5:Ce, YAG:Dy, La2O2S:Eu, ZnGa2O4:Mn, Mg3F2GeO4:Mn, and Gd2O2S:Tb were observed when compared to their response to UV excitation found in literature. Some phosphors were observed to thermally quench in the temperature ranges tested here, while the response from others did not rise above background noise levels. This may be attributed to the increased probability of non-radiative energy release from these phosphors due to the high energy of the incident x-rays. These results indicate that x-rays can serve as a viable excitation source for phosphor thermometry.
The AXIOM-Unfold application is a computational code for performing spectral unfolds along with uncertainty quantification of the photon spectrum. While this code was principally designed for spectral unfolds on the Saturn source, it is also relevant to other radiation sources such as Pithon. This code is a component of the AXIOM project which was undertaken in order to measure the time-resolved spectrum of the Saturn source; to support this, the AXIOM-Unfold code is able to process time-dependent dose measurements in order to obtain a time-resolved spectrum. This manual contains a full description of the algorithms used by the method. The code features are fully documented along with several worked examples.
The quantum k-Local Hamiltonian problem is a natural generalization of classical constraint satisfaction problems (k-CSP) and is complete for QMA, a quantum analog of NP. Although the complexity of k-Local Hamiltonian problems has been well studied, only a handful of approximation results are known. For Max 2-Local Hamiltonian where each term is a rank 3 projector, a natural quantum generalization of classical Max 2-SAT, the best known approximation algorithm was the trivial random assignment, yielding a 0.75-approximation. We present the first approximation algorithm beating this bound, a classical polynomial-time 0.764-approximation. For strictly quadratic instances, which are maximally entangled instances, we provide a 0.801-approximation algorithm, and numerically demonstrate that our algorithm is likely a 0.821-approximation. We conjecture these are the hardest instances to approximate. We also give improved approximations for quantum generalizations of other related classical 2-CSPs. Finally, we exploit quantum connections to a generalization of the Grothendieck problem to obtain a classical constant-factor approximation for the physically relevant special case of strictly quadratic traceless 2-Local Hamiltonians on bipartite interaction graphs, where an inverse-logarithmic approximation was the best previously known (for general interaction graphs). Our work employs recently developed techniques for analyzing classical approximations of CSPs and is intended to be accessible to both quantum information scientists and classical computer scientists.
Swiler, Laura P.; Becker, Dirk-Alexander; Brooks, Dusty M.; Govaerts, Joan; Koskinen, Lasse; Plischke, Elmar; Rohlig, Klaus-Jurgen; Saveleva, Elena; Spiessl, Sabine M.; Stein, Emily S.; Svitelman, Valentina
Over the past four years, an informal working group has developed to investigate existing sensitivity analysis methods, examine new methods, and identify best practices. The focus is on the use of sensitivity analysis in case studies involving geologic disposal of spent nuclear fuel or nuclear waste. To examine ideas and have applicable test cases for comparison purposes, we have developed multiple case studies. Four of these case studies are presented in this report: the GRS clay case, the SNL shale case, the Dessel case, and the IBRAE groundwater case. We present the different sensitivity analysis methods investigated by various groups, the results obtained by different groups and different implementations, and summarize our findings.
Denoising contaminated seismic signals for later processing is a fundamental problem in seismic signal analysis. The most straightforward denoising approach, spectral filtering, is not effective when noise and seismic signal occupy the same frequency range. Neural network approaches have shown success denoising local signals when trained on short-time Fourier transform spectrograms (Zhu et al., 2018; Tibi et al., 2021). Scalograms, a wavelet-based time-frequency transform, achieved ~15% better reconstruction on a seismic waveform test set, as measured by dynamic time warping, than spectrograms, suggesting their use as an alternative for denoising. We train a deep neural network on a scalogram dataset derived from waveforms recorded by the University of Utah Seismograph Stations network. We find that initial results are no better than a spectrogram approach, with additional overhead imposed by the significantly larger size of scalograms. A robust exploration of neural network hyperparameters and network architecture was not performed, which could be done in follow-on work.
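A scalogram is the magnitude of a continuous wavelet transform evaluated over a set of analysis frequencies. The sketch below computes one with a Morlet wavelet via the FFT; it is a generic illustration of the input representation, with assumed parameter values rather than those used in the study.

```python
import numpy as np

def scalogram(x, fs, freqs, w0=6.0):
    """Magnitude scalogram of signal x (sample rate fs) via a Morlet
    continuous wavelet transform evaluated at the given frequencies --
    the wavelet-based alternative to an STFT spectrogram."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)  # rad/s
    out = np.empty((len(freqs), n))
    for k, f in enumerate(freqs):
        s = w0 / (2.0 * np.pi * f)  # scale placing the wavelet peak at f
        # Morlet wavelet in the frequency domain (positive frequencies only)
        psi_hat = np.pi**-0.25 * np.exp(-0.5 * (s * omega - w0)**2) * (omega > 0)
        out[k] = np.abs(np.fft.ifft(X * np.conj(psi_hat) * np.sqrt(s)))
    return out
```

Note the output has one row per analysis frequency and one column per sample, which is why scalograms are significantly larger than decimated spectrograms.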
Jawaharram, Gowtham S.; Barr, Christopher M.; Hattar, Khalid M.; Dillon, Shen J.
A series of nanopillar compression tests were performed on tungsten as a function of temperature using in situ transmission electron microscopy with localized laser heating. Surface oxidation was observed to form on the pillars and grow in thickness with increasing temperature. Deformation between 850 °C and 1120 °C is facilitated by long-range diffusional transport from the tungsten pillar onto adjacent regions of the Y2O3-stabilized ZrO2 indenter. The constraint imposed by the surface oxidation is hypothesized to underlie this mechanism for localized plasticity, which is generally known as the whisker growth mechanism. The results are discussed in the context of the tungsten fuzz growth mechanism in He plasma-facing environments. The two processes exhibit similar morphological features, and the conditions under which fuzz evolves appear to satisfy the conditions necessary to induce whisker growth.
An exceptional set of newly discovered advanced superalloys known as refractory high-entropy alloys (RHEAs) can provide near-term solutions for wear, erosion, corrosion, high-temperature strength, creep, and radiation issues associated with supercritical carbon dioxide (sCO2) Brayton cycles and advanced nuclear reactors. In particular, these superalloys can significantly extend the durability and reliability of these systems and improve their thermal efficiency, thereby making them more cost-competitive and safer. For this project, we endeavored to manufacture and test certain RHEAs to solve technical issues impacting the Brayton cycle and advanced nuclear reactors. This was achieved by leveraging Sandia's patents, technical advances, and previous experience working with RHEAs. Three RHEA manufacturing methods were applied: laser engineered net shaping, spark plasma sintering, and spray coating. Two promising RHEAs were selected, HfNbTaZr and MoNbTaVW. To demonstrate their performance, erosion, structural, radiation, and high-temperature experiments were conducted on the RHEAs, stainless steel (SS) 316L, SS 1020, and Inconel 718 test coupons, as well as bench-top components. The experimental data are presented and analyzed, and they confirm the superior performance of the HfNbTaZr and MoNbTaVW RHEAs versus SS 316L, SS 1020, and Inconel 718. In addition, to gain more insight for larger-scale RHEA applications, the erosion and structural capabilities of the two RHEAs were simulated and compared with the experimental data. Most importantly, the erosion and coating material data show that erosion in sCO2 Brayton cycles can be eliminated completely if RHEAs are used.
The experimental suite and validations confirm that HfNbTaZr is suitable for harsh environments that do not include nuclear radiation, while MoNbTaVW is suitable for harsh environments that include radiation.
The ability to model ductile rupture in metal parts is critical in highly stressed applications. The initiation of a ductile fracture is a function of the plastic strain, the stress state, and the stress history. This paper develops a ductile rupture failure surface for PH13-8Mo H950 steel using the Xue-Wierzbicki failure model. The model is developed using data from five tensile specimen tests conducted at −40 °C and 20 °C. The specimens are designed to cover a Lode parameter range of 0 to 1 with a stress triaxiality range from zero in pure shear to approximately 1.0 in tension. The failure surface can be implemented directly into a finite element code or used as a post-processing check.
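The way such a failure surface is used is conceptually simple: it gives the equivalent plastic strain at failure as a function of stress triaxiality and Lode parameter, and a damage indicator integrates increments of plastic strain against it, predicting rupture when the indicator reaches one. The sketch below illustrates this workflow with a simplified exponential/linear-interpolation surface and hypothetical coefficients; the actual Xue-Wierzbicki form and the fitted PH13-8Mo H950 values differ.

```python
import numpy as np

def failure_strain(eta, xi, C=(0.8, 1.5, 0.4, 1.2)):
    """Illustrative Xue-Wierzbicki-style failure surface: equivalent plastic
    strain at failure vs. stress triaxiality eta and Lode parameter xi.
    Coefficients C are hypothetical, not the fitted PH13-8Mo values."""
    C1, C2, C3, C4 = C
    ax = C1 * np.exp(-C2 * eta)   # axisymmetric (xi = 1) bound
    sh = C3 * np.exp(-C4 * eta)   # shear (xi = 0) bound
    # Linear Lode interpolation for simplicity (the real model is nonlinear)
    return ax - (ax - sh) * (1.0 - xi)

def damage(strain_path):
    """Accumulate damage D = sum(d_eps_p / eps_f(eta, xi)) over a loading
    path of (plastic strain increment, triaxiality, Lode parameter) triples.
    Rupture is predicted when D reaches 1."""
    D = 0.0
    for deps, eta, xi in strain_path:
        D += deps / failure_strain(eta, xi)
    return D
```

This is exactly the "post-processing check" use: evaluate `damage` along the strain and stress history extracted from a finite element solution.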
The DOE-NE NWM Cloud was designed to be a generic set of tools and applications for any nuclear waste management program. As policymakers continue to consider approaches that emphasize consolidated interim storage and transportation of spent nuclear fuel (SNF), a gap analysis comparing the tools and applications provided for SNF and high-level radioactive waste disposal with those needed for siting, licensing, and developing a consolidated interim storage facility and/or for a transportation campaign will help prepare DOE to implement such potential policy direction. This report evaluates the points of alignment and potential gaps between the applications on the NWM Cloud that supported the SNF disposal project and the applications needed to address quality assurance (QA) requirements and other project support needs of an SNF storage project.
Currently, a set of 71 radionuclides is accounted for in off-site consequence analysis for LWRs. Radionuclides of dose consequence are expected to change for non-LWRs, with radionuclides of interest being type-specific. This document identifies an expanded set of radionuclides that may need to be accounted for in multiple non-LWR systems: high-temperature gas reactors (HTGRs); fluoride-salt-cooled high-temperature reactors (FHRs); thermal-spectrum fluoride-based molten salt reactors (MSRs); fast-spectrum chloride-based MSRs; and liquid metal fast reactors with metallic fuel (LMRs). Specific considerations are provided for each reactor type in Chapter 2 through Chapter 5, and a summary of all recommendations is provided in Chapter 6. All identified radionuclides are already incorporated within the MACCS software, but the development of tritium-specific and carbon-specific chemistry models is recommended.
The twenty-seven critical experiments in this series were performed in 2020 in the SCX at the Sandia Pulsed Reactor Facility. The experiments are grouped by fuel rod pitch. Case 1 is a base case with a pitch of 0.8001 cm and no water holes in the array. Cases 2 through 6 have the same pitch as Case 1 but contain various configurations with water holes, providing slight variations in the fuel-to-water ratio. Similarly, Case 7 is a base case with a pitch of 0.854964 cm and no water holes in the array. Cases 8 through 11 have the same pitch as Case 7 but contain various configurations with water holes. Cases 12 through 15 have a pitch of 1.131512 cm and differ according to the number of water holes in the array, with Case 12 having no water holes. Cases 16 through 19 have a pitch of 1.209102 cm and differ according to number of water holes in the array, with Case 16 having no water holes. Cases 20 through 23 have a pitch of 1.6002 cm and differ according to number of water holes in the array, with Case 20 having no water holes. Cases 24 through 27 have a pitch of 1.709928 cm and differ according to number of water holes in the array, with Case 24 having no water holes. As the experiment case number increases, the fuel-to-water volume ratio decreases.
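The closing observation, that the fuel-to-water volume ratio decreases as case number (and pitch) increases, follows directly from the unit-cell geometry of a square-pitch rod array. The short calculation below verifies the trend for the six base-case pitches; the rod radius is a hypothetical value chosen only for illustration.

```python
import math

def fuel_to_water_ratio(pitch_cm, rod_radius_cm=0.25):
    """Fuel-to-water volume ratio of a square-pitch rod array unit cell.
    The rod radius is a hypothetical value, not the actual SCX fuel rod."""
    fuel = math.pi * rod_radius_cm**2          # rod cross-sectional area
    water = pitch_cm**2 - fuel                 # remainder of the unit cell
    return fuel / water

# Base-case pitches from the experiment series, in cm
pitches = [0.8001, 0.854964, 1.131512, 1.209102, 1.6002, 1.709928]
ratios = [fuel_to_water_ratio(p) for p in pitches]
# The ratio falls monotonically as pitch grows, matching the stated trend
```

Adding water holes at fixed pitch removes fuel from the array, which produces the additional "slight variations in the fuel-to-water ratio" within each pitch group.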
Arithmetic Coding (AC) using Prediction by Partial Matching (PPM) is a compression algorithm that can be used as a machine learning algorithm. This paper describes a new algorithm, NGram PPM. NGram PPM has all the predictive power of AC/PPM, but at a fraction of the computational cost. Unlike compression-based analytics, it is also amenable to a vector space interpretation, which creates the ability for integration with other traditional machine learning algorithms. AC/PPM is reviewed, including its application to machine learning. Then NGram PPM is described and test results are presented, comparing them to AC/PPM.
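The core idea behind compression-based classification is that a model trained on one class compresses (assigns fewer bits to) text from that class than from others. The sketch below illustrates this with a character n-gram model and add-one smoothing; it is a simplified stand-in for the idea, not the NGram PPM algorithm itself, which uses PPM-style escape probabilities and backoff.

```python
import math
from collections import defaultdict

class NGramScorer:
    """Character n-gram language model scored the way compression-based
    classifiers use PPM: lower cross-entropy means a better fit.
    (Simplified sketch with add-one smoothing, not NGram PPM itself.)"""
    def __init__(self, n=3):
        self.n = n
        self.ctx = defaultdict(int)    # context counts
        self.pair = defaultdict(int)   # (context, next char) counts
        self.vocab = set()

    def train(self, text):
        pad = " " * (self.n - 1) + text
        for k in range(len(text)):
            c, ch = pad[k:k + self.n - 1], pad[k + self.n - 1]
            self.ctx[c] += 1
            self.pair[(c, ch)] += 1
            self.vocab.add(ch)

    def bits(self, text):
        """Average bits per character under the model (cross-entropy)."""
        pad = " " * (self.n - 1) + text
        V = len(self.vocab) + 1
        total = 0.0
        for k in range(len(text)):
            c, ch = pad[k:k + self.n - 1], pad[k + self.n - 1]
            p = (self.pair[(c, ch)] + 1) / (self.ctx[c] + V)
            total -= math.log2(p)
        return total / len(text)

def classify(text, models):
    """Assign text to the class whose model compresses it best."""
    return min(models, key=lambda label: models[label].bits(text))
```

The per-context probability table here is also what makes an n-gram formulation amenable to a vector-space interpretation, as the abstract notes.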
This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.
Ship tracks are quasi-linear cloud patterns produced from the interaction of ship emissions with low boundary layer clouds. They are visible throughout the diurnal cycle in satellite images from space-borne assets like the Advanced Baseline Imagers (ABI) aboard the National Oceanic and Atmospheric Administration Geostationary Operational Environmental Satellites (GOES-R). However, complex atmospheric dynamics often make it difficult to identify and characterize the formation and evolution of tracks. Ship tracks have the potential to increase a cloud's albedo and reduce the impact of global warming. Thus, it is important to study these patterns to better understand the complex atmospheric interactions between aerosols and clouds to improve our climate models, and examine the efficacy of climate interventions, such as marine cloud brightening. Over the course of this 3-year project, we have developed novel data-driven techniques that advance our ability to assess the effects of ship emissions on marine environments and the risks of future marine cloud brightening efforts. The three main innovative technical contributions we will document here are a method to track aerosol injections using optical flow, a stochastic simulation model for track formations and an automated detection algorithm for efficient identification of ship tracks in large datasets.
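Tracking advected features between successive satellite frames amounts to estimating a displacement field. The sketch below uses FFT phase correlation to recover a single integer-pixel displacement between two frames; it is a minimal stand-in for the optical-flow tracking described above, not the project's algorithm, and the frames in the usage are synthetic.

```python
import numpy as np

def phase_correlation_shift(frame0, frame1):
    """Estimate the integer-pixel displacement that maps frame0 onto frame1
    by FFT phase correlation -- a minimal analog of tracking an advected
    aerosol feature between two image frames."""
    F0, F1 = np.fft.fft2(frame0), np.fft.fft2(frame1)
    cross = F1 * np.conj(F0)
    # Normalizing to unit magnitude leaves only the phase ramp of the shift
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts
    if dy > frame0.shape[0] // 2:
        dy -= frame0.shape[0]
    if dx > frame0.shape[1] // 2:
        dx -= frame0.shape[1]
    return dy, dx
```

Applied patchwise over an image sequence, displacement estimates like this yield the kind of motion field that optical-flow trackers refine to follow injections through the diurnal cycle.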
This report presents the results of the “Foundations of Rigorous Cyber Experimentation” (FORCE) Laboratory Directed Research and Development (LDRD) project. This project is a companion project to the “Science and Engineering of Cyber security through Uncertainty quantification and Rigorous Experimentation” (SECURE) Grand Challenge LDRD project. This project leverages the offline, controlled nature of cyber experimentation technologies in general, and emulation testbeds in particular, to assess how uncertainties in network conditions affect uncertainties in key metrics. We conduct extensive experimentation using a Firewheel emulation-based cyber testbed model of Invisible Internet Project (I2P) networks to understand a de-anonymization attack previously presented in the literature. Our goals in this analysis are to see if we can leverage emulation testbeds to produce reliably repeatable experimental networks at scale, identify significant parameters influencing experimental results, replicate the previous results, quantify uncertainty associated with the predictions, and apply multi-fidelity techniques to forecast results to real-world network scales. The I2P networks we study are up to three orders of magnitude larger than the networks studied in SECURE and presented additional challenges in identifying significant parameters. The key contributions of this project are the application of SECURE techniques such as UQ to a scenario of interest and the scaling of the SECURE techniques to larger network sizes. This report describes the experimental methods and results of these studies in more detail. In addition, the process of constructing these large-scale experiments tested the limits of the Firewheel emulation-based technologies. Therefore, another contribution of this work is that it informed the Firewheel developers of scaling limitations, which were subsequently corrected.
As the seismic monitoring community advances toward detecting, identifying, and locating ever-smaller natural and anthropogenic events, the need is constantly increasing for higher resolution, higher fidelity data, models, and methods for accurately characterizing events. Local-distance seismic data provide robust constraints on event locations, but also introduce complexity due to the significant geologic heterogeneity of the Earth’s crust and upper mantle, and the relative sparsity of data that often occurs with small events recorded on regional seismic networks. Identifying the critical characteristics for improving local-scale event locations and the factors that impact location accuracy and reliability is an ongoing challenge for the seismic community. Using Utah as a test case, we examine three data sets of varying duration, finesse, and magnitude to investigate the effects of local earth structure and modeling parameters on local-distance event location precision and accuracy. We observe that the most critical elements controlling relocation precision are azimuthal coverage and local-scale velocity structure, with tradeoffs based on event depth, type, location, and range.
To date, disinformation research has focused largely on the production of false information, ignoring the suppression of select information. We term this alternative form of disinformation "information suppression." Information suppression occurs when facts are withheld with the intent to mislead. In order to detect information suppression, we focus on understanding the actors who withhold information. In this research, we use knowledge of human behavior to find signatures of different gatekeeping behaviors found in text. Specifically, we build a model to classify the different types of edits on Wikipedia using the added text alone and compare a human-informed feature engineering approach to a featureless algorithm. Being able to computationally distinguish gatekeeping behaviors is a first step towards identifying when information suppression is occurring.
A new copper equation of state (EOS) is developed utilizing the available experimental data in addition to recent theoretical calculations. Semi-empirical models are fit to the data and the results are tabulated in the SNL SESAME format. Comparisons to other copper EOS tables are given, along with recommendations of which tables provide the best accuracy.
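The semi-empirical fitting workflow can be illustrated with a common cold-curve form, the third-order Birch-Murnaghan equation, fit to pressure-volume data by least squares. The functional forms and fitted values used for the SESAME table in the report may differ; the parameter values below are illustrative, not the actual copper fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan P(V): a common semi-empirical cold-curve
    form fit to compression data when building EOS tables.
    V0: zero-pressure volume, B0: bulk modulus, B0p: its pressure derivative."""
    x = (V0 / V) ** (2.0 / 3.0)
    return 1.5 * B0 * (x**3.5 - x**2.5) * (1.0 + 0.75 * (B0p - 4.0) * (x - 1.0))

# Fit to synthetic "data" generated from assumed, illustrative parameters
true_params = (7.1, 140.0, 5.0)          # V0 [cm^3/mol], B0 [GPa], B0'
V = np.linspace(5.5, 7.1, 25)
P = birch_murnaghan(V, *true_params)
popt, _ = curve_fit(birch_murnaghan, V, P, p0=(7.0, 100.0, 4.0))
```

In a real table build, fits like this to static-compression, shock, and theoretical data are blended and evaluated on a dense density-temperature grid for tabulation.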
Wellbore integrity is a significant problem in the U.S. and worldwide, with serious adverse environmental and energy security consequences. Wells are constructed with a cement barrier designed to last about 50 years. Indirect measurements and models are commonly used to identify wellbore damage and leakage, often producing subjective and even erroneous results. The research presented herein focuses on new technologies to improve monitoring and detection of wellbore failures (leaks) by developing (1) a multi-step machine learning approach to localize two types of thermal defects within a wellbore model, (2) a prototype mechatronic system for automatically drilling small-diameter holes of arbitrary depth to monitor the integrity of oil and gas wells in situ, and (3) benchtop testing and analyses to support the development of an autonomous real-time diagnostic tool to enable sensor emplacement for monitoring wellbore integrity. Each technology was supported by experimental results. This research has provided tools to aid in the detection of wellbore leaks and significantly enhanced our understanding of the interaction between small-hole drilling and wellbore materials.
Battery cells with metal casings are commonly considered incompatible with nuclear magnetic resonance (NMR) spectroscopy because the oscillating radio-frequency magnetic fields ("rf fields") responsible for excitation and detection of NMR active nuclei do not penetrate metals. Here, we show that rf fields can still efficiently penetrate nonmetallic layers of coin cells with metal casings provided "B1 damming" configurations are avoided. With this understanding, we demonstrate noninvasive high-field in situ 7Li and 19F NMR of coin cells with metal casings using a traditional external NMR coil. This includes the first NMR measurements of an unmodified commercial off-the-shelf rechargeable battery in operando, from which we detect, resolve, and separate 7Li NMR signals from elemental Li, anodic β-LiAl, and cathodic LixMnO2 compounds. Real-time changes of β-LiAl lithium diffusion rates and variable β-LiAl 7Li NMR Knight shifts are observed and tied to electrochemically driven changes of the β-LiAl defect structure.
While it is likely practically a bad idea to shrink a transistor to the size of an atom, there is no arguing that it would be fantastic to have atomic-scale control over every aspect of a transistor – a kind of crystal ball to understand and evaluate new ideas. This project showed that it was possible to take a niche technique used to place dopants in silicon with atomic precision and apply it broadly to study opportunities and limitations in microelectronics. In addition, it laid the foundation to attaining atomic-scale control in semiconductor manufacturing more broadly.
This report summarizes the 2021 fiscal year (FY21) status of ongoing borehole heater tests in salt funded by the disposal research and development (R&D) program of the Office of Spent Fuel & Waste Science and Technology (SFWST) of the US Department of Energy’s Office of Nuclear Energy’s (DOE-NE) Office of Spent Fuel and Waste Disposition (SFWD). This report satisfies SFWST milestone M2SF-21SN010303052 by summarizing test activities and data collected during FY21. The Brine Availability Test in Salt (BATS) is fielded in a pair of similar arrays of horizontal boreholes in an experimental area at the Waste Isolation Pilot Plant (WIPP). One array is heated, the other unheated. Each array consists of 14 boreholes, including a central borehole with gas circulation to measure water production, a cement seal exposure test, thermocouples to measure temperature, electrodes to infer resistivity, a packer-isolated borehole to add tracers, fiber optics to measure temperature and strain, and piezoelectric transducers to measure acoustic emissions. The key new data collected during FY21 include a series of gas tracer tests (BATS phase 1b), a pair of liquid tracer tests (BATS phase 1c), and data collected under ambient conditions (including a period with limited access due to the ongoing pandemic) since BATS phase 1a in 2020. A comparison of heated and unheated gas tracer test results clearly shows a decrease in permeability of the salt upon heating (i.e., thermal expansion closes fractures, which reduces permeability).
University partnerships play an essential role in sustaining Sandia’s vitality as a national laboratory. The Sandia Academic Alliance (SAA) is an element of Sandia’s broader University Partnerships program, which facilitates recruiting and research collaborations with dozens of universities annually. The SAA program has two three-year goals. SAA aims to realize a step increase in hiring results by growing the total annual inexperienced hires from each out-of-state SAA university. SAA also strives to establish and sustain strategic research partnerships by establishing several federally sponsored collaborations and multi-institutional consortiums in science & technology (S&T) priorities such as autonomy, advanced computing, hypersonics, quantum information science, and data science. The SAA program facilitates access to talent, ideas, and research and development facilities through strong university partnerships. Earlier this year, the SAA program and campus executives hosted John Myers, Sandia’s former Senior Director of Human Resources (HR) and Communications, and senior-level staff at Georgia Tech, U of Illinois, Purdue, UNM, and UT Austin. These campus visits provided an opportunity to share the history of the partnerships from the university leadership, tours of research facilities, and discussions of ongoing technical work and potential recruiting opportunities. These visits also provided valuable feedback to HR management that will help Sandia realize a step increase in hiring from SAA schools. The 2020-2021 Collaboration Report is a compilation of accomplishments in 2020 and 2021 from SAA and Sandia’s valued SAA university partners.
Develop, verify, and document model capabilities sufficient for comparing field wake measurements from SWiFT with synthetic lidar wake measurements from Nalu-Wind (hereafter referred to as `Nalu').
Gannon, Renae N.; Hamann, Danielle M.; Ditto, Jeffrey; Mitchson, Gavin; Bauers, Sage R.; Merrill, Devin R.; Medlin, Douglas L.; Johnson, David C.
Layered van der Waals heterostructures provide extraordinary opportunities for applications such as thermoelectrics and allow for tunability of optical and electronic properties. The performance of devices made from these heterostructures will depend on their properties, which are sensitive to the nanoarchitecture (constituent layer thicknesses, layer sequence, etc.). However, performance will also be impacted by defects, which will vary in concentration and identity with the nanoarchitecture and preparation conditions. Here, we identify several types of defects and propose mechanisms for their formation, focusing on compounds in the ([SnSe]1+δ)m(TiSe2)n system prepared using the modulated elemental reactants method. The defects were observed by atomic resolution high-angle annular dark-field scanning transmission electron microscopy and can be broadly categorized into those that form domain boundaries as a result of rotational disorder from the self-assembly process and those that are layer-thickness-related and result from local or global deviations in the amount of material deposited. Defect type and density were found to depend on the nanoarchitecture of the heterostructure. Categorizing the defects provides insights into defect formation in these van der Waals layered heterostructures and suggests strategies for controlling their concentrations. Strategies for controlling defect type and concentration are proposed, which would have implications for transport properties for applications in thermoelectrics.
This report summarizes the FY21 activities for the EBS International Collaborations Work Package. The international collaborations work packages aim to leverage knowledge, expertise, and tools from the international nuclear waste community, as deemed relevant according to SFWST “roadmap” priorities. This report describes research and development (R&D) activities conducted during fiscal year 2021 (FY21) specifically related to the Engineered Barrier System (EBS) R&D Work Package in the Spent Fuel and Waste Science and Technology (SFWST) Campaign supported by the United States (U.S.) Department of Energy (DOE). It fulfills the SFWST Campaign deliverable M4SF-21SN010308062. The R&D activities described in this report focus on understanding EBS component evolution and interactions within the EBS, as well as interactions between the host media and the EBS. A primary goal is to advance the development of process models that can be implemented directly within the Generic Disposal System Analysis (GDSA) platform or that can contribute to the safety case in some manner such as building confidence, providing further insight into the processes being modeled, establishing better constraints on barrier performance, etc. Sandia National Laboratories is participating in THM modeling in the international projects EBS Task Force and DECOVALEX 2023. Task 11 of the EBS Task Force addresses modeling of the laboratory-scale High Temperature Column Test conducted at Lawrence Berkeley National Laboratory. Task C of DECOVALEX 2023 addresses THM modeling of the full-scale emplacement experiment (FE experiment) at the Mont Terri Underground Rock Laboratory, Switzerland. This report summarizes Sandia’s progress in the modeling studies of DECOVALEX 2023, Task C. Modeling studies related to the High Temperature Column Test will be documented in future reports.
Steam cracking of ethane, a non-catalytic thermochemical process, remains the dominant means of ethylene production. The severe reaction conditions and energy expenditure involved in this process incentivize the search for alternative reaction pathways and reactor designs which maximize ethylene yield while minimizing cost and energy input. Herein, we report a comparison of catalytic and non-catalytic non-oxidative dehydrogenation of ethane. We achieve ethylene yields as high as 67 % with an open tube quartz reactor without the use of a catalyst at residence times ∼4 s. The open tube reactor design promotes simplicity, low cost, and negligible coke formation. Pristine quartz tubes were most effective, since coke formation was detected when defects were introduced by scratching the surface of the quartz. Surprisingly, the addition of solids to the quartz tube, such as quartz sand, alumina powder, or even Pt-based intermetallic catalysts, led to lower ethylene yield. Pt alloy catalysts are effective at lower temperatures, such as at 575 °C, but conversion is limited due to thermodynamic constraints. When operated at industrially relevant temperatures, such as 700 °C and above, these catalysts were not stable in our tests, causing ethylene yield to drop below that of the open tube. These results suggest that future research on non-oxidative dehydrogenation should be directed at optimizing reactor designs to improve the conversion of ethane to ethylene, since this approach shows promise for decentralized production of ethylene from natural gas deposits.
The final review for the FY21 Advanced Simulation and Computing (ASC) Computational Systems and Software Environments (CSSE) L2 Milestone #7840 was conducted on August 25th, 2021 at Sandia National Laboratories in Albuquerque, New Mexico. The review committee/panel unanimously agreed that the milestone has been successfully completed, exceeding expectations on several of the key deliverables.
Choffel, Marisa A.; Gannon, Renae N.; Gohler, Fabian; Miller, Aaron M.; Medlin, Douglas L.; Seyller, Thomas; Johnson, David C.
The synthesis and electrical properties of a new misfit compound containing BiSe, Bi2Se3, and MoSe2 constituent layers are reported. The reaction pathway involves competition between the formation of (BiSe)1+x(Bi2Se3)1+y(BiSe)1+x(MoSe2) and [(Bi2Se3)1+y]2(MoSe2). Excess Bi and Se are required in the precursor to synthesize (BiSe)1+x(Bi2Se3)1+y(BiSe)1+x(MoSe2). High-angle annular dark field-scanning transmission electron microscopy (HAADF-STEM) confirms the stacking sequence of the heterostructure. Small grains of both 2H- and 1T-MoSe2 are observed in the MoSe2 layers. X-ray photoelectron spectroscopy (XPS) indicates that there is a significantly higher percentage of 1T-MoSe2 in (BiSe)1+x(Bi2Se3)1+y(BiSe)1+x(MoSe2) than in (BiSe)0.97(MoSe2), suggesting that more charge transfer to MoSe2 occurs due to the additional BiSe layer. The additional charge transfer results in (BiSe)1+x(Bi2Se3)1+y(BiSe)1+x(MoSe2) having a low resistivity (14–19 μΩ m) with metallic temperature dependence. The heterogeneous mix of MoSe2 polytypes observed in the XPS complicates the interpretation of the Hall data as two bands contribute to the electrical conductivity.
Atomic precision advanced manufacturing (APAM) leverages the highly reactive nature of Si dangling bonds relative to H- or Cl-passivated Si to selectively adsorb precursor molecules into lithographically defined areas with sub-nanometer resolution. Due to the high reactivity of dangling bonds, this process is confined to ultra-high vacuum (UHV) environments, which currently limits its commercialization and broad-based appeal. In this work, we explore the use of halogen adatoms to preserve APAM-derived lithographic patterns outside of UHV to enable facile transfer into real-world commercial processes. Specifically, we examine the stability of H-, Cl-, Br-, and I-passivated Si(100) in inert N2 and ambient environments. Characterization with scanning tunneling microscopy and x-ray photoelectron spectroscopy (XPS) confirmed that each of the fully passivated surfaces were resistant to oxidation in 1 atm of N2 for up to 44 h. Varying levels of surface degradation and contamination were observed upon exposure to the laboratory ambient environment. Characterization by ex situ XPS after ambient exposures ranging from 15 min to 8 h indicated the Br- and I-passivated Si surfaces were highly resistant to degradation, while Cl-passivated Si showed signs of oxidation within minutes of ambient exposure. As a proof-of-principle demonstration of pattern preservation, a H-passivated Si sample patterned and passivated with independent Cl, Br, I, and bare Si regions was shown to maintain its integrity in all but the bare Si region post-exposure to an N2 environment. The successful demonstration of the preservation of APAM patterns outside of UHV environments opens new possibilities for transporting atomically-precise devices outside of UHV for integrating with non-UHV processes, such as other chemistries and commercial semiconductor device processes.
Lithium metal is considered the "holy grail" material to replace typical Li-ion anodes due to the absence of a host structure coupled with a high theoretical capacity. The absence of a host structure results in large volumetric changes when lithium is electrodeposited/dissolved, making the lithium prone to stranding and parasitic reactions with the electrolyte. Lithium research is focused on enabling highly reversible lithium electrodeposition/dissolution, which is important to achieving long cycle life. Understanding the various mechanisms of self-discharge is also critical for realizing practical lithium metal batteries but is often overlooked. In contrast to previous work, it is shown here that self-discharge via galvanic corrosion is negligible, particularly when lithium is cycled to relevant capacities. Rather, the continued electrochemical cycling of lithium metal results in self-discharge when periodic rest is applied during cycling. The extent of self-discharge can be controlled by increasing the capacity of plated lithium, tuning electrolyte chemistry, incorporating regular rest, or introducing lithiophilic materials. The Coulombic losses that occur during periodic rest are largely reversible, suggesting that the dominant self-discharge mechanism in this work is not an irreversible chemical process but rather a morphological process.
Abdelfattah, Ahmad; Anzt, Hartwig; Ayala, Alan; Boman, Erik G.; Carson, Erin C.; Cayrols, Sebastien; Cojean, Terry; Dongarra, Jack J.; Falgout, Rob; Gates, Mark; Grützmacher, Thomas; Higham, Nicholas J.; Kruger, Scott E.; Li, Sherry; Lindquist, Neil; Liu, Yang; Loe, Jennifer A.; Nayak, Pratik; Osei-Kuffuor, Daniel; Pranesh, Sri; Rajamanickam, Sivasankaran R.; Ribizel, Tobias; Smith, Bryce B.; Swirydowicz, Kasia; Thomas, Stephen J.; Tomov, Stanimire; Tsai, Yaohung M.; Yamazaki, Ichitaro Y.; Yang, Ulrike M.
Over the last year, the ECP xSDK-multiprecision effort has made tremendous progress in developing and deploying new mixed precision technology and customizing the algorithms for the hardware deployed in the ECP flagship supercomputers. The effort also has succeeded in creating a cross-laboratory community of scientists interested in mixed precision technology and now working together in deploying this technology for ECP applications. In this report, we highlight some of the most promising and impactful achievements of the last year. Among the highlights we present are: mixed precision IR using a dense LU factorization, achieving a 1.8× speedup on Spock; results and strategies for mixed precision IR using a sparse LU factorization; a mixed precision eigenvalue solver; mixed precision GMRES-IR being deployed in Trilinos, achieving a speedup of 1.4× over standard GMRES; Compressed Basis (CB) GMRES being deployed in Ginkgo, achieving an average 1.4× speedup over standard GMRES; preparing hypre for mixed precision execution; mixed precision sparse approximate inverse preconditioners achieving an average speedup of 1.2×; and a detailed description of the memory accessor separating the arithmetic precision from the memory precision, enabling memory-bound low precision BLAS 1/2 operations to increase the accuracy by using high precision in the computations without degrading the performance. We emphasize that many of the highlights presented here have also been submitted to peer-reviewed journals or established conferences, and are under peer-review or have already been published.
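The mixed precision iterative refinement (IR) idea underlying several of these highlights can be illustrated with a minimal sketch: the expensive O(n³) factorization is done in low precision, while the cheap residual and correction updates are accumulated in high precision. This is a hypothetical NumPy/SciPy illustration of the general technique, not the ECP production implementations in Trilinos or Ginkgo.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_ir(A, b, tol=1e-12, max_iter=20):
    """Solve Ax = b: factor once in float32, refine the residual in float64."""
    A_lo = A.astype(np.float32)          # low-precision copy for the O(n^3) work
    lu, piv = lu_factor(A_lo)            # single-precision LU factorization
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                    # residual computed in float64
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # reuse the cheap float32 factorization to solve for the correction
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d
    return x

# Example on a well-conditioned random system
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)
b = rng.standard_normal(100)
x = mixed_precision_ir(A, b)
```

For well-conditioned systems the refined solution reaches double-precision accuracy even though all factorization work was done in single precision, which is the source of the reported speedups on GPU hardware.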
Porous nanoscale carbonaceous materials are widely employed for catalysis, separations, and electrochemical devices where device performance often relies upon specific and well-defined regular feature sizes. The use of block polymers as templates has enabled affordable and scalable production of diverse porous carbons. However, popular carbon preparations use equilibrating micelles which can change dimensions in response to the processing environment. Thus, polymer methods have not yet demonstrated carbon nanomaterials with constant average template diameter and tailored wall thickness. In contrast, persistent micelle templates (PMTs) use kinetic control to preserve constant micelle template diameters, and thus PMT has enabled constant pore diameter metrics. With PMT, the wall thickness is independently adjustable via the amount of material precursor added to the micelle templates. Previous PMT demonstrations relied upon thermodynamic barriers to inhibit chain exchange while in solution, followed by rapid evaporation and cross-linking of material precursors to mitigate micelle reorganization once the solvent evaporated. It is shown here that this approach, however, fails to deliver kinetic micelle control when used with slowly cross-linking material precursors such as those for porous carbons. A new modality for kinetic control over micelle templates, glassy-PMTs, is shown using an immobilized glassy micelle core composed of polystyrene (PS). Although PS based polymers have been used to template carbon materials before, all prior reports included plasticizers that prevented kinetic micelle control. Here the key synthetic conditions for carbon materials with glassy-PMT control are enumerated, including dependencies upon polymer block selection, block molecular mass, solvent selection, and micelle processing timeline. The use of glassy-PMTs also enables the direct observation of micelle cores by TEM which are shown to be commensurate with template dimensions. 
Glassy-PMTs are thus robust and insensitive to material processing kinetics, broadly enabling tailored nanomaterials with diverse chemistries.
The DOE R&D program under the Spent Fuel Waste Science Technology (SFWST) campaign has made key progress in modeling and experimental approaches towards the characterization of chemical and physical phenomena that could impact the long-term safety assessment of heat-generating nuclear waste disposition in deep-seated clay/shale/argillaceous rock. International collaboration activities such as heater tests, continuous field data monitoring, and post-mortem analysis of samples recovered from these tests have elucidated key information regarding changes in the engineered barrier system (EBS) material exposed to years of thermal loads. Chemical and structural analyses of sampled bentonite material from such tests, as well as experiments conducted on these samples, are key to the characterization of thermal effects affecting bentonite clay barrier performance and the extent of sacrificial zones in the EBS during the thermal period. Thermal, hydrologic, and chemical data collected from heater tests and laboratory experiments have been used in the development, validation, and calibration of THMC simulators to model near-field coupled processes. This information leads to the development of simulation approaches (e.g., continuum and discrete) to tackle issues related to flow and transport at various scales of the host rock, its interactions with barrier materials, and the EBS design concept.
In this study, model derivations are carried out for a dynamical system under base excitation with a piezoelectric energy harvesting absorber as the tuned mass damper. Additionally, amplitude stoppers are added to the absorber in order to create a broadband resonant response, increasing the window of operational use for energy harvesting and control of the system. This study is unique in that the energy harvester is coupled to the source of its excitation. A nonlinear reduced-order model is developed using the Euler–Lagrange principle and the Galerkin method to accurately estimate the energy harvesting absorber's displacement, harvested power, and the oscillating response of the primary structure. The nonlinear interaction of the energy harvesting absorber and the amplitude stoppers is the focus of this study, where an in-depth investigation of bifurcation points of the primary structure and energy harvesting absorber responses is performed. Due to a transfer of energy between the primary structure and the absorber, it is shown that a soft stopper with stiffness $5 \times 10^3\,\text{N m}^{-1}$ effectively controls the primary structure, reducing the uncontrolled amplitude by 60%, while also increasing the harvested energy. Medium stoppers with small initial gap sizes and hard stoppers do not control the primary structure and show a decrease in energy harvesting capability due to the activation of the nonlinear contact-impact interactions. Finally, these stoppers also generate aperiodic regions due to the possible presence of grazing bifurcations.
As a general-purpose force field for molecular simulations of layered materials and their fluid interfaces, Clayff continues to see broad usage in atomistic computational modeling for numerous geoscience and materials science applications due to its (1) success in predicting properties of bulk nanoporous materials and their interfaces, (2) transferability to a range of layered and nanoporous materials, and (3) simple functional form which facilitates incorporation into a variety of simulation codes. Here, we review applications of Clayff to model bulk phases and interfaces not included in the original parameter set and recent modifications for modeling surface terminations such as hydroxylated nanoparticle edges. We conclude with a discussion of expectations for future developments.
This trade study was conducted to address the following task from the Sandia JPL Collaboration for Europa Lander Statement of Work: Survey facility infrastructure SNL may have for performing aseptic assembly and integration of S/C and assess its suitability for PP applications.
Currently, traditional methods such as short-term average/long-term average (STA/LTA) are used to detect arrivals in three-component seismic waveform data. Accurately establishing the identity and arrival of these waves is helpful in detecting and locating seismic events. Convolutional Neural Networks (CNNs) have been shown to significantly improve performance at local distances. This work will expand the use of CNNs to more remote distances and lower magnitudes. Sandia National Labs (SNL) will explore the advantages and limits of a particular approach and investigate requirements for expanding this technique to different types, distances, and magnitudes of events in the future. The team will describe detailed performance results of this method tuned on a curated dataset from Utah with its expert-defined arrival picks.
This report summarizes initial results from a series of gun experiments which were conducted at the DICE facility. The target of these experiments was a modified metal slug composed of a tantalum/tungsten alloy (Ta-10W). The general geometry of the slug was a right circular cylinder with a through-hole cut normal to the cylinder's axis. In all experiments, hardened steel impactors were used, the desired impact velocity was 200 m/s, the slug was preheated to a target temperature of 175 °C, photon doppler velocimetry (PDV) was used to measure the projectile velocity before and after impact, and the impact event was recorded with high-speed video. In two of the impacts the slug was oriented perpendicular to the projectile, while in the remaining two it was tilted 8° from normal. Initial high-speed video results showed slug failure in the tilted impact case, while the slug survived normal impacts. Recovery fixtures were used to preserve impacted slugs for future postmortem analysis. Discussions are included regarding potential improvements to future experiments involving these slugs.
This report is a functional review of the radionuclide containment strategies of fluoride-salt-cooled high temperature reactor (FHR), molten salt reactor (MSR), and high temperature gas reactor (HTGR) systems. This analysis serves as a starting point for further, more in-depth analyses geared towards identifying phenomenological gaps that still exist, hindering the creation of a mechanistic source term for these reactor types. As background information to this review, an overview of how a mechanistic source term is created and used for the consequence assessment necessary for licensing is provided. How a mechanistic source term is used within the Licensing Modernization Project (LMP) is also provided. Lastly, the characteristics of non-LWR mechanistic source terms are examined. This report does not assess the viability of any software system for use with advanced reactor designs, but instead covers system function requirements. Future work within the Nuclear Energy Advanced Modeling and Simulations (NEAMS) program will address such gaps. This document is an update of SAND2020-6730. An additional chapter has been added, along with edits to the original content.
Sandia will provide technical assistance to New Mexico Department of Health to provide analysis of SafeGraph mobility data (for which Sandia already has the data and a Data Use Agreement in place with the data provider). Sandia will produce analysis to determine the contribution of travel to SARS-CoV-2 spread within New Mexico.
The goal of this work is to develop a Bayesian framework to characterize the uncertainty of material response when using a nonlocal, homogenized model to describe wave propagation through heterogeneous, disordered materials. Our approach is based on an operator regression technique combined with Bayesian optimization, through which the nonlocal kernel for a specific disordered microstructure is investigated.
We present an approach for constructing a surrogate from ensembles of information sources of varying cost and accuracy. The multifidelity surrogate encodes connections between information sources as a directed acyclic graph, and is trained via gradient-based minimization of a nonlinear least squares objective. While the vast majority of state-of-the-art approaches assume hierarchical connections between information sources, our approach works with flexibly structured information sources that may not admit a strict hierarchy. The formulation has two advantages: (1) increased data efficiency due to parsimonious multifidelity networks that can be tailored to the application; and (2) no constraints on the training data—we can combine noisy, non-nested evaluations of the information sources. Finally, numerical examples ranging from synthetic to physics-based computational mechanics simulations indicate the error in our approach can be orders of magnitude smaller, particularly in the low-data regime, than single-fidelity and hierarchical multifidelity approaches.
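The core idea of training a multifidelity surrogate by gradient-based nonlinear least squares can be sketched for the simplest possible graph: one edge connecting a cheap low-fidelity source to a scarce high-fidelity source through a scaling plus a learned discrepancy. The functions `f_lo` and `f_hi` below are hypothetical stand-ins, and this two-node sketch is only an illustration of the general technique, not the paper's graph formulation.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical information sources: a biased cheap model and an expensive "truth"
def f_lo(x):
    return np.sin(8 * x)                    # plentiful low-fidelity evaluations

def f_hi(x):
    return np.sin(8 * x) + 0.3 * x          # scarce high-fidelity evaluations

x_hi = np.linspace(0.0, 1.0, 5)             # only a few expensive samples

# One-edge "graph": hi ~ rho * lo + linear discrepancy (params rho, a0, a1)
def surrogate(params, x):
    rho, a0, a1 = params
    return rho * f_lo(x) + a0 + a1 * x

def residuals(params):
    return surrogate(params, x_hi) - f_hi(x_hi)

# Gradient-based minimization of the nonlinear least squares objective
fit = least_squares(residuals, x0=[1.0, 0.0, 0.0])
rho, a0, a1 = fit.x
```

Because the low-fidelity trend carries most of the structure, the surrogate only needs the five expensive samples to learn the scaling and discrepancy, which is the data-efficiency argument made above.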
This Storm Water Pollution Prevention Plan (SWPPP) has been prepared for the Sandia National Laboratories Water Line Project – Northern Portion, in Livermore, CA. The project is located at 7011 East Avenue and entails the portion of the site north of the Arroyo Section. The project comprises 19,584 linear feet of water line improvements totaling approximately 9.0 acres. The property is owned by the U.S. Department of Energy, and managed and operated by National Technology & Engineering Solutions of Sandia, LLC, with this project being developed by NTESS for Sandia National Laboratories.
The advanced materials team investigated the use of additively manufactured metallic lattice structures for mitigating impact response in a Davis gun earth penetrator impact experiment. High-fidelity finite element models were developed and validated with quasistatic experiments. These models were then used to simulate the response of such lattices when subjected to the acceleration loads expected in the Davis gun experiment. Results reveal how the impact mitigation performance of lattices can change drastically at a certain relative density. Based on these observations, an experiment deck was designed to probe the response of lattices with different relative densities during the Davis gun phase 2 shots. The expected performance of these lattices is predicted before testing based on simulation results. The results of the Davis gun phase 2 shots are expected to provide data which will be used to assess the predictive capability of the finite element simulations in such a complex impact environment.
Historically, nuclear component manufacturing vendors, from small businesses through large conglomerates, have felt compelled to obtain an American Society of Mechanical Engineers (ASME) Nuclear Certification, known colloquially as an "N-stamp", to assure supply chain quality standards that will be acceptable to regulators and safety concerns. Since the N-stamp quality standard is a U.S.-origin code, combined with the apparent decline in the U.S. nuclear industry alongside the growth of the Asian nuclear industry, there is the question of whether the rest of the world, including new entrants to the nuclear industry, also regards the N-stamp as a needed certification. This study addresses this question through analysis of the entire N-stamp database of holders, and former holders, of N-stamp certificates of all types and for all regions worldwide from 1989-2020 (the dates available in the database). From these 30 years of data, we find that U.S.-based vendors actually still consistently obtain the largest number of N-stamps worldwide over all time periods, but also find that the set of countries participating in the N-stamp certification process has broadly expanded beyond just North America, Japan, S. Korea, and Western Europe (the primary N-stamp recipients before the mid-2000s). We produced global heat maps and bar charts to illustrate our findings, and further investigated why the data show changes over time and region. We note that nuclear entities involved with Soviet-type reactors do not participate in the N-stamp process, but instead pursue the Russian version PNAE, which is substantially similar to the ASME code.
We conclude that, at least from the N-stamp database, the United States nuclear component manufacturing industry is alive and well, although there have been some consolidations, and that the ASME N-stamp appears to still be a valued certificate worldwide, including in China, which now ranks second only to the United States in obtaining N-stamp certificates in recent years. We further note that the vendors of new reactor types, in particular High Temperature Gas-Cooled Reactors (HTGRs) and Small Modular Reactors (SMRs), are actively engaged with ASME (and other U.S.-based nuclear standards bodies such as the American Nuclear Society and Nuclear Energy Institute) to coordinate updates to the ASME N-stamp criteria to ensure applicability of the code for these new designs. Implications of these findings include the following: The global use of the U.S.-origin N-stamp certification supports the view that, despite the decline of the U.S. nuclear industry, the United States remains an esteemed global leader in the area of nuclear safety. As the U.S. Government works to revitalize the U.S. nuclear industry, especially in the area of exports, it may be beneficial to leverage the global standing of the N-stamp certification. The findings indicate that the N-stamp database would be a useful tool for the U.S. Government to use to track the growth of the civil nuclear industry in foreign countries, under certain circumstances. The Excel-format N-stamp database produced as part of this study may be a useful tool for this purpose. N-stamp data may be a useful tool for foreign governments to use to identify nuclear manufacturers within their own country, especially to identify "targets" for outreach on nuclear export control compliance. The U.S. Government could carry this message to foreign partners during bilateral engagements or as part of Nuclear Suppliers Group (NSG) discussions on industry outreach.
MELCOR is a fully integrated, engineering-level computer code for modeling the progression of severe accidents in light water reactors (LWR) at nuclear power plants and nuclear fuel cycle facilities. Originally developed to assess severe accidents following Three Mile Island, MELCOR’s flexible modeling framework has enabled it to be applied to safety assessments of a much broader range of nuclear power reactor designs and other types of nuclear facilities processing radioactive material. Further, MELCOR can model a broad spectrum of severe accident phenomena such as thermal-hydraulic response in a reactor coolant system; core heat-up, degradation, and relocation; and transport behavior in both boiling water and pressurized water reactors.
Sandia will provide technical assistance to Helpful Engineering to develop and test the Universal Citizen Protection Device (UCPD), a UV-based filter-less PPE concept that aims to keep the SARS-CoV-2 virus out of the eyes, nose, and mouth with 99%+ reliability. The heart of the device would be a concealed UV chamber that decontaminates all air going in and out of the PPE. Helpful Engineering’s goal is for the device to be reusable, cost less than $100 to construct, and be wearable for 8 hours. The UCPD is an open source project, and once developed, prototyped, tested, and approved, it will be shared with interested manufacturers globally.
Information from the 2015 annual report highlighting several tasks, including: Task 7: Research of microspectrophotometry for inspection and validation of laser color markings. Task 8: Investigate new laser fabrication techniques that produce color markings with improved corrosion resistance. Task 9: Research new methods for laser marking curved surfaces (and large areas). Task 10: Complete model simulations of laser-induced ripple formation, which involves an electromagnetic field solver.
As the demand for higher-performance batteries has increased, so has the body of research on theoretical high-capacity anode materials. However, the research has been hindered because the high-capacity anode material properties and interactions are not well understood, largely due to the difficulty of observing cycling in situ. Using electrochemical scanning transmission electron microscopy (ec-STEM), we report the real-time observation and electrochemical analysis of pristine tin (Sn) and titanium dioxide-coated Sn (TiO2@Sn) electrodes during lithiation/delithiation. As expected, we observed a volume expansion of the pristine Sn electrodes during lithiation, but we further observed that the expansion was followed by Sn detachment from the current collector. Remarkably, although the TiO2@Sn electrodes also exhibited similar volume expansion during lithiation, they showed no evidence of Sn detachment. We found that the TiO2 surface layer acted as an electrochemically activated artificial solid-electrolyte interphase that serves to conduct Li ions. As a physical coating, it mechanically prevented Sn detachment following volume changes during cycling, providing significant degradation resistance and 80% Coulombic efficiency for a complete lithiation/delithiation cycle. Interestingly, upon delithiation, TiO2@Sn electrode displayed a self-healing mechanism of small pore formation in the Sn particle followed by agglomeration into several larger pores as delithiation continued.
Kreitz, Bjarne; Sargsyan, Khachik S.; Mazeau, Emily J.; Blondal, Katrin; West, Richard H.; Wehinger, Gregor D.; Turek, Thomas; Goldsmith, C.F.
Automatic mechanism generation is used to determine mechanisms for the CO2 hydrogenation on Ni(111) in a two-stage process while considering the correlated uncertainty in DFT-based energetic parameters systematically. In a coarse stage, all the possible chemistry is explored with gas-phase products down to the ppb level, while a refined stage discovers the core methanation submechanism. Five thousand unique mechanisms were generated, which contain minor perturbations in all parameters. Global uncertainty assessment, global sensitivity analysis, and degree of rate control analysis are performed to study the effect of this parametric uncertainty on the microkinetic model predictions. Comparison of the model predictions with experimental data on a Ni/SiO2 catalyst finds a feasible set of microkinetic mechanisms within the correlated uncertainty space that are in quantitative agreement with the measured data, without relying on explicit parameter optimization. Global uncertainty and sensitivity analyses provide tools to determine the pathways and key factors that control the methanation activity within the parameter space. Together, these methods reveal that the degree of rate control approach can be misleading if parametric uncertainty is not considered. The procedure of considering uncertainties in the automated mechanism generation is not unique to CO2 methanation and can be easily extended to other challenging heterogeneously catalyzed reactions.
Nanostructures with a high density of interfaces, such as in nanoporous materials and nanowires, resist radiation damage by promoting the annihilation and migration of defects. This study details the size effect and origins of the radiation damage mechanisms in nanowires and nanoporous structures in model face-centered-cubic (gold) and body-centered-cubic (niobium) nanostructures using accelerated multi-cascade atomistic simulations and in situ ion irradiation experiments. Our results reveal three different size-dependent mechanisms of damage accumulation in irradiated nanowires and nanoporous structures: sputtering for very small nanowires and ligaments, the formation and accumulation of point defects and dislocation loops in larger nanowires, and a face-centered-cubic to hexagonal-close-packed phase transformation for a narrow range of wire diameters in the case of gold nanowires. Smaller nanowires and ligaments accumulate less net radiation damage than larger wires because they transition from rapid defect accumulation to a saturation-and-annihilation mechanism at a lower dose. These irradiation damage mechanisms are accompanied by radiation-induced surface roughening resulting from defect-surface interactions. Comparisons between nanowires and nanoporous structures show that the various mechanisms seen in nanowires provide adequate bounds for the defect accumulation mechanisms in nanoporous structures, with the differences attributed to the role of nodes connecting ligaments in nanoporous structures. Taken together, our results shed light on the compounded, size-dependent mechanisms leading to the radiation resistance of nanowires and nanoporous structures.
We describe a method to automatically generate an ion implantation recipe, a set of energies and fluences, to produce a desired defect density profile in a solid using the fewest required energies. We simulate defect density profiles for a range of ion energies, fit them with an appropriate function, and interpolate to yield defect density profiles at arbitrary ion energies. Given N energies, we then optimize a set of N energy-fluence pairs to match a given target defect density profile. Finally, we find the minimum N such that the error between the target defect density profile and the defect density profile generated by the N energy-fluence pairs is less than a given threshold. Inspired by quantum sensing applications with nitrogen-vacancy centers in diamond, we apply our technique to calculate optimal ion implantation recipes to create uniform-density 1 μm surface layers of 15N or vacancies (using 4He).
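The recipe-search loop described above (simulate per-energy profiles, fit fluences, and grow N until the target is matched) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the Gaussian per-fluence damage profile stands in for a real transport simulation such as SRIM, and the energy grid, units, and tolerance are arbitrary assumptions. Because the defect density is linear in fluence, the inner fit reduces to non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

def profile_per_fluence(energy, z):
    # Hypothetical stand-in for a binary-collision simulation (e.g., SRIM):
    # a Gaussian damage peak whose depth and straggle grow with ion energy
    # (arbitrary units, chosen only for illustration).
    mu, sigma = 0.1 * energy, 0.02 * energy + 0.01
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2)

def fit_fluences(z, target, energies):
    # Defect density is linear in fluence, so for a fixed energy set the
    # optimal non-negative fluences solve a non-negative least-squares problem.
    A = np.column_stack([profile_per_fluence(e, z) for e in energies])
    fluences, _ = nnls(A, target)
    return fluences, np.max(np.abs(A @ fluences - target))

def minimal_recipe(z, target, tol, max_n=15):
    # Smallest N such that N evenly spaced candidate energies match the target.
    for n in range(1, max_n + 1):
        energies = np.linspace(1.0, 10.0, n)
        fluences, err = fit_fluences(z, target, energies)
        if err <= tol:
            return energies, fluences, err
    raise RuntimeError("tolerance not reached within max_n energies")

# Synthetic target built from three known implants, then recovered.
z = np.linspace(0.0, 1.2, 300)
true_E, true_f = [2.0, 5.0, 8.0], [1.0, 0.5, 2.0]
target = sum(f * profile_per_fluence(e, z) for e, f in zip(true_E, true_f))
energies, fluences, err = minimal_recipe(z, target, tol=1e-3)
```

The outer loop returns the smallest N for which the candidate energies reproduce the target; a full implementation would also optimize the energies themselves, as the method describes.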
Schneemann, Andreas; Ying, Juan; Evans, Jack D.; Toyao, Takashi; Hijikata, Yuh; Kamiya, Yuichi; Shimizu, Ken I.; Burtch, Nicholas C.
The trapping of paraffins is beneficial compared to selective olefin adsorption for adsorptive olefin purification from a process engineering point of view. Here we demonstrate the use of a series of Zn2(X-bdc)2(dabco) metal-organic frameworks (MOFs) (where X-bdc2− denotes 1,4-benzenedicarboxylate (bdc2−) bearing substituent groups X; DM-bdc2− = 2,5-dimethyl-1,4-benzenedicarboxylate, TM-bdc2− = 2,3,5,6-tetramethyl-1,4-benzenedicarboxylate; dabco = 1,4-diazabicyclo[2.2.2]octane) for the adsorptive removal of ethane from ethylene streams. The best-performing material from this series is Zn2(TM-bdc)2(dabco) (DMOF-TM), which shows a high ethane uptake of 5.31 mmol g−1 at 110 kPa and a good IAST selectivity of 1.88 for ethane over ethylene. Breakthrough measurements reveal a high productivity of 13.1 L kg−1 per breakthrough with good reproducibility over five consecutive cycles. Molecular simulations show that the methyl groups of DMOF-TM form a van der Waals trap with the methylene groups of dabco, snugly fitting the ethane. Further, rarely used high-pressure coadsorption measurements, in pressure regimes that most scientific studies of hydrocarbon separation on MOFs ignore, reveal an increase in ethane capacity and selectivity for binary mixtures as pressure increases. The coadsorption measurements show a good selectivity of 1.96 at 1000 kPa, which is also verified through IAST calculations up to 3000 kPa. Overall, this study showcases the opportunities that pore engineering by alkyl group incorporation and pressure increase offer for improving hydrocarbon separation in reticular materials.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by the SFWST Campaign and its predecessor, the Used Fuel Disposition (UFD) Campaign. This report fulfills the GDSA Uncertainty and Sensitivity Analysis Methods work package (SF-21SN01030404) level 3 milestone, Uncertainty and Sensitivity Analysis Methods and Applications in GDSA Framework (FY2021) (M3SF-21SN010304042). It presents high-level objectives and strategy for the development of uncertainty and sensitivity analysis tools, demonstrates the uncertainty quantification (UQ) and sensitivity analysis (SA) tools in GDSA Framework in FY21, and describes additional UQ/SA tools whose future implementation would enhance the UQ/SA capability of GDSA Framework. This work was closely coordinated with the other Sandia National Laboratories GDSA work packages: the GDSA Framework Development work package (SF-21SN01030405), the GDSA Repository Systems Analysis work package (SF-21SN01030406), and the GDSA PFLOTRAN Development work package (SF-21SN01030407). This report builds on developments reported in previous GDSA Framework milestones, particularly M3SF 20SN010304032.
Organic materials are an attractive choice for structural components due to their light weight and versatility. However, because they decompose at low temperatures relative to traditional materials, they pose a safety risk due to fire and loss of structural integrity. To quantify this risk, analysts use chemical kinetics models to describe the material pyrolysis and oxidation using thermogravimetric analysis. This process requires the calibration of many model parameters to closely match experimental data. Previous efforts in this field have largely been limited to finding a single best-fit set of parameters even though the experimental data may be very noisy. Furthermore, the chemical kinetics models are often simplified representations of the true decomposition process. The simplification induces model-form errors that the fitting process cannot capture. In this work we propose a methodology for calibrating decomposition models to thermogravimetric analysis data that accounts for uncertainty in the model form and the experimental data simultaneously. The methodology is applied to the decomposition of a carbon fiber epoxy composite with a three-stage reaction network and Arrhenius kinetics. The results show good overlap between the model predictions and thermogravimetric analysis data. Uncertainty bounds capture deviations of the model from the data. The calibrated parameter distributions are also presented; these distributions may be used in forward propagation of uncertainty in models that leverage this material.
The representation of material heterogeneity (also referred to as "spatial variation") plays a key role in the material failure simulation method used in ALEGRA. ALEGRA is an arbitrary Lagrangian-Eulerian shock and multiphysics code developed at Sandia National Laboratories that contains several methods for incorporating spatial variation into simulations. A desirable property of a spatial variation method is that it should produce consistent stochastic behavior regardless of the mesh used (a property referred to as "mesh independence"). However, mesh dependence has been reported when the Weibull distribution is used with ALEGRA's spatial variation method. This report provides additional insight into this mesh dependence through both theoretical analysis and numerical experiments. In particular, we have implemented a discrete minimum order statistic model whose properties are theoretically mesh independent.
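The mesh independence of a minimum order statistic model under Weibull weakest-link scaling can be checked with a short numerical experiment. This sketch is generic and is not ALEGRA's implementation; the shape parameter, scale, and element volumes are illustrative assumptions. The key identity is that the minimum of k i.i.d. Weibull(m, s) samples is Weibull(m, s k^(-1/m)), so scaling each element's Weibull scale by V^(-1/m) makes mesh refinement statistically invisible.

```python
import numpy as np

rng = np.random.default_rng(0)
m, lam0 = 2.0, 1.0  # Weibull shape and reference scale (illustrative values)

def element_strength(volume, size):
    # Weakest-link scaling: an element of volume V draws its strength from a
    # Weibull whose scale shrinks as V^(-1/m), so refining the mesh and taking
    # the minimum over sub-elements leaves the distribution unchanged.
    scale = lam0 * volume ** (-1.0 / m)
    return scale * rng.weibull(m, size=size)

n = 200_000
coarse = element_strength(1.0, n)                       # one element, volume 1
fine = element_strength(0.25, size=(n, 4)).min(axis=1)  # four volume-1/4 sub-elements
# Both samples follow the same Weibull distribution (mesh independence).
```

Refining one volume-1 element into four volume-1/4 sub-elements and taking the minimum leaves the strength distribution unchanged, which is the mesh-independence property the report targets.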
Continued production of harmful emissions such as NOx and soot from compression-ignition engines utilizing mixing-controlled combustion systems (i.e., diesel engines) remains a problem and is the subject of ongoing research. The inherently high efficiency, relatively low cost, and numerous other desirable attributes of such engines, coupled with a widely supported infrastructure, motivate their continued advancement. Recently, a scientifically distinct and mechanically simple technology called ducted fuel injection (DFI) has shown a robust ability to allow such engines to operate with simultaneously low engine-out soot and NOx emissions when it is employed with simulated exhaust-gas recirculation. To better understand the property ranges of sustainable, oxygenated-fuel blending stocks that will most improve engine performance, two oxygenated blendstocks were separately blended with a commercial diesel base fuel and tested within a heavy-duty diesel optical engine equipped with a four-duct DFI configuration. Conventional and crank-angle-resolved optical diagnostics were used to elucidate the effects of fuel ignition quality, oxygenate molecular structure, and overall oxygen content on engine performance.
Designing polymers with controlled nanoscale morphologies and scalable synthesis is of great interest in the development of fluorine-free materials for proton-exchange membranes in fuel cells. This study focuses on a precision polyethylene with phenylsulfonic acid branches at every fifth carbon, p5PhSA, with a high ion-exchange capacity (4.2 mmol/g). The polymers self-assemble into hydrophilic and hydrophobic co-continuous nanoscale domains. In the hydrated state, the hydrophilic domain, composed of polar sulfonic acid moieties and water, serves as a pathway for efficient mesoscopic proton conductivity. The morphology and proton transport of p5PhSA are evaluated under hydrated conditions using in situ X-ray scattering and electrochemical impedance spectroscopy techniques. At 40 °C and 95% relative humidity, the proton conductivity of p5PhSA is 0.28 S/cm, which is four times greater than Nafion 117 under the same conditions. Atomistic molecular dynamics (MD) simulations are also used to elucidate the interplay between the structure and the water dynamics. The MD simulations show strong nanophase separation between the percolated hydrophilic and hydrophobic domains over a wide range of water contents. The percolated hydrophilic nanoscale domain facilitates the rapid proton transport in p5PhSA and demonstrates the potential of precise hydrocarbon-based polymers as processible and effective proton-exchange membranes.
Klein, Brianna A.; Song, Yiwen; Ranga, Praneeth; Zhang, Yingying; Feng, Zixuan; Huang, Hsien-Lien; Santia, Marco D.; Badescu, Stefan C.; Gonzalez-Valle, C.U.; Perez, Carlos; Ferri, Kevin; Lavelle, Robert M.; Snyder, David W.; Deitz, Julia D.; Baca, Albert G.; Maria, Jon-Paul; Ramos-Alvarado, Bladimir; Hwang, Jinwoo; Zhao, Hongping; Wang, Xiaojia; Krishnamoorthy, Sriram; Foley, Brian M.; Choi, Sukwon
Heteroepitaxy of β-phase gallium oxide (β-Ga2O3) thin films on foreign substrates shows promise for the development of next-generation deep ultraviolet solar blind photodetectors and power electronic devices. In this work, the influences of the film thickness and crystallinity on the thermal conductivity of ($\bar{2}01$)-oriented β-Ga2O3 heteroepitaxial thin films were investigated. Unintentionally doped β-Ga2O3 thin films were grown on c-plane sapphire substrates with off-axis angles of 0° and 6° toward $\langle$$11\bar{2}0$$\rangle$ via metal–organic vapor phase epitaxy (MOVPE) and low-pressure chemical vapor deposition. The surface morphology and crystal quality of the β-Ga2O3 thin films were characterized using scanning electron microscopy, X-ray diffraction, and Raman spectroscopy. The thermal conductivities of the β-Ga2O3 films were measured via time-domain thermoreflectance. The interface quality was studied using scanning transmission electron microscopy. The measured thermal conductivities of the submicron-thick β-Ga2O3 thin films were relatively low compared to the intrinsic bulk value. The measured thin film thermal conductivities were compared with the Debye–Callaway model incorporating phononic parameters derived from first-principles calculations. The comparison suggests that the reduction in the thin film thermal conductivity can be partially attributed to enhanced phonon-boundary scattering as the film thickness decreases. The film thermal conductivities were found to be a strong function of not only the layer thickness but also the film quality, which results from growth on substrates with different offcut angles. Growth of β-Ga2O3 films on 6° offcut sapphire substrates was found to result in higher crystallinity and thermal conductivity than films grown on on-axis c-plane sapphire. However, the β-Ga2O3 films grown on 6° offcut sapphire exhibit a lower thermal boundary conductance at the β-Ga2O3/sapphire heterointerface.
In addition, the thermal conductivity of MOVPE-grown ($\bar{2}01$)-oriented β-(AlxGa1–x)2O3 thin films with Al compositions ranging from 2% to 43% was characterized. Because of phonon-alloy disorder scattering, the β-(AlxGa1–x)2O3 films exhibit lower thermal conductivities (2.8–4.7 W/m∙K) than the β-Ga2O3 thin films. The dominance of the alloy disorder scattering in β-(AlxGa1–x)2O3 is further evidenced by the weak temperature dependence of the thermal conductivity. This work provides fundamental insight into the physical interactions that govern phonon transport within heteroepitaxially grown β-phase Ga2O3 and (AlxGa1–x)2O3 thin films and lays the groundwork for the thermal modeling and design of β-Ga2O3 electronic and optoelectronic devices.
Metallic enclosures are commonly used to protect electronic circuits against unwanted electromagnetic (EM) interactions. However, these enclosures may be sealed with imperfect mechanical seams or joints. These joints form narrow slots that allow external EM energy to couple into the cavity and then to the internal circuits. This coupled EM energy can severely affect circuit operations, particularly at the cavity resonance frequencies when the cavity has a high Q factor. To model these slots and the corresponding EM coupling, a thin-slot sub-cell model [1], developed for slots in an infinite ground plane and extended to the numerical modeling of cavity-backed apertures, was successfully implemented in Sandia's electromagnetic code EIGER [2] and its next-generation counterpart Gemma [3]. However, this thin-slot model only considers resonances along the length of the slot. At sufficiently high frequencies, the resonances due to the slot depth must also be considered. Currently, slots must be explicitly meshed to capture these depth resonances, which can lead to low-frequency instability (due to electrically small mesh elements). Therefore, a slot sub-cell model that considers resonances in both length and depth is needed to efficiently and accurately capture the slot coupling.
Cyber testbeds provide an important mechanism for experimentally evaluating cyber security performance. However, as an experimental discipline, reproducible cyber experimentation is essential to assure valid, unbiased results. Even minor differences in setup, configuration, and testbed components can have an impact on the experiments, and thus, reproducibility of results. This paper documents a case study in reproducing an earlier emulation study, with the reproduced emulation experiment conducted by a different research group on a different testbed. We describe lessons learned as a result of this process, both in terms of the reproducibility of the original study and in terms of the different testbed technologies used by both groups. This paper also addresses the question of how to compare results between two groups' experiments, identifying candidate metrics for comparison and quantifying the results in this reproduction study.
This paper describes our team's experience using minimega, a network emulation system based on node and network virtualization, to support the evaluation of a set of networked and distributed systems for topology discovery, traffic classification, and engineering in the DARPA Searchlight program [18]. We present the methodology we developed to encode network and traffic definitions into an experiment description model (EDM) and show how our tools compile this model onto the underlying minimega API. We then present three case studies that demonstrate the ability of our EDM to support experiments with diverse network topologies, diverse traffic mixes, and networks with specialized layer-2 connectivity requirements. We conclude with the overall takeaways from using minimega to support our evaluation process.
Polymerization induced phase separation (PIPS) in a three-component thermoset is studied using molecular dynamics simulations of a new coarse-grained thermoset model. The system includes two crosslinker molecules, which differ in their glass transition temperatures (Tg) and chain lengths and thus have the potential for phase separation. One crosslinker has a high Tg corresponding to rubbery behavior, and simulations were performed for a short length (4 beads) and a long length (33 beads). The resin and the other crosslinker have low Tg. A coarse-grained model is developed with these features and with interaction parameters determined so that, for either rubbery crosslinker length, the system is in the liquid state at the cure temperature. For sufficiently slow reaction rates, the long rubbery molecule exhibits PIPS into a bicontinuous array of nanoscale domains, but the short one does not, reproducing recent experimental results. The simulations demonstrate that the reaction rates must be slow enough to allow diffusion to yield phase separation. In particular, the reaction rate corresponding to the secondary amine must be very slow; otherwise, the structure of crosslinked clusters and the substantially increased diffusion time will prevent PIPS.
We study the deformation of tantalum under extreme loading conditions. Experimental velocity data are drawn from both ramp loading experiments on Sandia's Z-machine and gas gun compression experiments. The drive conditions enable the study of materials under pressures greater than 100 GPa. We provide a detailed forward model of the experiments including a model of the magnetic drive for the Z-machine. Utilizing these experiments, we simultaneously infer several different types of physically motivated parameters describing equation of state, plasticity, and anelasticity via the computational device of Bayesian model calibration. Characteristics of the resulting calculated posterior distributions illustrate relationships among the parameters of interest via the degree of cross correlation. The calibrated velocity traces display good agreement with the experiments up to experimental uncertainty as well as improvement over previous calibrations. Examining the Z-shots and gun-shots together and separately reveals a trade-off between accuracy and transferability across different experimental conditions. Implications for model calibration, limitations from model form, and suggestions for improvements are discussed.
Electric vehicles (EVs) represent an important socio-economic development opportunity for islands and remote locations because they can lead to reduced fuel imports, electricity storage, grid services, and environmental and health benefits. This paper presents an overview of opportunities, challenges, and examples of EVs in islands and remote power systems, and is meant to provide background to researchers, utilities, energy offices, and other stakeholders interested in the impacts of the electrification of transportation. The impact of uncontrolled EV charging on electric grid operation is discussed, as well as several mitigation strategies. Of particular importance in many islands and remote systems is taking advantage of local resources by combining renewable energy and EV charging. Policy and economic issues are presented, with emphasis on the need for an overarching energy policy to guide strategies for EV growth. The key conclusion of this paper is that an orderly transition to EVs, one that maximizes benefits while addressing the challenges, requires careful analysis and comprehensive planning.
Ionogels are hybrid materials formed by impregnating the pore space of a solid matrix with a conducting ionic liquid. By combining the properties of both component materials, ionogels can act as self-supporting electrolytes in Li batteries. In this study, molecular dynamics simulations are used to investigate the dependence of the mechanical properties of silica ionogels on solid fraction, temperature, and pore width. Comparisons are made with corresponding aerogels. We find that increasing the solid matrix fraction increases the moduli and strength of the ionogel. This effect varies nonlinearly with temperature and strain rate, according to the contribution of the viscous ionic liquid to resisting deformation. Owing to the temperature and strain-rate sensitivity of the ionic liquid viscosity, the mechanical properties approach a linear mixing law at high temperatures and low strain rates. The median pore width of the solid matrix plays a complex role, with its influence varying qualitatively with deformation mode. Narrower pores increase the relevant elastic modulus under shear and uniaxial compression but reduce the modulus obtained under uniaxial tension. Conversely, both shear and tensile strength are increased by narrowing the pore width. All of these pore size effects become more pronounced as the silica fraction increases. Pore size effects, like the effects of temperature and strain rate, are linked to the ease of fluid redistribution within the pore space during deformation-induced changes in pore geometry.
Superionic phases of bulk anhydrous salts based on large cluster-like polyhedral (carba)borate anions are generally stable only well above room temperature, rendering them unsuitable as solid-state electrolytes in energy-storage devices that typically operate at close to room temperature. To unlock their technological potential, strategies are needed to stabilize these superionic properties down to subambient temperatures. One such strategy involves altering the bulk properties by confinement within nanoporous insulators. In the current study, the unique structural and ion dynamical properties of an exemplary salt, NaCB11H12, nanodispersed within porous, high-surface-area silica via salt-solution infiltration were studied by differential scanning calorimetry, X-ray powder diffraction, neutron vibrational spectroscopy, nuclear magnetic resonance, quasielastic neutron scattering, and impedance spectroscopy. Combined results hint at the formation of a nanoconfined phase that is reminiscent of the high-temperature superionic phase of bulk NaCB11H12, with dynamically disordered CB11H12− anions exhibiting liquid-like reorientational mobilities. However, in contrast to this high-temperature bulk phase, the nanoconfined NaCB11H12 phase with rotationally fluid anions persists down to cryogenic temperatures. Moreover, the high anion mobilities promoted fast cation diffusion, yielding Na+ superionic conductivities of ∼0.3 mS/cm at room temperature, with higher values likely attainable via future optimization. It is expected that this successful strategy for conductivity enhancement could be applied as well to other related polyhedral (carba)borate-based salts. Thus, these results present a new route to effectively utilize these types of superionic salts as solid-state electrolytes in future battery applications.
Poly(carbon monofluoride), or (CF)n, is a layered fluorinated graphite material consisting of nanosized platelets. Here, we present experimental multidimensional solid-state NMR spectra of (CF)n, supported by density functional theory (DFT) calculations of NMR parameters, which together overhaul our understanding of structure and bonding in the material by elucidating the many ways in which disorder manifests. We observe strong 19F NMR signals conventionally assigned to elongated or "semi-ionic" C-F bonds and find that these signals are in fact due to domains where the framework locally adopts boat-like cyclohexane conformations. We calculate that C-F bonds are weakened but not elongated by this conformational disorder. Exchange NMR suggests that conformational disorder avoids platelet edges. We also use a new J-resolved NMR method for disordered solids, which provides molecular-level resolution of highly fluorinated edge states. The strings of consecutive difluoromethylene groups at edges are relatively mobile. Topologically distinct edge features, including zigzag edges, crenellated edges, and coves, are resolved in our samples by solid-state NMR. Disorder should be controllable in a manner dependent on synthesis, affording new opportunities for tuning the properties of graphite fluorides.
Finding dense regions of graphs is fundamental in graph mining. We focus on the computation of dense hierarchies and regions with graph nuclei, a generalization of k-cores and trusses. Static computation of nuclei, namely through variants of 'peeling', is easy to understand and implement. However, many practically important graphs undergo continuous change. Dynamic algorithms, which maintain nucleus computations on dynamic graph streams, are nuanced and require significant effort to port between nuclei, e.g., from k-cores to trusses. We propose a unifying framework to maintain nuclei in dynamic graph streams. First, we show no dynamic algorithm can asymptotically beat re-computation, highlighting the need to experimentally understand variability. Next, we prove equivalence between k-cores on a special hypergraph and nuclei. Our algorithm splits the problem into maintaining the special hypergraph and maintaining k-cores on it. We implement our algorithm and experimentally demonstrate improvements of up to 108x over re-computation. We show that algorithmic improvements on k-cores apply to trusses and outperform truss-specific implementations.
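For reference, the static 'peeling' baseline that such frameworks generalize can be sketched for the simplest nucleus, the k-core. This is a textbook-style illustration, not the paper's dynamic algorithm; the function name and example graph are ours, and the O(m) bucket-queue variant is replaced by a simpler minimum scan for clarity.

```python
from collections import defaultdict

def core_numbers(edges):
    # Classic peeling: repeatedly remove a minimum-degree vertex; the running
    # maximum of removal-time degrees gives each vertex's core number.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    core, k = {}, 0
    while deg:
        v = min(deg, key=deg.get)   # a bucket queue makes this O(1) amortized
        k = max(k, deg[v])
        core[v] = k
        for u in adj[v]:
            if u in deg:
                deg[u] -= 1
        del deg[v]
    return core
```

On a triangle with a pendant vertex, peeling assigns core number 1 to the pendant and 2 to the triangle vertices; a dynamic algorithm maintains exactly this kind of decomposition under edge insertions and deletions instead of recomputing it.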
In this paper, the effects of pulsed loads on medium-voltage DC (MVDC) electric ships, and strategies for mitigating them, are explored. In particular, the effect of high-powered pulsed loads on generator frequency stability is examined. As a method to stabilize a generator that has been made unstable by high-powered pulsed loads, it is proposed to temporarily extract energy from the propulsion system using regenerative propeller braking. The damping effect of this control method on generator speed oscillations is examined. The impacts on propeller and ship speed are also presented.
Recently, lithium nitride (Li3N) has been proposed as a chemical warfare agent (CWA) neutralization reagent for its ability to produce nucleophilic ammonia molecules and hydroxide ions in aqueous solution. Quantum chemical calculations can provide insight into the Li3N neutralization process that has been studied experimentally. Here, we calculate reaction free energies associated with the Li3N-based neutralization of the CWA VX using quantum chemical density functional theory and ab initio methods. We find that alkaline hydrolysis is more favorable than either ammonolysis or neutral hydrolysis for initial P-S and P-O bond cleavages. Reaction free energies of subsequent reactions are calculated to determine the full reaction pathway. Notably, products predicted from favorable reactions have been identified in previous experiments.
Costs to permit marine energy projects are poorly understood. In this paper we examine environmental compliance and permitting costs for 19 projects in the U.S., covering the last two decades. Guided discussions were conducted with developers over a three-year period to obtain historical and ongoing project cost data for environmental studies (e.g., baseline or pre-project site characterization as well as post-installation effects monitoring), stakeholder outreach, and mitigation, as well as qualitative experience of the permitting process. Data are organized by technology type, permitted capacity, pre- and post-installation status, geographic location, and funding type. We also compare our findings with earlier logic models created for the Department of Energy (i.e., Reference Models). The environmental studies most commonly performed were for fish and fisheries, noise, marine habitat/benthic studies, and marine mammals. Studies for tidal projects were more expensive than those performed for wave projects, and the ranges of reported project costs tended to be wider than the ranges predicted by logic models. For the eight projects reporting full project costs, from project start to FERC or USACE permit, the average share for environmental permitting compliance was 14.6%.
Automated vehicles (AVs) hold great promise for improving safety, as well as reducing congestion and emissions. In order to make automated vehicles commercially viable, a reliable, high-performance vehicle-based computing platform that meets ever-increasing computational demands will be key. Given the state of existing digital computing technology, designers will face significant challenges in meeting the needs of highly automated vehicles without exceeding thermal constraints or consuming a large portion of the energy available on vehicles, thereby reducing range between charges or refills. The accompanying increases in energy use for AVs will place increased demand on energy production and distribution infrastructure, which further motivates increasing computational energy efficiency.
Based on the latest DOE (Department of Energy) milestones, Sandia needs to convert to IPv6 (Internet Protocol version 6)-only networks over the next 5 years. Our original IPv6 migration plan did not include migrating to IPv6-only networks at any point within the next 10 years, so it must necessarily change. To be successful in this endeavor, we need to evaluate technologies that will enable us to deploy IPv6-only networks early without creating system stability or security issues. We have set up a test environment using technology representative of our production network where we configured and evaluated industry standard translation technologies and techniques. Based on our results, bidirectional translation between IPv4 (Internet Protocol version 4) and IPv6 is achievable with our current equipment, but due to the complexity of the configuration, may not scale well to our production environment.
Underground explosions nonlinearly deform the surrounding earth material and can interact with the free surface to produce spall. At typical seismological observation distances, however, the seismic wavefield can be accurately modeled using linear approximations. Although nonlinear algorithms can accurately simulate very-near-field ground motions, they are computationally expensive and potentially unnecessary for far-field wave simulations. Conversely, linearized seismic wave propagation codes are orders of magnitude faster and can accurately simulate the wavefield out to typical observational distances. Thus, a means of approximating a nonlinear source with a linear equivalent source would be advantageous both for scenario modeling and for interpreting seismic source models that are based on linear, far-field approximations. Such an equivalent enables fast linear seismic modeling that still incorporates much of the nonlinear source mechanics, providing the advantages of both types of simulation without the cost of the nonlinear computation. In this report we first show the computational advantage of using linear equivalent models, then discuss how the near-source environment (within the nonlinear wavefield regime) affects linear source equivalents and how well we can fit seismic wavefields derived from nonlinear sources.
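One generic way to "fit" a linear equivalent to a nonlinear simulation is to recover an equivalent source time function s such that convolving it with a linear Green's function g reproduces the observed waveform d. The sketch below illustrates that least-squares deconvolution on toy signals; the Green's function, source pulse, and fitting procedure are all hypothetical stand-ins, not the report's actual method.

```python
# Illustrative least-squares fit of a linear equivalent source: find s so that
# g * s ~= d, where d stands in for far-field output of a nonlinear simulation
# and g is a linear-propagation Green's function. All signals are toy examples.
import numpy as np

def conv_matrix(g, n):
    """Toeplitz matrix G such that G @ s == np.convolve(g, s) for len(s) == n."""
    G = np.zeros((len(g) + n - 1, n))
    for j in range(n):
        G[j:j + len(g), j] = g
    return G

t = np.arange(64)
g = np.exp(-t / 8.0) * np.sin(t / 2.0)          # toy Green's function
s_true = np.exp(-((t[:32] - 8.0) ** 2) / 8.0)   # toy source time function
d = np.convolve(g, s_true)                      # "observed" far-field waveform

G = conv_matrix(g, len(s_true))
s_fit, *_ = np.linalg.lstsq(G, d, rcond=None)   # equivalent source by least squares
print(np.max(np.abs(s_fit - s_true)))           # small residual on noiseless data
```

With noisy or band-limited data the deconvolution becomes ill-posed and some regularization (e.g., a damping term in the least-squares system) would be needed; the clean-data case is shown only to make the equivalent-source idea concrete.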
The generalized singular value decomposition (GSVD) is a valuable tool with many applications in computational science. However, computing the GSVD for large-scale problems is challenging. Motivated by applications in hyper-differential sensitivity analysis (HDSA), we propose new randomized algorithms for computing the GSVD that use randomized subspace iteration and weighted QR factorization. A detailed error analysis provides insight into the accuracy of the algorithms and the choice of algorithmic parameters. We demonstrate the performance of our algorithms on test matrices and on a large-scale model problem in which HDSA is used to study subsurface flow.
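Randomized subspace iteration, one of the two building blocks named above, can be illustrated on the simpler single-matrix SVD problem. The sketch below is the standard randomized low-rank SVD scheme with oversampling and power iterations; it is not the paper's GSVD algorithm (which additionally requires the weighted QR factorization), just a minimal demonstration of the underlying technique.

```python
# Minimal sketch of randomized subspace iteration for an approximate rank-k SVD.
# This is the generic building block, not the report's randomized GSVD algorithm.
import numpy as np

def randomized_svd(A, k, q=2, p=5, seed=None):
    """Approximate rank-k SVD of A via randomized subspace iteration.

    q: number of power iterations (sharpens the range estimate),
    p: oversampling beyond the target rank k."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Y = A @ rng.standard_normal((n, k + p))   # random sketch of the range of A
    Q, _ = np.linalg.qr(Y)
    for _ in range(q):                        # power iterations; re-orthogonalize
        Q, _ = np.linalg.qr(A.T @ Q)          # to keep the basis numerically stable
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                               # small (k+p) x n projected problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 100))  # exact rank 15
U, s, Vt = randomized_svd(A, k=15, seed=2)
print(np.allclose(U * s @ Vt, A, atol=1e-8))  # rank-15 matrix recovered essentially exactly
```

For an exactly low-rank matrix the sketch captures the range with probability one, so the reconstruction is accurate to machine precision; for matrices with slowly decaying spectra, the oversampling p and iteration count q govern the accuracy, which is the kind of parameter choice the paper's error analysis addresses.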
PCalc is a software tool that computes travel-time predictions, ray-path geometry, and model queries. This software has a rich set of features, including the ability to use custom 3D velocity models to compute predictions for a variety of geometries. The PCalc software is especially useful for research related to seismic monitoring applications.
The continuum-scale electrokinetic porous-media flow and excess charge redistribution equations are uncoupled using eigenvalue decomposition. The uncoupling results in a pair of independent diffusion equations for “intermediate” potentials subject to modified material properties and boundary conditions. The fluid pressure and electrostatic potential are then found by recombining the solutions to the two intermediate uncoupled problems in a matrix-vector multiplication. Expressions for the material properties or source terms in the intermediate uncoupled problem may require extended precision or careful rewriting to avoid numerical cancellation, but the solutions themselves can typically be computed in double precision. The approach works with analytical or gridded numerical solutions and is illustrated through two examples. The solution for flow to a pumping well is manipulated to predict streaming potential and electroosmosis, and a periodic one-dimensional analytical solution is derived and used to predict electroosmosis and streaming potential in a laboratory flow cell subjected to low frequency alternating current and pressure excitation. The examples illustrate the utility of the eigenvalue decoupling approach, repurposing existing analytical solutions or numerical models and leveraging solutions that are simpler to derive for coupled physics.
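The decoupling step can be demonstrated numerically on a small stand-in problem. The sketch below uses a hypothetical 2x2 coupling matrix (diagonal entries playing the role of hydraulic and electrical conductivities, off-diagonals the electrokinetic coupling) for a coupled relaxation system du/dt = -K u: eigendecomposition K = V L V^-1 yields independent "intermediate" equations that are solved in closed form, after which the physical fields are recovered by the matrix-vector recombination described above.

```python
# Numerical sketch of the eigenvalue-decoupling idea on a hypothetical 2x2
# coupled system du/dt = -K u: transform to independent intermediate equations
# dw/dt = -L w via K = V L V^{-1}, solve each mode, recombine as u = V w.
import numpy as np

K = np.array([[2.0, 0.4],    # diagonal: direct (hydraulic, electrical) terms
              [0.3, 1.0]])   # off-diagonal: electrokinetic coupling (toy values)
lam, V = np.linalg.eig(K)
Vinv = np.linalg.inv(V)

# The intermediate problems are independent: V^{-1} K V is diagonal.
print(np.allclose(Vinv @ K @ V, np.diag(lam)))

u0 = np.array([1.0, -0.5])   # initial (pressure, potential)-like state
def u(t):
    w = np.exp(-lam * t) * (Vinv @ u0)  # each intermediate mode decays on its own
    return V @ w                        # recombine: matrix-vector multiplication

# Verify the recombined solution satisfies the original coupled system.
t, h = 0.7, 1e-6
dudt = (u(t + h) - u(t - h)) / (2 * h)  # centered finite-difference derivative
print(np.allclose(dudt, -K @ u(t), atol=1e-6))
```

The same pattern carries over to the diffusion equations in the paper, where each intermediate potential satisfies its own uncoupled diffusion problem with modified material properties and boundary conditions; the caveat noted above about extended precision applies to forming those modified coefficients, not to this recombination step.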
LocOO3D is a software tool that computes geographic locations for seismic events at regional to global scales. This software has a rich set of features, including the ability to use custom 3D velocity models, correlated observations, and master-event locations. The LocOO3D software is especially useful for research related to seismic monitoring applications, since it allows users to easily explore a variety of location methods and scenarios and is compatible with the CSS3.0 data format used in monitoring applications. The LocOO3D software, User's Manual, and examples are available at https://github.com/sandialabs/LocOO3D. For additional information on GeoTess, SALSA3D, RSTT, and other related software, please see https://github.com/sandialabs/GeoTessJava, www.sandia.gov/geotess, www.sandia.gov/salsa3d, and www.sandia.gov/rstt.