Although many software teams across the laboratories comply with yearly software quality engineering (SQE) assessments, the practice of introducing quality into each phase of the software lifecycle, or into team processes, may vary substantially. Even with the support of a quality engineer, many teams struggle to adapt and right-size software engineering best practices in quality to fit their context, and these activities aren’t framed in a way that motivates teams to take action. In short, software quality is often a “check the box for compliance” activity instead of a cultural practice that both values software quality and knows how to achieve it. In this report, we present the results of our 6600 VISTA Innovation Tournament project, "Incentivizing and Motivating High Confidence and Research Software Teams to Adopt the Practice of Quality." We present our findings and a roadmap for future work based on 1) a rapid review of relevant literature, 2) lessons learned from an internal design thinking workshop, and 3) an external Collegeville 2021 workshop. These activities provided an opportunity for team ideation and community engagement/feedback. Based on our findings, we believe a coordinated effort (e.g., a strategic communication campaign) aimed at diffusing the innovation of the practice of quality across Sandia National Laboratories could, over time, effect meaningful organizational change. As such, our roadmap addresses strategies for motivating and incentivizing individuals ranging from early-career to seasoned software developers and scientists.
This paper presents a practical methodology for propagating and processing uncertainties associated with random measurement and estimation errors (which vary from test to test) and systematic measurement and estimation errors (uncertain but similar from test to test) in inputs and outputs of replicate tests to characterize response variability of stochastically varying test units. Also treated are test-condition control variability from test to test and sampling uncertainty due to limited numbers of replicate tests. These aleatory variabilities and epistemic uncertainties result in uncertainty on computed statistics of output response quantities. The methodology was developed in the context of processing experimental data for “real-space” (RS) model validation comparisons against model-predicted statistics and uncertainty thereof. The methodology is flexible and sufficient for many types of experimental and data uncertainty, offering the most extensive data uncertainty quantification (UQ) treatment of any model validation method of which the authors are aware. It handles both interval and probabilistic uncertainty descriptions and can be performed with relatively little computational cost through the use of simple and effective dimension- and order-adaptive polynomial response surfaces in a Monte Carlo (MC) uncertainty propagation approach. A key feature of the progressively upgraded response surfaces is that they enable estimation of the propagation error contributed by the surrogate model. Sensitivity analysis of the relative contributions of the various uncertainty sources to the total uncertainty of statistical estimates is also presented. The methodologies are demonstrated on real experimental validation data involving all the mentioned sources and types of error and uncertainty in five replicate tests of pressure vessels heated and pressurized to failure. Simple spreadsheet procedures are used for all processing operations.
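As a hedged illustration of the core propagation idea (a minimal sketch, not the report's spreadsheet procedures or adaptive polynomial response surfaces), the following Python fragment draws a systematic error shared by all replicates and independent random errors per replicate, pushes them through a stand-in model, and accumulates the resulting uncertainty on the computed output statistics. The model form, measured values, and error magnitudes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical physical model: output depends on two measured inputs.
def model(x1, x2):
    return 3.0 * x1 + 0.5 * x2**2

# Nominal measured inputs for five replicate tests (illustrative values).
x1_meas = np.array([1.02, 0.98, 1.05, 0.97, 1.01])
x2_meas = np.array([2.1, 1.9, 2.0, 2.2, 1.8])

N_mc = 20000
sig_random = 0.02      # random (per-test) measurement error, std. dev.
sig_systematic = 0.03  # systematic error shared across tests, std. dev.

means, stds = [], []
for _ in range(N_mc):
    bias = rng.normal(0.0, sig_systematic)                      # same for all replicates
    x1 = x1_meas + bias + rng.normal(0.0, sig_random, x1_meas.size)
    x2 = x2_meas + rng.normal(0.0, sig_random, x2_meas.size)
    y = model(x1, x2)
    means.append(y.mean())           # statistic 1: response mean over replicates
    stds.append(y.std(ddof=1))       # statistic 2: response std. dev. over replicates

print("output mean:", np.mean(means), "+/-", np.std(means, ddof=1))
print("output std :", np.mean(stds), "+/-", np.std(stds, ddof=1))
```

The spread of the accumulated statistics reflects how measurement and estimation uncertainty, plus limited replicates, translate into uncertainty on the statistics used in validation comparisons.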
Arithmetic Coding (AC) using Prediction by Partial Matching (PPM) is a compression algorithm that can be used as a machine learning algorithm. This paper describes a new algorithm, NGram PPM. NGram PPM has all the predictive power of AC/PPM, but at a fraction of the computational cost. Unlike compression-based analytics, it is also amenable to a vector space interpretation, which enables integration with other traditional machine learning algorithms. AC/PPM is reviewed, including its application to machine learning. NGram PPM is then described, and test results comparing it to AC/PPM are presented.
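For readers unfamiliar with compression-based classification, the sketch below is a generic n-gram illustration of the idea (it is not the NGram PPM algorithm described in the paper): each class trains a character n-gram model, and a snippet is assigned to the class whose model gives it the fewest bits per symbol, loosely analogous to the code length an AC/PPM compressor would assign.

```python
import math
from collections import Counter

def ngram_counts(text, n):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

class NGramModel:
    """Character n-gram model scored by cross-entropy (bits per symbol)."""
    def __init__(self, n=3):
        self.n = n

    def fit(self, text):
        self.ngrams = ngram_counts(text, self.n)
        self.contexts = ngram_counts(text, self.n - 1)
        self.vocab = len(set(text))          # for Laplace smoothing
        return self

    def bits_per_symbol(self, text):
        total, count = 0.0, 0
        for i in range(len(text) - self.n + 1):
            g, ctx = text[i:i + self.n], text[i:i + self.n - 1]
            p = (self.ngrams.get(g, 0) + 1) / (self.contexts.get(ctx, 0) + self.vocab)
            total -= math.log2(p)
            count += 1
        return total / max(count, 1)

# Classify a snippet by the model that "compresses" it best (lowest bits/symbol).
corpora = {
    "english": "the quick brown fox jumps over the lazy dog " * 20,
    "code": "for i in range(10): print(i) " * 20,
}
models = {label: NGramModel(3).fit(text) for label, text in corpora.items()}
snippet = "the lazy dog jumps over the quick fox"
print(min(models, key=lambda k: models[k].bits_per_symbol(snippet)))
```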
U.S. critical infrastructure assets are often designed to operate for decades, and yet long-term planning practices have historically ignored climate change. With the current pace of changing operational conditions and severe weather hazards, research is needed to improve our ability to translate complex, uncertain risk assessment data into actionable inputs that improve decision-making for infrastructure planning. Decisions made today need to explicitly account for climate change – the chronic stressors, the evolution of severe weather events, and the wide-ranging uncertainties. If done well, decision making with climate in mind will result in increased resilience and decreased impacts to our lives, economies, and national security. We present a three-tier approach to create the research products needed in this space: bringing together climate projection data, severe weather event modeling, asset-level impacts, and context-specific decision constraints and requirements. At each step, it is crucial to capture uncertainties and to communicate those uncertainties to decision-makers. While many components of the necessary research are mature (e.g., climate projection data), there has been little effort to develop proven tools for long-term planning in this space. The combination of chronic and acute stressors, spatial and temporal uncertainties, and interdependencies among infrastructure sectors coalesce into a complex decision space. By applying known methods from decision science and data analysis, we can work to demonstrate the value of an interdisciplinary approach to climate-hazard decision making for long-term infrastructure planning.
Scientific applications run on high-performance computing (HPC) systems are critical for many national security missions within Sandia and the NNSA complex. However, these applications often face performance degradation and even failures that are challenging to diagnose. To provide unprecedented insight into these issues, the HPC Development, HPC Systems, Computational Science, and Plasma Theory & Simulation departments at Sandia crafted and completed their FY21 ASC Level 2 milestone entitled "Integrated System and Application Continuous Performance Monitoring and Analysis Capability." The milestone created a novel integrated HPC system and application monitoring and analysis capability by extending Sandia's Kokkos application portability framework, Lightweight Distributed Metric Service (LDMS) monitoring tool, and scalable storage, analysis, and visualization pipeline. The extensions to Kokkos and LDMS enable collection and storage of application data during run time, as it is generated, with negligible overhead. This data is combined with HPC system data within the extended analysis pipeline to present relevant visualizations of derived system and application metrics that can be viewed at run time or post run. This new capability was evaluated using several week-long, 290-node runs of Sandia's ElectroMagnetic Plasma In Realistic Environments (EMPIRE) modeling and design tool and resulted in 1 TB of application data and 50 TB of system data. EMPIRE developers remarked that this capability was incredibly helpful for quickly assessing application health and performance alongside system state. In short, this milestone work built the foundation for an expansive HPC system and application data collection, storage, analysis, visualization, and feedback framework that will increase the total scientific output of Sandia's HPC users.
Subsurface energy activities such as unconventional resource recovery, enhanced geothermal energy systems, and geologic carbon storage require fast and reliable methods to account for complex, multiphysical processes in heterogeneous fractured and porous media. Although reservoir simulation is considered the industry standard for simulating these subsurface systems with injection and/or extraction operations, it requires incorporating spatio-temporal “Big Data” into the simulation model, which is typically a major challenge during the model development and computational phases. In this work, we developed and applied various deep neural network-based approaches to (1) process multiscale image segmentation, (2) generate ensemble members of drainage networks, flow channels, and porous media using deep convolutional generative adversarial networks, (3) construct multiple hybrid neural networks, such as convolutional LSTM and convolutional neural network-LSTM, to develop fast and accurate reduced-order models for shale gas extraction, and (4) apply physics-informed neural networks and deep Q-learning to flow and energy production. We hypothesized that physics-based machine learning/deep learning can overcome the shortcomings of traditional machine learning methods, where data-driven models have faltered beyond the data and physical conditions used for training and validation. We improved and developed novel approaches to demonstrate that physics-based ML can allow us to incorporate physical constraints (e.g., scientific domain knowledge) into the ML framework. Outcomes of this project will be readily applicable to many energy and national security problems that are particularly defined by multiscale features and network systems.
We present our research findings on the novel Named Data Networking (NDN) protocol. In this work, we defined key attack scenarios for possible exploitation and detailed software security testing procedures to evaluate the security of the NDN software. This work was done in the context of distributed energy resources (DER). The software security testing included execution of unit tests and static code analyses to better understand the rigor of the software and the security that has been implemented. The results from the penetration testing are presented. Recommendations are discussed to provide additional defense for secure end-to-end NDN communications.
Graph partitioning has long been an important tool for partitioning work among processors to minimize communication cost and balance the workload. As accelerator-based supercomputers are emerging as the standard, graph partitioning becomes even more important because applications are rapidly moving to these architectures. However, no distributed-memory-parallel, multi-GPU graph partitioner has been available to applications. We developed a spectral graph partitioner, Sphynx, using the portable, accelerator-friendly stack of the Trilinos framework. Sphynx allows the use of different preconditioners and exploits their unique advantages. We use Sphynx to systematically evaluate the various algorithmic choices in spectral partitioning with a focus on GPU performance. We perform those evaluations on two distinct classes of graphs: regular (such as meshes and matrices from finite element methods) and irregular (such as social networks and web graphs), and show that different settings and preconditioners are needed for these graph classes. Experimental results on the Summit supercomputer show that Sphynx is the fastest alternative on irregular graphs in an application-friendly setting and obtains partitioning quality close to ParMETIS on regular graphs. When compared to nvGRAPH on a single GPU, Sphynx is faster and obtains better balance and better quality partitions. Sphynx provides a good and robust partitioning method across a wide range of graphs for applications looking for a GPU-based partitioner.
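To make the spectral idea concrete, here is a minimal, hedged sketch of spectral bisection using the Fiedler vector of the graph Laplacian. It is not Sphynx itself: a production partitioner works in distributed memory on GPUs, uses preconditioned iterative eigensolvers (e.g., LOBPCG), and produces k-way partitions, whereas this toy uses a dense eigensolver on a small graph.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import eigh

def spectral_bisect(A):
    """Bisect a graph (symmetric adjacency A) by the Fiedler vector of its
    combinatorial Laplacian, the core idea behind spectral partitioners."""
    A = sp.csr_matrix(A)
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # Laplacian D - A
    w, v = eigh(L.toarray())                               # toy: dense eigensolve
    fiedler = v[:, 1]                                      # 2nd-smallest eigenpair
    return fiedler > np.median(fiedler)                    # two roughly balanced parts

# Tiny example: an 8-vertex path graph splits into its two halves.
edges = [(i, i + 1) for i in range(7)]
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(8, 8))
print(spectral_bisect(A).astype(int))
```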
In this LDRD project, we developed a versatile capability for high-resolution measurements of electron scattering processes in gas-phase molecules, such as ionization, dissociation, and electron attachment/detachment. This apparatus is designed to advance fundamental understanding of these processes and to inform predictions of plasmas associated with applications such as plasma-assisted combustion, neutron generation, re-entry vehicles, and arcing that are critical to national security. We use innovative coupling of electron-generation and electron-imaging techniques that leverages Sandia’s expertise in ion/electron imaging methods. Velocity map imaging provides a measure of the kinetic energies of electrons or ion products from electron scattering in an atomic or molecular beam. We designed, constructed, and tested the apparatus. Tests include dissociative electron attachment to O2 and SO2, as well as a new method for studying laser-initiated plasmas. This capability sets the stage for new studies in dynamics of electron scattering processes, including scattering from excited-state atoms and molecules.
The DOE-NE NWM Cloud was designed to be a generic set of tools and applications for any nuclear waste management program. As policymakers continue to consider approaches that emphasize consolidated interim storage and transportation of spent nuclear fuel (SNF), a gap analysis of the tools and applications provided for SNF and high-level radioactive waste disposal, in comparison to those needed for siting, licensing, and developing a consolidated interim storage facility and/or for a transportation campaign, will help prepare DOE for implementing such potential policy direction. This report evaluates the points of alignment and potential gaps between the applications on the NWM Cloud that supported the SNF disposal project and the applications needed to address QA requirements and other project support needs of an SNF storage project.
In this project, our goal was to develop methods that would allow us to make accurate predictions about individual differences in human cognition. Understanding such differences is important for maximizing human and human-system performance. There is a large body of research on individual differences in the academic literature. Unfortunately, it is often difficult to connect this literature to applied problems, where we must predict how specific people will perform or process information. In an effort to bridge this gap, we set out to answer the question: can we train a model to make predictions about which people understand which languages? We chose language processing as our domain of interest because of the well-characterized differences in neural processing that occur when people are presented with linguistic stimuli that they do or do not understand. Although our original plan to conduct several electroencephalography (EEG) studies was disrupted by the COVID-19 pandemic, we were able to collect data from one EEG study and a series of behavioral experiments in which data were collected online. The results of this project indicate that machine learning tools can make reasonably accurate predictions about an individual's proficiency in different languages, using EEG data or behavioral data alone.
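As a purely illustrative, hedged sketch of the kind of analysis implied here (synthetic features and labels, a generic scikit-learn classifier, and cross-validation; none of this reflects the project's actual data or models), one might estimate how well language comprehension can be predicted from EEG-derived features as follows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per trial, columns are EEG-derived
# features (e.g., mean amplitudes in post-stimulus windows per channel).
# Labels: 1 if the participant understands the stimulus language, else 0.
n_trials, n_features = 200, 32
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :4] += 0.8  # inject a weak "comprehension" effect into a few features

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```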
This project studied the potential for multiscale group dynamics in complex social systems, including emergent recursive interaction. Current social theory on group formation and interaction focuses on a single scale (individuals forming groups) and is largely qualitative in its explanation of mechanisms. We combined theory, modeling, and data analysis to find evidence that these multiscale phenomena exist, and to investigate their potential consequences and develop predictive capabilities. In this report, we discuss the results of data analysis showing that some group dynamics theory holds at multiple scales. We introduce a new theory on communicative vibration that uses social network dynamics to predict group life cycle events. We discuss a model of behavioral responses to the COVID-19 pandemic that incorporates influence and social pressures. Finally, we discuss a set of modeling techniques that can be used to simulate multiscale group phenomena.
The aim of this project was to advance single-cell RNA-Seq methods toward the establishment of a platform that may be used to simultaneously interrogate the gene expression profiles of mammalian host cells and bacterial pathogens. Existing genetic sequencing methods that measure bulk groups of cells do not account for the heterogeneity of cell-microbe interactions that occur within a complex environment, have limited efficiency, and cannot simultaneously interrogate bacterial sequences. In order to overcome these challenges, separate biochemistry workflows were developed based on a Not-So-Random hexamer priming strategy or libraries of targeted molecular probes. Computational tools were developed to facilitate these methods, and feasibility was demonstrated for single-cell RNA-Seq for both bacterial and mammalian transcriptomes. This work supports cross-agency national priorities on addressing the threat of biological pathogens and understanding the role of the microbiome in modulating immunity and susceptibility to infection.
The propagation of a wave pulse due to low-speed impact on a one-dimensional, heterogeneous bar is studied. Due to the dispersive character of the medium, the pulse attenuates as it propagates. This attenuation is studied over propagation distances that are much longer than the size of the microstructure. A homogenized peridynamic material model can be calibrated to reproduce the attenuation and spreading of the wave. The calibration consists of matching the dispersion curve for the heterogeneous material near the limit of long wavelengths. It is demonstrated that the peridynamic method reproduces the attenuation of wave pulses predicted by an exact microstructural model over large propagation distances.
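As a hedged, textbook-style sketch of the calibration idea (standard one-dimensional linear peridynamics; not necessarily the exact equations used in the study), plane waves in a bar with micromodulus C(ξ) and horizon δ satisfy the dispersion relation below, whose long-wavelength expansion supplies the wave speed and leading dispersive correction that the calibration matches against the heterogeneous material.

```latex
% 1-D linear peridynamic dispersion relation (hedged sketch):
\[
  \rho\,\omega^2(k) \;=\; \int_{-\delta}^{\delta} C(\xi)\,\bigl(1-\cos(k\xi)\bigr)\,d\xi .
\]
% Long-wavelength expansion (k\delta \ll 1):
\[
  \omega^2(k) \;\approx\; \frac{k^2}{\rho}\int_{-\delta}^{\delta}\frac{\xi^2}{2}\,C(\xi)\,d\xi
  \;-\;\frac{k^4}{\rho}\int_{-\delta}^{\delta}\frac{\xi^4}{24}\,C(\xi)\,d\xi ,
\]
% so C(\xi) and \delta can be chosen to reproduce the measured wave speed and the
% leading dispersive (attenuation and spreading) correction near the long-wavelength limit.
```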
The purpose of this report is to document improvements in the simulation of commercial vacuum drying procedures at the Nuclear Energy Work Complex at Sandia National Laboratories. Validation of the extent of water removal in a dry spent nuclear fuel storage system based on drying procedures used at nuclear power plants is needed to close existing technical gaps. Operational conditions leading to incomplete drying may have potential impacts on the fuel, cladding, and other components in the system. A general lack of data suitable for model validation of commercial nuclear canister drying processes necessitates additional, well-designed investigations of drying process efficacy and water retention. Scaled tests that incorporate relevant physics and well-controlled boundary conditions are essential to provide insight and guidance to the simulation of prototypic systems undergoing drying processes.
Nonlocal models naturally handle a range of physics of interest to SNL, but discretization of their underlying integral operators poses mathematical challenges to realize the accuracy and robustness commonplace in discretization of local counterparts. This project focuses on the concept of asymptotic compatibility, namely preservation of the limit of the discrete nonlocal model to a corresponding well-understood local solution. We address challenges that have traditionally troubled nonlocal mechanics models, primarily related to consistency guarantees and boundary conditions. For simple problems such as diffusion and linear elasticity, we have developed a complete error analysis theory providing consistency guarantees. We then take these foundational tools to develop new state-of-the-art capabilities for lithiation-induced failure in batteries, ductile failure in problems driven by contact, blast-on-structure-induced failure, and brittle/ductile failure of thin structures. We also summarize ongoing efforts using these frameworks in data-driven modeling contexts. This report provides a high-level summary of all publications that followed from these efforts.
Nonlocal models use integral operators that embed length scales in their definition. However, the integrands in these operators are difficult to define from the data that are typically available for a given physical system, such as laboratory mechanical property tests. In contrast, molecular dynamics (MD) does not require these integrands, but it suffers from computational limitations in the length and time scales it can address. To combine the strengths of both methods and to obtain a coarse-grained, homogenized continuum model that efficiently and accurately captures materials' behavior, we propose a learning framework to extract, from MD data, an optimal nonlocal model as a surrogate for MD displacements. Our framework guarantees that the resulting model is mathematically well-posed, physically consistent, and that it generalizes well to settings that are different from the ones used during training. The efficacy of this approach is demonstrated with several numerical tests for single-layer graphene, both for a perfect crystal and in the presence of thermal noise.
Leakage pathways through caprock lithologies for underground storage of CO2 and/or enhanced oil recovery (EOR) include intrusion into nano-pore mudstones, flow within fractures and faults, and larger-scale sedimentary heterogeneity (e.g., stacked channel deposits). To assess multiscale sealing integrity of the caprock system that overlies the Morrow B sandstone reservoir, Farnsworth Unit (FWU), Texas, USA, we combine pore-to-core observations, laboratory testing, well logging results, and noble gas analysis. A cluster analysis of gamma ray, compressional slowness, and other logs was combined with caliper responses and triaxial rock mechanics testing to define eleven lithologic classes across the upper Morrow shale and Thirteen Finger limestone caprock units, with estimations of dynamic elastic moduli and fracture breakdown pressures (minimum horizontal stress gradients) for each class. Mercury porosimetry determinations of CO2 column heights in sealing formations yield values exceeding reservoir height. Noble gas profiles provide a “geologic time-integrated” assessment of fluid flow across the reservoir-caprock system, with Morrow B reservoir measurements consistent with decades-long EOR water-flooding, and upper Morrow shale and lower Thirteen Finger limestone values consistent with long-term geohydrologic isolation. Together, these data suggest an excellent sealing capacity for the FWU and provide limits for injection pressure increases accompanying carbon storage activities.
Given the prevalent role of metals in a variety of industries, schemes to integrate the corresponding constitutive models in finite element applications have long been studied. A number of formulations have been developed to accomplish this task, each with its own advantages and costs. Often the focus has been on ensuring the accuracy and numerical stability of these algorithms to enable robust integration. While important, emphasis on these performance metrics may come at the cost of computational expense, potentially neglecting the needs of individual problems. In the current work, the performance of two of the most common integration methods for anisotropic plasticity -- the convex cutting plane (CCP) and closest point projection (CPP) -- is assessed across a variety of metrics, including accuracy and cost. A variety of problems are considered, ranging from single elements to large representative simulations, including both implicit quasi-static and explicit transient dynamic responses. The relative performance of each scheme in the different instances is presented with an eye toward guidance on when the different algorithms may be beneficial.
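For orientation, the following is a hedged sketch of a closest point projection (radial return) update for isotropic J2 plasticity with linear hardening, a simplified analogue of the CPP scheme discussed above; the anisotropic yield functions and the CCP variant studied in the report require additional machinery. All material parameters and the loading are illustrative.

```python
import numpy as np

def radial_return_j2(eps, eps_p_old, alpha_old, E=200e3, nu=0.3, sy0=250.0, H=1000.0):
    """Closest point projection (radial return) for small-strain J2 plasticity
    with linear isotropic hardening (units: MPa, strains dimensionless)."""
    mu = E / (2 * (1 + nu))
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))

    eps_e = eps - eps_p_old                                    # trial elastic strain
    sig_tr = lam * np.trace(eps_e) * np.eye(3) + 2 * mu * eps_e
    s_tr = sig_tr - np.trace(sig_tr) / 3 * np.eye(3)           # deviatoric trial stress

    f_tr = np.linalg.norm(s_tr) - np.sqrt(2 / 3) * (sy0 + H * alpha_old)
    if f_tr <= 0:                                              # elastic step
        return sig_tr, eps_p_old, alpha_old

    n = s_tr / np.linalg.norm(s_tr)                            # return direction
    dgamma = f_tr / (2 * mu + 2 / 3 * H)                       # consistency parameter
    sig = sig_tr - 2 * mu * dgamma * n
    eps_p = eps_p_old + dgamma * n
    alpha = alpha_old + np.sqrt(2 / 3) * dgamma
    return sig, eps_p, alpha

# Drive a single material point in uniaxial strain past yield.
eps = np.diag([0.004, 0.0, 0.0])
sig, eps_p, alpha = radial_return_j2(eps, np.zeros((3, 3)), 0.0)
print(np.round(sig, 1), round(alpha, 5))
```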
This user’s guide documents capabilities in Sierra/SolidMechanics that remain “in development” and thus are not tested and hardened to the standards of capabilities listed in the Sierra/SM 5.2 User’s Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and the J-integral, explicit time step control techniques, dynamic mesh rebalancing, and a variety of new material models and finite element formulations.
Defects in materials are an ongoing challenge for quantum bits, so-called qubits. Solid-state qubits—both spins in semiconductors and superconducting qubits—suffer from losses and noise caused by two-level-system (TLS) defects thought to reside on surfaces and in amorphous materials. Understanding and reducing the number of such defects is an ongoing challenge for the field. Superconducting resonators couple to TLS defects and provide a handle that can be used to better understand TLS. We develop noise measurements of superconducting resonators at temperatures (20 mK) that are very low compared to the resonant frequency and at low powers, down to single-photon occupation.
The effects of extreme waves on coastal communities include inundation, loss of habitat, increased shoreline erosion, and increased risk to coastal infrastructure (e.g., ports, breakwaters, oil and gas platforms) that is important for supporting coastal resilience. The coastal communities along the US Gulf of Mexico are very low-lying, which makes the region particularly vulnerable to the impacts of extreme waves generated by storm events. We propose assessing and mapping the risks from extreme waves for the Gulf of Mexico coast to support coastal resiliency planning. The risks will be assessed by computing n-year return-period wave heights (e.g., 1-, 5-, 50-, and 100-year) using 32-year wave hindcast data and various extreme value analysis techniques, including the Peak-Over-Threshold and Annual Maxima methods. The characteristics of the extreme waves (e.g., relations between the mean and extreme wave climates and the directions associated with extreme waves) will be investigated. Hazard maps associated with extreme wave heights at different return periods will be generated to help planners identify potential risks and envision places that are less susceptible to future storm damage.
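As a hedged illustration of the Annual Maxima approach (a generic sketch, not the project's analysis), the following fits a generalized extreme value (GEV) distribution to synthetic annual-maximum wave heights standing in for a 32-year hindcast and reads off n-year return levels as quantiles of the fitted distribution.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# Hypothetical stand-in for 32 years of hindcast data: annual maximum
# significant wave heights (m). A real analysis would use the hindcast archive.
annual_max_hs = 4.0 + rng.gumbel(loc=0.0, scale=0.8, size=32)

# Annual Maxima method: fit a GEV distribution to the yearly maxima.
shape, loc, scale = genextreme.fit(annual_max_hs)

# n-year return level = (1 - 1/n) quantile of the annual-maximum distribution.
for n in (5, 50, 100):
    hs_n = genextreme.ppf(1.0 - 1.0 / n, shape, loc=loc, scale=scale)
    print(f"{n:>4d}-yr return wave height: {hs_n:5.2f} m")
```

A Peak-Over-Threshold analysis would instead fit a generalized Pareto distribution to exceedances above a chosen threshold and account for the exceedance rate when converting to return levels.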
The development of new hypersonic flight vehicles is limited by the physical understanding that may be obtained from ground test facilities. This has motivated the present development of a temporally and spatially resolved velocimetry measurement for Sandia National Laboratories (SNL) Hypersonic Wind Tunnel (HWT) using Femtosecond Laser Electronic Excitation Tagging (FLEET). First, a multi-line FLEET technique has been created for the first time and tested in a supersonic jet, allowing simultaneous measurements of velocities along multiple profiles in a flow. Secondly, two different approaches have been demonstrated for generating dotted FLEET lines. One employs a slit mask pattern focused into points to yield a dotted line, allowing for two- or three-component velocity measurements free of contamination between components. The other dotted-line approach is based upon an optical wedge array and yields a grid of points rather than a dotted line. Two successful FLEET measurement campaigns have been conducted in SNL’s HWT. The first effort established optimal diagnostic configurations in the hypersonic environment based on earlier benchtop reproductions, including validation of the use of a 267 nm beam to boost the measurement signal-to-noise ratio (SNR) with minimal risk of perturbing the flow and greater simplicity than a comparable resonant technique at 202 nm. The same FLEET system subsequently was reconstituted to demonstrate the ability to make velocimetry measurements of hypersonic turbulence in a realistic flow field. Mean velocity profiles and turbulence intensity profiles of the shear layer in the wake of a hypersonic cone model were measured at several different downstream stations, proving the viability of FLEET as a hypersonic diagnostic.
Our primary aim in this work is to understand how to efficiently obtain reliable uncertainty quantification in automatic learning algorithms with limited training datasets. Standard approaches rely on cross-validation to tune hyperparameters. Unfortunately, when our datasets are too small, holdout datasets become unreliable—albeit unbiased—measures of prediction quality due to the lack of adequate sample size. We should not place confidence in holdout estimators under conditions wherein the sample variance is both large and unknown. More poignantly, our training experiments on limited data (Duersch and Catanach, 2021) show that even if we could improve estimator quality under these conditions, the typical training trajectory may never even encounter generalizable models.
Gamma irradiation is a process that uses the cobalt-60 radionuclide, produced artificially in nuclear reactors, to expose a variety of items to gamma radiation. Key characteristics of gamma irradiation are its high penetration capability and its ability to modify the physical, chemical, and biological properties of the irradiated materials.
Magnetic microscopy with high spatial resolution helps to solve a variety of technical problems in condensed-matter physics, electrical engineering, biomagnetism, and geomagnetism. In this work we used quantum diamond magnetic microscope (QDMM) setups, which use a dense uniform layer of magnetically-sensitive nitrogen-vacancy (NV) centers in diamond to image an external magnetic field using a fluorescence microscope. We used this technique for imaging few-micron ferromagnetic needles used as a physically unclonable function (PUF) and to passively interrogate electric current paths in a commercial 555 timer integrated circuit (IC). As part of the QDMM development, we also found a way to calculate ion implantation recipes to create diamond samples with dense uniform NV layers at the surface. This work opens the possibility for follow-up experiments with 2D magnetic materials, ion implantation, and electronics characterization and troubleshooting.
In this paper we analyze the noise in macro-particle methods used in plasma physics and fluid dynamics, leading to approaches for minimizing the total error, focusing on electrostatic models in one dimension. We begin by describing kernel density estimation for continuous values of the spatial variable x, expressing the kernel in a form in which its shape and width are represented separately. The covariance matrix C(x,y) of the noise in the density is computed, first for a uniform true density. The bandwidth of the covariance matrix is related to the width of the kernel. A feature that stands out is the presence of constant negative terms in the elements of the covariance matrix both on and off the diagonal. These negative correlations are related to the fact that the total number of particles is fixed at each time step; they also lead to the property ∫C(x,y)dy=0. We investigate the effect of these negative correlations on the electric field computed by Gauss's law, finding that the noise in the electric field is related to a process called the Ornstein-Uhlenbeck bridge, leading to a covariance matrix of the electric field with variance significantly reduced relative to that of a Brownian process. For non-constant density, ρ(x), still with continuous x, we analyze the total error in the density estimation and discuss it in terms of bias-variance optimization (BVO). For some characteristic length l, determined by the density and its second derivative, and kernel width h, having too few particles within h leads to too much variance; for h that is large relative to l, there is too much smoothing of the density. The optimum between these two limits is found by BVO. For kernels of the same width, it is shown that this optimum (minimum) is weakly sensitive to the kernel shape. We repeat the analysis for x discretized on a grid. In this case the charge deposition rule is determined by a particle shape. An important property to be respected in the discrete system is the exact preservation of total charge on the grid; this property is necessary to ensure that the electric field is equal at both ends, consistent with periodic boundary conditions. We find that if the particle shapes satisfy a partition of unity property, the particle charge deposited on the grid is conserved exactly. Further, if the particle shape is expressed as the convolution of a kernel with another kernel that satisfies the partition of unity, then the particle shape obeys the partition of unity. This property holds for kernels of arbitrary width, including widths that are not integer multiples of the grid spacing. We show results relaxing the approximations used to perform BVO analytically, by doing numerical computations of the total error as a function of the kernel width, on a grid in x. The comparison between numerical and analytical results shows good agreement over a range of particle shapes. We discuss the practical implications of our results, including the criteria for design and implementation of computationally efficient particle shapes that take advantage of the developed theory.
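For reference, the standard pointwise bias-variance decomposition of a kernel density estimate (a hedged textbook sketch in continuous x, not the paper's grid-based derivation) makes the trade-off described above explicit:

```latex
% For an estimate \hat\rho_h(x) from N particles with kernel K of width h:
\[
  \operatorname{Bias}\bigl[\hat\rho_h(x)\bigr] \approx \tfrac{1}{2}\,h^2\,\mu_2(K)\,\rho''(x),
  \qquad
  \operatorname{Var}\bigl[\hat\rho_h(x)\bigr] \approx \frac{\rho(x)\,R(K)}{N h},
\]
% with \mu_2(K)=\int \xi^2 K(\xi)\,d\xi and R(K)=\int K(\xi)^2\,d\xi.
% Minimizing MSE = Bias^2 + Var over h gives
\[
  h_{\mathrm{opt}}(x) \;=\; \left(\frac{\rho(x)\,R(K)}{\mu_2(K)^2\,\rho''(x)^2\,N}\right)^{1/5},
\]
% which depends on the density and its second derivative through a characteristic
% length, and only weakly on the kernel shape through R(K) and \mu_2(K).
```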
Seismic source modeling allows researchers both to simulate how a source that induces seismic waves interacts with the Earth to produce observed seismograms and, inversely, to infer what the time histories, sizes, and force distributions were for a seismic source given observed seismograms. In this report, we discuss improvements made in FY21 to our software as it applies to both the forward and inverse seismic source modeling problems. First, for the forward portion of the problem, we have added the ability to use full 3-D nonlinear simulations by implementing 3-D time-varying boundary conditions within Sandia’s linear seismic code Parelasti. Second, on the inverse source modeling side, we have developed software that allows us to invert seismic gradiometer-derived observations in conjunction with standard translational motion seismic data to infer properties of the source that may improve characterization in certain circumstances. We describe the basic theory behind each software enhancement and then demonstrate the software in action with some simple examples.
A six-month research effort has advanced the hybrid kinetic-fluid modeling capability required for developing non-thermal warm x-ray sources on Z. The three particle treatments (quasi-neutral, multi-fluid, and kinetic) are demonstrated in 1D simulations of an Ar gas puff. The simulations determine the resolutions required for the advanced implicit solution techniques and debug the hybrid particle treatments with equation-of-state and radiation transport. The kinetic treatment is used in a preliminary analysis of the non-Maxwellian nature of a gas target. It also demonstrates the sensitivity of the cyclotron and collision frequencies in determining the transition from thermal to non-thermal particle populations. Finally, a 2D Ar gas puff simulation of a Z shot demonstrates the readiness to proceed with realistic target configurations. The results put us on a very firm footing to proceed to a full LDRD, which includes continued development of transition criteria and x-ray yield calculations.
A simple approach to simulate contact between deformable objects is presented that relies on level-set descriptions of the Lagrangian geometry and an optimization-based solver. Modeling contact between objects remains a significant challenge for computational mechanics simulations. Common approaches are either plagued by a lack of robustness or are exceedingly complex and require a significant number of heuristics. In contrast, the level-set contact approach presented herein is essentially heuristic free. Furthermore, the presented algorithm enables resolving and enforcing contact between objects with a significant amount of initial overlap. Examples demonstrating the feasibility of this approach are shown, including the standard Hertz contact problem, the robust removal of overlap between two overlapping blocks, and overlap removal and pre-load for a bolted configuration.
All energy production systems need efficient energy conversion systems. Current Rankine cycles use water to generate steam at temperatures where efficiency is limited to around 40%. As existing fossil and nuclear power plants are decommissioned due to end of effective life and/or society’s desire for cleaner generation options, more efficient energy conversion is needed to keep up with increasing electricity demands. Modern energy generation technologies, such as advanced nuclear reactors and concentrated solar, coupled to high-efficiency supercritical CO2 (sCO2) conversion systems provide a solution for efficient, clean energy systems. Leading R&D communities worldwide agree that the successful development of sCO2 Brayton power cycle technology will eventually bring about large-scale changes to existing multi-billion-dollar global markets and enable power applications not currently possible or economically justifiable. However, all new technologies face challenges in the path to commercialization, and the electricity sector is distinctively risk averse. The Sandia sCO2 Brayton team needs to better understand what the electricity sector needs in terms of new technology risk mitigation, generation efficiency, reliability improvements above current technology, and cost requirements which would make new technology adoption worthwhile. Relying on the R&D community consensus that an sCO2 power cycle will increase the revenue of the electrical industry, without addressing the electrical industry’s concerns, significantly decreases the potential for adoption at commercial scale. With a clear understanding of the market perspectives on technology adoption, including military, private sector, and utilities customers, the Sandia Brayton Team can resolve industry concerns for smoother development and faster transition to commercialization. An extensive customer discovery process, similar to that executed through the NSF’s I-Corps program, is necessary in order to understand the pain points of the market and articulate the value proposition of Brayton systems in terms that engage decision makers and facilitate commercialization of the technology.
The typical topology optimization workflow uses a design domain that does not change during the optimization process. Consequently, features of the design domain, such as the location of loads and constraints, must be determined in advance and are not optimizable. A method is proposed herein that allows the design domain to be optimized along with the topology. This approach uses topology and shape derivatives to guide nested optimizers to the optimal topology and design domain. The details of the method are discussed, and examples are provided that demonstrate the utility of this approach.
Highlights: Battery energy storage may improve the energy efficiency and reliability of hybrid energy systems composed of diesel and solar photovoltaic power generators serving isolated communities. In projects aiming to update power plants serving electrically isolated communities with redundant diesel generation, battery energy storage can improve the overall economic performance of the power supply system by reducing fuel usage, decreasing capital costs by replacing redundant diesel generation units, and increasing generator system life by shortening yearly runtime. Fast-acting battery energy storage systems with grid-forming inverters might have the potential to drastically improve the reliability indices of isolated communities currently supplied by diesel generation. Abstract: This paper will highlight unique challenges and opportunities with regard to energy storage utilization in remote, self-sustaining communities. The energy management of such areas has unique concerns. Diesel generation is often the go-to power source in these scenarios, but these systems are not devoid of issues. Without dedicated maintenance crews as in large, interconnected network areas, minor interruptions can be frequent and invasive, not only for those who lose power but also for those in the community who must then correct any faults. Although the immediate financial benefits are perhaps not readily apparent, energy storage could be used to address concerns related to reliability, automation, fuel supply, generator degradation, solar utilization, and fuel costs, to name a few. These ideas are shown through a case study of the Levelock Village of Alaska. Currently, the community is faced with high diesel prices and a difficult supply chain, which makes temporary loss of power very common and reductions in fuel consumption very impactful. This study will investigate the benefits that an energy storage system could bring to the overall system life, fuel costs, and reliability of the power supply. The variable efficiency of the generators, the impact of the startup/shutdown process, and low-load operation concerns are considered. The technological benefits of the combined system will be explored for various scenarios of future diesel prices and technology maintenance/replacement costs, as well as for the avoidance of power interruptions that are currently common in the community. Discussion: In several cases, energy storage can provide a means to promote energy equity by improving remote communities’ power supply reliability to levels closer to what the average urban consumer experiences, at a reduced cost compared to transmission buildout. Furthermore, energy equity represents a hard-to-quantify benefit achieved by the integration of energy storage into the isolated power systems of under-served communities, which suggests that the financial aspects of such projects should be questioned as the main performance criterion. To improve battery energy storage system valuation for diesel-based power systems, integration analysis must be holistic and go beyond fuel savings to capture every value stream possible.
Downscaling of the silicon metal-oxide-semiconductor field-effect transistor technology is expected to reach a fundamental limit soon. A paradigm shift in computing is occurring. Spin field-effect transistors are considered a candidate architecture for next-generation microelectronics. Because it can leverage the existing infrastructure for silicon, a spin field-effect transistor technology based on group IV heterostructures would have unparalleled technical and economic advantages. For the same material-platform reason, germanium hole quantum dots are also considered a competitive architecture for semiconductor-based quantum technology. In this project, we investigated several approaches to creating hole devices in germanium-based materials as well as injecting hole spins in such structures. We also explored the role of hole injection in wet chemical etching of germanium. Our main results include the demonstration of germanium metal-oxide-semiconductor field-effect transistors operated at cryogenic temperatures, ohmic current-voltage characteristics in germanium/silicon-germanium heterostructures with ferromagnetic contacts at deep cryogenic temperatures and high magnetic fields, evaluation of the effects of surface preparation on carrier mobility in germanium/silicon-germanium heterostructures, and hole spin polarization through integrated permanent magnets. These results serve as essential components for fabricating next-generation germanium-based devices for microelectronics and quantum systems.
Virtual prototyping in engineering design relies on modern numerical models of contacting structures with accurate resolution of interface mechanics, which strongly affects the system-level stiffness and energy dissipation due to frictional losses. High-fidelity modeling within the localized interfaces is required to resolve local quantities of interest that may drive design decisions. The high-resolution finite element meshes necessary to resolve inter-component stresses tend to be computationally expensive, particularly when the analyst is interested in response time histories. The Hurty/Craig-Bampton (HCB) transformation is a widely used method in structural dynamics for reducing the interior portion of a finite element model while retaining all nonlinear contact degrees of freedom (DOF) in physical coordinates. These models may still require many DOF to adequately resolve the kinematics of the interface, leading to inadequate reduction and computational savings. This study proposes a novel interface reduction method to overcome these challenges by means of system-level characteristic constraint (SCC) modes and proper orthogonal interface modal derivatives (POIMDs) for transient dynamic analyses. Both SCC modes and POIMDs are computed using the reduced HCB mass and stiffness matrices, which can be obtained directly from many commercial finite element analysis codes. Comparison of time history responses to an impulse-type load in a mechanical beam assembly indicates that the interface-reduced model correlates well with the HCB truth model. Localized features like slip and contact area are well represented in the time domain when the beam assembly is loaded with a broadband excitation. The proposed method also yields reduced-order models with greater critical timestep lengths for explicit integration schemes.
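For readers unfamiliar with the baseline HCB transformation that the proposed interface reduction builds on, the following is a hedged textbook sketch (dense matrices, a toy spring-mass chain, and an arbitrary choice of interface DOF; not the authors' implementation): interior DOF are condensed onto static constraint modes plus a truncated set of fixed-interface normal modes, while interface DOF are kept in physical coordinates.

```python
import numpy as np

def hurty_craig_bampton(M, K, b_dof, n_fixed_modes):
    """Hurty/Craig-Bampton reduction: keep boundary (interface) DOF 'b_dof' in
    physical coordinates; represent the interior with static constraint modes
    and a truncated set of fixed-interface normal modes."""
    n = M.shape[0]
    i_dof = np.setdiff1d(np.arange(n), b_dof)
    Kii, Kib = K[np.ix_(i_dof, i_dof)], K[np.ix_(i_dof, b_dof)]
    Mii = M[np.ix_(i_dof, i_dof)]

    # Static constraint modes: interior response to unit boundary displacements.
    Psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes of the interior partition.
    evals, evecs = np.linalg.eig(np.linalg.solve(Mii, Kii))
    order = np.argsort(evals.real)
    Phi = evecs[:, order[:n_fixed_modes]].real

    # Assemble the HCB transformation T mapping [modal; boundary] -> full DOF.
    nb = len(b_dof)
    T = np.zeros((n, n_fixed_modes + nb))
    T[np.ix_(i_dof, np.arange(n_fixed_modes))] = Phi
    T[np.ix_(i_dof, np.arange(n_fixed_modes, n_fixed_modes + nb))] = Psi
    T[np.ix_(b_dof, np.arange(n_fixed_modes, n_fixed_modes + nb))] = np.eye(nb)

    return T.T @ M @ T, T.T @ K @ T, T

# Toy example: a 6-DOF spring-mass chain with the last DOF kept as the interface.
n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Mr, Kr, T = hurty_craig_bampton(M, K, b_dof=np.array([n - 1]), n_fixed_modes=2)
print(Mr.shape, Kr.shape)  # (3, 3) reduced system
```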
Denoising contaminated seismic signals for later processing is a fundamental problem in seismic signal analysis. The most straightforward denoising approach, spectral filtering, is not effective when noise and seismic signal occupy the same frequency range. Neural network approaches have shown success denoising local signals when trained on short-time Fourier transform spectrograms (Zhu et al., 2018; Tibi et al., 2021). Scalograms, a wavelet-based representation, achieved ~15% better reconstruction than spectrograms, as measured by dynamic time warping on a seismic waveform test set, suggesting their use as an alternative for denoising. We train a deep neural network on a scalogram dataset derived from waveforms recorded by the University of Utah Seismograph Stations network. We find that initial results are no better than the spectrogram approach, with additional overhead imposed by the significantly larger size of scalograms. A robust exploration of neural network hyperparameters and network architecture was not performed, which could be done in follow-on work.
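To make the scalogram input concrete, here is a hedged sketch of computing one with the continuous wavelet transform, assuming the PyWavelets (pywt) package and a synthetic trace in place of real recordings; the resulting image-like array is what would be fed to a denoising network instead of an STFT spectrogram.

```python
import numpy as np
import pywt  # PyWavelets; assumed available

# Synthetic stand-in for a 1-minute, 100 Hz seismic trace: a decaying chirp
# buried in white noise (real inputs would be recorded waveforms).
fs = 100.0
t = np.arange(0, 60.0, 1.0 / fs)
signal = np.exp(-0.1 * t) * np.sin(2 * np.pi * (1.0 + 0.2 * t) * t)
trace = signal + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Continuous wavelet transform -> scalogram (|coefficients|).
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(trace, scales, "morl", sampling_period=1.0 / fs)
scalogram = np.abs(coeffs)

print(scalogram.shape)  # (n_scales, n_samples): typically larger than an STFT spectrogram
```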
Wellbore integrity is a significant problem in the U.S. and worldwide, which has serious adverse environmental and energy security consequences. Wells are constructed with a cement barrier designed to last about 50 years. Indirect measurements and models are commonly used to identify wellbore damage and leakage, often producing subjective and even erroneous results. The research presented herein focuses on new technologies to improve monitoring and detection of wellbore failures (leaks) by developing a multi-step machine learning approach to localize two types of thermal defects within a wellbore model, a prototype mechatronic system for automatically drilling small diameter holes of arbitrary depth to monitor the integrity of oil and gas wells in situ, and benchtop testing and analyses to support the development of an autonomous real-time diagnostic tool to enable sensor emplacement for monitoring wellbore integrity. Each technology was supported by experimental results. This research has provided tools to aid in the detection of wellbore leaks and significantly enhanced our understanding of the interaction between small-hole drilling and wellbore materials.
Laser powder bed fusion (LPBF) additive manufacturing (AM) has attracted interest as an agile method of building production metal parts to reduce design-build-test cycle times for systems. However, predicting part performance is difficult due to inherent process variabilities, which makes qualification challenging. Computational process models have attempted to address some of these challenges, including mesoscale, full-physics models and reduced-fidelity conduction models. The goal of this work is credible multi-fidelity modeling of the LPBF process by investigating methods for estimating the error between models of two different fidelities. Two methods of error estimation are investigated: adjoint-based error estimation and Bayesian calibration. Adjoint-based error estimation is found to effectively bound the error between the two models, but with very conservative bounds, making predictions highly uncertain. Bayesian parameter calibration applied to conduction model heat source parameters is found to effectively bound the observed error between the models for melt pool morphology quantities of interest. However, the calibrations do not effectively bound the error in the heat distribution.
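As a hedged sketch of the Bayesian calibration step (a toy random-walk Metropolis sampler with a made-up melt-pool width model and synthetic observations; the report's actual models, parameters, and calibration machinery are not reproduced here), calibrating an effective heat-source parameter against observed melt-pool widths might look like the following.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reduced-fidelity melt-pool model: predicted melt-pool width as a
# simple function of an effective absorptivity 'eta' (the parameter to calibrate).
def conduction_model(eta, power=200.0, speed=0.8):
    return 40.0 * np.sqrt(eta * power / speed)   # width in microns (illustrative)

# "Observations" standing in for higher-fidelity (or measured) melt-pool widths.
eta_true, sigma_obs = 0.35, 5.0
obs = conduction_model(eta_true) + rng.normal(0.0, sigma_obs, size=10)

def log_posterior(eta):
    if not 0.0 < eta < 1.0:                       # uniform prior on (0, 1)
        return -np.inf
    resid = obs - conduction_model(eta)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

# Random-walk Metropolis sampler for the posterior over eta.
samples, eta = [], 0.5
logp = log_posterior(eta)
for _ in range(20000):
    prop = eta + rng.normal(0.0, 0.02)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        eta, logp = prop, logp_prop
    samples.append(eta)

post = np.array(samples[5000:])
print(f"posterior eta: {post.mean():.3f} +/- {post.std():.3f} (truth {eta_true})")
```

The posterior spread on the calibrated parameter is what allows model-form error to be bounded for the melt-pool quantities of interest, as described above.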