Currently, a set of 71 radionuclides is accounted for in off-site consequence analysis for LWRs. Radionuclides of dose consequence are expected to change for non-LWRs, with radionuclides of interest being type-specific. This document identifies an expanded set of radionuclides that may need to be accounted for in multiple non-LWR systems: high temperature gas reactors (HTGRs); fluoride-salt-cooled high-temperature reactors (FHRs); thermal-spectrum fluoride-based molten salt reactors (MSRs); fast-spectrum chloride-based MSRs; and liquid metal fast reactors with metallic fuel (LMRs). Specific considerations are provided for each reactor type in Chapter 2 through Chapter 5, and a summary of all recommendations is provided in Chapter 6. All identified radionuclides are already incorporated within the MACCS software, yet the development of tritium-specific and carbon-specific chemistry models is recommended.
Although many software teams across the laboratories comply with yearly software quality engineering (SQE) assessments, the practice of introducing quality into each phase of the software lifecycle, or into the team's processes, may vary substantially. Even with the support of a quality engineer, many teams struggle to adapt and right-size software engineering best practices in quality to fit their context, and these activities are not framed in a way that motivates teams to take action. In short, software quality is often a "check the box for compliance" activity instead of a cultural practice that both values software quality and knows how to achieve it. In this report, we present the results of our 6600 VISTA Innovation Tournament project, "Incentivizing and Motivating High Confidence and Research Software Teams to Adopt the Practice of Quality." We present our findings and roadmap for future work based on 1) a rapid review of relevant literature, 2) lessons learned from an internal design thinking workshop, and 3) an external Collegeville 2021 workshop. These activities provided an opportunity for team ideation and community engagement and feedback. Based on our findings, we believe a coordinated effort (e.g., a strategic communication campaign) aimed at diffusing the innovation of the practice of quality across Sandia National Laboratories could, over time, effect meaningful organizational change. As such, our roadmap addresses strategies for motivating and incentivizing individuals ranging from early-career to seasoned software developers/scientists.
Within the past half-decade, it has become overwhelmingly clear that suppressing the spread of deliberately false and misleading information is of the utmost importance for protecting democratic institutions. Disinformation has been found to come from both foreign and domestic actors, and the effects from either can be disastrous. From the simple encouragement of unwarranted distrust to conspiracy theories promoting violence, the results of disinformation have put the functionality of American democracy under direct threat. Present scientific challenges posed by this problem include detecting disinformation, quantifying its potential impact, and preventing its amplification. We present a model on which we can experiment with possible strategies toward the third challenge: the prevention of amplification. This is a social contagion network model, decomposed into layers to represent physical, "offline" interactions as well as virtual interactions on a social media platform. Along with the topological modifications to the standard contagion model, we use state-transition rules designed specifically for disinformation and distinguish between contagious and non-contagious infected nodes. We use this framework to explore the effect of grassroots social movements on the size of disinformation cascades by simulating these cascades in scenarios where a proportion of the agents remove themselves from the social platform. We also test the efficacy of strategies that could be implemented at the administrative level by the online platform to minimize such spread. These top-down strategies include banning agents who disseminate false information, or providing corrective information to individuals exposed to false information to decrease their probability of believing it. We find an abrupt transition to smaller cascades when a critical number of random agents are removed from the platform, as well as steady decreases in the size of cascades with increasingly more convincing corrective information. Finally, we compare simulated cascades on this framework with real cascades of disinformation recorded on WhatsApp surrounding the 2019 Indian election. We find a set of hyperparameter values that produces a distribution of cascades matching the scaling exponent of the distribution of actual cascades recorded in the dataset. We identify future directions for improving the performance of the framework and the validation methods, as well as ways to extend the model to capture additional features of social contagion.
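To make the cascade mechanics concrete, the sketch below implements a bare-bones two-layer contagion of the general kind described above. It is an illustrative simplification, not the report's exact model: the function name and parameters such as `p_contagious` and `removal_frac` are our own, and the update rules keep only the essentials (agents who leave the platform lose their online edges but keep their offline ones, and only contagious believers continue to spread).

```python
import random

def simulate_cascade(offline_adj, online_adj, p_offline, p_online,
                     p_contagious, removal_frac, seed_node,
                     rng=random.Random(0)):
    """Simulate one disinformation cascade on a two-layer network.

    offline_adj / online_adj: dicts mapping node -> list of neighbors.
    A newly infected node is 'contagious' (spreads further) with
    probability p_contagious; otherwise it believes but does not share.
    A fraction removal_frac of agents leaves the online platform.
    """
    nodes = set(offline_adj) | set(online_adj)
    removed = set(rng.sample(sorted(nodes), int(removal_frac * len(nodes))))
    removed.discard(seed_node)

    infected = {seed_node}
    frontier = [seed_node]  # contagious nodes that still need to spread
    while frontier:
        node = frontier.pop()
        # Offline layer: everyone can still be reached face to face.
        contacts = [(n, p_offline) for n in offline_adj.get(node, [])]
        # Online layer: only agents still on the platform participate.
        if node not in removed:
            contacts += [(n, p_online) for n in online_adj.get(node, [])
                         if n not in removed]
        for neighbor, p in contacts:
            if neighbor not in infected and rng.random() < p:
                infected.add(neighbor)
                if rng.random() < p_contagious:
                    frontier.append(neighbor)
    return len(infected)  # cascade size

# Tiny example: a 4-node path offline, a 4-node clique online.
offline = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
online = {i: [j for j in range(4) if j != i] for i in range(4)}
print(simulate_cascade(offline, online, 0.1, 0.4, 0.5, 0.25, seed_node=0))
```

Sweeping `removal_frac` in a simulation of this kind is the sort of experiment in which the abrupt transition to smaller cascades appears.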
Graph partitioning is an important tool for dividing work among several processors so as to minimize communication cost and balance the workload. As accelerator-based supercomputers emerge as the standard, graph partitioning becomes even more important because applications are rapidly moving to these architectures. However, there is no distributed-memory-parallel, multi-GPU graph partitioner available for applications. We developed a spectral graph partitioner, Sphynx, using the portable, accelerator-friendly stack of the Trilinos framework. In Sphynx, we allow using different preconditioners and exploit their unique advantages. We use Sphynx to systematically evaluate the various algorithmic choices in spectral partitioning with a focus on GPU performance. We perform those evaluations on two distinct classes of graphs: regular (such as meshes and matrices from finite element methods) and irregular (such as social networks and web graphs), and show that different settings and preconditioners are needed for these graph classes. The experimental results on the Summit supercomputer show that Sphynx is the fastest alternative on irregular graphs in an application-friendly setting and obtains a partitioning quality close to ParMETIS on regular graphs. When compared to nvGRAPH on a single GPU, Sphynx is faster and obtains better balance and better quality partitions. Sphynx provides a good and robust partitioning method across a wide range of graphs for applications looking for a GPU-based partitioner.
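For readers unfamiliar with the underlying technique, the following sketch shows serial spectral bisection with SciPy. It is not Sphynx's implementation (Sphynx runs distributed across multiple GPUs on the Trilinos stack with preconditioned eigensolvers); it only illustrates the core idea of splitting vertices on the Fiedler vector of the graph Laplacian.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_bisect(A):
    """Split a graph into two balanced parts using the Fiedler vector.

    A: symmetric sparse adjacency matrix (SciPy CSR).
    Returns a 0/1 part assignment for each vertex.
    """
    degrees = np.asarray(A.sum(axis=1)).ravel()
    L = sp.diags(degrees) - A              # combinatorial graph Laplacian
    # Two smallest eigenpairs; the second eigenvector is the Fiedler vector.
    vals, vecs = eigsh(L, k=2, which='SM')
    fiedler = vecs[:, np.argsort(vals)[1]]
    # A median split yields two parts of (nearly) equal size.
    return (fiedler > np.median(fiedler)).astype(int)

# Example: a path graph on 8 vertices splits into its two halves.
A = sp.diags([np.ones(7), np.ones(7)], [-1, 1]).tocsr()
print(spectral_bisect(A))
```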
Typical approaches to classifying scenes from light convert the light field to electrons to perform the computation in the digital electronic domain. This conversion and downstream computational analysis require significant power and time. Diffractive neural networks have recently emerged as unique systems to classify optical fields at lower energy and higher speed. Previous work has shown that a single layer of diffractive metamaterial can achieve high performance on classification tasks. In analogy with electronic neural networks, it is anticipated that multilayer diffractive systems would provide better performance, but the fundamental reasons for the potential improvement have not been established. In this work, we present extensive computational simulations of two-layer diffractive neural networks and show that they can achieve high performance with fewer diffractive features than single-layer systems.
The quantum k-Local Hamiltonian problem is a natural generalization of classical constraint satisfaction problems (k-CSP) and is complete for QMA, a quantum analog of NP. Although the complexity of k-Local Hamiltonian problems has been well studied, only a handful of approximation results are known. For Max 2-Local Hamiltonian where each term is a rank 3 projector, a natural quantum generalization of classical Max 2-SAT, the best known approximation algorithm was the trivial random assignment, yielding a 0.75-approximation. We present the first approximation algorithm beating this bound, a classical polynomial-time 0.764-approximation. For strictly quadratic instances, which are maximally entangled instances, we provide a 0.801-approximation algorithm, and numerically demonstrate that our algorithm is likely a 0.821-approximation. We conjecture these are the hardest instances to approximate. We also give improved approximations for quantum generalizations of other related classical 2-CSPs. Finally, we exploit quantum connections to a generalization of the Grothendieck problem to obtain a classical constant-factor approximation for the physically relevant special case of strictly quadratic traceless 2-Local Hamiltonians on bipartite interaction graphs, where an inverse-logarithmic approximation was the best previously known (for general interaction graphs). Our work employs recently developed techniques for analyzing classical approximations of CSPs and is intended to be accessible to both quantum information scientists and classical computer scientists.
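The 0.75 baseline cited above follows from a short, standard calculation: assigning each qubit a uniformly random state is equivalent to evaluating every term against the maximally mixed state, so each rank-3 projector contributes 3/4 of its maximum value in expectation.

```latex
% Expected energy of one rank-3 projector P on qubits i,j under a uniformly
% random product assignment (equivalently, the maximally mixed state I/4):
\[
\mathbb{E}\,\langle P \rangle
  = \operatorname{Tr}\!\left( P \cdot \tfrac{I}{4} \right)
  = \frac{\operatorname{rank}(P)}{4}
  = \frac{3}{4}.
\]
```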
This report is a preliminary test plan for the seismic shake table test. The final report will be developed once all decisions regarding the test hardware, instrumentation, and shake table inputs are made. A new revision of this report will be issued in spring of 2022. The preliminary test plan documents the free-field ground motions that will be used as inputs to the shake table, the test hardware, and the instrumentation. It also describes the facility at which the test will take place in late summer of 2022.
This document describes the Power and Energy Storage Systems Toolbox for MATLAB, abbreviated as PSTess. This computing package is a fork of the Power Systems Toolbox (PST). PST was originally developed at Rensselaer Polytechnic Institute (RPI) and later upgraded by Dr. Graham Rogers at Cherry Tree Scientific Software. While PSTess shares a common lineage with PST Version 3.0, it is a substantially different application. This document supplements the main PST manual by describing the features and models that are unique to PSTess. As the name implies, the main distinguishing characteristic of PSTess is its ability to model inverter-based energy storage systems (ESS). The model that enables this is called ess.m, and it serves the dual role of representing ESS operational constraints and the generator/converter interface. As in the WECC REGC_A model, the generator/converter interface is modeled as a controllable current source with the ability to modulate both real and reactive current. The model ess.m permits four-quadrant modulation, which allows it to represent a wide variety of inverter-based resources beyond energy storage when paired with an appropriate supplemental control model. Examples include utility-scale photovoltaic (PV) power plants, Type 4 wind plants, and static synchronous compensators (STATCOM). This capability is especially useful for modeling hybrid plants that combine energy storage with renewable resources or FACTS devices.
Probabilistic and Bayesian neural networks have long been proposed as a method to incorporate uncertainty about the world (both in training data and operation) into artificial intelligence applications. One approach to making a neural network probabilistic is to leverage a Monte Carlo sampling approach that samples a trained network while incorporating noise. Such sampling approaches for neural networks have not been extensively studied due to the prohibitive requirement of many computationally expensive samples. While the development of future microelectronics platforms that make this sampling more efficient is an attractive option, it has not been immediately clear how to sample a neural network and what the quality of random number generation should be. This research aimed to begin addressing these two fundamental questions by examining how basic "off-the-shelf" neural networks can be sampled through a few different mechanisms (including synapse "dropout" and neuron "dropout") and how these sampling approaches can be evaluated, both in terms of algorithm effectiveness and the required quality of random numbers.
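The sketch below illustrates what such Monte Carlo sampling looks like for a toy network; it is our illustrative reconstruction, not the study's code, and names such as `synapse_drop` and `neuron_drop` are assumptions standing in for the synapse- and neuron-dropout mechanisms mentioned above.

```python
import numpy as np

rng = np.random.default_rng(42)

def mlp_forward(x, weights, synapse_drop=0.0, neuron_drop=0.0):
    """One stochastic forward pass through a ReLU multilayer perceptron.

    synapse_drop: probability of zeroing an individual weight ("synapse dropout").
    neuron_drop:  probability of zeroing an entire hidden unit ("neuron dropout").
    """
    h = x
    for i, W in enumerate(weights):
        if synapse_drop > 0:
            W = W * (rng.random(W.shape) >= synapse_drop)
        h = h @ W
        if i < len(weights) - 1:                    # hidden layers only
            if neuron_drop > 0:
                h = h * (rng.random(h.shape[-1]) >= neuron_drop)
            h = np.maximum(h, 0.0)                  # ReLU
    return h

def mc_sample(x, weights, n_samples=100, **drop):
    """Monte Carlo sampling: mean and spread of the noisy network output."""
    outs = np.stack([mlp_forward(x, weights, **drop) for _ in range(n_samples)])
    return outs.mean(axis=0), outs.std(axis=0)

# Toy 2-layer network; the sample spread is a simple uncertainty proxy.
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
mean, std = mc_sample(rng.normal(size=4), weights, synapse_drop=0.1)
print(mean, std)
```

Note that every stochastic pass consumes a fresh batch of random numbers, which is exactly why the quality and cost of random number generation become first-order concerns for hardware implementations.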
Rock Valley, in the southern end of the Nevada National Security Site, hosts a fault system that was responsible for a shallow (< 3 km below the surface) magnitude 3.7 earthquake in May 1993. In order to better understand this system, seismic properties of the shallow subsurface need to be better constrained. In April and May of 2021, accelerated weight drop (AWD) active-source seismic data were recorded in order to measure P- and S-wave travel-times for the area. This report describes the processing and phase picking of the recorded seismic waveforms. In total, we picked 7,982 P-wave arrivals at offsets up to ~2500 m, and 4,369 S-wave arrivals at offsets up to ~2200 m. These travel-time picks can be inverted for shallow P-wave and S-wave velocity structure in future studies.
Metal oxides have been an attractive option for a range of applications, including hydrogen sensors, microelectronics, and catalysis, due to their reactivity and tunability. The properties of metal oxides can depend greatly on their precise surface structure; however, few surface science techniques can achieve atomistic-level determinations of surface structure, and fewer yet can do so for insulator surfaces. Low energy ion beam analysis offers a potential insulator-compatible solution to characterizing the surface structure of metal oxides. As a feasibility study, we apply low energy ion beam analysis to investigate the surface structure of a magnetite single crystal, Fe3O4(100). We obtain multi-angle maps using both forward-scattering low energy ion scattering (LEIS) and backscattering impact-collision ion scattering spectroscopy (ICISS). Both sets of experimental maps have intensity patterns that reflect the symmetries of the Fe3O4(100) surface structure. However, analytical interpretation of these intensity patterns to extract details of the surface structure is significantly more complex than in previous LEIS and ICISS structural studies of one-component metal crystals, which had far more symmetries to exploit. To gain further insight into the surface structure, we model our experimental measurements with ion-trajectory tracing simulations using molecular dynamics. Our simulations provide a qualitative indication that our experimental measurements agree better with a subsurface cation vacancy model than a distorted bulk model.
A simple approach to simulating contact between deformable objects is presented that relies on levelset descriptions of the Lagrangian geometry and an optimization-based solver. Modeling contact between objects remains a significant challenge for computational mechanics simulations. Common approaches are either plagued by lack of robustness or are exceedingly complex and require a significant number of heuristics. In contrast, the levelset contact approach presented herein is essentially heuristic free. Furthermore, the presented algorithm enables resolving and enforcing contact between objects with a significant amount of initial overlap. Examples demonstrating the feasibility of this approach are shown, including the standard Hertz contact problem, the robust removal of overlap between two overlapping blocks, and overlap-removal and pre-load for a bolted configuration.
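One generic way to state such a formulation (our notation; the report's precise statement may differ) is as a constrained energy minimization in which the levelset supplies the non-penetration constraint:

```latex
% With Pi(u) the total potential energy and phi_B the signed-distance
% levelset of body B (negative inside B), seek
\[
\min_{u}\; \Pi(u)
\quad \text{subject to} \quad
\phi_{B}\big(x + u(x)\big) \ge 0
\quad \forall\, x \in \partial\Omega_{A},
\]
% i.e., non-penetration is enforced pointwise on the surface of body A. An
% initially overlapping configuration simply starts with violated
% constraints that the optimization-based solver drives back to feasibility.
```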
Subsurface energy activities such as unconventional resource recovery, enhanced geothermal energy systems, and geologic carbon storage require fast and reliable methods to account for complex, multiphysical processes in heterogeneous fractured and porous media. Although reservoir simulation is considered the industry standard for simulating these subsurface systems with injection and/or extraction operations, it requires large volumes of spatio-temporal "Big Data" as input to the simulation model, which is typically a major challenge during the model development and computational phases. In this work, we developed and applied various deep neural network-based approaches to (1) process multiscale image segmentation, (2) generate ensemble members of drainage networks, flow channels, and porous media using deep convolutional generative adversarial networks, (3) construct multiple hybrid neural networks, such as convolutional LSTMs and convolutional neural network-LSTMs, to develop fast and accurate reduced-order models for shale gas extraction, and (4) apply physics-informed neural networks and deep Q-learning to flow and energy production. We hypothesized that physics-based machine learning/deep learning can overcome the shortcomings of traditional machine learning methods, where data-driven models have faltered beyond the data and physical conditions used for training and validation. We improved and developed novel approaches to demonstrate that physics-based ML allows us to incorporate physical constraints (e.g., scientific domain knowledge) into the ML framework. Outcomes of this project will be readily applicable to many energy and national security problems that are particularly defined by multiscale features and network systems.
Downscaling of silicon metal-oxide-semiconductor field-effect transistor technology is expected to reach a fundamental limit soon. A paradigm shift in computing is occurring. Spin field-effect transistors are considered a candidate architecture for next-generation microelectronics. Being able to leverage the existing infrastructure for silicon, a spin field-effect transistor technology based on group IV heterostructures would have unparalleled technical and economic advantages. For the same material platform reason, germanium hole quantum dots are also considered a competitive architecture for semiconductor-based quantum technology. In this project, we investigated several approaches to creating hole devices in germanium-based materials as well as injecting hole spins in such structures. We also explored the role of hole injection in wet chemical etching of germanium. Our main results include the demonstration of germanium metal-oxide-semiconductor field-effect transistors operated at cryogenic temperatures, ohmic current-voltage characteristics in germanium/silicon-germanium heterostructures with ferromagnetic contacts at deep cryogenic temperatures and high magnetic fields, evaluation of the effects of surface preparation on carrier mobility in germanium/silicon-germanium heterostructures, and hole spin polarization through integrated permanent magnets. These results serve as essential components for fabricating next-generation germanium-based devices for microelectronics and quantum systems.
This project developed prototype germanium telluride switches, which can be used in RF applications to improve SWAP (size, weight, and power) and signal quality in RF systems. These switches can enable highly reconfigurable systems, including antennas, communications, optical systems, phased arrays, and synthetic aperture radar, all of which have high impact on current national security goals for improved communication systems and communication technology supremacy. The final result of the project was the demonstration of germanium telluride RF switches, which could act as critical elements of a single-chip RF communication system with low SWAP and high reconfigurability.
Nonlocal models use integral operators that embed length-scales in their definition. However, the integrands in these operators are difficult to define from the data that are typically available for a given physical system, such as laboratory mechanical property tests. In contrast, molecular dynamics (MD) does not require these integrands, but it suffers from computational limitations in the length and time scales it can address. To combine the strengths of both methods and to obtain a coarse-grained, homogenized continuum model that efficiently and accurately captures materials' behavior, we propose a learning framework to extract, from MD data, an optimal nonlocal model as a surrogate for MD displacements. Our framework guarantees that the resulting model is mathematically well-posed, physically consistent, and that it generalizes well to settings that are different from the ones used during training. The efficacy of this approach is demonstrated with several numerical tests for single layer graphene both in the case of perfect crystal and in the presence of thermal noise.
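A prototypical operator of the kind being learned has the following generic form (the paper's kernel parameterization and well-posedness constraints may differ in detail):

```latex
% Nonlocal operator with horizon delta and learnable radial kernel K_theta:
\[
\mathcal{L}_{\delta} u(x)
  = \int_{B_{\delta}(x)} \big( u(y) - u(x) \big)\, K_{\theta}(|y - x|)\, dy ,
\]
% where K_theta is the integrand that is difficult to specify from standard
% laboratory data and is instead fit so that solutions of the nonlocal model
% reproduce coarse-grained MD displacements.
```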
Understanding the fundamental mechanisms underpinning shock initiation is critical to predicting energetic material (EM) safety and performance. Currently, the timescales and pathways by which shock-excited lattice modes transfer energy into specific chemical bonds remain an open question. Towards understanding these mechanisms, our group has previously measured the vibrational energy transfer (VET) pathways in several energetic thin films using broadband, femtosecond transient absorption spectroscopy. However, new technologies are needed to move beyond these thin film surrogates and measure broadband VET pathways in realistic EM morphologies. Herein, we describe a new broadband, femtosecond, attenuated total reflectance spectroscopy apparatus. Performance of the system is benchmarked against published data, and the first VET results from a pressed EM pellet are presented. This technology enables fundamental studies of VET dynamics across sample configurations and environments (pressure, temperature, etc.) and supports the potential use of VET studies in the non-destructive surveillance of EM components.
This SAND report fulfills the completion requirements for the ASC Physics and Engineering Modeling Level 2 Milestone 7836 during Fiscal Year 2021. The Sandia Simplified potential energy clock (SPEC) non-linear viscoelastic constitutive model was developed to predict a whole host of polymer glass physical behaviors in order to provide a tool to assess the effects of stress on these materials over their lifecycle. Polymer glasses are used extensively in applications such as electronics packaging, where encapsulants and adhesives can be critical to device performance. In this work, the focus is on assessing the performance of the model in predicting material evolution associated with long-term physical aging, an area in which the model has not been fully vetted. These predictions are key to utilizing models to help demonstrate electronics packaging component reliability over decades-long service lives, a task that is very costly and time consuming to execute experimentally. The initiating hypothesis for the work was that a model calibration process can be defined that enables confidence in physical aging predictions under ND-relevant environments and timescales without sacrificing other predictive capabilities. To test the hypothesis, an extensive suite of calibration and aging data was assembled from a combination of prior work and collaborating projects (Aging and Lifetimes as well as the DoD Joint Munitions Program) for two mission-relevant epoxy encapsulants, 828DGEBA/DEA and 828DGEBA/T403. Multiple model calibration processes were developed and evaluated against the entire set of data for each material. A qualitative assessment of each calibration's ability to predict the wide range of aging responses was key to ranking the calibrations against each other. During this evaluation, predictions that were identified as non-physical, i.e., that demonstrated something qualitatively different from known material behavior, were heavily weighted against the calibration performance. Thus, unphysical predictions for one aspect of aging response could generate a lower overall rating for a calibration process even if that process generated better quantitative predictions for another aspect of aging response. This assurance that all predictions are qualitatively correct is important to the overall aim of utilizing the model to predict residual stress evolution, which will depend on the interplay amongst the different material aging responses. The DSC-focused calibration procedure generated the best all-around aging predictions for both materials, demonstrating material models that can qualitatively predict the whole host of different physical aging responses that have been measured. This step forward in predictive capability comes from an unanticipated source: utilization of calorimetry measurements to specify model parameters. The DSC-focused calibration technique performed better than compression-focused techniques that more heavily weigh measurements more closely related to the structural responses to be predicted. Indeed, the DSC-focused calibration procedure was only possible due to the recent incorporation of the enthalpy and heat capacity features into SPEC, which was newly verified during this L2 milestone. Fundamentally similar aspects of the two material model calibrations as well as parametric studies to assess sensitivities of the aging predictions are discussed within the report.
A perspective on the next steps toward the overall goal of predicting residual stress evolution under stockpile conditions closes the report.
This project focused on providing a fundamental physico-chemical understanding of the coupling mechanisms of corrosion- and radiation-induced degradation at material-salt interfaces in Ni-based alloys operating in emulated Molten Salt Reactor (MSR) environments through the use of a unique suite of aging experiments, in-situ nanoscale characterization experiments on these materials, and multi-physics computational models. The technical basis and capabilities described in this report bring us a step closer to accelerating the deployment of MSRs by closing knowledge gaps related to materials degradation in harsh environments.
Arithmetic Coding (AC) using Prediction by Partial Matching (PPM) is a compression algorithm that can be used as a machine learning algorithm. This paper describes a new algorithm, NGram PPM. NGram PPM has all the predictive power of AC/PPM, but at a fraction of the computational cost. Unlike compression-based analytics, it is also amenable to a vector space interpretation, which creates the ability for integration with other traditional machine learning algorithms. AC/PPM is reviewed, including its application to machine learning. Then NGram PPM is described and test results are presented, comparing them to AC/PPM.
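The sketch below conveys the compression-as-classification idea on which such methods rest; it is not the NGram PPM algorithm itself (real PPM uses adaptive escape probabilities rather than the add-one smoothing used here). The principle: train one model per class, then assign a new string to the class whose model encodes it in the fewest bits.

```python
import math
from collections import defaultdict

class NGramModel:
    """Character n-gram model with add-one smoothing; an illustrative
    stand-in for PPM-style prediction."""

    def __init__(self, n=3):
        self.n = n
        self.counts = defaultdict(int)   # (context, char) -> count
        self.ctx = defaultdict(int)      # context -> count
        self.alphabet = set()

    def train(self, text):
        text = ' ' * (self.n - 1) + text
        self.alphabet |= set(text)
        for i in range(self.n - 1, len(text)):
            c, ctx = text[i], text[i - self.n + 1:i]
            self.counts[(ctx, c)] += 1
            self.ctx[ctx] += 1

    def code_length(self, text):
        """Bits needed to encode `text`; fewer bits = better model fit."""
        text = ' ' * (self.n - 1) + text
        bits, V = 0.0, max(len(self.alphabet), 1)
        for i in range(self.n - 1, len(text)):
            c, ctx = text[i], text[i - self.n + 1:i]
            p = (self.counts[(ctx, c)] + 1) / (self.ctx[ctx] + V)
            bits -= math.log2(p)
        return bits

# Classify by minimum code length across per-class models.
spam, ham = NGramModel(), NGramModel()
spam.train("win cash now win prizes now")
ham.train("meeting agenda attached for review")
msg = "win prizes"
print("spam" if spam.code_length(msg) < ham.code_length(msg) else "ham")
```

Because each context's statistics live in ordinary count tables, per-context probabilities can be laid out as feature vectors, which suggests how an n-gram formulation can admit the vector space interpretation mentioned above.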
Using an optical microscopy setup adapted to in-situ studies of ice formation at ambient pressure, we examined a specific multicomponent mineral, microcline, with the ultimate aim of gaining a more realistic understanding of ice nucleation in Earth's atmosphere. We focused on a perthitic feldspar, microcline, to test the hypothesis that the co-existence of K-rich and Na-rich phases in some feldspars contributes to enhanced ice nucleation. On a sample deliberately chosen to contain lamellae, a typical perthitic microstructure, and flat surface regions next to each other, we performed a series of ice formation experiments. We found microcline to promote ice formation, causing a large number of ice nucleation events at around -27°C. The number of ice nuclei decreased from experimental run to experimental run, indicating surface aging upon repeated exposure to humidity. An analysis of 10 experimental runs under identical conditions did not reveal an obvious enhancement of ice formation at the lamellar microstructure. Instead, we find efficient nucleation at various surface sites that produce orientationally aligned ice crystallites with asymmetric shape. Based on this observation, we propose that surface steps running along select directions produce microfacets of an orientation that is favorable to enhanced ice nucleation, similar to that previously reported for K-rich feldspars.
We present an overview of the design and development of optical self-emission and debris imaging diagnostics for the Z Machine at Sandia National Laboratories. These diagnostics were designed and implemented to address several gaps in our understanding of visibly emitting phenomena on Z and the post-shot debris environment. Optical emission arises from plasmas that form on the transmission line that delivers energy to Z loads and on the Z targets themselves; however, the dynamics of these plasmas are difficult to assess without imaging data. Addressing this, we developed a new optical imager called SEGOI (Self-Emission Gated Optical Imager) that leverages the eight gated optical imagers and two streak cameras of the Z Line VISAR system. SEGOI is a low-cost, side-on imager with a 1 cm field of view and 30-50 µm spatial resolution, sensitive to green light (540-600 nm). This report outlines the design considerations and development of this diagnostic and presents an overview of the first diagnostic data acquired from four experimental campaigns. SEGOI was fielded on power flow experiments to image plasmas forming on and between transmission lines, on an inertial confinement fusion experiment called the Dynamic Screw Pinch to image low density plasmas forming on return current posts, on an experiment designed to measure the magneto Rayleigh-Taylor instability to image the instability bubble trajectory and self-emission structures, and finally on a Magnetized Liner Inertial Fusion (MagLIF) experiment to image the emission from the target. The second diagnostic developed, called DINGOZ (Debris ImagiNG on Z), was designed to improve our understanding of the post-shot debris environment. DINGOZ is an airtight enclosure that houses electronics and batteries to operate a high-speed (10-400 kfps) camera in the Z Machine center section. We report on the design considerations of this new diagnostic and present the first high-speed imaging data of the post-shot debris environment on Z.
The twenty-seven critical experiments in this series were performed in 2020 in the SCX at the Sandia Pulsed Reactor Facility. The experiments are grouped by fuel rod pitch. Case 1 is a base case with a pitch of 0.8001 cm and no water holes in the array. Cases 2 through 6 have the same pitch as Case 1 but contain various configurations with water holes, providing slight variations in the fuel-to-water ratio. Similarly, Case 7 is a base case with a pitch of 0.854964 cm and no water holes in the array. Cases 8 through 11 have the same pitch as Case 7 but contain various configurations with water holes. Cases 12 through 15 have a pitch of 1.131512 cm and differ according to the number of water holes in the array, with Case 12 having no water holes. Cases 16 through 19 have a pitch of 1.209102 cm and differ according to the number of water holes in the array, with Case 16 having no water holes. Cases 20 through 23 have a pitch of 1.6002 cm and differ according to the number of water holes in the array, with Case 20 having no water holes. Cases 24 through 27 have a pitch of 1.709928 cm and differ according to the number of water holes in the array, with Case 24 having no water holes. As the experiment case number increases, the fuel-to-water volume ratio decreases.
Swiler, Laura P.; Becker, Dirk-Alexander; Brooks, Dusty M.; Govaerts, Joan; Koskinen, Lasse; Plischke, Elmar; Röhlig, Klaus-Jürgen; Saveleva, Elena; Spiessl, Sabine M.; Stein, Emily S.; Svitelman, Valentina
Over the past four years, an informal working group has developed to investigate existing sensitivity analysis methods, examine new methods, and identify best practices. The focus is on the use of sensitivity analysis in case studies involving geologic disposal of spent nuclear fuel or nuclear waste. To examine ideas and have applicable test cases for comparison purposes, we have developed multiple case studies. Four of these case studies are presented in this report: the GRS clay case, the SNL shale case, the Dessel case, and the IBRAE groundwater case. We present the different sensitivity analysis methods investigated by various groups, the results obtained by different groups and different implementations, and summarize our findings.
In this LDRD project, we developed a versatile capability for high-resolution measurements of electron scattering processes in gas-phase molecules, such as ionization, dissociation, and electron attachment/detachment. This apparatus is designed to advance fundamental understanding of these processes and to inform predictions of plasmas associated with applications such as plasma-assisted combustion, neutron generation, re-entry vehicles, and arcing that are critical to national security. We use innovative coupling of electron-generation and electron-imaging techniques that leverages Sandia’s expertise in ion/electron imaging methods. Velocity map imaging provides a measure of the kinetic energies of electrons or ion products from electron scattering in an atomic or molecular beam. We designed, constructed, and tested the apparatus. Tests include dissociative electron attachment to O2 and SO2, as well as a new method for studying laser-initiated plasmas. This capability sets the stage for new studies in dynamics of electron scattering processes, including scattering from excited-state atoms and molecules.
This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.
In order to establish a zone of influence (ZOI) due to a high energy arcing fault (HEAF) environment, the fragility of the targets must be determined. The high heat flux/short duration exposure of a HEAF is considerably different from that of a traditional hydrocarbon fire, which previous research has addressed. The previous failure metrics (e.g., internal jacket temperature of a cable exposed to a fire) were based on low heat flux/long duration exposures. Because of this, different physics and failure modes were considered to evaluate the fragility of cables exposed to a HEAF. Tests on cable targets were performed at high heat flux/short duration exposures to gain insight into the relevant physics and failure modes. These tests yielded data on several relevant failure modes, including electrical failure and sustained ignition. Additionally, the results indicated a relationship between the total energy of exposure and the damage state of the cable target. These data can be used to inform the fragility of the targets.
This report provides a summary of notes for building and running the Sandia Computational Engine for Particle Transport for Radiation Effects (SCEPTRE) code. SCEPTRE is a general-purpose C++ code for solving the linear Boltzmann transport equation in serial or parallel using unstructured spatial finite elements, multigroup energy treatment, and a variety of angular treatments including discrete ordinates (Sn) and spherical harmonics (Pn). Either the first-order form of the Boltzmann equation or one of the second-order forms may be solved. SCEPTRE requires a small number of open-source Third Party Libraries (TPL) to be available, and example scripts for building these TPL are provided. The TPL needed by SCEPTRE are Trilinos, Boost, and Netcdf. SCEPTRE uses an autotools build system, and a sample configure script is provided. Running the SCEPTRE code requires that the user provide a spatial finite-element mesh in Exodus format and a cross section library in a format that will be described. SCEPTRE uses an XML-based input, and several examples will be provided.
Most earth materials are anisotropic with regard to seismic wave-speeds, especially materials such as shales, or where oriented fractures are present. However, the base assumption for many numerical simulations is to treat earth materials as isotropic media. This is done for simplicity, the apparent weakness of anisotropy in the far field, and the lack of well-characterized anisotropic material properties for input into numerical simulations. One approach for addressing the higher complexity of actual geologic regions is to model the material as an orthorhombic medium. We have developed an explicit time-domain, finite-difference (FD) algorithm for simulating three-dimensional (3D) elastic wave propagation in a heterogeneous orthorhombic medium. The objective of this research is to investigate the errors and biases that result from modeling a non-isotropic medium as an isotropic medium. This is done by computing "observed data" from synthetic, anisotropic simulations with the assumption of an orthorhombic, anisotropic earth model. Green's functions for an assumed isotropic earth model are computed and then used in an inversion designed to estimate moment tensors from the "observed" data. One specific area of interest is how shear waves, which are introduced in an anisotropic model even for an isotropic explosion, affect the characterization of seismic sources when isotropic earth assumptions are made. This work is done in support of the modeling component of the Source Physics Experiment (SPE), a series of underground chemical explosions at the Nevada National Security Site (NNSS).
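The inversion step referenced above can be summarized in its standard linear form (generic notation, not necessarily the exact formulation used in this work):

```latex
% Observed displacements d modeled as Green's functions G (computed under
% the isotropic-earth assumption) acting on moment-tensor elements m:
\[
d = G\, m, \qquad
\hat{m} = \left( G^{\mathsf{T}} G \right)^{-1} G^{\mathsf{T}} d ,
\]
% so any anisotropic signal present in the "observed" synthetics but absent
% from G is mapped into biased moment-tensor estimates, which is precisely
% the effect this study quantifies.
```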
This user’s guide documents capabilities in Sierra/SolidMechanics which remain “in-development” and thus are not tested and hardened to the standards of capabilities listed in Sierra/SM 5.2 User’s Guide. Capabilities documented herein are available in Sierra/SM for experimental use only until their official release. These capabilities include, but are not limited to, novel discretization approaches such as the conforming reproducing kernel (CRK) method, numerical fracture and failure modeling aids such as the extended finite element method (XFEM) and J-integral, explicit time step control techniques, dynamic mesh rebalancing, as well as a variety of new material models and finite element formulations.
An exceptional set of newly discovered advanced superalloys known as refractory high-entropy alloys (RHEAs) can provide near-term solutions for wear, erosion, corrosion, high-temperature strength, creep, and radiation issues associated with supercritical carbon dioxide (sCO2) Brayton Cycles and advanced nuclear reactors. In particular, these superalloys can significantly extend their durability, reliability, and thermal efficiency, thereby making them more cost-competitive, safer, and more reliable. For this project, we endeavored to manufacture and test certain RHEAs to solve technical issues impacting the Brayton Cycle and advanced nuclear reactors. This was achieved by leveraging Sandia's patents, technical advances, and previous experience working with RHEAs. Herein, three RHEA manufacturing methods were applied: laser engineered net shaping, spark plasma sintering, and spray coating. Two promising RHEAs were selected, HfNbTaZr and MoNbTaVW. To demonstrate their performance, erosion, structural, radiation, and high-temperature experiments were conducted on the RHEAs, stainless steel (SS) 316 L, SS 1020, and Inconel 718 test coupons, as well as bench-top components. The experimental data are presented and analyzed, and they confirm the superior performance of the HfNbTaZr and MoNbTaVW RHEAs versus SS 316 L, SS 1020, and Inconel 718. In addition, to gain more insight for larger-scale RHEA applications, the erosion and structural capabilities of the two RHEAs were simulated and compared with the experimental data. Most importantly, the erosion and coating material experimental data show that erosion in sCO2 Brayton Cycles can be eliminated completely if RHEAs are used. The experimental suite and validations confirm that HfNbTaZr is suitable for harsh environments that do not include nuclear radiation, while MoNbTaVW is suitable for harsh environments that include radiation.
Trujillo, Natasha; Rose-Coss, Dylan; Heath, Jason; Dewers, Thomas D.; Ampomah, William; Mozley, Peter S.; Cather, Martha
Leakage pathways through caprock lithologies for underground storage of CO2 and/or enhanced oil recovery (EOR) include intrusion into nano-pore mudstones, flow within fractures and faults, and larger-scale sedimentary heterogeneity (e.g., stacked channel deposits). To assess the multiscale sealing integrity of the caprock system that overlies the Morrow B sandstone reservoir, Farnsworth Unit (FWU), Texas, USA, we combine pore-to-core observations, laboratory testing, well logging results, and noble gas analysis. A cluster analysis of gamma ray, compressional slowness, and other logs was combined with caliper responses and triaxial rock mechanics testing to define eleven lithologic classes across the upper Morrow shale and Thirteen Finger limestone caprock units, with estimates of dynamic elastic moduli and fracture breakdown pressures (minimum horizontal stress gradients) for each class. Mercury porosimetry determinations of CO2 column heights in sealing formations yield values exceeding reservoir height. Noble gas profiles provide a "geologic time-integrated" assessment of fluid flow across the reservoir-caprock system, with Morrow B reservoir measurements consistent with decades-long EOR water-flooding, and upper Morrow shale and lower Thirteen Finger limestone values being consistent with long-term geohydrologic isolation. Together, these data suggest an excellent sealing capacity for the FWU and provide limits for injection pressure increases accompanying carbon storage activities.
Virtual prototyping in engineering design relies on modern numerical models of contacting structures with accurate resolution of interface mechanics, which strongly affect the system-level stiffness and energy dissipation due to frictional losses. High-fidelity modeling within the localized interfaces is required to resolve local quantities of interest that may drive design decisions. The high-resolution finite element meshes necessary to resolve inter-component stresses tend to be computationally expensive, particularly when the analyst is interested in response time histories. The Hurty/Craig-Bampton (HCB) transformation is a widely used method in structural dynamics for reducing the interior portion of a finite element model while retaining all nonlinear contact degrees of freedom (DOF) in physical coordinates. These models may still require many DOF to adequately resolve the kinematics of the interface, leading to inadequate reduction and computational savings. This study proposes a novel interface reduction method to overcome these challenges by means of system-level characteristic constraint (SCC) modes and properly orthogonal interface modal derivatives (POIMDs) for transient dynamic analyses. Both SCC modes and POIMDs are computed using the reduced HCB mass and stiffness matrices, which can be obtained directly from many commercial finite element analysis codes. Comparison of time history responses to an impulse-type load in a mechanical beam assembly indicates that the interface-reduced model correlates well with the HCB truth model. Localized features like slip and contact area are well represented in the time domain when the beam assembly is loaded with a broadband excitation. The proposed method also yields reduced-order models with greater critical timestep lengths for explicit integration schemes.
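For reference, the HCB reduction mentioned above takes the standard form below, in which interior DOF are condensed while boundary (contact) DOF are retained in physical coordinates:

```latex
% Interior DOF u_i approximated by fixed-interface modes Phi_k with modal
% coordinates q plus static constraint modes Psi_c tied to boundary DOF u_b:
\[
\begin{Bmatrix} u_b \\ u_i \end{Bmatrix}
\approx
\begin{bmatrix} I & 0 \\ \Psi_c & \Phi_k \end{bmatrix}
\begin{Bmatrix} u_b \\ q \end{Bmatrix}.
\]
% The proposed method then further compresses the retained u_b using SCC
% modes and proper orthogonal interface modal derivatives (POIMDs).
```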
A six-month research effort has advanced the hybrid kinetic-fluid modeling capability required for developing non-thermal warm x-ray sources on Z. The three particle treatments of quasi-neutral, multi-fluid, and kinetic are demonstrated in 1D simulations of an Ar gas puff. The simulations determine the required resolutions for the advanced implicit solution techniques and debug the hybrid particle treatments with equation-of-state and radiation transport. The kinetic treatment is used in a preliminary analysis of the non-Maxwellian nature of a gas target. It also demonstrates the sensitivity of the cyclotron and collision frequencies in determining the transition from thermal to non-thermal particle populations. Finally, a 2D Ar gas puff simulation of a Z shot demonstrates the readiness to proceed with realistic target configurations. The results put us on a very firm footing to proceed to a full LDRD, which includes continued development of transition criteria and x-ray yield calculation.
Metallic lattice structures are being considered for shock mitigation applications due to their superior mechanical properties, energy absorption capability, and the lightweight characteristics inherent to the additive manufacturing process. In this study, shock compression experiments coupled to x-ray phase contrast imaging (PCI) were conducted on 316L stainless steel lattices. Meso-scale simulations incorporating the as-built lattice structure characterized by computed tomography were used to simulate PCI radiographs in CTH for direct comparison to experimental data. The methodology presented here offers robust validation for constitutive properties to further our understanding of lattice compaction at application-relevant strain rates.
Magnetic microscopy with high spatial resolution helps to solve a variety of technical problems in condensed-matter physics, electrical engineering, biomagnetism, and geomagnetism. In this work we used quantum diamond magnetic microscope (QDMM) setups, which use a dense uniform layer of magnetically-sensitive nitrogen-vacancy (NV) centers in diamond to image an external magnetic field using a fluorescence microscope. We used this technique for imaging few-micron ferromagnetic needles used as a physically unclonable function (PUF) and to passively interrogate electric current paths in a commercial 555 timer integrated circuit (IC). As part of the QDMM development, we also found a way to calculate ion implantation recipes to create diamond samples with dense uniform NV layers at the surface. This work opens the possibility for follow-up experiments with 2D magnetic materials, ion implantation, and electronics characterization and troubleshooting.
This paper presents a practical methodology for propagating and processing uncertainties associated with random measurement and estimation errors (that vary from test-to-test) and systematic measurement and estimation errors (uncertain but similar from test-to-test) in inputs and outputs of replicate tests to characterize response variability of stochastically varying test units. Also treated are test condition control variability from test-to-test and sampling uncertainty due to limited numbers of replicate tests. These aleatory variabilities and epistemic uncertainties result in uncertainty on computed statistics of output response quantities. The methodology was developed in the context of processing experimental data for “real-space” (RS) model validation comparisons against model-predicted statistics and uncertainty thereof. The methodology is flexible and sufficient for many types of experimental and data uncertainty, offering the most extensive data uncertainty quantification (UQ) treatment of any model validation method the authors are aware of. It handles both interval and probabilistic uncertainty descriptions and can be performed with relatively little computational cost through use of simple and effective dimension- and order-adaptive polynomial response surfaces in a Monte Carlo (MC) uncertainty propagation approach. A key feature of the progressively upgraded response surfaces is that they enable estimation of propagation error contributed by the surrogate model. Sensitivity analysis of the relative contributions of the various uncertainty sources to the total uncertainty of statistical estimates is also presented. The methodologies are demonstrated on real experimental validation data involving all the mentioned sources and types of error and uncertainty in five replicate tests of pressure vessels heated and pressurized to failure. Simple spreadsheet procedures are used for all processing operations.
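The following sketch illustrates the general shape of surrogate-based Monte Carlo propagation with both error types; it is a deliberately simplified stand-in (one input dimension, a fixed polynomial order, illustrative error magnitudes) rather than the paper's adaptive-response-surface procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

def propagate(x, y, n_mc=20000, deg=2, sys_sigma=0.02, rand_sigma=0.05):
    """Surrogate-based MC propagation sketch.

    x: (n_tests,) measured input condition for each replicate test
    y: (n_tests,) measured output response for each replicate test
    """
    # 1. Fit a low-order polynomial response surface to the replicate data.
    surrogate = np.poly1d(np.polyfit(x, y, deg))

    # 2. Monte Carlo: a systematic error is one draw shared by all tests in
    #    a realization; random errors are redrawn independently per test.
    means = np.empty(n_mc)
    for k in range(n_mc):
        sys_err = rng.normal(0.0, sys_sigma)              # shared bias
        rand_err = rng.normal(0.0, rand_sigma, x.size)    # per-test noise
        means[k] = surrogate(x * (1 + sys_err) + rand_err).mean()

    # 3. Spread of the output statistic across realizations = its uncertainty.
    return means.mean(), means.std()

# Five replicate tests: failure pressure vs. one input condition.
x = np.array([0.98, 1.01, 1.00, 1.02, 0.99])
y = np.array([10.1, 10.6, 10.4, 10.8, 10.2])
print(propagate(x, y))
```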
Identification and characterization of underground events from surface or remote data requires a thorough understanding of the rock material properties. However, material properties usually come from borehole data, which is expensive and not always available. A potential alternative is to use topographic characteristics to approximate the strength, but this has never been done before quantitatively. Here we present the results from the first steps towards this goal. We have found that there are strong correlations between compressive and tensile strengths and slopes, but these correlations vary depending on data analysis details. Rugosity may be better correlated to strength than slope values. More comprehensive analyses are needed to fully understand the best method of predicting strength from topography for this area. We also found that misalignment of multiple GIS datasets can have a large influence on the ability to make interpretations. Lastly, these results will require further study in a variety of climatic conditions before being applicable to other sites.
All energy production systems need efficient energy conversion systems. Current Rankine cycles use water to generate steam at temperatures where efficiency is limited to around 40%. As existing fossil and nuclear power plants are decommissioned due to the end of their effective life and/or society's desire for cleaner generation options, more efficient energy conversion is needed to keep up with increasing electricity demands. Modern energy generation technologies, such as advanced nuclear reactors and concentrated solar, coupled to high-efficiency sCO2 conversion systems, provide a path to efficient, clean energy systems. Leading R&D communities worldwide agree that the successful development of sCO2 Brayton power cycle technology will eventually bring about large-scale changes to existing multi-billion-dollar global markets and enable power applications not currently possible or economically justifiable. However, all new technologies face challenges on the path to commercialization, and the electricity sector is distinctively risk averse. The Sandia sCO2 Brayton team needs to better understand what the electricity sector requires in terms of new-technology risk mitigation, generation efficiency, reliability improvements over current technology, and the cost targets that would make new technology adoption worthwhile. Relying on the R&D community consensus that an sCO2 power cycle will increase the revenue of the electrical industry, without addressing the electrical industry's concerns, significantly decreases the potential for adoption at commercial scale. With a clear understanding of market perspectives on technology adoption, including military, private sector, and utility customers, the Sandia Brayton Team can resolve industry concerns for smoother development and a faster transition to commercialization. An extensive customer discovery process, similar to that executed through the NSF's I-Corps program, is necessary in order to understand the pain points of the market and articulate the value proposition of Brayton systems in terms that engage decision makers and facilitate commercialization of the technology.
Battery cells with metal casings are commonly considered incompatible with nuclear magnetic resonance (NMR) spectroscopy because the oscillating radio-frequency magnetic fields ("rf fields") responsible for excitation and detection of NMR-active nuclei do not penetrate metals. Here, we show that rf fields can still efficiently penetrate nonmetallic layers of coin cells with metal casings provided "B1 damming" configurations are avoided. With this understanding, we demonstrate noninvasive high-field in situ 7Li and 19F NMR of coin cells with metal casings using a traditional external NMR coil. This includes the first NMR measurements of an unmodified commercial off-the-shelf rechargeable battery in operando, from which we detect, resolve, and separate 7Li NMR signals from elemental Li, anodic β-LiAl, and cathodic LixMnO2 compounds. Real-time changes of β-LiAl lithium diffusion rates and variable β-LiAl 7Li NMR Knight shifts are observed and tied to electrochemically driven changes of the β-LiAl defect structure.