The purpose of this report is to document updates to the simulation of commercial vacuum drying procedures at the Nuclear Energy Work Complex at Sandia National Laboratories. Validation of the extent of water removal in a dry spent nuclear fuel storage system based on drying procedures used at nuclear power plants is needed to close existing technical gaps. Operational conditions leading to incomplete drying could affect the fuel, cladding, and other components in the system. A general lack of data suitable for validating models of commercial nuclear canister drying processes necessitates additional, well-designed investigations of drying-process efficacy and water retention. Scaled tests that incorporate the relevant physics and well-controlled boundary conditions are essential to provide insight and guidance for simulating prototypic systems undergoing drying. This report documents testing updates for the Dashpot Drying Apparatus (DDA), a reduced-scale apparatus with multiple Pressurized Water Reactor (PWR) fuel rod surrogates and a single guide tube dashpot. The apparatus is fashioned from a truncated 5×5 section of a prototypic 17×17 PWR fuel skeleton and includes the lowest segment of a single guide tube, often referred to as the dashpot region. The guide tube in this assembly is open and allows insertion of a poison rod (neutron absorber) surrogate.
There has been ever-growing interest and engagement regarding net-zero and carbon neutrality goals, with many nations committing to steep emissions reductions by mid-century. Although water plays critical roles in various sectors, there has been a distinct gap in discussions to date about the role of water in the transition to a carbon neutral future. To address this need, a webinar was convened in April 2022 to gain insights into how water can support or influence active strategies for addressing emissions across the energy, industrial, and carbon sectors. The webinar presentations and discussions highlighted various nuances of direct and indirect water use both within and across technology sectors (Figure ES-1). For example, hydrogen and concrete production, water for mining, and inland waterways transportation are all heavily influenced by the energy sources used (fossil fuels vs. renewable sources) as well as local resource availability. Algal biomass, on the other hand, can be produced across diverse geographies (terrestrial to sea) and in a range of source water qualities, including wastewater, and could also support pollution remediation through nutrient and metals recovery. Finally, water also influences carbon dynamics and cycling within natural systems across terrestrial, aquatic, and geologic settings. These dynamics underscore not only the critical role of water within the energy-water nexus, but also its extension into the energy-water-carbon nexus.
High-fidelity engineering simulations are often predictive but computationally expensive, and the computational burden is usually mitigated through parallelism on high-performance computing (HPC) architectures. Optimization problems built on these applications are challenging because of the high cost of each high-fidelity simulation. In this paper, an asynchronous parallel constrained Bayesian optimization method is proposed to efficiently solve such simulation-based optimization problems on HPC platforms under a fixed computational budget, where the maximum number of simulations is constant. The advantages of this method are three-fold. First, the efficiency of Bayesian optimization is improved: multiple input locations are evaluated in parallel and asynchronously, so that when any one evaluation finishes, another input is queried without waiting for the whole batch to complete, accelerating convergence with respect to wall-clock time. Second, the proposed method can handle both known and unknown constraints. Third, the proposed method samples among several acquisition functions based on their rewards using a modified GP-Hedge scheme. The proposed framework is termed aphBO-2GP-3B, for asynchronous parallel hedge Bayesian optimization with two Gaussian processes and three batches. Its numerical performance is comprehensively benchmarked on 16 numerical examples against six other parallel Bayesian optimization variants and a parallel Monte Carlo baseline, and demonstrated on two real-world, computationally expensive industrial applications: one based on finite element analysis (FEA) and one based on computational fluid dynamics (CFD) simulations.
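The GP-Hedge portfolio idea referenced above can be sketched independently of the full aphBO-2GP-3B framework: each acquisition function is an "arm" whose cumulative reward sets its sampling probability. The sketch below is a minimal illustration of that arm-selection step only; the class and parameter names are hypothetical, and the paper's modified scheme may differ in its reward and probability definitions.

```python
import math
import random

class HedgeSelector:
    """Toy GP-Hedge-style portfolio: sample one of several acquisition
    strategies with probability proportional to exp(eta * cumulative reward)."""

    def __init__(self, n_arms, eta=1.0, seed=0):
        self.rewards = [0.0] * n_arms
        self.eta = eta
        self.rng = random.Random(seed)

    def probabilities(self):
        # Subtract the max reward before exponentiating for numerical stability.
        m = max(self.rewards)
        w = [math.exp(self.eta * (r - m)) for r in self.rewards]
        s = sum(w)
        return [x / s for x in w]

    def select(self):
        # Sample an arm index according to the current probabilities.
        u, c = self.rng.random(), 0.0
        for i, p in enumerate(self.probabilities()):
            c += p
            if u <= c:
                return i
        return len(self.rewards) - 1

    def update(self, arm, reward):
        # Credit the arm whose proposal was evaluated.
        self.rewards[arm] += reward
```

In a full optimizer, the selected arm would decide which acquisition function proposes the next simulation input, and `update` would credit that arm with the observed improvement.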
Sierra/SD provides a massively parallel implementation of structural dynamics finite element analysis, required for high-fidelity, validated models used in modal, vibration, static and shock analysis of weapons systems. This document provides a user's guide to the input for Sierra/SD. Details of input specifications for the different solution types, output options, element types and parameters are included. The appendices contain detailed examples, and instructions for running the software on parallel platforms.
This paper serves as the Interface Control Document (ICD) for the Seascape automated test harness developed at Sandia National Laboratories. The primary purposes of the Seascape system are to: (1) provide a place for accruing large, curated, labeled data sets useful for developing and evaluating detection and classification algorithms (including, but not limited to, supervised machine learning applications); and (2) provide an automated structure for specifying and running algorithm tests and generating reports on algorithm performance. Seascape uses the open source tools GitLab, Nexus, Solr, and Banana, together with code written in Python, to automatically provision and configure computational nodes, queue jobs that run algorithms against the stored data sets, gather the results, and generate reports, which are then stored in the Nexus artifact server.
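The actual Seascape pipeline is built on GitLab, Nexus, Solr, and Banana; purely as a schematic of the specify-run-report loop described above (all names hypothetical and unrelated to the real interfaces), the core iteration might look like:

```python
def run_harness(datasets, algorithms):
    """Toy version of the queue-run-report loop: pair every registered
    algorithm with every curated data set, run it, and collect a report row."""
    report = []
    for ds_name, data in datasets.items():
        for algo_name, algo in algorithms.items():
            score = algo(data)  # e.g. a detection or classification metric
            report.append({"dataset": ds_name,
                           "algorithm": algo_name,
                           "score": score})
    return report
```

In the real system, each inner step would instead dispatch a queued job to a provisioned node and the report rows would be archived as artifacts.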
State chart notations with ‘run to completion’ semantics are popular with engineers for designing controllers that react to environment events with a sequence of state transitions, but they lack formal refinement and rigorous verification methods. Such models are usually constructed at a concrete level and validated with animation techniques that rely on human judgement. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible. Abstraction and formal verification provide greater assurance that critical (e.g. safety or security) properties are not violated by the control system. In this paper, we introduce a notion of refinement into a ‘run to completion’ state chart modelling notation and leverage Event-B’s tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into Event-B refinements and suggest a solution. We illustrate our approach and show how models can be validated at different refinement levels using our scenario checker animation tools. We show how critical invariant properties can be verified by proof despite the reactive nature of the system, and how behavioural aspects of the system can be verified by testing the expected reactions using a temporal-logic model-checking approach. To verify liveness, we outline a proof that the run to completion is deadlock-free and converges to complete the run.
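As a rough illustration of the ‘run to completion’ semantics under discussion (names hypothetical, not taken from the paper or from Event-B), the sketch below dispatches one external event and then fires eventless transitions until none is enabled:

```python
class RtcStateMachine:
    """Minimal 'run to completion' sketch: an external trigger starts a run,
    and internal (eventless) transitions fire until none is enabled."""

    def __init__(self, initial, transitions):
        # transitions: dict mapping (state, event_or_None) -> next state;
        # a None event models an eventless (completion) transition.
        self.state = initial
        self.transitions = transitions

    def dispatch(self, event):
        trace = [self.state]
        # React to the external event (stay put if no transition matches).
        self.state = self.transitions.get((self.state, event), self.state)
        trace.append(self.state)
        # Run to completion: take eventless transitions until none is enabled.
        while (self.state, None) in self.transitions:
            self.state = self.transitions[(self.state, None)]
            trace.append(self.state)
        return trace
```

The liveness concern mentioned above corresponds here to the `while` loop terminating: a cycle of eventless transitions would make the run diverge instead of completing.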
White dwarfs (WDs) are useful across a wide range of astrophysical contexts. The appropriate interpretation of their spectra relies on the accuracy of WD atmosphere models. One essential ingredient of atmosphere models is the theory used for the broadening of spectral lines. To date, the models have relied on the unified theory of line broadening of Vidal, Cooper, and Smith (VCS). There have since been advancements in the theory; however, the calculations used in model atmosphere codes have received only minor updates. Meanwhile, advances in instrumentation and data have uncovered indications of inaccuracies: spectroscopic temperatures are roughly 10% higher and spectroscopic masses are roughly 0.1 M⊙ higher than their photometric counterparts. The evidence suggests that VCS-based treatments of line profiles may be at least partly responsible. Gomez et al. developed a simulation-based line-profile code, Xenomorph, using an improved theoretical treatment that can be used to inform questions around the discrepancy. However, the code required revisions to sufficiently decrease noise for use in model spectra and to make it computationally tractable and physically realistic. In particular, we investigate three additional physical effects that are not captured in the VCS calculations: ion dynamics, higher-order multipole expansion, and an expanded basis set. We also implement a simulation-based approach to occupation probability. The present study limits its scope to the first three hydrogen Balmer transitions (Hα, Hβ, and Hγ). We find that screening effects and occupation probability have the largest effects on the line shapes and will likely have important consequences for stellar synthetic spectra.
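The VCS and Xenomorph calculations themselves are far beyond a short example, but the unperturbed line centers of the three Balmer transitions studied (Hα, Hβ, Hγ) follow from the standard Rydberg formula, sketched here purely as a point of reference for the line positions being broadened:

```python
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in m^-1

def balmer_wavelength_nm(n):
    """Vacuum wavelength (nm) of the hydrogen Balmer transition n -> 2,
    from the Rydberg formula 1/lambda = R_H * (1/2^2 - 1/n^2)."""
    inv_lambda = R_H * (1.0 / 2**2 - 1.0 / n**2)
    return 1e9 / inv_lambda  # convert m to nm

# n = 3, 4, 5 give Halpha (~656.5 nm), Hbeta (~486.3 nm), Hgamma (~434.2 nm)
```

Stark broadening, ion dynamics, and occupation probability then modify the shape and strength of the profile around each of these centers.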
The RISC-V instruction set architecture's open licensing policy has spawned a hive of development activity, making a range of implementations publicly available. The environments in which RISC-V operates have expanded correspondingly, driving the need for a generalized approach to evaluating the reliability of RISC-V implementations under adverse operating conditions or after normal wear-out periods. Fault injection (FI) refers to the process of changing the state of registers or wires, either permanently or momentarily, and then observing execution behavior. The analysis provides insight into the development of countermeasures that protect against the leakage or corruption of sensitive information, which might occur because of unexpected execution behavior. In this article, we develop a hardware-software co-design architecture that enables fast, configurable fault emulation and utilize it for information leakage and data corruption analysis. Modern system-on-chip FPGAs enable an evaluation platform in which control elements run on the processing system (PS) simultaneously with the target design running in the programmable logic (PL). Software components of the FI system introduce faults and report execution behavior. A pair of RISC-V FI-instrumented implementations are created and configured to execute the Advanced Encryption Standard and Twister algorithms. Key and plaintext information leakage and degraded pseudorandom sequences are both observed in the output for a subset of the emulated faults.
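As a toy illustration of the fault-injection idea (not the article's PS/PL platform; all functions and constants here are hypothetical stand-ins), the sketch below flips one bit of an intermediate "register" in a small computation and records which injected faults corrupt the output:

```python
def toy_datapath(x, fault_bit=None):
    """Toy register-transfer computation; optionally flip one bit of the
    intermediate 'register' value to emulate a transient fault."""
    reg = (x * 7 + 3) & 0xFF        # stand-in for a pipeline register
    if fault_bit is not None:
        reg ^= 1 << fault_bit       # transient single-bit fault
    return (reg ^ 0x5A) & 0xFF      # stand-in for a downstream stage

def fault_campaign(x, width=8):
    """Run the golden (fault-free) computation and one faulted run per bit
    position, recording which injected faults corrupt the output."""
    golden = toy_datapath(x)
    return {b: toy_datapath(x, b) != golden for b in range(width)}
```

A real campaign on the emulation platform would sweep fault locations and timings in the RISC-V core while the PS-side software logs which runs leak key or plaintext material or degrade the pseudorandom output.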
Distinguishing malicious intent from natural "organizational evolution" when explaining observed anomalies in operational workplace patterns suggests that evaluating collective behaviors observed in facilities can improve insider threat detection and mitigation (ITDM). Advances in artificial neural networks (ANNs) provide more robust pathways for capturing, analyzing, and collating disparate data signals into quantitative descriptions of operational workplace patterns. In response, a joint study by Sandia National Laboratories and the University of Texas at Austin explored the effectiveness of commercial ANN software for improving ITDM. This research demonstrates the benefit to ITDM of learning patterns of organizational behavior, detecting off-normal (or anomalous) deviations from these patterns, and alerting when certain types, frequencies, or quantities of deviations emerge. Evaluating nearly 33,000 access control data points and over 1,600 intrusion sensor data points collected over a nearly twelve-month period, the study demonstrated that the ANN could recognize operational patterns at the Nuclear Engineering Teaching Laboratory (NETL) and detect off-normal behaviors, suggesting that ANNs can support a data-analytic approach to ITDM. Several representative experiments were conducted to further evaluate these conclusions, and the resulting insights support collective behavior-based analytical approaches to quantitatively describing insider threat detection and mitigation.
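The commercial ANN used in the study is unspecified; as a deliberately simple statistical stand-in for the detect-off-normal idea (all names, data shapes, and thresholds hypothetical), one can learn a per-hour baseline from access-control counts and flag large deviations:

```python
import statistics

def learn_baseline(history):
    """history: dict mapping hour -> list of daily badge-in counts observed
    at that hour. Returns per-hour (mean, stdev) describing 'normal' patterns."""
    return {h: (statistics.mean(v), statistics.pstdev(v))
            for h, v in history.items()}

def flag_anomalies(baseline, day, k=3.0):
    """Flag hours of a new day whose count deviates more than k standard
    deviations from the learned pattern (stdev floored at 1 to avoid
    flagging noise around near-constant hours)."""
    return [h for h, count in day.items()
            if abs(count - baseline[h][0]) > k * max(baseline[h][1], 1.0)]
```

An ANN-based detector replaces the per-hour mean/stdev with a learned model of joint patterns across sensors, but the alerting logic, deviation from a learned normal, is the same in spirit.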