The purpose of this report is to document updates to the simulation of commercial vacuum drying procedures at the Nuclear Energy Work Complex at Sandia National Laboratories. Validation of the extent of water removal in a dry spent nuclear fuel storage system based on drying procedures used at nuclear power plants is needed to close existing technical gaps. Operational conditions leading to incomplete drying may affect the fuel, cladding, and other components in the system. A general lack of data suitable for model validation of commercial nuclear canister drying processes necessitates additional, well-designed investigations of drying process efficacy and water retention. Scaled tests that incorporate relevant physics and well-controlled boundary conditions are essential to provide insight and guidance to the simulation of prototypic systems undergoing drying processes. This report documents testing updates for the Dashpot Drying Apparatus (DDA), a reduced-scale apparatus with multiple Pressurized Water Reactor (PWR) fuel rod surrogates and a single guide tube dashpot. The apparatus is fashioned from a truncated 5×5 section of a prototypic 17×17 PWR fuel skeleton and includes the lowest segment of a single guide tube, often referred to as the dashpot region. The guide tube in this assembly is open and allows for insertion of a poison rod (neutron absorber) surrogate.
There is a growing interest in custom spatial accelerators for machine learning applications. These accelerators employ a spatial array of processing elements (PEs) interacting via custom buffer hierarchies and networks-on-chip. The efficiency of these accelerators comes from employing optimized dataflow (i.e., spatial/temporal partitioning of data across the PEs and fine-grained scheduling) strategies to optimize data reuse. The focus of this work is to evaluate these accelerator architectures using a tiled general matrix-matrix multiplication (GEMM) kernel. To do so, we develop a framework that finds optimized mappings (dataflow and tile sizes) for a tiled GEMM for a given spatial accelerator and workload combination, leveraging an analytical cost model for runtime and energy. Our evaluations over five spatial accelerators demonstrate that the tiled GEMM mappings systematically generated by our framework achieve high performance on various GEMM workloads and accelerators.
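To make the mapping search concrete, the following is a minimal sketch of an exhaustive tile-size search driven by a simplified data-movement cost model. The buffer size, candidate tile sizes, and cost formula are illustrative assumptions and do not reproduce the framework's analytical runtime/energy model or any particular accelerator's buffer hierarchy.

```python
# Minimal sketch: pick GEMM tile sizes that fit an assumed on-chip buffer
# while minimizing a rough estimate of off-chip data movement.
from itertools import product

def data_movement_bytes(M, N, K, tm, tn, tk, elem_bytes=4):
    """Rough off-chip traffic estimate for tiled C[M,N] += A[M,K] @ B[K,N]."""
    n_col_tiles = -(-N // tn)                      # ceil(N / tn)
    n_row_tiles = -(-M // tm)                      # ceil(M / tm)
    a_traffic = M * K * n_col_tiles * elem_bytes   # A reloaded once per column tile
    b_traffic = K * N * n_row_tiles * elem_bytes   # B reloaded once per row tile
    c_traffic = 2 * M * N * elem_bytes             # C read and written once
    return a_traffic + b_traffic + c_traffic

def best_mapping(M, N, K, buffer_bytes=256 * 1024, elem_bytes=4):
    """Exhaustive search over power-of-two tile sizes that fit the buffer."""
    candidates = [2 ** p for p in range(4, 10)]    # 16 .. 512
    best = None
    for tm, tn, tk in product(candidates, repeat=3):
        footprint = (tm * tk + tk * tn + tm * tn) * elem_bytes
        if footprint > buffer_bytes:
            continue                               # mapping does not fit on chip
        cost = data_movement_bytes(M, N, K, tm, tn, tk, elem_bytes)
        if best is None or cost < best[0]:
            best = (cost, (tm, tn, tk))
    return best

if __name__ == "__main__":
    cost, tiles = best_mapping(1024, 1024, 1024)
    print(f"tile sizes (tm, tn, tk) = {tiles}, estimated traffic = {cost / 1e6:.1f} MB")
```

A real mapper would also enumerate dataflow choices (which loop is spatialized across the PEs) and score candidates with the accelerator's full cost model; the sketch only shows the tile-size dimension of that search space.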
High-fidelity engineering simulations are often predictive but computationally expensive. The computational burden is usually mitigated through parallelism on high-performance computing (HPC) architectures. Optimization problems built on these applications are challenging because of the high cost of each high-fidelity simulation. In this paper, an asynchronous parallel constrained Bayesian optimization method is proposed to efficiently solve computationally expensive simulation-based optimization problems on HPC platforms under a budgeted computational resource, where the maximum number of simulations is a constant. The advantages of this method are three-fold. First, the efficiency of Bayesian optimization is improved: multiple input locations are evaluated in parallel and asynchronously to accelerate convergence with respect to physical runtime, and as soon as any evaluation finishes, another input is queried without waiting for the whole batch to complete. Second, the proposed method can handle both known and unknown constraints. Third, the proposed method samples several acquisition functions based on their rewards using a modified GP-Hedge scheme. The proposed framework is termed aphBO-2GP-3B, for asynchronous parallel hedge Bayesian optimization with two Gaussian processes and three batches. Its numerical performance is comprehensively benchmarked on 16 numerical examples, compared against six other parallel Bayesian optimization variants and one parallel Monte Carlo baseline, and demonstrated using two real-world, computationally expensive industrial applications. The first engineering application is based on finite element analysis (FEA) and the second on computational fluid dynamics (CFD) simulations.
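As an illustration of the asynchronous evaluation idea only, the sketch below refills each worker as soon as its evaluation completes rather than waiting for a batch barrier. It uses a single scikit-learn GP surrogate with a plain expected-improvement acquisition on an unconstrained toy objective; the constraint handling, two-GP structure, and GP-Hedge acquisition portfolio of aphBO-2GP-3B are not reproduced.

```python
# Asynchronous-parallel Bayesian optimization sketch: workers are refilled
# one at a time as evaluations finish (no batch synchronization barrier).
import numpy as np
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    """Stand-in for an expensive simulation (cheap analytic test function)."""
    return float(np.sum((x - 0.3) ** 2))

def propose(X_obs, y_obs, dim, rng):
    """Fit a GP to completed evaluations and pick the best EI candidate."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X_obs), np.array(y_obs))
    X_cand = rng.random((256, dim))
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (min(y_obs) - mu) / sigma
    ei = (min(y_obs) - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return X_cand[int(np.argmax(ei))]

def async_bo(dim=2, n_workers=4, budget=32, seed=0):
    rng = np.random.default_rng(seed)
    X_obs, y_obs = [], []
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # Seed the workers with random points.
        pending = {pool.submit(objective, x): x for x in rng.random((n_workers, dim))}
        evaluated = n_workers
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                X_obs.append(pending.pop(fut))
                y_obs.append(fut.result())
                # Immediately refill the freed worker; no batch barrier.
                if evaluated < budget:
                    x_next = propose(X_obs, y_obs, dim, rng)
                    pending[pool.submit(objective, x_next)] = x_next
                    evaluated += 1
    best = int(np.argmin(y_obs))
    return X_obs[best], y_obs[best]

if __name__ == "__main__":
    x_star, y_star = async_bo()
    print(f"best point {np.round(x_star, 3)} with objective {y_star:.4f}")
```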
Microfabricated surface ion traps are a principal component of many ion-based quantum information science platforms. The operational parameters of these devices are pushed to the edge of their physical capabilities as experiments strive for increased performance. When the applied radio-frequency (RF) voltage is increased excessively, the devices can experience damaging electric discharge events known as RF breakdown. We introduce two novel techniques for in situ detection of RF breakdown, which we implemented while characterizing the breakdown threshold of surface ion traps produced at Sandia National Laboratories. In these traps, breakdown did not always occur immediately after increasing the RF voltage, but often minutes or even hours later. This result is surprising in the context of the suggested mechanisms for RF breakdown in vacuum. Additionally, the extent of visible damage caused by breakdown events increased with the applied voltage. To minimize the probability of damage when RF power is first applied to a device, our results strongly suggest that the voltage should be ramped up over the course of several hours and monitored for breakdown.
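A hedged sketch of the suggested operating practice, ramping the RF amplitude in small steps and monitoring for breakdown during each dwell, is given below. The functions set_rf_voltage and breakdown_detected are hypothetical placeholders for the instrument control and the in situ detection techniques (which are not detailed here), and the dwell time is shortened so the example runs quickly.

```python
# Sketch of a slow RF voltage ramp with breakdown monitoring at each step.
import random
import time

def set_rf_voltage(volts):
    """Hypothetical stand-in for the RF drive control."""
    print(f"RF amplitude set to {volts:.1f} V")

def breakdown_detected():
    """Hypothetical stand-in for an in situ breakdown monitor."""
    return random.random() < 0.01        # rare random event for demonstration

def ramp_rf_voltage(v_target, v_start=10.0, step=2.0, dwell_s=0.1):
    """Step the RF amplitude toward v_target, dwelling after each step so that
    delayed breakdown events have a chance to appear; in practice the dwell
    would be minutes to hours rather than the short value used here."""
    v = v_start
    while v < v_target:
        set_rf_voltage(v)
        time.sleep(dwell_s)
        if breakdown_detected():
            set_rf_voltage(0.0)          # remove drive immediately on breakdown
            return "breakdown", v
        v = min(v + step, v_target)
    set_rf_voltage(v_target)
    return "ok", v_target

if __name__ == "__main__":
    print(ramp_rf_voltage(v_target=50.0))
```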
The nuclear accident consequence analysis code MACCS has traditionally modeled dispersion during downwind transport using a Gaussian plume segment model. MACCS is designed to estimate consequence measures such as air concentrations and ground depositions, radiological doses, and health and economic impacts on a statistical basis over the course of a year to produce annually averaged output measures. The objective of this work is to supplement the Gaussian atmospheric transport and diffusion (ATD) model currently in MACCS with a new option using the HYSPLIT model. HYSPLIT/MACCS coupling has been implemented, with HYSPLIT as an alternative ATD option. The subsequent calculations in MACCS use the HYSPLIT-generated air concentration and ground deposition values to calculate the same range of output quantities (dose, health effects, risks, etc.) that can be generated when using the MACCS Gaussian ATD model. The results of the verification test cases confirm the implementation of the HYSPLIT/MACCS coupling. This report contains technical details of the HYSPLIT/MACCS coupling and presents a benchmark analysis using the coupled system. The benchmark analysis runs specific scenarios and sensitivity studies that compare results from the traditional MACCS Gaussian plume segment model with the new, higher-fidelity HYSPLIT/MACCS modeling option, demonstrating the modeling results that can be obtained with this new option. The comparisons provided herein can also help decision-makers weigh the potential benefit of higher-fidelity modeling against the additional computational burden needed to perform the calculations. Three sensitivity studies were also performed to investigate the potential impact on consequence results of alternative modeling options regarding 1) the input meteorological data set, 2) the method used to estimate stability class, and 3) the plume dispersion model at larger distances. The results of these analyses are provided and discussed in this report.
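As a simplified illustration of the coupling pattern, the sketch below shows how gridded time-integrated air concentrations from an external ATD code could feed a downstream inhalation dose calculation. The file columns, breathing rate, and dose coefficients are illustrative assumptions; they are not the actual HYSPLIT output format or the MACCS dose models.

```python
# Illustration of an ATD-to-dose handoff: read per-cell time-integrated air
# concentrations and compute inhalation dose = TIC * breathing rate * DCF.
import csv

BREATHING_RATE_M3_PER_S = 2.66e-4                    # ~23 m^3/day, nominal adult
INHALATION_DCF_SV_PER_BQ = {"Cs-137": 4.6e-9,        # example dose coefficients
                            "I-131": 7.4e-9}

def inhalation_dose_sv(time_integrated_conc_bq_s_m3, nuclide):
    """Inhalation dose (Sv) from a time-integrated air concentration (Bq*s/m^3)."""
    return (time_integrated_conc_bq_s_m3
            * BREATHING_RATE_M3_PER_S
            * INHALATION_DCF_SV_PER_BQ[nuclide])

def doses_from_grid(csv_path):
    """Read assumed columns (lat, lon, nuclide, tic_bq_s_m3) from the ATD
    output and return per-grid-cell inhalation doses."""
    doses = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            dose = inhalation_dose_sv(float(row["tic_bq_s_m3"]), row["nuclide"])
            doses.append((float(row["lat"]), float(row["lon"]), row["nuclide"], dose))
    return doses

if __name__ == "__main__":
    # Example: 1e6 Bq*s/m^3 of Cs-137 at a single location.
    print(f"{inhalation_dose_sv(1e6, 'Cs-137') * 1e3:.3f} mSv")
```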
Wind energy can provide renewable, sustainable electricity to rural Native homes, schools, and businesses. It can even provide tribes with a source of income and economic development. The purpose of this research is to determine the potential for deploying community- and utility-scale wind technologies on Turtle Mountain Band of Chippewa tribal lands. Ideal areas for wind technology development were investigated based on wind resources, terrain, land usage, and other factors, using tools such as the National Renewable Energy Laboratory Wind Prospector and consultation with tribal members and experts in the field. The result was a preliminary assessment of wind energy potential on Turtle Mountain lands, which can be used to justify further investigation and investment into determining the feasibility of future wind technology projects.
Zinc oxide is of great interest for advanced energy devices because of its low cost, wide direct bandgap, non-toxicity, and facile electrochemistry. In zinc alkaline batteries, ZnO plays a critical role in electrode passivation, a process that hinders commercialization and remains poorly understood. Here, novel observations of an electroactive type of ZnO formed in Zn-metal alkaline electrodes are disclosed. The electrical conductivity of battery-formed ZnO is measured and found to vary by factors of up to 10⁴, which provides a first-principles-based understanding of Zn passivation in industrial alkaline batteries. Simultaneous with this conductivity change, protons are inserted into the crystal structure and electrons are inserted into the conduction band in quantities up to ≈10²⁰ cm⁻³ and ≈1 mAh g⁻¹ of ZnO. Electron insertion causes blue electrochromic coloration with efficiencies and rates competitive with leading electrochromic materials. The electroactivity of ZnO is evidently enabled by rapid crystal growth, which forms defects that complex with inserted cations, charge-balanced by the increase of conduction band electrons. This property distinguishes electroactive ZnO from inactive classical ZnO. Knowledge of this phenomenon is applied to improve cycling performance of industrial-design electrodes at 50% zinc utilization, and the authors propose other uses for ZnO such as electrochromic devices.
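For scale, a back-of-the-envelope Drude estimate relates the reported inserted electron density to conductivity. The effective mobility used below is an assumed value for illustration only (defective, electrodeposited ZnO can sit far below the single-crystal value), not a measurement from the study.

```python
# Back-of-the-envelope check of how ~1e20 cm^-3 inserted electrons translate
# to conductivity via the Drude relation sigma = n * e * mu.
E_CHARGE = 1.602e-19          # elementary charge, C
n_cm3 = 1e20                  # inserted electron density from the abstract, cm^-3
mobility_cm2_Vs = 1.0         # assumed effective mobility, cm^2/(V*s)

sigma_S_per_cm = n_cm3 * E_CHARGE * mobility_cm2_Vs
print(f"sigma ~ {sigma_S_per_cm:.1f} S/cm")   # ~16 S/cm under these assumptions
```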
High-speed, optical imaging diagnostics are presented for three-dimensional (3D) quantification of explosively driven metal fragmentation. At early times after detonation, Digital Image Correlation (DIC) provides non-contact measures of 3D case velocities, strains, and strain rates, while a proposed stereo imaging configuration quantifies in-flight fragment masses and velocities at later times. Experiments are performed using commercially obtained RP-80 detonators from Teledyne RISI, which are shown to create a reproducible fragment field at the benchtop scale. DIC measurements are compared with 3D simulations, which have been ‘leveled’ to match the spatial resolution of DIC. Results demonstrate improved ability to identify predicted quantities-of-interest that fall outside of measurement uncertainty and shot-to-shot variability. Similarly, video measures of fragment trajectories and masses allow rapid experimental repetition and provide correlated fragment size-velocity measurements. Measured and simulated fragment mass distributions are shown to agree within confidence bounds, while some statistically meaningful differences are observed between the measured and predicted conditionally averaged fragment velocities. Together these techniques demonstrate new opportunities to improve future model validation.
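As a simple illustration of turning tracked fragment positions from calibrated high-speed video into velocities, the sketch below fits each coordinate linearly in time for a single fragment track. It is not the paper's analysis pipeline; the frame rate and track in the example are synthetic.

```python
# Estimate a fragment's 3D velocity from a tracked position history by a
# least-squares linear fit of each coordinate versus time.
import numpy as np

def fragment_velocity(times_s, positions_m):
    """Return the fitted 3D velocity vector (m/s) for one fragment track."""
    t = np.asarray(times_s)
    p = np.asarray(positions_m)                 # shape (n_frames, 3)
    # Degree-1 polyfit: the slope of each coordinate vs. time is the velocity.
    return np.array([np.polyfit(t, p[:, k], 1)[0] for k in range(3)])

if __name__ == "__main__":
    # Synthetic track: 100 kHz frame rate, ~1500 m/s along x.
    t = np.arange(10) * 1e-5
    pos = np.column_stack([1500 * t, 5 * t, np.zeros_like(t)])
    print(fragment_velocity(t, pos))            # ~[1500, 5, 0]
```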