This report documents the results of a long-term (5.79-year) exposure of 4-point bend corrosion test samples in the inlet and outlet vents of four spent nuclear fuel dry storage systems at the Maine Yankee Independent Spent Fuel Storage Installation. The goal of the test was to evaluate the corrosiveness of salt aerosols in a realistic near-marine environment, providing a data set for improved understanding of stress corrosion cracking (SCC) of spent nuclear fuel dry storage canisters. Examination of the samples after extraction showed that minor corrosion was present, mostly on rough-ground surfaces; however, dye penetrant testing showed that no SCC cracks were present. Dust collected on coupons co-located with the corrosion specimens was analyzed by scanning electron microscopy and leached to determine the soluble salts present. The dust was mostly organic material (pollen and stellate trichomes), with lesser amounts of detrital mineral grains. The salts present were a mix of sea salts and continental salts, with chloride dominating the anions, although significant amounts of nitrate were also present. Both the corrosion samples and the dust samples showed evidence of wetting, indicating entry of water into the vents. The results of this field test suggest that the environment at Maine Yankee is not highly aggressive, although extrapolation from the periodically wetted vent samples to the hot, dry canister surface may be difficult. No stress corrosion cracks were observed, but minor corrosion was present despite high nitrate concentrations in the salts. These observations may help address the ongoing question of the importance of nitrate in suppressing corrosion and SCC.
Sandia National Laboratories (SNL) performed a high-altitude nuclear electromagnetic pulse (HEMP) vulnerability test campaign on critical generating station components, with a focus on high-frequency, conducted early-time (E1) HEMP, for the Department of Energy (DOE) Office of Cybersecurity, Energy Security, and Emergency Response (CESER). This report provides vulnerability test results on component response and damage thresholds at realistic HEMP threat levels, which will help inform site vulnerability assessments, mitigation planning, and model calibration. This work details testing of North American Electric (NAE) magnetic motor starters to determine the effects of conducted HEMP environments. Motor starters are the control elements that provide power to motors throughout a generating plant; a starter going offline would cause loss of power to critical pumps and compressors, which could lead to component damage or unplanned plant outages. Additionally, failed starters would be unable to support plant startup. Six industrial motor starters were tested: two 2 horsepower (HP) starters with breaker disconnects and typical protection equipment, two 20 HP starters with breaker disconnects, and two 20 HP starters with fused disconnects. Each starter was placed in a circuit with a generator and an induction motor matching the starter rating. The conducted HEMP insult was injected on the power cables passing through the motor starter, with separate tests for the generator and motor sides of the starter.
The On-Line Waste Library is a website that contains information regarding United States Department of Energy-managed high-level waste, spent nuclear fuel, and other wastes that are likely candidates for deep geologic disposal, with links to supporting documents for the data. This report provides supporting information for those data for which no previously published source was available.
The final quality of any AI/ML system is directly related to the quality of the input data used to train it. In this case, we are trying to build a reliable image classifier that can correctly identify electrical components in x-ray images. Classification confidence is directly related to the quality of the labels in the training data used to develop the classifier. Incorrect or incomplete labels can substantially hinder the performance of the system during training, as it tries to compensate for variations that should not exist. Image labels are entered by subject matter experts and in general can be assumed to be correct. However, this is not guaranteed, so developing ways to measure label quality and to identify or reject bad labels is important, especially as the database continues to grow. Given the current size of the database, a full manual review of each component is not feasible. This report highlights the current state of the “RECON” x-ray image database and summarizes several recent developments intended to help ensure high-quality labeling both now and in the future. Questions that we hope to answer with this development include: 1) Are there any components with incorrect labels? 2) Can we suggest labels for components that are marked “Unknown”? 3) What overall confidence do we have in the quality of the existing labels? 4) What systems or procedures can we put in place to maximize label quality?
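As a sketch of how such a label screen might work (a generic approach, not necessarily the RECON project's actual procedure), one can compare each expert label against confident out-of-fold predictions from a classifier trained on the remaining data. The function name `flag_suspect_labels` and the choice of RandomForestClassifier are illustrative assumptions; the scikit-learn calls are standard:

```python
# Illustrative label-quality screen (not the RECON pipeline): flag components
# whose recorded label disagrees with a confident cross-validated prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def flag_suspect_labels(X, y, threshold=0.9):
    """Return indices of samples whose label conflicts with a confident
    out-of-fold prediction. Assumes y holds integer class indices 0..K-1."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    # Out-of-fold probabilities avoid scoring a sample with a model that
    # saw its (possibly wrong) label during training.
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    predicted = np.argmax(proba, axis=1)
    confident = np.max(proba, axis=1) >= threshold
    return np.where(confident & (predicted != y))[0]
```

Flagged indices would then be routed back to subject matter experts for review rather than relabeled automatically.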
Quantifying the radioactive sources present in gamma spectra is an ever-present and growing national security mission and a time-consuming process for human analysts. While machine learning models exist that are trained to estimate radioisotope proportions in gamma spectra, few address the eventual need to provide explanatory outputs beyond the estimation task. In this work, we develop two machine learning models for NaI detector measurements: one to perform the estimation task, and the other to characterize the first model’s ability to provide reasonable estimates. To ensure the first model exhibits behavior that can be characterized by the second, the first model is trained using a custom, semi-supervised loss function that constrains proportion estimates to be explainable in terms of a spectral reconstruction. The second, auxiliary model is an out-of-distribution detection function (a type of meta-model) that leverages the proportion estimates of the first model to identify when a spectrum differs sufficiently from the training domain to be out of scope for the model. In demonstrating the efficacy of this approach, we encourage the use of meta-models to better explain ML outputs used in radiation detection and to increase trust.
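As an illustration of the reconstruction constraint described above (a minimal sketch assuming a fixed matrix of per-source spectral templates; the report's actual loss function may differ), such a loss can combine a supervised proportion-error term with an unsupervised penalty on how well the estimated proportions rebuild the measured spectrum:

```python
# Minimal sketch of a reconstruction-constrained proportion loss. `templates`
# is assumed to hold one normalized spectrum per source (shape: sources x bins).
import numpy as np

def semisupervised_loss(spectrum, proportions, templates,
                        true_proportions=None, beta=1.0):
    """Supervised proportion error (when labels exist) plus an unsupervised
    penalty tying the estimates to a spectral reconstruction."""
    # Reconstruct the measurement as a proportion-weighted sum of templates.
    reconstruction = proportions @ templates
    recon_err = np.mean((spectrum - reconstruction) ** 2)
    if true_proportions is None:  # unlabeled spectrum: reconstruction only
        return beta * recon_err
    sup_err = np.mean((proportions - true_proportions) ** 2)
    return sup_err + beta * recon_err
```

The reconstruction term is what makes the estimates "explainable": a downstream meta-model can judge whether a weighted sum of known templates plausibly accounts for the observed spectrum.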
The U.S. Department of Energy (DOE) Office of Cybersecurity, Energy Security, and Emergency Response (CESER) and Office of Electricity (OE) commissioned the National Renewable Energy Laboratory (NREL) to develop a method and tool to enable electric utilities to understand and manage the risk of cybersecurity events that can lead to physical effects such as blackouts. This tool, called Cyber100 Compass, uses data elicited from cybersecurity experts and incorporates it into a tool designed to be usable by cybersecurity non-experts who understand the power system itself. The tool estimates dollar-valued risks for a current or postulated future electric power digital control configuration, enabling utility risk planners to prioritize among proposed cybersecurity risk mitigation options. With the development of the Cyber100 Compass tool for quantification of future cyber-physical security risks, NREL has taken a bold initial step toward enabling, and indeed encouraging, electric utilities to address the potential for cybersecurity incidents to produce detrimental physical effects on electric power delivery. As part of the Cyber100 Compass development process, DOE funded NREL to seek an independent technical review of the risk methodology embodied in the tool. NREL requested this review from Sandia National Laboratories and made available to Sandia a near-final version of the project report, as well as NREL personnel to provide clarification and respond to questions. This paper provides the results of that independent review.
Derivative computation is a key component of optimization, sensitivity analysis, uncertainty quantification, and the solution of nonlinear problems. Automatic differentiation (AD) is a powerful technique for evaluating such derivatives and, in recent years, has been integrated into programming environments such as JAX, PyTorch, and TensorFlow to support the derivative computations needed for training machine learning models, facilitating widespread use of these technologies. The C++ language has become the de facto standard for scientific computing due to numerous factors, yet language complexity has made widespread adoption of AD technologies for C++ difficult, hampering the incorporation of powerful differentiable programming approaches into C++ scientific simulations. This is exacerbated by the increasing emergence of architectures, such as GPUs, with limited memory and a need for massive thread-level concurrency. C++ AD tools must use these environments effectively to bring novel scientific simulations to next-generation DOE experimental and observational facilities. In this project, we investigated source transformation-based automatic differentiation using the LLVM compiler infrastructure to automatically generate portable and efficient gradient computations for Kokkos-based code. We demonstrated the feasibility of our proposed strategy by using a prototype LLVM-based source transformation tool to generate gradients of simple functions composed of sequences of simple Kokkos parallel regions. Speedups of up to 500x compared to Sacado were observed on an NVIDIA V100 GPU.
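To make concrete what the gradient of a "simple Kokkos parallel region" looks like, consider the textbook reverse-mode rule for a dot-product reduction (illustrative background, not the tool's actual output):

\[
  y = \sum_{i=1}^{n} a_i b_i
  \qquad\Longrightarrow\qquad
  \bar{a}_i = \bar{y}\, b_i, \quad \bar{b}_i = \bar{y}\, a_i ,
\]

so the generated adjoint code is itself an embarrassingly parallel loop over \(i\), which is what allows the transformed gradient code to exploit GPU thread-level concurrency.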
We can improve network pruning by leveraging the loss-topography extraction techniques used by projective integral updates for variational inference. Low-variance Hessians facilitate more aggressive pruning by providing better loss approximations when a parameter is removed.
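The connection between Hessian quality and pruning aggressiveness follows from the standard second-order loss expansion (as in classical optimal-brain-damage pruning; shown here as background, not necessarily this work's exact formulation). Setting parameter \(w_i\) to zero changes the loss by approximately

\[
  \Delta\mathcal{L}_i \;\approx\; -g_i w_i + \tfrac{1}{2} H_{ii} w_i^2 ,
\]

where \(g_i\) and \(H_{ii}\) are the corresponding gradient and diagonal Hessian entries; near a minimum \(g_i \approx 0\), so the saliency reduces to \(\tfrac{1}{2} H_{ii} w_i^2\). Lower-variance estimates of \(H_{ii}\) make this approximation more trustworthy, so more parameters can be removed without hedging against estimation noise.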
As machine learning models for radioisotope quantification become more powerful, the need for high-quality synthetic training data grows as well. For problem spaces that involve estimating the relative isotopic proportions of various sources in gamma spectra, it is necessary to generate training data that accurately represents the variance of proportions encountered. In this report, we aim to provide guidance on how to target a desired variance for the proportions that are randomly generated when using the PyRIID Seed Mixer, which samples from a Dirichlet distribution. We provide a method for properly parameterizing the Dirichlet distribution in order to maintain a constant variance across an arbitrary number of dimensions, where each dimension represents a distinct source template being mixed. We demonstrate that our method successfully parameterizes the Dirichlet distribution to target a specific variance of proportions, provided that several conditions are met. This allows us to follow a principled technique for controlling how random mixture proportions are generated, which are then used downstream in the synthesis process to produce the final, noisy gamma spectra.
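For a symmetric Dirichlet with concentration \(\alpha\) over \(k\) source templates, the per-component variance is \(\mathrm{Var}(p_i) = (k-1)/(k^2(k\alpha+1))\), which can be inverted for \(\alpha\) whenever the target variance is below the \(\alpha \to 0\) limit of \((k-1)/k^2\) (one such condition on feasibility). The sketch below illustrates this inversion with NumPy's Dirichlet sampler; the function name is ours, and the report's exact parameterization may differ:

```python
# Sketch: choose a symmetric Dirichlet concentration that yields a target
# per-component variance across k mixed sources. For Dirichlet(alpha, ..., alpha),
#   Var(p_i) = (k - 1) / (k**2 * (k*alpha + 1)),
# which is solvable for alpha whenever target_var < (k - 1) / k**2.
import numpy as np

def concentration_for_variance(k, target_var):
    max_var = (k - 1) / k**2  # variance limit as alpha -> 0
    if not 0 < target_var < max_var:
        raise ValueError(f"target_var must lie in (0, {max_var:.4g})")
    return ((k - 1) / (k**2 * target_var) - 1) / k

# Empirical check: with k = 5 and target 0.02, alpha = 1.4.
alpha = concentration_for_variance(k=5, target_var=0.02)
samples = np.random.default_rng(0).dirichlet([alpha] * 5, size=100_000)
print(alpha, samples.var(axis=0))  # per-component variances near 0.02
```

Because the formula depends only on \(k\) and the target variance, the same target can be held constant as the number of mixed source templates changes.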
Commercial nuclear power plants typically use nuclear fuel that is enriched to less than five weight percent in the isotope 235U. However, several vendors have recently proposed new nuclear power plant designs that would use fuel with 235U enrichments between five and 19.75 weight percent. Nuclear fuel with this level of 235U enrichment is known as “high assay low-enriched uranium.” Once it has been irradiated in a nuclear reactor and becomes used (or spent) nuclear fuel, it will be stored, transported, and disposed of. However, irradiated high assay low-enriched uranium differs from typical irradiated nuclear fuel in several ways, and these differences may have economic effects on its storage, transport, and disposal. This report describes those differences and qualitatively discusses their potential economic effects on storage, transport, and disposal.