This paper applies sensitivity and uncertainty analysis to compare two alternative fuel matrix degradation models within a performance assessment of a generic crystalline repository. The results show that this model choice has little effect on uncertainty in the peak 129I concentration. The small impact of the choice is likely due to the higher importance of uncertainty in the instantaneous release fraction and to differences in epistemic uncertainty between the alternatives.
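As a minimal illustration of the kind of sampling-based sensitivity measure used in such comparisons (all variable names, distributions, and the stand-in response below are hypothetical and not the study's inputs or results), rank correlations between sampled parameters and the peak concentration can be computed as follows:

# Hypothetical sketch: rank-based importance of sampled inputs to a peak I-129 output.
# Variable names (e.g., "instant_release_fraction") are illustrative, not from the study.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 1000
inputs = {
    "instant_release_fraction": rng.uniform(0.01, 0.1, n),
    "fmd_rate_multiplier": rng.lognormal(0.0, 1.0, n),
    "buffer_porosity": rng.uniform(0.3, 0.5, n),
}
# Stand-in response: in practice this would come from repository simulations.
peak_i129 = (inputs["instant_release_fraction"]
             + 0.1 * np.log(inputs["fmd_rate_multiplier"])
             + rng.normal(0, 0.01, n))

for name, x in inputs.items():
    rho, p = spearmanr(x, peak_i129)
    print(f"{name:>26s}: Spearman rho = {rho:+.2f} (p = {p:.1e})")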
The National Academies of Sciences, Engineering, and Medicine (NASEM) defines reproducibility as 'obtaining consistent computational results using the same input data, computational steps, methods, code, and conditions of analysis,' and replicability as 'obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data' [1]. With the increasing number of applications of artificial intelligence and machine learning (AI/ML) to fields such as healthcare and digital medicine, there is a growing need for verifiable AI/ML results, and therefore for reproducible research and replicable experiments. This paper establishes examples of irreproducible AI/ML applications in the medical sciences and quantifies the variance of common AI/ML models (artificial neural network, naive Bayes, and random forest classifiers) for tasks on medical data sets.
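A minimal sketch of how such run-to-run variance can be quantified, using a public scikit-learn dataset purely as a stand-in for the medical data sets and default model settings that may differ from the paper's:

# Illustrative sketch (not the paper's code): quantify seed-to-seed variance of three
# classifier families on a public dataset used here only as a stand-in.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "ANN": lambda s: MLPClassifier(max_iter=1000, random_state=s),
    "NaiveBayes": lambda s: GaussianNB(),  # deterministic; variance comes from the split
    "RandomForest": lambda s: RandomForestClassifier(random_state=s),
}
for name, make in models.items():
    scores = []
    for seed in range(20):  # repeat with different splits and initializations
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
        scores.append(make(seed).fit(Xtr, ytr).score(Xte, yte))
    print(f"{name:>12s}: mean accuracy = {np.mean(scores):.3f}, std = {np.std(scores):.3f}")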
Integrating recent advancements in resilient algorithms and techniques into existing codes is a singular challenge in fault tolerance - in part due to the underlying complexity of implementing resilience in the first place, but also due to the difficulty of integrating the functionality of a standalone new strategy with the preexisting resilience layers of an application. We propose that the answer is not to build integrated solutions for users, but to build runtimes designed to integrate into a larger, comprehensive resilience system and thereby enable the necessary jump to multi-layered recovery. Our work designs, implements, and verifies one such comprehensive system of runtimes. Utilizing Fenix, a process resilience tool designed from the outset to integrate with preexisting resilience systems, we update Kokkos Resilience and the use pattern of VeloC to support application-level integration of resilience runtimes. Our work shows that designing integrable systems, rather than integrated systems, allows for user-designed optimization and upgrading of resilience techniques while maintaining the simplicity and performance of all-in-one resilience solutions. More application-specific choice in resilience strategies allows for better long-term flexibility, performance, and - importantly - simplicity.
The Ghareb Formation in the Yasmin Plain of Israel is under investigation as a potential host rock for nuclear waste disposal. Triaxial deformation tests and hydrostatic water-permeability tests were conducted with samples of the Ghareb to assess relevant thermal, hydrological, and mechanical properties. Axial deformation tests were performed on dry and water-saturated samples at effective pressures ranging from 0.7 to 19.6 MPa and temperatures of 23 ℃ and 100 ℃, while permeability tests were conducted at ambient temperature and effective pressures ranging from 0.7 to 20 MPa. In the triaxial tests, strength and elastic moduli increase with increasing effective pressure. Samples tested dry at room temperature are generally the strongest, while the samples deformed at 100 ℃ exhibit large permanent compaction even at low effective pressures. Water permeability decreases by 1-2 orders of magnitude under hydrostatic conditions as the samples experience permanent volume loss of 4-5%. The permeability loss is retained after unloading, a result of the permanent compaction. A 3-D compaction model was used to demonstrate that compaction in one direction is associated with de-compaction in the orthogonal directions. The model accurately reproduces the measured axial and transverse strain components. The experimentally constrained deformational properties of the Ghareb will be used for 3-D thermal-hydrological-mechanical modelling of borehole stability.
There is a need for offline anomaly detection in count data streams that simultaneously identifies both systemic changes and outliers. We propose a new algorithmic method, called the Anomaly Detection Pipeline, which leverages common statistical process control procedures in a novel way to accomplish this. The proposed method does not require user-defined control limits or phase I training data; it automatically identifies regions of stability for improved parameter estimation to support change point detection. The method does not require data to be normally distributed, and it detects outliers relative to the regimes in which they occur. Our proposed method performs comparably to state-of-the-art change point detection methods, provides additional capabilities, and is extensible to a larger set of possible data streams than known methods.
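As a rough illustration of the statistical process control building blocks such a pipeline leverages (this c-chart style check is not the authors' algorithm; the trailing window size and 3-sigma limits are assumptions):

# Minimal illustration of SPC building blocks for count data (c-chart style), not the
# Anomaly Detection Pipeline itself; thresholds and window sizes are assumptions.
import numpy as np

def c_chart_flags(counts, window=50):
    """Flag points outside 3-sigma Poisson limits estimated from a trailing window."""
    counts = np.asarray(counts, dtype=float)
    flags = np.zeros(len(counts), dtype=bool)
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        c_bar = baseline.mean()                  # Poisson mean estimate
        ucl = c_bar + 3.0 * np.sqrt(c_bar)       # upper control limit
        lcl = max(c_bar - 3.0 * np.sqrt(c_bar), 0.0)
        flags[i] = counts[i] > ucl or counts[i] < lcl
    return flags

rng = np.random.default_rng(1)
stream = np.concatenate([rng.poisson(5, 200), rng.poisson(12, 200)])  # systemic shift at t=200
stream[100] = 30                                                      # isolated outlier
print(np.flatnonzero(c_chart_flags(stream))[:10])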
Modern-day processes depend heavily on data-driven techniques that use large datasets, clustered into relevant groups, to achieve higher efficiency, better utilization of operations, and improved decision making. However, building these datasets and clustering by similar products is challenging in research environments that produce many novel and highly complex low-volume technologies. In this work, the author develops an algorithm that calculates the similarity between multiple low-volume products from a research environment using a real-world data set. The algorithm is applied to operations data from a pulsed power facility that routinely performs novel experiments for inertial confinement fusion, radiation effects, and nuclear stockpile stewardship. The author shows that the algorithm successfully calculates similarity between experiments of varying complexity such that comparable shots can be used for further analysis. Furthermore, it has been able to identify experiments not traditionally seen as identical.
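A hypothetical sketch of one way to compute a Gower-style similarity over mixed numeric and categorical experiment attributes; the field names and values below are illustrative and not taken from the actual data set:

# Hypothetical sketch of a Gower-style similarity between experiments described by mixed
# numeric and categorical attributes; column names are illustrative, not from the dataset.
import pandas as pd
import numpy as np

shots = pd.DataFrame({
    "charge_voltage_kV": [80, 85, 60, 82],
    "load_type": ["wire_array", "wire_array", "flyer_plate", "wire_array"],
    "num_modules": [36, 36, 24, 34],
})

def gower_similarity(df):
    n = len(df)
    sim = np.zeros((n, n))
    for col in df.columns:
        x = df[col]
        if np.issubdtype(x.dtype, np.number):
            span = x.max() - x.min() or 1.0      # range-normalized numeric distance
            part = 1.0 - np.abs(x.values[:, None] - x.values[None, :]) / span
        else:
            part = (x.values[:, None] == x.values[None, :]).astype(float)  # exact match
        sim += part
    return sim / df.shape[1]

print(np.round(gower_similarity(shots), 2))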
This paper presents a visualization technique for incorporating eigenvector estimates with geospatial data to create inter-area mode shape maps. For each point of measurement, the method specifies the radius, color, and angular orientation of a circular map marker. These characteristics are determined by the elements of the right eigenvector corresponding to the mode of interest. The markers are then overlaid on a map of the system to create a physically intuitive visualization of the mode shape. This technique serves as a valuable tool for differentiating oscillatory modes that have similar frequencies but different shapes. This work was conducted within the Western Interconnection Modes Review Group (WIMRG) under the Western Electricity Coordinating Council (WECC). For testing, we employ the WECC 2021 Heavy Summer base case, which features a high-fidelity, industry-standard dynamic model of the North American Western Interconnection. Mode estimates are produced via eigen-decomposition of a reduced-order state matrix identified from simulated ringdown data. The results provide improved physical intuition about the spatial characteristics of the inter-area modes. In addition to offline applications, this visualization technique could also enhance situational awareness for system operators when paired with online mode shape estimates.
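A minimal sketch of the marker construction, assuming hypothetical site coordinates and mode-shape entries (this is not the WIMRG implementation): marker size follows the eigenvector magnitude, while color and an arrow follow its phase.

# Illustrative sketch: markers whose size, color, and orientation encode the magnitude
# and phase of right-eigenvector elements at each measurement site.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sites: (longitude, latitude) and complex mode-shape entries.
lon = np.array([-122.3, -118.2, -112.0, -105.0])
lat = np.array([47.6, 34.1, 40.8, 39.7])
shape = np.array([0.9 * np.exp(1j * 0.1), 0.7 * np.exp(1j * 0.3),
                  0.5 * np.exp(1j * 2.9), 0.8 * np.exp(1j * 3.1)])

mag, ang = np.abs(shape), np.angle(shape)
fig, ax = plt.subplots()
ax.scatter(lon, lat, s=2000 * mag, c=np.cos(ang), cmap="coolwarm",
           alpha=0.6, edgecolors="k")                       # radius from |v|, color from phase
ax.quiver(lon, lat, np.cos(ang), np.sin(ang), angles="xy")  # angular orientation from arg(v)
ax.set_xlabel("Longitude"); ax.set_ylabel("Latitude")
plt.show()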
To evaluate the time evolution of avalanche breakdown in wide and ultra-wide bandgap devices, we have developed a cable pulser experimental setup that measures the terminating impedance of a semiconductor device with a time resolution of 130 ps. We have utilized this pulser setup to evaluate the time-to-breakdown of vertical Gallium Nitride and Silicon Carbide diodes for possible use as protection elements in the electrical grid against fast transient voltage pulses (such as those induced by an electromagnetic pulse event). We found that the Gallium Nitride device demonstrated faster dynamics than the Silicon Carbide device, achieving 90% conduction within 1.37 ns compared to the SiC device response time of 2.98 ns. While the Gallium Nitride device did not demonstrate significant dependence of breakdown time on applied voltage, the Silicon Carbide device breakdown time was strongly dependent on applied voltage, ranging from 2.97 ns at 1.33 kV to 0.78 ns at 2.6 kV. The fast response time (< 5 ns) of both the Gallium Nitride and Silicon Carbide devices indicates that both material systems could meet the stringent response time requirements and may be appropriate for implementation as protection elements against electromagnetic pulse transients.
Software sustainability is critical for Computational Science and Engineering (CSE) software. Measuring sustainability is challenging because it comprises many attributes. One factor that impacts software sustainability is the complexity of the source code. This paper introduces an approach for utilizing complexity data, with a focus on complexity hotspots and changes in complexity, to assist developers in performing code reviews and to inform project teams about longer-term changes in sustainability and maintainability from the perspective of cyclomatic complexity. We present an analysis of data associated with four real-world pull requests to demonstrate how the metrics may help guide and inform the code review process and how the data can be used to measure changes in complexity over time.
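A minimal sketch of complexity-hotspot reporting for a pull request, using the radon package as a stand-in analyzer; the threshold and the base-versus-head comparison are assumptions rather than the paper's tooling:

# Minimal sketch: per-function cyclomatic complexity before and after a change,
# flagging hotspots above an assumed threshold.
from radon.complexity import cc_visit

def complexity_report(old_source: str, new_source: str, threshold: int = 10):
    """Compare per-function cyclomatic complexity between two versions of a file."""
    old = {b.name: b.complexity for b in cc_visit(old_source)}
    new = {b.name: b.complexity for b in cc_visit(new_source)}
    for name, cc in sorted(new.items(), key=lambda kv: -kv[1]):
        delta = cc - old.get(name, 0)
        tag = "HOTSPOT" if cc >= threshold else ""
        print(f"{name:30s} CC={cc:3d} (change {delta:+d}) {tag}")

# Usage: feed the file contents at the base and head of a pull request, e.g.
# complexity_report(open("base/module.py").read(), open("head/module.py").read())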
It is impossible in practice to comprehensively test even small software programs due to the vastness of the reachable state space; however, modern cyber-physical systems such as aircraft require a high degree of confidence in software safety and reliability. Here we explore methods of generating test sets to effectively and efficiently explore the state space for a module based on the Traffic Collision Avoidance System (TCAS) used on commercial aircraft. A formal model of TCAS in the model-checking language NuSMV provides an output oracle. We compare test sets generated using various methods, including covering arrays, random input generation, and a low-complexity input paradigm, applied to 28 versions of the TCAS C program containing seeded errors. Faults are triggered by tests for all 28 programs using a combination of covering arrays and random input generation. Complexity-based inputs perform more efficiently than covering arrays and can be paired with random input generation to create efficient and effective test sets. A random forest classifier identifies variable values that can be targeted to generate tests even more efficiently; in future work, a machine-learned fuzzing algorithm will be combined with more complex model oracles developed in model-based systems engineering (MBSE) software.
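A hedged sketch of the classifier step, using synthetic stand-in test vectors and fault labels rather than the TCAS data: a random forest is trained on test inputs labeled by whether they triggered a fault, and its feature importances rank the input variables worth targeting.

# Hedged sketch: learn which input variables are most associated with fault-triggering
# tests; the input encoding, labels, and variable count here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_tests, n_vars = 5000, 12
X = rng.integers(0, 4, size=(n_tests, n_vars))   # discretized input levels per test
# Synthetic oracle: pretend a fault fires when two particular variables interact.
y = ((X[:, 2] == 3) & (X[:, 7] == 0)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(clf.feature_importances_)[::-1]
print("variables most predictive of fault triggering:", ranked[:4])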
The detonation of explosives produces luminous fireballs that often contain particulates such as carbon soot or remnants of partially reacted explosives. The spatial distribution of these particulates is of great interest for the derivation and validation of models. In this work, three ultra-high-speed imaging techniques (diffuse back-illumination extinction, schlieren, and emission imaging) are utilized to investigate the particulate quantity, spatial distribution, and structure in a small-scale fireball. The measurements show the evolution of the particulate cloud in the fireball, identifying possible emission sources and regions of high optical thickness. Extinction measurements performed at two wavelengths show that extinction follows the inverse-wavelength behavior expected of absorptive particles in the Rayleigh scattering regime. The mass estimated from these extinction measurements corresponds to an average soot yield consistent with previous soot collection experiments. The imaging diagnostics discussed in the current work can provide detailed information on the spatial distribution and concentration of soot, which is crucial for future model validation opportunities.
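As general background on how such two-wavelength extinction data are typically reduced (the symbols and the dimensionless extinction coefficient K_e are generic reference quantities, not necessarily those used in this work), a Beer-Lambert relation in the Rayleigh absorption limit gives

\[
  \tau_\lambda \;=\; -\ln\!\left(\frac{I}{I_0}\right) \;=\; \frac{K_e\, f_v\, L}{\lambda}
  \qquad\Longrightarrow\qquad
  f_v \;=\; -\,\frac{\lambda\,\ln(I/I_0)}{K_e\, L},
\]

where f_v is the soot volume fraction, L is the path length, and I/I_0 is the measured transmittance; the 1/λ scaling of τ_λ is the inverse-wavelength behavior noted above.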
This chapter deals with experimental dynamic substructures, which are reduced-order models that can be coupled with each other or with finite element derived substructures to estimate the system response of the coupled assembly. A unifying theoretical framework in the physical, modal, or frequency domain is reviewed with examples. The major issues that have hindered experimentally derived substructures are addressed. An example using the transmission simulator method, which overcomes the major historical difficulties, is demonstrated. Guidelines for transmission simulator design are presented.
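As a reference point for the frequency-domain branch of that framework (standard dual-assembly Lagrange-multiplier notation, not necessarily the exact form used in the chapter), the coupled admittance of substructures with uncoupled FRF matrix Y and signed Boolean compatibility matrix B can be written as

\[
  Y^{\mathrm{coupled}} \;=\; Y \;-\; Y B^{\mathsf{T}} \left( B\, Y\, B^{\mathsf{T}} \right)^{-1} B\, Y ,
\]

where Y is block diagonal in the substructure FRFs and B enforces interface compatibility; experimentally derived substructures enter simply as measured blocks of Y.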
Numerous projects are investigating the distribution of blends of natural gas and varying amounts of gaseous hydrogen through the existing natural gas distribution system, which is largely composed of medium-density polyethylene (MDPE) line pipe. The mechanical behavior of MDPE in hydrogen is not well understood; therefore, the effect of gaseous H2 on the mechanical properties of MDPE needs to be examined. In the current study, we investigate the fatigue life and fracture resistance of MDPE in the presence of 3.4 MPa gaseous H2. Fatigue life tests were also conducted at a pressure of 21 MPa to investigate the effect of gas pressure on the fatigue behavior of MDPE. Results showed that the presence of gaseous H2 degraded neither the fatigue life nor the fracture resistance of MDPE. Additionally, based on the calculated value of fracture resistance, a failure assessment diagram was constructed to determine the applicability of MDPE pipelines for the distribution of gaseous H2. Even in the presence of a large internal crack, the failure assessment indicated that the MDPE pipes lie within the safe region under typical service conditions of a natural gas distribution pipeline system.
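For reference, a failure assessment diagram of the kind described plots each assessment point in the coordinates below; the failure locus shown is the commonly used R6-type Option 1 form and is given only to illustrate the construction, not as the specific curve adopted in this study:

\[
  K_r = \frac{K_I}{K_{\mathrm{mat}}}, \qquad L_r = \frac{\sigma_{\mathrm{ref}}}{\sigma_y}, \qquad
  f(L_r) = \left(1 - 0.14\,L_r^2\right)\left[0.3 + 0.7\,\exp\!\left(-0.65\,L_r^6\right)\right],
\]

with a flawed component considered to lie in the safe region when K_r \le f(L_r) and L_r is below its material-dependent cutoff.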
Powders under compression form mesostructures of particle agglomerations in response to both inter- and intra-particle forces. The ability to computationally predict the resulting mesostructures with reasonable accuracy requires models that capture the distributions associated with particle size and shape, contact forces, and mechanical response during deformation and fracture. The following report presents experimental data obtained for the purpose of validating emerging mesostructures simulated by discrete element method and peridynamic approaches. A custom compression apparatus, suitable for integration with our micro-computed tomography (micro-CT) system, was used to collect 3-D scans of a bulk powder at discrete steps of increasing compression. Details of the apparatus and of the nearly spherical microcrystalline cellulose particles, including their mean particle size, are presented. Comparative simulations were performed with an initial arrangement of particles and particle shapes extracted directly from the validation experiment. The experimental volumetric reconstruction was segmented to extract the relative positions and shapes of individual particles in the ensemble, including internal voids in the case of the microcrystalline cellulose particles. These computationally determined particles were then compressed within the computational domain, and the evolving mesostructures were compared directly to those in the validation experiment. The ability of the computational models to simulate the experimental mesostructures and particle behavior at increasing compression is discussed.
The paper proposes an implementation of Graph Neural Networks (GNNs) for distribution power system Traveling Wave (TW)-based protection schemes. Simulated faults on the IEEE 34 system are processed using the Karrenbauer transform and the Stationary Wavelet Transform (SWT), and the energy of the resulting signals is calculated using Parseval's theorem. These data are used to train Graph Convolutional Networks (GCNs) to perform fault zone location. Several levels of measurement noise are considered for comparison. The results show outstanding performance, above 90% for the most developed models, and outline a fast, reliable, asynchronous, and distributed protection scheme for distribution-level networks.
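A hedged sketch of the feature-extraction step only (the wavelet family, decomposition level, and record length are assumptions, and the Karrenbauer transform step is omitted): stationary wavelet coefficients of a traveling-wave record are reduced to per-level energies via Parseval's relation.

# Hedged sketch: SWT of a traveling-wave record and per-level detail-band energies
# computed as sums of squared coefficients (Parseval's relation).
import numpy as np
import pywt

def swt_energies(signal, wavelet="db4", level=4):
    coeffs = pywt.swt(np.asarray(signal, dtype=float), wavelet, level=level)
    # Each entry is (approximation, detail); the sum of squared detail coefficients
    # approximates the signal energy in that band.
    return [float(np.sum(cD**2)) for _, cD in coeffs]

fault_record = np.random.default_rng(0).standard_normal(1024)  # stand-in TW measurement
print(swt_energies(fault_record))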
Operon prediction in prokaryotes is critical not only for understanding the regulation of endogenous gene expression, but also for exogenous targeting of genes using newly developed tools such as CRISPR-based gene modulation. A number of methods have used transcriptomics data to predict operons, based on the premise that contiguous genes in an operon will be expressed at similar levels. While promising results have been observed using these methods, most of them do not address uncertainty caused by technical variability between experiments, which is especially relevant when the amount of data available is small. In addition, many existing methods do not provide the flexibility to determine the stringency with which genes should be evaluated for being in an operon pair. We present OperonSEQer, a set of machine learning algorithms that uses the statistic and p-value from a non-parametric analysis of variance test (Kruskal-Wallis) to determine the likelihood that two adjacent genes are expressed from the same RNA molecule. We implement a voting system to allow users to choose the stringency of operon calls depending on whether their priority is high recall or high specificity. In addition, we provide the code so that users can retrain the algorithm and re-establish hyperparameters based on any data they choose, allowing this method to be expanded as additional data are generated. We show that our approach detects operon pairs that are missed by current methods by comparing our predictions to publicly available long-read sequencing data. OperonSEQer therefore improves on existing methods in terms of accuracy, flexibility, and adaptability.
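A hedged sketch of the core statistic and the voting idea, with placeholder coverage data and training features that are not OperonSEQer's:

# Hedged sketch: a Kruskal-Wallis test comparing expression of two adjacent genes, whose
# statistic and p-value would then feed an ensemble of classifiers with voting.
import numpy as np
from scipy.stats import kruskal
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
gene_a = rng.poisson(50, 100)   # stand-in per-base coverage for gene A
gene_b = rng.poisson(48, 100)   # similar coverage suggests co-transcription
stat, pval = kruskal(gene_a, gene_b)
print(f"H = {stat:.2f}, p = {pval:.3f}")

# Features such as (statistic, p-value) for many gene pairs would train a voting ensemble.
X = rng.random((200, 2)); y = rng.integers(0, 2, 200)      # placeholder training data
vote = VotingClassifier([("rf", RandomForestClassifier()),
                         ("lr", LogisticRegression())], voting="soft").fit(X, y)
print(vote.predict(X[:5]))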
Any program tasked with the evaluation and acquisition of algorithms for use in deployed scenarios must have an impartial, repeatable, and auditable means of benchmarking both candidate and fielded algorithms. Success in this endeavor requires a body of representative sensor data, data labels indicating the proper algorithmic response to the data as adjudicated by subject matter experts, a means of executing algorithms under review against the data, and the ability to automatically score and report algorithm performance. Each of these capabilities should be constructed in support of program and mission goals. By curating and maintaining data, labels, tests, and scoring methodology, a program can understand and continually improve the relationship between benchmarked and fielded performance of acquired algorithms. A system supporting these program needs, deployed in an environment with sufficient computational power and the necessary security controls, is a powerful tool for ensuring due diligence in the evaluation and acquisition of mission-critical algorithms. This paper describes the Seascape system and its place in such a process.
This study presents a method that can be used to gain information relevant to determining the corrosion risk for spent nuclear fuel (SNF) canisters during extended dry storage. Currently, it is known that stainless steel canisters are susceptible to chloride-induced stress corrosion cracking (CISCC). However, the rate of CISCC degradation and the likelihood that it could lead to a through-wall crack are unknown. This study applies well-developed computational fluid dynamics and particle-tracking tools to SNF storage to determine the rate of dust deposition on canisters. The deposition rate is determined for a vertical canister system and a horizontal canister system at various decay heat rates, with a uniform particle size distribution ranging from 0.25 to 25 µm used as the input. In all cases, most of the dust entering the overpack passed through without depositing. Most of what was retained in the overpack was deposited on overpack surfaces (e.g., inlet and outlet vents); only a small fraction was deposited on the canister itself. These results are provided for generalized canister systems with a generalized input; as such, this technical note is intended to demonstrate the technique. This study is part of an ongoing effort funded by the U.S. Department of Energy Office of Nuclear Energy's Spent Fuel and Waste Science and Technology campaign, which is tasked with conducting research relevant to developing a sound technical basis for ensuring the safe extended storage and subsequent transport of SNF. This work is presented to demonstrate a potentially useful technique for SNF canister vendors, utilities, regulators, and stakeholders to utilize and further develop for their own designs and site-specific studies.
Variable energy resources (VERs) such as wind and solar are the future of electricity generation as fossil fuels are gradually phased out due to environmental concerns. Nations across the globe are making significant strides in integrating VERs into their power grids as we strive toward a greener future. However, integration of VERs leads to several challenges due to their variable nature and low-inertia characteristics. In this paper, we discuss the hurdles faced by the power grid due to high penetration of wind power generation and how energy storage systems (ESSs) can be used at the grid level to overcome these hurdles. We propose a new planning strategy by which ESSs can be sized appropriately to provide inertial support as well as aid in variability mitigation, thus minimizing load curtailment. A probabilistic framework is developed for this purpose, which takes into consideration the outage of generators and the replacement of conventional units with wind farms. Wind speed is modeled using an autoregressive moving average technique. The efficacy of the proposed methodology is demonstrated on the WSCC 9-bus test system.
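A hedged sketch of the wind-speed modeling step only, with assumed ARMA coefficients, mean speed, and noise scale rather than the paper's fitted values:

# Hedged sketch: an ARMA process generates a synthetic hourly wind-speed series around an
# assumed mean, which would then drive a probabilistic ESS sizing study.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

ar = np.array([1.0, -0.75, 0.1])   # AR polynomial (leading 1, negated coefficients)
ma = np.array([1.0, 0.4])          # MA polynomial
mean_speed, scale = 8.0, 1.5       # m/s, assumed values

hourly_deviation = ArmaProcess(ar, ma).generate_sample(nsample=8760, scale=scale)
wind_speed = np.clip(mean_speed + hourly_deviation, 0.0, None)  # no negative speeds
print(wind_speed[:5])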