IMoFi - Intelligent Model Fidelity: Physics-Based Data-Driven Grid Modeling to Accelerate Accurate PV Integration
Journal of Wind Engineering and Industrial Aerodynamics
The complexity of, and uncertainties associated with, atmospheric-turbine-wake interactions make accurate wind farm predictions of generator power and other important quantities of interest (QoIs) challenging, even with state-of-the-art high-fidelity atmospheric and turbine models. A comprehensive computational study was undertaken to assess the effects of simulation methodology, parameter selection, and mesh refinement on atmospheric, turbine, and wake QoIs and to identify capability gaps in the validation process. For neutral atmospheric boundary layer conditions, the massively parallel large eddy simulation (LES) code Nalu-Wind was used to produce high-fidelity computations for experimental validation against high-quality meteorological, turbine, and wake measurement data collected at the Department of Energy/Sandia National Laboratories Scaled Wind Farm Technology (SWiFT) facility located at Texas Tech University's National Wind Institute. The wake analysis showed that the simulated lidar model implemented in Nalu-Wind successfully captured the wake profile trends observed in the experimental lidar data.
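As a purely illustrative sketch of the simulated-lidar idea (not the Nalu-Wind implementation, whose details are not given here), a virtual lidar can be reduced to sampling a modeled wake velocity field at each range gate along the instrument's beam and projecting the velocity onto the line of sight. The Gaussian wake-deficit shape, turbine dimensions, and beam geometry below are assumptions chosen only to make the sketch runnable.

    import numpy as np

    # Illustrative only: a Gaussian wake-deficit model and a single lidar beam.
    # None of these parameters or shapes come from the SWiFT study or Nalu-Wind.
    U_INF = 8.0          # freestream wind speed, m/s (assumed)
    D = 27.0             # rotor diameter, m (assumed)
    HUB_HEIGHT = 32.0    # hub height, m (assumed)

    def wake_velocity(x, y, z):
        """Streamwise velocity with a simple Gaussian wake deficit downstream of x = 0."""
        if x <= 0.0:
            return U_INF
        sigma = 0.25 * D + 0.04 * x                 # assumed wake growth rate
        r2 = y**2 + (z - HUB_HEIGHT)**2             # squared distance from wake center
        deficit = 0.5 * np.exp(-r2 / (2.0 * sigma**2)) * (D / (D + 0.08 * x))
        return U_INF * (1.0 - deficit)

    def lidar_los_profile(origin, azimuth_deg, elevation_deg, ranges):
        """Sample line-of-sight (LOS) velocity at each range gate along one beam."""
        az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
        beam = np.array([np.cos(el) * np.cos(az),   # unit vector along the beam
                         np.cos(el) * np.sin(az),
                         np.sin(el)])
        los = []
        for rng in ranges:
            px, py, pz = origin + rng * beam
            u = np.array([wake_velocity(px, py, pz), 0.0, 0.0])  # streamwise flow only
            los.append(u @ beam)                    # projection onto the beam
        return np.array(los)

    if __name__ == "__main__":
        gates = np.linspace(1.0 * D, 5.0 * D, 9)    # range gates from 1D to 5D downstream
        profile = lidar_los_profile(np.array([0.0, 0.0, HUB_HEIGHT]), 0.0, 0.0, gates)
        for rng, v in zip(gates, profile):
            print(f"range {rng:6.1f} m : LOS velocity {v:5.2f} m/s")

A virtual instrument of this kind allows simulated and measured wake profiles to be compared in the lidar's own coordinates, which is consistent with the lidar-to-lidar comparison the abstract describes.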
This report provides basic background information on the Manipulate-2020 code, which is used for processing and "manipulation" of nuclear data in support of radiation metrology applications. The code is hosted in an open GitHub repository and is available to the general nuclear data community.
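The report does not describe Manipulate-2020's interface, so the following is only a generic sketch of one kind of nuclear-data "manipulation" common in radiation metrology: collapsing a pointwise cross section over a neutron spectrum to a spectrum-averaged value. The energy grid, the toy 1/v-like cross section, and the toy spectrum shape are assumptions made purely for illustration; they are not the Manipulate-2020 algorithm, and real work would read evaluated nuclear data (e.g., ENDF-formatted files) instead.

    import numpy as np

    # Illustrative dosimetry-style collapse: spectrum-averaged cross section
    #   <sigma> = integral(sigma(E) * phi(E) dE) / integral(phi(E) dE)
    # All pointwise data below are made up for the example.

    energy = np.logspace(-3, 1, 200)             # MeV, assumed grid
    sigma = 2.0 / np.sqrt(energy)                # barns, toy 1/v-like cross section
    phi = energy * np.exp(-energy / 1.3)         # toy fission-like spectrum shape

    def spectrum_average(e, xs, flux):
        """Trapezoidal collapse of a pointwise cross section over a flux spectrum."""
        num = np.trapz(xs * flux, e)
        den = np.trapz(flux, e)
        return num / den

    print(f"spectrum-averaged cross section: {spectrum_average(energy, sigma, phi):.3f} b")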
Reverse engineering (RE) analysts struggle to answer critical questions about the safety of binary code accurately and promptly, and their supporting program analysis tools are sometimes simply wrong. The analysis tools must approximate in order to provide any information at all, but this means they introduce uncertainty into their results, and those uncertainties chain from analysis to analysis. We hypothesize that exposing the sources, impacts, and control of uncertainty to human binary analysts will allow them to approach their hardest problems with high-powered analytic techniques that they know when to trust. Combining expertise in binary analysis algorithms, human cognition, uncertainty quantification, verification and validation, and visualization, we pursue research that should benefit binary software analysis efforts across the board. We find a strong analogy between RE and exploratory data analysis (EDA); we begin to characterize the sources and types of uncertainty found in practice in RE (both in the process and in the supporting analyses); we explore a domain-specific focus on uncertainty in pointer analysis, showing that more precise models do help analysts answer small information-flow questions faster and more accurately; and we test a general population with domain-general Sudoku problems, showing that adding "knobs" to an analysis does not significantly slow down performance. This document describes our explorations of uncertainty in binary analysis.
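As a toy illustration of the pointer-analysis finding (a hedged sketch, not the project's tooling), consider one common precision dimension, flow sensitivity: a flow-insensitive analysis merges every assignment to a pointer and so over-approximates what a tainted value can reach, while a flow-sensitive analysis tracks assignment order and rules the spurious flow out. The miniature "program" and both analyses below are deliberately simplistic.

    # Toy program: a sequence of pointer assignments (destination, source).
    # 'p' first points to the secret buffer, then is reassigned to a public one.
    program = [
        ("p", "secret_buf"),
        ("p", "public_buf"),
        ("q", "p"),            # q takes whatever p points to *at this point*
    ]

    def flow_insensitive(prog):
        """Merge every assignment, ignoring order: cheap but over-approximate."""
        pts = {}
        changed = True
        while changed:
            changed = False
            for dst, src in prog:
                objs = pts.get(src, {src} if src.endswith("_buf") else set())
                before = pts.setdefault(dst, set())
                if not objs <= before:
                    before |= objs
                    changed = True
        return pts

    def flow_sensitive(prog):
        """Process assignments in order: a later assignment to p replaces the earlier one."""
        pts = {}
        for dst, src in prog:
            pts[dst] = set(pts.get(src, {src} if src.endswith("_buf") else set()))
        return pts

    for name, analysis in [("flow-insensitive", flow_insensitive),
                           ("flow-sensitive", flow_sensitive)]:
        result = analysis(program)
        leak = "secret_buf" in result.get("q", set())
        print(f"{name:17s}: q -> {sorted(result.get('q', set()))}  "
              f"(possible secret flow: {leak})")

Here the imprecise analysis reports a possible flow of the secret buffer into q that the more precise analysis correctly excludes; that spurious result is exactly the kind of uncertainty an analyst must otherwise resolve by hand.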