Reverse engineering (RE) analysts struggle to answer critical questions about the safety of binary code accurately and promptly, and their supporting program analysis tools are sometimes simply wrong. The analysis tools must approximate in order to provide any information at all, but in doing so they introduce uncertainty into their results, and those uncertainties chain from analysis to analysis. We hypothesize that exposing the sources, impacts, and control of uncertainty to human binary analysts will allow them to approach their hardest problems with high-powered analytic techniques that they know when to trust. Combining expertise in binary analysis algorithms, human cognition, uncertainty quantification, verification and validation, and visualization, we pursue research that should benefit binary software analysis efforts across the board. We find a strong analogy between RE and exploratory data analysis (EDA); we begin to characterize the sources and types of uncertainty found in RE practice (both in the RE process and in its supporting analyses); we explore a domain-specific focus on uncertainty in pointer analysis, showing that more precise models help analysts answer small information-flow questions faster and more accurately; and we test a general population with domain-general Sudoku problems, showing that adding "knobs" to an analysis does not significantly slow performance. This document describes our explorations of uncertainty in binary analysis.
On April 6-8, 2021, Sandia National Laboratories hosted a virtual workshop to explore the potential for developing AI-Enhanced Co-Design for Next-Generation Microelectronics (AICoM). The workshop brought together two themes. The first theme was articulated in the 2018 Department of Energy Office of Science (DOE SC) “Basic Research Needs for Microelectronics” (BRN) report, which called for a “fundamental rethinking” of the traditional approach to microelectronics design, in which subject matter experts (SMEs) in each microelectronics discipline (materials, devices, circuits, algorithms, etc.) work nearly independently. Instead, the BRN called for a non-hierarchical, egalitarian vision of co-design, wherein “each scientific discipline informs and engages the others” in “parallel but intimately networked efforts to create radically new capabilities.” The second theme was the recognition of continuing breakthroughs in artificial intelligence (AI) that are enhancing and accelerating the solution of traditional design problems in materials science, circuit design, and electronic design automation (EDA).
For digital twins (DTs) to become a central fixture in mission-critical systems, three things are required: a better understanding of potential modes of failure, quantification of uncertainty, and the ability to explain a model’s behavior. These aspects are particularly important because the performance of a digital twin will evolve during model development and deployment for real-world operations.
Signal arrival-time estimation plays a critical role in a variety of downstream seismic analyses, including location estimation and source characterization. Any arrival-time errors propagate through subsequent data-processing results. In this article, we detail a general framework for refining estimated seismic signal arrival times along with full estimation of their associated uncertainty. Using the standard short-term average/long-term average (STA/LTA) threshold algorithm to identify a search window, we demonstrate how to refine the pick estimate through two different approaches. In both cases, new waveform realizations are generated through bootstrap algorithms to produce full a posteriori estimates of the uncertainty in the onset arrival time of the seismic signal. These onset arrival uncertainty estimates provide additional data-derived information from the signal and have the potential to influence seismic analysis on several fronts.
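To make the framework concrete, below is a minimal Python sketch of one way such a pipeline could be assembled: an STA/LTA ratio flags a trigger, a search window is placed around it, a simple AIC-style changepoint picker refines the onset, and bootstrap resamples of pre-signal noise are re-picked to build an empirical distribution of arrival times. All function names, window lengths, the detection threshold, the AIC refinement, and the i.i.d. noise resampling are illustrative assumptions for this sketch; they are not the two refinement approaches or the specific bootstrap algorithms described in the article.

import numpy as np

def sta_lta(x, nsta, nlta):
    # Classic trailing-window STA/LTA ratio computed on the squared trace.
    e = x ** 2
    sta = np.convolve(e, np.ones(nsta) / nsta, mode="full")[: e.size]
    lta = np.convolve(e, np.ones(nlta) / nlta, mode="full")[: e.size]
    lta[lta == 0] = np.finfo(float).eps
    ratio = sta / lta
    ratio[:nlta] = 0.0  # ignore the ramp-up before the LTA window is filled
    return ratio

def refine_pick(x, window):
    # Refine the onset inside the search window with a simple AIC-style
    # changepoint: choose the split index minimizing the summed log-variances.
    seg = x[window]
    n = len(seg)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1, v2 = np.var(seg[:k]), np.var(seg[k:])
        aic[k] = k * np.log(max(v1, 1e-12)) + (n - k) * np.log(max(v2, 1e-12))
    return window.start + int(np.argmin(aic))

def bootstrap_picks(x, window, pre_noise, n_boot=200, seed=None):
    # Generate new waveform realizations by resampling pre-signal noise with
    # replacement, adding it to the trace, and re-picking each realization.
    rng = np.random.default_rng(seed)
    picks = np.empty(n_boot, dtype=int)
    for b in range(n_boot):
        noise = rng.choice(pre_noise, size=x.size, replace=True)
        picks[b] = refine_pick(x + noise, window)
    return picks

if __name__ == "__main__":
    fs = 100.0                                  # sampling rate (Hz), assumed
    rng = np.random.default_rng(0)
    t = np.arange(0, 60, 1 / fs)
    x = 0.1 * rng.standard_normal(t.size)       # synthetic background noise
    onset = int(30 * fs)                        # true onset at 30 s
    tail = t[: t.size - onset]
    x[onset:] += np.sin(2 * np.pi * 5 * tail) * np.exp(-0.2 * tail)

    ratio = sta_lta(x, nsta=int(0.5 * fs), nlta=int(10 * fs))
    trigger = int(np.argmax(ratio > 4.0))       # first threshold crossing
    window = slice(max(trigger - int(2 * fs), 0), trigger + int(2 * fs))

    picks = bootstrap_picks(x, window, pre_noise=x[: int(20 * fs)], n_boot=100, seed=1)
    lo, hi = np.percentile(picks, [2.5, 97.5]) / fs
    print(f"median pick: {np.median(picks) / fs:.2f} s, 95% interval: [{lo:.2f}, {hi:.2f}] s")

The spread of the resulting pick distribution is the kind of data-derived uncertainty estimate the article argues should accompany each arrival time.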