The First Tri-Lab Workshop on Formal Verification was held in Santa Fe, New Mexico, on December 5th, 2023. The workshop gathered staff from Sandia, Los Alamos, and Lawrence Livermore National Laboratories, as well as NASA’s Jet Propulsion Laboratory. This report summarizes and expands on the presentations given and the discussions held at the workshop. In it, we describe the current capabilities and research needs related to formal methods at the NNSA labs. In particular, we identify medium- and long-term research gaps in programming languages, formalization of complex systems, embedded systems verification, hardware verification, cybersecurity, formal methods usability, workflows, numerical methods, the use of formal methods for artificial intelligence (and its converse, artificial intelligence for formal methods), and collaboration opportunities and considerations on these topics. We conclude with a small number of exemplar research problems related to these topics.
This report describes work originally performed in FY19 that assembled a workflow enabling formal verification of high-consequence digital controllers. The approach builds on an engineering analysis strategy using multiple abstraction levels (Model-Based Design) and performs exhaustive formal analysis of appropriate levels – here, state machines and C code – to assure always/never properties of digital logic that cannot be verified by testing alone. The operation of the workflow is illustrated using example models and code, including expected failures of verification when properties are violated.
We present the Q Framework, a verification framework used at Sandia National Laboratories. Q is a collection of tools for verifying safety and correctness properties of high-consequence embedded systems; it captures the structure and compositionality of system specifications written as state machines in order to prove system-level properties about their implementations. Q consists of two main workflows: 1) compilation of temporal properties and state machine models (such as those made with Stateflow) into SMV models, and 2) generation of ACSL specifications for the C code implementations of the state machine models. Together, these establish a refinement relation between a state machine model and its C code implementation, with proofs of properties checked by NuSMV (for SMV models) and Frama-C (for ACSL specifications).
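To give a flavor of the second workflow, the snippet below shows the general shape of an ACSL function contract on a C transition function of the kind Frama-C can discharge. The two-state machine, its names, and its contract are invented for illustration; they are not output of the Q Framework.

```c
/* Hypothetical two-state machine (IDLE/ARMED), illustrating the kind of
 * ACSL contract that could annotate a C implementation of a state
 * machine transition; Frama-C's WP plugin can check such contracts. */
typedef enum { IDLE = 0, ARMED = 1 } state_t;

/*@ requires s == IDLE || s == ARMED;
  @ assigns \nothing;
  @ ensures \result == IDLE || \result == ARMED;
  @ ensures (s == IDLE && enable != 0) ==> \result == ARMED;
  @ ensures enable == 0 ==> \result == IDLE;
*/
state_t step(state_t s, int enable)
{
    if (!enable)
        return IDLE;            /* disabling always returns to IDLE */
    if (s == IDLE)
        return ARMED;           /* enabled transition out of IDLE */
    return s;                   /* otherwise hold state */
}
```

The `ensures` clauses here play the role of per-step safety obligations; in a refinement proof, such contracts would be generated to match the transitions of the state machine model, so that discharging them connects the C code back to the properties verified on the SMV side.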
New manufacturing technologies such as additive manufacturing require research and development to minimize the uncertainties in the produced parts. The research involves experimental measurements and large simulations, which result in huge quantities of data to store and analyze. We address this challenge by alleviating the data storage requirements using lossy data compression. We select wavelet bases as the mathematical tool for compression. Unlike images, additive manufacturing data is often represented on irregular geometries and unstructured meshes. Thus, we use Alpert tree-wavelets as bases for our data compression method. We first analyze different basis functions for the wavelets and find the one that results in maximal compression and minimal error in the reconstructed data. We then devise a new adaptive thresholding method that is data-agnostic and allows a priori estimation of the reconstruction error. Finally, we propose metrics to quantify the global and local errors in the reconstructed data. One of the error metrics addresses the preservation of physical constraints in reconstructed data fields, such as a divergence-free stress field in structural simulations. While our compression and decompression method is general, we apply it to both experimental and computational data obtained from measurements and thermal/structural modeling of the sintering of a hollow cylinder from metal powders using a Laser Engineered Net Shape process. The results show that monomials achieve optimal compression performance when used as wavelet bases. The new thresholding method results in compression ratios that are two to seven times larger than the ones obtained with commonly used thresholds. Overall, adaptive Alpert tree-wavelets can achieve compression ratios between one and three orders of magnitude, depending on which features of the data must be preserved.
These results show that Alpert tree-wavelet compression is a viable and promising technique to reduce the size of large data structures found in both experiments and simulations.