Validation and verification of engineering models are important for understanding potential weaknesses and issues in a model. This is accomplished by applying constraint logic to the model, and both the model and the constraints placed upon it can be represented as a graph structure. Here we present a visualization system that helps users understand, locate, and fix constraint violations in their systems. Users are given several ways to narrow in on the specific errors and parts of the graph they are interested in: they can choose which types of errors are shown in the graph, clustering is applied to the graph to focus their searches, and several other graph interactions support the discovery of constraint violations.
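As a rough illustration of the kind of filtering and clustering described above, the sketch below (our own, not the paper's tool) uses networkx to keep only nodes whose recorded violations match user-selected error types and then groups the flagged nodes into connected clusters; the node names, attribute keys, and violation labels are hypothetical.

```python
# A minimal sketch, assuming a networkx model graph whose nodes carry a
# hypothetical "violations" attribute; not the authors' implementation.
import networkx as nx

def violation_subgraph(model_graph, selected_types):
    """Keep only nodes whose recorded violations match the user-selected types."""
    flagged = [
        n for n, data in model_graph.nodes(data=True)
        if any(v in selected_types for v in data.get("violations", []))
    ]
    return model_graph.subgraph(flagged)

# Toy model graph with illustrative constraint-violation labels.
G = nx.DiGraph()
G.add_node("pump",   violations=["missing_interface"])
G.add_node("valve",  violations=[])
G.add_node("sensor", violations=["unit_mismatch", "missing_interface"])
G.add_edge("pump", "valve")
G.add_edge("valve", "sensor")

sub = violation_subgraph(G, selected_types={"missing_interface"})
# Cluster the flagged nodes so a user can inspect one group at a time.
clusters = list(nx.weakly_connected_components(sub))
print(clusters)   # e.g. [{'pump'}, {'sensor'}]
```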
The credibility of an engineering model is of critical importance in large-scale projects. How concerned should an engineer be when reusing someone else's model if they do not know the author or the tools used to create it? In this report, the authors advance engineers' capabilities for assessing models by examining a model's underlying semantic structure: its ontology. The ontology defines the objects in a model, the types of those objects, and the relationships between them. In this study, two advances in ontology simplification and visualization are discussed and demonstrated on two systems engineering models. These advances are critical steps toward enabling engineering models to interoperate and toward assessing models for credibility. For example, results of this research show an 80% reduction in file size and representation size, dramatically improving the throughput of graph algorithms applied to the analysis of these models. Finally, four open problems in ontology research toward establishing credible models are outlined: ontology discovery, ontology matching, ontology alignment, and model assessment.
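To make the ontology idea concrete, the following hedged sketch represents a small model as subject-predicate-object triples and applies one plausible simplification pass (dropping duplicate statements and tool-specific metadata predicates). The predicate names and the exclusion list are illustrative assumptions, and no claim is made that this is how the reported 80% reduction was achieved.

```python
# A minimal sketch, not the authors' implementation: a model's ontology as
# subject-predicate-object triples, with a simplification pass that drops
# duplicate statements and tool-specific metadata predicates.
MODEL_TRIPLES = [
    ("Battery",    "isA",         "Component"),
    ("Battery",    "suppliesTo",  "Controller"),
    ("Battery",    "suppliesTo",  "Controller"),     # duplicate statement
    ("Battery",    "guiPosition", "x=120,y=45"),     # tool-specific metadata
    ("Controller", "isA",         "Component"),
]

METADATA_PREDICATES = {"guiPosition"}  # hypothetical exclusion list

def simplify(triples, drop_predicates):
    """Remove duplicates and predicates that carry no semantic content."""
    seen, kept = set(), []
    for s, p, o in triples:
        if p in drop_predicates or (s, p, o) in seen:
            continue
        seen.add((s, p, o))
        kept.append((s, p, o))
    return kept

print(simplify(MODEL_TRIPLES, METADATA_PREDICATES))
```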
Digital engineering strategies typically assume that digital engineering models interoperate seamlessly across the multiple engineering modeling software applications involved, such as model-based systems engineering (MBSE), mechanical computer-aided design (MCAD), electrical computer-aided design (ECAD), and other engineering modeling applications. The presumption is that the data schema in these modeling software applications are structured in the familiar flat, tabular schema like any other software application. Engineering domain-specific applications (e.g., systems, mechanical, electrical, simulation) are typically designed to solve domain-specific problems, necessarily excluding explicit representations of non-domain information to help the engineer focus on the domain problems (system definition, design, simulation). Such exclusions become problematic in inter-domain information exchange: the obvious assumptions of one domain might not be so obvious to experts in another. Ambiguity in domain-specific language can erode the ability of different domain modeling applications to interoperate unless the underlying language is understood and used as the basis for translation from one application to another. The engineering modeling software industry has struggled for decades to enable these applications to interoperate. Industry standards have been developed, but they have not unified the industry. Why is this? The authors assert that the industry has relied on traditional database integration methods. The basic issue prohibiting successful application integration is that traditional database-driven integration does not consider the distinct languages of each domain. An engineering model's meaning is expressed through the underlying language of that engineering domain; in essence, traditional integration methods do not retain the semantic context (meaning) of the model. The basis of this research stems from the widely held assumption that systems engineering models are (or can be) structured according to the underlying semantic ontology of the model. This assumption follows from two observations. 1) Digital systems engineering models are often represented using graph theory (the graph of a complex system model can contain millions of nodes and edges). Examining the nodes one at a time and following the outbound edges of each node one by one yields rudimentary statements about the model (i.e., node A relates to node B), as in a semantic graph. 2) Likewise, from the study of natural languages, a sentence can be structured into unambiguous subject-predicate-object triples within formal and highly expressive semantic ontologies. The rudimentary statements about a systems model discerned with graph theory closely mimic the triples used in the ontologies that structure natural languages. In other words, a systems model's semantic graph can be (or is) structured into an ontology. Additionally, it is well established in industry that, through natural language processing (NLP), which provides the means to create language structures, computers can interpret ontological graphs. Therefore, the authors hypothesized that if the integrity of the underlying semantic structure of a systems model is retained, the contextual meaning of the model is retained.
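The graph-to-triples correspondence described above can be sketched in a few lines. In this illustrative example (ours, not the authors'), each outbound edge of a toy model graph is read off as one rudimentary subject-predicate-object statement; the graph content and the edge attribute name "relation" are assumptions made for the sketch.

```python
# A hedged illustration of walking a model graph edge by edge and emitting
# subject-predicate-object triples; the model content is hypothetical.
import networkx as nx

model = nx.MultiDiGraph()
model.add_edge("Satellite", "Antenna", relation="hasPart")
model.add_edge("Antenna", "GroundStation", relation="transmitsTo")

def graph_to_triples(g):
    """Each outbound edge becomes one rudimentary statement about the model."""
    return [
        (src, data.get("relation", "relatesTo"), dst)
        for src, dst, data in g.edges(data=True)
    ]

for triple in graph_to_triples(model):
    print(triple)
# ('Satellite', 'hasPart', 'Antenna')
# ('Antenna', 'transmitsTo', 'GroundStation')
```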
By structuring system models into the triples of the underlying ontology during the transformation from one MBSE application to another, the authors have provided a proof of the concept that the meaning of a system model can be retained during transformation. The authors assert that this is the missing ingredient in effective systems model-to-model interoperability.

ACKNOWLEDGEMENTS

The authors would like to thank the FY19 Model Interoperability team members who provided a solid foundation for the FY20 team to leverage: John McCloud, for the work he did to guide us toward the right use of technology that will appropriately discover and manipulate ontologies; Carlos Tafoya, for the work he did to develop an application programming interface (API)/Adapter that would export ontology-based data from GENESYS; and Peter Chandler, for the work he did to architect our overall integration solution, with an eye toward a future large-scale, federated, production-level systems engineering digital model ecosystem.
Digital Systems Engineering strategies typically call for digital Systems Engineering models to be retained in repositories and certified as an authoritative source of truth (enabling model reuse, qualification, and collaboration). For digital Systems Engineering models to be certified as authoritative (credible), they need to be assessed - verified and validated - with the amount of uncertainty in the model quantified (consider reusing someone else's model without knowing the author). Due to increasing model complexity, the authors assert that traditional human-based methods for validation, verification, and uncertainty quantification - such as human-based peer-review sessions - cannot sufficiently establish that a digital Systems Engineering model of a complex system is credible. Digital Systems Engineering models of complex systems can contain millions of nodes and edges. The authors assert that this level of detail is beyond the ability of any group of humans - even working for weeks at a time - to discern and catch every minor model infraction. In contrast, computers are highly effective at discerning infractions within massive amounts of information. The authors suggest that a better approach might be to focus the humans on which model patterns should be assessed and to enable the computer to assess the massive details in accordance with those patterns - by running through perhaps 100,000 test loops. In anticipation of future projects to implement and automate the assessment of models at Sandia National Laboratories, a study was initiated to elicit input from a group of 25 Systems Engineering experts. The authors' positioning query began with: "What questions would a Systems Engineer ask to assess Systems Engineering models for credibility?" This report documents the results of that survey.
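As a hedged sketch of that division of labor, the example below encodes one human-specified pattern (every Requirement must be satisfied by at least one model element) and lets the computer sweep the whole model for infractions; the triples and the rule are hypothetical examples, not patterns drawn from the survey.

```python
# A minimal sketch: a human writes the pattern to assess, and the computer
# checks the entire model for infractions. Model content is hypothetical.
MODEL = [
    ("Req-001", "isA", "Requirement"),
    ("Req-002", "isA", "Requirement"),
    ("Pump",    "satisfies", "Req-001"),
]

def requirements_without_satisfier(triples):
    """Flag every Requirement node that no other node 'satisfies'."""
    requirements = {s for s, p, o in triples if p == "isA" and o == "Requirement"}
    satisfied    = {o for s, p, o in triples if p == "satisfies"}
    return sorted(requirements - satisfied)

print(requirements_without_satisfier(MODEL))   # ['Req-002']
```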
Sandia and SLAC staff evaluated Princeton Plasma Physics Laboratory (PPPL) engineering procedures. The report documents the team members, interviewees and documents consulted, lines of inquiry, strengths and best practices, weaknesses and risks, opportunities for improvement, and general comments.
A one-day workshop was held on April 14, 2016, to explore Nuclear Weapons Mission Area (NWMA) strategy enablers from a systems perspective. This report documents the workshop and is intended to identify initiatives, based on the workshop exchanges, and to catalyze these initiatives to enable implementation of the NWMA strategy using systems thinking and methodology. Topics explored include Model-based Engineering, Enabling Viable Capabilities, and Enterprise Decision Awareness. The morning of the workshop featured Dr. Dinesh Verma (Stevens Institute/SERC) as the keynote speaker, and during the afternoon attendees participated in three facilitated sessions on these topics. There were over 70 participants from about 40 departments across Sandia National Laboratories.