Publication Details
What Questions Would a Systems Engineer Ask to Assess Systems Engineering Models as Credible
Carroll, Edward R.; Malins, Robert J.
Digital Systems Engineering strategies typically call for digital Systems Engineering models to be retained in repositories and certified as an authoritative source of truth, enabling model reuse, qualification, and collaboration. For a digital Systems Engineering model to be certified as authoritative (credible), it must be assessed - verified and validated - with the amount of uncertainty in the model quantified (consider reusing someone else's model without knowing the author). Digital Systems Engineering models of complex systems can contain millions of nodes and edges. Because of this complexity, the authors assert that traditional human-based methods for validation, verification, and uncertainty quantification - such as peer-review sessions - cannot sufficiently establish that a digital Systems Engineering model of a complex system is credible. This level of detail is beyond the ability of any group of humans, even working for weeks at a time, to discern and catch every minor model infraction. Computers, in contrast, are highly effective at discerning infractions within massive amounts of information. The authors suggest that a better approach might be to focus the humans on defining which model patterns should be assessed and to enable the computer to assess the massive detail against those patterns - by running through perhaps 100,000 test loops. In anticipation of future projects to implement and automate the assessment of models at Sandia National Laboratories, a study was initiated to elicit input from a group of 25 Systems Engineering experts. The authors' positioning query began with: 'What questions would a Systems Engineer ask to assess Systems Engineering models for credibility?' This report documents the results of that survey.
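To make the proposed division of labor concrete, the sketch below shows one way a human-authored pattern could be checked mechanically across a model graph. It is a minimal, illustrative example only: the node kinds, edge labels, and the traceability pattern itself are hypothetical and are not drawn from the report or from any Sandia tooling.

```python
# Illustrative sketch: humans define a model pattern; the computer applies
# it exhaustively over the graph. All names here are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    source: str  # ID of the originating node
    label: str   # relationship type, e.g. "verifiedBy"
    target: str  # ID of the destination node

def find_untraced_requirements(nodes: dict[str, str],
                               edges: list[Edge]) -> list[str]:
    """Pattern: every node of kind 'requirement' must have at least one
    outgoing 'verifiedBy' edge. Returns IDs of nodes violating the pattern."""
    verified = {e.source for e in edges if e.label == "verifiedBy"}
    return [node_id for node_id, kind in nodes.items()
            if kind == "requirement" and node_id not in verified]

# Toy model: two requirements, one verification activity.
nodes = {"R1": "requirement", "R2": "requirement", "V1": "verification"}
edges = [Edge("R1", "verifiedBy", "V1")]

print(find_untraced_requirements(nodes, edges))  # ['R2'] - R2 is untraced
```

In this framing, the human contribution is the pattern definition (the docstring rule), while the machine contribution is the exhaustive scan, which scales to millions of nodes and edges and to large batteries of repeated checks in a way that manual peer review cannot.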