Publications

Results 1–25 of 29

SAGE Intrusion Detection System: Sensitivity Analysis Guided Explainability for Machine Learning

Smith, Michael R.; Acquesta, Erin A.; Ames, Arlo L.; Carey, Alycia N.; Cueller, Christopher R.; Field, Richard V.; Maxfield, Trevor M.; Mitchell, Scott A.; Morris, Elizabeth S.; Moss, Blake C.; Nyre-Yu, Megan N.; Rushdi, Ahmad R.; Stites, Mallory C.; Smutz, Charles S.; Zhou, Xin Z.

This report details the results of a three-fold investigation of sensitivity analysis (SA) for machine learning (ML) explainability (MLE): (1) a mathematical assessment of the fidelity of an explanation with respect to a learned ML model, (2) quantification of the trustworthiness of a prediction, and (3) the impact of MLE on end-user efficiency, examined through multiple user studies. We focused on the cybersecurity domain because its data is inherently non-intuitive. As ML is used in an increasing number of domains, including domains where being wrong can elicit high consequences, MLE has been proposed as a means of generating end-user trust in learned ML models. However, little analysis has been performed to determine whether explanations accurately represent the target model and whether they themselves should be trusted beyond subjective inspection. Current state-of-the-art MLE techniques only provide a list of important features based on heuristic measures and/or make assumptions about the data and the model that are not representative of real-world data and models. Further, most are designed without considering their usefulness to an end user in a broader context. To address these issues, we present a notion of explanation fidelity based on Shapley values from cooperative game theory. We find that all of the investigated MLE methods produce explanations that are incongruent with the ML model being explained, because, for computational reasons, they make critical assumptions of feature independence and linear feature interactions. We also find that in deployed settings, explanations are rarely used, for a variety of reasons: several other tools are trusted more than the explanations, and there is little incentive to use them. In the cases where explanations are used, we found a danger that they persuade end users to wrongly accept false positives and false negatives. However, ML model developers and maintainers find the explanations more useful for ensuring that the ML model does not have obvious biases. In light of these findings, we suggest several future directions, including developing MLE methods that directly model non-linear feature interactions and adopting design principles that account for the usefulness of explanations to the end user. We also augment explanations with a set of trustworthiness measures that quantify geometric aspects of the data to determine whether the model output should be trusted.
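
As an illustration of the game-theoretic machinery the abstract refers to, the following is a minimal Python sketch of exact Shapley-value attribution. The toy model, the all-zeros baseline, and the replace-with-baseline value function are illustrative assumptions of this sketch, not taken from the report; the interaction term x[1]*x[2] is the kind of non-linear dependence that independence- and linearity-assuming explainers misattribute.

```python
# Minimal sketch of exact Shapley-value attribution.
# Toy model, baseline, and value function are illustrative assumptions.
from itertools import combinations
from math import factorial

def model(x):
    # Toy model with a non-linear interaction term x[1]*x[2].
    return 2.0 * x[0] + x[1] * x[2]

def shapley_values(f, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Features outside the coalition are set to their baseline value.
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x, baseline = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
print(phi)                                    # [2.0, 3.0, 3.0]
print(sum(phi), model(x) - model(baseline))   # efficiency: both equal 8.0
```

Note how the interaction credit is split evenly between features 1 and 2; an explainer that assumes a linear, independent model has no principled way to make that split, which is one source of the infidelity the report measures.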


Intrinsic Uncertainties in Modeling Complex Systems

Cooper, Curtis S.; Bramson, Aaron L.; Ames, Arlo L.

Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical, elements. This reality of the modeling enterprise forces us to consider the prospective impacts of effects completely left out of a model, whether intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and with representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems.

Acknowledgements: This work was funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
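
The abstract's central point admits a compact numerical illustration. The sketch below uses a Chebyshev-polynomial perturbation family of this example's own choosing (not the report's construction): every member stays within the allowed amplitude throughout the control region, yet its value at a point outside that region grows without bound as the family parameter increases.

```python
# Minimal sketch: perturbations bounded inside a control region can be
# arbitrarily large outside it. Chebyshev family is illustrative only.
import numpy as np
from numpy.polynomial import Chebyshev

eps = 0.01                              # amplitude bound on the control region
control = np.linspace(-1.0, 1.0, 201)   # the sampled/validated region
x_out = 2.0                             # a forecast point outside the region

for k in (2, 8, 32):
    T_k = Chebyshev.basis(k)            # |T_k(x)| <= 1 for x in [-1, 1]
    delta = eps * T_k                   # perturbation within eps inside the region
    print(k, np.abs(delta(control)).max(), float(delta(x_out)))
    # |delta| <= eps everywhere in the control region, yet delta(x_out)
    # explodes with k: no data inside the region can rule it out.
```

Since every member of the family is indistinguishable from zero (to within eps) on the validated region, extrapolating the validated model to x_out carries an error that cannot be bounded, which is the forecasting risk the abstract describes.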


Dynamical systems probabilistic risk assessment

Denman, Matthew R.; Ames, Arlo L.

Probabilistic Risk Assessment (PRA) is the primary tool used to risk-inform nuclear power regulatory and licensing activities. Risk-informed regulations are intended to reduce the inherent conservatism in regulatory metrics (e.g., allowable operating conditions and technical specifications) that are built into the regulatory framework, by quantifying both the total risk profile and the change in the risk profile caused by an event or action (e.g., in-service inspection procedures or power uprates). Dynamical Systems (DS) analysis has been used to understand unintended time-dependent feedbacks in both industrial and organizational settings. In DS analysis, feedback loops can be characterized and studied as a function of time to describe changes to the reliability of plant Structures, Systems, and Components (SSCs). While DS has been used in many subject areas, some even within the PRA community, it has not been applied toward creating long-time-horizon, dynamic PRAs (with time scales ranging between days and decades depending upon the analysis). Understanding the effects of slowly developing dynamics, such as wear-out, on SSC reliabilities may be instrumental in ensuring a safely and reliably operating nuclear fleet. Improving the estimation of a plant's continuously changing risk profile will allow for more meaningful risk insights, greater stakeholder confidence in those insights, and increased operational flexibility.
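
As a sketch of the slowly developing dynamics the abstract describes, the following Python fragment models component wear-out with a Weibull hazard rate, the kind of time-dependent reliability input a long-horizon dynamic PRA would track. The distribution choice and parameter values are illustrative assumptions, not drawn from the report.

```python
# Minimal sketch of a time-dependent reliability input for a long-horizon
# dynamic PRA: Weibull wear-out for a single component. Parameters are
# illustrative only.
import math

beta, eta = 2.5, 40.0   # shape beta > 1 models wear-out; scale eta in years

def hazard(t):
    # Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1),
    # increasing with t when beta > 1 (aging/wear-out).
    return (beta / eta) * (t / eta) ** (beta - 1)

def unreliability(t):
    # F(t) = 1 - exp(-(t/eta)**beta): probability of failure by time t.
    return 1.0 - math.exp(-((t / eta) ** beta))

for t in (1, 10, 20, 40, 60):    # days-to-decades horizon, in years
    print(f"t={t:>2} yr  h(t)={hazard(t):.5f}/yr  F(t)={unreliability(t):.3f}")
```

In a dynamic PRA, basic-event probabilities like F(t) would vary over the plant's life rather than being held constant, so the computed risk profile itself becomes a function of time.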


Complex Adaptive Systems of Systems (CASoS) engineering environment

Linebarger, John M.; Detry, Richard J.; Glass, Robert J.; Beyeler, Walter E.; Ames, Arlo L.; Finley, Patrick D.

Complex Adaptive Systems of Systems, or CASoS, are vastly complex physical-socio-technical systems which we must understand to design a secure future for the nation. The Phoenix initiative implements CASoS Engineering principles, combining the bottom-up Complex Systems and Complex Adaptive Systems view with the top-down Systems Engineering and System-of-Systems view. CASoS Engineering theory and practice must be conducted together to develop a discipline that is grounded in reality, extends our understanding of how CASoS behave, and allows us to better control the outcomes. The pull of applications (real-world problems) is critical to this effort, as is the articulation of a CASoS Engineering Framework that grounds an engineering approach in the theory of complex adaptive systems of systems. Successful application of the CASoS Engineering Framework requires modeling, simulation, and analysis (MS&A) capabilities and the cultivation of a CASoS Engineering Community of Practice through knowledge sharing and facilitation. The CASoS Engineering Environment, itself a complex adaptive system of systems, comprises the two platforms that provide these capabilities.


Complex Adaptive Systems of Systems (CASoS) engineering and foundations for global design

Beyeler, Walter E.; Ames, Arlo L.; Brown, Theresa J.; Brodsky, Nancy S.; Finley, Patrick D.; Linebarger, John M.

Complex Adaptive Systems of Systems, or CASoS, are vastly complex ecological, sociological, economic and/or technical systems which must be recognized and reckoned with to design a secure future for the nation and the world. Design within CASoS requires the fostering of a new discipline, CASoS Engineering, and the building of capability to support it. Toward this primary objective, we created the Phoenix Pilot as a crucible from which systemization of the new discipline could emerge. Using a wide range of applications, Phoenix has begun building both theoretical foundations and capability for: the integration of Applications to continuously build common understanding and capability; a Framework for defining problems, designing and testing solutions, and actualizing these solutions within the CASoS of interest; and an engineering Environment required for 'the doing' of CASoS Engineering. As a secondary objective, we applied CASoS Engineering principles to begin to build a foundation for design in the context of global CASoS.


Complex Adaptive System of Systems (CASoS) Engineering Applications. Version 1.0

Brown, Theresa J.; Glass, Robert J.; Beyeler, Walter E.; Ames, Arlo L.; Linebarger, John M.

Complex Adaptive Systems of Systems, or CASoS, are vastly complex eco-socio-economic-technical systems which we must understand to design a secure future for the nation and the world. Perturbations/disruptions in CASoS have the potential for far-reaching effects due to highly saturated interdependencies and allied vulnerabilities to cascades in associated systems. The Phoenix initiative approaches this high-impact problem space as engineers, devising interventions (problem solutions) that influence CASoS to achieve specific aspirations. CASoS embody the world's biggest problems and greatest opportunities: applications to real-world problems are the driving force of our effort. We are developing engineering theory and practice together to create a discipline that is grounded in reality, extends our understanding of how CASoS behave, and allows us to better control those behaviors. Through application to real-world problems, Phoenix is evolving CASoS Engineering principles while growing a community of practice and the CASoS engineers to populate it.


Phoenix: Complex Adaptive System of Systems (CASoS) engineering version 1.0

Glass, Robert J.; Ames, Arlo L.; Brown, Theresa J.; Linebarger, John M.; Beyeler, Walter E.

Complex Adaptive Systems of Systems, or CASoS, are vastly complex ecological, sociological, economic and/or technical systems which we must understand to design a secure future for the nation and the world. Perturbations/disruptions in CASoS have the potential for far-reaching effects due to pervasive interdependencies and attendant vulnerabilities to cascades in associated systems. Phoenix was initiated to address this high-impact problem space from an engineering perspective. Our overarching goals are maximizing security, maximizing health, and minimizing risk. We design interventions, or problem solutions, that influence CASoS to achieve specific aspirations. Through application to real-world problems, Phoenix is evolving the principles and discipline of CASoS Engineering while growing a community of practice and the CASoS engineers to populate it. Both grounded in reality and working to extend our understanding and control of that reality, Phoenix is at the same time a solution within a CASoS and a CASoS itself.
