Publications

Results 1–25 of 58

Defender Policy Evaluation and Resource Allocation With MITRE ATT&CK Evaluations Data

IEEE Transactions on Dependable and Secure Computing

Outkin, Alexander V.; Schulz, Patricia V.; Schulz, Timothy; Tarman, Thomas D.; Pinar, Ali P.

Protecting against multi-step attacks of uncertain start times and durations forces defenders into an indefinite, always-ongoing, resource-intensive response. To allocate resources effectively, the defender must analyze and respond to an uncertain stream of multiple, potentially undetected multi-step attacks and take measures of attack and response intensity over time into account. Such a response requires estimating overall attack success metrics and evaluating the effect that defender strategies and actions associated with specific attack steps have on those metrics. We present a novel game-theoretic approach, GPLADD, for estimating attack metrics and demonstrate it on attack data derived from MITRE's ATT&CK Framework and other sources. In GPLADD, the time to complete attack steps is explicit; the attack dynamics emerge from the attack graph and the attacker-defender capabilities and strategies, and therefore reflect the 'physics' of attacks. The time the attacker takes to complete an attack step is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This makes time a physical constraint on attack success parameters and enables comparing different defender resource allocation strategies across different attacks. We solve for attack success metrics by approximating attacker-defender games as discrete-time Markov chains and evaluate the return on detection investments associated with different attack steps. We apply GPLADD to MITRE's APT3 data from the ATT&CK Framework and show that there are substantial and unintuitive differences in estimated real-world vendor performance against a simplified APT3 attack. We focus on metrics that reflect attack difficulty versus the attacker's ability to remain hidden in the system after gaining control. This enables practical defender optimization and resource allocation against multi-step attacks.
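
The core computation lends itself to a short illustration. Below is a minimal sketch (not the authors' implementation) of a multi-step attack approximated as a discrete-time Markov chain: each state is the attacker's current step, a detection event resets progress to the start, and the final state is attack success. All probabilities and the horizon are hypothetical values chosen for illustration.

```python
import numpy as np

# Minimal sketch of a multi-step attack as a discrete-time Markov chain.
# States 0..n_steps-1 are attack steps; state n_steps is "attack
# succeeded" (absorbing). Each tick the defender detects with probability
# p_detect (resetting the attacker to step 0); otherwise the attacker
# completes the current step with probability p_step. All numbers are
# hypothetical, not taken from the paper.

def attack_success_prob(p_step, p_detect, n_steps, horizon):
    """Probability the attack completes all steps within `horizon` ticks."""
    n = n_steps + 1                               # +1 for the success state
    P = np.zeros((n, n))
    for s in range(n_steps):
        P[s, s + 1] = (1 - p_detect) * p_step     # advance one step
        P[s, 0] += p_detect                       # detected: reset to start
        P[s, s] += (1 - p_detect) * (1 - p_step)  # stall at current step
    P[n_steps, n_steps] = 1.0                     # success is absorbing
    dist = np.zeros(n)
    dist[0] = 1.0                                 # attack starts at step 0
    for _ in range(horizon):
        dist = dist @ P
    return dist[n_steps]

# Compare two hypothetical detection investments over a one-year horizon.
for p_detect in (0.01, 0.05):
    print(p_detect, attack_success_prob(0.3, p_detect, n_steps=4, horizon=365))
```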

Updated Economic Model for Estimation of GDP Losses in the MACCS Offsite Consequence Analysis Code RDEIM Model Report for MACCS v4.2

Outkin, Alexander V.; Bixler, Nathan E.; Osborn, Douglas M.; Andrews, Nathan A.; Walton, Fotini W.

This report updates the Regional Disruption Economic Impact Model (RDEIM) GDP-based model described in Bixler et al. (2020) and used in the MACCS accident consequence analysis code. MACCS is the U.S. Nuclear Regulatory Commission (NRC) tool used to perform probabilistic health and economic consequence assessments for atmospheric releases of radionuclides. It is also used by international organizations, both reactor owners and regulators. It is intended and most commonly used for hypothetical accidents that could potentially occur in the future rather than to evaluate past accidents or to provide emergency response during an ongoing accident. It is designed to support probabilistic risk and consequence analyses and is used by the NRC, U.S. nuclear licensees, the Department of Energy, and international vendors, licensees, and regulators. The update of the RDEIM model in version 4.2 expresses the national recovery calculation explicitly, rather than implicitly as in the previous version. The calculation of the total national GDP losses remains unchanged. However, anticipated gains from recovery are now allocated across all the GDP loss types (direct, indirect, and induced), whereas in version 4.1, all recovery gains were accounted for in the indirect loss type. To achieve this, we have introduced a new methodology that streamlines and simplifies the calculation of all types of losses and recovery. In addition, RDEIM includes other kinds of losses, including losses of tangible wealth: loss of tangible assets (e.g., depreciation) and accident expenditures (e.g., decontamination). This document describes the updated RDEIM economic model and provides examples of loss and recovery calculation, results analysis, and presentation. Changes to the tangible cost calculation and accident expenditures are described in Section 2.2. The updates to the RDEIM input-output (I-O) model are not expected to affect the final benchmark results of Bixler et al. (2020), as the RDEIM calculation for the total national GDP losses remains unchanged. The reader is referred to the MACCS revision history for other cost modeling changes since version 4.0 that may affect the benchmark. RDEIM has its roots in a code developed by Sandia National Laboratories for the Department of Homeland Security to estimate short-term losses from natural and manmade accidents, called the Regional Economic Accounting analysis tool (REAcct). This model was adapted and modified for MACCS. It is based on I-O theory, which is widely used in economic modeling. It accounts for direct losses to a disrupted region affected by an accident, indirect losses to the national economy due to disruption of the supply chain, and induced losses from reduced spending by displaced workers. RDEIM differs from REAcct in its treatment and estimation of indirect loss multipliers, its elimination of double-counting associated with inter-industry trade in the affected area, and in being intended for the extended disruption periods that can occur after a major nuclear reactor accident, such as the one that occurred at the Fukushima Daiichi site in Japan. Most input-output models do not account for economic adaptation and recovery, and in this regard RDEIM differs from its parent, REAcct, because it allows for a user-definable national recovery period. Implementation of a recovery period was one of several recommendations made by an independent peer review panel to ensure that RDEIM is state-of-practice. For this and several other reasons, RDEIM differs from REAcct.
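
The loss accounting above can be illustrated with a simplified sketch (not the RDEIM implementation): direct losses scale through hypothetical I-O multipliers into indirect and induced losses, and recovery gains are allocated across all three loss types via an assumed linear recovery ramp. All multipliers and loss values below are made up for illustration.

```python
# Simplified sketch of direct/indirect/induced GDP loss accounting with a
# user-definable national recovery period. The multipliers, annual direct
# loss, and linear recovery ramp are illustrative assumptions, not
# RDEIM's actual methodology or values.

def gdp_losses(direct_annual, indirect_mult, induced_mult,
               disruption_years, recovery_years):
    losses = {"direct": 0.0, "indirect": 0.0, "induced": 0.0}
    for year in range(disruption_years + recovery_years):
        if year < disruption_years:
            frac = 1.0                 # full disruption
        else:                          # linear ramp back to normal output
            frac = 1.0 - (year - disruption_years + 0.5) / recovery_years
        # Recovery gains reduce each loss type through the same fraction,
        # mirroring the v4.2 change of allocating recovery across all types.
        losses["direct"] += frac * direct_annual
        losses["indirect"] += frac * direct_annual * (indirect_mult - 1.0)
        losses["induced"] += frac * direct_annual * induced_mult
    return losses

result = gdp_losses(direct_annual=2.0e9, indirect_mult=1.4,
                    induced_mult=0.3, disruption_years=3, recovery_years=5)
print({k: f"${v / 1e9:.2f}B" for k, v in result.items()})
```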

Science & Engineering of Cyber Security by Uncertainty Quantification and Rigorous Experimentation (SECURE) HANDBOOK

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'Ifa S.

Abstract not provided.

Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) (Final Report)

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'Ifa

This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.

Defender Policy Evaluation and Resource Allocation against MITRE ATT&CK Data and Evaluations

Outkin, Alexander V.; Schulz, Patricia V.; Schulz, Timothy; Tarman, Thomas D.; Pinar, Ali P.

Protecting against multi-step attacks of uncertain duration and timing forces defenders into an indefinite, always ongoing, resource-intensive response. To effectively allocate resources, a defender must be able to analyze multi-step attacks under the assumption of constantly allocating resources against an uncertain stream of potentially undetected attacks. To achieve this goal, we present a novel methodology that applies a game-theoretic approach to the attack, attacker, and defender data derived from MITRE's ATT&CK® Framework. The time to complete attack steps is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This constrains attack success parameters and enables comparing different defender resource allocation strategies. By approximating attacker-defender games as Markov processes, we represent the attacker-defender interaction, estimate the attack success parameters, determine the effects of attacker and defender strategies, and maximize opportunities for defender strategy improvements against an uncertain stream of attacks. This novel representation and analysis of multi-step attacks enables defender policy optimization and resource allocation, which we illustrate using APT3 data from MITRE's ATT&CK® Framework.

Techno-Economic Analysis: Best Practices and Assessment Tools

Kobos, Peter H.; Drennen, Thomas E.; Outkin, Alexander V.; Webb, Erik K.; Paap, Scott M.; Wiryadinata, Steven W.

A team at Sandia National Laboratories (SNL) recognized the growing need to maintain and organize the internal community of Techno-Economic Assessment (TEA) analysts at the lab. To meet this need, an internal core team identified a working group of experienced, new, and future analysts to: 1) document TEA best practices; 2) identify existing resources at Sandia and elsewhere; and 3) identify gaps in our existing capabilities. Sandia has a long history of using techno-economic analyses to evaluate various technologies, including consideration of system resilience. Expanding our TEA capabilities will provide a rigorous basis for evaluating science, engineering, and technology-oriented projects, allowing Sandia programs to quantify the impact of targeted research and development (R&D) and improving Sandia's competitiveness for external funding options. Developing this working group reaffirms the successful use of TEA and related techniques when evaluating the impact of R&D investments, proposed work, and internal approaches to leverage deep technical and robust, business-oriented insights. The main findings of this effort demonstrate the high impact TEA has on forecasting future costs, technology adoption, and impact metrics, as illustrated by exemplar past applications across a broad technology space. Recommendations from this effort include maintaining and growing best-practice approaches when applying TEA; appreciating the tools (and their limits) from other national laboratories and the academic community; and recognizing that proposals and R&D investment decisions, both locally at Sandia and more broadly among funding agencies in the research community, increasingly require TEA approaches to justify and support well-thought-out project planning.

Sandia's Research in Support of COVID-19 Pandemic Response: Computing and Information Sciences

Bauer, Travis L.; Beyeler, Walter E.; Finley, Patrick D.; Jeffers, Robert F.; Laird, Carl D.; Makvandi, Monear M.; Outkin, Alexander V.; Safta, Cosmin S.; Simonson, Katherine M.

This report summarizes the goals and findings of eight research projects conducted under the Computing and Information Sciences (CIS) Research Foundation and related to the COVID-19 pandemic. The projects were all formulated in response to Sandia's call for proposals for rapid-response research with the potential to have a positive impact on the global health emergency. Six of the projects in the CIS portfolio focused on modeling various facets of disease spread, resource requirements, testing programs, and economic impact. The two remaining projects examined the use of web crawlers and text analytics to allow rapid identification of articles relevant to specific technical questions, and categorization of the reliability of content. The portfolio has collectively produced methods and findings that are being applied by a range of state, regional, and national entities to support enhanced understanding and prediction of the pandemic's spread and its impacts.

Economic Model for Estimation of GDP Losses in the MACCS Offsite Consequence Analysis Code

Bixler, Nathan E.; Outkin, Alexander V.; Osborn, Douglas M.; Andrews, Nathan A.; Walton, Fotini W.

The MACCS (MELCOR Accident Consequence Code System) code is the U.S. Nuclear Regulatory Commission (NRC) tool used to perform probabilistic health and economic consequence assessments for atmospheric releases of radionuclides. It is also used by international organizations, both reactor owners and regulators. It is intended and most commonly used for hypothetical accidents that could potentially occur in the future rather than to evaluate past accidents or to provide emergency response during an ongoing accident. It is designed to support probabilistic risk and consequence analyses and is used by the NRC, U.S. nuclear licensees, the Department of Energy, and international vendors, licensees, and regulators. This report describes the modeling framework, implementation, verification, and benchmarking of a GDP-based model for economic losses that has recently been developed as an alternative to the original cost-based economic loss model in MACCS. The GDP-based model has its roots in a code developed by Sandia National Laboratories for the Department of Homeland Security to estimate short-term losses from natural and manmade accidents, called the Regional Economic Accounting analysis tool (REAcct). This model was adapted and modified for MACCS and is now called the Regional Disruption Economic Impact Model (RDEIM). It is based on input-output theory, which is widely used in economic modeling. It accounts for direct losses to a disrupted region affected by an accident, indirect losses to the national economy due to disruption of the supply chain, and induced losses from reduced spending by displaced workers. RDEIM differs from REAcct in its treatment and estimation of indirect loss multipliers, elimination of double counting associated with inter-industry trade in the affected area, and that it is designed to be used to estimate impacts for extended periods that can occur from a major nuclear reactor accident, such as the one that occurred at the Fukushima Daiichi site in Japan. Most input-output models do not account for economic adaptation and recovery, and in this regard RDEIM differs from its parent, REAcct, because it allows for a user-definable national recovery period. Implementation of a recovery period was one of several recommendations made by an independent peer review panel to ensure that RDEIM is state-of-practice. For this and several other reasons, RDEIM differs from REAcct. Both the original and the RDEIM economic loss models account for costs from evacuation and relocation, decontamination, depreciation, and condemnation. Where the original model accounts for an expected rate of return, based on the value of property, that is lost during interdiction, the RDEIM model instead accounts for losses of GDP based on the industrial sectors located within a county. The original model includes costs for disposal of crops and milk that the RDEIM model currently does not, but these costs tend to contribute insignificantly to the overall losses. This document discusses three verification exercises to demonstrate that the RDEIM model is implemented correctly in MACCS. It also describes a benchmark study at five nuclear power plants chosen to represent the spectrum of U.S. commercial sites. The benchmarks provide perspective on the expected differences between the RDEIM and the original cost-based economic loss models. 
The RDEIM model is shown to consistently predict larger losses than the original model, probably in part because it accounts for national losses by including indirect and induced losses, whereas the original model only accounts for regional losses. Nonetheless, the RDEIM model predicts losses that are remarkably consistent with the original cost-based model, differing by at most 16% across the five sites and three source terms considered in this benchmark.

Attack detection and strategy optimization in game-theoretic trust models

Sahakian, Meghan A.; Vugrin, Eric D.; Outkin, Alexander V.; Wyss, Gregory D.; Eames, Brandon K.

Trust in a microelectronics-based system can be characterized as the level of confidence that the system is free of subversive alterations inserted by a malicious adversary during system development. Outkin et al. recently developed GPLADD, a game-theoretic framework that enables trust analysis through a set of mathematical models representing multi-step attack graphs and the contention between system attackers and defenders. This paper extends GPLADD to include detection of attacks on development processes and the defender decision processes that occur in response to detection events. The paper provides mathematical details for implementing attack detection and demonstrates the models on an example system. The authors further demonstrate how optimal defender strategies vary when solution concepts and objective functions are modified.
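
The flavor of the detection extension can be sketched as follows (illustrative only, not the paper's models): on each detection event the defender applies one of two hypothetical response policies, "reset" or "harden", and a Monte Carlo simulation compares the resulting attack success rates. All probabilities and policy names are assumptions for illustration.

```python
import random

# Illustrative sketch, not the paper's models: a multi-step development-
# time attack where each detection event triggers a defender response
# policy. Two hypothetical policies are compared: "reset" (undo the
# attacker's progress) and "harden" (tighten detection thereafter).

def simulate(policy, n_steps=5, p_step=0.4, p_detect=0.05,
             horizon=200, trials=20_000):
    successes = 0
    for _ in range(trials):
        step, p_d = 0, p_detect
        for _ in range(horizon):
            if random.random() < p_d:         # detection event
                if policy == "reset":
                    step = 0                  # undo attacker progress
                else:                         # "harden": raise detection odds
                    p_d = min(1.0, p_d * 1.5)
            elif random.random() < p_step:
                step += 1                     # attacker completes a step
                if step == n_steps:           # all steps done: success
                    successes += 1
                    break
    return successes / trials

for policy in ("reset", "harden"):
    print(policy, simulate(policy))
```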

GPLADD: Quantifying trust in government and commercial systems: A game-theoretic approach

ACM Transactions on Privacy and Security

Outkin, Alexander V.; Eames, Brandon K.; Sahakian, Meghan A.; Walsh, Sarah; Vugrin, Eric D.; Heersink, Byron; Hobbs, Jacob A.; Wyss, Gregory D.

Trust in a microelectronics-based system can be characterized as the level of confidence that a system is free of subversive alterations made during system development, or that the development process of a system has not been manipulated by a malicious adversary. Trust in systems has become an increasing concern over the past decade. This article presents a novel game-theoretic framework, called GPLADD (Graph-based Probabilistic Learning Attacker and Dynamic Defender), for analyzing and quantifying system trustworthiness at the end of the development process through analysis of the risk of development-time system manipulation. GPLADD represents attacks and attacker-defender contests over time. It treats time as an explicit constraint and allows incorporating the informational asymmetries between the attacker and defender into the analysis. GPLADD includes an explicit representation of attack steps via multi-step attack graphs, attacker and defender strategies, and player actions at different times. GPLADD allows quantifying the attack success probability over time and the attacker and defender costs based on their capabilities and strategies. This ability to quantify different attacks provides an input for evaluating trust in the development process. We demonstrate GPLADD on an example attack and its variants. We develop a method for representing success probability for arbitrary attacks and derive an explicit analytic characterization of success probability for a specific attack. We present a numerical Monte Carlo study of a small set of attacks, quantify attack success probabilities and attacker and defender costs, and illustrate the options the defender has for limiting attack success and improving trust in the development process.
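
The sort of analytic characterization mentioned above can be reproduced for a simplified variant (not the specific attack analyzed in the article): with a hypothetical per-tick detection probability q that ends the attack and a step-completion probability p, a geometric-race argument yields a closed form that a short Monte Carlo check confirms.

```python
import random

# Sketch of an analytic success-probability characterization for a
# simplified attack variant (not the article's attack). Each tick the
# defender detects with probability q (ending the attack); otherwise the
# attacker completes the current step with probability p. A single step
# finishes before detection with probability
#   r = p(1-q) / (1 - (1-p)(1-q)),
# and, by memorylessness, an n-step attack succeeds with probability r**n.

def success_closed_form(p, q, n):
    r = p * (1 - q) / (1 - (1 - p) * (1 - q))
    return r ** n

def success_monte_carlo(p, q, n, trials=100_000):
    wins = 0
    for _ in range(trials):
        step = 0
        while True:
            if random.random() < q:          # detected: attack fails
                break
            if random.random() < p:          # step completed
                step += 1
                if step == n:
                    wins += 1
                    break
    return wins / trials

p, q, n = 0.3, 0.05, 4                       # hypothetical parameters
print(success_closed_form(p, q, n), success_monte_carlo(p, q, n))
```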

Analysis of Microgrid Locations Benefitting Community Resilience for Puerto Rico

Jeffers, Robert F.; Staid, Andrea S.; Baca, Michael J.; Currie, Frank M.; Fogleman, William; DeRosa, Sean D.; Wachtel, Amanda; Outkin, Alexander V.

An analysis of microgrids to increase resilience was conducted for the island of Puerto Rico. Critical infrastructure throughout the island was mapped to the key services provided by those sectors to help inform primary and secondary service sources during a major disruption to the electrical grid. Additionally, a resilience metric of burden was developed to quantify community resilience, and a related baseline resilience figure was calculated for the area. To improve resilience, Sandia performed an analysis of where clusters of critical infrastructure are located and used these suggested resilience node locations to create a portfolio of 159 microgrid options throughout Puerto Rico. The team then calculated the impact of these microgrids on the region's ability to provide critical services during an outage and compared this impact to high-level estimates of cost for each microgrid, generating a set of efficient microgrid portfolios costing in the range of $218M to $917M. This analysis is a refinement of the analysis delivered on June 1, 2018.
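
The portfolio screening step can be sketched with a simple benefit-per-dollar ranking (illustrative only: the costs and burden-reduction values below are randomly generated stand-ins, not the study's actual candidates). Prefixes of the ranking trace out an approximate cost/benefit frontier; this greedy ranking is exact only if microgrid benefits are additive and independent.

```python
import random

# Illustrative portfolio screening over 159 hypothetical microgrid
# candidates: rank by resilience benefit (burden reduction) per dollar,
# then read portfolios off prefixes of the ranking. All values made up.
random.seed(1)
candidates = [{"cost": random.uniform(1, 15),        # $M, hypothetical
               "benefit": random.uniform(0.1, 2.0)}  # burden reduction
              for _ in range(159)]

ranked = sorted(candidates, key=lambda c: c["benefit"] / c["cost"],
                reverse=True)

cost = benefit = 0.0
frontier = []                 # cumulative (cost, benefit) frontier points
for c in ranked:
    cost += c["cost"]
    benefit += c["benefit"]
    frontier.append((round(cost, 1), round(benefit, 2)))

for point in frontier[::40]:  # print every 40th frontier point
    print(point)
```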

Teaching Game Theory to Kids and Limits of Prediction

Outkin, Alexander V.

I was once asked to give a lecture on game theory to a group of 6th graders. After agreeing to it, I realized that explaining game theory basics to 6th graders may be difficult, given that terms such as Nash equilibrium, minimax, maximin, and optimization may not resonate in a 6th grade classroom. Instead, I introduced game theory using the rock-paper-scissors (RPS) game. It turns out kids are excellent game theoreticians. In RPS, they understood both the benefits of randomizing their own strategy and of predicting their opponent's moves. They offered a number of heuristics, both for prediction and for the opening move. These heuristics included optimizing against past opponent moves, such as not playing rock if the opponent just played scissors, and playing a specific opening hand, such as "paper". Visualizing the effects of such strategic choices on the fly would be interesting and educational. This brief essay attempts to demonstrate and visualize the value of a few different strategic options in RPS. Specifically, we would like to illustrate the following: 1) what is the value of being unpredictable?; and 2) what is the value of being able to predict your opponent? In regard to predicting human players, question 2) is reflected in Jon McLoone's entry in the Wolfram Blog from January 20, 2014 [1]. McLoone created a predictive algorithm for playing against human opponents that learns to beat them reliably after approximately 30-40 games. I use McLoone's implementation to represent the predictive and random strategies. The rest of this document 1) investigates the performance of this predictive strategy against a random strategy (which is optimal in RPS) and 2) attempts to turn this predictive power against the predictive strategy itself by allowing the opponent full knowledge of the predictor's strategy (but not the choices made using that strategy). This exposes a weakness in predictions made without taking risks into account, by illustrating that a predictive strategy may make the predictor predictable as well.
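
The two questions can be sketched with a simple frequency-based predictor (a stand-in for McLoone's algorithm, which is not reproduced here): against a biased opponent the predictor wins on average, while against a uniformly random opponent its edge vanishes, illustrating both the value of prediction and the value of unpredictability. The opponent bias below is a hypothetical choice.

```python
import random
from collections import Counter

# Sketch: a frequency predictor (stand-in for McLoone's algorithm) plays
# the counter to its opponent's most common move so far.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {v: k for k, v in BEATS.items()}   # move -> what beats it

def predictor(counts):
    if not counts:
        return random.choice(list(BEATS))
    guess = counts.most_common(1)[0][0]      # predicted opponent move
    return COUNTER[guess]

def biased():                                # a human-like rock-lover
    return random.choices(["rock", "paper", "scissors"], [0.5, 0.25, 0.25])[0]

def uniform():                               # the optimal random strategy
    return random.choice(list(BEATS))

def play(opponent, rounds=10_000):
    counts, score = Counter(), 0
    for _ in range(rounds):
        mine, theirs = predictor(counts), opponent()
        counts[theirs] += 1
        score += (BEATS[mine] == theirs) - (BEATS[theirs] == mine)
    return score / rounds                    # average points per round

print("vs biased: ", play(biased))           # predictor wins on average
print("vs uniform:", play(uniform))          # edge vanishes, ~0
```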
