Protecting against multi-step attacks of uncertain start times and duration forces defenders into an indefinite, always ongoing, resource-intensive response. To allocate resources effectively, the defender must analyze and respond to an uncertain stream of potentially undetected multi-step attacks and take measures of attack and response intensity over time into account. Such a response requires estimating overall attack success metrics and evaluating how defender strategies and actions associated with specific attack steps affect those metrics. We present a novel game-theoretic approach, GPLADD, for estimating attack metrics and demonstrate it on attack data derived from MITRE's ATT&CK Framework and other sources. In GPLADD, the time to complete attack steps is explicit; the attack dynamics emerge from the attack graph and from attacker-defender capabilities and strategies, and therefore reflect the 'physics' of attacks. The time the attacker takes to complete an attack step is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This makes time a physical constraint on attack success parameters and enables comparing different defender resource allocation strategies across different attacks. We solve for attack success metrics by approximating attacker-defender games as discrete-time Markov chains and show how to evaluate the return on detection investments associated with different attack steps. We apply GPLADD to MITRE's APT3 data from the ATT&CK Framework and show that there are substantial and unintuitive differences in estimated real-world vendor performance against a simplified APT3 attack. We focus on metrics that reflect attack difficulty versus the attacker's ability to remain hidden in the system after gaining control. This enables practical defender optimization and resource allocation against multi-step attacks.
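As an illustration of the discrete-time Markov chain approximation mentioned above, the following minimal sketch (with hypothetical parameters, not the GPLADD implementation) computes the probability that a multi-step attack succeeds within a time horizon. In each time step the attacker advances with probability `p_adv`, while detection (probability `p_det`) resets the attacker's progress:

```python
def attack_success_prob(p_adv, p_det, n_steps=3, horizon=50):
    """Success probability of an n_steps attack within `horizon` time steps.

    States 0..n_steps-1 are the attacker's current step; state n_steps is
    success (absorbing). Detection resets progress to state 0.
    Assumes p_adv + p_det <= 1.
    """
    probs = [1.0] + [0.0] * n_steps  # start at step 0 with certainty
    for _ in range(horizon):
        nxt = [0.0] * (n_steps + 1)
        for s in range(n_steps):
            nxt[0] += probs[s] * p_det                    # detected: restart
            nxt[s] += probs[s] * (1.0 - p_adv - p_det)    # no progress
            nxt[s + 1] += probs[s] * p_adv                # step completed
        nxt[n_steps] += probs[n_steps]                    # success is absorbing
        probs = nxt
    return probs[n_steps]
```

Sweeping `p_det` for a fixed attack graph gives exactly the kind of curve used to compare detection investments across attack steps: a larger detection probability slows the attacker's accumulation of success probability over the horizon.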
This report updates the Regional Disruption Economic Impact Model (RDEIM) GDP-based model described in Bixler et al. (2020) and used in the MACCS accident consequence analysis code. MACCS is the U.S. Nuclear Regulatory Commission (NRC) tool used to perform probabilistic health and economic consequence assessments for atmospheric releases of radionuclides. It is also used by international organizations, both reactor owners and regulators. It is intended and most commonly used for hypothetical accidents that could potentially occur in the future rather than to evaluate past accidents or to provide emergency response during an ongoing accident. It is designed to support probabilistic risk and consequence analyses and is used by the NRC, U.S. nuclear licensees, the Department of Energy, and international vendors, licensees, and regulators. The update of the RDEIM model in version 4.2 expresses the national recovery calculation explicitly, rather than implicitly as in the previous version. The calculation of the total national GDP losses remains unchanged. However, anticipated gains from recovery are now allocated across all the GDP loss types – direct, indirect, and induced – whereas in version 4.1, all recovery gains were accounted for in the indirect loss type. To achieve this, we have introduced a new methodology to streamline and simplify the calculation of all types of losses and recovery. In addition, RDEIM includes other kinds of losses, including tangible wealth. This includes loss of tangible assets (e.g., depreciation) and accident expenditures (e.g., decontamination). This document describes the updated RDEIM economic model and provides examples of loss and recovery calculation, results analysis, and presentation. Changes to the tangible cost calculation and accident expenditures are described in section 2.2. The updates to the RDEIM input-output (I-O) model are not expected to affect the final benchmark results of Bixler et al. 
(2020), as the RDEIM calculation for the total national GDP losses remains unchanged. The reader is referred to the MACCS revision history for other cost modelling changes since version 4.0 that may affect the benchmark. RDEIM has its roots in a code developed by Sandia National Laboratories for the Department of Homeland Security to estimate short-term losses from natural and manmade accidents, called the Regional Economic Accounting analysis tool (REAcct). This model was adapted and modified for MACCS. It is based on I-O theory, which is widely used in economic modeling. It accounts for direct losses to a disrupted region affected by an accident, indirect losses to the national economy due to disruption of the supply chain, and induced losses from reduced spending by displaced workers. RDEIM differs from REAcct in its treatment and estimation of indirect loss multipliers, its elimination of double-counting associated with inter-industry trade in the affected area, and in that it is intended to be used for the extended periods that can occur from a major nuclear reactor accident, such as the one that occurred at the Fukushima Daiichi site in Japan. Most input-output models do not account for economic adaptation and recovery, and in this regard RDEIM differs from its parent, REAcct, because it allows for a user-definable national recovery period. Implementation of a recovery period was one of several recommendations made by an independent peer review panel to ensure that RDEIM is state-of-practice. For this and several other reasons, RDEIM differs from REAcct.
This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.
Protecting against multi-step attacks of uncertain duration and timing forces defenders into an indefinite, always ongoing, resource-intensive response. To effectively allocate resources, a defender must be able to analyze multi-step attacks under the assumption of constantly allocating resources against an uncertain stream of potentially undetected attacks. To achieve this goal, we present a novel methodology that applies a game-theoretic approach to the attack, attacker, and defender data derived from MITRE's ATT&CK® Framework. The time to complete attack steps is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This constrains attack success parameters and enables comparing different defender resource allocation strategies. By approximating attacker-defender games as Markov processes, we represent the attacker-defender interaction, estimate the attack success parameters, determine the effects of attacker and defender strategies, and maximize opportunities for defender strategy improvements against an uncertain stream of attacks. This novel representation and analysis of multi-step attacks enables defender policy optimization and resource allocation, which we illustrate using the APT3 data from MITRE's ATT&CK® Framework.
A team at Sandia National Laboratories (SNL) recognized the growing need to maintain and organize the lab's internal community of techno-economic assessment (TEA) analysts. To meet this need, an internal core team identified a working group of experienced, new, and future analysts to: 1) document TEA best practices; 2) identify existing resources at Sandia and elsewhere; and 3) identify gaps in our existing capabilities. Sandia has a long history of using techno-economic analyses to evaluate various technologies, including consideration of system resilience. Expanding our TEA capabilities will provide a rigorous basis for evaluating science, engineering, and technology-oriented projects, allowing Sandia programs to quantify the impact of targeted research and development (R&D) and improving Sandia's competitiveness for external funding options. Developing this working group reaffirms the successful use of TEA and related techniques when evaluating the impact of R&D investments, proposed work, and internal approaches that leverage deep technical and robust, business-oriented insights. The main findings of this effort demonstrate the high impact TEA has had on forecasting future cost, adoption, and impact metrics, as shown by key past applications across a broad technology space. Recommendations from this effort include maintaining and growing best practices when applying TEA, appreciating the tools (and their limits) from other national laboratories and the academic community, and recognizing that more proposals and R&D investment decisions, both locally at Sandia and more broadly among research-community funding agencies, require TEA approaches to justify and support well-thought-out project planning.
This report summarizes the goals and findings of eight research projects conducted under the Computing and Information Sciences (CIS) Research Foundation and related to the COVID-19 pandemic. The projects were all formulated in response to Sandia's call for proposals for rapid-response research with the potential to have a positive impact on the global health emergency. Six of the projects in the CIS portfolio focused on modeling various facets of disease spread, resource requirements, testing programs, and economic impact. The two remaining projects examined the use of web crawlers and text analytics to allow rapid identification of articles relevant to specific technical questions, and categorization of the reliability of content. The portfolio has collectively produced methods and findings that are being applied by a range of state, regional, and national entities to support enhanced understanding and prediction of the pandemic's spread and its impacts.
The MACCS (MELCOR Accident Consequence Code System) code is the U.S. Nuclear Regulatory Commission (NRC) tool used to perform probabilistic health and economic consequence assessments for atmospheric releases of radionuclides. It is also used by international organizations, both reactor owners and regulators. It is intended and most commonly used for hypothetical accidents that could potentially occur in the future rather than to evaluate past accidents or to provide emergency response during an ongoing accident. It is designed to support probabilistic risk and consequence analyses and is used by the NRC, U.S. nuclear licensees, the Department of Energy, and international vendors, licensees, and regulators. This report describes the modeling framework, implementation, verification, and benchmarking of a GDP-based model for economic losses that has recently been developed as an alternative to the original cost-based economic loss model in MACCS. The GDP-based model has its roots in a code developed by Sandia National Laboratories for the Department of Homeland Security to estimate short-term losses from natural and manmade accidents, called the Regional Economic Accounting analysis tool (REAcct). This model was adapted and modified for MACCS and is now called the Regional Disruption Economic Impact Model (RDEIM). It is based on input-output theory, which is widely used in economic modeling. It accounts for direct losses to a disrupted region affected by an accident, indirect losses to the national economy due to disruption of the supply chain, and induced losses from reduced spending by displaced workers. 
RDEIM differs from REAcct in its treatment and estimation of indirect loss multipliers, elimination of double counting associated with inter-industry trade in the affected area, and that it is designed to be used to estimate impacts for extended periods that can occur from a major nuclear reactor accident, such as the one that occurred at the Fukushima Daiichi site in Japan. Most input-output models do not account for economic adaptation and recovery, and in this regard RDEIM differs from its parent, REAcct, because it allows for a user-definable national recovery period. Implementation of a recovery period was one of several recommendations made by an independent peer review panel to ensure that RDEIM is state-of-practice. For this and several other reasons, RDEIM differs from REAcct. Both the original and the RDEIM economic loss models account for costs from evacuation and relocation, decontamination, depreciation, and condemnation. Where the original model accounts for an expected rate of return, based on the value of property, that is lost during interdiction, the RDEIM model instead accounts for losses of GDP based on the industrial sectors located within a county. The original model includes costs for disposal of crops and milk that the RDEIM model currently does not, but these costs tend to contribute insignificantly to the overall losses. This document discusses three verification exercises to demonstrate that the RDEIM model is implemented correctly in MACCS. It also describes a benchmark study at five nuclear power plants chosen to represent the spectrum of U.S. commercial sites. The benchmarks provide perspective on the expected differences between the RDEIM and the original cost-based economic loss models. The RDEIM model is shown to consistently predict larger losses than the original model, probably in part because it accounts for national losses by including indirect and induced losses; whereas, the original model only accounts for regional losses. 
Nonetheless, the RDEIM model predicts losses that are remarkably consistent with the original cost-based model, differing by at most 16% across the five sites combined with the three source terms considered in this benchmark.
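The direct/indirect loss propagation that input-output theory captures can be illustrated with a toy two-sector Leontief model (hypothetical coefficients; not the RDEIM implementation). A direct output loss in one sector ripples through the supply chain, and the indirect loss is the total minus the direct loss:

```python
def leontief_losses(A, direct):
    """Two-sector Leontief sketch.

    A[i][j] is the input from sector i needed per dollar of sector j's output;
    `direct` is the direct output loss by sector. Total losses are obtained
    from the Leontief inverse (I - A)^-1, computed by hand for the 2x2 case.
    """
    a, b = 1.0 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1.0 - A[1][1]
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # (I - A)^-1
    total = [sum(inv[i][j] * direct[j] for j in range(2)) for i in range(2)]
    indirect = [total[i] - direct[i] for i in range(2)]
    return total, indirect
```

For example, with coefficients `A = [[0.1, 0.2], [0.3, 0.1]]` and a $100 direct loss in sector 0, the model yields total losses of $120 and $40 in the two sectors: the extra $20 and $40 are the indirect, supply-chain losses that a purely regional direct-loss accounting would miss.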
Trust in microelectronics-based systems can be characterized as the level of confidence that the system is free of subversive alterations inserted by a malicious adversary during system development. Outkin et al. recently developed GPLADD, a game-theoretic framework that enables trust analysis through a set of mathematical models that represent multi-step attack graphs and contention between system attackers and defenders. This paper extends GPLADD to include detection of attacks on development processes and defender decision processes that occur in response to detection events. The paper provides mathematical details for implementing attack detection and demonstrates the models on an example system. The authors further demonstrate how optimal defender strategies vary when solution concepts and objective functions are modified.
Trust in a microelectronics-based system can be characterized as the level of confidence that a system is free of subversive alterations made during system development, or that the development process of a system has not been manipulated by a malicious adversary. Trust in systems has become an increasing concern over the past decade. This article presents a novel game-theoretic framework, called GPLADD (Graph-based Probabilistic Learning Attacker and Dynamic Defender), for analyzing and quantifying system trustworthiness at the end of the development process, through the analysis of risk of development-time system manipulation. GPLADD represents attacks and attacker-defender contests over time. It treats time as an explicit constraint and allows incorporating the informational asymmetries between the attacker and defender into analysis. GPLADD includes an explicit representation of attack steps via multi-step attack graphs, attacker and defender strategies, and player actions at different times. GPLADD allows quantifying the attack success probability over time and the attacker and defender costs based on their capabilities and strategies. This ability to quantify different attacks provides an input for evaluation of trust in the development process. We demonstrate GPLADD on an example attack and its variants. We develop a method for representing success probability for arbitrary attacks and derive an explicit analytic characterization of success probability for a specific attack. We present a numeric Monte Carlo study of a small set of attacks, quantify attack success probabilities, attacker and defender costs, and illustrate the options the defender has for limiting the attack success and improving trust in the development process.
An analysis of microgrids to increase resilience was conducted for the island of Puerto Rico. Critical infrastructure throughout the island was mapped to the key services provided by those sectors to help inform primary and secondary service sources during a major disruption to the electrical grid. Additionally, a resilience metric of burden was developed to quantify community resilience, and a related baseline resilience figure was calculated for the area. To improve resilience, Sandia performed an analysis of where clusters of critical infrastructure are located and used these suggested resilience node locations to create a portfolio of 159 microgrid options throughout Puerto Rico. The team then calculated the impact of these microgrids on the region's ability to provide critical services during an outage, and compared this impact to high-level estimates of cost for each microgrid to generate a set of efficient microgrid portfolios costing in the range of $218M to $917M. This analysis is a refinement of the analysis delivered on June 01, 2018.
I was once asked to give a lecture on game theory to a group of 6th graders. After agreeing, I realized that explaining game theory basics to 6th graders may be difficult, given that terms such as Nash equilibrium, minimax, maximin, and optimization may not resonate in a 6th grade classroom. Instead, I introduced game theory using the rock-paper-scissors (RPS) game. It turns out kids are excellent game theoreticians. In RPS, they understood both the benefits of randomizing their own strategy and of predicting their opponent's moves. They offered a number of heuristics for both prediction and the opening move. These heuristics included optimizing against past opponent moves, such as not playing rock if the opponent just played scissors, and playing a specific opening hand, such as "paper". Visualizing the effects of such strategic choices on the fly would be interesting and educational. This brief essay attempts to demonstrate and visualize the value of a few different strategic options in RPS. Specifically, we would like to illustrate the following: 1) what is the value of being unpredictable?; and 2) what is the value of being able to predict your opponent? With regard to predicting human players, question 2) was addressed in Jon McLoone's entry in the Wolfram Blog from January 20, 2014 [1]. McLoone created a predictive algorithm for playing against human opponents that learns to beat them reliably after approximately 30-40 games. I use McLoone's implementation to represent the predictive and random strategies. The rest of this document 1) investigates the performance of this predictive strategy against a random strategy (which is optimal in RPS) and 2) attempts to turn this predictive power against the predictive strategy itself by allowing the opponent full knowledge of the predictor's strategy (but not the choices made using that strategy). 
This exposes a weakness in predictions made without taking risks into account, by illustrating that a predictive strategy may make the predictor predictable as well.
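A minimal stand-in for the experiment described above (a simple frequency-counting predictor rather than McLoone's algorithm, with an assumed number of rounds) shows the two points concretely: prediction pays off against a biased opponent, but gains nothing against a uniformly random one:

```python
import random

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
COUNTER = {loser: winner for winner, loser in BEATS.items()}  # move that beats each move

def predictor(history):
    """Guess the opponent's most frequent past move and play its counter."""
    if not history:
        return random.choice(list(BEATS))
    guess = max(set(history), key=history.count)
    return COUNTER[guess]

def play(opponent, rounds=2000, seed=0):
    """Average payoff per round (+1 win, -1 loss) for the predictor."""
    random.seed(seed)
    history, score = [], 0
    for _ in range(rounds):
        mine, theirs = predictor(history), opponent()
        if BEATS[mine] == theirs:
            score += 1
        elif BEATS[theirs] == mine:
            score -= 1
        history.append(theirs)
    return score / rounds

biased = lambda: random.choices(list(BEATS), weights=[3, 1, 1])[0]  # favors rock
uniform = lambda: random.choice(list(BEATS))
```

Against the rock-loving player the predictor settles on paper and wins roughly 40% more rounds than it loses, while against the uniform player its average payoff hovers near zero, mirroring the optimality of the random strategy in RPS.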
The MELCOR Accident Consequence Code System (MACCS) code is the NRC code used to perform probabilistic health and economic consequence assessments for atmospheric releases of radionuclides. MACCS is used by U.S. nuclear power plant license renewal applicants to support the plant specific evaluation of severe accident mitigation alternatives (SAMA) analyses as part of an applicant's environmental report for license renewal. MACCS is also used in severe accident mitigation design alternatives (SAMDA) and severe accident consequence analyses for environmental impact statements (EISs) for new reactor applications. The NRC uses MACCS in its cost-benefit assessments supporting regulatory analyses that evaluate potential new regulatory requirements for nuclear power plants. NRC regulatory analysis guidelines recommend the use of MACCS to estimate the averted "offsite property damage" cost (benefit) and the averted offsite dose cost elements.
This project evaluates the effectiveness of moving target defense (MTD) techniques using a new game we have designed, called PLADD, inspired by the game FlipIt [28]. PLADD extends FlipIt by incorporating what we believe are key MTD concepts. We have analyzed PLADD and proven the existence of a defender strategy that pushes a rational attacker out of the game, demonstrated how limited the strategies available to an attacker are in PLADD, and derived analytic expressions for the expected utility of the game’s players in multiple game variants. We have created an algorithm for finding a defender’s optimal PLADD strategy. We show that in the special case of achieving deterrence in PLADD, MTD is not always cost effective and that its optimal deployment may shift abruptly from not using MTD at all to using it as aggressively as possible. We believe our effort provides basic, fundamental insights into the use of MTD, but conclude that a truly practical analysis requires model selection and calibration based on real scenarios and empirical data. We propose several avenues for further inquiry, including (1) agents with adaptive capabilities more reflective of real world adversaries, (2) the presence of multiple, heterogeneous adversaries, (3) computational game theory-based approaches such as coevolution to allow scaling to the real world beyond the limitations of analytical analysis and classical game theory, (4) mapping the game to real-world scenarios, (5) taking player risk into account when designing a strategy (in addition to expected payoff), (6) improving our understanding of the dynamic nature of MTD-inspired games by using a martingale representation, defensive forecasting, and techniques from signal processing, and (7) using adversarial games to develop inherently resilient cyber systems.
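For readers unfamiliar with FlipIt, the following toy simulation (assumed periodic strategies and parameters; not the PLADD model itself) illustrates the basic takeover-game payoff structure: each player's payoff is the fraction of time in control of the resource minus a per-move cost:

```python
def flipit_payoffs(defender_period, attacker_period, move_cost, horizon=1000.0):
    """FlipIt-style takeover game with two periodic players.

    The defender moves at t = k * defender_period; the attacker moves at
    t = k * attacker_period + attacker_period / 2 (offset to avoid ties).
    A move takes control of the resource; payoff = control fraction minus
    move_cost per move, normalized by the horizon.
    """
    events = sorted(
        [(k * defender_period, "D")
         for k in range(1, int(horizon / defender_period) + 1)] +
        [(k * attacker_period + attacker_period / 2, "A")
         for k in range(int(horizon / attacker_period))]
    )
    owner, last_t = "D", 0.0            # defender starts in control
    control, moves = {"D": 0.0, "A": 0.0}, {"D": 0, "A": 0}
    for t, player in events:
        control[owner] += t - last_t    # credit elapsed time to current owner
        owner, last_t = player, t       # the mover takes control
        moves[player] += 1
    control[owner] += horizon - last_t
    return {p: control[p] / horizon - move_cost * moves[p] / horizon
            for p in ("D", "A")}
```

With a defender reclaiming control every time unit and an attacker flipping every three, the attacker holds the resource only briefly after each flip; raising move costs or shortening the defender's period is what, in the PLADD analysis, can push a rational attacker out of the game entirely.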
The current expansion of natural gas (NG) development in the United States requires an understanding of how this change will affect the natural gas industry, downstream consumers, and economic growth in order to promote effective planning and policy development. The impact of this expansion may propagate through the NG system and US economy via changes in manufacturing, electric power generation, transportation, commerce, and increased exports of liquefied natural gas. We conceptualize this problem as supply shock propagation that pushes the NG system and the economy away from its current state of infrastructure development and level of natural gas use. To illustrate this, the project developed two core modeling approaches. The first is an Agent-Based Modeling (ABM) approach, which addresses shock propagation throughout the existing natural gas distribution system. The second approach uses a System Dynamics-based model to illustrate the feedback mechanisms related to finding new supplies of natural gas - notably shale gas - and how those mechanisms affect exploration investments in the natural gas market with respect to proven reserves. The ABM illustrates several stylized scenarios of large liquefied natural gas (LNG) exports from the U.S. Preliminary ABM results demonstrate that such a scenario is likely to have substantial effects on NG prices and on pipeline capacity utilization. Our preliminary results indicate that the price of natural gas in the U.S. may rise by about 50% when LNG exports represent 15% of the system-wide demand. The main findings of the System Dynamics model indicate that proven reserves for coalbed methane, conventional gas, and now shale gas can be adequately modeled based on a combination of geologic, economic, and technology-based variables. A base case scenario matches historical proven reserves data for these three types of natural gas. 
An environmental scenario, based on implementing a $50/tonne CO2 tax, results in fewer proven reserves being developed in the coming years, while demand may decrease in the absence of acceptable substitutes, incentives, or changes in consumer behavior. An increase in demand of 25% increases the proven reserves developed by only a very small amount by the end of the forecast period of 2025.
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
Adaptation is believed to be a source of resilience in systems. It has been difficult to measure the contribution of adaptation to resilience, unlike other resilience mechanisms such as restoration and recovery. One difficulty comes from treating adaptation as a deus ex machina that is interjected after a disruption. This provides no basis for bounding possible adaptive responses. We can bracket the possible effects of adaptation when we recognize that it occurs continuously and is in part responsible for the current system’s properties. In this way the dynamics of the system’s pre-disruption structure provides information about post-disruption adaptive reaction. Seen as an ongoing process, adaptation has been argued to produce “robust-yet-fragile” systems. Such systems perform well under historical stresses but become committed to specific features of those stresses in a way that makes them vulnerable to system-level collapse when those features change. In effect, adaptation lessens the cost of disruptions within a certain historical range, at the expense of increased cost from disruptions outside that range. Historical adaptive responses leave a signature in the structure of the system. Studies of ecological networks have suggested structural metrics that pick out systemic resilience in the underlying ecosystems. If these metrics are generally reliable indicators of resilience, they provide another strategy for gauging adaptive resilience. To progress in understanding how the process of adaptation and the property of resilience interrelate in infrastructure systems, we pose some specific questions: Does adaptation confer resilience? Does it confer resilience to novel shocks as well, or does it tune the system to fragility? Can structural features predict resilience to novel shocks? Are there policies or constraints on the adaptive process that improve resilience?