Probabilistic basis and assessment methodology for effectiveness of protecting nuclear materials
Abstract not provided.
With the increasing reliance on cyber technology to operate and control physical security system components, there is a need for methods to assess and model the interactions between the cyber system and the physical security system to understand the effects of cyber technology on overall security system effectiveness. This paper evaluates two methodologies for their applicability to the combined cyber and physical security problem. The comparison metrics include the probabilities of detection (P_D), interruption (P_I), and neutralization (P_N), which contribute to calculating the probability of system effectiveness (P_E), the probability that the system can thwart an adversary attack. P_E is well understood in practical applications of physical security, but when the cyber security component is added, system behavior becomes more complex and difficult to model. This paper examines two approaches, the Bounding Analysis Approach (BAA) and the Expected Value Approach (EVA), to determine their applicability to the combined physical and cyber security issue. These methods were assessed for a variety of security system characteristics to determine whether reasonable security decisions could be made based on their results. The assessments provided insight into an adversary's behavior depending on which part of the physical security system is cyber-controlled. Analysis showed that the BAA is better suited to facility analyses than the EVA because it can identify and model an adversary's most desirable attack path.
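To make the roles of these quantities concrete, the following minimal sketch computes P_E under the commonly used relation P_E = P_I x P_N, where interruption requires detecting the adversary while enough task time remains for the response force to arrive. The attack path, detection probabilities, delays, and response time below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming the common relation P_E = P_I * P_N, where
# interruption requires detection early enough that the remaining
# adversary task time exceeds the response time. All numbers are
# illustrative, not taken from the paper.

def probability_of_interruption(path, response_time):
    """P_I: probability the adversary is detected while enough task time
    remains for responders to arrive (detection points assumed independent)."""
    p_not_detected_so_far = 1.0
    p_interrupt = 0.0
    remaining_time = sum(seg["delay"] for seg in path)
    for seg in path:
        if remaining_time > response_time:
            # Detection at this segment still leaves time to respond.
            p_interrupt += p_not_detected_so_far * seg["p_detect"]
        p_not_detected_so_far *= 1.0 - seg["p_detect"]
        remaining_time -= seg["delay"]
    return p_interrupt

def probability_of_effectiveness(path, response_time, p_neutralize):
    return probability_of_interruption(path, response_time) * p_neutralize

# Illustrative three-segment attack path (delays in seconds).
path = [
    {"name": "fence", "p_detect": 0.5, "delay": 30},
    {"name": "door",  "p_detect": 0.9, "delay": 90},
    {"name": "vault", "p_detect": 0.2, "delay": 240},
]
print(probability_of_effectiveness(path, response_time=300, p_neutralize=0.85))
```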
Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies. Existing risk assessment methodologies consider physical security and cyber security separately. As such, they do not accurately model attacks that involve defeating both physical protection and cyber protection elements (e.g., hackers turning off alarm systems prior to forced entry). This paper presents a risk assessment methodology that accounts for both physical and cyber security. It also preserves the traditional security paradigm of detect, delay, and respond, while accounting for the possibility that a facility may be able to recover from or mitigate the results of a successful attack before serious consequences occur. The methodology provides a means for ranking the assets most at risk from malevolent attacks. Because the methodology is automated, the analyst can also play 'what if' with mitigation measures to gain a better understanding of how best to expend resources toward securing the facilities. It is simple enough to be applied to large infrastructure facilities without developing highly complicated models. Finally, it is applicable to facilities with extensive security as well as to those that are less well protected.
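As a rough illustration of how such a ranking might be produced, the sketch below scores assets with a form often used in security risk assessment, R = P_A x (1 - P_E) x C. The asset names, attack likelihoods, effectiveness values, and consequence scores are hypothetical and are not drawn from the methodology itself.

```python
# Hypothetical sketch of asset ranking by security risk, assuming the
# commonly used form R = P_A * (1 - P_E) * C (attack likelihood times
# probability the protection system fails times consequence). The asset
# names and numbers below are illustrative, not from the paper.

assets = [
    # (name, P_attack, P_effectiveness, consequence score)
    ("control center", 0.3, 0.90, 100),
    ("substation A",   0.5, 0.60,  40),
    ("pipeline valve", 0.2, 0.40,  70),
]

def risk(p_attack, p_effect, consequence):
    return p_attack * (1.0 - p_effect) * consequence

ranked = sorted(assets, key=lambda a: risk(a[1], a[2], a[3]), reverse=True)
for name, pa, pe, c in ranked:
    print(f"{name:15s} risk = {risk(pa, pe, c):6.2f}")
```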
This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
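For readers who simply want to see what a Latin hypercube sample looks like, the following minimal example uses SciPy's quasi-Monte Carlo module rather than the Sandia LHS code; the number of dimensions, sample size, and parameter ranges are illustrative assumptions.

```python
# Minimal illustration of Latin hypercube sampling (not the Sandia LHS
# code itself) using SciPy's quasi-Monte Carlo module. Two uncertain
# inputs are sampled so that each variable's range is stratified.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)   # 2 input dimensions
unit_sample = sampler.random(n=10)           # 10 samples on [0, 1)^2

# Scale the unit hypercube to physical parameter ranges (illustrative).
lower = [0.0, 100.0]
upper = [1.0, 500.0]
sample = qmc.scale(unit_sample, lower, upper)
print(sample)
```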
Proposed for publication in SIMULATION: Transactions of the Society for Computer Simulation International. Special issue on air traffic simulation.
This article describes how features of event tree analysis and Monte Carlo-based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology.
Event tree analysis and Monte Carlo-based discrete event simulation have been used in risk assessment studies for many years. This report details how features of these two methods can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology with some of the best features of each. The resultant Object-Based Event Scenario Tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible (especially those that exhibit inconsistent or variable event ordering, which are difficult to represent in an event tree analysis). Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST method uses a recursive algorithm to solve the object model and identify all possible scenarios and their associated probabilities. Since scenario likelihoods are developed directly by the solution algorithm, they need not be computed by statistical inference based on Monte Carlo observations (as required by some discrete event simulation methods). Thus, OBEST is not only much more computationally efficient than these simulation methods, but it also discovers scenarios that have extremely low probabilities as a natural analytical result--scenarios that would likely be missed by a Monte Carlo-based method. This report documents the OBEST methodology and the demonstration software that implements it, and provides example OBEST models for several different application domains, including interactions among failing interdependent infrastructure systems, circuit analysis for fire risk evaluation in nuclear power plants, and aviation safety studies.
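The recursive, probability-accumulating enumeration described above can be sketched in a few lines. The toy object model below (a sensor and an operator with probabilistic responses) is purely illustrative and is not the OBEST implementation.

```python
# Highly simplified sketch of recursive scenario enumeration with
# probabilistic branching (the general idea behind OBEST-style solvers,
# not the actual OBEST code). Each object exposes possible responses to
# the current state with associated probabilities; the solver walks every
# branch and accumulates the scenario likelihood along the way.

def enumerate_scenarios(state, objects, prob=1.0, history=()):
    if not objects:                      # no objects left to respond
        yield history, prob              # a complete scenario
        return
    obj, rest = objects[0], objects[1:]
    for outcome, p in obj["responses"](state):
        new_state = dict(state, **{obj["name"]: outcome})
        yield from enumerate_scenarios(new_state, rest, prob * p,
                                       history + ((obj["name"], outcome),))

# Illustrative two-object model: a sensor and an operator.
sensor = {"name": "sensor",
          "responses": lambda s: [("alarm", 0.9), ("no_alarm", 0.1)]}
operator = {"name": "operator",
            "responses": lambda s: [("responds", 0.95), ("ignores", 0.05)]
                                   if s.get("sensor") == "alarm"
                                   else [("unaware", 1.0)]}

for scenario, p in enumerate_scenarios({}, (sensor, operator)):
    print(f"{p:.3f}  {scenario}")
```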
Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
Abstract not provided.
Object-oriented analysis methods have been used in the computer science arena for a number of years to model the behavior of computer-based systems. This report documents how such methods can be applied to surety analysis. By embodying the causality and behavior of a system in a common object-oriented analysis model, surety analysts can make the assumptions that underlie their models explicit and thus better communicate with system designers. Furthermore, given minor extensions to traditional object-oriented analysis methods, it is possible to automatically derive a wide variety of traditional risk and reliability analysis models from a single common object model. Automatic model extraction helps ensure consistency among analyses and enables the surety analyst to examine a system from a wider variety of viewpoints in a shorter period of time. Thus it provides a deeper understanding of a system's behaviors and surety requirements. This report documents the underlying philosophy behind the common object model representation, the methods by which such common object models can be constructed, and the rules required to interrogate the common object model for derivation of traditional risk and reliability analysis models. The methodology is demonstrated in an extensive example problem.
This document is a reference guide for LHS, Sandia's Latin Hypercube Sampling Software. This software has been developed to generate either Latin hypercube or random multivariate samples. The Latin hypercube technique employs a constrained sampling scheme, whereas random sampling corresponds to a simple Monte Carlo technique. The present program replaces the previous Latin hypercube sampling program developed at Sandia National Laboratories (SAND83-2365). This manual covers the theory behind stratified sampling as well as use of the LHS code both with the Windows graphical user interface and in the stand-alone mode.
This report documents a new method for computing all-terminal reliability for networks that cannot be described in terms of a physical or logical hierarchy--so-called arbitrarily interconnected networks. The method uses an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram without the construction of a fault tree model. The efficiency of the search algorithm can be attributed in large part to the novel cut set quantification scheme developed for this project. This quantification scheme uses cut sets composed only of link failures to compute the reliability of a network in which arbitrary combinations of nodes and links can fail. The scheme further enables the computation of traditional risk importance measures for nodes and links from these same link-based cut sets. This novel quantification scheme leads to a dramatic reduction in the computational effort required to assess network reliability because the cut set search process (the most computationally intensive part of the assessment) can neglect the possibility of node failures when finding cut sets to describe all-terminal reliability. Computational savings can be several orders of magnitude over previous cut set-based network reliability assessment methods. The method is applicable to both planar and nonplanar networks.
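The quantification idea, computing all-terminal unreliability from link-only minimal cut sets, can be illustrated with a brute-force sketch. The efficient search algorithm in the report is not reproduced here, and the example network and link failure probability are assumptions made for the illustration.

```python
# Illustrative brute-force sketch of link-only cut sets for all-terminal
# reliability (the report's search algorithm is far more efficient; this
# only shows the quantification idea). A cut set is a set of links whose
# failure disconnects the network; unreliability is approximated by the
# rare-event sum of the minimal cut set probabilities.
from itertools import combinations

def connected(nodes, links):
    """Check whether all nodes are reachable over the given links."""
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(b if a == n else a for a, b in links if n in (a, b))
    return seen == set(nodes)

def minimal_cut_sets(nodes, links):
    cuts = []
    for size in range(1, len(links) + 1):
        for combo in combinations(links, size):
            remaining = [l for l in links if l not in combo]
            if not connected(nodes, remaining):
                if not any(set(c) <= set(combo) for c in cuts):  # keep minimal
                    cuts.append(combo)
    return cuts

# Illustrative 4-node ring with one chord; each link fails with prob 0.01.
nodes = {"A", "B", "C", "D"}
links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]
cuts = minimal_cut_sets(nodes, links)
print(len(cuts), "minimal cut sets")
print("approx. all-terminal unreliability:", sum(0.01 ** len(c) for c in cuts))
```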
The ARRAMIS risk and reliability analysis software suite developed by Sandia National Laboratories enables analysts to evaluate the safety and reliability of a wide range of complex systems whose failure results in high consequences. This software was originally designed to model the systems, responses, and phenomena associated with potential severe accidents at commercial nuclear power reactors by solving very large fault tree and event tree models. However, because of its power and versatility, ARRAMIS and its constituent analysis engines have recently been used to evaluate a wide variety of systems, including nuclear weapons, telecommunications facilities, robotic material handling systems, and aircraft systems, using hybrid fault tree/event tree analysis techniques that incorporate fully integrated uncertainty analysis capabilities. This paper describes recent applications in the area of nuclear reactor accident progression analysis using a large event tree methodology and the ARRAMIS package.
The Cassini spacecraft is a deep space probe whose mission is to explore the planet Saturn and its moons. Since the spacecraft's electrical requirements will be supplied by radioisotope thermoelectric generators (RTGs), the spacecraft designers and mission planners must assure that potential accidents involving the spacecraft do not pose significant human risk. The Cassini risk analysis team is seeking to perform a quantitative uncertainty analysis as a part of the overall mission risk assessment program. This paper describes the uncertainty analysis methodology to be used for the Cassini mission and compares it to the methods that were originally developed for evaluation of commercial nuclear power reactors.
Sandia National Laboratories has assembled an interdisciplinary team to explore the applicability of probabilistic logic modeling (PLM) techniques to model network reliability for a wide variety of communications network architectures. The authors have found that the reliability and failure modes of current generation network technologies can be effectively modeled using fault tree PLM techniques. They have developed a "plug-and-play" fault tree analysis methodology that can be used to model connectivity and the provision of network services in a wide variety of current generation network architectures. They have also developed an efficient search algorithm that can be used to determine the minimal cut sets of an arbitrarily-interconnected (non-hierarchical) network without the construction of a fault tree model. This paper provides an overview of these modeling techniques and describes how they are applied to networks that exhibit hybrid network structures (i.e., a network in which some areas are hierarchical and some areas are not hierarchical).
Vulnerability analyses for information systems are complicated because the systems are often geographically distributed. Sandia National Laboratories has assembled an interdisciplinary team to explore the applicability of probabilistic logic modeling (PLM) techniques (including vulnerability and vital area analysis) to examine the risks associated with networked information systems. The authors have found that the reliability and failure modes of many network technologies can be effectively assessed using fault trees and other PLM methods. The results of these models are compatible with an expanded set of vital area analysis techniques that can model both physical locations and virtual (logical) locations to identify both categories of vital areas simultaneously. These results can also be used with optimization techniques to direct the analyst toward the most cost-effective security solution.
This document is a reference guide for the Sandia Automated Boolean Logic Evaluation software (SABLE) version 2.0 developed at Sandia National Laboratories. SABLE 2.0 is designed to solve and quantify fault trees on IBM-compatible personal computers using the Microsoft Windows operating environment. SABLE 2.0 consists of a Windows user interface combined with a fault tree solution engine that is derived from the well-known SETS fault tree analysis code. This manual explains the fundamentals of solving fault trees and shows how to use the Windows SABLE 2.0 interface to specify a problem, solve the problem, and view the output.
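Although SABLE itself is not reproduced here, the basic fault tree quantification it performs can be sketched as follows: minimal cut sets are evaluated and combined with a rare-event approximation to estimate the top event probability. The basic events, probabilities, and cut sets below are hypothetical.

```python
# Tiny illustration of fault tree quantification from minimal cut sets
# (not SABLE or SETS themselves). The top event occurs if all basic
# events in any minimal cut set occur; with small probabilities, the
# rare-event approximation sums the cut set products.

basic_event_prob = {"pump_fails": 1e-3, "valve_fails": 5e-4,
                    "power_loss": 2e-4, "operator_error": 1e-2}

# Minimal cut sets for a hypothetical top event "no coolant flow".
minimal_cut_sets = [
    {"pump_fails", "operator_error"},
    {"valve_fails"},
    {"power_loss"},
]

def cut_set_probability(cut_set):
    p = 1.0
    for event in cut_set:
        p *= basic_event_prob[event]
    return p

top_event_prob = sum(cut_set_probability(cs) for cs in minimal_cut_sets)
print(f"Rare-event estimate of top event probability: {top_event_prob:.2e}")
```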
Traditional approaches to the assessment of information systems have treated system security, system reliability, data integrity, and application functionality as separate disciplines. However, each area's requirements and solutions have a profound impact on the successful implementation of the other areas. A better approach is to assess the "surety" of an information system, which is defined as ensuring the "correct" operation of an information system by incorporating appropriate levels of safety, functionality, confidentiality, availability, and integrity. Information surety examines the combined impact of design alternatives on all of these areas. We propose a modelling approach that combines aspects of fault trees and influence diagrams for assessing information surety requirements under a risk assessment framework. This approach allows tradeoffs to be based on quantitative importance measures such as risk reduction while maintaining the modelling flexibility of the influence diagram paradigm. This paper presents an overview of the modelling method and a sample application problem.
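As an illustration of the kind of quantitative importance measure mentioned above, the sketch below computes Risk Reduction Worth from a small hypothetical cut set model. The event names and probabilities are invented for the example and are not part of the proposed modelling approach.

```python
# Hypothetical sketch of a risk-reduction importance measure of the kind
# used to rank design alternatives; the event names and probabilities are
# illustrative, not from the paper. Risk Reduction Worth (RRW) is the
# baseline top-event probability divided by the probability obtained when
# a basic event is made impossible.
import math

probs = {"firewall_breach": 1e-2, "weak_password": 5e-2, "disk_failure": 1e-3}
cut_sets = [{"firewall_breach", "weak_password"}, {"disk_failure"}]

def top_probability(p):
    # Rare-event approximation over the minimal cut sets.
    return sum(math.prod(p[e] for e in cs) for cs in cut_sets)

baseline = top_probability(probs)
for event in probs:
    reduced = top_probability({**probs, event: 0.0})
    rrw = baseline / reduced if reduced > 0 else float("inf")
    print(f"{event:16s} RRW = {rrw:.2f}")
```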
Short communication.
A Level II/III probabilistic risk assessment (PRA) has been performed for N Reactor, a Department of Energy (DOE) production reactor located on the Hanford reservation in Washington. The accident progression analysis documented in this report determines how core damage accidents identified in the Level I PRA progress from fuel damage to confinement response and potential releases to the environment. The objectives of the study are to generate accident progression data for the Level II/III PRA source term model and to identify changes that could improve plant response under accident conditions. The scope of the analysis is comprehensive, excluding only sabotage and operator errors of commission. State-of-the-art methodology is employed, based largely on the methods developed by Sandia for the US Nuclear Regulatory Commission in support of the NUREG-1150 study. The accident progression model allows complex interactions and dependencies between systems to be explicitly considered. Latin Hypercube sampling was used to assess the phenomenological and systemic uncertainties associated with the primary and confinement system responses to the core damage accident. The results of the analysis show that the N Reactor confinement concept provides significant radiological protection for most of the accident progression pathways studied.