Surface acoustic wave (SAW) measurements were combined with direct, in-situ molecular spectroscopy to understand the interactions of surface-confined sensing films with gas-phase analytes. This was accomplished by collecting Fourier-transform infrared external-reflectance spectra (FTIR-ERS) on operating SAW devices during dosing of their specifically coated surfaces with key analytes.
The reduction of NpO{sub 2}{sup 2+} and PuO{sub 2}{sup 2+} by oxalate, citrate, and ethylenediaminetetraacetic acid (EDTA) was investigated in low ionic strength media and brines. This was done to help establish how the stability of the An(VI) oxidation state depends on the pH and the relative strength of the various oxidation state-specific complexes. At low ionic strength and pH 6, NpO{sub 2}{sup 2+} was rapidly reduced to form NpO{sub 2}{sup +} organic complexes. At longer times, Np(IV) organic complexes were observed in the presence of citrate. PuO{sub 2}{sup 2+} was predominantly reduced to Pu{sup 4+}, resulting in the formation of organic complexes or polymeric/hydrolytic precipitates. The relative rates of reduction to the An(V) complex were EDTA > citrate > oxalate. Subsequent reduction to An(IV) complexes, however, occurred in the order citrate > EDTA > oxalate because of the stability of the An(VI)-EDTA complex. The presence of organic complexants led to the rapid reduction of NpO{sub 2}{sup 2+} and PuO{sub 2}{sup 2+} in G-Seep brine at pH 5 and 7. At pH 8 and 10 in ERDA-6 brine, carbonate and hydrolytic complexes predominated and slowed or prevented the reduction of An(VI) by the organics present.
Pollution Prevention (P2) programs and projects within the DOE Environmental Restoration (ER) and Decontamination and Decommissioning (D and D) Programs have been independently developed and implemented at various sites. As a result, unique, innovative solutions used at one site may not be known to other sites, and other sites may continue to duplicate efforts to develop and implement similar solutions. Several DOE Program offices have funded the development of tools to assist ER/D and D P2 projects. To realize the full value of these tools, they need to be evaluated and publicized to field sites. To address these needs and concerns, Sandia National Laboratories (SNL/NM), Los Alamos National Laboratory (LANL), and the Oak Ridge Field Office (DOE-OR) have teamed to pilot test DOE training and tracking tools, transfer common P2 analyses between sites, and evaluate and expand P2 tools and methodologies. The project is supported by FY 98 DOE Pollution Prevention Complex-Wide Project Funds. This paper presents the preliminary results for each of the following project modules: Training, Waste Tracking Pilot, Information Exchange, Evaluate P2 Tools for ER/D and D, Field Test of P2 Tools, and DOE Information Exchange.
The Sandia Bicycle Commuters Group (SBCG) formed three years ago for the purpose of addressing issues that impact the bicycle commuting option. The meeting that launched the SBCG was scheduled in conjunction with National Bike-to-Work Day in May 1995. Results from a survey handed out at the meeting solidly confirmed the issues and the need for an advocacy group. The purpose statement for the Group headlines its web site and brochure: "Existing to assist and educate the SNL workforce bicyclist on issues regarding Kirtland Air Force Base (KAFB) access, safety and bicycle-supporting facilities, in order to promote bicycling as an effective and enjoyable means of commuting." The SNL Pollution Prevention (P2) Team's challenge to the SNL workforce is to "prevent pollution, conserve natural resources, and save money". In the first winter of its existence, the SBCG sponsored a winter commute contest in conjunction with the City's Clean Air Campaign (CAC). The intent of the CAC is to promote alternative (to the single-occupant vehicle) commuting during the Winter Pollution Advisory Period (October 1--February 28), when the City runs the greatest risk of exceeding federal pollution limits.
At Sandia National Laboratories, the authors are developing the ability to accurately predict motions for arbitrary numbers of bodies of arbitrary shapes experiencing multiple applied forces and intermittent contacts. In particular, they are concerned with the simulation of systems such as part feeders or mobile robots operating in realistic environments. Preliminary investigation of commercial dynamics software packages led the authors to conclude that a commercial code could provide everything they needed except the contact model, and they found that ADAMS best fit their needs for a simulation package. To simulate intermittent contacts, they need collision detection software that can efficiently compute the distances between non-convex objects and return the associated witness features. They also require a computationally efficient contact model for rapid simulation of impact, sustained contact under load, and transitions to and from contact conditions. This paper provides a technical review of a custom hierarchical distance computation engine developed at Sandia, called the C-Space Toolkit (CSTk). In addition, the authors describe an efficient contact model using a non-linear damping term developed at Ohio State. Both the CSTk and the non-linear damper have been incorporated in a simplified two-body testbed code, which is used to investigate how to correctly model the contact using these two utilities. They have incorporated this model into the ADAMS SOLVER using the callable function interface. An example that illustrates the capabilities of the 9.02 release of ADAMS with these extensions is provided.
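As an illustration of the kind of penetration-dependent, nonlinear-damping contact force referred to above, the following minimal Python sketch uses a Hunt-Crossley-style force law; the functional form is a common choice for such models, and the stiffness, damping coefficient, and exponent shown are purely illustrative, not the values or exact formulation used in the paper.

```python
# Illustrative sketch of a nonlinear-damping contact force of the
# Hunt-Crossley form F = k*d^n + lam*d^n*ddot, where d is the
# penetration depth and ddot its rate.  Parameter values are
# hypothetical; the Ohio State model used in the paper may differ.

def contact_force(d, ddot, k=1.0e5, lam=1.5e3, n=1.5):
    """Return the normal contact force for penetration d (m) and
    penetration rate ddot (m/s).  Force is zero out of contact and
    is clamped at zero so damping never produces adhesion."""
    if d <= 0.0:
        return 0.0
    f = k * d**n + lam * d**n * ddot
    return max(f, 0.0)

# Example: 1 mm penetration, surfaces closing at 0.2 m/s
print(contact_force(1.0e-3, 0.2))
```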
In the past three years, tremendous strides have been made in x-ray production using high-current z-pinches. Today, the x-ray energy and power output of the Z accelerator (formerly PBFA II) is the largest available in the laboratory. These z-pinch x-ray sources have great potential to drive high-yield inertial confinement fusion (ICF) reactions at affordable cost if several challenging technical problems can be overcome. Technical challenges in three key areas are discussed in this paper: (1) the design of a target for high yield, (2) the development of a suitable pulsed power driver, and (3) the design of a target chamber capable of containing the high fusion yield.
The Reproducing Kernel Particle Method (RKPM) has many attractive properties that make it ideal for treating a broad class of physical problems. RKPM may be implemented in a mesh-full or a mesh-free manner and provides the ability to tune the method, via the selection of a dilation parameter and window function, in order to achieve the requisite numerical performance. RKPM also provides a framework for performing hierarchical computations, making it an ideal candidate for simulating multi-scale problems. Although RKPM has many appealing attributes, the method is quite new and its numerical performance is still being quantified with respect to more traditional discretization methods. In order to assess the numerical performance of RKPM, detailed studies of RKPM on a series of model partial differential equations have been undertaken. The results of von Neumann analyses for RKPM semi-discretizations of one- and two-dimensional, first- and second-order wave equations are presented in the form of phase and group errors. Excellent dispersion characteristics are found for the consistent mass matrix with the proper choice of dilation parameter. In contrast, row-sum lumping the mass matrix is shown to introduce severe lagging phase errors. A higher-order mass matrix improves the dispersion characteristics relative to the lumped mass matrix but still delivers severe lagging phase errors relative to the fully integrated, consistent mass matrix.
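The following minimal sketch illustrates the two tuning knobs highlighted above, the window function and the dilation parameter, by constructing 1-D reproducing-kernel shape functions with linear consistency; the node layout, cubic B-spline window, and dilation value are illustrative assumptions rather than the discretizations analyzed in the paper.

```python
# Minimal sketch of 1-D RKPM shape functions with linear reproduction.
import numpy as np

def cubic_bspline(z):
    """Cubic B-spline window with support |z| <= 1."""
    z = abs(z)
    if z <= 0.5:
        return 2.0/3.0 - 4.0*z**2 + 4.0*z**3
    if z <= 1.0:
        return 4.0/3.0 - 4.0*z + 4.0*z**2 - (4.0/3.0)*z**3
    return 0.0

def rk_shape_functions(x, nodes, a):
    """Reproducing-kernel shape functions at point x for the given
    nodes and dilation parameter a (support radius)."""
    H = lambda s: np.array([1.0, s])                 # shifted linear basis
    w = np.array([cubic_bspline((x - xi) / a) for xi in nodes])
    M = sum(wi * np.outer(H(x - xi), H(x - xi))      # moment matrix
            for xi, wi in zip(nodes, w))
    Minv_H0 = np.linalg.solve(M, H(0.0))
    return np.array([wi * Minv_H0 @ H(x - xi)
                     for xi, wi in zip(nodes, w)])

nodes = np.linspace(0.0, 1.0, 11)
phi = rk_shape_functions(0.37, nodes, a=0.25)
print(phi.sum())     # partition of unity -> 1.0
print(phi @ nodes)   # linear reproduction -> 0.37
```

Increasing the dilation parameter a widens the support of each shape function, which is precisely the tuning lever whose effect on dispersion the von Neumann analyses quantify.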
In the past thirty-six months, great progress has been made in x-ray production using high-current z-pinches. Today, the x-ray energy and power output of the Z accelerator (formerly PBFA-II) is the largest available in the laboratory. These z-pinch x-ray sources have the potential to drive high-yield ICF reactions at affordable cost if several challenging technical problems can be overcome. In this paper, the recent technical progress with Z-pinches will be described, and a technical strategy for achieving high-yield ICF with z-pinches will be presented.
A parachute system was designed and prototypes built to deploy a telemetry package behind an earth-penetrating weapon just before impact. The parachute was designed to slow the 10-lb telemetry package, and the wire connecting it to the penetrator, to 50 fps before impact occurred. The parachute system was designed to utilize a 1.3-ft-dia cross pilot parachute and a 10.8-ft-dia main parachute. A computer code normally used to model the deployment of suspension lines from a packed parachute system was modified to model the deployment of wire from the weapon forebody. Results of the design calculations are presented. Two flight tests of the WBS were conducted, but initiation of parachute deployment did not occur in either test due to difficulties with other components. Thus, the trajectory calculations could not be verified with data. Draft drawings of the major components of the parachute system are presented.
A new visualization technique is reported, which dramatically improves interactivity for scientific visualizations by working directly with voxel data and by employing efficient algorithms and data structures. This discussion covers the research software, the file structures, examples of data creation, data search, and triangle rendering codes that allow geometric surfaces to be extracted from volumetric data. Uniquely, these methods enable greater interactivity by allowing an analyst to dynamically specify both the desired isosurface threshold and required level-of-detail to be used while rendering the image. The key idea behind this visualization paradigm is that various levels-of-detail are represented as differently sized hexahedral virtual voxels, which are stored in a three-dimensional kd-tree; thus the level-of-detail representation is done in voxel space instead of the traditional approach which relies on surface or geometry space decimations. This algorithm has been implemented as an integral component in the EIGEN/VR project at Sandia National Laboratories, which provides a rich environment for scientists to interactively explore and visualize the results of very large-scale simulations performed on massively parallel supercomputers.
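The following toy sketch (not the EIGEN/VR implementation) illustrates the stated idea that level-of-detail lives in voxel space: differently sized virtual voxels are held in a tree, and a render-time query selects only the cells, at the requested depth, whose scalar range brackets the chosen isosurface threshold. All class and function names are hypothetical.

```python
# Conceptual sketch of virtual voxels stored in a kd-tree, queried by
# isosurface threshold and level-of-detail at render time.
class KdVoxel:
    def __init__(self, lo, hi, vmin, vmax, depth, children=()):
        self.lo, self.hi = lo, hi          # spatial bounds (x, y, z)
        self.vmin, self.vmax = vmin, vmax  # scalar range inside the cell
        self.depth = depth                 # smaller depth = coarser LOD
        self.children = children           # 0 or 2 children (kd split)

def query(node, iso, max_depth, out):
    """Collect virtual voxels that may contain the isosurface,
    refining no deeper than max_depth (the requested LOD)."""
    if node.vmin > iso or node.vmax < iso:
        return                             # surface cannot cross this cell
    if node.depth == max_depth or not node.children:
        out.append(node)                   # emit this cell for triangulation
        return
    for child in node.children:
        query(child, iso, max_depth, out)

# Usage sketch: cells = []; query(root, iso=0.5, max_depth=6, out=cells)
# then extract triangles only within the returned virtual voxels.
```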
Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response is best simulated using Eulerian computational techniques, while structural behavior is best modeled using Lagrangian methods. Because of the different methodologies of the two computational techniques and their code architecture requirements, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes coupled explosive/structure interaction simulations difficult or next to impossible. Sandia National Laboratories has developed two techniques for solving this problem. The first is Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method comparable to Eulerian approaches that is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, which is an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems, but it is especially suited for modeling problems involving the interaction of decoupled explosives with structures.
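To make the gridless character of SPH concrete, here is a minimal 1-D density-summation sketch with a standard cubic spline kernel; this is only a conceptual illustration and not the PRONTO-3D/SPH implementation, and the particle layout and smoothing length are arbitrary choices.

```python
# Minimal 1-D SPH sketch: field values are built by kernel-weighted
# sums over neighbouring particles rather than on a grid.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline (M4) smoothing kernel."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q <= 1.0:
        return sigma * (1.0 - 1.5*q**2 + 0.75*q**3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, m, h):
    """Density at every particle: rho_i = sum_j m_j W(x_i - x_j, h)."""
    return np.array([sum(mj * cubic_spline_kernel(xi - xj, h)
                         for xj, mj in zip(x, m)) for xi in x])

x = np.linspace(0.0, 1.0, 21)       # particle positions
m = np.full_like(x, 1.0 / len(x))   # equal particle masses
print(sph_density(x, m, h=0.1))     # ~1.0 away from the free ends
```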
The US Department of Energy's (DOE) Accelerated Strategic Computing Initiative (ASCI) program calls for the development of high-end computing and advanced application simulations as one component of a program to eliminate reliance upon nuclear testing in the US nuclear weapons program. This paper presents results from the ASCI program's examination of needs for focused validation and verification (V and V). These V and V activities will ensure that 100 TeraOP-scale ASCI simulation code development projects apply the appropriate means to achieve high confidence in the use of simulations for stockpile assessment and certification. The authors begin with an examination of the roles for model development and validation in the traditional scientific method. The traditional view is that the scientific method has two foundations, experimental and theoretical. While the traditional scientific method does not acknowledge the role for computing and simulation, this examination establishes a foundation for the extension of the traditional processes to include verification and scientific software development, resulting in the notional framework known as Sargent's Framework. This framework elucidates the relationships between the processes of scientific model development, computational model verification, and simulation validation. This paper presents a discussion of the methodologies and practices that the ASCI program will use to establish confidence in large-scale scientific simulations. While the effort for a focused program in V and V is just getting started, the ASCI program has been underway for a couple of years. The authors discuss some V and V activities and preliminary results from the ALEGRA simulation code that is under development for ASCI. The breadth of physical phenomena and the advanced computational algorithms that are employed by ALEGRA make it a subject for V and V that should typify what is required for many ASCI simulations.
CPA -- Cost and Performance Analysis -- is a methodology that joins Activity Based Cost (ABC) estimation with performance-based analysis of physical protection systems. CPA offers system managers an approach that supports both tactical decision making and strategic planning. Current exploratory applications of the CPA methodology are addressing the analysis of alternative conceptual designs. To support these activities, the original architecture for CPA is being expanded to incorporate results from a suite of performance and consequence analysis tools such as JTS (Joint Tactical Simulation), ERAD (Explosive Release Atmospheric Dispersion), and blast effect models. The process flow for applying CPA to the development and analysis of conceptual designs is illustrated graphically.
The detection and removal of buried unexploded ordnance (UXO) and landmines is one of the most important problems facing the world today. Numerous detection strategies are being developed, including infrared, electrical conductivity, ground-penetrating radar, and chemical sensors. Chemical sensors rely on the detection of TNT molecules, which are transported from buried UXO/landmines by advection and diffusion in the soil. As part of the chemical sensing effort, numerical models are being developed to predict TNT transport in soils, including the effects of precipitation and evaporation. Modifications will be made to TOUGH2 for application to the TNT chemical sensing problem. Understanding the fate and transport of TNT in the soil will affect the design, performance, and operation of chemical sensors by indicating preferred sensing strategies.
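As a rough illustration of the advection-diffusion transport being modeled, the sketch below integrates a 1-D soil-column concentration equation with an explicit upwind/central finite-difference scheme; it is not TOUGH2, and the diffusion coefficient, velocity, grid, and boundary conditions are hypothetical placeholders.

```python
# Illustrative 1-D advection-diffusion of a normalized TNT concentration
# in a soil column, from a buried source (index 0) to the ground
# surface (last index).  All coefficients are assumed values.
import numpy as np

D  = 1.0e-6      # effective diffusion coefficient (m^2/s), assumed
v  = 1.0e-7      # upward advective velocity (m/s), assumed
dz = 0.01        # grid spacing (m)
dt = 20.0        # time step (s), within diffusion/CFL stability limits
nz, nsteps = 100, 5000

c = np.zeros(nz)
c[0] = 1.0       # normalized source concentration at the buried casing

for _ in range(nsteps):
    adv  = -v * (c[1:-1] - c[:-2]) / dz                 # upwind (v > 0)
    diff = D * (c[2:] - 2.0*c[1:-1] + c[:-2]) / dz**2   # central diffusion
    c[1:-1] += dt * (adv + diff)
    c[0], c[-1] = 1.0, 0.0   # fixed source, open ground surface

print(c[::10])   # concentration profile from the source toward the surface
```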
The US Department of Energy (DOE) is investigating Yucca Mountain, Nevada as a potential site for the disposal of high-level nuclear waste. The site is located near the southwest corner of the Nevada Test Site (NTS) in southern Nye County, Nevada. The underground Exploratory Studies Facility (ESF) tunnel traverses part of the proposed repository block. Alcove 5, located within the ESF, is being used to field two in situ ESF thermal tests: the Single Heater Test (SHT) and the Drift Scale Test (DST). Laboratory test specimens were collected from three sites within Alcove 5, including each in situ field test location and one additional site. The aim of the laboratory tests was to determine site-specific thermal and mechanical rock properties, including thermal expansion, thermal conductivity, unconfined compressive strength, and elastic moduli. In this paper, the results obtained for the SHT and DST area characterization are compared with data obtained from other locations at the proposed repository site. Results show that the thermal expansion and mechanical properties of Alcove 5 laboratory specimens differ slightly from the average values obtained on specimens from surface drillholes.
The International Thermonuclear Experimental Reactor (ITER) is envisioned to be the next major step in the world's fusion program from the present generation of tokamaks and is designed to study fusion plasmas with a reactor-relevant range of plasma parameters. During normal operation, it is expected that a fraction of the unburned tritium that is used to routinely fuel the discharge will be retained together with deuterium on the surfaces and in the bulk of the plasma facing materials (PFMs) surrounding the core and divertor plasma. An understanding of the basic retention mechanisms (physical and chemical) involved and their dependence upon plasma parameters and other relevant operating conditions is necessary for the accurate prediction of the amount of tritium retained at any given time in the ITER torus. Accurate estimates are essential to assess the radiological hazards associated with routine operation and with potential accident scenarios which may lead to mobilization of tritium that is not tenaciously held. Estimates are needed to establish the detritiation requirements for coolant water, to determine the plasma fueling and tritium supply requirements, and to establish the needed frequency and the procedures for tritium recovery and clean-up. The organization of this paper is as follows. Section 2 provides an overview of the design and operating conditions of the main components which define the plasma boundary of ITER. Section 3 reviews the erosion database and the results of recent relevant experiments conducted both in laboratory facilities and in tokamaks. These data provide the experimental basis and serve as an important benchmark for both model development (discussed in Section 4) and calculations (discussed in Section 5) that are required to predict tritium inventory build-up in ITER. Section 6 emphasizes the need to develop and test methods to remove the tritium from the codeposited C-based films and reviews the status and the prospects of the most attractive techniques. Section 7 identifies the unresolved issues and provides some recommendations on potential R and D avenues for their resolution. Finally, a summary is provided in Section 8.
This report describes the research accomplishments achieved under the LDRD Project "Double Electron Layer Tunneling Transistor." The main goal of this project was to investigate whether the recently discovered phenomenon of 2D-2D tunneling in GaAs/AlGaAs double quantum wells (DQWs), investigated in a previous LDRD, could be harnessed and implemented as the operating principle for a new type of tunneling device the authors proposed, the double electron layer tunneling transistor (DELTT). In parallel with this main thrust of the project, they also continued a modest basic research effort on DQW physics issues, with significant theoretical support. The project was a considerable success, with the main goal of demonstrating a working prototype of the DELTT having been achieved. Additional DELTT advances included demonstrating good electrical characteristics at 77 K, demonstrating both NMOS- and CMOS-like bi-stable memories at 77 K using the DELTT, demonstrating digital logic gates at 77 K, and demonstrating voltage-controlled oscillators at 77 K. In order to successfully fabricate the DELTT, the authors had to develop a novel flip-chip processing scheme, the epoxy-bond-and-stop-etch (EBASE) technique. This technique was later improved so as to be amenable to electron-beam lithography, allowing the fabrication of DELTTs with sub-micron features, which are expected to be extremely high speed. In the basic physics area they also made several advances, including a measurement of the effective mass of electrons in the hour-glass orbit of a DQW subject to in-plane magnetic fields, and both measurements and theoretical calculations of the full Landau level spectra of DQWs in both perpendicular and in-plane magnetic fields. This last result included the unambiguous demonstration of magnetic breakdown of the Fermi surface. Finally, they also investigated the concept of a far-infrared photodetector based on photon-assisted tunneling in a DQW. Absorption calculations showed a narrowband absorption which persisted to temperatures much higher than the photon energy being detected. Preliminary data on prototype detectors indicated that the absorption is not only narrowband, but can be tuned in energy through the application of a gate voltage.
The Yucca Mountain Project is currently evaluating the coupled thermal-mechanical-hydrological-chemical (TMHC) response of the potential repository host rock through an in situ thermal testing program. A drift scale test (DST) was constructed during 1997, and its heaters were turned on in December 1997. The DST includes nine canister-sized containers with operating heaters located within the heated drift (HD) and fifty wing heaters located in boreholes in both ribs, with a total power output of nominally 210 kW. A total of 147 boreholes (combined length of 3.3 km) house most of the more than 3700 TMHC sensors, which are connected with 201 km of cabling to a central data acquisition system. The DST is located in the Exploratory Studies Facility in a 5-m-diameter drift approximately 50 m in length. Heating will last up to four years and cooling will last another four years. The rock mass surrounding the DST will experience a harsh thermal environment, with rock surface temperatures expected to reach a maximum of about 200 C. This paper describes the process of designing the DST. The first 38 m of the 50-m-long Heated Drift (HD) is dedicated to the collection of data that will lead to a better understanding of the complex coupled TMHC processes in the host rock of the proposed repository. The final 12 m is dedicated to evaluating the interactions between the heated rock mass and cast-in-place (CIP) concrete ground support systems at elevated temperatures. In addition to a description of the DST design, data from site characterization and a general description of the analyses and analysis approach used to design the test and make pretest predictions are presented. Test-scoping and pretest numerical predictions of one-way thermal-hydrologic, thermal-mechanical, and thermal-chemical behaviors have been completed (TRW, 1997a). These analyses suggest that a dry-out zone will be created around the DST and that a 10,000 m{sup 3} volume of rock will experience temperatures above 100 C. The HD will experience large stress increases, particularly in the crown of the drift. Thermoelastic displacements of up to about 16 mm are predicted for some thermomechanical gages. Additional analyses using more complex models will be performed during the conduct of the DST, and the results will be compared with measured data.
Here, the authors report on the lubricating effects of self-assembled monolayers (SAMs) on MEMS by measuring static and dynamic friction with two polysilicon surface-micromachined devices. The first test structure is used to study friction between laterally sliding surfaces, and with the second, friction between vertical sidewalls can be investigated. Both devices are SAM-coated following the sacrificial oxide etch, and the microstructures emerge released and dry from the final water rinse. The coefficient of static friction, {mu}{sub s}, was found to decrease from 2.1 {+-} 0.8 for the SiO{sub 2} coating to 0.11 {+-} 0.01 and 0.10 {+-} 0.01 for films derived from octadecyltrichlorosilane (OTS) and 1H,1H,2H,2H-perfluorodecyltrichlorosilane (FDTS), respectively. Both OTS and FDTS SAM-coated structures exhibit dynamic coefficients of friction, {mu}{sub d}, of 0.08 {+-} 0.01. These values were found to be independent of the apparent contact area and remain unchanged after 1 million impacts at 5.6 {micro}N (17 kPa), indicating that these SAMs continue to act as boundary lubricants despite repeated impacts. Measurements during sliding friction from the sidewall friction test structure give comparable initial {mu}{sub d} values of 0.02 at a contact pressure of 84 MPa. After 15 million wear cycles, {mu}{sub d} was found to rise to 0.27. Wear of the contacting surfaces was examined by SEM. Standard deviations in the {mu} data for SAM treatments indicate uniform coating coverage.
Development of well-controlled hypervelocity launch capabilities is the first step toward understanding material behavior at extreme pressures and temperatures not available using conventional gun technology. In this paper, both the techniques used to extend the launch capabilities of a two-stage light-gas gun to 10 km/s and their use to determine material properties at pressure and temperature states higher than those previously obtained in the laboratory are summarized. Time-resolved interferometric techniques have been used to determine shock loading and release characteristics of materials impacted by titanium and aluminum fliers launched at 10 km/s by the only three-stage light-gas gun yet developed. In particular, the Sandia three-stage light-gas gun, also referred to as the hypervelocity launcher (HVL), which is capable of launching 0.5-mm- to 1.0-mm-thick, 6-mm- to 19-mm-diameter plates to velocities approaching 16 km/s, has been used to obtain the necessary impact velocities. The VISAR interferometric particle-velocity technique has been used to determine shock loading and release profiles in aluminum and titanium at impact velocities of 10 km/s.
Economic and political demands are driving computational investigation of systems and processes like never before. It is foreseen that questions of safety, optimality, risk, robustness, likelihood, credibility, etc. will increasingly be posed to computational modelers. This will require the development and routine use of computing infrastructure that incorporates computational physics models within the framework of larger meta-analyses involving aspects of optimization, nondeterministic analysis, and probabilistic risk assessment. This paper describes elements of an ongoing case study involving the computational solution of several meta-problems in optimization, nondeterministic analysis, and optimization under uncertainty pertaining to the surety of a generic weapon safing device. The goal of the analyses is to determine the worst-case heating configuration in a fire that most severely threatens the integrity of the device. A large, 3-D, nonlinear, finite element thermal model is used to determine the transient thermal response of the device in this coupled conduction/radiation problem. Implications of some of the numerical aspects of the thermal model on the selection of suitable and efficient optimization and nondeterministic analysis algorithms are discussed.
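The sketch below illustrates the meta-analysis pattern described here: an optimization algorithm repeatedly evaluates a thermal response function to locate the worst-case fire configuration. A simple analytic surrogate stands in for the large 3-D finite element model, and the design variables, their interpretation, and the use of a derivative-free Nelder-Mead search are illustrative assumptions, not the algorithms selected in the paper.

```python
# Sketch of an optimizer driving repeated evaluations of a (placeholder)
# thermal response to find the worst-case fire configuration.
import numpy as np
from scipy.optimize import minimize

def peak_component_temperature(x):
    """Stand-in for the transient FE thermal analysis: returns the peak
    temperature (C) of the critical component for a fire whose offset
    from the device is (x[0], x[1]) in metres.  Purely illustrative."""
    return 600.0 * np.exp(-(x[0]**2 + 0.5 * x[1]**2)) + 40.0

# Worst case = maximum temperature, so minimize the negative.  A
# derivative-free method is chosen here because numerical noise in an
# underlying simulation can make gradient-based methods unreliable.
result = minimize(lambda x: -peak_component_temperature(x),
                  x0=np.array([0.8, -0.5]), method="Nelder-Mead")
print("worst-case fire offset:", result.x)
print("predicted peak temperature:", -result.fun)
```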
The Russia-US joint program on the safe management of nuclear materials was initiated to address common technical issues confronting the US and Russia in the management of excess weapons-grade nuclear materials. The program was initiated after the 1993 Tomsk-7 accident. This paper provides an update on program activities since 1996. The Fourth US-Russia Nuclear Materials Safety Management Workshop was conducted in March 1997. In addition, a number of contracts with Russian institutes have been placed by Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL). These contracts support research related to the safe disposition of excess plutonium (Pu) and highly enriched uranium (HEU). Topics investigated by Russian scientists under contracts with SNL and LLNL include accident consequence studies, the safety of anion exchange processes, underground isolation of nuclear materials, and the development of materials for the immobilization of excess weapons Pu.
This report provides an introduction to the various probabilistic methods developed roughly between 1956 and 1985 for performing reliability or probabilistic uncertainty analysis on complex systems. This exposition does not include the traditional reliability methods (e.g., parallel-series systems) that might be found in the many reliability texts and reference materials. Rather, the report centers on the relatively new, and certainly less well known across the engineering community, analytical techniques. Discussion of the analytical methods has been broken into two reports. This particular report is limited to those methods developed between 1956 and 1985. While a bit dated, the methods described in the later portions of this report still dominate the literature and provide a necessary technical foundation for more current research. A second report (Analytical Techniques 2) addresses methods developed since 1985. The flow of this report roughly follows the historical development of the various methods, so each new technique builds on the discussion of the strengths and weaknesses of previous techniques. To facilitate the understanding of the various methods discussed, a simple 2-dimensional problem is used throughout the report. The problem is used for discussion purposes only; conclusions regarding the applicability and efficiency of particular methods are based on secondary analyses and a number of years of experience by the author. This document should be considered a living document in the sense that, as new methods or variations of existing methods are developed, the document and references will be updated to reflect the current state of the literature as much as possible. For those scientists and engineers already familiar with these methods, the discussion will at times seem rather obvious. However, the goal of this effort is to provide a common basis for future discussions and, as such, it will hopefully be useful to those more intimate with probabilistic analysis and design techniques. There are clearly alternative methods of dealing with uncertainty (e.g., fuzzy set theory, possibility theory), but this discussion will be limited to those methods based on probability theory.
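For orientation only, the example below works a simple two-dimensional reliability problem of the general kind used in the report, comparing a crude Monte Carlo estimate of the failure probability with the analytical (reliability-index) result that is exact for a linear limit state with normal variables; the limit-state function and distributions are invented for illustration and are not the report's example problem.

```python
# Illustrative 2-D reliability example: failure is g(X1, X2) = X1 - X2 < 0
# with independent normal capacity X1 ~ N(5, 1) and demand X2 ~ N(3, 1).
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 200_000
x1 = rng.normal(5.0, 1.0, n)
x2 = rng.normal(3.0, 1.0, n)
pf_mc = np.mean(x1 - x2 < 0.0)                        # sampling estimate

beta = (5.0 - 3.0) / sqrt(1.0**2 + 1.0**2)            # reliability index
pf_analytic = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))    # Phi(-beta)

print(f"Monte Carlo Pf ~ {pf_mc:.4f}")
print(f"Analytical  Pf = {pf_analytic:.4f}")          # ~0.0786
```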
When designing a high consequence system, considerable care should be taken to ensure that the system cannot easily be placed into a high consequence failure state. A formal system design process should include a model that explicitly shows the complete state space of the system (including failure states), as well as the events (e.g., abnormal environmental conditions, component failures) that can cause the system to enter a failure state. In this paper the authors present such a model and formally develop a notion of risk-based refinement with respect to the model.
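A toy sketch of this kind of explicit state-space model is given below; the states, events, and transitions are hypothetical, and the paper's formal refinement relation is not reproduced, but the example shows how event sequences that drive the system into a failure state can be enumerated directly from such a model.

```python
# Hypothetical explicit state-space model: states (including a
# high-consequence failure state) and events that move between them.
from itertools import product

transitions = {
    ("nominal",  "component_fault"):      "degraded",
    ("nominal",  "abnormal_environment"): "safe_shutdown",
    ("degraded", "abnormal_environment"): "failure",
    ("degraded", "operator_reset"):       "nominal",
}
events = {"component_fault", "abnormal_environment", "operator_reset"}

def sequences_reaching_failure(start, max_len=3):
    """Enumerate event sequences (up to max_len) that drive the system
    from the start state into the high-consequence failure state."""
    bad = []
    for seq in product(sorted(events), repeat=max_len):
        state = start
        for event in seq:
            # Events with no defined transition leave the state unchanged.
            state = transitions.get((state, event), state)
        if state == "failure":
            bad.append(seq)
    return bad

print(sequences_reaching_failure("nominal"))
```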
This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
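The following minimal sketch illustrates the core computation described above: attack stages become graph nodes, attack steps become weighted arcs, and mapping each step's success probability p to a weight of -log(p) turns the search for the most likely attack path into a shortest-path problem. The network, steps, and probabilities are hypothetical, and this is not the authors' tool.

```python
# Toy attack graph: stages as nodes, attack steps as arcs weighted by
# -log(success probability), so Dijkstra's shortest path is the path
# with the highest overall probability of success.
import heapq, math

arcs = {   # (from_stage, to_stage): probability of success for the step
    ("outside", "dmz_user"):       0.6,
    ("outside", "vpn_user"):       0.2,
    ("dmz_user", "dmz_root"):      0.5,
    ("vpn_user", "internal_user"): 0.7,
    ("dmz_root", "internal_user"): 0.4,
    ("internal_user", "db_admin"): 0.3,
}

graph = {}
for (u, v), p in arcs.items():
    graph.setdefault(u, []).append((v, -math.log(p)))

def most_likely_path(src, dst):
    """Dijkstra on -log(p) weights: minimum total weight corresponds to
    the maximum product of step success probabilities."""
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return math.exp(-cost), path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return 0.0, None

print(most_likely_path("outside", "db_admin"))
```

Replacing probabilities with level-of-effort costs and running the same shortest-path query yields the least-effort attack path instead, mirroring the two weighting schemes mentioned above.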