Protecting against multi-step attacks of uncertain start time and duration forces defenders into an indefinite, always ongoing, resource-intensive response. To allocate resources effectively, the defender must analyze and respond to an uncertain stream of potentially undetected multi-step attacks while accounting for attack and response intensity over time. Such a response requires estimating overall attack success metrics and evaluating how defender strategies and actions associated with specific attack steps affect those metrics. We present GPLADD, a novel game-theoretic approach to attack metric estimation, and demonstrate it on attack data derived from MITRE's ATT&CK Framework and other sources. In GPLADD, the time to complete attack steps is explicit; the attack dynamics emerge from the attack graph and from attacker and defender capabilities and strategies, and therefore reflect the 'physics' of attacks. The time the attacker takes to complete an attack step is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This makes time a physical constraint on attack success parameters and enables comparison of different defender resource allocation strategies across different attacks. We solve for attack success metrics by approximating attacker-defender games as discrete-time Markov chains and show how to evaluate the return on detection investments associated with different attack steps. We apply GPLADD to MITRE's APT3 data from the ATT&CK Framework and show substantial and unintuitive differences in estimated real-world vendor performance against a simplified APT3 attack. We focus on metrics that reflect attack difficulty versus the attacker's ability to remain hidden in the system after gaining control. This enables practical defender optimization and resource allocation against multi-step attacks.
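As an illustration of the discrete-time Markov chain approximation described above, the following sketch computes attack success and detection probabilities, and the expected time to an outcome, for a toy three-step attack with two absorbing outcomes. The step names, transition probabilities, and structure are hypothetical placeholders, not values from the GPLADD study.

```python
import numpy as np

# Hypothetical attack-step states; the last two are absorbing outcomes.
states = ["recon", "exploit", "persist", "success", "detected"]

# Per-time-step transition probabilities (illustrative only): each row gives
# the chance of staying in place, advancing to the next step, or being detected.
P = np.array([
    #  recon exploit persist success detected
    [0.60, 0.30, 0.00, 0.00, 0.10],   # recon
    [0.00, 0.55, 0.30, 0.00, 0.15],   # exploit
    [0.00, 0.00, 0.50, 0.30, 0.20],   # persist
    [0.00, 0.00, 0.00, 1.00, 0.00],   # success (absorbing)
    [0.00, 0.00, 0.00, 0.00, 1.00],   # detected (absorbing)
])

Q = P[:3, :3]                       # transitions among transient attack-in-progress states
R = P[:3, 3:]                       # transitions from transient states into absorbing states
N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix: expected visits to each transient state
B = N @ R                           # absorption probabilities for each starting state

print("P(attack succeeds | start at recon):", B[0, 0])
print("P(attack detected | start at recon):", B[0, 1])
print("Expected time steps until an outcome:", (N @ np.ones(3))[0])
```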
Network intrusion detection systems (NIDS) are commonly used to detect malware communications, including command-and-control (C2) traffic from botnets. NIDS performance assessments have been studied for decades, but mathematical modeling has rarely been used to explore NIDS performance. This paper details a mathematical model that describes a NIDS performing packet inspection and its detection of malware C2 traffic. The paper further describes an emulation testbed and a set of cyber experiments that used the testbed to validate the model. These experiments included a commonly used NIDS (Snort) and traffic with contents from a pervasive malware family (Emotet). Results are presented for two scenarios: a nominal scenario and a “stressed” scenario in which the NIDS cannot process all incoming packets. Model and experiment results match well, with model estimates mostly falling within 95% confidence intervals on the experiment means. Model results were produced 70 to 3,000 times faster than the experimental results. Consequently, the model's predictive capability could be used to support decisions about NIDS configuration and effectiveness that require high-confidence results, quantification of uncertainty, and exploration of large parameter spaces. Furthermore, the experiments provide an example of how emulation testbeds can be used to validate cyber models that include stochastic variability.
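As a back-of-the-envelope illustration of the kind of quantity such a model produces, the sketch below computes the probability that at least one of a malware sample's C2 packets is inspected and matched, for a nominal and a stressed (packet-dropping) scenario. The parameter values and the simple independence assumption are placeholders, not the paper's actual model.

```python
def c2_detection_probability(n_c2_packets: int,
                             p_inspected: float,
                             p_match: float) -> float:
    """Probability that at least one C2 packet is both inspected and matched.

    p_inspected: fraction of packets the NIDS actually inspects (falls below 1
                 in a 'stressed' scenario where packets are dropped).
    p_match:     probability an inspected C2 packet triggers the signature.
    """
    p_per_packet = p_inspected * p_match
    return 1.0 - (1.0 - p_per_packet) ** n_c2_packets

# Nominal vs. stressed scenarios (illustrative numbers only).
print("nominal :", c2_detection_probability(20, p_inspected=1.00, p_match=0.9))
print("stressed:", c2_detection_probability(20, p_inspected=0.35, p_match=0.9))
```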
Virtual machine emulation environments provide ideal testbeds for cybersecurity evaluations because they run real software binaries in a scalable, offline test setting that is suitable for assessing the impacts of software security flaws on the system. Verification of such emulations determines whether the environment is working as intended. Verification can focus on various aspects such as timing realism, traffic realism, and resource realism. In this paper, we study resource realism and issues associated with virtual machine resource utilization. We examine telemetry metrics gathered from a series of structured experiments which involve large numbers of parallel emulations meant to oversubscribe resources at some point. We present an approach to use telemetry metrics for emulation verification, and we demonstrate this approach on two cyber scenarios. Descriptions of the experimental configurations are provided along with a detailed discussion of statistical tests used to compare telemetry metrics. Results demonstrate the potential for a structured experimental framework, combined with statistical analysis of telemetry metrics, to support emulation verification. We conclude with comments on generalizability and potential future work.
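One plausible way to compare telemetry metrics across runs, in the spirit of the statistical testing described above, is a two-sample distribution test; the sketch below applies a Kolmogorov-Smirnov test to hypothetical per-VM CPU-utilization samples from a baseline run and an oversubscribed run. The metric, data, and significance threshold are illustrative assumptions, not the tests or values from the experiments.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-VM CPU-utilization samples from two emulation runs:
# a lightly loaded baseline and an oversubscribed run.
baseline = rng.normal(loc=0.35, scale=0.05, size=200)
oversubscribed = rng.normal(loc=0.55, scale=0.12, size=200)

# Two-sample Kolmogorov-Smirnov test: do the telemetry distributions differ?
statistic, p_value = stats.ks_2samp(baseline, oversubscribed)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3g}")
if p_value < 0.05:
    print("Telemetry differs significantly: resource contention may be distorting the emulation.")
```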
This report presents the results of the “Foundations of Rigorous Cyber Experimentation” (FORCE) Laboratory Directed Research and Development (LDRD) project. This project is a companion project to the “Science and Engineering of Cyber security through Uncertainty quantification and Rigorous Experimentation” (SECURE) Grand Challenge LDRD project. This project leverages the offline, controlled nature of cyber experimentation technologies in general, and emulation testbeds in particular, to assess how uncertainties in network conditions affect uncertainties in key metrics. We conduct extensive experimentation using a Firewheel emulation-based cyber testbed model of Invisible Internet Project (I2P) networks to understand a de-anonymization attack previously presented in the literature. Our goals in this analysis are to see if we can leverage emulation testbeds to produce reliably repeatable experimental networks at scale, identify significant parameters influencing experimental results, replicate the previous results, quantify uncertainty associated with the predictions, and apply multi-fidelity techniques to forecast results to real-world network scales. The I2P networks we study are up to three orders of magnitude larger than the networks studied in SECURE and presented additional challenges in identifying significant parameters. The key contributions of this project are the application of SECURE techniques such as uncertainty quantification (UQ) to a scenario of interest and the scaling of those techniques to larger network sizes. This report describes the experimental methods and results of these studies in more detail. In addition, the process of constructing these large-scale experiments tested the limits of the Firewheel emulation-based technologies. Therefore, another contribution of this work is that it informed the Firewheel developers of scaling limitations, which were subsequently corrected.
This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.
Cyber testbeds provide an important mechanism for experimentally evaluating cyber security performance. However, as an experimental discipline, reproducible cyber experimentation is essential to assure valid, unbiased results. Even minor differences in setup, configuration, and testbed components can have an impact on the experiments and, thus, on the reproducibility of results. This paper documents a case study in reproducing an earlier emulation study, with the reproduced emulation experiment conducted by a different research group on a different testbed. We describe lessons learned as a result of this process, both in terms of the reproducibility of the original study and in terms of the different testbed technologies used by both groups. This paper also addresses the question of how to compare results between two groups' experiments, identifying candidate metrics for comparison and quantifying the results in this reproduction study.
Protecting against multi-step attacks of uncertain duration and timing forces defenders into an indefinite, always ongoing, resource-intensive response. To effectively allocate resources, a defender must be able to analyze multi-step attacks under the assumption of constantly allocating resources against an uncertain stream of potentially undetected attacks. To achieve this goal, we present a novel methodology that applies a game-theoretic approach to the attack, attacker, and defender data derived from MITRE's ATT&CK® Framework. Time to complete attack steps is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This constrains attack success parameters and enables comparing different defender resource allocation strategies. By approximating attacker-defender games as Markov processes, we represent the attacker-defender interaction, estimate the attack success parameters, determine the effects of attacker and defender strategies, and maximize opportunities for defender strategy improvements against an uncertain stream of attacks. This novel representation and analysis of multi-step attacks enables defender policy optimization and resource allocation, which we illustrate using the APT3 data from MITRE's ATT&CK® Framework.
Port scanning is a commonly applied technique in the discovery phase of cyber attacks. As such, defending against port scans has long been the subject of many research and modeling efforts. Though modeling efforts can search large parameter spaces to find effective defensive parameter settings, confidence in modeling results can be hampered by limited or omitted validation efforts. In this paper, we introduce a novel mathematical model that describes port scanning progress by an attacker and intrusion detection by a defender. The paper further describes a set of emulation experiments that we conducted with a virtual testbed and used to validate the model. Results are presented for two scanning strategies: a slow, stealthy approach and a fast, loud approach. Estimates from the model fall within 95% confidence intervals on the means estimated from the experiments. Consequently, the model's predictive capability provides confidence in its use for evaluation and development of defensive strategies against port scanning.
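The sketch below illustrates, under simple placeholder assumptions, the scan-speed trade-off such a model captures: a threshold-based detector catches a fast, loud scan almost immediately, while a slow, stealthy scan trades a long scan duration for a lower per-window detection chance. The detection rule and all numbers are hypothetical and are not the model from the paper.

```python
def detection_probability(scan_rate: float,
                          window_seconds: float,
                          alert_threshold: int,
                          ports_to_scan: int) -> float:
    """Crude sketch: the IDS alerts if more than `alert_threshold` probes from
    one source land in a sliding window. With a constant scan rate, a loud
    scan fills a window past the threshold and is caught essentially at once."""
    probes_per_window = scan_rate * window_seconds
    if probes_per_window > alert_threshold:
        return 1.0
    # Slow, stealthy scan: assume a small per-window chance of anomaly flagging.
    scan_duration = ports_to_scan / scan_rate
    n_windows = scan_duration / window_seconds
    p_flag_per_window = 0.01
    return 1.0 - (1.0 - p_flag_per_window) ** n_windows

print("stealthy:", detection_probability(scan_rate=0.1, window_seconds=60,
                                          alert_threshold=10, ports_to_scan=1024))
print("loud:    ", detection_probability(scan_rate=50.0, window_seconds=60,
                                          alert_threshold=10, ports_to_scan=1024))
```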
Securing cyber systems is of paramount importance, but rigorous, evidence-based techniques to support decision makers for high-consequence decisions have been missing. The need for bringing rigor into cybersecurity is well recognized, but little progress has been made over the last decades. We introduce a new project, SECURE, that aims to bring more rigor into cyber experimentation. The core idea is to follow in the footsteps of computational science and engineering and extend similar capabilities to support rigorous cyber experimentation. In this paper, we review the cyber experimentation process, present the research areas that underlie our effort, discuss the underlying research challenges, and report on our progress to date. This paper is based on work in progress, and we expect to have more complete results for the conference.
This report contains a response from Sandia National Laboratories for the 2019 update to the 2016 Federal Cybersecurity Research and Development Strategic Plan.
Sandia National Laboratories hosted a workshop on August 11, 2017 entitled "Research Directions for Cyber Experimentation," which focused on identifying and addressing research gaps within the field of cyber experimentation, particularly emulation testbeds. This report mainly documents the discussion toward the end of the workshop, which included research gaps such as developing a sustainable research infrastructure, expanding cyber experimentation, and making the field more accessible to subject matter experts who may not have a background in computer science. Other gaps include methodologies for rigorous experimentation, validation, and uncertainty quantification, which, if addressed, also have the potential to bridge the gap between cyber experimentation and cyber engineering. Workshop attendees presented various ways to overcome these research gaps; however, the main conclusion for overcoming them is better communication through increased workshops, conferences, email lists, and Slack channels, among other opportunities.
Qubits have been demonstrated using GaAs double quantum dots (DQDs). The qubit basis states are the (1) singlet and (2) triplet stationary states. Long spin decoherence times in silicon spur translation of the GaAs qubit into silicon. In the near term the goals are: (1) Develop surface-gate enhancement-mode double quantum dots (MOS & strained-Si/SiGe) to demonstrate few-electron operation and spin read-out, and to examine impurity-doped quantum dots as an alternative architecture; (2) Use mobility, C-V, ESR, quantum dot performance, and modeling to provide feedback and improve processing, including development of atomic precision fabrication at SNL; (3) Examine integrated electronics approaches to the RF-SET; (4) Use combinations of numerical packages for multi-scale simulation of quantum dot systems (NEMO3D, EMT, TCAD, SPICE); and (5) Continue micro-architecture evaluation for different device and transport architectures.
This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers, and other network equipment, computer emulations (e.g., virtual machines), and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches into integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and behave, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.
With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines two classes of all-optical logic (SEED, gain competition) and how each discrete logic element can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of the SEED and gain competition devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in the context of a logic element as opposed to device-level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. Each black box model of the SEED or gain competition device takes certain parameters (reflectance, intensity, input response) and models the optical ripple and time delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. We found that a low gate count, cascadable encryption algorithm is most feasible given device and processing constraints. The modeling and simulation of optical designs using these components is proceeding in parallel with efforts to perfect the physical devices and their interconnect. We have applied these techniques to the development of a 'toy' algorithm that may pave the way for more robust optical algorithms. These design/modeling/simulation techniques are now ready to be applied to larger optical designs in advance of our ability to implement such systems in hardware.
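The 'black box' modeling idea, treating light intensity as a voltage-like signal with a threshold response and a propagation delay, can be sketched in a few lines of code. The gate behavior, thresholds, and delays below are placeholders for illustration, not measured SEED or gain-competition device parameters, and Python stands in for the PSpice models described above.

```python
from dataclasses import dataclass

@dataclass
class OpticalGate:
    """Black-box model of an all-optical logic element: intensity in, intensity out."""
    threshold: float   # input intensity at which the gate switches
    high_out: float    # output intensity for a logic '1'
    low_out: float     # output intensity for a logic '0'
    delay_ps: float    # propagation delay, picoseconds

    def respond(self, intensity_a: float, intensity_b: float) -> tuple[float, float]:
        """NAND-like response on two input beams; returns (output intensity, delay)."""
        both_high = intensity_a >= self.threshold and intensity_b >= self.threshold
        out = self.low_out if both_high else self.high_out
        return out, self.delay_ps

# Cascade two gates, accumulating delay, as the circuit-level models do.
g1 = OpticalGate(threshold=0.5, high_out=1.0, low_out=0.05, delay_ps=20.0)
g2 = OpticalGate(threshold=0.5, high_out=1.0, low_out=0.05, delay_ps=20.0)
stage1, d1 = g1.respond(1.0, 1.0)
stage2, d2 = g2.respond(stage1, 1.0)
print(f"output intensity = {stage2}, total delay = {d1 + d2} ps")
```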
This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, ''Principles of Faithful Execution in the Implementation of Trusted Objects'' (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. The extended class loader and JVM we refer to collectively as the Sandia Faithfully Executing Java architecture (or JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques which we intend to implement in hardware.
We begin with the following definitions: Definition: A trusted volume is the computing machinery (including communication lines) within which data is assumed to be physically protected from an adversary. A trusted volume provides both integrity and privacy. Definition: Program integrity consists of the protection necessary to enable the detection of changes in the bits comprising a program as specified by the developer, for the entire time that the program is outside a trusted volume. For ease of discussion we consider program integrity to be the aggregation of two elements: instruction integrity (detection of changes in the bits within an instruction or block of instructions), and sequence integrity (detection of changes in the locations of instructions within a program). Definition: Faithful Execution (FE) is a type of software protection that begins when the software leaves the control of the developer and ends within the trusted volume of a target processor. That is, FE provides program integrity, even while the program is in execution. (As we will show below, FE schemes are a function of trusted volume size.) FE is a necessary quality for computing. Without it we cannot trust computations. In the early days of computing FE came for free since the software never left a trusted volume. At that time the execution environment was the same as the development environment. In some circles that environment was referred to as a "closed shop": all of the software that was used there was developed there. When an organization bought a large computer from a vendor the organization would run its own operating system on that computer, use only its own editors, only its own compilers, only its own debuggers, and so on. However, with the continuing maturity of computing technology, FE becomes increasingly difficult to achieve.
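A minimal sketch of the two elements of program integrity defined above, instruction integrity and sequence integrity, is to tag each instruction block with a keyed MAC that also covers the block's position, and to verify the tag at fetch time. The key handling, block granularity, and use of HMAC here are placeholder assumptions for illustration, not the scheme from the companion reports.

```python
import hmac, hashlib

KEY = b"example-key-held-inside-the-trusted-volume"   # placeholder key

def protect(program_blocks: list[bytes]) -> list[tuple[bytes, bytes]]:
    """Attach a tag to each block covering its contents AND its position,
    giving both instruction integrity and sequence integrity."""
    protected = []
    for index, block in enumerate(program_blocks):
        tag = hmac.new(KEY, index.to_bytes(4, "big") + block, hashlib.sha256).digest()
        protected.append((block, tag))
    return protected

def faithful_fetch(protected, index: int) -> bytes:
    """Verify a block just before execution; refuse to run tampered or reordered code."""
    block, tag = protected[index]
    expected = hmac.new(KEY, index.to_bytes(4, "big") + block, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise RuntimeError("integrity check failed: block modified or moved")
    return block

prog = protect([b"\x01\x02", b"\x03\x04"])
print(faithful_fetch(prog, 0))
```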
As rapid Internet growth continues, global communications become more dependent on Internet availability for information transfer. Recently, the Internet Engineering Task Force (IETF) introduced a new protocol, Multiprotocol Label Switching (MPLS), to provide high-performance data flows within the Internet. MPLS emulates two major aspects of the Asynchronous Transfer Mode (ATM) technology. First, each initial IP packet is 'routed' to its destination based on previously known delay and congestion avoidance mechanisms. This allows for effective distribution of network resources and reduces the probability of congestion. Second, after route selection, each subsequent packet is assigned a label at each hop that determines the output port for the packet to reach its final destination. These labels guide the forwarding of each packet at routing nodes more efficiently and with more control than traditional IP forwarding (based on complete address information in each packet) for high-performance data flows. Label assignment is critical to the prompt and accurate delivery of user data. However, the protocols for label distribution were not adequately secured. Thus, if an adversary compromises a node by intercepting and modifying, or more simply injecting, false labels into the packet-forwarding engine, the propagation of improperly labeled data flows could create instability in the entire network. In addition, some Virtual Private Network (VPN) solutions take advantage of this 'virtual channel' configuration to eliminate the need for user data encryption to provide privacy. VPNs relying on MPLS require accurate label assignment to maintain user data protection. This research developed a working distributive trust model that demonstrated how to deploy confidentiality, authentication, and non-repudiation in the global network label switching control plane. Simulation models and laboratory testbed implementations that demonstrated this concept were developed, and results from this research were transferred to industry via standards in the Optical Internetworking Forum (OIF).
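A label-swapping forwarding step of the kind described above can be sketched as a simple table lookup, which also makes the injection threat concrete: rewriting one table entry silently redirects the flow. The labels, ports, and table below are purely illustrative and do not represent the project's trust-model protocol.

```python
# Per-node label forwarding table: incoming label -> (outgoing label, output port).
# Values are purely illustrative.
lfib = {
    17: (42, "port-1"),
    23: (99, "port-3"),
}

def forward(packet_label: int) -> tuple[int, str]:
    """Swap the label and pick the output port, as a label-switching node does per packet."""
    if packet_label not in lfib:
        raise ValueError("unknown label: packet dropped")
    return lfib[packet_label]

print(forward(17))   # (42, 'port-1')

# An adversary who injects a bogus binding silently rewires the data flow,
# which is why label distribution needs authentication.
lfib[17] = (66, "attacker-facing-port")
print(forward(17))   # traffic now misdirected
```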
The recent unprecedented growth of global network (Internet) usage has created an ever-increasing amount of congestion. Telecommunication companies (telcos) and Internet Service Providers (ISPs), which provide access and distribution through the network, are increasingly aware of the need to manage this growth. Congestion, if left unmanaged, will result in a degradation of the overall network. These access and distribution networks currently lack formal mechanisms to select Quality of Service (QoS) attributes for data transport. Network services with a requirement for expediency or consistent amounts of bandwidth cannot function properly in a communication environment without the implementation of a QoS structure. This report describes and implements such a structure, which results in the ability to identify, prioritize, and police critical application flows.
The next major performance plateau for high-speed, long-haul networks is at 10 Gbps. Data visualization, high-performance network storage, and Massively Parallel Processing (MPP) demand these (and higher) communication rates. MPP-to-MPP distributed processing applications and MPP-to-Network File Store applications already require single-conversation communication rates in the range of 10 to 100 Gbps. MPP-to-Visualization Station applications can already utilize communication rates in the 1 to 10 Gbps range. This LDRD project examined some of the building blocks necessary for developing a 10 to 100 Gbps computer network architecture. These included technology areas such as OS bypass, Dense Wavelength Division Multiplexing (DWDM), IP switching and routing, optical amplifiers, inverse multiplexing of ATM, data encryption, and data compression; standards bodies' activities in the ATM Forum and the Optical Internetworking Forum (OIF); and proof-of-principle laboratory prototypes. This work has not only advanced the body of knowledge in the aforementioned areas, but has generally facilitated the rapid maturation of high-speed networking and communication technology by: (1) participating in the development of pertinent standards, and (2) promoting informal (and formal) collaboration with industrial developers of high-speed communication equipment.
This document highlights the DISCOM² (Distance computing and communication) team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore, and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's Accelerated Strategic Computing Initiative (ASCI) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.
A number of substantive modifications were made from Version 1.0 to Version 1.1 of the ATM Security Specification. To assist implementers in identifying these modifications, the authors propose to include a foreword to the Security 1.1 specification that lists these modifications. Typically, a revised specification provides some mechanism for implementers to determine the modifications that were made from previous versions. Since the Security 1.1 specification does not include change bars or other mechanisms that specifically direct the reader to these modifications, they proposed to include a modification table in a foreword to the document. This modification table should also be updated to include substantive modifications that are made at the San Francisco meeting.
This contribution provides Sandia's strawballot comments for the Security Version 1.1 specification, STR-SEC-02.01. Two major comments are addressed here that pertain to potential problems with the use of the Security Association Section digital signature, and potential inconsistencies with the allocation of relative identifiers in the initiating security agent.
As described in contribution AF99-0335, it is of interest to allow new security services and mechanisms to be negotiated while a connection is in progress. To do so, new "negotiation OAM cells" dedicated to security should be defined, as well as acknowledgment cells that allow negotiation OAM cells to be exchanged reliably. Remarks given at the New Orleans meeting regarding those cell formats are taken into account. This contribution presents baseline text describing the format of the negotiation and acknowledgment cells and the use of those cells. All the modifications made to the specification are reversible using the Word tools.
This contribution proposes strawman techniques for Security Service Discovery by ATM endsystems in ATM networks. Candidate techniques include ILMI extensions, ANS extensions and new ATM anycast addresses. Another option is a new protocol based on an IETF service discovery protocol, such as Service Location Protocol (SLP). Finally, this contribution provides strawman requirements for Security-Based Routing in ATM networks.
The ATM Forum UNI 4.0 Security Addendum has undergone 4 revisions and has been without substantive modifications for 3 ATM Forum meetings. This contribution is intended to assist the ATM Forum CS Working Group in the process of bringing BTD-CS-UNI-SEC-01.04 DRAFT to Straw Ballot. This effort applies equally to its companion document, BTD-CS-PNNI-SEC-01.02 DRAFT. BTD-CS-UNI-SEC-01.04 DRAFT is an addendum to UNI 4.0 Signaling that describes the additional procedures needed of ATM signaling to support the signaling-based security message exchange protocol and its 4 basic security mechanisms: authentication, confidentiality, integrity, and access control for ATM VCs/VPs. These services are specified in detail in ATM Forum document af-sec-0100.000, which is currently in Final Ballot. The remaining identified work for BTD-CS-UNI-SEC-01.04 DRAFT includes the resolution of the TBD items in the draft and a review of the sections of the ATM Forum Security Specification V1.0, af-sec-0100.000, that are specifically referenced by BTD-CS-UNI-SEC-01.04 DRAFT. In support of this effort, this contribution includes the relevant baseline text of the referenced sections of that Security Specification.
This contribution describes three interoperability scenarios for the ATM Security Message Exchange (SME) protocol. These scenarios include network-wide signaling support for the Security Services Information Element, partial signaling support where the SSIE is only supported in private or workgroup ATM networks, and the case where the SSIE is not supported by any network elements (except those that implement security services). Explanatory text is proposed for inclusion in Section 2.3 of the ATM Security Specification, Version 1.0.
This contribution provides Sandia's comments to the ATM Forum Security 1.0 straw ballot specification, STR-SECURITY-01.01. These comments are organized as follows: major comments indicate technical defects in the specification which, if not resolved, may preclude Sandia's vote in favor of the specification. Minor comments are technical comments which, if left unresolved, will not preclude Sandia's favorable vote. Finally, editorial comments are also provided.
This contribution proposes additional text for Section 7.1.5.5 of [1] which defines the contents of the digital signature buffer for each relevant flow in the Two-Way and Three-Way Security Message Exchange Protocols. This is clearly an interoperability issue because these signature buffers must be constructed identically at the sender (signature generator) and receiver (signature validator) in order for the protocols to proceed correctly. Sections 2 and 3 of this contribution are intended to be placed in Section 7.1.5.5 of [1]. In addition, text is proposed in Motion 2 of Section 4 of this contribution which clarifies the scope of encryption of the Confidential Section, which is defined in Section 7.1.4 of [1].
This document specifies signaling procedures required to support security services in the Phase I ATM Security Specification. These signaling procedures are in addition to those described in UNI 4.0 Signaling. When establishing point-to-point and point-to-multipoint calls, the call control procedures described in ATM Forum UNI 4.0 Signaling apply. This document describes the additional information elements and procedures necessary to support security services. The description is incremental, giving the differences from point-to-point and point-to-multipoint calls with respect to messages, information elements, and signaling procedures.
Asynchronous Transfer Mode (ATM) is a new data communications technology that promises to integrate voice, video, and data traffic into a common network infrastructure. In order to fully utilize ATM's ability to transfer real-time data at high rates, applications will start to access the ATM layer directly. As a result of this trend, security mechanisms at the ATM layer will be required. A number of research programs are currently in progress that seek to better understand the unique issues associated with ATM security. This paper describes some of these issues and the approaches taken by various organizations in the design of ATM layer security mechanisms. Efforts within the ATM Forum to address the user community's need for ATM security are also described.
The purpose of this contribution is to propose an "Authentication Information Element" that can be used to carry authentication information within the ATM signaling protocols. This information may be used by either signaling entity to validate the claimed identity of the other, and to verify the integrity of a portion of a message's contents. By specifying a generic authentication IE, authentication information can be generated by any signature algorithm, and can be appended to any ATM signaling message. Procedures for the use of this information element are also provided.
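A generic, algorithm-agnostic authentication IE of the kind proposed here can be sketched as a small type-length-value structure carrying an algorithm identifier and a signature over the covered portion of the message. The field layout, identifier values, and the HMAC standing in for a signature algorithm are assumptions for illustration, not the encoding defined in the contribution.

```python
import hmac, hashlib, struct

def build_auth_ie(algorithm_id: int, covered_fields: bytes, key: bytes) -> bytes:
    """Algorithm-agnostic Authentication IE: [algorithm id][sig length][signature].
    Any signature scheme could fill the signature field; HMAC-SHA256 stands in here."""
    signature = hmac.new(key, covered_fields, hashlib.sha256).digest()
    return struct.pack("!BH", algorithm_id, len(signature)) + signature

def verify_auth_ie(ie: bytes, covered_fields: bytes, key: bytes) -> bool:
    """Recompute the signature over the covered fields and compare with the IE."""
    algorithm_id, sig_len = struct.unpack("!BH", ie[:3])
    signature = ie[3:3 + sig_len]
    expected = hmac.new(key, covered_fields, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

key = b"shared-signalling-key"                      # placeholder key material
setup_fields = b"CALLED=47.0091;CALLING=47.0044"    # hypothetical covered portion of a SETUP message
ie = build_auth_ie(algorithm_id=1, covered_fields=setup_fields, key=key)
print(verify_auth_ie(ie, setup_fields, key))        # True
```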
This is the summary report for the Protocol Extensions for Asynchronous Transfer Mode project, funded under Sandia's Laboratory Directed Research and Development program. During this one-year effort, techniques were examined for integrating security enhancements within standard ATM protocols, and mechanisms were developed to validate these techniques and to provide a basic set of ATM security assurances. Based on our experience during this project, recommendations were presented to the ATM Forum (a world-wide consortium of ATM product developers, service providers, and users) to assist with the development of security-related enhancements to their ATM specifications. As a result of this project, Sandia has taken a leading role in the formation of the ATM Forum's Security Working Group, and has gained valuable alliances and leading-edge experience with emerging ATM security technologies and protocols.
This contribution addresses requirements for ATM signaling channel authentication. Signaling channel authentication is an ATM security service that binds an ATM signaling message to its source. By creating this binding, the message recipient, and even a third party, can confidently verify that the message originated from its claimed source. This provides a useful mechanism to mitigate a number of threats. For example, a denial of service attack which attempts to tear-down an active connection by surreptitiously injecting RELEASE or DROP PARTY messages could be easily thwarted when authenticity assurances are in place for the signaling channel. Signaling channel authentication could also be used to provide the required auditing information for accurate billing which is impervious to repudiation. Finally, depending on the signaling channel authentication mechanism, end-to-end integrity of the message (or at least part of it) can be provided. None of these capabilities exist in the current specifications.
This contribution proposes the format of the "Algorithm-Specific Information" and "Signature" fields within the "Proposed Generic Authentication Information Element" for authentication IEs based on the Digital Signature Standard (DSS). These fields are designed to allow various levels of authentication "strength" (or robustness), and many of these fields may be omitted in systems that optimize authentication performance by sharing common (public) Digital Signature Algorithm (DSA) parameters. This allows users and site security officers to design their authenticated signaling according to site security and performance requirements.
This contribution describes a proposed information element that can convey authentication information within an ATM signaling message. The design of this information element provides a large amount of flexibility to the user because it does not specify a particular signature algorithm, and it does not specify which information elements must accompany the Authentication IE in a signaling message. This allows the user to implement authenticated signaling based on her site's security policies and performance requirements.
There has been some interest lately in the need for "authenticated signalling" and the development of signalling specifications by the ATM Forum that support this need. The purpose of this contribution is to show that if authenticated signalling is required, then supporting signalling facilities for directory services (i.e., key management) are also required. Furthermore, this contribution identifies other security-related mechanisms that may also benefit from ATM-level signalling accommodations. For each of the mechanisms outlined here, an overview of the signalling issues and a rough cut at the required fields for supporting Information Elements are provided. Finally, since each of these security mechanisms is specified by a number of different standards, issues pertaining to the selection of a particular security mechanism at connection setup time (i.e., specification of a required "Security Quality of Service") are also discussed.
The design of a software package that provides a variety of Asynchronous Transfer Mode (ATM) test functions is presented here. These functions include cell capture, protocol decode for Transmission Control Protocol/Internet Protocol (TCP/IP) services, removal of cells (to support testing of an ATM system under cell loss conditions), and echo functions. This package is currently written to operate on the Sun Microsystems SPARCstation 10/SunOS 4.1.3 environment with a Fore Systems SBA-100 Sbus ATM adapter (140 Mbit/s TAXI interface), and the DEC 5000/240 running ULTRIX 4.2A with a Fore Systems TCA-100 TurboChannel adapter. Application scenarios and performance measurements of this software package on these host environments are presented here.