Publications

Results 26–32 of 32

Final Report for the 10 to 100 Gigabit/Second Networking Laboratory Directed Research and Development Project

Witzke, Edward L.; Pierson, Lyndon G.; Tarman, Thomas D.; Dean, Leslie B.; Robertson, Perry J.; Campbell, Philip L.

The next major performance plateau for high-speed, long-haul networks is at 10 Gbps. Data visualization, high-performance network storage, and Massively Parallel Processing (MPP) demand these (and higher) communication rates. MPP-to-MPP distributed processing applications and MPP-to-Network File Store applications already require single-conversation communication rates in the range of 10 to 100 Gbps. MPP-to-Visualization Station applications can already utilize communication rates in the 1 to 10 Gbps range. This LDRD project examined some of the building blocks necessary for developing a 10 to 100 Gbps computer network architecture. These included technology areas such as OS bypass, Dense Wavelength Division Multiplexing (DWDM), IP switching and routing, optical amplifiers, Inverse Multiplexing for ATM, data encryption, and data compression; standards-body activities in the ATM Forum and the Optical Internetworking Forum (OIF); and proof-of-principle laboratory prototypes. This work has not only advanced the body of knowledge in the aforementioned areas but has also facilitated the rapid maturation of high-speed networking and communication technology by (1) participating in the development of pertinent standards and (2) promoting informal (and formal) collaboration with industrial developers of high-speed communication equipment.

Survivability via Control Objectives

Campbell, Philip L.

Control objectives open an additional front in the survivability battle. A given set of control objectives is valuable if it represents good practices, is complete (that is, it covers all the necessary areas), and is auditable. CobiT and BS 7799 are two examples of control objective sets.

A System Analysis Tool

Campbell, Philip L.; Espinoza, Juan E.

In this paper we describe a tool for analyzing systems. The analysis is based on program slicing. It answers the following question about the software: if the value of a particular variable changes, what other variable values also change, and what is the path in between? Program slicing was originally developed for intra-procedure control and data flow and has since been extended commercially to inter-procedure flow. We extend slicing further, to collections of programs and non-program entities, which we term multi-domain systems. The value of our tool is that an analyst can model the entirety of a system, not just the software, which we believe makes for a significant increase in power. We are building a prototype system.
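
As a rough illustration of the question a slicer answers, consider the following toy C program (our own hypothetical example, not drawn from the paper). The backward slice for the variable sum at its printf consists of exactly the statements that can affect its value; statements that touch only product lie outside the slice.

    #include <stdio.h>

    int main(void)
    {
        int n = 10;                  /* in the slice for sum */
        int i;                       /* in the slice for sum */
        int sum = 0;                 /* in the slice for sum */
        int product = 1;             /* NOT in the slice */

        for (i = 1; i <= n; i++) {   /* in the slice: controls sum */
            sum += i;                /* in the slice */
            product *= i;            /* NOT in the slice */
        }

        printf("%d\n", sum);         /* slicing criterion: (this line, sum) */
        printf("%d\n", product);
        return 0;
    }

A multi-domain slice, as the abstract describes it, extends this dependence tracing across program boundaries and into non-program entities.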

Remote monitoring architectures: a part of the frontier

JNMM, Journal of the Institute of Nuclear Materials Management

Campbell, Philip L.; Craft, Richard L.; Snyder, Lillian A.

This paper presents a taxonomy, in the form of an abstract model, of the set of remote monitoring architectures, such as those used for international agreements, treaties, or the monitoring of hazardous materials. The model consists of three parts: a sensor, an optional server, and a user, with communication lines connecting sensor and server and connecting server and user. (If the server is not present, the communication line connects the sensor and user directly.) We refine the three parts to include different user populations, data sensitivity, and secure services. We complete the model by allowing data between the parts to be either pulled or pushed. This results in six basic partitions (two for the serverless case, where the single link is either pushed or pulled, and four for the server case, where each of the two links is independently pushed or pulled), each of which has a number of sub-partitions. For several sample architectures we show how they fit into the taxonomy. The importance of the taxonomy is that it provides a systematic method of understanding these architectures, which we believe are at the forefront of technology. We anticipate that solutions generated by these architectures will become commonplace in the future. For example, a customary requirement for these architectures is that the adversary be a legitimate user.
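
A minimal sketch of that counting in C (our own illustrative encoding; the paper defines the model in prose, and the type and field names below are hypothetical):

    #include <stdio.h>

    typedef enum { PUSH, PULL } Mode;   /* direction of data on a link */

    static const char *name(Mode m) { return m == PUSH ? "push" : "pull"; }

    int main(void)
    {
        int count = 0;

        /* Serverless case: one sensor-to-user link, pushed or pulled. */
        for (int m = PUSH; m <= PULL; m++)
            printf("%d: sensor -%s-> user\n", ++count, name((Mode)m));

        /* Server case: two links, each independently pushed or pulled. */
        for (int s = PUSH; s <= PULL; s++)
            for (int u = PUSH; u <= PULL; u++)
                printf("%d: sensor -%s-> server -%s-> user\n",
                       ++count, name((Mode)s), name((Mode)u));

        return 0;   /* six basic partitions printed */
    }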

LFSRs Do Not Provide Compression

Campbell, Philip L.; Pierson, Lyndon G.

We show that for general input sets, linear feedback shift registers (LFSRs) do not provide compression comparable to current, standard algorithms, at least not on the current, standard input files. Rather, LFSRs provide performance on a par with simple run-length encoding schemes. We exercised three different ways of using LFSRs on the Canterbury corpus, the Canterbury large set, and the Calgary corpus, as well as on three large graphics files of our own.
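
For reference, a Fibonacci-style LFSR is just a shift register whose input bit is the XOR of a fixed set of tap bits. One step of a 16-bit register in C might look like the following (our own illustration using the maximal-length polynomial x^16 + x^14 + x^13 + x^11 + 1; the report does not prescribe a particular register or tap set).

    #include <stdint.h>
    #include <stdio.h>

    /* Advance the register one step: feedback bit is the XOR of the
       tap bits (bits 15, 13, 12, and 10), shifted in at the low end. */
    static uint16_t lfsr_step(uint16_t state)
    {
        uint16_t fb = ((state >> 15) ^ (state >> 13) ^
                       (state >> 12) ^ (state >> 10)) & 1u;
        return (uint16_t)((state << 1) | fb);
    }

    int main(void)
    {
        uint16_t s = 0xACE1u;   /* any nonzero seed */
        for (int i = 0; i < 8; i++) {
            s = lfsr_step(s);
            printf("%04x\n", s);
        }
        return 0;
    }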

Trusted Objects

Campbell, Philip L.; Pierson, Lyndon G.; Witzke, Edward L.

In the world of computers, a trusted object is a collection of possibly sensitive data and programs that can be allowed to reside and execute on a computer, even on an adversary's machine. Beyond the scope of one computer, we believe that network-based agents in high-consequence and highly reliable applications will depend on this approach, and that the basis for such objects is what we call "faithful execution."

An Implementation of the Berlekamp-Massey Linear Feedback Shift-Register Synthesis Algorithm in the C Programming Language

Campbell, Philip L.

This report presents an implementation of the Berlekamp-Massey linear feedback shift-register (LFSR) synthesis algorithm in the C programming language. Two pseudo-code versions of the algorithm are given, the operation of LFSRs is explained, a C version of the pseudo-code is presented, and the output of the code, when run on two input samples, is shown.
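
For context, here is a compact rendering of the algorithm over GF(2) in C (our own sketch of the standard published algorithm; the report's pseudo-code and C code may differ in structure and detail). Given a bit sequence, it returns the length L of the shortest LFSR that generates the sequence, along with the connection polynomial.

    #include <stdio.h>
    #include <string.h>

    #define MAXN 256

    /* Returns L, the length of the shortest LFSR generating s[0..n-1],
       and writes the connection polynomial into c[0..L], with c[0] = 1
       and s[j] = c[1]s[j-1] ^ ... ^ c[L]s[j-L] for all j >= L. */
    static int berlekamp_massey(const int *s, int n, int *c)
    {
        int b[MAXN], t[MAXN];
        int L = 0, m = -1;

        memset(c, 0, MAXN * sizeof(int));
        memset(b, 0, MAXN * sizeof(int));
        c[0] = b[0] = 1;

        for (int N = 0; N < n; N++) {
            /* Discrepancy between s[N] and the current LFSR's prediction. */
            int d = s[N];
            for (int i = 1; i <= L; i++)
                d ^= c[i] & s[N - i];

            if (d) {                      /* prediction failed: adjust c */
                memcpy(t, c, MAXN * sizeof(int));
                for (int j = 0; j + N - m < MAXN; j++)
                    c[j + N - m] ^= b[j];
                if (2 * L <= N) {         /* the register must grow */
                    L = N + 1 - L;
                    m = N;
                    memcpy(b, t, MAXN * sizeof(int));
                }
            }
        }
        return L;
    }

    int main(void)
    {
        int s[] = { 0, 0, 1, 1, 0, 1, 1, 1, 0 };   /* sample input bits */
        int c[MAXN];
        int L = berlekamp_massey(s, (int)(sizeof s / sizeof s[0]), c);

        printf("shortest LFSR length: %d\n", L);
        printf("connection polynomial coefficients:");
        for (int i = 0; i <= L; i++)
            printf(" %d", c[i]);
        printf("\n");
        return 0;
    }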
