Optoelectronic and photonic devices hold great promise for high-data-rate communication and computing. Their wide implementation was limited first by device technologies and is now hampered by the need for high-precision packaging that can be mass-produced. The use of photons as a medium of communication and control implies a unique set of packaging constraints, driven largely by the need for micron and even sub-micron alignments between photonic devices and their transmission media. Current trends in optoelectronic device packaging are reviewed and future directions are identified for both free-space (three-dimensional) and guided-wave (two-dimensional) photonics. Emphasis is placed on the special needs generated by increasing levels of device integration.
Sandia Labs' mobile tracking systems have only one moving part. The double-gimballed, 18-inch-diameter beryllium mirror is capable of constant tracking velocities up to 5 rad/s in both axes, and accelerations to 150 rad/s² in both axes. Orthogonality is <10 microradians. The mirror directs the 488 and 514 nm wavelength CW laser beams to adhesive-backed reflective material applied to the test unit. The mirror catches the return beam and visual image, directing the visual image to three camera bays and the return beam to an image dissector behind an 80-inch gathering telescope. The image dissector, or image position sensor, is a photomultiplier with an amplifying drift tube and electron aperture, together with its associated electronics. During the test, the image dissector scan senses the change in position of the reflective material and produces signals to operate the azimuth and elevation torque motors in the gimbal assembly. With the help of 1 1/8-inch-diameter azimuth and elevation galvanometer steering mirrors in the optical path, the laser beam is kept on the target at extremely high velocities. To maintain a constant return signal strength, the outgoing beam is run through a microprocessor-controlled beam-focusing telescope.
Geologic site characterization should be a dynamic, continuing process, not an event. Its successes and failures are legion and can make or break an operator. A balanced approach must be sought that provides adequate information for safety of operations, neither slighting nor overdoing the effort. The evolving nature of study methods and geologic knowledge essentially mandates that characterization efforts be reviewed periodically. However, indifference, nonchalance, and even outright disdain describe attitudes witnessed in some circles regarding this subject; unawareness may also be a factor. Unfortunately, several unanticipated events have led to severe economic consequences for the operators. The hard-learned lessons from several unanticipated geotechnical occurrences at Gulf Coast salt domes are discussed. The ultimate benefit of valuing site characterization efforts may be more than just enhanced safety and health: costs not expended in lost facilities and litigation can become profit.
In order to provide needed security assurances for traffic carried in Asynchronous Transfer Mode (ATM) networks, methods of protecting the integrity and privacy of traffic must be employed. Cryptographic methods can assure authenticity and privacy, but they are hard to scale, and their incorporation into computer networks can severely impact functionality, reliability, and performance. To study these trade-offs, a research prototype encryptor/decryptor is under development. The prototype is to demonstrate the viability of implementing certain encryption techniques in high-speed networks by processing ATM cells in a SONET OC-3 payload. This paper describes the objectives and the design trade-offs to be investigated with the prototype. User requirements for high-performance computing and communication have driven Sandia to work on the functionality, reliability, security, and performance of high-speed communication networks. Adherence to standards (including emerging standards) achieves greater functionality of high-speed computer networks by providing wide interoperability of applications, network hardware, and network software.
Scannerless range imaging (SRI) is a unique approach to three-dimensional imaging that requires no scanners. SRI does, however, allow a more powerful light source than conventional laser radar (LADAR) systems, owing to the speed of operation associated with this staring system. As a result, a more efficient method of operation was investigated. As originally conceived, SRI transmits a continuous, sinusoidally intensity-modulated signal; however, a square-wave driver is more energy efficient than a sinusoidal driver. To take advantage of this efficiency, a square-wave operational methodology was investigated. With a square wave, four image frames are required to unambiguously resolve all time delays within one modulation period, compared with a minimum of three frames for the sinusoidal wave.
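As a concrete illustration of the frame arithmetic behind modulated-intensity ranging (an illustrative sketch, not the report's actual processing chain), the phase of a sinusoidally modulated return, and hence the range, can be recovered from four frames captured at quarter-period offsets; the modulation frequency and frame model below are assumptions for the example.

```python
import math

def range_from_frames(i0, i1, i2, i3, mod_freq_hz):
    """Recover the round-trip phase shift (and hence range) from four
    intensity frames captured with the modulation stepped by 0, 90,
    180, and 270 degrees (the standard four-bucket relation)."""
    c = 299_792_458.0  # speed of light, m/s
    # With i_k = A + B*cos(phi + k*pi/2):
    #   i3 - i1 = 2B*sin(phi),  i0 - i2 = 2B*cos(phi)
    phase = math.atan2(i3 - i1, i0 - i2) % (2.0 * math.pi)
    # A full 2*pi of phase corresponds to the unambiguous range c/(2f).
    return c * phase / (4.0 * math.pi * mod_freq_hz)
```

The fourth frame is what removes the ambiguity that a three-frame sinusoidal reconstruction tolerates; with a square-wave drive the harmonic content makes the extra sample necessary.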
Many advanced light water reactor (ALWR) concepts proposed for the next generation of nuclear power plants rely on passive systems to perform safety functions, rather than active systems as in current reactor designs. These passive systems depend to a great extent on physical processes such as natural circulation for their driving force, and not on active components, such as pumps. An NRC-sponsored study was begun at Sandia National Laboratories to develop and implement a methodology for evaluating ALWR passive system reliability in the context of probabilistic risk assessment (PRA). This report documents the first of three phases of this study, including methodology development, system-level qualitative analysis, and sequence-level component failure quantification. The methodology developed addresses both the component (e.g. valve) failure aspect of passive system failure, and uncertainties in system success criteria arising from uncertainties in the system`s underlying physical processes. Traditional PRA methods, such as fault and event tree modeling, are applied to the component failure aspect. Thermal-hydraulic calculations are incorporated into a formal expert judgment process to address uncertainties in selected natural processes and success criteria. The first phase of the program has emphasized the component failure element of passive system reliability, rather than the natural process uncertainties. Although cursory evaluation of the natural processes has been performed as part of Phase 1, detailed assessment of these processes will take place during Phases 2 and 3 of the program.
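The component-failure side of the methodology uses traditional fault tree quantification. As a minimal sketch of that machinery (the cut sets and probabilities below are illustrative, not the study's data), the rare-event approximation sums, over minimal cut sets, the product of the basic-event probabilities:

```python
def top_event_probability(cut_sets, basic_event_probs):
    """Rare-event approximation for a fault tree's top event:
    sum over minimal cut sets of the product of the basic-event
    probabilities appearing in each cut set."""
    total = 0.0
    for cut_set in cut_sets:
        p = 1.0
        for event in cut_set:
            p *= basic_event_probs[event]
        total += p
    return total

# Hypothetical passive-system example: two redundant valves must both
# fail, OR a single check-valve fails.
p_fail = top_event_probability(
    [("V1", "V2"), ("CV1",)],
    {"V1": 1e-3, "V2": 1e-3, "CV1": 1e-4},
)
```

The natural-process uncertainties the report highlights enter separately, through the thermal-hydraulic success criteria rather than through these event probabilities.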
Full-scale fire characterization tests are becoming less frequent due to cost restrictions and environmental concerns. This trend, combined with significant advances in fire field modeling, has resulted in an increased effort to perform well-designed experiments that support the development and validation of numerical tools. In pursuit of improved fire characterization, measurement techniques for large-scale (D > 2 m) fires are reviewed in this work. Primary attention is focused on the measurement of temperature and heat flux. Additional measurements, such as soot volume fraction, soot emission temperature, and gas species, are also addressed. Issues relating to the use of existing techniques, and methods for improving and interpreting their results, are presented. Alternate techniques for fire characterization and needs for the development of advanced measurement technology are also briefly discussed.
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several 2D and 3D finite element simulations of the event. The purpose of the work is to investigate the performance of various analysis codes and element types on a problem applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry. During the pulse buckling tests, a buckle formed at each end of the cylinder, and one of the two buckles became unstable and collapsed. Numerical simulations of the test were performed using PRONTO, a Sandia-developed transient dynamics analysis code, and ABAQUS/Explicit, with both shell and continuum elements. The calculations are compared to the tests with respect to deformed shape and impact load history.
This paper presents an analysis of the chemical vapor deposition of diamond thin films in a direct-current (dc) arc-jet reactor. The analysis includes models of the performance of the arc-jet hydrogen excitation source, the chemistry in the free-stream region, and the diffusive transport and chemistry in the boundary layer and at the surface. The surface chemistry model includes pathways for deposition of diamond, as well as for the creation of defects in the diamond lattice.
An overview is presented of work on strained InAsSb heterostructures and infrared emitters. InAsSb/InGaAs strained-layer superlattices (SLS) and InAsSb quantum wells were grown by metal-organic chemical vapor deposition and characterized using magneto-photoluminescence. LEDs and lasers with InAsSb heterostructure active regions are described.
When a system is being designed, one of the system requirements specifies the intended life of the system, variously called the design life, the system life, the expected operational lifetime, or the service life. This specification is an important driver of total life-cycle cost. This paper examines how the specified design life affects the design and the cost of the system.
The most common tool used by aircraft inspectors is the personal flashlight. While it is compact and very portable, it is generally typified by poor beam quality, which can interfere with an inspector's ability to detect small defects and anomalies, such as cracks and corrosion sites, that may be indicators of major structural problems. A Light Shaping Diffuser™ (LSD) installed in a stock flashlight as a replacement for the lens can improve the beam uniformity of an average flashlight and thereby the quality of the inspection. Field trials at aircraft maintenance facilities have demonstrated general acceptance of the LSD by aircraft inspection and maintenance personnel.
Over the next decade, the US Department of Energy (DOE) must retire and dismantle many nuclear weapon systems. In support of this effort, Sandia National Laboratories (SNL) has developed the Hazard Separation System (HSS). The HSS combines abrasive waterjet cutting technology and real-time radiography. Using the HSS, operators determine the exact location of interior, hazardous sub-components and remove them through precision cutting. The system minimizes waste and maximizes the recovery of recyclable materials. During 1994, the HSS was completed and demonstrated. Weapon components processed during the demonstration period included arming, fusing, and firing units; preflight control units; neutron generator subassemblies; and x-units. Hazards removed included radioactive krytron tubes and gap tubes, thermal batteries, neutron generator tubes, and oil-filled capacitors. Currently, the HSS is being operated at SNL in a research and development mode to facilitate the transfer of the technology to other DOE facilities for support of their dismantlement operations.
We create mobile surface vacancies on vicinal Si(001) by bombarding the surface with 300 eV Xe ions at a substrate temperature of 465 °C. The vacancies preferentially annihilate at the rough steps, retracting them with respect to their smooth neighbors. This process leads to a bimodal terrace width distribution. The retraction of the rough steps due to vacancy annihilation is in competition with the healing process by which the surface tries to maintain its equilibrium configuration of equally spaced steps. As the two competing processes balance, the surface reaches steady state, and subsequent removal of surface atoms is manifest as simple step flow.
The purpose of publishing the minutes of this workshop is to document the content of the presentations and the direction of the discussions as a means of fostering collaborative research and development on chromate replacements throughout the defense, automotive, aerospace, and packaging industries. The goal of the workshop was to bring together coating researchers, developers, and users from a variety of industries to discuss new coating ideas, testing methods, and coating preparation techniques, not only from the perspective of the end user but also from those of the coating supplier, developer, and researcher. In this we succeeded, thanks to the wide-ranging interests of the more than 60 workshop registrants. It is our hope that future workshops throughout government and industry can benefit from the recorded minutes of our meeting and use them as a starting point for further discussion of the directions for chromate replacements in light-metal finishing.
The multiphase, multicomponent, non-isothermal simulator M2NOTS was tested against several one-dimensional experiments. The experiments represented limiting conditions of soil venting processes: (1) a through-flow condition in which air flows through the contaminated region, and (2) a bypass-flow condition in which air is channeled around (rather than through) the contaminated region. M2NOTS predictions of changing in situ compositions and effluent concentrations for toluene and o-xylene mixtures were compared to the observed results for each condition. Results showed that M2NOTS was able to capture the salient trends and features of multicomponent through-flow and bypass-flow venting processes.
We identify a general framework for search called bootstrap search, defined as global search using only a local search procedure along with some memory for learning intermediate subgoals. We present a simple algorithm for bootstrap search and provide some initial theory on its performance. In our theoretical analysis, we develop a random digraph problem model and use it to make performance predictions and comparisons. We also use it to provide techniques for approximating the optimal resource bound on the local search to achieve the best global search. We validate our theoretical results with empirical demonstrations on the 15-puzzle, showing how bootstrap search reduces the cost of a global search by two orders of magnitude. We also demonstrate a natural but not widely recognized connection between search costs and the lognormal distribution. To further illustrate the algorithm's generality and effectiveness, we also apply it to robot path planning and demonstrate a phenomenon of over-learning.
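A minimal sketch of the bootstrap-search idea, assuming an explicit graph setting; the subgoal rule used here (remember the deepest state a resource-bounded local search reaches, and restart from it) is a simplified stand-in for the paper's learned intermediate subgoals:

```python
from collections import deque

def bounded_bfs(graph, start, goal_test, depth_limit):
    """Breadth-first local search truncated at a resource bound.
    Returns (reached_goal, path): the path leads to the goal if it
    was found, otherwise to the deepest node explored."""
    frontier = deque([(start, [start])])
    seen = {start}
    deepest = [start]
    while frontier:
        node, path = frontier.popleft()
        if goal_test(node):
            return True, path
        if len(path) > len(deepest):
            deepest = path
        if len(path) - 1 >= depth_limit:
            continue
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return False, deepest

def bootstrap_search(graph, start, goal_test, depth_limit, max_rounds=50):
    """Global search via a chain of bounded local searches; each round
    restarts from the subgoal the previous round reached."""
    path = [start]
    for _ in range(max_rounds):
        found, leg = bounded_bfs(graph, path[-1], goal_test, depth_limit)
        path += leg[1:]
        if found:
            return path
        if len(leg) == 1:  # no progress this round; give up
            return None
    return None
```

Each bounded call is cheap; chaining them through remembered subgoals is what converts local search into global search, at the cost of choosing the resource bound well, which is the quantity the random-digraph analysis approximates.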
The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ graphics package (available from Computer Associates International, Inc., Garden City, NY) as its engine. It was used to plot, modify, and otherwise manipulate one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of UNIX®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP. IDL is currently supported on a wide variety of platforms, including IBM® workstations, Hewlett-Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers, and Digital Equipment Corporation VMS® systems. Thus, this program should be portable across many platforms. We have verified operation, albeit with some IDL bugs, on IBM UNIX platforms, DEC Alpha systems, HP 9000/700 series workstations, and Macintosh computers, both regular and PowerPC™ versions.
Wong, C.C.; Blottner, F.G.; Payne, J.L.; Soetrisno, M.; Imlay, S.T.
This report documents exploratory research, funded by the Laboratory Directed Research and Development (LDRD) office at Sandia National Laboratories, to develop an advanced, general-purpose, robust compressible flow solver for handling large, complex, chemically reacting gas dynamics problems. The deliverable of this project, a computer program called PINCA (Parallel INtegrated Computer Analysis), will run on massively parallel computers such as the Intel/Gamma and Intel/Paragon. With the development of this parallel compressible flow solver, engineers will be better able to address large three-dimensional scientific and engineering problems involving multi-component gas mixtures with finite-rate chemistry. Such problems occur in high-temperature industrial processes, combustion, and hypersonic reentry of spacecraft.
Utilizing unique properties of a recently developed set of attitude parameters, the modified Rodrigues parameters, we develop feedforward/feedback control laws that globally control a spacecraft undergoing large nonlinear motions, using three or more reaction wheels. The method is suitable for tracking given smooth reference trajectories that spline smoothly into a target state or pure spin motion; these reference trajectories may be exact or approximate solutions of the system equations of motion. In particular, we illustrate the ideas using both near-minimum-time and near-minimum-fuel rotations about Euler's principal rotation axis, with parameterization of the sharpness of the control switching for each class of reference maneuvers. Lyapunov stability theory is used to prove rigorous stability of the closed-loop motion in the end game, and qualified Lyapunov stability of the closed-loop tracking error dynamics during the large nonlinear path-tracking portion. The methodology is illustrated by designing example control laws for a prototype landmark-tracking spacecraft; simulations are reported that show this approach to be attractive for practical applications. The inputs to the reference trajectory are designed with user-controlled sharpness of all control switches, to enhance the trackability of the reference maneuvers in the presence of structural flexibility.
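For illustration only (the gains, inertia matrix, and the simpler regulation setting, rather than the paper's trajectory-tracking laws, are all assumptions), a Lyapunov-motivated feedback law in the modified Rodrigues parameters can be sketched as:

```python
import numpy as np

def mrp_kinematics(sigma, omega):
    """Modified Rodrigues parameter kinematics:
    sigma_dot = 0.25 * B(sigma) @ omega, with
    B = (1 - |sigma|^2) I + 2 [sigma x] + 2 sigma sigma^T."""
    s2 = sigma @ sigma
    skew = np.array([[0.0, -sigma[2], sigma[1]],
                     [sigma[2], 0.0, -sigma[0]],
                     [-sigma[1], sigma[0], 0.0]])
    B = (1.0 - s2) * np.eye(3) + 2.0 * skew + 2.0 * np.outer(sigma, sigma)
    return 0.25 * B @ omega

def regulate(sigma0, omega0, inertia, K=1.0, P=3.0, dt=0.01, steps=8000):
    """Drive attitude error and body rate to zero with the classic
    Lyapunov-derived law u = -K*sigma - P*omega (illustrative gains),
    integrated with forward Euler."""
    sigma = np.array(sigma0, dtype=float)
    omega = np.array(omega0, dtype=float)
    J = np.asarray(inertia, dtype=float)
    for _ in range(steps):
        u = -K * sigma - P * omega
        # Euler's rotational dynamics: J*omega_dot = -omega x (J*omega) + u
        omega_dot = np.linalg.solve(J, u - np.cross(omega, J @ omega))
        sigma = sigma + dt * mrp_kinematics(sigma, omega)
        omega = omega + dt * omega_dot
    return sigma, omega
```

The feedforward term and the smooth splining into a target state, central to the paper, are omitted here; this sketch only shows why the MRP error coordinates pair naturally with a Lyapunov argument in the end game.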
With the recent completion of the documentation of results from the Grand Gulf Nuclear Power Plant Low Power and Shutdown (LP and S) project funded by the US Nuclear Regulatory Commission (NRC), detailed probabilistic risk assessment (PRA) information from a boiling water reactor (BWR) for a specific time period in LP and S conditions became available for examination. This report contains observations and insights extracted from an examination of: (1) results in the LP and S documentation; (2) the specific models and assumptions used in the LP and S analyses; (3) selected results from the full-power analysis; (4) the experience of the analysts who performed the original LP and S study; and (5) results from sensitivity calculations performed as part of this project to help determine the impact that model assumptions and data values had on the results of the original LP and S analysis. Specifically, this study makes observations on, and develops insights from, the estimates of core damage frequency and aggregate risk (early fatalities and total latent cancer fatalities) associated with operations during plant operational state (POS) 5 (basically cold shutdown as defined by Technical Specifications) during a refueling outage for traditional internal events. A discussion of similarities and differences between full-power accidents and accidents during LP and S conditions is provided. As part of this discussion, core damage frequency and risk results are presented on a per-hour and per-calendar-year basis, allowing alternative perspectives on both the core damage frequency and the risk associated with these two operational states.
The feasibility of ceramic joining using a high-energy (10 MeV) electron beam was investigated. The experiments used refractory metals as bonding materials in buried interfaces between Si₃N₄ pieces. Because the heat capacity of the metal bonding layer is much lower than that of the ceramic, the metal reaches much higher temperatures than the adjoining ceramic. The right combination of beam parameters allows the metal to be melted without causing the adjoining ceramics to melt or decompose. Beam energy deposition and thermal simulations were performed to guide the experiments. Joints were shear tested, and the interfaces between the metal and the ceramic were examined to identify the bonding mechanism. Specimens joined by electron beams were compared to specimens produced by hot pressing; similar reactions occurred in both processes. Reactions between the metal and ceramic produced silicides that bond the metal to the ceramic. The molybdenum silicide reaction products appeared to be more brittle than the platinum silicides. Si₃N₄ was also joined to Si₃N₄ directly; the bonding appears to have been produced by the flow of intergranular glass into the interface, and the shear strength was similar to that of the metal-bonded specimens. Bend specimens of Si₃N₄ were exposed to electron beams with parameters similar to those used in the joining experiments, to determine how beam exposure degrades strength. Damage was macroscopic in nature, with craters formed by material ablation and cracking caused by excessive thermal stresses. Si was also observed on the surface, indicating that the Si₃N₄ was decomposing. Bend strength after exposure was 62% of the as-received strength. No obvious microstructural differences were observed in the material close to the damaged region compared to material far from the damage.
Design criteria for carbon-based ultracapacitors have been determined for specified energy and power requirements, using the geometry of the components and such material properties as density, porosity, and conductivity as parameters, while also considering chemical compatibility. This analysis shows that the weights of active and inactive components of the capacitor structure must be carefully balanced for maximum energy and power density. When applied to nonaqueous electrolytes, the design rules for a 5 Wh/kg device call for porous carbon with a specific capacitance of about 30 F/cm³. This performance is not achievable with pure electrostatic double-layer capacitance: in nonaqueous electrolyte, the double-layer capacitance is only 5 to 30% of that observed in aqueous electrolyte. Tests also showed that nonaqueous electrolytes have a diminished capability to access micropores in activated carbon, in one case yielding a capacitance of less than 1 F/cm³ for a carbon that exhibited 100 F/cm³ in aqueous electrolyte. With negative results on nonaqueous electrolytes dominating the present study, the obvious conclusion is to concentrate on aqueous systems. Only aqueous double-layer capacitors offer adequate electrostatic charging characteristics, which are the basis for high-power performance. There are many opportunities for further advancing aqueous double-layer capacitors, one being the use of highly activated carbon films, as opposed to powders, fibers, and foams. While the manufacture of carbon films is still costly, and while the energy and power density of the resulting devices may not meet the optimistic goals that have been proposed, this technology could produce true double-layer capacitors with significantly improved performance and large commercial potential.
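The kind of design rule cited above follows from E = ½CV². As a rough illustration (the cell voltage, packaged density, and active-mass fraction below are assumed values, not the report's design parameters), one can back out the volumetric capacitance the porous carbon must supply for a given packaged energy-density target:

```python
def required_specific_capacitance(energy_density_wh_per_kg,
                                  cell_voltage_v,
                                  packaged_density_g_per_cm3,
                                  active_mass_fraction):
    """Volumetric capacitance (F/cm^3) the active carbon must supply
    to hit a packaged energy-density target, from E = 0.5 * C * V^2.
    All inputs are illustrative assumptions."""
    joules_per_kg = energy_density_wh_per_kg * 3600.0
    # capacitance needed per kg of packaged device
    c_per_kg = 2.0 * joules_per_kg / cell_voltage_v**2
    # fold in packaging overhead and convert kg of device to cm^3 of carbon
    return (c_per_kg * packaged_density_g_per_cm3
            / (1000.0 * active_mass_fraction))

# 5 Wh/kg target, ~3 V nonaqueous cell, ~1 g/cm^3, carbon ~15% of mass
c_needed = required_specific_capacitance(5.0, 3.0, 1.0, 0.15)
```

With these assumed numbers the requirement lands in the tens of F/cm³, the regime of the 30 F/cm³ figure quoted; the balancing of active versus inactive mass enters through the active-mass fraction.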
In robotics, path planning refers to finding a short, collision-free path from an initial robot configuration to a desired configuration. It has to be fast to support real-time task-level robot programming. Unfortunately, current planning techniques are still too slow to be effective, as they often require several minutes, if not hours, of computation. To remedy this situation, we present and analyze a learning algorithm that uses past experience to increase future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a speedup-learning framework in which a slow but capable planner may be improved both cost-wise and capability-wise by a faster but less capable planner coupled with experience. The basic algorithm is suitable for stationary environments and can be extended to accommodate changing environments with on-demand experience repair and object-attached experience abstraction. To analyze the algorithm, we characterize the situations in which the adaptive planner is useful, provide quantitative bounds to predict its behavior, and confirm our theoretical results with experiments in path planning for manipulators. Our algorithm and analysis are sufficiently general that they may also be applied to other planning domains in which experience is useful.
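A minimal sketch of the experience-network idea, assuming Euclidean configurations and using straight-line distance as a stand-in for the local planner and its collision checks (which a real implementation would require):

```python
import heapq
import math

class ExperienceRoadmap:
    """Sparse network of previously useful robot configurations.
    Solutions from a slow planner are folded into the graph; later
    queries link their endpoints to nearby stored configurations
    and search the learned graph."""

    def __init__(self):
        self.edges = {}  # config tuple -> {neighbor: cost}

    def add_solution(self, path):
        """Fold a solved path into the experience network."""
        for a, b in zip(path, path[1:]):
            d = math.dist(a, b)
            self.edges.setdefault(a, {})[b] = d
            self.edges.setdefault(b, {})[a] = d

    def _link(self, q, k=2):
        """Connect a new configuration to its k nearest stored ones."""
        nearest = sorted(self.edges, key=lambda n: math.dist(q, n))[:k]
        return {n: math.dist(q, n) for n in nearest}

    def query(self, start, goal):
        """Dijkstra over the roadmap plus the two linked endpoints."""
        graph = {n: dict(nbrs) for n, nbrs in self.edges.items()}
        graph[start] = self._link(start)
        graph.setdefault(goal, {}).update(self._link(goal))
        for n, d in graph[goal].items():
            graph[n][goal] = d
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == goal:
                path = [goal]
                while path[-1] != start:
                    path.append(prev[path[-1]])
                return path[::-1]
            if d > dist.get(node, math.inf):
                continue
            for nxt, w in graph.get(node, {}).items():
                nd = d + w
                if nd < dist.get(nxt, math.inf):
                    dist[nxt], prev[nxt] = nd, node
                    heapq.heappush(heap, (nd, nxt))
        return None
```

The speedup comes from answering most queries against this sparse learned graph and invoking the slow, capable planner only when the roadmap fails, after which its solution is added to the network.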
System identification for the purpose of robust control design involves estimating a nominal model of a physical system, and the uncertainty bounds of that nominal model, from experimentally measured input/output data. Although many algorithms have been developed to identify nominal models, little effort has been directed toward identifying uncertainty bounds. Therefore, this document discusses both nominal model identification and bounded-output multiplicative uncertainty identification. The document is divided into several sections: background information relevant to system identification and control design; a derivation of eigensystem-realization-type algorithms; an algorithm for calculating the maximum singular value of output multiplicative uncertainty from measured data; an application involving the identification of a complex system with aliased dynamics, feedback control, and exogenous noise disturbances; and, finally, a short discussion of results.
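As a sketch of the uncertainty computation described (the frequency-response-matrix data format is an assumption, not necessarily the document's), the output multiplicative uncertainty Δ = (G_meas − G_nom) G_nom⁻¹ can be bounded pointwise in frequency by its maximum singular value:

```python
import numpy as np

def multiplicative_uncertainty_bound(G_meas, G_nom):
    """At each frequency point, form the output multiplicative
    uncertainty Delta = (G_meas - G_nom) @ inv(G_nom) and return its
    maximum singular value. Inputs are arrays of complex square
    frequency-response matrices, shape (n_freq, ny, ny)."""
    bounds = []
    for Gm, Gn in zip(G_meas, G_nom):
        delta = (Gm - Gn) @ np.linalg.inv(Gn)
        # singular values come back sorted in descending order
        bounds.append(np.linalg.svd(delta, compute_uv=False)[0])
    return np.array(bounds)
```

The envelope over frequency of this bound is what a robust control design would treat as the weight on the output multiplicative uncertainty block.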