The use of focused laser beams and fiber optics to control the location and density of current filaments in GaAs photoconductive semiconductor switches (PCSS) is described in this paper. An intensified CCD camera is used to monitor the infrared photoluminescence of the filaments during fast initiation of high gain switching for several sizes of lateral GaAs PCSS (e.g. 0.5×5, 1×5, 2.5×5, 2×30, and 15×20 mm{sup 2}). The switches are triggered with either a focused, mode-locked, Nd:YAG laser (532 and 1064 nm) or fiber-optically coupled semiconductor laser diodes (approximately 900 nm). The dependencies of the size, location, and density of the current filaments on the optical trigger, switch voltage, and switch current will be discussed. The impact of optically controlled current filaments on device design and lifetime is emphasized. Electro-optical switching amplification is demonstrated using the high gain switching mode of GaAs (lock-on). A single semiconductor laser diode is used to trigger a small GaAs PCSS. This PCSS is used to drive a 15-element laser diode array. Both electrical and optical pulse compression, sharpening, and amplification are achieved. Estimates for electrical and optical power gains are 8000 and 750 respectively.
A gridless numerical technique called smooth particle hydrodynamics (SPH) has been coupled to the transient dynamics finite element code PRONTO. In this paper, a new weighted residual derivation for the SPH method will be presented, and the methods used to embed SPH within PRONTO will be outlined. Example SPH-PRONTO calculations will also be presented. Smooth particle hydrodynamics is a gridless Lagrangian technique. Requiring no mesh, SPH has the potential to model material fracture, large shear flows, and penetration. SPH computes the strain rate and the stress divergence based on the nearest neighbors of a particle, which are determined using an efficient particle sorting technique. Embedding the SPH method within PRONTO allows part of the problem to be modeled with quadrilateral finite elements while other parts are modeled with the gridless SPH method. SPH elements are coupled to the quadrilateral elements through a contact-like algorithm.
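The kernel-summation idea at the heart of SPH (field quantities built from nearest-neighbor sums rather than from a mesh) can be illustrated with a minimal one-dimensional density estimate. The cubic-spline kernel and all names below are illustrative choices for this sketch, not details of the SPH-PRONTO implementation:

```python
import numpy as np

def w_cubic(r, h):
    """1-D cubic-spline smoothing kernel with support 2h (normalization 2/(3h))."""
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w

def sph_density(x, m, h):
    """Summation density: rho_i = sum_j m_j W(x_i - x_j, h)."""
    dx = x[:, None] - x[None, :]          # all pairwise separations
    return (m[None, :] * w_cubic(dx, h)).sum(axis=1)

# Uniformly spaced particles of mass m over unit spacing dx should recover
# rho = m/dx in the interior (away from the free ends).
x = np.linspace(0.0, 1.0, 101)
m = np.full_like(x, 0.01)                 # mass per particle
rho = sph_density(x, m, h=2.0 * (x[1] - x[0]))
```

In a production code the O(N²) pairwise sum above is replaced by the neighbor-sorting step the abstract mentions, so each particle only sums over particles within the kernel support.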
The response of thickness shear mode (TSM) resonators in liquids is examined. Smooth-surface devices, which viscously entrain a layer of contacting liquid, respond to the product of liquid density and viscosity. Textured-surface devices, which also trap liquid in surface features, exhibit an additional response that depends on liquid density alone. Combining smooth and textured resonators in a monolithic sensor allows simultaneous measurement of liquid density and viscosity.
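The simultaneous-measurement idea can be sketched as a two-equation inversion: the smooth device responds to the square root of the density-viscosity product, while the textured device adds a term in density alone. The sensitivity constants A and B below are purely illustrative placeholders, not values from the paper:

```python
import math

# Hypothetical sensitivity constants for a TSM resonator pair (illustrative only):
A = 1.0e3   # scales the viscous sqrt(rho*eta) response
B = 50.0    # scales the trapped-liquid (rho-only) response

def responses(rho, eta):
    """Forward model: smooth device sees A*sqrt(rho*eta); textured adds B*rho."""
    df_smooth = A * math.sqrt(rho * eta)
    df_textured = df_smooth + B * rho
    return df_smooth, df_textured

def invert(df_smooth, df_textured):
    """Recover density and viscosity from the two measured responses."""
    rho = (df_textured - df_smooth) / B
    eta = (df_smooth / A) ** 2 / rho
    return rho, eta

# Round trip with water-like properties (rho = 1000 kg/m^3, eta = 1e-3 Pa*s)
s, t = responses(1000.0, 1.0e-3)
rho, eta = invert(s, t)
```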
The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of site characterization activities so that they have minimal impact on the ability of the site to isolate waste and on tests performed as part of the characterization process. Two examples of site characterization activities are the construction of an Exploratory Studies Facility, which may include underground shafts, drifts, and ramps, and surface-based testing activities, which may require borehole drilling, excavation of test pits, and road watering for dust control. The information in this report pertains to two-dimensional numerical calculations modeling the movement of surficially applied water and the potential effects of that water on repository performance and underground experiments. This document contains information that has been used in preparing recommendations for two Yucca Mountain Site Characterization Project documents: Appendix I of the Exploratory Studies Facility Design Requirements document, and the Surface-Based Testing Field Requirements Document.
The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level radioactive waste repository. Analyses reported herein were performed to support the design of site characterization activities so that these activities will have a minimal impact on the ability of the site to isolate waste and a minimal impact on underground tests performed as part of the characterization process. These analyses examine the effect of water to be used in the underground construction and testing activities for the Exploratory Studies Facility on in situ conditions. Underground activities and events where water will be used include construction, expected but unplanned spills, and fire protection. The models used predict that, if the current requirements in the Exploratory Studies Facility Design Requirements are observed, water that is imbibed into the tunnel wall rock in the Topopah Spring welded tuff can be removed over the preclosure time period by routine or corrective ventilation, and also that water imbibed into the nonwelded Paintbrush Tuff will not reach the potential waste storage area.
Synthesis of hybrid organic-inorganic materials with ionic functionality within the polymer backbone has been achieved. A new family of hypervalent spiro anionic polysiliconates and polygermylates has been prepared. These materials were shown to be thermally stable to moderate temperatures and are completely air and moisture stable. Analysis by solution and solid state NMR verified the presence of the hypervalent functionality. We are currently examining the effect that alteration of the condensing reagent and/or the counterion may have on bulk properties of the ionomeric material.
Three concepts to measure incident flux on an external central receiver are discussed: (1) relative, real-time power measurement; (2) flux mapping and incident power measurement; and (3) real-time flux mapping. Two concepts to measure receiver surface temperatures, low- and high-resolution temperature measurement, are also discussed, along with the potential and shortcomings of these concepts for making the desired measurements and the measurement uncertainties caused by atmospheric and surface property variations. These concepts can aid in the operation and evaluation of the receiver and plant. Tests have shown that the incident flux distribution on a surface can be mapped using a fixed, narrow white target and a CCD camera system by recording images of the beam as it is passed over the target and building a composite image. Tests with infrared cameras have shown they are extremely valuable tools for determining temperature profiles during startup of the receiver and throughout operation. This paper describes each concept in detail along with the status of testing to determine the feasibility of these concepts.
The calculated valence charge density of the recently synthesized NaPd{sub 3}H{sub 2} compound is compared with that of palladium hydride, PdH, from which it can be derived.
Use of lead-indium solders in microelectronics packaging has increased over the last decade. This increased usage is due to improved properties, such as greater thermo-mechanical fatigue resistance, lower intermetallic formation rates with base metallizations such as copper, and lower reflow temperatures. However, a search of the literature reveals no comprehensive studies on phase equilibrium relations between copper metal and lead-indium solder. Our effort combines experimental data acquisition and computer modeling to obtain the Cu-In-Pb ternary phase diagram. Isotherms and isopleths of interest at low temperatures are determined by means of differential scanning calorimetry and electron probe microanalysis. Thermodynamic models of these sections serve as a guide for efficient experimentation.
Nitrided gate oxides have been fabricated by furnace oxidation in N{sub 2}O with and without prior oxidation in O{sub 2}. SIMS nitrogen profiles show a sharp peak at the Si-insulator interface for both processes. Improved breakdown characteristics and reduced oxide damage after irradiation and charge injection are obtained.
The roles of net positive oxide trapped charge and surface recombination velocity in the excess base current of BJTs are identified. The effects of the two types of damage can be detected by plotting the excess base current versus base-emitter voltage. Differences and similarities between ionizing-radiation-induced and hot-electron-induced degradation are discussed.
The inertial confinement fusion (ICF) program at Sandia National Laboratories (SNL) is directed toward validating light ions as an efficient driver for ICF defense and energy applications. The light ion laboratory microfusion facility (LMF) is envisioned as a facility in which high gain ICF targets could be developed and utilized in defense-related experiments. The relevance of LMF technology to eventual inertial fusion energy (IFE) applications is assessed via a comparison of LMF technologies with those projected in the Light Ion Beam Reactor Assessment (LIBRA) conceptual reactor design study.
This paper summarizes the results of a study performed by the US and Germany to assess the technical and economic potential of central receiver power plants and to identify the necessary research and development (R&D) activities required to reach demonstration and commercialization. Second generation power plant designs, employing molten-salt and volumetric-air receivers, were assessed at sizes of 30 and 100 MWe. The study developed a common guideline and used data from previous system tests and studies. The levelized-energy costs for the second generation plants were estimated and found to be competitive with costs from fossil-fueled power plants. Potential for further cost reductions exists if technical improvements can be introduced successfully in the long term. Additionally, the study presents results of plant reliability and uncertainty analyses. Mid- and long-term technical potentials are described, as well as recommendations for the R&D activities needed to reach the goal of large-scale commercialization. The results of this study have already helped direct research in the US and Europe. For example, the favorable potential for these technologies has led to the Solar Two molten-salt project in the US and the TSA volumetric receiver test in Spain. In addition, early analysis conducted within this study indicated that an advanced thermal storage medium was necessary to achieve favorable economics for the air plant. This led to the design of the thermal storage system currently being tested in Spain. In summary, each of the investigated receiver technologies has mid- and long-term potential for improving plant performance and reducing capital and energy costs (resulting in less than 10 cts/kWh given excellent insolation conditions) in an environmentally safe way and largely independent of fossil-fuel prices.
We have developed a novel molecular beam mass spectrometry technique that can quantitatively analyze the gas-phase composition in a CVD reactor. The technique simultaneously monitors a wide variety of radical and stable species, and their concentrations can be determined with sensitivities approaching 1 ppM. Measurements performed in a diamond deposition system have given us keen insights into the important phenomena that affect the growth environment. This paper first discusses the primary gas sampling design issues. In the second part, the details of the experimental results and their implications will be described.
Hydrotalcite coatings on aluminum alloys are being developed for corrosion protection of aluminum in aggressive saline environments. Coating bath composition, surface pretreatment, and alloying elements in aluminum all influence the performance of these coatings during salt spray testing. The coating bath, composed of lithium carbonate, requires aging by dissolution of aluminum into the bath in order to grow corrosion-resistant coatings. Coatings formed in non-aged baths do not perform well in salt spray testing. The alloying elements in aluminum alloys, especially copper, influence coating growth and formation, leading to thin coatings. The effect of the alloying elements is to limit the supply of aluminum to the coating/electrolyte interface and hinder growth of hydrotalcite on aluminum alloys.
Cs adsorption/desorption on sapphire was studied using combined surface analytical techniques. An approximate initial sticking coefficient for Cs on sapphire, measured using reflection mass spectrometry, was found to be 0.9. Thermal Desorption Mass Spectrometry (TDMS) and Auger Electron Spectroscopy (AES) were used to verify that a significant decrease in sticking coefficient occurs as the Cs coverage reaches a critical submonolayer value. TDMS analysis demonstrates that Cs is stabilized on a clean sapphire surface at temperatures (1200 K) in excess of the temperatures experienced by sapphire in a TOPAZ-2 thermionic fuel element (TFE). Surface contaminants on sapphire can enhance Cs adsorption relative to the clean surface. C contamination eliminates the high-temperature state of Cs desorption found on clean sapphire but shifts the bulk of the Cs desorption from 400 to 620 K. Surface C is a difficult contaminant to remove from sapphire, requiring annealing above 1400 K. Whether Cs is stabilized on sapphire in a TFE environment will most likely depend on the relation between surface contamination and surface structure.
Ortho-Chlorobenzylidene Malononitrile (CS) is one of a number of riot control agents referred to as tear gas, although it is in fact a particulate suspension. The toxicity of this material has been studied in varying detail. The purpose of this study was to review and summarize the literature data available on the toxicity of CS.
Cold Smoke is a dense white smoke produced by the reaction of titanium tetrachloride and aqueous ammonia aerosols. Early studies on the toxicity of this nonpyrotechnically generated smoke indicated that the smoke itself is essentially non-toxic (i.e., exhibits no systemic toxicity or organ damage due to exposure) under normal deployment conditions. The purpose of this evaluation was to review and summarize the recent literature data available on the toxicity of Cold Smoke, its chemical constituents, and its starting materials.
The Microsoft (MS) Windows product is widely available for PCs; many thousands of copies exist at Sandia. All of the MS applications in Windows have a Help file. This Help file informs the user "how to" use and run that application; it is an "on-line" manual. The "Help Compiler" was obtained from Microsoft. This compiler enables one to insert text in a form the MS "Help Engine" recognizes, so that all of the Help file features, including hypertext (hot links), browsing, searching, indexing, bookmarks, and annotation, are available for your text. This turns a document into a "Smart Document." The use of this Smart Document System (SDS) for Engineering Procedures (EPs) is described.
Leonard, J.; Doran, L.; Floyd, H.L.; Goetsch, B.; Parrott, L.
This item is a copy of the December 1993 issue of Manufacturing Technology, a Sandia Technology Bulletin. It has information on a number of different projects being conducted by Sandia in the general area of manufacturing sciences. Topics addressed include the following: center for information-technology manufacturing gears up, lucrative flat-panel display market targeted; researchers make copper stick to teflon, patterned adhesion may provide ideal conductor/substrate combination for microcircuits; contact algorithm enhances simulation of manufacturing processes, algorithm efficiently handles previously difficult analyses of punching and cutting operations; national machine tool partnership rolls into action, national laboratories share technology to boost US machine-tool industry; closed-loop MAST system eyes robotic manufacturing, fast, accurate, low-cost sensor demonstrated on furnace brazing.
A brief history of lightning protection at Pantex nuclear explosive areas (NEAs) is given. An assessment of current Pantex lightning protection at NEAs is summarized. Recommendations for further improvements in lightning protection are described.
Supercomputing '93, a high-performance computing and communications conference, was held November 15th through 19th, 1993 in Portland, Oregon. For the past two years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1993 conference, the results of Sandia's efforts in exploring and utilizing Asynchronous Transfer Mode (ATM) and Synchronous Optical Network (SONET) technologies were vividly demonstrated by building and operating three distinct networks. The networks encompassed a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second, an ATM network running on a SONET circuit at the Optical Carrier (OC) rate of 155.52 megabits per second, and a High Performance Parallel Interface (HIPPI) network running over a 622.08 megabits per second SONET circuit. The SMDS and ATM networks extended from Albuquerque, New Mexico to the showroom floor, while the HIPPI/SONET network extended from Beaverton, Oregon to the showroom floor. This paper documents and describes these networks.
An analysis of the impact of the Minuteman III Payload Transporter Type III into a nonyielding target at 46 m.p.h. and 30 m.p.h., and into a yielding target at 46 m.p.h. is presented. The analysis considers the structural response of the tiedown system which secures the Minuteman III re-entry system to the floor of the payload transporter. A finite element model of the re-entry system, its tiedown system, which includes tie-rods and shear pins, and the pallet plate which is attached to the transporter floating plate, was constructed. Because accelerations of the payload transporter are not known, acceleration data from one-quarter scale testing of the Safe Secure Trailer was used to investigate the response of the tiedown system. These accelerations were applied to the pallet plate. The ABAQUS computer code was used to predict the forces in the members of the tiedown system.
SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program for simulating the integrated performance of complex flow systems. SAFSIM provides sufficient versatility to allow the engineering simulation of almost any system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary SAFSIM development goals. SAFSIM contains three basic physics modules: (1) a fluid mechanics module with flow network capability; (2) a structure heat transfer module with multiple convection and radiation exchange surface capability; and (3) a point reactor dynamics module with reactivity feedback and decay heat capability. Any or all of the physics modules can be implemented, as the problem dictates. SAFSIM can be used for compressible and incompressible, single-phase, multicomponent flow systems. Both the fluid mechanics and structure heat transfer modules employ a one-dimensional finite element modeling approach. This document contains a description of the theory incorporated in SAFSIM, including the governing equations, the numerical methods, and the overall system solution strategies.
Preliminary estimates of national benefits from electric utility applications of battery energy storage through the year 2010 are presented along with a discussion of the particular applications studied. The estimates in this report were based on planning information reported to DOE by electric utilities across the United States. Future studies are planned to refine these estimates as more application-specific information becomes available.
The integrity of the electrostatic bonding procedures used to equilibrate operating technicians and weapon components was questioned during the course of the quality evaluation assessments of the W70, W68, and B57 dismantlement programs. A multi-disciplined, interlaboratory team was convened on an ad hoc basis to resolve certain static bonding issues. The accomplishments of this team in upgrading the integrity of the bonding process include recommendations on the proper use of wrist straps, training of technicians in their use, and procedures to reduce accumulation of static charge on components during routine handling operations.
The On-Line Aerosol Monitor (OLAM) is a light attenuation device designed and built at the Idaho National Engineering Laboratory (INEL) by EG&G Idaho. Its purpose is to provide an on-line indication of aerosol concentration in the PHEBUS-FP tests. It does this by measuring the attenuation of a light beam across a tube through which an aerosol is flowing. The OLAM does not inherently give an absolute response and must be calibrated. A calibration has been performed at Sandia National Laboratories' (SNL) Sandia Aerosol Research Laboratory (SARL), and the results are described here. Ammonium chloride and sodium chloride calibration aerosols were used, and the data for the sodium chloride aerosol are well described by a model presented in this report. Detectable instrument response is seen over a range of 0.1 cm{sup 3} of particulate material per m{sup 3} of gas to 10 cm{sup 3} of particulate material per m{sup 3} of gas.
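Since the OLAM is a light-attenuation device, its calibration can be sketched with a Beer-Lambert-style model in which transmission decays exponentially with concentration times path length. The path length and extinction constant below are hypothetical values chosen for the sketch, not the instrument's actual calibration:

```python
import math

PATH_CM = 5.0   # assumed optical path across the flow tube (illustrative value)

def transmission(conc, k, L=PATH_CM):
    """Beer-Lambert forward model: I/I0 = exp(-k*conc*L), with conc in
    cm^3 of particulate per m^3 of gas and k a fitted extinction constant."""
    return math.exp(-k * conc * L)

def concentration(trans, k, L=PATH_CM):
    """Invert a measured transmission ratio to an aerosol concentration."""
    return -math.log(trans) / (k * L)

def fit_k(calib, L=PATH_CM):
    """Fit k by least squares through the origin on the linearized form
    -ln(T) = k*(L*c), from (concentration, transmission) calibration pairs."""
    xs = [L * c for c, _ in calib]
    ys = [-math.log(t) for _, t in calib]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic calibration sweep spanning the instrument's stated range
k_true = 0.05
calib = [(c, transmission(c, k_true)) for c in (0.1, 0.5, 1.0, 5.0, 10.0)]
k_fit = fit_k(calib)
```

A real calibration against a polydisperse aerosol would replace the single-constant model with the report's model, but the inversion step is the same idea.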
Pressure-pulse, constant-pressure flow, and pressure-buildup tests have been performed in bedded evaporites of the Salado Formation at the Waste Isolation Pilot Plant (WIPP) site to evaluate the hydraulic properties controlling brine flow through the Salado. Transmissivities have been interpreted from six sequences of tests conducted on five stratigraphic intervals within 15 m of the WIPP underground excavations.
This report responds to the Department of Energy's request that Sandia National Laboratories compare existing technologies against several advanced technologies as they apply to DOE needs to monitor the movement of material, weapons, or personnel for safety and security programs. The authors describe several material control systems, discuss their technologies, suggest possible applications, discuss assets and limitations, and project costs for each system. The following systems are described: WATCH system (Wireless Alarm Transmission of Container Handling); Tag system (an electrostatic proximity sensor); PANTRAK system (Personnel And Material Tracking); VRIS (Vault Remote Inventory System); VSIS (Vault Safety and Inventory System); AIMS (Authenticated Item Monitoring System); EIVS (Experimental Inventory Verification System); Metrox system (canister monitoring system); TCATS (Target Cueing And Tracking System); LGVSS (Light Grid Vault Surveillance System); CSS (Container Safeguards System); SAMMS (Security Alarm and Material Monitoring System); FOIDS (Fiber Optic Intelligence & Detection System); GRADS (Graded Radiation Detection System); and PINPAL (Physical Inventory Pallet).
An essential component of a horizontal, underground nuclear test setup at the Nevada Test Site is the auxiliary closure system. The massive gates that slam shut immediately after a device has been detonated allow the prompt radiation to pass, but block debris and hot gases from continuing down the tunnel. Thus, the gates protect experiments located in the horizontal line-of-sight steel pipe. Sandia National Laboratories has been the major designer and developer of these closure systems. This report records the history of SNL's participation in and contributions to the technology of auxiliary closure systems used in horizontal tunnel tests in the underground test program.
VICTORIA is a mechanistic computer code that treats fission product behavior in the reactor coolant system during a severe accident. During an accident, fission products that deposit on structural surfaces produce heat loads that can cause fission products to revaporize and possibly cause structures, such as a pipe, to fail. This mechanism had been lacking from the VICTORIA model. This report describes the structural heat-up model that has recently been implemented in the code. A sample problem shows that revaporization of fission products can occur as structures heat up due to radioactive decay. In the sample problem, the mass of deposited fission products reaches a maximum, then diminishes. Similarly, temperatures of the deposited film and adjoining structure reach a maximum, then diminish.
This performance assessment characterized plausible treatment options conceived by the Idaho National Engineering Laboratory (INEL) for its spent fuel and high-level radioactive waste and then modeled the performance of the resulting waste forms in two hypothetical, deep, geologic repositories: one in bedded salt and the other in granite. The results of the performance assessment are intended to help guide INEL in its study of how to prepare wastes and spent fuel for eventual permanent disposal. This assessment was part of the Waste Management Technology Development Program designed to help the US Department of Energy develop and demonstrate the capability to dispose of its nuclear waste, as mandated by the Nuclear Waste Policy Act of 1982. The waste forms comprised about 700 metric tons of initial heavy metal (or equivalent units) stored at the INEL: graphite spent fuel, experimental low enriched and highly enriched spent fuel, and high-level waste generated during reprocessing of some spent fuel. Five different waste treatment options were studied; in the analysis, the options and resulting waste forms were analyzed separately and in combination as five waste disposal groups. When the waste forms were studied in combination, the repository was assumed to also contain vitrified high-level waste from three DOE sites for a common basis of comparison and to simulate the impact of the INEL waste forms on a moderate-sized repository. The performance of the waste form was assessed within the context of a whole disposal system, using the U.S. Environmental Protection Agency's Environmental Radiation Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes, 40 CFR 191, promulgated in 1985. Though the waste form behavior depended upon the repository type, all current and proposed waste forms provided acceptable behavior in the salt and granite repositories.
The purpose of this document is to present technical information that should be useful for understanding and applying locking systems for physical protection and control. There are major sections on hardware for locks, vaults, safes, and security containers. Other topics include management of lock systems and safety considerations. This document also contains notes on standards and specifications and a glossary.
To extend the life of gas delivery systems and improve wafer yields, there is a need for an in-line monitor of H{sub 2}O contamination. The goal of this project is to develop such an instrument, based on infrared spectroscopy, that has a detection limit of 30 ppB or better and costs $50K or less. This year's work considered the application of Fourier transform infrared (FTIR) spectroscopy to H{sub 2}O detection in N{sub 2} and HCl. Using a modified commercial FTIR spectrometer and a long-path gas cell, a detection limit of about 10 ppB was demonstrated for H{sub 2}O in N{sub 2} and HCl. This includes about a factor of three improvement achieved by applying quantitative multivariate calibration methods to the problem. Absolute calibration of the instrument was established from absorptivities of prominent H{sub 2}O bands between 3600 and 3910 cm{sup {minus}1}. Methods are described to minimize background moisture in the beam path. Spectral region, detector type, resolution, cell type, and path length were optimized. Resolving the narrow H{sub 2}O bands (FWHM {approx} 0.20 cm{sup {minus}1}) is not necessary to achieve optimal sensitivity. In fact, optimal sensitivity is achieved at 2 to 4 cm{sup {minus}1} resolution, allowing the use of an inexpensive interferometer. A much smaller, second generation instrument is described that will have a conservatively estimated detection limit of 1 ppB. Since the present laboratory instrument can be duplicated in its essential parts for about $90K, it is realistic to project a cost of $50K for the new instrument. An accessory for existing FTIR spectrometers was designed that may be marketed for as little as $10K.
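The quantitative multivariate calibration mentioned above can be illustrated with a classical least-squares (CLS) fit, in which a measured spectrum is decomposed onto known pure-component spectra. The synthetic band shapes, noise level, and concentrations below are invented for the sketch and do not come from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pure-component spectra over the 3600-3910 cm^-1 window:
# a sharp (hypothetical) H2O band plus a slowly varying baseline component.
nu = np.linspace(3600.0, 3910.0, 200)
k_h2o = np.exp(-((nu - 3750.0) / 15.0) ** 2)   # illustrative H2O band shape
k_base = 0.2 + 0.001 * (nu - 3600.0)           # illustrative baseline drift
K = np.vstack([k_h2o, k_base])                 # (components, wavenumbers)

def predict_conc(spectrum, K):
    """CLS estimate: solve spectrum ~= c @ K for the concentration vector c."""
    c, *_ = np.linalg.lstsq(K.T, spectrum, rcond=None)
    return c

# Simulated measurement: 30 ppB H2O plus unit baseline, with detector noise
true_c = np.array([30.0, 1.0])
A = true_c @ K + rng.normal(scale=1e-3, size=nu.size)
est = predict_conc(A, K)
```

Using many wavenumber channels at once is what buys the factor-of-three sensitivity improvement over a single-band calibration: the least-squares fit averages down channel noise and separates the analyte band from baseline variation.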
A dynamic simulator of the TOPAZ II reactor system has been developed for the Nuclear Electric Propulsion Space Test Program. The simulator combines first-principle modeling and empirical correlations in its algorithm to attain the modeling accuracy and computational throughput that are required for real-time execution. The overall execution time of the simulator for each time step is 15 ms when no data are written to the disk, and 18 ms when nine double precision data points are written to the disk once every time step. The simulation program has been tested and is able to handle a step decrease of $8 worth of reactivity. It also provides simulations of fuel, emitter, collector, stainless steel, and ZrH moderator failures. Presented in this paper are the models used in the calculations, a sample simulation session, and a discussion of the performance and limitations of the simulator. The simulator has been found to provide realistic real-time dynamic response of the TOPAZ II reactor system under both normal and casualty conditions.
Samples of chemically-vapor-deposited micrometer and sub-micrometer-thick films of polysilicon were analyzed by transmission electron microscopy (TEM) in cross-section and by Raman spectroscopy with illumination at their surface. TEM and Raman spectroscopy both find varying amounts of polycrystalline and amorphous silicon in the wafers. Raman spectra obtained using blue, green and red excitation wavelengths to vary the Raman sampling depth are compared with TEM cross-sections of these films. Films showing crystalline columnar structures in their TEM micrographs have Raman spectra with a band near 497 cm{sup {minus}1} in addition to the dominant polycrystalline silicon band (521 cm{sup {minus}1}). The TEM micrographs of these films have numerous faulted regions and fringes indicative of nanometer-scale silicon structures, which are believed to correspond to the 497 cm{sup {minus}1} Raman band.
Optical communication is likely to significantly speed up parallel computation because the vast bandwidth of the optical medium can be divided to produce communication networks of very high degree. However, the problem of contention in high-degree networks makes the routing problem in these networks theoretically (and practically) difficult. In this paper we examine Valiant's h-relation routing problem, which is a fundamental problem in the theory of parallel computing. The h-relation routing problem arises both in the direct implementation of specific parallel algorithms on distributed-memory machines and in the general simulation of shared memory models such as the PRAM on distributed-memory machines. In an h-relation routing problem each processor has up to h messages that it wishes to send to other processors and each processor is the destination of at most h messages. We present a lower bound for routing an h-relation (for any h > 1) on a complete optical network of size n. Our lower bound applies to any randomized distributed algorithm for this task. Specifically, we show that the expected number of communication steps required to route an arbitrary h-relation is {Omega}(h + {radical}log log n). This is the first known lower bound for this problem which does not restrict the class of algorithms under consideration.
The combined dynamic environments of vibration and linear acceleration are common to a large number of spacecraft components and other devices. Testing such devices has normally been a two-step process in which independent vibration and centrifuge tests are performed. There is a concern that the combined effects from these two dynamic environments could cause unexpected operational failures that were not predicted from either analysis or independent testing. This paper describes the design and performance of a testing facility that combines vibration and centrifuge testing in a single operation. The test facility is called the Vibrafuge and utilizes Sandia National Laboratories' (SNL) 29-ft underground centrifuge with an attached electrodynamic shaker. Also addressed are activities underway at SNL on development of a combined vibration and acoustic test facility (ATF).
An Integrated Demonstration Program, hosted by the Fernald Environmental Restoration Management Corporation (FERMCO), has been established for investigating technologies applicable to the characterization and remediation of soils contaminated with uranium. An important part of this effort is the evaluation of field screening tools capable of acquiring high-resolution information on the distribution of uranium contamination in surface soils in a cost- and time-efficient manner. Consistent with this need, four field screening technologies have been demonstrated at two hazardous waste sites at the FERMCO facility. The four technologies tested are wide-area gamma spectroscopy, beta scintillation counting, laser ablation-inductively coupled plasma-atomic emission spectroscopy (LA-ICP-AES), and long-range alpha detection (LRAD). An important finding of this demonstration was the difficulty of comparing data collected by means of multiple independent measurement techniques. Difficulties are attributed to differences in measurement scale, differences in the basic physics upon which the various measurement schemes are predicated, and differences in the general performance of detector instrumentation. It follows that optimal deployment of these techniques requires an approach that accounts for the intrinsic differences noted above. As such, emphasis is given in this paper to the development of a methodology for integrating these techniques for use in site characterization programs, as well as the development of a framework for interpreting the collected data. The methodology described here also has general application to other field-based screening technologies and soil sampling programs.
This paper describes the development and use of the Multi-Axis Seam Tracking (MAST) sensor for tracking seams or other features in real-time. Four independent, spatially-distributed electric fields are used to sense changes in the relative position of the sensor and the workpiece. The MAST sensor is very inexpensive compared with commercially available seam tracking sensors. It can be used in systems to perform cost-effective small-lot manufacturing operations in a faster, more consistent manner. The MAST sensor is used in an automated system for dispensing braze paste during a rocket nozzle fabrication process.
Pitting of 1100 Al (Al-1.0(Fe,Cu,Si)) due to Al₃Fe constituent particles has been studied by examining a variety of intrinsic, extrinsic, and environmental factors that contribute to localized corrosion. Consistent with results from other studies, Al₃Fe is noble with respect to its microstructural surroundings, and pitting is localized to the particle periphery. Polarization curves indicate that cathodic electron transfer reactions are supported on Al₃Fe at high rates; however, anodic electron transfer reactions are not. Interparticle spacing appears to play a strong role in determining where pitting will occur, while Al₃Fe particle area plays a lesser role. Solution pH, applied potential, and exposure time each have measurable effects on the electrochemical behavior of Al₃Fe and the α-Al matrix phase, which can impact either the galvanic potential of the Al₃Fe/α-Al couple or charge transfer processes on Al₃Fe particles.
Using surface acoustic wave (SAW) devices, three approaches to the effective use of chemically sensitive interfaces that are not highly chemically selective have been examined: (1) molecular identification from time-resolved permeation transients; (2) use of multifrequency SAW devices to determine the frequency dependence of analyte/film interactions; (3) use of an array of SAW devices bearing diverse chemically sensitive interfaces to produce a distinct response pattern for each analyte. In addition to their well-known sensitivity to mass changes (0.0035 monolayer of N₂ can be measured), SAW devices respond to the mechanical and electronic properties of thin films, enhancing response information content but making a thorough understanding of the perturbation critical. Simultaneous measurement of changes in frequency and attenuation, which can provide the information necessary to determine the type of perturbation, is used as part of the above discrimination schemes.
We present a global motion planner for tracing curves in three dimensions with robot manipulator tool frames. This planner generates an efficient motion satisfying three types of constraints: constraints on the tool tip for curve tracing, robot kinematic constraints, and robot-link collision constraints. Motions are planned using a global search algorithm and a local planner based on a potential-field approach. This planner can be used with any robot, including redundant manipulators, and can control the trade-offs between its algorithmic completeness and computation time. It can be applied in many robotic tasks such as seam welding, caulking, edge deburring, and chamfering, and is expected to reduce motion programming times from days to minutes.
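As background on the local planner's potential-field approach, the sketch below performs gradient descent on the classic attractive/repulsive potential field; the field definition, gains, and 2-D setting are generic textbook choices of ours, not the paper's actual formulation.

```python
import math

def potential_step(q, goal, obstacles, step=0.05,
                   k_att=1.0, k_rep=0.01, rho0=0.5):
    """One gradient-descent step on a classic attractive/repulsive
    potential field.  q and goal are (x, y); obstacles is a list of
    ((x, y), radius).  Illustrative sketch only."""
    # Attractive term: gradient of 0.5*k_att*|q - goal|^2.
    gx = k_att * (q[0] - goal[0])
    gy = k_att * (q[1] - goal[1])
    # Repulsive term: active only within clearance rho0 of an obstacle.
    for (ox, oy), r in obstacles:
        dx, dy = q[0] - ox, q[1] - oy
        dist = math.hypot(dx, dy)
        d = dist - r                    # clearance to obstacle surface
        if 0.0 < d < rho0:
            coef = k_rep * (1.0 / d - 1.0 / rho0) / d**2
            gx -= coef * dx / dist      # gradient points toward the
            gy -= coef * dy / dist      # obstacle; descent pushes away
    return (q[0] - step * gx, q[1] - step * gy)

# Trace a path from start to goal past an off-axis obstacle.
q, goal = (0.0, 0.0), (2.0, 0.0)
for _ in range(800):
    q = potential_step(q, goal, [((1.0, 0.4), 0.2)])
```

The global search layer described in the abstract exists precisely because such gradient descent alone can stall in local minima of the potential.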
Robotic systems are often very complex and difficult to operate, especially as multiple robots are integrated to accomplish difficult tasks. In addition, training the operators of these complex robotic systems is time-consuming and costly. In this paper, a virtual reality based robotic control system is presented. The virtual reality system provides a means by which operators can operate, and be trained to operate, complex robotic systems in an intuitive, cost-effective way. Operator interaction with the robotic system is at a high, task-oriented, level. Continuous state monitoring prevents illegal robot actions and provides interactive feedback to the operator and real-time training for novice users.
A sensor-based control approach for real-time seam tracking of rocket thrust chamber assemblies has been developed to enable automation of a braze paste dispensing process. This approach utilizes a non-contact Multi-Axis Seam Tracking (MAST) sensor to track the seams. The MAST sensor measures capacitance variations between the sensor and the workpiece and produces four varying voltages which are read directly into the robot controller. A PID control algorithm which runs at the application program level has been designed based upon a simple dynamic model of the combined robot and sensor plant. The control algorithm acts on the incoming sensor signals in real time to guide the robot motion along the seam path. Experiments demonstrate that seams can be tracked at 100 mm/sec within the accuracy required for braze paste dispensing.
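A minimal sketch of the kind of PID correction loop described, closed around a toy first-order plant standing in for the combined robot and sensor; the gains, sample time, and plant model here are invented for illustration and are not those of the actual system.

```python
class PID:
    """Discrete PID controller; gains and sample time are invented
    example values, not those used in the actual system."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * deriv)

# Toy stand-in for the robot+sensor plant: the correction u drives the
# sensed lateral seam offset toward zero.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
offset = 1.0                       # initial seam offset (mm)
for _ in range(3000):
    u = pid.update(offset)
    offset -= u * pid.dt           # plant: offset shrinks with correction
```

In the real system the error signal would come from the four MAST sensor voltages rather than a simulated plant.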
Probabilistic risk assessment (PRA) is a process for evaluating hazardous operations by considering what can go wrong, the likelihood of these undesired events, and the resultant consequences. Techniques used in PRA originated in the 1960s. Although there were early exploratory applications to nuclear weapons and other technologies, the first major application of these techniques was in the Reactor Safety Study, WASH-1400 [1], in which the risks of nuclear power accidents were thoroughly investigated for the first time. Recently, these techniques have begun to be adapted to nuclear weapon system applications. This report discusses this application to nuclear weapon systems.
In this paper, we present a new hybrid motion planner that is capable of exploiting previous planning episodes when confronted with new planning problems. Our approach is applicable when several (similar) problems are successively posed for the same static environment, or when the environment changes incrementally between planning episodes. At the heart of our system lie two low-level motion planners: a fast but incomplete planner (which we call LOCAL), and a computationally costly, (possibly resolution-) complete planner (which we call GLOBAL). When a new planning problem is presented to our planner, a meta-level planner (which we call MANAGER) decomposes the problem into segments that are amenable to solution by LOCAL. This decomposition is made by exploiting a task graph, in which successful planning episodes have been recorded. In cases where the decomposition fails, GLOBAL is invoked. The key to our planner's success is a novel representation of solution trajectories, in which segments of collision-free paths are associated with the boundary of nearby obstacles.
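The MANAGER/LOCAL/GLOBAL interplay can be sketched as follows; the trivial waypoint decomposition and the 1-D toy planners here are stand-ins of our own, not the paper's task-graph representation.

```python
def decompose(problem, task_graph):
    """Split (start, goal) into legs through waypoints recorded from
    earlier planning episodes, if the task graph has any."""
    start, goal = problem
    pts = [start] + task_graph.get((start, goal), []) + [goal]
    return list(zip(pts, pts[1:]))

def manager(problem, task_graph, local, global_plan):
    """MANAGER-style skeleton: try the fast, incomplete LOCAL on each
    segment; fall back to the costly, complete GLOBAL on failure."""
    path = [problem[0]]
    for seg in decompose(problem, task_graph):
        piece = local(seg) or global_plan(seg)
        if piece is None:
            return None                # unsolvable at this resolution
        path.extend(piece[1:])
    return path

# Toy 1-D "planners": LOCAL only bridges short gaps, GLOBAL bridges any.
def local(seg):
    a, b = seg
    if abs(a - b) > 2:
        return None
    step = 1 if b > a else -1
    return list(range(a, b, step)) + [b]

def global_plan(seg):
    a, b = seg
    step = 1 if b > a else -1
    return list(range(a, b, step)) + [b]

plan = manager((0, 10), {(0, 10): [3, 6, 8]}, local, global_plan)
```

The payoff of the real system is that recorded episodes make most segments short enough for LOCAL, so the expensive GLOBAL planner runs rarely.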
Sandia has been studying automatic assembly planning of electromechanical devices for some years, based on an implemented system called Archimedes. Work done to date has focused on automatic generation of high-level plans, and translation of these plans into robotic control code and workcell layout. More recently, the importance of an assembly planning capability as a design aid has been emphasized, as it could potentially provide early feedback to a designer on the manufacturability of the design. This paper describes the work done on assembly planning to date, plans for extending it, and its applications to agile manufacturing. In particular, we describe an agile manufacturing demonstration project underway at Sandia, and the role the Archimedes assembly planning system will play in it.
Today, 109 nuclear power plants provide over 20 percent of the electrical energy generated in the US. The operating license of the first of these plants will expire in the year 2000; one-third of the operating licenses will expire by 2010, and the remaining plant licenses are scheduled to expire by 2033. The National Energy Strategy assumes that 70 percent of these plants will continue to operate beyond their current license expiration to assist in ensuring an adequate, diverse, and environmentally acceptable energy supply for economic growth. In order to preserve this energy resource in the US, three major tasks must be successfully completed: establishment of regulations, technical standards, and procedures for the preparation and review of a license renewal application; development, verification, and validation of technical criteria and bases for monitoring, refurbishing, and/or replacing plant equipment; and demonstration of the regulatory process. Since 1985, the US Department of Energy (DOE) has been working with the nuclear industry and the US Nuclear Regulatory Commission (NRC) to establish and demonstrate the option to extend the life of nuclear power plants through the renewal of operating licenses. This paper focuses primarily on DOE's Plant Lifetime Improvement (PLIM) Program efforts to develop the technical criteria and bases for effective aging management and lifetime improvement for continued operation of nuclear power plants. This paper describes current projects to resolve generic technical issues in the principal areas of reactor pressure vessel (RPV) integrity, fatigue, and environmental qualification (EQ).
We have evaluated a wide variety of microcellular carbon foams prepared by the controlled pyrolysis and carbonization of several polymers including polyacrylonitrile (PAN), polymethacrylonitrile (PMAN), resorcinol/formaldehyde (RF), divinylbenzene/methacrylonitrile (DVB), phenolics (furfuryl alcohol), and cellulose polymers such as Rayon. The porosity may be established by several processes, including gelation (1-5), phase separation (1-3,5-8), emulsion (1,9,10), aerogel/xerogel formation (1,11-13), replication (14), and activation. In this report we present the complex impedance analysis and double-layer charging characteristics of electrodes prepared from one of these materials, activated cellulose-derived microcellular carbon foam, for double-layer capacitor applications.
This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-pressure-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction, which is used to calculate the coefficient of earth pressure at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells: the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.
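One common way to go from an angle of internal friction to the coefficient of earth pressure at rest and a minimum-stress estimate is Jaky's relation with an effective-stress correction; the sketch below is a generic example of this class of correlation and is not necessarily the exact form used by either model in the paper (the numerical inputs are invented).

```python
import math

def k0_jaky(phi_deg):
    """Jaky's at-rest earth pressure coefficient, K0 = 1 - sin(phi)."""
    return 1.0 - math.sin(math.radians(phi_deg))

def min_horizontal_stress(sigma_v, pore_pressure, phi_deg):
    """Effective-stress form sigma_h = K0*(sigma_v - p) + p, a common
    minimum-stress correlation in hydraulic-fracture design."""
    return k0_jaky(phi_deg) * (sigma_v - pore_pressure) + pore_pressure

# Example: 60 MPa overburden, 20 MPa pore pressure, 30-degree friction
# angle (all values invented for illustration).
sh = min_horizontal_stress(60e6, 20e6, 30.0)   # -> 40 MPa
```

Mini-frac tests measure the minimum horizontal stress directly, which is why the paper uses them as ground truth for such correlations.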
Currently, the production of in situ reinforcement in polymeric systems by sol-gel methods is undergoing rapid development. However, understanding of synthesis/structure/property relationships is still lacking. In order to produce sol-gel derived composite materials with sufficient mechanical properties for commercial applications, this deficit of information must be addressed. We have completed a detailed investigation of in situ silica growth in polydimethylsiloxane (PDMS)/tetraethylorthosilicate (TEOS) systems. Factors which affect the domain growth, such as catalyst activity and silica loading, have been examined by solid state ²⁹Si NMR, SEM, mechanical testing and small angle neutron scattering.
Infrared emission spectra of 21 borophosphosilicate glass (BPSG) thin films on silicon wafers were collected with the samples held at constant temperature between 125-400°C using a heating stage designed for precise temperature control (±°C). Partial least squares calibrations applied to the BPSG infrared emittance spectra allowed four BPSG thin-film properties to be simultaneously quantified with precisions of 0.1 wt. % for boron and phosphorus, 35 Å for film thickness, and 1.2°C for temperature.
Experiments were performed to examine the sensitivity of thin-film property determinations to several experimental variables when applying multivariate calibration methods to infrared reflection spectroscopic data. Results indicate that low angles of incidence are best for robust quantitative determination of boron, phosphorus, and film thickness in borophosphosilicate glass (BPSG) dielectric films. However, the polarization state of the incident beam does not affect the quantitative prediction ability.
Due to the growth in the use of analog fiber optic data transmission systems at the Nevada Test Site and other locations, Sandia National Laboratories (SNL) has recognized the need to be able to multiplex several data channels per fiber. Wavelength-division, frequency-division, and time-division multiplex techniques have been investigated. A time-division system using optically-multiplexed laser transmitters driving a common receiver was fielded on the HUNTERS TROPHY event at the NTS. Stability, noise, and dynamic range compared favorably with those seen on non-multiplexed links. Amplitude, width, and rise time of data transmitted via the multiplexed links were consistent with those recorded from non-multiplexed links.
Synthetic aperture radar (SAR) was evaluated as a potential technological improvement over the Coast Guard's existing side-looking airborne radar (SLAR) for oil-spill surveillance applications. The US Coast Guard Research and Development Center (R&D Center), Environmental Branch, sponsored a joint experiment including the US Coast Guard, Sandia National Laboratories, and the National Oceanic and Atmospheric Administration (NOAA), Hazardous Materials Division. Radar imaging missions were flown on six days over the coastal waters off Santa Barbara, CA, where there are constant natural seeps of oil. Both the Coast Guard SLAR and the Sandia National Laboratories SAR were employed to acquire simultaneous images of oil slicks and other natural sea surface features that impact oil-spill interpretation. Surface truth and other environmental data were also recorded during the experiment. The experiment data were processed at Sandia National Laboratories and delivered to the R&D Center on a computer workstation for analysis by experiment participants. Issues such as optimal spatial resolution, single-look vs. multi-look SAR imaging, and the utility of SAR for oil-spill analysis were addressed. Finally, conceptual design requirements for a possible future Coast Guard SAR were outlined and evaluated.
Sandia National Laboratories was tasked by the US Nuclear Regulatory Commission to perform a Probabilistic Risk Assessment (PRA) of a boiling water reactor (BWR) during low power and shutdown (LP&S) conditions. The plant chosen for the study was Grand Gulf Nuclear Station (GGNS), a BWR 6. In performing the analysis, it was found that in comparison with full-power PRAs, the low decay heat levels present during LP&S conditions result in a relatively large number of ways by which cooling can be provided to the core. In addition, because of the less stringent requirements imposed on the plant during shutdown, the number of system configurations possible is large and the availability of plant systems is more difficult to specify. These aspects of the LP&S environment led to the development and use of "generic" event trees in performing the analysis. The use of "generic" event trees, in turn, had a significant impact on the nature of the human reliability analysis (HRA) that was performed. This paper describes the development of the event trees for the LP&S PRA and important aspects of the resulting HRA.
The computer program DMC (Distinct Motion Code), which was developed for simulating the rock motion associated with blasting, has been used to study the influence of row delay timing on rock motion. The numerical simulations correspond with field observations in that very short delays (<50 ms) and very long delays (>300 ms) produce a lower percent-cast than a medium delay (100 to 200 ms). The DMC-predicted relationship between row delay timing and percent-cast is more complex than expected, with a dip in the curve where the optimum timing might be expected. More study is required to gain a full understanding of this phenomenon.
When presented with a new supercomputer, most users will first ask "How much faster will my applications run?" and then add a fearful "How much effort will it take me to convert to the new machine?" This paper describes some lessons learned at Sandia while asking these questions about the new 1800+ node Intel Paragon. The authors conclude that the operating system is crucial both to achieving high performance and to allowing easy conversion from previous parallel implementations to a new machine. Using the Sandia/UNM Operating System (SUNMOS), they were able to port an LU factorization of dense matrices from the nCUBE2 to the Paragon and achieve 92% scaled speed-up on 1024 nodes. Thus a 44,000 by 44,000 matrix that had required over 10 hours on the previous machine was factored in less than half an hour at a rate of over 40 GFLOPS. Two keys to achieving such high performance were the small size of SUNMOS (less than 256 kbytes) and the ability to send large messages with very low overhead.
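The quoted timing is easy to sanity-check against the standard (2/3)n³ flop count for dense LU factorization:

```python
# Sanity check of the quoted LU timing, using the standard (2/3)n^3
# flop count for dense LU factorization.
n = 44_000
flops = (2.0 / 3.0) * n**3      # about 5.7e13 operations
rate = 40e9                     # 40 GFLOPS sustained
seconds = flops / rate          # about 1420 s -- under half an hour
```

At roughly 24 minutes, the arithmetic is consistent with the "less than half an hour" claim in the abstract.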
A model of permeability changes in rock salt is developed and implemented in a time-dependent finite element code. Model parameters are developed from laboratory tests. The model is used to predict permeability changes adjacent to excavations in rock salt.
The purpose of this paper is to present a preliminary estimate of the nuclear-related public health risk presented by launching and operating the Russian TOPAZ II space reactor as part of the Nuclear Electric Propulsion Space Test Program (NEPSTP). This risk is then compared to the risks from the operation of commercial nuclear power reactors and previously planned and/or launched space nuclear power missions. For the current mission profile, the initial estimate of the risk posed by launching and operating TOPAZ II is significantly less (at least two orders of magnitude) than that estimated for prior space nuclear missions. Even allowing for the large uncertainties in this estimate, it does not appear that the NEPSTP mission will present a significant health risk to the public.
The WETCOR-1 test of simultaneous interactions of a high-temperature melt with water and a limestone/common-sand concrete is described. The test used a 34.1-kg melt of 76.8 w/o Al₂O₃, 16.9 w/o CaO, and 4.0 w/o SiO₂ heated by induction using tungsten susceptors. Once quasi-steady attack on the concrete by the melt was established, an attempt was made to quench the melt at 1850 K with 295 K water flowing at 57 liters per minute. Net power into the melt at the time of water addition was 0.61 ± 0.19 W/cm³. The test configuration used in the WETCOR-1 test was designed to delay melt freezing to the walls of the test fixture. This was done to test hypotheses concerning the inherent stability of crust formation when high-temperature melts are exposed to water. No instability in crust formation was observed. The flux of heat through the crust to the water pool maintained over the melt in the test was found to be 0.52 ± 0.13 MW/m². Solidified crusts were found to attenuate aerosol emissions during the melt-concrete interactions by factors of 1.3 to 3.5. The combination of a solidified crust and a 30-cm deep subcooled water pool was found to attenuate aerosol emissions by factors of 3 to 15.
A structural analysis methodology has been developed for the NASA 2.5-inch frangible nut used on the Space Shuttle. Two of these nuts are used to secure the External Tank to the aft end of the Orbiter. Both nuts must completely fracture before the Orbiter can safely separate from the External Tank. Ideally, only one of the two explosive boosters contained in each nut must detonate to completely break a nut. However, after an uncontrolled change in the Inconel 718 material processing, recent tests indicate that in certain circumstances both boosters may be required. This report details the material characterization and subsequent structural analyses of nuts manufactured from two lots of Inconel 718. The nuts from the HSX lot were observed to consistently separate with only one booster, while the nuts from the HBT lot never completely fracture with a single booster. The material characterization requires only tensile test data and the determination of a tearing parameter based on a computer simulation of a tensile test. Subsequent structural analyses using the PRONTO2D finite element code correctly predict the differing response of nuts fabricated from these two lots. This agreement is important because it demonstrates that this technique can be used to screen lots of Inconel 718 before manufacturing frangible nuts from them. To put this new capability to practice, Sandia personnel have transferred this technology to the Pyrotechnics Group at NASA-JSC.
The National Aeronautics and Space Administration (NASA) is planning to launch a network of scientific probes to Mars beginning in late 1996. The precursor to this network will be PATHFINDER. Decelerating PATHFINDER from the high speed of its approach to Mars will require the use of several deceleration techniques working in series. The Jet Propulsion Laboratory (JPL) has proposed that gas bags be used to cushion the payload's ground impact on Mars. This report presents the computer code, BAG, which has been developed to calculate the pneumatic performance of gas bag impact attenuators and the one-dimensional rigid-body dynamic performance of a payload during ground impact.
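For intuition about what such a code computes, the toy model below integrates a rigid payload decelerating on an adiabatically compressed gas cushion; the one-dimensional geometry, gas law, and parameter values are our own illustrative assumptions and do not reproduce BAG's pneumatic models (gravity is neglected).

```python
def gasbag_impact(m, v0, area, h0, p0, patm, gamma=1.4, dt=1e-4):
    """Toy 1-D gas-bag impact attenuator: rigid payload of mass m (kg)
    hits at v0 (m/s) on a bag of footprint area (m^2), height h0 (m),
    and initial pressure p0 (Pa), compressed adiabatically against
    ambient pressure patm (Pa).  Illustrative only; gravity neglected.
    Returns (peak deceleration in g, stroke in m)."""
    x, v, peak = 0.0, v0, 0.0
    while v > 0.0 and x < 0.95 * h0:       # stop, or bag bottoms out
        p = p0 * (h0 / (h0 - x)) ** gamma  # adiabatic compression
        a = (p - patm) * area / m          # net deceleration
        peak = max(peak, a)
        v -= a * dt
        x += v * dt
    return peak / 9.81, x

# Invented example: 100-kg payload at 20 m/s, 4 m^2 bag, 1 m tall,
# inflated to 2 kPa against a ~600 Pa Mars-like ambient.
peak_g, stroke = gasbag_impact(100.0, 20.0, 4.0, 1.0, 2000.0, 600.0)
```

The trade-off such a model exposes is the one BAG is built to quantify: a softer bag gives lower peak deceleration but a longer stroke, and must still stop the payload before the bag bottoms out.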
Graph partitioning is a fundamental problem in many scientific settings. This document describes the capabilities and operation of Chaco, a software package designed to partition graphs. Chaco allows for recursive application of any of several different methods for finding small edge separators in weighted graphs. These methods include inertial, spectral, Kernighan-Lin and multilevel methods in addition to several simpler strategies. Each of these methods can be used to partition the graph into two, four or eight pieces at each level of recursion. In addition, the Kernighan-Lin method can be used to improve partitions generated by any of the other methods. Brief descriptions of these methods are provided, along with references to relevant literature. The user interface, input/output formats and appropriate settings for a variety of code parameters are discussed in detail, and some suggestions on algorithm selection are offered.
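Chaco's recursive structure (split, then recurse to obtain 2, 4, or 8 parts) can be illustrated with a deliberately crude separator; the BFS-ordering splitter below is a stand-in of our own for Chaco's inertial, spectral, Kernighan-Lin, and multilevel methods.

```python
from collections import deque

def bfs_order(adj, start):
    """Breadth-first ordering of the component containing start."""
    seen, order, q = {start}, [start], deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                order.append(v)
                q.append(v)
    return order

def recursive_bisect(adj, nodes, levels):
    """Recursively split nodes into 2**levels near-equal parts by
    halving a BFS ordering at each level -- a crude separator standing
    in for Chaco's real ones."""
    if levels == 0:
        return [nodes]
    keep = set(nodes)
    sub = {u: [v for v in adj[u] if v in keep] for u in nodes}
    order, remaining = [], set(nodes)
    while remaining:                 # cover disconnected subgraphs too
        comp = bfs_order(sub, next(iter(remaining)))
        order.extend(comp)
        remaining -= set(comp)
    half = len(order) // 2
    return (recursive_bisect(adj, order[:half], levels - 1)
            + recursive_bisect(adj, order[half:], levels - 1))

# Partition an 8-node path graph into four pieces (two recursion levels).
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
parts = recursive_bisect(adj, list(range(8)), 2)
```

A real partitioner differs in the quality of the separator, not the recursion: Chaco's spectral or Kernighan-Lin splitters choose each halving to minimize the weight of cut edges.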
This document, the Stockpile Dismantlement Database (SDDB) training materials, is designed to familiarize the user with the SDDB windowing system and the data entry steps for Component Characterization for Disposition. The foundation of information required for every part is depicted using numbered graphic and text steps. The individual entering data is led step by step through generic and specific examples. These training materials are intended to be supplements to individual on-the-job training.
This report is a followup to the work presented in NUREG/CR-5423 addressing early failure of a BWR Mark I containment by melt attack of the liner, and it constitutes a part of the implementation of the Risk-Oriented Accident Analysis Methodology (ROAAM) employed therein. In particular, it expands the quantification to include four independent evaluations, carried out at Rensselaer Polytechnic Institute, Argonne National Laboratory, Sandia National Laboratories, and ANATECH, Inc., on the various portions of the phenomenology involved. These independent evaluations are included here as Parts II through V. The results, and their integration in Part I, demonstrate the substantial synergism and convergence necessary to conclude that the issue has been resolved.
A discrete element computer program, DMC (Distinct Motion Code), was developed to simulate blast-induced rock motion. To simplify the complex task of entering material and explosive design parameters as well as bench configuration, a full-featured graphical interface has been developed. DMC is currently executed on both Sun SPARCstation 2 and Sun SPARCstation 10 platforms and is routinely used to model bench and crater blasting problems. This paper documents the design and development of the full-featured interface to DMC. The development of the interface is tracked through its various stages, highlighting the adjustments made to allow the necessary parameters to be entered in terms and units that field blasters understand. The paper also discusses a novel way of entering non-integer numbers and the techniques necessary to display blasting parameters in an understandable visual manner. A video presentation will demonstrate the graphics interface and explain its use.
This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena, and their uncertainties, which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.
Silica glass fibers have been produced and tested under ultra-high vacuum (UHV) conditions to investigate the inert strength of pristine fibers in the absence of reactive agents. Analysis of the coefficient of variation in diameter (υ_d) vs. the coefficient of variation of breaking strength (υ_σ) does not adequately explain the variation of breaking stress. The distribution of fiber tensile strength data suggests that the inert strength of such fibers is not single valued and that the intrinsic strength is controlled by defects in the glass. Furthermore, comparison of room temperature UHV data with LN₂ data indicates that these intrinsic strengths are not temperature dependent.
A chemical vapor deposition technique is used to produce amorphous boron nitride and carbon thin films on high strength silica glass fibers. In this method, the fiber is drawn under ultra high vacuum conditions and low pressure process gases, in the presence of a hot tungsten filament, are used to grow films at low substrate temperatures. Films deposited with this technique do not degrade the intrinsic pristine strength of the silica fibers under dry conditions and, when stressed in chemically aggressive environments, act as effective barrier coatings.
A non-contact system for alignment of objects on mass properties measuring instruments is described. Test parts can be aligned to within the capabilities of the user and the fixture to make the adjustments. The current implementation can align objects to less than 0.001 inch at two points, with final requested adjustments of a few ten-thousandths of an inch. The non-contact capability allows the alignment of objects which are too compliant or fragile for traditional contacting measurement methods. Also, this system allows the definition of a reference axis on objects which are not perfectly symmetric. The reference axis is defined at the top of the object by an appropriate marker and defined at the bottom by a best-fit circle through the surface at a specified height. A general description of the hardware, procedures, and results is presented for the general reader. Appendices which contain a complete description of the software, usage, and mathematical implementation are provided for the reader who is interested in using or further developing the system.
A comprehensive review of experimental base pressure and base heating data related to supersonic and hypersonic flight vehicles has been completed. Particular attention was paid to free-flight data as well as wind tunnel data for models without rear sting support. Using theoretically based correlation parameters, a series of internally consistent, empirical prediction equations has been developed for planar and axisymmetric geometries (wedges, cones, and cylinders). These equations encompass the speed range from low supersonic to hypersonic flow and laminar and turbulent forebody boundary layers. A wide range of cone and wedge angles and cone bluntness ratios was included in the data base used to develop the correlations. The present investigation also included preliminary studies of the effect of angle of attack and specific-heat ratio of the gas.
Materials used in the optical elements of a 1,061 nm GSGG (gadolinium scandium gallium garnet) laser have been tested for transient radiation-induced absorption. The transient radiation-induced absorption in KK1, Schott S7005 and S7010, and M382 glasses has been determined for discrete wavelengths in the range 440-750 nm. Also, the transient radiation-induced absorption in "pure" and MgO-doped LiNbO₃ has been measured at 1,061 nm. Mathematical expressions composed of exponentials are fitted to the data.
This report was prepared at the request of the US Department of Energy's Office of Energy Management for an objective comparison of the merits of battery energy storage with superconducting magnetic energy storage technology for utility applications. Conclusions are drawn regarding the best match of each technology with utility application requirements. Staff from the Utility Battery Storage Systems Program and the Superconductivity Program at Sandia National Laboratories contributed to this effort.
This report summarizes the effort to quantify the electromagnetic environments in the nuclear explosive areas at Pantex due to direct lightning. The fundamental measure of the threat to nuclear safety is assumed to be the maximum voltage between any two points in an assembly area, which is then available for producing arcing or for driving current into critical subsystems of a nuclear weapon. This maximum voltage has been computed with simple analytical models and with three-dimensional finite-difference computer codes.
The expert panel identified basic principles to guide current and future marker development efforts: (1) the site must be marked; (2) message(s) must be truthful and informative; (3) the marker system must include multiple components; (4) multiple means of communication must be used (e.g., language, pictographs, scientific diagrams); (5) individual messages on individual marker system elements must carry multiple levels of complexity; (6) materials with little recycle value must be used; and (7) an international effort must maintain knowledge of the locations and contents of nuclear waste repositories. The efficacy of the markers in deterring inadvertent human intrusion was estimated to decrease with time, with the probability function varying with the mode of intrusion (who is intruding and for what purpose) and the level of technological development of the society. The development of a permanent, passive marker system capable of surviving and remaining interpretable for 10,000 years will require further study prior to implementation.
The EXPO was organized to increase communication between US industry and DOE's national laboratories. The report contains copies of viewgraphs from all speakers and reports of workshops designed to identify priority needs of industry. A conference synopsis and a set of recommendations to DOE are also included.
Research in recent years has demonstrated the efficient use of solar thermal energy for driving endothermic chemical reforming reactions in which hydrocarbons are reacted to form synthesis gas (syngas). Closed-loop reforming/methanation systems can be used for storage and transport of process heat and for short-term storage for peaking power generation. Open-loop systems can be used for direct fuel production; for production of syngas feedstock for further processing to specialty chemicals and plastics and bulk ammonia, hydrogen, and liquid fuels; and directly for industrial processes such as iron ore reduction. In addition, reforming of organic chemical wastes and hazardous materials can be accomplished using the high-efficiency destruction capabilities of steam reforming. To help identify the most promising areas for future development of this technology, we discuss in this paper the economics and market potential of these applications.
All of American industry is being subjected to increased competitive pressure from customer demands for shorter cycle times and better quality. The investment casting industry could be in a unique position to satisfy these needs by incorporating several emerging technologies into its production processes. The inherent versatility and flexibility of casting make it a truly agile manufacturing process. Because of its compatibility with new rapid prototyping technologies, investment casting could be one of the key vehicles in the new "art to part" paradigm. Recently, dramatic advances have been made in the quality of wax and plastic patterns, parts, and tooling produced by investment casting, on time scales previously unheard of. Because design and acquisition of tooling contributes heavily to the lead time for any market, these advances will strengthen the position of investment casting manufacturers and customers and create opportunities in traditional and non-traditional markets. The key to achieving this goal is to use the technology to remove uncertainties from the investment casting process. To do this, we must collectively build the infrastructure that enables investment casting companies to make parts right the first time, every time. Integration of mature and on-the-horizon technologies will make this revolution possible and create large growth in markets for investment castings.
The National Center for Advanced Information Components Manufacturing (NCAICM) was established by congressional appropriation in the FY93 Defense Appropriation Bill. The Center, located at Sandia National Laboratories in Albuquerque, NM, is funded through the Advanced Research Projects Agency (ARPA). The technical focus of NCAICM is emissive flat panel displays and associated microelectronics, specifically targeting manufacturing issues such as materials, processes, equipment, and software tools. This Center is a new avenue of collaboration between ARPA and the Department of Energy (DOE). It will help the government meet its obligation to develop dual-use capabilities for the defense and civilian sectors of the economy and provide a new method for cooperation and collaboration between the federal government and American industry. In particular, one of NCAICM's goals is to provide industry access to the broad resource base available at three DOE Defense Programs laboratories -- Sandia National Laboratories, Los Alamos National Laboratory, and Lawrence Livermore National Laboratory.
The use of multi-source power systems, "hybrids," is one of the fastest growing, potentially significant markets for photovoltaic (PV) system technology today. Cost-effective applications include remote facility power, remote area power supplies, remote home and village power, and power for dedicated electrical loads such as communications systems. This market sector is anticipated to be one of the most important growth opportunities for PV over the next five years. The US Department of Energy (USDOE) and Sandia National Laboratories (SNL) are currently engaged in an effort to accelerate the adoption of market-driven PV hybrid power systems and to effectively integrate PV with other energy sources. This paper provides details of this development effort and of ongoing hybrid activities in the United States; hybrid systems are its primary focus.
Infrared (IR) reflection spectroscopy has been shown to be useful for making rapid and nondestructive quantitative determinations of B and P contents and film thickness for borophosphosilicate glass (BPSG) thin films on silicon monitor wafers. Preliminary data also show that similarly precise determinations can be made for BPSG films on device wafers.
A low pressure chemical vapor deposition (LPCVD) process for depositing WxB(1-x) films from WF6 and B2H6 is described. The depositions were performed in a cold-wall reactor on 6 in. Si wafers at 400 °C. During deposition, pressure was maintained at a fixed level in the range of 200 to 260 mTorr. The WF6/B2H6 ratio was varied from 0.05 to 1.07. The carrier gas was either 100 sccm of Ar, with an overall gas flow of 308 to 591 sccm, or 2000 sccm of Ar plus 2000 sccm of H2, with an overall gas flow of 4213 to 4452 sccm. Two stable deposition regions were found, separated by an unstable region that produced non-uniform films. The B-rich films produced in one stable deposition region had W concentrations of 30 at.% and resistivities between 200 and 300 μΩ·cm. The W-rich films produced in the other stable deposition region had W concentrations of 80 at.% and resistivities of 100 μΩ·cm. As-deposited films had densities similar to bulk material of similar stoichiometry. Barrier properties of the films against diffusion of Cu up to 700 °C in vacuum were measured by 4-point probe. Annealing was also carried out to 900 °C in order to determine the phases formed as the films crystallize. These studies indicate that WxB(1-x) films may be useful barriers in ULSI metallization applications.
A relatively high-speed IDDQ measurement circuit called QuiC-Mon is described. Depending upon IC settling times, upper measurement rates range from 50 kHz to 250 kHz at 100 nA resolution. It provides an inexpensive solution for fast, sensitive IDDQ measurements in CMOS IC wafer-probe or packaged-part production testing.
Sandia National Laboratories operates the Primary Standards Laboratory for the Department of Energy, Albuquerque Operations Office (DOE/AL). This report summarizes metrology activities that received emphasis in the first half of 1993 and provides information pertinent to the operation of the DOE/AL system-wide Standards and Calibration Program.
Sandia's mission to explore technology that enhances US nuclear weapons capabilities has been the primary impetus for the development of a class of inertial measurement units not available commercially. The newest member of the family is the Ring Laser Gyro Assembly (RLGA). The product of a five-year joint effort by Sandia and Honeywell's Space and Strategic Systems Operation, the RLGA is a small, one-nautical-mile-per-hour-class inertial measurement unit that consumes only 16 watts, attributes that are important to a guidance and control capability for new or existing weapons. These same attributes led the Central Inertial Guidance Test Facility at Holloman Air Force Base to select the RLGA for its newest test instrumentation pod. The RLGA sensor assembly comprises three Honeywell ring laser gyroscopes and three Sundstrand Data Control accelerometers, the latter selected from three types according to the user's acceleration range and accuracy needs.
The magnetic and structural phase diagrams of the La2CuO4+δ and La2-xSrxCuO4+δ systems are reviewed, with emphasis on recent results obtained from magnetic and structural neutron diffraction, thermogravimetric analysis, iodometric titration, magnetic susceptibility χ(T), and 139La nuclear quadrupole resonance (NQR) measurements.
The objectives of this program are (1) to use and refine a basinal analysis methodology for natural fracture exploration and exploitation, and (2) to determine the important characteristics of natural fracture systems for their use in completion, stimulation, and production operations. Continuing work on this project has demonstrated that natural fracture systems and their flow characteristics can be defined by a thorough study of well and outcrop data within a basin. Outcrop data provide key information on fracture sets and lithologic controls, but some fracture sets found in outcrop may not exist at depth. Well log and core data provide the reservoir information needed to obtain a correct synthesis of the fracture data. In situ stress information is then linked with the natural fracture studies to define permeability anisotropy and stimulation effectiveness. All of these elements require field data and, in the cases of logs, core, and well test data, the cooperation of an operator.
The authors employ NMR and NQR spectroscopy as probes of local structure and charge environments in metallic La2CuO4+δ (Tc = 38 K). They discuss the effect of annealing the sample at various temperatures Ta (Tc < Ta < 300 K) on the superconducting Tc. The dependence of Tc on annealing indicates that annealing allows the development of structural order that is important for Tc. The 139La quadrupole frequency, νQ, is smaller than in undoped materials. This is unexpected and may indicate a smaller charge on the apex oxygen in the doped material, and thus a different distribution of charge between the La-O layer and the CuO2 planes. The further, rapid decrease in νQ just above Tc indicates that temperature-dependent charge redistribution is occurring. The presence of doped holes induces a distribution of displacements of the apex oxygen off of the vertical La-Cu bond axis; these vary from zero to the value observed in lightly doped (antiferromagnetic) La2CuO4+δ. These measurements demonstrate a striking degree of inhomogeneity in the crystal structure of the La-O layer. Copper NQR spectroscopy shows that there are two distinct copper sites in the CuO2 planes, and thus that either the structure or the charge distribution in the planes is inhomogeneous as well. These inhomogeneities are the intrinsic response of the crystal to doped holes; they are not the result of distortions of the lattice due to the presence of interstitial oxygen atoms.
Programs to develop solid core nuclear thermal propulsion (NTP) systems have been under way at the Department of Defense (DoD), the National Aeronautics and Space Administration (NASA), and the Department of Energy (DOE). These programs have recognized the need for a new ground test facility to support development of NTP systems. However, the different military and civilian applications have led to different ground test facility requirements. DOE, in its role as landlord and operator of the proposed research reactor test facilities, has initiated an effort to explore opportunities for a common ground test facility to meet both DoD and NASA needs. The baseline design and operating limits of the proposed DoD NTP ground test facility are described. The NASA ground test facility requirements are reviewed, and their potential impact on the DoD facility baseline is discussed.
In this note the authors describe the results of some tests of the message-passing performance of the Intel Paragon. These tests have been carried out under both the Intel-supplied OSF/1 operating system with an NX library, and also under an operating system called SUNMOS (Sandia UNM Operating System). For comparison with the previous generation of Intel machines, they have also included the results on the Intel Touchstone Delta. The source code used for these tests is identical for all systems. As a result of these tests, the authors can conclude that SUNMOS demonstrates that the Intel Paragon hardware is capable of very high bandwidth communication, and that the message coprocessor on Paragon nodes can be used to give quite respectable latencies. Further tuning can be expected to yield even better performance.
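The tests described above are ping-pong style message-passing benchmarks. A minimal sketch of the same timing scheme, written here in Python over a local socket pair rather than the NX or SUNMOS message layers (so the numbers reflect the local OS, not the Paragon), might look like:

```python
import socket
import threading
import time

def pingpong_latency(msg_size=8, iters=200):
    """Estimate one-way message latency by bouncing a fixed-size payload
    back and forth and halving the average round-trip time."""
    a, b = socket.socketpair()
    payload = b"x" * msg_size

    def echo():
        # Peer side: reflect every message straight back.
        for _ in range(iters):
            b.sendall(b.recv(msg_size))

    t = threading.Thread(target=echo)
    t.start()
    start = time.perf_counter()
    for _ in range(iters):
        a.sendall(payload)
        a.recv(msg_size)
    elapsed = time.perf_counter() - start
    t.join()
    a.close()
    b.close()
    return elapsed / (2 * iters)  # seconds per one-way message

latency = pingpong_latency()
```

Bandwidth is measured the same way with large payloads; a real benchmark would also loop `recv` until the full message arrives, which small messages on a local socket pair do not require in practice.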
This paper briefly describes an ongoing project designed to assess the uncertainty in offsite radiological consequence calculations of hypothetical accidents in commercial nuclear power plants. This project is supported jointly by the Commission of European Communities (CEC) and the US Nuclear Regulatory Commission (USNRC). Both commissions have expressed an interest in assessing the uncertainty in consequence calculations used for risk assessments and regulatory purposes.
This paper presents a prototype system developed at Sandia National Laboratories to create and verify computer-generated graphical models of remote physical environments. The goal of the system is to create an interface between an operator and a computer vision system so that graphical models can be created interactively. Virtual reality and telepresence are used to allow interaction between the operator, computer, and remote environment. A stereo view of the remote environment is produced by two CCD cameras. The cameras are mounted on a three degree-of-freedom platform which is slaved to a mechanically-tracked, stereoscopic viewing device. This gives the operator a sense of immersion in the physical environment. The stereo video is enhanced by overlaying the graphical model onto it. Overlay of the graphical model onto the stereo video allows visual verification of graphical models. Creation of a graphical model is accomplished by allowing the operator to assist the computer in modeling. The operator controls a 3-D cursor to mark objects to be modeled. The computer then automatically extracts positional and geometric information about the object and creates the graphical model.
The fields of reliability analysis and risk assessment have grown dramatically since the 1970s. There are now bodies of literature and standard practices which cover quantitative aspects of system analysis such as failure rate and repair models, fault and event tree generation, minimal cut sets, classical and Bayesian analysis of reliability, component and system testing techniques, decomposition methods, etc. In spite of the growth in the sophistication of reliability models, however, little has been done to integrate optimization models within a reliability analysis framework. That is, often reliability models focus on characterization of system structure in terms of topology and failure/availability characteristics of components. A number of approaches have been proposed to help identify the components of a system that have the largest influence on overall system reliability. While this may help rank order the components, it does not necessarily help a system design team identify which components they should improve to optimize overall reliability (it may be cheaper and more effective to focus on improving two or three components of smaller importance than one component of larger importance). In this paper, we present an optimization model that identifies the components to be improved to maximize the increase in system MTBF, subject to a fixed budget constraint. A dual formulation of the model is to minimize cost, subject to achieving a certain level of system reliability.
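A toy version of the budget-constrained improvement model can illustrate the idea. The sketch below assumes a simplified additive model in which each candidate component upgrade has a known cost and a known MTBF gain; the paper's actual formulation, including any interaction between components, is not reproduced here, and all names and numbers are illustrative.

```python
from itertools import combinations

def best_upgrades(options, budget):
    """Exhaustively pick the subset of component upgrades that maximizes
    total MTBF gain without exceeding the budget (fine for small systems;
    larger ones would use integer programming or dynamic programming)."""
    names = list(options)
    best, best_gain = (), 0.0
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(options[n][0] for n in subset)
            gain = sum(options[n][1] for n in subset)
            if cost <= budget and gain > best_gain:
                best, best_gain = subset, gain
    return set(best), best_gain

# (cost, MTBF gain) per component -- illustrative numbers only
options = {"A": (3, 5.0), "B": (4, 6.0), "C": (5, 8.0)}
chosen, gain = best_upgrades(options, budget=8)
```

With this data the model prefers two cheaper upgrades ("A" and "C", total gain 13.0) over any single large one, echoing the point above that improving several smaller-importance components can beat improving one large one.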
In the detailed phenomenological event trees used in recent Level III PRA analyses, questions arise about the possible outcomes of events for which the underlying physics is not well understood and for which the initial and boundary conditions are uncertain. Examples of the types of events being analyzed are: What is the containment failure mode? Is there a large in-vessel steam explosion? How much H2, CO, and CO2 are produced during core-concrete interactions? The outcomes of each of these questions must be defined based on an understanding of the basic physics of the phenomena and the level of detail of the probabilistic analysis. Many of these phenomena have never occurred, since severe reactor accidents are extremely rare events. The only information we have about these phenomena comes from four basic sources: general theoretical knowledge, limited experimental results, a few actual events, and various models of the phenomena. All of these phenomena have significant uncertainty arising from three basic sources: level of detail, initial and boundary conditions, and lack of knowledge. Since it is not possible to conduct enough full-scale tests to generate a set of "objective" relative frequencies, the probabilities will have to be "subjective" and generated from expert knowledge. In assessing the conditional probabilities of the various possible outcomes of an event during an accident, the expert must combine his knowledge with the level of detail being used in the PRA analysis to generate a set of probabilities for the defined set of outcomes. It is often convenient for an expert to formulate his opinion in terms of expecting to see n_i occurrences of outcome E_i in N occurrences of event E. The order of the outcomes is typically not important because the individual trials are viewed as being independent of one another.
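The count-based elicitation described above has a simple numerical form: the expert's imagined n_i occurrences out of N trials are normalized into probabilities. The sketch below adds an optional uniform Dirichlet prior so that unseen outcomes keep nonzero probability; the outcome names and counts are illustrative, not from the paper.

```python
def outcome_probabilities(counts, prior=1.0):
    """Convert an expert's imagined counts n_i (summing to N) into outcome
    probabilities. With prior > 0, each outcome is smoothed by a uniform
    Dirichlet prior: p_i = (n_i + prior) / (N + prior * K)."""
    total = sum(counts.values()) + prior * len(counts)
    return {k: (v + prior) / total for k, v in counts.items()}

# Expert expects 7 "no failure", 2 "leak", 1 "rupture" in 10 imagined events
probs = outcome_probabilities({"no failure": 7, "leak": 2, "rupture": 1})
```

Setting `prior=0` reproduces the raw relative frequencies n_i / N; a positive prior reflects the expert's residual uncertainty about outcomes never imagined to occur.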
In many science and engineering applications, there is an interest in predicting the outputs of a process for given levels of inputs. In order to develop a model, one could run the process (or a simulation of the process) at a number of points (a point being one run at one set of possible input values) and observe the values of the outputs at those points. These observations can be used to predict the values of the outputs for other values of the inputs. Since the outputs are a function of the inputs, we can generate a surface in the space of possible inputs and outputs. This surface is called a response surface. In some cases, collecting the data needed to generate a response surface can be very expensive, so there is a powerful incentive to minimize the sample size while building better response surfaces. One such case is the semiconductor equipment manufacturing industry. Semiconductor manufacturing equipment is complex and expensive. Depending upon the type of equipment, the number of control parameters may range from 10 to 30, with perhaps 5 to 10 being important. Since a single run can cost hundreds or thousands of dollars, it is very important to have efficient methods for building response surfaces. A current approach to this problem is to run the experiment in two stages. First, a traditional design (such as a fractional factorial) is used to screen variables. After deciding which variables are significant, additional runs of the experiment are conducted, and the original runs together with the new runs are used to build a model with the significant variables. However, the original (screening) runs are not as helpful for building the model as some other points might have been. This paper presents a point selection scheme that is more efficient than traditional designs.
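As a concrete illustration of building a response surface from a handful of runs, the sketch below fits a one-input quadratic y = a + b·x + c·x² by least squares via the normal equations. Real designs involve many inputs, and the paper's point selection scheme is not reproduced here; the data are synthetic.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 via the normal equations
    (X^T X) coeffs = X^T y, solved by Gaussian elimination with pivoting."""
    rows = [[1.0, x, x * x] for x in xs]  # design matrix rows [1, x, x^2]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    rhs = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # Forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeffs[r] = (rhs[r] - sum(A[r][c] * coeffs[c]
                                  for c in range(r + 1, 3))) / A[r][r]
    return coeffs  # [a, b, c]

# Five synthetic "runs" generated from a known quadratic process
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 0.5 * x - 0.25 * x * x for x in xs]
a, b, c = fit_quadratic(xs, ys)
```

With k inputs the quadratic model has (k+1)(k+2)/2 coefficients, which is why the choice of which expensive runs to perform matters so much.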
This document is a compilation of various presentations from the Fourth DOE Industry/University/Lab Forum on Robotics for Environmental Restoration and Waste Management held in Albuquerque, New Mexico July 19--21, 1993. Separate abstracts were prepared for each presentation of this report.