This report presents the results of three fire endurance tests and one ampacity derating test of the fire barrier system Thermo-Lag 330-1 Subliming Coating. Each test was performed using cable tray specimens protected by a nominal three-hour fire barrier envelope composed of two layers of nominal 1/2 inch thick material. The fire barrier systems for two of the three fire endurance test articles and for the ampacity derating test article were installed in accordance with the manufacturer's installation procedures. The barrier system for the third fire endurance test article was a full reproduction of one of the original manufacturer's qualification test articles. This final test article included certain installation enhancements not considered typical of current nuclear power plant installations. The primary criterion for fire endurance performance evaluation was based on cable circuit integrity testing. Secondary consideration was also given to the temperature rise limits set forth in the ASTM E119 standard fire barrier test procedure. All three of the fire endurance specimens failed prematurely. Circuit integrity failures for the two fire endurance test articles with procedures-based installations were recorded at approximately 76 and 59 minutes into the exposures for a 6-inch-wide and a 12-inch-wide cable tray, respectively. Temperature excursion failures (single point) for these two test articles were noted at approximately 65 and 56 minutes, respectively. The first circuit integrity failure for the full reproduction test article was recorded approximately 119 minutes into the exposure, and the first temperature excursion failure for this test article was recorded approximately 110 minutes into the exposure.
An application protocol is an information systems engineering view of a specific product. The view represents an agreement on the generic activities needed to design and fabricate the product, the agreement on the information needed to support those activities, and the specific constructs of a product data standard for use in transferring some or all of the information required. This application protocol describes the data for electrical and electronic products in terms of a product description standard called the Initial Graphics Exchange Specification (IGES). More specifically, the Layered Electrical Product IGES Application Protocol (AP) specifies the mechanisms for defining and exchanging computer models and their associated data, in IGES format, for those products which have been designed in two-dimensional geometry so as to be produced as a series of layers. The AP defines the appropriateness of the data items for describing the geometry of the various parts of a product (shape and location), the connectivity, and the processing and material characteristics. Excluded are the behavioral requirements which the product was intended to satisfy, except as those requirements have been recorded as design rules or product testing requirements.
Remediation of waste from Underground Storage Tanks (UST) at Hanford will require the use of large remotely controlled equipment. Inherent safety methods need to be identified and incorporated into the retrieval system to prevent contact damage to the UST or to the remediation equipment. This report discusses the requirements for an adequate protection system and reviews the major technologies available for inclusion in a damage protection system. The report proposes that adequate reliability of a protection system can be achieved through the use of two fully independent safety subsystems. The safety system technologies reviewed were Force/Torque Sensors, Overload Protection Devices, Ultrasonic Sensors, Capacitance Sensors, Controller Software Limits, Graphic Collision Detection, and End Point Tracking. A relative comparison of the retrieval system protection technologies is presented.
Horizontal drilling is a viable approach for accessing hydrocarbons in many types of naturally-fractured reservoirs. Cost-effective improvements in the technology to drill, complete, and produce horizontal wells in difficult geologic environments require a better understanding of the mechanical and fluid-flow behavior of these reservoirs with changes in effective stress during their development and production history. In particular, improved understanding is needed for predicting borehole stability and reservoir response during pore pressure drawdown. To address these problems, a cooperative project between Oryx Energy Company and Sandia National Laboratories was undertaken to study the effects of rock properties, in situ stress, and changes in effective stress on the deformation and permeability of stress-sensitive, naturally-fractured reservoirs. A low value for the poroelastic parameter was found, implying that the reservoir should have a low sensitivity to declining pore pressure. A surprisingly diverse suite of fractures was identified from core. From the coring-induced fractures, it was plausible to conclude that the maximum principal stress was in the horizontal plane. Measurements of the permeability of naturally fractured rock in a newly-developed experimental arrangement showed that slip on fractures is much more effective in changing permeability than is normal stress. The intermediate principal stress was found to have a strong effect on the strength and ductility of the chalk, implying the need for a more sophisticated calculation of borehole stability.
This study investigates the high frequency response of Faraday effect optical fiber current sensors that are bandwidth-limited by the transit time of the light in the fiber. Mathematical models were developed for several configurations of planar (collocated turns) and travelling wave (helical turns) singlemode fiber sensor coils, and experimental measurements verified the model predictions. High frequency operation above 500 MHz, with good sensitivity, was demonstrated for several current sensors; this frequency region was not previously considered accessible by fiber devices. Planar fiber coils in three configurations were investigated: circular cross section with the conductor centered coaxially; circular cross section with the conductor noncentered; and noncircular cross section with arbitrary location of the conductor. The helical travelling wave fiber coils were immersed in the dielectric of a coaxial transmission line to improve phase velocity matching between the field and the light. Three liquids (propanol, methanol, and water) and air were used as transmission line dielectrics. Complete models, which must account for liquid dispersion and waveguide dispersion from the multilayer dielectric in the transmission line, were developed to describe the Faraday response of the travelling wave sensors. Other travelling wave current sensors with potentially greater Faraday sensitivity, wider bandwidth, and smaller size are investigated using the theoretical models developed for the singlemode fiber coils.
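A rough way to see the transit-time limit discussed in this abstract (an illustrative estimate only, with assumed values not taken from the report): light traversing a sensing fiber of length L and group index n spends a time tau = nL/c in the coil, so the sensor effectively averages the current over tau, and the 3 dB bandwidth of such a rectangular averaging window is approximately

    \tau = \frac{nL}{c}, \qquad f_{3\,\mathrm{dB}} \approx \frac{0.44}{\tau} = \frac{0.44\,c}{nL}.

For example, with n = 1.45 and L = 0.2 m (assumed values), this gives roughly 450 MHz, comparable to the above-500 MHz operation demonstrated here.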
The U.S. Department of Energy (DOE) is responsible for disposing of a variety of radioactive and mixed wastes, some of which are considered special-case waste because they do not currently have a clear disposal option. It may be possible to dispose of some of the DOE's special-case waste using greater confinement disposal techniques at the Nevada Test Site (NTS). The DOE asked Sandia National Laboratories to investigate this possibility by performing system configuration analyses. The first step in performing system configuration analyses is to estimate the characteristics of special-case waste that might be destined for disposal at the NTS. The objective of this report is to characterize this special-case waste based upon information available in the literature. No waste was sampled and analyzed specifically for this report. The waste compositions given are not highly detailed, consisting of grams and curies of specific radionuclides per cubic meter. However, such approximate waste characterization is adequate for the purposes of the system configuration task. In some previous work done on this subject, Kudera et al. [1990] identified nine categories of special-case radioactive waste and estimated volumes and activities for these categories. It would have been difficult to develop waste compositions based on the categories proposed by Kudera et al. [1990], so we created five groups of waste on which to base the waste compositions. These groups are (1) transuranic waste, (2) fission product waste, (3) activation product waste, (4) mobile/volatile waste, and (5) sealed sources. The radionuclides within a given group share common characteristics (e.g., alpha-emitters, heat generators), and we believe that these groups adequately represent the DOE's special-case waste potentially destined for greater confinement disposal at the NTS.
The purpose of the Reactor Pressure Vessel Thermal Annealing Workshop was to provide a forum for US utilities and interested parties to discuss relevant experience and issues and identify potential solutions/approaches related to: an understanding of the potential benefits of thermal annealing for US commercial reactors; on-going technical research activities; technical aspects of a generic, full-scale, in-place vessel annealing demonstration; and the impact of economic, regulatory, and technical issues on the application of thermal annealing technology to US plants. Experts from the international nuclear reactor community were brought together to discuss issues regarding application of thermal annealing technology in the US and to identify the steps necessary to commercialize this technology for US reactors. These proceedings contain all presentation materials discussed during the Workshop. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
This SAND report summarizes the work completed for a Novel Project Research and Development LDRD project. In this research effort, new mathematical techniques from the theory of nonlinear generalized functions were applied to compute solutions of nonlinear hyperbolic field equations in nonconservative form. Nonconservative field equations contain products of generalized functions which are not defined in classical mathematics. Because of these products, traditional computational schemes are very difficult to apply and can produce erroneous numerical results. In the present work, existing first-order computational schemes based on results from the theory of nonlinear generalized functions were applied to simulate numerically two model problems cast in nonconservative form. From the results of these computational experiments, a higher-order Godunov scheme based on the piecewise parabolic method was proposed and tested. The numerical results obtained for the model problems are encouraging and suggest that the theory of nonlinear generalized functions provides a powerful tool for studying the complicated behavior of nonlinear hyperbolic field equations.
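To make the difficulty concrete (a generic statement of the standard issue, not a formulation taken verbatim from the report): a conservative system can be written as \partial_t u + \partial_x f(u) = 0, whereas a nonconservative system has the form

    \partial_t u + A(u)\,\partial_x u = 0,

where the matrix A(u) is not the Jacobian of any flux function f. Across a shock, u contains a Heaviside-type jump, so \partial_x u contains a Dirac delta, and the product A(u)\,\partial_x u multiplies a discontinuous function by a delta function, which is undefined in classical distribution theory; this is precisely the product of generalized functions referred to above.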
In situ design verification activities are being conducted in the North Ramp Starter Tunnel of the Yucca Mountain Project Exploratory Studies Facility. These activities include: monitoring the construction blasting, evaluating the damage to the rock mass associated with construction, assessing the rock mass quality surrounding the tunnel, monitoring the performance of the installed ground support, and monitoring the stability of the tunnel. In this paper, examples of the data that have been collected and preliminary conclusions from the data are presented.
The authors consider the following problem that arises in assembly planning: given an assembly, identify a subassembly that can be removed as a rigid object without disturbing the rest of the assembly. This is the assembly partitioning problem. Specifically, they consider planar assemblies of simple polygons and subassembly removal paths consisting of a single finite translation followed by a translation to infinity. They show that such a subassembly and removal path can be determined in O(n{sup 1.46}N{sup 6}) time, where n is the number of polygons in the assembly and N is the total number of edges and vertices of all the parts together. They then extend this formulation to removal paths consisting of a small number of finite translations, followed by a translation to infinity. In this case the algorithm runs in time polynomial in the number of parts, but exponential in the number of translations a path may contain.
Magnetoluminescence-determined conduction-band and valence-band dispersion curves are presented for n-type InGaAs/GaAs strained single-quantum-well structures. The magnetic field range was 0 to 30 tesla, and the temperature was varied between 4.2 and 77.4 K.
Effects of frequency, temperature, and hydrostatic pressure on dielectric properties, molecular relaxations, and phase transitions of PVDF and a copolymer with 30 mol % trifluoroethylene are discussed. Pressure causes a large slowing down of the {beta} molecular relaxations as well as large increases in the ferroelectric transition temperatures and melting points, but the magnitudes of the effects are different for the different transitions. These effects can be understood in terms of pressure-induced hindrance of the molecular motions and/or reorientations. A unique application of these polymers as time-resolved dynamic stress gauges, based on PVDF studies under very high pressure shock compression, is discussed.
Design improvements for the International Atomic Energy Agency's Spent Fuel Attribute Tester, recommended on the basis of an optimization study, were incorporated into a new instrument fabricated under the Finnish Support Programme. The new instrument was tested at a spent fuel storage pool on September 8 and 9, 1993. The results of two of the measurements have been compared with calculations. In both cases the calculated and measured pulse height spectra are in good agreement, and the {sup 137}Cs gamma peak signature from the target spent fuel element is present.
NATO and former Warsaw Pact nations have agreed to allow overflights of their countries in the interest of easing world tension. The United States has decided to equip two C-135 aircraft with a Synthetic Aperture Radar (SAR) that has a 3-meter resolution. This work is being sponsored by the Defense Nuclear Agency (DNA) and will be operational in Fall 1995. Since the SAR equipment must be exportable to foreign nations, a 20-year-old UPD-8 analog SAR system was selected as the front-end and refurbished for this application by Loral Defense Systems. Data processing is being upgraded to a currently exportable digital design by Sandia National Laboratories. Amplitude and phase histories will be collected during these overflights and digitized on VHS cassettes. Ground stations will use reduction algorithms to process the data and convert it to magnitude-detected images for member nations. System Planning Corporation is presently developing a portable ground station for use on the demonstration flights. Integration into the C-135 aircraft is being done by the Air Force at Wright-Patterson AFB, Ohio.
We describe the design and fabrication of two types of solid state moisture sensors, and discuss the results of an evaluation of the sensors for the detection of trace levels of moisture in semiconductor process gases. The first sensor is based on surface acoustic wave (SAW) technology. A moisture sensitive layer is deposited onto a SAW device, and the amount of moisture adsorbed on the layer produces a proportional shift in the operating frequency of the device. Sensors based on this concept have excellent detection limits for moisture in inert gas (100 ppb) and corrosive gas (150 ppb in HCl). The second sensor is a simple capacitor structure that uses porous silicon as a moisture-sensitive dielectric material. The detection limits of these sensors for moisture in inert gas are about 700 ppb prior to HCl exposure, and about 7 ppm following HCl exposure.
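For orientation, the mass-loading response of an acoustic-wave sensor of this type is commonly written in a Wohltjen-type form (an illustrative generic relation, not necessarily the calibration used for these devices):

    \Delta f \approx -(k_1 + k_2)\, f_0^{2}\, \frac{\Delta m}{A},

where f_0 is the unperturbed SAW operating frequency, \Delta m / A is the areal mass of adsorbed moisture, and k_1 and k_2 are substrate material constants; the quadratic dependence on f_0 helps explain why high-frequency SAW devices can reach the 100 ppb class detection limits quoted above.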
Several basic reasons are given to support the position that an integrated, systems methodology entailing probabilistic assessment offers the best means for addressing the problems in software safety. The recognized hard problems in software safety, or safety per se, and some of the techniques for hazard identification and analysis are then discussed relative to their specific strengths and limitations. The paper notes that it is the combination of techniques that will lead to safer systems, and that more experience, examples, and applications of techniques are needed to understand the limits to which software safety can be assessed. Lastly, some on-going project work at Sandia National Laboratories on developing a solution methodology is presented.
The Australian Safeguards Office (ASO) and the US Department of Energy (DOE) have sponsored work under a bilateral agreement to implement a Remote Monitoring System (RMS) at an Australian nuclear site operated by the Australian Nuclear Science and Technology Organization (ANSTO). The RMS, designed by Sandia National Laboratories (SNL), was installed in February 1994 at the Dry Spent Fuel Storage Facility (DSFSF) located at Lucas Heights, Australia. The RMS was designed to test a number of different concepts that would be useful for unattended remote monitoring activities. The DSFSF, located in Building 27, is a very suitable test site for an RMS. The RMS uses a network of low cost nodes to collect data from a number of different sensors and security devices. Different sensors and detection devices have been installed to study how they can be used to complement each other for C/S applications. The data collected from the network will allow a comparison of how the various types of sensors perform under the same set of conditions. A video system using digital compression collects digital images and stores them on a hard drive and a digital optical disk. Data and images from the storage area are remotely monitored via telephone from Canberra, Australia, and Albuquerque, NM, USA. These remote monitoring stations, operated by ASO and SNL respectively, can retrieve data and images from the RMS computer at the DSFSF. The data and images are encrypted before transmission. The Remote Monitoring System field tests have been operational for six months with good test results. Sensors have performed well, and the digital images have excellent resolution. The hardware and software have performed reliably without any major difficulties. This paper summarizes the highlights of the prototype system and the ongoing field tests.
An important aspect of insider protection in production facilities is the monitoring of the movement of special nuclear material (SNM) and personnel. One system developed at Sandia National Laboratories for this purpose is the Personnel and Material Tracking System (PAMTRAK). PAMTRAK can intelligently integrate different sensor technologies and the security requirements of a facility to provide a unique capability in monitoring and tracking SNM and personnel. Currently, many sensor technologies are used to track the location of personnel and SNM inside a production facility. These technologies are generally intrusive; they require that special badges be worn by personnel, that special tags be attached to material, and that special detection devices be mounted in the area. Video technology, however, is non-intrusive because it does not require that personnel wear special badges or that special tags be attached to SNM. Sandia has developed a video-based image processing system consisting of three major components: the Material Monitoring Subsystem (MMS), the Personnel Tracking Subsystem (PTS), and the Item Recognition Subsystem (IRS). The basic function of the MMS is to detect movements of SNM that occur in user-defined regions of interest (ROI) from multiple cameras; these ROI can be of any shape and size. The purpose of the PTS is to track the location of personnel in an area using multiple cameras. It can also be used to implement the two-person rule or to detect unauthorized personnel in a restricted area. Finally, the IRS can be used for the recognition and inventory of SNM in a working area. It can also generate a log record of the status of each SNM item. Currently the MMS is integrated with PAMTRAK to complement other monitoring technologies in the system. The paper will discuss the system components and their implementations, and describe current enhancements as well as future work.
Sticky foam is an extremely tacky, tenacious material used to entangle and impair an individual. It was developed at Sandia National Laboratories (SNL) in the late 1970s for use in nuclear safeguards and security applications. In late 1992, the National Institute of Justice (NIJ), the research arm of the Department of Justice, began a project with SNL to determine the applicability of sticky foam for law enforcement use. The objectives of the project were to develop a dispenser capable of firing sticky foam, to conduct an extensive toxicology review of sticky foam (formulation SF-283), to test the effectiveness of the developed dispenser and sticky foam on SNL volunteers acting out prison and law enforcement scenarios, and to have the dispenser and sticky foam further evaluated by correctional representatives. This paper discusses the results of the project.
MOS oxides have been fabricated by oxidation of silicon in N{sub 2}O. Processes studied include oxidation in N{sub 2}O alone and two-step oxidation in O{sub 2} followed by N{sub 2}O. For both oxides, a nitrogen-rich layer with a peak N concentration of {approximately} 0.5 at. % is observed at the Si-SiO{sub 2} interface with SIMS. Electrical characteristics of N{sub 2}O oxides, such as breakdown and defect generation, are generally improved, especially for the two-step process. Drawbacks typically associated with NH{sub 3}-nitrided oxides, such as high fixed oxide charge and enhanced electron trapping, are not observed in N{sub 2}O oxides, which is probably due to their smaller nitrogen content.
The Explosive Inventory and Information System (EIS) is being developed and implemented by Sandia National Laboratories (SNL) to incorporate a cradle-to-grave structure for all explosives and explosive-containing devices and assemblies at SNL, from acquisition through use, storage, reapplication, transfer, or disposal. The system does more than track all material inventories. It provides information on material composition, characteristics, and shipping requirements; life cycle cost information; plan of use; and duration of ownership. The system also provides for following the processes of explosive development; storage review; justification for retention; Resource, Recovery and Disposition Account (RRDA); disassembly and assembly; and job description, hazard analysis, and training requirements for all locations and employees involved with explosive operations. In addition, other information systems will be provided through the system, such as the Department of Energy (DOE) and SNL Explosive Safety manuals, the Navy's Department of Defense (DoD) explosive information system, and the Lawrence Livermore National Laboratory (LLNL) Handbook of Explosives.
This paper describes a method of reducing the mechanical stress caused when a ferrite pot core is encapsulated in a rigid epoxy. The stresses are due to the difference in coefficients of thermal expansion between the two materials. A stress relief medium, a phenolic microballoon-filled syntactic polysulfide, is molded into the shape of the pot core. The molded polysulfide is bonded to the core prior to encapsulation. The new package design has made a significant difference in the ability to survive temperature cycles.
This paper provides an overview of the methodology used in a probabilistic transportation risk assessment conducted to assess the probabilities and consequences of inadvertent dispersal of radioactive materials arising from severe transportation accidents. The model was developed for the Defense Program Transportation Risk Assessment (DPTRA) study. The analysis incorporates several enhancements relative to previous risk assessments of hazardous materials transportation including newly-developed statistics on the frequencies and severities of tractor semitrailer accidents and detailed route characterization using the 1990 Census data.
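A generic form of the summation underlying such probabilistic transportation risk estimates (shown only for orientation; the DPTRA model structure and data are described in the report) is

    R = \sum_{r}\sum_{s} N_r\, f_r\, P(s \mid \text{accident})\, P(\text{release} \mid s)\, C_{r,s},

where N_r is the number of shipments on route r, f_r the accident frequency per shipment on that route, s indexes accident severity categories, and C_{r,s} is the consequence of a release of severity s on route r.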
Protecting sensitive items from undetected tampering in an unattended environment is crucial to the success of non-proliferation efforts relying on the verification of critical activities. Tamper Indicating Packaging (TIP) technologies are applied to containers, packages, and equipment that require an indication of a tamper attempt. Examples include: the transportation and storage of nuclear material, the operation and shipment of surveillance equipment and monitoring sensors, and the retail storage of medicine and food products. The spectrum of adversarial tampering ranges from attempted concealment of a pinhole-sized penetration to complete container replacement, which would involve counterfeiting efforts of various degrees. Sandia National Laboratories (SNL) has developed a technology base for advanced TIP materials, sensors, designs, and processes which can be adapted to various future monitoring systems. The purpose of this technology base is to investigate potential new technologies and to perform basic research on advanced technologies. This paper will describe the theory of TIP technologies and recent investigations of TIP technologies at SNL.
In light of the significant events of the past four years and their effect on the expectations placed on international safeguards, it is necessary to reflect on the direction that the development of nuclear safeguards must take in a new era, and on the implications of that direction. The time-proven monitoring techniques, based on quantitative factors and demonstrated universal application, have shown their merit. However, the new expectations suggest a possibility that a future IAEA safeguards system could rely more heavily on the value of a comprehensive, transparent, and open implementation regime. Within such a regime, the associated measures need to be determined and technological support identified. This paper will identify the proven techniques which, with appropriate implementation support, could most quickly make available additional measures for a comprehensive, transparent, and open implementation regime. In particular, it will examine the future of Integrated Monitoring Systems and Remote Monitoring in international safeguards, including technical and other related factors.
The upgrade of the TEXTOR tokamak at KFA Juelich was recently completed. This upgrade extended the TEXTOR pulse length from 5 seconds to 10 seconds. The auxiliary heating was increased to a total of 8.0 MW through a combination of neutral beam injection and radio frequency heating. Originally, the inertially cooled armor tiles of the full toroidal belt Advanced Limiter Test -- II (ALT-II) were designed for a 5-second operation with total heating of 6.0 MW. The upgrade of TEXTOR will increase the energy deposited per pulse onto the ALT-II by about 300%. Consequently, the graphite armor tiles for the ALT-II had to be redesigned to avoid excessively high graphite armor surface temperatures that would lead to unacceptable contamination of the plasma. This redesign took the form of two major changes in the ALT-II armor tile geometry. The first design change was an increase of the armor tile thermal mass, primarily by increasing the radial thickness of each tile from 17 mm to 20 mm. This increase in the radial tile dimension reduces the overall pumping efficiency of the ALT-II pump limiter by about 30%. The reduction in exhaust efficiency is unfortunate, but could be avoided only by active cooling of the ALT-II armor tiles. The active cooling option was too complicated and expensive to be considered at this time. The second design change involved redefining the plasma facing surface of each armor tile in order to fully utilize the entire surface area. The incident charged particle heat flux was distributed uniformly over the armor tile surfaces by carefully matching the radial, poloidal and toroidal curvature of each tile to the plasma flow in the TEXTOR boundary layer. This geometry redefinition complicates the manufacturing of the armor tiles, but results in significant thermal performance gains. In addition to these geometry upgrades, several material options were analyzed and evaluated.
Disposition of electric vehicle (EV) batteries after they have reached the end of their useful life is an issue that could impede the widespread acceptance of EVs in the commercial market. This is especially true for advanced battery systems, for which working recycling processes have not yet been established. The DOE sponsors an Ad Hoc Electric Vehicle Battery Readiness Working Group to identify barriers to the introduction of commercial EVs and to advise the DOE on specific issues related to battery reclamation/recycling, in-vehicle battery safety, and battery shipping. A Sub-Working Group on the reclamation/recycle topic has been reviewing the status of recycling process development for the principal battery technologies that are candidates for EV use from the near term to the long term. Recycling of near-term battery technologies, such as lead-acid and nickel/cadmium, is occurring today, and it is believed that sufficient processing capacity can be maintained to keep up with the large number of units that could result from extensive EV use. Reclamation/recycle processes for midterm batteries are partially developed. Good progress has been made in identifying processes to recycle sodium/sulfur batteries at a reasonable cost, and pilot-scale facilities are being tested or planned. A pre-feasibility cost study on the nickel/metal hydride battery also indicates favorable economics for some of the proposed reclamation processes. Long-term battery technologies, including lithium-polymer and lithium/iron disulfide, are still being designed and developed for EVs, so descriptions of prototype recycling processes are rather general at this point. Due to the long time required to set up new, full-scale recycling facilities, it is important to develop a reclamation/recycling process in parallel with the battery technologies themselves.
The Safeguards Information Management System initiative is a program of the Department of Energy's (DOE) Office of Arms Control and Nonproliferation aimed at supporting the International Atomic Energy Agency's (IAEA) efforts to strengthen safeguards through the enhancement of information management capabilities. The DOE hopes to provide the IAEA with the ability to correlate and analyze data from existing and new sources of information, including publicly available information, information on imports and exports, design information, environmental monitoring data, and non-safeguards information. The first step in this effort is to identify and define IAEA requirements. In support of this, we have created a users' requirements document, based on interviews with IAEA staff, that describes the information management needs of the end user as projected by the IAEA, including needs for storage, retrieval, analysis, communication, and visualization of data. Also included are characteristics of the end user and attributes of the current environment. This paper describes our efforts to obtain the required information. We discuss how to accurately represent user needs and involve users for an international organization with a multi-cultural user population. We describe our approach and our experience in setting up and conducting the interviews and brainstorming sessions, and we briefly discuss what we learned.
An input shaping scheme originally used to slew flexible beams via a tabletop D.C. motor is modified for use with an industrial-type, hydraulic-drive robot. This trajectory generation method was originally developed to produce symmetric, rest-to-rest maneuvers of flexible rotating rods where the angular velocity vector and gravitational vector were collinear. In that configuration, out-of-plane oscillations were excited due to centripetal acceleration of the rod. The bang-coast-bang acceleration profile resulted in no oscillations in either plane at the end of the symmetric slew maneuver. In this paper, a smoothed version of the bang-coast-bang acceleration is used for symmetric maneuvers where the angular velocity vector is orthogonal to the gravitational vector. Furthermore, the hydraulic robot servo dynamics are considered explicitly in determining the input joint angle trajectory. An instrumented mass is attached to the tip of a flexible aluminum rod. The first natural frequency of this system is about 1.0 Hz. Joint angle responses obtained with encoder sensors are used to identify the servo actuator dynamics.
Multidimensional thermal/chemical modeling is an essential step in the development of a predictive capability for cookoff of energetic materials in systems subjected to abnormal thermal environments. COYOTE II is a state-of-the-art two- and three-dimensional finite element code for the solution of heat conduction problems including surface-to-surface thermal radiation heat transfer and decomposition chemistry. Multistep finite rate chemistry is incorporated into COYOTE II using an operator-splitting methodology; rate equations are solved element-by-element with a modified matrix-free stiff solver, CHEMEQ. COYOTE II is purposely designed with a user-oriented input structure compatible with the database, the pre-processing mesh generation, and the post-processing tools for data visualization shared with other engineering analysis codes available at Sandia National Laboratories. As demonstrated in a companion paper, decomposition during cookoff in a confined or semi-confined system leads to significant mechanical behavior. Although mechanical effects are not presently considered in COYOTE II, the formalism for including mechanics in multiple dimensions is under development.
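The operator-splitting step described above can be summarized schematically (a generic two-step form; the exact rate expressions and source-term coupling are those implemented in COYOTE II and CHEMEQ, not reproduced here): over each time increment \Delta t, the code first advances the conduction/radiation problem with the chemistry frozen,

    \rho c_p\,\frac{\partial T}{\partial t} = \nabla\cdot(k\,\nabla T) + q,

and then, element by element, integrates the stiff reaction system

    \frac{d\alpha_j}{dt} = f_j(\boldsymbol{\alpha}, T), \quad j = 1,\dots,M,

over the same \Delta t with the matrix-free stiff solver, feeding the associated heat of reaction back into the source term q for the next thermal step.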
The molecular-scale species distributions and intermediate-scale structure of silicate sols influence the microstructures of the corresponding thin films prepared by dip-coating. Using multi-step hydrolysis procedures, the authors find that, depending on the sequence and timing of the successive steps, the species distributions (determined by {sup 29}Si NMR) and intermediate scale structure (determined by SAXS) can change remarkably for sols prepared with the same nominal composition. During film formation, these kinetic effects cause differences in the efficiency of packing of the silicate species, leading to thin film structures with different porosities.
VISAR (Velocity Interferometer System for Any Reflector) is a specialized Doppler interferometer system that is gaining world-wide acceptance as the standard for shock phenomena analysis. The VISAR's large power and cooling requirements and the sensitive and complex nature of the interferometer cavity have restricted the traditional system to the laboratory. This paper describes the new portable VISAR, its peripheral sensors, and the role it played in optically measuring ground shock of an underground nuclear detonation (UGT). The Solid State VISAR uses a prototype diode-pumped Nd:YAG laser and solid state detectors that provide a suitcase-size system with low power requirements. A special window and sensors were developed for fiber optic coupling (1 kilometer long) to the VISAR. The system has proven itself a reliable, easy-to-use instrument that is capable of field test use and rapid data reduction using only a notebook personal computer (PC).
Improvements over the last decade in the purity of molding materials, in IC wafer passivation layers, and in manufacturing quality have resulted in extremely high reliability in commercial IC packages. In contrast, the ceramic/hermetic package world suffers from limited availability of the newest IC chips, higher cost, larger size, decreasing quality, and fewer manufacturing lines. Traditional manufacturing line qualification tests are a good start for conversion to commercial plastic parts. However, the use of standard sensitive test chips instead of product die is necessary to perform affordable, quantitative evaluations. These test chips have many integrated sensors measuring chemical, mechanical, thermal, and electrical degradation caused by manufacturing and the package environment. Beyond visual inspection, electrical test, and burn-in, little has been documented on 100% nondestructive screening of plastic molded parts. Based on realistic expectations for process control and on the cultural expectations of system engineers, user screening is necessary. Nondestructive tests of moisture and temperature excursion susceptibility are described.
System behaviors can be accurately simulated using artificial neural networks (ANNs), and one that performs well in simulation of structural response is the radial basis function network. A specific implementation of this is the connectionist normalized linear spline (CNLS) network, investigated in this study. A useful framework for ANN simulation of structural response is the recurrent network. This framework simulates the response of a structure one step at a time. It requires as inputs some measures of the excitation, and the response at previous times. On output, the recurrent ANN yields the response at some time in the future. This framework is practical to implement because every ANN requires training, and this is executed by showing the ANN examples of correct input/output behavior (exemplars) and requiring the ANN to simulate this behavior. In practical applications, hundreds or perhaps thousands of exemplars are required for ANN training. The usual laboratory and non-neural numerical applications to be simulated by ANNs produce these amounts of information. Once the recurrent ANN is trained, it can be provided with excitation information and used to propagate structural response, simulating the response it was trained to approximate. The structural characteristics, parameters in the CNLS network, and degree of training influence the accuracy of approximation. This investigation studies the accuracy of structural response simulation for a single-degree-of-freedom (SDF), nonlinear system excited by random vibration loading. The ANN used to simulate structural response is a recurrent CNLS network. We investigate the error in structural system simulation.
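A minimal sketch of the recurrent simulation framework described above, written with an ordinary Gaussian radial basis function network and a two-step response history as input (the study uses the CNLS normalized-spline network and its own training procedure; the function and variable names here are illustrative):

    import numpy as np

    def rbf_features(x, centers, width):
        # Gaussian radial basis functions evaluated at the input vector x
        d2 = ((x - centers) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * width ** 2))

    def train(inputs, targets, centers, width, ridge=1e-6):
        # Fit the linear output weights to the exemplars by regularized least squares
        Phi = np.array([rbf_features(x, centers, width) for x in inputs])
        A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
        return np.linalg.solve(A, Phi.T @ targets)

    def simulate(excitation, y0, y1, weights, centers, width):
        # Recurrent propagation: each step uses the current excitation sample and
        # the two previous responses (the network's own predictions) as inputs.
        y = [y0, y1]
        for f in excitation[2:]:
            x = np.array([f, y[-1], y[-2]])
            y.append(float(rbf_features(x, centers, width) @ weights))
        return np.array(y)

In use, each exemplar pairs an (excitation, past response) input vector with the next measured response; once the weights are fitted, simulate() is driven only by the excitation record and its own fed-back predictions, which is the recurrent behavior whose error the study quantifies.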
We are improving the understanding of pulsed-power-driven ion diodes using measurements of the charged particle distributions in the diode anode-cathode (AK) gap. We measure the time- and space-resolved electric field in the AK gap using Stark-shifted Li I 2s-2p emission. The ion density in the gap is determined from the electric field profile and the ion current density. The electron density is inferred by subtracting the net charge density, obtained from the derivative of the electric field profile, from the ion density. The measured electric field and charged particle distributions are compared with results from QUICKSILVER, a 3D particle-in-cell computer code. The comparison validates the fundamental concept of electron build-up in the AK gap. However, the PBFA II diode exhibits considerably richer physics than presently contained in the simulation, suggesting improvements both to the experiments and to our understanding of ion diode physics.
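The inference chain described above can be written compactly in a one-dimensional (planar-gap) approximation, an assumption made here only for illustration: Gauss's law gives the net charge density from the gradient of the measured field,

    \varepsilon_0\,\frac{dE}{dx} = e\left(Z\,n_i - n_e\right) \quad\Longrightarrow\quad n_e = Z\,n_i - \frac{\varepsilon_0}{e}\,\frac{dE}{dx},

with the ion density itself obtained from the measured ion current density and the local field-dependent ion velocity, e.g. n_i = J_i/(Z e\,v_i(x)).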
The Arrhenius relation predicts a linear relation between the log of time to property change and inverse absolute temperature, with the Arrhenius activation energy E{sub a} given by the slope. For a nitrile rubber, Arrhenius behavior is observed for elongation vs air-oven aging temperature, with an E{sub a} of 22 kcal/mol. Confidence in extrapolation to low temperatures can be increased by measuring oxygen consumption. From 95 to 52 C, the E{sub a} for oxygen consumption is identical to that for elongation; however, below 52 C, the E{sub a} for oxygen consumption drops slightly to 18 kcal/mol, indicating that the extrapolation assumption probably overestimates the tensile property lifetime by a factor of about 2 at 23 C.
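The factor-of-two statement can be reproduced with a short calculation (a sketch assuming a simple single-activation-energy Arrhenius extrapolation from the 52 C breakpoint down to 23 C):

    import math

    R = 1.987e-3                                   # gas constant, kcal/(mol K)
    T_break, T_use = 52.0 + 273.15, 23.0 + 273.15  # breakpoint and service temperatures, K

    def accel(Ea):
        # Arrhenius acceleration factor between T_use and T_break for activation energy Ea
        return math.exp((Ea / R) * (1.0 / T_use - 1.0 / T_break))

    # Lifetime overestimate from extrapolating with 22 kcal/mol when 18 kcal/mol applies below 52 C
    print(round(accel(22.0) / accel(18.0), 1))     # ~1.8, i.e. roughly a factor of 2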
Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Prosperity Games are simulations that explore complex issues in a variety of areas including economics, politics, sociology, environment, education and research. These issues can be examined from a variety of perspectives ranging from a global, macroeconomic and geopolitical viewpoint down to the details of customer/supplier/market interactions in specific industries. All Prosperity Games are unique in that both the game format and the player contributions vary from game to game. This report documents the Prosperity Game conducted under the sponsorship of the Electronic Industries Association. Almost all of the players were from the electronics industry. The game explored policy changes that could enhance US competitiveness in the manufacturing of consumer electronics. Four teams simulated a presidentially appointed commission comprised of high-level representatives from government, industry, universities and national laboratories. A single team represented the foreign equivalent of this commission, formed to develop counter strategies for any changes in US policies. The deliberations and recommendations of these teams provide valuable insights as to the views of this industry concerning policy changes, foreign competition, and the development, delivery and commercialization of new technologies.
An assessment of the long term containment capabilities of a possible nuclear waste disposal site requires both an understanding of the hydrogeology of the region under consideration and an assessment of the uncertainties associated with this understanding. Stochastic simulation, the generation of random "realizations" of the region's hydrogeology consistent with the available information, provides a way to incorporate various types of uncertainty into a prediction of a complex system response such as site containment capability. One statistical problem in stochastic simulation is: What features of the data should be "mimicked" in the realizations? The answer can depend on the application. A discussion is provided of some of the more common data features used in recent applications. These features include spatial covariance functions and measures of the connectivity of extreme values, as examples. Trends and new directions in this area are summarized, including a brief description of some statistics (the features) presently in experimental stages.
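As a concrete, deliberately simple illustration of mimicking a spatial covariance function (this is a generic unconditional Gaussian simulation via Cholesky factorization, not one of the specific algorithms surveyed here; the exponential covariance model and parameter values are assumptions):

    import numpy as np

    def realize(points, variance, corr_length, n_real=1, seed=0):
        # Unconditional Gaussian realizations with an exponential covariance model
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        C = variance * np.exp(-d / corr_length)
        L = np.linalg.cholesky(C + 1e-10 * np.eye(len(points)))  # small jitter for stability
        rng = np.random.default_rng(seed)
        return L @ rng.standard_normal((len(points), n_real))

    # Example: five correlated fields at 200 random locations in a unit square
    pts = np.random.default_rng(1).random((200, 2))
    fields = realize(pts, variance=1.0, corr_length=0.2, n_real=5)

Realizations generated this way reproduce, on average, the specified covariance; conditioning on measured data and honoring connectivity-of-extremes statistics require the more elaborate methods discussed above.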
This report describes the Height of Charge measurement for the Middle Key 4 test conducted at the FCDNA Permanent High Explosives Test Site (PHETS) on 17 September 1993. The object of the measurement was to monitor remotely the change in the height of the explosive charge suspended above the test pad, from the time the site was evacuated until detonation. Had the measurement shown that the charge changed height by more than 15 cm, a hold in the test was to be called so that a height adjustment in the suspension system could be made. The measurement system consisted of a remotely placed video camera linked to the measurement computer in the test control center via a fiber optic video data link, a pole-mounted stationary reference target, and a target mounted on the charge bag. The change in height was determined using image analysis software on frame-grabbed images. Measurements indicate that the charge bag did not deviate from the initial surveyed height of 1747 cm by more than 1 cm between the last measurement made by the survey crew and detonation 40 minutes later.
Vehicle lateral dynamics are affected by vehicle mass, longitudinal velocity, vehicle inertia, and the cornering stiffness of the tires. All of these parameters are subject to variation, even over the course of a single trip. Therefore, a practical lateral control system must guarantee stability, and hopefully ride comfort, over a wide range of parameter changes. This paper describes a robust controller which theoretically guarantees stability over a wide range of parameter changes. The robust controller is designed using a frequency domain transfer function approach. An uncertainty band in the frequency domain is determined using simulations over the range of expected parameter variations. Based on this bound, a robust controller is designed by solving the Nevanlinna-Pick interpolation problem. The performance of the robust controller is then evaluated over the range of parameter variations through simulations.
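For reference, the parameters named above enter through the standard linear "bicycle" model of lateral dynamics (a generic textbook form assumed here for illustration, not necessarily the exact plant model used in the paper):

    m\,(\dot{v}_y + v_x\,r) = F_{yf} + F_{yr}, \qquad I_z\,\dot{r} = a\,F_{yf} - b\,F_{yr}, \qquad F_{yf} = -C_f\,\alpha_f, \quad F_{yr} = -C_r\,\alpha_r,

where m is the vehicle mass, v_x the longitudinal velocity, I_z the yaw inertia, a and b the distances from the center of gravity to the axles, and C_f and C_r the front and rear cornering stiffnesses; variation in any of these parameters perturbs the steering-to-lateral-position transfer function, which is what the frequency-domain uncertainty band must cover.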
HEISHI is a Fortran computer model designed to aid in analysis, prediction, and optimization of fuel characteristics for use in Space Nuclear Thermal Propulsion (SNTP). Calculational results include fission product release rate, fuel failure fraction, mode of fuel failure, stress-strain state, and fuel material morphology. HEISHI contains models for decay chain calculations of retained and released fission products, based on an input power history and release coefficients. Decay chain parameters such as direct fission yield, decay rates, and branching fractions are obtained from a database. HEISHI also contains models for stress-strain behavior of multilayered fuel particles with creep and differential thermal expansion effects, transient particle temperature profile, grain growth, and fuel particle failure fraction. Grain growth is treated as a function of temperature; the failure fraction depends on the coating tensile strength, which in turn is a function of grain size. The HEISHI code is intended for use in analysis of coated fuel particles for use in particle bed reactors; however, much of the code is geometry-independent and applicable to fuel geometries other than spherical.
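The retained-inventory bookkeeping described above is governed by coupled linear decay-chain balances of the standard Bateman type (written generically here; the treatment of release coefficients in HEISHI may differ in detail):

    \frac{dN_i}{dt} = y_i\,\dot{F}(t) + \sum_{j} b_{j\to i}\,\lambda_j\,N_j - \left(\lambda_i + r_i\right) N_i,

where N_i is the retained inventory of nuclide i, y_i its direct fission yield, \dot{F}(t) the fission rate from the input power history, b_{j\to i} the branching fractions, \lambda_i the decay constants, and r_i the release coefficient; the released activity follows by integrating the r_i N_i terms.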
The Aging Aircraft NDI Validation Center (AANC) was established by the FAA Technical Center (FAATC) at Sandia National Laboratories in August of 1991. The Validation Center supports the inspection portion of the FAA's National Aging Aircraft Program, which was mandated by Congress in the 1988 Aviation Safety Act. The ultimate customers of the AANC include the FAA, airframe and engine manufacturers, airlines, and third party maintenance facilities. One goal of the AANC is to provide independent validation of technologies intended to enhance the structural inspection of aging commuter and transport aircraft. Another goal is to assist in transferring emerging inspection technology from other parts of the FAA's program to the aircraft industry. The deliverables from both these activities are an assessment of the reliability and cost benefits of an inspection technology as applied to a particular inspection or class of inspections. The validation process consists of a quantitative and systematic assessment of the reliability and cost/benefits of a Nondestructive Inspection (NDI) process. An NDI process is defined as the NDI systems and procedures used for inspections. This includes the NDI operator, the inspection environment, and the object being inspected. The phases of the validation process are: (1) conceptual, (2) preliminary design, (3) final design, and (4) field implementation. The AANC usually becomes involved in the validation process during Phases 2 and 3. The Center supports field trials with a full array of test specimens and established procedures for conducting the trials. Phase 4 reliability assessment includes field trials using independent inspectors either at the Center's hangar or at outside maintenance facilities. Three activities in which inspection technology has been validated in the field are summarized below. These are: (1) an eddy current inspection reliability experiment; (2) magneto-optic imager validation; and (3) inspection tool improvement.
Low Energy Charge-Induced Voltage Alteration (LECIVA) is a new scanning electron microscopy technique developed to localize open conductors in passivated ICs. LECIVA takes advantage of recent experimental work showing that the dielectric surface equilibrium voltage has an electron flux density dependence at low electron beam energies ({le}1.0 keV). The equilibrium voltage changes from positive to negative as the electron flux density is increased. Like Charge-Induced Voltage Alteration (CIVA), LECIVA images are produced from the voltage fluctuations of a constant current power supply as an electron beam is scanned over the IC surface. LECIVA image contrast is generated only by the electrically open part of a conductor, yielding the same high selectivity demonstrated by CIVA. Because LECIVA is performed at low beam energies, radiation damage to MOS structures by the primary electrons and x-rays is far less than that caused by CIVA. LECIVA may also be performed on commercial electron beam test systems that do not have high primary electron beam energy capabilities. The physics of LECIVA signal generation are described. LECIVA imaging examples illustrate its utility on both a standard scanning electron microscope (SEM) and a commercial electron beam test system.
The arc energy distribution in the electrode gap plays a central role in the vacuum arc remelting (VAR) process. However, very little has been done to investigate the response of this important process variable to changes in process parameters. Emission spectroscopy was used to investigate variations in arc energy in the annulus of a VAR furnace during melting of 0.43 m diameter Alloy 718 electrode into 0.51 m diameter ingot. Time-averaged (1 second) intensity data from various chromium atom and ion (Cr{sup +}) emission lines were simultaneously collected, and selected intensity ratios were subsequently used as arc energy indicators. These studies were carried out as a function of melting current, electrode gap, and CO partial pressure. The data were modeled, and the ion electronic energy was found to be a function of electrode gap, the energy content of the ionic vapor decreasing with increasing gap length; the ion ratios were not found to be sensitive to pressure. On the other hand, the chromium atom electronic energy was difficult to model in the factor space investigated, but was determined to be sensitive to pressure. The difference in character of the chromium ion and atom energy fluctuations in the furnace annulus is attributed to the difference in the origins of these arc species and the non-equilibrium nature of the metal vapor arc. Most of the ion population is emitted directly from cathode spots, whereas much of the atomic vapor arises due to vaporization from the electrode and pool surfaces. Also, the positively charged ionic species interact more strongly with the electron gas than the neutral atomic species, the two distributions never equilibrating due to the low pressure.
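The use of line-intensity ratios as electronic-energy indicators rests on the standard emission-spectroscopy relation (shown for orientation; the specific lines and calibration are those used in the study): for two lines of the same species with upper-level populations in a Boltzmann distribution,

    \frac{I_1}{I_2} = \frac{A_1\,g_1\,\lambda_2}{A_2\,g_2\,\lambda_1}\,\exp\!\left(-\frac{E_1 - E_2}{k\,T_{exc}}\right),

where A is the transition probability, g the upper-level degeneracy, \lambda the wavelength, and E the upper-level energy, so a measured ratio maps directly to an excitation temperature.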
The NAVSTAR satellites have two missions: navigation and nuclear detonation detection. The main objective of this paper is to describe one of the key elements of the Nuclear Detonation Detection System (NDS), the Burst Detector W-Sensor (BDW) that was developed for the Air Force Space and Missile Systems Center, its mission on GPS Block IIR, and how it utilizes GPS timing signals to precisely locate nuclear detonations (NUDET). The paper will also cover the interface to the Burst Detector Processor (BDP), which links the BDW to the ground station where the BDW is controlled and where data from multiple satellites are processed to determine the location of the NUDET. The Block IIR BDW is the culmination of a development program that has produced a state-of-the-art, space qualified digital receiver/processor that dissipates only 30 Watts, weighs 57 pounds, and has a 12 in. {times} 14.2 in. {times} 7.16 in. footprint. The paper will highlight several of the key multilayer printed circuit cards without which the required power, weight, size, and radiation requirements could not have been met. In addition, key functions of the system software will be covered. The paper will conclude with a discussion of the high speed digital signal processing and algorithm used to determine the time-of-arrival (TOA) of the electromagnetic pulse (EMP) from the NUDET.
We are using VUV spectroscopy to study the ion source region on the SABRE applied-B extraction ion diode. The VUV diagnostic views the anode-cathode gap perpendicular to the ion acceleration direction, and images a region 0--1 mm from the anode onto the entrance slit of a 1 m normal-incidence spectrometer. Time resolution is obtained by gating multiple striplines of a CuI- or MgF{sub 2}-coated micro-channel plate intensifier. We report on results with a passive proton/carbon ion source. Lines of carbon and oxygen are observed over 900--1600 {angstrom}. The optical depths of most of the lines are less than or of order 1. Unfolding the Doppler broadening of the ion lines in the source plasma, we calculate the contribution of the source to the accelerated C IV ion micro-divergence as 4 mrad at peak power. Collisional-radiative modeling of oxygen line intensities provides the source plasma average electron density of 7{times}10{sup 16} cm{sup {minus}3} and temperature of 10 eV. Measurements are planned with a lithium ion source and with VUV absorption spectroscopy.
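The unfolding step mentioned above rests on the standard Doppler relations (stated here schematically; the geometric factors of the actual unfold are in the full analysis): with the line of sight perpendicular to the acceleration direction, a characteristic transverse velocity spread v_\perp in the source plasma produces a line width \Delta\lambda_D/\lambda_0 \approx v_\perp/c, and after acceleration to a directed velocity v_{beam} the corresponding source contribution to micro-divergence is \theta \approx v_\perp/v_{beam}.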
Practical and theoretical limits on the bandwidth of distributed optical phase modulators and traveling-wave photodetectors are given. For the case of perfect velocity matching, RF transmission losses are the main performance-limiting factor. However, some high-performance modulators require highly capacitive transmission lines, making proper slow-wave operation difficult to achieve.
This report documents the results of a study regarding the conservatisms in ASME Code Section 3, Class 1 component fatigue evaluations and the effects of Light Water Reactor (LWR) water environments on fatigue margins. After review of numerous Class 1 stress reports, it is apparent that there is a substantial amount of conservatism present in many existing component fatigue evaluations. With little effort, existing evaluations could be modified to reduce the overall predicted fatigue usage. Areas of conservatism include design transients considerably more severe than those experienced during service, conservative grouping of transients, conservatisms that have been removed in later editions of Section 3, bounding heat transfer and stress analysis, and use of the "elastic-plastic penalty factor" (K{sub 3}). Environmental effects were evaluated for two typical components that experience severe transient thermal cycling during service, based on both design transients and actual plant data. For all reasonable values of actual operating parameters, environmental effects reduced predicted margins, but fatigue usage was still bounded by the ASME Section 3 fatigue design curves. It was concluded that the potential increase in predicted fatigue usage due to environmental effects should be more than offset by decreases in predicted fatigue usage if re-analysis were conducted to reduce the conservatisms that are present in existing component fatigue evaluations.
Austin chalk core has been tested to determine the effective stress law for deformation of the matrix material and the stress-sensitive conductivity of the natural fractures. For deformation behavior, two samples provided data on the variations of the poroelastic parameter, {alpha}, for Austin chalk, giving values around 0.4. The effective-stress-law behavior of a Saratoga limestone sample was also measured for the purpose of obtaining a comparison with a somewhat more porous carbonate rock. {alpha} for this rock was found to be near 0.9. The low {alpha} for the Austin chalk suggests that stresses in the reservoir, or around the wellbore, will not change much with changes in pore pressure, as the contribution of the fluid pressure is small. Three natural fractures from the Austin chalk were tested, but two of the fractures were very tight and probably do not contribute much to production. The third sample was highly conductive and showed some stress sensitivity, with a factor of three reduction in conductivity over a net stress increase of 3000 psi. Natural fractures also showed a propensity for permanent damage when the net stress exceeded about 3000 psi. This damage was irreversible and significantly affected conductivity. {alpha} was difficult to determine for the fractures, and most tests were inconclusive, although the results from one sample suggested that {alpha} was near unity.
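For context, the poroelastic parameter quoted above enters through the effective-stress law (standard form, stated for illustration):

    \sigma_{eff} = \sigma - \alpha\,P_p,

so with \alpha \approx 0.4 a given decline in pore pressure P_p increases the effective stress by only about 40% of that decline, which is why the reservoir and near-wellbore stresses are expected to be relatively insensitive to drawdown; a rock with \alpha near 0.9, like the Saratoga limestone sample, responds much more strongly.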
The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.
MELTER is an analysis of cargo responses inside a fire-threatened Safe-Secure Trailer (SST); it was developed for the Defense Program Transportation Risk Assessment (DPTRA). Many simplifying assumptions are required to make the problem tractable, and MELTER incorporates modeling that balances the competing requirements of execution speed, generality, completeness of essential physics, and robustness. Input parameters affecting the analysis include those defining the fire scenario, those defining the cargo loaded in the SST, and those defining properties of the SST. For a specified fire, SST, and cargo geometry, MELTER predicts the critical fire duration that will lead to a failure. The principal features of the analysis include: (a) geometric considerations to interpret fire-scenario descriptors in terms of a thermal radiation boundary condition, (b) a simple model of the SST`s wall combining the diffusion model for radiation through optically-thick media with an endothermic reaction front to describe the charring of the dimensional, rigid foam in the SST wall, (c) a transient radiation enclosure model, (d) a one-dimensional, spherical idealization of the shipped cargos providing modularity so that cargos of interest can be inserted into the model, and (e) associated numerical methods to integrate coupled differential equations and find roots.
Fission-fragment-pumped 1.73 {mu}m atomic xenon laser output was measured without changing the laser gas mixture before each reactor pulse. For an Ar/Xe gas mixture at 260 Torr and 0.3 percent xenon, no degradation in laser output was noted over five reactor pulses.
One mechanism for the penetration of lightning energy into the interior of a weapon is by current diffusion through the exterior metal case. Tests were conducted in which simulated lightning currents were driven over the exteriors of similar aluminum and ferrous steel cylinders of 0.125-in wall thickness. Under conditions in which the test currents were driven asymmetrically over the exteriors of the cylinders, voltages were measured between various test points in the interior as functions of the amplitude and duration of the applied current. The maximum recorded open-circuit voltage, which occurred in the steel cylinder, was 1.7 V. On separate shots, currents flowing on a low impedance shorting conductor between the same set of test points were also measured, yielding a maximum current of 630 A, again occurring across the interior of the steel cylinder. Under symmetrical exterior drive current conditions, a maximum end-to-end internal voltage of 4.1 V was obtained, also in the steel cylinder, with a corresponding current of 480 A measured on a coaxial conductor connected between the two end plates of the cylinder. Data were acquired over a range of input current amplitudes between about 40 and 100 kA. These data provide the experimental basis for validating models that can subsequently be applied to real weapons and other objects of interest.
Apparent malfunctions of two Trajectory Sensing Signal Generators (TSSGs) were observed during B61-3/4/10 trajectory arming tests at the Pantex Weapons Evaluation Test Laboratory (WETL). On two subsequent occasions, the TSSG used in the B83 bomb developed single-channel failures during tests at WETL. It was concluded that the failures were related to the electrostatic discharge (ESD) hazard associated with the use of a Thermotron{reg_sign} liquid carbon dioxide (CO{sub 2}) refrigeration system. It was demonstrated that during temperature conditioning, the case of the TSSG can acquire a substantial electrostatic charge as a result of triboelectric processes. A series of tests was performed to identify the charging mechanisms associated with liquid CO{sub 2} refrigeration systems and a quantitative assessment of the ESD hazard was made. The conclusions and recommendations derived in these tests were summarized in a Significant Finding Investigations Closeout. The purpose of this report is to provide formal documentation for, and elaboration on, issues addressed in the Findings.
The electrical effects of lightning penetration of the outer case of a weapon on internal structures, such as a firing set housing, and on samples of a flat, flexline detonator cable have been investigated experimentally. Maximum open-circuit voltages measured on either simulated structures (126 V) or the cable (46 V) located directly behind the point of penetration were well below any level that is foreseen to create a threat to nuclear safety. On the other hand, it was found that once full burnthrough of the barrier occurred, significant fractions of the incident continuing currents coupled to both the simulated internal structure (up to 300 A) and the cable sample (69 A) when each was electrically connected internally to case ground. No occurrence was observed of the injection of large amplitude currents from return strokes occurring after barrier penetration. Under circumstances in which small volumes of trapped gases exist behind penetration sites, rapid heating of the gas by return strokes occurring after burnthrough has been shown to produce large mechanical impulses on the adjacent surfaces.
The Department of Energy (DOE) has tasked Sandia with conducting a market survey to identify and evaluate pertinent solid state recorders. This report identifies the chosen recorders and explains why they were selected. It details test procedures and provides the results of the evaluation. Our main focus in this evaluation was to determine whether the frame grabber altered signal quality. To determine the effect on the signal, we evaluated specific parameters: sensitivity, resolution, signal-to-noise ratio, and intrascene dynamic range. These factors were evaluated at the input and output of the frame grabber.
A suite of In Situ Permeable Flow Sensors was deployed at the site of the Savannah River Integrated Demonstration to monitor the interaction between the groundwater flow regime and air injected into the saturated subsurface through a horizontal well. One of the goals of the experiment was to determine if a groundwater circulation system was induced by the air injection process. The data suggest that no such circulation system was established, perhaps due to the heterogeneous nature of the sediments through which the injected gas has to travel. The steady state and transient groundwater flow patterns observed suggest that the injected air followed high permeability pathways from the injection well to the water table. The preferential pathways through the essentially horizontal impermeable layers appear to have been created by drilling activities at the site.
As part of Sandia`s Corporate Diversity Program, a Diversity Action Team was assembled to study the impact of diversity on teamwork. We reviewed the available literature on successful teaming, both with homogeneous (more alike than different) and heterogeneous teams. Although many principles and guidelines for successful homogeneous teams also apply to diverse teams, we believe that a document concentrating on diverse teams will be useful both for Sandians and for the outside world.
Sandia National Laboratories, New Mexico (Sandia/NM) has successfully developed and implemented a chargeback system to fund the implementation of Pollution Prevention activities. In the process of establishing this system, many valuable lessons have been learned. This paper describes how the chargeback system currently functions, the benefits and drawbacks of implementing such a system, and recommendations for implementing a chargeback system at other facilities. The initial goals in establishing a chargeback system were to create (1) funding for pollution prevention implementation, including specific pollution prevention projects; and (2) awareness on the part of the line organizations of the quantities and types of waste that they generate, thus providing them with a direct incentive to reduce that waste. The chargeback system inputs waste generation data and then filters and sorts the data to serve two purposes: (1) the operation of the chargeback system; and (2) the detailed waste generation reporting used for assessing processes and identifying pollution prevention opportunities.
The Technology Development and Scoping (TDS) test series was conducted to test and develop instrumentation and procedures for performing steam-driven, high-pressure melt ejection (HPME) experiments at the Surtsey Test Facility to investigate direct containment heating (DCH). Seven experiments, designated TDS-1 through TDS-7, were performed in this test series. These experiments were conducted using similar initial conditions; the primary variable was the initial pressure in the Surtsey vessel. All experiments in this test series were performed with a steam driving gas pressure of {approx_equal} 4 MPa, 80 kg of alumina/iron/chromium thermite melt simulant, an initial hole diameter of 4.8 cm (which ablated to a final hole diameter of {approx_equal} 6 cm), and a 1/10th linear scale model of the Surry reactor cavity. The Surtsey vessel was purged with argon (<0.25 mol% O{sub 2}) to limit the recombination of hydrogen and oxygen, and gas grab samples were taken to measure the amount of hydrogen produced.
There is an instability in certain S.P.H. (Smoothed Particle Hydrodynamics) material dynamics computations. Evidence from analyses and experiments suggests that the instabilities in S.P.H. are not removable with artificial viscosities. However, the analysis shows that a type of conservative smoothing does remove the instability. Also, numerical experiments on certain test problems show that SPHCS, an S.P.H. code with conservative smoothing, compares well in accuracy with computations based on the von Neumann-Richtmyer method.
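To make the idea of conservative smoothing concrete, the sketch below applies a pairwise velocity-smoothing pass to a 1-D particle array. The neighbor rule, weighting, and smoothing parameter eps are illustrative assumptions and are not taken from the SPHCS formulation; the point is only that the pairwise exchange conserves total momentum exactly while damping the particle-scale noise that an artificial viscosity cannot remove.

    # Minimal 1-D sketch of pairwise conservative smoothing (illustrative only;
    # not the SPHCS algorithm). Each nearest-neighbor pair exchanges momentum,
    # so sum(m*v) is preserved to round-off while short-wavelength noise is damped.

    def conservative_smoothing(m, v, eps=0.1):
        """m, v: lists of particle masses and velocities (nearest-neighbor ordering).
        eps: assumed smoothing strength, 0 <= eps <= 1."""
        v = list(v)
        for i in range(len(v) - 1):
            j = i + 1
            dv = eps * (v[j] - v[i])           # relative velocity to be smoothed
            v[i] += m[j] / (m[i] + m[j]) * dv  # momentum gained by particle i
            v[j] -= m[i] / (m[i] + m[j]) * dv  # equal and opposite change for j
        return v

    if __name__ == "__main__":
        m = [1.0] * 6
        v = [0.0, 1.0, -1.0, 1.0, -1.0, 0.0]   # saw-tooth noise typical of the instability
        smoothed = conservative_smoothing(m, v)
        # Total momentum is unchanged by the smoothing pass.
        assert abs(sum(mi * vi for mi, vi in zip(m, v)) -
                   sum(mi * vi for mi, vi in zip(m, smoothed))) < 1e-12
        print(smoothed)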
This report is a summary of the history, design and development, procurement, fabrication, installation and operation of the closures used as containment devices on underground nuclear tests at the Nevada Test Site. It also addresses the closure program mothball and start-up procedures. The Closure Program Document Index and equipment inventories, included as appendices, serve as location directories for future document reference and equipment use.
This report summarizes the progress on the pulsed power approach to inertial confinement fusion. In 1989, the authors achieved a proton focal intensity of 5 TW/cm{sup 2} on PBFA-II in a 15-cm-radius applied magnetic-field (applied-B) ion diode. This is an improvement by a factor of 4 compared to previous PBFA-II experiments. They completed development of the three-dimensional (3-D), electromagnetic, particle-in-cell code QUICKSILVER and obtained the first 3-D simulations of an applied-B ion diode. The simulations, together with analytic theory, suggest that control of electromagnetic instabilities could reduce ion divergence. In experiments using a lithium fluoride source, they delivered 26 kJ of lithium energy to the diode axis. Rutherford-scattered ion diagnostics have been developed and tested using a conical foil located inside the diode. They can now obtain energy density profiles by using range filters and recording ion images on nuclear track recording film. Timing uncertainties in power flow experiments on PBFA-II have been reduced by a factor of 5. They are investigating three plasma opening switches that use magnetic fields to control and confine the injected plasma. These new switches provide better power flow than the standard plasma erosion switch. Advanced pulsed-power fusion drivers will require extraction-geometry applied-B ion diodes. During this reporting period, progress was made in evaluating the generation, transport, and focus of multiple ion beams in an extraction geometry and in assessing the probable damage to a target chamber first wall.
This report was stimulated by some recent investigations of S.P.H. (Smoothed Particle Hydrodynamics). Solid dynamics computations with S.P.H. show symptoms of instabilities which are not eliminated by artificial viscosities. Both analysis and experiment indicate that conservative smoothing eliminates instabilities in S.P.H. computations that artificial viscosities cannot. Questions were raised as to whether conservative smoothing might smear solutions more than artificial viscosity does. Conservative smoothing, properly used, can produce more accurate solutions than the von Neumann-Richtmyer-Landshoff artificial viscosity, which has been the standard for many years. The authors illustrate this using the vNR scheme on a test problem with a known exact solution involving a shock collision in an ideal gas. They show that the norms of the errors with conservative smoothing are significantly smaller than the norms of the errors with artificial viscosity.
Prosperity Games are an outgrowth and adaptation of move/countermove and seminar War Games. Prosperity Games are simulations that explore complex issues in a variety of areas including economics, politics, sociology, environment, education and research. These issues can be examined from a variety of perspectives ranging from a global, macroeconomic and geopolitical viewpoint down to the details of customer/supplier/market interactions in specific industries. All Prosperity Games are unique in that both the game format and the player contributions vary from game to game. This report documents the Prosperity Game conducted under the sponsorship of the American Electronics Association in conjunction with the Electronics Subcommittee of the Civilian Industrial Technology Committee of the National Science and Technology Council. Players were drawn from government, national laboratories, and universities, as well as from the electronics industry. The game explored policy changes that could enhance US competitiveness in the manufacturing of consumer electronics. Two teams simulated a presidentially appointed commission comprised of high-level representatives from government, industry, universities and national laboratories. A single team represented the foreign equivalent of this commission, formed to develop counter strategies for any changes in US policies. The deliberations and recommendations of these teams provide valuable insights as to the views of this diverse group of decision makers concerning policy changes, foreign competition, and the development, delivery and commercialization of new technologies.
Inside this issue is a farewell to Testing Technology message from technical advisor Ruth David. Also included are articles on: testing the I-40 bridge over the Rio Grande, simulated reactor meltdown studies, an inexpensive monitor for testing integrated circuits, testing of antihelicopter mines, and quality assurance on aircraft inspection.
This report describes the HP370 component of the Enhanced Graphics System (EGS) used at Tonopah Test Range (TTR). Selected Radar data is fed into the computer systems and the resulting tracking symbols are displayed on high-resolution video monitors in real time. These tracking symbols overlay background maps and are used for monitoring/controlling various flight vehicles. This report discusses both the operational aspects and the internal configuration of the HP370 Workstation portion of the EGS system.
An extensive set of direct-strike lightning tests has been carried out on a set of facsimile assembly joints of the kinds employed in the design of nuclear weapon cases. Taken as a whole, the test hardware included all the conceptual design elements that are embodied, either singly or in combination, in any specific assembly joint incorporated into any stockpiled weapon. During the present testing, the effects of all key design parameters on the voltages developed across the interior of the joints were investigated under a range of lightning stroke current parameter values. Design parameter variations included the types and number of joint fasteners, mechanical preload, surface finish tolerance and coatings, and the material from which the joint assembly was fabricated. Variations of the simulated lightning stroke current included amplitude (30-, 100-, and 200-kA peak), rise time, and decay time. The maximum voltage observed on any of the test joints that incorporated proper metal-to-metal surface contact was 65 V. Typical response values were more on the order of 20 V. In order to assess the effect of the presence of a dielectric coating (either intentional or as a result of corrosion) between the mating surfaces of a joint, a special configuration was tested in which the mating parts of the test assembly were coated with a 1-mil-thick dielectric anodizing layer. First strokes to these test assemblies resulted in very narrow voltage spikes of amplitudes up to 900 V. The durations of these spikes were less than 0.1 {mu}s. However, beyond these initial spikes, the voltages typically had amplitudes of up to 400 V for durations of 3 to 5 {mu}s.
A limited pilot production run of PESC silicon solar cells for use at high concentrations (200 to 400 suns) is summarized. The front contact design of the cells was modified for operation without prismatic covers. The original objective of the contract was to systematically complete a process consolidation phase, in which all the process improvements developed during the contract would be combined in a pilot production run. This pilot run was intended to provide a basis for estimating cell costs when produced at high throughput. Because of DOE funding limitations, the Photovoltaic Concentrator Initiative is on hold, and Applied Solar`s contract was operated at a low level of effort for most of 1993. The results obtained from the reduced-scope pilot run showed the effects of discontinuous process optimization and characterization. However, the run provided valuable insight into the technical areas that can be optimized to achieve the original goals of the contract.
The use of glasses is widespread in making hermetic, insulating seals for many electronic components. Flat panel displays and fiber optic connectors are other products utilizing glass as a structural element. When glass is cooled from sealing temperatures, residual stresses are generated due to mismatches in thermal shrinkage created by the dissimilar material properties of the adjoining materials. Because glass is such a brittle material at room temperature, tensile residual stresses must be kept small to ensure durability and avoid cracking. Although production designs and the required manufacturing process development can be deduced empirically, this is an expensive and time consuming process that does not necessarily lead to an optimal design. Agile manufacturing demands that analyses be used to reduce development costs and schedules by providing insight and guiding the design process through the development cycle. To make these gains, however, viscoelastic models of glass must be available along with the right tool to use them. A viscoelastic model of glass can be used to simulate the stress and volume relaxation that occurs at elevated temperatures as the molecular structure of the glass seeks to equilibrate to the state of the supercooled liquid. The substance of the numerical treatment needed to support the implementation of the model in a 3-D finite element program is presented herein. An accurate second-order, central difference integrator is proposed for the constitutive equations, and numerical solutions are compared to those obtained with other integrators. Inherent convergence problems are reviewed and fixes are described. The resulting algorithms are generally applicable to the broad class of viscoelastic material models. First-order error estimates are used as a basis for developing a scheme for automatic time step controls, and several demonstration problems are presented to illustrate the performance of the methodology.
The necessity to evaluate our participant Quality Assurance (QA) Program for the Yucca Mountain Site Characterization Project (YMP) against the Office of Civilian Radioactive Waste Management (OCRWM) Quality Assurance Requirements and Description (QARD) issued December 1992, presented an opportunity to improve the QA Program. For some time, the SNL YMP technical staff had complained that the QA requirements imposed on their work were cumbersome and inhibited their ability to perform investigations using scientific methods. There was some truth to this, since SNL had over the years developed some procedures with many detailed controls that were far beyond what was required by project QA requirements. This had occurred either as a result of responding to numerous audit findings with a ``make the auditor happy`` attitude or with an attempt to cover every contingency. Procedures affecting scientific work were authored by the technical staff in an effort to provide them with ownership of the process; unfortunately, there were problems. Procedures were inconsistent because of the varied writing styles and differing perceptions of the degree of QA controls required to implement the program. It was extremely difficult to get all of the technical staff to accept the QA program as it was intended. These issues were endemic to the program and resulted in the QARD, the actual requirements, being written by a team of QA professionals. Once new QARD requirements were issued, an opportunity to evaluate the QA Program and to revise it not only to meet the QARD, but also to make it more plausible and meaningful to the technical staff, was presented. The discussion that follows will describe how the program was changed, will present both the positive and negative experiences observed by SNL personnel during the QARD transition, and will provide some recommendations.
One source of uncertainty in calculating radionuclide releases from a potential radioactive-waste repository at Yucca Mountain, Nevada, is uncertainty in the unsaturated-zone stratigraphy. Uncertainty in stratigraphy results from sparse drillhole data; possible variations in stratigraphy are modeled using the geostatistical method of indicator simulation. One-dimensional stratigraphic columns are generated and used for calculations of groundwater flow and radionuclide transport. There are indications of a dependence of release on hydrogeologic-unit thicknesses, but the resulting variation in release is smaller than variations produced by other sources of uncertainty.
The evaluation of the stability of the openings for the Exploratory Studies Facility and a potential repository for high-level nuclear waste at Yucca Mountain, Nevada, will require computer codes capable of predicting slip on rock joints resulting from changes in thermal stresses. The geometric moire fringe method of analysis was used to evaluate the magnitude and extent of frictional sliding in a layered polycarbonate rock mass model containing a circular hole. Slips were observed in confined zones around the hole, and micron resolutions were obtained. Unpredicted and uncontrolled uniform slip of several interfaces in the model was observed, giving considerable uncertainty in the boundary conditions of the model and perhaps making detailed comparison with numerical models impossible.
Prediction of the deformation behavior of large engineering structures in jointed rock under a specified loading history requires the extensive use of numerical simulation. For example, the evaluation of the stability of the openings for the Exploratory Studies Facility and a potential repository for high-level nuclear waste at Yucca Mountain, Nevada will require computer codes capable of predicting slip on rock joints resulting from changes in thermal stresses. The testing and ultimate validation of these complex finite element computer codes is an important step in their development before their use as a design tool for an engineering structure or for the study of some other practical problem. While field tests may be ultimately necessary, the authors propose a different and more thorough approach where early tests are done on a bench scale with easily characterized materials and geometries. For these bench-scale tests, the basic approach is to construct a laboratory specimen with a known geometry from an easily characterized material. Digital video imaging combined with the geometric moire fringe method of strain analysis is used to measure and derive the displacements on the sample under load. Here the authors present the method of acquiring and analyzing the moire data and give an analysis of its problems and benefits.
To build a bridge with customers, we balance the linear modeling process with the dynamics of the individuals we serve, who may feel unfamiliar, even confused, with that process. While it is recognized that human factors engineers improve the physical aspect of the workplace, they also work to integrate customers` cognitive styles, feelings, and concerns into the workplace tools. We take customers` feelings into consideration and integrate their expressed needs and concerns into the modeling sessions. After establishing an agreeable, professional relationship, we use a simple, portable CASE tool to reveal the effectiveness of NIAM. This tool, Modeler`s Assistant, is friendly enough to use directly with people who know nothing of NIAM, yet it captures all the information necessary to create complete models. The Modeler`s Assistant succeeds because it organizes the detailed information in an enhanced text format for customer validation. Customer cooperation results from our modeling sessions as they grow comfortable and become enthused about providing information.
Verification measurements may be used to help ensure nuclear criticality safety when burnup credit is applied to spent fuel transport and storage systems. The FORK system measures the passive neutron and gamma-ray emission from spent fuel assemblies while in the storage pool. It was designed at Los Alamos National Laboratory for the International Atomic Energy Agency safeguards program and is well suited to verify burnup and cooling time records at commercial Pressurized Water Reactor (PWR) sites. This report deals with the application of the FORK system to burnup credit operations.
We present two algorithms that use membership and equivalence queries to exactly identify the concepts given by the union of s discretized axis-parallel boxes in d-dimensional discretized Euclidean space, where there are n discrete values that each coordinate can take. The first algorithm receives at most sd counterexamples and uses time and membership queries polynomial in s and log n for d any constant. Further, all equivalence queries made can be formulated as the union of O(sd log s) axis-parallel boxes. Next, we introduce a new complexity measure that better captures the complexity of a union of boxes than simply the number of boxes and dimensions. Our new measure, u, is the number of segments in the target polyhedron, where a segment is a maximum portion of one of the sides of the polyhedron that lies entirely inside or entirely outside each of the other halfspaces defining the polyhedron. We then present an improvement of our first algorithm that uses time and queries polynomial in u and log n. The hypothesis class used here is decision trees of height at most 2sd. Further, we can show that the time and queries used by this algorithm are polynomial in d and log n for s any constant, thus generalizing the exact learnability of DNF formulas with a constant number of terms. In fact, this single algorithm is efficient for either s or d constant.
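For readers unfamiliar with the query model in the abstract above, the sketch below shows only the concept class (a union of discretized axis-parallel boxes) and the two oracles a learner may call; it is an illustrative setup under assumed representations, not the authors' learning algorithm.

    # Illustrative sketch (assumed representations, not the paper's algorithm):
    # a target concept is a union of s axis-parallel boxes over the discrete
    # domain {0, ..., n-1}^d; the learner may ask membership and equivalence queries.
    from itertools import product

    def member(point, boxes):
        """Membership query: is `point` inside any box? A box is a tuple of
        (lo, hi) bounds per coordinate, inclusive."""
        return any(all(lo <= x <= hi for x, (lo, hi) in zip(point, box)) for box in boxes)

    def equivalent(hypothesis, target, n, d):
        """Equivalence query by exhaustive search (fine for a toy domain only):
        returns (True, None) if the hypothesis matches the target everywhere,
        otherwise (False, counterexample)."""
        for point in product(range(n), repeat=d):
            if member(point, hypothesis) != member(point, target):
                return False, point
        return True, None

    # Example: s = 2 boxes in d = 2 dimensions with n = 8 values per coordinate.
    target = [((1, 3), (2, 5)), ((5, 6), (0, 1))]
    print(member((2, 4), target))            # True: inside the first box
    print(equivalent([], target, n=8, d=2))  # (False, first point the empty hypothesis misses)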
This presentation (consisting of vugraphs) first provides the background motivation for Sandia`s effort for the development of improved crystalline silicon solar cells. It then discusses specific results and progress, and concludes with a brief discussion of options for next year.
This report describes the equipment required for initial assembly/maintenance and inspection/resetting of the Fifth Wheel system. It also gives a step-by-step procedure for initial assembly/maintenance inspection and procedures for resetting the system and Eager-Pac installation. The Fifth Wheel system is associated with a tractor-type vehicle used for materials handling.
High Consequence System Surety is an ongoing project at Sandia National Laboratories. This project pulls together a multidisciplinary team to integrate the elements of surety into an encompassing process. The surety process will be augmented and validated by applying it to an automated system handling a critical nuclear weapon component at the Mason & Hanger Pantex Plant. This paper presents the development to date of an integrated, high consequence surety process.
Heavy Ion Backscattering Spectrometry (HIBS) is a new ion beam analysis tool using heavy, low-energy ions in backscattering mode which can detect very low levels of surface contamination. By taking advantage of the greatly increased scattering cross-section for such ion beams and eliminating unwanted substrate scattering with a thin carbon foil, our research system has achieved a sensitivity ranging from {approximately}5{times}10{sup 10} atoms/cm{sup 2} for Fe to {approximately}1{times}10{sup 9} atoms/cm{sup 2} for Au on Si, without preconcentration. A stand-alone HIBS prototype now under construction in collaboration with SEMATECH is expected to achieve detection limits of {approximately}5{times}10{sup 9} atoms/cm{sup 2} for Fe and {approximately}1{times}10{sup 8} atoms/cm{sup 2} for Au on Si, again without preconcentration. Since HIBS is standardless and has no matrix effects, it will be useful not only as a standalone tool, but also for benchmarking standards for other tools. This conference is testimony to the importance of controlling contamination in microelectronics manufacturing. By the turn of the century, very large scale integrated circuit processing is expected to require contamination levels well below 1{times}10{sup 9} atoms/cm{sup 2} in both starting materials and introduced by processing. One of the most sensitive of existing general-purpose tools is Total reflection X-Ray Fluorescence (TXRF), which can detect {approximately}1{times}10{sup 10} atoms/cm{sup 2} levels of some elements such as Fe and Cu, but for many elements it is limited to 1{times}10{sup 12} atoms/cm{sup 2} or worse. TXRF can achieve a sensitivity of 10{sup 8} atoms/cm{sup 2} through the use of synchrotron radiation or via pre-concentration using Vapor Phase Decomposition. HIBS provides an ion beam analysis capability with the potential for providing similar sensitivity at medium Z and higher sensitivity at larger Z, all without pre-concentration or matrix effects.
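The sensitivity gain from heavy, low-energy ions follows directly from the Rutherford scattering cross section (standard textbook form, quoted here for context rather than taken from the report):

    \frac{d\sigma}{d\Omega} = \left( \frac{Z_1 Z_2 e^2}{4E} \right)^2 \frac{1}{\sin^4(\theta/2)}

Because the yield scales as Z_1^2 Z_2^2 / E^2, replacing light MeV-range ions with heavier ions at lower energy increases the backscattering signal from trace surface atoms by orders of magnitude, which is the effect HIBS exploits.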
We report the results of two separate long-term tests of batteries and charge controllers in small stand-alone PV systems. In these experiments, seven complete systems were tested for two years at each of two locations: Sandia National Laboratories in Albuquerque and the Florida Solar Energy Center in Cape Canaveral, Florida. Each system contained a PV array, a flooded lead-acid battery, a charge controller, and a resistive load. Performance of the systems was strongly influenced by the difference in solar irradiance at the two sites, with some batteries at Sandia exceeding the manufacturer`s predictions for cycle life. System performance was strongly correlated with regulation reconnect voltage (R{sup 2} correlation coefficient = 0.95) but only weakly correlated with regulation voltage. We also discuss details of system performance, battery lifetime, and battery water consumption.
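For context, an R{sup 2} value such as the 0.95 quoted above can be obtained from a simple linear least-squares fit; the sketch below uses made-up placeholder numbers (not the report's data) purely to show the computation.

    # Illustrative R^2 computation for a linear fit (placeholder data, not the
    # report's measurements).
    import numpy as np

    reconnect_voltage = np.array([13.2, 13.4, 13.6, 13.8, 14.0])  # hypothetical setpoints (V)
    performance_index = np.array([0.61, 0.70, 0.78, 0.88, 0.94])  # hypothetical performance metric

    slope, intercept = np.polyfit(reconnect_voltage, performance_index, 1)
    predicted = slope * reconnect_voltage + intercept
    ss_res = np.sum((performance_index - predicted) ** 2)         # residual sum of squares
    ss_tot = np.sum((performance_index - performance_index.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    print(f"R^2 = {r_squared:.3f}")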
A model is presented which describes the formation of surface damage ``gouging`` on the rails that guide rocket sleds. An unbalanced sled can randomly cause a very shallow-angle, oblique impact between the sled shoe and the rail. This damage phenomenon has also been observed in high-velocity guns where the projectile is analogous to the moving sled shoe and the gun barrel is analogous to the stationary rail. At sufficiently high velocity, the oblique impact will produce a thin hot layer of soft material on the contact surfaces. Under the action of a normal moving load, the soft layer lends itself to an anti-symmetric deformation and the formation of a ``hump`` in front of the moving load. A gouge is formed when this hump is overrun by the sled shoe. The phenomenon is simulated numerically using the CTH strong shock physics code, and the results are in good agreement with experimental observation.
The objective of this paper is to provide a general overview of hydrologic conditions at the Waste Isolation Pilot Plant (WIPP) by describing several key hydrologic studies that have been carried out as part of the site characterization program over the last 20 years. The paper is composed of three parts: background information about general objectives of the WIPP project; information about the geologic and hydrologic setting of the facility; and information about three aspects of the hydrologic system that are important to understanding the long-term performance of the WIPP facility. For additional detailed information, the reader is referred to the references cited in the text.
Cast blasting can be designed to utilize explosive energy effectively and economically in coal mining operations to remove overburden material. The more overburden removed by explosives, the less blasted material there is left to be transported with mechanical equipment such as draglines and trucks. In order to optimize the percentage of rock that is cast, a higher powder factor than normal is required, plus an initiation technique designed to produce a much greater degree of horizontal muck movement. This paper compares two blast models known as DMC (Distinct Motion Code) and SABREX (Scientific Approach to Breaking Rock with Explosives). DMC applies discrete spherical elements, interacting with the flow of explosive gases, and explicit time integration to track particle motion resulting from a blast. The input to this model includes multi-layer rock properties and both loading geometry and explosives equation-of-state parameters. It enables the user to have a wide range of control over drill pattern and explosive loading design parameters. SABREX assumes that the heave process is controlled by the explosive gases, which determine the velocity and time of initial movement of blocks within the burden, and then tracks the motion of the blocks until they come to rest. In order to reduce computing time, the in-flight collisions of blocks are not considered, and the motion of the first row is made to limit the motion of subsequent rows. Although modelling a blast is a complex task, DMC can perform a blast simulation in 0.5 hours on the SUN SPARCstation 10--41, while the new SABREX 3.5 produces results of a cast blast in ten seconds on a 486-PC computer. Predicted percentages of cast and face velocities from both computer codes compare well with the measured results from a full-scale cast blast.
The Mixed Waste Landfill Integrated Demonstration (MWLID) focuses on ``in-situ`` characterization, monitoring, remediation, and containment of landfills in arid environments that contain hazardous and mixed waste. The MWLID mission is to assess, demonstrate, and transfer technologies and systems that lead to faster, better, cheaper, and safer cleanup. Most important, the demonstrated technologies will be evaluated against the baseline of conventional technologies. Key goals of the MWLID are routine use of these technologies by Environmental Restoration Groups throughout the DOE complex and commercialization of these technologies to the private sector. The MWLID is demonstrating technologies at hazardous waste landfills located at Sandia National Laboratories and on Kirtland Air Force Base. These landfills have been selected because they are representative of many sites throughout the Southwest and in other arid climates.
This paper addresses the issues surrounding the use of NIAM to capture time dependencies in a domain of discourse. The NIAM concepts that support capturing time dependencies are in the event and process portions of the NIAM metamodel, which are the portions most poorly supported by a well-established methodology. This lack of methodological support is a potentially serious handicap in any attempt to apply NIAM to a domain of discourse in which time dependencies are a central issue. However, the capability that NIAM provides for validating and verifying the elementary facts in the domain may reduce the magnitude of the event/process-specification task to a level at which it could be effectively handled even without strong methodological support.
A statistical design-of-experiments approach has been employed to evaluate the particle removal efficacy of the SC-1/megasonic clean for sub-0.15 {mu}m inorganic particles. The effects of megasonic input power, solution chemistry, bath temperature, and immersion time have been investigated. Immersion time was not observed to be a statistically significant factor. The NH{sub 4}OH/H{sub 2}O{sub 2} ratio was significant, but varying the molar H{sub 2}O{sub 2} concentration had no effect on inorganic particle removal. Substantially diluted chemistries, used with high megasonic input power and moderate-to-elevated temperatures, were shown to be very effective for small particle removal. Bath composition data show that extended lifetimes can be obtained when high-purity chemicals are used at moderate (e.g., 45{degrees}C) temperature. Transition metal surface concentrations and surface roughness have been measured after dilute SC-1 processing and compared to metallic contamination following traditional SC-1.
I{sub DDQ} testing is mandatory to ensure that low power CMOS ICs meet their design intent. I{sub DDQ} testing is both a design verifier for low quiescent current and a sensitive production test for defects. Quiescent power reduction is particularly important for products such as cardiac pacemakers, laptop computers, and cellular telephones.
Transformers of two different designs, an unencapsulated pot core and an encapsulated toroidal core, have been modeled for circuit analysis with circuit simulation tools. We selected MicroSim`s PSPICE and Analogy`s SABER as the simulation tools and used experimental BH-loop and network analyzer measurements to generate the needed input data. The models are compared for accuracy and convergence using the circuit simulators. Results are presented which demonstrate the effects on circuit performance of magnetic core losses, eddy currents, and mechanical stress on the magnetic cores.
A new waveguide is designed using a cut-off slab waveguide for fabrication of single-mode rib optical waveguides with mesa isolation. These waveguides are easy to fabricate and offer crosstalk performance perhaps better than BH waveguides.
Developing the ability to recognize a landmark from a visual image of a robot`s current location is a fundamental problem in robotics. The authors consider the problem of PAC-learning the concept class of geometric patterns where the target geometric pattern is a configuration of k points in the real line. Each instance is a configuration of n points on the real line, where it is labeled according to whether or not it visually resembles the target pattern. They relate the concept class of geometric patterns to the landmark recognition problem and then present a polynomial-time algorithm that PAC-learns the class of one-dimensional geometric patterns when the negative examples are corrupted by a large amount of random misclassification noise.
A series of experiments was conducted to study the influence of electrode geometry on the prebreakdown (and breakdown) characteristics of high-resistivity ({rho} > 30 k{Omega}-cm), p-type Si wafers under quasi-uniform and non-uniform electric field configurations. In the quasi-uniform field configuration, the 1 mm thick Si wafer was mounted between the slots of two plane-parallel stainless steel disc electrodes (parallel), while the non-uniform field was obtained by mounting the wafer between two pillar-type electrodes with a hemispherical tip (pillar). The main objective of the investigation was to verify whether the uniform field configuration of the parallel system has a positive influence by reducing the field enhancement at the contact region, as opposed to the definite field enhancement present in the case of the non-uniform pillar system. It was also proposed to study the effect of the contact profile on the field distribution over the wafer surface and hence its influence on the high-field performance of the Si wafers.
The current baseline plan for RH TRU (remote-handled transuranic) waste disposal is to package the waste in special canisters for emplacement in the walls of the waste disposal rooms at the Waste Isolation Pilot Plant (WIPP). The RH waste must be emplaced before the disposal rooms are filled by contact-handled waste. Issues which must be resolved for this plan to be successful include: (1) construction of RH waste preparation and packaging facilities at large-quantity sites; (2) finding methods to get small-quantity site RH waste packaged and certified for disposal; (3) developing transportation systems and characterization facilities for RH TRU waste; (4) meeting lag storage needs; and (5) gaining public acceptance for the RH TRU waste program. Failure to resolve these issues in time to permit disposal according to the WIPP baseline plan will force either modification to the plan, or disposal or long-term storage of RH TRU waste at non-WIPP sites. The recommended strategy is to recognize, and take the needed actions to resolve, the open issues preventing disposal of RH TRU waste at WIPP on schedule. It is also recommended that the baseline plan be upgraded by adopting enhancements such as revised canister emplacement strategies and a more flexible waste transport system.
Sandia National Laboratories manages the US Department of Energy program for slimhole drilling. The principal objective of this program is to expand proven geothermal reserves through increased exploration, made possible by lower-cost slimhole drilling. For this to be a valid exploration method, however, it is necessary to demonstrate that slimholes yield enough data to evaluate a geothermal reservoir, and that is the focus of Sandia`s current research. Sandia negotiated an agreement with Far West Capital, which operates the Steamboat Hills geothermal field, to drill and test an exploratory slimhole on their lease. The principal objectives for the slimhole were development of slimhole testing methods, comparison of slimhole data with that from adjacent production-size wells, and definition of possible higher-temperature production zones lying deeper than the existing wells.
Emphasis on the human-to-aircraft interface has grown as the performance envelope of today`s aircraft has continued to expand. A major problem is the corresponding increase in the need for better-fitting protection equipment; unfortunately, it has become increasingly difficult for aircrew members to find equipment that will provide this level of fit. While protection equipment has historically had poor fit characteristics, the issue has grown tremendously with the recent increase in the numbers of minorities and women. Fundamental to this problem are the archaic methods for sizing individual equipment and the methods for establishing a sizing system. This paper documents recent investigations by the author into developing new methods to overcome these problems. Research centered on the development of a new statistically based method for describing form and on the application of fuzzy clustering using the new shape descriptors. A sizing system was developed from the application of this research, prototype masks were constructed, and the hardware was tested under flight conditions.
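As background for the fuzzy clustering mentioned above, a standard formulation (fuzzy c-means; the report's clustering variant may differ) alternates between updating the membership u_{ik} of each shape descriptor x_k in cluster i and updating the cluster centers v_i:

    u_{ik} = \left[ \sum_{j=1}^{c} \left( \frac{\lVert x_k - v_i \rVert}{\lVert x_k - v_j \rVert} \right)^{2/(m-1)} \right]^{-1},
    \qquad
    v_i = \frac{\sum_k u_{ik}^{\,m}\, x_k}{\sum_k u_{ik}^{\,m}}

with fuzzifier m > 1. In a sizing application, each cluster center can be read as a candidate size, and the membership values indicate how well a given head or face form is served by each size.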
Internal corrosion in semiconductor gas delivery systems may lead to increased particle counts in downstream fabrication tools and to catastrophic failure of the delivery system itself. The problem is particularly acute since, once the corrosion begins, it becomes a moisture reservoir that further damages the system. To keep gas systems as moisture-free as possible, semiconductor manufacturers employ drying filters, usually located just after the source of the process gas. Even so, the piping for corrosive gases may need to be rebuilt every few years. Careful monitoring of the moisture in the process gases can provide valuable information about the state of the gas handling system and its effect on the process integrity. Presently there are several technologies costing $50K or less that are capable of detecting trace water vapor as low as 50 ppb in N{sub 2}. However, no one type of instrument has achieved universal acceptance. In particular, all have limited compatibility with corrosive gases such as HCl and HBr. The goal of this project is to develop an in-line instrument based on infrared spectroscopy for this purpose. Earlier results leave no doubt that FTIR spectroscopy can be successfully used for trace water detection. However, important questions regarding optimal data analysis and instrument design are not yet fully answered. It is the goal of this research effort to answer these questions and to incorporate the findings into a prototype device suitable for commercialization.
A description of the flow field in an overflow wafer rinse process is presented. This information is being used in an initiative whose principal objective is to reduce the usage of water in wafer rinsing. The velocity field is calculated using finite-element numerical techniques. A large portion of the water does not contribute to wafer rinsing.
Photovoltaic (PV) modules and photovoltaic balance-of-systems equipment are designed, manufactured, and marketed internationally. Each country or group of countries has a set of electrical safety codes, either in place or evolving, that guide and regulate the design and installation of PV power systems. A basic difference in these codes is that some require hard (low-resistance) grounding (the United States and Canada) and others opt for an essentially ungrounded system (Europe and Japan). The significant design and safety issues that exist between the two grounding concepts affect the international PV industry`s ability to economically and effectively design and market safe, reliable, and durable PV systems in the global marketplace. This paper will analyze the technical and safety benefits, penalties, and costs of both grounded and ungrounded PV systems. The existing grounding practice in several typical countries will be addressed.
Conservatively, there are 100,000 localities in the world waiting for the benefits that electricity can provide, and many of these are in climates where sunshine is plentiful. With these locations in mind, a prototype 30 kW hybrid system has been assembled at Sandia to prove the reliability and economics of photovoltaic, diesel, and battery energy sources managed by an autonomous power converter. In the Trimode Power Converter the same power parts, four IGBTs with an isolation transformer and filter components, serve as rectifier and charger to charge the battery from the diesel; as a stand-alone inverter to convert PV and battery energy to AC; and as a parallel inverter with the diesel-generator to accommodate loads larger than the rating of the diesel. Whenever the diesel is supplying the load, an algorithm assures that the diesel is running at maximum efficiency by regulating the battery charger operating point. Given the profile of anticipated solar energy, the cost of transporting diesel fuel to a remote location, and a five-year projection of load demand, a method to size the PV array, battery, and diesel for least cost is developed.
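The least-cost sizing idea in the final sentence can be illustrated with a brute-force search over candidate component sizes. The energy balance and cost figures below are simplified placeholders (not the report's method or data); they only show the structure of such a sizing calculation.

    # Illustrative brute-force least-cost sizing of a PV/battery/diesel hybrid.
    # All models and numbers are simplified placeholders, not the report's method.
    from itertools import product

    DAILY_LOAD_KWH = 120.0        # assumed average daily load
    SUN_HOURS = 5.5               # assumed equivalent full-sun hours per day
    FUEL_COST_PER_KWH = 0.60      # assumed delivered cost of diesel energy ($/kWh)
    PV_COST_PER_KW = 7000.0       # assumed installed PV cost ($/kW)
    BATTERY_COST_PER_KWH = 250.0  # assumed installed battery cost ($/kWh)
    YEARS = 5

    def lifecycle_cost(pv_kw, batt_kwh):
        """Capital cost plus five years of diesel fuel for the load not met by PV.
        The battery is assumed (crudely) to let PV energy up to its daily
        throughput serve the load."""
        pv_daily = pv_kw * SUN_HOURS
        usable_pv = min(pv_daily, batt_kwh, DAILY_LOAD_KWH)
        diesel_daily = DAILY_LOAD_KWH - usable_pv
        capital = pv_kw * PV_COST_PER_KW + batt_kwh * BATTERY_COST_PER_KWH
        fuel = diesel_daily * FUEL_COST_PER_KWH * 365 * YEARS
        return capital + fuel

    candidates = product([10, 20, 30, 40], [50, 100, 150, 200])   # kW, kWh grid
    best = min(candidates, key=lambda c: lifecycle_cost(*c))
    print("least-cost PV (kW), battery (kWh):", best, "cost:", round(lifecycle_cost(*best)))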
Ceramics offer significant performance advantages over other engineering materials in a great number of applications such as turbocharger rotors and wear components. However, to realize their full market potential, ceramics must become more cost competitive. One way to achieve such competitiveness is to maximize manufacturing yield via process optimization. One simple optimization strategy involves maximizing yield by decreasing product variability (e.g., by operating in a regime that is inherently process tolerant). This paper extends this concept to the simultaneous optimization of many material characteristics, which is more typical of the requirements of a real ceramic manufacturing operation.
Extensive simulations of Tokamak disruptions have provided a picture of material erosion that is limited by the transfer of energy from the incident plasma to the armor solid surface through a dense vapor shield. Radiation spectra were recorded in the VUV and in the visible at the Efremov Laboratories on VIKA using graphite targets. The VUV data were recorded with a Sandia Labs transmission grating spectrograph covering 1--40 nm. Plasma parameters were evaluated with incident plasma energy densities varying from 1--10 kJ/cm{sup 2}. A second transmission grating spectrograph was taken to 2MK-200 at TRINITI to study the plasma-material interface in a magnetic cusp plasma. Target materials included POCO graphite, ATJ graphite, boron nitride, and plasma-sprayed tungsten. Detailed spectra were recorded with a spatial resolution of {approximately}1 mm. Time-resolved data with 40--200 ns resolution were also recorded. The data from both plasma gun facilities demonstrated that the hottest plasma region was located several millimeters above the armor tile surface.
In the past few years much effort has been devoted to finding faster and more convenient ways to exchange data between nodes of massively parallel distributed memory machines. One such approach, taken by Thorsten von Eicken et al., is called Active Messages. The idea is to hide message passing latency and continue to compute while data is being sent and delivered. The authors have implemented Active Messages under SUNMOS for the Intel Paragon and performed various experiments to determine their efficiency and utility. In this paper they concentrate on the subset of the Active Message layer that is used by the implementation of the Split-C library. They compare performance to explicit message passing under SUNMOS and explore new ways to support Split-C without Active Messages. They also compare the implementation to the original one on the Thinking Machines CM-5 and try to determine what the effects of low latency and low bandwidth versus high latency and high bandwidth are on user codes.
SUNMOS is an acronym for Sandia/UNM Operating System. It was originally developed for the nCUBE-2 MIMD supercomputer between January and December of 1991. Between April and August of 1993, SUNMOS was ported to the Intel Paragon. This document provides a quick overview of how to compile and run jobs using the SUNMOS environment on the Paragon. The primary goal of SUNMOS is to provide high-performance message passing and process support. As an example of its capabilities, SUNMOS Release 1.4 occupies approximately 240K of memory on a Paragon node and is able to send messages at bandwidths of 165 megabytes per second with latencies as low as 42 microseconds using Intel NX calls. By contrast, Release 1.2 of OSF/1 for the Paragon occupies approximately 7 megabytes of memory on a node, has a peak bandwidth of 65 megabytes per second, and latencies as low as 42 microseconds (the communication numbers are reported elsewhere in these proceedings).
Sandia National Laboratories became seriously involved in the science education reform movement in 1989 in response to a Department of Energy directive: ``We must expand our involvement in science education to inspire the youth of America to either enter or feel more comfortable in the fields of math, science and engineering. With our labs and facilities we are uniquely well positioned to provide major assistance in strengthening science and engineering motivation and education, making it `come alive` for the main body of students who too often fear these disciplines or who cannot relate to them``. (Adm. James D. Watkins, U.S. Sec`t. of Energy, 9/5/89)
Functioning, matrixed, field emission devices have been fabricated using a modification of standard integrated circuit fabrication techniques. The emitter-to-gate spacing is fixed by the thickness of a deposited oxide and not by photolithographic techniques. Modeling of the emitted electron trajectories using a two-dimensional, finite-difference Poisson-solver code indicates that much of the current runs perpendicular to the plane of the part. Functioning triode structures have been fabricated using this approach. Emission current to a collector electrically and physically separated from the matrixed array follows Fowler-Nordheim behavior.
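``Fowler-Nordheim behavior`` refers to the characteristic field-emission current-voltage law (standard lumped-constant form, given here for context):

    I = a\,V^2 \exp\!\left(-\frac{b}{V}\right)

so that a plot of \ln(I/V^2) against 1/V is a straight line with slope -b. Observing this linearity in the collector current is the usual evidence that the measured current is true field emission rather than leakage.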
Market-place acceptance of utility-connected photovoltaic (PV) power generation systems and their accelerated installation into residential and commercial applications are heavily dependent upon the ability of their power conditioning subsystems (PCS) to meet high reliability, low cost, and high performance goals. Many PCS development efforts have taken place over the last 15 years, and those efforts have resulted in substantial PCS hardware improvements. These improvements, however, have generally fallen short of meeting many reliability, cost and performance goals. Continuously evolving semiconductor technology developments, coupled with expanded market opportunities for power processing, offer a significant promise of improving PCS reliability, cost and performance, as they are integrated into future PCS designs. This paper revisits past and present development efforts in PCS design, identifies the evolutionary improvements and describes the new opportunities for PCS designs. The new opportunities are arising from the increased availability and capability of semiconductor switching components, smart power devices, and power integrated circuits (PICS).
The term ``TNT Equivalence`` is used throughout the explosives and related industries to compare the effects of the output of a given explosive to that of TNT. This is done for technical design reasons in scaling calculations, such as the prediction of blast waves, craters, and structural response, and is also used as a basis for government regulations controlling the shipping, handling, and storage of explosive materials, as well as for the siting and design of explosive facilities. TNT equivalence is determined experimentally by several different types of tests, the most common of which include: plate dent, ballistic mortar, trauzl, sand crush, and air blast. These tests do not all measure the same output property of the sample explosive. For example, some tests depend simply upon the CJ pressure, some depend upon the PV work in the CJ zone and in the Taylor wave behind the CJ plane, and some are functions of the total work, which includes that from secondary combustion in the air-mixing region of the fireball and are acutely affected by the shape of the pressure-time profile of the wave. Some of the tests incorporate systematic errors which are not readily apparent and which have a profound effect upon skewing the resultant data. Further, some of the tests produce different TNT equivalents for the same explosive as a function of the conditions at which the test is run. This paper describes the various tests used, discusses the results of each test, and makes detailed commentary on what each test is actually measuring, how the results may be interpreted, and if and how these results can be predicted by first-principles-based calculations. Extensive data bases are referred to throughout the paper and used in examples for each point in the commentaries.
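For orientation, the two standard relations behind most TNT-equivalence and blast-scaling work (textbook definitions, not specific to this paper) are the mass-based equivalence factor and the Hopkinson cube-root scaled distance:

    E_{\mathrm{TNT}} = \frac{W_{\mathrm{TNT}}}{W_{\mathrm{explosive}}}\;, \qquad Z = \frac{R}{W^{1/3}}

where W_{\mathrm{TNT}} is the TNT mass that reproduces the same measured effect (e.g., peak overpressure at a given range) as mass W_{\mathrm{explosive}} of the test explosive, R is standoff distance, and W is the equivalent TNT charge weight. Because each test responds to a different output property, the equivalence factor obtained depends on which effect is matched.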
For PZT films deposited on Pt-coated substrates, remanent polarization is a monotonic function of the thermal expansion of the substrate, a result of 90{degree} domain formation occurring as the film is cooled through the transformation temperature. PZT film stress in the vicinity of the Curie point controls 90{degree} domain assemblages within the film. PZT films under tension at the transformation temperature are a-domain oriented, whereas films under compression at the transformation temperature are c-domain oriented. From XRD, electrical switching of 90{degree} domains is severely limited. Thus, formation of these 90{degree} domains in the vicinity of the Curie point is dominant in determining PZT film dielectric properties. Chemically prepared PZT thin films with random crystallite orientation, but preferential a-domain orientation, have low remanent polarization (24 {mu}C/cm{sup 2}) and high dielectric constant (1000). Conversely, PZT films of similar crystalline orientation, but of preferential c-domain orientation, have large remanent polarizations (37 {mu}C/cm{sup 2}) and low dielectric constants (700). This is consistent with the single-crystal properties of tetragonally distorted, simple perovskite ferroelectrics. Further, for our films, the grain size/90{degree} domain relationships appear similar to those in the bulk. The effects of grain size on 90{degree} domain formation and electrical properties are discussed.
MELCOR is a fully integrated, engineering-level computer code, being developed at Sandia National Laboratories for the USNRC, that models the entire spectrum of severe accident phenomena in a unified framework for both BWRs and PWRs. As part of an ongoing assessment program, the MELCOR computer code has been used to analyze a series of blowdown tests performed in the early 1980s at General Electric. The GE large vessel blowdown and level swell experiments are a set of primary system thermal/hydraulic separate-effects tests studying the level swell phenomenon for BWR transients and LOCAs; analysis of these GE tests is intended to validate the new implicit bubble separation algorithm added since the release of MELCOR 1.8.2. Basecase MELCOR results are compared to test data, and a number of sensitivity studies on input modelling parameters and options have been done. MELCOR results for these experiments are also compared to MAAP and TRAC-B qualification analyses for the same tests. Time-step and machine-dependency calculations were done to identify whether any numeric effects exist in our GE large vessel blowdown and level swell assessment analyses.
The prompt neutron generation time for the Annular Core Research Reactor was experimentally determined using a prompt-period technique. The resultant value of 25.5 {mu}s agreed well with the analytically determined value of 24 {mu}s. The three different methods of reactivity insertion determination yielded {+-}5% agreement in the experimental values of the prompt neutron generation time. Discrepancies observed in reactivity insertion values determined by the three methods used (transient rod position, relative delayed critical control rod positions, and relative transient rod and control rod positions) were investigated to a limited extent. Rod-shadowing and low power fuel/coolant heat-up were addressed as possible causes of the discrepancies.
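For readers unfamiliar with the prompt-period technique, the sketch below applies the standard point-kinetics relation for a super-prompt-critical insertion, {Lambda} {approximately} ({rho} {minus} {beta})T; the reactivity, delayed-neutron fraction, and period values are illustrative and are not the ACRR test conditions.

```python
# Hedged sketch (standard point-kinetics relation, not the report's exact method):
# for a super-prompt-critical insertion (rho > beta), the asymptotic prompt
# period T_p satisfies 1/T_p ~ (rho - beta)/Lambda, so Lambda ~ (rho - beta)*T_p.

def prompt_generation_time(rho_dollars, beta_eff, prompt_period_s):
    """Prompt neutron generation time Lambda from a prompt-period measurement."""
    rho = rho_dollars * beta_eff               # reactivity in absolute units
    return (rho - beta_eff) * prompt_period_s  # seconds

# illustrative values only: a $2.00 insertion, beta_eff = 0.0073, 4 ms period
print(prompt_generation_time(2.00, 0.0073, 4.0e-3) * 1e6, "microseconds")
```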
The purpose of this hazards assessment is to document the impact of releases of hazardous materials at the Advanced Manufacturing Processes Laboratory (AMPL) that are significant enough to warrant consideration in Sandia National Laboratories` operational emergency management program. This hazards assessment is prepared in accordance with the Department of Energy Order 5500.3A requirement that facility-specific hazards assessments be prepared, maintained, and used for emergency planning purposes. It provides an analysis of the potential airborne release of chemicals associated with the operations and processes at the AMPL. This research and development laboratory develops advanced manufacturing technologies, practices, and unique equipment and provides the fabrication of prototype hardware to meet the needs of Sandia National Laboratories, Albuquerque, New Mexico (SNL/NM). The focus of the hazards assessment is the airborne release of materials because this requires the most rapid, coordinated emergency response on the part of the AMPL, SNL/NM, collocated facilities, and surrounding jurisdictions to protect workers, the public, and the environment.
A review of US and Japanese experiences with using microelectronics consortia as a tool for strengthening their respective industries reveals major differences. Japan has established catch-up consortia with focused goals. These consortia have a finite life targeted from the beginning, and emphasis is on work that supports or leads to product and process-improvement-driven commercialization. Japan`s government has played a key role in facilitating the development of consortia and has used consortia to promote domestic competition. US consortia, on the other hand, have often emphasized long-range research with considerably less focus than those in Japan. The US consortia have searched for and often made revolutionary technology advancements; however, technology transfer to their members has been difficult. Only SEMATECH has assisted its members with continuous improvements, compressing product cycles, establishing relationships, and strengthening core competencies. The US government has been neither a catalyst for nor a provider of leadership in consortia creation and operation. We propose that, in order to regain world leadership in areas where US companies lag foreign competition, the US should create industry-wide, horizontal-vertical, catch-up consortia or continue existing consortia in the six areas where the US lags behind Japan: optoelectronics, displays, memories, materials, packaging, and manufacturing equipment. In addition, we recommend that consortia be established for special government microelectronics and for microelectronics research integration and application. We advocate that these consortia be managed by an industry-led Microelectronics Alliance, whose establishment would be coordinated by the Department of Commerce. We further recommend that the Semiconductor Research Corporation, the National Science Foundation Engineering Research Centers, and relevant elements of other federal programs be integrated into this consortia complex.
Inside this issue: (1) Robotic cleaning safer, faster, more reliable: robots taught how to clean in seconds instead of days. (2) Microporous insulating films can boost microcircuit performance: films display improved dielectric constant and mechanical properties. (3) Life-cycle analysis, the big picture: cradle-to-grave environmental analysis tailored to the needs of defense manufacturing. (4) New simulation tool predicts properties of forged metal: internal state variable model improves design, speeds development time.
This report summarizes the results from MELCOR calculations of severe accident sequences in the ABWR and presents comparisons with MAAP calculations for the same sequences. MELCOR was run for two low-pressure and three high-pressure sequences to identify the materials which enter containment and are available for release to the environment (source terms), to study the potential effects of core-concrete interaction, and to obtain event timings during each sequence; the source terms include fission products and other materials such as those generated by core-concrete interactions. Sensitivity studies were done on the impact of assuming limestone rather than basaltic concrete and on the effect of quenching core debris in the cavity compared to having hot, unquenched debris present.
An experimental technique is described to launch an intact ``chunk,`` i.e., a 0.3 cm thick by 0.6 cm diameter cylindrical titanium alloy (Ti-6Al-4V) flyer, to 10.2 km/s. The ability to launch fragments having such an aspect ratio is important for hypervelocity impact phenomenology studies. The experimental techniques used to accomplish this launch were similar but not identical to techniques developed for the Sandia HyperVelocity Launcher (HVL). A confined barrel impact is crucial in preventing two-dimensional effects from dominating the loading response of the projectile chunk. The length-to-diameter ratio of the metallic chunk launched to 10.2 km/s is 0.5, an order of magnitude larger than those accomplished using the conventional hypervelocity launcher. The multi-dimensional, finite-difference (finite-volume), hydrodynamic code CTH was used to evaluate and assess the acceleration characteristics, i.e., the in-bore ballistics, of the chunky projectile launch. A critical analysis of the CTH calculational results led to the final design and the experimental conditions used in this study. However, the predicted velocity of the projectile chunk based on CTH calculations was {approximately}6% lower than the measured velocity of {approximately}10.2 km/s.
Integrated Services Digital Network (ISDN) technology is an integral component of Sandia National Laboratories` telecommunications infrastructure. ISDN is a fully digital telephone service that allows simultaneous voice and data communication from the same telephone instrument. Almost all ISDN phones in use at Sandia/New Mexico and most ISDN phones at Sandia/California have a built-in module for data communication. This user guide describes the use and operation of the ISDN data module and services as they are installed at Sandia.
LUGSAN (LUG and Sway brace ANalysis) is an analysis and database computer program designed to calculate store lug and sway brace loads from aircraft captive carriage. LUGSAN combines the rigid-body dynamics code SWAY85 and the maneuver calculation code MILGEN with an INGRES database to function as both an analysis and an archival system. This report describes the operation of the LUGSAN application program, including function descriptions, layout examples, and sample sessions. It is intended to be a user`s manual for version 1.1 of LUGSAN operating on the VAX/VMS system; it is not intended to be a programmer`s or developer`s manual.
SKYDOSE evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated by three simple geometries: (1) a source in a silo; (2) a source behind an infinitely long, vertical, black wall; and (3) a source in a rectangular building. In all three geometries, an optional overhead shield may be specified. The source energy must be between 0.02 and 100 MeV (10 MeV for sources with an overhead shield). This is a user`s manual; other references give more detail on the integral line-beam method used by SKYDOSE.
McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygonal cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte Carlo algorithm to evaluate transport through the source shields and the integral line-beam method to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.
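As a hedged illustration of the shield-transport step, the sketch below tallies uncollided transmission of photons through a two-material overhead slab by sampling exponential free paths; the attenuation coefficients and thicknesses are arbitrary, and this is not the McSKY algorithm or data.

```python
# Minimal Monte Carlo sketch of photon transmission through a two-material
# overhead shield (illustrative only; not the McSKY algorithm or data).
# Uncollided transmission through a slab: sample an exponential free path and
# tally photons whose path exceeds the total optical thickness.
import math, random

def uncollided_fraction(mu1_cm, t1_cm, mu2_cm, t2_cm, n_hist=100_000, seed=1):
    """Fraction of normally incident photons crossing both slabs uncollided.
    mu*_cm are total attenuation coefficients (1/cm); t*_cm are thicknesses."""
    random.seed(seed)
    optical_thickness = mu1_cm * t1_cm + mu2_cm * t2_cm
    survived = 0
    for _ in range(n_hist):
        # number of mean free paths traveled before the first interaction
        path = -math.log(1.0 - random.random())
        if path > optical_thickness:
            survived += 1
    return survived / n_hist

# compare the Monte Carlo tally with the analytic result exp(-sum(mu*t))
mc = uncollided_fraction(0.20, 5.0, 0.45, 2.0)
print(mc, math.exp(-(0.20 * 5.0 + 0.45 * 2.0)))
```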
Calculations of water flow through Yucca Mountain show significant dryout and water perching in the vicinity of the proposed nuclear waste repository. These calculations also show that the extent of the dryout and perched-water zones is a strong function of the material characteristics used to represent the fracture zones. The results show that for the 100 {mu}m fracture case, appreciable dryout and perched regions exist; when 1 {mu}m fractures are used, no dryout or perched regions are calculated.
An integral part of the licensing procedure for the potential nuclear waste repository at Yucca Mountain, Nevada involves accurate prediction of the in situ rheology for design and construction of the facility and emplacement of the canisters containing radioactive waste. The data required as input to successful thermal and mechanical models of the behavior of the repository and surrounding lithologies include bulk density, grain density, porosity, compressional and shear wave velocities, elastic moduli, and compressional and tensile strengths. In this study a suite of experiments was performed on cores recovered from the USW-NRG-6 borehole drilled to support the Exploratory Studies Facility (ESF) at Yucca Mountain. USW-NRG-6 was drilled to a depth of 1100 feet through four thermal/mechanical units of Paintbrush tuff. A large data set has been collected on specimens recovered from borehole USW-NRG-6. Analysis of the results of these experiments showed that there is a correlation between fracture strength, Young`s modulus, compressional wave velocity, and porosity. Additional scaling laws relating static Young`s modulus to compressional wave velocity, and fracture strength to compressional wave velocity, are promising. Since there are no other distinct differences in material properties, the scatter present at each fixed porosity suggests that the differences in the observed property can be related to the pore structure of the specimen. Image analysis of CT scans performed on each test specimen is currently underway to seek additional empirical relations to aid in refining the correlations between static and dynamic properties of tuff.
One of the critical issues facing the Yucca Mountain site characterization and performance assessment programs is the manner in which property scaling is addressed. Property scaling becomes an issue whenever heterogeneous media properties are measured at one scale but applied at another. A research program has been established to challenge current understanding of property scaling with the aim of developing and testing models that describe scaling behavior in a quantitative manner. Scaling of constitutive rock properties is investigated through physical experimentation involving the collection of suites of gas-permeability data measured over a range of discrete scales. The approach is to systematically isolate those factors believed to influence property scaling and investigate their relative contributions to overall scaling behavior. Two blocks of tuff, each exhibiting differing heterogeneity structure, have recently been examined. Results of the investigation show very different scaling behavior, as exhibited by changes in the distribution functions and variograms, for the two tuff samples. Even for the relatively narrow range of measurement scales employed, significant changes in the distribution functions, variograms, and summary statistics occurred. Because such data descriptors will likely play an important role in calculating effective media properties, these results demonstrate the need to understand and accurately model scaling behavior.
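A minimal sketch of the kind of summary statistic mentioned above, an experimental semivariogram computed from spatially distributed measurements, is given below; the binning and the synthetic data are illustrative and do not reproduce the report`s analysis.

```python
# Hedged sketch: an empirical (experimental) semivariogram of the kind used to
# summarize spatial variability of gas-permeability data; the binning and the
# synthetic field are illustrative, not the report's data.
import numpy as np

def empirical_variogram(coords, values, lag_edges):
    """gamma(h) = 1/(2 N(h)) * sum (z_i - z_j)^2 over pairs with separation in each bin."""
    coords = np.asarray(coords, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # unique pairs only
    d, sq = d[iu], sq[iu]
    gamma = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# toy example: synthetic log-permeability values on a small grid
rng = np.random.default_rng(0)
xy = np.array([(i, j) for i in range(10) for j in range(10)], float)
z = rng.normal(size=len(xy))
print(empirical_variogram(xy, z, lag_edges=np.arange(0.0, 6.0, 1.0)))
```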
Improvements have been made to the fracture-flow model being used in the total-system performance assessment of a potential high-level radioactive waste repository at Yucca Mountain, Nevada. The ``weeps model`` now includes (1) weeps of varied sizes, (2) flow-pattern fluctuations caused by climate change, and (3) flow-pattern perturbations caused by repository heat generation. Comparison with the original weeps model indicates that allowing weeps of varied sizes substantially reduces the number of weeps and the number of containers contacted by weeps. However, flow-pattern perturbations caused by either climate change or repository heat generation greatly increase the number of containers contacted by weeps. In preliminary total-system calculations, using a phenomenological container-failure and radionuclide-release model, the weeps model predicts that radionuclide releases from a high-level radioactive waste repository at Yucca Mountain will be below the EPA standard specified in 40 CFR 191, but that the maximum radiation dose to an individual could be significant. Specific data from the site are required to determine the validity of the weep-flow mechanism and to better determine the parameters to which the dose calculation is sensitive.
Indicator geostatistical techniques have been used to produce a number of fully three-dimensional stochastic simulations of large-scale lithologic categories at the Yucca Mountain site. Each realization reproduces the available drill hole data used to condition the simulation. Information is propagated away from each point of observation in accordance with a mathematical model of spatial continuity inferred through soft data taken from published geologic cross sections. Variations among the simulated models collectively represent uncertainty in the lithology at unsampled locations. These stochastic models succeed in capturing many major features of the welded-nonwelded lithologic framework of Yucca Mountain. However, contacts between welded and nonwelded rock types for individual simulations appear more complex than suggested by field observation, and a number of probable numerical artifacts exist in these models. Many of the apparent discrepancies between the simulated models and the general geology of Yucca Mountain represent characterization uncertainty, and can be traced to the sparse site data used to condition the simulations. Several vertical stratigraphic columns have been extracted from the three-dimensional stochastic models for use in simplified total-system performance assessment exercises. Simple, manual adjustments are required to eliminate the more obvious simulation artifacts and to impose a secondary set of deterministic geologic features on the overall stratigraphic framework provided by the indicator models.
Laboratory experiments were performed to measure the effect of frequency, water saturation, and strain amplitude on Young`s modulus and seismic wave attenuation for rock cores recovered on or near the site of a potential nuclear waste repository at Yucca Mountain, Nevada. The measurements were made using four techniques: cyclic loading, waveform inversion, resonant bar, and ultrasonic velocity, and ranged in frequency between 10{sup {minus}2} and 10{sup 6} Hz. For the dry specimens, Young`s modulus and attenuation were independent of frequency; that is, all four techniques yielded nearly the same values for modulus and attenuation. For saturated specimens, a frequency dependence for both Young`s modulus and attenuation was observed. In general, saturation reduced Young`s modulus and increased seismic wave attenuation. The effect of strain amplitude on Young`s modulus and attenuation was measured using the cyclic loading technique at a frequency of 10{sup {minus}1} Hz. The effect of strain amplitude was small in all cases. For some rocks, such as the potential repository horizon of the Topopah Spring Member tuff (TSw2), the effect of strain amplitude on both attenuation and modulus was minimal.
An intensive laboratory investigation is being performed to determine the mechanical properties of tuffs for the Yucca Mountain Site Characterization Project (YMP). Most recently, experiments are being performed on tuff samples from a series of drill holes along the proposed alignment of the Exploratory Studies Facility (ESF) north ramp. Unconfined compression and indirect tension experiments are being performed and the results are being analyzed with the help of bulk property information. The results on samples from five of the drill holes are presented here. In general, the properties vary widely, but are highly dependent on the sample porosity.
I use a piece-wise linear approximation to the directed flux expressions for a flowing Maxwellian fluid to write down boundary conditions for the fluid description of a multicomponent plasma. These boundary conditions are sufficiently robust to treat particle reflection, surface reactions leading to secondary production, diffusion, and field-induced drift of charged species.
The optimization of UV laser remote sensing systems and the interpretation of the return signals from these systems require detailed absorption and fluorescence spectra for the species of interest. Multispectral fluorescence techniques additionally require a database of dispersed UV fluorescence excitation spectra. Excitation wavelengths between 250 and 400 nm and fluorescence wavelengths in the 200 to 700 nm range are of interest.
Current SNL CALIOPE modeling efforts have produced an initial model that addresses DIAL issues of wavelength, hardware design parameters, range evaluation, etc. Although this model is producing valuable results and will be used to support the planning and evaluations necessary for the first ground field experiment, it is expected to have limitations with the complex science issues that affect the CALIOPE program. In particular, the multi-dimensional effects of atmospheric turbulence, plume dynamics, speckle, etc., may be significant issues and must be evaluated in detail as the program moves to the detection of liquids and solids, longer ranges, and elevated platform environments. The goal of the integrated UV fluorescence/DIAL modeling effort is to build upon the knowledge obtained in developing and exercising the initial model to adequately support the future activities of this program. This paper will address the development of the integrated UV model, issues and limiting assumptions that may be needed in order to address the complex phenomena involved, limits of expected performance, and the potential use of this model.
Infrared emission (IRE) spectra were obtained from two borophosphosilicate glass (BPSG) thin-film sample sets. The first set consisted of 21 films deposited on undoped silicon wafers, and the second set consisted of 9 films deposited on patterned and doped (product) wafers. The IRE data were empirically modeled using partial least-squares calibration to simultaneously quantify four BPSG thin-film properties. The standard errors of the determinations when modeling the 21 monitor wafers were
Throughout the Department of Energy (DOE) complex, sites protect themselves with intrusion detection systems. Some of these systems have sensors in remote areas. These sensors frequently alarm -- not because they have detected a terrorist skulking around the area, but because they have detected a horse, or a dog, or a bush moving in the breeze. Even though the local security force is 99% sure there is no real threat, they must assess each of these nuisance or false alarms. Generally, the procedure consists of dispatching an inspector to drive to the area and make an assessment. This is expensive in terms of manpower and the assessment is not timely. Often, by the time the inspector arrives, the cause of the alarm has vanished. A television camera placed to view the area protected by the sensor could be used to help in this assessment, but this requires the installation of high-quality cable, optical fiber, or a microwave link. Further, to be of use at the present time, the site must have had the foresight to have installed these facilities in the past and have them ready for use now. What is needed is a device to place between the television camera and a modem connecting to a low-bandwidth channel such as radio or a telephone line. This paper discusses the development of such a device: an Image Transmission System, or ITS.
We have developed a capability to make real time concentration measurements of individual chemicals in a complex mixture using a multispectral laser remote sensing system. Our chemical recognition and analysis software consists of three parts: (1) a rigorous multivariate analysis package for quantitative concentration and uncertainty estimates, (2) a genetic optimizer which customizes and tailors the multivariate algorithm for a particular application, and (3) an intelligent neural net chemical filter which pre-selects from the chemical database to find the appropriate candidate chemicals for quantitative analyses by the multivariate algorithms, as well as providing a quick-look concentration estimate and consistency check. Detailed simulations using both laboratory fluorescence data and computer synthesized spectra indicate that our software can make accurate concentration estimates from complex multicomponent mixtures, even when the mixture is noisy and contaminated with unknowns.
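As a hedged illustration of the multivariate concentration-estimation step, the sketch below fits a measured mixture spectrum to a library of reference spectra by ordinary least squares and propagates a simple uncertainty; the genetic optimizer and neural-net prefilter described above are not reproduced, and the synthetic spectra are placeholders.

```python
# Hedged sketch of the multivariate step: estimating component concentrations
# from a measured mixture spectrum by ordinary least squares against a library
# of reference spectra (not the report's actual algorithms or data).
import numpy as np

def estimate_concentrations(library, mixture):
    """library: (n_wavelengths, n_species) reference spectra at unit concentration.
    mixture: (n_wavelengths,) measured spectrum. Returns concentrations and
    1-sigma uncertainties derived from the residual variance."""
    c, residuals, rank, _ = np.linalg.lstsq(library, mixture, rcond=None)
    dof = max(library.shape[0] - rank, 1)
    sigma2 = float(residuals[0]) / dof if residuals.size else 0.0
    cov = sigma2 * np.linalg.inv(library.T @ library)
    return c, np.sqrt(np.diag(cov))

# toy example: 3 species, 200 wavelength channels, noisy synthetic mixture
rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(200, 3)))
true_c = np.array([0.5, 1.2, 0.0])
y = A @ true_c + 0.05 * rng.normal(size=200)
print(estimate_concentrations(A, y))
```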
Prior to May 1992, field demonstrations of characterization technologies were performed at an uncontaminated site near the Chemical Waste Landfill. In mid-1992 through summer 1993, both non-intrusive and intrusive characterization techniques were demonstrated at the Chemical Waste Landfill. Subsurface and dry barrier demonstrations were started in summer 1993 and will continue into 1995. Future plans include demonstrations of innovative drilling, characterization and long-term monitoring, and remediation techniques. Demonstrations were also scheduled in summer 1993 at the Kirtland Air Force HSWA site and will continue in 1994. The first phase of the Thermal Enhanced Vapor Extraction System (TEVES) project occurred in April 1992 when two holes were drilled and vapor extraction wells were installed at the Chemical Waste Landfill. Obtaining the engineering design and environmental permits necessary to implement this field demonstration will take until early 1994. Field demonstration of the vapor extraction system will occur in 1994.
This paper will discuss the UV Laser Remote Sensing Data Acquisition and Control Subsystem being developed by Sandia National Laboratories in support of the CALIOPE program. Details include the control of active system elements including the laser and beam steering mirror, passive system elements including detectors and signal processing instrumentation, and the acquisition and transfer of data for archival and evaluation by the multivariate analysis algorithm. Using the LabVIEW design philosophy developed for laboratory testing as a baseline, this evolving subsystem will initially support the UV fluorescence calibration and background data collections planned at SNL and the October 1994 Ground Field Experiment at the Nevada Test Site. The subsystem will then be upgraded to support an integrated DIAL/fluorescence capability for the April 1995 Ground Field Experiment and the October 1995 Elevated Platform Field Experiment.
When the ACCORD Process introduced Pro/ENGINEER to Sandians several years ago, a new process for design/definition was implemented. Prior to ACCORD, engineers and draftsmen worked in the 2-D mode with a program called ANVIL{reg_sign}, which had limited capabilities. Although the transition from 2-D modeling to 3-D modeling met with some resistance, most engineers have embraced this new concept with enthusiasm. They are now able to work in the 3-D mode and at increased levels of productivity, with time savings never achieved before. One area for which Pro/ENGINEER is noted, and on which this report concentrates, is its powerful interface module with a wide selection of transfer file configurations. This allows the engineer to create parts or assemblies and transfer them to many different third-party software packages whose vendors can provide the capability for stress analysis, rapid prototyping, virtual reality environments, or many other advanced manufacturing modes of communication. The ACCORD Program has at its core the Pro/ENGINEER program from Parametric Technology Inc. Included in the ACCORD program are several supporting programs from other vendors that make this cooperation between software packages a reality. It is possible to create parts in Pro/ENGINEER, transfer those parts to another package that can analyze the parts for deficiencies, optimize those parts, and allow changes to be made. Also included in this report are other packages closely tied to Pro/ENGINEER but not necessarily supported under the ACCORD program. Some of these packages allow the user to create impressive video productions or to move through a virtual reality scenario. All of these new software packages offer a new perspective on performance. This report shows how some of these interfaces work and how productivity can be improved by utilizing the ACCORD program as it is implemented here at Sandia.
The implosion dynamics of compact wire arrays on Saturn are explored as a function of wire mass m, wire length {ell}, wire radius R, and radial power-flow feed geometry using the ZORK code. Electron losses and the likelihood of arcing in the radial feed adjacent to the wire load are analyzed using the TWOQUICK and CYLTRAN codes. The physical characteristics of the implosion and subsequent thermal radiation production are estimated using the LASNEX code in one dimension. These analyses show that compact tungsten wire arrays with parameters suggested by D. Mosher and with a 21-nH vacuum feed geometry satisfy the empirical scaling criterion I/(M/{ell}) {approximately} 2 MA/(mg/cm) of Mosher for optimizing non-thermal radiation from z pinches, generate low electron losses in the radial feeds, and generate electric fields at the insulator stack below the Charlie Martin flashover limit, thereby permitting full power to be delivered to the load. Under such conditions, peak currents of {approximately}5 MA can be delivered to wire loads {approximately}20 ns before the driving voltage reverses at the insulator stack, potentially allowing the m = 0 instability to develop with the subsequent emission of non-thermal radiation as predicted by the Mosher model.
A thickness-shear mode (TSM) resonator typically consists of a thin disk of AT-cut quartz with circular electrodes patterned on both sides. When connected to appropriate circuitry, the quartz crystal resonates at a frequency determined by the crystal thickness. Originally used to measure metal deposition in vacuum, the device has recently been used for measurements in liquid. Since the mass sensitivity of the resonator is nearly the same in liquids as in air or vacuum, the device can be used as a sensitive solution-phase microbalance. In addition, the sensitivity of the TSM resonator to contacting fluid properties enables it to function as a monitor for these properties. Under liquid loading, the change in frequency of the resonator/oscillator combination differs from the change in resonant frequency of the device. Either of these changes can be determined from an appropriate application of an equivalent-circuit model that describes the electrical characteristics of the liquid-loaded resonator.
Four general topics are covered with respect to the natural space radiation environment: (1) particles trapped by the earth`s magnetic field, (2) cosmic rays, (3) the radiation environment inside a spacecraft, and (4) laboratory radiation sources. The interaction of radiation with materials is described in terms of ionization effects and displacement effects. Total-dose effects on MOS devices are discussed with respect to measurement techniques, electron-hole yield, hole transport, oxide traps, interface traps, border traps, device properties, case studies, and special concerns for commercial devices. Other device types considered for total-dose effects are SOI devices and nitrided oxide devices. Lastly, single-event phenomena are discussed with respect to charge collection mechanisms and hard errors.
Supercritical carbon dioxide is being explored as a waste minimization technique for separating oils, greases, and solvents from solid waste. The contaminants are dissolved into the supercritical fluid and precipitated out upon depressurization. The carbon dioxide solvent can then be recycled for continued use. The temperature, pressure, flow rate, and potential co-solvents must be defined to establish the optimum conditions for hazardous contaminant removal. Excellent extractive capability for common manufacturing oils, greases, and solvents has been observed in both supercritical and liquid carbon dioxide. Solubility measurements are being used to better understand the extraction process and to determine whether the minimum solubility required by federal regulations is met.
A variety of new molecular modeling tools are now available for studying molecular structures and molecular interactions, for building molecular structures from simple components using analytical data, and for studying the relationship of molecular structure to the energy of bonding and non-bonding interactions. These are proving quite valuable in characterizing molecular structures and intermolecular interactions and in designing new molecules. This paper describes the application of molecular modeling techniques to a variety of materials problems, including the probable molecular structures of coals, lignins, and hybrid inorganic-organic systems (silsesquioxanes), the intercalation of small gas molecules in fullerene crystals, the diffusion of gas molecules through membranes, and the design, structure, and function of biomimetic and nanocluster catalysts.
A unique end-to-end LIDAR sensor model has been developed supporting the concept development stage of the CALIOPE UV DIAL and UV laser-induced-fluorescence (LIF) efforts. The model focuses on preserving the temporal and spectral nature of signals as they pass through the atmosphere, are collected by the optics, detected by the sensor, and processed by the sensor electronics and algorithms. This is done by developing accurate component sub-models with realistic inputs and outputs, as well as internal noise sources and operating parameters. These sub-models are then configured using data-flow diagrams to operate together to reflect the performance of the entire DIAL system. This modeling philosophy allows the developer to have a realistic indication of the nature of signals throughout the system and to design components and processing in a realistic environment. Current component models include atmospheric absorption and scattering losses, plume absorption and scattering losses, background, telescope and optical filter models, PMT (photomultiplier tube) with realistic noise sources, amplifier operation and noise, A/D converter operation, noise and distortion, pulse averaging, and DIAL computation. Preliminary results of the model will be presented indicating the expected model operation depicting the October field test at the NTS spill test facility. Indications will be given concerning near-term upgrades to the model.
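A hedged sketch of the DIAL computation referred to above is given below; it applies the standard two-wavelength retrieval for the mean number density in a range cell, with illustrative cross-section and range values that are not taken from the model.

```python
# Hedged sketch of the standard two-wavelength DIAL retrieval that a "DIAL
# computation" sub-model typically performs; symbols and numbers are illustrative.
import math

def dial_concentration(p_on_r1, p_on_r2, p_off_r1, p_off_r2,
                       delta_sigma_cm2, delta_r_cm):
    """Mean number density (molecules/cm^3) in the range cell [R1, R2] from
    on/off-line returns P(lambda, R) and the differential absorption cross section."""
    ratio = (p_off_r2 * p_on_r1) / (p_on_r2 * p_off_r1)
    return math.log(ratio) / (2.0 * delta_sigma_cm2 * delta_r_cm)

# e.g. a 10% extra round-trip attenuation of the on-line return over a 50 m cell
n = dial_concentration(1.00, 0.90, 1.00, 1.00,
                       delta_sigma_cm2=1.0e-18, delta_r_cm=5000.0)
print(f"{n:.3g} molecules/cm^3")
```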
The Department of Energy`s Utility-Scale Joint-Venture (USJV) Program was developed to help industry commercialize dish/engine electric systems. Sandia National Laboratories developed this program and has placed two contracts, one with Science Applications International Corporation`s Energy Projects Division and one with the Cummins Power Generation Company. In this paper we present the designs for the two dish/Stirling systems that are being developed through the USJV Program.
The Arc/Info GENERALIZE command implements the Douglas-Peucker algorithm, a well-regarded approach that preserves line ``character`` while reducing the number of points according to a tolerance parameter supplied by the user. The authors have developed an Arc Macro Language (AML) interface called MAGENCO that allows the user to browse workspaces, select a coverage, extract a sample from this coverage, then apply various tolerances to the sample. The results are shown in multiple display windows that are arranged around the original sample for quick visual comparison. The user may then return to the whole coverage and apply the chosen tolerance. They analyze the ergonomics of line simplification, explain the design (which includes an animated demonstration of the Douglas-Peucker algorithm), and discuss key points of the MAGENCO implementation.
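For reference, a minimal recursive sketch of the Douglas-Peucker algorithm itself follows; it is a generic textbook form, not the Arc/Info implementation or the MAGENCO AML code.

```python
# Hedged sketch of Douglas-Peucker line simplification (plain recursive form;
# not the Arc/Info GENERALIZE source or the MAGENCO interface).
import math

def _point_segment_distance(p, a, b):
    """Perpendicular distance from point p to segment ab (clamped to the endpoints)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    if seg2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(points, tolerance):
    """Return a subset of points whose deviation from the original line is <= tolerance."""
    if len(points) < 3:
        return list(points)
    # find the vertex farthest from the chord joining the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_segment_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]        # whole span collapses to the chord
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right                  # drop the duplicated split vertex

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, tolerance=0.5))
```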
A comparison of the KAMELEON Fire model to large-scale open pool fire experimental data is presented. The model was used to calculate large-scale JP-4 pool fires with and without wind, and with and without large objects in the fire. The effect of wind and large objects on the fire environment is clearly seen. For the pool fire calculations without any object in the fire, excellent agreement is seen in the location of the oxygen-starved region near the pool center. Calculated flame temperatures are about 200--300 K higher than measured. This results in higher heat fluxes back to the fuel pool and higher fuel evaporation rates (by a factor of 2). Fuel concentrations at lower elevations and peak soot concentrations are in good agreement with data. For pool fire calculations with objects, similar trends in the fire environment are observed. Excellent agreement is seen in the distribution of the heat flux around a cylindrical calorimeter in a rectangular pool with wind effects. The magnitude of the calculated heat flux to the object is high by a factor of 2 relative to the test data, due to the higher temperatures calculated. For the case of a large flat plate adjacent to a circular pool, excellent qualitative agreement is seen in the predicted and measured flame shapes as a function of wind.
The thickness-shear mode (TSM) resonator typically consists of a thin disk of AT-cut quartz with circular electrodes patterned on both sides. An RF voltage applied between these electrodes excites a shear mode mechanical resonance when the excitation frequency matches the crystal resonant frequency. When the TSM resonator is operated in contact with a liquid, the shear motion of the surface generates motion in the contacting liquid. The liquid velocity field, v{sub x}(y), can be determined by solving the one-dimensional Navier-Stokes equation. Newtonian fluids cause an equal increase in resonator motional resistance and reactance, R{sub 2}{sup (N)} = X{sub 2}{sup (N)}, with the response depending only on the liquid density-viscosity product ({rho}{eta}). Non-Newtonian fluids, as illustrated by the simple example of a Maxwell fluid, can cause unequal increases in motional resistance and reactance. For the Maxwell fluid, R{sub 2}{sup (M)} > X{sub 2}{sup (M)}, with relaxation time {tau} proportional to the difference between R{sub 2}{sup (M)}and X{sub 2}{sup (M)}. Early results indicate that a TSM resonator can be used to extract properties of non-Newtonian fluids.
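The square-root dependence on ({rho}{eta}) for the Newtonian case is commonly written in the Kanazawa-Gordon form; the sketch below evaluates it with nominal AT-cut quartz constants as a numerical check, and is offered only as a standard-literature illustration, not as this paper`s derivation.

```python
# Hedged sketch: the commonly used Kanazawa-Gordon expression for the resonant
# frequency shift of a smooth TSM resonator loaded by a Newtonian liquid,
# delta_f = -f0^(3/2) * sqrt(rho*eta / (pi * rho_q * mu_q)).
# Quartz constants below are nominal literature values, not taken from this paper.
import math

RHO_Q = 2.648e3    # AT-cut quartz density, kg/m^3
MU_Q = 2.947e10    # AT-cut quartz shear stiffness, Pa

def kanazawa_shift(f0_hz, rho_liq, eta_liq):
    """Frequency shift (Hz) for a smooth resonator immersed in a Newtonian liquid."""
    return -f0_hz ** 1.5 * math.sqrt(rho_liq * eta_liq / (math.pi * RHO_Q * MU_Q))

# 5 MHz resonator in water (rho ~ 1000 kg/m^3, eta ~ 1.0e-3 Pa*s): roughly -0.7 kHz
print(round(kanazawa_shift(5.0e6, 1000.0, 1.0e-3)), "Hz")
```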
This work examined self-assembled monolayers (SAMs) of n-alkane thiols using quartz resonators to determine the shear storage and loss moduli. Network analyzer measurements of electrical admittance at fundamental and corresponding harmonic values are fit to an equivalent circuit model. Shear modulus depends on frequency; the modulus values are three orders of magnitude lower than expected for a liquid or elastomeric polymer, more like those of a dense gas or supercritical fluid. A density of around 0.45 g/cm{sup 3} is calculated for a dodecane thiol SAM; this is roughly half of the bulk density. In conclusion, quartz resonators can be used to inertially deform SAMs.
The feasibility of three different non-destructive, direct methods of evaluating printed circuit board (PCB) cleanliness was demonstrated. The detection limits associated with each method were established. In addition, the pros and cons of these methods as routine quality control inspection tools were discussed. OSEE (Optically Stimulated Electron Emission) was demonstrated to be a sensitive technique for detection of low levels of flux residues on insulating substrates. However, future work, including development of rugged OSEE instrumentation, will determine whether the PCB industry can accept this technique in a production environment. FTIR (Fourier Transform Infrared) microscopy is a well-established technique with well-known characteristics. The inability of FTIR to discriminate an organic contaminant from an organic substrate limits its usefulness as a PCB line inspection tool, but it will remain a technique for the QC/QA laboratory. One advantage of FTIR over the other two techniques described here is its ability to identify the chemical nature of the residue, which is important in failure mode analysis. Optical imaging using sophisticated pattern recognition algorithms was found to be limited to high concentrations of residue. Further work on improved sensor techniques is necessary.
We demonstrate a two-dimensional device simulator for MOSFET structures that incorporates models for defect distributions and show predicted effects on device switching performance for various spatial distributions of defects in amorphous and polycrystalline silicon.
A high-temperature resistance furnace has been modified for the study of directional solidification of nickel-base superalloys such as alloys 718 and 625. The furnace will be used to study segregation and solidification phenomena that occur in consumable-electrode melting processes such as vacuum arc remelting and electro-slag remelting. The system consists of a water-cooled high-temperature furnace (maximum temperature {approximately}2900 C), a roughing vacuum system, a cooling system, a cooled hearth, a molten metal quenching bath, and a mechanism to lower the hearth from the furnace into the molten metal bath. The lowering mechanism is actuated by a digital stepping motor with a programmable controller. The specimen (1.9 cm dia {times} 14 cm long) is melted and contained within an alumina tube (2.54 cm dia {times} 15.24 cm long) which is seated on a copper hearth cooled with {approximately}13 C water. Directional solidification can then be accomplished by decreasing the furnace temperature while holding the specimen in position, by maintaining the temperature gradient in the furnace while lowering the specimen at a controlled rate, or by a combination of both. At any point the specimen can be lowered rapidly into the 70 C molten metal bath to quench the specimen, preserve the solidification structure, and minimize solid-state diffusion, enhancing the ability to study the localized solidification conditions.
The design of a software package that provides a variety of Asynchronous Transfer Mode (ATM) test functions is presented here. These functions include cell capture, protocol decode for Transmission Control Protocol/Internet Protocol (TCP/IP) services, removal of cells (to support testing of an ATM system under cell loss conditions), and echo functions. This package is currently written to operate on the Sun Microsystems SPARCstation 10/SunOS 4.1.3 environment with a Fore Systems SBA-100 Sbus ATM adapter (140 Mbit/s TAXI interface), and the DEC 5000/240 running ULTRIX 4.2A with a Fore Systems TCA-100 TurboChannel adapter. Application scenarios and performance measurements of this software package on these host environments are presented here.
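As a hedged illustration of one protocol-decode step, the sketch below unpacks the standard 5-byte UNI header of a 53-byte ATM cell; it reflects the published header layout, not the package`s actual code or its TCP/IP decode.

```python
# Hedged sketch: decoding the 5-byte header of a 53-byte ATM cell (UNI format)
# to recover VPI/VCI and payload type, as a protocol-decode step might.
# This is the standard header layout, not the test package's implementation.
def parse_atm_uni_header(cell: bytes):
    """Return a dict of UNI header fields from a 53-byte ATM cell."""
    if len(cell) != 53:
        raise ValueError("ATM cells are exactly 53 bytes (5 header + 48 payload)")
    h = cell[:5]
    gfc = h[0] >> 4                                          # generic flow control
    vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4)                 # 8-bit virtual path id
    vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4)  # 16-bit virtual channel id
    pt = (h[3] >> 1) & 0x07                                  # payload type
    clp = h[3] & 0x01                                        # cell loss priority
    hec = h[4]                                               # header error control (CRC-8)
    return {"gfc": gfc, "vpi": vpi, "vci": vci, "pt": pt,
            "clp": clp, "hec": hec, "payload": cell[5:]}

# toy cell: VPI=1, VCI=5 (signalling channel), zero payload and HEC
header = bytes([0x00, 0x10, 0x00, 0x50, 0x00])
print(parse_atm_uni_header(header + bytes(48))["vci"])
```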
Relationships between countries normally fall somewhere between war and peace. Crisis prevention activities will be particularly important in this area and should have two goals: (1) stabilizing tense situations that could push countries toward war and (2) supporting or reinforcing efforts to move countries toward a state of peace. A Crisis Prevention Center (CPC) will facilitate efforts to achieve these goals, and its functions can be grouped into three broad, inter-related categories: (1) establishing and facilitating communication among participating countries, (2) supporting negotiations and consensus-building on regional security issues, and (3) supporting implementation of agreed confidence- and security-building measures. Appropriate activities in each of these categories will depend on the relations among participating countries. Technology will play a critical role in establishing communication systems to ensure the timely flow of information between countries and to provide the means for organizing and analyzing this information. Technically based cooperative monitoring can provide an objective source of information on mutually agreed issues, thereby supporting the implementation of confidence-building measures and treaties. In addition, technology itself can be a neutral subject of interaction and collaboration between technical communities from different countries. Establishing a CPC in Northeast Asia does not require the existence of an Asian security regime. Indeed, activities that occur under the auspices of a CPC, even highly formalized exchanges of agreed information, can increase transparency and thereby pave the way for future regional cooperation. Major players in Northeast Asian security are Japan, Russia, China, North and South Korea, and the United States.
Recent research on point defects in thin films of SiO{sub 2} and Si{sub 3}N{sub 4} on Si is presented and reviewed. In SiO{sub 2} it is now clear that no one type of E{prime} center is the sole source of radiation-induced positive charge; hydrogenous moieties or other types of E{prime} are proposed. Molecular orbital theory and the easy passivation of E{prime} by H{sub 2} suggest that released H might depassivate P{sub b} sites. A charged E{prime}{sub {delta}} center has been seen in Cl-free SIMOX and thermal oxide films, and it is reassigned to an electron delocalized over four O{sub 3}{equivalent_to}Si units around a fifth Si. In Si{sub 3}N{sub 4} a new model for the amphoteric charging of Si{equivalent_to}N{sub 3} moieties is based on local shifts in defect energy with respect to the Fermi level, arising from nonuniform composition; it does not assume negative-U electron correlation. A new defect, NN{sub 2}{sup 0}, has been identified, with a dangling orbital on a 2-coordinated N atom bonded to another N.
A parallel unstructured finite element (FE) implementation designed for message passing machines is described. This implementation employs automated problem partitioning algorithms for load balancing unstructured grids, a distributed sparse matrix representation of the global finite element equations, and a parallel conjugate gradient (CG) solver. In this paper a number of issues related to the efficient implementation of parallel unstructured mesh applications are presented. These include the differences between structured and unstructured mesh parallel applications, major communication kernels for unstructured CG solvers, automatic mesh partitioning algorithms, and the influence of mesh partitioning metrics on parallel performance. Initial results are presented for example finite element (FE) heat transfer analysis applications on a 1024-processor nCUBE 2 hypercube. Results indicate that over 95% scaled efficiencies are obtained for some large problems despite the required unstructured data communication.
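A minimal serial sketch of the conjugate gradient kernel is given below for orientation; in the parallel implementation described above, the matrix-vector product and the dot products would additionally require interprocessor communication, which is omitted here.

```python
# Hedged sketch of the serial kernel behind a parallel conjugate-gradient solve.
# In the parallel FE code, each matrix-vector product and dot product would also
# involve off-processor communication; that is omitted here for clarity.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                # the communication-heavy step in a parallel solver
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# small SPD test system
rng = np.random.default_rng(2)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```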
The Leo Brady Seismic Net (LBSN) has been used to estimate seismic yields of US nuclear explosion tests for over 30 years. One of the concerns that the Non-Proliferation Experiment (NPE) addresses is the yield equivalence between a large conventional explosion and a nuclear explosion. The LBSN consists of five stations that surround the Nevada Test Site (NTS). Because of our previous experience in measuring nuclear explosion yields, we operated this net to record NPE signals. Comparisons were made with 9 nuclear tests in the same volcanic tuff medium and within an 800 m range of the NPE source. The seismic yield determined from each nuclear test comparison ranged from 1.3 to 2.2 kT. Using the same techniques used to determine nuclear explosion yields, the 1 kT NPE was measured at 1.7 kT nuclear-equivalent yield with a standard deviation of 16%. The individual stations show a non-symmetric radiation pattern, with more energy transmitted to the north and south. Comparison with a nuclear event does not show any obvious differences between the two tests.
Law enforcement officers work each day with individuals who can become aggressive and violent. Among the worst scenarios, which occur each year and often draw national media attention, is an officer having his handgun taken away and used against him. As many as 12 officers per year are killed with their own guns. This problem can be addressed through the integration of modern sensors with control electronics to provide authorized-user firearms for law enforcement and even recreational uses. A considerable benefit to law enforcement agencies, as well as society as a whole, would be gained by the application of recommended Smart Gun Technologies (SGT) as a method of limiting the use of firearms to authorized individuals. Sandia National Laboratories has been actively involved in the research and design of technologically sophisticated surety devices for weapons for the DOE and DOD. This experience is now being applied to criminal justice problems by transferring these technologies to commercial industry. In the SGT project, Sandia is developing the user requirements that would limit a firearm`s use to its owner and/or authorized users. Various technologies capable of meeting the requirements are being investigated; these range from biometric identification to radio-controlled devices. Research is presently underway to investigate which technologies represent the best solutions to the problem. Proof-of-concept demonstration models are being built for the most promising SGTs with the intent of technology transfer. Different solutions are recommended for the possible applications: law enforcement, military, and commercial (personal protection/recreational) use.
This paper describes and discusses a basic safety analysis technique which may be useful at the beginning of the Risk Assessment and Risk Management process. The technique uses judgmental factors on the part of analysts rather than dependence upon numerical techniques associated with more detailed analysis. The basic technique is presented and coupled to risk charts which may vary depending upon the intent of the analysis and the output required for the particular situation. Some variations are included to show how the technique may be used for prioritization of competing resources for necessary work.
In addition to stress and acceleration measurements made in the inelastic regime, Sandia fielded two triaxial accelerometer packages in the seismic free field for the Non-Proliferation Experiment (NPE). The gauges were located at ranges of 190 and 200 m from the center of the ANFO-laden cavity, on opposite sides of a vertical fault. This location allowed us to assess several different seismological aspects related to non-proliferation. The radial and vertical components of the two packages show similar motion. Comparisons are made with similar data from nuclear tests to estimate yield, to calculate seismic energy release, and to detect spectral differences between nuclear and non-nuclear explosions. The waveforms of the NPE differ significantly from those of nuclear explosions: the first two peak amplitudes of the NPE are comparable, while the nuclear explosion initial peak is much larger than the second peak. The calculated seismic energies imply that conventional explosions couple to the medium much better at low frequencies than do nuclear explosions and that nuclear explosions contain more high-frequency energy than the NPE. Radial and vertical accelerations were integrated to obtain displacement and indicate there was movement across the fault.
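A hedged sketch of the integration step, recovering velocity and displacement from an acceleration record with a cumulative trapezoidal rule, follows; the synthetic record and sampling rate are illustrative, not the fielded NPE data or processing.

```python
# Hedged sketch: integrating a ground-motion acceleration record twice to obtain
# displacement, as described for the radial and vertical components (simple
# cumulative trapezoidal rule on a synthetic record; not the fielded processing).
import numpy as np

def acceleration_to_displacement(accel, dt):
    """Return (velocity, displacement) time histories from acceleration samples."""
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (accel[1:] + accel[:-1]) * dt)))
    disp = np.concatenate(([0.0], np.cumsum(0.5 * (vel[1:] + vel[:-1]) * dt)))
    return vel, disp

# synthetic damped-oscillation acceleration pulse, 1 kHz sampling
t = np.arange(0.0, 2.0, 1.0e-3)
a = 5.0 * np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 4.0 * t)   # m/s^2
v, d = acceleration_to_displacement(a, dt=1.0e-3)
print(f"residual displacement ~ {d[-1]*1e3:.2f} mm")
```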
Environmental and toxicity concerns related to the use of lead have initiated the search for acceptable, alternate joining materials for electronics assembly. This paper describes a novel lead-free solder designed as a ``drop in`` replacement for common tin/lead eutectic solder. The physical and mechanical properties of this solder are discussed in comparison to tin/lead eutectic solder. The performance of this solder when used for electronics assembly is discussed and compared to other common solders. Fatigue testing results are reported for thermal cycling electronics assemblies soldered with this lead-free composition. The paper concludes with a discussion on indium metal availability, supply and price.
The core problem with the US health care system is that it already costs too much and the rate of its cost growth is cause for further alarm. To deal with these problems, regulators must introduce incentives for health care providers to reduce costs and incentives that make consumers of health care services concerned about the costs of the services they demand. Achievement of these regulatory goals will create opportunities for the introduction of innovations, including revolutionary new technology, that can lead to major reductions in costs. Modeling of health care system inputs, outputs, transactions, and the relationships between these parameters will expedite the development of an effective regulatory process. This model must include all of the major factors that affect the demand for health care, and it must facilitate benchmarking health care subsystems against the most efficient international practices.
The response of smooth- and textured-surface thickness-shear mode (TSM) quartz resonators in liquid has been examined. Smooth devices, which viscously entrain a layer of contacting liquid, exhibit a response that depends on the product of liquid density and viscosity. Textured-surface devices, with either randomly rough or regularly patterned features, also trap liquid in surface features, exhibiting an additional response that depends on liquid density alone. Combining smooth- and textured-surface resonators in a monolithic sensor enables simultaneous extraction of liquid density and viscosity.
We have demonstrated that a thickness-shear mode quartz resonator can be used as a real-time, in situ monitor of the state-of-charge of lead-acid batteries. The resonator is sensitive to changes in the density and viscosity of the sulfuric acid electrolyte. Both of these liquid parameters vary monotonically with the battery state-of-charge. This new monitor is more precise than sampling hydrometers, and since it is compatible with the corrosive electrolyte environment, it can be used for in situ monitoring. A TSM resonator consists of gold electrodes deposited on opposite surfaces of a thin AT-cut quartz crystal. When an RF voltage is applied to the electrodes, a shear strain is introduced in the piezoelectric quartz and mechanical resonance occurs between the surfaces. A liquid in contact with one of the quartz surfaces is viscously entrained, which perturbs the resonant frequency and resonance magnitude. If the surface is smooth, the changes in both frequency and magnitude are proportional to ({rho}{eta}){sup {1/2}}, where {rho} is the liquid density and {eta} is the viscosity.
Synthetic Aperture Radar (SAR) is used to form images that are maps of radar reflectivity of some scene of interest, from range soundings taken over some spatial aperture. Additionally, the range soundings are typically synthesized from a sampled frequency aperture. Efficient processing of the collected data necessitates using efficient digital signal processing techniques such as vector multiplies and fast implementations of the Discrete Fourier Transform. Inherent in image formation algorithms that use these is a trade-off between the size of the scene that can be acceptably imaged, and the resolution with which the image can be made. These limits arise from migration errors and spatially variant phase errors, and different algorithms mitigate these to varying degrees. Two fairly successful algorithms for airborne SARs are Polar Format processing, and Overlapped Subaperture (OSA) processing. This report introduces and summarizes the analysis of generalized Tiered Subaperture (TSA) techniques that are a superset of both Polar Format processing and OSA processing. It is shown how tiers of subapertures in both azimuth and range can effectively mitigate both migration errors and spatially variant phase errors to allow virtually arbitrary scene sizes, even in a dynamic motion environment.
A simple model has been developed to address a pragmatic question: What fraction of its research and development budget should a national laboratory devote to enhancing technology in the private sector? In dealing with lab-wide budgets in an aggregate sense, the model uses three parameters - fraction of lab R&D transferable to industry, transfer efficiency and payback to laboratory missions - to partition fixed R&D resources between technology transfer and core missions. It is a steady-state model in that the transfer process is assumed to work in equilibrium with technology generation. The results presented should be of use to those engaged in managing and overseeing federal laboratory technology transfer activities.
Military Specifications call out general procedures and guidelines for conducting contact resistance measurements on chemical conversion coated panels. This paper deals with a test procedure developed at Sandia National Laboratories for conducting contact electrical resistance measurements on non-chromated conversion coated test panels. MIL-C-81706, {open_quotes}Chemical Conversion Materials For Coating Aluminum and Aluminum Alloys{close_quotes}, was the reference specification used for guidance.
The photodiode transition indicator is a device which has been successfully used to determine the onset of boundary layer transition on numerous hypersonic flight vehicles. The exact source of the electromagnetic radiation detected by the photodiode at transition was not understood. In some cases early saturation of the device occurred, and the device failed to detect transition. Analyses have been performed to determine the source of the radiation producing the photodiode signal. The results of these analyses indicate that the most likely source of the radiation is blackbody emission from the heatshield material bordering the quartz window of the device. Good agreement between flight data and calculations based on this radiation source has been obtained. Analyses also indicate that the most probable source of the radiation causing early saturation is blackbody radiation from carbon particles which break away from the nosetip during the ablation process.
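As a hedged illustration of the proposed source, the sketch below evaluates Planck`s law for blackbody spectral radiance; the temperatures and wavelength are illustrative and are not flight or heatshield values.

```python
# Hedged sketch: spectral radiance of a blackbody (Planck's law), the quantity
# behind the proposed heatshield-emission explanation; the temperatures and
# wavelength below are illustrative, not flight values.
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m, temperature_k):
    """Blackbody spectral radiance, W / (m^2 * sr * m)."""
    x = H * C / (wavelength_m * KB * temperature_k)
    return (2.0 * H * C ** 2) / (wavelength_m ** 5 * (math.exp(x) - 1.0))

# emission near 1 micron from surfaces at 1500 K vs 3000 K
for t_k in (1500.0, 3000.0):
    print(t_k, planck_spectral_radiance(1.0e-6, t_k))
```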
Electrical discharges from a lightning simulator were directed at Mk12 aeroshells. Buckling of the aluminum substrate was observed after some 100-kA shots, and severe damage consisting of tearing of the aluminum and the production of inward flying aluminum shrapnel was observed after some 200-kA peak-current shots. Some shots resulted in severe damage to both the aluminum and the carbon-phenolic ablative material. It is reasonable to conclude from the experimental results that a lightning stroke with very high-peak current could, by itself, produce an opening in an Mk12 aeroshell. Because the aeroshell is part of the nuclear explosive safety exclusion region for the Mk12/W62 nuclear weapon, an opening would significantly reduce the assured safety of the weapon. It is unlikely that the observed interaction between lightning and the aeroshells would have been predicted by any form of computer simulation.
To draft a procurement specification for the Long-Reach Manipulator (LRM), the benefits and limitations of the various robotic control system architectures available need to be determined. This report identifies and describes the advantages and potential disadvantages of using an open control system versus a closed (or proprietary) system, focusing on integration of interfaces for sensors, end effectors, tooling, and operator interfaces. In addition, the various controls methodologies of several recent systems are described. Finally, the reasons behind the recommendation to procure an open control system are discussed.
The work for the development of an Annular Precision Linear Shaped Charge (APLSC) Flight Termination System (FTS) for the Operation and Deployment Experiment Simulator (ODES) program is discussed and presented in this report. The Precision Linear Shaped Charge (PLSC) concept was recently developed at Sandia. The APLSC component is designed to produce a copper jet that cuts four inch diameter holes in each of two spherical tanks, one containing fuel and the other an oxidizer that are hypergolic when mixed, to terminate the ODES vehicle flight if necessary. The FTS includes two detonators, six Mild Detonating Fuse (MDF) transfer lines, a detonator block, a detonation transfer manifold, and the APLSC component. PLSCs have previously been designed as ring components in which the jet penetrating axis points either directly away from or toward the center of the ring assembly. Typically, these PLSC components are designed to cut metal cylinders from the outside inward or from the inside outward. The ODES program requires an annular linear shaped charge. The Linear Shaped Charge Analysis (LESCA) code was used to design this 65 grain/foot APLSC, and comparisons of the analytical predictions with experimental data are presented. Jet penetration data are presented to assess the maximum depth and reproducibility of the penetration. Data are also presented for full-scale tests that included all FTS components and were conducted with nominal 19 inch diameter spherical tanks.
This report provides a statistical description of the types and severities of tractor semi-trailer accidents involving at least one fatality. The data were developed for use in risk assessments of hazardous materials transportation. Several accident databases were reviewed to determine their suitability to the task. The TIFA (Trucks Involved in Fatal Accidents) database created at the University of Michigan Transportation Research Institute was extensively utilized. Supplementary data on collision and fire severity, which were not available in the TIFA database, were obtained by reviewing police reports for selected TIFA accidents. The results are described in terms of frequencies of different accident types and cumulative distribution functions for the peak contact velocity, rollover skid distance, fire temperature, fire size, fire separation, and fire duration.
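The results above are reported as cumulative distribution functions of severity measures. A minimal sketch of constructing such an empirical CDF from a list of recorded values follows; the numbers are hypothetical and are not TIFA data.

```python
def empirical_cdf(values):
    """Return (value, fraction of observations at or below it) pairs."""
    xs = sorted(values)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

# Hypothetical peak contact velocities (mph) from reviewed accident reports
velocities = [22, 35, 41, 48, 48, 55, 63, 70]
for v, p in empirical_cdf(velocities):
    print(f"velocity <= {v:3d} mph : {p:.2f}")
```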
Sandia National Laboratories has developed a sophisticated custom digital data acquisition system to record data from a wide variety of experiments conducted on nuclear weapons effects tests at the Nevada Test Site (NTS). Software is a critical part of this data acquisition system. In particular, software has been developed to support an instrumentation/experiment setup database, interactive and automated instrument control, remote data readout and processing, plotting, interactive data analysis, and automated calibration. Some software is also used as firmware in custom subsystems incorporating embedded microprocessors. The software operations are distributed across the nearly 40 computer nodes that comprise the NTS Wide Area Computer Network. This report is an overview of the software developed to support this data acquisition system. The report also provides a brief description of the computer network and the various recording systems used.
The work in this program covered four primary areas: solid modeling, path planning, modular fixturing, and stability analysis. This report contains highlights of results from the program, references to published reports, and, in an appendix, a currently unpublished report which has been accepted for journal publication, but has yet to appear.
This report documents the Surftherm program, which analyzes transport-coefficient, thermochemical, and kinetic-rate information in complex gas-phase and surface chemical reaction mechanisms. The program is designed for use with the Chemkin (gas-phase chemistry) and Surface Chemkin (heterogeneous chemistry) programs. It was developed as a {open_quotes}chemist`s companion{close_quotes} for using the Chemkin packages with complex chemical reaction mechanisms. It presents in tabular form detailed information about the temperature and pressure dependence of chemical reaction rate constants and their reverse rate constants, reaction equilibrium constants, reaction thermochemistry, chemical species thermochemistry, and transport properties. This report serves as a user`s manual for the program, explaining the required input and the output.
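The temperature dependence tabulated by Surftherm follows from the modified Arrhenius form used in Chemkin-style mechanisms, k(T) = A T^b exp(-E_a/RT). The sketch below evaluates this form for one hypothetical reaction; the parameters are invented and the units follow common Chemkin input conventions.

```python
import math

R_CAL = 1.987  # gas constant in cal/(mol*K), matching common Chemkin input units

def rate_constant(temp_k, a, b, ea_cal):
    """Modified Arrhenius rate constant k(T) = A * T**b * exp(-Ea / (R*T))."""
    return a * temp_k**b * math.exp(-ea_cal / (R_CAL * temp_k))

# Hypothetical reaction parameters (A in cm^3/(mol*s), Ea in cal/mol)
A, B, EA = 1.0e13, 0.5, 15000.0
for t in (300.0, 600.0, 1000.0, 1500.0):
    print(f"T = {t:6.1f} K   k = {rate_constant(t, A, B, EA):.3e}")
```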
A three dimensional (3D) finite element analysis of the Markel Mine located on Weeks Island was performed to: (1) evaluate the stability of the mine and (2) determine the effect of mine failure on the nearby Morton Salt mine and SPR facilities. The first part of the stability evaluation investigates the effect of pillar failure on mine stability. These simulations revealed that tensile stresses and dilatant damage develop in the overlying salt as a result of pillar loss. These tensile stresses extend to the salt/overburden interface only for the case where all 45 of the pillars are assumed to fail. Tensile stresses would likely cause microfracturing of the salt, resulting in a flow path for groundwater from the overlying aquifer to enter the mine. The dilatant damage bridges between the mine and the overburden in the case where 15 or more pillars are removed from the model. Dilatant damage is attributed to microfracturing or changes in the pore structure of the salt and could also result in a flow path for groundwater to enter the mine. The second part of the Markel Mine evaluation investigates the stability of the pillars with respect to three failure mechanisms: tensile failure, compressive failure, and creep rupture. A 3D slabbing pillar model of the Markel Mine was developed to investigate progressive failure of the pillars and the effect of slabbing on mine stability. Based on a strain-limiting creep rupture criterion, pillar failure is predicted to be extensive at present. The associated loss of pillar strength should be equivalent to removing all pillars from the model as was done in the first part of this stability analysis, resulting in the possibility of groundwater intrusion. Since creep rupture is not a well understood phenomenon, further development and validation of this criterion is recommended.
The NRC has proposed revisions to 10 CFR 100 which include the codification of nuclear reactor site population density limits to 500 people per square mile, at the siting stage, averaged over any radial distance out to 30 miles, and 1,000 people per square mile within the 40-year lifetime of a nuclear plant. This study examined whether there are less restrictive alternative population density and/or distribution criteria which would provide equivalent or better protection to human health in the unlikely event of a nuclear accident. This study did not attempt to directly address the issue of actual population density limits because there are no US risk standards established for the evaluation of population density limits. Calculations were performed using source terms for both a current generation light water reactor (LWR) and an advanced light water reactor (ALWR) design. The results of this study suggest that measures which address the distribution of the population density, including emergency response conditions, could result in lower average individual risks to the public than the proposed guidelines that require controlling average population density. Studies also indicate that an exclusion zone size, determined by emergency response conditions and reactor design (power level and safety features), would better serve to protect public health than a rigid standard applied to all sites.
The President of Sandia National Laboratories, Albert Narath, made this presentation to the congressional subcommittee on February 3, 1994. In it he outlines the convergence of the defense and civilian technology bases, technology leadership, the government/industry relationship in science and technology, historical laboratory effectiveness, Sandia`s evolution to a multiprogram laboratory, Sandia`s energy programs today, planning for a changing operating environment, Sandia`s strategy for enhancing industrial competitiveness, R&D partnerships, technology deployment, entrepreneurial initiatives, and current DOE planning efforts. Appendices contain information on technology transfer initiatives in the fields of high-performance computing, materials and processes for manufacturing, energy and environment, microelectronics and photonics and advanced manufacturing. Also included are customer response highlights, information on dual-use research centers and user facilities, examples of technology transfer achievements, major accomplishments of 1993, and questions and answers from the subcommittee.
This report documents the study that was performed from October 1993 through June 1994 to determine the effects of humidity on the W80 MC3268/3269 Trajectory-Sensing Signal Generators (TSSGs) during the test bed build and laboratory test processes. Mason & Hanger-Silas Mason Co., Inc. performs the disassembly and inspections along with the test bed build processes at the Pantex Plant in Amarillo, Texas. The laboratory testing of the TSSGs is performed at Sandia`s Weapons Evaluation Test Laboratory (WETL), located at the Pantex Plant. This report summarizes the historical sequence of events, the engineering analyses and decisions, and the future plans for controlling the ingress of moisture into the TSSGs during laboratory testing.
NonDestructive Testing (NDT), also called NonDestructive Evaluation (NDE), is commonly used to monitor structures before, during, and after testing. This paper reports on the use of two NDT techniques to monitor the behavior of a typical wind turbine blade during a quasi-static test-to-failure. The two NDT techniques used were acoustic emission monitoring and a coherent optical technique. The former monitors the acoustic energy produced by the blade as it is loaded. The latter uses electronic shearography to measure the differences in surface displacements between two load states. Typical results are presented to demonstrate the ability of these two techniques to locate and monitor both high-damage regions and flaws in the blade structure. Furthermore, this experiment highlights the limitations in the techniques that must be addressed before one or both can be transferred, with a high probability of success, to the inspection and monitoring of turbine blades during the manufacturing process and under normal operating conditions.
Utility-interactive (UI) photovoltaic power systems mounted on residences and commercial buildings are likely to become a small, but important source of electric generation in the next century. This is a new concept in utility power production--a change from large-scale central generation to small-scale dispersed generation. As such, it requires a re-examination of many existing standards and practices to enable the technology to develop and emerge into the marketplace. Much work has been done over the last 20 years to identify and solve the potential problems associated with dispersed power generation systems. This report gives an overview of these issues and also provides a guide to applicable codes, standards and other related documents. The main conclusion that can be drawn from this work is that there are no major technical barriers to the implementation of dispersed PV generating systems. While more technical research is needed in some specific areas, the remaining barriers are fundamentally price and policy.
A 3-D finite element analysis was performed to evaluate the stability of the SPR upper and lower oil storage levels at Weeks Island. The mechanical analysis predicted stresses and strains from which pillar stability was inferred using a fracture criterion developed from previous testing of Weeks Island salt. This analysis simulated the sequential mining of the two levels and subsequent oil fill of the mine. The predicted subsidence rates compare well to those measured over the past few years. Predicted failure mechanisms agree with observations made at the time the mine was being modified for oil storage. The modeling technique employed here treats an infinite array of pillars and is a reasonable representation of the behavior at the center of the mine. This analysis predicts that the lower level pillars, at the center of the mine, have fractured and their stability at this time is questionable. Localized pillar fracturing is predicted and implies that the mine is entering a phase of continual time dependent deterioration. Continued and expanded monitoring of the facility and development of methods to assess and predict its behavior are more important now than ever.
Excessive deceleration forces experienced during high speed deployment of parachute systems can cause damage to the payload and the canopy fabric. Conventional reefing lines offer limited relief by temporarily restricting canopy inflation and limiting the peak deceleration load. However, the open-loop control provided by existing reefing devices restricts their use to a specific set of deployment conditions. In this paper, the sensing, processing, and actuation that are characteristic of adaptive structures form the basis of three concepts for active control of parachute inflation. These active control concepts are incorporated into a computer simulation of parachute inflation. Initial investigations indicate that these concepts promise enhanced performance relative to conventional techniques for a nominal release. Furthermore, the ability of each controller to adapt to off-nominal release conditions is examined.
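The report's three control concepts are not described here. Purely as an illustration of closed-loop control of inflation, the sketch below applies a proportional adjustment to a hypothetical reefing constraint inside a crude point-mass inflation loop; all parameter values are invented.

```python
# Crude point-mass inflation loop (gravity and added mass neglected); all values
# are invented and this is not one of the report's three control concepts.
RHO, M, CD = 1.0, 250.0, 0.8           # air density (kg/m^3), payload mass (kg), drag coeff.
A_MAX, G = 50.0, 9.81                  # full canopy area (m^2), gravity (m/s^2)
G_TARGET, KP = 8.0 * G, 0.02           # deceleration limit and proportional gain

v, area_limit, dt = 120.0, 2.0, 0.01   # initial speed (m/s), reefed area (m^2), time step (s)
for step in range(1500):
    area = min(A_MAX, area_limit)
    decel = 0.5 * RHO * v**2 * CD * area / M
    # Proportional law: open the reefing constraint only while decel is below the target
    area_limit += KP * max(0.0, G_TARGET - decel) * dt * A_MAX
    v = max(0.0, v - decel * dt)
    if step % 300 == 0:
        print(f"t = {step*dt:5.2f} s  v = {v:6.1f} m/s  area = {area:5.1f} m^2  decel = {decel/G:4.1f} g")
```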
Vertical-cavity surface-emitting lasers (VCSELs) can be integrated with heterojunction phototransistors (HPTs) and heterojunction bipolar transistors (HBTs) on the same wafer to form high speed optical and optoelectronic switches, respectively, that can be optically or electrically addressed. This permits the direct communication and transmission of data between distributed electronic processors through an optical switching network. The experimental demonstration of an integrated optoelectronic HBT/VCSEL switch combining a GaAs/AlGaAs heterojunction bipolar transistor with a VCSEL is described below, using the same epilayer structure upon which binary HPT/VCSEL optical switches are also built. The monolithic HBT/VCSEL switch has high current gain, low power dissipation, and a high optical-to-electrical conversion efficiency. Its modulation response has been measured and modeled.
Reactor pumped lasers have the potential to be scaled to multi-megawatt power levels with long run times. In proposed designs, the laser will be capable of output powers of several megawatts of power for run times of several hours. Such a laser would have many diverse applications such as material processing, space debris removal and power beaming to geosynchronous satellites or the moon. However, before such systems can be designed, fundamental laser parameters such as small signal gain, saturation intensity and efficiency must be determined over a wide operational parameter space. We have recently measured fundamental laser parameters for a selection of nuclear pumped visible and near IR laser transitions in atomic neon, argon and xenon. An overview of the results of this investigation will be presented.
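The small-signal gain and saturation intensity mentioned above are related through the standard homogeneous saturation expression g(I) = g0/(1 + I/I_sat). The sketch below integrates this saturated-gain relation along a gain length for illustrative parameter values; these are not measured nuclear-pumped laser results.

```python
# Homogeneous gain saturation: dI/dz = g0 * I / (1 + I / I_sat)
# Parameter values are illustrative only, not measured nuclear-pumped laser data.
G0 = 0.02        # small-signal gain coefficient, 1/cm
I_SAT = 50.0     # saturation intensity, W/cm^2
LENGTH = 100.0   # gain length, cm

def amplify(i_in, steps=10000):
    """Integrate the saturated-gain equation along the gain length (Euler)."""
    dz = LENGTH / steps
    i = i_in
    for _ in range(steps):
        i += G0 * i / (1.0 + i / I_SAT) * dz
    return i

for i_in in (0.1, 10.0, 100.0):
    print(f"I_in = {i_in:6.1f} W/cm^2  ->  I_out = {amplify(i_in):8.1f} W/cm^2")
```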
The Nevada Test Site (NTS) is one excellent possibility for a laser power beaming site. It is in the low latitudes of the U.S., is in an exceptionally cloud-free area of the southwest, is already an area of restricted access (which enhances safety considerations), and possesses a highly-skilled technical team with extensive engineering and research capabilities from underground testing of our nation's nuclear deterrent. The average availability of a cloud-free, clear line of sight to a given point in space is about 84%. With a beaming angle of ±60° from the zenith, about 52 geostationary-orbit (GEO) satellites could be accessed continuously from NTS. In addition, the site would provide an average view factor of about 10% for orbital transfer from low earth orbit to GEO. One of the major candidates for a long-duration, high-power laser is a reactor-pumped laser being developed by DOE. The extensive nuclear expertise at NTS makes this site a prime candidate for utilizing the capabilities of a reactor-pumped laser for power beaming. The site could then be used for many dual-use roles such as industrial material processing research, defense testing, and removing space debris.
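The quoted ±60° beaming cone corresponds to a minimum elevation angle of 30°. The sketch below works out the underlying look-angle geometry, estimating how much of the geostationary arc satisfies that constraint from an assumed site latitude of about 37° N; the accessible satellite count then depends on the assumed orbital spacing, so only the arc is reported here.

```python
import math

R_EARTH, R_GEO = 6378.0, 42164.0   # Earth radius and geostationary orbit radius, km
LAT = math.radians(37.0)           # assumed site latitude for NTS

def elevation_deg(dlon_deg):
    """Elevation angle to a GEO satellite offset dlon_deg in longitude from the site."""
    psi = math.acos(math.cos(LAT) * math.cos(math.radians(dlon_deg)))  # central angle
    return math.degrees(math.atan2(math.cos(psi) - R_EARTH / R_GEO, math.sin(psi)))

# Longitude arc of the GEO belt seen at >= 30 deg elevation (zenith angle <= 60 deg)
visible = [d for d in range(-90, 91) if elevation_deg(d) >= 30.0]
print(f"visible GEO arc: about {max(visible) - min(visible)} deg of longitude")
```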
We present a new massively parallel decomposition for grand canonical Monte Carlo computer simulation (GCMC) suitable for short ranged fluids. Our spatial algorithm relies on the fact that for short-ranged fluids, molecules separated by a greater distance than the reach of the potential act independently, thus different processors can work concurrently in regions of the same system which are sufficiently far apart. Several parallelization issues unique to GCMC are addressed such as the handling of the three different types of Monte Carlo move used in GCMC: the displacement of a molecule, the creation of a molecule, and the destruction of a molecule. The decomposition is shown to scale with system size, making it especially useful for systems where the physical problem dictates the system size, for example, fluid behavior in mesopores.
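A minimal sketch of the independence idea follows; it is not the report's full algorithm. Molecules are binned into cells at least one interaction cutoff wide, and cells of the same checkerboard color are separated by at least one cell width, so GCMC moves (displacement, creation, destruction) confined to same-colored cells can proceed concurrently on different processors.

```python
import random

L_BOX, R_CUT = 10.0, 1.0             # box edge and interaction cutoff (illustrative)
N_CELLS = int(L_BOX // R_CUT)        # cells at least one cutoff wide; even count assumed
CELL = L_BOX / N_CELLS

def cell_of(pos):
    """Map a 2-D position to its cell index."""
    return (int(pos[0] // CELL) % N_CELLS, int(pos[1] // CELL) % N_CELLS)

def color_of(cell):
    """Checkerboard color: distinct same-colored cells are separated by at least
    one full cell (>= one cutoff), so their contents do not interact."""
    return (cell[0] % 2, cell[1] % 2)

# Bin molecules, then group cells by color; each color class marks regions whose
# GCMC moves (displace / create / destroy) could run concurrently on different processors.
molecules = [(random.uniform(0, L_BOX), random.uniform(0, L_BOX)) for _ in range(20)]
by_color = {}
for m in molecules:
    by_color.setdefault(color_of(cell_of(m)), []).append(m)
for color, members in sorted(by_color.items()):
    print(f"color {color}: {len(members)} molecules eligible for concurrent moves")
```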
This paper investigates the applicability of existing SRAM SEU hardening techniques to conventional CMOS cross-coupled sense amplifiers used in DRAM structures. We propose a novel SEU mirroring concept and implementation for hardening DRAMs to bitline hits. Simulations indicate a 24-fold improvement in critical charge during the sensing state using a 10K T-Resistor scheme and a 28-fold improvement during the highly susceptible high impedance state using 2 pF dynamic capacitance coupling.
A novel DRAM cell technology consisting of an access transistor and a bootstrapped storage capacitor with an integrated breakdown diode is proposed. This design offers considerable resistance to single-event cell hits. The information charge packet is shielded from an SE hit by placing the vulnerable node in a self-compensating standby state. The proposed cell is comparable in size to a conventional DRAM cell, but simulations show an improvement in critical charge of two orders of magnitude.