This report highlights the following topics: Photon Correlation Spectroscopy--a new application in jet fuel analysis; Testing news in brief; Solar test facility supports space station research; Shock isolation technique developed for piezoresistive accelerometer; High-speed photography captures Distant Image measurements; and Radiation effects test revised for CMOS electronics.
A monolithic dose-rate nuclear event detector (NED) has been evaluated as a function of radiation pulse width. The dose-rate trip level of the NED was evaluated in "near" minimum and maximum sensitivity configurations for pulse widths from 20 to 250 ns and at dose rates from 10{sup 6} to 10{sup 9} rads(Si)/s. The trip level varied up to a factor of ∼16 with pulse width. At each pulse width the trip level can be varied intentionally by adding external resistors. Neutron irradiations caused an increase in the trip level, while electron irradiations, up to a total dose of 50 krads(Si), had no measurable effect. This adjustable dose-rate-level detector should prove valuable to designers of radiation-hardened systems.
Particulate contamination during IC fabrication is generally acknowledged as a major contributor to yield loss. In particular, plasma processes have the potential for generating copious quantities of process-induced particulates. Effective control of process-generated particulate contamination therefore requires a fundamental understanding of particulate generation and transport. Although a considerable amount of effort has been expended to study particles in laboratory apparatus, only a limited amount of work has been performed in production-line equipment with production processes. In these experiments, a Drytek Quad Model 480 single-wafer etcher was used to etch blanket thermal SiO{sub 2} films on 150 mm substrates in fluorocarbon discharges. The effects of rf power, reactor pressure, and feed gas composition on particle production rates were evaluated. Particles were measured using an HYT downstream particle flux monitor. Surface particle deposition was measured using a Tencor Surfscan 4500, as well as advanced ex situ techniques. Particle morphology and composition were also determined ex situ. Response surface methodology was used to determine the process conditions under which particle generation was most pronounced. The combination of in situ and ex situ techniques has provided insight into the mechanisms of particle generation and particle dynamics within the plasma during oxide etching.
Franssen, F.; Islam, A.B.M.N.; Sonnier, C.; Schoeneman, J.L.; Baumann, M.
The conclusions of the vulnerability test on VOPAN (Verification of Operator's Analysis), as conducted at the Safeguards Analytical Laboratory (SAL) at Seibersdorf, Austria in October 1990 and documented in STR-266, indicate that "whenever samples are taken for safeguards purposes extreme care must be taken to ensure that they have not been interfered with during the sample taking, transportation, storage or sample preparation process." Indeed, a number of possibilities exist to alter the content of a safeguards sample vial from the moment of sampling up to the arrival of the treated (or untreated) sample at SAL. The time lapse between these two events can range from a few days up to months. The sample history over this period can be subdivided into three main sub-periods: (1) the period from when the sampling activities are commenced up to the treatment in the operator's laboratory, (2) during treatment of samples in the operator's laboratory, and finally, (3) the period between that treatment and the arrival of the sample at SAL. A combined effort between the Agency and the United States Support Program to the Agency (POTAS) has resulted in two active tasks and one proposed task to investigate improving the maintenance of continuity of knowledge on safeguards samples during the entire period of their existence. This paper describes the use of the Sample Vial Secure Container (SVSC), of the Authenticated Secure Container System (ASCS), and of the Secure Container for Storage and Transportation of samples (SCST) to guarantee that a representative portion of the solution sample will be received at SAL.
A control algorithm is proposed for a molten-salt solar central receiver in a cylindrical configuration. The algorithm simultaneously regulates the receiver outlet temperature and limits thermal-fatigue damage of the receiver tubes to acceptable levels. The algorithm is similar to one that was successfully tested for a receiver in a cavity configuration at the Central Receiver Test Facility in 1988. Due to the differences in the way solar flux is introduced on the receivers during cloud-induced transients, the cylindrical receiver will be somewhat more difficult to control than the cavity receiver. However, simulations of a proposed cylindrical receiver at the Solar Two power plant have indicated that automatic control during severe cloud transients is feasible. This paper also provides important insights regarding receiver design and lifetime as well as a strategy for reducing the power consumed by the molten-salt pumps.
This paper describes experiments on the wettability of tin on oxygen-free, high-conductivity (OFHC) copper using a "point source" ultrasonic horn. Ultrasonics are used on metals such as aluminum or stainless steel, which are difficult to wet without the use of very strong corrosives. These experiments explore the behavior of acoustic energy transmission in the horn-solder-substrate system, as indicated by the solder film generated, and explore the use of ultrasonics in actual electronic systems component fabrication and assembly processes.
An evaluation of substitutes for tin-lead alloy solders is described. The first part of the evaluation studies the wettability of tin-based, lead-free solders. The second part evaluates their solderability. The solders evaluated were commercially available.
This paper presents the results of a set of structural analyses performed to investigate the effects of internal gas generation on the extension of pre-existing fractures around disposal rooms at the Waste Isolation Pilot Plant. The response of a room and its contents is computed for this scenario to establish the condition of the room at any point in time. The development of the capability to perform these analyses represents an additional step in the development of an overall model for the disposal room.
National Electronic Packaging and Production Conference-Proceedings of the Technical Program (West and East)
Frear, D.R.
Acid vapors have been used to fluxlessly reduce metal oxides and enhance wetting of solder on metallizations. Dilute concentrations of hydrogen, acetic acid, and formic acid in an inert carrier gas of nitrogen or argon were used with the sessile drop technique for 60Sn-40Pb solder on Cu and Au/Ni metallizations. The time to reduce metal oxides and the degree of wetting as a function of acid vapor concentration were characterized. Acetic and formic acids reduce the surface metal oxides sufficiently to form metallurgically sound solder joints. Hydrogen did not reduce oxides rapidly enough at 220°C to be suitable for soldering applications. The optimum conditions for oxide reduction with formic acid were an acid vapor concentration in nitrogen carrier gas of 4% for Cu metallizations and 1.6% on Au/Ni. The acetic acid vapor concentration, also in nitrogen, was optimized at 1.5% for both metallizations. Above a vapor concentration of 1.5%, the acetic acid combined with the bare metal to form acetates, which increased the wetting time. These results indicate that acid vapor fluxless soldering is a viable alternative to traditional flux soldering.
Proceedings of the International Instrumentation Symposium
Clark, E.L.
The measurement of surface pressures on a body submerged in flowing water involves several problems which are not encountered when the test medium is air. Many of these problems exist even if the water velocity is low, and become more severe at higher velocities (45-65 ft/sec), where the surface pressure may be low enough for cavitation to occur. Problem areas which are discussed include: hydrostatic pressure, surface tension, orifice errors, thermal effects on surface-mounted transducers, electrical fields, two-phase phenomena and air content.
Deconing controllers are developed for a spinning spacecraft, where the control mechanism is that of axial or radial moving masses that are used to produce intentional, transient principal axis misalignments. A single mass axial controller is used to motivate the concept, and then axial and radial dual mass controllers are described. The two mass problem is of particular interest since spacecraft imbalances can be simultaneously removed with the same control logic. Each controller is tested via simulation for its ability to eliminate existing coning motion for a range of spin rates. Both controllers are developed via a linear-quadratic-regulator synthesis procedure, which is motivated by their multi-input/multi-output nature. The dynamic coupling in the radial two mass control problem introduces some particularly interesting design complications.
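The LQR synthesis step can be illustrated with a small sketch. The dynamics below are a generic double integrator standing in for the coning dynamics (the actual spacecraft model, mass placements, and weighting matrices are not given in the text); the point is only the Riccati-based gain computation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative stand-in dynamics (a double integrator), NOT the paper's
# spinning-spacecraft model: x = [cone angle, cone rate], u = mass command.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state penalty (assumed)
R = np.array([[1.0]])  # control-effort penalty (assumed)

# LQR synthesis: solve the continuous-time algebraic Riccati equation,
# then form the optimal state-feedback gain K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - B K must have all eigenvalues in the
# left half-plane, i.e. the coning motion decays.
eigs = np.linalg.eigvals(A - B @ K)
print(K, eigs.real)
```

For this textbook case the gain works out to K = [1, sqrt(3)], giving a well-damped closed loop; the same procedure extends directly to the multi-input/multi-output radial two-mass problem.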
Many complex physical processes are modeled by coupled systems of partial differential equations (PDEs). Often, the numerical approximation of these PDEs requires the solution of large sparse nonsymmetric systems of equations. In this paper we compare the parallel performance of a number of preconditioned Krylov subspace methods on a large-scale MIMD machine. These methods are among the most robust and efficient iterative algorithms for the solution of large sparse linear systems. They are easy to implement on various architectures and work well on a wide variety of important problems. In this comparison we focus on the parallel issues associated with both local preconditioners (those that use only information from a processor's subdomain) and global preconditioners (those that combine information from the entire domain). The various preconditioners are applied to a variety of PDE problems within the GMRES, CCGS, BiCGSTAB, and QMRCGS methods. Conclusions are drawn on the effectiveness of the different schemes based on results obtained from a 1024-processor nCUBE 2 hypercube.
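As a minimal serial illustration of the preconditioned Krylov approach (not the parallel implementation studied in the paper), the sketch below applies GMRES with a Jacobi (diagonal) preconditioner, the simplest "local" preconditioner in the sense that applying it needs no interprocessor communication; the nonsymmetric test matrix is an assumed 1-D advection-diffusion stencil:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assumed test problem: a sparse nonsymmetric tridiagonal system such as
# arises from a 1-D advection-diffusion discretization.
n = 100
main = 2.5 * np.ones(n)
lower = -1.3 * np.ones(n - 1)   # asymmetry from the advection term
upper = -0.7 * np.ones(n - 1)
A = sp.diags([lower, main, upper], [-1, 0, 1], format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner: purely local, since applying it uses
# only the diagonal entries each processor already owns.
M = spla.LinearOperator((n, n), matvec=lambda v: v / main)

x, info = spla.gmres(A, b, M=M)   # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```

Global preconditioners (e.g. incomplete factorizations spanning the whole domain) follow the same interface but couple unknowns across subdomain boundaries, which is exactly where the parallel trade-offs discussed above arise.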
A two-stage self-organizing neural network architecture has been applied to object recognition in Synthetic Aperture Radar imagery. The first stage performs feature extraction and implements a two-layer Neocognitron. The resulting feature vectors are presented to the second stage, an ART 2-A classifier network, which clusters the features into multiple target categories. Training is performed off-line in two steps. First, the Neocognitron self-organizes in response to repeated presentations of an object to recognize. During this training process, discovered features and the mechanisms for their extraction are captured in the excitatory weight patterns. In the second step, Neocognitron learning is inhibited and the ART 2-A classifier forms categories in response to the feature vectors generated by additional presentations of the object to recognize. Finally, all training is inhibited and the system tested against a variety of objects and background clutter. In this paper we report the results of our initial experiments. The architecture recognizes a simulated tank vehicle at arbitrary azimuthal orientations at a single depression angle while rejecting clutter and other object returns. The neural architecture has achieved excellent classification performance using 20 clusters.
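The category-forming behavior of the second stage can be sketched with a much-simplified ART 2-A-style clusterer (illustrative only: the vigilance value, learning rate, and feature vectors below are assumptions, and the Neocognitron front end is omitted entirely):

```python
import numpy as np

def art2a_cluster(inputs, vigilance=0.9, lr=0.1):
    """Simplified ART 2-A-style clustering sketch, not the paper's
    implementation. Inputs are L2-normalized; a new category is committed
    whenever no prototype matches the input with similarity >= vigilance."""
    prototypes = []
    labels = []
    for x in inputs:
        x = x / np.linalg.norm(x)
        if prototypes:
            sims = [p @ x for p in prototypes]
            j = int(np.argmax(sims))
        if not prototypes or sims[j] < vigilance:
            prototypes.append(x.copy())              # commit a new category
            labels.append(len(prototypes) - 1)
        else:
            p = (1 - lr) * prototypes[j] + lr * x    # fast-learn update
            prototypes[j] = p / np.linalg.norm(p)
            labels.append(j)
    return prototypes, labels

# Two well-separated feature directions should form two stable categories.
rng = np.random.default_rng(0)
a = rng.normal([5, 0, 0], 0.1, size=(10, 3))
b = rng.normal([0, 5, 0], 0.1, size=(10, 3))
protos, labels = art2a_cluster(np.vstack([a, b]))
print(len(protos))
```

The vigilance parameter plays the role described above: raising it splits the feature space into more, tighter clusters, which is how the reported 20-cluster solution would be tuned.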
This paper presents results of a set of numerical experiments performed bo benchmark the Cell-Centered Implicit Continuous-fluid Eulerian (CCICE), and to determine their limitations as flow solvers for water entry and water exit simulations.
This paper will include a brief overview of the components of the QUICKSILVER suite and its current modeling capabilities. As time permits, results from sample applications will be shown, including time animations of simulation results.
Proceedings of the 35th International Power Sources Symposium
Clark, N.H.
Technologies that use carbon and mixed metal oxides as the electrode material have been pursued for the purpose of producing high-reliability double-layer capacitors (DLCs). The author demonstrates their environmental stability in temperature, shock, vibration, and linear acceleration. She reviews the available test data for both types of DLCs under these stress conditions. This study suggests that mixed-metal-oxide and carbon-based double-layer capacitors can survive severe environments if packaged properly, and that elevated temperature degrades the performance of double-layer capacitors.
We describe a simple engineering model applicable to stand-off “Whipple bumper” shields, which are used to protect space-based assets from impacts by orbital debris particles. The model provides a framework for analyzing: 1) the parameter limits governing the penetration and breakup or decomposition of the hypervelocity debris particle; 2) the behavior of the induced debris cloud, including its velocity and divergence; and 3) the design and optimization of the stand-off shield for a specific threat and level of protection required. The model is normalized to actual stand-off debris shield experiments and multi-dimensional numerical simulations at impact velocities of ~10 km/s. The subsequent analysis of a current space station shield design suggests that: 1) for acceptable levels of protection, stand-off shields can be significantly thinner than previously thought; and 2) with the proper balance between shield thickness and stand-off distance, the total shield mass can be reduced substantially.
A series of experiments has been performed on the Sandia Hypervelocity Launcher to determine the performance limits of conventional Whipple shields against representative 0.8 g aluminum orbital debris plate-like fragments with velocities of 7 and 10 km/s. Supporting diagnostics include flash X-rays, high speed photography and transient digitizers for timing correlation. Two Whipple shield designs were tested with either a 0.030 cm or a 0.127 cm thick front sheet and a 0.407 cm thick backsheet separated by 30.5 cm. These two designs bracket the ballistic penetration limit curve for protection against these debris simulants for 7 km/s impacts.
Final Program and Paper Summaries for the 1992 Digital Signal Processing Workshop, DSPWS 1992
Jakowatz Jr., C.V.; Thompson, P.A.
In this paper we take a new look at the tomographic formulation of spotlight mode synthetic aperture radar (SAR), so as to include the case of targets having three-dimensional structure. This bridges the work of David C. Munson and his colleagues, who first described SAR in terms of two-dimensional tomography, with Jack Walker's original derivation of spotlight mode SAR imaging via Doppler analysis. The main result is to demonstrate that the demodulated radar return data from a spotlight mode collection represent a certain set of samples of the three-dimensional Fourier transform of the target reflectivity function, and to do so using tomographic principles instead of traditional Doppler arguments. We then show that the tomographic approach is useful in interpreting the two-dimensional SAR image of a three-dimensional scene. In particular, the well-known SAR imaging phenomenon commonly referred to as layover is easily explained in terms of tomographic projection.
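The tomographic argument rests on the projection-slice theorem, which can be checked numerically in the discrete, axis-aligned 2-D case (a stand-in for the paper's 3-D generalization; the random "scene" below is purely illustrative):

```python
import numpy as np

# Projection-slice check: the 1-D FFT of a projection of a 2-D reflectivity
# equals the corresponding central slice of its 2-D FFT.
rng = np.random.default_rng(1)
scene = rng.normal(size=(64, 64))          # stand-in 2-D target reflectivity

projection = scene.sum(axis=0)             # project along the y axis
slice_from_2d = np.fft.fft2(scene)[0, :]   # ky = 0 slice of the 2-D FFT

assert np.allclose(np.fft.fft(projection), slice_from_2d)
print("projection-slice identity holds")
```

This is the identity that lets demodulated returns, which are effectively projections, be interpreted as Fourier-domain samples; layover then falls out as a property of the projection direction.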
The unit cell shape of a thick frequency selective surface, or dichroic plate, depends on its frequency requirements. One aperture shape may be chosen to give wider bandwidths, and another chosen for sharper frequency roll-off. This is analogous to circuits, where the need for differing frequency responses determines the circuit topology. Acting as spatial frequency filters, dichroics are a critical component supporting the Deep Space Network (DSN) for spacecraft command and control uplinks as well as spacecraft downlinks. Currently these dichroic plates separate S-band at 2.0--2.32 GHz from X-band at 8.4--8.45 GHz. But new spacecraft communication requirements also call for an uplink frequency at 7.165 GHz. In addition, future spacecraft such as CRAF/Cassini will require dichroics effectively separating K{sub a}-band frequencies in the 31--35 GHz range. The requirements for these surfaces are low transmission loss of < 0.1 dB at high power levels. It is also important to maintain a minimal relative phase shift between polarizations for circular polarization transmission. Recent work has demonstrated successful design techniques for straight, rectangular apertures at an incident angle of 30{degrees}. The plates are air-filled due to power dissipation and noise temperature considerations; uplink powers approach 100 kW, making dielectrics undesirable. Here we address some of the cases in which the straight rectangular shape may have limited usefulness. For example, grating lobes become a consideration when the bandwidth required to include the new frequency of 7.165 GHz conflicts with the desired incident angle of 30{degrees}. For this case, the cross shape's increased packing density and bandwidth could make it desirable. When a sharp frequency response is required to separate two closely spaced K{sub a}-band frequencies, the stepped rectangular aperture might be advantageous.
Several closed-form trajectory solutions have been developed for low-thrust interplanetary flight and used with patched conics for analysis of combined propulsion systems. The solutions provide insight into alternative types of Mars missions, and show considerable mass savings for fast crewed missions with outbound trip times on the order of 90-100 days.
Nuclear Thermal Propulsion (NTP) has been identified as a critical technology in support of the NASA Space Exploration Initiative (SEI). In order to safely develop a reliable, reusable, long-lived flight engine, facilities are required that will support ground tests to qualify the nuclear rocket engine design. Initial nuclear fuel element testing will need to be performed in a facility that supports a realistic thermal and neutronic environment in which the fuel elements will operate at a fraction of the power of a flight-weight reactor/engine. Ground testing of nuclear rocket engines is not new. New restrictions mandated by the National Environmental Policy Act of 1969, however, now require major changes in the manner in which reactor engines are tested. These restrictions preclude the types of nuclear rocket engine tests that were performed in the past from being done today. A major attribute of a safely operating ground test facility is its ability to prevent fission products from being released in appreciable amounts to the environment. Details of the intricacies and complications involved with the design of a fuel element ground test facility are presented in this report, with a strong emphasis on safety and economy.
A rapid deployment access delay system (RAPADS) has been designed to provide high-security protection of valued assets. The system, or vault, is transportable, modular, and utilizes a pin connection design. Individual panels are attached together to construct the vault. The pin connection allows for quick assembly and disassembly, and makes it possible to construct vaults of various sizes to meet a specific application. Because of the unique pin connection and overlapping joint arrangement, a specific sequence of assembly steps is required to assemble the vault. As a result, once the door is closed and locked, all pin connections are concealed and inaccessible. This provides a high level of protection in that no single panel or connection is vulnerable. This paper presents the RAPADS concept, design, fabrication, and construction.
Proceedings - International Carnahan Conference on Security Technology
Arlowe, H.D.
There is an emerging interest in using thermal IR to automatically detect human intruders over wide areas. Such a capability could provide early warning beyond the perimeter at fixed sites, and could be used for portable security around mobile military assets. Sandia National Laboratories has been working on automatic detection systems based on the thermal contrast and motion of human intruders for several years, and has found that detection is sometimes difficult, depending on solar and other environmental conditions. Solar heating can dominate human thermal radiation by a factor of 100, and dynamic background temperature changes can limit detector sensitivity. This paper explains those conditions and energy transfer mechanisms that lead to difficult thermal detection. We will not cover those adverse conditions that are more widely understood and previously reported on, such as fog, smoke, rain and falling snow. This work was sponsored by the Defense Nuclear Agency.
In the wavenumber-domain method of SAR imaging, frequency-domain radar data are used to reconstruct a portion of the 2-D Fourier transform of the scene, which is then inverted to create the image. The method suffers no inherent limits on aperture length or scene size. This paper extends the concept to the case where the synthetic aperture is not a straight line and the samples are unevenly spaced. An accumulation formula for wavenumber-domain reconstruction is derived and shown to be equivalent to earlier algorithms in the uniform-aperture case. It is then shown how data with three-dimensional irregularity in the aperture can be processed using height correction and mapping into the slant plane.
CIRCE2 is a cone-optics computer code for determining the flux distribution and total incident power upon a receiver, given concentrator and receiver geometries, sunshape (angular distribution of incident rays from the sun-disk), and concentrator imperfections such as surface roughness and random deviation in slope. Statistical methods are used to evaluate the directional distribution of reflected rays from any given point on the concentrator, whence the contribution to any point on the target can be obtained. DEKGEN2 is an interactive preprocessor which facilitates specification of geometry, sun models, and error distributions. The CIRCE2/DEKGEN2 package equips solar energy engineers with a quick, user-friendly design and analysis tool for study/optimization of dish-type distributed receiver systems. The package exhibits convenient features for analysis of 'conventional' concentrators, and has the generality required to investigate complex and unconventional designs. Among the more advanced features are the ability to model dish or faceted concentrators and stretched-membrane reflectors, and to analyze 3-D flux distributions on internal or external receivers with 3-D geometries. Facets of rectangular, triangular, or circular projected shape, with profiles of parabolic, spherical, flat, or custom curvature can be handled. Provisions for shading, blocking, and aperture specification are also included. This paper outlines the features and capabilities of the new package, as well as the theory and numerical models employed in CIRCE2.
Proceedings of SPIE - The International Society for Optical Engineering
Stansfield, Sharon A.
This paper presents two parallel implementations of a knowledge-based robotic grasp generator. The grasp generator, originally developed as a rule-based system, embodies a knowledge of the associations between the features of an object and the set of valid hand shapes/arm configurations which may be used to grasp it. Objects are assumed to be unknown, with no a priori models available. The first part of this paper presents a 'parallelization' of this rule base using the connectionist paradigm. Rules are mapped into a set of nodes and connections which represent knowledge about object features, grasps, and the required conditions for a given grasp to be valid for a given set of features. Having shown that the object and knowledge representations lend themselves to this parallel recasting, the second part of the paper presents a back propagation neural net implementation of the system that allows the robot to learn the associations between object features and appropriate grasps.
The Capacitors Division at Sandia National Laboratories has for many years been actively involved in developing high-reliability, low-inductance, energy-storage, pulse-discharge capacitors. Development has concentrated on two dielectric systems: mica-paper and Mylar (both dry wrap-and-fill and FC40 liquid impregnation). Design improvements are continually being sought. For pulse-discharge usage, lowering the capacitor inductance can improve circuit performance. This paper describes recent efforts to improve the efficiency of low-inductance, mica-paper capacitors by reducing the inductance through optimizing the component geometry. The study focused on a 0.2 {mu}F, 4000 V mica-paper extended-foil capacitor design. The experimental matrix was a two-level, three-factor design with center points, replicated four times to give reasonable statistics. The factors were capacitor width, capacitor length, and electrode width, with response functions of capacitor inductance and circuit performance. The capacitor inductance was measured by the resonance technique, and the circuit performance was evaluated by peak (discharge) current and rise time. Results show that the inductance can be minimized by choice of geometry, with accompanying improvements in circuit performance.
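The experimental matrix described above can be sketched in coded units (the actual factor ranges and the number of center points per replicate are not given in the text; one center point per replicate is assumed here):

```python
from itertools import product

# Factor names from the abstract; coded levels -1/+1 are hypothetical,
# since the physical ranges are not stated.
factors = ["capacitor_width", "capacitor_length", "electrode_width"]

# Two-level, three-factor full factorial (2^3 = 8 corner runs) plus an
# assumed single center point, replicated four times.
corners = list(product([-1, +1], repeat=3))
center = [(0, 0, 0)]
one_replicate = corners + center
design = one_replicate * 4

print(len(design))   # 36 runs total
```

Replication gives the pure-error estimate, and the center points test for curvature in the inductance response, which is the standard motivation for this design class.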
This paper describes the plan for a test to failure of a steel containment vessel model. The test specimen proposed for this test is a scale model representing certain features of an improved BWR MARK-2 containment vessel. The objective of the test is to investigate the ultimate structural behavior of the model by incrementally increasing the internal pressure, at ambient temperature, until failure occurs. Pre- and posttest analyses will be conducted to predict and evaluate the results of this test. The main objective of these analyses is to validate, by comparisons with the experimental data, the analytical methods used to evaluate the structural behavior of an actual containment vessel under severe accident conditions. This experiment is part of a cooperative program between the Nuclear Power Engineering Corporation (NUPEC), the United States Nuclear Regulatory Commission (NRC), and Sandia National Laboratories (SNL).
Logging technologies developed for hydrocarbon resource evaluation have not migrated into geothermal applications, even though the data so obtained would strengthen reservoir characterization efforts. Two causative issues have impeded progress: (i) there is a general lack of vetted, high-temperature instrumentation, and (ii) the interpretation of log data generated in a geothermal formation is in its infancy. Memory-logging tools provide a path around the first obstacle by providing quality data at a low cost. These tools feature on-board computers that process and store data, and newer systems may be programmed to make "decisions." Since memory tools are completely self-contained, they are readily deployed using the slick line found on most drilling locations. They have proven to be rugged, and only a minimal training program is required for operator personnel. Present tools measure properties such as temperature and pressure, and the development of noise, deviation, and fluid conductivity logs based on existing hardware is relatively easy. A more complex geochemical tool aimed at a quantitative analysis of potassium, uranium and thorium will be available in about one year, and it is expandable into all nuclear measurements common in the hydrocarbon industry. A second tool designed to sample fluids at conditions exceeding 400{degrees}C is in the proposal stage. Partnerships are being formed between the geothermal industry, scientific drilling programs, and the national laboratories to define and develop inversion algorithms relating raw tool data to more pertinent information.
The overpressurization of a 1:6 scale reinforced concrete containment building demonstrated that liner tearing is a plausible failure mode in such structures under severe accident conditions. A combined experimental and analytical program was developed to determine the important parameters that affect liner tearing and to develop reasonably simple analytical methods for predicting when tearing will occur. Three sets of test specimens were designed to allow individual control over, and investigation of, the mechanisms believed to be important in causing failure of the liner plate. The series of tests investigated the effect on liner tearing produced by the anchorage system, the loading conditions, and the transition in thickness of the liner. Before testing, the specimens were analyzed using two- and three-dimensional finite element models. Based on the analysis, the failure mode and corresponding load conditions were predicted for each specimen. Test data and posttest examination of the specimens show mixed agreement with the analytical predictions with regard to failure mode and specimen response for most tests. Many similarities were also observed between the response of the liner in the 1:6 scale reinforced concrete containment model and the response of the test specimens. This work illustrates that the failure mechanism of a reinforced concrete containment building can be greatly influenced by details of liner and anchorage system design. Furthermore, it significantly increases the understanding of containment building response under severe accident conditions.
Acoustic telemetry has been a dream of the drilling industry for the past 50 years. It offers the promise of data rates one hundred times greater than existing technology. Such a system would open the door to true logging-while-drilling technology and bring enormous profits to its developers. The basic idea is to produce an encoded sound wave at the bottom of the well, let it propagate up the steel drillpipe, and extract the data from the signal at the surface. Unfortunately, substantial difficulties arise. The first difficult problem is to produce the sound wave. Since the most promising transmission wavelengths are about 20 feet, normal transducer efficiencies are quite low. Compounding this problem is the structural complexity of the bottomhole assembly and drillstring. For example, the acoustic impedance of the drillstring changes every 30 feet and produces an unusual scattering pattern in the acoustic transmission. This scattering pattern causes distortion of the signal and is often confused with signal attenuation. These problems are not intractable. Recent work has demonstrated that broad frequency bands exist which are capable of transmitting data at rates up to 100 bits per second. Our work has also identified the mechanism responsible for the observed anomalies in the patterns of signal attenuation. Furthermore, in the past few years a body of experience has been developed in designing more efficient transducers for application to metal waveguides. The direction of future work is clear. New transducer designs which are more efficient and compatible with existing downhole power supplies need to be built and tested; existing field test data need to be analyzed for transmission bandwidth and attenuation; and new and less expensive methods of collecting data on transmission path quality need to be incorporated into this effort.
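The passband/stopband structure created by the periodic impedance changes can be sketched with a 1-D transfer-matrix (Bloch wave) model; all dimensions and impedance ratios below are illustrative guesses, not measured drillstring values:

```python
import numpy as np

# Bloch-wave sketch of sound propagation along a periodic drillstring:
# each roughly 30-ft unit cell is pipe body plus a higher-impedance tool
# joint. Frequencies with |trace(T)/2| <= 1 propagate (passbands); the
# rest are stopbands, which is the "unusual scattering" noted above.
c = 5130.0                    # m/s, extensional wave speed in steel
L_pipe, L_joint = 8.6, 0.5    # m per unit cell (assumed split)
Z_pipe, Z_joint = 1.0, 3.0    # relative acoustic impedances (assumed)

def segment(k, L, Z):
    """Transfer matrix of one uniform lossless segment."""
    return np.array([[np.cos(k * L),      Z * np.sin(k * L)],
                     [-np.sin(k * L) / Z, np.cos(k * L)]])

def in_passband(f):
    k = 2 * np.pi * f / c
    T = segment(k, L_joint, Z_joint) @ segment(k, L_pipe, Z_pipe)
    return abs(np.trace(T) / 2) <= 1.0

freqs = np.linspace(1.0, 2000.0, 4000)
mask = np.array([in_passband(f) for f in freqs])
n_edges = int(np.count_nonzero(mask[1:] != mask[:-1]))
print(n_edges)
```

With any impedance mismatch the band structure alternates between broad propagating bands and stopbands near the Bragg frequencies, consistent with the broad usable bands reported above.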
Sorenson, Ken B.; Salzbrenner, Richard; Nickell, Robert E.
An effort has been undertaken to develop a brittle fracture acceptance criterion for structural components of nuclear material transportation casks. The need for such a criterion was twofold. First, new-generation cask designs have proposed the use of ferritic steels and other materials to replace the austenitic stainless steel commonly used for structural components in transport casks. Unlike austenitic stainless steel, which fails in a high-energy-absorbing, ductile tearing mode, it is possible for these candidate materials to fail via brittle fracture when subjected to certain combinations of elevated loading rates and low temperatures. Second, there is no established brittle fracture criterion accepted by the regulatory community that covers a broad range of structural materials. Although the existing IAEA Safety Series {number sign}37 addressed brittle fracture, its guidance was dated and pertained only to ferritic steels. Consultant's Services Meetings held under the auspices of the IAEA have resulted in a recommended brittle fracture criterion. The criterion is based on linear elastic fracture mechanics, and is the result of a consensus of experts from six participating IAEA-member countries. It allows three approaches to determine the fracture toughness of the structural material. The three approaches present the opportunity to balance material testing requirements against the conservatism of the fracture toughness value which must be used to demonstrate resistance to brittle fracture. This work has resulted in a revised Appendix IX to Safety Series {number sign}37, which will be released as an IAEA Technical Document within the coming year.
We show experimentally and theoretically that the generation of the 13-TW Hermes III electron beam can be accurately monitored, and that the beam can be accurately directed onto a high-Z target to produce a wide variety of bremsstrahlung patterns. This control allows the study of radiation effects induced by gamma rays to be extended into new parameter regimes. Finally, we show that the beam can be stably transported in low-pressure gas cells.
This paper presents the groundwork for a completely automatic 3-D hexahedral mesh generation algorithm called plastering. It is an extension of the paving algorithm developed by Blacker, where paving is a completely automatic 2-D quadrilateral meshing technique.
The transport of a chemically reactive fluid through a permeable medium is governed by many classes of chemical interactions. Dissolution/precipitation (D/P) reactions are among the interactions of primary importance because of their significant influence on the mobility of aqueous ions. In general, D/P reactions lead to the propagation of coherent waves. This paper provides an overview of the types of wave phenomena observed in one-dimensional (1D) and two-dimensional (2D) porous media for systems in which mineral D/P is the dominant type of chemical reaction. It is demonstrated that minerals dissolve in sharp waves in 1D advection-dominated transport, and that these waves separate zones of constant chemical compositions in the aqueous and mineral phases. Analytical solutions based on coherence methods are presented for solving 1D advection-dominated transport problems with constant and variable boundary conditions. Numerical solutions of diffusion-dominated transport in porous media show that sharp D/P fronts occur in this system as well. A final example presents a simple dual-porosity system with advection in an idealized fracture and solute diffusion into an adjacent porous matrix. The example illustrates the delay of contaminant release from the 2D domain due to a combination of physical retardation and chemical retardation.
A closely coupled computational and experimental aerodynamics research program was conducted on a hypersonic vehicle configuration at Mach 8. Aerodynamic force and moment measurements and flow visualization results were obtained in the Sandia National Laboratories hypersonic wind tunnel for laminar boundary layer conditions. Parabolized and iterative Navier-Stokes simulations were used to predict flow fields and forces and moments on the hypersonic configuration. The basic vehicle configuration is a spherically blunted 10{degrees} cone with a slice parallel with the axis of the vehicle. On the slice portion of the vehicle, a flap can be attached so that deflection angles of 10{degrees}, 20{degrees}, and 30{degrees} can be obtained. Comparisons are made between experimental and computational results to evaluate the quality of each and to identify areas where improvements are needed. This extensive set of high-quality experimental force and moment measurements is recommended for use in the calibration and validation of computational aerodynamics codes. 22 refs.
Microstructural models of deformation of polycrystalline materials suggest that inelastic deformation leads to the formation of a corner or vertex at the current load point. This vertex can cause the response to non-proportional loading to be more compliant than predicted by the smooth yield-surface idealization. Combined compression-torsion experiments on Tennessee marble indicate that a vertex forms during inelastic flow. An important implication is that strain localization by bifurcation occurs earlier than predicted by bifurcation analysis using isotropic hardening.
Acoustic emissions and conventional strain measurements were used to follow the evolution of the damage surface and plastic potential in a limestone under triaxial compression. Confining pressures were chosen such that macroscopically, the limestone exhibited both brittle and ductile behavior. The parameters derived are useful for modeling the deformation of a pressure-dependent material and for computing when localization would occur. For modeling, simple approximations are adequate, but a more complete understanding of the evolution of the various parameters is necessary in order to calculate when localization can be expected. 11 refs., 6 figs.
Light emission microscopy is now used in most integrated circuit (IC) failure analysis laboratories. This tutorial is designed to benefit both novice and experienced failure analysts by providing an introduction to light emission microscopy as well as information on new techniques, such as the use of spectral signatures. The use of light emission for accurate identification and spatial localization of physical defects and failure mechanisms is presented. This includes the analysis of defects such as short circuits which do not themselves emit light. The importance of understanding the particular IC design and applying the correct electrical stimulus is stressed. A video tape is used to show light emission from pn junctions, MOS transistors, test structures, and CMOS ICs in static and dynamic electrical stimulus conditions. 27 refs.
The Thermionic System Evaluation Test (TSET) is a ground test of an unfueled Russian TOPAZ-II in-core thermionic space reactor powered by electric heaters. The facility that will be used for testing of the TOPAZ-II systems is located at the New Mexico Engineering Research Institute (NMERI) complex in Albuquerque, NM. The reassembly of the Russian test equipment is the responsibility of International Scientific Products (ISP), a San Jose, CA, company, and Inertek, a Russian corporation, with support provided by engineers and technicians from Phillips Laboratory (PL), Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), and the University of New Mexico (UNM). This test is the first to be performed under the New Mexico Strategic Alliance agreement. This alliance consists of PL, SNL, LANL, and UNM. The testing is being funded by the Strategic Defense Initiative Organization (SDIO), with PL responsible for project execution.
Radioactive material transport casks use either lead or depleted uranium (DU) as gamma-ray shielding material. Stainless steel is conventionally used for structural containment. If a DU alloy had sufficient properties to guarantee resistance to failure during both nominal use and accident conditions, it could serve the dual role of shielding and containment, and the use of other structural materials (i.e., stainless steel) could be reduced. (It is recognized that lead can play no structural role.) Significant reductions in cask weight and dimensions could then be achieved, perhaps allowing an increase in payload. The mechanical response of depleted uranium has previously not been included in calculations intended to show that DU-shielded transport casks will maintain their containment function during all conditions. This paper describes a two-part study of depleted uranium alloys. First, the mechanical behavior of DU alloys was determined in order to extend the limited set of mechanical properties reported in the literature. The mechanical properties measured include tensile behavior and impact energy. Fracture toughness testing was also performed to determine the sensitivity of DU alloys to brittle fracture. Fracture toughness is the inherent material property which quantifies the fracture resistance of a material. Tensile strength and ductility are significant in terms of other failure modes, however, as will be discussed. These mechanical properties were then input into finite element calculations of cask response to loading conditions to quantify the potential for claiming structural credit for DU. (The term "structural credit" describes whether a material has adequate properties to allow it to assume a positive role in withstanding structural loadings.)
Interfacial microchemical characterization is required in all aspects of surface processing as applied to transportation and utility technologies. Corrosion protection, fuel cells and batteries, wear surfaces, polymers and polymer-oxide interfaces, thin film multilayers, photoelectrochemical systems, and organized molecular assemblies are just a few examples of interfacial systems of interest to these industries. A number of materials and processing problems, both related to fundamental understanding and to monitoring manufacturing operations, have been identified where our microchemical characterization abilities need improving. Over twenty areas for research are identified where progress will contribute to improved understanding of materials and processes, improved problem-solving abilities, improved manufacturing consistency, and lower costs. Some of the highest priority areas for research include (1) developing techniques and methods with improved chemical specificity at interfaces, (2) developing fast, real-time surface and interface probes and (3) improving the cost and reliability of manufacturing monitors. Increased collaboration among University, Industry, and Government laboratories will be a prerequisite to making the required progress in a timely fashion.
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up-tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An Open Windows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
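The per-sample computation the floating-point modules perform can be sketched as a discrete state-space update. The matrices below are hypothetical two-state values chosen for illustration, not parameters of the processor described above:

```python
import numpy as np

# Hypothetical two-state controller; A, B, C, D are illustrative values.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def controller_step(x, u):
    """One sample period: emit the D/A output, then advance the state.
    Both lines are the multiply/accumulate-intensive work this kind of
    processor is built for."""
    y = C @ x + D @ u        # output sent to the D/A module
    x_next = A @ x + B @ u   # state update
    return y, x_next

x = np.zeros((2, 1))
u = np.ones((1, 1))          # constant A/D reading, for illustration
for _ in range(5):
    y, x = controller_step(x, u)
```

In the hardware described above, the A/D modules supply `u`, the floating-point modules evaluate these two matrix expressions each sample period, and the D/A module emits `y`.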
International Atomic Energy Agency (IAEA) inspectors must maintain continuity of knowledge on all safeguard samples, and in particular on those samples drawn from plutonium product and spent fuel input tanks at a nuclear reprocessing plant's blister sampling station. Integrity of safeguard samples must be guaranteed from the sampling point to the moment of sample analysis at the IAEA's Safeguards Analytical Laboratory (SAL Seibersdorf) or at an accepted local laboratory. These safeguard samples are drawn at a blister sampling station with inspector participation, and then transferred via a pneumatic post system to the facility's analytical laboratory. The transfer of the sample by the pneumatic post system, the arrival of the sample in the operator's analytical laboratory, and the storage of the sample awaiting analysis are very time consuming for the inspector, particularly if continuous human surveillance is required for all these activities. This process might be observed by ordinary surveillance methods, such as a video monitoring system, but again this would be cumbersome and time consuming for both the inspector and operator. This paper will describe a secure container designed to assure sample vial integrity from the point the sample is drawn to the treatment of the sample at the facility's analytical laboratory.
Understanding the mechanisms of growth during vapor-phase deposition is critical for the precise control of surface morphology required by advanced electronic device structures. Yet only relatively recently have the tools for observing this growth on an atomic-level scale become available (via scanning tunneling microscopy (STM), reflection high energy electron diffraction (RHEED), and low-energy electron microscopy (LEEM)). We present results from our own RHEED and STM measurements in which we use computer simulations to aid in determining the fundamental surface processes which contribute to the observed structures. In this study of low-energy ion bombardment and growth on Si(001), it is demonstrated how simulations enable us to determine the dominant atomistic process.
Reflective Particle Tags were developed for uniquely identifying individual strategic weapons that would be counted in order to verify arms control treaties. These tags were designed to be secure from copying and transfer even after being left under the control of a very determined adversary for a number of years. This paper discusses how this technology can be applied in other applications requiring confidence that a piece of equipment, such as a seal or a component of a secure container, has not been replaced with a similar item. The hardware and software needed to implement this technology are discussed, and guidelines for the design of systems that rely on these or similar randomly formed features for security applications are presented. Substitution of identical components is one of the easiest ways to defeat security seals, secure containers, verification instrumentation, and similar equipment. This technology, when properly applied, provides a method to counter this defeat scenario. Guidelines for implementing identification systems based on reflective particles or similar random features without compromising their intrinsic security are also discussed.
A non-contact, high-resolution laser ranging device has been incorporated into an instrument for accurately mapping the surface of WECS airfoils in the field. Preliminary scans of composite materials and bug debris show that the system has adequate resolution to accurately map bug debris and other surface contamination. This system, just recently delivered and now being debugged and optimized, will be used to characterize blade surface contamination on wind turbines. The technology used in this system appears to hold promise for application to many other measurement tasks, including a system for quickly and very accurately determining the profile of turbine blade molds and blades.
York II, A.R.; Freedman, J.M.; Kincy, M.A.; Joseph, B.J.
Sandia National Laboratories has completed the design and is now fabricating packages for shipment of tritium gas in conformance with 10 CFR 71. The package, referred to as the AL-SX, is unique in that its contents are a radioactive gas, and a large margin of safety has been demonstrated through overtesting. The AL-SX is small, 42 cm in diameter and 55 cm tall; it weighs 55 kg empty and a maximum of 60 kg with contents, and is designed for a 20-year service life. This paper describes the design of the AL-SX and the certification testing performed on AL-SX packages, and discusses containment of tritium and AL-SX manufacturing considerations.
Sandia National Laboratories is one of the nation's largest research and development (R and D) facilities and is responsible for national security programs in defense and energy with a primary emphasis on nuclear weapon R and D. However, Sandia also supports a wide variety of projects ranging from basic materials research to the design of specialized parachutes. As a multiprogram national laboratory, Sandia has much to offer both industrial and government customers in pursuing space nuclear technologies. A brief summary of Sandia's technical capabilities, test facilities, and example programs that relate to military and civilian objectives in space is presented.
Sandia National Laboratories is actively involved in testing coated particle nuclear fuels for the Space Nuclear Thermal Propulsion (SNTP) program managed by Phillips Laboratory. The testing program integrates the results of numerous in-pile and out-of-pile tests with modeling efforts to qualify fuel and fuel elements for the SNTP program. This paper briefly describes the capabilities of the Annular Core Research Reactor (in which the experiments are performed), the major in-pile tests, and the models used to determine the performance characteristics of the fuel and fuel elements. 6 refs.
The US Department of Energy's Slant Hole Completion Test Well, SHCT-1, was drilled in 1990 into gas-bearing, lenticular and blanket-shaped sandstones of the Mesaverde Formation, northwestern Colorado. The reservoirs are over-pressured, with sub-microdarcy, in situ, matrix-rock permeabilities. However, a set of sub-parallel natural fractures increases the whole-reservoir permeabilities, measured by well tests, to several tens of microdarcies. The slant hole azimuth was therefore oriented to cut across the dominant fracture strike, in order to access the natural-fracture permeability and increase drainage into the wellbore.
Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two dimensional time-dependent advection-diffusion problems.
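As a minimal illustration of the sharp fronts that motivate local refinement (this is a first-order upwind finite-difference sketch, not the BEPS collocation scheme itself; all grid values are our own), a 1D advection-dominated transport problem develops a front only a few cells wide:

```python
import numpy as np

# u_t + v*u_x = D*u_xx with v*dx/D >> 1 (advection-dominated),
# first-order upwind in space, explicit Euler in time.
n = 200
dx = 1.0 / n
v, D = 1.0, 1e-3
dt, steps = 0.002, 200          # advance to t = 0.4 (CFL = 0.4)

u = np.zeros(n + 1)
u[0] = 1.0                      # constant inflow concentration
for _ in range(steps):
    adv = -v * (u[1:-1] - u[:-2]) / dx
    dif = D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += dt * (adv + dif)

front = np.argmin(np.abs(u - 0.5)) * dx   # front sits near x = v*t = 0.4
```

On a uniform grid the whole domain must be refined to resolve the narrow front; adaptive local refinement of the BEPS type concentrates the extra unknowns only where the front sits.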
Three methods of evaluating accelerated battery test data are described. Criteria for each method are used to determine the minimum test matrix required for accurate predictions. Other test methods involving high current discharge and real time techniques are discussed.
Computational mechanics simulation capability via the finite element method is being integrated into the FASTCAST project to allow realistic analyses of investment casting problems. Commercial and in-house software is being coupled to new, solid model based mesh generation capabilities to provide improved access to fluid, thermal and structural simulations. These simulations are being used for the validation of complex gating designs and the study of fundamental problems in casting.
This document presents recent accomplishments in engineering and science at Sandia National Laboratories. Commercial-scale parabolic troughs at the National Solar Thermal Test Facility are used for such applications as heating water, producing steam for industrial processes, and driving absorption air conditioning systems. Breakthroughs in computer-aided design, superconductor technology, radar imaging, soldering technology, and software development are described. Defense programs are exhibited, and microchip engineering applications in test chips, flow sensors, miniature computers, integrated circuits, and microsensors are presented.
Diffraction peaks can occur as unidentifiable peaks in the energy spectrum of an x-ray spectrometric analysis. Recently, there has been increased interest in oriented polycrystalline films and epitaxial films on single crystal substrates for electronic applications. Since these materials diffract x-rays more efficiently than randomly oriented polycrystalline materials, diffraction peaks are being observed more frequently in x-ray fluorescent spectra. In addition, micro x-ray spectrometric analysis utilizes a small, intense, collimated x-ray beam that can yield well defined diffraction peaks. In some cases these diffraction peaks can occur at the same position as elemental peaks. These diffraction peaks, although a possible problem in qualitative and quantitative elemental analysis, can give very useful information about the crystallographic structure and orientation of the material being analyzed. The observed diffraction peaks are dependent on the geometry of the x-ray spectrometer, the degree of collimation and the distribution of wavelengths (energies) originating from the x-ray tube and striking the sample.
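The position of such a diffraction peak follows from the energy-dispersive form of Bragg's law, E = hc/(2d sin θ); the sketch below (the Si(111) example is our own, not from the text) shows how one can check whether an unidentified peak could be diffraction rather than fluorescence:

```python
import math

# Energy-dispersive Bragg's law: lattice planes with spacing d (in
# Angstroms) diffract x-rays of energy E = h*c / (2*d*sin(theta)).
HC_KEV_ANGSTROM = 12.39842          # h*c in keV.Angstrom

def diffraction_energy_keV(d_angstrom, two_theta_deg, order=1):
    """Energy of the order-n diffraction peak for a given plane spacing
    and spectrometer scattering angle 2-theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * HC_KEV_ANGSTROM / (2.0 * d_angstrom * math.sin(theta))

# Illustrative case: Si(111), d = 3.1356 Angstrom, 2-theta = 90 degrees
e1 = diffraction_energy_keV(3.1356, 90.0)
```

Because the energy depends on the spectrometer geometry through θ, the same planes produce peaks at different apparent energies in instruments with different scattering angles, which is consistent with the geometry dependence noted above.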
Geologic materials are often modeled with discrete spheres because the material is not continuous and discrete spherical models simplify the mathematics. Spherical element models have been created using assemblages of spheres with a specified particle size distribution or by assuming the particles are all the same size and making the assemblage a close-packed array of spheres. Both of these approaches yield a considerable amount of material dilatation upon movement. This has proven to be unsatisfactory for sedimentary rock formations that contain bedding planes where shear movement can occur with minimal dilatation of the interface. A new concept referred to as packing angle has been developed to allow the modeler to build arrays of spheres that are the same size but have the rows of spheres offset from each other. The row offset is a function of the packing angle and allows the modeler to control the dilatation as rows of spheres experience relative horizontal motion.
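A geometric sketch of the idea (the exact offset function is not given in the abstract, so the mapping from packing angle to offset below is an illustrative assumption): 90° gives a square array with no row offset, while 60° recovers close packing with alternate rows offset by one radius.

```python
import math

def sphere_centers(rows, cols, r, packing_angle_deg):
    """Centers for a 2-D array of equal spheres whose alternate rows are
    offset according to a packing angle (illustrative geometry):
    90 deg -> square packing (rows aligned, spacing 2r);
    60 deg -> close packing (odd rows offset by r, spacing r*sqrt(3))."""
    theta = math.radians(packing_angle_deg)
    dx = 2.0 * r * math.cos(theta)   # horizontal offset of odd rows
    dy = 2.0 * r * math.sin(theta)   # vertical spacing between rows
    centers = []
    for i in range(rows):
        x0 = dx if i % 2 else 0.0
        for j in range(cols):
            centers.append((x0 + 2.0 * r * j, dy * i))
    return centers

close_packed = sphere_centers(2, 2, 0.5, 60.0)
cubic = sphere_centers(2, 2, 0.5, 90.0)
```

Varying the angle between these limits lets the modeler tune how much a row must ride up (dilate) to slide past its neighbor, which is the control the packing-angle concept provides.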
The syntheses and physical properties of {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]X (X=Br and Cl) are summarized. The {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br salt is the highest {Tc} radical-cation based ambient pressure organic superconductor ({Tc}=11.6 K), and the {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Cl salt becomes a superconductor at even higher {Tc} under 0.3 kbar hydrostatic pressure ({Tc}=12.8 K). The similarities and differences between {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br and {kappa}-(ET){sub 2}Cu(NCS){sub 2} ({Tc}=10.4 K) are presented. The X-ray structures at 127 K reveal that the S...S contacts shorten between ET dimers in the former compound while the S...S contacts shorten within dimers in the latter. The difference in their ESR linewidth behavior is also explained in terms of the structural differences. A semiconducting compound, (ET)Cu[N(CN){sub 2}]{sub 2}, isolated during {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Cl synthesis is also reported. The ESR measurements of the {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Cl salt indicate that the phase transition near 40 K is similar to the spin density wave transition in (TMTSF){sub 2}SbF{sub 6}. A new class of organic superconductors, {kappa}-(ET){sub 2}Cu{sub 2}(CN){sub 3} and {kappa}-(ET){sub 2}Cu{sub 2}(CN){sub 3-{delta}}Br{sub {delta}}, is reported with {Tc}'s of 2.8 K (1.5 kbar) and 2.6 K (1 kbar), respectively.
Nuclear weapons system designers and safety analysts are contemplating broader use of probabilistic risk assessment (PRA) techniques. As an aid to their understanding, this document summarizes the development and use of PRA techniques in the nuclear power industry, with an emphasis on the use of PRA in decision making, illustrated through case studies.
This document contains implementation details for the Quality Information Management System (QIMS) Pilot Project, which has been released for VAX/VMS systems using the INGRES RDBMS. The INGRES Applications-By-Forms (ABF) software development tool was used to define the modules and screens which comprise the QIMS Pilot application. These specifications together with the QIMS information model and corresponding database definition constitute the QIMS technical specification and implementation description presented herein. The QIMS Pilot Project represents a completed software product which has been released for production use. Further extension projects are planned which will release new versions of QIMS. These versions will offer expanded and enhanced functionality to meet further customer requirements not accommodated by the QIMS Pilot Project.
A large buildup in interface traps has been observed in commercial and radiation-hardened MOS transistors at very long times after irradiation (> 10{sup 6} s). This latent buildup may have important implications for CMOS response in space. 13 refs.
Translations of two pioneering Russian papers on antenna theory are presented. The first paper provides a treatise on finite-length dipole antennas; the second paper addresses infinite-length, impedance-loaded transmitting antennas.
A new approach for solving two-dimensional clustering problems is presented. The method is based on an inhibitory template which is applied to each pair of dots in a data set. Direct clustering of the pair is inhibited (allowed) if another dot is present (absent), respectively, within the area of the template. The performance of the method is thus entirely determined by the shape of the template. Psychophysical experiments have been used to define the template shape for this work, so that the resulting method requires no pattern-dependent adjustment of any parameters. The novel concept of a psychophysically-defined template and the absence of adjustable parameters set this approach apart from previous work. The useful grouping performance of this approach is demonstrated with the successful grouping of a variety of dot patterns selected from the clustering literature.
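The pairwise-inhibition rule can be sketched directly. The template shape in the method above is psychophysically defined; purely for illustration, the sketch assumes a disk centered on each pair's midpoint with radius proportional to the pair separation (the `radius_factor` parameter is our own stand-in for the template geometry):

```python
import math

def cluster(dots, radius_factor=0.7):
    """Template-based dot clustering sketch.  A direct link between two
    dots is inhibited if any third dot lies inside the template region
    between them; clusters are the connected components of the
    surviving links (found with union-find)."""
    n = len(dots)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = dots[i], dots[j]
            mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0
            rad = radius_factor * math.hypot(x2 - x1, y2 - y1)
            inhibited = any(
                math.hypot(px - mx, py - my) < rad
                for k, (px, py) in enumerate(dots) if k != i and k != j
            )
            if not inhibited:
                parent[find(i)] = find(j)

    return [find(i) for i in range(n)]

# two well-separated pairs of dots -> two clusters
labels = cluster([(0, 0), (1, 0), (10, 0), (11, 0)])
```

With a psychophysically fixed template, as in the method above, no per-pattern parameter tuning is needed; the `radius_factor` here exists only because the true template shape is not reproduced in this abstract.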
Sandia National Laboratories (SNL) Environmental Restoration (ER) Program has recently implemented a highly structured CS{sup 2} required by DOE. It is a complex system which has evolved over a period of a year and a half. During the implementation of this system, problem areas were discovered in cost estimating, allocation of management costs, and integration of the CS{sup 2} system with the Sandia Financial Information System. In addition to problem areas, benefits of the system were found in the areas of schedule adjustment, projecting personnel requirements, budgeting, and responding to audits. Finally, a number of lessons were learned regarding how to successfully implement the system.
Ferroelectric PZT 53:47 thin films were prepared by two different solution deposition methodologies. Both routes utilized carboxylate and alkoxide precursors and acetic acid, which served as both a solvent and a chemical modifier. We have studied the effects of solution preparation conditions on film microstructure and ferroelectric properties, and have used NMR spectroscopy to characterize chemical differences between the two precursor solutions. Films prepared by a sequential precursor addition (SPA) process were characterized by slightly lossy hysteresis loops, with a P{sub r} of 18.7 {mu}C/cm{sup 2} and an E{sub c} of 55.2 kV/cm. Films prepared by an inverted mixing order (IMO) process were characterized by well saturated hysteresis loops, a P{sub r} of 26.2 {mu}C/cm{sup 2} and an E{sub c} of 43.3 kV/cm. While NMR investigations indicated that the chemical environments of both the proton and carbon species were similar for the two processes, differences in the amounts of by-products (esters, and therefore, water) formed were noted. These differences apparently impacted ceramic microstructure. Although both films were characterized by a columnar growth morphology, the SPA derived film displayed a residual pyrochlore layer at the film surface, which did not transform into the stable perovskite phase. The presence of this layer resulted in poor dielectric properties and lossy ferroelectric behavior.
We have developed a video detection algorithm for measuring the residue left on a printed circuit board after a soldering process. Oblique lighting improves the contrast between the residue and the board substrate, but also introduces an illumination gradient. The algorithm uses the Boundary Contour System/Feature Contour System to produce an idealized clean board image by discounting the illuminant, detecting trace boundaries, and filling the trace and substrate regions. The algorithm then combines the original input image and ideal image using mathematical models of the normal and inverse Weber Law to enhance the residue on the traces and substrate. The paper includes results for a clean board and one with residue.
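The residue-enhancement step can be illustrated with a Weber-law contrast between the input and the idealized clean-board image (this is a simplified sketch, not the paper's full BCS/FCS pipeline; the intensity values are hypothetical):

```python
import numpy as np

def weber_residue(observed, ideal, eps=1e-6):
    """Normal Weber-law contrast (I - I0) / (I0 + eps): residue shows up
    as a deviation from the ideal image that is insensitive to the
    overall illumination level."""
    return (observed - ideal) / (ideal + eps)

ideal = np.full((4, 4), 100.0)     # hypothetical clean trace intensity
observed = ideal.copy()
observed[1, 1] = 120.0             # a residue spot 20% brighter
contrast = weber_residue(observed, ideal)

# doubling the illumination leaves the contrast (nearly) unchanged
contrast_bright = weber_residue(2 * observed, 2 * ideal)
```

The ratio form is what discounts the oblique-lighting illumination gradient: scaling both images by the local illuminant leaves the contrast essentially unchanged, so only true residue stands out.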
CEPXS/ONELD is a discrete ordinates transport code package that can model the electron-photon cascade from 100 MeV to 1 keV. The CEPXS code generates fully-coupled multigroup-Legendre cross section data. This data is used by the general-purpose discrete ordinates code, ONELD, which is derived from the Los Alamos ONEDANT and ONETRAN codes. Version 1.0 of CEPXS/ONELD was released in 1989 and has been primarily used to analyze the effect of radiation environments on electronics. Version 2.0 is under development and will include user-friendly features such as the automatic selection of group structure, spatial mesh structure, and S{sub N} order.
Changing the focus of a corporate compensation and performance review system from process orientation to data base orientation results in a more integrated and flexible design. Data modeling of the business system provides both systems and human resource professionals insight into the underlying constants of the review process. Descriptions of the business and data modeling processes are followed by a detailed presentation of the data base model. Benefits derived from designing a system based on the model include elimination of hard-coding, better audit capabilities, a consistent approach to exception processing, and flexibility of integrating changes in compensation policy and philosophy.
This paper will address the purpose, scope, and approach of the Department of Energy Tiger Team Assessments. It will use the Tiger Team Assessment experience of Sandia National Laboratories at Albuquerque, New Mexico, as illustration.
One of the common waste streams generated throughout the nuclear weapon complex is "hardware" originating from the nuclear weapons program. The activities associated with this hardware at Sandia National Laboratories (SNL) include design and development, environmental testing, reliability and stockpile surveillance testing, and military liaison training. SNL-designed electronic assemblies include radars, arming/fusing/firing systems, power sources, and use-control and safety systems. Waste stream characterization using process knowledge is difficult due to the age of some components and lack of design information oriented towards hazardous constituent identification. Chemical analysis methods such as the Toxicity Characteristic Leaching Procedure (TCLP) are complicated by the inhomogeneous character of these components and the fact that many assemblies have aluminum or stainless steel cases, with the electronics encapsulated in a foam or epoxy matrix. In addition, some components may contain explosives, radioactive materials, toxic substances (PCBs, asbestos), and other regulated or personnel hazards which must be identified prior to handling and disposal. In spite of the above difficulties, we have succeeded in characterizing a limited number of weapon components using a combination of process knowledge and chemical analysis. For these components, we have shown that if the material is regulated as RCRA hazardous waste, it is because the waste exhibits one or more hazardous characteristics, primarily reactivity and/or toxicity (Pb, Cd).
The discrete Fourier transform and power spectral density are often used in analyzing data from analog-to-digital converters. These analyses normally apply a window to the data to alleviate the effects of leakage. This paper describes how windows modify the magnitude of a discrete Fourier transform and the level of a power spectral density computed by Welch's method. For white noise, the magnitude of the discrete Fourier transform at a fixed frequency has a Rayleigh probability distribution. For sine waves with an integer number of cycles and for quantization noise, the theoretical values of the amplitude of the discrete Fourier transform and power spectral density are calculated. We show how the signal-to-noise ratio in a single discrete Fourier transform or power spectral density frequency bin is related to the normal time-domain definition of the signal-to-noise ratio. The answer depends on the discrete Fourier transform length, the window type, and the function being averaged.
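These relationships can be illustrated with a short numerical sketch (plain NumPy, with all signal parameters invented for the example, not taken from the paper): a sine wave with an integer number of cycles is buried in white noise, a Hann window is applied, and the DFT magnitude and the window's coherent gain are examined.

```python
import numpy as np

# Illustrative sketch: effect of a Hann window on the DFT of a sine
# wave in white noise. All parameters below are assumptions for the
# example, not values from the paper.
rng = np.random.default_rng(0)
N = 1024                    # DFT length
k = 100                     # integer number of cycles -> no leakage
t = np.arange(N)
signal = np.sin(2 * np.pi * k * t / N)
noise = 0.1 * rng.standard_normal(N)
x = signal + noise

window = np.hanning(N)
X = np.fft.rfft(x * window)
mag = np.abs(X)

# The sine concentrates in bin k; the window spreads it over a few
# adjacent bins and scales all magnitudes by its coherent gain.
peak_bin = int(np.argmax(mag))
coherent_gain = window.sum() / N   # approximately 0.5 for a Hann window

# Time-domain SNR; relating this to the SNR in a single DFT bin also
# involves N and the window's equivalent noise bandwidth.
snr_time = signal.var() / noise.var()
```

Windowed magnitudes must be rescaled by the coherent gain before comparison with unwindowed theory; converting the time-domain SNR to a single-bin SNR additionally involves the DFT length and the window's equivalent noise bandwidth, which is the dependence the paper describes.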
The UNIX LANs in 1500 are experiencing explosive growth. The individual departments are creating LANs to address their particular needs; however, at the same time, shared software tools between the departments are becoming more common. It is anticipated that users will occasionally need access to various department software and/or LAN services, and that support personnel may carry responsibilities which require familiarization with multiple environments. It would be beneficial to users and support personnel if the various department environments share some basic similarities, allowing somewhat transparent access. This will become more important when departments share specific systems, as 1510 and 1550 have proposed with an unclassified UNIX system. Therefore, standards/conventions on the department LANs and the central site systems have to be established to allow for these features. It should be noted that the goal of the UEC is to set standards/conventions which affect the users and provide some basic structure for software installation and maintenance; it is not the intent that all 1500 LANs be made identical at an operating system and/or hardware level. The specific areas of concern include: (1) definition of a non-OS file structure; (2) definition of an interface for remote mounted file systems; (3) definition of a user interface for public files; (4) definition of a basic user level environment; and (5) definition of documentation requirements for public files (shared software). Each of these areas is addressed in this paper.
This document contains implementation details for the Sandia Management Restructure Study Team (MRST) Prototype Information System, which resides on a Sun SPARC II workstation employing the INGRES RDBMS. The INGRES/Windows 4GL application editor was used to define the components of the two user applications which comprise the system. These specifications, together with the MRST information model and corresponding database definition, constitute the MRST Prototype Information System technical specification and implementation description presented herein. The MRST Prototype Information System represents a completed software product which has been presented to the Management Restructure Study Team to support the management restructuring processes at Sandia National Laboratories.
Finite element analyses of oil-filled caverns were performed to investigate the effects of cavern depth on surface subsidence and storage loss, primary performance criteria of SPR caverns. The finite element model used for this study was axisymmetric, approximating an infinite array of caverns spaced at 750 ft. The stratigraphy and cavern size were held constant while the cavern depth was varied between 1500 ft and 3000 ft in 500 ft increments. Thirty-year simulations, corresponding to the design life of a typical SPR cavern, were performed with boundary conditions modeling the oil pressure head applied to the cavern lining. A depth-dependent temperature gradient of 0.012{degrees}F/ft was also applied to the model. The calculations were performed using ABAQUS, a general-purpose finite element analysis code. The user-defined subroutine option in ABAQUS was used to enter an elastic secondary creep model which includes temperature dependence. The calculations demonstrated that surface subsidence and storage loss rates increase with increasing depth. At greater depths the difference between the lithostatic stress and the oil pressure is larger; thus, the effective stresses are greater, resulting in higher creep rates. Furthermore, at greater depths the cavern temperatures are higher, which also produces higher creep rates. Together, these factors result in faster closure of the cavern. At the end of the 30-year simulations, a 1500-ft-deep cavern exhibited 4 percent storage loss and 4 ft of subsidence while a 3000-ft-deep cavern exhibited 33 percent storage loss and 44 ft of subsidence. The calculations also demonstrated that surface subsidence is directly related to the amount of storage loss. Deeper caverns exhibit more subsidence because they exhibit more storage loss. However, for a given amount of storage loss, nearly the same magnitude of surface subsidence was exhibited, independent of cavern depth.
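The mechanism described (a deeper cavern sees a larger difference between lithostatic stress and oil pressure, plus higher temperature, so it creeps faster) can be sketched with a power-law creep model with Arrhenius temperature dependence. Every numerical value below is an illustrative assumption, not a calibrated salt-creep constant from the analysis.

```python
import math

# Hedged sketch of why closure rates grow with depth: power-law
# secondary creep with Arrhenius temperature dependence.
A = 1.0e-36        # creep prefactor (illustrative units)
n = 4.9            # stress exponent (typical order for rock salt)
Q = 50.0e3         # activation energy, J/mol (assumed)
R_GAS = 8.314      # J/(mol K)

def creep_rate(depth_ft):
    # Effective stress ~ lithostatic minus oil pressure head, both
    # roughly linear in depth (gradients assumed for illustration).
    litho_psi = 1.0 * depth_ft         # ~1 psi/ft lithostatic (assumed)
    oil_psi = 0.37 * depth_ft          # ~0.37 psi/ft oil head (assumed)
    dsigma = litho_psi - oil_psi
    # 0.012 F/ft gradient from the abstract, converted to kelvins:
    temp_K = 300.0 + 0.012 * depth_ft * 5.0 / 9.0
    return A * dsigma**n * math.exp(-Q / (R_GAS * temp_K))

shallow = creep_rate(1500.0)
deep = creep_rate(3000.0)
ratio = deep / shallow   # deeper cavern creeps faster
```

Both effects compound: doubling the stress difference raises the rate by roughly 2^n, and the warmer temperature at depth adds a further Arrhenius factor.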
This economic analysis compares human and robotic TRUPACT unloading at the Waste Isolation Pilot Plant. Robots speed up the unloading process, reduce human labor requirements, and reduce human exposure to radiation. The analysis shows that benefit/cost ratios are greater than one for most cases using government economic parameters. This suggests that robots are an attractive option for the TRUPACT application, from a government perspective. Rates of return on capital investment are below 15% for most cases using private economic parameters. Thus, robots are not an attractive option for this application, from a private enterprise perspective.
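A minimal benefit/cost sketch of this kind of comparison, with entirely hypothetical cash flows and an illustrative discount rate (none of these numbers come from the analysis):

```python
# Hypothetical benefit/cost sketch. The capital cost, annual savings,
# horizon, and discount rate are all invented for illustration.
def npv(cashflows, rate):
    # Discounted sum; cashflows[t] occurs at the end of year t.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

years = 10
capital = 2.0e6              # robot purchase and installation (assumed)
annual_labor_saving = 0.5e6  # avoided labor/exposure cost (assumed)
annual_om = 0.1e6            # operations and maintenance (assumed)

benefits = [0.0] + [annual_labor_saving] * years
costs = [capital] + [annual_om] * years

gov_rate = 0.07              # illustrative government discount rate
bc_ratio = npv(benefits, gov_rate) / npv(costs, gov_rate)
```

With a higher private-sector hurdle rate the same cash flows discount more heavily, which is the qualitative reason the abstract reaches opposite conclusions under government and private economic parameters.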
This paper summarizes the results of aging, condition monitoring, and accident testing of Class 1E cables used in nuclear power generating stations. Three sets of cables were aged for up to 9 months under simultaneous thermal ({approximately}100{degrees}C) and radiation ({approximately}0.10 kGy/hr) conditions. After the aging, the cables were exposed to a simulated accident consisting of high dose rate irradiation ({approximately}6 kGy/hr) followed by a high temperature steam (up to 400{degrees}C) exposure. A fourth set of cables, which were unaged, was also exposed to the accident conditions. The cables that were aged for 3 months and then accident tested were subsequently exposed to a high temperature steam fragility test (up to 400{degrees}C), while the cables that were aged for 6 months and then accident tested were subsequently exposed to a 1000-hour submergence test in a chemical solution. The results of these tests do not indicate any reason to believe that many popular nuclear power plant cable products cannot inherently be qualified for 60 years of operation for conditions simulated by this testing. Mechanical measurements (primarily elongation, modulus, and density) are more effective than electrical measurements for monitoring age-related degradation. In the high temperature steam test, ethylene propylene rubber (EPR) cable materials generally survived to higher temperatures than crosslinked polyolefin (XLPO) cable materials. In dielectric testing after the submergence testing, the XLPO materials performed better than the EPR materials.
This paper describes several different types of constraints that can be placed on multilayered feedforward neural networks which are used for automatic target recognition (ATR). We show how unconstrained networks are likely to give poor generalization on the ATR problem. We also show how the ATR problem requires a special type of classifier called a one-class classifier. The network constraints come in two forms: architectural constraints and learning constraints. Some of the constraints are used to improve generalization, while others are incorporated so that the network will be forced to perform one-class classification. 14 refs
Foams, like most highly structured fluids, exhibit rheological behavior that is both fascinating and complex. We have developed microrheological models for uniaxial extension and simple shearing flow of a 'dry', perfectly ordered, three-dimensional foam composed of thin films with uniform surface tension T and negligible liquid content. We neglect viscous flow in the thin films and examine large elastic-plastic deformations of the foam. The primitive undeformed foam structure is composed of regular space-filling tetrakaidecahedra, which have six square and eight hexagonal surfaces. This structure possesses the film-network topology that is necessary to satisfy equilibrium: three films meet at each edge, which corresponds to a Plateau border, and four edges meet at each vertex. However, to minimize surface energy, the films must meet at equal angles of 120{degrees} and the edges must join at equal tetrahedral angles of cos{sup {minus}1}({minus}1/3) {approx} 109.47{degrees}. No film in an equilibrium foam structure can be a planar polygon because no planar polygon has all angles equal to the tetrahedral angle. In the equilibrium foam structure known as Kelvin's minimal tetrakaidecahedron, the 'squares' are planar quadrilateral surfaces with curved edges and the 'hexagons' are non-planar saddle surfaces with zero mean curvature. As the foam structure evolves with the macroscopic flow, each film maintains zero mean curvature because the pressure is the same in every bubble. In general, the shape of each thin film, defined by z = h(x,y), satisfies 1/R{sub 1} + 1/R{sub 2} = {del}{center dot}({del}h/(1 + {vert bar}{del}h{vert bar}{sup 2}){sup 1/2}) = 0, where R{sub 1}{sup {minus}1} and R{sub 2}{sup {minus}1} are the principal curvatures. The appropriate boundary conditions correspond to three films meeting at equal angles. For the homogeneous deformations under consideration, the center of each film moves affinely with the flow. 5 refs.
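The zero-mean-curvature condition can be illustrated numerically. In the small-slope limit {vert bar}{del}h{vert bar} << 1 it reduces to Laplace's equation for the film height, which a simple Jacobi relaxation solves; the boundary values below are an arbitrary saddle-like choice for illustration, not the Kelvin-cell geometry.

```python
import numpy as np

# Sketch: small-slope zero mean curvature -> Laplace's equation
# for h(x, y), solved by Jacobi relaxation on a unit square.
# Boundary values are illustrative, not the actual film boundaries.
n = 21
h = np.zeros((n, n))
x = np.linspace(0.0, 1.0, n)
h[0, :] = x * (1 - x)     # top edge bowed up
h[-1, :] = -x * (1 - x)   # bottom edge bowed down (saddle-like)
h[:, 0] = 0.0
h[:, -1] = 0.0

for _ in range(2000):     # Jacobi sweeps (RHS built before assignment)
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                            + h[1:-1, :-2] + h[1:-1, 2:])

# Discrete maximum principle: interior heights stay between the
# boundary extremes, as a zero-mean-curvature surface must.
```

A full treatment would keep the nonlinear divergence form and impose the 120{degrees} film-meeting condition on the boundary, but the relaxation structure is the same.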
Renewable energy technologies convert naturally occurring phenomena into useful energy forms. These technologies use resources that generally are not depleted, such as the direct energy (heat and light) from the sun and the indirect results of its impact on the earth (wind, falling water, heating effects, plant growth), gravitational forces (the tides), and the heat of the Earth's core (geothermal), as the sources from which they produce useful energy. These very large stores of natural energy represent a resource potential that is incredibly massive -- dwarfing that of equivalent fossil energy resources. The magnitude of these resources is, therefore, not a key constraint on energy production. However, they are generally diffuse and not fully accessible, some are intermittent, and all have distinct regional and local variability. It is these aspects of their character that give rise to difficult, but generally solvable, technical, institutional, and economic challenges inherent in development and use of renewable energy resources. This report discusses the technologies and their associated energy source.
Theoretical models have been formulated describing the dynamic behavior of the swelling and contracting of polyelectrolyte gels. This paper presents a method-of-weighted-residuals approach to solving the governing system of equations by finite element analysis. The modulation of the imbibition of solvent by a spherical gel is studied.
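As a sketch of the method-of-weighted-residuals machinery (not the paper's full gel model, which couples additional physics), the fragment below applies Galerkin finite elements and backward Euler time stepping to a model 1-D diffusion equation for solvent leaving a gel:

```python
import numpy as np

# Galerkin (weighted-residuals) sketch on a model problem:
# u_t = D u_xx with linear finite elements and backward Euler.
# Geometry, coefficients, and boundary conditions are illustrative.
D = 1.0
nel = 20
nn = nel + 1
L = 1.0
hx = L / nel
M = np.zeros((nn, nn))   # consistent mass matrix
K = np.zeros((nn, nn))   # stiffness matrix
me = (hx / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
ke = (D / hx) * np.array([[1.0, -1.0], [-1.0, 1.0]])
for e in range(nel):     # assemble element contributions
    sl = slice(e, e + 2)
    M[sl, sl] += me
    K[sl, sl] += ke

dt = 0.01
u = np.ones(nn)          # initially uniform solvent content
A = M + dt * K           # backward Euler system matrix
for i in (0, nn - 1):    # Dirichlet u = 0: solvent drained at ends
    A[i, :] = 0.0
    A[i, i] = 1.0

for _ in range(100):
    b = M @ u
    b[0] = 0.0
    b[-1] = 0.0
    u = np.linalg.solve(A, b)
# u decays toward zero as solvent leaves the model gel.
```

The same assembly pattern carries over to the spherical-gel case after the radial coordinate transformation, with the weighting functions taken equal to the trial functions as Galerkin's method prescribes.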
There is considerable interest in the use of chemically vapor deposited (CVD) polycrystalline diamond films in advanced materials technology. However, most of the potential applications of CVD diamond films require well-controlled properties which depend on the film structure, and in turn, on the conditions under which the films are synthesized. The structure of the vapor-deposited diamond films is frequently characterized by Raman spectroscopy. Despite extensive research, much work still needs to be completed to understand the various features of the Raman spectra and to understand how the processing variables affect the spectral features. This paper examines the Raman spectra of diamond films prepared by a hot-filament-assisted CVD process as a function of substrate processing and deposition parameters.
Many applications of national importance require the design, analysis, and simulation of complex electromagnetic phenomena. These applications range from the simulation of synthetic aperture radar to the design and analysis of low-observable platforms, antenna design, and automatic target recognition. In general, the modeling of complex electromagnetic phenomena requires significant amounts of computer time and capacity on conventional vector supercomputers but takes far less on massively parallel computers. Sandia National Laboratories is currently developing massively parallel methods and algorithms for the characterization of complex electromagnetic phenomena. The goal of ongoing research at Sandia is to understand the characteristics, limitations, and trade-offs associated with complex electromagnetic systems, including: modeling the seeker response to complex targets in clutter, calculating the radiation and scattering from conformal communication and radar system antennas, and the analysis and design of high-speed circuitry. By understanding the theoretical underpinnings of complex electromagnetic systems it is possible to achieve realistic models of system performance. The first objective is the development of computationally practical, high-fidelity system models targeted for massively parallel computers. Research to achieve this objective is conducted in such areas as mathematical algorithms, problem decomposition, inter-processor communication schemes, and load balancing. The work in mathematical algorithms includes both the development of new methods and the parallel implementation of existing techniques. The second objective is the application of these high-fidelity models to facilitate a better understanding of systems-level performance for many C{sup 3}I platforms. This presentation describes applications of much current interest and novel solution techniques for these applications utilizing massively parallel processing techniques.
A neighboring-extremal control problem is formulated for a hypersonic glider to execute a maximum-terminal-velocity descent to a stationary target. The resulting two-part feedback control scheme first solves a nonlinear algebraic problem to generate a nominal trajectory to the target altitude. Second, a neighboring-optimal path computation about the nominal provides the lift and side-force perturbations necessary to achieve the target downrange and crossrange. On-line feedback simulations of the proposed scheme and a form of proportional navigation are compared with an off-line parameter optimization method. The neighboring-optimal terminal velocity compares very well with the parameter optimization solution and is far superior to proportional navigation. 8 refs.
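The proportional-navigation comparison case can be sketched with a 2-D point-mass simulation steering toward a stationary target with turn rate proportional to the line-of-sight rate. Speed is held constant and every parameter is invented for the example; the paper's glider model and optimization are far richer.

```python
import math

# Proportional navigation sketch: lateral acceleration a = N*V*lambda_dot,
# i.e. turn rate = N * lambda_dot for constant speed. All numbers are
# illustrative assumptions, not the paper's vehicle or scenario.
N_GAIN = 4.0
dt = 0.01
x, y = 0.0, 10000.0           # start 10 km above the target plane
v = 300.0                     # m/s, held constant for the sketch
heading = -0.3                # rad, initially a shallow dive
tx, ty = 20000.0, 0.0         # stationary ground target

los_prev = math.atan2(ty - y, tx - x)
miss = math.hypot(tx - x, ty - y)
for _ in range(20000):
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    los = math.atan2(ty - y, tx - x)
    lam_dot = (los - los_prev) / dt     # line-of-sight rate estimate
    heading += N_GAIN * lam_dot * dt    # PN steering law
    los_prev = los
    miss = math.hypot(tx - x, ty - y)
    if miss < 50.0:
        break
```

PN drives the heading error toward the line of sight (for gain N > 1 the error decays), which yields intercept but, unlike the neighboring-optimal scheme, makes no attempt to maximize terminal velocity.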
This paper describes the design of an inverse adaptive filter, using the Least-Mean-Square (LMS) algorithm, to correct data taken with an analog filter. The gradient estimate used in the LMS algorithm is based upon the instantaneous error, e{sup 2}(n). Minimizing the mean-squared error does not provide an optimal solution in this specific case. Therefore, another performance criterion, error power, was developed to calculate the optimal inverse model. Despite using a different performance criterion, the inverse filter converges rapidly and gives a small mean-squared error. Computer simulations of this filter are also shown in this paper.
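A minimal LMS inverse-modeling sketch in the spirit of the design (the 'analog filter' taps, the modeling delay, and the step size are all invented for the example): an FIR filter is adapted so that, cascaded with a known channel, it reproduces a delayed copy of the input.

```python
import numpy as np

# LMS inverse modeling sketch. The channel stands in for the analog
# filter being inverted; its taps and the step size are assumptions.
rng = np.random.default_rng(1)
channel = np.array([1.0, 0.5, 0.25])    # minimum-phase stand-in filter
n_taps = 16
delay = 8                               # modeling delay for the inverse
mu = 0.01                               # LMS step size

w = np.zeros(n_taps)
x = rng.standard_normal(20000)          # white training input
u = np.convolve(x, channel)[: len(x)]   # channel output feeds the filter

errs = []
for n in range(n_taps, len(x)):
    u_vec = u[n - n_taps + 1 : n + 1][::-1]   # most recent sample first
    y = w @ u_vec
    e = x[n - delay] - y                      # desired = delayed input
    w += 2.0 * mu * e * u_vec                 # instantaneous-gradient update
    errs.append(e)
tail_mse = float(np.mean(np.asarray(errs[-1000:]) ** 2))
# After training, w approximates the (delayed) channel inverse.
```

The update uses the instantaneous error as the gradient estimate, matching the paper's description; the alternative error-power criterion it develops would replace the desired-signal definition, not this adaptation loop.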
Intense light ion beams are being developed to drive inertial confinement fusion (ICF) targets. Recently, intense proton beams have been used to drive two different types of targets in experiments on the Particle Beam Fusion Accelerator. The experiments focused separately on ion deposition physics and on implosion hydrodynamics. In the ion deposition physics experiments, a 3--4 TW/cm{sup 2} proton beam heated a low-density foam contained within a gold cylinder with a specific power deposition exceeding 100 TW/gm for investigating ion deposition, foam heating, and generation of x-rays. The significant results from these experiments included the following: the foam provided an optically thin radiating region, the uniformity of radiation across the foam was good, and the foam tamped the gold case, holding it in its original position for the 15 ns beam pulse width.
This document describes the Temperature Monitoring System for the RHEPP project at Sandia National Laboratories. The system is designed to operate in the presence of severe repetitive high voltage and electromagnetic fields while providing real time thermal data on component behavior. The thermal data is used in the design and evaluation of the major RHEPP components such as the magnetically switched pulse compressor and the linear induction voltage adder. Particular attention is given to the integration of commercially available hardware and software components with a custom written control program. While this document is intended to be a reference guide, it may also serve as a template for similar applications. 3 refs.
This bibliography contains 34 references concerning the use of benchmarking in the management of businesses. Books and articles are both cited. Methods for gathering and utilizing information are emphasized.
Measurements have recently been conducted and computer models constructed to determine the coupling of lightning energy into munition storage bunkers, as detailed in companion conference papers. In this paper, transfer functions from the incident current to the measured parameters are used to construct simple circuit models that explain much of the important observed quantitative and qualitative information, and differences in transfer functions are used to identify nonlinearities in the response data. In particular, V{sub oc} -- the open-circuit voltage generated between metal objects in the structure, I{sub sc} -- the short-circuit current generated in a wire connecting metal objects in the structure, and a typical current measurement in the buried counterpoise system behave in a relatively simple manner explainable by one or several circuit elements. The circuit elements inferred from measured data are comparable in magnitude with those developed from simple analytical models for inductance and resistance. These analytical models are more useful for predicting bounding electromagnetic-environment values than for providing exact time-domain waveforms. 2 refs.
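The single-element circuit interpretation can be illustrated by driving an assumed series R-L model with a double-exponential stroke current; the R, L, and waveform parameters below are invented for the sketch, not the values inferred from the measurements.

```python
import numpy as np

# Sketch of the simple-circuit interpretation: model the open-circuit
# voltage between metal objects as V_oc(t) = R*i(t) + L*di/dt driven
# by an incident lightning current. All values are illustrative.
R = 0.5        # ohms (assumed)
L = 2.0e-6     # henries (assumed)
dt = 1.0e-7    # s
t = np.arange(0.0, 50.0e-6, dt)
tau_rise, tau_fall = 2.0e-6, 20.0e-6
i = 30.0e3 * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))  # double-exp stroke
di_dt = np.gradient(i, dt)
v_oc = R * i + L * di_dt
# Early in the pulse the inductive term L*di/dt dominates (the voltage
# peaks before the current does); later the resistive term R*i dominates.
```

This separation of inductive and resistive behavior is exactly what makes the measured transfer functions explainable by one or two lumped elements.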
The restoration of environmentally contaminated sites at DOE facilities has become a major effort in the past several years. The variety of wastes involved and the differing characteristics have driven the development of new restoration and monitoring technologies. One of the new remediation technologies is being demonstrated at the Savannah River Site near Aiken, South Carolina. In conjunction with this demonstration, a new technology for site characterization and monitoring of the remediation process has been applied by Sandia National Laboratories.
We used surface-profile data taken with a noncontact laser profilometer to determine the aperture distribution within a natural fracture and found the surfaces and apertures to be isotropic. The aperture distribution could be described equally well by either a normal or a lognormal distribution, although we had to adjust the standard deviation to 'fit' the data. The aperture spatial correlation varied over different areas of the fracture, with some areas being much more correlated than others. The fracture surfaces did not have a single fractal dimension over all length scales, which implied that they were not self-similar. We approximated the saturated flow field in the fracture by solving a finite-difference discretization of the fluid-flow continuity equation in two dimensions. We then calculated tracer breakthrough curves using a particle-tracking method. Comparing the breakthrough curves obtained using both coarse- and fine-resolution aperture data (0.5- and 0.05-mm spacing between points, respectively) over the same subset of the fracture domain suggests that the spacing between the aperture data points must be less than the correlation length to obtain accurate predictions of fluid flow and tracer transport. In the future, we will perform tracer experiments and numerical modeling studies to determine exactly how fine the aperture data resolution must be (relative to the correlation length) to obtain accurate predictions.
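The finite-difference flow approximation can be sketched as follows, assuming local-cubic-law transmissivity (proportional to aperture cubed) on a small lognormal aperture field with fixed pressures on the left and right edges; the grid size, statistics, and iteration count are illustrative only.

```python
import numpy as np

# Sketch of the flow calculation: local-cubic-law transmissivity on a
# lognormal aperture field, steady pressure solve by Gauss-Seidel with
# harmonic-mean face transmissivities. Parameters are illustrative.
rng = np.random.default_rng(2)
n = 20
b = np.exp(0.2 * rng.standard_normal((n, n)))  # lognormal apertures
T = b ** 3                                     # cubic-law transmissivity

# Fixed pressures: p = 1 on the left column, p = 0 on the right column.
p = np.tile(np.linspace(1.0, 0.0, n), (n, 1))

def face(a, c):
    return 2.0 * a * c / (a + c)               # harmonic mean at a face

for _ in range(1000):                          # Gauss-Seidel sweeps
    for i in range(n):
        for j in range(1, n - 1):
            tw = face(T[i, j], T[i, j - 1])
            te = face(T[i, j], T[i, j + 1])
            num = tw * p[i, j - 1] + te * p[i, j + 1]
            den = tw + te
            if i > 0:                          # no-flow top/bottom edges
                tn = face(T[i, j], T[i - 1, j])
                num += tn * p[i - 1, j]
                den += tn
            if i < n - 1:
                ts = face(T[i, j], T[i + 1, j])
                num += ts * p[i + 1, j]
                den += ts
            p[i, j] = num / den
# Interior pressures now approximately satisfy the discrete continuity
# equation; local velocities for particle tracking follow from the
# pressure differences times the face transmissivities.
```

Breakthrough curves are then obtained by stepping particles through this velocity field, which is where the aperture-resolution sensitivity discussed above enters.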
Sandia National Laboratories (SNL) designs, tests, and operates a variety of accelerators that generate large amounts of high-energy Bremsstrahlung radiation over an extended time. Typically, groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield it from the radiation workers occupying surrounding test cells and work areas. These test cells, about 9 ft high, are constructed of high-density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility, and cost considerations impose certain restrictions on the design of these test cells. The SNL Health Physics division is tasked to evaluate the adequacy of each test cell design and compare resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of the effort underway and its results.
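A hedged point-kernel sketch of the kind of estimate such shielding codes make: exponential attenuation through a concrete wall with a crude buildup factor and inverse-square distance falloff. The source strength, attenuation coefficient, and buildup model are illustrative assumptions, not values from the SNL assessment.

```python
import math

# Point-kernel shielding sketch: dose = D0 * B * exp(-mu*x) / r^2.
# Every number below is an assumption chosen only for illustration.
D0 = 1.0e4          # unshielded dose rate at 1 m, mrem/h (assumed)
mu = 0.15           # linear attenuation coeff. for concrete, 1/cm (assumed)
thickness_cm = 60.0 # wall thickness
r_m = 5.0           # source-to-occupied-area distance, m

mfp = mu * thickness_cm            # wall thickness in mean free paths
buildup = 1.0 + mfp                # crude linear buildup assumption
dose = D0 * buildup * math.exp(-mfp) / r_m ** 2
```

Production shielding codes replace the crude buildup factor with tabulated energy-dependent data and handle skyshine geometry explicitly, which is precisely what the assessment compares against measured dose rates.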
The last decade has offered many challenges to the welding metallurgist: new types of materials requiring welded construction, describing the microstructural evolution of traditional materials, and explaining non-equilibrium microstructures arising from rapid-thermal-cycle weld processing. In this paper, the author will briefly review several advancements made in these areas, often citing specific examples where new insights were required to describe new observations, and show how traditional physical metallurgy methods can be used to describe transformation phenomena in advanced, non-traditional materials. The paper will close with comments and suggestions as to the needs for continued advancement in the field.
Phase II of the Long Valley Exploratory Well was completed to a depth of 7588 feet in November 1991. The drilling comprised two sub-phases: (1) drilling 17-1/2 inch hole from the Phase I casing shoe at 2558 feet to a depth of 7130 feet, plugging back to 6826 feet, and setting 13-3/8 inch casing at 6825 feet, all during August--September 1991; and (2) returning in November to drill a 3.85-inch core hole deviated out of the previous wellbore at 6868 feet and extending to 7588 feet. Ultimate depth of the well is planned to be 20,000 feet, or at a bottomhole temperature of 500{degrees}C, whichever comes first. Total cost of this drilling phase was approximately $2.3 million, and funding was shared about equally between the California Energy Commission and the Department of Energy. Phase II scientific work will commence in July 1992 and will be supported by DOE Office of Basic Energy Sciences, DOE Geothermal Division, and other funding sources.