Rutherford backscattering spectrometry (RBS), elastic recoil detection (ERD), proton-induced x-ray emission (PIXE), and nuclear reaction analysis (NRA) are among the most commonly used, or traditional, ion beam analysis (IBA) techniques. This review describes several adaptations of these IBA techniques in which either the analysis approach or the application area is clearly non-traditional or unusual.
MOSFETs have historically exhibited large 1/f noise magnitudes because of carrier-defect interactions that cause the number of channel carriers and their mobility to fluctuate. Uncertainty in the type and location of the defects that lead to the observed noise has made it difficult to optimize MOSFET processing to reduce the level of 1/f noise. This has limited one's options when designing devices or circuits (high-precision analog electronics, preamplifiers, etc.) for low-noise applications at frequencies below approximately 10-100 kHz. We have performed detailed comparisons of the low-frequency 1/f noise of MOSFETs manufactured with radiation-hardened and non-radiation-hardened processing. We find that the same techniques that reduce the amount of MOSFET radiation-induced oxide-trap charge can also proportionally reduce the magnitude of the low-frequency 1/f noise of both unirradiated and irradiated devices. MOSFETs built in radiation-hardened device technologies show noise levels up to a factor of 10 or more lower than standard commercial MOSFETs of comparable dimensions, and our quietest MOSFETs show noise magnitudes that approach the low noise levels of JFETs.
Three-dimensional (3D) seismic technology is regarded as one of the most significant improvements in oil exploration technology to come along in recent years. This report provides an assessment of the likely long-term effect on the world oil price and some possible implications for the firms and countries that participate in the oil market. The potential reduction in average finding costs expected from the use of 3D seismic methods, and the potential effects these methods may have on the world oil price, were estimated. Three-dimensional seismic technology is likely to have a more important effect on the stability of oil prices than on their level. The competitive position of US oil production will not be affected by 3D seismic technology.
A programming tool, STOPCNTR, has been developed to allow detailed analysis of Fortran programs for massively parallel architectures. The tool obtains counts, by data type, of the arithmetic, logical, and input/output operations selected by the user. It operates on complete programs and recognizes user-defined and intrinsic language functions as operations that may be counted. The subset of functions recognized by the tool can be extended by altering the input data sets; this feature facilitates analysis of programs targeted for different architectures. The basic usage and operation of the tool are described, along with the more important data structures and the more interesting algorithmic aspects, before future directions for continued development of the tool are identified and STOPCNTR's inherent advantages and disadvantages are discussed.
Tensile properties were measured for nineteen different formulations of epoxy encapsulating materials. The formulations combined two neat resins (Epon 828 and Epon 826, with and without CTBN modification), three fillers (ALOX, GNM, and mica), and four hardeners (Z, DEA, DETDA-SA, and ANH-2). Five of the formulations were tested at -55, -20, 20, and 60°C; one formulation at -55, 20, and 71°C; and the remaining formulations at 20°C. Complete stress-strain curves are presented along with tables of tensile strength, initial modulus, and Poisson's ratio. The stress-strain responses are nonlinear and temperature dependent. The reported data provide information for comparing the mechanical properties of encapsulants containing the suspected carcinogen Shell Z with those of encapsulants containing noncarcinogenic hardeners. Also, shear moduli calculated from the measured tensile moduli and Poisson's ratios are in very good agreement with shear moduli reported from experimental torsional pendulum tests.
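The calculated shear moduli mentioned above follow from the standard isotropic elasticity relation G = E / (2(1 + ν)); the numerical values below are illustrative only, not data from the report:

```python
def shear_modulus(tensile_modulus, poissons_ratio):
    """Isotropic elasticity relation: G = E / (2 * (1 + nu))."""
    return tensile_modulus / (2.0 * (1.0 + poissons_ratio))

# Hypothetical encapsulant values (not from the report): E = 3.0 GPa, nu = 0.35
g = shear_modulus(3.0, 0.35)  # roughly 1.11 GPa
```

The same one-line relation, applied at each test temperature, is all that is needed to compare calculated shear moduli against torsional pendulum measurements.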
Sandia National Laboratories is currently involved in the optimization of a Plane Shock Generator Explosive Lens (PSGEL). This PSGEL component is designed to generate a planar shock wave that is transmitted through a steel bulkhead to perform a function without rupturing or destroying the integrity of the bulkhead. The PSGEL component consists of a detonator, explosive, brass cone, and tamper housing. Its purpose is to deliver a plane shock wave into a 4340 steel bulkhead (wave separator) with a ferroelectric (PZT) ceramic disk attached to the steel surface opposite the PSGEL. The planar shock wave depolarizes the PZT 65/35 ferroelectric ceramic to produce an electrical output. Elastic, plastic I, and plastic II waves with different velocities are generated in the steel bulkhead. The depolarization of the PZT ceramic is produced by the elastic wave of specific amplitude (10-20 kilobars), and this process must be completed before (about 0.15 microseconds) the first plastic wave arrives at the PZT ceramic. Measured particle velocity versus time profiles, obtained with a Velocity Interferometer System for Any Reflector (VISAR), are presented for the brass and steel output free surfaces. Peak pressures are calculated from the particle velocities for the elastic, plastic I, and plastic II waves in the steel. The work presented here investigates replacing the current 4340 steel with PH 13-8 Mo stainless steel to obtain a more corrosion-resistant, weldable, and more compatible material for the multi-year life of the component. Therefore, particle velocity versus time profile data are presented comparing the 4340 steel and PH 13-8 Mo stainless steel. Additionally, to reduce the amount of explosive, data are presented showing that LX-13 can replace PBX-9501 explosive and produce more desirable results.
Previous studies in this laboratory have demonstrated that DMBA alters biochemical events associated with lymphocyte activation, including formation of the second messenger IP₃ and the release of intracellular Ca²⁺. The purpose of the present studies was to evaluate the mechanisms by which DMBA induces IP₃ formation and Ca²⁺ release by examining phosphorylation of membrane-associated proteins and activation of the protein tyrosine kinases lck and fyn. These studies demonstrated that exposure of HPB-ALL cells to 10 µM DMBA resulted in a time- and dose-dependent increase in tyrosine phosphorylation of PLC-γ1 that correlated with our earlier findings of IP₃ formation and Ca²⁺ release. These results indicate that the effects of DMBA on the PI-PLC signaling pathway are, in part, the result of DMBA-induced tyrosine phosphorylation of the PLC-γ1 enzyme. The DMBA-induced tyrosine phosphorylation of PLC-γ1 may be due to activation of fyn or lck kinase activity, since DMBA was found to increase the activity of these PTKs by more than 2-fold. Therefore, these studies demonstrate that DMBA may disrupt T cell activation by stimulating PTK activation with concomitant tyrosine phosphorylation of PLC-γ1, release of IP₃, and mobilization of intracellular Ca²⁺.
The Natural Excitation Technique (NExT) is a method of modal testing that allows structures to be tested in their ambient environments. This report is a compilation of developments and results since 1990, and contains a new theoretical derivation of NExT as well as a verification using analytically generated data. In addition, we compare results from NExT with conventional modal testing for a parked, vertical-axis wind turbine. For a rotating turbine, NExT is used to calculate the modal parameters as functions of the rotation speed, since substantial damping is derived from the aeroelastic interactions during operation. Finally, we compare experimental results calculated using NExT with analytical predictions of damping using aeroelastic theory.
Characteristics of a long-pulse, low-pump-rate, atomic xenon (XeI) laser are described. Energy loading up to 170 mJ/cc at pulse widths between 5 and 55 ms is achieved with an electron beam in transverse geometry. The small-signal gain coefficient, loss coefficient, and saturation intensity are inferred from a modified Rigrod analysis. For pump rates between 12 and 42 W/cc, the small-signal gain coefficient varies between 0.64 and 0.91%/cm, the loss coefficient varies between 0.027 and 0.088%/cm, and the saturation intensity varies between 61 and 381 W/cm². Laser energy as a function of pulse width and the effects of air and CO₂ impurities are described. The intrinsic laser energy efficiency has a maximum at a pulse width of 10 ms, corresponding to a pump rate of 1.6 W/cc. No maximum is observed in the intrinsic power efficiency. A drastic reduction of laser output power is observed for impurity concentrations greater than approximately 0.01%. An investigation of the dominant laser wavelength in a high-Q cavity indicates that the 2.6-µm radiation dominates. A comparison of the dominant wavelength with reactor-pumped results indicates good agreement when the same cavity optics are used.
This Executive Summary presents the methodology for determining containment requirements for spent-fuel transport casks under normal and hypothetical accident conditions. Three sources of radioactive material are considered: (1) the spent fuel itself, (2) radioactive material, referred to as CRUD, attached to the outside surfaces of fuel rod cladding, and (3) residual contamination adhering to interior surfaces of the cask cavity. The methodologies for determining the concentrations of freely suspended radioactive materials within a spent-fuel transport cask for these sources are discussed in much greater detail in three companion reports: "A Method for Determining the Spent-Fuel Contribution to Transport Cask Containment Requirements," "Estimate of CRUD Contribution to Shipping Cask Containment Requirements," and "A Methodology for Estimating the Residual Contamination Contribution to the Source Term in a Spent-Fuel Transport Cask." Examples of cask containment requirements that combine the individually determined containment requirements for the three sources are provided, and conclusions from the three companion reports to this Executive Summary are presented.
This report discusses recent efforts to characterize the flow and density nonuniformities downstream of heated screens placed in a uniform flow. The Heated Screen Test Facility (HSTF) at Sandia National Laboratories and the Lockheed Palo Alto Flow Channel (LPAFC) were used to perform experiments over wide ranges of upstream velocities and heating rates. Screens of various mesh configurations were examined, including multiple screens sequentially positioned in the flow direction. Diagnostics in these experiments included pressure manometry, hot-wire anemometry, interferometry, Hartmann wavefront slope sensing, and photorefractive schlieren photography. A model was developed to describe the downstream evolution of the flow and density nonuniformities. Equations for the spatial variation of the mean flow quantities and the fluctuation magnitudes were derived by incorporating empirical correlations into the equations of motion. Numerical solutions of these equations are in fair agreement with previous and current experimental results.
Two heliostats representing the state-of-the-art in glass-metal designs for central receiver (and photovoltaic tracking) applications were tested and evaluated at the National Solar Thermal Test Facility in Albuquerque, New Mexico from 1986 to 1992. These heliostats have collection areas of 148 and 200 m² and represent low-cost designs for heliostats that employ glass-metal mirrors. The evaluation encompassed the performance and operational characteristics of the heliostats, and examined heliostat beam quality, the effect of elevated winds on beam quality, heliostat drives and controls, mirror module reflectance and durability, and the overall operational and maintenance characteristics of the two heliostats. A comprehensive presentation of the results of these and other tests is presented. The results are prefaced by a review of the development (in the United States) of heliostat technology.
Shipping containers for radioactive materials must be qualified to meet a thermal accident environment specified in regulations, such as Title 10, Code of Federal Regulations, Part 71. Aimed primarily at the shipping container design, this report discusses the thermal testing options available for meeting the regulatory requirements, and states the advantages and disadvantages of each approach. The principal options considered are testing with radiant heat, furnaces, and open pool fires. The report also identifies some of the facilities available and current contacts. Finally, the report makes some recommendations on the appropriate use of these different testing methods.
Within the Yucca Mountain Site Characterization Project, the design of drifts and ramps and the evaluation of the impacts of thermomechanical loading of the host rock require definition of the rock mass mechanical properties. Ramps and exploratory drifts will intersect both welded and nonwelded tuffs with varying abundances of fractures. The rock mass mechanical properties depend on the intact rock properties and the fracture (joint) characteristics. An understanding of the effects of fractures on the mechanical properties of the rock mass begins with a detailed description of the fracture spatial location and abundance, and includes a description of their physical characteristics. This report presents a description of the abundance, orientation, and physical characteristics of fractures, and of the Rock Quality Designation, in the thermomechanical stratigraphic units at the Yucca Mountain site. Data were reviewed from existing sources and used to develop descriptions for each unit. The product of this report is a data set of the best available information on the fracture characteristics.
In this paper we consider the problem of interprocessor communication on a Completely Connected Optical Communication Parallel Computer (OCPC). The particular problem we study is that of realizing an h-relation. In this problem, each processor has at most h messages to send and at most h messages to receive. It is clear that any 1-relation can be realized in one communication step on an OCPC. However, the best known p-processor OCPC algorithm for realizing an arbitrary h-relation for h > 1 requires Θ(h + log p) expected communication steps. (This algorithm is due to Valiant and is based on earlier work of Anderson and Miller.) Valiant's algorithm is optimal only for h = Ω(log p), and it is an open question of Gereb-Graus and Tsantilas whether there is a faster algorithm for h = o(log p). In this paper we answer this question in the affirmative by presenting a Θ(h + log log p) communication step algorithm that realizes an arbitrary h-relation on a p-processor OCPC. We show that if h ≤ log p then the failure probability can be made as small as p^(-α) for any positive constant α.
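The OCPC communication model can be illustrated with a toy simulation (an assumption-laden sketch of the model, not the paper's algorithm): in each step every processor transmits one pending message, and a message is delivered only when its receiver hears from exactly one sender, so a 1-relation finishes in a single step while colliding senders must retry:

```python
from collections import Counter

def ocpc_step(pending):
    """One OCPC communication step. `pending` maps each processor id to a
    list of destination ids still to be sent. A message is delivered only
    if its destination is targeted by exactly one transmitting processor."""
    sends = {p: msgs[0] for p, msgs in pending.items() if msgs}
    hits = Counter(sends.values())
    delivered = 0
    for p, dest in sends.items():
        if hits[dest] == 1:      # no collision at the receiver
            pending[p].pop(0)
            delivered += 1
    return delivered

# A 1-relation (all destinations distinct) completes in one step,
# whereas two processors targeting the same receiver both fail.
one_rel = {0: [2], 1: [3], 2: [0], 3: [1]}
```

Repeating `ocpc_step` until `pending` is empty gives a crude feel for why h-relations with h > 1 require randomized retry strategies to resolve collisions quickly.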
This cautionary paper reminds users of quartz shock stress gauges that sensors that ignore the design rules of the "Sandia quartz gauge" may produce substantial and unrecognized deviations from normal sensitivity, waveform distortion, and anomalous conduction. Each deviant design must be extensively characterized. The consequences of non-standard gauge designs, such as the "shorted quartz gauge" designs, are given for prompt response to pulsed radiation while stressed.
Through the Dish-Stirling Joint Venture Program (JVP) sponsored by the US Department of Energy (DOE), Cummins Power Generation, Inc. (CPG) and Sandia National Laboratories (SNL) have entered into a joint venture to develop and commercialize economically competitive dish-Stirling systems for remote power applications. The $14 million JVP is being conducted in three phases over a 3 1/2-year period in accordance with the Cummins Total Quality System (TQS) for new product development. The JVP is being funded equally by CPG, including its industrial partners, and the DOE. In June 1992, a "concept validation" (prototype) 5-kWₑ dish-Stirling system became operational at the CPG test site in Abilene, TX. On January 1, 1993, the program advanced to phase 2. On the basis of the performance of the 5-kWₑ system, a decision was made to increase the rated system output to 7.5 kWₑ. The CPG system uses advanced components that have the potential for low cost and reliable operation, but that also carry technical risks. In this paper, the status of the advanced components and results from system integration testing are presented and discussed. Performance results from system testing of the 5-kWₑ prototype, along with phase 2 goals for the 7.5-kWₑ system, are also discussed.
This paper presents data compiled by the Photovoltaic Design Assistance Center at Sandia National Laboratories from more than eighty field tests performed at over thirty-five photovoltaic systems in the United States during the last ten years. The recorded performance histories, failure rates, and degradation of post-Block IV modules and balance-of-system (BOS) components are described in detail.
Images taken with a synthetic aperture radar (SAR) on an airplane were distorted with phase errors generated by a computer program that simulates the propagation of radar waves through the disturbed ionosphere. The simulation is for an orbiting SAR imaging a scene on the ground. Both the spatially-invariant (decorrelation length projected onto the ground much larger than the scene size) and spatially-variant (decorrelation length much smaller than the scene size) cases are described. The spatially-invariant phase errors can be removed using several different algorithms. Problems and strategies in restoring SAR images distorted with spatially-variant phase errors are discussed.
PVDF piezoelectric polymer shock stress sensors have been used to measure the shock and impulse generated by soft X-rays and by filter debris in the SATURN Plasma Radiation Source at Sandia National Laboratories, NM. SATURN was used to generate 30 to 40 kJ of 20-ns-duration line radiation at 2 to 3 keV. Fluence on the samples was nominally 40, 200, and 400 kJ/m² (1, 5, and 10 cal/cm²). Measurements of X-ray-induced material shock response, made by exposing both aluminum and PMMA acrylic samples, agree well with companion measurements made with single-crystal X-cut quartz gauges. Time-of-flight, stress, and impulse produced by Kimfol (polycarbonate/aluminum) filter debris were also measured with the PVDF gauges.
Drilling production-size holes for geothermal exploration puts a large expense at the beginning of a project and thus requires a long period of debt service before those costs can be recaptured from power sales. If a reservoir can be adequately defined and proved by drilling smaller, cheaper slim holes, production well drilling can be delayed until the power plant is under construction, saving years of interest payments. In the broadest terms, this project's objective is to demonstrate that a geothermal reservoir can be identified and evaluated with data collected in slim holes. We have assembled a coordinated working group, including personnel from Sandia, Lawrence Berkeley Lab, the University of Utah Research Institute, the US Geological Survey, independent consultants, and geothermal operators, to focus on the development of this project. This group is involved to a greater or lesser extent in all decisions affecting the direction of the research. Specific tasks being pursued include: correlation of fluid flow and injection tests between slim holes and production-size wells; transfer of slim-hole exploration drilling and reservoir assessment to industry so that slim-hole drilling becomes an accepted method for geothermal exploration; development and validation of a coupled wellbore-reservoir flow simulator that can be used for reservoir evaluation from slim-hole flow data; collection of applicable data from commercial wells in existing geothermal fields; and drilling of at least one new slim hole and its use to evaluate a geothermal reservoir.
PVDF shock stress sensors were subjected to X-ray deposition at nominal absorbed levels of 1, 1.5, 3, and 5 cal/gm (SiO₂ equivalent) and to neutron fluence above 10¹³ n/cm² while stressed at a peak level of about 2 GPa. Moderate transitory electrical noise that occurred briefly during the radiation did not persist. PVDF shock sensors with aluminum electrodes appear satisfactory for measurement within these exposure limits. Reference quartz gauges were severely affected.
Future challenges facing the nonproliferation community will undoubtedly change the normal "way of doing business" in international safeguards. New technology will emerge in support of compliance concepts such as transparency and openness, regional security assurance, bilateral cooperation, and special, or non-routine, inspections. Technologies addressing remote unattended monitoring, integrated on-site monitoring, environmental monitoring, satellite and aerial overflight systems, equipment for special inspections, and sharable data information fusion and management are just a few examples of potential technologies for new nonproliferation monitoring regimes.
Evolution of the microstructure of Al-2wt.%Cu thin films is examined with respect to how the presence of copper can influence electromigration behavior. After an anneal that simulates a thin film sintering step, the microstructure of the Al-Cu films consisted of 1-µm aluminum grains with θ-phase Al₂Cu precipitates at grain boundaries and triple points. The grain size and precipitate distribution did not change with subsequent heat treatments. Heat treatment of the films near the Al/Al+θ solvus temperature, followed by cooling to room temperature, results in depletion of copper at the aluminum grain boundaries. Heat treatments lower in the two-phase region (200 to 300°C) result in enrichment of copper at the aluminum grain boundaries. Here, it is proposed that the electromigration behavior of aluminum is improved by adding copper because the copper enrichment, in the form of the Al₂Cu phase, may hinder aluminum diffusion along the grain boundaries.
C++ is commonly described as an object-oriented programming language because of its strong support for classes with multiple inheritance and polymorphism. However, for a growing community of numerical programmers, an equally important feature of C++ is its support of operator overloading on abstract data types. The authors choose to call the resulting style of programming object-oriented numerics. They believe that much of object-oriented numerics is orthogonal to conventional object-oriented programming. As a case study, they discuss two strong shock physics codes written in C++ that they are currently developing. These codes use both polymorphic classes (typical of traditional object-oriented programming) and abstract data types with overloaded operators (typical of object-oriented numerics). They believe that C++ translators can generate efficient code for many numerical objects. However, for the important case of smart arrays (which are used to represent matrices and the fields found in partial differential equations), fundamental difficulties remain. The authors discuss the two most important of these, namely, the aliasing ambiguity and the proliferation of temporaries, and present some possible solutions.
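The "proliferation of temporaries" problem is easy to demonstrate in any language with operator overloading; here is a minimal sketch in Python (illustrating the mechanism, not the authors' C++ code): each overloaded `+` on an array class allocates a full intermediate array, which is exactly the overhead that C++ expression-template techniques aim to eliminate.

```python
class Vec:
    """Toy numeric array; every overloaded operator allocates a temporary."""
    allocations = 0

    def __init__(self, data):
        Vec.allocations += 1
        self.data = list(data)

    def __add__(self, other):
        # A brand-new Vec is built for every '+' in an expression.
        return Vec(x + y for x, y in zip(self.data, other.data))

a, b, c = Vec([1, 2]), Vec([3, 4]), Vec([5, 6])
Vec.allocations = 0
d = a + b + c    # builds (a + b) as a full temporary, then adds c
# d.data is [9, 12]; two temporaries were allocated along the way
```

For large arrays a single fused loop over the operands would touch memory once instead of once per operator, which is why naive operator overloading on smart arrays can be much slower than hand-written loops.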
The solution of Grand Challenge Problems will require computations that are too large to fit in the memories of even the largest machines. Inevitably, new I/O system designs will be necessary to support them. Through our implementations of an out-of-core LU factorization, we have learned several important lessons about what I/O systems should be like. In particular, we believe that the I/O system must provide the programmer with the ability to explicitly manage storage. One method of doing so is to have partitioned secondary storage in which each processor owns a logical disk. Along with operating system enhancements that allow overheads such as buffer copying to be avoided, this sort of I/O system meets the needs of high performance computing.
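The idea of explicitly managed, per-processor logical disks can be sketched as follows (a Python illustration with an ordinary file standing in for one processor's logical disk; the file name and panel size are made up): data is written out in fixed-size panels and streamed back one panel at a time, so only a single panel ever resides in memory.

```python
import array
import os
import tempfile

PANEL = 4  # elements per panel; a real code would size this to fit memory

def write_panels(values, path):
    """Write a vector to the 'logical disk' in fixed-size panels."""
    with open(path, 'wb') as f:
        for i in range(0, len(values), PANEL):
            array.array('d', values[i:i + PANEL]).tofile(f)

def out_of_core_sum(path):
    """Stream panels back; memory use is one panel, never the whole vector."""
    total = 0.0
    with open(path, 'rb') as f:
        while True:
            panel = array.array('d')
            try:
                panel.fromfile(f, PANEL)
            except EOFError:
                total += sum(panel)  # a short final panel still counts
                break
            total += sum(panel)
    return total

path = os.path.join(tempfile.mkdtemp(), 'disk0.bin')
write_panels([1.0] * 10 + [2.0] * 5, path)
# out_of_core_sum(path) recovers the full sum, 20.0
```

An out-of-core LU factorization follows the same pattern at larger scale: matrix panels are fetched from each processor's logical disk, updated in memory, and written back, with the programmer rather than the OS deciding what stays resident.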
An eXplosive CHEMical kinetics code, XCHEM, was developed to solve the reactive diffusion equations associated with thermal ignition of energetic materials. This method-of-lines code uses stiff numerical methods and adaptive meshing. Solution accuracy is maintained across multilayered materials consisting of blends of reactive components and/or inert materials. Phase change and variable properties are included in one-dimensional slab, cylindrical, and spherical geometries. Temperature-dependent thermal properties are incorporated, and modifications of the thermal conductivities to include decomposition effects are estimated using solid/gas volume fractions determined by species fractions. Gas transport properties are also included. Time-varying temperature, heat flux, convective, and thermal radiation boundary conditions, as well as layer-to-layer contact resistances, are also implemented. The global kinetic mechanisms developed at Lawrence Livermore National Laboratory (LLNL) by McGuire and Tarver to fit One-Dimensional Time to eXplosion (ODTX) data for the conventional energetic materials (HMX, RDX, TNT, and TATB) are presented in sample calculations representative of multistep chemistry. Calculated and measured ignition times for the explosive mixtures Comp B (RDX/TNT), Octol (HMX/TNT), PBX 9404 (HMX/NC), and RX-26-AF (HMX/TATB) are compared. Geometry and size effects are accurately modeled, and calculations are compared to experiments with time-varying boundary conditions. Finally, XCHEM calculations of the initiation of a resistively heated AN/oil/water emulsion are compared to measurements.
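The time-to-explosion behavior rests on Arrhenius kinetics; the essential thermal-runaway feedback can be sketched with a zero-dimensional adiabatic model (all parameter values below are hypothetical, chosen only to show the mechanism, and this one-step model is far simpler than the multistep LLNL mechanisms or XCHEM's reactive diffusion equations):

```python
import math

def time_to_runaway(t0, e_over_r=1.0e4, heating=1.0e8,
                    t_runaway=1500.0, dt=0.01, t_max=1.0e4):
    """Explicit-Euler integration of dT/dt = B * exp(-E/(R*T)) for an
    adiabatic, one-step Arrhenius self-heating model. Returns the time
    (s) at which T first exceeds t_runaway. Parameters are illustrative:
    e_over_r is E/R in K, heating is B = Q*A/c in K/s."""
    t, temp = 0.0, t0
    while temp < t_runaway:
        temp += dt * heating * math.exp(-e_over_r / temp)
        t += dt
        if t > t_max:
            raise RuntimeError("no ignition within t_max")
    return t

# Hotter initial temperatures ignite sooner, the trend ODTX data exhibit.
```

A production code replaces the explicit Euler step with a stiff integrator, since the reaction rate spans many orders of magnitude between slow cook-off and runaway.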
Diamond films were deposited on tungsten substrates by a filament-assisted chemical vapor deposition process as a function of seven different processing parameters. The effect of variations in measured film characteristics, such as growth rate, texture, diamond-to-nondiamond carbon Raman band intensity ratio, and strain, on the adhesion between the diamond film/tungsten substrate pairs, as measured by a tensile pull method, was investigated. The measured adhesion values do not correlate with any of the measured film characteristics mentioned above. This lack of correlation arises from the non-reproducibility of the adhesion test results, which is due to the non-uniformity of film thickness, surface preparation, and structural homogeneity across the full area of the substrate.
Strained-layer semiconductors have revolutionized modern heterostructure devices by exploiting the modification of semiconductor band structure associated with the coherent strain of lattice-mismatched heteroepitaxy. The modified band structure improves transport of holes in heterostructures and enhances the operation of semiconductor lasers. Strained-layer epitaxy also can create materials whose band gaps match wavelengths (e.g. 1.06 μm and 1.32 μm) not attainable in ternary epitaxial systems lattice matched to binary substrates. Other benefits arise from metallurgical effects of modulated strain fields on dislocations. Lattice mismatched epitaxial layers that exceed the limits of equilibrium thermodynamics will degrade under sufficient thermal processing by converting the as-grown coherent epitaxy into a network of strain-relieving dislocations. After presenting the effects of strain on band structure, we describe the stability criterion for rapid-thermal processing of strained-layer structures and the effects of exceeding the thermodynamic limits. Finally, device results are reviewed for structures that benefit from high temperature processing of strained-layer superlattices.
The porosity of sol-gel thin films may be tailored for specific applications through control of the size and structure of inorganic polymers within the coating sol, the extent of polymer reaction and interpenetration during film formation, and the magnitude of the capillary pressure exerted during the final stage of drying. By maximizing the capillary pressure and avoiding excessive condensation, dense insulating films may be prepared as passivation layers on silicon substrates. Such films can exhibit excellent dielectric integrity, viz., low interface trap densities and insulating properties approaching those of thermally grown SiO₂. Alternatively, through exploitation of the scaling relationship of mass and density of fractal objects, silica films can be prepared that show a variation in porosity (7-29%) and refractive index (1.42-1.31) desired for applications in sensors, membranes, and photonics.
At Sandia National Laboratories, the Engineering Sciences Center has made a commitment to integrate AVS into our computing environment as the primary tool for scientific visualization. AVS will be used on an everyday basis by a broad spectrum of users ranging from the occasional computer user to AVS module developers. Additionally, AVS will be used to visualize structured grid, unstructured grid, gridless, 1D, 2D, 3D, steady-state, transient, computational, and experimental data. The following is one user's perspective on how AVS meets this task. Several examples of how AVS is currently being utilized will be given along with some future directions.
Sandia National Laboratories and the Allied Signal-Kansas City Plant (AS-KCP) are engaged in a program called the Integrated Manufacturing and Design Initiative, or IMDI. The focus of IMDI is to develop and implement concurrent engineering processes for the realization of weapon components. An explicit part of each of the activities within IMDI is an increased concern for environmental impacts associated with design, and a desire to minimize those impacts through the implementation of Environmentally Conscious Manufacturing, or ECM. These same concerns and desires are shared within the Department of Energy's Manufacturing Complex, and are gaining strong support throughout US industrial sectors as well. Therefore, the development and application of an environmental life cycle analysis framework, the thrust of this specific effort, is most consistent not only with the overall objectives of IMDI, but with those of DOE and private industry.
When an object is subjected to the flow of combustion gas at a different temperature, the thermal responses of the object and the surrounding gas become coupled. The ability to model this interaction is of primary interest in the design of components which must withstand fire environments. One approach has been to decouple the problem and treat the incident flux on the surface of the object as being emitted from a blackbody at an approximate gas temperature. By neglecting the presence of the participating media, this technique overpredicts the heat fluxes initially acting on the object surface. The main goal of this work is to quantify the differences inherent in treating the combustion media as a blackbody as opposed to a gray gas. This objective is accomplished by solving the coupled participating media radiation and conduction heat transfer problem. A transient conduction analysis of a vertical flat plate was performed using a gray gas model to provide a radiation boundary condition. A 1-D finite difference algorithm was used to solve the conduction problem at locations along the plate. The results are presented in terms of nondimensional parameters and include both average and local heat fluxes as a function of time. Early in the transient, a reduction in net heat fluxes of up to 65% was observed for the gray gas results as compared to the blackbody cases. This reduction in the initial net heat flux results in lower surface temperatures for the gray gas case. Due to the initially reduced surface temperatures, the gray gas net heat flux exceeds the net blackbody heat flux with increasing time. For radiation Biot numbers greater than 5, or values of the radiation parameter less than 10⁻², the differences inherent in treating the media as a gray gas are negligible and the blackbody assumption is valid.
Overall, the results clearly indicate the importance of participating media treatment in the modeling of the thermal response of objects in fires and large combustion systems.
This paper gives an estimate of the cost to produce electricity from hot-dry rock (HDR). Employment of the energy in HDR for the production of electricity requires drilling multiple wells from the surface to the hot rock, connecting the wells through hydraulic fracturing, and then circulating water through the fracture system to extract heat from the rock. The basic HDR system modeled in this paper consists of an injection well, two production wells, the fracture system (or HDR reservoir), and a binary power plant. Water is pumped into the reservoir through the injection well where it is heated and then recovered through the production wells. Upon recovery, the hot water is pumped through a heat exchanger transferring heat to the binary, or working, fluid in the power plant. The power plant is a net 5.1-MWe binary plant employing dry cooling. Make-up water is supplied by a local well. In this paper, the cost of producing electricity with the basic system is estimated as the sum of the costs of the individual parts. The effects on cost of variations to certain assumptions, as well as the sensitivity of costs to different aspects of the basic system, are also investigated.
We describe an algorithm for the static load balancing of scientific computations that generalizes and improves upon spectral bisection. Through a novel use of multiple eigenvectors, our new spectral algorithm can divide a computation into 4 or 8 pieces at once. This leads to balanced partitions that have lower communication overhead and are less expensive to compute than those of spectral bisection. In addition, our approach automatically works to minimize message contention on a hypercube or mesh architecture.
This paper describes a collaborative effort between Sandia National Laboratories and the Rocketdyne Division of Rockwell International Corporation to develop an automated braze paste dispensing system for rocket engine nozzle manufacturing. The motivation for automating this manufacturing process is to reduce the amount of labor and excess material required. A critical requirement for this system is the automatic location of key nozzle features using non-contact sensors. Sandia has demonstrated that the low-cost Multi-Axis Seam Tracking (MAST) capacitive sensor can be used to accurately locate the nozzle surface and tube gaps.
We report our progress on the physical optics modelling of Sandia/AT&T SXPL experiments. The code is benchmarked and the 10X Schwarzschild system is being studied.
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-to-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up-tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An OpenWindows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
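The per-sample computation that makes this architecture multiply/accumulate-intensive is the discrete state-space update. The sketch below shows the general form with an arbitrary toy controller; the matrices, sizes, and gains are placeholders, not the actual control laws run on this hardware.

```python
# Minimal sketch of the state-space update the floating-point modules compute
# each sample period: y[k] = C x[k] + D u[k], then x[k+1] = A x[k] + B u[k].
# The matrices below are invented placeholders, not a real controller.
import numpy as np

def make_controller(A, B, C, D):
    """Return a stateful function mapping sensor sample u[k] -> actuator output y[k]."""
    x = np.zeros(A.shape[0])
    def step(u):
        nonlocal x
        y = C @ x + D @ u   # output equation
        x = A @ x + B @ u   # state update (the multiply/accumulate-heavy part)
        return y
    return step

# Toy single-input, single-output example: a discrete first-order lag.
A = np.array([[0.9]]); B = np.array([[0.1]])
C = np.array([[1.0]]); D = np.array([[0.0]])
ctrl = make_controller(A, B, C, D)
outputs = [round(float(ctrl(np.array([1.0]))[0]), 4) for _ in range(5)]
print(outputs)  # [0.0, 0.1, 0.19, 0.271, 0.3439] -- ramps toward a DC gain of 1
```

On the actual hardware this loop would be partitioned across the DSP96002 processors, with the A/D modules supplying u[k] and the D/A modules emitting y[k].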
A high speed readout imaging system utilizing a commercial flash X-ray machine and miniature X-ray detectors has been developed. This system was designed to operate in the environment near a nuclear detonation where film or camera imaging cannot be used. The temporal resolution of the system is set by the 20 nanosecond FWHM of the X-ray pulse. The spatial resolution of the system was determined by the size and close packing of the PIN diodes used as the X-ray detectors. In the array used here, the PIN diodes have an active area 2 mm in diameter and were placed 3.8 mm center to center. Computer-generated images using algorithms developed for this system are presented and compared with an image captured on film in the laboratory.
This paper discusses a nonideal solution model of the metallic phases of reactor core debris. The metal phase model is based on the Kohler equation for a 37 component system. The binary subsystems are assumed to have subregular interactions. The model is parameterized by comparison to available data and by estimating subregular interactions using the methods developed by Miedema et al. The model is shown to predict phase separation in the metallic phase of core debris. The model also predicts reduced chemical activities of zirconium and tellurium in the metal phase. A model of the oxide phase of core debris is described briefly. The model treats the oxide phase as an associated solution. The chemical activities of solution components are determined by the existence and interactions of species formed from the components.
Pool-boiler reflux receivers have been considered as an alternative to heat pipes for the input of concentrated solar energy to Stirling-cycle engines in dish-Stirling electric generation systems. Pool boilers offer simplicity in design and fabrication. Pool-boiler solar receiver operation has been demonstrated for short periods of time. However, in order to generate cost-effective electricity, the receiver must operate without significant maintenance for the entire system life. At least one theory explaining incipient-boiling behavior of alkali metals indicates that favorable start-up behavior should deteriorate over time. Many factors affect the stability and startup behavior of the boiling system. Therefore, it is necessary to simulate the full-scale design in every detail as much as possible, including flux levels, materials, and operating cycles. On-sun testing is impractical due to the limited test time available. No boiling system has been demonstrated with the current porous boiling enhancement surface and materials for a significant period of time. A test vessel was constructed with a Friction Coatings Inc. porous boiling enhancement surface. The vessel is heated with a quartz lamp array providing about 92 W/cm² peak incident thermal flux. The vessel is charged with NaK-78, which is liquid at room temperature. This allows the elimination of costly electric preheating, both on this test and on full-scale receivers. The vessel is fabricated from Haynes 230 alloy, selected for its high-temperature strength and oxidation resistance. The vessel operates at 750°C around the clock, with a 1/2-hour shutdown cycle to ambient every 8 hours. Temperature data are continually collected. The test design and initial (first 2500 hours and 300 start-ups) test data are presented here. The test is designed to operate for 10,000 hours, and will be complete in the spring of 1994.
Sandia National Laboratories has the qualification evaluation responsibility for the design of certain components intended for use in nuclear weapons. Specific techniques in assurance and assessment have been developed to provide the quality evidence that the software has been properly qualified for use. Qualification Evaluation is a process for assessing the suitability of either a process used to develop or manufacture the product, or the product itself. The qualification process uses a team approach to evaluating a product or process, chaired by a Quality Assurance professional, with other members representing the design organization, the systems organization, and the production agency. Suitable for use implies that adequate and appropriate definition and documentation has been produced and formally released, adequate verification and validation activities have taken place to ensure proper operation, and the software product meets all requirements, explicitly or otherwise.
Upon achieving ignition and gain, the Laboratory Microfusion Facility (LMF) will be a major tool for Inertial Confinement Fusion (ICF) research and defense applications. Our concept for delivering ~10 MJ with a peak on-target light ion power of ~700 TW involves a multi-modular approach using an extension of the compact inductively isolated cavity and Magnetically Insulated Transmission Line (MITL) Voltage Adder technology that is presently being used in several large accelerators at Sandia/New Mexico. The LMF driver design consists of twelve 8-TW and twelve 38-TW accelerating modules, each with a triaxial MITL/Adder that delivers power to a two-stage ion extraction diode. The desired energy, power pulse shape, and deposition uniformity on an ICF target can be achieved by controlling the energy and firing sequence of the "A" and "B" accelerator modules, plus optimizing the beam transport and focusing. The multi-modular configuration reduces risk by not scaling significantly beyond existing machines and offers the flexibility of staged construction. It permits modular driver testing at the full operating level required by the LMF.
Parallel computers are becoming more powerful and more complex in response to the demand for computing power by scientists and engineers. Inevitably, new and more complex I/O systems will be developed for these systems. In particular we believe that the I/O system must provide the programmer with the ability to explicitly manage storage (despite the trend toward complex parallel file systems and caching schemes). One method of doing so is to have a partitioned secondary storage in which each processor owns a logical disk. Along with operating system enhancements which allow overheads such as buffer copying to be avoided and libraries to support optimal remapping of data, this sort of I/O system meets the needs of high performance computing.
The design-basis, defense-related, transuranic waste to be emplaced in the Waste Isolation Pilot Plant may, if sufficient H2O, nutrients, and viable microorganisms are present, generate significant quantities of gas in the repository after filling and sealing. We summarize recent results of laboratory studies of anoxic corrosion and microbial activity, the most potentially significant processes. We also discuss possible implications for the repository gas budget.
This report discusses electronic isolators which are used to maintain electrical separation between safety and non-safety systems in nuclear power plants. The concern is that these devices may fail allowing unwanted signals or energy to act upon safety systems, or preventing desired signals from performing their intended function. While operational history shows many isolation device problems requiring adjustments and maintenance, we could not find incidents where there was a safety implication. Even hypothesizing multiple simultaneous failures did not lead to significant contributions to core damage frequency. Although the analyses performed in this study were not extensive or detailed, there seems to be no evidence to suspect that isolation device failure is an issue which should be studied further.
This report documents the fiscal year 1992 activities of the Utility Battery Storage Systems Program (UBS) of the US Department of Energy (DOE), Office of Energy Management (OEM). The UBS program is conducted by Sandia National Laboratories (SNL). UBS is responsible for the engineering development of integrated battery systems for use in utility energy storage (UES) and other stationary applications. Development is accomplished primarily through cost-shared contracts with industrial organizations. An important part of the development process is the identification, analysis, and characterization of attractive UES applications. UBS is organized into five projects: Utility Battery Systems Analyses; Battery Systems Engineering; Zinc/Bromine; Sodium/Sulfur; and Supplemental Evaluations and Field Tests. The results of the Utility Systems Analyses are used to identify several utility-based applications for which battery storage can effectively solve existing problems. The results will also specify the engineering requirements for widespread applications and motivate and define needed field evaluations of full-size battery systems.
The sixth experiment of the Integral Effects Test (IET-6) series was conducted to investigate the effects of high-pressure melt ejection on direct containment heating. Scale models of the Zion reactor pressure vessel (RPV), cavity, instrument tunnel, and subcompartment structures were constructed in the Surtsey Test Facility at Sandia National Laboratories. The RPV was modeled with a melt generator that consisted of a steel pressure barrier, a cast MgO crucible, and a thin steel inner liner. The melt generator/crucible had a hemispherical bottom head containing a graphite limiter plate with a 4-cm exit hole to simulate the ablated hole in the RPV bottom head that would be formed by ejection of an instrument guide tube in a severe nuclear power plant accident. The cavity contained 3.48 kg of water, which corresponds to condensate levels in the Zion plant, and the containment basement floor was dry. A 43-kg initial charge of iron oxide/aluminum/chromium thermite was used to simulate corium debris on the bottom head of the RPV. Molten thermite was ejected by steam at an initial pressure of 6.3 MPa into the reactor cavity. The Surtsey vessel atmosphere contained pre-existing hydrogen to represent partial oxidation of the zirconium in the Zion core. The initial composition of the vessel atmosphere was 87.1 mol % N₂, 9.79 mol % O₂, and 2.59 mol % H₂, and the initial absolute pressure was 198 kPa. A partial hydrogen burn occurred in the Surtsey vessel. The peak vessel pressure increase was 279 kPa in IET-6, compared to 246 kPa in the IET-3 test. The total debris mass ejected into the Surtsey vessel in IET-6 was 42.5 kg. The gas grab sample analysis indicated that there were 180 g·moles of pre-existing hydrogen, and that 308 g·moles of hydrogen were produced by steam/metal reactions. About 335 g·moles of hydrogen burned, and 153 g·moles remained unreacted.
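The hydrogen inventory quoted in this abstract can be cross-checked by simple bookkeeping: the hydrogen entering the vessel atmosphere (pre-existing plus produced) should equal the hydrogen accounted for at the end (burned plus unreacted).

```python
# Bookkeeping check of the IET-6 hydrogen inventory quoted above (gram-moles).
preexisting = 180.0  # pre-existing H2 in the Surtsey vessel
produced = 308.0     # H2 produced by steam/metal reactions
burned = 335.0       # H2 consumed in the partial burn
unreacted = 153.0    # H2 remaining after the test

total_in = preexisting + produced
total_out = burned + unreacted
print(total_in, total_out)  # both 488.0: the reported inventory balances
```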
The Sandia National Laboratories (SNL) Engineering Analysis Code Access System (SEACAS) is a collection of structural and thermal codes and utilities used by analysts at SNL. The system includes pre- and post-processing codes, analysis codes, database translation codes, support libraries, UNIX shell scripts, and an installation system. SEACAS is used at SNL on a daily basis as a production, research, and development system for the engineering analysts and code developers. Over the past year, approximately 180 days of Cray Y-MP CPU time have been used at SNL by SEACAS codes. The job mix includes jobs using only a few seconds of CPU time, up to jobs using two and one-half days of CPU time. SEACAS is running on several different systems at SNL including Cray Unicos, Hewlett Packard HP-UX, Digital Equipment Ultrix, and Sun SunOS. This document is a short description of the codes in the SEACAS system.
This bulletin from Sandia Laboratories presents current research on testing technology. Fiber optics systems at the Nevada Test Site are replacing coaxial cables. The hypervelocity launcher is being used to test orbital debris impacts with space station shielding. A digital recorder makes testing of high-speed water entries possible. Automobile engine design is aided by an instrumented head gasket that detects the combustion zone. And composite-to-metal strength and fatigue tests provide new data on wind turbine joint failures.
The CONTAIN computer code is a best-estimate, integrated analysis tool for predicting the physical, chemical, and radiological conditions inside a nuclear reactor containment building following the release of core material from the primary system. CONTAIN is supported primarily by the U.S. Nuclear Regulatory Commission (USNRC), and the official code versions produced with this support are intended primarily for the analysis of light water reactors (LWRs). The present manual describes CONTAIN LMR/1B-Mod.1, a code version designed for the analysis of reactors with liquid metal coolant. It is a variant of the official CONTAIN 1.11 LWR code version. Some of the features of CONTAIN-LMR for treating the behavior of liquid metal coolant are in fact present in the LWR code versions but are discussed here rather than in the User's Manual for the LWR versions. These features include models for sodium pool and spray fires. In addition to these models, new or substantially improved models have been installed in CONTAIN-LMR. The latter include models for treating two condensables (sodium and water) simultaneously, sodium atmosphere and pool chemistry, sodium condensation on aerosols, heat transfer from core-debris beds and to sodium pools, and sodium-concrete interactions. A detailed description of each of the above models is given, along with the code input requirements.
Inelastic material constitutive relations for elastoplasticity coupled with continuum damage mechanics are investigated. For elastoplasticity, continuum damage mechanics, and the coupled formulations, rigorous thermodynamic frameworks are derived. The elastoplasticity framework is shown to be sufficiently general to encompass J₂ plasticity theories including general isotropic and kinematic hardening relations. The concepts of an intermediate undamaged configuration and a fictitious deformation gradient are used to develop a damage representation theory. An empirically based damage evolution theory is proposed to overcome some observed deficiencies. Damage deactivation, which is the negation of the effects of damage under certain loading conditions, is investigated. An improved deactivation algorithm is developed for both damaged elasticity and coupled elastoplasticity formulations. The applicability of coupled formulations is validated by comparing theoretical predictions to experimental data for a spectrum of materials and load paths. The pressure-dependent brittle-to-ductile transitional behavior of concrete is replicated. The deactivation algorithm is validated using tensile and compression data for concrete. For a ductile material, the behavior of an aluminum alloy is simulated including the temperature-dependent ductile-to-brittle behavior features. The direct application of a coupled model to fatigue is introduced. In addition, the deactivation algorithm in conjunction with an assumed initial damage and strain is introduced as a novel method of simulating the densification phenomenon in cellular solids.
Target recognition requires the ability to distinguish targets from non-targets, a capability called one-class generalization. Many neural network pattern classifiers fail as one-class classifiers because they use open decision boundaries. To function as a one-class classifier, a neural network must have three types of generalization: within-class, between-class, and out-of-class. We discuss these three types of generalization and identify neural network architectures that meet these requirements. We have applied our one-class classifier ideas to the problem of automatic target recognition in synthetic aperture radar. We have compared three neural network algorithms: Carpenter and Grossberg's algorithmic version of the Adaptive Resonance Theory (ART-2A), Kohonen's Learning Vector Quantization (LVQ), and Reilly and Cooper's Restricted Coulomb Energy network (RCE). The ART-2A neural network gives the best results, with 100% within-class, between-class, and out-of-class generalization. Experiments show that the network's performance is sensitive to vigilance and the number of training set presentations.
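The closed-versus-open boundary distinction can be illustrated with a toy prototype-plus-radius classifier. This is a sketch in the spirit of RCE/ART-style networks, not the paper's actual algorithms; the prototypes, radius, and test points are invented.

```python
# Illustrative sketch of why closed decision boundaries matter for one-class
# recognition: accept an input only if it lies inside some prototype's ball
# (a vigilance-like radius). An open (e.g. linear) boundary would instead
# label arbitrarily distant inputs as "target". All data here are invented.
import math

def one_class_classify(x, prototypes, radius):
    """Return True only if x falls within `radius` of some stored target prototype."""
    return any(math.dist(x, p) <= radius for p in prototypes)

prototypes = [(0.0, 0.0), (2.0, 2.0)]  # assumed learned target exemplars
radius = 1.0                           # assumed acceptance radius

print(one_class_classify((0.2, 0.1), prototypes, radius))    # True: near a target
print(one_class_classify((10.0, 10.0), prototypes, radius))  # False: out-of-class
```

The second call shows out-of-class generalization: a point far from every prototype is rejected rather than forced onto one side of an open boundary.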
This report discusses the seventh experiment of the Integral Effects Test (IET-7) series. The experiment was conducted to investigate the effects of preexisting hydrogen in the Surtsey vessel on direct containment heating. Scale models of the Zion reactor pressure vessel (RPV), cavity, instrument tunnel, and subcompartment structures were constructed in the Surtsey Test Facility at Sandia National Laboratories. The RPV was modeled with a melt generator that consisted of a steel pressure barrier, a cast MgO crucible, and a thin steel inner liner. The melt generator/crucible had a hemispherical bottom head containing a graphite limiter plate with a 4-cm exit hole to simulate the ablated hole in the RPV bottom head that would be formed by ejection of an instrument guide tube in a severe nuclear power plant accident. The cavity contained 3.48 kg of water, and the containment basement floor inside the cranewall contained 71 kg of water, which corresponds to scaled condensate levels in the Zion plant. A 43-kg initial charge of iron oxide/aluminum/chromium thermite was used to simulate corium debris on the bottom head of the RPV. Molten thermite was ejected by steam at an initial pressure of 5.9 MPa into the reactor cavity.
Charpy V-notch specimens (ASTM Type A) and 5.74-mm diameter tension test specimens of the Shippingport Reactor Neutron Shield Tank (NST) (outer wall material) were irradiated together with Charpy V-notch specimens of the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) vessel (shell material), to 5.07 × 10¹⁷ n/cm², E > 1 MeV. The irradiation was performed in the Ford Nuclear Reactor (FNR), a test reactor, at a controlled temperature of 54°C (130°F) selected to approximate the prior service temperatures of the cited reactor structures. Radiation-induced elevations in the Charpy 41-J transition temperature and the ambient-temperature yield strength were small and independent of specimen test orientation (ASTM LT vs. TL). The observations are consistent with prior findings for the two materials (A 212-B plate) and other like materials irradiated at low temperature (< 200°C) to low fluence. The high radiation embrittlement sensitivity observed in HFIR vessel surveillance program tests was not found in the present accelerated irradiation test. Response to 288°C/168-h postirradiation annealing was explored for the NST material. Notch ductility recovery was found independent of specimen test orientation but dependent on the temperature within the transition region at which the specimens were tested.
A method is presented for determining the nonlinear stability of undamped flexible structures spinning about a principal axis of inertia. Equations of motion are developed for structures that are free of applied forces and moments. The development makes use of a floating reference frame which follows the overall rigid body motion. Within this frame, elastic deformations are assumed to be given functions of n generalized coordinates. A transformation of variables is devised which shows the equivalence of the equations of motion to a Hamiltonian system with n + 1 degrees of freedom. Using this equivalence, stability criteria are developed based upon the normal form of the Hamiltonian. It is shown that a motion which is spin stable in the linear approximation may be unstable when nonlinear terms are included. A stability analysis of a simple flexible structure is provided to demonstrate the application of the stability criteria. Results from numerical integration of the equations of motion are shown to be consistent with the predictions of the stability analysis. A new method for modeling the dynamics of rotating flexible structures is developed and investigated. The method is similar to conventional assumed displacement (modal) approaches with the addition that quadratic terms are retained in the kinematics of deformation. Retention of these terms is shown to account for the geometric stiffening effects which occur in rotating structures. Computational techniques are developed for the practical implementation of the method. The techniques make use of finite element analysis results, and thus are applicable to a wide variety of structures. Motion studies of specific problems are provided to demonstrate the validity of the method. Excellent agreement is found both with simulations presented in the literature for different approaches and with results from a commercial finite element analysis code. The computational advantages of the method are demonstrated.
The U.S. Department of Energy (DOE) is developing the Waste Isolation Pilot Plant (WIPP) in southeastern New Mexico as a facility for the long-term disposal of defense-related transuranic (TRU) wastes. Use of the WIPP for waste disposal is contingent on demonstrations of compliance with applicable regulations of the U.S. Environmental Protection Agency (EPA). This paper addresses issues related to modeling gas and brine migration at the WIPP for compliance with both EPA 40 CFR 191 (the Standard) and 40 CFR 268.6 (the RCRA). At the request of the WIPP Project Integration Office (WPIO) of the DOE, the WIPP Performance Assessment (PA) Department of Sandia National Laboratories (SNL) has completed preliminary uncertainty and sensitivity analyses of gas and brine migration away from the undisturbed repository. This paper contains descriptions of the numerical model and simulations, including model geometries and parameter values, and a summary of major conclusions from sensitivity analyses. Because significant transport of contaminants can only occur in a fluid (gas or brine) medium, two-phase flow modeling can provide an estimate of the distance to which contaminants can migrate. Migration of gas or brine beyond the RCRA 'disposal-unit boundary' or the Standard's accessible environment constitutes a potential, but not certain, violation and may require additional evaluations of contaminant concentrations.
This paper presents an infinite impulse response (IIR) filtering technique for reducing structural vibration in remotely operated robotic systems. The technique uses a discrete filter between the operator's joystick and the robot controller to alter the inputs of the system so that residual vibration and swing are reduced. A linearized plant model of the system is analyzed in the discrete time domain, and the filter is designed using pole-zero placement in the z-plane. This technique has been successfully applied to a two-link flexible arm and a gantry crane with a suspended payload.
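The pole-zero-placement idea can be sketched as follows: place the filter's zeros on the plant's lightly damped poles in the z-plane so that joystick commands cannot excite the resonance, while normalizing for unity DC gain so steady commands pass through unchanged. This is a generic sketch, not the paper's design; the resonance frequency, damping, sample rate, and pole radius below are invented.

```python
# Hedged sketch of z-plane pole-zero placement for command shaping. The
# filter's zeros sit at the (discretized) plant resonance so the joystick
# command does not excite it; parameters are hypothetical.
import math

def notch_coeffs(f_res, damping, f_samp, pole_radius=0.6):
    """Second-order IIR whose zeros cancel an assumed resonance at f_res Hz."""
    theta = 2 * math.pi * f_res / f_samp
    r_z = math.exp(-damping * 2 * math.pi * f_res / f_samp)  # ~ plant pole radius
    b = [1.0, -2 * r_z * math.cos(theta), r_z * r_z]          # zeros at plant poles
    a = [1.0, -2 * pole_radius * math.cos(theta), pole_radius * pole_radius]
    k = sum(a) / sum(b)           # normalize for unity DC gain
    return [bi * k for bi in b], a

def iir_filter(b, a, u):
    """Direct-form I filtering of the command sequence u."""
    y, xs, ys = [], [0.0, 0.0], [0.0, 0.0]
    for x in u:
        out = b[0]*x + b[1]*xs[0] + b[2]*xs[1] - a[1]*ys[0] - a[2]*ys[1]
        xs = [x, xs[0]]; ys = [out, ys[0]]
        y.append(out)
    return y

b, a = notch_coeffs(f_res=2.0, damping=0.02, f_samp=100.0)
step = iir_filter(b, a, [1.0] * 200)
print(round(step[-1], 3))  # settles near 1.0: steady joystick commands pass through
```

The trade-off in such designs is added command latency from the filter dynamics, which is why the filter's own poles are placed well inside the unit circle.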
Before disposing of transuranic radioactive waste at the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories (SNL) is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This paper describes the 1992 preliminary comparison with Subpart B of the Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191), which regulates long-term releases of radioactive waste. Results of the 1992 PA are preliminary, and cannot be used to determine compliance or noncompliance with EPA regulations because portions of the modeling system and data base are incomplete. Results are consistent, however, with those of previous iterations of PA, and the SNL WIPP PA Department has high confidence that compliance with 40 CFR 191B can be demonstrated. Comparison of predicted radiation doses from the disposal system also gives high confidence that the disposal system is safe for long-term isolation.
We describe an algorithm for the static load balancing of scientific computations that generalizes and improves upon spectral bisection. Through a novel use of multiple eigenvectors, our new spectral algorithm can divide a computation into 4 or 8 pieces at once. These multidimensional spectral partitioning algorithms generate balanced partitions that have lower communication overhead and are less expensive to compute than those produced by spectral bisection. In addition, they automatically work to minimize message contention on a hypercube or mesh architecture. These spectral partitions are further improved by a multidimensional generalization of the Kernighan-Lin graph partitioning algorithm. Results on several computational grids are given and compared with other popular methods.
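The baseline that these multidimensional algorithms generalize is standard spectral bisection: split the graph by the sign pattern of the Fiedler vector, the eigenvector of the second-smallest Laplacian eigenvalue. The sketch below shows bisection on a toy 6-node path graph (the multi-eigenvector quadrisection/octasection of the paper would use several eigenvectors at once).

```python
# Minimal sketch of spectral bisection, the method the paper generalizes:
# partition a graph by thresholding the Fiedler vector at its median.
# The 6-node path graph is a toy example.
import numpy as np

def spectral_bisect(adj):
    """Partition vertices into two balanced halves using the Fiedler vector."""
    deg = np.diag(adj.sum(axis=1))
    laplacian = deg - adj
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    fiedler = eigvecs[:, 1]                        # 2nd-smallest eigenvalue's vector
    median = np.median(fiedler)
    return (fiedler > median).astype(int)          # median split for balance

# Path graph 0-1-2-3-4-5: the natural minimum cut is between vertices 2 and 3.
n = 6
adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
part = spectral_bisect(adj)
print(part)  # one half {0,1,2}, the other {3,4,5} (labels may swap with sign)
```

Using two or three eigenvectors instead of one gives 4 or 8 parts in a single pass, which is the source of the lower partitioning cost cited above.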
Before disposing of transuranic radioactive waste at the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with long-term regulations of the United States Environmental Protection Agency (EPA), specifically the Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191), and the Land Disposal Restrictions (40 CFR 268) of the Hazardous and Solid Waste Amendments to the Resource Conservation and Recovery Act (RCRA). Sandia National Laboratories (SNL) is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This paper provides background information on the regulations, describes the SNL WIPP PA Department's approach to developing a defensible technical basis for consistent compliance evaluations, and summarizes the major observations and conclusions drawn from the 1991 and 1992 PAs.
This report describes preliminary experiments to investigate the feasibility of using electron beam (e-beam) radiolysis to destroy the organic compounds in simulated Hanford tank waste. For these experiments a simulated Hanford Tank 101-SY waste mixture was radiolyzed in a ⁶⁰Co facility to simulate radiolysis in the waste tank. This slurry was then exposed without dilution to dose levels up to 1600 Mrad at instantaneous dose rates of 2.5 × 10⁸ and 2.7 × 10¹¹ rad/s. The inferred dose to destroy all the organic material in the simulated waste, assuming destruction is linear with dose, is 1000 Mrad for the higher dose rate. The cost for organic destruction of Hanford waste at a treatment rate of 20 gpm is roughly estimated to be $10.60 per gallon. Such a system would treat all the waste in a 1-million-gallon Hanford tank in about 40 days. Estimates of capital costs are given in the body of this report. While ferrocyanide destruction was not experimentally investigated in this work, previous experiments by others suggest that ferrocyanide would also be destroyed in such a system.
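The quoted treatment time follows from simple throughput arithmetic: at 20 gallons per minute of continuous operation, a 1-million-gallon tank takes roughly 35 days, consistent with the "about 40 days" figure once downtime is allowed for.

```python
# Cross-check of the treatment-time estimate quoted above.
rate_gpm = 20.0          # treatment rate, gallons per minute
tank_gal = 1.0e6         # one Hanford tank, gallons
days = tank_gal / (rate_gpm * 60 * 24)
print(round(days, 1))    # ~34.7 days of continuous operation
```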
Artman, W.D.; Sullivan, J.J.; De La O, R.V.; Zawadzkas, G.A.
This report describes the Training and Qualification Program at the Saturn Facility. The main energy source at Saturn is the Saturn accelerator which is used to test military hardware for vulnerability to X-rays, as well as to perform various types of plasma radiation source experiments. The facility is operated and maintained by a staff of twenty scientists and technicians. This program is designed to ensure these personnel are adequately trained and qualified to perform their jobs in a safe and efficient manner. Copies of actual documents used in the program are included as appendices. This program meets all the requirements for training and qualification in the DOE Orders on Conduct of Operations and Quality Assurance, and may be useful to other organizations desiring to come into compliance with these orders.
Experiments were run to determine if oxidized Kovar could be chemically cleaned so that copper would wet the Kovar in a wet hydrogen atmosphere at 1100{degrees}C. We found that a multi-stepped acid etch process cleaned the Kovar so that copper would wet it. We also found that the degree of copper cracking after melting and cool-down correlated well with the degree of wetting.
The relatively thin web of salt that separates Bayou Choctaw Caverns 15 and 17 was evaluated using the finite-element method. The stability calculations provided insight as to whether or not any operating restrictions or recommendations are necessary. Because of the uncertainty in the exact dimensions of the salt web, various web thicknesses were examined under different operating scenarios that included individual cavern workovers and drawdowns. Cavern workovers were defined by a sudden drop in the oil-side pressure at the wellhead to atmospheric; workovers represent periods of low cavern pressure. Cavern drawdowns were simulated by enlarging the cavern diameters, thus decreasing the thickness of the web. The calculations predict that Cavern 15 dominates the behavior of the web because of its larger diameter. Thus, given the choice of caverns, Cavern 17 should be used for oil withdrawal in order to minimize the adverse impacts on the web resulting from pressure drops or cavern enlargement. From a stability point of view, maintaining normal pressures in Cavern 15 was found to be more important than operating the caverns as a gallery where both caverns are maintained at the same pressure. However, during a workover, it may be prudent to operate the caverns under similar pressures to avoid the possibility of a sudden pressure surge at the wellhead should the web fail.
A feasibility study for developing an improved tool and improved models for performing event assessments is described. The study indicates that the IRRAS code should become the base tool for performing event assessments, but that modifications would be needed to make it more suitable for routine use. Alternative system modeling approaches are explored, and an approach based on improved train-level models is recommended. These models are demonstrated for Grand Gulf and Sequoyah. The insights that can be gained from importance measures are also demonstrated. The feasibility of using Individual Plant Examination (IPE) submittals as the basis for train-level models for precursor studies was also examined. The level of reported detail was found to vary widely, but in general the submittals did not provide sufficient information to fully define the models. The feasibility of developing an industry risk profile from precursor results, and of trending precursor results for individual plants, was also considered. Data sparsity would need to be taken into account when using the results of such evaluations, and because the data for individual plants are extremely sparse, we found that trending evaluations for groups of plants would be more meaningful than trending evaluations for individual plants.
The Modal Group at Sandia National Laboratories performs a variety of tests on structures ranging from weapons systems to wind turbines. The desired number of data channels for these tests has increased significantly over the past several years. Tests requiring large numbers of data channels make roving accelerometers impractical and inefficient. The Modal Lab has implemented a method in which the test unit is fully instrumented before any data measurements are taken. This method uses a 16-channel data acquisition system and a mechanical switching setup to access each bank of accelerometers. A database containing all transducer sensitivities, location numbers, and coordinate information is resident on the system, enabling quick updates for each data set as it is patched into the system. This method has reduced test time considerably and is easily customized to accommodate data acquisition systems with larger channel capabilities.
Vadose-zone moisture transport near an impermeable barrier has been under study at a field site near Albuquerque, NM since 1990. Moisture content and temperature have been monitored in the subsurface on a regular basis; both undergo a seasonal variation about average values. Even though the slab introduces two-dimensional effects on the scale of the slab, moisture and heat transport is predominantly vertical. Numerical simulations, based on the models developed by Philip and de Vries (1957) and de Vries (1958), indicate that the heat flow is conduction-dominated while the moisture movement is dominated by diffusive vapor distillation. Model predictions of the magnitude and extent of changes in moisture content underneath the slab are in reasonable agreement with observation.
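For context, the simulations cited above are typically built on the coupled moisture equation of Philip and de Vries; the one-dimensional form below is the standard textbook statement, not transcribed from the report:

```latex
% Philip-de Vries (1957) moisture transport, 1-D vertical form:
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\!\left(D_{\theta}\,\frac{\partial \theta}{\partial z}\right)
  + \frac{\partial}{\partial z}\!\left(D_{T}\,\frac{\partial T}{\partial z}\right)
  + \frac{\partial K}{\partial z}
% theta : volumetric moisture content
% D_theta, D_T : isothermal and thermal moisture diffusivities
%                (combined liquid and vapor contributions)
% K : unsaturated hydraulic conductivity; z : vertical coordinate
```

The thermal-gradient term $D_T\,\partial T/\partial z$ is the vapor-distillation pathway that the abstract identifies as dominating moisture movement at this site.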
A method is proposed for suppressing the resonances that occur as an item of rotating machinery is spun up from rest to its operating speed. This proposed method invokes “stiffness scheduling” so that the resonant frequency of the system is shifted during spin-up so as to be distant from the excitation frequency. A strategy for modulating the stiffness through the use of shape memory alloy is also presented.
A Sandia National Laboratories/AT&T Bell Laboratories team is developing a soft x-ray projection lithography tool that uses a compact laser plasma as a source of 14 nm x-rays. Optimization of the 14 nm x-ray source brightness is a key issue in this research. This paper describes our understanding of the source as it has been obtained through the use of computer simulations utilizing the LASNEX radiation-hydrodynamics code.
Lightning protection systems (LPSs) for explosives handling and storage facilities have long been designed similarly to those needed for more conventional facilities, but their overall effectiveness in controlling interior electromagnetic (EM) environments has still not been rigorously assessed. Frequent lightning-caused failures of a security system installed in earth-covered explosives storage structures prompted the U.S. Army and Sandia National Laboratories to conduct a program to determine quantitatively the EM environments inside an explosives storage structure that is struck by lightning. These environments were measured directly during rocket-triggered lightning (RTL) tests in the summer of 1991 and were computed using linear finite-difference, time-domain (FDTD) EM solvers. The experimental and computational results were first compared in order to validate the code and were then used to construct bounds for interior environments corresponding to severe incident lightning flashes. The code results were also used to develop simple circuit models for the EM field behavior.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Kannan, S.K.; Warnow, T.J.
The problem of constructing trees given a matrix of interleaf distances is motivated by applications in computational evolutionary biology and linguistics. The general problem is to find an edge-weighted tree which most closely approximates (under some norm) the distance matrix. Although the construction problem is easy when the tree exactly fits the distance matrix, optimization problems under all popular criteria are either known or conjectured to be NP-complete. In this paper we consider the related problem where we are given a partial order on the pairwise distances, and wish to construct (if possible) an edge-weighted tree realizing the partial order. In particular we are interested in partial orders which arise from experiments on triples of species. We will show that the consistency problem is NP-hard in general, but that for certain special cases the construction problem can be solved in polynomial time.
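The "exact fit" case mentioned above is characterized by the classical four-point condition; the sketch below tests it directly (a standard result, not the authors' algorithm, and `four_point_ok`/`is_additive` are hypothetical helper names):

```python
from itertools import combinations

def four_point_ok(D, quad):
    # Buneman's four-point condition: among the three pairings of the
    # quadruple, the two largest distance sums must be equal.
    i, j, k, l = quad
    s = sorted([D[i][j] + D[k][l], D[i][k] + D[j][l], D[i][l] + D[j][k]])
    return s[1] == s[2]

def is_additive(D):
    # A distance matrix is realized exactly by an edge-weighted tree
    # iff every quadruple of leaves satisfies the four-point condition.
    return all(four_point_ok(D, q) for q in combinations(range(len(D)), 4))

# Distances along the unit-weight path a-b-c-d fit a tree exactly:
path = [[0, 1, 2, 3],
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]
print(is_additive(path))   # True
```

The hardness results in the paper concern the relaxed setting where only a partial order on these pairwise distances, rather than their values, is given.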
The dynamics of flexible bodies spinning at rates near or above their first natural frequencies is a notoriously difficult area of analysis. Recently, a method of analysis, tentatively referred to as a method of quadratic modes, has been developed to address this sort of problem. This method restricts consideration to configurations in which all kinematic constraints are automatically satisfied through second order in deformation. Besides providing robustness, this analysis method reduces the problem from one that would otherwise require the reformulation of stiffness matrices at each time step to one of solving only a small number of nonlinear equations at each time step. A test of this method has been performed, examining the vibrations of a rotating, inflated membrane.
Parallel computing offers new capabilities for using molecular dynamics (MD) to simulate larger numbers of atoms and longer time scales. In this paper we discuss two methods we have used to implement the embedded atom method (EAM) formalism for molecular dynamics on multiple-instruction/multiple-data (MIMD) parallel computers. The first method (atom-decomposition) is simple and suitable for small numbers of atoms. The second method (force-decomposition) is new and is particularly appropriate for the EAM because all the computations are between pairs of atoms. Both methods have the advantage of not requiring any geometric information about the physical domain being simulated. We present timing results for the two parallel methods on a benchmark EAM problem and briefly indicate how the methods can be used in other kinds of materials MD simulations.
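The atom-decomposition idea can be illustrated with a toy serial sketch: each of P "processors" owns a fixed block of atoms and computes the forces on only those atoms while reading all coordinates. The inverse-square pair force here is a made-up 1-D stand-in for the EAM functional, not a reproduction of it:

```python
def pair_force(xi, xj):
    # Toy attractive inverse-square interaction in 1-D (illustrative only).
    r = xj - xi
    return r / abs(r) ** 3

def forces(x, P):
    # Atom decomposition: split the atom list into P contiguous blocks;
    # block p computes the total force on each of its own atoms.
    n = len(x)
    f = [0.0] * n
    for p in range(P):              # each iteration = one processor's share
        lo, hi = p * n // P, (p + 1) * n // P
        for i in range(lo, hi):     # atoms owned by processor p
            f[i] = sum(pair_force(x[i], x[j]) for j in range(n) if j != i)
    return f

# Any processor count gives the same answer; only the work partition changes.
x = [0.0, 1.0, 2.5, 4.0, 7.0]
assert forces(x, 4) == forces(x, 1)
```

Note the property the abstract highlights: the partition is over atom indices, with no geometric information about the simulation domain required.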
The Vapnik-Chervonenkis (V-C) dimension is an important combinatorial tool in the analysis of learning problems in the PAC framework. For polynomial learnability, we seek upper bounds on the V-C dimension that are polynomial in the syntactic complexity of concepts. Such upper bounds are automatic for discrete concept classes, but hitherto little has been known about what general conditions guarantee polynomial bounds on V-C dimension for classes in which concepts and examples are represented by tuples of real numbers. In this paper, we show that for two general kinds of concept class the V-C dimension is polynomially bounded as a function of the syntactic complexity of concepts. One is classes where the criterion for membership of an instance in a concept can be expressed as a formula (in the first-order theory of the reals) with fixed quantification depth and exponentially-bounded length, whose atomic predicates are polynomial inequalities of exponentially-bounded degree. The other is classes where containment of an instance in a concept is testable in polynomial time, assuming we may compute standard arithmetic operations on reals exactly in constant time. Our results show that in the continuous case, as in the discrete, the real barrier to efficient learning in the Occam sense is complexity-theoretic and not information-theoretic. We present examples to show how these results apply to concept classes defined by geometrical figures and neural nets, and derive polynomial bounds on the V-C dimension for these classes.
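For reference, the definition assumed throughout is the standard one (not specific to this paper):

```latex
% Vapnik-Chervonenkis dimension of a concept class C:
\mathrm{VCdim}(\mathcal{C}) \;=\; \sup\{\, |S| \;:\; S \text{ is shattered by } \mathcal{C} \,\}
% where a finite set S of instances is shattered by C if for every
% subset S' of S there is a concept c in C with  c \cap S = S'.
```

The paper's bounds make this quantity polynomial in the syntactic complexity of the concept representation, which is what Occam-style learning arguments require.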
During hydrocarbon reservoir stimulations, such as hydraulic fracturing, the cracking and slippage of the formation results in the emission of seismic energy. The objective of this study was to determine the properties of these induced micro-seisms. A hydraulic fracture experiment was performed in the Piceance Basin of Western Colorado to induce and record micro-seismic events. The formation was subjected to four processes: breakdown/ballout, step-rate test, KCL mini-fracture, and linear-gel mini-fracture. Micro-seisms were acquired with an advanced three-component wall-locked seismic accelerometer package, placed in an observation well 211 ft offset from the fracture well. During the two hours of formation treatment, more than 1200 micro-seisms with signal-to-noise ratios in excess of 20 dB were observed. The observed micro-seisms had a nominally flat frequency spectrum from 100 Hz to 1500 Hz and lack the spurious tool-resonance effects evident in previous attempts to measure micro-seisms. Both p-wave and s-wave arrivals are clearly evident in the data set, and hodogram analysis yielded coherent estimates of the event locations. This paper describes the characteristics of the observed micro-seismic events (event occurrence, signal-to-noise ratios, and bandwidth) and illustrates that the new acquisition approach results in enhanced detectability and event location resolution.
An essential requirement for both Vertical Seismic Profiling (VSP) and Cross-Hole Seismic Profiling (CHSP) is the rapid acquisition of high resolution borehole seismic data. Additionally, full wave-field recording using three-component receivers enables the use of both transmitted and reflected elastic wave events in the resulting seismic images of the subsurface. To this end, an advanced three-component multi-station borehole seismic receiver system has been designed and developed by Sandia National Labs (SNL) and OYO Geospace. The system acquires data from multiple three-component wall-locking accelerometer packages and telemeters digital data to the surface in real-time. Due to the multiplicity of measurement stations and the real-time data link, acquisition time for the borehole seismic survey is significantly reduced. The system was tested at the Chevron La Habra Test Site using Chevron's clamped axial borehole vibrator as the seismic source. Several source and receiver fans were acquired using a four-station version of the advanced receiver system. For comparison purposes, an equivalent data set was acquired using a standard analog wall-locking geophone receiver. The test data indicate several enhancements provided by the multi-station receiver relative to the standard receiver: drastically improved signal-to-noise ratio, increased signal bandwidth, the detection of multiple reflectors, and a true 4:1 reduction in survey time.
The earth's ionosphere consists of an ionized plasma which will interact with any electromagnetic wave propagating through it. The interaction is particularly strong at VHF and UHF frequencies but decreases at higher microwave frequencies. These interaction effects and their relationship to the operation of a wide-bandwidth, synthetic-aperture, space-based radar are examined. Emphasis is placed on the dispersion effects and the polarimetric effects. Results show that high-resolution (wide-bandwidth) and high-quality coherent polarimetrics will be very difficult to achieve below 1 GHz.
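The frequency dependence asserted above is captured by the standard first-order expression for ionospheric group delay (a textbook propagation result, not a formula quoted from the report):

```latex
% First-order excess group delay for a wave of frequency f traversing
% a column with total electron content TEC (electrons per m^2):
\Delta t \;=\; \frac{40.3\;\mathrm{TEC}}{c\,f^{2}} \quad [\mathrm{s}]
% c : speed of light.  The 1/f^2 falloff is why dispersion that is
% severe at VHF/UHF becomes manageable above roughly 1 GHz.
```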
SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program that provides engineering simulations of user-specified flow networks at the system level. It includes fluid mechanics, heat transfer, and reactor dynamics capabilities. SAFSIM provides sufficient versatility to allow the simulation of almost any flow system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary goals of SAFSIM development. The current capabilities of SAFSIM are summarized and some sample applications are presented. It is applied here to a nuclear thermal propulsion system and a nuclear rocket engine test facility.
Proceedings - 6th Annual IEEE International ASIC Conference and Exhibit, ASIC 1993
Shen, Hui-Chien; Becker, S.M.
Many designs use EPLDs (Erasable Programmable Logic Devices) to implement control logic and state machines. If the design is slow, timing through the EPLD is not crucial so designers often treat the device as a black box. In high speed designs, timing through the EPLD is critical. In these cases a thorough understanding of the device architecture is necessary. Lessons learned in the implementation of a high-speed design using the Altera EPM5130 are discussed.
A new Assembly Test Chip, ATC04, designed to measure mechanical stresses at the die surface, has been built and tested. This CMOS chip, 0.25 in. on a side, has an array of 25 piezoresistive stress-sensing cells, four resistive heaters, and two ring oscillators. The ATC04 chip facilitates making stress measurements with relatively simple test equipment and data analysis. The design, use, and accuracy of the chip are discussed, and initial results are presented from three types of stress measurement experiments: four-point bending calibration, single-point bending of a substrate with an ATC04 attached by epoxy, and stress produced by a liquid epoxy encapsulant.
The feasibility of utilizing a ground-based laser without an orbital mirror for space debris removal is examined. Technical issues include atmospheric transmission losses, adaptive-optics corrections of wavefront distortions, laser field-of-view limitations, and laser-induced impulse generation. The physical constraints require a laser with megawatt output, long run-time capability, and a wavelength with good atmospheric transmission characteristics. It is found that a 5-MW reactor-pumped laser can deorbit debris having masses of the order of one kilogram from the orbital altitudes to be used by Space Station Freedom. Debris under one kilogram can be deorbited after one pass over the laser site, while larger debris can be deorbited or transferred to alternate orbits after multiple passes over the site.
Proceedings - 1993 IEEE/Tsukuba International Workshop on Advanced Robotics: Can Robots Contribute to Preventing Environmental Deterioration?, ICAR 1993
Hwang, Yong K.
Automatic motion planning of a spray cleaning robot with collision avoidance is presented in this paper. In manufacturing environments, electronic and mechanical components are traditionally cleaned by spraying or dipping them using chlorofluorocarbon (CFC) solvents. As new scientific data show that such solvents are major causes of stratospheric ozone depletion, an alternative cleaning method is needed. Part cleaning with aqueous solvents is environmentally safe, but can require precision spraying at high pressures for extended time periods. Operator fatigue during manual spraying can decrease the quality of the cleaning process. By spraying with a robotic manipulator, the spray accuracy and consistency necessary to manufacture high-reliability components can be obtained. Our motion planner was developed to automatically generate motions for spraying robots based on the part geometry and cleaning process parameters. For spraying paint and other coatings, a geometric description of the parts and robot may be sufficient for motion planning, since coatings are usually applied only over the visible surfaces. For spray cleaning, the requirement to reach hidden surfaces necessitates the addition of a rule-based method to the geometric motion planning.
The geochemical properties of a porous sand and several tracers (Ni, Br, and Li) have been characterized for use in a caisson experiment designed to validate sorption models used in models of reactive transport. The surfaces of the sand grains have been examined by a combination of techniques including potentiometric titration, acid leaching, optical microscopy, and scanning electron microscopy with energy-dispersive spectroscopy. The surface studies indicate the presence of small amounts of carbonate, kaolinite and iron-oxyhydroxides. Adsorption of nickel, lithium and bromide by the sand was measured using batch techniques. Bromide was not sorbed by the sand. A linear (Kd) or an isotherm sorption model may adequately describe transport of Li; however, a model describing the changes of pH and the concentrations of other solution species as a function of time and position within the caisson and the concomitant effects on Ni sorption may be required for accurate predictions of nickel transport.
For problems where media properties are measured at one scale and applied at another, scaling laws or models must be used in order to define effective properties at the scale of interest. The accuracy of such models will play a critical role in predicting flow and transport through the Yucca Mountain Test Site given the sensitivity of these calculations to the input property fields. Therefore, a research program has been established to gain a fundamental understanding of how properties scale, with the aim of developing and testing models that describe scaling behavior in a quantitative manner. Scaling of constitutive rock properties is investigated through physical experimentation involving the collection of suites of gas permeability data measured over a range of discrete scales. Also, various physical characteristics of property heterogeneity, and the means by which the heterogeneity is measured and described, are systematically investigated to evaluate their influence on scaling behavior. This paper summarizes the approach that is being taken toward this goal and presents the results of a scoping study that was conducted to evaluate the feasibility of the proposed research.
Experimental results exploring gravity-driven wetting front instability in a pre-wetted, rough-walled analog fracture are presented. Initial conditions considered include a uniform moisture field wetted to the field capacity of the analog fracture and the structured moisture field created by unstable infiltration into an initially dry fracture. As in previous studies performed under dry initial conditions, instability was found to result both at the cessation of stable infiltration and at fluxes lower than the fracture capacity under gravitational driving force. Individual fingers were faster, narrower, longer, and more numerous than observed under dry initial conditions. Wetting fronts were found to follow existing wetted structure, providing a mechanism for rapid recharge and transport.
In an attempt to achieve completeness and consistency, the performance-assessment analyses developed by the Yucca Mountain Project are tied to scenarios described in event trees. Development of scenarios requires describing the constituent features, events, and processes in detail. Several features and processes occurring at the waste packages and the rock immediately surrounding the packages (i.e., the near field) have been identified: the effects of radiation on fluids in the near-field rock, the path-dependency of rock-water interactions, and the partitioning of contaminant transport between colloids and solutes. This paper discusses some questions regarding these processes that the near-field performance-assessment modelers will need to have answered to specify those portions of scenarios dealing with the near field.
Experiments investigating the behavior of individual, gravity-driven fingers in an initially dry, rough-walled analog fracture are presented. Fingers were initiated from constant flow to a point source. Finger structure is described in detail; specific phenomena observed include: desaturation behind the finger-tip, variation in finger path, intermittent flow structures, finger-tip bifurcation, and formation of dendritic sub-fingers. Measurements were made of finger-tip velocity, finger width, and finger-tip length. Non-dimensional forms of the measured variables are analyzed relative to the independent parameters, flow rate and gravitational gradient.