The possibility of formulating and validating a multi-site, multi-solute model for prediction of contaminant transport in groundwaters is being evaluated through experiments with simple analog systems. These systems consist of mixtures of well-characterized synthetic and natural materials in which the effects of sorption by ion exchange and amphoteric sites are isolated. Initial results are reported for studies of lead sorption by mixtures of goethite and montmorillonite, and Ni-Sr and Pb-Sr ion exchange by montmorillonite. The results of studies of simple clay-oxide mixtures indicate that the pH-dependent sorption behavior of Ni by mixtures of minerals containing amphoteric sites can be predicted from the properties of the component minerals.
In this summary, we re-evaluate estimates of trapped-hole energies inferred from TSC measurements and transistor annealing studies. Improved estimates of the trapped-hole "attempt-to-escape" frequency ({upsilon}{sub A}) and a quantitative treatment of (Schottky) electric-field induced barrier lowering strongly suggest that previous estimates of trapped-hole energies in TSC and transistor annealing studies are too low. Moreover, we show that TSC measurements can be modeled analytically from first principles, and the resulting model can accurately predict TSC measurements under arbitrary heating conditions. Finally, we evaluate the dependence of electron trapping in irradiated SiO{sub 2} on dose and on electric field during irradiation. 30 refs.
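The idea of modeling TSC from first principles can be illustrated with a standard first-order (Randall-Wilkins) detrapping calculation under linear heating. The trap depth, attempt-to-escape frequency, and heating rates below are illustrative assumptions, not values from this study; the sketch simply reproduces the textbook behavior that the TSC peak shifts with heating rate.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def tsc_current(E, nu, beta, T0=100.0, Tmax=400.0, n=4000):
    """First-order (Randall-Wilkins) thermally stimulated current for
    trap depth E (eV), attempt-to-escape frequency nu (1/s), and linear
    heating rate beta (K/s). Returns (T, I); I is in arbitrary units
    with the initial trapped charge normalized to 1."""
    T = np.linspace(T0, Tmax, n)
    rate = nu * np.exp(-E / (K_B * T))  # thermal emission rate at each T
    # trapezoid-rule cumulative integral of the emission rate over T
    cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) * np.diff(T)))
    )
    # current = (surviving trapped charge) x (emission rate)
    I = rate * np.exp(-cum / beta)
    return T, I

# Faster heating shifts the TSC peak to higher temperature:
T, I_slow = tsc_current(E=0.8, nu=1e12, beta=0.1)
_, I_fast = tsc_current(E=0.8, nu=1e12, beta=10.0)
```

Fitting curves of this form to measured glow peaks is one way an assumed {upsilon}{sub A} propagates directly into the inferred trap energy, which is why revising the attempt-to-escape frequency revises the energy estimates.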
Site characterization is an integral component of any environmental assessment or restoration project. However, it is often difficult to know how to prioritize site characterization activities. In the absence of a preliminary analysis, site characterization decisions are sometimes guided by little more than intuition. The objective of this paper is to show that a Performance Assessment Methodology, used very early in a project, can be a useful tool for guiding site characterization activities. As an example, a "preliminary" performance assessment for the Greater Confinement Disposal project is used to demonstrate implementation of the methodology.
We present results that correlate microstructural and mechanical evolution with variations in deformation rate, hold time, and environmental effects on the thermomechanical fatigue (TMF) behavior of 60Sn-40Pb solder. The results are used to define valid conditions for performing accelerated TMF tests. TMF tests at deformation rates of 5.6{times}10{sup {minus}4}s{sup {minus}1}, 2.8{times}10{sup {minus}4}s{sup {minus}1} and 2.1{times}10{sup {minus}4}s{sup {minus}1} were performed. Deformation rates greater than 2.8{times}10{sup {minus}4}s{sup {minus}1} result in fewer cycles to failure. At low deformation rates, the microstructure heterogeneously coarsens at cell boundaries. At higher rates, the deformation mechanism changes, and heterogeneous coarsening occurs at a strain concentration in the joint, independent of the microstructure. TMF tests with hold times of 0, 3, and 6 min at the temperature extremes were performed. At hold times of 3 min or longer, the damage at cell boundaries is annealed, resulting in heterogeneous coarsening. With no hold times, TMF life was greatly enhanced as a result of limited coarsening. The effect of the oxygen environment was explored; TMF life in the presence of oxygen was found to be extended. Valid acceleration conditions for a TMF test of solder are a deformation rate of 2.8{times}10{sup {minus}4}s{sup {minus}1} or lower, with hold times of 3 min or longer.
A new, laser-based system has been developed for rapid evaluation of monolithic thermoluminescence dosimetry (TLD) arrays. A precision-controlled CO{sub 2} laser is used to sequentially heat 1.5 mm diameter, {approx} 0.04 mm thick TLDs deposited on a 0.125 mm thick polymer substrate in a 3 mm {times} 3 mm grid. Array areas up to 30 cm {times} 30 cm are used (> 10,000 TLD elements), with evaluation times of 45--90 minutes. Isodose contours and various analysis functions are available on the system-operating PC. This system allows for greatly expanded dosimetry compared to standard TLDs, while simultaneously decreasing effort and record keeping. We compared the dosimetric characteristics of this system with standard techniques, using near Si-equivalent CaF{sub 2}:Mn TLD elements, in a test with 19 MeV end-point X radiation. The results show the laser system performs as well as the standard system. 4 refs.
The Chemical Waste Landfill (CWL) was used by Sandia National Laboratories (SNL), Albuquerque for disposal of hazardous chemicals from 1962 to 1985. Prompted by the detection of low levels of trichloroethylene (TCE) in groundwater samples from a water table aquifer approximately 146 meters below ground surface, a RCRA Site Investigation (RSI) and remediation of organic contaminants will be performed at the CWL prior to closure of this landfill. The RSI is focused on optimal characterization of the VOC and dense non-aqueous phase liquid (DNAPL) contamination at this site, which will be possible through application of innovative strategies for characterization and promising new technologies. This paper provides a discussion of conceptual models of contaminant transport at the CWL and an overview of our investigative strategy, which is focused on characterizing transport of VOCs and DNAPLs. Each stage of the RSI has been developed to gather information that will reduce the uncertainty in the design of each subsequent phase of the investigation. Three stages are described: a source characterization stage, an unsaturated zone characterization stage, and a saturated zone characterization stage. The unsaturated zone characterization must provide all data necessary to make decisions concerning the necessity of a saturated zone characterization phase.
Numerous investigations have studied the potential for chaotic vibrations of nonlinear systems. It has been shown for many simple nonlinear systems that, when they are excited severely enough or with the appropriate parametric combinations, they will execute chaotic vibrations. The present investigation considers the potential for the occurrence of chaos in a practical nonlinear system -- the isolated accelerometer. A simple, first-order model is proposed for the isolated accelerometer, and it is shown that chaos can occur in the isolated accelerometer. A preliminary investigation into the bearing that this chaos potential has on the measurement of shock response is summarized. 7 refs.
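As a rough illustration of how a simple, low-order nonlinear model can exhibit chaos, the sketch below integrates a textbook forced Duffing oscillator and shows the sensitive dependence on initial conditions that characterizes chaotic vibration. The oscillator and its parameter values are a generic stand-in, not the isolated-accelerometer model proposed in this study.

```python
import numpy as np

def duffing_rhs(t, y, delta=0.3, gamma=0.5, omega=1.2):
    """Twin-well Duffing oscillator x'' + delta*x' - x + x^3 = gamma*cos(omega*t),
    with parameter values commonly used to illustrate chaotic response."""
    x, v = y
    return np.array([v, -delta * v + x - x**3 + gamma * np.cos(omega * t)])

def integrate(y0, t_end=200.0, dt=0.01):
    """Fixed-step 4th-order Runge-Kutta integration of the Duffing system."""
    y = np.array(y0, dtype=float)
    t = 0.0
    while t < t_end - 1e-12:
        k1 = duffing_rhs(t, y)
        k2 = duffing_rhs(t + dt / 2, y + dt / 2 * k1)
        k3 = duffing_rhs(t + dt / 2, y + dt / 2 * k2)
        k4 = duffing_rhs(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y

# Two trajectories starting 1e-8 apart diverge to a macroscopic separation,
# the hallmark of chaotic response.
a = integrate([0.5, 0.0])
b = integrate([0.5 + 1e-8, 0.0])
separation = np.linalg.norm(a - b)
```

For a measurement system, this kind of divergence is exactly why chaotic response is a concern: two nominally identical shock inputs could produce markedly different recorded responses.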
Carrier-driven photochemical reactions require direct participation of free carriers for the chemical reaction to proceed. Therefore, they can be selectively suppressed by increasing the carrier recombination rate through creation of defects using ion implantation. The residual defect concentration following ion implantation should correlate with etching suppression. Changes in the Raman LO-phonon lineshape correlate well with the degree of etching suppression and predict etching behavior better than defect concentrations calculated with the Monte Carlo code, TRIM. Raman spectroscopy may be a useful pre-etch diagnostic to predict the degree of etching suppression resulting from a given implantation treatment. 11 refs.
Direct containment heating (DCH) has recently been studied at Sandia National Laboratory's Surtsey facility in a number of experiments in which high-temperature thermite melts are ejected by pressurized steam from a melt generator into scaled reactor cavities. Steam blowdown from the melt generator disperses at least part of the melt into the Surtsey vessel. Efficient steam-metal chemical reaction was observed in many of the experiments. Analysis of the results suggests that hydrogen generation occurs primarily in the cavity, and that retaining debris in the cavity can actually reduce hydrogen generation by separating the debris from the blowdown steam. Debris-gas heat transfer appears to include both a component that takes place in the cavity in proportion to the hydrogen generation, and a second component that takes place in the Surtsey vessel itself. The magnitude of the latter depends upon the amount of debris dispersed and the length of the unobstructed flight path in the Surtsey vessel. Some possible implications of these results are discussed.
A Senate Committee requested assistance from Sandia in determining the adequacy of the investigation of the incident aboard the USS IOWA. This currently unexplained explosion occurred in Turret 2 of the battleship on April 19, 1989, killing 47 crewmen. The investigation included material characterization of debris found after the explosion, ignition experiments to characterize the propellant, and analytic modeling of the mechanics, interior ballistics, and ignition. The analytic modeling is described in this paper. The modeling of the incident was concerned with the mechanics of the ramming equipment used to load the 16-inch guns, and with the interior ballistics and ignition of the propellant. Many separate analyses were performed to explain the crushing of the propellant grains, the dynamics and location of ignition of the propellant train, and the presence of damage after the incident. The goal of this modeling was to assess the feasibility of the various events in the turret and to identify the cause of the incident. An item of particular interest was damage to the rammer control handle quadrant. The US Navy conjectured that the blast propelled the rammerman's seat into the quadrant in such a way as to suggest a low-speed ram during the incident. The speed of the ram was discovered to be very important in determining the probability of ignition during an overram, and an analysis of the rammerman's seat motion was completed. In order to understand how the seat impacts the quadrant, a three-dimensional finite element analysis was completed using ABAQUS/Explicit. The loading of the seat was due to two-phase gas and propellant flow through the bag train and into the turret volume. The results showed that impact onto the quadrant probably first occurred at the rear, dislodging it from its mount. This analysis was pivotal in the examination of the incident, and was the final evidence that the cause of the explosion could not be conclusively determined.
High spatial resolution x-ray microanalysis in the analytical electron microscope (AEM) is a technique by which chemical composition can be determined on spatial scales of less than 50 nm. Depending upon the size of the incident probe, the energy (voltage) of the beam, the average atomic number of the material being analyzed, and the thickness of the specimen at the point of analysis, it is possible to measure uniquely the composition of a region 2--20 nm in diameter. Conventional thermionic (tungsten or LaB{sub 6}) AEMs can attain direct spatial resolutions as small as 20 nm, while field emission gun (FEG) AEMs can attain direct spatial resolutions approaching 2 nm. Recently, efforts have been underway to extract compositional information on a finer spatial scale by using massively parallel Monte Carlo electron trajectory simulations coupled with AEM measurements. By deconvolving the measured concentration profile with the calculated x-ray generation profile it is possible to extract compositional information at near atomic resolution.
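The deconvolution step can be sketched generically: if the measured concentration profile is the true profile convolved with an x-ray generation (beam-broadening) profile, a regularized Fourier division recovers a sharper estimate. The profile shapes and the Wiener-style regularization below are illustrative assumptions standing in for the Monte Carlo-calculated generation profile described here.

```python
import numpy as np

def deconvolve(measured, kernel, eps=1e-3):
    """Recover a sharper profile from a measured one by Fourier division,
    with a small Wiener-style term eps that keeps frequencies where the
    kernel response is weak from amplifying measurement noise."""
    n = len(measured)
    centered = np.roll(kernel, -int(np.argmax(kernel)))  # kernel peak at index 0
    K = np.fft.fft(centered, n)
    M = np.fft.fft(measured)
    est = M * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft(est))

# Illustrative: an abrupt interface blurred by a ~2 nm beam-broadening profile.
x = np.linspace(-20.0, 20.0, 256)              # position, nm
true_profile = (x > 0).astype(float)           # abrupt composition step
kernel = np.exp(-x**2 / (2 * 2.0**2))          # assumed generation profile
kernel /= kernel.sum()
centered = np.roll(kernel, -int(np.argmax(kernel)))
measured = np.real(np.fft.ifft(np.fft.fft(true_profile) * np.fft.fft(centered)))
recovered = deconvolve(measured, kernel)
```

The recovered interface is much sharper than the measured (blurred) one, which is the sense in which deconvolution pushes the effective analysis resolution below the beam-broadening limit.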
Chemical and physical transformations involved in ion implantation processes in glasses determine changes in mechanical and tribological properties, in network dilatation, in induced optical absorption and luminescence, and in the composition and chemical behavior as a function of different experimental conditions (ion, energy, dose, target temperature). Variations of the chemical etch rate in HF are related to radiation damage and the formation of compounds. A systematic study of the etch rate changes in silica due to Ar, N, and Si plus N implants has been performed. Structure modifications at depths greater than the corresponding implanted ion ranges are evident for nuclear deposited energies greater than 10{sup 22} keV cm{sup {minus}3}. Formation of silicon oxynitrides reduces the etch rate values.
Archimedes is a prototype mechanical assembly system which generates and executes robot assembly programs from a CAD model input. The system seeks to increase flexibility in robotic mechanical assembly applications by automating the programming task. Input is a solid model of the finished assembly, augmented by additional design information such as weld specifications. Parts relationships and geometric constraints are deduced from the solid model. A rule-based planner generates a "generic" assembly plan that satisfies the geometric constraints, as well as other constraints embodied in the rules. A plan compiler then converts the generic plan into code specific to an application environment. Other outputs include fixture designs, workcell layout information, object-recognition (vision) routines, grasp plans, and executable code for controlling the robot and workcell accessories. Lessons from operating and demonstrating the system are presented, with a particular emphasis on the implications for future systems. 12 refs.
Isolated accelerometer measurement systems are used to measure environments composed of a wide spectrum of frequencies, including the natural frequency of the isolated accelerometer. Because the isolated accelerometer measurement system is a nonlinear system, it is subject to the potential for chaotic vibrations. It is clear that this potential, if realized, affects the response of the measurement system to vibration input and perhaps to shock input as well. This paper explores the effects that the potential for chaotic vibrations, and nonlinear response in general, has on the random vibration response of the isolated accelerometer measurement system. Specifically, the system response to white noise is investigated and assessed in terms of the response histogram and response spectral density. 6 refs.
In the Federal Register, Volume 51, Number 168, the NRC announced its intent to use IMPACTS-BRC to evaluate petitions for classifying radioactive waste streams as below regulatory concern. IMPACTS-BRC is a generic radiological assessment code that allows calculation of potential impacts to maximum individuals, waste disposal workers, and the general population resulting from exemption of very low-level radioactive waste from regulatory control. The code allows calculations to be made of human exposure to the waste by many pathways and exposure scenarios. This document describes the code history and the quality assurance work that has been carried out on IMPACTS-BRC. The report includes a summary of all the literature reviews pertaining to IMPACTS-BRC up to Version 2.0. The new code and data verification work necessary to produce IMPACTS-BRC, Version 2.1 is presented. General comments about the models and the treatment of uncertainty in IMPACTS-BRC are also given.
This work describes the collection, handling, transportation, thermal desorption, and analysis of explosive vapors using quartz collection tubes. A description of the sampling system is presented, along with the collection efficiency of the quartz tubes and some of the precautions necessary to maintain sample integrity. The design and performance characteristics of the thermal desorption system are also discussed. Collection of explosive vapor using empty, 0.25 inch O.D. by 5.25 inch long quartz tubes at a flow rate of 200 mL min{sup {minus}1} is quite different from collection using packed tubes. Thermal desorption of the explosive vapor molecules using a furnace that allows control of the gas phase chemistry in the IMS has been shown to provide a reliable, reproducible means of analysis. Empty quartz tubes provide a sharper desorption profile than packed collection tubes, resulting in a better signal-to-noise ratio and, perhaps, a lower detection limit than packed quartz tubes. Both the ion drift time of the explosive and its desorption characteristics can provide a means of identification. Sample handling, packaging, and transportation methods which minimize sample loss and contamination have been developed and evaluated.
This report contains the purchasing and materials management operating highlights for Fiscal Year 1991. Included in the report are compiled data on: personnel; type of procurement; small business procurements; disadvantaged business procurements; woman-owned business procurements; New Mexico commercial business procurements; Bay Area commercial business procurements; commitments by states and foreign countries to commercial suppliers; and, transportation activities. Other statistical data tables enumerate the following: the twenty-five commercial contractors receiving the largest dollar commitments; commercial contractors receiving commitments of $1000 or over; integrated contractor and federal agency commitments of $1000 or over from Sandia National Laboratories-Albuquerque and Livermore; and, transportation commitments of $1000 or over from Sandia National Laboratories-Albuquerque and Livermore.
This white paper addresses the issue of banning lead from solders used in electronics manufacturing. The current efforts by legislative bodies and regulatory agencies to curtail the use of lead in manufactured goods, including solders, are described. In response to a ban on lead or the imposition of a tax which makes lead uneconomical for use in solder alloys, alternative technologies including lead-free solders and conductive epoxies are presented. The recommendation is made that both users and producers of solder materials join together as partners in a consortium to address this issue in a timely and cost-effective manner.
The MELCOR code has been used to simulate the ST-1 and ST-2 in-pile fission product source term experiments performed in the ACRR facility. As expected, there were no major differences observed in the results calculated for the different test conditions. The CORSOR, CORSOR-M, and CORSOR-Booth release models were all tested, and the effect of including the surface-to-volume correction term was evaluated. MELCOR results were compared to test data and to VICTORIA results, and also directly to the correlations and to ST-1/ST-2 results predicted by Battelle using their stand-alone CORSOR code, to verify that the models have been implemented correctly in MELCOR. The release rates and total release fractions calculated by MELCOR generally agreed well with the test data, for both volatile and refractory species, with none of the available release model options yielding consistently better agreement with the data across species. Sensitivity studies checking for time step and noding effects and machine dependencies were done, and some machine dependencies associated with very small numbers were identified and corrected in the code. Additional sensitivity studies were run on parameters affecting core heatup and core damage, including variations both in code models, such as convective heat transfer coefficients, radiation view factors, and candling assumptions, and in experimental conditions, such as pressures, flow rates, power levels, and insulation thermal conductivity. Code and user input modeling errors encountered in these analyses are described.
This paper presents a method to solve partial differential equations governing two-phase fluid flow by using a genetic algorithm on the NCUBE/2 multiprocessor computer. Genetic algorithms represent a significant departure from traditional approaches to solving fluid flow problems. The inherent parallelism of genetic algorithms offers the prospect of obtaining solutions faster than previously possible. The paper discusses the two-phase flow equations, the genetic representation of the unknowns, the fitness function, the genetic operators, and the implementation of the genetic algorithm on the NCUBE/2 computer. The paper investigates the implementation efficiency using a pipe blowdown test and presents the effects of varying both the genetic parameters and the number of processors. The results show that genetic algorithms provide a major advancement in methods for solving two-phase flow problems. A desired goal of solving these equations for a specific simulation problem in real time or faster requires computers with an order of magnitude more processors, or faster processors, than the NCUBE/2's 1024.
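A minimal sketch of the genetic-algorithm approach, assuming a toy one-dimensional model problem in place of the two-phase flow equations: candidate solution vectors are evolved by selection, crossover, and mutation so as to minimize the norm of the discretized residual. The model equation, operators, and parameters below are illustrative, not the representation used in this paper.

```python
import numpy as np

def residual(u, h=0.2):
    """Residual of the discretized model equation u'' + 1 = 0 on (0,1)
    with u(0) = u(1) = 0, a stand-in for the two-phase flow equations."""
    full = np.concatenate(([0.0], u, [0.0]))
    return (full[:-2] - 2 * full[1:-1] + full[2:]) / h**2 + 1.0

def fitness(u):
    """Sum-of-squares residual; a perfect solution has fitness 0."""
    return np.sum(residual(u) ** 2)

def evolve(pop_size=80, dim=4, gens=300, seed=1):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    fits = np.array([fitness(p) for p in pop])
    for _ in range(gens):
        new = [pop[np.argmin(fits)].copy()]        # elitism: keep the best
        while len(new) < pop_size:
            # tournament selection of two parents
            i = rng.integers(pop_size, size=2)
            j = rng.integers(pop_size, size=2)
            a = pop[i[np.argmin(fits[i])]]
            b = pop[j[np.argmin(fits[j])]]
            child = 0.5 * (a + b)                  # blend crossover
            child += rng.normal(0.0, 0.1, dim)     # Gaussian mutation
            new.append(child)
        pop = np.array(new)
        fits = np.array([fitness(p) for p in pop])
    return pop[np.argmin(fits)], fits.min()

best_u, best_fit = evolve()
```

The population-level fitness evaluations are independent of one another, which is the source of the inherent parallelism exploited on the NCUBE/2.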
Although plasma cleaning is a recognized substitute for solvent cleaning in removing organic contaminants, current cleaning rates are impractically low for many applications. A set of experiments is described which demonstrates that the rate of plasma removal of organic contaminants can be greatly increased by modification of the plasma chemistry. A comparison of plasma cleaning rates of argon, oxygen, and oxygen/sulfur hexafluoride gases shows that the fluorine-containing plasma is at least an order of magnitude faster at etching organics. Rates are reported for the removal of polymer films and of A-9 Aluminum cutting fluid. 7 refs.
Measuring the yield of an underground nuclear detonation using sensor cables has been proposed for verification purposes. These cables not only sense the signals associated with the yield; they also capture the sensitive primary and secondary electromagnetic pulses associated with the detonation, which have nothing to do with the yield. An anti-intrusiveness device is to be connected to the sensor cable to prevent the electromagnetic pulses from passing through to the verifier. The anti-intrusiveness device both attenuates the electromagnetic pulses and adds noise to the cable over the interval of time during which the electromagnetic pulses may be present. This report addresses the problem of determining the optimum noise spectral density for masking the electromagnetic pulses. To this end it derives an expression for the lower bound on the error in the estimation of the time separation between two pulses when the time of arrival of neither is known and both are embedded in Gaussian noise. The noise spectral shapes considered are white, lowpass, and bandpass.
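To make the estimation problem concrete, the sketch below estimates the separation of two pulses embedded in white Gaussian noise by matched filtering: correlate against the known pulse shape, take the two largest correlation peaks, and difference their locations. It is an illustrative stand-in (with assumed pulse shapes and noise level), not the bound derivation of this report; raising the masking-noise level degrades exactly this kind of estimate.

```python
import numpy as np

def two_pulse_separation(x, template, guard=50):
    """Estimate the sample separation of two pulses in noise by matched
    filtering: correlate with the pulse template, take the largest peak,
    mask a guard region around it, then take the second-largest peak."""
    mf = np.correlate(x, template, mode="same")
    i1 = int(np.argmax(mf))
    mf2 = mf.copy()
    mf2[max(0, i1 - guard):i1 + guard] = -np.inf   # suppress the first peak
    i2 = int(np.argmax(mf2))
    return abs(i1 - i2)

rng = np.random.default_rng(0)
t = np.arange(-40, 41)
pulse = np.exp(-t**2 / (2 * 10.0**2))              # unit-amplitude Gaussian pulse
x = np.zeros(2000)
x[500 - 40:500 + 41] += pulse                      # first pulse centered at 500
x[1200 - 40:1200 + 41] += pulse                    # second pulse centered at 1200
x += rng.normal(0.0, 0.2, x.size)                  # white Gaussian masking noise
sep = two_pulse_separation(x, pulse)               # true separation is 700 samples
```

At this modest noise level the matched filter recovers the 700-sample separation almost exactly; the report's lower bound quantifies how much worse any estimator must do as the masking-noise spectral density is increased.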
UPEML is a machine-portable program that emulates a subset of the functions of the standard CDC Update. Machine portability has been achieved by conforming to ANSI standards for Fortran-77. UPEML is compact and fairly efficient; however, it only allows a restricted syntax as compared with the CDC Update. This program was written primarily to facilitate the use of CDC-based scientific packages on alternate computer systems such as the VAX/VMS mainframes and UNIX workstations. UPEML has also been successfully used on the multiprocessor ELXSI, on CRAYs under both UNICOS and CTSS operating systems, and on Sun, HP, Stardent, and IBM workstations. UPEML was originally released with the ITS electron/photon Monte Carlo transport package, which was developed on a CDC-7600 and makes extensive use of conditional file structure to combine several problem geometry and machine options into a single program file. UPEML 3.0 is an enhanced version of the original code and is being independently released for use at any installation or with any code package. Version 3.0 includes enhanced error checking, full ASCII character support, a program library audit capability, and a partial update option in which only selected or modified decks are written to the complete file. Version 3.0 also checks for overlapping corrections, allows processing of nested calls to common decks, and allows the use of alternate files in READ and ADDFILE commands. Finally, UPEML Version 3.0 allows the assignment of input and output files at runtime on the control line.
The Planning and Staff Support organization of Sandia National Laboratories publishes a monthly bulletin titled Energy and Environment. The bulletin facilitates technology exchange with industries, universities, and other government agencies. This bulletin is for the month of April 1992 and covers topics such as new methods of soldering that reduce environmental threats by avoiding chlorofluorocarbon solvents. Some technologies developed are soldering in controlled atmospheres, acid-vapor soldering, and laser soldering. Another topic in this bulletin is the design of catalysts for chemical reactions by computer. Biomimetic catalysts are being created by Computer-Aided Molecular Design; these biomimetic catalysts can aid in fuel conversion. In-situ remediation of soils contaminated by heavy metals is another topic in this bulletin. This in-situ process, called electrokinetic remediation, uses electrodes to induce a metal-attracting electric field in the ground. The last topic in this bulletin is the design of a semiconductor bridge (SCB), which is used to improve the timing and effectiveness of blasting. Timing accuracy is important, and the blasting industry is no exception. The SCB delivers a low-energy pulse that converts a doped region on a polysilicon substrate into a bright plasma. This plasma discharge causes ignition and produces an accurate explosion in microseconds. (MB)
Seventeen small-scale brine inflow experiment boreholes have been and are currently being monitored for brine accumulation. All of the boreholes were drilled from underground excavations at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, NM. Experiments are ongoing in Room D, Room L4, and the Q access drift in the WIPP underground. The boreholes range from approximately 5 to 90 cm in diameter and from 3 to 6 m in length. The objective of these experiments is to provide data for use in the development and validation of a predictive, mechanistic model for brine inflow to the repository. There is considerable variability in the observed responses of the different boreholes, and there are also significant similarities. Two of the boreholes in Room D have yielded no brine in more than 3.5 years, while all 15 of the other boreholes have produced anywhere from 2 to 90 kg of brine. Inflow rates vary by as much as 2 orders of magnitude for boreholes of the same dimensions in the same general location; however, inflow rates measured in most of the boreholes are of the same order of magnitude. Decreasing, increasing, and steady inflow rates have been measured. Nevertheless, 9 of the 15 brine-producing boreholes behaved similarly early in their history. These 9 boreholes all exhibited a relatively high initial inflow rate followed by a fairly smooth decline with time. Variabilities in borehole response can be explained by assuming there are heterogeneities in the formation tested. In most cases these heterogeneities are believed to be excavation-induced. Data from these experiments suggest that flow near excavations has been altered by rock deformation, including fracturing. Additional experiments are required to differentiate between a far-field, near-field, or combination brine source and to characterize the significant flow mechanism or mechanisms.
A high-velocity impact testing technique, utilizing a tethered rocket, is being developed at Sandia National Laboratories. The technique involves tethering a rocket assembly to a pivot location and flying it in a semicircular trajectory to deliver the rocket and payload to an impact target location. Integral to developing this testing technique is the parallel development of accurate simulation models. An operational computer code, called ROAR (Rocket-on-a-Rope), has been developed to simulate the three-dimensional transient dynamic behavior of the tether and motor/payload assembly. This report presents a discussion of the parameters modeled, the governing set of equations, the through-time integration scheme, and the input required to set up a model. Also included is a sample problem and a comparison with experimental results.
The Plasma/Wall Interaction and High Heat Flux Materials and Components Task Groups typically hold a joint meeting each year to provide a forum for discussion of technical issues of current interest as well as an opportunity for program reviews by the Department of Energy (DOE). At the meeting in September 1990, reported here, research programs in support of the International Thermonuclear Experimental Reactor (ITER) were highlighted. The first part of the meeting was devoted to research and development (R&D) for ITER on plasma facing components plus introductory presentations on some current projects and design studies. The balance of the meeting was devoted to program reviews, which included presentations by most of the participants in the Small Business Innovative Research (SBIR) Programs with activities related to plasma wall interactions. The Task Groups on Plasma/Wall Interaction and on High Heat Flux Materials and Components were chartered as continuing working groups by the Division of Development and Technology in DOE's Magnetic Fusion Program. This report is an addition to the series of "blue cover" reports on the Joint Meetings of the Plasma/Wall Interaction and High Heat Flux Materials and Components Task Groups. Among several preceding meetings were those in October 1989 and January 1988.
The switch delay time of the MC3858 sprytron was measured using a test matrix consisting of 36 different trigger circuit configurations. The test matrix allowed the measurement of switch delay times for peak trigger voltages ranging from 47 V to 1340 V and for stored trigger energies ranging from 0.023 mJ to 2.7 mJ. The average switch delay time was independent of peak trigger voltage above approximately 800 V. Similarly, the average switch delay was independent of trigger stored energy above approximately 0.5 mJ. Below these saturation values, the average switch delay increases rapidly with decreasing trigger voltage or energy. In contrast to the average switch delay time, the shot-to-shot variability in switch delay time does not appear to be strongly affected by peak trigger voltage as long as the trigger voltage is greater than 100 V. Below 100 V, the variability in switch delay time rises rapidly due to failure of the trigger to undergo immediate high voltage breakdown when trigger voltage is applied. The effect of an abnormally-high-resistance trigger probe on switch delay time was also investigated. It was found that a high-resistance probe behaved as a second overvoltage gap in the trigger circuit. Operation with a peak trigger voltage greater than the breakdown voltage of this second gap yielded delay times comparable to operation with a normal trigger. Operation with a peak trigger voltage less than the breakdown voltage of this second gap increased the switch delay time by an amount comparable to the time required to ramp the trigger circuit output up to the breakdown voltage of the second gap. Finally, the effect of the bias voltage applied to the sprytron on switch delay time was measured. The switch delay time did not appear to depend on bias voltage for bias voltages between 725 V and 2420 V.
Performance assessment modeling for High Level Waste (HLW) disposal incorporates three different types of uncertainty. These include data and parameter uncertainty, modeling uncertainty (which includes conceptual, mathematical, and numerical), and uncertainty associated with predicting the future state of the system. In this study, the potential impact of conceptual model uncertainty on the estimated performance of a hypothetical high-level radioactive waste disposal site in unsaturated, fractured tuff has been assessed for a given group of conceptual models. This was accomplished by taking a series of six one-dimensional conceptual models, which differed only by the fundamental assumptions used to develop them, and conducting ground-water flow and radionuclide transport simulations. Complementary cumulative distribution functions (CCDFs) representing integrated radionuclide release to the water table indicate that differences in the basic assumptions used to develop conceptual models can have a significant impact on the estimated performance of the site. Because each of the conceptual models employed the same mathematical and numerical models, contained the same data and parameter values and ranges, and did not consider the possible future states of the system, changes in the CCDF could be attributed primarily to differences in conceptual modeling assumptions. Studies such as this one could help prioritize site characterization activities by identifying critical and uncertain assumptions used in model development, thereby providing guidance as to where reduction of uncertainty is most important.
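The CCDF construction used to compare conceptual models can be sketched simply: sort the simulated integrated releases and, for each value, compute the fraction of realizations that exceed it. The lognormal sample below is a hypothetical stand-in for one model's simulation output, not data from this study.

```python
import numpy as np

def ccdf(releases):
    """Complementary cumulative distribution of simulated releases: for
    each sorted release value R, the fraction of realizations exceeding R.
    Returns (sorted values, exceedance probabilities)."""
    r = np.sort(np.asarray(releases))
    n = r.size
    prob = 1.0 - np.arange(1, n + 1) / n       # empirical P(release > r)
    return r, prob

# Illustrative: a lognormal spread standing in for one conceptual
# model's Monte Carlo release results.
rng = np.random.default_rng(42)
releases = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
vals, prob = ccdf(releases)
```

Overlaying the CCDFs produced this way for each of the six conceptual models is what makes the impact of differing model assumptions directly visible.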
The tectonics program for the proposed high-level nuclear waste repository at Yucca Mountain in southwestern Nevada must evaluate the potential for surface faulting beneath the prospective surface facilities. To help meet this goal, Quaternary surficial mapping studies and photolineament analyses were conducted to provide data for evaluating the location, recency, and style of faulting within Midway Valley at the eastern base of Yucca Mountain, the preferred location of these surface facilities. This interim report presents the preliminary results of this work.
The scanning electron microscope (SEM) has become as standard a tool for IC failure analysis as the optical microscope, with improvements in existing SEM techniques and new techniques being reported regularly. This tutorial has been designed to benefit both novice and experienced failure analysts by reviewing several standard as well as new SEM techniques used for failure analysis. Advanced electron-beam test systems will be covered briefly; however, all techniques discussed may be performed on any standard SEM. Topics to be covered are (1) standard techniques: secondary electron imaging for surface topography, voltage contrast, capacitive coupling voltage contrast, backscattered electron imaging, electron beam induced current imaging, and x-ray microanalysis and (2) new SEM techniques: novel voltage contrast applications, resistive contrast imaging, biased resistive contrast imaging, and charge-induced voltage alteration. Each technique will be described in terms of the information yielded, the physics behind technique use, any special equipment and/or instrumentation required to implement the technique, the expertise required to implement the technique, possible damage to the IC as a result of using the technique, and examples of using the technique for failure analysis.
Sandia is a government-owned, contractor-operated national laboratory that AT&T has operated on a no-profit, no-fee basis since 1949. We have been an integral part of the nuclear weapons program, providing total concept-to-retirement engineering for every warhead and bomb in the nuclear weapon stockpile. We are proud of our contributions to national security. Our scientific and engineering skills, our facilities, and our experience have benefited not only the nuclear weapons program but have also contributed significantly to other areas of national security, including conventional defense, energy, and industrial competitiveness. Likewise, these capabilities position us well to continue a tradition of exceptional service in the national interest. Sandia is a multiprogram national laboratory with mission responsibilities in nuclear weapons, arms control and verification, energy and environment, and technology transfer. Our work for the DOE Assistant Secretary for Defense Programs constitutes 50% of the laboratory's effort. Sandia's arms control, verification, and related intelligence and security programs, funded by DOE and by other agencies, constitute the largest aggregation of such work at any facility in the world. We also support DOE with technology development -- in particular, specialized robotics and waste characterization and treatment processes to assist in the cleanup of contaminated sites. Research and development to support the National Energy Strategy is another substantial laboratory activity. Sandia's successful developments in renewable, nuclear, and fossil energy technologies have saved the country billions of dollars in energy supply and utilization. Technology transfer is conducted across all Sandia programs.
A series of cyclic, direct-shear tests was conducted on several replicas of a tensile fracture of welded tuff to verify the graphical method proposed by Saeb (1989) and by Amadei and Saeb (1990). Tests were performed under different levels of constant normal load and constant normal stiffness. Each test consisted of five cycles of forward and reverse shear. The effect of cyclic loading on the fracture shear behavior was investigated. Fracture surface asperity degradation was quantified by comparing fracture fractal dimensions before and after shear.
The purpose of this talk is to set the scene with a definition of records management, records, and federal records. It is also to introduce some techniques to ensure that office files are properly organized and maintained, rapidly retrievable, complete, and ready for appropriate disposition, the NARA (National Archives and Records Administration) way.
A method for designing and assembling a non-adjustable interferometer cavity has been developed at Sandia National Laboratories, enabling the development of a Fixed-Cavity Velocity Interferometer System for Any Reflector (VISAR). In this system, the critical interference adjustments are performed during assembly of the interferometer cavity, freeing the user from an otherwise repetitive task. The Fixed-Cavity VISAR System is constructed in modular form. Compared to previous VISAR systems, it is easy to use and gives high quality results. 6 refs.
The high-temperature stability of current and proposed aviation fuels is a major factor in the design of advanced technology aircraft engines. Efforts to develop highly stable formulations and thereby mitigate fouling problems in aircraft fuel system components would clearly benefit from a predictive model that describes the important parameters in thermally induced degradation of the liquid fuel as well as the deposition of solid species. To generate such a model, diagnostic tools are needed to adequately characterize the fluid dynamics, heat transfer, mass transfer, and complex chemical processes that occur in thermally stressed fuels. In this paper, the authors describe preliminary results in the use of a dynamic light scattering technique, photon correlation spectroscopy (PCS), to address one aspect of the fuel stability problem; i.e., incipient particle formation and subsequent growth in mean particle size as a function of temperature, exposure time, degree of oxidation, etc.
The highest {Tc}'s achieved in organic electron-donor-based systems occur in two isostructural ET salts, viz., {kappa}-[(ET){sub 2}Cu][N(CN){sub 2}]X, X = Br ({Tc} = 11.6 K, ambient pressure), X = Cl ({Tc} = 12.8 K, 0.3 kbar), whereas for the electron-acceptor-based systems derived from C{sub 60} they occur in K{sub 3}C{sub 60} ({Tc} = 19 K), Rb{sub 3}C{sub 60} ({Tc} = 29 K), Rb{sub x}Cs{sub y}C{sub 60} ({Tc} = 33 K) and Rb{sub x}Tl{sub y}C{sub 60} ({Tc} {approx} 45 K). Research performed at Argonne National Laboratory, and based on the ET and C{sub 60} systems, is reviewed.
The photocurrent response, photo-induced changes in hysteresis behavior, and electrooptic (birefringence) effects of sol-gel derived PZT film have been characterized as part of an effort to evaluate ferroelectric films for image storage and processing applications.
The effects of argon addition to the vacuum arc remelting (VAR) process were studied in both laboratory and industrial experiments while remelting Alloy 718. The results demonstrate that argon can be added to an industrial VAR furnace to relatively high partial pressures without decreasing the melt rate or drip-short frequency, or constricting the arc plasma to a local region of the electrode surface. Laboratory experiments illustrate that this result is dependent on electrode chemistry, possibly related to magnesium content.
Melt pool shape in VAR is controlled by fluid flow, which is governed by the balance between two opposing flow fields. At low melt currents, flow is dominated by thermal buoyancy. In these instances, metal is swept radially outward on the pool surface, resulting in relatively shallow melt pools but increased heat transfer to the crucible at the melt pool surface. At high melt currents, flow is primarily driven by magnetohydrodynamic forces; in these cases, the surface flow is radially inward and downward. With a constricted arc, the pool depth and relative heat transfer to the crucible are intermediate, even though the melt rate is significantly lower than under either diffuse arc condition. Constricted arc conditions also result in erratic heat transfer behavior and non-uniformities in pool shape.
This report contains a summary of large-scale experiments conducted at Sandia National Laboratories under the Solar Detoxification of Water project. The objectives of the work performed were to determine the potential of using solar radiation to destroy organic contaminants in water by photocatalysis and to develop the process and improve its performance. For these experiments, we used parabolic troughs to focus sunlight onto glass pipes mounted at the trough's focus. Water spiked with a contaminant and containing suspended titanium dioxide catalyst was pumped through the illuminated glass pipe, activating the catalyst with the ultraviolet portion of the solar spectrum. The activated catalyst creates oxidizers that attack and destroy the organics. Included in this report are a summary and discussion of the implications of experiments conducted to determine: the effect of process kinetics on the destruction of chlorinated solvents (such as trichloroethylene, perchloroethylene, trichloroethane, methylene chloride, chloroform and carbon tetrachloride), the enhancement due to added hydrogen peroxide, the optimal catalyst loading, the effect of light intensity, the inhibition due to bicarbonates, and catalyst issues.
Accident severity categories are used in many risk analyses for the classification and treatment of accidents involving vehicles transporting radioactive materials. Any number or definition of severity categories may be used in an analysis. A methodology which allows accident probabilities associated with one severity category scheme to be transferred to another severity category scheme is described. The supporting data and information necessary to apply the methodology are also discussed. The ability to transfer accident probabilities between severity category schemes will allow some comparisons of different studies at the category level. The methodology can be employed to transfer any quantity between category schemes if the appropriate supporting information is available.
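In outline, such a transfer can be framed as a conditional-probability mapping: if the supporting data give, for each category of scheme A, the fraction of its accidents that would fall into each category of scheme B, the scheme-B probabilities follow from a matrix product. All numbers below are invented for illustration and are not from the methodology described in the report.

```python
import numpy as np

# Hypothetical accident probabilities in a 3-category scheme A.
p_a = np.array([0.90, 0.09, 0.01])

# Hypothetical transfer matrix: row i gives the fraction of scheme-A
# category-i accidents that fall into each of scheme B's 4 categories.
# Each row sums to 1; in practice the rows would be built from the
# supporting accident-environment data (impact speed, fire duration, ...).
T = np.array([
    [0.80, 0.20, 0.00, 0.00],
    [0.10, 0.60, 0.30, 0.00],
    [0.00, 0.10, 0.50, 0.40],
])

p_b = p_a @ T   # accident probabilities expressed in scheme B
```

Because the rows of T are conditional distributions, total probability is conserved by the transfer, which is what allows studies using different schemes to be compared at the category level.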
This paper will describe two data bases which provide supporting information on radioactive material transport experience in the United States. The Radioactive Material Incident Report (RMIR) documents accident/incident experience from 1971 to the present from data acquired from the US Department of Transportation (DOT) and the Nuclear Regulatory Commission (NRC). The Radioactive Material Postnotification (RAMPOST) data base documents the shipments that have taken place for Highway Route Controlled Quantities (HRCQ) of radioactive material. HRCQ shipments are post notified (that is, after the shipment) to the DOT.
A brief discussion of the following topics is given in this report: Liquid Metal Divertors; Lithium Droplet Beam Divertor; Preferential Pumping of Helium; Reduced Erosion with Cu-Li, W-Li, etc.; Reduction of Erosion by Thermionic Emission; Reduced Erosion in Boronized Graphites; Proposal for Materials Experiments in TRIAM; Carbon-SiC for Plasma Facing Components; Helium Pumping with Palladium; Large Area Pump Limiter; Techniques for Enhanced Heat Removal; New Outlook on Gaseous Divertors; Gaseous Divertor Simulations; Impurity Seeding to Control ITER Particle and Heat Loads; Gaseous Divertor Experiments; Electrical Biasing to Control SOL Particle Fluxes; Biased Limiter in TEXTOR and Biased Divertor in PBX-M; Particle and Heat Flux Control Using Ponderomotive Forces; Helium Exhaust Using ICRF; Ergodic Magnetic Limiter Experiments in JFT-2M; and Helium Exhaust Using Fishbones.
This report describes the Training and Qualification Program at the Simulation Technology Laboratory (STL). The main facility at STL is Hermes III, a twenty megavolt accelerator which is used to test military hardware for vulnerability to gamma-rays. The facility is operated and maintained by a staff of twenty engineers and technicians. This program is designed to ensure that these personnel are adequately trained and qualified to perform their jobs in a safe and efficient manner. Copies of actual documents used in the program are included in appendices. This program meets all the requirements for training and qualification in the DOE Orders on Conduct of Operations and Quality Assurance, and may be useful to other organizations desiring to come into compliance with these orders.
Division 2473 has characterized the performance of three types of focusing lenses used for CO{sub 2} laser beam welding. Specifically, we evaluated the plano-convex, positive meniscus, and aspheric lenses with focal lengths ranging from 1.25 to 5.0 inches. The measured responses were the resultant weld depth and width of bead-on-plate welds made using a range of focus positions. The welding parameters were 185 to 700 watts continuous wave beam power and 30 inches per minute travel speed. The results of this study quantified the weld profile dimensions as a function of lens type and focal length, beam power, depth of focus, and verified the coincidence of maximum weld depth and width.
The WC-1 and WC-3 experiments were conducted using a dry, 1:10 linear scale model of the Zion reactor cavity to obtain baseline data for comparison to future experiments that will have water in the cavity. WC-1 and WC-3 were performed with similar initial conditions except for the exit hole between the melt generator and the scaled model of the reactor cavity. For both experiments the molten core debris was simulated by a thermitically generated melt formed from 50 kg of iron oxide/aluminum/chromium powders. After the thermite was ignited in WC-1, the melt was forcibly ejected by 374 moles of slightly superheated steam at an initial driving pressure of 4.6 MPa through an exit hole with an actual diameter of 4.14 cm into the scaled model of the reactor cavity. In WC-3, the molten thermite was ejected by 300 moles of slightly superheated steam at an initial driving pressure of 3.8 MPa through an exit hole with an actual diameter of 10.1 cm into the scaled model of the reactor cavity. Because of the larger exit hole diameter, WC-3 had a shorter blowdown time than WC-1, 0.8 s compared to 3.0 s. WC-3 also had a higher debris velocity than WC-1, 54 m/s compared to 17.5 m/s. Posttest sieve analysis of debris recovered from the Surtsey vessel gave identical results in WC-1 and WC-3 for the sieve mass median particle diameter, i.e., 1.45 mm. The total mass ejected into the Surtsey vessel in WC-3 was 45.0 kg compared to 47.9 kg in WC-1. The peak pressure increase in Surtsey due to the high-pressure melt ejection (HPME) was 0.275 MPa in WC-3 and 0.272 MPa in WC-1. Steam/metal reactions produced 181 moles of hydrogen in WC-3 and 145 moles of hydrogen in WC-1.
Transport models used for performance assessment of the Waste Isolation Pilot Plant (WIPP) in the event of human intrusion into the repository currently rely on the K{sub d} linear sorption isotherm model to predict rates of radionuclide migration. The vast majority of K{sub d} data was measured in static (batch) experiments on powdered substrates; most data specific to the Culebra dolomite were gathered in this way for five radioelements of concern using up to four different water compositions. This report summarizes the available data, examines inconsistencies between these data and the assumptions of the K{sub d} model, and discusses potential difficulties in using existing sorption data for predictive modeling of radionuclide retardation. Approaches based on mechanistic adsorption modeling are presented as an alternative to the K{sub d} model.
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a machine-portable utility that emulates the basic features of the CDC UPDATE processor, the user selects one of eight codes for running on a machine of one of at least four major vendors. The ease with which this utility is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is maximized by employing the best available cross sections and sampling distributions, and the most complete physical model for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. Flexibility of construction permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications through simple update procedures. Version 3.0, the latest version of ITS, contains major improvements to the physical model, additional variance reduction via both internal restructuring and new user options, and expanded input/output capabilities.
A new algorithm for the treatment of sliding interfaces between solids with or without friction in an Eulerian wavecode is described. The algorithm has been implemented in the two-dimensional version of the CTH code. The code was used to simulate penetration and perforation of aluminum plates by rigid, conical-nosed tungsten projectiles. Comparison with experimental data is provided.
The first phase of a program to study the resistance of exclusion region barriers to ductile failure when subjected to accident-type, quasi-static extreme mechanical loads has been completed. This first phase consisted of qualification of the analytical tools used to study these types of structural deformations and the development of appropriate criteria to predict ductile failure. A series of tests was performed on hydroformed half-cylinder barrier mock-ups. The qualification activity was considered a success based upon the comparison of the deformations and loads measured during the testing to the response of these structures computed by the finite element modeling. This successful completion of the first phase allows the second phase of the program to proceed. 12 refs.
We have measured, by {sup 1}H and {sup 13}C nuclear magnetic resonance (NMR), the percent deuteration, the tacticity and the purity of several polymers and one solvent used in the preparation of microcellular foams. The percent deuteration was measured for polystyrene, polyacrylonitrile and polyethylene. The tacticities of polystyrene and polyacrylonitrile were determined. The purity and degradation products of polyacrylonitrile and maleic anhydride were examined. This report documents the experimental procedures and results of these measurements.
Sandia National Laboratories operates the Primary Standards Laboratory (PSL) for the Department of Energy, Albuquerque Operations Office (DOE/AL). This report summarizes metrology activities that received emphasis in the second half of 1991 and provides information pertinent to the operation of the DOE/AL system-wide Standards and Calibration Program.
The effects of cavern spacing and operating pressure on surface subsidence and cavern storage losses were evaluated using the finite-element method. The base case for the two sensitivity studies was a typical SPR cavern. The predicted responses of the base case and those from the pressurization study compared quite closely to measured surface subsidence and oil pressurization rates. This provided credibility for the analyses and constitutive models used. Subsidence and cavern storage losses were found to be strongly influenced by cavern spacing and pressurization. The relationship between subsidence volume and losses in storage volume varied as cavern spacing and operating pressure deviated from the base case. However, for a typical SPR cavern, subsidence volume is proportional to storage loss; when expressed in feet, subsidence is numerically equal to the percentage of storage loss.
The goal of the wet cavity (WC) test series was to investigate the effect of water in a reactor cavity on direct containment heating (DCH). The WC-1 experiment was performed with a dry cavity to obtain baseline data for comparison to the WC-2 experiment. WC-2 was conducted with water 3 cm deep (11.76 kg) in a 1:10 linear scale model of the Zion reactor cavity. The initial conditions for the experiments were similar. For both experiments the molten core debris was simulated by a thermitically generated melt formed from 50 kg of iron oxide/aluminum/chromium powders. After the charge was ignited, the debris was melted by the chemical reaction and was forcibly ejected through a nominal 3.5 cm hole into the scaled reactor cavity by superheated steam at an initial driving pressure of 4.58 MPa. The peak pressure increase in the containment due to the high-pressure melt ejection (HPME) was 0.272 MPa in WC-1 and 0.286 MPa in WC-2. The total amount of hydrogen generated in the experiments was 145 moles of H{sub 2} in WC-1 and 179 moles of H{sub 2} in WC-2. The total mass of debris ejected into the containment was identical for both experiments. These results suggest that water in the cavity slightly enhanced DCH.
Laboratory simulation of the approach of a radar fuze towards a target is an important factor in our ability to accurately measure the radar's performance. This simulation is achieved, in part, by dynamically delaying and attenuating the radar's transmitted pulse and sending the result back to the radar's receiver. Historically, the device used to perform the dynamic delay has been a limiting factor in the evaluation of a radar's performance and characteristics. A new device has been proposed that appears to have more capability than previous dynamic delay devices. This device is the digital RF memory. This report presents the results of an analysis of a digital RF memory used in a signal-delay application. 2 refs.
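Functionally, such a device can be pictured as a sampled delay buffer: the transmitted pulse is digitized, held for a programmable number of sample periods, attenuated, and replayed toward the receiver. The class below is a toy model under that picture only; the name, interface, and parameters are invented and do not describe the actual hardware analyzed in the report.

```python
from collections import deque

class DigitalRFMemory:
    """Toy model of a digital RF memory used as a programmable delay line.

    Samples of the transmitted pulse are buffered for `delay_samples`
    sample periods, attenuated, and played back. Illustrative only.
    """

    def __init__(self, delay_samples, attenuation_db):
        self.gain = 10.0 ** (-attenuation_db / 20.0)
        # pre-fill with zeros so the output is silent until the delay elapses
        self.buffer = deque([0.0] * delay_samples, maxlen=delay_samples)

    def step(self, sample_in):
        # oldest sample comes out; the new sample displaces it
        sample_out = self.buffer[0]
        self.buffer.append(sample_in)
        return self.gain * sample_out
```

Sweeping `delay_samples` downward between pulses would emulate a closing target, which is the "dynamic" part of the dynamic delay.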
This report describes research and development related to Mo-based catalysts supported on hydrous metal oxide ion exchangers for use in direct coal liquefaction processes. A group of NiMo catalysts were prepared on different hydrous titanium oxide (HTO) supports to serve as baseline materials for use in determining the effects of altering process parameters on the physical and catalytic properties of NiMoHTO catalysts. The baseline group included catalysts which had hydrogenation activities up to 40% higher than the best commercial NiMo/Al{sub 2}O{sub 3} catalysts used in coal liquefaction pilot plant studies on a weight of catalyst basis while containing 25% less active metal. The results of high resolution electron microscopy (HREM) studies addressing the effects of processing parameters on microstructure are also presented. NiMoHTO catalysts were included in a group of some 30 commercial and experimental catalysts tested at Amoco Oil Co. to determine applicability for upgrading coal resids. The performance of NiMoHTO catalysts in these tests was better than or comparable to the best commercial catalysts available for this application. The initial work with thin-film NiMoHTO catalysts supported on commercial silica gel spheres is presented. Second generation thin-film catalysts containing about 1% Mo have hydrogenation activities of about 75% of those of extruded commercial NiMo/Al{sub 2}O{sub 3} catalysts containing 10--13% Mo and up to 50% of the hydrodesulfurization activity of the commercial catalysts. The use of thin-film HTO technology, which allows for preparation of NiMoHTO catalysts on virtually any substrate, lowers catalyst cost by reducing the amount of Ti required and provides engineering forms of HMO materials without the development work needed to convert bulk HTO materials into usable engineering forms. Work done with NiMo catalysts supported on hydrous zirconium oxide (HZO) is also presented.
The third experiment of the Integral Effects Test (IET-3) series was conducted to investigate the effects of high pressure melt ejection (HPME) on direct containment heating (DCH). A 1:10 linear scale model of the Zion reactor pressure vessel (RPV), cavity, instrument tunnel, and subcompartment structures was constructed in the Surtsey Test Facility at Sandia National Laboratories (SNL). The RPV was modeled with a melt generator that consisted of a steel pressure barrier, a cast MgO crucible, and a thin steel inner liner. The melt generator/crucible had a semi-hemispherical bottom head containing a graphite limiter plate with a 3.5 cm exit hole to simulate the ablated hole in the RPV bottom head that would be formed by tube ejection in a severe nuclear power plant (NPP) accident. The reactor cavity model contained 3.48 kg of water with a depth of 0.9 cm, corresponding to condensate levels in the Zion plant. A steam-driven iron oxide/aluminum/chromium thermite was used to simulate HPME. IET-3 replicated the first experiment in the IET series (IET-1) except that the Surtsey vessel contained 0.09 MPa air and 0.1 MPa nitrogen. No steam explosions occurred in the cavity in the IET-3 experiment. The cavity pressure measurements showed that rapid vaporization of water occurred in the cavity at about the same time as the steam explosion in IET-1. However, the oxygen in the Surtsey vessel in IET-3 resulted in a vigorous hydrogen burn, which caused a significant increase in the peak pressure, 246 kPa compared to 98 kPa in the IET-1 test. The total debris mass ejected into the Surtsey vessel in IET-3 was 34.3 kg, and gas grab sample analysis indicated that 223 moles of hydrogen were produced by steam/metal reactions. About 186 moles of hydrogen burned and 37 moles remained unreacted.
This document presents planned actions, and their associated costs, for addressing the findings in the Environmental, Safety and Health Tiger Team Assessment of the Sandia National Laboratories, Albuquerque, May 1991, hereafter called the Assessment. This Final Action Plan should be read in conjunction with the Assessment to ensure full understanding of the findings addressed herein. The Assessment presented 353 findings in four general categories: (1) Environmental (82 findings); (2) Safety and Health (243 findings); (3) Management and Organization (18 findings); and (4) Self-Assessment (10 findings). Additionally, 436 noncompliance items with Occupational Safety and Health Administration (OSHA) standards were addressed during and immediately after the Tiger Team visit.
This paper addresses problems of synchronization and coordination in the context of faulty shared memory. We present algorithms for the consensus problem, and for reliable shared memory objects, from collections of read/write registers, 2-processor binary test-and-set objects, and read-modify-write registers, some of which may be faulty.
A computer program has been developed to reduce and analyze data from a standardized piezoelectric polymer (PVDF) shock-wave stress rate gauge. The program is menu driven with versatile graphic capabilities, input/output file options, hard copy options, and unique data processing capabilities. This program was designed to analyze digital ``current-mode`` data recorded from a Bauer PVDF stress-rate gauge and reduce it to a stress-versus-time record. The program was also designed to combine two simultaneously recorded data channels.
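The core of the reduction from a current-mode record to a stress-versus-time record is a time integration, since the gauge signal in current mode is proportional to the stress rate. The snippet below sketches that step on a synthetic record; the sampling interval and rate history are invented, and the gauge calibration (the current-to-stress-rate conversion) is assumed to have been applied already.

```python
import numpy as np

dt = 1e-9                                   # assumed 1 ns sample interval
t = np.arange(0, 200e-9, dt)
# synthetic stress-rate record, Pa/s: loading for 50 ns, then constant stress
rate = np.where(t < 50e-9, 2e14, 0.0)

# cumulative trapezoidal integration: stress rate -> stress-versus-time
stress = np.concatenate(
    ([0.0], np.cumsum(0.5 * (rate[1:] + rate[:-1]) * dt))
)
```

With real data the same integration is applied channel by channel, which is also where combining two simultaneously recorded channels would enter.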
The purpose of the molten-salt pump and valve loop test is to demonstrate the performance, reliability, and service life of full-scale hot- and cold-salt pumps and valves for use in commercial central receiver solar power plants. This test was in operation at Sandia National Laboratories' National Solar Thermal Test Facility from January 1988 to September 1990. The test hardware consists of two pumped loops; the ``hot-salt loop`` to simulate the piping and components on the hot (565{degrees}C) side of the receiver and the ``cold-salt loop`` to simulate piping and components on the receiver's cold (285{degrees}C) side. Each loop contains a pump and five valves sized to be representative of a conceptual 60-MW{sub e} commercial solar power plant design. The hot-salt loop accumulated over 6700 hours of operation and the cold-salt loop over 2500 hours during the test period. This project has demonstrated the performance and reliability required for commercial-scale molten-salt pumps and valves.
The goal of the Stretched-Membrane Dish Program is the development of a dish solar concentrator fabricated with a single optical element capable of collecting 60 kWt. Solar Kinetics, Inc., has constructed a prototype 7-meter dish to demonstrate the manufacturability and optical performance of this innovative design. The reflective surface of the dish consists of a plastically deformed metal membrane with a separate reflective polymer membrane on top, both held in place by a low-level vacuum. Sandia conducted a test program to determine the on-sun performance of the dish. The vacuum setting was varied from 8.9 to 17.2 cm of water column and the vertex-to-target distance was varied over a range of 15.24 cm to evaluate beam quality. The optimal setting for the vacuum was 11.4 centimeters of water column, with the best beam quality 6.4 centimeters behind the theoretical focal point of the dish. Flux arrays based on slope error from the CIRCE2 computer code were compared to the measured flux array of the dish. A uniformly distributed slope error of 2.3 milliradians was determined as the value that would produce a modeled array with the minimum mean square difference from the measured array. Cold water calorimetry measured a power of 23.3 {plus minus} 0.3 kWt. Reflectivity changed from an initial value of 88.3% to 76.7% over a one year period. 12 refs.
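The slope-error estimate described above amounts to a one-parameter least-squares search: generate a modeled flux array for each candidate slope error and keep the value that minimizes the mean square difference with the measured array. The sketch below uses a stand-in Gaussian flux model in place of CIRCE2, and every number in it is invented for illustration.

```python
import numpy as np

def modeled_flux(slope_error_mrad, r):
    # Stand-in for a CIRCE2-style optical model: a Gaussian flux profile
    # whose spread grows with concentrator slope error. The width
    # relation below is invented for illustration.
    sigma = 1.0 + 0.5 * slope_error_mrad
    return np.exp(-r**2 / (2.0 * sigma**2))

r = np.linspace(-10.0, 10.0, 201)    # target coordinates
measured = modeled_flux(2.3, r)      # synthetic "measured" flux array

# grid search for the slope error minimizing the mean square difference
candidates = np.arange(0.5, 5.0, 0.05)
mse = np.array([np.mean((modeled_flux(s, r) - measured) ** 2)
                for s in candidates])
best = candidates[np.argmin(mse)]
```

With real data the modeled arrays would come from the optical code and the comparison would be over the full two-dimensional flux map rather than a profile.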
Pulsed high field magnet coils are an integral part of the applied-B ion diode used in the light ion Inertial Confinement Fusion program at Sandia National Laboratories. Several factors have contributed in recent years to the need for higher magnetic fields in these applied-B ion diodes. These increased magnetic field requirements have precipitated the development of better engineering tools and techniques for use in the design of applied-B ion diodes. This paper describes the status of applied-B ion diode engineering at Sandia. The design process and considerations are discussed. A systematic approach for maximizing the field achievable from a particular coil system consisting of the capacitor bank, the feeds, and the coil is presented. A coupled electromechanical finite element analysis is also described.
Rock mechanics parameters such as the in situ stresses, elastic properties, failure characteristics, and poro-elastic response are important to most completion and stimulation operations. Perforating, hydraulic fracturing, wellbore stability, and sand production are examples of technology that are largely controlled by the rock mechanics of the process. While much research has been performed in these areas, there has been insufficient application of that research by industry. In addition, there are new research needs that must be addressed for technology advancement.
We describe the use of the object-oriented language C++ in the development of a hydrocode simulation system, PCTH. The system is designed to be horizontally and vertically portable from low-end workstations to next generation massively parallel supercomputers. The development of the PCTH system and the issues and rationale considered in moving to the object-oriented paradigm will be discussed.
The goals and time constraints of developing the next generation shock code, RHALE++, for the Computational Dynamics and Adaptive Structures Department at Sandia National Laboratories have forced the development team to closely examine their program development environment. After a thorough investigation of possible programming languages, the development team has switched from a FORTRAN programming environment to C++. This decision is based on the flexibility, strong type checking, and object-oriented features of the C++ programming language. RHALE++ is a three dimensional, multi-material, arbitrary Lagrangian Eulerian hydrocode. Currently, RHALE++ is being developed for von Neumann, vector, and MIMD/SIMD computer architectures. Using the object-oriented features of C++ facilitates development on these different computer architectures since architecture dependencies, such as interprocessor communication, can be hidden in base classes. However, the object-oriented features of the language can create significant losses in efficiency and memory utilization. Techniques, such as reference counting, have been developed to address efficiency problems that are inherent in the language. Presently, there has been very little efficiency loss realized on SUN scalar and nCUBE massively parallel computers; however, although some vectorization has been accomplished on CRAY systems, significant efficiency losses exist. This paper presents the current status of using C++ as the development language for RHALE++ and the efficiency that has been realized on SUN, CRAY, and nCUBE systems.
In this report we will consider how radiation measurements on spent fuel can contribute to verifying the loading of burnup credit casks. Measurements can be used in burnup credit operations to help prevent misloading of fuel that does not meet the minimum specifications for a particular cask design. Passive neutron and gross gamma-ray measurements are proposed as a means of qualifying spent fuel assemblies. Active systems to measure reactivity or fissile content are necessarily more complex and appear to offer no obvious advantage to burnup credit applications over simpler systems. 4 refs., 2 figs.
Salzbrenner, R.; Wellman, G.W.; Sorenson, K.B.; McConnell, P.
Depleted uranium (DU) is used in high level radioactive waste transport containers as a gamma shield. The mechanical response of this material has generally not been included in calculations intended to assure that these casks will maintain their containment function during all normal use and accident conditions. If DU could be qualified as a structural component, the thickness of other materials (e.g. stainless steel) in the primary containment boundary could be reduced, thereby allowing a reduction in cask mass and/or an increase in payload capacity. This study was conducted to determine the mechanical behavior of a range of DU alloys in order to extend the limited set of mechanical properties reported in the literature. These mechanical properties were used as the basis for finite element calculations to quantify the potential for claiming structural credit for DU.
Variations of bed void fraction in a full-scale, reacting, fixed-bed coal gasifier have been deduced from measured axial pressure profiles obtained during gasification of seven coal types ranging from lignite to bituminous. Packed-bed pressure correlations were used to calculate the void fractions based on monotonic polynomial fits of measured pressure profiles. Insights into the fixed-bed combustion processes affected by the void distribution were obtained by a one-dimensional, steady-state, fixed-bed combustion model. Predicted temperature profiles from this model compare reasonably well to experimental data. The bed void distributions are not linear but are perturbed by vigorous reactions in the devolatilization and oxidation zones. Results indicate that a dramatic increase in temperature and associated gas release causes the bed to expand and the gas void space to increase. Increased void space localized in the combustion zone causes the steep temperature gradient to decrease and the location of the maximum temperature to shift. Also, large feed gas flow rates cause the void fraction in the ash zone to increase.
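The report does not name which packed-bed pressure correlation was used; as an illustration of the inversion step described above, the sketch below inverts the Ergun equation, a standard packed-bed correlation, to recover a void fraction from a measured pressure gradient. All parameter values are hypothetical.

```python
def ergun_gradient(eps, u, dp, mu, rho):
    """Ergun packed-bed pressure gradient (Pa/m) at void fraction eps,
    for superficial velocity u (m/s), particle diameter dp (m),
    gas viscosity mu (Pa*s), and gas density rho (kg/m^3)."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * dp ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * dp)
    return viscous + inertial


def void_fraction(grad, u, dp, mu, rho, lo=0.05, hi=0.95):
    """Invert the correlation for eps by bisection.

    The predicted gradient decreases monotonically as the bed opens up
    (eps grows), so bisection on [lo, hi] converges.
    """
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ergun_gradient(mid, u, dp, mu, rho) > grad:
            lo = mid   # predicted drop too high -> true bed is more open
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Applied point-by-point along a fitted axial pressure profile, this yields a void-fraction profile of the kind discussed in the abstract.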
Crosshole shear-wave seismic surveys have been used to monitor the distribution of injected air in the subsurface during an in situ air stripping waste remediation project at the Savannah River site in South Carolina. To remove the contaminant, in this case TCE from a leaking sewer line, two horizontal wells were drilled at depths of 20 m and 52 m. Air was pumped into the lower well, and a vacuum was applied to the upper well to extract the injected air. As the air passed through the subsurface, TCE dissolved into the gas and was brought out through the extraction well. Monitoring of the air injection by crosshole shear-wave seismics is feasible because changes in soil saturation during injection produce a corresponding change in seismic velocities. Using a downhole shear-wave source and a clamped downhole receiver, two sets of shear-wave data were taken: the first before the start of air injection and the second during injection. The difference in travel times between the two data sets was tomographically inverted to obtain velocity differences. Velocity changes ranging up to 3% were mapped, corresponding to saturation changes up to 24%. The distribution of these changes shows a desaturation around the position of the injection well, with a plume extending in the direction of the extraction well. Layers with higher clay content show distinctly less change in saturation than regions with higher sand content.
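The inversion step can be sketched with a toy straight-ray geometry. The example below (an illustrative two-cell, three-ray layout, not the survey's actual tomography code) recovers per-cell slowness changes from observed travel-time differences by least squares, which is the essence of difference tomography: delta-t = G * delta-slowness, where each row of G holds one ray's path lengths through the cells.

```python
import numpy as np

# Each row of G: path length (m) of one ray through each of two cells.
G = np.array([[10.0,  0.0],
              [ 0.0, 10.0],
              [ 5.0,  5.0]])

# Synthetic "true" slowness changes (s/m) and the travel-time differences
# (s) they would produce -- standing in for the before/during field data.
true_dslow = np.array([2e-5, -1e-5])
dt = G @ true_dslow

# Tomographic inversion: least-squares solve of the overdetermined system.
dslow, *_ = np.linalg.lstsq(G, dt, rcond=None)
```

Velocity changes then follow from the slowness changes cell by cell; the field survey additionally had to handle ray coverage gaps and noise, which this sketch ignores.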
Numerical optimization has been successfully used to obtain optimal designs in a more efficient and structured manner in many industries. Optimization of sizing variables is already a widely used design tool and even though shape optimization is still an active research topic, significant successes have been achieved for many structural analysis problems. The transportation cask design problem seems to have the formulation and requirements to benefit from numerical optimization. Complex structural, thermal and radiation shielding analyses associated with cask design constraints can be integrated and automated through numerical optimization to help meet the growing needs for safe and reliable shipping containers. Improved overall package safety and efficiency with cost savings in the design and fabrication can also be realized. Sandia National Laboratories (SNL) has the opportunity to be a significant contributor in the development of new sophisticated transportation cask design tools. Current state-of-the-art technology at SNL in the areas of structural mechanics, thermal mechanics, numerical analysis, adaptive finite element analysis, automatic mesh generation, and transportation cask design can be combined to enhance current industry-standard cask design and analysis techniques through numerical optimization.
This report describes in detail the operations necessary to perform a test on the Sandia National Laboratories 18-Inch Actuator. The report is intended to serve as a training aid for personnel learning to operate the Actuator. A complete description of the construction and operation of the Actuator is also given. The control system, data acquisition system, and high-pressure air supply system are also described. Detailed checklists, with an emphasis on safety, are presented for test operations and for maintenance.
The Hazardous Material Identification Process is a guide to pre-characterization of excess weapon hardware for environmental and safety hazards prior to introduction of the hardware into a waste stream. A procedure for planning predisposal processing of hardware for declassification, demilitarization, and separation/expenditure of certain hazards is included. Final characterization of the resultant waste streams is left to the cognizant waste management agency or organization.
The 2-D codes MAGIC and TRAJ have been used for extensive studies of diode, IFR channel, and accelerating-gap problems in the recirculating linear accelerator (RLA). Typical beam parameters are 10--20 kA and 3--4 MeV. This report summarizes recent results from these simulations. We have also designed possible injectors for the proposed BOLT experiment, with typical beams at 100 A and 1.0--1.5 MeV. Finally, we discuss some preliminary diode runs of a proposed 100 MV, 500 kA accelerator using the SMILE/HERMES method of adding voltages from many cavities across a single immersed diode gap. 8 refs.
This report describes the progress of the three-dimensional mesh generation research, using plastering, during the 1990 fiscal year. Plastering is a 3-D extension of the two-dimensional paving technique. The objective is to fill an arbitrary volume with hexahedral elements. The plastering algorithm's approach to the problem is to remove rows of elements from the exterior of the volume. Elements are removed, one level at a time, until the volume vanishes. Special closure algorithms may be necessary at the center. The report also discusses the common development environment and software management issues. 13 refs.
The AL-SX/2 and AL-SX/3 are recently certified Type B shipping containers for tritium reservoirs. Both containers consist of an outer stainless steel drum overpack and a sealed stainless steel containment vessel. WR reservoirs provide containment of tritium under normal conditions of transport. In accident conditions, the containment vessel of the AL-SX must contain the tritium. A variety of reservoirs and materials will be packaged inside the containment vessel. These materials must not produce high-pressure gas products that exceed the internal pressure capability of the vessel if the container is in an accident involving fire. This report summarizes outgassing tests performed on various materials, mostly organic, that may be packaged inside the AL-SX during shipment; these materials (except the getter) are normally part of the reservoir shipping configuration. The objective of the tests was to determine the temperature at which these materials begin to generate high-pressure gaseous products. Tests of commonly used materials show that increased pressure due to outgassing is not a problem at elevated temperatures that simulate an accident.
During June and July 1991, the Sandia Transportable Lightning Instrumentation Facility (SATTLIF) was fielded at the Department of Defense (DoD) Security Operations Test Site (SOTS) at Ft. McClellan, Alabama. Nine negative cloud-to-ground lightning flashes were artificially triggered to designated locations on Igloo 2, a weapons storage bunker specially prepared to allow instrumentation access to various structural and electrical system elements. Simultaneous measurements of the incident flash currents and responses at 24 test points within the igloo and its grounding counterpoise network were recorded under lightning attachments to the front and rear air terminals of the structure's lightning protection system. In Volume I the test is described in detail, and the measured data are summarized and discussed. Appendix A contains the full set of recorded incident flash currents, while Appendix B presents the set of largest responses measured at each test point, for both front and rear attachments to the structure. As part of these tests, 0.050-in-thick stainless steel, 0.08-in copper, and 0.08-in titanium samples were exposed to triggered flash currents. In this way, damage spots created by direct-strike triggered lightning have been obtained, along with measurements of the return-stroke and continuing currents that produced them. These data points, along with similar ones on aluminum and ferrous steel obtained during 1990, will be used as benchmarks against which to quantify the fidelity of burnthrough testing achievable with Sandia's advanced laboratory lightning simulator.
A channel of ions can focus and guide a relativistic electron beam. This report discusses the generation of plasma channels using magnetically confined low-energy electron beams in a low-pressure gas. The most significant advantages of these channels are that any gas can be ionized and that they can easily be made to follow a curved path. The major disadvantages are that the channel is less well confined than a laser-produced channel and that a small solenoidal magnetic field is required. This report is intended to be a guide for those technicians and scientists who need to assemble and operate an e-beam-generated plasma channel system. Hardware requirements are discussed in detail. There are brief discussions of operating techniques, channel diagnostics, and channel characteristics.
This report contains viewgraphs on topics in the following areas: plasma facing components (PFC) operation in devices; disruption studies; laboratory PFM and high heat flux research; R&D for future machines; and neutron effects on thermonuclear materials.
Data are presented from the 18 W/m{sup 2} Mockup for Defense High-Level Waste, a very large scale in situ test fielded underground at the Waste Isolation Pilot Plant (WIPP). These data include selected fielding information, test configuration, instrumentation activities, and comprehensive results from a large number of gages. The results in this report give measured data from the thermal response gages, i.e., thermocouples, flux meters, and heater power gages emplaced in the test. Construction of the test began in June 1984; gage data in this report cover the complete test duration, that is, to June 1990.
The objective of this project is to apply Sandia's expertise and technology toward the development of stimulation diagnostic technology in the areas of in situ stress, natural fracturing, stimulation processes, and instrumentation systems. Initial work has concentrated on experiment planning for a site where hydraulic fracturing could be evaluated and design models and fracture diagnostics could be validated and improved. Important issues have been defined and new diagnostics, such as inclinometers, identified. In the area of in situ stress, circumferential velocity analysis is proving to be a useful diagnostic for stress orientation. Natural fracture studies of the Frontier formation are progressing; two fracture sets have been found and their relation to tectonic events has been hypothesized. Analyses of stimulation data have been performed for several sites, primarily for in situ stress information. Some new ideas in stimulation diagnostics have been proposed; these ideas may significantly improve fracture diagnostic capabilities.
The Nuclear Waste Repository Technology Department at Sandia National Laboratories (SNL) is investigating the suitability of Yucca Mountain as a potential site for underground burial of nuclear wastes. One element of the investigations is to assess the potential long-term effects of groundwater flow on the integrity of a potential repository. A number of computer codes are being used to model groundwater flow through geologic media in which the potential repository would be located. These codes compute numerical solutions for problems that are usually analytically intractable. Consequently, independent confirmation of the correctness of the solution is often not possible. Code verification is a process that permits the determination of the numerical accuracy of codes by comparing the results of several numerical solutions for the same problem. The international nuclear waste research community uses benchmarking for intercomparisons that partially satisfy the Nuclear Regulatory Commission (NRC) definition of code verification. This report presents the results from the COVE-2A (Code Verification) project, which is a subset of the COVE project.
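As a minimal illustration of the verification idea described above, the sketch below compares a simple explicit finite-difference solver for the 1-D heat equation against an independent analytic solution of the same problem; agreement within the expected truncation error is the verification evidence. This toy problem is not one of the COVE-2A benchmarks, which instead intercompare several independent codes on analytically intractable flow problems.

```python
import math

def solve_fd(nx=41, nt=2000, t_end=0.05):
    """Explicit finite-difference solve of u_t = u_xx on [0, 1],
    u(0, t) = u(1, t) = 0, u(x, 0) = sin(pi x).  Returns (u, dx)."""
    dx = 1.0 / (nx - 1)
    dt = t_end / nt                      # dt/dx^2 = 0.04 < 0.5, so stable
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    for _ in range(nt):
        u = [0.0] + [u[i] + dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, nx - 1)] + [0.0]
    return u, dx

def analytic(x, t):
    """Independent reference solution of the same problem."""
    return math.exp(-math.pi ** 2 * t) * math.sin(math.pi * x)
```

Running both and taking the maximum pointwise difference gives an error on the order of 1e-4 here, consistent with the scheme's truncation error; that quantitative agreement, rather than mere plausibility of the output, is what verification demands.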
The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to design site characterization activities with minimal impact on the ability of the site to isolate waste, and on tests performed as part of the characterization process. One activity of site characterization is the construction of an Exploratory Studies Facility, which may include underground shafts, drifts, and ramps, and the accompanying ponds used for the storage of sewage water and muck water removed from construction operations. The information in this report pertains to the two-dimensional numerical calculations modelling the movement of sewage and settling pond water, and the potential effects of that water on repository performance and underground experiments. This document contains information that has been used in preparing Appendix I of the Exploratory Studies Facility Design Requirements document (ESF DR) for the Yucca Mountain Site Characterization Project.
The Yucca Mountain Site Characterization Project (YMP) is conducting studies to determine whether the Yucca Mountain site in southern Nevada will meet regulatory criteria for a potential mined geologic disposal system for high-level radioactive waste. Data gathered as part of these studies must be compiled and tabulated in a controlled manner for use in design and performance analyses. An integrated data management system has been developed to facilitate this process; this system relies on YMP participants to share in the development of the database and to ensure the integrity of the data. The Site and Engineering Properties Database (SEPDB) is unique in that, unlike most databases where one data set is stored for use by one defined user, the SEPDB stores different sets of data which must be structured so that a variety of users can be given access to the information. All individuals responsible for activities supporting the license application should, to the extent possible, work with the same data and the same assumptions. For this reason, it is important that these data sets are readily accessible, comprehensive, and current. The SEPDB contains scientific and engineering data for use in performance assessment and design activities. These data sets currently consist of geologic, hydrologic, and rock properties information from drill holes and field measurements. The users of the SEPDB include engineers and scientists from several government research laboratories (Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories), the US Geological Survey, and several government contractors. This manuscript describes the detailed requirements, contents, design, and status of the SEPDB, the procedures for submitting data to and/or requesting data from the SEPDB, and a SEPDB data dictionary (Appendix A) for defining the present contents.
Midway Valley, located at the eastern base of Yucca Mountain, Nye County, Nevada, has been identified as a possible location for the surface facilities of a potential high-level nuclear-waste repository. This structural and topographic valley is bounded by two north-trending, down-to-the-west normal faults: the Paintbrush Canyon fault on the east and the Bow Ridge fault on the west. Surface and near-surface geological data have been acquired from Midway Valley during the past three years with particular emphasis on evaluating the existence of Quaternary faults. A detailed (1:6000) surficial geological map has been prepared based on interpretation of new and existing aerial photographs, field mapping, soil pits, and trenches. No evidence was found that would indicate displacement of these surficial deposits along previously unrecognized faults. However, given the low rates of Quaternary faulting and the extensive areas that are covered by late Pleistocene to Holocene deposits south of Sever Wash, Quaternary faulting between known faults cannot be precluded based on surface evidence alone. Middle to late Pleistocene alluvial fan deposits (Unit Q3) exist at or near the surface throughout Midway Valley. Confidence is increased that the potential for surface fault rupture in Midway Valley can be assessed by excavations that expose the deposits and soils associated with Unit Q3 or older units (middle Pleistocene or earlier).
The Choice Coordination Problem with {kappa} alternatives ({kappa}-CCP) was introduced by Rabin in 1982. The goal is to design a wait-free protocol for n asynchronous processes which causes all correct processes to agree on one out of {kappa} possible alternatives. Each of the {kappa} alternatives has an associated shared register, and a solution to the {kappa}-CCP requires that a special symbol be written in exactly one shared register. All correct processes must eventually halt with a pointer to the register containing the special symbol. The difficulty arises from the fact that each process may have a different naming convention for the registers. Protocols requiring the least number of symbols are considered optimal. We give a brief overview of our results in this paper.
This paper presents a methodology for determining the response of spent fuel assembly spacer grids subjected to transport cask impact loading. The spacer grids and their interaction with rod-to-rod loading are the most critical components governing the structural response of spent fuel assemblies. The purpose of calculating the assembly response is to determine the resistance to failure of spent fuel during regulatory transport. The failure frequency computed from these analyses is used in calculating category B spent fuel cask containment source term leakage rates for licensing calculations. Without defensible fuel rod failure frequency prediction calculations, assumptions of 100% fuel failure must be made, leading to leak tight cask design requirements.
The Strategic Defense Initiative Organization (SDIO) decided to investigate the possibility of launching a Russian Topaz II space nuclear power system. A preliminary safety assessment was conducted to determine whether a space mission could be conducted safely and within budget constraints. As part of this assessment, a safety policy and safety functional requirements were developed to guide both the safety assessment and future Topaz II activities. A review of the Russian flight safety program was conducted and documented. Our preliminary safety assessment included a top-level event tree, neutronic analysis of normal and accident configurations, an evaluation of temperature coefficients of reactivity, a reentry and disposal analysis, an analysis of postulated launch-abort impact accidents, and an analysis of postulated propellant fire and explosion accidents. Based on the assessment, it appears that it will be possible to safely launch the Topaz II system in the US with some possible system modifications. The principal modifications will probably include design changes to preclude water-flooded criticality and to assure intact reentry.
The US Department of Energy (DOE) has developed a site characterization plan (SCP) to collect detailed information on geology, geohydrology, geochemistry, geoengineering, hydrology, climate, and meteorology (collectively referred to as ``geologic information``) of the Yucca Mountain site. This information will be used to determine if a mined geologic disposal system (MGDS) capable of isolating high-level radioactive waste without adverse effects to public health and safety over 10,000 years, as required by regulations 40 CFR Part 191 and 10 CFR Part 60, could be constructed at the Yucca Mountain site. Forecasts of future climate conditions for the Yucca Mountain area will be based on both empirical and numerical techniques. The empirical modeling is based on the assumption that future climate change will follow past patterns. In this approach, paleoclimate records will be analyzed to estimate the nature, timing, and probability of occurrence of certain climate states such as glacials and interglacials over the next 10,000 years. For a given state, key climate parameters such as precipitation and temperature will be assumed to be the same as determined from the paleoclimate data. The numerical approach, which is the primary focus of this paper, involves the numerical solution of basic equations associated with atmospheric motions. This paper describes these equations and the strategy for solving them to predict future climate conditions around Yucca Mountain.
This paper presents an overview of the preclosure seismic hazards and the influence of these hazards on determining the suitability of Yucca Mountain as a national high-level nuclear-waste repository. Geologic data, engineering analyses, and regulatory guidelines must be examined collectively to assess this suitability. An environmental assessment for Yucca Mountain, written in 1986, compiled and evaluated the existing tectonic data and presented arguments to satisfy, in part, the regulatory requirements that must be met if the Yucca Mountain site is to become a national waste repository. Analyses have been performed in the past five years that better quantify the local seismic hazards and the possibility that these hazards could lead to release of radionuclides to the environment. The results from these analyses increase the confidence in the ability of Yucca Mountain and the facilities that may be built there to function satisfactorily in their role as a waste repository. Uncertainties remain, however, primarily in the input parameters and boundary conditions for the models that were used to complete the analyses. These models must be validated and uncertainties reduced before Yucca Mountain can qualify as a viable high-level nuclear waste repository.
The design of cementitious repository seals requires an understanding of cement hydration effects in developing a tight interface zone between the rock and the seal. For this paper, a computer code, SHAFT.SEAL, is used to model early-age cement hydration effects and to perform thermal and thermomechanical analyses of cementitious seals. The model is described and then used to analyze the effects of seal size, rock temperature, and placement temperature. The model results assist in selecting the instrumentation necessary for progressive evaluation of seal components and seal-system tests. The results also identify strategies for seal emplacement for a series of repository seal tests for the Yucca Mountain Site Characterization Project (YMP).
Bent-axis maneuvering vehicles provide a unique type of control for a variety of supersonic and hypersonic missions. Unfortunately, large hinge moments, incomplete pitching-moment predictions, and a misunderstanding of the corresponding center-of-pressure calculations have prevented their application. A procedure is presented for the efficient design of bent-axis vehicles, given an adequate understanding of the origins of pitching-moment effects. In particular, the sources of pitching-moment contributions are described, including not only normal force but inviscid axial force and viscous effects as well. Off-centerline center-of-pressure effects are first reviewed for symmetric hypersonic sphere-cone configurations. Next, the effects of the bent-axis geometry are considered, where axial force, acting on the deflected tail section, can generate significant pitching-moment components. The unique relationship between hinge moments and pitching moments for the bent-axis class of vehicles is discussed.
As part of the design process for a hypersonic vehicle, it is necessary to predict the aerodynamic and aerothermodynamic environment for flight conditions. This involves combining results obtained from ground testing with analytical modeling to predict the aerodynamics and heating for all conditions of interest. The question that always arises is how well these models will predict what is actually seen in a flight environment. This paper briefly addresses ground testing and analytical modeling, discusses where each is appropriate, and notes the problems associated with each area. It then describes flight test options as well as the instrumentation currently available and shows how flight tests can be used to validate or improve models. Finally, several results are shown to indicate areas where ground testing and modeling alone are inadequate to accurately predict hypersonic aerodynamics and aerothermodynamics.
X-ray observations of boiling sodium in a 75-kW{sub t} reflux-pool-boiler solar receiver operating at up to 800{degrees}C were carried out. Both cinematographic and quantitative observations were made. From the cinematography, the pool free surface was observed before and during the start of boiling. During boiling, the free surface rose out of the field of view, and chaotic motion was observed. From the quantitative observations, void fraction in pencil-like probe volumes was inferred, using a linear array of detectors. Useful data were obtained from three of the eight probe volumes. Information from the other volumes was masked by scattered radiation. During boiling, time-averaged void fractions ranged from 0.6 to 0.8. During hot restarts, void fractions near unity occurred and persisted for up to 1/2 second. 17 refs.
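Chordal void fractions are commonly inferred from x-ray transmission via Beer-Lambert attenuation: the measured detector reading interpolates logarithmically between an all-liquid and an all-vapor reference reading. The sketch below shows this standard reduction; it is an assumption for illustration, since the report does not give its exact procedure, and it presumes scattered radiation has been subtracted (scatter is what masked several of the eight probe volumes).

```python
import math

def void_fraction(I, I_liquid, I_vapor):
    """Chordal void fraction from x-ray transmission (Beer-Lambert).

    I_liquid: detector reading with the probe volume full of liquid sodium.
    I_vapor:  reading with the probe volume vapor-filled (nearly empty).
    With attenuation dominated by the liquid, ln I varies linearly with
    the liquid fraction along the chord, so:
        ln I = (1 - alpha) * ln I_liquid + alpha * ln I_vapor
    and alpha follows by solving for it.
    """
    return math.log(I / I_liquid) / math.log(I_vapor / I_liquid)
```

For example, a reading geometrically between the two references (closer to the vapor value) yields the 0.6--0.8 range quoted for time-averaged boiling.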
Many robot control algorithms for high-performance in-contact operations, including hybrid force/position, stiffness control, and impedance control approaches, require the command of the joint torques. However, most commercially available robots do not provide joint torque command capabilities. The joint command at the user level is typically position or velocity, and at the control-developer level it is voltage, current, or pulse width, with the torque generated being a nonlinear function of the command and joint position. To enable the application of high-performance in-contact control algorithms to commercially available robots, and thereby facilitate technology transfer from the robot control research community to commercial applications, a practical methodology has been developed to linearize the torque characteristics of electric motor-amplifier combinations. A four degree-of-freedom Adept 2 robot, having pulse-width modulation amplifiers and both variable reluctance and brushless DC motors, is converted to operate from joint torque commands to demonstrate the methodology. The average percentage torque deviation over the command and position ranges is reduced from as much as 76% to below 5% for the direct-drive joints 1, 2, and 4 and is cut by one half in the remaining ball-screw-driven joint 3. 16 refs., 16 figs., 2 tabs.
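One common way to linearize a motor-amplifier torque characteristic is an inverse lookup on calibration data: measure torque versus command (at each joint position of interest), then interpolate the curve backwards from desired torque to required command. The sketch below uses a synthetic calibration curve, not the paper's measured tables, and handles a single position slice for simplicity.

```python
import numpy as np

# Hypothetical calibration data for one joint at one position:
# normalized PWM command vs. measured torque (N*m).  The quadratic
# stands in for the real nonlinear amplifier/motor characteristic.
cmd = np.linspace(0.0, 1.0, 21)
torque = 2.0 * cmd ** 2          # must be monotonic in cmd for inversion

def command_for_torque(tau):
    """Inverse lookup: the command that produces torque tau.

    np.interp interpolates the (torque, cmd) table, i.e. the measured
    curve read backwards.  A full implementation would also interpolate
    across joint-position slices of the calibration table.
    """
    return np.interp(tau, torque, cmd)
```

Wrapping the raw command interface with `command_for_torque` gives the controller an effectively linear torque command, which is the enabling step for the in-contact control schemes listed above.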
We have obtained dual-longitudinal-mode operation of a Q-switched Nd:YAG laser by simultaneous injection-seeding at two frequencies to produce pulses with modulation frequency discretely tunable from 185 MHz to greater than 17 GHz.
This report highlights the following topics: Photon Correlation Spectroscopy, a new application in jet fuel analysis; Testing news in brief; Solar test facility supports space station research; Shock isolation technique developed for piezoresistive accelerometer; High-speed photography captures Distant Image measurements; and Radiation effects test revised for CMOS electronics.
A monolithic dose-rate nuclear event detector (NED) has been evaluated as a function of radiation pulse width. The dose-rate trip level of the NED was evaluated in ``near`` minimum and maximum sensitivity configurations for pulse widths from 20 to 250 ns and at dose rates from 10{sup 6} to 10{sup 9} rads(Si)/s. The trip level varied by up to a factor of {approximately}16 with pulse width. At each pulse width the trip level can be varied intentionally by adding external resistors. Neutron irradiations caused an increase in the trip level, while electron irradiations, up to a total dose of 50 krads(Si), had no measurable effect. This adjustable dose-rate-level detector should prove valuable to designers of radiation-hardened systems.
Particulate contamination during IC fabrication is generally acknowledged as a major contributor to yield loss. In particular, plasma processes have the potential for generating copious quantities of process-induced particulates. To effectively control process-generated particulate contamination, a fundamental understanding of particulate generation and transport is essential. Although a considerable amount of effort has been expended to study particles in laboratory apparatus, only a limited amount of work has been performed in production-line equipment with production processes. In these experiments, a Drytek Quad Model 480 single-wafer etcher was used to etch blanket thermal SiO{sub 2} films on 150 mm substrates in fluorocarbon discharges. The effects of rf power, reactor pressure, and feed gas composition on particle production rates were evaluated. Particles were measured using an HYT downstream particle flux monitor. Surface particle deposition was measured using a Tencor Surfscan 4500, as well as advanced ex situ techniques. Particle morphology and composition were also determined ex situ. Response surface methodology was utilized to determine the process conditions under which particle generation was most pronounced. The use of in situ and ex situ techniques has provided some insight into the mechanisms involved in particle generation and particle dynamics within the plasma during oxide etching.
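Response surface methodology fits a low-order polynomial model to designed-experiment data and then examines the fitted surface's stationary point. The sketch below illustrates the idea for two coded factors with synthetic data, not the actual Drytek measurements; the real study would use a proper experimental design and significance tests rather than random sample points.

```python
import numpy as np

rng = np.random.default_rng(0)
power = rng.uniform(-1, 1, 30)    # coded rf power levels (synthetic)
press = rng.uniform(-1, 1, 30)    # coded reactor pressure levels (synthetic)

# Synthetic particle-count response with a known peak at (0.3, -0.2).
counts = 5.0 - (power - 0.3) ** 2 - 2.0 * (press + 0.2) ** 2

# Full quadratic model: y = b0 + b1*p + b2*q + b3*p^2 + b4*q^2 + b5*p*q
X = np.column_stack([np.ones_like(power), power, press,
                     power ** 2, press ** 2, power * press])
beta, *_ = np.linalg.lstsq(X, counts, rcond=None)

# Stationary point of the fitted surface (here the interaction term is
# negligible, so the axes decouple); this is the candidate worst-case
# process condition for particle generation.
p_star = -beta[1] / (2.0 * beta[3])
q_star = -beta[2] / (2.0 * beta[4])
```

Locating the maximum of the fitted surface, rather than the best raw measurement, is what lets RSM interpolate between the tested process conditions.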
Franssen, F.; Islam, A.B.M.N.; Sonnier, C.; Schoeneman, J.L.; Baumann, M.
The conclusions of the vulnerability test on VOPAN (Verification of Operator's Analysis), as conducted at the Safeguards Analytical Laboratory (SAL) at Seibersdorf, Austria in October 1990 and documented in STR-266, indicate that ``whenever samples are taken for safeguards purposes extreme care must be taken to ensure that they have not been interfered with during the sample taking, transportation, storage or sample preparation process.'' Indeed there exist a number of possibilities to alter the content of a safeguards sample vial from the moment of sampling up to the arrival of the treated (or untreated) sample at SAL. The time lapse between these two events can range from a few days up to months. The sample history over this period can be subdivided into three main sub-periods: (1) the period from when the sampling activities are commenced up to the treatment in the operator's laboratory, (2) during treatment of samples in the operator's laboratory, and finally, (3) the period between that treatment and the arrival of the sample at SAL. A combined effort between the Agency and the United States Support Program to the Agency (POTAS) has resulted in two active tasks and one proposed task to investigate improving the maintenance of continuity of knowledge on safeguards samples during the entire period of their existence. This paper describes the use of the Sample Vial Secure Container (SVSC), of the Authenticated Secure Container System (ASCS), and of the Secure Container for Storage and Transportation of samples (SCST) to guarantee that a representative portion of the solution sample will be received at SAL.
A control algorithm is proposed for a molten-salt solar central receiver in a cylindrical configuration. The algorithm simultaneously regulates the receiver outlet temperature and limits thermal-fatigue damage of the receiver tubes to acceptable levels. The algorithm is similar to one that was successfully tested for a receiver in a cavity configuration at the Central Receiver Test Facility in 1988. Due to the differences in the way solar flux is introduced on the receivers during cloud-induced transients, the cylindrical receiver will be somewhat more difficult to control than the cavity receiver. However, simulations of a proposed cylindrical receiver at the Solar Two power plant have indicated that automatic control during severe cloud transients is feasible. This paper also provides important insights regarding receiver design and lifetime as well as a strategy for reducing the power consumed by the molten-salt pumps.
This paper describes experiments on the wettability of tin on oxygen-free, high-conductivity (OFHC) copper using a ``point source'' ultrasonic horn. Ultrasonics are used on metals such as aluminum or stainless steel, which are difficult to wet without the use of very strong corrosives. These experiments explore the behavior of acoustic energy transmission in the horn-solder-substrate system, as indicated by the solder film generated, and explore the use of ultrasonics in actual electronic systems component fabrication and assembly processes.
An evaluation of substitutes for tin-lead alloy solders is described. The first part of the evaluation studies the wettability of tin-based, lead-free solders. The second part evaluates their solderability. The solders evaluated were commercially available.
This paper presents the results of a set of structural analyses performed to investigate the effects of internal gas generation on the extension of pre-existing fractures around disposal rooms at the Waste Isolation Pilot Plant. The response of a room and its contents is computed for this scenario to establish the condition of the room at any point in time. The development of the capability to perform these analyses represents an additional step in the development of an overall model for the disposal room.
National Electronic Packaging and Production Conference-Proceedings of the Technical Program (West and East)
Frear, D.R.
Acid vapors have been used to fluxlessly reduce metal oxides and enhance wetting of solder on metallizations. Dilute solutions of hydrogen, acetic acid, and formic acid in an inert carrier gas of nitrogen or argon were used with the sessile drop technique for 60Sn-40Pb solder on Cu and Au/Ni metallizations. The time to reduce metal oxides and the degree of wetting as a function of acid vapor concentration were characterized. Acetic and formic acids reduce the surface metal oxides sufficiently to form metallurgically sound solder joints. Hydrogen did not reduce oxides rapidly enough at 220°C to be suitable for soldering applications. The optimum condition for oxide reduction with formic acid was an acid vapor concentration in nitrogen carrier gas of 4% for Cu metallizations and 1.6% for Au/Ni. The acetic acid vapor concentration, also in nitrogen, was optimized at 1.5% for both metallizations. Above a vapor concentration of 1.5%, the acetic acid combined with the bare metal to form acetates, which increased the wetting time. These results indicate that acid vapor fluxless soldering is a viable alternative to traditional flux soldering.
Proceedings of the International Instrumentation Symposium
Clark, E.L.
The measurement of surface pressures on a body which is submerged in flowing water involves several problems which are not encountered when the test medium is air. Many of these problems exist even if the water velocity is low, and become more severe at higher velocities (45-65 ft/sec) where the surface pressure may be low enough for cavitation to occur. Problem areas which are discussed include: hydrostatic pressure, surface tension, orifice errors, thermal effects on surface-mounted transducers, electrical fields, two-phase phenomena, and air content.
Deconing controllers are developed for a spinning spacecraft, where the control mechanism is that of axial or radial moving masses that are used to produce intentional, transient principal axis misalignments. A single mass axial controller is used to motivate the concept, and then axial and radial dual mass controllers are described. The two mass problem is of particular interest since spacecraft imbalances can be simultaneously removed with the same control logic. Each controller is tested via simulation for its ability to eliminate existing coning motion for a range of spin rates. Both controllers are developed via a linear-quadratic-regulator synthesis procedure, which is motivated by their multi-input/multi-output nature. The dynamic coupling in the radial two mass control problem introduces some particularly interesting design complications.
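The linear-quadratic-regulator synthesis mentioned above can be illustrated with a minimal numerical sketch. The two-state dynamics below are a hypothetical linearization chosen only for illustration (not the paper's spacecraft model), and the gain is obtained by backward Riccati iteration:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via backward Riccati (value) iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical linearized coning dynamics (illustrative, NOT the paper's
# model): two transverse states, one moving-mass input.
dt = 0.01
A = np.array([[1.0, dt], [-0.5 * dt, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])

K = dlqr(A, B, Q, R)
# A stabilizing gain drives the closed-loop spectral radius below 1,
# i.e., the coning motion decays.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)
```

The multi-input generalization (the dual-mass controllers) uses the same recursion with a wider B and R; only the matrix dimensions change.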
Many complex physical processes are modeled by coupled systems of partial differential equations (PDEs). Often, the numerical approximation of these PDEs requires the solution of large sparse nonsymmetric systems of equations. In this paper we compare the parallel performance of a number of preconditioned Krylov subspace methods on a large-scale MIMD machine. These methods are among the most robust and efficient iterative algorithms for the solution of large sparse linear systems. They are easy to implement on various architectures and work well on a wide variety of important problems. In this comparison we focus on the parallel issues associated with both local preconditioners and global preconditioners (those that combine information from the entire domain). The various preconditioners are applied to a variety of PDE problems within the GMRES, CCGS, BiCGSTAB, and QMRCGS methods. Conclusions are drawn on the effectiveness of the different schemes based on results obtained from a 1024-processor nCUBE 2 hypercube.
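As a concrete illustration of a preconditioned Krylov method, the sketch below implements GMRES with the simplest local preconditioner, a Jacobi (diagonal) scaling, which needs only information each processor already owns. The test matrix is an arbitrary diagonally dominant example, not one of the paper's PDE problems:

```python
import numpy as np

def gmres_jacobi(A, b, tol=1e-8, maxiter=None):
    """GMRES with left Jacobi (diagonal) preconditioning."""
    n = len(b)
    maxiter = maxiter or n
    Minv = 1.0 / np.diag(A)           # diagonal preconditioner M^{-1}
    r0 = Minv * b                     # preconditioned residual (x0 = 0)
    beta = np.linalg.norm(r0)
    V = np.zeros((n, maxiter + 1))
    H = np.zeros((maxiter + 1, maxiter))
    V[:, 0] = r0 / beta
    x = np.zeros(n)
    for j in range(maxiter):
        w = Minv * (A @ V[:, j])      # Arnoldi step on M^{-1}A
        for i in range(j + 1):
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # Small least-squares problem: min ||beta*e1 - H y||
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        x = V[:, :j + 1] @ y
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            break
        if H[j + 1, j] < 1e-14:       # lucky breakdown: exact solution
            break
        V[:, j + 1] = w / H[j + 1, j]
    return x

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1  # diagonally dominant
b = rng.standard_normal(n)
x = gmres_jacobi(A, b)
print(np.linalg.norm(A @ x - b) < 1e-6)
```

A global preconditioner would replace the `Minv *` applications with a solve that couples the whole domain (e.g., an approximate factorization); the Arnoldi machinery is unchanged.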
A two-stage self-organizing neural network architecture has been applied to object recognition in Synthetic Aperture Radar imagery. The first stage performs feature extraction and implements a two-layer Neocognitron. The resulting feature vectors are presented to the second stage, an ART 2-A classifier network, which clusters the features into multiple target categories. Training is performed off-line in two steps. First, the Neocognitron self-organizes in response to repeated presentations of an object to recognize. During this training process, discovered features and the mechanisms for their extraction are captured in the excitatory weight patterns. In the second step, Neocognitron learning is inhibited and the ART 2-A classifier forms categories in response to the feature vectors generated by additional presentations of the object to recognize. Finally, all training is inhibited and the system tested against a variety of objects and background clutter. In this paper we report the results of our initial experiments. The architecture recognizes a simulated tank vehicle at arbitrary azimuthal orientations at a single depression angle while rejecting clutter and other object returns. The neural architecture has achieved excellent classification performance using 20 clusters.
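The vigilance-based category formation performed by the ART 2-A stage can be sketched in a few lines. This is a simplified cosine-match clustering in the spirit of ART 2-A, not the full network dynamics, and the feature vectors are synthetic:

```python
import numpy as np

def art_cluster(patterns, vigilance=0.9, lr=0.5):
    """Simplified ART 2-A style clustering: unit-normalized inputs,
    cosine match against prototypes, vigilance test, and incremental
    prototype update (a sketch of the mechanism, not ART 2-A itself)."""
    protos, labels = [], []
    for p in patterns:
        p = p / np.linalg.norm(p)
        if protos:
            sims = [w @ p for w in protos]
            best = int(np.argmax(sims))
        if not protos or sims[best] < vigilance:
            protos.append(p.copy())            # commit a new category
            labels.append(len(protos) - 1)
        else:
            w = (1 - lr) * protos[best] + lr * p   # recode toward input
            protos[best] = w / np.linalg.norm(w)
            labels.append(best)
    return labels, protos

# Two well-separated synthetic feature clusters -> two categories.
rng = np.random.default_rng(1)
a = rng.normal([5.0, 0.0, 0.0], 0.1, (20, 3))
b = rng.normal([0.0, 5.0, 0.0], 0.1, (20, 3))
labels, protos = art_cluster(np.vstack([a, b]), vigilance=0.9)
print(len(protos))   # expect 2
```

Raising the vigilance parameter splits categories more finely, which is how the number of clusters (20 in the experiments above) is controlled.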
This paper presents results of a set of numerical experiments performed to benchmark the Cell-Centered Implicit Continuous-fluid Eulerian (CCICE) method and to determine its limitations as a flow solver for water entry and water exit simulations.
This paper will include a brief overview of the components of the QUICKSILVER suite and its current modeling capabilities. As time permits, results from sample applications will be shown, including time animations of simulation results.
Proceedings of the 35th International Power Sources Symposium
Clark, N.H.
Technologies that use carbon and mixed metal oxides as the electrode material have been pursued to produce high-reliability double-layer capacitors (DLCs). The author demonstrates their environmental stability in temperature, shock, vibration, and linear acceleration. She reviews the available test data for both types of DLCs under these stress conditions. This study suggests that mixed-metal-oxide and carbon-based double-layer capacitors can survive harsh environments if packaged properly, and that elevated temperature degrades the performance of double-layer capacitors.
We describe a simple engineering model applicable to stand-off “Whipple bumper” shields, which are used to protect space-based assets from impacts by orbital debris particles. The model provides a framework for analyzing: 1) the parameter limits governing the penetration and breakup or decomposition of the hypervelocity debris particle; 2) the behavior of the induced debris cloud, including its velocity and divergence; and 3) the design and optimization of the stand-off shield for a specific threat and level of protection required. The model is normalized to actual stand-off debris shield experiments and multi-dimensional numerical simulations at impact velocities of ~10 km/s. The subsequent analysis of a current space station shield design suggests that: 1) for acceptable levels of protection, stand-off shields can be significantly thinner than previously thought; and 2) with the proper balance between shield thickness and stand-off distance, the total shield mass can be reduced substantially.
A series of experiments has been performed on the Sandia Hypervelocity Launcher to determine the performance limits of conventional Whipple shields against representative 0.8 g aluminum orbital debris plate-like fragments with velocities of 7 and 10 km/s. Supporting diagnostics include flash X-rays, high speed photography and transient digitizers for timing correlation. Two Whipple shield designs were tested with either a 0.030 cm or a 0.127 cm thick front sheet and a 0.407 cm thick backsheet separated by 30.5 cm. These two designs bracket the ballistic penetration limit curve for protection against these debris simulants for 7 km/s impacts.
Final Program and Paper Summaries for the 1992 Digital Signal Processing Workshop, DSPWS 1992
Jakowatz Jr., C.V.; Thompson, P.A.
In this paper we take a new look at the tomographic formulation of spotlight mode synthetic aperture radar (SAR), so as to include the case of targets having three-dimensional structure. This bridges the work of David C. Munson and his colleagues, who first described SAR in terms of two-dimensional tomography, with Jack Walker's original derivation of spotlight mode SAR imaging via Doppler analysis. The main result is to demonstrate that the demodulated radar return data from a spotlight mode collection represent a certain set of samples of the three-dimensional Fourier transform of the target reflectivity function, and to do so using tomographic principles instead of traditional Doppler arguments. We then show that the tomographic approach is useful in interpreting the two-dimensional SAR image of a three-dimensional scene. In particular, the well-known SAR imaging phenomenon commonly referred to as layover is easily explained in terms of tomographic projection. 4 refs.
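The tomographic principle underlying this formulation is the projection-slice theorem: the 1-D Fourier transform of a projection of a scene equals a central slice of the scene's 2-D Fourier transform. A small numerical check on a toy reflectivity function (2-D here; the paper extends the argument to 3-D):

```python
import numpy as np

# Projection-slice theorem demo: projecting along y and taking a 1-D FFT
# reproduces the k_y = 0 slice of the 2-D FFT.
n = 64
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2) / 50.0)           # toy 'reflectivity' scene

proj = f.sum(axis=1)                         # project along y
slice_1d = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(proj)))

F2 = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f)))
central = F2[:, n // 2]                      # the k_y = 0 slice

print(np.allclose(slice_1d, central, atol=1e-8))
```

Rotating the projection direction sweeps the slice through Fourier space, which is why the demodulated returns over an aperture of look angles fill an annular region of the transform.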
The unit cell shape of a thick frequency selective surface, or dichroic plate, depends on its frequency requirements. One aperture shape may be chosen to give wider bandwidths, and another chosen for sharper frequency roll-off. This is analogous to circuits, where the need for differing frequency response determines the circuit topology. Acting as spatial frequency filters, dichroics are a critical component in supporting the Deep Space Network (DSN) for spacecraft command and control up links as well as spacecraft down links. Currently these dichroic plates separate S-band at 2.0--2.32 GHz from X-band at 8.4--8.45 GHz. But new spacecraft communication requirements are also calling for an up-link frequency at 7.165 GHz. In addition, future spacecraft such as CRAF/Cassini will require dichroics effectively separating K{sub a}-band frequencies in the 31--35 GHz range. The requirements for these surfaces are low transmission loss of < 0.1 dB at high power levels. It is also important to maintain a minimal relative phase shift between polarizations for circular polarization transmission. More recent work has shown the successful demonstration of design techniques for straight, rectangular apertures at an incident angle of 30{degrees}. The plates are air-filled due to power dissipation and noise temperature considerations; up-link powers approach 100 kW, making dielectrics undesirable. Here we address some of the cases in which the straight rectangular shape may have limited usefulness. For example, grating lobes become a consideration when the bandwidth required to include the new frequency of 7.165 GHz conflicts with the desired incident angle of 30{degrees}. For this case, the cross shape's increased packing density and bandwidth could make it desirable. When a sharp frequency response is required to separate two closely spaced K{sub a}-band frequencies, the stepped rectangular aperture might be advantageous. 5 refs.
Several closed form trajectory solutions have been developed for low-thrust interplanetary flight and used with patched conics for analysis of combined propulsion systems. The solutions provide insight into alternative types of Mars missions, and show considerable mass savings for fast crewed missions with outbound trip times on the order of 90-100 days.
Nuclear Thermal Propulsion (NTP) has been identified as a critical technology in support of the NASA Space Exploration Initiative (SEI). In order to safely develop a reliable, reusable, long-lived flight engine, facilities are required that will support ground tests to qualify the nuclear rocket engine design. Initial nuclear fuel element testing will need to be performed in a facility that supports a realistic thermal and neutronic environment in which the fuel elements will operate at a fraction of the power of a flight weight reactor/engine. Ground testing of nuclear rocket engines is not new. New restrictions mandated by the National Environmental Protection Act of 1970, however, require major changes in the manner in which reactor engines are tested. These restrictions preclude the types of nuclear rocket engine tests that were performed in the past from being done today. A major attribute of a safely operating ground test facility is its ability to prevent fission products from being released in appreciable amounts to the environment. Details of the intricacies and complications involved with the design of a fuel element ground test facility are presented in this report, with a strong emphasis on safety and economy.
A rapid deployment access delay system (RAPADS) has been designed to provide high security protection of valued assets. The system or vault is transportable, modular, and utilizes a pin connection design. Individual panels are attached together to construct the vault. The pin connection allows for quick assembly and disassembly, and makes it possible to construct vaults of various sizes to meet a specific application. Because of the unique pin connection and overlapping joint arrangement, a sequence of assembly steps is required to assemble the vault. As a result, once the door is closed and locked, all pin connections are concealed and inaccessible. This provides a high level of protection in that no single panel or connection is vulnerable. This paper presents the RAPADS concept, design, fabrication, and construction.
Proceedings - International Carnahan Conference on Security Technology
Arlowe, H.D.
There is an emerging interest in using thermal IR to automatically detect human intruders over wide areas. Such a capability could provide early warning beyond the perimeter at fixed sites, and could be used for portable security around mobile military assets. Sandia National Laboratories has been working on automatic detection systems based on the thermal contrast and motion of human intruders for several years, and has found that detection is sometimes difficult, depending on solar and other environmental conditions. Solar heating can dominate human thermal radiation by 100 fold, and dynamic background temperature changes can limit detector sensitivity. This paper explains those conditions and energy transfer mechanisms that lead to difficult thermal detection. We will not cover those adverse conditions that are more widely understood and previously reported on, such as fog, smoke, rain and falling snow. This work was sponsored by the Defense Nuclear Agency.
In the wavenumber-domain method of SAR imaging, frequency-domain radar data are used to reconstruct a portion of the 2-D Fourier transform of the scene, which is then inverted to create the image. The method suffers no inherent limits on aperture length or scene size. This paper extends the concept to the case where the synthetic aperture is not a straight line and the samples are unevenly spaced. An accumulation formula for wavenumber-domain reconstruction is derived and shown to be equivalent to earlier algorithms in the uniform-aperture case. It is then shown how data with three-dimensional irregularity in the aperture can be processed using height correction and mapping into the slant plane.
CIRCE2 is a cone-optics computer code for determining the flux distribution and total incident power upon a receiver, given concentrator and receiver geometries, sunshape (angular distribution of incident rays from the sun-disk), and concentrator imperfections such as surface roughness and random deviation in slope. Statistical methods are used to evaluate the directional distribution of reflected rays from any given point on the concentrator, whence the contribution to any point on the target can be obtained. DEKGEN2 is an interactive preprocessor which facilitates specification of geometry, sun models, and error distributions. The CIRCE2/DEKGEN2 package equips solar energy engineers with a quick, user-friendly design and analysis tool for study/optimization of dish-type distributed receiver systems. The package exhibits convenient features for analysis of 'conventional' concentrators, and has the generality required to investigate complex and unconventional designs. Among the more advanced features are the ability to model dish or faceted concentrators and stretched-membrane reflectors, and to analyze 3-D flux distributions on internal or external receivers with 3-D geometries. Facets of rectangular, triangular, or circular projected shape, with profiles of parabolic, spherical, flat, or custom curvature can be handled. Provisions for shading, blocking, and aperture specification are also included. This paper outlines the features and capabilities of the new package, as well as the theory and numerical models employed in CIRCE2.
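The statistical treatment of concentrator imperfections can be illustrated with a toy Monte Carlo. This is a sketch of the slope-error concept only, not CIRCE2's cone-optics algorithm, and the numbers below are illustrative assumptions:

```python
import numpy as np

# A surface-normal slope error of sigma radians tilts the reflected ray
# by ~2*sigma, so a point focus at distance L blurs into a spot of RMS
# radius ~2*sigma*L.  Illustrative 2-D Monte Carlo.
rng = np.random.default_rng(2)
L = 10.0          # assumed reflector-to-receiver distance (m)
sigma = 1e-3      # assumed RMS slope error (rad)
n = 200_000

slope_err = rng.normal(0.0, sigma, n)
ray_err = 2.0 * slope_err            # reflection doubles the normal tilt
spot = L * np.tan(ray_err)           # landing offset on the receiver

rms = np.sqrt(np.mean(spot**2))
print(abs(rms - 2 * sigma * L) / (2 * sigma * L) < 0.01)
```

CIRCE2's analytical convolution of sunshape with the error distributions replaces such ray sampling, which is why it can evaluate flux maps quickly.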
Proceedings of SPIE - The International Society for Optical Engineering
Stansfield, Sharon A.
This paper presents two parallel implementations of a knowledge-based robotic grasp generator. The grasp generator, originally developed as a rule-based system, embodies a knowledge of the associations between the features of an object and the set of valid hand shapes/arm configurations which may be used to grasp it. Objects are assumed to be unknown, with no a priori models available. The first part of this paper presents a `parallelization' of this rule base using the connectionist paradigm. Rules are mapped into a set of nodes and connections which represent knowledge about object features, grasps, and the required conditions for a given grasp to be valid for a given set of features. Having shown that the object and knowledge representations lend themselves to this parallel recasting, the second part of the paper presents a back propagation neural net implementation of the system that allows the robot to learn the associations between object features and appropriate grasps.
The Capacitors Division at Sandia National Laboratories has for many years been actively involved in developing high-reliability, low-inductance, energy-storage, pulse-discharge capacitors. Development has concentrated on two dielectric systems: mica-paper and Mylar (both dry wrap and fill and FC40 liquid impregnation). Design improvements are continually being sought. For pulse-discharge usage, lowering the capacitor inductance can improve circuit performance. This paper describes recent efforts to improve the efficiency of low-inductance, mica-paper capacitors by reducing the inductance through optimizing the component geometry. The study focused on a 0.2 {mu}F, 4000 V mica-paper extended-foil capacitor design. The experimental matrix was a two-level, three-factor design with center points, replicated four times to give reasonable statistics. The factors were capacitor width, capacitor length, and electrode width; the response functions were capacitor inductance and circuit performance. The capacitor inductance was measured by the resonance technique, and the circuit performance was evaluated by peak (discharge) current and rise time. Results show that the inductance can be minimized by choice of geometry, with accompanying improvements in circuit performance.
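The resonance technique infers inductance from the measured ring frequency via f = 1/(2*pi*sqrt(LC)). A minimal sketch with the stated capacitance; the ring frequency assumed below is hypothetical, not a measured value from this study:

```python
import math

def inductance_from_resonance(f_res_hz, c_farads):
    """Series-resonance relation f = 1/(2*pi*sqrt(L*C)), solved for L."""
    return 1.0 / ((2.0 * math.pi * f_res_hz) ** 2 * c_farads)

C = 0.2e-6                  # the 0.2 uF design capacitance
f = 1.0e6                   # assumed (hypothetical) ring frequency, Hz
L = inductance_from_resonance(f, C)
print(f"{L * 1e9:.1f} nH")  # ~126.7 nH for these illustrative numbers
```

In practice the resonant frequency is read from the ring-down of the discharge current, and lowering L raises the ring frequency for fixed C.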
This paper describes the plan for a test to failure of a steel containment vessel model. The test specimen proposed for this test is a scale model representing certain features of an improved BWR MARK-2 containment vessel. The objective of this test is to investigate the ultimate structural behavior of the model by incrementally increasing the internal pressure, at ambient temperature, until failure occurs. Pre- and posttest analyses will be conducted to predict and evaluate the results of this test. The main objective of these analyses is to validate, by comparisons with the experimental data, the analytical methods used to evaluate the structural behavior of an actual containment vessel under severe accident conditions. This experiment is part of a cooperative program between the Nuclear Power Engineering Corporation (NUPEC), the United States Nuclear Regulatory Commission (NRC), and Sandia National Laboratories (SNL).
Logging technologies developed for hydrocarbon resource evaluation have not migrated into geothermal applications, even though the data so obtained would strengthen reservoir characterization efforts. Two issues have impeded progress: (i) there is a general lack of vetted, high-temperature instrumentation, and (ii) the interpretation of log data generated in a geothermal formation is in its infancy. Memory-logging tools provide a path around the first obstacle by providing quality data at a low cost. These tools feature on-board computers that process and store data, and newer systems may be programmed to make ``decisions.'' Since memory tools are completely self-contained, they are readily deployed using the slick line found on most drilling locations. They have proven to be rugged, and a minimal training program is required for operator personnel. Present tools measure properties such as temperature and pressure, and the development of noise, deviation, and fluid conductivity logs based on existing hardware is relatively easy. A more complex geochemical tool aimed at a quantitative analysis of potassium, uranium, and thorium will be available in about one year, and it is expandable into all nuclear measurements common in the hydrocarbon industry. A second tool designed to sample fluids at conditions exceeding 400{degrees}C is in the proposal stage. Partnerships are being formed between the geothermal industry, scientific drilling programs, and the national laboratories to define and develop inversion algorithms relating raw tool data to more pertinent information. 8 refs.
The overpressurization of a 1:6 scale reinforced concrete containment building demonstrated that liner tearing is a plausible failure mode in such structures under severe accident conditions. A combined experimental and analytical program was developed to determine the important parameters that affect liner tearing and to develop reasonably simple analytical methods for predicting when tearing will occur. Three sets of test specimens were designed to allow individual control over and investigation of the mechanisms believed to be important in causing failure of the liner plate. The series of tests investigated the effect on liner tearing produced by the anchorage system, the loading conditions, and the transition in thickness of the liner. Before testing, the specimens were analyzed using two- and three-dimensional finite element models. Based on the analysis, the failure mode and corresponding load conditions were predicted for each specimen. Test data and posttest examination of the test specimens show mixed agreement with the analytical predictions with regard to failure mode and specimen response for most tests. Many similarities were also observed between the response of the liner in the 1:6 scale reinforced concrete containment model and the response of the test specimens. This work illustrates that the failure mechanism of a reinforced concrete containment building can be greatly influenced by details of liner and anchorage system design. Furthermore, it significantly increases the understanding of containment building response under severe accident conditions.
Acoustic telemetry has been a dream of the drilling industry for the past 50 years. It offers the promise of data rates which are one hundred times greater than existing technology. Such a system would open the door to true logging-while-drilling technology and bring enormous profits to its developers. The basic idea is to produce an encoded sound wave at the bottom of the well, let it propagate up the steel drillpipe, and extract the data from the signal at the surface. Unfortunately, substantial difficulties arise. The first difficult problem is to produce the sound wave. Since the most promising transmission wavelengths are about 20 feet, normal transducer efficiencies are quite low. Compounding this problem is the structural complexity of the bottomhole assembly and drillstring. For example, the acoustic impedance of the drillstring changes every 30 feet and produces an unusual scattering pattern in the acoustic transmission. This scattering pattern causes distortion of the signal and is often confused with signal attenuation. These problems are not intractable. Recent work has demonstrated that broad frequency bands exist which are capable of transmitting data at rates up to 100 bits per second. Our work has also identified the mechanism which is responsible for the observed anomalies in the patterns of signal attenuation. Furthermore, in the past few years a body of experience has been developed in designing more efficient transducers for application to metal waveguides. The direction of future work is clear. New transducer designs which are more efficient and compatible with existing downhole power supplies need to be built and tested; existing field test data need to be analyzed for transmission bandwidth and attenuation; and new and less expensive methods of collecting data on transmission path quality need to be incorporated into this effort. 11 refs.
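The band structure produced by periodic impedance changes along the drillstring can be sketched with a 1-D transfer-matrix (Bloch) model: a frequency lies in a passband of the periodic structure iff |trace(M_cell)| <= 2. The segment lengths and impedance contrast below are illustrative assumptions, not measured drillstring values:

```python
import numpy as np

# Unit cell: pipe body (L1, Z1) followed by a tool joint (L2, Z2),
# repeating along the string.  Illustrative parameters only.
c = 5000.0                 # assumed bar wave speed in steel, m/s
L1, Z1 = 8.5, 1.0          # pipe body (normalized impedance)
L2, Z2 = 0.6, 3.0          # tool joint (larger cross-section)

def seg(k, L, Z):
    """Force/velocity transfer matrix of a uniform segment."""
    return np.array([[np.cos(k * L), Z * np.sin(k * L)],
                     [-np.sin(k * L) / Z, np.cos(k * L)]])

freqs = np.linspace(10.0, 2000.0, 4000)
passband = []
for f in freqs:
    k = 2 * np.pi * f / c
    cell = seg(k, L1, Z1) @ seg(k, L2, Z2)
    passband.append(abs(np.trace(cell)) <= 2.0)
passband = np.array(passband)

# The impedance mismatch carves stopbands out of the spectrum, yet broad
# passbands remain -- the bands available for telemetry.
print(0.0 < passband.mean() < 1.0)
```

For a uniform string (Z1 = Z2) the trace is 2*cos(k(L1+L2)) and every frequency passes; it is the periodic mismatch that creates the scattering pattern described above.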
Sorenson, Ken B.; Salzbrenner, Richard; Nickell, Robert E.
An effort has been undertaken to develop a brittle fracture acceptance criterion for structural components of nuclear material transportation casks. The need for such a criterion was twofold. First, new generation cask designs have proposed the use of ferritic steels and other materials to replace the austenitic stainless steel commonly used for structural components in transport casks. Unlike austenitic stainless steel, which fails in a high-energy-absorbing, ductile tearing mode, it is possible for these candidate materials to fail via brittle fracture when subjected to certain combinations of elevated loading rates and low temperatures. Second, there is no established brittle fracture criterion accepted by the regulatory community that covers a broad range of structural materials. Although the existing IAEA Safety Series {number sign}37 addressed brittle fracture, its guidance was dated and pertained only to ferritic steels. Consultant's Services Meetings held under the auspices of the IAEA have resulted in a recommended brittle fracture criterion. The brittle fracture criterion is based on linear elastic fracture mechanics, and is the result of a consensus of experts from six participating IAEA-member countries. The brittle fracture criterion allows three approaches to determine the fracture toughness of the structural material. The three approaches present the opportunity to balance material testing requirements and the conservatism of the material's fracture toughness which must be used to demonstrate resistance to brittle fracture. This work has resulted in a revised Appendix IX to Safety Series {number sign}37 which will be released as an IAEA Technical Document within the coming year.
We show experimentally and theoretically that the generation of the 13-TW Hermes III electron beam can be accurately monitored, and that the beam can be accurately directed onto a high-Z target to produce a wide variety of bremsstrahlung patterns. This control allows the study of radiation effects induced by gamma rays to be extended into new parameter regimes. Finally, we show that the beam can be stably transported in low-pressure gas cells.
This paper presents the groundwork for a completely automatic 3-D hexahedral mesh generation algorithm called plastering. It extends the paving algorithm developed by Blacker, a completely automatic 2-D quadrilateral meshing technique.
The transport of a chemically reactive fluid through a permeable medium is governed by many classes of chemical interactions. Dissolution/precipitation (D/P) reactions are among the interactions of primary importance because of their significant influence on the mobility of aqueous ions. In general, D/P reactions lead to the propagation of coherent waves. This paper provides an overview of the types of wave phenomena observed in one-dimensional (1D) and two-dimensional (2D) porous media for systems in which mineral D/P is the dominant type of chemical reaction. It is demonstrated that minerals dissolve in sharp waves in 1D advection-dominated transport, and that these waves separate zones of constant chemical compositions in the aqueous and mineral phases. Analytical solutions based on coherence methods are presented for solving 1D advection-dominated transport problems with constant and variable boundary conditions. Numerical solutions of diffusion-dominated transport in porous media show that sharp D/P fronts occur in this system as well. A final example presents a simple dual-porosity system with advection in an idealized fracture and solute diffusion into an adjacent porous matrix. The example illustrates the delay of contaminant release from the 2D domain due to a combination of physical retardation and chemical retardation.
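The sharp dissolution waves and the associated retardation can be sketched with a minimal 1-D toy model. This is an operator-split illustration of the mechanism, not the coherence-based analytical solutions of the paper, and the parameter values are arbitrary:

```python
import numpy as np

# Undersaturated water (c_in = 0) enters a column initially holding a
# mineral at its solubility c_eq; equilibrium dissolution produces a
# sharp front whose speed is retarded by the mineral inventory m0:
#   v_front = v * c_eq / (c_eq + m0)   (per pore volume units)
nx = 400
c_eq, m0 = 1.0, 4.0
c = np.full(nx, c_eq)       # aqueous concentration, at equilibrium
m = np.full(nx, m0)         # mineral per pore volume

steps = 1000
for _ in range(steps):
    c[1:] = c[:-1].copy()   # pure advection, one cell per step (CFL = 1)
    c[0] = 0.0              # fresh, undersaturated inflow
    deficit = np.minimum(c_eq - c, m)   # equilibrium dissolution step
    m -= deficit
    c += deficit

front = np.argmax(m > 0)    # first cell still holding mineral
predicted = steps * c_eq / (c_eq + m0)
print(abs(front - predicted) <= 2)     # sharp front at the predicted speed
```

Behind the front the mineral is exhausted and the water is undersaturated; ahead of it both phases sit at constant composition, which is the zonal structure the 1-D analytical solutions describe.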
A closely coupled computational and experimental aerodynamics research program was conducted on a hypersonic vehicle configuration at Mach 8. Aerodynamic force and moment measurements and flow visualization results were obtained in the Sandia National Laboratories hypersonic wind tunnel for laminar boundary layer conditions. Parabolized and iterative Navier-Stokes simulations were used to predict flow fields and forces and moments on the hypersonic configuration. The basic vehicle configuration is a spherically blunted 10{degrees} cone with a slice parallel with the axis of the vehicle. On the slice portion of the vehicle, a flap can be attached so that deflection angles of 10{degrees}, 20{degrees}, and 30{degrees} can be obtained. Comparisons are made between experimental and computational results to evaluate the quality of each and to identify areas where improvements are needed. This extensive set of high-quality experimental force and moment measurements is recommended for use in the calibration and validation of computational aerodynamics codes. 22 refs.
Microstructural models of deformation of polycrystalline materials suggest that inelastic deformation leads to the formation of a corner or vertex at the current load point. This vertex can cause the response to non-proportional loading to be more compliant than predicted by the smooth yield-surface idealization. Combined compression-torsion experiments on Tennessee marble indicate that a vertex forms during inelastic flow. An important implication is that strain localization by bifurcation occurs earlier than predicted by bifurcation analysis using isotropic hardening.
Acoustic emissions and conventional strain measurements were used to follow the evolution of the damage surface and plastic potential in a limestone under triaxial compression. Confining pressures were chosen such that macroscopically, the limestone exhibited both brittle and ductile behavior. The parameters derived are useful for modeling the deformation of a pressure-dependent material and for computing when localization would occur. For modeling, simple approximations are adequate, but a more complete understanding of the evolution of the various parameters is necessary in order to calculate when localization can be expected. 11 refs., 6 figs.
Light emission microscopy is now routinely used in most integrated circuit (IC) failure analysis laboratories. This tutorial is designed to benefit both novice and experienced failure analysts by providing an introduction to light emission microscopy as well as information on new techniques, such as the use of spectral signatures. The use of light emission for accurate identification and spatial localization of physical defects and failure mechanisms is presented. This includes the analysis of defects such as short circuits which do not themselves emit light. The importance of understanding the particular IC design and applying the correct electrical stimulus is stressed. A video tape is used to show light emission from pn junctions, MOS transistors, test structures, and CMOS ICs in static and dynamic electrical stimulus conditions. 27 refs.
The Thermionic System Evaluation Test (TSET) is a ground test of an unfueled Russian TOPAZ-II in-core thermionic space reactor powered by electric heaters. The facility that will be used for testing of the TOPAZ-II systems is located at the New Mexico Engineering Research Institute (NMERI) complex in Albuquerque, NM. The reassembly of the Russian test equipment is the responsibility of International Scientific Products (ISP), a San Jose, CA, company, and Inertek, a Russian corporation, with support provided by engineers and technicians from Phillips Laboratory (PL), Sandia National Laboratories (SNL), Los Alamos National Laboratory (LANL), and the University of New Mexico (UNM). This is the first test to be performed under the New Mexico Strategic Alliance agreement; the alliance consists of PL, SNL, LANL, and UNM. The testing is being funded by the Strategic Defense Initiative Organization (SDIO), with PL responsible for project execution.
Radioactive material transport casks use either lead or depleted uranium (DU) as gamma-ray shielding material. Stainless steel is conventionally used for structural containment. If a DU alloy had properties sufficient to guarantee resistance to failure during both nominal use and accident conditions, it could serve the dual role of shielding and containment, and the use of other structural materials (i.e., stainless steel) could be reduced. (It is recognized that lead can play no structural role.) Significant reductions in cask weight and dimensions could then be achieved, perhaps allowing an increase in payload. The mechanical response of depleted uranium has previously not been included in calculations intended to show that DU-shielded transport casks will maintain their containment function under all conditions. This paper describes a two-part study of depleted uranium alloys. First, the mechanical behavior of DU alloys was determined in order to extend the limited set of mechanical properties reported in the literature. The properties measured include the tensile behavior and the impact energy. Fracture toughness testing was also performed to determine the sensitivity of DU alloys to brittle fracture; fracture toughness is the inherent material property which quantifies the fracture resistance of a material. Tensile strength and ductility are significant in terms of other failure modes, however, as will be discussed. These mechanical properties were then input into finite element calculations of cask response to loading conditions to quantify the potential for claiming structural credit for DU. (The term ``structural credit'' describes whether a material has adequate properties to allow it to assume a positive role in withstanding structural loadings.)
Interfacial microchemical characterization is required in all aspects of surface processing as applied to transportation and utility technologies. Corrosion protection, fuel cells and batteries, wear surfaces, polymers and polymer-oxide interfaces, thin film multilayers, photoelectrochemical systems, and organized molecular assemblies are just a few examples of interfacial systems of interest to these industries. A number of materials and processing problems, both related to fundamental understanding and to monitoring manufacturing operations, have been identified where our microchemical characterization abilities need improving. Over twenty areas for research are identified where progress will contribute to improved understanding of materials and processes, improved problem-solving abilities, improved manufacturing consistency, and lower costs. Some of the highest priority areas for research include (1) developing techniques and methods with improved chemical specificity at interfaces, (2) developing fast, real-time surface and interface probes and (3) improving the cost and reliability of manufacturing monitors. Increased collaboration among University, Industry, and Government laboratories will be a prerequisite to making the required progress in a timely fashion.
A parallel processor that is optimized for real-time linear control has been developed. This modular system consists of A/D modules, D/A modules, and floating-point processor modules. The scalable processor uses up to 1,000 Motorola DSP96002 floating-point processors for a peak computational rate of 60 GFLOPS. Sampling rates up to 625 kHz are supported by this analog-in to analog-out controller. The high processing rate and parallel architecture make this processor suitable for computing state-space equations and other multiply/accumulate-intensive digital filters. Processor features include 14-bit conversion devices, low input-output latency, 240 Mbyte/s synchronous backplane bus, low-skew clock distribution circuit, VME connection to host computer, parallelizing code generator, and look-up-tables for actuator linearization. This processor was designed primarily for experiments in structural control. The A/D modules sample sensors mounted on the structure and the floating-point processor modules compute the outputs using the programmed control equations. The outputs are sent through the D/A module to the power amps used to drive the structure's actuators. The host computer is a Sun workstation. An Open Windows-based control panel is provided to facilitate data transfer to and from the processor, as well as to control the operating mode of the processor. A diagnostic mode is provided to allow stimulation of the structure and acquisition of the structural response via sensor inputs.
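The multiply/accumulate-intensive computation the processors perform each sample period (discrete state-space control equations) can be sketched as follows. This is a NumPy stand-in for the DSP96002 code, and the A, B, C, D matrices are hypothetical:

```python
import numpy as np

def state_space_step(x, u, A, B, C, D):
    """One sample period of a discrete state-space controller:
    compute the output y = C x + D u (sent to the D/A), then
    advance the state, x <- A x + B u."""
    y = C @ x + D @ u
    x_next = A @ x + B @ u
    return x_next, y

# Hypothetical 2-state, 1-input, 1-output controller
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))
u = np.ones((1, 1))            # constant sensor input for illustration
for _ in range(3):             # three sample periods
    x, y = state_space_step(x, u, A, B, C, D)
```

Each step is a fixed set of matrix multiply/accumulates, which is why the architecture parallelizes well across many floating-point processors.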
International Atomic Energy Agency (IAEA) inspectors must maintain continuity of knowledge on all safeguard samples, and in particular on those samples drawn from plutonium product and spent fuel input tanks at a nuclear reprocessing plant's blister sampling station. Integrity of safeguard samples must be guaranteed from the sampling point to the moment of sample analysis at the IAEA's Safeguards Analytical Laboratory (SAL Seibersdorf) or at an accepted local laboratory. These safeguard samples are drawn at a blister sampling station with inspector participation, and then transferred via a pneumatic post system to the facility's analytical laboratory. The transfer of the sample by the pneumatic post system, the arrival of the sample in the operator's analytical laboratory, and the storage of the sample awaiting analysis are very time consuming for the inspector, particularly if continuous human surveillance is required for all these activities. This process might be observed by ordinary surveillance methods, such as a video monitoring system, but again this would be cumbersome and time consuming for both the inspector and operator. This paper will describe a secure container designed to assure sample vial integrity from the point the sample is drawn to the treatment of the sample at the facility's analytical laboratory.
Understanding the mechanisms of growth during vapor-phase deposition is critical for the precise control of surface morphology required by advanced electronic device structures. Yet only relatively recently have the tools for observing this growth on an atomic-level scale become available (via scanning tunneling microscopy (STM), reflection high energy electron diffraction (RHEED), and low-energy electron microscopy (LEEM)). We present results from our own RHEED and STM measurements in which we use computer simulations to aid in determining the fundamental surface processes which contribute to the observed structures. In this study of low-energy ion bombardment and growth on Si(001), it is demonstrated how simulations enable us to determine the dominant atomistic process.
Reflective Particle Tags were developed for uniquely identifying individual strategic weapons that would be counted in order to verify arms control treaties. These tags were designed to be secure from copying and transfer even after being left under the control of a very determined adversary for a number of years. This paper discusses how this technology can be applied in other applications requiring confidence that a piece of equipment, such as a seal or a component of a secure container, has not been replaced with a similar item. The hardware and software needed to implement this technology are discussed, and guidelines are presented for the design of systems that rely on reflective particles or similar randomly formed features for security applications without compromising their intrinsic security. Substitution of identical components is one of the easiest ways to defeat security seals, secure containers, verification instrumentation, and similar equipment; this technology, when properly applied, provides a method to counter that defeat scenario.
A non-contact, high-resolution laser ranging device has been incorporated into an instrument for accurately mapping the surface of WECS airfoils in the field. Preliminary scans of composite materials and bug debris show that the system has adequate resolution to accurately map bug debris and other surface contamination. This system, just recently delivered and now being debugged and optimized, will be used to characterize blade surface contamination on wind turbines. The technology used in this system appears to hold promise for application to many other measurement tasks, including a system for quickly and very accurately determining the profile of turbine blade molds and blades.
York II, A.R.; Freedman, J.M.; Kincy, M.A.; Joseph, B.J.
Sandia National Laboratories has completed the design and is now fabricating packages for shipment of tritium gas in conformance with 10 CFR 71. The package, referred to as the AL-SX, is unusual in that its contents are a radioactive gas, and a large margin of safety has been demonstrated through overtesting. The AL-SX is small, 42 cm in diameter and 55 cm tall, weighs 55 kg empty and a maximum of 60 kg with contents, and is designed for a 20-year service life. This paper describes the design of the AL-SX and certification testing performed on AL-SX packages and discusses containment of tritium and AL-SX manufacturing considerations.
Sandia National Laboratories is one of the nation's largest research and development (R and D) facilities and is responsible for national security programs in defense and energy with a primary emphasis on nuclear weapon R and D. However, Sandia also supports a wide variety of projects ranging from basic materials research to the design of specialized parachutes. As a multiprogram national laboratory, Sandia has much to offer both industrial and government customers in pursuing space nuclear technologies. A brief summary of Sandia's technical capabilities, test facilities, and example programs that relate to military and civilian objectives in space is presented.
Sandia National Laboratories is actively involved in testing coated particle nuclear fuels for the Space Nuclear Thermal Propulsion (SNTP) program managed by Phillips Laboratory. The testing program integrates the results of numerous in-pile and out-of-pile tests with modeling efforts to qualify fuel and fuel elements for the SNTP program. This paper briefly describes the capabilities of the Annular Core Research Reactor (in which the experiments are performed), the major in-pile tests, and the models used to determine the performance characteristics of the fuel and fuel elements. 6 refs.
The US Department of Energy's Slant Hole Completion Test Well, SHCT-1, was drilled in 1990 into gas-bearing, lenticular and blanket-shaped sandstones of the Mesaverde Formation, northwestern Colorado. The reservoirs are over-pressured, with sub-microdarcy, in situ, matrix-rock permeabilities. However, a set of sub-parallel natural fractures increases the whole-reservoir permeabilities, measured by well tests, to several tens of microdarcies. The slant hole azimuth was therefore oriented to cut across the dominant fracture strike, in order to access the natural-fracture permeability and increase drainage into the wellbore.
Advection-dominated flows occur widely in the transport of groundwater contaminants, the movements of fluids in enhanced oil recovery projects, and many other contexts. In numerical models of such flows, adaptive local grid refinement is a conceptually attractive approach for resolving the sharp fronts or layers that tend to characterize the solutions. However, this approach can be difficult to implement in practice. A domain decomposition method developed by Bramble, Ewing, Pasciak, and Schatz, known as the BEPS method, overcomes many of the difficulties. We demonstrate the applicability of the iterative BEPS ideas to finite-element collocation on trial spaces of piecewise Hermite bicubics. The resulting scheme allows one to refine selected parts of a spatial grid without destroying algebraic efficiencies associated with the original coarse grid. We apply the method to two-dimensional time-dependent advection-diffusion problems.
Three methods of evaluating accelerated battery test data are described. Criteria for each method are used to determine the minimum test matrix required for accurate predictions. Other test methods involving high current discharge and real time techniques are discussed.
Computational mechanics simulation capability via the finite element method is being integrated into the FASTCAST project to allow realistic analyses of investment casting problems. Commercial and in-house software is being coupled to new, solid model based mesh generation capabilities to provide improved access to fluid, thermal and structural simulations. These simulations are being used for the validation of complex gating designs and the study of fundamental problems in casting.
This document presents recent accomplishments in engineering and science at Sandia National Laboratories. Commercial-scale parabolic troughs at the National Solar Thermal Test Facility are used for such applications as heating water, producing steam for industrial processes, and driving absorption air conditioning systems. Breakthroughs in computer-aided design, superconductor technology, radar imaging, soldering technology, and software development are described. Defense programs are exhibited. And microchip engineering applications in test chips, flow sensors, miniature computers, integrated circuits, and microsensors are presented.
Diffraction peaks can occur as unidentifiable peaks in the energy spectrum of an x-ray spectrometric analysis. Recently, there has been increased interest in oriented polycrystalline films and epitaxial films on single crystal substrates for electronic applications. Since these materials diffract x-rays more efficiently than randomly oriented polycrystalline materials, diffraction peaks are being observed more frequently in x-ray fluorescent spectra. In addition, micro x-ray spectrometric analysis utilizes a small, intense, collimated x-ray beam that can yield well defined diffraction peaks. In some cases these diffraction peaks can occur at the same position as elemental peaks. These diffraction peaks, although a possible problem in qualitative and quantitative elemental analysis, can give very useful information about the crystallographic structure and orientation of the material being analyzed. The observed diffraction peaks are dependent on the geometry of the x-ray spectrometer, the degree of collimation and the distribution of wavelengths (energies) originating from the x-ray tube and striking the sample.
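The position of such a diffraction peak follows from the Bragg condition in its energy-dispersive form, E = n·hc/(2d·sin θ), which is why the observed peaks depend on the spectrometer geometry. A quick sketch, using a hypothetical d-spacing and detector angle:

```python
import math

# Energy-dispersive Bragg condition: a diffraction peak appears at the
# energy satisfying n*lambda = 2*d*sin(theta), i.e.
# E = n*h*c / (2*d*sin(theta)).  The d-spacing and angle below are
# illustrative, not values from the paper.
HC_KEV_ANGSTROM = 12.39842          # h*c in keV-Angstrom

def diffraction_peak_kev(d_angstrom, two_theta_deg, order=1):
    theta = math.radians(two_theta_deg / 2.0)
    return order * HC_KEV_ANGSTROM / (2.0 * d_angstrom * math.sin(theta))

E = diffraction_peak_kev(d_angstrom=3.14, two_theta_deg=90.0)
```

Higher-order reflections (n = 2, 3, ...) appear at integer multiples of this energy, which helps distinguish diffraction peaks from elemental fluorescence lines at the same position.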
Geologic materials are often modeled with discrete spheres because the material is not continuous and discrete spherical models simplify the mathematics. Spherical element models have been created using assemblages of spheres with a specified particle size distribution or by assuming the particles are all the same size and making the assemblage a close-packed array of spheres. Both of these approaches yield a considerable amount of material dilatation upon movement. This has proven to be unsatisfactory for sedimentary rock formations that contain bedding planes where shear movement can occur with minimal dilatation of the interface. A new concept referred to as packing angle has been developed to allow the modeler to build arrays of spheres that are the same size but have the rows of spheres offset from each other. The row offset is a function of the packing angle and allows the modeler to control the dilatation as rows of spheres experience relative horizontal motion.
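One plausible parameterization of the packing-angle construction (our reading of the concept, not necessarily the authors' exact formulation) measures the angle from the vertical between touching spheres in adjacent rows, so that 0{degrees} gives a cubic stack and 30{degrees} a close pack:

```python
import math

def sphere_rows(radius, n_rows, n_cols, packing_angle_deg):
    """Generate centers of equal spheres arranged in offset rows.
    The packing angle (here taken as the angle from vertical of the
    line joining touching spheres in adjacent rows -- one plausible
    parameterization) sets the row offset: 0 deg stacks rows directly
    (cubic packing), 30 deg gives a close-packed array."""
    theta = math.radians(packing_angle_deg)
    dx = 2.0 * radius * math.sin(theta)   # horizontal offset per row
    dy = 2.0 * radius * math.cos(theta)   # vertical row spacing
    centers = []
    for i in range(n_rows):
        for j in range(n_cols):
            centers.append((j * 2.0 * radius + i * dx, i * dy))
    return centers

centers = sphere_rows(radius=1.0, n_rows=3, n_cols=4, packing_angle_deg=30.0)
```

A larger packing angle means adjacent rows must ride up less far to slide past one another, which is how the modeler controls dilatation during relative horizontal motion.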
The syntheses and physical properties of {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]X (X=Br and Cl) are summarized. The {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br salt is the highest {Tc} radical-cation based ambient pressure organic superconductor ({Tc}=11.6 K), and the {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Cl salt becomes a superconductor at even higher {Tc} under 0.3 kbar hydrostatic pressure ({Tc}=12.8 K). The similarities and differences between {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br and {kappa}-(ET){sub 2}Cu(NCS){sub 2} ({Tc}=10.4 K) are presented. The X-ray structures at 127 K reveal that the S{hor_ellipsis}S contacts shorten between ET dimers in the former compound while the S{hor_ellipsis}S contacts shorten within dimers in the latter. The difference in their ESR linewidth behavior is also explained in terms of the structural differences. A semiconducting compound, (ET)Cu[N(CN){sub 2}]{sub 2}, isolated during {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Cl synthesis is also reported. The ESR measurements of the {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Cl salt indicate that the phase transition near 40 K is similar to the spin density wave transition in (TMTSF){sub 2}SbF{sub 6}. A new class of organic superconductors, {kappa}-(ET){sub 2}Cu{sub 2}(CN){sub 3} and {kappa}-(ET){sub 2}Cu{sub 2}(CN){sub 3-{delta}}Br{sub {delta}}, is reported with {Tc}`s of 2.8 K (1.5 kbar) and 2.6 K (1 kbar), respectively.
Nuclear weapons system designers and safety analysts are contemplating broader use of probabilistic risk assessment (PRA) techniques. As an aid to their understanding, this document summarizes the development and use of PRA techniques in the nuclear power industry, emphasizing the role of PRA in decision making through case studies.
This document contains implementation details for the Quality Information Management System (QIMS) Pilot Project, which has been released for VAX/VMS systems using the INGRES RDBMS. The INGRES Applications-By-Forms (ABF) software development tool was used to define the modules and screens which comprise the QIMS Pilot application. These specifications together with the QIMS information model and corresponding database definition constitute the QIMS technical specification and implementation description presented herein. The QIMS Pilot Project represents a completed software product which has been released for production use. Further extension projects are planned which will release new versions of QIMS. These versions will offer expanded and enhanced functionality to meet further customer requirements not accommodated by the QIMS Pilot Project.
A large buildup in interface traps has been observed in commercial and radiation-hardened MOS transistors at very long times after irradiation (> 10{sup 6} s). This latent buildup may have important implications for CMOS response in space. 13 refs.
Translations of two pioneering Russian papers on antenna theory are presented. The first paper provides a treatise on finite-length dipole antennas; the second paper addresses infinite-length, impedance-loaded transmitting antennas.
A new approach for solving two-dimensional clustering problems is presented. The method is based on an inhibitory template which is applied to each pair of dots in a data set. Direct clustering of the pair is inhibited (allowed) if another dot is present (absent), respectively, within the area of the template. The performance of the method is thus entirely determined by the shape of the template. Psychophysical experiments have been used to define the template shape for this work, so that the resulting method requires no pattern-dependent adjustment of any parameters. The novel concept of a psychophysically-defined template and the absence of adjustable parameters set this approach apart from previous work. The useful grouping performance of this approach is demonstrated with the successful grouping of a variety of dot patterns selected from the clustering literature.
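The pairwise rule described above can be sketched as follows. For illustration the template is taken to be a circle about the pair's midpoint, a stand-in for the psychophysically defined shape actually used in the paper, and linked pairs are merged with a union-find structure; `link_radius` and `template_radius` are hypothetical parameters:

```python
import math

def cluster_dots(dots, link_radius, template_radius):
    """Pairwise clustering with an inhibitory template (sketch).
    Two dots closer than link_radius are linked UNLESS a third dot
    falls inside the template region around the pair -- here a circle
    of template_radius about the pair's midpoint, standing in for the
    psychophysically defined template shape of the paper."""
    n = len(dots)
    parent = list(range(n))
    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(dots[i], dots[j]) > link_radius:
                continue
            mid = ((dots[i][0] + dots[j][0]) / 2, (dots[i][1] + dots[j][1]) / 2)
            inhibited = any(k not in (i, j) and math.dist(dots[k], mid) < template_radius
                            for k in range(n))
            if not inhibited:          # link the pair only if uninhibited
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

clusters = cluster_dots([(0, 0), (1, 0), (10, 0), (11, 0)],
                        link_radius=2.0, template_radius=0.4)
```

In this toy input the two nearby pairs link and the distant pairs do not, yielding two clusters; in the actual method the template shape alone, fixed by psychophysical experiment, determines the grouping with no per-pattern parameters.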
Sandia National Laboratories (SNL) Environmental Restoration (ER) Program has recently implemented a highly structured cost and schedule control system (CS{sup 2}) required by DOE. It is a complex system which has evolved over a period of a year and a half. During the implementation of this system, problem areas were discovered in cost estimating, allocation of management costs, and integration of the CS{sup 2} system with the Sandia Financial Information System. In addition to problem areas, benefits of the system were found in the areas of schedule adjustment, projecting personnel requirements, budgeting, and responding to audits. Finally, a number of lessons were learned regarding how to successfully implement the system.
Ferroelectric PZT 53:47 thin films were prepared by two different solution deposition methodologies. Both routes utilized carboxylate and alkoxide precursors and acetic acid, which served as both a solvent and a chemical modifier. We have studied the effects of solution preparation conditions on film microstructure and ferroelectric properties, and have used NMR spectroscopy to characterize chemical differences between the two precursor solutions. Films prepared by a sequential precursor addition (SPA) process were characterized by slightly lossy hysteresis loops, with a P{sub r} of 18.7 {mu}C/cm{sup 2} and an E{sub c} of 55.2 kV/cm. Films prepared by an inverted mixing order (IMO) process were characterized by well saturated hysteresis loops, a P{sub r} of 26.2 {mu}C/cm{sup 2} and an E{sub c} of 43.3 kV/cm. While NMR investigations indicated that the chemical environments of both the proton and carbon species were similar for the two processes, differences in the amounts of by-products (esters, and therefore, water) formed were noted. These differences apparently impacted ceramic microstructure. Although both films were characterized by a columnar growth morphology, the SPA derived film displayed a residual pyrochlore layer at the film surface, which did not transform into the stable perovskite phase. The presence of this layer resulted in poor dielectric properties and lossy ferroelectric behavior.
We have developed a video detection algorithm for measuring the residue left on a printed circuit board after a soldering process. Oblique lighting improves the contrast between the residue and the board substrate, but also introduces an illumination gradient. The algorithm uses the Boundary Contour System/Feature Contour System to produce an idealized clean board image by discounting the illuminant, detecting trace boundaries, and filling the trace and substrate regions. The algorithm then combines the original input image and ideal image using mathematical models of the normal and inverse Weber Law to enhance the residue on the traces and substrate. The paper includes results for a clean board and one with residue.
CEPXS/ONELD is a discrete ordinates transport code package that can model the electron-photon cascade from 100 MeV to 1 keV. The CEPXS code generates fully-coupled multigroup-Legendre cross section data. This data is used by the general-purpose discrete ordinates code, ONELD, which is derived from the Los Alamos ONEDANT and ONETRAN codes. Version 1.0 of CEPXS/ONELD was released in 1989 and has been primarily used to analyze the effect of radiation environments on electronics. Version 2.0 is under development and will include user-friendly features such as the automatic selection of group structure, spatial mesh structure, and S{sub N} order.
Changing the focus of a corporate compensation and performance review system from process orientation to data base orientation results in a more integrated and flexible design. Data modeling of the business system provides both systems and human resource professionals insight into the underlying constants of the review process. Descriptions of the business and data modeling processes are followed by a detailed presentation of the data base model. Benefits derived from designing a system based on the model include elimination of hard-coding, better audit capabilities, a consistent approach to exception processing, and flexibility of integrating changes in compensation policy and philosophy.
This paper will address the purpose, scope, and approach of the Department of Energy Tiger Team Assessments. It will use the Tiger Team Assessment experience of Sandia National Laboratories at Albuquerque, New Mexico, as illustration.
One of the common waste streams generated throughout the nuclear weapon complex is ``hardware'' originating from the nuclear weapons program. The activities associated with this hardware at Sandia National Laboratories (SNL) include design and development, environmental testing, reliability and stockpile surveillance testing, and military liaison training. SNL-designed electronic assemblies include radars, arming/fusing/firing systems, power sources, and use-control and safety systems. Waste stream characterization using process knowledge is difficult due to the age of some components and lack of design information oriented towards hazardous constituent identification. Chemical analysis methods such as the Toxicity Characteristic Leaching Procedure (TCLP) are complicated by the inhomogeneous character of these components and the fact that many assemblies have aluminum or stainless steel cases, with the electronics encapsulated in a foam or epoxy matrix. In addition, some components may contain explosives, radioactive materials, toxic substances (PCBs, asbestos), and other regulated or personnel hazards which must be identified prior to handling and disposal. In spite of the above difficulties, we have succeeded in characterizing a limited number of weapon components using a combination of process knowledge and chemical analysis. For these components, we have shown that if the material is regulated as RCRA hazardous waste, it is because the waste exhibits one or more hazardous characteristics, primarily reactivity and/or toxicity (Pb, Cd).
The discrete Fourier transform and power spectral density are often used in analyzing data from analog-to-digital converters. These analyses normally apply a window to the data to alleviate the effects of leakage. This paper describes how windows modify the magnitude of a discrete Fourier transform and the level of a power spectral density computed by Welch's method. For white noise, the magnitude of the discrete Fourier transform at a fixed frequency has a Rayleigh probability distribution. For sine waves with an integer number of cycles and quantization noise, the theoretical values of the amplitude of the discrete Fourier transform and power spectral density are calculated. We show how the signal-to-noise ratio in a single discrete Fourier transform or power spectral density frequency bin is related to the normal time-domain definition of the signal-to-noise ratio. The answer depends on the discrete Fourier transform length, the window type and the function averaged.
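The window's effect on DFT magnitude is easiest to see for a sine with an integer number of cycles (bin-centered, so no leakage): the peak magnitude scales by the window's coherent gain, which is exactly 0.5 for the DFT-periodic Hann window. A minimal sketch:

```python
import numpy as np

# Sketch: window effect on the DFT magnitude of a bin-centered sine
# (integer number of cycles, so there is no leakage).
N, k, A = 1024, 37, 1.0
n = np.arange(N)
x = A * np.sin(2 * np.pi * k * n / N)

rect_mag = np.abs(np.fft.fft(x))[k]        # = N*A/2 with no window
w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)  # DFT-periodic Hann window
hann_mag = np.abs(np.fft.fft(w * x))[k]    # = N*A/4: halved by the window
coherent_gain = w.sum() / N                # = 0.5 for the periodic Hann
```

At the peak bin the Hann-windowed magnitude is exactly half the rectangular-window value (256 vs. 512 here); additional corrections for scalloping loss and equivalent noise bandwidth enter once the sine is not bin-centered or noise power is averaged, as in Welch's method.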
The UNIX LANs in 1500 are experiencing explosive growth. The individual departments are creating LANs to address their particular needs; however, at the same time, shared software tools between the departments are becoming more common. It is anticipated that users will occasionally need access to various department software and/or LAN services, and that support personnel may carry responsibilities which require familiarization with multiple environments. It would be beneficial to users and support personnel if the various department environments share some basic similarities, allowing somewhat transparent access. This will become more important when departments share specific systems, as 1510 and 1550 have proposed with an unclassified UNIX system. Therefore, standards/conventions on the department LANs and the central site systems have to be established to allow for these features. It should be noted that the goal of the UEC is to set standards/conventions which affect the users and provide some basic structure for software installation and maintenance; it is not the intent that all 1500 LANs be made identical at an operating system and/or hardware level. The specific areas of concern include: (1) definition of a non-OS file structure; (2) definition of an interface for remote mounted file systems; (3) definition of a user interface for public files; (4) definition of a basic user level environment; and (5) definition of documentation requirements for public files (shared software). Each of these areas is addressed in this paper.
This document contains implementation details for the Sandia Management Restructure Study Team (MRST) Prototype Information System, which resides on a Sun SPARC II workstation employing the INGRES RDBMS. The INGRES/Windows 4GL application editor was used to define the components of the two user applications which comprise the system. These specifications together with the MRST information model and corresponding database definition constitute the MRST Prototype Information System technical specification and implementation description presented herein. The MRST Prototype Information System represents a completed software product which has been presented to the Management Restructure Study Team to support the management restructuring processes at Sandia National Laboratories.
Finite element analyses of oil-filled caverns were performed to investigate the effects of cavern depth on surface subsidence and storage loss, primary performance criteria for SPR caverns. The finite element model used for this study was axisymmetric, approximating an infinite array of caverns spaced at 750 ft. The stratigraphy and cavern size were held constant while the cavern depth was varied between 1500 ft and 3000 ft in 500 ft increments. Thirty-year simulations, the design life of the typical SPR cavern, were performed with boundary conditions modeling the oil pressure head applied to the cavern lining. A depth-dependent temperature gradient of 0.012{degrees}F/ft was also applied to the model. The calculations were performed using ABAQUS, a general-purpose finite element analysis code. The user-defined subroutine option in ABAQUS was used to enter an elastic secondary creep model which includes temperature dependence. The calculations demonstrated that surface subsidence and storage loss rates increase with increasing depth. At greater depths the difference between the lithostatic stress and the oil pressure is greater. Thus, the effective stresses are greater, resulting in higher creep rates. Furthermore, at greater depths the cavern temperatures are higher, which also produces higher creep rates. Together, these factors result in faster closure of the cavern. At the end of the 30-year simulations, a 1500 ft-deep cavern exhibited 4 percent storage loss and 4 ft of subsidence while a 3000 ft-deep cavern exhibited 33 percent storage loss and 44 ft of subsidence. The calculations also demonstrated that surface subsidence is directly related to the amount of storage loss. Deeper caverns exhibit more subsidence because they exhibit more storage loss. However, for a given amount of storage loss, nearly the same magnitude of surface subsidence was exhibited, independent of cavern depth.
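The depth trend has a simple qualitative explanation: both the driving stress (lithostatic minus oil head) and the temperature grow with depth, and both raise the secondary creep rate. The sketch below shows the scaling; the power-law exponent, activation temperature, and stress gradients are illustrative placeholders, not the report's material model.

```python
import math

def creep_rate(depth_ft):
    """Relative secondary creep rate vs. depth for a power-law,
    temperature-activated creep model. All parameters are illustrative
    placeholders, not SPR site values."""
    n = 4.9                # stress exponent (illustrative)
    q_over_r = 6039.0      # activation temperature Q/R in kelvin (illustrative)
    # driving stress: lithostatic (~1 psi/ft) minus oil head (~0.37 psi/ft)
    dsigma = (1.0 - 0.37) * depth_ft
    # temperature: 80 F at surface plus the 0.012 F/ft gradient, in kelvin
    t_kelvin = (80.0 + 0.012 * depth_ft + 459.67) * 5.0 / 9.0
    return dsigma ** n * math.exp(-q_over_r / t_kelvin)

shallow = creep_rate(1500.0)
deep = creep_rate(3000.0)
ratio = deep / shallow  # stress and temperature both push the rate up with depth
```

With these placeholder numbers the deeper cavern creeps tens of times faster, consistent in direction with the much larger storage loss computed for the 3000 ft cavern.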
This economic analysis compares human and robotic TRUPACT unloading at the Waste Isolation Pilot Plant. Robots speed up the unloading process, reduce human labor requirements, and reduce human exposure to radiation. The analysis shows that benefit/cost ratios are greater than one for most cases using government economic parameters. This suggests that robots are an attractive option for the TRUPACT application, from a government perspective. Rates of return on capital investment are below 15% for most cases using private economic parameters. Thus, robots are not an attractive option for this application, from a private enterprise perspective.
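The government/private split in the conclusion comes directly from the discount rate used. A minimal benefit/cost sketch with hypothetical cash flows (not the report's data) shows how the same project can clear a low government rate and fail a 15% private hurdle rate:

```python
def npv(cash_flows, rate):
    """Present value of a yearly cash-flow list at a given discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical illustration only: a robot system costing 2.0 (capital, year 0)
# that saves 0.35 per year in labor and exposure costs over a 10-year life.
years = 10
capital = 2.0
annual_saving = 0.35

def benefit_cost_ratio(rate):
    benefits = npv([0.0] + [annual_saving] * years, rate)
    return benefits / capital

bcr_gov = benefit_cost_ratio(0.07)    # illustrative government discount rate
bcr_priv = benefit_cost_ratio(0.15)   # illustrative private hurdle rate
```

At 7% the benefit/cost ratio exceeds one; at 15% the discounted savings no longer cover the capital cost, mirroring the report's two perspectives.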
This paper summarizes the results of aging, condition monitoring, and accident testing of Class 1E cables used in nuclear power generating stations. Three sets of cables were aged for up to 9 months under simultaneous thermal ({approximately}100{degrees}C) and radiation ({approximately}0.10 kGy/hr) conditions. After the aging, the cables were exposed to a simulated accident consisting of high dose rate irradiation ({approximately}6 kGy/hr) followed by a high temperature steam (up to 400{degrees}C) exposure. A fourth, unaged set of cables was also exposed to the accident conditions. The cables that were aged for 3 months and then accident tested were subsequently exposed to a high temperature steam fragility test (up to 400{degrees}C), while the cables that were aged for 6 months and then accident tested were subsequently exposed to a 1000-hour submergence test in a chemical solution. The results of these tests provide no reason to believe that many popular nuclear power plant cable products cannot inherently be qualified for 60 years of operation under the conditions simulated by this testing. Mechanical measurements (primarily elongation, modulus, and density) are more effective than electrical measurements for monitoring age-related degradation. In the high temperature steam test, ethylene propylene rubber (EPR) cable materials generally survived to higher temperatures than crosslinked polyolefin (XLPO) cable materials. In dielectric testing after the submergence testing, the XLPO materials performed better than the EPR materials.
This paper describes several different types of constraints that can be placed on multilayered feedforward neural networks which are used for automatic target recognition (ATR). We show how unconstrained networks are likely to give poor generalization on the ATR problem. We also show how the ATR problem requires a special type of classifier called a one-class classifier. The network constraints come in two forms: architectural constraints and learning constraints. Some of the constraints are used to improve generalization, while others are incorporated so that the network will be forced to perform one-class classification. 14 refs
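The one-class idea can be sketched in a few lines: accept an input only if it lies close to known target examples, and reject everything else, however unlike the training clutter it may be. The features, prototypes, and threshold below are toy values, not ATR data:

```python
import math

# Training "target" feature vectors and an acceptance radius (toy values).
targets = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
threshold = 0.5

def classify(x):
    """One-class decision: target if near any training example, else reject."""
    d = min(math.dist(x, t) for t in targets)
    return "target" if d <= threshold else "reject"
```

An unconstrained two-class network trained on targets versus one clutter type would still assign *any* far-away input to one of its two classes; the distance test above rejects it instead, which is the behavior the ATR problem demands.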
Foams, like most highly structured fluids, exhibit rheological behavior that is both fascinating and complex. We have developed microrheological models for uniaxial extension and simple shearing flow of a 'dry', perfectly ordered, three-dimensional foam composed of thin films with uniform surface tension T and negligible liquid content. We neglect viscous flow in the thin films and examine large elastic-plastic deformations of the foam. The primitive undeformed foam structure is composed of regular space-filling tetrakaidecahedra, which have six square and eight hexagonal surfaces. This structure possesses the film-network topology that is necessary to satisfy equilibrium: three films meet at each edge, which corresponds to a Plateau border, and four edges meet at each vertex. However, to minimize surface energy, the films must meet at equal angles of 120{degrees} and the edges must join at equal tetrahedral angles of cos{sup {minus}1}({minus}1/3) {approx} 109.47{degrees}. No film in an equilibrium foam structure can be a planar polygon because no planar polygon has all angles equal to the tetrahedral angle. In the equilibrium foam structure known as Kelvin's minimal tetrakaidecahedron, the 'squares' are planar quadrilateral surfaces with curved edges and the 'hexagons' are non-planar saddle surfaces with zero mean curvature. As the foam structure evolves with the macroscopic flow, each film maintains zero mean curvature because the pressure is the same in every bubble. In general, the shape of each thin film, defined by z = h(x,y), satisfies 1/R{sub 1} + 1/R{sub 2} = {del}{center dot}({del}h/(1 + {vert bar}{del}h{vert bar}{sup 2}){sup 1/2}) = 0, where R{sub 1}{sup {minus}1} and R{sub 2}{sup {minus}1} are the principal curvatures. The appropriate boundary conditions correspond to three films meeting at equal angles. For the homogeneous deformations under consideration, the center of each film moves affinely with the flow. 5 refs
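The two equilibrium angles quoted above can be checked numerically:

```python
import math

# Three films sharing an edge split 360 degrees equally.
film_angle = 360.0 / 3.0                           # 120 degrees

# Four edges meeting at a vertex join at the tetrahedral angle.
edge_angle = math.degrees(math.acos(-1.0 / 3.0))   # about 109.47 degrees
```

Since 109.47{degrees} is not an angle any flat space-filling polygon network can realize everywhere, the equilibrium films must be curved, as the abstract states.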
Renewable energy technologies convert naturally occurring phenomena into useful energy forms. These technologies use resources that generally are not depleted, such as the direct energy (heat and light) from the sun and the indirect results of its impact on the earth (wind, falling water, heating effects, plant growth), gravitational forces (the tides), and the heat of the Earth's core (geothermal), as the sources from which they produce useful energy. These very large stores of natural energy represent a resource potential that is incredibly massive -- dwarfing that of equivalent fossil energy resources. The magnitude of these resources is, therefore, not a key constraint on energy production. However, they are generally diffuse and not fully accessible, some are intermittent, and all have distinct regional and local variability. It is these aspects of their character that give rise to difficult, but generally solvable, technical, institutional, and economic challenges inherent in development and use of renewable energy resources. This report discusses the technologies and their associated energy source.
Theoretical models have been formulated describing the dynamic swelling and contraction behavior of polyelectrolyte gels. This paper presents a method-of-weighted-residuals approach to solving the governing system of equations by finite element analysis. The modulation of the imbibition of solvent by a spherical gel is studied.
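The gel equations themselves are not reproduced in this abstract, so as a hedge the sketch below shows the method-of-weighted-residuals (Galerkin) machinery on a model one-dimensional problem, -u'' = 1 with u(0) = u(1) = 0, using linear finite elements:

```python
def galerkin_fem(n_elems):
    """Galerkin weighted residuals with linear elements for -u'' = 1,
    u(0) = u(1) = 0. Returns the interior nodal values."""
    h = 1.0 / n_elems
    n = n_elems - 1                  # number of interior nodes
    # Tridiagonal stiffness K_ij = integral(phi_i' phi_j') and load F_i = h.
    a = [-1.0 / h] * n               # sub/super diagonal (symmetric)
    b = [2.0 / h] * n                # main diagonal
    F = [h] * n
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * a[i - 1]
        F[i] -= m * F[i - 1]
    u = [0.0] * n
    u[-1] = F[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (F[i] - a[i] * u[i + 1]) / b[i]
    return u

u = galerkin_fem(8)
mid = u[3]   # node at x = 0.5; exact solution u(x) = x(1 - x)/2 gives 0.125
```

The weighting functions here are the same hat functions as the trial basis; for the gel problem the identical assembly/solve structure applies to the (nonlinear, time-dependent) governing equations.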
There is considerable interest in the use of chemically vapor deposited (CVD) polycrystalline diamond films in advanced materials technology. However, most of the potential applications of CVD diamond films require well-controlled properties which depend on the film structure, and in turn, on the conditions under which the films are synthesized. The structure of the vapor-deposited diamond films is frequently characterized by Raman spectroscopy. Despite extensive research, much work still needs to be completed to understand the various features of the Raman spectra and to understand how the processing variables affect the spectral features. This paper examines the Raman spectra of diamond films prepared by a hot-filament-assisted CVD process as a function of substrate processing and deposition parameters.
Many applications of national importance require the design, analysis, and simulation of complex electromagnetic phenomena. These applications range from the simulation of synthetic aperture radar to the design and analysis of low-observable platforms, antenna design, and automatic target recognition. In general, the modeling of complex electromagnetic phenomena requires significant amounts of computer time and capacity on conventional vector supercomputers but takes far less on massively parallel computers. Sandia National Laboratories is currently developing massively parallel methods and algorithms for the characterization of complex electromagnetic phenomena. The goal of ongoing research at Sandia is to understand the characteristics, limitations, and trade-offs associated with complex electromagnetic systems including: modeling the seeker response to complex targets in clutter, calculating the radiation and scattering from conformal communication and radar system antennas, and the analysis and design of high speed circuitry. By understanding the theoretical underpinnings of complex electromagnetic systems it is possible to achieve realistic models of system performance. The first objective is the development of computationally practical, high-fidelity system models targeted for massively parallel computers. Research to achieve this objective is conducted in such areas as mathematical algorithms, problem decomposition, inter-processor communication schemes, and load balancing. The work in mathematical algorithms includes both the development of new methods and the parallel implementation of existing techniques. The second objective is the application of these high-fidelity models to facilitate a better understanding of systems-level performance for many C{sup 3}I platforms. This presentation describes applications of much current interest and novel solution techniques for these applications utilizing massively parallel processing techniques.
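The problem-decomposition and communication issues mentioned above can be illustrated with a minimal single-process stand-in for message passing: a 1-D Jacobi sweep split across contiguous subdomains, each reading a one-cell halo from its neighbors. The sweep and partitioning are illustrative, not Sandia's actual solvers.

```python
def jacobi_step(u):
    """One serial Jacobi sweep: each interior point becomes the average
    of its two neighbors; the endpoints are fixed boundary values."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1])
    return new

def jacobi_step_decomposed(u, ranks):
    """The same sweep split across `ranks` contiguous subdomains. Each
    subdomain reads one halo cell from each neighbor (the values u[lo-1]
    and u[hi]), standing in for the messages a parallel code would exchange."""
    n = len(u)
    bounds = [r * n // ranks for r in range(ranks + 1)]
    new = u[:]
    for r in range(ranks):
        lo, hi = bounds[r], bounds[r + 1]
        for i in range(max(lo, 1), min(hi, n - 1)):
            new[i] = 0.5 * (u[i - 1] + u[i + 1])  # neighbors may be halo cells
    return new

u0 = [float(i % 7) for i in range(40)]
serial = jacobi_step(u0)
parallel = jacobi_step_decomposed(u0, ranks=4)   # bitwise-identical result
```

Load balancing in this picture amounts to choosing the `bounds` so each rank gets comparable work; the correctness requirement is only that halos are exchanged before each sweep.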
A neighboring extremal control problem is formulated for a hypersonic glider to execute a maximum-terminal-velocity descent to a stationary target. The resulting two-part feedback control scheme first solves a nonlinear algebraic problem to generate a nominal trajectory to the target altitude. Second, a neighboring optimal path computation about the nominal provides the lift and side-force perturbations necessary to achieve the target downrange and crossrange. On-line feedback simulations of the proposed scheme and a form of proportional navigation are compared with an off-line parameter optimization method. The neighboring optimal terminal velocity compares very well with the parameter optimization solution and is far superior to proportional navigation. 8 refs.
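The proportional-navigation baseline used for comparison can be sketched in planar kinematics: the turn rate is commanded at a navigation gain times the line-of-sight rate, which steers out the initial heading error against a stationary target. The geometry, speeds, and gain below are illustrative, not the paper's scenario.

```python
import math

def pn_flyout(n_gain=3.0, speed=100.0, dt=0.01):
    """Proportional navigation against a stationary target: heading rate is
    n_gain times the line-of-sight rate. Returns the miss distance."""
    px, py, psi = 0.0, 0.0, 0.3          # pursuer position, heading (rad)
    tx, ty = 2000.0, 0.0                 # stationary target
    min_range = float("inf")
    for _ in range(100000):
        rx, ry = tx - px, ty - py
        r = math.hypot(rx, ry)
        min_range = min(min_range, r)
        if r < 1.0 or r > min_range + 50.0:   # hit, or passed the target
            break
        vx = speed * math.cos(psi)
        vy = speed * math.sin(psi)
        los_rate = (ry * vx - rx * vy) / (r * r)  # d/dt of atan2(ry, rx)
        psi += n_gain * los_rate * dt             # PN guidance law
        px += vx * dt
        py += vy * dt
    return min_range

miss = pn_flyout()   # small miss: PN nulls the LOS rate before intercept
```

PN achieves intercept here but makes no attempt to maximize terminal velocity, which is why the neighboring optimal scheme outperforms it on that criterion.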
This paper describes the design of an inverse adaptive filter, using the Least-Mean-Square (LMS) algorithm, to correct data taken with an analog filter. The gradient estimate used in the LMS algorithm is based upon the instantaneous squared error, e{sup 2}(n). Minimizing the mean-squared error does not provide an optimal solution in this specific case. Therefore, another performance criterion, error power, was developed to calculate the optimal inverse model. Despite using a different performance criterion, the inverse filter converges rapidly and gives a small mean-squared error. Computer simulations of this filter are also shown in this paper.
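The LMS inverse-modeling setup can be sketched directly. Here the "analog filter" is a hypothetical two-tap stand-in, y[n] = x[n] + 0.5 x[n-1], and an adaptive FIR filter learns to undo it by minimizing the instantaneous squared error against the clean input:

```python
import random

random.seed(1)

M = 8              # inverse (equalizer) filter length
w = [0.0] * M      # adaptive weights
mu = 0.02          # LMS step size
x_prev = 0.0
ybuf = [0.0] * M   # most recent distorted samples y[n], y[n-1], ...

for _ in range(20000):
    x = random.uniform(-1.0, 1.0)      # white training input
    y = x + 0.5 * x_prev               # hypothetical analog-filter distortion
    x_prev = x
    ybuf = [y] + ybuf[:-1]
    est = sum(wk * yk for wk, yk in zip(w, ybuf))
    e = x - est                        # error against the clean sample
    for k in range(M):                 # LMS update from instantaneous e^2(n)
        w[k] += 2.0 * mu * e * ybuf[k]

# The exact inverse of (1 + 0.5 z^-1) is 1 - 0.5 z^-1 + 0.25 z^-2 - ...;
# the learned taps should approximate its leading terms.
```

This is the standard mean-squared-error LMS formulation; the paper's alternative error-power criterion addresses cases where this criterion alone is not optimal.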
Intense light ion beams are being developed to drive inertial confinement fusion (ICF) targets. Recently, intense proton beams have been used to drive two different types of targets in experiments on the Particle Beam Fusion Accelerator. The experiments focused separately on ion deposition physics and on implosion hydrodynamics. In the ion deposition physics experiments, a 3--4 TW/cm{sup 2} proton beam heated a low-density foam contained within a gold cylinder with a specific power deposition exceeding 100 TW/gm for investigating ion deposition, foam heating, and generation of x-rays. The significant results from these experiments included the following: the foam provided an optically thin radiating region, the uniformity of radiation across the foam was good, and the foam tamped the gold case, holding it in its original position for the 15 ns beam pulse width.
This document describes the Temperature Monitoring System for the RHEPP project at Sandia National Laboratories. The system is designed to operate in the presence of severe repetitive high voltage and electromagnetic fields while providing real time thermal data on component behavior. The thermal data is used in the design and evaluation of the major RHEPP components such as the magnetically switched pulse compressor and the linear induction voltage adder. Particular attention is given to the integration of commercially available hardware and software components with a custom written control program. While this document is intended to be a reference guide, it may also serve as a template for similar applications. 3 refs.
This bibliography contains 34 references concerning utilizing benchmarking in the management of businesses. Books and articles are both cited. Methods for gathering and utilizing information are emphasized. (GHH)
Measurements have recently been conducted and computer models constructed to determine the coupling of lightning energy into munition storage bunkers as detailed in companion conference papers. In this paper transfer functions from the incident current to the measured parameters are used to construct simple circuit models that explain much of the important observed quantitative and qualitative information and differences in transfer functions are used to identify nonlinearities in the response data. In particular, V{sub oc} -- the open-circuit voltage generated between metal objects in the structure, I{sub sc} -- the short-circuit current generated in a wire connecting metal objects in the structure, and a typical current measurement in the buried counterpoise system behave in a relatively simple manner explainable by one or several circuit elements. The circuit elements inferred from measured data are comparable in magnitude with those developed from simple analytical models for inductance and resistance. These analytical models are more useful in predicting bounding electromagnetic environment values rather than providing exact time domain waveforms. 2 refs.
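The inference step from transfer functions to circuit elements can be illustrated for a series R-L element, Z(f) = R + j*2*pi*f*L: the resistance comes from the real part and the inductance from the slope of the imaginary part with frequency. The impedance "measurements" below are hypothetical values, not data from the bunker tests:

```python
import math

# Hypothetical impedance samples at two lightning-band frequencies,
# generated from a series R-L element (values are illustrative only).
R_true, L_true = 2.0, 5e-6            # ohms, henries
f1, f2 = 1.0e5, 1.0e6                 # Hz
Z1 = complex(R_true, 2.0 * math.pi * f1 * L_true)
Z2 = complex(R_true, 2.0 * math.pi * f2 * L_true)

# Two-point fit: R from the real parts, L from the reactance slope.
R_fit = (Z1.real + Z2.real) / 2.0
L_fit = (Z2.imag - Z1.imag) / (2.0 * math.pi * (f2 - f1))
```

In practice the fitted elements would be compared against the analytical inductance and resistance estimates the paper describes; departures of measured data from any single R-L fit are one way the nonlinearities show up.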
The restoration of environmentally contaminated sites at DOE facilities has become a major effort in the past several years. The variety of wastes involved and the differing characteristics have driven the development of new restoration and monitoring technologies. One of the new remediation technologies is being demonstrated at the Savannah River Site near Aiken, South Carolina. In conjunction with this demonstration, a new technology for site characterization and monitoring of the remediation process has been applied by Sandia National Laboratories.
We used surface-profile data taken with a noncontact laser profilometer to determine the aperture distribution within a natural fracture and found the surfaces and apertures to be isotropic. The aperture distribution could be described equally well by either a normal or a lognormal distribution, although we had to adjust the standard deviation to 'fit' the data. The aperture spatial correlation varied over different areas of the fracture, with some areas being much more correlated than others. The fracture surfaces did not have a single fractal dimension over all length scales, which implied that they were not self-similar. We approximated the saturated flow field in the fracture by solving a finite-difference discretization of the fluid-flow continuity equation in two dimensions. We then calculated tracer breakthrough curves using a particle-tracking method. Comparing the breakthrough curves obtained using both coarse- and fine-resolution aperture data (0.5- and 0.05-mm spacing between points, respectively) over the same subset of the fracture domain suggests that the spacing between the aperture data points must be less than the correlation length to obtain accurate predictions of fluid flow and tracer transport. In the future, we will perform tracer experiments and numerical modeling studies to determine exactly how fine the aperture data resolution must be (relative to the correlation length) to obtain accurate predictions.
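The finite-difference flow step can be sketched with a local-cubic-law model: each cell's transmissivity scales as aperture cubed, inter-cell transmissivities are harmonic means, and steady pressures are iterated to convergence. The grid size, random apertures, and boundary pressures below are illustrative, not the paper's measured data.

```python
import random

random.seed(2)

NX, NY = 12, 8
# Variable aperture field b and local-cubic-law transmissivity T ~ b^3 / 12.
b = [[1.0 + 0.5 * random.random() for _ in range(NX)] for _ in range(NY)]
T = [[bb ** 3 / 12.0 for bb in row] for row in b]

def harm(a, c):
    """Harmonic mean: the effective transmissivity between two cells."""
    return 2.0 * a * c / (a + c)

TE = [[harm(T[j][i], T[j][i + 1]) for i in range(NX - 1)] for j in range(NY)]
TS = [[harm(T[j][i], T[j + 1][i]) for i in range(NX)] for j in range(NY - 1)]

# Fixed pressures on the left/right edges, no-flow top and bottom.
P_in, P_out = 1.0, 0.0
p = [[P_in + (P_out - P_in) * i / (NX - 1) for i in range(NX)]
     for _ in range(NY)]

for _ in range(4000):        # Jacobi iteration of the continuity equation
    new = [row[:] for row in p]
    for j in range(NY):
        for i in range(1, NX - 1):
            tw, te = TE[j][i - 1], TE[j][i]
            tn = TS[j - 1][i] if j > 0 else 0.0
            ts = TS[j][i] if j < NY - 1 else 0.0
            num = tw * p[j][i - 1] + te * p[j][i + 1]
            if j > 0:
                num += tn * p[j - 1][i]
            if j < NY - 1:
                num += ts * p[j + 1][i]
            new[j][i] = num / (tw + te + tn + ts)
    p = new

# At steady state, mass conservation makes inflow equal outflow.
inflow = sum(TE[j][0] * (p[j][0] - p[j][1]) for j in range(NY))
outflow = sum(TE[j][NX - 2] * (p[j][NX - 2] - p[j][NX - 1]) for j in range(NY))
```

A particle-tracking step would then advect tracer particles through the velocity field derived from these pressures; the resolution question in the abstract is about how finely `b` must be sampled for such predictions to be accurate.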
Sandia National Laboratories (SNL) designs, tests and operates a variety of accelerators that generate large amounts of high energy Bremsstrahlung radiation over an extended time. Typically groups of similar accelerators are housed in a large building that is inaccessible to the general public. To facilitate independent operation of each accelerator, test cells are constructed around each accelerator to shield it from the radiation workers occupying surrounding test cells and work areas. These test cells, about 9 ft. high, are constructed of high density concrete block walls that provide direct radiation shielding. Above the target areas (radiation sources), lead or steel plates are used to minimize skyshine radiation. Space, accessibility and cost considerations impose certain restrictions on the design of these test cells. The SNL Health Physics division is tasked to evaluate the adequacy of each test cell design and compare resultant dose rates with the design criteria stated in DOE Order 5480.11. In response, SNL Health Physics has undertaken an intensive effort to assess existing radiation shielding codes and compare their predictions against measured dose rates. This paper provides a summary of the effort underway and its results.
The last decade has offered many challenges to the welding metallurgist: new types of materials requiring welded construction, the need to describe the microstructural evolution of traditional materials, and the need to explain non-equilibrium microstructures arising from rapid-thermal-cycle weld processing. In this paper, the author will briefly review several advancements made in these areas, often citing specific examples where new insights were required to describe new observations, and will show how traditional physical metallurgy methods can be used to describe transformation phenomena in advanced, non-traditional materials. The paper will close with comments and suggestions on what is needed for continued advancement in the field.
Phase II of the Long Valley Exploratory Well was completed to a depth of 7588 feet in November 1991. The drilling comprised two sub-phases: (1) drilling 17-1/2 inch hole from the Phase I casing shoe at 2558 feet to a depth of 7130 feet, plugging back to 6826 feet, and setting 13-3/8 inch casing at 6825 feet, all during August--September 1991; and (2) returning in November to drill a 3.85-inch core hole deviated out of the previous wellbore at 6868 feet and extending to 7588 feet. Ultimate depth of the well is planned to be 20,000 feet, or at a bottomhole temperature of 500{degrees}C, whichever comes first. Total cost of this drilling phase was approximately $2.3 million, and funding was shared about equally between the California Energy Commission and the Department of Energy. Phase II scientific work will commence in July 1992 and will be supported by DOE Office of Basic Energy Sciences, DOE Geothermal Division, and other funding sources.