Through the Dish-Stirling Joint Venture Program (JVP) sponsored by the US Department of Energy (DOE), Cummins Power Generation, Inc. (CPG) and Sandia National Laboratories (SNL) have entered into a joint venture to develop and commercialize economically competitive dish-Stirling systems for remote power applications. The $14 million JVP is being conducted in three phases over a 3 1/2-year period in accordance with the Cummins Total Quality System (TQS) for new product development. The JVP is funded equally by CPG, including its industrial partners, and the DOE. In June 1992, a "concept validation" (prototype) 5-kWe dish-Stirling system became operational at the CPG test site in Abilene, TX, and on January 1, 1993, the program advanced to Phase 2. On the basis of the performance of the 5-kWe system, a decision was made to increase the rated system output to 7.5 kWe. The CPG system uses advanced components that have the potential for low cost and reliable operation but that also carry technical risks. In this paper, the status of the advanced components and results from system integration testing are presented and discussed. Performance results from system testing of the 5-kWe prototype, along with Phase 2 goals for the 7.5-kWe system, are also discussed.
This paper presents data compiled by the Photovoltaic Design Assistance Center at Sandia National Laboratories from more than eighty field tests performed at over thirty-five photovoltaic systems in the United States during the last ten years. The recorded performance histories, failure rates, and degradation of post-Block IV modules and balance-of-system (BOS) components are described in detail.
Drilling production-size holes for geothermal exploration puts a large expense at the beginning of a project and thus requires a long period of debt service before those costs can be recaptured from power sales. If a reservoir can be adequately defined and proved by drilling smaller, cheaper slim holes, production well drilling can be delayed until the power plant is under construction, saving years of interest payments. In the broadest terms, this project's objective is to demonstrate that a geothermal reservoir can be identified and evaluated with data collected in slim holes. We have assembled a coordinated working group, including personnel from Sandia, Lawrence Berkeley Laboratory, the University of Utah Research Institute, the US Geological Survey, independent consultants, and geothermal operators, to focus on the development of this project. This group is involved to a greater or lesser extent in all decisions affecting the direction of the research. Specific tasks being pursued include: (1) correlation of fluid flow and injection tests between slim holes and production-size wells; (2) transfer of slim-hole exploration drilling and reservoir assessment to industry so that slim-hole drilling becomes an accepted method for geothermal exploration; (3) development and validation of a coupled wellbore-reservoir flow simulator that can be used for reservoir evaluation from slim-hole flow data; (4) collection of applicable data from commercial wells in existing geothermal fields; and (5) drilling of at least one new slim hole and its use to evaluate a geothermal reservoir.
C++ is commonly described as an object-oriented programming language because of its strong support for classes with multiple inheritance and polymorphism. However, for a growing community of numerical programmers, an equally important feature of C++ is its support of operator overloading on abstract data types. The authors choose to call the resulting style of programming object-oriented numerics. They believe that much of object-oriented numerics is orthogonal to conventional object-oriented programming. As a case study, they discuss two strong-shock physics codes written in C++ that they are currently developing. These codes use both polymorphic classes (typical of traditional object-oriented programming) and abstract data types with overloaded operators (typical of object-oriented numerics). They believe that C++ translators can generate efficient code for many numerical objects. However, for the important case of smart arrays (which are used to represent matrices and the fields found in partial differential equations), fundamental difficulties remain. The authors discuss the two most important of these, namely the aliasing ambiguity and the proliferation of temporaries, and present some possible solutions.
At Sandia National Laboratories, the Engineering Sciences Center has made a commitment to integrate the Application Visualization System (AVS) into our computing environment as the primary tool for scientific visualization. AVS will be used on an everyday basis by a broad spectrum of users, ranging from the occasional computer user to AVS module developers. Additionally, AVS will be used to visualize structured-grid, unstructured-grid, gridless, 1D, 2D, 3D, steady-state, transient, computational, and experimental data. The following is one user's perspective on how AVS meets these needs. Several examples of how AVS is currently being utilized are given, along with some future directions.
Sandia National Laboratories and the Allied Signal-Kansas City Plant (AS-KCP) are engaged in a program called the Integrated Manufacturing and Design Initiative, or IMDI. The focus of IMDI is "to develop and implement concurrent engineering processes for the realization of weapon components." An explicit part of each of the activities within IMDI is an increased concern for environmental impacts associated with design, and a desire to minimize those impacts through the implementation of Environmentally Conscious Manufacturing, or ECM. These same concerns and desires are shared within the Department of Energy's Manufacturing Complex, and are gaining strong support throughout US industrial sectors as well. Therefore, the development and application of an environmental life cycle analysis framework, the thrust of this specific effort, is most consistent not only with the overall objectives of IMDI, but with those of DOE and private industry.
GREPOS is a mesh utility program that repositions or modifies the configuration of a two-dimensional or three-dimensional mesh. GREPOS can be used to change the orientation and size of a two-dimensional or three-dimensional mesh; change the material block, nodeset, and sideset IDs; or "explode" the mesh to facilitate viewing of the various parts of the model. GREPOS also updates the EXODUS quality assurance and information records to help track the codes and files used to generate the mesh. GREPOS reads and writes two-dimensional and three-dimensional mesh databases in the GENESIS database format; therefore, it is compatible with the preprocessing, postprocessing, and analysis codes in the Sandia National Laboratories Engineering Analysis Code Access System (SEACAS).
This paper discusses a nonideal solution model of the metallic phases of reactor core debris. The metal phase model is based on the Kohler equation for a 37-component system. The binary subsystems are assumed to have subregular interactions. The model is parameterized by comparison to available data and by estimating subregular interactions using the methods developed by Miedema et al. The model is shown to predict phase separation in the metallic phase of core debris. The model also predicts reduced chemical activities of zirconium and tellurium in the metal phase. A model of the oxide phase of core debris is described briefly. The model treats the oxide phase as an associated solution. The chemical activities of solution components are determined by the existence and interactions of species formed from the components.
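The Kohler scheme described above assembles a multicomponent excess Gibbs energy from binary subregular interactions. A minimal sketch follows; the components and interaction parameters are purely illustrative stand-ins, not the paper's 37-component parameterization:

```python
import itertools

def kohler_excess_gibbs(x, A):
    """Excess Gibbs energy (J/mol) of a multicomponent solution via the
    Kohler combination of binary subregular (Margules) models.
    x: dict component -> mole fraction (sums to 1)
    A: dict (i, j) -> subregular parameter A_ij (J/mol); the binary
       excess term is X_i*X_j*(A[i,j]*X_i + A[j,i]*X_j), where X_i, X_j
       are the mole fractions renormalized within the binary."""
    g_ex = 0.0
    for i, j in itertools.combinations(sorted(x), 2):
        s = x[i] + x[j]
        if s == 0.0:
            continue
        Xi, Xj = x[i] / s, x[j] / s        # renormalized binary fractions
        g_binary = Xi * Xj * (A[(i, j)] * Xi + A[(j, i)] * Xj)
        g_ex += s ** 2 * g_binary          # Kohler weighting (x_i + x_j)^2
    return g_ex

# Three-component example with assumed (hypothetical) interaction energies
x = {"Fe": 0.5, "Zr": 0.3, "Te": 0.2}
A = {("Fe", "Zr"): -20e3, ("Zr", "Fe"): -25e3,
     ("Fe", "Te"): -10e3, ("Te", "Fe"): -10e3,
     ("Te", "Zr"): -30e3, ("Zr", "Te"): -35e3}
print(f"{kohler_excess_gibbs(x, A):.0f}")  # prints -6261
```

Phase separation of the kind predicted for the core-debris metal phase shows up wherever the resulting total Gibbs energy surface is non-convex in composition.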
Pool-boiler reflux receivers have been considered as an alternative to heat pipes for the input of concentrated solar energy to Stirling-cycle engines in dish-Stirling electric generation systems. Pool boilers offer simplicity in design and fabrication. Pool-boiler solar receiver operation has been demonstrated for short periods of time. However, in order to generate cost-effective electricity, the receiver must operate without significant maintenance for the entire system life. At least one theory explaining incipient-boiling behavior of alkali metals indicates that favorable start-up behavior should deteriorate over time. Many factors affect the stability and start-up behavior of the boiling system. Therefore, it is necessary to simulate the full-scale design in every detail as much as possible, including flux levels, materials, and operating cycles. On-sun testing is impractical due to the limited test time available. No boiling system has been demonstrated with the current porous boiling-enhancement surface and materials for a significant period of time. A test vessel was constructed with a Friction Coatings Inc. porous boiling-enhancement surface. The vessel is heated with a quartz lamp array providing about 92 W/cm^2 peak incident thermal flux. The vessel is charged with NaK-78, which is liquid at room temperature; this allows the elimination of costly electric preheating, both in this test and on full-scale receivers. The vessel is fabricated from Haynes 230 alloy, selected for its high-temperature strength and oxidation resistance. The vessel operates at 750°C around the clock, with a 1/2-hour shutdown cycle to ambient every 8 hours. Temperature data are collected continually. The test design and initial test data (first 2,500 hours and 300 start-ups) are presented here. The test is designed to operate for 10,000 hours and will be complete in the spring of 1994.
The title problem is of particular interest for the analysis of seismic signals arising from underground nuclear explosions. Previous attempts at a solution have indicated that, although cylindrical symmetry exists, conventional methods cannot be applied because of the existence of plane and spherical boundaries. The present paper develops a ray-grouping technique for finding the solution to the title problem. This technique allows the separation of the problem into a series of canonical problems. Each such problem deals with a given boundary condition (e.g., continuity conditions at a material interface). Using this technique, one may follow waves along ray paths. It is easy to identify, after n reflections, (a) rays which arrive simultaneously at a given point and (b) the terms in the solution which need to be included at a given time. It is important to note that a cylindrical coordinate system is not employed, even though the problem is axially symmetric. Instead, the equations are carefully transformed, making it possible to use a Cartesian coordinate system. This results in a spectral representation of the solution in terms of algebraic expressions in lieu of Bessel functions.
An overview is presented of research that focuses on slow flows of suspensions in which colloidal and inertial effects are negligibly small. We describe nuclear magnetic resonance imaging experiments to quantitatively measure particle migration occurring in concentrated suspensions undergoing a flow with a nonuniform shear rate. These experiments address the issue of how the flow field affects the microstructure of suspensions. In order to understand the local viscosity in a suspension with such a flow-induced, spatially varying concentration, one must know how the viscosity of a homogeneous suspension depends on such variables as solids concentration and particle orientation. We suggest the technique of falling ball viscometry, using small balls, as a method to determine the effective viscosity of a suspension without affecting the original microstructure significantly. We also describe data from experiments in which the detailed fluctuations of a falling ball's velocity indicate the noncontinuum nature of the suspension and may lead to more insights into the effects of suspension microstructure on macroscopic properties. Finally, we briefly describe other experiments that can be performed in quiescent suspensions (in contrast to the use of conventional shear rotational viscometers) in order to learn more about boundary effects in concentrated suspensions.
Rock mass mechanical properties are important in the design of drifts and ramps. These properties are used in evaluations of the impacts of thermomechanical loading of potential host rock within the Yucca Mountain Site Characterization Project. Representative intact-rock and joint mechanical properties were selected for welded and nonwelded tuffs from the currently available data sources. Rock mass qualities were then estimated using both the Norwegian Geotechnical Institute (Q) and the Rock Mass Rating (RMR) systems. Rock mass mechanical properties were developed based on estimates of rock mass quality, the current knowledge of intact properties, and fracture/joint characteristics. Empirical relationships developed to correlate the rock mass quality indices and the rock mass mechanical properties were then used to estimate the range of rock mass mechanical properties.
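Empirical correlations of the kind mentioned above relate the Q and RMR indices to a rock mass deformation modulus. Whether these particular fits were the ones used in the study is not stated; two widely cited published examples can be sketched as:

```python
import math

def modulus_from_rmr(rmr):
    """Serafim & Pereira (1983) fit: rock mass deformation modulus in
    GPa from the Rock Mass Rating, E = 10**((RMR - 10)/40).
    One of several published empirical correlations."""
    return 10 ** ((rmr - 10) / 40)

def modulus_from_q(q):
    """Barton's fit E = 25*log10(Q) in GPa, applicable for Q > 1."""
    return 25 * math.log10(q)

# Example: a fair-quality rock mass
print(f"{modulus_from_rmr(50):.1f} GPa")  # prints 10.0 GPa
print(f"{modulus_from_q(10):.1f} GPa")    # prints 25.0 GPa
```

Because different correlations can give noticeably different moduli for the same index value, reporting a range of rock mass properties, as the study does, is the usual practice.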
Two aspects of the radionuclide source terms used for total-system performance assessment (TSPA) analyses have been reviewed. First, a detailed radionuclide inventory (i.e., one in which the reactor type, decay, and burnup are specified) is compared with the standard source-term inventory used in prior analyses. The latter assumes a fixed ratio of pressurized-water reactor (PWR) to boiling-water reactor (BWR) spent fuel, at specific amounts of burnup and at 10-year decay. TSPA analyses have been used to compare the simplified source term with the detailed one. The TSPA-91 analyses did not show a significant difference between the source terms. Second, the radionuclides used in source terms for TSPA aqueous-transport analyses have been reviewed to select ones that are representative of the entire inventory. It is recommended that two actinide decay chains be included (the 4n+2 "uranium" and 4n+3 "actinium" decay series), since these include several radionuclides that have potentially important release and dose characteristics. In addition, several fission products are recommended for the same reason. The choice of radionuclides should be influenced by other parameter assumptions, such as the solubility and retardation of the radionuclides.
Brine seepage to 17 boreholes in salt at the Waste Isolation Pilot Plant (WIPP) facility horizon has been monitored for several years. A simple model for one-dimensional, radial, Darcy flow due to relaxation of ambient pore-water pressure is applied to analyze the field data. Fits of the model response to the data yield estimates of two parameters that characterize the magnitude of the flow and the time scale over which it evolves. With further assumptions, these parameters are related to the permeability and the hydraulic diffusivity of the salt. For those data that are consistent with the model prediction, estimated permeabilities are typically 10^-22 to 10^-21 m^2. The relatively small range of inferred permeabilities reflects the observation that the measured seepage fluxes are fairly consistent from hole to hole, of the order of 10^-10 m/s. Estimated diffusivities are typically 10^-10 to 10^-8 m^2/s. The greater scatter in inferred hydraulic diffusivities is due to the difficulty of matching the idealized model history to the observed evolution of the flows. The data obtained from several of the monitored holes are not consistent with the simple model adopted here; material properties could not be inferred in these cases.
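As a back-of-the-envelope check of the inferred magnitudes, Darcy's law q = (k/mu) * dP/dr links a measured seepage flux to a permeability. The brine viscosity and near-hole pressure gradient below are assumed illustrative values, not quantities taken from the WIPP fits:

```python
def darcy_permeability(flux, viscosity, pressure_gradient):
    """Invert Darcy's law q = (k/mu) * dP/dr for the permeability k.
    All quantities in SI units (m/s, Pa*s, Pa/m -> m^2)."""
    return flux * viscosity / pressure_gradient

mu = 1.6e-3    # Pa*s, assumed brine viscosity
q = 1.0e-10    # m/s, order of the measured seepage flux from the abstract
dPdr = 1.6e8   # Pa/m, assumed steep near-hole pore-pressure gradient
k = darcy_permeability(q, mu, dPdr)
print(f"{k:.1e} m^2")  # prints 1.0e-21 m^2, within the inferred range
```

The point of the sketch is only that fluxes of order 10^-10 m/s are consistent with permeabilities of order 10^-21 m^2 when the relaxing pore pressure produces a very steep gradient at the borehole wall.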
This report is a summary of and a guide to core-based stress measurements. It covers anelastic strain recovery, circumferential velocity anisotropy, differential strain curve analysis, differential wave velocity analysis, petrographic examination of microcracks, overcoring of archived core, measurements of the Kaiser effect, strength anisotropy tests, and analysis of coring-induced fractures. The report begins with a discussion of the stored energy within rocks, its release during coring, and the subsequent formation of relaxation microcracks. The interrogation or monitoring of these microcracks forms the basis for most of the core-based techniques (except for the coring-induced fractures). Problems that can arise due to coring or fabric are also presented. Coring-induced fractures are discussed in some detail, with the emphasis placed on petal (and petal-centerline) fractures and scribe-knife fractures. For each technique, a short description of the physics and the analysis procedures is given. In addition, several example applications have been selected (where available) to illustrate pertinent effects. This report is intended to be a guide to the proper application and diagnosis of core-based stress measurement procedures.
This paper describes progress made in the Lost Circulation Technology Development Program over the period March 1992 through April 1993. The program is sponsored at Sandia National Laboratories by the US Department of Energy, Geothermal Division. The goal of the program is to develop technology to reduce the lost-circulation costs associated with geothermal drilling by 30--50%.
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures.
Intergranular environmentally assisted cracking (EAC) of Ni anode substrates is likely to occur in a large proportion of Li/SOCl2 cells, but it is not generally detected because in the majority of cases it does not lead to catastrophic failure. However, EAC could become a problem for applications requiring continuous power with high reliability for 10--15 years. In the present work, we determine why simple galvanic-couple constant-strain tests do not produce cracking, and we introduce a constant-strain test that does produce cracking. The objective of this investigation is to determine the stress threshold for cracking as a function of Ni composition and microstructure.
Ever since PRO-ENGINEER became a dominant CAD package available to the public, some of us have been saying, "Gee, if only I could export my geometry to a stress analysis program without having to recreate any of the details already created, wouldn't that be spectacular?" Well, much to the credit of the major stress and thermal analysis software vendors, some of them have been listening to design engineers like me badger them to furnish a seamless interface between PRO and their stress analysis programs. The downside is that a lot of problems still exist with most of the vendors and their interfaces. I want to discuss the interfaces that I feel are currently "state of the art," how they are developing, and the prospects for finally arriving at a transparent procedure that an engineer at a workstation can utilize in his or her design process. In years past, engineers would develop a design, and changes would evolve based on intuition or somebody else's critical evaluation. Then the design would be forwarded to the production group, or the stress analysis group, for further evaluation and analysis. Maybe data from a preliminary prototype would be collected and an evaluation report made. All of this took time and increased the cost of the item to be manufactured. Today, the engineer must assume responsibility for design and functional capability early in the design process, if for no other reason than the costs associated with diverse channels of critiquing. For that reason, one place to enhance the design process is the ability to do preliminary stress and thermal analysis during the initial design phase. This is both cost and time effective. But, as I am sure you are aware, this has been easier said than done.
In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations we have made regarding the implementation of such visualization environments.
Spray systems in nuclear reactor containments are described. The scrubbing of aerosols from containment atmospheres by spray droplets is discussed. Uncertainties are identified in the prediction of spray performance when the sprays are used as a means for decontaminating containment atmospheres. A mechanistic model based on current knowledge of the physical phenomena involved in spray performance is developed. With this model, a quantitative uncertainty analysis of spray performance is conducted using a Monte Carlo method to sample 20 uncertain quantities related to phenomena of spray droplet behavior as well as the initial and boundary conditions expected to be associated with severe reactor accidents. Results of the uncertainty analysis are used to construct simplified expressions for spray decontamination coefficients. Two variables that affect aerosol capture by water droplets are not treated as uncertain: (1) "Q", the spray water flux into the containment, and (2) "H", the total fall distance of spray droplets. The choice of values of these variables is left to the user, since they are plant and accident specific; also, they can usually be ascertained with some degree of certainty. The spray decontamination coefficients are found to be sufficiently dependent on the extent of decontamination that the fraction of the initial aerosol remaining in the atmosphere, m_f, is explicitly treated in the simplified expressions. The simplified expressions for the spray decontamination coefficient are given. Parametric values for these expressions are found for the median, 10th percentile, and 90th percentile values in the uncertainty distribution for the spray decontamination coefficient. Examples are given to illustrate the utility of the simplified expressions in predicting spray decontamination of an aerosol-laden atmosphere.
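The Monte Carlo percentile procedure can be sketched as follows. The removal-coefficient formula lambda = 3*Q*E/(2*d) is a textbook well-mixed expression standing in for the paper's mechanistic model, and the sampled ranges below are assumed for illustration, not the paper's 20 uncertain quantities:

```python
import random

random.seed(1)  # fixed seed for a reproducible sketch

def spray_lambda(Q, d, E):
    """Toy aerosol removal coefficient (1/s) for a well-mixed atmosphere:
    lambda = 3*Q*E/(2*d), with Q the spray water flux (m/s), d the
    droplet diameter (m), and E the single-droplet capture efficiency."""
    return 3.0 * Q * E / (2.0 * d)

Q = 1.0e-3  # m/s; like the paper's "Q", treated as a user-chosen input
samples = []
for _ in range(10_000):
    d = random.uniform(3e-4, 1e-3)    # droplet diameter, assumed range
    E = 10 ** random.uniform(-3, -1)  # capture efficiency, log-uniform (assumed)
    samples.append(spray_lambda(Q, d, E))

# Report the same summary points used in the paper: median, 10th, 90th
samples.sort()
for label, frac in (("10th pct", 0.10), ("median", 0.50), ("90th pct", 0.90)):
    print(label, f"{samples[int(frac * len(samples))]:.2e}", "1/s")
```

The real analysis samples many more quantities and propagates them through a mechanistic model, but the reduction of the output distribution to median, 10th, and 90th percentile values follows this same pattern.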
This document provides an overview of the activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon's stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the set of review opportunities that occur throughout the entire process, which assure a consistent approach and appropriate use of the data for reliability evaluation purposes.
Bench-scale tests were carried out in support of the design of a second-generation 75-kWt reflux pool-boiler solar receiver. The receiver will be made from Haynes Alloy 230 and will contain the sodium-potassium alloy NaK-78. The bench-scale tests used quartz-lamp-heated boilers to screen candidate boiling-stabilization materials and methods at temperatures up to 750°C. Candidates that provided stable boiling were tested for hot-restart behavior. Poor stability was obtained with single 1/4-inch-diameter patches of powdered metal hot-press-sintered onto the wetted side of the heat-input area. Laser-drilled and electric-discharge-machined cavities in the heated surface also performed poorly. Small additions of xenon and heated-surface tilt out of the vertical dramatically improved poor boiling stability; additions of helium or oxygen did not. The most stable boiling was obtained when the entire heat-input area was covered by a powdered-metal coating. The effect of heated-area size was assessed for one coating: at low incident fluxes, when even this coating performed poorly, increasing the heated-area size markedly improved boiling stability. Good hot-restart behavior was not observed with any candidate, although results were significantly better with added xenon in a boiler shortened from 3 to 2 feet. In addition to the screening tests, flash-radiography imaging of metal-vapor bubbles during boiling was attempted. Contrary to the Cole-Rohsenow correlation, these bubble-size estimates did not vary with pressure; instead they were constant, consistent with the only other alkali-metal measurements, but about half their size.
Experiments with hydrogen-air-steam mixtures, such as those found within a containment system following a reactor accident, were conducted in the Heated Detonation Tube (43 cm diameter and 12 m long) to determine the region of benign combustion, i.e., the region between the flammability limits and the deflagration-to-detonation transition limits. Obstacles were used to accelerate the flame; these included annular rings with a 30% blockage ratio, and alternating rings and disks with a 60% blockage ratio. The initial conditions were 110°C and one or three atmospheres pressure. A benign burning region exists for rich mixtures but is generally smaller than for lean mixtures. Effects of the different obstacles and of the different pressures are discussed.
Technological advances in the world of microcomputers have brought forth affordable systems and powerful software that can compete with the more traditional world of minicomputers. This paper describes an effort at Sandia National Laboratories to decrease operational and maintenance costs and increase performance by moving a database system from a minicomputer to a microcomputer.
Uncontained, high-energy gas turbine engine fragments are a potential threat to air-transportable containers carried aboard jet aircraft. The threat to a generic example container is evaluated by probability analyses and penetration testing to demonstrate the methodology to be used in the evaluation of a specific container/aircraft/engine combination. Fragment/container impact probability is the product of the uncontained fragment release rate and the geometric probability that a container is in the path of this fragment. The probability of a high-energy rotor burst fragment from four generic aircraft engines striking one of the containment vessels aboard a transport aircraft is approximately 1.2 × 10^-9 strikes/hour. Finite element penetration analyses and tests can be performed to identify specific fragments which have the potential to penetrate a generic or specific containment vessel. The relatively low probability of engine fragment/container impacts is primarily due to the low release rate of uncontained, hazardous jet engine fragments.
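The impact probability described above is a simple product of two factors. The factor values below are assumptions chosen only to reproduce the quoted order of magnitude; the abstract does not give the individual per-engine rates:

```python
def strike_rate(uncontained_release_rate, geometric_probability):
    """Fragment/container impact rate (strikes per flight hour) as the
    product of the engine's uncontained-fragment release rate and the
    geometric probability that a container lies in the fragment's path."""
    return uncontained_release_rate * geometric_probability

release_rate = 4.0e-7  # uncontained hazardous fragments per flight hour (assumed)
geom_prob = 3.0e-3     # probability a container is in the fragment path (assumed)
print(f"{strike_rate(release_rate, geom_prob):.1e} strikes/hour")  # prints 1.2e-09 strikes/hour
```

Because both factors are small, the product is dominated by whichever factor is hardest to reduce; the abstract notes that the low release rate of hazardous uncontained fragments is the controlling term.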
The Stockpile Transition Enabling Program (STEP) is aimed at identifying weapon components suitable for use in more than one weapon and at qualifying the components so identified for multiple use. Work includes identifying the means to maintain the manufacturing capability for these items. This document provides the participants in STEP a common, consistent understanding of the process and requirements. The STEP objectives are presented and the activities are outlined. The STEP project selections are based on customer needs, product applicability, and the maturity of the technology used. A formal project selection process is described and the selection criteria are defined. The concept of "production readiness" is introduced, along with a summary of the project requirements and deliverables to demonstrate production readiness.
The MELCOR code was used to simulate one of the core degradation experiments conducted by GRS (a reactor research group in Germany) in the CORA out-of-pile test facility. This test, designated CORA-13, was selected as one of the International Standard Problems, Number ISP31, by the Organization for Economic Cooperation and Development. In this blind calculation, only initial and boundary conditions were provided. The experiment consisted of a small core bundle of twenty-five PWR fuel elements that was electrically heated to temperatures greater than 2,800 K. The experiment comprised three phases: a 3,000-second gas preheat phase, a 1,870-second transient phase, and a 180-second water quench phase. MELCOR predictions are compared both to the experimental data and to eight other ISP31 submittals. Temperatures of various components, energy balance, zircaloy oxidation, and core blockage are examined. Up to the point where oxidation was significant, MELCOR temperatures agreed very well with the experiment -- usually to within 50 K. MELCOR predicted oxidation to occur about 100 seconds earlier and at a faster rate than the experimental data indicated. The large oxidation spike that occurred during quench was not predicted. However, the experiment produced 210 grams of hydrogen, while MELCOR predicted 184 grams, which was one of the closest integral predictions of the nine submittals. Core blockage was of the right magnitude; however, material collected on the lower grid spacer in the experiment at an axial location of 450 mm, while in MELCOR the material collected at the 50 to 150 mm location. In general, compared to the other submittals, the MELCOR calculation was superior.
The capability to perform atmospheric corrosion testing of materials and components now exists at Sandia, resulting from the installation of a system called the Facility for Atmospheric Corrosion Testing (FACT). This report details the design, equipment, operation, maintenance, and future modifications of the system. This report also presents some representative data acquired from testing copper in environments generated by the FACT.
The purpose of a unique signal (UQS) in a nuclear weapon system is to provide an unambiguous communication of intent to detonate from the UQS information input source device to a stronglink safety device in the weapon in a manner that is highly unlikely to be duplicated or simulated in normal environments and in a broad range of ill-defined abnormal environments. This report presents safety considerations for the design and implementation of UQSs in the context of the overall safety system.
The Department of Energy's Nevada Operations Office (DOE/NV) has disposed of a small quantity of transuranic waste at the Greater Confinement Disposal facility in Area 5 of the Nevada Test Site. In 1989, DOE/NV contracted with Sandia National Laboratories to perform a preliminary performance assessment of this disposal facility. This preliminary performance assessment consisted of analyses designed to assess the likelihood of complying with Environmental Protection Agency standards for the disposal of transuranic waste, high level waste, and spent fuel. The preliminary nature of this study meant that no other regulatory standards were considered and the analyses were conducted with specific limitations. The procedure for the preliminary performance assessment consisted of (1) collecting information about the site, (2) developing models based on this information, (3) implementing these models in computer codes, (4) performing the analyses using the computer codes, and (5) performing sensitivity analyses to determine the more important variables. Based on the results of the analyses, it appears that the Greater Confinement Disposal facility will most likely comply with the Environmental Protection Agency's standards for the disposal of transuranic waste. The results of the sensitivity analyses are being used to guide site characterization activities related to the next iteration of performance assessment analyses for the Greater Confinement Disposal facility.
The containment building surrounding a nuclear reactor offers the last barrier to the release of radioactive materials from a severe accident into the environment. The loading environment of the containment under severe accident conditions may include pressures and temperatures much greater than design values. Investigations into the performance of containments subjected to ultimate (failure) pressure and temperature conditions have been performed over the last several years through a program administered by the Nuclear Regulatory Commission (NRC); these NRC-sponsored investigations are discussed here. Reviewed are the results of large-scale experiments on reinforced concrete, prestressed concrete, and steel containment models pressurized to failure. In conjunction with these major tests, separate-effects testing has been performed on many of the critical containment components: aged and unaged seals, a personnel air lock, and electrical penetration assemblies subjected to elevated temperature and pressure. An objective of the NRC program is to gain an understanding of the behavior of typical existing and planned containment designs subjected to postulated severe accident conditions. This understanding has led to the development of experimentally verified analytical tools that can accurately predict ultimate containment capacities, which are useful in developing severe accident mitigation schemes. Finally, speculation on the response of containments subjected to severe accident conditions is presented.
An object-oriented methodology is presented that is based on two sets of Data Flow Diagrams (DFDs): one for the functional view, and one for the behavioral view. The functional view presents the information flow between shared objects. These objects map to the classes identified in the structural view (e.g., Information Model). The behavioral view presents the flow of information between control components and relates these components to their state models. Components appearing in multiple views provide a bridge between the views. The top-down hierarchical nature of the DFDs provides a needed overview, or road map, through the software system.
Current CASE technology provides sophisticated diagramming tools to generate a software design. The design, stored internal to the CASE tool, is bridged to the code via code generators. There are several limitations to this technique: (1) the portability of the design is limited to the portability of the CASE tools, and (2) the code generators offer a clumsy link between design and code. The CASE tool, though valuable during design, becomes a hindrance during implementation. Frustration frequently causes the CASE tool to be abandoned during implementation, permanently severing the link between design and code. Current CASE tools store the design in an internal structure, from which code is generated. The technique presented herein suggests that CASE tools store the system knowledge directly in code. The CASE support then switches from an emphasis on code generators to employing state-of-the-art reverse engineering techniques for document generation. Graphical and textual descriptions of each software component (e.g., Ada Package) may be generated via reverse engineering techniques from the code. These reverse-engineered descriptions can be merged with system overview diagrams to form a top-level design document. The resulting document can readily reflect changes to the software components by automatically generating new component descriptions for the changed components. The proposed auto-documentation technique facilitates the document upgrade task at later stages of development (e.g., design, implementation, and delivery) by using the component code as the source of the component descriptions. The CASE technique presented herein is a unique application of reverse engineering techniques to new software systems. This technique contrasts with more traditional CASE auto code generation techniques.
Nonuniform etching is a serious problem in plasma processing of semiconductor materials and has important consequences for the quality and yield of microelectronic components. In many plasmas, etching occurs at a faster rate near the periphery of the wafer, resulting in nonuniform removal of specific materials over the wafer surface. This research investigated in situ optical diagnostic techniques for monitoring etch uniformity during plasma processing of microelectronic components. We measured 2-D images of atomic chlorine at 726 nm in a chlorine-helium plasma during plasma etching of polysilicon in a parallel-plate plasma etching reactor. The 3-D distribution of atomic chlorine was determined by Abel inversion of the plasma image. The experimental results showed that the chlorine atomic emission intensity is at a maximum near the outer radius of the plasma and decreases toward the center. Likewise, the actual etch rate, as determined by profilometry on the processed wafer, was approximately 20% greater near the edge of the wafer than at its center. There was a direct correlation between the atomic chlorine emission intensity and the etch rate of polysilicon over the wafer surface. Based on these analyses, 3-D imaging would be a useful diagnostic technique for in situ monitoring of etch uniformity on wafers.
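The Abel-inversion step can be sketched with a simple "onion-peeling" discretization, in which concentric rings of emission are solved from the outside in. This is a generic illustration of the technique, not the procedure used in the study; `abel_invert` and the ring geometry are assumptions:

```cpp
#include <cmath>
#include <vector>

// Onion-peeling inverse Abel transform (illustrative): recover radial
// emission e[i] from chord-integrated projections p[j] measured at lateral
// offsets y_j = j*dr. The plasma is modeled as concentric rings of width
// dr, and rings are solved from the outermost inward.
std::vector<double> abel_invert(const std::vector<double>& p, double dr) {
    const int n = static_cast<int>(p.size());
    std::vector<double> e(n, 0.0);
    for (int j = n - 1; j >= 0; --j) {
        const double yj = j * dr;
        double sum = p[j];
        // Subtract the already-known contributions of rings outside ring j.
        for (int i = j + 1; i < n; ++i) {
            const double ro = (i + 1) * dr, ri = i * dr;
            const double L = 2.0 * (std::sqrt(ro * ro - yj * yj) -
                                    std::sqrt(ri * ri - yj * yj));
            sum -= L * e[i];
        }
        // Chord length of ring j along its own line of sight.
        const double ro = (j + 1) * dr;
        e[j] = sum / (2.0 * std::sqrt(ro * ro - yj * yj));
    }
    return e;
}
```

For a uniform emitter of radius R, the projections are p_j = 2*sqrt(R^2 - y_j^2), and this inversion recovers a constant radial profile.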
The measurement of VOC concentrations in harsh chemical and physical environments is a formidable task. A surface acoustic wave (SAW) sensor has been designed for this purpose, and its construction and testing are described in this paper. Included is a detailed description of the design elements specific to operation in 300{degree}C steam and HCl environments, including the temperature control, gas handling, and signal processing components. In addition, laboratory temperature stability was studied and a minimum detection limit was defined for operation in industrial environments. Finally, a description of field tests performed on steam reforming equipment at Synthetica Technologies Inc. of Richmond, CA, is given, including a report on the destruction efficiency of CCl{sub 4} in the Synthetica moving bed evaporator. Design improvements based on the field tests are proposed.
The MELCOR code was used to simulate PNL`s Ice Condenser Experiments 11-6 and 16-11. In these experiments, ZnS was injected into a mixing chamber, and the combined steam/air/aerosol mixture flowed into an ice condenser that was 14.7 m tall. Experiment 11-6 was a low flow test; Experiment 16-11 was a high flow test. Temperatures in the ice condenser region and particle retention were measured in these tests. MELCOR predictions compared very well to the experimental data. The MELCOR calculations were also compared to CONTAIN code calculations for the same tests. A number of sensitivity studies were performed. It was found that the simulation time step, aerosol parameters such as the number of MAEROS components and sections used and the particle density, and ice condenser parameters such as the energy capacity of the ice, the ice heat transfer coefficient multiplier, and the ice heat structure characteristic length all could affect the results. Thermal/hydraulic parameters such as control volume equilibrium assumptions, flow loss coefficients, and the bubble rise model affected the results less significantly. MELCOR results were not machine dependent for this problem.
One of the most widely recognized inadequacies of C is its low-level treatment of arrays. Arrays are not first-class objects in C; an array name in an expression almost always decays into a pointer to the underlying type. This is unfortunate, especially since an increasing number of high-performance computers are optimized for calculations involving arrays of numbers. On such machines, double[] may be regarded as an intrinsic data type comparable to double or int and quite distinct from double*. This weakness of C is acknowledged in the ARM, where it is suggested that the inadequacies of the C array can be overcome in C++ by wrapping it in a class that supplies dynamic memory management, bounds checking, operator syntax, and other useful features. Such ``smart arrays`` can in fact supply the same functionality as the first-class arrays found in other high-level, general-purpose programming languages. Unfortunately, they are expensive in both time and memory and make poor use of advanced floating-point architectures. Is there a better solution? The most obvious solution is to make arrays first-class objects and add the functionality mentioned in the previous paragraph. However, this would destroy C compatibility and significantly alter the C++ language. Major conflicts with existing practice would seem inevitable. I propose instead that numerical array classes be adopted as part of the C++ standard library. These classes will have the functionality appropriate for the intrinsic arrays found on most high-performance computers, and the compilers written for these computers will be free to implement them as built-in classes. On other platforms, these classes may be defined normally, and will provide users with basic array functionality without imposing an excessive burden on the implementor.
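A minimal sketch of such a ``smart array`` class, providing elementwise arithmetic with operator syntax over a contiguous buffer, might look like the following. The class and its members are invented for illustration and are not the proposed library interface (the standard library's later `std::valarray` plays a similar role):

```cpp
#include <cstddef>
#include <vector>

// Illustrative "smart array": a contiguous numeric buffer with
// elementwise operator syntax, of the kind the text argues should be
// standardized so vendors can implement it as a built-in class.
class NumArray {
public:
    explicit NumArray(std::size_t n, double v = 0.0) : d_(n, v) {}
    std::size_t size() const { return d_.size(); }
    double& operator[](std::size_t i) { return d_[i]; }
    double operator[](std::size_t i) const { return d_[i]; }
    NumArray& operator+=(const NumArray& rhs) {  // elementwise add
        for (std::size_t i = 0; i < d_.size(); ++i) d_[i] += rhs.d_[i];
        return *this;
    }
    NumArray& operator*=(double s) {             // scalar scale
        for (double& x : d_) x *= s;
        return *this;
    }
private:
    std::vector<double> d_;
};

inline NumArray operator+(NumArray a, const NumArray& b) { return a += b; }
inline NumArray operator*(NumArray a, double s) { return a *= s; }
```

On a vector architecture, each loop above is exactly the kind of operation a compiler could map onto hardware array instructions if the class were treated as intrinsic.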
The United States (US) Strategic Defense Initiative Organization (SDIO) decided to investigate the possibility of launching a Russian Topaz II space nuclear power system. A preliminary nuclear safety assessment was conducted to determine whether or not a space mission could be conducted safely and within budget constraints. As part of this assessment, a safety policy and safety functional requirements were developed to guide both the safety assessment and future Topaz II activities. A review of the Russian flight safety program was conducted and documented. Our preliminary nuclear safety assessment included a number of deterministic analyses, such as: neutronic analysis of normal and accident configurations, an evaluation of temperature coefficients of reactivity, a reentry and disposal analysis, an analysis of postulated launch abort impact accidents, and an analysis of postulated propellant fire and explosion accidents. Based on the assessment to date, it appears that it will be possible to safely launch the Topaz II system in the US with a modification to preclude water-flooded criticality. A full-scale safety program is now underway.
The second stage of the Shock Technology and Applied Research (STAR) facility two-stage light gas gun at Sandia National Laboratories has been modeled to better assess its safety during operation and to determine the significance of various parameters to its performance. The piston motion and loading of the acceleration reservoir (AR), the structural response of the AR, and the projectile motion are determined. The piston is represented as an incompressible fluid, while the AR is modeled with the ABAQUS finite element structural analysis code. Model results are compared with a measured profile of AR diameter growth for a test at maximum conditions and with projectile exit velocities for a group of tests. Changes in the piston density and in the break diaphragm opening pressure are shown to significantly affect the AR loading and the projectile final velocity.
This paper provides an overview of the message passing primitives provided by PUMA (Performance-oriented, User-managed Messaging Architecture). Message passing in PUMA is based on the concept of a portal--an opening in the address space of an application process. Once an application process has established a portal, other processes can write values into the memory associated with the portal using a simple send operation. Because messages are written directly into the address space of the receiving process, there is no need to buffer messages in the PUMA kernel. This simplifies the design of the kernel, increasing its reliability and portability. Moreover, because messages are mapped directly into the address space of the application process, the application can manage the messages that it receives without needing direct support from the kernel.
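The portal concept can be illustrated with a toy single-address-space sketch in which the receiver owns a memory region and a send writes message bytes directly into it, with no intermediate buffering. All names here are illustrative assumptions, not the actual PUMA interface:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Conceptual sketch of a portal: the receiver exposes a region of its
// own address space, and a send copies message bytes straight into that
// region with no intermediate kernel buffer. Illustrative only.
struct Portal {
    std::vector<unsigned char> region;  // memory owned by the receiver
    std::size_t next = 0;               // receiver-managed fill offset
    explicit Portal(std::size_t bytes) : region(bytes) {}
};

// "Send": write the message directly into the receiver's portal memory.
bool portal_send(Portal& dst, const void* msg, std::size_t len) {
    if (dst.next + len > dst.region.size()) return false;  // no room left
    std::memcpy(dst.region.data() + dst.next, msg, len);
    dst.next += len;
    return true;
}
```

Because the receiver owns `region`, it can consume and reclaim message memory on its own schedule, which is the point of user-managed messaging.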
We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.
The successful design and testing of a three-phase planar integrated magnetic micromotor is presented. Fabrication is based on a modified deep X-ray lithography and electroplating, or LIGA, process. Maximum rotational speeds of 33,000 rpm are obtained in air with a rotor diameter of 285 {mu}m and do not change when the motor is operated in vacuum. Real-time rotor response is obtained with an integrated shaft encoder. Long lifetime is evidenced by testing to over 5 {times} 10{sup 7} rotation cycles without changes in performance. Projected speeds of the present motor configuration are in the vicinity of 100 krpm and are limited by torque ripple. Higher speeds, which are attractive for sensor applications, require constant-torque characteristic excitation, as is evidenced by ultracentrifuge and gyroscope design. Further understanding of electroplated magnetic material properties will drive these performance improvements.
This case study presents work being done to provide visualization capabilities for a family of codes at Sandia in the area of shock physics. The codes, CTH and Parallel CTH, run in traditional supercomputing as well as massively parallel environments. These are Eulerian codes that produce data on structured grids. Data sets can be large, so managing large data is a priority. A supercomputing-based distributed visualization environment has been implemented to support such applications. This environment, which is based in New Mexico, is also accessible from our branch site in California via a long-haul FDDI/ATM link. Functionality includes the ability to track ongoing simulations. A custom visualization file format has been developed to provide efficient, interactive access to result data. Visualization capabilities are based on the commercially available AVS software. A few example results are presented, along with a brief discussion of future work.
A series of CTH simulations was conducted to assess the feasibility of using the hydrodynamic code to model debris cloud formation and to predict any damage due to the subsequent loading on rear structures. Six axisymmetric simulations and one 3-dimensional simulation were conducted for spherical projectiles impacting Whipple bumper shields. The projectile diameters were chosen to correlate with two well-known analytic expressions for the ballistic limit of a Whipple bumper shield. It has been demonstrated that CTH can be used to simulate the debris cloud formation, the propagation of the debris across a void region, and the secondary impact of the debris against a structure. In addition, the results from the CTH simulations were compared to the analytic estimates of the ballistic limit. At impact velocities of 10 km/s or less, the CTH-predicted ballistic limit lies between the two analytic estimates. However, for impact velocities greater than 10 km/s, the CTH simulations predicted a ballistic limit larger than both analytic estimates. The differences at high velocities are not well understood. Structural failure at late times due to the time-integrated loading of a very diffuse debris cloud has not been considered in the CTH model. In addition, the analytic predictions are extrapolated from relatively low velocity data, and the extrapolation technique may not be valid. The discrepancy between the two techniques should be investigated further.
A multiphase mixture model is applied to describe shock-induced flow in deformable low-density foam. This model includes interphase drag and heat transfer, and all phases are treated as compressible. Volume fraction is represented as an independent kinematic variable, and the entropy inequality suggests a thermodynamically admissible evolutionary equation to describe rate-dependent compaction. This multiphase model has been applied to shock tube experiments conducted by B. W. Skews and colleagues in the study of normal shock impingement on a wall-supported low-density porous layer. Numerical solution of the multiphase flow equations employs a high-resolution adaptive finite element method which accurately resolves contact surfaces and shock interactions. Additional studies are presented investigating the effect of initial gas pressure in the foam layer, the shock interaction with multiple layers of foam, and the shock-induced flow in an unsupported foam layer.
This report describes the Quality and ES&H Self-Appraisal Program at the Center for Applied Physics, Engineering and Testing, 9300, and explains how the program promotes good ``Conduct of Operations`` throughout the center and helps line managers improve efficiency and maintain a safe work environment. The program provides a means to identify and remove hazards and to ensure workers are following correct and safe procedures; most importantly, however, 9300`s Self-Appraisal Program uses DOE`s ``Conduct of Operations`` and ``Quality Assurance`` guidelines to evaluate managers` policies and decisions. The idea is to draw attention to areas for improvement in ES&H while focusing on how well the organization`s processes and programs are doing. A copy of the Administrative Procedure that establishes and defines the program, as well as samples of a Self-Appraisal Report and a Manager`s Response to the Self-Appraisal Report, are provided as appendixes.
A class of recording instruments records high-frequency signals as a two-dimensional image rather than converting the analog signal directly to digital output. This report explores the task of reducing the two-dimensional trace to a uniformly sampled waveform that best represents the signal characteristics. Many recorders provide algorithms for locating the center of the trace. The author discusses these algorithms and alternative algorithms, comparing their effectiveness.
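One simple trace-locating algorithm of the kind compared in such work is a column-by-column intensity centroid; the sketch below is a generic illustration, not one of the recorder vendors' algorithms:

```cpp
#include <cstddef>
#include <vector>

// For each column of the trace image (img[x][y] = pixel intensity),
// take the intensity-weighted centroid of the pixels as the trace
// center, yielding one waveform sample per column.
std::vector<double> trace_centers(const std::vector<std::vector<double>>& img) {
    std::vector<double> centers;
    for (const auto& col : img) {
        double w = 0.0, wy = 0.0;
        for (std::size_t y = 0; y < col.size(); ++y) {
            w += col[y];
            wy += col[y] * static_cast<double>(y);
        }
        centers.push_back(w > 0.0 ? wy / w : -1.0);  // -1 marks an empty column
    }
    return centers;
}
```

The centroid gives sub-pixel resolution when the trace spans several pixels, which is one reason it is often preferred over simply taking the brightest pixel.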
Midband (1 kHz--500 MHz) analog fiber optic data links were purchased for evaluation from three suppliers. Ortel Corp., Alhambra, CA, provided units built to the specification. Kaman Sciences Corp., Colorado Springs, CO, and Laser Diode Inc., Princeton, NJ, provided units similar to the specification but with significant differences. The final version of the Ortel units met the specification but was marginal in dynamic range. The other units failed to meet the specification but showed promise for future application.
Amid all the changes in the nuclear weapons complex, one intransigent fact remains: an enduring nuclear deterrent will not be possible without a continuing surveillance program to (1) find aging and other stockpile problems so that they can be fixed and (2) assure that when we do not find problems, none exist. Surveillance involves destructive or degrading tests that will exhaust planned provisions for rebuilding or replacing sample weapons in the not-too-distant future. This document discusses needed preparations for conducting surveillance in a future where production of new types of weapons is unlikely. Near-term opportunities to minimize the impact of extended surveillance are identified, and the need to maintain production capabilities is explained.
A subsurface-imaging synthetic-aperture radar (SISAR) has potential for application in areas ranging from non-proliferation programs for nuclear weapons to environmental monitoring. However, most conventional synthetic-aperture radars operate at higher microwave frequencies, which do not significantly penetrate below the soil surface. This study attempts to provide a basis for determining optimum frequencies and frequency ranges which will allow synthetic-aperture imaging of buried targets. Since the radar return from a buried object must compete with the return from surface clutter, the signal-to-clutter ratio is an appropriate measure of performance for a SISAR. A parameter-based modeling approach is used to model the complex dielectric constant of the soil from measured data obtained from the literature. Theoretical random-surface scattering models, based on statistical solutions to Maxwell`s equations, are used to model the clutter. These models are combined to estimate the signal-to-clutter ratio for canonical targets buried in several soil configurations. Initial results indicate that the HF spectrum (3--30 MHz), although it could be used to detect certain targets under some conditions, has limited practical value for use with SISAR, while the upper VHF through UHF spectrum ({approximately}100 MHz--1 GHz) shows the most promise for a general-purpose SISAR system. Recommendations are included for additional research.
The MC4033 Common Radar, developed for the B61/83 Stockpile Improvement Program, required a small, rugged crystal resonator in an all-ceramic package capable of providing a frequency of 20 MHz. A commercially available crystal resonator, manufactured by Statek Corporation, met this requirement. This report describes the design intent, component characteristics, and evaluation test results for this device.
A radial transmission line material measurement apparatus (sample holder, offset short standards, measurement software, and instrumentation) that has been proposed, analyzed, designed, constructed, and tested is described. The purpose of the apparatus is to obtain accurate surface impedance measurements of lossy, possibly anisotropic, samples at low and intermediate frequencies (VHF and low UHF). The samples typically take the form of sections of the material coatings on conducting objects. Such measurements thus provide the key input data for predictive numerical scattering codes. Prediction of the sample surface impedance from the coaxial input impedance measurement is carried out by two techniques. The first is an analytical model for the coaxial-to-radial transmission line junction. The second is an empirical determination of the bilinear transformation model of the junction by the measurement of three full standards. The standards take the form of three offset shorts (and an additional lossy Salisbury load), which have also been constructed. The accuracy achievable with the device appears to be near one percent.
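The empirical junction model can be illustrated as follows: writing the junction as the bilinear transformation w = (az + b)/(cz + 1), each standard contributes one linear equation in (a, b, c), so three known standards determine the model. The sketch below solves the resulting 3-by-3 system by Cramer's rule; the function name and normalization (unit denominator constant) are illustrative assumptions, not the report's formulation:

```cpp
#include <array>
#include <complex>

using C = std::complex<double>;

// Fit the bilinear (Moebius) junction model w = (a*z + b) / (c*z + 1)
// from three standards: z[k] = known standard value, w[k] = measured
// value. Each pair gives z[k]*a + b - z[k]*w[k]*c = w[k].
std::array<C, 3> fit_bilinear(const std::array<C, 3>& z,
                              const std::array<C, 3>& w) {
    C M[3][3], r[3];
    for (int k = 0; k < 3; ++k) {
        M[k][0] = z[k]; M[k][1] = C(1.0); M[k][2] = -z[k] * w[k];
        r[k] = w[k];
    }
    auto det3 = [](C m[3][3]) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    };
    const C D = det3(M);
    std::array<C, 3> x;  // (a, b, c) by Cramer's rule
    for (int j = 0; j < 3; ++j) {
        C Mj[3][3];
        for (int k = 0; k < 3; ++k)
            for (int i = 0; i < 3; ++i)
                Mj[k][i] = (i == j) ? r[k] : M[k][i];
        x[j] = det3(Mj) / D;
    }
    return x;
}
```

Once (a, b, c) are known, the transformation is inverted to recover the sample impedance from each subsequent measurement.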
A new contact detection algorithm has been developed to address difficulties associated with the numerical simulation of contact in nonlinear finite element structural analysis codes. Problems including accurate and efficient detection of contact for self-contacting surfaces, tearing and eroding surfaces, and multi-body impact are addressed. The proposed algorithm is portable between dynamic and quasi-static codes and can efficiently model contact between a variety of finite element types including shells, bricks, beams and particles. The algorithm is composed of (1) a location strategy that uses a global search to decide which slave nodes are in proximity to a master surface and (2) an accurate detailed contact check that uses the projected motions of both master surface and slave node. In this report, currently used contact detection algorithms and their associated difficulties are discussed. Then the proposed algorithm and how it addresses these problems is described. Finally, the capability of the new algorithm is illustrated with several example problems.
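The global-search phase can be sketched as a uniform spatial binning of master-surface nodes followed by a capture-radius test on slave nodes in neighboring bins. This is a generic illustration of bucket-based proximity search with invented names, not the report's algorithm:

```cpp
#include <cmath>
#include <unordered_map>
#include <vector>

struct Pt { double x, y, z; };

// Hash a 3-D integer cell index to a bucket key; collisions are harmless
// because candidates are confirmed with an exact distance check.
long cell_key(int ix, int iy, int iz) {
    return (static_cast<long>(ix) * 73856093L) ^
           (static_cast<long>(iy) * 19349663L) ^
           (static_cast<long>(iz) * 83492791L);
}

// Global search: bin master-surface nodes into cells of size h, then
// return the indices of slave nodes within distance h of any master node.
std::vector<int> global_search(const std::vector<Pt>& master,
                               const std::vector<Pt>& slave, double h) {
    std::unordered_map<long, std::vector<int>> grid;
    auto idx = [h](double v) { return static_cast<int>(std::floor(v / h)); };
    for (int m = 0; m < static_cast<int>(master.size()); ++m)
        grid[cell_key(idx(master[m].x), idx(master[m].y),
                      idx(master[m].z))].push_back(m);
    std::vector<int> hits;
    for (int s = 0; s < static_cast<int>(slave.size()); ++s) {
        bool found = false;
        const int cx = idx(slave[s].x), cy = idx(slave[s].y), cz = idx(slave[s].z);
        for (int dx = -1; dx <= 1 && !found; ++dx)
            for (int dy = -1; dy <= 1 && !found; ++dy)
                for (int dz = -1; dz <= 1 && !found; ++dz) {
                    auto it = grid.find(cell_key(cx + dx, cy + dy, cz + dz));
                    if (it == grid.end()) continue;
                    for (int m : it->second) {
                        const double ddx = slave[s].x - master[m].x;
                        const double ddy = slave[s].y - master[m].y;
                        const double ddz = slave[s].z - master[m].z;
                        if (ddx * ddx + ddy * ddy + ddz * ddz <= h * h) {
                            found = true; break;
                        }
                    }
                }
        if (found) hits.push_back(s);
    }
    return hits;
}
```

Only the slave nodes returned here need the expensive detailed contact check, which is what makes a two-phase location strategy efficient.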
Four fluorosilica-clad, all-silica-core fibers with polyamide buffers were examined for radiation-induced transient absorption in the central cavity of the Annular Core Research Reactor. The reactor operated 24 times in the pulse mode, typically yielding gamma doses of 15 krad(Si) and neutron fluences of 1.4 {times} 10{sup 14} nts/cm{sup 2} (thermal) and 1.0 {times} 10{sup 15} nts/cm{sup 2} (fast). The two low-OH fibers absorbed 90% of the light in the 400 to 500 nm region and 30% in the 700 to 800 nm region. The high-OH fibers absorbed 20% in the 400 to 500 nm region and 50% in the 700 to 800 nm region. Saturation of the transient induced absorption was observed in all the fibers. No systematic measurements were taken of long-term induced absorption. However, excessive absorption was not a problem in any fibers, even those that received total gamma doses of 5 Mrad(Si). Scintillation in the 680 to 820 nm band was observed. This report documents the data from these experiments.
Evolution of the microstructure of Al-2wt.%Cu thin films is examined with respect to how the presence of copper can influence electromigration behavior. After an anneal that simulates a thin film sintering step, the microstructure of the Al-Cu films consisted of 1 {mu}m aluminum grains with {theta}-phase Al{sub 2}Cu precipitates at grain boundaries and triple points. The grain size and precipitate distribution did not change with subsequent heat treatments. Heat treatment of the films near the Al/Al+{theta} solvus temperature, followed by cooling to room temperature, results in depletion of copper at the aluminum grain boundaries. Heat treatments lower in the two-phase region (200 to 300{degree}C) result in enrichment of copper at the aluminum grain boundaries. Here, it is proposed that the electromigration behavior of aluminum is improved by adding copper because the copper enrichment, in the form of the Al{sub 2}Cu phase, may hinder aluminum diffusion along the grain boundaries.
The solution of Grand Challenge Problems will require computations which are too large to fit in the memories of even the largest machines. Inevitably, new designs of I/O systems will be necessary to support them. Through our implementations of an out-of-core LU factorization, we have learned several important lessons about what I/O systems should be like. In particular, we believe that the I/O system must provide the programmer with the ability to explicitly manage storage. One method of doing so is to have partitioned secondary storage in which each processor owns a logical disk. Along with operating system enhancements which allow overheads such as buffer copying to be avoided, this sort of I/O system meets the needs of high-performance computing.
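The partitioned-secondary-storage idea can be sketched as a logical disk that a processor owns and onto which it explicitly stages panels of an out-of-core matrix. The `LogicalDisk` interface, file naming, and panel layout are illustrative assumptions, not the actual system's design:

```cpp
#include <cstddef>
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// Illustrative "partitioned secondary storage": each processor owns a
// logical disk onto which it explicitly writes and reads panels of an
// out-of-core matrix, keeping storage management under programmer control.
class LogicalDisk {
public:
    explicit LogicalDisk(std::string prefix) : prefix_(std::move(prefix)) {}

    bool write_panel(int id, const std::vector<double>& panel) {
        std::FILE* f = std::fopen(name(id).c_str(), "wb");
        if (!f) return false;
        const std::size_t nw =
            std::fwrite(panel.data(), sizeof(double), panel.size(), f);
        std::fclose(f);
        return nw == panel.size();
    }

    std::vector<double> read_panel(int id, std::size_t n) {
        std::vector<double> panel(n, 0.0);
        std::FILE* f = std::fopen(name(id).c_str(), "rb");
        if (!f) { panel.clear(); return panel; }
        const std::size_t nr = std::fread(panel.data(), sizeof(double), n, f);
        std::fclose(f);
        panel.resize(nr);  // shrink if the stored panel was smaller
        return panel;
    }

private:
    std::string name(int id) const {
        return prefix_ + std::to_string(id) + ".bin";
    }
    std::string prefix_;
};
```

An out-of-core factorization would loop over panels, staging each to its owner's logical disk between elimination steps instead of relying on the operating system's hidden buffering.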
An eXplosive CHEMical kinetics code, XCHEM, was developed to solve the reactive diffusion equations associated with thermal ignition of energetic materials. This method-of-lines code uses stiff numerical methods and adaptive meshing. Solution accuracy is maintained between multilayered materials consisting of blends of reactive components and/or inert materials. Phase change and variable properties are included in one-dimensional slab, cylindrical, and spherical geometries. Temperature-dependent thermal properties are incorporated, and modifications of thermal conductivities to include decomposition effects are estimated using solid/gas volume fractions determined by species fractions. Gas transport properties are also included. Time-varying temperature, heat flux, convective and thermal radiation boundary conditions, and layer-to-layer contact resistances are also implemented. The global kinetic mechanisms developed at Lawrence Livermore National Laboratory (LLNL) by McGuire and Tarver and used to fit One-Dimensional Time to eXplosion (ODTX) data for the conventional energetic materials (HMX, RDX, TNT, and TATB) are presented as sample calculations representative of multistep chemistry. Calculated and measured ignition times for the explosive mixtures Comp B (RDX/TNT), Octol (HMX/TNT), PBX 9404 (HMX/NC), and RX-26-AF (HMX/TATB) are compared. Geometry and size effects are accurately modeled, and calculations are compared to experiments with time-varying boundary conditions. Finally, XCHEM calculations of the initiation of a resistively heated AN/oil/water emulsion are compared to measurements.
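The core thermal-ignition calculation can be illustrated in zero dimensions: an adiabatic energy balance with one-step Arrhenius kinetics, integrated explicitly until thermal runaway. All parameter values below are illustrative, not fitted ODTX kinetics, and the sketch omits the diffusion, multistep chemistry, phase change, and variable properties that XCHEM treats:

```cpp
#include <cmath>

// Zero-dimensional thermal ignition sketch: adiabatic energy balance
//   c * dT/dt = q * Z * exp(-E / (R * T))
// integrated with forward Euler until T reaches a runaway threshold.
// Returns the elapsed time to runaway, or -1 if the time budget expires.
double ignition_time(double T0,         // initial temperature, K
                     double q,          // heat of reaction per unit mass
                     double Z,          // Arrhenius pre-exponential, 1/s
                     double E,          // activation energy, J/mol
                     double c,          // specific heat
                     double dt,         // time step, s
                     double T_runaway)  // runaway threshold, K
{
    const double R = 8.314;  // gas constant, J/(mol K)
    double T = T0, t = 0.0;
    while (T < T_runaway) {
        T += dt * (q * Z / c) * std::exp(-E / (R * T));
        t += dt;
        if (t > 1.0e9) return -1.0;  // no ignition within the time budget
    }
    return t;
}
```

Even this crude model reproduces the qualitative ODTX trend: a hotter initial temperature shortens the induction time sharply because of the exponential Arrhenius rate.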
Diamond films were deposited on tungsten substrates by a filament-assisted chemical vapor deposition process as a function of seven different processing parameters. The effect of variations in measured film characteristics, such as growth rate, texture, diamond-to-nondiamond carbon Raman band intensity ratio, and strain, on the adhesion between the diamond film/tungsten substrate pairs, as measured by a tensile pull method, was investigated. The measured adhesion values do not correlate with any of the measured film characteristics mentioned above. The problem arises from the non-reproducibility of the adhesion test results, which is due to the non-uniformity of film thickness, surface preparation, and structural homogeneity across the full area of the substrate.
The porosity of sol-gel thin films may be tailored for specific applications through control of the size and structure of inorganic polymers within the coating sol, the extent of polymer reaction and interpenetration during film formation, and the magnitude of the capillary pressure exerted during the final stage of drying. By maximizing the capillary pressure and avoiding excessive condensation, dense insulating films may be prepared as passivation layers on silicon substrates. Such films can exhibit excellent dielectric integrity, viz., low interface trap densities and insulating properties approaching those of thermally grown SiO{sub 2}. Alternatively, through exploitation of the scaling relationship of mass and density of fractal objects, silica films can be prepared that show a variation in porosity (7--29 %) and refractive index (1.42--1.31) desired for applications in sensors, membranes, and photonics.
Parametric calculations are performed, using the SAFSIM computer program, to investigate the fluid mechanics and heat transfer performance of a particle bed fuel element. Both steady-state and transient calculations are included, addressing such issues as flow stability, reduced thrust operation, transpiration drag, coolant conductivity enhancement, flow maldistributions, decay heat removal, flow perturbations, and pulse cooling. The calculations demonstrate the dependence of the predicted results on the modeling assumptions and thus provide guidance as to where further experimental and computational investigations are needed. The calculations also demonstrate that both flow instability and flow maldistribution in the fuel element are important phenomena. Furthermore, results are encouraging that geometric design changes to the element can significantly reduce problems related to these phenomena, allowing improved performance over a wide range of element power densities and flow rates. Such design changes will help to maximize the operational efficiency of space propulsion reactors employing particle bed fuel element technology. Finally, the results demonstrate that SAFSIM is a valuable engineering tool for performing quick and inexpensive parametric simulations addressing complex flow problems.
Ten MC4073/4369 programmer base plates were analyzed. This component, a programmer base plate for the SRAM II (and later the SRAM A), is specified as a Grade C quality casting made of aluminum alloy A356, heat treated to the T6 condition. A concern was expressed regarding the choice of an A356 casting for this application, given the complexity and severity of the loading environment. Preliminary tests and analyses suggested that the design was adequate, but noted the uncertainty involved in a number of their underlying assumptions. The uncertainty was compounded by the discovery that the casting used in the original series of mechanical tests had failed. In this investigation, several production castings were examined and found to be of a quality superior to that required under current specifications. Their defect content and microstructure were studied and compared with published data to establish a mechanical property data base. The data base was supplemented with a series of X-direction static tests, which characterized the loading environment and measured the overall casting performance. It was found that the mechanical properties of the supplied castings were adequate for the anticipated X-direction loading environment, but the component is not over-designed. The established data base further indicates that a reduction in casting quality to the allowable level could result in failure of the component. Recommendations were made, including: (1) change the component specification to require higher casting quality in highly stressed areas, (2) supplement the inspection procedures to ensure adequate quality in critical regions, (3) alter the component design to reduce the stress levels in the mounting feet, (4) substitute a modified A356 alloy to improve the mechanical properties and their consistency, and (5) more thoroughly establish a data base for the mechanical property consequences of various levels and configurations of casting defects.
This document is the Maintenance Manual for the Beneficial Uses Shipping System (BUSS) cask. These instructions address requirements for maintenance, inspection, testing, and repair, supplementing general information found in the BUSS Safety Analysis Report for Packaging (SARP), SAND 83-0698. Use of the BUSS cask is authorized by the Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC) for the shipment of special form cesium chloride or strontium fluoride capsules.
CAMCON, the Compliance Assessment Methodology CONtroller, is an analysis system that assists in assessing the compliance of the Waste Isolation Pilot Plant (WIPP) with applicable long-term regulations of the US Environmental Protection Agency, including Subpart B of the Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191), and 40 CFR 268.6, the portion of the Land Disposal Restrictions implementing the Resource Conservation and Recovery Act of 1976, as amended, that states the conditions for disposal of hazardous chemical wastes. This manual provides an architectural overview of the CAMCON system. Furthermore, this manual presents guidelines and suggestions for programmers developing the many different types of software necessary to investigate various events and physical processes of the WIPP. These guidelines include user interface requirements, minimum quality assurance requirements, coding style suggestions, and the use of numerous software libraries developed specifically for or adapted for the CAMCON system.
This volume contains a description of the codes and input/output files used to perform the LaSalle Level II/III Probabilistic Risk Assessment. A chart showing the process flow is presented and the relationship between the codes and the needed input and output data is discussed. Code listings for codes not documented elsewhere and complete or sample listings of the input and output files are also presented.
This report contains an initial definition of the field tests proposed for the Yucca Mountain Project repository sealing program. The tests are intended to resolve various performance and emplacement concerns. Examples of concerns to be addressed include achieving selected hydrologic and structural requirements for seals, removing portions of the shaft liner, excavating keyways, emplacing cementitious and earthen seals, reducing the impact of fines on the hydraulic conductivity of fractures, efficient grouting of fracture zones, sealing of exploratory boreholes, and controlling the flow of water by using engineered designs. Ten discrete tests are proposed to address these and other concerns. These tests are divided into two groups: seal component tests and performance confirmation tests. The seal component tests are the small-scale in situ tests, the intermediate-scale borehole seal tests, the fracture grouting tests, the surface backfill tests, and the grouted rock mass tests. The seal system tests are the seepage control tests, the backfill tests, the bulkhead test in the Calico Hills unit, the large-scale shaft seal and shaft fill tests, and the remote borehole sealing tests. The tests are proposed to be performed in six discrete areas, including welded and non-welded environments, primarily located outside the potential repository area. The final selection of sealing tests will depend on the nature of the geologic and hydrologic conditions encountered during the development of the Exploratory Studies Facility and detailed numerical analyses. Tests are likely to be performed both before and after License Application.
In July 1990 the Institute of Nuclear Materials Management (INMM) ``International Safeguards Subcommittee``, an arm of the INMM ``Safeguards Committee``, held its first meeting, which was devoted principally to organizational matters. The goal of this organization is to promote international safeguards as a major tool of non-proliferation policies. Within the framework of the INMM, it has the responsibility to provide a forum for exchange of information related to further development of selected aspects of international safeguards, and to enhance a broader understanding of these topics. A second meeting of this subcommittee was held at the 1991 INMM Annual Meeting. In November 1991, the INMM reorganized into divisions, with the establishment of the International Safeguards and Non-Proliferation Division (IS NP). Since November 1991, the IS NP Division has met twice, once in Europe and once in the USA. In October 1992, further reorganization of the INMM led to establishment of the International Safeguards Division (ISD) which, under the new designation, has met twice, once in Europe and once in Japan. This paper presents the purpose, objectives, results of past meetings, and future plans of the ISD.
Training new users of complex machines is often an expensive and time-consuming process. This is particularly true for special purpose systems, such as those frequently encountered in DOE applications. This paper presents a computer-based training system intended as a partial solution to this problem. The system extends the basic virtual reality (VR) training paradigm by adding a multimedia component which may be accessed during interaction with the virtual environment: The 3D model used to create the virtual reality is also used as the primary navigation tool through the associated multimedia. This method exploits the natural mapping between a virtual world and the real world that it represents to provide a more intuitive way for the student to interact with all forms of information about the system.
This paper describes a current research program at Sandia National Laboratories whereby magnetic stripes made from very high coercivity magnetic materials are produced through the use of a new particle rotation technology. This new process allows the stripes to be produced in bulk and then held in a latent state so that they may be encoded at a later date. Since particle rotation is less dependent on the type of magnetic particle used, very high coercivity particles could provide a way to increase both magnetic tamper-resistance and accidental erasure protection of the magnetic stripes.
SPR-IIIM is a modernized, improved version of SPR-III. The new system is expected to improve overall reliability and performance while reducing personnel dose and maintenance frequency. A description of the SPR-IIIM reactor and its features is presented in this paper along with plans for characterizing the reactor's operational and safety characteristics. Enhancements of SPR-IIIM include a larger central irradiation cavity (7.5 in. ID), a self-aligning safety block, spring-loaded fuel clamping, forced flow cooling across fuel plate gaps, and larger-diameter hollow shafts with precision spline bearings that support the reflector control elements.
One of the drivers in the dismantlement and disposal of nuclear weapon components is Environmental Protection Agency (EPA) guidelines. The primary regulatory driver for these components is the Resource Conservation and Recovery Act (RCRA). Nuclear weapon components are heterogeneous and contain a number of hazardous materials, including heavy metals, PCBs, self-contained explosives, radioactive materials, gas-filled tubes, etc. The Waste Component Recycle, Treatment, Disposal and Integrated Demonstration (WeDID) is a Department of Energy (DOE) Environmental Restoration and Waste Management (ERWM) sponsored program. It also supports DOE Defense Program (DP) dismantlement activities. The goal of WeDID is to demonstrate the end-to-end disposal process for Sandia National Laboratories designed nuclear weapon components. One of the primary objectives of WeDID is to develop and demonstrate advanced system treatment technologies that will allow DOE to continue dismantlement and disposal unhindered even as environmental regulations become more stringent. WeDID is also demonstrating waste minimization techniques by recycling a significant weight percentage of the bulk/precious metals found in weapon components and by destroying the organic materials typically found in these components. WeDID is concentrating on demonstrating technologies that are regulatory compliant, cost effective, technologically robust, and near-term to ensure the support of DOE dismantlement time lines. The waste minimization technologies being demonstrated by WeDID are cross-cutting and should be able to support a number of ERWM programs.
Interactive Collaborative Environments (ICE) technologies allow teams at separate locations to work concurrently on joint problem solving. Examples of ICE use include engineers simultaneously viewing and manipulating the same CAD application to discuss design/production changes and trade-offs. This concept was demonstrated in March of 1992 between AT&T Shreveport Works and Holmdel. In May 1992, Sandia National Laboratories demonstrated a platform-independent version of application-sharing code using the workstations and application software available at AT&T Shreveport Works. AT&T and Sandia are currently negotiating future work agreements. In addition, Sandia has provided demonstrations and created pilot project links for internal Sandia use and for communication with other facilities, e.g., Los Alamos National Laboratory and Sandia's California location. ICE can also be used to link up suppliers and customers, even in different companies. Anywhere team members are separated geographically, or even between buildings and facilities at a particular site, ICE can improve remote problem solving, cutting down on delays and miscommunication fiascoes.
The combination of an energy dispersive x-ray spectrometer (EDS) with the ultrahigh vacuum environment of many modern electron microscopes requires the spectrometer designer to take extra precautions and presents the microscopist with the additional option of utilizing windowless spectrometers for light element detection without worrying about contamination of the detector. UHV is generally defined as a pressure of better than 10{sup {minus}7} Pa and is necessary to prevent specimen modification by the components of the vacuum. UHV may also be defined as an environment in which the time to form a monolayer on the specimen is equal to or longer than the usual time for a laboratory measurement. This report examines the performance of energy dispersive x-ray spectrometers in UHV.
The primary difficulty of computing the vibration of spinning inflated membranes arises from the low natural frequencies of such systems. When such systems are rotated near their own natural frequencies, the dynamics equations must account for higher order kinematics than is necessary for more rigid structures. These complications result from the membrane loads that develop within the bodies in reaction to the accelerations of the overall body. When second order kinematics act against these membrane loads, the resulting energies become of the same order as the potential and kinetic energies of the vibrations that would be calculated by first order kinematics. These complications apply to the problem addressed here. We consider a spin-stabilized, inflated membrane, spinning around its minor axis. This structure is very flexible and somewhat viscoelastic, so vibrations excited by the overall motion of the structure dissipate energy, reducing the kinetic energy of the system. A reduction in kinetic energy consistent with conservation of angular momentum results in coning and, eventually, tumbling. We must therefore address both the excitation of vibration by the rigid-body motion and the retarding effect of the energy dissipation on that motion.
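The coning mechanism invoked above follows from the standard energy-sink argument for a quasi-rigid axisymmetric body (a sketch only; the symbols are generic, not the report's notation). With spin-axis moment of inertia $I_a$, transverse moment $I_t$, conserved angular momentum $H$, and nutation (coning) angle $\theta$:

```latex
T = \frac{H^2}{2}\left(\frac{\cos^2\theta}{I_a} + \frac{\sin^2\theta}{I_t}\right),
\qquad
\frac{dT}{d\theta} = \frac{H^2}{2}\,\sin 2\theta \left(\frac{1}{I_t} - \frac{1}{I_a}\right).
```

When the spin axis is the axis of least inertia ($I_a < I_t$), $dT/d\theta < 0$, so any dissipation of kinetic energy at constant $H$ forces $\theta$ to grow: the spin cones outward and the body ultimately tumbles into rotation about the axis of maximum inertia.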
The properties of vertical-cavity surface-emitting lasers (VCSELs) and VCSEL-based optical switches using MOCVD-grown epitaxial material are discussed, and their performance is summarized.
Parallel computing offers new capabilities for using molecular dynamics (MD) to simulate larger numbers of atoms and longer time scales. In this paper we discuss two methods we have used to implement the embedded atom method (EAM) formalism for molecular dynamics on multiple-instruction/multiple-data (MIMD) parallel computers. The first method (atom-decomposition) is simple and suitable for small numbers of atoms. The second method (force-decomposition) is new and is particularly appropriate for the EAM because all the computations are between pairs of atoms. Both methods have the advantage of not requiring any geometric information about the physical domain being simulated. We present timing results for the two parallel methods on a benchmark EAM problem and briefly indicate how the methods can be used in other kinds of materials MD simulations.
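The two decompositions can be illustrated with a toy pairwise-force calculation (a sketch only: serial Python stands in for MIMD processors, and a simple power-law pair force stands in for the EAM terms; none of the names below come from the paper). In a force decomposition, each processor owns one block of the N x N force matrix and computes only its block's pair interactions; reducing the blocks reproduces the direct calculation, and no geometric domain information is needed.

```python
import numpy as np

def pair_force(ri, rj):
    # Simple power-law pair force as a stand-in for EAM pair terms.
    d = rj - ri
    r2 = np.dot(d, d)
    return d / r2**2  # force on atom i due to atom j

def forces_direct(x):
    # Reference O(N^2) computation on one processor.
    n = len(x)
    f = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i != j:
                f[i] += pair_force(x[i], x[j])
    return f

def forces_decomposed(x, p_rows, p_cols):
    # Each (row-block, column-block) pair is one "processor's" share
    # of the force matrix; partial sums would be reduced across
    # processors in a real MIMD implementation.
    n = len(x)
    f = np.zeros_like(x)
    rows = np.array_split(np.arange(n), p_rows)
    cols = np.array_split(np.arange(n), p_cols)
    for ib in rows:
        for jb in cols:          # one "processor" per block
            for i in ib:
                for j in jb:
                    if i != j:
                        f[i] += pair_force(x[i], x[j])
    return f

rng = np.random.default_rng(0)
x = rng.random((12, 3))
assert np.allclose(forces_direct(x), forces_decomposed(x, 3, 2))
```

The block partition never references atom coordinates, which is the property the abstract highlights: no geometric information about the simulated domain is required.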
Solutions from a Parabolized Navier-Stokes (PNS) code with an algebraic turbulence model are compared with wall functions. The wall functions represent the turbulent flow profiles in the viscous sublayer, thus removing many grid points from the solution procedure. The wall functions are intended to replace the computed profiles between the body surface and a match point in the logarithmic region. A supersonic adiabatic flow case was examined first. This adiabatic case indicates close agreement between computed velocity profiles near the wall and the wall function for a limited range of suitable match points in the logarithmic region. In an attempt to improve marching stability, a laminar to turbulent transition routine was implemented at the start of the PNS code. Implementing the wall function with the transitional routine in the PNS code is expected to reduce computational time while maintaining good accuracy in computed skin friction.
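As a concrete illustration of the wall-function idea, a generic incompressible log law can replace the computed profile between the wall and a match point in the logarithmic region (a sketch under assumed constants kappa = 0.41, B = 5.0; the compressible wall functions used with the PNS code are more involved). Given the velocity U at match height y, the friction velocity follows from Newton iteration on the log law:

```python
import math

KAPPA, B = 0.41, 5.0  # assumed log-law constants

def friction_velocity(U, y, nu, u_tau=0.05, tol=1e-10):
    """Solve U/u_tau = ln(y*u_tau/nu)/kappa + B for u_tau by Newton iteration."""
    for _ in range(100):
        yp = y * u_tau / nu                     # y+ at the match point
        f = U / u_tau - (math.log(yp) / KAPPA + B)
        # derivative of the residual with respect to u_tau
        df = -U / u_tau**2 - 1.0 / (KAPPA * u_tau)
        step = f / df
        u_tau -= step
        if abs(step) < tol:
            break
    return u_tau

# Illustrative values (not from the paper): air-like viscosity, 1 mm match height.
u_tau = friction_velocity(U=10.0, y=1e-3, nu=1.5e-5)
yp = 1e-3 * u_tau / 1.5e-5
assert abs(10.0 / u_tau - (math.log(yp) / KAPPA + B)) < 1e-8
```

The grid points removed from the solution procedure are exactly those below the match height; the wall function supplies the profile (and hence skin friction) analytically in that region.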
CRESLAF is a Fortran program that predicts the velocity, temperature, and species profiles in two-dimensional (planar or axisymmetric) channels. The program accounts for finite-rate gas-phase and surface chemical kinetics and molecular transport. The model employs the boundary-layer approximations for the fluid-flow equations, coupled to gas-phase and surface species continuity equations. The program runs in conjunction with the Chemkin preprocessors for the gas-phase and surface chemical reaction mechanisms and the transport properties. This report presents the equations defining the model, the method of solution, the input parameters to the program, and a sample problem illustrating its use. Applications of CRESLAF include chemical vapor deposition (CVD) reactors, heterogeneous catalysis on reactor walls, and corrosion processes.
Biometric identity research and development activities are being conducted in universities, government, and private industry. This paper discusses some of the factors that limit the performance of biometric identity devices, looks at some new developments, and speculates on future developments.
Performance projections based on the analytical model of a scannerless laser radar system are compared to laboratory simulations and to field data measurements. Data and characteristics of the system, including camera response, image spatial resolution, and contributions to the signal-to-noise ratio are presented. A discussion of range resolution for this system will also be presented, and finally, the performance characteristics of the prototype benchtop system will be summarized.
In an effort to remain regulatory compliant, it is becoming increasingly important to locate resources that can provide up-to-date environmental regulations and regulatory interpretations. There are many resources available to provide information and training in these areas.
Remote systems are needed to accomplish many tasks, such as the clean up of waste sites, in which the exposure of personnel to radiation, chemical, explosive, and other hazardous constituents is unacceptable. In addition, hazardous operations which in the past have been completed by technicians are under increased scrutiny due to the high costs and low productivity associated with providing protective clothing and environments. Traditional remote operations have, unfortunately, proven to have very low productivity when compared with unencumbered human operators. However, recent advances in the integration of sensors and computing into the control of conventional remotely operated industrial equipment have shown great promise for providing systems capable of solving difficult problems.
This work concerns preparing tailored porous carbon monoliths by pyrolyzing porous polymer precursors. Prior work in this laboratory (1) demonstrated that a low density (0.05 g/cm{sup 3}), high void fraction (97 vol%) carbon monolith could be prepared by pyrolyzing a porous poly(acrylonitrile) (PAN) precursor. A higher density, more robust carbon material is preferred for certain applications, such as electrodes for electrochemical devices. The present work demonstrates that porous carbon monoliths having mass density of 0.7 g/cm{sup 3} can be prepared from a porous PAN precursor if the pyrolysis is controlled carefully. The macropore structure of the carbon is adjusted by changing the pore structure of the PAN precursor, and the finer scale structure (such as the crystallite size L{sub c}) is adjusted by varying the pyrolysis or heat treatment temperature.
The Arrhenius approach assumes a linear relation between the log of time to a material property change and inverse absolute temperature. For elastomers, ultimate tensile elongation results are often used to confirm Arrhenius behavior, even though the ultimate tensile strength is non-Arrhenius. This paper critically examines the Arrhenius approach. Elongation versus air-oven aging temperature for a nitrile rubber gives an E{sub a} of 22 kcal/mol; however, this does not hold for the tensile strength, indicating degradation that does not follow Arrhenius behavior. Modulus profiling shows heterogeneity at the earliest times at 125 C, caused by diffusion-limited oxidation (DLO). Tensile strength depends on the force at break integrated over the cross section, and nitrile rubbers aged at different temperatures experience different degrees of degradation in the interior. Modulus at the surface, however, is not affected by DLO anomalies. Severe mechanical degradation will occur when the edge modulus increases by an order of magnitude. 7 figs, 3 refs.
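The practical use of the Arrhenius relation is to extrapolate accelerated-aging times to service temperature. A minimal sketch using the abstract's E{sub a} = 22 kcal/mol for nitrile-rubber elongation (the aging and service temperatures below are illustrative assumptions, not the paper's):

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol K)
E_A = 22.0     # activation energy from the abstract, kcal/mol

def shift_factor(t_aging_c, t_service_c):
    """Arrhenius factor by which time-to-equivalent-degradation lengthens
    when moving from the accelerated-aging temperature to the service
    temperature, a = exp[(E_a/R)(1/T_service - 1/T_aging]."""
    Ta = t_aging_c + 273.15
    Ts = t_service_c + 273.15
    return math.exp(E_A / R * (1.0 / Ts - 1.0 / Ta))

a = shift_factor(125.0, 25.0)   # oven aging at 125 C, service at 25 C
assert a > 1.0                  # degradation is slower at the lower temperature
```

The abstract's warning applies directly to such extrapolations: if DLO makes the high-temperature data non-Arrhenius (as it does for tensile strength here), the extrapolated factor is unreliable.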
Critical information required for Environment, Safety, and Health (ES&H) protection can be acquired with a comprehensive cradle-to-grave tracking and information system. The cradle-to-grave concept makes two initial assumptions. First, it is more effective to gather information at the origination of a process or entry point of a material and maintain that information during the rest of its life-cycle than to collect data on an ad hoc basis. Second, the information needs of the various ES&H programs have many commonalities. A system which adheres to a methodology based upon these assumptions requires a significant technical and administrative commitment; however, this investment will, in the long term, reduce the effort and duplication of ES&H programs, improve the efficiency of ES&H and line personnel, and increase the scope and accuracy of ES&H data. The cradle-to-grave system being developed at Sandia National Laboratories (SNL) is designed to provide useful information on materials, personnel, facilities, hazards, wastes, and processes to fulfill the mission of pollution prevention, risk management, industrial hygiene, emergency preparedness, air/water quality, and hazardous and radioactive waste management groups. SNL is currently linking system modules, which are at various stages of development and production, to realize a cradle-to-grave tracking and information system that is functional for a large research and development laboratory.
A very brief description of two ``classes`` developed for use in design optimization and sensitivity analyses is given. These classes are used in simulations of systems in early design phases as well as in system response assessments. The instantiated classes were coupled to system models to demonstrate the practicality and efficiency of using these objects in complex robust design processes.
The Becker-Kistiakowsky-Wilson equation of state (BKW-EOS) has been calibrated over a wide initial density range near C-J states using measured detonation properties from 62 explosives at 111 total initial densities. Values for the empirical BKW constants {alpha}, {beta}, {kappa}, and {theta} were 0.5, 0.298, 10.5, and 6620, respectively. Covolumes were assumed to be invariant. Model evaluation includes comparison to measurements from 91 explosives composed of combinations of Al, B, Ba, C, Ca, Cl, F, H, N, O, P, Pb, and Si at 147 total initial densities. Adequate agreement between predictions and measurements was obtained, with a few exceptions for nonideal explosives. However, detonation properties for the nonideal explosives can be predicted adequately by assuming partial equilibrium. The partial equilibrium assumption was applied to aluminized composites of RDX, HMX, TNETB, and TNT to predict detonation velocity and temperature.
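For reference, the calibrated constants quoted above enter the standard published form of the BKW-EOS (shown here for orientation; the abstract itself does not spell it out):

```latex
\frac{pV}{RT} = 1 + X e^{\beta X},
\qquad
X = \frac{\kappa \sum_i x_i k_i}{V\,(T + \theta)^{\alpha}}
```

where $V$ is the molar volume of the detonation products and $x_i$ and $k_i$ are the mole fraction and covolume of product species $i$ (the covolumes being the quantities held invariant in this calibration).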
The multifrequency, multisource holographic method used in the analysis of seismic data is extended to electromagnetic (EM) data within the audio frequency range. The method is applied to the secondary magnetic fields produced by a borehole, vertical electric source (VES). The holographic method is a numerical reconstruction procedure based on the double focusing principle for both the source array and the receiver array. The approach used here is to Fourier transform the constructed image from frequency space to time space and set time equal to zero. The image is formed when the in-phase part (real part) is a maximum or the out-of-phase part (imaginary part) is a minimum; i.e., the EM wave is phase coherent at its origination. In the application here, the secondary magnetic fields are treated as scattered fields. In the numerical reconstruction, the seismic analog of the wave vector is used; i.e., the imaginary part of the actual wave vector is ignored. The multifrequency, multisource holographic method is applied to calculated model data and to actual field data acquired to map a diesel fuel oil spill.
An essential requirement for both Vertical Seismic Profiling (VSP) and Cross-Hole Seismic Profiling (CHSP) is the rapid acquisition of high resolution borehole seismic data. Additionally, full wave-field recording using three-component receivers enables the use of both transmitted and reflected elastic wave events in the resulting seismic images of the subsurface. To this end, an advanced three-component multi-station borehole seismic receiver system has been designed and developed by Sandia National Laboratories (SNL) and OYO Geospace. The system acquires data from multiple three-component wall-locking accelerometer packages and telemeters digital data to the surface in real time. Due to the multiplicity of measurement stations and the real-time data link, acquisition time for the borehole seismic survey is significantly reduced. The system was tested at the Chevron La Habra Test Site using Chevron`s clamped axial borehole vibrator as the seismic source. Several source and receiver fans were acquired using a four-station version of the advanced system. For comparison purposes, an equivalent data set was acquired using a standard analog wall-locking geophone receiver. The test data indicate several enhancements provided by the multi-station receiver relative to the standard: drastically improved signal-to-noise ratio, increased signal bandwidth, the detection of multiple reflectors, and a true 4:1 reduction in survey time.
The introduction of rapid prototyping machines into the marketplace promises to revolutionize the process of producing prototype parts with production-like quality. In the age of concurrent engineering and agile manufacturing, it is necessary to exploit applicable new technologies as soon as they become available. The driving force behind integrating these evolutionary processes into the design and manufacture of prototype parts is the need to reduce lead times and fabrication costs, improve efficiency, and increase flexibility without sacrificing quality. Sandia utilizes stereolithography and selective laser sintering capabilities to support internal design and manufacturing efforts. Stereolithography (SLA) is used in the design iteration process to produce proof-of-concept models, hands-on models for design reviews, fit check models, visual aids for manufacturing, and functional parts in assemblies. Selective laser sintering (SLS) is used to produce wax patterns for the lost wax process of investment casting in support of an internal Sandia National Laboratories program called FASTCAST, which integrates experimental and computational technologies into the investment casting process. This presentation will provide a brief overview of the SLA and SLS processes and address our experiences with these technologies from the standpoints of application, accuracy, surface finish, and feature definition. Also presented will be several examples of prototype parts manufactured by the stereolithography and selective laser sintering rapid prototyping machines.
Three-dimensional finite element analyses of gas-filled storage caverns in domal salt were performed to investigate the effects of cavern spacing on surface subsidence, storage loss, and cavern stability. The finite element model used for this study models a seven cavern storage field with one center cavern and six hexagonally spaced surrounding caverns. Cavern spacing is described in terms of the P/D ratio which is the pillar thickness (the width between two caverns) divided by the cavern diameter. With the stratigraphy and cavern size held constant, simulations were performed for P/D ratios of 6.0, 3.0, 2.0, 1.0, and 0.5. Ten year simulations were performed modeling a constant 400 psi gas pressure applied to the cavern lining. The calculations were performed using JAC3D, a three dimensional finite element analysis code for nonlinear quasistatic solids. For the range of P/D ratios studied, cavern deformation and storage volume were relatively insensitive to P/D ratio, while subsidence volume increased with increasing P/D ratio. A stability criterion which describes stability in terms of a limiting creep strain was used to investigate cavern stability. The stability criterion indicated that through-pillar instability was possible for the cases of P/D = 0.5 and 1.0.
One of the proposed applications of the satellite-based Global Verification and Location System (GVLS) is the Authenticated Tracking and Monitoring System (ATMS). When fully developed, ATMS will provide the capability to monitor, in a secure and authenticated fashion, the status and global tracking of selected items while in transit, in particular proliferation-sensitive items. The resulting tracking, timing, and status information can then be processed and utilized to assure compliance with, for example, various treaties. Selected items to be monitored could include, but are not limited to, Treaty Limited Items (TLIs), such as nuclear weapon components, Re-entry Vehicles (RVs), weapon delivery and launch systems, chemical and biological agents, Special Nuclear Material (SNM), and related nuclear weapons manufacturing equipment. The ATMS has potential applications in the areas of arms control, disarmament and non-proliferation treaty verification, and military asset control, as well as International Atomic Energy Agency (IAEA) and Euratom safeguards monitoring activities. The concept presented here is mainly focused on a monitoring technology for proliferation-sensitive items. It should, however, be noted that the system's potential applications are numerous and broad in scope, and could easily be applied to other types of monitoring activities as well.
This bulletin presents state-of-the-art testing technology utilized at Sandia National Laboratories. A hand-held NiCad battery tester automatically checks batteries of individual cells. Modal analysis shows the way to better process control for integrated circuit lithography. An ultrasonic system pings reentry vehicles to measure in-flight ablation. A smaller VISAR shines in detonator tests. Higher image quality is achieved at the neutron radiography facility with the use of a neutron collimator.
Over the past several years the Information Technology Department at Sandia Laboratories has developed information systems based on a solid foundation of information modeling and data administration. The output of the information modeling efforts is a fifth normal form relational table structure and associated data constraints. Developers would then implement the system by creating end-user application software. Traditionally, the development process combined the code necessary for maintaining data constraints with the code to provide the user interface (i.e. forms, windows, etc.). This approach has an adverse effect on the maintainability of the software as the system (i.e. the information model) changes over time. This paper will discuss the application of a direct connection between the information model and the implementation of a database with associated code to maintain required data constraints. The automated generation of this code allows the developers to concentrate on the user interface code development. The technique involves generating database procedure code automatically from the information modeling process. The database procedure code will enforce the data constraints defined in the information model. This has resulted in a fully functional database with complete rules enforcement within days of a completed information model. This work used the Knowledge Management Extensions of the Ingres database software. Changes to the architecture of both Application By Forms (ABF) and Ingres Windows4GL client applications required by this process will also be discussed.
Leonard, J.A.; Floyd, H.L.; Goetsch, B.; Doran, L.
This bulletin describes innovative manufacturing technologies being developed at Sandia National Laboratories. Topics in this issue include: new techniques to overcome barriers to large-scale fabrication of vertical cavity surface-emitting lasers (VCSELs), variability reduction in plasma etching of microcircuits, using neural networks to evaluate the effectiveness of flux-cleaning methods and alternative fluxes for printed circuit boards, ion implantation to increase the strength and wear resistance of aluminum, and a collaborative project to improve processing of thin-section welded assemblies.
A technique to integrate a dense, locally non-uniform mesh into finite-difference time-domain (FDTD) codes is presented. The method is designed for the full-wave analysis of multi-material layers that are physically thin, but perhaps electrically thick. Such layers are often used for the purpose of suppressing electromagnetic reflections from conducting surfaces. Throughout the non-uniform local mesh, average values for the conductivity and permittivity are used, whereas variations in permeability are accommodated by splitting H-field line integrals and enforcing continuity of the normal B field. A unique interpolation scheme provides accuracy and late-time stability for mesh discontinuities as large as 1000 to 1. Application is made to resistive sheets, the absorbing Salisbury screen, crosstalk on printed circuit boards, and apertures that are narrow both in width and depth with regard to a uniform cell. Where appropriate, comparisons are made with the MoM code CARLOS and transmission-line theory. The hybrid mesh formulation has been highly optimized for both vector and parallel processing on Cray YMP architectures.
Aerodynamic force and moment measurements and flow visualization results are presented for a hypersonic vehicle configuration at Mach 8. The basic vehicle configuration is a spherically blunted 10{degree} half-angle cone with a slice parallel with the axis of the vehicle. On the slice portion of the vehicle, a flap could be attached so that deflection angles of 10{degree}, 20{degree} and 30{degree} could be obtained. All of the experimental results were obtained in the Sandia Mach 8 hypersonic wind tunnel for laminar boundary layer conditions. Flow visualization results include shear stress sensitive liquid crystal photographs, surface streak flow photographs (using liquid crystals), and spark schlieren photographs and video. The liquid crystals were used as an aid in verifying that a laminar boundary layer existed over the entire body. The surface flow photographs show attached and separated flow on both the leeside of the vehicle and near the flap. A detailed uncertainty analysis was conducted to estimate the contributors to body force and moment measurement uncertainty. Comparisons are made with computational results to evaluate both the experimental and numerical results. This extensive set of high-quality experimental force and moment measurements is recommended for use in the calibration and validation of relevant computational aerodynamics codes.
Contact resistances of greater than 40 milliohms have been associated with hermetic connectors and lightning arrestor connectors (LAC) during routine testing. Empirical analysis demonstrated that the platings could be damaged within several mating cycles. The oxides that formed upon the exposed copper alloy had no significant impact upon contact resistance when the mated contacts were stationary, but effectively disrupted continuity when the mating interfaces were translated. The stiffness of the pin contact was determined to be about five times greater than the socket contact. As the pin contact engages the socket, therefore, the socket spring member deflects and the pin does not deflect. Hence, the pin contact could easily remain centered within the socket cavity in a mated condition, contacting the hemispherical spring at a localized point. Thus the only avenue for electrical conduction is between two contacting curved surfaces: the pin surface and the socket contact dimple surface. This scenario, coupled with the presence of corrosion products at the contacting interface, presents the opportunity for high contact resistances.
The author reports experimental measurements for the argon and oxygen permeability coefficients for the new EPDM material (SR793B-80) used for the environmental o-ring seals of the W88. The results allow the author to refine the argon gas analysis modeling predictions for W88 surveillance units. By comparing early surveillance results (up to four years in the field) with the modeling, the author shows that (1) up to this point in time, leakage past the seals is insignificant and (2) the argon approach should be able to inexpensively and easily monitor both integrated lifetime water leakage and the onset of any aging problems. Finally, the author provides a number of pieces of evidence indicating that aging of the SR793B-80 material will not be significant during the expected lifetime of the W88.
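The kind of seal-leakage modeling the abstract refers to rests on a standard steady-state permeation relation. The permeability value, seal dimensions, and pressure difference below are placeholders for illustration, not the measured SR793B-80 coefficients:

```python
# Generic steady-state permeation estimate (Fick/Henry form). The numbers
# are placeholders, not the measured SR793B-80 data.
def permeation_rate(K, area_cm2, dp_atm, thickness_cm):
    """Gas flow through an elastomer in cc(STP)/s, for permeability K
    in cc(STP)*cm / (cm^2 * s * atm)."""
    return K * area_cm2 * dp_atm / thickness_cm

# Argon's partial pressure in air is roughly 0.009 atm; assume a 2 cm^2
# exposed seal area and a 0.2 cm permeation path.
q = permeation_rate(K=1.0e-8, area_cm2=2.0, dp_atm=0.009, thickness_cm=0.2)

# Integrating over a multi-year field life gives the cumulative gas
# transfer that a surveillance measurement would see.
total_cc = q * 4 * 365.25 * 24 * 3600  # four years, as in the early data
```

Comparing the measured argon accumulation against this predicted permeation baseline is what allows excess leakage past the seals to be detected.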
The image blur in a photograph is produced by the exposure of a moving object. Knowing the amount of image blur is important for recording useful data. If there is too much blur, it becomes hard to make quantitative measurements. This report discusses image blur, the parameters used to control it, and how to calculate it.
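In its simplest form, the blur calculation reduces to the product of object speed, exposure time, and magnification. The numbers below are an illustrative example, not the report's worked case:

```python
# Motion blur at the image plane: a point on an object moving at speed v
# translates v * t * m during an exposure of length t at magnification m.
def image_blur(speed_m_per_s, exposure_s, magnification):
    """Blur streak length at the film/sensor plane, in meters."""
    return speed_m_per_s * exposure_s * magnification

# Example: a 100 m/s object, 1 ms exposure, 0.01x magnification.
blur = image_blur(100.0, 1e-3, 0.01)
print(f"{blur * 1e3:.2f} mm")  # prints "1.00 mm"
```

Holding the acceptable blur fixed, the same relation can be inverted to find the exposure time a given object speed permits.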
This report summarizes the purchasing and transportation activities of the Purchasing and Materials Management Organization for Fiscal Year 1992. Activities for both the New Mexico and California locations are included. Topics covered in this report include highlights for fiscal year 1992, personnel, procurements (small business procurements, disadvantaged business procurements, woman-owned business procurements, New Mexico commercial business procurements, Bay area commercial business procurements), commitments by states and foreign countries, and transportation activities. Also listed are the twenty-five commercial contractors receiving the largest dollar commitments, commercial contractors receiving commitments of $1,000 or more, integrated contractor and federal agency commitments of $1,000 or more from Sandia National Laboratories/New Mexico and California, and transportation commitments of $1,000 or more from Sandia National Laboratories/New Mexico and California.
This document is the Operations Manual for the Beneficial Uses Shipping System (BUSS) cask. These operating instructions address requirements for loading, shipping, and unloading, supplementing general operational information found in the BUSS Safety Analysis Report for Packaging (SARP), SAND 83-0698. Use of the BUSS cask is authorized by the Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC) for the shipment of special form cesium chloride or strontium fluoride capsules.
Two separate Tiger Team assessments were conducted at Sandia National Laboratories (SNL). The first was conducted at the California site in Livermore between April 30, 1990, and May 18, 1990. A second Tiger Team assessment was conducted at the New Mexico site in Albuquerque between April 15 and May 24, 1991. This report is volume two, change one. One purpose of this Action Plan is to provide a formal written response to each of the findings and/or concerns cited in the SNL Tiger Team assessment reports. A second purpose is to present actions planned to be conducted to eliminate deficiencies identified by the Tiger Teams. A third purpose is to consolidate (group) related findings and to identify priorities assigned to the planned actions for improved efficiency and enhanced management of the tasks. A fourth and final purpose is to merge the two original SNL Action Plans for the New Mexico [Ref. a] and California [Ref. b] sites into a single Action Plan as a major step toward managing all SNL ES&H activities more similarly. Included in this combined SNL Action Plan are descriptions of the actions to be taken by SNL to eliminate all problems identified in the Tiger Teams` findings/concerns, as well as estimated costs and schedules for planned actions.
Environmental monitoring, earth-resource mapping, and military systems require broad-area imaging at high resolutions. Many times the imagery must be acquired in inclement weather or during night as well as day. Synthetic aperture radar (SAR) provides such a capability. SAR systems take advantage of the long-range propagation characteristics of radar signals and the complex information processing capability of modern digital electronics to provide high resolution imagery. SAR complements photographic and other optical imaging capabilities because of the minimal constraints on time-of-day and atmospheric conditions and because of the unique responses of terrain and cultural targets to radar frequencies. Interferometry is a method for generating a three-dimensional image of terrain. The height projection is obtained by acquiring two SAR images from two slightly differing locations. It is different from the common method of stereoscopic imaging for topography. The latter relies on differing geometric projections for triangulation to define the surface geometry whereas interferometry relies on differences in radar propagation times between the two SAR locations. This paper presents the capabilities of SAR, explains how SAR works, describes a few SAR applications, provides an overview of SAR development at Sandia, and briefly describes the motion compensation subsystem.
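The geometric idea behind interferometric height recovery can be shown with a deliberately simplified model: a flat earth and a vertical baseline, which I am assuming purely for illustration (a real SAR interferometer works from phase differences and a far less convenient geometry). Two antennas measure ranges to the same ground point, and the small range difference pins down the target height in closed form:

```python
import math

# Toy interferometric geometry (assumed flat earth, vertical baseline).
# Antennas at heights h1 and h2 measure ranges r1 and r2 to one ground
# point at height z; subtracting r_i^2 = x^2 + (h_i - z)^2 eliminates
# the unknown ground distance x and yields z directly.
def target_height(r1, r2, h1, h2):
    baseline = h2 - h1
    return (h1 + h2) / 2.0 - (r2**2 - r1**2) / (2.0 * baseline)

# Forward-model a 100 m hill seen from 5000 m and 5010 m, then invert.
x, z, h1, h2 = 3000.0, 100.0, 5000.0, 5010.0
r1 = math.hypot(x, h1 - z)
r2 = math.hypot(x, h2 - z)
print(round(target_height(r1, r2, h1, h2), 6))  # recovers 100.0
```

The point of the exercise is the sensitivity: the height depends on r2 - r1, a quantity of millimeters here, which is why interferometry measures it through radar propagation-time (phase) differences rather than by geometric triangulation as stereoscopy does.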
Given a planar straight-line graph, we find a covering triangulation whose maximum angle is as small as possible. A covering triangulation is a triangulation whose vertex set contains the input vertex set and whose edge set contains the input edge set. Such a triangulation differs from the usual Steiner triangulation in that we may not add a Steiner vertex on any input edge. Covering triangulations provide a convenient method for triangulating multiple regions sharing a common boundary, as each region can be triangulated independently. As it is possible that no finite covering triangulation is optimal in terms of its maximum angle, we propose an approximation algorithm. Our algorithm produces a covering triangulation whose maximum angle {gamma} is provably close to {gamma}{sub opt}, a lower bound on the maximum angle in any covering triangulation of the input graph. Note that we must have {gamma} {le} 3{gamma}{sub opt}, since we always have {gamma}{sub opt} {ge} {pi}/3 and no triangulation can contain an angle of size greater than {pi}. We prove something significantly stronger. We show that {pi} {minus} {gamma} {ge} ({pi} {minus} {gamma}{sub opt})/6, i.e., our {gamma} is not much closer to {pi} than is {gamma}{sub opt}. This result represents the first nontrivial bound on a covering triangulation`s maximum angle. We require a subroutine for the following problem: Given a polygon with holes, find a Steiner triangulation whose maximum angle is bounded away from {pi}. No angle larger than 8{pi}/9 is sufficient for the bound on {gamma} claimed above. The number of Steiner vertices added by our algorithm and its running time are highly dependent on the corresponding bounds for the subroutine. Given an n-vertex planar straight-line graph, we require O(n + S(n)) Steiner vertices and O(n log n + T(n)) time, where S(n) is the number of Steiner vertices added by the subroutine and T(n) is its running time for an O(n)-vertex polygon with holes.
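The two bounds quoted in the abstract can be written out explicitly, with the trivial one following in a single step:

```latex
% Trivial bound: every triangle has a maximum angle of at least \pi/3,
% so \gamma_{\mathrm{opt}} \ge \pi/3, while no planar angle exceeds \pi:
\gamma \;\le\; \pi \;\le\; 3\,\gamma_{\mathrm{opt}}.
% The stronger guarantee proved in the paper:
\pi - \gamma \;\ge\; \frac{\pi - \gamma_{\mathrm{opt}}}{6}
\quad\Longleftrightarrow\quad
\gamma \;\le\; \pi - \frac{\pi - \gamma_{\mathrm{opt}}}{6}.
```

The rewritten form on the right makes the content of the stronger bound plain: the algorithm's maximum angle stays a fixed fraction of the optimal gap away from {pi}.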
One proven method of evading the detection of a nuclear test is to decouple the explosion with a large air-filled cavity. Past tests have shown it is possible to reduce the seismic energy emanating from a nuclear explosion by as much as two orders of magnitude. The problem is not whether it can be done; the problem is the expense involved in mining a cavity large enough to fully decouple any reasonable size test. It has been suggested that partial decoupling may exist, so some fraction of decoupling may be attained between factors of 1 and 100. MISTY ECHO and MINERAL QUARRY are two nuclear tests that were instrumented to look at this concept. MISTY ECHO was a nuclear explosion conducted in an 11 m hemispherical cavity such that the walls were overdriven and reacted in a non-linear manner. MINERAL QUARRY was a nearby tamped event that is used as a reference for comparison with MISTY ECHO. The scaled cavity radius of MISTY ECHO was greater than 2 m/kt{sup 1/3}. Both of these tests had free-field accelerometers located within 400 m of their respective sources. Analysis of surface ground motion is inconclusive on the question of partial decoupling because of differences in the medium properties along the ray paths to the surface. The free-field configuration alleviates this concern. The analysis consists of cube-root scaling MINERAL QUARRY's signal to MISTY ECHO's yield and calculating the ratio of the Fourier amplitudes of both the acceleration and the reduced displacement potentials. The results do not indicate the presence of partial decoupling. In fact, there is a coupling enhancement factor of 2.
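The cube-root scaling step can be sketched as follows. Stretching the time axis by the cube root of the yield ratio is standard explosion-scaling practice; the yields and waveform below are synthetic stand-ins, not the MISTY ECHO or MINERAL QUARRY data, and amplitude-scaling conventions (which depend on whether acceleration, velocity, or displacement is compared) are deliberately omitted:

```python
import numpy as np

# Cube-root yield scaling of a reference waveform onto the time base of
# a target yield. Yields and the waveform here are synthetic examples.
def cube_root_scale(t, signal, y_ref, y_target):
    """Stretch the time axis by (y_target / y_ref)**(1/3) and resample."""
    k = (y_target / y_ref) ** (1.0 / 3.0)
    # Amplitude scaling conventions vary with the measured quantity and
    # are intentionally left out of this sketch.
    return np.interp(t, k * t, signal)

t = np.linspace(0.0, 1.0, 2000)
ref = np.exp(-20 * t) * np.sin(60 * np.pi * t)  # synthetic reference pulse
scaled = cube_root_scale(t, ref, y_ref=8.0, y_target=1.0)

# Spectral ratio used to look for partial decoupling; a flat ratio near 1
# would indicate equivalent coupling between the two events.
ratio = np.abs(np.fft.rfft(scaled)) / (np.abs(np.fft.rfft(ref)) + 1e-12)
```

In the actual analysis the tamped event plays the role of `ref`, and departures of the spectral ratio from unity are what would signal decoupling.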
Hazardous operations which involve the dextrous manipulation of dangerous materials in the field have, in the past, been completed by technicians. Use of humans in such hazardous operations is under increased scrutiny due to high costs and low productivity associated with providing protective clothing and environments. Remote systems are needed to accomplish many tasks such as the cleanup of waste sites in which the exposure of personnel to radiation, chemical, explosive, and other hazardous constituents is unacceptable. Traditional remote manual field operations have, unfortunately, proven to have very low productivity when compared with unencumbered human operators. Recent advances in the integration of sensing and computing into the control of remotely operated equipment have shown great promise for reducing the cost of remote systems while providing faster and safer remote systems. This paper discusses applications of such advances to remote field operations.
Multiple tracer techniques were used to estimate recharge rates through unsaturated alluvium beneath the Greater Confinement Disposal site, a waste disposal site located in Frenchman Flat, on the Nevada Test Site. Three tracers of soil water movement -- meteoric chloride, stable isotopes of water, and cosmogenic chlorine-36 -- yielded consistent results indicating that recharge rates were negligible for the purpose of performance assessment at the site.
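One of the tracer methods mentioned, the meteoric (chloride) mass balance, reduces to a single steady-state relation; the concentrations below are illustrative round numbers, not the Frenchman Flat data:

```python
# Chloride mass-balance recharge estimate. At steady state, chloride
# deposited by precipitation is concentrated in soil water by
# evapotranspiration:  R * Cl_soil = P * Cl_precip.
def recharge_mm_per_yr(precip_mm_per_yr, cl_precip_mg_L, cl_soilwater_mg_L):
    return precip_mm_per_yr * cl_precip_mg_L / cl_soilwater_mg_L

# Example: 120 mm/yr of precipitation at 0.5 mg/L chloride over soil
# water that has been concentrated to 600 mg/L.
print(recharge_mm_per_yr(120.0, 0.5, 600.0))  # 0.1 mm/yr
```

A soil-water chloride concentration hundreds of times that of precipitation is exactly the signature of the negligible recharge the abstract reports.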
This report describes work performed for the development of a fiber-optic shock position sensor used to measure the location of a shock front in the neighborhood of a nuclear explosion. Such a measurement would provide a hydrodynamic determination of nuclear yield. The original proposal was prompted by the Defense Nuclear Agency`s interest in replacing as many electrical sensors as possible with their optical counterparts for the verification of a treaty limiting the yield of a nuclear device used in underground testing. Immunity to electromagnetic pulse is the reason for the agency`s interest; unlike electrical sensors and their associated cabling, fiber-optic systems do not transmit to the outside world noise pulses from the device containing secret information.
This paper describes the connection between mechanical degradation of common cable materials in radiation and elevated temperature environments and density increases caused by the oxidation which leads to this degradation. Two techniques based on density changes are suggested as potential non-destructive evaluation (NDE) procedures which may be applicable to monitoring the mechanical condition of cable materials in power plant environments. The first technique is direct measurement of density changes, via a density gradient column, using small shavings removed from the surface of cable jackets at selected locations. The second technique is computed X-ray tomography, utilizing a portable scanning device.
We technologists generally only address risk magnitudes in our analyses, although other studies have found nineteen additional dimensions for the way the public perceives risk. These include controllability, voluntariness, catastrophic potential, and trust in the institution putting forth the risk. We and the general public use two different languages, and to understand what their concerns are, we need to realize that the culture surrounding nuclear weapons is completely alien to the general public. Ultimately, the acceptability of a risk is a values question, not a technical question. For most of the risk dimensions, the public would perceive no significant difference between using oralloy and plutonium. This does not mean that the suggested design change should not be proposed, only that the case for, or against, it be made comprehensively using the best information available today. The world has changed: the ending of the cold war has decreased the benefit of nuclear weapons in the minds of the public and the specter of Chernobyl has increased the perceived risks of processes that use radioactive materials. Our analyses need to incorporate the lessons pertinent to this newer world.
This is the final report for a study performed for the 1992 LDRD spaceborne SAR (Synthetic Aperture Radar) study. This report presents an overview of some of the issues that must be considered for design and implementation of a SAR on a spaceborne platform. The issues addressed in this report include: a survey of past, present, and future spaceborne SARs; pulse-repetition frequency (PRF); general image processing issues; transmitter power requirements; the ionosphere; antennas; two case studies; and an appendix with a simplified presentation on geometry and orbits.
Damage induced during electron-beam metallization results in a three-order-of-magnitude increase in the generation rate of bulk GaAs. The damage, observed in p{sup +}-i-n-i-p{sup +} GaAs layers, appears to be radiation induced, with low-energy electrons being the most likely damaging mechanism.
This report describes the design, development, manufacturing processes, acceptance equipment, test results, and conclusions for the SA3581/MC4196 LAC program. Four development groups (identified as Groups 1 through 3 and a Proof of Development Build) provided the evaluation criteria for the PPI/TMS production units.
An ever-increasing demand for highly rugged, miniature AT strip resonators prompted the development of a resonator package for use in high-g shock applications. This package, designed and developed by Statek Corporation, is based on the package configuration currently being used by Statek for commercial devices. This report describes the design intent, component characteristics, and evaluation test results for this device.
A temperature between 400 and 500 and a pressure between 40 MPa and 160 MPa were indicated by a two-factor, three-level factorial experiment for diffusion bonding of molybdenum sheet substrates. These substrates were sputter ion plated with palladium (0.5 {mu}m) and silver (10 {mu}m) films on the mating surfaces, with the silver used as a bonding interlayer. The palladium acted as an adhesive layer between the silver film and molybdenum substrate. The silver diffusion bonds that resulted were qualitatively characterized at the interfacial regions, and bonds with no visible interface were obtained at 7500X magnification. Correlations were obtained for voids found optically at the silver/silver bonding interface and colored image maps, illustrating bond quality, produced by nondestructive ultrasonic imaging. Above 160 MPa, the bonding process produces samples with a nonuniform load distribution. These samples contained regions with gaps and well-bonded regions at the silver/silver interface, and all had macroscopic deformation of the silver films.
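A two-factor, three-level factorial design is simply the full grid of level combinations. The level endpoints below come from the ranges quoted in the abstract; the middle levels are assumed midpoints for illustration:

```python
from itertools import product

# Full two-factor, three-level factorial grid. Endpoint levels are taken
# from the quoted ranges; the mid levels are assumed midpoints.
temperatures = (400, 450, 500)   # temperature levels
pressures_mpa = (40, 100, 160)   # pressure levels, MPa
runs = list(product(temperatures, pressures_mpa))
print(len(runs))  # 3 x 3 = 9 treatment combinations
```

Each of the nine runs is a bonding trial at one temperature/pressure pair, which is what lets the experiment resolve the effect of each factor and their interaction.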
Salford Electrical Instruments, Ltd., and the General Electric Company`s Hirst Research Centre, under contract to the United Kingdom`s (UK) Ministry of Defence, developed a radiation-hard, leadless chip-carrier-packaged oscillator/divider. Two preproduction clocks brought to Sandia National Laboratories (SNL) by a potential SNL customer underwent mechanical and thermal environmental evaluation. Because of the subsequent failure of one device and the deteriorating condition of another device, the devices were not subjected to radiation tests. This report describes the specifics of the environmental evaluation performed on these two clocks and the postmortem analysis of one unit, which ultimately failed. Clock startup time versus temperature studies were also performed and compared to an SNL-designed clock having the same fundamental frequency.
High speed flash radiography has been used to record phenomena that occur during rapid dynamic events. The events are difficult, if not impossible, to record by other means due to the speed of the event or the obscuration associated with it. To eliminate the motion blur of objects moving at high speeds it is necessary to have extremely short exposure times. This short exposure time requires the use of high speed intensifying screens and high speed x-ray film to record the radiographic image. Technicians who use flash x-rays have to depend on recommendations from present and former flash x-ray users for film and screen selection. The film and screen industry has made many changes in the last few years. It is not uncommon to find that the particular film or screen used in the past is no longer manufactured. This paper will describe some of the films and screens that are currently used for testing. It will also describe the optimum experimental setup used to obtain the best images.
The Modal Group at Sandia National Laboratories performs a variety of tests on structures ranging from weapons systems to wind turbines. The desired number of data channels for these tests has increased significantly over the past several years. Tests requiring large numbers of data channels make roving accelerometers impractical and inefficient. The Modal Lab has implemented a method in which the test unit is fully instrumented before any data measurements are taken. This method uses a 16 channel data acquisition system and a mechanical switching setup to access each bank of accelerometers. A data base containing all transducer sensitivities, location numbers, and coordinate information is resident on the system enabling quick updates for each data set as it is patched into the system. This method has reduced test time considerably and is easily customized to accommodate data acquisition systems with larger channel capabilities.
A new class of inorganic ion exchange materials that can separate low parts per million level concentrations of Cs{sup +} from molar concentrations of Na{sup +} has recently been developed as a result of a collaborative effort between Sandia National Laboratories and Texas A&M University. The materials, called crystalline silicotitanates, show significant potential for application to the treatment of aqueous nuclear waste solutions, especially neutralized defense wastes that contain molar concentrations of Na{sup +} in highly alkaline solutions. In experiments with alkaline solutions that simulate defense waste compositions, the crystalline silicotitanates exhibit distribution coefficients for Cs{sup +} of greater than 2,000 ml/g, and distribution coefficients greater than 10,000 for solutions adjusted to a pH between 1 and 10. Additionally, the crystalline silicotitanates were found to exhibit distribution coefficients for Pu and Sr{sup 2+} of greater than 2,000 and 100,000 respectively. Development of these materials for use in processes to treat defense waste streams is currently being pursued.
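The distribution coefficients quoted above follow the conventional batch-sorption definition. The concentrations, solution volume, and exchanger mass below are invented for illustration, not measured silicotitanate data:

```python
# Batch-sorption distribution coefficient Kd (ml/g), conventionally
# defined as sorbed concentration per gram of solid divided by the
# final solution concentration. Numbers below are illustrative only.
def kd_ml_per_g(c_initial, c_final, volume_ml, mass_g):
    return (c_initial - c_final) / c_final * volume_ml / mass_g

# Example: 10 ppm Cs reduced to 0.05 ppm by 0.1 g of exchanger in 10 ml.
kd = kd_ml_per_g(10.0, 0.05, 10.0, 0.1)
print(kd)  # on the order of 2 x 10^4 ml/g
```

A Kd of several thousand ml/g in molar Na{sup +} is what distinguishes these materials: the competing sodium is present at concentrations roughly a million times that of the cesium being removed.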
The effects of the midgap-level interface trap density and net oxide charge on the total-dose gain degradation of a bipolar transistor are separately identified. The superlinear dose dependence of the excess base current is explained.
MOS total-dose response is shown to depend strongly on transistor gate length. Simple scaling models cannot predict short-channel device response from long-channel results. Hardness assurance implications are discussed for weapon and space environments.
We have studied intrinsic free-carrier recombination in a variety of GaAs structures, including: OMVPE- and MBE-prepared GaAs/Al{sub x}Ga{sub 1-x}As double heterostructures, Na{sub 2}S passivated GaAs structures and bare GaAs structures. We find OMVPE prepared structures are superior to all of these other structures with 300 K lifetimes of {approximately} 2.5 {mu}s and negligible nonradiative interface and bulk recombination, and thus are truly surface-free (S < 40 cm/s). Moreover, we observe systematic trends in optical properties versus growth conditions. Lastly, we find that the presence of free-exciton recombination in the low-temperature photoluminescence spectra is a necessary but not sufficient condition for optimal optical properties (i.e. long minority-carrier lifetimes).
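The connection between measured lifetime and the interface recombination velocity S follows the textbook double-heterostructure relation; the layer thickness and assumed bulk lifetime below are illustrative, not the values from this study:

```python
# Standard double-heterostructure lifetime analysis. For an active layer
# of thickness d with two identical interfaces:
#   1/tau_eff = 1/tau_bulk + 2*S/d
# Solving for S gives a bound on the interface recombination velocity.
def surface_recombination_bound(tau_eff_s, tau_bulk_s, d_cm):
    """Interface recombination velocity S (cm/s) implied by tau_eff."""
    return 0.5 * d_cm * (1.0 / tau_eff_s - 1.0 / tau_bulk_s)

# Example: a 2.5 us measured lifetime in a 1 um layer, assuming a
# 3 us bulk (radiative) lifetime.
S = surface_recombination_bound(2.5e-6, 3.0e-6, 1.0e-4)
print(f"S = {S:.1f} cm/s")
```

Because thinner layers weight the interfaces more heavily, measuring tau_eff across a thickness series is the usual way S and tau_bulk are separated in practice.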
Sandia National Laboratories (SNL) is conducting several research programs to help develop validated methods for the prediction of the ultimate pressure capacity, at elevated temperatures, of light water reactor (LWR) containment structures. To help understand the ultimate pressure of the entire containment pressure boundary, each component must be evaluated. The containment pressure boundary consists of the containment shell and many access, piping, and electrical penetrations. The focus of the current research program is to study the ultimate behavior of flexible metal bellows that are used at piping penetrations. Bellows are commonly used at piping penetrations in steel containments; however, they have very few applications in concrete (reinforced or prestressed) containments. The purpose of piping bellows is to provide a soft connection between the containment shell and the attached piping while maintaining the containment pressure boundary. In this way, piping loads caused by differential movement between the piping and the containment shell are minimized. SNL is conducting a test program to determine the leaktight capacity of containment bellows when subjected to postulated severe accident conditions. If the test results indicate that containment bellows could be a possible failure mode of the containment pressure boundary, then methods will be developed to predict the deformation, pressure, and temperature conditions that would likely cause a bellows failure. Results from the test program would be used to validate the prediction methods. This paper provides a description of the use and design of bellows in containment piping penetrations, the types of possible bellows loadings during a severe accident, and an overview of the test program, including available test results at the time of writing.
The Department of Energy`s Nevada Field Office has disposed of a small quantity of high activity and special case wastes using Greater Confinement Disposal facilities in Area 5 of the Nevada Test Site. Because some of these wastes are transuranic radioactive wastes, they are subject to the Environmental Protection Agency standards for disposal under 40 CFR Part 191, which require a compliance assessment. In conducting the 40 CFR Part 191 compliance assessment, review of the Greater Confinement Disposal inventory revealed potentially land disposal restricted hazardous wastes. The regulatory options for disposing of land disposal restricted wastes consist of (1) treatment and monitoring, or (2) developing a no-migration petition. Given that the waste is already buried without treatment, a no-migration petition becomes the primary option. Based on a desire to minimize costs associated with site characterization and performance assessment, a single approach has been developed for assessing compliance with 40 CFR Part 191, DOE Order 5820.2A (which regulates low-level radioactive wastes contained in Greater Confinement Disposal facilities) and developing a no-migration petition. The approach consists of common points of compliance, common time frame for analysis, and common treatment of uncertainty. The procedure calls for conservative bias of modeling assumptions, including model input parameter distributions and adverse processes and events that can occur over the regulatory time frame, coupled with a quantitative treatment of data and parameter uncertainty. This approach provides a basis for a defensible regulatory decision. In addition, the process is iterative between modeling and site characterization activities, where the need for site characterization activities is based on a quantitative definition of the most important and uncertain parameters or assumptions.
The first portion of this paper proposes a method of fabricating a material whose modulus can be changed substantially through the application of a specified stimulus. The particular implementation presented here indirectly exploits the large deformation associated with shape memory alloys to achieve the desired modulation of stiffness. The next portion of this paper discusses a class of vibration problems for which such materials have a serious potential for vibration suppression. These are problems, such as the spinning up of rotating machinery, in which the excitation at any time lies within a narrow frequency band, and that band moves through the frequency spectrum in a predictable manner. Finally, an example problem is examined and the utility of this approach is discussed.
An element based finite control volume procedure is applied to the solution of ablation problems for 2-D axisymmetric geometries. A mesh consisting of four node quadrilateral elements was used. The nodes are allowed to move in response to the surface recession rate. The computational domain is divided into a region with a structured mesh with moving nodes and a region with an unstructured mesh with stationary nodes. The mesh is constrained to move along spines associated with the original mesh. Example problems are presented for the ablation of a realistic nose tip geometry exposed to aerodynamic heating from a uniform free stream environment.
The paper gives some of the highlights of a panel discussion on surface diffusion held Monday, November 30, 1992 at the Fall MRS Meeting in Boston, Massachusetts. Four invited speakers discussed computer modeling techniques and scanning tunneling microscopy experiments that have been used to provide new understanding of the atomistic processes that occur at surfaces. We present a summary of each of the invited talks, indicate other presentations on surface diffusion in this proceedings, and provide a transcript of the two discussion sessions.
During the past few years, methods have been developed for quantifying and analyzing common cause failures (CCFs). These methods have outpaced current data collection activities. This document discusses the collection and documentation of failure events at nuclear power plants with respect to these new CCF methods. The report concentrates on the information necessary to improve the parameter estimates for both independent and dependent events in probabilistic risk assessments (PRAs) and alludes to the fact that the same information can be used to enhance other nuclear power plant activities. Several existing data bases are reviewed as to their adequacy for these new CCF methods, and areas where information is lacking, either because certain information is simply not required to be reported or because required information was simply not reported, are identified. Finally, data needs identified from recent PRAs are discussed.
Lightning protection systems (LPSs) for explosives handling and storage facilities have long been designed similarly to those built for more conventional facilities, but their overall effectiveness in controlling interior electromagnetic (EM) environments has still not been rigorously assessed. Frequent lightning-caused failures of a security system installed in earth-covered explosives storage structures prompted the U.S. Army and Sandia National Laboratories to conduct a program to determine quantitatively the EM environments inside an explosives storage structure that is struck by lightning. These environments were measured directly during rocket-triggered lightning (RTL) tests in the summer of 1991 and were computed using linear finite-difference, time-domain (FDTD) EM solvers. The experimental and computational results were first compared in order to validate the code and were also used to construct bounds for interior environments corresponding to seven incident lightning flashes. The code results were also used to develop simple circuit models for the EM field behavior, a process that resulted in a very simple and somewhat surprising physical interpretation of the structure`s response that has significant practical and economic implications for design, construction, and maintenance of such facilities.
A panel discussion on interface roughness was held at the Fall 1992 Materials Research Society meeting. We present a summary of the results presented by the invited speakers on the application and interpretation of X-ray reflectivity, atomic force microscopy (AFM), scanning tunneling microscopy (STM), photoluminescence and transmission electron microscopy.
A safety system has been designed and constructed to mitigate the asphyxiation and low temperature hazards presented by the distribution and usage of cryogenic liquids in work spaces at Sandia National Laboratories. After identifying common accident scenarios, the CRYOFACS (Cryogenic Fail-Safe Control System) unit was designed, employing microprocessor technology and software that can be easily modified to accommodate varying laboratory requirements. Sensors have been incorporated in the unit for the early detection of accidental releases or overflows of cryogenic liquids. The CRYOFACS design includes control (and shutdown) of the cryogen source upon error detection, and interfaces with existing oxygen monitors, in common use at Sandia Labs, to provide comprehensive protection for both personnel and property.
The Greater Confinement Disposal (GCD) facility was established by the Nevada office of the Department of Energy (DOE) in Area 5 at the Nevada Test Site for containment of waste inappropriate for shallow land burial. Some transuranic (TRU) waste has been disposed of at the GCD facility, and compliance of this disposal system with Environmental Protection Agency (EPA) regulations 40 CFR 191 must be evaluated by performance assessment calculations. We have adopted an iterative approach where performance assessment results guide site data collection, which in turn influences the parameters and models used in performance assessment. The first iteration, based upon readily available data, indicated that the GCD facility would likely comply with 40 CFR 191 and that the downward recharge rate had a major influence on the results. As a result, a site characterization project was initiated to study recharge in Area 5 by use of three environmental tracers. This study resulted in the conclusion that recharge was extremely small, if not negligible. Thus, downward advection to the water table is no longer considered a viable release pathway, leaving upward liquid diffusion as the sole release pathway. The second performance assessment iteration refined the upward pathway models and parameters. The results of the performance assessment using these models still indicate that the GCD site is likely to comply with all sections of 40 CFR 191.
We have developed a method for generating chromate-free corrosion-resistant coatings on aluminum alloys using a process procedurally similar to standard chromate conversion. These coatings provide good corrosion resistance on 6061-T6 and 1100 Al under salt spray testing conditions. The resistance of the new coating is comparable to that of chromate conversion coatings in four-point probe tests, but higher when a mercury probe technique is used. Initial tests of paint adhesion and under-paint corrosion resistance are promising. The primary advantage of this new process is that no hazardous chemicals are used or produced during the coating operation.
A method is described to characterize shocks (transient time histories) in terms of the Fourier energy spectrum and the temporal moments of the shock passed through a contiguous set of bandpass filters. This method is compared for two transient time histories with the more conventional methods of shock response spectra (SRS) and a nonstationary random characteristic.
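The temporal-moment characterization described above can be sketched in a few lines. This is an illustrative example, not the authors' code, and it assumes the transient has already been passed through one bandpass filter of the contiguous set:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Zeroth temporal moment: the energy of the filtered transient x sampled at dt,
// i.e., this band's contribution to the Fourier energy spectrum.
double band_energy(const std::vector<double>& x, double dt) {
    double e = 0.0;
    for (double v : x) e += v * v * dt;
    return e;
}

// First temporal moment: the energy-weighted centroid time of the transient
// (assumes the signal has nonzero energy).
double temporal_centroid(const std::vector<double>& x, double dt) {
    double e = 0.0, m1 = 0.0;
    for (std::size_t k = 0; k < x.size(); ++k) {
        double w = x[k] * x[k] * dt;
        e  += w;
        m1 += k * dt * w;
    }
    return m1 / e;
}
```

Higher moments, such as the duration about the centroid, follow the same weighted-sum pattern with higher powers of time.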
Qualitatively different trends in postirradiation electrical response are observed in MOS devices after very long (up to 2.75-year) switched-bias bakes. A revised defect nomenclature is introduced, and implications for MOS defect models are discussed.
La{sub 2-x}Sr{sub x}CuO{sub 4+{delta}} compounds with x = 0.01, 0.025, 0.050, 0.10 and 0.16 and with excess oxygen {delta} incorporated by high-pressure O{sub 2} anneals were examined using time-of-flight neutron diffraction data. Various models were fit by Rietveld least-squares refinement, with the maximum amount of {delta} being only of the order of 10 standard deviations. {delta} is largest for x near 0, is zero for x = 0.10 and is intermediate for x = 0.16. Only the sample with x = 0.01 is found to phase separate distinctly into a nearly stoichiometric phase with {delta} {approx} 0 and an oxygen-rich superconducting phase as the temperature is lowered. Coincidence of phase separation and Neel temperature strongly suggests that the phase separation is driven by free energy provided by long-range antiferromagnetic ordering in the nearly stoichiometric, weakly Sr-doped La{sub 2-x}Sr{sub x}CuO{sub 4}. The excess oxygen stoichiometry shows that at low values of x, hole doping is provided primarily by the excess oxygen, and is enhanced substantially by phase separation. At larger values of x, excess oxygen is no longer incorporated, and hole doping is provided by the substitution of Sr{sup +2} for La{sup +3}.
The US Nuclear Regulatory Commission (NRC) is investigating the performance of containments subject to severe accidents. This work is being performed by Sandia National Laboratories (SNL). In 1987, a 1:6-scale Reinforced Concrete Containment (RCC) model was tested to failure. The failure mode was a liner tear. As a result, a separate effects test program has been conducted to investigate liner tearing. This paper discusses the design of test specimens and the results of the testing. The post-test examination of the 1:6-scale RCC model revealed that the large tear was not an isolated event. Other small tears in similar locations were also discovered. All tears occurred near the insert-to-liner transition which is also the region of closest stud spacing. Also, all tears propagated vertically, in response to the hoop strain. Finally, all tears were adjacent to a row of studs. The tears point to a mechanism which could involve the liner/insert transition, the liner anchorage, and the material properties. The separate effects tests investigated these effects. The program included the design of three types of specimens with each simulating some features of the 1:6-scale RCC model. The specimens were instrumented using strain gages and photoelastic materials.
The problem of constructing trees given a matrix of interleaf distances is motivated by applications in computational evolutionary biology and linguistics. The general problem is to find an edge-weighted tree which most closely approximates the distance matrix. Although the construction problem is easy when the tree exactly fits the distance matrix, optimization problems under all popular criteria are either known or conjectured to be NP-complete. In this paper we consider the related problem where we are given a partial order on the pairwise distances, and wish to construct (if possible) an edge-weighted tree realizing the partial order. In particular we are interested in partial orders which arise from experiments on triples of species, which determine either a linear ordering of the three pairwise distances (called Total Order Model or TOM experiments) or only the pair(s) of minimum distance apart (called Partial Order Model or POM experiments). The POM and TOM experimental model is inspired by the model proposed by Kannan, Lawler, and Warnow for constructing trees from experiments which determine the rooted topology for any triple of species. We examine issues of construction of trees and consistency of TOM and POM experiments, where the trees may either be weighted or unweighted. Using these experiments to construct unweighted trees without nodes of degree two is motivated by a similar problem studied by Winkler, called the Discrete Metric Realization problem, which he showed to be strongly NP-hard. We have the following results: Determining consistency of a set of TOM or POM experiments is NP-Complete whether the tree is weighted or constrained to be unweighted and without degree two nodes. We can construct unweighted trees without degree two nodes from TOM experiments in optimal O(n{sup 3}) time and from POM experiments in O(n{sup 4}) time.
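A TOM experiment on a known edge-weighted tree can be simulated directly from path distances. The following sketch is illustrative only (the paper's concern is the inverse, NP-complete construction problem), and the function names are hypothetical:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <utility>
#include <vector>

// Adjacency list for an edge-weighted tree: t[u] holds (neighbor, weight) pairs.
using Tree = std::vector<std::vector<std::pair<int, double>>>;

// Path distance between two nodes; plain DFS suffices since a tree is acyclic.
double path_dist(const Tree& t, int src, int dst) {
    std::vector<double> d(t.size(), -1.0);
    std::vector<int> stack{src};
    d[src] = 0.0;
    while (!stack.empty()) {
        int u = stack.back(); stack.pop_back();
        for (auto [v, w] : t[u])
            if (d[v] < 0.0) { d[v] = d[u] + w; stack.push_back(v); }
    }
    return d[dst];
}

// A TOM experiment on leaves (a, b, c) reports the three pairwise distances
// in nondecreasing order; pairs {a,b}, {a,c}, {b,c} are labelled 0, 1, 2.
std::array<int, 3> tom_experiment(const Tree& t, int a, int b, int c) {
    std::array<std::pair<double, int>, 3> p{{
        {path_dist(t, a, b), 0},
        {path_dist(t, a, c), 1},
        {path_dist(t, b, c), 2}}};
    std::sort(p.begin(), p.end());
    return {p[0].second, p[1].second, p[2].second};
}

// Small example: leaves 0, 1, 2 attached to internal node 3 with weights 1, 2, 5.
Tree example_tree() {
    Tree t(4);
    t[0] = {{3, 1.0}};
    t[1] = {{3, 2.0}};
    t[2] = {{3, 5.0}};
    t[3] = {{0, 1.0}, {1, 2.0}, {2, 5.0}};
    return t;
}
```

A POM experiment would report only `p[0]` (and ties), discarding the rest of the ordering.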
Semiconductor ring lasers are being developed for use as direct-waveguide-coupled sources for photonic integrated circuits. This report describes the results of our research and development of this new class of diode lasers. We have fabricated and characterized semiconductor ring lasers which operate continuous-wave at room temperature with a single-frequency output of several milliwatts. Our work has led to an increased understanding of the operating behavior of these lasers and to the development of two new types of advanced devices. The interferometric ring diode laser uses a coupled-cavity structure to improve the level of single-frequency performance. The unidirectional ring diode laser uses an active crossover waveguide to promote lasing in a single ring direction, with up to 96% of the output emitted in the preferred lasing direction.
Results from fundamental investigations of low-temperature plasma systems were used to improve chamber-to-chamber reproducibility and reliability in commercial plasma-etching equipment. The fundamental studies were performed with a GEC RF Reference Cell, a laboratory research system designed to facilitate experimental and theoretical studies of plasma systems. Results and diagnostics from the Reference Cell studies were then applied to analysis and rectification of chamber-to-chamber variability on a commercial, multichamber, plasma reactor. Pertinent results were transferred to industry.
Compression seals are commonly used in electronic components. Because glass has such a low fracture toughness, tensile residual stresses must be kept low to avoid cracks. S. N. Burchett analyzed a variety of compression pin seals to identify mechanically optimal configurations when work-hardened Alloy 52 conductor pins are sealed in a 304 stainless steel housing with a Kimble TM-9 glass insulator. Mechanical property tests on Alloy 52 have shown that the heat treatments encountered in a typical glass sealing cycle are capable of annealing the Alloy 52 pins, increasing ductility and lowering the yield strength. Since most seal analyses are routinely based on unannealed Alloy 52 properties, a limited study has been performed to determine the design impact of lowering the yield strength of the pins in a typical compression seal. Thermal residual stresses were computed in coaxial compression seals with annealed pins, and the results were then used to reconstruct design guidelines following the procedures employed by Miller and Burchett. Annealing was found to significantly narrow the optimal design range (as defined by a dimensionless geometric parameter). The Miller-Burchett analyses, which were based on very coarse finite element meshes and a 50 ksi yield strength, fortuitously predicted an overly conservative design range that is a subset of the narrow design window prevalent when the yield strength is assumed to be 34 ksi. This may not remain true for lower yield strengths. The presence of pin wetting was shown to exacerbate the glass stress state. The time is right to develop a modern and enhanced set of design guidelines which could address new material systems, three-dimensional geometries, and viscoelastic effects.
With ever increasing processor and memory speeds, new methods to overcome the "I/O bottleneck" need to be found. This is especially true for massively parallel computers that need to store and retrieve large amounts of data fast and reliably to fully utilize the available processing power. We have designed and implemented a parallel file system that distributes the work of transferring data to and from mass storage across several I/O nodes and communication channels. The prototype parallel file system makes use of the existing single-threaded file system of the Sandia/University of New Mexico Operating System (SUNMOS). SUNMOS is a joint project between Sandia National Laboratories and the University of New Mexico to create a small and efficient OS for Massively Parallel (MP) Multiple Instruction, Multiple Data (MIMD) machines. We chose file striping to interleave files across sixteen disks. By using source-routing of messages we were able to increase throughput beyond the maximum single-channel bandwidth allowed by the default routing algorithm of the nCUBE 2 hypercube. We describe our implementation, the results of our experiments, and the influence this work has had on the design of the Performance-oriented, User-managed, Messaging Architecture (PUMA) operating system, the successor to SUNMOS.
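The striping layout can be illustrated with the usual round-robin mapping. This is a hedged sketch of the general technique, not the SUNMOS implementation:

```cpp
#include <cassert>
#include <cstddef>

// Round-robin file striping: logical block i of a file striped over n_disks
// is stored on disk (i % n_disks) at local block (i / n_disks), so a large
// sequential transfer keeps all disks and channels busy at once.
struct StripeLoc {
    std::size_t disk;
    std::size_t local_block;
};

StripeLoc locate(std::size_t logical_block, std::size_t n_disks) {
    return {logical_block % n_disks, logical_block / n_disks};
}
```

With sixteen disks, for example, logical block 37 lands on disk 5 as that disk's local block 2.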
An automatic phase identification system that employs a neural network approach to classifying seismic event phases is described. Extraction of feature vectors used to distinguish the different classes is explained, and the design and training of the neural networks in the system are detailed. Criteria used to evaluate the performance of the neural network approach are provided.
Satellite electronics may be subjected to a large fluence of protons from the Van Allen belt and from solar flares. To determine if unhardened electronics will survive a radiation environment, the total ionizing dose and displacement damage to the electronics must be determined. Several computer codes are available for modeling proton transport, ranging in complexity from a very efficient straight-line approximation to general-geometry time-dependent Monte Carlo transport, with a corresponding increase in computer run time. For most satellite applications, neutrons can be neglected in the analysis. However, neutrons may be important for modeling heavily shielded compartments for personnel and electronics.
This report describes the activities and results of an LDRD entitled Sensor Based Process Control. This research examined the needs of the plating industry for monitor and control capabilities with particular emphasis on water effluent from rinse baths. A personal computer-based monitor and control development system was used as a test bed.
A large-scale brine inflow test was conducted 655 m below ground surface in a cylindrical test room at the Waste Isolation Pilot Plant (WIPP). This test was the first large-scale WIPP test that allowed periodic access to a sealed, monitored excavation. The test was designed to characterize the environment within the sealed test room (Room Q) and to examine the surrounding host rock to quantify such characteristics as near-surface resistivity and permeability in the formation surrounding the room. Testing began with boring of the room in July 1989. Data in this report were collected from the time of test start-up through November 25, 1991. Relative humidity, barometric pressure, and temperature were measured in the sealed environment of the test room. Formation closure rates and electrical resistance of the formation close to the room surface were measured to determine the response of the host rock around Room Q. Brine was collected periodically to quantify the amount of inflow from large-scale openings. Results of the measurements are presented in a series of graphs. This report also describes the features of the test.
Pore-pressure and fluid-flow tests were performed in 15 boreholes drilled into the bedded evaporites of the Salado Formation from within the Waste Isolation Pilot Plant (WIPP). The tests measured fluid flow and pore pressure within the Salado. The boreholes were drilled into the previously undisturbed host rock around a proposed cylindrical test room, Room Q, located on the west side of the facility about 655 m below ground surface. The boreholes were about 23 m deep and ranged over 27.5 m of stratigraphy. They were completed and instrumented before excavation of Room Q. Tests were conducted in isolated zones at the end of each borehole. Three groups of 5 isolated zones extend above, below, and to the north of Room Q at increasing distances from the room axis. Measurements recorded before, during, and after the mining of the circular test room provided data about borehole closure, pressure, temperature, and brine seepage into the isolated zones. The effects of the circular excavation were recorded. This data report presents the data collected from the borehole test zones between April 25, 1989 and November 25, 1991. The report also describes test development, test equipment, and borehole drilling operations.
The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/{radical}p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
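The block decomposition underlying such an algorithm can be sketched serially: on the real machine each (row, column) cell of a sqrt(p)-by-sqrt(p) grid is a processor holding one matrix block, and the per-cell partial products are combined by an O(log p) reduction along each grid row. The following is an illustrative sketch under those assumptions, not the paper's implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>;
using Vec = std::vector<double>;

// Serial simulation of the 2D-block matvec: grid cell (r, c) multiplies its
// (n/q) x (n/q) block of A by its slice of x; accumulating into y[i] plays the
// role of the row-wise reduction performed by message passing on the hypercube.
Vec matvec_blocked(const Mat& A, const Vec& x, std::size_t q) {
    std::size_t n = x.size(), b = n / q;   // b = block size, n/sqrt(p)
    Vec y(n, 0.0);
    for (std::size_t r = 0; r < q; ++r)
        for (std::size_t c = 0; c < q; ++c)          // one grid cell's work
            for (std::size_t i = r * b; i < (r + 1) * b; ++i)
                for (std::size_t j = c * b; j < (c + 1) * b; ++j)
                    y[i] += A[i][j] * x[j];
    return y;
}
```

Each cell touches only an n/sqrt(p)-length slice of x and contributes an n/sqrt(p)-length partial result, which is the source of the O(n/sqrt(p) + log(p)) communication cost quoted above.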
Radioactive spent fuel assemblies are a source of hazardous waste that will have to be dealt with in the near future. It is anticipated that the spent fuel assemblies will be transported to disposal sites in spent fuel transportation casks. In order to design a reliable and safe transportation cask, the maximum cladding temperature of the spent fuel rod arrays must be calculated. A comparison between numerical calculations using commercial thermal analysis software packages and experimental data simulating a horizontally oriented spent fuel rod array was performed. Twelve cases were analyzed using air and helium for the fill gas, with three different heat dissipation levels. The numerically predicted temperatures are higher than the experimental data for all levels of heat dissipation with air as the fill gas. The temperature differences are 4{degree}C and 23{degree}C for the low heat dissipation and high heat dissipation, respectively. The temperature predictions using helium as a fill gas are lower for the low and medium heat dissipation levels, but higher at the high heat dissipation. The temperature differences are 1{degree}C and 6{degree}C for the low and medium heat dissipation, respectively. For the high heat dissipation level, the temperature predictions are 16{degree}C higher than the experimental data. Differences between the predicted and experimental temperatures can be attributed to several factors. These factors include experimental uncertainty in the temperature and heat dissipation measurements, actual convection effects not included in the model, and axial heat flow in the experimental data. This work demonstrates that horizontally oriented spent fuel rod surface temperature predictions can be made using existing commercial software packages. This work also shows that end effects will be increasingly important as the amount of dissipated heat increases.
This bulletin discusses the following: decontamination of polluted water by using a photocatalyst to convert ultraviolet energy into electrochemical energy capable of destroying organic waste and removing toxic metals; monitoring oil spills with SAR by collecting data in digital form, processing the data, and creating digital images that are recorded for post-mission viewing and processing; revitalization of a solar industrial process heat system which uses parabolic troughs to heat water for foil production of integrated circuits; and an electronic information system, EnviroTRADE (Environmental Technologies for Remedial Actions Data Exchange) for worldwide exchange of environmental restoration and waste management information.
Sandia is a DOE multiprogram engineering and science laboratory with major facilities at Albuquerque, New Mexico, and Livermore, California, and a test range near Tonopah, Nevada. We have major research and development responsibilities for nuclear weapons, arms control, energy, the environment, economic competitiveness, and other areas of importance to the needs of the nation. Our principal mission is to support national defense policies by ensuring that the nuclear weapon stockpile meets the highest standards of safety, reliability, security, use control, and military performance. Selected unclassified technical activities and accomplishments are reported here. Topics include advanced manufacturing technologies, intelligent machines, computational simulation, sensors and instrumentation, information management, energy and environment, and weapons technology.
Increasing complexity of experiments, coupled with limitations of the previously used computers, required improvements in both hardware and software in the Rock Mechanics Laboratories. DATAVG, an existing software package for data acquisition and display written by D. J. Holcomb in 1983, could no longer support the increasing numbers of input channels and the need for better graphics. After researching the market and trying several alternatives, no commercial program was found that met our needs. The previous version of DATAVG had the basic features needed but was tied to obsolete hardware, and memory limitations on the previously used PDP-11 made it impractical to upgrade the software further. With the advances in IBM-compatible computers, it is now desirable to use them as data recording platforms. With this in mind, it was decided to write a new version of DATAVG that would take advantage of newer hardware. The new version had to support multiple graphic display windows and increased channel counts, and it also had to be easier to use.
A quartz digital accelerometer has been developed which uses double-ended tuning forks as the active sensing elements. The authors have demonstrated that this accelerometer is capable of acceleration measurements between {+-}150 G with {+-}0.5 G accuracy. They have further refined the original design and assembly processes to produce accelerometers with < 1 mG stability in inertial measurement applications. This report covers the development, design, processing, assembly, and testing of these devices.
The thermomechanical effect on the exploratory ramps, drifts, and shafts as a result of high-level nuclear waste disposal is examined using a three-dimensional thermo-elastic model. The repository layout modeled is based on the use of mechanical mining of all excavations with equivalent waste emplacement areal power densities of 57 and 80 kW/acre. Predicted temperatures and stress changes for the north and south access drifts, east main drift, east-west exploratory drift, the north and south Calico Hills access ramps, the Calico Hills north-south exploratory drift, and the optional exploratory studies facility and man and materials shafts are presented for times 10, 35, 50, 100, 300, 500, 1000, 2000, 5000, and 10,000 years after the start of waste emplacement. The study indicates that the east-west exploratory drift at the repository horizon is subject to the highest thermomechanical impact because it is located closest to the buried waste canisters. For most exploratory openings, the thermally induced temperatures and stresses tend to reach their maximum magnitudes at approximately 1000 years after waste emplacement.
One of the most widely recognized inadequacies of C is its low-level treatment of arrays. Arrays are not first-class objects in C; an array name in an expression almost always decays into a pointer to the underlying type. This is unfortunate, especially since an increasing number of high-performance computers are optimized for calculations involving arrays of numbers. On such machines, double[] may be regarded as an intrinsic data type comparable to double or int and quite distinct from double*. This weakness of C is acknowledged in the ARM, where it is suggested that the inadequacies of the C array can be overcome in C++ by wrapping it in a class that supplies dynamic memory management, bounds checking, operator syntax, and other useful features. Such "smart arrays" can in fact supply the same functionality as the first-class arrays found in other high-level, general-purpose programming languages. Unfortunately, they are typically expensive in both time and memory and make poor use of advanced floating-point architectures. The reasons for these difficulties are discussed in X3J16/92-0076/WG21/N0153, "Optimization of Expressions Involving Array Classes." Is there a better solution? The most obvious solution is to make arrays first-class objects and add the functionality mentioned above. However, this would destroy C compatibility and significantly alter the C++ language; major conflicts with existing practice would seem inevitable. I propose instead that a numerical array class be adopted as part of the C++ standard library. This class would have the functionality appropriate for the intrinsic arrays found on most high-performance computers, and the compilers written for these computers would be free to implement it as a built-in class. On other platforms, the class may be defined normally, providing users with basic array functionality without imposing an excessive burden on the implementor.
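A minimal sketch of such a "smart array" wrapper, supplying dynamic storage, bounds checking, and operator syntax over a raw C array, might look like the following. The class name and interface here are hypothetical, not the proposed standard class:

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Illustrative "smart array" along the lines the ARM suggests: dynamic memory
// management, checked indexing, and elementwise operator syntax.
class Array {
    std::vector<double> d_;
public:
    explicit Array(std::size_t n, double v = 0.0) : d_(n, v) {}

    double& operator[](std::size_t i) {              // checked access
        if (i >= d_.size()) throw std::out_of_range("Array index");
        return d_[i];
    }
    double operator[](std::size_t i) const {
        if (i >= d_.size()) throw std::out_of_range("Array index");
        return d_[i];
    }
    std::size_t size() const { return d_.size(); }

    Array& operator+=(const Array& o) {              // elementwise operator syntax
        for (std::size_t i = 0; i < d_.size(); ++i) d_[i] += o.d_[i];
        return *this;
    }
};
```

The per-access bounds test and the temporaries generated by compound expressions are exactly the time and memory costs referred to above, which is why a library class that compilers may recognize as built-in is attractive.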
Sandia National Laboratories Occupational Medicine Center has primary responsibility for industrial medicine services, applied epidemiology, workers' compensation and sickness absence benefit management, Human Studies Board, employee assistance, and health promotion. Each discipline has unique needs for data management, standard and ad hoc reporting, and data analysis. The Medical Organization has established a local area network as the preferred computing environment to meet these diverse needs. Numerous applications have been implemented on the LAN supporting some 80 users.
The PANDA code is used to build a multiphase equation of state (EOS) table for iron. Separate EOS tables were first constructed for each of the individual phases. The phase diagram and multiphase EOS were then determined from the Helmholtz free energies. The model includes four solid phases ([alpha],[gamma], [delta], and [var epsilon]) and a fluid phase (including the liquid, vapor, and supercritical regions). The model gives good agreement with experimental thermophysical data, static compression data, phase boundaries, and shock-wave measurements. Contributions from thermal electronic excitation, computed from a quantum-statistical-mechanical model, were found to be very important. This EOS covers a wide range of densities (0--1000 g/cm[sup 3]) and temperatures (0--1.2[times]10[sup 7] K). It is also applicable to RHA steel. The new EOS is used in hydrocode simulations of plate impact experiments, a nylon ball impact on steel, and the shaped charge perforation of an RHA plate. The new EOS table can be accessed through the SNL-SESAME library as material number 2150.
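Determining the phase diagram from Helmholtz free energies reduces, at each state point, to selecting the phase of lowest free energy; phase boundaries fall where two phases' free energies are equal. The following sketch is illustrative, with placeholder free-energy functions rather than the PANDA models:

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

// One candidate phase is represented by its Helmholtz free energy F(rho, T).
using FreeEnergy = double (*)(double rho, double T);

// Equilibrium phase at (rho, T) is the candidate with the minimum free energy.
std::size_t equilibrium_phase(const std::vector<FreeEnergy>& phases,
                              double rho, double T) {
    std::size_t best = 0;
    double fmin = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < phases.size(); ++i) {
        double f = phases[i](rho, T);
        if (f < fmin) { fmin = f; best = i; }
    }
    return best;
}
```

In a multiphase EOS table this selection is repeated over the full (density, temperature) grid, with the solid phases and the fluid phase as the candidates.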
To allow more reliable estimates to be made of the amount of water that permeates through weapon environmental seals, we have generated extensive water permeability coefficient data for numerous o-ring materials, including weapon-specific formulations of EPDM, butyl, fluorosilicone and silicone. For each material, data were obtained at several temperatures, ranging typically from 21[degrees]C to 80[degrees]C; for selected materials, the effect of relative humidity was monitored. Two different experimental techniques were used for most of the measurements, a permeability cup method and a weight gain/loss approach using a sensitive microbalance. Good agreement was found between the results from the two methods, adding confidence to the reliability of the measurements. Since neither of the above methods was sufficiently sensitive to measure the water permeability of the butyl material at low temperatures, a third method, based on the use of a commercial instrument which employs a water-sensitive infrared sensor, was applied under these conditions.
ETPRE is a preprocessor for the Event Progression Analysis Code EVNTRE. It reads an input file of event definitions and writes the lengthy EVNTRE code input files. ETPRE's advantage is that it eliminates the error-prone task of manually creating or revising these files, since their formats are quite elaborate. The user-friendly format of ETPRE differs from the EVNTRE code format in that questions, branch references, and other event tree components are defined symbolically instead of numerically. When ETPRE is executed, these symbols are converted to their numeric equivalents and written to the output files using the format defined in the EVNTRE Reference Manual. Revisions to event tree models are simplified by allowing the user to edit the symbolic format and rerun the preprocessor, since questions, branch references, and other symbols are automatically resequenced to their new values with each execution.
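The symbolic-to-numeric conversion can be sketched as a first-appearance numbering of symbols, which is what makes automatic resequencing on each run possible. The function and symbol names here are hypothetical, not ETPRE's actual format:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Assign each symbolic name a number in order of first appearance, and map
// every later reference to that number. Editing the symbolic file and rerunning
// the preprocessor regenerates all numeric references consistently.
std::vector<int> number_symbols(const std::vector<std::string>& refs) {
    std::map<std::string, int> id;
    std::vector<int> out;
    for (const auto& s : refs) {
        auto it = id.find(s);
        if (it == id.end())
            it = id.emplace(s, static_cast<int>(id.size()) + 1).first;
        out.push_back(it->second);
    }
    return out;
}
```

Inserting a new question simply shifts the numbers of everything defined after it, with no hand-editing of numeric cross-references.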
The FALCON Remote Laser Alignment System is used in a high radiation environment to adjust an optical assembly. The purpose of this report is to provide a description of the hardware used and to present the system configuration. Use of the system has increased the reliability and reproducibility of data as well as significantly reducing personnel radiation exposure. Based upon measured radiation dose, radiation exposure was reduced by at least a factor of two after implementing the remote alignment system.
A statistical analysis of test results on 1000 transportation and storage casks revealed the main parameters that determine the properties of DI (ductile iron, a special form of cast iron). These data were used to establish a test program in which the mechanical properties (particularly fracture toughness) of 24 DI alloys were determined as a function of their microstructure. Furthermore, the analysis emphasized the effect of test specimen size and different test data evaluation methods. Results of the test program show the prominent effect of pearlite content and graphite nodule structure on the mechanical and fracture toughness characteristics of DI. As the first-order parameter, the pearlite content is responsible for the transition from linear-elastic to elastic-plastic material behavior. The structure of the graphite nodules has a strong effect on the magnitude of the material property values. On the lower shelf, materials with small, homogeneously distributed graphite nodules show higher K{sub IC}-values (matrix-oriented fracture). On the upper shelf, materials with larger graphite nodules show higher fracture toughness (graphite-oriented fracture). With smaller specimens, conservative values were calculated on the upper shelf. This is important for transportation and storage containers of radioactive materials.
The Westinghouse AP600 plant is one of a number of new reactor plant concepts being proposed by industry. One of the unique design features of the AP600 plant is the method by which the containment is cooled during a reactor accident. Through the passive containment cooling system (PCCS), the containment steel shell is passively cooled by natural convection of air and by water film evaporation from the shell exterior surface. In this study an analysis of the AP600 plant was conducted for postulated design basis accident (DBA) and severe accident scenarios using the NRC containment code CONTAIN2 with new code enhancements to model water film transport and evaporation on the exterior of the containment shell.
A 1:6-scale model of a nuclear reactor containment was built and tested at Sandia National Laboratories as part of a research program sponsored by the Nuclear Regulatory Commission to investigate containment performance under severe overpressurization. The overpressure test was terminated due to leakage from a large tear in the steel liner. A limited destructive examination of the liner and anchorage system, conducted to gain information about the failure mechanism, is described. Sections of liner were removed in areas where liner distress was evident or where large strains were indicated by instrumentation during the test. The condition of the liner, anchorage system, and concrete for each of the regions that were investigated is described. The probable cause of the observed posttest condition of the liner is discussed.
This report discusses the testing and evaluation of five commercially available interior video motion detection (VMD) systems. Three digital VMDs and two analog VMDs were tested. The report focuses on nuisance alarm data and on intrusion detection results. Tests were conducted in a high-bay (warehouse) location and in an office.
An efficient electron-photon Monte Carlo model, taking advantage of approximate periodicity in repetitive satellite structures, is employed to benchmark a more approximate code and to study the shielding effect of a honeycomb-like structure.
This paper represents a review copy of text to be included in the Shock and Vibration Recommended Practice Document. The section on pyroshock is written as a general introduction to and description of the topic, leading to the presentation of an extensive bibliography on the subject. Pyroshock is an evolving science that needs continued focus both on achieving improvements in testing and measurement techniques and on advancing instrumentation capabilities. As desired in the near future, recommended practices can be presented. Pyroshock phenomena are associated with the separation systems of missiles, spacecraft, and, in some cases, airplanes. During launch, a rocket-driven vehicle may be exposed to 19 to 30 g's of acceleration with predominant frequencies less than 200 Hz. After launch or takeoff, sections or parts of vehicles may be separated rapidly using explosive-driven release mechanisms. Separations can involve stage disconnections, spacecraft section separations, or payload ejections from missiles and airplanes. At separation, localized pyrotechnically induced accelerations may range from 1000 to over 100,000 g's at frequencies much higher than 1000 Hz. These pyroshocks are characterized by high-intensity, high-frequency transients that decay rapidly. Pyroshock impulses have insignificant velocity changes.
A robotic precursor mission to the lunar surface is proposed. The objective of the mission is to place six to ten 15-kg micro-rovers on the Moon to investigate equipment left behind during the Apollo missions and to perform other science and exploration duties. The micro-rovers are teleoperated from Earth. The equipment on the rovers is existing technology from NASA, DOE, SDIO, DoD, and industry. The mission is designed to involve several NASA centers, the National Laboratories, multiple universities, and the private sector. A major long-term goal addressed is the educational outreach aspect of space exploration.
A high-mobility platform integrated with high-strength manipulation is under development at Sandia National Laboratories. The mobility platform used is a High Mobility Multipurpose Wheeled Vehicle (HMMWV). Manipulation is provided by two Titan 7F Schilling manipulators integrated onboard the HMMWV. The current state of development is described and future plans are discussed.
Controlled impact experiments have been performed on concrete to determine dynamic material properties. The properties assessed include the high-strain-rate yield strength (Hugoniot elastic limit) and details of the inelastic dynamic stress-versus-strain response of the concrete. The latter features entail the initial void-collapse modulus, the high-stress limiting void-collapse strain, and the stress-amplitude dependence of the deformational wave which loads the concrete from the elastic limit to the maximum dynamic stress state. Dynamic stress-versus-strain data are reported over the stress range of the data, from the Hugoniot elastic limit to in excess of 2 GPa.
Rutherford backscattering spectrometry (RBS), elastic recoil detection (ERD), proton-induced x-ray emission (PIXE), and nuclear reaction analysis (NRA) are among the most commonly used, or traditional, ion beam analysis (IBA) techniques. In this review, several adaptations of these IBA techniques are described in which either the analysis approach or the application area is clearly non-traditional or unusual; these analyses and applications are summarized in this paper.
MOSFETs historically have exhibited large 1/f noise magnitudes because of carrier-defect interactions that cause the number of channel carriers and their mobility to fluctuate. Uncertainty in the type and location of the defects that lead to the observed noise has made it difficult to optimize MOSFET processing to reduce the level of 1/f noise. This has limited one's options when designing devices or circuits (high-precision analog electronics, preamplifiers, etc.) for low-noise applications at frequencies below {approximately}10--100 kHz. We have performed detailed comparisons of the low-frequency 1/f noise of MOSFETs manufactured with radiation-hardened and non-radiation-hardened processing. We find that the same techniques which reduce the amount of MOSFET radiation-induced oxide-trap charge can also proportionally reduce the magnitude of the low-frequency 1/f noise of both unirradiated and irradiated devices. MOSFETs built in radiation-hardened device technologies show noise levels up to a factor of 10 or more lower than standard commercial MOSFETs of comparable dimensions, and our quietest MOSFETs show noise magnitudes that approach the low noise levels of JFETs.
Three-dimensional (3D) seismic technology is regarded as one of the most significant improvements in oil exploration technology to come along in recent years. This report provides an assessment of its likely long-term effect on the world oil price and some possible implications for the firms and countries that participate in the oil market. The potential reduction in average finding costs expected from the use of 3D seismic methods, and the potential effects these methods may have on the world oil price, were estimated. Three-dimensional seismic technology is likely to have a more important effect on the stability of oil prices than on their level. The competitive position of US oil production will not be affected by 3D seismic technology.
A programming tool has been developed to allow detailed analysis of Fortran programs for massively parallel architectures. The tool obtains counts of various arithmetic, logical, and input/output operations by data type, as desired by the user. The tool operates on complete programs and recognizes user-defined and intrinsic language functions as operations that may be counted. The subset of functions recognized by the tool, STOPCNTR, can be extended by altering the input data sets; this feature facilitates analysis of programs targeted for different architectures. The basic usage and operation of the tool are described, along with the more important data structures and more interesting algorithmic aspects, before future directions in the continued development of the tool are identified and STOPCNTR's inherent advantages and disadvantages are discussed.
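The counting approach described above can be illustrated with a minimal sketch (a hypothetical stand-in, not the actual STOPCNTR implementation): walk a program's expression tree and tally recognized operations by type, with the recognized set kept as data so it can be extended, much as STOPCNTR's function subset is extended through its input data sets.

```python
# Minimal sketch of operation counting over an expression tree.
# Illustrative only; names and the tree representation are assumptions.
from collections import Counter

# The set of recognized operations is plain data, so it can be
# extended for a different target architecture without code changes.
RECOGNIZED = {"+", "-", "*", "/", "sqrt", "max"}

def count_ops(expr, counts=None):
    """Tally recognized operations in a nested-tuple expression.

    An expression is either a leaf (variable/constant) or a tuple
    of the form (operator, operand, operand, ...).
    """
    if counts is None:
        counts = Counter()
    if isinstance(expr, tuple):
        op, *args = expr
        if op in RECOGNIZED:
            counts[op] += 1
        for arg in args:
            count_ops(arg, counts)
    return counts

# y = sqrt(a*a + b*b)  ->  counts: * -> 2, sqrt -> 1, + -> 1
expr = ("sqrt", ("+", ("*", "a", "a"), ("*", "b", "b")))
print(count_ops(expr))
```

Extending `RECOGNIZED` with new intrinsic or user-defined function names is all that is needed to count them, which mirrors the data-driven extensibility described in the abstract.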
Tensile properties were measured for nineteen different formulations of epoxy encapsulating materials. The formulations were different combinations of two neat resins (Epon 828 and Epon 826, with and without CTBN modification), three fillers (ALOX, GNM, and mica), and four hardeners (Z, DEA, DETDA-SA, and ANH-2). Five of the formulations were tested at -55, -20, 20, and 60 C; one formulation at -55, 20, and 71 C; and the remaining formulations at 20 C. Complete stress-strain curves are presented along with tables of tensile strength, initial modulus, and Poisson's ratio. The stress-strain responses are nonlinear and temperature dependent. The reported data provide information for comparing the mechanical properties of encapsulants containing the suspected carcinogen Shell Z with the properties of encapsulants containing noncarcinogenic hardeners. Also, calculated shear moduli, based on measured tensile moduli and Poisson's ratio, are in very good agreement with shear moduli reported from experimental torsional pendulum tests.
Sandia National Laboratories is currently involved in the optimization of a Plane Shock Generator Explosive Lens (PSGEL). The PSGEL component is designed to generate a planar shock wave that is transmitted through a steel bulkhead to perform a function without rupturing or destroying the integrity of the bulkhead. The PSGEL component consists of a detonator, explosive, brass cone, and tamper housing. The purpose of the PSGEL component is to generate a plane shock wave input to a 4340 steel bulkhead (wave separator) with a ferroelectric (PZT) ceramic disk attached to the steel on the surface opposite the PSGEL. The planar shock wave depolarizes the PZT 65/35 ferroelectric ceramic to produce an electrical output. Elastic, plastic I, and plastic II waves with different velocities are generated in the steel bulkhead. The depolarization of the PZT ceramic is produced by the elastic wave of specific amplitude (10--20 kilobars), and this process must be completed before (about 0.15 microseconds) the first plastic wave arrives at the PZT ceramic. Measured particle velocity versus time profiles, obtained using a Velocity Interferometer System for Any Reflector (VISAR), are presented for the brass and steel output free surfaces. Peak pressures are calculated from the particle velocities for the elastic, plastic I, and plastic II waves in the steel. The work presented here investigates replacing the current 4340 steel with PH 13-8 Mo stainless steel in order to have a more corrosion-resistant, weldable, and more compatible material for the multi-year life of the component. Accordingly, particle velocity versus time profile data are presented comparing the 4340 steel and PH 13-8 Mo stainless steel. Additionally, in order to reduce the amount of explosive, data are presented to show that LX-13 can replace PBX-9501 explosive to produce more desirable results.
Previous studies in this laboratory have demonstrated that DMBA alters biochemical events associated with lymphocyte activation, including formation of the second messenger IP{sub 3} and the release of intracellular Ca{sup 2+}. The purpose of the present studies was to evaluate the mechanisms by which DMBA induces IP{sub 3} formation and Ca{sup 2+} release by examining phosphorylation of membrane-associated proteins and activation of the protein tyrosine kinases lck and fyn. These studies demonstrated that exposure of HPB-ALL cells to 10 {mu}M DMBA resulted in a time- and dose-dependent increase in tyrosine phosphorylation of PLC-{gamma}1 that correlated with our earlier findings of IP{sub 3} formation and Ca{sup 2+} release. These results indicate that the effects of DMBA on the PI-PLC signaling pathway are, in part, the result of DMBA-induced tyrosine phosphorylation of the PLC-{gamma}1 enzyme. The DMBA-induced tyrosine phosphorylation of PLC-{gamma}1 may be due to activation of fyn or lck kinase activity, since it was found that DMBA increased the activity of these PTKs by more than 2-fold. Therefore, these studies demonstrate that DMBA may disrupt T cell activation by stimulating PTK activation with concomitant tyrosine phosphorylation of PLC-{gamma}1, release of IP{sub 3}, and mobilization of intracellular Ca{sup 2+}.
The Natural Excitation Technique (NExT) is a method of modal testing that allows structures to be tested in their ambient environments. This report is a compilation of developments and results since 1990, and contains a new theoretical derivation of NExT as well as a verification using analytically generated data. In addition, we compare results from NExT with conventional modal testing for a parked, vertical-axis wind turbine; for a rotating turbine, NExT is used to calculate the modal parameters as functions of the rotation speed, since substantial damping is derived from the aeroelastic interactions during operation. Finally, we compare experimental results calculated using NExT with analytical predictions of damping from aeroelastic theory.
Characteristics of a long-pulse, low-pump-rate, atomic xenon (XeI) laser are described. Energy loading up to 170 mJ/cc at pulse widths between 5 and 55 ms is achieved with an electron beam in transverse geometry. The small-signal gain coefficient, loss coefficient, and saturation intensity are inferred from a modified Rigrod analysis. For pump rates between 12 and 42 W/cc, the small-signal gain coefficient varies between 0.64 and 0.91%/cm, the loss coefficient varies between 0.027 and 0.088%/cm, and the saturation intensity varies between 61 and 381 W/cm{sup 2}. Laser energy as a function of pulse width and the effects of air and CO{sub 2} impurities are described. The intrinsic laser energy efficiency has a maximum at a pulse width of 10 ms, corresponding to a pump rate of 1.6 W/cc. No maximum is observed in the intrinsic power efficiency. A drastic reduction of laser output power is observed for impurity concentrations greater than {approx}0.01%. An investigation of the dominant laser wavelength in a high-Q cavity indicates that the 2.6-{mu}m radiation dominates. A comparison of dominant wavelength with reactor-pumped results indicates good agreement when the same cavity optics are used.
This Executive Summary presents the methodology for determining containment requirements for spent-fuel transport casks under normal and hypothetical accident conditions. Three sources of radioactive material are considered: (1) the spent fuel itself, (2) radioactive material, referred to as CRUD, attached to the outside surfaces of fuel rod cladding, and (3) residual contamination adhering to interior surfaces of the cask cavity. The methodologies for determining the concentrations of freely suspended radioactive materials within a spent-fuel transport cask for these sources are discussed in much greater detail in three companion reports: "A Method for Determining the Spent-Fuel Contribution to Transport Cask Containment Requirements," "Estimate of CRUD Contribution to Shipping Cask Containment Requirements," and "A Methodology for Estimating the Residual Contamination Contribution to the Source Term in a Spent-Fuel Transport Cask." Examples of cask containment requirements that combine the individually determined containment requirements for the three sources are provided, and conclusions from the three companion reports to this Executive Summary are presented.
This report discusses recent efforts to characterize the flow and density nonuniformities downstream of heated screens placed in a uniform flow. The Heated Screen Test Facility (HSTF) at Sandia National Laboratories and the Lockheed Palo Alto Flow Channel (LPAFC) were used to perform experiments over wide ranges of upstream velocities and heating rates. Screens of various mesh configurations were examined, including multiple screens sequentially positioned in the flow direction. Diagnostics in these experiments included pressure manometry, hot-wire anemometry, interferometry, Hartmann wavefront slope sensing, and photorefractive schlieren photography. A model was developed to describe the downstream evolution of the flow and density nonuniformities. Equations for the spatial variation of the mean flow quantities and the fluctuation magnitudes were derived by incorporating empirical correlations into the equations of motion. Numerical solutions of these equations are in fair agreement with previous and current experimental results.
Two heliostats representing the state of the art in glass-metal designs for central receiver (and photovoltaic tracking) applications were tested and evaluated at the National Solar Thermal Test Facility in Albuquerque, New Mexico, from 1986 to 1992. These heliostats have collection areas of 148 and 200 m{sup 2} and represent low-cost designs for heliostats that employ glass-metal mirrors. The evaluation encompassed the performance and operational characteristics of the heliostats, and examined heliostat beam quality, the effect of elevated winds on beam quality, heliostat drives and controls, mirror module reflectance and durability, and the overall operation and maintenance characteristics of the two heliostats. A comprehensive summary of the results of these and other tests is presented, prefaced by a review of the development of heliostat technology in the United States.
Shipping containers for radioactive materials must be qualified to meet a thermal accident environment specified in regulations such as Title 10, Code of Federal Regulations, Part 71. Aimed primarily at the shipping container designer, this report discusses the thermal testing options available for meeting the regulatory requirements and states the advantages and disadvantages of each approach. The principal options considered are testing with radiant heat, furnaces, and open pool fires. The report also identifies some of the facilities available and current contacts. Finally, the report makes some recommendations on the appropriate use of these different testing methods.
Within the Yucca Mountain Site Characterization Project, the design of drifts and ramps and evaluation of the impacts of thermomechanical loading of the host rock require definition of the rock mass mechanical properties. Ramps and exploratory drifts will intersect both welded and nonwelded tuffs with varying abundances of fractures. The rock mass mechanical properties depend on the intact rock properties and the fracture and joint characteristics. An understanding of the effects of fractures on the mechanical properties of the rock mass begins with a detailed description of fracture spatial location and abundance, and includes a description of their physical characteristics. This report presents a description of the abundance, orientation, and physical characteristics of fractures, and of the Rock Quality Designation, in the thermomechanical stratigraphic units at the Yucca Mountain site. Data were reviewed from existing sources and used to develop descriptions for each unit. The product of this report is a data set of the best available information on the fracture characteristics.
In this paper we consider the problem of interprocessor communication on a Completely Connected Optical Communication Parallel Computer (OCPC). The particular problem we study is that of realizing an h-relation, in which each processor has at most h messages to send and at most h messages to receive. It is clear that any 1-relation can be realized in one communication step on an OCPC. However, the best known p-processor OCPC algorithm for realizing an arbitrary h-relation for h > 1 requires {Theta}(h + log p) expected communication steps. (This algorithm is due to Valiant and is based on earlier work of Anderson and Miller.) Valiant's algorithm is optimal only for h = {Omega}(log p), and it is an open question of Gereb-Graus and Tsantilas whether there is a faster algorithm for h = o(log p). In this paper we answer this question in the affirmative by presenting a {Theta}(h + log log p) communication step algorithm that realizes an arbitrary h-relation on a p-processor OCPC. We show that if h {le} log p, then the failure probability can be made as small as p{sup -{alpha}} for any positive constant {alpha}.
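For intuition about why realizing an h-relation takes more than one step, the following toy simulation implements naive randomized direct transmission on an OCPC: each step, every sender with pending messages transmits one chosen at random, and a message is delivered only when its receiver hears exactly one transmission that step. This is purely illustrative; it is not the {Theta}(h + log log p) algorithm of the paper, and the message list is an assumed example.

```python
import random

def realize_h_relation(messages, rng=random.Random(1)):
    """Simulate naive randomized direct transmission on an OCPC.

    messages: distinct (sender, receiver) pairs forming an h-relation.
    Each step, every sender with pending messages transmits one at
    random; a transmission succeeds only if its receiver hears exactly
    one message that step (colliding messages are all retried).
    Returns the number of communication steps used.
    """
    pending = list(messages)
    steps = 0
    while pending:
        steps += 1
        # Each sender picks one of its pending messages to transmit.
        by_sender = {}
        for msg in pending:
            by_sender.setdefault(msg[0], []).append(msg)
        sent = [rng.choice(msgs) for msgs in by_sender.values()]
        # Count transmissions per receiver; only singletons succeed.
        hits = {}
        for _, receiver in sent:
            hits[receiver] = hits.get(receiver, 0) + 1
        delivered = {msg for msg in sent if hits[msg[1]] == 1}
        pending = [msg for msg in pending if msg not in delivered]
    return steps

# A 2-relation on 4 processors: each sends and receives at most 2.
msgs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 0)]
print(realize_h_relation(msgs), "steps")
```

Since receivers 2 and 3 each have two incoming messages, at least two steps are unavoidable; collisions typically add more, which is the contention the paper's algorithm is designed to control.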
Strained-layer semiconductors have revolutionized modern heterostructure devices by exploiting the modification of semiconductor band structure associated with the coherent strain of lattice-mismatched heteroepitaxy. The modified band structure improves the transport of holes in heterostructures and enhances the operation of semiconductor lasers. Strained-layer epitaxy can also create materials whose band gaps match wavelengths (e.g., 1.06 μm and 1.32 μm) not attainable in ternary epitaxial systems lattice-matched to binary substrates. Other benefits arise from the metallurgical effects of modulated strain fields on dislocations. Lattice-mismatched epitaxial layers that exceed the limits of equilibrium thermodynamics will degrade under sufficient thermal processing, converting the as-grown coherent epitaxy into a network of strain-relieving dislocations. After presenting the effects of strain on band structure, we describe the stability criterion for rapid thermal processing of strained-layer structures and the effects of exceeding the thermodynamic limits. Finally, device results are reviewed for structures that benefit from high-temperature processing of strained-layer superlattices.
When an object is subjected to the flow of combustion gas at a different temperature, the thermal responses of the object and the surrounding gas become coupled. The ability to model this interaction is of primary interest in the design of components which must withstand fire environments. One approach has been to decouple the problem and treat the incident flux on the surface of the object as being emitted from a blackbody at an approximate gas temperature. By neglecting the presence of the participating media, this technique overpredicts the heat fluxes initially acting on the object surface. The main goal of this work is to quantify the differences inherent in treating the combustion media as a blackbody as opposed to a gray gas. This objective is accomplished by solving the coupled participating media radiation and conduction heat transfer problem. A transient conduction analysis of a vertical flat plate was performed using a gray gas model to provide a radiation boundary condition. A 1-D finite difference algorithm was used to solve the conduction problem at locations along the plate. The results are presented in terms of nondimensional parameters and include both average and local heat fluxes as a function of time. Early in the transient, a reduction in net heat fluxes of up to 65% was observed for the gray gas results as compared to the blackbody cases. This reduction in the initial net heat flux results in lower surface temperatures for the gray gas case. Due to the initially reduced surface temperatures, the gray gas net heat flux exceeds the net blackbody heat flux with increasing time. For radiation Biot numbers greater than 5, or values of the radiation parameter less than 10{sup -2}, the differences inherent in treating the media as a gray gas are negligible and the blackbody assumption is valid.
Overall, the results clearly indicate the importance of participating media treatment in the modeling of the thermal response of objects in fires and large combustion systems.
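The blackbody-versus-gray-gas comparison can be illustrated numerically. In this minimal sketch the surface is treated as black and the gas absorptivity is assumed equal to its emissivity; the temperatures and the gas emissivity of 0.35 are hypothetical values chosen for illustration, not taken from the report.

```python
# Initial net radiative flux to a cold surface: blackbody vs gray gas.
# Simplifying assumptions: black surface, gas absorptivity = emissivity.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_flux(t_gas, t_surf, eps_gas=1.0):
    """Net radiative flux from gas to a black surface, W/m^2."""
    return eps_gas * SIGMA * (t_gas**4 - t_surf**4)

t_gas, t_surf = 1273.0, 300.0           # K (hypothetical fire and plate)
q_black = net_flux(t_gas, t_surf)        # blackbody treatment of the media
q_gray = net_flux(t_gas, t_surf, 0.35)   # gray gas, hypothetical emissivity
print(f"blackbody {q_black/1e3:.1f} kW/m^2, gray {q_gray/1e3:.1f} kW/m^2")
print(f"initial blackbody overprediction: {(1 - q_gray/q_black):.0%}")
```

Under these assumptions the blackbody treatment overpredicts the initial net flux by exactly 1 - eps = 65%; as the surface heats up, the gap narrows and can reverse, consistent with the transient behavior described above.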
This paper gives an estimate of the cost to produce electricity from hot-dry rock (HDR). Employment of the energy in HDR for the production of electricity requires drilling multiple wells from the surface to the hot rock, connecting the wells through hydraulic fracturing, and then circulating water through the fracture system to extract heat from the rock. The basic HDR system modeled in this paper consists of an injection well, two production wells, the fracture system (or HDR reservoir), and a binary power plant. Water is pumped into the reservoir through the injection well where it is heated and then recovered through the production wells. Upon recovery, the hot water is pumped through a heat exchanger transferring heat to the binary, or working, fluid in the power plant. The power plant is a net 5.1-MW{sub e} binary plant employing dry cooling. Make-up water is supplied by a local well. In this paper, the cost of producing electricity with the basic system is estimated as the sum of the costs of the individual parts. The effects on cost of variations to certain assumptions, as well as the sensitivity of costs to different aspects of the basic system, are also investigated.
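The "sum of the costs of the individual parts" approach lends itself to a simple one-at-a-time sensitivity sketch. All component costs, the fixed charge rate, the O&M cost, and the capacity factor below are placeholder values for illustration only, not figures from the report.

```python
# Sketch of sum-of-parts costing with one-at-a-time sensitivity.
# All dollar figures and rates are hypothetical placeholders.
def levelized_cost(component_costs, annual_mwh,
                   fixed_charge_rate=0.10, annual_om=0.0):
    """Levelized cost of electricity, $/MWh (simplified annualization)."""
    capital = sum(component_costs.values())
    return (capital * fixed_charge_rate + annual_om) / annual_mwh

parts = {  # hypothetical capital costs, $
    "injection well": 3.0e6,
    "production wells": 6.0e6,
    "reservoir (fracturing)": 4.0e6,
    "binary power plant": 10.0e6,
}
mwh = 5.1 * 8760 * 0.90  # net 5.1 MWe at an assumed 90% capacity factor
base = levelized_cost(parts, mwh, annual_om=1.0e6)
print(f"base: ${base:.0f}/MWh")

# One-at-a-time sensitivity: raise each component cost by 20%.
for name in parts:
    perturbed = {**parts, name: parts[name] * 1.2}
    delta = levelized_cost(perturbed, mwh, annual_om=1.0e6) - base
    print(f"+20% {name}: +${delta:.2f}/MWh")
```

The loop shows which part of the basic system the cost estimate is most sensitive to, mirroring the sensitivity analysis described in the abstract.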
We report our progress on the physical optics modelling of the Sandia/AT&T SXPL experiments. The code is benchmarked, and the 10X Schwarzschild system is being studied.
Parallel computers are becoming more powerful and more complex in response to the demand for computing power from scientists and engineers. Inevitably, new and more complex I/O systems will be developed for these machines. In particular, we believe that the I/O system must provide the programmer with the ability to explicitly manage storage (despite the trend toward complex parallel file systems and caching schemes). One method of doing so is a partitioned secondary storage in which each processor owns a logical disk. Combined with operating-system enhancements that avoid overheads such as buffer copying, and with libraries that support optimal remapping of data, this sort of I/O system meets the needs of high-performance computing.
The design-basis, defense-related, transuranic waste to be emplaced in the Waste Isolation Pilot Plant may, if sufficient H{sub 2}O, nutrients, and viable microorganisms are present, generate significant quantities of gas in the repository after filling and sealing. We summarize recent results of laboratory studies of anoxic corrosion and microbial activity, the most potentially significant processes. We also discuss possible implications for the repository gas budget.