A classic model of aerosol scrubbing from bubbles rising through water is applied to the decontamination of gases produced during core debris interactions with concrete. The model, originally developed by Fuchs, describes aerosol capture by diffusion, sedimentation, and inertial impaction. This original model for spherical bubbles is modified to account for ellipsoidal distortion of the bubbles. Eighteen uncertain variables are identified in the application of the model to the decontamination of aerosols produced during core debris interactions with concrete by a water pool of specified depth and subcooling. These uncertain variables include properties of the aerosols, the bubbles, the water and the ambient pressure. Ranges for the values of the uncertain variables are defined based on the literature and experience. Probability density functions for values of these uncertain variables are hypothesized. The model of decontamination is applied in a Monte Carlo sampling of the decontamination by pools of specified depth and subcooling. Results are analyzed using a nonparametric, order-statistical analysis that allows quantitative differentiation of stochastic and phenomenological uncertainty. The sampled values of the decontamination factors are used to construct estimated probability density functions for the decontamination factor at confidence levels of 50%, 90% and 95%. The decontamination factors for pools 30, 50, 100, 200, 300, and 500 cm deep and subcooling levels of 0, 2, 5, 10, 20, 30, 50, and 70°C are correlated by simple polynomial regression. These polynomial equations can be used to estimate decontamination factors at prescribed confidence levels.
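As a rough illustration of the order-statistic treatment described above, the sketch below (Python) samples hypothetical uncertain inputs, sorts the resulting decontamination factors, and selects the order statistic that bounds a chosen quantile at the 50%, 90%, and 95% confidence levels. The two-variable toy decontamination-factor model, the input distributions, and the sample size are illustrative assumptions only; they are not the Fuchs-based model or the eighteen variables of the study.

```python
# Minimal sketch of a Monte Carlo sampling with a nonparametric order-statistic
# confidence bound. All distributions and the toy DF model are assumptions.
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n = 1000  # number of Monte Carlo samples (assumed)

# Hypothetical uncertain inputs (placeholders for the study's 18 variables)
particle_diam_um = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # aerosol size
bubble_diam_cm   = rng.uniform(0.5, 1.5, size=n)                # bubble size
pool_depth_cm    = 100.0                                        # fixed case

def toy_df(d_p, d_b, depth):
    """Placeholder decontamination-factor model (NOT the Fuchs-based model)."""
    return np.exp(0.01 * depth * d_p / d_b)

df_samples = np.sort(toy_df(particle_diam_um, bubble_diam_cm, pool_depth_cm))

def lower_rank(n, q, conf):
    """Largest 1-based rank m whose order statistic X_(m) is a lower confidence
    bound on the q-th population quantile at level `conf`,
    i.e. P(Binomial(n, q) >= m) >= conf."""
    pmf = [comb(n, j) * q**j * (1 - q)**(n - j) for j in range(n + 1)]
    tail = 0.0
    for m in range(n, 0, -1):
        tail += pmf[m]
        if tail >= conf:
            return m
    return 1

for conf in (0.50, 0.90, 0.95):
    m = lower_rank(n, q=0.10, conf=conf)
    print(f"{conf:.0%} confidence lower bound on 10th-percentile DF:",
          df_samples[m - 1])
```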
Sandia National Laboratories has recently placed into production a mass storage system based on the UniTree™ Central File Manager software. This paper describes the current status of the system. Background information on the selection criteria is given and the hardware and software configurations are shown. The system has been in production since April 1992, and the usage and performance statistics obtained thus far are presented.
In previous years, a suite of interim models had been developed for the CONTAIN code for analyzing direct containment heating (DCH) accidents. The initial development and application of these DCH models are described in a previous WRS paper. While useful, these interim models were incomplete and highly parametric. The parametric nature of the interim CONTAIN DCH models was necessary at the time because of the lack of relevant DCH experimental data, and to facilitate sensitivity studies aimed at improving our understanding of the most important governing processes in a DCH event. However, our understanding of DCH phenomenology today is significantly improved from when the interim DCH models were developed. This understanding largely stems from recently completed NRC-sponsored DCH experiments at Sandia National Laboratories and Argonne National Laboratory. New models have been developed and added to the CONTAIN code to reflect this improved understanding of DCH. The purpose of this paper is to describe the new DCH models in CONTAIN and to demonstrate them by comparing simplified calculations against relevant DCH test data. This paper extends the preliminary descriptions of the DCH model improvements presented in the 19th WRS paper. The new models that have been added to CONTAIN for analyzing DCH are briefly discussed below, along with the motivation and/or basis for each improvement. The models are described in greater detail in the full paper.
This report describes a full-screen menu system developed using IBM's Interactive System Productivity Facility (ISPF) and the REXX programming language. The software was developed for the 2800 IBM/VM Electrical Computer Aided Design (ECAD) system to deliver electronic drawing definitions to a corporate drawing release system. Although this report documents the status of the menu system when it was retired, the methodologies used and the requirements defined are very applicable to replacement systems.
A critical enabling technology in the evolutionary development of nuclear thermal propulsion (NTP) is the ability to predict the system performance under a variety of operating conditions. Since October 1991, the US Department of Energy (DOE), the Department of Defense (DOD), and NASA have initiated critical technology development efforts for NTP systems to be used on Space Exploration Initiative (SEI) missions to the Moon and Mars. This paper presents the strategy and progress of an interagency NASA/DOE/DOD team for NTP system modeling. It is the intent of the interagency team to develop several levels of computer programs to simulate various NTP systems. The interagency team was formed for this task to use the best capabilities available and to assure appropriate peer review. The vision and strategy of the interagency team for developing NTP system models are discussed in this paper, and a review of the progress on the Level 1 interagency model is also presented.
Laser two-focus (L2F) velocimetry has been used to measure particle velocities in the Wire Arc Plasma spray process. Particle velocities were measured for aluminum, stainless steel, and copper feedstock with wire diameters of 1.6 mm and 0.9 mm. The Wire Arc Plasma gun was operated in both a single-gas mode, using air, and in a two-gas mode, using a mixture of argon/35% hydrogen as the primary plasma gas with pure argon as the secondary gas. The results indicate that maximum particle velocities are as high as 180 m/s for aluminum sprayed using air and 130 m/s using the argon/hydrogen mixture. The results also show that arc current and wire feed rate have little effect on particle velocity; however, particle velocities increase significantly with decreasing wire diameter and with decreasing density of the feedstock material.
The computer programs SLAAP and DATA are currently being used by Division 2743 for data analysis. These programs had not been previously verified to determine if they were producing correct results. The objective of the study described in this report was to verify these programs by comparing their results to those obtained with GRAFAID, a verified data analysis program. To accomplish this, five acceleration-time histories were selected. For each time history, the shock response spectrum, integral, double integral, derivative and Fourier transform were computed using SLAAP, DATA and GRAFAID. The results of each operation for each time history were overlay plotted for comparison. The results show only minor differences in some cases. These differences are deterministic and are due to differences in the algorithms or block size restrictions of the three programs.
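The sketch below illustrates, in Python with a synthetic signal, the kind of cross-check described above: the same acceleration-time history is integrated and Fourier-transformed with two different numerical schemes and the results overlay-plotted so that algorithm- and block-size-dependent differences become visible. The signal and both schemes are illustrative assumptions; this is not the SLAAP, DATA, or GRAFAID code.

```python
# Overlay comparison of two integration rules and two FFT record lengths for a
# synthetic shock record; differences mimic algorithm/block-size effects.
import numpy as np
import matplotlib.pyplot as plt

dt = 1e-4                                   # sample interval, s (assumed)
t = np.arange(0.0, 0.1, dt)
accel = 100.0 * np.exp(-50.0 * t) * np.sin(2 * np.pi * 500.0 * t)  # synthetic shock

# Integral (velocity) by two rules
vel_trap = np.concatenate(([0.0], np.cumsum(0.5 * (accel[1:] + accel[:-1]) * dt)))
vel_rect = np.cumsum(accel) * dt

# Fourier transform magnitude: full record vs. power-of-two "block size"
freq_full = np.fft.rfftfreq(len(accel), dt)
mag_full = np.abs(np.fft.rfft(accel))
n_block = 2 ** int(np.floor(np.log2(len(accel))))
freq_blk = np.fft.rfftfreq(n_block, dt)
mag_blk = np.abs(np.fft.rfft(accel[:n_block]))

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6))
ax1.plot(t, vel_trap, label="trapezoidal")
ax1.plot(t, vel_rect, "--", label="rectangular")
ax1.set_xlabel("time (s)"); ax1.set_ylabel("velocity"); ax1.legend()
ax2.plot(freq_full, mag_full, label="full record")
ax2.plot(freq_blk, mag_blk, "--", label="power-of-two block")
ax2.set_xlabel("frequency (Hz)"); ax2.set_ylabel("|FFT|"); ax2.legend()
plt.tight_layout(); plt.show()
```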
Experimental and analytical studies have been conducted to investigate gas, particle, and coating dynamics in the vacuum plasma spray (VPS) process for a tungsten powder. The plasma was numerically modeled from the cathode tip to the spray distance in the free plume for the experimental conditions of this study. This information was then used as boundary conditions to solve the particle dynamics. The predicted temperature and velocity of the powder particles at standoff were then used as initial conditions for a coating dynamics code. The code predicts the coating morphology for the specific process parameters. VPS coatings were examined metallographically and the results compared with the model's predictions; the predicted characteristics exhibit good correlation with the observed coating properties.
The Explosive Release Atmospheric Dispersion (ERAD) model is a three-dimensional numerical simulation of turbulent atmospheric transport and diffusion. An integral plume rise technique is used to provide a description of the physical and thermodynamic properties of the cloud of warm gases formed when the explosive detonates. Particle dispersion is treated as a stochastic process which is simulated using a discrete-time Lagrangian Monte Carlo method. The stochastic process approach permits a more fundamental treatment of buoyancy effects, calm winds, and spatial variations in meteorological conditions. Computational requirements of the three-dimensional simulation are substantially reduced by using a conceptualization in which each Monte Carlo particle represents a small puff that spreads according to a Gaussian law in the horizontal directions. ERAD was evaluated against dosage and deposition measurements obtained during Operation Roller Coaster. The predicted contour areas agree with the observations to within about 50% on average. The validation results confirm the model's representation of the physical processes.
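The following sketch illustrates, under assumed wind, turbulence, and puff-growth parameters, the general idea of a discrete-time Lagrangian Monte Carlo dispersion step in which each particle carries a horizontally spreading Gaussian puff. It is a minimal illustration of the approach, not the ERAD formulation.

```python
# Minimal Lagrangian Monte Carlo dispersion sketch with Gaussian-puff particles.
# Wind, turbulence, and puff-growth parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 2000
dt, n_steps = 1.0, 600                      # s
u_wind = np.array([5.0, 0.0, 0.0])          # mean wind, m/s (assumed)
sigma_turb = np.array([0.5, 0.5, 0.3])      # turbulent velocity scale, m/s
puff_growth = 0.2                           # horizontal puff spread rate, m/s

pos = np.zeros((n_particles, 3))            # release at origin
pos[:, 2] = 50.0                            # initial cloud height, m (assumed)
sigma_h = np.full(n_particles, 1.0)         # per-particle Gaussian puff width

for _ in range(n_steps):
    turb = rng.normal(0.0, sigma_turb, size=(n_particles, 3))
    pos += (u_wind + turb) * dt             # drift plus random turbulent step
    pos[:, 2] = np.maximum(pos[:, 2], 0.0)  # reflect-at-ground simplification
    sigma_h += puff_growth * dt             # each particle is a spreading puff

def ground_dosage(x, y, pos, sigma_h, mass_per_particle=1.0):
    """Horizontal Gaussian-puff superposition at receptor (x, y)."""
    r2 = (pos[:, 0] - x) ** 2 + (pos[:, 1] - y) ** 2
    return np.sum(mass_per_particle / (2 * np.pi * sigma_h**2)
                  * np.exp(-0.5 * r2 / sigma_h**2))

print("relative dosage 3 km downwind:", ground_dosage(3000.0, 0.0, pos, sigma_h))
```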
This paper addresses the problem of manipulation planning in the presence of uncertainty. We begin by reviewing the worst-case planning techniques introduced in earlier work and show that these methods are hampered by an information gap inherent to worst-case analysis techniques. As the task uncertainty increases, these methods fail to produce useful information even though a high-quality plan may exist. To fill this gap, we present the probabilistic backprojection, which describes the likelihood that a given action will achieve the task goal from a given initial state. We provide a constructive definition of the probabilistic backprojection and related probabilistic models of manipulation task mechanics, and show how these models unify and enhance several past results in manipulation planning. These models capture the fundamental nature of the task behavior, but appear to be very complex. Methods for computing these models are sketched, but efficient computational methods remain unknown.
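As a minimal illustration of the idea, the sketch below estimates a probabilistic backprojection by Monte Carlo for a toy one-dimensional positioning task: for a fixed action, the probability of reaching an assumed goal interval is estimated as a function of the initial state. The task model, noise levels, and goal region are hypothetical and stand in for the constructions in the paper.

```python
# Monte Carlo estimate of P(goal | action, initial state) for a toy 1-D task.
# GOAL, CONTROL_SIGMA, and SENSE_SIGMA are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

GOAL = (0.9, 1.1)           # goal interval along the motion axis (assumed)
CONTROL_SIGMA = 0.05        # directional/control uncertainty (assumed)
SENSE_SIGMA = 0.02          # termination-sensing uncertainty (assumed)
ACTION = 1.0                # fixed commanded motion distance

def execute(x0, commanded_distance, rng):
    """One simulated execution: noisy motion, then a noisy sensed stop."""
    actual = commanded_distance * (1.0 + rng.normal(0.0, CONTROL_SIGMA))
    x_final = x0 + actual + rng.normal(0.0, SENSE_SIGMA)
    return GOAL[0] <= x_final <= GOAL[1]

def success_probability(x0, commanded_distance, n=5000, rng=rng):
    """Estimated probability that the action reaches the goal from state x0."""
    hits = sum(execute(x0, commanded_distance, rng) for _ in range(n))
    return hits / n

# The probabilistic backprojection of the goal under one fixed action:
# success probability as a function of the initial state.
for x0 in np.linspace(-0.2, 0.2, 5):
    print(f"x0 = {x0:+.2f}  P(goal | action) ~ {success_probability(x0, ACTION):.3f}")
```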
Target recognition requires the ability to distinguish targets from non-targets, a capability called one-class generalization. To function as a one-class classifier, a neural network must have three types of generalization: within-class, between-class, and out-of-class. We discuss these three types of generalization and identify neural network architectures that meet these requirements. We have applied our one-class classifier ideas to the problem of automatic target recognition in synthetic aperture radar. We have compared three neural network algorithms: Carpenter and Grossberg's algorithmic version of the Adaptive Resonance Theory (ART-2A), Kohonen's Learning Vector Quantization (LVQ), and Reilly and Cooper's Restricted Coulomb Energy (RCE) network. The ART-2A neural network has given the best results, with 100% within-class and out-of-class generalization. Experiments show that the network's performance is sensitive to vigilance and to the number of training set presentations.
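A highly simplified, ART-2A-style sketch of the one-class idea is given below: prototypes are learned only from target examples, and a test input is accepted as a target only if its best prototype match exceeds the vigilance threshold. The code is an illustration of the concept under assumed parameters, not the implementation evaluated in the study.

```python
# Simplified ART-2A-style one-class classifier: unit-norm prototypes, winner
# update on resonance, new category on mismatch, vigilance-gated acceptance.
import numpy as np

class SimpleART2A:
    def __init__(self, vigilance=0.9, learning_rate=0.1):
        self.rho = vigilance
        self.beta = learning_rate
        self.prototypes = []        # unit-norm category prototypes

    @staticmethod
    def _normalize(x):
        return x / (np.linalg.norm(x) + 1e-12)

    def train(self, samples, epochs=3):
        for _ in range(epochs):
            for x in samples:
                x = self._normalize(np.asarray(x, dtype=float))
                if not self.prototypes:
                    self.prototypes.append(x.copy())
                    continue
                sims = [float(p @ x) for p in self.prototypes]
                j = int(np.argmax(sims))
                if sims[j] >= self.rho:          # resonance: update the winner
                    p = (1 - self.beta) * self.prototypes[j] + self.beta * x
                    self.prototypes[j] = self._normalize(p)
                else:                            # mismatch: commit a new category
                    self.prototypes.append(x.copy())

    def is_target(self, x):
        """One-class decision: accept only if some prototype matches well."""
        x = self._normalize(np.asarray(x, dtype=float))
        return any(float(p @ x) >= self.rho for p in self.prototypes)

# Toy usage: train on clustered "target" feature vectors, test on an outlier.
rng = np.random.default_rng(3)
targets = rng.normal([1.0, 0.2, 0.1], 0.05, size=(50, 3))
net = SimpleART2A(vigilance=0.95)
net.train(targets)
print("target-like input accepted:", net.is_target([1.0, 0.2, 0.1]))
print("outlier rejected:", not net.is_target([0.1, 1.0, 0.9]))
```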
Microwave parameters drifted significantly for two out of twenty-nine GaAs MESFET-based MMICs during ten weeks of storage at 125°C and 150°C. Analysis using measured post-storage FET characteristics and the microwave behavior indicates that all of the FETs in the MMICs drifted, most likely due to contamination.
Early solution miners encountered occasional difficulties with nonsymmetric caverns (including "wings" and "chimneys"), gas releases, insoluble stringers, and excessive anhydrite "sands." Apparently there was no early recognition of trends for these encounters, although certain areas were avoided after problems appeared consistently within them. Solution mining has now matured, and an accumulation of experience indicates that anomalous salt features occur on a number of Gulf Coast domes. Trends incorporating concentrations of anomalous features will be referred to as "anomalous zones," or AZs (after Kupfer). The main objective of this Project is to determine the effects of AZ encounters on solution-mined caverns and related storage operations in domes. Geological features of salt domes related directly to cavern operations and AZs will be described briefly, but discussions of topics related generally to the evolution of Gulf Coast salt structures are beyond the scope of this Project.
Measurements of the effects of pressure on the thermal electron emission rate and capture cross section for a variety of deep electronic levels in GaAs, GaP, and their alloys have yielded the pressure dependences of the energies of these levels in the bandgaps, have allowed evaluation of the breathing-mode lattice relaxations accompanying carrier emission or capture by these levels, and have revealed trends that lead to new insights into the nature of the responsible defects. Emphasis is on deep levels believed to be associated with simple defects. Specifically, results will be summarized for the donor levels of the dominant native defect in GaAs known as EL2, which is believed to be associated with the arsenic antisite, and for the radiation-induced E1 and E2 levels in GaAs, GaP, and their alloys, which are believed to be due to arsenic (or phosphorus) vacancies. The results are discussed in terms of models for the defects responsible for these deep levels.
Applications for the controlled thermal expansion alloy Fe-29Ni-17Co often require joining by fusion welding processes. In addition, these applications usually require hermetic and high-reliability joints. The small size of typical components normally dictates the use of autogenous welding processes, so the hot cracking tendency of Fe-29Ni-17Co is of concern. The solidification behavior and hot cracking tendency of commercial Fe-29Ni-17Co have been evaluated using differential thermal analysis (DTA), Varestraint testing, light and electron microscopy, and laser welding trials. DTA and microstructural analysis indicated that the solidification of Fe-29Ni-17Co occurs as single-phase austenite, does not exhibit the formation of terminal solidification phases, and results in only minimal segregation of major alloying elements. Varestraint testing indicated that the hot cracking behavior of Fe-29Ni-17Co is similar to, though somewhat more pronounced than, that of 304L and 316 stainless steels. Relative to other Fe-Ni-Co and Ni-based alloys, however, the hot cracking response of this alloy is favorable. Pulsed laser welding trials indicated that the phosphorus and sulfur levels in this heat of Fe-29Ni-17Co were insufficient to promote cracking in bead-on-plate welds.
High-energy electron beam accelerator technology has been developed over the past three decades in response to military and energy-related requirements for weapons simulators, directed-energy weapons, and inertially confined fusion. These applications required high instantaneous power, large beam energy, high accelerated particle energy, and high current. These accelerators are generally referred to as "pulsed power" devices, and are typified by accelerating potentials of millions of volts (MV), beam currents of thousands of amperes (kA), pulse durations of tens to hundreds of nanoseconds, kilojoules of beam energy, and instantaneous power of gigawatts to terawatts (10⁹ to 10¹² watts). Much of the early development work was directed toward single-pulse machines, but recent work has extended these pulsed power devices to continuously repetitive applications. These relativistic beams penetrate deeply into materials, with stopping ranges on the order of a centimeter. Such high instantaneous power deposited in depth offers possibilities for new material fabrication and processing capabilities that can only now be explored. Fundamental techniques of pulse compression, high-voltage requirements, and beam generation and transport under space-charge-dominated conditions are discussed in this paper.
The Remote Security Station (RSS) was developed by Sandia National Laboratories for the Defense Nuclear Agency to investigate issues pertaining to robotics and sensor fusion in physical security systems. This final report documents the status of the RSS program at its completion in April 1992. The RSS system consists of the Man Portable Security Station (MaPSS) and the Telemanaged Mobile Security Station (TMSS), which are integrated by the Operator's Control Unit (OCU) into a flexible exterior perimeter security system. The RSS system uses optical, infrared, microwave, and acoustic intrusion detection sensors in conjunction with sensor fusion techniques to increase the probability of detection and to decrease the nuisance alarm rate of the system. Major improvements to the system developed during the final year are an autonomous patrol capability, which allows TMSS to execute security patrols with limited operator interaction, and a neural network approach to sensor fusion, which significantly improves the system's ability to filter out nuisance alarms due to adverse weather conditions.
The In Situ Permeable Flow Sensor is a new technology that uses a thermal perturbation technique to directly measure the three-dimensional groundwater flow velocity vector at a point in permeable, unconsolidated geologic formations. It has been used to monitor changes in the groundwater flow regime around an experimental air stripping waste remediation activity. Design flaws in the first version of the technology, which was used during the experiment reported here, precluded measurements of the horizontal component of the flow velocity, but measurements of the vertical component were obtained. Results indicate that significant changes in the vertical flow velocity were induced by the air injection system. One flow sensor, MHM6, measured a vertical flow velocity of 4 m/yr or less when the air injection system was not operating and 25 m/yr when the air injection system was on. This may be caused by air bubbles moving past the probes or may be the result of the establishment of a more widespread flow regime in the groundwater induced by the air injection system. In the latter case, significantly more groundwater would be remediated by the air stripping operation since groundwater would be circulated through the zone of influence of the air injection system. Newly designed flow sensors, already in the ground at Savannah River to monitor Phase II of the project, are capable of measuring horizontal as well as vertical components of the flow velocity.
In previous work, failure of early versions of the zinc/bromine battery was traced to degradation and warpage of the carbon-plastic electrode. These electrodes were fabricated from copolymers of ethylene and propylene (EP) containing structures that were found to be susceptible to degradation by the electrolyte. In this work, we evaluated two developmental electrodes from Johnson Controls Battery Group, Inc., in which the EP copolymer was replaced with a high-density polyethylene (HDPE) that contained glass-fiber reinforcing fillers. The glass fiber content of these two electrodes was different (19% vs. 31%). We determined the effect of electrolyte on sorption behavior, dimensional stability, chemical stability, and thermal, mechanical, and electrical properties under real-time and accelerated aging conditions. We also characterized unaged samples of both electrodes to determine their chemical composition and physical structure. We found that high glass content in the electrode minimizes sorption and increases dimensional stability. Both high and low glass content electrodes were found to be chemically and thermally stable toward the electrolyte. A slight decrease in the storage modulus (G′) of both electrodes was attributed to sorption of non-ionic and hydrophobic ingredients in the electrolyte. The electrical conductivity of both electrodes appeared to improve (increase) upon exposure to the electrolyte. No time or temperature trends were observed for the chemical, thermal, or mechanical properties of electrodes made from HDPE. Since decreases in these properties were noted for electrodes made from EP copolymers under similar conditions, it appears that the HDPE-based electrodes have superior long-term stability in the ZnBr₂ environment.
Hydrostatic and constant-stress-difference (CSD) experiments were conducted at room temperature (RT) on three different sintering runs of unpoled, Nb-doped lead-zirconate-titanate ceramic (PZT 95/5-2Nb) in order to quantify the influence of shear stress on the displacive, martensitic-like, first-order, rhombohedral → orthorhombic phase transformation. In hydrostatic compression at RT, the transformation began at about 260 MPa and was usually incompletely reversed upon return to ambient pressure. Strains associated with the transformation were isotropic, both on first and subsequent hydrostatic cycles. Results for CSD tests were quite different. First, the confining pressure and mean stress at which the transition begins decreased linearly with increasing stress difference. Second, the rate of transformation decreased with increasing shear stress and the accompanying purely elastic shear strain. This contrasts with the typical observation that shear stresses increase reaction and transformation kinetics. Third, strain was not isotropic during the transformation: axial strains were greater and lateral strains smaller than for the hydrostatic case, though volumetric strain behavior was comparable for the two types of tests. However, this effect does not appear to be an example of true transformational plasticity: no additional unexpected strains accumulated during subsequent cycles through the transition under nonhydrostatic loading. If subsequent hydrostatic cycles were performed on samples previously run under CSD conditions, strain anisotropy was again observed, indicating that the earlier superimposed shear stress produced a permanent mechanical anisotropy in the material. The mechanical anisotropy probably results from a "one-time" crystallographic preferred orientation that developed during the transformation under shear stress. Finally, in a few specimens from one particular sintering run, sporadic evidence for a "shape memory effect" was observed.
This document contains the planned actions to correct the deficiencies identified in the Tiger Team Assessments of Sandia California (August 1990) and Sandia New Mexico (May 1991). Information is also included on the management structures, estimated costs, root causes, prioritization, and schedules for the Action Plan. This Plan is an integration of the two individual Action Plans to provide a cost-effective, integrated program for implementation by Sandia and monitoring by DOE. This volume (2) contains information and corrective action plans pertaining to safety and health and to management practices.
A Level III Probabilistic Risk Assessment (PRA) has been performed for LaSalle Unit 2 under the Risk Methods Integration and Evaluation Program (RMIEP) and the Phenomenology and Risk Uncertainty Evaluation Program (PRUEP). This report documents the phenomenological calculations, and the sources of uncertainty in those calculations, performed with MELCOR in support of the Level II portion of the PRA. These calculations are an integral part of the Level II analysis since they provide quantitative input to the Accident Progression Event Tree (APET) and the Source Term Model (LASSOR). However, the uncertainty associated with the code results must be considered in the use of the results. The MELCOR calculations performed include four integrated calculations: (1) a high-pressure short-term station blackout, (2) a low-pressure short-term station blackout, (3) an intermediate-term station blackout, and (4) a long-term station blackout. Several sensitivity studies investigating the effects of variations in containment failure size and location, as well as hydrogen ignition concentration, are also documented.
This volume presents the results of the initiating event and accident sequence delineation analyses of the LaSalle Unit II nuclear power plant, performed as part of the Level III PRA being conducted by Sandia National Laboratories for the Nuclear Regulatory Commission. The initiating event identification included a thorough review of extant data and a detailed plant-specific search for special initiators. For the LaSalle analysis, the following initiating events were defined: eight general transients, ten special initiators, four LOCAs inside containment, one LOCA outside containment, and two interfacing LOCAs. Three accident sequence event trees were constructed: LOCA, transient, and ATWS. These trees were general in nature so that a tree represented all initiators of a particular type (i.e., the LOCA tree was constructed for evaluating small, medium, and large LOCAs simultaneously). The effects of the specific initiators on the systems and the different success criteria were handled by including the initiating events directly in the system fault trees. The accident sequence event trees were extended to include the evaluation of containment-vulnerable sequences. These internal event accident sequence event trees were also used for the evaluation of the seismic, fire, and flood analyses.
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
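The kind of validation study recommended above can be sketched generically: given an ensemble of stochastic realizations of a field (from SIS or any other simulator) and a known reference field, one checks whether the nominal confidence interval covers the truth at the stated rate. In the Python sketch below, the ensemble is a synthetic placeholder with deliberately understated spread, illustrating how output uncertainty bounds can fall short of their nominal coverage; none of the fields or parameters come from the report.

```python
# Coverage check for simulation-derived uncertainty bounds against a known
# reference field. The synthetic "truth" and ensemble are placeholders only.
import numpy as np

rng = np.random.default_rng(4)
n_cells, n_realizations = 500, 200

truth = rng.normal(0.0, 1.0, size=n_cells)
# Placeholder ensemble with deliberately understated spread, to illustrate how
# uncertainty bounds can fail to cover the truth at the nominal rate.
ensemble = rng.normal(0.0, 0.7, size=(n_realizations, n_cells))

def coverage(ensemble, truth, nominal=0.90):
    """Fraction of cells where the central `nominal` interval of the ensemble
    contains the true value; compare against `nominal` itself."""
    lo = np.quantile(ensemble, (1 - nominal) / 2, axis=0)
    hi = np.quantile(ensemble, 1 - (1 - nominal) / 2, axis=0)
    return np.mean((truth >= lo) & (truth <= hi))

for nominal in (0.50, 0.90, 0.95):
    print(f"nominal {nominal:.0%} interval: empirical coverage "
          f"{coverage(ensemble, truth, nominal):.2%}")
```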
In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.
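As a quick illustration (in Python rather than the C implementation described in the report), the sketch below checks a SHA-1 routine against the published FIPS 180-1 test vector for the message "abc" and makes a rough throughput measurement; hashlib's sha1 merely stands in for the report's portable C code, and the buffer size is an arbitrary choice.

```python
# Verify SHA-1 against the FIPS 180-1 "abc" test vector, then time hashing of
# a large buffer as a crude efficiency check. hashlib stands in for the C code.
import hashlib
import time

expected = "a9993e364706816aba3e25717850c26c9cd0d89d"   # FIPS 180-1 vector
digest = hashlib.sha1(b"abc").hexdigest()
assert digest == expected, f"SHA-1 self-test failed: {digest}"
print("SHA-1 'abc' test vector verified")

data = b"\0" * (64 * 1024 * 1024)                       # 64 MiB buffer (arbitrary)
start = time.perf_counter()
hashlib.sha1(data).digest()
elapsed = time.perf_counter() - start
print(f"~{len(data) / elapsed / 1e6:.1f} MB/s")
```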