This report discusses electronic isolators, which are used to maintain electrical separation between safety and non-safety systems in nuclear power plants. The concern is that these devices may fail, allowing unwanted signals or energy to act upon safety systems or preventing desired signals from performing their intended function. While operational history shows many isolation device problems requiring adjustments and maintenance, we could not find incidents with safety implications. Even hypothesizing multiple simultaneous failures did not lead to significant contributions to core damage frequency. Although the analyses performed in this study were not extensive or detailed, there is no evidence to suggest that isolation device failure is an issue requiring further study.
This report documents the fiscal year 1992 activities of the Utility Battery Storage Systems Program (UBS) of the US Department of Energy (DOE), Office of Energy Management (OEM). The UBS program is conducted by Sandia National Laboratories (SNL). UBS is responsible for the engineering development of integrated battery systems for use in utility-energy-storage (UES) and other stationary applications. Development is accomplished primarily through cost-shared contracts with industrial organizations. An important part of the development process is the identification, analysis, and characterization of attractive UES applications. UBS is organized into five projects: Utility Battery Systems Analyses; Battery Systems Engineering; Zinc/Bromine; Sodium/Sulfur; Supplemental Evaluations and Field Tests. The results of the Utility Battery Systems Analyses are used to identify several utility-based applications for which battery storage can effectively solve existing problems. The results will also specify the engineering requirements for widespread applications and motivate and define needed field evaluations of full-size battery systems.
The sixth experiment of the Integral Effects Test (IET-6) series was conducted to investigate the effects of high-pressure melt ejection on direct containment heating. Scale models of the Zion reactor pressure vessel (RPV), cavity, instrument tunnel, and subcompartment structures were constructed in the Surtsey Test Facility at Sandia National Laboratories. The RPV was modeled with a melt generator that consisted of a steel pressure barrier, a cast MgO crucible, and a thin steel inner liner. The melt generator/crucible had a hemispherical bottom head containing a graphite limiter plate with a 4-cm exit hole to simulate the ablated hole in the RPV bottom head that would be formed by ejection of an instrument guide tube in a severe nuclear power plant accident. The cavity contained 3.48 kg of water, which corresponds to condensate levels in the Zion plant, and the containment basement floor was dry. A 43-kg initial charge of iron oxide/aluminum/chromium thermite was used to simulate corium debris on the bottom head of the RPV. Molten thermite was ejected by steam at an initial pressure of 6.3 MPa into the reactor cavity. The Surtsey vessel atmosphere contained pre-existing hydrogen to represent partial oxidation of the zirconium in the Zion core. The initial composition of the vessel atmosphere was 87.1 mol% N₂, 9.79 mol% O₂, and 2.59 mol% H₂, and the initial absolute pressure was 198 kPa. A partial hydrogen burn occurred in the Surtsey vessel. The peak vessel pressure increase was 279 kPa in IET-6, compared to 246 kPa in the IET-3 test. The total debris mass ejected into the Surtsey vessel in IET-6 was 42.5 kg. The gas grab sample analysis indicated that there were 180 g·moles of pre-existing hydrogen, and that 308 g·moles of hydrogen were produced by steam/metal reactions. About 335 g·moles of hydrogen burned, and 153 g·moles remained unreacted.
The Sandia National Laboratories (SNL) Engineering Analysis Code Access System (SEACAS) is a collection of structural and thermal codes and utilities used by analysts at SNL. The system includes pre- and post-processing codes, analysis codes, database translation codes, support libraries, UNIX™ shell scripts, and an installation system. SEACAS is used at SNL on a daily basis as a production, research, and development system for the engineering analysts and code developers. Over the past year, approximately 180 days of Cray Y-MP™ CPU time have been used at SNL by SEACAS codes. The job mix ranges from jobs using only a few seconds of CPU time up to jobs using two and one-half days of CPU time. SEACAS is running on several different systems at SNL, including Cray Unicos, Hewlett Packard HP-UX™, Digital Equipment Ultrix™, and Sun SunOS™. This document is a short description of the codes in the SEACAS system.
This bulletin from Sandia Laboratories presents current research on testing technology. Fiber optic systems at the Nevada Test Site are replacing coaxial cables. The hypervelocity launcher is being used to test orbital debris impacts with space station shielding. A digital recorder makes testing of high-speed water entries possible. Automobile engine design is aided by an instrumented head gasket that detects the combustion zone. And composite-to-metal strength and fatigue tests provide new data on joint failures in wind turbine joint tests.
The CONTAIN computer code is a best-estimate, integrated analysis tool for predicting the physical, chemical, and radiological conditions inside a nuclear reactor containment building following the release of core material from the primary system. CONTAIN is supported primarily by the U.S. Nuclear Regulatory Commission (USNRC), and the official code versions produced with this support are intended primarily for the analysis of light water reactors (LWR). The present manual describes CONTAIN LMR/1B-Mod.1, a code version designed for the analysis of reactors with liquid metal coolant. It is a variant of the official CONTAIN 1.11 LWR code version. Some of the features of CONTAIN-LMR for treating the behavior of liquid metal coolant are in fact present in the LWR code versions but are discussed here rather than in the User's Manual for the LWR versions. These features include models for sodium pool and spray fires. In addition to these models, new or substantially improved models have been installed in CONTAIN-LMR. The latter include models for treating two condensables (sodium and water) simultaneously, sodium atmosphere and pool chemistry, sodium condensation on aerosols, heat transfer from core-debris beds and to sodium pools, and sodium-concrete interactions. A detailed description of each of the above models is given, along with the code input requirements.
Inelastic material constitutive relations for elastoplasticity coupled with continuum damage mechanics are investigated. For elastoplasticity, continuum damage mechanics, and the coupled formulations, rigorous thermodynamic frameworks are derived. The elastoplasticity framework is shown to be sufficiently general to encompass J₂ plasticity theories including general isotropic and kinematic hardening relations. The concepts of an intermediate undamaged configuration and a fictitious deformation gradient are used to develop a damage representation theory. An empirically based damage evolution theory is proposed to overcome some observed deficiencies. Damage deactivation, which is the negation of the effects of damage under certain loading conditions, is investigated. An improved deactivation algorithm is developed for both damaged elasticity and coupled elastoplasticity formulations. The applicability of coupled formulations is validated by comparing theoretical predictions to experimental data for a spectrum of materials and load paths. The pressure-dependent brittle-to-ductile transitional behavior of concrete is replicated. The deactivation algorithm is validated using tensile and compression data for concrete. For a ductile material, the behavior of an aluminum alloy is simulated including the temperature-dependent ductile-to-brittle behavior features. The direct application of a coupled model to fatigue is introduced. In addition, the deactivation algorithm in conjunction with an assumed initial damage and strain is introduced as a novel method of simulating the densification phenomenon in cellular solids.
Target recognition requires the ability to distinguish targets from non-targets, a capability called one-class generalization. Many neural network pattern classifiers fail as one-class classifiers because they use open decision boundaries. To function as a one-class classifier, a neural network must have three types of generalization: within-class, between-class, and out-of-class. We discuss these three types of generalization and identify neural network architectures that meet these requirements. We have applied our one-class classifier ideas to the problem of automatic target recognition in synthetic aperture radar. We have compared three neural network algorithms: Carpenter and Grossberg's algorithmic version of the Adaptive Resonance Theory (ART-2A), Kohonen's Learning Vector Quantization (LVQ), and Reilly and Cooper's Restricted Coulomb Energy network (RCE). The ART-2A neural network gives the best results, with 100% within-class, between-class, and out-of-class generalization. Experiments show that the network's performance is sensitive to vigilance and number of training set presentations.
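The closed-versus-open boundary distinction above can be illustrated with a minimal sketch (this is not the ART-2A, LVQ, or RCE algorithm itself; the prototypes and radius below are hypothetical): a one-class rule accepts an input only if it falls inside some stored prototype's hypersphere, so inputs far from all training data are rejected rather than forced into a class, as a linear open-boundary classifier would do.

```python
import numpy as np

def closed_boundary_classify(x, prototypes, radius):
    """One-class decision with a closed boundary (RCE-flavored sketch):
    accept x as in-class only if it lies inside some prototype's
    hypersphere of the given radius; otherwise reject as a non-target."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    return bool((dists <= radius).any())

# Two hypothetical stored target prototypes in feature space
protos = np.array([[0.0, 0.0], [5.0, 5.0]])
near = closed_boundary_classify(np.array([0.3, 0.1]), protos, 1.0)   # accepted
far = closed_boundary_classify(np.array([100.0, 100.0]), protos, 1.0)  # rejected
```

An open-boundary (e.g. single-hyperplane) classifier would assign every point in the space, however distant, to one of the classes; the closed rule is what gives out-of-class generalization.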
This report discusses the seventh experiment of the Integral Effects Test (IET-7) series. The experiment was conducted to investigate the effects of preexisting hydrogen in the Surtsey vessel on direct containment heating. Scale models of the Zion reactor pressure vessel (RPV), cavity, instrument tunnel, and subcompartment structures were constructed in the Surtsey Test Facility at Sandia National Laboratories. The RPV was modeled with a melt generator that consisted of a steel pressure barrier, a cast MgO crucible, and a thin steel inner liner. The melt generator/crucible had a hemispherical bottom head containing a graphite limiter plate with a 4-cm exit hole to simulate the ablated hole in the RPV bottom head that would be formed by ejection of an instrument guide tube in a severe nuclear power plant accident. The cavity contained 3.48 kg of water, and the containment basement floor inside the cranewall contained 71 kg of water, which corresponds to scaled condensate levels in the Zion plant. A 43-kg initial charge of iron oxide/aluminum/chromium thermite was used to simulate corium debris on the bottom head of the RPV. Molten thermite was ejected by steam at an initial pressure of 5.9 MPa into the reactor cavity.
Charpy V-notch specimens (ASTM Type A) and 5.74-mm diameter tension test specimens of the Shippingport Reactor Neutron Shield Tank (NST) (outer wall material) were irradiated together with Charpy V-notch specimens of the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) vessel (shell material), to 5.07 × 10¹⁷ n/cm², E > 1 MeV. The irradiation was performed in the Ford Nuclear Reactor (FNR), a test reactor, at a controlled temperature of 54°C (130°F) selected to approximate the prior service temperatures of the cited reactor structures. Radiation-induced elevations in the Charpy 41-J transition temperature and the ambient temperature yield strength were small and independent of specimen test orientation (ASTM LT vs. TL). The observations are consistent with prior findings for the two materials (A 212-B plate) and other like materials irradiated at low temperature (< 200°C) to low fluence. The high radiation embrittlement sensitivity observed in HFIR vessel surveillance program tests was not found in the present accelerated irradiation test. Response to 288°C-168 h postirradiation annealing was explored for the NST material. Notch ductility recovery was found independent of specimen test orientation but dependent on the temperature within the transition region at which the specimens were tested.
A method is presented for determining the nonlinear stability of undamped flexible structures spinning about a principal axis of inertia. Equations of motion are developed for structures that are free of applied forces and moments. The development makes use of a floating reference frame which follows the overall rigid body motion. Within this frame, elastic deformations are assumed to be given functions of n generalized coordinates. A transformation of variables is devised which shows the equivalence of the equations of motion to a Hamiltonian system with n + 1 degrees of freedom. Using this equivalence, stability criteria are developed based upon the normal form of the Hamiltonian. It is shown that a motion which is spin stable in the linear approximation may be unstable when nonlinear terms are included. A stability analysis of a simple flexible structure is provided to demonstrate the application of the stability criteria. Results from numerical integration of the equations of motion are shown to be consistent with the predictions of the stability analysis. A new method for modeling the dynamics of rotating flexible structures is developed and investigated. The method is similar to conventional assumed displacement (modal) approaches with the addition that quadratic terms are retained in the kinematics of deformation. Retention of these terms is shown to account for the geometric stiffening effects which occur in rotating structures. Computational techniques are developed for the practical implementation of the method. The techniques make use of finite element analysis results, and thus are applicable to a wide variety of structures. Motion studies of specific problems are provided to demonstrate the validity of the method. Excellent agreement is found both with simulations presented in the literature for different approaches and with results from a commercial finite element analysis code. The computational advantages of the method are demonstrated.
The U.S. Department of Energy (DOE) is developing the Waste Isolation Pilot Plant (WIPP) in southeastern New Mexico as a facility for the long-term disposal of defense-related transuranic (TRU) wastes. Use of the WIPP for waste disposal is contingent on demonstrations of compliance with applicable regulations of the U.S. Environmental Protection Agency (EPA). This paper addresses issues related to modeling gas and brine migration at the WIPP for compliance with both EPA 40 CFR 191 (the Standard) and 40 CFR 268.6 (the RCRA). At the request of the WIPP Project Integration Office (WPIO) of the DOE, the WIPP Performance Assessment (PA) Department of Sandia National Laboratories (SNL) has completed preliminary uncertainty and sensitivity analyses of gas and brine migration away from the undisturbed repository. This paper contains descriptions of the numerical model and simulations, including model geometries and parameter values, and a summary of major conclusions from sensitivity analyses. Because significant transport of contaminants can only occur in a fluid (gas or brine) medium, two-phase flow modeling can provide an estimate of the distance to which contaminants can migrate. Migration of gas or brine beyond the RCRA 'disposal-unit boundary' or the Standard's accessible environment constitutes a potential, but not certain, violation and may require additional evaluations of contaminant concentrations.
This paper presents an infinite impulse response (IIR) filtering technique for reducing structural vibration in remotely operated robotic systems. The technique uses a discrete filter between the operator's joystick and the robot controller to alter the inputs of the system so that residual vibration and swing are reduced. A linearized plant model of the system is analyzed in the discrete-time domain, and the filter is designed using pole-zero placement in the z-plane. This technique has been successfully applied to a two-link flexible arm and a gantry crane with a suspended payload.
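The pole-zero placement idea can be sketched as follows (a minimal illustration, not the paper's actual filter or plant model; the natural frequency, damping ratio, and pole radius below are all illustrative): the command filter places its zeros on top of the plant's lightly damped resonant poles in the z-plane, and its own poles at a heavily damped location, so that joystick commands no longer excite the resonance.

```python
import numpy as np

def vibration_reduction_filter(wn, zeta, dt, pole_radius=0.5):
    """Design a discrete command filter by pole-zero placement:
    zeros cancel the plant's resonant pole pair at (wn, zeta);
    the filter's own poles are placed at a well-damped radius.
    Coefficients are scaled for unity DC gain."""
    wd = wn * np.sqrt(1.0 - zeta**2)          # damped natural frequency (rad/s)
    r = np.exp(-zeta * wn * dt)               # plant pole radius in the z-plane
    b = np.array([1.0, -2.0 * r * np.cos(wd * dt), r**2])            # zeros
    a = np.array([1.0, -2.0 * pole_radius * np.cos(wd * dt),
                  pole_radius**2])                                   # poles
    b *= a.sum() / b.sum()                    # unity gain at z = 1 (DC)
    return b, a

def apply_filter(b, a, x):
    """Direct-form difference equation:
    a[0]*y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y
```

Filtering a step command through this filter yields a smoothed input that still reaches the commanded setpoint (unity DC gain) but with the resonant frequency content removed.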
Before disposing of transuranic radioactive waste at the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories (SNL) is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This paper describes the 1992 preliminary comparison with Subpart B of the Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191), which regulates long-term releases of radioactive waste. Results of the 1992 PA are preliminary, and cannot be used to determine compliance or noncompliance with EPA regulations because portions of the modeling system and data base are incomplete. Results are consistent, however, with those of previous iterations of PA, and the SNL WIPP PA Department has high confidence that compliance with 40 CFR 191B can be demonstrated. Comparison of predicted radiation doses from the disposal system also gives high confidence that the disposal system is safe for long-term isolation.
We describe an algorithm for the static load balancing of scientific computations that generalizes and improves upon spectral bisection. Through a novel use of multiple eigenvectors, our new spectral algorithm can divide a computation into 4 or 8 pieces at once. These multidimensional spectral partitioning algorithms generate balanced partitions that have lower communication overhead and are less expensive to compute than those produced by spectral bisection. In addition, they automatically work to minimize message contention on a hypercube or mesh architecture. These spectral partitions are further improved by a multidimensional generalization of the Kernighan-Lin graph partitioning algorithm. Results on several computational grids are given and compared with other popular methods.
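The basic spectral bisection step that the multidimensional algorithms generalize can be sketched as follows (a minimal illustration, not the authors' implementation): compute the Fiedler vector, the eigenvector of the second-smallest eigenvalue of the graph Laplacian, and split the vertices at its median value.

```python
import numpy as np

def spectral_bisection(adj):
    """Partition a graph into two balanced halves using the Fiedler
    vector of the graph Laplacian L = D - A.  `adj` is a symmetric 0/1
    adjacency matrix.  The multidimensional variants use eigenvectors
    2..k at once to cut into 4 or 8 parts; this sketch bisects only.
    Returns a boolean membership array."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    _, vecs = np.linalg.eigh(laplacian)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]                  # second-smallest eigenvector
    # Median split keeps the two halves balanced
    return fiedler <= np.median(fiedler)
```

On a path graph the Fiedler vector is monotone along the path, so the median split recovers the natural cut through the middle edge.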
Before disposing of transuranic radioactive waste at the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with long-term regulations of the United States Environmental Protection Agency (EPA), specifically the Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191), and the Land Disposal Restrictions (40 CFR 268) of the Hazardous and Solid Waste Amendments to the Resource Conservation and Recovery Act (RCRA). Sandia National Laboratories (SNL) is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for final compliance evaluations. This paper provides background information on the regulations, describes the SNL WIPP PA Department's approach to developing a defensible technical basis for consistent compliance evaluations, and summarizes the major observations and conclusions drawn from the 1991 and 1992 PAs.
This report describes preliminary experiments to investigate the feasibility of using electron beam (e-beam) radiolysis to destroy the organic compounds in simulated Hanford tank waste. For these experiments a simulated Hanford Tank 101-SY waste mixture was radiolyzed in a ⁶⁰Co facility to simulate radiolysis in the waste tank. This slurry was then exposed without dilution to dose levels up to 1600 Mrad at instantaneous dose rates of 2.5 × 10⁸ and 2.7 × 10¹¹ rad/s. The inferred dose to destroy all the organic material in the simulated waste, assuming destruction is linear with dose, is 1000 Mrad for the higher dose rate. The cost for organic destruction of Hanford waste at a treatment rate of 20 gpm is roughly estimated to be $10.60 per gallon. Such a system would treat all the waste in a 1 million gallon Hanford tank in about 40 days. Estimates of capital costs are given in the body of this report. While ferrocyanide destruction was not experimentally investigated in this work, previous experiments by others suggest that ferrocyanide would also be destroyed in such a system.
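The treatment-time figure can be sanity-checked with simple throughput arithmetic (continuous operation assumed; the report's "about 40 days" presumably allows some margin or downtime beyond the ideal figure):

```python
# Throughput arithmetic behind the treatment-rate estimate above.
flow_gpm = 20                                # treatment rate, gallons per minute
tank_gallons = 1_000_000                     # one large Hanford tank
days = tank_gallons / (flow_gpm * 60 * 24)   # continuous flow: ~34.7 days
cost_total = tank_gallons * 10.60            # at the estimated $10.60 per gallon
```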
Artman, W.D.; Sullivan, J.J.; De La O, R.V.; Zawadzkas, G.A.
This report describes the Training and Qualification Program at the Saturn Facility. The main energy source at Saturn is the Saturn accelerator which is used to test military hardware for vulnerability to X-rays, as well as to perform various types of plasma radiation source experiments. The facility is operated and maintained by a staff of twenty scientists and technicians. This program is designed to ensure these personnel are adequately trained and qualified to perform their jobs in a safe and efficient manner. Copies of actual documents used in the program are included as appendices. This program meets all the requirements for training and qualification in the DOE Orders on Conduct of Operations and Quality Assurance, and may be useful to other organizations desiring to come into compliance with these orders.
Experiments were run to determine if oxidized Kovar could be chemically cleaned so that copper would wet the Kovar in a wet hydrogen atmosphere at 1100°C. We found that a multi-stepped acid etch process cleaned the Kovar so that copper would wet it. We also found that the degree of copper cracking after melting and cool-down correlated well with the degree of wetting.
The relatively thin web of salt that separates Bayou Choctaw Caverns 15 and 17 was evaluated using the finite-element method. The stability calculations provided insight as to whether or not any operation restrictions or recommendations are necessary. Because of the uncertainty in the exact dimensions of the salt web, various web thicknesses were examined under different operating scenarios that included individual cavern workovers and drawdowns. Cavern workovers were defined by a sudden drop in the oil-side pressure at the wellhead to atmospheric. Workovers represent periods of low cavern pressure. Cavern drawdowns were simulated by enlarging the cavern diameters, thus decreasing the thickness of the web. The calculations predict that Cavern 15 dominates the behavior of the web because of its larger diameter. Thus, given the choice of caverns, Cavern 17 should be used for oil withdrawal in order to minimize the adverse impacts on the web resulting from pressure drops or cavern enlargement. From a stability point of view, maintaining normal pressures in Cavern 15 was found to be more important than operating the caverns as a gallery where both caverns are maintained at the same pressure. However, during a workover, it may be prudent to operate the caverns under similar pressures to avoid the possibility of a sudden pressure surge at the wellhead should the web fail.
A feasibility study for developing an improved tool and improved models for performing event assessments is described. The study indicates that the IRRAS code should become the base tool for performing event assessments, but that modifications would be needed to make it more suitable for routine use. Alternative system modeling approaches are explored and an approach is recommended that is based on improved train-level models. These models are demonstrated for Grand Gulf and Sequoyah. The insights that can be gained from importance measures are also demonstrated. The feasibility of using Individual Plant Examination (IPE) submittals as the basis for train-level models for precursor studies was also examined. The level of reported detail was found to vary widely, but in general, the submittals did not provide sufficient information to fully define the model. The feasibility of developing an industry risk profile from precursor results and of trending precursor results for individual plants were considered. The data sparsity would need to be considered when using the results from these types of evaluations, and because of the extremely sparse data for individual plants we found that trending evaluations for groups of plants would be more meaningful than trending evaluations for individual plants.
The Modal Group at Sandia National Laboratories performs a variety of tests on structures ranging from weapons systems to wind turbines. The desired number of data channels for these tests has increased significantly over the past several years. Tests requiring large numbers of data channels make roving accelerometers impractical and inefficient. The Modal Lab has implemented a method in which the test unit is fully instrumented before any data measurements are taken. This method uses a 16-channel data acquisition system and a mechanical switching setup to access each bank of accelerometers. A database containing all transducer sensitivities, location numbers, and coordinate information is resident on the system, enabling quick updates for each data set as it is patched into the system. This method has reduced test time considerably and is easily customized to accommodate data acquisition systems with larger channel capabilities.
Vadose-zone moisture transport near an impermeable barrier has been under study at a field site near Albuquerque, NM, since 1990. Moisture content and temperature have been monitored in the subsurface on a regular basis; both undergo a seasonal variation about average values. Even though the slab introduces two-dimensional effects on the scale of the slab, moisture and heat transport are predominantly vertical. Numerical simulations, based on the models developed by Philip and de Vries (1957) and de Vries (1958), indicate that the heat flow is conduction-dominated while the moisture movement is dominated by diffusive vapor distillation. Model predictions of the magnitude and extent of changes in moisture content underneath the slab are in reasonable agreement with observation.
A method is proposed for suppressing the resonances that occur as an item of rotating machinery is spun up from rest to its operating speed. This proposed method invokes “stiffness scheduling” so that the resonant frequency of the system is shifted during spin-up so as to be distant from the excitation frequency. A strategy for modulating the stiffness through the use of shape memory alloy is also presented.
A Sandia National Laboratories/AT&T Bell Laboratories team is developing a soft x-ray projection lithography tool that uses a compact laser plasma as a source of 14 nm x-rays. Optimization of the 14 nm x-ray source brightness is a key issue in this research. This paper describes our understanding of the source as it has been obtained through the use of computer simulations utilizing the LASNEX radiation-hydrodynamics code.
Lightning protection systems (LPSs) for explosives handling and storage facilities have long been designed similarly to those used for more conventional facilities, but their overall effectiveness in controlling interior electromagnetic (EM) environments has still not been rigorously assessed. Frequent lightning-caused failures of a security system installed in earth-covered explosives storage structures prompted the U.S. Army and Sandia National Laboratories to conduct a program to determine quantitatively the EM environments inside an explosives storage structure that is struck by lightning. These environments were measured directly during rocket-triggered lightning (RTL) tests in the summer of 1991 and were computed using linear finite-difference, time-domain (FDTD) EM solvers. The experimental and computational results were first compared in order to validate the code and were then used to construct bounds for interior environments corresponding to severe incident lightning flashes. The code results were also used to develop simple circuit models for the EM field behavior.
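The FDTD method referenced above advances Maxwell's equations on a staggered grid with a leapfrog update. A minimal one-dimensional free-space sketch of that scheme follows (illustrative only; the actual solvers are three-dimensional and model the structure's conductors, earth cover, and incident lightning channel):

```python
import numpy as np

def fdtd_1d(steps, n=200):
    """Minimal 1-D FDTD (Yee) leapfrog in normalized units with
    Courant number c*dt/dx = 1.  Ez and Hy live on staggered grids;
    a soft Gaussian source is injected at the center cell.  The same
    E/H alternating update, generalized to 3-D, underlies the linear
    FDTD EM solvers described above."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += ez[1:] - ez[:-1]                      # H half-step update
        ez[1:] += hy[1:] - hy[:-1]                       # E half-step update
        ez[n // 2] += np.exp(-((t - 30) / 10.0) ** 2)    # soft Gaussian source
    return ez
```

After a few tens of steps the injected pulse propagates outward from the source cell in both directions, which is the behavior a field probe inside the structure would sample.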
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Kannan, S.K.; Warnow, T.J.
The problem of constructing trees given a matrix of interleaf distances is motivated by applications in computational evolutionary biology and linguistics. The general problem is to find an edge-weighted tree which most closely approximates (under some norm) the distance matrix. Although the construction problem is easy when the tree exactly fits the distance matrix, optimization problems under all popular criteria are either known or conjectured to be NP-complete. In this paper we consider the related problem where we are given a partial order on the pairwise distances, and wish to construct (if possible) an edge-weighted tree realizing the partial order. In particular we are interested in partial orders which arise from experiments on triples of species. We will show that the consistency problem is NP-hard in general, but that for certain special cases the construction problem can be solved in polynomial time.
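The easy exact-fit case mentioned above rests on the classical four-point condition: a distance matrix is realizable by an edge-weighted tree if and only if, for every four leaves, the two largest of the three pairwise distance sums coincide. A small check of that condition (an illustrative aside, not an algorithm from the paper; exact comparison assumes integer or rational distances):

```python
import itertools

def is_additive(dist):
    """Four-point condition: for every quadruple {i,j,k,l}, the two
    largest of d(i,j)+d(k,l), d(i,k)+d(j,l), d(i,l)+d(j,k) must be
    equal for the matrix to be realizable by an edge-weighted tree."""
    n = len(dist)
    for i, j, k, l in itertools.combinations(range(n), 4):
        sums = sorted([dist[i][j] + dist[k][l],
                       dist[i][k] + dist[j][l],
                       dist[i][l] + dist[j][k]])
        if sums[1] != sums[2]:      # use a tolerance for float distances
            return False
    return True
```

For example, the quartet tree with unit edge lengths and one internal edge (leaves a,b on one side and c,d on the other) satisfies the condition, and perturbing a single entry breaks it.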
The dynamics of flexible bodies spinning at rates near or above their first natural frequencies is a notoriously difficult area of analysis. Recently, a method of analysis, tentatively referred to as a method of quadratic modes, has been developed to address this sort of problem. This method restricts consideration to configurations in which all kinematic constraints are automatically satisfied through second order in deformation. Besides providing robustness, this analysis method reduces the problem from one that would otherwise require the reformulation of stiffness matrices at each time step to one of solving only a small number of nonlinear equations at each time step. A test of this method has been performed, examining the vibrations of a rotating, inflated membrane.
Parallel computing offers new capabilities for using molecular dynamics (MD) to simulate larger numbers of atoms and longer time scales. In this paper we discuss two methods we have used to implement the embedded atom method (EAM) formalism for molecular dynamics on multiple-instruction/multiple-data (MIMD) parallel computers. The first method (atom-decomposition) is simple and suitable for small numbers of atoms. The second method (force-decomposition) is new and is particularly appropriate for the EAM because all the computations are between pairs of atoms. Both methods have the advantage of not requiring any geometric information about the physical domain being simulated. We present timing results for the two parallel methods on a benchmark EAM problem and briefly indicate how the methods can be used in other kinds of materials MD simulations.
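The atom-decomposition method can be sketched as follows (a serial illustration with a generic inverse-square pair force standing in for the EAM functional; the loop over blocks emulates what each MIMD processor would compute after the all-to-all position exchange that MPI would provide):

```python
import numpy as np

def atom_decomposition_forces(positions, nprocs):
    """Atom-decomposition sketch: each of `nprocs` processors owns a
    contiguous block of atoms and computes the total force on its own
    atoms using a full copy of all positions.  No geometric information
    about the simulation domain is needed, matching the property noted
    above.  A simple repulsive 1/r^2 pair force stands in for EAM."""
    n = len(positions)
    forces = np.zeros_like(positions)
    blocks = np.array_split(np.arange(n), nprocs)
    for owned in blocks:                       # one iteration per "processor"
        for i in owned:
            for j in range(n):
                if i == j:
                    continue
                rij = positions[j] - positions[i]
                d = np.linalg.norm(rij)
                forces[i] -= rij / d**3        # repulsive pair interaction
    return forces
```

Because every processor sees all positions, the result is independent of the processor count, which makes the decomposition easy to verify against a serial run.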
The Vapnik-Chervonenkis (V-C) dimension is an important combinatorial tool in the analysis of learning problems in the PAC framework. For polynomial learnability, we seek upper bounds on the V-C dimension that are polynomial in the syntactic complexity of concepts. Such upper bounds are automatic for discrete concept classes, but hitherto little has been known about what general conditions guarantee polynomial bounds on V-C dimension for classes in which concepts and examples are represented by tuples of real numbers. In this paper, we show that for two general kinds of concept class the V-C dimension is polynomially bounded as a function of the syntactic complexity of concepts. One is classes where the criterion for membership of an instance in a concept can be expressed as a formula (in the first-order theory of the reals) with fixed quantification depth and exponentially-bounded length, whose atomic predicates are polynomial inequalities of exponentially-bounded degree. The other is classes where containment of an instance in a concept is testable in polynomial time, assuming we may compute standard arithmetic operations on reals exactly in constant time. Our results show that in the continuous case, as in the discrete, the real barrier to efficient learning in the Occam sense is complexity-theoretic and not information-theoretic. We present examples to show how these results apply to concept classes defined by geometrical figures and neural nets, and derive polynomial bounds on the V-C dimension for these classes.
During hydrocarbon reservoir stimulations, such as hydraulic fracturing, the cracking and slippage of the formation results in the emission of seismic energy. The objective of this study was to determine the properties of these induced micro-seisms. A hydraulic fracture experiment was performed in the Piceance Basin of Western Colorado to induce and record micro-seismic events. The formation was subjected to four processes: breakdown/ballout, step-rate test, KCL mini-fracture, and linear-gel mini-fracture. Micro-seisms were acquired with an advanced three-component wall-locked seismic accelerometer package, placed in an observation well 211 ft offset from the fracture well. During the two hours of formation treatment, more than 1200 micro-seisms with signal-to-noise ratios in excess of 20 dB were observed. The observed micro-seisms had a nominally flat frequency spectrum from 100 Hz to 1500 Hz and lack the spurious tool-resonance effects evident in previous attempts to measure micro-seisms. Both p-wave and s-wave arrivals are clearly evident in the data set, and hodogram analysis yielded coherent estimates of the event locations. This paper describes the characteristics of the observed micro-seismic events (event occurrence, signal-to-noise ratios, and bandwidth) and illustrates that the new acquisition approach results in enhanced detectability and event location resolution.
An essential requirement for both Vertical Seismic Profiling (VSP) and Cross-Hole Seismic Profiling (CHSP) is the rapid acquisition of high resolution borehole seismic data. Additionally, full wave-field recording using three-component receivers enables the use of both transmitted and reflected elastic wave events in the resulting seismic images of the subsurface. To this end, an advanced three-component multi-station borehole seismic receiver system has been designed and developed by Sandia National Labs (SNL) and OYO Geospace. The system acquires data from multiple three-component wall-locking accelerometer packages and telemeters digital data to the surface in real-time. Due to the multiplicity of measurement stations and the real-time data link, acquisition time for the borehole seismic survey is significantly reduced. The system was tested at the Chevron La Habra Test Site using Chevron's clamped axial borehole vibrator as the seismic source. Several source and receiver fans were acquired using a four-station version of the advanced receiver system. For comparison purposes, an equivalent data set was acquired using a standard analog wall-locking geophone receiver. The test data indicate several enhancements provided by the multi-station receiver relative to the standard receiver: drastically improved signal-to-noise ratio, increased signal bandwidth, the detection of multiple reflectors, and a true 4:1 reduction in survey time.
The earth's ionosphere consists of an ionized plasma which will interact with any electromagnetic wave propagating through it. The interaction is particularly strong at VHF and UHF frequencies but decreases for higher microwave frequencies. These interaction effects and their relationship to the operation of a wide-bandwidth, synthetic-aperture, space-based radar are examined. Emphasis is placed on the dispersion effects and the polarimetric effects. Results show that high-resolution (wide-bandwidth) and high-quality coherent polarimetrics will be very difficult to achieve below 1 GHz.
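The strong frequency dependence quoted above follows from the well-known 1/f² scaling of ionospheric group delay. A minimal sketch, assuming the standard thin-shell delay formula and an illustrative total electron content (the TEC value is an assumption, not a number from the study):

```python
# Excess group delay through the ionosphere: dt = 40.3 * TEC / (c * f^2),
# so differential delay across a wide radar bandwidth blows up below 1 GHz.
C = 2.998e8      # speed of light, m/s
K = 40.3         # ionospheric refraction constant, m^3/s^2
TEC = 1.0e17     # assumed slant total electron content, electrons/m^2

def group_delay(f_hz, tec=TEC):
    """One-way excess group delay in seconds at carrier frequency f_hz."""
    return K * tec / (C * f_hz**2)

# Differential delay across a 100-MHz bandwidth centered at each carrier:
for fc in (300e6, 1e9, 10e9):
    lo, hi = fc - 50e6, fc + 50e6
    disp = group_delay(lo) - group_delay(hi)
    print(f"{fc/1e9:5.1f} GHz carrier: differential delay {disp * 1e9:8.3f} ns")
```

Because delay scales as 1/f², moving a wideband SAR from 300 MHz to 3 GHz reduces the dispersive delay by a factor of 100, consistent with the paper's conclusion that coherent wideband operation is very difficult below 1 GHz.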
SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program that provides engineering simulations of user-specified flow networks at the system level. It includes fluid mechanics, heat transfer, and reactor dynamics capabilities. SAFSIM provides sufficient versatility to allow the simulation of almost any flow system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary goals of SAFSIM development. The current capabilities of SAFSIM are summarized and some sample applications are presented. It is applied here to a nuclear thermal propulsion system and a nuclear rocket engine test facility.
Proceedings - 6th Annual IEEE International ASIC Conference and Exhibit, ASIC 1993
Shen, Hui-Chien; Becker, S.M.
Many designs use EPLDs (Erasable Programmable Logic Devices) to implement control logic and state machines. If the design is slow, timing through the EPLD is not crucial so designers often treat the device as a black box. In high speed designs, timing through the EPLD is critical. In these cases a thorough understanding of the device architecture is necessary. Lessons learned in the implementation of a high-speed design using the Altera EPM5130 are discussed.
A new Assembly Test Chip, ATC04, designed to measure mechanical stresses at the die surface has been built and tested. This CMOS chip, 0.25 in. on a side, has an array of 25 piezoresistive stress sensing cells, four resistive heaters, and two ring oscillators. The ATC04 chip facilitates making stress measurements with relatively simple test equipment and data analysis. The design, use, and accuracy of the chip are discussed and initial results are presented from three types of stress measurement experiments: four-point bending calibration, single point bending of a substrate with an ATC04 attached by epoxy, and stress produced by a liquid epoxy encapsulant.
The feasibility of utilizing a ground-based laser without an orbital mirror for space debris removal is examined. Technical issues include atmospheric transmission losses, adaptive-optics corrections of wavefront distortions, laser field-of-view limitations, and laser-induced impulse generation. The physical constraints require a laser with megawatt output, long run-time capability, and a wavelength with good atmospheric transmission characteristics. It is found that a 5-MW reactor-pumped laser can deorbit debris having masses of the order of one kilogram from orbital altitudes to be used by Space Station Freedom. Debris under one kilogram can be deorbited after one pass over the laser site, while larger debris can be deorbited or transferred to alternate orbits after multiple passes over the site.
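A back-of-envelope sizing of this concept can be sketched as follows. All numbers here are illustrative assumptions, not values from the study: a laser-ablation mechanical coupling coefficient of 5e-5 N·s per joule absorbed, a 150 m/s perigee-lowering delta-v for deorbit from low Earth orbit, roughly 100 s of engagement per overhead pass, and 10% of emitted energy actually absorbed after atmospheric and spot-size losses.

```python
import math

CM = 5e-5    # assumed mechanical coupling, N*s per J absorbed by the debris
DV = 150.0   # assumed delta-v to lower perigee into the atmosphere, m/s

def energy_on_target(mass_kg, dv=DV, cm=CM):
    """Total laser energy the debris must absorb (J) to supply impulse m*dv."""
    return mass_kg * dv / cm

def passes_needed(mass_kg, power_w, dwell_s, efficiency):
    """Number of overhead passes, given absorbed energy per pass."""
    e_per_pass = power_w * dwell_s * efficiency
    return math.ceil(energy_on_target(mass_kg) / e_per_pass)

# 1-kg fragment, 5-MW laser, ~100 s engagement per pass, 10% delivery:
# one pass suffices under these assumptions, in line with the abstract's claim
print(passes_needed(1.0, 5e6, 100.0, 0.10))
```

With these assumed numbers a 1-kg fragment needs about 3 MJ absorbed, against roughly 50 MJ available per pass, which is qualitatively consistent with the finding that sub-kilogram debris deorbits in a single pass while heavier objects require several.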
Proceedings - 1993 IEEE/Tsukuba International Workshop on Advanced Robotics: Can Robots Contribute to Preventing Environmental Deterioration?, ICAR 1993
Hwang, Yong K.
Automatic motion planning of a spray cleaning robot with collision avoidance is presented in this paper. In manufacturing environments, electronic and mechanical components are traditionally cleaned by spraying or dipping them using chlorofluorocarbon (CFC) solvents. As new scientific data show that such solvents are major causes of stratospheric ozone depletion, an alternate cleaning method is needed. Part cleaning with aqueous solvents is environmentally safe, but can require precision spraying at high pressures for extended time periods. Operator fatigue during manual spraying can decrease the quality of the cleaning process. By spraying with a robotic manipulator, the necessary spray accuracy and consistency to manufacture high-reliability components can be obtained. Our motion planner was developed to automatically generate motions for spraying robots based on the part geometry and cleaning process parameters. For spraying paint and other coatings, a geometric description of the parts and robot may be sufficient for motion planning, since coatings are usually applied over the visible surfaces. For spray cleaning, the requirement to reach hidden surfaces necessitates the addition of a rule-based method to the geometric motion planning.
The geochemical properties of a porous sand and several tracers (Ni, Br, and Li) have been characterized for use in a caisson experiment designed to validate the sorption models used in reactive transport simulations. The surfaces of the sand grains have been examined by a combination of techniques including potentiometric titration, acid leaching, optical microscopy, and scanning electron microscopy with energy-dispersive spectroscopy. The surface studies indicate the presence of small amounts of carbonate, kaolinite, and iron-oxyhydroxides. Adsorption of nickel, lithium, and bromide by the sand was measured using batch techniques. Bromide was not sorbed by the sand. A linear (Kd) or an isotherm sorption model may adequately describe transport of Li; however, a model describing the changes of pH and the concentrations of other solution species as a function of time and position within the caisson and the concomitant effects on Ni sorption may be required for accurate predictions of nickel transport.
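The linear (Kd) model mentioned above has a simple transport consequence: a constant retardation factor R = 1 + (rho_b / theta) * Kd relating tracer velocity to water velocity. A minimal sketch, using illustrative parameter values (bulk density, porosity, and Kd here are assumptions, not measurements from the caisson experiment):

```python
def retardation(kd_ml_per_g, bulk_density_g_per_ml, porosity):
    """Retardation factor for a linear sorption isotherm: R = 1 + (rho_b/theta)*Kd."""
    return 1.0 + (bulk_density_g_per_ml / porosity) * kd_ml_per_g

# Non-sorbing bromide (Kd = 0) travels with the water (R = 1),
# while a weakly sorbed tracer such as Li lags behind the water front:
assert retardation(0.0, 1.6, 0.35) == 1.0
print(retardation(0.5, 1.6, 0.35))   # Kd = 0.5 mL/g -> R of about 3.3
```

This is why bromide serves as the conservative tracer in such experiments: its breakthrough marks the water velocity against which the retarded Li and Ni fronts are compared.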
For problems where media properties are measured at one scale and applied at another, scaling laws or models must be used in order to define effective properties at the scale of interest. The accuracy of such models will play a critical role in predicting flow and transport through the Yucca Mountain Test Site given the sensitivity of these calculations to the input property fields. Therefore, a research program has been established to gain a fundamental understanding of how properties scale, with the aim of developing and testing models that describe scaling behavior in a quantitative manner. Scaling of constitutive rock properties is investigated through physical experimentation involving the collection of suites of gas permeability data measured over a range of discrete scales. Also, various physical characteristics of property heterogeneity, and the means by which the heterogeneity is measured and described, are systematically investigated to evaluate their influence on scaling behavior. This paper summarizes the approach that is being taken toward this goal and presents the results of a scoping study that was conducted to evaluate the feasibility of the proposed research.
Experimental results exploring gravity-driven wetting front instability in a pre-wetted, rough-walled analog fracture are presented. Initial conditions considered include a uniform moisture field wetted to field capacity of the analog fracture and the structured moisture field created by unstable infiltration into an initially dry fracture. As in previous studies performed under dry initial conditions, instability was found to result both at the cessation of stable infiltration and at fluxes lower than the fracture capacity under gravitational driving force. Individual fingers were faster, narrower, longer, and more numerous than observed under dry initial conditions. Wetting fronts were found to follow existing wetted structure, providing a mechanism for rapid recharge and transport.
In an attempt to achieve completeness and consistency, the performance-assessment analyses developed by the Yucca Mountain Project are tied to scenarios described in event trees. Development of scenarios requires describing the constituent features, events, and processes in detail. Several features and processes occurring at the waste packages and the rock immediately surrounding the packages (i.e., the near field) have been identified: the effects of radiation on fluids in the near-field rock, the path-dependency of rock-water interactions, and the partitioning of contaminant transport between colloids and solutes. This paper discusses some questions regarding these processes that the near-field performance-assessment modelers will need to have answered to specify those portions of scenarios dealing with the near field.
Experiments investigating the behavior of individual, gravity-driven fingers in an initially dry, rough-walled analog fracture are presented. Fingers were initiated from constant flow to a point source. Finger structure is described in detail; specific phenomena observed include: desaturation behind the finger-tip, variation in finger path, intermittent flow structures, finger-tip bifurcation, and formation of dendritic sub-fingers. Measurements were made of finger-tip velocity, finger width, and finger-tip length. Non-dimensional forms of the measured variables are analyzed relative to the independent parameters, flow rate and gravitational gradient.