Publications

Results 94351–94375 of 96,771

Coupling of smooth particle hydrodynamics with PRONTO

American Society of Mechanical Engineers, Applied Mechanics Division, AMD

Attaway, Stephen W.

A gridless numerical technique called smooth particle hydrodynamics (SPH) has been coupled to the transient dynamics finite element code PRONTO. In this paper, a new weighted residual derivation for the SPH method will be presented, and the methods used to embed SPH within PRONTO will be outlined. Example SPH-PRONTO calculations will also be presented. Smooth particle hydrodynamics is a gridless Lagrangian technique. Requiring no mesh, SPH has the potential to model material fracture, large shear flows, and penetration. SPH computes the strain rate and the stress divergence based on the nearest neighbors of a particle, which are determined using an efficient particle sorting technique. Embedding the SPH method within PRONTO allows part of the problem to be modeled with quadrilateral finite elements while other parts are modeled with the gridless SPH method. SPH elements are coupled to the quadrilateral elements through a contact-like algorithm.
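The kernel-weighted neighbor sums at the heart of SPH can be illustrated with a minimal density-estimation sketch. This is not PRONTO's implementation; the cubic-spline kernel, the 1-D setting, and the brute-force neighbor loop are assumptions for illustration (a production code would use the sorted-neighbor search the abstract mentions).

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline smoothing kernel with normalization 2/(3h)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_density(x, masses, h):
    """Estimate density at each particle by kernel-weighted sums over neighbors."""
    n = len(x)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            rho[i] += masses[j] * cubic_spline_kernel(x[i] - x[j], h)
    return rho

# Uniformly spaced particles of equal mass should give near-uniform density
# away from the domain edges.
x = np.linspace(0.0, 1.0, 11)
m = np.full(11, 0.1)
print(sph_density(x, m, h=0.2))
```

Strain rates and stress divergences are computed analogously, with kernel gradients replacing the kernel in the sums.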

Measuring liquid properties with smooth- and textured-surface resonators

Proceedings of the Annual Frequency Control Symposium

Martin, S.J.

The response of thickness shear mode (TSM) resonators in liquids is examined. Smooth-surface devices, which viscously entrain a layer of contacting liquid, respond to the product of liquid density and viscosity. Textured-surface devices, which also trap liquid in surface features, exhibit an additional response that depends on liquid density alone. Combining smooth and textured resonators in a monolithic sensor allows simultaneous measurement of liquid density and viscosity.
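The simultaneous measurement amounts to inverting two response equations, one per device. The sketch below assumes a simplified model in which the smooth-device shift scales with √(ρη) and the textured device adds a density-only trapped-liquid term; the constants are hypothetical, not actual device calibrations.

```python
import math

# Simplified illustrative model (constants are hypothetical, not calibrations):
#   smooth device:   df_smooth   = -C_V * sqrt(rho * eta)   (viscous entrainment)
#   textured device: df_textured = df_smooth - C_T * rho    (trapped liquid adds
#                                                            a density-only term)
C_V = 10.0   # hypothetical viscous-coupling constant
C_T = 5.0    # hypothetical trapped-mass constant

def solve_density_viscosity(df_smooth, df_textured):
    """Invert the two measured frequency shifts for liquid density and viscosity."""
    rho = (df_smooth - df_textured) / C_T          # density from the extra shift
    eta = (df_smooth / -C_V) ** 2 / rho            # viscosity from the smooth shift
    return rho, eta

# Round-trip check: synthesize shifts for a water-like liquid (rho=1.0, eta=0.01).
rho0, eta0 = 1.0, 0.01
df_s = -C_V * math.sqrt(rho0 * eta0)
df_t = df_s - C_T * rho0
print(solve_density_viscosity(df_s, df_t))
```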

Estimations of the extent of migration of surficially applied water for various surface conditions near the potential repository perimeter; Yucca Mountain Site Characterization Project

Sobolik, Steven R.

The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of site characterization activities so as to have minimal impact on the ability of the site to isolate waste, and on tests performed as part of the characterization process. Two examples of site characterization activities are the construction of an Exploratory Studies Facility, which may include underground shafts, drifts, and ramps, and surface-based testing activities, which may require borehole drilling, excavation of test pits, and road watering for dust control. The information in this report pertains to two-dimensional numerical calculations modeling the movement of surficially applied water and the potential effects of that water on repository performance and underground experiments. This document contains information that has been used in preparing recommendations for two Yucca Mountain Site Characterization Project documents: Appendix I of the Exploratory Studies Facility Design Requirements document, and the Surface-Based Testing Field Requirements Document.

Evaluation of the effects of underground water usage and spillage in the Exploratory Studies Facility; Yucca Mountain Site Characterization Project

Dunn, E.; Sobolik, S.R.

The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level radioactive waste repository. Analyses reported herein were performed to support the design of site characterization activities so that these activities will have a minimal impact on the ability of the site to isolate waste and a minimal impact on underground tests performed as part of the characterization process. These analyses examine the effect of water to be used in the underground construction and testing activities for the Exploratory Studies Facility on in situ conditions. Underground activities and events where water will be used include construction, expected but unplanned spills, and fire protection. The models used predict that, if the current requirements in the Exploratory Studies Facility Design Requirements are observed, water that is imbibed into the tunnel wall rock in the Topopah Springs welded tuff can be removed over the preclosure time period by routine or corrective ventilation, and also that water imbibed into the Paintbrush Tuff nonwelded tuff will not reach the potential waste storage area.

Characteristics and control response of the TOPAZ II Reactor System Real-time Dynamic Simulator

Kwok, Kwan S.

A dynamic simulator of the TOPAZ II reactor system has been developed for the Nuclear Electric Propulsion Space Test Program. The simulator combines first-principle modeling and empirical correlations in its algorithm to attain the modeling accuracy and computational throughput that are required for real-time execution. The overall execution time of the simulator for each time step is 15 ms when no data is written to the disk, and 18 ms when nine double precision data points are written to the disk once every time step. The simulation program has been tested and it is able to handle a step decrease of $8 worth of reactivity. It also provides simulations of fuel, emitter, collector, stainless steel, and ZrH moderator failures. Presented in this paper are the models used in the calculations, a sample simulation session, and a discussion of the performance and limitations of the simulator. The simulator has been found to provide realistic real-time dynamic response of the TOPAZ II reactor system under both normal and casualty conditions.

Characterization of polysilicon films by Raman spectroscopy and transmission electron microscopy: A comparative study

Tallant, David T.

Samples of chemically-vapor-deposited micrometer and sub-micrometer-thick films of polysilicon were analyzed by transmission electron microscopy (TEM) in cross-section and by Raman spectroscopy with illumination at their surface. TEM and Raman spectroscopy both find varying amounts of polycrystalline and amorphous silicon in the wafers. Raman spectra obtained using blue, green and red excitation wavelengths to vary the Raman sampling depth are compared with TEM cross-sections of these films. Films showing crystalline columnar structures in their TEM micrographs have Raman spectra with a band near 497 cm⁻¹ in addition to the dominant polycrystalline silicon band (521 cm⁻¹). The TEM micrographs of these films have numerous faulted regions and fringes indicative of nanometer-scale silicon structures, which are believed to correspond to the 497 cm⁻¹ Raman band.

Examination of metrics and assumptions used in correlation filter design

Proceedings of SPIE - The International Society for Optical Engineering

Gheen, G.; Dickey, F.; Delaurentis, J.

This paper examines some of the metrics that are commonly used to design correlation filters for optical pattern recognition, including the Fisher ratio, the signal-to-noise ratio, the equal correlation peak (ECP) constraint, and normalized correlation. Attention is given to the underlying assumptions that are required to move from Bayesian decision theory to a particular metric or design principle. Since a Bayes classifier is statistically optimum, this provides a means for assessing the merit of a particular approach. Although we only examine a few metrics in this paper, the approach is general and should be useful for assessing the merit and applicability of any of the numerous filter designs that have been proposed in the optical pattern recognition community.

An Ω(√(log log n)) lower bound for routing in optical networks

Goldberg, L.A.

Optical communication is likely to significantly speed up parallel computation because the vast bandwidth of the optical medium can be divided to produce communication networks of very high degree. However, the problem of contention in high-degree networks makes the routing problem in these networks theoretically (and practically) difficult. In this paper we examine Valiant's h-relation routing problem, which is a fundamental problem in the theory of parallel computing. The h-relation routing problem arises both in the direct implementation of specific parallel algorithms on distributed-memory machines and in the general simulation of shared memory models such as the PRAM on distributed-memory machines. In an h-relation routing problem each processor has up to h messages that it wishes to send to other processors and each processor is the destination of at most h messages. We present a lower bound for routing an h-relation (for any h > 1) on a complete optical network of size n. Our lower bound applies to any randomized distributed algorithm for this task. Specifically, we show that the expected number of communication steps required to route an arbitrary h-relation is Ω(h + √(log log n)). This is the first known lower bound for this problem which does not restrict the class of algorithms under consideration.
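The h-relation condition itself is easy to state in code. A minimal checker (illustrative only, not part of any routing algorithm):

```python
from collections import Counter

def is_h_relation(messages, h):
    """True iff every processor sends at most h messages and every processor
    is the destination of at most h messages.
    `messages` is a list of (source, destination) pairs."""
    sent = Counter(src for src, _ in messages)
    received = Counter(dst for _, dst in messages)
    return (all(c <= h for c in sent.values())
            and all(c <= h for c in received.values()))

msgs = [(0, 1), (0, 2), (1, 2), (3, 1)]
print(is_h_relation(msgs, 2))  # every node sends and receives at most 2
print(is_h_relation(msgs, 1))  # node 0 sends 2 messages, so not a 1-relation
```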

A global motion planner for curve-tracing robots

Hwang, Y.K.

We present a global motion planner for tracing curves in three dimensions with robot manipulator tool frames. This planner generates an efficient motion satisfying three types of constraints: constraints on the tool tip for curve tracing, robot kinematic constraints, and robot-link collision constraints. Motions are planned using a global search algorithm and a local planner based on a potential-field approach. This planner can be used with any robots, including redundant manipulators, and can control the trade-offs between its algorithmic completeness and computation time. It can be applied in many robotic tasks such as seam welding, caulking, edge deburring and chamfering, and is expected to reduce motion programming times from days to minutes.
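The local planner's potential-field idea can be sketched as gradient descent on an attractive-plus-repulsive potential. The point-robot setting, the gains, and the influence radius below are illustrative assumptions, not the paper's planner (which works in the manipulator's configuration space).

```python
import numpy as np

def potential_field_step(q, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0, step=0.05):
    """One gradient-descent step on U = U_attractive + U_repulsive.
    Attractive: quadratic well at the goal. Repulsive: 0.5*k*(1/d - 1/d0)^2
    inside influence radius d0, zero outside."""
    grad = k_att * (q - goal)                       # pulls toward the goal
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if d < d0:                                  # repulsion only near obstacles
            grad += k_rep * (1.0 / d0 - 1.0 / d) * (q - obs) / d**3
    return q - step * grad                          # descend the total potential

# Point robot moving from the origin to (2, 0) past one obstacle.
q = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
obstacles = [np.array([1.0, 0.8])]
for _ in range(200):
    q = potential_field_step(q, goal, obstacles)
print(q)  # ends near the goal, having been deflected slightly by the obstacle
```

Potential fields of this kind are fast but can get trapped in local minima, which is why the abstract pairs the local planner with a global search algorithm.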

An interactive Virtual Reality simulation system for robot control and operator training

Miner, Nadine E.

Robotic systems are often very complex and difficult to operate, especially as multiple robots are integrated to accomplish difficult tasks. In addition, training the operators of these complex robotic systems is time-consuming and costly. In this paper, a virtual reality based robotic control system is presented. The virtual reality system provides a means by which operators can operate, and be trained to operate, complex robotic systems in an intuitive, cost-effective way. Operator interaction with the robotic system is at a high, task-oriented, level. Continuous state monitoring prevents illegal robot actions and provides interactive feedback to the operator and real-time training for novice users.

Nuclear weapon system risk assessment

Carlson, D.D.

Probabilistic risk assessment (PRA) is a process for evaluating hazardous operations by considering what can go wrong, the likelihood of these undesired events, and the resultant consequences. Techniques used in PRA originated in the 1960s. Although there were early exploratory applications to nuclear weapons and other technologies, the first major application of these techniques was in the Reactor Safety Study, WASH-1400 [1], in which the risks of nuclear power accidents were thoroughly investigated for the first time. Recently, these techniques have begun to be adapted to nuclear weapon system applications. This report discusses this application to nuclear weapon systems.
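The "what can go wrong / how likely / what are the consequences" triplet translates directly into a frequency-weighted risk sum. The scenarios and numbers below are invented for illustration and have nothing to do with any actual weapon or reactor system.

```python
# Each scenario: (description, frequency per year, consequence measure).
# The aggregate risk is the frequency-weighted sum of consequences.
scenarios = [
    ("valve fails open",       1e-3, 10.0),
    ("pump trip",              5e-2,  1.0),
    ("loss of offsite power",  1e-4, 100.0),
]

def annual_risk(scenarios):
    """Expected consequence per year over all postulated scenarios."""
    return sum(freq * consequence for _, freq, consequence in scenarios)

print(annual_risk(scenarios))
```

In a full PRA the frequencies come from event trees and fault trees rather than being assigned directly, but the aggregation step has this form.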

An efficient hybrid planner in changing environments

Chen, P.C.

In this paper, we present a new hybrid motion planner that is capable of exploiting previous planning episodes when confronted with new planning problems. Our approach is applicable when several (similar) problems are successively posed for the same static environment, or when the environment changes incrementally between planning episodes. At the heart of our system lie two low-level motion planners: a fast, but incomplete planner (which we call LOCAL), and a computationally costly (possibly resolution-complete) planner (which we call GLOBAL). When a new planning problem is presented to our planner, a meta-level planner (which we call MANAGER) decomposes the problem into segments that are amenable to solution by LOCAL. This decomposition is made by exploiting a task graph, in which successful planning episodes have been recorded. In cases where the decomposition fails, GLOBAL is invoked. The key to our planner's success is a novel representation of solution trajectories, in which segments of collision-free paths are associated with the boundary of nearby obstacles.

Automatic assembly planning and its role in agile manufacturing: A Sandia perspective

Jones, R.E.; Kaufman, S.G.

Sandia has been studying automatic assembly planning of electromechanical devices for some years, based on an implemented system called Archimedes. Work done to date has focused on automatic generation of high-level plans, and translation of these plans into robotic control code and workcell layout. More recently, the importance of an assembly planning capability as a design aid has been emphasized, as it could potentially provide early feedback to a designer on the manufacturability of the design. This paper describes the work done on assembly planning to date, plans for extending it, and its applications to agile manufacturing. In particular, we describe an agile manufacturing demonstration project underway at Sandia, and the role the Archimedes assembly planning system will play in it.

Overview of United States Department of Energy activities to support life extension of nuclear power plants

Rosinski, S.T.

Today, 109 nuclear power plants provide over 20 percent of the electrical energy generated in the US. The operating license of the first of these plants will expire in the year 2000; one-third of the operating licenses will expire by 2010, and the remaining plant licenses are scheduled to expire by 2033. The National Energy Strategy assumes that 70 percent of these plants will continue to operate beyond their current license expiration to assist in ensuring an adequate, diverse, and environmentally acceptable energy supply for economic growth. In order to preserve this energy resource in the US, three major tasks must be successfully completed: establishment of regulations, technical standards, and procedures for the preparation and review of a license renewal application; development, verification, and validation of technical criteria and bases for monitoring, refurbishing, and/or replacing plant equipment; and demonstration of the regulatory process. Since 1985, the US Department of Energy (DOE) has been working with the nuclear industry and the US Nuclear Regulatory Commission (NRC) to establish and demonstrate the option to extend the life of nuclear power plants through the renewal of operating licenses. This paper focuses primarily on DOE's Plant Lifetime Improvement (PLIM) Program efforts to develop the technical criteria and bases for effective aging management and lifetime improvement for continued operation of nuclear power plants. This paper describes current projects to resolve generic technical issues in the principal areas of reactor pressure vessel (RPV) integrity, fatigue, and environmental qualification (EQ).

Double layer capacitance of carbon foam electrodes

Delnick, F.M.; Ingersoll, D.; Firsich, D.

We have evaluated a wide variety of microcellular carbon foams prepared by the controlled pyrolysis and carbonization of several polymers, including polyacrylonitrile (PAN), polymethacrylonitrile (PMAN), resorcinol/formaldehyde (RF), divinylbenzene/methacrylonitrile (DVB), phenolics (furfuryl alcohol), and cellulose polymers such as rayon. The porosity may be established by several processes, including gelation (1-5), phase separation (1-3,5-8), emulsion (1,9,10), aerogel/xerogel formation (1,11,12,13), replication (14), and activation. In this report we present the complex impedance analysis and double-layer charging characteristics of electrodes prepared from one of these materials for double-layer capacitor applications, namely activated cellulose-derived microcellular carbon foam.

Comparison and verification of two models which predict minimum principal in situ stress from triaxial data

Warpinski, Norman R.

This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction, which is used to calculate the coefficient of earth at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells: the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.
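The Mohr-Coulomb route from the angle of internal friction to a minimum-stress estimate can be sketched with Jaky's approximation K0 = 1 − sin φ. The effective-stress form and all numbers below are illustrative assumptions, not the SFE or MWX data or the paper's actual correlations.

```python
import math

def k0_jaky(phi_deg):
    """Coefficient of earth at rest from the angle of internal friction
    (Jaky's approximation, K0 = 1 - sin(phi))."""
    return 1.0 - math.sin(math.radians(phi_deg))

def min_horizontal_stress(phi_deg, overburden_psi, pore_pressure_psi):
    """Minimum horizontal stress in the effective-stress form:
    sigma_h = K0 * (sigma_v - p) + p."""
    k0 = k0_jaky(phi_deg)
    return k0 * (overburden_psi - pore_pressure_psi) + pore_pressure_psi

# Illustrative values only: phi = 30 deg, 9000 ft of overburden at ~1.0 psi/ft,
# hydrostatic pore pressure at ~0.44 psi/ft.
print(min_horizontal_stress(30.0, 9000.0, 3960.0))
```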

Sol-gel derived silica/siloxane composite materials: The effect of loading level and catalyst activity on silica domain formation

Ulibarri, Tamara A.

Currently, the production of in situ reinforcement in polymeric systems by sol-gel methods is undergoing rapid development. However, understanding of synthesis/structure/property relationships is still lacking. In order to produce sol-gel derived composite materials with sufficient mechanical properties for commercial applications, this deficit of information must be addressed. We have completed a detailed investigation of in situ silica growth in polydimethylsiloxane (PDMS)/tetraethylorthosilicate (TEOS) systems. Factors which affect the domain growth, such as catalyst activity and silica loading, have been examined by solid state {sup 29}Si NMR, SEM, mechanical testing and small angle neutron scattering.

Chemometric analysis of infrared emission spectra for quantitative analysis of BPSG films on silicon

Haaland, David M.

Infrared emission spectra of 21 borophosphosilicate glass (BPSG) thin films on silicon wafers were collected with the samples held at constant temperature between 125 and 400 °C using a heating stage designed for precise temperature control (±°C). Partial least squares calibrations applied to the BPSG infrared emittance spectra allowed four BPSG thin-film properties to be simultaneously quantified with precisions of 0.1 wt. % for boron and phosphorus, 35 Å for film thickness, and 1.2 °C for temperature.

Optimization of experimental conditions in IR reflectance determination of BPSG properties

Haaland, David M.

Experiments were performed to examine the sensitivity of thin-film property determinations to several experimental variables when applying multivariate calibration methods to infrared reflection spectroscopic data. Results indicate that low angles of incidence are best for robust quantitative determination of boron, phosphorus, and film thickness in borophosphosilicate glass (BPSG) dielectric films. However, the polarization state of the incident beam does not affect the quantitative prediction ability.

Analog fiber optic multiplexing techniques and results from the Hunters Trophy Experiment

Hansen, G.J.

Due to the growth in the use of analog fiber optic data transmission systems at the Nevada Test Site and other locations, Sandia National Laboratories (SNL) has recognized the need to be able to multiplex several data channels per fiber. Wavelength-division, frequency-division, and time-division multiplexing techniques have been investigated. A time-division system using optically-multiplexed laser transmitters driving a common receiver was fielded on the HUNTERS TROPHY event at the NTS. Stability, noise, and dynamic range compared favorably with those seen on non-multiplexed links. Amplitude, width, and rise time of data transmitted via the multiplexed links were consistent with those recorded from non-multiplexed links.

A comparative evaluation of SAR and SLAR

Mastin, G.A.; Manson, J.J.; Bradley, J.D.; Axline, R.M.; Hover, G.L.

Synthetic aperture radar (SAR) was evaluated as a potential technological improvement over the Coast Guard's existing side-looking airborne radar (SLAR) for oil-spill surveillance applications. The US Coast Guard Research and Development Center (R&D Center), Environmental Branch, sponsored a joint experiment including the US Coast Guard, Sandia National Laboratories, and the National Oceanic and Atmospheric Administration (NOAA), Hazardous Materials Division. Radar imaging missions were flown on six days over the coastal waters off Santa Barbara, CA, where there are constant natural seeps of oil. Both the Coast Guard SLAR and the Sandia National Laboratories SAR were employed to acquire simultaneous images of oil slicks and other natural sea surface features that impact oil-spill interpretation. Surface truth and other environmental data were also recorded during the experiment. The experiment data were processed at Sandia National Laboratories and delivered to the R&D Center on a computer workstation for analysis by experiment participants. Issues such as optimal spatial resolution, single-look vs. multi-look SAR imaging, and the utility of SAR for oil-spill analysis were addressed. Finally, conceptual design requirements for a possible future Coast Guard SAR were outlined and evaluated.

Generic event trees and the treatment of dependencies and non-proceduralized actions in a low power and shutdown Probabilistic Risk Assessment

Whitehead, Donnie W.

Sandia National Laboratories was tasked by the US Nuclear Regulatory Commission to perform a Probabilistic Risk Assessment (PRA) of a boiling water reactor (BWR) during low power and shutdown (LP&S) conditions. The plant chosen for the study was Grand Gulf Nuclear Station (GGNS), a BWR 6. In performing the analysis, it was found that in comparison with full-power PRAs, the low decay heat levels present during LP&S conditions result in a relatively large number of ways by which cooling can be provided to the core. In addition, because of the less stringent requirements imposed during LP&S conditions, the number of possible system configurations is large and the availability of plant systems is more difficult to specify. These aspects of the LP&S environment led to the development and use of "generic" event trees in performing the analysis. The use of "generic" event trees, in turn, had a significant impact on the nature of the human reliability analysis (HRA) that was performed. This paper describes the development of the event trees for the LP&S PRA and important aspects of the resulting HRA.

A numerical study of bench blast row delay timing and its influence on percent-cast

Preece, Dale S.

The computer program DMC (Distinct Motion Code), which was developed for simulating the rock motion associated with blasting, has been used to study the influence of row delay timing on rock motion. The numerical simulations correspond with field observations in that very short delays (<50 ms) and very long delays (>300 ms) produce a lower percent-cast than a medium delay (100 to 200 ms). The DMC-predicted relationship between row delay timing and percent-cast is more complex than expected, with a dip in the curve where the optimum timing might be expected. More study is required to gain a full understanding of this phenomenon.

Achieving high performance on the Intel Paragon

Greenberg, D.S.

When presented with a new supercomputer most users will first ask "How much faster will my applications run?" and then add a fearful "How much effort will it take me to convert to the new machine?" This paper describes some lessons learned at Sandia while asking these questions about the new 1800+ node Intel Paragon. The authors conclude that the operating system is crucial both to achieving high performance and to allowing easy conversion from previous parallel implementations to a new machine. Using the Sandia/UNM Operating System (SUNMOS) they were able to port an LU factorization of dense matrices from the nCUBE2 to the Paragon and achieve 92% scaled speed-up on 1024 nodes. Thus a 44,000 by 44,000 matrix, which had required over 10 hours on the previous machine, was completed in less than half an hour at a rate of over 40 GFLOPS. Two keys to achieving such high performance were the small size of SUNMOS (less than 256 kbytes) and the ability to send large messages with very low overhead.
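The quoted figures are easy to sanity-check against the standard (2/3)n³ flop count for dense LU factorization (lower-order terms ignored):

```python
def lu_seconds(n, gflops):
    """Time to factor a dense n x n matrix at a sustained rate of `gflops`
    GFLOP/s, using the standard (2/3)n^3 flop count for LU."""
    flops = (2.0 / 3.0) * n**3
    return flops / (gflops * 1e9)

# 44,000 x 44,000 at 40 GFLOP/s, in minutes.
print(lu_seconds(44_000, 40.0) / 60.0)
```

At 40 GFLOP/s this comes out to roughly 24 minutes, consistent with the abstract's "less than half an hour."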
