Publications

Results 97026–97050 of 99,299

Experimental investigation of pressure and blockage effects on combustion limits in H₂-air-steam mixtures

Sherman, M.P.

Experiments with hydrogen-air-steam mixtures, such as those found within a containment system following a reactor accident, were conducted in the Heated Detonation Tube (43 cm diameter and 12 m long) to determine the region of benign combustion, i.e., the region between the flammability limits and the deflagration-to-detonation transition limits. Obstacles were used to accelerate the flame; these included annular rings with a 30% blockage ratio and alternating rings and disks with a 60% blockage ratio. The initial conditions were 110 °C and one or three atmospheres of pressure. A benign burning region exists for rich mixtures, but it is generally smaller than the one for lean mixtures. Effects of the different obstacles and pressures are discussed.

More Details

Downsizing a database platform for increased performance and decreased costs

Miller, M.M.; Tolendino, L.F.

Technological advances in the world of microcomputers have brought forth affordable systems and powerful software that can compete with the more traditional world of minicomputers. This paper describes an effort at Sandia National Laboratories to decrease operational and maintenance costs and increase performance by moving a database system from a minicomputer to a microcomputer.

More Details

A methodology for the evaluation of the turbine jet engine fragment threat to generic air transportable containers

Harding, David C.

Uncontained, high-energy gas turbine engine fragments are a potential threat to air-transportable containers carried aboard jet aircraft. The threat to a generic example container is evaluated by probability analyses and penetration testing to demonstrate the methodology to be used in the evaluation of a specific container/aircraft/engine combination. Fragment/container impact probability is the product of the uncontained fragment release rate and the geometric probability that a container is in the path of this fragment. The probability of a high-energy rotor burst fragment from four generic aircraft engines striking one of the containment vessels aboard a transport aircraft is approximately 1.2 × 10⁻⁹ strikes/hour. Finite element penetration analyses and tests can be performed to identify specific fragments which have the potential to penetrate a generic or specific containment vessel. The relatively low probability of engine fragment/container impacts is primarily due to the low release rate of uncontained, hazardous jet engine fragments.
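The strike probability described above is a simple product of two factors, and the arithmetic can be sketched as follows. Both input values below are assumed for illustration only; they are not the component figures derived in the study, which come from engine failure data and container/aircraft geometry.

```python
# Fragment-strike probability as a product of two factors (illustrative only).
release_rate = 3.0e-7      # uncontained hazardous fragment releases per flight hour (assumed)
geometric_prob = 4.0e-3    # probability a container lies in a released fragment's path (assumed)

strike_rate = release_rate * geometric_prob   # expected strikes per flight hour
print(f"{strike_rate:.2e} strikes/hour")
```

With these assumed inputs the product happens to land near the abstract's 1.2 × 10⁻⁹ strikes/hour; the point is only that the two factors multiply directly.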

More Details

Stockpile Transition Enabling Program (STEP): Process and project requirements

Ma, Kwok-Kee

The Stockpile Transition Enabling Program (STEP) is aimed at identifying weapon components suitable for use in more than one weapon and at qualifying the components so identified for multiple use. Work includes identifying the means to maintain the manufacturing capability for these items. This document provides the participants in STEP with a common, consistent understanding of the process and requirements. The STEP objectives are presented and the activities are outlined. The STEP project selections are based on customer needs, product applicability, and the maturity of the technology used. A formal project selection process is described and the selection criteria are defined. The concept of "production readiness" is introduced, along with a summary of the project requirements and deliverables needed to demonstrate production readiness.

More Details

MELCOR 1.8.1 calculations of ISP31: The CORA-13 experiment

Gross, Robert J.

The MELCOR code was used to simulate one of GRS's (a reactor research group in Germany) core degradation experiments conducted in the CORA out-of-pile test facility. This test, designated CORA-13, was selected as one of the International Standard Problems, Number ISP31, by the Organization for Economic Cooperation and Development. In this blind calculation, only initial and boundary conditions were provided. The experiment consisted of a small core bundle of twenty-five PWR fuel elements that was electrically heated to temperatures greater than 2,800 K. The experiment comprised three phases: a 3,000-second gas preheat phase, a 1,870-second transient phase, and a 180-second water quench phase. MELCOR predictions are compared both to the experimental data and to eight other ISP31 submittals. Temperatures of various components, energy balance, zircaloy oxidation, and core blockage are examined. Up to the point where oxidation was significant, MELCOR temperatures agreed very well with the experiment, usually to within 50 K. MELCOR predicted oxidation to occur about 100 seconds earlier and at a faster rate than the experimental data showed. The large oxidation spike that occurred during quench was not predicted. However, the experiment produced 210 grams of hydrogen, while MELCOR predicted 184 grams, one of the closest integral predictions of the nine submittals. Core blockage was of the right magnitude; however, material collected on the lower grid spacer in the experiment at an axial location of 450 mm, while in MELCOR the material collected at the 50 to 150 mm location. In general, compared to the other submittals, the MELCOR calculation was superior.

More Details

Characterization of the Facility for Atmospheric Corrosion Testing (FACT) at Sandia

Greenholt, Charles J.

The capability to perform atmospheric corrosion testing of materials and components now exists at Sandia as a result of the installation of a system called the Facility for Atmospheric Corrosion Testing (FACT). This report details the design, equipment, operation, maintenance, and future modifications of the system. This report also presents some representative data acquired from testing copper in environments generated by the FACT.

More Details

The unique signal concept for detonation safety in nuclear weapons

Hoover, Marcey L.

The purpose of a unique signal (UQS) in a nuclear weapon system is to provide an unambiguous communication of intent to detonate from the UQS information input source device to a stronglink safety device in the weapon in a manner that is highly unlikely to be duplicated or simulated in normal environments and in a broad range of ill-defined abnormal environments. This report presents safety considerations for the design and implementation of UQSs in the context of the overall safety system.

More Details

Preliminary performance assessment of the Greater Confinement Disposal facility at the Nevada Test Site. Volume 3: Supporting details

Price, Laura L.

The Department of Energy's Nevada Operations Office (DOE/NV) has disposed of a small quantity of transuranic waste at the Greater Confinement Disposal facility in Area 5 of the Nevada Test Site. In 1989, DOE/NV contracted with Sandia National Laboratories to perform a preliminary performance assessment of this disposal facility. This preliminary performance assessment consisted of analyses designed to assess the likelihood of complying with Environmental Protection Agency standards for the disposal of transuranic waste, high-level waste, and spent fuel. The preliminary nature of this study meant that no other regulatory standards were considered and that the analyses were conducted with specific limitations. The procedure for the preliminary performance assessment consisted of (1) collecting information about the site, (2) developing models based on this information, (3) implementing these models in computer codes, (4) performing the analyses using the computer codes, and (5) performing sensitivity analyses to determine the more important variables. Based on the results of the analyses, it appears that the Greater Confinement Disposal facility will most likely comply with the Environmental Protection Agency's standards for the disposal of transuranic waste. The results of the sensitivity analyses are being used to guide site characterization activities related to the next iteration of performance assessment analyses for the Greater Confinement Disposal facility.

More Details

Insights into the behavior of nuclear power plant containments during severe accidents

Ludwigsen, John S.

The containment building surrounding a nuclear reactor offers the last barrier to the release of radioactive materials from a severe accident into the environment. The loading environment of the containment under severe accident conditions may include pressures and temperatures much greater than design values. Investigations into the performance of containments subject to ultimate or failure pressure and temperature conditions have been performed over the last several years through a program administered by the Nuclear Regulatory Commission (NRC). These NRC-sponsored investigations are discussed here. Reviewed are the results of large scale experiments on reinforced concrete, prestressed concrete, and steel containment models pressurized to failure. In conjunction with these major tests, separate-effects testing has been performed on many of the critical containment components, that is, aged and unaged seals, a personnel air lock, and electrical penetration assemblies subjected to elevated temperature and pressure. An objective of the NRC program is to gain an understanding of the behavior of typical existing and planned containment designs subject to postulated severe accident conditions. This understanding has led to the development of experimentally verified analytical tools that can be applied to accurately predict ultimate capacities, predictions useful in developing severe accident mitigation schemes. Finally, speculation on the response of containments subjected to severe accident conditions is presented.

More Details

Object-oriented DFD models to present the functional and behavioral views

Maxted, A.

An object-oriented methodology is presented that is based on two sets of Data Flow Diagrams (DFDs): one for the functional view, and one for the behavioral view. The functional view presents the information flow between shared objects. These objects map to the classes identified in the structural view (e.g., Information Model). The behavioral view presents the flow of information between control components and relates these components to their state models. Components appearing in multiple views provide a bridge between the views. The top-down hierarchical nature of the DFDs provides a needed overview, or road map, through the software system.

More Details

A proposal for reverse engineering CASE tools to support new software development

Maxted, A.

Current CASE technology provides sophisticated diagramming tools to generate a software design. The design, stored internal to the CASE tool, is bridged to the code via code generators. There are several limitations to this technique: (1) the portability of the design is limited to the portability of the CASE tools, and (2) the code generators offer a clumsy link between design and code. The CASE tool, though valuable during design, becomes a hindrance during implementation. Frustration frequently causes the CASE tool to be abandoned during implementation, permanently severing the link between design and code. Current CASE tools store the design in an internal structure, from which code is generated. The technique presented herein suggests that CASE tools instead store the system knowledge directly in code. The CASE support then switches from an emphasis on code generators to employing state-of-the-art reverse engineering techniques for document generation. Graphical and textual descriptions of each software component (e.g., Ada package) may be generated from the code via reverse engineering techniques. These reverse-engineered descriptions can be merged with system overview diagrams to form a top-level design document. The resulting document can readily reflect changes to the software components by automatically generating new component descriptions for the changed components. The proposed auto-documentation technique facilitates the document upgrade task at later stages of development (e.g., design, implementation, and delivery) by using the component code as the source of the component descriptions. The CASE technique presented herein is a unique application of reverse engineering techniques to new software systems. This technique contrasts with more traditional CASE auto code generation techniques.

More Details

Optical diagnostic instrument for monitoring etch uniformity during plasma etching of polysilicon in a chlorine-helium plasma

Hareland, W.A.

Nonuniform etching is a serious problem in plasma processing of semiconductor materials and has important consequences in the quality and yield of microelectronic components. In many plasmas, etching occurs at a faster rate near the periphery of the wafer, resulting in nonuniform removal of specific materials over the wafer surface. This research was to investigate in situ optical diagnostic techniques for monitoring etch uniformity during plasma processing of microelectronic components. We measured 2-D images of atomic chlorine at 726 nm in a chlorine-helium plasma during plasma etching of polysilicon in a parallel-plate plasma etching reactor. The 3-D distribution of atomic chlorine was determined by Abel inversion of the plasma image. The experimental results showed that the chlorine atomic emission intensity is at a maximum near the outer radius of the plasma and decreases toward the center. Likewise, the actual etch rate, as determined by profilometry on the processed wafer, was approximately 20% greater near the edge of the wafer than at its center. There was a direct correlation between the atomic chlorine emission intensity and the etch rate of polysilicon over the wafer surface. Based on these analyses, 3-D imaging would be a useful diagnostic technique for in situ monitoring of etch uniformity on wafers.
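The reconstruction step mentioned above, recovering a radial distribution from line-of-sight emission data, can be sketched numerically with an onion-peeling Abel inversion. This is a generic sketch under the cylindrical-symmetry assumption, not the authors' actual procedure, and all names and values are illustrative.

```python
import numpy as np

def onion_peel(projection, dr):
    """Onion-peeling Abel inversion: recover radial emissivity eps(r)
    from a chord-integrated projection P(y), assuming axisymmetry."""
    n = len(projection)
    r = np.arange(n + 1) * dr                 # annulus boundary radii
    eps = np.zeros(n)
    for i in range(n - 1, -1, -1):            # peel inward from the edge
        y = r[i]                              # chord offset through annulus i
        # Path lengths of this chord through annuli j >= i.
        outer = np.sqrt(np.maximum(r[i + 1:] ** 2 - y ** 2, 0.0))
        inner = np.sqrt(np.maximum(r[i:-1] ** 2 - y ** 2, 0.0))
        L = 2.0 * (outer - inner)
        # Remove contributions of already-solved outer annuli.
        eps[i] = (projection[i] - np.dot(L[1:], eps[i + 1:])) / L[0]
    return eps

# Consistency check: forward-project a known profile, then invert it.
dr = 0.05
true_eps = np.exp(-np.linspace(0, 2, 40) ** 2)
r = np.arange(41) * dr
P = np.array([
    2.0 * np.dot(np.sqrt(np.maximum(r[i + 1:] ** 2 - (i * dr) ** 2, 0.0))
                 - np.sqrt(np.maximum(r[i:-1] ** 2 - (i * dr) ** 2, 0.0)),
                 true_eps[i:])
    for i in range(40)
])
recovered = onion_peel(P, dr)
```

Because the discrete forward projection and the peeling step use the same path-length geometry, the inversion recovers the test profile essentially exactly; with measured images, noise amplification near the axis is the usual practical concern.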

More Details

Surface acoustic wave sensing of VOCs in harsh chemical environments

Pfeifer, Kent B.

The measurement of VOC concentrations in harsh chemical and physical environments is a formidable task. A surface acoustic wave (SAW) sensor has been designed for this purpose, and its construction and testing are described in this paper. Included is a detailed description of the design elements specific to operation in 300 °C steam and HCl environments, including descriptions of the temperature control, gas handling, and signal processing components. In addition, laboratory temperature stability was studied and a minimum detection limit was defined for operation in industrial environments. Finally, a description of field tests performed on steam reforming equipment at Synthetica Technologies Inc. of Richmond, CA, is given, including a report on the destruction efficiency of CCl₄ in the Synthetica moving bed evaporator. Design improvements based on the field tests are proposed.

More Details

MELCOR 1.8.1 assessment: PNL Ice Condenser Aerosol Experiments

Gross, Robert J.

The MELCOR code was used to simulate PNL's Ice Condenser Experiments 11-6 and 16-11. In these experiments, ZnS was injected into a mixing chamber, and the combined steam/air/aerosol mixture flowed into an ice condenser that was 14.7 m tall. Experiment 11-6 was a low flow test; Experiment 16-11 was a high flow test. Temperatures in the ice condenser region and particle retention were measured in these tests. MELCOR predictions compared very well to the experimental data. The MELCOR calculations were also compared to CONTAIN code calculations for the same tests. A number of sensitivity studies were performed. It was found that the simulation time step, aerosol parameters such as the number of MAEROS components and sections used and the particle density, and ice condenser parameters such as the energy capacity of the ice, the ice heat transfer coefficient multiplier, and the ice heat structure characteristic length could all affect the results. Thermal/hydraulic parameters such as control volume equilibrium assumptions, flow loss coefficients, and the bubble rise model were found to affect the results less significantly. MELCOR results were not machine dependent for this problem.

More Details

Proposal for a numerical array library (Revised)

Budge, Kent G.

One of the most widely recognized inadequacies of C is its low-level treatment of arrays. Arrays are not first-class objects in C; an array name in an expression almost always decays into a pointer to the underlying type. This is unfortunate, especially since an increasing number of high-performance computers are optimized for calculations involving arrays of numbers. On such machines, double[] may be regarded as an intrinsic data type comparable to double or int and quite distinct from double*. This weakness of C is acknowledged in the ARM (the Annotated C++ Reference Manual), where it is suggested that the inadequacies of the C array can be overcome in C++ by wrapping it in a class that supplies dynamic memory management, bounds checking, operator syntax, and other useful features. Such "smart arrays" can in fact supply the same functionality as the first-class arrays found in other high-level, general-purpose programming languages. Unfortunately, they are expensive in both time and memory and make poor use of advanced floating-point architectures. Is there a better solution? The most obvious solution is to make arrays first-class objects and add the functionality mentioned in the previous paragraph. However, this would destroy C compatibility and significantly alter the C++ language. Major conflicts with existing practice would seem inevitable. I propose instead that numerical array classes be adopted as part of the C++ standard library. These classes will have the functionality appropriate for the intrinsic arrays found on most high-performance computers, and the compilers written for these computers will be free to implement them as built-in classes. On other platforms, these classes may be defined normally, and will provide users with basic array functionality without imposing an excessive burden on the implementor.
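The "smart array" wrapping idea the proposal critiques can be sketched as follows, in Python rather than C++ purely for brevity; the class and method names are invented for illustration. Note that each overloaded operator allocates a temporary result object, which is exactly the time-and-memory overhead the proposal attributes to class-based arrays.

```python
# Illustrative "smart array": dynamic storage, bounds checking, and
# elementwise operator syntax, as described for C++ wrapper classes.
class NumArray:
    def __init__(self, values):
        self._data = list(values)            # dynamic memory management

    def __len__(self):
        return len(self._data)

    def __getitem__(self, i):
        if not 0 <= i < len(self._data):     # explicit bounds checking
            raise IndexError(f"index {i} out of range")
        return self._data[i]

    def _paired(self, other):
        if len(self) != len(other):
            raise ValueError("length mismatch")
        return zip(self._data, other._data)

    def __add__(self, other):                # elementwise operator syntax;
        return NumArray(a + b for a, b in self._paired(other))

    def __mul__(self, other):                # each call builds a temporary
        return NumArray(a * b for a, b in self._paired(other))

x = NumArray([1.0, 2.0, 3.0])
y = NumArray([4.0, 5.0, 6.0])
z = x + y * y                                # elementwise, no explicit loops
```

The expression `x + y * y` creates two temporaries and makes two passes over the data, where an intrinsic array type could fuse the work into one loop; that is the performance gap motivating the proposal.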

More Details

Preliminary Nuclear Safety Assessment of the NEPST (Topaz II) Space Reactor Program

Marshall, Albert C.

The United States (US) Strategic Defense Initiative Organization (SDIO) decided to investigate the possibility of launching a Russian Topaz II space nuclear power system. A preliminary nuclear safety assessment was conducted to determine whether or not a space mission could be conducted safely and within budget constraints. As part of this assessment, a safety policy and safety functional requirements were developed to guide both the safety assessment and future Topaz II activities. A review of the Russian flight safety program was conducted and documented. Our preliminary nuclear safety assessment included a number of deterministic analyses, such as: neutronic analysis of normal and accident configurations, an evaluation of temperature coefficients of reactivity, a reentry and disposal analysis, an analysis of postulated launch abort impact accidents, and an analysis of postulated propellant fire and explosion accidents. Based on the assessment to date, it appears that it will be possible to safely launch the Topaz II system in the US with a modification to preclude water-flooded criticality. A full scale safety program is now underway.

More Details

Modeling of the second stage of the STAR 1.125 inch two-stage gas gun

Longcope, Donald B.

The second stage of the Shock Technology and Applied Research (STAR) facility two-stage light gas gun at Sandia National Laboratories has been modeled to better assess its safety during operation and to determine the significance of various parameters to its performance. The piston motion and loading of the acceleration reservoir (AR), the structural response of the AR, and the projectile motion are determined. The piston is represented as an incompressible fluid, while the AR is modeled with the ABAQUS finite element structural analysis code. Model results are compared with a measured profile of AR diameter growth for a test at maximum conditions and with projectile exit velocities for a group of tests. Changes in the piston density and in the break diaphragm opening pressure are shown to significantly affect the AR loading and the projectile final velocity.

More Details

Message passing in PUMA

Wheat, S.R.

This paper provides an overview of the message passing primitives provided by PUMA (Performance-oriented, User-managed Messaging Architecture). Message passing in PUMA is based on the concept of a portal: an opening in the address space of an application process. Once an application process has established a portal, other processes can write values into the memory associated with the portal using a simple send operation. Because messages are written directly into the address space of the receiving process, there is no need to buffer messages in the PUMA kernel. This simplifies the design of the kernel, increasing its reliability and portability. Moreover, because messages are mapped directly into the address space of the application process, the application can manage the messages that it receives without needing direct support from the kernel.
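The portal idea can be modeled conceptually as follows. This is a toy simulation of the concept only, not the PUMA interface; every name here is invented, and the dictionary merely stands in for the kernel's portal lookup table.

```python
class Portal:
    """A receiver-owned region of memory that senders deposit into directly."""
    def __init__(self, size):
        self.memory = bytearray(size)   # lives in the receiver's address space
        self.offset = 0                 # next free byte

registry = {}                           # portal id -> Portal ("kernel" table)

def open_portal(pid, size):
    """Receiver establishes a portal: exposes `size` bytes of its memory."""
    registry[pid] = Portal(size)

def send(pid, payload):
    """Sender writes straight into the target portal, with no
    intermediate kernel-side message buffer."""
    p = registry[pid]
    end = p.offset + len(payload)
    if end > len(p.memory):
        raise MemoryError("portal full")
    p.memory[p.offset:end] = payload    # direct deposit into receiver memory
    p.offset = end

open_portal(7, 64)
send(7, b"hello")
send(7, b" world")
received = bytes(registry[7].memory[:registry[7].offset])
```

The point the sketch makes is architectural: because data lands directly in receiver-owned memory, the "kernel" here never copies or queues a message, which is the simplification the paper credits for PUMA's reliability and portability.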

More Details

A massively parallel adaptive finite element method with dynamic load balancing

Wheat, S.R.

We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.

More Details

Design and testing of planar magnetic micromotors fabricated by deep x-ray lithography and electroplating

Karnowsky, M.

The successful design and testing of a three-phase planar integrated magnetic micromotor is presented. Fabrication is based on a modified deep X-ray lithography and electroplating, or LIGA, process. Maximum rotational speeds of 33,000 rpm are obtained in air with a rotor diameter of 285 μm and do not change when the motor is operated in vacuum. Real-time rotor response is obtained with an integrated shaft encoder. Long lifetime is evidenced by testing to over 5 × 10⁷ rotation cycles without changes in performance. Projected speeds of the present motor configuration are in the vicinity of 100 krpm and are limited by torque ripple. Higher speeds, which are attractive for sensor applications, require constant-torque characteristic excitation, as is evidenced by ultracentrifuge and gyroscope design. Further understanding of electroplated magnetic material properties will drive these performance improvements.

More Details

Visualization for applications in shock physics

Pavlakos, Constantine

This case study presents work being done to provide visualization capabilities for a family of codes at Sandia in the area of shock physics. The codes, CTH and Parallel CTH, are running in traditional supercomputing as well as massively parallel environments. These are Eulerian codes which produce data on structured grids. Data sets can be large, so managing large data is a priority. A supercomputing-based distributed visualization environment has been implemented to support such applications. This environment, which is based in New Mexico, is also accessible from our branch site in California via a long haul FDDI/ATM link. Functionality includes the ability to track ongoing simulations. A custom visualization file has been developed to provide efficient, interactive access to result data. Visualization capabilities are based on the commercially available AVS software. A few example results are presented, along with a brief discussion of future work.

More Details

Comparison of analytic Whipple bumper shield ballistic limits with CTH simulations

Hertel, Eugene S.

A series of CTH simulations was conducted to assess the feasibility of using the hydrodynamic code for debris cloud formation and to predict any damage due to the subsequent loading on rear structures. Six axisymmetric simulations and one 3-dimensional simulation were conducted for spherical projectiles impacting Whipple bumper shields. The projectile diameters were chosen to correlate with two well-known analytic expressions for the ballistic limit of a Whipple bumper shield. It has been demonstrated that CTH can be used to simulate the debris cloud formation, the propagation of the debris across a void region, and the secondary impact of the debris against a structure. In addition, the results from the CTH simulations were compared to the analytic estimates of the ballistic limit. At impact velocities of 10 km/s or less, the CTH-predicted ballistic limit lies between the two analytic estimates. However, for impact velocities greater than 10 km/s, CTH simulations predicted a ballistic limit larger than both analytical estimates. The differences at high velocities are not well understood. Structural failure at late times due to the time-integrated loading of a very diffuse debris cloud has not been considered in the CTH model. In addition, the analytic predictions are extrapolated from relatively low velocity data, and the extrapolation technique may not be valid. The discrepancy between the two techniques should be investigated further.

More Details

A multiphase model for shock-induced flow in low density foam

Baer, M.R.

A multiphase mixture model is applied to describe shock-induced flow in deformable low-density foam. This model includes interphase drag and heat transfer, and all phases are treated as compressible. Volume fraction is represented as an independent kinematic variable, and the entropy inequality suggests a thermodynamically admissible evolutionary equation to describe rate-dependent compaction. This multiphase model has been applied to shock tube experiments conducted by B. W. Skews and colleagues in the study of normal shock impingement on a wall-supported low-density porous layer. Numerical solution of the multiphase flow equations employs a high resolution adaptive finite element method which accurately resolves contact surfaces and shock interactions. Additional studies are presented in an investigation of the effect of initial gas pressure in the foam layer, the shock interaction with multiple layers of foam, and the shock-induced flow in an unsupported foam layer.

More Details

Quality and ES&H Self-Appraisal Program at the Center for Applied Physics, Engineering and Testing

Zawadzkas, Gerald A.

This report describes the Quality and ES&H Self-Appraisal Program at the Center for Applied Physics, Engineering and Testing, 9300 and explains how the program promotes good "Conduct of Operations" throughout the center and helps line managers improve efficiency and maintain a safe work environment. The program provides a means to identify and remove hazards and to ensure workers are following correct and safe procedures; but, most importantly, 9300's Self-Appraisal program uses DOE's "Conduct of Operations" and "Quality Assurance" guidelines to evaluate the manager's policies and decisions. The idea is to draw attention to areas for improvement in ES&H while focusing on how well the organization's processes and programs are doing. A copy of the Administrative Procedure which establishes and defines the program, as well as samples of a Self-Appraisal Report and a Manager's Response to the Self-Appraisal Report, are provided as appendixes.

More Details

Center of trace algorithms for extracting digitized waveforms from two-dimensional images

Lee, J.W.

A class of recording instruments records high-frequency signals as a two-dimensional image rather than converting the analog signal directly to digital output. This report explores the task of reducing the two-dimensional trace to a uniformly sampled waveform that best represents the signal characteristics. Many recorders provide algorithms for locating the center of trace. The author discusses these algorithms and alternative algorithms, comparing their effectiveness.
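One common center-of-trace approach, the per-column intensity-weighted centroid, can be sketched as follows. This is a generic illustration of the task the report describes, not necessarily one of the specific algorithms it compares, and it assumes a bright trace on a dark background.

```python
def trace_centroids(image):
    """Reduce a 2-D trace image to one y-value per column by taking the
    intensity-weighted centroid of each column's pixel values.
    `image` is a list of equal-length rows of non-negative intensities."""
    n_rows, n_cols = len(image), len(image[0])
    centroids = []
    for c in range(n_cols):
        column = [image[r][c] for r in range(n_rows)]
        total = sum(column)
        if total == 0:
            centroids.append(None)        # no signal in this column
            continue
        centroids.append(sum(r * v for r, v in enumerate(column)) / total)
    return centroids

# A small synthetic image with the trace drifting downward across columns.
img = [
    [0, 0, 0, 0],
    [9, 1, 0, 0],
    [0, 8, 2, 0],
    [0, 1, 7, 1],
    [0, 0, 1, 9],
]
waveform = trace_centroids(img)
```

Sampling the centroid column by column yields a uniformly sampled waveform whenever the film or screen was swept at constant speed; resampling would be needed otherwise.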

More Details