The Strategic Petroleum Reserve site at West Hackberry, Louisiana, has historically experienced casing leaks. Numerous West Hackberry oil storage caverns have wells exhibiting communication between the interior 10 3/4 x 20-inch (oil) annulus and the "outer cemented" 20 x 26-inch annulus. Well 108 in Cavern 108 exhibits this behavior. One cause of this communication, and perhaps the primary one, is thought to be casing thread leaks at the 20-inch casing joints combined with microannuli along the cement-casing interfaces and other cracks and flaws in the cemented 20 x 26-inch annulus. An operation consisting of a series of nitrogen leak tests, similar to cavern integrity tests, was performed on Cavern 108 to determine the leak horizons and to see whether they coincided with the depths of casing joints. Several leaky threaded casing joints were identified between 400 and 1500 feet. A new leak detection procedure was developed as a result of this test, and the methodology for identifying and interpreting such casing joint leaks is presented in this report. Analysis of the test data showed that individual joint leaks could be identified, but not without some ambiguity. This ambiguity is attributed to changes in the fluid content of the leak path (nitrogen forcing out oil) and possibly to changes in the characteristics of the flow path during the test. These changes dominated the test response and made identifying individual leak horizons difficult. One consequence of concern was a progressive increase in the measured leak rate during testing, caused by nitrogen cleaning small amounts of oil out of the leak paths and very likely by changes in the leak path during the flow test. Careful consideration must therefore be given before attempting similar tests.
Although such leaks have caused no known environmental or economic problems to date, they may be significant because of the potential for future problems. To mitigate such problems, some repair scenarios are discussed, including injection of sealants.
This report describes a new microsystems technology for the creation of microsensors and microelectromechanical systems (MEMS) using stress-free amorphous diamond (aD) films. Stress-free aD is a new material that has mechanical properties close to those of crystalline diamond, and the material is particularly promising for the development of high-sensitivity microsensors and rugged and reliable MEMS. Some of the unique properties of aD include the ability to easily tailor film stress from compressive to slightly tensile, hardness and stiffness 80-90% those of crystalline diamond, very high wear resistance, a hydrophobic surface, extreme chemical inertness, chemical compatibility with silicon, controllable electrical conductivity from insulating to conducting, and biocompatibility. A variety of MEMS structures were fabricated from this material and evaluated. These structures included electrostatically-actuated comb drives, micro-tensile test structures, singly- and doubly-clamped beams, and friction and wear test structures. It was found that surface micromachined MEMS could be fabricated easily in this material and that the hydrophobic surface of the film enabled the release of structures without the need for special drying procedures or applied hydrophobic coatings. Measurements using these structures revealed that aD has a Young's modulus of approximately 650 GPa, a tensile fracture strength of 8 GPa, and a fracture toughness of 8 MPa·m^1/2. These results suggest that this material may be suitable in applications where stiction or wear is an issue. Flexural plate wave (FPW) microsensors were also fabricated from aD. These devices use membranes of aD as thin as approximately 100 nm. The performance of the aD FPW sensors was evaluated for the detection of volatile organic compounds using ethyl cellulose as the sensor coating. For comparable membrane thicknesses, the aD sensors showed better performance than silicon nitride based sensors.
An increase in chemical sensitivity of more than an order of magnitude is expected through the use of ultra-thin aD membranes in the FPW sensor. The discoveries and development of the aD microsystems technology made in this project have led to new research projects in the areas of aD bioMEMS and aD radio frequency MEMS.
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
This activity brought two robotic mobile manipulation systems developed by Sandia National Laboratories to the Maneuver Support Center (MANSCEN) at Ft. Leonard Wood for the following purposes: Demonstrate advanced manipulation and control capabilities; Apply manipulation to hazardous activities within MANSCEN mission space; Stimulate thought and identify potential applications for future mobile manipulation applications; and Provide introductory knowledge of manipulation to better understand how to specify capability and write requirements.
This report summarizes the work completed in the MyLink Lab Directed Research and Development project. The goal of this project was to investigate the ability of computers to come to understand individuals and to assist them with various aspects of their lives.
The conventional discrete ordinates approximation to the Boltzmann transport equation can be described in a matrix form. Specifically, the within-group scattering integral can be represented by three components: a moment-to-discrete matrix, a scattering cross-section matrix and a discrete-to-moment matrix. Using and extending these entities, we derive and summarize the matrix representations of the second-order transport equations.
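In one-dimensional slab geometry with Gauss-Legendre quadrature, this factorization can be sketched as follows; the quadrature order, number of Legendre moments, and cross-section values below are illustrative assumptions, not figures from this report.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

n_dirs, n_moms = 8, 4                       # S8 quadrature, P3 scattering (assumed)
mu, w = leggauss(n_dirs)                    # Gauss-Legendre ordinates and weights

# discrete-to-moment: phi_l = sum_n w_n P_l(mu_n) psi_n
D = np.array([[w[n] * Legendre.basis(l)(mu[n]) for n in range(n_dirs)]
              for l in range(n_moms)])
# moment-to-discrete: maps Legendre flux moments back to the ordinate directions
M = np.array([[(2 * l + 1) / 2 * Legendre.basis(l)(mu[n]) for l in range(n_moms)]
              for n in range(n_dirs)])
# diagonal matrix of Legendre scattering cross-section moments (made-up values)
Sigma = np.diag([0.9, 0.5, 0.2, 0.05])

# within-group scattering operator acting on the vector of discrete angular fluxes
S = M @ Sigma @ D
```

With Gauss quadrature of sufficient order, D is a left inverse of M, which is the consistency property the matrix formulation relies on.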
This document is the second in a series describing graphical user interface tools developed to control the Visual Empirical Region of Influence (VERI) algorithm. In this paper we describe a user interface designed to optimize the VERI algorithm results. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce the best pattern recognition results. With a small number of features in a data set an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features, and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. This document illustrates step-by-step examples of how to use the interface and how to interpret the results. It is written in two parts: Part I deals with using the interface to find the best combination from all possible sets of features; Part II describes how to use the tool to find a good solution in data sets with a large number of features. The VERI Optimization Interface Tool was written using the Tcl/Tk Graphical User Interface (GUI) programming language, version 8.1. Although the Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts on developing a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The optimization interface executes the VERI algorithm in Leave-One-Out mode using the Euclidean metric. For a thorough description of the type of data analysis we perform, and for a general pattern recognition tutorial, refer to our website at: http://www.sandia.gov/imrl/XVisionScience/Xusers.htm.
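The exhaustive search over feature combinations can be sketched as follows (in Python rather than the tool's Tcl/Tk). Since the VERI classifier itself is not reproduced here, a leave-one-out 1-nearest-neighbor rule with the Euclidean metric stands in for it; the function names are our own.

```python
import itertools
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out 1-nearest-neighbor accuracy (Euclidean metric);
    a stand-in for the VERI leave-one-out classifier, which is not public."""
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # leave sample i out
        correct += y[np.argmin(d)] == y[i]
    return correct / len(X)

def best_feature_subset(X, y, max_size=None):
    """Exhaustive search over feature combinations, as in the tool's exact
    mode; feasible only when the feature count is small, since the number
    of subsets grows exponentially."""
    n = X.shape[1]
    best = (-1.0, ())
    for k in range(1, (max_size or n) + 1):
        for cols in itertools.combinations(range(n), k):
            acc = loo_accuracy(X[:, cols], y)
            best = max(best, (acc, cols))
    return best
```

For large feature counts the inner loop must be replaced by a heuristic search, which is exactly the alternate technique Part II of the document addresses.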
A laser safety evaluation and pertinent output measurements were performed (during March and April 2002) on the M203PI Grenade Launcher Simulator (GLS) and its associated Umpire Control Gun manufactured by Oscmar International Limited, Auckland, New Zealand. The results showed that the Oscmar Umpire Gun is laser hazard Class 1 and can be used without restrictions. The radiant energy output of the Oscmar M203PI GLS, under "Small Source" criteria at 10 centimeters, is laser hazard Class 3b and not usable, under SNL policy, in force-on-force exercises. However, due to a relatively large exit diameter and an intentionally large beam divergence, intended to simulate a large-area blast, the output beam geometry met the criteria for "Extended Source" viewing [ANSI Std. Z136.1-2000 (S.l)]. Under the "Extended Source" criteria, the output of the M203PI GLS unit was, in fact, laser hazard Class 1 (eye safe) for 3 of the 4 possible modes of laser operation. The 4th mode, "Auto Fire", which simulates a continuous grenade firing every second and is not used at SNL, was laser hazard Class 3a (under the "Extended Source" viewing criteria). The M203PI GLS does present a laser hazard Class 3a to aided viewing with binoculars within 3 meters of the unit; beyond 3 meters it is eye safe. The M203PI GLS can be considered a Class 1 laser hazard and can be used under SNL policy with the following restrictions: (1) The M203PI GLS unit shall only be programmed for the "Single Fire" (which includes "Rapid Fire") and the "Auto Align" (used in adjusting the alignment of the grenade launcher simulator system to the target) modes of operation. (2) The M203PI GLS shall never be directed at personnel using binoculars within 3 meters. DOE Order 5480.16A, Firearms Safety, (Chapter 1)(5)(a)(8)(d) and DOE-STD-1091-96, Firearms Safety (Chapter 4) already prevent ESS laser engagement of personnel (with or without binoculars) "closer than 10 feet (3.05 meters)".
Both of these restrictions can be administratively imposed, through a formal Operating Procedure or Technical Work Document and by full compliance with DOE orders and standards.
This project makes use of "biomimetic behavioral engineering," in which adaptive strategies used by animals in the real world are applied to the development of autonomous robots. The key elements of the biomimetic approach are to observe and understand a survival behavior exhibited in nature, to create a mathematical model and simulation capability for that behavior, to modify and optimize the behavior for a desired robotics application, and to implement it. The application described in this report is dynamic soaring, a behavior that certain sea birds use to extract flight energy from laminar wind velocity gradients in the shallow atmospheric boundary layer directly above the ocean surface. Theoretical calculations, computational proof-of-principle demonstrations, and the first instrumented experimental flight test data for dynamic soaring are presented to address the feasibility of developing dynamic soaring flight control algorithms to sustain the flight of unmanned airborne vehicles (UAVs). Both hardware and software were developed for this application. Eight-foot custom foam sailplanes were built and flown in a steep shear gradient. A logging device was designed and constructed with custom software to record flight data during dynamic soaring maneuvers. A computational toolkit was developed to simulate dynamic soaring in special cases and with a full six-degree-of-freedom flight dynamics model in a generalized time-dependent wind field. Several 3-dimensional visualization tools were built to replay the flight simulations. A realistic aerodynamics model of an eight-foot sailplane was developed using measured aerodynamic derivatives. Genetic programming methods were developed and linked to the simulations and visualization tools. These tools can now be generalized for other biomimetic behavior applications.
Historically, high-resolution, high-slew-rate optics have been heavy, bulky, and expensive. Recent advances in MEMS (Micro Electro Mechanical Systems) technology and micro-machining may change this. Specifically, the advent of steerable sub-millimeter sized mirror arrays could provide the breakthrough technology for producing very small-scale high-performance optical systems. For example, an array of steerable MEMS mirrors could be the building blocks for a Fresnel mirror of controllable focal length and direction of view. When coupled with a convex parabolic mirror, the steerable array could realize a micro-scale pan, tilt, and zoom system that provides full CCD sensor resolution over the desired field of view with no moving parts (other than MEMS elements). This LDRD provided the first steps towards the goal of a new class of small-scale high-performance optics based on MEMS technology. A large-scale, proof-of-concept system was built to demonstrate the effectiveness of an optical configuration applicable to producing a small-scale (< 1 cm) pan and tilt imaging system. This configuration consists of a color CCD imager with a narrow field of view lens, a steerable flat mirror, and a convex parabolic mirror. The steerable flat mirror directs the camera's narrow field of view to small areas of the convex mirror, providing much higher pixel density in the region of interest than is possible with a full 360 deg. imaging system. Improved image correction (dewarping) software based on texture mapping images to geometric solids was developed. This approach takes advantage of modern graphics hardware and provides a great deal of flexibility for correcting images from various mirror shapes. An analytical evaluation of blur spot size and axisymmetric reflector optimization were performed to address depth of focus issues that occurred in the proof-of-concept system. The resulting equations will provide the tools for developing future system designs.
GENESIS Version 2.0 is a general circulation model developed at the National Center for Atmospheric Research (NCAR) and is the principal code that is used by paleoclimatologists to model climate at various times throughout Earth's history. The primary result of this LDRD project has been the development of a distributed-memory parallel version of GENESIS, leading to a significant performance enhancement on commodity-based, large-scale computing platforms like the CPlant. The shared-memory directives of the original version were replaced by MPI calls in the new version of GENESIS. This was accomplished by means of parallel decomposition over latitude strip domains. The code achieved a parallel speedup of four times that of the shared-memory parallel version at R15 resolution. T106 resolution runs 20 times faster than the NCAR serial version on 20 nodes of the CPlant. As part of the project, GENESIS was used to model the climatic effects of an orbiting debris ring due to a large planetary impact event.
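The latitude-strip decomposition itself reduces to index arithmetic; a minimal sketch follows (the function name is our own, and the actual parallel GENESIS code additionally exchanges halo rows between neighboring strips via MPI calls).

```python
def strip_bounds(nlat, nprocs, rank):
    """Latitude indices [lo, hi) owned by `rank` when the global grid is
    split into contiguous latitude strips; when nlat does not divide
    evenly, the leftover rows go to the lowest ranks."""
    base, extra = divmod(nlat, nprocs)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi
```

Each rank then allocates only its strip plus halo rows, which is what converts the original shared-memory directives into a distributed-memory program.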
Electro-microfluidics is experiencing explosive growth in new product developments. There are many commercial applications for electro-microfluidic devices such as chemical sensors, biological sensors, and drop ejectors for both printing and chemical analysis. The number of silicon surface micromachined electro-microfluidic products is likely to increase. Manufacturing efficiency and integration of microfluidics with electronics will become important. Surface micromachined microfluidic devices are manufactured with the same tools as integrated circuits (ICs), and their fabrication can be incorporated into the IC fabrication process. In order to realize applications for surface micromachined electro-microfluidic devices, a practical method for getting fluid into these devices must be developed. The Electro-Microfluidic Dual In-line Package (EMDIP™) was developed to be a standard solution that allows for both the electrical and the fluidic connections needed to operate a great variety of electro-microfluidic devices. The EMDIP™ includes a fan-out manifold that, on one side, mates directly with the 200-micron-diameter Bosch-etched holes found on the device and, on the other side, mates to larger 1 mm diameter holes. To minimize cost, the EMDIP™ can be injection molded in a great variety of thermoplastics, which also serve to optimize fluid compatibility. The EMDIP™ plugs directly into a fluidic printed wiring board using a standard dual in-line package pattern for the electrical connections and a grid of multiple 1 mm diameter fluidic connections that mate to the underside of the EMDIP™.
The information form of the Kalman filter is used as a device for implementing an optimal, linear, decentralized algorithm on a decentralized topology. A systems approach utilizing design tradeoffs is required to successfully implement an effective data fusion network with minimal communication. Combining decentralized estimation results from the past four decades with practical aspects of nodal network implementation, the final product provides an important benchmark for functionally decentralized system designs.
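A minimal sketch of why the information form suits decentralized fusion: each node's measurement enters the filter as an additive information increment, so contributions from separate nodes can simply be summed. All numbers, matrices, and function names below are illustrative assumptions.

```python
import numpy as np

def info_update(Y, y, H, R, z):
    """Measurement update in information form (Y = P^-1, y = P^-1 x):
    each node adds its local contribution H'R^-1H and H'R^-1z, so fusing
    several nodes is just summation rather than a coupled gain computation."""
    Rinv = np.linalg.inv(R)
    return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

# two nodes, each observing one component of a 2-state system (toy numbers)
Y = np.eye(2) * 0.1                  # prior information matrix
y = np.zeros(2)                      # prior information state
H1, H2 = np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])
R = np.array([[0.5]])
Y, y = info_update(Y, y, H1, R, np.array([2.0]))
Y, y = info_update(Y, y, H2, R, np.array([-1.0]))
x_hat = np.linalg.solve(Y, y)        # recover the fused state estimate
```

The order in which the node updates arrive does not matter, which is the property that makes the information form attractive on a decentralized topology.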
Inorganic mesoporous thin films are important for applications such as membranes, sensors, low-dielectric-constant insulators (so-called low-κ dielectrics), and fluidic devices. Over the past five years, several research groups have demonstrated the efficacy of using the evaporation accompanying conventional coating operations such as spin- and dip-coating as an efficient means of driving the self-assembly of homogeneous solutions into highly ordered, oriented, mesostructured films. Understanding such evaporation-induced self-assembly (EISA) processes is of interest for both fundamental and technological reasons. Here, the authors use spatially resolved 2D grazing incidence X-ray scattering in combination with optical interferometry during steady-state dip-coating of surfactant-templated silica thin films to structurally and compositionally characterize the EISA process. They report the evolution of a hexagonal (p6mm) thin-film mesophase from a homogeneous precursor solution and its further structural development during drying and calcination. Monte Carlo simulations of water/ethanol/surfactant bulk phase behavior are used to investigate the role of ethanol in the self-assembly process, and they propose a mechanism to explain the observed dilation in unit cell dimensions during solvent evaporation.
An approach is presented to compute the force on a spherical particle in a rarefied flow of a monatomic gas. This approach relies on the development of a Green's function that describes the force on a spherical particle in a delta-function molecular velocity distribution function. The gas-surface interaction model in this development allows incomplete accommodation of energy and tangential momentum. The force from an arbitrary molecular velocity distribution is calculated by computing the moment of the force Green's function in the same way that other macroscopic variables are determined. Since the molecular velocity distribution function is directly determined in the DSMC method, the force Green's function approach can be implemented straightforwardly in DSMC codes. A similar approach yields the heat transfer to a spherical particle in a rarefied gas flow. The force Green's function is demonstrated by application to two problems. First, the drag force on a spherical particle at arbitrary temperature and moving at arbitrary velocity through an equilibrium motionless gas is found analytically and numerically. Second, the thermophoretic force on a motionless particle in a motionless gas with a heat flux is found analytically and numerically. Good agreement is observed in both situations.
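The moment-taking step can be illustrated with a simplified Green's function for a specularly reflecting sphere in the free-molecular limit; the report's actual Green's function additionally handles incomplete accommodation of energy and tangential momentum, so the version below is an assumption for illustration only.

```python
import numpy as np

def beam_force(c, m, R):
    """Force Green's function G(c): drag exerted on a specularly reflecting
    sphere of radius R by a unit-number-density monoenergetic beam of
    molecules of mass m and velocity c (free-molecular momentum flux
    through the sphere's cross section)."""
    return m * np.pi * R**2 * np.linalg.norm(c) * c

def moment_force(samples, n, m, R):
    """Total force from an arbitrary velocity distribution, taken as the
    moment of the Green's function over velocity samples -- the same
    averaging a DSMC code applies to its particle velocities."""
    return n * np.mean([beam_force(c, m, R) for c in samples], axis=0)
```

In a DSMC implementation the samples are simply the simulator's particle velocities in the cell containing the sphere, which is why the approach drops in with little extra machinery.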
In this report we describe the construction and characterization of a small quantum processor based on trapped ions. This processor could ultimately be used to perform analogue quantum simulations with an engineered, computationally cold bath for increasing the system's robustness to noise. We outline the requirements to build such a simulator, including individual addressing, distinguishable detection, and low crosstalk between operations, and our methods to implement and characterize these requirements. Specifically, for measuring crosstalk we introduce a new method, simultaneous gate set tomography, to characterize crosstalk errors.
Mine detection dogs have a demonstrated capability to locate hidden objects by trace chemical detection. Because of this capability, demining activities frequently employ mine detection dogs to locate individual buried landmines or for area reduction. The conditions appropriate for use of mine detection dogs are only beginning to emerge through diligent research that combines dog selection/training, the environmental conditions that impact landmine signature chemical vapors, and vapor sensing performance capability and reliability. This report seeks to address the fundamental soil-chemical interactions, driven by local weather history, that influence the availability of chemical for trace chemical detection. The processes evaluated include: landmine chemical emissions to the soil, chemical distribution in soils, chemical degradation in soils, and weather and chemical transport in soils. Simulation modeling is presented as a method to evaluate the complex interdependencies among these various processes and to establish conditions appropriate for trace chemical detection. Results from chemical analyses on soil samples obtained adjacent to landmines are presented and demonstrate the ultra-trace nature of these residues. Lastly, initial measurements of the vapor sensing performance of mine detection dogs demonstrate the extreme sensitivity of dogs in sensing landmine signature chemicals; however, reliability at these ultra-trace vapor concentrations still needs to be determined. Through this compilation, additional work is suggested that will fill in data gaps to improve the utility of trace chemical detection.
The purpose of the report is to summarize discussions from a Ceramic/Metal Brazing: From Fundamentals to Applications Workshop that was held at Sandia National Laboratories in Albuquerque, NM on April 4, 2001. Brazing experts and users who bridge common areas of research, design, and manufacturing participated in the exercise. External perspectives on the general state of the science and technology for ceramics and metal brazing were given. Other discussions highlighted and critiqued Sandia's brazing research and engineering programs, including the latest advances in braze modeling and materials characterization. The workshop concluded with a facilitated dialogue that identified critical brazing research needs and opportunities.
This report provides a review of the open literature relating to numerical methods for simulating deep penetration events. The objective of this review is to provide recommendations for future development of the ALEGRA shock physics code to support earth penetrating weapon applications. While this report focuses on coupled Eulerian-Lagrangian methods, a number of complementary methods are also discussed which warrant further investigation. Several recommendations are made for development activities within ALEGRA to support earth penetrating weapon applications in the short, intermediate, and long term.
The quality of low-cost multicrystalline silicon (mc-Si) has improved to the point that it forms approximately 50% of the worldwide photovoltaic (PV) power production. The performance of commercial mc-Si solar cells still lags behind c-Si due in part to the inability to texture it effectively and inexpensively. Surface texturing of mc-Si has been an active field of research. Several techniques including anodic etching [1], wet acidic etching [2], lithographic patterning [3], and mechanical texturing [4] have been investigated with varying degrees of success. To date, a cost-effective technique has not emerged.
This report summarizes the activities of the Computer Science Research Institute at Sandia National Laboratories during the period January 1, 2001 to December 31, 2001.
This study on the opportunities for energy storage technologies determined electric utility application requirements, assessed the suitability of a variety of storage technologies to meet the requirements, and reviewed the compatibility of technologies to satisfy multiple applications in individual installations. The study is called "Opportunities Analysis" because it identified the most promising opportunities for the implementation of energy storage technologies in stationary applications. The study was sponsored by the U.S. DOE Energy Storage Systems Program through Sandia National Laboratories and was performed in coordination with industry experts from utilities, manufacturers, and research organizations. This Phase II report updates the Phase I analysis performed in 1994.
A new concept has been developed which allows direct-to-RF conversion of digitally synthesized waveforms. The concept, named Quadrature Error Corrected Digital Waveform Synthesis (QECDWS), employs quadrature amplitude and phase predistortion of the complex waveform to reduce the undesirable quadrature image. Another undesirable product of QECDWS-based RF conversion is the Local Oscillator (LO) leakage through the quadrature upconverter (mixer). A common technique for reducing this LO leakage is to apply a quadrature bias to the mixer I and Q inputs. This report analyzes this technique through theory, lab measurement, and data analysis for a candidate quadrature mixer for Synthetic Aperture Radar (SAR) applications.
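The bias-and-predistortion idea can be sketched with a complex-envelope model of an imperfect upconverter. All impairment values, and the inversion used for predistortion, are illustrative assumptions rather than the report's measured mixer parameters.

```python
import numpy as np

# Baseband tone and a quadrature upconverter model with gain/phase
# imbalance and LO feedthrough (all impairment values are made up).
fs, f0, n = 1.0e6, 6.25e4, 4096           # f0 lands on an exact FFT bin
t = np.arange(n) / fs
i_bb, q_bb = np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)
g, phi, lo_leak = 1.05, 0.05, 0.02        # gain error, phase error (rad), LO leak

def mixer(i, q):
    # complex-envelope mixer model: an ideal upconverter would output i + 1j*q
    return (i + lo_leak) + 1j * g * (q * np.cos(phi) + i * np.sin(phi))

def predistort(i, q):
    # invert the quadrature imbalance and apply the DC (quadrature bias)
    # that cancels the LO feedthrough before the signals reach the mixer
    ic = i - lo_leak
    qc = (q / g - ic * np.sin(phi)) / np.cos(phi)
    return ic, qc

def tone_power(x, f):
    # power in the FFT bin at frequency f (negative f indexes from the top)
    k = int(round(f * n / fs))
    X = np.fft.fft(x) / n
    return abs(X[k])**2

raw = mixer(i_bb, q_bb)                    # image and LO leak both present
corrected = mixer(*predistort(i_bb, q_bb)) # image and LO leak suppressed
```

In this idealized model the correction is exact; in hardware the impairments must first be estimated, which is the measurement problem the report addresses.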
Biomass feedstocks contain roughly 10-30% lignin, a substance that cannot be converted to fermentable sugars. Hence, most schemes for producing biofuels (ethanol) assume that the lignin coproduct will be utilized as boiler fuel to provide heat and power to the process. However, the chemical structure of lignin suggests that it would make an excellent high-value fuel additive if it can be broken down into smaller molecular units. From fiscal year 1997 through fiscal year 2001, Sandia National Laboratories was a participant in a cooperative effort with the National Renewable Energy Laboratory and the University of Utah to develop and scale a base catalyzed depolymerization (BCD) process for lignin conversion. SNL's primary role in the effort was to utilize rapidly heated batch microreactors to perform kinetic studies, examine the reaction chemistry, and develop alternate catalyst systems for the BCD process. This report summarizes the work performed at Sandia during FY97 and FY98 with alcohol based systems. More recent work with aqueous based systems will be summarized in a second report.
Biomass feedstocks contain roughly 15-30% lignin, a substance that cannot be converted to fermentable sugars. Hence, most schemes for producing biofuels assume that the lignin coproduct will be utilized as boiler fuel. Yet the chemical structure of lignin suggests that it would make an excellent high-value fuel additive if it can be broken down into smaller compounds. From fiscal year 1997 through fiscal year 2001, Sandia National Laboratories participated in a cooperative effort with the National Renewable Energy Laboratory and the University of Utah to develop and scale a base catalyzed depolymerization (BCD) process for lignin conversion. SNL's primary role in the effort was to perform kinetic studies, examine the reaction chemistry, and develop alternate BCD catalyst systems. This report summarizes the work performed at Sandia during fiscal year 1999 through fiscal year 2001 with aqueous systems. Work with alcohol based systems is summarized in part 1 of this report. Our study of lignin depolymerization by aqueous NaOH showed that the primary factor governing the extent of lignin conversion is the NaOH:lignin ratio. NaOH concentration is at best a secondary issue. The maximum lignin conversion is achieved at NaOH:lignin mole ratios of 1.5-2. This is consistent with acidic compounds in the depolymerized lignin neutralizing the base catalyst. The addition of CaO to NaOH improves the reaction kinetics, but not the degree of lignin conversion. The combination of Na2CO3 and CaO offers a cost-saving alternative to NaOH that performs identically to NaOH on a per-Na basis. A process where CaO is regenerated from CaCO3 could offer further advantages, as could recovering the Na as Na2CO3 or NaHCO3 by neutralization of the product solution with CO2.
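The stated mole ratio can be translated into a rough NaOH charge per mass of lignin. Because lignin has no single molar mass, a per-monomer-unit (C9) basis of about 180 g/mol is assumed below purely for illustration; the report does not state the basis on which its ratios were computed.

```python
# Back-of-the-envelope NaOH charge for a given NaOH:lignin mole ratio.
M_NAOH = 40.0       # g/mol
M_C9_UNIT = 180.0   # g/mol, an assumed average lignin monomer (C9) unit mass

def naoh_mass_per_kg_lignin(mole_ratio):
    """Grams of NaOH per kilogram of lignin at the given NaOH:lignin
    mole ratio, counting lignin in assumed C9 monomer units."""
    mol_units = 1000.0 / M_C9_UNIT          # mol of monomer units per kg
    return mole_ratio * mol_units * M_NAOH
```

On this assumed basis, the optimal ratio of 1.5-2 corresponds to roughly 330-440 g of NaOH per kilogram of lignin, which makes concrete why the Na2CO3/CaO alternative offers a meaningful cost saving.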
Model compound studies show that two types of reactions involving methoxy substituents on the aromatic ring occur: methyl group migration between phenolic groups (making and breaking ether bonds) and the loss of methyl/methoxy groups from the aromatic ring (destruction of ether linkages). The migration reactions are significantly faster than the demethylation reactions, but ultimately the demethylation processes predominate.
Islanding, the supply of energy to a disconnected portion of the grid, is a phenomenon that could pose a personnel hazard, interfere with reclosure, or damage hardware. Considerable effort has been expended on the development of IEEE 929, a document that defines unacceptable islanding and a method for evaluating energy sources. The worst expected loads for an islanded inverter are defined in IEEE 929 as being composed of passive resistance, inductance, and capacitance. However, a controversy continues concerning the possibility that a capacitively compensated, single-phase induction motor with a very lightly damped mechanical load having a large rotational inertia would be a significantly more difficult load to shed during an island. This report documents the results of a study showing that such a motor is not a more severe case, but simply a special case of the RLC network.
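For reference, the resonant frequency and quality factor of such a parallel RLC load follow directly from the component values. The component values below are illustrative, chosen to give a load tuned near 60 Hz with a quality factor of about 2.5 (a figure echoing common anti-islanding test practice, not taken from this report).

```python
import math

def parallel_rlc(R, L, C):
    """Resonant frequency (Hz) and quality factor of a parallel RLC load
    of the kind IEEE 929-style anti-islanding tests specify; higher Q
    means a load that is harder for the inverter to shed."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    q = R * math.sqrt(C / L)
    return f0, q

# e.g. an island load tuned near 60 Hz (component values are illustrative)
f0, q = parallel_rlc(R=72.0, L=76.4e-3, C=92.1e-6)
```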
We discuss application of the FETI-DP linear solver within the Salinas finite element application. An overview of Salinas and of the FETI-DP solver is presented. We discuss scalability of the software on ASCI Red, Cplant, and ASCI White. Options for solution of the coarse grid problem that results from the FETI formulation are evaluated. The finite element software and solver are seen to be numerically and CPU scalable on each of these platforms. In addition, the software is very robust and can be used on a large variety of finite element models.
The magnetically excited flexural plate wave (mag-FPW) device has great promise as a versatile sensor platform. FPWs can have better sensitivity at lower operating frequencies than surface acoustic wave (SAW) devices. Lower operating frequency (< 1 MHz for the FPW versus several hundred MHz to a few GHz for the SAW device) simplifies the control electronics and makes integration of the sensor with electronics easier. Magnetic rather than piezoelectric excitation of the FPW greatly simplifies the device structure and processing by eliminating the need for piezoelectric thin films, which also simplifies integration issues. The versatile mag-FPW resonator structure can potentially be configured to fulfill a number of critical functions in an autonomous sensored system. As a physical sensor, the device can be extremely sensitive to temperature, fluid flow, strain, acceleration, and vibration. By coating the membrane with self-assembled monolayers (SAMs) or polymer films with selective absorption properties (originally developed for SAW sensors), the mass sensitivity of the FPW allows it to be used as a biological or chemical sensor. Yet another critical need in autonomous sensor systems is the ability to pump fluid, and FPW structures can be configured as micro-pumps. This report describes work done to develop mag-FPW devices as physical, chemical, and acoustic sensors, and as micro-pumps for both liquid and gas-phase analytes, to enable a new integrated sensing platform.
This report describes the results of the FY01 Level 1 Peer Reviews for the Verification and Validation (V&V) Program at Sandia National Laboratories. V&V peer review at Sandia is intended to assess the ASCI (Accelerated Strategic Computing Initiative) code team V&V planning process and execution. The Level 1 Peer Review process is conducted in accordance with the process defined in SAND2000-3099. V&V Plans are developed in accordance with the guidelines defined in SAND2000-3101. The peer review process and the process for improving the guidelines are necessarily synchronized and form parts of a larger quality improvement process supporting the ASCI V&V program at Sandia. During FY00 a prototype of the process was conducted for two code teams and their V&V Plans, and the process and guidelines were updated based on the prototype. In FY01, Level 1 Peer Reviews were conducted on an additional eleven code teams and their respective V&V Plans. This report summarizes the results from those peer reviews, including recommendations from the panels that conducted the reviews.
The Controlatron Software Suite is a custom-built application that performs automated testing of Controlatron neutron tubes. The software package was designed to allow users to design tests and to run a series of test suites on a tube. The data are output to ASCII files of a pre-defined format for analysis and viewing with the Controlatron Data Viewer Application. This manual discusses the operation of the Controlatron Test Suite Software and includes a brief discussion of state machine theory, as a state machine is the functional basis of the software.
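The state-machine basis mentioned above can be sketched as a small transition table; the state and event names below are hypothetical illustrations, not taken from the Controlatron software.

```python
# Minimal sketch of a test-sequencing state machine. The states and events
# here ("idle", "running", "pass", ...) are invented for illustration.
class TestStateMachine:
    # transition table: (current state, event) -> next state
    TRANSITIONS = {
        ("idle", "start"): "running",
        ("running", "pass"): "logging",
        ("running", "fail"): "aborted",
        ("logging", "done"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def fire(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

sm = TestStateMachine()
sm.fire("start")
sm.fire("pass")   # sm.state is now "logging"
```

Encoding the legal transitions in one table makes illegal test sequences fail loudly, which is the practical appeal of a state-machine design for test software.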
Electrical connectors corrode. Even our best SA and MC connectors, finished with 50 to 100 microinches of gold over 50 to 100 microinches of nickel, corrode. This work started because some, but not all, lots of connectors held in KC stores for a decade had been destroyed by pore corrosion (chemical corrosion). We have identified a MIL-L-87177 lubricant that absolutely stops chemical corrosion on SA connectors, even in the most severe environments. For commercial connectors, which typically have thinner plating, the lubricant not only significantly retards the effects of chemical corrosion but also greatly prolongs fretting life. This report highlights the initial development history and use of the lubricant at Bell Labs and AT&T, and the Battelle studies and USAF experience that led to its deployment to stop dangerous connector corrosion on the F-16. We report the Sandia, HFM&T and Battelle development work, connector qualification, and material compatibility studies that demonstrate its usefulness and safety on JTA and WR systems. We will be applying the MIL-L-87177 connector lubricant to all new connectors that go into KC stores. We recommend that it be applied to connectors on newly built cables and equipment, as well as to material that recycles through manufacturing locations from the field.
This document summarizes research on reactively deposited metal hydride thin films and their properties. Reactive deposition processes are of interest because desired stoichiometric phases are created in a one-step process. In general, this allows for better control of film stress compared with two-step processes that react hydrogen with pre-deposited metal films. Films grown by reactive methods potentially have improved mechanical integrity, performance and aging characteristics. The two reactive deposition techniques described in this report are reactive sputter deposition and reactive deposition involving electron-beam evaporation. Erbium hydride thin films are the main focus of this work. ErHx films are grown by ion-beam sputtering erbium in the presence of hydrogen. Substrates include α-Al2O3 {0001}, α-Al2O3 {1120}, Si {001} having a native oxide, and polycrystalline molybdenum. Scandium dideuteride films are also studied. ScDx is grown by evaporating scandium in the presence of molecular deuterium. Substrates used for scandium deuteride growth include single-crystal sapphire and molybdenum-alumina cermet. Ultra-high vacuum methods are employed in all experiments to ensure the growth of high-purity films, because both erbium and scandium have a strong affinity for oxygen. Film microstructure, phase, composition and stress are evaluated using a number of thin film and surface analytical techniques. In particular, we present evidence for a new erbium hydride phase, cubic erbium trihydride. This phase develops in films having a large in-plane compressive stress, independent of substrate material. Erbium hydride thin films form with a strong <111> out-of-plane texture on all substrate materials. A moderate in-plane texture is also found; this crystallographic alignment forms as a result of the substrate/target geometry and not epitaxy. 
Multi-beam optical stress sensors (MOSS) are used for in-situ analysis of erbium hydride and scandium hydride film stress. These instruments probe the evolution of film stress during all stages of deposition and cooldown. Erbium hydride thin-film stress is investigated for different growth conditions, including temperature and sputter gas, and properties such as the thermal expansion coefficient are measured. The in-situ stress measurement technique is further developed to make it suitable for manufacturing systems. New features added to this technique include the ability to monitor multiple substrates during a single deposition and a rapidly switched, tiltable mirror that accounts for small differences in sample alignment on a platen.
The transboundary nature of water resources demands a transboundary approach to their monitoring and management. However, transboundary water projects raise a challenging set of problems related to communication issues, and standardization of sampling, analysis and data management methods. This manual addresses those challenges and provides the information and guidance needed to perform the Navruz Project, a cooperative, transboundary, river monitoring project involving rivers and institutions in Kazakhstan, Kyrgyzstan, Tajikistan, and Uzbekistan facilitated by Sandia National Laboratories in the U.S. The Navruz Project focuses on waterborne radionuclides and metals because of their importance to public health and nuclear materials proliferation concerns in the region. This manual provides guidelines for participants on sample and data collection, field equipment operations and procedures, sample handling, laboratory analysis, and data management. Also included are descriptions of rivers, sampling sites and parameters on which data are collected. Data obtained in this project are shared among all participating countries and the public through an internet web site, and are available for use in further studies and in regional transboundary water resource management efforts. Overall, the project addresses three main goals: to help increase capabilities in Central Asian nations for sustainable water resources management; to provide a scientific basis for supporting nuclear transparency and non-proliferation in the region; and to help reduce the threat of conflict in Central Asia over water resources, proliferation concerns, or other factors.
A suite of laboratory triaxial compression and triaxial steady-state creep tests provides quasi-static elastic constants and damage criteria for bedded rock salt and dolomite extracted from Cavern Well No. 1 of the Tioga field in northern Pennsylvania. The elastic constants, quasi-static damage criteria, and creep parameters of the host rocks provide information for evaluating a proposed cavern field for gas storage near Tioga, Pennsylvania. The Young's modulus of the dolomite was determined to be 6.4 (±1.0) × 10^6 psi, with a Poisson's ratio of 0.26 (±0.04). The elastic Young's modulus, obtained from the slope of the unloading-reloading portion of the stress-strain plots, was 7.8 (±0.9) × 10^6 psi. The damage criterion of the dolomite based on the peak load was determined to be J2^0.5 (psi) = 3113 + 0.34 I1 (psi), where I1 is the first invariant of the stress tensor and J2 is the second invariant of the deviatoric stress. Using the dilation limit as a threshold level for damage, the damage criterion was conservatively estimated as J2^0.5 (psi) = 2614 + 0.30 I1 (psi). The Young's modulus of the rock salt, which will host the storage cavern, was determined to be 2.4 (±0.65) × 10^6 psi, with a Poisson's ratio of 0.24 (±0.07). The elastic Young's modulus was determined to be 5.0 (±0.46) × 10^6 psi. Unlike the dolomite specimens under triaxial compression, the rock salt specimens did not show shear failure at peak axial load. Instead, most specimens showed distinct dilatancy as an indication of internal damage. Based on the dilation limit, the damage criterion for the rock salt was estimated as J2^0.5 (psi) = 704 + 0.17 I1 (psi). In order to determine the time-dependent deformation of the rock salt, we conducted five triaxial creep tests. 
The creep deformation of the Tioga rock salt was modeled with the three-parameter power law ε_s = 1.2 × 10^-17 σ^4.75 exp(-6161/T), where ε_s is the steady-state strain rate in s^-1, σ is the applied axial stress difference in psi, and T is the temperature in kelvin.
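The power law above is straightforward to evaluate numerically; the stress and temperature values in this sketch are illustrative inputs, not conditions from the report.

```python
import math

# Steady-state creep rate from the three-parameter power law quoted above:
#   eps_s = 1.2e-17 * sigma**4.75 * exp(-6161 / T)
# sigma is the axial stress difference in psi, T is in kelvin, and the
# result is a strain rate in 1/s. The example inputs are illustrative.
def steady_state_creep_rate(sigma_psi, T_kelvin):
    return 1.2e-17 * sigma_psi ** 4.75 * math.exp(-6161.0 / T_kelvin)

rate = steady_state_creep_rate(2000.0, 310.0)  # a small but nonzero rate
```

Note the strong stress exponent (4.75) and Arrhenius temperature dependence: modest increases in either stress difference or temperature raise the creep rate substantially.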
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developer's manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
This report describes Umbra's High Level Architecture (HLA) library. This library serves as an interface to the Defense Modeling and Simulation Office's (DMSO) Run Time Infrastructure Next Generation Version 1.3 (RTI NG1.3) software library and enables Umbra-based models to be federated into HLA environments. The Umbra library was built to enable the modeling of robots for military and security system concept evaluation. A first application provides component technologies that ideally fit the US Army JPSD's Joint Virtual Battlespace (JVB) simulation framework for Objective Force concept analysis. In addition to describing the Umbra HLA library, the report describes general issues of integrating Umbra with RTI code and outlines ways of building models to support particular HLA simulation frameworks like the JVB.
Supervisory Control and Data Acquisition (SCADA) systems are a part of the nation's critical infrastructure that is especially vulnerable to attack or disruption. Sandia National Laboratories is developing a high-security SCADA specification to increase the national security posture of the U.S. Because SCADA security is an international problem and is shaped by foreign and multinational interests, Sandia is working to develop a standards-based solution through committees such as the IEC TC 57 WG 15, the IEEE Substation Committee, and the IEEE P1547-related activity on communications and controls. The accepted standards are anticipated to take the form of a Common Criteria Protection Profile. This report provides the status of work completed and discusses several challenges ahead.
This report details some proof-of-principle experiments we conducted under a small, one-year ($100K) grant from the Strategic Environmental Research and Development Program (SERDP) under the SERDP Exploratory Development (SEED) effort. Our chemiresistor technology had been developed over the last few years for detecting volatile organic compounds (VOCs) in the air, but these sensors had never been used to detect VOCs in water. In this project we tried several different configurations of the chemiresistors to find the best method for water detection. To test the effect of direct immersion of the (non-water-soluble) chemiresistors in contaminated water, we constructed a fixture that allowed liquid water to pass over the chemiresistor polymer without touching the electrical leads used to measure the electrical resistance of the chemiresistor. In subsequent experiments we designed and fabricated probes that protected the chemiresistor and electronics behind GORE-TEX® membranes that allowed the vapor from the VOCs and the water to reach a submerged chemiresistor without allowing the liquids to touch the chemiresistor. We also designed a vapor flow-through system that allowed the headspace vapor from contaminated water to be forced past a dry chemiresistor array. All the methods demonstrated that VOCs in a high enough concentration in water can be detected by chemiresistors, but the last method of vapor-phase exposure to a dry chemiresistor gave the fastest and most repeatable measurements of contamination. Answers to questions posed by SERDP reviewers subsequent to a presentation of this material are contained in the appendix.
The computer vision field has undergone a revolution of sorts in the past five years. Moore's law has driven real-time image processing from the domain of dedicated, expensive hardware to the domain of commercial off-the-shelf computers. This thesis describes the design, analysis and implementation of a Real-Time Shape from Silhouette Sensor (RT S³). The system produces time-varying volumetric data at real-time rates (10-30 Hz), in the form of binary volumetric images. Until recently, using this technique in a real-time system was impractical due to the computational burden. The thesis reviews previous work in the field and derives the mathematics behind volumetric calibration, silhouette extraction, and shape-from-silhouette. The sensor implementation uses four color camera/framegrabber pairs and a single high-end Pentium III computer, with the color cameras configured to observe a common volume. This hardware runs the RT S³ software to track volumetric motion. Two types of shape-from-silhouette algorithms were implemented and their relative performance compared. An application of this sensor to markerless motion tracking is also explored. In his recent review of work done in motion tracking, Gavrila states that results of markerless vision-based 3D tracking are still limited. The method proposed in this thesis not only expands upon previous work but also attempts to overcome these limitations.
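The core idea of shape-from-silhouette can be sketched in a few lines: a voxel belongs to the visual hull only if its projection falls inside every camera's binary silhouette. This toy version uses three axis-aligned orthographic views on a small voxel grid, a stand-in for the calibrated perspective projections a real sensor would use.

```python
# Toy visual-hull (shape-from-silhouette) carving on an n x n x n voxel grid.
# sil_x, sil_y, sil_z are n-by-n lists of booleans: the binary silhouettes
# seen looking along the x, y, and z axes. A voxel (i, j, k) survives only
# if its projection lies inside all three silhouettes.
def visual_hull(n, sil_x, sil_y, sil_z):
    hull = set()
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if sil_z[i][j] and sil_y[i][k] and sil_x[j][k]:
                    hull.add((i, j, k))
    return hull

n = 3
full = [[True] * n for _ in range(n)]       # a fully lit silhouette
holed = [row[:] for row in full]
holed[0][0] = False                          # one background pixel in the z view
hull = visual_hull(n, full, full, holed)     # carves away the (0, 0, k) column
```

A real implementation replaces the triple loop with per-camera projection of each voxel center and handles perspective and calibration, but the intersection logic is the same.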
Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. Dempster-Shafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.
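The evidence-combination step described above is classically done with Dempster's rule: masses on intersecting focal elements are multiplied and pooled, mass falling on the empty set (the conflict) is discarded, and the remainder is renormalized. The sensor masses below are illustrative values, not data from the report.

```python
from itertools import product

# Dempster's rule of combination for two basic probability assignments
# (BPAs), represented as dicts mapping frozenset focal elements to masses.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (b, p), (c, q) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two sources reporting on states {a, b}; masses are illustrative.
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.5, frozenset("ab"): 0.5}
m12 = dempster_combine(m1, m2)  # conflict K = 0.6 * 0.5 = 0.3 is renormalized away
```

The division by 1 - K is exactly the "modeling of conflict" the abstract refers to; alternative rules surveyed in such reports (e.g., Yager's) differ mainly in where they put the conflicting mass.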
Critical infrastructures underpin the domestic security, health, safety and economic well being of the United States. They are large, widely dispersed, mostly privately owned systems operated under a mixture of federal, state and local government departments, laws and regulations. While there currently are enormous pressures to secure all aspects of all critical infrastructures immediately, budget realities limit available options. The purpose of this study is to provide a clear framework for systematically analyzing and prioritizing resources to most effectively secure US critical infrastructures from terrorist threats. It is a scalable framework (based on the interplay of consequences, threats and vulnerabilities) that can be applied at the highest national level, the component level of an individual infrastructure, or anywhere in between. This study also provides a set of key findings and a recommended approach for framework application. In addition, this study develops three laptop computer-based tools to assist with framework implementation-a Risk Assessment Credibility Tool, a Notional Risk Prioritization Tool, and a County Prioritization tool. This study's tools and insights are based on Sandia National Laboratories' many years of experience in risk, consequence, threat and vulnerability assessments, both in defense- and critical infrastructure-related areas.
In October 2000, the personnel responsible for administration of the corporate computers managed by the Scientific Computing Department assembled to reengineer the process of creating and deleting users' computer accounts. Using the Carnegie Mellon Software Engineering Institute (SEI) Capability Maturity Model (CMM) for quality improvement process, the team performed the reengineering by way of process modeling, defining and measuring the maturity of the processes, per SEI and CMM practices. The computers residing in the classified environment are bound by security requirements of the Secure Classified Network (SCN) Security Plan. These security requirements delimited the scope of the project, specifically mandating validation of all user accounts on the central corporate computer systems. System administrators, in addition to their assigned responsibilities, were spending valuable hours performing the additional tacit responsibility of tracking user accountability for user-generated data. For example, in cases where the data originator was no longer an employee, the administrators were forced to spend considerable time and effort determining the appropriate management personnel to assume ownership or disposition of the former owner's data files. In order to prevent this sort of problem from occurring and to have a defined procedure in the event of an anomaly, the computer account management procedure was thoroughly reengineered, as detailed in this document. An automated procedure is now in place that is initiated and supplied data by central corporate processes certifying the integrity, timeliness and authentication of account holders and their management. Automated scripts identify when an account is about to expire, to preempt the problem of data becoming ''orphaned'' without a responsible ''owner'' on the system. 
The automated account-management procedure currently operates on and provides a standard process for all of the computers maintained by the Scientific Computing Department.
Computational techniques for the evaluation of steady plane subsonic flows represented by Chaplygin series in the hodograph plane are presented. These techniques are utilized to examine the properties of the free surface wall jet solution. This solution is a prototype for the shaped charge jet, a problem which is particularly difficult to compute properly using general purpose finite element or finite difference continuum mechanics codes. The shaped charge jet is a classic validation problem for models involving high explosives and material strength. Therefore, the problem studied in this report represents a useful verification problem associated with shaped charge jet modeling.
Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. In a previous report the application of the concept to synthetic aperture radar was investigated as an operator inversion problem. Generally, the operator inversion problem is ill-posed. This work treats the problem from the standpoint of regularization. Both the operator inversion approach and the regularization approach show that the ability to superresolve SAR imagery is severely limited by system noise.
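The ill-posedness and the regularization remedy can be illustrated with a simple per-frequency-bin deconvolution: naive inversion divides by the system response and blows up wherever that response is small, while a Tikhonov-style damping term keeps the inverse bounded. The numbers below are illustrative and not tied to any SAR system model in the report.

```python
# Tikhonov-style regularized inversion, per frequency bin:
#   x_hat = conj(H) * Y / (|H|^2 + lam)
# versus the naive inverse Y / H, which is unstable where |H| is small.
def regularized_inverse(H, Y, lam):
    return [h.conjugate() * y / (abs(h) ** 2 + lam) for h, y in zip(H, Y)]

H = [1.0 + 0j, 0.5 + 0j, 1e-6 + 0j]   # the response nearly vanishes in one bin
Y = [1.0 + 0j, 1.0 + 0j, 1.0 + 0j]
naive = [y / h for h, y in zip(H, Y)]          # ~1e6 in the weak bin: noise amplified
damped = regularized_inverse(H, Y, lam=1e-2)   # stays bounded everywhere
```

The trade-off is the one the abstract points to: the damping that suppresses noise amplification also limits how much resolution beyond the classical limit can actually be recovered.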
A technical review is presented of experiment activities and the state of knowledge on airborne radiation source terms resulting from explosive sabotage attacks on spent reactor fuel subassemblies in shielded casks. Current assumptions about the behavior of irradiated fuel are largely based on a limited number of experimental results involving unirradiated, depleted uranium dioxide "surrogate" fuel. The behavior of irradiated nuclear fuel subjected to explosive conditions could be different from the behavior of the surrogate fuel, depending on the assumptions made by the evaluator. Available data indicate that these potential differences could result in errors, and possibly orders-of-magnitude overestimates, of aerosol dispersion and potential health effects from sabotage attacks. Furthermore, it is suggested that the current assumptions used in arriving at existing regulations for the transportation and storage of spent fuel in the U.S. are overly conservative. This, in turn, has led to potentially higher-than-needed operating expenses for those activities. A confirmatory experimental program is needed to develop a realistic correlation between source terms of irradiated fuel and unirradiated fuel. The motivations for performing the confirmatory experimental program are also presented.
This document describes a proactive plan for assessing and controlling sources of risk for the ASCI (Accelerated Strategic Computing Initiative) V&V program at Sandia National Laboratories. It offers a graded approach for identifying, analyzing, prioritizing, responding to, and monitoring risks.
The goal of this project was to develop a device that uses electric fields to grasp and possibly levitate LIGA parts. This non-contact form of grasping would solve many of the problems associated with grasping parts that are only a few microns in dimensions. Scaling laws show that for parts this size, electrostatic and electromagnetic forces are dominant over gravitational forces. This is why micro-parts often stick to mechanical tweezers. If these forces can be controlled under feedback control, the parts could be levitated, possibly even rotated in air. In this project, we designed, fabricated, and tested several grippers that use electrostatic and electromagnetic fields to grasp and release metal LIGA parts. The eventual use of this tool will be to assemble metal and non-metal LIGA parts into small electromechanical systems.
Photovoltaic inverters are the most mature of the DER inverters, and their mean time to first failure (MTFF) is about five years. This is an unacceptable MTFF and will inhibit the rapid expansion of PV. With all DER technologies (solar, wind, fuel cells, and microturbines), the inverter is still an immature product that will result in reliability problems in fielded systems. The increasing need for all of these technologies to have a reliable inverter provides a unique opportunity to address these needs with focused R&D development projects. The requirements for these inverters are so similar that modular designs with universal features are obviously the best solution for a "next generation" inverter. A "next generation" inverter will have improved performance, higher reliability, and improved profitability. Sandia National Laboratories has estimated that the development of a "next generation" inverter could require approximately 20 man-years of work over an 18- to 24-month time frame, and that a government-industry partnership will greatly improve the chances of success.
The synthesis and characterization of soluble and processable high-molecular-weight polysilsesquioxanes with carboxylate functionalities is discussed. It was found that the tert-butyl functionality in these polymers was eliminated to give carboxylic acid-functionalized polysilsesquioxanes or methyltin carboxylatosilsesquioxane gels. The analysis showed that the polysilsesquioxane binds and removes tin through gelation.
The Boeing Company fabricated the Solar Two receiver as a subcontractor for the Solar Two project. The receiver absorbed sunlight reflected from the heliostat field. A molten-nitrate-salt heat transfer fluid was pumped from a storage tank at grade level, heated from 290°C to 565°C by the receiver mounted on top of a tower, and then flowed back down into another storage tank. To make electricity, the hot salt was pumped through a steam generator to produce steam that powered a conventional Rankine steam turbine/generator. This evaluation identifies the most significant Solar Two receiver system lessons learned in Mechanical Design, Instrumentation and Control, Panel Fabrication, Site Construction, Receiver System Operation, and Management, from the perspective of the receiver designer/manufacturer. The lessons learned on the receiver system described here consist of two parts: the Problem and one or more identified Solutions. The appendix summarizes an inspection of the advanced receiver panel developed by Boeing that was installed and operated in the Solar Two receiver.
Polycrystalline silicon (polysilicon) surface micromachining is a new technology for building micrometer (µm) scale mechanical devices on silicon wafers using techniques and process tools borrowed from the manufacture of integrated circuits. Sandia National Laboratories has invested a significant effort in demonstrating the viability of polysilicon surface micromachining and has developed the Sandia Ultraplanar Micromachining Technology (SUMMiT V™) process, which consists of five structural levels of polysilicon. A major advantage of polysilicon surface micromachining over other micromachining methods is that thousands to millions of thin film mechanical devices can be built on multiple wafers in a single fabrication lot and will operate without post-processing assembly. However, if thin film mechanical or surface properties do not lie within certain tightly set bounds, micromachined devices will fail and yield will be low. This results in high fabrication costs to attain a certain number of working devices. An important factor in determining the yield of devices in this parallel-processing method is the uniformity of these properties across a wafer and from wafer to wafer. No metrology tool exists that can routinely and accurately quantify such properties. Such a tool would enable micromachining process engineers to understand trends and thereby improve yield of micromachined devices. In this LDRD project, we demonstrated the feasibility of and made significant progress towards automatically mapping mechanical and surface properties of thin films across a wafer. The MEMS parametrics measurement team has implemented a subset of this platform, and approximately 30 wafer lots have been characterized. While more remains to be done to achieve routine characterization of all these properties, we have demonstrated the essential technologies. 
These include: (1) well-understood test structures fabricated side-by-side with MEMS devices, (2) well-developed analysis methods, (3) new metrologies (i.e., long working distance interferometry) and (4) a hardware/software platform that integrates (1), (2) and (3). In this report, we summarize the major focus areas of our LDRD project. We describe the contents of several articles that provide the details of our approach. We also describe hardware and software innovations we made to realize a fully automatic wafer prober system for MEMS mechanical and surface property characterization across wafers and from wafer-lot to wafer-lot.
This report documents measurements in inductively driven plasmas containing SF6/argon gas mixtures. The data in this report are presented in a series of appendices with a minimum of interpretation. During the course of this work we investigated: the electron and negative ion densities, using microwave interferometry and laser photodetachment; the optical emission; the plasma species, using mass spectrometry; and the ion energy distributions at the surface of the rf-biased electrode in several configurations. The goal of this work was to assemble a consistent set of data to understand the important chemical mechanisms in SF6-based processing of materials and to validate models of the gas and surface processes.
This report presents general concepts in a broadly applicable methodology for validation of Accelerated Strategic Computing Initiative (ASCI) codes for Defense Programs applications at Sandia National Laboratories. The concepts are defined and analyzed within the context of their relative roles in an experimental validation process. Examples of applying the proposed methodology to three existing experimental validation activities are provided in appendices, using an appraisal technique recommended in this report.
LOCA, the Library of Continuation Algorithms, is a software library for performing stability analysis of large-scale applications. LOCA enables the tracking of solution branches as a function of a system parameter, the direct tracking of bifurcation points, and, when linked with the ARPACK library, a linear stability analysis capability. It is designed to be easy to implement around codes that already use Newton's method to converge to steady-state solutions. The algorithms are chosen to work for large problems, such as those that arise from discretizations of partial differential equations, and to run on distributed memory parallel machines. This manual presents LOCA's continuation and bifurcation analysis algorithms, and instructions on how to implement LOCA with an application code. The LOCA code is being made publicly available at www.cs.sandia.gov/loca.
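The continuation strategy described above can be sketched on a scalar problem: march the parameter in small steps and reuse the previous converged solution as the Newton initial guess at each step. The problem f(x, lam) = x² - lam below is purely illustrative; LOCA itself targets large discretized PDE systems and adds bifurcation tracking and parallel linear algebra.

```python
# Minimal natural-parameter continuation sketch in the spirit of LOCA.
def newton(f, dfdx, x, lam, tol=1e-12, max_iter=50):
    # Standard scalar Newton iteration at a fixed parameter value.
    for _ in range(max_iter):
        r = f(x, lam)
        if abs(r) < tol:
            return x
        x -= r / dfdx(x, lam)
    raise RuntimeError("Newton failed to converge")

def continuation(f, dfdx, x0, lam_start, lam_end, steps):
    branch, x = [], x0
    for i in range(steps + 1):
        lam = lam_start + (lam_end - lam_start) * i / steps
        x = newton(f, dfdx, x, lam)   # previous solution seeds the guess
        branch.append((lam, x))
    return branch

# Track the positive branch x = sqrt(lam) of x**2 - lam = 0 from lam=1 to lam=4.
branch = continuation(lambda x, lam: x * x - lam,
                      lambda x, lam: 2.0 * x,
                      x0=1.0, lam_start=1.0, lam_end=4.0, steps=30)
```

The payoff of seeding each solve with the previous solution is that Newton stays in its quadratic convergence basin along the whole branch, which is what lets continuation codes wrap around existing Newton-based solvers.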
Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns, as fresh water is injected to displace the crude oil from the caverns. The current criterion adopted by the SPR limits cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25-year period to roughly double the volume of a 19-cavern field. Thirteen additional leaches were then simulated until the caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site. This enabled comparison of data collected over nearly 20 years with the analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic wellhead pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large-scale drawdowns being simulated. Should they occur in the future, code predictions should be validated against actual field behavior at that time. The simulations were performed using JAS3D, a three-dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and of cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and, later, on surface structures due to subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.
This report addresses the effects of spectrum loading on lifetime and residual strength of a typical fiberglass laminate configuration used in wind turbine blade construction. Over 1100 tests have been run on laboratory specimens under a variety of load sequences. Repeated block loading at two or more load levels, either tensile-tensile, compressive-compressive, or reversing, as well as more random standard spectra have been studied. Data have been obtained for residual strength at various stages of the lifetime. Several lifetime prediction theories have been applied to the results. The repeated block loading data show lifetimes that are usually shorter than predicted by the most widely used linear damage accumulation theory, Miner's sum. Actual lifetimes are in the range of 10 to 20 percent of predicted lifetime in many cases. Linear and nonlinear residual strength models tend to fit the data better than Miner's sum, with the nonlinear providing a better fit of the two. Direct tests of residual strength at various fractions of the lifetime are consistent with the residual strength models. Load sequencing effects are found to be insignificant. The more a spectrum deviates from constant amplitude, the more sensitive predictions are to the damage law used. The nonlinear model provided improved correlation with test data for a modified standard wind turbine spectrum. When a single, relatively high load cycle was removed, all models provided similar, though somewhat non-conservative correlation with the experimental results. Predictions for the full spectrum, including tensile and compressive loads were slightly non-conservative relative to the experimental data, and accurately captured the trend with varying maximum load. The nonlinear residual strength based prediction with a power law S-N curve extrapolation provided the best fit to the data in most cases. 
The selection of the constant amplitude fatigue regression model becomes important at the lower stress, higher cycle loading cases. The residual strength models may provide a more accurate estimate of blade lifetime than Miner's rule for some loads spectra. They have the added advantage of providing an estimate of current blade strength throughout the service life.
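The two prediction approaches compared above, Miner's linear damage sum and a residual strength model, can each be stated in a few lines. This is a generic sketch with invented cycle counts for illustration, not the report's fitted models:

```python
def miners_damage(blocks):
    """Palmgren-Miner linear damage sum.

    blocks: (n_applied, N_failure) pairs, one per load level, where
    N_failure is the constant-amplitude life at that level.
    Failure is predicted when the sum reaches 1.0."""
    return sum(n / N for n, N in blocks)

def linear_residual_strength(R0, S, n, N):
    """Linear residual strength model: strength degrades linearly from
    the static strength R0 toward the applied maximum stress S as the
    cycle count n approaches the fatigue life N."""
    return R0 - (R0 - S) * (n / N)

# Two-level block loading with invented counts: 1e4 cycles at a level
# with life 1e5, then 5e5 cycles at a level with life 1e6.
damage = miners_damage([(1e4, 1e5), (5e5, 1e6)])  # 0.1 + 0.5 = 0.6
```

The nonlinear variants discussed in the report replace the linear degradation term with a power of n/N; the bookkeeping is otherwise the same.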
The final report for a Laboratory Directed Research and Development project entitled, Molecular Simulation of Reacting Systems is presented. It describes efforts to incorporate chemical reaction events into the LAMMPS massively parallel molecular dynamics code. This was accomplished using a scheme in which several classes of reactions are allowed to occur in a probabilistic fashion at specified times during the MD simulation. Three classes of reaction were implemented: addition, chain transfer and scission. A fully parallel implementation was achieved using a checkerboarding scheme, which avoids conflicts due to reactions occurring on neighboring processors. The observed chemical evolution is independent of the number of processors used. The code was applied to two test applications: irreversible linear polymerization and thermal degradation chemistry.
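The checkerboarding idea, visiting spatial cells in alternating sweeps so that no two neighbors attempt reactions simultaneously, can be illustrated in one dimension. The cell and bond bookkeeping below is hypothetical and invented for illustration; it is not LAMMPS code:

```python
import random

def addition_sweep(cells, p_react, rng):
    """One reaction epoch over a 1-D row of spatial cells. Cells are
    visited in two checkerboard sweeps (even indices, then odd) so no
    two neighboring cells form bonds in the same sweep, the same
    conflict-avoidance idea applied across processors in the report.
    Each cell holds ids of free chain ends; with probability p_react a
    cell bonds two of its ends (an addition reaction)."""
    bonds = []
    for parity in (0, 1):
        for i in range(parity, len(cells), 2):
            if len(cells[i]) >= 2 and rng.random() < p_react:
                a = cells[i].pop()
                b = cells[i].pop()
                bonds.append((a, b))
    return bonds

# With p_react = 1.0, every cell holding two or more free ends reacts once.
cells = [[1, 2], [3], [4, 5, 6]]
bonds = addition_sweep(cells, 1.0, random.Random(0))
```

Because each sweep touches only cells of one parity, the outcome is independent of how the cells are distributed over processors, mirroring the report's observation that chemical evolution does not depend on the processor count.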
Military test and training ranges generate scrap materials from targets and ordnance debris. These materials are routinely removed from the range for recycling; however, energetic material residues in this range scrap have presented a significant safety hazard to operations personnel and have damaged recycling equipment. The Strategic Environmental Research and Development Program (SERDP) sought proof-of-concept evaluations for monitoring technologies to identify energetic residues among range scrap. Sandia National Laboratories teamed with Nomadics, Inc. to evaluate the Nomadics FIDO vapor sensor for application to this problem. Laboratory tests determined the vapor-sensing threshold to be 10 to 20 ppt for TNT and 150 to 200 ppt for DNT. Field tests with the FIDO demonstrated the proof of concept that energetic material residues can be identified by vapor sensing in enclosed scrap bins. Items such as low-order detonation debris, demolition block granules, and unused 81-mm mortars were detected quickly and with minimum effort. Conceptual designs for field-screening scrap for energetic material residues include handheld vapor sensing systems, batch scrap sensing systems, continuous conveyor sensing systems, and a hot gas decontamination verification system.
A physics-based understanding of material aging mechanisms helps to increase reliability when predicting the lifetime of mechanical and electrical components. This report examines in detail the mechanisms of atmospheric copper sulfidation and evaluates new methods of parallel experimentation for high-throughput corrosion analysis. Often our knowledge of aging mechanisms is limited because coupled chemical reactions and physical processes are involved that depend on complex interactions with the environment and component functionality. Atmospheric corrosion is one of the most complex aging phenomena and it has profound consequences for the nation's economy and safety. Therefore, copper sulfidation was used as a test-case to examine the utility of parallel experimentation. Through the use of parallel and conventional experimentation, we measured: (1) the sulfidation rate as a function of humidity, light, temperature and O{sub 2} concentration; (2) the primary moving species in solid state transport; (3) the diffusivity of Cu vacancies through Cu{sub 2}S; (4) the sulfidation activation energies as a function of relative humidity (RH); (5) the sulfidation induction times at low humidities; and (6) the effect of light on the sulfidation rate. Also, the importance of various sulfidation mechanisms was determined as a function of RH and sulfide thickness. Different models for sulfidation-reactor geometries and the sulfidation reaction process are presented.
Laser beam welding is the principal welding process for the joining of Sandia weapon components because it can provide a small fusion zone with low overall heating. Improved process robustness is desired since laser energy absorption is extremely sensitive to joint variation and filler metal is seldom added. This project investigated the experimental and theoretical advantages of combining a fiber-optic-delivered Nd:YAG laser with a miniaturized GMAW system. Consistent gas metal arc droplet transfer employing a 0.25 mm diameter wire was obtained only at high currents in the spray transfer mode. Excessive heating of the workpiece in this mode was considered an impractical result for most Sandia micro-welding applications. Several additional droplet detachment approaches were investigated and analyzed, including pulsed tungsten arc transfer (droplet welding), servo accelerated transfer, servo dip transfer, and electromechanically braked transfer. Experimental observations and rigorous analysis of these approaches indicate that decoupling droplet detachment from the arc melting process is warranted and may someday be practical.
The Solar Thermal Program at Sandia supports work developing dish/Stirling systems to convert solar energy into electricity. Heat pipe technology is ideal for transferring the energy of concentrated sunlight from the parabolic dish concentrators to the Stirling engine heater tubes. Heat pipes can absorb the solar energy at non-uniform flux distributions and release this energy to the Stirling engine heater tubes at a very uniform flux distribution, thus decoupling the design of the engine heater head from the solar absorber. The most important part of a heat pipe is the wick, which transports the sodium over the heated surface area. Bench-scale heat pipes were designed and built to test different wicks and cleaning procedures more economically, in both time and money. This report covers the building, testing, and post-test analysis of the sixth in a series of bench-scale heat pipes. Durability heat pipe No. 6 was built and tested to determine the effects of a high temperature bakeout, 950 C, on wick corrosion during long-term operation. Previous tests showed high levels of corrosion with low temperature bakeouts (650-700 C). Durability heat pipe No. 5 had a high temperature bakeout and reflux cleaning and showed low levels of wick corrosion after long-term operation. After testing durability heat pipe No. 6 for 5,003 hours at an operating temperature of 750 C, it showed low levels of wick corrosion. This test shows that a high temperature bakeout alone will significantly reduce wick corrosion without the need for costly and time-consuming reflux cleaning.
This report presents the major findings of the Montana State University Composite Materials Fatigue Program from 1997 to 2001, and is intended to be used in conjunction with the DOE/MSU Composite Materials Fatigue Database. Additions of greatest interest to the database in this time period include environmental and time under load effects for various resin systems; large tow carbon fiber laminates and glass/carbon hybrids; new reinforcement architectures varying from large strands to prepreg with well-dispersed fibers; spectrum loading and cumulative damage laws; giga-cycle testing of strands; tough resins for improved structural integrity; static and fatigue data for interply delamination; and design knockdown factors due to flaws and structural details as well as time under load and environmental conditions. The origins of a transition to increased tensile fatigue sensitivity with increasing fiber content are explored in detail for typical stranded reinforcing fabrics. The second focus of the report is on structural details which are prone to delamination failure, including ply terminations, skin-stiffener intersections, and sandwich panel terminations. Finite element based methodologies for predicting delamination initiation and growth in structural details are developed and validated, and simplified design recommendations are presented.
This report describes research and development of the large eddy simulation (LES) turbulence modeling approach conducted as part of Sandia's laboratory directed research and development (LDRD) program. The emphasis of the work described here has been on developing the capability to perform accurate and computationally affordable LES calculations of engineering problems using unstructured-grid codes, in wall-bounded geometries, and for problems with coupled physics. Specific contributions documented here include (1) the implementation and testing of LES models in Sandia codes, including tests of a new conserved scalar--laminar flamelet SGS combustion model that does not assume statistical independence between the mixture fraction and the scalar dissipation rate, (2) the development and testing of statistical analysis and visualization utility software for Exodus II unstructured-grid LES, and (3) the development and testing of a novel LES near-wall subgrid model based on the One-Dimensional Turbulence (ODT) model.
This report summarizes progress from the Laboratory Directed Research and Development (LDRD) program during fiscal year 2001. In addition to a programmatic and financial overview, the report includes progress reports from 295 individual R&D projects in 14 categories.
The charge of this study was to identify connections between technology needs for countering terrorism and underlying science issues, and to recommend investment strategies to increase the impact of basic research on efforts to counter terrorism.
We have used a nonionic inverse micelle synthesis technique to form nanoclusters of platinum and palladium. These nanoclusters can be rendered hydrophobic or hydrophilic by the appropriate choice of capping ligand. Unlike Au nanoclusters, Pt nanoclusters show great stability with thiol ligands in aqueous media. Alkane thiols, with alkane chains ranging from C6 to C18, were used as hydrophobic ligands, and with some of these we were able to form two-dimensional and/or three-dimensional superlattices of Pt nanoclusters as small as 2.7 nm in diameter. Image processing techniques were developed to reliably extract from transmission electron micrographs (TEMs) the particle size distribution, and information about the superlattice domains and their boundaries. The latter permits us to compute the intradomain vector pair correlation function of the particle centers, from which we can accurately determine the lattice spacing and the coherent domain size. From these data the gap between the particles in the coherent domains can be determined as a function of the thiol chain length. It is found that as the thiol chain length increases, the interparticle gaps increase more slowly than the measured hydrodynamic radius of the functionalized nanoclusters in solution, possibly indicating thiol chain interdigitation in the superlattices.
Laser safety evaluation and output emission measurements were performed (during October and November 2001) on SNL MILES and Mini MILES laser emitting components. The purpose was to verify that these components not only meet the Class 1 (eye safe) laser hazard criteria of the CDRH Compliance Guide for Laser Products and the 21 CFR 1040 Laser Product Performance Standard, but also meet the more stringent ANSI Std. Z136.1-2000 Safe Use of Lasers conditions for Class 1 lasers that govern SNL laser operations. The results of these measurements confirmed that all of the Small Arms Laser Transmitters, as currently set (''as is''), meet the Class 1 criteria; several of the Mini MILES Small Arms Transmitters did not. Those units were modified and re-tested and now meet the Class 1 laser hazard criteria. All but one of the System Controllers (hand held and rifle stock) met Class 1 criteria for single trigger pulls, and all presented Class 3a laser hazard levels if the trigger is held (continuous emission) for more than 5 seconds on a single-point target. All units were Class 3a for ''aided'' viewing. These units were modified and re-tested and now meet the Class 1 hazard criteria for both ''aided'' and ''unaided'' viewing. All the Claymore Mine laser emitters tested are laser hazard Class 1 for both ''aided'' and ''unaided'' viewing.
The use of biometrics for the identification of individuals is becoming more prevalent in society and in the general government community. As the demand for these devices increases, it becomes necessary for the user community to have the facts needed to determine which device is the most appropriate for any given application. One such application is the use of biometric devices in areas where an individual may not be able to present a biometric feature that requires contact with the identifier (e.g., when dressed in anti-contamination suits or when wearing a respirator). This paper discusses a performance evaluation conducted on the IrisScan2200 from Iridian Technologies to determine if it could be used in such a role.
The oxidation behavior of nickel-matrix/aluminum-particle composite coatings was studied using thermogravimetric (TG) analysis and long-term furnace exposure in air at 1000°C. The coatings were applied by the composite-electrodeposition technique and vacuum heat treated for 3 hr at 825°C prior to oxidation testing. The heat-treated coatings consisted of a two-phase mixture of γ (Ni) + γ′(Ni3Al). During short-term exposure at 1000°C, a thin α-Al2O3 layer developed below a matrix of spinel NiAl2O4, with θ-Al2O3 needles at the outer oxide surface. After 100 hr of oxidation, remnants of θ-Al2O3 are present with spinel at the surface and an inner layer of θ-Al2O3. After 1000-2000 hr, a relatively thick layer of α-Al2O3 is found below a thin, outer spinel layer. Oxidation kinetics are controlled by the slow growth of the inner Al2O3 layer at short-term and intermediate exposures. At long times, an increase in mass gain is found due to oxidation at the coating-substrate interface and enhanced scale formation possibly in areas of reduced Al content. Ternary Si additions to Ni-Al composite coatings were found to have little effect on oxidation performance. Comparison of coatings with bulk Ni-Al alloys showed that low Al γ-alloys exhibit a healing Al2O3 layer after transient Ni-rich oxide growth. Higher Al alloys display Al2O3-controlled kinetics with low mass gain during TG analysis.
The theory, numerical algorithm, and user documentation are provided for a new ''Centroidal Voronoi Tessellation (CVT)'' method of filling a region of space (2D or 3D) with particles at any desired particle density. ''Clumping'' is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called ''mesh-free'' method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported.
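The statistical route to a CVT mentioned above, which avoids constructing explicit Voronoi diagrams, can be sketched as a probabilistic Lloyd iteration: scatter random sample points, bin each to its nearest particle, and move each particle to the mean of its bin. The following is a 2-D unit-square toy version of that idea, not the DIATOM-aware production code:

```python
import random

def cvt_points(n_particles, n_samples=2000, iters=20, seed=0):
    """Probabilistic (MacQueen/Lloyd-style) iteration driving n_particles
    in the unit square toward a centroidal Voronoi configuration. Random
    samples are assigned to their nearest particle, and each particle
    moves to the centroid of its assigned samples, so no Voronoi diagram
    is ever built explicitly."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n_particles)]
    for _ in range(iters):
        acc = [[0.0, 0.0, 0] for _ in pts]  # x-sum, y-sum, sample count
        for _ in range(n_samples):
            sx, sy = rng.random(), rng.random()
            k = min(range(len(pts)),
                    key=lambda i: (pts[i][0] - sx) ** 2
                                + (pts[i][1] - sy) ** 2)
            acc[k][0] += sx
            acc[k][1] += sy
            acc[k][2] += 1
        pts = [(ax / c, ay / c) if c else p
               for (ax, ay, c), p in zip(acc, pts)]
    return pts
```

Sampling from a non-uniform density in place of `rng.random()` yields the variable particle densities the report describes; boundary resolution and the rotational-moment output require additional machinery beyond this sketch.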
This report describes the results of a Laboratory-Directed Research and Development project on techniques for pattern discovery in discrete event time series data. In this project, we explored two different aspects of the pattern matching/discovery problem. The first aspect studied was the use of Dynamic Time Warping for pattern matching in continuous data. In essence, DTW is a technique for aligning time series along the time axis to optimize the similarity measure. The second aspect studied was techniques for discovering patterns in discrete event data. We developed a pattern discovery tool based on adaptations of the A-priori and GSP (Generalized Sequential Pattern mining) algorithms. We then used the tool on three different application areas--unattended monitoring system data from a storage magazine, computer network intrusion detection, and analysis of robot training data.
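The DTW alignment described above is a classic dynamic program over the two time axes. A minimal version for 1-D sequences follows, using the absolute difference as the local cost; the tool described in the report may use a different cost function or windowing:

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences via the
    classic O(len(a) * len(b)) dynamic program. The warping aligns the
    sequences along the time axis so similar shapes match even when
    locally stretched or compressed."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # step both
    return D[n][m]
```

For example, `[1, 2, 3]` and `[1, 2, 2, 3]` have DTW distance zero because the repeated `2` is absorbed by the warping, whereas a plain Euclidean comparison would penalize the length mismatch.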
The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) the Comprehensive Test Ban Treaty and (2) the Stockpile Life Extension Program, which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve confidence in computational results by demonstrating and documenting the predictive capability of electrical circuit codes and the underlying conceptual, mathematical, and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan.
FAILPROB is a computer program that applies the Weibull statistics characteristic of brittle failure of a material, along with the stress field resulting from a finite element analysis, to determine the probability of failure of a component. FAILPROB uses the statistical techniques for fast fracture prediction (but not the coding) from the NASA CARES/Life ceramic reliability package. FAILPROB provides the analyst at Sandia with a more convenient tool than CARES/Life because it is designed to behave in the tradition of structural analysis post-processing software such as ALGEBRA, in which the standard finite element database format EXODUS II is both read and written. This maintains compatibility with the entire SEACAS suite of post-processing software. A new technique to deal with the high local stresses computed for structures with singularities, such as glass-to-metal seals and ceramic-to-metal braze joints, is proposed and implemented. This technique provides failure probability computation that is insensitive to the finite element mesh employed in the underlying stress analysis. Included in this report are a brief discussion of the computational algorithms employed, user instructions, and example problems that both demonstrate the operation of FAILPROB and provide a starting point for verification and validation.
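The weakest-link Weibull calculation at the heart of such a tool can be sketched directly: each element's volume and tensile stress contribute to a risk-of-rupture sum, from which the failure probability follows. This is a generic two-parameter sketch; FAILPROB's actual treatment, including the mesh-insensitive handling of singularities, is more involved:

```python
import math

def weibull_failure_probability(elements, sigma0, m):
    """Weakest-link (two-parameter Weibull) failure probability.

    elements: (volume, max_principal_stress) pairs from a finite
              element stress solution.
    sigma0:   Weibull scale parameter (characteristic strength
              referenced to a unit volume).
    m:        Weibull modulus.
    Only tensile stresses contribute to the risk of rupture; elements
    in compression are skipped."""
    risk = sum(V * (s / sigma0) ** m for V, s in elements if s > 0.0)
    return 1.0 - math.exp(-risk)
```

A single unit-volume element stressed exactly at sigma0 gives the textbook value 1 - exp(-1), about 63 percent, which is a quick sanity check for any implementation of this form.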
This report documents the strategies for verification and validation of the codes LSP and ICARUS used for simulating the operation of the neutron tubes used in all modern nuclear weapons. The codes will be used to assist in the design of next generation neutron generators and help resolve manufacturing issues for current and future production of neutron devices. Customers for the software are identified, tube phenomena are identified and ranked, software quality strategies are given, and the validation plan is set forth.
The theory and algorithm for the Material Point Method (MPM) are documented, with a detailed discussion on the treatments of boundary conditions and shock wave problems. A step-by-step solution scheme is written based on direct inspection of the two-dimensional MPM code currently used at the University of Missouri-Columbia (which is, in turn, a legacy of the University of New Mexico code). To test the completeness of the solution scheme and to demonstrate certain features of the MPM, a one-dimensional MPM code is programmed to solve one-dimensional wave and impact problems, with both linear elasticity and elastoplasticity models. The advantages and disadvantages of the MPM are investigated as compared with competing mesh-free methods. Based on the current work, future research directions are discussed to better simulate complex physical problems such as impact/contact, localization, crack propagation, penetration, perforation, fragmentation, and interactions among different material phases. In particular, the potential use of a boundary layer to enforce the traction boundary conditions is discussed within the framework of the MPM.
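The MPM update cycle documented in the report (project particle state to a background grid, advance grid momenta, then map velocities, positions, and the constitutive update back to the particles) can be condensed to a single explicit 1-D step. The following is a linear-elastic sketch with hat shape functions on a uniform grid, not the Missouri/New Mexico code itself:

```python
def mpm_step_1d(xp, vp, mp, vol, sig, E, dx, nn, dt):
    """One explicit timestep of a minimal 1-D Material Point Method for
    linear elasticity. Grid nodes sit at i*dx for i = 0..nn-1 with
    linear hat shape functions; particles are assumed to stay strictly
    inside the grid and no boundary conditions are applied."""
    mg = [0.0] * nn  # grid mass
    pg = [0.0] * nn  # grid momentum
    fg = [0.0] * nn  # grid internal force

    def shape(x):
        # The two nodes supporting x, with weights N and gradients dN/dx.
        i = int(x // dx)
        a = x / dx - i
        return [(i, 1.0 - a, -1.0 / dx), (i + 1, a, 1.0 / dx)]

    # 1. Project particle mass, momentum, and internal force to the grid.
    for p in range(len(xp)):
        for i, N, dN in shape(xp[p]):
            mg[i] += N * mp[p]
            pg[i] += N * mp[p] * vp[p]
            fg[i] -= vol[p] * sig[p] * dN
    # 2. Advance grid momenta and recover nodal velocities.
    vg = [(pg[i] + dt * fg[i]) / mg[i] if mg[i] > 0.0 else 0.0
          for i in range(nn)]
    # 3. Map back to particles: velocity and position from the grid, and
    #    a stress update from the interpolated velocity gradient.
    for p in range(len(xp)):
        dv = dxp = grad_v = 0.0
        for i, N, dN in shape(xp[p]):
            if mg[i] > 0.0:
                dv += N * dt * fg[i] / mg[i]
                dxp += N * dt * vg[i]
                grad_v += dN * vg[i]
        vp[p] += dv
        xp[p] += dxp
        sig[p] += E * grad_v * dt
```

A quick sanity check on any such implementation: a stress-free body translating at uniform velocity must advect rigidly, with no spurious velocity change or stress generation.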
Nuclear energy has been proposed as a heat source for producing hydrogen from water using a sulfur-iodine thermochemical cycle. This document presents an assessment of the suitability of various reactor types for this application. The basic requirement for the reactor is the delivery of 900 C heat to a process interface heat exchanger. Ideally, the reactor heat source should not in itself present any significant design, safety, operational, or economic issues. This study found that Pressurized and Boiling Water Reactors, Organic-Cooled Reactors, and Gas-Core Reactors were unsuitable for the intended application. Although Alkali Metal-Cooled and Liquid-Core Reactors are possible candidates, they present significant development risks for the required conditions. Heavy Metal-Cooled Reactors and Molten Salt-Cooled Reactors have the potential to meet requirements; however, the cost and time required for their development may be appreciable. Gas-Cooled Reactors (GCRs) have been successfully operated in the required 900 C coolant temperature range, and do not present any obvious design, safety, operational, or economic issues. Altogether, the GCR approach appears to be very well suited as a heat source for the intended application, and no major development work is identified. This study recommends using the Gas-Cooled Reactor as the baseline reactor concept for a sulfur-iodine cycle for hydrogen generation.