A Synthetic Aperture Radar (SAR) which employs direct IF sampling can significantly reduce the complexity of the analog electronics prior to the analog-to-digital converter (ADC). For relatively high frequency IF bands, a wide-bandwidth track-and-hold amplifier (THA) is required prior to the ADC. The THA functions primarily as a means of converting, through bandpass sampling, the IF signal to a baseband signal which can be sampled by the ADC. For a wide-band, high dynamic-range receiver system, such as a SAR receiver, stringent performance requirements are placed on the THA. We first measure the THA parameters such as gain, gain compression, third-order intercept (TOI), signal-to-noise ratio (SNR), spurious-free dynamic-range (SFDR), noise figure (NF), and phase noise. The results are then analyzed in terms of their respective impact on the overall performance of the SAR. The specific THA under consideration is the Rockwell Scientific RTH010.
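As an aside for readers new to bandpass sampling, the sketch below enumerates the ADC sample rates that alias a given IF band cleanly into a single Nyquist zone; the band edges and rate limit are illustrative assumptions, not RTH010 or system parameters.

    # Bandpass sampling: the IF band [f_lo, f_hi] must fit inside one
    # Nyquist zone, i.e. for some integer n >= 1:
    #   2*f_hi/n <= fs <= 2*f_lo/(n-1)   (upper bound unbounded for n = 1)
    def valid_sample_rates(f_lo, f_hi, fs_max, n_max=20):
        zones = []
        for n in range(1, n_max + 1):
            lower = 2.0 * f_hi / n
            upper = fs_max if n == 1 else min(2.0 * f_lo / (n - 1), fs_max)
            if lower <= upper:
                zones.append((n, lower, upper))
        return zones

    # Hypothetical example: a 100 MHz-wide IF band centered at 1.25 GHz,
    # with the ADC limited to 1 GS/s.
    for n, lo, hi in valid_sample_rates(1.2e9, 1.3e9, 1.0e9):
        print("zone %d: fs in [%.1f, %.1f] MHz" % (n, lo / 1e6, hi / 1e6))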
This report describes a 3-D fluid mechanics code for predicting flow past bluff bodies whose surfaces can be assumed to be made up of shell elements that are simply connected. Version 1.0 of the VIPAR code (Vortex Inflation PARachute code) is described herein. This version contains several first-order algorithms that we are in the process of replacing with higher-order ones; these enhancements will appear in the next version of VIPAR. The present code contains a motion generator that can be used to produce a large class of rigid-body motions. The present code has also been fully coupled to a structural dynamics code in which the geometry undergoes large time-dependent deformations. Initial surface geometry is generated from triangular shell elements using a code such as Patran and is written into an ExodusII database file for subsequent input into VIPAR. Surface and wake variable information is output into two ExodusII files that can be post-processed and viewed using software such as EnSight™.
The specific problem to be addressed in this work is the secondary combustion that arises from shock-induced mixing in volumetric explosives. It has been recognized that the effects of combustion due to secondary mixing can greatly alter the expansion of gases and the dispersal of high-energy explosive. Furthermore, this enhanced effect may be a tailored feature of new energetic material systems. One approach for studying this problem is based on the use of Large Eddy Simulation (LES) techniques. In this approach, the large turbulent length scales of motion are simulated directly, while the effects of the small scales of turbulent motion are represented using a subgrid scale (SGS) model. The focus of this effort is to develop an SGS model for combustion that is applicable to shock-induced combustion events using probability density function (PDF) approaches. A simplified presumed-PDF combustion model is formulated and implemented in the CTH shock physics code. Two classes of problems are studied using this model. The first is an isolated piece of reactive material burning with the surrounding air. The second is the dispersal of highly reactive material due to a shock-driven explosion event. The results from these studies show the importance of incorporating a secondary combustion modeling capability and the utility of a PDF-based description for simulating these events.
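To make the presumed-PDF idea concrete, a minimal sketch follows; the beta PDF is the common presumed shape for a conserved scalar, but the one-step rate expression and all numerical values here are illustrative assumptions, not the model as implemented in CTH.

    import numpy as np
    from scipy.stats import beta

    # Presumed-PDF closure: the mean reaction rate is the instantaneous
    # rate weighted by an assumed PDF of mixture fraction Z,
    #   <w> = integral_0^1 w(Z) * P(Z; Zmean, Zvar) dZ
    def mean_reaction_rate(w, z_mean, z_var):
        # Beta-PDF parameters consistent with the given mean and variance.
        gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
        a, b = z_mean * gamma, (1.0 - z_mean) * gamma
        z = np.linspace(1e-6, 1.0 - 1e-6, 2001)
        pdf = beta.pdf(z, a, b)
        dz = z[1] - z[0]
        return float(np.sum(w(z) * pdf) * dz)

    # Illustrative one-step rate peaking near an assumed stoichiometric
    # mixture fraction of 0.3.
    w = lambda z: np.exp(-((z - 0.3) / 0.05) ** 2)
    print(mean_reaction_rate(w, z_mean=0.25, z_var=0.01))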
The lack of protection for semiconductor bridges (SCBs) against human electrostatic discharge (ESD) presents an obstacle to widespread use of this device. The goal of this research is to protect SCB initiators against pin-to-pin ESD without affecting their performance. Two techniques were investigated. In the first, a parallel capacitor is used to attenuate high frequencies; the second uses a parallel zener diode to limit the voltage amplitude. Both the 1 µF capacitor and the 14 V zener diode protected the SCBs from ESD, with the capacitor providing the best protection. The protection circuits had no effect on the SCB's threshold voltage. The function time for the CP-loaded SCBs with capacitors was about 11 µs when fired by a firing set charged to 40 V. The SCBs failed to function when protected by the 6 V and 8 V zeners, and the 51 V zener did not provide adequate protection against ESD. The parallel capacitor thus succeeded in protecting SCB initiators against pin-to-pin ESD without affecting their performance. Additional experiments should be done on SCBs and actual detonators to further quantify the effectiveness of this technique, and methods for retrofitting existing SCB initiators and integrating capacitors into future devices should also be explored.
This report describes the PDF Object Linking Extension (POLE) and how it came about. POLE is an extension of an existing DXL script called Outdoors that provides a linking mechanism to files outside of DOORS. Our modifications expand the script's capabilities to link to bookmarks within PDF documents. PDF linking allows for traceability to be maintained between DOORS objects and the requirements within PDF files.
Foliage penetrating (FOPEN) synthetic aperture radar (SAR) systems are capable of producing images of targets concealed under a foliage canopy. The quality and interpretability of these images, however, are generally limited by dense foliage clutter and by fundamental foliage-induced image degradation. Use of a polarimetric SAR to provide multiple polarization channels can mitigate these effects by offering target and scene information beyond that provided by a single-polarization SAR. This paper presents the results of a literature survey to investigate the use of multiple-polarization data in conjunction with FOPEN SAR applications. The effects of foliage propagation on SAR image quality are briefly summarized. Various approaches to multiple-polarization-based FOPEN target detection are described. Although literature concerning FOPEN target recognition is scarce, the use of multiple-polarization data for in-the-clear target recognition is described. The applicability of various target detection and recognition applications for use with concealed target SAR (CTSAR) imagery is considered.
The goal of this LDRD was to engineer further improvements in a novel electron tunneling device, the double electron layer tunneling transistor (DELTT). The DELTT is a three-terminal quantum device that does not require lateral depletion or lateral confinement, but rather is entirely planar in configuration. The DELTT's operation is based on 2D-2D tunneling between two parallel 2D electron layers in a semiconductor double quantum well heterostructure. The only critical dimensions reside in the growth direction, thus taking full advantage of the single-atomic-layer resolution of existing semiconductor growth techniques such as molecular beam epitaxy. Despite these advances, the original DELTT design suffered from a number of performance shortcomings that would need to be overcome for practical applications. These included (i) a peak voltage too low (≈20 mV) to interface with conventional electronics and to be robust against environmental noise, (ii) a low peak current density, (iii) a relatively weak dependence of the peak voltage on applied gate voltage, and (iv) an operating temperature that, while fairly high, remained below room temperature. In this LDRD we designed and demonstrated an advanced resonant tunneling transistor that incorporates structural elements both of the DELTT and of conventional double barrier resonant tunneling diodes (RTDs). Specifically, the device is similar to the DELTT in that it is based on 2D-2D tunneling and is controlled by a surface gate, yet is also similar to the RTD in that it has a double barrier structure and a third collector region. Indeed, the device may be thought of either as an RTD with a gate-controlled, fully 2D emitter or, alternatively, as a "3-layer DELTT", the name we have chosen for the device. This new resonant tunneling transistor retains the original DELTT advantages of a planar geometry and sharp 2D-2D tunneling characteristics, yet also overcomes the performance shortcomings of the original DELTT design. In particular, it exhibits the high peak voltages and current densities associated with conventional RTDs, allows sensitive control of the peak voltage by the control gate, and operates nearly at room temperature. Finally, we note that under this LDRD we also investigated the use of three-layer DELTT structures as long-wavelength (terahertz) detectors using photon-assisted tunneling. We have recently observed a narrowband (resonant) tunable photoresponse in related structures consisting of grating-gated double quantum wells, and we report on that work here as well.
Sandia National Laboratories has developed a Near Real Time Range Safety Analysis Tool named PREDICT that is based upon a probabilistic range safety analysis process. Probabilistic calculations of risk may be used in place of the total containment of potentially hazardous debris during a missile launch operation. Impact probabilities are computed based upon probability density functions, Monte Carlo trajectories of dispersion events, and missile failure scenarios. Impact probabilities are then coupled with current demographics (land populations, commercial and military ship traffic, and aircraft traffic) to produce expected casualty predictions for a particular launch window. Historically, these calculations required days of computer time to finalize. Sandia has developed a process that utilizes the IBM SP machines at the Maui High Performance Computing Center and at the Arctic Region Supercomputing Center to reduce the computation time from days to as little as an hour or two. This analysis tool allows the Missile Flight Safety Officer to make launch decisions based on the latest information (winds, ship, and aircraft movements) using an intelligent risk management approach. This report provides a user's manual for PREDICT version 3.3.
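The expected-casualty bookkeeping at the heart of such a tool can be sketched in a few lines; the dispersion and population models below are invented placeholders for illustration, not PREDICT's actual models.

    import random

    # Monte Carlo expected-casualty estimate: sample dispersion/impact
    # points, look up the population density at each, and accumulate
    #   E[c] ~ (1/N) * sum_i rho(x_i, y_i) * A_lethal
    def expected_casualties(sample_impact, pop_density, lethal_area_m2,
                            n=100000):
        total = 0.0
        for _ in range(n):
            x, y = sample_impact()              # one sampled impact (m)
            total += pop_density(x, y) * lethal_area_m2
        return total / n

    # Placeholder models: Gaussian impact dispersion and a populated
    # region east of x = 1 km.
    sample_impact = lambda: (random.gauss(0.0, 2000.0),
                             random.gauss(0.0, 3000.0))
    pop_density = lambda x, y: 1e-4 if x > 1000.0 else 0.0  # people/m^2
    print(expected_casualties(sample_impact, pop_density,
                              lethal_area_m2=10.0))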
The highly leveraged, asymmetric attacks of September 11th have launched the nation on a vast "War on Terrorism". Now that our vulnerabilities and the enemies' objectives and determination have been demonstrated, we find ourselves rapidly immersed in a huge, complex problem that is virtually devoid of true understanding while being swamped with resources and proposed technologies for solutions. How do we win this war? How do we make sure that we are making the proper investments? What things or freedoms or rights do we have to give up to win? Where do we even start? In analyzing this problem, many similarities to mankind's battle with uncontrolled fire and the threat it presented to society were noted. Major fires throughout history have destroyed whole cities and caused massive loss of life and property. Solutions were devised that have gradually, over several hundred years, reduced this threat to a level that allows us to co-exist with the threat of fire by applying constant vigilance and investments in fire protection, but without living in constant fear and dread of fire. We have created a multi-pronged approach to fire protection that involves both government and individuals in the prevention, mitigation, and response to fires. Fire protection has become a virtually unnoticed constant in our daily lives; we will have to do the same for terrorism. This paper discusses the history of fire protection and draws analogies to our War on Terrorism. We have, as a society, tackled and successfully conquered a problem as big as terrorism; from this battle, we can learn and take comfort.
Arsenic removal technologies that are effective at the tens-of-ppb level include coagulation followed by settling/microfiltration, ion exchange by mineral surfaces, and pressure-driven membrane processes (reverse osmosis, nanofiltration, and ultrafiltration). This report describes the fundamental mechanisms of operation of the arsenic removal systems and addresses the critical issues of arsenic speciation, the effects of source water quality on the performance of the arsenic removal systems, and the costs associated with the different treatment technology categories.
This report describes the results of research and development in the area of communication among disparate species of software agents. The two primary elements of the work are the formation of ontologies for use by software agents and the means by which software agents are instructed to carry out complex tasks that require interaction with other agents. This work was grounded in the areas of commercial transport and cybersecurity.
The development and testing of a new technique for blending electrolyte-binder (separator) mixes for use in thermal batteries is described. The original method of blending such materials at Sandia used liquid Freon TF as a medium. The ban on the use of halogenated solvents throughout much of the Department of Energy complex required the development of an alternative liquid medium as a replacement. The use of liquid nitrogen (LN) was explored and developed into a viable quality process. For comparison, a limited number of dry-blending tests were also conducted using a Turbula mixer. The characterization of pellets made from LN-blended separators involved deformation properties at 530 C and electrolyte-leakage behavior at 400 or 500 C, as well as performance in single cells and five-cell batteries under several loads. Stack-relaxation tests were also conducted using 10-cell batteries. One objective of this work was to determine whether correlations could be obtained between the mechanical properties of the separators and the performance in single cells and batteries. Separators made using three different electrolytes were examined in this study: the LiCl-KCl eutectic, the all-Li LiCl-LiBr-LiF electrolyte, and the low-melting LiBr-KBr-LiF eutectic. The electrochemical performance of separator pellets made with LN-blended materials was compared to that of pellets made with Freon TF and, in some cases, those that were dry blended. A satisfactory substitute MgO (Marinco 'OL', now manufactured by Morton) was qualified as a replacement for the standard Maglite 'S' MgO that has been used for years but is no longer commercially available. The separator compositions with the new MgO were optimized and included in the blending and electrochemical characterization tests.
Many governmental and corporate organizations are interested in tracking materials and/or information through a network. Often, as in the case of the U.S. Customs Service, the traffic is recorded as transactions through a large number of checkpoints with a correspondingly complex network. These networks will contain large numbers of uninteresting transactions that act as noise to conceal the chains of transactions of interest, such as drug trafficking. We are interested in finding significant paths in transaction data containing high noise levels, which tend to make traditional graph visualization methods complex and hard to understand. This paper covers the evolution of a series of graphing methods designed to assist in this search for paths, from 1-D to 2-D to 3-D and beyond.
Endospores of the bacterium Bacillus subtilis have been shown to exhibit a synergistic rate of cell death when treated with particular levels of heat and ionizing radiation in combination. This synergism has been documented for a number of different organisms at various temperatures and radiation doses (Sivinski, H.D., D.M. Garst, M.C. Reynolds, C.A. Trauth, Jr., R.E. Trujillo, and W.J. Whitfield, "The Synergistic Inactivation of Biological Systems by Thermoradiation," Industrial Sterilization, International Symposium, Amsterdam, 1972, Duke University Press, Durham, NC, pp. 305-335). However, the mechanism of the synergistic action is unknown. This study attempted to determine whether the mechanism of synergism was specifically connected to DNA strand breakage, either single-strand breakage or double-strand breakage. Some work was also done to examine the effect of free radicals and ions created in the spore body by the radiation treatments, as well as to determine the functionality of repair enzymes following heat, radiation, and thermoradiation treatments. Bacillus subtilis spores were treated with combinations of radiation at 33 or 15 kr/hr and heat at 105 C, 85 C, 63 C, or 50 C. Some synergistic correlation was found with the number of double-strand breaks, and a strong correlation was found with the number of single-strand breaks. In cases displaying synergism of spore killing, single-strand breakage while the DNA was in a denatured state is suspected as a likely mechanism. DNA was damaged more by irradiation in the naked state than when encased within the spore, indicating that the spore encasement provides an overall protective effect against radiation damage in spite of free radicals and ions which may be created from molecules other than the DNA molecule within the spore body. Repair enzymes were found to be functional following treatments by radiation only, heat only, and thermoradiation.
Photovoltaics is the utility-connected distributed energy resource (DER) in widespread use today. It has one element, the inverter, in common with all DER sources except rotating generators. The inverter is required to convert dc energy to ac energy. For all the DER technologies (solar, wind, fuel cells, and microturbines), the inverter is still an immature product that will result in reliability problems in fielded systems. Today, the PV inverter is a costly and complex component of PV systems that produce ac power. Inverter MTFF (mean time to first failure) is currently unacceptable. Low inverter reliability contributes to unreliable fielded systems and a loss of confidence in renewable technology. The low volume of PV inverters produced restricts manufacturing to small suppliers without sophisticated research and reliability programs or manufacturing methods. Thus, the present approach to PV inverter supply has a low probability of meeting DOE reliability goals. DOE investments in power electronics are intended to address the reliability and cost of power electronics. This report details the progress of power electronics, identifies technologies that are in current use, and explores new approaches that can provide significant improvements in inverter reliability while leading to lower cost. A key element of improved inverter design is the systems approach to design. This approach includes a list of requirements for the product being designed, and a preliminary requirements document is part of this report. Finally, the design will be for a universal inverter that can be applied to several technologies. The objective of a universal inverter is to increase the quantity being manufactured so that mass-manufacturing techniques can be applied. The report includes the requirements and recommended design approaches for a new inverter with a ten-year mean time to first failure (MTFF) and with lower cost. This development will constitute a "leap forward" in capability that leverages emerging technologies and best manufacturing processes to produce a new, high-reliability inverter. The targeted inverter size is from two to ten kilowatts. The report is organized into four sections. A brief introduction by Sandia is followed by Section Two from Millennium Technologies (a company with UPS experience); Section Three is provided by Xantrex (a PV manufacturing company), and the University of Minnesota provided Section Four. This report is very detailed and provides inverter design information intended for practitioners rather than the layman. It is intended to be a comprehensive documentation of proven technology and the manufacturing skills required to produce a high-reliability inverter. An accompanying report will provide a summary of the recommended approach for inverter development.
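For a sense of what a ten-year MTFF implies at the part level, the sketch below works a series-system failure-rate budget; the part list and FIT values are illustrative assumptions, not figures from this report.

    # Series reliability model: the inverter fails when any part fails,
    # so lambda_total = sum(lambda_i) and MTFF = 1/lambda_total for
    # constant failure rates.
    HOURS_PER_YEAR = 8760.0
    target_mtff_h = 10 * HOURS_PER_YEAR           # 87,600 h
    budget_fit = 1e9 / target_mtff_h              # ~11,400 FIT total

    # Hypothetical part-level budget in FIT (failures per 10^9 h).
    parts = {"power bridge": 4000, "dc bus capacitors": 3000,
             "magnetics": 1000, "controller": 2000, "connectors": 500}
    total_fit = sum(parts.values())
    mtff_years = 1e9 / total_fit / HOURS_PER_YEAR
    print("budget %.0f FIT, design %d FIT -> MTFF %.1f years"
          % (budget_fit, total_fit, mtff_years))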
Since 1983, ground surface elevation data from the US DOE West Hackberry Strategic Petroleum Reserve crude oil storage facility have been routinely collected. The data have been assimilated, analyzed, and presented in terms of absolute elevations, subsidence rates, and estimates of volumetric changes of the storage facility. The information presented impacts operations and maintenance of the facility and provides important constraints on the interpretation of ongoing structural analyses of the facility.
The Strategic Petroleum Reserve site at West Hackberry, Louisiana has historically experienced casing leaks. Numerous West Hackberry oil storage caverns have wells exhibiting communication between the interior 10 3/4 x 20-inch (oil) annulus and the "outer cemented" 20 x 26-inch annulus. Well 108 in Cavern 108 exhibits this behavior. It is thought that one cause of this communication, if not the primary one, is casing thread leaks at the 20-inch casing joints combined with microannuli along the cement-casing interfaces and other cracks/flaws in the cemented 20 x 26-inch annulus. An operation consisting of a series of nitrogen leak tests, similar to cavern integrity tests, was performed on Cavern 108 in an effort to determine the leak horizons and to see if these leak horizons coincided with those of casing joints. Leaky threaded casing joints were identified between 400 and 1500 feet. A new leak detection procedure was developed as a result of this test, and this methodology for identifying and interpreting such casing joint leaks is presented in this report. Analysis of the test data showed that individual joint leaks could be identified, but not without some degree of ambiguity. This ambiguity is attributed to changes in the fluid content of the leak path (nitrogen forcing out oil) and possibly to changes in the characteristics of the flow path during the test. These changes dominated the test response and made the identification of individual leak horizons difficult. One consequence of concern from the testing was a progressive increase in the measured leak rate, due to nitrogen cleaning small amounts of oil out of the leak paths and very likely to changes in the leak path during the flow test. Therefore, careful consideration must be given before attempting similar tests. Although such leaks have caused no known environmental or economic problems to date, they may be significant because of the potential for future problems. To mitigate future problems, some repair scenarios are discussed, including injection of sealants.
This report describes a new microsystems technology for the creation of microsensors and microelectromechanical systems (MEMS) using stress-free amorphous diamond (aD) films. Stress-free aD is a new material that has mechanical properties close to those of crystalline diamond, and the material is particularly promising for the development of high-sensitivity microsensors and rugged, reliable MEMS. Some of the unique properties of aD include the ability to easily tailor film stress from compressive to slightly tensile, hardness and stiffness 80-90% those of crystalline diamond, very high wear resistance, a hydrophobic surface, extreme chemical inertness, chemical compatibility with silicon, controllable electrical conductivity from insulating to conducting, and biocompatibility. A variety of MEMS structures were fabricated from this material and evaluated. These structures included electrostatically actuated comb drives, micro-tensile test structures, singly and doubly clamped beams, and friction and wear test structures. It was found that surface micromachined MEMS could easily be fabricated in this material and that the hydrophobic surface of the film enabled the release of structures without the need for special drying procedures or applied hydrophobic coatings. Measurements using these structures revealed that aD has a Young's modulus of ≈650 GPa, a tensile fracture strength of 8 GPa, and a fracture toughness of 8 MPa·m^(1/2). These results suggest that this material may be suitable in applications where stiction or wear is an issue. Flexural plate wave (FPW) microsensors were also fabricated from aD. These devices use membranes of aD as thin as ≈100 nm. The performance of the aD FPW sensors was evaluated for the detection of volatile organic compounds using ethyl cellulose as the sensor coating. For comparable membrane thicknesses, the aD sensors showed better performance than silicon nitride based sensors. A greater than one order of magnitude increase in chemical sensitivity is expected through the use of ultra-thin aD membranes in the FPW sensor. The discoveries and development of the aD microsystems technology made in this project have led to new research projects in the areas of aD bioMEMS and aD radio frequency MEMS.
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
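For orientation, the steady, monoenergetic form of the transport equation underlying all of these approximations can be written as follows (standard notation, supplied here for context rather than quoted from the report):

\[
\hat{\Omega}\cdot\nabla I(\mathbf{r},\hat{\Omega})
+ \sigma_t(\mathbf{r})\,I(\mathbf{r},\hat{\Omega})
= \int_{4\pi}\sigma_s(\mathbf{r},\hat{\Omega}'\cdot\hat{\Omega})\,
I(\mathbf{r},\hat{\Omega}')\,d\Omega' + q(\mathbf{r},\hat{\Omega}),
\]

where I is the specific intensity, \sigma_t and \sigma_s are the total and scattering coefficients, and q is the emission source; the diffusion, P_N, and S_N approximations differ chiefly in how they treat the angular variable \hat{\Omega}.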
This activity brought two robotic mobile manipulation systems developed by Sandia National Laboratories to the Maneuver Support Center (MANSCEN) at Ft. Leonard Wood for the following purposes: Demonstrate advanced manipulation and control capabilities; Apply manipulation to hazardous activities within MANSCEN mission space; Stimulate thought and identify potential applications for future mobile manipulation applications; and Provide introductory knowledge of manipulation to better understand how to specify capability and write requirements.
This report summarizes the work completed in the MyLink Laboratory Directed Research and Development (LDRD) project. The goal of this project was to investigate the ability of computers to come to understand individuals and to assist them with various aspects of their lives.
The conventional discrete ordinates approximation to the Boltzmann transport equation can be described in a matrix form. Specifically, the within-group scattering integral can be represented by three components: a moment-to-discrete matrix, a scattering cross-section matrix and a discrete-to-moment matrix. Using and extending these entities, we derive and summarize the matrix representations of the second-order transport equations.
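In that notation, the within-group equations take the compact operator form (symbols as assumed here, following the description above):

\[
\mathbf{L}\,\psi = \mathbf{M}\,\boldsymbol{\Sigma}\,\mathbf{D}\,\psi + Q,
\qquad \phi = \mathbf{D}\,\psi,
\]

where \psi collects the angular fluxes at the discrete ordinates, \mathbf{L} is the streaming-plus-collision operator, \mathbf{D} is the discrete-to-moment matrix producing the flux moments \phi, \boldsymbol{\Sigma} is the scattering cross-section matrix acting on those moments, and \mathbf{M} is the moment-to-discrete matrix mapping the scattering source back onto the ordinates.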
This document is the second in a series describing graphical user interface tools developed to control the Visual Empirical Region of Influence (VERI) algorithm. In this paper we describe a user interface designed to optimize the VERI algorithm's results. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for the features that produce the best pattern recognition results. With a small number of features in a data set, an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features, and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. This document illustrates step-by-step examples of how to use the interface and how to interpret the results. It is written in two parts: Part I deals with using the interface to find the best combination from all possible sets of features; Part II describes how to use the tool to find a good solution in data sets with a large number of features. The VERI Optimization Interface Tool was written using the Tcl/Tk Graphical User Interface (GUI) programming language, version 8.1. Although the Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts on developing a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The optimization interface executes the VERI algorithm in Leave-One-Out mode using the Euclidean metric. For a thorough description of the type of data analysis we perform, and for a general pattern recognition tutorial, refer to our website at: http://www.sandia.gov/imrl/XVisionScience/Xusers.htm.
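The exhaustive search can be sketched as follows; a leave-one-out nearest-neighbor scorer stands in for the VERI classifier, whose internals are not reproduced here.

    from itertools import combinations
    import numpy as np

    def loo_score(X, y):
        # Leave-one-out, 1-nearest-neighbor (Euclidean) accuracy: a
        # simple stand-in for the VERI Leave-One-Out evaluation.
        hits = 0
        for i in range(len(X)):
            d = np.linalg.norm(X - X[i], axis=1)
            d[i] = np.inf
            hits += int(y[np.argmin(d)] == y[i])
        return hits / len(X)

    def best_feature_subset(X, y):
        # Brute force over all 2^n - 1 nonempty feature combinations;
        # tractable only for small n, hence the need for heuristics
        # when the feature count is large.
        best_score, best_cols = -1.0, None
        for k in range(1, X.shape[1] + 1):
            for cols in combinations(range(X.shape[1]), k):
                s = loo_score(X[:, cols], y)
                if s > best_score:
                    best_score, best_cols = s, cols
        return best_score, best_cols

    X = np.random.rand(60, 5)
    y = np.random.randint(0, 2, 60)
    print(best_feature_subset(X, y))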
A laser safety evaluation and pertinent output measurements were performed (during March and April 2002) on the M203PI Grenade Launcher Simulator (GLS) and its associated Umpire Control Gun manufactured by Oscmar International Limited, Auckland, New Zealand. The results were that the Oscmar Umpire Gun is laser hazard Class 1 and can be used without restrictions. The radiant energy output of the Oscmar M203PI GLS, under "Small Source" criteria at 10 centimeters, is laser hazard Class 3b and not usable, under SNL policy, in force-on-force exercises. However, due to a relatively large exit diameter and an intentionally large beam divergence, intended to simulate a large area blast, the output beam geometry met the criteria for "Extended Source" viewing [ANSI Std. Z136.1-2000 (S.l)]. Under this "Extended Source" criterion the output of the M203PI GLS unit was, in fact, laser hazard Class 1 (eye safe) for 3 of the 4 possible modes of laser operation. The 4th mode, "Auto Fire", which simulates a continuous grenade firing every second and is not used at SNL, was laser hazard Class 3a (under the "Extended Source" viewing criteria). The M203PI GLS does present a laser hazard Class 3a to aided viewing with binoculars inside 3 meters from the unit; farther than 3 meters it is eye safe. The M203PI GLS can be considered a Class 1 laser hazard and can be used under SNL policy with the following restrictions: (1) The M203PI GLS unit shall only be programmed for the "Single Fire" (which includes "Rapid Fire") and the "Auto Align" (used in adjusting the alignment of the grenade launcher simulator system to the target) modes of operation. (2) The M203PI GLS shall never be directed against personnel using binoculars inside of 3 meters. DOE Order 5480.16A, Firearms Safety, (Chapter 1)(5)(a)(8)(d) and DOE-STD-1091-96, Firearms Safety (Chapter 4) already prevent ESS laser engagement of personnel (with or without binoculars) "closer than 10 feet (3.05 meters)". Both of these restrictions can be administratively imposed through a formal Operating Procedure or Technical Work Document and by full compliance with DOE orders and standards.
This project makes use of "biomimetic behavioral engineering," in which adaptive strategies used by animals in the real world are applied to the development of autonomous robots. The key elements of the biomimetic approach are to observe and understand a survival behavior exhibited in nature, to create a mathematical model and simulation capability for that behavior, to modify and optimize the behavior for a desired robotics application, and to implement it. The application described in this report is dynamic soaring, a behavior that certain sea birds use to extract flight energy from laminar wind velocity gradients in the shallow atmospheric boundary layer directly above the ocean surface. Theoretical calculations, computational proof-of-principle demonstrations, and the first instrumented experimental flight test data for dynamic soaring are presented to address the feasibility of developing dynamic soaring flight control algorithms to sustain the flight of unmanned airborne vehicles (UAVs). Both hardware and software were developed for this application. Eight-foot custom foam sailplanes were built and flown in a steep shear gradient. A logging device was designed and constructed, with custom software, to record flight data during dynamic soaring maneuvers. A computational toolkit was developed to simulate dynamic soaring in special cases and with a full 6-degree-of-freedom flight dynamics model in a generalized time-dependent wind field. Several 3-dimensional visualization tools were built to replay the flight simulations. A realistic aerodynamics model of an eight-foot sailplane was developed using measured aerodynamic derivatives. Genetic programming methods were developed and linked to the simulations and visualization tools. These tools can now be generalized for other biomimetic behavior applications.
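The energetics that make dynamic soaring possible can be illustrated with a simple shear-crossing estimate; the logarithmic wind profile and the numbers below are textbook assumptions, not data from these flight tests.

    import math

    # Logarithmic boundary-layer wind profile, W(z) = (u*/k) * ln(z/z0).
    def wind_speed(z, u_star=0.5, z0=0.01, k=0.4):
        return (u_star / k) * math.log(z / z0)

    # In the idealized soaring cycle, turning through the shear layer
    # adds the wind-speed difference dW to the airspeed, so the kinetic
    # energy gained per crossing is roughly
    #   dE = 0.5*m*((V + dW)**2 - V**2)
    m, V = 2.5, 15.0                  # illustrative sailplane: kg, m/s
    dW = wind_speed(20.0) - wind_speed(1.0)
    dE = 0.5 * m * ((V + dW) ** 2 - V ** 2)
    print("shear dW = %.1f m/s, energy gain per crossing ~ %.0f J"
          % (dW, dE))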
Historically, high resolution, high slew rate optics have been heavy, bulky, and expensive. Recent advances in MEMS (Micro Electro Mechanical Systems) technology and micro-machining may change this. Specifically, the advent of steerable sub-millimeter sized mirror arrays could provide the breakthrough technology for producing very small-scale, high-performance optical systems. For example, an array of steerable MEMS mirrors could be the building blocks for a Fresnel mirror of controllable focal length and direction of view. When coupled with a convex parabolic mirror, the steerable array could realize a micro-scale pan, tilt, and zoom system that provides full CCD sensor resolution over the desired field of view with no moving parts (other than the MEMS elements). This LDRD provided the first steps toward the goal of a new class of small-scale, high-performance optics based on MEMS technology. A large-scale proof-of-concept system was built to demonstrate the effectiveness of an optical configuration applicable to producing a small-scale (< 1 cm) pan and tilt imaging system. This configuration consists of a color CCD imager with a narrow field of view lens, a steerable flat mirror, and a convex parabolic mirror. The steerable flat mirror directs the camera's narrow field of view to small areas of the convex mirror, providing much higher pixel density in the region of interest than is possible with a full 360 deg. imaging system. Improved image correction (dewarping) software based on texture mapping images to geometric solids was developed. This approach takes advantage of modern graphics hardware and provides a great deal of flexibility for correcting images from various mirror shapes. An analytical evaluation of blur spot size and an axisymmetric reflector optimization were performed to address depth-of-focus issues that occurred in the proof-of-concept system. The resulting equations will provide the tools for developing future system designs.
GENESIS Version 2.0 is a general circulation model developed at the National Center for Atmospheric Research (NCAR) and is the principal code used by paleoclimatologists to model climate at various times throughout Earth's history. The primary result of this LDRD project has been the development of a distributed-memory parallel version of GENESIS, leading to a significant performance enhancement on commodity-based, large-scale computing platforms like Cplant. The shared-memory directives of the original version were replaced by MPI calls in the new version of GENESIS. This was accomplished by means of parallel decomposition over latitude-strip domains. The code achieved a parallel speedup of four times that of the shared-memory parallel version at R15 resolution. At T106 resolution it runs 20 times faster than the NCAR serial version on 20 nodes of Cplant. As part of the project, GENESIS was used to model the climatic effects of an orbiting debris ring produced by a large planetary impact event.
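The latitude-strip decomposition can be sketched as a standard halo exchange; this is generic mpi4py illustration code, not code from GENESIS itself, and the grid sizes are arbitrary.

    from mpi4py import MPI
    import numpy as np

    # Each rank owns a contiguous strip of latitudes plus one ghost row
    # on each side, and exchanges ghost rows with its north and south
    # neighbors each step (assumes nlat divisible by the rank count).
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    nlat, nlon = 64, 128
    local = np.zeros((nlat // size + 2, nlon))   # +2 ghost rows
    local[1:-1, :] = rank                        # placeholder field

    north = rank - 1 if rank > 0 else MPI.PROC_NULL
    south = rank + 1 if rank < size - 1 else MPI.PROC_NULL
    # First interior row goes north while the south ghost row is filled,
    # and vice versa.
    comm.Sendrecv(local[1, :], dest=north,
                  recvbuf=local[-1, :], source=south)
    comm.Sendrecv(local[-2, :], dest=south,
                  recvbuf=local[0, :], source=north)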
Electro-microfluidics is experiencing explosive growth in new product developments. There are many commercial applications for electro-microfluidic devices such as chemical sensors, biological sensors, and drop ejectors for both printing and chemical analysis. The number of silicon surface micromachined electro-microfluidic products is likely to increase. Manufacturing efficiency and integration of microfluidics with electronics will become important. Surface micromachined microfluidic devices are manufactured with the same tools as ICs (integrated circuits), and their fabrication can be incorporated into the IC fabrication process. In order to realize applications for surface micromachined electro-microfluidic devices, a practical method for getting fluid into these devices must be developed. An Electro-Microfluidic Dual In-line Package (EMDIP™) was developed to be a standard solution that allows for both the electrical and the fluidic connections needed to operate a great variety of electro-microfluidic devices. The EMDIP™ includes a fan-out manifold that, on one side, mates directly with the 200 micron diameter Bosch-etched holes found on the device and, on the other side, mates to larger 1 mm diameter holes. To minimize cost the EMDIP™ can be injection molded in a great variety of thermoplastics, which also serve to optimize fluid compatibility. The EMDIP™ plugs directly into a fluidic printed wiring board, using a standard dual in-line package pattern for the electrical connections and a grid of multiple 1 mm diameter fluidic connections that mate to the underside of the EMDIP™.
The information form of the Kalman filter is used as a device for implementing an optimal, linear, decentralized algorithm on a decentralized topology. A systems approach utilizing design tradeoffs is required to successfully implement an effective data fusion network with minimal communication. Combining decentralized estimation results from the past four decades with practical aspects of nodal network implementation, the final product provides an important benchmark for functionally decentralized system designs.
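The additivity that makes the information form attractive for decentralized fusion can be shown in a few lines; this generic sketch uses invented matrices for illustration and is not the benchmark design itself.

    import numpy as np

    # Information form: Y = P^-1 and y = P^-1 x. Each node reduces its
    # local measurement z = H x + v, v ~ N(0, R), to an information pair
    #   I_k = H^T R^-1 H,  i_k = H^T R^-1 z,
    # and fusion across nodes is a simple sum -- a property that suits
    # low-bandwidth decentralized networks.
    def local_information(H, R, z):
        W = H.T @ np.linalg.inv(R)
        return W @ H, W @ z

    rng = np.random.default_rng(0)
    Y, y = np.eye(2) * 1e-3, np.zeros(2)       # weak prior
    x_true = np.array([1.0, -2.0])
    for _ in range(3):                          # three sensing nodes
        H = rng.random((1, 2))
        R = np.array([[0.1]])
        z = H @ x_true + rng.normal(0.0, 0.3, 1)
        I_k, i_k = local_information(H, R, z)
        Y, y = Y + I_k, y + i_k                 # decentralized fusion
    print("fused estimate:", np.linalg.solve(Y, y))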
Inorganic mesoporous thin films are important for applications such as membranes, sensors, low-dielectric-constant insulators (so-called low-κ dielectrics), and fluidic devices. Over the past five years, several research groups have demonstrated the efficacy of using the evaporation accompanying conventional coating operations such as spin- and dip-coating as an efficient means of driving the self-assembly of homogeneous solutions into highly ordered, oriented, mesostructured films. Understanding such evaporation-induced self-assembly (EISA) processes is of interest for both fundamental and technological reasons. Here, the authors use spatially resolved 2D grazing incidence X-ray scattering in combination with optical interferometry during steady-state dip-coating of surfactant-templated silica thin films to structurally and compositionally characterize the EISA process. They report the evolution of a hexagonal (p6mm) thin-film mesophase from a homogeneous precursor solution and its further structural development during drying and calcination. Monte Carlo simulations of water/ethanol/surfactant bulk phase behavior are used to investigate the role of ethanol in the self-assembly process, and they propose a mechanism to explain the observed dilation in unit cell dimensions during solvent evaporation.
An approach is presented to compute the force on a spherical particle in a rarefied flow of a monatomic gas. This approach relies on the development of a Green's function that describes the force on a spherical particle in a delta-function molecular velocity distribution function. The gas-surface interaction model in this development allows incomplete accommodation of energy and tangential momentum. The force from an arbitrary molecular velocity distribution is calculated by computing the moment of the force Green's function in the same way that other macroscopic variables are determined. Since the molecular velocity distribution function is directly determined in the DSMC method, the force Green's function approach can be implemented straightforwardly in DSMC codes. A similar approach yields the heat transfer to a spherical particle in a rarefied gas flow. The force Green's function is demonstrated by application to two problems. First, the drag force on a spherical particle at arbitrary temperature and moving at arbitrary velocity through an equilibrium motionless gas is found analytically and numerically. Second, the thermophoretic force on a motionless particle in a motionless gas with a heat flux is found analytically and numerically. Good agreement is observed in both situations.
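In symbols, the idea is that the force is a velocity-space moment of the Green's function (our notation, chosen to match the description above):

\[
\mathbf{F} = \int \mathbf{G}(\mathbf{c})\, f(\mathbf{c})\, d^{3}c,
\]

where f(\mathbf{c}) is the molecular velocity distribution function sampled directly by DSMC and \mathbf{G}(\mathbf{c}) is the force produced by the delta-function distribution \delta(\mathbf{c}' - \mathbf{c}), with the accommodation coefficients of the gas-surface interaction embedded in \mathbf{G}; the heat transfer follows from an analogous scalar Green's function.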
In this report we describe the construction and characterization of a small quantum processor based on trapped ions. This processor could ultimately be used to perform analogue quantum simulations with an engineered, computationally cold bath for increasing the system's robustness to noise. We outline the requirements for building such a simulator, including individual addressing, distinguishable detection, and low crosstalk between operations, and our methods to implement and characterize these requirements. Specifically, for measuring crosstalk we introduce a new method, simultaneous gate set tomography, to characterize crosstalk errors.
The original DAMP (DAta Manipulation Program) was written by Mark Hedemann of Sandia National Laboratories and used the CA-DISSPLA™ graphics package (available from Computer Associates International, Inc., Garden City, NY) as its engine. It was used to plot, modify, and otherwise manipulate the one-dimensional data waveforms (data vs. time) from a wide variety of accelerators. With the waning of CA-DISSPLA and the increasing popularity of Unix®-based workstations, a replacement was needed. This package uses the IDL® software, available from Research Systems Incorporated, a Xerox company, in Boulder, Colorado, as the engine, and creates a set of widgets to manipulate the data in a manner similar to the original DAMP and earlier versions of xdamp. IDL is currently supported on a wide variety of platforms such as IBM® workstations, Hewlett-Packard workstations, SUN® workstations, Microsoft® Windows™ computers, Macintosh® computers, and Digital Equipment Corporation VMS® and Alpha® systems. Thus, xdamp is portable across many platforms. We have verified operation, albeit with some minor IDL bugs, on personal computers using Windows 95 and Windows NT; IBM Unix platforms; DEC Alpha and VMS systems; HP 9000/700 series workstations; and Macintosh computers, both regular and PowerPC™ versions. Version 4 is an update that removes some obsolete features and better supports very large arrays and Excel-formatted data import.
Mine detection dogs have a demonstrated capability to locate hidden objects by trace chemical detection. Because of this capability, demining activities frequently employ mine detection dogs to locate individual buried landmines or for area reduction. The conditions appropriate for the use of mine detection dogs are only beginning to emerge through diligent research that combines dog selection/training, the environmental conditions that impact landmine signature chemical vapors, and vapor sensing performance capability and reliability. This report seeks to address the fundamental soil-chemical interactions, driven by local weather history, that influence the availability of chemical for trace chemical detection. The processes evaluated include landmine chemical emissions to the soil, chemical distribution in soils, chemical degradation in soils, and weather and chemical transport in soils. Simulation modeling is presented as a method to evaluate the complex interdependencies among these various processes and to establish conditions appropriate for trace chemical detection. Results from chemical analyses of soil samples obtained adjacent to landmines are presented and demonstrate the ultra-trace nature of these residues. Lastly, initial measurements of the vapor sensing performance of mine detection dogs demonstrate the extreme sensitivity of dogs in sensing landmine signature chemicals; however, reliability at these ultra-trace vapor concentrations still needs to be determined. Through this compilation, additional work is suggested that will fill in data gaps and improve the utility of trace chemical detection.
The purpose of this report is to summarize discussions from the "Ceramic/Metal Brazing: From Fundamentals to Applications" workshop held at Sandia National Laboratories in Albuquerque, NM on April 4, 2001. Brazing experts and users who bridge common areas of research, design, and manufacturing participated in the exercise. External perspectives on the general state of the science and technology for ceramic/metal brazing were given. Other discussions highlighted and critiqued Sandia's brazing research and engineering programs, including the latest advances in braze modeling and materials characterization. The workshop concluded with a facilitated dialogue that identified critical brazing research needs and opportunities.
This report provides a review of the open literature relating to numerical methods for simulating deep penetration events. The objective of this review is to provide recommendations for future development of the ALEGRA shock physics code to support earth penetrating weapon applications. While this report focuses on coupled Eulerian-Lagrangian methods, a number of complementary methods are also discussed which warrant further investigation. Several recommendations are made for development activities within ALEGRA to support earth penetrating weapon applications in the short, intermediate, and long term.
The quality of low-cost multicrystalline silicon (mc-Si) has improved to the point that it accounts for approximately 50% of worldwide photovoltaic (PV) power production. The performance of commercial mc-Si solar cells still lags behind that of c-Si, due in part to the inability to texture mc-Si effectively and inexpensively. Surface texturing of mc-Si has therefore been an active field of research. Several techniques, including anodic etching [1], wet acidic etching [2], lithographic patterning [3], and mechanical texturing [4], have been investigated with varying degrees of success. To date, a cost-effective technique has not emerged.
This report summarizes the activities of the Computer Science Research Institute at Sandia National Laboratories during the period January 1, 2001 to December 31, 2001.
This study on the opportunities for energy storage technologies determined electric utility application requirements, assessed the suitability of a variety of storage technologies to meet the requirements, and reviewed the compatibility of technologies to satisfy multiple applications in individual installations. The study is called "Opportunities Analysis" because it identified the most promising opportunities for the implementation of energy storage technologies in stationary applications. The study was sponsored by the U.S. DOE Energy Storage Systems Program through Sandia National Laboratories and was performed in coordination with industry experts from utilities, manufacturers, and research organizations. This Phase II report updates the Phase I analysis performed in 1994.
A new concept has been developed which allows direct-to-RF conversion of digitally synthesized waveforms. The concept, named Quadrature Error Corrected Digital Waveform Synthesis (QECDWS), employs quadrature amplitude and phase predistortion of the complex waveform to reduce the undesirable quadrature image. Another undesirable product of QECDWS-based RF conversion is the Local Oscillator (LO) leakage through the quadrature upconverter (mixer). A common technique for reducing this LO leakage is to apply a quadrature bias to the mixer I and Q inputs. This report analyzes this technique through theory, lab measurement, and data analysis for a candidate quadrature mixer for Synthetic Aperture Radar (SAR) applications.
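The two corrections can be sketched together in complex-baseband form; the imbalance and leakage values below are illustrative assumptions, not measured parameters of the candidate mixer.

    import numpy as np

    # Quadrature upconverter model in complex baseband:
    #   y = a*x + b*conj(x) + d
    # where b generates the quadrature image and d the LO leakage.
    g, phi = 1.05, np.deg2rad(3.0)      # assumed gain/phase imbalance
    a = 0.5 * (1 + g * np.exp(1j * phi))
    b = 0.5 * (1 - g * np.exp(1j * phi))
    d = 0.02 + 0.01j                    # assumed LO feedthrough

    def upconvert(x):
        return a * x + b * np.conj(x) + d

    # QECDWS-style amplitude/phase predistortion plus a quadrature (dc)
    # bias: solve a*x' + b*conj(x') + d = x exactly for x'.
    def predistort(x):
        x0 = x - d                      # dc bias cancels LO leakage
        return (np.conj(a) * x0 - b * np.conj(x0)) / (abs(a) ** 2
                                                      - abs(b) ** 2)

    t = np.arange(4096) / 4096.0
    x = np.exp(2j * np.pi * 200 * t)    # desired single-tone waveform
    err = upconvert(predistort(x)) - x
    print("residual image + LO power:", float(np.mean(abs(err) ** 2)))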
Biomass feedstocks contain roughly 10-30% lignin, a substance that cannot be converted to fermentable sugars. Hence, most schemes for producing biofuels (ethanol) assume that the lignin coproduct will be utilized as boiler fuel to provide heat and power to the process. However, the chemical structure of lignin suggests that it would make an excellent high-value fuel additive if it could be broken down into smaller molecular units. From fiscal year 1997 through fiscal year 2001, Sandia National Laboratories was a participant in a cooperative effort with the National Renewable Energy Laboratory and the University of Utah to develop and scale a base catalyzed depolymerization (BCD) process for lignin conversion. SNL's primary role in the effort was to utilize rapidly heated batch microreactors to perform kinetic studies, examine the reaction chemistry, and develop alternate catalyst systems for the BCD process. This report summarizes the work performed at Sandia during FY97 and FY98 with alcohol-based systems. More recent work with aqueous-based systems is summarized in a second report.
Biomass feedstocks contain roughly 15-30% lignin, a substance that cannot be converted to fermentable sugars. Hence, most schemes for producing biofuels assume that the lignin coproduct will be utilized as boiler fuel. Yet the chemical structure of lignin suggests that it would make an excellent high-value fuel additive if it could be broken down into smaller compounds. From Fiscal Year 1997 through Fiscal Year 2001, Sandia National Laboratories participated in a cooperative effort with the National Renewable Energy Laboratory and the University of Utah to develop and scale a base catalyzed depolymerization (BCD) process for lignin conversion. SNL's primary role in the effort was to perform kinetic studies, examine the reaction chemistry, and develop alternate BCD catalyst systems. This report summarizes the work performed at Sandia during Fiscal Year 1999 through Fiscal Year 2001 with aqueous systems. Work with alcohol-based systems is summarized in Part 1 of this report. Our study of lignin depolymerization by aqueous NaOH showed that the primary factor governing the extent of lignin conversion is the NaOH:lignin ratio; NaOH concentration is at best a secondary issue. The maximum lignin conversion is achieved at NaOH:lignin mole ratios of 1.5-2. This is consistent with acidic compounds in the depolymerized lignin neutralizing the base catalyst. The addition of CaO to NaOH improves the reaction kinetics, but not the degree of lignin conversion. The combination of Na₂CO₃ and CaO offers a cost-saving alternative to NaOH that performs identically to NaOH on a per-Na basis. A process where CaO is regenerated from CaCO₃ could offer further advantages, as could recovering the Na as Na₂CO₃ or NaHCO₃ by neutralization of the product solution with CO₂. Model compound studies show that two types of reactions involving methoxy substituents on the aromatic ring occur: methyl group migration between phenolic groups (making and breaking ether bonds) and the loss of methyl/methoxy groups from the aromatic ring (destruction of ether linkages). The migration reactions are significantly faster than the demethylation reactions, but ultimately the demethylation processes predominate.
Islanding, the supply of energy to a disconnected portion of the grid, is a phenomenon that could result in personnel hazard, interfere with reclosure, or damage hardware. Considerable effort has been expended on the development of IEEE 929, a document that defines unacceptable islanding and a method for evaluating energy sources. The worst expected loads for an islanded inverter are defined in IEEE 929 as being composed of passive resistance, inductance, and capacitance. However, a controversy continues concerning the possibility that a capacitively compensated, single-phase induction motor with a very lightly damped mechanical load having a large rotational inertia would be a significantly more difficult load to shed during an island. This report documents the result of a study that shows such a motor is not a more severe case, simply a special case of the RLC network.
We discuss the application of the FETI-DP linear solver within the Salinas finite element application. An overview of Salinas and of the FETI-DP solver is presented. We discuss scalability of the software on ASCI Red, Cplant, and ASCI White. Options for solution of the coarse-grid problem that results from the FETI formulation are evaluated. The finite element software and solver are seen to be numerically and CPU scalable on each of these platforms. In addition, the software is very robust and can be used on a large variety of finite element models.
The magnetically excited flexural plate wave (mag-FPW) device has great promise as a versatile sensor platform. FPWs can have better sensitivity at lower operating frequencies than surface acoustic wave (SAW) devices. The lower operating frequency (< 1 MHz for the FPW versus several hundred MHz to a few GHz for the SAW device) simplifies the control electronics and makes integration of the sensor with its electronics easier. Magnetic rather than piezoelectric excitation of the FPW greatly simplifies the device structure and processing by eliminating the need for piezoelectric thin films, also simplifying integration issues. The versatile mag-FPW resonator structure can potentially be configured to fulfill a number of critical functions in an autonomous sensored system. As a physical sensor, the device can be extremely sensitive to temperature, fluid flow, strain, acceleration, and vibration. By coating the membrane with self-assembled monolayers (SAMs) or polymer films with selective absorption properties (originally developed for SAW sensors), the mass sensitivity of the FPW allows it to be used as a biological or chemical sensor. Yet another critical need in autonomous sensor systems is the ability to pump fluid, and FPW structures can be configured as micro-pumps. This report describes work done to develop mag-FPW devices as physical, chemical, and acoustic sensors, and as micro-pumps for both liquid and gas-phase analytes, to enable new integrated sensing platforms.
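The gravimetric advantage underlying the chemical-sensing application follows from the standard thin-plate mass-sensitivity result (our notation, given for context rather than taken from this report):

\[
S_m = \lim_{\Delta m \to 0} \frac{1}{f_0}\frac{\Delta f}{\Delta m}
\approx -\frac{1}{2M},
\]

where M is the mass per unit area of the composite membrane and \Delta m is the added areal mass from the sorbed analyte; because the FPW membrane can be made very thin, M is small and large fractional frequency shifts are obtained even at sub-MHz operating frequencies.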
This report describes the results of the FY01 Level 1 Peer Reviews for the Verification and Validation (V&V) Program at Sandia National Laboratories. V&V peer review at Sandia is intended to assess the ASCI (Accelerated Strategic Computing Initiative) code team V&V planning process and its execution. The Level 1 Peer Review process is conducted in accordance with the process defined in SAND2000-3099, and V&V Plans are developed in accordance with the guidelines defined in SAND2000-3101. The peer review process and the process for improving the guidelines are necessarily synchronized and form parts of a larger quality improvement process supporting the ASCI V&V program at Sandia. During FY00, a prototype of the process was conducted for two code teams and their V&V Plans, and the process and guidelines were updated based on the prototype. In FY01, Level 1 Peer Reviews were conducted on an additional eleven code teams and their respective V&V Plans. This report summarizes the results from those peer reviews, including recommendations from the panels that conducted the reviews.
The Controlatron Software Suite is a custom-built application for performing automated testing of Controlatron neutron tubes. The software package was designed to allow users to design tests and to run a series of test suites on a tube. The data are output to ASCII files of a pre-defined format for analysis and viewing with the Controlatron Data Viewer application. This manual discusses the operation of the Controlatron Test Suite Software and gives a brief discussion of state machine theory, as a state machine is the functional basis of the software.
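For readers unfamiliar with the approach, a minimal table-driven state machine of the kind used to sequence automated tests is sketched below; the states, events, and actions are invented for illustration and are not the Controlatron software's actual design.

    # Table-driven state machine: (state, event) -> (next state, action).
    def log(msg):
        print(msg)

    TRANSITIONS = {
        ("idle",    "load_test"): ("ready",   lambda: log("test loaded")),
        ("ready",   "start"):     ("running", lambda: log("test started")),
        ("running", "step_done"): ("running", lambda: log("point saved")),
        ("running", "complete"):  ("idle",    lambda: log("file written")),
        ("running", "fault"):     ("idle",    lambda: log("tube safed")),
    }

    def dispatch(state, event):
        next_state, action = TRANSITIONS[(state, event)]
        action()
        return next_state

    state = "idle"
    for event in ("load_test", "start", "step_done", "complete"):
        state = dispatch(state, event)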
Electrical connectors corrode. Even our best SA and MC connectors, finished with 50 to 100 microinches of gold over 50 to 100 microinches of nickel, corrode. This work started because some, but not all, lots of connectors held in KC stores for a decade had been destroyed by pore corrosion (chemical corrosion). We have identified a MIL-L-87177 lubricant that absolutely stops chemical corrosion on SA connectors, even in the most severe environments. For commercial connectors, which typically have thinner plating, the lubricant not only significantly retards the effects of chemical corrosion but also greatly prolongs fretting life. This report highlights the initial development history and use of the lubricant at Bell Labs and AT&T, and the Battelle studies and USAF experience that led to its deployment to stop dangerous connector corrosion on the F-16. We report the Sandia, HFM&T, and Battelle development work, connector qualification, and material compatibility studies that demonstrate its usefulness and safety on JTA and WR systems. We will be applying MIL-L-87177 connector lubricant to all new connectors that go into KC stores. We recommend that it be applied to connectors on newly built cables and equipment as well as to material that recycles through manufacturing locations from the field.
This document summarizes research of reactively deposited metal hydride thin films and their properties. Reactive deposition processes are of interest, because desired stoichiometric phases are created in a one-step process. In general, this allows for better control of film stress compared with two-step processes that react hydrogen with pre-deposited metal films. Films grown by reactive methods potentially have improved mechanical integrity, performance and aging characteristics. The two reactive deposition techniques described in this report are reactive sputter deposition and reactive deposition involving electron-beam evaporation. Erbium hydride thin films are the main focus of this work. ErH{sub x} films are grown by ion beam sputtering erbium in the presence of hydrogen. Substrates include {alpha}-Al{sub 2}O{sub 3} {l_brace}0001{r_brace}, {alpha}-Al{sub 2}O{sub 3} {l_brace}1120{r_brace}, Si{l_brace}001{r_brace} having a native oxide, and polycrystalline molybdenum substrates. Scandium dideuteride films are also studied. ScD{sub x} is grown by evaporating scandium in the presence of molecular deuterium. Substrates used for scandium deuteride growth include single crystal sapphire and molybdenum-alumina cermet. Ultra-high vacuum methods are employed in all experiments to ensure the growth of high purity films, because both erbium and scandium have a strong affinity for oxygen. Film microstructure, phase, composition and stress are evaluated using a number of thin film and surface analytical techniques. In particular, we present evidence for a new erbium hydride phase, cubic erbium trihydride. This phase develops in films having a large in-plane compressive stress independent of substrate material. Erbium hydride thin films form with a strong <111> out-of-plane texture on all substrate materials. A moderate in-plane texture is also found; this crystallographic alignment forms as a result of the substrate/target geometry and not epitaxy. Multi-beam optical sensors (MOSS) are used for in-situ analysis of erbium hydride and scandium hydride film stress. These instruments probe the evolution of film stress during all stages of deposition and cooldown. Erbium hydride thin film stress is investigated for different growth conditions including temperature and sputter gas, and properties such as thermal expansion coefficient are measured. The in-situ stress measurement technique is further developed to make it suitable for manufacturing systems. New features added to this technique include the ability to monitor multiple substrates during a single deposition and a rapidly switched, tiltable mirror that accounts for small differences in sample alignment on a platen.
The transboundary nature of water resources demands a transboundary approach to their monitoring and management. However, transboundary water projects raise a challenging set of problems related to communication issues, and standardization of sampling, analysis and data management methods. This manual addresses those challenges and provides the information and guidance needed to perform the Navruz Project, a cooperative, transboundary, river monitoring project involving rivers and institutions in Kazakhstan, Kyrgyzstan, Tajikistan, and Uzbekistan facilitated by Sandia National Laboratories in the U.S. The Navruz Project focuses on waterborne radionuclides and metals because of their importance to public health and nuclear materials proliferation concerns in the region. This manual provides guidelines for participants on sample and data collection, field equipment operations and procedures, sample handling, laboratory analysis, and data management. Also included are descriptions of rivers, sampling sites and parameters on which data are collected. Data obtained in this project are shared among all participating countries and the public through an internet web site, and are available for use in further studies and in regional transboundary water resource management efforts. Overall, the project addresses three main goals: to help increase capabilities in Central Asian nations for sustainable water resources management; to provide a scientific basis for supporting nuclear transparency and non-proliferation in the region; and to help reduce the threat of conflict in Central Asia over water resources, proliferation concerns, or other factors.
A suite of laboratory triaxial compression and triaxial steady-state creep tests provides quasi-static elastic constants and damage criteria for bedded rock salt and dolomite extracted from Cavern Well No. 1 of the Tioga field in northern Pennsylvania. The elastic constants, quasi-static damage criteria, and creep parameters of the host rocks provide information for evaluating a proposed cavern field for gas storage near Tioga, Pennsylvania. The Young's modulus of the dolomite was determined to be 6.4 ({+-}1.0) x 10{sup 6} psi, with a Poisson's ratio of 0.26 ({+-}0.04). The elastic Young's modulus, obtained from the slope of the unloading-reloading portion of the stress-strain plots, was 7.8 ({+-}0.9) x 10{sup 6} psi. The damage criterion of the dolomite based on the peak load was determined to be J{sub 2}{sup 0.5} (psi) = 3113 + 0.34 I{sub 1} (psi), where I{sub 1} is the first invariant of the stress tensor and J{sub 2} is the second invariant of the deviatoric stress tensor. Using the dilation limit as a threshold level for damage, the damage criterion was conservatively estimated as J{sub 2}{sup 0.5} (psi) = 2614 + 0.30 I{sub 1} (psi). The Young's modulus of the rock salt, which will host the storage cavern, was determined to be 2.4 ({+-}0.65) x 10{sup 6} psi, with a Poisson's ratio of 0.24 ({+-}0.07). The elastic Young's modulus was determined to be 5.0 ({+-}0.46) x 10{sup 6} psi. Unlike the dolomite specimens under triaxial compression, the rock salt specimens did not show shear failure at peak axial load. Instead, most specimens showed distinct dilatancy as an indication of internal damage. Based on the dilation limit, the damage criterion for the rock salt was estimated as J{sub 2}{sup 0.5} (psi) = 704 + 0.17 I{sub 1} (psi). To determine the time-dependent deformation of the rock salt, we conducted five triaxial creep tests. The creep deformation of the Tioga rock salt was modeled with the three-parameter power law {var_epsilon}{sub s} = 1.2 x 10{sup -17} {sigma}{sup 4.75} exp(-6161/T), where {var_epsilon}{sub s} is the steady-state strain rate in s{sup -1}, {sigma} is the applied axial stress difference in psi, and T is the temperature in Kelvin.
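As a quick illustration of how the fitted creep law is applied, the sketch below evaluates the reported three-parameter power law in Python; the constants are those quoted above, and the example stress and temperature are arbitrary.

```python
# Sketch: evaluate the fitted three-parameter power law for steady-state
# creep of the Tioga rock salt. Constants are those quoted in the report;
# units as stated there: stress in psi, temperature in K, strain rate in 1/s.
import math

def steady_state_creep_rate(sigma_psi: float, T_kelvin: float) -> float:
    """Steady-state strain rate eps_dot = A * sigma^n * exp(-Q/T)."""
    A, n, Q = 1.2e-17, 4.75, 6161.0
    return A * sigma_psi**n * math.exp(-Q / T_kelvin)

# Example (arbitrary conditions): 2000 psi stress difference at 300 K
print(steady_state_creep_rate(2000.0, 300.0))
```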
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
This report describes Umbra's High Level Architecture (HLA) library. This library serves as an interface to the Defense Modeling and Simulation Office's (DMSO) Run Time Infrastructure Next Generation Version 1.3 (RTI NG1.3) software library and enables Umbra-based models to be federated into HLA environments. The Umbra library was built to enable the modeling of robots for military and security system concept evaluation. A first application provides component technologies that ideally fit the US Army JPSD's Joint Virtual Battlespace (JVB) simulation framework for Objective Force concept analysis. In addition to describing the Umbra HLA library, the report describes general issues of integrating Umbra with RTI code and outlines ways of building models to support particular HLA simulation frameworks like the JVB.
Supervisory Control and Data Acquisition (SCADA) systems are a part of the nation's critical infrastructure that is especially vulnerable to attack or disruption. Sandia National Laboratories is developing a high-security SCADA specification to increase the national security posture of the U.S. Because SCADA security is an international problem and is shaped by foreign and multinational interests, Sandia is working to develop a standards-based solution through committees such as the IEC TC 57 WG 15, the IEEE Substation Committee, and the IEEE P1547-related activity on communications and controls. The accepted standards are anticipated to take the form of a Common Criteria Protection Profile. This report provides the status of work completed and discusses several challenges ahead.
This report details some proof-of-principle experiments we conducted under a small, one year ($100K) grant from the Strategic Environmental Research and Development Program (SERDP) under the SERDP Exploratory Development (SEED) effort. Our chemiresistor technology had been developed over the last few years for detecting volatile organic compounds (VOCs) in the air, but these sensors had never been used to detect VOCs in water. In this project we tried several different configurations of the chemiresistors to find the best method for water detection. To test the effect of direct immersion of the (non-water soluble) chemiresistors in contaminated water, we constructed a fixture that allowed liquid water to pass over the chemiresistor polymer without touching the electrical leads used to measure the electrical resistance of the chemiresistor. In subsequent experiments we designed and fabricated probes that protected the chemiresistor and electronics behind GORE-TEX{reg_sign} membranes that allowed the vapor from the VOCs and the water to reach a submerged chemiresistor without allowing the liquids to touch the chemiresistor. We also designed a vapor flow-through system that allowed the headspace vapor from contaminated water to be forced past a dry chemiresistor array. All the methods demonstrated that VOCs in a high enough concentration in water can be detected by chemiresistors, but the last method of vapor phase exposure to a dry chemiresistor gave the fastest and most repeatable measurements of contamination. Answers to questions posed by SERDP reviewers subsequent to a presentation of this material are contained in the appendix.
The computer vision field has undergone a revolution of sorts in the past five years. Moore's law has driven real-time image processing from the domain of dedicated, expensive hardware to the domain of commercial off-the-shelf computers. This thesis describes the design, analysis, and implementation of a Real-Time Shape-from-Silhouette Sensor (RT S{sup 3}). The system produces time-varying volumetric data at real-time rates (10-30 Hz), in the form of binary volumetric images. Until recently, using this technique in a real-time system was impractical due to the computational burden. The thesis reviews previous work in the field and derives the mathematics behind volumetric calibration, silhouette extraction, and shape-from-silhouette. The sensor implementation uses four color camera/framegrabber pairs and a single high-end Pentium III computer, with the cameras configured to observe a common volume. This hardware runs the RT S{sup 3} software to track volumetric motion. Two types of shape-from-silhouette algorithms were implemented and their relative performance compared. An application of this sensor to markerless motion tracking is also explored. In a recent review of motion-tracking research, Gavrila states that the results of markerless vision-based 3D tracking are still limited. The method proposed in this thesis expands upon the previous work and attempts to overcome these limitations.
Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. Dempster-Shafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.
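To make the combination-of-evidence step concrete, the following minimal Python sketch implements the classical Dempster rule for two mass functions over a discrete frame; the report surveys several alternative rules not shown here, and the mass assignments in the example are hypothetical.

```python
# Minimal sketch of Dempster's rule of combination for two basic probability
# assignments (mass functions) over a discrete frame of discernment.
# Focal elements are frozensets. This is the classical rule only; the report
# surveys alternative rules for handling conflict that are not shown here.

def dempster_combine(m1: dict, m2: dict) -> dict:
    combined, conflict = {}, 0.0
    for A, wA in m1.items():
        for B, wB in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + wA * wB
            else:
                conflict += wA * wB          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    k = 1.0 / (1.0 - conflict)               # normalize the conflict away
    return {C: k * w for C, w in combined.items()}

# Hypothetical example: two sources assign mass to sets of outcomes {a, b, c}
m1 = {frozenset("a"): 0.6, frozenset("abc"): 0.4}
m2 = {frozenset("ab"): 0.7, frozenset("abc"): 0.3}
print(dempster_combine(m1, m2))
```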
Critical infrastructures underpin the domestic security, health, safety, and economic well-being of the United States. They are large, widely dispersed, mostly privately owned systems operated under a mixture of federal, state, and local government departments, laws, and regulations. While there are currently enormous pressures to secure all aspects of all critical infrastructures immediately, budget realities limit the available options. The purpose of this study is to provide a clear framework for systematically analyzing and prioritizing resources to most effectively secure US critical infrastructures from terrorist threats. It is a scalable framework (based on the interplay of consequences, threats, and vulnerabilities) that can be applied at the highest national level, at the component level of an individual infrastructure, or anywhere in between. This study also provides a set of key findings and a recommended approach for framework application. In addition, this study develops three laptop computer-based tools to assist with framework implementation: a Risk Assessment Credibility Tool, a Notional Risk Prioritization Tool, and a County Prioritization Tool. This study's tools and insights are based on Sandia National Laboratories' many years of experience in risk, consequence, threat, and vulnerability assessments, in both defense- and critical infrastructure-related areas.
In October 2000, the personnel responsible for administration of the corporate computers managed by the Scientific Computing Department assembled to reengineer the process of creating and deleting users' computer accounts. Using the Carnegie Mellon Software Engineering Institute (SEI) Capability Maturity Model (CMM) quality improvement process, the team performed the reengineering by modeling the processes and by defining and measuring their maturity, per SEI and CMM practices. The computers residing in the classified environment are bound by the security requirements of the Secure Classified Network (SCN) Security Plan. These security requirements delimited the scope of the project, specifically mandating validation of all user accounts on the central corporate computer systems. System administrators, in addition to their assigned responsibilities, were spending valuable hours performing the additional tacit responsibility of tracking user accountability for user-generated data. For example, in cases where the data originator was no longer an employee, the administrators were forced to spend considerable time and effort determining the appropriate management personnel to assume ownership or disposition of the former owner's data files. To prevent this sort of problem from occurring, and to have a defined procedure in the event of an anomaly, the computer account management procedure was thoroughly reengineered, as detailed in this document. An automated procedure is now in place that is initiated and supplied data by central corporate processes certifying the integrity, timeliness, and authentication of account holders and their management. Automated scripts identify when an account is about to expire, to preempt the problem of data becoming ''orphaned'' without a responsible ''owner'' on the system. The automated account-management procedure currently operates on, and provides a standard process for, all of the computers maintained by the Scientific Computing Department.
Computational techniques for the evaluation of steady plane subsonic flows represented by Chaplygin series in the hodograph plane are presented. These techniques are utilized to examine the properties of the free surface wall jet solution. This solution is a prototype for the shaped charge jet, a problem which is particularly difficult to compute properly using general purpose finite element or finite difference continuum mechanics codes. The shaped charge jet is a classic validation problem for models involving high explosives and material strength. Therefore, the problem studied in this report represents a useful verification problem associated with shaped charge jet modeling.
Superresolution concepts offer the potential of resolution beyond the classical limit, but this great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar (SAR). The analytical basis for superresolution theory is discussed. In a previous report, the application of the concept to synthetic aperture radar was investigated as an operator inversion problem; generally, the operator inversion problem is ill-posed. This work treats the problem from the standpoint of regularization. Both the operator inversion approach and the regularization approach show that the ability to superresolve SAR imagery is severely limited by system noise.
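For readers unfamiliar with the regularization standpoint, the sketch below shows generic Tikhonov-regularized inversion of an ill-conditioned linear operator. It is illustrative Python with a made-up operator, not the report's actual SAR imaging operator; the point is only how the regularization weight trades noise amplification against resolution.

```python
# Generic Tikhonov regularization sketch for an ill-posed linear inversion
# y = A x + noise: minimize |Ax - y|^2 + lam |x|^2. The operator A here is a
# synthetic ill-conditioned matrix, not a SAR imaging operator.
import numpy as np

def tikhonov_invert(A: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40)) @ np.diag(0.7 ** np.arange(40))  # ill-conditioned
x_true = rng.standard_normal(40)
y = A @ x_true + 1e-3 * rng.standard_normal(40)                    # small noise

# Larger lam suppresses noise amplification at the cost of resolution,
# mirroring the report's conclusion that system noise limits superresolution.
for lam in (0.0, 1e-6, 1e-2):
    print(lam, np.linalg.norm(tikhonov_invert(A, y, lam) - x_true))
```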
A technical review is presented of experimental activities and the state of knowledge on airborne radiation source terms resulting from explosive sabotage attacks on spent reactor fuel subassemblies in shielded casks. Current assumptions about the behavior of irradiated fuel are largely based on a limited number of experimental results involving unirradiated, depleted uranium dioxide ''surrogate'' fuel. The behavior of irradiated nuclear fuel subjected to explosive conditions could be different from the behavior of the surrogate fuel, depending on the assumptions made by the evaluator. Available data indicate that these potential differences could result in errors, and possibly orders-of-magnitude overestimates, of aerosol dispersion and potential health effects from sabotage attacks. Furthermore, it is suggested that the current assumptions used in arriving at existing regulations for the transportation and storage of spent fuel in the U.S. are overly conservative. This, in turn, has led to potentially higher-than-needed operating expenses for those activities. A confirmatory experimental program is needed to develop a realistic correlation between the source terms of irradiated fuel and unirradiated fuel. The motivations for performing the confirmatory experimental program are also presented.
This document describes a proactive plan for assessing and controlling sources of risk for the ASCI (Accelerated Strategic Computing Initiative) V&V program at Sandia National Laboratories. It offers a graded approach for identifying, analyzing, prioritizing, responding to, and monitoring risks.
The goal of this project was to develop a device that uses electric fields to grasp and possibly levitate LIGA parts. This non-contact form of grasping would solve many of the problems associated with grasping parts that are only a few microns in dimensions. Scaling laws show that for parts this size, electrostatic and electromagnetic forces are dominant over gravitational forces. This is why micro-parts often stick to mechanical tweezers. If these forces can be controlled under feedback control, the parts could be levitated, possibly even rotated in air. In this project, we designed, fabricated, and tested several grippers that use electrostatic and electromagnetic fields to grasp and release metal LIGA parts. The eventual use of this tool will be to assemble metal and non-metal LIGA parts into small electromechanical systems.
Photovoltaic (PV) inverters are the most mature of any distributed energy resource (DER) inverter, and their mean time to first failure (MTFF) is about five years. This is an unacceptable MTFF and will inhibit the rapid expansion of PV. For all DER technologies (solar, wind, fuel cells, and microturbines), the inverter is still an immature product that will result in reliability problems in fielded systems. The increasing need for all of these technologies to have a reliable inverter provides a unique opportunity to address these needs with focused R&D projects. The requirements for these inverters are so similar that modular designs with universal features are obviously the best solution for a ''next generation'' inverter. A ''next generation'' inverter will have improved performance, higher reliability, and improved profitability. Sandia National Laboratories has estimated that the development of a ''next generation'' inverter could require approximately 20 man-years of work over an 18- to 24-month time frame, and that a government-industry partnership would greatly improve the chances of success.
The synthesis and characterization of soluble, processable, high-molecular-weight polysilsesquioxanes with carboxylate functionalities are discussed. The tert-butyl functionality in these polymers was eliminated to give carboxylic acid-functionalized polysilsesquioxane or methyltin carboxylatosilsesquioxane gels. The analysis showed that the polysilsesquioxane binds and removes tin through gelation.
The Boeing Company fabricated the Solar Two receiver as a subcontractor for the Solar Two project. The receiver absorbed sunlight reflected from the heliostat field. A molten-nitrate-salt heat transfer fluid was pumped from a storage tank at grade level, heated from 290 to 565 C by the receiver mounted on top of a tower, and then flowed back down into another storage tank. To make electricity, the hot salt was pumped through a steam generator to produce steam that powered a conventional Rankine steam turbine/generator. This evaluation identifies the most significant Solar Two receiver system lessons learned in Mechanical Design, Instrumentation and Control, Panel Fabrication, Site Construction, Receiver System Operation, and Management, from the perspective of the receiver designer/manufacturer. Each lesson learned on the receiver system consists of two parts: the Problem and one or more identified Solutions. The appendix summarizes an inspection of the advanced receiver panel developed by Boeing that was installed and operated in the Solar Two receiver.
Polycrystalline silicon (polysilicon) surface micromachining is a new technology for building micrometer ({micro}m) scale mechanical devices on silicon wafers using techniques and process tools borrowed from the manufacture of integrated circuits. Sandia National Laboratories has invested a significant effort in demonstrating the viability of polysilicon surface micromachining and has developed the Sandia Ultraplanar Micromachining Technology (SUMMiT V{trademark} ) process, which consists of five structural levels of polysilicon. A major advantage of polysilicon surface micromachining over other micromachining methods is that thousands to millions of thin film mechanical devices can be built on multiple wafers in a single fabrication lot and will operate without post-processing assembly. However, if thin film mechanical or surface properties do not lie within certain tightly set bounds, micromachined devices will fail and yield will be low. This results in high fabrication costs to attain a certain number of working devices. An important factor in determining the yield of devices in this parallel-processing method is the uniformity of these properties across a wafer and from wafer to wafer. No metrology tool exists that can routinely and accurately quantify such properties. Such a tool would enable micromachining process engineers to understand trends and thereby improve yield of micromachined devices. In this LDRD project, we demonstrated the feasibility of and made significant progress towards automatically mapping mechanical and surface properties of thin films across a wafer. The MEMS parametrics measurement team has implemented a subset of this platform, and approximately 30 wafer lots have been characterized. While more remains to be done to achieve routine characterization of all these properties, we have demonstrated the essential technologies. These include: (1) well-understood test structures fabricated side-by-side with MEMS devices, (2) well-developed analysis methods, (3) new metrologies (i.e., long working distance interferometry) and (4) a hardware/software platform that integrates (1), (2) and (3). In this report, we summarize the major focus areas of our LDRD project. We describe the contents of several articles that provide the details of our approach. We also describe hardware and software innovations we made to realize a fully automatic wafer prober system for MEMS mechanical and surface property characterization across wafers and from wafer-lot to wafer-lot.
This report documents measurements in inductively driven plasmas containing SF{sub 6}/argon gas mixtures. The data in this report are presented in a series of appendices with a minimum of interpretation. During the course of this work we investigated: the electron and negative ion densities, using microwave interferometry and laser photodetachment; the optical emission; the plasma species, using mass spectrometry; and the ion energy distributions at the surface of the rf-biased electrode in several configurations. The goal of this work was to assemble a consistent set of data to understand the important chemical mechanisms in SF{sub 6}-based processing of materials and to validate models of the gas and surface processes.
This report presents general concepts in a broadly applicable methodology for validation of Accelerated Strategic Computing Initiative (ASCI) codes for Defense Programs applications at Sandia National Laboratories. The concepts are defined and analyzed within the context of their relative roles in an experimental validation process. Examples of applying the proposed methodology to three existing experimental validation activities are provided in appendices, using an appraisal technique recommended in this report.
LOCA, the Library of Continuation Algorithms, is a software library for performing stability analysis of large-scale applications. LOCA enables the tracking of solution branches as a function of a system parameter, the direct tracking of bifurcation points, and, when linked with the ARPACK library, a linear stability analysis capability. It is designed to be easy to implement around codes that already use Newton's method to converge to steady-state solutions. The algorithms are chosen to work for large problems, such as those that arise from discretizations of partial differential equations, and to run on distributed memory parallel machines. This manual presents LOCA's continuation and bifurcation analysis algorithms, and instructions on how to implement LOCA with an application code. The LOCA code is being made publicly available at www.cs.sandia.gov/loca.
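As a minimal illustration of the continuation pattern LOCA automates at scale, the Python sketch below traces a solution branch of a toy scalar problem using natural-parameter continuation with Newton's method. This is illustrative only, not the LOCA API, and the toy residual is invented for the example.

```python
# Natural-parameter continuation sketch: trace solutions of F(x, lam) = 0 as
# the parameter lam marches, seeding each Newton solve with the previous
# converged state. Illustrative Python; not the LOCA library interface.
import numpy as np

def newton(F, J, x, lam, tol=1e-10, maxit=25):
    for _ in range(maxit):
        r = F(x, lam)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.solve(J(x, lam), r)   # standard Newton update
    return x

def trace_branch(F, J, x0, lam_values):
    """The converged solution at each step predicts the next one."""
    branch, x = [], np.asarray(x0, dtype=float)
    for lam in lam_values:
        x = newton(F, J, x, lam)
        branch.append((lam, x.copy()))
    return branch

# Toy problem: x^3 - lam*x + 0.1 = 0
F = lambda x, lam: np.array([x[0]**3 - lam * x[0] + 0.1])
J = lambda x, lam: np.array([[3 * x[0]**2 - lam]])
for lam, x in trace_branch(F, J, [-1.0], np.linspace(0.0, 2.0, 9)):
    print(f"lam={lam:.2f}  x={x[0]: .4f}")
```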
Three-dimensional finite element analyses simulate the mechanical response of enlarging existing caverns at the Strategic Petroleum Reserve (SPR). The caverns are located in Gulf Coast salt domes and are enlarged by leaching during oil drawdowns, as fresh water is injected to displace the crude oil from the caverns. The current criteria adopted by the SPR limit cavern usage to 5 drawdowns (leaches). As a base case, 5 leaches were modeled over a 25-year period to roughly double the volume of a 19-cavern field. Thirteen additional leaches were then simulated until the caverns approached coalescence. The cavern field approximated the geometries and geologic properties found at the West Hackberry site, which enabled comparison of data collected over nearly 20 years with the analysis predictions. The analyses closely predicted the measured surface subsidence and cavern closure rates as inferred from historic wellhead pressures. This provided the necessary assurance that the model displacements, strains, and stresses are accurate. However, the cavern field has not yet experienced the large-scale drawdowns being simulated; should they occur in the future, code predictions should be validated against actual field behavior at that time. The simulations were performed using JAS3D, a three-dimensional finite element analysis code for nonlinear quasi-static solids. The results examine the impacts of leaching and cavern workovers, where internal cavern pressures are reduced, on surface subsidence, well integrity, and cavern stability. The results suggest that the current limit of 5 oil drawdowns may be extended, with some mitigative action required on the wells and, later, on surface structures due to subsidence strains. The predicted stress state in the salt shows damage starting to occur after 15 drawdowns, with significant failure occurring at the 16th drawdown, well beyond the current limit of 5 drawdowns.
This report addresses the effects of spectrum loading on lifetime and residual strength of a typical fiberglass laminate configuration used in wind turbine blade construction. Over 1100 tests have been run on laboratory specimens under a variety of load sequences. Repeated block loading at two or more load levels, either tensile-tensile, compressive-compressive, or reversing, as well as more random standard spectra have been studied. Data have been obtained for residual strength at various stages of the lifetime. Several lifetime prediction theories have been applied to the results. The repeated block loading data show lifetimes that are usually shorter than predicted by the most widely used linear damage accumulation theory, Miner's sum. Actual lifetimes are in the range of 10 to 20 percent of predicted lifetime in many cases. Linear and nonlinear residual strength models tend to fit the data better than Miner's sum, with the nonlinear providing a better fit of the two. Direct tests of residual strength at various fractions of the lifetime are consistent with the residual strength models. Load sequencing effects are found to be insignificant. The more a spectrum deviates from constant amplitude, the more sensitive predictions are to the damage law used. The nonlinear model provided improved correlation with test data for a modified standard wind turbine spectrum. When a single, relatively high load cycle was removed, all models provided similar, though somewhat non-conservative correlation with the experimental results. Predictions for the full spectrum, including tensile and compressive loads were slightly non-conservative relative to the experimental data, and accurately captured the trend with varying maximum load. The nonlinear residual strength based prediction with a power law S-N curve extrapolation provided the best fit to the data in most cases. The selection of the constant amplitude fatigue regression model becomes important at the lower stress, higher cycle loading cases. The residual strength models may provide a more accurate estimate of blade lifetime than Miner's rule for some loads spectra. They have the added advantage of providing an estimate of current blade strength throughout the service life.
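For reference, the linear damage rule against which these data are compared is easy to state in code. The sketch below computes a Miner's sum for block loading; the power-law S-N curve and its coefficients are generic placeholders, not the report's fitted values.

```python
# Sketch of the linear damage accumulation rule (Miner's sum): failure is
# predicted when the summed cycle ratios n_i / N_i reach 1.0. The S-N curve
# N(S) = (S0 / S)**(1/b) is a generic power-law placeholder; note the report
# finds measured lifetimes can be only 10-20% of what this rule predicts.

def miners_sum(blocks, S0=1000.0, b=0.1):
    """blocks: list of (stress_amplitude, cycles_applied) pairs."""
    damage = 0.0
    for S, n in blocks:
        N_allowable = (S0 / S) ** (1.0 / b)   # cycles to failure at amplitude S
        damage += n / N_allowable
    return damage                             # failure predicted at >= 1.0

# Hypothetical two-level repeated block loading
print(miners_sum([(400.0, 5000), (250.0, 200000)]))
```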
The final report for a Laboratory Directed Research and Development project entitled ''Molecular Simulation of Reacting Systems'' is presented. It describes efforts to incorporate chemical reaction events into the LAMMPS massively parallel molecular dynamics code. This was accomplished using a scheme in which several classes of reactions are allowed to occur in a probabilistic fashion at specified times during the MD simulation. Three classes of reaction were implemented: addition, chain transfer, and scission. A fully parallel implementation was achieved using a checkerboarding scheme, which avoids conflicts due to reactions occurring on neighboring processors. The observed chemical evolution is independent of the number of processors used. The code was applied to two test applications: irreversible linear polymerization and thermal degradation chemistry.
Military test and training ranges generate scrap materials from targets and ordnance debris. These materials are routinely removed from the range for recycling; however, energetic material residues in this range scrap have presented a significant safety hazard to operations personnel and have damaged recycling equipment. The Strategic Environmental Research and Development Program (SERDP) sought proof-of-concept evaluations of monitoring technologies to identify energetic residues among range scrap. Sandia National Laboratories teamed with Nomadics, Inc. to evaluate the Nomadics FIDO vapor sensor for application to this problem. Laboratory tests determined the vapor-sensing threshold to be 10 to 20 ppt for TNT and 150 to 200 ppt for DNT. Field tests with the FIDO demonstrated proof of concept that energetic material residues can be identified by vapor sensing in enclosed scrap bins. Items such as low-order detonation debris, demolition block granules, and unused 81-mm mortars were detected quickly and with minimum effort. Conceptual designs for field-screening scrap for energetic material residues include handheld vapor sensing systems, batch scrap sensing systems, continuous conveyor sensing systems, and a hot gas decontamination verification system.
A physics-based understanding of material aging mechanisms helps to increase reliability when predicting the lifetime of mechanical and electrical components. This report examines in detail the mechanisms of atmospheric copper sulfidation and evaluates new methods of parallel experimentation for high-throughput corrosion analysis. Often our knowledge of aging mechanisms is limited because coupled chemical reactions and physical processes are involved that depend on complex interactions with the environment and component functionality. Atmospheric corrosion is one of the most complex aging phenomena and it has profound consequences for the nation's economy and safety. Therefore, copper sulfidation was used as a test-case to examine the utility of parallel experimentation. Through the use of parallel and conventional experimentation, we measured: (1) the sulfidation rate as a function of humidity, light, temperature and O{sub 2} concentration; (2) the primary moving species in solid state transport; (3) the diffusivity of Cu vacancies through Cu{sub 2}S; (4) the sulfidation activation energies as a function of relative humidity (RH); (5) the sulfidation induction times at low humidities; and (6) the effect of light on the sulfidation rate. Also, the importance of various sulfidation mechanisms was determined as a function of RH and sulfide thickness. Different models for sulfidation-reactor geometries and the sulfidation reaction process are presented.
Laser beam welding is the principal welding process for the joining of Sandia weapon components because it can provide a small fusion zone with low overall heating. Improved process robustness is desired since laser energy absorption is extremely sensitive to joint variation and filler metal is seldom added. This project investigated the experimental and theoretical advantages of combining a fiber optic delivered Nd:YAG laser with a miniaturized GMAW system. Consistent gas metal arc droplet transfer employing a 0.25 mm diameter wire was only obtained at high currents in the spray transfer mode. Excessive heating of the workpiece in this mode was considered an impractical result for most Sandia micro-welding applications. Several additional droplet detachment approaches were investigated and analyzed including pulsed tungsten arc transfer (droplet welding), servo accelerated transfer, servo dip transfer, and electromechanically braked transfer. Experimental observations and rigorous analysis of these approaches indicate that decoupling droplet detachment from the arc melting process is warranted and may someday be practical.
The Solar Thermal Program at Sandia supports work developing dish/Stirling systems to convert solar energy into electricity. Heat pipe technology is ideal for transferring the energy of concentrated sunlight from the parabolic dish concentrators to the Stirling engine heater tubes. Heat pipes can absorb the solar energy at non-uniform flux distributions and release this energy to the Stirling engine heater tubes at a very uniform flux distribution, thus decoupling the design of the engine heater head from the solar absorber. The most important part of a heat pipe is the wick, which transports the sodium over the heated surface area. Bench-scale heat pipes were designed and built to test different wicks and cleaning procedures more economically, in both time and money. This report covers the building, testing, and post-test analysis of the sixth in a series of bench-scale heat pipes. Durability heat pipe No. 6 was built and tested to determine the effects of a high-temperature bakeout, at 950 C, on wick corrosion during long-term operation. Previous tests showed high levels of corrosion with low-temperature bakeouts (650-700 C). Durability heat pipe No. 5 had a high-temperature bakeout and reflux cleaning and showed low levels of wick corrosion after long-term operation. After testing for 5,003 hours at an operating temperature of 750 C, durability heat pipe No. 6 also showed low levels of wick corrosion. This test shows that a high-temperature bakeout alone will significantly reduce wick corrosion without the need for costly and time-consuming reflux cleaning.
This report presents the major findings of the Montana State University Composite Materials Fatigue Program from 1997 to 2001, and is intended to be used in conjunction with the DOE/MSU Composite Materials Fatigue Database. Additions of greatest interest to the database in this time period include environmental and time under load effects for various resin systems; large tow carbon fiber laminates and glass/carbon hybrids; new reinforcement architectures varying from large strands to prepreg with well-dispersed fibers; spectrum loading and cumulative damage laws; giga-cycle testing of strands; tough resins for improved structural integrity; static and fatigue data for interply delamination; and design knockdown factors due to flaws and structural details as well as time under load and environmental conditions. The origins of a transition to increased tensile fatigue sensitivity with increasing fiber content are explored in detail for typical stranded reinforcing fabrics. The second focus of the report is on structural details which are prone to delamination failure, including ply terminations, skin-stiffener intersections, and sandwich panel terminations. Finite element based methodologies for predicting delamination initiation and growth in structural details are developed and validated, and simplified design recommendations are presented.
This report describes research and development of the large eddy simulation (LES) turbulence modeling approach conducted as part of Sandia's Laboratory Directed Research and Development (LDRD) program. The emphasis of the work described here has been toward developing the capability to perform accurate and computationally affordable LES calculations of engineering problems using unstructured-grid codes, in wall-bounded geometries, and for problems with coupled physics. Specific contributions documented here include (1) the implementation and testing of LES models in Sandia codes, including tests of a new conserved-scalar laminar-flamelet SGS combustion model that does not assume statistical independence between the mixture fraction and the scalar dissipation rate; (2) the development and testing of statistical analysis and visualization utility software for Exodus II unstructured-grid LES; and (3) the development and testing of a novel LES near-wall subgrid model based on the One-Dimensional Turbulence (ODT) model.
This report summarizes progress from the Laboratory Directed Research and Development (LDRD) program during fiscal year 2001. In addition to a programmatic and financial overview, the report includes progress reports from 295 individual R&D projects in 14 categories.
The purpose of this study is to identify connections between technology needs for countering terrorism and underlying science issues, and to recommend investment strategies to increase the impact of basic research on efforts to counter terrorism.
We have used a nonionic inverse micelle synthesis technique to form nanoclusters of platinum and palladium. These nanoclusters can be rendered hydrophobic or hydrophilic by the appropriate choice of capping ligand. Unlike Au nanoclusters, Pt nanoclusters show great stability with thiol ligands in aqueous media. Alkane thiols, with alkane chains ranging from C6 to C18, were used as hydrophobic ligands, and with some of these we were able to form two-dimensional and/or three-dimensional superlattices of Pt nanoclusters as small as 2.7 nm in diameter. Image processing techniques were developed to reliably extract from transmission electron micrographs (TEMs) the particle size distribution, and information about the superlattice domains and their boundaries. The latter permits us to compute the intradomain vector pair correlation function of the particle centers, from which we can accurately determine the lattice spacing and the coherent domain size. From these data the gap between the particles in the coherent domains can be determined as a function of the thiol chain length. It is found that as the thiol chain length increases, the interparticle gaps increase more slowly than the measured hydrodynamic radius of the functionalized nanoclusters in solution, possibly indicating thiol chain interdigitation in the superlattices.
The oxidation behavior of nickel-matrix/aluminum-particle composite coatings was studied using thermogravimetric (TG) analysis and long-term furnace exposure in air at 1000°C. The coatings were applied by the composite-electrodeposition technique and vacuum heat treated for 3 hr at 825°C prior to oxidation testing. The heat-treated coatings consisted of a two-phase mixture of γ (Ni) + γ′(Ni3Al). During short-term exposure at 1000°C, a thin α-Al2O3 layer developed below a matrix of spinel NiAl2O4, with θ-Al2O3 needles at the outer oxide surface. After 100 hr of oxidation, remnants of θ-Al2O3 are present with spinel at the surface and an inner layer of θ-Al2O3. After 1000-2000 hr, a relatively thick layer of α-Al2O3 is found below a thin, outer spinel layer. Oxidation kinetics are controlled by the slow growth of the inner Al2O3 layer at short-term and intermediate exposures. At long times, an increase in mass gain is found due to oxidation at the coating-substrate interface and enhanced scale formation possibly in areas of reduced Al content. Ternary Si additions to Ni-Al composite coatings were found to have little effect on oxidation performance. Comparison of coatings with bulk Ni-Al alloys showed that low Al γ-alloys exhibit a healing Al2O3 layer after transient Ni-rich oxide growth. Higher Al alloys display Al2O3-controlled kinetics with low mass gain during TG analysis.
Laser safety evaluations and output emission measurements were performed during October and November 2001 on SNL MILES and Mini MILES laser-emitting components. The purpose was to verify that these components not only meet the Class 1 (eye-safe) laser hazard criteria of the CDRH Compliance Guide for Laser Products and the 21 CFR 1040 Laser Product Performance Standard, but also meet the more stringent conditions for Class 1 lasers of ANSI Std. Z136.1-2000, Safe Use of Lasers, which governs SNL laser operations. The results of these measurements confirmed that all of the Small Arms Laser Transmitters, as currently set (''as is''), meet the Class 1 criteria. Several of the Mini MILES Small Arms Transmitters did not; these were modified and re-tested and now meet the Class 1 laser hazard criteria. All but one of the System Controllers (hand-held and rifle stock) met Class 1 criteria for single trigger pulls, and all presented Class 3a laser hazard levels if the trigger is held (continuous emission) for more than 5 seconds on a single point target. All units were Class 3a for ''aided'' viewing. These units were modified and re-tested and now meet the Class 1 hazard criteria for both ''aided'' and ''unaided'' viewing. All of the Claymore Mine laser emitters tested are laser hazard Class 1 for both ''aided'' and ''unaided'' viewing.
The use of biometrics for the identification of individuals is becoming more prevalent in society and in the general government community. As the demand for these devices increases, it becomes necessary for the user community to have the facts needed to determine which device is the most appropriate for any given application. One such application is the use of biometric devices in areas where an individual may not be able to present a biometric feature that requires contact with the identifier (e.g., when dressed in anti-contamination suits or when wearing a respirator). This paper discusses a performance evaluation conducted on the IrisScan2200 from Iridian Technologies to determine if it could be used in such a role.
The theory, numerical algorithm, and user documentation are provided for a new ''Centroidal Voronoi Tessellation (CVT)'' method of filling a region of space (2D or 3D) with particles at any desired particle density. ''Clumping'' is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called ''mesh-free'' method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported.
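The statistical avoidance of explicit Voronoi construction can be illustrated with a sampling-based (k-means style) CVT iteration. The 2-D Python sketch below is illustrative only; it omits the density weighting, 3-D support, moment computation, and DIATOM input of the actual code.

```python
# Sampling-based (probabilistic Lloyd / k-means style) CVT sketch in 2-D:
# random sample points are binned to their nearest particle, and each
# particle moves to the mean of its bin. No Voronoi diagram is ever built
# explicitly, which is the efficiency idea described in the report.
import numpy as np

def cvt(particles: np.ndarray, n_iters=50, samples_per_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    p = particles.copy()
    for _ in range(n_iters):
        s = rng.random((samples_per_iter, 2))        # uniform samples, unit square
        d = ((s[:, None, :] - p[None, :, :]) ** 2).sum(axis=2)
        owner = d.argmin(axis=1)                     # implicit Voronoi membership
        for k in range(len(p)):
            mine = s[owner == k]
            if len(mine):
                p[k] = mine.mean(axis=0)             # move toward cell centroid
    return p

# 100 particles relax from random positions toward a clump-free CVT layout
pts = cvt(np.random.default_rng(1).random((100, 2)))
```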
This report describes the results of a Laboratory-Directed Research and Development project on techniques for pattern discovery in discrete-event time series data. In this project, we explored two different aspects of the pattern matching/discovery problem. The first aspect studied was the use of Dynamic Time Warping (DTW) for pattern matching in continuous data. In essence, DTW is a technique for aligning time series along the time axis to optimize a similarity measure. The second aspect studied was techniques for discovering patterns in discrete-event data. We developed a pattern discovery tool based on adaptations of the Apriori and GSP (Generalized Sequential Pattern mining) algorithms. We then used the tool in three different application areas: unattended monitoring system data from a storage magazine, computer network intrusion detection, and analysis of robot training data.
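A minimal version of the DTW dynamic program is shown below; this is illustrative Python, not the project's tool.

```python
# Minimal dynamic-time-warping sketch: the O(n*m) dynamic program that aligns
# two series along the time axis and returns the optimal alignment cost.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match / insertion / deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# Same waveform shifted in time: DTW cost stays small where a rigid
# point-by-point (Euclidean) comparison would be large
t = np.linspace(0, 2 * np.pi, 100)
print(dtw_distance(np.sin(t), np.sin(t - 0.5)))
```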
The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) the Comprehensive Test Ban Treaty and (2) the Stockpile Life Extension Program, which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The purpose of this document is to improve confidence in computational results by demonstrating and documenting the predictive capability of electrical circuit codes and the underlying conceptual, mathematical, and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal-environment Verification and Validation Plan.
FAILPROB is a computer program that applies the Weibull statistics characteristic of brittle failure of a material, along with the stress field resulting from a finite element analysis, to determine the probability of failure of a component. FAILPROB uses the statistical techniques for fast-fracture prediction (but not the coding) from the NASA CARES/Life ceramic reliability package. FAILPROB provides the analyst at Sandia with a more convenient tool than CARES/Life because it is designed to behave in the tradition of structural analysis post-processing software such as ALGEBRA, in which the standard finite element database format EXODUS II is both read and written. This maintains compatibility with the entire SEACAS suite of post-processing software. A new technique to deal with the high local stresses computed for structures with singularities, such as glass-to-metal seals and ceramic-to-metal braze joints, is proposed and implemented. This technique provides failure probability computation that is insensitive to the finite element mesh employed in the underlying stress analysis. Included in this report are a brief discussion of the computational algorithms employed, user instructions, and example problems that both demonstrate the operation of FAILPROB and provide a starting point for verification and validation.
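In spirit, the weakest-link computation reduces to accumulating a Weibull risk integral over the elements. The sketch below shows that reduction with hypothetical element volumes, stresses, and Weibull parameters; the real inputs would come from the EXODUS II stress results.

```python
# Sketch of a weakest-link Weibull failure-probability calculation: each
# element contributes volume * (stress / sigma_0)^m to the risk of rupture,
# and the component failure probability is 1 - exp(-risk). All numbers below
# are hypothetical; FAILPROB's actual algorithms are more elaborate.
import math

def failure_probability(elements, sigma_0: float, m: float) -> float:
    """elements: (volume, max_principal_stress) pairs; sigma_0, m: Weibull params."""
    risk = 0.0
    for volume, stress in elements:
        if stress > 0.0:                  # only tensile stress drives fracture
            risk += volume * (stress / sigma_0) ** m
    return 1.0 - math.exp(-risk)

elems = [(0.01, 80.0), (0.02, 120.0), (0.005, 150.0)]   # made-up element data
print(failure_probability(elems, sigma_0=200.0, m=10.0))
```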
This report documents the strategies for verification and validation of the codes LSP and ICARUS used for simulating the operation of the neutron tubes used in all modern nuclear weapons. The codes will be used to assist in the design of next generation neutron generators and help resolve manufacturing issues for current and future production of neutron devices. Customers for the software are identified, tube phenomena are identified and ranked, software quality strategies are given, and the validation plan is set forth.
The theory and algorithm for the Material Point Method (MPM) are documented, with a detailed discussion on the treatments of boundary conditions and shock wave problems. A step-by-step solution scheme is written based on direct inspection of the two-dimensional MPM code currently used at the University of Missouri-Columbia (which is, in turn, a legacy of the University of New Mexico code). To test the completeness of the solution scheme and to demonstrate certain features of the MPM, a one-dimensional MPM code is programmed to solve one-dimensional wave and impact problems, with both linear elasticity and elastoplasticity models. The advantages and disadvantages of the MPM are investigated as compared with competing mesh-free methods. Based on the current work, future research directions are discussed to better simulate complex physical problems such as impact/contact, localization, crack propagation, penetration, perforation, fragmentation, and interactions among different material phases. In particular, the potential use of a boundary layer to enforce the traction boundary conditions is discussed within the framework of the MPM.
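To convey the flavor of the step-by-step scheme, the sketch below implements a single explicit 1-D MPM time step with linear hat shape functions and an update-stress-last ordering for a linear-elastic material. It is a bare illustration without boundary conditions, not the University of Missouri-Columbia code.

```python
# Minimal one-dimensional MPM time step (explicit, linear hat shape functions,
# update-stress-last), for a linear-elastic material and no boundary conditions.
import numpy as np

def mpm_step(xp, vp, sp, Vp, mp, E, dx, n_nodes, dt):
    mg = np.zeros(n_nodes); pg = np.zeros(n_nodes); fg = np.zeros(n_nodes)
    left = (xp // dx).astype(int)        # left grid node of each particle's cell
    xi = xp / dx - left                  # local coordinate in [0, 1)
    N = np.stack([1 - xi, xi])           # shape functions at the two cell nodes
    dN = np.array([-1.0, 1.0]) / dx      # shape-function gradients
    for k in range(2):                   # particles -> grid: mass, momentum, force
        np.add.at(mg, left + k, N[k] * mp)
        np.add.at(pg, left + k, N[k] * mp * vp)
        np.add.at(fg, left + k, -dN[k] * Vp * sp)       # internal force
    ag = np.divide(fg, mg, out=np.zeros_like(fg), where=mg > 0)
    vg = np.divide(pg, mg, out=np.zeros_like(pg), where=mg > 0) + dt * ag
    for k in range(2):                   # grid -> particles (FLIP-style update)
        vp += dt * N[k] * ag[left + k]
        xp += dt * N[k] * vg[left + k]
    dvdx = dN[0] * vg[left] + dN[1] * vg[left + 1]      # velocity gradient
    sp += dt * E * dvdx                  # update stress last (linear elasticity)
    return xp, vp, sp

# Hypothetical setup: a 25-cell bar, two particles per cell, uniform velocity
dx, n_nodes = 1.0, 26
xp = np.linspace(0.25, 24.75, 50)
vp = 0.1 * np.ones(50); sp = np.zeros(50)
Vp = 0.5 * np.ones(50); mp = 0.5 * np.ones(50)
xp, vp, sp = mpm_step(xp, vp, sp, Vp, mp, E=100.0, dx=dx, n_nodes=n_nodes, dt=0.01)
```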
Nuclear energy has been proposed as a heat source for producing hydrogen from water using a sulfur-iodine thermochemical cycle. This document presents an assessment of the suitability of various reactor types for this application. The basic requirement for the reactor is the delivery of 900 C heat to a process interface heat exchanger. Ideally, the reactor heat source should not in itself present any significant design, safety, operational, or economic issues. This study found that Pressurized and Boiling Water Reactors, Organic-Cooled Reactors, and Gas-Core Reactors were unsuitable for the intended application. Although Alkali Metal-Cooled and Liquid-Core Reactors are possible candidates, they present significant development risks for the required conditions. Heavy Metal-Cooled Reactors and Molten Salt-Cooled Reactors have the potential to meet the requirements; however, the cost and time required for their development may be appreciable. Gas-Cooled Reactors (GCRs) have been successfully operated in the required 900 C coolant temperature range and do not present any obvious design, safety, operational, or economic issues. Altogether, the GCR approach appears to be very well suited as a heat source for the intended application, and no major development work is identified. This study recommends the Gas-Cooled Reactor as the baseline reactor concept for a sulfur-iodine cycle for hydrogen generation.
The blast parameters for the 6-foot diameter by 200-foot long, explosively driven shock tube are presented in this report. The purpose, main characteristics, and blast simulation capabilities of this PETN Primacord, explosively driven facility are included. Experimental data are presented for air and sulfur hexafluoride (SF{sub 6}) test gases with initial pressures between 0.5 and 12.1 psia (ambient); the data include shock wave time of arrival at various test stations, flow duration, static or side-on overpressure, and stagnation or head-on overpressure. The blast parameters calculated from the above measured parameters and presented in this report include shock wave velocity, shock strength, shock Mach number, flow Mach number, reflected pressure, dynamic pressure, particle velocity, density, and temperature. Graphical data for the above parameters are included. Algorithms and least-squares fit equations are also included.
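The reduction from measured arrival times to the tabulated blast parameters rests on the standard normal-shock relations. The Python sketch below evaluates them for a given shock Mach number; the ambient conditions in the example are illustrative, not values from the report.

```python
# Standard normal-shock relations: given the shock Mach number Ms (e.g., from
# time-of-arrival between stations), the ratio of specific heats gamma, and
# the initial state (p1, T1, sound speed a1), compute shock strength and the
# post-shock flow conditions. Example inputs are illustrative assumptions.
def normal_shock(Ms: float, gamma: float, p1: float, T1: float, a1: float):
    g = gamma
    p2_p1 = 1.0 + 2.0 * g / (g + 1.0) * (Ms**2 - 1.0)          # shock strength
    rho2_rho1 = (g + 1.0) * Ms**2 / ((g - 1.0) * Ms**2 + 2.0)  # density ratio
    T2_T1 = p2_p1 / rho2_rho1                                   # ideal gas
    up = a1 * 2.0 / (g + 1.0) * (Ms - 1.0 / Ms)                 # particle velocity
    return {"shock_speed": Ms * a1, "p2": p2_p1 * p1,
            "T2": T2_T1 * T1, "density_ratio": rho2_rho1, "u_particle": up}

# Air near ambient: gamma = 1.4, a1 ~ 1117 ft/s, p1 = 12.1 psia, T1 = 288 K
print(normal_shock(Ms=2.0, gamma=1.4, p1=12.1, T1=288.0, a1=1117.0))
```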
The assembly and packaging of MEMS (Microelectromechanical Systems) devices raise a number of issues over and above those normally associated with the assembly of standard microelectronic circuits. MEMS components include a variety of sensors, microengines, optical components, and other devices. They often have exposed mechanical structures which during assembly require particulate control, free space in the package, non-contact handling procedures, low-stress die attach, precision die placement, unique process schedules, hermetic sealing in controlled environments (including vacuum), and other special constraints. These constraints force changes in the techniques used to separate die on a wafer, in the types of packages which can be used, in the assembly processes and materials, and in the sealing environment and process. This paper discusses a number of these issues and provides information on approaches being taken or proposed to address them.
The Geothermal Research Department at Sandia National Laboratories, in collaboration with Drill Cool Systems Inc., has worked to develop and test insulated drillpipe (IDP). IDP will allow much cooler drilling fluid to reach the bottom of the hole, making possible the use of downhole motors, electronics, and steering tools that are now useless in high-temperature formations. Other advantages of cooler fluid include reduced degradation of drilling fluid, longer bit life, and reduced corrosion rates. This article describes the theoretical background, laboratory testing, and field testing of IDP, including structural and thermal laboratory testing procedures and results. We also give results for a field test in a geothermal well in which circulating temperatures in IDP are compared with those in conventional drillpipe (CDP) at different flow rates. A brief description of the software used to model wellbore temperature and to calculate sensitivity in IDP design differences is included, along with a comparison of calculated and measured wellbore temperatures in the field test. There is also analysis of mixed (IDP and CDP) drillstrings and discussion of where IDP should be placed in a mixed string.
A study was performed on the Sandia Heat Flux Gauge (HFG), developed as a rugged, cost-effective technique for performing steady-state heat flux measurements in the pool fire environment. The technique involved reducing the time-temperature history of a thin metal plate to an incident heat flux via a dynamic thermal model, even though the gauge was originally intended for steady-state use. A validation experiment was presented in which the gauge was exposed to a step input of radiant heat flux.
We consider the steady-state transport of normally incident pencil beams of radiation in slabs of material. A method has been developed for determining the exact radial moments of three-dimensional (3-D) beams of radiation as a function of depth into the slab, by solving systems of one-dimensional (1-D) transport equations. We implement these radial-moment equations in the ONEBFP discrete ordinates code and simulate energy-dependent, coupled electron-photon beams using CEPXS-generated cross sections. Modified PN synthetic acceleration is employed to speed up the iterative convergence of the 1-D charged-particle calculations. For high-energy photon beams, a hybrid Monte Carlo/discrete ordinates method is examined. We demonstrate the efficiency of the calculations and make comparisons with 3-D Monte Carlo calculations. Thus, by solving 1-D transport equations, we obtain realistic multidimensional information concerning the broadening of electron-photon beams. This information is relevant to fields such as industrial radiography, medical imaging, radiation oncology, particle accelerators, and lasers.
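For concreteness, the radial moments in question can be written (in our notation, which may not match the report's normalization) as

    \[
      \langle r^{2n}\rangle(z) \;=\;
      \frac{\int_0^\infty r^{2n}\,\phi(r,z)\,2\pi r\,dr}
           {\int_0^\infty \phi(r,z)\,2\pi r\,dr},
    \]

where \(\phi(r,z)\) is the scalar flux at radius \(r\) and depth \(z\); the approach described above obtains each such moment from a system of 1-D transport equations in \(z\) rather than by discretizing in \(r\).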
A family of transients with the property that the initial and final acceleration, velocity, and displacement are all zero is derived. The transients are based on a relatively arbitrary function multiplied by a window of the form cos^m(x). Several special cases are discussed which result in odd acceleration and displacement functions. This is desirable for shaker reproduction because the required positive and negative peak accelerations and displacements will be balanced. Another special case is discussed which permits the development of transients with the first five (0-4) temporal moments specified. The transients are defined with three or four parameters that allow sums of components to be found which will match a variety of shock response spectra.
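One way to realize the stated endpoint property, sketched below in Python, is to take the displacement itself as an odd carrier multiplied by the cos^m window (m >= 3) and differentiate twice; this is an illustrative construction consistent with the abstract, not necessarily the report's derivation, and the frequency, duration, and exponent are assumed values.

    import numpy as np

    f, T, m = 40.0, 0.5, 5    # carrier frequency (Hz), duration (s), exponent (assumed)
    t = np.linspace(-T / 2, T / 2, 20001)

    # Odd carrier times an even cos^m window; differentiating twice guarantees
    # zero initial/final displacement, velocity, and acceleration for m >= 3.
    disp = np.sin(2 * np.pi * f * t) * np.cos(np.pi * t / T) ** m
    vel = np.gradient(disp, t)
    accel = np.gradient(vel, t)

    for name, x in (("accel", accel), ("vel", vel), ("disp", disp)):
        print(name, x[0], x[-1])    # endpoint values are ~0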
Verification and validation (V & V) in computational fluid dynamics were presented, along with methods and procedures for assessing them. Issues such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction were discussed. Methods for determining the accuracy of numerical solutions were presented, and the importance of software testing during verification activities was emphasized.
The Computational Plant or Cplant is a commodity-based supercomputer under development at Sandia National Laboratories. This paper describes resource-allocation strategies to achieve processor locality for parallel jobs in Cplant and other supercomputers. Users of Cplant and other Sandia supercomputers submit parallel jobs to a job queue. When a job is scheduled to run, it is assigned to a set of processors. To obtain maximum throughput, jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This paper introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional packing. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free-list strategy previously used on Cplant. These new allocation strategies are implemented in the new release of the Cplant System Software, Version 2.0, phased into the Cplant systems at Sandia by May 2002.
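As a hedged sketch of the space-filling-curve idea (Python; a Morton/Z-order curve is used for brevity and is not necessarily the curve Cplant's allocator uses, and the mesh and free list are hypothetical), free processors are ordered along the curve and the k-processor window with the smallest index span is chosen, which tends to produce spatially compact allocations.

    def morton_key(x, y, bits=8):
        """Interleave the bits of (x, y) to get a Z-order curve index."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
        return key

    def allocate(free_nodes, k):
        """Pick k free nodes forming the tightest window along the curve."""
        ordered = sorted(free_nodes, key=lambda p: morton_key(*p))
        if len(ordered) < k:
            return None                      # not enough free processors
        best = min(range(len(ordered) - k + 1),
                   key=lambda i: (morton_key(*ordered[i + k - 1])
                                  - morton_key(*ordered[i])))
        return ordered[best:best + k]

    # Hypothetical 8x8 mesh with roughly a third of the nodes busy
    free = [(x, y) for x in range(8) for y in range(8) if (x + y) % 3]
    print(allocate(free, 4))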
The ABO3 perovskite oxides constitute a technologically important family of ferroelectrics whose relatively simple chemical and crystallographic structures have contributed significantly to our understanding of ferroelectricity. They readily undergo structural phase transitions involving both polar and non-polar distortions from the ideal cubic lattice. This paper focuses on the mixed perovskite system KTa1-xNbxO3, or KTN, which has turned out to be a model system. While the end members KTaO3 and KNbO3 might be expected to be similar, in reality they exhibit very different properties. Their mixed crystals, which can be grown over the whole composition range, exhibit a rich set of phenomena whose study has added greatly to our current understanding of the phase transitions and dielectric properties of these materials. Included among these phenomena are soft mode response, ferroelectric (FE)-to-relaxor (R) crossover, quantum mechanical suppression of the transition, the appearance of a quantum paraelectric state, and relaxational effects associated with dipolar impurities. Each of these phenomena is discussed briefly and illustrated. Some emphasis is placed on the unique role of pressure in elucidating the physics involved.
We consider two fundamental problems in dynamic scheduling: scheduling to meet deadlines in a preemptive multiprocessor setting, and scheduling to provide good response time in a number of scheduling environments. When viewed from the perspective of traditional worst-case analysis, no good on-line algorithms exist for these problems, and for some variants no good off-line algorithms exist unless P = NP. We study these problems using a relaxed notion of competitive analysis, introduced by Kalyanasundaram and Pruhs, in which the on-line algorithm is allowed more resources than the optimal off-line algorithm to which it is compared. Using this approach, we establish that several well-known on-line algorithms that have poor performance from an absolute worst-case perspective are optimal for the problems in question when allowed moderately more resources. For optimization of average flow time, these are the first results of any sort, for any NP-hard version of the problem, indicating that it might be possible to design good approximation algorithms.
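In the resource-augmentation framework referred to above, the comparison is typically stated as follows (a standard formulation; the notation is ours, not necessarily the paper's). An on-line algorithm A is said to be s-speed c-competitive if

    \[
      A_s(I) \le c \cdot \mathrm{OPT}_1(I) \quad \text{for every instance } I,
    \]

where \(A_s(I)\) is the cost incurred by the on-line algorithm when its machines run at speed \(s \ge 1\), and \(\mathrm{OPT}_1(I)\) is the optimal off-line cost on unit-speed machines.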
One of the concerns surrounding composite doubler technology pertains to long-term survivability, especially in the presence of non-optimum installations. This test program demonstrated the damage-tolerance capabilities of bonded composite doublers. The fatigue and strength tests quantified the structural response and crack-abatement capabilities of boron-epoxy doublers in the presence of worst-case flaw scenarios. The engineered flaws included cracks in the parent material, disbonds in the adhesive layer, and impact damage to the composite laminate. Environmental conditions representing temperature and humidity exposure were also included in the coupon tests. Large strains immediately adjacent to the doubler flaws emphasize the fact that relatively large disbond or delamination flaws (up to 1.00 in. in diameter) in the composite doubler have only localized effects on strain and minimal effect on the overall doubler performance (i.e., undesirable strain relief over the disbond but favorable load transfer immediately next to it). This statement is made relative to inspection requirements that result in the detection of disbonds/delaminations of 0.5 in. diameter or greater. The point at which disbonds become detrimental depends upon the size and location of the disbond and the strain field around the doubler. This study did not attempt to determine a "flaw size vs. effect" relation. Rather, it used flaws that were twice as large as the detectable limit to demonstrate the ability of composite doublers to tolerate potential damage.
Two major issues associated with model validation are addressed here. First, we present a maximum likelihood approach to define and evaluate a model validation metric. The advantages of this approach are that it is more easily applied to nonlinear problems than the methods presented earlier by Hills and Trucano (1999, 2001); that it is based on optimization, for which software packages are readily available; and that it can more easily be extended to handle measurement uncertainty and prediction uncertainty with different probability structures. Several examples are presented utilizing this metric. We show conditions under which this approach reduces to the approach developed previously by Hills and Trucano (2001). Second, we expand our earlier discussions (Hills and Trucano, 1999, 2001) on multivariate correlation and its effect on model validation metrics. We show that ignoring correlation in multivariate data can lead to misleading results, such as rejecting a good model when sufficient evidence to do so is not available.
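To illustrate the closing point about correlation, the minimal numerical sketch below (Python; the residuals and covariance are synthetic, not data from the Hills and Trucano studies) evaluates a Mahalanobis-type metric r^T C^{-1} r with and without the off-diagonal covariance terms; ignoring the correlation changes the metric, and hence the apparent model agreement, substantially.

    import numpy as np

    # Synthetic residuals between measurement and prediction (assumed)
    r = np.array([0.9, 1.0, 1.1])

    # Measurement-error covariance with strong positive correlation (assumed)
    sigma, rho = 1.0, 0.9
    full = sigma**2 * (rho * np.ones((3, 3)) + (1 - rho) * np.eye(3))
    diag = np.diag(np.diag(full))      # same variances, correlation ignored

    def metric(res, cov):
        """Mahalanobis-type validation metric r^T C^{-1} r."""
        return float(res @ np.linalg.solve(cov, res))

    print("with correlation:   ", metric(r, full))   # ~1.3
    print("correlation ignored:", metric(r, diag))   # ~3.0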
Collaboration between Sandia National Laboratories and the University of New Mexico Biology Department resulted in the capability to train students in microarray techniques and the interpretation of data from microarray experiments. These studies provide for a better understanding of the role of stationary phase and the gene regulation involved in exit from stationary phase, which may eventually have important clinical implications. Importantly, this research trained numerous students and is the basis for three new Ph.D. projects.
Hyperspectral Fourier transform infrared images have been obtained from a neoprene sample aged in air at elevated temperatures. The massive number of spectra available from this heterogeneous sample provides the opportunity to perform quantitative analysis of the spectral data without the need for calibration standards. Multivariate curve resolution (MCR) methods with non-negativity constraints applied to the iterative alternating least squares analysis of the spectral data have been shown to achieve the goal of quantitative image analysis without the use of standards. However, the pure-component spectra and the relative concentration maps were heavily contaminated by the presence of system artifacts in the spectral data. We have demonstrated that the detrimental effects of these artifacts can be minimized by adding an estimate of the error covariance structure of the spectral image data to the MCR algorithm. The estimate is added by augmenting the concentration and pure-component spectra matrices with scores and eigenvectors obtained from the mean-centered repeat image differences of the sample. The implementation of augmentation is accomplished by employing efficient equality constraints on the MCR analysis. Augmentation with the scores from the repeat images is found to primarily improve the pure-component spectral estimates, while augmentation with the corresponding eigenvectors primarily improves the concentration maps. Augmentation with both scores and eigenvectors yielded the best result by generating less noisy pure-component spectral estimates and relative concentration maps that were largely free from a striping artifact that is present due to system errors in the FT-IR images. The MCR methods presented are general and can also be applied productively to non-image spectral data.
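The core of MCR with non-negativity constraints can be sketched in a few lines (Python; a toy alternating-least-squares loop on synthetic data with simple clipping, whereas production analyses, including the covariance-augmented variant described above, involve considerably more machinery):

    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_chan, n_comp = 200, 64, 2

    # Synthetic non-negative pure spectra and concentration maps plus noise
    S_true = np.abs(rng.standard_normal((n_chan, n_comp)))
    C_true = np.abs(rng.standard_normal((n_pix, n_comp)))
    D = C_true @ S_true.T + 0.01 * rng.standard_normal((n_pix, n_chan))

    # MCR-ALS: alternately solve D ~ C S^T for C and S, clipping negatives
    S = np.abs(rng.standard_normal((n_chan, n_comp)))    # random start
    for _ in range(200):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)

    print("relative residual:", np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))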
One of the major needs of the law enforcement field is a product that quickly, accurately, and inexpensively identifies whether a person has recently fired a gun--even if the suspect has attempted to wash the traces of gunpowder off. The Field Test Kit for Gunshot Residue Identification, based on Sandia National Laboratories technology, works with a wide variety of handguns and other weaponry using gunpowder. Several organic chemicals in small arms propellants--such as nitrocellulose, nitroglycerine, dinitrotoluene, and nitrites--are left behind after the firing of a gun as a result of the incomplete combustion of the gunpowder. Sandia has developed a colorimetric shooter identification kit for in situ detection of gunshot residue (GSR) from a suspect. The test kit is the first of its kind and is small, inexpensive, and easily transported by individual law enforcement personnel, requiring minimal training for effective use. It will provide immediate information identifying gunshot residue.
The Nonactinide Isotopes and Sealed Sources (NISS) Web Application is a web-based database query and data management tool designed to facilitate the identification and reapplication of radioactive sources throughout the Department of Energy (DOE) complex. It provides search capability to the general Internet community and detailed data management functions to contributing site administrators.
The present document summarizes the experimental efforts of a three-year study funded under the Laboratory Directed Research and Development program of Sandia National Laboratories. The Innovative Diagnostics LDRD project was designed to develop new measurement capabilities to examine the interaction of a propulsive spin jet in a transonic freestream for a model in a wind tunnel. The project motivation was the type of jet/fin interactions commonly occurring during deployment of weapon systems. In particular, the two phenomena of interest were the interaction of the propulsive spin jet with the freestream in the vicinity of the nozzle and the impact of the spin rocket plume and its vortices on the downstream fins. The main thrust of the technical developments was to incorporate small-size, Lagrangian sensors for pressure and roll rate on a scale model and to include data acquisition, transmission, and power circuitry onboard. FY01 was the final year of the three-year LDRD project, and the team accomplished most of the project goals, including use of micron-scale pressure sensors, an onboard telemetry system for data acquisition and transfer, onboard jet exhaust, and roll-rate measurements. A new wind tunnel model was designed, fabricated, and tested for the program, incorporating the ability to house multiple MEMS-based pressure sensors, interchangeable vehicle fins with pressure instrumentation, an onboard multiple-channel telemetry data package, and a high-pressure jet exhaust simulating a spin rocket motor plume. Experiments were conducted on a variety of MEMS-based pressure sensors to determine their performance and sensitivity in order to select pressure transducers for use. The most successful data acquisition and analysis path used multiple 16-channel data processors with telemetry to a receiver outside the wind tunnel. The development of the various instrumentation paths led to the fabrication and installation of a new wind tunnel model for baseline non-rotating experiments to validate the durability of the technologies and techniques. The program successfully investigated a wide variety of instrumentation and experimental techniques and ended with basic jet-on experiments, with the onboard jets operating, for both rotating and non-rotating model conditions.
This report summarizes a multiyear effort to establish a new capability for determining dynamic material properties. By utilizing a significant reduction in experimental length and time scales, this new capability addresses both the high per-experiment costs of current methods and the inability of these methods to characterize materials having very small dimensions. Possible applications include bulk-processed materials with minimal dimensions, very scarce or hazardous materials, and materials that can only be made with microscale dimensions. Based on earlier work to develop laser-based techniques for detonating explosives, the current study examined the laser acceleration, or photonic driving, of small metal discs (''flyers'') that can generate controlled, planar shockwaves in test materials upon impact. Sub-nanosecond interferometric diagnostics were developed previously to examine the motion and impact of laser-driven flyers. To address a broad range of materials and stress states, photonic driving levels must be scaled up considerably from the levels used in earlier studies. Higher driving levels, however, increase concerns over laser-induced damage in optics and excessive heating of laser-accelerated materials. Sufficiently high levels require custom beam-shaping optics to ensure planar acceleration of flyers. The present study involved the development and evaluation of photonic driving systems at two driving levels, numerical simulations of flyer acceleration and impact using the CTH hydrodynamics code, design and fabrication of launch assemblies, improvements in diagnostic instrumentation, and validation experiments on both bulk and thin-film materials having well-established shock properties. The primary conclusion is that photonic driving techniques are viable additions to the methods currently used to obtain dynamic material properties. Improvements in launch conditions and diagnostics can certainly be made, but the main challenge to future applications will be the successful design and fabrication of test assemblies for materials of interest.
SERAPHIM technology appears capable of efficiently driving a tip-driven fan. If the motor is powered using an inverter and resonant circuit, its size and weight could be considerably below those of a comparable rotary electric motor.
Recyclable transmission lines (RTL) are studied as a means of repetitively driving z pinches. The lowest reprocessing costs should be obtained by minimizing the mass of the RTL. Low mass transmission lines (LMTL) could also help reduce the cost of a single shot facility such as the proposed X-1 accelerator and make z-pinch driven space propulsion feasible. We present calculations to determine the minimum LMTL electrode mass that provides sufficient inertia against the magnetic pressure produced by the large currents needed to drive the z pinches. The results indicate an electrode thickness which is much smaller than the resistive skin depth. We have performed experiments to determine if such thin electrodes can efficiently carry the required current. The tests were performed with various thicknesses of materials. The results indicate that LMTLs should efficiently carry the large z-pinch currents needed for inertial fusion. We also use our results to estimate the performance of pulsed-power-driven pulsed nuclear rockets.
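The scale of the problem can be seen with a hedged back-of-the-envelope estimate (Python; the current, radius, pulse width, allowed motion, and material are assumed round numbers, not the report's design values): the magnetic pressure at the electrode follows from B = mu0*I/(2*pi*r), and the areal mass needed to keep the electrode's motion small over the pulse follows from impulsive loading.

    import math

    mu0 = 4e-7 * math.pi
    I = 20e6         # drive current, A (assumed)
    r = 0.1          # transmission-line radius, m (assumed)
    tau = 100e-9     # current-pulse duration, s (assumed)
    x_max = 1e-3     # allowable electrode displacement, m (assumed)
    rho = 2700.0     # electrode density, kg/m^3 (aluminum; assumed)

    B = mu0 * I / (2 * math.pi * r)    # azimuthal field at the electrode
    P = B**2 / (2 * mu0)               # magnetic pressure, Pa

    # Impulsive loading: x ~ P tau^2 / (2 m'') for areal mass m'' = rho h
    m_areal = P * tau**2 / (2 * x_max)
    h = m_areal / rho                  # minimum electrode thickness

    print(f"B = {B:.0f} T, P = {P/1e9:.2f} GPa, h = {h*1e6:.2f} microns")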
High-brightness flash x-ray sources are needed for penetrating dynamic radiography for a variety of applications. Various bremsstrahlung source experiments have been conducted on the TriMeV accelerator (3 MV, 60 {Omega}, 20 ns) to determine the best diode and focusing configuration in the 2-3 MV range. Three classes of candidate diodes were examined: gas cell focusing, magnetically immersed, and rod pinch. The best result for the gas cell diode was 6 rad at 1 meter from the source with a 5 mm diameter x-ray spot. Using a 0.5 mm diameter cathode immersed in a 17 T solenoidal magnetic field, the best shot produced 4.1 rad with a 2.9 mm spot. The rod pinch diode demonstrated very reproducible radiographic spots between 0.75 and 0.8 mm in diameter, producing 1.2 rad. This represents a factor-of-eight improvement in the TriMeV flash radiographic capability over the original gas cell diode, to a figure of merit (dose/spot diameter) > 1.8 rad/mm. These results clearly show the rod pinch diode to be the x-ray source of choice for flash radiography at 2-3 MV endpoint energies.
This report describes the development of bulk hydrous titanium oxide (HTO)- and silica-doped hydrous titanium oxide (HTO:Si)-supported Pt catalysts for lean-burn NOx catalyst applications. The effects of various preparation methods, including both anion and cation exchange, and specifically the effect of Na content on the performance of Pt/HTO:Si catalysts, were evaluated. Pt/HTO:Si catalysts with low Na content (< 0.5 wt.%) were found to be very active for NOx reduction in simulated lean-burn exhaust environments utilizing propylene as the major reductant species. The activity and performance of these low Na Pt/HTO:Si catalysts were comparable to supported Pt catalysts prepared using conventional oxide or zeolite supports. Under ramp-down temperature profile test conditions, Pt/HTO:Si catalysts with Na contents in the range of 3-5 wt.% showed a wider temperature window of appreciable NOx conversion than low Na Pt/HTO:Si catalysts. Full reactant species analysis using both ramp-up and isothermal test conditions with the high Na Pt/HTO:Si catalysts, as well as diffuse reflectance FTIR studies, showed that this phenomenon was related to transient NOx storage effects associated with NaNO{sub 2}/NaNO{sub 3} formation. These nitrite/nitrate species were found to decompose and release NOx at temperatures above 300 C in the reaction environment (ramp-up profile). A separate NOx uptake experiment at 275 C in NO/N{sub 2}/O{sub 2} showed that the Na phase was inefficiently utilized for NOx storage. Steady-state tests showed that the effect of increased Na content was to delay NOx light-off and to decrease the maximum NOx conversion. Similar results were observed for high K Pt/HTO:Si catalysts, and the effects of high alkali content were found to be independent of the sample preparation technique. Catalyst characterization (BET surface area, H{sub 2} chemisorption, and transmission electron microscopy) was performed to elucidate differences between the HTO- and HTO:Si-supported Pt catalysts and conventional oxide- or zeolite-supported Pt catalysts.
Sandia National Laboratories has previously developed a unidirectional High Shear Stress Sediment Erosion flume for the US Army Corps of Engineers, Coastal Hydraulics Laboratory. The flow regime for this flume has limited applicability to wave-dominated environments. A significant design modification to the existing flume allows oscillatory flow to be superimposed upon a unidirectional current. The new flume simulates high-shear-stress erosion processes experienced in coastal waters where wave forcing dominates the system. Flow velocity measurements and erosion experiments with known sediment samples were performed with the new flume. In addition, preliminary computational flow models closely simulate the experimental results and allow for a detailed assessment of the induced shear stresses at the sediment surface.
The Department of Energy (DOE) is moving towards Long-Term Stewardship (LTS) of many environmental restoration sites that cannot be released for unrestricted use. One aspect of information management for LTS is geospatial data archiving. This report discusses the challenges facing the DOE LTS program concerning the management and archiving of geospatial data. It discusses challenges in using electronic media for archiving, overcoming technological obsolescence, data refreshing, data migration, and emulation. It gives an overview of existing guidance and policy and discusses what the United States Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), and the Federal Emergency Management Agency (FEMA) are doing to archive the geospatial data for which their agencies are responsible. In conclusion, this report presents issues for further discussion around long-term geospatial data archiving.
Solar Two was a collaborative, cost-shared project between 11 U.S. industry and utility partners and the U.S. Department of Energy to validate molten-salt power tower technology. The Solar Two plant, located east of Barstow, CA, comprised 1926 heliostats, a receiver, a thermal storage system, a steam generation system, and a steam-turbine power block. Molten nitrate salt was used as the heat transfer fluid and storage medium. The steam generator powered a 10-MWe (megawatt electric), conventional Rankine cycle turbine. Solar Two operated from June 1996 to April 1999. The major objective of the test and evaluation phase of the project was to validate the technical characteristics of a molten-salt power tower. This report describes the significant results from the test and evaluation activities, the operating experience of each major system, and overall plant performance. Tests were conducted to measure the power output (MW) of each major system; the efficiencies of the heliostat, receiver, thermal storage, and electric power generation systems; and the daily energy collected, daily thermal-to-electric conversion, and daily parasitic energy consumption. Detailed test and evaluation reports are also included.
This document provides a guide to the deployment of the software verification activities, software engineering practices, and project management principles that guide the development of Accelerated Strategic Computing Initiative (ASCI) applications software at Sandia National Laboratories (Sandia). The goal of this document is to identify practices and activities that will foster the development of reliable and trusted products produced by the ASCI Applications program. Document contents include an explanation of the structure and purpose of the ASCI Quality Management Council, an overview of the software development lifecycle, an outline of the practices and activities that should be followed, and an assessment tool. These sections map practices and activities at Sandia to the ASCI Software Quality Engineering: Goals, Principles, and Guidelines, a Department of Energy document.
Water shortages affect 88 developing countries that are home to half of the world's population. In these places, 80-90% of all diseases and 30% of all deaths result from poor water quality. Furthermore, over the next 25 years, the number of people affected by severe water shortages is expected to increase fourfold. Low-cost methods of purifying freshwater and desalting seawater are required to contend with this destabilizing trend. Membrane distillation (MD) is an emerging technology for separations that are traditionally accomplished via conventional distillation or reverse osmosis. As applied to desalination, MD involves the transport of water vapor from a saline solution through the pores of a hydrophobic membrane. In sweeping gas MD, a flowing gas stream is used to flush the water vapor from the permeate side of the membrane, thereby maintaining the vapor pressure gradient necessary for mass transfer. Since liquid does not penetrate the hydrophobic membrane, dissolved ions are completely rejected by the membrane. MD has a number of potential advantages over conventional desalination, including low temperature and pressure operation, reduced membrane strength requirements, compact size, and 100% rejection of non-volatiles. The present work evaluated the suitability of commercially available technology for sweeping gas membrane desalination. Evaluations were conducted with Celgard Liqui-Cel{reg_sign} Extra-Flow 2.5X8 membrane contactors with X-30 and X-40 hydrophobic hollow fiber membranes. Our results show that sweeping gas membrane desalination systems are capable of producing low total dissolved solids (TDS) water, typically 10 ppm or less, from seawater, using low-grade heat. However, there are several barriers that currently prevent sweeping gas MD from being a viable desalination technology. The primary problem is that large air flows are required to achieve significant water yields, and the costs associated with transporting this air are prohibitive. To overcome this barrier, at least two improvements are required. First, new and different contactor geometries are necessary to achieve efficient contact with an extremely low pressure drop. Second, the temperature limits of the membranes must be increased. In the absence of these improvements, sweeping gas MD will not be economically competitive. However, the membranes may still find use in hybrid desalination systems.
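The air-flow barrier noted above is easy to quantify with a hedged estimate (Python; the exit temperature is an assumed operating point and the Antoine coefficients are the standard ones for water between 1 and 100 C): even if the sweep gas leaves saturated, several kilograms of air must be moved per kilogram of permeate, and the ratio worsens rapidly at lower temperatures.

    # Antoine equation for water, valid ~1-100 C (pressure in mmHg)
    T_c = 60.0                   # sweep-gas exit temperature, deg C (assumed)
    P_tot = 760.0                # total pressure, mmHg
    p_v = 10 ** (8.07131 - 1730.63 / (233.426 + T_c))

    # Saturation humidity ratio: kg of water vapor per kg of dry air
    w = 0.622 * p_v / (P_tot - p_v)

    print(f"p_v = {p_v:.0f} mmHg; ~{1 / w:.1f} kg of air per kg of water")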
The use of oxidized metal powders in mechanical shock or crush safety enhancers in nuclear weapons has been investigated. The functioning of these devices is based on the remarkable electrical behavior of compacts of certain oxidized metal powders when subjected to compressive stress. For example, the low-voltage resistivity of a compact of oxidized tantalum powder was found to decrease by over six orders of magnitude as the compaction stress increased from 1 MPa, where the thin, insulating oxide coatings on the particles are intact, to 10 MPa, where the oxide coatings have broken down along a chain of particles spanning the electrodes. In this work, the behavior of tantalum and aluminum powders was investigated. The low-voltage resistivity during compaction of powders oxidized under various conditions was measured and compared. In addition, the resistivity at higher voltages and the dielectric breakdown strength during compaction were also measured. A key finding was that significant changes in the electrical properties persist after the removal of the stress, so that a mechanical shock enhancer is feasible. This was verified by preliminary shock experiments. Finally, conceptual designs for both types of enhancers are presented.
Preliminary thermal decomposition experiments with Ablefoam and EF-AR20 foam (Ablefoam replacement) were done to determine the important chemical and associated physical phenomena that should be investigated to develop the foam decomposition chemistry sub-models that are required in numerical simulations of the fire-induced response of foam-filled engineered systems for nuclear safety applications. Although the two epoxy foams are physically and chemically similar, the thermal decomposition of each foam involves different chemical mechanisms, and the associated physical behavior of the foams, particularly ''foaming'' and ''liquefaction,'' have significant implications for modeling. A simplified decomposition chemistry sub-model is suggested that, subject to certain caveats, may be appropriate for ''scoping-type'' calculations.
This report presents modeling and simulation work for analyzing three designs of Micro Electro Mechanical (MEM) Compound Pivot Mirrors (CPM). These CPMs were made at Sandia National Laboratories using the SUMMiT{trademark} process. At 75 volts and above, initial experimental analysis of fabricated mirrors showed tilt angles of up to 7.5 degrees for one design and 5 degrees for the other two. Geometric design models, however, predicted higher tilt angles. Therefore, a detailed study was conducted to explain why lower tilt angles occurred and to determine whether design modifications could produce higher tilt angles at lower voltages. This study showed that the spring stiffnesses of the CPMs were too great to allow the desired levels of rotation at lower voltages. A redesign is needed to achieve lower stiffnesses.
The semiconductor bridge (SCB) is an electroexplosive device used to initiate detonators. A C cable is commonly used to connect the SCB to a firing set. A series of tests were performed to identify smaller, lighter cables for firing single and multiple SCBs. This report provides a description of these tests and their results. It was demonstrated that lower threshold voltages and faster firing times can be achieved by increasing the wire size, which reduces ohmic losses. The RF 100 appears to be a reasonable substitute for C cable when firing single SCBs. This would reduce the cable volume by 68% and the weight by 67% while increasing the threshold voltage by only 22%. In general, RG 58 outperforms twisted pair when firing multiple SCBs in parallel. The RG 58's superior performance is attributed to its larger conductor size.
Chemometric analysis of nuclear magnetic resonance (NMR) spectroscopic data has increased dramatically in recent years. Various chemometric techniques have been applied to a wide range of problems in food, agricultural, medical, process, and industrial systems. This article gives a brief review of chemometric analysis of NMR spectral data, including a summary of the types of mixtures and experiments analyzed with chemometric techniques. Common experimental problems encountered during the chemometric analysis of NMR data are also discussed.
Stiction and friction in micromachines are commonly inhibited through the use of silane coupling agents such as 1H,1H,2H,2H-perfluorodecyltrichlorosilane (FDTS). FDTS coatings have allowed micromachine parts processed in water to be released without debilitating capillary adhesion. These coatings are frequently considered to be densely packed monolayers, well bonded to the substrate. In this paper, it is demonstrated that FDTS coatings can exhibit complex nanoscale structures, which control whether micromachine parts release or not. Surface images obtained via atomic force microscopy reveal that FDTS coating solutions can generate micellar aggregates that deposit on substrate surfaces. Interferometric imaging of model beam structures shows that stiction is high when the droplets are present and low when only monolayers are deposited. As the aggregate thickness (tens of nanometers) is insufficient to bridge the 2 μm gap under the beams, the aggregates appear to promote beam-substrate adhesion by changing the wetting characteristics of coated surfaces. Contact angle measurements and condensation figure experiments have been performed on surfaces and under coated beams to quantify the changes in interfacial properties that accompany different coating structures. These results may explain the irreproducibility that is often observed with these films.
A DOE/Sandia project termed the Blade Manufacturing Program was established at Sandia to develop means of advancing manufacturing processes in ways that lower costs and improve the reliability of turbine blades. Through industry contracts, manufacturers are improving processes such as resin infusion, resin transfer molding, and thermoplastic casting. Testing and modeling research at universities and national labs is adding to the knowledge of how composite materials perform in substructures and sub-scale blades as a function of their fabrication process.
Optimal estimation theory has been applied to the problem of estimating process variables during vacuum arc remelting (VAR), a process widely used in the specialty metals industry to cast large ingots of segregation sensitive and/or reactive metal alloys. Four state variables were used to develop a simple state-space model of the VAR process: electrode gap (G), electrode mass (M), electrode position (X) and electrode melting rate (R). The optimal estimator consists of a Kalman filter that incorporates the model and uses electrode feed rate and measurement based estimates of G, M and X to produce optimal estimates of all four state variables. Simulations show that the filter provides estimates that have error variances between one and three orders-of-magnitude less than estimates based solely on measurements. Examples are presented that verify this for electrode gap, an extremely important control parameter for the process.
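A hedged sketch of such an estimator is given below (Python; the linear process model, the constant relating melted mass to electrode length, and all noise levels are illustrative stand-ins, not the published VAR model): a standard Kalman predict/update cycle over the state [G, M, X, R], with the feed rate as the control input and G, M, X as the measurements.

    import numpy as np

    dt = 1.0      # update interval, s (assumed)
    k = 0.02      # electrode length consumed per unit mass melted, m/kg (assumed)

    # State: [gap G (m), electrode mass M (kg), position X (m), melt rate R (kg/s)]
    F = np.array([[1, 0, 0, k * dt],     # melting opens the gap
                  [0, 1, 0, -dt],        # melting consumes electrode mass
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])         # melt rate modeled as a random walk
    B = np.array([[-dt], [0.0], [dt], [0.0]])  # electrode feed closes the gap
    H = np.eye(3, 4)                     # G, M, X are measured; R is not

    Q = np.diag([1e-8, 1e-4, 1e-8, 1e-6])   # process noise (assumed)
    Rm = np.diag([1e-6, 1e-2, 1e-6])        # measurement noise (assumed)

    x, P = np.array([0.01, 1000.0, 0.0, 0.05]), np.eye(4)

    def kf_step(x, P, u, z):
        """One Kalman predict/update cycle."""
        x = F @ x + (B * u).ravel()          # predict with feed-rate input u
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)
        x = x + K @ (z - H @ x)              # update with measured G, M, X
        P = (np.eye(4) - K @ H) @ P
        return x, P

    # One illustrative step: feed at 1 mm/s with noisy gap/mass/position readings
    x, P = kf_step(x, P, u=0.001, z=np.array([0.011, 999.9, 0.001]))
    print("estimated melt rate:", x[3], "kg/s")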
Direct Simulation Monte Carlo (DSMC) and Navier-Stokes calculations are performed for a Mach 11, 25 deg.-55 deg. spherically blunted biconic. The conditions are such that the flow is laminar, with separation occurring at the cone-cone juncture. The simulations account for thermochemical nonequilibrium based on standard Arrhenius chemical rates for nitrogen dissociation and Millikan and White vibrational relaxation. The simulation error for the Navier-Stokes (NS) code is estimated to be 2% for the surface pressure and 10% for the surface heat flux. The grid spacing for the DSMC simulations was adjusted to be less than the local mean free path (mfp), and the time step to be less than the transit time of a computational particle across a cell. There was overall good agreement between the two simulations; however, the recirculation zone was computed to be larger in the NS simulation. A sensitivity study is performed to examine the effects of experimental uncertainty in the freestream properties on the surface pressure and heat flux distributions. The surface quantities are found to be extremely sensitive to the vibrational excitation state of the gas at the test section, with differences of 25% found in the surface pressure and 25%-35% in the surface heat flux. These calculations are part of a blind validation comparison, and thus the experimental data have not yet been released.
Simulations of a turbulent methanol pool fire are conducted using both Reynolds-Averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) modeling methodologies. Two simple conserved-scalar, flamelet-based combustion models with assumed PDF are developed and implemented. The first model assumes statistical independence between mixture fraction and its variance and results in poor predictions of time-averaged temperature and velocity. The second combustion model makes use of the PDF transport equation for mixture fraction and does not employ the statistical independence assumption. Results using this model show good agreement with experimental data for both the 2D and 3D LES, indicating that the assumption of statistical independence between mixture fraction and its dissipation is not valid for pool fire simulations. Lastly, "finger-like" flow structures near the base of the plume, generated from stream-wise vorticity, are shown to be important mixing mechanisms for accurate prediction of time-averaged temperature and velocity.
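The presumed-PDF construction can be illustrated compactly (Python; the beta distribution is the usual presumed shape, but the piecewise-linear flamelet temperature profile and all numbers below are illustrative assumptions, not the models implemented in the paper): the mean temperature is the flamelet profile integrated against a beta PDF parameterized by the mixture-fraction mean and variance.

    import numpy as np
    from scipy.stats import beta as beta_dist

    def T_flamelet(z, z_st=0.13, T_air=300.0, T_fuel=300.0, T_ad=2200.0):
        """Toy Burke-Schumann-like profile peaking at stoichiometric Z."""
        return np.where(z < z_st,
                        T_air + (T_ad - T_air) * z / z_st,
                        T_ad + (T_fuel - T_ad) * (z - z_st) / (1 - z_st))

    def mean_temperature(z_mean, z_var):
        """Integrate the flamelet profile against a presumed beta PDF."""
        factor = z_mean * (1 - z_mean) / z_var - 1.0   # beta shape from moments
        a, b = z_mean * factor, (1 - z_mean) * factor
        z = np.linspace(1e-6, 1 - 1e-6, 2001)
        pdf = beta_dist.pdf(z, a, b)
        return np.trapz(T_flamelet(z) * pdf, z) / np.trapz(pdf, z)

    # Fluctuations pull the mean well below the adiabatic flame temperature
    print(mean_temperature(0.13, 0.005))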
The concept of genetic divisors can be given a quantitative measure with a non-Archimedean p-adic metric that is both computationally convenient and physically motivated. For two particles possessing distinct mass parameters x and y, the metric distance D(x, y) is expressed on the field of rational numbers Q as the inverse of the greatest common divisor [gcd (x , y)]. As a measure of genetic similarity, this metric can be applied to (1) the mass numbers of particle states and (2) the corresponding subgroup orders of these systems. The use of the Bezout identity in the form of a congruence for the expression of the gcd (x , y) corresponding to the v{sub e} and v{sub {mu}} neutrinos (a) connects the genetic divisor concept to the cosmic seesaw congruence, (b) provides support for the {delta}-conjecture concerning the subgroup structure of particle states, and (c) quantitatively strengthens the interlocking relationships joining the values of the prospectively derived (i) electron neutrino (v{sub e}) mass (0.808 meV), (ii) muon neutrino (v{sub {mu}}) mass (27.68 meV), and (iii) unified strong-electroweak coupling constant ({alpha}*{sup -1} = 34.26).
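The metric itself is trivial to compute (Python; the operands below are arbitrary small integers chosen for illustration, not the mass parameters discussed above):

    from math import gcd
    from fractions import Fraction

    def D(x, y):
        """Genetic-divisor distance: the inverse of the greatest common divisor."""
        return Fraction(1, gcd(x, y))

    print(D(840, 1260))   # gcd = 420, so the pair is 'genetically close': 1/420
    print(D(840, 11))     # coprime, so maximally distant: 1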
Alkylation reactions of benzene with propylene over zeolites were studied for cumene production. The current process for the production of cumene involves heating corrosive acid catalysts, cooling, transporting, and distillation. This study focused on carrying out the reaction in a static one-pot vessel using non-corrosive zeolite catalysts, working towards a more efficient one-step process with a potentially large energy savings. A series of experiments was conducted to find the reaction conditions yielding the highest production of cumene. The experiments compared cumene formation in two reaction vessels with different physical traits. Different zeolites, temperatures, mixing speeds, and amounts of reactants were also investigated to determine their effects on the amount of cumene produced. Quantitative analysis of the product mixture was performed by gas chromatography. Mass spectrometry was also utilized to observe the gas-phase components during the alkylation process.
The ultimate goal of many environmental measurements is to determine the risk posed to humans or ecosystems by various contaminants. Conventional environmental monitoring typically requires extensive sampling grids covering several media including air, water, soil and vegetation. A far more efficient, innovative and inexpensive tactic has been found using honeybees as sampling mechanisms. Members from a single bee colony forage over large areas ({approx}2 x 10{sup 6} m{sup 2}), making tens of thousands of trips per day, and return to a fixed location where sampling can be conveniently conducted. The bees are in direct contact with the air, water, soil and vegetation where they encounter and collect any contaminants that are present in gaseous, liquid and particulate form. The monitoring of honeybees when they return to the hive provides a rapid method to assess chemical distributions and impacts (1). The primary goal of this technology is to evaluate the efficiency of the transport mechanism (honeybees) to the hive using preconcentrators to collect samples. Once the extent and nature of the contaminant exposure has been characterized, resources can be distributed and environmental monitoring designs efficiently directed to the most appropriate locations. Methyl salicylate, a chemical agent surrogate was used as the target compound in this study.
A method was developed to use intense magnetic pressure to launch flyer plates to velocities in excess of 20 km/s. This technique was used to perform plate-impact, shock wave experiments on cryogenic liquid deuterium (LD{sub 2}) to examine its high-pressure equation of state (EOS). Using an impedance-matching method, Hugoniot measurements were obtained in the pressure range of 30-70 GPa. The results of these experiments disagree with previously reported Hugoniot measurements of LD{sub 2} at pressures above {approx}40 GPa, but are in good agreement with first-principles, ab initio models for hydrogen and its isotopes.
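For readers unfamiliar with the impedance-matching construction, the sketch below (Python; the Hugoniot coefficients and impact velocity are illustrative round numbers, not the measured values, and the flyer release is approximated by its reflected Hugoniot, a standard simplification) finds the interface state where the flyer's reflected Hugoniot, centered at the measured flyer velocity, crosses the sample Hugoniot P = rho0*(c0 + s*u)*u.

    from scipy.optimize import brentq

    def hugoniot_P(rho0, c0, s, u):
        """Principal Hugoniot pressure (GPa); rho0 in g/cc, c0 and u in km/s."""
        return rho0 * (c0 + s * u) * u

    flyer = dict(rho0=2.70, c0=5.35, s=1.34)   # aluminum-like flyer (assumed)
    sample = dict(rho0=0.17, c0=1.9, s=1.4)    # LD2-like sample (assumed)
    v_imp = 20.0                               # flyer velocity, km/s (assumed)

    # Match: reflected flyer Hugoniot equals the sample Hugoniot at the interface
    f = lambda u: hugoniot_P(**flyer, u=v_imp - u) - hugoniot_P(**sample, u=u)
    u_match = brentq(f, 1e-6, v_imp)
    print(f"u = {u_match:.2f} km/s, P = {hugoniot_P(**sample, u=u_match):.0f} GPa")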
Sandstones that overlie or that are interbedded with evaporitic or other ductile strata commonly contain numerous localized domains of fractures, each covering an area of a few square miles. Fractures within the Entrada Sandstone at the Salt Valley Anticline are associated with salt mobility within the underlying Paradox Formation. The fracture relationships observed at Salt Valley (along with examples from Paleozoic strata at the southern edge of the Holbrook basin in northeastern Arizona, and sandstones of the Frontier Formation along the western edge of the Green River basin in southwestern Wyoming), show that although each fracture domain may contain consistently oriented fractures, the orientations and patterns of the fractures vary considerably from domain to domain. Most of the fracture patterns in the brittle sandstones are related to local stresses created by subtle, irregular flexures resulting from mobility of the associated, interbedded ductile strata (halite or shale). Sequential episodes of evaporite dissolution and/or mobility in different directions can result in multiple, superimposed fracture sets in the associated sandstones. Multiple sets of superimposed fractures create reservoir-quality fracture interconnectivity within restricted localities of a formation. However, it is difficult to predict the orientations and characteristics of this type of fracturing in the subsurface. This is primarily because the orientations and characteristics of these fractures typically have little relationship to the regional tectonic stresses that might be used to predict fracture characteristics prior to drilling. Nevertheless, the high probability of numerous, intersecting fractures in such settings attests to the importance of determining fracture orientations in these types of fractured reservoirs.
Carbon is an important support for heterogeneous catalysts, such as platinum supported on activated carbon (AC). An important property of these catalysts is that they decompose upon heating in air. Consequently, Pt/AC catalysts can be used in applications requiring rapid decomposition of a material, leaving little residue. This report describes the catalytic effects of platinum on carbon decomposition in an attempt to maximize decomposition rates. Catalysts were prepared by impregnating the AC with two different Pt precursors, Pt(NH{sub 3}){sub 4}(NO{sub 3}){sub 2} and H{sub 2}PtCl{sub 6}. Some catalysts were treated in flowing N{sub 2} or H{sub 2} at elevated temperatures to decompose the Pt precursor. The catalysts were analyzed for weight loss in air at temperatures ranging from 375 to 450 C, using thermogravimetric analysis (TGA). The following results were obtained: (1) Pt/AC decomposes much faster than pure carbon; (2) treatment of the as-prepared 1% Pt/AC samples in N{sub 2} or H{sub 2} enhances decomposition; (3) autocatalytic behavior is observed for 1% Pt/AC samples at temperatures {ge} 425 C; (4) oxygen is needed for decomposition to occur. Overall, the Pt/AC catalyst with the highest activity was impregnated with H{sub 2}PtCl{sub 6} dissolved in acetone, and then treated in H{sub 2}. However, further research and development should produce a more active Pt/AC material.
The Microsystems Subgrid Physics project is intended to address gaps between developing high-performance modeling and simulation capabilities and microdomain specific physics. The initial effort has focused on incorporating electrostatic excitations, adhesive surface interactions, and scale dependent material and thermal properties into existing modeling capabilities. Developments related to each of these efforts are summarized, and sample applications are presented. While detailed models of the relevant physics are still being developed, a general modeling framework is emerging that can be extended to incorporate evolving material and surface interaction modules.
Recently an innovative technique known as the Isentropic Compression Experiment (ICE) was developed that allows the dynamic compressibility curve of a material to be measured in a single experiment. Hence, ICE significantly reduces the cost and time required for generating and validating theoretical models of dynamic material response. ICE has been successfully demonstrated on several materials using the 20 MA Z accelerator, resulting in a large demand for its use. The present project has demonstrated its use on another accelerator, Saturn. In the course of this study, Saturn was tailored to produce a satisfactory drive time structure, and instrumented to produce velocity data. Pressure limits are observed to be approximately 10-15 GPa (''LP'' configuration) or 40-50 GPa (''HP'' configuration), depending on sample material. Drive reproducibility (panel to panel within a shot and between shots) is adequate for useful experimentation, but alignment fixturing problems make it difficult to achieve the same precision as is possible at Z. Other highlights included the useful comparison of slightly different PZT and ALOX compositions (neutron generator materials), temperature measurement using optical pyrometry, and the development of a new technique for preheating samples. 28 ICE tests have been conducted at Saturn to date, including the experiments described herein.
Sandia is investigating the shock response of single-crystal diamond up to several Mbar pressure in a collaborative effort with the Institute for Shock Physics (ISP) at Washington State University (WSU). This project is intended to determine (i) the usefulness of diamond as a window material for high pressure velocity interferometry measurements, (ii) the maximum stress level at which diamond remains transparent in the visible region, (iii) whether a two-wave structure can be detected and analyzed, and if so, (iv) the Hugoniot elastic limit (HEL) for the [110] orientation of diamond. To this end, experiments have been designed and performed to scope the shock response of diamond in the 2-3 Mbar pressure range using conventional velocity interferometry techniques (conventional VISAR diagnostic). To perform more detailed and highly resolved measurements, an improved line-imaging VISAR has been developed, and experiments using this technique have been designed. Prior to performing these more detailed experiments, additional scoping experiments are being performed using conventional techniques at WSU to refine the experimental design.
Explosive charges placed on the fuze end of a drained chemical munition are expected to be used as a means to destroy the fuze and burster charges of the munition. Analyses are presented to evaluate the effect of these additional initiation charges on the fragmentation characteristics for the M121A1 155mm chemical munition, modeled with a T244 fuze attached, and to assess the consequences of these fragment impacts on the walls of a containment chamber--the Burster Detonation Vessel. A numerical shock physics code (CTH) is used to characterize the mass and velocity of munition fragments. Both two- and three-dimensional simulations of the munition have been completed in this study. Based on threshold fragment velocity/mass results drawn from both previous and current analyses, it is determined that under all fragment impact conditions from the munition configurations considered in this study, no perforation of the inner chamber wall will occur, and the integrity of the Burster Detonation Vessel is retained. However, the munition case fragments have sufficient mass and velocity to locally damage the surface of the inner wall of the containment vessel.
This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases.
The Geometric Search Engine is a software system for storing and searching a database of geometric models. The database may be searched for modeled objects similar in shape to a target model supplied by the user. The database models are generally derived from CAD models, while the target model may be either a CAD model or a model generated from range data collected from a physical object. This document describes key generation, database layout, and search of the database.
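One common way to build such shape keys, sketched below, is the D2 shape distribution from the literature (Python; this is a standard technique offered for illustration and is not necessarily the key generation the Geometric Search Engine actually uses): histogram the distances between random point pairs on each model and rank stored models by histogram distance.

    import numpy as np

    def d2_signature(points, n_pairs=20000, bins=32, seed=0):
        """D2 shape distribution: histogram of random pairwise distances."""
        rng = np.random.default_rng(seed)
        i = rng.integers(0, len(points), n_pairs)
        j = rng.integers(0, len(points), n_pairs)
        d = np.linalg.norm(points[i] - points[j], axis=1)
        hist, _ = np.histogram(d / d.max(), bins=bins)   # scale-normalized
        return hist / hist.sum()

    def search(database, target_sig):
        """Rank stored models by L1 distance between signatures."""
        return sorted(database, key=lambda m: np.abs(m["sig"] - target_sig).sum())

    # Hypothetical database: a unit-cube cloud and a unit-sphere cloud
    rng = np.random.default_rng(1)
    cube = rng.random((2000, 3))
    sphere = rng.standard_normal((2000, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
    db = [{"name": "cube", "sig": d2_signature(cube)},
          {"name": "sphere", "sig": d2_signature(sphere)}]
    print(search(db, d2_signature(cube))[0]["name"])     # -> cube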
This report provides a summary of the work completed in the Source Code Assurance Tool project. This work was done as part of the Laboratory Directed Research and Development program.
This report provides a preliminary functional description of a novel software application, the Source Code Assurance Tool, which would assist a system analyst in the software assessment process. An overview is given of the tool's functionality and design, and of how the analyst would use it to assess a body of source code. This work was done as part of a Laboratory Directed Research and Development project.
In this paper we describe a new language, Visual Structure Language (VSL), designed to describe the structure of a program and explain its pieces. This new language is built on top of a general-purpose language, such as C. The language consists of three extensions: explanations, nesting, and arcs. Explanations are comments explicitly associated with code segments. These explanations can be nested, and arcs can be inserted between explanations to show data- or control-flow. The value of VSL is that it enables a developer to better control a code. The developer can represent the structure via nested explanations, using arcs to indicate the flow of data and control. The explanations provide a ''second opinion'' about the code so that, at any level, the developer can confirm that the code operates as it is intended to. We believe that VSL enables a programmer to use in a computer language the same model--a hierarchy of components--that they use in their heads when they conceptualize systems.
We present the tool we built as part of a Laboratory Directed Research and Development (LDRD) project. This tool consists of a commercially-available, graphical editor front-end, combined with a back end ''slicer.'' The significance of the tool is that it shows how to slice across system components. This is an advance from slicing across program components.
This report details experimental data useful in validating radiative transfer codes involving participating media, particularly for cases involving combustion. Special emphasis is on data for pool fires. Features sought in the references are: Flame geometry and fuel that approximate conditions for a pool fire or a well-defined flame geometry and characteristics that can be completely modeled; detailed information that could be used as code input data, including species concentration and temperature profiles and associated absorption coefficients, soot morphology and concentration profiles, associated scattering coefficients and phase functions, specification of system geometry, and system boundary conditions; detailed information that could be compared against code output predictions, including measured boundary radiative energy flux distributions (preferably spectral) and/or boundary temperature distributions; and a careful experimental error analysis so that code predictions could be rationally compared with experimental measurements. Reference data were gathered from more than 35 persons known to be active in the field of radiative transfer and combustion, particularly in experimental work. A literature search was carried out using key words. Additionally, the reference lists in papers/reports were pursued for additional leads. The report presents extended abstracts of the cited references, with comments on available and missing data for code validation, and comments on reported error. A graphic for quick reference is added to each abstract that indicates the completeness of data and how well the data mimics a large-scale pool fire. The references are organized into Lab-Scale Pool Fires, Large-Scale Pool Fires, Momentum-Driven Diffusion Flames, and Enclosure Fires. As an additional aid to report users, the Tables in Appendix A show the types of data included in each reference. The organization of the tables follows that used for the abstracts.
Sandia National Laboratories is developing innovative alternative technology to replace open burn/open detonation (OB/OD) operations for the destruction and disposal of obsolete, excess, and off-spec energetic materials. Alternatives to OB/OD are necessary to comply with increasingly stringent regulations. This program is developing an alternative technology that destroys energetic materials using organic amines with minimal discharge of toxic chemicals to the environment, and it is defining the application of the by-products to the manufacture of structural materials.
Wire explosion experiments have been carried out at the University of Nevada, Reno. These experiments investigated the explosion phase of wires with properties and current-driving conditions comparable to those used in the initial stage of wire array z-pinch implosions on the Z machine at Sandia National Laboratories. Specifically, current pulses similar to and faster than the pre-pulse current on Z (the current prior to the fast rise in the current pulse) were applied to single wire loads to study wire heating and the early development of plasmas in the wire initiation process. Understanding such issues is important to larger pulsed power machines that implode cylindrical wire array loads comprised of many wires. It is thought that the topology of an array prior to its acceleration influences the implosion and final stagnation properties, which therefore may depend on the initiation phase of the wires. Single wires ranging from 4 to 40 {micro}m in diameter and comprised of materials ranging from Al to W were investigated. Several diagnostics were employed to determine wire current, voltage, and total emitted-light energy and power, along with the wire expansion velocity throughout the explosion. In a number of cases, the explosion process was also observed with x-ray backlighting using x-pinches. The experimental data indicate that the characteristics of a wire explosion depend dramatically on the rate of rise of the current, on the diameter of the wire, and on the heat of vaporization of the wire material. In this report, these characteristics are described in detail. Of particular interest is the result that a faster current rise produces a higher energy deposition into the wire prior to explosion. This result introduces a different means of increasing the efficiency of wire heating. In this case, the energy deposition along the wire, and its subsequent expansion, is uniform compared to a ''slow'' current rise (170 A/ns compared to 22 A/ns current rise into a short circuit), and the expansion velocity is larger. The energy deposition and wire expansion are further modified by the wire diameter and material. Investigations of wire diameter indicate that the diameter primarily affects the expansion velocity and energy deposition; thicker wires explode with greater velocities but absorb less energy per atom. The heat of vaporization also categorizes the wire explosion; wires with a low heat of vaporization expand faster and emit less radiation than their high heat of vaporization counterparts.
An important capability in conducting underground nuclear tests is to be able to determine the nuclear test yield accurately within hours after a test. Due to a nuclear test moratorium, the seismic method that has been used in the past has not been exercised since a non-proliferation high explosive test in 1993. Since that time, the seismic recording system and the computing environment have been replaced with modern equipment. This report describes the actions that have been taken to preserve the capability for determining seismic yield, in the event that nuclear testing should resume. Specifically, this report describes actions taken to preserve seismic data, actions taken to modernize software, and actions taken to document procedures. It concludes with a summary of the current state of the data system and makes recommendations for maintaining this system in the future.
This report describes testing of prototype InfiniBand{trademark} host channel adapters from Intel Corporation using the Linux(reg sign) operating system. Three generations of prototype hardware were obtained, and Linux device drivers were written to exercise the data movement capabilities of the cards. The latency and throughput results obtained were similar to those of other SAN technologies, but not significantly better.
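For readers unfamiliar with how such figures are obtained, the Python sketch below shows the ping-pong pattern commonly used to measure round-trip latency and throughput. Ordinary sockets stand in for the prototype drivers, which are not public; it assumes a hypothetical echo server at the given host and port, and the message size and iteration count are arbitrary.

import socket
import time

def pingpong(host, port, msg_size=4096, iters=1000):
    """Measure mean round-trip latency and two-way throughput against
    an echo server that returns every byte it receives."""
    payload = b"x" * msg_size
    with socket.create_connection((host, port)) as s:
        start = time.perf_counter()
        for _ in range(iters):
            s.sendall(payload)
            received = 0
            while received < msg_size:
                chunk = s.recv(msg_size - received)
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                received += len(chunk)
        elapsed = time.perf_counter() - start
    print(f"mean round-trip latency: {elapsed / iters * 1e6:.1f} us")
    print(f"throughput: {2 * msg_size * iters / elapsed / 1e6:.2f} MB/s")

# Usage against a hypothetical echo server: pingpong("10.0.0.2", 5000)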
This project set out to scientifically tailor ''smart'' interfacial films and 3-D composite nanostructures to exhibit photochromic responses to specific, highly localized chemical and/or mechanical stimuli, and to integrate them into optical microsystems. The project involved the design of functionalized chromophoric self-assembled materials that possessed intense and environmentally sensitive optical properties (absorbance, fluorescence), enabling their use as detectors of specific stimuli and as transducers when interfaced with optical probes. The conjugated polymer polydiacetylene (PDA) proved to be the most promising material in many respects, although it had some drawbacks concerning reversibility. Throughout this work we used multi-task scanning probes (AFM, NSOM), offering simultaneous optical and interfacial force capabilities, to actuate and probe the PDA with localized, specific interactions, allowing detailed characterization of the physical mechanisms and parameters. In addition to forming high-quality mono-, bi-, and tri-layers of PDA via Langmuir-Blodgett deposition, we were successful in using the diacetylene monomer precursor as a surfactant that directed the self-assembly of an ordered, mesostructured inorganic host matrix. Remarkably, the diacetylene was polymerized in the matrix, thus providing a PDA-silica composite. The inorganic matrix serves as a perm-selective barrier to chemical and biological agents and provides structural support for improved material durability in microsystems. Our original goal was to use the composite films as a direct interface with microscale devices as optical elements (e.g., intracavity mirrors, diffraction gratings), taking advantage of the very high sensitivity of device performance to real-time dielectric changes in the films. However, our optical physics colleagues (M. Crawford and S. Kemme) were unsuccessful in these efforts, mainly due to the poor optical quality of the composite films.
The intention of this project was to collaborate with Harvard University in the general area of nanoscale structures, biomolecular materials, and their application in support of Sandia's MEMS technology. The expertise at Harvard was crucial in fostering these fundamentally interdisciplinary developments. Areas of interest included: (1) nanofabrication that exploits traditional methods (from Si technology) and develops new ones; (2) self-assembly of organic and inorganic systems; (3) assembly and dynamics of membranes and microfluidics; (4) study of the hierarchy of scales in assembly; (5) innovative imaging methods; and (6) hard (engineering)/soft (biological) interfaces. Specifically, we decided to work with Harvard to design and construct an experimental test station to measure molecular transport through single nanopores. The pore may be of natural origin, such as a self-assembled bacterial protein in a lipid bilayer, or an artificial structure in silicon or silicon nitride.
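As a hedged sketch of the kind of analysis such a test station ultimately feeds, the Python fragment below detects translocation events in a nanopore current trace as transient blockades of the open-pore current. The threshold fraction and the synthetic trace are illustrative assumptions, not parameters of the actual station.

import numpy as np

def find_blockade_events(current, dt, frac=0.8):
    """Return (start_time_s, duration_s, mean_blockade_depth) for each
    transient dip below frac * open-pore current."""
    open_level = np.median(current)            # crude open-pore estimate
    idx = np.flatnonzero(current < frac * open_level)
    if idx.size == 0:
        return []
    # split contiguous runs of below-threshold samples into events
    runs = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)
    return [(run[0] * dt, run.size * dt, open_level - current[run].mean())
            for run in runs]

# Synthetic trace: ~100 pA open-pore current with two brief blockades.
rng = np.random.default_rng(0)
trace = 100.0 + rng.normal(0.0, 1.0, 10000)    # pA, 10 us per sample
trace[2000:2050] -= 40.0                       # 0.5 ms event
trace[7000:7030] -= 35.0                       # 0.3 ms event
for start, dur, depth in find_blockade_events(trace, dt=10e-6):
    print(f"event at {start*1e3:.2f} ms, {dur*1e6:.0f} us, depth {depth:.1f} pA")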
This report documents work supporting the Sandia National Laboratories initiative in Distributed Energy Resources (DERs) and Supervisory Control and Data Acquisition (SCADA) systems. One approach to real-time feedback control of power generation assets, quantitative feedback theory (QFT), has recently been applied to voltage, frequency, and phase control of power systems at Sandia. QFT provided a simple yet powerful design philosophy, allowing the designer to optimize the system by making design tradeoffs without getting lost in complex mathematics. The feedback systems were effective in reducing sensitivity to large and sudden changes in the power grid. Voltage, frequency, and phase were accurately controlled, even with large disturbances to the grid.
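The sensitivity reduction mentioned above can be made concrete with a small frequency-domain calculation. The Python sketch below evaluates the sensitivity function |S(jw)| = |1/(1 + P(jw)C(jw))| for a generic first-order plant with a PI controller; the plant and the controller gains are illustrative assumptions, not the actual Sandia power-system models.

def sensitivity_mag(w, plant, controller):
    """|S(jw)|: the factor by which a disturbance at frequency w
    (rad/s) is attenuated once the feedback loop is closed."""
    s = 1j * w
    return abs(1.0 / (1.0 + plant(s) * controller(s)))

plant = lambda s: 1.0 / (s + 1.0)       # generic first-order plant
pi_ctrl = lambda s: 5.0 + 20.0 / s      # PI controller, illustrative gains

# Large low-frequency loop gain drives |S| toward zero, i.e. strong
# rejection of slow disturbances; |S| approaches 1 at high frequency.
for w in (0.1, 1.0, 10.0):
    print(f"w = {w:5.1f} rad/s   |S| = {sensitivity_mag(w, plant, pi_ctrl):.3f}")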
This report is divided into two parts: a study of the glass transition in confined geometries, and formation mechanisms of block copolymer mesophases by solvent evaporation-induced self-assembly. The effect of geometrical confinement on the glass transition of polymers is an important consideration for the use of polymers in nanotechnology. We hypothesize that the shift of the glass transition temperature of polymers in confined geometries can be attributed to the inhomogeneous density profile of the liquid. Accordingly, we assume that the glass transition temperature in the inhomogeneous state can be approximated by the Tg of a corresponding homogeneous, bulk polymer at a density equal to the average density of the inhomogeneous system. Simple models based on this hypothesis give results that are in remarkable agreement with experimental measurements of the glass transition of confined liquids. Evaporation-induced self-assembly (EISA) of block copolymers is a versatile process for producing novel, nanostructured materials and is the focus of much of the experimental work in the Brinker group at Sandia. In the EISA process, as the solvent preferentially evaporates from a cast film, two scenarios are possible: microphase separation or micellization of the block copolymers in solution. In the present investigation, we established the conditions that dictate which scenario takes place. Our approach uses scaling arguments to determine whether the overlap concentration c* is reached before or after the critical micelle concentration (CMC). These theoretical arguments are used to interpret recent experimental results of Yu and collaborators on EISA in silica/PS-PEO systems.
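To make the average-density hypothesis concrete, the Python sketch below evaluates a bulk Tg(rho) relation at the mean density of an assumed film density profile. Both the linear Tg(rho) form and the reduced-density surface layers are hypothetical placeholders chosen for illustration, not fitted models from this work.

import numpy as np

def tg_bulk(rho, tg0=373.0, rho0=1.05, k=600.0):
    """Hypothetical linear bulk relation Tg(rho) = tg0 + k * (rho - rho0),
    with numbers loosely in the spirit of polystyrene (K, g/cm^3)."""
    return tg0 + k * (rho - rho0)

def tg_confined(thickness_nm, depletion_nm=2.0, rho_bulk=1.05, rho_edge=0.95):
    """Approximate the film Tg by evaluating the bulk Tg(rho) at the
    average density of an assumed profile: a reduced-density layer of
    width depletion_nm at each free surface, bulk density elsewhere."""
    z = np.linspace(0.0, thickness_nm, 1000)
    rho = np.where((z < depletion_nm) | (z > thickness_nm - depletion_nm),
                   rho_edge, rho_bulk)
    return tg_bulk(rho.mean())

# The thinner the film, the larger the weight of the low-density surface
# layers in the average, and the larger the predicted Tg depression.
for h in (100.0, 20.0, 10.0, 5.0):
    print(f"h = {h:5.1f} nm   estimated Tg = {tg_confined(h):6.1f} K")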
In exploring the question of how humans reason in ambiguous situations or in the absence of complete information, we stumbled onto a body of knowledge that addresses issues beyond the original scope of our effort. We have begun to understand the importance that philosophy, in particular the work of C. S. Peirce, plays in developing models of human cognition and of information theory in general, and we now have a foundation that can serve as a basis for further studies in cognition and decision making. Peircean philosophy provides a basis for understanding human reasoning and for capturing the behavioral characteristics of decision makers that arise from cultural, physiological, and psychological effects. The present paper describes this philosophical approach to understanding the underpinnings of human reasoning. We present the work of C. S. Peirce and define sets of fundamental reasoning behaviors that could be captured in mathematical constructs and made to interact within an agent-based framework. Further, we propose the adoption of a hybrid reasoning model based on Peirce's work for future computational representations or emulations of human cognition.
This article summarizes the automated course of action (COA) development effort. It puts the COA effort into an operational perspective that addresses command and control theory and touches on the military planning concept known as effects-based operations. The sections on the COA effort detail the rationale behind the functional models developed and identify technologies that could support the process functions. The functional models include a section on adversarial modeling, which adds a dynamic to the COA process that is missing from current combat simulations. The information contained in this article lays the foundation for building a unique analytic capability.
This report is an update to the ''smart gun'' work, and the corresponding report, completed in 1996. It incorporates new terminology and expanded definitions, and is the product of an open-source review of how the ''smart gun'' technology landscape has changed since the 1996 report was published.
The Comprehensive Test Ban Treaty of 1996 banned any future nuclear explosions or testing of nuclear weapons and created the CTBTO in Vienna to implement the treaty. The U.S. response was the cessation of all aboveground and belowground nuclear testing. As a result, all stockpile reliability assessments are now based on periodic testing of subsystems stored in a wide variety of environments. These data provide a wealth of information and feed a growing web of deterministic, physics-based computer models for assessing stockpile reliability. Unfortunately, until 1996 it was difficult to relate deterministic materials-aging test data to component reliability. Since that time we have made great strides in mathematical techniques and computer tools that permit explicit relationships between materials degradation (e.g., corrosion and thermo-mechanical fatigue) and reliability. The resulting suite of tools is known as CRAX, and the mathematical library supporting these tools is Cassandra. However, these techniques ignore the historical data that are also available on similar systems in the nuclear stockpile, in the DoD weapons complex, and even in commercial applications. Traditional statistical techniques commonly used in classical reliability assessment do not permit data from these sources to be easily included in the overall assessment of system reliability. An older, alternative approach based on Bayesian probability theory permits the inclusion of data from all applicable sources: data from a variety of sources are brought together in a logical fashion through the repeated application of inductive mathematics. This research brings together existing mathematical methods, modifying and expanding those techniques as required, to permit data from a wide variety of sources to be combined logically and thereby increase confidence in the reliability assessment of the nuclear weapons stockpile. The application of this research is limited to systems composed of discrete components, i.e., those that can be characterized as operating or not operating. However, there is nothing unique about the underlying principles, and the extension to continuous subsystems and systems is straightforward. The framework is also laid for the consideration of systems with multiple correlated failure modes; while these are an important consideration, time and resources did not permit a specific demonstration of those methods.
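As a minimal sketch of the Bayesian mechanics for such discrete, pass/fail components, the Python fragment below performs a conjugate Beta-Binomial update, folding data from a similar legacy system and new surveillance tests into a single posterior on component reliability. The counts and the uniform starting prior are invented for illustration; this is not the CRAX/Cassandra implementation.

from scipy import stats

def beta_update(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Binomial update: a Beta(a, b) prior on component
    reliability combined with pass/fail test counts yields another Beta."""
    return prior_a + successes, prior_b + failures

# Start from a uniform Beta(1, 1) prior, then fold in hypothetical data
# from a similar legacy system: 48 passes, 2 failures.
a, b = beta_update(1.0, 1.0, 48, 2)
# Fold in new stockpile surveillance results: 20 passes, 0 failures.
a, b = beta_update(a, b, 20, 0)

posterior = stats.beta(a, b)
print(f"posterior mean reliability: {posterior.mean():.4f}")
print(f"90% credible lower bound:  {posterior.ppf(0.10):.4f}")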