This report summarizes research performed at Sandia National Laboratories (SNL) in collaboration with the Environmental Protection Agency (EPA) to assess microarray quality on arrays from two platforms of interest to the EPA. Custom microarrays from two novel, commercially produced array platforms were imaged with SNL's unique hyperspectral imaging technology, and multivariate data analysis was performed to investigate sources of emission on the arrays. No extraneous sources of emission were evident in any of the array areas scanned, leading to the conclusion that either of these array platforms can produce high-quality, reliable microarray data for the EPA toxicology programs. Hyperspectral imaging results are presented, and recommendations for microarray analyses using these platforms are detailed within the report.
Raman spectroscopic imaging is a powerful technique for visualizing chemical differences within a variety of samples based on the interaction of a substance's molecular vibrations with laser light. While Raman imaging can provide a unique view of sample properties such as residual stress within silicon devices, chemical degradation, material aging, and sample heterogeneity, the Raman scattering process is often weak and thus requires very sensitive collection optics and detectors. Many commercial instruments (including ones owned here at Sandia National Laboratories) generate Raman images by raster scanning a point-focused laser beam across a sample, a process which can expose a sample to extreme levels of laser light and requires lengthy acquisition times. Our previous research efforts led to the development of a state-of-the-art two-dimensional hyperspectral imager for fluorescence imaging applications such as microarray scanning. This report details the design, integration, and characterization of a line-scan Raman imaging module added to this efficient hyperspectral fluorescence microscope. The original hyperspectral fluorescence instrument serves as the framework for excitation and sample manipulation for the Raman imaging system, while a more appropriate axial transmissive Raman imaging spectrometer and detector are utilized for collection of the Raman scatter. The result is a unique and flexible dual-modality fluorescence and Raman imaging system capable of high-speed imaging at high spatial and spectral resolutions. Care was taken throughout the design and integration process not to hinder any of the fluorescence imaging capabilities; for example, an operator can switch between the fluorescence and Raman modalities without extensive optical realignment. The instrument performance has been characterized and sample data are presented.
Physical mechanisms responsible for single-event effects are reviewed, concentrating on silicon MOS devices and digital integrated circuits. A brief historical overview of single-event effects in space and terrestrial systems is given. Single-event upset mechanisms in SRAMs are briefly described, as is the initiation of single-event latchup in CMOS structures. Techniques for mitigating single-event effects are described, including the impact of technology trends on mitigation efficacy. Future challenges are briefly explored.
Relatively small motion measurement errors manifest themselves principally as a phase error in Synthetic Aperture Radar (SAR) complex data samples, and if large enough become observable as a smearing, blurring, or other degradation in the image. The phase error function can be measured and then deconvolved from the original data to compensate for the presumed motion error, ultimately resulting in a well-focused image. Techniques that do this are termed "autofocus" algorithms. A very popular autofocus algorithm is the Phase Gradient Autofocus (PGA) algorithm. The nearly universal, and typically reasonable, assumption is that the motion errors are less than the range resolution of the radar, allowing solely a phase correction to suffice. Very large relative motion measurement errors manifest themselves as an unexpected additional shifting or migration of target locations beyond any deterministic migration during the course of the synthetic aperture. Degradation in images from data exhibiting errors of this magnitude are substantial, often rendering the image completely useless. When residual range migration due to either real or apparent motion errors exceeds the range resolution, conventional autofocus algorithms fail. Excessive residual migration is increasingly encountered as resolutions become finer, less expensive inertial sensors are used, and operating ranges become longer (due to atmospheric phenomena). A new migration-correction autofocus algorithm has been developed that estimates the excessive residual migration and applies phase and frequency corrections to properly focus the image. This overcomes the conventional constraint that motion errors not exceed the SAR range resolution.
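To make the phase-correction step concrete, below is a minimal sketch of the classic PGA iteration (circularly shift the brightest scatterers to center, estimate the phase gradient across the aperture, apply the conjugate phase), assuming a complex-valued image with azimuth along the first axis. The function name, iteration count, and the omission of windowing and convergence tests are illustrative simplifications, not the implementation described in this work.

    import numpy as np

    def pga_autofocus(img, iterations=5):
        """Sketch of Phase Gradient Autofocus for a complex SAR image."""
        img = np.asarray(img, dtype=complex).copy()
        n_az = img.shape[0]
        for _ in range(iterations):
            # 1. Center the brightest scatterer in each range bin
            shifted = np.empty_like(img)
            for j in range(img.shape[1]):
                peak = np.argmax(np.abs(img[:, j]))
                shifted[:, j] = np.roll(img[:, j], n_az // 2 - peak)
            # 2. Transform to the azimuth phase-history domain
            g = np.fft.fftshift(np.fft.fft(shifted, axis=0), axes=0)
            # 3. Estimate the phase gradient, summing over range bins
            num = np.sum(np.imag(np.conj(g[:-1]) * g[1:]), axis=1)
            den = np.sum(np.abs(g[:-1]) ** 2, axis=1)
            phi = np.concatenate(([0.0], np.cumsum(num / np.maximum(den, 1e-12))))
            phi -= np.linspace(phi[0], phi[-1], n_az)  # drop linear trend (image shift)
            # 4. Apply the conjugate phase and return to the image domain
            G = np.fft.fftshift(np.fft.fft(img, axis=0), axes=0)
            img = np.fft.ifft(np.fft.ifftshift(G * np.exp(-1j * phi)[:, None], axes=0), axis=0)
        return img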
Sandia National Laboratories designs and builds Synthetic Aperture Radar (SAR) systems capable of forming high-quality, exceptionally fine-resolution images. During the spring of 2004, a series of test flights was completed with a Ka-band testbed SAR on Sandia's DeHavilland DHC-6 Twin Otter aircraft. A large data set was collected, including real-time fine-resolution images of a variety of target scenes. This paper offers a sampling of high-quality images representative of the output of Sandia's Ka-band testbed radar, with resolutions as fine as 4 inches. Images are annotated with descriptions of collection geometries and other relevant image parameters.
Airborne synthetic aperture radar (SAR) imaging systems have reached a degree of accuracy and sophistication that requires the validity of the free-space approximation for radio-wave propagation to be questioned. Based on the thin-lens approximation, a closed-form model for the focal length of a gravity wave-modulated refractive-index interface in the lower troposphere is developed. The model corroborates the suggestion that mesoscale, quasi-deterministic variations of the clear-air radio refractive-index field can cause diffraction patterns on the ground that are consistent with reflectivity artifacts occasionally seen in SAR images, particularly in those collected at long ranges, short wavelengths, and small grazing angles.
An unattended ground sensor (UGS) that attempts to perform target identification without providing some corresponding estimate of confidence level is of limited utility. In this context, a confidence level is a measure of probability that the detected vehicle is of a particular target class. Many identification methods attempt to match features of a detected vehicle to each of a set of target templates. Each template is formed empirically from features collected from vehicles known to be members of the particular target class. The nontarget class is inherent in this formulation and must be addressed in providing a confidence level. Often, it is difficult to adequately characterize the nontarget class empirically by feature collection, so assumptions must be made about the nontarget class. An analyst tasked with deciding how to use the confidence level of the classifier decision should have an accurate understanding of the meaning of the confidence level given. This paper compares several definitions of confidence level by considering the assumptions made in each, examining how those assumptions affect the meaning of the result, and giving examples of implementation in a practical acoustic UGS.
The shape control of thin, flexible structures has been studied primarily for edge-supported thin plates. For applications involving reconfigurable apertures such as membrane optics and active RF surfaces, corner-supported configurations may prove more applicable. Corner-supported adaptive structures allow for parabolic geometries, greater flexibility, and larger achievable deflections when compared to edge-supported geometries under similar actuation conditions. Preliminary models have been developed for corner-supported thin plates actuated by isotropic piezoelectric actuators. However, typical piezoelectric materials are known to be orthotropic. This paper extends a previously-developed isotropic model for a corner-supported, thin, rectangular bimorph to a more general orthotropic model for a bimorph actuated by a two-dimensional array of segmented PVDF laminates. First, a model determining the deflected shape of an orthotropic laminate for a given distribution of voltages over the actuator array is derived. Second, symmetric actuation of a bimorph consisting of orthotropic material is simulated using orthogonally-oriented laminae. Finally, the results of the model are shown to agree well with layered-shell finite element simulations for simple and complex voltage distributions.
This report describes the test and evaluation (T&E) methods by which the Teraflops Operating System (TOS), which resides on Sandia's massively parallel computer Janus, is verified for production release. Also discussed are the methods used to build TOS before testing and evaluation, miscellaneous utility scripts, a sample test plan, and a proposed post-test method for quickly examining the large number of test results. The purpose of the report is threefold: (1) to provide a guide to T&E procedures, (2) to aid and guide others who will run T&E procedures on the new ASCI Red Storm machine, and (3) to document some of the history of evaluation and testing of TOS. This report is not intended to serve as an exhaustive manual for conducting T&E procedures.
Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (from both frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
Tensile and compressive stress-strain experiments on metals at strain rates in the range of 1 to 1000 s⁻¹ are relevant to many applications such as gravity-dropped munitions and airplane accidents. While conventional test methods cover strain rates up to approximately 10 s⁻¹ and split-Hopkinson and other techniques cover strain rates in excess of approximately 1000 s⁻¹, there are no well-defined techniques for the intermediate or "Sub-Hopkinson" strain-rate regime. The current work outlines many of the challenges of testing in the Sub-Hopkinson regime and establishes methods for addressing them. The resulting technique for obtaining intermediate-rate stress-strain data is demonstrated in tension on a high-strength, high-toughness steel alloy (Hytuf) that could be a candidate alloy for earth-penetrating munitions, and in compression on a Au-Cu braze alloy.
We conducted broadband absorption measurements of atmospheric water vapor in the ground state, X ¹A₁ (000), from 0.4 to 2.7 THz with a pressure-broadening-limited resolution of 6.2 GHz using pulsed terahertz time-domain spectroscopy (THz-TDS). We measured a total of seventy-two absorption lines, and forty-nine lines were identified as H₂¹⁶O resonances. All the H₂¹⁶O lines identified were confirmed by comparing their center frequencies to experimental values available in the literature.
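For context, THz-TDS absorption spectra of this kind are typically reduced by Fourier-transforming reference and sample time-domain traces and ratioing their amplitude spectra, with line centers read off the peaks. The sketch below shows only that generic reduction; the function names are hypothetical, and etalon corrections and phase analysis are omitted.

    import numpy as np

    def absorbance_spectrum(t, e_ref, e_sample):
        """Generic THz-TDS reduction: amplitude absorbance vs. frequency.
        t: equally spaced times (s); e_ref/e_sample: measured field traces."""
        dt = t[1] - t[0]
        freq = np.fft.rfftfreq(len(t), dt)  # Hz
        transmission = np.abs(np.fft.rfft(e_sample) / np.fft.rfft(e_ref))
        absorbance = -np.log(np.maximum(transmission, 1e-12))
        return freq, absorbance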
Friction and wear are major concerns in the performance and reliability of micromechanical (MEMS) devices. While a variety of lubricant and wear-resistant coatings are known which we might consider for application to MEMS devices, the severe geometric constraints of many micromechanical systems (high aspect ratios, shadowed surfaces) make most deposition methods for friction and wear-resistance coatings impossible. In this program we have produced and evaluated highly conformal tribological coatings, deposited by atomic layer deposition (ALD), for use on surface micromachined (SMM) and LIGA structures. ALD is a chemical vapor deposition process using sequential exposure of reagents and self-limiting surface chemistry, saturating at a maximum of one monolayer per exposure cycle. The self-limiting chemistry results in conformal coating of high aspect ratio structures, with monolayer precision. ALD of a wide variety of materials is possible, but there have been no studies of the structural, mechanical, and tribological properties of these films. We have developed processes for depositing thin (<100 nm) conformal coatings of selected hard and lubricious films (Al₂O₃, ZnO, WS₂, W, and W/Al₂O₃ nanolaminates), and measured their chemical, physical, mechanical, and tribological properties. A significant challenge in this program was to develop instrumentation and quantitative test procedures, which did not previously exist, for friction, wear, film/substrate adhesion, elastic properties, stress, etc., of extremely thin films and nanolaminates. New scanning probe and nanoindentation techniques have been employed along with detailed mechanics-based models to evaluate these properties at small loads characteristic of microsystem operation. We emphasize deposition processes and fundamental properties of ALD materials; however, we have also evaluated applications and film performance for model SMM and LIGA devices.
This multinational test program is quantifying the aerosol particulates produced when a high energy density device (HEDD) impacts surrogate material and actual spent fuel test rodlets. The experimental work, performed in four consecutive test phases, has been in progress for several years. The overall program provides needed data that are relevant to some sabotage scenarios in relation to spent fuel transport and storage casks, and associated risk assessments. This program also provides significant political benefits in international cooperation for nuclear security related evaluations. The spent fuel sabotage/aerosol test program is coordinated with the international Working Group for Sabotage Concerns of Transport and Storage Casks (WGSTSC) and supported by both the U.S. Department of Energy and the Nuclear Regulatory Commission. This report summarizes the preliminary Phase 1 work performed in 2001 and 2002 at Sandia National Laboratories and the Fraunhofer Institute, Germany, and documents the experimental results obtained, observations, and preliminary interpretations. Phase 1 testing included: performance quantification of the HEDDs; characterization of the HEDD or conical shaped charge (CSC) jet properties with multiple tests; refinement of the aerosol particle collection apparatus being used; and CSC jet-aerosol tests using leaded glass plates and glass pellets serving as representative brittle materials. Phase 1 testing was quite important for the design and performance of the subsequent Phase 2 test program and test apparatus.
Due to the nature of many infectious agents, such as anthrax, symptoms may either take several days to manifest or resemble those of less serious illnesses leading to misdiagnosis. Thus, bioterrorism attacks that include the release of such agents are particularly dangerous and potentially deadly. For this reason, a system is needed for the quick and correct identification of disease outbreaks. The Real-time Outbreak Disease Surveillance System (RODS), initially developed by Carnegie Mellon University and the University of Pittsburgh, was created to meet this need. The RODS software implements different classifiers for pertinent health surveillance data in order to determine whether or not an outbreak has occurred. In an effort to improve the capability of RODS at detecting outbreaks, we incorporate a data fusion method. Data fusion is used to improve the results of a single classification by combining the output of multiple classifiers. This paper documents the first stages of the development of a data fusion system that can combine the output of the classifiers included in RODS.
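As a concrete illustration of decision-level fusion, the sketch below combines the outbreak posteriors reported by several classifiers with a weighted average. This is one standard fusion rule among many; it is shown for orientation and is not necessarily the rule adopted for RODS.

    import numpy as np

    def fuse_posteriors(posteriors, weights=None):
        """Weighted average of P(outbreak) estimates from multiple classifiers."""
        p = np.asarray(posteriors, dtype=float)
        w = np.ones_like(p) if weights is None else np.asarray(weights, dtype=float)
        return float(np.sum(w * p) / np.sum(w))

    # e.g., three classifiers reporting P(outbreak), the first trusted twice as much:
    fused = fuse_posteriors([0.62, 0.48, 0.71], weights=[2.0, 1.0, 1.0])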
Large complex teams (e.g., DOE labs) must achieve sustained productivity in critical operations (e.g., weapons and reactor development) while maintaining safety for involved personnel, the public, and physical assets, as well as security for property and information. This requires informed management decisions that depend on tradeoffs of factors such as the mode and extent of personnel protection, potential accident consequences, the extent of information and physical asset protection, and communication with and motivation of involved personnel. All of these interact (and potentially interfere) with each other and must be weighed against financial resources and implementation time. Existing risk analysis tools can successfully treat physical response, component failure, and routine human actions. However, many "soft" factors involving human motivation and interaction among weakly related factors have proved analytically problematic. There has been a need for an effective software tool capable of quantifying these tradeoffs and helping make rational choices. This type of tool, developed during this project, facilitates improvements in safety, security, and productivity, and enables measurement of improvements as a function of resources expended. Operational safety, security, and motivation are significantly influenced by "latent effects", which are pre-occurring influences. One example of these is that an atmosphere of excessive fear can suppress open and frank disclosures, which can in turn hide problems, impede correction, and prevent lessons learned. Another is that a cultural mind-set of commitment, self-responsibility, and passion for an activity is a significant contributor to the activity's success. This project pursued an innovative approach for quantitatively analyzing latent effects in order to link the above types of factors, aggregating available information into quantitative metrics that can contribute to strategic management decisions, and measuring the results. The approach also evaluates the inherent uncertainties, and allows for tracking dynamics for early response and assessing developing trends. The model development is based on how factors combine and influence other factors in real time and over extended time periods. Potential strategies for improvement can be simulated and measured. Input information can be determined by quantification of qualitative information in a structured derivation process. This has proved to be a promising new approach for research and development applied to personnel performance and risk management.
Saliency detection in images is an important outstanding problem both in machine vision design and in the understanding of human vision mechanisms. Recently, seminal work by Itti and Koch resulted in an effective saliency-detection algorithm. We reproduce the original algorithm in a software application, Vision, and explore its limitations. We propose extensions to the algorithm that promise to improve performance in the case of difficult-to-detect objects.
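For reference, a heavily reduced sketch of the Itti-Koch approach is shown below: center-surround differences taken across a Gaussian pyramid and summed into a single saliency map. Only the intensity channel is shown, and the pyramid levels and normalization are illustrative assumptions; the published model adds color and orientation channels and a more sophisticated normalization operator.

    import numpy as np
    from scipy import ndimage

    def intensity_saliency(img, centers=(2, 3), deltas=(3, 4)):
        """Itti/Koch-style center-surround saliency, intensity channel only."""
        pyr = [np.asarray(img, dtype=float)]
        for _ in range(max(centers) + max(deltas)):
            pyr.append(ndimage.gaussian_filter(pyr[-1], 1.0)[::2, ::2])  # blur + decimate
        h, w = pyr[0].shape
        sal = np.zeros((h, w))
        for c in centers:
            for d in deltas:
                s = c + d
                # Upsample the surround level to the center level, take |difference|
                up = pyr[s].repeat(2 ** d, 0).repeat(2 ** d, 1)[: pyr[c].shape[0], : pyr[c].shape[1]]
                fm = np.abs(pyr[c] - up)
                fm = fm.repeat(2 ** c, 0).repeat(2 ** c, 1)[:h, :w]  # back to full size
                sal += fm / (fm.max() + 1e-12)  # crude per-map normalization
        return sal / (sal.max() + 1e-12)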
A previously-developed experimental facility has been used to determine gas-surface thermal accommodation coefficients from the pressure dependence of the heat flux between parallel plates of similar material but different surface finish. Heat flux between the plates is inferred from measurements of temperature drop between the plate surface and an adjacent temperature-controlled water bath. Thermal accommodation measurements were determined from the pressure dependence of the heat flux for a fixed plate separation. Measurements of argon and nitrogen in contact with standard machined (lathed) or polished 304 stainless steel plates are indistinguishable within experimental uncertainty. Thus, the accommodation coefficient of 304 stainless steel with nitrogen and argon is estimated to be 0.80 ± 0.02 and 0.87 ± 0.02, respectively, independent of the surface roughness within the range likely to be encountered in engineering practice. Measurements of the accommodation of helium showed a slight variation with 304 stainless steel surface roughness: 0.36 ± 0.02 for a standard machine finish and 0.40 ± 0.02 for a polished finish. Planned tests with carbon-nanotube-coated plates will be performed when 304 stainless-steel blanks have been successfully coated.
Two different Sandia MEMS devices have been tested in a high-g environment to determine their performance and survivability. The first test was performed using a drop-table to produce a peak acceleration load of 1792 g's over a period of 1.5 ms. For the second test, the MEMS devices were assembled in a gun-fired penetrator and shot into a cement target at the Army Waterways Experiment Station in Vicksburg, Mississippi. This test resulted in a peak acceleration of 7191 g's for a duration of 5.5 ms. The MEMS devices were instrumented using the MEMS Diagnostic Extraction System (MDES), which is capable of driving the devices and recording the device output data during the high-g event, providing in-flight data to assess the device performance. A total of six devices were monitored during the experiments: four mechanical non-volatile memory devices (MNVM) and two Silicon Reentry Switches (SiRES). All six devices functioned properly before, during, and after each high-g test without a single failure. This is the first known test under flight conditions of an active, powered MEMS device at Sandia.
Optoelectronic microsystems are increasingly prevalent as researchers seek to increase transmission bandwidths, implement electrical isolation, enhance security, or take advantage of sensitive optical sensing methods. Board-level photonic integration techniques continue to improve, but photonic microsystems and fiber interfaces remain problematic, especially upon size reduction. Optical fiber is unmatched as a transmission medium for distances ranging from tens of centimeters to kilometers. The difficulty with using optical fiber is the small size of the core (approximately 9 µm for the core of single-mode telecommunications fiber) and the tight requirement on spot size and input numerical aperture (NA). Coupling to devices such as vertical-cavity surface-emitting lasers (VCSELs) and photodetectors presents further difficulties since these elements work in a plane orthogonal to the electronics board and typically require additional optics. This leads to the need for a packaging solution that can incorporate dissimilar materials while maintaining the tight alignment tolerances required by the optics. Over the course of this LDRD project, we have examined the capabilities of components such as VCSELs and photodetectors for high-speed operation and investigated the alignment tolerances required by the optical system. A solder reflow process has been developed to help fulfill these packaging requirements, and the results of that work are presented here.
This report examines a number of hardware circuit design issues associated with implementing certain functions in FPGA and ASIC technologies. Here we show circuit designs for AES and SHA-1 that have an extremely small hardware footprint yet achieve reasonably good performance compared to the state-of-the-art designs found in the literature. Our AES performance numbers are fueled by an optimized composite-field S-box design for the Stratix chipset. Our SHA-1 designs use register packing and feedback functionalities of the Stratix LE, which reduce logic element usage by as much as 72% compared to other SHA-1 designs.
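The appeal of a composite-field S-box is that the AES substitution can be computed algebraically, as a GF(2⁸) inversion followed by an affine transform, rather than stored as a 256-byte lookup table. The software reference below illustrates that underlying math only; the actual hardware design decomposes the inversion into subfield operations, which this sketch does not attempt.

    def gf_mul(a, b):
        # Multiply in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            hi = a & 0x80
            a = (a << 1) & 0xFF
            if hi:
                a ^= 0x1B
            b >>= 1
        return p

    def gf_inv(a):
        # a^254 = a^-1 in GF(2^8); naive multiply chain (hardware uses subfields)
        if a == 0:
            return 0
        r = a
        for _ in range(253):
            r = gf_mul(r, a)
        return r

    def sbox(x):
        # AES affine transform of the field inverse; e.g., sbox(0x01) == 0x7C
        b = gf_inv(x)
        y = 0
        for i in range(8):
            bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
                   ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
            y |= bit << i
        return y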
While isentropic compression experiment (ICE) techniques have proved useful in deducing the high-pressure compressibility of a wide range of materials, they have encountered difficulties where large-volume phase transitions exist. The present study sought to apply graded-density impactor methods, which produce isentropic loading in planar impact experiments, to selected problems of this type. Cerium was chosen due to its 20% compression between 0.7 and 1.0 GPa. A model was constructed based on limited earlier dynamic data and applied to the design of a suite of experiments. A capability for handling this material was installed. Two experiments were executed using shock/reload techniques with available samples, loading initially to near the gamma-alpha transition, then reloading. In addition, two graded-density impactor experiments were conducted with alumina. A method for interpreting ICE data was developed and validated; this uses a wavelet construction for the ramp wave and includes corrections for the "diffraction" of wavelets by releases or reloads reflected from the sample/window interface. Alternate methods for constructing graded-density impactors are discussed.
Water is the critical natural resource of the new century. Significant improvements in traditional water treatment processes require novel approaches based on a fundamental understanding of nanoscale and atomic interactions at interfaces between aqueous solution and materials. To better understand these critical issues and to promote an open dialog among leading international experts in water-related specialties, Sandia National Laboratories sponsored a workshop on April 24-26, 2005, in Santa Fe, New Mexico. The "Frontiers of Interfacial Water Research Workshop" provided attendees with a critical review of water technologies and emphasized the new advances in surface and interfacial microscopy, spectroscopy, diffraction, and computer simulation needed for the development of new materials for water treatment.
Recent interest in reprocessing nuclear fuel in the U.S. has led to advanced separations processes that employ continuous processing and multiple extraction steps. These advanced plants will need to be designed with state-of-the-art instrumentation for materials accountancy and control. This research examines current and upcoming instrumentation for nuclear materials accountancy to identify the instruments most suited to the reprocessing environment. Though this topic has received attention time and again in the past, new technologies and changing world conditions require a renewed look at this subject. The needs of the advanced UREX+ separations concept are first identified, and then a literature review of current and upcoming measurement techniques is presented. The report concludes with a preliminary list of recommended instruments and measurement locations.
This report contains the summary of LDRD project 91312, titled "Binary Electrokinetic Separation of Target DNA from Background DNA Primers". This work is the first product of a collaboration with Columbia University and the Northeast BioDefense Center of Excellence. In conjunction with Ian Lipkin's lab, we are developing a technique to reduce false positive events, due to the detection of unhybridized reporter molecules, in a sensitive and multiplexed detection scheme for nucleic acids developed by the Lipkin lab. This is the most significant problem in the operation of their capability. As they are developing the tools for rapidly detecting the entire panel of hemorrhagic fevers, this technology will immediately serve an important national need. The goal of this work was to attempt to separate nucleic acid from a preprocessed sample. We demonstrated the preconcentration of kilobase-pair-length double-stranded DNA targets and observed little preconcentration of 60-base-pair-length single-stranded DNA probes. These objectives were accomplished in microdevice formats that are compatible with larger detection systems for sample pre-processing. Combined with Columbia's expertise, this technology would enable a unique, fast, and potentially compact method for detecting/identifying genetically modified organisms and multiplexed rapid nucleic acid identification. A competing approach, the DARPA-funded IRIS Pharmaceutical TIGER platform, requires many hours for operation and an $800k piece of equipment that fills a room. The Columbia/SNL system could provide a result in 30 minutes, at the cost of a few thousand dollars for the platform, and would be the size of a shoebox or smaller.
This report documents the investigation regarding the failure of CPVC piping that was used to connect a solar hot water system to standard plumbing in a home. Details of the failure are described along with numerous pictures and diagrams. A potential failure mechanism is described and recommendations are outlined to prevent such a failure.
Political borders are controversial and contested spaces. In an attempt to better understand movement along and through political borders, this project applied the metaphor of a membrane to look at how people, ideas, and things "move" through a border. More specifically, the research team employed this metaphor in a system dynamics framework to construct a computer model to assess legal and illegal migration on the US-Mexico border. Employing a metaphor can be helpful, as it was in this project, to gain different perspectives on a complex system. In addition to the metaphor, the multidisciplinary team utilized an array of methods to gather data, including traditional literature searches, an experts workshop, a focus group, interviews, and culling expertise from the individuals on the research team. Results from the qualitative efforts revealed strong social as well as economic drivers that motivate individuals to cross the border legally. Based on the information gathered, the team concluded that legal migration dynamics were beyond the intended scope of this effort; hence, available demographic models sufficiently capture migration at the local level. Results from both the quantitative and qualitative data searches were used to modify a 1977 border model to demonstrate the dynamic nature of illegal migration. Model runs reveal that current U.S. policies based on neoclassical economic theory have proven ineffective in curbing illegal migration, and that proposed enforcement policies are also likely to be ineffective. We suggest, based on model results, that improvement in economic conditions within Mexico may have the biggest impact on illegal migration to the U.S. The modeling also supports the views expressed in the current literature suggesting that demographic and economic changes within Mexico are likely to slow illegal migration by 2060 with no special interventions made by either government.
Current Joint Test Assembly (JTA) neutron monitors rely on knock-on proton type detectors that are susceptible to X-rays and low-energy gamma rays. We investigated two novel plastic scintillating fiber directional neutron detector prototypes. One prototype used a fiber selected such that the fiber width was less than 2.1 mm, which is the range of a proton in plastic. The difference in the distribution of recoil proton energy deposited in the fiber was used to determine the incident neutron direction. The second prototype measured both the recoil proton energy and direction. The neutron direction was determined from the kinematics of single neutron-proton scatters. This report describes the development and performance of these detectors.
A turbulence model for buoyant flows has been developed in the context of a k-ε turbulence modeling approach. A production term is added to the turbulent kinetic energy equation based on dimensional reasoning, using an appropriate time scale for buoyancy-induced turbulence taken from the vorticity conservation equation. The resulting turbulence model is calibrated against far-field helium-air spread rate data and validated with near-source, strongly buoyant helium plume data sets. This model is more numerically stable and gives better predictions over a much broader range of mesh densities than the standard k-ε model for these strongly buoyant flows.
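For orientation, in a k-ε framework the new term enters the turbulent kinetic energy equation alongside the usual shear production. A generic dimensional form is sketched below in LaTeX notation, where the buoyancy time scale τ_b stands for the quantity derived here from the vorticity conservation equation; the constant C_b and the exact form shown are illustrative, not the calibrated model of this work.

    \frac{\partial(\rho k)}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}\,k)
      = \nabla\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\nabla k\right]
      + P_k + G_b - \rho\,\varepsilon,
    \qquad G_b \sim C_b\,\frac{\rho\,k}{\tau_b}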
Because of the inevitable depletion of fossil fuels and the corresponding release of carbon to the environment, the global energy future is complex. Some of the consequences may be politically and economically disruptive, and expensive to remedy. For the next several centuries, fuel requirements will increase with population, land use, and ecosystem degradation. Current or projected levels of aggregated energy resource use will not sustain civilization as we know it beyond a few more generations. At the same time, issues of energy security, reliability, sustainability, recoverability, and safety need attention. We supply a top-down, qualitative model, the surety model, to balance expenditures of limited resources to assure success while at the same time avoiding catastrophic failure. Looking at U.S. energy challenges from a surety perspective offers new insights on possible strategies for developing solutions to challenges. The energy surety model, with its focus on the attributes of security and sustainability, could be extrapolated into a global energy system using a more comprehensive energy surety model than that used here. In fact, the success of the energy surety strategy ultimately requires a more global perspective. We use a 200-year time frame for sustainability because extending farther into the future would almost certainly miss the advent and perfection of new technologies or the changing needs of society.
UNIPROCESSOR PERFORMANCE ANALYSIS OF A REPRESENTATIVE WORKLOAD OF SANDIA NATIONAL LABORATORIES' SCIENTIFIC APPLICATIONS. Master of Science in Electrical Engineering, New Mexico State University, Las Cruces, New Mexico, 2005. Dr. Jeanine Cook, Chair. Throughout the last decade, computer performance analysis has become absolutely necessary for maximizing the performance of some workloads. Sandia National Laboratories (SNL), located in Albuquerque, New Mexico, is no different: to achieve maximum performance of large scientific, parallel workloads, performance analysis is needed at the uniprocessor level. A representative workload has been chosen as the basis of a computer performance study to determine optimal processor characteristics in order to better specify the next generation of supercomputers. Cube3, a finite element test problem developed at SNL, is representative of SNL's scientific workloads. This workload has been studied at the uniprocessor level to understand characteristics of the microarchitecture that will lead to overall performance improvement at the multiprocessor level. The goal of studying this workload at the uniprocessor level is to build a performance prediction model that will be integrated into a multiprocessor performance model currently being developed at SNL. Through the use of performance counters on the Itanium 2 microarchitecture, performance statistics are studied to determine bottlenecks in the microarchitecture and/or changes in the application code that will maximize performance. From source code analysis, a performance-degrading loop kernel was identified, and through the use of compiler optimizations a performance gain of around 20% was achieved.
We present a new ab initio method for electronic structure calculations of materials at finite temperature (FT) based on the all-electron quasiparticle self-consistent GW (QPscGW) approximation and the Keldysh time-loop Green's function approach. We apply the method to Si, Ge, GaAs, InSb, and diamond and show that the band gaps of these materials universally decrease with temperature, in contrast with the local density approximation (LDA) of density functional theory (DFT), where the band gaps universally increase. At temperatures of a few eV, the difference between quasiparticle energies obtained in the FT-QPscGW and FT-LDA approaches is significantly reduced. This result suggests that existing simulations of very high temperature materials based on the FT-LDA are more justified than might appear from the well-known LDA band gap errors at zero temperature.
The use of Ion Mobility Spectrometry (IMS) in the Detection of Contraband. Sandia researchers use ion mobility spectrometers for trace chemical detection and analysis in a variety of projects and applications. Products developed in recent years based on IMS technology include explosives detection personnel portals, the Material Area Access (MAA) checkpoint of the future, an explosives detection vehicle portal, hand-held detection systems such as the Hound and Hound II (all 6400), micro-IMS sensors (1700), ordnance detection (2500), and Fourier transform IMS technology (8700). The emphasis to date has been on explosives detection, but the detection of chemical agents has also been pursued (8100 and 6400).
This report began with a Laboratory-Directed Research and Development (LDRD) project to improve Sandia National Laboratories' multidisciplinary capabilities in energy systems analysis. The aim is to understand how various electricity generating options can best serve needs in the United States. The initial product is documented in a series of white papers that span a broad range of topics, including the successes and failures of past modeling studies, sustainability, oil dependence, energy security, and nuclear power. Summaries of these projects are included here. These projects have provided a background and discussion framework for the Energy Systems Analysis LDRD team to carry out an inter-comparison of many of the commonly available electric power sources in present use, comparisons of those options, and efforts needed to realize progress towards those options. A computer aid has been developed to compare various options based on cost and other attributes such as technological, social, and policy constraints. The Energy Systems Analysis team has developed a multi-criteria framework that will allow comparison of energy options with a set of metrics that can be used across all technologies. This report discusses several evaluation techniques and introduces the set of criteria developed for this LDRD.
An experimental technique was developed to perform isentropic compression of heated liquid tin samples at the Z Accelerator, and multiple such experiments were performed to investigate solidification under rapid compression. Preliminary analyses, using two different methods, of data from experiments with high uncertainty in sample thickness suggest that solidification can begin to occur during isentropic compression on time scales of less than 100 ns. Repeatability of this result has not been confirmed due to technical issues on the subsequent experiments performed. First-principles molecular-dynamics calculations based on density-functional theory showed good agreement with experimentally-determined structure factors for liquid tin, and were used to investigate the equation of state and develop a novel interatomic pseudo-potential for liquid tin and its high-pressure solid phase. Empirical-potential molecular-dynamics calculations, using the new potential, gave results for the solid-liquid interface velocity, which was found to vary linearly with difference in free energy between the solid and liquid phases, as well as the liquidus, the maximum over-pressurization, and the solid-liquid interfacial energy. These data will prove useful in future modeling of solidification kinetics for liquid tin.
Damping vibrations is important in the design of some types of inertial sensing devices. One method for adding damping to a device is to use magnetic forces generated by a static magnetic field interacting with eddy currents. In this report, we develop a two-dimensional finite element model for the analysis of quasistatic eddy currents in a thin sheet of conducting material. The model was used for design and sensitivity analyses of a novel mechanical oscillator that consists of a shuttle mass (a thin sheet of conducting material) and a set of folded spring elements. The oscillator is damped through the interaction of a static magnetic field and eddy currents in the shuttle mass. Using a prototype device and Laser Doppler Velocimetry (LDV), measurements were compared to the model in a validation study using simulation-based uncertainty analyses. Measurements were found to follow the trends predicted by the model.
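As a rough sketch of how the eddy-current mechanism enters the oscillator design (illustrative scaling only, not the finite element model developed in this report): a thin sheet of conductivity σ and thickness h moving through a static field B over an effective area A experiences a retarding force proportional to velocity, so it simply adds to the viscous coefficient of the usual second-order oscillator:

    m\ddot{x} + (c_0 + c_e)\,\dot{x} + k x = 0,
    \qquad c_e \propto \sigma\, h\, B^{2} A,
    \qquad Q = \frac{\sqrt{m k}}{c_0 + c_e}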
The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives, as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). To this end, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g., cockpit simulations), and the Navy's Human Performance Center (HPC), established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex, network-centric systems of systems (SoS). This report presents research and development in the area of HPM in an SoS context. Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.
The overall objective of the Nonproliferation and Assessments Scenario Development project is to create and analyze potential and plausible scenarios that would lead to an adversary's ability to acquire and use a biological weapon. The initial three months of funding was intended to be used to develop a scenario to demonstrate the efficacy of this analysis methodology; however, it was determined that a substantial amount of preliminary data collection would be needed before a proof of concept scenario could be developed. We have dedicated substantial effort to determine the acquisition pathways for Foot and Mouth Disease Virus, and similar processes will be applied to all pathogens of interest. We have developed a biosecurity assessments database to capture information on adversary skill locales, available skill sets in specific regions, pathogen sources and regulations involved in pathogen acquisition from legitimate facilities. FY06 funding, once released, will be dedicated to data collection on acquisition, production and dissemination requirements on a pathogen basis. Once pathogen data has been collected, scenarios will be developed and scored.
Methods for analysis of fluid-structure interaction using high-fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
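As a concrete sketch of the POD step, the basis is commonly computed from a thin singular value decomposition of a snapshot matrix, as below. Mean subtraction, inner-product weighting, and the energy threshold are application-specific choices, and the names are illustrative.

    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """POD basis from a snapshot matrix X (rows: DOFs, columns: snapshots).
        Returns the leading modes capturing the requested energy fraction."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        frac = np.cumsum(s ** 2) / np.sum(s ** 2)   # captured 'energy'
        r = int(np.searchsorted(frac, energy)) + 1  # smallest adequate basis size
        return U[:, :r], s[:r]

    # Galerkin projection then evolves r reduced coordinates: with basis Phi,
    # a linear full-order operator A reduces to A_r = Phi.T @ A @ Phi.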
The one-year Software Architecture LDRD (No. 79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.
This Pollution Prevention Opportunity Assessment (PPOA) was conducted for Sandia National Laboratories/California Electronics Prototype Laboratory (EPL) in May 2005. The primary purpose of this PPOA is to provide recommendations to assist Electronics Prototype Laboratory personnel in reducing the generation of waste and improving the efficiency of their processes. This report contains a summary of the information collected, analyses performed and recommended options for implementation. The Sandia National Laboratories Pollution Prevention staff will continue to work with the EPL to implement the recommendations.
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Threats to water distribution systems include release of contaminants and Denial of Service (DoS) attacks. A better understanding, and validated computational models, of the flow in water distribution systems would enable determination of sensor placement in real water distribution networks, allow source identification, and guide mitigation/minimization efforts. Validation data are needed to evaluate numerical models of network operations. Some data can be acquired in real-world tests, but these are limited by 1) unknown demand, 2) lack of repeatability, 3) too many sources of uncertainty (demand, friction factors, etc.), and 4) expense. In addition, real-world tests have limited numbers of network access points. A scale-model water distribution system was fabricated, and validation data were acquired over a range of flow (demand) conditions. Standard operating variables included system layout, demand at various nodes in the system, and pressure drop across various pipe sections. In addition, the location of contaminant (salt or dye) introduction was varied. Measurements of pressure, flowrate, and concentration at a large number of points, and overall visualization of dye transport through the flow network, were completed. Scale-up issues that were incorporated in the experiment design include Reynolds number, pressure drop across nodes, and pipe friction and roughness. The scale was chosen to be 20:1, so the 10-inch main was modeled with a 0.5-inch pipe in the physical model. Controlled validation tracer tests were run to provide validation data for flow and transport models, especially of the degree of mixing at pipe junctions. Results of the pipe mixing experiments showed large deviations from predicted behavior, and these have a large impact on standard network operations models.
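As a worked illustration of the Reynolds-number consideration (the velocity and viscosity figures here are assumed for illustration, not measured values from the experiment): with the same working fluid in model and prototype, matching Re = VD/ν across the 20:1 geometric scale requires the model velocity to rise by the scale factor.

    # Reynolds matching across a 20:1 scale model with water in both systems
    scale = 20.0
    D_full = 10.0 / 12.0        # ft (10 inch main)
    D_model = D_full / scale    # ft (0.5 inch model pipe)
    nu = 1.1e-5                 # ft^2/s, water near room temperature
    V_full = 1.5                # ft/s, assumed full-scale velocity
    V_model = V_full * scale    # ft/s needed so V*D/nu matches
    assert abs(V_full * D_full - V_model * D_model) < 1e-12
    print(f"Re = {V_full * D_full / nu:.0f} in both systems")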
This diffractive optical element (DOE) LDRD is divided into two tasks. In Task 1, we develop two new DOE technologies: (1) a broad-wavelength-band effective anti-reflection (AR) structure and (2) a design tool to encode dispersion and polarization information into a unique diffraction pattern. In Task 2, we model, design, and fabricate a subwavelength polarization splitter. The first technology is an anti-reflective (AR) layer that may be etched into the DOE surface. For many wavelengths of interest, transmissive silicon DOEs are ideal. However, a significant portion of light (30% from each surface) is lost due to Fresnel reflection. To address this issue, we investigate a subwavelength, surface-relief structure that acts as an effective AR coating. The second DOE component technology in Task 1 is a design tool to determine the optimal DOE surface relief structure that can encode the light's degree of dispersion and polarization into a unique spatial pattern. Many signals of interest have unique spatial, temporal, spectral, and polarization signatures. The ability to disperse the signal into a unique diffraction pattern would result in improved signal detection sensitivity with a simultaneous reduction in false alarms. Task 2 of this LDRD project is to investigate the modeling, design, and fabrication of subwavelength birefringent devices for polarimetric spectral sensing and imaging applications. Polarimetric spectral sensing measures the spectrum of the light and the polarization state of the light at each wavelength simultaneously. The capability to obtain both polarization and spectral information can help develop target/object signatures and identify the target/object for several applications in NP&MC and national security.
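The roughly 30% per-surface loss cited above follows directly from the normal-incidence Fresnel reflectance between air and high-index silicon; the index value below is a typical near/mid-infrared figure, assumed for illustration.

    # Normal-incidence Fresnel reflectance at an air/silicon interface
    n_air, n_si = 1.0, 3.42     # silicon index in the near/mid-IR (approximate)
    R = ((n_si - n_air) / (n_si + n_air)) ** 2
    print(f"R = {R:.2f}")       # ~0.30 per surface, matching the loss quoted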
The fate of contaminants after a dispersal event is a major concern, and waterways may be particularly sensitive to such an incident. Contaminants could be introduced directly into a water system (municipal or general) or indirectly (Radiological Dispersal Device) from aerial dispersion, precipitation, or improper clean-up techniques that may wash contamination into storm water drains, sewer systems, rivers, lakes, and reservoirs. Most radiological, chemical, and biological contaminants have an affinity for sediments and organic matter in the water system. If contaminated soils enter waterways, a plume of contaminated sediments could be left behind, subject to remobilization during the next storm event. Or, contaminants could remain in place, thus damaging local ecosystems. Suitable planning and deployment of resources to manage such a scenario could considerably mitigate the severity of the event. First responses must be prearranged so that clean-up efforts do not increase dispersal and exacerbate the problem. Interactions between the sediment, contaminant, and water cycle are exceedingly complex and poorly understood. This research focused on the development of a risk-based model that predicts the fate of introduced contaminants in surface water systems. Achieving this goal requires integrating sediment transport with contaminant chemical reactions (sorption and desorption) and surface water hydrodynamics. Sandia leveraged its existing state-of-the-art capabilities in sediment transport measurement techniques, hydrochemistry, high performance computing, and performance assessment modeling in an effort to accomplish this task. In addition, the basis for the physical hydrodynamics is calculated with the EPA sponsored, public domain model, Environmental Fluid Dynamics Code (EFDC). The results of this effort will enable systems analysis and numerical simulation that allow the user to determine both short term and long-term consequences of contamination of waterways as well as to help formulate preventative and remedial strategies.
This report highlights the findings of an extensive review of the literature in the area of nanorobotics. The main goal of this midyear LDRD effort is to survey and identify accomplishments and advancements that have been made in this relatively new and emerging field. As a result, it may be determined what routes in the area of nanorobotics are scientifically plausible and technically useful so that the Intelligent Systems and Robotics Center can position itself to play a role in the future development of nanotechnology.
The gas-phase µChemLab™ developed by Sandia can detect volatile organics and semi-volatile organics via gas-phase sampling. The goal of this three-year Laboratory Directed Research and Development (LDRD) project was to adapt the components and concepts used by the µChemLab™ system to the analysis of water-borne chemicals of current concern; in essence, to interface the gas-phase µChemLab™ with water, bringing the significant prior investment of Sandia and the advantages of microfabrication and portable analysis to a whole new world of important analytes. These include both chemical weapons agents and their hydrolysis products and disinfection by-products such as trihalomethanes (THMs) and haloacetic acids (HAAs). THMs and HAAs are currently regulated by the EPA due to health issues, yet water utilities do not have rapid on-site methods of detection that would allow them to adjust their processes quickly, thereby protecting consumers, meeting water quality standards, and obeying regulations more easily and with greater confidence. This report documents the results, unique hardware and devices, and methods designed during the project toward the goal stated above. It also presents and discusses the portable field system to measure THMs developed in the course of this project.
This work focuses on different methods to generate confidence regions for nonlinear parameter identification problems. Three methods for confidence region estimation are considered: a linear approximation method, an F-test method, and a log-likelihood method. Each of these methods is applied to three case studies. One case study is a problem with synthetic data, and the other two case studies identify hydraulic parameters in groundwater flow problems based on experimental well-test results. The confidence regions for each case study are analyzed and compared. Although the F-test and log-likelihood methods result in similar regions, there are differences between these regions and the regions generated by the linear approximation method for nonlinear problems. The differing results, capabilities, and drawbacks of all three methods are discussed.
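For concreteness, the linear approximation method can be sketched as follows: linearize the model at the parameter estimate, form the approximate covariance from the Jacobian, and bound the region with an F quantile. The unweighted least-squares setting and the names below are assumptions of this sketch, not details taken from the case studies.

    import numpy as np
    from scipy import stats

    def linear_confidence_region(J, residuals, alpha=0.05):
        """Linearized confidence ellipsoid for nonlinear least squares.
        J: (m x p) Jacobian at the estimate; residuals: length-m vector.
        Region: (theta - theta_hat)^T C^{-1} (theta - theta_hat) <= scale."""
        m, p = J.shape
        s2 = residuals @ residuals / (m - p)      # residual variance estimate
        C = s2 * np.linalg.inv(J.T @ J)           # approximate parameter covariance
        scale = p * stats.f.ppf(1 - alpha, p, m - p)
        return C, scale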
The ability to integrate metal and semiconductor micro-systems to perform highly complex functions, such as RF-MEMS, will depend on developing freestanding metal structures that offer improved conductivity, reflectivity, and mechanical properties. Three issues have prevented the proliferation of these systems: (1) warpage of active components due to through-thickness stress gradients, (2) limited component lifetimes due to fatigue, and (3) low yield strength. To address these issues, we focus on developing and implementing techniques to enable the direct study of the stress and microstructural evolution during electrodeposition and mechanical loading. The study of stress during electrodeposition of metal thin films is being accomplished by integrating a multi-beam optical stress sensor into an electrodeposition chamber. By coupling the in-situ stress information with ex-situ microstructural analysis, a scientific understanding of the sources of stress during electrodeposition will be obtained. These results are providing a foundation upon which to develop a stress-gradient-free thin film directly applicable to the production of freestanding metal structures. The issues of fatigue and yield strength are being addressed by developing novel surface micromachined tensile and bend testers, by interferometry, and by TEM analysis. The MEMS tensile tester has a "Bosch" etched hole to allow for direct viewing of the microstructure in a TEM before, during, and after loading. This approach allows for the quantitative measurements of stress-strain relations while imaging dislocation motion, and determination of fracture nucleation in samples with well-known fatigue/strain histories. This technique facilitates the determination of the limits for classical deformation mechanisms and helps to formulate a new understanding of the mechanical response as the grain sizes are refined to a nanometer scale. Together, these studies will result in a science-based infrastructure to enhance the production of integrated metal-semiconductor systems and will directly impact RF MEMS and LIGA technologies at Sandia.
Outbreaks of infectious agricultural diseases, whether naturally occurring or intentionally introduced, could have catastrophic impacts on the U.S. economy. Examples of such agricultural pathogens include foot-and-mouth disease (FMD), avian influenza (AI), citrus canker, and wheat and soy rust. Current approaches to mitigating the spread of agricultural pathogens include quarantine, development of vaccines for animal diseases, and development of pathogen-resistant crop strains in the case of plant diseases. None of these approaches is rapid, and none addresses the potential persistence of the pathogen in the environment, which could lead to further spread of the agent and further damage after quarantine is lifted. Pathogen spread in agricultural environments commonly occurs via transfer on agricultural equipment (transportation trailers, tractors, trucks, combines, etc.) having components made from a broad range of materials (galvanized and painted steel, rubber tires, glass and Plexiglas shields, etc.), and under conditions of heavy organic load (mud, soil, feces, litter, etc.). A key element of stemming the spread of an outbreak is to ensure complete inactivation of the pathogens in the agricultural environment and on the equipment used in those environments. Through the combination of enhanced agricultural pathogen decontamination chemistry and a validated inactivation verification methodology, important components of a robust response capability will be enabled. Because of the potentially devastating economic impact that could result from the spread of infectious agricultural diseases, these capability components will promote critical infrastructure protection and greater border and food supply security. We investigated and developed agricultural pathogen decontamination technologies to reduce the threat of infectious-agent spread and thus enhance agricultural biosecurity. Specifically, enhanced-detergency versions of the patented Sandia decontamination chemistry were developed and tested against surrogate pathogens under conditions of relatively heavy organic load. Tests were conducted on surfaces commonly found in agricultural environments. Wide-spectrum decontamination efficacy, low corrosivity, and biodegradability were addressed in developing the enhanced-detergency formulation. A method for rapid assessment of loss of pathogenic activity (inactivation) was also assessed. This enhanced technology will enable rapid assessment of contamination following an intentional event and will also be extremely useful in routine assessment of agricultural environments. The primary effort during the second year was progress toward a demonstration of both decontamination and viral inactivation technologies for foot-and-mouth disease virus (FMDv) using the modified SNL chemistry developed through this project. Lab studies using a surrogate virus (bovine enterovirus) were conducted using DF200, modified DF200 chemistry, and decontaminants currently recommended for use in heavily organically loaded agricultural environments (VirkonS, 10% bleach, sodium hydroxide, and citric acid). Tests using actual FMD virus will be performed at the Department of Homeland Security's Plum Island facilities in the fall of 2005. Success and the insight gained from this project will lead to an enhanced response capability, benefiting agencies such as USDA, DHS, and DOD, as well as the agricultural industry.
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The easily applied makefile system, combined with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking, provides experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend their capabilities to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general-purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Two mitigation strategies, the use of corrosion-resistant alloys (CRAs) for the tubing and the application of a corrosion inhibitor and anti-fouling package in the water, were used in a laboratory simulation of corrosion in large oil coolers at the U.S. Strategic Petroleum Reserve. A closed-loop, recirculating system was designed and constructed, and the corrosion sensors were monitored over time using a commercially available linear polarization resistance (LPR) meter. The ERW steel exhibited significant localized attack along the entire weld root, in addition to pitting along the rest of the surface similar to that observed on the seamless tubing.
Sandia National Laboratories, CA proposed a sensor concept to detect emissions from open-burning/open-detonation (OB/OD) events. The system would serve two purposes: (1) provide data to demilitarization operations about process efficiency, allowing process optimization for cleaner emissions and higher efficiency; and (2) provide data to regulators and neighboring communities about materials dispersed into the environment by OB/OD operations. The proposed sensor system uses instrument control hardware and data visualization software developed at Sandia National Laboratories to link together an array of sensors that monitor emissions from OB/OD events. The suite of sensors would consist of various physical and chemical detectors mounted on stationary or mobile platforms. The individual sensors would be wirelessly linked to one another and controlled through a central command center. Real-time data collection from the sensors, combined with integrated visualization of the data at the command center, would allow feedback to the sensors to alter operational conditions in response to changing needs (e.g., moving plume position, increased spatial resolution, increased sensitivity). This report presents a systems study of the problem of implementing a sensor system for monitoring OB/OD emissions. The goal of this study was to gain a fuller understanding of the political, economic, and technical issues involved in developing and fielding this technology.
Tucker, W.T.; Ferson, Scott; Hajagos, Janos; Myers, David S.
Constructor is software for the Microsoft Windows environment that facilitates the collation of empirical information and expert judgment to specify probability distributions, probability boxes, random sets, or Dempster-Shafer structures for uncertain unidimensional quantities. Specifications may be built from data, qualitative shape information, constraints on moments, order statistics, densities, and coverage probabilities. These quantities may be real-valued, integer-valued, or logical.
A joint experimental and computational study was performed to evaluate the capability of the Sandia fire code VULCAN to predict thermocouple response temperature. Temperatures recorded by an Inconel-sheathed thermocouple inserted into a near-adiabatic flat flame were compared with companion VULCAN simulations. The predicted thermocouple temperatures were within 6% of the measured values, with the error primarily attributable to uncertainty in Inconel 600 emissivity and to axial conduction losses along the length of the thermocouple assembly. Hence, it is recommended that future thermocouple models (for Inconel-sheathed designs) include a correction for axial conduction. Given the good agreement between experiment and simulation, it is also recommended that the analysis be repeated for thermocouples in flames containing pollutants such as soot.
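To illustrate why sheath emissivity matters, the following minimal sketch (a generic steady-state energy balance with placeholder values; not the VULCAN thermocouple model) corrects a thermocouple reading for radiation loss while neglecting axial conduction, the very term recommended for addition above:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2-K^4

def gas_temperature(T_tc, T_surr, h, emissivity):
    """Gas temperature (K) implied by a thermocouple reading T_tc,
    balancing convection against net radiation to the surroundings:
    h*(T_gas - T_tc) = eps*sigma*(T_tc**4 - T_surr**4).
    Axial conduction along the sheath is neglected here."""
    return T_tc + emissivity * SIGMA * (T_tc**4 - T_surr**4) / h

# Placeholder example: 1600 K reading, 300 K surroundings,
# h = 400 W/m^2-K, Inconel emissivity ~0.8.
print(gas_temperature(1600.0, 300.0, 400.0, 0.8))
```

Because the radiation term scales with the fourth power of the junction temperature, small uncertainty in emissivity translates into a large uncertainty in the inferred gas temperature.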
Gilmore, Walter E.; Bennett, Thomas C.; Brannon, Nathan G.
Several nuclear weapons programs have pursued or are pursuing the implementation of multi-unit operations for tasks such as disassembly, inspection, and rebuild. A multi-unit operation is interpreted to mean the execution of nuclear explosive operating procedures in a single facility by two separate teams of technicians. The institution of a multi-unit operations program requires careful consideration of the tools, resources, and environment provided to the technicians carrying out the work. Therefore, a systematic approach is necessary to produce safe, secure, and reliable processes. In order to facilitate development of a more comprehensive multi-unit operations program, the current work details categorized issues that should be addressed prior to the implementation of multi-unit operations in a given weapons program. The issues have been organized into the following categories: local organizational conditions; work process flow, material handling, and workplace configuration; ambient environmental conditions; documented safety analysis; and training.
Sandia National Laboratories, New Mexico (SNL/NM) is a government-owned, contractor-operated facility owned by the U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA) and managed by the Sandia Site Office (SSO), Albuquerque, New Mexico. Sandia Corporation, a wholly-owned subsidiary of Lockheed Martin Corporation, operates SNL/NM. This annual report summarizes data and the compliance status of Sandia Corporation's environmental protection and monitoring programs through December 31, 2004. Major environmental programs include air quality, water quality, groundwater protection, terrestrial surveillance, waste management, pollution prevention (P2), environmental restoration (ER), oil and chemical spill prevention, and the National Environmental Policy Act (NEPA). Environmental monitoring and surveillance programs are required by DOE Order 450.1, Environmental Protection Program (DOE 2005) and DOE Order 231.1A, Environment, Safety, and Health Reporting (DOE 2004a).
Tonopah Test Range (TTR) in Nevada and Kauai Test Facility (KTF) in Hawaii are government-owned, contractor-operated facilities operated by Sandia Corporation, a subsidiary of Lockheed Martin Corporation. The U.S. Department of Energy (DOE), National Nuclear Security Administration (NNSA), through the Sandia Site Office (SSO) in Albuquerque, NM, manages TTR and KTF operations. Sandia Corporation conducts operations at TTR in support of DOE/NNSA's Weapons Ordnance Program and has operated the site since 1957. Westinghouse Government Services subcontracts to Sandia Corporation in administering most of the environmental programs at TTR. Sandia Corporation operates KTF as a rocket preparation, launching, and tracking facility. This Annual Site Environmental Report (ASER) summarizes data and the compliance status of the environmental protection and monitoring program at TTR and KTF through Calendar Year (CY) 2004. Compliance is assessed against the state and federal regulations applicable at these sites, governing air emissions, wastewater effluent, waste management, terrestrial surveillance, and Environmental Restoration (ER) cleanup activities. Sandia Corporation is responsible only for those environmental program activities related to its operations. The DOE/NNSA Nevada Site Office (NSO) retains responsibility for the cleanup and management of ER TTR sites. Currently, there are no ER sites at KTF. Environmental monitoring and surveillance programs are required by DOE Order 450.1, Environmental Protection Program (DOE 2005) and DOE Order 231.1A, Environment, Safety, and Health Reporting (DOE 2004b).
When two electrodes are in close proximity in a dielectric liquid, application of a voltage pulse can produce a spark discharge between them, resulting in a small amount of material removal from both electrodes. Pulsed application of the voltage at discharge energies in the range of microjoules results in the continuous material removal process known as micro-electro-discharge machining (micro-EDM). Spark erosion by micro-EDM provides significant opportunities for producing small features and micro-components such as nozzle holes, slots, shafts, and gears in virtually any conductive material. If the speed and precision of micro-EDM processes can be significantly enhanced, they have the potential to be used for a wide variety of micro-machining applications, including fabrication of microelectromechanical system (MEMS) components. Toward this end, a better understanding of the impact of the various machining parameters on material removal has been established through a single-discharge study of micro-EDM and a parametric study of small-hole making by micro-EDM. The main avenues for improving the speed and efficiency of the micro-EDM process are more controlled pulse generation in the power supply and more controlled positioning of the tool electrode during machining. Further investigation of the micro-EDM process in three dimensions leads to important design rules, specifically for the smallest feature size attainable by the process.
Finned bodies of revolution firing lateral jets in flight may experience lower spin rates than predicted. This reduction in spin rate is a result of vortices generated by the interaction between the lateral jets and freestream air flowing past the body. The vortices change the pressure distribution on the fins, inducing a counter torque that opposes the desired spin. Wind tunnel data measuring roll torque and fin pressures were collected for a full-scale model at varying angle of attack, roll angle, airspeed, and jet strength. The current analysis builds upon previously written code that computes torque by integrating pressure over the fin surfaces at 0° angle of attack. The code was modified to investigate the behavior of counter torque at different angles of attack and roll angles as a function of J, the ratio of jet dynamic pressure to freestream dynamic pressure. Numerical error analysis was applied to all data to assist with interpretation of results. Results show that agreement between balance and fin-pressure counter torque at 0° angle of attack was not as close as previously believed. Counter torque at 4° angle of attack was higher than at 0°, and agreement between balance and fin-pressure counter torque was closer. Plots of differential fin pressure coefficient revealed a region of high pressure at the leading edge and an area of low pressure over the center and aft regions of the tapped surface. Large differences in the counter-torque coefficient were found between various freestream dynamic pressures, especially at Mach 0.95 and 1.1. Roll angle had a significant effect only for cases at angle of attack, where it caused counter torque to change unpredictably.
Energy planning represents an investment-decision problem. Investors commonly evaluate such problems using portfolio theory to manage risk and maximize portfolio performance under a variety of unpredictable economic outcomes. Energy planners need to similarly abandon their reliance on traditional, "least-cost" stand-alone technology cost estimates and instead evaluate conventional and renewable energy sources on the basis of their portfolio cost--their cost contribution relative to their risk contribution to a mix of generating assets. This report describes essential portfolio-theory ideas and discusses their application in the western US region. The report illustrates how electricity-generating mixes can benefit from additional shares of geothermal and other renewables. Compared to fossil-dominated mixes, efficient portfolios reduce generating cost while including greater renewables shares in the mix, which enhances energy security. Though counter-intuitive, the idea that adding more costly geothermal can actually reduce portfolio generating cost is consistent with basic finance theory. An important implication is that in dynamic and uncertain environments, the relative value of generating technologies must be determined not by evaluating alternative resources, but by evaluating alternative resource portfolios. The optimal results for the western US region indicate that, compared to the EIA target mixes, there exist generating mixes with larger geothermal shares at equal or lower expected cost and risk.
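A minimal sketch of the portfolio arithmetic behind this argument (illustrative numbers only, not data from this report): expected mix cost is the share-weighted cost, while mix risk depends on cost correlations, so a higher-cost but weakly correlated resource such as geothermal can reduce overall risk:

```python
import numpy as np

# Hypothetical generating costs (cents/kWh) and cost risk
# (standard deviation of holding-period cost changes).
cost = np.array([4.0, 5.5])    # [fossil, geothermal] - placeholders
risk = np.array([0.15, 0.05])  # fuel-price risk vs. capital-heavy resource
rho = 0.1                      # low correlation between the two

def portfolio_cost_risk(w):
    """Expected cost and risk of a two-asset generating mix."""
    c = w @ cost
    var = (w[0]*risk[0])**2 + (w[1]*risk[1])**2 \
          + 2.0*rho*w[0]*w[1]*risk[0]*risk[1]
    return c, np.sqrt(var)

for share in (0.0, 0.2, 0.4):
    w = np.array([1.0 - share, share])
    c, r = portfolio_cost_risk(w)
    print(f"geothermal share {share:.0%}: cost {c:.2f}, risk {r:.3f}")
```

The printed sequence shows risk falling as the geothermal share rises, even though the expected cost of the mix increases, which is the efficient-frontier tradeoff described above.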
IFP V4.0 is the fourth generation of an extraordinarily powerful and flexible image formation processor for spotlight mode synthetic aperture radar. It has been successfully utilized in processing phase histories from numerous radars and has been instrumental in the development of many new capabilities for spotlight mode SAR. This document provides a brief history of the development of IFP, a full exposition of the signal processing steps involved, and a short user's manual for the software implementing this latest iteration.
A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the auto-spectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the auto-spectral density matrix of the outputs. If the auto-spectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input auto-spectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transient, and deterministic waveforms.
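For reference, the relation underlying these derivations can be stated compactly (notation assumed here, not taken from the report). For a linear system with frequency response matrix \(H(f)\), the input and output cross-spectral density matrices are related by

\[
S_{yy}(f) \;=\; H(f)\, S_{xx}(f)\, H^{\mathsf{H}}(f),
\]

so with the input autospectra (the diagonal of \(S_{xx}\)) held fixed, \(\operatorname{tr} S_{yy}(f)\) is extremized purely through the phases and coherences of the off-diagonal terms of \(S_{xx}\).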
In hostile ad hoc wireless communication environments, such as battlefield networks, end-node authentication is critical. In a wired infrastructure, this authentication service is typically facilitated by a centrally located "authentication certificate generator" such as a Certificate Authority (CA) server. This centralized approach is ill-suited to meet the needs of mobile ad hoc networks, such as those required by military systems, because of their unpredictable connectivity and dynamic routing. There is a need for a secure and robust approach to mobile node authentication. Current mechanisms either assign a pre-shared key (shared by all participating parties) or require that each node retain a collection of individual keys that are used to communicate with other individual nodes. Both of these approaches have scalability issues and allow a single compromised node to jeopardize the entire mobile node community. In this report, we propose replacing the centralized CA with a distributed CA whose responsibilities are shared between a set of select network nodes. To that end, we develop a protocol that relies on threshold cryptography to perform the fundamental CA duties in a distributed fashion. The protocol is meticulously defined and implemented in a series of detailed models. Using these models, mobile wireless scenarios were created on a communication simulator to test the protocol in an operational environment and to gather statistics on its scalability and performance.
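As an illustration of the threshold primitive on which such a distributed CA can rest, here is a generic Shamir (t, n) secret-sharing sketch (toy field and toy secret; not the project's protocol or parameters):

```python
import random

P = 2**127 - 1  # a Mersenne prime; real systems use much larger fields

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: any t shares reconstruct the secret,
    fewer reveal nothing. Shares are points on a random degree t-1
    polynomial whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 suffice
```

In a threshold CA built on this idea, the CA signing key is the shared secret, so no single compromised node can issue certificates on its own.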
SAR phase history data represents a polar array in the Fourier space of the scene being imaged. Polar format processing reformats the collected SAR data onto a Cartesian grid for efficient processing and image formation. In a real-time system, this reformatting or "re-gridding" operation is the most processing intensive, consuming the majority of the processing time; it is also a source of error in the final image. Therefore, any effort that reduces processing time without degrading image quality is valuable. This document proposes a new way of implementing real-time polar format processing through a variation on the traditional interpolation/2-D Fast Fourier Transform (FFT) algorithm. The proposed change is based upon the frequency scaling property of the Fourier transform, which allows interpolation to be performed after the azimuth FFT. A post-azimuth-FFT interpolation improves overall image quality and potentially enables a more efficient implementation of the polar format image formation process.
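The frequency scaling property invoked here is, in one dimension (stated in assumed notation): if \(g(t) \leftrightarrow G(f)\), then

\[
g(at) \;\longleftrightarrow\; \frac{1}{|a|}\, G\!\left(\frac{f}{a}\right),
\]

i.e., a coordinate scaling applied on one side of the transform rescales the axis on the other side, which is the kind of axis remapping that polar-format re-gridding exploits when the interpolation is deferred until after the azimuth FFT.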
A series of three pressurized sulfuric acid decomposition tests was performed to (1) obtain data on the fraction of sulfuric acid catalytically converted to sulfur dioxide, oxygen, and water as a function of temperature and pressure, (2) demonstrate real-time measurements of acid conversion for use in process control, (3) obtain multiple measurements of conversion as a function of temperature within a single experiment, and (4) assess rapid quenching to minimize corrosion of metallic components by undecomposed acid. All four of these objectives were successfully accomplished. This report documents the completion of the NHI milestone on high-pressure H₂SO₄ decomposition tests for the Sulfur-Iodine (SI) thermochemical cycle project. All heated sections of the apparatus (i.e., the boiler, decomposer, and condenser) were fabricated from Hastelloy C276. A ceramic acid injection tube and a ceramic-sheathed thermocouple were used to minimize corrosion of the boiler surfaces by hot liquid acid. Negligible fracturing of the platinum-on-zirconia catalyst was observed in the high-temperature decomposer. Temperature measurements at the exit of the decomposer and at the entry of the condenser indicated that the hot acid vapors were rapidly quenched from about 400 °C to less than 20 °C within a 14 cm length of the flow path. Real-time gas flow rate measurements of the decomposition products provided a direct measurement of acid conversion. Pressure in the apparatus was preset by a pressure-relief valve that worked well at controlling the system pressure. However, these valves sometimes underwent abrupt transitions that resulted in rapidly varying gas flow rates with concomitant variations in the acid conversion fraction.
Semantic graphs offer one promising avenue for intelligence analysis in homeland security. They provide a mechanism for describing a wide variety of relationships between entities of potential interest. The vertices are nouns of various types, e.g., people, organizations, and events. Edges in the graph represent different types of relationships between entities, e.g., 'is friends with' or 'belongs to'. Semantic graphs offer a number of potential advantages as a knowledge representation system. They allow information of different kinds, and collected in differing ways, to be combined in a seamless manner. A semantic graph is a very compressed representation of some of the relationship information. It has been reported that a semantic graph can be two orders of magnitude smaller than the processed intelligence data. This allows for much larger portions of the data universe to be resident in computer memory. Many intelligence queries that are relevant to the terrorist threat are naturally expressed in the language of semantic graphs. One example is the search for 'interesting' relationships between two individuals or between an individual and an event, which can be phrased as a search for short paths in the graph. Another example is the search for an analyst-specified threat pattern, which can be cast as an instance of subgraph isomorphism. It is important to note that many kinds of analysis are not relationship based, so these are not good candidates for semantic graphs. Thus, a semantic graph should always be used in conjunction with traditional knowledge representation and interface methods. Operations that involve looking for chains of relationships (e.g., friend of a friend) are not efficiently executable in a traditional relational database. However, the semantic graph can be thought of as a pre-join of the database, and it is ideally suited for these kinds of operations. Researchers at Sandia National Laboratories are working to facilitate semantic graph analysis. Since intelligence datasets can be extremely large, the focus of this work is on the use of parallel computers. We have been working to develop scalable parallel algorithms that will be at the core of a semantic graph analysis infrastructure. Our work has involved two different thrusts, corresponding to two different computer architectures. The first architecture of interest is distributed memory, message passing computers. These machines are ubiquitous and affordable, but they are challenging targets for graph algorithms. Much of our distributed-memory work to date has been collaborative with researchers at Lawrence Livermore National Laboratory and has focused on finding short paths on distributed memory parallel machines. Our implementation on 32K processors of BlueGene/Light finds shortest paths between two specified vertices in just over a second for random graphs with 4 billion vertices.
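For scale, the serial kernel being parallelized here is essentially breadth-first search; a minimal sketch on a toy adjacency-list graph (not the distributed-memory implementation) is:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Unweighted shortest path by breadth-first search.
    adj maps each vertex to its list of neighbors."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:                 # walk back to recover the path
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:      # first visit fixes the parent
                parent[v] = u
                q.append(v)
    return None                      # no path exists

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(shortest_path(adj, "a", "d"))  # ['a', 'b', 'd']
```

The distributed-memory challenge is that each frontier expansion touches edges owned by other processors, so the queue and visited set must be partitioned and synchronized across the machine.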
The deployment of the Joint Technical Operations Team (JTOT) is evolving toward a lean and mobile response team. As a result, opportunities to support more rapid mobilization are being investigated. This study investigates three specific opportunities: (1) the potential of using standard firefighting equipment to support deployment of the aqueous foam concentrate (AFC-380); (2) the feasibility and needs for regional staging of equipment to reduce the inventory currently mobilized during a JTOT response; and (3) the feasibility and needs for development of the next-generation AFC-380 to reduce the volume of foam concentrate required for a response. This study supports the need to ensure that requirements for alternative deployment schemes are understood and in place to support improved response activities.
The Advanced Concepts Group of Sandia National Laboratories hosted a workshop, "FOILFest: Community Enabled Security," on July 18-21, 2005, in Albuquerque, NM. This was a far-reaching look into the future of physical protection, consisting of a series of structured brainstorming sessions focused on preventing and foiling attacks on public places and soft targets such as airports, shopping malls, hotels, and public events. These facilities are difficult to protect using traditional security devices, since arduous and expensive security measures could easily push them out of business. The idea behind this Fest was to explore how the public, which is vital to the function of these institutions, can be leveraged as part of a physical protection system. The workshop considered procedures, space design, and approaches for building community through technology. It explored ways to make the "good guys" in public places feel safe and be vigilant while making potential perpetrators of harm feel exposed and convinced that they will not succeed. Participants in the Fest included operators of public places, social scientists, technology experts, representatives of government agencies (including DHS and the intelligence community), writers, and media experts. Many innovative ideas were explored during the Fest, with most of the time spent on airports, including consideration of the local airport, the Albuquerque Sunport. Some provocative ideas included: (1) sniffers installed in passage areas such as revolving doors and escalators; (2) a "jumbotron" showing current camera shots in the public space; (3) transparent portal screeners allowing viewing of the screening; (4) a layered open/funnel/open/funnel design in which open spaces encourage a sense of "communitas" and take advantage of citizen "sensing," while funnels are technological tunnels of sensors (the "tunnels of truth"); (5) curved benches with blast-proof walls or backs; (6) making it easy for the public to report even uncertain "non-events" (e.g., "I'm uncomfortable") and processing those reports in aggregate rather than individually; (7) transforming the resident working population into a part-time undercover security/sensor force through more innovative training; and (8) adding ambassadors/security personnel who engage in unexpected conversation with the public. The group recommended the following next actions: (a) develop a concept for a mobile sensor transport (JMP); (b) conduct a follow-on workshop; (c) conduct social experiments/activities to see how people would react to the concepts related to community and security; (d) further explore aesthetically pleasing, blast-resistant seating areas; and (e) pursue The Art of Freedom (an educational, multi-media campaign).
The RoboHound™ Project was a three-year, multiphase project at Sandia National Laboratories to build and refine a working prototype trace explosive detection system as a tool for a commercial robot. The RoboHound system was envisioned as a tool for emergency responders to test suspicious items (i.e., packages or vehicles) for explosives while maintaining a safe distance. The project investigated combining Sandia's expertise in trace explosives detection with a wheeled robotic platform that could be programmed to interrogate suspicious items remotely for the presence of explosives. All of the RoboHound field tests were successful, especially with regard to the ability to collect and detect trace samples of RDX. The project progressed from remote sampling with human intervention to a fully automatic system that requires no human intervention until the robot returns from a sortie. A proposal is being made for additional work leading toward commercialization.
It is commonly believed that scale-free networks are robust to massive numbers of random node deletions. For example, Cohen et al. in (1) study scale-free networks including some which approximate the measured degree distribution of the Internet. Their results suggest that if each node in this network failed independently with probability 0.99, most of the remaining nodes would still be connected in a giant component. In this paper, we show that a large and important subclass of scale-free networks are not robust to massive numbers of random node deletions. In particular, we study scale-free networks which have minimum node degree of 1 and a power-law degree distribution beginning with nodes of degree 1 (power-law networks). We show that, in a power-law network approximating the Internet's reported distribution, when the probability of deletion of each node is 0.5 only about 25% of the surviving nodes in the network remain connected in a giant component, and the giant component does not persist beyond a critical failure rate of 0.9. The new result is partially due to improved analytical accommodation of the large number of degree-0 nodes that result after node deletions. Our results apply to power-law networks with a wide range of power-law exponents, including Internet-like networks. We give both analytical and empirical evidence that such networks are not generally robust to massive random node deletions.
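This kind of claim is straightforward to check empirically; the sketch below (using networkx and a configuration model with illustrative parameters, not this paper's exact setup) measures the surviving giant-component fraction after random node deletion:

```python
import random
import networkx as nx

def powerlaw_degrees(n, alpha, kmin=1, kmax=1000):
    """Sample integer degrees k >= kmin with P(k) ~ k**(-alpha)
    by weighted sampling on a truncated support."""
    ks = list(range(kmin, kmax + 1))
    weights = [k ** (-alpha) for k in ks]
    return random.choices(ks, weights=weights, k=n)

def giant_fraction_after_deletion(n=100_000, alpha=2.3, p_del=0.5):
    degs = powerlaw_degrees(n, alpha)
    if sum(degs) % 2:                 # configuration model needs even sum
        degs[0] += 1
    g = nx.Graph(nx.configuration_model(degs))  # drop parallel edges
    g.remove_edges_from(nx.selfloop_edges(g))
    survivors = [v for v in g if random.random() > p_del]
    g = g.subgraph(survivors)
    if g.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(g), key=len)
    return len(giant) / g.number_of_nodes()

print(giant_fraction_after_deletion())  # fraction of survivors connected
```

With minimum degree 1 and an Internet-like exponent, runs of this kind exhibit the fragmentation described above rather than the robustness commonly attributed to scale-free graphs.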
This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled "Advancements in Sensing and Perception using Structured Lighting Techniques." There is an ever-increasing need for robust, autonomous ground vehicles for counterterrorism and defense missions. Despite nearly 30 years of government-sponsored research, it is undisputed that significant advancements in sensing and perception are still necessary. We developed an innovative, advanced sensing technology for national security missions serving the Department of Energy, the Department of Defense, and other government agencies. The principal goal of this project was to develop an eye-safe, robust, low-cost, lightweight, 3D structured lighting sensor for use in broad-daylight outdoor applications. The market for this technology is wide open due to the unavailability of such a sensor. Currently available laser scanners are slow, bulky, heavy, expensive, fragile, short-range, sensitive to vibration (highly problematic for moving platforms), and unreliable for outdoor use in bright sunlight. Eye safety is a primary concern for currently available laser-based sensors. Passive stereo-imaging sensors are available for 3D sensing but suffer from several limitations: they are computationally intensive, require a lighted environment (natural or man-made), and do not work for many scenes or regions lacking texture or with ambiguous texture. Our approach leveraged the advanced capabilities of modern CCD camera technology and Center 6600's expertise in 3D world modeling, mapping, and analysis using structured lighting. We have a diverse customer base for indoor mapping applications, and this research extends our current technology's lifecycle and opens a new market for outdoor 3D mapping. Applications include precision mapping, autonomous navigation, dexterous manipulation, surveillance and reconnaissance, part inspection, geometric modeling, laser-based 3D volumetric imaging, simultaneous localization and mapping (SLAM), aiding first responders, and supporting soldiers with helmet-mounted LADAR for 3D mapping in urban-environment scenarios. The technology developed in this LDRD overcomes the limitations of current laser-based 3D sensors and contributes to the realization of intelligent machine systems, reducing manpower needs.
Modeling microscale heat transfer with the computational-heat-transfer code Calore is discussed. Microscale heat transfer problems differ from their macroscopic counterparts in that conductive heat transfer in both solid and gaseous materials may have important noncontinuum effects. In a solid material, three noncontinuum effects are considered: ballistic transport of phonons across a thin film, scattering of phonons from surface roughness at a gas-solid interface, and scattering of phonons from grain boundaries within the solid material. These processes are modeled for polycrystalline silicon, and the thermal-conductivity values predicted by these models are compared to experimental data. In a gaseous material, two noncontinuum effects are considered: ballistic transport of gas molecules across a thin gap and accommodation of gas molecules to solid conditions when reflecting from a solid surface. These processes are modeled for arbitrary gases by allowing the gas and solid temperatures across a gas-solid interface to differ: a finite heat transfer coefficient (contact conductance) is imposed at the gas-solid interface so that the temperature difference is proportional to the normal heat flux. In this approach, the behavior of gas in the bulk is not changed from behavior observed under macroscopic conditions. These models are implemented in Calore as user subroutines. The user subroutines reside within Sandia's Source Forge server, where they undergo version control and regression testing and are available to analysts needing these capabilities. A Calore simulation is presented that exercises these models for a heated microbeam separated from an ambient-temperature substrate by a thin gas-filled gap. Failure to use the noncontinuum heat transfer models for the solid and the gas causes the maximum temperature of the microbeam to be significantly underpredicted.
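As a rough illustration of the gas-gap treatment described above, one common modeling device (an assumption here, not necessarily Calore's formulation) is to add a mean-free-path-scaled temperature-jump distance to the physical gap when forming the contact conductance:

```python
def gap_conductance(k_gas, gap, mean_free_path, alpha=1.0):
    """Effective conductance (W/m^2-K) of a thin gas layer, blending
    the continuum limit (gap >> mfp: h -> k/gap) with the ballistic
    limit (gap -> 0: h stays finite). A jump length g ~ mfp/alpha is
    added at each wall; alpha plays the role of an accommodation
    factor. This specific blend is a modeling assumption."""
    g = mean_free_path / alpha
    return k_gas / (gap + 2.0 * g)

# Air-like gas across a 1-micron gap: the noncontinuum correction
# to the naive k/gap estimate is already sizable at this scale.
print(gap_conductance(k_gas=0.026, gap=1e-6, mean_free_path=65e-9))
```

The resulting finite conductance is exactly the kind of interface coefficient imposed so that the gas-solid temperature difference is proportional to the normal heat flux.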
Flame heights of co-flowing cylindrical ethylene-air and methane-air laminar inverse diffusion flames were measured. The luminous flame height was found to be greater than the height of the reaction zone determined by planar laser-induced fluorescence (PLIF) of hydroxyl radicals (OH) because of luminous soot above the reaction zone. However, the location of the peak luminous signals along the centerline agreed very well with the OH flame height. Flame height predictions using Roper's analysis for circular port burners agreed with measured reaction zone heights when using values for the characteristic diffusion coefficient and/or diffusion temperature somewhat different from those recommended by Roper. The fact that Roper's analysis applies to inverse diffusion flames is evidence that inverse diffusion flames are similar in structure to normal diffusion flames.
Gorman, Anna K.; Wilson, Dominique; Clark, Katherine
Sandia National Laboratories has developed a portfolio of programs to address the critical skills needs of the DP labs, as identified by the 1999 Chiles Commission Report. The goals are to attract and retain the best and brightest students and transition them into employees of Sandia and the DP Complex. The US Department of Energy/Defense Programs University Partnerships funded ten laboratory critical skills development programs in FY04. This report provides a qualitative and quantitative evaluation of these programs and their status.
This Quality Assurance Project Plan (QAPP) applies to the Environmental Monitoring Program at Sandia National Laboratories/California (SNL/CA). This QAPP follows the DOE Quality Assurance Management System Guide for Use with 10 CFR 830 Subpart A, Quality Assurance Requirements, and DOE O 414.1C, Quality Assurance (DOE G 414.1-2A, June 17, 2005). The Environmental Monitoring Program is located within the Environmental Operations Department, which is responsible for ensuring that SNL/CA operations have minimal impact on the environment. The Department provides guidance to line organizations to help them comply with applicable environmental regulations and DOE orders. To fulfill its mission, the Department has groups responsible for waste management, pollution prevention, air quality, environmental planning, hazardous materials management, and environmental monitoring. The Environmental Monitoring Program is responsible for ensuring that SNL/CA complies with all federal, state, and local regulations and with DOE orders regarding the quality of wastewater and stormwater discharges. The Program monitors these discharges both visually and through effluent sampling, ensures that activities at the SNL/CA site do not negatively impact the quality of surface waters in the vicinity or those of the San Francisco Bay, and verifies that wastewater and stormwater discharges comply with established standards and requirements. The Program is also responsible for groundwater monitoring and for the regulatory compliance of underground and aboveground storage tanks. The Program prepares numerous reports, plans, permit applications, and other documents that demonstrate compliance.
Stakeholders often have competing interests when selecting or planning new power plants. The purpose of this preliminary Electricity Portfolio Simulation Model (EPSim) is to provide a first-cut, dynamic methodology, one that can subsequently be refined and validated, to help energy planners, policy makers, and energy students better understand the tradeoffs associated with competing electricity portfolios. EPSim allows the user to explore competing electricity portfolios annually from 2002 to 2025 in terms of five criteria: cost, environmental impacts, energy dependence, health and safety, and sustainability. Four additional criteria (infrastructure vulnerability, service limitations, policy needs, and science and technology needs) may be added in future versions of the model. Using an analytic hierarchy process (AHP) approach, users or groups of users apply weights to each of the criteria. The default energy assumptions of the model mimic the Department of Energy's (DOE) electricity portfolio to 2025 (EIA, 2005). At any time, the user can compare alternative portfolios to this reference case portfolio.
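A minimal sketch of the AHP weighting step (a generic principal-eigenvector computation with an illustrative three-criterion comparison matrix; not EPSim code):

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise-comparison matrix,
    taken as the principal eigenvector normalized to sum to 1.
    pairwise[i, j] states how much more important criterion i is
    than criterion j (reciprocal matrix: pairwise[j, i] = 1/pairwise[i, j])."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)
    return w / w.sum()

# Illustrative 3-criterion example: cost, environment, energy dependence.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A).round(3))  # roughly [0.65, 0.23, 0.12]
```

Each user's (or group's) comparison matrix yields a weight vector, and the weighted criteria scores then rank the competing portfolios against the reference case.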
A laser safety and hazard analysis is presented for the Coherent®-driven Acculite® laser central to the Sandia Remote Sensing System (SRSS). The analysis is based on the 2000 version of the American National Standards Institute's (ANSI) Standard Z136.1, Safe Use of Lasers, and the 2000 version of ANSI Standard Z136.6, Safe Use of Lasers Outdoors. The trailer (B70) based SRSS laser system is a mobile platform used to perform laser interaction experiments and tests at various national test sites; it is generally operated on the United States Air Force Starfire Optical Range (SOR) at Kirtland Air Force Base (KAFB), New Mexico. The laser is used to perform laser interaction testing inside the laser trailer as well as outside the trailer at target sites located at various distances. In order to protect personnel who work inside the Nominal Hazard Zone (NHZ) from hazardous laser exposures, it was necessary to determine the Maximum Permissible Exposure (MPE) for each laser wavelength (wavelength band) and calculate the appropriate minimum Optical Density (ODmin) for the laser safety eyewear used by authorized personnel. In addition, the Nominal Ocular Hazard Distance (NOHD) and the Extended Ocular Hazard Distance (EOHD) are calculated in order to protect unauthorized personnel who may have violated the boundaries of the control area and entered the laser's NHZ during testing outside the trailer.
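For context, minimal sketches of the two quantities named above, using generic ANSI-style formulas (placeholder beam parameters; the MPE itself must come from the tables in Z136.1):

```python
import math

def od_min(exposure, mpe):
    """Minimum optical density so eyewear attenuates the worst-case
    ocular exposure to the MPE: OD = log10(H / MPE)."""
    return math.log10(exposure / mpe)

def nohd(power, mpe, beam_dia, divergence):
    """Nominal Ocular Hazard Distance for a circular beam: the range
    at which irradiance falls to the MPE,
    NOHD = (sqrt(4*P / (pi*MPE)) - a) / phi,
    with exit beam diameter a (m) and full-angle divergence phi (rad)."""
    return (math.sqrt(4.0 * power / (math.pi * mpe)) - beam_dia) / divergence

# Placeholder example: 1 W CW visible beam, MPE of 2.5e-3 W/cm^2
# converted to W/m^2, 2 mm exit beam, 1 mrad divergence.
mpe = 2.5e-3 * 1e4
print(f"NOHD ~ {nohd(1.0, mpe, 2e-3, 1e-3):.0f} m")  # ~220 m
```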
The Sandia National Laboratories, California (SNL/CA) Environmental Management System (EMS) Program Manual documents the elements of the site EMS Program. The SNL/CA EMS Program was developed in accordance with Department of Energy (DOE) Order 450.1 and incorporates the elements of the International Standard on Environmental Management Systems, ISO 14001.
Electromagnetic induction is a classic geophysical exploration method designed for subsurface characterization--in particular, sensing the presence of geologic heterogeneities and fluids such as groundwater and hydrocarbons. Several approaches to the computational problems associated with predicting and interpreting electromagnetic phenomena in and around the earth are addressed herein. Publications resulting from the project include [31]. To obtain accurate and physically meaningful numerical simulations of natural phenomena, computational algorithms should operate in discrete settings that reflect the structure of governing mathematical models. In section 2, the extension of algebraic multigrid methods for the time-domain eddy current equations to the frequency-domain problem is discussed. Software was developed and is available in the Trilinos ML package. In section 3 we consider finite element approximations of De Rham's complex. We describe how to develop a family of finite element spaces that forms an exact sequence on hexahedral grids. The ensuing family of non-affine finite elements is called a van Welij complex, after the work [37] of van Welij, who first proposed a general method for developing tangentially and normally continuous vector fields on hexahedral elements. The use of this complex is illustrated for the eddy current equations and a conservation law problem. Software was developed and is available in the Ptenos finite element package. The more popular methods of geophysical inversion seek solutions to an unconstrained optimization problem by imposing stabilizing constraints in the form of smoothing operators on some enormous set of model parameters (i.e., "over-parametrize and regularize"). In contrast, we investigate an alternative approach whereby sharp jumps in material properties are preserved in the solution by choosing as model parameters a modest set of variables that describe an interface between adjacent regions in physical space. While still over-parametrized, this choice of model space contains far fewer parameters than before, easing the computational burden, in some cases, of the optimization problem. Most importantly, the associated finite element discretization is aligned with the abrupt changes in material properties associated with lithologic boundaries as well as the interface between buried cultural artifacts and the surrounding earth. In section 4, algorithms and tools are described that associate a smooth interface surface to a given triangulation; in particular, the tools support surface refinement and coarsening. Section 5 describes some preliminary results on the application of interface identification methods to model problems in geophysical inversion. Due to time constraints, the results described here use the GNU Triangulated Surface Library for the manipulation of surface meshes and the TetGen software library for the generation of tetrahedral meshes.
Currently, the critical particle properties of pentaerythritol tetranitrate (PETN) that influence deflagration-to-detonation time in exploding bridge wire (EBW) detonators are not known in sufficient detail to allow development of a predictive failure model. The specific surface area (SSA) of many PETN powders has been measured using both permeametry and gas adsorption methods and has been found to have a critical effect on EBW detonator performance. The permeametry measure of SSA is a function of particle shape, packed-bed pore geometry, and particle size distribution (PSD). Yet there is a general lack of agreement in PSD measurements between laboratories, raising concerns regarding collaboration and complicating efforts to understand changes in EBW performance related to powder properties. Benchmarking of data between laboratories that routinely perform detailed PSD characterization of powder samples, and determination of the most appropriate method to measure each PETN powder, are necessary to discern correlations between performance and powder properties and to collaborate with partnering laboratories. To this end, a comparison was made of the PSD measured by three laboratories using their own standard procedures for light scattering instruments. Three PETN powder samples with different surface areas and particle morphologies were characterized. Differences in bulk PSD data generated by each laboratory were found to result from variations in sonication of the samples during preparation. The effect of this sonication was found to depend on the particle morphology of the PETN samples, being deleterious to some samples and, in moderation, advantageous for others. Discrepancies in the submicron-sized particle characterization data were related to an instrument-specific artifact particular to one laboratory. The type of carrier fluid used by each laboratory to suspend the PETN particles for the light scattering measurement had no consistent effect on the resulting PSD data. Finally, the SSA of the three powders was measured using both permeametry and gas adsorption methods, enabling the PSD to be linked to the SSA for these PETN powders. Consistent characterization of other PETN powders can be performed using the appropriate sample-specific preparation method, so that future studies can accurately identify the effect of changes in the PSD on the SSA and ultimately model EBW performance.
To generate data for comparison with the predictions of continuum sintering models for multi-material systems, several types of concentric cylinder samples were sintered to produce damage during sintering. The samples consisted of an outer ring of pressed ceramic powder (alumina or zinc oxide) whose center was either fully or partially filled with a cylinder consisting of either the same powder pressed to a higher green density (fully filled) or previously densified 99% alumina (fully or partially filled). In addition, slots of various lengths were cut in some of the rings, from the outer surface parallel to the cylinder axis; these rings were then fully filled with dense alumina center cylinders and sintered. The types of sintering damage produced as the shrinkage of the rings was constrained by center cylinders that shrank less or not at all included shape deformation, cracking, and possible density-gradient formation. Comparisons of shrinkage measurements on rings fully filled with dense alumina center cylinders indicated that while the presence of the center cylinder increased the thickness and width shrinkage for both materials, the overall densification of the rings was impeded by the decrease in circumferential shrinkage. This effect was more severe for the zinc oxide rings. The cross-sectional shapes of rings sintered either fully or partially filled with dense alumina center cylinders also differed depending on their composition.
An integrated 3D Direct Chill (DC) casting model was used to simulate the heat transfer, fluid flow, solidification, and thermal stress during casting. Temperature measurements were performed in an industrial casting facility to set up and validate the model. Key features, such as heat transfer between the cooling water and the ingot surface as a function of surface temperature and cooling-water flow rate, and air gaps caused by mold and bottom-block design, were also considered in the model. An elasto-viscoplastic constitutive model, determined from mechanical testing, was used to calculate the evolution of stress during casting. The stress evolution was compared at various locations and correlated with physical phenomena associated with the casting process. An Ingot Cracking Index, which represents the ingot's hot-cracking propensity, was established based on the ratio of stress to strength. The Index calculation results were consistent with observations in industrial casting practice.
Surface heat transfer coefficients representing the various regimes of water cooling during the Direct Chill (DC) casting of aluminum 3004 alloy ingots have been calculated using the inverse heat transfer technique. ProCAST, a commercial casting simulation package, which includes heat transfer, fluid flow, solidification, and inverse heat transfer, was used for this effort. Thermocouple data from an experimental casting run, and temperature-dependent thermophysical properties of the alloy were used in the calculation. The use of a structured vs. unstructured mesh was evaluated. The calculated effective heat transfer coefficient, which is a function of temperature and time, covers three water cooling regimes, i.e., convection, nucleate boiling, and film boiling, and the change of water flow rate with time.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Greenberg, Melisa R.; Chen, Weiliang; Pulford, Ben N.; Smolyakov, Gennady A.; Jiang, Ying B.; Bunge, Scott D.; Boyle, Timothy J.; Osiński, Marek
InP quantum dots (QDs) with zinc blende structure and InN QDs with hexagonal structure were synthesized from appropriate organometallic precursors in a noncoordinating solvent using myristic acid as a ligand. The QDs were characterized by TEM, the associated energy dispersive spectroscopy (EDS), electron diffraction, and steady-state UV-VIS optical absorption and photoluminescence spectroscopy. To the best of our knowledge, this paper reports the first synthesis of colloidal InN quantum dots.
A decomposition chemistry and heat transfer model to predict the response of removable epoxy foam (REF) exposed to fire-like heat fluxes is described. The epoxy foam was created using a perfluorohexane blowing agent with a surfactant. The model includes desorption of the blowing agent and surfactant, thermal degradation of the epoxy polymer, polymer fragment transport, and vapor-liquid equilibrium. An effective thermal conductivity model describes changes in thermal conductivity with reaction extent. Pressurization is modeled assuming: (1) no strain in the condensed phase, (2) no resistance to gas-phase transport, (3) spatially uniform stress fields, and (4) no mass loss from the system due to venting. The model has been used to predict mass loss, pressure rise, and decomposition front locations for various small-scale and large-scale experiments performed by others. The framework of the model is suitable for polymeric foams with absorbed gases.
The peridynamic model was introduced by Silling in 1998. In this paper, we demonstrate the application of the quasistatic peridynamic model to two-dimensional, linear elastic, plane stress and plane strain problems, with special attention to the modeling of plain and reinforced concrete structures. We consider just one deviation from linearity--that which arises due to the irreversible sudden breaking of bonds between particles. The peridynamic model starts with the assumption that Newton's second law holds true on every infinitesimally small free body (or particle) within the domain of analysis. A specified force density function between each pair of infinitesimally small particles, called the pairwise force function (with units of force per unit volume per unit volume), is postulated to act if the particles are closer together than some finite distance, called the material horizon. The pairwise force function may be assumed to be a function of the relative position and the relative displacement between the two particles. In this paper, we assume that for two particles closer together than the material horizon, the pairwise force function increases linearly with respect to the stretch, but at some specified stretch the pairwise force function is irreversibly reduced to zero.
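A minimal sketch of the bond model just described (generic code with placeholder constants): the pair force grows linearly with bond stretch and drops irreversibly to zero once a critical stretch is exceeded:

```python
import numpy as np

def pairwise_force(xi, eta, c, s_crit, broken):
    """Bond-based peridynamic pair force density.
    xi     : relative position in the reference configuration
    eta    : relative displacement
    c      : micromodulus (placeholder constant)
    s_crit : critical stretch at which the bond fails
    broken : one-element mutable flag; once True, no force forever."""
    y = xi + eta                                  # deformed relative position
    stretch = (np.linalg.norm(y) - np.linalg.norm(xi)) / np.linalg.norm(xi)
    if broken[0] or stretch > s_crit:
        broken[0] = True                          # irreversible bond breaking
        return np.zeros_like(xi)
    return c * stretch * y / np.linalg.norm(y)    # acts along deformed bond

bond = [False]
f = pairwise_force(np.array([1.0, 0.0]), np.array([0.001, 0.0]),
                   c=1.0e12, s_crit=0.02, broken=bond)
print(f, bond)
```

Summing such pair forces over all bonds within the material horizon of each particle, and enforcing equilibrium, yields the quasistatic problem; cracks emerge naturally as clusters of broken bonds rather than as explicitly tracked surfaces.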
The US Department of Energy requires a periodic assessment of the Microsystems Program at Sandia National Laboratories. An external review of this program is held approximately every 18 to 24 months. The report from the External Review Panel serves as the basis for Sandia's "self assessment" and is a specific deliverable of the governance contract between Lockheed Martin and the Department of Energy. The External Review of Microelectronics and Microsystems for Fiscal Year 2004 was held September 27-29, 2004, at Sandia National Laboratories, Albuquerque, NM. The external review panel consisted of experts in the fields of microelectronics, photonics, and microsystems from universities, industry, and other government agencies. A complete list of the panel members is included as Appendix A of the attached report. The review assessed four areas: relevance to national needs and agency mission; quality of science, technology, and engineering; performance in the operation of a major facility; and program performance, management, and planning. Each of the four areas was rated "outstanding," and Sandia's Microsystems Program thus received an overall rating of "outstanding" [the highest possible rating].
This report describes the features of monolithic, series-connected silicon (Si) photovoltaic (PV) cells developed for applications requiring higher voltages than obtained with conventional single-junction solar cells. These devices are intended to play a significant role in micro/mini firing systems and fuzing systems for DOE and DOD applications. They are also appropriate for other applications (such as microelectromechanical systems (MEMS) actuation, as demonstrated by Bellew et al.) where electric power is required in remote regions and electrical connection to the region is unavailable or deemed detrimental for whatever reason. Our monolithic device consists of a large number of small PV cells combined in series and fabricated using standard CMOS processing on silicon-on-insulator (SOI) wafers with 0.4 to 3 micron thick buried oxide (BOX) and top Si thicknesses of 5 and 10 microns. Individual cell isolation is achieved using the BOX layer of the SOI wafer on the bottom. Isolation along the sides is produced by trenching the top Si and subsequently filling the trench by deposition of dielectric films such as oxide, silicon nitride, or oxynitride. Multiple electrically isolated PV cells are connected in series to produce voltages ranging from approximately 0.5 volts for a single cell to several thousand volts for strings of thousands of cells.
With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and thereby sharpen the assessment of computational accuracy. Such a measure has recently been termed a validation metric. We discuss various features that we believe should be incorporated in a validation metric, as well as features that should be excluded. We develop a new validation metric based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpreted for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
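The core idea, comparing a model prediction against experimental replicates through a confidence interval on the estimated error, can be sketched generically in a few lines of Python. This is an illustration of the confidence-interval concept only, not the report's interpolation- or regression-based metrics; the function name and the 90% default are ours.

    import numpy as np
    from scipy import stats

    def error_confidence_interval(y_exp, y_model, conf=0.90):
        """Confidence interval on the model error at one setting of the
        control variable, given replicate experimental measurements.

        y_exp   : array of replicate experimental measurements
        y_model : scalar model prediction at the same conditions
        conf    : confidence level for the interval
        """
        n = len(y_exp)
        err = np.mean(y_exp) - y_model     # estimated model error
        sem = stats.sem(y_exp)             # standard error of the mean
        half = stats.t.ppf(0.5 + conf / 2.0, n - 1) * sem
        return err - half, err + half      # interval on the true error

An interval that excludes zero indicates a statistically discernible model error at the chosen confidence level, while a wide interval signals that experimental measurement uncertainty, rather than model deficiency, limits the assessment.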
Crude oil storage caverns at the U.S. Strategic Petroleum Reserve (SPR) are solution-mined from subsurface salt domes along the U.S. Gulf Coast. While these salt domes exhibit many attractive characteristics for large-volume, long-term storage of oil, such as low construction cost, low permeability for effective containment of fluids, and secure location deep underground, they also present unique technical challenges for maintaining oil quality within delivery standards. The vapor pressures of the crude oils stored at the SPR tend to increase with storage time due to the combined effects of geothermal heating and gas intrusion from the surrounding salt. This presents a problem for offsite oil delivery because high vapor-pressure oil may lead to excessive atmospheric emissions of hydrocarbon gases that pose explosion hazards, health hazards, and handling problems at atmospheric pressure. Recognizing this potential hazard, the U.S. Department of Energy, owner and operator of the SPR, implemented a crude oil vapor pressure monitoring program that collects vapor pressure data for all the storage caverns. From these data, DOE evaluates the rate of change in the vapor pressures of its oils in the SPR. Moreover, DOE implemented a vapor pressure mitigation program in which the oils are degassed periodically and will be cooled immediately prior to delivery in order to reduce the vapor pressure to safe handling levels. The work described in this report evaluates the entire database since its origin in 1993 and determines the current levels of vapor pressure around the SPR, as well as the rates of change, for purposes of both optimizing the mitigation program and meeting safe delivery standards. Generally, the rate of vapor pressure increase appears to be lower in this analysis than reported in the past, and problematic gas intrusion seems to be limited to just a few caverns. That said, much of the current SPR inventory exceeds vapor pressure delivery guidelines and must be degassed and cooled in order to meet current delivery standards.
This report documents the results of a six-month test program of an Alternative Configuration (ACONF) power management system design for a typical United States Coast Guard (USCG) National Distress System (NDS) site. The USCG/USDOE-funded work was performed at Sandia National Laboratories to evaluate the effect of a Sandia-developed battery management technology known as ACONF on the performance of energy storage systems at NDS sites. This report demonstrates the propane fuel savings and battery performance improvements realized with the new ACONF designs. These fuel savings and battery performance improvements would be applicable to all current NDS sites in the field. The inherent savings realized when using the ACONF battery management design were found to be significant when compared to the costs of battery replacement and propane refueling at the remote NDS sites.
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the ''once-through'' time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the ''once-through'' time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing constitute the largest portion of an analyst's effort, especially for structural problems, and offer significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase ''inner loop'' iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
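The interplay among the three levers named above (once-through time, iteration probability, and rework fraction) can be illustrated with a simple geometric-iteration model. This is our own sketch, not the DART team's community model, and the parameter names are illustrative.

    def expected_step_time(t_once, p_iter, rework_frac):
        """Expected total time in one process step when each pass triggers a
        backward iteration with probability p_iter (< 1) and each repeat
        pass costs rework_frac * t_once (geometric-iteration sketch)."""
        expected_repeats = p_iter / (1.0 - p_iter)   # mean number of repeats
        return t_once * (1.0 + rework_frac * expected_repeats)

    # expected_step_time(10.0, 0.5, 0.6) -> 16.0: a 10-hour step with a 50%
    # iteration probability and a 60% rework fraction costs 16 hours on
    # average. Reducing any of the three parameters lowers the expected time,
    # consistent with the report's comparison of the three improvement levers.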
This report examines the problem of an antenna radiating from a cylindrical hole in the earth and the subsequent far-zone field produced in the upper air half space. The approach used for this analysis was first to examine propagation characteristics along the hole for a range of surrounding geologic material properties. Three cases of sand with varying moisture content were considered as the material surrounding the hole. For the hole diameters and sand cases examined, radiation through the earth medium was found to be the dominant contribution to the field transmitted into the upper half space. In the analysis presented, the radiation from a vertical and a horizontal dipole source within the hole is used to determine a closed-form expression for the radiation in the earth medium, which represents a modified element factor for the source and hole combination. As the final step, the well-known results for a dipole below a half space, in conjunction with the use of Snell's law to transform the modified element factor to the upper half space, determine closed-form expressions for the far-zone radiated fields in the air region above the earth.
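The Snell's-law step in that final transformation maps a propagation angle inside the denser earth medium to the corresponding transmission angle in air. A minimal sketch, assuming a real effective refractive index for the sand (the report's full analysis involves lossy media and closed-form field expressions, not just this angle mapping):

    import numpy as np

    def air_angle_from_earth(theta_earth_deg, n_earth):
        """Transmission angle in air for a ray leaving the earth half space,
        from Snell's law with n_air = 1.

        theta_earth_deg : angle from the vertical inside the earth, degrees
        n_earth         : effective refractive index of the earth medium
        """
        s = n_earth * np.sin(np.radians(theta_earth_deg))
        if s > 1.0:
            return None   # beyond the critical angle: totally reflected
        return np.degrees(np.arcsin(s))

Because the earth is the denser medium, only rays within the critical angle escape into the air, confining the transformed element factor to a cone about the vertical.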
Total X-ray power measurements using aluminum block calorimetry and other techniques were made at LIGA X-ray scanner synchrotron beamlines located at both the Advanced Light Source (ALS) and the Advanced Photon Source (APS). This block calorimetry work was initially performed on LIGA beamline 3.3.1 of the ALS to provide experimental checks of predictions of the LEX-D (LIGA Exposure-Development) code for LIGA X-ray exposures, version 7.56, the version in use at the time the calorimetry was done. These experiments showed that it was necessary to use bend magnet field strengths and electron storage ring energies different from the default values originally in the code in order to obtain good agreement between experiment and theory. The results indicated that agreement between LEX-D predictions and experiment could be as good as 5% only if (1) more accurate values of the ring energies, (2) local values of the magnet field at the beamline source point, and (3) the NIST database for X-ray/materials interactions were used as code inputs. These local magnetic field values and accurate ring energies, together with the NIST database, are now defaults in the newest release of LEX-D, version 7.61. Three-dimensional simulations of the temperature distributions in the aluminum calorimeter block for a typical ALS power measurement were made with the ABAQUS code and found to be in good agreement with the experimental temperature data. As an application of the block calorimetry technique, the X-ray power exiting the mirror in place at a LIGA scanner located at APS beamline 10 BM was measured with a calorimeter similar to the one used at the ALS. The overall results at the APS demonstrated the utility of calorimetry in helping to characterize the total X-ray power in LIGA beamlines. In addition to the block calorimetry work at the ALS and APS, a preliminary comparison of the use of heat flux sensors, photodiodes, and modified beam calorimeters as total X-ray power monitors was made at the ALS on beamline 3.3.1. This work showed that a modification of a commercially available heat flux sensor could result in a simple, direct-reading beam power meter that could be useful for monitoring total X-ray power in Sandia's LIGA exposure stations at the ALS, APS, and Stanford Synchrotron Radiation Laboratory (SSRL).
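The underlying principle of block calorimetry is that, before heat losses become significant, the absorbed beam power equals the block's heat capacity times its initial rate of temperature rise. A textbook sketch of this relation (not the report's analysis procedure, which relied on full ABAQUS thermal simulations):

    def beam_power_from_slope(mass_kg, dT_dt, c_al=900.0):
        """Absorbed X-ray beam power inferred from the initial heating rate
        of an aluminum calorimeter block, assuming negligible heat loss
        early in the exposure.

        mass_kg : mass of the aluminum block, kg
        dT_dt   : measured initial temperature rise rate, K/s
        c_al    : specific heat of aluminum, J/(kg*K), approximately 900
        """
        return mass_kg * c_al * dT_dt   # P = m * c * dT/dt, in watts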
With the continuing trend of decreasing feature sizes in flip-chip assemblies, the reliability tolerance to interfacial flaws is also decreasing. Small-scale disbonds will become more of a concern, pointing to the need for a better understanding of the initiation stage of interfacial delamination. With most accepted adhesion metric methodologies tailored to predict failure under the prior existence of a disbond, the study of the initiation phenomenon is open to development and standardization of new testing procedures. Traditional fracture mechanics approaches are not suitable, as the mathematics assume failure to originate at a disbond or crack tip. Disbond initiation is believed to first occur at free edges and corners, which act as high stress concentration sites and exhibit singular stresses similar to a crack tip, though less severe in intensity. As such, a 'fracture mechanics-like' approach may be employed which defines a material parameter--a critical stress intensity factor (K{sub c})--that can be used to predict when initiation of a disbond at an interface will occur. The factors affecting the adhesion of underfill/polyimide interfaces relevant to flip-chip assemblies were investigated in this study. The study consisted of two distinct parts: a comparison of the initiation and propagation phenomena and a comparison of the relationship between sub-critical and critical initiation of interfacial failure. The initiation of underfill interfacial failure was studied by characterizing failure at a free-edge with a critical stress intensity factor. In comparison with the interfacial fracture toughness testing, it was shown that a good correlation exists between the initiation and propagation of interfacial failures. Such a correlation justifies the continuing use of fracture mechanics to predict the reliability of flip-chip packages. The second aspect of the research involved fatigue testing of tensile butt joint specimens to determine lifetimes at sub-critical load levels. The results display an interfacial strength ranking similar to that observed during monotonic testing. The fatigue results indicate that monotonic fracture mechanics testing may be an adequate screening tool to help predict cyclic underfill failure; however lifetime data is required to predict reliability.