Publications
Biofuel impacts on water
Sandia National Laboratories and the General Motors Global Energy Systems team conducted a joint biofuels systems analysis project from March to November 2008. The purpose of this study was to assess the feasibility, implications, limitations, and enablers of large-scale production of biofuels. A target of 90 billion gallons of ethanol (the energy equivalent of approximately 60 billion gallons of gasoline) per year by 2030 was chosen as the book-end to understand an aggressive deployment. Since previous studies have addressed the potential of biomass but not the supply chain rollout needed to achieve large production targets, the focus of this study was a comprehensive systems understanding of the evolution of the full supply chain and key interdependencies over time. The supply chain components examined in this study included agricultural land use changes, production of biomass feedstocks, storage and transportation of these feedstocks, construction of conversion plants, conversion of feedstocks to ethanol at these plants, transportation of ethanol and blending with gasoline, and distribution to retail outlets. To support this analysis, we developed a 'Seed to Station' system dynamics model (Biofuels Deployment Model - BDM) to explore the feasibility of meeting specified ethanol production targets. The focus of this report is water and its linkage to broad-scale biofuel deployment.
Multilingual Text Analysis of Large Data
Abstract not provided.
Ferrite-YSZ composites for thermochemical reduction of CO2: In-situ and post mortem characterization
Chemistry of Materials
Abstract not provided.
The Application of Super Heated Drop (Bubble Detectors) for the Characterization of Nano-Second-Pulsed Neutron Fields
Abstract not provided.
Computer Science Research Overview
Abstract not provided.
Uncertainty quantification of US Southwest climate from IPCC projections
The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) made extensive use of coordinated simulations by 18 international modeling groups using a variety of coupled general circulation models (GCMs) with different numerics, algorithms, resolutions, physics models, and parameterizations. These simulations span the 20th century and provide forecasts for various carbon emissions scenarios in the 21st century. All the output from this panoply of models is made available to researchers on an archive maintained by the Program for Climate Model Diagnosis and Intercomparison (PCMDI) at LLNL. I have downloaded this data and completed the first steps toward a statistical analysis of these ensembles for the US Southwest. This constitutes the final report for a late start LDRD project. Complete analysis will be the subject of a forthcoming report.
Overview of Upscaling from Atomistic to Continuum Models for Nuclear Waste Glass Dissolution and Making Models of Multicomponent Glass
Abstract not provided.
PCG Meeting Project Action Sheet Update for Physical Protection Projects
Abstract not provided.
Hardware Architectures Group Recap
Abstract not provided.
Hardware Architectures (HWA) Working Group Outbrief
Abstract not provided.
Subcell Models with Application to Split-Ring Resonators
Abstract not provided.
LBMD: A Layer-Based Mesh Data Structure Tailored for Generic API Infrastructures
Abstract not provided.
Used Fuel Disposition Campaign Update on the Status of the Blue Ribbon Commission
Abstract not provided.
Atoms-to-Continuum (AtC) user package for LAMMPS
Abstract not provided.
Advanced dexterous manipulation for IED defeat : report on the feasibility of using the ShadowHand for remote operations
Improvised Explosive Device (IED) defeat (IEDD) operations can involve intricate operations that exceed the current capabilities of the grippers on board current bombsquad robots. The Shadow Dexterous Hand from the Shadow Robot Company or 'ShadowHand' for short (www.shadowrobot.com) is the first commercially available robot hand that realistically replicates the motion, degrees-of-freedom and dimensions of a human hand (Figure 1). In this study we evaluate the potential for the ShadowHand to perform potential IED defeat tasks on a mobile platform.
Mechanical Engineering Research & Opportunities at Sandia National Laboratories
Abstract not provided.
RADIATION DETECTION AT SANDIA
Abstract not provided.
Quantitative laboratory measurements of biogeochemical processes controlling biogenic calcite carbon sequestration
The purpose of this LDRD was to generate data that could be used to populate and thereby reduce the uncertainty in global carbon cycle models. These efforts were focused on developing a system for determining the dissolution rate of biogenic calcite under oceanic pressure and temperature conditions and on carrying out a digital transcriptomic analysis of gene expression in response to changes in pCO2, and the consequent acidification of the growth medium.
Characterization and Comparison of Devices Fabricated From Epitaxial Graphene on SiC and Electrostatically Transferred Graphene
Abstract not provided.
Recovering from the Great Recession: An Economic, Financial & Political Conundrum
Abstract not provided.
Trusted Computing Technologies, Intel Trusted Execution Technology
We describe the current state-of-the-art in Trusted Computing Technologies, focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release and unauthorized alteration: unauthorized users should not access the sensitive input and output data, nor be able to alter them; the computation contains intermediate data with the same requirements, and executes algorithms that unauthorized users should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network system components increase the risk that sensitive input data, computation, and output data may be compromised.
Recent Advances in Sandia National Laboratories' MiniSAR System - Applications and Algorithms
Abstract not provided.
Building the Next Generation of Parallel Applications
Abstract not provided.
Artificial Viscosity Flux Limiting in Lagrangian Shock Hydrodynamic Finite Element Computations
Abstract not provided.
Behavior Based Safety (BBS)
Abstract not provided.
Characterizing hydrogen adsorbed on tungsten and beryllium surfaces using ion scattering spectroscopy
Abstract not provided.
Display Posters for Blue Ribbon Commission Reception
Abstract not provided.
Recommendations on the prediction of thermal hazard distances from large liquefied natural gas pool fires on water for solid flame models
Abstract not provided.
Demo natural resources planning decision support tool
Abstract not provided.
Collaborative Stakeholder-Driven Water-Energy-Food Resource Modeling and Planning
Abstract not provided.
Large Scale Integrated Safeguards Information Systems
Abstract not provided.
Comparisons of Release Probabilities Obtained in Recent WIPP Performance Assessments
Abstract not provided.
SUMMARY OF THE PHOENIX SERIES LARGE SCALE LNG POOL FIRE EXPERIMENTS
Abstract not provided.
Copy of Sandia's Approach to Reliability
Abstract not provided.
NETS presentation: Challenges in Structural Analysis for Deformed Nuclear Reactivity Assessments
Abstract not provided.
Overview of EEG Capabilities and Projects
Abstract not provided.
Chemical Transportation
Abstract not provided.
OHSAS 18001: International Occupational Health and Safety Assessment Specification & Management System
Abstract not provided.
Requirements for High-Resolution Mass Spectrometer
Abstract not provided.
Software Development and the Cognitive Foundry
Abstract not provided.
Utilization of localized panel resonant behavior of wind turbine blades
Abstract not provided.
Assessing the operational life of flexible printed boards intended for continuous flexing applications : a case study
Through the vehicle of a case study, this paper describes in detail how the guidance found in the suite of IPC (Association Connecting Electronics Industries) publications can be applied to develop a high level of design assurance that flexible printed boards intended for continuous flexing applications will satisfy specified lifetime requirements.
3D X-Ray CT Analysis of Solder Joints in Area Array Electronic Package Assemblies
Abstract not provided.
Predicting Performance Margins: Linking the microscale to the macroscale
Abstract not provided.
Gray fox story
Abstract not provided.
Gray Fox Story B905
Abstract not provided.
Z2011 TREX 6a data
Abstract not provided.
Sensitivity Analysis and Parameter Optimization Using 1-D MHD Simulations of Magnetic Drive Experiments
Abstract not provided.
Improving the Fatigue Resistance of Ferritic Steels in Hydrogen Gas
Abstract not provided.
Using simulation to design extreme-scale applications and architectures: Programming model exploration
Performance Evaluation Review
Abstract not provided.
Potential Operational Impacts of Warhead Monitoring
Abstract not provided.
Nuclear Incident Response Programs PONI Briefing
Abstract not provided.
Automotive Augmented Cognition
Abstract not provided.
Copy of Introduction to Weapons of Mass Destruction Science course for the Federal Bureau of Investigation
Abstract not provided.
Plasma Facing Components
Abstract not provided.
Laboratories Overview Nuclear Energy and Small Modular Reactors Overview
Abstract not provided.
DIC article Series
Experimental Techniques
Abstract not provided.
DHS Science and Technology Directorate Chemical Supply Chain and Resilience Project: Data Serves a Core Role
Abstract not provided.
Overview of Degraded Containment Research at Sandia National Laboratories
Abstract not provided.
Recent Advances in High Fidelity Wind Applications that Involve Sliding Meshes
Abstract not provided.
BLM Gas Migration Study Risk Assessment: Geomechanical Parameter List
Abstract not provided.
Application-driven Analysis of Two Generations of Capability Computing Platforms: Purple and Cielo
Abstract not provided.
Whither Commercial Nanobiosensors?
Journal of Biosensors and Bioelectronics
The excitement surrounding the marriage of biosensors and nanotechnology is palpable even from a cursory examination of the scientific literature. Indeed, the word “nano” might be in danger of being overused and reduced to a cliché, although it is probably essential for publishing papers or securing research funding. The biosensor literature is littered with clever or catchy acronyms, birds being apparently favored (“CANARY”, “SPARROW”), quite apart from “electronic tongue,” “electronic nose,” and so on. Although biosensors have been around since glucose monitors were commercialized in the 1970s, the transition of laboratory research and innumerable research papers on biosensors into the world of commerce has lagged. There are several reasons for this, including the infamous “valley of death” afflicting entrepreneurs emerging from the academic environment into the industrial world, where the rules for success can be radically different. In this context, musings on biosensors, and especially nanobiosensors, in an open access journal such as the Journal of Biosensors and Bioelectronics are topical and appropriate, especially since market surveys of biosensors are prohibitively expensive, sometimes running into thousands of dollars for a single copy. The contents and market-share predictions in these reports also change every time a report is published; moreover, the market-share projections for biosensors differ considerably among the various reports. An editorial provides the opportunity to offer personal opinions and perhaps stimulate debate on a particular topic. In this sense, editorials are a departure from the rigor of a research paper, and this editorial is no exception. With this preamble, it is worthwhile to stop and ponder the status of commercial biosensors and nanobiosensors.
Quantifying the value of hydropower in the electric grid : role of hydropower in existing markets
The electrical power industry is facing the prospect of integrating a significant addition of variable generation technologies in the next several decades, primarily from wind and solar facilities. Overall, transmission and generation reserve levels are decreasing, and power system infrastructure in general is aging. To maintain grid reliability, modernization and expansion of the power system, as well as more optimized use of existing resources, will be required. Conventional and pumped storage hydroelectric facilities can provide an increasingly significant contribution to power system reliability by providing energy, capacity, and other ancillary services. However, the potential role of hydroelectric power will be affected by another transition the industry is currently undergoing: the evolution and expansion of electricity markets. This evolution toward market-based acquisition of generation resources and grid management is taking place in a heterogeneous manner. Some North American regions are moving toward full-featured markets while other regions operate without formal markets; yet other U.S. regions are partially evolved. This report examines the current structure of electric industry acquisition of energy and ancillary services in regions organized along different structures, reports on the current role of hydroelectric facilities in various regions, and attempts to identify features of market and scheduling areas that either promote or thwart the increased role that hydroelectric power can play in the future. This report is part of a larger effort led by the Electric Power Research Institute with the purpose of examining the potential for hydroelectric facilities to play a greater role in balancing the grid in an era of greater penetration of variable renewable energy technologies. Other topics that will be addressed in this larger effort include industry case studies of specific conventional and hydroelectric facilities, systemic operating constraints on hydroelectric resources, and production cost simulations aimed at quantifying the increased role of hydropower.
Passive load control for large wind turbines
Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference
Wind energy research activities at Sandia National Laboratories focus on developing large rotors that are lighter and more cost-effective than those designed with current technologies. Because gravity loads scale as the cube of the blade length, they become a constraining design factor for very large blades. Efforts to passively reduce turbulent loading have shown significant potential to reduce blade weight and capture more energy. Research in passive load reduction for wind turbines began at Sandia in the late 1990s and has moved from analytical studies to blade applications. This paper discusses the test results of two Sandia prototype research blades that incorporate load reduction techniques. The TX-100 is a 9-m long blade that induces bend-twist coupling with the use of off-axis carbon in the skin. The STAR blade is a 27-m long blade that induces bend-twist coupling by sweeping the blade in a geometric fashion.
Graphene islands on Cu foils: The interplay between shape, orientation, and defects
Nano Letters
We have observed the growth of monolayer graphene on Cu foils using low-energy electron microscopy. On the (100)-textured surface of the foils, four-lobed, 4-fold-symmetric islands nucleate and grow. The graphene in each of the four lobes has a different crystallographic alignment with respect to the underlying Cu substrate. These "polycrystalline" islands arise from complex heterogeneous nucleation events at surface imperfections. The shape evolution of the lobes is well explained by an angularly dependent growth velocity. Well-ordered graphene forms only above ∼790 °C. Sublimation-induced motion of Cu steps during growth at this temperature creates a rough surface, where large Cu mounds form under the graphene islands. Strategies for improving the quality of monolayer graphene grown on Cu foils must address these fundamental defect-generating processes. © 2010 American Chemical Society.
A generalized view on Galilean invariance in stabilized compressible flow computations
International Journal for Numerical Methods in Fluids
This article presents a generalized analysis on the significance of Galilean invariance in compressible flow computations with stabilized and variational multi-scale methods. The understanding of the key issues and the development of general approaches to Galilean-invariant stabilization are facilitated by the use of a matrix-operator description of Galilean transformations. The analysis of invariance for discontinuity capturing operators is also included. Published in 2010 by John Wiley & Sons, Ltd. This article is a U.S. Government work and is in the public domain in the U.S.A. Published in 2010 by John Wiley & Sons, Ltd.
Aerodynamic and acoustic corrections for a Kevlar-walled anechoic wind tunnel
16th AIAA/CEAS Aeroacoustics Conference (31st AIAA Aeroacoustics Conference)
The aerodynamic and acoustic performance of a Kevlar-walled anechoic wind tunnel test section has been analyzed. Aerodynamic measurements and panel method calculations were performed on a series of airfoils to reveal the influence of the test section walls, including their porosity and flexibility. A lift interference correction method was developed from first principles which shows consistently high accuracy when measurements are compared to viscous free-flight calculations. Interference corrections are an order of magnitude smaller than those associated with an open jet test section. Blockage corrections are found to be a fraction of those which would be associated with a hard-wall test section of the same size, and are negligible in most cases. New measurements showing the acoustic transparency of the Kevlar and the quality of the anechoic environment in the chambers are presented, along with benchmark trailing edge noise measurements. © 2010 by William J. Devenport, Ricardo A. Burdisso, Aurelien Borgoltz, Patricio Ravetta and Matthew F Barone.
Computing contingency statistics in parallel: Design trade-offs and limiting cases
Proceedings - IEEE International Conference on Cluster Computing, ICCC
Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, from which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference from moment-based statistics (which we discussed in [1]), where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speedup and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse. © 2010 IEEE.
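The map-reduce pattern this abstract describes (local counting on each processor, then a merge whose communication cost grows with the number of distinct keys) can be illustrated with a minimal sketch. This is not the paper's open source implementation; the function names and toy data are hypothetical.

```python
# Minimal sketch of map-reduce style contingency statistics.
# local_table is the "map" step; merge_tables is the "reduce" step.
from collections import Counter
import math

def local_table(pairs):
    """Map step: one processor counts (x, y) co-occurrences in its data slice."""
    return Counter(pairs)

def merge_tables(a, b):
    """Reduce step: merging is a simple counter addition, but its cost
    grows with the number of distinct (x, y) keys, which is the
    communication issue the abstract highlights."""
    out = Counter(a)
    out.update(b)
    return out

def pointwise_mutual_information(table, x, y):
    """One derived statistic obtainable directly from the merged table."""
    n = sum(table.values())
    px = sum(v for (a, _), v in table.items() if a == x) / n
    py = sum(v for (_, b), v in table.items() if b == y) / n
    pxy = table[(x, y)] / n
    return math.log(pxy / (px * py))

# Two "processors" count disjoint data slices, then the tables are merged:
t1 = local_table([("a", 1), ("a", 1), ("b", 2)])
t2 = local_table([("a", 1), ("b", 2), ("b", 1)])
t = merge_tables(t1, t2)
```

By contrast, moment-based statistics reduce each slice to a fixed-size summary, so their merge cost is independent of data size.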
Advantages of clustering in the phase classification of hyperspectral materials images
Microscopy and Microanalysis
Despite the many demonstrated applications of factor analysis (FA) in analyzing hyperspectral materials images, FA does have inherent mathematical limitations, preventing it from solving certain materials characterization problems. A notable limitation of FA is its parsimony restriction, referring to the fact that in FA the number of components cannot exceed the chemical rank of a dataset. Clustering is a promising alternative to FA for the phase classification of hyperspectral materials images. In contrast with FA, the phases extracted by clustering do not have to be parsimonious. Clustering has an added advantage in its insensitivity to spectral collinearity that can result in phase mixing using FA. For representative energy dispersive X-ray spectroscopy materials images, namely a solder bump dataset and a braze interface dataset, clustering generates phase classification results that are superior to those obtained using representative FA-based methods. For the solder bump dataset, clustering identifies a Cu-Sn intermetallic phase that cannot be isolated using FA alone due to the parsimony restriction. For the braze interface sample that has collinearity among the phase spectra, the clustering results do not exhibit the physically unrealistic phase mixing obtained by multivariate curve resolution, a commonly utilized FA algorithm. © Microscopy Society of America 2010.
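The clustering alternative described above, assigning each pixel spectrum to a phase without the parsimony restriction of factor analysis, can be sketched with a generic k-means clusterer on synthetic spectra. This is an illustrative toy, not the authors' pipeline; the phase spectra and noise level are invented.

```python
# Sketch: phase classification of a toy "hyperspectral image" by clustering.
# Each row of `spectra` is one pixel's spectrum (8 energy channels).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Three hypothetical phase spectra (rows); real EDS phases would be
# measured, and may be collinear mixtures that trip up factor analysis.
phases = np.array([[5, 1, 0, 0, 2, 0, 1, 0],
                   [0, 4, 3, 0, 0, 1, 0, 0],
                   [2, 2, 1, 0, 1, 0, 0, 3]], dtype=float)

# 200 pixels, each drawn from one phase plus measurement noise.
labels_true = rng.integers(0, 3, size=200)
spectra = phases[labels_true] + rng.normal(0, 0.1, size=(200, 8))

# Cluster pixels into phases; n_clusters is chosen by the analyst and
# is not bounded by the chemical rank of the data.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(spectra)
```

Each cluster centroid (`km.cluster_centers_`) approximates one phase spectrum, and `km.labels_` gives the per-pixel phase map.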
A framework for the solution of inverse radiation transport problems
IEEE Transactions on Nuclear Science
Radiation sensing applications for SNM detection, identification, and characterization all face the same fundamental problem: each to varying degrees must infer the presence, identity, and configuration of a radiation source given a set of radiation signatures. This is a problem of inverse radiation transport: given the outcome of a measurement, what source terms and transport medium caused that observation? This paper presents a framework for solving inverse radiation transport problems, describes its essential components, and illustrates its features and performance. The framework implements an implicit solution to the inverse transport problem using deterministic neutron, electron, and photon transport calculations embedded in a Levenberg-Marquardt nonlinear optimization solver. The solver finds the layer thicknesses of a one-dimensional transport model by minimizing the difference between the gamma spectrum calculated by deterministic transport and the measured gamma spectrum. The fit to the measured spectrum is a full-spectrum analysis: all spectral features are modeled, including photopeaks and continua from spontaneous and induced photon emissions. An example problem is solved by analyzing a high-resolution gamma spectrometry measurement of plutonium metal. © 2010 IEEE.
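The fitting loop described above, a Levenberg-Marquardt solver adjusting layer thicknesses of a 1-D model until the computed spectrum matches the measurement, can be sketched with a toy forward model. This is a drastically simplified stand-in (pure exponential attenuation with made-up coefficients), not the paper's deterministic transport calculation.

```python
# Sketch: recovering layer thicknesses by least-squares spectrum matching.
# scipy.optimize.least_squares with method="lm" is a Levenberg-Marquardt solver.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical attenuation coefficients (1/cm): rows are layers,
# columns are energy bins of the gamma spectrum.
mu = np.array([[0.20, 0.50, 1.10],
               [0.05, 0.30, 0.80]])
source = np.array([100.0, 60.0, 30.0])  # unattenuated source spectrum

def forward(thicknesses):
    """Toy forward model: exponential attenuation through stacked layers."""
    return source * np.exp(-mu.T @ thicknesses)

# Synthesize a "measured" spectrum from known thicknesses (cm).
true_t = np.array([1.5, 0.7])
measured = forward(true_t)

def residual(t):
    """Difference between computed and measured spectra, minimized by the solver."""
    return forward(t) - measured

fit = least_squares(residual, x0=[1.0, 1.0], method="lm")
```

In the paper's framework the forward model is a full deterministic transport calculation with photopeaks and continua, but the optimization structure is the same: the solver repeatedly evaluates the forward model and updates the layer thicknesses.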
Comparison of thermal conductivity and thermal boundary conductance sensitivities in continuous-wave and ultrashort-pulsed thermoreflectance analyses
International Journal of Thermophysics
Thermoreflectance techniques are powerful tools for measuring thermophysical properties of thin film systems, such as thermal conductivity, Λ, of individual layers, or thermal boundary conductance across thin film interfaces (G). Thermoreflectance pump-probe experiments monitor the thermoreflectance change on the surface of a sample, which is related to the thermal properties in the sample of interest. Thermoreflectance setups have been designed with both continuous wave (cw) and pulsed laser systems. In cw systems, the phase of the heating event is monitored, and its response to the heating modulation frequency is related to the thermophysical properties; this technique is commonly termed a phase sensitive thermoreflectance (PSTR) technique. In pulsed laser systems, pump and probe pulses are temporally delayed relative to each other, and the decay in the thermoreflectance signal in response to the heating event is related to the thermophysical properties; this technique is commonly termed a transient thermoreflectance (TTR) technique. In this work, mathematical models are presented to be used with PSTR and TTR techniques to determine the Λ and G of thin films on substrate structures. The sensitivities of the models to various thermal and sample parameters are discussed, and the advantages and disadvantages of each technique are elucidated from the results of the model analyses. © 2010 Springer Science+Business Media, LLC.
True triaxial testing of castlegate sandstone
44th US Rock Mechanics Symposium - 5th US/Canada Rock Mechanics Symposium
Deformation bands in high porosity sandstone are an important geological feature for geologists and petroleum engineers; however, formation of these bands is not fully understood. The theoretical framework for deformation band formation in high porosity geomaterials is well established. It suggests that the intermediate principal stress influences the predicted deformation band type; however, these predictions have yet to be fully validated through experiments. Therefore, this study investigates the influence of the intermediate principal stress on failure and the formation of deformation bands in Castlegate sandstone. Mean stresses for these tests range from 30 to 150 MPa, covering brittle to ductile behavior. Deformation band orientations are measured with external observation as well as through acoustic emission locations. Results of experiments conducted at Lode angles of 30 and 14.5 degrees show trends that qualitatively agree with localization theory. The band angle (between the band normal and maximum compression) decreases with increasing mean stress. For tests at the same mean stress, band angle decreases with increasing Lode angle. Copyright 2010 ARMA, American Rock Mechanics Association.
A system of parallel and selective microchannels for biosensor sample delivery and containment
Proceedings of IEEE Sensors
This paper presents an integrated microfluidic system for selectively interrogating parallel biosensors at programmed time intervals. Specifically, the microfluidic system is used for delivering a volume of sample from a single source to a surface-based arrayed biosensor. In this case the biosensors were an array of electrochemical electrodes modified with sample specific capture probes. In addition, the sample was required to be captured, stored and removed for additional laboratory analysis. This was accomplished by a plastic laminate stack in which each thin laminate was patterned by CO2 laser ablation to form microchannels and two novel valves. The first valve was a normally closed type opened by heat via an electrically resistive wire. The second valve was a check type integrated into a removable storage chamber. This setup allows for remote and leave-behind sensing applications and also containment of sensed sample for further laboratory analysis. ©2010 IEEE.
Cooling of an isothermal plate using a triangular array of swirling air jets
2010 14th International Heat Transfer Conference, IHTC 14
Cooling with swirling jets is an effective means for enhancing heat transfer and improving spatial uniformity of the cooling rate in many applications. This paper investigates cooling a flat, isothermal plate at 1,000 K using a single and a triangular array of swirling air jets, and characterizes the resulting flow field and the air temperature above the plate. This problem was modeled using the Fuego computational fluid dynamics (CFD) code that is being developed at Sandia National Laboratories. The separation distance to jet diameter ratio, L/D, varied from 3 to 12, the Reynolds number, Re, varied from 5×10³ to 5×10⁴, and the swirl number, S, varied from 0 to 2.49. The formation of the central recirculation zone (CRZ) and its impact on heat transfer were also investigated. For a hubless swirling jet, a CRZ was generated whenever S ≥ 0.67, in agreement with experimental data and our mathematical derivation for swirl (helicoid) azimuthal and axial velocities. On the other hand, for S < 0.058, the velocity field closely approximated that of a conventional jet. With the azimuthal velocity of a swirling jet decaying as 1/z², most mixing occurred only a few jet diameters from the jet nozzle. Highest cooling occurred when L/D = 3 and S = 0.12 to 0.79. Heat transfer enhancement increased as S or Re increased, or L/D decreased. © 2010 by ASME.
Charge enhancement effects in 6H-SiC MOSFETs induced by heavy ion strike
IEEE Transactions on Nuclear Science
The transient response of Silicon Carbide (SiC) Metal-Oxide-Semiconductor Field Effect Transistors (MOSFETs) with three different gates due to a single ion strike is studied. Comparing the experiment and numerical simulation, it is suggested that the charge enhancement is due to the bipolar effect. We find the bipolar gain depends on the quality of gate oxide. The impact of fixed charge in SiO2 and interface traps at SiC/SiO2 on the charge collection is discussed. © 2010 IEEE.
Infrared cubic dielectric resonator metamaterial
Optics InfoBase Conference Papers
Dielectric resonators are an effective means to realize isotropic, low-loss optical metamaterials. As proof of this concept, a cubic resonator is analytically designed and then tested in the long-wave infrared. © 2010 Optical Society of America.
Ultra-compact optical true time delay device for wideband phased array radars
Proceedings of SPIE - The International Society for Optical Engineering
An ultra-compact optical true time delay device is demonstrated that can support 112 antenna elements with better than six bits of delay in a volume 16″x5″x4″ including the box and electronics. Free-space beams circulate in a White cell, overlapping in space to minimize volume. The 18 mirrors are slow-tool diamond turned on two substrates, one at each end, to streamline alignment. Pointing accuracy of better than 10 µrad is achieved, with surface roughness ∼45 nm rms. A MEMS tip-style mirror array selects among the paths for each beam independently, requiring ∼100 μs to switch the whole array. The micromirrors have 1.4° tip angle and three stable states (east, west, and flat). The input is a fiber-and-microlens array, whose output spots are re-imaged multiple times in the White cell, striking a different area of the single MEMS chip in each of 10 bounces. The output is converted to RF by an integrated InP wideband optical combiner detector array. Delays were accurate to within 4% (shortest delay) to 0.03% (longest mirror train). The fiber-to-detector insertion loss is 7.82 dB for the shortest delay path. © 2010 SPIE.
Readout IC requirement trends based on a simplified parametric seeker model
Proceedings of SPIE the International Society for Optical Engineering
Modern space based optical sensors place substantial demands on the focal plane array readout integrated circuit. Active pixel readout designs offer direct access to individual pixel data but require analog to digital conversion at or near each pixel. Thus, circuit designers must create precise, fundamentally analog circuitry within tightly constrained areas on the integrated circuit. Rapidly changing phenomena necessitate tradeoffs between sampling and conversion speed, data precision, and heat generation adjacent to the detector array, the last being especially of concern for thermally sensitive space grade infrared detectors. A simplified parametric model is presented that illustrates seeker system performance and analog to digital conversion requirement trends in the visible through mid-wave infrared for varying sample rates. Notional limiting-case Earth optical backgrounds were generated using MODTRAN4 with a range of cloud extremes and approximate practical albedo limits for typical surface features from a composite of the Mosart and Aster spectral albedo databases. The dynamic range requirements imposed by these background spectra are discussed in the context of optical band selection and readout design impacts. © 2010 Copyright SPIE - The International Society for Optical Engineering.
Achromatic circular polarization generation for ultra-intense lasers
Optics InfoBase Conference Papers
Generating circular polarization for ultra-intense lasers requires solutions beyond traditional transmissive waveplates, which have insufficient bandwidth and pose nonlinear phase (B-integral) problems. We demonstrate a reflective design employing three metallic mirrors to generate circular polarization. © 2010 Optical Society of America.
Life assessment of full-scale EDS vessel under impulsive loadings
American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP
The Explosive Destruction System (EDS) was developed by Sandia National Laboratories for the US Army Product Manager for Non-Stockpile Chemical Materiel (PMNSCM) to destroy recovered, explosively configured chemical munitions. PMNSCM currently has five EDS units that have processed over 1,400 items. The system uses linear and conical shaped charges to open munitions and attack the burster, followed by chemical treatment of the agent. The main component of the EDS is a stainless steel cylindrical vessel, which contains the explosion and the subsequent chemical treatment. Extensive modeling and testing have been used to design and qualify the vessel for different applications and conditions. The high explosive (HE) pressure histories and subsequent vessel response (strain histories) are modeled using the analysis codes CTH and LS-DYNA, respectively. Using the model results, a load rating for the EDS is determined based on design guidance provided in the ASME Code, Sect. VIII, Div. 3, Code Case No. 2564. One of the goals is to assess and understand the vessel's capacity to contain a wide variety of detonation sequences at various load levels. Of particular interest are the total number of detonation events at the rated load that can be processed inside each vessel, and the maximum load (such as that arising from an upset condition) that can be contained without causing catastrophic failure of the vessel. This paper will discuss application of Code Case 2564 to the stainless steel EDS vessels, including a fatigue analysis using a J-R curve, vessel response to extreme upset loads, and the effects of strain hardening from successive events. Copyright © 2010 by ASME.
Optical logic gates using interconnected photodiodes and electro-absorption modulators
Optics InfoBase Conference Papers
We demonstrate an optical gate architecture with optical isolation between input and output using interconnected PD-EAMs to perform AND and NOT functions. Waveforms for 10 Gbps AND and 40 Gbps NOT gates are shown. © 2010 Optical Society of America.
A beamforming algorithm for bistatic SAR image formation
Proceedings of SPIE - The International Society for Optical Engineering
Beamforming is a methodology for collection-mode-independent SAR image formation. It is essentially equivalent to backprojection. The authors have in previous papers developed this idea and discussed the advantages and disadvantages of the approach to monostatic SAR image formation vis-à-vis the more standard and time-tested polar formatting algorithm (PFA). In this paper we show that beamforming for bistatic SAR imaging leads again to a very simple image formation algorithm that requires a minimal number of lines of code and that allows the image to be directly formed onto a three-dimensional surface model, thus automatically creating an orthorectified image. The same disadvantage of beamforming applied to monostatic SAR imaging applies to the bistatic case, however, in that the execution time for the beamforming algorithm is quite long compared to that of PFA. Fast versions of beamforming do exist to help alleviate this issue. Results of image reconstructions from phase history data are presented. © 2010 Copyright SPIE - The International Society for Optical Engineering.
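The core of such a collection-geometry-agnostic backprojection can indeed be expressed in a few lines, as the abstract suggests: each phase-history sample is summed into every pixel after compensating the phase it would accrue over the transmitter-to-pixel-to-receiver path. The sketch below is our illustration, not the authors' code; the function name and array layouts are assumptions:

```python
import numpy as np

# Minimal bistatic backprojection sketch (illustrative, not the authors' code).
# For each pixel, sum phase-compensated pulses at the bistatic range
# (transmitter-to-pixel plus pixel-to-receiver), independent of geometry.

def backproject(pulses, freqs, tx_pos, rx_pos, pixels, c=3e8):
    """pulses: (n_pulses, n_freqs) phase history; tx_pos/rx_pos: (n_pulses, 3)
    platform positions; pixels: (n_pix, 3). Returns complex image (n_pix,)."""
    image = np.zeros(len(pixels), dtype=complex)
    for p, (tx, rx) in enumerate(zip(tx_pos, rx_pos)):
        # Bistatic range: transmitter -> pixel -> receiver
        r = np.linalg.norm(pixels - tx, axis=1) + np.linalg.norm(pixels - rx, axis=1)
        # Match each frequency sample to the pixel's bistatic-path phase
        phase = np.exp(2j * np.pi * np.outer(r / c, freqs))  # (n_pix, n_freqs)
        image += phase @ pulses[p]
    return image
```

Because each pixel's bistatic range is computed directly from its 3-D coordinates, the `pixels` array can be sampled from a terrain surface model, yielding the orthorectified image the abstract mentions; the cost is the O(pulses × pixels) loop that makes backprojection slower than PFA.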
Controlling the microstructure of vapor-deposited pentaerythritol tetranitrate (PETN) films
Proceedings - 14th International Detonation Symposium, IDS 2010
We have demonstrated the ability to control the microstructure of PETN films deposited using physical vapor deposition by altering the interface between the film and substrate. Evolution of surface morphology, average density, and surface roughness with film thickness were characterized using surface profilometry and scanning electron microscopy. While films on all of the substrates investigated showed a trend toward a lower average density with increasing film thickness, there were significant variations in density, pore size, and surface morphology in films deposited on different substrates.
Critical thickness measurements in vapor-deposited Pentaerythritol tetranitrate (PETN) films
Proceedings - 14th International Detonation Symposium, IDS 2010
Abstract not provided.
Calculating hugoniots for molecular crystals from first principles
Proceedings - 14th International Detonation Symposium, IDS 2010
Density Functional Theory (DFT) has over the last few years emerged as an indispensable tool for understanding the behavior of matter under extreme conditions. DFT-based molecular dynamics (MD) simulations have, for example, confirmed experimental findings for shocked deuterium [1], enabled the first experimental evidence for a triple point in carbon above 850 GPa [2], and amended experimental data for constructing a global equation of state (EOS) for water, carrying implications for planetary physics [3]. The ability to perform high-fidelity calculations is even more important for cases where experiments are impossible to perform, dangerous, and/or prohibitively expensive. For solid explosives, and other molecular crystals, similar success has been severely hampered by an inability to describe the materials at equilibrium. The binding mechanism of molecular crystals (van der Waals forces) is not well described within traditional DFT [4]. Among widely used exchange-correlation functionals, neither LDA nor PBE balances the strong intra-molecular chemical bonding and the weak inter-molecular attraction, resulting in incorrect equilibrium densities and negatively affecting the construction of EOS for undetonated high explosives. We are exploring a way of bypassing this problem by using the new Armiento-Mattsson 2005 (AM05) exchange-correlation functional [5, 6]. The AM05 functional is highly accurate for a wide range of solids [4, 7], in particular in compression [8]. In addition, AM05 does not include any van der Waals attraction [4], which can be advantageous compared to other functionals: correcting for a fictitious van der Waals-like attraction with unknown origin can be harder than correcting for a complete absence of all types of van der Waals attraction. We will show examples from other materials systems where van der Waals attraction plays a key role and where this scheme has worked well [9], and discuss preliminary results for molecular crystals and explosives.
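For context, "calculating hugoniots" means locating the shocked states (p, V, e) connected to an initial state (p₀, V₀, e₀) by the standard Rankine-Hugoniot jump conditions; in DFT-MD practice one iterates over candidate densities and temperatures until the computed pressure and internal energy satisfy the energy jump condition:

```latex
e - e_0 = \frac{1}{2}\,(p + p_0)\,(V_0 - V)
```

Each point on the hugoniot is thus an EOS evaluation constrained by this relation, which is why the fidelity of the underlying exchange-correlation functional at equilibrium and in compression matters so much.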
Risk-based cost-benefit analysis for security assessment problems
Proceedings - International Carnahan Conference on Security Technology
Decision-makers want to perform risk-based cost-benefit prioritization of security investments. However, strong nonlinearities in the most common physical security performance metric make it difficult to use for cost-benefit analysis. This paper extends the definition of risk for security applications and embodies this definition in a new but related security risk metric based on the degree of difficulty an adversary will encounter to successfully execute the most advantageous attack scenario. This metric is compatible with traditional cost-benefit optimization algorithms, and can lead to an objective risk-based cost-benefit method for security investment option prioritization. It also enables decision-makers to more effectively communicate the justification for their investment decisions with stakeholders and funding authorities. ©2010 IEEE.
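To illustrate how a scalar, monotone risk metric makes traditional cost-benefit optimization possible, a greedy benefit-per-cost prioritization might look like the sketch below. The option names, costs, and risk-reduction scores are invented, and the greedy strategy is one simple stand-in for the optimization algorithms the paper refers to:

```python
# Hypothetical greedy prioritization: pick security investments by risk
# reduction per unit cost until the budget is exhausted. Numbers invented.

def prioritize(options, budget):
    """options: list of (name, cost, risk_reduction); returns chosen names."""
    ranked = sorted(options, key=lambda o: o[2] / o[1], reverse=True)
    chosen, spent = [], 0.0
    for name, cost, benefit in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

picks = prioritize(
    [("new sensors", 40, 12), ("extra guards", 70, 15), ("vault upgrade", 30, 10)],
    budget=75,
)  # vault upgrade (0.33/unit) first, then new sensors (0.30/unit)
```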
Applying human reliability analysis models as a probabilistic basis for an integrated evaluation of safeguards and security systems
10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010
Material control and accounting (MC&A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC&A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC&A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC&A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries.
Unreacted equation of state development and multiphase modeling of dynamic compaction of low density hexanitrostilbene (HNS) pressings
Proceedings - 14th International Detonation Symposium, IDS 2010
Compaction waves in porous energetic materials have been shown to induce reaction under impact loading. In the past, simple two-state burn models such as the Arrhenius Burn model have been developed to predict slapper initiation in Hexanitrostilbene (HNS) pellets; however, a more sophisticated, fundamental approach is needed to predict the shock response during impact loading, especially in pellets that have been shown to have strong density gradients. The intergranular stress measures the resistance to bed compaction or the removal of void space due to particle packing and rearrangement. A constitutive model for the intergranular stress is needed for closure in the Baer-Nunziato (BN) multiphase mixture theory for reactive energetic materials. The intergranular stress was obtained from both quasi-static compaction experiments and from dynamic compaction experiments. Additionally, historical data and more recently acquired data for porous pellets compacted to high densities under shock loading were used for model assessment. Predicted particle velocity profiles under dynamic compaction were generally in good agreement with the experimental data. Hence, a multiphase model of HNS has been developed to extend current predictive capability.
Lessons learned on Human Reliability Analysis (HRA) methods from the International HRA Empirical Study
10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010
In the International HRA Empirical Study, human reliability analysis (HRA) method predictions for human failure events (HFEs) in steam generator tube rupture and loss of feedwater scenarios were compared against the performance of real crews in a nuclear power plant control room simulator. The comparisons examined both the qualitative and quantitative HRA method predictions. This paper discusses some of the lessons learned about HRA methods that have been identified to date. General strengths and weaknesses of HRA methods are addressed, along with the reasons for any limitations in the predictive results produced by the methods. However, the discussions of the lessons learned in this paper must be considered a "snapshot." While most of the data have been analyzed, more detailed analyses of the results from specific HRA methods are ongoing and additional information may emerge.
Application of a field-based method to spatially varying thermal transport problems in molecular dynamics
Modelling and Simulation in Materials Science and Engineering
This paper derives a methodology to enable spatial and temporal control of thermally inhomogeneous molecular dynamics (MD) simulations. The primary goal is to perform non-equilibrium MD of thermal transport analogous to continuum solutions of heat flow which have complex initial and boundary conditions, moving MD beyond quasi-equilibrium simulations using periodic boundary conditions. In our paradigm, the entire spatial domain is filled with atoms and overlaid with a finite element (FE) mesh. The representation of continuous variables on this mesh allows fixed temperature and fixed heat flux boundary conditions to be applied, non-equilibrium initial conditions to be imposed and source terms to be added to the atomistic system. In effect, the FE mesh defines a large length scale over which atomic quantities can be locally averaged to derive continuous fields. Unlike coupling methods which require a surrogate model of thermal transport like Fourier's law, in this work the FE grid is only employed for its projection, averaging and interpolation properties. Inherent in this approach is the assumption that MD observables of interest, e.g. temperature, can be mapped to a continuous representation in a non-equilibrium setting. This assumption is taken advantage of to derive a single, unified set of control forces based on Gaussian isokinetic thermostats to regulate the temperature and heat flux locally in the MD. Example problems are used to illustrate potential applications. In addition to the physical results, data relevant to understanding the numerical effects of the method on these systems are also presented. © 2010 IOP Publishing Ltd.
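The Gaussian isokinetic constraint at the heart of the control forces can be sketched for a single group of atoms, without the FE projection and averaging machinery the paper develops. Choosing the friction coefficient as the ratio of power input to twice the kinetic energy makes the group's kinetic energy (hence local temperature) a constant of motion. This is a minimal illustration with assumed array shapes, not the paper's implementation:

```python
import numpy as np

# Sketch of a Gaussian isokinetic thermostat on one atom group (illustrative;
# the paper couples such constraints to fields on an overlaid FE mesh).

def isokinetic_step(pos, vel, masses, force_fn, dt):
    """Euler step with a Gaussian isokinetic constraint force.

    alpha = sum(F.v) / sum(m v^2) cancels the power input, so kinetic
    energy is conserved to O(dt^2) per step."""
    f = force_fn(pos)
    m = masses[:, None]
    alpha = np.sum(f * vel) / np.sum(m * vel ** 2)
    acc = (f - alpha * m * vel) / m
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel
```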
Architecture of PFC supports analogy, but PFC is not an analogy machine
Cognitive Neuroscience
In the preceding discussion paper, I proposed a theory of prefrontal cortical organization that was fundamentally intended to address the question: How does prefrontal cortex (PFC) support the various functions for which it seems to be selectively recruited? In so doing, I chose to focus on a particular function, analogy, that seems to have been largely ignored in the theoretical treatments of PFC, but that does underlie many other cognitive functions (Hofstadter, 2001; Holyoak & Thagard, 1997). At its core, this paper was intended to use analogy as a foundation for exploring one possibility for prefrontal function in general, although it is easy to see how the analogy-specific interpretation arises (as in the comment by Ibáñez). In an attempt to address this more foundational question, this response will step away from analogy as a focus, and will address first the various comments from the perspective of the initial motivation for developing this theory, and then specific issues raised by the commentators. © 2010 Psychology Press.
Fire-induced failure mode testing for dc-powered control circuits
10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010
The U.S. Nuclear Regulatory Commission, in concert with industry, continues to explore the effects of fire on electrical cable and control circuit performance. The latest efforts, which are currently underway, are exploring issues related to fire-induced cable failure modes and effects for direct current (dc) powered electrical control circuits. An extensive series of small and intermediate scale fire tests has been performed. Each test induced electrical failure in copper conductor cables of various types typical of those used by the U.S. commercial nuclear power industry. The cables in each test were connected to one of several surrogate dc control circuits designed to monitor and detect cable electrical failure modes and effects. The tested dc control circuits included two sets of reversing dc motor starters typical of those used in motor-operated valve (MOV) circuits, two small solenoid-operated valves (SOV), one intermediate size (1-inch (25.4 mm) diameter) SOV, a very large direct-acting valve coil, and a switchgear/breaker unit. Also included was a specialized test circuit designed specifically to monitor for electrical shorts between two cables (inter-cable shorting). Each of these circuits was powered from a nominal 125 V battery bank composed of 60 individual battery cells (nominal 2 V lead-acid type cells with plates made from a lead-cadmium alloy). The total available short circuit current at the terminals of the battery bank was estimated at 13,000 A. All of the planned tests have been completed; data analysis and reporting are currently being finalized. This paper will briefly describe the test program, some of the preliminary test insights, and planned follow-on activities.
Investigation of microcantilever array with ordered nanoporous coatings for selective chemical detection
Proceedings of SPIE - The International Society for Optical Engineering
In this paper we demonstrate the potential for novel nanoporous framework materials (NFM) such as metal-organic frameworks (MOFs) to provide selectivity and sensitivity to a broad range of analytes including explosives, nerve agents, and volatile organic compounds (VOCs). NFM are highly ordered, crystalline materials with considerable synthetic flexibility resulting from the presence of both organic and inorganic components within their structure. Detection of chemical weapons of mass destruction (CWMD), explosives, toxic industrial chemicals (TICs), and volatile organic compounds (VOCs) using micro-electro-mechanical-systems (MEMS) devices, such as microcantilevers and surface acoustic wave sensors, requires the use of recognition layers to impart selectivity. Traditional organic polymers are dense, impeding analyte uptake and slowing sensor response. The nanoporosity and ultrahigh surface areas of NFM enhance transport into and out of the NFM layer, improving response times, and their ordered structure enables structural tuning to impart selectivity. Here we describe experiments and modeling aimed at creating NFM layers tailored to the detection of water vapor, explosives, CWMD, and VOCs, and their integration with the surfaces of MEMS devices. Force field models show that a high degree of chemical selectivity is feasible. For example, using a suite of MOFs it should be possible to select for explosives vs. CWMD, VM vs. GA (nerve agents), and anthracene vs. naphthalene (VOCs). We will also demonstrate the integration of various NFM with the surfaces of MEMS devices and describe new synthetic methods developed to improve the quality of NFM coatings. Finally, MOF-coated MEMS devices show how temperature changes can be tuned to improve response times, selectivity, and sensitivity. © 2010 Copyright SPIE - The International Society for Optical Engineering.
Pixelated spectral filter for integrated focal plane array in the long-wave IR
Proceedings of SPIE - The International Society for Optical Engineering
We present the design, fabrication, and characterization of a pixelated, hyperspectral arrayed component for Focal Plane Array (FPA) integration in the Long-Wave IR. This device contains tens of pixels within a single super-pixel which is tiled across the extent of the FPA. Each spectral pixel maps to a single FPA pixel with a spectral FWHM of 200nm. With this arrayed approach, remote sensing data may be accumulated with a non-scanning, "snapshot" imaging system. This technology is flexible with respect to individual pixel center wavelength and to pixel position within the array. Moreover, the entire pixel area has a single wavelength response, not the integrated linear response of a graded cavity thickness design. These requirements bar tilted, linear array technologies where the cavity length monotonically increases across the device. © 2010 Copyright SPIE - The International Society for Optical Engineering.
A physics-based device model of transient neutron damage in bipolar junction transistors
IEEE Transactions on Nuclear Science
For the purpose of simulating the effects of neutron radiation damage on bipolar circuit performance, a bipolar junction transistor (BJT) compact model incorporating displacement damage effects and rapid annealing has been developed. A physics-based approach is used to model displacement damage effects, and this modeling approach is implemented as an augmentation to the Gummel-Poon BJT model. The model is implemented in the Xyce circuit simulator, shown to agree well with experiments and TCAD simulation, and shown to be superior to a previous compact modeling approach. © 2010 IEEE.
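For intuition, the classic first-order description of displacement damage is the Messenger-Spratt relation, in which reciprocal current gain grows linearly with neutron fluence. The paper's physics-based model goes well beyond this (adding rapid annealing within a Gummel-Poon framework), but the sketch below shows the basic trend; the numbers are made up:

```python
# Messenger-Spratt sketch: 1/beta(phi) = 1/beta0 + phi/K, where phi is the
# neutron fluence and K a device damage constant. Numbers are illustrative.

def degraded_beta(beta0, fluence, damage_constant):
    """Post-irradiation current gain from the Messenger-Spratt relation."""
    return 1.0 / (1.0 / beta0 + fluence / damage_constant)

beta = degraded_beta(beta0=150.0, fluence=1e13, damage_constant=1e15)  # -> 60.0
```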
Optimal utilization of heterogeneous resources for biomolecular simulations
2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2010
Biomolecular simulations have traditionally benefited from increases in the processor clock speed and coarse-grain inter-node parallelism on large-scale clusters. With stagnating clock frequencies, the evolutionary path for performance of microprocessors is maintained by virtue of core multiplication. Graphical processing units (GPUs) offer revolutionary performance potential at the cost of increased programming complexity. Furthermore, it has been extremely challenging to effectively utilize heterogeneous resources (host processor and GPU cores) for scientific simulations, as underlying systems, programming models and tools are continually evolving. In this paper, we present a parametric study demonstrating approaches to exploit resources of heterogeneous systems to reduce time-to-solution of a production-level application for biological simulations. By overlapping and pipelining computation and communication, we observe up to 10-fold application acceleration in multi-core and multi-GPU environments illustrating significant performance improvements over code acceleration approaches, where the host-to-accelerator ratio is static, and is constrained by a given algorithmic implementation. © 2010 IEEE.
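The overlap-and-pipeline idea described above can be sketched with a background worker that stages the next chunk of data while the current one is being computed on. In the real application the stages would be host-GPU transfers and kernel launches; the stand-in functions below only illustrate the structure:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of overlapping "communication" (data staging) with computation.
# stage_in and compute are stand-ins for transfer and kernel stages.

def pipeline(chunks, stage_in, compute):
    """Process chunks while prefetching the next one in the background."""
    if not chunks:
        return []
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(stage_in, chunks[0])
        for i in range(len(chunks)):
            data = pending.result()
            if i + 1 < len(chunks):
                # Start staging the next chunk before computing on this one
                pending = pool.submit(stage_in, chunks[i + 1])
            results.append(compute(data))
    return results

out = pipeline([1, 2, 3], stage_in=lambda c: c * 10, compute=lambda d: d + 1)
# out == [11, 21, 31]
```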
A parametric study of the impact of various error contributions on the flux distribution of a solar dish concentrator
ASME 2010 4th International Conference on Energy Sustainability, ES 2010
Dish concentrators can produce highly concentrated flux for the operation of an engine, a chemical process, or other energy converter. The high concentration allows a small aperture to control thermal losses, and permits high temperature processes at the focal point. A variety of optical errors can influence the flux pattern both at the aperture and at the absorber surface. Impacts of these errors can be lost energy (intercept losses), aperture compromise (increased size to accommodate flux), high peak fluxes (leading to part failure or life reduction), and improperly positioned flux, also leading to component failure. Optical errors can include small scale facet errors ("waviness"), facet shape errors, alignment (facet pointing) errors, structural deflections, and tracking errors. The errors may be random in nature, or may be systematic. The various sources of errors are often combined in a "root-mean-squared" process to present a single number as an "error budget". However, this approach ignores the fact that various errors can influence performance in different ways, and can mislead the designer, leading to component damage or poor system performance. In this paper, we model a hypothetical radial gore dish system using Sandia's CIRCE2 optical code. We evaluate the peak flux and incident power through the aperture and onto various parts of the receiver cavity. We explore the impact of different error sources on the character of the flux pattern, and demonstrate the limitations of lumping all of the errors into a single error budget. © 2010 by ASME.
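The "root-mean-squared" combination the paper critiques simply root-sum-squares the individual slope-error sources into one number. A minimal sketch, with invented error values in milliradians:

```python
import math

# Root-sum-square combination of independent slope-error sources into a
# single "error budget" (values in mrad are invented for illustration).

def rss_error_budget(errors_mrad):
    return math.sqrt(sum(e ** 2 for e in errors_mrad))

# waviness, facet shape, alignment, structural deflection, tracking
budget = rss_error_budget([1.0, 1.5, 2.0, 0.5, 1.0])  # ~2.92 mrad
```

The abstract's point is precisely that this single number can mislead: a 2 mrad systematic alignment error and a 2 mrad random waviness contribute identically to the RSS budget yet produce very different flux patterns at the aperture and receiver.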