Stability Experiments on MEMS Aluminum Nitride RF Resonators
Abstract not provided.
Abstract not provided.
This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help identify the main causes of failures, downtime, and cost, and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology, including condition monitoring and prognostics and health management.
Laser-induced incandescence measurements have recently been obtained from 10% and 30% toluene-in-methanol blended fuel pool fires of 2-m diameter. Calibration of the instrument was performed using an ethylene/air laminar diffusion flame produced by a Santoro-type burner, which allowed the extraction of absolute soot volume fractions from these images. Performance of the optical probe was characterized using the laminar diffusion flame, and corrections were implemented for signal dependence upon detector gain, flat field, and location within the probe laser sheet when processing the images. Probability density functions of the soot volume fraction were constructed for the blended fuels used in this study, and the mean values were determined to be 0.0077 and 0.028 ppm for the 10% and 30% blended fuels, respectively. Signal trapping was estimated for the two types of blended fuel; it was determined to be negligible for the 10% toluene/methanol blend and to require a ~10% correction for the 30% toluene/methanol blend.
Proposed for publication in Designs, Codes, and Cryptography.
Abstract not provided.
Abstract not provided.
IPDPS 2009 - Proceedings of the 2009 IEEE International Parallel and Distributed Processing Symposium
Abstract not provided.
FPL 09: 19th International Conference on Field Programmable Logic and Applications
As FPGA logic density continues to increase, new techniques are needed to store initial configuration data efficiently, maintain usability, and minimize cost. In this paper, a novel compression technique is presented for Xilinx Virtex partially reconfigurable FPGAs. The technique relies on constrained hardware design and layout combined with a few simple compression steps. Partial reconfiguration is used to separate a hardware design into two regions: a static region and a partial region. A bitstream containing only the static region is then compressed by removing empty frames; this bitstream is stored in non-volatile memory and used for initialization. The remaining logic is configured through partial reconfiguration over a communication network. By applying this technique, a high level of compression was achieved (almost 90% for the V4 LX25). The technique requires no extra decompression circuitry, and compression levels improve as device size increases. ©2009 IEEE.
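To make the frame-elision idea concrete, here is a minimal sketch. The fixed frame size and the all-zero test for "empty" are illustrative assumptions, not the actual Xilinx Virtex bitstream format:

```python
# Minimal sketch of empty-frame elision: store only non-empty frames plus
# their indices; restore elided frames as zeros on expansion. The frame
# size and all-zero "empty" test are assumptions for illustration only.
import random

FRAME_WORDS = 41  # hypothetical words per configuration frame

def compress_static_bitstream(frames):
    """Keep only non-empty frames, recording their original indices."""
    return [(i, f) for i, f in enumerate(frames) if any(f)]

def expand(kept, n_frames):
    """Rebuild the full frame list, restoring elided frames as zeros."""
    frames = [[0] * FRAME_WORDS for _ in range(n_frames)]
    for i, f in kept:
        frames[i] = list(f)
    return frames

random.seed(0)
image = [[0] * FRAME_WORDS for _ in range(100)]   # mostly-empty static image
for i in random.sample(range(100), 12):           # 12 populated frames
    image[i] = [random.randrange(1, 2**32) for _ in range(FRAME_WORDS)]
kept = compress_static_bitstream(image)
assert expand(kept, len(image)) == image          # lossless round trip
print(f"elided {1 - len(kept) / len(image):.0%} of frames")
```

Because unused regions of a larger device contribute only index entries, the achievable ratio grows with device size, consistent with the trend reported above.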
Physical Review B - Condensed Matter and Materials Physics
Thermal transport across one-dimensional atomic chains is studied using a harmonic nonequilibrium Green's function formalism in the ballistic phonon transport regime. Introducing a mass impurity in the chain and mass loading in the thermal contacts leads to interference of phonon waves, which can be manipulated by varying the magnitude of the loading. This shows that thermal rectification is tunable in a completely harmonic system. © 2009 The American Physical Society.
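For context, the ballistic heat current through such a harmonic chain takes the standard Landauer form of NEGF transport theory (generic notation, not necessarily the paper's):

J = \frac{1}{2\pi} \int_0^{\infty} \hbar\omega\, \Xi(\omega) \left[ f_L(\omega) - f_R(\omega) \right] d\omega,
\qquad
\Xi(\omega) = \mathrm{Tr}\!\left[ \Gamma_L G^r \Gamma_R G^a \right],

where \Xi(\omega) is the phonon transmission computed from the retarded/advanced Green's functions of the chain, \Gamma_{L,R} describe coupling to the contacts, and f_{L,R} are the Bose-Einstein occupations of the two thermal contacts. Mass impurities and contact mass loading enter through G and \Gamma, which is how the interference and rectification tuning described above arise.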
2008 Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, DETC 2008
An improved theoretical approach is presented to calculate and predict the quality factors of flexible microcantilevers affected by squeeze-film damping at low ambient pressures and moderate to high Knudsen numbers. Veijola's model [1], originally derived for a rigid oscillating plate near a wall, is extended to a flexible cantilever beam, and both the gas inertia effect and the slip boundary condition are considered in deriving the resulting damping pressure. The model is used to predict the natural frequencies and quality factors of silicon microcantilevers with small gaps and their dependence on ambient pressure. In contrast to no-slip, continuum models, we find that the quality factor depends strongly on ambient pressure, and that the damping of higher modes is more sensitive to ambient pressure than that of the fundamental. Copyright © 2008 by ASME.
2008 Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, DETC 2008
This paper presents a novel micro-scale passive-latching mechanical shock sensor with reset capability. The device integrates a compliant bistable mechanism, designed to have a high contact force and low actuation force, with metal-to-metal electrical contacts that provide a means for interrogating the switch state. No electrical power is required during storage or sensing. Electrical power is only required to initialize, reset, self-test, or interrogate the device, allowing the mechanism to be used in low-power and long shelf-life applications. The sensor has a footprint of about 1 mm2, allowing multiple devices to be integrated on a single chip for arrays of acceleration thresholds, redundancy, and/or multiple sense directions. Modeling and experimental results for a few devices with different thresholds in the 100g to 400g range are given. Centrifuge test results show that the accelerations required to toggle the switches are higher than current model predictions. Resonant frequency measurements suggest that the springs may be stiffer than predicted. Hammer-strike tests demonstrate the feasibility of using the devices as sensors for actual mechanical shock events. Copyright © 2008 by ASME.
2009 IEEE Conference on Technologies for Homeland Security, HST 2009
This paper presents Sandia National Laboratories' Outdoor Weapons of Mass Destruction Decision Analysis Center (OutDAC) and, through an example case study, derives lessons for its use. This tool, related to similar capabilities at Sandia, can be used to determine functional requirements for a detection system for aerosol-released threats outdoors. Essential components of OutDAC are a population database, a meteorological dataset, an atmospheric transport and dispersion model, and an optimization toolkit. Detector placement is performed through optimization against a library of hypothesized attack scenarios by minimizing either the mean or the value-at-risk of undetected infections. These scenarios are the product of a Monte Carlo simulation intended to characterize the uncertainty associated with the threat. An example case study illustrates that Monte Carlo convergence is dependent on the statistic of interest. Furthermore, the quality of the detector placement optimization may be tied to the convergence level of the Monte Carlo simulation. © 2009 IEEE.
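As an illustration of the placement problem (a greedy sketch, not OutDAC's actual optimization toolkit), minimizing the mean undetected infections over a scenario library might look like:

```python
# Illustrative sketch: choose k detector sites to minimize the mean
# undetected infections over a library of sampled attack scenarios.
# The greedy heuristic and the data layout are assumptions, not OutDAC.
import random

def greedy_placement(sites, scenarios, infections, detects, k):
    """
    sites      : candidate detector locations
    scenarios  : scenario ids (Monte Carlo samples)
    infections : dict scenario -> infections if the attack goes undetected
    detects    : dict (site, scenario) -> True if that site detects it
    """
    chosen = set()
    for _ in range(k):
        def mean_undetected(extra):
            trial = chosen | {extra}
            return sum(infections[s] for s in scenarios
                       if not any(detects[(d, s)] for d in trial)) / len(scenarios)
        best = min((x for x in sites if x not in chosen), key=mean_undetected)
        chosen.add(best)
    return chosen

# Toy example: 6 candidate sites, 50 scenarios, ~30% detection per site.
random.seed(0)
sites = list(range(6))
scenarios = list(range(50))
infections = {s: random.randint(100, 10_000) for s in scenarios}
detects = {(d, s): random.random() < 0.3 for d in sites for s in scenarios}
print(greedy_placement(sites, scenarios, infections, detects, k=2))
```

Replacing the mean with an upper quantile of the undetected-infections distribution gives the value-at-risk variant mentioned above.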
This report summarizes the Combinatorial Algebraic Topology: software, applications & algorithms workshop (CAT Workshop). The workshop was sponsored by the Computer Science Research Institute of Sandia National Laboratories. It was organized by CSRI staff members Scott Mitchell and Shawn Martin. It was held in Santa Fe, New Mexico, August 29-30. The CAT Workshop website has links to some of the talk slides and other information, http://www.cs.sandia.gov/CSRI/Workshops/2009/CAT/index.html. The purpose of the report is to summarize the discussions and recap the sessions. There is a special emphasis on technical areas that are ripe for further exploration, and the plans for follow-up amongst the workshop participants. The intended audiences are the workshop participants, other researchers in the area, and the workshop sponsors.
Proceedings of SPIE - The International Society for Optical Engineering
Fibers doped with rare-earth constituents such as Yb3+ and Er3+, as well as fibers co-doped with these species, form an essential part of many optical systems requiring amplification. This study consists of two separate investigations examining the effects of gamma-radiation-induced photodarkening on the behavior of rare-earth-doped fibers. In one part of this study, a suite of previously irradiated rare-earth-doped fibers was heated to an elevated temperature of 300°C and the transmittance monitored over an 8-hour period. Transmittance recoveries of ~10-20% were found for Er3+-doped fiber, while recoveries of ~5-15% and ~20% were found for Yb3+-doped and Yb3+/Er3+ co-doped fibers, respectively. In the other part of this study, an Yb3+-doped fiber was actively pumped by a laser diode during a gamma-radiation exposure to simulate the operation of an optical amplifier in a radiation environment. The response of the amplified signal was observed and monitored over time. A significant decrease in amplifier output was observed to result from the gamma-radiation exposure. © 2009 SPIE.
IEEE International Reliability Physics Symposium Proceedings
2009 Conference on Lasers and Electro-Optics and 2009 Conference on Quantum Electronics and Laser Science Conference, CLEO/QELS 2009
A new class of microphotonic resonators, Adiabatic Resonant Microrings (ARMs), is introduced. The ARM resonator geometry enables heater elements to be formed within the resonator, simultaneously enabling record low-power (4.4 μW/GHz) and record high-speed (1 μs) thermal tuning. ©2009 Optical Society of America.
Abstract not provided.
IEEE International Conference on Plasma Science
Abstract not provided.
Proceedings - Symposium on Fusion Engineering
An electromagnetic analysis is performed on different first wall designs for the ITER device. The electromagnetic forces and torques present due to a plasma disruption event are calculated and compared for the different designs.
Proceedings - Symposium on Fusion Engineering
An electromagnetic analysis is performed on the ITER shield modules under different plasma disruption scenarios using the OPERA-3d software. The modeling procedure is explained, electromagnetic torques are presented, and results of the modeling are discussed.
IEEE International Conference on Plasma Science
Abstract not provided.
International Journal for Numerical Methods in Engineering
Three hundred-plus years of successful theoretical development and application of probability theory provide sufficient justification for it as the mathematical context in which to analyze the uncertainty in the performance of engineering and scientific systems. In this document, we propose a joint probabilistic and deterministic function analytic approach as the means for the development of advanced techniques that feature a strong connection between classical deterministic and probabilistic methods. We know of no other means to achieve simultaneous, balanced approximations across these two constituents. We present foundational materials on the general approach to particular aspects of functional analysis, which are relevant to probability, and emphasize the common elements it shares, and the close connections it provides, to various classical deterministic mathematical analysis elements. Finally, we describe how to use the joint approach as a means to augment deterministic analysis methods in a particular Hilbert space context, and thus enable a rigorous framework for commingling deterministic and probabilistic analysis tools in an application setting. © 2009 John Wiley & Sons, Ltd.
Annual Conference of the North American Fuzzy Information Processing Society - NAFIPS
Terrorist acts are intentional and therefore differ significantly from "dumb" random acts that are the subject of most risk analyses. There is significant epistemic (state of knowledge) uncertainty associated with such intentional acts, especially for the likelihood of specific attack scenarios. Also, many of the variables of concern are not numeric and should be treated as purely linguistic (words). Epistemic uncertainty can be addressed using the belief/plausibility measure of uncertainty, an extension of the traditional probability measure of uncertainty. Fuzzy sets can be used to segregate a variable into purely linguistic values. Linguistic variables can be combined using an approximate reasoning rule base to map combinations of fuzzy sets of the constituent variables to fuzzy sets of the resultant variable. We have implemented the mathematics of fuzzy sets, approximate reasoning, and belief/plausibility into Java software tools. The PoolEvidence© software tool combines evidence (pools) from different experts. The LinguisticBelief© software tool evaluates the risk associated with scenarios of concern using the pooled evidence as input. The tools are not limited to the evaluation of terrorist risk; they are useful for evaluating any decision involving significant epistemic uncertainty and linguistic variables. Sandia National Laboratories' analysts have applied the tools to: risk of terrorist acts, security of nuclear materials, cyber security, prediction of movements of plumes of hazardous materials, and issues with nuclear weapons. This paper focuses on evaluating the risk of acts of terrorism. ©2009 IEEE.
Technometrics
Optimization for complex systems in engineering often involves the use of expensive computer simulation. By combining statistical emulation using treed Gaussian processes with pattern search optimization, we are able to perform robust local optimization more efficiently and effectively than when using either method alone. Our approach is based on the augmentation of local search patterns with location sets generated through improvement prediction over the input space. We further develop a computational framework for asynchronous parallel implementation of the optimization algorithm. We demonstrate our methods on two standard test problems and our motivating example of calibrating a circuit device simulator. © 2009 American Statistical Association.
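The general pattern-augmentation idea can be sketched in a few lines; here a plain radial-basis surrogate stands in for the treed Gaussian process, and the one-dimensional objective, constants, and contraction rule are illustrative choices, not the paper's algorithm:

```python
# Surrogate-augmented pattern search: poll the local stencil points plus a
# candidate proposed by a surrogate fit to all evaluations so far. A simple
# RBF interpolant stands in for the treed GP of the paper.
import numpy as np

def f(x):                               # toy stand-in for an expensive simulation
    return (x - 1.3) ** 2 + 0.1 * np.sin(8 * x)

def surrogate_minimum(X, y, grid):
    """Fit an RBF interpolant to the evaluations and return the grid minimizer."""
    X, y = np.asarray(X), np.asarray(y)
    K = np.exp(-2.0 * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)
    preds = np.exp(-2.0 * (grid[:, None] - X[None, :]) ** 2) @ w
    return grid[np.argmin(preds)]

x, step = 0.0, 0.5
X, y = [x], [f(x)]
grid = np.linspace(-2.0, 4.0, 601)
for _ in range(30):
    polls = [x - step, x + step]        # classic pattern-search poll points
    if len(X) >= 3:
        polls.append(surrogate_minimum(X, y, grid))   # surrogate-proposed point
    vals = [(float(f(p)), float(p)) for p in polls]
    X += [p for _, p in vals]
    y += [v for v, _ in vals]
    best_val, best_p = min(vals)
    if best_val < f(x):                 # successful poll: move the pattern center
        x = best_p
    else:                               # failed poll: contract the pattern
        step *= 0.5
print(round(x, 3))                      # settles near the minimizer of f
```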
We have performed molecular dynamics simulations of cascade damage in the gadolinium pyrochlore Gd2Zr2O7, comparing results obtained from traditional methodologies that ignore the effect of electron-ion interactions with a 'two-temperature model' in which the electronic subsystem is modeled using a diffusion equation to determine the electronic temperature. We find that the electron-ion interaction friction coefficient γp is a significant parameter in determining the behavior of the system following the formation of the primary knock-on atom (here, a U3+ ion). The mean final U3+ displacement and the number of defect atoms formed are shown to decrease uniformly with increasing γp; however, other properties, such as the final equilibrium temperature and the oxygen-oxygen radial distribution function, show a more complicated dependence on γp.
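For reference, the two-temperature model referred to above is conventionally written (in the Duffy-Rutherford form common in radiation-damage MD; the paper's exact implementation may differ) as a Langevin thermostat on the ions coupled to a diffusion equation for the electronic temperature T_e:

m_i \frac{d\mathbf{v}_i}{dt} = \mathbf{F}_i - \gamma_p \mathbf{v}_i + \tilde{\mathbf{F}}_i(t),
\qquad
C_e \frac{\partial T_e}{\partial t} = \nabla \cdot \left( \kappa_e \nabla T_e \right) + g_p \left( T_a - T_e \right),

where the stochastic force \tilde{\mathbf{F}}_i satisfies a fluctuation-dissipation relation at the local T_e, T_a is the local ionic temperature, and the coupling g_p is fixed by the friction coefficient γp, which is why γp controls how strongly the cascade energy is drained into the electronic subsystem.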
This report documents progress in discovering new catalytic technologies that will support the development of advanced biofuels. The global shift from petroleum-based fuels to advanced biofuels will require transformational breakthroughs in biomass deconstruction technologies, because current methods are neither cost effective nor sufficiently efficient or robust for scalable production. Discovery and characterization of lignocellulolytic enzyme systems adapted to extreme environments will accelerate progress. Obvious extreme environments to mine for novel lignocellulolytic deconstruction technologies include aridland ecosystems (ALEs), such as those of the Sevilleta Long Term Ecological Research (LTER) site in central New Mexico (NM). ALEs represent at least 40% of the terrestrial biosphere and are classic extreme environments, with low nutrient availability, high ultraviolet radiation flux, limited and erratic precipitation, and extreme variation in temperatures. ALEs are functionally distinct from temperate environments in many respects; one salient distinction is that ALEs do not accumulate soil organic carbon (SOC), in marked contrast to temperate settings, which typically have large pools of SOC. Low productivity ALEs do not accumulate carbon (C) primarily because of extraordinarily efficient extracellular enzyme activities (EEAs) that are derived from underlying communities of diverse, largely uncharacterized microbes. Such efficient enzyme activities presumably reflect adaptation to this low productivity ecosystem, with the result that all available organic nutrients are assimilated rapidly. These communities are dominated by ascomycetous fungi, both in terms of abundance and contribution to ecosystem-scale metabolic processes, such as nitrogen and C cycling. To deliver novel, robust, efficient lignocellulolytic enzyme systems that will drive transformational advances in biomass deconstruction, we have: (1) secured an award through the Department of Energy (DOE) Joint Genome Institute (JGI) to perform metatranscriptomic functional profiling of eukaryotic microbial communities of blue grama grass (Bouteloua gracilis) rhizosphere (RHZ) soils and (2) isolated and provided initial genotypic and phenotypic characterization data for thermophilic fungi. Our preliminary results show that many strains in our collection of thermophilic fungi frequently outperform industry standards in key assays; we also demonstrated that this collection is taxonomically diverse and phenotypically compelling. The studies summarized here are being performed in collaboration with the University of New Mexico and are based at the Sevilleta LTER research site.
This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.
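In generic notation (illustrative, not the report's exact formulation), a linear Galerkin projection ROM approximates the state by a small modal expansion and projects the governing equations onto the same basis:

u(x,t) \approx \bar{u}(x) + \sum_{i=1}^{m} a_i(t)\, \phi_i(x),
\qquad
\frac{da_j}{dt} = \left( \phi_j,\; \mathcal{L}\!\left[ \bar{u} + \textstyle\sum_i a_i \phi_i \right] \right), \quad j = 1,\dots,m,

so that for a linear operator \mathcal{L} the ROM reduces to a small ODE system \dot{a} = A a with A_{ji} = (\phi_j, \mathcal{L}\phi_i). Stability of the ROM then hinges on the choice of inner product and on how boundary conditions enter A, which is the issue the report's stability-preserving boundary treatment addresses.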
Abstract not provided.
Abstract not provided.
Proposed for publication in various journals such as redteamjournal.com.
Red teams that address complex systems have rarely taken advantage of Modeling and Simulation (M&S) in a way that reproduces most or all of a red-blue team exchange within a computer. Chess programs, starting with IBM's Deep Blue, outperform humans in that red-blue interaction, so why shouldn't we think computers can outperform traditional red teams now or in the future? This and future position papers will explore possible ways to use M&S to augment or replace traditional red teams in some situations, the features Red Team M&S should possess, how one might connect live and simulated red teams, and existing tools in this domain.
Abstract not provided.
Abstract not provided.
Group 12 metal cyclam complexes and their derivatives, as well as (octyl)2Sn(OMe)2, were examined as potential catalysts for the production of dimethyl carbonate (DMC) using CO2 and methanol. The zinc cyclams will readily take up carbon dioxide and methanol at room temperature and atmospheric pressure to give the metal methyl carbonate. The tin complex exhibited improved DMC yields. Studies involving the reaction of bis-phosphino- and (phosphino)(silyl)-amido group 2 and 12 complexes with CO2 and CS2 were performed. Notable results include formation of phosphino-substituted isocyanates, fixation of three moles of CO2 in an unprecedented [N(CO2)3]3- anion, and rapid splitting of CS2 by main group elements under extremely mild conditions. Similar investigations of divalent group 14 silyl amides led to room-temperature splitting of CO2 into CO and metal oxide clusters, and the formation of isocyanates and carbodiimides.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This late-start RTBF project began the development of barium titanate (BTO)/glass nanocomposite capacitors for future and emerging energy storage applications. The long-term goal of this work is to decrease the size, weight, and cost of ceramic capacitors while increasing their reliability. Ceramic-based nanocomposites have the potential to yield materials with enhanced permittivity, breakdown strength (BDS), and reduced strain, which can increase the energy density of capacitors and increase their shot life. Composites of BTO in glass will limit grain growth during device fabrication (preserving nanoparticle grain size and enhanced properties), resulting in devices with improved density, permittivity, BDS, and shot life. BTO will eliminate the issues associated with Pb toxicity and volatility as well as the variation in energy storage vs. temperature of PZT-based devices. During the last six months of FY09 this work focused on developing syntheses for BTO nanoparticles and firing profiles for sintering BTO/glass composite capacitors.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Bioweapons and emerging infectious diseases pose formidable and growing threats to our national security. Rapid advances in biotechnology and the increasing efficiency of global transportation networks virtually guarantee that the United States will face potentially devastating infectious disease outbreaks caused by novel ('unknown') pathogens either intentionally or accidentally introduced into the population. Unfortunately, our nation's biodefense and public health infrastructure is primarily designed to handle previously characterized ('known') pathogens. While modern DNA assays can identify known pathogens quickly, identifying unknown pathogens currently depends upon slow, classical microbiological methods of isolation and culture that can take weeks to produce actionable information. In many scenarios that delay would be costly, in terms of casualties and economic damage; indeed, it can mean the difference between a manageable public health incident and a full-blown epidemic. To close this gap in our nation's biodefense capability, we will develop, validate, and optimize a system to extract nucleic acids from unknown pathogens present in clinical samples drawn from infected patients. This system will extract nucleic acids from a clinical sample and amplify pathogen and specific host-response nucleic acid sequences. These sequences will then be suitable for ultra-high-throughput sequencing (UHTS) carried out by a third party. The data generated from UHTS will then be processed through a new data assimilation and bioinformatic analysis pipeline that will allow us to characterize an unknown pathogen in hours to days instead of weeks to months. Our methods will require no a priori knowledge of the pathogen, and no isolation or culturing; therefore they will circumvent many of the major roadblocks confronting a clinical microbiologist or virologist when presented with an unknown or engineered pathogen.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Energy production is inextricably linked to national security and poses the danger of altering the environment in potentially catastrophic ways. There is no greater problem than sustainable energy production. Our purpose was to attack this problem by examining the processes, technology, and science needed for recycling CO2 back into transportation fuels. This approach can be thought of as 'bio-inspired,' as nature employs the same basic inputs, CO2/energy/water, to produce biomass. We addressed two key deficiencies apparent in current efforts. First, a detailed process analysis comparing the potential for chemical and conventional engineering methods to provide a route for the conversion of CO2 and water to fuel has been completed. No 'showstoppers' are apparent in the synthetic route. Opportunities to improve current processes have also been identified and examined. Second, we have specifically addressed the fundamental science of the direct production of methanol from CO2 using H2 as a reductant.
Abstract not provided.
Abstract not provided.
The NUclear EVacuation Analysis Code (NUEVAC) has been developed by Sandia National Laboratories to support the analysis of shelter-evacuate (S-E) strategies following an urban nuclear detonation. This tool can model a range of behaviors, including complex evacuation timing and path selection, as well as various sheltering or mixed evacuation and sheltering strategies. The calculations are based on externally generated, high resolution fallout deposition and plume data. Scenario setup and calculation outputs make extensive use of graphics and interactive features. This software is designed primarily to produce quantitative evaluations of nuclear detonation response options. However, the outputs have also proven useful in the communication of technical insights concerning shelter-evacuate tradeoffs to urban planning or response personnel.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Previous studies in the nuclear weapons complex have shown that ambiguous work instructions (WIs) and operating procedures (OPs) can lead to human error, which is a major cause for concern. This report outlines some of the sources of ambiguity in written English and describes three recommendations for reducing ambiguity in WIs and OPs. The recommendations are based on commonly used research techniques in the fields of linguistics and cognitive psychology. The first recommendation is to gather empirical data that can be used to improve the recommended word lists that are provided to technical writers. The second recommendation is to hold a review in which new WIs and OPs are checked for ambiguities and clarity. The third recommendation is to use self-paced reading time studies to identify any remaining ambiguities before the new WIs and OPs are put into use. If these three steps are followed for new WIs and OPs, the likelihood of human errors related to ambiguity could be greatly reduced.
This report describes the results of a small experimental study that investigated potential sources of ambiguity in written work instructions (WIs). The English language can be highly ambiguous because words with different meanings can share the same spelling. Previous studies in the nuclear weapons complex have shown that ambiguous WIs can lead to human error, which is a major cause for concern. To study possible sources of ambiguity in WIs, we determined which of the recommended action verbs in the DOE and BWXT writer's manuals have numerous meanings to their intended audience, making them potentially ambiguous. We used cognitive psychology techniques to conduct a survey in which technicians who use WIs in their jobs indicated the first meaning that came to mind for each of the words. Although the findings of this study are limited by the small number of respondents, we identified words that had many different meanings even within this limited sample. WI writers should pay particular attention to these words and to their most frequent meanings so that they can avoid ambiguity in their writing.
Fast electrical energy storage, or Voltage-Driven Technology (VDT), has dominated fast, high-voltage pulsed power systems for the past six decades. Fast magnetic energy storage, or Current-Driven Technology (CDT), is characterized by 10,000× higher energy density than VDT and has a great number of other substantial advantages, but it has been all but neglected for all of these decades. The uniform explanation for the neglect of CDT is invariably that the industry has never been able to make an effective opening switch, which is essential for the use of CDT. Most approaches to opening switches have involved plasma of one sort or another. On a large scale, gaseous plasmas have been used as a conductor to bridge the switch electrodes; the switch opens when the current wave front propagates through to the output end of the plasma and fully magnetizes it - this is called a Plasma Opening Switch (POS). Opening can be triggered in a POS using a magnetic field to push the plasma out of the A-K gap - this is called a Magnetically Controlled Plasma Opening Switch (MCPOS). On a small scale, depletion of electron plasmas in semiconductor devices is used to effect opening switch behavior, but these devices are relatively low voltage and low current compared to the hundreds of kilovolts and tens of kiloamperes of interest to pulsed power. This work is an investigation into an entirely new approach to opening switch technology that utilizes new materials in new ways. The new materials are ferroelectrics, and using them as an opening switch is a stark contrast to their traditional use in optics and transducers. Emphasis is on the use of high-performance ferroelectrics with the objective of developing an opening switch suitable for large-scale pulsed power applications. Over the course of exploring this new ground, we have discovered behaviors and properties of these materials that were heretofore unknown. Some of these unexpected discoveries have led to new research directions to address challenges.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.
This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users’ Guide. The focus of this document is (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users’ Guide.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The purpose of this project is to develop multi-layered co-extrusion (MLCE) capabilities at Sandia National Laboratories to produce multifunctional polymeric structures. Multi-layered structures containing layers of alternating electrical, mechanical, optical, or structural properties can be applied to a variety of potential applications including energy storage, optics, sensors, mechanical, and barrier applications relevant to the internal and external community. To obtain the desired properties, fillers must be added to the polymer materials that are much smaller than the end layer thickness. We developed two filled polymer systems, one for conductive layers and one for dielectric layers and demonstrated the potential for using MLCE to manufacture capacitors. We also developed numerical models to help determine the material and processing parameters that impact processing and layer stability.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Nobel Prize winner Richard Smalley was an avid champion for the cause of energy research. Calling it 'the single most important problem facing humanity today,' Smalley promoted the development of nanotechnology as a means to harness solar energy. Using nanotechnology to create solar fuels (i.e., fuels created from sunlight, CO2, and water) is an especially intriguing idea, as it impacts not only energy production and storage, but also climate change. Solar irradiation is the only sustainable energy source of a magnitude sufficient to meet projections for global energy demand. Biofuels meet the definition of a solar fuel. Unfortunately, the efficiency of photosynthesis will need to be improved by an estimated factor of ten before biofuels can fully replace fossil fuels. Additionally, biological organisms produce an array of hydrocarbon products requiring further processing before they are usable for most applications. Alternately, 'bio-inspired' nanostructured photocatalytic devices that efficiently harvest sunlight and use that energy to reduce CO2 into a single useful product or chemical intermediate can be envisioned. Of course, producing such a device is very challenging, as it must be robust and multifunctional, i.e., capable of promoting and coupling the multi-electron, multi-photon water oxidation and CO2 reduction processes. Herein, we summarize some of the recent and most significant work towards creating light-harvesting nanodevices that reduce CO2 to CO (a key chemical intermediate), based on key functionalities inspired by nature. We report the growth of Co(III)TPPCl nanofibers (20-100 nm in diameter) on gas diffusion layers via an evaporation-induced self-assembly (EISA) method. Remarkably, as-fabricated electrodes demonstrate light-enhanced activity for CO2 reduction to CO, as evidenced by cyclic voltammograms and electrolysis with and without light irradiation. To the best of our knowledge, this is the first observation of such a light-enhanced CO2 reduction reaction based on nanostructured cobalt(III) porphyrin catalysts. Additionally, gas chromatography (GC) verifies that light irradiation can improve CO production by up to 31.3% during 2 hours of electrolysis. In addition, a variety of novel porphyrin nano- and micro-structures were also prepared, including nanospheres, nanotubes, and micro-crosses.
Abstract not provided.
We have used Matlab and Google Earth to construct a prototype application for modeling the performance of local seismic networks for monitoring small, contained explosions. Published equations based on refraction experiments provide estimates of peak ground velocities as a function of event distance and charge weight. Matlab routines implement these relations to calculate the amplitudes across a network of stations from sources distributed over a geographic grid. The amplitudes are then compared to ambient noise levels at the stations, and scaled to determine the smallest yield that could be detected at each source location by a specified minimum number of stations. We use Google Earth as the primary user interface, both for positioning the stations of a hypothetical local network, and for displaying the resulting detection threshold contours.
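In outline, the threshold computation reduces to inverting an amplitude-distance-yield scaling law at each station. The sketch below uses a generic relation A = C·W^b / R^n with made-up constants in place of the published refraction-experiment equations:

```python
# Sketch of the detection-threshold map: at each source location, invert the
# amplitude scaling law for the smallest charge W whose signal exceeds noise
# at the required number of stations. Constants are illustrative only.
import numpy as np

C, B, N = 1.0e3, 0.8, 1.6      # illustrative scaling-law constants
SNR_REQUIRED = 2.0             # amplitude must exceed noise by this factor
MIN_STATIONS = 3               # detections required to declare an event

def min_detectable_yield(src, stations, noise):
    """Smallest charge W detected at >= MIN_STATIONS stations."""
    dists = np.hypot(*(stations - src).T) + 1e-3     # station distances (km)
    # Invert A = C * W**B / R**N >= SNR_REQUIRED * noise for W per station,
    # then take the MIN_STATIONS-th smallest per-station threshold.
    w = (SNR_REQUIRED * noise * dists**N / C) ** (1.0 / B)
    return np.sort(w)[MIN_STATIONS - 1]

stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5.0]])
noise = np.array([1.0, 2.0, 1.0, 3.0, 1.5])          # ambient noise per station
for x in range(0, 11, 5):                            # coarse geographic grid
    for y in range(0, 11, 5):
        src = np.array([x, y], dtype=float)
        print((x, y), round(float(min_detectable_yield(src, stations, noise)), 3))
```

Contouring these grid values is what produces the detection-threshold maps displayed in Google Earth.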
This report summarizes existing statistical engines in VTK/Titan and presents both the serial and parallel k-means statistics engines. It is a sequel to [PT08], [BPRT09], and [PT09], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, and contingency engines. The ease of use of the new parallel k-means engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the k-means engine.
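The parallel design rests on the fact that the k-means update is a sum of per-partition statistics. A schematic of that map-reduce pattern (an illustration of the idea, not the VTK/Titan code) is:

```python
# Parallel k-means aggregation pattern: each data partition computes local
# (count, sum) statistics per cluster, the partials are reduced, and the
# centroids are updated globally.
import numpy as np

def local_stats(block, centroids):
    """Map step: per-cluster point counts and coordinate sums for one block."""
    labels = np.argmin(((block[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    k, d = centroids.shape
    counts = np.bincount(labels, minlength=k)
    sums = np.zeros((k, d))
    np.add.at(sums, labels, block)
    return counts, sums

def parallel_kmeans(blocks, centroids, iters=10):
    for _ in range(iters):
        partials = [local_stats(b, centroids) for b in blocks]   # map
        counts = sum(c for c, _ in partials)                     # reduce
        sums = sum(s for _, s in partials)
        nonempty = counts > 0
        centroids = centroids.copy()
        centroids[nonempty] = sums[nonempty] / counts[nonempty, None]
    return centroids

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.3, (200, 2)) for m in (0.0, 3.0, 6.0)])
blocks = np.array_split(rng.permutation(data), 4)    # 4 simulated "processes"
print(parallel_kmeans(blocks, data[:3].copy()))
```

In an MPI setting the two `sum` reductions become a single all-reduce, which is what makes the engine scale.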
The Proton-21 Laboratory in Ukraine has been publishing results on shock-induced transmutation of several elements, including cobalt-60 into non-radioactive elements. This report documents exploratory characterization of a shock-compressed Aluminum-6061 sample, which is the only available surrogate for the high-purity copper samples in the Proton-21 experiments. The goal was to determine Sandia's ability to detect possible shock-wave-induced transmutation products and to unambiguously validate or invalidate the claims in collaboration with the Proton-21 Laboratory. We have developed a suitable characterization process and tested it on the surrogate sample. Using trace elemental analysis capabilities, we found elevated and localized concentrations of impurity elements similar to those the Proton-21 group reports. All our results, however, are consistent with the ejection of impurities that were not in solution in our alloy or were deposited from the cathode during irradiation or possibly storage. Based on the detection capabilities demonstrated and additional techniques available, we are positioned to test samples from Proton-21 if funded to do so.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Numerous benchmark measurements have been performed to enable developers of neutron transport models and codes to evaluate the accuracy of their calculations. In particular, for criticality safety applications, the International Criticality Safety Benchmark Experiment Program (ICSBEP) annually publishes a handbook of critical and subcritical benchmarks. Relatively fewer benchmark measurements have been performed to validate photon transport models and codes, and unlike the ICSBEP, there is no program dedicated to the evaluation and publication of photon benchmarks. Even fewer coupled neutron-photon benchmarks have been performed. This report documents a coupled neutron-photon benchmark for plutonium metal reflected by polyethylene. A 4.5-kg sphere of α-phase, weapons-grade plutonium metal was measured in six configurations: (1) bare; (2) reflected by 0.5 inch of high-density polyethylene (HDPE); (3) reflected by 1.0 inch of HDPE; (4) reflected by 1.5 inches of HDPE; (5) reflected by 3.0 inches of HDPE; and (6) reflected by 6.0 inches of HDPE. Neutron and photon emissions from the plutonium sphere were measured using three instruments: (1) a gross neutron counter; (2) a neutron multiplicity counter; and (3) a high-resolution gamma spectrometer. This report documents the experimental conditions and results in detail sufficient to permit developers of radiation transport models and codes to construct models of the experiments and to compare their calculations to the measurements. All of the data acquired during this series of experiments are available upon request.
Journal of Computational and Theoretical Nanoscience
We perform pressure-driven non-equilibrium molecular dynamics (MD) simulations to drive a 1.0 M NaCl electrolyte through a dipole-lined smooth nanopore of diameter 12 Å penetrating a model membrane. We show that partial Cl- rejection, about 70-80%, is achieved at a pressure of ~68 atmospheres. At the high water flux achieved in these model nanopores, which are particularly pertinent to atomistically smooth carbon nanotube membranes that permit fast water transport, the ion rejection ratio decreases with increasing water flux. The computed potential of mean force of Cl- frozen inside the nanopore reveals a barrier of 6.4 kcal/mol in 1.0 M NaCl solution. Cl- permeation occurs despite the barrier, and this is identified as a dynamical effect, with ions carried along by the water flux. Na+-Cl- ion pairing or aggregation near the pore entrance and inside the pore, where the dielectric screening is weaker than in bulk water, is critical to Cl- permeation. We also consider negative charges decorating the rim and the interior of the pore instead of dipoles, and find that, with sufficient pressure, Cl- from a 1.0 M NaCl solution readily passes through such nanopores. © 2009 American Scientific Publishers.
2008 Proceedings of the 2nd International Conference on Energy Sustainability, ES 2008
Concentrating Solar Power (CSP) dish systems use a parabolic dish to concentrate sunlight, providing heat for a thermodynamic cycle to generate shaft power and ultimately, electricity. Currently, leading contenders use a Stirling cycle engine with a heat absorber surface at about 800°C. The concentrated light passes through an aperture, which controls the thermal losses of the receiver system. Similar systems may use the concentrated light to heat a thermochemical process. The concentrator system, typically steel and glass, provides a source of fuel over the service life of the system, but this source of fuel manifests as a capital cost up front. Therefore, it is imperative that the cost of the reflector assembly is minimized. However, dish systems typically concentrate light to a peak of as much as 13,000 suns, with an average geometric concentration ratio of over 3000 suns. Several recent dish-Stirling systems have incorporated reflector facets with a normally-distributed surface slope error (local distributed waviness) of 0.8 mrad RMS (1-sigma error). As systems move toward commercialization, the cost of these highly accurate facets must be assessed. However, when considering lower-cost options, any decrease in the performance of the facets must be considered in the evaluation of such facets. In this paper, I investigate the impact of randomly-distributed slope errors on the performance, and therefore the value, of a typical dish-Stirling system. There are many potential sources of error in a concentrating system. When considering facet options, the surface waviness, characterized as a normally-distributed slope error, has the greatest impact on the aperture size and therefore the thermal losses. I develop an optical model and a thermal model for the performance of a baseline system. I then analyze the impact on system performance for a range of mirror quality, and evaluate the impact of such performance changes on the economic value of the system. This approach can be used to guide the evaluation of low-cost facets that differ in performance and cost. The methodology and results are applicable to other point- and line-focus thermal systems including dish-Brayton, dish-Thermochemical, tower systems, and troughs. Copyright © 2008 by ASME.
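The core of the optical trade can be illustrated with a highly simplified model: reflection doubles a slope error, the error sources add in quadrature, and a circularly symmetric Gaussian image determines how much power an aperture of a given radius intercepts. All numbers below are illustrative, not the paper's baseline system:

```python
# Simplified slope-error / aperture-loss trade. Assumes a flat target and a
# circularly symmetric Gaussian image; constants are illustrative only.
import math

def intercept_fraction(slope_err_mrad, focal_len_m, aperture_radius_m,
                       sun_width_mrad=2.8):
    # A slope error deflects the reflected ray by twice the error; combine
    # with the finite sun width in quadrature for the total beam spread.
    sigma_rad = math.hypot(2 * slope_err_mrad, sun_width_mrad) * 1e-3
    sigma_spot = sigma_rad * focal_len_m          # image spread at focus (m)
    # Power inside radius r of a circular Gaussian: 1 - exp(-r^2 / (2 sigma^2))
    return 1 - math.exp(-aperture_radius_m**2 / (2 * sigma_spot**2))

for err in (0.8, 1.5, 2.5, 4.0):                  # mrad RMS slope error
    print(err, round(intercept_fraction(err, 5.5, 0.10), 3))
```

Lower-quality facets force either spillage losses at a fixed aperture or a larger aperture with higher thermal losses; monetizing that performance change against facet cost is the evaluation the paper carries out.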
2008 Proceedings of the 2nd International Conference on Energy Sustainability, ES 2008
Thermal energy storage can enhance the utility of parabolic trough solar power plants by providing the ability to match electrical output to peak demand periods. An important component of thermal energy storage system optimization is selecting the working fluid used as the storage media and/or heat transfer fluid. Large quantities of the working fluid are required for power plants at the 100-MW scale, so maximizing heat transfer fluid performance while minimizing material cost is important. This paper reports recent developments of multi-component molten salt formulations consisting of common alkali nitrate and alkaline earth nitrate salts that have advantageous properties for applications as heat transfer fluids in parabolic trough systems. A primary disadvantage of molten salt heat transfer fluids is a relatively high freeze-onset temperature compared to organic heat transfer oils. Experimental results are reported for formulations of inorganic molten salt mixtures that display freeze-onset temperatures below 100°C. In addition to phase-change behavior, several properties of these molten salts that significantly affect their suitability as thermal energy storage fluids were evaluated, including chemical stability and viscosity. These alternative molten salts have demonstrated chemical stability in the presence of air up to approximately 500°C in laboratory testing and display chemical equilibrium behavior similar to Solar Salt. The capability to operate at temperatures up to 500°C may allow an increase in maximum temperature operating capability vs. organic fluids in existing trough systems and will enable increased power cycle efficiency. Experimental measurements of viscosity were performed from near the freeze-onset temperature to about 200°C. Viscosities can exceed 100 cP at the lowest temperatures but are less than 10 cP in the primary temperature range at which the mixtures would be used in a thermal energy storage system. Quantitative cost figures of constituent salts and blends are not currently available, although these molten salt mixtures are expected to be inexpensive compared to synthetic organic heat transfer fluids. Experiments are in progress to confirm that the corrosion behavior of readily available alloys is satisfactory for long-term use. Copyright © 2008 by ASME.
2008 Proceedings of the 4th International Topical Meeting on High Temperature Reactor Technology, HTR 2008
Sandia National Laboratories (SNL), General Atomics Corporation (GA), and the French Commissariat à l'Énergie Atomique (CEA) have been conducting laboratory-scale experiments to investigate the thermochemical production of hydrogen using the Sulfur-Iodine (S-I) process. This project is being conducted as an International Nuclear Energy Research Initiative (INERI) project supported by the CEA and the US DOE Nuclear Hydrogen Initiative. In the S-I process: (1) H2SO4 is catalytically decomposed at high temperature to produce SO2, O2, and H2O; (2) the SO2 is reacted with H2O and I2 to produce HI and H2SO4, and the H2SO4 is returned to the acid decomposer; (3) the HI is decomposed to H2 and I2, and the I2 is returned to the HI production process. Each participant in this work is developing one of the three primary reaction sections: SNL is responsible for the H2SO4 decomposition section; CEA, the primary HI production section; and General Atomics, the HI decomposition section. The objective of initial testing of the S-I laboratory-scale experiment was to establish the capability for integrated operations and demonstrate H2 production from the S-I cycle. The first phase of these objectives was achieved with the successful integrated operation of the SNL acid decomposition and CEA Bunsen reactor sections and the subsequent generation of H2 in the GA HI decomposition section. This is the first time the S-I cycle has been realized using engineering materials and operated at prototypic temperature and pressure to produce hydrogen. © 2008 by ASME.
Proceedings - Electronic Components and Technology Conference
We have developed a complete process module for fabricating front end of line (FEOL) through silicon vias (TSVs). In this paper we describe the integration, which relies on using thermally deposited silicon as a sacrificial material to fill the TSV during FEOL processing, followed by its removal and replacement with tungsten after FEOL processing is complete. The uniqueness of this approach follows mainly from forming the TSVs early in the FEOL while still ultimately using metal as the via fill material. TSVs formed early in the FEOL can be formed at comparatively small diameter, high aspect ratio, and high spatial density. We have demonstrated FEOL-integrated TSVs that are 2 μm in diameter, over 45 μm deep, and on 20 μm pitch for a possible interconnect density of 250,000/cm2. Moreover, thermal oxidation of silicon can be used to form the dielectric isolation. Thermal oxidation is conformal and robust in the as-formed state. Finally, TSVs formed in the FEOL alleviate device design constraints common to vias-last integration. © 2009 IEEE.
We report the results of an LDRD effort to investigate new technologies for the identification of small (mm to cm) debris in low-earth orbit. This small yet energetic debris presents a threat to the integrity of space assets worldwide and represents a significant security challenge to the international community. We present a non-exhaustive review of recent US and Russian efforts to meet the challenges of debris identification and removal, and then provide a detailed description of joint US-Russian plans for sensitive, laser-based imaging of small debris at distances of hundreds of kilometers and relative velocities of several kilometers per second. Plans for the upcoming experimental testing of these imaging schemes are presented, and a preliminary path toward system integration is identified.
This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
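A minimal sketch of the Bayesian half of the approach, assuming Beta posteriors from go/no-go data and a toy series/parallel structure (the Jeffreys prior and the example system are illustrative choices, not the paper's):

```python
# Per-component Beta posteriors from go/no-go data, pushed through the
# series/parallel structure by Monte Carlo to get a distribution on
# system reliability.
import numpy as np

rng = np.random.default_rng(42)
DRAWS = 100_000

def posterior(successes, trials):
    """Beta posterior under a Jeffreys Beta(1/2, 1/2) prior."""
    return rng.beta(0.5 + successes, 0.5 + trials - successes, DRAWS)

# Toy system: component A in series with a parallel pair (B, C).
rA = posterior(29, 30)      # 29 successes in 30 go/no-go tests
rB = posterior(18, 20)
rC = posterior(20, 20)      # zero failures: the prior choice matters here
r_sys = rA * (1 - (1 - rB) * (1 - rC))

lo, med, hi = np.percentile(r_sys, [5, 50, 95])
print(f"system reliability: median {med:.3f}, 90% interval ({lo:.3f}, {hi:.3f})")
```

The zero-failure component rC illustrates the sensitivity noted above: with no observed failures in few tests, the posterior (and hence the system interval) is driven largely by the prior.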
I used supramolecular self-assembling cyanine and the polyamine spermine binding to Escherichia coli genomic DNA as a model for DNA collapse during high-throughput screening. Polyamine binding to DNA converts the normally right-handed B-DNA into the left-handed Z-DNA conformation. Polyamine binding to DNA was inhibited by the supramolecular self-assembling cyanine. Self-assembly of cyanine upon the DNA scaffold was likewise competitively inhibited by spermine, as signaled by fluorescence quench from the DNA-cyanine ensemble. The sequence of DNA exposure to cyanine or spermine was critical in determining the magnitude of fluorescence quench. Methanol potentiated spermine inhibition by >10-fold. The IC50 for spermine inhibition was 0.35 ± 0.03 µM and the association constant Ka was 2.86 × 10^-6 M. Reversibility of the DNA-polyamine interactions was evident from quench mitigation at higher concentrations of cyanine. System flexibility was demonstrated by similar spermine interactions with λDNA. The choices and rationale regarding the polyamine and the cyanine dye, as well as the remarkable effects of methanol, are discussed in detail. Cyanine might be a safer alternative to the mutagenic toxin ethidium bromide for investigating DNA-drug interactions. The combined actions of polyamines and alcohols mediate DNA collapse, producing hybrid bio-nanomaterials with novel signaling properties that might be useful in biosensor applications. Finally, this work will be submitted to Analytical Sciences (Japan) for publication. This journal published our earlier, related work on cyanine supramolecular self-assembly upon a variety of nucleic acid scaffolds.
This report describes a new methodology, social language network analysis (SLNA), that combines tools from social language processing and network analysis to identify socially situated relationships between individuals which, though subtle, are highly influential. Specifically, SLNA aims to identify and characterize the nature of working relationships by processing artifacts generated with computer-mediated communication systems, such as instant message texts or emails. Because social language processing is able to identify psychological, social, and emotional processes that individuals are not able to fully mask, social language network analysis can clarify and highlight complex interdependencies between group members, even when these relationships are latent or unrecognized. This report outlines the philosophical antecedents of SLNA, the mechanics of preprocessing, processing, and post-processing stages, and some example results obtained by applying this approach to a 15-month corporate discussion archive.
Staggered bioterrorist attacks with aerosolized pathogens on population centers present a formidable challenge to resource allocation and response planning. Response and planning must commence immediately after detection of the first attack, with little or no information about the second attack. In this report, we outline a method by which resource allocation may be performed. It involves probabilistic reconstruction of the bioterrorist attack from partial observations of the outbreak, followed by an optimization-under-uncertainty approach to perform resource allocation. We consider both single-site and time-staggered multi-site attacks (i.e., a reload scenario) under conditions when resources (personnel and equipment that are difficult to gather and transport) are insufficient. Both communicable (plague) and non-communicable (anthrax) diseases are addressed, and we also consider cases when the data, the time series of people reporting with symptoms, are confounded with a reporting delay. We demonstrate how our approach develops allocation profiles that have the potential to reduce the probability of an extremely adverse outcome in exchange for a more certain, but less adverse, outcome. We explore the effect of placing limits on daily allocations. Further, since our method is data-driven, the resource allocation progressively improves as more data become available.
Understanding charge transport processes at a molecular level using computational techniques is currently hindered by a lack of appropriate models for incorporating anisotropic electric fields in molecular dynamics (MD) simulations. An important technological example is ion transport through solid-electrolyte interphase (SEI) layers that form in many common types of batteries. These layers regulate the rate at which electrochemical reactions occur, affecting power, safety, and reliability. In this work, we develop a model for incorporating electric fields in MD using an atomistic-to-continuum framework. This framework provides the mathematical and algorithmic infrastructure to couple finite element (FE) representations of continuous data with atomic data. In this application, the electric potential is represented on a FE mesh and is calculated from a Poisson equation with source terms determined by the distribution of the atomic charges. Boundary conditions can be imposed naturally using the FE description of the potential, which then propagates to each atom through modified forces. The method is verified using simulations where analytical or theoretical solutions are known. Calculations of salt water solutions in complex domains are performed to understand how ions are attracted to charged surfaces in the presence of electric fields and interfering media.
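Schematically (generic notation, not necessarily the report's), the coupling solves a Poisson problem on the finite element mesh with the atomic charges as sources and feeds the field back as a force on each atom:

-\nabla \cdot \left( \epsilon\, \nabla \phi \right) = \rho(\mathbf{x}) = \sum_{\alpha} q_\alpha\, \delta_h(\mathbf{x} - \mathbf{x}_\alpha),
\qquad
\mathbf{F}_\alpha \mathrel{+}= -\, q_\alpha \nabla \phi(\mathbf{x}_\alpha),

where \delta_h is a mesh-scale regularization that spreads the point charges onto the FE shape functions, and boundary conditions on \phi are imposed on the mesh. This is how external fields and charged surfaces propagate to the atoms through modified forces.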
Fiber-optic gas phase surface plasmon resonance (SPR) detection of several contaminant gases of interest to state-of-health monitoring in high-consequence sealed systems has been demonstrated. The contaminant gases, H{sub 2}, H{sub 2}S, and moisture, are detected in a single-ended optical fiber mode. Data demonstrate adequate sensitivity in a dosimetric mode that allows periodic monitoring of system atmospheres. Modeling studies were performed to direct the design of the sensor probe for optimized dimensions and to allow simultaneous monitoring of several constituents with a single sensor fiber. Testing of the system demonstrates the ability to detect 70 mTorr partial pressures of H{sub 2} using this technique and <280 {micro}Torr partial pressures of H{sub 2}S. In addition, a multiple-sensor fiber has been demonstrated that allows a single fiber to measure H{sub 2}, H{sub 2}S, and H{sub 2}O without changing the fiber or the analytical system.
Working with leading experts in the fields of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents the neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against human subjects, and the results were published. An important outcome of the validation process is the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, i.e., at length and time scales between the atomistic and the continuum. We have completed a three-year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
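For readers unfamiliar with the method, the core of a serial kinetic Monte Carlo iteration is an event selection proportional to rate followed by an exponential advance of the clock. The sketch below shows that loop in its simplest Gillespie/BKL form; it is a generic illustration of the algorithm class SPPARKS parallelizes, not SPPARKS code.

    # Minimal serial kinetic Monte Carlo loop (Gillespie/BKL style).
    import math, random

    def kmc(rates, n_steps):
        """rates: dict mapping event name -> propensity. Returns (time, events)."""
        t, history = 0.0, []
        for _ in range(n_steps):
            total = sum(rates.values())
            # Pick an event with probability proportional to its rate.
            r, acc = random.uniform(0.0, total), 0.0
            for event, rate in rates.items():
                acc += rate
                if r <= acc:
                    history.append(event)
                    break
            # Advance the clock by an exponentially distributed increment.
            t += -math.log(1.0 - random.random()) / total
        return t, history

    print(kmc({"hop_left": 1.0, "hop_right": 1.0, "desorb": 0.1}, 5))

Parallelizing this loop is the hard part: events at distant lattice sites must be executed concurrently without violating the statistics of the serial algorithm.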
A number of codes have been developed in the past for safeguards analysis, but many are dated, and no single code is able to cover all aspects of materials accountancy, process monitoring, and diversion scenario analysis. The purpose of this work was to integrate a transient solvent extraction simulation module developed at Oak Ridge National Laboratory (ORNL) with the Separations and Safeguards Performance Model (SSPM), developed at Sandia National Laboratories, as a first step toward creating a more versatile design and evaluation tool. The SSPM was designed for materials accountancy and process monitoring analyses, but previous versions of the code have included limited detail on the chemical processes, including chemical separations. The transient solvent extraction model is based on the ORNL SEPHIS code approach to consider solute build-up in a bank of contactors in the PUREX process. Combined, these capabilities yield a more robust transient separations and safeguards model for evaluating safeguards system design. This coupling and initial results are presented. In addition, some observations toward further enhancement of separations and safeguards modeling based on this effort are provided, including: items to be addressed in integrating legacy codes, additional improvements needed for a fully functional solvent extraction module, and recommendations for future integration of other chemical process modules.
This highly interdisciplinary team has developed dual-color total internal reflection fluorescence microscopy (TIRF-M) methods that enable us to optically detect and track, in real time, protein migration and clustering at membrane interfaces. By coupling TIRF-M with advanced analysis techniques (image correlation spectroscopy, single particle tracking) we have captured subtle changes in membrane organization that characterize immune responses. We have used this approach to elucidate the initial stages of cell activation in the IgE signaling network of mast cells and the Toll-like receptor (TLR-4) response in macrophages stimulated by bacteria. To help interpret these measurements, we have undertaken a computational modeling effort to connect protein motion with lipid interactions. This work provides a deeper understanding of the initial stages of cellular response to external agents, including the dynamics of interaction of key components in the signaling network at the 'immunological synapse,' the contact region between the cell and its adversary.
The effect of composition on the elastic responses of alumina particle-filled epoxy composites is examined using isotropic elastic response models relating the average stresses and strains in a discretely reinforced composite material consisting of perfectly bonded and uniformly distributed particles in a solid isotropic elastic matrix. Responses for small elastic deformations and large hydrostatic and plane-strain compressions are considered. The response model for small elastic deformations depends on known elastic properties of the matrix and particles, the volume fraction of the particles, and two additional material properties that reflect the composition and microstructure of the composite material. These two material properties, called strain concentration coefficients, are characterized for eleven alumina-filled epoxy composites. It is found that while the strain concentration coefficients depend strongly on the volume fraction of alumina particles, no significant dependence on particle morphology and size is observed for the compositions examined. Additionally, an analysis of the strain concentration coefficients reveals a remarkably simple dependency on the alumina volume fraction. Responses for large hydrostatic and plane-strain compressions are obtained by generalizing the equations developed for small deformation, and letting the alumina volume fraction in the composite increase with compression. The large compression plane-strain response model is shown to predict equilibrium Hugoniot states in alumina-filled epoxy compositions remarkably well.
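The small-deformation model described above is consistent with standard mean-field forms in which the effective moduli interpolate between matrix and particle values through the strain concentration coefficients; schematically, with symbols assumed rather than taken from this report,

    K^{*} = K_m + f\,(K_p - K_m)\,A_K, \qquad G^{*} = G_m + f\,(G_p - G_m)\,A_G,

where f is the particle volume fraction, subscripts m and p denote matrix and particle, and A_K and A_G are the dilatational and deviatoric strain concentration coefficients relating the average particle strain to the composite average strain.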
This report focuses on quantum chemistry and ab initio molecular dynamics (AIMD) calculations applied to elucidate the mechanism of the multi-step, 2-electron, electrochemical reduction of the greenhouse gas carbon dioxide (CO{sub 2}) to carbon monoxide (CO) in aqueous media. When combined with H{sub 2} gas to form synthesis ('syn') gas, CO becomes a key precursor to methane, methanol, and other useful hydrocarbon products. To elucidate the mechanism of this reaction, we apply computational electrochemistry, a fledgling but important area of basic science critical to energy storage. This report highlights several approaches, including the calculation of redox potentials, the explicit depiction of liquid water environments using AIMD, and free energy methods. While costly, these pioneering calculations reveal the key role of hydration- and protonation-stabilization of reaction intermediates, and may inform the design of materials for CO{sub 2} capture as well as for its electrochemical reduction. In the course of this work, we have also dealt with the challenge of identifying and applying electronic structure methods that are sufficiently accurate for transition metal ion complex-based catalysts. Such electronic structure methods are also pertinent to the accurate modeling of actinide materials and therefore to nuclear energy research. Our multi-pronged effort toward achieving the titular goal of this LDRD is discussed.
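For context, the central computed quantity in such studies is the standard relation between the reaction free energy and the redox potential. For the overall two-electron reduction considered here,

    \mathrm{CO_2 + 2H^+ + 2e^- \rightarrow CO + H_2O}, \qquad E^{\circ} = -\frac{\Delta G^{\circ}}{nF}, \quad n = 2,

where F is the Faraday constant and \Delta G^{\circ} is evaluated from the quantum chemistry, AIMD, and free energy calculations described above.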
Our LDRD research project sought to develop an analytical method for detection of chemicals used in nuclear materials processing. Our approach is distinctly different from current research involving hardware-based sensors. By utilizing the response of indigenous species of plants and/or animals surrounding (or within) a nuclear processing facility, we propose tracking 'suspicious molecules' relevant to nuclear materials processing. As proof of concept, we examined tributyl phosphate (TBP), used in uranium enrichment as well as plutonium extraction from spent nuclear fuels. We compared TBP to the analog TPP (triphenylphosphate) to determine the uniqueness of the metabonomic response. We show that there is a unique metabonomic response to TBP within our animal model, and that the TBP signature can further be delineated from that of TPP. We have also developed unique methods of instrumental transfer for metabonomic data sets.
Rapid identification of aerosolized biological agents following an alarm by particle triggering systems is needed to enable response actions that save lives and protect assets. Rapid identifiers must achieve species-level specificity, as this is required to distinguish disease-causing organisms (e.g., Bacillus anthracis) from benign neighbors (e.g., Bacillus subtilis). We have developed a rapid (1-5 minute), novel identification methodology that sorts intact organisms from each other and from particulates using capillary electrophoresis (CE), and detects them using near-infrared (NIR) absorbance and scattering. We have successfully demonstrated CE resolution of Bacillus spores and vegetative bacteria at the species level. To achieve sufficient sensitivity for detection needs ({approx}10{sup 4} cfu/mL for bacteria), we have developed fiber-coupled cavity-enhanced absorbance techniques. Using this method, we have demonstrated {approx}two orders of magnitude greater sensitivity than published results for absorbing dyes, and single-particle (spore) detection through primarily scattering effects. Results of the integrated CE-NIR system for spore detection are presented.
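The separation step rests on differences in apparent electrophoretic mobility. In the simplest model (standard CE relations, not parameters from this work), the migration velocity and detection time of a species are

    v = \mu_{\mathrm{app}} E, \qquad t_m = \frac{L_d\, L_t}{\mu_{\mathrm{app}}\, V},

where \mu_app is the apparent mobility (electrophoretic plus electroosmotic), E = V/L_t is the applied field, V the applied voltage, L_t the total capillary length, and L_d the length to the detector; species-level resolution requires the mobility difference between organisms to exceed the band broadening accumulated over t_m.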
Abstract not provided.
This report documents a high-level analysis of the benefit and cost of flywheel energy storage used to provide area regulation for the electricity supply and transmission system in California. Area regulation is an 'ancillary service' needed for a reliable and stable regional electricity grid. The analysis was based on results from a demonstration, in California, of flywheel energy storage developed by Beacon Power Corporation (the system's manufacturer). The demonstration established the flywheel storage system's ability to provide 'rapid-response' regulation: flywheel storage output can be varied much more rapidly than the output from conventional regulation sources, making flywheels more attractive than conventional regulation resources. The performance of the flywheel storage system demonstrated was generally consistent with requirements for a possible new class of regulation resources - 'rapid-response' energy-storage-based regulation - in California. In short, it was demonstrated that Beacon Power Corporation's flywheel system follows a rapidly changing control signal (the ACE, which changes every four seconds). Based on the results and on expected plant cost and performance, the Beacon Power flywheel storage system has a good chance of being a financially viable regulation resource. Results indicate a benefit/cost ratio of 1.5 to 1.8 using what may be somewhat conservative assumptions. A benefit/cost ratio of one indicates that, based on the financial assumptions used, the investment's financial returns just meet the investor's target.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This report describes trans-organizational efforts to investigate the impact of chip multiprocessors (CMPs) on the performance of important Sandia application codes. The impact of CMPs on the performance and applicability of Sandia's system software was also investigated. The goal of the investigation was to make algorithmic and architectural recommendations for next generation platform acquisitions.
Abstract not provided.
Abstract not provided.
Currently, electrical power generation uses about 140 billion gallons of water per day, accounting for over 39% of all freshwater withdrawals and competing with irrigated agriculture as the leading user of water. Coupled to this water use is the required pumping, conveyance, treatment, storage, and distribution of the water, which requires on average 3% of all electric power generated. While water and energy use are tightly coupled, planning and management of these fundamental resources are rarely treated in an integrated fashion. To address this need, a decision support framework has been developed that targets the shared needs of energy and water producers, resource managers, regulators, and decision makers at the federal, state, and local levels. The framework integrates analysis and optimization capabilities to identify trade-offs and 'best' alternatives among a broad list of energy/water options and objectives. The decision support framework is formulated in a modular architecture, facilitating tailored analyses over different geographical regions and scales (e.g., national, state, county, watershed, NERC region). An interactive interface allows direct control of the model and access to real-time results displayed as charts, graphs, and maps. Ultimately, this open and interactive modeling framework provides a tool for evaluating competing policy and technical options relevant to the energy-water nexus.
Abstract not provided.
This report documents the Nambe Pueblo Water Budget and Water Forecasting model. The model has been constructed using Powersim Studio (PS), a software package designed to investigate complex systems where flows and accumulations are central to the system. Here, PS has been used as a platform for modeling various aspects of Nambe Pueblo's current and future water use. The model contains three major components: the Water Forecast Component, the Irrigation Scheduling Component, and the Reservoir Model Component. In each of the components, the user can change variables to investigate the impacts of water management scenarios on future water use. The Water Forecast Component includes forecasting for industrial, commercial, and livestock use. Domestic demand is also forecasted based on user-specified current population, population growth rates, and per capita water consumption. Irrigation efficiencies are quantified in the Irrigation Scheduling Component using critical information concerning diversion rates, acreages, ditch dimensions, and seepage rates. Results from this section are used in the Water Forecast, Irrigation Scheduling, and Reservoir Model components. The Reservoir Model Component contains two sections: (1) Storage and Inflow Accumulations by Categories and (2) Release, Diversion and Shortages. Results from both sections are derived from the calibrated Nambe Reservoir model, where historic, pre-dam or above-dam USGS stream flow data are fed into the model and releases are calculated.
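The bookkeeping at the core of such a stock-and-flow model is a per-time-step mass balance on each accumulation. The sketch below illustrates the idea for a single reservoir; the variable names and rates are invented for illustration and are not the Powersim Studio implementation.

    # Illustrative single-reservoir mass balance (hypothetical parameters).
    def step_reservoir(storage, inflow, demand, capacity, seepage_rate=0.01):
        """One time step: returns (new_storage, total_release, shortage)."""
        seepage = seepage_rate * storage
        available = storage + inflow - seepage
        release = min(demand, available)           # cannot release more than held
        shortage = demand - release                # unmet demand this step
        new_storage = min(available - release, capacity)
        spill = max(available - release - capacity, 0.0)
        return new_storage, release + spill, shortage

    storage = 500.0
    for inflow, demand in [(120.0, 90.0), (40.0, 110.0), (10.0, 130.0)]:
        storage, released, short = step_reservoir(storage, inflow, demand, 800.0)
        print(round(storage, 1), round(released, 1), round(short, 1))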
Abstract not provided.
The goal of this project is to develop an efficient energy scavenger for converting ambient low-frequency vibrations into electrical power. To achieve this, a novel inertial micro power generator architecture has been developed that utilizes the bi-stable motion of a mechanical mass to convert a broad range of low-frequency (<30 Hz), large-deflection (>250 {micro}m) ambient vibrations into high-frequency electrical output energy. The generator incorporates a bi-stable mechanical structure to initiate high-frequency mechanical oscillations in an electromagnetic scavenger. This frequency up-conversion technique enhances the electromechanical coupling and increases the generated power. This architecture is called the Parametric Frequency Increased Generator (PFIG). Three generations of the device have been fabricated. It was first demonstrated using a larger bench-top prototype that had a functional volume of 3.7 cm{sup 3}. It generated a peak power of 558 {micro}W and an average power of 39.5 {micro}W at an input acceleration of 1 g applied at 10 Hz. The performance of this device has still not been matched by any other reported work; it yielded the best power density and efficiency for any scavenger operating from low-frequency (<10 Hz) vibrations. A second-generation device was then fabricated. It generated a peak power of 288 {micro}W and an average power of 5.8 {micro}W from an input acceleration of 9.8 m/s{sup 2} at 10 Hz, and it operates over a frequency range of 20 Hz. The internal volume of the generator is 2.1 cm{sup 3} (3.7 cm{sup 3} including casing), half of a standard AA battery. Lastly, a piezoelectric version of the PFIG is currently being developed. This device clearly demonstrates one of the key features of the PFIG architecture, namely that it is more suitable for MEMS integration than resonant generators, by incorporating a brittle bulk piezoelectric ceramic. This is the first micro-scale piezoelectric generator capable of <10 Hz operation. The fabricated device currently generates a peak power of 25.9 {micro}W and an average power of 1.21 {micro}W from an input acceleration of 9.8 m/s{sup 2} at 10 Hz, and it operates over a frequency range of 23 Hz. The internal volume of the generator is 1.2 cm{sup 3}.
Abstract not provided.
Abstract not provided.
Inelastic neutron scattering, density functional theory, ab initio molecular dynamics, and classical molecular dynamics were used to examine the behavior of nanoconfined water in palygorskite and sepiolite. These complementary methods provide a strong basis to illustrate and correlate the significant differences observed in the spectroscopic signatures of water in two unique clay minerals. Distorted silicate tetrahedra in the smaller-pore palygorskite yield a limited number of hydrogen bonds having relatively short bond lengths. In contrast, without the distorted silicate tetrahedra, an increased number of hydrogen bonds is observed in the larger-pore sepiolite, with correspondingly longer bond distances. Because there is more hydrogen bonding at the pore interface in sepiolite than in palygorskite, we expect librational modes to have higher overall frequencies (i.e., more restricted rotational motions); the experimental neutron scattering data clearly illustrate this shift in spectroscopic signatures. Distortions of the silicate tetrahedra in these minerals effectively disrupt hydrogen bonding patterns at the silicate-water interface, and this has a greater impact on the dynamical behavior of nanoconfined water than the actual size of the pore or the presence of coordinatively unsaturated magnesium edge sites.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This Quick Reference Guide supplements the more complete Guide to Preparing SAND Reports and Other Communication Products. It provides limited guidance on how to prepare SAND Reports at Sandia National Laboratories. Users are directed to the in-depth guide for explanations of processes.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Location of the liquid-vapor critical point (c.p.) is one of the key features of equation of state models used in simulating high energy density physics and pulsed power experiments. For example, material behavior in the vicinity of the vapor dome is critical in determining how and when coronal plasmas form in expanding wires. Transport properties, such as conductivity and opacity, can vary by an order of magnitude depending on whether the state of the material is inside or outside of the vapor dome. Because states near the vapor dome are difficult to produce experimentally, the uncertainty in the location of the c.p. is of order 100% for all but a few materials, such as cesium and mercury. These states of interest can be produced on Z through high-velocity shock and release experiments. For example, it is estimated that release adiabats from {approx}1000 GPa in aluminum would skirt the vapor dome, allowing estimates of the c.p. to be made. This is within the reach of Z experiments (flyer plate velocity of {approx}30 km/s). Recent high-fidelity EOS models and hydrocode simulations suggest that the dynamic two-phase flow behavior observed in initial scoping experiments can be reproduced, providing a link between theory and experiment. Experimental identification of the c.p. in aluminum would represent the first measurement of its kind in a dynamic experiment. Furthermore, once the c.p. has been experimentally determined, it should be possible to probe the electrical conductivity, opacity, reflectivity, etc. of the material near the vapor dome using a variety of diagnostics. We propose a combined experimental and theoretical investigation with the initial emphasis on aluminum.
Petaflops systems will have tens to hundreds of thousands of compute nodes, which increases the likelihood of faults. Applications use checkpoint/restart to recover from these faults, but even under ideal conditions, applications running on more than 30,000 nodes will likely spend more than half of their total run time saving checkpoints, restarting, and redoing work that was lost. We created a library that performs redundant computations on additional nodes allocated to the application. An active node and its redundant partner form a node bundle, which will only fail, and cause an application restart, when both nodes in the bundle fail. The goal of this library is to learn whether this can be done entirely at the user level, what requirements this library places on a Reliability, Availability, and Serviceability (RAS) system, and what its impact on performance and run time is. We find that our redundant MPI layer library imposes a relatively modest performance penalty for applications, but that it greatly reduces the number of application interrupts. This reduction in interrupts leads to huge savings in restart and rework time. For large-scale applications the savings compensate for the performance loss and the additional nodes required for redundant computations.
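The arithmetic behind the interrupt reduction is simple. If each node fails independently with probability p over a given interval, an application on N nodes without redundancy is interrupted with probability 1 - (1 - p)^N, whereas with every active node paired to a redundant partner an interrupt requires both members of some bundle to fail:

    P_{\mathrm{interrupt}} = 1 - \left(1 - p^{2}\right)^{N}.

This is an idealized estimate (it ignores correlated failures), but it shows why even a modest per-node failure probability produces a dramatic drop in application interrupts at scale.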
Ionizing radiation is known to cause Single Event Effects (SEEs) in a variety of electronic devices. The mechanism that leads to these SEEs is the current induced by the radiation in these devices. While this phenomenon is detrimental in ICs, it is also the basic mechanism behind the operation of semiconductor radiation detectors. To be able to predict SEEs in ICs and detector responses, we need to be able to simulate the radiation-induced current as a function of time. There are analytical models, which work for very simple detector configurations but fail for anything more complex. At the other end, TCAD programs can simulate this process in microelectronic devices, but these TCAD codes cost hundreds of thousands of dollars and require huge computing resources; in addition, in certain cases they fail to predict the correct behavior. A simulation model based on the Gunn theorem was therefore developed and used with the COMSOL Multiphysics framework.
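The underlying result, Gunn's theorem, generalizes the Ramo-Shockley weighting-field approach: the current induced on electrode j by a carrier of charge q moving with velocity v is

    i_j(t) = q\, \mathbf{v}(t) \cdot \frac{\partial \mathbf{E}\big(\mathbf{x}(t)\big)}{\partial V_j},

where \partial E / \partial V_j is the derivative of the electric field at the carrier position with respect to the bias on electrode j. The field solutions needed to evaluate this derivative can be computed numerically, which is where a general PDE framework such as COMSOL comes in.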
Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically we describe the design and implementation of an open source toolkit for doing graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.
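As a flavor of the analyst-facing queries such a toolkit supports, the snippet below runs two common graph-informatics computations. It is written against the open-source networkx package purely for illustration, not against the Sandia toolkit's own API.

    # Illustrative graph-informatics queries (networkx used as a stand-in).
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d"), ("d", "e")])

    # Centrality: which entity brokers the most shortest paths?
    print(nx.betweenness_centrality(G))

    # Connectivity: how is one entity of interest linked to another?
    print(nx.shortest_path(G, "a", "e"))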
Abstract not provided.
Abstract not provided.
Traditionally, safeguards and security systems for fuel cycle facilities are designed separately, and only after the facility design is near completion. This can result in higher costs due to retrofits and redundant use of data. Future facilities will incorporate safeguards and security early in the design process and integrate the systems to make better use of plant data and to strengthen both systems. The purpose of this project was to evaluate the integration of materials control and accounting (MC&A) measurements with physical security design for a nuclear reprocessing plant. Locations throughout the plant where data overlap occurs or where MC&A data could be a benefit were identified. This mapping is presented, along with the methodology for including the additional data in existing probabilistic assessments to evaluate safeguards and security system designs.
Abstract not provided.
Progress in MEMS fabrication has enabled a wide variety of force and displacement sensing devices to be constructed. One device under intense development at Sandia is a passive shock switch, described elsewhere (Mitchell 2008). A goal of all MEMS devices, including the shock switch, is to achieve a high degree of reliability. This, in turn, requires systematic methods for validating device performance during each iteration of design. Once a design is finalized, suitable tools are needed to provide quality assurance for manufactured devices. To ensure device performance, measurements on these devices must be traceable to NIST standards. In addition, accurate metrology of MEMS components is needed to validate the mechanical models used to design devices, in order to accelerate development and meet emerging needs. Progress toward a NIST-traceable calibration method is described for a next-generation, 2D Interfacial Force Microscope (IFM) for applications in MEMS metrology and qualification. The results of screening several candidate calibration methods are discussed, along with the known sources of uncertainty in each method.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Advanced computing hardware and software written to exploit massively parallel architectures greatly facilitate the computation of extremely large problems. On the other hand, these tools, though enabling higher fidelity models, have often resulted in much longer run-times and turn-around times in providing answers to engineering problems. The impediments include smaller elements and consequently smaller time steps, much larger systems of equations to solve, and the inclusion of nonlinearities that had been ignored in days when lower fidelity models were the norm. The research effort reported here focuses on accelerating the analysis process for structural dynamics through combinations of model reduction and mitigation of some factors that lead to over-meshing.
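The model reduction component follows the standard projection pattern: approximate the response in a small basis and project the governing equations onto it. In generic notation (not this report's), for M\ddot{x} + C\dot{x} + Kx = f(t) with x(t) \approx \Phi q(t), where the columns of \Phi are the retained modes,

    \Phi^{T} M \Phi\, \ddot{q} + \Phi^{T} C \Phi\, \dot{q} + \Phi^{T} K \Phi\, q = \Phi^{T} f(t),

so that time integration proceeds on a system whose dimension is the number of retained modes rather than the number of finite element degrees of freedom.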
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The advent of high quality factor (Q) microphotonic resonators has led to the demonstration of high-fidelity optical sensors of many physical phenomena (e.g., mechanical, chemical, and biological sensing), often with far better sensitivity than traditional techniques. Microphotonic resonators also offer potential advantages as uncooled thermal detectors, including significantly better noise performance, smaller pixel size, and faster response times than current thermal detectors. In particular, microphotonic thermal detectors do not suffer from Johnson noise in the sensor, offer far greater responsivity, and achieve greater thermal isolation as they do not require metallic leads to the sensing element. Such advantages make the prospect of a microphotonic thermal imager highly attractive. Here, we introduce the microphotonic thermal detection technique, present the theoretical basis for the approach, discuss our progress on the development of this technology, and consider future directions for thermal microphotonic imaging. We have already demonstrated the viability of device fabrication with a 20 {micro}m pixel and a scalable readout technique. Further, to date, we have achieved internal noise performance (NEP{sub internal} < 1 pW/{radical}Hz) in a 20 {micro}m pixel, thereby exceeding the noise performance of the best microbolometers while simultaneously demonstrating a thermal time constant ({tau} = 2 ms) that is five times faster. In all, this results in an internal detectivity of D*{sub internal} = 2 x 10{sup 9} cm {center_dot} {radical}Hz/W; while this is already roughly a factor of four better than the best uncooled commercial microbolometers, future demonstrations should enable another order of magnitude in sensitivity. Much work remains to achieve the level of maturity required for a deployable technology, but microphotonic thermal detection has already demonstrated considerable potential.
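The quoted detectivity follows directly from the pixel size and internal NEP via the standard definition D* = \sqrt{A_d}/NEP:

    D^{*}_{\mathrm{internal}} = \frac{\sqrt{A_d}}{\mathrm{NEP}} = \frac{2 \times 10^{-3}\ \mathrm{cm}}{10^{-12}\ \mathrm{W}/\sqrt{\mathrm{Hz}}} = 2 \times 10^{9}\ \mathrm{cm \cdot \sqrt{Hz}/W},

using \sqrt{A_d} = 20 {micro}m for the demonstrated pixel.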
Decisions for climate policy will need to take place in advance of climate science resolving all relevant uncertainties. Further, if the concern of policy is to reduce risk, then the best estimate of climate change impacts may not be as important as the currently understood uncertainty associated with realizable conditions having high consequence. This study focuses on one of the most uncertain aspects of future climate change - precipitation - to understand the implications of uncertainty on risk and the near-term justification for interventions to mitigate the course of climate change. We show that the mean risk of damage to the economy from climate change, at the national level, is on the order of one trillion dollars over the next 40 years, with employment impacts of nearly 7 million labor-years. At a 1% exceedance probability, the impact is over twice the mean-risk value. Impacts at the level of individual U.S. states are typically in the range of tens of billions of dollars, with employment losses exceeding hundreds of thousands of labor-years. We used results of the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) climate-model ensemble as the referent for climate uncertainty over the next 40 years, mapped the simulated weather hydrologically to the county level to determine the physical consequence to economic activity at the state level, and then performed a detailed, seventy-industry analysis of economic impact among the interacting lower-48 states. We determined industry GDP and employment impacts at the state level, as well as interstate population migration, effects on personal income, and the consequences for the U.S. trade balance.
This report documents the architecture and implementation of a Parallel Digital Forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.
This document outlines ways to communicate more effectively with U.S. Federal decision makers by describing the structure, authority, and motivations of various Federal groups, how to find the trusted advisors, and how to structure communication. All three branches of the Federal government have decision makers engaged in resolving major policy issues. The Legislative Branch (Congress) negotiates the authority and the resources that can be used by the Executive Branch. The Executive Branch has some latitude in implementation and in prioritizing resources. The Judicial Branch resolves disputes. The goal of all decision makers is to choose and implement the option that best fits the needs and wants of the community. However, understanding the risk of technical, political, and/or financial infeasibility and possible unintended consequences is extremely difficult. Primarily, decision makers are supported in their deliberations by trusted advisors who engage in the analysis of options as well as the day-to-day tasks associated with multi-party negotiations. In the best case, the trusted advisors use many sources of information to inform the process, including the opinions of experts and, if possible, predictive analysis from which they can evaluate the projected consequences of their decisions. The paper covers the following: (1) Understanding Executive and Legislative decision makers - What can these decision makers do? (2) Finding the target audience - Who are the internal and external trusted advisors? (3) Packaging the message - How do we parse and integrate information, and how do we use computer simulations or models in policy communication?