Solid-State Lighting: Science, Technology, and Economic Perspectives
Abstract not provided.
Numerous benchmark measurements have been performed to enable developers of neutron transport models and codes to evaluate the accuracy of their calculations. In particular, for criticality safety applications, the International Criticality Safety Benchmark Experiment Program (ICSBEP) annually publishes a handbook of critical and subcritical benchmarks. Relatively fewer benchmark measurements have been performed to validate photon transport models and codes, and unlike the ICSBEP, there is no program dedicated to the evaluation and publication of photon benchmarks. Even fewer coupled neutron-photon benchmarks have been performed. This report documents a coupled neutron-photon benchmark for plutonium metal reflected by polyethylene. A 4.5-kg sphere of α-phase, weapons-grade plutonium metal was measured in six reflected configurations: (1) Bare; (2) Reflected by 0.5 inch of high density polyethylene (HDPE); (3) Reflected by 1.0 inch of HDPE; (4) Reflected by 1.5 inches of HDPE; (5) Reflected by 3.0 inches of HDPE; and (6) Reflected by 6.0 inches of HDPE. Neutron and photon emissions from the plutonium sphere were measured using three instruments: (1) A gross neutron counter; (2) A neutron multiplicity counter; and (3) A high-resolution gamma spectrometer. This report documents the experimental conditions and results in detail sufficient to permit developers of radiation transport models and codes to construct models of the experiments and to compare their calculations to the measurements. All of the data acquired during this series of experiments are available upon request.
Journal of Computational and Theoretical Nanoscience
We perform pressure-driven non-equilibrium molecular dynamics (MD) simulations to drive a 1.0 M NaCl electrolyte through a dipole-lined smooth nanopore of diameter 12 Å penetrating a model membrane. We show that partial, about 70-80%, Cl- rejection is achieved at a ~68 atmosphere pressure. At the high water flux achieved in these model nanopores, which are particularly pertinent to atomistically smooth carbon nanotube membranes that permit fast water transport, the ion rejection ratio decreases with increasing water flux. The computed potential of mean force of Cl- frozen inside the nanopore reveals a barrier of 6.4 kcal/mol in 1.0 M NaCl solution. The Cl- permeation occurs despite the barrier, and this is identified as a dynamical effect, with ions carried along by the water flux. Na+-Cl- ion-pairing or aggregation near the pore entrance and inside the pore, where the dielectric screening is weaker than in bulk water, is critical to Cl- permeation. We also consider negative charges decorating the rim and the interior of the pore instead of dipoles, and find that, with sufficient pressure, Cl- from a 1.0 M NaCl solution readily passes through such nanopores. © 2009 American Scientific Publishers.
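The scale of that 6.4 kcal/mol barrier can be made concrete with a quick Boltzmann estimate: at room temperature (assumed here to be 300 K; the function name is ours), the equilibrium weight of a Cl- ion atop the barrier is on the order of 10^-5, which is why the observed permeation must be a flux-driven dynamical effect rather than equilibrium partitioning.

```python
import math

def boltzmann_factor(barrier_kcal_per_mol, temperature_k=300.0):
    """Relative Boltzmann weight of a state atop a free-energy barrier."""
    R = 1.987204e-3  # gas constant in kcal/(mol*K)
    return math.exp(-barrier_kcal_per_mol / (R * temperature_k))

# The 6.4 kcal/mol PMF barrier reported for Cl- inside the pore:
weight = boltzmann_factor(6.4)  # ~2e-5
```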
2008 Proceedings of the 2nd International Conference on Energy Sustainability, ES 2008
Concentrating Solar Power (CSP) dish systems use a parabolic dish to concentrate sunlight, providing heat for a thermodynamic cycle to generate shaft power and ultimately, electricity. Currently, leading contenders use a Stirling cycle engine with a heat absorber surface at about 800°C. The concentrated light passes through an aperture, which controls the thermal losses of the receiver system. Similar systems may use the concentrated light to heat a thermochemical process. The concentrator system, typically steel and glass, effectively supplies the system's 'fuel' over its service life, but that fuel supply manifests as an up-front capital cost. Therefore, it is imperative that the cost of the reflector assembly be minimized. However, dish systems typically concentrate light to a peak of as much as 13,000 suns, with an average geometric concentration ratio of over 3000 suns. Several recent dish-Stirling systems have incorporated reflector facets with a normally-distributed surface slope error (local distributed waviness) of 0.8 mrad RMS (1-sigma error). As systems move toward commercialization, the cost of these highly accurate facets must be assessed. However, when considering lower-cost options, any decrease in the performance of the facets must be considered in the evaluation of such facets. In this paper, I investigate the impact of randomly-distributed slope errors on the performance, and therefore the value, of a typical dish-Stirling system. There are many potential sources of error in a concentrating system. When considering facet options, the surface waviness, characterized as a normally-distributed slope error, has the greatest impact on the aperture size and therefore the thermal losses. I develop an optical model and a thermal model for the performance of a baseline system. I then analyze the impact on system performance for a range of mirror quality, and evaluate the impact of such performance changes on the economic value of the system.
This approach can be used to guide the evaluation of low-cost facets that differ in performance and cost. The methodology and results are applicable to other point- and line-focus thermal systems including dish-Brayton, dish-Thermochemical, tower systems, and troughs. Copyright © 2008 by ASME.
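The trade-off analyzed above can be illustrated with a simple intercept-factor model: treat the focal-plane flux as a circular Gaussian whose width grows with facet slope error (a reflected ray deviates by twice the surface slope error), and compute the fraction of power passing a given aperture. This is a generic textbook sketch, not the paper's optical model; the sun-width, focal-length, and aperture values below are illustrative assumptions.

```python
import math

def intercept_fraction(aperture_radius_m, focal_length_m, slope_error_mrad, sun_width_mrad=2.8):
    """Fraction of reflected power captured by a circular aperture, modeling the
    focal-plane flux as a circular Gaussian.  A reflected ray deviates by twice
    the surface slope error; the sun's finite angular width is added in quadrature.
    Illustrative sketch only -- not the paper's optical model."""
    beam_sigma_mrad = math.sqrt((2.0 * slope_error_mrad) ** 2 + sun_width_mrad ** 2)
    spot_sigma_m = focal_length_m * beam_sigma_mrad * 1e-3  # mrad -> rad
    return 1.0 - math.exp(-aperture_radius_m ** 2 / (2.0 * spot_sigma_m ** 2))

# Capture fraction through a 10-cm-radius aperture at a 5 m focal length:
good_facet = intercept_fraction(0.10, 5.0, slope_error_mrad=0.8)
poor_facet = intercept_fraction(0.10, 5.0, slope_error_mrad=5.0)
```

For a fixed aperture, a wavier facet spills more light past the receiver; alternatively, the aperture can be enlarged at the cost of higher thermal losses, which is exactly the trade the paper quantifies.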
2008 Proceedings of the 2nd International Conference on Energy Sustainability, ES 2008
Thermal energy storage can enhance the utility of parabolic trough solar power plants by providing the ability to match electrical output to peak demand periods. An important component of thermal energy storage system optimization is selecting the working fluid used as the storage media and/or heat transfer fluid. Large quantities of the working fluid are required for power plants at the scale of 100-MW, so maximizing heat transfer fluid performance while minimizing material cost is important. This paper reports recent developments of multi-component molten salt formulations consisting of common alkali nitrate and alkaline earth nitrate salts that have advantageous properties for applications as heat transfer fluids in parabolic trough systems. A primary disadvantage of molten salt heat transfer fluids is relatively high freeze-onset temperature compared to organic heat transfer oil. Experimental results are reported for formulations of inorganic molten salt mixtures that display freeze-onset temperatures below 100°C. In addition to phase-change behavior, several properties of these molten salts that significantly affect their suitability as thermal energy storage fluids were evaluated, including chemical stability and viscosity. These alternative molten salts have demonstrated chemical stability in the presence of air up to approximately 500°C in laboratory testing and display chemical equilibrium behavior similar to Solar Salt. The capability to operate at temperatures up to 500°C may allow an increase in maximum temperature operating capability vs. organic fluids in existing trough systems and will enable increased power cycle efficiency. Experimental measurements of viscosity were performed from near the freeze-onset temperature to about 200°C. Viscosities can exceed 100 cP at the lowest temperature but are less than 10 cP in the primary temperature range at which the mixtures would be used in a thermal energy storage system. 
Quantitative cost figures for the constituent salts and blends are not currently available, although these molten salt mixtures are expected to be inexpensive compared to synthetic organic heat transfer fluids. Experiments are in progress to confirm that the corrosion behavior of readily available alloys is satisfactory for long-term use. Copyright © 2008 by ASME.
2008 Proceedings of the 4th International Topical Meeting on High Temperature Reactor Technology, HTR 2008
Sandia National Laboratories (SNL), General Atomics Corporation (GA) and the French Commissariat à l'Énergie Atomique (CEA) have been conducting laboratory-scale experiments to investigate the thermochemical production of hydrogen using the Sulfur-Iodine (S-I) process. This project is being conducted as an International Nuclear Energy Research Initiative (INERI) project supported by the CEA and US DOE Nuclear Hydrogen Initiative. In the S-I process, 1) H2SO4 is catalytically decomposed at high temperature to produce SO2, O2 and H2O. 2) The SO2 is reacted with H2O and I2 to produce HI and H2SO4. The H2SO4 is returned to the acid decomposer. 3) The HI is decomposed to H2 and I2. The I2 is returned to the HI production process. Each participant in this work is developing one of the three primary reaction sections. SNL is responsible for the H2SO4 decomposition section; CEA, the primary HI production section; and General Atomics, the HI decomposition section. The objective of initial testing of the S-I laboratory-scale experiment was to establish the capability for integrated operations and demonstrate H2 production from the S-I cycle. The first phase of these objectives was achieved with the successful integrated operation of the SNL acid decomposition and CEA Bunsen reactor sections and the subsequent generation of H2 in the GA HI decomposition section. This is the first time the S-I cycle has been realized using engineering materials and operated at prototypic temperature and pressure to produce hydrogen. © 2008 by ASME.
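As a sanity check on the cycle chemistry, the three reaction sections (balanced in the usual way; the coefficients below are the standard ones, not taken verbatim from the abstract) sum to net water splitting, with H2SO4, SO2, HI, and I2 all recycled internally. A small bookkeeping sketch, with products positive and reactants negative:

```python
from collections import Counter

# Each reaction: species -> net stoichiometric coefficient (products +, reactants -).
h2so4_decomposition = {"H2SO4": -1, "SO2": +1, "H2O": +1, "O2": +0.5}
bunsen_reaction     = {"SO2": -1, "H2O": -2, "I2": -1, "H2SO4": +1, "HI": +2}
hi_decomposition    = {"HI": -2, "H2": +1, "I2": +1}

net = Counter()
for rxn in (h2so4_decomposition, bunsen_reaction, hi_decomposition):
    net.update(rxn)  # element-wise addition of coefficients
net = {species: coeff for species, coeff in net.items() if abs(coeff) > 1e-12}
# Every intermediate cancels, leaving H2O -> H2 + 1/2 O2.
```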
Proceedings - Electronic Components and Technology Conference
We have developed a complete process module for fabricating front end of line (FEOL) through silicon vias (TSVs). In this paper we describe the integration, which relies on using thermally deposited silicon as a sacrificial material to fill the TSV during FEOL processing, followed by its removal and replacement with tungsten after FEOL processing is complete. The uniqueness of this approach follows mainly from forming the TSVs early in the FEOL while still ultimately using metal as the via fill material. TSVs formed early in the FEOL can be formed at comparatively small diameter, high aspect ratio, and high spatial density. We have demonstrated FEOL-integrated TSVs that are 2 μm in diameter, over 45 μm deep, and on 20 μm pitch for a possible interconnect density of 250,000/cm2. Moreover, thermal oxidation of silicon can be used to form the dielectric isolation. Thermal oxidation is conformal and robust in the as-formed state. Finally, TSVs formed in the FEOL alleviate device design constraints common to vias-last integration. © 2009 IEEE.
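The quoted interconnect density follows directly from the pitch: vias on a 20 µm square grid give (10,000 µm/cm ÷ 20 µm)² = 250,000 per cm², and the demonstrated 2 µm diameter at over 45 µm depth implies an aspect ratio above 22:1. A quick verification (function name is ours):

```python
def via_density_per_cm2(pitch_um):
    """Maximum areal density of vias on a square grid with the given pitch."""
    vias_per_cm = 1e4 / pitch_um  # 10,000 um per cm
    return vias_per_cm ** 2

density = via_density_per_cm2(20.0)  # 250,000 vias/cm^2, matching the reported figure
aspect_ratio = 45.0 / 2.0            # depth/diameter = 22.5:1 for the demonstrated TSVs
```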
We report the results of an LDRD effort to investigate new technologies for the identification of small-sized (mm to cm) debris in low-earth orbit. This small-yet-energetic debris presents a threat to the integrity of space-assets worldwide and represents significant security challenge to the international community. We present a nonexhaustive review of recent US and Russian efforts to meet the challenges of debris identification and removal and then provide a detailed description of joint US-Russian plans for sensitive, laser-based imaging of small debris at distances of hundreds of kilometers and relative velocities of several kilometers per second. Plans for the upcoming experimental testing of these imaging schemes are presented and a preliminary path toward system integration is identified.
This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
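A minimal sketch of the Bayesian half of such an analysis, for a hypothetical topology (component 1 in series with a parallel pair), go/no-go data only, and uniform Beta(1, 1) priors; none of these choices come from the paper. Posterior uncertainty in system reliability is propagated by sampling component posteriors through the structure function; note that component 1 has zero failures, the sensitive case the paper highlights.

```python
import random

def system_reliability(p):
    """Component 1 in series with a parallel pair (2, 3) -- a toy mixed
    series/parallel topology, not the paper's actual system."""
    p1, p2, p3 = p
    return p1 * (1.0 - (1.0 - p2) * (1.0 - p3))

def posterior_draws(successes, trials, n_draws=20000, seed=1):
    """Draw component reliabilities from independent Beta(1+s, 1+f) posteriors
    (uniform priors) and push them through the system structure function."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        p = [rng.betavariate(1 + s, 1 + (n - s)) for s, n in zip(successes, trials)]
        draws.append(system_reliability(p))
    draws.sort()
    return draws

# Hypothetical go/no-go data: 20/20, 9/10, 10/10 successes.
draws = posterior_draws(successes=[20, 9, 10], trials=[20, 10, 10])
median = draws[len(draws) // 2]
lower5 = draws[int(0.05 * len(draws))]  # 5th percentile: a 95% lower credible bound
```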
I used supramolecular self-assembling cyanine and the polyamine spermine binding to Escherichia coli genomic DNA as a model for DNA collapse during high throughput screening. Polyamine binding to DNA converts the normally right handed B-DNA into left handed Z-DNA conformation. Polyamine binding to DNA was inhibited by the supramolecular self-assembling cyanine. Self-assembly of cyanine upon DNA scaffold was likewise competitively inhibited by spermine as signaled by fluorescence quench from DNA-cyanine ensemble. Sequence of DNA exposure to cyanine or spermine was critical in determining the magnitude of fluorescence quench. Methanol potentiated spermine inhibition by >10-fold. The IC{sub 50} for spermine inhibition was 0.35 {+-} 0.03 {micro}M and the association constant Ka was 2.86 x 10{sup -6}M. Reversibility of the DNA-polyamine interactions was evident from quench mitigation at higher concentrations of cyanine. System flexibility was demonstrated by similar spermine interactions with {lambda}DNA. The choices and rationale regarding the polyamine, the cyanine dye as well as the remarkable effects of methanol are discussed in detail. Cyanine might be a safer alternative to the mutagenic toxin ethidium bromide for investigating DNA-drug interactions. The combined actions of polyamines and alcohols mediate DNA collapse producing hybrid bio-nanomaterials with novel signaling properties that might be useful in biosensor applications. Finally, this work will be submitted to Analytical Sciences (Japan) for publication. This journal published our earlier, related work on cyanine supramolecular self-assembly upon a variety of nucleic acid scaffolds.
This report describes a new methodology, social language network analysis (SLNA), that combines tools from social language processing and network analysis to identify socially situated relationships between individuals which, though subtle, are highly influential. Specifically, SLNA aims to identify and characterize the nature of working relationships by processing artifacts generated with computer-mediated communication systems, such as instant message texts or emails. Because social language processing is able to identify psychological, social, and emotional processes that individuals are not able to fully mask, social language network analysis can clarify and highlight complex interdependencies between group members, even when these relationships are latent or unrecognized. This report outlines the philosophical antecedents of SLNA, the mechanics of preprocessing, processing, and post-processing stages, and some example results obtained by applying this approach to a 15-month corporate discussion archive.
Staggered bioterrorist attacks with aerosolized pathogens on population centers present a formidable challenge to resource allocation and response planning. Response and planning must commence immediately after the detection of the first attack, with little or no information about the second attack. In this report, we outline a method by which resource allocation may be performed. It involves probabilistic reconstruction of the bioterrorist attack from partial observations of the outbreak, followed by an optimization-under-uncertainty approach to perform resource allocations. We consider both single-site and time-staggered multi-site attacks (i.e., a reload scenario) under conditions when resources (personnel and equipment which are difficult to gather and transport) are insufficient. Both communicable (plague) and non-communicable (anthrax) diseases are addressed, and we also consider cases when the data, the time-series of people reporting with symptoms, are confounded by a reporting delay. We demonstrate how our approach develops allocation profiles that have the potential to reduce the probability of an extremely adverse outcome in exchange for a more certain, but less adverse, outcome. We explore the effect of placing limits on daily allocations. Further, since our method is data-driven, the resource allocation progressively improves as more data become available.
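The allocation logic can be caricatured with a scenario-based toy: given probabilistic reconstructions of a possible second attack, choose the split of a fixed resource pool that minimizes expected unmet demand. The sites, demands, and probabilities below are entirely hypothetical, and real analyses would also weigh the worst-case (most adverse) outcome, as the report discusses.

```python
def expected_and_worst(alloc_site1, total, scenarios):
    """Evaluate one allocation split against a set of attack scenarios.
    Each scenario is (probability, demand at site 1, demand at site 2);
    unmet demand is the adverse outcome.  Purely illustrative numbers."""
    alloc2 = total - alloc_site1
    outcomes = []
    for prob, d1, d2 in scenarios:
        unmet = max(d1 - alloc_site1, 0) + max(d2 - alloc2, 0)
        outcomes.append((prob, unmet))
    expected = sum(p * u for p, u in outcomes)
    worst = max(u for _, u in outcomes)
    return expected, worst

# Three hypothetical reconstructed scenarios for a staggered two-site attack:
scenarios = [(0.5, 40, 0), (0.3, 40, 30), (0.2, 10, 60)]
best = min(range(0, 61, 5), key=lambda a: expected_and_worst(a, 60, scenarios)[0])
```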
Understanding charge transport processes at a molecular level using computational techniques is currently hindered by a lack of appropriate models for incorporating anisotropic electric fields in molecular dynamics (MD) simulations. An important technological example is ion transport through solid-electrolyte interphase (SEI) layers that form in many common types of batteries. These layers regulate the rate at which electrochemical reactions occur, affecting power, safety, and reliability. In this work, we develop a model for incorporating electric fields in MD using an atomistic-to-continuum framework. This framework provides the mathematical and algorithmic infrastructure to couple finite element (FE) representations of continuous data with atomic data. In this application, the electric potential is represented on a FE mesh and is calculated from a Poisson equation with source terms determined by the distribution of the atomic charges. Boundary conditions can be imposed naturally using the FE description of the potential, which then propagates to each atom through modified forces. The method is verified using simulations where analytical or theoretical solutions are known. Calculations of salt water solutions in complex domains are performed to understand how ions are attracted to charged surfaces in the presence of electric fields and interfering media.
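The coupling idea can be illustrated in one dimension: project point charges onto mesh nodes, solve the Poisson equation with Dirichlet boundary conditions, and read the potential back at the nodes. The finite-difference sketch below is ours, standing in for the paper's FE atomistic-to-continuum machinery; units and parameter values are arbitrary.

```python
def solve_poisson_1d(charge_positions, q, length=10.0, n=101, v_left=0.0, v_right=1.0):
    """Solve d2V/dx2 = -rho on a uniform 1-D grid by Gauss-Seidel iteration,
    with each point charge spread onto its nearest node -- a toy analogue of
    projecting atomic charges onto an FE mesh.  Illustrative sketch only."""
    h = length / (n - 1)
    rho = [0.0] * n
    for x in charge_positions:
        rho[round(x / h)] += q / h  # nodal charge density from a point charge
    # Start from the linear solution satisfying the Dirichlet boundary values.
    v = [v_left + (v_right - v_left) * i / (n - 1) for i in range(n)]
    for _ in range(5000):  # Gauss-Seidel sweeps over interior nodes
        for i in range(1, n - 1):
            v[i] = 0.5 * (v[i - 1] + v[i + 1] + h * h * rho[i])
    return v

# A single positive charge at the domain center bulges the potential above
# the linear 0 -> 1 background imposed by the boundary conditions.
v = solve_poisson_1d(charge_positions=[5.0], q=1.0)
```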
Fiber-optic gas phase surface plasmon resonance (SPR) detection of several contaminant gases of interest to state-of-health monitoring in high-consequence sealed systems has been demonstrated. These contaminant gases include H2, H2S, and moisture using a single-ended optical fiber mode. Data demonstrate that results can be obtained and sensitivity is adequate in a dosimetric mode that allows periodic monitoring of system atmospheres. Modeling studies were performed to direct the design of the sensor probe for optimized dimensions and to allow simultaneous monitoring of several constituents with a single sensor fiber. Testing of the system demonstrates the ability to detect 70 mTorr partial pressures of H2 using this technique and <280 µTorr partial pressures of H2S. In addition, a multiple sensor fiber has been demonstrated that allows a single fiber to measure H2, H2S, and H2O without changing the fiber or the analytical system.
Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.
The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
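For readers unfamiliar with the method, the core of a rejection-free (n-fold way / Gillespie-style) kinetic Monte Carlo loop is small: enumerate event rates, advance the clock by an exponential waiting time drawn from the total rate, and select one event in proportion to its rate. The toy below, a single particle hopping to the end of a 1-D lattice, is our illustration of that loop, not a SPPARKS model.

```python
import math
import random

def kmc_first_passage(n_sites=20, hop_rate=1.0, seed=7):
    """Rejection-free kinetic Monte Carlo for one particle hopping on a 1-D
    lattice with a reflecting left wall; returns the simulated time to first
    reach the rightmost site.  A toy sketch of the kMC loop only."""
    rng = random.Random(seed)
    pos, t = 0, 0.0
    while pos < n_sites - 1:
        # Enumerate the possible events and their rates (hop right / hop left).
        events = [(+1, hop_rate)]
        if pos > 0:
            events.append((-1, hop_rate))
        total_rate = sum(r for _, r in events)
        # Advance the clock by an exponentially distributed waiting time.
        t += -math.log(1.0 - rng.random()) / total_rate
        # Pick one event with probability proportional to its rate.
        x = rng.random() * total_rate
        for move, rate in events:
            x -= rate
            if x <= 0:
                pos += move
                break
    return t

elapsed = kmc_first_passage()
```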
A number of codes have been developed in the past for safeguards analysis, but many are dated, and no single code is able to cover all aspects of materials accountancy, process monitoring, and diversion scenario analysis. The purpose of this work was to integrate a transient solvent extraction simulation module developed at Oak Ridge National Laboratory with the Separations and Safeguards Performance Model (SSPM), developed at Sandia National Laboratories, as a first step toward creating a more versatile design and evaluation tool. The SSPM was designed for materials accountancy and process monitoring analyses, but previous versions of the code have included limited detail on the chemical processes, including chemical separations. The transient solvent extraction model is based on the ORNL SEPHIS code approach to consider solute build-up in a bank of contactors in the PUREX process. Combined, these capabilities yield a more robust transient separations and safeguards model for evaluating safeguards system design. This coupling and initial results are presented. In addition, some observations toward further enhancement of separations and safeguards modeling based on this effort are provided, including: items to be addressed in integrating legacy codes, additional improvements needed for a fully functional solvent extraction module, and recommendations for future integration of other chemical process modules.
This highly interdisciplinary team has developed dual-color, total internal reflection microscopy (TIRF-M) methods that enable us to optically detect and track in real time protein migration and clustering at membrane interfaces. By coupling TIRF-M with advanced analysis techniques (image correlation spectroscopy, single particle tracking) we have captured subtle changes in membrane organization that characterize immune responses. We have used this approach to elucidate the initial stages of cell activation in the IgE signaling network of mast cells and the Toll-like receptor (TLR-4) response in macrophages stimulated by bacteria. To help interpret these measurements, we have undertaken a computational modeling effort to connect the protein motion and lipid interactions. This work provides a deeper understanding of the initial stages of cellular response to external agents, including dynamics of interaction of key components in the signaling network at the 'immunological synapse,' the contact region of the cell and its adversary.
The effect of composition on the elastic responses of alumina particle-filled epoxy composites is examined using isotropic elastic response models relating the average stresses and strains in a discretely reinforced composite material consisting of perfectly bonded and uniformly distributed particles in a solid isotropic elastic matrix. Responses for small elastic deformations and large hydrostatic and plane-strain compressions are considered. The response model for small elastic deformations depends on known elastic properties of the matrix and particles, the volume fraction of the particles, and two additional material properties that reflect the composition and microstructure of the composite material. These two material properties, called strain concentration coefficients, are characterized for eleven alumina-filled epoxy composites. It is found that while the strain concentration coefficients depend strongly on the volume fraction of alumina particles, no significant dependence on particle morphology and size is observed for the compositions examined. Additionally, an analysis of the strain concentration coefficients reveals a remarkably simple dependency on the alumina volume fraction. Responses for large hydrostatic and plane-strain compressions are obtained by generalizing the equations developed for small deformation, and letting the alumina volume fraction in the composite increase with compression. The large compression plane-strain response model is shown to predict equilibrium Hugoniot states in alumina-filled epoxy compositions remarkably well.
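For orientation, any isotropic two-phase composite of this kind must fall between the classical Voigt (uniform-strain) and Reuss (uniform-stress) bounds, and the strain concentration coefficients can be viewed as locating the actual response within that interval. The sketch below is a generic illustration of those bounds, not the report's strain-concentration model, using rough handbook moduli (about 5 GPa for epoxy, about 250 GPa for alumina) rather than values from the report.

```python
def voigt_reuss_bounds(k_matrix, k_particle, vol_frac):
    """Voigt (upper) and Reuss (lower) bounds on an effective elastic modulus
    of a two-phase composite with particle volume fraction vol_frac.  These
    classical bounds bracket any perfectly bonded, uniformly mixed composite."""
    voigt = (1.0 - vol_frac) * k_matrix + vol_frac * k_particle
    reuss = 1.0 / ((1.0 - vol_frac) / k_matrix + vol_frac / k_particle)
    return reuss, voigt

# Illustrative handbook moduli (GPa) at 40% alumina filler:
lo, hi = voigt_reuss_bounds(5.0, 250.0, 0.40)
```

The wide gap between the bounds at stiff-particle/compliant-matrix contrasts like alumina/epoxy is precisely why a measured, composition-specific characterization such as the report's is needed.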
This report focuses on quantum chemistry and ab initio molecular dynamics (AIMD) calculations applied to elucidate the mechanism of the multi-step, 2-electron, electrochemical reduction of the greenhouse gas molecule carbon dioxide (CO2) to carbon monoxide (CO) in aqueous media. When combined with H2 gas to form synthesis ('syn') gas, CO becomes a key precursor to methane, methanol, and other useful hydrocarbon products. To elucidate the mechanism of this reaction, we apply computational electrochemistry, a fledgling but important area of basic science critical to energy storage. This report highlights several approaches, including the calculation of redox potentials, the explicit depiction of liquid water environments using AIMD, and free energy methods. While costly, these pioneering calculations reveal the key role of hydration- and protonation-stabilization of reaction intermediates, and may inform the design of CO2-capture materials as well as its electrochemical reduction. In the course of this work, we have also dealt with the challenges of identifying and applying electronic structure methods which are sufficiently accurate to deal with transition-metal-ion complex-based catalysts. Such electronic structure methods are also pertinent to the accurate modeling of actinide materials and therefore to nuclear energy research. Our multi-pronged effort towards achieving this titular goal of the LDRD is discussed.
Our LDRD research project sought to develop an analytical method for detection of chemicals used in nuclear materials processing. Our approach is distinctly different from current research involving hardware-based sensors. By utilizing the response of indigenous species of plants and/or animals surrounding (or within) a nuclear processing facility, we propose tracking 'suspicious molecules' relevant to nuclear materials processing. As proof of concept, we have examined TBP (tributylphosphate), used in uranium enrichment as well as plutonium extraction from spent nuclear fuels. We compare TBP to the TPP (triphenylphosphate) analog to determine the uniqueness of the metabonomic response. We show that there is a unique metabonomic response within our animal model to TBP. The TBP signature can further be delineated from that of TPP. We have also developed unique methods of instrumental transfer for metabonomic data sets.
Rapid identification of aerosolized biological agents following an alarm by particle triggering systems is needed to enable response actions that save lives and protect assets. Rapid identifiers must achieve species-level specificity, as this is required to distinguish disease-causing organisms (e.g., Bacillus anthracis) from benign neighbors (e.g., Bacillus subtilis). We have developed a rapid (1-5 minute), novel identification methodology that sorts intact organisms from each other and from particulates using capillary electrophoresis (CE), and detects using near-infrared (NIR) absorbance and scattering. We have successfully demonstrated CE resolution of Bacillus spores and vegetative bacteria at the species level. To achieve sufficient sensitivity for detection needs (~10^4 cfu/mL for bacteria), we have developed fiber-coupled cavity-enhanced absorbance techniques. Using this method, we have demonstrated approximately two orders of magnitude greater sensitivity than published results for absorbing dyes, and single particle (spore) detection through primarily scattering effects. Results of the integrated CE-NIR system for spore detection are presented.
Abstract not provided.
This report documents a high-level analysis of the benefit and cost for flywheel energy storage used to provide area regulation for the electricity supply and transmission system in California. Area regulation is an 'ancillary service' needed for a reliable and stable regional electricity grid. The analysis was based on results from a demonstration, in California, of flywheel energy storage developed by Beacon Power Corporation (the system's manufacturer). The demonstration showed the flywheel storage system's ability to provide 'rapid-response' regulation: flywheel storage output can be varied much more rapidly than the output from conventional regulation sources, making flywheels a more attractive regulation resource. The performance of the flywheel storage system demonstrated was generally consistent with requirements for a possible new class of regulation resources - 'rapid-response' energy-storage-based regulation - in California. In short, it was demonstrated that Beacon Power Corporation's flywheel system follows a rapidly changing control signal (the ACE, which changes every four seconds). Based on the results and on expected plant cost and performance, the Beacon Power flywheel storage system has a good chance of being a financially viable regulation resource. Results indicate a benefit/cost ratio of 1.5 to 1.8 using what may be somewhat conservative assumptions. A benefit/cost ratio of one indicates that, based on the financial assumptions used, the investment's financial returns just meet the investor's target.
Abstract not provided.
This report describes trans-organizational efforts to investigate the impact of chip multiprocessors (CMPs) on the performance of important Sandia application codes. The impact of CMPs on the performance and applicability of Sandia's system software was also investigated. The goal of the investigation was to make algorithmic and architectural recommendations for next generation platform acquisitions.
Abstract not provided.
Currently, electrical power generation uses about 140 billion gallons of water per day, accounting for over 39% of all freshwater withdrawals and rivaling irrigated agriculture as the leading user of water. Coupled to this water use are the required pumping, conveyance, treatment, storage, and distribution of the water, which on average consume 3% of all electric power generated. While water and energy use are tightly coupled, planning and management of these fundamental resources are rarely treated in an integrated fashion. Toward this need, a decision support framework has been developed that targets the shared needs of energy and water producers, resource managers, regulators, and decision makers at the federal, state, and local levels. The framework integrates analysis and optimization capabilities to identify trade-offs and 'best' alternatives among a broad list of energy/water options and objectives. The decision support framework is formulated in a modular architecture, facilitating tailored analyses over different geographical regions and scales (e.g., national, state, county, watershed, NERC region). An interactive interface allows direct control of the model and access to real-time results displayed as charts, graphs, and maps. Ultimately, this open and interactive modeling framework provides a tool for evaluating competing policy and technical options relevant to the energy-water nexus.
Abstract not provided.
This report documents the Nambe Pueblo Water Budget and Water Forecasting model. The model has been constructed using Powersim Studio (PS), a software package designed to investigate complex systems where flows and accumulations are central to the system. Here PS has been used as a platform for modeling various aspects of Nambe Pueblo's current and future water use. The model contains three major components: the Water Forecast Component, the Irrigation Scheduling Component, and the Reservoir Model Component. In each of the components, the user can change variables to investigate the impacts of water management scenarios on future water use. The Water Forecast Component includes forecasting for industrial, commercial, and livestock use. Domestic demand is also forecasted based on user-specified current population, population growth rates, and per capita water consumption. Irrigation efficiencies are quantified in the Irrigated Agriculture component using critical information concerning diversion rates, acreages, ditch dimensions, and seepage rates. Results from this section are used in the Water Demand Forecast, Irrigation Scheduling, and Reservoir Model components. The Reservoir Component contains two sections: (1) Storage and Inflow Accumulations by Categories and (2) Release, Diversion and Shortages. Results from both sections are derived from the calibrated Nambe Reservoir model, where historic, pre-dam or above-dam USGS stream flow data are fed into the model and releases are calculated.
Abstract not provided.
The goal of this project is to develop an efficient energy scavenger for converting ambient low-frequency vibrations into electrical power. To achieve this, a novel inertial micro power generator architecture has been developed that utilizes the bi-stable motion of a mechanical mass to convert a broad range of low-frequency (<30 Hz), large-deflection (>250 µm) ambient vibrations into high-frequency electrical output energy. The generator incorporates a bi-stable mechanical structure to initiate high-frequency mechanical oscillations in an electromagnetic scavenger. This frequency up-conversion technique enhances the electromechanical coupling and increases the generated power. The architecture is called the Parametric Frequency Increased Generator (PFIG). Three generations of the device have been fabricated. It was first demonstrated using a larger bench-top prototype with a functional volume of 3.7 cm³, which generated a peak power of 558 µW and an average power of 39.5 µW at an input acceleration of 1 g applied at 10 Hz. The performance of this device has not yet been matched by any other reported work; it yielded the best power density and efficiency of any scavenger operating from low-frequency (<10 Hz) vibrations. A second-generation device was then fabricated. It generated a peak power of 288 µW and an average power of 5.8 µW from an input acceleration of 9.8 m/s² at 10 Hz and operates over a frequency range of 20 Hz. The internal volume of the generator is 2.1 cm³ (3.7 cm³ including casing), half that of a standard AA battery. Lastly, a piezoelectric version of the PFIG is currently being developed. This device demonstrates one of the key features of the PFIG architecture, namely that it is more suitable for MEMS integration than resonant generators, by incorporating a brittle bulk piezoelectric ceramic. It is the first micro-scale piezoelectric generator capable of <10 Hz operation. The fabricated device currently generates a peak power of 25.9 µW and an average power of 1.21 µW from an input acceleration of 9.8 m/s² at 10 Hz. The device operates over a frequency range of 23 Hz, and the internal volume of the generator is 1.2 cm³.
Abstract not provided.
Abstract not provided.
Inelastic neutron scattering, density functional theory, ab initio molecular dynamics, and classical molecular dynamics were used to examine the behavior of nanoconfined water in palygorskite and sepiolite. These complementary methods provide a strong basis for illustrating and correlating the significant differences observed in the spectroscopic signatures of water in two unique clay minerals. Distortions of silicate tetrahedra in the smaller-pore palygorskite permit only a limited number of hydrogen bonds, with relatively short bond lengths. In contrast, without the distorted silicate tetrahedra, an increased number of hydrogen bonds are observed in the larger-pore sepiolite, with correspondingly longer bond distances. Because there is more hydrogen bonding at the pore interface in sepiolite than in palygorskite, we expect librational modes to have higher overall frequencies (i.e., more restricted rotational motions); experimental neutron scattering data clearly illustrate this shift in spectroscopic signatures. Distortions of the silicate tetrahedra in these minerals effectively disrupt hydrogen bonding patterns at the silicate-water interface, and this disruption has a greater impact on the dynamical behavior of nanoconfined water than either the size of the pore or the presence of coordinatively unsaturated magnesium edge sites.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
This Quick Reference Guide supplements the more complete Guide to Preparing SAND Reports and Other Communication Products. It provides limited guidance on how to prepare SAND Reports at Sandia National Laboratories. Users are directed to the in-depth guide for explanations of processes.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The location of the liquid-vapor critical point (c.p.) is one of the key features of equation-of-state models used in simulating high energy density physics and pulsed power experiments. For example, material behavior in the vicinity of the vapor dome is critical in determining how and when coronal plasmas form in expanding wires. Transport properties, such as conductivity and opacity, can vary by an order of magnitude depending on whether the state of the material is inside or outside the vapor dome. Because of the difficulty of experimentally producing states near the vapor dome, for all but a few materials, such as cesium and mercury, the uncertainty in the location of the c.p. is of order 100%. These states of interest can be produced on Z through high-velocity shock-and-release experiments. For example, it is estimated that release adiabats from ~1000 GPa in aluminum would skirt the vapor dome, allowing estimates of the c.p. to be made. This is within the reach of Z experiments (flyer plate velocity of ~30 km/s). Recent high-fidelity EOS models and hydrocode simulations suggest that the dynamic two-phase flow behavior observed in initial scoping experiments can be reproduced, providing a link between theory and experiment. Experimental identification of the c.p. in aluminum would represent the first measurement of its kind in a dynamic experiment. Furthermore, once the c.p. has been experimentally determined, it should be possible to probe the electrical conductivity, opacity, reflectivity, etc. of the material near the vapor dome using a variety of diagnostics. We propose a combined experimental and theoretical investigation with the initial emphasis on aluminum.
Petaflops systems will have tens to hundreds of thousands of compute nodes, which increases the likelihood of faults. Applications use checkpoint/restart to recover from these faults, but even under ideal conditions, applications running on more than 30,000 nodes will likely spend more than half of their total run time saving checkpoints, restarting, and redoing work that was lost. We created a library that performs redundant computations on additional nodes allocated to the application. An active node and its redundant partner form a node bundle, which fails, and causes an application restart, only when both nodes in the bundle fail. The goals of this library are to learn whether redundancy can be provided entirely at the user level, what requirements such a library places on a Reliability, Availability, and Serviceability (RAS) system, and what its impact on performance and run time is. We find that our redundant MPI layer imposes a relatively modest performance penalty for applications but greatly reduces the number of application interrupts. This reduction in interrupts leads to large savings in restart and rework time. For large-scale applications, the savings compensate for the performance loss and the additional nodes required for redundant computations.
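The arithmetic behind node bundling is easy to sketch. Assuming independent node failures with a per-node failure probability p per checkpoint interval (an illustrative assumption; the report's findings are empirical), pairing nodes squares the per-bundle failure probability:

```python
# Back-of-the-envelope model of why node bundling reduces application
# interrupts. Failure probabilities and node counts are illustrative.

def interrupt_probability(num_nodes, p, redundant=False):
    """Probability the application is interrupted during one interval,
    assuming independent node failures with per-node probability p."""
    if redundant:
        # Nodes are paired; a bundle fails only when BOTH partners fail.
        bundles = num_nodes // 2
        return 1 - (1 - p * p) ** bundles
    return 1 - (1 - p) ** num_nodes

p = 1e-4  # hypothetical per-node failure probability per interval
print(interrupt_probability(30_000, p))        # plain 30k-node run: ~0.95
print(interrupt_probability(60_000, p, True))  # 30k active + 30k partners: ~3e-4
```

Even after doubling the node count, the bundled run is interrupted orders of magnitude less often, which is the source of the restart and rework savings described above.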
Ionizing radiation is known to cause Single Event Effects (SEEs) in a variety of electronic devices. The mechanism that leads to these SEEs is the current the radiation induces in these devices. While this phenomenon is detrimental in ICs, it is also the basic mechanism behind the operation of semiconductor radiation detectors. To predict SEEs in ICs and detector responses, we need to be able to simulate the radiation-induced current as a function of time. Analytical models exist that work for very simple detector configurations but fail for anything more complex. At the other end of the spectrum, TCAD programs can simulate this process in microelectronic devices, but these codes cost hundreds of thousands of dollars, require huge computing resources, and in certain cases fail to predict the correct behavior. A simulation model based on the Gunn theorem was developed and used within the COMSOL Multiphysics framework.
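The Gunn theorem generalizes the Shockley-Ramo weighting-field result; for a parallel-plate detector at fixed bias the two coincide, and the weighting field is simply 1/d. A minimal sketch under that simplified geometry (all numbers are illustrative, not from the report):

```python
# Sketch of radiation-induced current via the weighting-field formulation
# (Shockley-Ramo; the Gunn theorem reduces to this form for electrodes at
# fixed bias). Assumes a parallel-plate detector of thickness d, where the
# weighting field is uniform: E_w = 1/d.

Q_E = 1.602e-19  # elementary charge (C)

def induced_current(num_carriers, drift_velocity, thickness):
    """Instantaneous current induced on the electrode by drifting carriers:
    i = N * q * v * E_w, with E_w = 1/d for a parallel plate."""
    return num_carriers * Q_E * drift_velocity / thickness

# 1e6 carriers drifting at 1e5 m/s in a 300-um-thick detector
i = induced_current(1e6, 1e5, 300e-6)
print(f"{i:.3e} A")  # ~5.34e-05 A
```

Evaluating this expression along simulated carrier trajectories gives the current pulse as a function of time; the more general Gunn form is what handles the complex geometries where the analytical models above break down.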
Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need to make these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically, we describe the design and implementation of an open source toolkit for graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.
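The toolkit's own API is not reproduced here; as a generic illustration of the kind of graph-informatics primitive such a toolkit exposes, the following computes normalized degree centrality from an edge list using only the standard library:

```python
# Degree centrality over an undirected edge list -- a representative
# graph-analysis primitive, not the project's actual API.

from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality: degree / (n - 1) for each node."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
print(degree_centrality(edges))  # "a" touches every other node: centrality 1.0
```

In an analysis setting, nodes would be entities (people, shipments, facilities) and high-centrality nodes are natural starting points for an analyst's investigation.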
Abstract not provided.
Abstract not provided.
Traditional safeguards and security design for fuel cycle facilities is done separately and after the facility design is near completion. This can result in higher costs due to retrofits and redundant use of data. Future facilities will incorporate safeguards and security early in the design process and integrate the systems to make better use of plant data and strengthen both systems. The purpose of this project was to evaluate the integration of materials control and accounting (MC&A) measurements with physical security design for a nuclear reprocessing plant. Locations throughout the plant where data overlap occurs or where MC&A data could be a benefit were identified. This mapping is presented along with the methodology for including the additional data in existing probabilistic assessments to evaluate safeguards and security systems designs.
Abstract not provided.
Progress in MEMS fabrication has enabled a wide variety of force- and displacement-sensing devices to be constructed. One device under intense development at Sandia is a passive shock switch, described elsewhere (Mitchell 2008). A goal of all MEMS devices, including the shock switch, is to achieve a high degree of reliability. This, in turn, requires systematic methods for validating device performance during each iteration of design. Once a design is finalized, suitable tools are needed to provide quality assurance for manufactured devices. To ensure device performance, measurements on these devices must be traceable to NIST standards. In addition, accurate metrology of MEMS components is needed to validate the mechanical models used to design devices, accelerating development and meeting emerging needs. We describe progress toward a NIST-traceable calibration method for a next-generation, 2D Interfacial Force Microscope (IFM) for applications in MEMS metrology and qualification, discuss the results of screening several candidate calibration methods, and identify the known sources of uncertainty in each method.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Advanced computing hardware, and software written to exploit massively parallel architectures, greatly facilitate the computation of extremely large problems. On the other hand, these tools, though enabling higher-fidelity models, have often resulted in much longer run times and turnaround times in providing answers to engineering problems. The impediments include smaller elements and consequently smaller time steps, much larger systems of equations to solve, and the inclusion of nonlinearities that had been ignored when lower-fidelity models were the norm. The research effort reported here focuses on accelerating the analysis process for structural dynamics through combinations of model reduction and mitigation of some of the factors that lead to over-meshing.
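One classical model-reduction technique for structural dynamics is static (Guyan) condensation, which eliminates interior degrees of freedom from the stiffness matrix before time integration. A minimal sketch on a hypothetical 3-DOF spring chain (the report does not specify which reduction method is used; this is for illustration only):

```python
# Guyan (static) condensation: eliminate a "slave" DOF from a symmetric
# stiffness matrix K, leaving the reduced system on the remaining masters:
# K_r = K_mm - K_ms * K_ss^-1 * K_sm. The 3-DOF spring chain is illustrative.

def condense_one(K, slave):
    """Condense a single slave DOF out of symmetric stiffness matrix K
    (given as a list of lists); returns the reduced matrix on the masters."""
    n = len(K)
    masters = [i for i in range(n) if i != slave]
    kss = K[slave][slave]
    return [[K[i][j] - K[i][slave] * K[slave][j] / kss for j in masters]
            for i in masters]

# Unit-spring chain fixed at both ends: 3 DOFs, condense the interior one.
K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
K_red = condense_one(K, slave=1)
print(K_red)  # [[1.5, -0.5], [-0.5, 1.5]]
```

Shrinking the system this way reduces both the equation count and, with the stiff interior DOFs gone, the time-step restriction, which is exactly the run-time pressure the paragraph above describes.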
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
The advent of high-quality-factor (Q) microphotonic resonators has led to the demonstration of high-fidelity optical sensors of many physical phenomena (e.g., mechanical, chemical, and biological sensing), often with far better sensitivity than traditional techniques. Microphotonic resonators also offer potential advantages as uncooled thermal detectors, including significantly better noise performance, smaller pixel size, and faster response times than current thermal detectors. In particular, microphotonic thermal detectors do not suffer from Johnson noise in the sensor, offer far greater responsivity, and provide greater thermal isolation because they do not require metallic leads to the sensing element. Such advantages make the prospect of a microphotonic thermal imager highly attractive. Here, we introduce the microphotonic thermal detection technique, present the theoretical basis for the approach, discuss our progress on the development of this technology, and consider future directions for thermal microphotonic imaging. We have already demonstrated the viability of device fabrication with the successful demonstration of a 20 µm pixel and a scalable readout technique. Further, to date we have achieved internal noise performance (NEP_internal < 1 pW/√Hz) in a 20 µm pixel, thereby exceeding the noise performance of the best microbolometers while simultaneously demonstrating a thermal time constant (τ = 2 ms) that is five times faster. In all, this results in an internal detectivity of D*_internal = 2 × 10⁹ cm·√Hz/W; while this is already roughly a factor of four better than the best uncooled commercial microbolometers, future demonstrations should enable another order of magnitude in sensitivity. Much work remains to achieve the level of maturity required for a deployable technology, but microphotonic thermal detection has already demonstrated considerable potential.
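The quoted detectivity follows directly from the pixel size and internal NEP via the standard definition D* = √A / NEP, where A is the detector area. A quick consistency check of the figures above:

```python
# Consistency check of the detectivity quoted above: D* = sqrt(A) / NEP.
from math import sqrt

pixel_edge = 20e-4  # 20 um pixel edge, expressed in cm
nep = 1e-12         # internal NEP, W/sqrt(Hz)

d_star = sqrt(pixel_edge * pixel_edge) / nep  # units: cm*sqrt(Hz)/W
print(f"D* = {d_star:.1e} cm·√Hz/W")  # ~2e9, matching the stated value
```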
Decisions for climate policy will need to be made before climate science resolves all relevant uncertainties. Further, if the concern of policy is to reduce risk, then the best estimate of climate change impacts may not be as important as the currently understood uncertainty associated with realizable conditions having high consequence. This study focuses on one of the most uncertain aspects of future climate change, precipitation, to understand the implications of uncertainty for risk and the near-term justification for interventions to mitigate the course of climate change. We show that the mean risk of damage to the economy from climate change, at the national level, is on the order of one trillion dollars over the next 40 years, with employment impacts of nearly 7 million labor-years. At a 1% exceedance probability, the impact is over twice the mean-risk value. Impacts at the level of individual U.S. states are then typically in the tens of billions of dollars, with employment losses exceeding hundreds of thousands of labor-years. We used results of the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) climate-model ensemble as the referent for climate uncertainty over the next 40 years, mapped the simulated weather hydrologically to the county level to determine the physical consequence to economic activity at the state level, and then performed a detailed, seventy-industry analysis of economic impact among the interacting lower-48 states. We determined industry GDP and employment impacts at the state level, as well as interstate population migration, effects on personal income, and the consequences for the U.S. trade balance.
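The relationship between the mean risk and a 1% exceedance-probability value can be illustrated with a synthetic ensemble; the distribution and numbers below are invented for illustration and are not the study's data:

```python
# Exceedance-probability risk measure on a synthetic damage ensemble: the
# 1% exceedance value is the 99th percentile of simulated outcomes. For a
# right-skewed damage distribution it sits well above the mean, which is
# why tail risk can dominate the best-estimate view discussed above.

import random

random.seed(0)
# Synthetic, right-skewed damage outcomes (arbitrary units).
damages = [random.lognormvariate(0, 0.8) for _ in range(10_000)]

mean_risk = sum(damages) / len(damages)
p99 = sorted(damages)[int(0.99 * len(damages))]  # 1% exceedance value

print(f"mean risk: {mean_risk:.2f}, 1% exceedance: {p99:.2f}")
```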
This report documents the architecture and implementation of a Parallel Digital Forensics (PDF) infrastructure, which is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger; the only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. Supporting that algorithm research requires a flexible base infrastructure. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets expected to continue increasing significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.
This document outlines ways to communicate more effectively with U.S. Federal decision makers by describing the structure, authority, and motivations of various Federal groups, how to find the trusted advisors, and how to structure communication. All three branches of the Federal government have decision makers engaged in resolving major policy issues. The Legislative Branch (Congress) negotiates the authority and the resources that can be used by the Executive Branch. The Executive Branch has some latitude in implementation and in prioritizing resources. The Judicial Branch resolves disputes. The goal of all decision makers is to choose and implement the option that best fits the needs and wants of the community; however, understanding the risk of technical, political, and/or financial infeasibility, and of possible unintended consequences, is extremely difficult. Decision makers are supported in their deliberations primarily by trusted advisors who engage in the analysis of options as well as the day-to-day tasks associated with multi-party negotiations. In the best case, the trusted advisors use many sources of information to inform the process, including the opinions of experts and, if possible, predictive analysis from which they can evaluate the projected consequences of their decisions. The paper covers the following: (1) Understanding Executive and Legislative decision makers - what can these decision makers do? (2) Finding the target audience - who are the internal and external trusted advisors? (3) Packaging the message - how do we parse and integrate information, and how do we use computer simulation or models in policy communication?