Single Ion Detection for Donor Devices
Abstract not provided.
The ability to automatically morph an existing mesh to conform to geometry modifications is a necessary capability for rapid prototyping of design variations. This paper compares six methods for morphing hexahedral and tetrahedral meshes, including the previously published FEMWARP and LBWARP methods as well as four new methods. Element quality and performance results show that different methods are superior on different models. We recommend that designers of applications that use mesh morphing consider both FEMWARP and a linear simplex-based method.
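As a minimal illustration of the Laplacian-smoothing idea behind methods such as FEMWARP and LBWARP (a sketch in the same spirit, not the paper's implementation), interior nodes can be re-solved as averages of their neighbors after the boundary is displaced:

```python
# Minimal sketch of a Laplacian (weighted-graph) mesh morph: interior vertices
# are relaxed so each satisfies a discrete Laplace equation after the boundary
# is displaced. A 1-D chain with uniform weights keeps the example tiny.

def morph_chain(coords, boundary, n_iter=2000):
    """Relax interior node coordinates toward the average of their neighbors."""
    x = list(coords)
    for _ in range(n_iter):
        for i in range(len(x)):
            if i in boundary:
                continue  # boundary nodes carry the prescribed displacement
            x[i] = 0.5 * (x[i - 1] + x[i + 1])  # discrete Laplace equation
    return x

# Original 5-node chain [0, 1, 2, 3, 4]; move the right boundary node to 6.0.
morphed = morph_chain([0.0, 1.0, 2.0, 3.0, 6.0], boundary={0, 4})
# Interior nodes redistribute uniformly: [0.0, 1.5, 3.0, 4.5, 6.0]
```

In 2-D or 3-D the same structure appears, with finite-element (FEMWARP) or log-barrier (LBWARP) weights replacing the uniform averaging used here.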
A study was performed to compare the annual performance of 50-MWe Andasol-like trough plants that employ either a 2-tank or a thermocline-type molten-salt thermal storage system. TRNSYS software was used to create the plant models and to perform the annual simulations. The annual performance of each plant was found to be nearly identical in the base-case comparison. The thermocline plant performed nearly as well primarily because many trough power blocks can operate at a temperature significantly below the design point. However, if temperatures close to the design point are required, the performance of the 2-tank plant would be significantly better than that of the thermocline.
The proper alignment of facets on a dish engine concentrated solar power system is critical to the performance of the system. These systems are generally highly concentrating to produce high temperatures for maximum thermal efficiency, so there is little tolerance for poor optical alignment. Improper alignment can lead to poor performance and shortened life through excessively high flux on the receiver surfaces, imbalanced power on multicylinder engines, and intercept losses at the aperture. Alignment approaches used in the past are time-consuming field operations, typically taking 4-6 h per dish with 40-80 facets on the dish. Production systems of faceted dishes will need rapid, accurate alignment implemented in a fraction of an hour. In this paper, we present an extension to our Sandia Optical Fringe Analysis Slope Technique mirror characterization system that will automatically acquire data, implement an alignment strategy, and provide real-time mirror angle corrections to actuators or labor beneath the dish. The Alignment Implementation for Manufacturing using Fringe Analysis Slope Technique (AIMFAST) has been implemented and tested at the prototype level. Here we present the approach used in AIMFAST to rapidly characterize the dish system and provide near-real-time adjustment updates for each facet. The implemented approach can provide adjustment updates every 5 s, suitable for manual or automated adjustment of facets on a dish assembly line.
Applied Physics Letters
This document identifies and provides access to source documentation for the Site-Wide Environmental Impact Statement for Sandia National Laboratories/New Mexico. Specifically, it lists agreements between the U.S. Department of Energy (DOE), the National Nuclear Security Administration (NNSA), DOE/NNSA/Sandia Site Office (SSO), Sandia Corporation, and local and state government agencies, the Department of Defense, Kirtland Air Force Base, and other federal agencies.
The development of a new radiation effects microscopy (REM) technique is crucial as emerging semiconductor technologies demonstrate smaller feature sizes and thicker back end of line (BEOL) layers. To penetrate these materials and still deposit sufficient energy into the device to induce single event effects, high-energy heavy ions are required. Ion photon emission microscopy (IPEM) is a technique that utilizes coincident photons, emitted from the location of each ion impact, to map out regions of radiation sensitivity in integrated circuits and devices, circumventing the obstacle of focusing high-energy heavy ions. Several versions of the IPEM have been developed and implemented at Sandia National Laboratories (SNL). One such instrument has been utilized on the microbeam line of the 6 MV tandem accelerator at SNL. Another IPEM was designed for ex-vacuo use at the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory (LBNL). Extensive engineering is involved in the development of these IPEM systems, including resolving issues with electronics, event timing, optics, phosphor selection, and mechanics. The various versions of the IPEM, along with the obstacles and benefits associated with each, will be presented. In addition, the current stage of IPEM development as a user instrument will be discussed in the context of recent results.
In this paper an approach is described for the efficient computation of the mixed-potential scalar and dyadic Green's functions for a one-dimensional periodic array of point sources (periodic along the x direction) embedded in a planar stratified structure. Suitable asymptotic extractions are performed on the slowly converging spectral series. The extracted terms are summed back through the Ewald method, modified and optimized to deal efficiently with all the different terms. The accelerated Green's functions allow for complex wavenumbers and are thus suitable for the analysis of leaky-wave antennas. Suitable choices of the spectral integration paths are made in order to account for leakage effects and the proper/improper nature of the various space harmonics that form the 1-D periodic Green's function.
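For reference, the spectral (Floquet) form of a 1-D periodic Green's function underlying this discussion can be written as follows; the notation (period $d$, fixed phasing $k_{x0}$) is generic and not necessarily identical to the paper's:

```latex
G_p(\mathbf{r},\mathbf{r}') \;=\; \frac{1}{d}\sum_{n=-\infty}^{\infty}
\tilde{G}\!\left(k_{xn};\,y,z,y',z'\right)\,e^{-j k_{xn}(x-x')},
\qquad
k_{xn} \;=\; k_{x0} + \frac{2\pi n}{d} .
```

Each space harmonic $k_{xn}$ may be proper (decaying) or improper (growing) transversely, which is why the choice of spectral integration path matters when leakage effects are present.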
NREL's Solar Advisor Model (SAM) is employed to estimate the current and future costs for parabolic troughs and molten-salt power towers in the US market. Future troughs are assumed to achieve higher field temperatures via the successful deployment of low-melting-point molten-salt heat-transfer fluids by 2015-2020. Similarly, it is assumed that molten-salt power towers are successfully deployed at 100 MW scale over the same time period, increasing to 200 MW by 2025. The levelized cost of electricity (LCOE) for both technologies is predicted to drop below 11 cents/kWh (assuming a 10% investment tax credit and other financial inputs outlined in the paper), making the technologies competitive in the marketplace as benchmarked by the California MPR. Both technologies can be deployed with large amounts of thermal energy storage, yielding capacity factors as high as 65% while maintaining an optimum LCOE.
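For intuition about how such LCOE figures arise, a simplified fixed-charge-rate calculation can be sketched as below. All numbers are illustrative assumptions, not values from the paper; SAM's actual cash-flow model is far more detailed.

```python
# Back-of-the-envelope LCOE sketch (fixed-charge-rate form), not SAM's model.

def crf(rate, years):
    """Capital recovery factor: annualizes an up-front capital cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capital_cost, fixed_om, annual_kwh, rate=0.08, years=30):
    """Levelized cost of electricity in $/kWh."""
    return (capital_cost * crf(rate, years) + fixed_om) / annual_kwh

# Hypothetical 100 MW plant with storage giving a 65% capacity factor:
annual_kwh = 100e3 * 8760 * 0.65            # kW * h/yr * capacity factor
cost = lcoe(capital_cost=600e6, fixed_om=10e6, annual_kwh=annual_kwh)
# With these assumed inputs, cost lands near 11 cents/kWh
```

The point of the sketch is the structure: capital cost annualized by the capital recovery factor, plus O&M, divided by energy delivered, so storage raises cost but also raises the denominator through the capacity factor.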
The objective of this project is to investigate the accuracy of error metrics in SCEPTRE, produce useful benchmarks, identify metrics that work well and those that do not, and present easy-to-reference results.
Nuclear Science and Technology
The YDB impact hypothesis of Firestone et al. (2007) is so extremely improbable that it can be considered statistically impossible, in addition to being physically impossible. Comets make up only about 1% of the population of Earth-crossing objects. Broken comets are a vanishingly small fraction, and only exist as Earth-sized clusters for a very short period of time. Only a small fraction of impacts occur at angles as shallow as proposed by the YDB impact authors. Events that are exceptionally unlikely to take place in the age of the Universe are 'statistically impossible'. The size distribution of Earth-crossing asteroids is well constrained by astronomical observations, DoD satellite bolide frequencies, and the cratering record. This distribution can be transformed to a probability density function (PDF) for the largest expected impact of the past 20,000 years. The largest impact of any kind expected over the period of interest is 250 m. Anything larger than 2 km is exceptionally unlikely (probability less than 1%). The impact hypothesis does not rely on any sound physical model. A 4-km diameter comet, even if it fragmented upon entry, would not disperse or explode in the atmosphere. It would generate a crater about 50 km in diameter with a transient cavity as deep as 10 km. There is no evidence for such a large, young crater associated with the YDB. There is no model to suggest that a comet impact of this size is capable of generating continent-wide fires or blast damage, and there is no physical mechanism that could cause a 4-km comet to explode at the optimum height of 500 km. The highest possible altitude for a cometary optimum height is about 15 km, for a 120-m diameter comet. To maximize blast and thermal damage, a 4-km comet would have to break into tens of thousands of fragments of this size and spread out over the entire continent, but that would require lateral forces that greatly exceed the drag force, and would not conserve energy.
Airbursts are decompression explosions in which projectile material reaches high temperature but not high pressure states. Meteoritic diamonds would be vaporized. Nanodiamonds at the YDB are not evidence for an airburst or for an impact.
Assigning an acceptable level of power reliability in a security system environment requires a methodical approach to design when considering the alternatives tied to the reliability and life of the system. The downtime for a piece of equipment, be it for failure, routine maintenance, replacement, refurbishment, or connection of new equipment, is a major factor in determining the reliability of the overall system. A further consideration is whether the system is static or dynamic in its growth. Most highly reliable security power source systems are supplied by utility power with uninterruptible power source (UPS) and generator backup. The combination of UPS and generator backup with a reliable utility typically provides full compliance with security requirements. In the energy market and from government agencies, there is growing pressure to utilize alternative sources of energy other than fossil fuel, to increase the number of local generating systems to reduce dependence on remote generating stations, and to cut down on carbon effects to the environment. There are also conditions where a security system may be limited in functionality due to lack of utility power in remote locations. One alternative energy source is a renewable energy hybrid system comprising a photovoltaic (solar) array with a battery bank and a backup generator set. This is a viable source of energy in the residential and commercial markets, where energy management schemes can be incorporated and systems are monitored and maintained regularly. However, the reliability of this source could be considered diminished in the security system environment, which imposes stringent uptime requirements.
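The reliability trade-offs above can be made concrete with simple availability arithmetic. The sketch below treats utility, UPS, and generator as independent redundant sources, a simplifying assumption, and the availability values are illustrative, not data from this work:

```python
# Availability of redundant power sources: the system is down only when all
# sources are down simultaneously (assuming independence, which real common-
# cause failures would violate).

def parallel_availability(*avail):
    """Availability of a set of redundant, independent sources."""
    unavail = 1.0
    for a in avail:
        unavail *= (1.0 - a)
    return 1.0 - unavail

utility, ups, generator = 0.999, 0.9999, 0.98   # assumed per-source values
system = parallel_availability(utility, ups, generator)
downtime_min_per_year = (1.0 - system) * 365 * 24 * 60
# Redundancy drives expected downtime well below a minute per year
```

The same arithmetic, with lower standalone availability for a PV/battery/generator hybrid, is what makes such hybrids look diminished against stringent security uptime requirements.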
Modern high-level programming languages often contain constructs whose semantics are non-trivial. In practice, however, software developers generally restrict the use of such constructs to settings in which their semantics is simple (programmers use language constructs in ways they understand and can reason about). As a result, when developing tools for analyzing and manipulating software, a disproportionate amount of effort ends up being spent developing capabilities needed to analyze constructs in settings that are infrequently used. This paper takes the position that such distinctions between theory and practice are an important measure of the analyzability of a language.
Is there a systematic way to make EIGER data visual, to help us communicate the results of our work and to help us understand those results? EIGER electromagnetics solver data is a challenge to interpret; it is extremely useful to post-process the results and make them visual. In summary, this presentation describes the design and demonstration of a simple, extensible, and user-friendly package to automate post-processing and visualization of EIGER data for any research group using MOENCH, EIGER and JUNGFRAU.
Realization of a 33 GHz Phononic Crystal Fabricated in a Freestanding Membrane
Near-field microwave microscopy can be used as an alternative to atomic-force microscopy or Raman microscopy for determining graphene thickness. We evaluated the AC impedance of few-layer graphene. The impedance of mono- and few-layer graphene at 4 GHz was found to be predominantly active (resistive). Near-field microwave microscopy allows simultaneous imaging of the location, geometry, thickness, and distribution of electrical properties of graphene without device fabrication. Our results may be useful for the design of future graphene-based microwave devices.
Applied Physics Letters
State-of-the-art techniques for failure localization and design modification through bulk silicon are essential for multi-level metallization and new flip-chip packaging methods. The tutorial reviews the transmission of light through silicon, sample preparation, and backside defect localization techniques that are both currently available and under development. The techniques covered include emission microscopy, scanning laser microscope-based techniques (electro-optic techniques, LIVA and its derivatives), and other non-IR based tools (FIB, e-beam techniques, etc.).
SEM and SOM techniques for IC analysis that take advantage of 'active injection' are reviewed. Active injection refers to techniques that alter the electrical characteristics of the device analyzed. All of these techniques can be performed on a standard SEM or SOM (using the proper laser wavelengths).
The shallow water equations are used as a test for many atmospheric models because the solution mimics the horizontal aspects of atmospheric dynamics while the simplicity of the equations makes them useful for numerical experiments. This study describes a high-order element-based Galerkin method for the global shallow water equations using absolute vorticity, divergence, and fluid depth (atmospheric thickness) as the prognostic variables, while the wind field is a diagnostic variable that can be calculated from the stream function and velocity potential (the Laplacians of which are the vorticity and divergence, respectively). The numerical method employed to solve the shallow water system is based on the discontinuous Galerkin and spectral element methods. The discontinuous Galerkin method, which is inherently conservative, is used to solve the equations governing the two conservative variables: absolute vorticity and atmospheric thickness (mass). The spectral element method is used to solve the divergence equation and the Poisson equations for the velocity potential and the stream function. Time integration is done with an explicit strong-stability-preserving second-order Runge-Kutta scheme, and the wind field is updated directly from the vorticity and divergence at each stage. The computational domain is the cubed sphere. A stable steady-state test is run and convergence results are provided, showing that the method is high-order accurate. Additionally, two tests without analytic solutions are run, with results comparable to previous high-resolution runs found in the literature.
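The strong-stability-preserving second-order Runge-Kutta scheme mentioned above has a simple Shu-Osher form; the sketch below applies it to a scalar test ODE rather than the full shallow water system:

```python
import math

def ssp_rk2_step(u, dt, f):
    """One SSP-RK2 (Shu-Osher) step: an Euler stage, then a convex combination."""
    u1 = u + dt * f(u)                         # forward-Euler predictor stage
    return 0.5 * u + 0.5 * (u1 + dt * f(u1))   # averaging stage (SSP property)

# Scalar decay test du/dt = -u, exact solution exp(-t); integrate to t = 1.
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssp_rk2_step(u, dt, lambda v: -v)
# u approximates math.exp(-1) to second-order accuracy
```

Because each stage is a convex combination of forward-Euler steps, any stability or monotonicity property of forward Euler carries over, which is the appeal of SSP schemes for conservative transport.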
A prototype of a tritium thermoelectric generator (TTG) is currently being developed at Sandia. In the TTG, a vacuum jacket reduces the amount of heat lost from the high temperature source via convection. However, outgassing presents challenges to maintaining a vacuum for many years. Getters are chemically active substances that scavenge residual gases in a vacuum system. In order to maintain the vacuum jacket at approximately 1.0 × 10⁻⁴ torr for decades, non-evaporable getters that can operate from -55 °C to 60 °C will be used. This paper focuses on the hydrogen capacity and absorption rate of the St707™ non-evaporable getter by SAES. Using a getter testing manifold, we have carried out experiments to test these characteristics of the getter over the temperature range of -77 °C to 60 °C. The results from this study can be used to size the getter appropriately.
As a source of clean, remote energy, photovoltaic (PV) systems are an important area of research. The majority of solar cells are rigid materials with negligible flexibility. Flexible PV systems possess many advantages, such as being transportable and incorporable on diverse structures. Amorphous silicon and organic PV systems are flexible; however, they lack the efficiency and lifetime of rigid cells. There is also a need for PV systems that are lightweight, especially in space and flight applications. We propose a solution to this problem by arranging rigid cells onto a flexible substrate, creating efficient, lightweight, and flexible devices. To date, we have created a working prototype of our design using 1.1 cm × 1 cm Emcore cells. We have achieved a better power-to-weight ratio than commercially available PowerFilm®, which uses thin-film silicon and yields 0.034 W/g. We have also tested our concept with other types of cells and verified that our methods can be adapted to any rigid solar cell technology. This allows us to use the highest efficiency devices despite their physical characteristics. Depending on the cell size we use, we can rival the curvature of most available flexible PV devices. We have shown how the benefits of rigid solar cells can be integrated into flexible applications, allowing performance that surpasses alternative technologies.
The objective is to deconvolve radiochromic film data into an ion energy spectrum. The purpose is to: (1) Experiment - utilize HERMES III as a pulsed neutron source; and (2) Unfolding - the ion energy spectrum gives insight into when the H⁺ ions form, and the spectrum is needed to predict neutron production. The conclusions are: (1) the majority of ions are high energy, and therefore they form during the main beam pulse; (2) image processing worked; and (3) unfolding proved to be relatively stable.
Although stochastic programming is a powerful tool for modeling decision-making under uncertainty, various impediments have historically prevented its widespread use. One key factor involves the ability of non-specialists to easily express stochastic programming problems as extensions of deterministic models, which are often formulated first. A second key factor relates to the difficulty of solving stochastic programming models, particularly the general mixed-integer, multi-stage case. Intricate, configurable, and parallel decomposition strategies are frequently required to achieve tractable run-times. We simultaneously address both of these factors in our PySP software package, which is part of the COIN-OR Coopr open-source Python project for optimization. To formulate a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree with associated uncertain parameters in the Pyomo open-source algebraic modeling language. Given these two models, PySP provides two paths for solution of the corresponding stochastic program. The first alternative involves writing the extensive form and invoking a standard deterministic (mixed-integer) solver. For more complex stochastic programs, we provide an implementation of Rockafellar and Wets' Progressive Hedging algorithm. Our particular focus is on the use of Progressive Hedging as an effective heuristic for approximating general multi-stage, mixed-integer stochastic programs. By leveraging the combination of a high-level programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations. PySP has been used by a number of research groups, including our own, to rapidly prototype and solve difficult stochastic programming problems.
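The consensus mechanism at the heart of Progressive Hedging can be illustrated on a toy quadratic two-stage problem. This is a sketch of the algorithm's structure, not PySP's API; the scenario subproblem and its closed-form solve are stand-ins for a real model handed to a solver:

```python
# Toy Progressive Hedging: per-scenario copies of a first-stage variable x are
# driven to consensus. Scenario s wants to minimize (x - xi_s)^2; the penalized
# subproblem min (x - xi)^2 + w*x + (rho/2)*(x - xbar)^2 has a closed form.

def progressive_hedging(xis, probs, rho=1.0, iters=200):
    xbar = sum(p * xi for p, xi in zip(probs, xis))   # initial consensus guess
    w = [0.0] * len(xis)                              # scenario multipliers
    for _ in range(iters):
        # Solve each scenario subproblem (closed-form minimizer here).
        xs = [(2 * xi - wi + rho * xbar) / (2 + rho)
              for xi, wi in zip(xis, w)]
        xbar = sum(p * x for p, x in zip(probs, xs))  # average: consensus value
        w = [wi + rho * (x - xbar) for wi, x in zip(w, xs)]  # multiplier update
    return xbar

# Three equally likely scenarios; the consensus solution is the scenario mean.
xhat = progressive_hedging([1.0, 2.0, 6.0], [1 / 3, 1 / 3, 1 / 3])
# xhat converges to 3.0
```

In PySP the same loop runs with full Pyomo scenario models solved by a (mixed-integer) solver in place of the closed-form step, which is why it parallelizes naturally across scenarios.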
Marine hydrokinetic (MHK) projects will extract energy from ocean currents and tides, thereby altering water velocities and currents in the site's waterway. These hydrodynamic changes can potentially affect the ecosystem, both near the MHK installation and in surrounding (i.e., far-field) regions. In both marine and freshwater environments, devices will remove energy (momentum) from the system, potentially altering water quality and sediment dynamics. In estuaries, tidal ranges and residence times could change (either increasing or decreasing depending on system flow properties and where the effects are measured). Effects will be proportional to the number and size of structures installed, with large MHK projects having the greatest potential effects and requiring the most in-depth analyses. This work implements modifications to an existing flow, sediment dynamics, and water-quality code (SNL-EFDC) to qualify, quantify, and visualize the influence of MHK-device momentum/energy extraction at a representative site. New algorithms simulate changes to system fluid dynamics due to removal of momentum and reflect commensurate changes in turbulent kinetic energy and its dissipation rate. A generic model is developed to demonstrate corresponding changes to erosion, sediment dynamics, and water quality. Also, bed-slope effects on sediment erosion and bedload velocity are incorporated to better understand scour potential.
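A common way to represent device momentum extraction in such flow codes is an actuator-disk-style sink term. The generic form below is an illustration, not necessarily the exact formulation implemented in SNL-EFDC:

```latex
S_{\mathbf{u}} \;=\; -\tfrac{1}{2}\,\rho\,C_T\,\frac{A}{V_{\mathrm{cell}}}\,
\lvert\mathbf{u}\rvert\,\mathbf{u},
\qquad
P_{\mathrm{extracted}} \;=\; \tfrac{1}{2}\,\rho\,C_P\,A\,\lvert\mathbf{u}\rvert^{3},
```

where $C_T$ and $C_P$ are thrust and power coefficients, $A$ is the device swept area, and $V_{\mathrm{cell}}$ is the volume of the grid cell receiving the sink. The difference between thrust work and extracted power appears as the commensurate source/sink adjustments to turbulent kinetic energy and its dissipation rate.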
The outline of this presentation is: (1) High-level view of Zoltan; (2) Requirements, data models, and interface; (3) Load Balancing and Partitioning; (4) Matrix Ordering, Graph Coloring; (5) Utilities; (6) Isorropia; and (7) Zoltan2.
Combustion Theory and Modelling
Composite materials, particularly fiber-reinforced plastic composites, have been extensively utilized in many military and industrial applications. As an important structural component in these applications, the composites are often subjected to external impact loading. It is desirable to understand the mechanical response of the composites under impact loading for performance evaluation in the applications. Even though many material models for the composites have been developed, experimental investigation is still needed to validate and verify the models. It is essential to investigate the intrinsic material response. However, it is often more practical to determine the structural response of composites, such as a composite beam. The composites are usually subjected to out-of-plane loading in applications. When a composite beam is subjected to a sudden transverse impact, two different kinds of stress waves, longitudinal and transverse waves, are generated and propagate in the beam. The longitudinal stress wave propagates through the thickness direction, whereas the transverse stress wave propagates in the in-plane directions. The longitudinal stress wave speed is usually considered a material constant determined by the material density and Young's modulus, regardless of the loading rate. By contrast, the transverse wave speed is related to structural parameters. In ballistic mechanics, the transverse wave plays a key role in absorbing external impact energy [1]. The faster the transverse wave speed, the more impact energy is dissipated. Since the transverse wave speed is not a material constant, it cannot be calculated from stress-wave theory. One can place several transducers to track the transverse wave propagation. An alternative but more efficient method is to apply digital image correlation (DIC) to visualize the transverse wave propagation.
In this study, we applied the three-point bending (TPB) technique to a Kolsky compression bar to facilitate dynamic transverse loading on a glass fiber/epoxy composite beam. The high-speed DIC technique was employed to study the transverse wave propagation.
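The longitudinal bar-wave relation cited above, c = sqrt(E/ρ), is simple to evaluate. The modulus and density below are typical textbook values for a glass/epoxy composite, assumptions for illustration rather than measurements from this study:

```python
import math

def longitudinal_wave_speed(E, rho):
    """Bar-wave speed (m/s) from Young's modulus E (Pa) and density rho (kg/m^3)."""
    return math.sqrt(E / rho)

# Assumed glass/epoxy values: E ~ 25 GPa along the fibers, rho ~ 1900 kg/m^3.
c = longitudinal_wave_speed(E=25e9, rho=1900.0)
# c comes out around 3.6 km/s, rate-independent per the relation above
```

The transverse wave speed has no such closed form, which is exactly why the study tracks it optically with high-speed DIC instead of computing it.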
There has been increasing demand to understand the stress-strain response as well as damage and failure mechanisms of materials under impact loading conditions. Dynamic tensile characterization has been an efficient approach to acquiring satisfactory information on mechanical properties, including damage and failure, of the materials under investigation. However, in order to obtain valid experimental data, reliable tensile experimental techniques at high strain rates are required. This includes not only precise experimental apparatus but also reliable experimental procedures and comprehensive data interpretation. The Kolsky bar, originally developed by Kolsky in 1949 [1] for high-rate compressive characterization of materials, has been extended for dynamic tensile testing since 1960 [2]. In comparison to the Kolsky compression bar, the experimental design of the Kolsky tension bar has been much more diversified, particularly in producing high-speed tensile pulses in the bars. Moreover, instead of directly sandwiching a cylindrical specimen between the bars, as in Kolsky compression bar experiments, the specimen must be firmly attached to the bar ends in Kolsky tension bar experiments. A common method is to thread a dumbbell specimen into the ends of the incident and transmission bars. The relatively complicated striking and specimen gripping systems in Kolsky tension bar techniques often lead to disturbances in stress wave propagation in the bars, requiring appropriate interpretation of experimental data. In this study, we employed a modified Kolsky tension bar, newly developed at Sandia National Laboratories, Livermore, CA, to explore the dynamic tensile response of a 4330-V steel. The design of the new Kolsky tension bar was presented at the 2010 SEM Annual Conference [3]. Figures 1 and 2 show the actual photograph and schematic of the Kolsky tension bar, respectively. As shown in Fig. 2, the gun barrel is directly connected to the incident bar with a coupler. The cylindrical striker set inside the gun barrel is launched to impact the end cap that is threaded into the open end of the gun barrel, producing a tensile pulse in the gun barrel and the incident bar.
The outline of this presentation is: (1) Proton acceleration with high-power lasers - the Target Normal Sheath Acceleration concept; (2) Proton acceleration with mass-reduced targets - breaking the 60 MeV threshold; (3) Proton beam divergence control - a novel focusing target geometry; and (4) New experimental capability development - proton radiography on Z.
International Journal of Mathematical Modelling and Numerical Optimisation
Ferroelectric lead zirconate titanate (PZT) thin films are used for integrated capacitors, ferroelectric memory, and piezoelectric actuators. Solution deposition is routinely used to fabricate these thin films. During the solution deposition process, the precursor solutions are spin-coated onto the substrate and then pyrolyzed to form an amorphous film. The amorphous film is then heated at a higher temperature (650-700 °C) to crystallize the film into the desired perovskite phase. Phase purity is critical in achieving high ferroelectric properties. Moreover, due to the anisotropy in the structure and properties of PZT, it is desirable to control the texture obtained in these thin films. The heating rate during the crystallization process is known to affect the sequence of phase evolution and the texture obtained in these thin films. However, to date, a comprehensive understanding of how phase and texture evolution takes place is still lacking. To understand the effects of heating rate on phase and texture evolution, in-situ diffraction experiments during the crystallization of solution-deposited PZT thin films were carried out at beamline 6-ID-B of the Advanced Photon Source (APS). The high X-ray flux coupled with the sophisticated detectors available at the APS synchrotron source allows for in-situ characterization of phase and texture evolution at the high ramp rates that are commonly used during processing of PZT thin films. A PZT solution of nominal composition 52/48 (Zr/Ti) was spin-coated onto a platinum-coated Si substrate (Pt/TiOₓ/SiO₂/Si). The films were crystallized using an infrared lamp, similar to a rapid thermal annealing furnace. The ramp rate was adjusted by controlling the voltage applied to the infrared lamp and increasing the voltage by a constant step with every acquisition. Four different ramp rates, ranging from ~1000 °C/s to ~1 °C/s, were investigated.
The sample was aligned in grazing incidence to maximize the signal from the thin film. Successive diffraction patterns were acquired with a 1 s acquisition time using a MAR SX-165 CCD detector during crystallization. The sample-to-detector distance and the tilt rotations of the detector were determined in Fit2D using Al₂O₃ as the calibrant. These corrections were applied to the patterns when binning the data into radial (2θ) and azimuthal bins. The texture observed in the thin film was qualitatively analyzed by fitting the intensity peaks along the azimuthal direction with a Gaussian profile function to obtain the integrated intensity of the peaks. Data analysis and peak fitting were done using the curve fitting toolbox in MATLAB. A fluorite-type phase was observed to form before the perovskite phase for all ramp rates. PtₓPb is a transient intermetallic formed by the interaction of the thin film and the bottom electrode during crystallization. The ramp rate was observed to significantly affect the amount of PtₓPb observed in the thin films during crystallization. The ramp rate was also observed to affect the final texture obtained in the thin films. These results will be discussed in the poster in view of the current understanding of these materials.
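The azimuthal-intensity analysis can be sketched with synthetic data as below. A simple moment-based estimate stands in for the MATLAB nonlinear Gaussian fit used in the study; the peak position, width, and background are invented for illustration:

```python
import math

def gaussian(eta, amp, mu, sigma):
    """Gaussian profile as a function of azimuthal angle eta (degrees)."""
    return amp * math.exp(-0.5 * ((eta - mu) / sigma) ** 2)

# Synthetic azimuthal profile: one texture peak at eta = 90 deg on a flat
# background, sampled in 1-degree azimuthal bins.
etas = [float(i) for i in range(181)]
bg = 5.0
profile = [bg + gaussian(e, amp=100.0, mu=90.0, sigma=8.0) for e in etas]

# Moment-based peak characterization after background subtraction:
# area (integrated intensity), centroid, and width of the azimuthal peak.
net = [y - bg for y in profile]
area = sum(net)                                            # per 1-deg bin
mu = sum(e * y for e, y in zip(etas, net)) / area          # centroid (deg)
sigma = math.sqrt(sum((e - mu) ** 2 * y
                      for e, y in zip(etas, net)) / area)  # width (deg)
# area recovers amp * sigma * sqrt(2*pi); mu and sigma recover 90 and 8
```

A sharper azimuthal peak (smaller sigma, same area) would indicate stronger preferred orientation, which is the quantity the ramp-rate comparison tracks.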
The SNL/AWE joint mechanics workshop, held in Dartington Hall, Totnes, Devon, UK, 26-29 April 2009, was a follow-up to an earlier international joints workshop held in Arlington, Virginia, in October 2006. The preceding workshop focused on identifying what length scales and interactions would be necessary to provide a scientific basis for analyzing and understanding joint mechanics from the atomistic scale on upward. In contrast, the workshop discussed in this report focused much more on the identification and development of methods at longer length scales that can have a nearer-term impact on engineering analysis, design, and prediction of the dynamics of jointed structures. Also, the 2009 meeting employed fewer technical presentations and more breakout sessions for developing focused strategies than was the case with the earlier workshop. Several 'challenges' were identified, and assignments were made to teams to develop approaches to address those challenges.
Tracking nuclear materials production and processing, particularly covert operations, is a key national security concern, given that nuclear materials processing can be a signature of nuclear weapons activities by US adversaries. Covert trafficking can also result in homeland security threats, most notably allowing terrorists to assemble devices such as dirty bombs. Existing methods depend on isotope analysis and do not necessarily detect chronic low-level exposure. In this project, indigenous organisms such as plants, small mammals, and bacteria are utilized as living sensors for the presence of chemicals used in nuclear materials processing. Such 'metabolic fingerprinting' (or 'metabonomics') employs nuclear magnetic resonance (NMR) spectroscopy to assess alterations in organismal metabolism provoked by the environmental presence of nuclear materials processing, for example the tributyl phosphate employed in the processing of spent reactor fuel rods to extract and purify uranium and plutonium for weaponization.
Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts from responses to climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years. We map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences for economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.
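Treating each ensemble member as one equally likely future and summarizing the resulting damage distribution is the essence of the "mean risk" statistic quoted above. The sketch below illustrates only that summarization step; the damage values are made up for illustration and are not the study's data.

```python
import statistics

# Illustrative sketch (values are invented, not the study's results):
# each entry is one climate-model ensemble member's simulated 2010-2050
# economic damage, in trillions of dollars.
damages_trillion = [0.4, 0.7, 0.9, 1.1, 1.3, 1.6]

mean_damage = statistics.mean(damages_trillion)    # expected (mean) risk
spread = statistics.pstdev(damages_trillion)       # uncertainty across models

print(mean_damage, spread)
```

The spread across ensemble members is what distinguishes a risk assessment from a single best-guess projection: two ensembles with the same mean but different spreads imply very different policy exposure.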
Digital image correlation (DIC) and the tremendous advances in optical imaging are beginning to revolutionize explosive and high-strain-rate measurements. This paper presents results obtained from metallic hemispheres expanded at detonation velocities. Important aspects of sample preparation and image lighting, which are key considerations in obtaining DIC images at frame rates of 1 million frames per second, will be presented. Quantitative measurements of the case strain rate, expansion velocity, and deformation will be presented. Furthermore, preliminary estimates of the measurement uncertainty will be discussed, with notes on how image noise and contrast affect the measurement of shape and displacement. The data are then compared with analytical representations of the experiment.
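At its core, DIC tracks a speckled reference subset through deformed images by maximizing a correlation measure. The following minimal 1D sketch, which is not the paper's implementation, shows the idea using zero-normalized cross-correlation (ZNCC) to recover an integer pixel displacement; the signal values are invented.

```python
import math

# Minimal 1D sketch of the correlation step behind DIC: find the integer
# shift of a reference subset that maximizes the zero-normalized
# cross-correlation (ZNCC) with the deformed signal. Data are invented.
def zncc(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_shift(reference, deformed, max_shift):
    scores = {s: zncc(reference, deformed[s:s + len(reference)])
              for s in range(max_shift + 1)}
    return max(scores, key=scores.get)

ref = [0, 1, 4, 9, 4, 1, 0]              # speckle-like intensity subset
img = [0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0]  # same pattern displaced by 3 px
print(best_shift(ref, img, 4))           # recovers the 3-pixel shift
```

Real DIC codes extend this to 2D subsets, subpixel interpolation, and full deformation gradients, which is why image noise and contrast (discussed above) directly limit the achievable accuracy.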
We present hierarchical streamline bundles, a new approach to simplifying and visualizing 2D flow fields. Our method first densely seeds a flow field and produces a large number of streamlines that capture important flow features such as critical points. Then, we group spatially neighboring and geometrically similar streamlines to construct a hierarchy from which we extract streamline bundles at different levels of detail. Streamline bundles highlight multiscale flow features and patterns through a clustered yet non-cluttered display. This selective visualization strategy effectively accentuates visual foci and is therefore able to convey the desired insight into the flow fields. The hierarchical streamline bundles we have introduced offer a new way to characterize and visualize flow structure and patterns in a multiscale fashion. Streamline bundles highlight critical points clearly and concisely, and exploring the hierarchy allows a complete visualization of important flow features. Thanks to selective streamline display and flexible level-of-detail (LOD) refinement, our multiresolution technique is scalable and is promising for viewing large and complex flow fields. In the future, we would like to seek a cost-effective way to generate streamlines without enforcing the dense seeding condition. We will also extend this approach to handle real-world 3D complex flow fields.
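The grouping step described above can be sketched with a simple agglomerative scheme: repeatedly merge clusters of streamlines whose closest members are geometrically similar. This is an illustrative stand-in, not the authors' algorithm; the similarity measure (mean pointwise distance between equal-length polylines), the threshold, and the example polylines are all assumptions.

```python
import math

# Hedged sketch of streamline grouping: single-linkage agglomerative
# clustering with mean pointwise distance between equal-length polylines.
# Not the paper's method; data and threshold are invented.
def mean_pointwise_distance(s1, s2):
    return sum(math.dist(p, q) for p, q in zip(s1, s2)) / len(s1)

def bundle(streamlines, threshold):
    clusters = [[s] for s in streamlines]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(mean_pointwise_distance(a, b)
                        for a in clusters[i] for b in clusters[j])
                if d < threshold:
                    clusters[i] += clusters.pop(j)  # merge j into i
                    merged = True
                    break
            if merged:
                break
    return clusters

# Three nearly parallel polylines plus one far away -> two bundles.
lines = [
    [(0.0, 0.0), (1.0, 0.0)],
    [(0.0, 0.1), (1.0, 0.1)],
    [(0.0, 0.2), (1.0, 0.2)],
    [(0.0, 5.0), (1.0, 5.0)],
]
print(len(bundle(lines, 0.5)))  # -> 2
```

Varying the threshold produces coarser or finer bundles, which is the same knob a level-of-detail hierarchy exposes to the viewer.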
Three salt compositions for potential use in trough-based solar collectors were tested to determine their mechanical properties as a function of temperature. The properties determined were unconfined compressive strength, Young's modulus, Poisson's ratio, and indirect tensile strength. Seventeen uniaxial compression and indirect tension tests were completed. It was found that unconfined compressive strength and Young's modulus decrease for all salt types as test temperature increases. Empirical relationships quantifying these behaviors were developed. Poisson's ratio tends to increase with increasing temperature, except for one salt type for which no obvious trend was observed. The variability in measured indirect tensile strength is large, but not atypical for this index test. The average tensile strength for all salt types tested is substantially higher than the upper range of tensile strengths for naturally occurring rock salts.
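An empirical strength-versus-temperature relationship of the kind developed in the study can be illustrated with an ordinary least-squares line fit. The temperatures and strengths below are made-up numbers chosen only to show a strength that decreases with temperature; they are not the report's data.

```python
# Illustrative sketch (invented numbers, not the report's data): fit a
# linear empirical relationship of unconfined compressive strength (UCS)
# versus temperature by ordinary least squares.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

temps_c = [25.0, 100.0, 200.0, 300.0]  # hypothetical test temperatures, deg C
ucs_mpa = [30.0, 26.0, 21.0, 16.0]     # hypothetical strengths, MPa

m, b = linear_fit(temps_c, ucs_mpa)
print(round(m, 4), round(b, 2))  # negative slope: strength falls with temperature
```

A real correlation might well be nonlinear over a wide temperature range; the linear form is just the simplest empirical relationship consistent with the decreasing trend reported above.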