Publications


Variance estimation for radiation analysis and multi-sensor fusion

Mitchell, Dean J.

Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating components of the variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies in measurements are also reflected in the uncertainties for the estimated parameters. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.
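
As a minimal illustration of the idea, not the report's actual algorithm: the sketch below fuses parameter estimates from two sensor types by inverse-variance weighting, where each sensor's variance is the sum of a statistical (counting) component and a computational (response-function) component, and inconsistency between sensors inflates the fused uncertainty. All numbers are hypothetical.

```python
# Minimal sketch (not from the report): inverse-variance fusion of two
# sensor-derived estimates of a single source parameter, where each
# sensor's variance combines a statistical (counting) component and a
# computational (model/response-function) component.
import numpy as np

def fuse(estimates, stat_var, comp_var):
    """Inverse-variance weighted fusion with total per-sensor variance."""
    estimates = np.asarray(estimates, dtype=float)
    var = np.asarray(stat_var, dtype=float) + np.asarray(comp_var, dtype=float)
    w = 1.0 / var
    x_hat = np.sum(w * estimates) / np.sum(w)
    var_hat = 1.0 / np.sum(w)
    # Reduced chi-square flags inconsistent sensors; if the measurements
    # disagree beyond their stated variances, scale up the fused
    # uncertainty rather than report an overconfident result.
    dof = len(estimates) - 1
    chi2_red = np.sum(w * (estimates - x_hat) ** 2) / dof if dof > 0 else 1.0
    if chi2_red > 1.0:
        var_hat *= chi2_red
    return x_hat, np.sqrt(var_hat)

# Example: a gross neutron count and a gamma-spectrum unfold (hypothetical numbers).
x, sigma = fuse(estimates=[4.8, 5.6], stat_var=[0.09, 0.04], comp_var=[0.04, 0.16])
print(f"fused estimate = {x:.2f} +/- {sigma:.2f}")
```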

Algorithm and exploratory study of the Hall MHD Rayleigh-Taylor instability

Gardiner, Thomas A.

This report is concerned with the influence of the Hall term on the nonlinear evolution of the Rayleigh-Taylor (RT) instability. It begins with a review of the magnetohydrodynamic (MHD) equations, including the Hall term, and the wave modes which are present in the system on time scales short enough that the plasma can be approximated as being stationary. In this limit one obtains what are known as the electron MHD (EMHD) equations, which support two characteristic wave modes known as the whistler and Hall drift modes. Each of these modes is considered in some detail in order to draw attention to their key features. This analysis also serves to provide a background for testing the numerical algorithms used in this work. The numerical methods are briefly described and the EMHD solver is then tested for the evolution of whistler and Hall drift modes. These methods are then applied to study the nonlinear evolution of the MHD RT instability with and without the Hall term for two different configurations. The influence of the Hall term on the mixing and bubble growth rate is analyzed.

Solid oxide electrochemical reactor science

Stechel, Ellen B.

Solid-oxide electrochemical cells are an exciting new technology. Development of solid-oxide cells (SOCs) has advanced considerably in recent years and continues to progress rapidly. This thesis studies several aspects of SOCs and contributes useful information to their continued development. This LDRD involved a collaboration between Sandia and the Colorado School of Mines (CSM) in solid-oxide electrochemical reactors, targeted at solid-oxide electrolyzer cells (SOECs), which are the reverse of solid-oxide fuel cells (SOFCs). SOECs complement Sandia's efforts in thermochemical production of alternative fuels. An SOEC technology would co-electrolyze carbon dioxide (CO₂) with steam at temperatures around 800 °C to form synthesis gas (H₂ and CO), which forms the building blocks for petrochemical substitutes that can be used to power vehicles or in distributed energy platforms. The effort described here concentrates on research concerning catalytic chemistry, charge-transfer chemistry, and optimal cell architecture. The technical scope included computational modeling, materials development, and experimental evaluation. The project engaged the Colorado Fuel Cell Center at CSM through the support of a graduate student (Connor Moyer) at CSM and his advisors (Profs. Robert Kee and Neal Sullivan) in collaboration with Sandia.

Simulations of neutron multiplicity measurements with MCNP-PoliMi

Miller, Eric C.; Mattingly, John K.

The heightened focus on nuclear safeguards and accountability has increased the need to develop and verify simulation tools for modeling these applications. The ability to accurately simulate safeguards techniques, such as neutron multiplicity counting, aids in the design and development of future systems. This work focuses on validating the ability of the Monte Carlo code MCNPX-PoliMi to reproduce measured neutron multiplicity results for a highly multiplicative sample. The benchmark experiment for this validation consists of a 4.5-kg sphere of plutonium metal that was moderated by various thicknesses of polyethylene. The detector system was the nPod, which contains a bank of 15 ³He detectors. Simulations of the experiments were compared to the actual measurements, and several sources of potential bias in the simulation were evaluated. The analysis included the effects of detector dead time, source-detector distance, density, and adjustments made to the value of ν̄ in the data libraries. Based on this analysis it was observed that a 1.14% decrease in the evaluated value of ν̄ for 239Pu in the ENDF/B-VII library substantially improved the accuracy of the simulation.

Enabling R&D for accurate simulation of non-ideal explosives

Thompson, A.P.; Aidun, John B.; Schmitt, Robert G.

We implemented two numerical simulation capabilities essential to reliably predicting the effect of non-ideal explosives (NXs). To begin to be able to treat the multiple, competing, multi-step reaction paths and slower kinetics of NXs, Sandia's CTH shock physics code was extended to include the TIGER thermochemical equilibrium solver as an in-line routine. To facilitate efficient exploration of reaction pathways that need to be identified for the CTH simulations, we implemented in Sandia's LAMMPS molecular dynamics code the MSST method, which is a reactive molecular dynamics technique for simulating steady shock wave response. Our preliminary demonstrations of these two capabilities serve several purposes: (i) they demonstrate proof-of-principle for our approach; (ii) they provide illustration of the applicability of the new functionality; and (iii) they begin to characterize the use of the new functionality and identify where improvements will be needed for the ultimate capability to meet national security needs. Next steps are discussed.

Design considerations for concentrating solar power tower systems employing molten salt

Moore, Robert C.; Vernon, Milton E.; Ho, Clifford K.; Siegel, Nathan P.; Kolb, Gregory J.

The Solar Two Project was a United States Department of Energy sponsored project, operated from 1996 to 1999, to demonstrate the coupling of a solar power tower with molten nitrate salt as a heat-transfer and thermal-storage medium. Overall, the Solar Two Project was very successful; however, many operational challenges were encountered. In this work, the major problems encountered in operation of the Solar Two facility were evaluated and alternative technologies identified for use in a future solar power tower operating with a steam Rankine power cycle. Many of the major problems encountered can be addressed with new technologies that were not available a decade ago. These new technologies include better thermal insulation, analytical equipment, pumps and valves specifically designed for molten nitrate salts, gaskets resistant to thermal cycling, and advanced equipment designs.

Initiation of the TLR4 signal transduction network : deeper understanding for better therapeutics

Kent, Michael S.; Branda, Steven; Hayden, Carl C.; Sasaki, Darryl Y.; Sale, Kenneth L.

The innate immune system represents our first line of defense against microbial pathogens, and in many cases is activated by recognition of pathogen cellular components (dsRNA, flagella, LPS, etc.) by cell surface membrane proteins known as toll-like receptors (TLRs). As the initial trigger for innate immune response activation, TLRs also represent a means by which we can effectively control or modulate inflammatory responses. This proposal focused on TLR4, which is the cell-surface receptor primarily responsible for initiating the innate immune response to lipopolysaccharide (LPS), a major component of the outer membrane envelope of gram-negative bacteria. The goal was to better understand TLR4 activation and associated membrane-proximal events, in order to enhance the design of small-molecule therapeutics to modulate immune activation. Our approach was to reconstitute the receptor in biomimetic systems in vitro to allow study of the structure and dynamics with biophysical methods. Structural studies were initiated in the first year but were halted after the crystal structure of the dimerized receptor was published early in the second year of the program. Methods were developed to determine the association constant for oligomerization of the soluble receptor. LPS-induced oligomerization was observed to be a strong function of buffer conditions. In 20 mM Tris pH 8.0 with 200 mM NaCl, the onset of receptor oligomerization occurred at 0.2 µM TLR4/MD2 with E. coli LPS Ra mutant in excess. However, in the presence of 0.5 µM CD14 and 0.5 µM LBP, the onset of receptor oligomerization occurred below 10 nM TLR4/MD2. Several methods were pursued to study LPS-induced oligomerization of the membrane-bound receptor, including cryoEM, FRET, colocalization and codiffusion followed by TIRF, and fluorescence correlation spectroscopy. However, these approaches met with only limited success.

Modeling attacker-defender interactions in information networks

Collins, Michael J.

The simplest conceptual model of cybersecurity implicitly views attackers and defenders as acting in isolation from one another: an attacker seeks to penetrate or disrupt a system that has been protected to a given level, while a defender attempts to thwart particular attacks. Such a model also views all non-malicious parties as having the same goal of preventing all attacks. But in fact, attackers and defenders are interacting parts of the same system, and different defenders have their own individual interests: defenders may be willing to accept some risk of successful attack if the cost of defense is too high. We have used game theory to develop models of how non-cooperative but non-malicious players in a network interact when there is a substantial cost associated with effective defensive measures. Although game theory has been applied in this area before, we have introduced some novel aspects of player behavior in our work, including: (1) A model of how players attempt to avoid the costs of defense and force others to assume these costs; (2) A model of how players interact when the cost of defending one node can be shared by other nodes; and (3) A model of the incentives for a defender to choose less expensive, but less effective, defensive actions.
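
As a toy illustration of the free-riding incentive described above (not the authors' model; the payoffs are hypothetical), consider a two-player game in which each player either pays for a defense or hopes the other does:

```python
# Toy sketch (not the report's model): a two-player "who pays for defense"
# game illustrating the free-riding incentive.  Each player either invests
# in a defense (cost c) or not; if at least one player defends, both are
# protected.  If neither defends, both suffer an expected attack loss L.
import itertools

c, L = 3.0, 10.0  # hypothetical defense cost and expected attack loss

def payoff(me, other):
    if me:                            # I defended: pay the cost, no loss
        return -c
    return 0.0 if other else -L       # free-ride if the other defends

# Enumerate pure-strategy Nash equilibria by checking unilateral deviations.
for a, b in itertools.product([True, False], repeat=2):
    stable_a = payoff(a, b) >= payoff(not a, b)
    stable_b = payoff(b, a) >= payoff(not b, a)
    if stable_a and stable_b:
        print(f"equilibrium: A defends={a}, B defends={b}")
# With c < L the equilibria are the two asymmetric outcomes in which exactly
# one player defends and the other free-rides -- the cost-shifting behavior
# described above.
```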

Improved high temperature solar absorbers for use in Concentrating Solar Power central receiver applications

Staiger, Chad L.; Lambert, Timothy N.; Hall, Aaron; Bencomo, Marlene; Stechel, Ellen B.

Concentrating solar power (CSP) systems use solar absorbers to convert the heat from sunlight to electric power. Increased operating temperatures are necessary to lower the cost of solar-generated electricity by improving efficiencies and reducing thermal energy storage costs. Durable new materials are needed to cope with operating temperatures >600 °C. The current coating technology (Pyromark High Temperature paint) has a solar absorptance in excess of 0.95 but a thermal emittance greater than 0.8, which results in large thermal losses at high temperatures. In addition, because solar receivers operate in air, these coatings have long-term stability issues that add to the operating costs of CSP facilities. Ideal absorbers must have high solar absorptance (>0.95) and low thermal emittance (<0.05) in the IR region, be stable in air, and be low-cost and readily manufacturable. We propose to utilize solution-based synthesis techniques to prepare intrinsic absorbers for use in central receiver applications.

Nanopatterned ferroelectrics for ultrahigh density rad-hard nonvolatile memories

Brennecka, Geoff; Stevens, Jeffrey; Gin, Aaron G.; Scrymgeour, David

Radiation hard nonvolatile random access memory (NVRAM) is a crucial component for DOE and DOD surveillance and defense applications. NVRAMs based upon ferroelectric materials (also known as FERAMs) are proven to work in radiation-rich environments and inherently require less power than many other NVRAM technologies. However, fabrication and integration challenges have led to state-of-the-art FERAMs still being fabricated using a 130 nm process while competing phase-change memory (PRAM) has been demonstrated with a 20 nm process. Use of block copolymer lithography is a promising approach to patterning at the sub-32 nm scale, but is currently limited to self-assembly directly on Si or SiO₂ layers. Successful integration of ferroelectrics with discrete and addressable features of ≈15-20 nm would represent a 100-fold improvement in areal memory density and would enable more highly integrated electronic devices required for systems advances. Towards this end, we have developed a technique that allows us to carry out block copolymer self-assembly directly on a wide variety of different materials and have investigated the fabrication, integration, and characterization of electroceramic materials - primarily focused on solution-derived ferroelectrics - with discrete features of ≈20 nm and below. Significant challenges remain before such techniques will be capable of fabricating fully integrated NVRAM devices, but the tools developed for this effort are already finding broader use. This report introduces the nanopatterned NVRAM device concept as a mechanism for motivating the subsequent studies, but the bulk of the document will focus on the platform and technology development.

Thermokinetic/mass-transfer analysis of carbon capture for reuse/sequestration

Brady, Patrick V.; Luketa, Anay; Stechel, Ellen B.

Effective capture of atmospheric carbon is a key bottleneck preventing non-bio-based, carbon-neutral production of synthetic liquid hydrocarbon fuels using CO₂ as the carbon feedstock. Here we outline the boundary conditions of atmospheric carbon capture for recycle to liquid hydrocarbon fuels production and re-use options, and we identify the technical advances that must be made for such a process to become technically and commercially viable at scale. While conversion of atmospheric CO₂ into a pure feedstock for hydrocarbon fuels synthesis is presently feasible at the bench-scale - albeit at high cost energetically and economically - the methods and materials needed to concentrate large amounts of CO₂ at low cost and high efficiency remain technically immature. Industrial-scale capture must entail: (1) processing of large volumes of air through an effective CO₂ capture medium and (2) efficient separation of CO₂ from the processed air flow into a pure stream of CO₂.

Development of efficient, integrated cellulosic biorefineries : LDRD final report

Shaddix, Christopher R.; Hecht, Ethan S.; Teh, Kwee-Yan; Buffleben, George M.; Dibble, Dean C.

Cellulosic ethanol, generated from lignocellulosic biomass sources such as grasses and trees, is a promising alternative to conventional starch- and sugar-based ethanol production in terms of potential production quantities, CO₂ impact, and economic competitiveness. In addition, cellulosic ethanol can be generated (at least in principle) without competing with food production. However, approximately 1/3 of the lignocellulosic biomass material (including all of the lignin) cannot be converted to ethanol through biochemical means and must be extracted at some point in the biochemical process. In this project we gathered basic information on the prospects for utilizing this lignin residue material in thermochemical conversion processes to improve the overall energy efficiency or liquid fuel production capacity of cellulosic biorefineries. Two existing pretreatment approaches, soaking in aqueous ammonia (SAA) and the Arkenol (strong sulfuric acid) process, were implemented at Sandia and used to generate suitable quantities of residue material from corn stover and eucalyptus feedstocks for subsequent thermochemical research. A third, novel technique, using ionic liquids (IL), was investigated by Sandia researchers at the Joint BioEnergy Institute (JBEI), but was not successful in isolating sufficient lignin residue. Additional residue material for thermochemical research was supplied from the dilute-acid simultaneous saccharification/fermentation (SSF) pilot-scale process at the National Renewable Energy Laboratory (NREL). The high-temperature volatiles yields of the different residues were measured, as were the char combustion reactivities. The residue chars showed slightly lower reactivity than raw biomass char, except for the SSF residue, which had substantially lower reactivity. Exergy analysis was applied to the NREL standard process design model for thermochemical ethanol production and to a prototypical dedicated biochemical process, with process data supplied by a recent report from the National Research Council (NRC). The thermochemical system analysis revealed that most of the system inefficiency is associated with the gasification process and subsequent tar reforming step. For the biochemical process, the steam generation from residue combustion, providing the requisite heating for the conventional pretreatment and alcohol distillation processes, was shown to dominate the exergy loss. An overall energy balance with different potential distillation energy requirements shows that as much as 30% of the biomass energy content may be available in the future as a feedstock for thermochemical production of liquid fuels.

Challenge problem and milestones for : Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC)

Arguello, Jose G.; Mcneish, Jerry; Schultz, Peter A.; Wang, Yifeng

This report describes the specification of a challenge problem and associated challenge milestones for the Waste Integrated Performance and Safety Codes (IPSC) supporting the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The NEAMS challenge problems are designed to demonstrate proof of concept and progress towards IPSC goals. The goal of the Waste IPSC is to develop an integrated suite of modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. To demonstrate proof of concept and progress towards these goals and requirements, a Waste IPSC challenge problem is specified that includes coupled thermal-hydrologic-chemical-mechanical (THCM) processes that describe (1) the degradation of a borosilicate glass waste form and the corresponding mobilization of radionuclides (i.e., the processes that produce the radionuclide source term), (2) the associated near-field physical and chemical environment for waste emplacement within a salt formation, and (3) radionuclide transport in the near field (i.e., through the engineered components - waste form, waste package, and backfill - and the immediately adjacent salt). The initial details of a set of challenge milestones that collectively comprise the full challenge problem are also specified.

Influence of point defects on grain boundary motion

Foiles, Stephen M.

This work addresses the influence of point defects, in particular vacancies, on the motion of grain boundaries. If there is a non-equilibrium concentration of point defects in the vicinity of an interface, such as due to displacement cascades in a radiation environment, motion of the interface to sweep up the defects will lower the energy and provide a driving force for interface motion. Molecular dynamics simulations are employed to examine the process for the case of excess vacancy concentrations in the vicinity of two grain boundaries. It is observed that the efficacy of the presence of the point defects in inducing boundary motion depends on the balance of the mobility of the defects with the mobility of the interfaces. In addition, the extent to which grain boundaries are ideal sinks for vacancies is evaluated by considering the energy of boundaries before and after vacancy absorption.

Scaling of X pinches from 1 MA to 6 MA

Sinars, Daniel; Mcbride, Ryan; Wenger, D.F.; Cuneo, Michael E.; Yu, Edmund; Harding, Eric H.; Hansen, Stephanie B.; Ampleford, David J.; Jennings, Christopher A.

This final report for Project 117863 summarizes progress made toward understanding how X-pinch load designs scale to high currents. The X-pinch load geometry was conceived in 1982 as a method to study the formation and properties of bright x-ray spots in z-pinch plasmas. X-pinch plasmas driven by 0.2 MA currents were found to have source sizes of 1 micron, temperatures >1 keV, lifetimes of 10-100 ps, and densities >0.1 times solid density. These conditions are believed to result from the direct magnetic compression of matter. Physical models that capture the behavior of 0.2 MA X pinches predict more extreme parameters at currents >1 MA. This project developed load designs for up to 6 MA on the SATURN facility and attempted to measure the resulting plasma parameters. Source sizes of 5-8 microns were observed in some cases along with evidence for high temperatures (several keV) and short time durations (<500 ps).

Scientific data analysis on data-parallel platforms

Roe, Diana C.; Choe, Yung R.; Ulmer, Craig

As scientific computing users migrate to petaflop platforms that promise to generate multi-terabyte datasets, there is a growing need in the community to be able to embed sophisticated analysis algorithms in the computing platforms' storage systems. Data Warehouse Appliances (DWAs) are attractive for this work, due to their ability to store and process massive datasets efficiently. While DWAs have been utilized effectively in data-mining and informatics applications, they remain largely unproven in scientific workloads. In this paper we present our experiences in adapting two mesh analysis algorithms to function on five different DWA architectures: two Netezza database appliances, an XtremeData dbX database, a LexisNexis DAS, and multiple Hadoop MapReduce clusters. The main contribution of this work is insight into the differences between these DWAs from a user's perspective. In addition, we present performance measurements for ten DWA systems to help understand the impact of different architectural trade-offs in these systems.

Using reconfigurable functional units in conventional microprocessors

Rodrigues, Arun

Scientific applications use highly specialized data structures that require complex, latency-sensitive graphs of integer instructions for memory address calculations. Working with the University of Wisconsin, we have demonstrated significant differences between Sandia's applications and the industry-standard SPEC-FP (Standard Performance Evaluation Corporation floating-point) suite. Specifically, integer dataflow performance is critical to overall system performance. To improve this performance, we have developed a configurable functional unit design that is capable of accelerating integer dataflow.

Uncertainty quantification of cinematic imaging for development of predictive simulations of turbulent combustion

Frank, Jonathan H.; Lawson, Matthew; Sargsyan, Khachik; Debusschere, Bert; Najm, Habib N.

Recent advances in high frame rate complementary metal-oxide-semiconductor (CMOS) cameras coupled with high repetition rate lasers have enabled laser-based imaging measurements of the temporal evolution of turbulent reacting flows. This measurement capability provides new opportunities for understanding the dynamics of turbulence-chemistry interactions, which is necessary for developing predictive simulations of turbulent combustion. However, quantitative imaging measurements using high frame rate CMOS cameras require careful characterization of their noise, non-linear response, and variations in this response from pixel to pixel. We develop a noise model and calibration tools to mitigate these problems and to enable quantitative use of CMOS cameras. We have demonstrated proof of principle for image de-noising using both wavelet methods and Bayesian inference. The results offer new approaches for quantitative interpretation of imaging measurements from noisy data acquired with non-linear detectors. These approaches are potentially useful in many areas of scientific research that rely on quantitative imaging measurements.
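
The report's noise model is not reproduced here, but a standard ingredient of this kind of characterization is a per-pixel mean-variance ("photon transfer") calibration, sketched below with synthetic data; the gain and read-noise values are hypothetical.

```python
# Minimal sketch of a per-pixel mean-variance ("photon transfer")
# calibration, in the spirit of the noise characterization described above;
# the actual model in the report may differ.  For each pixel, the variance
# of repeated frames at several illumination levels is regressed against
# the mean signal, var = g * mean + sigma_read^2, giving gain and read noise.
import numpy as np

rng = np.random.default_rng(0)
n_levels, n_frames, h, w = 6, 64, 8, 8
true_gain, true_read = 2.0, 3.0

# Synthetic frame stacks at increasing illumination (hypothetical data):
# shot noise scaled by the gain, plus Gaussian read noise.
means = np.linspace(100, 2000, n_levels)
stacks = [true_gain * rng.poisson(m / true_gain, (n_frames, h, w))
          + rng.normal(0, true_read, (n_frames, h, w)) for m in means]

pix_mean = np.stack([s.mean(axis=0) for s in stacks])        # (levels, h, w)
pix_var = np.stack([s.var(axis=0, ddof=1) for s in stacks])

# Per-pixel linear fit of variance vs. mean.
gain = np.empty((h, w)); read2 = np.empty((h, w))
for i in range(h):
    for j in range(w):
        g, b = np.polyfit(pix_mean[:, i, j], pix_var[:, i, j], 1)
        gain[i, j], read2[i, j] = g, b

print(f"median gain ~ {np.median(gain):.2f} (true {true_gain})")
print(f"median read noise ~ {np.median(np.sqrt(read2.clip(0))):.2f} (true {true_read})")
```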

90Sr Liquid Scintillation Urine Analysis Utilizing Different Approaches for Tracer Recovery

Piraner, Olga; Preston, Rose T.; Shanks, Sonoya T.; Jones, Robert

90Sr is one of the isotopes most commonly produced by nuclear fission. This medium-lived isotope presents serious challenges to radiation workers, the environment, and, following a nuclear event, the general public. Methods of identifying this nuclide have been in existence for a number of years (e.g., Horwitz, E.P. [1]; Maxwell, S.L. [2]; EPA 905.0 [3]), but they are time-consuming, requiring a month or more for full analysis. This time frame is unacceptable in the present security environment. It is therefore important to have a dependable and rapid method for the determination of Sr. The purpose of this study is to reduce analysis time to less than half a day by utilizing a single method of radiation measurement while continuing to yield precise results. This paper presents findings on three methods that can meet these criteria: (1) stable Sr carrier, (2) 85Sr by gamma spectroscopy, and (3) 85Sr by LSC. Two methods of analyzing and calculating the 85Sr tracer recovery were investigated (gamma spectroscopy and a low-energy window, Sr85LEBAB, by LSC), as well as the use of two different types of Sr tracer (85Sr and stable Sr carrier). Three separate stock blank urine samples were spiked with various activity levels of 239Pu, 137Cs, and 90Sr/90Y to determine the effectiveness of the Eichrom Sr-spec™ resin 2 mL extraction columns. The objectives were to compare the recoveries of 85Sr versus a stable strontium carrier, to compare the rate at which samples can be processed by evaluating evaporation and neutralization, and to remove the need for a second instrument (the gamma spectrometer) by using the LSC spectrometer to obtain the 85Sr recovery. It was found that reasonable results (bias of ±25%) were achieved even when using a calibration curve comprising a different cocktail and a non-optimum discriminator setting. The results from spiked samples containing 85Sr demonstrated that a higher recovery is obtained when using gamma spectroscopy (89-95%) than when using the LEB window from LSC (120-470%). The high recovery for 85Sr by LSC analysis may be due to interference/cross-talk from the alpha region, since alpha counts were observed in all sample sets. After further investigation it was determined that the alpha counts were due to 239Pu breakthrough on the Sr-spec™ column. This requires further development to purify the Sr before an accurate tracer recovery determination can be made. Sample preparation times varied and ranged from 4-6 hours depending on the specific sample preparation process. The results from the spiked samples containing stable strontium nitrate Sr(NO₃)₂ carrier demonstrate that gravimetric analysis yields the most consistent high recoveries (97-101%) when evaporation is carefully performed. Since this method did not have a variation on the tracer recovery method, the samples were counted in (1) LEB/alpha/beta mode optimized for Sr-90, (2) DPM mode for Sr-90, and (3) general LEB/alpha/beta mode. The results (relative to the known values) ranged from 79-104%, 107-177%, and 85-89% for modes 1, 2, and 3, respectively. Counting the prepared samples in a generic low-energy beta/alpha/beta protocol yielded more accurate and consistent results and also yielded the shortest sample preparation turnaround time of 3.5 hours.

Accelerated Cartesian expansion (ACE) based framework for the rapid evaluation of diffusion, lossy wave, and Klein-Gordon potentials

Journal of Computational Physics

Baczewski, Andrew D.; Vikram, Melapudi; Shanker, Balasubramaniam; Kempel, Leo

Diffusion, lossy wave, and Klein–Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green's functions is characterized by an infinite tail. This implies that the cost complexity of the spatio-temporal convolutions, associated with evaluating the potentials, scales as O(Ns² Nt²), where Ns and Nt are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics, and the fast Fourier transform (FFT) is used for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(Ns Nt log² Nt). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
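
A minimal sketch of the FFT ingredient alone (my illustration; the paper's blocked and recursive treatment of the ACE harmonics is what yields the O(Ns Nt log² Nt) total):

```python
# Sketch of the FFT ingredient only: a causal temporal convolution with a
# slowly decaying ("infinite tail") kernel, as arises for the diffusion
# Green's function, evaluated at all Nt steps at once.  Direct evaluation
# costs O(Nt^2); zero-padded FFT convolution costs O(Nt log Nt).
import numpy as np

Nt = 4096
dt = 1e-3
t = np.arange(1, Nt + 1) * dt
kernel = t ** -0.5                       # long-tail kernel ~ t^(-1/2), diffusion-like
source = np.sin(2 * np.pi * 5 * t) * np.exp(-t)

# Direct O(Nt^2) causal convolution for reference.
direct = np.array([np.dot(kernel[:n + 1][::-1], source[:n + 1])
                   for n in range(Nt)]) * dt

# O(Nt log Nt): zero-pad to avoid circular wrap-around, multiply spectra.
pad = 2 * Nt
fast = np.fft.irfft(np.fft.rfft(kernel, pad) * np.fft.rfft(source, pad), pad)[:Nt] * dt

print("max abs difference:", np.abs(direct - fast).max())
```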

Reducing variance in batch partitioning measurements

Mariner, Paul

The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
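
A minimal sketch of the kind of Monte Carlo argument summarized above (my illustration, with hypothetical numbers): fix an absolute analytical error on the aqueous concentrations and compare the relative precision of Kd across fractions sorbed.

```python
# Monte Carlo sketch supporting the design rule above (an illustration, not
# the report's calculation): simulate batch Kd measurements with a fixed
# absolute analytical error on the aqueous concentrations and see which
# fraction sorbed gives the most precise Kd.  Kd = (C0 - C) * V / (C * m).
import numpy as np

rng = np.random.default_rng(1)
C0, V_over_m = 1.0, 10.0     # true initial concentration; solution:sorbent ratio
sigma = 0.01                 # absolute measurement error on concentrations
n = 20000

for f in [0.05, 0.2, 0.35, 0.5, 0.65, 0.8, 0.95]:   # true fraction sorbed
    C = C0 * (1.0 - f)
    C0_meas = C0 + rng.normal(0, sigma, n)
    C_meas = C + rng.normal(0, sigma, n)
    kd = (C0_meas - C_meas) * V_over_m / C_meas
    rel_sd = kd.std() / kd.mean()
    print(f"fraction sorbed {f:.2f}: relative std of Kd ~ {rel_sd:.3f}")
# The relative error is smallest when roughly half the sorbate partitions
# to the solid and grows sharply toward either extreme.
```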

A resilience assessment framework for infrastructure and economic systems: Quantitative and qualitative resilience analysis of petrochemical supply chains to a hurricane

AIChE Annual Meeting, Conference Proceedings

Vugrin, Eric D.; Warren, Drake E.; Ehlen, Mark

In recent years, the nation has recognized that critical infrastructure protection should consider not only the prevention of disruptive events, but also the processes that infrastructure systems undergo to maintain functionality following disruptions. This more comprehensive approach has been termed critical infrastructure resilience. Given the occurrence of a particular disruptive event, the resilience of a system to that event is the system's ability to efficiently reduce both the magnitude and duration of the deviation from targeted system performance levels. Under the direction of the U.S. Department of Homeland Security's Science and Technology Directorate, Sandia National Laboratories has developed a comprehensive resilience assessment framework for evaluating the resilience of infrastructure and economic systems. The framework includes a quantitative methodology that measures resilience costs that result from a disruption to infrastructure function. The framework also includes a qualitative analysis methodology that assesses system characteristics affecting resilience to provide insight and direction for potential improvements. This paper describes the resilience assessment framework and demonstrates its utility through application to two hypothetical scenarios involving the disruption of a petrochemical supply chain by hurricanes.

Impacts to the ethylene supply chain from a hurricane disruption

AIChE Annual Meeting, Conference Proceedings

Downes, Paula S.; Welk, Margaret; Sun, Amy C.; Heinen, Russell

Analysis of chemical supply chains is an inherently complex task, given the dependence of these supply chains on multiple infrastructure systems (e.g. transportation and energy). This effort requires data and information at various levels of resolution, ranging from network-level distribution systems to individual chemical reactions. The U.S. Department of Homeland Security (DHS) has tasked the National Infrastructure Simulation and Analysis Center (NISAC) with developing a chemical infrastructure analytical capability to assess interdependencies and complexities of the nation's critical infrastructure, including the chemical sector. To address this need, the Sandia National Laboratories (Sandia) component of NISAC has integrated its existing simulation and infrastructure analysis capabilities with various chemical industry datasets to create a capability to analyze and estimate the supply chain and economic impacts resulting from large-scale disruptions to the chemical sector. This development effort is ongoing and is currently being funded by the DHS's Science and Technology Directorate. This paper describes the methodology being used to create the capability and the types of data necessary to exercise the capability, and it presents an example analysis focusing on the ethylene portion of the chemical supply chain.

Modeling the national chlorinated hydrocarbon supply chain and effects of disruption

AIChE Annual Meeting, Conference Proceedings

Welk, Margaret E.; Sun, Amy C.; Downes, Paula S.

Chlorinated hydrocarbons represent the precursors for products ranging from polyvinyl chloride (PVC) and refrigerants to pharmaceuticals. Natural or manmade disruptions that affect the availability of these products nationally have the potential to affect a wide range of markets, from healthcare to construction. Analysis of chemical supply chains is an inherently complex task, given the dependence of these supply chains on multiple infrastructure systems (e.g. transportation and energy). This effort requires data and information at various levels of resolution, ranging from network-level distribution systems to individual chemical reactions. The U.S. Department of Homeland Security (DHS) has tasked the National Infrastructure Simulation and Analysis Center (NISAC) with developing a chemical infrastructure analytical capability to assess interdependencies and complexities of the nation's critical infrastructure, including the chemical sector. To address this need, the Sandia National Laboratories (Sandia) component of NISAC has integrated its existing simulation and infrastructure analysis capabilities with various chemical industry datasets to create a capability to analyze and estimate the supply chain economic impacts resulting from large-scale disruptions to the chemical sector. This development effort is ongoing and is currently being funded by the DHS's Science and Technology Directorate. This paper describes the methodology being used to create the capability and the types of data necessary to exercise the capability, and it presents an example analysis focusing on the chlorinated hydrocarbon portion of the chemical supply chain.

Process characterization vehicles for 3D integration

Proceedings - Electronic Components and Technology Conference

Campbell, David V.

Assemblies produced by 3D integration, whether fabricated at die or wafer level, involve a large number of post-fab processing steps. Performing the prove-in of these operations on high-value product has many limitations. This work uses simple surrogate process characterization vehicles to collect specific process data, working around limitations of cost, timeliness of piece parts, the ability to consider multiple processing options, and insufficient volumes for adequately exercising flows. The test structures easily adapt to specific product in terms of die dimensions, aspect ratios, and pitch and number of interconnects. This results in good fidelity in exercising product-specific processing. The Cyclops vehicle discussed here implements a mirrored layout suitable for stacking to itself wafer-to-wafer, die-to-wafer, or die-to-die. A standardized 2x10 pad test interface allows characterization of any of the integration methods with a single simple setup. This design permits comparative study of the various methods, all using the same basis.

Yield modeling of 3D integrated wafer scale assemblies

Proceedings - Electronic Components and Technology Conference

Campbell, David V.

3D integration approaches exist for wafer-to-wafer, die-to-wafer, and die-to-die assembly, each with distinct merits. Creation of "seamless" wafer-scale focal plane arrays on the order of 6-8" in diameter drives very demanding yield requirements and understanding. This work established a Monte Carlo model of our exploratory architecture in order to assess the trades of the various assembly methods. The model results suggested an optimum die size, number of die stacks per assembly, and number of layers per stack, and quantified the value of sorting for optimizing the assembly process.
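
A toy version of such a model (parameters hypothetical, not the report's) illustrates why sorting matters: blind wafer-to-wafer stacking compounds die yield across every site and layer, while placing sorted known-good die shifts the exponent onto the bond yield.

```python
# Toy Monte Carlo in the spirit of the study (all parameters hypothetical):
# compare the yield of a multi-die, multi-layer assembly when stacking is
# blind (wafer-to-wafer: every layer must be good at every site) versus
# when known-good die are sorted and placed (die-to-wafer).
import numpy as np

rng = np.random.default_rng(2)
p_die = 0.99          # probability a single die is good
sites, layers = 20, 4 # die sites per assembly, layers per stack
trials = 20000

# Wafer-to-wafer: no sorting; a stack works only if all its layers are good.
good = rng.random((trials, sites, layers)) < p_die
w2w_yield = good.all(axis=2).all(axis=1).mean()

# Die-to-wafer with sort: only tested-good die are stacked, so assembly
# yield is limited by bonding instead; assume a per-bond success q_bond.
q_bond = 0.999
bonds_ok = rng.random((trials, sites, layers - 1)) < q_bond
d2w_yield = bonds_ok.all(axis=2).all(axis=1).mean()

print(f"wafer-to-wafer assembly yield ~ {w2w_yield:.3f}")
print(f"die-to-wafer (sorted) assembly yield ~ {d2w_yield:.3f}")
# Analytically: p_die**(sites*layers) vs q_bond**(sites*(layers-1));
# sorting converts an exponent in die yield into one in bond yield.
```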

Contribution of optical phonons to thermal boundary conductance

Applied Physics Letters

Beechem, Thomas; Duda, John C.; Hopkins, Patrick E.; Norris, Pamela M.

Thermal boundary conductance (TBC) is a performance determinant for many microsystems due to the numerous interfaces contained within their structure. To assess this transport, theoretical approaches often account for only the acoustic phonons as optical modes are assumed to contribute negligibly due to their low group velocities. To examine this approach, the diffuse mismatch model is reformulated to account for more realistic dispersions containing optical modes. Using this reformulation, it is found that optical phonons contribute to TBC by as much as 80% for a variety of material combinations in the limit of both inelastic and elastic scattering. © 2010 American Institute of Physics.

Evolution of Sandia's Risk Assessment Methodology for Water and Wastewater Utilities (RAM-W™)

World Environmental and Water Resources Congress 2010: Challenges of Change - Proceedings of the World Environmental and Water Resources Congress 2010

Jaeger, Calvin D.; Hightower, Marion M.; Torres, Teresa M.

The initial version of RAM-W was issued in November 2001. The Public Health Security and Bioterrorism Preparedness and Response Act was issued in 2002, and in October 2002, version 2 of RAM-W was distributed to the water sector. In August 2007, RAM-W was revised to be compliant with specific RAMCAP® (Risk Analysis and Management for Critical Asset Protection) requirements. In addition, this version of RAM-W incorporated a number of other changes and improvements to the RAM process. All of these RAM-W versions were manual, paper-based methods that allowed an analyst to estimate security risk for their specific utility. In September 2008, an automated RAM prototype tool was developed which provided the basic RAM framework for critical infrastructures. In 2009, water sector stakeholders identified a need to automate RAM-W, and this development effort was started in January 2009. This presentation will discuss the evolution of the RAM-W approach, its capabilities, and the new automated RAM-W tool (ARAM-W, which will be available in mid-2010). © 2010 ASCE.

Representation of analysis results involving aleatory and epistemic uncertainty

International Journal of General Systems

Sallaberry, Cedric J.

Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behaviour of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary CDFs (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (e.g. interval analysis, possibility theory, evidence theory or probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterisations of epistemic uncertainty.
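
A minimal sketch of the double-loop structure that produces a family of CDFs (using probability theory for the epistemic variable; the paper also treats interval, possibility-theory, and evidence-theory representations, which are not shown here):

```python
# Double-loop sampling sketch: each outer (epistemic) sample fixes the
# poorly known parameter and yields one CDF from the inner (aleatory)
# loop, producing a family of CDFs rather than a single one.
import numpy as np

rng = np.random.default_rng(3)
n_epistemic, n_aleatory = 25, 2000
x_grid = np.linspace(0, 10, 200)

cdfs = []
for _ in range(n_epistemic):
    mu = rng.uniform(2.0, 4.0)                        # epistemic: fixed but poorly known
    y = rng.lognormal(np.log(mu), 0.4, n_aleatory)    # aleatory: inherent randomness
    cdfs.append((y[:, None] <= x_grid).mean(axis=0))  # empirical CDF on the grid
cdfs = np.array(cdfs)

# The family is summarized by its envelope at each x; e.g. the probability
# that the result exceeds 5 is itself uncertain:
ccdf_at_5 = 1.0 - cdfs[:, np.searchsorted(x_grid, 5.0)]
print(f"P(Y > 5) ranges over [{ccdf_at_5.min():.3f}, {ccdf_at_5.max():.3f}]")
```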

Incorporating uncertainty into probabilistic performance models of concentrating solar power plants

Journal of Solar Energy Engineering, Transactions of the ASME

Ho, Clifford K.; Kolb, Gregory J.

A method for applying probabilistic models to concentrating solar-thermal power plants is described in this paper. The benefits of using probabilistic models include quantification of uncertainties inherent in the system and characterization of their impact on system performance and economics. Sensitivity studies using stepwise regression analysis can identify and rank the most important parameters and processes as a means to prioritize future research and activities. The probabilistic method begins with the identification of uncertain variables and the assignment of appropriate distributions for those variables. Those parameters are then sampled using a stratified method (Latin hypercube sampling) to ensure complete and representative sampling from each distribution. Models of performance, reliability, and cost are then simulated multiple times using the sampled set of parameters. The results yield a cumulative distribution function that can be used to quantify the probability of exceeding (or being less than) a particular value. Two examples, a simple cost model and a more detailed performance model of a hypothetical 100-MWe power tower, are provided to illustrate the methods. Copyright © 2010 by ASME.
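
A sketch of the sampling-and-propagation step under stated assumptions (the lhs helper, the input distributions, and the toy cost model below are illustrative, not the paper's plant model):

```python
# Stratified Latin hypercube sampling of two uncertain inputs, propagated
# through a toy levelized-cost model to get a cumulative distribution
# function from which exceedance probabilities can be read off.
import numpy as np

rng = np.random.default_rng(4)
n = 1000

def lhs(n, rng):
    """One stratum per sample, shuffled: guarantees full coverage of [0, 1)."""
    return (rng.permutation(n) + rng.random(n)) / n

# Uncertain inputs (hypothetical distributions): mirror reflectivity and
# receiver thermal efficiency, mapped from uniform LHS samples.
reflectivity = 0.88 + 0.06 * lhs(n, rng)             # U(0.88, 0.94)
receiver_eff = 0.80 + 0.08 * lhs(n, rng)             # U(0.80, 0.88)

# Toy cost-of-energy model: cost inversely proportional to delivered energy.
annual_energy = 5.0e5 * reflectivity * receiver_eff  # MWh/yr, notional
lcoe = 40.0e6 / annual_energy                        # $/MWh, notional fixed cost

lcoe_sorted = np.sort(lcoe)                          # empirical CDF of the result
print(f"median LCOE ~ {lcoe_sorted[n // 2]:.1f} $/MWh")
print(f"P(LCOE > 110 $/MWh) ~ {(lcoe > 110.0).mean():.2f}")
```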

Processing effects on microstructure in Er and ErD2 thin-films

Journal of Nuclear Materials

Snow, Clark S.; Kammler, Daniel; Brewer, Luke N.

Erbium metal thin-films have been deposited on molybdenum-on-silicon substrates and then converted to erbium dideuteride (ErD2). Here, we study the effects of deposition temperature (≈300 or 723 K) and deposition rate (1 or 20 nm/s) upon the initial Er metal microstructure and subsequent ErD2 microstructure. We find that low deposition temperature and low deposition rate lead to small Er metal grain sizes, and high deposition temperature and deposition rate led to larger Er metal grain sizes, consistent with published models of metal thin-film growth. ErD2 grain sizes are strongly influenced by the prior-metal grain size, with small metal grains leading to large ErD2 grains. A novel sample preparation technique for electron backscatter diffraction of air-sensitive ErD2 was developed, and allowed the quantitative measurement of ErD2 grain size and crystallographic texture. Finer-grained ErD2 showed a strong (1 1 1) fiber texture, whereas larger grained ErD2 had only weak texture. We hypothesize that this inverse correlation may arise from improved hydrogen diffusion kinetics in the more defective fine-grained metal structure or due to improved nucleation in the textured large-grain Er. © 2010 Elsevier B.V. All rights reserved.

A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data

De Sapio, Vincent

The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective technique for discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
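
An illustrative sketch of the edge-synthesis step (the job-record schema and weighting rules here are hypothetical, not the report's ontology):

```python
# Synthesize a weighted graph from job records, connecting jobs that shared
# compute nodes or ran close together in time, as in the edge-synthesis
# step described above.
import networkx as nx

jobs = [   # hypothetical queue/execution records
    {"id": "j1", "user": "alice", "nodes": {"n1", "n2"}, "start": 0.0, "end": 2.0},
    {"id": "j2", "user": "bob",   "nodes": {"n2", "n3"}, "start": 1.0, "end": 3.0},
    {"id": "j3", "user": "alice", "nodes": {"n4"},       "start": 9.0, "end": 9.5},
]

G = nx.Graph()
for j in jobs:
    G.add_node(j["id"], user=j["user"])

for i, a in enumerate(jobs):
    for b in jobs[i + 1:]:
        w = 0.0
        shared = a["nodes"] & b["nodes"]
        if shared:                       # relationship: shared compute nodes
            w += len(shared)
        overlap = min(a["end"], b["end"]) - max(a["start"], b["start"])
        if overlap > 0:                  # relationship: temporal proximity
            w += overlap
        if w > 0:
            G.add_edge(a["id"], b["id"], weight=w)

print(G.edges(data=True))   # [('j1', 'j2', {'weight': 2.0})]
```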

Optical holography as an analogue for a neural reuse mechanism

Behavioral and Brain Sciences

Verzi, Stephen J.; Wagner, John S.; Warrender, Christina E.

We propose an analogy between optical holography and neural behavior as a hypothesis about the physical mechanisms of neural reuse. Specifically, parameters in optical holography (frequency, amplitude, and phase of the reference beam) may provide useful analogues for understanding the role of different parameters in determining the behavior of neurons (e.g., frequency, amplitude, and phase of spiking behavior). © 2010 Cambridge University Press.

Environmental Geographic Information System

Peek, Dennis W.; Helfrich, Donald A.; Gorman, Susan

This document describes how the Environmental Geographic Information System (EGIS) was used, along with externally received data, to create maps for the Site-Wide Environmental Impact Statement (SWEIS) Source Document project. Data quality among the various classes of geographic information system (GIS) data is addressed. A complete listing of map layers used is provided.

Environmental management system

Salinas, Stephanie A.

The purpose of the Sandia National Laboratories/New Mexico (SNL/NM) Environmental Management System (EMS) is identification of environmental consequences from SNL/NM activities, products, and/or services in order to develop objectives and measurable targets for mitigation of any potential impacts to the environment. This Source Document discusses the annual EMS process for analysis of environmental aspects and impacts and also provides the fiscal year (FY) 2010 analysis. Further information on the EMS structure, processes, and procedures is described within the programmatic EMS Manual (PG470222).

Supplemental Information Source Document: Health and Safety

Avery, Rosemary P.; Johns, William H.

This document provides information on the possible human exposure to environmental media potentially contaminated with radiological materials and chemical constituents from operations at Sandia National Laboratories/New Mexico (SNL/NM). This report is based on the best available information for Calendar Year (CY) 2008, and was prepared in support of future analyses, including those that may be performed as part of the SNL/NM Site-Wide Environmental Impact Statement.

Long-term Environmental Stewardship

Nagy, Michael D.

The purpose of this Supplemental Information Source Document is to effectively describe Long-Term Environmental Stewardship (LTES) at Sandia National Laboratories/New Mexico (SNL/NM). More specifically, this document describes the LTES and Long-Term Stewardship (LTS) Programs, distinguishes between the LTES and LTS Programs, and summarizes the current status of the Environmental Restoration (ER) Project.

Sustaining knowledge in the neutron generator community and benchmarking study. Phase II

Huff, Tameka B.; Baldonado, Esther

This report documents the second phase of work under the Sustainable Knowledge Management (SKM) project for the Neutron Generator organization at Sandia National Laboratories. Previous work under this project is documented in SAND2008-1777, Sustaining Knowledge in the Neutron Generator Community and Benchmarking Study. Knowledge management (KM) systems are necessary to preserve critical knowledge within organizations. A successful KM program should focus on people and the process for sharing, capturing, and applying knowledge. The Neutron Generator organization is developing KM systems to ensure knowledge is not lost. A benchmarking study involving site visits to outside industry plus additional resource research was conducted during this phase of the SKM project. The findings presented in this report are recommendations for making an SKM program successful. The recommendations are activities that promote sharing, capturing, and applying knowledge. The benchmarking effort, including the site visits to Toyota and Halliburton, provided valuable information on how the SEA KM team could incorporate a KM solution for not just the neutron generator (NG) community but the entire laboratory. The laboratory needs a KM program that allows members of the workforce to access, share, analyze, manage, and apply knowledge. KM activities, such as communities of practice (COP) and sharing best practices, provide a solution towards creating an enabling environment for KM. As more and more people leave organizations through retirement and job transfer, the need to preserve knowledge is essential. Creating an environment for the effective use of knowledge is vital to achieving the laboratory's mission.

Adapting ORAP to wind plants : industry value and functional requirements

Strategic Power Systems (SPS) was contracted by Sandia National Laboratories to assess the feasibility of adapting their ORAP (Operational Reliability Analysis Program) tool for deployment to the wind industry. ORAP for Wind is proposed for use as the primary data source for the CREW (Continuous Reliability Enhancement for Wind) database which will be maintained by Sandia to enable reliability analysis of US wind fleet operations. The report primarily addresses the functional requirements of the wind-based system. The SPS ORAP reliability monitoring system has been used successfully for over twenty years to collect RAM (Reliability, Availability, Maintainability) and operations data for benchmarking and analysis of gas and steam turbine performance. This report documents the requirements to adapt the ORAP system for the wind industry. It specifies which existing ORAP design features should be retained, as well as key new requirements for wind. The latter includes alignment with existing and emerging wind industry standards (IEEE 762, ISO 3977 and IEC 61400). There is also a comprehensive list of thirty critical-to-quality (CTQ) functional requirements which must be considered and addressed to establish the optimum design for wind.

Supplemental information source document : socioeconomics

Sedore, Lora J.

This document provides information on expenditures and staffing levels at Sandia National Laboratories/New Mexico (SNL/NM). This report is based on the best available information obtained from Sandia Corporation for Fiscal Years 2008 and 2009, and was prepared in support of future analyses, including those that may be performed as part of the SNL/NM Site-Wide Environmental Impact Statement.

A modal approach to modeling spatially distributed vibration energy dissipation

Segalman, Daniel J.

The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
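
A minimal sketch of the strategy under stated assumptions (the power-law damping form and parameters below are illustrative stand-ins for the constitutive form used in the report): keep the linear mode shapes, but let each modal coordinate evolve with its own nonlinear, amplitude-dependent dissipation term, so effective damping grows with response amplitude while the modes remain uncoupled.

```python
# Sketch: linear modes preserved, with all nonlinearity invested in the
# evolution of the modal coordinates via a power-law dissipation term.
import numpy as np
from scipy.integrate import solve_ivp

omega = np.array([10.0, 25.0])     # modal frequencies (rad/s), hypothetical
c = np.array([0.02, 0.05])         # dissipation coefficients, hypothetical
alpha = 1.5                        # power-law exponent (>1: nonlinear damping)

def rhs(t, y):
    q, qdot = y[:2], y[2:]
    # Nonlinear modal damping force ~ c * |qdot|^alpha * sign(qdot);
    # each mode dissipates independently, so no modal coupling arises.
    f_dis = c * np.abs(qdot) ** alpha * np.sign(qdot)
    return np.concatenate([qdot, -omega**2 * q - f_dis])

y0 = np.array([1.0, 0.5, 0.0, 0.0])           # initial modal displacements
sol = solve_ivp(rhs, (0.0, 20.0), y0, max_step=1e-3)
print("final modal amplitudes:", np.abs(sol.y[:2, -1]))
```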

Control system devices : architectures and supply channels overview

Schwartz, Moses; Mulder, John; Trent, Jason; Atkins, William D.

This report describes a research project to examine the hardware used in automated control systems like those that control the electric grid. This report provides an overview of the vendors, architectures, and supply channels for a number of control system devices. The research itself represents an attempt to probe more deeply into the area of programmable logic controllers (PLCs) - the specialized digital computers that control individual processes within supervisory control and data acquisition (SCADA) systems. The report (1) provides an overview of control system networks and PLC architecture, (2) furnishes profiles for the top eight vendors in the PLC industry, (3) discusses the communications protocols used in different industries, and (4) analyzes the hardware used in several PLC devices. As part of the project, several PLCs were disassembled to identify constituent components. That information will direct the next step of the research, which will greatly increase our understanding of PLC security in both the hardware and software areas. Such an understanding is vital for discerning the potential national security impact of security flaws in these devices, as well as for developing proactive countermeasures.

The first steps towards a standardized methodology for CSP electricity yield analysis

Ho, Clifford K.

The authors have formed a temporary international core team to prepare a SolarPACES activity aimed at the standardization of a methodology for electricity yield analysis of CSP plants. This core team has drafted a structural framework for a standardized methodology and the standardization process itself. The structural framework must ensure that the standardized methodology is applicable to all conceivable CSP systems, can be used at all levels of the project development process, and covers all aspects affecting the electricity yield of CSP plants. Since the development of the standardized methodology is a complex task, the standardization process has been structured into work packages, and numerous international experts covering all aspects of CSP yield analysis have been asked to contribute. These experts have teamed up in an international working group with the objective of developing, documenting, and publishing standardized methodologies for CSP yield analysis. This paper summarizes the intended standardization process and presents the structural framework of the methodology for CSP yield analysis.

More Details

Uranium for hydrogen storage applications : a materials science perspective

Kolasinski, Robert; Shugard, Andrew D.; Tewell, Craig R.; Cowgill, Donald F.

Under appropriate conditions, uranium will form a hydride phase when exposed to molecular hydrogen. This makes it quite valuable for a variety of applications within the nuclear industry, particularly as a storage medium for tritium. However, some aspects of the U+H system have been characterized much less extensively than other common metal hydrides (particularly Pd+H), likely due to radiological concerns associated with handling. To assess the present understanding, we review the existing literature database for the uranium hydride system in this report and identify gaps in the existing knowledge. Four major areas are emphasized: {sup 3}He release from uranium tritides, the effects of surface contamination on H uptake, the kinetics of the hydride phase formation, and the thermal desorption properties. Our review of these areas is then used to outline potential avenues of future research.

More Details

Living off-grid in an arid environment without a well : can residential and commercial/industrial water harvesting help solve water supply problems?

Axness, Carl L.

Our family of three lives comfortably off-grid without a well in an arid region ({approx}9 in/yr, average). This year we expect to achieve water sustainability, with harvested or grey water supporting all of our needs (including a garden and trees) except drinking water (about 7 gallons/week). We discuss our implementation and its implication: for an investment of a few thousand dollars, many single-family homes could supply a large portion of their own water needs, significantly reducing municipal water demand. Generally, harvested water is very low in minerals and pollutants but may need treatment for microbes in order to be potable; this may be addressed via filters, UV irradiation, or chemical treatment (bleach). Looking further into the possibility of commercial water harvesting from malls, big-box stores, and factories, we ask whether water harvesting could supply a significant portion of potable water by examining two cities with water supply problems. We also consider the implications of separate municipal water lines for potable and clean non-potable uses, and explore implications for future building codes.
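
As a back-of-the-envelope check on the harvesting arithmetic (the roof area and collection efficiency below are assumed, not taken from the paper), one inch of rain falling on one square foot of roof yields about 0.623 gallons:

```python
# Rough rainwater-harvesting arithmetic; roof area and efficiency are assumed.
rain_in_per_yr = 9.0          # arid-site annual rainfall from the abstract
roof_ft2 = 2000.0             # hypothetical roof catchment area
efficiency = 0.85             # hypothetical collection efficiency
gal_per_in_ft2 = 7.48 / 12.0  # 1 inch over 1 ft^2 = 1/12 ft^3 ~ 0.623 gal

harvest_gal = rain_in_per_yr * roof_ft2 * gal_per_in_ft2 * efficiency
print(f"annual harvest: {harvest_gal:,.0f} gallons "
      f"({harvest_gal / 52:,.0f} gal/week)")
```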

More Details

Why Models Don't Forecast

Mcnamara, Laura A.

The title of this paper, Why Models Don't Forecast, has a deceptively simple answer: models don't forecast because people forecast. Yet this statement has significant implications for computational social modeling and simulation in national security decision making. Specifically, it points to the need for robust approaches to the problem of how people and organizations develop, deploy, and use computational modeling and simulation technologies. In the next twenty or so pages, I argue that the challenge of evaluating computational social modeling and simulation technologies extends far beyond verification and validation, and should include the relationship between a simulation technology and the people and organizations using it. This challenge of evaluation is not just one of usability and usefulness for technologies, but extends to the assessment of how new modeling and simulation technologies shape human and organizational judgment. The robust and systematic evaluation of organizational decision making processes, and the role of computational modeling and simulation technologies therein, is a critical problem for the organizations who promote, fund, develop, and seek to use computational social science tools, methods, and techniques in high-consequence decision making.

More Details

Plasma-materials interaction results at Sandia National Laboratories

Kolasinski, Robert; Buchenauer, D.A.; Cowgill, Donald F.; Karnesky, Richard A.; Whaley, Josh A.; Wampler, William R.

This presentation gives an overview of Plasma Materials Interaction (PMI) activities: (1) Hydrogen diffusion and trapping in metals - (a) growth of hydrogen precipitates in tungsten PFCs, (b) temperature dependence of deuterium retention at displacement damage, (c) D retention in W at elevated temperatures; (2) Permeation - (a) gas-driven permeation results for W/Mo/SiC, (b) a plasma-driven permeation test stand for TPE; and (3) Surface studies - (a) H-sensor development, (b) adsorption of oxygen and hydrogen on beryllium surfaces.

More Details

Antarctica X-band MiniSAR Crevasse Detection Radar : draft final report

Bickel, Douglas L.; Sander, Grant J.

This document is the final report for the 2009 Antarctica Crevasse Detection Radar (CDR) Project. This portion of the project is referred to internally as Phase 2; it is a follow-on to the Phase 1 work reported in [1]. Phase 2 involved modifying a Sandia National Laboratories MiniSAR system used in Phase 1 to work with an LC-130 aircraft that operated in Antarctica in October and November of 2009. Experiments from the 2006 flights were repeated, along with a couple of new flight tests to examine the effect of colder snow and ice on the radar signatures of 'deep field' sites. This document includes discussion of the hardware development, system capabilities, and results from data collections in Antarctica during the fall of 2009.

More Details

An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement

Owen, Steven J.

Most adaptive mesh generation algorithms employ a 3-refinement method. Although easy to implement, this method often produces a mesh that is too coarse in some areas and over-refined in others; because it replaces a single hex with 27 new hexes, it offers little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that instead employs 2-refinement, in which each hex to be refined is divided into eight new hexes. This allows much finer control over mesh density than 3-refinement, yielding a mesh that is efficient for analysis: element density is high in specified locations and reduced elsewhere. The tool is also effective for inside-out hexahedral grid-based schemes that use Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and to keep transitions from lower to higher mesh densities smooth, and templates are introduced to allow both convex and concave refinement.
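
The geometric core of 2-refinement on a Cartesian base grid is an octree-style split of a cell into eight children; a minimal sketch follows (axis-aligned cells only; the paper's transition templates are not reproduced here).

```python
# Minimal 2-refinement sketch: split an axis-aligned hex cell into 8 children.
# A real grid-based mesher also needs the transition templates described in
# the paper, which are not shown here.
def refine_2(cell):
    """cell = ((xmin, ymin, zmin), (xmax, ymax, zmax)); returns 8 child cells."""
    (x0, y0, z0), (x1, y1, z1) = cell
    xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    xs, ys, zs = (x0, xm, x1), (y0, ym, y1), (z0, zm, z1)
    return [((xs[i], ys[j], zs[k]), (xs[i+1], ys[j+1], zs[k+1]))
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]

children = refine_2(((0, 0, 0), (1, 1, 1)))
print(len(children), "children per refined hex (vs. 27 for 3-refinement)")
```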

More Details

Field-structured chemiresistors : tunable sensors for chemical-switch arrays

Read, Douglas

We have developed a significantly improved composite material for chemiresistors, which are resistance-based sensors for volatile organic compounds. This material is a polymer composite containing Au-coated magnetic particles organized into electrically conducting pathways by magnetic fields. The improved material overcomes the various problems inherent to conventional carbon-black chemiresistors while achieving an unprecedented magnitude of response. When exposed to chemical vapors, the polymer swells only slightly, yet this is amplified into large, reversible resistance changes - as much as 9 decades at a swelling of only 1.5%. These conductor-insulator transitions occur over such a narrow range of analyte vapor concentration that these devices can be described as chemical switches. We demonstrate that the sensitivity and response range of these sensors can be tailored over a wide range by controlling the stress within the composite, including through the application of a magnetic field. Such tailorable sensors can be used to create sensor arrays that accurately determine analyte concentration over a broad concentration range, or to create logic circuits that signal a particular chemical environment. It is shown through combined mass-sorption and conductance measurements that the response curve of any individual sensor is a function of polymer swelling alone. This has the important implication that individual sensor calibration requires testing with only a single analyte. In addition, we demonstrate a method for analyte discrimination based on sensor response kinetics, which is independent of analyte concentration. This method allows discrimination even between chemically similar analytes. Lastly, additional variables associated with the composite and their effects on sensor response are explored.

More Details

A comparison of mesh morphing methods for shape optimization

Owen, Steven J.; Staten, Matthew L.

The ability to automatically morph an existing mesh to conform to geometry modifications is a necessary capability for rapid prototyping of design variations. This paper compares six methods for morphing hexahedral and tetrahedral meshes, including the previously published FEMWARP and LBWARP methods as well as four new methods. Element quality and performance results show that different methods are superior on different models. We recommend that designers of applications that use mesh morphing consider both FEMWARP and a linear simplex-based method.
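
To give a flavor of Laplacian-style morphing (this sketch uses uniform graph-Laplacian weights on a toy mesh, not the finite element weights of the actual FEMWARP method): prescribed boundary displacements drive the interior nodes through a single linear solve.

```python
# Sketch of Laplacian-based mesh morphing: prescribed boundary displacements
# drive interior nodes via a linear solve. FEMWARP uses FEM (stiffness-matrix)
# weights; uniform graph weights are used here purely for illustration.
import numpy as np

# A tiny 1D "mesh": 6 nodes in a chain; nodes 0 and 5 are boundary.
n, edges = 6, [(i, i + 1) for i in range(5)]
boundary = {0: 0.0, 5: 1.0}           # prescribed displacements (assumed)

L = np.zeros((n, n))                  # graph Laplacian
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

interior = [i for i in range(n) if i not in boundary]
b_idx, b_val = list(boundary), np.array(list(boundary.values()))

# Solve L_II d_I = -L_IB d_B for the interior displacements d_I.
d_int = np.linalg.solve(L[np.ix_(interior, interior)],
                        -L[np.ix_(interior, b_idx)] @ b_val)
print("interior displacements:", d_int)   # a linear ramp, as expected
```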

More Details

Evaluation of annual performance of 2-tank and thermocline thermal storage for trough plants

Kolb, Gregory J.

A study was performed to compare the annual performance of 50 MW{sub e} Andasol-like trough plants that employ either a 2-tank or a thermocline-type molten-salt thermal storage system. TRNSYS software was used to create the plant models and to perform the annual simulations. The annual performance of each plant was found to be nearly identical in the base-case comparison. The thermocline exhibited nearly the same performance primarily because many trough power blocks can operate at a temperature significantly below the design point. However, if temperatures close to the design point are required, the performance of the 2-tank plant would be significantly better than that of the thermocline.

More Details

AIMFAST : an alignment tool based on fringe reflection methods applied to dish concentrators

Yellowhair, Julius; Carlson, Jeffrey; Trapeznikov, Kirill T.

The proper alignment of facets on a dish-engine concentrated solar power system is critical to the performance of the system. These systems are generally highly concentrating, to produce high temperatures for maximum thermal efficiency, so there is little tolerance for poor optical alignment. Improper alignment can lead to poor performance and shortened life through excessively high flux on the receiver surfaces, imbalanced power on multicylinder engines, and intercept losses at the aperture. Alignment approaches used in the past are time-consuming field operations, typically taking 4-6 h for a dish with 40-80 facets. Production systems of faceted dishes will need rapid, accurate alignment implemented in a fraction of an hour. In this paper, we present an extension to our Sandia Optical Fringe Analysis Slope Technique mirror characterization system that will automatically acquire data, implement an alignment strategy, and provide real-time mirror angle corrections to actuators or to workers beneath the dish. The Alignment Implementation for Manufacturing using Fringe Analysis Slope Technique (AIMFAST) has been implemented and tested at the prototype level. In this paper we present the approach used in AIMFAST to rapidly characterize the dish system and provide near-real-time adjustment updates for each facet. The implemented approach can provide adjustment updates every 5 s, suitable for manual or automated adjustment of facets on a dish assembly line.

More Details

Miscellaneous Agreements Between the U.S. Department of Energy and Federal, State, and Local Agencies

Meincke, Carol L.

This document identifies and provides access to source documentation for the Site-Wide Environmental Impact Statement for Sandia National Laboratories/New Mexico. Specifically, it lists agreements between the U.S. Department of Energy (DOE), the National Nuclear Security Administration (NNSA), the DOE/NNSA/Sandia Site Office (SSO), Sandia Corporation, and local and state government agencies, the Department of Defense, Kirtland Air Force Base, and other federal agencies.

More Details

Heavy ion radiation effects studies with ion photon emission microscopy

Hattar, Khalid M.; Powell, Cody J.; Doyle, B.L.

The development of a new radiation effects microscopy (REM) technique is crucial as emerging semiconductor technologies demonstrate smaller feature sizes and thicker back end of line (BEOL) layers. To penetrate these materials and still deposit sufficient energy into the device to induce single event effects, high-energy heavy ions are required. Ion photon emission microscopy (IPEM) is a technique that utilizes coincident photons, emitted from the location of each ion impact, to map out regions of radiation sensitivity in integrated circuits and devices, circumventing the obstacle of focusing high-energy heavy ions. Several versions of the IPEM have been developed and implemented at Sandia National Laboratories (SNL). One such instrument has been utilized on the microbeam line of the 6 MV tandem accelerator at SNL. Another IPEM was designed for ex-vacuo use at the 88-Inch Cyclotron at Lawrence Berkeley National Laboratory (LBNL). Extensive engineering is involved in the development of these IPEM systems, including resolving issues with electronics, event timing, optics, phosphor selection, and mechanics. The various versions of the IPEM, along with the obstacles and benefits associated with each, will be presented. In addition, the current stage of IPEM development as a user instrument will be discussed in the context of recent results.

More Details

Efficient calculation of 1-D periodic Green's functions for leaky-wave applications

Johnson, William A.

In this paper an approach is described for the efficient computation of the mixed-potential scalar and dyadic Green's functions for a one-dimensional periodic array of point sources (periodic along the x direction) embedded in a planar stratified structure. Suitable asymptotic extractions are performed on the slowly converging spectral series. The extracted terms are summed back through the Ewald method, modified and optimized to efficiently deal with all the different terms. The accelerated Green's functions allow for complex wavenumbers and are thus suitable for application to leaky-wave antenna analysis. Suitable choices of the spectral integration paths are made in order to account for leakage effects and the proper/improper nature of the various space harmonics that form the 1-D periodic Green's function.
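
For reference, the free-space scalar version of such a 1-D periodic Green's function is the slowly converging spatial series below (standard textbook form, not quoted from the paper); it is series of this kind that Ewald-type acceleration addresses.

```latex
% Standard free-space 1-D periodic Green's function (textbook form).
% d = period along x; k_{x0} = phase shift per period, possibly complex for
% leaky waves; \rho = transverse distance from the array axis.
G(x,\rho) \;=\; \sum_{n=-\infty}^{\infty}
  \frac{e^{-jkR_n}}{4\pi R_n}\, e^{-j k_{x0} n d},
\qquad R_n = \sqrt{(x - n d)^2 + \rho^2}
```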

More Details

Current and future costs for parabolic trough and power tower systems in the US market

Ho, Clifford K.; Kolb, Gregory J.

NREL's Solar Advisor Model (SAM) is employed to estimate the current and future costs for parabolic trough and molten-salt power towers in the US market. Future troughs are assumed to achieve higher field temperatures via the successful deployment of low melting-point, molten-salt heat transfer fluids by 2015-2020. Similarly, it is assumed that molten-salt power towers are successfully deployed at 100 MW scale over the same time period, increasing to 200 MW by 2025. The levelized cost of electricity (LCOE) for both technologies is predicted to drop below 11 cents/kWh (assuming a 10% investment tax credit and other financial inputs outlined in the paper), making the technologies competitive in the marketplace as benchmarked by the California MPR. Both technologies can be deployed with large amounts of thermal energy storage, yielding capacity factors as high as 65% while maintaining an optimum LCOE.
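
As an illustration of the kind of levelized-cost arithmetic behind such estimates (all input numbers below are hypothetical placeholders, not SAM's or the paper's):

```python
# Simple levelized-cost sketch: capital recovery plus O&M over annual energy.
# All inputs are hypothetical placeholders, not values from SAM or the paper.
def lcoe_cents_per_kwh(capex, om_per_yr, rate, years, mw, cf):
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    annual_kwh = mw * 1000.0 * 8760.0 * cf
    return (capex * crf + om_per_yr) / annual_kwh * 100.0

# Hypothetical 100 MW plant with storage and a 55% capacity factor.
print(f"{lcoe_cents_per_kwh(600e6, 10e6, 0.08, 30, 100, 0.55):.1f} cents/kWh")
```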

More Details

Younger Dryas Boundary (YDB) impact : physical and statistical impossibility

Boslough, Mark

The YDB impact hypothesis of Firestone et al. (2007) is so extremely improbable that it can be considered statistically impossible, in addition to being physically impossible. Comets make up only about 1% of the population of Earth-crossing objects. Broken comets are a vanishingly small fraction, and only exist as Earth-sized clusters for a very short period of time. Only a small fraction of impacts occur at angles as shallow as proposed by the YDB impact authors. Events that are exceptionally unlikely to take place in the age of the Universe are 'statistically impossible'. The size distribution of Earth-crossing asteroids is well constrained by astronomical observations, DoD satellite bolide frequencies, and the cratering record. This distribution can be transformed to a probability density function (PDF) for the largest expected impact of the past 20,000 years. The largest impactor of any kind expected over the period of interest is 250 m in diameter; anything larger than 2 km is exceptionally unlikely (probability less than 1%). The impact hypothesis does not rely on any sound physical model. A 4-km diameter comet, even if it fragmented upon entry, would not disperse or explode in the atmosphere; it would generate a crater about 50 km in diameter with a transient cavity as deep as 10 km. There is no evidence for such a large, young crater associated with the YDB. There is no model to suggest that a comet impact of this size is capable of generating continent-wide fires or blast damage, and there is no physical mechanism that could cause a 4-km comet to explode at the optimum height of 500 km. The highest possible altitude for a cometary optimum height is about 15 km, for a 120-m diameter comet. To maximize blast and thermal damage, a 4-km comet would have to break into tens of thousands of fragments of this size and spread out over the entire continent, but that would require lateral forces that greatly exceed the drag force and would not conserve energy. Airbursts are decompression explosions in which projectile material reaches high temperature but not high pressure states. Meteoritic diamonds would be vaporized. Nanodiamonds at the YDB are not evidence for an airburst or for an impact.
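
The statistical half of the argument can be illustrated with a Poisson model on a power-law impactor flux; the rate normalization and exponent below are rough, assumed values for illustration, not Boslough's numbers.

```python
# Poisson sketch of "largest expected impactor in 20 kyr".
# Power-law flux N(>D) with assumed constants -- illustrative only.
import math

def rate_per_yr_larger_than(d_km):
    # Assumed Earth-impact rate of objects larger than d_km (1/yr),
    # normalized so ~1-km impacts occur roughly every 500,000 years.
    return (1.0 / 5.0e5) * d_km ** -2.0

T = 20_000.0  # years, the period of interest in the abstract
for d in (0.25, 1.0, 2.0, 4.0):
    p = 1.0 - math.exp(-rate_per_yr_larger_than(d) * T)
    print(f"P(at least one >{d} km impact in 20 kyr) ~ {p:.3f}")
```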

More Details

Use of a hybrid technology in a critical security system

Trujillo, David J.; Scharmer, Carol

Assigning an acceptable level of power reliability in a security system environment requires a methodical approach to design when considering the alternatives tied to the reliability and life of the system. The downtime for a piece of equipment - whether for failure, routine maintenance, replacement, refurbishment, or connection of new equipment - is a major factor in determining the reliability of the overall system. A further consideration is whether the system is static or dynamic in its growth. Most highly reliable security power systems are supplied by utility power with uninterruptible power supply (UPS) and generator backup. The combination of UPS and generator backup with a reliable utility typically provides full compliance with security requirements. In the energy market and from government agencies, there is growing pressure to utilize alternative sources of energy other than fossil fuel, to increase the number of local generating systems to reduce dependence on remote generating stations, and to cut down on carbon effects to the environment. There are also conditions where a security system may be limited in functionality due to lack of utility power in remote locations. One alternative energy source is a renewable energy hybrid system comprising a photovoltaic (solar) system with a battery bank and a backup generator set. This is a viable source of energy in the residential and commercial markets, where energy management schemes can be incorporated and systems are monitored and maintained regularly. However, the reliability of this source may be considered diminished in the security system environment, where stringent uptime requirements apply.

More Details

The tyranny of the vital few : the Pareto principle in language design

Farkas, Benjamin D.; McCoy, James A.

Modern high-level programming languages often contain constructs whose semantics are non-trivial. In practice, however, software developers generally restrict the use of such constructs to settings in which their semantics are simple (programmers use language constructs in ways they understand and can reason about). As a result, when developing tools for analyzing and manipulating software, a disproportionate amount of effort ends up being spent developing capabilities needed to analyze constructs in settings that are infrequently used. This paper takes the position that such distinctions between theory and practice are an important measure of the analyzability of a language.

More Details

VIST : a package for automated visualization of EIGER-generated data

Orndorff-Plunkett, Franklin

Is there a systematic way to make EIGER data visual, both to help us communicate the results of our work and to help us understand those results? Data from the EIGER electromagnetics solver are a challenge to interpret, and it is extremely useful to post-process the results and make them visual. In summary, this presentation describes the design and demonstration of a simple, extensible, and user-friendly package that automates post-processing and visualization of EIGER data for any research group using MOENCH, EIGER, and JUNGFRAU.

More Details

Near-field scanning microwave microscopy of few-layer graphene

Gin, Aaron G.; Shaner, Eric A.

Near-field microwave microscopy can be used as an alternative to atomic-force microscopy or Raman microscopy for determining graphene thickness. We evaluated the AC impedance of few-layer graphene. The impedance of mono- and few-layer graphene at 4 GHz was found to be predominantly active (resistive). Near-field microwave microscopy allows simultaneous imaging of the location, geometry, thickness, and distribution of electrical properties of graphene without device fabrication. Our results may be useful for the design of future graphene-based microwave devices.

More Details

Flip-chip and backside techniques

Cole, Edward I.; Barton, Daniel L.

State-of-the-art techniques for failure localization and design modification through bulk silicon are essential for multi-level metallization and new flip-chip packaging methods. The tutorial reviews the transmission of light through silicon, sample preparation, and backside defect localization techniques that are both currently available and under development. The techniques covered include emission microscopy, scanning laser microscope-based techniques (electro-optic techniques, LIVA and its derivatives), and other non-IR based tools (FIB, e-beam techniques, etc.).

More Details

Beam-based defect localization techniques

Cole, Edward I.

SEM and SOM techniques for IC analysis that take advantage of 'active injection' are reviewed. Active injection refers to techniques that alter the electrical characteristics of the device analyzed. All of these techniques can be performed on a standard SEM or SOM (using the proper laser wavelengths).

More Details

A high-order element-based Galerkin method for the global shallow water equations

Levy, Michael N.

The shallow water equations are used as a test for many atmospheric models because the solution mimics the horizontal aspects of atmospheric dynamics while the simplicity of the equations makes them useful for numerical experiments. This study describes a high-order element-based Galerkin method for the global shallow water equations using absolute vorticity, divergence, and fluid depth (atmospheric thickness) as the prognostic variables, while the wind field is a diagnostic variable that can be calculated from the stream function and velocity potential (the Laplacians of which are the vorticity and divergence, respectively). The numerical method employed to solve the shallow water system is based on the discontinuous Galerkin and spectral element methods. The discontinuous Galerkin method, which is inherently conservative, is used to solve the equations governing the two conservative variables - absolute vorticity and atmospheric thickness (mass). The spectral element method is used to solve the divergence equation and the Poisson equations for the velocity potential and the stream function. Time integration is done with an explicit strong stability-preserving second-order Runge-Kutta scheme, with the wind field updated directly from the vorticity and divergence at each stage; the computational domain is the cubed sphere. A stable steady-state test is run and convergence results are provided, showing that the method is high-order accurate. Additionally, two tests without analytic solutions are run, with results comparable to previous high-resolution runs found in the literature.
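
The time integrator named above is the standard two-stage strong-stability-preserving (Shu-Osher) Runge-Kutta scheme; a generic sketch follows, where the right-hand-side operator is a placeholder rather than the element-based shallow water discretization.

```python
# Generic strong-stability-preserving RK2 (Shu-Osher) step, as used for the
# time integration described above. rhs() is a placeholder operator, not the
# actual element-based shallow water discretization.
import numpy as np

def ssp_rk2_step(u, dt, rhs):
    u1 = u + dt * rhs(u)                         # first (forward Euler) stage
    return 0.5 * u + 0.5 * (u1 + dt * rhs(u1))   # convex-combination stage

# Toy usage: linear decay du/dt = -u, one step from u = 1.
u = np.array([1.0])
print(ssp_rk2_step(u, 0.1, lambda v: -v))        # ~ exp(-0.1) = 0.9048
```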

More Details

Hydrogen capacity and absorption rate of the SAES St707 non-evaporable getter at various temperatures

Hsu, Irving; Mills, Bernice E.

A prototype of a tritium thermoelectric generator (TTG) is currently being developed at Sandia. In the TTG, a vacuum jacket reduces the amount of heat lost from the high temperature source via convection. However, outgassing presents challenges to maintaining a vacuum for many years. Getters are chemically active substances that scavenge residual gases in a vacuum system. In order to maintain the vacuum jacket at approximately 1.0 x 10{sup -4} torr for decades, non-evaporable getters that can operate from -55 C to 60 C will be used. This paper focuses on the hydrogen capacity and absorption rate of the St707{trademark} non-evaporable getter by SAES. Using a getter testing manifold, we carried out experiments to test these characteristics of the getter over the temperature range of -77 C to 60 C. The results from this study can be used to size the getter appropriately.
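
A rough illustration of the sizing arithmetic that measured capacities feed into (the outgassing rate and usable capacity below are assumed values, not results from this work):

```python
# Rough getter sizing: required getter mass = lifetime gas load / capacity.
# Outgassing rate and sorption capacity below are assumed, not measured values.
outgas_torr_l_per_s = 1.0e-8      # hypothetical steady H2 outgassing load
years = 30.0                      # target service life
capacity_torr_l_per_g = 100.0     # hypothetical usable capacity at temperature

gas_load = outgas_torr_l_per_s * years * 365.25 * 24 * 3600  # torr-L total
print(f"lifetime load: {gas_load:.1f} torr-L -> "
      f"{gas_load / capacity_torr_l_per_g:.2f} g of getter (plus margin)")
```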

More Details

Flexible implementation of rigid solar cell technologies

Clark, Ryan A.; Rowen, Adam M.; Coleman, Jonathan J.; Gillen, James R.

As a source of clean, remote energy, photovoltaic (PV) systems are an important area of research. The majority of solar cells are rigid materials with negligible flexibility. Flexible PV systems possess many advantages, such as being transportable and incorporable on diverse structures. Amorphous silicon and organic PV systems are flexible; however, they lack the efficiency and lifetime of rigid cells. There is also a need for PV systems that are lightweight, especially in space and flight applications. We propose a solution to this problem by arranging rigid cells onto a flexible substrate, creating efficient, lightweight, and flexible devices. To date, we have created a working prototype of our design using 1.1 cm x 1 cm Emcore cells. We have achieved a better power-to-weight ratio than commercially available PowerFilm{reg_sign}, which uses thin-film silicon yielding 0.034 W/g. We have also tested our concept with other types of cells and verified that our methods can be adapted to any rigid solar cell technology. This allows us to use the highest-efficiency devices regardless of their physical characteristics. Depending on the cell size we use, we can rival the curvature of most available flexible PV devices. We have shown how the benefits of rigid solar cells can be integrated into flexible applications, allowing performance that surpasses alternative technologies.

More Details

Spectral unfolding of the HERMES III H{sup +} beam

Sharp, Andrew C.

The objective is to deconvolve radiochromic film data into an ion energy spectrum. The purpose is twofold: (1) experiment - utilize HERMES III as a pulsed neutron source; and (2) unfolding - the ion energy spectrum gives insight into when the H{sup +} ions form and is needed to predict neutron production. Conclusions are: (1) the majority of ions are high energy and therefore form during the main beam pulse; (2) the image processing worked; and (3) the unfolding proved to be relatively stable.
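
Unfolding of this kind is commonly posed as a non-negative linear inversion of the film response; a generic sketch with a made-up response matrix follows (this is not the HERMES III analysis itself).

```python
# Generic spectrum-unfolding sketch: measured film response d = R @ x, with
# x >= 0 the ion energy spectrum. The response matrix here is made up.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
R = np.triu(rng.uniform(0.5, 1.0, (8, 8)))    # fake layer-vs-energy response
x_true = np.array([0, 0, 0.2, 0.5, 1.0, 2.0, 1.5, 0.5])  # fake spectrum
d = R @ x_true + rng.normal(0, 0.01, 8)       # "measured" film doses

x_hat, residual = nnls(R, d)                  # non-negative least squares
print("unfolded spectrum:", np.round(x_hat, 2))
```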

More Details

PySP : modeling and solving stochastic mixed-integer programs in Python

Watson, Jean-Paul

Although stochastic programming is a powerful tool for modeling decision-making under uncertainty, various impediments have historically prevented its widespread use. One key factor involves the ability of non-specialists to easily express stochastic programming problems as extensions of deterministic models, which are often formulated first. A second key factor relates to the difficulty of solving stochastic programming models, particularly the general mixed-integer, multi-stage case. Intricate, configurable, and parallel decomposition strategies are frequently required to achieve tractable run-times. We simultaneously address both of these factors in our PySP software package, which is part of the COIN-OR Coopr open-source Python project for optimization. To formulate a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree with associated uncertain parameters in the Pyomo open-source algebraic modeling language. Given these two models, PySP provides two paths for solution of the corresponding stochastic program. The first alternative involves writing the extensive form and invoking a standard deterministic (mixed-integer) solver. For more complex stochastic programs, we provide an implementation of Rockafellar and Wets' Progressive Hedging algorithm. Our particular focus is on the use of Progressive Hedging as an effective heuristic for approximating general multi-stage, mixed-integer stochastic programs. By leveraging the combination of a high-level programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations. PySP has been used by a number of research groups, including our own, to rapidly prototype and solve difficult stochastic programming problems.
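
To make the "extensive form" path concrete, here is a tiny two-scenario stochastic LP written directly in plain Pyomo, hand-expanded; PySP builds such forms automatically from a base model and scenario tree, so this is only an illustration of the structure, not PySP's actual API.

```python
# Hand-written extensive form of a tiny two-scenario problem in plain Pyomo.
# PySP generates this automatically from a base model plus a scenario tree;
# the model below only illustrates the structure, not PySP's API.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, minimize, SolverFactory)

m = ConcreteModel()
m.build = Var(within=NonNegativeReals)      # first-stage decision
m.buy_lo = Var(within=NonNegativeReals)     # recourse, low-demand scenario
m.buy_hi = Var(within=NonNegativeReals)     # recourse, high-demand scenario

m.meet_lo = Constraint(expr=m.build + m.buy_lo >= 80)    # demand = 80
m.meet_hi = Constraint(expr=m.build + m.buy_hi >= 120)   # demand = 120

# Build cost 1.0/unit; recourse costs 1.5/unit; scenarios equally likely.
m.cost = Objective(expr=1.0 * m.build
                        + 0.5 * 1.5 * m.buy_lo
                        + 0.5 * 1.5 * m.buy_hi, sense=minimize)

SolverFactory('glpk').solve(m)              # any installed LP solver works
print("first-stage build:", m.build.value)  # 80 at the optimum
```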

More Details

Simulating environmental changes due to marine hydrokinetic energy installations

Seetho, Eddy S.; Roberts, Jesse D.

Marine hydrokinetic (MHK) projects will extract energy from ocean currents and tides, thereby altering water velocities and currents in the site's waterway. These hydrodynamic changes can potentially affect the ecosystem, both near the MHK installation and in surrounding (i.e., far-field) regions. In both marine and freshwater environments, devices will remove energy (momentum) from the system, potentially altering water quality and sediment dynamics. In estuaries, tidal ranges and residence times could change (either increasing or decreasing, depending on system flow properties and where the effects are measured). Effects will be proportional to the number and size of structures installed, with large MHK projects having the greatest potential effects and requiring the most in-depth analyses. This work implements modifications to an existing flow, sediment dynamics, and water-quality code (SNL-EFDC) to qualify, quantify, and visualize the influence of MHK-device momentum/energy extraction at a representative site. New algorithms simulate changes to system fluid dynamics due to the removal of momentum and reflect commensurate changes in turbulent kinetic energy and its dissipation rate. A generic model is developed to demonstrate corresponding changes to erosion, sediment dynamics, and water quality. Also, bed-slope effects on sediment erosion and bedload velocity are incorporated to better understand scour potential.
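
A minimal, actuator-disc style version of the device momentum sink is sketched below; the thrust coefficient, areas, and time step are assumed values, and this is not the SNL-EFDC implementation.

```python
# Actuator-disc style momentum sink for an MHK device in one grid cell.
# Thrust coefficient, areas, and time step are assumed; this only sketches
# the kind of source term added to a flow code such as SNL-EFDC.
rho, c_t = 1025.0, 0.8          # seawater density (kg/m^3), thrust coefficient
a_swept, v_cell = 300.0, 8.0e4  # device swept area (m^2), cell volume (m^3)
u, dt = 1.5, 10.0               # cell velocity (m/s), time step (s)

thrust = 0.5 * rho * c_t * a_swept * u * abs(u)   # N, opposing the flow
du = -thrust / (rho * v_cell) * dt                # velocity change in the cell
p_extracted = thrust * u                          # W removed from the flow
print(f"du = {du:.4f} m/s per step, power sink = {p_extracted / 1e3:.0f} kW")
```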

More Details

Tutorial : the Zoltan toolkit

Wolf, Michael

The outline of this presentation is: (1) High-level view of Zoltan; (2) Requirements, data models, and interface; (3) Load Balancing and Partitioning; (4) Matrix Ordering, Graph Coloring; (5) Utilities; (6) Isorropia; and (7) Zoltan2.

More Details

Stress wave propagation in a composite beam subjected to transverse impact

Song, Bo; Jin, Helena; Lu, Wei-Yang

Composite materials, particularly fiber-reinforced plastic composites, have been extensively utilized in many military and industrial applications. As important structural components in these applications, the composites are often subjected to external impact loading, so it is desirable to understand their mechanical response under impact loading for performance evaluation. Even though many material models for composites have been developed, experimental investigation is still needed to validate and verify the models. While it is essential to investigate the intrinsic material response, it is often more relevant in applications to determine the structural response of a composite component, such as a composite beam, since composites are usually subjected to out-of-plane loading in service. When a composite beam is subjected to a sudden transverse impact, two kinds of stress waves - longitudinal and transverse - are generated and propagate in the beam. The longitudinal stress wave propagates through the thickness direction, whereas the transverse stress wave propagates in the in-plane directions. The longitudinal stress wave speed is usually considered a material constant determined by the material density and Young's modulus, regardless of the loading rate. By contrast, the transverse wave speed is related to structural parameters. In ballistic mechanics, the transverse wave plays a key role in absorbing external impact energy [1]: the faster the transverse wave speed, the more impact energy is dissipated. Since the transverse wave speed is not a material constant, it cannot be calculated from stress-wave theory. One can place several transducers to track the transverse wave propagation; an alternative but more efficient method is to apply digital image correlation (DIC) to visualize it. In this study, we applied a three-point-bending (TPB) technique to a Kolsky compression bar to facilitate dynamic transverse loading on a glass fiber/epoxy composite beam. The high-speed DIC technique was employed to study the transverse wave propagation.
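
The longitudinal wave speed described as a material constant is just c = sqrt(E/rho); a quick numerical check with assumed (typical, not measured) glass/epoxy properties:

```python
# Longitudinal bar-wave speed c = sqrt(E / rho). The property values below are
# typical assumed numbers for a glass/epoxy laminate, not measured data.
import math

E = 25e9      # Young's modulus, Pa (assumed)
rho = 1900.0  # density, kg/m^3 (assumed)
c_long = math.sqrt(E / rho)
print(f"longitudinal wave speed ~ {c_long:.0f} m/s")  # ~3.6 km/s
```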

More Details

Dynamic tensile characterization of a 4330 steel with Kolsky bar techniques

Song, Bo; Antoun, Bonnie R.; Connelly, Kevin

There has been increasing demand to understand the stress-strain response as well as the damage and failure mechanisms of materials under impact loading conditions. Dynamic tensile characterization is an efficient approach to acquiring satisfactory information on mechanical properties, including damage and failure, of the materials under investigation. However, in order to obtain valid experimental data, reliable tensile experimental techniques at high strain rates are required. This includes not only a precise experimental apparatus but also reliable experimental procedures and comprehensive data interpretation. The Kolsky bar, originally developed by Kolsky in 1949 [1] for high-rate compressive characterization of materials, has been extended to dynamic tensile testing since 1960 [2]. In comparison to the Kolsky compression bar, the experimental design of the Kolsky tension bar has been much more diversified, particularly in producing high-speed tensile pulses in the bars. Moreover, instead of directly sandwiching a cylindrical specimen between the bars as in Kolsky compression bar experiments, the specimen must be firmly attached to the bar ends in Kolsky tension bar experiments. A common method is to thread a dumbbell specimen into the ends of the incident and transmission bars. The relatively complicated striking and specimen-gripping systems in Kolsky tension bar techniques often lead to disturbances in stress wave propagation in the bars, requiring appropriate interpretation of experimental data. In this study, we employed a modified Kolsky tension bar, newly developed at Sandia National Laboratories, Livermore, CA, to explore the dynamic tensile response of a 4330-V steel. The design of the new Kolsky tension bar was presented at the 2010 SEM Annual Conference [3]. Figures 1 and 2 show a photograph and a schematic of the Kolsky tension bar, respectively. As shown in Fig. 2, the gun barrel is directly connected to the incident bar with a coupler. The cylindrical striker set inside the gun barrel is launched to impact the end cap that is threaded into the open end of the gun barrel, producing tension in the gun barrel and the incident bar.
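
For context, the standard one-wave Kolsky data reduction (textbook relations, not specific to the Sandia bar) converts the reflected and transmitted bar strains into specimen strain rate and stress; all dimensions and pulses below are assumed/synthetic.

```python
# Standard one-wave Kolsky-bar data reduction (textbook relations; the bar
# and specimen dimensions below are assumed, not the Sandia tension bar's).
import numpy as np

c0, E_bar = 4900.0, 200e9   # bar wave speed (m/s) and modulus (Pa), assumed
a_bar, a_spec, l_spec = 2.85e-4, 3.0e-5, 5.0e-3  # areas (m^2), gage len. (m)

t = np.linspace(0, 100e-6, 500)                   # 100-us loading window
eps_r = -2e-3 * np.sin(np.pi * t / t[-1])         # fake reflected strain pulse
eps_t = 1.5e-4 * np.sin(np.pi * t / t[-1])        # fake transmitted pulse

strain_rate = -2.0 * c0 * eps_r / l_spec          # specimen strain rate (1/s)
strain = np.cumsum(strain_rate) * (t[1] - t[0])   # integrate in time
stress = E_bar * (a_bar / a_spec) * eps_t         # specimen stress (Pa)
print(f"peak strain rate ~ {strain_rate.max():.0f} 1/s, "
      f"final strain ~ {strain[-1]:.3f}, "
      f"peak stress ~ {stress.max() / 1e6:.0f} MPa")
```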

More Details

Proton acceleration experiments with Z-Petawatt

Schollmeier, Marius; Geissel, Matthias; Sefkow, Adam B.; Kimmel, Mark; Rambo, Patrick K.; Schwarz, Jens; Atherton, B.

The outline of this presentation is: (1) Proton acceleration with high-power lasers - the Target Normal Sheath Acceleration concept; (2) Proton acceleration with mass-reduced targets - breaking the 60 MeV threshold; (3) Proton beam divergence control - a novel focusing target geometry; and (4) New experimental capability development - proton radiography on Z.

More Details

In-situ phase and texture characterization of solution deposited PZT thin films during crystallization

Brennecka, Geoff

Ferroelectric lead zirconate titanate (PZT) thin films are used for integrated capacitors, ferroelectric memory, and piezoelectric actuators. Solution deposition is routinely used to fabricate these thin films. During the solution deposition process, the precursor solutions are spin coated onto the substrate and then pyrolyzed to form an amorphous film. The amorphous film is then heated at a higher temperature (650-700 C) to crystallize it into the desired perovskite phase. Phase purity is critical in achieving good ferroelectric properties. Moreover, due to the anisotropy in the structure and properties of PZT, it is desirable to control the texture obtained in these thin films. The heating rate during the crystallization process is known to affect the sequence of phase evolution and the texture obtained in these thin films. However, to date, a comprehensive understanding of how phase and texture evolution takes place is still lacking. To understand the effects of heating rate on phase and texture evolution, in-situ diffraction experiments during the crystallization of solution deposited PZT thin films were carried out at beamline 6-ID-B, Advanced Photon Source (APS). The high X-ray flux coupled with the sophisticated detectors available at the APS synchrotron source allow for in-situ characterization of phase and texture evolution at the high ramp rates that are commonly used during processing of PZT thin films. A PZT solution of nominal composition 52/48 (Zr/Ti) was spin coated onto a platinum-coated Si substrate (Pt//TiO{sub x}//SiO{sub 2}//Si). The films were crystallized using an infrared lamp, similar to a rapid thermal annealing furnace. The ramp rate was adjusted by controlling the voltage applied to the infrared lamp and increasing the voltage by a constant step with every acquisition. Four different ramp rates, ranging from {approx}1000 C/s to {approx}1 C/s, were investigated. The sample was aligned in grazing incidence to maximize the signal from the thin films. Successive diffraction patterns were acquired with a 1 s acquisition time using a MAR SX-165 CCD detector during crystallization. The sample-to-detector distance and the tilt rotations of the detector were determined in Fit2D{copyright} using Al{sub 2}O{sub 3} as the calibrant. These corrections were applied to the patterns when binning the data into radial (2{theta}) and azimuthal bins. The texture observed in the thin film was qualitatively analyzed by fitting the intensity peaks along the azimuthal direction with a Gaussian profile function to obtain the integrated intensity of the peaks. Data analysis and peak fitting were done using the curve fitting toolbox in MATLAB{copyright}. A fluorite-type phase was observed to form before the perovskite phase for all ramp rates. Pt{sub x}Pb is a transient intermetallic formed by the interaction of the thin film and the bottom electrode during crystallization. The ramp rate was observed to significantly affect the amount of Pt{sub x}Pb observed in the thin films during crystallization, as well as the final texture obtained in the thin films. These results will be discussed in the poster in view of the current understanding of these materials.
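
The azimuthal peak-fitting step described above (a Gaussian profile fit yielding integrated intensity) looks roughly like the following; the abstract's analysis used MATLAB, so this Python version with synthetic data is only illustrative.

```python
# Gaussian fit of an azimuthal intensity peak to get integrated intensity,
# analogous to the texture analysis described (done in MATLAB there; this
# Python version with synthetic data is purely illustrative).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(chi, amp, mu, sigma, bg):
    return amp * np.exp(-0.5 * ((chi - mu) / sigma) ** 2) + bg

chi = np.linspace(-90, 90, 181)                   # azimuthal angle (deg)
rng = np.random.default_rng(1)
data = gaussian(chi, 1000, 0, 12, 50) + rng.normal(0, 10, chi.size)

popt, _ = curve_fit(gaussian, chi, data, p0=(800, 5, 20, 0))
amp, mu, sigma, bg = popt
integrated = amp * abs(sigma) * np.sqrt(2 * np.pi)  # area under the peak
print(f"center {mu:.1f} deg, FWHM {2.355 * abs(sigma):.1f} deg, "
      f"integrated intensity {integrated:.0f}")
```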

More Details

Report on the SNL/AWE/NSF international workshop on joint mechanics, Dartington, United Kingdom, 27-29 April 2009

Segalman, Daniel J.

The SNL/AWE joint mechanics workshop, held at Dartington Hall, Totnes, Devon, UK, 26-29 April 2009, was a follow-up to an earlier international joints workshop held in Arlington, Virginia, in October 2006. The preceding workshop focused on identifying what length scales and interactions would be necessary to provide a scientific basis for analyzing and understanding joint mechanics from the atomistic scale on upward. In contrast, the workshop discussed in this report focused much more on the identification and development of methods at longer length scales that can have a nearer-term impact on engineering analysis, design, and prediction of the dynamics of jointed structures. Also, the 2009 meeting employed fewer technical presentations and more breakout sessions for developing focused strategies than was the case with the earlier workshop. Several 'challenges' were identified, and assignments were made to teams to develop approaches to address those challenges.

More Details

Metabonomics for detection of nuclear materials processing

Alam, Todd M.; Alam, Kathleen M.

Tracking nuclear materials production and processing, particularly covert operations, is a key national security concern, given that nuclear materials processing can be a signature of nuclear weapons activities by US adversaries. Covert trafficking can also result in homeland security threats, most notably allowing terrorists to assemble devices such as dirty bombs. Existing methods depend on isotope analysis and do not necessarily detect chronic low-level exposure. In this project, indigenous organisms such as plants, small mammals, and bacteria are utilized as living sensors for the presence of chemicals used in nuclear materials processing. Such 'metabolic fingerprinting' (or 'metabonomics') employs nuclear magnetic resonance (NMR) spectroscopy to assess alterations in organismal metabolism provoked by the environmental presence of nuclear materials processing, for example the tributyl phosphate employed in the processing of spent reactor fuel rods to extract and purify uranium and plutonium for weaponization.

More Details

Assessing the near-term risk of climate uncertainty : interdependencies among the U.S. States

Backus, George A.

Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts from responses to climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.

More Details

Ultra-high speed imaging and DIC for explosive system observation

Reu, P.L.; Cooper, Marcia

Digital image correlation (DIC) and the tremendous advances in optical imaging are beginning to revolutionize explosive and high-strain-rate measurements. This paper presents results obtained from metallic hemispheres expanded at detonation velocities. Important aspects of sample preparation and lighting, which are key considerations in obtaining images for DIC at frame rates of 1 million frames per second, will be presented. Quantitative measurements of the case strain rate, expansion velocity, and deformation will be presented. Furthermore, preliminary estimates of the measurement uncertainty will be discussed, with notes on how image noise and contrast affect the measurement of shape and displacement. The data are then compared with analytical representations of the experiment.

More Details

Hierarchical streamline bundles for visualizing 2D flow fields

Yu, Hongfeng Y.; Chen, Jacqueline H.

We present hierarchical streamline bundles, a new approach to simplifying and visualizing 2D flow fields. Our method first densely seeds a flow field and produces a large number of streamlines that capture important flow features such as critical points. Then, we group spatially neighboring and geometrically similar streamlines to construct a hierarchy from which we extract streamline bundles at different levels of detail. Streamline bundles highlight multiscale flow features and patterns through a clustered yet non-cluttered display. This selective visualization strategy effectively accentuates visual foci and is therefore able to convey the desired insight into the flow fields. Hierarchical streamline bundles offer a new way to characterize and visualize flow structure and patterns in a multiscale fashion: streamline bundles highlight critical points clearly and concisely, and exploring the hierarchy allows a complete visualization of important flow features. Thanks to selective streamline display and flexible level-of-detail refinement, our multiresolution technique is scalable and is promising for viewing large and complex flow fields. In the future, we would like to seek a cost-effective way to generate streamlines without enforcing the dense seeding condition. We will also extend this approach to handle real-world 3D complex flow fields.
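
The grouping step - spatially neighboring, geometrically similar streamlines merged into a hierarchy - can be sketched with off-the-shelf agglomerative clustering on resampled streamline coordinates; this simplification stands in for the paper's actual similarity measure.

```python
# Sketch of the bundling idea: resample each streamline to a fixed number of
# points, then build a hierarchy with agglomerative clustering. This is a
# simplification, not the paper's actual similarity measure.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def resample(line, m=16):
    """Resample a polyline (N x 2 array) to m points by arc length."""
    seg = np.linalg.norm(np.diff(line, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    si = np.linspace(0.0, s[-1], m)
    return np.column_stack([np.interp(si, s, line[:, k]) for k in (0, 1)])

rng = np.random.default_rng(2)
streamlines = [np.cumsum(rng.normal(0, 1, (50, 2)), axis=0) for _ in range(30)]

feats = np.array([resample(sl).ravel() for sl in streamlines])
tree = linkage(feats, method="average")             # the hierarchy
labels = fcluster(tree, t=5, criterion="maxclust")  # 5 bundles at this level
print("bundle sizes:", np.bincount(labels)[1:])
```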

More Details

Temperature dependent mechanical property testing of nitrate thermal storage salts

Broome, Scott T.; Siegel, Nathan P.

Three salt compositions for potential use in trough-based solar collectors were tested to determine their mechanical properties as a function of temperature. The properties determined were unconfined compressive strength, Young's modulus, Poisson's ratio, and indirect tensile strength. Seventeen uniaxial compression and indirect tension tests were completed. It was found that as test temperature increases, unconfined compressive strength and Young's modulus decrease for all salt types. Empirical relationships were developed quantifying these behaviors. Poisson's ratio tends to increase with increasing temperature, except for one salt type for which there is no obvious trend. The variability in measured indirect tensile strength is large, but not atypical for this index test. The average tensile strength for all salt types tested is substantially higher than the upper range of tensile strengths for naturally occurring rock salts.

More Details