Publications

Variance estimation for radiation analysis and multi-sensor fusion

Mitchell, Dean J.

Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating the components of variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies among measurements are also reflected in the uncertainties of the estimated parameters. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.
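The report's full fusion method is not reproduced here, but its core idea, weighting each sensor by the reciprocal of its total variance, can be illustrated with a minimal sketch; the sensor values and variance components below are hypothetical.

```python
import numpy as np

# Hypothetical inverse-variance fusion: each sensor's estimate of a source
# parameter is weighted by the reciprocal of its total variance, with the
# statistical and computational components summed.
estimates = np.array([4.8, 5.3, 5.0])     # per-sensor parameter estimates
stat_var  = np.array([0.20, 0.05, 0.10])  # counting-statistics variance
comp_var  = np.array([0.05, 0.10, 0.02])  # detector-response/model variance

total_var = stat_var + comp_var
weights = 1.0 / total_var
fused = np.sum(weights * estimates) / np.sum(weights)
fused_var = 1.0 / np.sum(weights)         # variance of the fused estimate

print(f"fused estimate = {fused:.3f} +/- {np.sqrt(fused_var):.3f}")
```

Note how an inconsistent sensor with a small claimed variance pulls the fused estimate while inflating the residual, which is the behavior the abstract describes.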

More Details

Algorithm and exploratory study of the Hall MHD Rayleigh-Taylor instability

Gardiner, Thomas A.

This report is concerned with the influence of the Hall term on the nonlinear evolution of the Rayleigh-Taylor (RT) instability. It begins with a review of the magnetohydrodynamic (MHD) equations including the Hall term, and of the wave modes present in the system on time scales short enough that the plasma can be approximated as stationary. In this limit one obtains what are known as the electron MHD (EMHD) equations, which support two characteristic wave modes known as the whistler and Hall drift modes. Each of these modes is considered in some detail in order to draw attention to its key features. This analysis also serves as a background for testing the numerical algorithms used in this work. The numerical methods are briefly described, and the EMHD solver is then tested on the evolution of whistler and Hall drift modes. These methods are then applied to study the nonlinear evolution of the MHD RT instability, with and without the Hall term, for two different configurations. The influence of the Hall term on the mixing and bubble growth rates is analyzed.
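For reference, the textbook whistler dispersion relation for parallel propagation, of the kind commonly used to verify EMHD solvers (not necessarily the exact case run in this work), is

```latex
\omega \;=\; \omega_{ce}\,\frac{k^{2}c^{2}/\omega_{pe}^{2}}{1+k^{2}c^{2}/\omega_{pe}^{2}}
\quad\xrightarrow{\;kc/\omega_{pe}\,\ll\,1\;}\quad
\omega \;\approx\; \frac{k^{2}c^{2}}{\omega_{pe}^{2}}\,\omega_{ce},
```

which exhibits the characteristic quadratic dependence of whistler frequency on wavenumber.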

More Details

Solid oxide electrochemical reactor science

Stechel, Ellen B.

Solid-oxide electrochemical cells are an exciting new technology. Development of solid-oxide cells (SOCs) has advanced considerably in recent years and continues to progress rapidly. This work studies several aspects of SOCs and contributes useful information to their continued development. This LDRD involved a collaboration between Sandia and the Colorado School of Mines (CSM) in solid-oxide electrochemical reactors, targeted at solid-oxide electrolyzer cells (SOECs), which are the reverse of solid-oxide fuel cells (SOFCs). SOECs complement Sandia's efforts in thermochemical production of alternative fuels. An SOEC technology would co-electrolyze carbon dioxide (CO₂) with steam at temperatures around 800 °C to form synthesis gas (H₂ and CO), the building block for petrochemical substitutes that can be used to power vehicles or in distributed energy platforms. The effort described here concentrates on research concerning catalytic chemistry, charge-transfer chemistry, and optimal cell architecture. The technical scope included computational modeling, materials development, and experimental evaluation. The project engaged the Colorado Fuel Cell Center at CSM through the support of a graduate student (Connor Moyer) at CSM and his advisors (Profs. Robert Kee and Neal Sullivan) in collaboration with Sandia.

More Details

Simulations of neutron multiplicity measurements with MCNP-PoliMi

Miller, Eric C.; Mattingly, John K.

The heightened focus on nuclear safeguards and accountability has increased the need to develop and verify simulation tools for modeling these applications. The ability to accurately simulate safeguards techniques, such as neutron multiplicity counting, aids in the design and development of future systems. This work focuses on validating the ability of the Monte Carlo code MCNPX-PoliMi to reproduce measured neutron multiplicity results for a highly multiplicative sample. The benchmark experiment for this validation consists of a 4.5-kg sphere of plutonium metal that was moderated by various thicknesses of polyethylene. The detector system was the nPod, which contains a bank of 15 ³He detectors. Simulations of the experiments were compared to the actual measurements and several sources of potential bias in the simulation were evaluated. The analysis included the effects of detector dead time, source-detector distance, density, and adjustments made to the value of ν̄ in the data libraries. Based on this analysis it was observed that a 1.14% decrease in the evaluated value of ν̄ for 239Pu in the ENDF-VII library substantially improved the accuracy of the simulation.

More Details

Enabling R&D for accurate simulation of non-ideal explosives

Thompson, A.P.; Aidun, John B.; Schmitt, Robert G.

We implemented two numerical simulation capabilities essential to reliably predicting the effect of non-ideal explosives (NXs). To begin to be able to treat the multiple, competing, multi-step reaction paths and slower kinetics of NXs, Sandia's CTH shock physics code was extended to include the TIGER thermochemical equilibrium solver as an in-line routine. To facilitate efficient exploration of reaction pathways that need to be identified for the CTH simulations, we implemented in Sandia's LAMMPS molecular dynamics code the MSST method, which is a reactive molecular dynamics technique for simulating steady shock wave response. Our preliminary demonstrations of these two capabilities serve several purposes: (i) they demonstrate proof-of-principle for our approach; (ii) they provide illustration of the applicability of the new functionality; and (iii) they begin to characterize the use of the new functionality and identify where improvements will be needed for the ultimate capability to meet national security needs. Next steps are discussed.
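As a hypothetical illustration of the second capability: LAMMPS exposes the MSST method as the `fix msst` command, and a minimal driver through the LAMMPS Python interface might look like the sketch below. The data file, the ReaxFF force field, and all numerical parameters are placeholders, not the inputs used in this work, and the build must include the reactive force-field package.

```python
from lammps import lammps  # LAMMPS Python wrapper

# Minimal sketch of driving a steady shock with the MSST method in LAMMPS.
lmp = lammps()
lmp.command("units real")
lmp.command("atom_style charge")
lmp.command("read_data explosive.data")                   # placeholder structure
lmp.command("pair_style reax/c NULL")                     # reactive force field
lmp.command("pair_coeff * * ffield.reax C H N O")         # placeholder parameter file
lmp.command("fix 1 all qeq/reax 1 0.0 10.0 1e-6 reax/c")  # charge equilibration
lmp.command("velocity all create 300.0 4928459")
# fix msst: Multi-Scale Shock Technique, shock along z at a placeholder speed
# (in 'units real', velocity is Angstrom/fs, so 0.08 A/fs = 8 km/s).
lmp.command("fix 2 all msst z 0.08 q 25.0 mu 0.9 tscale 0.01")
lmp.command("timestep 0.1")
lmp.command("run 10000")
```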

More Details

Design considerations for concentrating solar power tower systems employing molten salt

Moore, Robert C.; Vernon, Milton E.; Ho, Clifford K.; Siegel, Nathan P.; Kolb, Gregory J.

The Solar Two Project was a United States Department of Energy sponsored project, operated from 1996 to 1999, that demonstrated the coupling of a solar power tower with a molten nitrate salt as a heat-transfer and thermal-storage medium. Overall, the Solar Two Project was very successful; however, many operational challenges were encountered. In this work, the major problems encountered in operation of the Solar Two facility were evaluated, and alternative technologies were identified for use in a future solar power tower operating with a steam Rankine power cycle. Many of the major problems encountered can be addressed with new technologies that were not available a decade ago. These include better thermal insulation, analytical equipment, pumps and valves specifically designed for molten nitrate salts, gaskets resistant to thermal cycling, and advanced equipment designs.

More Details

Initiation of the TLR4 signal transduction network : deeper understanding for better therapeutics

Kent, Michael S.; Branda, Steven; Hayden, Carl C.; Sasaki, Darryl Y.; Sale, Kenneth L.

The innate immune system represents our first line of defense against microbial pathogens, and in many cases is activated by recognition of pathogen cellular components (dsRNA, flagella, LPS, etc.) by cell-surface membrane proteins known as toll-like receptors (TLRs). As the initial trigger for innate immune response activation, TLRs also represent a means by which we can effectively control or modulate inflammatory responses. This project focused on TLR4, the cell-surface receptor primarily responsible for initiating the innate immune response to lipopolysaccharide (LPS), a major component of the outer membrane envelope of gram-negative bacteria. The goal was to better understand TLR4 activation and associated membrane-proximal events in order to enhance the design of small-molecule therapeutics to modulate immune activation. Our approach was to reconstitute the receptor in biomimetic systems in vitro to allow study of its structure and dynamics with biophysical methods. Structural studies were initiated in the first year but were halted after the crystal structure of the dimerized receptor was published early in the second year of the program. Methods were developed to determine the association constant for oligomerization of the soluble receptor. LPS-induced oligomerization was observed to be a strong function of buffer conditions. In 20 mM Tris pH 8.0 with 200 mM NaCl, the onset of receptor oligomerization occurred at 0.2 µM TLR4/MD2 with E. coli LPS Ra mutant in excess. However, in the presence of 0.5 µM CD14 and 0.5 µM LBP, the onset of receptor oligomerization occurred below 10 nM TLR4/MD2. Several methods were pursued to study LPS-induced oligomerization of the membrane-bound receptor, including cryoEM, FRET, colocalization and codiffusion by TIRF microscopy, and fluorescence correlation spectroscopy. However, these approaches met with only limited success.

More Details

Modeling attacker-defender interactions in information networks

Collins, Michael J.

The simplest conceptual model of cybersecurity implicitly views attackers and defenders as acting in isolation from one another: an attacker seeks to penetrate or disrupt a system that has been protected to a given level, while a defender attempts to thwart particular attacks. Such a model also views all non-malicious parties as having the same goal of preventing all attacks. But in fact, attackers and defenders are interacting parts of the same system, and different defenders have their own individual interests: defenders may be willing to accept some risk of successful attack if the cost of defense is too high. We have used game theory to develop models of how non-cooperative but non-malicious players in a network interact when there is a substantial cost associated with effective defensive measures. Although game theory has been applied in this area before, we have introduced some novel aspects of player behavior in our work, including: (1) A model of how players attempt to avoid the costs of defense and force others to assume these costs; (2) A model of how players interact when the cost of defending one node can be shared by other nodes; and (3) A model of the incentives for a defender to choose less expensive, but less effective, defensive actions.
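A toy version of the first model (players trying to shift defense costs onto others) can be written as a two-player game in a few lines; the payoff numbers are invented for illustration, not taken from the report.

```python
import itertools

# Hypothetical 2-player defense game: each player chooses to Defend ('D')
# or Free-ride ('F'). If at least one player defends, the attack is
# blocked for both. Costs (negative payoffs) are illustrative only.
COST_DEFENSE = 4   # cost of mounting an effective defense
LOSS_ATTACK = 10   # loss each player suffers if nobody defends

def payoff(me, other):
    """Payoff to 'me' given the two chosen strategies."""
    if me == 'D':
        return -COST_DEFENSE
    return 0 if other == 'D' else -LOSS_ATTACK

# Enumerate pure-strategy Nash equilibria: no player gains by deviating alone.
for s1, s2 in itertools.product('DF', repeat=2):
    stable1 = all(payoff(s1, s2) >= payoff(alt, s2) for alt in 'DF')
    stable2 = all(payoff(s2, s1) >= payoff(alt, s1) for alt in 'DF')
    if stable1 and stable2:
        print(f"Nash equilibrium: player1={s1}, player2={s2}")
```

The two equilibria this prints, (D, F) and (F, D), are exactly the free-riding outcomes the abstract describes: one party bears the defense cost while the other avoids it.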

More Details

Improved high temperature solar absorbers for use in Concentrating Solar Power central receiver applications

Staiger, Chad L.; Lambert, Timothy N.; Hall, Aaron; Bencomo, Marlene; Stechel, Ellen B.

Concentrating solar power (CSP) systems use solar absorbers to convert the heat from sunlight to electric power. Increased operating temperatures are necessary to lower the cost of solar-generated electricity by improving efficiencies and reducing thermal energy storage costs. Durable new materials are needed to cope with operating temperatures >600 °C. The current coating technology (Pyromark High Temperature paint) has a solar absorptance in excess of 0.95 but a thermal emittance greater than 0.8, which results in large thermal losses at high temperatures. In addition, because solar receivers operate in air, these coatings have long-term stability issues that add to the operating costs of CSP facilities. Ideal absorbers must have high solar absorptance (>0.95) and low thermal emittance (<0.05) in the IR region, be stable in air, and be low-cost and readily manufacturable. We propose to utilize solution-based synthesis techniques to prepare intrinsic absorbers for use in central receiver applications.
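A rough sense of why emittance matters at these temperatures comes from a one-line Stefan-Boltzmann estimate using the abstract's absorptance and emittance figures; the incident flux is a placeholder, and convection and geometry are ignored.

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4
Q_SOLAR = 1.0e6           # incident concentrated flux, W/m^2 (placeholder)
T = 600 + 273.15          # receiver surface temperature, K

def net_absorbed(alpha, eps):
    """Absorbed solar flux minus thermal re-radiation, W/m^2."""
    return alpha * Q_SOLAR - eps * SIGMA * T**4

# Pyromark-like coating vs. the abstract's idealized selective absorber.
print("Pyromark-like:", net_absorbed(0.95, 0.80))  # high emittance, big loss
print("Ideal target: ", net_absorbed(0.95, 0.05))
```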

More Details

Nanopatterned ferroelectrics for ultrahigh density rad-hard nonvolatile memories

Brennecka, Geoff; Stevens, Jeffrey; Gin, Aaron G.; Scrymgeour, David

Radiation-hard nonvolatile random access memory (NVRAM) is a crucial component for DOE and DOD surveillance and defense applications. NVRAMs based upon ferroelectric materials (also known as FERAMs) are proven to work in radiation-rich environments and inherently require less power than many other NVRAM technologies. However, fabrication and integration challenges have led to state-of-the-art FERAMs still being fabricated with a 130 nm process, while competing phase-change memory (PRAM) has been demonstrated with a 20 nm process. Block copolymer lithography is a promising approach to patterning at the sub-32 nm scale, but is currently limited to self-assembly directly on Si or SiO₂ layers. Successful integration of ferroelectrics with discrete and addressable features of ≈15-20 nm would represent a 100-fold improvement in areal memory density and would enable the more highly integrated electronic devices required for systems advances. Towards this end, we have developed a technique that allows us to carry out block copolymer self-assembly directly on a wide variety of materials, and we have investigated the fabrication, integration, and characterization of electroceramic materials - primarily solution-derived ferroelectrics - with discrete features of ≈20 nm and below. Significant challenges remain before such techniques will be capable of fabricating fully integrated NVRAM devices, but the tools developed for this effort are already finding broader use. This report introduces the nanopatterned NVRAM device concept as a mechanism for motivating the subsequent studies, but the bulk of the document focuses on the platform and technology development.

More Details

Thermokinetic/mass-transfer analysis of carbon capture for reuse/sequestration

Brady, Patrick V.; Luketa, Anay; Stechel, Ellen B.

Effective capture of atmospheric carbon is a key bottleneck preventing non-bio-based, carbon-neutral production of synthetic liquid hydrocarbon fuels using CO₂ as the carbon feedstock. Here we outline the boundary conditions of atmospheric carbon capture for recycle to liquid hydrocarbon fuel production and re-use options, and we identify the technical advances that must be made for such a process to become technically and commercially viable at scale. While conversion of atmospheric CO₂ into a pure feedstock for hydrocarbon fuel synthesis is presently feasible at the bench scale - albeit at high cost, both energetically and economically - the methods and materials needed to concentrate large amounts of CO₂ at low cost and high efficiency remain technically immature. Industrial-scale capture must entail: (1) processing of large volumes of air through an effective CO₂ capture medium and (2) efficient separation of CO₂ from the processed air flow into a pure stream of CO₂.
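To see why requirement (1) dominates, a back-of-envelope estimate of the air volume that contains one tonne of CO₂ is sketched below, assuming a mole fraction of roughly 390 ppmv (the approximate ambient concentration when this work was done).

```python
# Back-of-envelope: volume of ambient air containing 1 tonne of CO2.
PPMV = 390e-6            # approximate CO2 mole fraction in air (~2010)
MOLAR_VOL = 0.0245       # m^3 per mole of air at ~25 C and 1 atm
M_CO2 = 0.044            # kg per mole of CO2

mol_co2_per_m3 = PPMV / MOLAR_VOL          # mol CO2 per m^3 of air
kg_co2_per_m3 = mol_co2_per_m3 * M_CO2     # ~0.0007 kg of CO2 per m^3
air_volume = 1000.0 / kg_co2_per_m3        # m^3 of air per tonne of CO2
print(f"~{air_volume:.2e} m^3 of air per tonne of CO2")  # ~1.4e6 m^3
```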

More Details

Development of efficient, integrated cellulosic biorefineries : LDRD final report

Shaddix, Christopher R.; Hecht, Ethan S.; Teh, Kwee-Yan; Buffleben, George M.; Dibble, Dean C.

Cellulosic ethanol, generated from lignocellulosic biomass sources such as grasses and trees, is a promising alternative to conventional starch- and sugar-based ethanol production in terms of potential production quantities, CO₂ impact, and economic competitiveness. In addition, cellulosic ethanol can be generated (at least in principle) without competing with food production. However, approximately one third of the lignocellulosic biomass material (including all of the lignin) cannot be converted to ethanol through biochemical means and must be extracted at some point in the biochemical process. In this project we gathered basic information on the prospects for utilizing this lignin residue material in thermochemical conversion processes to improve the overall energy efficiency or liquid fuel production capacity of cellulosic biorefineries. Two existing pretreatment approaches, soaking in aqueous ammonia (SAA) and the Arkenol (strong sulfuric acid) process, were implemented at Sandia and used to generate suitable quantities of residue material from corn stover and eucalyptus feedstocks for subsequent thermochemical research. A third, novel technique using ionic liquids (IL) was investigated by Sandia researchers at the Joint BioEnergy Institute (JBEI), but was not successful in isolating sufficient lignin residue. Additional residue material for thermochemical research was supplied from the dilute-acid simultaneous saccharification/fermentation (SSF) pilot-scale process at the National Renewable Energy Laboratory (NREL). The high-temperature volatiles yields of the different residues were measured, as were the char combustion reactivities. The residue chars showed slightly lower reactivity than raw biomass char, except for the SSF residue, which had substantially lower reactivity. Exergy analysis was applied to the NREL standard process design model for thermochemical ethanol production and to a prototypical dedicated biochemical process, with process data supplied by a recent report from the National Research Council (NRC). The thermochemical system analysis revealed that most of the system inefficiency is associated with the gasification process and the subsequent tar reforming step. For the biochemical process, the steam generation from residue combustion, which provides the requisite heating for the conventional pretreatment and alcohol distillation processes, was shown to dominate the exergy loss. An overall energy balance with different potential distillation energy requirements shows that as much as 30% of the biomass energy content may be available in the future as a feedstock for thermochemical production of liquid fuels.

More Details

Challenge problem and milestones for : Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC)

Arguello, Jose G.; Mcneish, Jerry; Schultz, Peter A.; Wang, Yifeng

This report describes the specification of a challenge problem and associated challenge milestones for the Waste Integrated Performance and Safety Codes (IPSC) supporting the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The NEAMS challenge problems are designed to demonstrate proof of concept and progress towards IPSC goals. The goal of the Waste IPSC is to develop an integrated suite of modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. To demonstrate proof of concept and progress towards these goals and requirements, a Waste IPSC challenge problem is specified that includes coupled thermal-hydrologic-chemical-mechanical (THCM) processes that describe (1) the degradation of a borosilicate glass waste form and the corresponding mobilization of radionuclides (i.e., the processes that produce the radionuclide source term), (2) the associated near-field physical and chemical environment for waste emplacement within a salt formation, and (3) radionuclide transport in the near field (i.e., through the engineered components - waste form, waste package, and backfill - and the immediately adjacent salt). The initial details of a set of challenge milestones that collectively comprise the full challenge problem are also specified.

More Details

Influence of point defects on grain boundary motion

Foiles, Stephen M.

This work addresses the influence of point defects, in particular vacancies, on the motion of grain boundaries. If there is a non-equilibrium concentration of point defects in the vicinity of an interface, such as due to displacement cascades in a radiation environment, motion of the interface to sweep up the defects will lower the energy and provide a driving force for interface motion. Molecular dynamics simulations are employed to examine this process for the case of excess vacancy concentrations in the vicinity of two grain boundaries. It is observed that the efficacy of the point defects in inducing boundary motion depends on the balance between the mobility of the defects and the mobility of the interfaces. In addition, the extent to which grain boundaries are ideal sinks for vacancies is evaluated by considering the energy of the boundaries before and after vacancy absorption.

More Details

Scaling of X pinches from 1 MA to 6 MA

Sinars, Daniel; Mcbride, Ryan; Wenger, D.F.; Cuneo, Michael E.; Yu, Edmund; Harding, Eric H.; Hansen, Stephanie B.; Ampleford, David J.; Jennings, Christopher A.

This final report for Project 117863 summarizes progress made toward understanding how X-pinch load designs scale to high currents. The X-pinch load geometry was conceived in 1982 as a method to study the formation and properties of bright x-ray spots in z-pinch plasmas. X-pinch plasmas driven by 0.2 MA currents were found to have source sizes of 1 micron, temperatures >1 keV, lifetimes of 10-100 ps, and densities >0.1 times solid density. These conditions are believed to result from the direct magnetic compression of matter. Physical models that capture the behavior of 0.2 MA X pinches predict more extreme parameters at currents >1 MA. This project developed load designs for up to 6 MA on the SATURN facility and attempted to measure the resulting plasma parameters. Source sizes of 5-8 microns were observed in some cases along with evidence for high temperatures (several keV) and short time durations (<500 ps).

More Details

Scientific data analysis on data-parallel platforms

Roe, Diana C.; Choe, Yung R.; Ulmer, Craig

As scientific computing users migrate to petaflop platforms that promise to generate multi-terabyte datasets, there is a growing need in the community to be able to embed sophisticated analysis algorithms in the computing platforms' storage systems. Data Warehouse Appliances (DWAs) are attractive for this work, due to their ability to store and process massive datasets efficiently. While DWAs have been utilized effectively in data-mining and informatics applications, they remain largely unproven in scientific workloads. In this paper we present our experiences in adapting two mesh analysis algorithms to function on five different DWA architectures: two Netezza database appliances, an XtremeData dbX database, a LexisNexis DAS, and multiple Hadoop MapReduce clusters. The main contribution of this work is insight into the differences between these DWAs from a user's perspective. In addition, we present performance measurements for ten DWA systems to help understand the impact of different architectural trade-offs in these systems.

More Details

Using reconfigurable functional units in conventional microprocessors

Rodrigues, Arun

Scientific applications use highly specialized data structures that require complex, latency-sensitive graphs of integer instructions for memory address calculations. Working with the University of Wisconsin, we have demonstrated significant differences between Sandia's applications and the industry-standard SPEC-FP (Standard Performance Evaluation Corporation floating-point) suite. Specifically, integer dataflow performance is critical to overall system performance. To improve this performance, we have developed a configurable functional unit design that is capable of accelerating integer dataflow.

More Details

Uncertainty quantification of cinematic imaging for development of predictive simulations of turbulent combustion

Frank, Jonathan H.; Lawson, Matthew; Sargsyan, Khachik; Debusschere, Bert; Najm, Habib N.

Recent advances in high frame rate complementary metal-oxide-semiconductor (CMOS) cameras coupled with high repetition rate lasers have enabled laser-based imaging measurements of the temporal evolution of turbulent reacting flows. This measurement capability provides new opportunities for understanding the dynamics of turbulence-chemistry interactions, which is necessary for developing predictive simulations of turbulent combustion. However, quantitative imaging measurements using high frame rate CMOS cameras require careful characterization of their noise, non-linear response, and variations in this response from pixel to pixel. We develop a noise model and calibration tools to mitigate these problems and to enable quantitative use of CMOS cameras. We have demonstrated proof of principle for image de-noising using both wavelet methods and Bayesian inference. The results offer new approaches for quantitative interpretation of imaging measurements from noisy data acquired with non-linear detectors. These approaches are potentially useful in many areas of scientific research that rely on quantitative imaging measurements.
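A minimal sketch of the wavelet route (one of the two de-noising approaches mentioned) is shown below using PyWavelets on a synthetic image; the noise level and threshold are invented, not the report's calibrated noise model.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Synthetic "image": smooth field plus pixel noise, a stand-in for CMOS data.
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
clean = np.sin(6 * x) * np.cos(4 * y)
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# Wavelet shrinkage: decompose, soft-threshold detail coefficients, reconstruct.
coeffs = pywt.wavedec2(noisy, "db4", level=3)
thresh = 0.6  # placeholder; a real analysis would estimate this from the noise model
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db4")

print("RMS error before/after:",
      np.sqrt(np.mean((noisy - clean) ** 2)),
      np.sqrt(np.mean((denoised - clean) ** 2)))
```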

More Details

90Sr Liquid Scintillation Urine Analysis Utilizing Different Approaches for Tracer Recovery

Piraner, Olga; Preston, Rose T.; Shanks, Sonoya T.; Jones, Robert

90Sr is one of the isotopes most commonly produced by nuclear fission. This medium-lived isotope presents serious challenges to radiation workers, the environment, and, following a nuclear event, the general public. Methods of identifying this nuclide have been in existence for a number of years (e.g., Horwitz, E.P. [1]; Maxwell, S.L. [2]; EPA 905.0 [3]), but they are time consuming, requiring a month or more for full analysis. This time frame is unacceptable in the present security environment. It is therefore important to have a dependable and rapid method for the determination of Sr. The purpose of this study is to reduce analysis time to less than half a day by utilizing a single method of radiation measurement while continuing to yield precise results. This paper presents findings on three methods that can meet these criteria: (1) stable Sr carrier, (2) 85Sr by gamma spectroscopy, and (3) 85Sr by LSC. Two methods of analyzing and calculating the 85Sr tracer recovery were investigated (gamma spectroscopy and a low-energy window, Sr85LEBAB, by LSC), as well as the use of two different types of Sr tracer (85Sr and stable Sr carrier). Three separate stock blank urine samples were spiked with various activity levels of 239Pu, 137Cs, and 90Sr/90Y to determine the effectiveness of the Eichrom Sr-spec™ resin 2-mL extraction columns. The objectives were to compare the recoveries of 85Sr versus a stable strontium carrier, to compare the rate at which samples can be processed by evaluating evaporation and neutralization, and to remove the need for a second instrument (the gamma spectrometer) by using the LSC spectrometer to obtain the 85Sr recovery. It was found that when using a calibration curve based on a different cocktail and a non-optimum discriminator setting, reasonable results (bias of ±25%) were achieved. The results from spiked samples containing 85Sr demonstrated that a more accurate recovery is obtained with gamma spectroscopy (89-95%) than with the LEB window from LSC (120-470%). The anomalously high recovery for 85Sr by LSC analysis may be due to interference/cross talk from the alpha region, since alpha counts were observed in all sample sets. After further investigation it was determined that the alpha counts were due to 239Pu breakthrough on the Sr-spec™ column. Further development is required to purify the Sr before an accurate tracer recovery determination can be made. Sample preparation times ranged from 4-6 hours depending on the specific sample preparation process. The results from the spiked samples containing stable strontium nitrate, Sr(NO3)2, carrier demonstrate that gravimetric analysis yields the most consistently high recoveries (97-101%) when evaporation is carefully performed. Since this method did not have a variation on the tracer recovery method, the samples were counted in (1) LEB/alpha/beta mode optimized for 90Sr, (2) DPM mode for 90Sr, and (3) general LEB/alpha/beta mode. The results (relative to the known values) ranged from 79-104%, 107-177%, and 85-89% for modes 1, 2, and 3, respectively. Counting the prepared samples in a generic low-energy beta/alpha/beta protocol yielded more accurate and consistent results and also yielded the shortest sample preparation turnaround time of 3.5 hours.

More Details

Accelerated Cartesian expansion (ACE) based framework for the rapid evaluation of diffusion, lossy wave, and Klein-Gordon potentials

Journal of Computational Physics

Baczewski, Andrew D.; Vikram, Melapudi; Shanker, Balasubramaniam; Kempel, Leo

Diffusion, lossy wave, and Klein-Gordon equations find numerous applications in practical problems across a range of diverse disciplines. The temporal dependence of all three Green's functions is characterized by an infinite tail. This implies that the cost of the spatio-temporal convolutions associated with evaluating the potentials scales as O(Nₛ²Nₜ²), where Nₛ and Nₜ are the number of spatial and temporal degrees of freedom, respectively. In this paper, we discuss two new methods to rapidly evaluate these spatio-temporal convolutions by exploiting their block-Toeplitz nature within the framework of accelerated Cartesian expansions (ACE). The first scheme identifies a convolution relation in time amongst ACE harmonics, and the fast Fourier transform (FFT) is used for efficient evaluation of these convolutions. The second method exploits the rank deficiency of the ACE translation operators with respect to time and develops a recursive numerical compression scheme for the efficient representation and evaluation of temporal convolutions. It is shown that the cost of both methods scales as O(NₛNₜ log²Nₜ). Furthermore, several numerical results are presented for the diffusion equation to validate the accuracy and efficacy of the fast algorithms developed here.
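The first scheme rests on a standard identity: a discrete convolution of length-Nₜ sequences costs O(Nₜ²) directly but O(Nₜ log Nₜ) via the FFT. A generic numpy check of that identity (not the ACE-specific machinery) is sketched below.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt = 256
f = rng.standard_normal(Nt)   # e.g., a temporal basis signature
g = rng.standard_normal(Nt)   # e.g., the time history of an ACE harmonic

# Direct linear convolution: O(Nt^2) operations.
direct = np.convolve(f, g)

# FFT-based linear convolution: zero-pad to 2*Nt - 1, multiply spectra.
n = 2 * Nt - 1
fast = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

print("max abs difference:", np.max(np.abs(direct - fast)))  # ~1e-13
```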

More Details

Reducing variance in batch partitioning measurements

Mariner, Paul

The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
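The effect can be reproduced with a small Monte Carlo of the standard batch calculation Kd = (C0 - C)/C × V/m; the error magnitudes below are invented, and the relative error is smallest when the fraction sorbed falls mid-range rather than near either extreme.

```python
import numpy as np

rng = np.random.default_rng(2)

def kd_relative_error(frac_sorbed, n=200_000, rel_err=0.02, abs_err=0.02):
    """Monte Carlo relative std. dev. of a batch Kd measurement.

    Kd = (C0 - C)/C * V/m, with V/m = 1 and C0 = 1 for illustration. Each
    concentration measurement carries a relative error plus an additive
    (detection-level) error; both magnitudes are placeholders.
    """
    c0_true, c_true = 1.0, 1.0 - frac_sorbed
    c0 = c0_true + rng.normal(0, np.hypot(rel_err * c0_true, abs_err), n)
    c = c_true + rng.normal(0, np.hypot(rel_err * c_true, abs_err), n)
    kd = (c0 - c) / c
    return np.std(kd) / abs(np.mean(kd))

for f in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"fraction sorbed {f:.1f}: relative error {kd_relative_error(f):.3f}")
```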

More Details

A resilience assessment framework for infrastructure and economic systems: Quantitative and qualitative resilience analysis of petrochemical supply chains to a hurricane

AIChE Annual Meeting, Conference Proceedings

Vugrin, Eric D.; Warren, Drake E.; Ehlen, Mark

In recent years, the nation has recognized that critical infrastructure protection should consider not only the prevention of disruptive events, but also the processes that infrastructure systems undergo to maintain functionality following disruptions. This more comprehensive approach has been termed critical infrastructure resilience. Given the occurrence of a particular disruptive event, the resilience of a system to that event is the system's ability to efficiently reduce both the magnitude and duration of the deviation from targeted system performance levels. Under the direction of the U. S. Department of Homeland Security's Science and Technology Directorate, Sandia National Laboratories has developed a comprehensive resilience assessment framework for evaluating the resilience of infrastructure and economic systems. The framework includes a quantitative methodology that measures resilience costs that result from a disruption to infrastructure function. The framework also includes a qualitative analysis methodology that assesses system characteristics affecting resilience to provide insight and direction for potential improvements. This paper describes the resilience assessment framework and demonstrates the utility of the assessment framework through application to two hypothetical scenarios involving the disruption of a petrochemical supply chain by hurricanes.
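One simple way to operationalize the abstract's definition is to integrate the deviation of delivered performance from the target over the disruption and recovery window; the recovery curve and units below are synthetic.

```python
import numpy as np

# Synthetic illustration: resilience cost as the area between targeted and
# delivered performance. The exponential recovery curve is invented.
t = np.linspace(0, 30, 301)                    # days since disruption
target = np.full_like(t, 100.0)                # targeted output (e.g., kt/day)
delivered = 100.0 - 60.0 * np.exp(-t / 7.0)    # drop of 60, ~7-day recovery

deviation = target - delivered
systemic_impact = np.trapz(deviation, t)       # magnitude x duration combined
print(f"systemic impact: {systemic_impact:.0f} unit-days")
```

Both a deeper initial drop (magnitude) and a slower recovery (duration) increase the integral, matching the two components of resilience named above.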

More Details

Impacts to the ethylene supply chain from a hurricane disruption

AIChE Annual Meeting, Conference Proceedings

Downes, Paula S.; Welk, Margaret; Sun, Amy C.; Heinen, Russell

Analysis of chemical supply chains is an inherently complex task, given the dependence of these supply chains on multiple infrastructure systems (e.g. transportation and energy). This effort requires data and information at various levels of resolution, ranging from network-level distribution systems to individual chemical reactions. The U.S. Department of Homeland Security (DHS) has tasked the National Infrastructure Simulation and Analysis Center (NISAC) with developing a chemical infrastructure analytical capability to assess interdependencies and complexities of the nation's critical infrastructure, including the chemical sector. To address this need, the Sandia National Laboratories (Sandia) component of NISAC has integrated its existing simulation and infrastructure analysis capabilities with various chemical industry datasets to create a capability to analyze and estimate the supply chain and economic impacts resulting from large-scale disruptions to the chemical sector. This development effort is ongoing and is currently being funded by the DHS's Science and Technology Directorate. This paper describes the methodology being used to create the capability and the types of data necessary to exercise the capability, and it presents an example analysis focusing on the ethylene portion of the chemical supply chain.

More Details

Modeling the national chlorinated hydrocarbon supply chain and effects of disruption

AIChE Annual Meeting, Conference Proceedings

Welk, Margaret E.; Sun, Amy C.; Downes, Paula S.

Chlorinated hydrocarbons represent the precursors for products ranging from polyvinyl chloride (PVC) and refrigerants to pharmaceuticals. Natural or manmade disruptions that affect the availability of these products nationally have the potential to affect a wide range of markets, from healthcare to construction. Analysis of chemical supply chains is an inherently complex task, given the dependence of these supply chains on multiple infrastructure systems (e.g. transportation and energy). This effort requires data and information at various levels of resolution, ranging from network-level distribution systems to individual chemical reactions. The U.S. Department of Homeland Security (DHS) has tasked the National Infrastructure Simulation and Analysis Center (NISAC) with developing a chemical infrastructure analytical capability to assess interdependencies and complexities of the nation's critical infrastructure, including the chemical sector. To address this need, the Sandia National Laboratories (Sandia) component of NISAC has integrated its existing simulation and infrastructure analysis capabilities with various chemical industry datasets to create a capability to analyze and estimate the supply chain economic impacts resulting from large-scale disruptions to the chemical sector. This development effort is ongoing and is currently being funded by the DHS's Science and Technology Directorate. This paper describes the methodology being used to create the capability and the types of data necessary to exercise the capability, and it presents an example analysis focusing on the chlorinated hydrocarbon portion of the chemical supply chain.

More Details

Process characterization vehicles for 3D integration

Proceedings - Electronic Components and Technology Conference

Campbell, David V.

Assemblies produced by 3D integration, whether fabricated at die or wafer level, involve a large number of post-fab processing steps. Proving in these operations on high-value product has many limitations. This work uses simple surrogate process characterization vehicles, which work around limitations of cost, timeliness of pieceparts, the ability to consider multiple processing options, and insufficient volumes for adequately exercising flows, to collect specific process data for characterization. The test structures easily adapt to a specific product in terms of die dimensions, aspect ratios, pitch and number of interconnects, etc. This results in good fidelity in exercising product-specific processing. The Cyclops vehicle discussed here implements a mirrored layout suitable for stacking to itself wafer-to-wafer, die-to-wafer, or die-to-die. A standardized 2x10 pad test interface allows characterization of any of the integration methods with a single simple setup. This design enables comparative study of the various methods, all using the same basis.

More Details

Yield modeling of 3D integrated wafer scale assemblies

Proceedings - Electronic Components and Technology Conference

Campbell, David V.

3D integration approaches exist for wafer-to-wafer, die-to-wafer, and die-to-die assembly, each with distinct merits. Creation of "seamless" wafer-scale focal plane arrays on the order of 6-8" in diameter drives very demanding yield requirements and a need for yield understanding. This work established a Monte Carlo model of our exploratory architecture in order to assess the trades among the various assembly methods. The model results suggested an optimum die size, number of die stacks per assembly, and number of layers per stack, and quantified the value of sorting for optimizing the assembly process.
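A minimal sketch of the kind of trade study described: compare wafer-to-wafer stacking (layers assembled blind) against die-to-die stacking of known-good die. The yields, stack counts, and idealized perfect assembly are invented, not the model's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(3)
PER_LAYER_YIELD = 0.90   # invented probability that a die on one layer is good
LAYERS = 4               # layers per stack
STACKS = 25              # die stacks per wafer-scale assembly
TRIALS = 100_000

# Wafer-to-wafer: layers are stacked unsorted, so a stack is good only if
# all of its layers happen to be good.
w2w_stack_yield = PER_LAYER_YIELD ** LAYERS

# Die-to-die with sorting: only known-good die are stacked; assembly losses
# are idealized to zero here to isolate the value of sorting.
d2d_stack_yield = 1.0

for name, y in [("wafer-to-wafer", w2w_stack_yield),
                ("die-to-die (sorted)", d2d_stack_yield)]:
    good = rng.random((TRIALS, STACKS)) < y
    assembly_yield = np.mean(good.all(axis=1))   # every stack in assembly good
    print(f"{name}: stack yield {y:.3f}, assembly yield {assembly_yield:.4f}")
```

Even with 90% per-layer yield, the unsorted assembly yield collapses (0.9^4 per stack, raised to the 25th power per assembly), which is why sorting carries so much value in such models.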

More Details

Contribution of optical phonons to thermal boundary conductance

Applied Physics Letters

Beechem, Thomas; Duda, John C.; Hopkins, Patrick E.; Norris, Pamela M.

Thermal boundary conductance (TBC) is a performance determinant for many microsystems due to the numerous interfaces contained within their structure. To assess this transport, theoretical approaches often account for only the acoustic phonons as optical modes are assumed to contribute negligibly due to their low group velocities. To examine this approach, the diffuse mismatch model is reformulated to account for more realistic dispersions containing optical modes. Using this reformulation, it is found that optical phonons contribute to TBC by as much as 80% for a variety of material combinations in the limit of both inelastic and elastic scattering. © 2010 American Institute of Physics.
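For context, the diffuse mismatch model computes a spectral phonon transmission coefficient from the group velocities v and densities of states D on either side of the interface; a common textbook form (which the paper's reformulation extends to realistic dispersions) is

```latex
\alpha_{1\to 2}(\omega) \;=\;
\frac{\sum_{j} v_{2,j}\, D_{2,j}(\omega)}
     {\sum_{j} v_{1,j}\, D_{1,j}(\omega) \;+\; \sum_{j} v_{2,j}\, D_{2,j}(\omega)},
```

where the sums run over phonon branches j on sides 1 and 2; retaining the optical branches in these sums is what allows their contribution to the conductance to be assessed.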

More Details

Evolution of Sandia's Risk Assessment Methodology for Water and Wastewater Utilities (RAM-W™)

World Environmental and Water Resources Congress 2010: Challenges of Change - Proceedings of the World Environmental and Water Resources Congress 2010

Jaeger, Calvin D.; Hightower, Marion M.; Torres, Teresa M.

The initial version of RAM-W was issued in November 2001. The Public Health Security and Bioterrorism Preparedness and Response Act was issued in 2002, and in October 2002 version 2 of RAM-W was distributed to the water sector. In August 2007, RAM-W was revised to be compliant with specific RAMCAP® (Risk Analysis and Management for Critical Asset Protection) requirements. In addition, this version of RAM-W incorporated a number of other changes and improvements to the RAM process. All of these RAM-W versions were manual, paper-based methods that allowed an analyst to estimate security risk for their specific utility. In September 2008, an automated RAM prototype tool was developed that provided the basic RAM framework for critical infrastructures. In 2009, water sector stakeholders identified a need to automate RAM-W, and this development effort was started in January 2009. This presentation will discuss the evolution of the RAM-W approach, its capabilities, and the new automated RAM-W tool (ARAM-W, which will be available in mid-2010). © 2010 ASCE.

More Details

Representation of analysis results involving aleatory and epistemic uncertainty

International Journal of General Systems

Sallaberry, Cedric J.

Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behaviour of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary CDFs (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (e.g. interval analysis, possibility theory, evidence theory or probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterisations of epistemic uncertainty.
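The computational pattern behind such families is a double-loop sampling scheme: an outer loop over epistemic variables and an inner loop over aleatory variables, with each outer sample producing one CDF. A generic probabilistic sketch (with invented distributions, illustrating only the probability-theory characterization) is shown below.

```python
import numpy as np

rng = np.random.default_rng(4)

# Double-loop sampling: each epistemic sample fixes the poorly known
# parameter 'mu'; the inner aleatory loop then yields one CDF for the
# result. Distributions and sizes are invented for illustration.
N_EPISTEMIC, N_ALEATORY = 10, 1000

for _ in range(N_EPISTEMIC):
    mu = rng.uniform(0.8, 1.2)                           # fixed but poorly known
    results = mu * rng.lognormal(0.0, 0.5, N_ALEATORY)   # aleatory variability
    sorted_r = np.sort(results)
    cdf = np.arange(1, N_ALEATORY + 1) / N_ALEATORY
    # (sorted_r, cdf) is one member of the family of CDFs; store or plot it.
    print(f"mu={mu:.3f}: median result = {sorted_r[N_ALEATORY // 2]:.3f}")
```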

More Details

Incorporating uncertainty into probabilistic performance models of concentrating solar power plants

Journal of Solar Energy Engineering, Transactions of the ASME

Ho, Clifford K.; Kolb, Gregory J.

A method for applying probabilistic models to concentrating solar-thermal power plants is described in this paper. The benefits of using probabilistic models include quantification of uncertainties inherent in the system and characterization of their impact on system performance and economics. Sensitivity studies using stepwise regression analysis can identify and rank the most important parameters and processes as a means to prioritize future research and activities. The probabilistic method begins with the identification of uncertain variables and the assignment of appropriate distributions for those variables. Those parameters are then sampled using a stratified method (Latin hypercube sampling) to ensure complete and representative sampling from each distribution. Models of performance, reliability, and cost are then simulated multiple times using the sampled set of parameters. The results yield a cumulative distribution function that can be used to quantify the probability of exceeding (or being less than) a particular value. Two examples, a simple cost model and a more detailed performance model of a hypothetical 100-MWe power tower, are provided to illustrate the methods. Copyright © 2010 by ASME.
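The sampling step can be sketched with SciPy's Latin hypercube sampler; the two uncertain inputs, their ranges, and the toy performance model below are placeholders, not the paper's 100-MWe model.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of two uncertain inputs (placeholder distributions):
# mirror reflectivity in [0.88, 0.95], receiver efficiency in [0.80, 0.90].
sampler = qmc.LatinHypercube(d=2, seed=5)
unit = sampler.random(n=1000)                  # stratified samples in [0, 1)^2
lo, hi = [0.88, 0.80], [0.95, 0.90]
samples = qmc.scale(unit, lo, hi)

# Toy performance model: annual energy ~ product of efficiencies (invented).
annual_energy = 500.0 * samples[:, 0] * samples[:, 1]   # GWh, placeholder scale

# The empirical distribution quantifies the probability of exceeding
# (or falling below) any particular output level.
p5, p95 = np.percentile(annual_energy, [5, 95])
print(f"5th-95th percentile of annual energy: {p5:.1f} - {p95:.1f} GWh")
```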

More Details

Processing effects on microstructure in Er and ErD2 thin-films

Journal of Nuclear Materials

Snow, Clark S.; Kammler, Daniel; Brewer, Luke N.

Erbium metal thin-films have been deposited on molybdenum-on-silicon substrates and then converted to erbium dideuteride (ErD2). Here, we study the effects of deposition temperature (≈300 or 723 K) and deposition rate (1 or 20 nm/s) upon the initial Er metal microstructure and the subsequent ErD2 microstructure. We find that low deposition temperature and low deposition rate lead to small Er metal grain sizes, while high deposition temperature and high deposition rate lead to larger Er metal grain sizes, consistent with published models of metal thin-film growth. ErD2 grain sizes are strongly influenced by the prior-metal grain size, with small metal grains leading to large ErD2 grains. A novel sample preparation technique for electron backscatter diffraction of air-sensitive ErD2 was developed that allowed the quantitative measurement of ErD2 grain size and crystallographic texture. Finer-grained ErD2 showed a strong (1 1 1) fiber texture, whereas larger-grained ErD2 had only weak texture. We hypothesize that this inverse correlation may arise from improved hydrogen diffusion kinetics in the more defective fine-grained metal structure or from improved nucleation in the textured large-grain Er. © 2010 Elsevier B.V. All rights reserved.

More Details

A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data

De Sapio, Vincent

The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and to diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques represent an approach that is particularly well suited to this problem, and provide an effective means of discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework developed automates the synthesis of graphs from a database of job information. It also provides a front end enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
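A minimal sketch of the graph synthesis step using NetworkX: jobs become vertices, and edge weights encode shared compute nodes discounted by separation in time. The job records and the weighting scheme are invented stand-ins for the report's ontology.

```python
import itertools
import networkx as nx

# Invented job records: (job_id, set of compute nodes used, start hour).
jobs = [
    ("job1", {"n01", "n02", "n03"}, 0.0),
    ("job2", {"n02", "n03", "n07"}, 1.5),
    ("job3", {"n11", "n12"}, 2.0),
]

G = nx.Graph()
G.add_nodes_from(j[0] for j in jobs)

# Edge weight: number of shared compute nodes, discounted by time separation.
for (id_a, nodes_a, t_a), (id_b, nodes_b, t_b) in itertools.combinations(jobs, 2):
    shared = len(nodes_a & nodes_b)
    if shared:
        weight = shared / (1.0 + abs(t_a - t_b))   # invented weighting scheme
        G.add_edge(id_a, id_b, weight=weight)

for u, v, d in G.edges(data=True):
    print(u, v, d["weight"])   # job1-job2 share two nodes, 1.5 h apart
```

Once synthesized, a graph like this can be handed directly to clustering or centrality algorithms of the kind the analysis engine provides.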

More Details

Optical holography as an analogue for a neural reuse mechanism

Behavioral and Brain Sciences

Verzi, Stephen J.; Wagner, John S.; Warrender, Christina E.

We propose an analogy between optical holography and neural behavior as a hypothesis about the physical mechanisms of neural reuse. Specifically, parameters in optical holography (frequency, amplitude, and phase of the reference beam) may provide useful analogues for understanding the role of different parameters in determining the behavior of neurons (e.g., frequency, amplitude, and phase of spiking behavior). © 2010 Cambridge University Press.

More Details

Environmental Geographic Information System

Peek, Dennis W.; Helfrich, Donald A.; Gorman, Susan

This document describes how the Environmental Geographic Information System (EGIS) was used, along with externally received data, to create maps for the Site-Wide Environmental Impact Statement (SWEIS) Source Document project. Data quality among the various classes of geographic information system (GIS) data is addressed. A complete listing of map layers used is provided.

More Details

Environmental management system

Salinas, Stephanie A.

The purpose of the Sandia National Laboratories/New Mexico (SNL/NM) Environmental Management System (EMS) is to identify the environmental consequences of SNL/NM activities, products, and services, and to develop objectives and measurable targets for mitigating any potential impacts to the environment. This Source Document discusses the annual EMS process for analysis of environmental aspects and impacts and also provides the fiscal year (FY) 2010 analysis. Further information on the EMS structure, processes, and procedures is provided in the programmatic EMS Manual (PG470222).

More Details

Supplemental Information Source Document: Health and Safety

Avery, Rosemary P.; Johns, William H.

This document provides information on the possible human exposure to environmental media potentially contaminated with radiological materials and chemical constituents from operations at Sandia National Laboratories/New Mexico (SNL/NM). This report is based on the best available information for Calendar Year (CY) 2008, and was prepared in support of future analyses, including those that may be performed as part of the SNL/NM Site-Wide Environmental Impact Statement.

More Details

Long-term Environmental Stewardship

Nagy, Michael D.

The purpose of this Supplemental Information Source Document is to effectively describe Long-Term Environmental Stewardship (LTES) at Sandia National Laboratories/New Mexico (SNL/NM). More specifically, this document describes the LTES and Long-Term Stewardship (LTS) Programs, distinguishes between the LTES and LTS Programs, and summarizes the current status of the Environmental Restoration (ER) Project.

More Details

Sustaining knowledge in the neutron generator community and benchmarking study. Phase II

Huff, Tameka B.; Baldonado, Esther

This report documents the second phase of work under the Sustainable Knowledge Management (SKM) project for the Neutron Generator organization at Sandia National Laboratories. Previous work under this project is documented in SAND2008-1777, Sustaining Knowledge in the Neutron Generator Community and Benchmarking Study. Knowledge management (KM) systems are necessary to preserve critical knowledge within organizations. A successful KM program should focus on people and the process for sharing, capturing, and applying knowledge. The Neutron Generator organization is developing KM systems to ensure knowledge is not lost. A benchmarking study involving site visits to outside industry plus additional resource research was conducted during this phase of the SKM project. The findings presented in this report are recommendations for making an SKM program successful. The recommendations are activities that promote sharing, capturing, and applying knowledge. The benchmarking effort, including the site visits to Toyota and Halliburton, provided valuable information on how the SEA KM team could incorporate a KM solution for not just the neutron generators (NG) community but the entire laboratory. The laboratory needs a KM program that allows members of the workforce to access, share, analyze, manage, and apply knowledge. KM activities, such as communities of practice (COP) and sharing best practices, provide a solution towards creating an enabling environment for KM. As more and more people leave organizations through retirement and job transfer, the need to preserve knowledge is essential. Creating an environment for the effective use of knowledge is vital to achieving the laboratory's mission.

More Details

Adapting ORAP to wind plants : industry value and functional requirements

Strategic Power Systems (SPS) was contracted by Sandia National Laboratories to assess the feasibility of adapting their ORAP (Operational Reliability Analysis Program) tool for deployment to the wind industry. ORAP for Wind is proposed for use as the primary data source for the CREW (Continuous Reliability Enhancement for Wind) database which will be maintained by Sandia to enable reliability analysis of US wind fleet operations. The report primarily addresses the functional requirements of the wind-based system. The SPS ORAP reliability monitoring system has been used successfully for over twenty years to collect RAM (Reliability, Availability, Maintainability) and operations data for benchmarking and analysis of gas and steam turbine performance. This report documents the requirements to adapt the ORAP system for the wind industry. It specifies which existing ORAP design features should be retained, as well as key new requirements for wind. The latter includes alignment with existing and emerging wind industry standards (IEEE 762, ISO 3977 and IEC 61400). There is also a comprehensive list of thirty critical-to-quality (CTQ) functional requirements which must be considered and addressed to establish the optimum design for wind.

More Details

Supplemental information source document : socioeconomics

Sedore, Lora J.

This document provides information on expenditures and staffing levels at Sandia National Laboratories/New Mexico (SNL/NM). This report is based on the best available information obtained from Sandia Corporation for Fiscal Years 2008 and 2009, and was prepared in support of future analyses, including those that may be performed as part of the SNL/NM Site-Wide Environmental Impact Statement.

More Details

A modal approach to modeling spatially distributed vibration energy dissipation

Segalman, Daniel J.

The nonlinear behavior of mechanical joints is a confounding element in modeling the dynamic response of structures. Though there has been some progress in recent years in modeling individual joints, modeling the full structure with myriad frictional interfaces has remained an obstinate challenge. A strategy is suggested for structural dynamics modeling that can account for the combined effect of interface friction distributed spatially about the structure. This approach accommodates the following observations: (1) At small to modest amplitudes, the nonlinearity of jointed structures is manifest primarily in the energy dissipation - visible as vibration damping; (2) Correspondingly, measured vibration modes do not change significantly with amplitude; and (3) Significant coupling among the modes does not appear to result at modest amplitudes. The mathematical approach presented here postulates the preservation of linear modes and invests all the nonlinearity in the evolution of the modal coordinates. The constitutive form selected is one that works well in modeling spatially discrete joints. When compared against a mathematical truth model, the distributed dissipation approximation performs well.
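The postulated structure can be sketched as uncoupled modal oscillators whose only nonlinearity is an amplitude-dependent dissipation term. The power-law damping form and all constants below are illustrative stand-ins, not the report's constitutive model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One modal coordinate q with linear stiffness and a nonlinear, joint-like
# dissipation force ~ sign(qdot) * |qdot|^n. Exponent and coefficients are
# invented for illustration.
OMEGA = 2 * np.pi * 50.0   # modal frequency, rad/s
C_NL, N_EXP = 0.5, 1.5     # placeholder dissipation coefficient and exponent

def rhs(t, y):
    q, qdot = y
    damping = C_NL * np.sign(qdot) * np.abs(qdot) ** N_EXP
    return [qdot, -OMEGA**2 * q - damping]

sol = solve_ivp(rhs, (0.0, 1.0), [1e-3, 0.0], max_step=1e-4)
q = sol.y[0]
tenth = q.size // 10
print("early/late amplitude:",
      np.max(np.abs(q[:tenth])), np.max(np.abs(q[-tenth:])))
```

Because the mode shape is preserved and only the modal coordinate dissipates energy, the ring-down amplitude decays while the oscillation frequency stays essentially fixed, mirroring observations (1) and (2) above.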

More Details

Control system devices : architectures and supply channels overview

Schwartz, Moses; Mulder, John; Trent, Jason; Atkins, William D.

This report describes a research project to examine the hardware used in automated control systems like those that control the electric grid. This report provides an overview of the vendors, architectures, and supply channels for a number of control system devices. The research itself represents an attempt to probe more deeply into the area of programmable logic controllers (PLCs) - the specialized digital computers that control individual processes within supervisory control and data acquisition (SCADA) systems. The report (1) provides an overview of control system networks and PLC architecture, (2) furnishes profiles for the top eight vendors in the PLC industry, (3) discusses the communications protocols used in different industries, and (4) analyzes the hardware used in several PLC devices. As part of the project, several PLCs were disassembled to identify constituent components. That information will direct the next step of the research, which will greatly increase our understanding of PLC security in both the hardware and software areas. Such an understanding is vital for discerning the potential national security impact of security flaws in these devices, as well as for developing proactive countermeasures.

More Details

The first steps towards a standardized methodology for CSP electricity yield analysis

Ho, Clifford K.

The authors have formed a temporary international core team to prepare a SolarPACES activity aimed at standardizing a methodology for electricity yield analysis of CSP plants. This core team has drafted a structural framework for a standardized methodology and for the standardization process itself. The structural framework must ensure that the standardized methodology is applicable to all conceivable CSP systems, can be used at all stages of the project development process, and covers all aspects affecting the electricity yield of CSP plants. Since the development of the standardized methodology is a complex task, the standardization process has been structured into work packages, and numerous international experts covering all aspects of CSP yield analysis have been asked to contribute to this process. These experts have teamed up in an international working group with the objective of developing, documenting, and publishing standardized methodologies for CSP yield analysis. This paper summarizes the intended standardization process and presents the structural framework of the methodology for CSP yield analysis.

More Details

Uranium for hydrogen storage applications : a materials science perspective

Kolasinski, Robert; Shugard, Andrew D.; Tewell, Craig R.; Cowgill, Donald F.

Under appropriate conditions, uranium will form a hydride phase when exposed to molecular hydrogen. This makes it quite valuable for a variety of applications within the nuclear industry, particularly as a storage medium for tritium. However, some aspects of the U+H system have been characterized much less extensively than those of other common metal hydrides (particularly Pd+H), likely due to the radiological concerns associated with handling. To assess the present understanding, this report reviews the existing literature on the uranium hydride system and identifies gaps in current knowledge. Four major areas are emphasized: {sup 3}He release from uranium tritides, the effects of surface contamination on H uptake, the kinetics of hydride phase formation, and thermal desorption properties. Our review of these areas is then used to outline potential avenues of future research.
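As one example of the quantities such a review must reconcile, hydride thermodynamics are commonly summarized by a van 't Hoff relation for the plateau (equilibrium) pressure, ln P = A - B/T. The sketch below uses placeholder coefficients, not recommended values for UH{sub 3}.

```python
# van 't Hoff sketch for hydride plateau pressure; A and B are
# placeholders, not vetted uranium-hydride constants.
import math

A, B = 20.0, 9000.0   # assumed; B plays the role of dH/R in kelvin

def plateau_pressure(T_kelvin):
    """Equilibrium pressure (arbitrary units) at temperature T."""
    return math.exp(A - B / T_kelvin)

for T in (500.0, 600.0, 700.0):
    print(f"{T:.0f} K -> {plateau_pressure(T):.3g}")
```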

More Details

Living off-grid in an arid environment without a well : can residential and commercial/industrial water harvesting help solve water supply problems?

Axness, Carl L.

Our family of three lives comfortably off-grid without a well in an arid region ({approx}9 in/yr, average). This year we expect to achieve water sustainability, with harvested or grey water supporting all of our needs (including a garden and trees) except drinking water (about 7 gallons/week). We discuss our implementation and its implication that, for an investment of a few thousand dollars, many single-family homes could supply a large portion of their own water needs, significantly reducing municipal water demand. Generally, harvested water is very low in minerals and pollutants, but may need treatment for microbes in order to be potable. This may be addressed via filters, UV irradiation, or chemical treatment (bleach). Turning to the possibility of commercial water harvesting from malls, big-box stores, and factories, we examine two cities with water supply problems and ask whether harvesting could supply a significant portion of their potable water. We also consider the implications of separate municipal water lines for potable and clean non-potable uses, and explore implications for future building codes.
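A quick estimate shows why a modest roof catchment goes a long way even at {approx}9 in/yr. The roof area and capture efficiency below are our assumed round numbers, not figures from the paper.

```python
# Back-of-the-envelope roof harvest for a ~9 in/yr climate.
GALLONS_PER_SQFT_INCH = 0.623   # one inch of rain on one square foot

roof_sqft = 2000.0     # collection area (assumed)
rain_in_per_yr = 9.0   # annual rainfall from the abstract
efficiency = 0.85      # first-flush and gutter losses (assumed)

gallons_per_year = roof_sqft * rain_in_per_yr * GALLONS_PER_SQFT_INCH * efficiency
print(f"{gallons_per_year:,.0f} gal/yr, or {gallons_per_year / 52:,.0f} gal/week")
```

Roughly 9,500 gallons per year from a 2,000 ft^2 roof works out to about 180 gallons per week for non-potable household and garden use.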

More Details

Why Models Don't Forecast

Mcnamara, Laura A.

The title of this paper, Why Models Don't Forecast, has a deceptively simple answer: models don't forecast because people forecast. Yet this statement has significant implications for computational social modeling and simulation in national security decision making. Specifically, it points to the need for robust approaches to the problem of how people and organizations develop, deploy, and use computational modeling and simulation technologies. In the next twenty or so pages, I argue that the challenge of evaluating computational social modeling and simulation technologies extends far beyond verification and validation, and should include the relationship between a simulation technology and the people and organizations using it. The challenge is not just one of evaluating a technology's usability and usefulness, but extends to assessing how new modeling and simulation technologies shape human and organizational judgment. The robust and systematic evaluation of organizational decision making processes, and of the role of computational modeling and simulation technologies therein, is a critical problem for the organizations that promote, fund, develop, and seek to use computational social science tools, methods, and techniques in high-consequence decision making.

More Details

Plasma-materials interaction results at Sandia National Laboratories

Kolasinski, Robert; Buchenauer, D.A.; Cowgill, Donald F.; Karnesky, Richard A.; Whaley, Josh A.; Wampler, William R.

This overview of Plasma Materials Interaction (PMI) activities covers: (1) hydrogen diffusion and trapping in metals - (a) growth of hydrogen precipitates in tungsten PFCs, (b) temperature dependence of deuterium retention at displacement damage, (c) D retention in W at elevated temperatures; (2) permeation - (a) gas-driven permeation results for W/Mo/SiC, (b) a plasma-driven permeation test stand for TPE; and (3) surface studies - (a) H-sensor development, (b) adsorption of oxygen and hydrogen on beryllium surfaces.
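For the gas-driven permeation work listed above, the standard diffusion-limited estimate is Richardson's law, J = Phi * sqrt(P) / d, with an Arrhenius permeability Phi. The sketch below is generic; the pre-exponential and activation energy are placeholders, not measured values for W, Mo, or SiC.

```python
# Diffusion-limited (Richardson) gas-driven permeation estimate.
# PHI0 and E_ACT are assumed placeholders, not measured constants.
import math

PHI0 = 1.0e-7    # mol m^-1 s^-1 Pa^-0.5, assumed pre-exponential
E_ACT = 80e3     # J/mol, assumed activation energy
R = 8.314        # gas constant, J mol^-1 K^-1

def permeation_flux(T_kelvin, pressure_pa, thickness_m):
    """Steady-state flux (mol m^-2 s^-1) through a membrane."""
    phi = PHI0 * math.exp(-E_ACT / (R * T_kelvin))
    return phi * math.sqrt(pressure_pa) / thickness_m

print(permeation_flux(900.0, 1.0e5, 1.0e-3))
```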

More Details

Antarctica X-band MiniSAR Crevasse Detection Radar : draft final report

Bickel, Douglas L.; Sander, Grant J.

This document is the final report for the 2009 Antarctica Crevasse Detection Radar (CDR) Project. This portion of the project is referred to internally as Phase 2 and is a follow-on to the Phase 1 work reported in [1]. Phase 2 involved modifying the Sandia National Laboratories MiniSAR system used in Phase 1 to operate aboard an LC-130 aircraft flown in Antarctica in October and November of 2009. Experiments from the 2006 flights were repeated, and a couple of new flight tests were conducted to examine the effect of colder snow and ice on the radar signatures of 'deep field' sites. This document includes discussion of the hardware development, system capabilities, and results from the data collections in Antarctica during the fall of 2009.

More Details

An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement

Owen, Steven J.

Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, produces a mesh that is often too coarse in some areas and over-refined in others. Because it generates 27 new hexes in place of a single hex, it offers little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that instead employs a 2-refinement method, in which the hex to be refined is divided into eight new hexes. This allows much greater control over mesh density than a 3-refinement procedure, yielding a mesh that is efficient for analysis: high element density in specific locations and reduced density elsewhere. In addition, the tool can be used effectively for inside-out hexahedral grid-based schemes that use Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. The algorithm uses a two-layer transition zone to increase element quality and to keep transitions from lower to higher mesh densities smooth, and templates are introduced to allow both convex and concave refinement.
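A minimal sketch of the basic 2-refinement split is shown below: one hex becomes eight children by inserting edge, face, and body midpoints on a 3 x 3 x 3 lattice. The (i, j, k) corner convention is assumed here for illustration; the paper's transition layers and convex/concave templates are not reproduced.

```python
# One-hex 2-refinement: eight children via midpoint subdivision.
import numpy as np

def refine_hex(corners):
    """corners: (2, 2, 2, 3) array of hex corner coordinates on an
    (i, j, k) lattice. Returns eight child hexes in the same layout."""
    # Build the 3 x 3 x 3 lattice of points by trilinear interpolation.
    t = np.array([0.0, 0.5, 1.0])
    lattice = np.zeros((3, 3, 3, 3))
    for a, u in enumerate(t):
        for b, v in enumerate(t):
            for c, w in enumerate(t):
                lattice[a, b, c] = (
                    (1-u)*(1-v)*(1-w)*corners[0,0,0] + u*(1-v)*(1-w)*corners[1,0,0]
                  + (1-u)*v*(1-w)*corners[0,1,0]   + u*v*(1-w)*corners[1,1,0]
                  + (1-u)*(1-v)*w*corners[0,0,1]   + u*(1-v)*w*corners[1,0,1]
                  + (1-u)*v*w*corners[0,1,1]       + u*v*w*corners[1,1,1]
                )
    # Each child hex is a 2 x 2 x 2 block of neighboring lattice points.
    return [lattice[a:a+2, b:b+2, c:c+2]
            for a in range(2) for b in range(2) for c in range(2)]

unit = np.array([[[[0,0,0],[0,0,1]], [[0,1,0],[0,1,1]]],
                 [[[1,0,0],[1,0,1]], [[1,1,0],[1,1,1]]]], float)
print(len(refine_hex(unit)))   # 8 children
```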

More Details

Field-structured chemiresistors : tunable sensors for chemical-switch arrays

Read, Douglas

We have developed a significantly improved composite material for chemiresistors, which are resistance-based sensors for volatile organic compounds. This material is a polymer composite containing Au-coated magnetic particles organized into electrically conducting pathways by magnetic fields. The improved material overcomes the various problems inherent to conventional carbon-black chemiresistors while achieving an unprecedented magnitude of response. When exposed to chemical vapors, the polymer swells only slightly, yet this is amplified into large, reversible resistance changes - as much as 9 decades at a swelling of only 1.5%. These conductor-insulator transitions occur over such a narrow range of analyte vapor concentration that these devices can be described as chemical switches. We demonstrate that the sensitivity and response range of these sensors can be tailored widely by controlling the stress within the composite, including through the application of a magnetic field. Such tailorable sensors can be used to create sensor arrays that accurately determine analyte concentration over a broad concentration range, or to create logic circuits that signal a particular chemical environment. It is shown, through combined mass-sorption and conductance measurements, that the response curve of any individual sensor is a function of polymer swelling alone. This has the important implication that individual sensor calibration requires testing with only a single analyte. In addition, we demonstrate a method for analyte discrimination based on sensor response kinetics, which is independent of analyte concentration and allows discrimination even between chemically similar analytes. Lastly, additional variables associated with the composite and their effects on sensor response are explored.
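The switch-like behavior and single-analyte calibration described above can be captured in a toy model: a steep sigmoid in log-resistance versus swelling, composed with an analyte-specific swelling-per-concentration coefficient. The functional form, threshold, and coefficients below are illustrative assumptions, not fitted sensor data.

```python
# Toy chemical-switch model: resistance jumps ~9 decades once polymer
# swelling crosses a percolation-like threshold. All values assumed.
import numpy as np

def resistance_ohms(swelling, r_low=1e3, decades=9.0,
                    threshold=0.015, width=0.002):
    """Resistance as a function of volumetric swelling fraction alone."""
    step = 1.0 / (1.0 + np.exp(-(swelling - threshold) / width))
    return r_low * 10.0 ** (decades * step)

# Per-analyte behavior enters only through swelling per unit
# concentration (hypothetical coefficient below), so calibrating with
# one analyte fixes the swelling-to-resistance curve for all of them.
swelling_per_ppm = 3.0e-5
for ppm in (100, 400, 500, 600, 1000):
    print(ppm, f"{resistance_ohms(ppm * swelling_per_ppm):.2e}")
```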

More Details