Publications


Interoperable mesh components for large-scale, distributed-memory simulations

Journal of Physics: Conference Series

Devine, Karen; Diachin, L.; Kraftcheck, J.; Jansen, K.E.; Leung, Vitus J.; Luo, X.; Miller, M.; Ollivier-Gooch, C.; Ovcharenko, A.; Sahni, O.; Shephard, M.S.; Tautges, T.; Xie, T.; Zhou, M.

SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications. © 2009 IOP Publishing Ltd.

More Details

Type Ia supernovae: Advances in large scale simulation

Journal of Physics: Conference Series

Woosley, S.E.; Almgren, A.S.; Aspden, A.J.; Bell, J.B.; Kasen, D.; Kerstein, Alan R.; Ma, H.; Nonaka, A.; Zingale, M.

There are two principal scientific objectives in the study of Type Ia supernovae - first, a better understanding of these complex explosions from as near first principles as possible, and second, enabling the more accurate utilization of their emission to measure distances in cosmology. Both tasks lend themselves to large scale numerical simulation, yet take us beyond the current frontiers in astrophysics, combustion science, and radiation transport. Their study requires novel approaches and the creation of new, highly scalable codes. © 2009 IOP Publishing Ltd.

More Details

Formation of a fin trailing vortex in undisturbed and interacting flows

39th AIAA Fluid Dynamics Conference

Beresh, Steven J.; Henfling, John F.; Spillers, Russell

An experiment using fins mounted on a wind tunnel wall has examined the proposition that the interaction between axially-separated aerodynamic control surfaces fundamentally results from an angle of attack superposed upon the downstream fin by the vortex shed from the upstream fin. Particle Image Velocimetry data captured on the surface of a single fin show the formation of the trailing vortex first as a leading-edge vortex, then becoming a tip vortex as it propagates to the fin's spanwise edge. Data acquired on the downstream fin surface in the presence of a trailing vortex shed from an upstream fin may remove this impinging vortex by subtracting its mean velocity field as measured in single-fin experiments, after which the vortex forming on the downstream fin's leeside becomes evident. The properties of the downstream fin's lifting vortex appear to be determined by the total angle of attack imposed upon it, which is a combination of its physical fin cant and the angle of attack induced by the impinging vortex, and are consistent with those of a single fin at equivalent angle of attack.
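The field-subtraction step described in the abstract reduces to a componentwise difference of velocity fields. A minimal sketch follows; the arrays and values are invented placeholders, not PIV data from the experiment.

```python
# Hedged sketch: subtract the impinging vortex's mean velocity field (as
# measured in single-fin runs) from the downstream-fin field, leaving the
# vortex forming on the downstream fin itself. Values are illustrative only.
downstream = [[1.0, 2.0], [3.0, 4.0]]        # measured on the downstream fin
mean_single_fin = [[0.5, 1.5], [2.5, 3.5]]   # impinging vortex alone

residual = [[d - m for d, m in zip(d_row, m_row)]
            for d_row, m_row in zip(downstream, mean_single_fin)]
```

In practice each entry would be a velocity vector on a PIV grid, but the operation is the same elementwise subtraction.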

More Details

DOE's Institute for Advanced Architecture and Algorithms: An application-driven approach

Journal of Physics: Conference Series

Murphy, Richard C.

This paper describes an application-driven methodology for understanding the impact of future architecture decisions at the end of the MPP era. Fundamental transistor device limitations, combined with application performance characteristics, have driven the switch to multicore/multithreaded architectures. Designing large-scale supercomputers to match application demands is particularly challenging since performance characteristics are highly counter-intuitive: in fact, data movement, more than FLOPS, dominates. This work discusses some basic performance analysis for a set of DOE applications, the limits of CMOS technology, and the impact of both on future architectures. © 2009 IOP Publishing Ltd.

More Details

A rapidly deployable virtual presence extended defense system

2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009

Koch, Mark W.; Giron, Casey; Nguyen, Hung D.

We have developed algorithms for a virtual presence and extended defense (VPED) system that automatically learns the detection map of a deployed sensor field without a priori knowledge of the local terrain. The VPED system is a network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has a limited detection range, but a network of pods can form a virtual perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus a network in the field may not have the same performance as a network designed in the lab. To solve this problem we automatically estimate a network's detection performance as it is being constructed. We demonstrate results using simulated and real data. © 2009 IEEE.

More Details

Causal factors of non-fickian dispersion explored through measures of aquifer connectivity

IAMG 2009 - Computational Methods for the Earth, Energy and Environmental Sciences

Klise, Katherine A.; Mckenna, Sean A.; Tidwell, Vincent C.; Lane, Jonathan W.; Weissmann, Gary S.; Wawrzyniec, Tim F.; Nichols, Elizabeth M.

While connectivity is an important aspect of heterogeneous media, methods to measure and simulate connectivity are limited. For this study, we use natural aquifer analogs developed through lidar imagery to track the importance of connectivity on dispersion characteristics. A 221.8 cm by 50 cm section of a braided sand and gravel deposit of the Ceja Formation in Bernalillo County, New Mexico is selected for the study. The use of two-point (SISIM) and multipoint (Snesim and Filtersim) stochastic simulation methods are then compared based on their ability to replicate dispersion characteristics using the aquifer analog. Detailed particle tracking simulations are used to explore the streamline-based connectivity that is preserved using each method. Connectivity analysis suggests a strong relationship between the length distribution of sand and gravel facies along streamlines and dispersion characteristics.

More Details

Current trends in parallel computation and the implications for modeling and optimization

Computer Aided Chemical Engineering

Siirola, John D.

More Details

Microresonant impedance transformers

Proceedings - IEEE Ultrasonics Symposium

Wojciechowski, Kenneth E.; Olsson, Roy H.; Tuck, Melanie R.; Stevens, James E.

Widely applied to RF filtering, AlN microresonators offer the ability to perform additional functions such as impedance matching and single-ended-to-differential conversion. This paper reports microresonators capable of transforming the characteristic impedance from input to output over a wide range while performing low loss filtering. Microresonant transformer theory of operation and equivalent circuit models are presented and compared with measured 2- and 3-port devices. Impedance transformation ratios as large as 18:1 are realized with insertion losses less than 5.8 dB, limited by parasitic shunt capacitance. These impedance transformers occupy less than 0.052 mm², orders of magnitude smaller than competing technologies in the VHF and UHF frequency bands. ©2009 IEEE.

More Details

Analysis of nuclear spectra with non-linear techniques and its implementation in the Cambio software application

Journal of Radioanalytical and Nuclear Chemistry

Lasche, George; Coldwell, Robert L.

Popular nuclear spectral analysis applications typically use either the results of a peak search or of the best match of a set of linear templates as the basis for their conclusions. These well-proven methods work well in controlled environments. However, they often fail in cases where the critical information resides in well-masked peaks, where the data is sparse and good statistics cannot be obtained, and where little is known about the detector that was used. These conditions are common in emergency analysis situations, but are also common in radio-assay situations where background radiation is high and time is limited. To address these limitations, non-linear fitting techniques have been introduced into an application called "Cambio" suitable for public use. With this approach, free parameters are varied in iterative steps to converge to values that minimize differences between the actual data and the approximating functions that correspond to the values of the parameters. For each trial nuclide, a single parameter is varied that often has a strongly non-linear dependence on other, simultaneously varied parameters for energy calibration, attenuation by intervening matter, detector resolution, and peak-shape deviations. A brief overview of this technique and its implementation is presented, together with an example of its performance and differences from more common methods of nuclear spectral analysis. © Akadémiai Kiadó, 2009.
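The iterative strategy the abstract describes, varying a free parameter in shrinking steps until the data-model mismatch is minimized, can be illustrated in miniature. This is not Cambio's code; the fixed-center Gaussian peak model and the synthetic data are invented for the sketch.

```python
import math

# Fixed-center Gaussian "peak" with one free parameter (the amplitude);
# a stand-in for the much richer models the application actually fits.
def model(x, amp):
    return amp * math.exp(-(x - 5.0) ** 2 / 2.0)

xs = [3.0, 4.0, 5.0, 6.0, 7.0]
data = [model(x, 2.0) for x in xs]     # synthetic "spectrum", true amplitude 2.0

def chi2(amp):
    # sum of squared differences between data and the trial model
    return sum((d - model(x, amp)) ** 2 for x, d in zip(xs, data))

amp, step = 0.5, 1.0
while step > 1e-9:
    for trial in (amp + step, amp - step):
        if chi2(trial) < chi2(amp):    # accept any improving step
            amp = trial
            break
    else:
        step /= 2.0                    # no improvement: refine the step size
```

The loop converges to the true amplitude; the real application does the same kind of iteration simultaneously over calibration, attenuation, resolution, and peak-shape parameters.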

More Details

Ten million and one penguins, or, lessons learned from booting millions of virtual machines on HPC systems

Minnich, Ronald G.; Rudish, Donald W.

In this paper we describe Megatux, a set of tools we are developing for rapid provisioning of millions of virtual machines and for controlling and monitoring them, as well as what we've learned from booting one million Linux virtual machines on the Thunderbird (4660 nodes) and 550,000 Linux virtual machines on the Hyperion (1024 nodes) clusters. As might be expected, our tools use hierarchical structures. In contrast to existing HPC tools, ours do not require perfect hardware, do not require that all systems be booted at the same time, and do not rely on static configuration files that define the role of each node. While we believe these tools will be useful for future HPC systems, we are using them today to construct botnets. Botnets have been in the news recently, as discoveries of their scale (millions of infected machines for even a single botnet), their reach (global), and their impact on organizations (devastating in financial costs and time lost to recovery) have become more apparent. A distinguishing feature of botnets is their emergent behavior: fairly simple operational rule sets can result in behavior that cannot be predicted. In general, there is no reducible understanding of how a large network will behave ahead of 'running it'. 'Running it' means observing the actual network in operation or simulating/emulating it. Unfortunately, this behavior is only seen at scale, i.e., when at minimum tens of thousands of machines are infected. To add to the problem, botnets typically change at least 11% of the machines they are using in any given week, and this changing population is an integral part of their behavior. The use of virtual machines to assist in the forensics of malware is not new to the cyber security world. Reverse engineering techniques often use virtual machines in combination with code debuggers. Nevertheless, this task largely remains a manual process to get past code obfuscation and is inherently slow. 
As part of our cyber security work at Sandia National Laboratories, we are striving to understand the global network behavior of botnets. We are planning to take existing botnets, as found in the wild, and run them on HPC systems. We have turned to HPC systems to support the creation and operation of millions of Linux virtual machines as a means of observing the interaction of the botnet and other noninfected hosts. We started out using traditional HPC tools, but these tools are designed for a much smaller scale, typically topping out at one to ten thousand machines. HPC programming libraries and tools also assume complete connectivity between all nodes, with the attendant configuration files and data structures to match; this assumption holds up very poorly on systems with millions of nodes.
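A back-of-the-envelope calculation shows why hierarchical control structures make million-VM scales tractable: with a fan-out of k children per controller, reaching N machines needs only about log base k of N levels. The fan-out of 32 below is an assumed value for illustration, not a Megatux parameter.

```python
import math

def control_levels(n_vms, fanout=32):
    """Depth of a k-ary control tree needed to reach n_vms leaf machines."""
    return math.ceil(math.log(n_vms) / math.log(fanout))
```

At fan-out 32, both the one-million-VM Thunderbird run and the 550,000-VM Hyperion run fit within a four-level tree, since 32 to the 4th power exceeds one million.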

More Details

Nonlinear slewing spacecraft control based on exergy, power flow, and static and dynamic stability

Journal of the Astronautical Sciences

Robinett, Rush D.; Wilson, David G.

This paper presents a new nonlinear control methodology for slewing spacecraft, which provides both necessary and sufficient conditions for stability by identifying the stability boundaries, rigid body modes, and limit cycles. Conservative Hamiltonian system concepts, which are equivalent to static stability of airplanes, are used to find and deal with the static stability boundaries: rigid body modes. The application of exergy and entropy thermodynamic concepts to the work-rate principle provides a natural partitioning through the second law of thermodynamics of power flows into exergy generator, dissipator, and storage for Hamiltonian systems that is employed to find the dynamic stability boundaries: limit cycles. This partitioning process enables the control system designer to directly evaluate and enhance the stability and performance of the system by balancing the power flowing into versus the power dissipated within the system subject to the Hamiltonian surface (power storage). Relationships are developed between exergy, power flow, static and dynamic stability, and Lyapunov analysis. The methodology is demonstrated with two illustrative examples: (1) a nonlinear oscillator with sinusoidal damping and (2) a multi-input, multi-output three-axis slewing spacecraft that employs proportional-integral-derivative tracking control with numerical simulation results.

More Details

Using detailed maps of science to identify potential collaborations

Scientometrics

Boyack, Kevin W.

Research on the effects of collaboration in scientific research has been increasing in recent years. A variety of studies have been done at the institution and country level, many with an eye toward policy implications. However, the question of how to identify the most fruitful targets for future collaboration in high-performing areas of science has not been addressed. This paper presents a method for identifying targets for future collaboration between two institutions. The utility of the method is shown in two different applications: identifying specific potential collaborations at the author level between two institutions, and generating an index that can be used for strategic planning purposes. Identification of these potential collaborations is based on finding authors that belong to the same small paper-level community (or cluster of papers), using a map of science and technology containing nearly 1 million papers organized into 117,435 communities. The map used here is also unique in that it is the first map to combine the ISI Proceedings database with the Science and Social Science Indexes at the paper level. © 2008 Springer Science+Business Media B.V.
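The core matching step the abstract describes can be sketched on toy data: authors from two institutions are flagged as potential collaborators when their papers fall in the same paper-level community. The authors, institutions, and community ids below are invented; the real map organizes nearly one million papers into 117,435 communities.

```python
from collections import defaultdict

# (author, institution, paper-level community id) -- invented illustration
papers = [
    ("alice", "A", 17),
    ("bob",   "B", 17),
    ("carol", "A", 99),
    ("dave",  "B", 42),
]

# Group authors of each institution by the community their papers land in.
by_community = defaultdict(lambda: {"A": set(), "B": set()})
for author, inst, comm in papers:
    by_community[comm][inst].add(author)

# A cross-institution pair sharing a community is a candidate collaboration.
pairs = sorted((a, b)
               for groups in by_community.values()
               for a in groups["A"]
               for b in groups["B"])
# pairs -> [('alice', 'bob')]
```

Counting such pairs per institution pair yields the kind of strategic-planning index the paper proposes.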

More Details

TrustBuilder2: A reconfigurable framework for trust negotiation

IFIP Advances in Information and Communication Technology

Lee, Adam J.; Winslett, Marianne; Perano, Kenneth J.

To date, research in trust negotiation has focused mainly on the theoretical aspects of the trust negotiation process, and the development of proof of concept implementations. These theoretical works and proofs of concept have been quite successful from a research perspective, and thus researchers must now begin to address the systems constraints that act as barriers to the deployment of these systems. To this end, we present TrustBuilder2, a fully-configurable and extensible framework for prototyping and evaluating trust negotiation systems. TrustBuilder2 leverages a plug-in based architecture, extensible data type hierarchy, and flexible communication protocol to provide a framework within which numerous trust negotiation protocols and system configurations can be quantitatively analyzed. In this paper, we discuss the design and implementation of TrustBuilder2, study its performance, examine the costs associated with flexible authorization systems, and leverage this knowledge to identify potential topics for future research, as well as a novel method for attacking trust negotiation systems.

More Details

Cutting Efficiency of a Single PDC Cutter on Hard Rock

Journal of Canadian Petroleum Technology

Hareland, G.; Yan, W.; Nygaard, R.; Wise, Jack L.

Polycrystalline diamond compact (PDC) bits have gained wide popularity in the petroleum industry for drilling soft and moderately firm formations. However, in hard formation applications, the PDC bit still has limitations, even though recent developments in PDC cutter designs and materials steadily improve PDC bit performance. The limitations of PDC bits for drilling hard formations are an important technical obstacle that must be overcome before using the PDC bit to develop competitively priced electricity from enhanced geothermal systems, as well as deep continental gas fields. Enhanced geothermal energy is a very promising source for generating electrical energy and, therefore, there is an urgent need to further enhance PDC bit performance in hard formations. In this paper, the cutting efficiency of the PDC bit has been analyzed based on the development of an analytical single-PDC-cutter force model. The cutting efficiency of a single PDC cutter is defined as the ratio of the volume removed by a cutter over the force required to remove that volume of rock. The cutting efficiency is found to be a function of the back rake angle, the depth of cut, and rock properties such as the angle of internal friction. The highest cutting efficiency is found to occur at specific back rake angles of the cutter based on the material properties of the rock. The cutting efficiency directly relates to the internal angle of friction of the rock being cut. The results of this analysis can be integrated to study PDC bit performance. They can also provide a guideline for the application and design of PDC bits for specific rocks.
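The efficiency definition in the abstract reduces to a one-line ratio. The units below are illustrative assumptions, and the force model itself (a function of back rake angle, depth of cut, and the rock's internal friction angle) is developed in the paper and not reproduced here.

```python
def cutting_efficiency(volume_removed_cm3, cutter_force_n):
    """Cutting efficiency: volume of rock removed per unit of cutter force."""
    return volume_removed_cm3 / cutter_force_n
```

For example, removing 2.0 cubic centimetres of rock at a cutter force of 500 N gives an efficiency of 0.004 cm³/N; the paper's analysis then asks which back rake angle maximizes this ratio for a given rock.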

More Details

An overview of the evolution of human reliability analysis in the context of probabilistic risk assessment

Forester, John A.

Since the Reactor Safety Study in the early 1970s, human reliability analysis (HRA) has been evolving towards a better ability to account for the factors and conditions that can lead humans to take unsafe actions and thereby provide better estimates of the likelihood of human error for probabilistic risk assessments (PRAs). The purpose of this paper is to provide an overview of recent reviews of operational events and advances in the behavioral sciences that have impacted the evolution of HRA methods and contributed to improvements. The paper discusses the importance of human errors in complex human-technical systems, examines why humans contribute to accidents and unsafe conditions, and discusses how lessons learned over the years have changed the perspective and approach for modeling human behavior in PRAs of complicated domains such as nuclear power plants. It is argued that it has become increasingly more important to understand and model the more cognitive aspects of human performance and to address the broader range of factors that have been shown to influence human performance in complex domains. The paper concludes by addressing the current ability of HRA to adequately predict human failure events and their likelihood.

More Details

Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan : ASC software quality engineering practices Version 3.0

Turgeon, Jennifer; Minana, Molly A.; Pilch, Martin

The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress toward achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

More Details

Baseline Ecological Footprint of Sandia National Laboratories, New Mexico

Mizner, Jack H.

The Ecological Footprint Model is a mechanism for measuring the environmental effects of operations at Sandia National Laboratories in Albuquerque, New Mexico (SNL/NM). This analysis quantifies environmental impact associated with energy use, transportation, waste, land use, and water consumption at SNL/NM for fiscal year 2005 (FY05). Since SNL/NM's total ecological footprint (96,434 gha) is greater than the waste absorption capacity of its landholdings (338 gha), it created an ecological deficit of 96,096 gha. This deficit is equal to 886,470 ha, or about 3,423 square miles, of Pinyon-Juniper woodlands and desert grassland. Because 89% of the ecological footprint can be attributed to energy use, efforts to mitigate environmental impact should be focused on energy efficiency, energy reduction, and the incorporation of additional renewable energy alternatives at SNL/NM.
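The deficit figure follows directly from the two quantities the abstract reports: the ecological deficit is the total footprint minus the site's waste-absorption capacity (biocapacity).

```python
# Checking the arithmetic reported above, in global hectares (gha).
total_footprint_gha = 96_434   # SNL/NM total ecological footprint, FY05
biocapacity_gha = 338          # waste absorption capacity of landholdings

deficit_gha = total_footprint_gha - biocapacity_gha   # -> 96,096 gha
```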

More Details

Graphite oxidation modeling for application in MELCOR

Gelbard, Fred M.

The Arrhenius parameters for graphite oxidation in air are reviewed and compared. One-dimensional models of graphite oxidation coupled with mass transfer of oxidant are presented in dimensionless form for rectangular and spherical geometries. A single dimensionless group is shown to encapsulate the coupled phenomena, and is used to determine the effective reaction rate when mass transfer can impede the oxidation process. For integer reaction order kinetics, analytical expressions are presented for the effective reaction rate. For noninteger reaction orders, a numerical solution is developed and compared to data for oxidation of a graphite sphere in air. Very good agreement is obtained with the data without any adjustable parameters. An analytical model for surface burn-off is also presented, and results from the model are within an order of magnitude of the measurements of burn-off in air and in steam.
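The coupling the abstract describes can be illustrated with the textbook first-order case, where the Arrhenius surface rate and the mass-transfer coefficient combine like resistances in series, so the slower process limits the effective oxidation rate. This is a generic illustration, not the report's exact dimensionless formulation, and all numerical values are assumed.

```python
import math

GAS_CONSTANT = 8.314  # J/(mol K)

def arrhenius(pre_exp, activation_energy_j_mol, temp_k):
    """Chemical (surface) rate constant at temperature temp_k."""
    return pre_exp * math.exp(-activation_energy_j_mol / (GAS_CONSTANT * temp_k))

def effective_rate(k_surface, k_mass_transfer):
    """Series-resistance combination for first-order reaction + mass transfer."""
    return 1.0 / (1.0 / k_surface + 1.0 / k_mass_transfer)
```

When mass transfer is much faster than the surface reaction, the effective rate approaches the kinetic rate (kinetically controlled oxidation); in the opposite limit it approaches the mass-transfer coefficient (diffusion controlled), which is the regime where mass transfer "impedes the oxidation process" as described above.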

More Details

Design & development fo a 20-MW flywheel-based frequency regulation power plant : a study for the DOE Energy Storage Systems program

Huff, Georgianne

This report describes the successful efforts of Beacon Power to design and develop a 20-MW frequency regulation power plant based solely on flywheels. Beacon's Smart Matrix (Flywheel) Systems regulation power plant, unlike coal or natural gas generators, will not burn fossil fuel or directly produce particulates or other air emissions and will have the ability to ramp up or down in a matter of seconds. The report describes how data from the scaled Beacon system, deployed in California and New York, proved that the flywheel-based systems provided faster responding regulation services in terms of cost-performance and environmental impact. Included in the report is a description of Beacon's design package for a generic, multi-MW flywheel-based regulation power plant that allows accurate bids from a design/build contractor and Beacon's recommendations for site requirements that would ensure the fastest possible construction. The paper concludes with a statement about Beacon's plans for a lower cost, modular-style substation based on the 20-MW design.

More Details

Modeling leaks from liquid hydrogen storage systems

Winters, William S.

This report documents a series of models for describing intended and unintended discharges from liquid hydrogen storage systems. Typically these systems store hydrogen in the saturated state at approximately five to ten atmospheres. Some of the models discussed here are equilibrium-based models that make use of the NIST REFPROP thermodynamic models to specify the states of multiphase hydrogen and air-hydrogen mixtures. Two types of discharges are considered: slow leaks, where hydrogen enters the ambient at atmospheric pressure, and fast leaks, where the hydrogen flow is usually choked and expands into the ambient through an underexpanded jet. In order to avoid the complexities of supersonic flow, a single Mach disk model is proposed for fast leaks that are choked. The velocity and state of hydrogen downstream of the Mach disk lead to a more tractable subsonic boundary condition. However, the hydrogen temperature exiting all leaks (fast or slow, from saturated liquid or saturated vapor) is approximately 20.4 K. At these temperatures, any entrained air would likely condense or even freeze, leading to an air-hydrogen mixture that cannot be characterized by the REFPROP subroutines. For this reason a plug flow entrainment model is proposed to treat a short zone of initial entrainment and heating. The model predicts the quantity of entrained air required to bring the air-hydrogen mixture to a temperature of approximately 65 K at one atmosphere. At this temperature the mixture can be treated as a mixture of ideal gases and is much more amenable to modeling with Gaussian entrainment models and CFD codes. A Gaussian entrainment model is formulated to predict the trajectory and properties of a cold hydrogen jet leaking into ambient air. The model shows that similarity between two jets depends on the densimetric Froude number, density ratio, and initial hydrogen concentration.
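One standard definition of the densimetric Froude number, the similarity parameter the abstract highlights for buoyant jets, is the jet velocity divided by a buoyancy velocity scale. The exact form used in the report may differ, and the values in the sketch are illustrative assumptions, not the report's cases.

```python
import math

def densimetric_froude(velocity_m_s, diameter_m, rho_jet, rho_ambient, g=9.81):
    """Jet velocity over the buoyancy velocity scale sqrt(g*D*|drho|/rho_jet)."""
    return velocity_m_s / math.sqrt(
        g * diameter_m * abs(rho_ambient - rho_jet) / rho_jet
    )
```

A cold hydrogen jet is far less dense than ambient air, so the density difference (and hence buoyancy) is large; two jets with matching Froude number, density ratio, and initial concentration would evolve similarly under this kind of model.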

More Details

Slanted-wall beam propagation : erratum

Proposed for publication in the Journal of Lightwave Technology.

Hadley, G.R.

Recently, a new algorithm for wide-angle beam propagation was reported that allowed grid points to move in an arbitrary fashion between propagation planes and was thus capable of modeling waveguides whose widths or centerlines varied with propagation distance. That algorithm was accurate and stable for TE polarization but unstable for wide-angle TM propagation. This deficiency has been found to result from an omission in one of the wide-angle terms in the derivation of the finite-difference equation and is remedied here, resulting in a complete algorithm accurate for both polarizations.

More Details

Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform

Shadid, John N.

This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.
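Scaling comparisons like the one above are usually summarized with a parallel efficiency metric. A minimal sketch of the standard strong-scaling form follows; the runtimes and core counts are invented, not the study's Charon measurements.

```python
def strong_scaling_efficiency(t_ref, cores_ref, t_new, cores_new):
    """Efficiency relative to a reference run; 1.0 means doubling the
    core count exactly halves the runtime."""
    return (t_ref * cores_ref) / (t_new * cores_new)
```

An efficiency noticeably below 1.0 at high core counts, as reported here for the multilevel preconditioner on the TLCC machine relative to Red Storm, signals that communication or memory contention, rather than floating-point work, is limiting the solver.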

More Details

Technical Advisory Team (TAT) report on the rocket sled test accident of October 9, 2008

Medina, Anthony J.; Stofleth, Jerome H.

This report summarizes probable causes and contributing factors that led to a rocket motor initiating prematurely while employees were preparing instrumentation for an AIII rocket sled test at SNL/NM, resulting in a Type-B Accident. Originally prepared by the Technical Advisory Team that provided technical assistance to the NNSA's Accident Investigation Board, the report includes analyses of several proposed causes and concludes that the most probable source of power for premature initiation of the rocket motor was the independent battery contained in the HiCap recorder package. The report includes data, evidence, and proposed scenarios to substantiate the analyses.

More Details

Investigation of multi-layer thin films for energy storage

Renk, Timothy J.

We investigate here the feasibility of increasing the energy density of thin-film capacitors by constructing a multi-layer capacitor device through ablation and redeposition of the capacitor materials using a high-power pulsed ion beam. The deposition experiments were conducted on the RHEPP-1 facility at Sandia National Laboratories. The dielectric capacitor filler material was a composition of lead-lanthanum-zirconium-titanium oxide (PLZT). The energy storage can be increased by using a material of intrinsically high dielectric constant and constructing many thin layers of this material. For successful device construction, there are a number of challenging requirements, including correct stoichiometric and crystallographic composition of the deposited PLZT. This report details some success in satisfying these requirements, even though the attempt at device manufacture was unsuccessful. The conclusion that temperatures of 900 °C are necessary to reconstitute the deposited PLZT has implications for future manufacturing capability.

More Details

Investigation of biologically-designed metal-specific chelators for potential metal recovery and waste remediation applications

Criscenti, Louise; Ockwig, Nathan O.

Bacteria, algae and plants produce metal-specific chelators to capture required nutrient or toxic trace metals. Biological systems are thought to be very efficient, honed by evolutionary forces over time. Understanding the approaches used by living organisms to select for specific metals in the environment may lead to design of cheaper and more effective approaches for metal recovery and contaminant-metal remediation. In this study, the binding of a common siderophore, desferrioxamine B (DFO-B), to three aqueous metal cations, Fe(II), Fe(III), and UO₂(VI), was investigated using classical molecular dynamics. DFO-B has three acetohydroxamate groups and a terminal amine group that all deprotonate with increasing pH. For all three metals, complexes with DFO-B(-2) are the most stable and favored under alkaline conditions. Under more acidic conditions, the metal-DFO complexes involve chelation with both acetohydroxamate and acetylamine groups. The approach taken here allows for detailed investigation of metal binding to biologically-designed organic ligands.

More Details

Microscale Immune Studies Laboratory

Singh, Anup K.

The overarching goal is to develop novel technologies to elucidate molecular mechanisms of the innate immune response in host cells to pathogens such as bacteria and viruses, including the mechanisms used by pathogens to subvert/suppress/obfuscate the immune response to cause their harmful effects. Innate immunity is our first line of defense against pathogenic bacteria or viruses. A comprehensive 'system-level' understanding of innate immunity pathways such as toll-like receptor (TLR) pathways is the key to deciphering mechanisms of pathogenesis and can lead to improvements in early diagnosis or developing improved therapeutics. Current methods for studying signaling focus on measurements of a limited number of components in a pathway and hence fail to provide a systems-level understanding. We have developed a systems biology approach to decipher TLR4 pathways in macrophage cell lines in response to exposure to pathogenic bacteria and their lipopolysaccharide (LPS). Our approach integrates biological reagents, a microfluidic cell handling and analysis platform, high-resolution imaging, and computational modeling to provide spatially and temporally resolved measurement of TLR-network components. The integrated microfluidic platform is capable of imaging single cells to obtain dynamic translocation data as well as high-throughput acquisition of quantitative protein expression and phosphorylation information of selected cell populations. The platform consists of multiple modules such as a single-cell array, cell sorter, and phosphoflow chip to provide confocal imaging, cell sorting, flow cytometry, and phosphorylation assays. The single-cell array module contains fluidic constrictions designed to trap and hold single host cells. Up to 100 single cells can be trapped and monitored for hours, enabling detailed, statistically significant measurements. The module was used to analyze translocation behavior of the transcription factor NF-kB in macrophages upon activation by E. coli and Y. pestis LPS. The chip revealed an oscillation pattern in translocation of NF-kB, indicating the presence of a negative feedback loop involving IKK. Activation of NF-kB is preceded by phosphorylation of many kinases, and to correlate the kinase activity with translocation, we performed flow cytometric assays in the PhosphoChip module. Phosphorylated forms of p38, ERK, and RelA were measured in macrophage cells challenged with LPS and showed a dynamic response in which phosphorylation increases with time, reaching a maximum at approximately 30–60 min. To allow further downstream analysis on selected cells, we also implemented optical-trapping-based sorting of cells. This has allowed us to sort macrophages infected with bacteria from uninfected cells, with the goal of obtaining data only on the infected (the desired) population. The various microfluidic chip modules and the accessories required to operate them, such as pumps, heaters, electronic controls, and optical detectors, are being assembled in a bench-top, semi-automated device. The data generated are being utilized to refine the existing TLR pathway model by adding kinetic rate constants and concentration information. The microfluidic platform allows high-resolution imaging as well as quantitative proteomic measurements with high sensitivity (<pM) and time resolution (approximately 15 s) in the same population of cells, a feat not achievable by current techniques. Furthermore, our systems approach combining the microfluidic platform and high-resolution imaging with the associated computational models and biological reagents will significantly improve our ability to study cell signaling involved in host-pathogen interactions and other diseases such as cancer. The advances made in this project have been presented at numerous national and international conferences and are documented in many peer-reviewed publications as listed. Finer details of many of the component technologies are described in these publications. 
The chapters to follow in this report are also adapted from other manuscripts that have been accepted for publication, submitted, or are in preparation for submission to peer-reviewed journals.

More Details

Development of a High-Temperature Diagnostics-While-Drilling Tool

Blankenship, Douglas A.; Chavira, David C.; Henfling, Joseph A.; King, Dennis K.; Knudsen, Steven D.; Polsky, Yarom

This report documents work performed in the second phase of the Diagnostics-While-Drilling (DWD) project, in which a high-temperature (HT) version of the phase 1 low-temperature (LT) proof-of-concept (POC) DWD tool was built and tested. Descriptions of the design, fabrication, and field testing of the HT tool are provided.

More Details

Control of pore size in epoxy systems

Celina, Mathew C.; Dirk, Shawn M.; Sawyer, Patricia S.

Both conventional and combinatorial approaches were used to study the pore formation process in epoxy based polymer systems. Sandia National Laboratories conducted the initial work and collaborated with North Dakota State University (NDSU) using a combinatorial research approach to produce a library of novel monomers and crosslinkers capable of forming porous polymers. The library was screened to determine the physical factors that control porosity, such as porogen loading, polymer-porogen interactions, and polymer crosslink density. We have identified the physical and chemical factors that control the average porosity, pore size, and pore size distribution within epoxy based systems.

More Details

Joint physical and numerical modeling of water distribution networks

Mckenna, Sean A.; Ho, Clifford K.; Cappelle, Malynda A.; Webb, Stephen W.; O'Hern, Timothy J.

This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume-based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. A preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.

More Details

Experimental assessment of unvalidated assumptions in classical plasticity theory

Bauer, Stephen J.; Bronowski, David R.

This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

More Details

Summary report : direct approaches for recycling carbon dioxide into synthetic fuel

Miller, James E.; Siegel, Nathan P.; Diver, Richard B.; Gelbard, Fred M.; Ambrosini, Andrea A.; Allendorf, Mark

The consumption of petroleum by the transportation sector in the United States is roughly equivalent to petroleum imports into the country, which have totaled over 12 million barrels a day every year since 2004. This reliance on foreign oil is a strategic vulnerability for the economy and national security. Further, the effect of unmitigated CO₂ releases on the global climate is a growing concern both here and abroad. Independence from problematic oil producers can be achieved to a great degree through the utilization of non-conventional hydrocarbon resources such as coal, oil shale, and tar sands. However, tapping into and converting these resources into liquid fuels exacerbates greenhouse gas (GHG) emissions because they are carbon rich but hydrogen deficient. Revolutionary thinking about energy and fuels must be adopted. We must recognize that hydrocarbon fuels are ideal energy carriers, but not primary energy sources. The energy stored in a chemical fuel is released for utilization by oxidation. In the case of hydrogen fuel the chemical product is water; in the case of a hydrocarbon fuel, water and carbon dioxide are produced. The hydrogen economy envisions a cycle in which H₂O is re-energized by splitting water into H₂ and O₂, by electrolysis for example. We envision a hydrocarbon analogy in which both carbon dioxide and water are re-energized through the application of a persistent energy source (e.g., solar or nuclear). This is of course essentially what the process of photosynthesis accomplishes, albeit with a relatively low sunlight-to-hydrocarbon efficiency. The goal of this project, then, was the creation of a direct and efficient process for the solar- or nuclear-driven thermochemical conversion of CO₂ to CO (and O₂), one of the basic building blocks of synthetic fuels.
This process would potentially provide the basis for an alternate hydrocarbon economy that is carbon neutral, provides a pathway to energy independence, and is compatible with much of the existing fuel infrastructure.
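
The energetics of the CO₂-splitting step described above can be checked from standard-state thermochemistry. The sketch below is a back-of-envelope illustration using textbook enthalpies of formation; it is not a calculation from the report.

```python
# Back-of-envelope check of the CO2 -> CO + 1/2 O2 re-energizing step,
# using standard enthalpies of formation at 298 K (kJ/mol). These are
# textbook values, not numbers from the report.
dHf = {"CO2": -393.5, "CO": -110.5, "O2": 0.0}

# Enthalpy of reaction: products minus reactants.
dH_split = dHf["CO"] + 0.5 * dHf["O2"] - dHf["CO2"]

print(dH_split)  # +283.0 kJ stored per mole of CO2 converted to CO
```

The positive sign is the point: roughly 283 kJ of solar or nuclear heat must be stored per mole of CO produced, which is later recovered when the synthetic fuel is oxidized.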

More Details

Understanding and engineering enzymes for enhanced biofuel production

Simmons, Blake; Sapra, Rajat S.; Roe, Diana C.; Volponi, Joanne V.; Buffleben, George M.

Today, carbon-rich fossil fuels, primarily oil, coal, and natural gas, provide 85% of the energy consumed in the United States. The release of greenhouse gases from these fuels has spurred research into alternative, non-fossil energy sources. Lignocellulosic biomass is a renewable, carbon-neutral resource that can provide a raw material for alternative transportation fuels. Plant-derived biomass contains cellulose, which is difficult to convert to monomeric sugars for production of fuels. The development of cost-effective and energy-efficient processes to transform the cellulosic content of biomass into fuels is hampered by significant roadblocks, including the lack of specifically developed energy crops, the difficulty in separating biomass components, the high costs of enzymatic deconstruction of biomass, and the inhibitory effect of fuels and processing byproducts on the organisms responsible for producing fuels from biomass monomers. One of the main impediments to more widespread utilization of this important resource is the recalcitrance of cellulosic biomass to the techniques used to deconstruct it.

More Details

Thermomechanical measurements on thermal microactuators

Phinney, Leslie; Epp, David S.; Baker, Michael S.; Serrano, Justin R.; Gorby, Allen D.

Due to the coupling of thermal and mechanical behaviors at small scales, a Campaign 6 project was created to investigate thermomechanical phenomena in microsystems. This report documents experimental measurements conducted under the auspices of this project. Since thermal and mechanical measurements for thermal microactuators were not available for a single microactuator design, a comprehensive suite of thermal and mechanical experimental data was taken and compiled for model validation purposes. Three thermal microactuator designs were selected and fabricated using the SUMMiT V™ process at Sandia National Laboratories. Thermal and mechanical measurements for the bent-beam polycrystalline silicon thermal microactuators are reported, including displacement, overall actuator electrical resistance, force, temperature profiles along microactuator legs in standard laboratory air pressures and reduced pressures down to 50 mTorr, resonant frequency, out-of-plane displacement, and dynamic displacement response to applied voltages.

More Details

Novel ultrafine grain size processing of soft magnetic materials

Michael, Joseph R.

High performance soft magnetic alloys are used in solenoids in a wide variety of applications. These designs are currently being driven to provide more margin, reliability, and functionality through component size reductions, thereby providing greater power to drive ratio margins as well as decreases in volume and power requirements. In an effort to produce soft magnetic materials with improved properties, we have conducted an initial examination of one potential route for producing ultrafine grain sizes in the 49Fe-49Co-2V alloy. The approach was based on a known method for the production of very fine grain sizes in steels and consisted of repeated, rapid phase-transformation cycling through the ferrite-to-austenite transformation temperature range. This initial attempt to produce highly refined grain sizes in 49Fe-49Co-2V was successful in that appreciable reductions in grain size were realized. The as-received grain size was 15 µm with a standard deviation of 9.5 µm. For the temperature cycling conditions examined, grain refinement appears to saturate after approximately ten cycles at a grain size of 6 µm with a standard deviation of 4 µm. The process also reduces the range of grain sizes present in these samples, as the largest grains noted in the as-received and treated conditions were 64 and 26 µm, respectively. The results were, however, complicated by the formation of an unexpected secondary ferritic constituent, and considerable effort was directed at characterizing this phase. The analysis indicates that the phase is a V-rich ferrite, known as α₂, that forms due to an imbalance in the partitioning of vanadium during the heating and cooling portions of the thermal cycle. Considerable but unsuccessful effort was also directed at understanding the conditions under which this phase forms, since it is conceivable that this phase restricts the degree to which the grains can be refined. Due to this difficulty and the relatively short timeframe available in the study, the magnetic and mechanical properties of the refined material could not be evaluated. An assessment of the potential for property improvement through the transformation-cycling approach, along with recommendations for potential future work, is included in this report.

More Details

On the two-domain equations for gas chromatography

Romero, Louis; Parks, Michael L.

We present an analysis of gas chromatographic columns where the stationary phase is not assumed to be a thin uniform coating along the walls of the cross section. We also give an asymptotic analysis assuming that the parameter β = K·D^II·ρ^II / (D^I·ρ^I) is small. Here K is the partition coefficient, and D^i and ρ^i, i = I, II, are the diffusivity and density in the mobile (i = I) and stationary (i = II) regions.
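
To make the smallness assumption concrete, here is a hypothetical numerical sketch of β. All parameter values are invented for illustration and are not taken from the report.

```python
def beta(K, D_II, rho_II, D_I, rho_I):
    """Small parameter from the asymptotic analysis:
    beta = K * D^II * rho^II / (D^I * rho^I)."""
    return K * D_II * rho_II / (D_I * rho_I)

# Invented values: a retained solute with slow diffusion in a dense polymer
# stationary phase (region II) versus fast diffusion in the carrier gas (I).
b = beta(K=10.0, D_II=1e-11, rho_II=1.0e3, D_I=1e-5, rho_I=1.2)
print(b)  # ~8.3e-3, small compared with 1, as the analysis assumes
```

The disparity between gas-phase and condensed-phase diffusivities is what typically makes β small, even when the partition coefficient K is large.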

More Details

J-Integral modeling and validation for GTS reservoirs

Nibur, Kevin A.; Somerday, Brian P.; Brown, Arthur; Lindblad, Alex; Ohashi, Yuki; Antoun, Bonnie R.; Connelly, Kevin; Zimmerman, Jonathan A.; Margolis, Stephen B.

Non-destructive detection methods can reliably certify that gas transfer system (GTS) reservoirs do not have cracks larger than 5%-10% of the wall thickness. To determine the acceptability of a reservoir design, analysis must show that short cracks will not adversely affect the reservoir behavior. This is commonly done via calculation of the J-Integral, which represents the energetic driving force acting to propagate an existing crack in a continuous medium. J is then compared against a material's fracture toughness (J_c) to determine whether crack propagation will occur. While the quantification of the J-Integral is well established for long cracks, its validity for short cracks is uncertain. This report presents the results from a Sandia National Laboratories project to evaluate a methodology for performing J-Integral evaluations in conjunction with its finite element analysis capabilities. Simulations were performed to verify the operation of a post-processing code (J3D) and to assess the accuracy of this code and our analysis tools against companion fracture experiments for 2- and 3-dimensional geometry specimens. Evaluation is done for specimens composed of 21-6-9 stainless steel, some of which were exposed to a hydrogen environment, for both long and short cracks.
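
The J-versus-J_c acceptance logic described above can be sketched with standard linear elastic fracture mechanics, where the elastic contribution to J is K²/E′. The geometry factor, stress, and toughness values below are invented for illustration; they are not the report's values for 21-6-9 stainless steel.

```python
import math

# Minimal LEFM sketch of the acceptance check: estimate the elastic
# J-integral for a small edge crack and compare it with a fracture
# toughness J_c. All numerical inputs are hypothetical.
def j_elastic(stress_MPa, crack_m, E_GPa=193.0, nu=0.3, Y=1.12):
    """J = K^2 / E' (plane strain), with K = Y * sigma * sqrt(pi * a)."""
    K = Y * stress_MPa * 1e6 * math.sqrt(math.pi * crack_m)  # Pa*sqrt(m)
    E_prime = E_GPa * 1e9 / (1.0 - nu**2)
    return K**2 / E_prime                                    # N/m

a = 0.05 * 2e-3                    # crack depth: 5% of a 2 mm wall
J = j_elastic(stress_MPa=300.0, crack_m=a)
Jc = 100e3                         # hypothetical toughness, N/m
print(J, J < Jc)                   # propagation not predicted in this toy case
```

The report's point is precisely that this long-crack formalism may not remain valid as the crack becomes short relative to microstructural and geometric scales.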

More Details

Ku-band six-bit RF MEMS time delay network

2008 IEEE CSIC Symposium: GaAs ICs Celebrate 30 Years in Monterey, Technical Digest 2008

Nordquist, Christopher D.; Dyck, Christopher; Kraus, Garth K.; Sullivan, Charles T.; Austin IV, Franklin; Finnegan, Patrick S.; Ballance, Mark H.

A six-bit time delay circuit operating from DC to 18 GHz is reported. Capacitively loaded transmission lines are used to reduce the physical length of the delay elements and shrink the die size. Additionally, selection of the reference line lengths to avoid resonances allows the replacement of series-shunt switching elements with only series elements. With through-wafer transitions and a packaging seal ring, the 7 mm x 10 mm circuit demonstrates <2.8 dB of loss and 60 ps of delay with good delay flatness and accuracy through 18 GHz. © 2008 IEEE.
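
The size reduction from capacitive loading follows from the per-unit-length delay of a transmission line. The sketch below uses invented line parameters, not values from the paper.

```python
import math

# Per-unit-length delay of a lossless transmission line is sqrt(L*C).
# Periodic shunt capacitance (e.g. from MEMS loading) raises C per metre,
# so a given time delay needs less physical line. Values below are
# invented for illustration.
def delay_s_per_m(L_per_m, C_per_m):
    return math.sqrt(L_per_m * C_per_m)

unloaded = delay_s_per_m(400e-9, 160e-12)  # ~8 ns/m (8 ps/mm)
loaded = delay_s_per_m(400e-9, 480e-12)    # same line with 3x capacitance
print(loaded / unloaded)                   # sqrt(3) ~ 1.73x more delay per length
```

The tradeoff is that loading also lowers the characteristic impedance, sqrt(L/C), which bounds how much shunt capacitance can be added while staying near a 50-ohm system impedance.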

More Details

Full-field characterization of tensile and fracture behavior of a rigid polyurethane foam using digital image correlation

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Hong, Soonsung H.; Jin, Helena; Lu, Wei-Yang

Tensile deformation and fracture behavior of a closed-cell rigid polyurethane foam, called TufFoam, were investigated. During uniaxial tension tests and fracture mechanics tests, full-field deformation measurements were conducted using the digital image correlation technique. Uniform deformation fields obtained from the tension tests showed that both deviatoric and dilatational yielding contributed to the nonlinear deformation of the foam under tension. Fracture mechanics tests were performed with single-edge-notched specimens under three-point bending and uniaxial tension. A moderate specimen-size and loading-geometry dependence was observed in the measured fracture toughness values based on linear elastic fracture mechanics. Full-field deformation data near the crack tip were used to investigate stable crack growth in the foam until unstable fracture occurred. The path-independent J-integral and M-integral were calculated from elastic far-fields of the experimental data and used to obtain crack-tip field parameters, such as crack-tip energy release rates and effective crack-tip positions. The combination of the full-field deformation measurement technique and the path-independent integrals was proven to be a useful approach to measure the initiation toughness of the foam independent of the specimen size and loading geometry. © 2008 Society for Experimental Mechanics Inc.

More Details

Interface delamination fracture toughness experiments at various loading rates

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Lu, Wei-Yang; Antoun, Bonnie R.; Brown, Arthur; Chen, Weinong; Song, Bo

Mode-I and Mode-II fracture experiments of composites under high loading rates are presented. In the standard double cantilever beam (DCB) configuration, specimens are loaded at a constant speed of 2.5 m/s (100 in/s) on a customized high-rate MTS system. Alternative high-rate experiments are also performed on a modified split Hopkinson pressure bar (SHPB). One configuration for the characterization of dynamic Mode-I interfacial delamination is to place a wedge-loaded compact-tension (WLCT) specimen in the test section. Pulse-shaping techniques are employed to control the profiles of the loading pulses such that the crack tip is loaded at constant loading rates. Pulse shaping also avoids the excitation of resonance, thus avoiding inertia-induced forces mixed with material strength in the data. To create Mode-II fracture conditions, an end-notched flexure (ENF) three-point bending specimen is employed in the gage section of the modified SHPB. © 2008 Society for Experimental Mechanics Inc.

More Details

Doppler electron velocimeter-practical considerations for a useful tool

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Reu, P.L.

The Doppler electron velocimeter (DEV) is a potentially new dynamic measurement system for the nano-scale. Electron microscopes have been used for many years to visualize extremely small samples, but the ability to make dynamic measurements has not existed. The DEV proceeds along lines analogous to a laser Doppler velocimeter, which uses the Doppler shift of the wave to detect velocity. The use of electron beams, with their extremely short wavelengths, overcomes the approximately 1/2-micron diffraction limit of light to measure samples of current scientific interest in the nano-regime. Previous work has shown that Doppler shifting of electrons is theoretically possible; this paper examines whether a practical instrument can be built given the inherent limitations of using electron beams as a probe source. Potential issues and their solutions, including electron beam coherence and interference, are presented. If answers to these problems can be found, the invention of the Doppler electron velocimeter could yield a completely new measurement concept at atomistic scales. © 2008 Society for Experimental Mechanics Inc.
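
As a rough illustration of why electron wavelengths are attractive here, the sketch below compares the de Broglie wavelength of a 30 kV electron beam with visible light and the resulting Doppler-shift sensitivity. It is a back-of-envelope estimate, not a calculation from the paper; relativistic corrections are ignored and the beam voltage and target velocity are arbitrary.

```python
import math

H = 6.626e-34    # Planck constant, J*s
M_E = 9.109e-31  # electron mass, kg
Q_E = 1.602e-19  # elementary charge, C

def de_broglie_wavelength(volts):
    """Non-relativistic de Broglie wavelength (m) of an electron
    accelerated through `volts`."""
    p = math.sqrt(2.0 * M_E * Q_E * volts)  # momentum, kg*m/s
    return H / p

def doppler_shift(v, wavelength):
    """Doppler frequency shift (Hz) for a retro-reflected beam: 2*v/lambda."""
    return 2.0 * v / wavelength

lam_e = de_broglie_wavelength(30e3)    # ~7 pm at 30 kV
f_e = doppler_shift(1e-3, lam_e)       # 1 mm/s target -> hundreds of MHz
f_light = doppler_shift(1e-3, 633e-9)  # same target with a HeNe laser -> ~3.2 kHz
print(lam_e, f_e, f_light)
```

The picometre-scale wavelength gives a Doppler shift some five orders of magnitude larger than optical LDV for the same velocity, which is both the promise of the technique and, as the paper discusses, a source of practical coherence and detection challenges.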

More Details

Practical aspects of contouring using ESPI/DSPI

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Reu, Phillip L.; Hansche, Bruce D.

Moiré contouring can be implemented by illuminating an object with coherent light from two closely spaced point sources, known as the "two point" method. This method can be implemented using digital speckle pattern interferometry (DSPI) techniques by illuminating the object with a single point source that is moved between datasets. We briefly present the algorithm, and some inherent implicit and explicit assumptions, used in this technique. One assumption made is that the object remains stationary between datasets. If violated, this assumption can create hundreds of microns of error from fractions of a micron of object motion. We present simulations and experiments demonstrating these sensitivities and two techniques to compensate for object motion during data acquisition. © 2008 Society for Experimental Mechanics Inc.

More Details

Extending digital image correlation to moving field of view application: A feasibility study

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Miller, Timothy J.; Schreier, Hubert W.; Valley, Michael T.; Brown, Timothy

Conventional tracking systems measure time-space-position data and collect imagery to quantify the flight dynamics of tracked targets. However, they do not provide 6-degree-of-freedom measurements combined with spin rate, wobble, and other flight-related parameters associated with non-rigid body motions. Using high-speed digital video cameras and image processing techniques, it may be possible to measure test-unit attitude and surface deformations during key portions of the test-unit's trajectory. This paper discusses the viability of applying Digital Image Correlation (DIC) methods to image data collected from two laser tracking systems. Stereo imaging methods have proven effective in the laboratory for quantifying temporally and spatially resolved 3D motions across a target surface. The principal limitations of the DIC method have been the need for clean imagery and fixed camera positions and orientations. However, recent field tests have demonstrated that these limitations can be overcome to provide a new method for quantifying flight dynamics with stereo laser tracking and high-speed video imagery in the presence of atmospheric turbulence. © 2008 Society for Experimental Mechanics Inc.

More Details

Evaluation of oxidation protection testing methods on ultra-high temperature ceramic coatings for carbon-carbon oxidation resistance

Ceramic Engineering and Science Proceedings

Corral, Erica L.; Ayala, Alicia A.; Loehman, Ronald E.

The development of carbon-carbon (C-C) composites for aerospace applications has prompted the need for ways to improve the poor oxidation resistance of these materials. To evaluate and test candidate thermal protection system (TPS) materials, readily available and reliable testing methods are critical to the success of materials development efforts. With the purpose of evaluating TPS materials, three testing methods were used to assess materials at high temperatures (>2000°C) and heat fluxes in excess of 200 W/cm². The first two methods are located at the National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories: the Solar Furnace Facility and the Solar Tower Facility. The third method is an oxyacetylene torch set up according to ASTM E285-80, with oxidizing-flame control and maximum achievable temperatures in excess of 2000°C. In this study, liquid precursors to ultra-high-temperature ceramics (UHTCs) have been developed into multilayer coatings on C-C composites and evaluated using these oxidation testing methods. The tests are discussed in detail and correlated with preliminary materials evaluation results, with the aim of presenting an understanding of the effect of the testing environment on the materials evaluated for oxidation resistance.

More Details

Glass-to-metal (GTM) seal development using finite element analysis: Assessment of material models and design changes

Ceramic Engineering and Science Proceedings

Tandon, Rajan; Neilsen, Michael K.; Jones, Timothy C.; Mahoney, James F.

Glass-to-metal (GTM) seals maintain hermeticity while allowing the passage of electrical signals. Typically, these seals are comprised of one or more metal pins encapsulated in a glass which is contained in a metal shell. In compression seals, the coefficient of thermal expansion of the metal shell is greater than that of the glass, and the glass is expected to be in compression. Recent development builds of a multi-pin GTM seal revealed severe cracking of the glass, with cracks originating at or near the pin-glass interface and propagating circumferentially. A series of finite element analyses (FEA) was performed for this seal with the material set: 304 stainless steel (SS304) shell, Schott S-8061 (or equivalent) glass, and Alloy 52 pins. Stress-strain data for both metals were fit with linear-hardening and power-law-hardening plasticity models. The glass layer thickness and its location with respect to geometrical features in the shell were varied. Several additional design changes in the shell were explored. Results reveal that: (1) plastic deformation in the small-strain regime in the metals leads to radial tensile stresses in the glass, (2) small changes in the mechanical behavior of the metals dramatically change the calculated stresses in the glass, and (3) seemingly minor design changes in the shell geometry significantly influence the stresses in the glass. Based on these results, guidelines for materials selection and design of seals are provided.
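
The two hardening fits named above can be written down directly. The sketch below uses invented parameter values for demonstration; they are not the report's fitted values for SS304 or Alloy 52.

```python
# Illustrative comparison of linear-hardening and power-law-hardening
# fits to post-yield flow stress. All parameter values are hypothetical.

def linear_hardening(eps_p, sigma_y=250.0, H=1500.0):
    """Flow stress (MPa): sigma = sigma_y + H * eps_p."""
    return sigma_y + H * eps_p

def power_law_hardening(eps_p, sigma_y=250.0, K=500.0, n=0.4):
    """Flow stress (MPa): sigma = sigma_y + K * eps_p**n."""
    return sigma_y + K * eps_p ** n

# At small plastic strains the two fits to the same test data can differ
# substantially, which is one way modest changes in the assumed metal
# behavior shift the computed stresses in the glass.
for eps_p in (0.001, 0.01, 0.05):
    print(eps_p, linear_hardening(eps_p), power_law_hardening(eps_p))
```

Because the glass cracking here is driven by small-strain plasticity in the metals, the divergence of the two fits in exactly that regime is what makes the model choice consequential.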

More Details

Absence of elastic clamping in quantitative piezoelectric force microscopy measurements of nanostructures

Applied Physics Letters

Scrymgeour, David A.; Hsu, Julia W.

We establish that clamping effects, which limit accurate determination of piezoelectric responses in bulk materials and films using piezoelectric force microscopy (PFM), are not present when measuring discrete nanostructures with radii less than five times the tip radius. This conclusion is established by comparing the piezoelectric response in ZnO rods using two electrode configurations: one with the conducting atomic force microscopy tip acting as the top electrode and the other using a uniform metal top electrode. The distributions of piezoelectric coefficients measured with these two types of electrode configurations are the same. Hence, clamping issues do not play a role in the piezoelectric property measurement of nanomaterials using PFM. The role of conduction electrons on the piezoelectric measurement in both cases is also discussed. © 2008 American Institute of Physics.

More Details

Application of low-heating rate TGA results to hazard analyses involving high-heating rates

International SAMPE Symposium and Exhibition (Proceedings)

Erickson, Kenneth L.

Thermal gravimetric analysis (TGA) combined with evolved gas analysis by Fourier transform infrared spectroscopy (FTIR) or mass spectrometry (MS) is often used to study the thermal decomposition of organic polymers. Frequently, results are used to determine decomposition mechanisms and to develop rate expressions for a variety of applications, which include hazard analyses. Although some current TGA instruments operate with controlled heating rates as high as 500°C/min, most experiments are done at much lower heating rates of about 5 to 50°C/min to minimize temperature gradients in the sample. The intended applications, such as hazard analyses involving fire environments, for rate expressions developed from TGA experiments often involve heating rates much greater than 50°C/min. The heating rate can affect polymer decomposition by altering the relative rates at which competing decomposition reactions occur. Analysis of the effect of heating rate on competing first-order decomposition reactions with Arrhenius rate constants indicated that, relative to heating rates of 5 to 50°C/min, observable changes in decomposition behavior may occur when heating rates approach 1,000°C/min. Results from experiments with poly(methyl methacrylate) (PMMA) samples heated at 5 to 50°C/min during TGA-FTIR experiments and from experiments with samples heated at rates on the order of 1,000°C/min during pyrolysis-GC-FTIR experiments supported the analyses.
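
The heating-rate effect on competing reactions can be illustrated with a toy model: one material decomposing through two parallel first-order Arrhenius channels under a linear temperature ramp. All kinetic parameters below are invented for illustration; they are not the paper's values for PMMA.

```python
import math

# A faster linear ramp pushes decomposition to higher temperature, where
# the high-activation-energy channel is relatively faster, shifting the
# product split between the two channels. Invented kinetics throughout.
R = 8.314  # gas constant, J/(mol K)

def fraction_via_channel_1(rate_K_per_s, A1=1e12, E1=150e3, A2=1e10, E2=120e3):
    """Fraction of material consumed via channel 1 during a linear ramp."""
    T, remaining, via1 = 300.0, 1.0, 0.0
    dT = 0.01                   # temperature step, K
    dt = dT / rate_K_per_s      # corresponding time step, s
    while remaining > 1e-6:
        k1 = A1 * math.exp(-E1 / (R * T))
        k2 = A2 * math.exp(-E2 / (R * T))
        left = remaining * math.exp(-(k1 + k2) * dt)  # exact decay over the step
        via1 += (k1 / (k1 + k2)) * (remaining - left)
        remaining = left
        T += dT
    return via1

slow = fraction_via_channel_1(10.0 / 60.0)    # 10 C/min, TGA-like
fast = fraction_via_channel_1(1000.0 / 60.0)  # 1000 C/min, fire-like
print(slow, fast)  # the high-E channel carries a larger share at the fast rate
```

This is the qualitative mechanism the abstract describes: rate expressions calibrated at 5 to 50°C/min can misstate the branching between pathways when extrapolated to fire-environment heating rates.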

More Details

Composite materials for innovative wind turbine blades

International SAMPE Symposium and Exhibition (Proceedings)

Ashwill, Thomas D.; Paquette, Joshua A.

The Wind Energy Technology Department at Sandia National Laboratories (SNL) focuses on producing innovations in wind turbine blade technology to enable the development of longer blades that are lighter, more structurally and aerodynamically efficient, and impart reduced loads to the system. A large part of the effort is to characterize the properties of relevant composite materials built with typical manufacturing processes. This paper provides an overview of recent studies of composite laminates for wind turbine blade construction and summarizes test results for three prototype blades that incorporate a variety of material-related innovations.

More Details

Two-color resonant four-wave mixing spectroscopy: New perspectives for direct studies of collisional state-to-state transfer

AIP Conference Proceedings

Chen, X.; Settersten, T.B.; Radi, P.P.; Kouzov, A.P.

Two-color resonant four-wave mixing (TC-RFWM) is presented as a unique spectroscopic technique that enables direct measurement of collisional state-to-state transfer characteristics (rates and correlation times). In contrast to laser-induced fluorescence, these characteristics are phase-sensitive and open wider opportunities to study rotational relaxation processes. Further perspectives are offered by the recently recorded collision-induced picosecond TC-RFWM signals of OH, whose quantitative interpretation is now under development. © 2008 American Institute of Physics.

More Details

Criticality calculations for step-2 GPHS modules

AIP Conference Proceedings

Lipinski, Ronald; Hensen, Danielle L.

The Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) will use an improved version of the General Purpose Heat Source (GPHS) module as its source of thermal power. This new version, referred to as the Step-2 GPHS Module, has additional and thicker layers of carbon fiber material (Fine Weave Pierced Fabric) for increased strength over the original GPHS module. The GPHS uses alpha decay of 238Pu in the oxide form as the primary source of heat, and small amounts of other actinides are also present in the oxide fuel. Criticality calculations have been performed by previous researchers on the original version of the GPHS module (Step 0). This paper presents criticality calculations for the present Step-2 version. The Monte Carlo N-Particle extended code (MCNPX) was used for these calculations. Numerous configurations of GPHS module arrays surrounded by wet sand and other materials (to reflect the neutrons back into the stack with minimal absorption) were modeled. For geometries with eight GPHS modules (from a single MMRTG) surrounded by wet sand, the configuration is extremely sub-critical; keff is about 0.3. It requires about 1000 GPHS modules (from 125 MMRTGs) in a close-spaced stack to approach criticality (keff = 1.0) when surrounded by wet sand. The effect of beryllium in the MMRTG was found to be relatively small. © 2008 American Institute of Physics.

More Details

Changes to the shock response of fused quartz due to glass modification

International Journal of Impact Engineering

Alexander, Charles S.; Chhabildas, L.C.; Reinhart, William D.; Templeton, D.W.

Silica based glasses are commonly used as window material in applications which are subject to high velocity impacts. Thorough understanding of the response to shock loading in these materials is crucial to the development of new designs. Despite the lack of long range order in amorphous glasses, the structure can be described statistically by the random network model. Changes to the network structure alter the response to shock loading. Results indicate that in fused silica, substitution of boron as a network former does not have a large effect on the shock loading properties while modifying the network with sodium and calcium changes the dynamic response. These initial results suggest the potential of a predictive capability to determine the effects of other network substitutions.

More Details

An updated site scale saturated zone ground water transport model for Yucca Mountain

American Nuclear Society - 12th International High-Level Radioactive Waste Management Conference 2008

Kelkar, Sharad; Ding, Mei; Chu, Shaoping; Robinson, Bruce; Arnold, Bill W.; Meijer, Arend

This paper summarizes the numerical site scale model developed to simulate the transport of radionuclides via ground water in the saturated zone beneath Yucca Mountain.

More Details

Limited-memory techniques for sensor placement in water distribution networks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hart, William E.; Berry, Jonathan; Boman, Erik G.; Phillips, Cynthia A.; Riesen, Lee A.; Watson, Jean-Paul

The practical utility of optimization technologies is often impacted by factors that reflect how these tools are used in practice, including whether various real-world constraints can be adequately modeled, the sophistication of the analysts applying the optimizer, and related environmental factors (e.g. whether a company is willing to trust predictions from computational models). Other features are less appreciated, but of equal importance in terms of dictating the successful use of optimization. These include the scale of problem instances, which in practice drives the development of approximate solution techniques, and constraints imposed by the target computing platforms. End-users often lack state-of-the-art computers, and thus runtime and memory limitations are often a significant, limiting factor in algorithm design. When coupled with large problem scale, the result is a significant technological challenge. We describe our experience developing and deploying both exact and heuristic algorithms for placing sensors in water distribution networks to mitigate damage due to intentional or accidental introduction of contaminants. The target computing platforms for this application have motivated limited-memory techniques that can optimize large-scale sensor placement problems. © 2008 Springer Berlin Heidelberg.

More Details

Removing small features with real CAD operations

Proceedings of the 16th International Meshing Roundtable, IMR 2007

Clark, Brett W.

Preparing Computer Aided Design models for successful mesh generation continues to be a crucial part of the design-to-analysis process. A common problem in CAD models is features that are very small compared to the desired mesh size. Small features exist for a variety of reasons and can require an excessive number of elements or inhibit mesh generation altogether. Many of the tools for removing small features modify only the topology of the model (often in a secondary topological representation of the model), leaving the underlying geometry as is. The availability of tools that actually modify the topology and underlying geometry in the boundary representation (B-rep) model is much more limited regardless of the inherent advantages of this approach. This paper presents a process for removing small features from a B-rep model using almost solely functionality provided by the underlying solid modeling kernel. The process cuts out the old topology and reconstructs new topology and geometry to close the volume. The process is quite general and can be applied to complex configurations of unwanted topology.

More Details

Characteristics of a spring-mass system undergoing centrifuge acceleration

Conference Proceedings of the Society for Experimental Mechanics Series

Romero, Edward; Jepsen, Richard A.

Systems in flight often encounter environments with combined vibration and constant acceleration. Sandia National Laboratories has developed a new system capable of combining these environments for hardware qualification testing on a centrifuge. To demonstrate that combined vibration plus centrifuge acceleration is equivalent to the vibration and acceleration encountered in a flight environment, the equations of motion of a spring-mass-damper system in each environment were derived and compared. These equations of motion suggest a decrease in natural frequency for spring-mass-damper systems undergoing constant rotational velocity on a centrifuge. It was shown mathematically and through experimental testing that the natural frequency of a spring-mass system decreases with increased rotational velocity; a sufficient increase of rotational velocity will eventually result in system instability. The development and testing of a mechanical system to demonstrate this characteristic is discussed. Results obtained from frequency-domain analysis of time-domain data are presented, as are the implications of these results for centrifuge testing of systems with low natural frequencies on small-radius centrifuges.
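The frequency decrease follows from the rotating-frame equation of motion for a radially oriented spring-mass, m x'' + c x' + (k - m Omega^2) x = m Omega^2 r0: the centrifugal term softens the effective stiffness. A minimal sketch of this softening, with illustrative parameter values not taken from the paper:

```python
import math

def effective_natural_freq_hz(k, m, omega_rpm):
    """Natural frequency of a radially oriented spring-mass on a centrifuge
    spinning at omega_rpm.  In the rotating frame the centrifugal term
    softens the spring, m*x'' + (k - m*Omega^2)*x = const, giving
    f = sqrt(k/m - Omega^2) / (2*pi).  Returns None once Omega^2 >= k/m,
    i.e. when the system becomes statically unstable (divergence)."""
    Omega = omega_rpm * 2.0 * math.pi / 60.0   # rad/s
    stiff = k / m - Omega**2                   # effective stiffness per mass
    if stiff <= 0.0:
        return None                            # unstable: no oscillation
    return math.sqrt(stiff) / (2.0 * math.pi)

# Example: a nominally 10 Hz system loses natural frequency as spin rises.
f0 = effective_natural_freq_hz(k=3947.8, m=1.0, omega_rpm=0.0)    # ~10 Hz
f1 = effective_natural_freq_hz(k=3947.8, m=1.0, omega_rpm=300.0)  # lower
assert f1 < f0
```

The instability the abstract mentions corresponds to Omega^2 reaching k/m, which is why low-natural-frequency systems on small-radius (high-rpm) centrifuges are the critical case.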

More Details

Experimental/analytical evaluation of the effect of tip mass on atomic force microscope calibration

Conference Proceedings of the Society for Experimental Mechanics Series

Allen, Matthew S.; Sumali, Hartono (Anton); Locke, Elliott B.

Quantitative studies of material properties and interfaces using the atomic force microscope (AFM) have important applications in engineering, biotechnology and chemistry. Emerging studies require an estimate of the stiffness of the probe so that the forces exerted on a sample can be determined from the measured displacements. Numerous methods for determining the spring constant of AFM cantilevers have been proposed, yet none accounts for the effect of the mass of the probe tip on the calibration procedure. This work demonstrates that the probe tip does have a significant effect on the dynamic response of an AFM cantilever by experimentally measuring the first few modes of a commercial AFM probe and comparing them with those of a theoretical model for a cantilever probe that does not have a tip. The mass and inertia of an AFM probe tip are estimated from scanning electron microscope images and a simple model for the probe is derived and tuned to match the first few modes of the actual probe. Analysis suggests that both the method of Sader and the thermal tune method of Hutter and Bechhoefer give erroneous predictions of the area density or the effective mass of the probe. However, both methods do accurately predict the static stiffness of the AFM probe due to the fact that the mass terms cancel so long as the mode shape of the AFM probe does not deviate from the theoretical model. The calibration errors that would be induced due to differences between mode shapes measured in this study and the theoretical ones are estimated.
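The direction of the tip-mass effect can be illustrated with a standard lumped approximation for a rectangular cantilever's first bending mode (effective beam mass of about 0.2427 times the actual beam mass, with the tip mass added at the free end). This is a sketch, not the paper's model, and the parameter values are invented:

```python
import math

def cantilever_f1_hz(k, m_beam, m_tip):
    """First-mode frequency of a tip-loaded cantilever from the common
    lumped model: effective modal mass ~0.2427*m_beam for the first bending
    mode of a uniform rectangular cantilever, plus the tip mass at the free
    end.  k in N/m, masses in kg.  Illustrates how a massive tip lowers the
    resonance that a dynamic calibration method would measure."""
    return math.sqrt(k / (0.2427 * m_beam + m_tip)) / (2.0 * math.pi)

# Illustrative AFM-scale numbers: 1 N/m lever, 10 ng beam, 2 ng tip.
f_no_tip = cantilever_f1_hz(k=1.0, m_beam=1e-11, m_tip=0.0)
f_tip = cantilever_f1_hz(k=1.0, m_beam=1e-11, m_tip=2e-12)
assert f_tip < f_no_tip
```

A dynamic method that infers stiffness or mass from the measured resonance while ignoring m_tip would mis-estimate the effective mass, consistent with the paper's observation that mass-based quantities are affected even when static stiffness estimates survive.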

More Details

Analysis of modern and ancient artifacts for the presence of corn beer; Dynamic headspace testing of pottery sherds from Mexico and New Mexico

Materials Research Society Symposium Proceedings

Borek, Theodore; Mowry, Curtis D.; Dean, Glenna

A large volume-headspace apparatus that permits the heating of pottery fragments for direct analysis by gas chromatography/mass spectrometry (GC/MS) is described here. A series of fermented-corn beverages were produced in modern clay pots and the pots were analyzed to develop organic-species profiles for comparison with fragments of ancient pottery. Brewing pots from the Tarahumara of northern Mexico, a tribe that produces a corn-based fermented beverage, were also examined for volatile residues and the organic-species profiles were generated. Finally, organic species were generated from ancient potsherds from an archeological site and compared with the modern spectra. The datasets yielded similar organic species, many of which were identified by computer matching of the resulting mass spectra with the NIST mass spectral library. Additional analyses are now underway to highlight patterns of organic species common to all the spectra. This presentation demonstrates the utility of thermal desorption coupled with GC/MS for detecting fermentation residues in the fabric of unglazed archaeological ceramics after centuries of burial. © 2008 Materials Research Society.

More Details

Validation of mathematical models using weighted response measures

Conference Proceedings of the Society for Experimental Mechanics Series

Paez, Thomas L.; Massad, Jordan; Hinnerichs, Terry D.; O'Gorman, Chris; Hunter, Patrick

Advancements in our capabilities to accurately model physical systems using high resolution finite element models have led to increasing use of models for prediction of physical system responses. Yet models are typically not used without first demonstrating their accuracy or, at least, adequacy. In high consequence applications where model predictions are used to make decisions or control operations involving human life or critical systems, a movement toward accreditation of mathematical model predictions via validation is taking hold. Model validation is the activity wherein the predictions of mathematical models are demonstrated to be accurate or adequate for use within a particular regime. Though many types of predictions can be made with mathematical models, not all predictions have the same impact on the usefulness of a model. For example, predictions where the response of a system is greatest may be most critical to the adequacy of a model. Therefore, a model that makes accurate predictions in some environments and poor predictions in other environments may be perfectly adequate for certain uses. The current investigation develops a general technique for validating mathematical models where the measures of response are weighted in some logical manner. A combined experimental and numerical example that demonstrates the validation of a system using both weighted and non-weighted response measures is presented.

More Details

In-situ formation of bismuth-based iodine waste forms

Materials Research Society Symposium Proceedings

Nenoff, Tina; Krumhansl, James L.; Rajan, Ashwath

We investigated the synthesis of bismuth oxy-iodide and iodate compounds, in an effort to develop materials for iodine recovery from caustic waste streams and/or final waste disposal if repository conditions included ambient conditions similar to those under which the iodine was initially captured. The results presented involve the in-situ crystallization of layered bismuth oxide compounds with aqueous dissolved iodine (which resides as both iodide and iodate in solution). Although single-phase bismuth oxy-iodide materials have already been described in the context of capturing radioiodine, our unique contribution is the discovery that there is a mixture of Bi-O-I compositions, not described in the prior work, which optimize both the uptake and the degree of insolubility (and leachability) of iodine. The optimized combination produces a durable material that is suitable as a waste form for repository conditions such as are predicted at the Yucca Mountain repository (YMP) or in a similar type of repository that could be developed in coordination with iodine production via Global Nuclear Energy Program (GNEP) production cycles. © 2008 Materials Research Society.

More Details

Implementing peridynamics within a molecular dynamics code

Computer Physics Communications

Parks, Michael L.; Lehoucq, Rich; Plimpton, Steven J.; Silling, Stewart

Peridynamics (PD) is a continuum theory that employs a nonlocal model to describe material properties. In this context, nonlocal means that continuum points separated by a finite distance may exert force upon each other. A meshless method results when PD is discretized with material behavior approximated as a collection of interacting particles. This paper describes how PD can be implemented within a molecular dynamics (MD) framework, and provides details of an efficient implementation. This adds a computational mechanics capability to an MD code, enabling simulations at mesoscopic or even macroscopic length and time scales. © 2008 Elsevier B.V.
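The parallel between a discretized PD model and an MD pair loop can be made concrete with a minimal 1D bond-based sketch. The micromodulus, geometry, and units here are invented for illustration; the actual implementation described in the paper is three-dimensional and far more general:

```python
def pd_forces_1d(x_ref, u, horizon, c):
    """Minimal bond-based peridynamics force computation in 1D, structured
    like an MD pair loop.  x_ref: reference positions, u: displacements,
    horizon: interaction cutoff delta, c: micromodulus (illustrative).
    Each pair of particles within the horizon exerts equal and opposite
    forces proportional to bond stretch, exactly the access pattern an MD
    short-range pair potential uses."""
    n = len(x_ref)
    f = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            xi = x_ref[j] - x_ref[i]          # reference bond vector
            if abs(xi) > horizon:
                continue                      # outside the horizon
            y = xi + (u[j] - u[i])            # deformed bond vector
            stretch = (abs(y) - abs(xi)) / abs(xi)
            fmag = c * stretch * (1.0 if y > 0 else -1.0)
            f[i] += fmag                      # Newton's third law,
            f[j] -= fmag                      # as in an MD pair loop
    return f

# Uniform 10% stretch of a 3-particle chain: tension pulls the ends inward
# and leaves the middle particle force-free.
forces = pd_forces_1d([0.0, 1.0, 2.0], [0.0, 0.1, 0.2], horizon=1.5, c=1.0)
```

Because the horizon plays the role of an MD cutoff radius, existing MD machinery (neighbor lists, spatial decomposition, parallel communication) applies directly, which is the point of the paper.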

More Details

Accurate measurement of cellular autofluorescence is critical for imaging of host-pathogen interactions

Progress in Biomedical Optics and Imaging - Proceedings of SPIE

Timlin, Jerilyn A.; Noek, Rachel M.; Kaiser, Julia N.; Sinclair, Michael B.; Jones, Howland D.T.; Davis, Ryan W.; Lane, Todd

Cellular autofluorescence, though ubiquitous when imaging cells and tissues, is often assumed to be small in comparison to the signal of interest. Uniform estimates of autofluorescence intensity obtained from separate control specimens are commonly employed to correct for autofluorescence. While these may be sufficient for high signal-to-background applications, improvements in detector and probe technologies and introduction of spectral imaging microscopes have increased the sensitivity of fluorescence imaging methods, exposing the possibility of effectively probing the low signal-to-background regime. With spectral imaging, reliable monitoring of signals near or even below the noise levels of the microscope is possible if autofluorescence and background signals can be accurately compensated for. We demonstrate the importance of accurate autofluorescence determination and utility of spectral imaging and multivariate analysis methods using a case study focusing on fluorescence confocal spectral imaging of host-pathogen interactions. In this application fluorescent proteins are produced when bacteria invade host cells. Unfortunately the analyte signal is spectrally overlapped and typically weaker than the cellular autofluorescence. In addition to discussing the advantages of spectral imaging for following pathogen invasion, we present the spectral properties of mouse macrophage autofluorescence. The imaging and analysis methods developed are widely applicable to cell and tissue imaging. © 2008 Copyright SPIE - The International Society for Optical Engineering.

More Details

Force and moment measurements of a transonic fin-wake interaction

46th AIAA Aerospace Sciences Meeting and Exhibit

Smith, Justin; Henfling, John F.; Beresh, Steven J.; Grasser, Thomas; Spillers, Russell

Force and moment measurements have been made on an instrumented subscale fin model at transonic speeds in Sandia's Trisonic Wind Tunnel to ascertain the effects of Mach number and angle of attack on the interaction of a trailing vortex with a downstream control surface. Components of normal force, bending moment, and hinge moment were measured on an instrumented fin downstream of an identical fin at Mach numbers between 0.85 and 1.24, and combinations of angles of attack between -5° and 10° for both fins. The primary influence of upstream fin deflection is to shift the downstream fin's forces in a direction consistent with the vortex-induced angle of attack on the downstream fin. Secondary non-linear effects of vortex lift were found to increase the slopes of normal force and bending moment coefficients when plotted versus fin deflection angle. This phenomenon was dependent upon Mach number and the angles of attack of both fins. The hinge moment coefficient was also influenced by the vortex lift as the center of pressure was pushed aft with increased Mach number and total angle of attack.

More Details

Aerodynamic and aeroacoustic properties of flatback airfoils

46th AIAA Aerospace Sciences Meeting and Exhibit

Berg, Dale E.; Zayas, Jose R.

In 2002, Sandia National Laboratories (SNL) initiated a research program to demonstrate the use of carbon fiber in wind turbine blades and to investigate advanced structural concepts through the Blade Systems Design Study, known as the BSDS. One of the blade designs resulting from this program, commonly referred to as the BSDS blade, emerged from a systems approach in which manufacturing, structural and aerodynamic performance considerations were all simultaneously included in the design optimization. The BSDS blade design utilizes "flatback" airfoils for the inboard section of the blade to achieve a lighter, stronger blade. Flatback airfoils are generated by opening up the trailing edge of an airfoil uniformly along the camber line, thus preserving the camber of the original airfoil. This process is in distinct contrast to the generation of truncated airfoils, where the trailing edge of the airfoil is simply cut off, changing the camber and subsequently degrading the aerodynamic performance. Compared to a thick conventional, sharp trailing-edge airfoil, a flatback airfoil with the same thickness exhibits increased lift and reduced sensitivity to soiling. Although several commercial turbine manufacturers have expressed interest in utilizing flatback airfoils for their wind turbine blades, they are concerned with the potential extra noise that such a blade will generate from the blunt trailing edge of the flatback section. In order to quantify the noise generation characteristics of flatback airfoils, Sandia National Laboratories has conducted a wind tunnel test to measure the noise generation and aerodynamic performance characteristics of a regular DU97-300-W airfoil, a 10% trailing edge thickness flatback version of that airfoil, and the flatback fitted with a trailing edge treatment. The paper describes the test facility, the models, and the test methodology, and provides some preliminary results from the test.

More Details

Fabrication of (Ba,Sr)TiO3 high-value integrated capacitors by chemical solution deposition

IEEE International Symposium on Applications of Ferroelectrics

Sigman, Jennifer; Clem, Paul; Brennecka, Geoff; Tuttle, Bruce

This report focuses on our recent advances in the fabrication and processing of barium strontium titanate (BST) thin films by chemical solution deposition for next-generation functional integrated capacitors. Projected trends for capacitors include increasing capacitance density, decreasing operating voltages, decreasing dielectric thickness, and decreasing process cost. Key to all these trends is the strong correlation between processing, film phase evolution, and the resulting microstructure; by understanding this correlation, it becomes possible to tailor the microstructure for specific applications. This interplay will be discussed in relation to the resulting temperature-dependent dielectric response of the BST films.

More Details

Distributed network fusion for water quality

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Koch, Mark W.; Mckenna, Sean A.

To protect drinking water systems, a contamination warning system can use in-line sensors to detect accidental and deliberate contamination. Currently, detection of an incident occurs when data from a single station detects an anomaly. This paper considers the possibility of combining data from multiple locations to reduce false alarms and help determine the contaminant's injection source and time. If we consider the location and time of individual detections as points resulting from a random space-time point process, we can use Kulldorff's scan test to find statistically significant clusters of detections. Using EPANET, we simulate a contaminant moving through a water network and detect significant clusters of events. We show these significant clusters can distinguish true events from random false alarms and the clusters help identify the time and source of the contaminant. Fusion results show reduced errors with only 25% more sensors needed over a nonfusion approach. © 2008 ASCE.
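The cluster-scoring idea can be sketched with the Poisson form of Kulldorff's scan statistic: a candidate space-time window is scored by a log-likelihood ratio comparing the detections observed inside it against the count expected under the null. This is an illustration of the statistic only, not the paper's windowing over network topology:

```python
import math

def poisson_scan_statistic(c, e_c, n, e_n):
    """Kulldorff Poisson log-likelihood ratio for a candidate space-time
    window: c observed detections inside the window (e_c expected under the
    null), out of n total detections (e_n expected network-wide).  Larger
    values mean the clustering inside the window is less likely to be a
    chance arrangement of false alarms."""
    if c <= e_c * n / e_n:                 # only flag excesses, not deficits
        return 0.0
    inside = c * math.log(c / e_c)
    outside = (n - c) * math.log((n - c) / (e_n - e_c)) if n > c else 0.0
    return inside + outside

# A window catching 8 of 10 detections where only 2 were expected scores
# high; a window that merely tracks its expectation scores zero.
hot = poisson_scan_statistic(c=8, e_c=2.0, n=10, e_n=10.0)
cool = poisson_scan_statistic(c=2, e_c=2.0, n=10, e_n=10.0)
assert hot > cool == 0.0
```

In practice the window with the maximum statistic is compared against a Monte Carlo distribution of the maximum under randomized detections to declare significance, which is how random false alarms are separated from true events.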

More Details

Moving multiple sinks through wireless sensor networks for lifetime maximization

2008 5th IEEE International Conference on Mobile Ad-Hoc and Sensor Systems, MASS 2008

Basagni, S.; Carosi, A.; Petrioli, C.; Phillips, Cynthia A.

We propose scalable models and centralized heuristics for the concurrent and coordinated movement of multiple sinks in a wireless sensor network (WSN). The proposed centralized heuristic runs in polynomial time given the solution to the linear program and achieves results that are within 2% of the LP-relaxation-based upper bound. It provides a useful benchmark for evaluating centralized and distributed schemes for controlled sink mobility. © 2008 IEEE.

More Details

Fusion-fission hybrids for nuclear waste transmutation: A synergistic step between Gen-IV fission and fusion reactors

Fusion Engineering and Design

Mehlhorn, Thomas A.; Cipiti, Benjamin B.; Olson, C.L.; Rochau, Gary E.

Energy demand and GDP per capita are strongly correlated, while public concern over the role of energy in climate change is growing. Nuclear power plants produce 16% of world electricity demands without greenhouse gases. Generation-IV advanced nuclear energy systems are being designed to be safe and economical. Minimizing the handling and storage of nuclear waste is important. NIF and ITER are bringing sustainable fusion energy closer, but a significant gap in fusion technology development remains. Fusion-fission hybrids could be a synergistic step to a pure fusion economy and act as a technology bridge. We discuss how a pulsed power-driven Z-pinch hybrid system producing only 20 MW of fusion yield can drive a sub-critical transuranic blanket that transmutes 1280 kg of actinide wastes per year and produces 3000 MW. These results are applicable to other inertial and magnetic fusion energy systems. A hybrid system could be introduced somewhat sooner because of the modest fusion yield requirements and can provide both a safe alternative to fast reactors for nuclear waste transmutation and a maturation path for fusion technology. The development and demonstration of advanced materials that withstand high-temperature, high-irradiation environments is a fundamental technology issue that is common to both fusion-fission hybrids and Generation-IV reactors. © 2008 Elsevier B.V. All rights reserved.

More Details

Influence of misfit mechanisms on jointed structure response

Conference Proceedings of the Society for Experimental Mechanics Series

Resor, Brian R.; Starr, Michael

Geometric features with characteristic lengths on the order of the size of the contact patch interface may be at least partly responsible for the variability observed in experimental measurements of structural stiffness and energy dissipation per cycle in a bolted joint. Experiments on combinations of two different types of joints (statically determinate single-joint and statically indeterminate three-joint structures) of nominally identical hardware show that the structural stiffness of the tested specimens varies by up to 25% and the energy dissipation varies by up to nearly 300%. A pressure-sensitive film was assembled into the interfaces of jointed structures to gain a qualitative understanding of the distribution of interfacial pressures of nominally conformal surfaces. The resultant pressure distributions suggest that there are misfit mechanisms that may influence contact patch geometry and also structural response of the interface. These mechanisms include local plateaus and machining induced waviness. The mechanisms are not consistent across nominally machined hardware interfaces. The proposed misfit mechanisms may be partly responsible for the variability in energy dissipation per cycle of joint experiments.

More Details

Air-drag damping on micro-cantilever beams

Conference Proceedings of the Society for Experimental Mechanics Series

Sumali, Hartono (Anton); Carne, Thomas G.

Damping in a micro-cantilever beam was measured for a very broad range of air pressures from atmosphere (10^5 Pa) down to 0.2 Pa. The beam was in open space free from squeeze films. The damping ratio, due mainly to air drag, varied by a factor of 10^4 within this pressure range. The damping due to air drag was separated from other sources of energy dissipation so that air damping could be measured at 10^-6 of the critical damping factor. The linearity of the damping was confirmed over a wide range of beam vibration levels. Lastly, the measured damping was compared with several existing theories of air-drag damping covering both rarefied and viscous gas flow. The measured data indicate that in the rarefied regime the air damping is proportional to pressure and independent of viscosity, while in the viscous regime the damping is determined by viscosity.

More Details

Radar transmitter and receiver MCM subassemblies implemented in LTCC

4th IMAPS/ACerS International Conference and Exhibition on Ceramic Interconnect and Ceramic Microsystems Technologies 2008, CICMT 2008

Knudson, R.T.; Smith, F.; Zawicki, L.R.; Peterson, K.A.

The development of transmitter and receiver Multichip Module subassemblies implemented in LTCC for an S-band radar application followed an approach that reduces the number of discrete devices and increases reliability. The LTCC MCM incorporates custom GaAs RF integrated circuits in faraday cavities, novel methods of reducing line resistance and enhancing lumped element Q, and a thick film back plane which attaches to a heat sink. The incorporation of PIN diodes on the receiver and a 50W power amplifier on the transmitter required methods for removing heat beyond what thermal vias can accomplish. The die is a high voltage pHEMT GaAs power amplifier RFIC chip that measures 6.5 mm × 8 mm. Although thermal vias are adequate in certain cases, the thermal solution includes heat spreaders and thermally conductive backplates. Processing hierarchy, including gold-tin die attach and various uses of polymeric attachment, must allow rework on these prototypical devices. LTCC cavity covers employ metallic coatings on their exterior surfaces. The processing of the LTCC and its effect on the function of the transmitter and receiver circuits is discussed in the poster session.

More Details

Low-memory Lagrangian relaxation methods for sensor placement in municipal water networks

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Berry, Jonathan; Boman, Erik G.; Phillips, Cynthia A.; Riesen, Lee A.

Placing sensors in municipal water networks to protect against a set of contamination events is a classic p-median problem for most objectives when we assume that sensors are perfect. Many researchers have proposed exact and approximate solution methods for this p-median formulation. For full-scale networks with large contamination event suites, one must generally rely on heuristic methods to generate solutions. These heuristics provide feasible solutions, but give no quality guarantee relative to the optimal placement. In this paper we apply a Lagrangian relaxation method in order to compute lower bounds on the expected impact of suites of contamination events. In all of our experiments with single objectives, these lower bounds establish that the GRASP local search method generates solutions that are provably optimal to within a fraction of a percentage point. Our Lagrangian heuristic also provides good solutions itself and requires only a fraction of the memory of GRASP. We conclude by describing two variations of the Lagrangian heuristic: an aggregated version that trades off solution quality for further memory savings, and a multi-objective version which balances objectives with additional goals. © 2008 ASCE.
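The flavor of the Lagrangian lower bound can be sketched on a toy p-median instance; the tiny impact matrix and multiplier values below are invented for illustration. Relaxing the assignment constraints with multipliers makes the problem separable over candidate locations, and any multiplier choice yields a valid lower bound:

```python
def pmedian_lagrangian_bound(d, p, lam):
    """Lower bound on the p-median objective from Lagrangian relaxation of
    the assignment constraints.  d[a][i] is the impact of contamination
    event a when its closest sensor is at location i, p is the sensor
    budget, lam[a] is the multiplier for event a.  For fixed multipliers
    the relaxed problem separates by location: open the p locations with
    the most negative reduced cost."""
    n_loc = len(d[0])
    # reduced cost of opening candidate location i
    rho = [sum(min(0.0, d[a][i] - lam[a]) for a in range(len(d)))
           for i in range(n_loc)]
    return sum(lam) + sum(sorted(rho)[:p])

# Toy instance: 3 contamination events, 3 candidate locations, 1 sensor.
d = [[1.0, 4.0, 6.0],
     [5.0, 1.0, 6.0],
     [5.0, 4.0, 2.0]]
bound = pmedian_lagrangian_bound(d, p=1, lam=[2.0, 2.0, 2.0])
# The best single location (column 1) costs 4+1+4 = 9; the bound is below.
assert bound <= 9.0
```

In practice the multipliers are iteratively improved (e.g. by subgradient updates), tightening the bound; only the multipliers and reduced costs need be stored, which is the source of the memory savings over methods that hold the full heuristic state.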

More Details

Analysis of proton and heavy-ion irradiation effects on phase change memories with MOSFET and BJT selectors

IEEE Transactions on Nuclear Science

Gasperin, Alberto; Paccagnella, Alessandro; Schwank, James R.; Vizkelethy, Gyorgy; Ottogalli, Federica; Pellizzer, Fabio

We study proton and heavy ion irradiation effects on Phase Change Memories (PCM) with MOSFET and BJT selectors and the effect of the irradiation on the retention characteristics of these devices. Proton irradiation produces noticeable variations in the cell distributions in PCM with MOSFET selectors mostly due to leakage currents affecting the transistors. PCM with BJT selectors show only small variations after proton irradiation. PCM cells do not appear to be impacted by heavy-ion irradiation. Using high temperature accelerated retention tests, we demonstrate that the retention capability of these memories is not compromised by the irradiation. © 2006 IEEE.

More Details

Operational results of russian-built photovoltaic alternative energy powered lighthouses in extreme climates

American Solar Energy Society - SOLAR 2008, Including Proc. of 37th ASES Annual Conf., 33rd National Passive Solar Conf., 3rd Renewable Energy Policy and Marketing Conf.: Catch the Clean Energy Wave

Estrada, Luis; Rosenthal, Andrew; Foster, Robert; Hauser, Gene C.; Grigoriev, Alexander; Khoudykin, Alexei

This paper summarizes operational histories of three Russian-designed photovoltaic (PV) lighthouses in Norway and Russia. All lighthouses were monitored to evaluate overall system and Nickel Cadmium (NiCad) battery bank performance to determine battery capacity, charging trends, temperature, and reliability. The practical use of PV in this unusual mode, months of battery charging followed by months of battery discharging, is documented and assessed. This paper presents operational data obtained from 2004 through 2007.

More Details

The impact of safeguards authentication measures on the facility operator

8th International Conference on Facility Operations: Safeguards Interface 2008

Tolk, Keith M.; Merkle, Peter B.

In order for the IAEA to draw valid safeguards conclusions, they must be assured that the data used to draw those conclusions are authentic. In order to provide that assurance, authentication measures are applied to the safeguards equipment and the data from the equipment. These authentication measures require that IAEA personnel have direct electronic and physical access to the equipment and severely limit access to the equipment by the operator. Providing the necessary access for the IAEA personnel can be intrusive and potentially disruptive to plant operations. If the equipment is to be used jointly by the operator and the IAEA, the authentication measures can cause difficulties for the operator by limiting his ability to repair and maintain the hardware. In many cases, tamper indicating conduit and enclosures are also required. The installation, sealing, and inspection of this tamper indicating hardware also add to the intrusiveness of the safeguards activities and increase the cost of safeguards. This paper discusses these impacts and proposes methods for mitigating them.

More Details

The cognitive foundry: A flexible platform for intelligent agent modeling

2008 BRIMS Conference - Behavior Representation in Modeling and Simulation

Basilico, Justin D.; Benz, Zachary O.; Dixon, Kevin R.

The Cognitive Foundry is a unified collection of tools for Cognitive Science and Technology applications, supporting the development of intelligent agent models. The Foundry has two primary components designed to facilitate agent construction: the Cognitive Framework and Machine Learning packages. The Cognitive Framework provides design patterns and default implementations of an architecture for evaluating theories of cognition, as well as a suite of tools to assist in the building and analysis of theories of cognition. The Machine Learning package provides tools for populating components of the Cognitive Framework from domain-relevant data using automated knowledge-capture techniques. This paper describes the Cognitive Foundry with a focus on its application within the context of agent behavior modeling.

More Details

Using multivariate analyses to compare subsets of electrodes and potentials within an electrode array for predicting sugar concentrations in mixed solutions

Journal of Electroanalytical Chemistry

Steen, William A.; Stork, Christopher L.

A non-selective electrode array is presented for the quantification of fructose, galactose, and glucose in mixed solutions. A unique feature of this electrode array relative to other published work is the wide diversity of electrode materials incorporated within the array, being constructed of 41 different metals and metal alloys. Cyclic voltammograms were acquired for solutions containing a single sugar at varying concentrations, and the correlation between current and sugar concentration was calculated as a function of potential and electrode array element. The correlation plots identified potential regions and electrodes that scaled most linearly with sugar concentration, and the number of electrodes used in building predictive models was reduced to 15. Partial least squares regression models relating electrochemical response to sugar concentration were constructed using data from single electrodes and multiple electrodes within the array, and the predictive abilities of these models were rigorously compared using a non-parametric Wilcoxon test. Models using single electrodes (Pt:Rh (90:10) for fructose, Au:Ni (82:18) for galactose, and Au for glucose) were judged to be statistically superior or indistinguishable from those built with multiple electrodes. Additionally, for each sugar, interval partial least squares regression successfully identified a subset of potentials within a given electrode that generated a model of statistically equivalent predictive ability relative to the full potential model. While including data from multiple electrodes offered no benefit in predicting sugar concentration, use of the array afforded the versatility and flexibility of selecting the best single electrode for each sugar. © 2008 Elsevier B.V. All rights reserved.
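The calibration workflow the abstract describes, relating electrochemical response to concentration with partial least squares regression, can be sketched on synthetic data. This is a minimal single-response PLS (NIPALS) implementation; the array dimensions, concentration range, and noise level are invented for illustration and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for voltammetric data: 60 solutions x 200 potentials,
# with current response roughly linear in sugar concentration plus noise.
conc = rng.uniform(1.0, 10.0, size=60)
X = np.outer(conc, np.linspace(0.2, 1.0, 200))
X += rng.normal(scale=0.5, size=X.shape)

def pls1_fit(X, y, n_comp):
    """Single-response partial least squares via the NIPALS algorithm."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        p = Xc.T @ t / tt
        q = (yc @ t) / tt
        Xc -= np.outer(t, p)       # deflate X
        yc -= q * t                # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))  # regression coefficients
    return Xm, ym, B

Xm, ym, B = pls1_fit(X[:45], conc[:45], n_comp=3)
pred = (X[45:] - Xm) @ B + ym
rmse = float(np.sqrt(np.mean((pred - conc[45:]) ** 2)))
print(f"held-out RMSE: {rmse:.3f}")
```

In the paper the competing models (single-electrode vs. multi-electrode) would each be fit this way and their prediction errors compared with a non-parametric test.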

More Details

Dynamic initiation fracture toughness of high strength steel alloys

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Foster, John T.; Luk, Vincent K.; Chen, Weinong W.

Determination of fracture toughness for metals under quasi-static loading conditions can follow well-established procedures and ASTM standards. The use of metallic materials in impact-related applications requires the determination of dynamic fracture toughness for these materials. There are two main challenges in experiment design that must be overcome before valid dynamic data can be obtained. Dynamic equilibrium over the entire specimen needs to be approximately achieved to relate the crack tip loading state to the far-field loading conditions. The loading rate at the crack tip should be maintained nearly constant during an experiment to delineate rate effects on the values of dynamic fracture toughness. A recently developed experimental technique for determining dynamic fracture toughness of brittle materials has been adapted to measure the dynamic initiation fracture toughness of high strength steel alloys. A split-Hopkinson pressure bar is used to apply the dynamic loading. A pulse shaper is used to achieve constant loading rate at the crack tip and dynamic equilibrium across the specimen. A four-point bending configuration is used at the impact section of the setup. ©2008 Society for Experimental Mechanics Inc.

More Details

Sensitivity analyses of radionuclide transport in the saturated zone at Yucca Mountain, Nevada

American Nuclear Society - 12th International High-Level Radioactive Waste Management Conference 2008

Arnold, Bill W.; Hadgu, Teklu; Sallaberry, Cedric J.

Simulation of potential radionuclide transport in the saturated zone from beneath the proposed repository at Yucca Mountain to the accessible environment is an important aspect of the total system performance assessment (TSPA) for disposal of high-level radioactive waste at the site. Analyses of uncertainty and sensitivity are integral components of the TSPA and have been conducted at both the sub-system and system levels to identify parameters and processes that contribute to the overall uncertainty in predictions of repository performance. Results of the sensitivity analyses indicate that uncertainty in groundwater specific discharge along the flow path in the saturated zone from beneath the repository is an important contributor to uncertainty in TSPA results and is the dominant source of uncertainty in transport times in the saturated zone for most radionuclides. Uncertainties in parameters related to matrix diffusion in the volcanic units, colloid-facilitated transport, and sorption are also important contributors to uncertainty in transport times to differing degrees for various radionuclides.

More Details

Dual-permeability modeling and evaluation of drift-shadow experiments

American Nuclear Society - 12th International High-Level Radioactive Waste Management Conference 2008

Ho, Clifford K.; Arnold, Bill W.; Altman, Susan J.

The drift-shadow effect describes capillary diversion of water flow around a drift or cavity in porous or fractured rock, resulting in lower water flux directly beneath the cavity. This paper presents computational simulations of drift-shadow experiments using dual-permeability models, similar to the models used for performance assessment analyses of flow and seepage in unsaturated fractured tuff at Yucca Mountain. Results show that the dual-permeability models capture the salient trends and behavior observed in the experiments, but constitutive relations (e.g., fracture capillary-pressure curves) can significantly affect the simulated results. An evaluation of different meshes showed that, at the grid refinement used, orthogonal and unstructured meshes did not produce large differences.

More Details

Yucca Mountain 2008 performance assessment: Uncertainty and sensitivity analysis for expected dose

American Nuclear Society - 12th International High-Level Radioactive Waste Management Conference 2008

Hansen, C.W.; Brooks, K.; Groves, J.W.; Helton, J.C.; Lee, K.P.; Sallaberry, Cedric J.; Statham, W.; Thorn, C.

Uncertainty and sensitivity analyses of the expected dose to the reasonably maximally exposed individual in the Yucca Mountain 2008 total system performance assessment (TSPA) are presented. Uncertainty results are obtained with Latin hypercube sampling of epistemic uncertain inputs, and partial rank correlation coefficients are used to illustrate sensitivity analysis results.

More Details

Yucca Mountain 2008 performance assessment: Uncertainty and sensitivity analysis for physical processes

American Nuclear Society - 12th International High-Level Radioactive Waste Management Conference 2008

Sallaberry, Cedric J.; Aragon, A.; Bier, A.; Chen, Y.; Groves, J.W.; Hansen, C.W.; Helton, J.C.; Mehta, S.; Miller, S.P.; Min, J.; Vo, P.

The Total System Performance Assessment (TSPA) for the proposed high level radioactive waste repository at Yucca Mountain, Nevada, uses a sampling-based approach to uncertainty and sensitivity analysis. Specifically, Latin hypercube sampling is used to generate a mapping between epistemically uncertain analysis inputs and analysis outcomes of interest. This results in distributions that characterize the uncertainty in analysis outcomes. Further, the resultant mapping can be explored with sensitivity analysis procedures based on (i) examination of scatterplots, (ii) partial rank correlation coefficients, (iii) R2 values and standardized rank regression coefficients obtained in stepwise rank regression analyses, and (iv) other analysis techniques. The TSPA considers over 300 epistemically uncertain inputs (e.g., corrosion properties, solubilities, retardations, defining parameters for Poisson processes, ⋯) and over 70 time-dependent analysis outcomes (e.g., physical properties in waste packages and the engineered barrier system; releases from the engineered barrier system, the unsaturated zone, and the saturated zone for individual radionuclides; and annual dose to the reasonably maximally exposed individual (RMEI) from both individual radionuclides and all radionuclides). The obtained uncertainty and sensitivity analysis results play an important role in facilitating understanding of analysis results, supporting analysis verification, establishing risk importance, and enhancing overall analysis credibility. The uncertainty and sensitivity analysis procedures are illustrated and explained with selected results for releases from the engineered barrier system, the unsaturated zone, and the saturated zone, and also for annual dose to the RMEI.
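The sampling and sensitivity machinery the abstract names can be sketched on a toy model: draw a Latin hypercube sample over a few epistemic inputs, evaluate an outcome, and compute partial rank correlation coefficients (rank-transform everything, regress out the other inputs, correlate the residuals). The three-input model below is invented for illustration; the TSPA itself uses hundreds of inputs.

```python
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(1)

# Latin hypercube sample over three hypothetical epistemic inputs on [0, 1].
x = qmc.LatinHypercube(d=3, seed=1).random(n=200)

# Toy analysis outcome: dominated by input 0, weakly affected by input 1.
y = 5.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.1, size=200)

def partial_rank_corr(x, y, j):
    """PRCC: rank-transform, regress out the other inputs, correlate residuals."""
    rx = np.column_stack([rankdata(col) for col in x.T])
    ry = rankdata(y)
    A = np.column_stack([np.delete(rx, j, axis=1), np.ones(len(ry))])
    res_x = rx[:, j] - A @ np.linalg.lstsq(A, rx[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return float(np.corrcoef(res_x, res_y)[0, 1])

prcc = [partial_rank_corr(x, y, j) for j in range(3)]
print("PRCCs:", [round(c, 2) for c in prcc])
```

A large PRCC for input 0 and a near-zero PRCC for input 2 is exactly the kind of ranking the TSPA uses to identify which uncertain inputs drive the outcome.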

More Details

Low-memory Lagrangian relaxation methods for sensor placement in municipal water networks

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Berry, Jonathan; Boman, Erik G.; Phillips, Cynthia A.; Riesen, Lee A.

Placing sensors in municipal water networks to protect against a set of contamination events is a classic p-median problem for most objectives when we assume that sensors are perfect. Many researchers have proposed exact and approximate solution methods for this p-median formulation. For full-scale networks with large contamination event suites, one must generally rely on heuristic methods to generate solutions. These heuristics provide feasible solutions, but give no quality guarantee relative to the optimal placement. In this paper we apply a Lagrangian relaxation method in order to compute lower bounds on the expected impact of suites of contamination events. In all of our experiments with single objectives, these lower bounds establish that the GRASP local search method generates solutions that are provably optimal to within a fraction of a percentage point. Our Lagrangian heuristic also provides good solutions itself and requires only a fraction of the memory of GRASP. We conclude by describing two variations of the Lagrangian heuristic: an aggregated version that trades off solution quality for further memory savings, and a multi-objective version that balances the primary objective against additional goals. © 2008 ASCE.
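The lower-bounding idea can be sketched on a toy p-median instance: relax the assignment constraints with Lagrange multipliers, which makes the inner problem separable over sensor locations, and maximize the bound by subgradient ascent. The instance, the greedy heuristic standing in for GRASP, and the step schedule below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy p-median instance: d[i, j] = impact of event i if its nearest sensor is at j.
n_events, n_locs, p = 40, 15, 3
d = rng.uniform(0.0, 10.0, size=(n_events, n_locs))

# Greedy heuristic (stand-in for GRASP): gives a feasible upper bound.
open_set = []
for _ in range(p):
    best = min(set(range(n_locs)) - set(open_set),
               key=lambda j: d[:, open_set + [j]].min(axis=1).sum())
    open_set.append(best)
upper = d[:, open_set].min(axis=1).sum()

# Lagrangian relaxation of the assignment constraints gives a lower bound:
#   L(lam) = sum_i lam_i + (sum of the p most negative rho_j),
# where rho_j = sum_i min(0, d[i, j] - lam_i).
lam = d.mean(axis=1)
lower = -np.inf
for it in range(300):
    rho = np.minimum(d - lam[:, None], 0.0).sum(axis=0)
    chosen = np.argsort(rho)[:p]
    lower = max(lower, lam.sum() + rho[chosen].sum())
    # Subgradient: 1 minus the number of open facilities serving event i.
    g = 1.0 - (d[:, chosen] < lam[:, None]).sum(axis=1)
    lam = lam + (0.5 / (1 + it)) * g
print(f"bound sandwich: {lower:.2f} <= optimum <= {upper:.2f}")
```

Any value of the relaxed problem is a valid lower bound on the optimum, so the gap between `lower` and the heuristic's `upper` certifies the heuristic's quality, which is the role the bounds play in the paper.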

More Details

Streamer initiation in volume and surface discharges in atmospheric gases

Proceedings of the 2008 IEEE International Power Modulators and High Voltage Conference, PMHVC

Lehr, Jane; Warne, Larry K.; Jorgenson, Roy E.; Wallace, Z.R.; Hodge, K.C.; Caldwell, Michele C.

It is generally acknowledged that once a highly conductive channel is established between two charged and conducting materials, electrical breakdown is well established and difficult to interrupt. An understanding of the initiation mechanism for electrical breakdown is crucial for devising mitigating methods to avoid catastrophic failures. Both volumetric and surface discharges are of interest. An effort is underway where experiments and theory are being simultaneously developed. The experiment consists of an impedance matched discharge chamber capable of investigating various gases and pressures to ten atmospheres. In addition to current and voltage measurements, a high dynamic range streak camera records streamer velocities. The streamer velocities are particularly valuable for comparison with theory. A streamer model is being developed which includes photo-ionization and particle interactions with an insulating surface. The combined theoretical and experimental effort is aimed at detailed comparisons of streamer development as well as a quantitative understanding of how streamers interact with dielectric surfaces and the resulting effects on breakdown voltage. © 2008 IEEE.

More Details

Characterization of general and localized corrosion resistance of several titanium alloys in high temperature brines

17th International Corrosion Congress 2008: Corrosion Control in the Service of Society

Gordon, Gerald M.; Mon, Kevin G.; Kim, Young J.

For the Yucca Mountain Project nuclear waste repository design, the emplaced waste packages are covered by a self-supported inverted U-shaped drip shield fabricated from Ti Grade 7 with Ti Grade 29 structural support members. This paper reports experimental results obtained to characterize the corrosion behavior of several titanium alloys. General corrosion rates were obtained using weight loss and electrochemical techniques such as cyclic potentiodynamic polarization and electrochemical impedance spectroscopy. Localized corrosion resistance was assessed from the results of cyclic potentiodynamic polarization and long-term corrosion potential measurements. The results indicate the drip shield titanium alloys are highly resistant to general and localized corrosion under repository-relevant conditions. © 2009 by NACE International.

More Details

Re-engineering PCM/FM as a phase modulation scheme

Proceedings of the International Telemetering Conference

Punnoose, Ratish J.

Historically, PCM/FM receivers have used simple detection schemes yielding low performance. Using multi-symbol detection methods, PCM/FM can be received with better error performance than either SOQPSK or multi-h CPM. We present an approximation by which PCM/FM can be reinterpreted as a phase modulation scheme, allowing the use of coherent detection techniques. This is backward compatible with existing receivers. We also present an extension by which the error performance of the approximated PCM/FM can be improved even further with no change to the spectral properties. This improved waveform can be used in systems where compatibility with existing frequency allocation schemes is required. © International Foundation for Telemetering, 2008.

More Details

Self-voting dual-modular-redundancy circuits for single-event-transient mitigation

IEEE Transactions on Nuclear Science

Teifel, John

Dual-modular-redundancy (DMR) architectures use duplication and self-voting asynchronous circuits to mitigate single event transients (SETs). The area and performance of DMR circuitry is evaluated against conventional triple-modular-redundancy (TMR) logic. Benchmark ASIC circuits designed with DMR logic show a 10-24% area improvement for flip-flop designs, and a 33% improvement for latch designs. © 2006 IEEE.
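The self-voting idea can be illustrated in software (the paper concerns ASIC circuits; this toy model assumes the voter behaves like a Muller C-element, the canonical self-voting asynchronous element): the output follows the two redundant copies only while they agree, so a single-event transient on one copy is filtered out.

```python
def c_element(a: int, b: int, prev: int) -> int:
    """Muller C-element: output follows the inputs only when they agree."""
    return a if a == b else prev

# Two redundant copies of a signal; copy B suffers a single-event transient
# (a one-sample glitch) that the self-voting stage filters out.
copy_a = [0, 0, 1, 1, 1, 0, 0]
copy_b = [0, 0, 1, 0, 1, 0, 0]   # glitch at index 3

out, state = [], 0
for a, b in zip(copy_a, copy_b):
    state = c_element(a, b, state)
    out.append(state)

print(out)  # the glitched sample is masked: [0, 0, 1, 1, 1, 0, 0]
```

TMR instead triples the logic and takes a 2-of-3 majority vote each cycle; the DMR approach trades the third copy for this state-holding agreement check, which is the source of the area savings the abstract reports.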

More Details

Load line evaluation of a 1-MV linear transformer driver (LTD)

Proceedings of the 2008 IEEE International Power Modulators and High Voltage Conference, PMHVC

Leckbee, Joshua; Cordova, Steve R.; Oliver, Bryan V.; Johnson, David L.; Toury, Martial; Rosol, Rodolphe; Bui, Bill

A seven cavity LTD system has been assembled and tested in a voltage adder configuration capable of producing approximately 1-MV into a 7-Ω, critically damped load. Individual cavities have been tested with a resistive load. The seven cavity adder has been tested with a large area electron beam diode. The output pulse when tested into a resistive load is that of an RLC circuit. When tested with a dynamic load impedance, the output voltages of the cavities have an added oscillation. The oscillation affects the output pulse shape but is not harmful to the cavity components. © 2008 IEEE.
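The statement that the resistive-load output pulse is that of an RLC circuit can be made concrete with a critically damped series RLC discharge. The 7-Ω load comes from the abstract; the charge voltage and effective capacitance below are hypothetical values chosen only to produce a plausible pulse scale.

```python
import math

# Critically damped series RLC discharge: with R = 2*sqrt(L/C) the load current
# is i(t) = (V0 / L) * t * exp(-a*t), where a = R / (2*L).
V0 = 1.0e6                 # charge voltage (V); illustrative
C = 40e-9                  # effective capacitance (F); illustrative
R = 7.0                    # load resistance (ohms), from the abstract
L = R * R * C / 4.0        # inductance for critical damping (H)
a = R / (2.0 * L)

def current(t):
    return (V0 / L) * t * math.exp(-a * t)

t_peak = 1.0 / a                       # di/dt = 0 at t = 1/a
i_peak = current(t_peak)               # equals V0 * sqrt(C/L) / e
print(f"peak {i_peak / 1e3:.1f} kA at {t_peak * 1e9:.0f} ns")
```

The added oscillation seen with a dynamic (electron-beam diode) load corresponds to departing from this critically damped case, since the load impedance then varies during the pulse.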

More Details

Product life-cycle modeling utilizing SysML modeling

18th Annual International Symposium of the International Council on Systems Engineering, INCOSE 2008

Brodbeck, Georgia L.; De Spain, Mark J.; Griego, Regina M.

Functional modeling and SysML/UML are defined communication languages that engineers and related disciplines use to communicate the nature of engineering products. We often see functional modeling and SysML/UML used to describe large, physical entities, such as airplanes or spacecraft. Systems engineers use functional modeling to decompose these large systems into subsystems. Each subsystem has defined requirements, defined roles and responsibilities (functions), and definable interfaces. Each subsystem consists of electrical hardware, mechanical hardware, and computer software. Functional modeling and SysML/UML can also be used for modeling program/project management processes, systems engineering processes, and manufacturing processes. Many organizations use an array of flow charts, organization charts, network diagrams, and spreadsheets to define engineering processes. This paper presents how Sandia National Laboratories (SNL) used functional modeling and SysML/UML to define the design and development processes and procedures for a product realization process (PRP) called the Integrated Phase Gate (IPG) Process. The use of functional modeling helped the organization more readily accept the use of systematic modeling for developing the PRP. Additionally, this paper will explore the value of using SysML/UML over functional modeling in order to completely specify processes and process artifacts. © 2008 by Georgia Artery, Mark De Spain and Regina Griego.

More Details

Identification of viruses using microfluidic protein profiling and Bayesian classification

Analytical Chemistry

Fruetel, Julia A.; West, Jason A.A.; Debusschere, Bert; Hukari, Kyle; Lane, Todd; Najm, Habib N.; Ortega, Jose; Renzi, Ronald F.; Shokair, Isaac R.; Vandernoot, Victoria A.

We present a rapid method for the identification of viruses using microfluidic chip gel electrophoresis (CGE) of high-copy number proteins to generate unique protein profiles. Viral proteins are solubilized by heating at 95°C in borate buffer containing detergent (5 min), then labeled with fluorescamine dye (10 s), and analyzed using the μChemLab CGE system (5 min). Analyses of closely related T2 and T4 bacteriophage demonstrate sufficient assay sensitivity and peak resolution to distinguish the two phage. CGE analyses of four additional viruses - MS2 bacteriophage, Epstein-Barr, respiratory syncytial, and vaccinia viruses - demonstrate reproducible and visually distinct protein profiles. To evaluate the suitability of the method for unique identification of viruses, we employed a Bayesian classification approach. Using a subset of 126 replicate electropherograms of the six viruses and phage for training purposes, successful classification with non-training data was 66/69 or 95% with no false positives. The classification method is based on a single attribute (elution time), although other attributes such as peak width, peak amplitude, or peak shape could be incorporated and may improve performance further. The encouraging results suggest a rapid and simple way to identify viruses without requiring specialty reagents such as PCR probes and antibodies. © 2008 American Chemical Society.
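A single-attribute Bayesian classifier of the kind the abstract describes can be sketched in a few lines: fit a Gaussian to each class's elution times and, with equal priors, assign a new measurement to the class with the highest likelihood. The class names match the paper's agents, but the elution-time distributions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical elution-time distributions (s) for three agents; the real assay
# trains on replicate electropherograms, so this is a simplified stand-in.
true_means = {"T2": 41.0, "T4": 43.5, "MS2": 47.0}
train = {v: rng.normal(m, 0.6, size=30) for v, m in true_means.items()}

# Fit a Gaussian to each class; with equal priors the Bayes rule picks the
# class with the highest likelihood of the observed elution time.
params = {v: (t.mean(), t.std(ddof=1)) for v, t in train.items()}

def classify(t):
    def log_lik(v):
        mu, s = params[v]
        return -0.5 * ((t - mu) / s) ** 2 - np.log(s)
    return max(params, key=log_lik)

print(classify(41.2), classify(47.3))
```

Extra attributes (peak width, amplitude, shape) would enter as additional likelihood factors per class, which is the extension the abstract suggests.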

More Details

Non-proliferation impact assessment for GNEP: Transportation issues

American Nuclear Society - International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2008

Radel, Ross F.; Rochau, Gary E.

This report evaluates transportation risk for nuclear material in the proposed Global Nuclear Energy Partnership (GNEP) fuel cycle. Since many details of the GNEP program are yet to be determined, this document is intended only to identify general issues. The existing regulatory environment is determined to be largely prepared to incorporate the changes that the GNEP program will introduce. Nuclear material vulnerability and attractiveness are considered with respect to the various transport stages within the GNEP fuel cycle. It is determined that increased transportation security will be required for the GNEP fuel cycle, particularly for international transport. Finally, transportation considerations for several fuel cycle scenarios are discussed. These scenarios compare the current "once-through" fuel cycle with various aspects of the proposed GNEP fuel cycle.

More Details

Five-lens corrector for Cassegrain-form telescopes

Proceedings of SPIE - The International Society for Optical Engineering

Ackermann, Mark R.; McGraw, John T.; Zimmer, Peter C.

Refractive elements are commonly used on Cassegrain-form telescopes to correct off-axis aberrations and both widen and flatten the field. Early correctors used two lenses with spherical surfaces, but their performance was somewhat limited. More recent correctors have three or four lenses with some including at least one aspheric surface. These systems produce high resolution images over relatively wide fields but often require the corrector and mirrors to be optimized together. Here we present a new corrector design using five spherical lenses. This approach produces high image quality with low distortion over wide fields and has sufficient degrees of freedom to allow the corrector to be optimized independently of the mirrors if necessary. © 2008 Copyright SPIE - The International Society for Optical Engineering.

More Details

Integration of the advanced transparency framework to advanced nuclear systems enhancing safety, operations, security, and safeguards (SOSS)

American Nuclear Society - International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2008

Cleary, Virginia D.; Rochau, Gary E.; Méndez, Carmen

The advent of the nuclear renaissance gives rise to a concern for the effective design of nuclear fuel cycle systems that are safe, secure, nonproliferating and cost-effective. We propose to integrate the monitoring of the four major factors of nuclear facilities by focusing on the interactions between Safeguards, Operations, Security, and Safety (SOSS). We propose to develop a framework that monitors process information continuously and can demonstrate the ability to enhance safety, operations, security, and safeguards by measuring and reducing relevant SOSS risks, thus ensuring the safe and legitimate use of the nuclear fuel cycle facility. A real-time comparison between expected and observed operations provides the foundation for the calculation of SOSS risk. The automation of new nuclear facilities requiring minimal manual operation provides an opportunity to utilize the abundance of process information for monitoring SOSS risk. A framework that monitors process information continuously can lead to greater transparency of nuclear fuel cycle activities and can demonstrate the ability to enhance the safety, operations, security and safeguards associated with the functioning of the nuclear fuel cycle facility. Sandia National Laboratories (SNL) has developed a risk algorithm for safeguards and is in the process of demonstrating the ability to monitor operational signals in real-time though a cooperative research project with the Japan Atomic Energy Agency (JAEA). The risk algorithms for safety, operations and security are under development. The next stage of this work will be to integrate the four algorithms into a single framework.

More Details

A phenomena identification and ranking table (PIRT) exercise for nuclear power plant fire model applications

American Nuclear Society - International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2008

Nowlen, Steven P.; Olivier, Tara J.; Dreisbach, Jason; Salley, Mark H.

This paper summarizes the results of a Phenomena Identification and Ranking Table (PIRT) exercise performed for nuclear power plant (NPP) fire modeling applications conducted on behalf of the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Regulatory Research (RES). A PIRT exercise is a formalized, facilitated expert elicitation process. In this case, the expert panel was comprised of seven international fire science experts and was facilitated by Sandia National Laboratories (SNL). The objective of a PIRT exercise is to identify key phenomena associated with the intended application and to then rank the importance and current state of knowledge of each identified phenomenon. One intent of this process is to provide input into the process of identifying and prioritizing future research efforts. In practice, the panel considered a series of specific fire scenarios based on scenarios typically considered in NPP applications. Each scenario includes a defined figure of merit; that is, a specific goal to be achieved in analyzing the scenario through the application of fire modeling tools. The panel identifies any and all phenomena relevant to a fire modeling-based analysis for the figure of merit. Each phenomenon is ranked relative to its importance to the fire model outcome and then further ranked against the existing state of knowledge and adequacy of existing modeling tools to predict that phenomenon. The PIRT panel covered several fire scenarios and identified a number of areas potentially in need of further fire modeling improvements. The paper summarizes the results of the ranking exercise.

More Details

A hierarchical Bayesian approach to passive system reliability analysis

American Nuclear Society - International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2008

Middleton, Bobby D.

One source of concern in the nuclear power community is associated with performing PRAs on the passive systems used in Advanced Light Water Reactors. Passive systems rely on physical phenomena in order to perform safety actions. This leads to questions about how one should model the reliability of the system, such as how one should model the uncertainty in physical parameters that define the operational characteristics of the passive system and how to determine the degradation and failure characteristics of a system. Hierarchical Bayesian techniques provide a means for assessing the types of problems presented by passive systems. They allow the analyst to collect multiple types of data, including expert judgment and historical data from different sources, and then combine them in one analysis. The importance of this feature is that it allows an analyst to perform a mathematically consistent PRA without large amounts of data for the specific system under scrutiny. As data become available, they are incorporated into the analysis using Bayes' rule. As the dataset becomes large, the data dominate the analysis. A study is performed whereby data are collected from a set of resistors in a corrosive environment. A model is created that relates the environmental conditions of the sensors to their performance. Prior distributions are then proposed for the uncertain parameters. Both longitudinal and failure data are recorded for the sensors. These data are then used to update the model and obtain the posterior distributions related to the uncertain parameters.
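The "prior plus data, updated by Bayes' rule" workflow the abstract describes can be shown with its simplest conjugate instance, Beta-Binomial updating of a per-demand failure probability. All numbers are hypothetical, and the paper's resistor study uses a richer hierarchical model rather than this single-level update.

```python
# Conjugate Beta-Binomial updating: an expert-informed Beta prior on a
# per-demand failure probability is updated with observed demand/failure data.
alpha, beta = 1.0, 49.0              # prior: mean failure probability = 0.02
demands, failures = 200, 2           # observed performance data (hypothetical)

alpha_post = alpha + failures
beta_post = beta + (demands - failures)
post_mean = alpha_post / (alpha_post + beta_post)

# As more data arrive the likelihood dominates, so the posterior mean moves
# from the prior mean (0.02) toward the empirical rate (2/200 = 0.01).
print(f"prior mean {alpha / (alpha + beta):.4f} -> posterior mean {post_mean:.4f}")
```

This is exactly the behavior the abstract notes: with little data the expert prior carries the analysis, and as the dataset grows the data dominate.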

More Details

Fatigue behavior of thin Cu foils and Cu/Kapton flexible circuits

Materials Science and Technology Conference and Exhibition, MS and T'08

Beck, David F.; Susan, D.F.; Sorensen, Neil R.; Thayer, Gayle E.

A series of thin electrodeposited Cu foils and Cu foil/Kapton flex circuits were tested in bending fatigue according to ASTM E796 and IPC-TM-650. The fatigue behavior was analyzed in terms of strain vs. number of cycles to failure, using a Coffin-Manson approach. The effects of Cu foil thickness and Cu trace width are discussed. The Cu foils performed as expected and the Cu foil/Kapton® (E.I. du Pont de Nemours and Company, Wilmington, DE) composites showed significant improvement in fatigue lifetime due to the composite strengthening effect of the Kapton layers. However, the flex circuits showed more scatter in fatigue life based on electrical continuity. The effect of the Kapton layers manifests itself by significantly more widespread microcracking in the Cu traces and the extent of microcracking depended on the strain level. *Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. © 2008 MS&T'08 ®.
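The Coffin-Manson style strain-life analysis mentioned in the abstract relates strain amplitude to reversals to failure via e_a = (sf/E)(2N)^b + ef(2N)^c, and predicting life means inverting that curve. The sketch below uses generic illustrative constants, not values fitted to the Cu foils in the paper.

```python
import math

# Strain-life (Basquin + Coffin-Manson) relation: e_a = (sf/E)*(2N)^b + ef*(2N)^c.
# Constants are generic illustrative values (E, sf in MPa; b, c dimensionless).
E, sf, b, ef, c = 110e3, 300.0, -0.08, 0.3, -0.6

def strain_amplitude(n):
    two_n = 2.0 * n
    return (sf / E) * two_n ** b + ef * two_n ** c

def cycles_to_failure(e_a, lo=1.0, hi=1e9):
    """Invert the monotonically decreasing strain-life curve by bisection."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)   # bisect in log space
        if strain_amplitude(mid) > e_a:
            lo = mid
        else:
            hi = mid
    return lo

nf = cycles_to_failure(0.005)      # cycles at 0.5% strain amplitude
print(f"predicted cycles to failure: {nf:.0f}")
```

Fitting (sf/E, b) and (ef, c) to strain-vs-cycles data is what a Coffin-Manson analysis of the bend-fatigue results amounts to; composite strengthening by the Kapton layers would show up as a shift in those fitted constants.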

More Details

Chromatic aberrations in the field evaporation behavior of small precipitates

Microscopy and Microanalysis

Marquis, Emmanuelle A.; Vurpillot, Francois

Artifacts in the field evaporation behavior of small precipitates have limited the accuracy of atom probe tomography analysis of clusters and precipitates smaller than 2 nm. Here, we report on specific observations of reconstruction artifacts obtained in the case of precipitates with radii less than 10 nm in Al alloys, focusing particularly on a shift that appears in the relative positioning of matrix and precipitate atoms. We show that this chemically dependent behavior, referred to as "chromatic aberration," is due to the electrostatic field above the emitter and the variations in field evaporation of the elements constituting the precipitates. © Microscopy Society of America 2008.

More Details

Code case validation of Impulsively Loaded EDS subscale vessel

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Yip, Mien; Haroldsen, Brent L.; Puskar, J.D.

The Explosive Destruction System (EDS) was developed by Sandia National Laboratories for the US Army Product Manager for Non-Stockpile Chemical Materiel (PMNSCM) to destroy recovered, explosively configured chemical munitions. PMNSCM currently has five EDS units that have processed over 850 items. The system uses linear and conical shaped charges to open munitions and attack the burster, followed by chemical treatment of the agent. The main component of the EDS is a stainless steel, cylindrical vessel, which contains the explosion and the subsequent chemical treatment. Extensive modeling and testing have been, and continue to be, used to design and qualify the vessel for different applications and conditions. This has included explosive overtests using small, geometrically scaled vessels to study overloads, plastic deformation, and failure limits. Recently the ASME Task Group on Impulsively Loaded Vessels has developed a Code Case under Section VIII Division 3 of the ASME Boiler and Pressure Vessel Code for the design of vessels like the EDS. In this article, a representative EDS subscale vessel is investigated against the ASME design codes for vessels subjected to impulsive loads. Topics include strain-based plastic collapse, fatigue and fracture analysis, and leak-before-burst. Vessel design validation is based on model results, where the high explosive (HE) pressure histories and subsequent vessel response (strain histories) are modeled using the analysis codes CTH and LS-DYNA, respectively. Copyright © 2008 by ASME.

More Details

Dual-permeability modeling and evaluation of drift-shadow experiments

American Nuclear Society 12th International High Level Radioactive Waste Management Conference 2008

Ho, Clifford K.; Arnold, Bill W.; Altman, Susan J.

The drift-shadow effect describes capillary diversion of water flow around a drift or cavity in porous or fractured rock, resulting in lower water flux directly beneath the cavity. This paper presents computational simulations of drift-shadow experiments using dual-permeability models, similar to the models used for performance assessment analyses of flow and seepage in unsaturated fractured tuff at Yucca Mountain. Results show that the dual-permeability models capture the salient trends and behavior observed in the experiments, but constitutive relations (e.g., fracture capillary-pressure curves) can significantly affect the simulated results. An evaluation of different meshes showed that, at the grid refinement used, a comparison between orthogonal and unstructured meshes did not result in large differences.

More Details

Preparing for the aftermath: Using emotional agents in game-based training for disaster response

2008 IEEE Symposium on Computational Intelligence and Games, CIG 2008

Djordjevich, Donna D.; Xavier, Patrick G.; Bernard, Michael; Whetzel, Jonathan H.; Glickman, Matthew R.; Verzi, Stephen J.

Ground Truth, a training game developed by Sandia National Laboratories in partnership with the University of Southern California GamePipe Lab, puts a player in the role of an Incident Commander working with teammate agents to respond to urban threats. These agents simulate certain emotions that a responder may feel during this high-stress situation. We construct psychologically plausible models compliant with the Sandia Human Embodiment and Representation Cognitive Architecture (SHERCA) that are run on the Sandia Cognitive Runtime Engine with Active Memory (SCREAM) software. SCREAM's computational representations for modeling human decision-making combine aspects of ANNs and fuzzy logic networks. This paper gives an overview of Ground Truth and discusses the adaptation of SHERCA and SCREAM into the game. We include a semiformal description of SCREAM. ©2008 IEEE.

More Details

The TEVA-SPOT toolkit for drinking water contaminant warning system design

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Hart, William E.; Berry, Jonathan; Boman, Erik G.; Murray, Regan; Phillips, Cynthia A.; Riesen, Lee A.; Watson, Jean-Paul

We present the TEVA-SPOT Toolkit, a sensor placement optimization tool developed within the USEPA TEVA program. The TEVA-SPOT Toolkit provides a sensor placement framework that facilitates research in sensor placement optimization and enables the practical application of sensor placement solvers to real-world CWS design applications. This paper provides an overview of its key features, and then illustrates how this tool can be flexibly applied to solve a variety of different types of sensor placement problems. © 2008 ASCE.

More Details

Latent Morpho-Semantic Analysis: Multilingual information retrieval with character n-grams and mutual information

Coling 2008 - 22nd International Conference on Computational Linguistics, Proceedings of the Conference

Chew, Peter A.; Bader, Brett W.; Abdelali, Ahmed

We describe an entirely statistics-based, unsupervised, and language-independent approach to multilingual information retrieval, which we call Latent Morpho-Semantic Analysis (LMSA). LMSA overcomes some of the shortcomings of related previous approaches such as Latent Semantic Analysis (LSA). LMSA has an important theoretical advantage over LSA: it combines well-known techniques in a novel way to break the terms of LSA down into units which correspond more closely to morphemes. Thus, it has a particular appeal for use with morphologically complex languages such as Arabic. We show through empirical results that the theoretical advantages of LMSA can translate into significant gains in precision in multilingual information retrieval tests. These gains are not matched either when a standard stemmer is used with LSA, or when terms are indiscriminately broken down into n-grams. © 2008 Licensed under the Creative Commons.

More Details

Pressure-induced phase transition in a La-doped lead zirconate titanate

Ferroelectrics

Morosin, Bruno; Venturini, Eugene; Samara, George

Ceramic samples of Pb0.99La0.01(Zr0.91Ti0.09)O3 were studied by dielectric and time-of-flight neutron diffraction measurements at 300 and 250 K versus pressure. Isothermal dielectric data (300/250 K) suggest structural transitions with onsets near 0.35/0.37 GPa, respectively, for increasing pressure. On pressure release, only the 300 K transition occurs (0.10 GPa; none indicated at 250 K). Diffraction data at 300 K show the sample has the R3c structure, remaining in that phase on cooling to 250 K. Pressure increase (at either 300 or 250 K) above 0.3 GPa yields a Pnma-like (AO) phase (two other prominent peaks in the spectra suggest a possible incommensurate cell). Temperature/pressure excursions show considerable phase hysteresis.

More Details

Yucca mountain 2008 performance assessment: Modeling disruptive events and early failures

American Nuclear Society - 12th International High-Level Radioactive Waste Management Conference 2008

Sevougian, S.D.; Behie, Alda; Chipman, Veraun; Gross, Michael B.; Mehta, Sunil; Statham, William

The representation of disruptive events (seismic and igneous events) and early failures of waste packages and drip shields in the 2008 total system performance assessment (TSPA) for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada, is described. In the context of the 2008 TSPA, disruptive events and early failures are treated as phenomena that occur randomly (e.g., the time of a seismic event) and also have properties that are random (e.g., the peak ground velocity associated with a seismic event). Specifically, the following potential disruptions are considered: (i) early failure of individual drip shields, (ii) early failure of individual waste packages, (iii) igneous intrusion events that result in the filling of the waste disposal drifts with magma, (iv) volcanic eruption events that result in the dispersal of waste into the atmosphere, (v) seismic events that damage waste packages and drip shields as a result of strong vibratory ground motion, and (vi) seismic events that damage waste packages and drip shields as a result of shear displacement along a fault. Example annual dose results are shown for the two most risk-significant events: strong seismic ground motion and igneous intrusion.

More Details

Risk-informed separation distances for use in NFPA hydrogen codes and standards

17th World Hydrogen Energy Conference 2008, WHEC 2008

LaChance, Jeffrey; Houf, William G.

The development of separation distances for hydrogen facilities can be determined in several ways. A conservative approach is to use the worst possible accidents in terms of consequences. Such accidents may be of very low frequency and would likely never occur. Although this approach bounds separation distances, the resulting distances are generally prohibitive. The current separation distances in hydrogen codes and standards do not reflect this approach. An alternative deterministic approach that is often utilized by standards development organizations and allowed under some regulations is to select accident scenarios that are more probable but do not provide bounding consequences. In this approach, expert opinion is generally used to select the accidents used as the basis for the prescribed separation distances.

More Details

Full tape thickness features for new capabilities in LTCC

Proceedings - 2008 International Symposium on Microelectronics, IMAPS 2008

Knudson, R.T.; Barner, Greg; Smith, Frank; Zawicki, Larry; Peterson, Ken

Full tape thickness features (FTTF) using conductors, high-K and low-K dielectrics, sacrificial volume materials, and magnetic materials are useful as both technically effective and cost-effective approaches to multiple needs in laminate microelectronic and microsystem structures. Lowering resistance in conductor traces of all kinds, raising Q-factors in coils, and enhancing EMI shielding in RF designs are a few of the modern needs. By filling with suitable dielectric compositions, one can deliver embedded capacitors with an appropriate balance between mechanical compatibility and safety factor for fabrication. Similar techniques could be applied to magnetic materials without wasteful manufacturing processes when the magnetic material is a small fraction of the overall circuit area. Finally, to open the technology of unfilled volumes for radio frequency performance as well as microfluidics and mixed cofired material applications, the full tape thickness implementation of sacrificial volume materials is also considered. We discuss implementations of FTTF structures and discuss the technical problems and the promise such structures hold for the future.

More Details

A framework for the solution of inverse radiation transport problems

IEEE Nuclear Science Symposium Conference Record

Mattingly, John K.; Mitchell, Dean J.

Radiation sensing applications for SNM detection, identification, and characterization all face the same fundamental problem: each to varying degrees must infer the presence, identity, and configuration of a radiation source given a set of radiation signatures. This is a problem of inverse radiation transport: given the outcome of a measurement, what were the source and transport medium that caused that observation? This paper presents a framework for solving inverse radiation transport problems, describes its essential components, and illustrates its features and performance. © 2008 IEEE.

More Details

Model validation of a complex aerospace structure

Conference Proceedings of the Society for Experimental Mechanics Series

Rice, Amy E.; Carne, Thomas G.; Kelton, David W.

A series of modal tests were performed in order to validate a finite element model of a complex aerospace structure. Data was measured using various excitation methods in order to extract clean modes and damping values for a lightly damped system. Model validation was performed for one subassembly as well as for the full assembly in order to pinpoint the areas of the model that required updating and to better ascertain the quality of the joint models connecting the various components and subassemblies. After model updates were completed, using the measured modal data, the model was validated using frequency response functions (FRFs) as the independent validation metric. Test and model FRFs were compared to determine the validity of the finite element model.

More Details

Experimental comparison of particle interaction measurement techniques using optical trapping

AIChE Annual Meeting, Conference Proceedings

Grillet, Anne M.; Koehler, Timothy P.; Brotherton, Christopher M.; Brinker, C.J.

Optical tweezers have become a powerful and common tool for the sensitive determination of electrostatic interactions between colloidal particles. Two optical-trapping-based techniques, blinking tweezers and direct force measurements, have become increasingly prevalent in investigations of interparticle potentials. The blinking laser tweezers method repeatedly catches and releases a pair of particles to gather statistics on the particle trajectories. Statistical analysis is used to determine drift velocities, diffusion coefficients, and ultimately colloidal forces as a function of the center-to-center separation of the particles. Direct force measurements monitor the position of a particle relative to the center of an optical trap as the separation distance between two continuously trapped particles is gradually decreased. As the particles near each other, the displacement of each particle from its trap center increases in proportion to the inter-particle force. Although both methods are commonly employed in the investigation of colloidal particle interactions, no direct comparison of these experimental methods exists in the literature. In this study, an experimental apparatus capable of performing both methods was developed and used to quantify electrostatic potentials between two sizes of polystyrene particles in an AOT hexadecane solution. Comparisons are drawn between the experiments conducted using the two measurement techniques, theory, and the existing literature. Forces are quantified on the femto-Newton scale, and results agree well with literature values.
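In the direct method described here, force follows from the trap's linear (Hookean) restoring response, F = k·Δx, where k is the trap stiffness. A minimal sketch of that conversion; the stiffness and displacement values below are illustrative placeholders, not measurements from the study:

```python
# Convert measured bead displacements from the trap center into forces
# via the optical trap's Hookean restoring response, F = k * dx.
# The numbers are illustrative only, not from the study.
k_trap = 2.0e-6                       # trap stiffness, N/m (a typical tweezers scale)
displacements_nm = [1.0, 2.5, 5.0]    # bead offsets from trap center, in nm

# At this stiffness, each nm of displacement corresponds to ~2 fN of force,
# consistent with the femto-Newton scale quoted in the abstract.
forces_fN = [k_trap * d * 1e-9 * 1e15 for d in displacements_nm]
print(forces_fN)
```

In practice k is calibrated first (e.g., from the thermal fluctuations of a trapped bead), after which the displacement-to-force conversion above is a single multiplication per data point.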

More Details

Tolerating the community detection resolution limit with edge weighting

Proposed for publication in the Proceedings of the National Academy of Sciences.

Hendrickson, Bruce A.; Laviolette, Randall A.; Phillips, Cynthia A.; Berry, Jonathan

Communities of vertices within a giant network such as the World Wide Web are likely to be vastly smaller than the network itself. However, Fortunato and Barthelemy have proved that modularity maximization algorithms for community detection may fail to resolve communities with fewer than √(L/2) edges, where L is the number of edges in the entire network. This resolution limit leads modularity maximization algorithms to have notoriously poor accuracy on many real networks. Fortunato and Barthelemy's argument can be extended to networks with weighted edges as well, and we derive this corollary argument. We conclude that weighted modularity algorithms may fail to resolve communities with less than √(Wε/2) total edge weight, where W is the total edge weight in the network and ε is the maximum weight of an inter-community edge. If ε is small, then small communities can be resolved. Given a weighted or unweighted network, we describe how to derive new edge weights in order to achieve a low ε, we modify the 'CNM' community detection algorithm to maximize weighted modularity, and show that the resulting algorithm has greatly improved accuracy. In experiments with an emerging community standard benchmark, we find that our simple CNM variant is competitive with the most accurate community detection methods yet proposed.
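The weighted threshold √(Wε/2) can be illustrated numerically. In this minimal sketch the toy weighted network and the inter-community weight ε are invented for illustration, not taken from the paper:

```python
import math

# Toy weighted edge list: (u, v, weight). Two tight triangles joined by
# one weak bridge edge -- values are illustrative only.
edges = [(0, 1, 5.0), (1, 2, 5.0), (2, 0, 5.0),   # community A
         (3, 4, 5.0), (4, 5, 5.0), (5, 3, 5.0),   # community B
         (2, 3, 0.1)]                              # weak inter-community edge

W = sum(w for _, _, w in edges)       # total edge weight in the network (30.1)
eps = 0.1                             # max weight of an inter-community edge
limit = math.sqrt(W * eps / 2.0)      # weighted resolution limit, sqrt(W*eps/2)

# Community A's internal weight comfortably exceeds the limit, so a
# weighted-modularity method should resolve it.
community_weight = sum(w for _, _, w in edges[:3])
print(limit, community_weight, community_weight > limit)
```

Because the limit scales with √ε, down-weighting inter-community edges (as the paper's edge-weighting scheme aims to do) directly shrinks the smallest resolvable community.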

More Details

Molecular dynamics simulations of water confined between matched pairs of hydrophobic and hydrophilic self-assembled monolayers

Proposed for publication in Langmuir.

Stevens, Mark J.; Lane, James M.D.; Grest, Gary S.; Chandross, Michael E.

We have conducted a molecular dynamics (MD) simulation study of water confined between methyl-terminated and carboxyl-terminated alkylsilane self-assembled monolayers (SAMs) on amorphous silica substrates. In doing so, we have investigated the dynamic and structural behavior of the water molecules when compressed to loads ranging from 20 to 950 MPa for two different amounts of water (27 and 58 water molecules/nm²). Within the studied range of loads, we observe that no water molecules penetrate the hydrophobic region of the carboxyl-terminated SAMs. However, we observe that at loads larger than 150 MPa water molecules penetrate the methyl-terminated SAMs and form hydrogen-bonded chains that connect to the bulk water. The diffusion coefficient of the water molecules decreases as the water film becomes thinner and pressure increases. When compared to bulk diffusion coefficients of water molecules at the various loads, we found that the diffusion coefficients for the systems with 27 water molecules/nm² are reduced by a factor of 20 at low loads and by a factor of 40 at high loads, while the diffusion coefficients for the systems with 58 water molecules/nm² are reduced by a factor of 25 at all loads.
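Diffusion coefficients like those compared above are conventionally extracted from the mean-squared displacement via the Einstein relation, MSD(t) = 2dDt in d dimensions. A self-contained sketch of that estimate on synthetic random-walk data (not the paper's MD trajectories):

```python
import random

random.seed(1)
# Synthetic 3D random walk ensemble; step variance 2*D*dt per dimension
# reproduces a known diffusion coefficient D_true (arbitrary units).
D_true, dt, steps, dims, walkers = 0.5, 0.01, 200, 3, 500

final_sq = []
for _ in range(walkers):
    pos = [0.0] * dims
    for _ in range(steps):
        for i in range(dims):
            pos[i] += random.gauss(0.0, (2 * D_true * dt) ** 0.5)
    final_sq.append(sum(x * x for x in pos))

# Einstein relation: MSD(t) = 2*d*D*t  =>  D = MSD / (2*d*t)
msd = sum(final_sq) / walkers
D_est = msd / (2 * dims * steps * dt)
print(D_est)   # should be close to D_true for a large ensemble
```

In an actual MD analysis, D is taken from the long-time slope of MSD(t) averaged over molecules and time origins rather than from a single final time, but the relation is the same.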

More Details

Improved parallel data partitioning by nested dissection with applications to information retrieval

Proposed for publication in Parallel Computing.

Boman, Erik G.; Chevalier, Cedric C.

The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
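The communication-volume objective can be made concrete with a toy sparse matrix-vector product. The sketch below counts volume for the simplest 1D row-wise distribution; it is for illustration only, not the paper's 2D nested-dissection partitioner:

```python
# Communication volume of sparse y = A*x under a 1D row partition:
# a column index j appearing in a row owned by process p, where x_j is
# owned by a different process, costs one word (counted once per
# (process, entry) pair). The matrix and ownership maps are toy data.
rows = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [0, 3]}   # row -> nonzero column indices
row_owner = {0: 0, 1: 0, 2: 1, 3: 1}                   # row -> owning process
vec_owner = {0: 0, 1: 0, 2: 1, 3: 1}                   # vector entry -> owning process

needed = set()                        # distinct (process, remote entry) pairs
for r, cols in rows.items():
    p = row_owner[r]
    for j in cols:
        if vec_owner[j] != p:
            needed.add((p, j))

print(len(needed))                    # total communication volume in words
```

Here process 0 must fetch x_2 and process 1 must fetch x_0, so the volume is 2 words; a partitioner such as the paper's minimizes exactly this count (subject to load balance) over all rows and processes.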

More Details

Simulations and experiments of intense ion beam compression in space and time

Proposed for publication in Physics of Plasmas.

Sefkow, Adam B.

The Heavy Ion Fusion Science Virtual National Laboratory has achieved 60-fold longitudinal pulse compression of ion beams on the Neutralized Drift Compression Experiment (NDCX) [P. K. Roy et al., Phys. Rev. Lett. 95, 234801 (2005)]. To focus a space-charge-dominated charge bunch to sufficiently high intensities for ion-beam-heated warm dense matter and inertial fusion energy studies, simultaneous transverse and longitudinal compression to a coincident focal plane is required. Optimizing the compression under the appropriate constraints can deliver higher intensity per unit length of accelerator to the target, thereby facilitating the creation of more compact and cost-effective ion beam drivers. The experiments utilized a drift region filled with high-density plasma in order to neutralize the space charge and current of an ≈300 keV K⁺ beam and have separately achieved transverse and longitudinal focusing to a radius <2 mm and pulse duration <5 ns, respectively. Simulation predictions and recent experiments demonstrate that a strong solenoid (B_z < 100 kG) placed near the end of the drift region can transversely focus the beam to the longitudinal focal plane. This paper reports on simulation predictions and experimental progress toward realizing simultaneous transverse and longitudinal charge bunch focusing. The proposed NDCX-II facility would capitalize on the insights gained from NDCX simulations and measurements in order to provide a higher-energy (>2 MeV) ion beam user facility for warm dense matter and inertial fusion energy-relevant target physics experiments.

More Details