Publications

Results 76001–76100 of 99,299

Interoperable mesh components for large-scale, distributed-memory simulations

Journal of Physics: Conference Series

Devine, Karen; Diachin, L.; Kraftcheck, J.; Jansen, K.E.; Leung, Vitus J.; Luo, X.; Miller, M.; Ollivier-Gooch, C.; Ovcharenko, A.; Sahni, O.; Shephard, M.S.; Tautges, T.; Xie, T.; Zhou, M.

SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications. © 2009 IOP Publishing Ltd.

Type Ia supernovae: Advances in large scale simulation

Journal of Physics: Conference Series

Woosley, S.E.; Almgren, A.S.; Aspden, A.J.; Bell, J.B.; Kasen, D.; Kerstein, Alan R.; Ma, H.; Nonaka, A.; Zingale, M.

There are two principal scientific objectives in the study of Type Ia supernovae - first, a better understanding of these complex explosions from as near first principles as possible, and second, enabling the more accurate utilization of their emission to measure distances in cosmology. Both tasks lend themselves to large scale numerical simulation, yet take us beyond the current frontiers in astrophysics, combustion science, and radiation transport. Their study requires novel approaches and the creation of new, highly scalable codes. © 2009 IOP Publishing Ltd.

Formation of a fin trailing vortex in undisturbed and interacting flows

39th AIAA Fluid Dynamics Conference

Beresh, Steven J.; Henfling, John F.; Spillers, Russell

An experiment using fins mounted on a wind tunnel wall has examined the proposition that the interaction between axially separated aerodynamic control surfaces fundamentally results from an angle of attack superposed upon the downstream fin by the vortex shed from the upstream fin. Particle Image Velocimetry data captured on the surface of a single fin show the formation of the trailing vortex first as a leading-edge vortex, which then becomes a tip vortex as it propagates to the fin's spanwise edge. From data acquired on the downstream fin surface in the presence of a trailing vortex shed from an upstream fin, the impinging vortex may be removed by subtracting its mean velocity field as measured in single-fin experiments, after which the vortex forming on the downstream fin's leeside becomes evident. The properties of the downstream fin's lifting vortex appear to be determined by the total angle of attack imposed upon it, which is a combination of its physical fin cant and the angle of attack induced by the impinging vortex, and are consistent with those of a single fin at an equivalent angle of attack.
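
The vortex-removal step described above amounts to a vector-field subtraction on matched PIV grids. The following minimal sketch, in Python with NumPy, illustrates the idea; the array names, grid size, and synthetic data are assumptions for illustration, not the authors' data or code.

```python
import numpy as np

def remove_impinging_vortex(u_two_fin, v_two_fin, u_single_mean, v_single_mean):
    """Subtract the single-fin mean vortex field from two-fin PIV fields.

    All inputs are 2D arrays on the same measurement grid (an assumption);
    the residual exposes the vortex forming on the downstream fin itself.
    """
    return u_two_fin - u_single_mean, v_two_fin - v_single_mean

# Toy usage with synthetic velocity fields (hypothetical data)
rng = np.random.default_rng(0)
u2, v2 = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
um, vm = rng.normal(size=(64, 64)), rng.normal(size=(64, 64))
du, dv = remove_impinging_vortex(u2, v2, um, vm)
print(du.shape, dv.shape)
```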

DOE's Institute for Advanced Architecture and Algorithms: An application-driven approach

Journal of Physics: Conference Series

Murphy, Richard C.

This paper describes an application-driven methodology for understanding the impact of future architecture decisions at the end of the MPP era. Fundamental transistor device limitations combined with application performance characteristics have driven the switch to multicore/multithreaded architectures. Designing large-scale supercomputers to match application demands is particularly challenging since performance characteristics are highly counter-intuitive: data movement, rather than FLOPS, dominates. This work discusses some basic performance analysis for a set of DOE applications, the limits of CMOS technology, and the impact of both on future architectures. © 2009 IOP Publishing Ltd.
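
The observation that data movement rather than FLOPS dominates can be made concrete with a simple roofline-style bound: runtime is limited by the slower of compute throughput and memory traffic. The sketch below is a generic illustration under assumed machine numbers, not figures from the paper.

```python
# Roofline-style lower bound on runtime: the slower of compute and memory
# traffic wins. Machine numbers below are hypothetical, not from the paper.

PEAK_FLOPS = 100e12   # 100 TFLOP/s (assumed)
MEM_BW = 1e12         # 1 TB/s memory bandwidth (assumed)

def runtime_bound(flops, bytes_moved):
    return max(flops / PEAK_FLOPS, bytes_moved / MEM_BW)

# A sparse-matrix-like kernel: ~0.25 flops per byte moved
flops, bytes_moved = 1e12, 4e12
print(runtime_bound(flops, bytes_moved))   # 4.0 s: memory-bound
print(flops / PEAK_FLOPS)                  # 0.01 s if it were compute-bound
```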

A rapidly deployable virtual presence extended defense system

2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009

Koch, Mark W.; Giron, Casey; Nguyen, Hung D.

We have developed algorithms for a virtual presence and extended defense (VPED) system that automatically learns the detection map of a deployed sensor field without a priori knowledge of the local terrain. The VPED system is a network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has a limited detection range, but a network of pods can form a virtual perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus a network in the field may not have the same performance as a network designed in the lab. To solve this problem we automatically estimate a network's detection performance as it is being constructed. We demonstrate results using simulated and real data. © 2009 IEEE.

Causal factors of non-Fickian dispersion explored through measures of aquifer connectivity

IAMG 2009 - Computational Methods for the Earth, Energy and Environmental Sciences

Klise, Katherine A.; Mckenna, Sean A.; Tidwell, Vincent C.; Lane, Jonathan W.; Weissmann, Gary S.; Wawrzyniec, Tim F.; Nichols, Elizabeth M.

While connectivity is an important aspect of heterogeneous media, methods to measure and simulate connectivity are limited. For this study, we use natural aquifer analogs developed through lidar imagery to track the importance of connectivity on dispersion characteristics. A 221.8 cm by 50 cm section of a braided sand and gravel deposit of the Ceja Formation in Bernalillo County, New Mexico is selected for the study. Two-point (SISIM) and multipoint (Snesim and Filtersim) stochastic simulation methods are then compared based on their ability to replicate dispersion characteristics using the aquifer analog. Detailed particle tracking simulations are used to explore the streamline-based connectivity that is preserved using each method. Connectivity analysis suggests a strong relationship between the length distribution of sand and gravel facies along streamlines and dispersion characteristics.

Current trends in parallel computation and the implications for modeling and optimization

Computer Aided Chemical Engineering

Siirola, John D.

Microresonant impedance transformers

Proceedings - IEEE Ultrasonics Symposium

Wojciechowski, Kenneth E.; Olsson, Roy H.; Tuck, Melanie R.; Stevens, James E.

Widely applied to RF filtering, AlN microresonators offer the ability to perform additional functions such as impedance matching and single-ended-to-differential conversion. This paper reports microresonators capable of transforming the characteristic impedance from input to output over a wide range while performing low loss filtering. Microresonant transformer theory of operation and equivalent circuit models are presented and compared with measured 2- and 3-port devices. Impedance transformation ratios as large as 18:1 are realized with insertion losses less than 5.8 dB, limited by parasitic shunt capacitance. These impedance transformers occupy less than 0.052 mm², orders of magnitude smaller than competing technologies in the VHF and UHF frequency bands. ©2009 IEEE.

Analysis of nuclear spectra with non-linear techniques and its implementation in the Cambio software application

Journal of Radioanalytical and Nuclear Chemistry

Lasche, George; Coldwell, Robert L.

Popular nuclear spectral analysis applications typically use either the results of a peak search or of the best match of a set of linear templates as the basis for their conclusions. These well-proven methods work well in controlled environments. However, they often fail in cases where the critical information resides in well-masked peaks, where the data is sparse and good statistics cannot be obtained, and where little is known about the detector that was used. These conditions are common in emergency analysis situations, but are also common in radio-assay situations where background radiation is high and time is limited. To address these limitations, non-linear fitting techniques have been introduced into an application called 'Cambio' suitable for public use. With this approach, free parameters are varied in iterative steps to converge to values that minimize differences between the actual data and the approximating functions that correspond to the values of the parameters. For each trial nuclide, a single parameter is varied that often has a strongly non-linear dependence on other, simultaneously varied parameters for energy calibration, attenuation by intervening matter, detector resolution, and peak-shape deviations. A brief overview of this technique and its implementation is presented, together with an example of its performance and differences from more common methods of nuclear spectral analysis. © Akadémiai Kiadó, 2009.
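
The iterative procedure described, varying free parameters to minimize the difference between the data and the approximating functions, is a non-linear least-squares problem. The toy example below uses SciPy on a synthetic gamma-ray peak with a non-linear calibration gain; the model, parameters, and data are illustrative assumptions, not Cambio's actual fitting engine.

```python
import numpy as np
from scipy.optimize import least_squares

channels = np.arange(256)

def model(params, ch):
    # Peak amplitude plus a non-linear dependence on the calibration gain
    amp, gain, centroid_keV, sigma_keV = params
    energy = gain * ch
    return amp * np.exp(-0.5 * ((energy - centroid_keV) / sigma_keV) ** 2)

true_params = (100.0, 3.0, 662.0, 20.0)   # a Cs-137-like peak (illustrative)
rng = np.random.default_rng(1)
data = model(true_params, channels) + rng.normal(0.0, 2.0, channels.size)

fit = least_squares(lambda p: model(p, channels) - data,
                    x0=(80.0, 2.8, 650.0, 25.0))
print(fit.x)   # converges near the true parameters
```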

Ten million and one penguins, or, lessons learned from booting millions of virtual machines on HPC systems

Minnich, Ronald G.; Rudish, Donald W.

In this paper we describe Megatux, a set of tools we are developing for rapid provisioning of millions of virtual machines and for controlling and monitoring them, as well as what we have learned from booting one million Linux virtual machines on the Thunderbird (4660 nodes) and 550,000 Linux virtual machines on the Hyperion (1024 nodes) clusters. As might be expected, our tools use hierarchical structures. In contrast to existing HPC tools, ours do not require perfect hardware, do not require that all systems be booted at the same time, and do not rely on static configuration files that define the role of each node. While we believe these tools will be useful for future HPC systems, we are using them today to construct botnets. Botnets have been in the news recently, as discoveries of their scale (millions of infected machines for even a single botnet), their reach (global), and their impact on organizations (devastating in financial costs and time lost to recovery) have become more apparent. A distinguishing feature of botnets is their emergent behavior: fairly simple operational rule sets can result in behavior that cannot be predicted. In general, there is no reducible understanding of how a large network will behave ahead of 'running it', which means observing the actual network in operation or simulating/emulating it. Unfortunately, this behavior is only seen at scale, i.e., when at least tens of thousands of machines are infected. To add to the problem, botnets typically change at least 11% of the machines they are using in any given week, and this changing population is an integral part of their behavior. The use of virtual machines to assist in the forensics of malware is not new to the cyber security world; reverse engineering techniques often use virtual machines in combination with code debuggers. Nevertheless, this task largely remains a manual process to get past code obfuscation and is inherently slow. As part of our cyber security work at Sandia National Laboratories, we are striving to understand the global network behavior of botnets. We plan to take existing botnets, as found in the wild, and run them on HPC systems. We have turned to HPC systems to support the creation and operation of millions of Linux virtual machines as a means of observing the interaction of the botnet and other noninfected hosts. We started out using traditional HPC tools, but these tools are designed for a much smaller scale, typically topping out at one to ten thousand machines. HPC programming libraries and tools also assume complete connectivity between all nodes, with the attendant configuration files and data structures to match; this assumption holds up very poorly on systems with millions of nodes.

Nonlinear slewing spacecraft control based on exergy, power flow, and static and dynamic stability

Journal of the Astronautical Sciences

Robinett, Rush D.; Wilson, David G.

This paper presents a new nonlinear control methodology for slewing spacecraft, which provides both necessary and sufficient conditions for stability by identifying the stability boundaries, rigid body modes, and limit cycles. Conservative Hamiltonian system concepts, which are equivalent to static stability of airplanes, are used to find and deal with the static stability boundaries: rigid body modes. The application of exergy and entropy thermodynamic concepts to the work-rate principle provides a natural partitioning, through the second law of thermodynamics, of power flows into exergy generator, dissipator, and storage for Hamiltonian systems; this partitioning is employed to find the dynamic stability boundaries: limit cycles. The partitioning process enables the control system designer to directly evaluate and enhance the stability and performance of the system by balancing the power flowing into the system against the power dissipated within it, subject to the Hamiltonian surface (power storage). Relationships are developed between exergy, power flow, static and dynamic stability, and Lyapunov analysis. The methodology is demonstrated with two illustrative examples: (1) a nonlinear oscillator with sinusoidal damping and (2) a multi-input, multi-output three-axis slewing spacecraft employing proportional-integral-derivative tracking control, with numerical simulation results.

Using detailed maps of science to identify potential collaborations

Scientometrics

Boyack, Kevin W.

Research on the effects of collaboration in scientific research has been increasing in recent years. A variety of studies have been done at the institution and country level, many with an eye toward policy implications. However, the question of how to identify the most fruitful targets for future collaboration in high-performing areas of science has not been addressed. This paper presents a method for identifying targets for future collaboration between two institutions. The utility of the method is shown in two different applications: identifying specific potential collaborations at the author level between two institutions, and generating an index that can be used for strategic planning purposes. Identification of these potential collaborations is based on finding authors that belong to the same small paper-level community (or cluster of papers), using a map of science and technology containing nearly 1 million papers organized into 117,435 communities. The map used here is also unique in that it is the first map to combine the ISI Proceedings database with the Science and Social Science Indexes at the paper level. © 2008 Springer Science+Business Media B.V.
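
The matching criterion, two authors from different institutions whose papers fall in the same small paper-level community, can be sketched in a few lines. The data structures and names below are hypothetical, for illustration only.

```python
# (author, institution) -> set of paper-level community ids (hypothetical)
communities = {
    ("alice", "InstA"): {17, 42},
    ("bob", "InstB"): {42, 99},
    ("carol", "InstB"): {3},
}

def potential_pairs(inst_a, inst_b):
    """Author pairs across two institutions sharing at least one community."""
    pairs = []
    for (a, ia), ca in communities.items():
        if ia != inst_a:
            continue
        for (b, ib), cb in communities.items():
            if ib == inst_b and ca & cb:
                pairs.append((a, b, sorted(ca & cb)))
    return pairs

print(potential_pairs("InstA", "InstB"))   # [('alice', 'bob', [42])]
```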

TrustBuilder2: A reconfigurable framework for trust negotiation

IFIP Advances in Information and Communication Technology

Lee, Adam J.; Winslett, Marianne; Perano, Kenneth J.

To date, research in trust negotiation has focused mainly on the theoretical aspects of the trust negotiation process and the development of proof-of-concept implementations. These theoretical works and proofs of concept have been quite successful from a research perspective, and thus researchers must now begin to address the systems constraints that act as barriers to the deployment of these systems. To this end, we present TrustBuilder2, a fully configurable and extensible framework for prototyping and evaluating trust negotiation systems. TrustBuilder2 leverages a plug-in based architecture, extensible data type hierarchy, and flexible communication protocol to provide a framework within which numerous trust negotiation protocols and system configurations can be quantitatively analyzed. In this paper, we discuss the design and implementation of TrustBuilder2, study its performance, examine the costs associated with flexible authorization systems, and leverage this knowledge to identify potential topics for future research, as well as a novel method for attacking trust negotiation systems.

Cutting Efficiency of a Single PDC Cutter on Hard Rock

Journal of Canadian Petroleum Technology

Hareland, G.; Yan, W.; Nygaard, R.; Wise, Jack L.

Polycrystalline diamond compact (PDC) bits have gained wide popularity in the petroleum industry for drilling soft and moderately firm formations. However, in hard formation applications, the PDC bit still has limitations, even though recent developments in PDC cutter designs and materials steadily improve PDC bit performance. The limitations of PDC bits for drilling hard formations are an important technical obstacle that must be overcome before using the PDC bit to develop competitively priced electricity from enhanced geothermal systems, as well as deep continental gas fields. Enhanced geothermal energy is a very promising source for generating electrical energy and, therefore, there is an urgent need to further enhance PDC bit performance in hard formations. In this paper, the cutting efficiency of the PDC bit has been analyzed based on the development of an analytical single PDC cutter force model. The cutting efficiency of a single PDC cutter is defined as the ratio of the volume removed by a cutter over the force required to remove that volume of rock. The cutting efficiency is found to be a function of the back rake angle, the depth of cut, and rock properties such as the angle of internal friction. The highest cutting efficiency is found to occur at specific back rake angles of the cutter based on the material properties of the rock. The cutting efficiency directly relates to the internal angle of friction of the rock being cut. The results of this analysis can be integrated to study PDC bit performance. They can also provide a guideline for the application and design of PDC bits for specific rocks.

An overview of the evolution of human reliability analysis in the context of probabilistic risk assessment

Forester, John A.

Since the Reactor Safety Study in the early 1970s, human reliability analysis (HRA) has been evolving towards a better ability to account for the factors and conditions that can lead humans to take unsafe actions and thereby provide better estimates of the likelihood of human error for probabilistic risk assessments (PRAs). The purpose of this paper is to provide an overview of recent reviews of operational events and advances in the behavioral sciences that have impacted the evolution of HRA methods and contributed to improvements. The paper discusses the importance of human errors in complex human-technical systems, examines why humans contribute to accidents and unsafe conditions, and discusses how lessons learned over the years have changed the perspective and approach for modeling human behavior in PRAs of complicated domains such as nuclear power plants. It is argued that it has become increasingly more important to understand and model the more cognitive aspects of human performance and to address the broader range of factors that have been shown to influence human performance in complex domains. The paper concludes by addressing the current ability of HRA to adequately predict human failure events and their likelihood.

Sandia National Laboratories Advanced Simulation and Computing (ASC) software quality plan: ASC software quality engineering practices, Version 3.0

Turgeon, Jennifer; Minana, Molly A.; Pilch, Martin

The purpose of the Sandia National Laboratories (SNL) Advanced Simulation and Computing (ASC) Software Quality Plan is to clearly identify the practices that are the basis for continually improving the quality of ASC software products. Quality is defined in the US Department of Energy/National Nuclear Security Administration (DOE/NNSA) Quality Criteria, Revision 10 (QC-1) as 'conformance to customer requirements and expectations'. This quality plan defines the SNL ASC Program software quality engineering (SQE) practices and provides a mapping of these practices to the SNL Corporate Process Requirement (CPR) 001.3.6, 'Corporate Software Engineering Excellence'. This plan also identifies ASC management's and the software project teams' responsibilities in implementing the software quality practices and in assessing progress towards achieving their software quality goals. This SNL ASC Software Quality Plan establishes the signatories' commitments to improving software products by applying cost-effective SQE practices. This plan enumerates the SQE practices that comprise the development of SNL ASC's software products and explains the project teams' opportunities for tailoring and implementing the practices.

Baseline Ecological Footprint of Sandia National Laboratories, New Mexico

Mizner, Jack H.

The Ecological Footprint Model is a mechanism for measuring the environmental effects of operations at Sandia National Laboratories in Albuquerque, New Mexico (SNL/NM). This analysis quantifies environmental impact associated with energy use, transportation, waste, land use, and water consumption at SNL/NM for fiscal year 2005 (FY05). Since SNL/NM's total ecological footprint (96,434 gha) is greater than the waste absorption capacity of its landholdings (338 gha), it created an ecological deficit of 96,096 gha. This deficit is equal to 886,470 ha, or about 3,423 square miles of Pinyon-Juniper woodlands and desert grassland. Energy use accounts for 89% of the ecological footprint, indicating that in order to mitigate environmental impact, efforts should be focused on energy efficiency, energy reduction, and the incorporation of additional renewable energy alternatives at SNL/NM.
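
As a quick arithmetic check of the figures quoted above (a sketch only; the report's gha-to-ha equivalence factors are not reproduced here):

```python
# Consistency check of the deficit figures (1 square mile = 258.999 ha)
footprint_gha = 96_434
biocapacity_gha = 338
print(footprint_gha - biocapacity_gha)   # 96096 gha deficit, as stated
print(886_470 / 258.999)                 # ~3422.7 square miles, as stated
```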

Graphite oxidation modeling for application in MELCOR

Gelbard, Fred M.

The Arrhenius parameters for graphite oxidation in air are reviewed and compared. One-dimensional models of graphite oxidation coupled with mass transfer of oxidant are presented in dimensionless form for rectangular and spherical geometries. A single dimensionless group is shown to encapsulate the coupled phenomena, and is used to determine the effective reaction rate when mass transfer can impede the oxidation process. For integer reaction order kinetics, analytical expressions are presented for the effective reaction rate. For noninteger reaction orders, a numerical solution is developed and compared to data for oxidation of a graphite sphere in air. Very good agreement is obtained with the data without any adjustable parameters. An analytical model for surface burn-off is also presented, and results from the model are within an order of magnitude of the measurements of burn-off in air and in steam.
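
For context, the classical integer-order analogue of this coupled reaction/mass-transfer problem is a first-order reaction in a sphere, where the Thiele modulus plays the role of a single governing dimensionless group. The sketch below shows that textbook case, not the report's exact formulation, and the parameter values are assumed.

```python
import numpy as np

def effectiveness_sphere(R, k, D):
    """Textbook effectiveness factor for a first-order reaction in a sphere.

    phi = R * sqrt(k / D) is the Thiele modulus; eta -> 1 when kinetics
    are slow (phi << 1) and eta ~ 3/phi when diffusion limits (phi >> 1).
    """
    phi = R * np.sqrt(k / D)
    return (3.0 / phi**2) * (phi / np.tanh(phi) - 1.0)

# Assumed values: 1 cm radius sphere, illustrative rate and diffusivity
print(effectiveness_sphere(R=0.01, k=5.0, D=1e-5))   # ~0.36: transport-limited
```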

Design & development of a 20-MW flywheel-based frequency regulation power plant: a study for the DOE Energy Storage Systems program

Huff, Georgianne

This report describes the successful efforts of Beacon Power to design and develop a 20-MW frequency regulation power plant based solely on flywheels. Beacon's Smart Matrix (Flywheel) Systems regulation power plant, unlike coal or natural gas generators, will not burn fossil fuel or directly produce particulates or other air emissions and will have the ability to ramp up or down in a matter of seconds. The report describes how data from the scaled Beacon system, deployed in California and New York, proved that the flywheel-based systems provided faster responding regulation services in terms of cost-performance and environmental impact. Included in the report is a description of Beacon's design package for a generic, multi-MW flywheel-based regulation power plant that allows accurate bids from a design/build contractor and Beacon's recommendations for site requirements that would ensure the fastest possible construction. The paper concludes with a statement about Beacon's plans for a lower cost, modular-style substation based on the 20-MW design.

Modeling leaks from liquid hydrogen storage systems

Winters, William S.

This report documents a series of models for describing intended and unintended discharges from liquid hydrogen storage systems. Typically these systems store hydrogen in the saturated state at approximately five to ten atmospheres. Some of the models discussed here are equilibrium-based models that make use of the NIST thermodynamic models to specify the states of multiphase hydrogen and air-hydrogen mixtures. Two types of discharges are considered: slow leaks, where hydrogen enters the ambient at atmospheric pressure, and fast leaks, where the hydrogen flow is usually choked and expands into the ambient through an underexpanded jet. In order to avoid the complexities of supersonic flow, a single Mach disk model is proposed for fast leaks that are choked. The velocity and state of hydrogen downstream of the Mach disk lead to a more tractable subsonic boundary condition. However, the hydrogen temperature exiting all leaks (fast or slow, from saturated liquid or saturated vapor) is approximately 20.4 K. At these temperatures, any entrained air would likely condense or even freeze, leading to an air-hydrogen mixture that cannot be characterized by the REFPROP subroutines. For this reason a plug flow entrainment model is proposed to treat a short zone of initial entrainment and heating. The model predicts the quantity of entrained air required to bring the air-hydrogen mixture to a temperature of approximately 65 K at one atmosphere. At this temperature the mixture can be treated as a mixture of ideal gases and is much more amenable to modeling with Gaussian entrainment models and CFD codes. A Gaussian entrainment model is formulated to predict the trajectory and properties of a cold hydrogen jet leaking into ambient air. The model shows that similarity between two jets depends on the densimetric Froude number, density ratio, and initial hydrogen concentration.
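
The densimetric Froude number identified above as a similarity parameter has the conventional definition Fr = u / sqrt(g d (rho_amb - rho_jet)/rho_amb). A minimal sketch with assumed leak conditions, not the report's cases:

```python
import math

def densimetric_froude(u, d, rho_jet, rho_amb, g=9.81):
    """Conventional densimetric Froude number for a buoyant or dense jet."""
    return u / math.sqrt(g * d * abs(rho_amb - rho_jet) / rho_amb)

# Assumed leak: 50 m/s through a 1 cm opening; cold, near-ambient-density H2
print(densimetric_froude(u=50.0, d=0.01, rho_jet=1.3, rho_amb=1.2))
```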

Slanted-wall beam propagation: erratum

Proposed for publication in the Journal of Lightwave Technology.

Hadley, G.R.

Recently, a new algorithm for wide-angle beam propagation was reported that allowed grid points to move in an arbitrary fashion between propagation planes and was thus capable of modeling waveguides whose widths or centerlines varied with propagation distance. That algorithm was accurate and stable for TE polarization but unstable for wide-angle TM propagation. This deficiency has been found to result from an omission in one of the wide-angle terms in the derivation of the finite-difference equation and is remedied here, resulting in a complete algorithm accurate for both polarizations.

Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform

Shadid, John N.

This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine comprises an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with an FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicate that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.

Technical Advisory Team (TAT) report on the rocket sled test accident of October 9, 2008

Medina, Anthony J.; Stofleth, Jerome H.

This report summarizes probable causes and contributing factors that led to a rocket motor initiating prematurely while employees were preparing instrumentation for an AIII rocket sled test at SNL/NM, resulting in a Type-B Accident. Originally prepared by the Technical Advisory Team that provided technical assistance to the NNSA's Accident Investigation Board, the report includes analyses of several proposed causes and concludes that the most probable source of power for premature initiation of the rocket motor was the independent battery contained in the HiCap recorder package. The report includes data, evidence, and proposed scenarios to substantiate the analyses.

Investigation of multi-layer thin films for energy storage

Renk, Timothy J.

We investigate here the feasibility of increasing the energy density of thin-film capacitors by construction of a multi-layer capacitor device through ablation and redeposition of the capacitor materials using a high-power pulsed ion beam. The deposition experiments were conducted on the RHEPP-1 facility at Sandia National Laboratories. The dielectric capacitor filler material was a composition of lead-lanthanum-zirconium-titanium oxide (PLZT). The energy storage can be increased by using material of intrinsically high dielectric constant and constructing many thin layers of this material. For successful device construction, there are a number of challenging requirements, including correct stoichiometric and crystallographic composition of the deposited PLZT. This report details some success in satisfying these requirements, even though the attempt at device manufacture was unsuccessful. The conclusion that temperatures of 900 °C are necessary to reconstitute the deposited PLZT has implications for future manufacturing capability.

Investigation of biologically-designed metal-specific chelators for potential metal recovery and waste remediation applications

Criscenti, Louise; Ockwig, Nathan O.

Bacteria, algae, and plants produce metal-specific chelators to capture required nutrient or toxic trace metals. Biological systems are thought to be very efficient, honed by evolutionary forces over time. Understanding the approaches used by living organisms to select for specific metals in the environment may lead to the design of cheaper and more effective approaches for metal recovery and contaminant-metal remediation. In this study, the binding of a common siderophore, desferrioxamine B (DFO-B), to three aqueous metal cations, Fe(II), Fe(III), and UO₂(VI), was investigated using classical molecular dynamics. DFO-B has three acetohydroxamate groups and a terminal amine group that all deprotonate with increasing pH. For all three metals, complexes with DFO-B(-2) are the most stable and favored under alkaline conditions. Under more acidic conditions, the metal-DFO complexes involve chelation with both acetohydroxamate and acetylamine groups. The approach taken here allows for detailed investigation of metal binding to biologically-designed organic ligands.

Microscale Immune Studies Laboratory

Singh, Anup K.

The overarching goal is to develop novel technologies to elucidate molecular mechanisms of the innate immune response in host cells to pathogens such as bacteria and viruses, including the mechanisms used by pathogens to subvert, suppress, or obfuscate the immune response to cause their harmful effects. Innate immunity is our first line of defense against a pathogenic bacterium or virus. A comprehensive 'system-level' understanding of innate immunity pathways such as toll-like receptor (TLR) pathways is the key to deciphering mechanisms of pathogenesis and can lead to improvements in early diagnosis or developing improved therapeutics. Current methods for studying signaling focus on measurements of a limited number of components in a pathway and hence fail to provide a systems-level understanding. We have developed a systems biology approach to decipher TLR4 pathways in macrophage cell lines in response to exposure to pathogenic bacteria and their lipopolysaccharide (LPS). Our approach integrates biological reagents, a microfluidic cell handling and analysis platform, high-resolution imaging, and computational modeling to provide spatially and temporally resolved measurement of TLR-network components. The integrated microfluidic platform is capable of imaging single cells to obtain dynamic translocation data as well as high-throughput acquisition of quantitative protein expression and phosphorylation information of selected cell populations. The platform consists of multiple modules, such as a single-cell array, cell sorter, and phosphoflow chip, to provide confocal imaging, cell sorting, flow cytometry, and phosphorylation assays. The single-cell array module contains fluidic constrictions designed to trap and hold single host cells. Up to 100 single cells can be trapped and monitored for hours, enabling detailed, statistically significant measurements. The module was used to analyze translocation behavior of the transcription factor NF-kB in macrophages upon activation by E. coli and Y. pestis LPS. The chip revealed an oscillation pattern in translocation of NF-kB, indicating the presence of a negative feedback loop involving IKK. Activation of NF-kB is preceded by phosphorylation of many kinases, and to correlate the kinase activity with translocation, we performed flow cytometric assays in the PhosphoChip module. Phosphorylated forms of p38, ERK, and RelA were measured in macrophage cells challenged with LPS and showed a dynamic response in which phosphorylation increases with time, reaching a maximum at approximately 30-60 min. To allow further downstream analysis on selected cells, we also implemented optical-trapping based sorting of cells. This has allowed us to sort macrophages infected with bacteria from uninfected cells, with the goal of obtaining data only on the infected (the desired) population. The various microfluidic chip modules and the accessories required to operate them, such as pumps, heaters, electronic controls, and optical detectors, are being assembled into a bench-top, semi-automated device. The data generated are being utilized to refine the existing TLR pathway model by adding kinetic rate constants and concentration information. The microfluidic platform allows high-resolution imaging as well as quantitative proteomic measurements with high sensitivity (<pM) and time resolution (approximately 15 s) in the same population of cells, a feat not achievable by current techniques. Furthermore, our systems approach combining the microfluidic platform and high-resolution imaging with the associated computational models and biological reagents will significantly improve our ability to study cell signaling involved in host-pathogen interactions and other diseases such as cancer. The advances made in this project have been presented at numerous national and international conferences and are documented in many peer-reviewed publications as listed. Finer details of many of the component technologies are described in these publications. The chapters that follow in this report are also adapted from other manuscripts that are accepted for publication, submitted, or in preparation for submission to peer-reviewed journals.

Development of a High-Temperature Diagnostics-While-Drilling Tool

Blankenship, Douglas A.; Chavira, David C.; Henfling, Joseph A.; King, Dennis K.; Knudsen, Steven D.; Polsky, Yarom

This report documents work performed in the second phase of the Diagnostics-While-Drilling (DWD) project, in which a high-temperature (HT) version of the Phase 1 low-temperature (LT) proof-of-concept (POC) DWD tool was built and tested. Descriptions of the design, fabrication, and field testing of the HT tool are provided.

Control of pore size in epoxy systems

Celina, Mathew C.; Dirk, Shawn M.; Sawyer, Patricia S.

Both conventional and combinatorial approaches were used to study the pore formation process in epoxy based polymer systems. Sandia National Laboratories conducted the initial work and collaborated with North Dakota State University (NDSU) using a combinatorial research approach to produce a library of novel monomers and crosslinkers capable of forming porous polymers. The library was screened to determine the physical factors that control porosity, such as porogen loading, polymer-porogen interactions, and polymer crosslink density. We have identified the physical and chemical factors that control the average porosity, pore size, and pore size distribution within epoxy based systems.

Joint physical and numerical modeling of water distribution networks

Mckenna, Sean A.; Ho, Clifford K.; Cappelle, Malynda A.; Webb, Stephen W.; O'Hern, Timothy J.

This report summarizes the experimental and modeling effort undertaken to understand solute mixing in a water distribution network, conducted during the last year of a 3-year project. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume based Large Eddy Simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. A preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.
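
For reference, the baseline junction rule in standard EPANET is complete mixing: every outflow is assigned the flow-weighted average of the inflow concentrations. A minimal sketch of that rule follows; it is illustrative only, not the EPANET-BAM implementation.

```python
def complete_mixing(inflows):
    """Flow-weighted average concentration assigned to all outflows.

    inflows: list of (flow_rate, concentration) pairs entering a junction.
    """
    total_q = sum(q for q, _ in inflows)
    return sum(q * c for q, c in inflows) / total_q

# Cross junction with two unequal inlets (illustrative values)
print(complete_mixing([(2.0, 1.0), (1.0, 0.0)]))   # 0.667 at every outlet
```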

Experimental assessment of unvalidated assumptions in classical plasticity theory

Bauer, Stephen J.; Bronowski, David R.

This report investigates the validity of several key assumptions in classical plasticity theory regarding material response to changes in the loading direction. Three metals, two rock types, and one ceramic were subjected to non-standard loading directions, and the resulting strain response increments were displayed in Gudehus diagrams to illustrate the approximation error of classical plasticity theories. A rigorous mathematical framework for fitting classical theories to the data, thus quantifying the error, is provided. Further data analysis techniques are presented that allow testing for the effect of changes in loading direction without having to use a new sample and for inferring the yield normal and flow directions without having to measure the yield surface. Though the data are inconclusive, there is indication that classical, incrementally linear, plasticity theory may be inadequate over a certain range of loading directions. This range of loading directions also coincides with loading directions that are known to produce a physically inadmissible instability for any nonassociative plasticity model.

Summary report: direct approaches for recycling carbon dioxide into synthetic fuel

Miller, James E.; Siegel, Nathan P.; Diver, Richard B.; Gelbard, Fred M.; Ambrosini, Andrea A.; Allendorf, Mark

The consumption of petroleum by the transportation sector in the United States is roughly equivalent to petroleum imports into the country, which have totaled over 12 million barrels a day every year since 2004. This reliance on foreign oil is a strategic vulnerability for the economy and national security. Further, the effect of unmitigated CO₂ releases on the global climate is a growing concern both here and abroad. Independence from problematic oil producers can be achieved to a great degree through the utilization of non-conventional hydrocarbon resources such as coal, oil shale, and tar sands. However, tapping into and converting these resources into liquid fuels exacerbates greenhouse gas (GHG) emissions as they are carbon rich but hydrogen deficient. Revolutionary thinking about energy and fuels must be adopted. We must recognize that hydrocarbon fuels are ideal energy carriers, but not primary energy sources. The energy stored in a chemical fuel is released for utilization by oxidation. In the case of hydrogen fuel, the chemical product is water; in the case of a hydrocarbon fuel, water and carbon dioxide are produced. The hydrogen economy envisions a cycle in which H₂O is re-energized by splitting water into H₂ and O₂, by electrolysis for example. We envision a hydrocarbon analogy in which both carbon dioxide and water are re-energized through the application of a persistent energy source (e.g. solar or nuclear). This is of course essentially what the process of photosynthesis accomplishes, albeit with a relatively low sunlight-to-hydrocarbon efficiency. The goal of this project, then, was the creation of a direct and efficient process for the solar- or nuclear-driven thermochemical conversion of CO₂ to CO (and O₂), one of the basic building blocks of synthetic fuels. This process would potentially provide the basis for an alternate hydrocarbon economy that is carbon neutral, provides a pathway to energy independence, and is compatible with much of the existing fuel infrastructure.

Understanding and engineering enzymes for enhanced biofuel production

Simmons, Blake; Sapra, Rajat S.; Roe, Diana C.; Volponi, Joanne V.; Buffleben, George M.

Today, carbon-rich fossil fuels, primarily oil, coal, and natural gas, provide 85% of the energy consumed in the United States. The release of greenhouse gases from these fuels has spurred research into alternative, non-fossil energy sources. Lignocellulosic biomass is a renewable resource that is carbon-neutral and can provide a raw material for alternative transportation fuels. Plant-derived biomass contains cellulose, which is difficult to convert to monomeric sugars for production of fuels. The development of cost-effective and energy-efficient processes to transform the cellulosic content of biomass into fuels is hampered by significant roadblocks, including the lack of specifically developed energy crops, the difficulty in separating biomass components, the high costs of enzymatic deconstruction of biomass, and the inhibitory effect of fuels and processing byproducts on organisms responsible for producing fuels from biomass monomers. One of the main impediments to more widespread utilization of this important resource is the recalcitrance of cellulosic biomass to the techniques that can be utilized to deconstruct it.

Thermomechanical measurements on thermal microactuators

Phinney, Leslie; Epp, David S.; Baker, Michael S.; Serrano, Justin R.; Gorby, Allen D.

Due to the coupling of thermal and mechanical behaviors at small scales, a Campaign 6 project was created to investigate thermomechanical phenomena in microsystems. This report documents experimental measurements conducted under the auspices of this project. Since thermal and mechanical measurements for thermal microactuators were not available for a single microactuator design, a comprehensive suite of thermal and mechanical experimental data was taken and compiled for model validation purposes. Three thermal microactuator designs were selected and fabricated using the SUMMiT V™ process at Sandia National Laboratories. Thermal and mechanical measurements for the bent-beam polycrystalline silicon thermal microactuators are reported, including displacement, overall actuator electrical resistance, force, temperature profiles along microactuator legs in standard laboratory air pressures and reduced pressures down to 50 mTorr, resonant frequency, out-of-plane displacement, and dynamic displacement response to applied voltages.

Novel ultrafine grain size processing of soft magnetic materials

Michael, Joseph R.

High performance soft magnetic alloys are used in solenoids in a wide variety of applications. These designs are currently being driven to provide more margin, reliability, and functionality through component size reductions; thereby providing greater power to drive ratio margins as well as decreases in volume and power requirements. In an effort to produce soft magnetic materials with improved properties, we have conducted an initial examination of one potential route for producing ultrafine grain sizes in the 49Fe-49Co-2V alloy. The approach was based on a known method for the production of very fine grain sizes in steels, and consisted of repeated, rapid phase transformation cycling through the ferrite to austenite transformation temperature range. The results of this initial attempt to produce highly refined grain sizes in 49Fe-49Co-2V were successful in that appreciable reductions in grain size were realized. The as-received grain size was 15 µm with a standard deviation of 9.5 µm. For the temperature cycling conditions examined, grain refinement appears to saturate after approximately ten cycles at a grain size of 6 µm with a standard deviation of 4 µm. The process also reduces the range of grain sizes present in these samples, as the largest grains noted in the as-received and treated conditions were 64 and 26 µm, respectively. The results were, however, complicated by the formation of an unexpected secondary ferritic constituent, and considerable effort was directed at characterizing this phase. The analysis indicates that the phase is a V-rich ferrite, known as α₂, that forms due to an imbalance in the partitioning of vanadium during the heating and cooling portions of the thermal cycle. Considerable but unsuccessful effort was also directed at understanding the conditions under which this phase forms, since it is conceivable that this phase restricts the degree to which the grains can be refined. Due to this difficulty and the relatively short timeframe available in the study, magnetic and mechanical properties of the refined material could not be evaluated. An assessment of the potential for properties improvement through the transformation cycling approach, as well as recommendations for potential future work, are included in this report.

On the two-domain equations for gas chromatography

Romero, Louis; Parks, Michael L.

We present an analysis of gas chromatographic columns in which the stationary phase is not assumed to be a thin uniform coating along the walls of the cross section. We also give an asymptotic analysis assuming that the parameter β = K D^II ρ^II / (D^I ρ^I) is small. Here K is the partition coefficient, and D^i and ρ^i, i = I, II, are the diffusivity and density in the mobile (i = I) and stationary (i = II) regions.
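
A quick evaluation of this parameter for representative thin-film gas chromatography conditions (values assumed for scale only) shows why treating β as small is reasonable:

```python
# Illustrative magnitudes only: gas-phase (I) vs. stationary-film (II)
K = 10.0                      # partition coefficient (assumed)
D_I, rho_I = 1e-5, 1.2        # m^2/s, kg/m^3 (mobile gas phase, assumed)
D_II, rho_II = 1e-10, 1000.0  # m^2/s, kg/m^3 (stationary phase, assumed)

beta = K * D_II * rho_II / (D_I * rho_I)
print(beta)   # ~0.083, small enough for the asymptotic analysis
```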

J-Integral modeling and validation for GTS reservoirs

Nibur, Kevin A.; Somerday, Brian P.; Brown, Arthur; Lindblad, Alex; Ohashi, Yuki; Antoun, Bonnie R.; Connelly, Kevin; Zimmerman, Jonathan A.; Margolis, Stephen B.

Non-destructive detection methods can reliably certify that gas transfer system (GTS) reservoirs do not have cracks larger than 5%-10% of the wall thickness. To determine the acceptability of a reservoir design, analysis must show that short cracks will not adversely affect the reservoir behavior. This is commonly done via calculation of the J-Integral, which represents the energetic driving force acting to propagate an existing crack in a continuous medium. J is then compared against a material's fracture toughness (J_c) to determine whether crack propagation will occur. While the quantification of the J-Integral is well established for long cracks, its validity for short cracks is uncertain. This report presents the results from a Sandia National Laboratories project to evaluate a methodology for performing J-Integral evaluations in conjunction with its finite element analysis capabilities. Simulations were performed to verify the operation of a post-processing code (J3D) and to assess the accuracy of this code and our analysis tools against companion fracture experiments for 2- and 3-dimensional geometry specimens. Evaluation is done for specimens composed of 21-6-9 stainless steel, some of which were exposed to a hydrogen environment, for both long and short cracks.
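
For reference, the standard two-dimensional contour form of the J-Integral being evaluated (the textbook definition, not the J3D implementation) is

```latex
J = \int_{\Gamma} \left( W \, \mathrm{d}y - T_i \, \frac{\partial u_i}{\partial x} \, \mathrm{d}s \right)
```

where W is the strain energy density, T_i the traction vector, u_i the displacement field, and Γ any contour enclosing the crack tip.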

Ku-band six-bit RF MEMS time delay network

2008 IEEE CSIC Symposium: GaAs ICs Celebrate 30 Years in Monterey, Technical Digest 2008

Nordquist, Christopher D.; Dyck, Christopher; Kraus, Garth K.; Sullivan, Charles T.; Austin IV, Franklin; Finnegan, Patrick S.; Ballance, Mark H.

A six-bit time delay circuit operating from DC to 18 GHz is reported. Capacitively loaded transmission lines are used to reduce the physical length of the delay elements and shrink the die size. Additionally, selection of the reference line lengths to avoid resonances allows the replacement of series-shunt switching elements with only series elements. With through-wafer transitions and a packaging seal ring, the 7 mm x 10 mm circuit demonstrates <2.8 dB of loss and 60 ps of delay with good delay flatness and accuracy through 18 GHz. © 2008 IEEE.
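
The size reduction from capacitive loading follows from basic LC-ladder relations: added shunt capacitance increases the per-section delay sqrt(L(C + C_load)) while lowering the characteristic impedance sqrt(L/(C + C_load)). A minimal sketch with assumed element values, not the reported design:

```python
import math

def lc_section(L, C, C_load):
    """Per-section delay and characteristic impedance of a loaded LC ladder."""
    C_tot = C + C_load
    return math.sqrt(L * C_tot), math.sqrt(L / C_tot)

# Assumed element values, not the reported design
print(lc_section(250e-12, 0.1e-12, 0.0))      # ~5 ps, 50 ohms (unloaded)
print(lc_section(250e-12, 0.1e-12, 0.3e-12))  # ~10 ps, 25 ohms (loaded)
```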

Full-field characterization of tensile and fracture behavior of a rigid polyurethane foam using digital image correlation

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Hong, Soonsung H.; Jin, Helena; Lu, Wei-Yang

Tensile deformation and fracture behavior of a closed-cell rigid polyurethane foam, called TufFoam, were investigated. During uniaxial tension tests and fracture mechanics tests, full-field deformation measurements were conducted using the digital image correlation technique. Uniform deformation fields obtained from the tension tests showed that both deviatoric and dilatational yielding contributed to the nonlinear deformation of the foam under tension. Fracture mechanics tests were performed with single-edge-notched specimens under three-point bending and uniaxial tension. A moderate specimen-size and loading-geometry dependence was observed in the measured fracture toughness values based on linear elastic fracture mechanics. Full-field deformation data near the crack tip were used to investigate stable crack growth in the foam until unstable fracture occurs. The path-independent J-integral and M-integral were calculated from elastic far-fields of the experimental data, and used to obtain crack-tip field parameters, such as crack-tip energy release rates and effective crack-tip positions. The combination of the full-field deformation measurement technique and the path-independent integrals was proven to be a useful approach to measure the initiation toughness of the foam that is independent of the specimen size and loading geometry. © 2008 Society for Experimental Mechanics Inc.

Interface delamination fracture toughness experiments at various loading rates

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Lu, Wei-Yang; Antoun, Bonnie R.; Brown, Arthur; Chen, Weinong; Song, Bo

Mode-I and Mode-II fracture experiments of composites under high loading rates are presented. In the standard double cantilever beam (DCB) configuration, specimens are loaded at a constant speed of 2.5 m/s (100 in/s) on a customized high-rate MTS system. Alternative high-rate experiments are also performed on a modified split Hopkinson pressure bar (SHPB). One configuration for the characterization of dynamic Mode-I interfacial delamination is to place a wedge-loaded compact-tension (WLCT) specimen in the test section. Pulse-shaping techniques are employed to control the profiles of the loading pulses such that the crack tip is loaded at constant loading rates. Pulse shaping also avoids the excitation of resonance, thus avoiding inertia-induced forces mixed with material strength in the data. To create Mode-II fracture conditions, an end-notched flexure (ENF) three-point bending specimen is employed in the gage section of the modified SHPB. © 2008 Society for Experimental Mechanics Inc.

Doppler electron velocimeter-practical considerations for a useful tool

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Reu, P.L.

The Doppler electron velocimeter (DEV) is a potentially new dynamic measurement system for the nano-scale. Electron microscopes have been used for many years for visualizing extremely small samples, but the ability to make dynamic measurements has not existed. The DEV proceeds along lines analogous to a laser Doppler velocimeter, which uses the Doppler shift of the wave to detect velocity. The use of electron beams, with their extremely short wavelengths, overcomes the roughly half-micron diffraction limit of light and allows measurement of samples of current scientific interest in the nano-regime. Previous work has shown that Doppler shifting of electrons is theoretically possible; this paper examines whether a practical instrument can be built given the inherent limitations of using electron beams as a probe source. Potential issues and their solutions, including electron beam coherence and interference, are presented. If answers to these problems can be found, the invention of the Doppler electron velocimeter could yield a completely new measurement concept at atomistic scales. © 2008 Society for Experimental Mechanics Inc.
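
The wavelength advantage can be made concrete with the non-relativistic de Broglie relation λ = h / sqrt(2 m_e e V) together with, as the optical analogue, the dual-beam LDV Doppler frequency f_D = 2 v sin(θ/2) / λ. The sketch below uses assumed beam parameters purely for illustration:

```python
import math

H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
Q_E = 1.602e-19    # elementary charge, C

def de_broglie(volts):
    """Non-relativistic de Broglie wavelength for an accelerating voltage."""
    return H / math.sqrt(2 * M_E * Q_E * volts)

lam = de_broglie(30e3)                                     # ~7 pm at 30 kV
f_doppler = 2 * 1.0 * math.sin(math.radians(5) / 2) / lam  # 1 m/s, 5 deg crossing
print(lam, f_doppler)   # ~7e-12 m, ~1e10 Hz
```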

Practical aspects of contouring using ESPI/DSPI

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Reu, Phillip L.; Hansche, Bruce D.

Moiré contouring can be implemented by illuminating an object with coherent light from two closely spaced point sources, known as the "two point" method. This method can be implemented using digital speckle pattern interferometry (DSPI) techniques by illuminating the object with a single point source that is moved between datasets. We briefly present the algorithm, and some inherent implicit and explicit assumptions, used in this technique. One assumption made is that the object remains stationary between datasets. If this assumption is violated, fractions of a micron of object motion can create hundreds of microns of error. We present simulations and experiments demonstrating these sensitivities, along with two techniques to compensate for object motion during data acquisition. ©2008 Society for Experimental Mechanics Inc.

Extending digital image correlation to moving field of view application: A feasibility study

Society for Experimental Mechanics - 11th International Congress and Exhibition on Experimental and Applied Mechanics 2008

Miller, Timothy J.; Schreier, Hubert W.; Valley, Michael T.; Brown, Timothy

Conventional tracking systems measure time-space-position data and collect imagery to quantify the flight dynamics of tracked targets. However, they do not provide 6-degree-of-freedom measurements combined with spin rate, wobble, and other flight related parameters associated with non-rigid body motions. Using high-speed digital video cameras and image processing techniques, it may be possible to measure test-unit attitude and surface deformations during key portions of the test-unit's trajectory. This paper discusses the viability of applying Digital Image Correlation (DIC) methods to image data collected from two laser tracking systems. Stereo imaging methods have proven effective in the laboratory for quantifying temporally and spatially resolved 3D motions across a target surface. The principal limitations of the DIC method have been the need for clean imagery and fixed camera positions and orientations. However, recent field tests have demonstrated that these limitations can be overcome to provide a new method for quantifying flight dynamics with stereo laser tracking and high-speed video imagery in the presence of atmospheric turbulence. © 2008 Society for Experimental Mechanics Inc.
