Optical microswitches are being developed for use in communication and security systems because of their small size and fast response time. However, as the intensity of the light incident on the microswitches increases, the thermal and mechanical responses of the reflective surfaces become a concern. It is important to dissipate heat adequately and to minimize any deformation of the reflective surfaces. To understand the mechanical responses of these microswitches, a set of microstructures has been fabricated and tested to evaluate how the surfaces deform when irradiated with a high-intensity laser beam. To evaluate and further investigate the experimental findings, the coupled-physics analysis tool Calagio has been applied to simulate the mechanical behavior of these test structures under optical heating. Code predictions of the surface displacement will be compared against measurements. Our main objective is to assess the existing material models and our code's predictive capability so that they can be used to qualify the performance of the microswitches being developed.
Polymer electronic devices and materials have vast potential for future microsystems and could have many advantages over conventional inorganic-semiconductor-based systems, including ease of manufacturing, cost, weight, flexibility, and the ability to integrate a wide variety of functions on a single platform. Starting materials and substrates are relatively inexpensive and amenable to mass manufacturing methods. This project attempted to plant the seeds for a new core competency in polymer electronics at Sandia National Laboratories. As part of this effort, a wide variety of polymer components and devices, ranging from simple resistors to infrared-sensitive devices, were fabricated and characterized. Ink-jet printing capabilities were established. In addition to promising results on prototype devices, the project highlighted the directions in which future investments must be made to establish a viable polymer electronics competency.
The Kyoto Accords have been signed by 140 nations in order to significantly reduce carbon dioxide emissions into the atmosphere in the medium to long term. To achieve this goal without drastic reductions in fossil fuel usage, carbon dioxide must be removed from the atmosphere and stored in acceptable reservoirs. Research has been undertaken to develop economical new technologies for the transfer and storage of carbon dioxide in saline aquifers. To maximize the storage rate, the aquifer is first hydraulically fractured in a conventional well stimulation treatment with a slurry containing solid proppant. Fracturing the well would greatly increase the injection flow rate. In addition, there are several ancillary benefits, including extension of the reservoir's early storage volume by moving the carbon dioxide further from the well. This extended reach would mitigate problems with the buoyant plume and increase the surface area between the carbon dioxide and the formation, facilitating absorption. A life-cycle cost estimate has been performed showing the benefits of this approach compared to injection without fracturing.
Micro Total Analysis Systems - Proceedings of MicroTAS 2006 Conference: 10th International Conference on Miniaturized Systems for Chemistry and Life Sciences
2006 International Conference on Megagauss Magnetic Field Generation and Related Topics, including the International Workshop on High Energy Liners and High Energy Density Applications, MEGAGAUSS
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error: situations in which the model used in the inverse problem is only a partially accurate representation of the outbreak, so that the model predictions and the observations differ by more than random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.
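To make the formulation concrete, here is a minimal sketch of such a characterization, assuming (our modeling choices, not the report's) a log-normal incubation-period distribution and a Poisson observation model for daily diagnosis counts; all numbers are invented for illustration:

```python
# Minimal sketch: grid posterior over number infected (N) and attack time (t0),
# given a short time series of newly symptomatic patients. The incubation-period
# parameters and observed counts below are hypothetical.
import numpy as np
from scipy.stats import lognorm, poisson

obs_days = np.array([1, 2, 3, 4])       # days since surveillance began
obs_counts = np.array([2, 5, 9, 14])    # hypothetical new diagnoses per day

# Hypothetical incubation model: median ~4.1 days, dispersion ~1.5
incubation = lognorm(s=np.log(1.5), scale=4.1)

N_grid = np.arange(50, 2001, 25)        # candidate numbers infected
t0_grid = np.arange(-5.0, 0.5, 0.25)    # candidate attack times (days before day 0)

log_post = np.full((N_grid.size, t0_grid.size), -np.inf)
for i, N in enumerate(N_grid):
    for j, t0 in enumerate(t0_grid):
        # expected new symptomatics per day: N * P(incubation ends in that day)
        lam = N * (incubation.cdf(obs_days - t0) - incubation.cdf(obs_days - 1 - t0))
        lam = np.clip(lam, 1e-12, None)
        log_post[i, j] = poisson.logpmf(obs_counts, lam).sum()  # flat prior

post = np.exp(log_post - log_post.max())
post /= post.sum()
i, j = np.unravel_index(post.argmax(), post.shape)
print(f"MAP estimate: N = {N_grid[i]}, attack time t0 = {t0_grid[j]:+.2f} days")
```

The report's dose estimation would enter through the incubation-period parameters; this sketch fixes them to keep the grid two-dimensional.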
The goal of our project was to examine a novel quantum cascade laser design that should inherently increase the output power of the laser while simultaneously providing a broad tuning range. Such a laser source enables multiple chemical species identification with a single laser and/or very broad frequency coverage with a small number of different lasers, thus reducing the size and cost of laser-based chemical detection systems. In our design concept, the discrete states in quantum cascade lasers are replaced by minibands made of multiple closely spaced electron levels. To facilitate the arduous task of designing miniband-to-miniband quantum cascade lasers, we developed a program that works in conjunction with our existing modeling software to completely automate the design process. Laser designs were grown, characterized, and iterated. The details of the automated design program and the measurement results are summarized in this report.
The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of performance-shaping factors (PSFs) that might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate each would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.
The present study examines the strain-rate sensitivity of four high-strength, high-toughness alloys at strain rates ranging from 0.0002 s^-1 to 200 s^-1: Aermet 100, a modified 4340, a modified HP9-4-20, and a recently developed Eglin AFB steel alloy, ES-1c. A refined dynamic servohydraulic method was used to perform tensile tests over this entire range. Each of these alloys exhibits only modest strain-rate sensitivity. Specifically, the strain-rate sensitivity exponent, m, is found to be in the range of 0.004-0.007 depending on the alloy. This corresponds to a ~10% increase in the yield strength over the six-orders-of-magnitude change in strain rate. Interestingly, while three of the alloys showed a concomitant ~3-10% drop in their ductility with increasing strain rate, the ES-1c alloy actually exhibited a 25% increase in ductility with increasing strain rate. Fractography suggests the possibility that at higher strain rates ES-1c evolves toward a more ductile dimple fracture mode associated with microvoid coalescence.
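As a quick consistency check (our arithmetic, using the standard power-law form of rate sensitivity, not an equation quoted from the paper):

```latex
% Power-law strain-rate sensitivity: \sigma_y \propto \dot{\varepsilon}^{\,m}
\frac{\sigma_y(\dot{\varepsilon}_2)}{\sigma_y(\dot{\varepsilon}_1)}
  = \left(\frac{\dot{\varepsilon}_2}{\dot{\varepsilon}_1}\right)^{m}
  = \left(\frac{200~\mathrm{s^{-1}}}{0.0002~\mathrm{s^{-1}}}\right)^{0.007}
  = \left(10^{6}\right)^{0.007}
  \approx 1.10
```

With m = 0.004 the same ratio is about 1.06, so the reported exponent range corresponds to roughly a 6-10% strength increase across the tested rates, consistent with the quoted ~10%.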
Limitations on focused scene size for the Polar Format Algorithm (PFA) for Synthetic Aperture Radar (SAR) image formation are derived. A post-processing filtering technique for compensating the spatially variant blurring in the image is examined. Modifications to this technique to enhance its robustness are proposed.
Hydrogen getters were tested for use in storage of plutonium-bearing materials in accordance with DOE's Criteria for Interim Safe Storage of Plutonium Bearing Materials. The hydrogen getter HITOP was aged for 3 months at 70 °C and tested under both recombination and hydrogenation conditions at 20 and 70 °C; partially saturated and irradiated aged getter samples were also tested. The recombination reaction was found to be very fast and well above the required rate of 45 std. cc H2/h. The gettering reaction, which is planned as the backup reaction in this deployment, is slower and may not meet the requirements alone. Pressure drop measurements and ¹H NMR analyses support these conclusions. Although the experimental conditions do not exactly replicate the deployment conditions, the results of our conservative experiments are clear: the aged getter shows sufficient reactivity to maintain hydrogen concentrations below the flammability limit, between the minimum and maximum deployment temperatures, for three months. The flammability risk is further reduced by the removal of oxygen through the recombination reaction. Neither radiation exposure nor thermal aging degrades the getter enough to be a concern. Future testing to evaluate performance for longer aging periods is in progress.
The "Design and Manufacturing of Complex Optics" LDRD sought to develop new advanced methods for the design and manufacturing of very complex optical systems. The project team developed methods for including manufacturability in optical designs and also researched extensions of manufacturing techniques to meet the challenging needs of aspherical, 3D, multi-level lenslet arrays on non-planar surfaces. To confirm the applicability of the developed techniques, the team chose the Dragonfly Eye optic as a testbed. This optic has arrays of aspherical micro-lenslets on both the exterior and the interior of a 4 mm diameter hemispherical shell. Manufacturing the dragonfly eye required new methods of plunge milling aspherical optics and the development of a method to create the milling tools using focused ion beam milling. The team showed the ability to create aspherical concave milling tools, which will have great significance for the optical industry. A prototype dragonfly eye exterior was created during the research, and the methods of including manufacturability in the optical design process were shown to be successful as well.
In this report we present a model to explain the size-dependent shapes of lead nanoprecipitates in aluminum. Size-dependent shape transitions, frequently observed at nanometer length scales, are commonly attributed to edge energy effects. This report resolves an ambiguity in the definition and calculation of edge energies and presents an atomistic calculation of edge energies for free clusters. We also present a theory for the size-dependent shapes of Pb nanoprecipitates in Al, introducing the concept of "magic shapes," defined as precipitate shapes having near-zero elastic strains when inserted into similarly shaped voids in the Al matrix. An algorithm for constructing a complete set of magic shapes is presented. The experimental observations are explained by elastic strain energies and interfacial energies; edge energies play a negligible role. We replicate the experimental observations by selecting precipitates having magic shapes and interfacial energies less than a cutoff value.
Effective elastic properties for carbon nanotube reinforced composites are obtained through a variety of micromechanics techniques. Using the in-plane elastic properties of graphene, the effective properties of carbon nanotubes are calculated utilizing a composite cylinders micromechanics technique as the first step in a two-step process. These effective properties are then used in the self-consistent and Mori-Tanaka methods to obtain effective elastic properties of composites consisting of aligned single- or multi-walled carbon nanotubes embedded in a polymer matrix. Effective composite properties from these averaging methods are compared to a direct composite cylinders approach extended from the work of Hashin and Rosen (1964) and Christensen and Lo (1979). Comparisons with finite element simulations are also performed. The effects of an interphase layer between the nanotubes and the polymer matrix, as a result of functionalization, are also investigated using a multi-layer composite cylinders approach. Finally, the modeling of the clustering of nanotubes into bundles due to interatomic forces is accomplished herein using a tessellation method in conjunction with a multi-phase Mori-Tanaka technique. In addition to aligned nanotube composites, modeling of the effective elastic properties of nanotubes randomly dispersed in a matrix is performed using the Mori-Tanaka method, and comparisons with experimental data are made. Computational micromechanical analysis of high-stiffness hollow fiber nanocomposites is performed using the finite element method. The high-stiffness hollow fibers are modeled either directly as isotropic hollow tubes or as equivalent transversely isotropic effective solid cylinders with properties computed using a micromechanics-based composite cylinders method. Using a representative volume element for clustered high-stiffness hollow fibers embedded in a compliant matrix with the appropriate periodic boundary conditions, the effective elastic properties are obtained from the finite element results. These effective elastic properties are compared to approximate analytical results found using micromechanics methods. The effects of an interphase layer between the high-stiffness hollow fibers and the matrix, simulating imperfect load transfer and/or functionalization of the hollow fibers, are also investigated and compared to a multi-layer composite cylinders approach. Finally, the combined effects of clustering with fiber-matrix interphase regions are studied. The parametric studies performed herein were motivated by and used properties for single-walled carbon nanotubes embedded in an epoxy matrix, and as such are intended to serve as a guide for continuum-level representations of such nanocomposites in a multi-scale modeling approach.
Technical assessment and remodeling of existing data indicates that the Richton salt dome, located in southeastern Mississippi, appears to be a suitable site for expansion of the U.S. Strategic Petroleum Reserve. The maximum area of salt is approximately 7 square miles, at a subsurface elevation of about -2000 ft, near the top of the salt stock. Approximately 5.8 square miles of this appears suitable for cavern development, because of restrictions imposed by modeled shallow salt overhang along several sides of the dome. The detailed geometry of the overhang currently is only poorly understood. However, the large areal extent of the Richton salt mass suggests that significant design flexibility exists for a 160-million-barrel storage facility consisting of 16 ten-million-barrel caverns. The dome itself is prominently elongated from northwest to southeast. The salt stock appears to consist of two major spine features, separated by a likely boundary shear zone trending from southwest to northeast. The dome decreases in areal extent with depth, because of salt flanks that appear to dip inward at 70-80 degrees. Caprock is present at depths as shallow as 274 ft, and the shallowest salt is documented at -425 ft. A large number of existing two-dimensional seismic profiles have been acquired crossing, and in the vicinity of, the Richton salt dome. At least selected seismic profiles should be acquired, examined, potentially reprocessed, and interpreted in an effort to understand the limitations imposed by the apparent salt overhang, should the Richton site be selected for actual expansion of the Reserve.
This report summarizes the results of an effort to establish a framework for assigning and communicating technology readiness levels (TRLs) for the modeling and simulation (ModSim) capabilities at Sandia National Laboratories. This effort was undertaken as a special assignment for the Weapon Simulation and Computing (WSC) program office led by Art Hale, and lasted from January to September 2006. This report summarizes the results, conclusions, and recommendations, and is intended to help guide the program office in their decisions about the future direction of this work. The work was broken out into several distinct phases, starting with establishing the scope and definition of the assignment. These are characterized in a set of key assertions provided in the body of this report. Fundamentally, the assignment involved establishing an intellectual framework for TRL assignments to Sandia's modeling and simulation capabilities, including the development and testing of a process to conduct the assignments. To that end, we proposed a methodology for both assigning and understanding the TRLs, and outlined some of the restrictions that need to be placed on this process and the expected use of the results. One of the first assumptions we overturned was the notion of a "static" TRL; rather, we concluded that problem context is essential in any TRL assignment, which leads to dynamic results (i.e., a ModSim tool's readiness level depends on how it is used, and by whom). While we leveraged the classic TRL results from NASA, DoD, and Sandia's NW program, we came up with a substantially revised version of the TRL definitions, maintaining consistency with the classic level definitions and the Predictive Capability Maturity Model (PCMM) approach. In fact, we substantially leveraged the foundation the PCMM team provided, and augmented it as needed. Given the modeling and simulation TRL definitions and our proposed assignment methodology, we conducted four "field trials" to examine how this would work in practice. The results varied substantially, but did indicate that establishing the capability dependencies and making the TRL assignments was manageable and not particularly time consuming. The key differences arose in perceptions of how this information might be used, and what value it would have (opinions ranged from negative to positive). The use cases and field trial results are included in this report. Taken together, the results suggest that we can make reasonably reliable TRL assignments, but that using them without the context of the information that led to them (i.e., examining the measures suggested by the PCMM table, and extended for ModSim TRL purposes) produces an oversimplified result; that is, one cannot boil things down to a scalar value without losing critical information.
Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.
Sandia National Laboratories has developed high-energy all-solid-state UV sources for use in laboratory tests of the feasibility of satellite-based ozone DIAL. These sources generate 320 nm light by sum-frequency mixing the 532 nm second harmonic of an Nd:YAG laser with 803 nm signal light derived from a self-injection-seeded image-rotating optical parametric oscillator (OPO). The OPO cavity utilizes the RISTRA geometry, denoting rotated-image singly-resonant twisted rectangle. Two configurations were developed, one using extra-cavity sum-frequency mixing, where the sum-frequency-generation (SFG) crystal is outside the OPO cavity, and the other using intra-cavity mixing, where the SFG crystal is placed inside the OPO cavity. Our goal was to obtain 200 mJ, 10 ns duration, 320 nm pulses at 10 Hz with near-IR to UV (1064 nm to 320 nm) optical conversion efficiency of 25%. To date we have obtained 190 mJ at 320 nm using extra-cavity SFG with 21% efficiency, and > 140 mJ by intra-cavity SFG with efficiency approaching 24%. While these results are encouraging, we have determined that our conversion efficiency can be enhanced by replacing self-seeding at the signal wavelength of 803 nm with pulsed idler seeding at 1576 nm. By switching to idler seeding and increasing the OPO cavity dimensions to accommodate flat-top beams with diameters up to 10 mm, we expect to generate UV energies approaching 300 mJ with optical conversion efficiency approaching 25%. While our technology was originally designed to obtain high pulse energies, it can also be used to generate low-energy UV pulses with high efficiency. Numerical simulations of an idler-seeded intra-cavity SFG RISTRA OPO scaled to half its nominal dimensions yielded 560 μJ of 320 nm light from 2 mJ of 532 nm pump using an idler-seed energy of 100 μJ.
This paper reports on a novel approach to atmospheric cloud segmentation from a space-based multi-spectral pushbroom satellite system. The satellite collects 15 spectral bands ranging from the visible, 0.45 μm, to the longwave infrared (IR), 10.7 μm. The images are radiometrically calibrated and have ground sample distances (GSD) of 5 meters for the visible to very-near-IR bands and a GSD of 20 meters for the near-IR to longwave-IR bands. The algorithm is a hybrid classification system in the sense that supervised and unsupervised networks are used in conjunction. For performance evaluation, a series of numerical comparisons to human-derived cloud borders was performed. A set of 33 scenes was selected to represent various climate zones with different land cover from around the world. The algorithm consists of the following steps. Band separation is performed to find the band combinations that give significant separation between cloud and background classes. The candidate bands are fed into a K-Means clustering algorithm to identify areas in the image with similar centroids. Each cluster is then compared to the cloud and background prototypes using the Jeffries-Matusita distance, and each unknown cluster is assigned to the prototype at minimum distance. A classification rate of 88% was found when using one shortwave-IR band and one midwave-IR band. Past investigators have reported segmentation accuracies ranging from 67% to 80%, many of which require human intervention. A sensitivity of 75% and specificity of 90% were reported as well.
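The cluster-to-prototype assignment step lends itself to a short sketch. The Gaussian class statistics and band values below are invented for illustration; the Jeffries-Matusita distance is computed from the Bhattacharyya distance in the usual way:

```python
# Minimal sketch (not the paper's code): assign K-Means clusters to cloud or
# background prototypes via the Jeffries-Matusita distance between Gaussians.
import numpy as np

def jeffries_matusita(mu1, cov1, mu2, cov2):
    """JM distance between two Gaussian distributions; ranges from 0 to sqrt(2)."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    b = (0.125 * diff @ np.linalg.solve(cov, diff)           # Bhattacharyya distance
         + 0.5 * np.log(np.linalg.det(cov)
                        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return np.sqrt(2.0 * (1.0 - np.exp(-b)))

def assign_clusters(cluster_stats, prototypes):
    """Assign each cluster (mean, cov) to the nearest class prototype."""
    labels = []
    for mu_c, cov_c in cluster_stats:
        dists = [jeffries_matusita(mu_c, cov_c, mu_p, cov_p)
                 for mu_p, cov_p in prototypes.values()]
        labels.append(list(prototypes)[int(np.argmin(dists))])
    return labels

# Hypothetical 2-band statistics (shortwave-IR and midwave-IR radiances):
prototypes = {"cloud": (np.array([0.8, 0.6]), 0.01 * np.eye(2)),
              "background": (np.array([0.2, 0.3]), 0.02 * np.eye(2))}
clusters = [(np.array([0.75, 0.55]), 0.015 * np.eye(2))]
print(assign_clusters(clusters, prototypes))   # -> ['cloud']
```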
Inorganic nanoclusters dispersed in organic matrices are of importance to a number of emerging technologies. However, obtaining useful properties from such organic-inorganic composites often requires high concentrations of well-dispersed nanoclusters. To achieve this goal, the chemistry of the particle surface and the matrix must be closely matched. This is based on the premise of minimizing the interfacial free energy; an excess of free energy will cause phase separation and ultimately aggregation. Thus, the optimal system is one in which the nanoclusters are stabilized by the same molecules that make up the encapsulant. Yet the organic matrix is typically chosen for its bulk properties, and therefore may not be amenable to chemical modification. Also, the organic-inorganic interface is often critical to establishing and maintaining the desired nanocluster (and hence composite) properties, placing further constraints on proposed chemical modifications. For these reasons we have adopted the use of amine-functionalized trimethoxysilanes (ormosils) as an optical-grade encapsulant. In this work, we demonstrate that ormosils can produce beneficial optical effects that are derived from interfacial phenomena and that these effects can be maintained throughout the encapsulation process.
A series of field tests sponsored by Sandia National Laboratories has simultaneously demonstrated the hard-rock drilling performance of different industry-supplied drag bits as well as Sandia's new Diagnostics-While-Drilling (DWD) system, which features a novel downhole tool that monitors dynamic conditions in close proximity to the bit. Drilling with both conventional and advanced ("best effort") drag bits was conducted at the GTI Catoosa Test Facility (near Tulsa, OK) in a well-characterized lithologic column that features an extended hard-rock interval of Mississippi limestone above a layer of highly abrasive Misener sandstone and an underlying section of hard Arbuckle dolomite. Output from the DWD system was closely observed during drilling and was used to make real-time decisions for adjusting the drilling parameters. This paper summarizes penetration rate and damage results for the various drag bits, shows representative DWD display data, and illustrates the application of these data for optimizing drilling performance and avoiding trouble.
Many MEMS devices are based on polysilicon because of the current availability of surface micromachining technology. However, polysilicon is not the best choice for devices subject to extensive sliding and/or thermal fields, owing to its chemical, mechanical, and tribological properties. In this work, we investigated the mechanical properties of three new materials for MEMS/NEMS devices: silicon carbide (SiC) from Case Western Reserve University (CWRU), ultrananocrystalline diamond (UNCD) from Argonne National Laboratory (ANL), and hydrogen-free tetrahedral amorphous carbon (ta-C) from Sandia National Laboratories (SNL). Young's modulus, characteristic strength, fracture toughness, and theoretical strength were measured for these three materials using a single testing methodology, the Membrane Deflection Experiment (MDE) developed at Northwestern University. The measured values of Young's modulus were 430 GPa, 960 GPa, and 800 GPa for SiC, UNCD, and ta-C, respectively. Fracture toughness measurements yielded values of 3.2, 4.5, and 6.2 MPa·m^(1/2), respectively. The strengths were found to follow a Weibull distribution, but their scaling was found to be controlled by different specimen size parameters. Therefore, a cross comparison of the strengths is not fully meaningful. We instead propose to compare their theoretical strengths as determined by employing the Novozhilov fracture criterion. The estimated theoretical strength is 10.6 GPa at a characteristic length of 58 nm for SiC, 18.6 GPa at a characteristic length of 37 nm for UNCD, and 25.4 GPa at a characteristic length of 38 nm for ta-C. The techniques used to obtain these results, as well as microscopic fractographic analyses, are summarized in the article. We also highlight the importance of characterizing the mechanical properties of MEMS materials by means of a single simple and accurate experimental technique.
Experimental modal analysis (EMA) was carried out on a micro-machined acceleration switch to characterize the motions of the device as fabricated and to compare them with analytical results for the nominal design. Finite element analysis (FEA) of the nominal design was used for this comparison. The acceleration switch was a single-crystal silicon disc supported by four fork-shaped springs. We shook the base of the die with step-sine excitation. A Laser Doppler Velocimeter (LDV) in conjunction with a microscope was used to measure the velocities of the die at several points. The desired first three modes of the structure were identified. The fundamental natural frequency measured in this experiment gives an estimate of the actuation g-level for the specified stroke. The fundamental resonance and actuation g-level results from the EMA and the FEA showed large discrepancies. The discrepancy prompted thorough dimensional measurement of the acceleration switch, which revealed differences between the nominal design and the tested component.
Structural assemblies often include bolted connections that are a primary mechanism for energy dissipation and nonlinear response at elevated load levels. Typically these connections are idealized within a structural dynamics finite element model as linear elastic springs. The spring stiffness is generally tuned to reproduce modal test data taken on a prototype. In conventional practice, modal test data are also used to estimate nominal values of modal damping that can be used in applications with load amplitudes comparable to those employed in the modal tests. Although this simplification of joint mechanics provides a convenient modeling approach with the advantages of reduced complexity and solution requirements, it often leads to poor predicted responses for load regimes associated with nonlinear system behavior. In this document we present an alternative approach using the concept of a "whole-joint" or "whole-interface" model [1]. We discuss the nature of the constitutive model, the manner in which model parameters are deduced, and the comparison of structural dynamic predictions with results for experimental hardware subjected to a series of transient excitations beginning at low levels and increasing to levels that produced macro-slip in the joint. Further comparison is performed with a traditional "tuned" linear model. The whole-interface model is shown to predict the onset of macro-slip and to improve markedly on the response levels given by the linear model. Additionally, comparison between predictions and high-amplitude experiments suggests areas for further work.
This paper addresses the coupling of experimental and finite element models of substructures. In creating the experimental model, difficulties exist in applying moments and estimating resulting rotations at the connection point between the experimental and finite element models. In this work, a simple test fixture for applying moments and estimating rotations is used to more accurately estimate these quantities. The test fixture is analytically "subtracted" from the model using the admittance approach. Inherent in this process is the inversion of frequency response function matrices, which can amplify the uncertainty in the measured data. Presented here is the work applied to a two-component beam model, along with analyses that attempt to identify and quantify some of these uncertainties. The admittance model of one beam component was generated experimentally using the moment-rotation fixture, and the other from a detailed finite element model. During analytical testing of the admittance modeling algorithm, it was discovered that the component admittance models generated by finite elements were ill-conditioned due to the inherent physics.
To create an analytical model of a material or structure, two sets of experiments must be performed: calibration and validation. Calibration experiments provide the analyst with the parameters from which to build a model that encompasses the behavior of the material. Once the model is calibrated, the new analytical results must be compared with a different, independent set of experiments, referred to as the validation experiments. This modeling procedure was performed for a crushable honeycomb material, with the validation experiments presented here. This paper covers the design of the validation experiments, the analysis of the resulting data, and the metric used for model validation.
Processing-in-Memory (PIM) technology encompasses a range of research leveraging a tight coupling of memory and processing. The most unique features of the technology are extremely wide paths to memory, extremely low memory latency, and wide functional units. Many PIM researchers are also exploring extremely fine-grained multi-threading capabilities. This paper explores a mechanism for leveraging these features of PIM technology to enhance commodity architectures in a seemingly mundane way: accelerating MPI. Modern network interfaces leverage simple processors to offload portions of the MPI semantics, particularly the management of posted receive and unexpected message queues. Without adding cost or increasing clock frequency, using PIMs in the network interface can enhance performance. The results are a significant decrease in latency and increase in small message bandwidth, particularly when long queues are present.
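The queue semantics being offloaded can be sketched in a few lines. This is a generic illustration of MPI matching rules, not Sandia's PIM code; the wildcard constants mimic MPI_ANY_SOURCE/MPI_ANY_TAG:

```python
# Minimal sketch of posted-receive / unexpected-message queue matching,
# the MPI bookkeeping that the paper proposes moving onto PIMs in the NIC.
from collections import deque

ANY_SOURCE = ANY_TAG = -1

posted = deque()       # receives posted by the application, in order
unexpected = deque()   # messages that arrived before a matching receive

def matches(recv, msg):
    return (recv["source"] in (ANY_SOURCE, msg["source"])
            and recv["tag"] in (ANY_TAG, msg["tag"]))

def on_message_arrival(msg):
    """NIC-side: walk the posted queue in order; else store as unexpected."""
    for recv in list(posted):
        if matches(recv, msg):
            posted.remove(recv)
            recv["buffer"].append(msg["payload"])   # deliver into the posted buffer
            return
    unexpected.append(msg)

def post_receive(source, tag, buffer):
    """Host-side: search the unexpected queue first; else post the receive."""
    recv = {"source": source, "tag": tag, "buffer": buffer}
    for msg in list(unexpected):
        if matches(recv, msg):
            unexpected.remove(msg)
            buffer.append(msg["payload"])
            return
    posted.append(recv)

buf = []
on_message_arrival({"source": 3, "tag": 7, "payload": b"early"})
post_receive(ANY_SOURCE, 7, buf)   # matches the stored unexpected message
print(buf)                          # [b'early']
```

Both operations are linear walks whose cost grows with queue depth, which is exactly where the wide, low-latency memory paths of a PIM can pay off without adding cost or raising clock frequency.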
The processes and functional constituents of biological photosynthetic systems can be mimicked to produce a variety of functional nanostructures and nanodevices. The photosynthetic nanostructures produced are analogs of the naturally occurring photosynthetic systems and are composed of biomimetic compounds (e.g., porphyrins). For example, photocatalytic nanotubes can be made by ionic self-assembly of two oppositely charged porphyrin tectons [1]. These nanotubes mimic the light-harvesting and photosynthetic functions of biological systems like the chlorosomal rods and reaction centers of green sulfur bacteria. In addition, metal-composite nanodevices can be made by using the photocatalytic activity of the nanotubes to reduce aqueous metal salts to metal atoms, which are subsequently deposited onto tube surfaces [2]. In another approach, spatial localization of photocatalytic porphyrins within templating surfactant assemblies leads to controlled growth of novel dendritic metal nanostructures [3].
Conference Proceedings of the Society for Experimental Mechanics Series
Hasselman, Timothy; Wathugala, G.W.; Urbina, Angel; Paez, Thomas L.
Mechanical systems behave randomly and it is desirable to capture this feature when making response predictions. Currently, there is an effort to develop predictive mathematical models and test their validity through the assessment of their predictive accuracy relative to experimental results. Traditionally, the approach to quantify modeling uncertainty is to examine the uncertainty associated with each of the critical model parameters and to propagate this through the model to obtain an estimate of uncertainty in model predictions. This approach is referred to as the "bottom-up" approach. However, parametric uncertainty does not account for all sources of the differences between model predictions and experimental observations, such as model form uncertainty and experimental uncertainty due to the variability of test conditions, measurements and data processing. Uncertainty quantification (UQ) based directly on the differences between model predictions and experimental data is referred to as the "top-down" approach. This paper discusses both the top-down and bottom-up approaches and uses the respective stochastic models to assess the validity of a joint model with respect to experimental data not used to calibrate the model, i.e. random vibration versus sine test data. Practical examples based on joint modeling and testing performed by Sandia are presented and conclusions are drawn as to the pros and cons of each approach.
Achieving good scalability for large simulations based on structured adaptive mesh refinement is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Domain-based partitioners serve as a foundation for techniques designed to improve scalability, and they have traditionally been designed on the basis of an independence assumption regarding the computational flow among grid patches at different refinement levels. But this assumption does not hold in practice, so the effectiveness of these techniques is significantly impaired. This paper introduces a partitioning method designed without this independence assumption. The method is tested on four different applications exhibiting different behaviors. The results show that synchronization costs can on average be reduced by 75 percent. The conclusion is that the method is suitable as a foundation for general hierarchical methods designed to improve the scalability of structured adaptive mesh refinement applications.
This paper is about making reversible logic a reality for supercomputing. Reversible logic offers a way to exceed certain basic limits on the performance of computers, yet a powerful case will have to be made to justify its substantial development expense. This paper explores the limits of current, irreversible logic for supercomputers, thus forming a threshold above which reversible logic is the only solution. Problems above this threshold are discussed, with the science and mitigation of global warming being discussed in detail. To further develop the idea of using reversible logic in supercomputing, a design for a 1 Zettaflops supercomputer as required for addressing global climate warming is presented. However, to create such a design requires deviations from the mainstream of both the software for climate simulation and research directions of reversible logic. These deviations provide direction on how to make reversible logic practical. Copyright 2005 ACM.
43rd AIAA Aerospace Sciences Meeting and Exhibit - Meeting Papers
Barone, Matthew F.; Roy, Christopher J.
Simulations of a low-speed square cylinder wake and a supersonic axisymmetric base wake are performed using the Detached Eddy Simulation (DES) model. A reduced-dissipation form of the Symmetric TVD scheme is employed to mitigate the effects of dissipative error in regions of smooth flow. The reduced-dissipation scheme is demonstrated on a 2D square cylinder wake problem, showing a dramatic increase in accuracy for a given grid resolution. The results for simulations on three grids of increasing resolution for the 3D square cylinder wake are compared to experimental data and to other LES and DES studies. The comparisons of mean flow and global mean flow quantities to experimental data are favorable, while the results for second order statistics in the wake are mixed and do not always improve with increasing spatial resolution. Comparisons to LES studies are also generally favorable, suggesting DES provides an adequate subgrid scale model. Predictions of base drag and centerline wake velocity for the supersonic wake are also good, given sufficient grid refinement. These cases add to the validation library for DES and support its use as an engineering analysis tool for accurate prediction of global flow quantities and mean flow properties.
In modal testing, the most popular tools for exciting a structure are hammers and shakers. This paper reviews the applications for which shakers have an advantage. In addition the advantages and disadvantages of different forcing inputs (e.g. sinusoidal, random, burst random and chirp) that can be applied with a shaker are noted. Special considerations are reported for the fixtures required for shaker testing (blocks, force gages, stingers) to obtain satisfactory results. Various problems that the author has encountered during single and multi-shaker modal tests are described with their solutions.
This paper provides an overview of several approaches to formulating and solving optimization under uncertainty (OUU) engineering design problems. In addition, the topic of high-performance computing and OUU is addressed, with a discussion of the coarse- and fine-grained parallel computing opportunities in the various OUU problem formulations. The OUU approaches covered here are: sampling-based OUU, surrogate model-based OUU, analytic reliability-based OUU (also known as reliability-based design optimization), polynomial chaos-based OUU, and stochastic perturbation-based OUU.
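As an illustration of the first of these formulations, the sketch below implements sampling-based OUU on an invented toy performance function; the function, the sample size, and the mean-plus-3-sigma robustness statistic are our assumptions, not the paper's:

```python
# Minimal sketch of sampling-based OUU: optimize a design variable d where the
# objective is a statistic of a performance function over an uncertain input xi.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.normal(loc=1.0, scale=0.2, size=500)   # fixed sample of the uncertain input
                                                # (common random numbers keep the
                                                #  objective smooth across iterates)

def performance(d, xi):
    # hypothetical response: nominal cost grows away from d = 2,
    # with an uncertainty-driven penalty when d * xi is small
    return (d - 2.0) ** 2 + np.exp(-(d * xi))

def ouu_objective(d):
    g = performance(d[0], xi)
    return g.mean() + 3.0 * g.std()   # mean + 3 sigma: a robust-design statistic

result = minimize(ouu_objective, x0=[1.0], method="Nelder-Mead")
print("robust optimum d* =", result.x[0])
```

Each sample evaluation is independent, which is the coarse-grained parallelism the paper highlights: the inner Monte Carlo loop distributes trivially across processors, while fine-grained parallelism lives inside each performance evaluation.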
Latin Hypercube Sampling (LHS) is widely used as sampling based method for probabilistic calculations. This method has some clear advantages over classical random sampling (RS) that derive from its efficient stratification properties. However, one of its limitations is that it is not possible to extend the size of an initial sample by simply adding new simulations, as this will lead to a loss of the efficient stratification associated with LHS. We describe a new method to extend the size of an LHS to n (>=2) times its original size while preserving both the LHS structure and any induced correlations between the input parameters. This method involves introducing a refined grid for the original sample and then filling in empty rows and columns with new data in a way that conserves both the LHS structure and any induced correlations. An estimate of the bounds of the resulting correlation between two variables is derived for n=2. This result shows that the final correlation is close to the average of the correlations from the original sample and the new sample used in the infilling of the empty rows and columns indicated above.
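A bare sketch of the n = 2 case is below. It preserves the LHS stratification via the refined-grid infilling described above, but makes no attempt at the correlation control that the full method provides:

```python
# Minimal sketch: double an m-point LHS on [0,1)^d to 2m points by refining the
# grid to 2m strata per variable and filling the empty strata with new points.
import numpy as np

def double_lhs(sample, rng):
    m, d = sample.shape
    new = np.empty((m, d))
    for k in range(d):
        # strata of the refined grid already occupied by the original sample
        occupied = set(np.floor(sample[:, k] * 2 * m).astype(int))
        empty = np.array(sorted(set(range(2 * m)) - occupied))
        assert empty.size == m          # holds whenever the input is a valid LHS
        vals = (empty + rng.random(m)) / (2 * m)  # one new point per empty stratum
        new[:, k] = rng.permutation(vals)         # random pairing across dimensions
    return np.vstack([sample, new])

rng = np.random.default_rng(1)
m, d = 5, 2
base = np.column_stack([rng.permutation(m) for _ in range(d)])
base = (base + rng.random((m, d))) / m            # a valid m-point LHS
extended = double_lhs(base, rng)
print(extended.shape)                              # (10, 2)
```

Because each original point occupies exactly one refined stratum, exactly m strata per variable remain empty, so the extended sample is again a valid LHS on the finer grid.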
Chemiresistor microsensors have been developed to provide continuous in-situ detection of volatile organic compounds (VOCs). The chemiresistor sensor is packaged in a rugged, waterproof housing that allows the device to detect VOCs in air, soil, and water. Preconcentrators are also being developed to enhance the sensitivity of the chemiresistor sensor. The "micro-hotplate" preconcentrator is placed face-to-face against the array of chemiresistors inside the package. At prescribed intervals, the preconcentrator is heated to desorb VOCs that have accumulated on the sorbent material on the one-micron-thick silicon-nitride membrane. The pulse of higher-than-ambient concentration of VOC vapor is then detected by the adjacent chemiresistors. The plume is allowed to diffuse out of the package through slots adjacent to the preconcentrator. The integrated chemiresistor/preconcentrator sensor has been tested in the laboratory to evaluate the impacts of sorbent materials, fabrication methods, and repeated heating cycles on the longevity and performance of the sensor. Calibration methods have also been developed, and field tests have been initiated. Copyright ASCE 2005.
Real-time water quality and chemical-specific sensors are becoming more commonplace in water distribution systems. The overall objective of the sensor network is to protect consumers from accidental and malevolent contamination events occurring within the distribution network. This objective can be quantified in several different ways, including minimizing the amount of contaminated water consumed, minimizing the extent of the contamination within the network, and minimizing the time to detection. We examine the ability of a sensor network to meet these objectives as a function of both the detection limit of the sensors and the number of sensors in the network. A moderately sized network is used as an example, and sensors are placed randomly. The source term is a passive injection into a node, and the resulting concentration in the node is a function of the volumetric flow through that node. The concentration of the contaminant at the source node is averaged over all time steps during the injection period. For each combination of number of sensors and detection limit, the mean values of the different objectives across multiple random sensor placements are evaluated. Results of this analysis allow the tradeoff between the necessary detection limit of a sensor and the number of sensors to be evaluated. Results show that for the example problem examined here, a sensor detection limit of 0.01 of the average source concentration is adequate for maximum protection. Copyright ASCE 2005.
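The structure of the study can be sketched as a small Monte Carlo over random placements. The toy exponential transport table below stands in for a real hydraulic/water-quality simulation, and all constants are invented:

```python
# Minimal sketch of the detection-limit vs. sensor-count tradeoff, using a
# hypothetical contaminant-transport table in place of a network model.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_steps = 200, 48               # nodes, hourly time steps
arrival = rng.integers(0, n_steps, size=n_nodes)   # hour the plume reaches node i
peak = rng.random(n_nodes)                         # peak concentration at node i,
                                                   # as a fraction of source average
t = np.arange(n_steps)
conc = (peak[:, None]
        * np.exp(-0.2 * np.clip(t - arrival[:, None], 0, None))
        * (t >= arrival[:, None]))       # conc[i, t]: decays after arrival

def mean_time_to_detection(detection_limit, n_sensors, trials=500):
    times = []
    for _ in range(trials):
        sensors = rng.choice(n_nodes, size=n_sensors, replace=False)
        hits = np.argwhere(conc[sensors] >= detection_limit)
        times.append(hits[:, 1].min() if hits.size else n_steps)  # censored at end
    return np.mean(times)

for dl in (0.5, 0.1, 0.01):
    print(f"limit={dl:5.2f}:",
          [round(mean_time_to_detection(dl, n), 1) for n in (5, 20, 80)])
```

Sweeping the two axes this way produces the tradeoff surface described above: past some point, lowering the detection limit further buys little, which is the sense in which 0.01 of the source concentration was found adequate.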
We have developed and implemented a method which, given a three-dimensional object, can infer from topology the two-dimensional masks needed to produce that object with surface micromachining. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model. The 3D model is first separated into bodies that are non-intersecting, are made from different materials, or are linked only through a ground plane. Next, for each body, unique horizontal cross sections are located and arranged into a tree based on their topological relationships. A branch-wise search of the tree uncovers locations where deposition boundaries must lie and identifies candidate masks, creating a generic mask set for the 3D model. Finally, specific process requirements are considered that may constrain the generic mask set.
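A heavily simplified sketch of the cross-section step follows; the actual tool also separates bodies, builds the full topological tree, and applies process constraints, none of which appear here:

```python
# Minimal sketch: represent one body as boolean horizontal slices on a grid.
# Wherever adjacent unique slices differ, a deposition boundary must lie, and
# the slice above the boundary is a candidate mask region.
import numpy as np

def candidate_masks(slices):
    """slices: list of 2D boolean arrays, ordered bottom to top."""
    unique = [slices[0]]
    for s in slices[1:]:
        if not np.array_equal(s, unique[-1]):   # keep only unique cross sections
            unique.append(s)
    # each change in cross section implies a mask edge at that height
    return [upper.copy() for lower, upper in zip(unique, unique[1:])]

# toy body: a wide pedestal with a narrower post on top
wide = np.zeros((8, 8), bool); wide[1:7, 1:7] = True
post = np.zeros((8, 8), bool); post[3:5, 3:5] = True
masks = candidate_masks([wide, wide, post, post])
print(len(masks), masks[0].sum())   # 1 candidate mask covering the 2x2 post
```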
The effects of ozone (O3) on tin oxide growth rates from mixtures of monobutyltin trichloride (MBTC), O2, and H2O are reported. The results indicate that O3 increases the growth rate under kinetically controlled conditions (MBTC + O2, 25 torr), but under mass-transport control (200 torr and/or addition of H2O to the reactant gases), growth rates are either unaffected or decrease. Kinetic modeling of the gas-phase reactions suggests that O, H, and OH radicals react at the surface to increase the growth rate, but higher pressures reduce their concentrations via recombination. In addition, higher pressures result in increased concentrations of less reactive tin halides, which are decomposition products of MBTC. It appears that when H2O is a reactant, these radicals reduce the concentration of the tin oxide precursor (thought to be an MBTC-H2O complex), which significantly decreases the growth rate.
Proceedings of the Solar World Congress 2005: Bringing Water to the World, Including Proceedings of 34th ASES Annual Conference and Proceedings of 30th National Passive Solar Conference
Sattler, Allan R.; Hanley, Charles J.; Hightower, Michael M.; Andelman, Marc
Laboratory and field developments are underway to use solar energy to power a desalination technology, capacitive deionization, for water produced by remote Coal Bed Methane (CBM) natural gas wells. Given the physical remoteness of many CBM wells throughout the southwestern U.S., as shown in Figure 1, this approach offers promise, not only because of its effectiveness in removing salt from CBM water so that the water can be used for various applications, but also because of its potentially lower energy consumption compared to other technologies, such as reverse osmosis. This, coupled with the remoteness of thousands of these wells, makes them well suited for use with photovoltaic (solar electric, PV) systems. Concurrent laboratory activities are providing information about the effectiveness and energy requirements of each technology under various produced-water qualities and water reuse applications, such as salinity concentrations and water flows. These parameters are being used to drive the design of integrated PV-powered treatment systems. Full-scale field implementations are planned, with data collection and analysis designed to optimize the system design for practical remote applications. Early laboratory studies of capacitive deionization have shown promise that at common CBM salinity levels the technology may require less energy, be less susceptible to fouling, and be more compact than equivalent reverse osmosis (RO) systems. The technology uses positively and negatively charged electrodes to attract charged ions in a liquid, such as dissolved salts, metals, and some organics, to the electrodes. This concentrates the ions at the electrodes and reduces the ion concentrations in the liquid. This paper discusses the results of these laboratory studies and extends these results to energy consumption and design considerations for field implementation of produced-water treatment using photovoltaic systems.
This paper discusses issues that arise in controlling high-quality mechanical shock inputs for mock hardware in order to validate a model of a bolted connection. The dynamic response of some mechanical components is strongly dependent upon the behavior of their bolted connections. The bolted connections often provide the only structural load paths into the component and can be highly nonlinear. Accurate analytical modeling of bolted connections is critical to the prediction of component response to dynamic loadings. In particular, it is necessary to understand and correctly model the stiffness of the joint and the energy dissipation (damping) that is a nonlinear function of the forces acting on the joint. Frequency-rich shock inputs composed of several decayed sinusoid components were designed as model validation tests and applied to a test item using an electrodynamic shaker. The test item was designed to isolate the behavior of the joint of interest, and responses were dependent on the properties of the joints. The nonlinear stiffness and damping properties of the test item under study presented a challenge in isolating the behavior of the test hardware from the stiffness, damping, and boundary conditions of the shaker. Techniques that yield data providing a sound basis for model validation comparisons of the bolted joint model are described.
The research goal presented here is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30 °C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (similar to, but less severe than, the Battelle Class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, the size distribution, and the electrical resistance of a probe contact with the aged surface, as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of the copper sulfide corrosion product, which blooms through defects in the gold layer, and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism by which corrosion blooms affect electrical contact resistance were needed to close the numerical model. Comparisons are made to the experimentally observed corrosion-bloom number density, bloom size distribution, and cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area and a probability for bloom-growth extinction proportional to the bloom volume, due to Kirkendall voiding. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. To model this behavior, the resistance calculated for large blooms is heavily weighted by contributions from the halo region.
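The two stochastic ingredients of the numerical model, initiation probability proportional to area and growth extinction proportional to bloom volume, can be illustrated with a toy Monte Carlo; all rate constants below are invented:

```python
# Minimal sketch (not the paper's model): bloom initiation as a Poisson process
# with rate proportional to exposed area, and growth extinction (Kirkendall
# voiding) with probability proportional to bloom volume.
import numpy as np

rng = np.random.default_rng(3)
area = 1.0                    # normalized sample area
k_init, k_ext = 5.0, 2.0      # per-day initiation and extinction rate constants
growth = 0.5                  # radial growth per day for an active bloom
dt, days = 0.1, 30

radii, active = [], []
for step in range(int(days / dt)):
    # new blooms nucleate at defects: Poisson with rate ~ area
    for _ in range(rng.poisson(k_init * area * dt)):
        radii.append(0.05)
        active.append(True)
    for i, r in enumerate(radii):
        if not active[i]:
            continue
        radii[i] = r + growth * dt
        # extinction probability grows with bloom volume (~ r^3)
        if rng.random() < k_ext * radii[i] ** 3 * dt:
            active[i] = False

radii = np.array(radii)
print(f"{radii.size} blooms; median radius {np.median(radii):.2f}")
```

Run over increasing exposure times, this reproduces the qualitative behavior described above: the bloom density grows with time while the size distribution saturates, because large blooms preferentially stop growing.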
In light of difficulties in realizing a carbohydrate fuel cell that can run on animal or plant carbohydrates, a study was carried out to fabricate a membrane-separated, platinum-cathode, enzyme-anode fuel cell and test it under both quiescent and flow-through conditions. Mediator loss to the flowing solution was the largest contributor to power loss. Use of the phenazine-derivative mediators offered decent open-circuit potentials for half-cell and full-cell performance but suffered from quick loss to the solution, which hampered long-term operation. A means to stabilize the phenazine molecules to the electrode would need to be developed to extend the lifetime of the cell beyond its current level of a few hours. This is an abstract of a paper presented at the ACS Fuel Chemistry Meeting (Washington, DC, Fall 2005).
A directional scintillating fiber detector for 14-MeV neutrons was simulated using the GEANT4 Monte Carlo simulation tool. Detailed design aspects of a prototype 14-MeV neutron fiber detector under development were used in the simulation to assess the performance and design features of the detector. Saint-Gobain BCF-12 plastic fiber material was used in the prototype development. The fiber consists of a polystyrene scintillating core with a 0.48 mm × 0.48 mm cross section and an acrylic outer cladding of 0.02 mm thickness. A total of 64 square fibers, each with a cross-sectional area of 0.25 mm² and a length of 100 mm, were positioned parallel to each other with a spacing of 2.3 mm (fiber pitch) for the tracking of 14-MeV neutron-induced recoil proton (n-p) events. Recoil proton events depositing energy in two collinear fibers were used to reconstruct a two-dimensional (2D) direction of incident neutrons. Blurring of the recoil proton signal in measurements was also considered to account for uncertainty in the direction reconstruction. The reconstructed direction has a limiting angular resolution of 3° due to the fiber dimensions. Blurring the recoil proton energy further broadened the reconstructed direction, giving an angular resolution of 20°. These values were determined with the incident neutron beam at an angle of 45° relative to the front surface of the detector; comparable values were obtained at other angles of incidence. Results from the present simulation have demonstrated promising directional sensitivity for the scintillating fiber detector under development.
As electronic assemblies become more compact and processing bandwidth increases, the escalating thermal energy has become more difficult to manage. The major limitation has been nonmetallic joining using poor thermal interface materials (TIMs). The interfacial, versus bulk, thermal conductivity of an adhesive is the major loss mechanism and normally accounts for an order-of-magnitude loss in conductivity per equivalent thickness. The next generation of TIMs requires a sophisticated understanding of materials and surface science, heat transport at sub-micron scales, and the manufacturing processes used in the packaging of microelectronics and other target applications. Only when this relationship between bondline manufacturing processes, structure, and contact resistance is well understood on a fundamental level will it be possible to advance the development of miniaturized microsystems. We give the status of the study of thermal transport across these interfaces.
Voltage and temperature distributions along the crucible were measured during VAR of 0.81 m diameter Ti-6Al-4V electrode into 0.91 m diameter ingot. These data were used to determine the current distribution along the crucible. Measurements were made for two furnace conditions, one with a bare crucible and the other with a painted crucible. The VAR furnace used for these measurements is of the non-coaxial type, i.e. current is fed directly into the bottom of the crucible through a stool (base plate) contact and exits the furnace through the electrode stinger. The data show that approximately 63% of the current is conducted directly between the ingot and electrode with the remaining conducted between the electrode and crucible wall. This partitioning does not appear to be sensitive to crucible coating. The crucible voltage data were successfully simulated using uniform current distributions for the current conduction zones, a value of 0.63 for the partitioning, and widths of 0.30 and 0.15 m for the ingot/crucible wall and plasma conduction zones, respectively. Successful simulation of the voltage data becomes increasingly difficult (or impossible) as one uses current partitioning values increasingly different from 0.63, indicating that the experimental value is consistent with theory. Current conducted between the ingot and crucible wall through the ingot/wall contact zone may vary during the process without affecting overall current partitioning. The same is true for current conducted through the ingot/stool and stool/crucible contact zones. There is some evidence that the ingot/stool current decreases with increasing ingot length for the case of the bare crucible. Equivalent circuit analysis shows that, under normal conditions, current partitioning is only sensitive to the ratio of the plasma resistance across the annulus to the plasma resistance across the electrode gap, thereby demonstrating the relationship between current partitioning and gap.
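The equivalent-circuit observation can be written as a simple current divider between the two parallel plasma paths; this is our paraphrase with our own symbols, not an equation quoted from the paper:

```latex
% Fraction of total current conducted through the electrode gap to the ingot:
f_{\mathrm{gap}} = \frac{I_{\mathrm{ingot}}}{I_{\mathrm{total}}}
                 = \frac{R_{\mathrm{annulus}}}{R_{\mathrm{annulus}} + R_{\mathrm{gap}}}
```

The measured partitioning f_gap ≈ 0.63 would then imply R_annulus/R_gap ≈ 0.63/0.37 ≈ 1.7, consistent with the stated result that the partitioning is sensitive only to this resistance ratio.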
This work investigated the relationship between the resistance degradation in low-force metal contacts and hot-switched operational conditions representative of MEMS devices. A modified nano-indentation apparatus was used to bring electrically-biased gold and platinum surfaces into contact at a load of 100 μN. The applied normal force and electrical contact resistance of the contact materials was measured simultaneously. The influence of parallel discharge paths for stored electrical energy in the contact circuit is discussed in relation to surface contamination decomposition and the observed resistance degradation.
Proceedings of the Solar World Congress 2005: Bringing Water to the World, Including Proceedings of 34th ASES Annual Conference and Proceedings of 30th National Passive Solar Conference
Begay-Campbell, Sandra; Coots, Jennifer; Mar, Benjamin
Sandia National Laboratories (Sandia) has an active relationship with the Navajo Nation. Sandia has grown this relationship through the joint formation of strategic multiyear plans oriented toward the development of sustainable Native American renewable energy projects and associated business development. For the last decade, the Navajo Tribal Utility Authority (NTUA) has installed stand-alone photovoltaic (PV) systems on the Navajo Reservation to provide some of its most remote customers with electricity. Technical assistance from Sandia and New Mexico State University's Southwest Technology Development Institute supports NTUA as a leader in rural solar electrification, assists NTUA's solar program coordinator in creating a sustainable program, and conveys NTUA's success in solar to others, including the Department of Energy (DOE). In partnership with DOE's Tribal Energy Program, summer interns Jennifer Coots (MBA student) and Benjamin Mar (Electrical and Computer Engineering student) prepared case studies that summarize the rural utility's experience with solar electric power.
LMPC 2005 - Proceedings of the 2005 International Symposium on Liquid Metal Processing and Casting
Viswanathan, Srinath; Melgaard, David K.; Patel, Ashish D.; Evans, David G.
A numerical model of the ESR process was used to study the effect of the various process parameters on the resulting temperature profiles, flow field, and pool shapes. The computational domain included the slag and ingot, while the electrode, crucible, and cooling water were considered as external boundary conditions. The model considered heat transfer, fluid flow, solidification, and electromagnetic effects. The predicted pool profiles were compared with experimental results obtained over a range of processing parameters from an industrial-scale 718 alloy ingot. The shape of the melt pool was marked by dropping nickel balls down the annulus of the crucible during melting. Thermocouples placed in the electrode monitored the electrode and slag temperature as melting progressed. The cooling water temperature and flow rate were also monitored. The resulting ingots were sectioned and etched to reveal the ingot macrostructure and the shape of the melt pool. Comparisons of the predicted and experimentally measured pool profiles show excellent agreement. The effect of processing parameters, including the slag cap thickness, on the temperature distribution and flow field are discussed. The results of a sensitivity study of thermophysical properties of the slag are also discussed.
The structural characteristics of buttress thread mechanical joints are not well understood and are difficult to accurately model. As an initial step towards understanding the mechanics of the buttress thread, a 2D plane stress model was created. An experimental investigation was conducted to study the compliance, damping characteristics, and stress field in an axial test condition. The compliance and damping were determined experimentally from a steel cross section of a buttress thread. The stress field was visualized using photoelastic techniques. The mechanics study combined with the photoelastic study provided a set of validation data.