Optical microswitches are being developed for use in communication and security systems because of their small size and fast response time. However, as the intensity of the light incident on the microswitches increases, the thermal and mechanical responses of the reflective surfaces become a concern: heat must be dissipated adequately and deformation of the reflective surfaces minimized. To understand the mechanical responses of these microswitches, a set of microstructures was fabricated and tested to evaluate how the surfaces deform when irradiated with a high-intensity laser beam. To evaluate and further investigate the experimental findings, the coupled-physics analysis tool Calagio was applied to simulate the mechanical behavior of these test structures under optical heating. Code predictions of the surface displacement are compared against measurements. Our main objective is to assess the existing material models and our code's predictive capability so that they can be used to qualify the performance of the microswitches being developed.
Polymer electronic devices and materials have vast potential for future microsystems and could have many advantages over conventional inorganic semiconductor-based systems, including ease of manufacturing, cost, weight, flexibility, and the ability to integrate a wide variety of functions on a single platform. Starting materials and substrates are relatively inexpensive and amenable to mass manufacturing methods. This project attempted to plant the seeds for a new core competency in polymer electronics at Sandia National Laboratories. As part of this effort, a wide variety of polymer components and devices, ranging from simple resistors to infrared-sensitive devices, were fabricated and characterized. Ink-jet printing capabilities were established. In addition to promising results on prototype devices, the project highlighted the directions in which future investments must be made to establish a viable polymer electronics competency.
The Kyoto Accords were signed by 140 nations with the goal of significantly reducing carbon dioxide emissions into the atmosphere over the medium to long term. To achieve this goal without drastic reductions in fossil fuel usage, carbon dioxide must be removed from the atmosphere and stored in acceptable reservoirs. Research has been undertaken to develop economical new technologies for the transfer and storage of carbon dioxide in saline aquifers. To maximize the storage rate, the aquifer is first hydraulically fractured in a conventional well-stimulation treatment with a slurry containing solid proppant, which greatly increases the injection volume flowrate. Fracturing also provides several ancillary benefits, including extension of the reservoir's early storage volume by moving the carbon dioxide farther from the well. This extended reach would mitigate problems with the buoyant plume and increase the surface area between the carbon dioxide and the formation, facilitating absorption. A life-cycle cost estimate has been performed showing the benefits of this approach compared to injection without fracturing.
Micro Total Analysis Systems - Proceedings of MicroTAS 2006 Conference: 10th International Conference on Miniaturized Systems for Chemistry and Life Sciences
2006 International Conference on Megagauss Magnetic Field Generation and Related Topics, including the International Workshop on High Energy Liners and High Energy Density Applications, MEGAGAUSS
Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern after the anthrax attacks of 2001. The ability to characterize such attacks, i.e., to estimate the number of people infected, the time of infection, and the average dose received, is important when planning a medical response. We address this question of characterization by formulating a Bayesian inverse problem predicated on a short time series of diagnosed patients exhibiting symptoms. To be of relevance to response planning, we limit ourselves to 3-5 days of data. In tests performed with anthrax as the pathogen, we find that these data are usually sufficient, especially if the model of the outbreak used in the inverse problem is an accurate one. In some cases the scarcity of data may initially support outbreak characterizations at odds with the true one, but with sufficient data the correct inferences are recovered; in other words, the inverse problem posed and its solution methodology are consistent. We also explore the effect of model error, i.e., situations in which the model used in the inverse problem is only a partially accurate representation of the outbreak, so that model predictions and observations differ by more than random noise. We find that while there is a consistent discrepancy between the inferred and the true characterizations, they are also close enough to be of relevance when planning a response.
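As an illustration of the Bayesian formulation (a toy grid posterior with assumed lognormal incubation parameters, not the outbreak model actually used in this work), the number infected can be inferred from a few days of case counts:

```python
import math
import numpy as np

# Toy version of the characterization problem: infer the number infected,
# N, from a short series of daily symptomatic case counts. The lognormal
# incubation parameters (median ~11 days, shape 0.5) are illustrative
# assumptions, not values from this work.
MU, SIGMA = math.log(11.0), 0.5

def lognorm_cdf(t):
    """CDF of the assumed lognormal incubation period (days)."""
    if t <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(t) - MU) / (SIGMA * math.sqrt(2.0))))

days = np.arange(1, 6)                       # 5 days of observations
p = np.array([lognorm_cdf(d) - lognorm_cdf(d - 1) for d in days])

true_N = 1000
y = np.round(true_N * p)                     # noiseless synthetic counts

# Uniform prior on N; daily counts treated as Poisson(N * p_d).
N_grid = np.arange(100, 3001)
log_post = (y[None, :] * np.log(N_grid[:, None] * p[None, :])).sum(axis=1) \
           - N_grid * p.sum()
N_map = N_grid[np.argmax(log_post)]
print(N_map)                                 # MAP estimate, close to true_N
```

Even with only five days of data the posterior mode lands near the true attack size here; with real, noisy data and model error the posterior widens, which is the behavior the study quantifies.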
The goal of our project was to examine a novel quantum cascade laser design that should inherently increase the output power of the laser while simultaneously providing a broad tuning range. Such a laser source enables multiple chemical species identification with a single laser and/or very broad frequency coverage with a small number of different lasers, thus reducing the size and cost of laser based chemical detection systems. In our design concept, the discrete states in quantum cascade lasers are replaced by minibands made of multiple closely spaced electron levels. To facilitate the arduous task of designing miniband-to-miniband quantum cascade lasers, we developed a program that works in conjunction with our existing modeling software to completely automate the design process. Laser designs were grown, characterized, and iterated. The details of the automated design program and the measurement results are summarized in this report.
The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of performance-shaping factors (PSFs) that might affect soldier performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate each would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.
The present study examines the strain-rate sensitivity of four high-strength, high-toughness alloys at strain rates ranging from 0.0002 s-1 to 200 s-1: Aermet 100, a modified 4340, a modified HP9-4-20, and a recently developed Eglin AFB steel alloy, ES-1c. A refined dynamic servohydraulic method was used to perform tensile tests over this entire range. Each of these alloys exhibits only modest strain-rate sensitivity. Specifically, the strain-rate sensitivity exponent, m, is found to be in the range of 0.004-0.007 depending on the alloy. This corresponds to an increase of up to {approx}10% in the yield strength over the six orders of magnitude change in strain rate. Interestingly, while three of the alloys showed a concomitant {approx}3-10% drop in their ductility with increasing strain rate, the ES-1c alloy actually exhibited a 25% increase in ductility with increasing strain rate. Fractography suggests the possibility that at higher strain rates ES-1c evolves toward a more ductile dimple fracture mode associated with microvoid coalescence.
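The strengthening implied by these m values is easy to check: for yield stress scaling as (strain rate)^m, the increase across the tested range depends only on the rate ratio and m.

```python
# Check of the quoted strain-rate sensitivity: for yield stress
# proportional to (strain rate)**m, the strengthening across the tested
# range 0.0002 to 200 1/s depends only on the rate ratio and m.
rate_ratio = 200.0 / 0.0002                  # six orders of magnitude
for m in (0.004, 0.007):
    increase = 100.0 * (rate_ratio**m - 1.0)
    print(f"m = {m}: {increase:.1f}% yield-strength increase")
```

For m between 0.004 and 0.007 this gives roughly a 6-10% increase, consistent with the {approx}10% quoted for the most rate-sensitive alloy.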
Limitations on focused scene size for the Polar Format Algorithm (PFA) for Synthetic Aperture Radar (SAR) image formation are derived. A post-processing filtering technique for compensating the spatially variant blurring in the image is examined, and modifications to enhance its robustness are proposed.
Hydrogen getters were tested for use in storage of plutonium-bearing materials in accordance with DOE's Criteria for Interim Safe Storage of Plutonium Bearing Materials. The hydrogen getter HITOP was aged for 3 months at 70 C and tested under both recombination and hydrogenation conditions at 20 and 70 C; partially saturated and irradiated aged getter samples were also tested. The recombination reaction was found to be very fast and well above the required rate of 45 std. cc H2/h. The gettering reaction, which is planned as the backup reaction in this deployment, is slower and may not meet the requirements alone. Pressure-drop measurements and {sup 1}H NMR analyses support these conclusions. Although the experimental conditions do not exactly replicate the deployment conditions, the results of our conservative experiments are clear: the aged getter shows sufficient reactivity to maintain hydrogen concentrations below the flammability limit, between the minimum and maximum deployment temperatures, for three months. The flammability risk is further reduced by the removal of oxygen through the recombination reaction. Neither radiation exposure nor thermal aging degrades the getter sufficiently to be a concern. Testing to evaluate performance for longer aging periods is in progress.
The ''Design and Manufacturing of Complex Optics'' LDRD sought to develop new advanced methods for the design and manufacturing of very complex optical systems. The project team developed methods for including manufacturability into optical designs and also researched extensions of manufacturing techniques to meet the challenging needs of aspherical, 3D, multi-level lenslet arrays on non-planar surfaces. In order to confirm the applicability of the developed techniques, the team chose the Dragonfly Eye optic as a testbed. This optic has arrays of aspherical micro-lenslets on both the exterior and the interior of a 4mm diameter hemispherical shell. Manufacturing of the dragonfly eye required new methods of plunge milling aspherical optics and the development of a method to create the milling tools using focused ion beam milling. The team showed the ability to create aspherical concave milling tools which will have great significance to the optical industry. A prototype dragonfly eye exterior was created during the research, and the methods of including manufacturability in the optical design process were shown to be successful as well.
In this report we present a model to explain the size-dependent shapes of lead nanoprecipitates in aluminum. Size-dependent shape transitions, frequently observed at nanometer length scales, are commonly attributed to edge-energy effects. This report resolves an ambiguity in the definition and calculation of edge energies and presents an atomistic calculation of edge energies for free clusters. We also present a theory for the size-dependent shapes of Pb nanoprecipitates in Al, introducing the concept of ''magic shapes'': precipitate shapes having near-zero elastic strains when inserted into similarly shaped voids in the Al matrix. An algorithm for constructing a complete set of magic shapes is presented. The experimental observations are explained by elastic strain energies and interfacial energies; edge energies play a negligible role. We replicate the experimental observations by selecting precipitates having magic shapes and interfacial energies less than a cutoff value.
Effective elastic properties for carbon nanotube-reinforced composites are obtained through a variety of micromechanics techniques. Using the in-plane elastic properties of graphene, the effective properties of carbon nanotubes are calculated via a composite cylinders micromechanics technique as the first step in a two-step process. These effective properties are then used in the self-consistent and Mori-Tanaka methods to obtain effective elastic properties of composites consisting of aligned single- or multi-walled carbon nanotubes embedded in a polymer matrix. Effective composite properties from these averaging methods are compared to a direct composite cylinders approach extended from the work of Hashin and Rosen (1964) and Christensen and Lo (1979). Comparisons with finite element simulations are also performed. The effects of an interphase layer between the nanotubes and the polymer matrix, as a result of functionalization, are also investigated using a multi-layer composite cylinders approach. Finally, the clustering of nanotubes into bundles due to interatomic forces is modeled using a tessellation method in conjunction with a multi-phase Mori-Tanaka technique. In addition to aligned-nanotube composites, the effective elastic properties of nanotubes randomly dispersed in a matrix are modeled using the Mori-Tanaka method, and comparisons with experimental data are made. Computational micromechanical analysis of high-stiffness hollow-fiber nanocomposites is performed using the finite element method. The high-stiffness hollow fibers are modeled either directly as isotropic hollow tubes or as equivalent transversely isotropic effective solid cylinders with properties computed using a micromechanics-based composite cylinders method.
Using a representative volume element for clustered high-stiffness hollow fibers embedded in a compliant matrix with the appropriate periodic boundary conditions, the effective elastic properties are obtained from the finite element results. These effective elastic properties are compared to approximate analytical results found using micromechanics methods. The effects of an interphase layer between the high-stiffness hollow fibers and the matrix, used to simulate imperfect load transfer and/or functionalization of the hollow fibers, are also investigated and compared to a multi-layer composite cylinders approach. Finally, the combined effects of clustering with fiber-matrix interphase regions are studied. The parametric studies performed herein were motivated by, and used properties for, single-walled carbon nanotubes embedded in an epoxy matrix, and as such are intended to serve as a guide for continuum-level representations of such nanocomposites in a multi-scale modeling approach.
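The composite cylinders and Mori-Tanaka estimates themselves are involved, but any effective axial modulus they produce must fall between the elementary Voigt and Reuss bounds, which make a quick sanity check. The property values below are illustrative assumptions, not those used in the study:

```python
# Not the report's composite-cylinders or Mori-Tanaka solution: just the
# elementary Voigt/Reuss bounds that bracket any effective axial modulus.
# Property values are illustrative assumptions.
E_f, V_f = 1000.0, 0.05     # fiber effective modulus (GPa), volume fraction
E_m = 3.0                   # compliant (epoxy-like) matrix modulus (GPa)

E_voigt = V_f * E_f + (1 - V_f) * E_m            # upper bound (iso-strain)
E_reuss = 1.0 / (V_f / E_f + (1 - V_f) / E_m)    # lower bound (iso-stress)
print(f"{E_reuss:.2f} GPa <= E_effective <= {E_voigt:.2f} GPa")
```

Note how wide the bracket is at low volume fractions of very stiff fibers; this is why the more refined micromechanics methods compared in the report are needed.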
Technical assessment and remodeling of existing data indicates that the Richton salt dome, located in southeastern Mississippi, appears to be a suitable site for expansion of the U.S. Strategic Petroleum Reserve. The maximum area of salt is approximately 7 square miles, at a subsurface elevation of about -2000 ft, near the top of the salt stock. Approximately 5.8 square miles of this appears suitable for cavern development, because of restrictions imposed by modeled shallow salt overhang along several sides of the dome. The detailed geometry of the overhang currently is only poorly understood. However, the large areal extent of the Richton salt mass suggests that significant design flexibility exists for a 160-million-barrel storage facility consisting of 16 ten-million-barrel caverns. The dome itself is prominently elongated from northwest to southeast. The salt stock appears to consist of two major spine features, separated by a likely boundary shear zone trending from southwest to northeast. The dome decreases in areal extent with depth, because of salt flanks that appear to dip inward at 70-80 degrees. Caprock is present at depths as shallow as 274 ft, and the shallowest salt is documented at -425 ft. A large number of existing two-dimensional seismic profiles have been acquired crossing, and in the vicinity of, the Richton salt dome. At least selected seismic profiles should be acquired, examined, potentially reprocessed, and interpreted in an effort to understand the limitations imposed by the apparent salt overhang, should the Richton site be selected for actual expansion of the Reserve.
This report summarizes the results of an effort to establish a framework for assigning and communicating technology readiness levels (TRLs) for the modeling and simulation (ModSim) capabilities at Sandia National Laboratories. This effort was undertaken as a special assignment for the Weapon Simulation and Computing (WSC) program office led by Art Hale, and lasted from January to September 2006. This report summarizes the results, conclusions, and recommendations, and is intended to help guide the program office in their decisions about the future direction of this work. The work was broken out into several distinct phases, starting with establishing the scope and definition of the assignment. These are characterized in a set of key assertions provided in the body of this report. Fundamentally, the assignment involved establishing an intellectual framework for TRL assignments to Sandia's modeling and simulation capabilities, including the development and testing of a process to conduct the assignments. To that end, we proposed a methodology for both assigning and understanding the TRLs, and outlined some of the restrictions that need to be placed on this process and the expected use of the result. One of the first assumptions we overturned was the notion of a ''static'' TRL--rather we concluded that problem context was essential in any TRL assignment, and that leads to dynamic results (i.e., a ModSim tool's readiness level depends on how it is used, and by whom). While we leveraged the classic TRL results from NASA, DoD, and Sandia's NW program, we came up with a substantially revised version of the TRL definitions, maintaining consistency with the classic level definitions and the Predictive Capability Maturity Model (PCMM) approach. In fact, we substantially leveraged the foundation the PCMM team provided, and augmented that as needed. 
Given the modeling and simulation TRL definitions and our proposed assignment methodology, we conducted four ''field trials'' to examine how this would work in practice. The results varied substantially, but did indicate that establishing the capability dependencies and making the TRL assignments was manageable and not particularly time consuming. The key differences arose in perceptions of how this information might be used, and what value it would have (opinions ranged from negative to positive value). The use cases and field trial results are included in this report. Taken together, the results suggest that we can make reasonably reliable TRL assignments, but that using those without the context of the information that led to those results (i.e., examining the measures suggested by the PCMM table, and extended for ModSim TRL purposes) produces an oversimplified result--that is, you cannot really boil things down to just a scalar value without losing critical information.
Experimental data for material plasticity and failure model calibration and validation were obtained from 304L stainless steel. Model calibration data were taken from smooth tension, notched tension, and compression tests. Model validation data were provided from experiments using thin-walled tube specimens subjected to path dependent combinations of internal pressure, extension, and torsion.
Sandia National Laboratories has developed high-energy all-solid-state UV sources for use in laboratory tests of the feasibility of satellite-based ozone DIAL. These sources generate 320 nm light by sum-frequency mixing the 532 nm second harmonic of an Nd:YAG laser with the 803 nm signal light derived from a self-injection-seeded image-rotating optical parametric oscillator (OPO). The OPO cavity utilizes the RISTRA geometry, denoting rotated-image singly-resonant twisted rectangle. Two configurations were developed, one using extra-cavity sum-frequency mixing, where the sum-frequency-generation (SFG) crystal is outside the OPO cavity, and the other intra-cavity mixing, where the SFG crystal is placed inside the OPO cavity. Our goal was to obtain 200 mJ, 10 ns duration, 320 nm pulses at 10 Hz with near-IR to UV (1064 nm to 320 nm) optical conversion efficiency of 25%. To date we've obtained 190 mJ at 320 nm using extra-cavity SFG with 21% efficiency, and > 140 mJ by intra-cavity SFG with efficiency approaching 24%. While these results are encouraging, we've determined our conversion efficiency can be enhanced by replacing self-seeding at the signal wavelength of 803 nm with pulsed idler seeding at 1576 nm. By switching to idler seeding and increasing the OPO cavity dimensions to accommodate flat-top beams with diameters up to 10 mm, we expect to generate UV energies approaching 300 mJ with optical conversion efficiency approaching 25%. While our technology was originally designed to obtain high pulse energies, it can also be used to generate low-energy UV pulses with high efficiency. Numerical simulations using an idler-seeded intra-cavity SFG RISTRA OPO scaled to half its nominal dimensions yielded 560 μJ of 320 nm light from 2 mJ of 532 nm pump using an idler-seed energy of 100 μJ.
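The wavelengths quoted above are tied together by photon-energy conservation (inverse wavelengths add in sum-frequency generation and split in an OPO), which is easy to verify:

```python
# Energy conservation for the three-wave processes described above:
# photon energies, and hence inverse wavelengths, add.
def sfg_wavelength(lam1_nm, lam2_nm):
    """Sum-frequency output wavelength for two input wavelengths (nm)."""
    return 1.0 / (1.0 / lam1_nm + 1.0 / lam2_nm)

def opo_idler(pump_nm, signal_nm):
    """OPO idler wavelength from 1/pump = 1/signal + 1/idler."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

print(sfg_wavelength(532.0, 803.0))   # ~320 nm, the UV output
print(opo_idler(532.0, 803.0))        # ~1576 nm, the idler used for seeding
```

Mixing the 532 nm pump with the 803 nm signal indeed lands at 320 nm, and the corresponding idler falls at the 1576 nm seeding wavelength mentioned above.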
This paper reports on a novel approach to atmospheric cloud segmentation from a space-based multi-spectral pushbroom satellite system. The satellite collects 15 spectral bands ranging from the visible, 0.45 μm, to the long-wave infrared (IR), 10.7 μm. The images are radiometrically calibrated and have ground sample distances (GSD) of 5 meters for the visible to very-near-IR bands and a GSD of 20 meters for the near-IR to long-wave-IR bands. The algorithm is a hybrid classification system in the sense that supervised and unsupervised networks are used in conjunction. For performance evaluation, a series of numerical comparisons to human-derived cloud borders was performed. A set of 33 scenes was selected to represent various climate zones with different land cover from around the world. The algorithm consists of the following steps. Band separation is performed to find the band combinations that form significant separation between cloud and background classes. The candidate bands are fed into a K-Means clustering algorithm to identify areas in the image with similar centroids. Each cluster is then compared to the cloud and background prototypes using the Jeffries-Matusita distance, and each unknown cluster is assigned to the prototype at minimum distance. A classification rate of 88% was found when using one short-wave IR band and one mid-wave IR band. Past investigators have reported segmentation accuracies ranging from 67% to 80%, many of which require human intervention. A sensitivity of 75% and specificity of 90% were reported as well.
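The final assignment step can be sketched with the Jeffries-Matusita (JM) distance between one-dimensional Gaussian class models; the radiance statistics below are illustrative assumptions, not values from the paper:

```python
import math

# Sketch of the cluster-to-prototype assignment step using the
# Jeffries-Matusita (JM) distance between 1-D Gaussian class models.
# The radiance statistics below are illustrative, not from the paper.
def jm_distance(m1, s1, m2, s2):
    """JM distance (range 0..2) via the Bhattacharyya distance B."""
    s2avg = 0.5 * (s1**2 + s2**2)
    B = (m1 - m2)**2 / (8.0 * s2avg) + 0.5 * math.log(s2avg / (s1 * s2))
    return 2.0 * (1.0 - math.exp(-B))

cloud = (0.8, 0.1)       # assumed (mean, std) cloud radiance in some band
ground = (0.3, 0.15)     # assumed background statistics
cluster = (0.75, 0.12)   # an unknown K-Means cluster (centroid, spread)

d_cloud = jm_distance(*cluster, *cloud)
d_ground = jm_distance(*cluster, *ground)
label = "cloud" if d_cloud < d_ground else "background"
print(label)             # the cluster is assigned to the nearer prototype
```

The JM distance saturates at 2 for fully separable classes, which is what makes it a convenient band-separability measure in the earlier band-selection step as well.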
Inorganic nanoclusters dispersed in organic matrices are important to a number of emerging technologies. However, obtaining useful properties from such organic-inorganic composites often requires high concentrations of well-dispersed nanoclusters. Achieving this requires that the chemistry of the particle surface and the matrix be closely matched, on the premise of minimizing the interfacial free energy: an excess of free energy will cause phase separation and ultimately aggregation. Thus, the optimal system is one in which the nanoclusters are stabilized by the same molecules that make up the encapsulant. Yet the organic matrix is typically chosen for its bulk properties, and therefore may not be amenable to chemical modification. Also, the organic-inorganic interface is often critical to establishing and maintaining the desired nanocluster (and hence composite) properties, placing further constraints on proposed chemical modification. For these reasons we have adopted amine-functionalized trimethoxysilanes (ormosils) as an optical-grade encapsulant. In this work, we demonstrate that ormosils can produce beneficial optical effects derived from interfacial phenomena, which can be maintained throughout the encapsulation process.
A series of field tests sponsored by Sandia National Laboratories has simultaneously demonstrated the hard-rock drilling performance of different industry-supplied drag bits as well as Sandia's new Diagnostics-While-Drilling (DWD) system, which features a novel downhole tool that monitors dynamic conditions in close proximity to the bit. Drilling with both conventional and advanced ("best effort") drag bits was conducted at the GTI Catoosa Test Facility (near Tulsa, OK) in a well-characterized lithologic column that features an extended hard-rock interval of Mississippi limestone above a layer of highly abrasive Misener sandstone and an underlying section of hard Arbuckle dolomite. Output from the DWD system was closely observed during drilling and was used to make real-time decisions for adjusting the drilling parameters. This paper summarizes penetration rate and damage results for the various drag bits, shows representative DWD display data, and illustrates the application of these data for optimizing drilling performance and avoiding trouble.
Many MEMS devices are based on polysilicon because of the current availability of surface micromachining technology. However, polysilicon is not the best choice for devices subject to extensive sliding and/or thermal fields, owing to its chemical, mechanical, and tribological properties. In this work, we investigated the mechanical properties of three new materials for MEMS/NEMS devices: silicon carbide (SiC) from Case Western Reserve University (CWRU), ultrananocrystalline diamond (UNCD) from Argonne National Laboratory (ANL), and hydrogen-free tetrahedral amorphous carbon (ta-C) from Sandia National Laboratories (SNL). Young's modulus, characteristic strength, fracture toughness, and theoretical strength were measured for these three materials using a single testing methodology, the Membrane Deflection Experiment (MDE) developed at Northwestern University. The measured values of Young's modulus were 430 GPa, 960 GPa, and 800 GPa for SiC, UNCD, and ta-C, respectively. Fracture toughness measurements yielded values of 3.2, 4.5, and 6.2 MPa·m{sup 1/2}, respectively. The strengths were found to follow a Weibull distribution, but their scaling was found to be controlled by different specimen size parameters; therefore, a cross comparison of the strengths is not fully meaningful. We instead propose to compare their theoretical strengths as determined by employing the Novozhilov fracture criterion. The estimated theoretical strength is 10.6 GPa at a characteristic length of 58 nm for SiC, 18.6 GPa at a characteristic length of 37 nm for UNCD, and 25.4 GPa at a characteristic length of 38 nm for ta-C. The techniques used to obtain these results, as well as microscopic fractographic analyses, are summarized in the article. We also highlight the importance of characterizing the mechanical properties of MEMS materials by means of a single simple and accurate experimental technique.
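The size-scaling caveat above is the standard Weibull weakest-link effect, and can be illustrated with assumed parameters (not the fitted values for these films):

```python
import math

# Weibull weakest-link illustration: failure probability depends on the
# stressed specimen size, so strengths measured on different geometries
# cannot be compared directly. sigma0 and m_w are illustrative assumptions.
def failure_prob(sigma, sigma0, m_w, size_ratio=1.0):
    """Weibull failure probability; size_ratio scales the stressed volume/area."""
    return 1.0 - math.exp(-size_ratio * (sigma / sigma0)**m_w)

sigma0, m_w = 4.0, 10.0   # assumed characteristic strength (GPa) and modulus
p_small = failure_prob(3.5, sigma0, m_w, size_ratio=1.0)
p_large = failure_prob(3.5, sigma0, m_w, size_ratio=10.0)
print(p_small, p_large)   # larger stressed size -> higher failure probability
```

At the same applied stress, a tenfold larger stressed region fails far more often, which is why the article argues for comparing theoretical strengths rather than raw Weibull strengths across materials.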
Experimental modal analysis (EMA) was carried out on a micro-machined acceleration switch to characterize the motions of the device as fabricated and to compare them with analytical results for the nominal design. Finite element analysis (FEA) of the nominal design was used for this comparison. The acceleration switch was a single-crystal silicon disc supported by four fork-shaped springs. We shook the base of the die with step-sine excitation. A Laser Doppler Velocimeter (LDV) in conjunction with a microscope was used to measure the velocities of the die at several points. The first three modes of the structure were identified. The fundamental natural frequency measured in this experiment gives an estimate of the actuation g-level for the specified stroke. The fundamental resonance and actuation g-level results from the EMA and the FEA showed large variations. The discrepancy prompted thorough dimensional measurement of the acceleration switch, which revealed differences between the nominal design and the tested component.
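The estimate mentioned above, actuation g-level from the fundamental frequency and the specified stroke, follows from a linear spring-mass idealization, a = (2*pi*f)^2 * x; the frequency and stroke below are illustrative assumptions, not the tested device's values:

```python
import math

# For a linear spring-mass switch, reaching a stroke x quasi-statically
# requires acceleration a = (2*pi*f)^2 * x, where f is the fundamental
# natural frequency. Values are illustrative, not the tested device's.
f_hz = 2000.0            # assumed fundamental natural frequency (Hz)
stroke_m = 10e-6         # assumed stroke (m)
g0 = 9.81                # standard gravity (m/s^2)

a_g = (2.0 * math.pi * f_hz)**2 * stroke_m / g0
print(f"actuation level: {a_g:.0f} g")
```

Because the required acceleration grows with the square of the frequency, even modest dimensional deviations that shift the fundamental mode translate into large shifts in actuation g-level, consistent with the discrepancy the dimensional measurements uncovered.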
Structural assemblies often include bolted connections that are a primary mechanism for energy dissipation and nonlinear response at elevated load levels. Typically these connections are idealized within a structural dynamics finite element model as linear elastic springs. The spring stiffness is generally tuned to reproduce modal test data taken on a prototype. In conventional practice, modal test data is also used to estimate nominal values of modal damping that could be used in applications with load amplitudes comparable to those employed in the modal tests. Although this simplification of joint mechanics provides a convenient modeling approach with the advantages of reduced complexity and solution requirements, it often leads to poor predicted responses for load regimes associated with nonlinear system behavior. In this document we present an alternative approach using the concept of a "whole-joint" or "whole-interface" model [1]. We discuss the nature of the constitutive model, the manner in which model parameters are deduced, and comparison of structural dynamic prediction with results for experimental hardware subjected to a series of transient excitations beginning at low levels and increasing to levels that produced macro-slip in the joint. Further comparison is performed with a traditional "tuned" linear model. The ability of the whole-interface model to predict the onset of macro-slip as well as the vast improvement of the response levels in relation to those given by the linear model is made evident. Additionally, comparison between prediction and high amplitude experiments suggests areas for further work.
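The whole-interface constitutive model of [1] is richer than this, but its essential behavior, linear response below a slip force and macro-slip at that force, can be sketched with a single Jenkins element (a spring in series with a Coulomb slider); the parameters are illustrative assumptions:

```python
# Toy Jenkins element, the simplest building block of joint constitutive
# models: a spring of stiffness k in series with a Coulomb slider of
# strength f_s. Below f_s the joint is linear; at f_s it macro-slips.
# k and f_s are illustrative, not the report's fitted parameters.
def jenkins_force(u, state, k=1.0e6, f_s=100.0):
    """Return joint force at displacement u, updating the slider position."""
    f_trial = k * (u - state["slip"])
    if abs(f_trial) > f_s:                    # macro-slip: slider moves
        sign = 1.0 if f_trial > 0 else -1.0
        state["slip"] = u - sign * f_s / k
        return sign * f_s
    return f_trial

state = {"slip": 0.0}
forces = [jenkins_force(u, state) for u in (0.00005, 0.0001, 0.0002, 0.0004)]
print(forces)            # the force saturates at f_s once macro-slip begins
```

A tuned linear spring would keep increasing the force past the slip threshold, which is exactly the regime where the linear idealization criticized above breaks down.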
This paper addresses the coupling of experimental and finite element models of substructures. In creating the experimental model, difficulties exist in applying moments and estimating the resulting rotations at the connection point between the experimental and finite element models. In this work, a simple test fixture for applying moments and estimating rotations is used to estimate these quantities more accurately. The test fixture is analytically "subtracted" from the model using the admittance approach. Inherent in this process is the inversion of frequency response function matrices, which can amplify the uncertainty in the measured data. Presented here is the work applied to a two-component beam model, along with analyses that attempt to identify and quantify some of these uncertainties. The admittance model of one beam component was generated experimentally using the moment-rotation fixture, and the other from a detailed finite element model. During analytical testing of the admittance modeling algorithm, it was discovered that the component admittance models generated by finite elements were ill-conditioned due to the inherent physics.
In order to create an analytical model of a material or structure, two sets of experiments must be performed: calibration and validation. Calibration experiments provide the analyst with the parameters from which to build a model that encompasses the behavior of the material. Once the model is calibrated, the analytical results must be compared with a different, independent set of experiments, referred to as the validation experiments. This modeling procedure was performed for a crushable honeycomb material, with the validation experiments presented here. This paper covers the design of the validation experiments, the analysis of the resulting data, and the metric used for model validation.
Processing-in-Memory (PIM) technology encompasses a range of research leveraging a tight coupling of memory and processing. The most unique features of the technology are extremely wide paths to memory, extremely low memory latency, and wide functional units. Many PIM researchers are also exploring extremely fine-grained multi-threading capabilities. This paper explores a mechanism for leveraging these features of PIM technology to enhance commodity architectures in a seemingly mundane way: accelerating MPI. Modern network interfaces leverage simple processors to offload portions of the MPI semantics, particularly the management of posted receive and unexpected message queues. Without adding cost or increasing clock frequency, using PIMs in the network interface can enhance performance. The results are a significant decrease in latency and increase in small message bandwidth, particularly when long queues are present.
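The queue management being offloaded can be pictured with a minimal sketch. This is a hypothetical toy, not the paper's implementation: real MPI matching also handles wildcards, communicators, and strict ordering semantics, all omitted here.

```python
from collections import deque

# Toy sketch of the matching logic a NIC processor offloads:
# incoming messages are matched against posted receives; unmatched
# messages park in the unexpected queue (source/tag only).
posted = deque()       # (source, tag) receives posted by the application
unexpected = deque()   # (source, tag) messages that arrived early

def post_receive(src, tag):
    # A new receive first searches the unexpected queue.
    for i, (s, t) in enumerate(unexpected):
        if (s, t) == (src, tag):
            del unexpected[i]
            return "matched-unexpected"
    posted.append((src, tag))
    return "queued"

def incoming(src, tag):
    # An arriving message first searches the posted-receive queue.
    for i, (s, t) in enumerate(posted):
        if (s, t) == (src, tag):
            del posted[i]
            return "matched-posted"
    unexpected.append((src, tag))
    return "queued-unexpected"
```

The walks over these queues are exactly what grows expensive when long queues are present, which is where the wide, low-latency PIM memory paths help.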
The processes and functional constituents of biological photosynthetic systems can be mimicked to produce a variety of functional nanostructures and nanodevices. The photosynthetic nanostructures produced are analogs of the naturally occurring photosynthetic systems and are composed of biomimetic compounds (e.g., porphyrins). For example, photocatalytic nanotubes can be made by ionic self-assembly of two oppositely charged porphyrin tectons [1]. These nanotubes mimic the light-harvesting and photosynthetic functions of biological systems like the chlorosomal rods and reaction centers of green sulfur bacteria. In addition, metal-composite nanodevices can be made by using the photocatalytic activity of the nanotubes to reduce aqueous metal salts to metal atoms, which are subsequently deposited onto tube surfaces [2]. In another approach, spatial localization of photocatalytic porphyrins within templating surfactant assemblies leads to controlled growth of novel dendritic metal nanostructures [3].
Conference Proceedings of the Society for Experimental Mechanics Series
Hasselman, Timothy; Wathugala, G.W.; Urbina, Angel; Paez, Thomas L.
Mechanical systems behave randomly and it is desirable to capture this feature when making response predictions. Currently, there is an effort to develop predictive mathematical models and test their validity through the assessment of their predictive accuracy relative to experimental results. Traditionally, the approach to quantify modeling uncertainty is to examine the uncertainty associated with each of the critical model parameters and to propagate this through the model to obtain an estimate of uncertainty in model predictions. This approach is referred to as the "bottom-up" approach. However, parametric uncertainty does not account for all sources of the differences between model predictions and experimental observations, such as model form uncertainty and experimental uncertainty due to the variability of test conditions, measurements and data processing. Uncertainty quantification (UQ) based directly on the differences between model predictions and experimental data is referred to as the "top-down" approach. This paper discusses both the top-down and bottom-up approaches and uses the respective stochastic models to assess the validity of a joint model with respect to experimental data not used to calibrate the model, i.e. random vibration versus sine test data. Practical examples based on joint modeling and testing performed by Sandia are presented and conclusions are drawn as to the pros and cons of each approach.
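The two routes can be contrasted with a toy numerical sketch. The single-degree-of-freedom model, the parameter distributions, and the residuals below are illustrative assumptions only, not the joint models or Sandia test data discussed in the paper.

```python
import numpy as np

# Bottom-up: propagate assumed parameter uncertainty through a toy model
# (an oscillator whose stiffness k and damping c are the uncertain
# "joint" parameters -- purely illustrative).
def peak_response(k, c, f0=1.0, m=1.0):
    # Steady-state amplitude at resonance of m x'' + c x' + k x = f0 sin(wt):
    # amplitude = f0 / (c * wn) at w = wn = sqrt(k/m).
    wn = np.sqrt(k / m)
    return f0 / (c * wn)

rng = np.random.default_rng(0)
k = rng.normal(1.0e4, 5.0e2, 10_000)   # assumed stiffness distribution
c = rng.normal(2.0, 0.2, 10_000)       # assumed damping distribution
bottom_up = peak_response(k, c)        # distribution of predicted response

# Top-down: characterize uncertainty directly from prediction-vs-test
# residuals (values fabricated for illustration).
residuals = np.array([0.02, -0.05, 0.01, 0.08, -0.03])
top_down_bias, top_down_scatter = residuals.mean(), residuals.std(ddof=1)
```

The bottom-up spread reflects only the assumed parameter variability, while the top-down statistics absorb model-form and experimental uncertainty as well, which is the distinction the paper exercises.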
Achieving good scalability for large simulations based on structured adaptive mesh refinement is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Domain-based partitioners serve as a foundation for techniques designed to improve scalability, and they have traditionally been designed on the assumption that the computational flow among grid patches at different refinement levels is independent. But this assumption does not hold in practice, and hence the effectiveness of these techniques is significantly impaired. This paper introduces a partitioning method designed without this false independence assumption. The method is tested on four different applications exhibiting different behaviors. The results show that synchronization costs can on average be reduced by 75 percent. The conclusion is that the method is suitable as a foundation for general hierarchical methods designed to improve the scalability of structured adaptive mesh refinement applications.
This paper is about making reversible logic a reality for supercomputing. Reversible logic offers a way to exceed certain basic limits on the performance of computers, yet a powerful case will have to be made to justify its substantial development expense. This paper explores the limits of current, irreversible logic for supercomputers, thus forming a threshold above which reversible logic is the only solution. Problems above this threshold are discussed, with the science and mitigation of global warming being discussed in detail. To further develop the idea of using reversible logic in supercomputing, a design for a 1 Zettaflops supercomputer as required for addressing global climate warming is presented. However, to create such a design requires deviations from the mainstream of both the software for climate simulation and research directions of reversible logic. These deviations provide direction on how to make reversible logic practical. Copyright 2005 ACM.
43rd AIAA Aerospace Sciences Meeting and Exhibit - Meeting Papers
Barone, Matthew F.; Roy, Christopher J.
Simulations of a low-speed square cylinder wake and a supersonic axisymmetric base wake are performed using the Detached Eddy Simulation (DES) model. A reduced-dissipation form of the Symmetric TVD scheme is employed to mitigate the effects of dissipative error in regions of smooth flow. The reduced-dissipation scheme is demonstrated on a 2D square cylinder wake problem, showing a dramatic increase in accuracy for a given grid resolution. The results for simulations on three grids of increasing resolution for the 3D square cylinder wake are compared to experimental data and to other LES and DES studies. The comparisons of mean flow and global mean flow quantities to experimental data are favorable, while the results for second order statistics in the wake are mixed and do not always improve with increasing spatial resolution. Comparisons to LES studies are also generally favorable, suggesting DES provides an adequate subgrid scale model. Predictions of base drag and centerline wake velocity for the supersonic wake are also good, given sufficient grid refinement. These cases add to the validation library for DES and support its use as an engineering analysis tool for accurate prediction of global flow quantities and mean flow properties.
In modal testing, the most popular tools for exciting a structure are hammers and shakers. This paper reviews the applications for which shakers have an advantage. In addition, the advantages and disadvantages of different forcing inputs (e.g. sinusoidal, random, burst random and chirp) that can be applied with a shaker are noted. Special considerations are reported for the fixtures required for shaker testing (blocks, force gages, stingers) to obtain satisfactory results. Various problems that the author has encountered during single- and multi-shaker modal tests are described with their solutions.
This paper provides an overview of several approaches to formulating and solving optimization under uncertainty (OUU) engineering design problems. In addition, the topic of high-performance computing and OUU is addressed, with a discussion of the coarse- and fine-grained parallel computing opportunities in the various OUU problem formulations. The OUU approaches covered here are: sampling-based OUU, surrogate model-based OUU, analytic reliability-based OUU (also known as reliability-based design optimization), polynomial chaos-based OUU, and stochastic perturbation-based OUU.
Latin Hypercube Sampling (LHS) is widely used as a sampling-based method for probabilistic calculations. This method has some clear advantages over classical random sampling (RS) that derive from its efficient stratification properties. However, one of its limitations is that it is not possible to extend the size of an initial sample by simply adding new simulations, as this will lead to a loss of the efficient stratification associated with LHS. We describe a new method to extend the size of an LHS to n (n >= 2) times its original size while preserving both the LHS structure and any induced correlations between the input parameters. This method involves introducing a refined grid for the original sample and then filling in empty rows and columns with new data in a way that conserves both the LHS structure and any induced correlations. An estimate of the bounds of the resulting correlation between two variables is derived for n = 2. This result shows that the final correlation is close to the average of the correlations from the original sample and the new sample used in the infilling of the empty rows and columns indicated above.
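The refinement-and-infill step for n = 2 can be sketched in a few lines. This toy preserves only the stratification (one point per refined stratum in each variable); the correlation-preserving assignment of the new values is the authors' contribution and is not reproduced here.

```python
import numpy as np

def double_lhs(sample, rng=None):
    """Extend an m-point Latin hypercube sample on [0,1)^d to 2m points.

    Each variable's grid is refined from m to 2m strata; the m strata
    left empty by the original points are then filled with new values
    (illustrative sketch; induced-correlation control is omitted).
    """
    rng = np.random.default_rng(rng)
    m, d = sample.shape
    new_cols = []
    for j in range(d):
        occupied = np.floor(sample[:, j] * 2 * m).astype(int)  # refined strata hit by old points
        empty = np.setdiff1d(np.arange(2 * m), occupied)       # exactly m strata remain empty
        rng.shuffle(empty)
        # one uniform draw inside each empty refined stratum
        new_cols.append((empty + rng.random(m)) / (2 * m))
    return np.vstack([sample, np.column_stack(new_cols)])
```

Because each original stratum of width 1/m contained exactly one point, refinement leaves one of its two substrata empty per variable, so the combined 2m points again form a valid Latin hypercube on the refined grid.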
Chemiresistor microsensors have been developed to provide continuous in-situ detection of volatile organic compounds (VOCs). The chemiresistor sensor is packaged in a rugged, waterproof housing that allows the device to detect VOCs in air, soil, and water. Preconcentrators are also being developed to enhance the sensitivity of the chemiresistor sensor. The "micro-hotplate" preconcentrator is placed face-to-face against the array of chemiresistors inside the package. At prescribed intervals, the preconcentrator is heated to desorb VOCs that have accumulated on the sorbent material on the one-micron-thick silicon-nitride membrane. The pulse of higher-than-ambient concentration of VOC vapor is then detected by the adjacent chemiresistors. The plume is allowed to diffuse out of the package through slots adjacent to the preconcentrator. The integrated chemiresistor/preconcentrator sensor has been tested in the laboratory to evaluate the impacts of sorbent materials, fabrication methods, and repeated heating cycles on the longevity and performance of the sensor. Calibration methods have also been developed, and field tests have been initiated. Copyright ASCE 2005.
Real-time water quality and chemical-specific sensors are becoming more commonplace in water distribution systems. The overall objective of the sensor network is to protect consumers from accidental and malevolent contamination events occurring within the distribution network. This objective can be quantified in several different ways, including: minimizing the amount of contaminated water consumed, minimizing the extent of the contamination within the network, minimizing the time to detection, etc. We examine the ability of a sensor network to meet these objectives as a function of both the detection limit of the sensors and the number of sensors in the network. A moderately sized network is used as an example, and sensors are placed randomly. The source term is a passive injection into a node, and the resulting concentration in the node is a function of the volumetric flow through that node. The concentration of the contaminant at the source node is averaged over all time steps during the injection period. For each combination of a number of sensors and a detection limit, the mean values of the different objectives across multiple random sensor placements are evaluated. Results of this analysis allow the tradeoff between the necessary detection limit in a sensor and the number of sensors to be evaluated. Results show that for the example problem examined here, a sensor detection limit of 0.01 of the average source concentration is adequate for maximum protection. Copyright ASCE 2005.
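The evaluation loop over random placements can be illustrated with a tiny fabricated network. The arrival times, concentrations, and penalty value below are invented for the sketch and bear no relation to the paper's example network.

```python
import random

# Toy network: each node sees the contaminant at a known arrival time
# (hours) and concentration (fraction of the source concentration).
arrival = {0: 1, 1: 2, 2: 3, 3: 5, 4: 8}
concentration = {0: 1.0, 1: 0.4, 2: 0.1, 3: 0.02, 4: 0.005}

def mean_time_to_detection(n_sensors, limit, trials=2000, seed=None):
    """Mean time-to-detection over random sensor placements.

    A sensor at a node detects only if the node concentration exceeds
    the detection limit; undetected trials incur a 24-hour penalty.
    """
    rng = random.Random(seed)
    nodes = list(arrival)
    total = 0.0
    for _ in range(trials):
        placed = rng.sample(nodes, n_sensors)
        hits = [arrival[n] for n in placed if concentration[n] >= limit]
        total += min(hits) if hits else 24.0
    return total / trials

for limit in (0.5, 0.05, 0.005):
    print(limit, round(mean_time_to_detection(3, limit, seed=0), 2))
```

Sweeping `n_sensors` against `limit` in this fashion traces out the detection-limit versus sensor-count tradeoff surface examined in the paper.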
We have developed and implemented a method which, given a three-dimensional object, can infer from topology the two-dimensional masks needed to produce that object with surface micromachining. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model. The 3D model is first separated into bodies that are non-intersecting, made from different materials, or linked only through a ground plane. Next, for each body, unique horizontal cross sections are located and arranged into a tree based on their topological relationship. A branch-wise search of the tree uncovers locations where deposition boundaries must lie and identifies candidate masks, creating a generic mask set for the 3D model. Finally, specific process requirements are considered that may constrain the generic mask set.
The effects of ozone (O3) on tin oxide growth rates from mixtures of monobutyltin trichloride (MBTC), O2 and H2O are reported. The results indicate that O3 increases the growth rate under kinetically controlled conditions (MBTC + O2, 25 torr), but under mass-transport control (200 torr and/or addition of H2O to the reactant gases), growth rates are either unaffected or decrease. Kinetic modeling of the gas-phase reactions suggests that O, H, and OH radicals react at the surface to increase the growth rate, but higher pressures reduce their concentrations via recombination. In addition, higher pressures result in increased concentrations of less reactive tin halides, which are decomposition products of MBTC. It appears that when H2O is a reactant, these radicals reduce the concentration of the tin oxide precursor (thought to be an MBTC-H2O complex), which significantly decreases the growth rate.
Proceedings of the Solar World Congress 2005: Bringing Water to the World, Including Proceedings of 34th ASES Annual Conference and Proceedings of 30th National Passive Solar Conference
Sattler, Allan R.; Hanley, Charles J.; Hightower, Michael M.; Andelman, Marc
Laboratory and field developments are underway to use solar energy to power a desalination technology - capacitive deionization - for water produced by remote Coal Bed Methane (CBM) natural gas wells. Due to the physical remoteness of many CBM wells throughout the Southwestern U.S., as shown in Figure 1, this approach may offer promise: not only is capacitive deionization effective in removing salt from CBM water, allowing the water to be utilized for various applications, but it also has potentially lower energy consumption than other technologies, such as reverse osmosis. This, coupled with the remoteness of thousands of these wells (Figure 1), makes them more feasible for use with photovoltaic (solar electric, PV) systems. Concurrent laboratory activities are providing information about the effectiveness and energy requirements of each technology under various produced-water qualities and water reuse applications, such as salinity concentrations and water flows. These parameters are being used to drive the design of integrated PV-powered treatment systems. Full-scale field implementations are planned, with data collection and analysis designed to optimize the system design for practical remote applications. Early laboratory studies of capacitive deionization have shown that, at common CBM salinity levels, the technology may require less energy, be less susceptible to fouling, and be more compact than equivalent reverse osmosis (RO) systems. The technology uses positively and negatively charged electrodes to attract charged ions in a liquid, such as dissolved salts, metals, and some organics, to the electrodes. This concentrates the ions at the electrodes and reduces the ion concentrations in the liquid. This paper discusses the results of these laboratory studies and extends these results to energy consumption and design considerations for field implementation of produced-water treatment using photovoltaic systems.
This paper discusses issues that arise in controlling high quality mechanical shock inputs for mock hardware in order to validate a model of a bolted connection. The dynamic response of some mechanical components is strongly dependent upon the behavior of their bolted connections. The bolted connections often provide the only structural load paths into the component and can be highly nonlinear. Accurate analytical modeling of bolted connections is critical to the prediction of component response to dynamic loadings. In particular, it is necessary to understand and correctly model the stiffness of the joint and the energy dissipation (damping) that is a nonlinear function of the forces acting on the joint. Frequency-rich shock inputs composed of several decayed sinusoid components were designed as model validation tests and applied to a test item using an electrodynamic shaker. The test item was designed to isolate the behavior of the joint of interest and responses were dependent on the properties of the joints. The nonlinear stiffness and damping properties of the test item under study presented a challenge in isolating behavior of the test hardware from the stiffness, damping and boundary conditions of the shaker. Techniques that yield data to provide a sound basis for model validation comparisons of the bolted joint model are described.
The research goal presented here is to model the electrical response of gold-plated copper electrical contacts exposed to a mixed flowing gas stream consisting of air containing 10 ppb H2S at 30°C and a relative humidity of 70%. This environment accelerates the attack normally observed in a light industrial environment (similar to, but less severe than, the Battelle class 2 environment). Corrosion rates were quantified by measuring the corrosion site density, size distribution, and the electrical resistance of a probe contact with the aged surface, as a function of exposure time. A pore corrosion numerical model was used to predict both the growth of copper sulfide corrosion product which blooms through defects in the gold layer and the resulting electrical contact resistance of the aged surface. Assumptions about the distribution of defects in the noble metal plating and the mechanism for how corrosion blooms affect electrical contact resistance were needed to close the numerical model. Comparisons are made to the experimentally observed corrosion-bloom number density, bloom size distribution, and the cumulative probability distribution of the electrical contact resistance. Experimentally, the bloom site density increases as a function of time, whereas the bloom size distribution remains relatively independent of time. These two effects are included in the numerical model by adding a corrosion initiation probability proportional to the surface area and a probability for bloom-growth extinction proportional to the bloom volume, due to Kirkendall voiding. The cumulative probability distribution of electrical resistance becomes skewed as exposure time increases. While the resistance increases as a function of time for a fraction of the bloom population, the median value remains relatively unchanged. In order to model this behavior, the resistance calculated for large blooms is heavily weighted by contributions from the halo region.
In light of difficulties in realizing a carbohydrate fuel cell that can run on animal or plant carbohydrates, a study was carried out to fabricate a membrane-separated, platinum cathode, enzyme anode fuel cell, and test it under both quiescent and flow-through conditions. Mediator loss to the flowing solution was the largest contributor to power loss. Use of the phenazine derivative mediators offered decent open circuit potentials for half cell and full cell performance, but suffered from quick loss to the solution, which hampered long-term operation. A means to stabilize the phenazine molecules to the electrode would need to be developed to extend the lifetime of the cell beyond its current level of a few hours. This is an abstract of a paper presented at the ACS Fuel Chemistry Meeting (Washington, DC, Fall 2005).
A directional scintillating fiber detector for 14-MeV neutrons was simulated using the GEANT4 Monte Carlo simulation tool. Detailed design aspects of a prototype 14-MeV neutron fiber detector under development were used in the simulation to assess performance and design features of the detector. Saint-Gobain-produced BCF-12 plastic fiber material was used in the prototype development. The fiber consists of a core scintillating material of polystyrene with 0.48 mm × 0.48 mm dimensions and an acrylic outer cladding of 0.02 mm thickness. A total of 64 square fibers, each with a cross-sectional area of 0.25 mm² and a length of 100 mm, were positioned parallel to each other with a spacing of 2.3 mm (fiber pitch) for the tracking of 14-MeV neutron-induced recoil proton (n-p) events. Neutron-induced recoil proton events, resulting in energy deposition in two collinear fibers, were used in reconstructing a two-dimensional (2D) direction of incident neutrons. Blurring of the recoil proton signal in measurements was also considered to account for uncertainty in direction reconstruction. The reconstructed direction has a limiting angular resolution of 3° due to the fiber dimensions. Blurring the recoil proton energy resulted in further broadening of the reconstructed direction, and the angular resolution was 20°. These values were determined when the incident neutron beam makes an angle of 45° relative to the front surface of the detector. Comparable values were obtained at other angles of incidence. Results from the present simulation have demonstrated promising directional sensitivity of the scintillating fiber detector under development.
As electronic assemblies become more compact and processing bandwidths increase, the escalating thermal energy has become more difficult to manage. The major limitation has been nonmetallic joining using poor thermal interface materials (TIMs). The interfacial, versus bulk, thermal conductivity of an adhesive is the major loss mechanism and normally accounts for an order-of-magnitude loss in conductivity per equivalent thickness. The next generation of TIMs requires a sophisticated understanding of material and surface sciences, heat transport at sub-micron scales, and the manufacturing processes used in packaging of microelectronics and other target applications. Only when the relationship between bondline manufacturing processes, structure, and contact resistance is well understood on a fundamental level will it be possible to advance the development of miniaturized microsystems. We give the status of the study of thermal transport across these interfaces.
Voltage and temperature distributions along the crucible were measured during VAR of 0.81 m diameter Ti-6Al-4V electrode into 0.91 m diameter ingot. These data were used to determine the current distribution along the crucible. Measurements were made for two furnace conditions, one with a bare crucible and the other with a painted crucible. The VAR furnace used for these measurements is of the non-coaxial type, i.e. current is fed directly into the bottom of the crucible through a stool (base plate) contact and exits the furnace through the electrode stinger. The data show that approximately 63% of the current is conducted directly between the ingot and electrode with the remaining conducted between the electrode and crucible wall. This partitioning does not appear to be sensitive to crucible coating. The crucible voltage data were successfully simulated using uniform current distributions for the current conduction zones, a value of 0.63 for the partitioning, and widths of 0.30 and 0.15 m for the ingot/crucible wall and plasma conduction zones, respectively. Successful simulation of the voltage data becomes increasingly difficult (or impossible) as one uses current partitioning values increasingly different from 0.63, indicating that the experimental value is consistent with theory. Current conducted between the ingot and crucible wall through the ingot/wall contact zone may vary during the process without affecting overall current partitioning. The same is true for current conducted through the ingot/stool and stool/crucible contact zones. There is some evidence that the ingot/stool current decreases with increasing ingot length for the case of the bare crucible. Equivalent circuit analysis shows that, under normal conditions, current partitioning is only sensitive to the ratio of the plasma resistance across the annulus to the plasma resistance across the electrode gap, thereby demonstrating the relationship between current partitioning and gap.
This work investigated the relationship between the resistance degradation in low-force metal contacts and hot-switched operational conditions representative of MEMS devices. A modified nano-indentation apparatus was used to bring electrically-biased gold and platinum surfaces into contact at a load of 100 μN. The applied normal force and electrical contact resistance of the contact materials was measured simultaneously. The influence of parallel discharge paths for stored electrical energy in the contact circuit is discussed in relation to surface contamination decomposition and the observed resistance degradation.
Proceedings of the Solar World Congress 2005: Bringing Water to the World, Including Proceedings of 34th ASES Annual Conference and Proceedings of 30th National Passive Solar Conference
Begay-Campbell, Sandra; Coots, Jennifer; Mar, Benjamin
Sandia National Laboratories (Sandia) has an active relationship with the Navajo Nation. Sandia has grown this relationship through the joint formation of strategic multiyear plans oriented toward the development of sustainable Native American renewable energy projects and associated business development. For the last decade, the Navajo Tribal Utility Authority (NTUA) has installed stand-alone photovoltaic (PV) systems on the Navajo Reservation to provide some of its most remote customers with electricity. Technical assistance from Sandia and New Mexico State University's Southwest Technology Development Institute supports NTUA as a leader in rural solar electrification, assists NTUA's solar program coordinator to create a sustainable program, and conveys NTUA's success in solar to others, including the Department of Energy (DOE). In partnership with DOE's Tribal Energy Program, summer interns Jennifer Coots (MBA student) and Benjamin Mar (Electrical and Computer Engineering student) prepared case studies that summarize the rural utility's experience with solar electric power.
LMPC 2005 - Proceedings of the 2005 International Symposium on Liquid Metal Processing and Casting
Viswanathan, Srinath; Melgaard, David K.; Patel, Ashish D.; Evans, David G.
A numerical model of the ESR process was used to study the effect of the various process parameters on the resulting temperature profiles, flow field, and pool shapes. The computational domain included the slag and ingot, while the electrode, crucible, and cooling water were considered as external boundary conditions. The model considered heat transfer, fluid flow, solidification, and electromagnetic effects. The predicted pool profiles were compared with experimental results obtained over a range of processing parameters from an industrial-scale 718 alloy ingot. The shape of the melt pool was marked by dropping nickel balls down the annulus of the crucible during melting. Thermocouples placed in the electrode monitored the electrode and slag temperature as melting progressed. The cooling water temperature and flow rate were also monitored. The resulting ingots were sectioned and etched to reveal the ingot macrostructure and the shape of the melt pool. Comparisons of the predicted and experimentally measured pool profiles show excellent agreement. The effect of processing parameters, including the slag cap thickness, on the temperature distribution and flow field are discussed. The results of a sensitivity study of thermophysical properties of the slag are also discussed.
The structural characteristics of buttress thread mechanical joints are not well understood and are difficult to accurately model. As an initial step towards understanding the mechanics of the buttress thread, a 2D plane stress model was created. An experimental investigation was conducted to study the compliance, damping characteristics, and stress field in an axial test condition. The compliance and damping were determined experimentally from a steel cross section of a buttress thread. The stress field was visualized using photoelastic techniques. The mechanics study combined with the photoelastic study provided a set of validation data.
In this paper, we discuss the primary characteristics and pitfalls associated with the use of Bragg Gratings for distributed temperature sensing, with particular attention to time-division multiplexing (TDM). Two pitfalls are intrinsic to a serial array of such gratings that use TDM: spectral shadowing and crosstalk. Two others involve strain in the fiber that masquerades as temperature and that could affect other methods of interrogating the gratings, in addition to TDM.
LMPC 2005 - Proceedings of the 2005 International Symposium on Liquid Metal Processing and Casting
Minisandram, Ramesh S.; Arnold, Matthew J.; Williamson, Rodney L.
During VAR of a 5377 kg, 0.76 m diameter Ti-6Al-4V alloy electrode into 0.86 m diameter ingot, tantalum balls were dropped into the ingot pool to measure the centerline pool depth. The first was introduced at full power after 1134 kg of electrode had been melted. A second marker was dropped after 4288 kg of electrode had been consumed, also at full power but just prior to power cutback. The third, and final, ball was released at the end of the cutback with 286 kg of electrode remaining. An external solenoidal stirring field was applied to the ingot throughout the melting process, as is typical in such practices. The ingot was sectioned, the marker ball positions recorded, and the pool depths subsequently calculated. The first marker was located only 4.5 cm from the bottom of the ingot, but was off-center by nearly 22 cm, indicating a relatively flat pool bottom. The other two balls were located 36.2 cm and 105.4 cm from the bottom, both approximately centered. Pool depths for the three conditions were calculated to be ∼41 cm, ∼131 cm and ∼99 cm. BAR, a 2½-D axisymmetric ingot code developed at Sandia National Laboratories, was used to generate pool shapes corresponding to these conditions. The code, which solves heat transfer, fluid flow and electromagnetic effects in a coupled fashion, was able to match the pool depths by adjusting the strength of the stirring field as a parameter, and predicted relatively thin sidewalls under full power melting, a prediction supported by crucible temperature and current distribution data also collected during the test. The applied stirring field was 60 gauss for this test. The effective field strength setting in BAR required to match the pool depths was 30 gauss. All other parameters in BAR were set identical to those required to match low stirring field (4 gauss), full power ingot pool depths measured and reported in an earlier study, except those requiring consistency with observed arc behavior in the two cases.
Thus, it is concluded that the 2½-D code can accurately match pool depths under high-field-strength stirring conditions once properly benchmarked.
ICEAA 2005 - 9th International Conference on Electromagnetics in Advanced Applications and EESC 2005 - 11th European Electromagnetic Structures Conference
Simulation results demonstrating transmission enhancement through a sub-wavelength aperture in an infinite plasmon array are presented. The results are obtained using EIGER and are considered preliminary before proceeding to the simulation of finite plasmon arrays.
Grain boundary stiffness and mobility determine the kinetics of curvature driven grain growth. Here the stiffness and mobility are determined using a computational approach based on the analysis of fluctuations in the grain boundary position during molecular dynamics simulations. This work represents the first determination of grain boundary stiffness. The results indicate that the boundary stiffness for a given boundary plane has a strong dependence on the direction of the boundary distortion. The mobility deduced is in accord with previous computer simulation studies.
Engineering/Technology Management 2005: Safety Engineering and Risk Analysis, Technology and Society, Engineering Business Management, Health and Safety
Lloyd, George M.; Hasselman, Timothy; Paez, Thomas
CMMs equipped with non-contact probes, such as video probes, are becoming popular for a variety of 2-D or 2.5-D objects. The advantages of a video (or vision) probe include the ability to measure features which are either too small or too delicate for a touch probe. Unfortunately, vision-based probing systems do not have the same measurement accuracy as touch-probe-equipped machines. For example, a Moore M48 coordinate measurement machine has an expected measurement uncertainty of 0.2 μm (plus a scale-dependent term) when using a touch probe (the actual repeatability is on the order of 0.03 μm). When the probe is changed to a Leitz LS1 vision system, the expected measurement uncertainty is 1.2 μm plus a scale-dependent term. The decreased accuracy is due entirely to the change in probing method. Components of the error budget include environmental effects, choice of lighting, lens distortions, and stage 2-D accuracy. Lighting is a major contributor to the measurement error budget, especially when a bidirectional measurement needs to be made (for example, the width of a line, rather than the center location of a line). We report on the sensitivity of vision probing on an OGP Avant Apex 200 to different lighting conditions, for both unidirectional and bidirectional measurements.
Sandia National Laboratories has developed a mesoscale wheeled hopping vehicle (WHV) to overcome the longstanding problems of mobility and power in small-scale unmanned vehicles. The system provides mobility in situations, such as negotiating vertical obstacles and rough terrain, that are prohibitive for other small ground-based vehicles.
The human brain functions through a chemically induced biological process which operates in a manner similar to electrical systems. The signals resulting from this biochemical process can be monitored and read using tools, and exhibit patterns, similar to those found in electrical and electronics engineering. The primary signature of this electrical activity is the "brain wave", which looks remarkably similar to the output of many electrical systems. Likewise, the device currently used in medical arenas to read brain electrical activity is the electroencephalogram (EEG), which is analogous to a multi-channel oscilloscope reading. Brain wave readings and recordings for medical purposes are traditionally taken in clinical settings such as hospitals, laboratories or diagnostic clinics. The signal is captured via externally applied scalp electrodes using a semi-viscous gel to reduce impedance, and is typically in the 10 to 100 microvolt range. In other instances, where surgeons are attempting to isolate particular types of minute brain signals, the electrodes may actually be temporarily implanted in the brain during a preliminary procedure. The current configurations of equipment required for EEGs involve large recording instruments, many electrodes, wires, and large amounts of hard disk space devoted to storing large files of brain wave data which are then eventually analyzed for patterns of concern. Advances in sensors, signal processing, data storage and microelectronics over the last decade would seem to have paved the way for the realization of devices capable of "real time" external monitoring, and possible assessment, of brain activity. A myriad of applications for such a capability are likewise presenting themselves, including the ability to assess brain functioning, level of functioning and malfunctioning.
Our plan is to develop the sensors, signal processing, and portable instrumentation package which could capture, analyze, and communicate information on brain activity of use to the individual, to medical personnel, or in other potential arenas. Taking this option one step further, one might foresee that the signal would be captured, analyzed, and communicated to a person or device, resulting in an action or reaction by that person or device. It is envisioned that ultimately a system would include a sensor detection mechanism, transmitter, receiver, microprocessor and associated memory, and an audio and/or visual alert system. If successful in prototyping, the device could be considered for eventual implementation in ASIC form or as a fully integrated CMOS microsystem.
The production of ultra-cold molecules is a goal of many laboratories throughout the world. We are pursuing a unique technique that utilizes the kinematics of atomic and molecular collisions to achieve the goal of producing substantial numbers of sub-kelvin molecules confined in a trap. Here a trap is defined as an apparatus that spatially localizes, in a known location in the laboratory, a sample of molecules whose temperature is below one kelvin. Further, the storage time for the molecules must be sufficient to measure and possibly further cool the molecules. We utilize a technique unique to Sandia to form cold molecules from near-mass-degenerate collisions between atoms and molecules. This report describes the progress we have made using this novel technique and the further progress towards trapping molecules we have cooled.
The purpose of this project was to perform preliminary studies and process development on electroactive polymers to be used for tunable optical elements and MEMS actuators. In a collaboration between Sandia National Laboratories and the University of Illinois at Urbana-Champaign, we have successfully developed a process for applying thin films of poly(vinylidene fluoride) (PVDF) onto glass substrates and patterning these using a novel stamping technique. We observed actuation in these structures in static and dynamic measurements. Further work is needed to characterize the impact that this approach could have on the field of tunable optical devices for sensing and communication.
This Pollution Prevention Opportunity Assessment (PPOA) was conducted for the two Sandia National Laboratories/New Mexico cafeteria facilities between May and August 2005. The primary purpose of this PPOA is to assess waste and resource reduction opportunities and issue Pollution Prevention (P2) recommendations for Sandia's food service facilities. This PPOA contains recommendations for energy, water and resource reduction, as well as material substitution based upon environmentally preferable purchasing. Division 3000 has requested the PPOA report as part of the Division's compliance effort to implement the Environmental Management System (EMS) per DOE Order 450.1. This report contains a summary of the information collected and analyses performed with recommended options for implementation. The SNL/NM P2 Group will work with Division 3000 and the respective cafeteria facilities to implement these options.
Sandia National Laboratories (SNL) has limited inventories of, and activities with, fissile material. Personnel who perform nuclear criticality safety (NCS) assignments do so on a part-time basis. Sandia's "tailored approach" to training and qualification of these personnel can serve as a model for others with "small" NCS programs. SNL uses a single set of qualification cards for qualifying nuclear criticality safety engineers (NCSE). Provision is made for: (1) training and mentoring of new NCSEs with testing or other verification of their skills and knowledge and (2) "qualification by documentation" for staff who historically have been performing NCSE-like duties. Key areas for evaluation include previous formal education and training; demonstrated success in writing Criticality Safety Assessments (CSA) and related documents; interaction with the SNL criticality safety committees; and overall knowledge (e.g., as judged against the objectives in DOE-STD-1135). Gaps in knowledge are filled through self-study, training, or mentoring. Candidate mastery of topics is confirmed primarily by evaluation of work products and interviews. Completion is approved by the Criticality Safety Officer (CSO) - the closest SNL comes to having an NCS manager - and then management. In applying the tailored approach, NCSE candidates are not required to be subject-matter experts for all NCS-related facilities and activities at SNL at the time of qualification. Familiarity with each of the facilities and activities is expected, along with the ability to "self-train" when needed (e.g., analogous to just-in-time [JIT] procurement). The latter is supported by identification of applicable SNL-wide fissile-material facilities and activities along with resource organizations and personnel in NCS, safety analysis, accountability, etc.
The capstone is a discussion with the CSO, or other experienced NCSE, demonstrating the ability to explain in some detail how a specific NCS assignment would be tackled (e.g., options for gaining facility/activity knowledge, performing analyses, using resource personnel, and traversing the required peer- and committee-review processes).
Red Storm is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Sandia National Laboratories (SNL). The Red Storm Usage Model (RSUM) documents the capabilities and the environment provided for the FY05 Tri-Lab Level II Limited Availability Red Storm User Environment Milestone and the FY05 SNL Level II Limited Availability Red Storm Platform Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and SNL. Additionally, the Red Storm Usage Model maps the provided capabilities to the Tri-Lab ASC Computing Environment (ACE) requirements. The ACE requirements reflect the high performance computing requirements for the ASC community and have been updated in FY05 to reflect the community's needs. For each section of the RSUM, Appendix I maps the ACE requirements to the Limited Availability User Environment capabilities and includes a description of ACE requirements met and those requirements that are not met in that particular section. The Red Storm Usage Model, along with the ACE mappings, has been issued and vetted throughout the Tri-Lab community.
Focused beams from high-power lasers have been used to command-trigger gas switches in pulsed-power accelerators for more than two decades. This Laboratory-Directed Research and Development project was aimed at determining whether high-power lasers could also command-trigger water switches on high-power accelerators. In initial work, we determined that focused light from three harmonics of a small pulsed Nd:YAG laser at 1064 nm, 532 nm, and 355 nm could be used to form breakdown arcs in water, with the lowest breakdown thresholds of 110 J/cm² or 14 GW/cm² at 532 nm in the green. In laboratory-scale laser triggering experiments with a 170-kV pulse-charged water switch with a 3-mm anode-cathode gap, we demonstrated that ≈90 mJ of green laser energy could trigger the gap with a 1-σ jitter of less than 2 ns, a factor of 10 improvement over the jitter of the switch in its self-breaking mode. In the laboratory-scale experiments we developed optical techniques utilizing polarization rotation of a probe laser beam to measure current in switch channels and electric field enhancements near streamer heads. In the final year of the project, we constructed a pulsed-power facility to allow us to test laser triggering of water switches from 0.6 MV to 2.0 MV. Triggering experiments on this facility, using an axicon lens for focusing the laser and a switch with a 740 kV self-break voltage, produced consistent laser triggering with a ±16-ns 1-σ jitter, a significant improvement over the ±24-ns jitter in the self-breaking mode.
Over the past few years we have developed the ability to acquire images through a confocal microscope that contain, for each pixel, the simultaneous fluorescence lifetime and spectra of multiple fluorophores within that pixel. We have demonstrated that our system has the sensitivity to make these measurements on single molecules. The spectra and lifetimes of fluorophores bound to complex molecules contain a wealth of information on the conformational dynamics and local chemical environments of the molecules. However, the detailed record of spectral and temporal information our system provides from fluorophores in single molecules has not been previously available. Therefore, we have studied several fluorophores and simple fluorophore-molecule systems that are representative of the use of fluorophores in biological systems. Experiments include studies of a simple fluorescence resonance energy transfer (FRET) system, green fluorescent probe variants and quantum dots. This work is intended to provide a basis for understanding how fluorophores report on the chemistry of more complex biological molecules.
The goal of this project was to develop novel hydrogen-oxidation electrocatalyst materials that contain reduced platinum content compared to traditional catalysts by developing flexible synthesis techniques to fabricate supported catalyst structures, and by verifying electrochemical performance in half cells and ultimately laboratory fuel cells. Synthesis methods were developed for making small, well-defined platinum clusters using zeolite hosts, ion exchange, and controlled calcination/reduction processes. Several factors influence cluster size, and clusters below 1 nm with narrow size distribution have been prepared. To enable electrochemical application, the zeolite pores were filled with electrically conductive carbon via infiltration with carbon precursors, polymerization/cross-linking, and pyrolysis under inert conditions. The zeolite host was then removed by acid washing to leave a Pt/C electrocatalyst possessing quasi-zeolitic porosity and Pt clusters of well-controlled size. Plotting electrochemical activity versus pyrolysis temperature typically produces a Gaussian curve, with a peak at ca. 800 °C. The poorer relative performances at low and high temperature are due to low electrical conductivity of the carbon matrix, and loss of zeolitic structure combined with Pt sintering, respectively. Cluster sizes measured via adsorption-based methods were consistently larger than those observed by TEM and EXAFS, suggesting that a fraction of the clusters were inaccessible to the fluid phase. Detailed EXAFS analysis has been performed on selected catalysts and catalyst precursors to monitor trends in cluster size evolution, as well as oxidation states of Pt. Experiments were conducted to probe the electroactive surface area of the Pt clusters. These Pt/C materials had as much as 110 m²/g(Pt) electroactive surface area, an almost 30% improvement over what is commercially available (86 m²/g(Pt), manufactured by ETEK).
These Pt/C materials also perform qualitatively as well as the ETEK material for the ORR, a non-trivial achievement. A fuel cell test showed that Pt/C outperformed the ETEK material by an average of 50% over a 300-hour test. Increasing surface area decreases the amount of Pt needed in a fuel cell, which translates into cost savings. Furthermore, the increased performance realized in the fuel cell test might ultimately mean less Pt is needed in a fuel cell; this again translates into cost savings. Finally, enhanced long-term stability is a key driver within the fuel cell community, as improvements in this area must be realized before fuel cells find their way into the marketplace; these Pt/C materials hold great promise of enhanced stability over time.

An external laser desorption ion source was successfully installed on the existing Fourier transform ion-cyclotron resonance (FT-ICR) mass spectrometer. However, operation of this laser ablation source has generated only metal atom ions; no clusters have been found to date. It is believed that this is due to the design of the pulsed-nozzle/laser vaporization chamber. The final experimental configuration and design of the two source housings are described.
The ability of semiconductor nanocrystals (NCs) to display multiple (size-specific) colors simultaneously during a single, long-term excitation holds great promise for their use in fluorescent bio-imaging. The main challenges of using nanocrystals as biolabels are achieving biocompatibility, low non-specific adsorption, and no aggregation. In addition, functional groups that can be used to further couple and conjugate with biospecies (proteins, DNAs, antibodies, etc.) are required. In this project, we invented a new route to the synthesis of water-soluble and biocompatible NCs. Our approach is to encapsulate as-synthesized, monosized, hydrophobic NCs within the hydrophobic cores of micelles composed of a mixture of surfactants and phospholipids containing head groups functionalized with poly(ethylene glycol) (PEG), -COOH, and -NH₂ groups. PEG provided biocompatibility and the other groups were used for further biofunctionalization. The resulting water-soluble metal and semiconductor NC-micelles preserve the optical properties of the original hydrophobic NCs. Semiconductor NCs emit the same color; they exhibit equal photoluminescence (PL) intensity under long-time laser irradiation (one week); and they exhibit the same PL lifetime (30 ns). The results from transmission electron microscopy and confocal fluorescent imaging indicate that water-soluble semiconductor NC-micelles are biocompatible and exhibit no aggregation in cells. We have extended the surfactant/lipid encapsulation techniques to synthesize water-soluble magnetic NC-micelles. Transmission electron microscopy results suggest that water-soluble magnetic NC-micelles exhibit no aggregation. The resulting NC-micelles preserve the magnetic properties of the original hydrophobic magnetic NCs. Viability studies conducted using yeast cells suggest that the magnetic nanocrystal-micelles are biocompatible.
We have demonstrated for the first time that, by using external oscillating magnetic fields to manipulate the magnetic micelles, we can kill live cells, presenting a new magnetodynamic therapy without side effects.
The objective of this report is to develop uncertainty estimates for three heat flux measurement techniques used for the measurement of incident heat flux in a combined radiative and convective environment. This is related to the measurement of heat flux to objects placed inside hydrocarbon fuel (diesel, JP-8 jet fuel) fires, which is very difficult to make accurately (e.g., to better than 10%). Three methods are discussed: a Schmidt-Boelter heat flux gage; a calorimeter and inverse heat conduction method; and a thin plate and energy balance method. Steady-state uncertainties were estimated for two types of fires (i.e., calm wind and high winds) at three times (early in the fire, late in the fire, and at an intermediate time). Results showed a large uncertainty for all three methods. Typical uncertainties for a Schmidt-Boelter gage ranged from ±23% for high-wind fires to ±39% for low-wind fires. For the calorimeter/inverse method the uncertainties were ±25% to ±40%. For the thin plate/energy balance method the uncertainties ranged from ±21% to ±42%. The 23-39% uncertainties for the Schmidt-Boelter gage are much larger than the quoted uncertainty for a radiative-only environment (i.e., ±3%). This large difference is due to the convective contribution and because the gage sensitivities to radiative and convective environments are not equal. All these values are larger than desired, which suggests the need for improvements in heat flux measurements in fires.
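Combined uncertainties of this kind are typically built up from elemental terms; assuming the elemental terms are independent, they combine in root-sum-square fashion. The sketch below uses entirely hypothetical elemental values (not the report's actual error budget) to illustrate the propagation.

```python
import math

# Hypothetical elemental uncertainties (%) for a heat flux gage in a mixed
# radiative/convective environment -- illustrative values only, not the
# report's actual error budget.
elemental = {
    "radiative calibration": 3.0,
    "convective sensitivity mismatch": 20.0,
    "gage surface temperature": 10.0,
    "mounting / conduction losses": 5.0,
}

# Independent terms combine as the square root of the sum of squares.
u_total = math.sqrt(sum(u * u for u in elemental.values()))
print("combined uncertainty: +/- %.1f%%" % u_total)
```

Note how a single large elemental term (here the assumed 20% convective-sensitivity mismatch) dominates the combined value, which is consistent with the report's observation that the convective contribution drives the large mixed-environment uncertainties.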
This guide describes the R&A process, Common Look and Feel requirements, and preparation and publishing procedures for communication products at Sandia National Laboratories. Samples of forms and examples of published communication products are provided. This guide details the processes for producing a variety of communication products at Sandia National Laboratories. Figure I-1 shows the general publication development process. Because extensive supplemental material is available from Sandia on the internal Web or from external sources (Table I-1), the guide has been shortened to make it easy to find the information you need.
Flexible Alternating Current Transmission System (FACTS) devices are installed on electric power transmission lines to stabilize and regulate power flow. Power lines protected by FACTS devices can carry increased power flow and better respond to contingencies. The University of Missouri-Rolla (UMR) is currently working on a multi-year project to examine the potential use of multiple FACTS devices distributed over a large power system region in a cooperative arrangement in which the FACTS devices work together to optimize and stabilize the regional power system. The report describes operational and security challenges that need to be addressed to employ FACTS devices in this way and recommends references, processes, technologies, and policies to address these challenges.
This one year LDRD addresses problems of threat assessment and restoration of facilities following a bioterror incident like the incident that closed down mail facilities in late 2001. Facilities that are contaminated with pathogenic spores such as B. anthracis spores must be shut down while they are treated with a sporicidal agent and the effectiveness of the treatment is ascertained. This process involves measuring the viability of spore test strips, laid out in a grid throughout the facility; the CDC accepted methodologies require transporting the samples to a laboratory and carrying out a 48 hr outgrowth experiment. We proposed developing a technique that will ultimately lead to a fieldable microfluidic device that can rapidly assess (ideally less than 30 min) spore viability and effectiveness of sporicidal treatment, returning facilities to use in hours not days. The proposed method will determine viability of spores by detecting early protein synthesis after chemical germination. During this year, we established the feasibility of this approach and gathered preliminary results that should fuel a future more comprehensive effort. Such a proposal is currently under review with the NIH. Proteomic signatures of Bacillus spores and vegetative cells were assessed by both slab gel electrophoresis as well as microchip based gel electrophoresis employing sensitive laser-induced fluorescence detection. The conditions for germination using a number of chemical germinants were evaluated and optimized and the time course of protein synthesis was ascertained. Microseparations were carried out using both viable spores and spores inactivated by two different methods. A select number of the early synthesis proteins were digested into peptides for analysis by mass spectrometry.
Flows with strong curvature present a challenge for turbulence models, specifically eddy viscosity models, which assume isotropy and a linear, instantaneous equilibrium relation between stress and strain. Results obtained from three different codes and two different linear eddy viscosity turbulence models are compared to a DNS simulation in order to gain some perspective on the turbulence modeling capability of SIERRA/Fuego. The Fuego v2f results are superior to the more common two-layer k-ε model results obtained with both a commercial and a research code in terms of the predicted near-wall behavior along the concave wall. However, near the convex wall, including the separated region, little improvement is gained using the v2f model, and in general the turbulent kinetic energy prediction is fair at best.
Colloid transport through saturated media is an integral component of predicting the fate and transport of groundwater contaminants. Developing sound predictive capabilities and establishing effective methodologies for remediation rely heavily on our ability to understand the pertinent physical and chemical mechanisms. Traditionally, colloid transport through saturated media has been described by classical colloid filtration theory (CFT), which predicts an exponential decrease in colloid concentration with travel distance. Furthermore, colloid stability as determined by Derjaguin-Landau-Verwey-Overbeek (DLVO) theory predicts permanent attachment of unstable particles in a primary energy minimum. However, recent studies show significant deviations from these traditional theories. Deposition in the secondary energy minimum has been suggested as a mechanism by which the observed deviations can occur. This work investigates the existence of the secondary energy minimum as predicted by DLVO theory using direct force measurements obtained by atomic force microscopy. Interaction energies as a function of separation distance between a colloid and a quartz surface in electrolyte solutions of varying ionic strength are obtained. Preliminary force measurements show promise, and necessary modifications to the current experimental methodology have been identified. Stringent surface-cleaning procedures and the use of high-purity water for all injectant solutions are necessary for the most accurate and precise measurements. Comparisons of direct physical measurements by atomic force microscopy with theoretical calculations and existing experimental findings will allow evaluation of the existence or absence of a secondary energy minimum.
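For orientation, a DLVO energy profile and its secondary minimum can be sketched numerically. The example below combines a non-retarded sphere-plate van der Waals term with the Hogg-Healy-Fuerstenau constant-potential double-layer expression; the particle radius, Hamaker constant, surface potentials, and ionic strength are illustrative assumptions, not measured values from this work.

```python
import math

# Physical constants (SI)
KB, T = 1.380649e-23, 298.15
E0, ER = 8.854e-12, 78.5          # vacuum permittivity; water at 25 C
NA, QE = 6.022e23, 1.602e-19

# Illustrative parameters (assumed): a 1-um polystyrene-like sphere
# near a flat quartz plate in a 1:1 electrolyte.
R = 0.5e-6      # particle radius, m
A_H = 4e-21     # Hamaker constant, J (assumed)
PSI = -0.030    # surface potentials, V (assumed equal on both surfaces)
I = 0.05        # ionic strength, mol/L

# Inverse Debye length for a 1:1 electrolyte (I converted to mol/m^3).
kappa = math.sqrt(2 * NA * (I * 1000) * QE**2 / (E0 * ER * KB * T))

def v_dlvo(h):
    """Sphere-plate DLVO energy (J) at surface separation h:
    HHF constant-potential double layer + non-retarded van der Waals."""
    x = math.exp(-kappa * h)
    v_edl = math.pi * E0 * ER * R * (
        2 * PSI * PSI * math.log((1 + x) / (1 - x))   # cross term
        + 2 * PSI * PSI * math.log(1 - x * x))        # (psi1^2 + psi2^2) term
    v_vdw = -A_H * R / (6 * h)
    return v_edl + v_vdw

hs = [(0.4 + 0.05 * i) * 1e-9 for i in range(593)]    # separations, 0.4-30 nm
vs = [v_dlvo(h) / (KB * T) for h in hs]               # energies in units of kT
i_min = vs.index(min(vs))
print("secondary minimum: %.1f kT at h = %.1f nm" % (vs[i_min], hs[i_min] * 1e9))
```

With these assumed parameters the profile shows a large repulsive barrier at small separations and a shallow attractive well of a few kT several nanometers out, i.e., the secondary minimum into which weak, reversible deposition can occur.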
In support of the DOE Low Wind Speed Turbine (LWST) program two of the three Micon 65/13M wind turbines at the USDA Agricultural Research Service (ARS) center in Bushland, Texas will be used to test two sets of experimental blades, the CX-100 and TX-100. The blade aerodynamic and structural characterization, meteorological inflow and wind turbine structural response will be monitored with an array of 75 instruments: 33 to characterize the blades, 15 to characterize the inflow, and 27 to characterize the time-varying state of the turbine. For both tests, data will be sampled at a rate of 30 Hz using the ATLAS II (Accurate GPS Time-Linked Data Acquisition System) data acquisition system. The system features a time-synchronized continuous data stream and telemetered data from the turbine rotor. This paper documents the instruments and infrastructure that have been developed to monitor these blades, turbines and inflow.
We define a new method for global optimization, the Homotopy Optimization Method (HOM). This method differs from previous homotopy and continuation methods in that its aim is to find a minimizer for each of a set of values of the homotopy parameter, rather than to follow a path of minimizers. We define a second method, called HOPE, by allowing HOM to follow an ensemble of points obtained by perturbation of previous ones. We relate this new method to standard methods such as simulated annealing and show under what circumstances it is superior. We present results of extensive numerical experiments demonstrating performance of HOM and HOPE.
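A minimal sketch of the two methods follows, under assumptions the abstract does not specify: a 1-D objective, a linear homotopy H(x, t) = (1 − t) g(x) + t f(x) from an easy surrogate g to the target f, and fixed-step gradient descent as the local minimizer.

```python
import random

def descend(F, x, iters=200, lr=0.05):
    """Fixed-step gradient descent with a central-difference gradient."""
    h = 1e-6
    for _ in range(iters):
        x -= lr * (F(x + h) - F(x - h)) / (2 * h)
    return x

def hom(f, g, x0, n_steps=20):
    """Homotopy Optimization Method: minimize H(., t) for each of a set of
    homotopy parameter values t, each local search starting from the
    previous minimizer (rather than following a path of minimizers)."""
    x = x0
    for k in range(n_steps + 1):
        t = k / n_steps
        x = descend(lambda y: (1 - t) * g(y) + t * f(y), x)
    return x

def hope(f, g, x0, m=8, sigma=1.0, n_steps=20):
    """HOPE: follow an ensemble; at each step add perturbed copies of the
    current points, locally minimize all of them, and keep the best m."""
    random.seed(1)
    pts = [x0]
    for k in range(n_steps + 1):
        t = k / n_steps
        H = lambda y, t=t: (1 - t) * g(y) + t * f(y)
        # Perturb (clamped to keep the toy descent stable), then minimize.
        pts = pts + [min(3.0, max(-3.0, p + random.gauss(0, sigma))) for p in pts]
        pts = sorted((descend(H, p) for p in pts), key=H)[:m]
    return min(pts, key=f)

# Toy problem: f has two minima (near x = +1 and a global one near x = -1.04);
# the surrogate g is an easy convex bowl biased toward the non-global basin.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
g = lambda x: (x - 1.0) ** 2

print("HOM minimizer: %.2f" % hom(f, g, 2.0))
print("HOPE minimizer: %.2f" % hope(f, g, 2.0))
```

HOM alone tracks the basin the surrogate points to; the ensemble in HOPE can only match or improve on that, which is the sense in which perturbation helps on multi-minima problems.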
We performed molecular dynamics simulations of beta-amyloid (Aβ) protein and the Aβ(31-42) fragment in bulk water and near hydrated lipids to study the mechanism of neurotoxicity associated with the aggregation of the protein. We constructed full atomistic models using Cerius2 and ran simulations using LAMMPS. MD simulations with different conformations and positions of the protein fragment were performed. Thermodynamic properties were compared with previous literature and the results were analyzed. Longer simulations and data analyses based on the free energy profiles along the distance between the protein and the interface are ongoing.
Over the past several years, techniques have been developed for clustering very large segments of the technical literature using sources such as Thomson ISI's Science Citation Index. The primary objective of this work has been to develop indicators of potential impact at the paper level to enhance planning and evaluation of research. These indicators can also be aggregated at different levels to enable profiling of departments, institutions, agencies, etc. Results of this work are presented as maps of science and technology with various overlays corresponding to the indicators associated with a particular search or question.
This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.
GaN-based microwave power amplifiers have been identified as critical components in Sandia's next-generation micro-Synthetic-Aperture-Radar (SAR) operating at X-band and Ku-band (10-18 GHz). To miniaturize SAR, GaN-based amplifiers are necessary to replace bulky traveling wave tubes. Specifically, for micro-SAR development, highly reliable GaN high electron mobility transistors (HEMTs), which have delivered a factor-of-ten improvement in power performance compared to GaAs, need to be developed. Despite the great promise of GaN HEMTs, problems associated with nitride materials growth currently limit gain, linearity, power-added efficiency, reproducibility, and reliability. These material quality issues are primarily due to heteroepitaxial growth of GaN on lattice-mismatched substrates. Because SiC provides the best lattice match and thermal conductivity, SiC is currently the substrate of choice for GaN-based microwave amplifiers. Obviously, for GaN-based HEMTs to fully realize their tremendous promise, several challenges related to GaN heteroepitaxy on SiC must be solved. For this LDRD, we conducted a concerted effort to resolve materials issues through in-depth research on GaN/AlGaN growth on SiC. Repeatable growth processes were developed which enabled basic studies of these device layers as well as full fabrication of microwave amplifiers. Detailed studies of GaN and AlGaN growth on SiC were conducted, and techniques to measure the structural and electrical properties of the layers were developed. Problems that limit device performance were investigated, including electron traps, dislocations, the quality of semi-insulating GaN, the GaN/AlGaN interface roughness, and surface pinning of the AlGaN gate. Surface charge was reduced by developing silicon nitride passivation.
Constant feedback between material properties, physical understanding, and device performance enabled rapid progress, which eventually led to the successful fabrication of state-of-the-art HEMT transistors and amplifiers.
Thiolated cyclodextrins have been shown to be useful as modifiers of electrode surfaces for application in electrochemical sensing. The adsorption of three different thiolated β-cyclodextrin (β-CD) derivatives onto gold (Au) electrodes was studied by monitoring ferricyanide reduction and ferrocene carboxylic acid (FCA) oxidation at the electrode surface using cyclic voltammetry. Electrodes modified with the β-CD MJF-69 derivative bound FCA within the CD cavity. The monolayer acted as a conducting layer with an increase in the oxidation current. On the other hand, the β-CD layer inhibited the reduction of ferricyanide at the electrode surface since ferricyanide is larger than the cavity of the β-CD derivative and thus unable to form an inclusion complex.
Because of the grave public health implications and potential economic impact of the emergence of the highly pathogenic avian influenza A isolate H5N1 currently circulating in Asia, we have evaluated the efficacy of various disinfectant chemistries against surrogate influenza A strains. Chemistries included in the tests were household bleach, ethanol, Virkon S®, and a modified version of the Sandia National Laboratories-developed DF-200 (DF-200d, a diluted version of the standard DF-200 formulation). Validation efforts followed EPA guidelines for evaluating chemical disinfectants against viruses. The efficacy of the various chemistries was determined by infectivity, quantitative RNA, and qualitative protein assays. Additionally, organic challenges using combined poultry feces and litter material were included in the experiments to simulate environments in which decontamination and remediation will likely occur. In all assays, 10% bleach and Sandia DF-200d were the most efficacious treatments against two influenza A isolates (mammalian and avian), as they provided the most rapid and complete inactivation of influenza A viruses.
We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.
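Cellular agent models of the Schelling type mentioned above fit in a few dozen lines; the sketch below is a generic toy implementation (grid size, vacancy rate, and tolerance threshold are arbitrary choices, not values from the project) showing how a simple relocation rule produces emergent segregation from a random start.

```python
import random

random.seed(42)
N = 20                       # grid side (torus)
VACANCY, THRESH = 0.1, 0.4   # assumed toy parameters

# Populate the grid: 0 = empty cell, 1 and 2 = the two agent types.
cells = [1, 2] * int(N * N * (1 - VACANCY) / 2)
cells += [0] * (N * N - len(cells))
random.shuffle(cells)
grid = [cells[i * N:(i + 1) * N] for i in range(N)]

def neighbors(r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                yield grid[(r + dr) % N][(c + dc) % N]

def like_fraction(r, c):
    occ = [v for v in neighbors(r, c) if v != 0]
    return 1.0 if not occ else sum(v == grid[r][c] for v in occ) / len(occ)

def mean_similarity():
    fr = [like_fraction(r, c) for r in range(N) for c in range(N) if grid[r][c]]
    return sum(fr) / len(fr)

def sweep():
    """One sweep: every unhappy agent relocates to a random empty cell."""
    unhappy = [(r, c) for r in range(N) for c in range(N)
               if grid[r][c] and like_fraction(r, c) < THRESH]
    empty = [(r, c) for r in range(N) for c in range(N) if not grid[r][c]]
    random.shuffle(unhappy)
    for r, c in unhappy:
        dr, dc = empty.pop(random.randrange(len(empty)))
        grid[dr][dc], grid[r][c] = grid[r][c], 0
        empty.append((r, c))

before = mean_similarity()
for _ in range(60):
    sweep()
after = mean_similarity()
print("mean like-neighbor fraction: %.2f -> %.2f" % (before, after))
```

Even with a mild tolerance threshold, the like-neighbor fraction climbs well above its random-mixing value, which is the kind of rule-driven emergent behavior the simple cellular models in this project were intended to expose.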