Wire array and X-pinch study on 1 MA "Zebra" generator
Abstract not provided.
Nature Methods
Blood
SIAM Journal on Numerical Analysis
This paper builds upon previous work [Sprigg and Ehlen, 2004] by introducing a bond market into a model of production and employment. The previous paper described an economy in which households choose whether to enter the labor and product markets based on wages and prices. Firms experiment with prices and employment levels to maximize their profits. We developed agent-based simulations using Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate that multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment, but also suffer from market noise stemming from consumer churn. In this paper we introduce a bond market as a mechanism for household savings. We simulate an economy of continuous overlapping generations in which each household grows older in the course of the simulation and continually revises its target level of savings according to a life-cycle hypothesis. Households can seek employment, earn income, purchase goods, and contribute to savings until they reach the mandatory retirement age; upon retirement households must draw from savings in order to purchase goods. This paper demonstrates the simultaneous convergence of product, labor, and savings markets to their calculated equilibria, and simulates how a disruption to a productive sector will create cascading effects in all markets. Subsequent work will use similar models to simulate how disruptions, such as terrorist attacks, would interplay with consumer confidence to affect financial markets and the broader economy.
Applied Physics Letters
The Hydrogen Futures Simulation Model (H2Sim) is a high-level, internally consistent, strategic tool for exploring the options of a hydrogen economy. Once the user understands the basic functions, H2Sim can be used to examine a wide variety of scenarios, such as testing different options for the hydrogen pathway, altering key assumptions regarding hydrogen production, storage, transportation, and end-use costs, and determining the effectiveness of various options for carbon mitigation. This User's Guide explains how to run the model for the first-time user.
Fusion Science and Technology
In preparation for developing a Z-pinch IFE power plant, the interaction of ferritic steel with the coolant, FLiBe, must be explored. Sandia National Laboratories Fusion Technology Department was asked to drop molten ferritic steel into FLiBe in a vacuum system and determine the gas byproducts and the ability to recycle the steel. We tried various methods of resistively heating ferritic steel using available power supplies and easily obtained heaters. Although we could melt the steel, we could not cause a drop to fall. This report describes the various experiments that were performed and includes some suggestions and materials needed to be successful. Although the steel was easily melted, it was not possible to drip the molten steel into a FLiBe pool. Levitation melting of the drop is likely to be more successful.
To establish strength criteria of Big Hill salt, a series of quasi-static triaxial compression tests have been completed. This report summarizes the test methods, set-up, relevant observations, and results. The triaxial compression tests established dilatant damage criteria for Big Hill salt in terms of stress invariants (I₁ and J₂) and principal stresses (σ_a,d and σ₃), respectively: √J₂ (psi) = 1746 − 1320.5 exp(−0.00034 I₁ (psi)); σ_a,d (psi) = 2248 + 1.25 σ₃ (psi). For the confining pressure of 1,000 psi, the dilatant damage strength of Big Hill salt is identical to the typical salt strength (√J₂ = 0.27 I₁). However, for higher confining pressure, the typical strength criterion overestimates the damage strength of Big Hill salt.
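The two fitted criteria are simple closed forms and can be evaluated directly; a minimal sketch (function names are ours, all stresses in psi):

```python
import math

def sqrt_j2_damage_psi(i1_psi):
    """Stress-invariant form of the dilatant damage criterion fitted
    for Big Hill salt: sqrt(J2) = 1746 - 1320.5 * exp(-0.00034 * I1)."""
    return 1746.0 - 1320.5 * math.exp(-0.00034 * i1_psi)

def axial_damage_stress_psi(sigma3_psi):
    """Principal-stress form: sigma_a,d = 2248 + 1.25 * sigma_3."""
    return 2248.0 + 1.25 * sigma3_psi

def typical_salt_sqrt_j2(i1_psi):
    """Generic salt criterion sqrt(J2) = 0.27 * I1, which the report
    finds overestimates Big Hill damage strength at higher confinement."""
    return 0.27 * i1_psi
```

For example, at a confining pressure of 1,000 psi the principal-stress form gives an axial damage stress of 3,498 psi.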
Journal of Nanoscience and Nanotechnology
International Journal of Heat and Fluid Flow
Symmetric capsule implosions in the double-ended vacuum hohlraum (DEH) on Z have demonstrated convergence ratios of 14-21 for 2.15-mm plastic ablator capsules absorbing 5-7 kJ of x-rays, based on backlit images of the compressed ablator remaining at peak convergence [1]. Experiments with DD-filled 3.3-mm diameter capsules designed to absorb 14 kJ of x-rays have begun as an integrated test of drive temperature and symmetry, complementary to thin-shell symmetry diagnostic capsules. These capsule implosions are characterized by excellent control of symmetry (< 3% time-integrated), but low hohlraum efficiency (< 2%). Possible methods to increase the capsule absorbed energy in the DEH include mixed-component hohlraums, large diameter foam ablator capsules, transmissive shine shields between the z-pinch and capsule, higher spoke electrode x-ray transmission, a double-sided power feed, and smaller initial radius z-pinch wire arrays. Simulations will explore the potential for each of these modifications to increase the capsule coupling efficiency for near-term experiments on Z and ZR.
Physical Review B
Rapid Prototyping Journal
Proposed for publication in Scientometrics.
This paper presents a new map representing the structure of all of science, based on journal articles, including both the natural and social sciences. Similar to cartographic maps of our world, the map of science provides a bird's eye view of today's scientific landscape. It can be used to visually identify major areas of science, their size, similarity, and interconnectedness. In order to be useful, the map needs to be accurate on a local and on a global scale. While our recent work has focused on the former aspect, this paper summarizes results on how to achieve structural accuracy. Eight alternative measures of journal similarity were applied to a data set of 7,121 journals covering over 1 million documents in the combined Science Citation and Social Science Citation Indexes. For each journal similarity measure we generated two-dimensional spatial layouts using the force-directed graph layout tool, VxOrd. Next, mutual information values were calculated for each graph at different clustering levels to give a measure of structural accuracy for each map. The best co-citation and inter-citation maps according to local and structural accuracy were selected and are presented and characterized. These two maps are compared to establish robustness. The inter-citation map is then used to examine linkages between disciplines. Biochemistry appears as the most interdisciplinary discipline in science.
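The structural-accuracy measure described above, mutual information between two cluster assignments, can be sketched in a few lines of stdlib Python (function name is ours; the paper's exact estimator and normalization may differ):

```python
import math
from collections import Counter

def mutual_information(labels_a, labels_b):
    """Mutual information (in nats) between two cluster assignments over
    the same items, estimated from the empirical joint distribution."""
    n = len(labels_a)
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    count_ab = Counter(zip(labels_a, labels_b))
    mi = 0.0
    for (a, b), c in count_ab.items():
        p_ab = c / n
        # log of p(a,b) / (p(a) * p(b)), written with counts
        mi += p_ab * math.log(c * n / (count_a[a] * count_b[b]))
    return mi
```

Identical clusterings give the full entropy of the assignment; independent clusterings give zero.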
Physics of Plasmas (special issue)
The Z accelerator [R.B. Spielman, W.A. Stygar, J.F. Seamen et al., Proceedings of the 11th International Pulsed Power Conference, Baltimore, MD, 1997, edited by G. Cooperstein and I. Vitkovitsky (IEEE, Piscataway, NJ, 1997), Vol. 1, p. 709] at Sandia National Laboratories delivers ~20 MA load currents to create high magnetic fields (>1000 T) and high pressures (megabar to gigabar). In a z-pinch configuration, the magnetic pressure (the Lorentz force) supersonically implodes a plasma created from a cylindrical wire array, which at stagnation typically generates a plasma with energy densities of about 10 MJ/cm³ and temperatures >1 keV at 0.1% of solid density. These plasmas produce x-ray energies approaching 2 MJ at powers >200 TW for inertial confinement fusion (ICF) and high energy density physics (HEDP) experiments. In an alternative configuration, the large magnetic pressure directly drives isentropic compression experiments to pressures >3 Mbar and accelerates flyer plates to >30 km/s for equation of state (EOS) experiments at pressures up to 10 Mbar in aluminum. Development of multidimensional radiation-magnetohydrodynamic codes, coupled with more accurate material models (e.g., quantum molecular dynamics calculations with density functional theory), has produced synergy between validating the simulations and guiding the experiments. Z is now routinely used to drive ICF capsule implosions (focusing on implosion symmetry and neutron production) and to perform HEDP experiments (including radiation-driven hydrodynamic jets, EOS, phase transitions, strength of materials, and detailed behavior of z-pinch wire-array initiation and implosion). This research is performed in collaboration with many other groups from around the world. A five year project to enhance the capability and precision of Z, to be completed in 2007, will result in x-ray energies of nearly 3 MJ at x-ray powers >300 TW.
Poly(ethylene oxide) (PEO) is the quintessential biocompatible polymer. Due to its ability to form hydrogen bonds, it is soluble in water, and yet is uncharged and relatively inert. It is being investigated for use in a wide range of biomedical and biotechnical applications, including the prevention of protein adhesion (biofouling), controlled drug delivery, and tissue scaffolds. PEO has also been proposed for use in novel polymer hydrogel nanocomposites with superior mechanical properties. However, the phase behavior of PEO in water is highly anomalous and is not addressed by current theories of polymer solutions. The effective interactions between PEO and water are very concentration dependent, unlike other polymer/solvent systems, due to water-water and water-PEO hydrogen bonds. An understanding of this anomalous behavior requires a careful examination of PEO liquids and solutions on the molecular level. We performed massively parallel molecular dynamics simulations and self-consistent Polymer Reference Interaction Site Model (PRISM) calculations on PEO liquids. We also initiated MD studies on PEO/water solutions with and without an applied electric field. This work is summarized in three parts devoted to: (1) A comparison of MD simulations, theory and experiment on PEO liquids; (2) The implementation of water potentials into the LAMMPS MD code; and (3) A theoretical analysis of the effect of an applied electric field on the phase diagram of polymer solutions.
Terahertz radiation from optically-induced plasmas on metal, semiconductor, and dielectric surfaces is compared to electron-hole plasma radiation from GaAs and Ge. Electro-optic sampling and electric-field probes measure radiated field waveforms and distributions to 0.350 THz.
Journal of Crystal Growth
Gold nanocrystal (NC)/silica films are synthesized through self-assembly of water-soluble gold nanocrystal micelles and silica by sol-gel processing. Absorption and transmission spectra show a strong surface plasmon resonance (SPR) absorption peak at ~520 nm. Angular excitation spectra of the surface plasmon show a steep dip in the reflectivity curve at ~65°, depending on the thickness and refractive index of the gold NC/silica film. A potential SPR sensor with enhanced sensitivity can be realized based on these gold NC/silica films.
A vegetation study was conducted in Technical Area 3 at Sandia National Laboratories, Albuquerque, New Mexico in 2003 to assist in the design and optimization of vegetative soil covers for hazardous, radioactive, and mixed waste landfills at Sandia National Laboratories/New Mexico and Kirtland Air Force Base. The objective of the study was to obtain site-specific vegetative input parameters for the one-dimensional code UNSAT-H and to identify suitable, diverse native plant species for use on vegetative soil covers that will persist indefinitely as a climax ecological community with little or no maintenance. The identification and selection of appropriate native plant species is critical to the proper design and long-term performance of vegetative soil covers. Major emphasis was placed on the acquisition of representative, site-specific vegetation data. Vegetative input parameters measured in the field during this study include root depth, root length density, and percent bare area. Site-specific leaf area index (LAI) was not obtained in the area because there was no suitable platform to measure leaf area during the 2003 growing season, due to the severe drought that has persisted in New Mexico since 1999. Regional LAI data were obtained from two unique desert biomes in New Mexico: the Sevilleta Wildlife Refuge and the Jornada Research Station.
A decomposition chemistry and heat transfer model to predict the response of removable epoxy foam (REF) exposed to fire-like heat fluxes is described. The epoxy foam was created using a perfluorohexane blowing agent with a surfactant. The model includes desorption of the blowing agent and surfactant, thermal degradation of the epoxy polymer, polymer fragment transport, and vapor-liquid equilibrium. An effective thermal conductivity model describes changes in thermal conductivity with reaction extent. Pressurization is modeled assuming: (1) no strain in the condensed-phase, (2) no resistance to gas-phase transport, (3) spatially uniform stress fields, and (4) no mass loss from the system due to venting. The model has been used to predict mass loss, pressure rise, and decomposition front locations for various small-scale and large-scale experiments performed by others. The framework of the model is suitable for polymeric foams with absorbed gases.
Cold spray, a new member of the thermal spray process family, can be used to prepare dense, thick metal coatings. It has tremendous potential as a spray-forming process. However, it is well known that significant cold work occurs during the cold spray deposition process. This cold work results in hard coatings but relatively brittle bulk deposits. This work investigates the mechanical properties of cold-sprayed aluminum and the effect of annealing on those properties. Cold spray coatings approximately 1 cm thick were prepared using three different feedstock powders: Valimet H-10, Valimet H-20, and Brodmann Flomaster. ASTM E8 tensile specimens were machined from these coatings and tested using standard tensile testing procedures. Each material was tested in two conditions: as-sprayed, and after a 300 °C, 22 h air anneal. The as-sprayed material showed high ultimate strength and low ductility, with <1% elongation. The annealed samples showed a reduction in ultimate strength but a dramatic increase in ductility, with up to 10% elongation. The annealed samples exhibited mechanical properties that were similar to those of wrought 1100 H14 aluminum. Microstructural examination and fractography clearly showed a change in fracture mechanism between the as-sprayed and annealed materials. These results indicate good potential for cold spray as a bulk-forming process.
Physical Review Special Topics - Accelerators and Beams
In the search for "good" parallel programming environments for Sandia's current and future parallel architectures, we revisit a long-standing open question: can the PRAM parallel algorithms designed by theoretical computer scientists over the last two decades be implemented efficiently? This open question has co-existed with ongoing efforts in the HPC community to develop practical parallel programming models that can simultaneously provide ease of use, expressiveness, performance, and scalability. Unfortunately, no single model has met all these competing requirements. Here we propose a parallel programming environment, PRAM C, to bridge the gap between theory and practice, in an attempt to provide an affirmative answer to the PRAM question and to satisfy these competing practical requirements. The environment consists of a new thin runtime layer and an ANSI C extension. The C extension has two control constructs and one additional data type concept, "shared". This C extension should enable easy translation from PRAM algorithms to real parallel programs, much like the translation from sequential algorithms to C programs. The thin runtime layer bundles fine-grained communication requests into coarse-grained communication to be served by message passing. Although the PRAM represents SIMD-style fine-grained parallelism, a stand-alone PRAM C environment can support both fine-grained and coarse-grained parallel programming in either a MIMD or SPMD style, interoperate with existing MPI libraries, and use existing hardware. The PRAM C model can also be integrated easily with existing models. Unlike related efforts proposing innovative hardware with the goal of realizing the PRAM, ours can be a pure software solution intended to provide a practical programming environment for existing parallel machines; it also has the potential to perform well on future parallel architectures.
A laser hazard analysis and safety assessment was performed for the LASIRIS™ Model MAG-501L-670M-1000-45°-K diode laser associated with the High Resolution Pulse Scanner, based on ANSI Standard Z136.1-2000, American National Standard for the Safe Use of Lasers, and ANSI Standard Z136.6-2000, American National Standard for the Safe Use of Lasers Outdoors. The laser was evaluated for both indoor and outdoor use.
Proposed for publication in Spectrochimica Acta, Part B - Atomic Spectroscopy.
Laser-induced breakdown spectroscopy (LIBS) was used in the evaluation of aerosol concentration in the exhaust of an oxygen/natural-gas glass furnace. Experiments showed that for a delay time of 10 µs and a gate width of 50 µs, the presence of CO₂ and changes in gas temperature affect the intensity of both continuum emission and the Na D lines. The intensity increased for the neutral Ca and Mg lines in the presence of 21% CO₂ when compared to 100% N₂, whereas the intensity of the Mg and Ca ionic lines decreased. An increase in temperature from 300 to 730 K produced an increase in both continuum emission and Na signal. These laboratory measurements were consistent with measurements in the glass furnace exhaust. Time-resolved analysis of the spark radiation suggested that differences in continuum radiation resulting from changes in bath composition are only apparent at long delay times. The changes in the intensity of ionic and neutral lines in the presence of CO₂ are believed to result from higher free electron number density caused by lower ionization energies of species formed during the spark decay process in the presence of CO₂. For the high Na concentration observed in the glass furnace exhaust, self-absorption of the spark radiation occurred. Power law regression was used to fit laboratory Na LIBS calibration data for sodium loadings, gas temperatures, and a CO₂ content representative of the furnace exhaust. Improvement of the LIBS measurement in this environment may be possible by evaluation of Na lines with weaker emission and through the use of shorter gate delay times.
A particular engineering aspect of distributed sensor networks that has not received adequate attention is the system level hardware architecture of the individual nodes of the network. A novel hardware architecture based on an idea of task specific modular computing is proposed to provide for both the high flexibility and low power consumption required for distributed sensing solutions. The power consumption of the architecture is mathematically analyzed against a traditional approach, and guidelines are developed for application scenarios that would benefit from using this new design. Furthermore a method of decentralized control for the modular system is developed and analyzed. Finally, a few policies for power minimization in the decentralized system are proposed and analyzed.
Proposed for publication in the Applied Physics Letters.
The junction temperature of AlGaN ultraviolet light-emitting diodes emitting at 295 nm is measured by using the temperature coefficients of the diode forward voltage and emission peak energy. The high-energy slope of the spectrum is explored to measure the carrier temperature. A linear relation between junction temperature and current is found. Analysis of the experimental methods reveals that the diode forward voltage is the most accurate (±3 °C). A theoretical model for the dependence of the diode forward voltage (V_f) on junction temperature (T_j) is developed that takes into account the temperature dependence of the energy gap. A thermal resistance of 87.6 K/W is obtained with the device mounted with thermal paste on a heat sink.
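The reported thermal resistance turns the measurement into a simple linear model, T_j = T_sink + R_th · P; a minimal sketch (function name and example values are ours, only R_th = 87.6 K/W comes from the paper):

```python
def junction_temperature_c(heat_sink_temp_c, electrical_power_w,
                           r_th_k_per_w=87.6):
    """Estimate junction temperature from dissipated power using the
    measured thermal resistance, T_j = T_sink + R_th * P."""
    return heat_sink_temp_c + r_th_k_per_w * electrical_power_w
```

For example, 0.5 W dissipated with the heat sink at 25 °C puts the junction near 68.8 °C.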
Proposed for publication in Health Physics.
Journal of Chemical Physics
The effect of polymer-polymer and solvent-polymer interactions on the interdiffusion of a solvent into an entangled polymer matrix was studied. The state of the polymer was changed from melt to glassy by varying the polymer-polymer interaction. From simulation of an equilibrated solvent-polymer solution, it was found that the glassy system with Berthelot's rule applied for the cross term is immiscible except in the dilute limit. Increasing the solvent-polymer interaction enhanced the solubility of the system without changing the nature of the diffusion process.
Physical Review Letters
The high-mobility two-dimensional electron system in the second Landau level (LL) was discussed. In the second LL, the larger extent of the wave function as compared to the lowest LL, together with its additional zero, allows a much broader range of electron correlations to be favorable. An example of electron correlations encountered in the second LL is the even-denominator ν=2+1/2 fractional quantum Hall effect (FQHE) state. As the filling factor is varied, quantum liquids of different origins compete with several insulating phases, leading to an irregular pattern in the transport parameters.
The Design through Analysis Realization Team (DART) will provide analysts with a complete toolset that reduces the time to create, generate, analyze, and manage the data generated in a computational analysis. The toolset will be both easy to learn and easy to use. The DART Roadmap Vision provides for progressive improvements that will reduce the Design through Analysis (DTA) cycle time by 90-percent over a three-year period while improving both the quality and accountability of the analyses.
Protein Science
We present a two-step approach to modeling the transmembrane spanning helical bundles of integral membrane proteins using only sparse distance constraints, such as those derived from chemical cross-linking, dipolar EPR and FRET experiments. In Step 1, using an algorithm we developed, the conformational space of membrane protein folds matching a set of distance constraints is explored to provide initial structures for local conformational searches. In Step 2, these structures are refined against a custom penalty function that incorporates both measures derived from statistical analysis of solved membrane protein structures and distance constraints obtained from experiments. We begin by describing the statistical analysis of the solved membrane protein structures from which the theoretical portion of the penalty function was derived. We then describe the penalty function, and, using a set of six test cases, demonstrate that it is capable of distinguishing helical bundles that are close to the native bundle from those that are far from the native bundle. Finally, using a set of only 27 distance constraints extracted from the literature, we show that our method successfully recovers the structure of dark-adapted rhodopsin to within 3.2 Å of the crystal structure.
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
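One of the simulation techniques such a review covers, drawing variate pairs with a prescribed Pearson correlation from a bivariate normal, reduces to a 2-D Cholesky factor; a minimal stdlib sketch (function name is ours):

```python
import math
import random

def correlated_normals(rho, n, seed=0):
    """Draw n pairs of standard normals with correlation rho using the
    2-D Cholesky factor of the correlation matrix [[1, rho], [rho, 1]]."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        x = z1
        y = rho * z1 + math.sqrt(1.0 - rho * rho) * z2  # Cholesky row 2
        pairs.append((x, y))
    return pairs
```

The sample correlation of a large draw should sit close to the requested rho; other dependence models (copulas, bounds under unknown dependence) require more machinery.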
A case study is reported to document the details of a validation process to assess the accuracy of a mathematical model to represent experiments involving thermal decomposition of polyurethane foam. The focus of the report is to work through a validation process. The process addresses the following activities. The intended application of mathematical model is discussed to better understand the pertinent parameter space. The parameter space of the validation experiments is mapped to the application parameter space. The mathematical models, computer code to solve the models and its (code) verification are presented. Experimental data from two activities are used to validate mathematical models. The first experiment assesses the chemistry model alone and the second experiment assesses the model of coupled chemistry, conduction, and enclosure radiation. The model results of both experimental activities are summarized and uncertainty of the model to represent each experimental activity is estimated. The comparison between the experiment data and model results is quantified with various metrics. After addressing these activities, an assessment of the process for the case study is given. Weaknesses in the process are discussed and lessons learned are summarized.
The sequential probability ratio test (SPRT) minimizes the expected number of observations to a decision and can solve problems in sequential pattern recognition. Some problems have dependencies between the observations, and Markov chains can model dependencies where the state occupancy probability is geometric. For a non-geometric process we show how to use the effective amount of independent information to modify the decision process, so that we can account for the remaining dependencies. Along with dependencies between observations, a successful system needs to handle the unknown class in unconstrained environments. For example, in an acoustic pattern recognition problem any sound source not belonging to the target set is in the unknown class. We show how to incorporate goodness of fit (GOF) classifiers into the Markov SPRT, and determine the worst-case nontarget model. We also develop a multiclass Markov SPRT using the GOF concept.
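The classical independent-observation SPRT that the paper extends accumulates a log-likelihood ratio until it crosses one of two thresholds; a minimal sketch using Wald's approximate thresholds (function name is ours; the Markov and GOF extensions are not shown):

```python
import math

def sprt(log_lik_ratios, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: accumulate per-sample
    log-likelihood ratios and stop at the first threshold crossing.
    alpha = allowed false-alarm rate, beta = allowed miss rate."""
    upper = math.log((1.0 - beta) / alpha)   # accept the target hypothesis
    lower = math.log(beta / (1.0 - alpha))   # accept the null hypothesis
    total = 0.0
    for n, llr in enumerate(log_lik_ratios, start=1):
        total += llr
        if total >= upper:
            return "target", n
        if total <= lower:
            return "null", n
    return "undecided", len(log_lik_ratios)
```

The return value reports both the decision and the number of observations consumed, the quantity the SPRT minimizes in expectation.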
Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
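The Hausdorff fraction itself, the fraction of model points lying within a tolerance of some scene point, is easy to state in code; this brute-force sketch is O(nm) and is not the paper's linear-time approximation (function name is ours):

```python
def hausdorff_fraction(model_pts, scene_pts, tau):
    """Fraction of 3D model points within distance tau of any scene point.
    Brute force over all point pairs, using squared distances to avoid
    square roots."""
    tau2 = tau * tau
    hits = 0
    for mx, my, mz in model_pts:
        if any((mx - sx) ** 2 + (my - sy) ** 2 + (mz - sz) ** 2 <= tau2
               for sx, sy, sz in scene_pts):
            hits += 1
    return hits / len(model_pts)
```

A grid-based distance transform over the scene volume is one way to trade the inner loop for O(1) lookups at quadratic space cost, which is the flavor of speedup the abstract describes.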
A decomposition model has been developed to predict the response of removable syntactic foam (RSF) exposed to fire-like heat fluxes. RSF consists of glass micro-balloons (GMB) in a cured epoxy polymer matrix. A chemistry model is presented based on the chemical structure of the epoxy polymer, mass transport of polymer fragments to the bulk gas, and vapor-liquid equilibrium. Thermophysical properties were estimated from measurements. A bubble nucleation, growth, and coalescence model was used to describe changes in properties with the extent of reaction. Decomposition of a strand of syntactic foam exposed to high temperatures was simulated.
A coupled Euler-Lagrange solution approach is used to model the response of a buried reinforced concrete structure subjected to a close-in detonation of a high explosive charge. The coupling algorithm is discussed along with a set of benchmark calculations involving detonations in clay and sand.
Genetic programming (GP) has proved to be a highly versatile and useful tool for identifying relationships in data for which a more precise theoretical construct is unavailable. In this project, we use a GP search to develop trading strategies for agent based economic models. These strategies use stock prices and technical indicators, such as the moving average convergence/divergence and various exponentially weighted moving averages, to generate buy and sell signals. We analyze the effect of complexity constraints on the strategies as well as the relative performance of various indicators. We also present innovations in the classical genetic programming algorithm that appear to improve convergence for this problem. Technical strategies developed by our GP algorithm can be used to control the behavior of agents in economic simulation packages, such as ASPEN-D, adding variety to the current market fundamentals approach. The exploitation of arbitrage opportunities by technical analysts may help increase the efficiency of the simulated stock market, as it does in the real world. By improving the behavior of simulated stock markets, we can better estimate the effects of shocks to the economy due to terrorism or natural disasters.
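A typical indicator from the set described above, the MACD line, is just the difference of two exponentially weighted moving averages; a minimal sketch (function names and default spans are illustrative, not taken from the GP system):

```python
def ema(prices, span):
    """Exponentially weighted moving average with smoothing 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1.0 - alpha) * out[-1])
    return out

def macd_line(prices, fast=12, slow=26):
    """Moving average convergence/divergence: EMA(fast) - EMA(slow).
    A zero crossing of this line is one simple buy/sell trigger of the
    kind a GP strategy can combine with other signals."""
    return [a - b for a, b in zip(ema(prices, fast), ema(prices, slow))]
```

In a sustained uptrend the fast EMA lags less than the slow one, so the MACD line ends positive; in a downtrend it ends negative.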
The primary goals of the present study are to: (1) determine how and why MEMS-scale friction differs from friction on the macro-scale, and (2) begin to develop a capability to perform finite element simulations of MEMS materials and components that accurately predicts response in the presence of adhesion and friction. Regarding the first goal, a newly developed nanotractor actuator was used to measure friction between molecular monolayer-coated, polysilicon surfaces. Amontons' law does indeed apply over a wide range of forces. However, at low loads, which are of relevance to MEMS, there is an important adhesive contribution to the normal load that cannot be neglected. More importantly, we found that at short sliding distances, the concept of a coefficient of friction is not relevant; rather, one must invoke the notion of 'pre-sliding tangential deflections' (PSTD). Results of a simple 2-D model suggest that PSTD is a cascade of small-scale slips with a roughly constant number of contacts equilibrating the applied normal load. Regarding the second goal, an Adhesion Model and a Junction Model have been implemented in PRESTO, Sandia's transient dynamics finite element code, to enable asperity-level simulations. The Junction Model includes a tangential shear traction that opposes the relative tangential motion of contacting surfaces. An atomic force microscope (AFM)-based method was used to measure nano-scale, single asperity friction forces as a function of normal force. These data are used to determine Junction Model parameters. An illustrative simulation demonstrates the use of the Junction Model in conjunction with a mesh generated directly from an AFM image to directly predict the frictional response of a sliding asperity. Also with regard to the second goal, grid-level, homogenized models were studied.
One would like to perform a finite element analysis of a MEMS component assuming nominally flat surfaces and to include the effect of roughness in such an analysis by using homogenized contact and friction models. AFM measurements were made to determine statistical information on polysilicon surfaces with different roughnesses, and these data were used as input to a homogenized, multi-asperity contact model (the classical Greenwood and Williamson model). Extensions of the Greenwood and Williamson model are also discussed: one incorporates the effect of adhesion while the other modifies the theory so that it applies to the case of relatively few contacting asperities.
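The Greenwood and Williamson model combines Hertzian contact at each asperity summit with a statistical summit-height distribution. A minimal numerical sketch of the expected normal load, assuming a Gaussian height distribution and hypothetical parameter values throughout, is:

```python
import numpy as np

def gw_load(separation, n_asperities, radius, sigma_s, e_star, n_quad=2000):
    """Expected normal load in the Greenwood-Williamson rough-contact model.

    Summit heights are Gaussian with standard deviation sigma_s; each summit
    whose height z exceeds the surface separation d carries a Hertzian load
    (4/3) * E* * sqrt(R) * (z - d)^1.5. The integral over the height
    distribution is evaluated with a simple Riemann sum.
    """
    z = np.linspace(separation, separation + 8 * sigma_s, n_quad)
    dz = z[1] - z[0]
    phi = np.exp(-z**2 / (2 * sigma_s**2)) / (sigma_s * np.sqrt(2 * np.pi))
    integrand = (z - separation) ** 1.5 * phi
    return (4.0 / 3.0) * n_asperities * e_star * np.sqrt(radius) * np.sum(integrand) * dz
```

The adhesion and few-asperity extensions mentioned above would modify the per-summit load law and replace the statistical integral with a discrete sum, respectively.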
This report discusses a set of verification test cases for the frequency-domain, boundary-element, electromagnetics code Eiger based on the analytical solution of plane wave scattering from a sphere. Three cases are considered: a sphere made of perfect electric conductor, a sphere made of lossless dielectric, and a sphere made of lossy dielectric. We outline the procedures that must be followed in order to carefully compare the numerical solution to the analytical solution. We define an error criterion and demonstrate convergence behavior for both the analytical and numerical cases. These problems test the code's ability to calculate the surface current density and secondary quantities, such as near fields and far fields.
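The comparison procedure reduces to two small utilities of the sort implied by an error criterion and a convergence demonstration; the specific relative L2 norm and the two-mesh order estimate below are illustrative assumptions, not the report's exact definitions.

```python
import numpy as np

def rel_l2_error(numerical, analytical):
    """Relative L2 error between sampled numerical and analytical field values."""
    num = np.asarray(numerical, dtype=complex)
    ana = np.asarray(analytical, dtype=complex)
    return np.linalg.norm(num - ana) / np.linalg.norm(ana)

def observed_order(h, err):
    """Convergence order p from errors on two meshes, assuming err ~ C * h**p."""
    return np.log(err[0] / err[1]) / np.log(h[0] / h[1])
```

For example, halving the mesh size while the error drops by a factor of four indicates second-order convergence.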
In this paper we present an analysis of a new configuration for achieving spin-stabilized magnetic levitation. In the classical configuration, the rotor spins about a vertical axis, and the spin stabilizes the top against its lateral instability in the magnetic field. In the new configuration, the rotor spins about a horizontal axis, and the spin stabilizes the top against its axial instability in the magnetic field.
ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to problems for which multigrid methods work well (e.g. elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g. Krylov methods). We have supplied support for working with the Aztec 2.1 and AztecOO iterative packages [16]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy current formulation for Maxwell's equations, and a multilevel and domain decomposition method for symmetric and nonsymmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
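To make the smoothed-aggregation idea concrete, the following NumPy sketch (independent of ML's actual implementation; it uses a dense matrix and a user-supplied aggregation for simplicity) builds the smoothed prolongator P = (I - omega * D^-1 * A) * P_tent and the Galerkin coarse operator:

```python
import numpy as np

def smoothed_aggregation_setup(A, aggregates, omega=2.0 / 3.0):
    """Two-level smoothed-aggregation setup for an SPD matrix A (dense here).

    aggregates[i] gives the coarse aggregate id of fine node i. The tentative
    prolongator P_tent is piecewise constant over aggregates; one damped-Jacobi
    smoothing step improves the coarse space for elliptic problems.
    Returns (P, A_coarse) with A_coarse = P^T A P.
    """
    n = A.shape[0]
    n_coarse = int(max(aggregates)) + 1
    p_tent = np.zeros((n, n_coarse))
    p_tent[np.arange(n), aggregates] = 1.0      # indicator columns per aggregate
    d_inv = 1.0 / np.diag(A)                    # damped-Jacobi uses D^-1
    P = p_tent - omega * (d_inv[:, None] * (A @ p_tent))
    return P, P.T @ A @ P
```

A production code like ML would form the aggregates automatically from the matrix graph and recurse on A_coarse to build the full multilevel hierarchy.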
As technical knowledge grows deeper, broader, and more interconnected, knowledge domains increasingly combine a number of sub-domains. More often than not, each of these sub-domains has its own community of specialists and forums for interaction. Hence, from a generalist's viewpoint, it is sometimes difficult to understand the relationships between the sub-domains within the larger domain; and, from a specialist's viewpoint, it may be difficult for those working in one sub-domain to keep abreast of knowledge gained in another sub-domain. These difficulties can be especially important in the initial stages of creating new projects aimed at adding knowledge either at the domain or sub-domain level. To circumvent these difficulties, one would ideally like to create a map of the knowledge domain: a map which would help clarify relationships between the various sub-domains, and which would help inform choices regarding investing in the production of knowledge either at the domain or sub-domain levels. In practice, creating such a map is non-trivial. First, relationships between knowledge sub-domains are complex, and not likely to be easily simplified into a visualizable map of two or a few dimensions. Second, even if some of the relationships can be simplified, capturing them would require some degree of expert understanding of the knowledge domain, rendering impossible any fully automated method for creating the map. In this work, we accept these limitations, and within them, attempt to explore semi-automated methodologies for creating such a map. We chose as the knowledge domain for this case study 'displacement damage phenomena in Si junction devices'. This knowledge domain spans a particularly wide range of knowledge sub-domains, and hence is a particularly challenging one.
Microfluidic systems are becoming increasingly complicated as the number of applications grows. The use of microfluidic systems for chemical and biological agent detection, for example, requires that a given sample be subjected to many process steps, which requires microvalves to control the position and transport of the sample. Each microfluidic application has its own specific valve requirements and this has precipitated the wide variety of valve designs reported in the literature. Each of these valve designs has its strengths and weaknesses. The strength of the valve design proposed here is its simplicity, which makes it easy to fabricate, easy to actuate, and easy to integrate with a microfluidic system. It can be applied to either gas phase or liquid phase systems. This novel design uses a secondary fluid to stop the flow of the primary fluid in the system. The secondary fluid must be chosen based on the type of flow that it must stop. A dielectric fluid must be used for a liquid phase flow driven by electroosmosis, and a liquid with a large surface tension should be used to stop a gas phase flow driven by a weak pressure differential. Experiments were carried out investigating certain critical functions of the design. These experiments verified that the secondary fluid can be reversibly moved between its 'valve opened' and 'valve closed' positions, where the secondary fluid remained as one contiguous piece during this transport process. The experiments also verified that when Fluorinert is used as the secondary fluid, the valve can break an electric circuit. It was found necessary to apply a hydrophobic coating to the microchannels to stop the primary fluid, an aqueous electrolyte, from wicking past the Fluorinert and short-circuiting the valve. 
A simple model was used to develop valve designs that could be closed using an electrokinetic pump, and re-opened by simply turning the pump off and allowing capillary forces to push the secondary fluid back into its stowed position.
Abstract not provided.
The goal of this study was to first establish the fitness for service of the carbon steel based oil coolers presently located at the Bryan Mound and West Hackberry sites, and second, to compare quantitatively the performance of two proposed corrosion mitigation strategies. To address these goals, a series of flow loops were constructed to simulate the conditions present within the oil coolers, allowing the performance of each corrosion mitigation strategy, as well as the baseline performance of the existing systems, to be assessed. As prior experimentation had indicated that the corrosion and fouling were relatively uniform within the oil coolers, the hot and cold sides of the system were simulated, representing the extremes of temperature observed within a typical oil cooler. Upon completion of the experiment, the depth of localized attack observed on carbon steel was such that perforation of the tube walls would likely result within a 180-day drawdown procedure at West Hackberry. Furthermore, considering the average rate of wall recession (from LPR measurements), combined with the extensive localized attack (pitting) which occurred in both environments, the tubing wall thickness remaining after 180 days would be less than that required to contain the operating pressures of the oil coolers at both sites. Finally, the inhibitor package, while it did reduce the measured corrosion rate in the case of the West Hackberry solutions, did not provide a sufficient reduction in the observed attack to justify its use.
Natural gas is a clean fuel that will be the most important domestic energy resource for the first half of the 21st century. Ensuring a stable supply is essential for our national energy security. The research we have undertaken will maximize the extractable volume of gas while minimizing the environmental impact of surface disturbances associated with drilling and production. This report describes a methodology for comprehensive evaluation and modeling of the total gas system within a basin, focusing on problematic horizontal fluid flow variability. This has been accomplished through extensive use of geophysical, core (rock sample), and outcrop data to interpret and predict directional flow and production trends. Side benefits include reduced environmental impact of drilling due to the reduced number of wells required for resource extraction. These results have been accomplished through a cooperative and integrated systems approach involving industry, government, academia, and a multi-organizational team within Sandia National Laboratories. Industry has provided essential in-kind support to this project in the forms of extensive core data, production data, maps, seismic data, production analyses, engineering studies, plus equipment and staff for obtaining geophysical data. This approach provides innovative ideas and technologies to bring new resources to market and to reduce the overall environmental impact of drilling. More importantly, the products of this research are not location-specific but can be extended to other areas of gas production throughout the Rocky Mountain area. Thus, this project is designed to solve problems associated with natural gas production at developing sites, or at old sites under redevelopment.
A novel method employing machine-based learning to identify messages related to other messages is described and evaluated. This technique may enable an analyst to identify and correlate a small number of related messages from a large sample of individual messages. The classic machine learning techniques of decision trees and naive Bayes classification are seeded with a few (or no) messages of interest and 'learn' to identify other related messages. The performance of this approach and of these specific learning techniques is evaluated and generalized.
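The naive Bayes half of such an approach can be sketched compactly. The whitespace tokenization, Laplace smoothing, and toy labels below are illustrative assumptions, not details from the report:

```python
import math
from collections import Counter

def train_nb(messages, labels):
    """Fit a multinomial naive Bayes classifier on whitespace-tokenized messages.

    Returns, per class, a log-prior and Laplace-smoothed token log-likelihoods.
    """
    classes = sorted(set(labels))
    docs = {c: [] for c in classes}
    for text, y in zip(messages, labels):
        docs[y].extend(text.lower().split())
    vocab = set(w for words in docs.values() for w in words)
    model = {}
    for c in classes:
        counts = Counter(docs[c])
        total = sum(counts.values())
        prior = math.log(labels.count(c) / len(labels))
        like = {w: math.log((counts[w] + 1) / (total + len(vocab))) for w in vocab}
        model[c] = (prior, like)
    return model

def classify(model, text):
    """Return the class with the highest posterior log-probability.

    Tokens outside the training vocabulary are ignored (a simplification).
    """
    def score(c):
        prior, like = model[c]
        return prior + sum(like.get(w, 0.0) for w in text.lower().split())
    return max(model, key=score)
```

Seeding with only a handful of labeled messages of interest, as the abstract describes, corresponds to training on a very small 'related' class and re-training as the analyst confirms new matches.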
Hydrogen has the potential to become an integral part of our energy transportation and heat and power sectors in the coming decades and offers a possible solution to many of the problems associated with a heavy reliance on oil and other fossil fuels. The Hydrogen Futures Simulation Model (H2Sim) was developed to provide a high-level, internally consistent, strategic tool for evaluating the economic and environmental trade-offs of alternative hydrogen production, storage, transport, and end-use options in the year 2020. Based on the model's default assumptions, estimated hydrogen production costs range from 0.68 $/kg for coal gasification to as high as 5.64 $/kg for centralized electrolysis using solar PV. Coal gasification remains the least-cost option if carbon capture and sequestration costs ($0.16/kg) are added. This result is fairly robust; for example, assumed coal prices would have to more than triple, or the assumed capital cost would have to increase by more than 2.5 times, for natural gas reformation to become the cheaper option. Alternatively, assumed natural gas prices would have to fall below $2/MBtu to compete with coal gasification. The electrolysis results are highly sensitive to electricity costs, but electrolysis becomes cost-competitive with the other options only when electricity drops below 1 cent/kWh. Delivered 2020 hydrogen costs are likely to be double the estimated production costs because of the inherent difficulties of storing, transporting, and dispensing hydrogen, which stem from its low volumetric density. H2Sim estimates distribution costs ranging from 1.37 $/kg (low distance, low production) to 3.23 $/kg (long distance, high production volumes, carbon sequestration). Distributed hydrogen production options, such as on-site natural gas reformation, would avoid some of these costs.
H2Sim compares the expected 2020 per-mile driving costs (fuel, capital, maintenance, license, and registration) of current-technology internal combustion engine (ICE) vehicles (0.55 $/mile), hybrids (0.56 $/mile), and electric vehicles (0.82-0.84 $/mile) with 2020 fuel cell vehicles (FCVs) (0.64-0.66 $/mile), fuel cell vehicles with onboard gasoline reformation (FCVOB) (0.70 $/mile), and direct-combustion hydrogen hybrid vehicles (H2Hybrid) (0.55-0.59 $/mile). The results suggest that while the H2Hybrid may be competitive with ICE vehicles, it will be difficult for the FCV to compete without significant increases in gasoline prices, vehicle costs below current predictions, or stringent carbon policies, or unless FCVs can offer the consumer something existing vehicles cannot, such as on-demand power, lower emissions, or better performance.
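The delivered-cost arithmetic behind these comparisons is simple to reproduce. The production, sequestration, and distribution figures used below come from the abstract; the fuel-economy and fixed-cost values are illustrative assumptions, not H2Sim defaults.

```python
def delivered_cost(production, sequestration, distribution):
    """Delivered hydrogen cost ($/kg) as the sum of its cost components."""
    return production + sequestration + distribution

def cost_per_mile(delivered, miles_per_kg, fixed_per_mile):
    """Per-mile driving cost ($/mile): fuel cost plus amortized capital,
    maintenance, license, and registration."""
    return delivered / miles_per_kg + fixed_per_mile
```

For example, coal gasification (0.68 $/kg) with sequestration (0.16 $/kg) and low-end distribution (1.37 $/kg) gives a delivered cost of 2.21 $/kg, consistent with the abstract's observation that delivery roughly doubles the production cost.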
Abstract not provided.
Specimens of poled 'chem-prep' PNZT ceramic from batch HF803 were tested under hydrostatic, uniaxial, and constant-stress-difference loading conditions at three temperatures (-55, 25, and 75 C) and pressures up to 500 MPa. The objective of this experimental study was to obtain the electromechanical properties of the ceramic and the criteria for the ferroelectric (FE) to antiferroelectric (AFE) phase transformation, so that grain-scale modeling efforts can develop and test models and codes using realistic parameters. The poled ceramic undergoes anisotropic deformation during the transition from an FE to an AFE structure. The lateral strain measured parallel to the poling direction was typically 35% greater than the strain measured perpendicular to the poling direction. The rates of increase of the phase-transformation pressure with temperature were practically identical for unpoled and poled PNZT HF803 specimens. We observed that the retarding effect of temperature on the kinetics of the phase transformation appears to be analogous to the effect of shear stress. We also observed that the FE-to-AFE phase transformation occurs in poled ceramic when the normal compressive stress, acting perpendicular to a crystallographic plane about the polar axis, equals the hydrostatic pressure at which the transformation otherwise takes place.