Publications

BaO/W(100) thermionic emitters and the effects of Sc, Y, La, and the density functional used in computations

Proposed for publication in Surface Science Letters.

Jennison, Dwight R.; Schultz, Peter A.; King, Donald B.; Zavadil, Kevin R.

Density functional theory is used to predict work functions, ψ. For relaxed clean W(100), the local density approximation (LDA) agrees with experiment better than the newer generalized gradient approximation, probably due to the surface electron self-energy. The large Ba metallic radius indicates that Ba covers W(100) at about 0.5 monolayer (ML). However, Ba²⁺, O²⁻, and metallic W all have similar radii, so 1 ML of BaO (one BaO unit for each two W atoms) produces minimum strain, indicating commensurate interfaces. BaO (1 ML) and Ba (1/2 ML) have the same ψ to within 0.02 V, so at these coverages reduction or oxidation is not important. Because ScO is chemically more active than highly ionic BaO, mixed overlayers of the two always place BaO in the top layer and ScO in the second layer. The BaO/ScO bilayer has a rocksalt structure, suggesting high stability. In the series BaO/ScO/, BaO/YO/, and BaO/LaO/W(100), the last has a remarkably low ψ of 1.3 V (LDA), but 2 ML of rocksalt BaO also has ψ at 1.3 V. We suggest that BaO (1 ML) does not exist and that it is worthwhile to attempt the direct synthesis and study of BaO (2 ML) and BaO/LaO.

SGOPT User Manual Version 2.0

Hart, William E.

This document provides a user manual for the SGOPT software library. SGOPT is a C++ class library for nonlinear optimization. This library uses an object-oriented design that allows the software to be extended to new problem domains. Furthermore, the library was designed so that the interface is straightforward while remaining flexible enough that new algorithms can easily be added. The SGOPT library has been used by several software projects at Sandia, and it is integrated into the DAKOTA design and analysis toolkit. This report provides a high-level description of the optimization algorithms provided by SGOPT and describes the C++ class hierarchy in which they are implemented. Finally, installation instructions are included.

ACME: Algorithms for Contact in a Multiphysics Environment API Version 1.3

Brown, Kevin H.; Voth, Thomas E.; Glass, Micheal W.; Gullerud, Arne S.; Heinstein, Martin W.; Jones, Reese E.

An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.

An Exploration in Implementing Fault Tolerance in Scientific Simulation Application Software

Drake, Richard R.; Summers, Randall M.

The ability for scientific simulation software to detect and recover from errors and failures of supporting hardware and software layers is becoming more important due to the pressure to shift from large, specialized multi-million dollar ASCI computing platforms to smaller, less expensive interconnected machines consisting of off-the-shelf hardware. As evidenced by the CPlant™ experiences, fault tolerance can be necessary even on such a homogeneous system and may also prove useful in the next generation of ASCI platforms. This report describes a research effort intended to study, implement, and test the feasibility of various fault tolerance mechanisms controlled at the simulation code level. Errors and failures would be detected by underlying software layers, communicated to the application through a convenient interface, and then handled by the simulation code itself. Targeted faults included corrupt communication messages, processor node dropouts, and unacceptable slowdown of service from processing nodes. Recovery techniques such as re-sending communication messages and dynamic reallocation of failing processor nodes were considered. However, most fault tolerance mechanisms rely on underlying software layers which were discovered to be lacking to such a degree that mechanisms at the application level could not be implemented. This research effort has been postponed and shifted to these supporting layers.

Design, implementation, and performance of MPI on Portals 3.0

International Journal of High Performance Computing Applications

Brightwell, Ronald B.; Riesen, Rolf; Maccabe, Arthur B.

This paper describes an implementation of the Message Passing Interface (MPI) on the Portals 3.0 data movement layer. Portals 3.0 provides low-level building blocks that are flexible enough to support higher-level message passing layers, such as MPI, very efficiently. Portals 3.0 is also designed to allow programmable network interface cards to offload message processing from the host processor, making it possible to overlap computation and MPI communication. We describe the basic building blocks in Portals 3.0, show how they can be put together to implement MPI, and describe the protocols of our MPI implementation. We look at several key operations within the implementation and describe the effects that a Portals 3.0 implementation has on scalability and performance. We also present preliminary performance results from our implementation for Myrinet.

Statistical Validation of Engineering and Scientific Models: Validation Experiments to Application

Trucano, Timothy G.

Several major issues associated with model validation are addressed here. First, we extend the application-based model validation metric presented in Hills and Trucano (2001) to the maximum likelihood approach introduced in Hills and Trucano (2002). This method allows us to use the target application of the code to weight the measurements made in a validation experiment, so that the measurements most important for the application are weighted most heavily. Second, we further develop the linkage between suites of validation experiments and the target application so that we can (1) provide some measure of coverage of the target application and (2) evaluate the effect of uncertainty in the measurements and model parameters on application-level validation. We provide several examples of this approach based on steady and transient heat conduction and shock physics applications.

Growth and morphology of cadmium chalcogenides: the synthesis of nanorods, tetrapods, and spheres from CdO and Cd(O2CCH3)2

Proposed for publication in the Journal of Chemistry and Materials.

Bunge, Scott D.; Boyle, Timothy J.; Rodriguez, Marko A.; Headley, Thomas J.

In this work, we investigated the controlled growth of nanocrystalline CdE (E = S, Se, and Te) via the pyrolysis of CdO and Cd(O2CCH3)2 precursors at a specific Cd to E mole ratio of 0.67 to 1. The experimental results reveal that while the growth of CdS produces only a spherical morphology, CdSe and CdTe exhibit rod-like and tetrapod-like morphologies with temporally controllable aspect ratios. Over a 7200 s time period, CdS spheres grew from 4 nm (15 s aliquot) to 5 nm, CdSe nanorods grew from dimensions of 10.8 x 3.6 nm (15 s aliquot) to 25.7 x 11.2 nm, and CdTe tetrapods with arms of 15 x 3.5 nm (15 s aliquot) grew into a polydisperse mixture of spheres, rods, and tetrapods on the order of 20 to 80 nm. Interestingly, long tracks of self-assembled CdSe nanorods (3.5 x 24 nm), over one micron in length, were observed. The temporal growth of each nanocrystalline material was monitored by UV-vis spectroscopy and transmission electron microscopy, and further characterized by powder X-ray diffraction. This study has elucidated the vastly different morphologies available for CdS, CdSe, and CdTe during the first 7200 s after injection of the desired chalcogenide.

Discrete sensor placement problems in distribution networks

Hart, William E.

We consider the problem of placing sensors in a network to detect and identify the source of any contamination. We consider two variants of this problem: (1) sensor-constrained: we are allowed a fixed number of sensors and want to minimize contamination detection time; and (2) time-constrained: we must detect contamination within a given time limit and want to minimize the number of sensors required. Our main results are as follows. First, we give a necessary and sufficient condition for source identification. Second, we show that the sensor and time constrained versions of the problem are polynomially equivalent. Finally, we show that the sensor-constrained version of the problem is polynomially equivalent to the asymmetric k-center problem and that the time-constrained version of the problem is polynomially equivalent to the dominating set problem.

Sensor placement in municipal water networks

Hart, William E.; Phillips, Cynthia A.

We present a model for optimizing the placement of sensors in municipal water networks to detect maliciously injected contaminants. An optimal sensor configuration minimizes the expected fraction of the population at risk. We formulate this problem as an integer program, which can be solved with generally available IP solvers. We find optimal sensor placements for three real networks with synthetic risk and population data. Our experiments illustrate that this formulation can be solved relatively quickly, and that the predicted sensor configuration is relatively insensitive to uncertainties in the data used for prediction.
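
To make the flavor of such a formulation concrete, the following is a minimal sketch of a detection-impact integer program of this general type; the notation is ours for illustration and is not the exact model in the paper. Here x_s = 1 if a sensor is placed at candidate location s, y_as = 1 if injection scenario a is first detected by the sensor at s (with a dummy "undetected" location included in L), d_as is the fraction of the population at risk in that case, and at most k sensors may be placed:

    \min \; \frac{1}{|A|} \sum_{a \in A} \sum_{s \in L} d_{as}\, y_{as}
    \quad \text{s.t.} \quad
    \sum_{s \in L} y_{as} = 1 \;\; \forall a \in A, \qquad
    y_{as} \le x_s \;\; \forall a, s, \qquad
    \sum_{s \in L} x_s \le k, \qquad
    x_s \in \{0,1\}, \;\; y_{as} \ge 0.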

An introduction to the COLIN optimization interface

Hart, William E.

We describe COLIN, a Common Optimization Library INterface for C++. COLIN provides C++ template classes that define a generic interface for both optimization problems and optimization solvers. COLIN is specifically designed to facilitate the development of hybrid optimizers, for which one optimizer calls another to solve an optimization subproblem. We illustrate the capabilities of COLIN with an example of a memetic genetic programming solver.
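
The following C++ sketch illustrates the kind of generic problem/solver templating described above. It is a hypothetical illustration only: the class names (Problem, Solver, CoordinateSearch) and their methods are our own assumptions and are not COLIN's actual API.

    // Hypothetical sketch of a generic problem/solver interface in the spirit of
    // COLIN; names and signatures are illustrative, not COLIN's actual API.
    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <vector>

    // A generic optimization problem: an objective over a real-valued point.
    struct Problem {
        std::function<double(const std::vector<double>&)> objective;
        std::vector<double> initial_point;
    };

    // A generic solver interface; concrete solvers (or hybrids that hand
    // subproblems to other solvers) implement solve().
    class Solver {
    public:
        virtual ~Solver() = default;
        virtual std::vector<double> solve(const Problem& p) = 0;
    };

    // A trivial coordinate-search solver standing in for a real algorithm.
    class CoordinateSearch : public Solver {
    public:
        std::vector<double> solve(const Problem& p) override {
            std::vector<double> x = p.initial_point;
            double best = p.objective(x);
            double step = 0.5;
            while (step > 1e-6) {
                bool improved = false;
                for (std::size_t i = 0; i < x.size(); ++i) {
                    for (double d : {step, -step}) {
                        std::vector<double> trial = x;
                        trial[i] += d;
                        double f = p.objective(trial);
                        if (f < best) { best = f; x = trial; improved = true; }
                    }
                }
                if (!improved) step *= 0.5;  // shrink the pattern when no move helps
            }
            return x;
        }
    };

    int main() {
        Problem p;
        p.objective = [](const std::vector<double>& v) {
            return (v[0] - 1.0) * (v[0] - 1.0) + (v[1] + 2.0) * (v[1] + 2.0);
        };
        p.initial_point = {0.0, 0.0};
        CoordinateSearch cs;
        std::vector<double> x = cs.solve(p);
        std::cout << "minimizer: " << x[0] << ", " << x[1] << "\n";
        return 0;
    }

A hybrid optimizer in this style would simply be another Solver whose solve() constructs subproblems and hands them to a second Solver instance.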

Molecular Dynamics Simulation of Polymer Dissolution

Thompson, Aidan P.

In the LIGA process for manufacturing microcomponents, a polymer film is exposed to an x-ray beam passed through a gold pattern. This is followed by the development stage, in which a selective solvent is used to remove the exposed polymer, reproducing the gold pattern in the polymer film. Development is essentially polymer dissolution, a physical process which is not well understood. We have used coarse-grained molecular dynamics simulation to study the early stage of polymer dissolution. In each simulation a film of non-glassy polymer was brought into contact with a layer of solvent. The mutual penetration of the two phases was tracked as a function of time. Several film thicknesses and two different chain lengths were simulated. In all cases, the penetration process conformed to ideal Fickian diffusion. We did not see the formation of a gel layer or other non-ideal effects. Variations in the Fickian diffusivities appeared to be caused primarily by differences in the bulk polymer film density.
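
For reference, ideal Fickian behavior (our gloss on the term, not an equation taken from the report) means the solvent concentration profile obeys the one-dimensional diffusion equation, so the interpenetration depth grows as the square root of time:

    \frac{\partial c}{\partial t} = D\,\frac{\partial^{2} c}{\partial z^{2}},
    \qquad \ell(t) \sim \sqrt{D\,t},

which is consistent with describing the simulations by a single fitted diffusivity rather than an anomalous exponent.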

Engineering a transformation of human-machine interaction to an augmented cognitive relationship

Forsythe, James C.; Bernard, Michael L.; Xavier, Patrick G.; Abbott, Robert G.; Speed, Ann S.; Brannon, Nathan B.

This project is being conducted by Sandia National Laboratories in support of the DARPA Augmented Cognition program. Work commenced in April of 2002. The objective for the DARPA program is to 'extend, by an order of magnitude or more, the information management capacity of the human-computer warfighter.' Initially, emphasis has been placed on detection of an operator's cognitive state so that systems may adapt accordingly (e.g., adjust information throughput to the operator in response to workload). Work conducted by Sandia focuses on development of technologies to infer an operator's ongoing cognitive processes, with specific emphasis on detecting discrepancies between machine state and an operator's ongoing interpretation of events.

Carbon sequestration in Synechococcus Sp.: from molecular machines to hierarchical modeling

Proposed for publication in OMICS: A Journal of Integrative Biology, Vol. 6, No. 4, 2002.

Heffelfinger, Grant S.; Faulon, Jean-Loup M.; Frink, Laura J.; Haaland, David M.; Hart, William E.; Lane, Todd L.; Plimpton, Steven J.; Roe, Diana C.; Timlin, Jerilyn A.; Martino, Anthony M.; Rintoul, Mark D.; Davidson, George S.

The U.S. Department of Energy recently announced the first five grants for the Genomes to Life (GTL) Program. The goal of this program is to "achieve the most far-reaching of all biological goals: a fundamental, comprehensive, and systematic understanding of life." While more information about the program can be found at the GTL website (www.doegenomestolife.org), this paper provides an overview of one of the five GTL projects funded, "Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to Hierarchical Modeling." This project is a combined experimental and computational effort emphasizing the development, prototyping, and application of new computational tools and methods to elucidate the biochemical mechanisms of carbon sequestration in Synechococcus Sp., an abundant marine cyanobacterium known to play an important role in the global carbon cycle. Understanding, predicting, and perhaps manipulating carbon fixation in the oceans has long been a major focus of biological oceanography and has more recently been of interest to a broader audience of scientists and policy makers. It is clear that the oceanic sinks and sources of CO2 are important terms in the global environmental response to anthropogenic atmospheric inputs of CO2 and that oceanic microorganisms play a key role in this response. However, the relationship between this global phenomenon and the biochemical mechanisms of carbon fixation in these microorganisms is poorly understood. The project includes five subprojects: an experimental investigation, three computational biology efforts, and a fifth that addresses computational infrastructure challenges of relevance to this project and the Genomes to Life program as a whole. Our experimental effort is designed to provide biology and data to drive the computational efforts and includes significant investment in developing new experimental methods for uncovering protein partners, characterizing protein complexes, and identifying new binding domains. We will also develop and apply new data measurement and statistical methods for analyzing microarray experiments. Our computational efforts include coupling molecular simulation methods with knowledge discovery from diverse biological data sets for high-throughput discovery and characterization of protein-protein complexes, and developing a set of novel capabilities for inference of regulatory pathways in microbial genomes across multiple sources of information through the integration of computational and experimental technologies. These capabilities will be applied to Synechococcus regulatory pathways to characterize their interaction map and identify component proteins in these pathways. We will also investigate methods for combining experimental and computational results with visualization and natural language tools to accelerate discovery of regulatory pathways. Furthermore, given that the ultimate goal of this effort is to develop a systems-level understanding of how the Synechococcus genome affects carbon fixation at the global scale, we will develop and apply a set of tools for capturing the carbon fixation behavior of Synechococcus at different levels of resolution. Finally, because the explosion of data being produced by high-throughput experiments requires data analysis and models that are more computationally complex, more heterogeneous, and coupled to ever-increasing amounts of experimentally obtained data in varying formats, we have also established a companion computational infrastructure to support this effort as well as the Genomes to Life program as a whole.

Verification, validation, and predictive capability in computational engineering and physics

Bunge, Scott D.; Boyle, Timothy J.; Headley, Thomas J.; Kotula, Paul G.; Rodriguez, M.A.

Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.

Solidification Diagnostics for Joining and Microstructural Simulations

Robino, Charles V.; Hall, Aaron C.; Brooks, John A.; Headley, Thomas J.; Roach, R.A.

Solidification is an important aspect of welding, brazing, soldering, LENS fabrication, and casting. The current trend toward utilizing large-scale process simulations and materials response models for simulation-based engineering is driving the development of new modeling techniques. However, the effective utilization of these models is, in many cases, limited by a lack of fundamental understanding of the physical processes and interactions involved. In addition, experimental validation of model predictions is required. We have developed new and expanded experimental techniques, particularly those needed for in-situ measurement of the morphological and kinetic features of the solidification process. The new high-speed, high-resolution video techniques and data extraction methods developed in this work have been used to identify several unexpected features of the solidification process, including the observation that the solidification front is often far more dynamic than previously thought. In order to demonstrate the utility of the video techniques, correlations have been made between the in-situ observations and the final solidification microstructure. Experimental methods for determination of the solidification velocity in highly dynamic pulsed laser welds have been developed, implemented, and used to validate and refine laser welding models. Using post solidification metallographic techniques, we have discovered a previously unreported orientation relationship between ferrite and austenite in the Fe-Cr-Ni alloy system, and have characterized the conditions under which this new relationship develops. Taken together, the work has expanded both our understanding of, and our ability to characterize, solidification phenomena in complex alloy systems and processes.

Computational Algorithms for Device-Circuit Coupling

Gardner, Timothy J.; Mclaughlin, Linda I.; Mowery-Evans, Deborah L.

Circuit simulation tools (e.g., SPICE) have become invaluable in the development and design of electronic circuits. Similarly, device-scale simulation tools (e.g., DaVinci) are commonly used in the design of individual semiconductor components. Some problems, such as single-event upset (SEU), require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (sometimes conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the partial differential equations (PDE) device, while optimizing the numerics for both.
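
As a rough sketch of the idea (our notation, not taken from the report), a two-level Newton iteration nests the device PDE solve inside each circuit Newton step: for the current circuit iterate v^k, the device equations are converged to give terminal currents and their sensitivities, which then enter the outer circuit update:

    G\bigl(u, v^{k}\bigr) = 0 \;\Rightarrow\; u(v^{k}), \qquad
    i(v^{k}) = h\bigl(u(v^{k}), v^{k}\bigr), \qquad
    v^{k+1} = v^{k} - \left[\frac{\partial F}{\partial v}
        + \frac{\partial F}{\partial i}\,\frac{d i}{d v}\right]^{-1}
        F\bigl(v^{k}, i(v^{k})\bigr),

where F denotes the circuit equations, G the discretized device PDE, u the device unknowns, and v the circuit unknowns.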

SIERRA Framework Version 3: Transfer Services Design and Use

Stewart, James R.

This paper presents a description of the SIERRA Framework Version 3 parallel transfer operators. The high-level design including object interrelationships, as well as requirements for their use, is discussed. Transfer operators are used for moving field data from one computational mesh to another. The need for this service spans many different applications. The most common application is to enable loose coupling of multiple physics modules, such as for the coupling of a quasi-statics analysis with a thermal analysis. The SIERRA transfer operators support the transfer of nodal and element fields between meshes of different, arbitrary parallel decompositions. Also supplied are "copy" transfer operators for efficient transfer of fields between identical meshes. A "copy" transfer operator is also implemented for constraint objects. Each of these transfer operators is described. Also, two different parallel algorithms are presented for handling the geometric misalignment between different parallel-distributed meshes.

Nonlinear programming strategies for source detection of municipal water networks

van Bloemen Waanders, Bart G.; Bartlett, Roscoe B.

Increasing concerns for the security of the national infrastructure have led to a growing need for improved management and control of municipal water networks. To deal with this issue, optimization offers a general and extremely effective method to identify (possibly harmful) disturbances, assess the current state of the network, and determine operating decisions that meet network requirements and lead to optimal performance. This paper details an optimization strategy for the identification of source disturbances in the network. Here we consider the source inversion problem modeled as a nonlinear programming problem. Dynamic behavior of municipal water networks is simulated using EPANET. This approach allows for a widely accepted, general purpose user interface. For the source inversion problem, flows and concentrations of the network will be reconciled and unknown sources will be determined at network nodes. Moreover, intrusive optimization and sensitivity analysis techniques are identified to assess the influence of various parameters and models in the network in a computationally efficient manner. A number of numerical comparisons are made to demonstrate the effectiveness of various optimization approaches.

Simple, scalable protocols for high-performance local networks

Proceedings - Conference on Local Computer Networks, LCN

Riesen, Rolf; Maccabe, Arthur B.

RMPP (reliable message passing protocol) is a lightweight transport protocol designed for clusters that provides end-to-end flow control and fault tolerance. This article compares RMPP to TCP, UDP, and "Utopia" on four benchmarks: bandwidth, latency, all-to-all, and communication-computation overlap. The results show that message-based protocols like RMPP have several advantages over TCP, including ease of implementation, support for computation/communication overlap, and low CPU overhead.

SIERRA Framework Version 3: h-Adaptivity Design and Use

Stewart, James R.; Edwards, Harold C.

This paper presents a high-level overview of the algorithms and supporting functionality provided by SIERRA Framework Version 3 for h-adaptive finite-element mechanics application development. Also presented is a fairly comprehensive description of what is required by the application codes to use the SIERRA h-adaptivity services. In general, the SIERRA framework provides the functionality for hierarchically subdividing elements in a distributed parallel environment, as well as dynamic load balancing. The mechanics application code is required to supply an a posteriori error indicator, prolongation and restriction operators for the field variables, hanging-node constraint handlers, and execution control code. This paper does not describe the Application Programming Interface (API), although references to SIERRA framework classes are given where appropriate.

SIERRA Framework Version 3: Core Services Theory and Design

Edwards, Harold C.

The SIERRA Framework core services provide essential services for managing the mesh data structure, computational fields, and physics models of an application. An application using these services will supply a set of physics models, define the computational fields that are required by those models, and define the mesh upon which its physics models operate. The SIERRA Framework then manages all of the data for a massively parallel multiphysics application.

Analysis of Price Equilibriums in the Aspen Economic Model under Various Purchasing Methods

Slepoy, Natasha S.; Pryor, Richard J.

Aspen, an economic modeling tool that uses agent-based modeling and genetic algorithms, simulates the economy. In it, individuals are hired by firms to produce a good that households then purchase. The firms decide what price to charge for this good, and based on that price, the households determine which firm to purchase from. We attempt to determine the Nash equilibrium price in this model under two different methods of deciding how many orders each firm receives. For simplicity, we assume there are only two firms in our model and that these firms compete for the sale of one identical good.

Predicting Function of Biological Macromolecules: A Summary of LDRD Activities: Project 10746

Frink, Laura J.; Rempe, Susan R.; Means, Shawn A.; Stevens, Mark J.; Crozier, Paul C.; Martin, Marcus G.; Sears, Mark P.; Hjalmarson, Harold P.

This LDRD project involved the development and application of Sandia's massively parallel materials modeling software to several significant biophysical systems. The team successfully applied the molecular dynamics code LAMMPS to modeling DNA, unstructured proteins, and lipid membranes. They developed and applied a coupled transport-molecular theory code (Tramonto) to study ion channel proteins, with gramicidin A as a prototype. They used the Towhee configurational-bias Monte Carlo code to perform rigorous tests of biological force fields, and they applied the MP-Salsa reaction-diffusion code to model cellular systems. Electroporation of cell membranes was also studied, and detailed quantum mechanical studies of ion solvation were performed. In addition, new molecular theory algorithms were developed (in FasTram) that may ultimately make protein solvation calculations feasible on workstations. Finally, the team began implementation of a combined molecular theory and configurational-bias Monte Carlo code. This LDRD has provided a basis for several new internal proposals (several new LDRDs) and external proposals (four NIH proposals and a DOE Genomes to Life proposal).

Xyce Parallel Electronic Simulator - User's Guide, Version 1.0

Hutchinson, Scott A.; Keiter, Eric R.; Hoekstra, Robert J.; Waters, Lon J.; Russo, Thomas V.; Rankin, Eric R.; Wix, Steven D.

This manual describes the use of the Xyce Parallel Electronic Simulator code for simulating electrical circuits at a variety of abstraction levels. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. As such, the development has focused on improving the capability over the current state of the art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) A client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. The code is parallel in the most general sense of the phrase: a message-passing parallel implementation, which allows it to run efficiently on the widest possible range of computing platforms. These include serial, shared-memory, and distributed-memory parallel machines as well as heterogeneous platforms. Furthermore, careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved even as the number of processors grows. Another feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, and look-up tables. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important contributions Xyce makes to the designers at Sandia National Laboratories is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an "in-house" capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Furthermore, these capabilities will then be migrated to the end users.

Generalized Fourier Analyses of Semi-Discretizations of the Advection-Diffusion Equation

Christon, Mark A.; Voth, Thomas E.; Martinez, Mario J.

This report presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speeds, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis (aka von Neumann analysis) provides an automatic process for separating the spectral behavior of the discrete advective operator into its symmetric dissipative and skew-symmetric advective components. Further it is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element analogue, streamline upwind control-volume, produce both an artificial diffusivity and an artificial phase speed in addition to the usual semi-discrete artifacts observed in the discrete phase speed, group speed and diffusivity. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behavior in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behavior. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework.
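
As a simple illustration of the technique (standard textbook material, not a result specific to this report), a von Neumann analysis of the centered finite difference semi-discretization of u_t + a u_x = \nu u_xx substitutes the Fourier mode u_j(t) = \hat{u}(t) e^{i k j h} and cleanly separates the discrete operator into its skew-symmetric (advective) and symmetric (dissipative) parts:

    \frac{d\hat{u}}{dt}
      = -\,\frac{i\,a\,\sin(kh)}{h}\,\hat{u}
        \;-\; \frac{4\nu}{h^{2}}\,\sin^{2}\!\left(\frac{kh}{2}\right)\hat{u},

giving a non-dimensional discrete phase speed of \sin(kh)/(kh) and a discrete diffusivity of \nu\,[\sin(kh/2)/(kh/2)]^{2}.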

ALEGRA: User Input and Physics Descriptions Version 4.2

Boucheron, Edward A.; Haill, Thomas A.; Peery, James S.; Petney, Sharon P.; Robbins, Joshua R.; Robinson, Allen C.; Summers, Randall M.; Voth, Thomas E.; Wong, Michael K.; Brown, Kevin H.; Budge, Kent G.; Burns, Shawn P.; Carroll, Daniel E.; Carroll, Susan K.; Christon, Mark A.; Drake, Richard R.; Garasi, Christopher J.

ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation. This document describes the user input language for the code.

Large Scale Non-Linear Programming for PDE Constrained Optimization

van Bloemen Waanders, Bart G.; Bartlett, Roscoe B.; Long, Kevin R.; Boggs, Paul T.; Salinger, Andrew G.

Three years of large-scale PDE-constrained optimization research and development are summarized in this report. We have developed an optimization framework for 3 levels of SAND optimization and developed a powerful PDE prototyping tool. The optimization algorithms have been interfaced and tested on CVD problems using a chemically reacting fluid flow simulator resulting in an order of magnitude reduction in compute time over a black box method. Sandia's simulation environment is reviewed by characterizing each discipline and identifying a possible target level of optimization. Because SAND algorithms are difficult to test on actual production codes, a symbolic simulator (Sundance) was developed and interfaced with a reduced-space sequential quadratic programming framework (rSQP++) to provide a PDE prototyping environment. The power of Sundance/rSQP++ is demonstrated by applying optimization to a series of different PDE-based problems. In addition, we show the merits of SAND methods by comparing seven levels of optimization for a source-inversion problem using Sundance and rSQP++. Algorithmic results are discussed for hierarchical control methods. The design of an interior point quadratic programming solver is presented.

Self-Reconfigurable Robots

Hensinger, David M.; Johnston, Gabriel J.; Hinman-Sweeney, Elaine H.; Feddema, John T.; Eskridge, Steven E.

A distributed reconfigurable micro-robotic system is a collection of an arbitrarily large number of small, homogeneous robots designed to autonomously organize and reorganize in order to achieve mission-specified geometric shapes and functions. This project investigated the design, control, and planning issues for self-configuring and self-organizing robots. In 2D, a system consisting of two robots was prototyped and successfully demonstrated automatic docking and undocking, allowing the robots to operate dependently or independently. Additional modules were constructed to display the usefulness of a self-configuring system in various situations. In 3D, a self-reconfiguring robot system of four identical modules was built. Each module connects to its neighbors using rotating actuators, and an individual module can move in three dimensions on its neighbors. We have also built a self-reconfiguring robot system consisting of a 9-module Crystalline Robot, in which each module is actuated by expansion and contraction. The system is fully distributed, has local (neighbor-to-neighbor) communication capabilities, and has global sensing capabilities.

ALEGRA Validation Studies for Regular, Mach, and Double Mach Shock Reflection in Gas Dynamics

Gruebel, Marilyn M.; Cochran, John R.

In this report we describe the performance of the ALEGRA shock wave physics code on a set of gas dynamic shock reflection problems that have associated experimental pressure data. These reflections cover three distinct regimes of oblique shock reflection in gas dynamics: regular, Mach, and double Mach reflection. For the selected data, the use of an ideal gas equation of state is appropriate, thus simplifying to a considerable degree the task of validating the shock wave computational capability of ALEGRA in the application regime of the experiments. We find good agreement between ALEGRA and the reported experimental data for sufficient grid resolution. We discuss the experimental data, the nature and results of the corresponding ALEGRA calculations, and the implications of the presented experiment-calculation comparisons.

SAR Window Functions: A Review and Analysis of the Notched Spectrum Problem

Dickey, Fred M.; Romero, L.A.; Doerry, Armin

Imaging systems such as Synthetic Aperture Radar collect band-limited data from which an image of a target scene is rendered. The band-limited nature of the data generates sidelobes, or "spilled energy," most evident in the neighborhood of bright point-like objects. It is generally considered desirable to minimize these sidelobes, even at the expense of some generally small increase in system bandwidth. This is accomplished by shaping the spectrum with window functions prior to inversion or transformation into an image. A window function that minimizes sidelobe energy can be constructed based on prolate spheroidal wave functions. A parametric design procedure allows doing so even with constraints on allowable increases in system bandwidth. This approach is extended to accommodate spectral notches or holes, although the guaranteed minimum sidelobe energy can be quite high in this case. Interestingly, for a fixed bandwidth, the minimum-mean-squared-error image rendering of a target scene is achieved with no windowing at all (rectangular or boxcar window).

Three-Dimensional Wind Field Modeling: A Review

Homicz, Gregory F.

Over the past several decades, the development of computer models to predict the atmospheric transport of hazardous material across a local (on the order of 10s of km) to mesoscale (on the order of 100s of km) region has received considerable attention, for both regulatory purposes, and to guide emergency response teams. Wind inputs to these models cover a spectrum of sophistication and required resources. At one end is the interpolation/extrapolation of available observations, which can be done rapidly, but at the risk of missing important local phenomena. Such a model can also only describe the wind at the time the observations were made. At the other end are sophisticated numerical solutions based on so-called Primitive Equation models. These prognostic models, so-called because in principle they can forecast future conditions, contain the most physics, but can easily consume tens of hours, if not days, of computer time. They may also require orders of magnitude more effort to set up, as both boundary and initial conditions on all the relevant variables must be supplied. The subject of this report is two classes of models intermediate in sophistication between the interpolated and prognostic ends of the spectrum. The first, known as mass-consistent (sometimes referred to as diagnostic) models, attempt to strike a compromise between simple interpolation and the complexity of the Primitive Equation models by satisfying only the conservation of mass (continuity) equation. The second class considered here consists of the so-called linear models, which purport to satisfy both mass and momentum balances. A review of the published literature on these models over the past few decades was performed. Though diagnostic models use a variety of approaches, they tend to fall into a relatively few well-defined categories. Linear models, on the other hand, follow a more uniform methodology, though they differ in detail. The discussion considers the theoretical underpinnings of each category of the diagnostic models, and the linear models, in order to assess the advantages and disadvantages of each. It is concluded that diagnostic models are the better suited of the two for predicting the atmospheric dispersion of hazardous materials in emergency response scenarios, as the linear models are only able to accommodate gently-sloping terrain, and are predicated on several simplifying approximations which can be difficult to justify a priori. Of the various approaches used in diagnostic modeling, that based on the calculus of variations appears to be the most objective, in that it introduces the fewest number of arbitrary parameters. The strengths and weaknesses of models in this category, as they relate to the activities of Sandia's Nuclear Emergency Support Team (NEST), are further highlighted.
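
As a brief illustration of the variational class of diagnostic models mentioned above (the standard formulation from the literature, not text drawn from this report), the wind field u is adjusted as little as possible from an observed/interpolated field u_0, subject to mass consistency, using a Lagrange multiplier \lambda and a weight \alpha that may differ between horizontal and vertical components:

    \min_{\mathbf{u}} \int_{\Omega} \alpha^{2}\,\lvert \mathbf{u} - \mathbf{u}_{0} \rvert^{2}\, d\Omega
    \quad \text{subject to} \quad \nabla \cdot \mathbf{u} = 0,
    \qquad \Rightarrow \qquad
    \mathbf{u} = \mathbf{u}_{0} + \frac{1}{2\alpha^{2}}\,\nabla \lambda,

with \lambda determined by the elliptic (Poisson-type) equation obtained when the adjusted field is substituted into the continuity constraint.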

A 3-D Vortex Code for Parachute Flow Predictions: VIPAR Version 1.0

Strickland, James H.; Homicz, Gregory F.; Porter, V.L.

This report describes a 3-D fluid mechanics code for predicting flow past bluff bodies whose surfaces can be assumed to be made up of shell elements that are simply connected. Version 1.0 of the VIPAR code (Vortex Inflation PARachute code) is described herein. This version contains several first order algorithms that we are in the process of replacing with higher order ones. These enhancements will appear in the next version of VIPAR. The present code contains a motion generator that can be used to produce a large class of rigid body motions. The present code has also been fully coupled to a structural dynamics code in which the geometry undergoes large time dependent deformations. Initial surface geometry is generated from triangular shell elements using a code such as Patran and is written into an ExodusII database file for subsequent input into VIPAR. Surface and wake variable information is output into two ExodusII files that can be post processed and viewed using software such as EnSight™.

Level 1 Peer Review Process for the Sandia ASCI V and V Program: FY01 Final Report

Pilch, Martin P.; Froehlich, G.K.; Hodges, Ann L.; Peercy, David E.; Trucano, Timothy G.; Moya, Jaime L.

This report describes the results of the FY01 Level 1 Peer Reviews for the Verification and Validation (V&V) Program at Sandia National Laboratories. V&V peer review at Sandia is intended to assess the ASCI (Accelerated Strategic Computing Initiative) code team V&V planning process and execution. The Level 1 Peer Review process is conducted in accordance with the process defined in SAND2000-3099. V&V Plans are developed in accordance with the guidelines defined in SAND2000-3101. The peer review process and the process for improving the Guidelines are necessarily synchronized and form parts of a larger quality improvement process supporting the ASCI V&V program at Sandia. During FY00 a prototype of the process was conducted for two code teams and their V&V Plans, and the process and guidelines were updated based on the prototype. In FY01, Level 1 Peer Reviews were conducted on an additional eleven code teams and their respective V&V Plans. This report summarizes the results from those peer reviews, including recommendations from the panels that conducted the reviews.

DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Developers Manual (title change from electronic posting)

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

Evaluation Techniques and Properties of an Exact Solution to a Subsonic Free Surface Jet Flow

Robinson, Allen C.

Computational techniques for the evaluation of steady plane subsonic flows represented by Chaplygin series in the hodograph plane are presented. These techniques are utilized to examine the properties of the free surface wall jet solution. This solution is a prototype for the shaped charge jet, a problem which is particularly difficult to compute properly using general purpose finite element or finite difference continuum mechanics codes. The shaped charge jet is a classic validation problem for models involving high explosives and material strength. Therefore, the problem studied in this report represents a useful verification problem associated with shaped charge jet modeling.

Assembly of LIGA using Electric Fields

Feddema, John T.; Warne, Larry K.; Johnson, William Arthur.; Routson, Allison J.; Armour, David L.

The goal of this project was to develop a device that uses electric fields to grasp and possibly levitate LIGA parts. This non-contact form of grasping would solve many of the problems associated with grasping parts that are only a few microns in size. Scaling laws show that for parts this size, electrostatic and electromagnetic forces are dominant over gravitational forces, which is why micro-parts often stick to mechanical tweezers. If these forces can be placed under feedback control, the parts could be levitated, and possibly even rotated in air. In this project, we designed, fabricated, and tested several grippers that use electrostatic and electromagnetic fields to grasp and release metal LIGA parts. The eventual use of this tool will be to assemble metal and non-metal LIGA parts into small electromechanical systems.

General Concepts for Experimental Validation of ASCI Code Applications

Trucano, Timothy G.; Pilch, Martin P.; Oberkampf, William L.

This report presents general concepts in a broadly applicable methodology for validation of Accelerated Strategic Computing Initiative (ASCI) codes for Defense Programs applications at Sandia National Laboratories. The concepts are defined and analyzed within the context of their relative roles in an experimental validation process. Examples of applying the proposed methodology to three existing experimental validation activities are provided in appendices, using an appraisal technique recommended in this report.

LOCA 1.0 Library of Continuation Algorithms: Theory and Implementation Manual

Salinger, Andrew G.; Pawlowski, Roger P.; Lehoucq, Richard B.; Romero, L.A.; Wilkes, Edward D.

LOCA, the Library of Continuation Algorithms, is a software library for performing stability analysis of large-scale applications. LOCA enables the tracking of solution branches as a function of a system parameter, the direct tracking of bifurcation points, and, when linked with the ARPACK library, a linear stability analysis capability. It is designed to be easy to implement around codes that already use Newton's method to converge to steady-state solutions. The algorithms are chosen to work for large problems, such as those that arise from discretizations of partial differential equations, and to run on distributed memory parallel machines. This manual presents LOCA's continuation and bifurcation analysis algorithms, and instructions on how to implement LOCA with an application code. The LOCA code is being made publicly available at www.cs.sandia.gov/loca.
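
For orientation, a generic pseudo-arclength continuation step of the kind such libraries perform (a textbook formulation in our notation, not necessarily LOCA's exact algorithm) augments the steady-state residual F(u, \lambda) = 0 with an arclength constraint and solves the bordered system by Newton's method:

    F(u, \lambda) = 0, \qquad
    \dot{u}_{0}^{\,T}\,(u - u_{0}) + \dot{\lambda}_{0}\,(\lambda - \lambda_{0}) - \Delta s = 0,

where (u_0, \lambda_0) is the previously converged point, (\dot{u}_0, \dot{\lambda}_0) is the unit tangent to the branch, and \Delta s is the step size; this parameterization allows the branch to be followed smoothly around turning points.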

Molecular Simulation of Reacting Systems

Thompson, Aidan P.

This is the final report for a Laboratory Directed Research and Development project entitled "Molecular Simulation of Reacting Systems." It describes efforts to incorporate chemical reaction events into the LAMMPS massively parallel molecular dynamics code. This was accomplished using a scheme in which several classes of reactions are allowed to occur in a probabilistic fashion at specified times during the MD simulation. Three classes of reaction were implemented: addition, chain transfer, and scission. A fully parallel implementation was achieved using a checkerboarding scheme, which avoids conflicts due to reactions occurring on neighboring processors. The observed chemical evolution is independent of the number of processors used. The code was applied to two test applications: irreversible linear polymerization and thermal degradation chemistry.

On the Development of the Large Eddy Simulation Approach for Modeling Turbulent Flow: LDRD Final Report

Schmidt, Rodney C.; Smith, Thomas M.; DesJardin, Paul E.; Voth, Thomas E.; Christon, Mark A.; Kerstein, Alan R.; Wunsch, Scott E.

This report describes research and development of the large eddy simulation (LES) turbulence modeling approach conducted as part of Sandia's laboratory directed research and development (LDRD) program. The emphasis of the work described here has been toward developing the capability to perform accurate and computationally affordable LES calculations of engineering problems using unstructured-grid codes, in wall-bounded geometries, and for problems with coupled physics. Specific contributions documented here include (1) the implementation and testing of LES models in Sandia codes, including tests of a new conserved-scalar laminar flamelet SGS combustion model that does not assume statistical independence between the mixture fraction and the scalar dissipation rate, (2) the development and testing of statistical analysis and visualization utility software for Exodus II unstructured-grid LES, and (3) the development and testing of a novel LES near-wall subgrid model based on the One-Dimensional Turbulence (ODT) model.

Tetrahedral mesh improvement via optimization of the element condition number

International Journal for Numerical Methods in Engineering

Knupp, Patrick K.

We present a new shape measure for tetrahedral elements that is optimal in that it gives the distance of a tetrahedron from the set of inverted elements. This measure is constructed from the condition number of the linear transformation between a unit equilateral tetrahedron and any tetrahedron with positive volume. Using this shape measure, we formulate two optimization objective functions that are differentiated by their goal: the first seeks to improve the average quality of the tetrahedral mesh; the second aims to improve the worst-quality element in the mesh. We review the optimization techniques used with each objective function and present experimental results that demonstrate the effectiveness of the mesh improvement methods. We show that a combined optimization approach that uses both objective functions obtains the best-quality meshes for several complex geometries.
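
To make the measure concrete (our paraphrase of the standard condition-number construction; details may differ from the paper), let W be the 3x3 edge-vector matrix of the unit equilateral reference tetrahedron and A the edge-vector matrix of a physical tetrahedron. The quality is based on the Frobenius condition number of the linear map T between them:

    T = A\,W^{-1}, \qquad
    \kappa_{F}(T) = \lVert T \rVert_{F}\,\lVert T^{-1} \rVert_{F}, \qquad
    q = \frac{3}{\kappa_{F}(T)} \in (0, 1],

so q = 1 for the ideal (equilateral) element and q tends to 0 as the element degenerates toward the set of inverted elements.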

User Manual and Supporting Information for Library of Codes for Centroidal Voronoi Point Placement and Associated Zeroth, First, and Second Moment Determination

Brannon, Rebecca M.

The theory, numerical algorithm, and user documentation are provided for a new "Centroidal Voronoi Tessellation (CVT)" method of filling a region of space (2D or 3D) with particles at any desired particle density. "Clumping" is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called "mesh-free" method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported.
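
The following C++ sketch shows one simple way the statistical idea can work: a sampling-based Lloyd iteration that never builds a Voronoi diagram explicitly. It is a hypothetical illustration under our own simplifying assumptions (unit square, uniform density, fixed iteration counts) and is not the algorithm documented in the report.

    // Sampling-based CVT sketch: move each generator to the mean of the random
    // samples that are nearest to it; no explicit Voronoi diagram is built.
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Pt { double x, y; };

    int main() {
        const int nGen = 50;          // number of particles (generators)
        const int nSamples = 200000;  // Monte Carlo samples per iteration
        const int nIter = 30;         // Lloyd-style iterations

        std::mt19937 rng(12345);
        std::uniform_real_distribution<double> unif(0.0, 1.0);

        // Start from random generator positions in the unit square.
        std::vector<Pt> gen(nGen);
        for (auto& g : gen) { g.x = unif(rng); g.y = unif(rng); }

        for (int it = 0; it < nIter; ++it) {
            std::vector<double> sx(nGen, 0.0), sy(nGen, 0.0);
            std::vector<int> count(nGen, 0);

            // Bin each random sample with its nearest generator.
            for (int s = 0; s < nSamples; ++s) {
                Pt p{unif(rng), unif(rng)};
                int best = 0;
                double bestD = 1e300;
                for (int k = 0; k < nGen; ++k) {
                    double dx = p.x - gen[k].x, dy = p.y - gen[k].y;
                    double d = dx * dx + dy * dy;
                    if (d < bestD) { bestD = d; best = k; }
                }
                sx[best] += p.x; sy[best] += p.y; ++count[best];
            }

            // Move each generator to the estimated centroid of its cell.
            for (int k = 0; k < nGen; ++k) {
                if (count[k] > 0) {
                    gen[k].x = sx[k] / count[k];
                    gen[k].y = sy[k] / count[k];
                }
            }
        }

        for (const auto& g : gen) std::printf("%f %f\n", g.x, g.y);
        return 0;
    }

Non-uniform target densities could be handled by drawing the samples from the desired density rather than uniformly.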

An Evaluation of the Material Point Method

Brannon, Rebecca M.

The theory and algorithm for the Material Point Method (MPM) are documented, with a detailed discussion on the treatments of boundary conditions and shock wave problems. A step-by-step solution scheme is written based on direct inspection of the two-dimensional MPM code currently used at the University of Missouri-Columbia (which is, in turn, a legacy of the University of New Mexico code). To test the completeness of the solution scheme and to demonstrate certain features of the MPM, a one-dimensional MPM code is programmed to solve one-dimensional wave and impact problems, with both linear elasticity and elastoplasticity models. The advantages and disadvantages of the MPM are investigated as compared with competing mesh-free methods. Based on the current work, future research directions are discussed to better simulate complex physical problems such as impact/contact, localization, crack propagation, penetration, perforation, fragmentation, and interactions among different material phases. In particular, the potential use of a boundary layer to enforce the traction boundary conditions is discussed within the framework of the MPM.

Statistical Validation of Engineering and Scientific Models: A Maximum Likelihood Based Metric

Hills, Richard G.; Trucano, Timothy G.

Two major issues associated with model validation are addressed here. First, we present a maximum likelihood approach to define and evaluate a model validation metric. The advantages of this approach are that it is more easily applied to nonlinear problems than the methods presented earlier by Hills and Trucano (1999, 2001), that it is based on optimization, for which software packages are readily available, and that it can more easily be extended to handle measurement uncertainty and prediction uncertainty with different probability structures. Several examples are presented utilizing this metric. We show conditions under which this approach reduces to the approach developed previously by Hills and Trucano (2001). Second, we expand our earlier discussions (Hills and Trucano, 1999, 2001) on the impact of multivariate correlation and its effect on model validation metrics. We show that ignoring correlation in multivariate data can lead to misleading results, such as rejecting a good model when sufficient evidence to do so is not available.
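
For orientation only, a generic Gaussian-error form of the idea (not necessarily the specific metric derived in the report) compares measurements d with model predictions m(\theta) through the log-likelihood of correlated errors with covariance \Sigma:

    -2\ln L(\theta)
      = \bigl(d - m(\theta)\bigr)^{T}\,\Sigma^{-1}\,\bigl(d - m(\theta)\bigr)
        + \ln \det\!\bigl(2\pi\Sigma\bigr),

which makes explicit why discarding the off-diagonal terms of \Sigma (the multivariate correlation) can distort the resulting validation metric.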

DNA Microarray Technology

Davidson, George S.

Collaboration between Sandia National Laboratories and the University of New Mexico Biology Department resulted in the capability to train students in microarray techniques and in the interpretation of data from microarray experiments. These studies provide a better understanding of the role of stationary phase and of the gene regulation involved in exit from stationary phase, which may eventually have important clinical implications. Importantly, this research trained numerous students and is the basis for three new Ph.D. projects.

More Details

Verification and validation in computational fluid dynamics

Progress in Aerospace Sciences

Oberkampf, William L.; Trucano, Timothy G.

Verification and validation (V&V) in computational fluid dynamics are reviewed, along with the methods and procedures for assessing them. Issues such as code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction are discussed. Methods for determining the accuracy of numerical solutions are presented, and the importance of software testing during verification activities is emphasized.

More Details

Processor allocation on Cplant: Achieving general processor locality using one-dimensional allocation strategies

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Leung, Vitus J.; Arkin, E.M.; Bender, M.A.; Bunde, D.; Johnston, J.; Lal, Alok; Mitchell, J.S.B.; Phillips, C.; Seiden, S.S.

The Computational Plant, or Cplant, is a commodity-based supercomputer under development at Sandia National Laboratories. This paper describes resource-allocation strategies to achieve processor locality for parallel jobs on Cplant and other supercomputers. Users of Cplant and other Sandia supercomputers submit parallel jobs to a job queue. When a job is scheduled to run, it is assigned to a set of processors. To obtain maximum throughput, jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This paper introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free-list strategy previously used on Cplant. These new allocation strategies are implemented in the new release of the Cplant System Software, Version 2.0, phased into the Cplant systems at Sandia by May 2002.
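
The sketch below illustrates the general idea of curve-based allocation: processors on a two-dimensional mesh are ordered along a space-filling curve, and a job receives a run of free processors that is compact in that one-dimensional order. The Morton (Z-order) curve, mesh size, and windowing heuristic are stand-ins chosen for brevity, not the strategies or metrics evaluated in the paper.

```python
def morton_key(x, y, bits=8):
    """Interleave the bits of (x, y) to get a Z-order (Morton) curve index."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

def allocate(free, job_size):
    """Pick job_size free processors that are compact in curve order."""
    order = sorted(free, key=lambda p: morton_key(*p))
    if len(order) < job_size:
        return None
    keys = [morton_key(*p) for p in order]
    # Choose the window of job_size consecutive free processors with the smallest
    # spread in curve order (a simple 1-D packing heuristic).
    best = min(range(len(order) - job_size + 1),
               key=lambda i: keys[i + job_size - 1] - keys[i])
    return order[best:best + job_size]

free_procs = [(x, y) for x in range(8) for y in range(8) if (x, y) not in {(3, 3), (4, 4)}]
print(allocate(free_procs, 4))
```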

More Details

A p-Adic Metric for Particle Mass Scale Organization with Genetic Divisors

Wagner, John S.

The concept of genetic divisors can be given a quantitative measure with a non-Archimedean p-adic metric that is both computationally convenient and physically motivated. For two particles possessing distinct mass parameters x and y, the metric distance D(x, y) is expressed on the field of rational numbers Q as the inverse of the greatest common divisor [gcd(x, y)]. As a measure of genetic similarity, this metric can be applied to (1) the mass numbers of particle states and (2) the corresponding subgroup orders of these systems. The use of the Bezout identity in the form of a congruence for the expression of gcd(x, y) corresponding to the v{sub e} and v{sub {mu}} neutrinos (a) connects the genetic divisor concept to the cosmic seesaw congruence, (b) provides support for the {delta}-conjecture concerning the subgroup structure of particle states, and (c) quantitatively strengthens the interlocking relationships joining the values of the prospectively derived (i) electron neutrino (v{sub e}) mass (0.808 meV), (ii) muon neutrino (v{sub {mu}}) mass (27.68 meV), and (iii) unified strong-electroweak coupling constant ({alpha}*{sup -1} = 34.26).
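
The distance itself is simple to evaluate; the sketch below computes D(x, y) = 1/gcd(x, y) for integer mass parameters, with arbitrary example inputs (it does not reproduce the neutrino-mass derivations above).

```python
from fractions import Fraction
from math import gcd

def genetic_distance(x, y):
    """Genetic-divisor metric D(x, y) = 1 / gcd(x, y) for positive integers x, y."""
    return Fraction(1, gcd(x, y))

# Arbitrary example inputs: a large shared divisor means the states are "close".
print(genetic_distance(840, 252))   # gcd = 84  -> distance 1/84
print(genetic_distance(840, 11))    # gcd = 1   -> distance 1
```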

More Details

Applications of Transport/Reaction Codes to Problems in Cell Modeling

Means, Shawn A.; Rintoul, Mark D.; Shadid, John N.

We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.
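
For context, the class of problems involved is a coupled transport/reaction PDE; the sketch below advances a one-dimensional reaction-diffusion equation u_t = D u_xx + f(u) with an explicit finite-difference scheme. The diffusivity, excitable reaction term, and discretization are arbitrary illustrations and are unrelated to the Sandia codes or the calcium-wave model themselves.

```python
import numpy as np

D, dx, dt, steps = 1.0e-2, 0.05, 1.0e-3, 2000   # diffusivity and discretization (arbitrary)
x = np.arange(0.0, 5.0 + dx, dx)
u = np.exp(-((x - 2.5) / 0.2) ** 2)             # initial concentration pulse

def reaction(u):
    # Simple excitable (bistable) kinetics standing in for calcium-release terms.
    return u * (1.0 - u) * (u - 0.1)

for _ in range(steps):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    u += dt * (D * lap + reaction(u))           # endpoints get reaction only (crude no-flux treatment)
```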

More Details

Icarus: A 2-D Direct Simulation Monte Carlo (DSMC) Code for Multi-Processor Computers

Bartel, Timothy J.; Plimpton, Steven J.; Gallis, Michail A.

Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird [11.1] and models flowfields from the free-molecular to the continuum regime in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, each representing a given number of molecules or atoms, are tracked as they collide with other particles or with surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas-phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled either by a local charge-neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron-impact chemistry rates and charge-exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid-generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.
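
To make the particle/collision machinery concrete, here is a minimal sketch of the per-cell collision step of a generic DSMC code using Bird's no-time-counter pair selection with hard-sphere scattering; the parameters and the single-species, equal-mass assumptions are illustrative, and the sketch does not reproduce Icarus itself.

```python
import numpy as np

def collide_cell(v, f_num, sigma, cell_vol, dt, rng, vr_max=1000.0):
    """One NTC collision step for the (N, 3) velocity array v of particles in one cell."""
    n = len(v)
    if n < 2:
        return v, vr_max
    # Number of candidate pairs from Bird's no-time-counter estimate.
    n_cand = int(0.5 * n * (n - 1) * f_num * sigma * vr_max * dt / cell_vol)
    for _ in range(n_cand):
        i, j = rng.choice(n, size=2, replace=False)
        vr = np.linalg.norm(v[i] - v[j])
        vr_max = max(vr_max, vr)
        if rng.random() < vr / vr_max:          # accept with probability ~ relative speed
            # Hard-sphere collision: isotropic post-collision relative velocity,
            # equal-mass particles, center-of-mass velocity conserved.
            vcm = 0.5 * (v[i] + v[j])
            cos_t = 2.0 * rng.random() - 1.0
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            phi = 2.0 * np.pi * rng.random()
            vrel = vr * np.array([cos_t, sin_t * np.cos(phi), sin_t * np.sin(phi)])
            v[i], v[j] = vcm + 0.5 * vrel, vcm - 0.5 * vrel
    return v, vr_max
```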

More Details

ACME - Algorithms for Contact in a Multiphysics Environment API Version 1.0

Brown, Kevin H.; Summers, Randall M.; Glass, Micheal W.; Gullerud, Arne S.; Heinstein, Martin W.; Jones, Reese E.

An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.

More Details
Results 9801–9900 of 9,998