Publications

Experiments on Adaptive Techniques for Host-Based Intrusion Detection

Draelos, Timothy J.; Collins, Michael J.; Duggan, David P.; Thomas, Edward V.

This research presents four experiments with adaptive host-based intrusion detection (ID) techniques, aimed at developing systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their use of reinforcement learning, which allows learning of exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which cannot detect novel exploits, and anomaly detection, which flags too many events, including many that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.
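
The "generalized signature" middle ground can be illustrated with a simple sequence-based detector, a sketch in the spirit of classic system-call n-gram anomaly detection rather than the paper's ACD or Elman-network experiments (all names here are illustrative): sequences are scored by the fraction of their n-grams never observed in clean training traces.

```python
class NGramAnomalyDetector:
    """Flag event sequences containing n-grams never seen in clean training data."""

    def __init__(self, n=3):
        self.n = n
        self.known = set()

    def _ngrams(self, seq):
        # sliding window of n consecutive events
        return [tuple(seq[i:i + self.n]) for i in range(len(seq) - self.n + 1)]

    def train(self, clean_sequences):
        # record every n-gram that occurs in known-clean traces
        for seq in clean_sequences:
            self.known.update(self._ngrams(seq))

    def score(self, seq):
        # anomaly score: fraction of n-grams never seen during training
        grams = self._ngrams(seq)
        if not grams:
            return 0.0
        unseen = sum(1 for g in grams if g not in self.known)
        return unseen / len(grams)
```

A score of 0.0 means every window of events matches clean behavior; scores near 1.0 indicate a sequence unlike anything in the training traces.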

Description of the Sandia Validation Metrics Project

Trucano, Timothy G.; Easterling, Robert G.; Dowding, Kevin J.; Paez, Thomas L.; Urbina, Angel U.; Romero, Vicente J.; Rutherford, Brian M.; Hills, Richard G.

This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.

Dynamics of exchange at gas-zeolite interfaces I: Pure component n-butane and isobutane

Journal of Physical Chemistry B

Chandross, M.; Webb, Edmund B.; Grest, Gary S.; Martin, Marcus G.; Thompson, Aidan P.; Roth, M.W.

We present the results of Molecular Dynamics and Monte Carlo simulations of n-butane and isobutane in silicalite. We begin with a comparison of the bulk adsorption and diffusion properties for two different parameterizations of the interaction potential between the hydrocarbon species, both of which have been shown to reproduce experimental gas-liquid coexistence curves. We examine diffusion as a function of the loading of the zeolite, as well as the temperature dependence of the diffusion constant at loading and for infinite dilution. Both force fields give accurate descriptions of bulk properties. We continue with simulations in which interfaces are formed between single component gases and the zeolite. After reaching equilibrium, we examine the dynamics of exchange between the bulk gas and the zeolite. In particular, we examine the average time spent in the adsorption layer by molecules as they enter the zeolite from the gas in an attempt to probe the microscopic origins of the surface barrier. The microscopic barrier is found to be insignificant for experimental systems. Finally, we calculate the permeability of the zeolite for n-butane and isobutane as a function of pressure. Our results underestimate the experimental results by an order of magnitude, indicating a strong effect from the surface barrier in these simulations. Our simulations are performed for a number of different gas temperatures and pressures, covering a wide range of state points.

Characterization of UOP IONSIV IE-911

Nyman, M.; Nenoff, T.M.; Headley, Thomas J.

As a participating national laboratory in the inter-institutional effort to resolve performance issues of the non-elutable ion exchange technology for Cs extraction, we have carried out a series of characterization studies of UOP IONSIV® IE-911 and its component parts. IE-911 is a bound form (zirconium hydroxide binder) of crystalline silicotitanate (CST) ion exchanger; the crystalline silicotitanate removes Cs from solutions by selective ion exchange. The performance issues of primary concern are: (1) excessive Nb leaching and subsequent precipitation of column-plugging Nb-oxide material, and (2) precipitation of aluminosilicate on IE-911 pellet surfaces, which may be initiated by dissolution of Si from the IE-911, creating a solution supersaturated with respect to silica. In this work, we have identified and characterized Si- and Nb-oxide based impurity phases in IE-911, which are the most likely sources of leachable Si and Nb, respectively. Furthermore, we have determined the criteria and mechanism for removing from IE-911 the Nb-based impurity phase responsible for the Nb-oxide column-plugging incidents.

Quadratic Reciprocity and the Group Orders of Particle States

Wagner, John S.

The construction of inverse states in a finite field F_{P_α} enables the organization of the mass scale by associating particle states with residue class designations. With the assumption of perfect flatness (Ω_total = 1.0), this approach leads to the derivation of a cosmic seesaw congruence which unifies the concepts of space and mass. The law of quadratic reciprocity profoundly constrains the subgroup structure of the multiplicative group of units F_{P_α}* defined by the field. Four specific outcomes of this organization are (1) a reduction in the computational complexity of the mass state distribution by a factor of ~10^30, (2) the extension of the genetic divisor concept to the classification of subgroup orders, (3) the derivation of a simple numerical test for any prospective mass number based on the order of the integer, and (4) the identification of direct biological analogies to taxonomy and regulatory networks characteristic of cellular metabolism, tumor suppression, immunology, and evolution. It is generally concluded that the organizing principle legislated by the alliance of quadratic reciprocity with the cosmic seesaw creates a universal optimized structure that functions in the regulation of a broad range of complex phenomena.

Determination of Supersymmetric Particle Masses and Attributes with Genetic Divisors

Wagner, John S.

Arithmetic conditions relating particle masses can be defined on the basis of (1) the supersymmetric conservation of congruence and (2) the observed characteristics of particle reactions and stabilities. Stated in the form of common divisors, these relations can be interpreted as expressions of genetic elements that represent specific particle characteristics. In order to illustrate this concept, it is shown that the pion triplet (π^±, π^0) can be associated with the existence of a greatest common divisor d_{0±} in a way that can account for both the highly similar physical properties of these particles and the observed π^±/π^0 mass splitting. These results support the conclusion that a corresponding statement holds generally for all particle multiplets. Classification of the respective physical states is achieved by assignment of the common divisors to residue classes in a finite field F_{P_α}, and the existence of the multiplicative group of units F_{P_α}* enables the corresponding mass parameters to be associated with a rich subgroup structure. The existence of inverse states in F_{P_α} allows relationships connecting particle mass values to be conveniently expressed in a form in which the genetic divisor structure is prominent. An example is given in which the masses of two neutral mesons (K^0 → π^0) are related to the properties of the electron (e), a charged lepton. Physically, since this relationship reflects the cascade decay K^0 → π^0 + π^0, π^0 → e^+ + e^-, in which a neutral kaon is converted into four charged leptons, it enables the genetic divisor concept, through the intrinsic algebraic structure of the field, to provide a theoretical basis for the conservation of both electric charge and lepton number.
It is further shown that the fundamental source of supersymmetry can be expressed in terms of hierarchical relationships between odd- and even-order subgroups of F_{P_α}*, an outcome that automatically reflects itself in the phenomenon of fermion/boson pairing of individual particle systems. Accordingly, supersymmetry is best represented as a group rather than a particle property. The status of the Higgs subgroup of order 4 is singular; it is isolated from the hierarchical pattern and communicates globally to the mass scale through the seesaw congruence by (1) fusing the concepts of mass and space and (2) specifying the generators of the physical masses.

Programming Paradigms for Massively Parallel Computers: LDRD Project Final Report

Brightwell, Ronald B.

This technical report presents the initial proposal and renewal proposals for an LDRD project whose goal was to enable applications to take full advantage of the hardware available on Sandia's current and future massively parallel supercomputers by analyzing various ways of combining distributed-memory and shared-memory programming models. Despite Sandia's enormous success with distributed-memory parallel machines and the message-passing programming model, clusters of shared-memory processors appeared to be the massively parallel architecture of the future at the time this project was proposed. The project team hoped to analyze various hybrid programming models for their effectiveness and to characterize the types of applications to which each model was well suited. The report presents the initial research proposal and subsequent continuation proposals that highlight the proposed work and summarize the accomplishments.

Superresolution and Synthetic Aperture Radar

Dickey, Fred M.; Romero, L.A.; Doerry, Armin

Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. The application of the concept to synthetic aperture radar is investigated as an operator inversion problem. Generally, the operator inversion problem is ill posed. A criterion for judging superresolution processing of an image is presented.

DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Reference Manual

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

ACME Algorithms for Contact in a Multiphysics Environment API Version 0.3a

Brown, Kevin H.; Glass, Micheal W.; Gullerud, Arne S.; Heinstein, Martin W.; Jones, Reese E.; Summers, Randall M.

An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.

Fast through-bond diffusion of nitrogen in silicon

Applied Physics Letters

Schultz, Peter A.; Nelson, Jeffrey S.

We report first-principles total energy calculations of interaction of nitrogen in silicon with silicon self-interstitials. Substitutional nitrogen captures a silicon interstitial with 3.5 eV binding energy forming a (100) split interstitial ground-state geometry, with the nitrogen forming three bonds. The low-energy migration path is through a bond bridge state having two bonds. Fast diffusion of nitrogen occurs through a pure interstitialcy mechanism: the nitrogen never has less than two bonds. Near-zero formation energy of the nitrogen interstitialcy with respect to the substitutional rationalizes the low solubility of substitutional nitrogen in silicon. © 2001 American Institute of Physics.

Gridless Compressible Flow: A White Paper

Strickland, James H.

In this paper the development of a gridless method to solve compressible flow problems is discussed. The governing evolution equations for velocity divergence δ, vorticity ω, density ρ, and temperature T are obtained from the primitive variable Navier-Stokes equations. Simplifications to the equations resulting from assumptions of ideal gas behavior, adiabatic flow, and/or constant viscosity coefficients are given. A general solution technique is outlined with some discussion regarding alternative approaches. Two radial flow model problems are considered which are solved using both a finite difference method and a compressible particle method. The first of these is an isentropic inviscid 1D spherical flow which initially has a Gaussian temperature distribution with zero velocity everywhere. The second problem is an isentropic inviscid 2D radial flow which has an initial vorticity distribution with constant temperature everywhere. Results from the finite difference and compressible particle calculations are compared in each case. A summary of the results obtained herein is given along with recommendations for continuing the work.

Collaborative evaluation of early design decisions and product manufacturability

Proceedings of the Hawaii International Conference on System Sciences

Kleban, S.D.; Stubblefield, W.A.; Mitchiner, K.W.; Mitchiner, John L.; Arms, M.

In manufacturing, the conceptual design and detailed design stages are typically regarded as sequential and distinct. Decisions made in conceptual design are often made with little information as to how they would affect detailed design or manufacturing process specification. Many possibilities and unknowns exist in conceptual design, where ideas about product shape and functionality are changing rapidly. Few, if any, tools exist to aid in this difficult, amorphous stage, in contrast to the many CAD and analysis tools for detailed design, where much more is known about the final product. The Materials Process Design Environment (MPDE) is a collaborative problem solving environment (CPSE) that was developed so that geographically dispersed designers in both the conceptual and detailed stages can work together and understand the impacts of their design decisions on functionality, cost, and manufacturability.

Experimental results on statistical approaches to page replacement policies

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Leung, Vitus J.; Irani, Sandy

This paper investigates the question of what statistical information about a memory request sequence is useful in making page replacement decisions. Our starting point is the Markov Request Model for page request sequences. Although the utility of modeling page request sequences with the Markov model has recently been put into doubt [13], we find that two previously suggested algorithms (Maximum Hitting Time [11] and Dominating Distribution [14]), both based on the Markov model, work well on the trace data used in this study. Interestingly, the two algorithms perform equally well despite the fact that their theoretical guarantees differ dramatically. We then develop succinct characteristics of memory access patterns in an attempt to approximate the simpler of the two algorithms. Finally, we investigate how to collect these characteristics in an online manner in order to obtain a purely online algorithm.
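
The idea of "succinct characteristics collected online" can be illustrated with a toy pager (a sketch only, not the Maximum Hitting Time or Dominating Distribution algorithms from the paper): maintain first-order transition counts as requests arrive and, on a fault, evict the resident page with the lowest observed transition count from the faulting page.

```python
from collections import defaultdict

class MarkovEvictor:
    """Toy pager: learn first-order transition counts online; on a page fault,
    evict the resident page least often requested right after the faulting page."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = set()
        self.trans = defaultdict(lambda: defaultdict(int))  # trans[a][b] = count of a -> b
        self.prev = None
        self.faults = 0

    def access(self, page):
        if self.prev is not None:
            self.trans[self.prev][page] += 1   # update the online statistics
        if page not in self.resident:
            self.faults += 1
            if len(self.resident) >= self.capacity:
                # victim: resident page least likely to follow the faulting page
                row = self.trans[page]
                victim = min(self.resident, key=lambda p: row[p])
                self.resident.discard(victim)
            self.resident.add(page)
        self.prev = page
```

With ties broken arbitrarily, this degenerates to near-random eviction until enough transitions have been observed; the point is only that the statistics are gathered in a single online pass.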

Peer Review Process for the Sandia ASCI V and V Program: Version 1.0

Pilch, Martin P.; Trucano, Timothy G.; Peercy, David E.; Hodges, Ann L.; Young, Eunice R.; Moya, Jaime L.

This report describes the initial definition of the Verification and Validation (V and V) Plan Peer Review Process at Sandia National Laboratories. V and V peer review at Sandia is intended to assess the ASCI code team V and V planning process and execution. Our peer review definition is designed to assess the V and V planning process in terms of the content specified by the Sandia Guidelines for V and V plans. Therefore, the peer review process and process for improving the Guidelines are necessarily synchronized, and form parts of a larger quality improvement process supporting the ASCI V and V program at Sandia.

Hexahedral Mesh Untangling

Engineering with Computers

Knupp, Patrick K.

We investigate a well-motivated mesh untangling objective function whose optimization automatically produces non-inverted elements when possible. Examples show the procedure is highly effective on simplicial meshes and on non-simplicial (e.g., hexahedral) meshes constructed via mapping or sweeping algorithms. The current whisker-weaving (WW) algorithm in CUBIT usually produces hexahedral meshes that are unsuitable for analyses due to inverted elements. The majority of these meshes cannot be untangled using the new objective function. The most likely source of the difficulty is poor mesh topology.
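
The untangling idea can be sketched as minimizing the sum of (|A| − A) over free node positions, which is zero exactly when every element has nonnegative signed area. The following is a minimal 2D triangle-fan version with a smoothed absolute value and numerical gradients; the objective function in the paper and its CUBIT implementation are more general.

```python
import math

def signed_area(a, b, c):
    # signed area of triangle (a, b, c); negative means the element is inverted
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def untangle_energy(tris, eps=1e-6):
    # sum of (|A| - A) with a smoothed absolute value: ~zero iff no inverted elements
    total = 0.0
    for a, b, c in tris:
        A = signed_area(a, b, c)
        total += math.sqrt(A * A + eps) - A
    return total

def untangle_free_node(edges, x, steps=400, h=1e-5, lr=0.05):
    # move the single free node x by gradient descent (central differences);
    # edges are fixed (a, b) pairs, each forming triangle (a, b, x)
    x = list(x)
    energy = lambda p: untangle_energy([(a, b, tuple(p)) for a, b in edges])
    for _ in range(steps):
        g = []
        for i in range(2):
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g.append((energy(xp) - energy(xm)) / (2 * h))
        x = [x[i] - lr * g[i] for i in range(2)]
    return tuple(x)
```

Starting the interior node of a unit-square triangle fan outside the square inverts one element; descent on the energy pulls the node back inside, restoring positive areas everywhere.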

The Xyce Parallel Electronic Simulator - An Overview

Hutchinson, Scott A.; Keiter, Eric R.; Hoekstra, Robert J.; Watts, Herman A.; Waters, Lon J.; Schells, Regina L.; Wix, Steven D.

The Xyce™ Parallel Electronic Simulator has been written to support the simulation needs of electrical designers at Sandia National Laboratories. As such, development has focused on providing the capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). In addition, the developers are providing improved performance for numerical kernels using state-of-the-art algorithms, support for modeling circuit phenomena at a variety of abstraction levels, and object-oriented, modern coding practices that ensure the code will be maintainable and extensible far into the future. The code is parallel in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on the widest possible range of computing platforms. These include serial, shared-memory, and distributed-memory parallel machines as well as heterogeneous platforms. Furthermore, careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved even as the number of processors grows.

Aspen-EE: An Agent-Based Model of Infrastructure Interdependency

Barton, Dianne C.; Eidson, Eric D.; Schoenwald, David A.; Stamber, Kevin L.; Reinert, Rhonda K.

This report describes the features of Aspen-EE (Electricity Enhancement), a new model for simulating the interdependent effects of market decisions and disruptions in the electric power system on other critical infrastructures in the US economy. Aspen-EE extends and modifies the capabilities of Aspen, an agent-based model previously developed by Sandia National Laboratories. Aspen-EE was tested on a series of scenarios in which the rules governing electric power trades were changed. Analysis of the scenario results indicates that the power generation company agents will adjust the quantity of power bid into each market as a function of the market rules. Results indicate that when two power markets are faced with identical economic circumstances, the traditionally higher-priced market sees its market clearing price decline, while the traditionally lower-priced market sees a relative increase in market clearing price. These results indicate that Aspen-EE is predicting power market trends that are consistent with expected economic behavior.

PICO: An Object-Oriented Framework for Branch and Bound

Hart, William E.; Phillips, Cynthia A.

This report describes the design of PICO, a C++ framework for implementing general parallel branch-and-bound algorithms. The PICO framework provides a mechanism for the efficient implementation of a wide range of branch-and-bound methods on an equally wide range of parallel computing platforms. We first discuss the basic architecture of PICO, including the application class hierarchy and the package's serial and parallel layers. We next describe the design of the serial layer, and its central notion of manipulating subproblem states. Then, we discuss the design of the parallel layer, which includes flexible processor clustering and communication rates, various load balancing mechanisms, and a non-preemptive task scheduler running on each processor. We describe the application of the package to a branch-and-bound method for mixed integer programming, along with computational results on the ASCI Red massively parallel computer. Finally we describe the application of the branch-and-bound mixed-integer programming code to a resource constrained project scheduling problem for Pantex.
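
A serial best-first branch and bound of the kind PICO generalizes can be sketched on a toy 0/1 knapsack problem (illustrative only; PICO's application class hierarchy and parallel layer are not shown): subproblems sit on a priority queue keyed by an optimistic fractional-relaxation bound and are pruned against the incumbent.

```python
import heapq

def branch_and_bound_knapsack(values, weights, capacity):
    """Best-first 0/1 knapsack: subproblems on a priority queue keyed by an
    optimistic (fractional-relaxation) bound, pruned against the incumbent."""
    n = len(values)
    # consider items in decreasing value density
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, value, room):
        # optimistic bound: fill remaining room fractionally
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    heap = [(-bound(0, 0, capacity), 0, 0, capacity)]  # (-bound, idx, value, room)
    while heap:
        nb, idx, value, room = heapq.heappop(heap)
        if -nb <= best:          # prune: bound cannot beat the incumbent
            continue
        if idx == n:
            continue
        i = order[idx]
        if weights[i] <= room:   # branch 1: take item i
            v = value + values[i]
            best = max(best, v)  # update the incumbent
            heapq.heappush(heap, (-bound(idx + 1, v, room - weights[i]),
                                  idx + 1, v, room - weights[i]))
        # branch 2: skip item i
        heapq.heappush(heap, (-bound(idx + 1, value, room), idx + 1, value, room))
    return best
```

The same skeleton (bound, branch, prune, incumbent) is what a framework like PICO abstracts behind subproblem-state classes so that the search strategy and parallel load balancing can vary independently of the application.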

ATR2000 Mercury/MPI Real-Time ATR System User's Guide

Meyer, Richard H.; Doerfler, Douglas W.

The Air Force's Electronic Systems Center has funded Sandia National Laboratories to develop an Automatic Target Recognition (ATR) System for the Air Force's Joint STARS platform using Mercury Computer systems hardware. This report provides general theory on the internal operations of the Real-Time ATR system and provides some basic techniques that can be used to reconfigure the system and monitor its runtime operation. In addition, general information on how to interface an image formation processor and a human machine interface to the ATR is provided. This report is not meant to be a tutorial on the ATR algorithms.

Visualization of Information Spaces with VxInsight

Wylie, Brian N.; Boyack, Kevin W.; Davidson, George S.

VxInsight provides a visual mechanism for browsing, exploring, and retrieving information from a database. The graphical display conveys information about the relationship between objects in several ways and on multiple scales. In this way, individual objects are always observed within a larger context. For example, consider a database consisting of a set of scientific papers. Imagine that the papers have been organized in a two-dimensional geometry so that related papers are located close to each other. Now construct a landscape where the altitude reflects the local density of papers. Papers on physics will form a mountain range, and a different range will stand over the biological papers. In between will be research reports from biophysics and other bridging disciplines. Now, imagine exploring these mountains. If we zoom in closer, the physics mountains will resolve into a set of sub-disciplines. Eventually, by zooming in far enough, the individual papers become visible. By pointing and clicking you can learn more about papers of interest or retrieve their full text. Although physical proximity conveys a great deal of information about the relationship between documents, you can also see which papers reference which others by drawing lines between the citing and cited papers. For even more information, you can choose to highlight papers by a particular researcher or a particular institution, or show the accumulation of papers through time, watching some disciplines explode and others stagnate. VxInsight is a general purpose tool which enables this kind of interaction with a wide variety of relational data: documents, patents, web pages, and financial transactions are just a few examples. The tool allows users to interactively browse, explore, and retrieve information from the database in an intuitive way.
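
The landscape construction can be approximated by kernel density estimation over object positions (a minimal sketch; VxInsight's actual layout and rendering are more sophisticated, and the grid cell size and bandwidth here are arbitrary choices): each object deposits a Gaussian bump onto a coarse grid, and dense clusters of related objects become peaks.

```python
import math
from collections import defaultdict

def density_landscape(points, cell=1.0, bandwidth=1.5):
    """Map scattered 2D object positions to a coarse 'altitude' grid using a
    Gaussian kernel; peaks correspond to dense clusters of related objects."""
    grid = defaultdict(float)
    for (px, py) in points:
        ci, cj = int(px // cell), int(py // cell)
        # deposit the kernel onto a 7x7 neighborhood of cells around the point
        for di in range(-3, 4):
            for dj in range(-3, 4):
                i, j = ci + di, cj + dj
                cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
                d2 = (px - cx) ** 2 + (py - cy) ** 2
                grid[(i, j)] += math.exp(-d2 / (2 * bandwidth ** 2))
    return grid
```

A cluster of papers near the origin produces a mountain there, a lone paper far away a small hill, and empty regions stay at zero altitude.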

What makes a beam shaping problem difficult

Proceedings of SPIE - The International Society for Optical Engineering

Romero, L.A.; Dickey, Fred M.

The three most important factors affecting the difficulty of beam shaping problems are discussed: scaling, smoothness, and coherence. Algorithms were developed to counteract these factors as encountered in the design of any beam shaping system.

Failure analysis of tungsten coated polysilicon micromachined microengines

Proceedings of SPIE - The International Society for Optical Engineering

Walraven, J.A.; Mani, Seethambal S.; Fleming, J.G.; Headley, Thomas J.; Kotula, Paul G.; Pimentel, Alejandro A.; Rye, Michael J.; Tanner, Danelle M.; Smith, Norman F.

Failure analysis (FA) tools have been applied to analyze tungsten coated polysilicon microengines. These devices were stressed under accelerated conditions at ambient temperatures and pressure. Preliminary results illustrating the failure modes of microengines operated under variable humidity and ultra-high drive frequency will also be shown. Analysis of tungsten coated microengines revealed the absence of wear debris in microengines operated under ambient conditions. Plan view imaging of these microengines using scanning electron microscopy (SEM) revealed no accumulation of wear debris on the surface of the gears or ground plane on microengines operated under standard laboratory conditions. Friction bearing surfaces were exposed and analyzed using the focused ion beam (FIB). These cross sections revealed no accumulation of debris along friction bearing surfaces. By using transmission electron microscopy (TEM) in conjunction with electron energy loss spectroscopy (EELS), we were able to identify the thickness, elemental analysis, and crystallographic properties of tungsten coated MEMS devices. Atomic force microscopy was also utilized to analyze the surface roughness of friction bearing surfaces.

The Consistent Kinetics Porosity (CKP) Model: A Theory for the Mechanical Behavior of Moderately Porous Solids

Brannon, Rebecca M.

A theory is developed for the response of moderately porous solids (no more than ~20% void space) to high-strain-rate deformations. The model is consistent because each feature is incorporated in a manner that is mathematically compatible with the other features. Unlike simple p-α models, the onset of pore collapse depends on the amount of shear present. The user-specifiable yield function depends on pressure, effective shear stress, and porosity. The elastic part of the strain rate is linearly related to the stress rate, with nonlinear corrections from changes in the elastic moduli due to pore collapse. Plastically incompressible flow of the matrix material allows pore collapse and an associated macroscopic plastic volume change. The plastic strain rate due to pore collapse/growth is taken normal to the yield surface. If phase transformation and/or pore nucleation are simultaneously occurring, the inelastic strain rate will be non-normal to the yield surface. To permit hardening, the yield stress of the matrix material is treated as an internal state variable. Changes in porosity and matrix yield stress naturally cause the yield surface to evolve. The stress, porosity, and all other state variables vary in a consistent manner so that the stress remains on the yield surface throughout any quasistatic interval of plastic deformation. Dynamic loading allows the stress to exceed the yield surface via an overstress ordinary differential equation that is solved in closed form for better numerical accuracy. The part of the stress rate that causes no plastic work (i.e., the part that has a zero inner product with the stress deviator and the identity tensor) is given by the projection of the elastic stress rate orthogonal to the span of the stress deviator and the identity tensor. The model, which has been numerically implemented in MIG format, has been exercised under a wide array of extremal loading and unloading paths.
As will be discussed in a companion sequel report, the CKP model is capable of closely matching plate impact measurements for porous materials.

VFLOW2D - A Vortex-Based Code for Computing Flow Over Elastically Supported Tubes and Tube Arrays

Wolfe, Walter P.; Strickland, James H.; Homicz, Gregory F.; Gossler, A.A.

A numerical flow model is developed to simulate two-dimensional fluid flow past immersed, elastically supported tube arrays. This work is motivated by the objective of predicting forces and motion associated with both deep-water drilling and production risers in the oil industry. This work has other engineering applications including simulation of flow past tubular heat exchangers or submarine-towed sensor arrays and the flow about parachute ribbons. In the present work, a vortex method is used for solving the unsteady flow field. This method demonstrates inherent advantages over more conventional grid-based computational fluid dynamics. The vortex method is non-iterative, does not require artificial viscosity for stability, displays minimal numerical diffusion, can easily treat moving boundaries, and allows a greatly reduced computational domain since vorticity occupies only a small fraction of the fluid volume. A gridless approach is used in the flow sufficiently distant from surfaces. A Lagrangian remap scheme is used near surfaces to calculate diffusion and convection of vorticity. A fast multipole technique is utilized for efficient calculation of velocity from the vorticity field. The ability of the method to correctly predict lift and drag forces on simple stationary geometries over a broad range of Reynolds numbers is presented.
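
The core of any vortex method, convecting vorticity by the velocity it induces, can be sketched with 2D point vortices and the Biot-Savart law (a forward-Euler toy, not VFLOW2D's Lagrangian remap, diffusion treatment, or fast multipole machinery):

```python
import math

def step_vortices(pos, gamma, dt):
    """One forward-Euler step of 2D point-vortex dynamics: each vortex is
    convected by the Biot-Savart velocity induced by all the others."""
    vel = []
    for i, (xi, yi) in enumerate(pos):
        ux = uy = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy
            # velocity induced by vortex j: (gamma_j / (2*pi*r^2)) * (-dy, dx)
            ux += -gamma[j] * dy / (2 * math.pi * r2)
            uy += gamma[j] * dx / (2 * math.pi * r2)
        vel.append((ux, uy))
    return [(x + u * dt, y + v * dt) for (x, y), (u, v) in zip(pos, vel)]
```

Two equal-strength vortices co-rotate about their midpoint, a classical check: the circulation-weighted centroid is invariant even under this crude time integrator, since the induced velocities are equal and opposite.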

Digitally Marking RSA Moduli

Johnston, Anna M.

The moduli used in RSA (see [5]) can be generated by many different sources. The generator of a modulus (assuming a single entity generates it) knows its factorization, and would therefore have the ability to forge signatures or break any system based on that modulus. If a modulus and the RSA parameters associated with it were generated by a reputable source, the system would have higher value than if the parameters were generated by an unknown entity. So for tracking, security, confidence, and financial reasons it would be beneficial to know who generated an RSA modulus. This is where digital marking comes in. An RSA modulus is digitally marked, or digitally trademarked, if the generator and other identifying features of the modulus (such as its intended user, the version number, etc.) can be identified and possibly verified from the modulus itself. The basic concept of digitally marking an RSA modulus is to fix the upper bits of the modulus to this tag. Thus anyone who sees the public modulus can tell who generated it and who the generator believes the intended user/owner of the modulus is.
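
The "fix the upper bits" idea can be sketched at toy parameter sizes (a 128-bit modulus, which is not remotely secure; real marking schemes need full-size parameters and careful prime generation): pick a random prime p, then search for a prime q near tag·2^k / p so that the top bits of N = pq spell out the tag.

```python
import random

def is_probable_prime(n, _bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    # Miller-Rabin; deterministic for n below ~3.3e24 with these fixed bases
    if n < 2:
        return False
    for p in _bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in _bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n):
    if n % 2 == 0:
        n += 1
    while not is_probable_prime(n):
        n += 2
    return n

def marked_modulus(tag, tag_bits=32, n_bits=128):
    """Generate N = p*q whose top tag_bits equal `tag` (toy sizes, not secure)."""
    assert 0 < tag < (1 << tag_bits)
    half = n_bits // 2
    while True:
        p = next_prime(random.getrandbits(half) | (1 << (half - 1)))
        target = tag << (n_bits - tag_bits)   # desired value of the top bits
        q = next_prime(target // p)
        N = p * q
        # rounding q up to a prime perturbs only the low bits; retry if the
        # perturbation spilled into the tag region
        if N >> (n_bits - tag_bits) == tag:
            return N, p, q
```

Because q is at most a small prime gap above target/p, the product p·q differs from the target by far less than 2^(n_bits − tag_bits), so the top bits almost always match on the first attempt.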

More Details

Methods for Multisweep Automation

Shepherd, Jason F.; Mitchell, Scott A.; Knupp, Patrick K.

Sweeping has become the workhorse algorithm for creating conforming hexahedral meshes of complex models. This paper describes progress on the automatic, robust generation of MultiSwept meshes in CUBIT. MultiSweeping extends the class of volumes that may be swept to include those with multiple source and multiple target surfaces. While not yet perfect, CUBIT's MultiSweeping has recently become more reliable and has been extended to assemblies of volumes. Sweep Forging automates the process of making a volume (multi)sweepable; Sweep Verification takes the given source and target surfaces and automatically classifies curve and vertex types so that sweep layers are well formed and progress from sources to targets.

More Details

Automatic scheme selection for toolkit hex meshing

International Journal for Numerical Methods in Engineering

White, David R.; Tautges, Timothy J.

Current hexahedral mesh generation techniques rely on a set of meshing tools which, when combined with geometry decomposition, lead to an adequate mesh generation process. Of these tools, sweeping tends to be the workhorse algorithm, accounting for at least 50 per cent of most meshing applications. Constraints which must be met for a volume to be sweepable are derived, and it is proven that these constraints are necessary but not sufficient conditions for sweepability. This paper also describes a new algorithm for detecting extruded or sweepable geometries. This algorithm, based on these constraints, uses topological and local geometric information, and is more robust than feature recognition-based algorithms. A method for computing sweep dependencies in volume assemblies is also given. The auto sweep detect and sweep grouping algorithms have been used to reduce the interactive user time required to generate all-hexahedral meshes by filtering out non-sweepable volumes needing further decomposition and by allowing concurrent meshing of independent sweep groups. Parts of the auto sweep detect algorithm have also been used to identify independent sweep paths for use in volume-based interval assignment. Published in 2000 by John Wiley & Sons, Ltd.

More Details

Massively Parallel Direct Simulation of Multiphase Flow

Cook, Benjamin K.; Preece, Dale S.

The authors' understanding of multiphase physics and the associated predictive capability for multiphase systems are severely limited by current continuum modeling methods and experimental approaches. This research will deliver an unprecedented modeling capability to directly simulate three-dimensional multiphase systems at the particle scale. The model solves the fully coupled equations of motion governing the fluid phase and the individual particles comprising the solid phase using a newly discovered, highly efficient coupled numerical method based on the discrete-element method and the lattice-Boltzmann method. A massively parallel implementation will enable the solution of large, physically realistic systems.

More Details

Algorithmic Strategies in Combinatorial Chemistry

Istrail, Sorin I.; Womble, David E.

Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several Inverse Problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.

More Details

Radiation in an Emitting and Absorbing Medium: A Gridless Approach

Numerical Heat Transfer, Part B

Gritzo, Louis A.; Strickland, James H.; DesJardin, Paul E.

A gridless technique for the solution of the integral form of the radiative heat flux equation for emitting and absorbing media is presented. Treatment of non-uniform absorptivity and gray boundaries is included. As part of this work, the authors have developed fast multipole techniques for extracting radiative heat flux quantities from the temperature fields of one-dimensional and three-dimensional geometries. Example calculations include those for one-dimensional radiative heat transfer through multiple flame sheets, a three-dimensional enclosure with black walls, and an axisymmetric enclosure with black walls.

More Details

Welding Behavior of Free Machining Stainless Steel

Welding Journal Research Supplement

Robino, Charles V.; Headley, Thomas J.; Michael, Joseph R.

The weld solidification and cracking behavior of sulfur-bearing free machining austenitic stainless steel was investigated for both gas-tungsten arc (GTA) and pulsed laser beam weld processes. The GTA weld solidification was consistent with that predicted by existing solidification diagrams, and the cracking response was controlled primarily by solidification mode. The solidification behavior of the pulsed laser welds was complex, and often contained regions of primary ferrite and primary austenite solidification, although in all cases the welds were found to be completely austenitic at room temperature. Electron backscattered diffraction (EBSD) pattern analysis indicated that the nature of the base metal at the time of solidification plays a primary role in initial solidification. The solid-state transformation of austenite to ferrite at the fusion zone boundary, and of ferrite to austenite on cooling, may both be massive in nature. A range of alloy compositions that exhibited good resistance to solidification cracking and was compatible with both welding processes was identified. The compositional range is bounded by laser weldability at lower Cr_eq/Ni_eq ratios and by GTA weldability at higher ratios. With both processes, the limiting ratios were found to be somewhat dependent on sulfur content.

More Details

Unconstrained and Constrained Minimization, Linear Scaling, and the Grassmann Manifold: Theory and Applications

Physical Review B

Lippert, Ross A.; Schultz, Peter A.

An unconstrained minimization algorithm for electronic structure calculations using density functional theory for systems with a gap is developed to solve for nonorthogonal Wannier-like orbitals in the spirit of E. B. Stechel, A. R. Williams, and P. J. Feibelman, Phys. Rev. B 49, 10,008 (1994). The search for the occupied subspace is a Grassmann conjugate gradient algorithm generalized from the algorithm of A. Edelman, T. A. Arias, and S. T. Smith, SIAM J. Matrix Anal. Appl. 20, 303 (1998). The gradient takes into account the nonorthogonality of a local atom-centered basis, Gaussian in their implementation. With a localization constraint on the Wannier-like orbitals, well-constructed sparse matrix multiplies lead to O(N) scaling of the computationally intensive parts of the algorithm. Using silicon carbide as a test system, the accuracy, convergence, and implementation of this algorithm as a quantitative alternative to diagonalization are investigated. Results up to 1,458 atoms on a single processor are presented.

More Details

Cooperative sentry vehicles and differential GPS leapfrog

Feddema, John T.; Lewis, Christopher L.; Lafarge, Robert A.

As part of a project for the Defense Advanced Research Projects Agency, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing and testing the feasibility of using a cooperative team of robotic sentry vehicles to guard a perimeter, perform a surround task, and travel extended distances. This paper describes the authors' most recent activities. In particular, it highlights the development of a Differential Global Positioning System (DGPS) leapfrog capability that allows two or more vehicles to alternate sending DGPS corrections. Using this leapfrog technique, the paper shows that a group of autonomous vehicles can travel 22.68 kilometers with a root mean square positioning error of only 5 meters.

More Details

Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization

IEEE Transactions on Evolutionary Computation

Hart, William E.

The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs, and it applies to problems that are nonsmooth, have unbounded objective functions, and are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.
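
The step-size adaptation that EPSAs inherit from pattern search can be illustrated with a minimal deterministic coordinate-poll sketch: try steps of size Δ along each coordinate, move on success, and contract Δ on failure. This is generic pattern search rather than the paper's evolutionary algorithm; the test function and tolerances are illustrative assumptions.

```python
def pattern_search(f, x0, step=1.0, tol=1e-8):
    """Poll +/- step along each coordinate; accept any improving move,
    halve the step when all polls fail (the contraction EPSAs perform
    when a mutation generation is unsuccessful)."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                y = list(x)
                y[d] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # contract the mutation scale
    return x, fx
```

The convergence theory cited in the abstract hinges on exactly this mechanism: the step size can only shrink when no poll direction improves, which on smooth problems bounds the gradient norm by a multiple of the current step.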

More Details

Invariant patterns in crystal lattices: Implications for protein folding algorithms

Journal for Universal Computer Science

Hart, William E.; Istrail, Sorin I.

Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and which are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.

More Details

Code Verification by the Method of Manufactured Solutions

Salari, Kambiz S.; Knupp, Patrick K.

A procedure for code verification by the Method of Manufactured Solutions (MMS) is presented. Although the procedure requires a certain amount of creativity and skill, we show that MMS can be applied to a variety of engineering codes which numerically solve partial differential equations. This is illustrated by detailed examples from computational fluid dynamics. The strength of the MMS procedure is that it can identify any coding mistake that affects the order of accuracy of the numerical method. A set of examples which use a blind-test protocol demonstrates the kinds of coding mistakes that can (and cannot) be exposed via the MMS code verification procedure. The principal advantage of the MMS procedure over traditional methods of code verification is that code capabilities are tested in full generality. The procedure thus results in a high degree of confidence that all coding mistakes which prevent the equations from being solved correctly have been identified.
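
The procedure can be demonstrated on a toy problem: manufacture a solution u(x) = sin(πx) for the Poisson equation -u'' = f, derive the source term f = π² sin(πx) analytically, and confirm that the solver's observed order of accuracy matches its formal second order. The model problem and grid sizes below are illustrative choices, not examples from the report.

```python
import numpy as np

def solve_poisson(f, n):
    """Solve -u'' = f on (0,1) with u(0)=u(1)=0 by second-order central differences."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    off = -np.ones(n - 1)
    A = (np.diag(2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    return x, np.linalg.solve(A, f(x))

def observed_orders():
    u_exact = lambda x: np.sin(np.pi * x)        # manufactured solution
    f = lambda x: np.pi**2 * np.sin(np.pi * x)   # source term obtained from -u''
    errs = []
    for n in (16, 32, 64):
        x, uh = solve_poisson(f, n)
        errs.append(np.max(np.abs(uh - u_exact(x))))
    # observed order between successive grids (h ratio is (n2+1)/(n1+1))
    return [np.log(errs[i] / errs[i + 1]) / np.log((17, 33, 65)[i + 1] / (17, 33, 65)[i])
            for i in range(2)]
```

A coding mistake that degrades the discretization (say, a wrong stencil coefficient) would show up immediately as an observed order below 2, which is precisely the failure mode MMS is designed to expose.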

More Details

Load balancing fictions, falsehoods and fallacies

Applied Mathematical Modeling

Hendrickson, Bruce A.

Effective use of a parallel computer requires that a calculation be carefully divided among the processors. This load balancing problem appears in many guises and has been a fervent area of research for the past decade or more. Although great progress has been made, and useful software tools developed, a number of challenges remain. It is the conviction of the author that these challenges will be easier to address if programmers first come to terms with some significant shortcomings in their current perspectives. This paper tries to identify several areas in which the prevailing point of view is either mistaken or insufficient. The goal is to motivate new ideas and directions for this important field.

More Details

Interprocessor communication with memory constraints

Hendrickson, Bruce A.; Hendrickson, Bruce A.

Many parallel applications require periodic redistribution of workloads and associated data. In a distributed memory computer, this redistribution can be difficult if limited memory is available for receiving messages. The authors propose a model for optimizing the exchange of messages under such circumstances, which they call the minimum phase remapping problem. They first show that the problem is NP-complete, and then analyze several methodologies for addressing it. First, they show how the problem can be phrased as an instance of multi-commodity flow. Next, they study a continuous approximation to the problem. They show that this continuous approximation has a solution which requires at most two more phases than the optimal discrete solution, but the question of how to consistently obtain a good discrete solution from the continuous problem remains open. Finally, they devise a simple and practical approximation algorithm for the problem with a bound of 1.5 times the optimal number of phases.
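
The phased-exchange idea can be illustrated naively: pack messages into phases so that no receiver's incoming volume in any single phase exceeds its free memory. The first-fit sketch below is only an illustration of the problem setting, not the paper's 1.5-approximation, and the message list and capacity are invented for the example.

```python
def greedy_phases(messages, capacity):
    """Assign (dest, size) messages to phases so each destination receives
    at most `capacity` bytes per phase. Naive first-fit packing sketch."""
    phases = []      # one dict per phase: dest -> bytes scheduled so far
    assignment = []  # phase index chosen for each message, in input order
    for dest, size in messages:
        assert size <= capacity, "a single message must fit in receive memory"
        for k, load in enumerate(phases):
            if load.get(dest, 0) + size <= capacity:
                load[dest] = load.get(dest, 0) + size
                assignment.append(k)
                break
        else:
            phases.append({dest: size})
            assignment.append(len(phases) - 1)
    return assignment, len(phases)
```

Minimizing the number of phases matters because each phase boundary is a global synchronization point; the NP-completeness result in the abstract says that doing this optimally is intractable in general, which is why the authors pursue approximations.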

More Details

Solving complex-valued linear systems via equivalent real formulations

SIAM Journal of Scientific Computing

Day, David M.; Heroux, Michael A.

Most algorithms used in preconditioned iterative methods are generally applicable to complex-valued linear systems, with real-valued linear systems simply being a special case. However, most iterative solver packages available today focus exclusively on real-valued systems, or deal with complex-valued systems as an afterthought. One obvious approach to addressing this problem is to recast the complex problem into one of several equivalent real forms and then use a real-valued solver to solve the related system. However, well-known theoretical results showing unfavorable spectral properties for the equivalent real forms have diminished enthusiasm for this approach. At the same time, experience has shown that there are situations where using an equivalent real form can be very effective. In this paper, the authors explore this approach, giving both theoretical and experimental evidence that an equivalent real form can be useful for a number of practical situations. Furthermore, they show that by making good use of some of the advanced features of modern solver packages, they can easily generate equivalent real form preconditioners that are computationally efficient and mathematically identical to their complex counterparts. Using their techniques, they are able to solve very ill-conditioned complex-valued linear systems for a variety of large-scale applications. More importantly, they shed more light on the effectiveness of equivalent real forms and more clearly delineate how and when they should be used.
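
The recasting itself is simple to state: (A + iB)(x + iy) = b + ic is equivalent to the real block system [[A, -B], [B, A]] [x; y] = [b; c]. A minimal sketch using a dense direct solve in place of the preconditioned iterative methods the paper studies; the random test system is a stand-in for a real application.

```python
import numpy as np

def equivalent_real_solve(C, z):
    """Solve the complex system C w = z via the real [[A,-B],[B,A]] formulation."""
    A, B = C.real, C.imag
    K = np.block([[A, -B],
                  [B,  A]])              # equivalent real form of C
    rhs = np.concatenate([z.real, z.imag])
    xy = np.linalg.solve(K, rhs)
    n = C.shape[0]
    return xy[:n] + 1j * xy[n:]          # reassemble w = x + i*y
```

For a direct solve the two routes are interchangeable; the spectral concerns the abstract mentions arise because the eigenvalues of K are those of C together with their conjugates, which can slow Krylov iterations on the real form.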

More Details

Microstructures of laser deposited 304L austenitic stainless steel

Headley, Thomas J.; Robino, Charles V.

Laser deposits fabricated from two different compositions of 304L stainless steel powder were characterized to determine the nature of the solidification and solid-state transformations. One of the goals of this work was to determine to what extent a novel microstructure consisting of single-phase austenite could be achieved with the thermal conditions of the LENS [Laser Engineered Net Shaping] process. Although ferrite-free deposits were not obtained, structures with very low ferrite content were achieved. It appeared that, with slight changes in alloy composition, this goal could be met via two different solidification and transformation mechanisms.

More Details

Direct simulation of particle-laden fluids

Cook, Benjamin K.; Noble, David R.; Preece, Dale S.

Processes that involve particle-laden fluids are common in geomechanics and especially in the petroleum industry. Understanding the physics of these processes and the ability to predict their behavior requires the development of coupled fluid-flow and particle-motion computational methods. This paper outlines an accurate and robust coupled computational scheme using the lattice-Boltzmann method for fluid flow and the discrete-element method for solid particle motion. Results from several two-dimensional validation simulations are presented. Simulations reported include the sedimentation of an ellipse, a disc and two interacting discs in a closed column of fluid. The recently discovered phenomenon of drafting, kissing, and tumbling is fully reproduced in the two-disc simulation.

More Details

Materials Issues for Micromachines Development - ASCI Program Plan

Fang, H.E.; Miller, Samuel L.; Dugger, Michael T.; Prasad, Somuri V.; Reedy, Earl D.; Thompson, Aidan P.; Wong, Chungnin C.; Yang, Pin Y.; Battaile, Corbett C.; Benavides, Gilbert L.; Ensz, M.T.; Buchheit, Thomas E.; Chen, Er-Ping C.; Christenson, Todd R.; De Boer, Maarten P.

This report summarizes materials issues associated with advanced micromachines development at Sandia. The intent of this report is to provide a perspective on the scope of the issues and to suggest future technical directions, with a focus on computational materials science. Materials issues in surface micromachining (SMM), Lithographie-Galvanoformung-Abformung (LIGA: lithography, electrodeposition, and molding), and meso-machining technologies were identified. Each individual issue was assessed in four categories: degree of basic understanding; amount of existing experimental data; capability of existing models; and, based on the perspective of component developers, the importance of the issue to be resolved. Three broad requirements for micromachines emerged from this process: (1) tribological behavior, including stiction, friction, wear, and the use of surface treatments to control them; (2) mechanical behavior at the microscale, including elasticity, plasticity, and the effect of microstructural features on mechanical strength; and (3) degradation of tribological and mechanical properties in normal (including aging), abnormal, and hostile environments. Resolving all the identified critical issues requires a significant cooperative and complementary effort between computational and experimental programs. The breadth of this work is greater than any single program is likely to support. This report should serve as a guide to planning micromachines development at Sandia.

More Details

A naturalistic decision making model for simulated human combatants

Hart, William E.; Forsythe, James C.

The authors describe a naturalistic behavioral model for the simulation of small unit combat. This model, Klein's recognition-primed decision making (RPD) model, is driven by situational awareness rather than a rational process of selecting from a set of action options. They argue that simulated combatants modeled with RPD will have more flexible and realistic responses to a broad range of small-scale combat scenarios. Furthermore, they note that the predictability of a simulation using an RPD framework can be easily controlled to provide multiple evaluations of a given combat scenario. Finally, they discuss computational issues for building an RPD-based behavior engine for fully automated combatants in small conflict scenarios, which are being investigated within Sandia's Next Generation Site Security project.

More Details

Application of finite element, global polynomial, and kriging response surfaces in Progressive Lattice Sampling designs

Romero, Vicente J.; Swiler, Laura P.; Giunta, Anthony A.

This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSAs) in which the RSAs are efficiently upgradable and, upon upgrading, offer convergence information essential in estimating the error introduced by the use of RSAs in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.

More Details

Scalable rendering on PC clusters

Wylie, Brian N.; Lewis, Vasily L.; Shirley, David N.; Pavlakos, Constantine P.

This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), they can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL® with lighting, Gouraud shading, and individually specified triangles (not t-stripped).

More Details

Algebraic mesh quality metrics

SIAM Journal of Scientific Computing

Knupp, Patrick K.

Quality metrics for structured and unstructured mesh generation are placed within an algebraic framework to form a mathematical theory of mesh quality metrics. The theory, based on the Jacobian and related matrices, provides a means of constructing, classifying, and evaluating mesh quality metrics. The Jacobian matrix is factored into geometrically meaningful parts. A nodally-invariant Jacobian matrix can be defined for simplicial elements using a weight matrix derived from the Jacobian matrix of an ideal reference element. Scale- and orientation-invariant algebraic mesh quality metrics are defined. The singular value decomposition is used to study relationships between metrics. Equivalence of the element condition number and mean ratio metrics is proved. Condition number is shown to measure the distance of an element to the set of degenerate elements. Algebraic measures for skew, length ratio, shape, volume, and orientation are defined abstractly, with specific examples given. Combined metrics for shape and volume and for shape, volume, and orientation are algebraically defined, and examples of such metrics are given. Algebraic mesh quality metrics are extended to non-simplicial elements. A series of numerical tests verifies the theoretical properties of the metrics defined.
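
For a triangle, the weighted-Jacobian construction can be made concrete: with corner matrix A = [p1-p0, p2-p0] and W the Jacobian of an ideal equilateral reference element, the Frobenius condition-number shape metric is κ(T) = ||T||_F ||T⁻¹||_F / 2 for T = A W⁻¹, equal to 1 for the ideal shape and growing as the element degenerates. A sketch in this spirit; the specific triangles are illustrative, not from the paper.

```python
import numpy as np

def shape_metric(p0, p1, p2):
    """Frobenius condition-number shape metric for a triangle (1 = ideal)."""
    A = np.column_stack([np.subtract(p1, p0), np.subtract(p2, p0)])
    # Jacobian of the ideal (unit equilateral) reference triangle
    W = np.array([[1.0, 0.5],
                  [0.0, np.sqrt(3.0) / 2.0]])
    T = A @ np.linalg.inv(W)     # weighted Jacobian: deviation from the ideal
    Tinv = np.linalg.inv(T)
    return np.linalg.norm(T, "fro") * np.linalg.norm(Tinv, "fro") / 2.0
```

The metric blows up as the triangle approaches degeneracy (T becomes singular), which matches the abstract's characterization of condition number as a distance to the set of degenerate elements.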

More Details

Scalability limitations of VIA-based technologies in supporting MPI

Brightwell, Ronald B.; Maccabe, Arthur B.

This paper analyzes the scalability limitations of networking technologies based on the Virtual Interface Architecture (VIA) in supporting the runtime environment needed for an implementation of the Message Passing Interface. The authors present an overview of the important characteristics of VIA and an overview of the runtime system being developed as part of the Computational Plant (Cplant) project at Sandia National Laboratories. They discuss the characteristics of VIA that prevent implementations based on this system from meeting the scalability and performance requirements of Cplant.

More Details

Salinas - An implicit finite element structural dynamics code developed for massively parallel platforms

Reese, Garth M.; Driessen, Brian D.; Alvin, Kenneth F.; Day, David M.

As computational needs for structural finite element analysis increase, a robust implicit structural dynamics code is needed which can handle millions of degrees of freedom in the model and produce results with quick turnaround time. A parallel code is needed to avoid the limitations of serial platforms. Salinas is an implicit structural dynamics code specifically designed for massively parallel platforms. It computes the structural response of very large complex structures and provides solutions faster than any existing serial machine. This paper gives the current status of Salinas and uses demonstration problems to show Salinas' performance.

More Details

Computational methods for coupling microstructural and micromechanical materials response simulations

Holm, Elizabeth A.; Wellman, Gerald W.; Battaile, Corbett C.; Buchheit, Thomas E.; Fang, H.E.; Rintoul, Mark D.; Glass, Sarah J.; Knorovsky, Gerald A.; Neilsen, Michael K.

Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

More Details

A case study in working with cell-centered data

Crossno, Patricia J.

This case study provides examples of how some simple decisions the authors made in structuring their algorithms for handling cell-centered data can dramatically influence the results. Although it is well known that these decisions produce variations in results, the potential magnitude of the differences is easy to underestimate. More importantly, the users of the codes may not be aware that these choices have been made or what they mean for the resulting visualizations of their data. This raises the question of whether or not these decisions are inadvertently distorting user interpretations of data sets.

More Details

An agent-based microsimulation of critical infrastructure systems

Barton, Dianne C.; Stamber, Kevin L.

US infrastructures provide essential services that support economic prosperity and quality of life. Today, the latest threat to these infrastructures is the increasing complexity and interconnectedness of the system. On balance, added connectivity will improve economic efficiency; however, increased coupling could also result in situations where a disturbance in an isolated infrastructure unexpectedly cascades across diverse infrastructures. An understanding of the behavior of complex systems can be critical to understanding and predicting infrastructure responses to unexpected perturbation. Sandia National Laboratories has developed an agent-based model of critical US infrastructures using time-dependent Monte Carlo methods and a genetic algorithm learning classifier system to control decision making. The model is currently under development and contains agents that represent several areas within the interconnected infrastructures, including electric power and fuel supply. Previous work shows that agent-based simulation models have the potential to improve the accuracy of complex system forecasting and to provide new insights into the factors that are the primary drivers of emergent behaviors in interdependent systems. Simulation results can be examined both computationally and analytically, offering new ways of theorizing about the impact of perturbations to an infrastructure network.

More Details

Methodology for characterizing modeling and discretization uncertainties in computational simulation

Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

More Details

Finite element meshing approached as a global minimization process

Witkowski, Walter R.; Jung, Joseph J.; Dohrmann, Clark R.; Leung, Vitus J.

The ability to generate a suitable finite element mesh automatically is becoming the key to automating the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body remains an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical analogy is used to formulate the meshing problem, which yields a global mathematical description of it. The analogy is that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in this analogy are duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional that accounts for inter-particle repulsive, attractive, and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. Once the particles are connected, a mesh is easily resolved. The mathematical description of this problem is as easy to formulate in three dimensions as in one or two. The meshing algorithm was developed within CoMeT. It solves the two-dimensional meshing problem for convex and concave geometries in a fully automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10-minute range, so the efficiency of the technique still needs to be addressed. Performance is critical for most engineers generating meshes, but it was not for this project: the primary focus of this work was to investigate and evaluate a meshing algorithm and philosophy, with efficiency issues secondary. The algorithm was also extended to mesh three-dimensional geometries.
Unfortunately, only simple geometries were tested before the project ended. The primary complexity in the extension lay in the formulation of the connectivity problem: defining all of the interparticle interactions that occur in three dimensions and expressing them as mathematical relationships is very difficult.
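The particle analogy can be illustrated with a repulsion-only sketch (the attraction and alignment terms of the actual functional are omitted, and the names `energy`, `grad`, and `relax` are illustrative, not from the CoMeT implementation): particles minimize a soft-core electrical potential by projected gradient descent inside a unit-square domain.

```python
import numpy as np

EPS = 1e-2  # soft-core regularization keeps the pairwise force bounded

def energy(pts):
    """Soft-core 'electrical' potential: sum over pairs of 1/sqrt(r^2 + EPS)."""
    d2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(pts), k=1)
    return float((1.0 / np.sqrt(d2[iu] + EPS)).sum())

def grad(pts):
    """Gradient of energy() with respect to every particle position."""
    diff = pts[:, None] - pts[None, :]            # (n, n, 2) displacement table
    d2 = (diff ** 2).sum(-1) + EPS
    return -(diff / d2[..., None] ** 1.5).sum(axis=1)

def relax(pts, steps=1000, lr=2e-5):
    """Projected gradient descent; clipping to [0, 1]^2 plays the role of the
    charged bounding domain that confines the particles."""
    for _ in range(steps):
        pts = np.clip(pts - lr * grad(pts), 0.0, 1.0)
    return pts

rng = np.random.default_rng(0)
pts0 = rng.random((20, 2))   # random initial particle cloud
pts = relax(pts0)            # relaxed (more evenly spread) configuration
```

Minimizing the functional spreads the particles evenly through the domain; in the full method their relaxed positions and orientations would then be connected into a dual mesh.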

More Details

Salvo: Seismic imaging software for complex geologies

Ober, Curtis C.; Womble, David E.

This report describes Salvo, three-dimensional seismic-imaging software for complex geologies. Regions of complex geology, such as overthrusts and salt structures, can cause difficulties for many seismic-imaging algorithms used in production today. The paraxial wave equation and finite-difference methods used within Salvo can produce high-quality seismic images in these difficult regions. However, this approach carries higher computational costs, which have been too expensive for standard production. Salvo uses improved numerical algorithms and methods, along with parallel computing, to produce high-quality images and to reduce the computational and data input/output (I/O) costs. This report documents the numerical algorithms implemented for the paraxial wave equation, including absorbing boundary conditions, phase corrections, imaging conditions, phase encoding, and reduced-source migration. It also describes I/O algorithms for large seismic data sets and images, and the parallelization methods used to obtain high efficiencies for both the computations and the I/O of seismic data sets. Finally, this report describes the steps required to compile, port, and optimize the Salvo software, and describes the validation data sets used to help verify a working copy of Salvo.

More Details

Comparison of electrical CD measurements and cross-section lattice-plane counts of sub-micrometer features replicated in Silicon-on-Insulator materials

Headley, Thomas J.; Everist, Sarah C.

Electrical test structures of the type known as cross-bridge resistors have been patterned in (100) epitaxial silicon material grown on Bonded and Etched-Back Silicon-on-Insulator (BESOI) substrates. The CDs (Critical Dimensions) of a selection of their reference segments have been measured electrically, by SEM (Scanning-Electron Microscopy) cross-section imaging, and by lattice-plane counting. The lattice-plane counting is performed on phase-contrast images made by High-Resolution Transmission-Electron Microscopy (HRTEM). The reference-segment features were aligned with <110> directions in the BESOI surface material. They were defined by a silicon micromachining process that leaves their sidewalls atomically planar and smooth, inclined at 54.737° to the surface (100) plane of the substrate. This (100) implementation may usefully complement the attributes of the previously reported vertical-sidewall implementation for selected reference-material applications. The SEM, HRTEM, and electrical CD (ECD) linewidth measurements made on BESOI features of various drawn dimensions on the same substrate are being investigated to determine the feasibility of a CD traceability path that combines the low cost, robustness, and repeatability of the ECD technique with the absolute measurement of the HRTEM lattice-plane counting technique. Other novel aspects of the (100) SOI implementation reported here are the ECD test-structure architecture and the making of HRTEM lattice-plane counts from both cross-sectional and top-down imaging of the reference features. This paper describes the design details and fabrication of the cross-bridge resistor test structure. The long-term goal is a technique for determining the absolute dimensions of the trapezoidal cross-sections of the cross-bridge resistors' reference segments, as a prelude to making them available for dimensional reference applications.

More Details

Tensile instabilities for porous plasticity models

Brannon, Rebecca M.

Several concepts (and assumptions) from the literature for porous metals and ceramics have been synthesized into a consistent model that predicts an admissibility limit on a material's porous yield surface. To ensure positive plastic work, the rate at which a yield surface can collapse as pores grow in tension must be constrained.

More Details

Feature based volume decomposition for automatic hexahedral mesh generation

ASME Journal of Manufacturing Science and Engineering

Tautges, Timothy J.

Much progress has been made over the years toward automatic hexahedral mesh generation. While general meshing algorithms that can handle arbitrary geometry do not yet exist, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature-based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped with automatic meshing algorithms suitable for the corresponding geometry. Thus a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity-based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. In the final section, the capability of the feature decomposer is demonstrated on some complicated manufactured parts.

More Details

The generation of hexahedral meshes for assembly geometries: A survey

International Journal for Numerical Methods in Engineering

Tautges, Timothy J.

The finite element method is being used today to model component assemblies in a wide variety of application areas, including structural mechanics, fluid simulations, and others. Generating hexahedral meshes for these assemblies usually requires the use of geometry decomposition, with different meshing algorithms applied to different regions. While the primary motivation for this approach remains the lack of an automatic, reliable all-hexahedral meshing algorithm, requirements in mesh quality and mesh configuration for typical analyses are also factors. For these reasons, this approach is also sometimes required when producing other types of unstructured meshes. This paper will review progress to date in automating many parts of the hex meshing process, which has halved the time to produce all-hex meshes for large assemblies. Particular issues which have been exposed due to this progress will also be discussed, along with their applicability to the general unstructured meshing problem.

More Details

Prospecting for lunar ice using a multi-rover cooperative team

Klarer, Paul R.; Feddema, John T.; Lewis, Christopher L.

A multi-rover cooperative team or swarm developed by Sandia National Laboratories is described, including various control methodologies that have been implemented to date. How the swarm's capabilities could be applied to a lunar ice prospecting mission is briefly explored. Some of the specific major engineering issues that must be addressed to successfully implement the swarm approach to a lunar surface mission are outlined, and potential solutions are proposed.

More Details

Synthesis of logic circuits with evolutionary algorithms

Jones, Jake S.; Davidson, George S.

In the last decade there has been interest and research in the area of designing circuits with genetic algorithms, evolutionary algorithms, and genetic programming. However, the ability to design circuits of the size and complexity required by modern engineering design problems, simply by specifying required outputs for given inputs, has so far eluded researchers. This paper describes current research in the area of designing logic circuits using an evolutionary algorithm. The goal of the research is to improve the effectiveness of this method and make it a practical aid for design engineers. A novel method of implementing the algorithm is introduced, and results are presented for various multiprocessing systems. In addition to evolving standard arithmetic circuits, work in the area of evolving circuits that perform digital signal processing tasks is described.
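As a toy illustration of the general approach (not the paper's algorithm or its multiprocessing implementation; all names and parameters below are invented), a (1+1) evolutionary algorithm can evolve a small gate-level circuit toward a specified truth table, here XOR:

```python
import random

OPS = {  # gate primitives available to the evolutionary search
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
}
TARGET = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # XOR truth table
N_GATES = 6

def random_gate(i):
    # Signals 0 and 1 are the circuit inputs; gate i may read any earlier signal.
    return (random.choice(list(OPS)), random.randrange(i + 2), random.randrange(i + 2))

def evaluate(genome, a, b):
    sig = [a, b]
    for op, x, y in genome:
        sig.append(OPS[op](sig[x], sig[y]))
    return sig[-1]  # the last gate is the circuit output

def fitness(genome):
    """Number of truth-table rows the circuit reproduces correctly."""
    return sum(evaluate(genome, a, b) == out for (a, b), out in TARGET.items())

def evolve(seed=0, generations=5000):
    """(1+1) evolutionary algorithm: mutate one gate, keep the child if no worse."""
    random.seed(seed)
    parent = [random_gate(i) for i in range(N_GATES)]
    for _ in range(generations):
        if fitness(parent) == len(TARGET):
            break
        child = list(parent)
        i = random.randrange(N_GATES)
        child[i] = random_gate(i)
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

best = evolve()
```

Accepting equal-fitness mutants lets the search drift across neutral networks, which is one simple way such algorithms escape local optima.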

More Details

A precise determination of the void percolation threshold for two distributions of overlapping spheres

Physical Review Letters

Rintoul, Mark D.

The void percolation threshold is calculated for a distribution of overlapping spheres with equal radii, and for a binary-sized distribution of overlapping spheres in which half of the spheres have radii twice as large as the other half. Using systems much larger than in previous work, the authors determine much more precise values for the percolation thresholds and the correlation-length exponent. The values of the percolation thresholds are shown to be significantly different, in contrast with previous, less precise works that speculated that the threshold might be universal with respect to the sphere-size distribution.

More Details

Randomized metarounding

Carr, Robert D.

The authors present a new technique for the design of approximation algorithms that can be viewed as a generalization of randomized rounding. They derive new or improved approximation guarantees for a class of generalized congestion problems such as multicast congestion, multiple TSP, etc. Their main mathematical tool is a structural decomposition theorem related to the integrality gap of a relaxation.
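A minimal sketch of the classical randomized-rounding idea on a toy congestion instance (not the paper's decomposition theorem; the instance, edge names, and functions below are invented for illustration): each request selects one candidate path with probability equal to its fractional LP weight, and the resulting integral congestion is compared against the fractional optimum.

```python
import random

# Toy instance: each request has candidate "paths" (tuples of edges) with
# fractional LP weights summing to 1.
requests = [
    {"paths": [("e1", "e2"), ("e3",)], "weights": [0.7, 0.3]},
    {"paths": [("e2", "e3"), ("e1",)], "weights": [0.5, 0.5]},
    {"paths": [("e1",), ("e3",)],      "weights": [0.2, 0.8]},
]

def fractional_congestion(reqs):
    """Maximum edge load of the fractional (LP) routing."""
    load = {}
    for r in reqs:
        for path, w in zip(r["paths"], r["weights"]):
            for e in path:
                load[e] = load.get(e, 0.0) + w
    return max(load.values())

def round_once(reqs, rng):
    """One randomized-rounding trial: sample one path per request."""
    load = {}
    for r in reqs:
        path = rng.choices(r["paths"], weights=r["weights"])[0]
        for e in path:
            load[e] = load.get(e, 0) + 1
    return max(load.values())

rng = random.Random(0)
best = min(round_once(requests, rng) for _ in range(50))
```

Repeating the trial and keeping the best rounding is the usual way to turn the expected-congestion guarantee into a concrete solution.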

More Details

Scalability and Performance of a Large Linux Cluster

Journal of Parallel and Distributed Computing

Brightwell, Ronald B.; Plimpton, Steven J.

In this paper the authors present performance results from several parallel benchmarks and applications on a 400-node Linux cluster at Sandia National Laboratories. They compare the results on the Linux cluster to performance obtained on a traditional distributed-memory massively parallel processing machine, the Intel TeraFLOPS. They discuss the characteristics of these machines that influence the performance results and identify the key components of the system software that they feel are important to allow for scalability of commodity-based PC clusters to hundreds and possibly thousands of processors.

More Details

Discretization errors associated with Reproducing Kernel Methods: One-dimensional domains

Voth, Thomas E.; Christon, Mark A.

The Reproducing Kernel Particle Method (RKPM) is a discretization technique for partial differential equations that uses the method of weighted residuals, classical reproducing kernel theory, and modified kernels to produce either "mesh-free" or "mesh-full" methods. Although RKPM has many appealing attributes, the method is new, and its numerical performance is just beginning to be quantified. In order to address the numerical performance of RKPM, von Neumann analysis is performed for semi-discretizations of three model one-dimensional PDEs. The von Neumann analysis results are used to examine the global and asymptotic behavior of the semi-discretizations. The model PDEs considered for this analysis include the parabolic and hyperbolic (first- and second-order wave) equations. Numerical diffusivity for the former and phase speed for the latter are presented over the range of discrete wavenumbers and in an asymptotic sense as the particle spacing tends to zero. Group speed is also presented for the hyperbolic problems. Excellent diffusive and dispersive characteristics are observed when a consistent mass matrix formulation is used with the proper choice of refinement parameter. In contrast, the row-sum lumped mass matrix formulation severely degrades performance. The asymptotic analysis indicates that very good rates of convergence are possible when the consistent mass matrix formulation is used with an appropriate choice of refinement parameter.
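The von Neumann procedure can be illustrated on the parabolic model equation with a standard centered difference standing in for the RKPM semi-discretization (the RKPM kernels yield different symbols; this is only the generic recipe):

```latex
u_t = \nu\, u_{xx},
\qquad
\dot{u}_j = \frac{\nu}{\Delta x^2}\left(u_{j+1} - 2u_j + u_{j-1}\right).
```

Substituting the Fourier mode $u_j(t) = \hat{u}(t)\, e^{\mathrm{i} k j \Delta x}$ gives

```latex
\dot{\hat{u}} = -\nu^{*} k^{2} \hat{u},
\qquad
\frac{\nu^{*}}{\nu} = \left[\frac{\sin(k\Delta x/2)}{k\Delta x/2}\right]^{2},
```

so the numerical diffusivity $\nu^{*}(k\Delta x)$ characterizes the semi-discretization over all discrete wavenumbers and recovers $\nu^{*} \to \nu$ as $k\Delta x \to 0$; the paper applies the same machinery to the RKPM operators and, for the wave equations, to phase and group speeds.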

More Details

Design of dynamic load-balancing tools for parallel applications

Devine, Karen D.; Hendrickson, Bruce A.; Boman, Erik G.; Vaughan, Courtenay T.

The design of general-purpose dynamic load-balancing tools for parallel applications is more challenging than the design of static partitioning tools. Both algorithmic and software engineering issues arise. The authors have addressed many of these issues in the design of the Zoltan dynamic load-balancing library. Zoltan has an object-oriented interface that makes it easy to use and provides separation between the application and the load-balancing algorithms. It contains a suite of dynamic load-balancing algorithms, including both geometric and graph-based algorithms. Its design makes it valuable both as a partitioning tool for a variety of applications and as a research test-bed for new algorithmic development. In this paper, the authors describe Zoltan's design and demonstrate its use in an unstructured-mesh finite element application.
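As an illustration of the geometric class of partitioners such a library contains, here is a minimal recursive coordinate bisection (RCB) sketch; it is not Zoltan's implementation, and the function names are invented. RCB recursively splits the point set along its longest axis at a weight-proportional cut:

```python
import numpy as np

def rcb(points, ids, nparts):
    """Recursive coordinate bisection: split along the longest coordinate axis
    until the requested number of parts is reached."""
    if nparts == 1:
        return [ids]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    order = np.argsort(points[:, axis])          # sort along the split axis
    left = nparts // 2
    cut = len(ids) * left // nparts              # proportional split for odd nparts
    lo, hi = order[:cut], order[cut:]
    return (rcb(points[lo], ids[lo], left)
            + rcb(points[hi], ids[hi], nparts - left))

rng = np.random.default_rng(1)
pts = rng.random((1000, 2))                      # e.g., element centroids
parts = rcb(pts, np.arange(1000), 4)
```

Because each cut depends only on coordinates, geometric methods like this are cheap to recompute when the work distribution changes, which is why they suit dynamic load balancing.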

More Details

Human Assisted Assembly Processes

Galpin, Terri L.; Peters, Ralph R.

Automatic assembly sequencing and visualization tools are valuable in determining the best assembly sequences, but without Human Factors and Figure Models (HFFMs) it is difficult to evaluate or visualize human interaction. In industry, accelerating technological advances and shorter market windows have forced companies to turn to an agile manufacturing paradigm. This trend has promoted computerized automation of product design and manufacturing processes, such as automated assembly planning. However, all automated assembly planning software tools assume that the individual components fly into their assembled configuration, and they generate what appear to be perfectly valid operations that in reality cannot physically be carried out by a human. Similarly, human figure modeling algorithms may indicate that assembly operations are not feasible and consequently force design modifications; however, if they had the capability to quickly generate alternative assembly sequences, they might identify a feasible solution. To solve this problem, HFFMs must be integrated with automated assembly planning to allow engineers to verify that assembly operations are possible and to see ways to make the designs even better. Factories will very likely put humans and robots together in cooperative environments to meet the demands for customized products, for purposes including robotic and automated assembly. For robots to work harmoniously within an integrated environment with humans, the robots must have cooperative operational skills. For example, in a human-only environment, humans may tolerate collisions with one another if they do not cause much pain. This level of tolerance may or may not apply to robot-human environments. Humans expect that robots will be able to operate and navigate in their environments without collisions or interference. The ability to accomplish this is linked to the sensing capabilities available.
Current work in the field of cooperative automation has shown the effectiveness of humans and machines directly interacting to perform tasks. To continue to advance this area of robotics, effective means need to be developed to allow natural ways for people to communicate and cooperate with robots just as they do with one another.

More Details

Advanced numerical methods and software approaches for semiconductor device simulation

VLSI Design

Carey, Graham F.; Pardhanani, A.L.; Bova, S.W.

In this article we concisely present several modern strategies applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of "upwind" and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG) methods, "entropy" variables, transformations, least-squares mixed methods, and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate references to the literature. Some of the methods have been applied to the semiconductor device problem, while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area, and we emphasize the need for further work on analysis, data structures, and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.
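The Scharfetter-Gummel approach mentioned above replaces the naive centered difference for the drift-diffusion electron flux with an exponentially fitted one; in a common form (sign conventions vary between texts):

```latex
J_{n,\,i+1/2}
= \frac{q\,\mu_n V_T}{\Delta x}
\left[\, n_{i+1}\, B\!\left(\frac{\psi_{i+1}-\psi_i}{V_T}\right)
      - n_i\, B\!\left(-\frac{\psi_{i+1}-\psi_i}{V_T}\right) \right],
\qquad
B(x) = \frac{x}{e^x - 1},
```

where $n_i$ is the electron density, $\psi_i$ the potential at node $i$, $V_T$ the thermal voltage, and $B$ the Bernoulli function. The Bernoulli weighting makes the scheme stable for arbitrarily large potential drops per cell, which is why generalizations of it remain the workhorse for drift-dominated transport.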

More Details

Load-balancing techniques for a parallel electromagnetic particle-in-cell code

Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

QUICKSILVER is a 3-D electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged-particle transport. It models the time response of electromagnetic fields and low-density plasmas in a self-consistent manner: the fields push the plasma particles, and the plasma current modifies the fields. Through an LDRD project, a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform that supports the Message Passing Interface (MPI) standard, as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.
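A minimal sketch of the scatter kernel at the heart of any particle-in-cell code (1-D, linear cloud-in-cell weighting; not QUICKSILVER's implementation, and the names are invented): charge deposition from particles onto a periodic grid. It is this particle-dependent cost, uneven when plasma bunches in a few cells, that drives the load-balancing problem the report addresses.

```python
import numpy as np

def deposit_charge(x, q, ngrid, length):
    """Cloud-in-cell (linear-weighting) charge deposition onto a periodic
    1-D grid: each particle's charge is shared between its two nearest nodes."""
    rho = np.zeros(ngrid)
    dx = length / ngrid
    g = x / dx                              # particle position in grid units
    i = np.floor(g).astype(int) % ngrid     # left node index
    frac = g - np.floor(g)                  # fractional distance to left node
    np.add.at(rho, i, q * (1.0 - frac))     # unbuffered scatter-add
    np.add.at(rho, (i + 1) % ngrid, q * frac)
    return rho / dx                         # charge per unit length

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10000)            # particle positions on [0, 1)
rho = deposit_charge(x, 1.0 / len(x), 64, 1.0)
```

The linear weights sum to one per particle, so total charge is conserved exactly, a property the field solve relies on for self-consistency.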

More Details

Second-order structural identification procedure via state-space-based system identification

AIAA Journal

Alvin, Kenneth F.; Park, K.C.P.

We present a theory for transforming system-theory-based realization models into the corresponding physical-coordinate-based structural models. The theory has been implemented in a computational procedure and applied to several example problems. Our results show that the present transformation theory yields an objective model basis possessing a unique set of structural parameters from an infinite set of equivalent system realization models. For proportionally damped systems, the transformation directly and systematically yields the normal modes and modal damping. Moreover, when nonproportional damping is present, the relative magnitude and phase of the damped mode shapes are separately characterized, and a corrective transformation is then employed to capture the undamped normal modes and the nondiagonal modal damping matrix.
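The connection between a first-order (state-space) realization and second-order structural parameters can be sketched for the proportionally damped case (a generic eigenvalue computation, not the paper's transformation procedure; the 2-DOF example system is invented): the complex eigenvalues of the state matrix directly encode the undamped natural frequencies and modal damping ratios.

```python
import numpy as np

# Proportionally damped 2-DOF system: M q'' + C q' + K q = 0, with C = a*M + b*K
M = np.diag([1.0, 2.0])
K = np.array([[40.0, -15.0], [-15.0, 30.0]])
a, b = 0.1, 0.002
C = a * M + b * K

# First-order (state-space) form x' = A x, with state x = [q, q']
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])

lam = np.linalg.eigvals(A)
lam = lam[np.imag(lam) > 0]            # one eigenvalue from each conjugate pair
omega = np.abs(lam)                    # undamped natural frequencies |lambda|
zeta = -np.real(lam) / np.abs(lam)     # modal damping ratios -Re(lambda)/|lambda|

# Reference: undamped frequencies from the generalized eigenproblem K v = w^2 M v
w2 = np.linalg.eigvals(np.linalg.solve(M, K))
omega_ref = np.sort(np.sqrt(w2.real))
```

For proportional damping these state-space quantities reproduce the second-order modal parameters exactly; the nonproportional case is where the paper's corrective transformation is needed.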

More Details
Results 9901–9998 of 9,998