Publications

Algorithmic Strategies in Combinatorial Chemistry

Istrail, Sorin I.; Womble, David E.

Combinatorial chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development, and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial-time algorithms and intractability results for several inverse problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior in both accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
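
As a concrete illustration of the shortest-path topological indices the package relies on, the sketch below computes the classic Wiener index (the sum of all pairwise shortest-path distances) for a small hydrogen-suppressed molecular graph. This is a minimal illustration of the index family, not OCOTILLO itself; the graph and function names are invented for the example.

```python
from collections import deque

def wiener_index(adjacency):
    """Sum of shortest-path distances over all unordered vertex pairs."""
    total = 0
    vertices = list(adjacency)
    for i, source in enumerate(vertices):
        # BFS from `source`; edges have unit weight in a simple molecular graph.
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        # Count each pair once by only adding distances to later vertices.
        total += sum(dist[v] for v in vertices[i + 1:])
    return total

# 2-methylbutane as a hydrogen-suppressed graph: C1-C2(-C5)-C3-C4
adjacency = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
print(wiener_index(adjacency))  # 18
```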

Radiation in an Emitting and Absorbing Medium: A Gridless Approach

Numerical Heat Transfer, Part B

Gritzo, Louis A.; Strickland, James H.; DesJardin, Paul E.

A gridless technique for the solution of the integral form of the radiative heat flux equation for emitting and absorbing media is presented. Treatment of non-uniform absorptivity and gray boundaries is included. As part of this work, the authors have developed fast multipole techniques for extracting radiative heat flux quantities from the temperature fields of one-dimensional and three-dimensional geometries. Example calculations include those for one-dimensional radiative heat transfer through multiple flame sheets, a three-dimensional enclosure with black walls, and an axisymmetric enclosure with black walls.
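
To make the structure of the computation concrete, here is a schematic direct O(N^2) evaluation of a 1-D emission/absorption flux sum of the kind the paper accelerates with fast multipole techniques. The exponential-attenuation kernel, units, and temperature profile are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiative_flux_direct(x, T, kappa):
    """Direct O(N^2) evaluation of a schematic 1-D emission/absorption flux
    sum: each element emits sigma*T^4 scaled by its optical thickness, and
    contributions decay exponentially with optical depth to the field point.
    This pairwise structure is what fast multipole methods accelerate."""
    dx = x[1] - x[0]
    emission = SIGMA * T**4 * kappa * dx          # per-element emitted power
    q = np.zeros_like(x)
    for i, xi in enumerate(x):
        tau = kappa * np.abs(xi - x)              # optical depth to each source
        q[i] = np.sum(emission * np.exp(-tau) * np.sign(xi - x))
    return q

x = np.linspace(0.0, 1.0, 200)
T = 300.0 + 1500.0 * np.exp(-((x - 0.5) / 0.1) ** 2)  # a flame-sheet-like hot layer
q = radiative_flux_direct(x, T, kappa=2.0)
print(q.min(), q.max())   # net flux points away from the hot layer on each side
```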

Welding Behavior of Free Machining Stainless Steel

Welding Journal Research Supplement

Robino, Charles V.; Headley, Thomas J.; Michael, Joseph R.

The weld solidification and cracking behavior of sulfur-bearing free-machining austenitic stainless steel was investigated for both gas-tungsten arc (GTA) and pulsed laser beam welding processes. The GTA weld solidification was consistent with that predicted by existing solidification diagrams, and the cracking response was controlled primarily by the solidification mode. The solidification behavior of the pulsed laser welds was complex and often included regions of primary ferrite and primary austenite solidification, although in all cases the welds were found to be completely austenitic at room temperature. Electron backscattered diffraction (EBSD) pattern analysis indicated that the nature of the base metal at the time of solidification plays a primary role in initial solidification. The solid-state transformation of austenite to ferrite at the fusion zone boundary, and of ferrite to austenite on cooling, may both be massive in nature. A range of alloy compositions that exhibited good resistance to solidification cracking and was compatible with both welding processes was identified. The compositional range is bounded by laser weldability at lower Cr_eq/Ni_eq ratios and by GTA weldability at higher ratios. With both processes, the limiting ratios were found to be somewhat dependent on sulfur content.

Unconstrained and Constrained Minimization, Linear Scaling, and the Grassmann Manifold: Theory and Applications

Physical Review B

Lippert, Ross A.; Schultz, Peter A.

An unconstrained minimization algorithm for electronic structure calculations using density functional theory for systems with a gap is developed to solve for nonorthogonal Wannier-like orbitals in the spirit of E. B. Stechel, A. R. Williams, and P. J. Feibelman, Phys. Rev. B 49, 10,008 (1994). The search for the occupied subspace is a Grassmann conjugate gradient algorithm generalized from the algorithm of A. Edelman, T. A. Arias, and S. T. Smith, SIAM J. Matrix Anal. Appl. 20, 303 (1998). The gradient takes into account the nonorthogonality of a local atom-centered basis, Gaussian in their implementation. With a localization constraint on the Wannier-like orbitals, well-constructed sparse matrix multiplies lead to O(N) scaling of the computationally intensive parts of the algorithm. Using silicon carbide as a test system, the accuracy, convergence, and implementation of this algorithm as a quantitative alternative to diagonalization are investigated. Results for up to 1,458 atoms on a single processor are presented.

Cooperative sentry vehicles and differential GPS leapfrog

Feddema, John T.; Lewis, Christopher L.; Lafarge, Robert A.

As part of a project for the Defense Advanced Research Projects Agency, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing and testing the feasibility of using a cooperative team of robotic sentry vehicles to guard a perimeter, perform a surround task, and travel extended distances. This paper describes the authors' most recent activities. In particular, it highlights the development of a Differential Global Positioning System (DGPS) leapfrog capability that allows two or more vehicles to alternate sending DGPS corrections. Using this leapfrog technique, the paper shows that a group of autonomous vehicles can travel 22.68 kilometers with a root-mean-square positioning error of only 5 meters.
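
The alternation logic behind DGPS leapfrog can be sketched as a simple rotating schedule: on each leg one vehicle halts and broadcasts corrections while the others advance. The toy planner below is an illustration of the idea under that assumption, not the fielded control software.

```python
def leapfrog_plan(n_vehicles, n_legs):
    """Sketch of a DGPS leapfrog schedule: on each leg one vehicle halts and
    serves as the differential reference station while the rest drive ahead.
    Roles rotate so the team advances while always having a fixed base."""
    plan = []
    for leg in range(n_legs):
        base = leg % n_vehicles          # rotate the stationary reference role
        movers = [v for v in range(n_vehicles) if v != base]
        plan.append({"leg": leg, "base": base, "movers": movers})
    return plan

for step in leapfrog_plan(n_vehicles=3, n_legs=4):
    print(f"leg {step['leg']}: vehicle {step['base']} broadcasts corrections, "
          f"vehicles {step['movers']} advance")
```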

Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization

IEEE Transactions on Evolutionary Computation

Hart, William E.

The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs and to problems that are nonsmooth, have unbounded objective functions, and are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.
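
To make the adaptive step-size idea concrete, the sketch below is a minimal EPSA-style loop: mutation offsets are drawn from a pattern of coordinate directions, and the global step size expands after a successful generation and contracts otherwise. The specific constants and population handling are illustrative assumptions, not the exact algorithm class analyzed in the paper.

```python
import random

def epsa_minimize(f, x0, step=1.0, expand=2.0, contract=0.5,
                  population=10, max_iters=200, min_step=1e-6):
    """Minimal evolutionary pattern search sketch: mutations are pattern
    directions scaled by a step size that adapts to success or failure."""
    dim = len(x0)
    best, fbest = list(x0), f(x0)
    # Pattern of +/- unit coordinate directions, as in pattern search methods.
    pattern = [[(1 if j == i else 0) * s for j in range(dim)]
               for i in range(dim) for s in (1, -1)]
    for _ in range(max_iters):
        improved = False
        for _ in range(population):
            d = random.choice(pattern)
            trial = [best[k] + step * d[k] for k in range(dim)]
            ftrial = f(trial)
            if ftrial < fbest:
                best, fbest, improved = trial, ftrial, True
        step = step * expand if improved else step * contract
        if step < min_step:
            break
    return best, fbest

# A nonsmooth test objective, the kind covered by the convergence theory.
x, fx = epsa_minimize(lambda v: (v[0] - 3) ** 2 + abs(v[1] + 1), [0.0, 0.0])
print(x, fx)
```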

Invariant patterns in crystal lattices: Implications for protein folding algorithms

Journal of Universal Computer Science

Hart, William E.; Istrail, Sorin I.

Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and which are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.
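
For readers unfamiliar with HP lattice models, the sketch below scores a conformation on the square lattice: each non-bonded contact between two hydrophobic (H) residues contributes -1 to the energy. The example chain and fold are invented; approximation algorithms of the kind discussed here seek conformations minimizing exactly this objective.

```python
def hp_energy(sequence, conformation):
    """HP square-lattice energy: -1 per hydrophobic (H-H) contact between
    residues that are lattice neighbors but not consecutive in the chain."""
    index_of = {site: i for i, site in enumerate(conformation)}
    energy = 0
    for i, (x, y) in enumerate(conformation):
        if sequence[i] != 'H':
            continue
        for neighbor in ((x + 1, y), (x, y + 1)):   # scan right/up: each pair once
            j = index_of.get(neighbor)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# A 6-residue chain folded into a U shape on the square lattice.
sequence = "HPHPHH"
conformation = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
print(hp_energy(sequence, conformation))  # -1: one H-H contact between ends
```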

Code Verification by the Method of Manufactured Solutions

Salari, Kambiz S.; Knupp, Patrick K.

A procedure for code verification by the Method of Manufactured Solutions (MMS) is presented. Although the procedure requires a certain amount of creativity and skill, we show that MMS can be applied to a variety of engineering codes which numerically solve partial differential equations. This is illustrated by detailed examples from computational fluid dynamics. The strength of the MMS procedure is that it can identify any coding mistake that affects the order of accuracy of the numerical method. A set of examples which use a blind-test protocol demonstrates the kinds of coding mistakes that can (and cannot) be exposed via the MMS code verification procedure. The principal advantage of the MMS procedure over traditional methods of code verification is that code capabilities are tested in full generality. The procedure thus results in a high degree of confidence that all coding mistakes which prevent the equations from being solved correctly have been identified.
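
A minimal end-to-end illustration of MMS, under a textbook setup rather than the codes studied in the paper: manufacture u(x) = sin(pi x) for the 1-D Poisson problem, derive the forcing term analytically, and confirm that a second-order finite-difference solver reproduces its theoretical order of accuracy.

```python
import numpy as np

def solve_poisson(n, f, ua, ub):
    """Second-order finite-difference solve of -u'' = f on [0,1]
    with Dirichlet boundary values ua and ub."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = f(x)
    b[0] += ua / h**2     # fold boundary values into the right-hand side
    b[-1] += ub / h**2
    return x, np.linalg.solve(A, b)

# Manufactured solution: pick u, derive the forcing term analytically.
u_exact = lambda x: np.sin(np.pi * x)
forcing = lambda x: np.pi**2 * np.sin(np.pi * x)   # f = -u''

errors = []
for n in (16, 32, 64):
    x, u = solve_poisson(n, forcing, u_exact(0.0), u_exact(1.0))
    errors.append(np.max(np.abs(u - u_exact(x))))
# The observed order of accuracy should approach 2 if the code is correct.
print([np.log2(errors[k] / errors[k + 1]) for k in range(len(errors) - 1)])
```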

Load balancing fictions, falsehoods and fallacies

Applied Mathematical Modelling

Hendrickson, Bruce A.

Effective use of a parallel computer requires that a calculation be carefully divided among the processors. This load-balancing problem appears in many guises and has been an active area of research for the past decade or more. Although great progress has been made, and useful software tools developed, a number of challenges remain. It is the author's conviction that these challenges will be easier to address if programmers first come to terms with some significant shortcomings in their current perspectives. This paper tries to identify several areas in which the prevailing point of view is either mistaken or insufficient. The goal is to motivate new ideas and directions for this important field.

Interprocessor communication with memory constraints

Hendrickson, Bruce A.

Many parallel applications require periodic redistribution of workloads and associated data. In a distributed-memory computer, this redistribution can be difficult if limited memory is available for receiving messages. The authors propose a model for optimizing the exchange of messages under such circumstances, which they call the minimum phase remapping problem. They first show that the problem is NP-complete and then analyze several methodologies for addressing it. First, they show how the problem can be phrased as an instance of multi-commodity flow. Next, they study a continuous approximation to the problem. They show that this continuous approximation has a solution which requires at most two more phases than the optimal discrete solution, but the question of how to consistently obtain a good discrete solution from the continuous problem remains open. Finally, they devise a simple and practical approximation algorithm for the problem with a bound of 1.5 times the optimal number of phases.
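
The sketch below illustrates why phases arise at all: a toy greedy scheduler sends a message only when the destination currently has buffer space, frees memory at the sender once data leaves, and repeats. It is an illustration of the problem setup, not the paper's multi-commodity-flow analysis or its 1.5-approximation algorithm.

```python
def greedy_phases(messages, free_mem):
    """Toy greedy scheduler for the minimum-phase remapping setting: a
    message (src, dst, size) may be sent only if the destination has room;
    sending frees the bytes at the source."""
    pending = list(messages)
    phases = []
    while pending:
        phase, still_pending = [], []
        room = dict(free_mem)
        for src, dst, size in pending:
            if size <= room[dst]:
                room[dst] -= size           # reserve receive-buffer space
                phase.append((src, dst, size))
            else:
                still_pending.append((src, dst, size))
        if not phase:
            raise RuntimeError("deadlock: no message fits in any phase")
        for src, dst, size in phase:
            free_mem[src] += size           # data leaves the sender...
            free_mem[dst] -= size           # ...and occupies the receiver
        phases.append(phase)
        pending = still_pending
    return phases

mem = {0: 2, 1: 0, 2: 3}                    # free memory units per processor
msgs = [(0, 1, 1), (1, 2, 2), (2, 0, 2)]
for i, ph in enumerate(greedy_phases(msgs, mem)):
    print(f"phase {i}: {ph}")               # the tight memory forces two phases
```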

Solving complex-valued linear systems via equivalent real formulations

SIAM Journal on Scientific Computing

Day, David M.; Heroux, Michael A.

Most algorithms used in preconditioned iterative methods are generally applicable to complex-valued linear systems, with real-valued linear systems simply being a special case. However, most iterative solver packages available today focus exclusively on real-valued systems or deal with complex-valued systems as an afterthought. One obvious approach to addressing this problem is to recast the complex problem into one of several equivalent real forms and then use a real-valued solver to solve the related system. However, well-known theoretical results showing unfavorable spectral properties for the equivalent real forms have diminished enthusiasm for this approach. At the same time, experience has shown that there are situations where using an equivalent real form can be very effective. In this paper, the authors explore this approach, giving both theoretical and experimental evidence that an equivalent real form can be useful for a number of practical situations. Furthermore, they show that by making good use of some of the advanced features of modern solver packages, they can easily generate equivalent real form preconditioners that are computationally efficient and mathematically identical to their complex counterparts. Using their techniques, the authors are able to solve very ill-conditioned complex-valued linear systems for a variety of large-scale applications. More importantly, they shed more light on the effectiveness of equivalent real forms and more clearly delineate how and when they should be used.
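
One of the equivalent real forms is easy to state concretely: stack real and imaginary parts and solve a doubled real system. The sketch below uses a dense direct solve purely for illustration; the paper's setting is preconditioned iterative methods.

```python
import numpy as np

def solve_via_equivalent_real_form(A, b):
    """Solve the complex system A z = b through one equivalent real form:
        [ Re(A) -Im(A) ] [ Re(z) ]   [ Re(b) ]
        [ Im(A)  Re(A) ] [ Im(z) ] = [ Im(b) ]
    A direct solver stands in here for an iterative method."""
    K = np.block([[A.real, -A.imag],
                  [A.imag,  A.real]])
    rhs = np.concatenate([b.real, b.imag])
    xy = np.linalg.solve(K, rhs)
    n = A.shape[0]
    return xy[:n] + 1j * xy[n:]

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
b = rng.standard_normal(5) + 1j * rng.standard_normal(5)
z = solve_via_equivalent_real_form(A, b)
print(np.allclose(A @ z, b))   # True: the real form reproduces the complex solve
```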

Microstructures of laser deposited 304L austenitic stainless steel

Headley, Thomas J.; Robino, Charles V.

Laser deposits fabricated from two different compositions of 304L stainless steel powder were characterized to determine the nature of the solidification and solid-state transformations. One goal of this work was to determine to what extent a novel microstructure consisting of single-phase austenite could be achieved with the thermal conditions of the LENS [Laser Engineered Net Shaping] process. Although ferrite-free deposits were not obtained, structures with very low ferrite content were achieved. It appeared that, with slight changes in alloy composition, this goal could be met via two different solidification and transformation mechanisms.

Direct simulation of particle-laden fluids

Cook, Benjamin K.; Noble, David R.; Preece, Dale S.

Processes that involve particle-laden fluids are common in geomechanics and especially in the petroleum industry. Understanding the physics of these processes and the ability to predict their behavior requires the development of coupled fluid-flow and particle-motion computational methods. This paper outlines an accurate and robust coupled computational scheme using the lattice-Boltzmann method for fluid flow and the discrete-element method for solid particle motion. Results from several two-dimensional validation simulations are presented. Simulations reported include the sedimentation of an ellipse, a disc and two interacting discs in a closed column of fluid. The recently discovered phenomenon of drafting, kissing, and tumbling is fully reproduced in the two-disc simulation.

Materials Issues for Micromachines Development - ASCI Program Plan

Fang, H.E.; Miller, Samuel L.; Dugger, Michael T.; Prasad, Somuri V.; Reedy, Earl D.; Thompson, Aidan P.; Wong, Chungnin C.; Yang, Pin Y.; Battaile, Corbett C.; Benavides, Gilbert L.; Ensz, M.T.; Buchheit, Thomas E.; Chen, Er-Ping C.; Christenson, Todd R.; De Boer, Maarten P.

This report summarizes materials issues associated with advanced micromachines development at Sandia. The intent of this report is to provide a perspective on the scope of the issues and suggest future technical directions, with a focus on computational materials science. Materials issues in surface micromachining (SMM), Lithographie-Galvanoformung-Abformung (LIGA: lithography, electrodeposition, and molding), and meso-machining technologies were identified. Each individual issue was assessed in four categories: degree of basic understanding; amount of existing experimental data; capability of existing models; and, based on the perspective of component developers, the importance of the issue to be resolved. Three broad requirements for micromachines emerged from this process: (1) tribological behavior, including stiction, friction, wear, and the use of surface treatments to control these; (2) mechanical behavior at the microscale, including elasticity, plasticity, and the effect of microstructural features on mechanical strength; and (3) degradation of tribological and mechanical properties in normal (including aging), abnormal, and hostile environments. Resolving all the identified critical issues requires a significant cooperative and complementary effort between computational and experimental programs. The breadth of this work is greater than any single program is likely to support. This report should serve as a guide for planning micromachines development at Sandia.

A naturalistic decision making model for simulated human combatants

Hart, William E.; Forsythe, James C.

The authors describe a naturalistic behavioral model for the simulation of small unit combat. This model, Klein's recognition-primed decision making (RPD) model, is driven by situational awareness rather than a rational process of selecting from a set of action options. They argue that simulated combatants modeled with RPD will have more flexible and realistic responses to a broad range of small-scale combat scenarios. Furthermore, they note that the predictability of a simulation using an RPD framework can be easily controlled to provide multiple evaluations of a given combat scenario. Finally, they discuss computational issues for building an RPD-based behavior engine for fully automated combatants in small conflict scenarios, which are being investigated within Sandia's Next Generation Site Security project.

Application of finite element, global polynomial, and kriging response surfaces in Progressive Lattice Sampling designs

Romero, Vicente J.; Swiler, Laura P.; Giunta, Anthony A.

This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.

Scalable rendering on PC clusters

Wylie, Brian N.; Lewis, Vasily L.; Shirley, David N.; Pavlakos, Constantine P.

This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries either for run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing, by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), the libraries can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL® with lighting, Gouraud shading, and individually specified triangles (not t-stripped).

Algebraic mesh quality metrics

SIAM Journal on Scientific Computing

Knupp, Patrick K.

Quality metrics for structured and unstructured mesh generation are placed within an algebraic framework to form a mathematical theory of mesh quality metrics. The theory, based on the Jacobian and related matrices, provides a means of constructing, classifying, and evaluating mesh quality metrics. The Jacobian matrix is factored into geometrically meaningful parts. A nodally invariant Jacobian matrix can be defined for simplicial elements using a weight matrix derived from the Jacobian matrix of an ideal reference element. Scale- and orientation-invariant algebraic mesh quality metrics are defined. The singular value decomposition is used to study relationships between metrics. Equivalence of the element condition number and mean ratio metrics is proved, and condition number is shown to measure the distance of an element to the set of degenerate elements. Algebraic measures for skew, length ratio, shape, volume, and orientation are defined abstractly, with specific examples given. Combined metrics for shape-volume and shape-volume-orientation are defined algebraically, and examples of such metrics are given. Algebraic mesh quality metrics are extended to non-simplicial elements. A series of numerical tests verifies the theoretical properties of the metrics defined.
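
One metric from this framework, the element condition number, can be computed directly from the weighted Jacobian. The sketch below follows that construction for a triangle, with the normalization chosen so an ideal (equilateral) element scores 1; the helper name and test triangles are invented for the example.

```python
import numpy as np

def condition_number_metric(triangle):
    """Algebraic shape metric for a triangle: the Frobenius condition number
    of the Jacobian weighted by an ideal (equilateral) reference element.
    Equals 1 for an ideal element and grows without bound as the element
    degenerates."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in triangle)
    A = np.column_stack([p1 - p0, p2 - p0])      # physical edge-vector Jacobian
    W = np.column_stack([[1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])  # ideal element
    S = A @ np.linalg.inv(W)                     # weighted Jacobian
    kappa = np.linalg.norm(S) * np.linalg.norm(np.linalg.inv(S))  # Frobenius
    return kappa / 2.0                           # normalized so ideal -> 1

print(condition_number_metric([(0, 0), (1, 0), (0.5, np.sqrt(3) / 2)]))  # 1.0
print(condition_number_metric([(0, 0), (1, 0), (0.9, 0.05)]))            # large
```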

Scalability limitations of VIA-based technologies in supporting MPI

Brightwell, Ronald B.; Maccabe, Arthur B.

This paper analyzes the scalability limitations of networking technologies based on the Virtual Interface Architecture (VIA) in supporting the runtime environment needed for an implementation of the Message Passing Interface. The authors present an overview of the important characteristics of VIA and an overview of the runtime system being developed as part of the Computational Plant (Cplant) project at Sandia National Laboratories. They discuss the characteristics of VIA that prevent implementations based on this system from meeting the scalability and performance requirements of Cplant.

Salinas - An implicit finite element structural dynamics code developed for massively parallel platforms

Reese, Garth M.; Driessen, Brian D.; Alvin, Kenneth F.; Day, David M.

As computational needs for structural finite element analysis increase, a robust implicit structural dynamics code is needed which can handle millions of degrees of freedom in the model and produce results with quick turnaround time. A parallel code is needed to avoid the limitations of serial platforms. Salinas is an implicit structural dynamics code specifically designed for massively parallel platforms. It computes the structural response of very large complex structures and provides solutions faster than any existing serial machine. This paper gives the current status of Salinas and uses demonstration problems to show Salinas' performance.

Computational methods for coupling microstructural and micromechanical materials response simulations

Holm, Elizabeth A.; Wellman, Gerald W.; Battaile, Corbett C.; Buchheit, Thomas E.; Fang, H.E.; Rintoul, Mark D.; Glass, Sarah J.; Knorovsky, Gerald A.; Neilsen, Michael K.

Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

A case study in working with cell-centered data

Crossno, Patricia J.

This case study provides examples of how some simple decisions the authors made in structuring their algorithms for handling cell-centered data can dramatically influence the results. Although it is widely understood that such decisions produce variations in results, the potential magnitude of the differences tends to be underestimated. More importantly, the users of the codes may not be aware that these choices have been made or what they mean for the resulting visualizations of their data. This raises the question of whether or not these decisions are inadvertently distorting user interpretations of data sets.

An agent-based microsimulation of critical infrastructure systems

Barton, Dianne C.; Stamber, Kevin L.

US infrastructures provide essential services that support economic prosperity and quality of life. Today, the latest threat to these infrastructures is the increasing complexity and interconnectedness of the system. On balance, added connectivity will improve economic efficiency; however, increased coupling could also result in situations where a disturbance in an isolated infrastructure unexpectedly cascades across diverse infrastructures. An understanding of the behavior of complex systems can be critical to understanding and predicting infrastructure responses to unexpected perturbation. Sandia National Laboratories has developed an agent-based model of critical US infrastructures using time-dependent Monte Carlo methods and a genetic algorithm learning classifier system to control decision making. The model is currently under development and contains agents that represent several areas within the interconnected infrastructures, including electric power and fuel supply. Previous work shows that agent-based simulation models have the potential to improve the accuracy of complex system forecasting and to provide new insights into the factors that are the primary drivers of emergent behaviors in interdependent systems. Simulation results can be examined both computationally and analytically, offering new ways of theorizing about the impact of perturbations to an infrastructure network.

Methodology for characterizing modeling and discretization uncertainties in computational simulation

Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

Finite element meshing approached as a global minimization process

Witkowski, Walter R.; Jung, Joseph J.; Dohrmann, Clark R.; Leung, Vitus J.

The ability to generate a suitable finite element mesh in an automatic fashion is becoming the key to automating the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body continues to be an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical-analogy viewpoint is used to formulate the actual meshing problem, constructing a global mathematical description of the problem. The analogy used was that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in the presented analogy represent duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional which accounts for inter-particle repulsive, attractive, and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. After the particles are connected, a mesh can easily be resolved. The mathematical description for this problem is as easy to formulate in three dimensions as it is in two or one. The meshing algorithm was developed within CoMeT. It can solve the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10-minute range, so the efficiency of the technique is still an issue that needs to be addressed. Performance is critical for most engineers generating meshes, but it was not the focus of this project, whose primary purpose was to investigate and evaluate a meshing algorithm and philosophy, with efficiency issues being secondary. The algorithm was also extended to mesh three-dimensional geometries; unfortunately, only simple geometries were tested before the project ended. The primary complexity in the extension was in the connectivity problem formulation: defining all of the inter-particle interactions that occur in three dimensions and expressing them in mathematical relationships is very difficult.
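
The physical analogy can be miniaturized: place particles in the unit square, let them repel with a short-range force, and descend toward a minimum-potential (well-spaced) configuration. The force law, step size, and confinement-by-clipping below are simplifying assumptions; the paper's functional also includes attractive and alignment terms.

```python
import numpy as np

def relax_particles(points, steps=300, dt=0.005):
    """Gradient-descent sketch of the meshing analogy: particles (element
    duals) repel each other with a ~1/r^2 force and are confined to the
    unit square, settling toward a well-spaced layout."""
    pts = np.array(points, dtype=float)
    for _ in range(steps):
        force = np.zeros_like(pts)
        for i in range(len(pts)):
            diff = pts[i] - pts                    # vectors from all particles
            r2 = np.sum(diff**2, axis=1)
            r2[i] = np.inf                         # no self-interaction
            force[i] = np.sum(diff / r2[:, None]**1.5, axis=0)
        pts += dt * force
        pts = np.clip(pts, 0.0, 1.0)               # charged-domain confinement
    return pts

rng = np.random.default_rng(1)
print(relax_particles(rng.random((8, 2))))         # points spread toward uniformity
```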

Salvo: Seismic imaging software for complex geologies

Ober, Curtis C.; Womble, David E.

This report describes Salvo, a three-dimensional seismic-imaging software for complex geologies. Regions of complex geology, such as overthrusts and salt structures, can cause difficulties for many seismic-imaging algorithms used in production today. The paraxial wave equation and finite-difference methods used within Salvo can produce high-quality seismic images in these difficult regions. However, this approach comes with higher computational costs, which have been too expensive for standard production. Salvo uses improved numerical algorithms and methods, along with parallel computing, to produce high-quality images and to reduce the computational and the data input/output (I/O) costs. This report documents the numerical algorithms implemented for the paraxial wave equation, including absorbing boundary conditions, phase corrections, imaging conditions, phase encoding, and reduced-source migration. This report also describes I/O algorithms for large seismic data sets and images and parallelization methods used to obtain high efficiencies for both the computations and the I/O of seismic data sets. Finally, this report describes the required steps to compile, port, and optimize the Salvo software, and describes the validation data sets used to help verify a working copy of Salvo.

Comparison of electrical CD measurements and cross-section lattice-plane counts of sub-micrometer features replicated in Silicon-on-Insulator materials

Headley, Thomas J.; Everist, Sarah C.

Electrical test structures of the type known as cross-bridge resistors have been patterned in (100) epitaxial silicon material that was grown on Bonded and Etched-Back Silicon-on-Insulator (BESOI) substrates. The CDs (critical dimensions) of a selection of their reference segments have been measured electrically, by SEM (scanning electron microscopy) cross-section imaging, and by lattice-plane counting. The lattice-plane counting is performed on phase-contrast images made by high-resolution transmission electron microscopy (HRTEM). The reference-segment features were aligned with <110> directions in the BESOI surface material. They were defined by a silicon micromachining process which results in their sidewalls being atomically planar and smooth and inclined at 54.737° to the surface (100) plane of the substrate. This (100) implementation may usefully complement the attributes of the previously reported vertical-sidewall one for selected reference-material applications. The SEM, HRTEM, and electrical CD (ECD) linewidth measurements made on BESOI features of various drawn dimensions on the same substrate are being investigated to determine the feasibility of a CD traceability path that combines the low cost, robustness, and repeatability of the ECD technique with the absolute measurement of the HRTEM lattice-plane counting technique. Other novel aspects of the (100) SOI implementation that are reported here are the ECD test-structure architecture and the making of HRTEM lattice-plane counts from both cross-sectional and top-down imaging of the reference features. This paper describes the design details and the fabrication of the cross-bridge resistor test structure. The long-term goal is to develop a technique for determining the absolute dimensions of the trapezoidal cross-sections of the cross-bridge resistors' reference segments, as a prelude to making them available for dimensional reference applications.

Tensile instabilities for porous plasticity models

Brannon, Rebecca M.

Several concepts (and assumptions) from the literature for porous metals and ceramics have been synthesized into a consistent model that predicts an admissibility limit on a material's porous yield surface. To ensure positive plastic work, the rate at which a yield surface can collapse as pores grow in tension must be constrained.

Feature based volume decomposition for automatic hexahedral mesh generation

ASME Journal of Manufacturing Science and Engineering

Tautges, Timothy J.

Much progress has been made over the years toward automatic hexahedral mesh generation. While general meshing algorithms that can take on arbitrary geometry are not there yet, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature-based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped with appropriate automatic meshing algorithms suitable for the corresponding geometry. Thus a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity-based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. In the final section, the capability of the feature decomposer is demonstrated on some complicated manufactured parts.

The generation of hexahedral meshes for assembly geometries: A survey

International Journal for Numerical Methods in Engineering

Tautges, Timothy J.

The finite element method is being used today to model component assemblies in a wide variety of application areas, including structural mechanics, fluid simulations, and others. Generating hexahedral meshes for these assemblies usually requires the use of geometry decomposition, with different meshing algorithms applied to different regions. While the primary motivation for this approach remains the lack of an automatic, reliable all-hexahedral meshing algorithm, requirements in mesh quality and mesh configuration for typical analyses are also factors. For these reasons, this approach is also sometimes required when producing other types of unstructured meshes. This paper will review progress to date in automating many parts of the hex meshing process, which has halved the time to produce all-hex meshes for large assemblies. Particular issues which have been exposed due to this progress will also be discussed, along with their applicability to the general unstructured meshing problem.

Prospecting for lunar ice using a multi-rover cooperative team

Klarer, Paul R.; Feddema, John T.; Lewis, Christopher L.

A multi-rover cooperative team or swarm developed by Sandia National Laboratories is described, including various control methodologies that have been implemented to date. How the swarm's capabilities could be applied to a lunar ice prospecting mission is briefly explored. Some of the specific major engineering issues that must be addressed to successfully implement the swarm approach to a lunar surface mission are outlined, and potential solutions are proposed.

Synthesis of logic circuits with evolutionary algorithms

Jones, Jake S.; Davidson, George S.

In the last decade there has been interest and research in the area of designing circuits with genetic algorithms, evolutionary algorithms, and genetic programming. However, the ability to design circuits of the size and complexity required by modern engineering design problems, simply by specifying required outputs for given inputs has as yet eluded researchers. This paper describes current research in the area of designing logic circuits using an evolutionary algorithm. The goal of the research is to improve the effectiveness of this method and make it a practical aid for design engineers. A novel method of implementing the algorithm is introduced, and results are presented for various multiprocessing systems. In addition to evolving standard arithmetic circuits, work in the area of evolving circuits that perform digital signal processing tasks is described.

A precise determination of the void percolation threshold for two distributions of overlapping spheres

Physical Review Letters

Rintoul, Mark D.

The void percolation threshold is calculated for a distribution of overlapping spheres with equal radii, and for a binary sized distribution of overlapping spheres, where half of the spheres have radii twice as large as the other half. Using systems much larger than previous work, the authors determine a much more precise value for the percolation thresholds and correlation length exponent. The values for the percolation thresholds are shown to be significantly different, in contrast with previous, less precise works that speculated that the threshold might be universal with respect to sphere size distribution.

Randomized metarounding

Carr, Robert D.

The authors present a new technique for the design of approximation algorithms that can be viewed as a generalization of randomized rounding. They derive new or improved approximation guarantees for a class of generalized congestion problems such as multicast congestion, multiple TSP, etc. Their main mathematical tool is a structural decomposition theorem related to the integrality gap of a relaxation.

Scalability and Performance of a Large Linux Cluster

Journal of Parallel and Distributed Computing

Brightwell, Ronald B.; Plimpton, Steven J.

In this paper the authors present performance results from several parallel benchmarks and applications on a 400-node Linux cluster at Sandia National Laboratories. They compare the results on the Linux cluster to performance obtained on a traditional distributed-memory massively parallel processing machine, the Intel TeraFLOPS. They discuss the characteristics of these machines that influence the performance results and identify the key components of the system software that they feel are important to allow for scalability of commodity-based PC clusters to hundreds and possibly thousands of processors.

Discretization errors associated with Reproducing Kernel Methods: One-dimensional domains

Voth, Thomas E.; Christon, Mark A.

The Reproducing Kernel Particle Method (RKPM) is a discretization technique for partial differential equations that uses the method of weighted residuals, classical reproducing kernel theory, and modified kernels to produce either "mesh-free" or "mesh-full" methods. Although RKPM has many appealing attributes, the method is new, and its numerical performance is just beginning to be quantified. In order to address the numerical performance of RKPM, von Neumann analysis is performed for semi-discretizations of three model one-dimensional PDEs. The von Neumann analysis results are used to examine the global and asymptotic behavior of the semi-discretizations. The model PDEs considered for this analysis include the parabolic and hyperbolic (first- and second-order wave) equations. Numerical diffusivity for the former and phase speed for the latter are presented over the range of discrete wavenumbers and in an asymptotic sense as the particle spacing tends to zero. Group speed is also presented for the hyperbolic problems. Excellent diffusive and dispersive characteristics are observed when a consistent mass matrix formulation is used with the proper choice of refinement parameter. In contrast, the row-sum lumped mass matrix formulation severely degrades performance. The asymptotic analysis indicates that very good rates of convergence are possible when the consistent mass matrix formulation is used with an appropriate choice of refinement parameter.
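
The flavor of von Neumann analysis can be shown on the simplest case: for u_t + c u_x = 0 semi-discretized with central differences, a Fourier mode yields the numerical phase speed c* = c sin(kh)/(kh). The snippet below tabulates the phase-speed ratio across discrete wavenumbers; it uses a standard finite-difference stencil rather than the RKPM kernels analyzed in the paper.

```python
import numpy as np

# Von Neumann (Fourier) analysis of a semi-discretization of u_t + c u_x = 0.
# For a plane wave u_j = exp(i k x_j), the central-difference operator returns
# i*sin(k h)/h times the wave, so the numerical phase speed is
# c* = c * sin(k h) / (k h); the ratio c*/c -> 1 as h -> 0, and dispersion
# errors grow toward the grid limit k h = pi.
kh = np.linspace(0.01, np.pi, 6)          # discrete wavenumbers up to the grid limit
phase_speed_ratio = np.sin(kh) / kh
for k, r in zip(kh, phase_speed_ratio):
    print(f"kh = {k:5.2f}   c*/c = {r:6.3f}")
```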

Design of dynamic load-balancing tools for parallel applications

Devine, Karen D.; Hendrickson, Bruce A.; Boman, Erik G.; Vaughan, Courtenay T.

The design of general-purpose dynamic load-balancing tools for parallel applications is more challenging than the design of static partitioning tools. Both algorithmic and software engineering issues arise. The authors have addressed many of these issues in the design of the Zoltan dynamic load-balancing library. Zoltan has an object-oriented interface that makes it easy to use and provides separation between the application and the load-balancing algorithms. It contains a suite of dynamic load-balancing algorithms, including both geometric and graph-based algorithms. Its design makes it valuable both as a partitioning tool for a variety of applications and as a research test-bed for new algorithmic development. In this paper, the authors describe Zoltan's design and demonstrate its use in an unstructured-mesh finite element application.
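
One of the geometric methods implemented in libraries like Zoltan, recursive coordinate bisection, is short enough to sketch: split the point set at the median along its longest axis and recurse. This is a generic illustration of the method, not Zoltan's interface or implementation.

```python
import numpy as np

def rcb(points, ids, n_parts):
    """Recursive coordinate bisection: split along the longest coordinate
    axis at the median, then recurse until n_parts parts are produced."""
    if n_parts == 1:
        return [list(ids)]
    axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
    order = np.argsort(points[:, axis])
    half = len(ids) // 2
    left, right = order[:half], order[half:]
    n_left = n_parts // 2
    return (rcb(points[left], [ids[i] for i in left], n_left)
            + rcb(points[right], [ids[i] for i in right], n_parts - n_left))

rng = np.random.default_rng(2)
pts = rng.random((16, 2))
for p, members in enumerate(rcb(pts, list(range(16)), 4)):
    print(f"part {p}: {sorted(members)}")   # four parts of four points each
```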

Human Assisted Assembly Processes

Galpin, Terri L.; Peters, Ralph R.

Automatic assembly sequencing and visualization tools are valuable in determining the best assembly sequences, but without Human Factors and Figure Models (HFFMs) it is difficult to evaluate or visualize human interaction. In industry, accelerating technological advances and shorter market windows have forced companies to turn to an agile manufacturing paradigm. This trend has promoted computerized automation of product design and manufacturing processes, such as automated assembly planning. However, automated assembly planning software tools assume that the individual components fly into their assembled configuration, and they can generate what appear to be perfectly valid operations that in reality cannot physically be carried out by a human. Similarly, human figure modeling algorithms may indicate that assembly operations are not feasible and consequently force design modifications; however, if they had the capability to quickly generate alternative assembly sequences, they might identify a feasible solution. To solve this problem, HFFMs must be integrated with automated assembly planning to allow engineers to verify that assembly operations are possible and to see ways to make the designs even better. Factories will very likely put humans and robots together in cooperative environments to meet the demands for customized products, for purposes including robotic and automated assembly. For robots to work harmoniously within an integrated environment with humans, the robots must have cooperative operational skills. For example, in a human-only environment, humans may tolerate collisions with one another if they do not cause much pain; this level of tolerance may or may not apply to robot-human environments. Humans expect that robots will be able to operate and navigate in their environments without collisions or interference, and the ability to accomplish this is linked to the sensing capabilities available. Current work in the field of cooperative automation has shown the effectiveness of humans and machines directly interacting to perform tasks. To continue to advance this area of robotics, effective means need to be developed to allow natural ways for people to communicate and cooperate with robots just as they do with one another.

Advanced numerical methods and software approaches for semiconductor device simulation

VLSI Design

Carey, Graham F.; Pardhanani, A.L.; Bova, S.W.

In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of "upwind" and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG) schemes, "entropy" variables, transformations, least-squares mixed methods, and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area, and we emphasize the need for further work on analysis, data structures, and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.
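
Of the schemes listed, the Scharfetter-Gummel flux is compact enough to show in full. The sketch below gives the classical exponentially fitted two-point flux for a positively charged carrier, using the Einstein relation D = mu*vt; the function names and unit normalizations are choices made for the example.

```python
import numpy as np

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with the removable singularity at x = 0 handled."""
    x = np.asarray(x, dtype=float)
    safe = np.where(np.abs(x) < 1e-12, 1.0, x)       # keep the main branch well-defined
    return np.where(np.abs(x) < 1e-12, 1.0 - x / 2.0, safe / np.expm1(safe))

def scharfetter_gummel_flux(n_left, n_right, v_left, v_right, h, mu=1.0, vt=1.0):
    """Classical Scharfetter-Gummel flux for a positively charged carrier
    between two nodes a distance h apart: an exponentially fitted upwinding
    that stays stable for drift-dominated transport. Uses D = mu*vt."""
    dv = (v_right - v_left) / vt                     # potential drop in thermal units
    return mu * vt / h * (bernoulli(dv) * n_left - bernoulli(-dv) * n_right)

# Equal potentials recover pure diffusion of the density difference...
print(scharfetter_gummel_flux(1.0, 0.0, v_left=0.0, v_right=0.0, h=0.1))  # 10.0
# ...while a strong opposing field nearly shuts the flux off.
print(scharfetter_gummel_flux(1.0, 0.0, v_left=0.0, v_right=5.0, h=0.1))
```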

Load-balancing techniques for a parallel electromagnetic particle-in-cell code

Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

QUICKSILVER is a 3-D electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged particle transport. It models the time response of electromagnetic fields and low-density plasmas in a self-consistent manner: the fields push the plasma particles, and the plasma current modifies the fields. Through an LDRD project, a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform which supports the message-passing interface (MPI) standard, as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations which have been run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.

Second-order structural identification procedure via state-space-based system identification

AIAA Journal

Alvin, Kenneth F.; Park, K.C.P.

We present a theory for transforming system-theory-based realization models into corresponding physical-coordinate-based structural models. The theory has been implemented in a computational procedure and applied to several example problems. Our results show that the present transformation theory yields an objective model basis possessing a unique set of structural parameters from an infinite set of equivalent system realization models. For proportionally damped systems, the transformation directly and systematically yields the normal modes and modal damping. Moreover, when nonproportional damping is present, the relative magnitude and phase of the damped mode shapes are separately characterized, and a corrective transformation is then employed to capture the undamped normal modes and the nondiagonal modal damping matrix.
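
The forward direction of this correspondence is easy to illustrate: build the state matrix of a small proportionally damped structure and read natural frequencies and damping ratios off its eigenvalues. The paper's contribution is the harder inverse step, recovering such structural parameters from an identified realization; the matrices below are an invented example.

```python
import numpy as np

# Eigenvalues of the continuous-time state matrix A come in conjugate pairs
# lambda = -zeta*wn +/- i*wn*sqrt(1 - zeta^2), from which natural frequency
# and modal damping ratio follow directly.
M = np.diag([2.0, 1.0])                          # mass matrix
K = np.array([[6.0, -2.0], [-2.0, 4.0]])         # stiffness matrix
C = 0.05 * K                                     # proportional damping
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
eigvals = np.linalg.eigvals(A)
for lam in eigvals[np.imag(eigvals) > 0]:        # one of each conjugate pair
    wn = np.abs(lam)                             # undamped natural frequency
    zeta = -np.real(lam) / wn                    # modal damping ratio
    print(f"wn = {wn:.4f} rad/s, zeta = {zeta:.4f}")
```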
