Publications

Methodology for characterizing modeling and discretization uncertainties in computational simulation

Alvin, Kenneth F.; Oberkampf, William L.; Rutherford, Brian M.; Diegert, Kathleen V.

This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed toward developing methodologies that treat model-form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: development of a framework for the sources of uncertainty and error in the modeling and simulation process that impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
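
For a concrete (if highly simplified) picture of the non-deterministic analysis described above, the sketch below propagates an assumed parameter uncertainty through a toy model by Monte Carlo sampling and attaches a Richardson-style estimate of discretization error. The model, the parameter distribution, and the grid levels are illustrative assumptions, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(k, n_cells):
    # Toy "simulation": integrate f(x) = exp(-k*x) on [0, 1] with the
    # midpoint rule; n_cells controls the discretization error.
    x = (np.arange(n_cells) + 0.5) / n_cells
    return np.exp(-k * x).sum() / n_cells

# Assumed model-parameter uncertainty: k ~ Normal(1.0, 0.1) (illustrative).
k_samples = rng.normal(1.0, 0.1, size=2000)

coarse = np.array([simulate(k, 20) for k in k_samples])
fine   = np.array([simulate(k, 40) for k in k_samples])

# Richardson-style discretization-error estimate for the midpoint rule
# (order p = 2, refinement ratio r = 2): err ~ (coarse - fine) / (r**p - 1).
disc_err = np.abs(coarse - fine) / (2**2 - 1)

print(f"prediction mean      : {fine.mean():.5f}")
print(f"parameter uncertainty: {fine.std():.5f} (1-sigma from Monte Carlo)")
print(f"discretization error : {disc_err.mean():.2e} (mean Richardson estimate)")
```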

Finite element meshing approached as a global minimization process

Witkowski, Walter R.; Jung, Joseph J.; Dohrmann, Clark R.; Leung, Vitus J.

The ability to generate a suitable finite element mesh automatically is becoming the key to automating the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body remains an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical analogy is used to formulate the meshing problem as a global mathematical description: minimizing the electrical potential of a system of charged particles within a charged domain. The particles in this analogy are duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional that accounts for inter-particle repulsive, attractive, and alignment forces. This functional is minimized to find the optimal location and orientation of each particle; once the particles are connected, a mesh is easily resolved. The mathematical description is as easy to formulate in three dimensions as in two or one. The meshing algorithm was developed within CoMeT. It solves the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10-minute range, so efficiency of the technique still needs to be addressed. Although performance is critical for most engineers generating meshes, it was secondary in this project; the primary focus was to investigate and evaluate a meshing algorithm and philosophy. The algorithm was also extended to mesh three-dimensional geometries, but only simple geometries were tested before the project ended. The primary complexity in the extension was in the connectivity problem formulation: defining all of the inter-particle interactions that occur in three dimensions and expressing them as mathematical relationships is very difficult.
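
A minimal sketch of the core idea, relaxing element-dual particles by minimizing an inter-particle potential, follows. The specific pair potential and boundary penalty are illustrative stand-ins for the functional described above, which additionally includes alignment forces.

```python
import numpy as np
from scipy.optimize import minimize

N, R0 = 16, 0.25   # particles (duals of quad elements) and target spacing

def potential(flat):
    """Inter-particle repulsion/attraction (equilibrium at spacing R0)
    plus a penalty confining particles to the unit-square domain."""
    p = flat.reshape(N, 2)
    diff = p[:, None, :] - p[None, :, :]
    r = np.sqrt((diff**2).sum(-1))[np.triu_indices(N, k=1)]
    pair = np.sum((R0 / r)**2 - 2.0 * (R0 / r))
    wall = 1e4 * np.sum((p - np.clip(p, 0.0, 1.0))**2)
    return pair + wall

rng = np.random.default_rng(1)
x0 = rng.uniform(0.2, 0.8, size=(N, 2)).ravel()
res = minimize(potential, x0, method="L-BFGS-B")
print("relaxed particle positions (candidate element centers):")
print(res.x.reshape(N, 2).round(3))
```

Connecting the relaxed particles (for example, via a Delaunay-type dual) would then recover the element layout described above.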

Comparison of electrical CD measurements and cross-section lattice-plane counts of sub-micrometer features replicated in Silicon-on-Insulator materials

Headley, Thomas J.; Everist, Sarah C.

Electrical test structures of the type known as cross-bridge resistors have been patterned in (100) epitaxial silicon material grown on Bonded and Etched-Back Silicon-on-Insulator (BESOI) substrates. The CDs (Critical Dimensions) of a selection of their reference segments have been measured electrically, by SEM (Scanning-Electron Microscopy) cross-section imaging, and by lattice-plane counting. The lattice-plane counting is performed on phase-contrast images made by High-Resolution Transmission-Electron Microscopy (HRTEM). The reference-segment features were aligned with <110> directions in the BESOI surface material. They were defined by a silicon micromachining process that leaves their sidewalls atomically planar and smooth and inclined at 54.737° to the surface (100) plane of the substrate. This (100) implementation may usefully complement the attributes of the previously reported vertical-sidewall one for selected reference-material applications. The SEM, HRTEM, and electrical CD (ECD) linewidth measurements made on BESOI features of various drawn dimensions on the same substrate are being investigated to determine the feasibility of a CD traceability path that combines the low cost, robustness, and repeatability of the ECD technique with the absolute measurement of the HRTEM lattice-plane counting technique. Other novel aspects of the (100) SOI implementation reported here are the ECD test-structure architecture and the making of HRTEM lattice-plane counts from both cross-sectional and top-down imaging of the reference features. This paper describes the design details and the fabrication of the cross-bridge resistor test structure. The long-term goal is to develop a technique for determining the absolute dimensions of the trapezoidal cross-sections of the cross-bridge resistors' reference segments, as a prelude to making them available for dimensional reference applications.

Tensile instabilities for porous plasticity models

Brannon, Rebecca M.

Several concepts (and assumptions) from the literature for porous metals and ceramics have been synthesized into a consistent model that predicts an admissibility limit on a material's porous yield surface. To ensure positive plastic work, the rate at which a yield surface can collapse as pores grow in tension must be constrained.
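
As a brief illustration of the admissibility argument (the notation here is assumed for illustration, not taken from the paper): for a porosity-dependent yield function f(sigma, phi) with an associated flow rule, non-negative plastic power requires

```latex
\dot{W}^p
  = \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}^p
  = \dot{\lambda}\,\boldsymbol{\sigma} : \frac{\partial f}{\partial \boldsymbol{\sigma}}
  \;\ge\; 0,
\qquad \dot{\lambda} \ge 0 .
```

On the yield surface, the consistency condition (∂f/∂σ) : σ̇ + (∂f/∂φ) φ̇ = 0 couples pore growth to the stress rate, which is what bounds the admissible rate at which the surface can collapse in tension.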

Feature based volume decomposition for automatic hexahedral mesh generation

ASME Journal of Manufacturing Science and Engineering

Tautges, Timothy J.

Much progress has been made over the years toward automatic hexahedral mesh generation. While general meshing algorithms that can handle arbitrary geometry do not yet exist, many well-proven automatic meshing algorithms now work on certain classes of geometry. This paper presents a feature-based volume decomposition approach for automatic hexahedral mesh generation. In this approach, feature recognition techniques are introduced to determine decomposition features from a CAD model. The features are then decomposed and mapped to automatic meshing algorithms suitable for the corresponding geometry. Thus a formerly unmeshable CAD model may become meshable. The procedure of feature decomposition is recursive: sub-models are further decomposed until either they are matched with appropriate meshing algorithms or no more decomposition features are detected. The feature recognition methods employed are convexity-based and use topology and geometry information, which is generally available in BREP solid models. The operations of volume decomposition are also detailed in the paper. In the final section, the capability of the feature decomposer is demonstrated on several complicated manufactured parts.
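
The recursive procedure is easy to make concrete. The sketch below is a toy rendering of the control flow only; the Volume class, its attributes, and the "scheme matched" test are hypothetical stand-ins for the paper's BREP feature recognition and decomposition operations.

```python
from dataclasses import dataclass, field

@dataclass
class Volume:
    name: str
    sweepable: bool = False                        # matched by a known scheme?
    features: list = field(default_factory=list)   # detected decomposition features

def mesh_volume(vol):
    """Recursively decompose until every sub-model matches a meshing scheme."""
    if vol.sweepable:                              # scheme matched: mesh directly
        return [f"hex mesh of {vol.name}"]
    if not vol.features:                           # no scheme, no features: stuck
        raise ValueError(f"{vol.name}: unmeshable, needs manual decomposition")
    meshes = []
    for sub in vol.features:                       # decompose and recurse
        meshes.extend(mesh_volume(sub))
    return meshes

part = Volume("bracket", features=[
    Volume("boss", sweepable=True),
    Volume("base", features=[Volume("slab", sweepable=True),
                             Volume("rib", sweepable=True)]),
])
print(mesh_volume(part))
```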

The generation of hexahedral meshes for assembly geometries: A survey

International Journal for Numerical Methods in Engineering

Tautges, Timothy J.

The finite element method is being used today to model component assemblies in a wide variety of application areas, including structural mechanics, fluid simulations, and others. Generating hexahedral meshes for these assemblies usually requires the use of geometry decomposition, with different meshing algorithms applied to different regions. While the primary motivation for this approach remains the lack of an automatic, reliable all-hexahedral meshing algorithm, requirements in mesh quality and mesh configuration for typical analyses are also factors. For these reasons, this approach is also sometimes required when producing other types of unstructured meshes. This paper will review progress to date in automating many parts of the hex meshing process, which has halved the time to produce all-hex meshes for large assemblies. Particular issues which have been exposed due to this progress will also be discussed, along with their applicability to the general unstructured meshing problem.

Prospecting for lunar ice using a multi-rover cooperative team

Klarer, Paul R.; Feddema, John T.; Lewis, Christopher L.

A multi-rover cooperative team or swarm developed by Sandia National Laboratories is described, including various control methodologies that have been implemented to date. How the swarm's capabilities could be applied to a lunar ice prospecting mission is briefly explored. Some of the specific major engineering issues that must be addressed to successfully implement the swarm approach to a lunar surface mission are outlined, and potential solutions are proposed.

Synthesis of logic circuits with evolutionary algorithms

Jones, Jake S.; Davidson, George S.

In the last decade there has been interest and research in designing circuits with genetic algorithms, evolutionary algorithms, and genetic programming. However, the ability to design circuits of the size and complexity required by modern engineering design problems, simply by specifying required outputs for given inputs, has so far eluded researchers. This paper describes current research in designing logic circuits using an evolutionary algorithm. The goal of the research is to improve the effectiveness of this method and make it a practical aid for design engineers. A novel method of implementing the algorithm is introduced, and results are presented for various multiprocessing systems. In addition to evolving standard arithmetic circuits, work in evolving circuits that perform digital signal processing tasks is described.
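
A minimal version of the underlying technique, evolving a gate-level circuit to match a specified truth table, can be written compactly. The sketch below is a generic (mu + lambda) evolutionary loop over NAND-gate genomes that evolves a two-input XOR; the encoding and parameters are illustrative, not the paper's.

```python
import random

random.seed(0)

N_IN, N_GATES = 2, 6          # primary inputs; NAND gates in the genome
TRUTH = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]   # target: XOR

def evaluate(genome, a, b):
    """Each gene (i, j) is a NAND whose inputs index earlier signals."""
    signals = [a, b]
    for i, j in genome:
        signals.append(1 - (signals[i] & signals[j]))
    return signals[-1]        # last gate is the circuit output

def fitness(genome):
    return sum(evaluate(genome, a, b) == out for a, b, out in TRUTH)

def random_genome():
    return [(random.randrange(N_IN + g), random.randrange(N_IN + g))
            for g in range(N_GATES)]

def mutate(genome):
    g = random.randrange(N_GATES)
    child = list(genome)
    child[g] = (random.randrange(N_IN + g), random.randrange(N_IN + g))
    return child

# Simple (mu + lambda) evolutionary loop: keep the 10 best, mutate to refill.
pop = [random_genome() for _ in range(50)]
for gen in range(500):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TRUTH):
        break
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]

print(f"generation {gen}: best fitness {fitness(pop[0])}/{len(TRUTH)}")
print("genome (NAND input indices):", pop[0])
```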

A precise determination of the void percolation threshold for two distributions of overlapping spheres

Physical Review Letters

Rintoul, Mark D.

The void percolation threshold is calculated for a distribution of overlapping spheres with equal radii, and for a binary-sized distribution of overlapping spheres in which half of the spheres have radii twice as large as the other half. Using systems much larger than those of previous work, the authors determine much more precise values for the percolation thresholds and the correlation-length exponent. The two percolation thresholds are shown to be significantly different, in contrast with previous, less precise work that speculated the threshold might be universal with respect to the sphere-size distribution.
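
The basic computation can be illustrated at toy scale: place overlapping spheres in a box, discretize the void phase onto a grid, and test by breadth-first search whether the void spans the box. The grid resolution and system size below are far smaller than those in the paper, so this shows the method, not their precision.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)

L, G = 1.0, 64                 # box edge; grid resolution (coarse, for illustration)
radius, n_spheres = 0.08, 220  # equal-radius overlapping spheres

centers = rng.uniform(0, L, size=(n_spheres, 3))
x = (np.arange(G) + 0.5) * (L / G)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
void = np.ones((G, G, G), dtype=bool)
for c in centers:              # mark grid cells covered by any sphere
    void &= (X - c[0])**2 + (Y - c[1])**2 + (Z - c[2])**2 > radius**2

# BFS through face-adjacent void cells: does the void span from z=0 to z=L?
seen = np.zeros_like(void)
queue = deque()
for i in range(G):
    for j in range(G):
        if void[i, j, 0]:
            seen[i, j, 0] = True
            queue.append((i, j, 0))
while queue:
    i, j, k = queue.popleft()
    for di, dj, dk in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        a, b, c = i + di, j + dj, k + dk
        if 0 <= a < G and 0 <= b < G and 0 <= c < G and void[a, b, c] and not seen[a, b, c]:
            seen[a, b, c] = True
            queue.append((a, b, c))

print("void phase percolates in z:", bool(seen[:, :, G - 1].any()))
```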

Randomized metarounding

Carr, Robert D.

The authors present a new technique for the design of approximation algorithms that can be viewed as a generalization of randomized rounding. They derive new or improved approximation guarantees for a class of generalized congestion problems, such as multicast congestion and multiple TSP. Their main mathematical tool is a structural decomposition theorem related to the integrality gap of a relaxation.
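
Metarounding generalizes classical randomized rounding, which the sketch below illustrates on a tiny set-cover instance. The fractional solution is hardcoded rather than obtained from an LP solver, and the repetition scheme is the standard textbook variant, not the paper's decomposition theorem.

```python
import math
import random

random.seed(3)

# Tiny set-cover instance: universe {0,...,4} and candidate sets.
sets = [{0, 1}, {1, 2, 3}, {3, 4}, {0, 4}, {2}]
universe = set().union(*sets)

# A feasible fractional LP solution (assumed given; in practice, solve the LP).
x = [0.5, 1.0, 0.5, 0.5, 0.0]

def round_once():
    """Classical randomized rounding: pick set i with probability x_i."""
    return {i for i, xi in enumerate(x) if random.random() < xi}

# Union O(log n) independent rounds; each element is covered per round with
# probability >= 1 - 1/e, so the union covers the universe w.h.p.
chosen, covered = set(), set()
for _ in range(math.ceil(2 * math.log(len(universe) + 1))):
    chosen |= round_once()
    covered = set().union(*(sets[i] for i in chosen)) if chosen else set()
    if universe <= covered:
        break

print("chosen sets:", sorted(chosen), "| covers universe:", universe <= covered)
```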

Scalability and Performance of a Large Linux Cluster

Journal of Parallel and Distributed Computing

Brightwell, Ronald B.; Plimpton, Steven J.

In this paper the authors present performance results from several parallel benchmarks and applications on a 400-node Linux cluster at Sandia National Laboratories. They compare the results on the Linux cluster to performance obtained on a traditional distributed-memory massively parallel processing machine, the Intel TeraFLOPS. They discuss the characteristics of these machines that influence the performance results and identify the key components of the system software that they feel are important to allow for scalability of commodity-based PC clusters to hundreds and possibly thousands of processors.

Discretization errors associated with Reproducing Kernel Methods: One-dimensional domains

Voth, Thomas E.; Christon, Mark A.

The Reproducing Kernel Particle Method (RKPM) is a discretization technique for partial differential equations that uses the method of weighted residuals, classical reproducing kernel theory, and modified kernels to produce either "mesh-free" or "mesh-full" methods. Although RKPM has many appealing attributes, the method is new, and its numerical performance is just beginning to be quantified. In order to address the numerical performance of RKPM, von Neumann analysis is performed for semi-discretizations of three model one-dimensional PDEs. The von Neumann analysis results are used to examine the global and asymptotic behavior of the semi-discretizations. The model PDEs considered for this analysis include the parabolic and hyperbolic (first- and second-order wave) equations. Numerical diffusivity for the former and phase speed for the latter are presented over the range of discrete wavenumbers and in an asymptotic sense as the particle spacing tends to zero. Group speed is also presented for the hyperbolic problems. Excellent diffusive and dispersive characteristics are observed when a consistent mass matrix formulation is used with the proper choice of refinement parameter. In contrast, the row-sum lumped mass matrix formulation severely degrades performance. The asymptotic analysis indicates that very good rates of convergence are possible when the consistent mass matrix formulation is used with an appropriate choice of refinement parameter.
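
The flavor of the analysis can be reproduced for a standard finite-difference semi-discretization (not RKPM itself, whose kernels this sketch does not implement): inserting a Fourier mode into a central-difference semi-discretization of the first-order wave equation gives closed-form phase and group speeds over the discrete wavenumber range.

```python
import numpy as np

# Semi-discretization of u_t + c u_x = 0 with second-order central differences:
#   du_j/dt = -c (u_{j+1} - u_{j-1}) / (2h).
# A Fourier mode u_j = exp(i k x_j) gives omega = c sin(kh)/h, so the numerical
# phase speed ratio is sin(kh)/(kh) and the group speed ratio is cos(kh),
# approaching 1 as kh -> 0 and degrading toward the grid Nyquist limit kh = pi.
kh = np.linspace(1e-6, np.pi, 9)         # normalized discrete wavenumbers
phase = np.sin(kh) / kh                   # c_num / c
group = np.cos(kh)                        # (d omega / dk) / c

for w, p, g in zip(kh, phase, group):
    print(f"kh = {w:5.3f}   phase speed ratio = {p:6.3f}   group speed ratio = {g:6.3f}")
```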

Human Assisted Assembly Processes

Galpin, Terri L.; Peters, Ralph R.

Automatic assembly sequencing and visualization tools are valuable in determining the best assembly sequences, but without Human Factors and Figure Models (HFFMs) it is difficult to evaluate or visualize human interaction. In industry, accelerating technological advances and shorter market windows have forced companies to turn to an agile manufacturing paradigm. This trend has promoted computerized automation of product design and manufacturing processes, such as automated assembly planning. However, automated assembly planning software tools assume that the individual components fly into their assembled configuration, and they can generate what appear to be perfectly valid operations that in reality cannot physically be carried out by a human. Similarly, human figure modeling algorithms may indicate that assembly operations are not feasible and consequently force design modifications; however, if they had the capability to quickly generate alternative assembly sequences, they might identify a feasible solution. To solve this problem, HFFMs must be integrated with automated assembly planning to allow engineers to verify that assembly operations are possible and to see ways to make the designs even better. Factories will very likely put humans and robots together in cooperative environments to meet the demands for customized products, for purposes including robotic and automated assembly. For robots to work harmoniously within an integrated environment with humans, the robots must have cooperative operational skills. For example, in a human-only environment, humans may tolerate collisions with one another if they do not cause much pain; this level of tolerance may or may not apply to robot-human environments. Humans expect that robots will be able to operate and navigate in their environments without collisions or interference, and the ability to accomplish this is linked to the sensing capabilities available. Current work in the field of cooperative automation has shown the effectiveness of humans and machines directly interacting to perform tasks. To continue to advance this area of robotics, effective means need to be developed to allow natural ways for people to communicate and cooperate with robots, just as they do with one another.

Design of dynamic load-balancing tools for parallel applications

Proceedings of the International Conference on Supercomputing

Devine, Karen D.; Hendrickson, Bruce A.; Boman, Erik G.; Vaughan, Courtenay T.

The design of general-purpose dynamic load-balancing tools for parallel applications is more challenging than the design of static partitioning tools. Both algorithmic and software engineering issues arise. We have addressed many of these issues in the design of the Zoltan dynamic load-balancing library. Zoltan has an object-oriented interface that makes it easy to use and provides separation between the application and the load-balancing algorithms. It contains a suite of dynamic load-balancing algorithms, including both geometric and graph-based algorithms. Its design makes it valuable both as a partitioning tool for a variety of applications and as a research test-bed for new algorithmic development. In this paper, we describe Zoltan's design and demonstrate its use in an unstructured-mesh finite element application.
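
Recursive coordinate bisection, one of the geometric algorithms in Zoltan's suite, is simple to sketch. The code below is a plain illustration of the method itself, not Zoltan's actual interface.

```python
import numpy as np

def rcb(points, ids, n_parts):
    """Recursive coordinate bisection: split at the median of the longest
    dimension until the requested number of parts is reached."""
    if n_parts == 1:
        return [ids]
    dim = np.argmax(points.max(axis=0) - points.min(axis=0))  # longest extent
    order = np.argsort(points[:, dim])
    half = len(ids) * (n_parts // 2) // n_parts               # balanced split
    lo, hi = order[:half], order[half:]
    return (rcb(points[lo], ids[lo], n_parts // 2) +
            rcb(points[hi], ids[hi], n_parts - n_parts // 2))

rng = np.random.default_rng(4)
pts = rng.uniform(size=(1000, 2))                 # e.g. element centroids
parts = rcb(pts, np.arange(len(pts)), n_parts=8)
print("part sizes:", [len(p) for p in parts])
```

Because the cuts depend only on coordinates, the same routine can be re-run cheaply as elements move, which is what makes geometric methods attractive for dynamic (rather than static) load balancing.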

Load-balancing techniques for a parallel electromagnetic particle-in-cell code

Plimpton, Steven J.; Seidel, David B.; Pasik, Michael F.; Coats, Rebecca S.

QUICKSILVER is a 3-D electromagnetic particle-in-cell simulation code developed and used at Sandia to model relativistic charged-particle transport. It models the time response of electromagnetic fields and low-density plasmas in a self-consistent manner: the fields push the plasma particles, and the plasma current modifies the fields. Through an LDRD project, a new parallel version of QUICKSILVER was created to enable large-scale plasma simulations to be run on massively parallel distributed-memory supercomputers with thousands of processors, such as the Intel Tflops and DEC CPlant machines at Sandia. The new parallel code implements nearly all the features of the original serial QUICKSILVER and can be run on any platform that supports the message-passing interface (MPI) standard, as well as on single-processor workstations. This report describes basic strategies useful for parallelizing and load-balancing particle-in-cell codes, outlines the parallel algorithms used in this implementation, and provides a summary of the modifications made to QUICKSILVER. It also highlights a series of benchmark simulations run with the new code that illustrate its performance and parallel efficiency. These calculations have up to a billion grid cells and particles and were run on thousands of processors. This report also serves as a user manual for people wishing to run parallel QUICKSILVER.
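
QUICKSILVER is electromagnetic and three-dimensional; the sketch below shows the self-consistent field-particle cycle in its simplest one-dimensional electrostatic form (normalized units, periodic domain), purely to make the "fields push particles, particles modify fields" loop concrete.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D electrostatic PIC in normalized units on a periodic domain.
NG, NP, L, DT = 64, 10000, 2 * np.pi, 0.1
dx = L / NG
x = rng.uniform(0, L, NP)                       # particle positions
v = rng.normal(0.0, 1.0, NP)                    # particle velocities

for _ in range(50):
    # Deposit charge to the grid (nearest-grid-point weighting).
    cell = (x / dx).astype(int) % NG
    ne = np.bincount(cell, minlength=NG) * (L / NP) / dx   # electron density, mean 1
    rho = 1.0 - ne                              # uniform ion background minus electrons
    # Solve Poisson's equation (phi'' = -rho) by FFT, then E = -phi'.
    k = np.fft.fftfreq(NG, d=dx) * 2 * np.pi
    k[0] = 1.0                                  # avoid divide-by-zero on mean mode
    phi_hat = np.fft.fft(rho) / k**2
    phi_hat[0] = 0.0
    E = -np.real(np.fft.ifft(1j * k * phi_hat))
    # Fields push the particles; particle motion updates the charge next step.
    v += -E[cell] * DT                          # electrons: charge -1, mass 1
    x = (x + v * DT) % L

print("field energy:", 0.5 * np.sum(E**2) * dx)
```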

Advanced numerical methods and software approaches for semiconductor device simulation

VLSI Design

Bova, S.W.

In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of 'upwind' and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG) methods, 'entropy' variables, transformations, least-squares mixed methods, and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review, and several schemes are mentioned only briefly, with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem, while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area, and we emphasize the need for further work on analysis, data structures, and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratories framework SIERRA.
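
Of the schemes listed, the classical Scharfetter-Gummel discretization is compact enough to show directly. The sketch below evaluates the electron flux on one edge of a one-dimensional grid using the Bernoulli-function weighting; the constants and nodal values are illustrative.

```python
import numpy as np

def bernoulli(t):
    """B(t) = t / (exp(t) - 1), evaluated stably near t = 0."""
    t = np.asarray(t, dtype=float)
    small = np.abs(t) < 1e-10
    return np.where(small, 1.0 - t / 2.0, t / np.expm1(np.where(small, 1.0, t)))

def sg_electron_flux(n_i, n_ip1, psi_i, psi_ip1, h, D=1.0, v_t=0.025):
    """Scharfetter-Gummel electron flux (per unit charge) between nodes i, i+1:

        J = (D / h) * [ n_{i+1} B(d) - n_i B(-d) ],  d = (psi_{i+1} - psi_i) / V_T.

    Reduces to central diffusion as d -> 0 and to upwinded drift for large |d|.
    """
    d = (psi_ip1 - psi_i) / v_t
    return (D / h) * (n_ip1 * bernoulli(d) - n_i * bernoulli(-d))

# Pure diffusion (no potential drop) vs. a drift-dominated edge (toy numbers):
print(sg_electron_flux(1.0e16, 2.0e16, 0.0, 0.0, h=1e-5))   # ~ D*(n2 - n1)/h
print(sg_electron_flux(1.0e16, 2.0e16, 0.0, 0.5, h=1e-5))   # strongly upwinded
```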

Microstructures of laser deposited 304L austenitic stainless steel

Materials Research Society Symposium - Proceedings

Brooks, John A.; Headley, Thomas J.; Robino, Charles V.

Laser deposits fabricated from two different compositions of 304L stainless steel powder were characterized to determine the nature of the solidification and solid state transformations. One of the goals of this work was to determine to what extent novel microstructures consisting of single-phase austenite could be achieved with the thermal conditions of the LENS process. Although ferrite-free deposits were not obtained, structures with very low ferrite content were achieved. It appeared that, with slight changes in alloy composition, this goal could be met via two different solidification and transformation mechanisms.

Validation methodology in computational fluid dynamics

Fluids 2000 Conference and Exhibit

Oberkampf, William L.; Trucano, Timothy G.

Verification and validation are the primary means to assess accuracy and reliability in computational simulations. This paper presents an extensive review of the literature in computational validation and develops a number of extensions to existing ideas. We discuss the early work in validation by the operations research, statistics, and CFD communities. The emphasis in our review is to bring together the diverse contributors to validation methodology and procedures. The disadvantages of the standard practice of qualitative graphical validation are pointed out, and the arguments for and the literature on validation quantification are presented. We discuss the attributes of a beneficial validation experiment hierarchy and then give an example for a complex system: a hypersonic cruise missile. We present six recommended characteristics of how a validation experiment is designed, executed, and analyzed. Since one of the key features of a validation experiment is a careful experimental uncertainty estimation analysis, we discuss a statistical procedure that has been developed for improving the estimation of experimental uncertainty. One facet of code verification, the estimation of computational error and uncertainty, is discussed in some detail, but we do not address many other important issues in code verification. We argue for the separation of the concepts of error and uncertainty in computational simulations. Error estimation, primarily that due to numerical solution error, is discussed with regard to its importance in validation. In the same vein, we explain the need to move toward nondeterministic simulations in CFD validation, that is, the propagation of input-quantity uncertainty through CFD simulations to yield probabilistic output quantities. We discuss the relatively new concept of validation quantification, also referred to as validation metrics. The inadequacy, in our view, of hypothesis testing in computational validation is discussed. We close the paper by presenting our ideas on validation metrics and applying them to two conceptual examples. © 2000 The American Institute of Aeronautics and Astronautics Inc.
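
One simple quantitative comparison in the spirit of the validation metrics discussed here (a generic illustration, not the authors' proposed metric) is to report the simulation-experiment relative error alongside a confidence interval derived from the experimental scatter:

```python
import numpy as np
from scipy import stats

# Replicated experimental measurements (illustrative data) and one simulation value.
y_exp = np.array([1.92, 2.05, 1.98, 2.11, 1.96, 2.02])
y_sim = 2.14

n = len(y_exp)
mean, sem = y_exp.mean(), y_exp.std(ddof=1) / np.sqrt(n)
t = stats.t.ppf(0.975, df=n - 1)                  # 95% two-sided t interval

rel_error = (y_sim - mean) / mean
ci_half = t * sem / mean                          # CI half-width, relative

print(f"relative error       : {rel_error:+.3%}")
print(f"95% exp. uncertainty : +/-{ci_half:.3%}")
print("discrepancy exceeds experimental uncertainty:", abs(rel_error) > ci_half)
```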

Automatic Scheme Selection for Toolkit Hex Meshing

White, David R.; Tautges, Timothy J.

Current hexahedral mesh generation techniques rely on a set of meshing tools, which when combined with geometry decomposition leads to an adequate mesh generation process. Of these tools, sweeping tends to be the workhorse algorithm, accounting for at least 50% of most meshing applications. Constraints which must be met for a volume to be sweepable are derived, and it is proven that these constraints are necessary but not sufficient conditions for sweepability. This paper also describes a new algorithm for detecting extruded or sweepable geometries. This algorithm, based on these constraints, uses topological and local geometric information, and is more robust than feature recognition-based algorithms. A method for computing sweep dependencies in volume assemblies is also given. The auto sweep detect and sweep grouping algorithms have been used to reduce interactive user time required to generate all-hexahedral meshes by filtering out non-sweepable volumes needing further decomposition and by allowing concurrent meshing of independent sweep groups. Parts of the auto sweep detect algorithm have also been used to identify independent sweep paths, for use in volume-based interval assignment.

Invariant patterns in crystal lattices: Implications for protein folding algorithms

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hart, William E.; Istrail, Sorin I.

Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist "invariants" across lattices that define fundamental properties of the protein folding process; an invariant defines a property that transcends particular lattice formulations. This paper identifies two classes of invariants, defined in terms of sublattices, that are related to the design of algorithms for the structure prediction problem. The first class of invariants is used to define a master approximation algorithm for which provable performance guarantees exist. This algorithm can be applied to generalizations of the hydrophobic-hydrophilic model that have lattices other than the cubic lattice, including most of the crystal lattices commonly used in protein folding lattice models. The second class of invariants applies to a related lattice model. Using these invariants, we show that for this model the structure prediction problem is intractable across a variety of three-dimensional lattices. It turns out that these two classes of invariants are respectively sublattices of the two- and three-dimensional square lattices. As the square lattices are the standard lattices used in empirical protein folding studies, our results provide a rigorous confirmation of the ability of these lattices to provide insight into biological phenomena. Our results are the first in the literature to identify algorithmic paradigms for the protein structure prediction problem that transcend particular lattice formulations.
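
The hydrophobic-hydrophilic (HP) lattice model at the center of these results is simple to state in code: a conformation is a self-avoiding walk on the lattice, and its energy counts hydrophobic contacts between residues that are lattice neighbors but not sequence neighbors. The sketch below scores a conformation on the cubic lattice; the sequence and walk are illustrative.

```python
from itertools import pairwise   # Python 3.10+

def hp_energy(sequence, walk):
    """Energy of an HP-model conformation on the cubic lattice:
    -1 per H-H pair that are lattice neighbors but not sequence-adjacent."""
    assert len(set(walk)) == len(walk), "conformation must be self-avoiding"
    assert all(sum(abs(a - b) for a, b in zip(p, q)) == 1
               for p, q in pairwise(walk)), "consecutive residues must be adjacent"
    pos = {p: i for i, p in enumerate(walk)}
    energy = 0
    for i, p in enumerate(walk):
        if sequence[i] != "H":
            continue
        for d in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            q = (p[0] + d[0], p[1] + d[1], p[2] + d[2])
            j = pos.get(q)
            if j is not None and j > i + 1 and sequence[j] == "H":
                energy -= 1      # each contact counted once (j > i + 1)
    return energy

seq = "HPHPPHHPHH"                       # illustrative H/P sequence
walk = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,2,0),
        (1,2,0), (1,2,1), (0,2,1), (0,1,1), (1,1,1)]
print("contact energy:", hp_energy(seq, walk))
```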

Second-order structural identification procedure via state-space-based system identification

AIAA Journal

Alvin, Kenneth F.; Park, K.C.

We present a theory for transforming system-theory-based realization models into the corresponding physical-coordinate-based structural models. The theory has been implemented in a computational procedure and applied to several example problems. Our results show that the present transformation theory yields an objective model basis possessing a unique set of structural parameters from an infinite set of equivalent system realization models. For proportionally damped systems, the transformation directly and systematically yields the normal modes and modal damping. Moreover, when nonproportional damping is present, the relative magnitude and phase of the damped mode shapes are separately characterized, and a corrective transformation is then employed to capture the undamped normal modes and the nondiagonal modal damping matrix.
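
The step from an identified state-space realization to modal parameters can be shown compactly: the eigenvalues of the continuous-time state matrix yield the natural frequencies and damping ratios. The sketch below builds a known two-degree-of-freedom proportionally damped system, forms its first-order state matrix, and recovers the modal parameters; the paper's full transformation back to physical coordinates is beyond this sketch.

```python
import numpy as np

# A known 2-DOF system: M q'' + C q' + K q = 0, with proportional damping.
M = np.diag([1.0, 2.0])
K = np.array([[40.0, -20.0], [-20.0, 20.0]])
C = 0.02 * K                                     # proportional: C = beta * K

# First-order (state-space) form: x' = A x, with x = [q, q'].
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])

# Eigenvalues come in conjugate pairs lambda = -zeta*w +/- i*w*sqrt(1 - zeta^2).
lam = np.linalg.eigvals(A)
lam = lam[np.imag(lam) > 0]                      # keep one of each pair
omega = np.abs(lam)                              # natural frequencies (rad/s)
zeta = -np.real(lam) / np.abs(lam)               # modal damping ratios

for w, z in sorted(zip(omega, zeta)):
    print(f"omega = {w:6.3f} rad/s   zeta = {z:.4f}")
```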
