Publications

SIERRA Framework Version 3: Transfer Services Design and Use

Stewart, James R.

This paper presents a description of the SIERRA Framework Version 3 parallel transfer operators. The high-level design, including object interrelationships, as well as requirements for their use, is discussed. Transfer operators are used for moving field data from one computational mesh to another. The need for this service spans many different applications. The most common application is to enable loose coupling of multiple physics modules, such as for the coupling of a quasi-statics analysis with a thermal analysis. The SIERRA transfer operators support the transfer of nodal and element fields between meshes of different, arbitrary parallel decompositions. Also supplied are "copy" transfer operators for efficient transfer of fields between identical meshes. A "copy" transfer operator is also implemented for constraint objects. Each of these transfer operators is described. Finally, two different parallel algorithms are presented for handling the geometric misalignment between different parallel-distributed meshes.
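
As a hedged illustration of the interpolation transfer described above, the sketch below moves a nodal field from one point set to another by linear interpolation, with a nearest-neighbor fallback outside the source hull, plus the trivial "copy" transfer for identical meshes. The function names are invented for this toy; SIERRA's actual operators are C++ services that also handle arbitrary parallel decompositions, which a single-process sketch cannot show.

```python
# Toy nodal-field transfer between two meshes (serial; names are illustrative).
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

def transfer_nodal_field(src_nodes, src_values, dst_nodes):
    """Interpolate a nodal field from source-mesh nodes to target nodes."""
    out = LinearNDInterpolator(src_nodes, src_values)(dst_nodes)
    # Fall back to nearest-neighbor for target nodes outside the source hull.
    mask = np.isnan(out)
    if mask.any():
        out[mask] = NearestNDInterpolator(src_nodes, src_values)(dst_nodes[mask])
    return out

def copy_transfer(src_values):
    """'Copy' transfer: identical meshes need no search or interpolation."""
    return src_values.copy()

# Example: transfer a temperature-like field between two random 2-D point sets.
rng = np.random.default_rng(0)
src, dst = rng.random((200, 2)), rng.random((50, 2))
temp_src = np.sin(src[:, 0]) * np.cos(src[:, 1])
temp_dst = transfer_nodal_field(src, temp_src, dst)
```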

More Details

Nonlinear programming strategies for source detection of municipal water networks

van Bloemen Waanders, Bart G.; Bartlett, Roscoe B.

Increasing concerns for the security of the national infrastructure have led to a growing need for improved management and control of municipal water networks. To deal with this issue, optimization offers a general and extremely effective method to identify (possibly harmful) disturbances, assess the current state of the network, and determine operating decisions that meet network requirements and lead to optimal performance. This paper details an optimization strategy for the identification of source disturbances in the network. Here we consider the source inversion problem modeled as a nonlinear programming problem. Dynamic behavior of municipal water networks is simulated using EPANET, an approach that provides a widely accepted, general-purpose user interface. For the source inversion problem, flows and concentrations of the network will be reconciled and unknown sources will be determined at network nodes. Moreover, intrusive optimization and sensitivity analysis techniques are identified to assess the influence of various parameters and models in the network in a computationally efficient manner. A number of numerical comparisons are made to demonstrate the effectiveness of various optimization approaches.
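
A minimal sketch of the source-inversion idea, under strong assumptions: a hypothetical linear response matrix A stands in for the EPANET network simulation, and nonnegative source strengths are recovered by least squares. The paper's nonlinear programming formulation and sensitivity analysis are not reproduced here.

```python
# Toy source inversion: recover unknown node injections from sensor readings.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_sensors, n_nodes = 40, 10
A = rng.random((n_sensors, n_nodes))   # hypothetical sensor/source sensitivities
s_true = np.zeros(n_nodes)
s_true[3] = 2.5                        # one contaminant source at node 3
c_obs = A @ s_true + 0.01 * rng.standard_normal(n_sensors)

# Nonnegative least squares: min ||A s - c_obs||^2  subject to s >= 0.
s_est, _ = nnls(A, c_obs)
print("estimated sources:", np.round(s_est, 2))
```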

More Details

Simple, scalable protocols for high-performance local networks

Proceedings - Conference on Local Computer Networks, LCN

Riesen, Rolf; Maccabe, Arthur B.

RMPP (reliable message passing protocol) is a lightweight transport protocol designed for clusters that provides end-to-end flow control and fault tolerance. This article compares RMPP to TCP, UDP, and an idealized "Utopia" protocol on four benchmarks: bandwidth, latency, all-to-all, and communication-computation overlap. The results show that message-based protocols like RMPP have several advantages over TCP, including ease of implementation, support for computation/communication overlap, and low CPU overhead.
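
To make the benchmark categories concrete, here is a minimal ping-pong latency measurement of the kind used in such comparisons. It times a local TCP socket pair rather than RMPP itself, so the numbers illustrate the method only.

```python
# Ping-pong latency benchmark over a local socket pair (POSIX).
import socket, time

a, b = socket.socketpair()
msg = b"x" * 64                 # small message: latency-bound, not bandwidth-bound
n = 10_000

t0 = time.perf_counter()
for _ in range(n):
    a.sendall(msg)
    b.recv(len(msg))
    b.sendall(msg)
    a.recv(len(msg))
t1 = time.perf_counter()
print(f"round-trip latency: {(t1 - t0) / n * 1e6:.1f} us")
```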

More Details

SIERRA Framework Version 3: h-Adaptivity Design and Use

Stewart, James R.; Edwards, Harold C.

This paper presents a high-level overview of the algorithms and supporting functionality provided by SIERRA Framework Version 3 for h-adaptive finite-element mechanics application development. Also presented is a fairly comprehensive description of what is required by the application codes to use the SIERRA h-adaptivity services. In general, the SIERRA framework provides the functionality for hierarchically subdividing elements in a distributed parallel environment, as well as dynamic load balancing. The mechanics application code is required to supply an a posteriori error indicator, prolongation and restriction operators for the field variables, hanging-node constraint handlers, and execution control code. This paper does not describe the Application Programming Interface (API), although references to SIERRA framework classes are given where appropriate.
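
The division of labor described above can be summarized as an interface: the framework owns hierarchical refinement and dynamic load balancing, while the application supplies the hooks below. This is an illustrative Python rendering; the names are not the SIERRA API (which is C++ and, as noted, not described in the paper).

```python
# Hypothetical application-side hooks for an h-adaptive framework.
from abc import ABC, abstractmethod

class AdaptiveApplication(ABC):
    @abstractmethod
    def error_indicator(self, element) -> float:
        """A posteriori error estimate used to mark elements for refinement."""

    @abstractmethod
    def prolongate(self, parent, children):
        """Map field values from a parent element to its new children."""

    @abstractmethod
    def restrict(self, children, parent):
        """Map field values from children back to a coarsened parent."""

    @abstractmethod
    def constrain_hanging_node(self, node, neighbors):
        """Enforce field continuity at hanging nodes created by refinement."""
```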

More Details

SIERRA Framework Version 3: Core Services Theory and Design

Edwards, Harold C.

The SIERRA Framework core services provide essential services for managing the mesh data structure, computational fields, and physics models of an application. An application using these services will supply a set of physics models, define the computational fields that are required by those models, and define the mesh upon which its physics models operate. The SIERRA Framework then manages all of the data for a massively parallel multiphysics application.

More Details

Analysis of Price Equilibriums in the Aspen Economic Model under Various Purchasing Methods

Slepoy, Natasha S.; Pryor, Richard J.

Aspen is a powerful economic modeling tool that uses agent-based modeling and genetic algorithms to simulate the economy. In it, individuals are hired by firms to produce a good that households then purchase. The firms decide what price to charge for this good, and based on that price, the households determine which firm to purchase from. We attempt to discover the Nash equilibrium price found in this model under two different methods of determining how many orders each firm receives. To keep the analysis simple, we assume there are only two firms in the model and that these firms compete for the sale of one identical good.
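
A sketch of how such an equilibrium can be found numerically: iterate each firm's best response to the other's current price until prices settle. The logit demand split and the numeric constants are illustrative stand-ins for Aspen's household purchase rules.

```python
# Best-response iteration for a hypothetical two-firm price game.
import numpy as np
from scipy.optimize import minimize_scalar

c, a, D = 1.0, 2.0, 100.0        # unit cost, price sensitivity, market size

def profit(p_i, p_j):
    share = np.exp(-a * p_i) / (np.exp(-a * p_i) + np.exp(-a * p_j))
    return (p_i - c) * D * share

p = np.array([3.0, 0.5])         # asymmetric starting prices
for _ in range(100):
    for i in range(2):
        res = minimize_scalar(lambda x: -profit(x, p[1 - i]),
                              bounds=(c, 10.0), method="bounded")
        p[i] = res.x              # firm i's best response to the rival's price
print("candidate Nash equilibrium prices:", np.round(p, 3))
```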

More Details

Predicting Function of Biological Macromolecules: A Summary of LDRD Activities: Project 10746

Frink, Laura J.; Rempe, Susan R.; Means, Shawn A.; Stevens, Mark J.; Crozier, Paul C.; Martin, Marcus G.; Sears, Mark P.; Hjalmarson, Harold P.

This LDRD project has involved the development and application of Sandia's massively parallel materials modeling software to several significant biophysical systems. They have been successful in applying the molecular dynamics code LAMMPS to modeling DNA, unstructured proteins, and lipid membranes. They have developed and applied a coupled transport-molecular theory code (Tramonto) to study ion channel proteins with gramicidin A as a prototype. They have used the Towhee configurational bias Monte Carlo code to perform rigorous tests of biological force fields, and they have applied the MP-Sala reacting-diffusion code to model cellular systems. Electroporation of cell membranes has also been studied, and detailed quantum mechanical studies of ion solvation have been performed. In addition, new molecular theory algorithms have been developed (in FasTram) that may ultimately make protein solvation calculations feasible on workstations. Finally, they have begun implementation of a combined molecular theory and configurational bias Monte Carlo code. They note that this LDRD has provided a basis for several new internal proposals (e.g., several new LDRDs) and external proposals (e.g., four NIH proposals and a DOE Genomes to Life proposal).

More Details

Xyce Parallel Electronic Simulator - User's Guide, Version 1.0

Hutchinson, Scott A.; Keiter, Eric R.; Hoekstra, Robert J.; Waters, Lon J.; Russo, Thomas V.; Rankin, Eric R.; Wix, Steven D.

This manual describes the use of the Xyce Parallel Electronic Simulator code for simulating electrical circuits at a variety of abstraction levels. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. As such, the development has focused on improving the capability over the current state of the art in the following areas: (1) capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) a client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI); and (4) object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. The code is a parallel code in the most general sense of the phrase: a message-passing parallel implementation that allows it to run efficiently on the widest possible number of computing platforms, including serial, shared-memory, and distributed-memory parallel machines as well as heterogeneous platforms. Furthermore, careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved even as the number of processors grows. Another feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs, including standard analytical models, behavioral models, and look-up tables. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important contributions Xyce makes to the designers at Sandia National Laboratories is in providing a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an "in-house" capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Furthermore, these capabilities will then be migrated to the end users.

More Details

Generalized Fourier Analyses of Semi-Discretizations of the Advection-Diffusion Equation

Christon, Mark A.; Voth, Thomas E.; Martinez, Mario J.

This report presents a detailed multi-methods comparison of the spatial errors associated with finite difference, finite element, and finite volume semi-discretizations of the scalar advection-diffusion equation. The errors are reported in terms of non-dimensional phase and group speeds, discrete diffusivity, artificial diffusivity, and grid-induced anisotropy. It is demonstrated that Fourier analysis (also known as von Neumann analysis) provides an automatic process for separating the spectral behavior of the discrete advective operator into its symmetric dissipative and skew-symmetric advective components. Further, it is demonstrated that streamline upwind Petrov-Galerkin and its control-volume finite element (CVFEM) analogue, streamline upwind control-volume, produce both an artificial diffusivity and an artificial phase speed in addition to the usual semi-discrete artifacts observed in the discrete phase speed, group speed, and diffusivity. For each of the numerical methods considered, asymptotic truncation error and resolution estimates are presented for the limiting cases of pure advection and pure diffusion. The Galerkin finite element method and its streamline upwind derivatives are shown to exhibit super-convergent behavior in terms of phase and group speed when a consistent mass matrix is used in the formulation. In contrast, the CVFEM method and its streamline upwind derivatives yield strictly second-order behavior. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework.
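
The core of a von Neumann (Fourier) analysis is short enough to show directly: the semi-discrete symbol of a difference operator splits into a skew-symmetric part, which perturbs the phase speed, and a symmetric part, which acts as an artificial diffusivity. The sketch below does this for first-order upwind advection of u_t + a u_x = 0; the schemes analyzed in the report are treated the same way.

```python
# Fourier (von Neumann) analysis of a semi-discrete advection operator.
import numpy as np

a, h = 1.0, 1.0
theta = np.linspace(1e-6, np.pi, 200)     # nondimensional wavenumber k*h
k = theta / h

# Symbol of first-order upwind, (u_j - u_{j-1})/h, approximating i*k of d/dx.
symbol = (1.0 - np.exp(-1j * theta)) / h

c_eff = a * symbol.imag / k               # discrete phase speed (skew-symmetric part)
nu_art = a * symbol.real / k**2           # artificial diffusivity (symmetric part)

# Long-wave limit: c_eff -> a (consistent) and nu_art -> a*h/2, the familiar
# first-order upwind artificial viscosity.
print(c_eff[0], nu_art[0])
```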

More Details

ALEGRA: User Input and Physics Descriptions Version 4.2

Boucheron, Edward A.; Haill, Thomas A.; Peery, James S.; Petney, Sharon P.; Robbins, Joshua R.; Robinson, Allen C.; Summers, Randall M.; Voth, Thomas E.; Wong, Michael K.; Brown, Kevin H.; Budge, Kent G.; Burns, Shawn P.; Carroll, Daniel E.; Carroll, Susan K.; Christon, Mark A.; Drake, Richard R.; Garasi, Christopher J.

ALEGRA is an arbitrary Lagrangian-Eulerian finite element code that emphasizes large distortion and shock propagation. This document describes the user input language for the code.

More Details

Large Scale Non-Linear Programming for PDE Constrained Optimization

van Bloemen Waanders, Bart G.; Bartlett, Roscoe B.; Long, Kevin R.; Boggs, Paul T.; Salinger, Andrew G.

Three years of large-scale PDE-constrained optimization research and development are summarized in this report. We have developed an optimization framework for 3 levels of SAND optimization and developed a powerful PDE prototyping tool. The optimization algorithms have been interfaced and tested on CVD problems using a chemically reacting fluid flow simulator resulting in an order of magnitude reduction in compute time over a black box method. Sandia's simulation environment is reviewed by characterizing each discipline and identifying a possible target level of optimization. Because SAND algorithms are difficult to test on actual production codes, a symbolic simulator (Sundance) was developed and interfaced with a reduced-space sequential quadratic programming framework (rSQP++) to provide a PDE prototyping environment. The power of Sundance/rSQP++ is demonstrated by applying optimization to a series of different PDE-based problems. In addition, we show the merits of SAND methods by comparing seven levels of optimization for a source-inversion problem using Sundance and rSQP++. Algorithmic results are discussed for hierarchical control methods. The design of an interior point quadratic programming solver is presented.

More Details

Self-Reconfigurable Robots

Hensinger, David M.; Johnston, Gabriel J.; Hinman-Sweeney, Elaine H.; Feddema, John T.; Eskridge, Steven E.

A distributed reconfigurable micro-robotic system is a collection of an arbitrary number of small, homogeneous robots designed to autonomously organize and reorganize in order to achieve mission-specified geometric shapes and functions. This project investigated the design, control, and planning issues for self-configuring and self-organizing robots. In 2D, a system consisting of two robots was prototyped; it successfully demonstrated automatic docking/undocking and could operate dependently or independently. Additional modules were constructed to demonstrate the usefulness of a self-configuring system in various situations. In 3D, a self-reconfiguring robot system of four identical modules was built. Each module connects to its neighbors using rotating actuators, and an individual component can move in three dimensions on its neighbors. We have also built a self-reconfiguring robot system consisting of a 9-module Crystalline Robot, in which each module is actuated by expansion/contraction. The system is fully distributed, has local (neighbor-to-neighbor) communication capabilities, and has global sensing capabilities.

More Details

ALEGRA Validation Studies for Regular, Mach, and Double Mach Shock Reflection in Gas Dynamics

Gruebel, Marilyn M.; Cochran, John R.

In this report we describe the performance of the ALEGRA shock wave physics code on a set of gas dynamic shock reflection problems that have associated experimental pressure data. These reflections cover three distinct regimes of oblique shock reflection in gas dynamics: regular, Mach, and double Mach reflection. For the selected data, the use of an ideal gas equation of state is appropriate, thus simplifying to a considerable degree the task of validating the shock wave computational capability of ALEGRA in the application regime of the experiments. We find good agreement of ALEGRA with reported experimental data for sufficient grid resolution. We discuss the experimental data, the nature and results of the corresponding ALEGRA calculations, and the implications of the presented experiment-to-calculation comparisons.

More Details

SAR Window Functions: A Review and Analysis of the Notched Spectrum Problem

Dickey, Fred M.; Romero, L.A.; Doerry, Armin

Imaging systems such as Synthetic Aperture Radar collect band-limited data from which an image of a target scene is rendered. The band-limited nature of the data generates sidelobes, or "spilled energy," most evident in the neighborhood of bright point-like objects. It is generally considered desirable to minimize these sidelobes, even at the expense of some generally small increase in system bandwidth. This is accomplished by shaping the spectrum with window functions prior to inversion or transformation into an image. A window function that minimizes sidelobe energy can be constructed based on prolate spheroidal wave functions. A parametric design procedure allows doing so even with constraints on allowable increases in system bandwidth. This approach is extended to accommodate spectral notches or holes, although the guaranteed minimum sidelobe energy can be quite high in this case. Interestingly, for a fixed bandwidth, the minimum-mean-squared-error image rendering of a target scene is achieved with no windowing at all (rectangular or boxcar window).
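
A small numerical illustration of the windowing trade-off, using the Slepian (DPSS) window available in SciPy as a stand-in for the paper's prolate-spheroidal construction; the parametric design procedure and notched spectra are not reproduced.

```python
# Sidelobe comparison: rectangular vs. prolate-spheroidal (DPSS) windowing
# of a flat band-limited spectrum (a point target).
import numpy as np
from scipy.signal.windows import dpss

M, N = 1024, 256                  # FFT size, occupied bandwidth
spec = np.zeros(M)
spec[:N] = 1.0                    # unwindowed (rectangular) spectrum
w = np.zeros(M)
w[:N] = dpss(N, NW=3)             # Slepian taper over the occupied band

img_rect = np.abs(np.fft.fftshift(np.fft.ifft(spec)))
img_dpss = np.abs(np.fft.fftshift(np.fft.ifft(w)))

db = lambda x: 20 * np.log10(np.maximum(x, 1e-12) / x.max())
off = M // 2 + 40                 # a bin well outside the mainlobe
print("sidelobe level, rectangular:", round(db(img_rect)[off], 1), "dB")
print("sidelobe level, dpss:       ", round(db(img_dpss)[off], 1), "dB")
```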

More Details

Three-Dimensional Wind Field Modeling: A Review

Homicz, Gregory F.

Over the past several decades, the development of computer models to predict the atmospheric transport of hazardous material across a local (on the order of 10s of km) to mesoscale (on the order of 100s of km) region has received considerable attention, for both regulatory purposes, and to guide emergency response teams. Wind inputs to these models cover a spectrum of sophistication and required resources. At one end is the interpolation/extrapolation of available observations, which can be done rapidly, but at the risk of missing important local phenomena. Such a model can also only describe the wind at the time the observations were made. At the other end are sophisticated numerical solutions based on so-called Primitive Equation models. These prognostic models, so-called because in principle they can forecast future conditions, contain the most physics, but can easily consume tens of hours, if not days, of computer time. They may also require orders of magnitude more effort to set up, as both boundary and initial conditions on all the relevant variables must be supplied. The subject of this report is two classes of models intermediate in sophistication between the interpolated and prognostic ends of the spectrum. The first, known as mass-consistent (sometimes referred to as diagnostic) models, attempt to strike a compromise between simple interpolation and the complexity of the Primitive Equation models by satisfying only the conservation of mass (continuity) equation. The second class considered here consists of the so-called linear models, which purport to satisfy both mass and momentum balances. A review of the published literature on these models over the past few decades was performed. Though diagnostic models use a variety of approaches, they tend to fall into a relatively few well-defined categories. Linear models, on the other hand, follow a more uniform methodology, though they differ in detail. The discussion considers the theoretical underpinnings of each category of the diagnostic models, and the linear models, in order to assess the advantages and disadvantages of each. It is concluded that diagnostic models are the better suited of the two for predicting the atmospheric dispersion of hazardous materials in emergency response scenarios, as the linear models are only able to accommodate gently-sloping terrain, and are predicated on several simplifying approximations which can be difficult to justify a priori. Of the various approaches used in diagnostic modeling, that based on the calculus of variations appears to be the most objective, in that it introduces the fewest number of arbitrary parameters. The strengths and weaknesses of models in this category, as they relate to the activities of Sandia's Nuclear Emergency Support Team (NEST), are further highlighted.
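
As a concrete illustration of the variational (mass-consistent) approach the review favors, the sketch below projects an interpolated wind field onto the nearest divergence-free field by solving a Poisson equation for a Lagrange multiplier. Flat terrain, two dimensions, and periodic boundaries are simplifying assumptions made to keep the sketch short; real diagnostic models work on terrain-following 3-D grids.

```python
# Mass-consistent adjustment sketch: remove the divergent part of a wind field.
import numpy as np

n, L = 64, 1.0
rng = np.random.default_rng(2)
x = np.linspace(0, L, n, endpoint=False)
_, Y = np.meshgrid(x, x, indexing="ij")
u0 = np.cos(2 * np.pi * Y) + 0.3 * rng.standard_normal((n, n))  # "observed" wind
v0 = 0.3 * rng.standard_normal((n, n))

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                    # guard the mean mode

div_hat = 1j * KX * np.fft.fft2(u0) + 1j * KY * np.fft.fft2(v0)
phi_hat = -div_hat / K2                           # solve laplacian(phi) = div(u0)
phi_hat[0, 0] = 0.0

u = np.real(np.fft.ifft2(np.fft.fft2(u0) - 1j * KX * phi_hat))
v = np.real(np.fft.ifft2(np.fft.fft2(v0) - 1j * KY * phi_hat))

# Check: the adjusted field is (spectrally) divergence-free.
div = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(u) + 1j * KY * np.fft.fft2(v)))
print("max |divergence| after adjustment:", np.abs(div).max())
```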

More Details

A 3-D Vortex Code for Parachute Flow Predictions: VIPAR Version 1.0

Strickland, James H.; Homicz, Gregory F.; Porter, V.L.

This report describes a 3-D fluid mechanics code for predicting flow past bluff bodies whose surfaces can be assumed to be made up of shell elements that are simply connected. Version 1.0 of the VIPAR code (Vortex Inflation PARachute code) is described herein. This version contains several first-order algorithms that we are in the process of replacing with higher-order ones; these enhancements will appear in the next version of VIPAR. The present code contains a motion generator that can be used to produce a large class of rigid body motions. The present code has also been fully coupled to a structural dynamics code in which the geometry undergoes large time-dependent deformations. Initial surface geometry is generated from triangular shell elements using a code such as Patran and is written into an ExodusII database file for subsequent input into VIPAR. Surface and wake variable information is output into two ExodusII files that can be post-processed and viewed using software such as EnSight™.

More Details

Level 1 Peer Review Process for the Sandia ASCI V and V Program: FY01 Final Report

Pilch, Martin P.; Froehlich, G.K.; Hodges, Ann L.; Peercy, David E.; Trucano, Timothy G.; Moya, Jaime L.

This report describes the results of the FY01 Level 1 Peer Reviews for the Verification and Validation (V&V) Program at Sandia National Laboratories. V&V peer review at Sandia is intended to assess the ASCI (Accelerated Strategic Computing Initiative) code team V&V planning process and execution. The Level 1 Peer Review process is conducted in accordance with the process defined in SAND2000-3099. V&V Plans are developed in accordance with the guidelines defined in SAND2000-3101. The peer review process and the process for improving the Guidelines are necessarily synchronized and form parts of a larger quality improvement process supporting the ASCI V&V program at Sandia. During FY00 a prototype of the process was conducted for two code teams and their V&V Plans, and the process and guidelines were updated based on the prototype. In FY01, Level 1 Peer Reviews were conducted on an additional eleven code teams and their respective V&V Plans. This report summarizes the results from those peer reviews, including recommendations from the panels that conducted the reviews.

More Details

DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Developers Manual (title change from electronic posting)

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

More Details

DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

More Details

Evaluation Techniques and Properties of an Exact Solution to a Subsonic Free Surface Jet Flow

Robinson, Allen C.

Computational techniques for the evaluation of steady plane subsonic flows represented by Chaplygin series in the hodograph plane are presented. These techniques are utilized to examine the properties of the free surface wall jet solution. This solution is a prototype for the shaped charge jet, a problem which is particularly difficult to compute properly using general purpose finite element or finite difference continuum mechanics codes. The shaped charge jet is a classic validation problem for models involving high explosives and material strength. Therefore, the problem studied in this report represents a useful verification problem associated with shaped charge jet modeling.

More Details

Assembly of LIGA using Electric Fields

Feddema, John T.; Warne, Larry K.; Johnson, William Arthur.; Routson, Allison J.; Armour, David L.

The goal of this project was to develop a device that uses electric fields to grasp and possibly levitate LIGA parts. This non-contact form of grasping would solve many of the problems associated with grasping parts that are only a few microns in dimensions. Scaling laws show that for parts this size, electrostatic and electromagnetic forces are dominant over gravitational forces. This is why micro-parts often stick to mechanical tweezers. If these forces can be controlled under feedback control, the parts could be levitated, possibly even rotated in air. In this project, we designed, fabricated, and tested several grippers that use electrostatic and electromagnetic fields to grasp and release metal LIGA parts. The eventual use of this tool will be to assemble metal and non-metal LIGA parts into small electromechanical systems.

More Details

General Concepts for Experimental Validation of ASCI Code Applications

Trucano, Timothy G.; Pilch, Martin P.; Oberkampf, William L.

This report presents general concepts in a broadly applicable methodology for validation of Accelerated Strategic Computing Initiative (ASCI) codes for Defense Programs applications at Sandia National Laboratories. The concepts are defined and analyzed within the context of their relative roles in an experimental validation process. Examples of applying the proposed methodology to three existing experimental validation activities are provided in appendices, using an appraisal technique recommended in this report.

More Details

LOCA 1.0 Library of Continuation Algorithms: Theory and Implementation Manual

Salinger, Andrew G.; Pawlowski, Roger P.; Lehoucq, Richard B.; Romero, L.A.; Wilkes, Edward D.

LOCA, the Library of Continuation Algorithms, is a software library for performing stability analysis of large-scale applications. LOCA enables the tracking of solution branches as a function of a system parameter, the direct tracking of bifurcation points, and, when linked with the ARPACK library, a linear stability analysis capability. It is designed to be easy to implement around codes that already use Newton's method to converge to steady-state solutions. The algorithms are chosen to work for large problems, such as those that arise from discretizations of partial differential equations, and to run on distributed memory parallel machines. This manual presents LOCA's continuation and bifurcation analysis algorithms, and instructions on how to implement LOCA with an application code. The LOCA code is being made publicly available at www.cs.sandia.gov/loca.
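
The basic pattern LOCA wraps around an application's Newton solver is easy to sketch: march a solution branch in the parameter, reusing the previous solution as the initial guess at each step. The scalar cubic test problem below is illustrative; LOCA itself targets large distributed systems and also tracks bifurcation points directly.

```python
# Natural-parameter continuation around a simple Newton solver (illustrative).
import numpy as np

def newton(f, dfdx, x, tol=1e-12, maxit=50):
    for _ in range(maxit):
        dx = -f(x) / dfdx(x)
        x += dx
        if abs(dx) < tol:
            return x
    raise RuntimeError("Newton failed to converge")

# Steady states of  x^3 - x - lam = 0, tracked as lam varies.
f = lambda x, lam: x**3 - x - lam
dfdx = lambda x, lam: 3 * x**2 - 1

branch, x = [], 1.2                    # start on the upper branch
for lam in np.linspace(0.0, 0.38, 20):
    # Previous solution seeds the next solve, so Newton stays on the branch.
    x = newton(lambda z: f(z, lam), lambda z: dfdx(z, lam), x)
    branch.append((lam, x))
print(branch[-1])
```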

More Details

Molecular Simulation of Reacting Systems

Thompson, Aidan P.

The final report for a Laboratory Directed Research and Development project entitled "Molecular Simulation of Reacting Systems" is presented. It describes efforts to incorporate chemical reaction events into the LAMMPS massively parallel molecular dynamics code. This was accomplished using a scheme in which several classes of reactions are allowed to occur in a probabilistic fashion at specified times during the MD simulation. Three classes of reaction were implemented: addition, chain transfer, and scission. A fully parallel implementation was achieved using a checkerboarding scheme, which avoids conflicts due to reactions occurring on neighboring processors; the observed chemical evolution is independent of the number of processors used. The code was applied to two test applications: irreversible linear polymerization and thermal degradation chemistry.
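
A serial toy of the probabilistic reaction step: at a specified interval, reactive sites within a cutoff form a bond with a fixed probability. Names and parameters are invented for illustration; the actual LAMMPS implementation partitions space (checkerboarding) so neighboring processors never react the same atoms concurrently.

```python
# Probabilistic addition reactions between MD steps (serial illustration).
import numpy as np

rng = np.random.default_rng(3)
pos = rng.random((100, 3)) * 10.0        # particle positions in a 10^3 box
reactive = np.ones(100, dtype=bool)      # all sites initially reactive
bonds, cutoff, p_react = [], 1.0, 0.3

for i in range(len(pos)):
    if not reactive[i]:
        continue
    d = np.linalg.norm(pos - pos[i], axis=1)
    for j in np.where((d < cutoff) & reactive)[0]:
        if j > i and rng.random() < p_react:   # addition reaction: form bond i-j
            bonds.append((i, j))
            reactive[i] = reactive[j] = False
            break
print(len(bonds), "bonds formed this reaction step")
```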

More Details

On the Development of the Large Eddy Simulation Approach for Modeling Turbulent Flow: LDRD Final Report

Schmidt, Rodney C.; Smith, Thomas M.; DesJardin, Paul E.; Voth, Thomas E.; Christon, Mark A.; Kerstein, Alan R.; Wunsch, Scott E.

This report describes research and development of the large eddy simulation (LES) turbulence modeling approach conducted as part of Sandia's laboratory directed research and development (LDRD) program. The emphasis of the work described here has been toward developing the capability to perform accurate and computationally affordable LES calculations of engineering problems using unstructured-grid codes, in wall-bounded geometries, and for problems with coupled physics. Specific contributions documented here include (1) the implementation and testing of LES models in Sandia codes, including tests of a new conserved-scalar laminar flamelet SGS combustion model that does not assume statistical independence between the mixture fraction and the scalar dissipation rate; (2) the development and testing of statistical analysis and visualization utility software developed for Exodus II unstructured-grid LES; and (3) the development and testing of a novel LES near-wall subgrid model based on the One-Dimensional Turbulence (ODT) model.

More Details

Tetrahedral mesh improvement via optimization of the element condition number

International Journal for Numerical Methods in Engineering

Knupp, Patrick K.

We present a new shape measure for tetrahedral elements that is optimal in that it gives the distance of a tetrahedron from the set of inverted elements. This measure is constructed from the condition number of the linear transformation between a unit equilateral tetrahedron and any tetrahedron with positive volume. Using this shape measure, we formulate two optimization objective functions that are differentiated by their goal: the first seeks to improve the average quality of the tetrahedral mesh; the second aims to improve the worst-quality element in the mesh. We review the optimization techniques used with each objective function and present experimental results that demonstrate the effectiveness of the mesh improvement methods. We show that a combined optimization approach that uses both objective functions obtains the best-quality meshes for several complex geometries. Copyright © 2001 John Wiley and Sons, Ltd.
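
The shape measure lends itself to a direct sketch: build the element's edge matrix, factor out the reference equilateral tetrahedron W, and take the Frobenius condition number of the resulting transformation S. The normalization below (3/kappa, so the ideal element scores 1 and degenerate elements score 0) is an assumption of this sketch.

```python
# Condition-number shape measure for a tetrahedron (illustrative normalization).
import numpy as np

# Reference equilateral tetrahedron: columns are edges from vertex 0.
W = np.array([[1.0, 0.5, 0.5],
              [0.0, np.sqrt(3) / 2, np.sqrt(3) / 6],
              [0.0, 0.0, np.sqrt(2.0 / 3.0)]])

def tet_quality(verts):
    """1 for equilateral, -> 0 as the element degenerates or inverts."""
    A = (verts[1:] - verts[0]).T               # physical edge matrix
    S = A @ np.linalg.inv(W)                   # map from reference to physical
    if np.linalg.det(S) <= 0:
        return 0.0                             # inverted or degenerate element
    kappa = np.linalg.norm(S, "fro") * np.linalg.norm(np.linalg.inv(S), "fro")
    return 3.0 / kappa

equilateral = np.vstack([np.zeros(3), W.T])
print(tet_quality(equilateral))                # 1.0
sliver = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0.5, 0.5, 1e-3]])
print(tet_quality(sliver))                     # close to 0
```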

More Details

User Manual and Supporting Information for Library of Codes for Centroidal Voronoi Point Placement and Associated Zeroth, First, and Second Moment Determination

Brannon, Rebecca M.

The theory, numerical algorithm, and user documentation are provided for a new "Centroidal Voronoi Tessellation (CVT)" method of filling a region of space (2D or 3D) with particles at any desired particle density. "Clumping" is entirely avoided and the boundary is optimally resolved. This particle placement capability is needed for any so-called "mesh-free" method in which physical fields are discretized via arbitrary-connectivity discrete points. CVT exploits efficient statistical methods to avoid expensive generation of Voronoi diagrams. Nevertheless, if a CVT particle's Voronoi cell were to be explicitly computed, then it would have a centroid that coincides with the particle itself and a minimized rotational moment. The CVT code provides each particle's volume and centroid, and also the rotational moment matrix needed to approximate a particle by an ellipsoid (instead of a simple sphere). DIATOM region specification is supported.
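
One plausible reading of the "efficient statistical methods" is a MacQueen-style probabilistic iteration, sketched below: random density samples pull their nearest generator toward their running centroid, so centroidal positions emerge without ever constructing a Voronoi diagram. This is an illustration of the idea, not the shipped library, and it omits the volume and moment computations.

```python
# Probabilistic (MacQueen-style) CVT iteration in the unit square.
import numpy as np

rng = np.random.default_rng(4)
k = 50
gens = rng.random((k, 2))                 # initial generators
counts = np.ones(k)

for _ in range(100_000):
    s = rng.random(2)                     # sample from the desired density (uniform)
    i = np.argmin(((gens - s) ** 2).sum(axis=1))            # nearest generator
    gens[i] = (counts[i] * gens[i] + s) / (counts[i] + 1)   # running-centroid update
    counts[i] += 1

# gens now approximates a centroidal Voronoi point placement: well spread,
# density following the sampling distribution, and no clumping.
```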

More Details

An Evaluation of the Material Point Method

Brannon, Rebecca M.

The theory and algorithm for the Material Point Method (MPM) are documented, with a detailed discussion on the treatments of boundary conditions and shock wave problems. A step-by-step solution scheme is written based on direct inspection of the two-dimensional MPM code currently used at the University of Missouri-Columbia (which is, in turn, a legacy of the University of New Mexico code). To test the completeness of the solution scheme and to demonstrate certain features of the MPM, a one-dimensional MPM code is programmed to solve one-dimensional wave and impact problems, with both linear elasticity and elastoplasticity models. The advantages and disadvantages of the MPM are investigated as compared with competing mesh-free methods. Based on the current work, future research directions are discussed to better simulate complex physical problems such as impact/contact, localization, crack propagation, penetration, perforation, fragmentation, and interactions among different material phases. In particular, the potential use of a boundary layer to enforce the traction boundary conditions is discussed within the framework of the MPM.
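
For readers who want the particle-grid-particle cycle in one place, here is a bare-bones single-timestep 1-D MPM update (linear elasticity, PIC transfer), a sketch under simplifying assumptions: boundary conditions, elastoplasticity, and the boundary-layer traction treatment discussed in the report are omitted.

```python
# One explicit 1-D MPM timestep for an elastic bar (PIC transfer; illustrative).
import numpy as np

def mpm_step(xp, vp, sig, Vp, mp, E, nodes, dx, dt):
    nn = len(nodes)
    mg, pg, fg = np.zeros(nn), np.zeros(nn), np.zeros(nn)
    cell = np.clip(((xp - nodes[0]) / dx).astype(int), 0, nn - 2)
    xi = (xp - nodes[cell]) / dx                     # local coordinate in [0, 1)
    N = np.stack([1.0 - xi, xi])                     # linear shape functions
    dN = np.stack([-np.ones_like(xi), np.ones_like(xi)]) / dx
    for a in range(2):                               # particle -> grid
        np.add.at(mg, cell + a, N[a] * mp)
        np.add.at(pg, cell + a, N[a] * mp * vp)
        np.add.at(fg, cell + a, -Vp * sig * dN[a])   # internal force from stress
    vg = np.zeros(nn)
    act = mg > 0
    vg[act] = (pg[act] + dt * fg[act]) / mg[act]     # grid momentum update
    vp = N[0] * vg[cell] + N[1] * vg[cell + 1]       # grid -> particle (PIC)
    xp = xp + dt * vp
    deps = dt * (dN[0] * vg[cell] + dN[1] * vg[cell + 1])
    return xp, vp, sig + E * deps                    # linear-elastic stress update
```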

More Details

Statistical Validation of Engineering and Scientific Models: A Maximum Likelihood Based Metric

Hills, Richard G.; Trucano, Timothy G.

Two major issues associated with model validation are addressed here. First, we present a maximum likelihood approach to define and evaluate a model validation metric. This approach has several advantages: it is more easily applied to nonlinear problems than the methods presented earlier by Hills and Trucano (1999, 2001); it is based on optimization, for which software packages are readily available; and it can more easily be extended to handle measurement uncertainty and prediction uncertainty with different probability structures. Several examples are presented utilizing this metric, and we show conditions under which this approach reduces to the approach developed previously by Hills and Trucano (2001). Second, we expand our earlier discussions (Hills and Trucano, 1999, 2001) on the impact of multivariate correlation and its effect on model validation metrics. We show that ignoring correlation in multivariate data can lead to misleading results, such as rejecting a good model when sufficient evidence to do so is not available.
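
A hedged illustration of the correlation effect (not Hills and Trucano's actual metric): score the model-data residual by its Gaussian log-likelihood under the full measurement covariance, and compare with the score obtained when the off-diagonal terms are ignored.

```python
# Effect of multivariate correlation on a likelihood-based agreement measure.
import numpy as np
from scipy.stats import multivariate_normal

y_model = np.array([1.00, 1.10, 1.20, 1.30])
y_data = np.array([1.05, 1.16, 1.27, 1.38])

# Strongly correlated measurement errors (e.g., a shared calibration bias).
var, rho = 0.01, 0.9
cov = var * (rho * np.ones((4, 4)) + (1 - rho) * np.eye(4))

ll_full = multivariate_normal(mean=y_model, cov=cov).logpdf(y_data)
ll_diag = multivariate_normal(mean=y_model, cov=var * np.eye(4)).logpdf(y_data)
print("log-likelihood with correlation:", ll_full)
print("log-likelihood ignoring it:    ", ll_diag)
```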

More Details

DNA Microarray Technology

Davidson, George S.

Collaboration between Sandia National Laboratories and the University of New Mexico Biology Department resulted in the capability to train students in microarray techniques and the interpretation of data from microarray experiments. These studies provide for a better understanding of the role of stationary phase and the gene regulation involved in exit from stationary phase, which may eventually have important clinical implications. Importantly, this research trained numerous students and is the basis for three new Ph.D. projects.

More Details

Verification and validation in computational fluid dynamics

Progress in Aerospace Sciences

Oberkampf, William L.; Trucano, Timothy G.

This article presents verification and validation (V&V) in computational fluid dynamics, along with methods and procedures for assessing them. Issues discussed include code verification versus solution verification, model validation versus solution validation, the distinction between error and uncertainty, conceptual sources of error and uncertainty, and the relationship between validation and prediction. Methods for determining the accuracy of numerical solutions are presented, and the importance of software testing during verification activities is emphasized.

More Details

Processor allocation on Cplant: Achieving general processor locality using one-dimensional allocation strategies

Proceedings - IEEE International Conference on Cluster Computing, ICCC

Leung, Vitus J.; Arkin, E.M.; Bender, M.A.; Bunde, D.; Johnston, J.; Lal, Alok; Mitchell, J.S.B.; Phillips, C.; Seiden, S.S.

The Computational Plant, or Cplant, is a commodity-based supercomputer under development at Sandia National Laboratories. This paper describes resource-allocation strategies to achieve processor locality for parallel jobs in Cplant and other supercomputers. Users of Cplant and other Sandia supercomputers submit parallel jobs to a job queue. When a job is scheduled to run, it is assigned to a set of processors. To obtain maximum throughput, jobs should be allocated to localized clusters of processors to minimize communication costs and to avoid bandwidth contention caused by overlapping jobs. This paper introduces new allocation strategies and performance metrics based on space-filling curves and one-dimensional allocation strategies. These algorithms are general and simple. Preliminary simulations and Cplant experiments indicate that both space-filling curves and one-dimensional packing improve processor locality compared to the sorted free list strategy previously used on Cplant. These new allocation strategies are implemented in the new release of the Cplant System Software, Version 2.0, phased into the Cplant systems at Sandia by May 2002.
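
The space-filling-curve idea reduces to a few lines: rank the mesh positions along a Morton (Z-order) curve, keep the free list in curve order, and give each job a contiguous interval, which tends to keep its processors physically close. This sketch is illustrative; the actual Cplant allocator, its metrics, and the other curve variants are more involved.

```python
# Z-order (Morton) processor ranking plus contiguous-interval allocation.
def morton_rank(x, y, bits=8):
    """Interleave the bits of (x, y) mesh coordinates into a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

# Free processors on an 8x8 mesh, kept sorted by curve position.
free = sorted(((x, y) for x in range(8) for y in range(8)),
              key=lambda p: morton_rank(*p))

def allocate(k):
    """Grant a job the first k free processors in curve order."""
    job, free[:k] = free[:k], []
    return job

print(allocate(4))   # a compact 2x2 block: [(0,0), (1,0), (0,1), (1,1)]
```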

More Details

A p-Adic Metric for Particle Mass Scale Organization with Genetic Divisors

Wagner, John S.

The concept of genetic divisors can be given a quantitative measure with a non-Archimedean p-adic metric that is both computationally convenient and physically motivated. For two particles possessing distinct mass parameters x and y, the metric distance D(x, y) is expressed on the field of rational numbers Q as the inverse of the greatest common divisor, 1/gcd(x, y). As a measure of genetic similarity, this metric can be applied to (1) the mass numbers of particle states and (2) the corresponding subgroup orders of these systems. The use of the Bezout identity in the form of a congruence for the expression of the gcd(x, y) corresponding to the ν_e and ν_μ neutrinos (a) connects the genetic divisor concept to the cosmic seesaw congruence, (b) provides support for the δ-conjecture concerning the subgroup structure of particle states, and (c) quantitatively strengthens the interlocking relationships joining the values of the prospectively derived (i) electron neutrino (ν_e) mass (0.808 meV), (ii) muon neutrino (ν_μ) mass (27.68 meV), and (iii) unified strong-electroweak coupling constant (α*⁻¹ = 34.26).
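
The metric itself is simple to state in code; a minimal sketch with illustrative (non-physical) integer mass parameters:

```python
# Genetic-divisor distance D(x, y) = 1 / gcd(x, y).
from math import gcd

def D(x, y):
    """More shared divisor structure means a larger gcd, hence a smaller distance."""
    return 1.0 / gcd(x, y)

for x, y in [(6, 10), (6, 15), (10, 15), (7, 30)]:
    print(f"D({x}, {y}) = 1/{gcd(x, y)} = {D(x, y):.3f}")
```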

More Details

Applications of Transport/Reaction Codes to Problems in Cell Modeling

Means, Shawn A.; Rintoul, Mark D.; Shadid, John N.

We demonstrate two specific examples that show how our existing capabilities in solving large systems of partial differential equations associated with transport/reaction systems can be easily applied to outstanding problems in computational biology. First, we examine a three-dimensional model for calcium wave propagation in a Xenopus laevis frog egg and verify that a proposed model for the distribution of calcium release sites agrees with experimental results as a function of both space and time. Next, we create a model of the neuron's terminus based on experimental observations and show that the sodium-calcium exchanger is not the route of sodium's modulation of neurotransmitter release. These state-of-the-art simulations were performed on massively parallel platforms and required almost no modification of existing Sandia codes.

More Details

Icarus: A 2-D Direct Simulation Monte Carlo (DSMC) Code for Multi-Processor Computers

Bartel, Timothy J.; Plimpton, Steven J.; Gallis, Michail A.

Icarus is a 2D Direct Simulation Monte Carlo (DSMC) code which has been optimized for the parallel computing environment. The code is based on the DSMC method of Bird [11.1] and models flowfields from the free-molecular to the continuum regime in either Cartesian (x, y) or axisymmetric (z, r) coordinates. Computational particles, representing a given number of molecules or atoms, are tracked as they have collisions with other particles or surfaces. Multiple species, internal energy modes (rotation and vibration), chemistry, and ion transport are modeled. A new trace-species methodology for collisions and chemistry is used to obtain statistics for small species concentrations. Gas-phase chemistry is modeled using steric factors derived from Arrhenius reaction rates or in a manner similar to continuum modeling. Surface chemistry is modeled with surface reaction probabilities; an optional site-density, energy-dependent coverage model is included. Electrons are modeled by either a local charge neutrality assumption or as discrete simulation particles. Ion chemistry is modeled with electron impact chemistry rates and charge exchange reactions. Coulomb collision cross-sections are used instead of Variable Hard Sphere values for ion-ion interactions. The electrostatic fields can be externally input, computed from a Langmuir-Tonks model, or obtained from a Green's function (boundary element) based Poisson solver. Icarus has been used for subsonic to hypersonic, chemically reacting, and plasma flows. The Icarus software package includes the grid generation, parallel processor decomposition, post-processing, and restart software. The commercial graphics package Tecplot is used for graphics display. All of the software packages are written in standard Fortran.
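
The heart of a DSMC collision step, the no-time-counter (NTC) pair selection, can be sketched compactly: estimate the number of candidate pairs in a cell from kinetic theory, then accept each pair with probability proportional to its cross-section times relative speed. Hard-sphere molecules, a single cell, and the toy constants below are assumptions; Icarus's chemistry, ion transport, and field solvers are far beyond this sketch.

```python
# NTC collision selection in one DSMC cell (hard-sphere, illustrative numbers).
import numpy as np

rng = np.random.default_rng(5)
N, Fn, Vc, dt = 500, 1e11, 1e-9, 1e-6    # simulators, real/sim ratio, cell volume, step
d = 3.7e-10                               # hard-sphere diameter (N2-like), m
sigma = np.pi * d**2
v = rng.standard_normal((N, 3)) * 300.0   # thermal-ish velocities, m/s
g_max = 2000.0                            # running bound on relative speed

n_cand = int(0.5 * N * (N - 1) * Fn * sigma * g_max * dt / Vc)
collisions = 0
for _ in range(n_cand):
    i, j = rng.choice(N, size=2, replace=False)
    g = np.linalg.norm(v[i] - v[j])
    if rng.random() < g / g_max:          # acceptance (sigma constant for hard spheres)
        # Isotropic post-collision scattering in the center-of-mass frame.
        vcm = 0.5 * (v[i] + v[j])
        c = 2 * rng.random() - 1.0
        phi = 2 * np.pi * rng.random()
        s = np.sqrt(1 - c * c)
        gnew = g * np.array([s * np.cos(phi), s * np.sin(phi), c])
        v[i], v[j] = vcm + 0.5 * gnew, vcm - 0.5 * gnew
        collisions += 1
print(collisions, "collisions this step")
```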

More Details

ACME - Algorithms for Contact in a Multiphysics Environment API Version 1.0

Brown, Kevin H.; Summers, Randall M.; Glass, Micheal W.; Gullerud, Arne S.; Heinstein, Martin W.; Jones, Reese E.

An effort is underway at Sandia National Laboratories to develop a library of algorithms to search for potential interactions between surfaces represented by analytic and discretized topological entities. This effort is also developing algorithms to determine forces due to these interactions for transient dynamics applications. This document describes the Application Programming Interface (API) for the ACME (Algorithms for Contact in a Multiphysics Environment) library.
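
A broad-phase search of the kind such a library performs can be sketched with a sort-and-sweep over axis-aligned bounding boxes: only pairs whose boxes overlap on all three axes survive as candidates for the much more expensive detailed proximity and force computation. Sizes and counts below are illustrative.

```python
# Broad-phase contact search: sort-and-sweep over axis-aligned bounding boxes.
import numpy as np

rng = np.random.default_rng(6)
centers = rng.random((200, 3))
half = 0.02
lo, hi = centers - half, centers + half   # AABBs of 200 surface facets

order = np.argsort(lo[:, 0])              # sort by minimum x
candidates = []
for a, i in enumerate(order):
    for j in order[a + 1:]:
        if lo[j, 0] > hi[i, 0]:           # sweep: no later box can overlap in x
            break
        # Interval-overlap test on the remaining (y, z) axes.
        if (lo[j, 1:] <= hi[i, 1:]).all() and (lo[i, 1:] <= hi[j, 1:]).all():
            candidates.append((min(i, j), max(i, j)))
print(len(candidates), "candidate interactions out of", 200 * 199 // 2, "pairs")
```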

More Details