Publications

Better relaxations of classical discrete optimization problems

Carr, Robert D.

A mathematical program is an optimization problem expressed as an objective function of multiple variables subject to a set of constraints. When the optimization problem has specific structure, the problem class usually has a special name. A linear program is the optimization of a linear objective function subject to linear constraints. An integer program is a linear program in which some of the variables must take only integer values. A semidefinite program is a linear program in which the variables are arranged in a matrix that must be positive semidefinite for all feasible solutions. There are general-purpose solvers for each of these classes of mathematical program. There are usually many correct ways to express a problem as, say, a linear program, but equivalent formulations can have significantly different practical tractability. In this poster, we present new formulations for two classic discrete optimization problems, maximum cut (max cut) and the graphical traveling salesman problem (GTSP), that are significantly stronger, and hence more computationally tractable, than any previous formulations of their class. Both partially answer longstanding open theoretical questions in polyhedral combinatorics.
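
As a concrete illustration of a relaxation, the sketch below solves the textbook LP relaxation of max cut on a triangle with SciPy. This is a minimal, generic example for orientation only, not the stronger formulation presented in the poster.

```python
# Minimal sketch: LP relaxation of max cut on the triangle K3.
# Edge variables x_e in [0, 1], where x_e = 1 means edge e is cut.
# The full relaxation imposes an odd-subset inequality for every cycle;
# for the triangle the key one is x_0 + x_1 + x_2 <= 2 (a cut cannot
# contain all three edges of an odd cycle).
from scipy.optimize import linprog

c = [-1.0, -1.0, -1.0]        # maximize the cut -> minimize its negation
A_ub = [[1.0, 1.0, 1.0]]      # odd-cycle inequality for the triangle
b_ub = [2.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3, method="highs")
print("LP relaxation value:", -res.fun)   # 2.0; here it matches the integer optimum
```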

More Details

Multiphase reacting flow modeling of singlet oxygen generators for chemical oxygen iodine lasers

Pawlowski, Roger P.; Salinger, Andrew G.

Singlet oxygen generators are multiphase flow chemical reactors used to generate energetic oxygen as fuel for chemical oxygen iodine lasers. In this paper, a theoretical model of the generator is presented along with its solutions over ranges of parameter space and optimizations that maximize oxygen production. The singlet oxygen generator (SOG) is a low-pressure, multiphase flow chemical reactor that produces molecular oxygen in an electronically excited state, i.e., singlet delta oxygen. The primary product of the reactor, the energetic oxygen, is used in a stage immediately downstream of the SOG to dissociate and energize iodine. The gas mixture including the iodine is accelerated to supersonic speed and lased. The SOG is thus the fuel generator for the chemical oxygen iodine laser (COIL). The COIL has important military applications (it was developed by the US Air Force in the 1970s) and, because its infrared beam is readily absorbed by metals, industrial applications in cutting and drilling.

The SOG appears in various configurations; the one considered here is a crossflow droplet generator. A gas consisting of molecular chlorine and a diluent, usually helium, is pumped through a roughly rectangular channel. An aqueous solution of hydrogen peroxide and potassium hydroxide is pumped through small holes into the channel, perpendicular to the direction of the gas flow, causing the solution to aerosolize. Dissociation of the potassium hydroxide draws a proton from the hydrogen peroxide, generating an HO₂ ion in the liquid. Chlorine diffuses into the liquid and reacts with the HO₂ ion, producing the singlet delta oxygen; some of the oxygen diffuses back into the gas phase.

The focus of this work is a predictive multiphase flow model of the SOG to support design optimization. The equations solved are the so-called Eulerian-Eulerian form of the multiphase flow Navier-Stokes equations, in which one set of equations represents the gas phase and another set of size m represents the liquid phase, where m is the number of distinct droplet-size classes used to represent the droplet distribution in the reactor. A stabilized Galerkin formulation is used to solve the equation set. The set of equations is large. There are five equations representing the gas phase: continuity, vector momentum (three components), and heat. There are 5m representing the liquid phase: number density, vector momentum, and heat. Four mass transfer equations represent the gas-phase constituents, and m advection-diffusion equations represent the HO₂ ion concentration in the liquid phase. We therefore develop and apply algorithms that harness large parallel computing architectures to solve the steady-state form of these equations many times, exploring the large parameter space via continuation methods and maximizing the generation of singlet delta oxygen via optimization methods. We present the set of equations that are solved and the methods used to solve them. Solutions of the equations are presented along with solution paths for varying aerosol loading (the ratio of liquid to gas mass flow rates) and simple optimizations centered on maximizing oxygen production and minimizing the amount of entrained liquid in the gas exit stream. Entrained liquid is important to minimize because it can destroy the lenses and mirrors in the lasing cavity.
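
For reference, the Eulerian-Eulerian two-fluid balance laws mentioned above have the generic form sketched below, written schematically for a phase k (the gas, or one of the m droplet-size classes). The interphase exchange terms, heat and species equations, and closures are problem-specific and are given in the paper itself.

```latex
% Schematic Eulerian-Eulerian balance laws for phase k with volume
% fraction alpha_k, density rho_k, and velocity u_k; Gamma_k and M_k
% are interphase mass and momentum exchange terms.
\begin{align}
  \frac{\partial (\alpha_k \rho_k)}{\partial t}
    + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k) &= \Gamma_k, \\
  \frac{\partial (\alpha_k \rho_k \mathbf{u}_k)}{\partial t}
    + \nabla \cdot (\alpha_k \rho_k \mathbf{u}_k \otimes \mathbf{u}_k)
    &= -\alpha_k \nabla p
      + \nabla \cdot (\alpha_k \boldsymbol{\tau}_k)
      + \alpha_k \rho_k \mathbf{g} + \mathbf{M}_k .
\end{align}
```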

More Details

A nested dissection approach to sparse matrix partitioning for parallel computations

Proposed for publication in SIAM Journal on Scientific Computing.

Boman, Erik G.

We consider how to distribute sparse matrices among processes to reduce communication costs in parallel sparse matrix computations, specifically sparse matrix-vector multiplication. Our main contributions are (i) an exact graph model for communication with general (two-dimensional) matrix distribution, and (ii) a recursive partitioning algorithm based on nested dissection (substructuring). We show that the communication volume is closely linked to vertex separators. We have implemented our algorithm using hypergraph partitioning software to enable a fair comparison with existing methods. We present numerical results for sparse matrices from several application areas, with up to 9 million nonzeros. The results show that our new approach is superior to traditional one-dimensional partitioning and comparable, in terms of communication volume, to a current leading partitioning method, the fine-grain hypergraph method. Our nested dissection method has two advantages over the fine-grain method: it is faster to compute, and the resulting distribution requires fewer communication messages.
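
The recursion at the heart of nested dissection can be sketched in a few lines: find a vertex separator, recurse on the two halves, and number the separator vertices last. The sketch below uses a crude BFS level-set separator as a stand-in; the paper's algorithm relies on high-quality separators and hypergraph machinery not shown here.

```python
# Sketch of recursive nested dissection on a graph given as an adjacency
# dict (vertex -> list of neighbors). Illustration only.
from collections import deque

def levelset_separator(adj, nodes):
    """Crude separator: BFS restricted to `nodes`, take the middle level."""
    start = next(iter(nodes))
    level = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w in nodes and w not in level:
                level[w] = level[v] + 1
                queue.append(w)
    mid = max(level.values()) // 2
    sep = {v for v, l in level.items() if l == mid}
    a = {v for v, l in level.items() if l < mid}
    b = set(nodes) - sep - a          # deeper levels plus unreached vertices
    return a, sep, b

def nested_dissection(adj, nodes):
    """Order the two halves first, numbering separator vertices last."""
    if len(nodes) <= 2:
        return list(nodes)
    a, sep, b = levelset_separator(adj, nodes)
    if not a or not b:                # separator failed to split; stop here
        return list(nodes)
    return nested_dissection(adj, a) + nested_dissection(adj, b) + list(sep)

# Tiny demo on the path graph 0-1-2-...-6.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 7] for i in range(7)}
print(nested_dissection(adj, set(range(7))))
```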

More Details

Simulation of the mechanical strength of a single collagen molecule

Biophysical Journal

In 't Veld, Pieter J.; Stevens, Mark J.

We perform atomistic simulations on a single collagen molecule to determine its intrinsic molecular strength. A tensile pull simulation determines the tensile strength and Young's modulus, and a simulation that separates two of the three helices of collagen examines the internal strength of the molecule. The magnitude of the calculated tensile forces is consistent with the strong bond-stretching and angle-bending forces involved in the tensile deformation. The triple helix unwinds with increasing tensile force. Pulling apart the triple helix requires a smaller, oscillating force; the oscillations are due to the sequential separation of the hydrogen-bonded helices. The force rises as the residues reorient in the direction of the separation force, and drops once the hydrogen bonds between residues on different helices break and the residues separate. © 2008 by the Biophysical Society.

More Details

On sub-linear convergence for linearly degenerate waves in capturing schemes

Journal of Computational Physics

Banks, Jeffrey W.; Aslam, T.; Rider, William J.

A common attribute of capturing schemes used to find approximate solutions to the Euler equations is a sub-linear rate of convergence with respect to mesh resolution. Purely nonlinear jumps, such as shock waves, produce a first-order convergence rate, but linearly degenerate discontinuous waves, where present, produce sub-linear convergence rates that eventually dominate the global rate of convergence. The classical explanation for this phenomenon investigates the behavior of the exact solution to the numerical method in combination with the finite error terms, often referred to as the modified equation. For a first-order method, the modified equation produces the hyperbolic evolution equation with second-order diffusive terms. In the frame of reference of the traveling wave, the solution of a discontinuous wave consists of a diffusive layer that grows at a rate of t^{1/2}, yielding a convergence rate of 1/2. Self-similar heuristics for higher-order discretizations produce a growth rate for the layer thickness of t^{1/(p+1)}, which yields an estimate for the convergence rate of p/(p+1), where p is the order of the discretization. In this paper we show that this estimated convergence rate can be derived with greater rigor for both dissipative and dispersive forms of the discrete error. In particular, the linear modified equations can be solved exactly in analytical form. These estimates and forms for the error are confirmed in a variety of demonstrations ranging from simple linear waves to multidimensional solutions of the Euler equations. © 2008 Elsevier Inc.
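
A quick numerical illustration of the p/(p+1) heuristic, using synthetic errors only (not data from the paper): the observed rate is estimated from errors on two resolutions in the standard way.

```python
# Observed convergence rate from errors at two mesh resolutions, compared
# against the predicted sub-linear rate p/(p+1) for a contact wave.
import math

def observed_rate(e_coarse, e_fine, refinement=2.0):
    """Standard estimate: log of error ratio over log of mesh ratio."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

for p in (1, 2, 3):                    # formal order of the scheme
    predicted = p / (p + 1)            # sub-linear contact-wave rate
    h_coarse, h_fine = 1.0, 0.5
    # Synthetic errors scaling like h^{p/(p+1)}, for illustration only.
    e_coarse, e_fine = h_coarse ** predicted, h_fine ** predicted
    print(f"p={p}: predicted {predicted:.3f}, "
          f"observed {observed_rate(e_coarse, e_fine):.3f}")
```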

More Details

Two-way coupling of Presto v2.8 and CTH v8.1

Edwards, Harold C.; Crawford, D.A.; Bishop, Joseph E.

A loose two-way coupling of SNL's Presto v2.8 and CTH v8.1 analysis codes has been developed to support the analysis of explosive loading of structures. Presto is a Lagrangian, three-dimensional, explicit, transient dynamics code in the SIERRA mechanics suite for the analysis of structures subjected to impact-like loads. CTH is a hydrocode for modeling complex multi-dimensional, multi-material problems characterized by large deformations and/or strong shocks. A fundamental assumption in this loose coupling is that the compliance of the structure modeled with Presto is significantly smaller than the compliance of the surrounding medium (e.g., air) modeled with CTH. A current limitation of the coupled code is that interaction between CTH and thin structures modeled in Presto (e.g., shells) is not supported. Research is in progress to relax this thin-structure limitation.
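
A loose two-way coupling of this kind typically advances the two solvers in a staggered loop. The sketch below is a generic illustration with hypothetical solver interfaces; it is not the actual Presto/CTH API.

```python
# Schematic staggered loop for a loose two-way fluid/structure coupling.
# `fluid` and `structure` stand in for CTH-like and Presto-like solvers;
# all interfaces here are hypothetical, not the actual Presto/CTH API.
def couple(fluid, structure, t_end, dt):
    t = 0.0
    while t < t_end:
        # Fluid step: advance the Eulerian solver with the current
        # structural surface acting as an embedded boundary, and collect
        # the pressure loads it computes on that surface.
        loads = fluid.advance(dt, boundary=structure.surface())
        # Structure step: advance the Lagrangian solver under those loads.
        # The loose-coupling assumption (structure far stiffer than the
        # surrounding medium) is what justifies this one-pass exchange.
        structure.advance(dt, surface_loads=loads)
        t += dt
```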

More Details

Large scale visualization on the Cray XT3 using ParaView

Moreland, Kenneth D.; Rogers, David R.

Post-processing and visualization are key components to understanding any simulation. Porting ParaView, a scalable visualization tool, to the Cray XT3 allows our analysts to leverage the same supercomputer they use for simulation to perform post-processing. Visualization tools traditionally rely on a variety of rendering, scripting, and networking resources; the challenge of running ParaView on the Lightweight Kernel is to provide and use the visualization and post-processing features in the absence of many OS resources. We have successfully accomplished this at Sandia National Laboratories and the Pittsburgh Supercomputing Center.

More Details

Summary of multi-core hardware and programming model investigations

Pedretti, Kevin T.T.; Kelly, Suzanne M.; Levenhagen, Michael J.

This report summarizes our investigations into multi-core processors and programming models for parallel scientific applications. The motivation for this study was to better understand the landscape of multi-core hardware, future trends, and the implications on system software for capability supercomputers. The results of this study are being used as input into the design of a new open-source light-weight kernel operating system being targeted at future capability supercomputers made up of multi-core processors. A goal of this effort is to create an agile system that is able to adapt to and efficiently support whatever multi-core hardware and programming models gain acceptance by the community.

More Details

Individual and group electronic brainstorming in an industrial setting

Dornburg, Courtney S.; Hendrickson, Stacey M.; Davidson, George S.

An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing real-world 'wickedly difficult' challenges. Previous laboratory research has engaged small groups of students in answering questions irrelevant to an industrial setting. The current experiment extended this research to larger, real-world employee groups engaged in addressing organization-relevant challenges. Within the present experiment, the data demonstrated that individuals performed at least as well as groups in terms of number of ideas produced and significantly (p < .02) outperformed groups in terms of the quality of those ideas (as measured along the dimensions of originality, feasibility, and effectiveness).

More Details

Force flux and the peridynamic stress tensor

Journal of the Mechanics and Physics of Solids

Lehoucq, Richard B.; Silling, Stewart A.

The peridynamic model is a framework for continuum mechanics based on the idea that pairs of particles exert forces on each other across a finite distance. The equation of motion in the peridynamic model is an integro-differential equation. In this paper, a notion of a peridynamic stress tensor derived from nonlocal interactions is defined. At any point in the body, this stress tensor is obtained from the forces within peridynamic bonds that geometrically go through the point. The peridynamic equation of motion can be expressed in terms of this stress tensor, and the result is formally identical to the Cauchy equation of motion in the classical model, even though the classical model is a local theory. We also establish that this stress tensor field is unique in a certain function space compatible with finite element approximations. © 2007 Elsevier Ltd. All rights reserved.
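
The integro-differential equation of motion referred to above is, in its standard bond-based form:

```latex
% Peridynamic equation of motion: point x interacts with all points x'
% inside a finite horizon H_x through a pairwise force function f.
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
  = \int_{\mathcal{H}_{\mathbf{x}}}
      \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\;
                      \mathbf{x}'-\mathbf{x}\bigr)\,dV_{\mathbf{x}'}
    + \mathbf{b}(\mathbf{x},t).
```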

More Details

Pamgen, a library for parallel generation of simple finite element meshes

Hensinger, David M.; Drake, Richard R.; Foucar, James G.

Generating finite element meshes is a serious bottleneck for large parallel simulations. When mesh generation is limited to serial machines and element counts approach a billion, this bottleneck becomes a roadblock. Pamgen is a parallel mesh generation library that allows on-the-fly scalable generation of hexahedral and quadrilateral finite element meshes for several simple geometries. It has been used to generate more than 1.1 billion elements on 17,576 processors. Pamgen generates an unstructured finite element mesh on each processor at the start of a simulation. The mesh is specified by commands passed to the library as a C-style character string. The resulting mesh geometry, topology, and communication information can then be queried through an API. Pamgen allows specification of boundary condition application regions using sidesets (element faces) and nodesets (collections of nodes). It supports several simple geometry types, multiple alternatives for mesh grading, and several alternatives for the initial domain decomposition. Pamgen makes it easy to change details of the finite element mesh and is very useful for performance studies and scoping calculations.

More Details

Verification and validation benchmarks

Nuclear Engineering and Design

Oberkampf, William L.; Trucano, Timothy G.

Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of achievement in V&V activities, how closely related the V&V benchmarks are to the actual application of interest, and the quantification of uncertainties related to the application of interest. © 2007 Elsevier B.V. All rights reserved.
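
The method of manufactured solutions mentioned above can be illustrated in a few lines: choose a smooth solution, push it through the governing PDE operator symbolically, and use the residual as a source term for the code under test. A minimal sketch for a 1-D heat equation (an illustration, not one of the paper's benchmarks):

```python
# Method of manufactured solutions in miniature: the chosen u defines
# the source term s exactly, so the code's computed solution can be
# compared against u to measure numerical error.
import sympy as sp

x, t, nu = sp.symbols("x t nu")
u = sp.sin(sp.pi * x) * sp.exp(-t)        # manufactured solution
# Target PDE: u_t - nu * u_xx = s.
s = sp.diff(u, t) - nu * sp.diff(u, x, 2)
print(sp.simplify(s))                      # (nu*pi**2 - 1)*exp(-t)*sin(pi*x)
```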

More Details

Graph analysis with high-performance computing

Computing in Science and Engineering

Hendrickson, Bruce A.; Berry, Jonathan W.

Large, complex graphs arise in many settings, including the Internet, social networks, and communication networks. To study such data sets, the authors explored the use of high-performance computing (HPC) for graph algorithms. They found that the challenges in these applications are quite different from those arising in traditional HPC applications and that massively multithreaded machines are well suited for graph problems. © 2008 IEEE.

More Details

Yucca Mountain licensing support network archive assistant

Dunlavy, Daniel D.; Basilico, Justin D.; Verzi, Stephen J.; Bauer, Travis L.

This report describes the Licensing Support Network (LSN) Assistant, a set of tools for categorizing e-mail messages and documents, and for investigating and correcting existing archives of categorized e-mail messages and documents. The two main tools in the LSN Assistant are the LSN Archive Assistant (LSNAA) tool, for recategorizing manually labeled e-mail messages and documents, and the LSN Realtime Assistant (LSNRA) tool, for categorizing new e-mail messages and documents. This report focuses on the LSNAA tool, which has two main components. The first is the Sandia Categorization Framework, which is responsible for providing categorizations for documents in an archive and storing them in an appropriate Categorization Database. The second is the user interface, which primarily interacts with the Categorization Database, providing a way to find and correct categorization errors in the database. A procedure for applying the LSNAA tool and an example use case applied to a set of e-mail messages are provided, along with performance results for the categorization model designed for this use case.
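
For orientation, a categorization model of the general kind described (features extracted from text plus a trained classifier) can be sketched with a generic scikit-learn pipeline. The documents and labels below are hypothetical, and this is not the Sandia Categorization Framework itself.

```python
# Generic text-categorization sketch: TF-IDF features plus a linear
# classifier. Hypothetical data; not the Sandia Categorization Framework.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "license application status update",        # hypothetical examples
    "site characterization field report",
    "meeting minutes and action items",
]
labels = ["licensing", "technical", "administrative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(docs, labels)
print(model.predict(["field report on site characterization"]))
```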

More Details

Sustaining knowledge in the neutron generator community and benchmarking study

Huff, Tameka B.; Turgeon, Jennifer T.; Baldonado, Esther B.; Stubblefield, W.A.; Kennedy, Bryan C.; Saba, Antony S.

In 2004, the Responsive Neutron Generator Product Deployment department embarked upon a partnership with the Systems Engineering and Analysis knowledge management (KM) team to develop knowledge management systems for the neutron generator (NG) community. This partnership continues today. The most recent challenge was to improve the current KM system (KMS) development approach by identifying a process that will allow staff members to capture knowledge as they learn it. This 'as-you-go' approach will lead to a sustainable KM process for the NG community. This paper presents a historical overview of NG KMSs, as well as research conducted to move toward sustainable KM.

More Details

Parallel job scheduling policies to improve fairness: a case study

Leung, Vitus J.

Balancing fairness, user performance, and system performance is a critical concern when developing and installing parallel schedulers. Sandia uses a customized scheduler to manage many of its parallel machines. A primary function of the scheduler is to ensure that the machines have good utilization and that users are treated 'fairly'. A separate compute process allocator (CPA) ensures that jobs on the machines are not too fragmented, in order to maximize throughput. Until recently, there has been no established technique for measuring the fairness of parallel job schedulers. This paper introduces a 'hybrid' fairness metric that is similar to recently proposed metrics. The metric uses the Sandia version of a 'fairshare' queuing priority as the basis for fairness and is used to evaluate a Sandia workload. Based on these results, multiple scheduling strategies are introduced to improve performance while satisfying user and system performance constraints.
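
To fix ideas, a toy fair-share priority of the general kind referenced above might look like the sketch below; the formula, weights, and decay are invented for illustration and are not Sandia's actual fairshare computation.

```python
# Toy fair-share priority: users whose decayed historical usage exceeds
# their allocated share are deprioritized. Illustration only; not the
# Sandia fairshare formula.
def fairshare_priority(usage, share, decay=0.5):
    """usage: per-period machine fractions consumed, most recent first;
    share: the user's allocated fraction of the machine."""
    decayed = sum(u * decay**i for i, u in enumerate(usage))
    norm = sum(decay**i for i in range(len(usage)))
    return share - decayed / norm if norm else share   # higher is better

# User A is under their share; user B has been a heavy recent consumer.
print(fairshare_priority(usage=[0.1, 0.2], share=0.3))   # positive: favored
print(fairshare_priority(usage=[0.9, 0.8], share=0.3))   # negative: deferred
```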

More Details

Dissipation-induced heteroclinic orbits in tippe tops

Proposed for publication in SIAM Review.

Romero, L.A.

This paper demonstrates that the conditions for the existence of a dissipation-induced heteroclinic orbit between the inverted and noninverted states of a tippe top are determined by a complex version of the equations for a simple harmonic oscillator: the modified Maxwell-Bloch equations. A standard linear analysis reveals that the modified Maxwell-Bloch equations describe the spectral instability of the noninverted state and the Lyapunov stability of the inverted state. A standard nonlinear analysis based on the energy-momentum method gives necessary and sufficient conditions for the existence of a dissipation-induced connecting orbit between these relative equilibria.
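
Schematically, such a "complex version of the simple harmonic oscillator" takes the form below, with complex damping and stiffness coefficients determined by the top's parameters and the dissipation. This is a sketch of the general shape only, not the paper's exact coefficients.

```latex
% Schematic form: a damped oscillator in one complex variable z(t),
% whose complex coefficients combine gyroscopic and dissipative effects.
\ddot{z} + (\gamma_1 + i\,\gamma_2)\,\dot{z} + (\kappa_1 + i\,\kappa_2)\,z = 0 .
```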

More Details

ALEGRA: An arbitrary Lagrangian-Eulerian multimaterial, multiphysics code

46th AIAA Aerospace Sciences Meeting and Exhibit

Robinson, Allen C.; Brunner, Thomas A.; Carroll, Susan; Drake, Richard; Garasi, Christopher J.; Gardiner, Thomas; Haill, Thomas; Hanshaw, Heath; Hensinger, David; Labreche, Duane; Lemke, Raymond; Love, Edward; Luchini, Christopher; Mosso, Stewart; Niederhaus, John; Ober, Curtis C.; Petney, Sharon; Rider, William J.; Scovazzi, Guglielmo; Strack, O.E.; Summers, Randall; Trucano, Timothy; Weirs, V.G.; Wong, Michael; Voth, Thomas

ALEGRA is an arbitrary Lagrangian-Eulerian (multiphysics) computer code developed at Sandia National Laboratories since 1990. The code contains a variety of physics options including magnetics, radiation, and multimaterial flow. The code has been developed for nearly two decades, but recent work has dramatically improved its accuracy and robustness. These improvements include techniques applied to the basic Lagrangian differencing, artificial viscosity, and the remap step of the method, including an important improvement in the basic conservation of energy in the scheme. We discuss the various algorithmic improvements and their impact on results for important applications, including magnetic implosions, ceramic fracture modeling, and electromagnetic launch. Copyright © 2008 by the American Institute of Aeronautics and Astronautics, Inc.

More Details

Peridynamics with LAMMPS: a user guide

Parks, Michael L.; Plimpton, Steven J.; Lehoucq, Richard B.; Silling, Stewart A.

Peridynamics is a nonlocal formulation of continuum mechanics. The discrete peridynamic model has the same computational structure as a molecular dynamics model. This document details the implementation of a discrete peridynamic model within the LAMMPS molecular dynamics code. It provides a brief overview of the peridynamic model of a continuum, discusses how the peridynamic model is discretized, and gives an overview of the LAMMPS implementation. A nontrivial example problem is also included.
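
The shared computational structure with molecular dynamics is easy to see in a sketch of a discretized bond-based force evaluation: each node sums pairwise bond forces over neighbors within its horizon. This is an illustration of the pattern only; nodal volumes and the neighbor-list machinery that LAMMPS provides are omitted.

```python
# Sketch of a discretized bond-based peridynamic force evaluation,
# the same pairwise pattern as short-range molecular dynamics.
import numpy as np

def peridynamic_forces(x, u, horizon, c):
    """x: reference positions (n, 3); u: displacements (n, 3);
    horizon: interaction radius; c: bond micromodulus."""
    f = np.zeros_like(x)
    n = len(x)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xi = x[j] - x[i]                       # reference bond vector
            r = np.linalg.norm(xi)
            if r > horizon:
                continue
            y = xi + (u[j] - u[i])                 # deformed bond vector
            s = (np.linalg.norm(y) - r) / r        # bond stretch
            f[i] += c * s * y / np.linalg.norm(y)  # pairwise bond force
    return f
```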

More Details