Demonstration of a Prototype Real-Time Gas Sensor Designed for Robotic Deployment
Abstract not provided.
Abstract not provided.
Combinatorial chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development, and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial-time algorithms and intractability results for several inverse problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior in both accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.
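For illustration, a minimal sketch of how a shortest-path topological index can be computed for a molecular graph is given below. The Wiener index is used as a representative example; the abstract does not state which indices OCOTILLO actually employs.

    # Illustrative sketch: computing a shortest-path topological index (here the
    # Wiener index, one common example) for a small molecular graph.  The specific
    # indices used by the OCOTILLO package are not given in the abstract; this is
    # a generic example, not the authors' implementation.
    from collections import deque

    def shortest_path_lengths(graph, source):
        """BFS distances from `source` in an unweighted graph (dict of adjacency lists)."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def wiener_index(graph):
        """Sum of shortest-path distances over all unordered vertex pairs."""
        total = 0
        for u in graph:
            total += sum(shortest_path_lengths(graph, u).values())
        return total // 2  # each pair counted twice

    # Hydrogen-suppressed graph of 2-methylbutane: C1-C2, C2-C3, C3-C4, C2-C5
    isopentane = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
    print(wiener_index(isopentane))  # -> 18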
Numerical Heat Transfer, Part B
A gridless technique for the solution of the integral form of the radiative heat flux equation for emitting and absorbing media is presented. Treatment of non-uniform absorptivity and gray boundaries is included. As part of this work, the authors have developed fast multipole techniques for extracting radiative heat flux quantities from the temperature fields of one-dimensional and three-dimensional geometries. Example calculations include those for one-dimensional radiative heat transfer through multiple flame sheets, a three-dimensional enclosure with black walls, and an axisymmetric enclosure with black walls.
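For reference, the integral flux equation being solved takes the following standard textbook form in one dimension for a gray, absorbing-emitting medium between black walls; this generic form is shown for orientation only, since the paper's formulation also handles non-uniform absorptivity and gray boundaries.

    % 1-D gray, absorbing-emitting medium between black walls at T_1 and T_2;
    % tau is optical depth, tau_L the total optical thickness, E_n the exponential
    % integral of order n, and sigma the Stefan-Boltzmann constant.
    q(\tau) = 2\sigma T_1^4 E_3(\tau) - 2\sigma T_2^4 E_3(\tau_L - \tau)
            + 2\int_0^{\tau} \sigma T^4(\tau')\, E_2(\tau - \tau')\, d\tau'
            - 2\int_{\tau}^{\tau_L} \sigma T^4(\tau')\, E_2(\tau' - \tau)\, d\tau'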
Welding Journal Research Supplement
The weld solidification and cracking behavior of sulfur-bearing free-machining austenitic stainless steel was investigated for both gas-tungsten arc (GTA) and pulsed laser beam weld processes. The GTA weld solidification was consistent with that predicted by existing solidification diagrams, and the cracking response was controlled primarily by solidification mode. The solidification behavior of the pulsed laser welds was complex and often contained regions of primary ferrite and primary austenite solidification, although in all cases the welds were found to be completely austenitic at room temperature. Electron backscattered diffraction (EBSD) pattern analysis indicated that the nature of the base metal at the time of solidification plays a primary role in initial solidification. The solid-state transformation of austenite to ferrite at the fusion zone boundary, and of ferrite to austenite on cooling, may both be massive in nature. A range of alloy compositions that exhibited good resistance to solidification cracking and was compatible with both welding processes was identified. The compositional range is bounded by laser weldability at lower Cr_eq/Ni_eq ratios and by GTA weldability at higher ratios. It was found with both processes that the limiting ratios were somewhat dependent upon sulfur content.
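For orientation, one widely used set of equivalency relations (the WRC-1992 formulation, with compositions in wt%) is reproduced below. The abstract does not specify which diagram the authors applied, so these should be read only as representative definitions of Cr_eq and Ni_eq.

    % WRC-1992 chromium and nickel equivalents (representative only; the paper's
    % diagram may use different coefficients).  Compositions in weight percent.
    \mathrm{Cr_{eq}} = \mathrm{Cr} + \mathrm{Mo} + 0.7\,\mathrm{Nb}, \qquad
    \mathrm{Ni_{eq}} = \mathrm{Ni} + 35\,\mathrm{C} + 20\,\mathrm{N} + 0.25\,\mathrm{Cu}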
Physical Review B
An unconstrained minimization algorithm for electronic structure calculations using density functional theory for systems with a gap is developed to solve for nonorthogonal Wannier-like orbitals in the spirit of E. B. Stechel, A. R. Williams, and P. J. Feibelman, Phys. Rev. B 49, 10,008 (1994). The search for the occupied subspace is a Grassmann conjugate gradient algorithm generalized from the algorithm of A. Edelman, T. A. Arias, and S. T. Smith, SIAM J. Matrix Anal. Appl. 20, 303 (1998). The gradient takes into account the nonorthogonality of a local atom-centered basis, Gaussian in their implementation. With a localization constraint on the Wannier-like orbitals, well-constructed sparse matrix multiplies lead to O(N) scaling of the computationally intensive parts of the algorithm. Using silicon carbide as a test system, the accuracy, convergence, and implementation of this algorithm as a quantitative alternative to diagonalization are investigated. Results up to 1,458 atoms on a single processor are presented.
Abstract not provided.
Abstract not provided.
As part of a project for the Defense Advanced Research Projects Agency, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing and testing the feasibility of using a cooperative team of robotic sentry vehicles to guard a perimeter, perform a surround task, and travel extended distances. This paper describes the authors' most recent activities. In particular, it highlights the development of a Differential Global Positioning System (DGPS) leapfrog capability that allows two or more vehicles to alternate sending DGPS corrections. Using this leapfrog technique, the paper shows that a group of autonomous vehicles can travel 22.68 kilometers with a root mean square positioning error of only 5 meters.
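A minimal sketch of the leapfrog idea is shown below: the stationary vehicle, whose position is currently well known, estimates the common-mode GPS error and broadcasts it as a correction to the moving vehicle, and the roles then swap. The one-dimensional error model and parameter values are illustrative only, not Sandia's implementation.

    # Minimal sketch of the DGPS "leapfrog" idea: two vehicles alternate roles,
    # with the stationary vehicle (whose position is currently well known)
    # broadcasting differential corrections to the moving one.  The error model
    # and all numbers are illustrative assumptions, not the fielded system.
    import random

    def gps_fix(true_pos, common_bias, noise=0.5):
        """Raw GPS reading = truth + common-mode error + receiver noise (metres)."""
        return true_pos + common_bias + random.gauss(0.0, noise)

    def leapfrog(waypoints, hops=6):
        pos_a, pos_b = 0.0, 0.0          # true 1-D positions of vehicles A and B
        known_a, known_b = 0.0, 0.0      # positions each vehicle believes it occupies
        for hop in range(hops):
            bias = random.uniform(-10.0, 10.0)   # slowly varying atmospheric/clock error
            if hop % 2 == 0:
                # A is stationary at a known point: correction = measured - known
                correction = gps_fix(pos_a, bias) - known_a
                pos_b = waypoints[hop]                       # B drives to the next waypoint
                known_b = gps_fix(pos_b, bias) - correction  # B applies A's correction
            else:
                correction = gps_fix(pos_b, bias) - known_b
                pos_a = waypoints[hop]
                known_a = gps_fix(pos_a, bias) - correction
        return abs(known_a - pos_a), abs(known_b - pos_b)    # residual position errors

    print(leapfrog(waypoints=[50.0 * (i + 1) for i in range(6)]))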
IEEE Transactions on Evolutionary Computation
The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs: it applies to a broader class of EPSAs, and it applies to problems that are nonsmooth, have unbounded objective functions, and are linearly constrained. Further, the authors describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.
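A rough sketch of the kind of algorithm analyzed is given below: a pattern of mutation offsets is scaled by a step size that is contracted whenever no sampled offspring improves the incumbent. The specific contraction rule and sampling scheme here are generic illustrations, not the algorithms treated in the paper.

    # Generic evolutionary-pattern-search-style step-size update on a simple
    # unconstrained problem.  Parameter choices are illustrative assumptions.
    import random

    def epsa(f, x0, delta=1.0, tol=1e-4, max_iter=10_000):
        x, n = list(x0), len(x0)
        fx = f(x)
        # mutation pattern: +/- delta along each coordinate direction
        pattern = [[delta if j == i else 0.0 for j in range(n)] for i in range(n)]
        pattern += [[-s for s in step] for step in pattern]
        for _ in range(max_iter):
            # "population": a random subset of pattern offsets at the current step size
            trials = random.sample(pattern, k=min(3, len(pattern)))
            best = min(([xi + si for xi, si in zip(x, step)] for step in trials), key=f)
            if f(best) < fx:
                x, fx = best, f(best)          # success: keep the current step size
            else:
                delta *= 0.5                   # failure: contract the mutation step
                pattern = [[0.5 * s for s in step] for step in pattern]
            if delta < tol:
                break
        return x, fx

    print(epsa(lambda v: sum(t * t for t in v), x0=[3.0, -2.0]))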
Journal of Universal Computer Science
Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and which are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.
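As background, the HP lattice model scores a conformation (a self-avoiding walk on the lattice) by counting hydrophobic-hydrophobic contacts between residues that are not adjacent in the chain. The sketch below evaluates this energy on the two-dimensional square lattice; it illustrates the model only and is not the master approximation algorithm of the paper.

    # HP lattice model energy on the 2-D square lattice: -1 per H-H contact
    # between residues that are not chain neighbours.  Background illustration only.
    def hp_energy(sequence, coords):
        """sequence: e.g. "HPHPH"; coords: list of (x, y) lattice sites, one per residue."""
        assert len(sequence) == len(coords) and len(set(coords)) == len(coords)
        energy = 0
        for i in range(len(sequence)):
            for j in range(i + 2, len(sequence)):        # skip chain neighbours
                if sequence[i] == sequence[j] == "H":
                    dx = abs(coords[i][0] - coords[j][0])
                    dy = abs(coords[i][1] - coords[j][1])
                    if dx + dy == 1:                     # adjacent lattice sites
                        energy -= 1
        return energy

    # A U-shaped fold of HPPH: the two H residues end up adjacent, giving one contact.
    print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> -1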
A procedure for code verification by the Method of Manufactured Solutions (MMS) is presented. Although the procedure requires a certain amount of creativity and skill, we show that MMS can be applied to a variety of engineering codes which numerically solve partial differential equations. This is illustrated by detailed examples from computational fluid dynamics. The strength of the MMS procedure is that it can identify any coding mistake that affects the order of accuracy of the numerical method. A set of examples which use a blind-test protocol demonstrates the kinds of coding mistakes that can (and cannot) be exposed via the MMS code verification procedure. The principal advantage of the MMS procedure over traditional methods of code verification is that code capabilities are tested in full generality. The procedure thus results in a high degree of confidence that all coding mistakes which prevent the equations from being solved correctly have been identified.
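The core of the MMS procedure can be summarized in a few lines: a solution is manufactured a priori, the governing equations are applied to it symbolically to produce a compensating source term, and the code under test is run with that source so its discrete solution can be compared against the manufactured one. The sketch below does this for a one-dimensional heat equation with an arbitrarily chosen manufactured solution; it is illustrative and not taken from the paper's examples.

    # Sketch of the Method of Manufactured Solutions for a 1-D heat equation,
    #   u_t - k * u_xx = S(x, t).
    # The manufactured solution below is an arbitrary illustrative choice.
    import sympy as sp

    x, t, k = sp.symbols("x t k")

    # 1. Manufacture a smooth, non-trivial solution.
    u_m = sp.sin(sp.pi * x) * sp.exp(-t) + sp.Rational(1, 2) * x**2

    # 2. Push it through the PDE operator to obtain the required source term.
    source = sp.diff(u_m, t) - k * sp.diff(u_m, x, 2)
    print(sp.simplify(source))

    # 3. The boundary and initial conditions handed to the code are u_m evaluated
    #    on the boundary and at t = 0; grid refinement against u_m then yields the
    #    observed order of accuracy.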
Applied Mathematical Modeling
Effective use of a parallel computer requires that a calculation be carefully divided among the processors. This load-balancing problem appears in many guises and has been an active area of research for the past decade or more. Although great progress has been made, and useful software tools developed, a number of challenges remain. It is the conviction of the author that these challenges will be easier to address if programmers first come to terms with some significant shortcomings in their current perspectives. This paper tries to identify several areas in which the prevailing point of view is either mistaken or insufficient. The goal is to motivate new ideas and directions for this important field.
Many parallel applications require periodic redistribution of workloads and associated data. In a distributed-memory computer, this redistribution can be difficult if limited memory is available for receiving messages. The authors propose a model for optimizing the exchange of messages under such circumstances, which they call the minimum phase remapping problem. They show that the problem is NP-complete and then analyze several methodologies for addressing it. First, they show how the problem can be phrased as an instance of multicommodity flow. Next, they study a continuous approximation to the problem and show that it has a solution requiring at most two more phases than the optimal discrete solution, although the question of how to consistently obtain a good discrete solution from the continuous problem remains open. Finally, they devise a simple and practical approximation algorithm for the problem with a bound of 1.5 times the optimal number of phases.
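To make the setting concrete, the toy heuristic below packs messages into phases so that no processor receives more data in a phase than its currently free memory, with memory freed by sends becoming available in the next phase. It is only a naive illustration of the problem; it is neither the multicommodity-flow formulation nor the 1.5-approximation algorithm of the paper.

    # Toy greedy heuristic for the phased-remapping setting: illustrative only.
    def phase_schedule(messages, free_mem):
        """messages: list of (src, dst, size); free_mem: dict proc -> spare bytes."""
        pending, phases = list(messages), []
        while pending:
            recv_used = {p: 0 for p in free_mem}
            this_phase, deferred = [], []
            for src, dst, size in pending:
                if recv_used[dst] + size <= free_mem[dst]:
                    recv_used[dst] += size
                    this_phase.append((src, dst, size))
                else:
                    deferred.append((src, dst, size))
            if not this_phase:
                raise RuntimeError("stuck: no message fits in any receiver's memory")
            # after the phase, senders reclaim the memory of delivered data and
            # receivers permanently hold what they received
            for src, dst, size in this_phase:
                free_mem[src] += size
                free_mem[dst] -= size
            phases.append(this_phase)
            pending = deferred
        return phases

    msgs = [(0, 1, 40), (1, 2, 30), (2, 0, 50)]
    print(phase_schedule(msgs, free_mem={0: 10, 1: 40, 2: 35}))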
SIAM Journal on Scientific Computing
Most algorithms used in preconditioned iterative methods are generally applicable to complex-valued linear systems, with real-valued linear systems simply being a special case. However, most iterative solver packages available today focus exclusively on real-valued systems or deal with complex-valued systems as an afterthought. One obvious approach to addressing this problem is to recast the complex problem into one of several equivalent real forms and then use a real-valued solver to solve the related system. However, well-known theoretical results showing unfavorable spectral properties for the equivalent real forms have diminished enthusiasm for this approach. At the same time, experience has shown that there are situations where using an equivalent real form can be very effective. In this paper, the authors explore this approach, giving both theoretical and experimental evidence that an equivalent real form can be useful for a number of practical situations. Furthermore, they show that by making good use of some of the advanced features of modern solver packages, they can easily generate equivalent real form preconditioners that are computationally efficient and mathematically identical to their complex counterparts. Using these techniques, they are able to solve very ill-conditioned complex-valued linear systems for a variety of large-scale applications. More importantly, they shed further light on the effectiveness of equivalent real forms and more clearly delineate how and when they should be used.
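The basic construction is easy to state: the complex system (A + iB)(x + iy) = b + ic is rewritten as a 2n-by-2n real block system and handed to a real-valued solver. The sketch below builds one common equivalent real form and checks it against a direct complex solve; the paper's contribution concerns preconditioned iterative solution of large sparse systems, which is not reproduced here.

    # Sketch of the equivalent-real-form construction; dense solves are used
    # purely to check the algebra, not to represent the paper's solver strategy.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    b, c = rng.standard_normal(n), rng.standard_normal(n)

    # One common equivalent real form:
    #   [ A  -B ] [x]   [b]
    #   [ B   A ] [y] = [c]
    K = np.block([[A, -B], [B, A]])
    xy = np.linalg.solve(K, np.concatenate([b, c]))
    x, y = xy[:n], xy[n:]

    z = np.linalg.solve(A + 1j * B, b + 1j * c)   # direct complex solve for comparison
    print(np.allclose(x + 1j * y, z))             # -> True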
Laser deposits fabricated from two different compositions of 304L stainless steel powder were characterized to determine the nature of the solidification and solid-state transformations. One goal of this work was to determine to what extent a novel microstructure consisting of single-phase austenite could be achieved with the thermal conditions of the LENS (Laser Engineered Net Shaping) process. Although ferrite-free deposits were not obtained, structures with very low ferrite content were achieved. It appeared that, with slight changes in alloy composition, this goal could be met via two different solidification and transformation mechanisms.
Processes that involve particle-laden fluids are common in geomechanics and especially in the petroleum industry. Understanding the physics of these processes and the ability to predict their behavior requires the development of coupled fluid-flow and particle-motion computational methods. This paper outlines an accurate and robust coupled computational scheme using the lattice-Boltzmann method for fluid flow and the discrete-element method for solid particle motion. Results from several two-dimensional validation simulations are presented. Simulations reported include the sedimentation of an ellipse, a disc and two interacting discs in a closed column of fluid. The recently discovered phenomenon of drafting, kissing, and tumbling is fully reproduced in the two-disc simulation.
This report summarizes materials issues associated with advanced micromachines development at Sandia. The intent of this report is to provide a perspective on the scope of the issues and to suggest future technical directions, with a focus on computational materials science. Materials issues in surface micromachining (SMM), LIGA (Lithographie, Galvanoformung, Abformung: lithography, electrodeposition, and molding), and meso-machining technologies were identified. Each individual issue was assessed in four categories: degree of basic understanding; amount of existing experimental data; capability of existing models; and, based on the perspective of component developers, the importance of the issue to be resolved. Three broad requirements for micromachines emerged from this process: (1) tribological behavior, including stiction, friction, wear, and the use of surface treatments to control these; (2) mechanical behavior at the microscale, including elasticity, plasticity, and the effect of microstructural features on mechanical strength; and (3) degradation of tribological and mechanical properties in normal (including aging), abnormal, and hostile environments. Resolving all the identified critical issues requires a significant cooperative and complementary effort between computational and experimental programs. The breadth of this work is greater than any single program is likely to support. This report should serve as a guide to planning micromachines development at Sandia.
The authors describe a naturalistic behavioral model for the simulation of small unit combat. This model, Klein's recognition-primed decision making (RPD) model, is driven by situational awareness rather than by a rational process of selecting from a set of action options. They argue that simulated combatants modeled with RPD will have more flexible and realistic responses to a broad range of small-scale combat scenarios. Furthermore, they note that the predictability of a simulation using an RPD framework can easily be controlled to provide multiple evaluations of a given combat scenario. Finally, they discuss computational issues for building an RPD-based behavior engine for fully automated combatants in small conflict scenarios; these issues are being investigated within Sandia's Next Generation Site Security project.
This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.
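A minimal sketch of progressive response-surface construction is given below: a polynomial regression surface is fit to an initial sample set, further samples are added, and the change between successive surfaces serves as an empirical convergence indicator. The Progressive Lattice Sampling point placement itself is not reproduced; simple grid levels stand in for the PLS levels, and the test function is an arbitrary stand-in, not the one used in the paper.

    # Progressive polynomial-regression response surfaces on a stand-in test
    # function; sample placement here is a plain grid, not PLS.
    import numpy as np

    def test_function(x, y):                      # stand-in analytic test function
        return np.sin(3 * x) * np.cos(2 * y) + 0.5 * x * y

    def quad_features(x, y):                      # full quadratic basis in two variables
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_surface(pts):
        x, y = pts[:, 0], pts[:, 1]
        coeff, *_ = np.linalg.lstsq(quad_features(x, y), test_function(x, y), rcond=None)
        return coeff

    def grid(level):                              # successively denser levels on [0,1]^2
        g = np.linspace(0.0, 1.0, level)
        return np.array([(a, b) for a in g for b in g])

    probe = grid(25)
    prev = quad_features(probe[:, 0], probe[:, 1]) @ fit_surface(grid(3))
    for level in (4, 5, 6):
        cur = quad_features(probe[:, 0], probe[:, 1]) @ fit_surface(grid(level))
        print(level, np.max(np.abs(cur - prev)))  # change in surface = convergence indicator
        prev = cur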
This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries either for run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing, by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), the libraries can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons per second using OpenGL with lighting, Gouraud shading, and individually specified triangles (not triangle strips).
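The heart of a sort-last renderer is depth compositing: each node renders its share of the geometry into a full-resolution color-plus-depth image, and the images are merged with a per-pixel depth test. A two-image composite is sketched below; the libraries in the case study distribute this step across the cluster, which is not reproduced here.

    # Per-pixel depth compositing of two rendered images (the sort-last merge step).
    import numpy as np

    def z_composite(color_a, depth_a, color_b, depth_b):
        """Keep, at every pixel, the fragment closest to the viewer (smallest depth)."""
        nearer_a = (depth_a <= depth_b)[..., np.newaxis]      # broadcast over RGB
        color = np.where(nearer_a, color_a, color_b)
        depth = np.minimum(depth_a, depth_b)
        return color, depth

    h, w = 480, 640
    rng = np.random.default_rng(1)
    color_a, color_b = rng.random((h, w, 3)), rng.random((h, w, 3))
    depth_a, depth_b = rng.random((h, w)), rng.random((h, w))
    color, depth = z_composite(color_a, depth_a, color_b, depth_b)
    print(color.shape, depth.shape)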
SIAM Journal on Scientific Computing
Quality metrics for structured and unstructured mesh generation are placed within an algebraic framework to form a mathematical theory of mesh quality metrics. The theory, based on the Jacobian and related matrices, provides a means of constructing, classifying, and evaluating mesh quality metrics. The Jacobian matrix is factored into geometrically meaningful parts. A nodally invariant Jacobian matrix can be defined for simplicial elements using a weight matrix derived from the Jacobian matrix of an ideal reference element. Scale- and orientation-invariant algebraic mesh quality metrics are defined. The singular value decomposition is used to study relationships between metrics. Equivalence of the element condition number and mean ratio metrics is proved. Condition number is shown to measure the distance of an element from the set of degenerate elements. Algebraic measures for skew, length ratio, shape, volume, and orientation are defined abstractly, with specific examples given. Combined metrics for shape and volume and for shape, volume, and orientation are defined algebraically, and examples of such metrics are given. Algebraic mesh quality metrics are extended to non-simplicial elements. A series of numerical tests verifies the theoretical properties of the metrics defined.
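As an illustration of the algebraic framework, the sketch below evaluates a condition-number shape metric for a triangle by measuring the element Jacobian against the Jacobian of an ideal equilateral reference element through a weight matrix. The exact definitions and normalizations in the paper may differ; this is a representative example only.

    # Condition-number shape metric for a triangle, built from the element
    # Jacobian and an ideal-element weight matrix; representative example only.
    import numpy as np

    W = np.array([[1.0, 0.5],
                  [0.0, np.sqrt(3.0) / 2.0]])      # Jacobian of the ideal (equilateral) element

    def condition_number_quality(p0, p1, p2):
        A = np.column_stack([np.subtract(p1, p0), np.subtract(p2, p0)])  # element Jacobian
        T = A @ np.linalg.inv(W)                                         # weighted Jacobian
        kappa = np.linalg.norm(T, "fro") * np.linalg.norm(np.linalg.inv(T), "fro")
        return 2.0 / kappa       # 1.0 for the ideal shape, -> 0 as the element degenerates

    print(condition_number_quality((0, 0), (1, 0), (0.5, np.sqrt(3) / 2)))  # -> 1.0 (ideal)
    print(condition_number_quality((0, 0), (1, 0), (0.5, 0.05)))            # thin sliver, ~0.12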
This paper analyzes the scalability limitations of networking technologies based on the Virtual Interface Architecture (VIA) in supporting the runtime environment needed for an implementation of the Message Passing Interface. The authors present an overview of the important characteristics of VIA and of the runtime system being developed as part of the Computational Plant (Cplant) project at Sandia National Laboratories. They discuss the characteristics of VIA that prevent implementations based on this architecture from meeting the scalability and performance requirements of Cplant.
As computational needs for structural finite element analysis increase, a robust implicit structural dynamics code is needed that can handle millions of degrees of freedom in the model and produce results with quick turnaround time. A parallel code is needed to avoid the limitations of serial platforms. Salinas is an implicit structural dynamics code specifically designed for massively parallel platforms. It computes the structural response of very large, complex structures and provides solutions faster than any existing serial machine. This paper gives the current status of Salinas and uses demonstration problems to show its performance.
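Salinas' internal algorithms are not described in the abstract; as background on what an implicit structural dynamics step involves, a generic Newmark-beta time step for M a + C v + K u = f is sketched below. The element library, solvers, and parallel implementation that distinguish Salinas are not represented here.

    # Generic implicit Newmark-beta step for M*a + C*v + K*u = f (average
    # acceleration by default).  Background illustration, not Salinas code.
    import numpy as np

    def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
        a0, a1 = 1.0 / (beta * dt * dt), gamma / (beta * dt)
        K_eff = K + a1 * C + a0 * M
        rhs = (f_next
               + M @ (a0 * u + (1.0 / (beta * dt)) * v + (0.5 / beta - 1.0) * a)
               + C @ (a1 * u + (gamma / beta - 1.0) * v
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        u_next = np.linalg.solve(K_eff, rhs)      # a parallel code uses a distributed solver here
        a_next = a0 * (u_next - u) - (1.0 / (beta * dt)) * v - (0.5 / beta - 1.0) * a
        v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
        return u_next, v_next, a_next

    # Two-DOF example: undamped spring-mass chain with a constant load on the tip mass.
    M = np.diag([1.0, 1.0]); C = np.zeros((2, 2))
    K = np.array([[2.0, -1.0], [-1.0, 1.0]])
    u = v = a = np.zeros(2)
    for _ in range(100):
        u, v, a = newmark_step(M, C, K, np.array([0.0, 1.0]), u, v, a, dt=0.05)
    print(u)     # oscillates about the static solution K^{-1} f = [1, 2]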
Abstract not provided.
Abstract not provided.