Publications

Results 9701–9750 of 9,998


Inferring genetic networks from microarray data

Davidson, George S.; May, Elebeoba E.; Faulon, Jean-Loup M.

In theory, it should be possible to infer realistic genetic networks from time series microarray data. In practice, however, network discovery has proved problematic. The three major challenges are: (1) inferring the network; (2) estimating the stability of the inferred network; and (3) making the network visually accessible to the user. Here we describe a method, tested on publicly available time series microarray data, which addresses these concerns. The inference of genetic networks from genome-wide experimental data is an important biological problem which has received much attention. Approaches to this problem have typically included application of clustering algorithms [6]; the use of Boolean networks [12, 1, 10]; the use of Bayesian networks [8, 11]; and the use of continuous models [21, 14, 19]. Overviews of the problem and general approaches to network inference can be found in [4, 3]. Our approach to network inference is similar to earlier methods in that we use both clustering and Boolean network inference. However, we have attempted to extend the process to better serve the end-user, the biologist. In particular, we have incorporated a system to assess the reliability of our network, and we have developed tools which allow interactive visualization of the proposed network.
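
For readers unfamiliar with Boolean-network inference, the sketch below shows the idea in miniature: exhaustively test simple Boolean rules against binarized time-series data and keep those consistent with every observed transition. This is a toy, not the paper's method; the three-gene series and the one-input rule encoding are invented for illustration.

    #include <cstdio>

    int main() {
        // Binarized expression of 3 genes over 5 time steps (hypothetical data).
        const int T = 5, G = 3;
        int series[T][G] = {{0,0,1},{1,0,1},{1,1,0},{0,1,0},{0,0,1}};
        // For each target gene, test every candidate regulator against each of
        // the four one-input Boolean rules (rule encoded as a 2-bit truth table).
        for (int tgt = 0; tgt < G; ++tgt)
            for (int reg = 0; reg < G; ++reg)
                for (int rule = 0; rule < 4; ++rule) {
                    bool consistent = true;
                    for (int t = 0; t + 1 < T; ++t) {
                        int in = series[t][reg];
                        int predicted = (rule >> in) & 1;  // rule's output for this input
                        if (predicted != series[t + 1][tgt]) { consistent = false; break; }
                    }
                    if (consistent)
                        printf("gene %d <- rule %d applied to gene %d fits all transitions\n",
                               tgt, rule, reg);
                }
        return 0;
    }

Real data require the clustering, stability-estimation, and visualization machinery the abstract describes; the exhaustive rule search here merely illustrates the consistency test at the core of Boolean inference.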

More Details

Xyce Parallel Electronic Simulator : users' guide, version 2.0

Keiter, Eric R.; Hutchinson, Scott A.; Hoekstra, Robert J.; Russo, Thomas V.; Rankin, Eric R.; Pawlowski, Roger P.; Wix, Steven D.; Fixel, Deborah A.

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator capable of simulating electrical circuits at a variety of abstraction levels. Primarily, Xyce has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas:

- Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers.
- Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques.
- Device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices.
- A client-server or multi-tiered operating model wherein the numerical kernel can operate independently of the graphical user interface (GUI).
- Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future.

Xyce is a parallel code in the most general sense of the phrase - a message-passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory, and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. One feature required by designers is the ability to add device models, many specific to the needs of Sandia, to the code. To this end, the device package in the Xyce Parallel Electronic Simulator is designed to support a variety of device model inputs. These input formats include standard analytical models, behavioral models, look-up tables, and mesh-level PDE device models. Combined with this flexible interface is an architectural design that greatly simplifies the addition of circuit models. One of the most important features of Xyce is that it provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia now has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods) research and development can be performed. Ultimately, these capabilities are migrated to end users.

More Details

Xyce Parallel Electronic Simulator : reference guide, version 2.0

Keiter, Eric R.; Hutchinson, Scott A.; Hoekstra, Robert J.; Russo, Thomas V.; Rankin, Eric R.; Pawlowski, Roger P.; Fixel, Deborah A.; Wix, Steven D.

This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

More Details

Feature length-scale modeling of LPCVD & PECVD MEMS fabrication processes

Proposed for publication in Microsystem Technologies.

Plimpton, Steven J.; Schmidt, Rodney C.

The surface micromachining processes used to manufacture MEMS devices and integrated circuits transpire at such small length scales, and are sufficiently complex, that a theoretical analysis of them is particularly inviting. Under development at Sandia National Laboratories (SNL) is Chemically Induced Surface Evolution with Level Sets (ChISELS), a level-set-based feature-scale modeler of such processes. The theoretical models used, a description of the software, and some example results are presented here. The focus to date has been on low-pressure and plasma-enhanced chemical vapor deposition (LPCVD and PECVD) processes; both are employed in SNL's SUMMiT V technology. Examples of step coverage of SiO2 into a trench by each of the LPCVD and PECVD processes are presented.

More Details

A new algorithm for computing multivariate Gauss-like quadrature points

Taylor, Mark A.

The diagonal-mass-matrix spectral element method has proven very successful in geophysical applications dominated by wave propagation. For these problems, the ability to run fully explicit time stepping schemes at relatively high order makes the method more competitive than finite element methods which require the inversion of a mass matrix. The method relies on Gauss-Lobatto points to be successful, since the grid points used are required to produce well-conditioned polynomial interpolants and to be high-quality 'Gauss-like' quadrature points that exactly integrate a space of polynomials of higher dimension than the number of quadrature points. These two requirements have traditionally limited the diagonal-mass-matrix spectral element method to square or quadrilateral elements, where tensor products of Gauss-Lobatto points can be used. In non-tensor-product domains such as the triangle, both optimal interpolation points and Gauss-like quadrature points are difficult to construct and there are few analytic results. To extend the diagonal-mass-matrix spectral element method to (for example) triangular elements, one must find appropriate points numerically. One successful approach has been to perform numerical searches for high-quality interpolation points, as measured by the Lebesgue constant (such as minimum-energy electrostatic points and Fekete points). However, these points typically do not have any Gauss-like quadrature properties. In this work, we describe a new numerical method to look for Gauss-like quadrature points in the triangle, based on a previous algorithm for computing Fekete points. Performing a brute-force search for such points is extremely difficult. A common strategy to increase the numerical efficiency of these searches is to reduce the number of unknowns by imposing symmetry conditions on the quadrature points. Motivated by spectral element methods, we propose a different way to reduce the number of unknowns: we look for quadrature formulae that have the same number of points as the number of basis functions used in the spectral element method's transform algorithm. This is an important requirement if they are to be used in a diagonal-mass-matrix spectral element method. This restriction allows for the construction of cardinal functions (Lagrange interpolating polynomials). The ability to construct cardinal functions leads to a remarkable expression relating the variation in the quadrature weights to the variation in the quadrature points. This relation in turn leads to an analytical expression for the gradient of the quadrature error with respect to the quadrature points. Thus the quadrature weights have been completely removed from the optimization problem, and we can implement an exact steepest descent algorithm for driving the quadrature error to zero. Results from the algorithm are presented for the triangle and the sphere.
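
A drastically simplified one-dimensional analogue of this search is sketched below: steepest descent drives the moment-matching quadrature error of a two-point rule on [-1, 1] to zero, recovering the Gauss points. Unlike the paper's algorithm, which eliminates the weights analytically through cardinal functions and descends on the points alone, this toy keeps the weights as unknowns; the starting values and step size are arbitrary.

    #include <cmath>
    #include <cstdio>

    int main() {
        // Two points and weights on [-1, 1]; target: exactness for degree <= 3.
        double x[2] = {-0.9, 0.4}, w[2] = {1.2, 0.8};
        const double lr = 0.01;
        for (int it = 0; it < 200000; ++it) {
            double gx[2] = {0, 0}, gw[2] = {0, 0};
            for (int k = 0; k <= 3; ++k) {
                double moment = (k % 2 == 0) ? 2.0 / (k + 1) : 0.0;  // integral of x^k on [-1,1]
                double err = w[0] * pow(x[0], k) + w[1] * pow(x[1], k) - moment;
                for (int i = 0; i < 2; ++i) {
                    gw[i] += 2.0 * err * pow(x[i], k);
                    if (k > 0) gx[i] += 2.0 * err * w[i] * k * pow(x[i], k - 1);
                }
            }
            for (int i = 0; i < 2; ++i) { x[i] -= lr * gx[i]; w[i] -= lr * gw[i]; }
        }
        // Converges to the 2-point Gauss rule: x = -/+ 1/sqrt(3), w = 1, 1.
        printf("points: %.6f %.6f  weights: %.6f %.6f\n", x[0], x[1], w[0], w[1]);
        return 0;
    }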

More Details

The two-level Newton method and its application to electronic simulation

Keiter, Eric R.; Hutchinson, Scott A.; Hoekstra, Robert J.; Russo, Thomas V.; Rankin, Eric R.

Coupling between transient simulation codes of different fidelity can often be performed at the nonlinear solver level, if the time scales of the two codes are similar. A good example is electrical mixed-mode simulation, in which an analog circuit simulator is coupled to a PDE-based semiconductor device simulator. Semiconductor simulation problems, such as single-event upset (SEU), often require the fidelity of a mesh-based device simulator but are only meaningful when dynamically coupled with an external circuit. For such problems a mixed-level simulator is desirable, but the two types of simulation generally have different (somewhat conflicting) numerical requirements. To address these considerations, we have investigated variations of the two-level Newton algorithm, which preserves tight coupling between the circuit and the PDE device, while optimizing the numerics for both. The research was done within Xyce, a massively parallel electronic simulator under development at Sandia National Laboratories.
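
A scalar caricature of the two-level idea follows: an outer Newton loop on a hypothetical 'circuit' unknown, with an inner Newton solve of a made-up 'device' equation at every outer iterate. None of this is Xyce code; all equations are stand-ins.

    #include <cmath>
    #include <cstdio>

    // Inner problem: solve the "device" equation g(u;V) = u^3 + u - V = 0 for u.
    double solve_device(double V) {
        double u = 0.0;
        for (int i = 0; i < 50; ++i) {
            double g  = u * u * u + u - V;
            double dg = 3 * u * u + 1;
            u -= g / dg;
            if (fabs(g) < 1e-14) break;
        }
        return u;
    }

    int main() {
        // Outer "circuit" equation: (Vs - V)/R - I(V) = 0, where the device
        // current I(V) = u(V) comes from the converged inner solve.
        double Vs = 5.0, R = 2.0, V = 1.0;
        for (int it = 0; it < 30; ++it) {
            double f = (Vs - V) / R - solve_device(V);
            // Outer Jacobian df/dV by finite difference through the inner solve.
            double h  = 1e-6;
            double df = ((Vs - (V + h)) / R - solve_device(V + h) - f) / h;
            V -= f / df;
            if (fabs(f) < 1e-12) break;
        }
        printf("V = %.6f, I = %.6f\n", V, solve_device(V));
        return 0;
    }

The point of the two-level structure is visible even here: the inner solve is fully converged before each outer step, so the circuit and device numerics can each be tuned independently.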

More Details

Teuchos::RefCountPtr beginner's guide : an introduction to the Trilinos smart reference-counted pointer class for (almost) automatic dynamic memory management in C++

Bartlett, Roscoe B.

Dynamic memory management in C++ is one of the most common areas of difficulty and errors for amateur and expert C++ developers alike. The improper use of operator new and operator delete is arguably the most common cause of incorrect program behavior and segmentation faults in C++ programs. Here we introduce a templated concrete C++ class Teuchos::RefCountPtr<>, which is part of the Trilinos tools package Teuchos, that combines the concepts of smart pointers and reference counting to build a low-overhead but effective tool for simplifying dynamic memory management in C++. We discuss why the use of raw pointers for memory management, managed through explicit calls to operator new and operator delete, is so difficult to accomplish without making mistakes, and how programs that use raw pointers for memory management can easily be modified to use RefCountPtr<>. In addition, explicit calls to operator delete are fragile and result in memory leaks in the presence of C++ exceptions. In its most basic usage, RefCountPtr<> automatically determines when operator delete should be called to free an object allocated with operator new, and is not fragile in the presence of exceptions. The class supports more sophisticated use cases as well. This document describes just the most basic usage of RefCountPtr<> to allow developers to get started using it right away. More detailed information on the design and advanced features of RefCountPtr<> is provided by the companion document 'Teuchos::RefCountPtr : The Trilinos Smart Reference-Counted Pointer Class for (Almost) Automatic Dynamic Memory Management in C++'.
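
A minimal sketch of the basic usage pattern described above; Widget is a hypothetical class, while Teuchos::RefCountPtr<> and Teuchos::rcp() are the documented Teuchos names.

    #include "Teuchos_RefCountPtr.hpp"

    class Widget {
    public:
        void compute() { /* may throw */ }
    };

    void raw_pointer_version() {
        Widget* w = new Widget();
        w->compute();
        // If compute() throws, this delete never runs and the object leaks.
        delete w;
    }

    void ref_count_ptr_version() {
        // rcp() wraps the newly allocated object; operator delete is called
        // automatically when the last RefCountPtr referencing it goes away,
        // even if an exception propagates out of compute().
        Teuchos::RefCountPtr<Widget> w = Teuchos::rcp(new Widget());
        w->compute();

        Teuchos::RefCountPtr<Widget> w2 = w;  // reference count becomes 2
    }  // count drops to 0 here and the Widget is deleted exactly once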

More Details

Historical precedence and technical requirements of biological weapons use : a threat assessment

Salerno, Reynolds M.; Barnett, Natalie B.; Gaudioso, Jennifer M.; Estes, Daniel P.

The threat from biological weapons is assessed through both a comparative historical analysis of the patterns of biological weapons use and an assessment of the technological hurdles to proliferation and use that must be overcome. The history of biological weapons is studied to learn how agents have been acquired and what types of states and substate actors have used agents. Substate actors have generally been more willing than states to use pathogens and toxins and they have focused on those agents that are more readily available. There has been an increasing trend of bioterrorism incidents over the past century, but states and substate actors have struggled with one or more of the necessary technological steps. These steps include acquisition of a suitable agent, production of an appropriate quantity and form, and effective deployment. The technological hurdles associated with the steps present a real barrier to producing a high consequence event. However, the ever increasing technological sophistication of society continually lowers the barriers, resulting in a low but increasing probability of a high consequence bioterrorism event.

More Details

Amesos 1.0 reference guide

Sala, Marzio S.

This document describes the main functionalities of the Amesos package, version 1.0. Amesos, available as part of Trilinos 4.0, provides an object-oriented interface to several serial and parallel sparse direct solver libraries for the solution of linear systems of equations A X = B, where A is a real, sparse, distributed matrix defined as an Epetra_RowMatrix object, and X and B are defined as Epetra_MultiVector objects. Amesos provides a common look-and-feel to several direct solvers, insulating the user from each package's details, such as matrix and vector formats and data distribution.
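
A sketch of that common look-and-feel, assuming an Epetra_LinearProblem holding A, X, and B has already been assembled. The solver name string follows the Amesos factory convention; which third-party libraries are actually available depends on the build.

    #include "Amesos.h"
    #include "Epetra_LinearProblem.h"

    // Solve A X = B with a direct solver chosen by name at run time.
    void solve_direct(Epetra_LinearProblem& problem) {
        Amesos factory;
        // "Amesos_Klu" is the basic serial solver; other names select other
        // libraries, with no change to the calls below.
        Amesos_BaseSolver* solver = factory.Create("Amesos_Klu", problem);
        solver->SymbolicFactorization();  // analyze the sparsity pattern
        solver->NumericFactorization();   // factor the matrix values
        solver->Solve();                  // back-solve for X
        delete solver;
    }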

More Details

Trilinos 4.0 tutorial

Sala, Marzio S.; Heroux, Michael A.; Day, David M.

The Trilinos Project is an effort to facilitate the design, development, integration, and ongoing support of mathematical software libraries. The goal of the Trilinos Project is to develop parallel solver algorithms and libraries within an object-oriented software framework for the solution of large-scale, complex multiphysics engineering and scientific applications. The emphasis is on developing robust, scalable algorithms in a software framework, using abstract interfaces for flexible interoperability of components while providing a full-featured set of concrete classes that implement all the abstract interfaces. This document introduces the use of Trilinos, version 4.0. The presented material includes, among other things, the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IFPACK, multilevel and domain decomposition preconditioners with ML, the direct solution of linear systems with Amesos, and the iterative solution of nonlinear systems with NOX. The tutorial is a self-contained introduction, intended to help computational scientists effectively apply the appropriate Trilinos package to their applications. Basic examples are presented that are suitable to be imitated. This document is a companion to the Trilinos User's Guide [20] and Trilinos Development Guides [21,22]. Please note that the documentation included in each of the Trilinos packages is of fundamental importance.
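
In the tutorial's spirit, here is a small self-contained sketch of the Epetra/AztecOO combination: assemble a one-dimensional Laplacian and solve it with conjugate gradients. It is modeled on the style of such examples, not excerpted from the document.

    #include "Epetra_SerialComm.h"
    #include "Epetra_Map.h"
    #include "Epetra_CrsMatrix.h"
    #include "Epetra_Vector.h"
    #include "Epetra_LinearProblem.h"
    #include "AztecOO.h"

    int main() {
        Epetra_SerialComm comm;
        int n = 100;
        Epetra_Map map(n, 0, comm);            // n rows, base index 0
        Epetra_CrsMatrix A(Copy, map, 3);      // at most 3 entries per row
        double vals[3] = {-1.0, 2.0, -1.0};
        for (int i = 0; i < n; ++i) {          // assemble the 1-D Laplacian
            int cols[3] = {i - 1, i, i + 1};
            if (i == 0)          A.InsertGlobalValues(i, 2, &vals[1], &cols[1]);
            else if (i == n - 1) A.InsertGlobalValues(i, 2, &vals[0], &cols[0]);
            else                 A.InsertGlobalValues(i, 3, vals, cols);
        }
        A.FillComplete();
        Epetra_Vector x(map), b(map);
        b.PutScalar(1.0);
        Epetra_LinearProblem problem(&A, &x, &b);
        AztecOO solver(problem);
        solver.SetAztecOption(AZ_solver, AZ_cg);  // CG: the matrix is SPD
        solver.Iterate(500, 1.0e-8);              // max iterations, tolerance
        return 0;
    }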

More Details

ML 3.0 smoothed aggregation user's guide

Sala, Marzio S.; Hu, Jonathan J.; Tuminaro, Raymond S.

ML is a multigrid preconditioning package intended to solve linear systems of equations Ax = b, where A is a user-supplied n x n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. ML should be used on large sparse linear systems arising from partial differential equation (PDE) discretizations. While technically any linear system can be considered, ML should be used on linear systems that correspond to problems that work well with multigrid methods (e.g., elliptic PDEs). ML can be used as a stand-alone package or to generate preconditioners for a traditional iterative solver package (e.g., Krylov methods). We have supplied support for working with the AZTEC 2.1 and AZTECOO iterative packages [15]. However, other solvers can be used by supplying a few functions. This document describes one specific algebraic multigrid approach: smoothed aggregation. This approach is used within several specialized multigrid methods: one for the eddy-current formulation of Maxwell's equations, and a multilevel and domain decomposition method for symmetric and non-symmetric systems of equations (like elliptic equations, or compressible and incompressible fluid dynamics problems). Other methods exist within ML but are not described in this document. Examples are given illustrating the problem definition and exercising multigrid options.
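
A sketch of attaching the smoothed aggregation preconditioner to an AztecOO solver, assuming an Epetra_RowMatrix A is already assembled. The parameter names follow ML's documented 'SA' defaults; the specific option values here are illustrative.

    #include "ml_MultiLevelPreconditioner.h"
    #include "Teuchos_ParameterList.hpp"
    #include "Epetra_RowMatrix.h"
    #include "AztecOO.h"

    void set_ml_preconditioner(AztecOO& solver, Epetra_RowMatrix& A) {
        Teuchos::ParameterList MLList;
        ML_Epetra::SetDefaults("SA", MLList);   // smoothed aggregation defaults
        MLList.set("max levels", 6);
        MLList.set("smoother: type", "symmetric Gauss-Seidel");
        // Build the multigrid hierarchy from the matrix alone (algebraic).
        ML_Epetra::MultiLevelPreconditioner* MLPrec =
            new ML_Epetra::MultiLevelPreconditioner(A, MLList);
        solver.SetPrecOperator(MLPrec);
        // ... call solver.Iterate(...), then delete MLPrec when done.
    }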

More Details

A mathematical framework for multiscale science and engineering : the variational multiscale method and interscale transfer operators

Wagner, Gregory J.; Bochev, Pavel B.; Christon, Mark A.; Collis, Samuel S.; Lehoucq, Richard B.; Shadid, John N.; Slepoy, Alexander S.

Existing approaches in multiscale science and engineering have evolved from a range of ideas and solutions that are reflective of their original problem domains. As a result, research in multiscale science has followed widely diverse and disjoint paths, which presents a barrier to cross pollination of ideas and application of methods outside their application domains. The status of the research environment calls for an abstract mathematical framework that can provide a common language to formulate and analyze multiscale problems across a range of scientific and engineering disciplines. In such a framework, critical common issues arising in multiscale problems can be identified, explored and characterized in an abstract setting. This type of overarching approach would allow categorization and clarification of existing models and approximations in a landscape of seemingly disjoint, mutually exclusive and ad hoc methods. More importantly, such an approach can provide context for both the development of new techniques and their critical examination. As with any new mathematical framework, it is necessary to demonstrate its viability on problems of practical importance. At Sandia, lab-centric, prototype application problems in fluid mechanics, reacting flows, magnetohydrodynamics (MHD), shock hydrodynamics and materials science span an important subset of DOE Office of Science applications and form an ideal proving ground for new approaches in multiscale science.

More Details

ML 3.1 developer's guide

Sala, Marzio S.; Hu, Jonathan J.; Tuminaro, Raymond S.

ML development was started in 1997 by Ray Tuminaro and Charles Tong. Currently, there are several full- and part-time developers. The kernel of ML is written in ANSI C, and there is a rich C++ interface for Trilinos users and developers. ML can be customized to run geometric and algebraic multigrid; it can solve a scalar or a vector equation (with constant number of equations per grid node), and it can solve a form of Maxwell's equations. For a general introduction to ML and its applications, we refer to the Users Guide [SHT04], and to the ML web site, http://software.sandia.gov/ml.

More Details

Containment of uranium in the proposed Egyptian geologic repository for radioactive waste using hydroxyapatite

Moore, Robert C.; Hasan, Ahmed H.; Larese, Kathleen C.; Headley, Thomas J.; Zhao, Hongting Z.; Salas, Fred S.

Currently, the Egyptian Atomic Energy Authority is designing a shallow-land disposal facility for low-level radioactive waste. To ensure containment and prevent migration of radionuclides from the site, the use of a reactive backfill material is being considered. One material under consideration is hydroxyapatite, Ca10(PO4)6(OH)2, which has a high affinity for the sorption of many radionuclides. Hydroxyapatite has many properties that make it an ideal material for use as a backfill, including low water solubility (Ksp > 10^-40), high stability under reducing and oxidizing conditions over a wide temperature range, availability, and low cost. However, there is often considerable variation in the properties of apatites depending on source and method of preparation. In this work, we characterized and compared a synthetic hydroxyapatite with hydroxyapatites prepared from cattle bone calcined at 500 °C, 700 °C, 900 °C, and 1100 °C. The analysis indicated the synthetic hydroxyapatite was similar in morphology to the cattle hydroxyapatite prepared at 500 °C. With increasing calcination temperature, the crystallinity and crystal size of the hydroxyapatites increased while the BET surface area and carbonate concentration decreased. Batch sorption experiments were performed to determine the effectiveness of each material at sorbing uranium. Sorption of U was strong regardless of apatite type, indicating that all of the apatite materials evaluated are suitable for this application. Sixty-day desorption experiments indicated that desorption of uranium from each hydroxyapatite was negligible.

More Details

Algebraic multigrid methods for constrained linear systems with applications to contact problems in solid mechanics

Numerical Linear Algebra with Applications

Adams, Mark F.

This paper develops a general framework for applying algebraic multigrid techniques to constrained systems of linear algebraic equations that arise in applications with discretized PDEs. We discuss constraint coarsening strategies for constructing multigrid coarse grid spaces and several classes of multigrid smoothers for these systems. The potential of these methods is investigated with their application to contact problems in solid mechanics. Published in 2004 by John Wiley & Sons, Ltd.

More Details

Taking ASCI supercomputing to the end game

DeBenedictis, Erik

The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zettaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancements to microprocessor functionality and to the power efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate physical space; quantum computing, reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.

More Details

Molecular simulations of MEMS and membrane coatings (PECASE)

Thompson, Aidan P.

The goal of this Laboratory Directed Research & Development (LDRD) effort was to design, synthesize, and evaluate organic-inorganic nanocomposite membranes for solubility-based separations, such as the removal of higher hydrocarbons from air streams, using experiment and theory. We synthesized membranes by depositing alkylchlorosilanes on the nanoporous surfaces of alumina substrates, using techniques from the self-assembled monolayer literature to control the microstructure. We measured the permeability of these membranes to different gas species, in order to evaluate their performance in solubility-based separations. Membrane design goals were met by manipulating the pore size, alkyl group size, and alkyl surface density. We employed molecular dynamics simulation to gain further understanding of the relationship between membrane microstructure and separation performance.

More Details

Verification of Euler/Navier-Stokes codes using the method of manufactured solutions

International Journal for Numerical Methods in Fluids

Roy, C.J.; Nelson, C.C.; Smith, T.M.; Ober, Curtis C.

The method of manufactured solutions is used to verify the order of accuracy of two finite-volume Euler and Navier-Stokes codes. The Premo code employs a node-centred approach using unstructured meshes, while the Wind code employs a similar scheme on structured meshes. Both codes use Roe's upwind method with MUSCL extrapolation for the convective terms and central differences for the diffusion terms, thus yielding a numerical scheme that is formally second-order accurate. The method of manufactured solutions is employed to generate exact solutions to the governing Euler and Navier-Stokes equations in two dimensions along with additional source terms. These exact solutions are then used to accurately evaluate the discretization error in the numerical solutions. Through global discretization error analyses, the spatial order of accuracy is observed to be second order for both codes, thus giving a high degree of confidence that the two codes are free from coding mistakes in the options exercised. Examples of coding mistakes discovered using the method are also given. © 2004 John Wiley and Sons, Ltd.
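
The essence of the method is easy to reproduce in one dimension: manufacture u(x) = sin(x) for -u'' = f, derive f analytically (here f = sin(x)), solve on two grids, and confirm the order of accuracy from the error ratio. The sketch below uses a finite-difference scheme rather than the paper's finite-volume codes, but the verification logic is the same.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Solve -u'' = f on (0, pi), u(0) = u(pi) = 0, with N interior points and a
    // second-order central difference; return the max-norm error against the
    // manufactured solution u(x) = sin(x), for which f(x) = sin(x).
    double solve_and_error(int N) {
        double h = M_PI / (N + 1);
        std::vector<double> a(N, -1.0), b(N, 2.0), c(N, -1.0), rhs(N);
        for (int i = 0; i < N; ++i)
            rhs[i] = h * h * sin((i + 1) * h);   // manufactured source term
        for (int i = 1; i < N; ++i) {            // Thomas algorithm: forward sweep
            double m = a[i] / b[i - 1];
            b[i]   -= m * c[i - 1];
            rhs[i] -= m * rhs[i - 1];
        }
        std::vector<double> u(N);
        u[N - 1] = rhs[N - 1] / b[N - 1];
        for (int i = N - 2; i >= 0; --i)         // back substitution
            u[i] = (rhs[i] - c[i] * u[i + 1]) / b[i];
        double err = 0.0;
        for (int i = 0; i < N; ++i)
            err = fmax(err, fabs(u[i] - sin((i + 1) * h)));
        return err;
    }

    int main() {
        double e1 = solve_and_error(50), e2 = solve_and_error(100);
        double h1 = M_PI / 51, h2 = M_PI / 101;
        // Observed order of accuracy from the error ratio; expect ~2.0.
        printf("observed order: %.2f\n", log(e1 / e2) / log(h1 / h2));
        return 0;
    }

If a coding mistake degraded the scheme to first order, the printed observed order would fall to ~1.0, which is exactly how the method exposes such mistakes.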

More Details

LDRD report : parallel repartitioning for optimal solver performance

Devine, Karen D.; Boman, Erik G.; Heaphy, Robert T.; Hendrickson, Bruce A.; Heroux, Michael A.

We have developed infrastructure, utilities, and partitioning methods to improve data partitioning in linear solvers and preconditioners. Our efforts included incorporation of data repartitioning capabilities from the Zoltan toolkit into the Trilinos solver framework (allowing dynamic repartitioning of Trilinos matrices); implementation of efficient distributed data directories and unstructured communication utilities in Zoltan and Trilinos; development of a new multi-constraint geometric partitioning algorithm (which can generate one decomposition that is good with respect to multiple criteria); and research into hypergraph partitioning algorithms (which provide up to 56% reduction of communication volume compared to graph partitioning for a number of emerging applications). This report includes descriptions of the infrastructure and algorithms developed, along with results demonstrating the effectiveness of our approaches.

More Details

A filter-based evolutionary algorithm for constrained optimization

Proposed for publication in Evolutionary Computation.

Hart, William E.

We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
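
The filter's dominance test is simple to state in code. The sketch below keeps a candidate only if no stored (objective, constraint-violation) pair dominates it, then prunes newly dominated entries; the numbers are arbitrary and the surrounding evolutionary machinery is omitted.

    #include <cstdio>
    #include <vector>

    struct Entry { double f, h; };  // objective value, constraint violation

    bool dominates(const Entry& a, const Entry& b) {
        return a.f <= b.f && a.h <= b.h && (a.f < b.f || a.h < b.h);
    }

    bool filter_accepts(std::vector<Entry>& filter, Entry cand) {
        for (const Entry& e : filter)
            if (dominates(e, cand)) return false;          // candidate rejected
        std::vector<Entry> kept;                           // accepted: prune the
        for (const Entry& e : filter)                      // entries it dominates
            if (!dominates(cand, e)) kept.push_back(e);
        kept.push_back(cand);
        filter.swap(kept);
        return true;
    }

    int main() {
        std::vector<Entry> filter;
        filter_accepts(filter, {3.0, 0.5});
        filter_accepts(filter, {2.0, 1.0});
        printf("accept (2.5, 0.7)? %d\n", filter_accepts(filter, {2.5, 0.7}));  // 1
        printf("accept (3.5, 2.0)? %d\n", filter_accepts(filter, {3.5, 2.0}));  // 0
        return 0;
    }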

More Details

Verification, validation, and predictive capability in computational engineering and physics

Applied Mechanics Reviews

Oberkampf, William L.; Trucano, Timothy G.; Hirsch, Charles

The views of the state of the art in verification and validation (V&V) in computational physics are discussed. These views are described in a framework in which predictive capability relies on V&V, as well as on other factors that affect predictive capability. Some of the research topics addressed are the development of improved procedures for the use of the phenomena identification and ranking table (PIRT) for prioritizing V&V activities, and the method of manufactured solutions for code verification. Also addressed are the development and use of hierarchical validation diagrams, and the construction and use of validation metrics incorporating statistical measures.

More Details

The signature molecular descriptor: 3. Inverse-quantitative structure-activity relationship of ICAM-1 inhibitory peptides

Journal of Molecular Graphics and Modelling

Churchwell, Carla J.; Rintoul, Mark D.; Martin, Shawn; Visco, Donald P.; Kotu, Archana; Larson, Richard S.; Sillerud, Laurel O.; Brown, David C.; Faulon, Jean L.

We present a methodology for solving the inverse quantitative structure-activity relationship (QSAR) problem using the molecular descriptor called signature. This methodology is detailed in four parts. First, we create a QSAR equation that correlates the occurrence of a signature with the activity values using a stepwise multilinear regression technique. Second, we construct constraint equations, specifically the graphicality and consistency equations, which facilitate the reconstruction of the solution compounds directly from the signatures. Third, we solve the set of constraint equations, which are both linear and Diophantine in nature. Last, we reconstruct and enumerate the solution molecules and calculate their activity values from the QSAR equation. We apply this inverse-QSAR method to a small set of LFA-1/ICAM-1 peptide inhibitors to assist in the search for, and design of, more potent inhibitory compounds. Many novel inhibitors were predicted, a number of which are predicted to be more potent than the strongest inhibitor in the training set. Two of the more potent inhibitors were synthesized and tested in vivo, confirming them to be the strongest inhibiting peptides to date. Some of these compounds can be recycled to train a new QSAR and develop a more focused library of lead compounds. © 2003 Elsevier Inc. All rights reserved.

More Details

Covering a set of points with a minimum number of turns

International Journal of Computational Geometry and Applications

Collins, Michael J.

Given a finite set of points in Euclidean space, we can ask what is the minimum number of times a piecewise-linear path must change direction in order to pass through all of them. We prove some new upper and lower bounds for the rectilinear version of this problem in which all motion is orthogonal to the coordinate axes. We also consider the more general case of arbitrary directions.

More Details

Communication patterns and allocation strategies

Leung, Vitus J.

Motivated by observations about job runtimes on the CPlant system, we use a trace-driven microsimulator to begin characterizing the performance of different classes of allocation algorithms on jobs with different communication patterns in space-shared parallel systems with mesh topology. We show that relative performance varies considerably with communication pattern. The Paging strategy using the Hilbert space-filling curve and the Best Fit heuristic performed best across several communication patterns.
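
The Hilbert-curve ordering at the heart of the Paging strategy can be illustrated with the standard index-to-coordinate conversion below; walking a mesh in this order keeps consecutively allocated processors spatially close. This is the well-known bit-manipulation algorithm, not code from the study.

    #include <cstdio>

    // Map index d in [0, n*n) to (x, y) on an n x n grid (n a power of two).
    void hilbert_d2xy(int n, int d, int* x, int* y) {
        int rx, ry, t = d;
        *x = *y = 0;
        for (int s = 1; s < n; s *= 2) {
            rx = 1 & (t / 2);
            ry = 1 & (t ^ rx);
            if (ry == 0) {                       // rotate the quadrant
                if (rx == 1) { *x = s - 1 - *x; *y = s - 1 - *y; }
                int tmp = *x; *x = *y; *y = tmp;
            }
            *x += s * rx;
            *y += s * ry;
            t /= 4;
        }
    }

    int main() {
        // Walk a 4 x 4 mesh in Hilbert order: consecutive ranks stay adjacent.
        for (int d = 0; d < 16; ++d) {
            int x, y;
            hilbert_d2xy(4, d, &x, &y);
            printf("rank %2d -> (%d, %d)\n", d, x, y);
        }
        return 0;
    }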

More Details

Simulating economic effects of disruptions in the telecommunications infrastructure

Barton, Dianne C.; Eidson, Eric D.; Schoenwald, David A.; Cox, Roger G.; Reinert, Rhonda K.

CommAspen is a new agent-based model for simulating the interdependent effects of market decisions and disruptions in the telecommunications infrastructure on other critical infrastructures in the U.S. economy such as banking and finance, and electric power. CommAspen extends and modifies the capabilities of Aspen-EE, an agent-based model previously developed by Sandia National Laboratories to analyze the interdependencies between the electric power system and other critical infrastructures. CommAspen has been tested on a series of scenarios in which the communications network has been disrupted, due to congestion and outages. Analysis of the scenario results indicates that communications networks simulated by the model behave as their counterparts do in the real world. Results also show that the model could be used to analyze the economic impact of communications congestion and outages.

More Details

Trilinos 3.1 tutorial

Heroux, Michael A.; Sala, Marzio S.

This document introduces the use of Trilinos, version 3.1. Trilinos has been written to support, in a rigorous manner, the solver needs of the engineering and scientific applications at Sandia National Laboratories. The aim of this manuscript is to present the basic features of some of the Trilinos packages. The presented material includes the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IFPACK, multilevel methods with ML, the direct solution of linear systems with Amesos, and the iterative solution of nonlinear systems with NOX. With the help of several examples, some of the most important classes and methods are detailed for the inexperienced user. For the most part, each example is commented throughout the text; further comments can be found in the source of each example. This document is a companion to the Trilinos User's Guide and Trilinos Development Guides. Also, the documentation included in each of the Trilinos packages is of fundamental importance.

More Details

Application of multidisciplinary analysis to gene expression

Davidson, George S.; Haaland, David M.; Martin, Shawn

Molecular analysis of cancer, at the genomic level, could lead to individualized patient diagnostics and treatments. The developments to follow will signal a significant paradigm shift in the clinical management of human cancer. Despite our initial hopes, however, it seems that simple analysis of microarray data cannot elucidate clinically significant gene functions and mechanisms. Extracting biological information from microarray data requires a complicated path involving multidisciplinary teams of biomedical researchers, computer scientists, mathematicians, statisticians, and computational linguists. The integration of the diverse outputs of each team is the limiting factor in the progress to discover candidate genes and pathways associated with the molecular biology of cancer. Specifically, one must deal with sets of significant genes identified by each method and extract whatever useful information may be found by comparing these different gene lists. Here we present our experience with such comparisons, and share methods developed in the analysis of an infant leukemia cohort studied on Affymetrix HG-U95A arrays. In particular, spatial gene clustering, hyper-dimensional projections, and computational linguistics were used to compare different gene lists. In spatial gene clustering, different gene lists are grouped together and visualized on a three-dimensional expression map, where genes with similar expressions are co-located. In another approach, projections from gene expression space onto a sphere clarify how groups of genes can jointly have more predictive power than groups of individually selected genes. Finally, online literature is automatically rearranged to present information about genes common to multiple groups, or to contrast the differences between the lists. The combination of these methods has improved our understanding of infant leukemia. While the complicated reality of the biology dashed our initial, optimistic hopes for simple answers from microarrays, we have made progress by combining very different analytic approaches.

More Details

Compact optimization can outperform separation: A case study in structural proteomics

4OR

Carr, Robert D.; Lancia, Giuseppe G.

In Combinatorial Optimization, one is frequently faced with linear programming (LP) problems with exponentially many constraints, which can be solved either using separation or what we call compact optimization. The former technique relies on a separation algorithm, which, given a fractional solution, tries to produce a violated valid inequality. Compact optimization relies on describing the feasible region of the LP by a polynomial number of constraints, in a higher dimensional space. A commonly held belief is that compact optimization does not perform as well as separation in practice. In this paper, we report on an application in which compact optimization does in fact largely outperform separation. The problem arises in structural proteomics, and concerns the comparison of 3-dimensional protein folds. Our computational results show that compact optimization achieves an improvement of up to two orders of magnitude over separation. We discuss some reasons why compact optimization works in this case but not, e.g., for the LP relaxation of the TSP. © Springer-Verlag 2004.

More Details

Color Snakes for Dynamic Lighting Conditions on Mobile Manipulation Platforms

IEEE International Conference on Intelligent Robots and Systems

Schaub, Hanspeter; Smith, Christopher E.

Statistical active contour models (aka statistical pressure snakes) have attractive properties for use in mobile manipulation platforms as both a method for use in visual servoing and as a natural component of a human-computer interface. Unfortunately, the constantly changing illumination expected in outdoor environments presents problems for statistical pressure snakes and for their image gradient-based predecessors. This paper introduces a new color-based variant of statistical pressure snakes that gives superior performance under dynamic lighting conditions and improves upon the previously published results of attempts to incorporate color imagery into active deformable models.

More Details

Equilibration of long chain polymer melts in computer simulations

Journal of Chemical Physics

Auhl, Rolf; Everaers, Ralf; Grest, Gary S.; Kremer, Kurt; Plimpton, Steven J.

Equilibrated melts of long-chain polymers were prepared. The combination of molecular dynamics (MD) relaxation, double-bridging, and slow push-off allowed the efficient and controlled preparation of equilibrated melts of short, medium, and long chains, respectively. Results were obtained for an off-lattice bead-spring model with chain lengths up to N = 7000 beads.

More Details

Final report for the endowment of simulator agents with human-like episodic memory LDRD

Forsythe, James C.; Speed, Ann S.; Lippitt, Carl E.; Schaller, Mark J.; Xavier, Patrick G.; Thomas, Edward V.; Schoenwald, David A.

This report documents work undertaken to endow the cognitive framework currently under development at Sandia National Laboratories with a human-like memory for specific life episodes. Capabilities have been demonstrated within the context of three separate problem areas. The first year of the project developed a capability whereby simulated robots were able to utilize a record of shared experience to perform surveillance of a building to detect a source of smoke. The second year focused on simulations of social interactions providing a queriable record of interactions such that a time series of events could be constructed and reconstructed. The third year addressed tools to promote desktop productivity, creating a capability to query episodic logs in real time allowing the model of a user to build on itself based on observations of the user's behavior.

More Details

Epetra developers coding guidelines

Heroux, Michael A.

Epetra is a package of classes for the construction and use of serial and distributed parallel linear algebra objects. It is one of the base packages in Trilinos. This document describes guidelines for Epetra coding style. The issues discussed here go beyond correct C++ syntax to address issues that make code more readable and self-consistent. The guidelines presented here are intended to aid current and future development of Epetra specifically. They reflect design decisions that were made in the early development stages of Epetra. Some of the guidelines are contrary to more commonly used conventions, but we choose to continue these practices for the purposes of self-consistency. These guidelines are intended to be complementary to policies established in the Trilinos Developers Guide.

More Details

Unique Signal mathematical analysis task group FY03 status report

Cooper, Arlin C.; Johnston, Anna M.

The Unique Signal is a key constituent of Enhanced Nuclear Detonation Safety (ENDS). Although the Unique Signal approach is well prescribed and mathematically assured, there are numerous unsolved mathematical problems that could help assess the risk of deviations from the ideal approach. Some of the mathematics-based results shown in this report are:

1. The risk that two patterns with poor characteristics (easily generated by inadvertent processes) could be combined through exclusive-or mixing to generate an actual Unique Signal pattern has been investigated and found to be minimal (not significant when compared to the incompatibility metric of actual Unique Signal patterns used in nuclear weapons).

2. The risk of generating actual Unique Signal patterns with linear feedback shift registers is minimal, but the patterns in use are not as invulnerable to inadvertent generation by dependent processes as previously thought (see the sketch after this list).

3. New methods of testing pair-wise incompatibility threats have resulted in no significant problems found for the set of Unique Signal patterns currently used. Any new patterns introduced would have to be carefully assessed for compatibility with existing patterns, since some new patterns under consideration were found to be deficient when associated with other patterns in use.

4. Markov models were shown to correspond to some of the engineered properties of Unique Signal sequences. This gives new support for the original design objectives.

5. Potential dependence among events (caused by a variety of communication protocols) has been studied. New evidence has been derived of the risk associated with combined communication of multiple events, and of the improvement in abnormal-environment safety that can be achieved through separate-event communication.
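
For item 2, a linear feedback shift register takes only a few lines of code, which is precisely why LFSR-generated patterns deserve scrutiny as an inadvertent source of structured bit sequences. The taps below are a textbook maximal-length choice, unrelated to any actual Unique Signal.

    #include <cstdint>
    #include <cstdio>

    int main() {
        uint16_t state = 0xACE1u;                    // arbitrary nonzero seed
        for (int i = 0; i < 32; ++i) {
            printf("%u", state & 1u);                // emit the low bit
            // Feedback taps for x^16 + x^14 + x^13 + x^11 + 1 (maximal length:
            // the state cycles through all 65535 nonzero values).
            uint16_t bit = ((state >> 0) ^ (state >> 2) ^
                            (state >> 3) ^ (state >> 5)) & 1u;
            state = (uint16_t)((state >> 1) | (bit << 15));
        }
        printf("\n");
        return 0;
    }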

More Details

ChemCell : a particle-based model of protein chemistry and diffusion in microbial cells

Plimpton, Steven J.; Slepoy, Alexander S.

Prokaryotic single-cell microbes are the simplest of all self-sufficient living organisms. Yet microbes create and use much of the molecular machinery present in more complex organisms, and the macro-molecules in microbial cells interact in regulatory, metabolic, and signaling pathways that are prototypical of the reaction networks present in all cells. We have developed a simple simulation model of a prokaryotic cell that treats proteins, protein complexes, and other organic molecules as particles which diffuse via Brownian motion and react with nearby particles in accord with chemical rate equations. The code models protein motion and chemistry within an idealized cellular geometry. It has been used to simulate several simple reaction networks and compared to more idealized models which do not include spatial effects. In this report we describe an initial version of the simulation code that was developed with FY03 funding. We discuss the motivation for the model, highlight its underlying equations, and describe simulations of a 3-stage kinase cascade and a portion of the carbon fixation pathway in the Synechococcus microbe.
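
A toy version of the particle picture described above: particles take Gaussian Brownian steps, and an A-B pair that drifts within a capture radius reacts and is removed. All parameters are invented, and the idealized cellular geometry (among much else) is omitted.

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    struct Particle { double x, y, z; bool alive; };

    double uniform() { return rand() / (RAND_MAX + 1.0); }

    double gauss() {  // Box-Muller normal deviate
        double u1 = 1.0 - uniform(), u2 = uniform();
        return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
    }

    int main() {
        const double D = 1.0, dt = 1e-4, rcap = 0.05;  // diffusivity, step, capture radius
        const double sigma = sqrt(2.0 * D * dt);       // Brownian step size per axis
        std::vector<Particle> A(200), B(200);
        for (auto& p : A) p = {uniform(), uniform(), uniform(), true};
        for (auto& p : B) p = {uniform(), uniform(), uniform(), true};
        for (int step = 0; step < 1000; ++step) {
            for (auto& p : A) if (p.alive) { p.x += sigma*gauss(); p.y += sigma*gauss(); p.z += sigma*gauss(); }
            for (auto& p : B) if (p.alive) { p.x += sigma*gauss(); p.y += sigma*gauss(); p.z += sigma*gauss(); }
            // Reaction A + B -> 0 when a live pair is closer than the capture radius.
            for (auto& a : A) for (auto& b : B) {
                if (!a.alive || !b.alive) continue;
                double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
                if (dx*dx + dy*dy + dz*dz < rcap*rcap) a.alive = b.alive = false;
            }
        }
        int nA = 0;
        for (auto& p : A) nA += p.alive;
        printf("A particles remaining after 1000 steps: %d\n", nA);
        return 0;
    }

Comparing such spatial runs against the corresponding well-mixed rate equations is the kind of exercise the abstract describes for the kinase cascade and carbon fixation networks.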

More Details