Publications

ML 3.1 developer's guide

Sala, Marzio; Hu, Jonathan J.; Tuminaro, Raymond S.

ML development was started in 1997 by Ray Tuminaro and Charles Tong. Currently, there are several full- and part-time developers. The kernel of ML is written in ANSI C, and there is a rich C++ interface for Trilinos users and developers. ML can be customized to run geometric and algebraic multigrid; it can solve a scalar or a vector equation (with a constant number of equations per grid node), and it can solve a form of Maxwell's equations. For a general introduction to ML and its applications, we refer to the User's Guide [SHT04] and to the ML web site, http://software.sandia.gov/ml.
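
As an illustration of the rich C++ interface mentioned above, the following minimal sketch wraps an already-assembled Epetra matrix in an ML smoothed-aggregation preconditioner. The helper name `BuildMLPrec` and the specific parameter choices are ours for illustration; consult the User's Guide for the authoritative API.

```cpp
// Minimal sketch: wrap an assembled Epetra_RowMatrix in an ML
// smoothed-aggregation (algebraic multigrid) preconditioner.
// Parameter names follow ML-era conventions and may vary by version.
#include "Epetra_RowMatrix.h"
#include "Teuchos_ParameterList.hpp"
#include "ml_MultiLevelPreconditioner.h"

ML_Epetra::MultiLevelPreconditioner*
BuildMLPrec(const Epetra_RowMatrix& A)  // hypothetical helper
{
  Teuchos::ParameterList MLList;
  ML_Epetra::SetDefaults("SA", MLList);   // smoothed-aggregation defaults
  MLList.set("max levels", 5);            // cap the multigrid hierarchy
  MLList.set("smoother: type", "symmetric Gauss-Seidel");
  return new ML_Epetra::MultiLevelPreconditioner(A, MLList, true);
}
```

The returned object can then be handed to an iterative solver such as AztecOO as its preconditioning operator.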

Containment of uranium in the proposed Egyptian geologic repository for radioactive waste using hydroxyapatite

Moore, Robert C.; Hasan, Ahmed; Larese, Kathleen C.; Headley, Thomas J.; Zhao, Hongting; Salas, Fred

Currently, the Egyptian Atomic Energy Authority is designing a shallow-land disposal facility for low-level radioactive waste. To ensure containment and prevent migration of radionuclides from the site, the use of a reactive backfill material is being considered. One material under consideration is hydroxyapatite, Ca₁₀(PO₄)₆(OH)₂, which has a high affinity for the sorption of many radionuclides. Hydroxyapatite has many properties that make it an ideal backfill material, including low water solubility (Ksp < 10⁻⁴⁰), high stability under reducing and oxidizing conditions over a wide temperature range, availability, and low cost. However, there is often considerable variation in the properties of apatites depending on source and method of preparation. In this work, we characterized and compared a synthetic hydroxyapatite with hydroxyapatites prepared from cattle bone calcined at 500 °C, 700 °C, 900 °C, and 1100 °C. The analysis indicated the synthetic hydroxyapatite was similar in morphology to the cattle hydroxyapatite prepared at 500 °C. With increasing calcination temperature, the crystallinity and crystal size of the hydroxyapatites increased while the BET surface area and carbonate concentration decreased. Batch sorption experiments were performed to determine the effectiveness of each material at sorbing uranium. Sorption of U was strong for all of the apatite materials evaluated, regardless of apatite type. Sixty-day desorption experiments indicated that desorption of uranium was negligible for each hydroxyapatite.
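
For reference, batch sorption results of this kind are conventionally summarized by a distribution coefficient; the standard definition (ours for context, not quoted from the abstract) is:

```latex
% Distribution coefficient from a batch sorption experiment:
% C_0 = initial solution concentration, C_e = equilibrium concentration,
% V = solution volume, m = sorbent mass.
K_d = \frac{C_0 - C_e}{C_e} \cdot \frac{V}{m}
\qquad \text{(mL/g for $V$ in mL and $m$ in g)}
```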

Algebraic multigrid methods for constrained linear systems with applications to contact problems in solid mechanics

Numerical Linear Algebra with Applications

Adams, Mark F.

This paper develops a general framework for applying algebraic multigrid techniques to constrained systems of linear algebraic equations that arise in applications with discretized PDEs. We discuss constraint coarsening strategies for constructing multigrid coarse grid spaces and several classes of multigrid smoothers for these systems. The potential of these methods is investigated with their application to contact problems in solid mechanics. Published in 2004 by John Wiley & Sons, Ltd.
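
For concreteness, the constrained systems in question are typically saddle-point (KKT) systems of the generic form below; the notation is ours, not the paper's:

```latex
% Discretized equilibrium with constraints B u = g enforced by Lagrange
% multipliers \lambda (e.g., non-penetration conditions on a contact surface):
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
% Constraint coarsening chooses coarse spaces that carry an approximation of
% the constraint operator B to every level of the multigrid hierarchy.
```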

Taking ASCI supercomputing to the end game

Debenedictis, Erik

The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zettaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate physical space; quantum computing, reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.
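
To make the thermodynamic limit concrete, the sketch below evaluates the Landauer bound on power for a hypothetical machine; the bit-erasures-per-FLOP factor is an assumed illustrative parameter, not a number from the report.

```cpp
// Minimal sketch: Landauer bound on power for irreversible logic.
// Erasing one bit dissipates at least k_B * T * ln(2) joules; a floating-
// point operation erases many bits (the factor below is an assumption).
#include <cmath>
#include <cstdio>

int main() {
  const double kB = 1.380649e-23;    // Boltzmann constant, J/K
  const double T = 300.0;            // operating temperature, K
  const double bitsPerFlop = 1e4;    // assumed bit erasures per FLOP
  const double flops = 1e21;         // 1 Zettaflops sustained
  const double watts = flops * bitsPerFlop * kB * T * std::log(2.0);
  std::printf("Landauer-limited power at 1 Zettaflops: %.3g W\n", watts);
  return 0;
}
```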

Molecular simulations of MEMS and membrane coatings (PECASE)

Thompson, A.P.

The goal of this Laboratory Directed Research & Development (LDRD) effort was to design, synthesize, and evaluate organic-inorganic nanocomposite membranes for solubility-based separations, such as the removal of higher hydrocarbons from air streams, using experiment and theory. We synthesized membranes by depositing alkylchlorosilanes on the nanoporous surfaces of alumina substrates, using techniques from the self-assembled monolayer literature to control the microstructure. We measured the permeability of these membranes to different gas species, in order to evaluate their performance in solubility-based separations. Membrane design goals were met by manipulating the pore size, alkyl group size, and alkyl surface density. We employed molecular dynamics simulation to gain further understanding of the relationship between membrane microstructure and separation performance.

A filter-based evolutionary algorithm for constrained optimization

Proposed for publication in Evolutionary Computation.

Hart, William E.

We introduce a filter-based evolutionary algorithm (FEA) for constrained optimization. The filter used by an FEA explicitly imposes the concept of dominance on a partially ordered solution set. We show that the algorithm is provably robust for both linear and nonlinear problems and constraints. FEAs use a finite pattern of mutation offsets, and our analysis is closely related to recent convergence results for pattern search methods. We discuss how properties of this pattern impact the ability of an FEA to converge to a constrained local optimum.
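
The filter's dominance test can be made concrete with a small sketch. Here a candidate is scored by objective value and aggregate constraint violation, and is accepted only if no stored entry dominates it; this pairing is our generic illustration of a filter, not code from the paper.

```cpp
// Minimal sketch of a filter for constrained optimization: maintain a set
// of mutually non-dominated (objective f, constraint violation h) pairs.
#include <vector>

struct Entry { double f; double h; };  // h = aggregate constraint violation

// a dominates b if it is no worse in both coordinates and better in one.
bool dominates(const Entry& a, const Entry& b) {
  return a.f <= b.f && a.h <= b.h && (a.f < b.f || a.h < b.h);
}

// Accept a candidate iff no filter entry dominates it; prune any entries
// the candidate dominates so the filter stays non-dominated.
bool filterAccept(std::vector<Entry>& filter, const Entry& cand) {
  for (const Entry& e : filter)
    if (dominates(e, cand)) return false;
  std::erase_if(filter, [&](const Entry& e) { return dominates(cand, e); });
  filter.push_back(cand);
  return true;
}
```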

LDRD report: parallel repartitioning for optimal solver performance

Devine, Karen; Boman, Erik G.; Heaphy, Robert T.; Hendrickson, Bruce A.; Heroux, Michael A.

We have developed infrastructure, utilities and partitioning methods to improve data partitioning in linear solvers and preconditioners. Our efforts included incorporation of data repartitioning capabilities from the Zoltan toolkit into the Trilinos solver framework (allowing dynamic repartitioning of Trilinos matrices); implementation of efficient distributed data directories and unstructured communication utilities in Zoltan and Trilinos; development of a new multi-constraint geometric partitioning algorithm (which can generate one decomposition that is good with respect to multiple criteria); and research into hypergraph partitioning algorithms (which provide up to 56% reduction of communication volume compared to graph partitioning for a number of emerging applications). This report includes descriptions of the infrastructure and algorithms developed, along with results demonstrating the effectiveness of our approaches.
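
A configuration skeleton for the Zoltan repartitioning described above might look like the following; the parameter names come from published Zoltan documentation, the query callbacks that hand Zoltan the application data are omitted, and none of this is the project's actual integration code.

```cpp
// Configuration sketch for Zoltan repartitioning (Zoltan's C API from C++).
// A real run must register query callbacks describing the data before
// calling Zoltan_LB_Partition; they are omitted here for brevity.
#include <mpi.h>
#include "zoltan.h"

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  float version;
  Zoltan_Initialize(argc, argv, &version);
  Zoltan_Struct* zz = Zoltan_Create(MPI_COMM_WORLD);
  Zoltan_Set_Param(zz, "LB_METHOD", "HYPERGRAPH");    // hypergraph partitioning
  Zoltan_Set_Param(zz, "LB_APPROACH", "REPARTITION"); // favor low migration cost
  // ... register query callbacks, call Zoltan_LB_Partition, migrate data ...
  Zoltan_Destroy(&zz);
  MPI_Finalize();
  return 0;
}
```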

Verification, validation, and predictive capability in computational engineering and physics

Applied Mechanics Reviews

Oberkampf, William L.; Trucano, Timothy G.; Hirsch, Charles

Views on the state of the art in verification and validation (V&V) in computational physics are discussed. These views are described in a framework in which predictive capability relies on V&V, as well as on other factors that affect predictive capability. Research topics addressed include the development of improved procedures for using the phenomena identification and ranking table (PIRT) to prioritize V&V activities, and the method of manufactured solutions for code verification. Also addressed are the development and use of hierarchical validation diagrams, and the construction and use of validation metrics incorporating statistical measures.
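
As a one-line illustration of the method of manufactured solutions (our example, not one from the paper): choose a solution, derive the forcing term analytically, and check the observed order of convergence against the scheme's formal order.

```latex
% Manufactured solution for -u'' = f on (0,1) with u(0) = u(1) = 0:
% choose u_m(x) = \sin(\pi x), which forces f(x) = \pi^2 \sin(\pi x).
% Running the code at mesh sizes h and h/2 gives errors e_h and e_{h/2};
% the observed order of accuracy,
p = \frac{\log(e_h / e_{h/2})}{\log 2},
% should match the formal order of the discretization if the code is correct.
```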

Covering a set of points with a minimum number of turns

International Journal of Computational Geometry and Applications

Collins, Michael J.

Given a finite set of points in Euclidean space, we can ask what is the minimum number of times a piecewise-linear path must change direction in order to pass through all of them. We prove some new upper and lower bounds for the rectilinear version of this problem in which all motion is orthogonal to the coordinate axes. We also consider the more general case of arbitrary directions.

Will Moore's law be sufficient?

Proceedings of the ACM/IEEE SC 2004 Conference: Bridging Communities

Debenedictis, Erik

It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will make progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. This paper will review some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10²¹ FLOPS (1 Zettaflops). The review will also show the basic nature of these extreme problems. This paper will review work by others showing that the theoretical maximum supercomputer power is very high indeed, but will explain how a straightforward extrapolation of Moore's Law will lead to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards at 10¹⁶-10¹⁹ FLOPS (100 Petaflops to 10 Exaflops, depending on architecture), but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, this paper will explore the nearer-term issue of what a supercomputer will look like at the maturity of Moore's Law. Our approach will quantify the maximum performance permitted by the laws of physics for extension of current technology and then find a design that approaches this limit closely. We study a "multi-architecture" for supercomputers that combines a microprocessor with other "advanced" concepts and find it can reach the limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles while the "advanced" features would provide a boost to the limits of performance.
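
The extrapolation argument can be reduced to a back-of-the-envelope computation; the starting performance and doubling period below are illustrative assumptions, not figures from the paper.

```cpp
// Minimal sketch: years for Moore's-Law doubling to reach a target rate.
#include <cmath>
#include <cstdio>

int main() {
  const double startFlops = 1e14;    // assumed ~100 Teraflops starting point
  const double targetFlops = 1e21;   // 1 Zettaflops
  const double doublingYears = 1.5;  // classic Moore's-Law doubling period
  const double doublings = std::log2(targetFlops / startFlops);
  std::printf("%.1f doublings -> about %.0f years\n",
              doublings, doublings * doublingYears);  // ~23 doublings, ~35 years
  return 0;
}
```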

Compact optimization can outperform separation: A case study in structural proteomics

4OR

Carr, Robert D.; Lancia, Giuseppe G.

In Combinatorial Optimization, one is frequently faced with linear programming (LP) problems with exponentially many constraints, which can be solved either using separation or what we call compact optimization. The former technique relies on a separation algorithm, which, given a fractional solution, tries to produce a violated valid inequality. Compact optimization relies on describing the feasible region of the LP by a polynomial number of constraints, in a higher dimensional space. A commonly held belief is that compact optimization does not perform as well as separation in practice. In this paper, we report on an application in which compact optimization does in fact largely outperform separation. The problem arises in structural proteomics, and concerns the comparison of 3-dimensional protein folds. Our computational results show that compact optimization achieves an improvement of up to two orders of magnitude over separation. We discuss some reasons why compact optimization works in this case but not, e.g., for the LP relaxation of the TSP. © Springer-Verlag 2004.
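
A textbook instance of the compact-versus-separation tradeoff (ours, not the paper's protein-fold model): the constraint ‖x‖₁ ≤ 1 has an exponential outer description but a small lifted one.

```latex
% Exponential description: one inequality per sign pattern
\sum_{i=1}^{n} s_i x_i \le 1 \qquad \text{for all } s \in \{-1,+1\}^n
% Compact (lifted) description with n auxiliary variables y_i:
-y_i \le x_i \le y_i \quad (i = 1,\dots,n), \qquad \sum_{i=1}^{n} y_i \le 1
% Separation adds violated sign-pattern inequalities one at a time; compact
% optimization solves the lifted LP directly.
```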

Communication patterns and allocation strategies

Leung, Vitus J.

Motivated by observations about job runtimes on the CPlant system, we use a trace-driven microsimulator to begin characterizing the performance of different classes of allocation algorithms on jobs with different communication patterns in space-shared parallel systems with mesh topology. We show that relative performance varies considerably with communication pattern. The Paging strategy using the Hilbert space-filling curve and the Best Fit heuristic performed best across several communication patterns.
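
The Hilbert-curve component of the Paging strategy reduces to ordering mesh coordinates by their index along a space-filling curve; the classic iterative conversion for a 2-D mesh of side n (a power of two) is sketched below.

```cpp
// Minimal sketch: index of point (x, y) along a Hilbert curve filling an
// n-by-n grid, n a power of two. Allocating a job to processors with
// contiguous curve indices keeps the job spatially clustered in the mesh.
#include <cstdint>
#include <utility>

uint64_t hilbertIndex(uint64_t n, uint64_t x, uint64_t y) {
  uint64_t d = 0;
  for (uint64_t s = n / 2; s > 0; s /= 2) {
    const uint64_t rx = (x & s) ? 1 : 0;
    const uint64_t ry = (y & s) ? 1 : 0;
    d += s * s * ((3 * rx) ^ ry);
    if (ry == 0) {  // rotate/reflect so the next level stays oriented
      if (rx == 1) { x = n - 1 - x; y = n - 1 - y; }
      std::swap(x, y);
    }
  }
  return d;
}
```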

Simulating economic effects of disruptions in the telecommunications infrastructure

Barton, Dianne C.; Eidson, Eric D.; Schoenwald, David A.; Cox, Roger G.; Reinert, Rhonda K.

CommAspen is a new agent-based model for simulating the interdependent effects of market decisions and disruptions in the telecommunications infrastructure on other critical infrastructures in the U.S. economy, such as banking and finance, and electric power. CommAspen extends and modifies the capabilities of Aspen-EE, an agent-based model previously developed by Sandia National Laboratories to analyze the interdependencies between the electric power system and other critical infrastructures. CommAspen has been tested on a series of scenarios in which the communications network has been disrupted due to congestion and outages. Analysis of the scenario results indicates that communications networks simulated by the model behave as their counterparts do in the real world. Results also show that the model could be used to analyze the economic impact of communications congestion and outages.

Trilinos 3.1 tutorial

Heroux, Michael A.; Sala, Marzio

This document introduces the use of Trilinos, version 3.1. Trilinos has been written to support, in a rigorous manner, the solver needs of the engineering and scientific applications at Sandia National Laboratories. The aim of this manuscript is to present the basic features of some of the Trilinos packages. The material presented includes the definition of distributed matrices and vectors with Epetra, the iterative solution of linear systems with AztecOO, incomplete factorizations with IFPACK, multilevel methods with ML, the direct solution of linear systems with Amesos, and the iterative solution of nonlinear systems with NOX. With the help of several examples, some of the most important classes and methods are presented for the inexperienced user. For the most part, each example is extensively commented in the text; other comments can be found in the source of each example. This document is a companion to the Trilinos User's Guide and the Trilinos Development Guides. The documentation included in each of the Trilinos packages is also of fundamental importance.
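
A minimal end-to-end sketch of the Epetra-plus-AztecOO combination introduced above (a serial tridiagonal system; the assembly details and option choices here are illustrative rather than taken from the tutorial's examples):

```cpp
// Minimal sketch: assemble a 1-D Laplacian with Epetra, solve Ax = b with
// AztecOO using CG and Jacobi preconditioning.
#include "AztecOO.h"
#include "Epetra_CrsMatrix.h"
#include "Epetra_LinearProblem.h"
#include "Epetra_Map.h"
#include "Epetra_SerialComm.h"
#include "Epetra_Vector.h"

int main() {
  Epetra_SerialComm Comm;
  const int N = 100;
  Epetra_Map Map(N, 0, Comm);          // N rows, index base 0
  Epetra_CrsMatrix A(Copy, Map, 3);    // at most 3 entries per row
  for (int i = 0; i < Map.NumMyElements(); ++i) {
    const int row = Map.GID(i);
    double vals[3] = {-1.0, 2.0, -1.0};
    int cols[3] = {row - 1, row, row + 1};
    if (row == 0)          A.InsertGlobalValues(row, 2, &vals[1], &cols[1]);
    else if (row == N - 1) A.InsertGlobalValues(row, 2, vals, cols);
    else                   A.InsertGlobalValues(row, 3, vals, cols);
  }
  A.FillComplete();

  Epetra_Vector x(Map), b(Map);
  b.PutScalar(1.0);                    // right-hand side of all ones
  Epetra_LinearProblem problem(&A, &x, &b);
  AztecOO solver(problem);
  solver.SetAztecOption(AZ_solver, AZ_cg);
  solver.SetAztecOption(AZ_precond, AZ_Jacobi);
  solver.Iterate(100, 1e-8);           // max iterations, residual tolerance
  return 0;
}
```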

Application of multidisciplinary analysis to gene expression

Davidson, George S.; Haaland, David M.; Martin, Shawn

Molecular analysis of cancer, at the genomic level, could lead to individualized patient diagnostics and treatments. The developments to follow will signal a significant paradigm shift in the clinical management of human cancer. Despite our initial hopes, however, it seems that simple analysis of microarray data cannot elucidate clinically significant gene functions and mechanisms. Extracting biological information from microarray data requires a complicated path involving multidisciplinary teams of biomedical researchers, computer scientists, mathematicians, statisticians, and computational linguists. The integration of the diverse outputs of each team is the limiting factor in the progress to discover candidate genes and pathways associated with the molecular biology of cancer. Specifically, one must deal with sets of significant genes identified by each method and extract whatever useful information may be found by comparing these different gene lists. Here we present our experience with such comparisons, and share methods developed in the analysis of an infant leukemia cohort studied on Affymetrix HG-U95A arrays. In particular, spatial gene clustering, hyper-dimensional projections, and computational linguistics were used to compare different gene lists. In spatial gene clustering, different gene lists are grouped together and visualized on a three-dimensional expression map, where genes with similar expressions are co-located. In another approach, projections from gene expression space onto a sphere clarify how groups of genes can jointly have more predictive power than groups of individually selected genes. Finally, online literature is automatically rearranged to present information about genes common to multiple groups, or to contrast the differences between the lists. The combination of these methods has improved our understanding of infant leukemia. While the complicated reality of the biology dashed our initial, optimistic hopes for simple answers from microarrays, we have made progress by combining very different analytic approaches.

Color Snakes for Dynamic Lighting Conditions on Mobile Manipulation Platforms

IEEE International Conference on Intelligent Robots and Systems

Harrigan, Raymond W.; Schaub, Hanspeter; Smith, Christopher E.

Statistical active contour models (aka statistical pressure snakes) have attractive properties for use in mobile manipulation platforms as both a method for use in visual servoing and as a natural component of a human-computer interface. Unfortunately, the constantly changing illumination expected in outdoor environments presents problems for statistical pressure snakes and for their image gradient-based predecessors. This paper introduces a new color-based variant of statistical pressure snakes that gives superior performance under dynamic lighting conditions and improves upon the previously published results of attempts to incorporate color imagery into active deformable models.
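
The "statistical pressure" idea can be summarized by one commonly used inflation term; this is a representative form from the statistical-snake literature, not necessarily the exact model of this paper.

```latex
% Representative statistical pressure term for a seed region with intensity
% mean \mu and standard deviation \sigma (k is a user-chosen tolerance):
F_{\text{pressure}}(x) = \rho \left( 1 - \frac{|I(x) - \mu|}{k\sigma} \right) \hat{n}(x)
% F > 0 where the pixel matches the region statistics (inflate the contour);
% F < 0 where it does not (deflate). A color variant replaces |I(x) - \mu|
% with a distance in a chosen color space, which is what lends robustness
% to brightness changes under dynamic lighting.
```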

Final report for the endowment of simulator agents with human-like episodic memory LDRD

Forsythe, James C.; Speed, Ann E.; Lippitt, Carl E.; Schaller, Mark J.; Xavier, Patrick G.; Thomas, Edward V.; Schoenwald, David A.

This report documents work undertaken to endow the cognitive framework currently under development at Sandia National Laboratories with a human-like memory for specific life episodes. Capabilities have been demonstrated within the context of three separate problem areas. The first year of the project developed a capability whereby simulated robots were able to utilize a record of shared experience to perform surveillance of a building to detect a source of smoke. The second year focused on simulations of social interactions providing a queriable record of interactions such that a time series of events could be constructed and reconstructed. The third year addressed tools to promote desktop productivity, creating a capability to query episodic logs in real time allowing the model of a user to build on itself based on observations of the user's behavior.

Epetra developers coding guidelines

Heroux, Michael A.

Epetra is a package of classes for the construction and use of serial and distributed parallel linear algebra objects. It is one of the base packages in Trilinos. This document describes guidelines for Epetra coding style. The issues discussed here go beyond correct C++ syntax to address issues that make code more readable and self-consistent. The guidelines presented here are intended to aid current and future development of Epetra specifically. They reflect design decisions that were made in the early development stages of Epetra. Some of the guidelines are contrary to more commonly used conventions, but we choose to continue these practices for the purposes of self-consistency. These guidelines are intended to be complementary to policies established in the Trilinos Developers Guide.

Unique Signal mathematical analysis task group FY03 status report

Cooper, James A.; Johnston, Anna M.

The Unique Signal is a key constituent of Enhanced Nuclear Detonation Safety (ENDS). Although the Unique Signal approach is well prescribed and mathematically assured, there are numerous unsolved mathematical problems that could help assess the risk of deviations from the ideal approach. Some of the mathematics-based results shown in this report are:

1. The risk that two patterns with poor characteristics (easily generated by inadvertent processes) could be combined through exclusive-or mixing to generate an actual Unique Signal pattern has been investigated and found to be minimal (not significant when compared to the incompatibility metric of actual Unique Signal patterns used in nuclear weapons).

2. The risk of generating actual Unique Signal patterns with linear feedback shift registers is minimal, but the patterns in use are not as invulnerable to inadvertent generation by dependent processes as previously thought (see the generic sketch after this list).

3. New methods of testing pair-wise incompatibility threats have found no significant problems with the set of Unique Signal patterns currently used. Any new patterns introduced would have to be carefully assessed for compatibility with existing patterns, since some new patterns under consideration were found to be deficient when associated with other patterns in use.

4. Markov models were shown to correspond to some of the engineered properties of Unique Signal sequences, giving new support for the original design objectives.

5. Potential dependence among events (caused by a variety of communication protocols) has been studied. New evidence has been derived of the risk associated with combined communication of multiple events, and of the improvement in abnormal-environment safety that can be achieved through separate-event communication.
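
For readers unfamiliar with the mechanism in item 2, a linear feedback shift register derives each new bit from a fixed XOR of tapped positions; the sketch below is entirely generic, with taps taken from a standard textbook example and no relation to any actual Unique Signal pattern.

```cpp
// Generic 16-bit Fibonacci LFSR sketch: the feedback bit is the XOR of the
// tapped positions; the register shifts one place per step. The textbook
// taps (16, 14, 13, 11) give a maximal-length sequence at this width.
#include <cstdint>
#include <cstdio>

int main() {
  uint16_t lfsr = 0xACE1u;  // arbitrary nonzero seed
  for (int i = 0; i < 16; ++i) {
    const uint16_t bit =
        ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
    lfsr = static_cast<uint16_t>((lfsr >> 1) | (bit << 15));
    std::printf("%u", lfsr >> 15);  // emit the newly inserted feedback bit
  }
  std::printf("\n");
  return 0;
}
```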

ChemCell: a particle-based model of protein chemistry and diffusion in microbial cells

Plimpton, Steven J.; Slepoy, Alexander S.

Prokaryotic single-cell microbes are the simplest of all self-sufficient living organisms. Yet microbes create and use much of the molecular machinery present in more complex organisms, and the macro-molecules in microbial cells interact in regulatory, metabolic, and signaling pathways that are prototypical of the reaction networks present in all cells. We have developed a simple simulation model of a prokaryotic cell that treats proteins, protein complexes, and other organic molecules as particles which diffuse via Brownian motion and react with nearby particles in accord with chemical rate equations. The code models protein motion and chemistry within an idealized cellular geometry. It has been used to simulate several simple reaction networks and compared to more idealized models which do not include spatial effects. In this report we describe an initial version of the simulation code that was developed with FY03 funding. We discuss the motivation for the model, highlight its underlying equations, and describe simulations of a 3-stage kinase cascade and a portion of the carbon fixation pathway in the Synechococcus microbe.
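
The particle update described above (Brownian displacement plus rate-equation reaction tests) can be sketched generically; the diffusion coefficient, timestep, and rate constant are placeholders, not ChemCell inputs.

```cpp
// Minimal sketch of one particle timestep in a spatial stochastic model:
// a Brownian displacement with diffusion coefficient D, then a first-order
// reaction fired with probability 1 - exp(-k * dt).
#include <cmath>
#include <random>

struct Particle { double x = 0, y = 0, z = 0; bool reacted = false; };

void step(Particle& p, double D, double k, double dt, std::mt19937& rng) {
  std::normal_distribution<double> gauss(0.0, std::sqrt(2.0 * D * dt));
  p.x += gauss(rng);  // independent Gaussian displacement per coordinate
  p.y += gauss(rng);
  p.z += gauss(rng);
  std::uniform_real_distribution<double> uni(0.0, 1.0);
  if (uni(rng) < 1.0 - std::exp(-k * dt))  // first-order rate law
    p.reacted = true;  // e.g., convert species, bind, or mark for removal
}
```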

Improved kinematic options in ALEGRA

Robinson, Allen C.; Farnsworth, Grant V.

Algorithms for higher order accuracy modeling of kinematic behavior within the ALEGRA framework are presented. These techniques improve the behavior of the code when kinematic errors are found, ensure orthonormality of the rotation tensor at each time step, and increase the accuracy of the Lagrangian stretch and rotation tensor update algorithm. The implementation of these improvements in ALEGRA is described. A short discussion of issues related to improving the accuracy of the stress update procedures is also included.
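
One standard way to enforce the orthonormality property mentioned above is a Newton-Schulz iteration toward the orthogonal factor of the polar decomposition; the abstract does not specify ALEGRA's actual algorithm, so the sketch below is only illustrative.

```cpp
// Minimal sketch: re-orthonormalize a 3x3 rotation tensor R via the
// Newton-Schulz iteration R <- R * (3I - R^T R) / 2, which converges to
// the orthogonal polar factor when R is already nearly orthonormal.
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;

static Mat3 matmul(const Mat3& A, const Mat3& B) {
  Mat3 C{};
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j)
      for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[k][j];
  return C;
}

void reorthonormalize(Mat3& R, int iterations = 3) {
  for (int n = 0; n < iterations; ++n) {
    Mat3 RtR{};  // R^T R
    for (int i = 0; i < 3; ++i)
      for (int j = 0; j < 3; ++j)
        for (int k = 0; k < 3; ++k) RtR[i][j] += R[k][i] * R[k][j];
    Mat3 M{};    // (3I - R^T R) / 2
    for (int i = 0; i < 3; ++i)
      for (int j = 0; j < 3; ++j)
        M[i][j] = 0.5 * ((i == j ? 3.0 : 0.0) - RtR[i][j]);
    R = matmul(R, M);
  }
}
```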

Large deformation solid-fluid interaction via a level set approach

Rao, Rekha R.; Noble, David R.; Schunk, Peter R.; Wilkes, Edward D.; Baer, Thomas A.; Notz, Patrick K.

Solidification and blood flow seemingly have little in common, but each involves a fluid in contact with a deformable solid. In these systems, the solid-fluid interface moves as the solid advects and deforms, often traversing the entire domain of interest. Currently, these problems cannot be simulated without innumerable expensive remeshing steps, mesh manipulations or decoupling the solid and fluid motion. Despite the wealth of progress recently made in mechanics modeling, this glaring inadequacy persists. We propose a new technique that tracks the interface implicitly and circumvents the need for remeshing and remapping the solution onto the new mesh. The solid-fluid boundary is tracked with a level set algorithm that changes the equation type dynamically depending on the phases present. This novel approach to coupled mechanics problems promises to give accurate stresses, displacements and velocities in both phases, simultaneously.
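
The implicit interface tracking referenced above is governed by the standard level set advection equation; the form below is the textbook statement, not notation quoted from the abstract.

```latex
% The interface \Gamma(t) is the zero contour of a scalar field \phi(x, t)
% advected by the local material velocity u:
\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0,
\qquad \Gamma(t) = \{\, x : \phi(x, t) = 0 \,\}
% With \phi < 0 on the solid side and \phi > 0 on the fluid side, the sign
% of \phi determines which equations are assembled at each point, which is
% how the equation type can change dynamically with the phases present.
```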

High throughput instruments, methods, and informatics for systems biology

Davidson, George S.; Sinclair, Michael B.; Thomas, Edward V.; Werner-Washburne, Margaret C.; Martin, Shawn; Boyack, Kevin W.; Wylie, Brian N.; Haaland, David M.; Timlin, Jerilyn A.; Keenan, Michael R.

High throughput instruments and analysis techniques are required in order to make good use of the genomic sequences that have recently become available for many species, including humans. These instruments and methods must work with tens of thousands of genes simultaneously, and must be able to identify the small subsets of those genes that are implicated in the observed phenotypes, or, for instance, in responses to therapies. Microarrays represent one such high throughput method, which continue to find increasingly broad application. This project has improved microarray technology in several important areas. First, we developed the hyperspectral scanner, which has discovered and diagnosed numerous flaws in techniques broadly employed by microarray researchers. Second, we used a series of statistically designed experiments to identify and correct errors in our microarray data to dramatically improve the accuracy, precision, and repeatability of the microarray gene expression data. Third, our research developed new informatics techniques to identify genes with significantly different expression levels. Finally, natural language processing techniques were applied to improve our ability to make use of online literature annotating the important genes. In combination, this research has improved the reliability and precision of laboratory methods and instruments, while also enabling substantially faster analysis and discovery.
