Publications


Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report

Murphy, Richard C.

This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.


Final Report on LDRD project 130784 : functional brain imaging by tunable multi-spectral Event-Related Optical Signal (EROS)

Hsu, Alan Y.; Speed, Ann S.

Functional brain imaging is of great interest for understanding correlations between specific cognitive processes and underlying neural activity. This understanding can provide the foundation for developing enhanced human-machine interfaces, decision aids, and enhanced cognition at the physiological level. The functional near infrared spectroscopy (fNIRS) based event-related optical signal (EROS) technique can provide direct, high-fidelity measures of temporal and spatial characteristics of neural networks underlying cognitive behavior. However, current EROS systems are hampered by poor signal-to-noise ratio (SNR) and depth of measure, limiting areas of the brain and associated cognitive processes that can be investigated. We propose to investigate a flexible, tunable, multi-spectral fNIRS EROS system which will provide up to 10x greater SNR as well as improved spatial and temporal resolution through significant improvements in electronics, optoelectronics and optics, and which will also contribute to the physiological foundation of higher-order cognitive processes and provide the technical foundation for miniaturized portable neuroimaging systems.


LDRD final report : massive multithreading applied to national infrastructure and informatics

Barrett, Brian B.; Hendrickson, Bruce A.; Laviolette, Randall A.; Leung, Vitus J.; Mackey, Greg; Murphy, Richard C.; Phillips, Cynthia A.; Pinar, Ali P.

Large relational datasets such as national-scale social networks and power grids present different computational challenges than do physical simulations. Sandia's distributed-memory supercomputers are well suited for solving problems concerning the latter, but not the former. The reason is that problems such as pattern recognition and knowledge discovery on large networks are dominated by memory latency and not by computation. Furthermore, most memory requests in these applications are very small, and when the datasets are large, most requests miss the cache. The result is extremely low utilization. We are unlikely to be able to grow out of this problem with conventional architectures. As the power density of microprocessors has approached that of a nuclear reactor in the past two years, we have seen a leveling of Moore's Law. Building larger and larger microprocessor-based supercomputers is not a solution for informatics and network infrastructure problems since the additional processors are utilized to only a tiny fraction of their capacity. An alternative solution is to use the paradigm of massive multithreading with a large shared memory. There is only one instance of this paradigm today: the Cray MTA-2. The proposal team has unique experience with and access to this machine. The XMT, which is now being delivered, is a Red Storm machine with up to 8192 multithreaded 'Threadstorm' processors and 128 TB of shared memory. For many years, the XMT will be the only way to address very large graph problems efficiently, and future generations of supercomputers will include multithreaded processors. Roughly 10 MTA processors can solve a simple shortest-paths problem in the time taken by the Gordon Bell Prize-nominated distributed memory code on 32,000 processors of Blue Gene/Light. We have developed algorithms and open-source software for the XMT, and have modified that software to run some of these algorithms on other multithreaded platforms such as the Sun Niagara and Opteron multi-core chips.


Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing

Pedretti, Kevin T.T.; Levenhagen, Michael J.; Brightwell, Ronald B.

Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.


HPC application fault-tolerance using transparent redundant computation

Ferreira, Kurt; Riesen, Rolf; Oldfield, Ron A.; Brightwell, Ronald B.; Laros, James H.; Pedretti, Kevin P.

As the core counts of HPC machines continue to grow, issues such as fault tolerance and reliability are becoming limiting factors for application scalability. Current techniques to ensure progress across faults, for example coordinated checkpoint-restart, are unsuitable for machines of this scale due to their predicted high overheads. In this study, we present the design and implementation of a novel system for ensuring reliability which uses transparent, rank-level, redundant computation. Using this system, we show the overheads involved in redundant computation for a number of real-world HPC applications. Additionally, we relate the communication characteristics of an application to the overheads observed.


Algebraic connectivity and graph robustness

Feddema, John T.

Recent papers have used Fiedler's definition of algebraic connectivity to show that network robustness, as measured by node-connectivity and edge-connectivity, can be increased by increasing the algebraic connectivity of the network. By the definition of algebraic connectivity, the second smallest eigenvalue of the graph Laplacian is a lower bound on the node-connectivity. In this paper we show that for circular random lattice graphs and mesh graphs algebraic connectivity is a conservative lower bound, and that increases in algebraic connectivity actually correspond to a decrease in node-connectivity. This means that the networks are actually less robust with respect to node-connectivity as the algebraic connectivity increases. However, an increase in algebraic connectivity seems to correlate well with a decrease in the characteristic path length of these networks - which would result in quicker communication through the network. Applications of these results are then discussed for perimeter security.
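
To make these quantities concrete, the following short Python sketch (using the networkx library; the 6x6 mesh graph is a hypothetical example, not one of the networks studied in the paper) computes both the algebraic connectivity and the node-connectivity of a small mesh graph, illustrating that Fiedler's eigenvalue bound can sit well below the actual node-connectivity.

import networkx as nx

# Hypothetical example: a small 6x6 mesh (grid) graph.
G = nx.grid_2d_graph(6, 6)

# Algebraic connectivity: second-smallest eigenvalue of the graph Laplacian.
lam2 = nx.algebraic_connectivity(G)

# Node-connectivity: minimum number of vertices whose removal disconnects G.
kappa = nx.node_connectivity(G)

# Fiedler's result gives lam2 <= kappa for non-complete graphs, so lam2 is
# only a (possibly very conservative) lower bound on robustness.
print(f"algebraic connectivity = {lam2:.4f}, node connectivity = {kappa}")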


IceT users' guide and reference

Moreland, Kenneth D.

The Image Composition Engine for Tiles (IceT) is a high-performance sort-last parallel rendering library. In addition to providing accelerated rendering for a standard display, IceT provides the unique ability to generate images for tiled displays. The overall resolution of the display may be several times larger than any viewport that may be rendered by a single machine. This document is an overview of the user interface to IceT.


Using adversary text to detect adversary phase changes

Doser, Adele D.; Speed, Ann S.; Warrender, Christina E.

The purpose of this work was to help develop a research roadmap and small proof of concept for addressing key problems and gaps from the perspective of using text analysis methods as a primary tool for detecting when a group is undergoing a phase change. Self-organizing map (SOM) techniques were used to analyze text data obtained from the world-wide web. Statistical studies indicate that it may be possible to predict phase changes, as well as detect whether or not an example of writing can be attributed to a group of interest.


Parallel phase model : a programming model for high-end parallel machines with manycores

Brightwell, Ronald B.; Heroux, Michael A.; Wen, Zhaofang W.

This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C, and the implementation of PPM also includes a lightweight runtime library that runs on top of an existing network communication software layer (e.g., MPI). The design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.


An extensible operating system design for large-scale parallel machines

Riesen, Rolf; Ferreira, Kurt

Running untrusted user-level code inside an operating system kernel was studied in the 1990s but has not really caught on. We believe the time has come to resurrect kernel extensions for operating systems that run on highly-parallel clusters and supercomputers. The reason is that the usage model for these machines differs significantly from a desktop machine or a server. In addition, vendors are starting to add features, such as floating-point accelerators, multicore processors, and reconfigurable compute elements. An operating system for such machines must be adaptable to the requirements of specific applications and provide abstractions to access next-generation hardware features, without sacrificing performance or scalability.


Algorithmic properties of the midpoint predictor-corrector time integrator

Love, Edward L.; Scovazzi, Guglielmo S.; Rider, William J.

Algorithmic properties of the midpoint predictor-corrector time integration algorithm are examined. In the case of a finite number of iterations, the errors in angular momentum conservation and incremental objectivity are controlled by the number of iterations performed. Exact angular momentum conservation and exact incremental objectivity are achieved in the limit of an infinite number of iterations. A complete stability and dispersion analysis of the linearized algorithm is detailed. The main observation is that stability depends critically on the number of iterations performed.
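
For reference, a generic iterated midpoint predictor-corrector update for $\dot{x} = f(x)$ takes the following form (a schematic sketch in notation of my own choosing; the paper's algorithm is formulated for solid mechanics and carries additional structure):

\[
x_{n+1}^{(0)} = x_n + \Delta t\, f(x_n), \qquad
x_{n+1}^{(k+1)} = x_n + \Delta t\, f\!\left(\tfrac{1}{2}\bigl(x_n + x_{n+1}^{(k)}\bigr)\right), \quad k = 0, 1, \dots, K-1 .
\]

With a finite iteration count $K$ the update only approximates the implicit midpoint rule, which matches the abstract's observation that conservation, objectivity, and stability all depend on the number of iterations performed; the exact properties are recovered only as $K \to \infty$.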


Verification of complex codes

Ober, Curtis C.

Over the past several years, verifying and validating complex codes at Sandia National Laboratories has become a major part of code development. These aspects tackle two important parts of simulation modeling: determining whether the models have been correctly implemented (verification), and determining whether the correct models have been selected (validation). In this talk, we will focus on verification and discuss the basics of code verification and its application to a few codes and problems at Sandia.


Xyce Parallel Electronic Simulator : reference guide, version 4.1

Keiter, Eric R.; Mei, Ting M.; Russo, Thomas V.; Pawlowski, Roger P.; Schiek, Richard S.; Santarelli, Keith R.; Coffey, Todd S.; Thornquist, Heidi K.

This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.


Xyce Parallel Electronic Simulator : users' guide, version 4.1

Keiter, Eric R.; Mei, Ting M.; Russo, Thomas V.; Pawlowski, Roger P.; Schiek, Richard S.; Santarelli, Keith R.; Coffey, Todd S.; Thornquist, Heidi K.

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.


Enabling immersive simulation

Abbott, Robert G.; Basilico, Justin D.; Glickman, Matthew R.; Hart, Derek H.; Whetzel, Jonathan H.

The object of the 'Enabling Immersive Simulation for Complex Systems Analysis and Training' LDRD has been to research, design, and engineer a capability to develop simulations which (1) provide a rich, immersive interface for participation by real humans (exploiting existing high-performance game-engine technology wherever possible), and (2) can leverage Sandia's substantial investment in high-fidelity physical and cognitive models implemented in the Umbra simulation framework. We report here on these efforts. First, we describe the integration of Sandia's Umbra modular simulation framework with the open-source Delta3D game engine. Next, we report on Umbra's integration with Sandia's Cognitive Foundry, specifically to provide for learning behaviors for 'virtual teammates' directly from observed human behavior. Finally, we describe the integration of Delta3D with the ABL behavior engine, and report on research into establishing the theoretical framework that will be required to make use of tools like ABL to scale up to increasingly rich and realistic virtual characters.


EEG analyses with SOBI

Glickman, Matthew R.

The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.


An optimization approach for fitting canonical tensor decompositions

Acar Ataman, Evrim N.; Dunlavy, Daniel D.

Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
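
As a reference point for the discussion above, the third-order CP model (written here in standard notation; the paper treats tensors of general order) approximates a tensor by a sum of $R$ rank-one terms, and fitting it is a single least-squares problem over all factor matrices:

\[
\mathcal{X} \approx \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r,
\qquad
\min_{A,\,B,\,C}\; \tfrac{1}{2}\Bigl\| \mathcal{X} - \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r \Bigr\|_F^2 .
\]

ALS cycles through exact least-squares solves for $A$, $B$, and $C$ in turn, whereas the gradient-based approach optimizes over all factor matrices simultaneously using the gradient of this single objective.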


The Design for Tractable Analysis (DTA) Framework: A Methodology for the Analysis and Simulation of Complex Systems

International Journal of Decision Support System Technology (IJDSST)

Linebarger, John M.; De Spain, Mark J.; McDonald, Michael J.; Spencer, Floyd W.; Cloutier, Robert J.

The Design for Tractable Analysis (DTA) framework was developed to address the analysis of complex systems and so-called “wicked problems.” DTA is distinctive because it treats analytic processes as key artifacts that can be created and improved through formal design processes. Systems (or enterprises) are analyzed as a whole, in conjunction with decomposing them into constituent elements for domain-specific analyses that are informed by the whole. After using the Systems Modeling Language (SysML) to frame the problem in the context of stakeholder needs, DTA harnesses the Design Structure Matrix (DSM) to structure the analysis of the system and address questions about the emergent properties of the system. The novel use of DSM to “design the analysis” makes DTA particularly suitable for addressing the interdependent nature of complex systems. The use of DTA is demonstrated by a case study of sensor grid placement decisions to secure assets at a fixed site. © 2009, IGI Global. All rights reserved.


Performance of an MPI-only semiconductor device simulator on a quad socket/quad core InfiniBand platform

Shadid, John N.

This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicated that the multilevel preconditioner, which is critical for large-scale capability type simulations, scales better on the Red Storm machine than the TLCC machine.


On the two-domain equations for gas chromatography

Romero, L.A.; Parks, Michael L.

We present an analysis of gas chromatographic columns where the stationary phase is not assumed to be a thin uniform coating along the walls of the cross section. We also give an asymptotic analysis assuming that the parameter $\beta = K D^{II} \rho^{II} / (D^{I} \rho^{I})$ is small. Here $K$ is the partition coefficient, and $D^{i}$ and $\rho^{i}$, $i = I, II$, are the diffusivity and density in the mobile ($i = I$) and stationary ($i = II$) regions.


Interoperable mesh components for large-scale, distributed-memory simulations

Journal of Physics: Conference Series

Devine, Karen D.; Diachin, L.; Kraftcheck, J.; Jansen, K.E.; Leung, Vitus J.; Luo, X.; Miller, M.; Ollivier-Gooch, C.; Ovcharenko, A.; Sahni, O.; Shephard, M.S.; Tautges, T.; Xie, T.; Zhou, M.

SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. In this paper, we describe a software component - an abstract data model and programming interface - designed to provide support for parallel unstructured mesh operations. We describe key issues that must be addressed to successfully provide high-performance, distributed-memory unstructured mesh services and highlight some recent research accomplishments in developing new load balancing and MPI-based communication libraries appropriate for leadership class computing. Finally, we give examples of the use of parallel adaptive mesh modification in two SciDAC applications. © 2009 IOP Publishing Ltd.


DOE's Institute for Advanced Architecture and Algorithms: An application-driven approach

Journal of Physics: Conference Series

Murphy, Richard C.

This paper describes an application-driven methodology for understanding the impact of future architecture decisions at the end of the MPP era. Fundamental transistor device limitations combined with application performance characteristics have driven the switch to multicore/multithreaded architectures. Designing large-scale supercomputers to match application demands is particularly challenging since performance characteristics are highly counter-intuitive. In fact, data movement, rather than FLOPS, dominates. This work discusses some basic performance analysis for a set of DOE applications, the limits of CMOS technology, and the impact of both on future architectures. © 2009 IOP Publishing Ltd.


Current trends in parallel computation and the implications for modeling and optimization

Computer Aided Chemical Engineering

Siirola, John D.


Finite element solution of optimal control problems arising in semiconductor modeling

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Bochev, Pavel B.; Ridzal, Denis R.

Optimal design, parameter estimation, and inverse problems arising in the modeling of semiconductor devices lead to optimization problems constrained by systems of PDEs. We study the impact of different state equation discretizations on optimization problems whose objective functionals involve flux terms. Galerkin methods, in which the flux is a derived quantity, are compared with mixed Galerkin discretizations where the flux is approximated directly. Our results show that the latter approach leads to more robust and accurate solutions of the optimization problem, especially for highly heterogeneous materials with large jumps in material properties. © 2008 Springer.


New applications of the verdict library for standardized mesh verification pre, post, and end-to-end processing

Proceedings of the 16th International Meshing Roundtable, IMR 2007

Pébay, Philippe P.; Thompson, David; Shepherd, Jason F.; Knupp, Patrick K.; Lisle, Curtis; Magnotta, Vincent A.; Grosland, Nicole M.

Verdict is a collection of subroutines for evaluating the geometric qualities of triangles, quadrilaterals, tetrahedra, and hexahedra using a variety of functions. A quality is a real number assigned to one of these shapes depending on its particular vertex coordinates. These functions are used to evaluate the input to finite element, finite volume, boundary element, and other types of solvers that approximate the solution to partial differential equations defined over regions of space. This article describes the most recent version of Verdict and provides a summary of the main properties of the quality functions offered by the library. It finally demonstrates the versatility and applicability of Verdict by illustrating its use in several scientific applications that pertain to pre, post, and end-to-end processing.
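
As a concrete illustration of what a quality function is, the Python sketch below implements one common triangle quality measure, the normalized ratio of area to summed squared edge lengths, which equals 1 for an equilateral triangle and tends to 0 as the triangle degenerates. This is only an illustrative stand-in and is not claimed to reproduce Verdict's exact metric definitions.

import math

def triangle_quality(p0, p1, p2):
    # Quality = 4*sqrt(3)*area / (sum of squared edge lengths):
    # 1.0 for an equilateral triangle, approaching 0 for a degenerate (sliver) triangle.
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    area = 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
    edges_sq = ((x1 - x0) ** 2 + (y1 - y0) ** 2 +
                (x2 - x1) ** 2 + (y2 - y1) ** 2 +
                (x0 - x2) ** 2 + (y0 - y2) ** 2)
    return 4.0 * math.sqrt(3.0) * area / edges_sq if edges_sq > 0 else 0.0

print(triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # ~1.0, equilateral
print(triangle_quality((0, 0), (1, 0), (0.5, 0.01)))              # near 0, sliver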


pCAMAL: An embarrassingly parallel hexahedral mesh generator

Proceedings of the 16th International Meshing Roundtable, IMR 2007

Pébay, Philippe P.; Stephenson, Michael B.; Fortier, Leslie A.; Owen, Steven J.; Melander, Darryl J.

This paper describes a distributed-memory, embarrassingly parallel hexahedral mesh generator, pCAMAL (parallel CUBIT Adaptive Mesh Algorithm Library). pCAMAL utilizes the sweeping method following a serial step of geometry decomposition conducted in the CUBIT geometry preparation and mesh generation tool. The utility of pCAMAL in generating large meshes is illustrated, and linear speed-up under load-balanced conditions is demonstrated.


Limited-memory techniques for sensor placement in water distribution networks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hart, William E.; Berry, Jonathan W.; Boman, Erik G.; Phillips, Cynthia A.; Riesen, Lee A.; Watson, Jean-Paul W.

The practical utility of optimization technologies is often impacted by factors that reflect how these tools are used in practice, including whether various real-world constraints can be adequately modeled, the sophistication of the analysts applying the optimizer, and related environmental factors (e.g., whether a company is willing to trust predictions from computational models). Other features are less appreciated, but of equal importance in terms of dictating the successful use of optimization. These include the scale of problem instances, which in practice drives the development of approximate solution techniques, and constraints imposed by the target computing platforms. End-users often lack state-of-the-art computers, and thus runtime and memory limitations are often a significant, limiting factor in algorithm design. When coupled with large problem scale, the result is a significant technological challenge. We describe our experience developing and deploying both exact and heuristic algorithms for placing sensors in water distribution networks to mitigate damage due to the intentional or accidental introduction of contaminants. The target computing platforms for this application have motivated limited-memory techniques that can optimize large-scale sensor placement problems. © 2008 Springer Berlin Heidelberg.


Implementing peridynamics within a molecular dynamics code

Computer Physics Communications

Parks, Michael L.; Lehoucq, Richard B.; Plimpton, Steven J.; Silling, Stewart A.

Peridynamics (PD) is a continuum theory that employs a nonlocal model to describe material properties. In this context, nonlocal means that continuum points separated by a finite distance may exert force upon each other. A meshless method results when PD is discretized with material behavior approximated as a collection of interacting particles. This paper describes how PD can be implemented within a molecular dynamics (MD) framework, and provides details of an efficient implementation. This adds a computational mechanics capability to an MD code, enabling simulations at mesoscopic or even macroscopic length and time scales. © 2008 Elsevier B.V.
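
For context, the bond-based peridynamic equation of motion that such a discretization approximates is (standard notation, restated here rather than quoted from the paper)

\[
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t)
= \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t) - \mathbf{u}(\mathbf{x},t),\; \mathbf{x}' - \mathbf{x}\bigr)\, dV_{\mathbf{x}'} + \mathbf{b}(\mathbf{x},t),
\]

where $\mathcal{H}_{\mathbf{x}}$ is the neighborhood (horizon) of $\mathbf{x}$. Replacing the integral by a sum over particles $j$ inside the horizon of particle $i$,

\[
\rho_i\, \ddot{\mathbf{u}}_i = \sum_{j \in \mathcal{H}_i} \mathbf{f}(\mathbf{u}_j - \mathbf{u}_i,\, \mathbf{x}_j - \mathbf{x}_i)\, V_j + \mathbf{b}_i ,
\]

gives exactly the short-range pairwise-force structure that an MD neighbor-list framework already provides, which is what makes the MD implementation natural.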


Remarks on mesh quality

46th AIAA Aerospace Sciences Meeting and Exhibit

Knupp, Patrick K.

Various aspects of mesh quality are surveyed to clarify the disconnect between the traditional uses of mesh quality metrics within industry and the fact that quality ultimately depends on the solution to the physical problem. Truncation error analysis for finite difference methods reveals no clear connection to most traditional mesh quality metrics. Finite element bounds to the interpolation error can be shown, in some cases, to be related to known quality metrics such as the condition number. On the other hand, the use of quality metrics that do not take solution characteristics into account can be valid in certain circumstances, primarily as a means of automatically detecting defective meshes. The use of such metrics when applied to simulations for which quality is highly dependent on the physical solution is clearly inappropriate. Various flaws and problems with existing quality metrics are mentioned, along with a discussion on the use of threshold values. In closing, the author advocates the investigation of explicitly-referenced quality metrics as a potential means of bridging the gap between a priori quality metrics and solution-dependent metrics.


Low-memory Lagrangian relaxation methods for sensor placement in municipal water networks

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Berry, Jonathan W.; Boman, Erik G.; Phillips, Cynthia A.; Riesen, Lee A.

Placing sensors in municipal water networks to protect against a set of contamination events is a classic p-median problem for most objectives when we assume that sensors are perfect. Many researchers have proposed exact and approximate solution methods for this p-median formulation. For full-scale networks with large contamination event suites, one must generally rely on heuristic methods to generate solutions. These heuristics provide feasible solutions, but give no quality guarantee relative to the optimal placement. In this paper we apply a Lagrangian relaxation method in order to compute lower bounds on the expected impact of suites of contamination events. In all of our experiments with single objectives, these lower bounds establish that the GRASP local search method generates solutions that are provably optimal to within a fraction of a percentage point. Our Lagrangian heuristic also provides good solutions itself and requires only a fraction of the memory of GRASP. We conclude by describing two variations of the Lagrangian heuristic: an aggregated version that trades off solution quality for further memory savings, and a multi-objective version which balances objectives with additional goals. © 2008 ASCE.
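
To sketch the bounding idea in generic p-median notation (my own choice of symbols, not necessarily the exact model used in the paper): let $d_{ai}$ be the impact of contamination event $a$ if it is first detected by a sensor at location $i$, $\alpha_a$ the weight of event $a$, $x_{ai} \in \{0,1\}$ indicate that event $a$ is detected at $i$, and $s_i \in \{0,1\}$ indicate that a sensor is placed at $i$. The expected-impact p-median problem and the Lagrangian obtained by relaxing the assignment constraints with multipliers $\lambda_a$ are

\[
\min_{x,\,s} \sum_{a} \alpha_a \sum_{i} d_{ai}\, x_{ai}
\quad \text{s.t.} \quad \sum_{i} x_{ai} = 1 \;\; \forall a, \qquad x_{ai} \le s_i, \qquad \sum_i s_i = p,
\]
\[
L(\lambda) = \min_{x,\,s} \sum_{a}\sum_{i} \bigl(\alpha_a d_{ai} - \lambda_a\bigr) x_{ai} + \sum_a \lambda_a
\quad \text{s.t.} \quad x_{ai} \le s_i, \qquad \sum_i s_i = p .
\]

For every choice of $\lambda$, $L(\lambda)$ is a valid lower bound on the optimal expected impact, and the inner minimization decomposes by sensor location, so it needs far less memory than the full assignment formulation; this is the sense in which the Lagrangian heuristic certifies the near-optimality of the GRASP solutions.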


Preparing for the aftermath: Using emotional agents in game-based training for disaster response

2008 IEEE Symposium on Computational Intelligence and Games, CIG 2008

Djordjevich Reyna, Donna D.; Xavier, Patrick G.; Bernard, Michael L.; Whetzel, Jonathan H.; Glickman, Matthew R.; Verzi, Stephen J.

Ground Truth, a training game developed by Sandia National Laboratories in partnership with the University of Southern California GamePipe Lab, puts a player in the role of an Incident Commander working with teammate agents to respond to urban threats. These agents simulate certain emotions that a responder may feel during this high-stress situation. We construct psychologically plausible models compliant with the Sandia Human Embodiment and Representation Cognitive Architecture (SHERCA) that are run on the Sandia Cognitive Runtime Engine with Active Memory (SCREAM) software. SCREAM's computational representations for modeling human decision-making combine aspects of ANNs and fuzzy logic networks. This paper gives an overview of Ground Truth and discusses the adaptation of the SHERCA and SCREAM into the game. We include a semiformal description of SCREAM. ©2008 IEEE.


The TEVA-SPOT toolkit for drinking water contaminant warning system design

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Hart, William E.; Berry, Jonathan W.; Boman, Erik G.; Murray, Regan; Phillips, Cynthia A.; Riesen, Lee A.; Watson, Jean-Paul W.

We present the TEVA-SPOT Toolkit, a sensor placement optimization tool developed within the USEPA TEVA program. The TEVA-SPOT Toolkit provides a sensor placement framework that facilitates research in sensor placement optimization and enables the practical application of sensor placement solvers to real-world CWS design applications. This paper provides an overview of its key features, and then illustrates how this tool can be flexibly applied to solve a variety of different types of sensor placement problems. © 2008 ASCE.


Tolerating the community detection resolution limit with edge weighting

Proposed for publication in the Proceedings of the National Academy of Sciences.

Hendrickson, Bruce A.; Laviolette, Randall A.; Phillips, Cynthia A.; Berry, Jonathan W.

Communities of vertices within a giant network such as the World-Wide-Web are likely to be vastly smaller than the network itself. However, Fortunato and Barthelemy have proved that modularity maximization algorithms for community detection may fail to resolve communities with fewer than $\sqrt{L/2}$ edges, where $L$ is the number of edges in the entire network. This resolution limit leads modularity maximization algorithms to have notoriously poor accuracy on many real networks. Fortunato and Barthelemy's argument can be extended to networks with weighted edges as well, and we derive this corollary argument. We conclude that weighted modularity algorithms may fail to resolve communities with fewer than $\sqrt{W\epsilon/2}$ total edge weight, where $W$ is the total edge weight in the network and $\epsilon$ is the maximum weight of an inter-community edge. If $\epsilon$ is small, then small communities can be resolved. Given a weighted or unweighted network, we describe how to derive new edge weights in order to achieve a low $\epsilon$, we modify the 'CNM' community detection algorithm to maximize weighted modularity, and show that the resulting algorithm has greatly improved accuracy. In experiments with an emerging community standard benchmark, we find that our simple CNM variant is competitive with the most accurate community detection methods yet proposed.
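
For reference, the weighted modularity being maximized is the standard generalization of Newman's modularity (restated here rather than quoted from the paper):

\[
Q_w = \frac{1}{2W} \sum_{i,j} \left( w_{ij} - \frac{s_i s_j}{2W} \right) \delta(c_i, c_j),
\qquad W = \tfrac{1}{2}\sum_{i,j} w_{ij}, \qquad s_i = \sum_{j} w_{ij},
\]

where $c_i$ is the community of vertex $i$ and $s_i$ its strength (weighted degree). Substituting weights into the Fortunato-Barthelemy argument replaces the unweighted resolution limit $\sqrt{L/2}$ by $\sqrt{W\epsilon/2}$, so choosing edge weights that make inter-community edges light (small $\epsilon$) pushes the limit down to very small communities.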


Improved parallel data partitioning by nested dissection with applications to information retrieval

Proposed for publication in Parallel Computing.

Boman, Erik G.; Chevalier, Cedric C.

The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
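
The communication-volume objective can be made concrete with a small sketch. The Python function below (a hypothetical helper using scipy; it handles only the simple 1D case in which row i of A and the vector entries x[i], y[i] share an owner, whereas the paper studies richer 2D partitions) counts the total communication volume of a sparse matrix-vector product: each entry x[j] must be sent once to every non-owning part that holds a nonzero in column j.

import numpy as np
import scipy.sparse as sp

def comm_volume_1d(A, part):
    """Words communicated in y = A @ x for a 1D row partition `part`
    (part[i] owns row i of A and the vector entries x[i], y[i])."""
    A = sp.csr_matrix(A)
    touching = {}  # column j -> set of parts holding a nonzero in column j
    for i in range(A.shape[0]):
        for j in A.indices[A.indptr[i]:A.indptr[i + 1]]:
            touching.setdefault(int(j), set()).add(int(part[i]))
    return sum(len(parts - {int(part[j])}) for j, parts in touching.items())

A = sp.random(8, 8, density=0.4, random_state=0, format="csr")
print(comm_volume_1d(A, part=np.array([0, 0, 0, 0, 1, 1, 1, 1])))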


The Xygra gun simulation tool

Garasi, Christopher J.; Robinson, Allen C.; Russo, Thomas V.; Lamppa, Derek C.

Inductive electromagnetic launchers, or coilguns, use discrete solenoidal coils to accelerate a coaxial conductive armature. To date, Sandia has been using an internally developed code, SLINGSHOT, as a point-mass lumped circuit element simulation tool for modeling coilgun behavior for design and verification purposes. This code has shortcomings in terms of accurately modeling gun performance under stressful electromagnetic propulsion environments. To correct for these limitations, it was decided to attempt to closely couple two Sandia simulation codes, Xyce and ALEGRA, to develop a more rigorous simulation capability for demanding launch applications. This report summarizes the modifications made to each respective code and the path forward to completing interfacing between them.


Distance-avoiding sequences for extremely low-bandwidth authentication

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Collins, Michael J.; Mitchell, Scott A.

We develop a scheme for providing strong cryptographic authentication on a stream of messages which consumes very little bandwidth (as little as one bit per message) and is robust in the presence of dropped messages. Such a scheme should be useful for extremely low-power, low-bandwidth wireless sensor networks and "smart dust" applications. The tradeoffs among security, memory, bandwidth, and tolerance for missing messages give rise to several new optimization problems. We report on experimental results and derive bounds on the performance of the scheme. © 2008 Springer-Verlag Berlin Heidelberg.


Inexact newton dogleg methods

SIAM Journal on Numerical Analysis

Pawlowski, Roger P.; Simonis, Joseph P.; Walker, Homer F.; Shadid, John N.

The dogleg method is a classical trust-region technique for globalizing Newton's method. While it is widely used in optimization, including large-scale optimization via truncated-Newton approaches, its implementation in general inexact Newton methods for systems of nonlinear equations can be problematic. In this paper, we first outline a very general dogleg method suitable for the general inexact Newton context and provide a global convergence analysis for it. We then discuss certain issues that may arise with the standard dogleg implementational strategy and propose modified strategies that address them. Newton-Krylov methods have provided important motivation for this work, and we conclude with a report on numerical experiments involving a Newton-GMRES dogleg method applied to benchmark CFD problems. © 2008 Society for Industrial and Applied Mathematics.
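
To illustrate the geometry being globalized, here is a dense, exact-Newton toy version of a single dogleg step for F(x) = 0 in Python (a sketch of my own; the paper's contribution concerns the harder inexact/Krylov setting, where the Newton step is only approximate and the standard analysis behind this sketch no longer applies directly).

import numpy as np

def dogleg_step(F, J, x, delta):
    """One dogleg trust-region step for F(x) = 0 with trust radius delta."""
    f = F(x)
    Jx = J(x)
    g = Jx.T @ f                                    # gradient of 0.5*||F(x)||^2
    s_newton = np.linalg.solve(Jx, -f)              # exact Newton step
    if np.linalg.norm(s_newton) <= delta:
        return s_newton                             # Newton point lies inside the region
    t_cauchy = (g @ g) / np.linalg.norm(Jx @ g) ** 2
    s_cauchy = -t_cauchy * g                        # minimizer of the local model along -g
    if np.linalg.norm(s_cauchy) >= delta:
        return -(delta / np.linalg.norm(g)) * g     # truncated steepest-descent step
    # Otherwise walk from the Cauchy point toward the Newton point until ||s|| = delta.
    d = s_newton - s_cauchy
    a, b, c = d @ d, 2.0 * (s_cauchy @ d), s_cauchy @ s_cauchy - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return s_cauchy + tau * d

# Tiny usage example on a hypothetical 2x2 system:
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(dogleg_step(F, J, np.array([3.0, 0.0]), delta=0.5))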


Solving elliptic finite element systems in near-linear time with support preconditioners

SIAM Journal on Numerical Analysis

Boman, Erik G.; Hendrickson, Bruce A.; Vavasis, Stephen

We consider linear systems arising from the use of the finite element method for solving scalar linear elliptic problems. Our main result is that these linear systems, which are symmetric and positive semidefinite, are well approximated by symmetric diagonally dominant matrices. Our framework for defining matrix approximation is support theory. Significant graph theoretic work has already been developed in the support framework for preconditioners in the diagonally dominant case, and, in particular, it is known that such systems can be solved with iterative methods in nearly linear time. Thus, our approximation result implies that these graph theoretic techniques can also solve a class of finite element problems in nearly linear time. We show that the support number bounds, which control the number of iterations in the preconditioned iterative solver, depend on mesh quality measures but not on the problem size or shape of the domain. © 2008 Society for Industrial and Applied Mathematics.
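
The central quantity in this framework is the support number (standard support-theory notation, not quoted from the paper): for symmetric positive semidefinite $A$ and $B$,

\[
\sigma(A, B) = \min\{\, t \in \mathbb{R} : tB - A \succeq 0 \,\},
\qquad
\kappa \le \sigma(A, B)\,\sigma(B, A),
\]

where $\kappa$ is the generalized condition number of the preconditioned system. Bounding both support numbers by constants that depend only on mesh quality therefore bounds the iteration count of the preconditioned Krylov solver independently of problem size, which is the mechanism behind the near-linear running time.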


The Arctic as a test case for an assessment of climate impacts on national security

Boslough, Mark B.; Taylor, Mark A.; Zak, Bernard D.; Backus, George A.

The Arctic region is rapidly changing in a way that will affect the rest of the world. Parts of Alaska, western Canada, and Siberia are currently warming at twice the global rate. This warming trend is accelerating permafrost deterioration, coastal erosion, snow and ice loss, and other changes that are a direct consequence of climate change. Climatologists have long understood that changes in the Arctic would be faster and more intense than elsewhere on the planet, but the degree and speed of the changes were underestimated compared to recent observations. Policy makers have not yet had time to examine the latest evidence or appreciate the nature of the consequences. Thus, the abruptness and severity of an unfolding Arctic climate crisis has not been incorporated into long-range planning. The purpose of this report is to briefly review the physical basis for global climate change and Arctic amplification, summarize the ongoing observations, discuss the potential consequences, explain the need for an objective risk assessment, develop scenarios for future change, review existing modeling capabilities and the need for better regional models, and finally to make recommendations for Sandia's future role in preparing our leaders to deal with impacts of Arctic climate change on national security. Accurate and credible regional-scale climate models are still several years in the future, and those models are essential for estimating climate impacts around the globe. This study demonstrates how a scenario-based method may be used to give insights into climate impacts on a regional scale and possible mitigation. Because of our experience in the Arctic and widespread recognition of the Arctic's importance in the Earth climate system we chose the Arctic as a test case for an assessment of climate impacts on national security. Sandia can make a swift and significant contribution by applying modeling and simulation tools with internal collaborations as well as with outside organizations. Because changes in the Arctic environment are happening so rapidly, a successful program will be one that can adapt very quickly to new information as it becomes available, and can provide decision makers with projections on the 1-5 year time scale over which the most disruptive, high-consequence changes are likely to occur. The greatest short-term impact would be to initiate exploratory simulations to discover new emergent and robust phenomena associated with one or more of the following changing systems: Arctic hydrological cycle, sea ice extent, ocean and atmospheric circulation, permafrost deterioration, carbon mobilization, Greenland ice sheet stability, and coastal erosion. Sandia can also contribute to new technology solutions for improved observations in the Arctic, which is currently a data-sparse region. Sensitivity analyses have the potential to identify thresholds which would enable the collaborative development of 'early warning' sensor systems to seek predicted phenomena that might be precursory to major, high-consequence changes. Much of this work will require improved regional climate models and advanced computing capabilities. Socio-economic modeling tools can help define human and national security consequences. Formal uncertainty quantification must be an integral part of any results that emerge from this work.


Re-thinking linearized coupled-cluster theory

Proposed for publication in the Journal of Chemical Physics.

Taube, Andrew G.

Hermitian linearized coupled-cluster methods have several advantages over more conventional coupled-cluster methods including facile analytical gradients for searching a potential energy surface. A persistent failure of linearized methods, however, is the presence of singularities on the potential energy surface. A simple Tikhonov regularization procedure is introduced that can eliminate this singularity. Application of the regularized linearized coupled-cluster singles and doubles (CCSD) method to both equilibrium structures and transition states shows that it is competitive with or better than conventional CCSD, and is more amenable to parallelization.


Distributed micro-releases of bioterror pathogens : threat characterizations and epidemiology from uncertain patient observables

Adams, Brian M.; Devine, Karen D.; Najm, H.N.; Marzouk, Youssef M.

Terrorist attacks using an aerosolized pathogen preparation have gained credibility as a national security concern since the anthrax attacks of 2001. The ability to characterize the parameters of such attacks, i.e., to estimate the number of people infected, the time of infection, the average dose received, and the rate of disease spread in contemporary American society (for contagious diseases), is important when planning a medical response. For non-contagious diseases, we address the characterization problem by formulating a Bayesian inverse problem predicated on a short time-series of diagnosed patients exhibiting symptoms. To keep the approach relevant for response planning, we limit ourselves to 3.5 days of data. In computational tests performed for anthrax, we usually find these observation windows sufficient, especially if the outbreak model employed in the inverse problem is accurate. For contagious diseases, we formulated a Bayesian inversion technique to infer both pathogenic transmissibility and the social network from outbreak observations, ensuring that the two determinants of spreading are identified separately. We tested this technique on data collected from a 1967 smallpox epidemic in Abakaliki, Nigeria. We inferred, probabilistically, different transmissibilities in the structured Abakaliki population, the social network, and the chain of transmission. Finally, we developed an individual-based epidemic model to realistically simulate the spread of a rare (or eradicated) disease in a modern society. This model incorporates the mixing patterns observed in an (American) urban setting and accepts, as model input, pathogenic transmissibilities estimated from historical outbreaks that may have occurred in socio-economic environments with little resemblance to contemporary society. Techniques were also developed to simulate disease spread on static and sampled network reductions of the dynamic social networks originally in the individual-based model, yielding faster, though approximate, network-based epidemic models. These reduced-order models are useful in scenario analysis for medical response planning, as well as in computationally intensive inverse problems.
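
The non-contagious characterization problem described above has the generic Bayesian structure (schematic form only; $\theta$ and the likelihood stand in for the quantities named in the abstract, not the paper's exact model):

\[
p(\theta \mid d_{1:n}) \;\propto\; p(d_{1:n} \mid \theta)\, p(\theta),
\qquad
\theta = (\text{number infected},\ \text{time of release},\ \text{average dose}),
\]

where $d_{1:n}$ is the short time series of diagnosed cases (here, roughly 3.5 days of data) and the likelihood is supplied by the forward outbreak and incubation model; the posterior then quantifies how tightly the attack parameters are constrained by the limited observations.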


Post-processing V&V Level II ASC Milestone (2843) results

Moreland, Kenneth D.; Wilke, Jason W.; Attaway, Stephen W.; Karelitz, David B.

The 9/30/2008 ASC Level 2 Post-Processing V&V Milestone (Milestone 2843) contains functionality required by the user community for certain verification and validation tasks. These capabilities include fragment detection from CTH simulation data, fragment characterization and analysis, and fragment sorting and display operations. The capabilities were tested extensively both on sample and actual simulations. In addition, a number of stretch criteria were met including a comparison between simulated and test data, and the ability to output each fragment as an individual geometric file.


Nanoparticle flow, ordering and self-assembly

Grest, Gary S.; Brown, William M.; Lechman, Jeremy B.; Petersen, Matt K.; Plimpton, Steven J.; Schunk, Randy

Nanoparticles are now more than ever being used to tailor materials function and performance in differentiating technologies because of their profound effect on thermo-physical, mechanical and optical properties. The most feasible way to disperse particles in a bulk material or control their packing at a substrate is through fluidization in a carrier, followed by solidification through solvent evaporation/drying/curing/sintering. Unfortunately processing particles as concentrated, fluidized suspensions into useful products remains an art largely because the effect of particle shape and volume fraction on fluidic properties and suspension stability remains unexplored in a regime where particle-particle interaction mechanics is prevalent. To achieve a stronger scientific understanding of the factors that control nanoparticle dispersion and rheology we have developed a multiscale modeling approach to bridge scales between atomistic and molecular-level forces active in dense nanoparticle suspensions. At the largest length scale, two 'coarse-grained' numerical techniques have been developed and implemented to provide for high-fidelity numerical simulations of the rheological response and dispersion characteristics typical in a processing flow. The first is a coupled Navier-Stokes/discrete element method in which the background solvent is treated by finite element methods. The second is a particle based method known as stochastic rotational dynamics. These two methods provide a new capability representing a 'bridge' between the molecular scale and the engineering scale, allowing the study of fluid-nanoparticle systems over a wide range of length and timescales as well as particle concentrations. To validate these new methodologies, multi-million atoms simulations explicitly including the solvent have been carried out. These simulations have been vital in establishing the necessary 'subgrid' models for accurate prediction at a larger scale and refining the two coarse-grained methodologies.


Verification for ALEGRA using magnetized shock hydrodynamics problems

Gardiner, Thomas A.; Rider, William J.; Robinson, Allen C.

Two classical verification problems from shock hydrodynamics are adapted for verification in the context of ideal magnetohydrodynamics (MHD) by introducing strong transverse magnetic fields, and simulated using the finite element Lagrange-remap MHD code ALEGRA for purposes of rigorous code verification. The concern in these verification tests is that inconsistencies related to energy advection are inherent in Lagrange-remap formulations for MHD, such that conservation of the kinetic and magnetic components of the energy may not be maintained. Hence, total energy conservation may also not be maintained. MHD shock propagation may therefore not be treated consistently in Lagrange-remap schemes, as errors in energy conservation are known to result in unphysical shock wave speeds and post-shock states. That kinetic energy is not conserved in Lagrange-remap schemes is well known, and the correction of DeBar has been shown to eliminate the resulting errors. Here, the consequences of the failure to conserve magnetic energy are revealed using order verification in the two magnetized shock-hydrodynamics problems. Further, a magnetic analog to the DeBar correction is proposed and its accuracy evaluated using this verification testbed. Results indicate that only when the total energy is conserved, by implementing both the kinetic and magnetic components of the DeBar correction, can simulations in Lagrange-remap formulation capture MHD shock propagation accurately. Additional insight is provided by the verification results, regarding the implementation of the DeBar correction and the advection scheme.


Multilinear algebra for analyzing data with multiple linkages

Dunlavy, Daniel D.; Kolda, Tamara G.; Kegelmeyer, William P.

Link analysis typically focuses on a single type of connection, e.g., two journal papers are linked because they are written by the same author. However, often we want to analyze data that has multiple linkages between objects, e.g., two papers may have the same keywords and one may cite the other. The goal of this paper is to show that multilinear algebra provides a tool for multilink analysis. We analyze five years of publication data from journals published by the Society for Industrial and Applied Mathematics. We explore how papers can be grouped in the context of multiple link types using a tensor to represent all the links between them. A PARAFAC decomposition on the resulting tensor yields information similar to the SVD decomposition of a standard adjacency matrix. We show how the PARAFAC decomposition can be used to understand the structure of the document space and define paper-paper similarities based on multiple linkages. Examples are presented where the decomposed tensor data is used to find papers similar to a body of work (e.g., related by topic or similar to a particular author's papers), find related authors using linkages other than explicit co-authorship or citations, distinguish between papers written by different authors with the same name, and predict the journal in which a paper was published.


CPOPT : optimization for fitting CANDECOMP/PARAFAC models

Kolda, Tamara G.; Acar Ataman, Evrim N.; Dunlavy, Daniel D.

Tensor decompositions (e.g., higher-order analogues of matrix decompositions) are powerful tools for data analysis. In particular, the CANDECOMP/PARAFAC (CP) model has proved useful in many applications such as chemometrics, signal processing, and web analysis. The problem of computing the CP decomposition is typically solved using an alternating least squares (ALS) approach. We discuss the use of optimization-based algorithms for CP, including how to efficiently compute the derivatives necessary for the optimization methods. Numerical studies highlight the positive features of our CPOPT algorithms, as compared with ALS and Gauss-Newton approaches.


Qthreads: An API for programming with millions of lightweight threads

IPDPS Miami 2008 - Proceedings of the 22nd IEEE International Parallel and Distributed Processing Symposium, Program and CD-ROM

Wheeler, Kyle B.; Murphy, Richard C.; Thain, Douglas

Large scale hardware-supported multithreading, an attractive means of increasing computational power, benefits significantly from low per-thread costs. Hardware support for lightweight threads is a developing area of research. Each architecture with such support provides a unique interface, hindering development for them and comparisons between them. A portable abstraction that provides basic lightweight thread control and synchronization primitives is needed. Such an abstraction would assist in exploring both the architectural needs of large scale threading and the semantic power of existing languages. Managing thread resources is a problem that must be addressed if massive parallelism is to be popularized. The qthread abstraction enables development of large-scale multithreading applications on commodity architectures. This paper introduces the qthread API and its Unix implementation, discusses resource management, and presents performance results from the HPCCG benchmark. ©2008 IEEE.

More Details

LDRD final report for improving human effectiveness for extreme-scale problem solving : assessing the effectiveness of electronic brainstorming in an industrial setting

Dornburg, Courtney S.; Adams, Susan S.; Hendrickson, Stacey M.; Davidson, George S.

An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing difficult, real-world challenges. While industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness has been limited. Previous research, based on short-term laboratory experiments, has engaged small groups of students in answering questions irrelevant to an industrial setting. The present experiment extends current findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges over the course of four days. Employees and contractors at a national security laboratory participated, either in a group setting or individually, in an electronic brainstorm to pose solutions to a 'wickedly' difficult problem. The data demonstrate that (for this design) individuals perform at least as well as groups in the quantity of electronic ideas produced, regardless of brainstorming duration. However, when judged for quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p < 0.05) outperformed the group working together. When idea quality is the benchmark of success, these data indicate that work-relevant challenges are better solved by aggregating electronic individual responses than by electronically convening a group. This research suggests that industrial reliance upon electronic problem-solving groups should be tempered, and that large nominal groups may be the more appropriate vehicle for solving wicked corporate issues.

More Details

Climate-derived tensions in Arctic security

Backus, George A.; Strickland, James H.

Globally, there is no lack of security threats. Many of them demand priority engagement, and there can never be adequate resources to address all threats. In this context, climate is just another aspect of global security and the Arctic just another region. In light of physical and budgetary constraints, new security needs must be integrated and prioritized with existing ones. This discussion approaches the security impacts of climate from that perspective, starting with the broad security picture and establishing how climate may affect it. This method provides a different view from one that starts with climate and projects it, in isolation, as the source of a hypothetical security burden. That said, the Arctic does appear to present high-priority security challenges. Uncertainty in the timing of an ice-free Arctic affects how quickly the region will become a security priority. Uncertainty in the emergent extreme and variable weather conditions will determine the difficulty (cost) of maintaining adequate security (order) in the area. The resolution of sovereignty boundaries affects the ability to enforce security measures, and the U.S. will most probably need a military presence to back up negotiated sovereignty agreements. Even without additional global warming, technology already allows the Arctic to become a strategic link in the global supply chain, possibly with northern Russia as its main hub. Additionally, the multinational corporations reaping the economic bounty may affect security tensions more than the nation-states themselves. Countries will depend ever more heavily on global supply chains, and China in particular needs to protect its trade flows. In matters of security, nation-state and multinational-corporate interests will become heavily intertwined.

More Details

R&D for computational cognitive and social models : foundations for model evaluation through verification and validation (final LDRD report)

McNamara, Laura A.; Trucano, Timothy G.; Backus, George A.; Mitchell, Scott A.

Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high-consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering- and physics-oriented approaches to V&V toward a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve this accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that must be addressed as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC verification and validation and argues for extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high-consequence decision-making.

More Details

High-throughput proteomics : optical approaches

Davidson, George S.

Realistic cell models could greatly accelerate our ability to engineer biochemical pathways and the production of valuable organic products, which would be of great use in developing biofuels, pharmaceuticals, and the crops for the next green revolution. However, this level of engineering will require far more knowledge about the mechanisms of life than is currently available. In particular, we need to understand the interactome (which proteins interact) as it is situated in the three-dimensional geometry of the cell (i.e., a situated interactome), along with the regulation and dynamics of these interactions. Methods for optical proteomics have become available that allow the monitoring, and even the disruption or control, of interacting proteins in living cells. Here, a range of these methods is reviewed with respect to their role in elucidating the interactome and the relevant spatial localizations. Development of these technologies and their integration into the core competencies of research organizations can position whole institutions and teams of researchers to lead in both the fundamental science and the engineering applications of cellular biology. That leadership could be particularly important for problems of national urgency centered on security, biofuels, and healthcare.

More Details

Capabilities for Uncertainty in Predictive Science (LDRD Final Report)

Phipps, Eric T.; Eldred, Michael S.; Salinger, Andrew G.

Predictive simulation of systems composed of numerous interconnected, tightly coupled components promises to help solve many problems of scientific and national interest. However, predictive simulation of such systems is extremely challenging due to the coupling of a diverse set of physical and biological length and time scales. This report investigates uncertainty quantification methods for such systems that attempt to exploit their structure to gain computational efficiency. The traditional layering of uncertainty quantification around nonlinear solution processes is inverted to allow heterogeneous uncertainty quantification methods to be applied to each component in a coupled system. Moreover, this approach allows stochastic dimension reduction techniques to be applied at each coupling interface. The mathematical feasibility of these ideas is investigated in this report, and mathematical formulations for the resulting stochastically coupled nonlinear systems are developed.
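
One way to picture the inverted layering and interface-level dimension reduction described above is a generic two-component coupled system with shared uncertain parameters; the LaTeX sketch below is an illustrative polynomial-chaos rendering of that idea, not a formulation reproduced from the report:

    % Two coupled components sharing uncertain parameters \xi
    f_1(u_1, u_2; \xi) = 0, \qquad f_2(u_1, u_2; \xi) = 0
    % Each component response is expanded in its own polynomial chaos basis,
    % so a different UQ method (or expansion order P_i) can be used per
    % component instead of wrapping UQ around the full coupled solve:
    u_i(\xi) \approx \sum_{k=0}^{P_i} \hat{u}_{i,k}\,\Psi_k(\xi), \qquad i = 1, 2
    % At the coupling interface, component 2 can be handed a reduced
    % representation of u_1 (e.g., only its dominant expansion modes),
    % which is one form of stochastic dimension reduction.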

More Details

Asymmetric cubature formulas for polynomial integration in the triangle and square

Journal of Computational and Applied Mathematics

Taylor, Mark A.

We present five new cubature formulas in the triangle and square for the exact integration of polynomials. The points were computed numerically with a cardinal-function algorithm that does not impose any symmetry requirements on the points. Cubature formulas are presented that integrate exactly degrees 10, 11, and 12 in the triangle and degrees 10 and 12 in the square. They have positive weights, contain no points outside the domain, and use fewer points than previously known results. © 2007 Elsevier B.V. All rights reserved.
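
As a small illustration of how cubature rules of this kind are checked, the Python sketch below verifies the classical 3-point, degree-2 rule on the unit triangle against the closed-form monomial integrals. Both the rule and the test are standard textbook material included for illustration; they are not the new formulas reported in the paper:

    import itertools
    import math

    import numpy as np

    # Classical degree-2 rule on the reference triangle {(x, y): x, y >= 0, x + y <= 1}:
    # edge-midpoint points with equal weights summing to the triangle's area (1/2).
    points = np.array([[0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
    weights = np.array([1.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0])

    def exact_monomial_integral(a, b):
        """Exact integral of x**a * y**b over the unit triangle: a! b! / (a + b + 2)!."""
        return math.factorial(a) * math.factorial(b) / math.factorial(a + b + 2)

    def apply_rule(a, b):
        """Apply the cubature rule to the monomial x**a * y**b."""
        return float(np.sum(weights * points[:, 0] ** a * points[:, 1] ** b))

    # The rule must reproduce every monomial of total degree <= 2 exactly; the
    # same loop, run up to the claimed degree, is how exactness of higher-order
    # rules (such as those in the paper) would be spot-checked.
    for a, b in itertools.product(range(3), repeat=2):
        if a + b <= 2:
            assert abs(apply_rule(a, b) - exact_monomial_integral(a, b)) < 1e-14

    print("3-point rule is exact through total degree 2 on the unit triangle")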

More Details

Computational modeling of analogy: Destined ever to only be metaphor?

Behavioral and Brain Sciences

Speed, Ann S.

The target article by Leech et al. presents a compelling computational theory of analogy-making. However, there is a key difficulty that persists in theoretical treatments of analogy-making, computational and otherwise: namely, the lack of a detailed account of the neurophysiological mechanisms that give rise to analogy behavior. My commentary explores this issue. © 2008 Cambridge University Press.

More Details
Results 8401–8600 of 9,998