Publications

Results 8801–8825 of 9,998

A mesh optimization algorithm to decrease the maximum error in finite element computations

Proceedings of the 17th International Meshing Roundtable, IMR 2008

Hetmaniuk, U.; Knupp, Patrick K.

We present a mesh optimization algorithm for adaptively improving the finite element interpolation of a function of interest. The algorithm minimizes an objective function by swapping edges and moving nodes. Numerical experiments are performed on model problems. The results illustrate that the mesh optimization algorithm can reduce the W1,∞ semi-norm of the interpolation error. For these examples, the L2, L∞, and H1 norms also decreased.
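
The algorithm itself operates on 2D/3D finite element meshes and is not reproduced here, but the node-movement half of the idea can be illustrated in one dimension: nudge interior nodes so that the maximum error of the piecewise-linear interpolant drops. The sketch below is a hypothetical, greatly simplified analogue (the coordinate-descent strategy, step sizes, and test function are illustrative, and edge swapping is omitted).

    # Hypothetical 1D analogue of node movement for reducing the maximum
    # interpolation error; the paper's algorithm works on 2D/3D meshes and
    # also swaps edges, which is not shown here.
    import numpy as np

    def max_interp_error(nodes, f, samples_per_cell=50):
        """Max |f - piecewise-linear interpolant of f| over the mesh."""
        err = 0.0
        for a, b in zip(nodes[:-1], nodes[1:]):
            x = np.linspace(a, b, samples_per_cell)
            interp = f(a) + (f(b) - f(a)) * (x - a) / (b - a)
            err = max(err, np.abs(f(x) - interp).max())
        return err

    def optimize_nodes(nodes, f, sweeps=20):
        """Greedy coordinate descent: move each interior node to whichever
        nearby candidate position lowers the max interpolation error."""
        nodes = nodes.copy()
        for _ in range(sweeps):
            for i in range(1, len(nodes) - 1):
                lo, hi = nodes[i - 1], nodes[i + 1]
                candidates = nodes[i] + (hi - lo) * np.array([-0.1, 0.0, 0.1])
                candidates = np.clip(candidates, lo + 1e-9, hi - 1e-9)
                errs = []
                for c in candidates:
                    trial = nodes.copy()
                    trial[i] = c
                    errs.append(max_interp_error(trial, f))
                nodes[i] = candidates[int(np.argmin(errs))]
        return nodes

    f = lambda x: np.exp(-50.0 * (x - 0.5) ** 2)   # sharply peaked test function
    uniform = np.linspace(0.0, 1.0, 11)
    adapted = optimize_nodes(uniform, f)
    print(max_interp_error(uniform, f), max_interp_error(adapted, f))

Because the zero-offset candidate is always retained, each sweep can only keep or reduce the maximum error.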

A unified architecture for cognition and motor control based on neuroanatomy, psychophysical experiments, and cognitive behaviors

AAAI Fall Symposium - Technical Report

Rohrer, Brandon R.

A Brain-Emulating Cognition and Control Architecture (BECCA) is presented. It is consistent with the hypothesized functions of pervasive intra-cortical and cortico-subcortical neural circuits. It is able to reproduce many salient aspects of human voluntary movement and motor learning. It also provides plausible mechanisms for many phenomena described in cognitive psychology, including perception and mental modeling. Both "inputs" (afferent channels) and "outputs" (efferent channels) are treated as neural signals; they are all binary (either on or off) and there is no meaning, information, or tag associated with any of them. Although BECCA initially has no internal models, it learns complex interrelations between outputs and inputs through which it bootstraps a model of the system it is controlling and the outside world. BECCA uses two key algorithms to accomplish this: S-Learning and Context-Based Similarity (CBS).

Individual and group electronic brainstorming in an industrial setting

Proceedings of the Human Factors and Ergonomics Society

Dornburg, Courtney S.; Hendrickson, Stacey M.; Davidson, George S.

An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing real-world "wickedly difficult" challenges. Previous laboratory research has engaged small groups of students in answering questions irrelevant to an industrial setting. The current experiment extended this research to larger, real-world employee groups engaged in addressing organization-relevant challenges. Within the present experiment, the data demonstrated that individuals performed at least as well as groups in terms of number of ideas produced and significantly (p<.02) outperformed groups in terms of the quality of those ideas (as measured along the dimensions of originality, feasibility, and effectiveness).

Understanding virulence mechanisms in M. tuberculosis infection via a circuit-based simulation framework

Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS'08 - "Personalized Healthcare through Technology"

May, Elebeoba E.; Leitao, Andrei; Faulon, Jean-Loup M.; Joo, Jaewook J.; Misra, Milind; Oprea, Tudor I.

Tuberculosis (TB), caused by the bacterium Mycobacterium tuberculosis (Mtb), is a growing international health crisis. Mtb is able to persist in host tissues in a nonreplicating persistent (NRP) or latent state. This presents a challenge in the treatment of TB. Latent TB can reactivate in 10% of individuals with normal immune systems, and at a higher rate in those with compromised immune systems. A quantitative understanding of latency-associated virulence mechanisms may help researchers develop more effective methods to combat the spread of TB and reduce TB-associated fatalities. Leveraging BioXyce's ability to simulate whole-cell and multi-cellular systems, we are developing a circuit-based framework to investigate the impact of pathogenicity-associated pathways on the latency/reactivation phase of tuberculosis infection. We discuss efforts to simulate metabolic pathways that potentially impact the ability of Mtb to persist within host immune cells. We demonstrate how simulation studies can provide insight regarding the efficacy of potential anti-TB agents on biological networks critical to Mtb pathogenicity using a systems chemical biology approach. © 2008 IEEE.

Model calibration under uncertainty: Matching distribution information

12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, MAO

Swiler, Laura P.; Adams, Brian M.; Eldred, Michael S.

We develop an approach for estimating model parameters which result in the "best distribution fit" between experimental and simulation data. Best distribution fit means matching moments of experimental data to those of a simulation (and possibly matching a full probability distribution). This approach extends typical nonlinear least squares methods which identify parameters maximizing agreement between experimental points and computational simulation results. Several analytic formulations for the distribution matching problem are provided, along with results for solving test problems and comparisons of this parameter estimation technique with a deterministic least squares approach. Copyright © 2008 by the American Institute of Aeronautics and Astronautics, Inc.
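
In the simplest reading of this approach, the calibration can be posed as nonlinear least squares on the mismatch between simulated and experimental moments. The following is a minimal sketch under that reading; the toy simulator, the choice of mean and variance as the matched moments, and the use of common random numbers are illustrative assumptions, not the authors' formulation.

    # Hypothetical sketch: choose model parameters so that the mean and
    # variance of simulated outputs match those of experimental data.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    experimental = rng.normal(loc=3.0, scale=0.5, size=200)   # stand-in data

    base = rng.standard_normal(200)   # common random numbers keep the
                                      # objective smooth in the parameters

    def simulate(theta):
        """Toy simulator whose output distribution depends on theta."""
        mu, sigma = theta
        return mu + abs(sigma) * base

    def moment_residuals(theta):
        sim = simulate(theta)
        return np.array([sim.mean() - experimental.mean(),
                         sim.var() - experimental.var()])

    fit = least_squares(moment_residuals, x0=[1.0, 1.0])
    print("calibrated parameters:", fit.x)

Matching a fuller set of moments, or an entire distribution, would simply add more entries to the residual vector.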

Scheduling manual sampling for contamination detection in municipal water networks

8th Annual Water Distribution Systems Analysis Symposium 2006

Berry, Jonathan W.; Lin, Henry; Lauer, Erik; Phillips, Cynthia

Cities without an early warning system of indwelling sensors can consider monitoring their networks manually, especially during times of heightened security levels. We consider the problem of calculating an optimal schedule for manual sampling in a municipal water network. Preliminary computations with a small-scale example indicate that during normal times, manual sampling can provide some benefit, but it is far inferior to an indwelling sensor network. However, given information that significantly constrains the nature of an imminent threat, manual sampling can perform as well as a small sensor network designed to handle normal threats. Copyright ASCE 2006.

Variational multiscale residual-based turbulence modeling for large eddy simulation of incompressible flows

Computer Methods in Applied Mechanics and Engineering

Bazilevs, Y.; Calo, V.M.; Cottrell, J.A.; Hughes, T.J.R.; Reali, A.; Scovazzi, Guglielmo S.

We present an LES-type variational multiscale theory of turbulence. Our approach derives completely from the incompressible Navier-Stokes equations and does not employ any ad hoc devices, such as eddy viscosities. We tested the formulation on forced homogeneous isotropic turbulence and turbulent channel flows. In the calculations, we employed linear, quadratic and cubic NURBS. A dispersion analysis of simple model problems revealed NURBS elements to be superior to classical finite elements in approximating advective and diffusive processes, which play a significant role in turbulence computations. The numerical results are very good and confirm the viability of the theoretical framework. © 2007 Elsevier B.V. All rights reserved.
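
The central modeling step in this residual-based multiscale approach is to split the solution into coarse (resolved) and fine (unresolved) scales and to approximate the fine scales in terms of the coarse-scale residuals. Schematically, with notation simplified from the paper,

    u = \bar{u} + u', \qquad p = \bar{p} + p',
    u' \approx -\tau_M \, r_M(\bar{u}, \bar{p}), \qquad p' \approx -\tau_C \, r_C(\bar{u}),

where r_M and r_C are the residuals of the momentum and continuity equations evaluated on the coarse scales, and \tau_M, \tau_C are element-level parameters. Substituting these expressions back into the coarse-scale equations yields the turbulence model without introducing an ad hoc eddy viscosity.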

Evaluating NIC hardware requirements to achieve high message rate PGAS support on multi-core processors

Proceedings of the 2007 ACM/IEEE Conference on Supercomputing, SC'07

Underwood, Keith; Levenhagen, Michael J.; Brightwell, Ronald B.

Partitioned global address space (PGAS) programming models have been identified as one of the few viable approaches for dealing with emerging many-core systems. These models tend to generate many small messages, which requires specific support from the network interface hardware to enable efficient execution. In the past, Cray included E-registers on the Cray T3E to support the SHMEM API; however, with the advent of multi-core processors, the balance of computation to communication capabilities has shifted toward computation. This paper explores the message rates that are achievable with multi-core processors and simplified PGAS support on a more conventional network interface. For message rate tests, we find that simple network interface hardware is more than sufficient. We also find that even typical data distributions, such as cyclic or block-cyclic, do not need specialized hardware support. Finally, we assess the impact of such support on the well-known RandomAccess benchmark. © 2007 ACM.
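
The RandomAccess benchmark referenced above stresses exactly the traffic pattern at issue: enormous numbers of tiny, data-dependent updates to a large table. A serial sketch of its update loop is shown below; the table size, update count, and the xorshift generator standing in for the benchmark's random-number stream are illustrative only.

    # Serial sketch of a RandomAccess (GUPS)-style update loop: read-modify-
    # write at pseudo-random table locations, a workload bound by memory
    # latency and message rate rather than bandwidth.
    TABLE_BITS = 16                       # illustrative; real runs use huge tables
    MASK = (1 << TABLE_BITS) - 1
    table = list(range(1 << TABLE_BITS))

    def random_stream(n, seed=0x123456789ABCDEF):
        """64-bit xorshift stand-in for the benchmark's random-number stream."""
        x = seed
        for _ in range(n):
            x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
            x ^= x >> 7
            x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
            yield x

    for r in random_stream(4 * len(table)):
        table[r & MASK] ^= r              # tiny, data-dependent update

In a PGAS setting each such update becomes a small remote message, which is why sustained message rate, rather than bandwidth, governs performance.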

EXACT: The experimental algorithmics computational toolkit

Proceedings of the 2007 Workshop on Experimental Computer Science

Hart, William E.; Berry, Jonathan W.; Heaphy, Robert T.; Phillips, Cynthia A.

In this paper, we introduce EXACT, the EXperimental Algorithmics Computational Toolkit. EXACT is a software framework for describing, controlling, and analyzing computer experiments. It provides the experimentalist with convenient software tools to ease and organize the entire experimental process, including the description of factors and levels, the design of experiments, the control of experimental runs, the archiving of results, and analysis of results. As a case study for EXACT, we describe its interaction with FAST, the Sandia Framework for Agile Software Testing. EXACT and FAST now manage the nightly testing of several large software projects at Sandia. We also discuss EXACT's advanced features, which include a driver module that controls complex experiments such as comparisons of parallel algorithms. Copyright 2007 ACM.
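
The abstract does not spell out EXACT's interfaces, but the bookkeeping it automates (declaring factors and levels, enumerating a design, running each treatment, and archiving results) can be sketched as follows; the factor names, the full-factorial design, and the CSV archive are hypothetical and are not EXACT's actual API.

    # Hypothetical sketch of the workflow a toolkit like EXACT manages:
    # declare factors and levels, enumerate a full-factorial design, run
    # each treatment, and archive the results.
    import csv
    import itertools

    factors = {                     # illustrative factors and levels
        "solver": ["cg", "gmres"],
        "threads": [1, 2, 4],
        "mesh": ["coarse", "fine"],
    }

    def run_experiment(treatment):
        """Stand-in for launching the code under test; returns a fake metric."""
        return {"runtime_s": 0.1 * treatment["threads"]}

    with open("results.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=list(factors) + ["runtime_s"])
        writer.writeheader()
        for levels in itertools.product(*factors.values()):
            treatment = dict(zip(factors, levels))
            writer.writerow({**treatment, **run_experiment(treatment)})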

Optimal monitoring location selection for water quality issues

Restoring Our Natural Habitat - Proceedings of the 2007 World Environmental and Water Resources Congress

Boccelli, Dominic L.; Hart, William E.

Recently, extensive focus has been placed on determining the optimal locations of sensors within a distribution system to minimize the impact on public health from intentional intrusion events. Modified versions of these tools may have additional benefits for determining monitoring locations for other, more common objectives associated with distribution systems. A modified Sensor Placement Optimization Tool (SPOT) is presented that can be used to solve more generic location problems, such as determining monitoring locations for tracer tests or disinfectant byproduct sampling. The utility of the modified SPOT algorithm is discussed with respect to implementing a distribution system field-scale tracer study. © 2007 ASCE.
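
SPOT formulates placement as an optimization over many simulated scenarios; for the generic location problems mentioned here, a greedy stand-in conveys the flavor. In the sketch below, the node names, detection sets, and coverage objective are hypothetical, whereas SPOT's own formulation uses impact-based objectives computed from contamination or tracer simulations.

    # Hypothetical greedy sketch of a generic monitoring-location problem:
    # choose k locations that together detect as many simulated scenarios
    # as possible.
    def greedy_placement(coverage, k):
        """coverage: dict node -> set of scenario ids detectable at that node."""
        chosen, covered = [], set()
        for _ in range(k):
            remaining = [n for n in coverage if n not in chosen]
            best = max(remaining, key=lambda n: len(coverage[n] - covered))
            chosen.append(best)
            covered |= coverage[best]
        return chosen, covered

    coverage = {                    # illustrative detection sets
        "J12": {1, 2, 3}, "J47": {3, 4}, "T09": {5, 6, 7}, "J88": {2, 5},
    }
    locations, detected = greedy_placement(coverage, k=2)
    print(locations, len(detected), "scenarios detected")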

On the effects of memory latency and bandwidth on supercomputer application performance

Proceedings of the 2007 IEEE International Symposium on Workload Characterization, IISWC

Murphy, Richard C.

Since the first vector supercomputers in the mid-1970s, the largest scale applications have traditionally been floating point oriented numerical codes, which can be broadly characterized as the simulation of physics on a computer. Supercomputer architectures have evolved to meet the needs of those applications. Specifically, the computational work of the application tends to be floating point oriented, and the decomposition of the problem is two or three dimensional. Today, an emerging class of critical applications may change those assumptions: they are combinatorial in nature, integer oriented, and irregular. The performance of both classes of applications is dominated by the performance of the memory system. This paper compares the memory performance sensitivity of both traditional and emerging HPC applications, and shows that the new codes are significantly more sensitive to memory latency and bandwidth than their traditional counterparts. Additionally, these codes exhibit lower baseline performance, which only exacerbates the problem. As a result, the construction of future supercomputer architectures to support these applications will most likely be different from those used to support traditional codes. Quantitatively understanding the difference between the two workloads will form the basis for future design choices. ©2007 IEEE.

An extended finite element method formulation for modeling the response of polycrystalline materials to dynamic loading

AIP Conference Proceedings

Robbins, Joshua R.; Voth, Thomas E.

The extended Finite Element Method (X-FEM) is a finite-element based discretization technique developed originally to model dynamic crack propagation [1]. Since that time the method has been used for modeling physics ranging from static meso-scale material failure to dendrite growth. Here we adapt the recent advances of Vitali and Benson [2] and Song et al. [3] to model dynamic loading of a polycrystalline material. We use demonstration problems to examine the method's efficacy for modeling the dynamic response of polycrystalline materials at the meso-scale. Specifically, we use the X-FEM to model grain boundaries. This approach allows us to i) eliminate ad-hoc mixture rules for multi-material elements and ii) avoid explicitly meshing grain boundaries. © 2007 American Institute of Physics.
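
In broad terms, the X-FEM represents a discontinuity such as a grain boundary by enriching the standard finite element interpolation rather than by meshing the interface. A generic Heaviside-enriched approximation (notation simplified, and not necessarily the specific enrichment used by the authors) reads

    u^h(x) = \sum_{i \in I} N_i(x)\, u_i + \sum_{j \in J} N_j(x)\, H(x)\, a_j,

where the N_i are standard shape functions, H(x) is a step function that changes value across the interface, the a_j are enrichment degrees of freedom, and J is the set of nodes whose support is cut by the grain boundary.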

Toward a more rigorous application of margins and uncertainties within the nuclear weapons life cycle: a Sandia perspective

Diegert, Kathleen V.; Klenke, S.E.; Paulsen, Robert A.; Pilch, Martin P.; Trucano, Timothy G.

This paper presents the conceptual framework that is being used to define quantification of margins and uncertainties (QMU) for application in the nuclear weapons (NW) work conducted at Sandia National Laboratories. The conceptual framework addresses the margins and uncertainties throughout the NW life cycle and includes the definition of terms related to QMU and to figures of merit. Potential applications of QMU consist of analyses based on physical data and on modeling and simulation. Appendix A provides general guidelines for addressing cases in which significant and relevant physical data are available for QMU analysis. Appendix B gives the specific guidance that was used to conduct QMU analyses in cycle 12 of the annual assessment process. Appendix C offers general guidelines for addressing cases in which appropriate models are available for use in QMU analysis. Appendix D contains an example that highlights the consequences of different treatments of uncertainty in model-based QMU analyses.
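
Although the specific figures of merit are defined in the body of the paper, QMU discussions commonly summarize a requirement in terms of a margin-to-uncertainty ratio; as a generic illustration (not necessarily the paper's definition),

    M = \lvert \text{performance threshold} - \text{best-estimate response} \rvert, \qquad \mathrm{CR} = M / U,

where U aggregates the relevant uncertainties and a confidence ratio CR well above 1 indicates margin in excess of uncertainty.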

The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data

Webster, Clayton G.

This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.
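
The Smolyak construction underlying this kind of sparse-grid collocation combines tensor products of one-dimensional interpolation operators so that only a small subset of the full tensor grid must be evaluated. In a common form of the formula (generic notation, not copied from this report),

    \mathcal{A}(q, d) = \sum_{q-d+1 \le |\mathbf{i}| \le q} (-1)^{q - |\mathbf{i}|} \binom{d-1}{q - |\mathbf{i}|} \left( \mathcal{U}^{i_1} \otimes \cdots \otimes \mathcal{U}^{i_d} \right),

where d is the number of random dimensions, \mathcal{U}^{i_k} is a one-dimensional interpolation operator at level i_k, and |\mathbf{i}| = i_1 + \cdots + i_d. Each collocation point of the resulting sparse grid corresponds to one uncoupled deterministic PDE solve.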

Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering

Journal of Computational Chemistry

Slepoy, Alexander S.; Peters, Michael D.; Thompson, Aidan P.

Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. © 2007 Wiley Periodicals, Inc.
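
A heavily simplified sketch of such a search loop is given below. The candidate representation (a sum of two power laws), the mutation operator, and the single-temperature Metropolis acceptance rule are illustrative stand-ins; the actual method evolves arbitrary expression trees with genetic programming and runs parallel tempering across many replicas.

    # Heavily simplified sketch: search over candidate pair potentials of the
    # form V(r) = a*r**(-m) + b*r**(-n), accepting or rejecting mutations with
    # a Metropolis criterion, scored against reference energies generated from
    # the Lennard-Jones potential.
    import math
    import random

    random.seed(1)
    r_samples = [0.95 + 0.05 * i for i in range(20)]       # pair distances
    lj = lambda r: 4.0 * (r ** -12 - r ** -6)              # reference potential
    reference = [lj(r) for r in r_samples]

    def error(cand):
        a, m, b, n = cand
        return sum((a * r ** -m + b * r ** -n - e) ** 2
                   for r, e in zip(r_samples, reference))

    def mutate(cand):
        new = list(cand)
        new[random.randrange(4)] += random.gauss(0.0, 0.5)
        return tuple(new)

    cand, temperature = (1.0, 10.0, -1.0, 5.0), 0.1
    for _ in range(20000):
        trial = mutate(cand)
        accept = error(trial) < error(cand) or \
                 random.random() < math.exp((error(cand) - error(trial)) / temperature)
        if accept:
            cand = trial
    print("best candidate: a=%.2f, m=%.2f, b=%.2f, n=%.2f" % cand)

With the globally optimal answer known in advance (the Lennard-Jones form itself), a run like this illustrates how the reference data constrain both the functional form and its parameters.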
