Publications

Results 8801–8850 of 9,998

A mesh optimization algorithm to decrease the maximum error in finite element computations

Proceedings of the 17th International Meshing Roundtable, IMR 2008

Hetmaniuk, U.; Knupp, Patrick K.

We present a mesh optimization algorithm for adaptively improving the finite element interpolation of a function of interest. The algorithm minimizes an objective function by swapping edges and moving nodes. Numerical experiments are performed on model problems. The results illustrate that the mesh optimization algorithm can reduce the W1,∞ semi-norm of the interpolation error. For these examples, the L2, L∞, and H1 norms also decreased.
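
For reference, the quantity the optimization targets is the W1,∞ semi-norm of the interpolation error, which for a function u and its finite element interpolant I_h u is (a standard definition, stated here for orientation rather than quoted from the paper):

    |u - I_h u|_{W^{1,\infty}(\Omega)} = \operatorname{ess\,sup}_{x \in \Omega} \left\| \nabla \left( u - I_h u \right)(x) \right\|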


A unified architecture for cognition and motor control based on neuroanatomy, psychophysical experiments, and cognitive behaviors

AAAI Fall Symposium - Technical Report

Rohrer, Brandon R.

A Brain-Emulating Cognition and Control Architecture (BECCA) is presented. It is consistent with the hypothesized functions of pervasive intra-cortical and cortico-subcortical neural circuits. It is able to reproduce many salient aspects of human voluntary movement and motor learning. It also provides plausible mechanisms for many phenomena described in cognitive psychology, including perception and mental modeling. Both "inputs" (afferent channels) and "outputs" (efferent channels) are treated as neural signals; they are all binary (either on or off), and there is no meaning, information, or tag associated with any of them. Although BECCA initially has no internal models, it learns complex interrelations between outputs and inputs through which it bootstraps a model of the system it is controlling and the outside world. BECCA uses two key algorithms to accomplish this: S-Learning and Context-Based Similarity (CBS).


Individual and group electronic brainstorming in an industrial setting

Proceedings of the Human Factors and Ergonomics Society

Dornburg, Courtney S.; Hendrickson, Stacey M.; Davidson, George S.

An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing real-world, "wickedly difficult" challenges. Previous laboratory research has engaged small groups of students in answering questions irrelevant to an industrial setting. The current experiment extended this research to larger, real-world employee groups engaged in addressing organization-relevant challenges. Within the present experiment, the data demonstrated that individuals performed at least as well as groups in terms of the number of ideas produced and significantly (p<.02) outperformed groups in terms of the quality of those ideas (as measured along the dimensions of originality, feasibility, and effectiveness).


Understanding virulence mechanisms in M. tuberculosis infection via a circuit-based simulation framework

Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS'08 - "Personalized Healthcare through Technology"

May, Elebeoba E.; Leitao, Andrei; Faulon, Jean-Loup M.; Joo, Jaewook J.; Misra, Milind; Oprea, Tudor I.

Tuberculosis (TB), caused by the bacterium Mycobacterium tuberculosis (Mtb), is a growing international health crisis. Mtb is able to persist in host tissues in a nonreplicating persistent (NRP) or latent state, which presents a challenge in the treatment of TB. Latent TB can re-activate in 10% of individuals with normal immune systems, and at a higher rate in those with compromised immune systems. A quantitative understanding of latency-associated virulence mechanisms may help researchers develop more effective methods to combat the spread of TB and reduce TB-associated fatalities. Leveraging BioXyce's ability to simulate whole-cell and multi-cellular systems, we are developing a circuit-based framework to investigate the impact of pathogenicity-associated pathways on the latency/reactivation phase of tuberculosis infection. We discuss efforts to simulate metabolic pathways that potentially impact the ability of Mtb to persist within host immune cells. We demonstrate how simulation studies can provide insight regarding the efficacy of potential anti-TB agents on biological networks critical to Mtb pathogenicity, using a systems chemical biology approach. © 2008 IEEE.


Model calibration under uncertainty: Matching distribution information

12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, MAO

Swiler, Laura P.; Adams, Brian M.; Eldred, Michael S.

We develop an approach for estimating model parameters that result in the "best distribution fit" between experimental and simulation data. Best distribution fit means matching moments of experimental data to those of a simulation (and possibly matching a full probability distribution). This approach extends typical nonlinear least squares methods, which identify parameters maximizing agreement between experimental points and computational simulation results. Several analytic formulations for the distribution matching problem are provided, along with results for solving test problems and comparisons of this parameter estimation technique with a deterministic least squares approach. Copyright © 2008 by the American Institute of Aeronautics and Astronautics, Inc.
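
A minimal sketch of the moment-matching idea, using a toy model and synthetic "experimental" data; the function and parameter names are illustrative and not the authors' formulation or code:

    # Calibrate toy model parameters so simulated mean/variance match experimental moments.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    exp_data = 2.0 + 0.5 * rng.standard_normal(500)   # synthetic stand-in for experimental samples
    target = np.array([exp_data.mean(), exp_data.var()])

    def simulate(theta, n=500):
        # hypothetical computational model: location and spread controlled by theta
        r = np.random.default_rng(0)
        return theta[0] + theta[1] * r.standard_normal(n)

    def moment_mismatch(theta):
        y = simulate(theta)
        return np.sum((np.array([y.mean(), y.var()]) - target) ** 2)

    result = minimize(moment_mismatch, x0=[0.0, 1.0], method="Nelder-Mead")
    print("calibrated parameters:", result.x)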


Scheduling manual sampling for contamination detection in municipal water networks

8th Annual Water Distribution Systems Analysis Symposium 2006

Berry, Jonathan W.; Lin, Henry; Lauer, Erik; Phillips, Cynthia

Cities without an early warning system of indwelling sensors can consider monitoring their networks manually, especially during times of heightened security levels. We consider the problem of calculating an optimal schedule for manual sampling in a municipal water network. Preliminary computations with a small-scale example indicate that during normal times, manual sampling can provide some benefit, but it is far inferior to an indwelling sensor network. However, given information that significantly constrains the nature of an imminent threat, manual sampling can perform as well as a small sensor network designed to handle normal threats. Copyright ASCE 2006.


Variational multiscale residual-based turbulence modeling for large eddy simulation of incompressible flows

Computer Methods in Applied Mechanics and Engineering

Bazilevs, Y.; Calo, V.M.; Cottrell, J.A.; Hughes, T.J.R.; Reali, A.; Scovazzi, Guglielmo S.

We present an LES-type variational multiscale theory of turbulence. Our approach derives completely from the incompressible Navier-Stokes equations and does not employ any ad hoc devices, such as eddy viscosities. We tested the formulation on forced homogeneous isotropic turbulence and turbulent channel flows. In the calculations, we employed linear, quadratic and cubic NURBS. A dispersion analysis of simple model problems revealed NURBS elements to be superior to classical finite elements in approximating advective and diffusive processes, which play a significant role in turbulence computations. The numerical results are very good and confirm the viability of the theoretical framework. © 2007 Elsevier B.V. All rights reserved.
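
In schematic form, the residual-based variational multiscale idea splits the solution into coarse and fine scales and models the unresolved fine scales algebraically from the coarse-scale residuals (a generic statement of the approach; notation may differ from the paper's):

    u = \bar{u} + u', \quad p = \bar{p} + p', \qquad u' \approx -\tau_M \, r_M(\bar{u}, \bar{p}), \qquad p' \approx -\tau_C \, r_C(\bar{u}),

where r_M and r_C are the momentum and continuity residuals evaluated on the coarse scales and \tau_M, \tau_C are element-level parameters.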


Evaluating NIC hardware requirements to achieve high message rate PGAS support on multi-core processors

Proceedings of the 2007 ACM/IEEE Conference on Supercomputing, SC'07

Underwood, Keith; Levenhagen, Michael J.; Brightwell, Ronald B.

Partitioned global address space (PGAS) programming models have been identified as one of the few viable approaches for dealing with emerging many-core systems. These models tend to generate many small messages, which requires specific support from the network interface hardware to enable efficient execution. In the past, Cray included E-registers on the Cray T3E to support the SHMEM API; however, with the advent of multi-core processors, the balance of computation to communication capabilities has shifted toward computation. This paper explores the message rates that are achievable with multi-core processors and simplified PGAS support on a more conventional network interface. For message rate tests, we find that simple network interface hardware is more than sufficient. We also find that even typical data distributions, such as cyclic or block-cyclic, do not need specialized hardware support. Finally, we assess the impact of such support on the well known RandomAccess benchmark. (c) 2007 ACM.


EXACT: The experimental algorithmics computational toolkit

Proceedings of the 2007 Workshop on Experimental Computer Science

Hart, William E.; Berry, Jonathan W.; Heaphy, Robert T.; Phillips, Cynthia A.

In this paper, we introduce EXACT, the EXperimental Algorithmics Computational Toolkit. EXACT is a software framework for describing, controlling, and analyzing computer experiments. It provides the experimentalist with convenient software tools to ease and organize the entire experimental process, including the description of factors and levels, the design of experiments, the control of experimental runs, the archiving of results, and analysis of results. As a case study for EXACT, we describe its interaction with FAST, the Sandia Framework for Agile Software Testing. EXACT and FAST now manage the nightly testing of several large software projects at Sandia. We also discuss EXACT's advanced features, which include a driver module that controls complex experiments such as comparisons of parallel algorithms. Copyright 2007 ACM.
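
As a rough illustration of the kind of bookkeeping such a framework automates, the sketch below enumerates a full-factorial design over declared factors and levels and builds the command line for each run; the factor names and executable are hypothetical, and this is not EXACT's actual interface:

    # Enumerate a full-factorial design and construct one command line per run.
    import itertools

    factors = {                      # hypothetical factors and levels
        "solver":   ["cg", "gmres"],
        "nthreads": [1, 2, 4],
    }

    for combo in itertools.product(*factors.values()):
        settings = dict(zip(factors.keys(), combo))
        cmd = ["./run_experiment",   # hypothetical executable under test
               "--solver", settings["solver"],
               "--nthreads", str(settings["nthreads"])]
        # a real driver would launch the command, capture its output, and archive the result
        print("run:", " ".join(cmd))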


Optimal monitoring location selection for water quality issues

Restoring Our Natural Habitat - Proceedings of the 2007 World Environmental and Water Resources Congress

Boccelli, Dominic L.; Hart, William E.

Recently, extensive focus has been placed on determining the optimal locations of sensors within a distribution system to minimize the impact on public health from intentional intrusion events. Modified versions of these tools may have additional benefits for determining monitoring locations for other more common objectives associated with distribution systems. A modified Sensor Placement Optimization Tool (SPOT) is presented that can be used for satisfying more generic location problems such as determining monitoring locations for tracer tests or disinfectant byproduct sampling. The utility of the modified SPOT algorithm is discussed with respect to implementing a distribution system field-scale tracer study. © 2007 ASCE.
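
A greedy sketch of the underlying location-selection problem may help fix ideas: given an estimated consequence for each (location, scenario) pair, repeatedly pick the location that most reduces the expected consequence. This is illustrative only and is not SPOT's formulation or interface:

    # Greedy selection of monitoring locations that minimize expected consequence.
    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, n_scen = 50, 200
    impact = rng.random((n_nodes, n_scen))  # impact[j, s]: consequence if scenario s is caught at node j
    current = np.full(n_scen, 1.0)          # consequence of a scenario no monitor catches
    budget, chosen = 5, []

    for _ in range(budget):
        gains = [(current - np.minimum(current, impact[j])).mean() for j in range(n_nodes)]
        j_best = int(np.argmax(gains))      # location giving the largest drop in mean consequence
        chosen.append(j_best)
        current = np.minimum(current, impact[j_best])

    print("monitoring locations:", chosen, "expected consequence:", current.mean())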


On the effects of memory latency and bandwidth on supercomputer application performance

Proceedings of the 2007 IEEE International Symposium on Workload Characterization, IISWC

Murphy, Richard C.

Since the first vector supercomputers in the mid-1970s, the largest scale applications have traditionally been floating point oriented numerical codes, which can be broadly characterized as the simulation of physics on a computer. Supercomputer architectures have evolved to meet the needs of those applications. Specifically, the computational work of the application tends to be floating point oriented, and the decomposition of the problem is two- or three-dimensional. Today, an emerging class of critical applications may change those assumptions: they are combinatorial in nature, integer oriented, and irregular. The performance of both classes of applications is dominated by the performance of the memory system. This paper compares the memory performance sensitivity of both traditional and emerging HPC applications, and shows that the new codes are significantly more sensitive to memory latency and bandwidth than their traditional counterparts. Additionally, these codes exhibit lower baseline performance, which only exacerbates the problem. As a result, the construction of future supercomputer architectures to support these applications will most likely be different from those used to support traditional codes. Quantitatively understanding the difference between the two workloads will form the basis for future design choices. © 2007 IEEE.


An extended finite element method formulation for modeling the response of polycrystalline materials to dynamic loading

AIP Conference Proceedings

Robbins, Joshua R.; Voth, Thomas E.

The extended Finite Element Method (X-FEM) is a finite-element based discretization technique developed originally to model dynamic crack propagation [1]. Since that time, the method has been used for modeling physics ranging from static meso-scale material failure to dendrite growth. Here we adapt the recent advances of Vitali and Benson [2] and Song et al. [3] to model dynamic loading of a polycrystalline material. We use demonstration problems to examine the method's efficacy for modeling the dynamic response of polycrystalline materials at the meso-scale. Specifically, we use the X-FEM to model grain boundaries. This approach allows us to i) eliminate ad-hoc mixture rules for multi-material elements and ii) avoid explicitly meshing grain boundaries. © 2007 American Institute of Physics.
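
For context, a typical X-FEM approximation augments the standard finite element basis with enrichment functions so that interfaces such as grain boundaries need not be meshed explicitly; a generic form (not necessarily the exact enrichment used by the authors) is

    u^h(x) = \sum_{i \in I} N_i(x) \, u_i + \sum_{j \in J} N_j(x) \, \psi(x) \, a_j,

where the N_i are standard shape functions, \psi is an enrichment function that is discontinuous (or has a discontinuous gradient) across the interface, and the a_j are additional degrees of freedom on elements cut by the interface.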


Toward a more rigorous application of margins and uncertainties within the nuclear weapons life cycle: a Sandia perspective

Diegert, Kathleen V.; Klenke, S.E.; Paulsen, Robert A.; Pilch, Martin P.; Trucano, Timothy G.

This paper presents the conceptual framework that is being used to define quantification of margins and uncertainties (QMU) for application in the nuclear weapons (NW) work conducted at Sandia National Laboratories. The conceptual framework addresses the margins and uncertainties throughout the NW life cycle and includes the definition of terms related to QMU and to figures of merit. Potential applications of QMU consist of analyses based on physical data and on modeling and simulation. Appendix A provides general guidelines for addressing cases in which significant and relevant physical data are available for QMU analysis. Appendix B gives the specific guidance that was used to conduct QMU analyses in cycle 12 of the annual assessment process. Appendix C offers general guidelines for addressing cases in which appropriate models are available for use in QMU analysis. Appendix D contains an example that highlights the consequences of different treatments of uncertainty in model-based QMU analyses.
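
One common shorthand in QMU discussions, offered here only for orientation since the report defines its own terms and figures of merit, is the margin-to-uncertainty (confidence) ratio

    \mathrm{CR} = \frac{M}{U},

where M is the margin between best-estimate performance and the requirement and U is the quantified uncertainty in that margin; a ratio well above one is read as confidence that the requirement is met.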

The analysis of a sparse grid stochastic collocation method for partial differential equations with high-dimensional random input data

Webster, Clayton G.

This work describes the convergence analysis of a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems and, as such, the derived strong error estimates for the fully discrete solution are used to compare the computational efficiency of the proposed method with the Monte Carlo method. Numerical examples illustrate the theoretical results and are used to compare this approach with several others, including the standard Monte Carlo.
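
For reference, a standard form of the Smolyak sparse grid operator underlying such collocation methods, stated generically rather than quoted from this work, is

    \mathcal{A}(w, N) = \sum_{w - N + 1 \le |\mathbf{i}| \le w} (-1)^{w - |\mathbf{i}|} \binom{N - 1}{w - |\mathbf{i}|} \left( \mathcal{U}^{i_1} \otimes \cdots \otimes \mathcal{U}^{i_N} \right),

where the \mathcal{U}^{i_k} are one-dimensional interpolation (collocation) operators, |\mathbf{i}| = i_1 + \cdots + i_N, N is the number of random dimensions, and w is the sparse grid level.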


Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering

Journal of Computational Chemistry

Slepoy, Alexander S.; Peters, Michael D.; Thompson, Aidan P.

Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling, and parallel tempering to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. © 2007 Wiley Periodicals, Inc.
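
The reference target in the test problem is the standard 12-6 Lennard-Jones pair potential, V(r) = 4ε[(σ/r)^12 − (σ/r)^6]. A small self-contained sketch of evaluating the total energy of a configuration (parameter values are arbitrary; this is illustrative, not the authors' code):

    # Total Lennard-Jones energy of an atomic configuration (12-6 form).
    import numpy as np

    def lj_energy(coords, epsilon=1.0, sigma=1.0):
        total = 0.0
        n = len(coords)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(coords[i] - coords[j])
                sr6 = (sigma / r) ** 6
                total += 4.0 * epsilon * (sr6 ** 2 - sr6)   # 4*eps*[(sigma/r)^12 - (sigma/r)^6]
        return total

    coords = 3.0 * np.random.default_rng(0).random((8, 3))  # small random configuration
    print(lj_energy(coords))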


Microstructural modeling of ferroic switching and phase transitions in PZT

Proceedings of SPIE - The International Society for Optical Engineering

Robbins, Joshua R.; Khraishi, Tariq A.; Chaplya, Pavel

Niobium doped Lead Zirconate Titanate (PZT) with a Zr/Ti ratio of 95/5 (i.e., PZT 95/5-2Nb) is a ferroelectric with a rhombohedral structure at room temperature. A crystal (or a subdomain within a crystal) exhibits a spontaneous polarization in any one of eight crystallographically equivalent directions. Such a material becomes polarized when subjected to a large electric field. When the electric field is removed, a remanent polarization remains and a bound charge is stored. A displacive phase transition from a rhombohedral ferroelectric phase to an orthorhombic anti-ferroelectric phase can be induced with the application of a mechanical load. When this occurs, the material becomes depoled and the bound charge is released. The polycrystalline character of PZT 95/5-2Nb leads to highly non-uniform fields at the grain scale. These local fields lead to very complex material behavior during mechanical depoling that has important implications for device design and performance. This paper presents a microstructurally based numerical model that describes the 3D non-linear behavior of ferroelectric ceramics. The model resolves the structure of polycrystals directly in the topology of the problem domain and uses the extended finite element method (X-FEM) to solve the governing equations of electromechanics. The material response is computed from anisotropic single crystal constants and the volume fractions of the various polarization variants (i.e., three variants for rhombohedral anti-ferroelectric and eight for rhombohedral ferroelectric ceramic). Evolution of the variant volume fractions is governed by the minimization of internally stored energy and accounts for ferroelectric and ferroelastic domain switching and phase transitions in response to the applied loads. The developed model is used to examine hydrostatic depoling in PZT 95/5-2Nb.


Final report on LDRD project: coupling strategies for multi-physics applications

Hopkins, Matthew M.; Pawlowski, Roger P.; Moffat, Harry K.; Carnes, Brian C.; Hooper, Russell H.

Many current and future modeling applications at Sandia, including ASC milestones, will critically depend on the simultaneous solution of vastly different physical phenomena. Issues due to code coupling are often not addressed, understood, or even recognized. The objectives of the LDRD have included both theoretical analysis and code development. We show that we have provided a fundamental analysis of coupling, i.e., when strong coupling versus a successive substitution strategy is needed. We have enabled the implementation of tighter coupling strategies through additions to the NOX and Sierra code suites, making these strategies available now, and we have leveraged existing functionality to do so. Specifically, we have built into NOX the capability to handle fully coupled simulations from multiple codes, as well as the capability to handle Jacobian-free Newton-Krylov simulations that link multiple applications. We show how this capability may be accessed from within the Sierra Framework as well as from outside of Sierra. The critical impact of this LDRD is that we have shown how to enable strong Newton-based coupling while respecting the modularity of existing codes, and have delivered strategies for doing so. This will facilitate the use of these codes in a coupled manner to solve multi-physics applications.
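
A minimal sketch of the Jacobian-free Newton-Krylov ingredient mentioned above: the Jacobian-vector products needed by the Krylov solver are approximated with a finite difference of the coupled residual, so the coupled Jacobian is never assembled. The residual here is a toy stand-in, not Sierra or NOX code:

    # Approximate J(u) v without forming the Jacobian of the coupled residual F.
    import numpy as np

    def residual(u):
        # toy stand-in for the assembled multi-physics residual F(u) = 0
        return np.array([u[0] ** 2 + u[1] - 3.0,
                         u[0] + u[1] ** 2 - 5.0])

    def jac_vec(u, v, eps=1.0e-7):
        return (residual(u + eps * v) - residual(u)) / eps

    u = np.array([1.0, 1.0])
    v = np.array([0.5, -0.2])
    print(jac_vec(u, v))   # compare with the exact product [[2*u0, 1], [1, 2*u1]] @ v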


The acquisition of dangerous biological materials: Technical fact sheets to assist risk assessments of 46 potential BW agents

Astuto Gribble, Lisa A.; Gaudioso, Jennifer M.

Numerous terrorist organizations have openly expressed interest in producing and deploying biological weapons. However, a limiting factor for many terrorists has been the acquisition of dangerous biological agents, as evidenced by the very few successful instances of biological weapons use compared to the number of documented hoaxes. Biological agents vary greatly in their ability to cause loss of life and economic damage. Some agents, if released properly, can kill many people and cause an extensive number of secondary infections; other agents will sicken only a small number of people for a short period of time. Consequently, several biological agents can potentially be used to perpetrate a bioterrorism attack, but few are likely capable of causing a high-consequence event. It is crucial, from a US national security perspective, to more deeply understand the likelihood that terrorist organizations can acquire the range of these agents. Few studies have attempted to comprehensively compile the technical information directly relevant to the acquisition of dangerous bacteria, viruses, and toxins. In this report, technical fact sheets were assembled for 46 potentially dangerous biological agents. Much of the information was drawn from various research sources, and compiling it in this way could ultimately and significantly expedite and improve bioterrorism threat assessments. By systematically examining a number of specific agent characteristics included in these fact sheets, it may be possible to detect, target, and implement measures to thwart future terrorist acquisition attempts. In addition, the information in these fact sheets may be used as a tool to help laboratories gain a rudimentary understanding of how attractive laboratory theft is as an acquisition method relative to other potential acquisition modes.
