Publications

Results 8351–8400 of 9,998

Peridynamic theory of solid mechanics

Proposed for publication in Advances in Applied Mechanics.

Silling, Stewart A.; Lehoucq, Richard B.

The peridynamic theory of mechanics attempts to unite the mathematical modeling of continuous media, cracks, and particles within a single framework. It does this by replacing the partial differential equations of the classical theory of solid mechanics with integral or integro-differential equations. These equations are based on a model of internal forces within a body in which material points interact with each other directly over finite distances. The classical theory of solid mechanics is based on the assumption of a continuous distribution of mass within a body. It further assumes that all internal forces are contact forces that act across zero distance. The mathematical description of a solid that follows from these assumptions relies on partial differential equations that additionally assume sufficient smoothness of the deformation for the PDEs to make sense in either their strong or weak forms. The classical theory has been demonstrated to provide a good approximation to the response of real materials down to small length scales, particularly in single crystals, provided these assumptions are met. Nevertheless, technology increasingly involves the design and fabrication of devices at smaller and smaller length scales, even interatomic dimensions. Therefore, it is worthwhile to investigate whether the classical theory can be extended to permit relaxed assumptions of continuity, to include the modeling of discrete particles such as atoms, and to allow the explicit modeling of nonlocal forces that are known to strongly influence the behavior of real materials.
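
For orientation, the bond-based form of the peridynamic equation of motion (a sketch in standard notation, not a quotation from this paper) replaces the divergence of stress with an integral over a neighborhood, or horizon, of each point:

\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t) \;=\; \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\;\mathbf{x}'-\mathbf{x}\big)\,dV_{\mathbf{x}'} \;+\; \mathbf{b}(\mathbf{x},t),

where f is the pairwise force density and b the prescribed body force density. Because no spatial derivatives of u appear, the equation remains meaningful across cracks and other discontinuities.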

More Details

Elastic wave propagation in variable media using a discontinuous Galerkin method

Society of Exploration Geophysicists International Exposition and 80th Annual Meeting 2010, SEG 2010

Smith, Thomas M.; Collis, Samuel S.; Ober, Curtis C.; Overfelt, James R.; Schwaiger, Hans F.

Motivated by the needs of seismic inversion and building on our prior experience for fluid-dynamics systems, we present a high-order discontinuous Galerkin (DG) Runge-Kutta method applied to isotropic, linearized elasto-dynamics. Unlike other DG methods recently presented in the literature, our method allows for inhomogeneous material variations within each element, which enables representation of realistic earth models, a feature critical for future use in seismic inversion. Likewise, our method supports curved elements and hybrid meshes that include both simplicial and nonsimplicial elements. We demonstrate the capabilities of this method through a series of numerical experiments including hybrid mesh discretizations of the Marmousi2 model as well as a modified Marmousi2 model with an oscillatory ocean bottom that is exactly captured by our discretization.
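
As a point of reference, the isotropic linearized elastodynamic system being discretized can be written schematically, with notation assumed here rather than taken from the paper, as

\rho(\mathbf{x})\,\partial_{tt}\mathbf{u} \;=\; \nabla\cdot\boldsymbol{\sigma} + \mathbf{f}, \qquad \boldsymbol{\sigma} \;=\; \lambda(\mathbf{x})\,(\nabla\cdot\mathbf{u})\,\mathbf{I} \;+\; \mu(\mathbf{x})\,\big(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf{T}}\big),

where the density \rho and the Lamé parameters \lambda, \mu are allowed to vary within each element, which is the inhomogeneous-material feature emphasized above.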

More Details

A cognitive-consistency based model of population wide attitude change

AAAI Fall Symposium - Technical Report

Lakkaraju, Kiran L.; Speed, Ann S.

Attitudes play a significant role in determining how individuals process information and behave. In this paper we have developed a new computational model of population wide attitude change that captures the social level: how individuals interact and communicate information, and the cognitive level: how attitudes and concepts interact with each other. The model captures the cognitive aspect by representing each individual as a parallel constraint satisfaction network. The dynamics of this model are explored through a simple attitude change experiment where we vary the social network and distribution of attitudes in a population. Copyright © 2010, Association for the Advancement of Artificial Intelligence. All rights reserved.
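
As a rough illustration of the cognitive level described above, the following Python sketch settles a parallel constraint satisfaction network using a common Grossberg-style update rule; the weights, decay rate, and activation bounds are illustrative assumptions, not the parameters used by the authors.

    import numpy as np

    def settle(a, W, decay=0.05, steps=200, a_min=-1.0, a_max=1.0):
        # a: node activations (attitudes/concepts); W: symmetric constraint weights
        a = a.astype(float).copy()
        for _ in range(steps):
            net = W @ a                      # net input to each node
            a *= (1.0 - decay)               # decay toward neutral
            grow = net > 0
            a[grow] += net[grow] * (a_max - a[grow])
            a[~grow] += net[~grow] * (a[~grow] - a_min)
            np.clip(a, a_min, a_max, out=a)
        return a

    # Example: two mutually supportive concepts and one that conflicts with both
    W = np.array([[0.0, 0.4, -0.3], [0.4, 0.0, -0.3], [-0.3, -0.3, 0.0]])
    print(settle(np.array([0.1, 0.0, 0.2]), W))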

More Details

Simulation of dynamic fracture using peridynamics, finite element modeling, and contact

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Littlewood, David J.

Peridynamics is a nonlocal extension of classical solid mechanics that allows for the modeling of bodies in which discontinuities occur spontaneously. Because the peridynamic expression for the balance of linear momentum does not contain spatial derivatives and is instead based on an integral equation, it is well suited for modeling phenomena involving spatial discontinuities such as crack formation and fracture. In this study, both peridynamics and classical finite element analysis are applied to simulate material response under dynamic blast loading conditions. A combined approach is utilized in which the portion of the simulation modeled with peridynamics interacts with the finite element portion of the model via a contact algorithm. The peridynamic portion of the analysis utilizes an elastic-plastic constitutive model with linear hardening. The peridynamic interface to the constitutive model is based on the calculation of an approximate deformation gradient, requiring the suppression of possible zero-energy modes. The classical finite element portion of the model utilizes a Johnson-Cook constitutive model. Simulation results are validated by direct comparison to expanding tube experiments. The coupled modeling approach successfully captures material response at the surface of the tube and the emerging fracture pattern. Copyright © 2010 by ASME.

More Details

Formulation and optimization of robust sensor placement problems for drinking water contamination warning systems

Journal of Infrastructure Systems

Watson, Jean P.; Murray, Regan; Hart, William E.

The sensor placement problem in contamination warning system design for municipal water distribution networks involves maximizing the protection level afforded by limited numbers of sensors, typically quantified as the expected impact of a contamination event; the issue of how to mitigate against high-consequence events is either handled implicitly or ignored entirely. Consequently, expected-case sensor placements run the risk of failing to protect against high-consequence 9/11-style attacks. In contrast, robust sensor placements address this concern by focusing strictly on high-consequence events and placing sensors to minimize the impact of these events. We introduce several robust variations of the sensor placement problem, distinguished by how they quantify the potential damage due to high-consequence events. We explore the nature of robust versus expected-case sensor placements on three real-world large-scale distribution networks. We find that robust sensor placements can yield large reductions in the number and magnitude of high-consequence events, with only modest increases in expected impact. The ability to trade off between robust and expected-case impacts is a key unexplored dimension in contamination warning system design. © 2009 ASCE.
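
Schematically, with d(\omega, S) denoting the impact of contamination event \omega when sensors occupy the location set S (notation assumed here for illustration), the expected-case and worst-case robust placements contrast as

\min_{S \subseteq L,\;|S| \le p}\; \mathbb{E}_{\omega}\big[d(\omega,S)\big] \qquad \text{versus} \qquad \min_{S \subseteq L,\;|S| \le p}\; \max_{\omega \in \Omega}\; d(\omega,S),

with the robust variants studied in the paper replacing the worst-case operator by other measures of the high-consequence tail of the impact distribution.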

More Details

Probabilistic methods in model validation

Conference Proceedings of the Society for Experimental Mechanics Series

Paez, Thomas L.; Swiler, Laura P.

Extensive experimentation over the past decade has shown that fabricated physical systems that are intended to be identical, and are nominally identical, in fact, differ from one another, and sometimes substantially. This fact makes it difficult to validate a mathematical model for any system and results in the requirement to characterize physical system behavior using the tools of uncertainty quantification. Further, because of the existence of system, component, and material uncertainty, the mathematical models of these elements sometimes seek to reflect the uncertainty. This presentation introduces some of the methods of probability and statistics, and shows how they can be applied in engineering modeling and data analysis. The ideas of randomness and some basic means for measuring and modeling it are presented. The ideas of random experiment, random variable, mean, variance and standard deviation, and probability distribution are introduced. The ideas are introduced in the framework of a practical, yet simple, example; measured data are included. This presentation is the third in a sequence of tutorial discussions on mathematical model validation. The example introduced here is also used in later presentations. © 2009 Society for Experimental Mechanics Inc.
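
A minimal numerical illustration of the quantities mentioned above (sample mean, variance, standard deviation, and an empirical distribution), using invented placeholder measurements rather than the data from the presentation:

    import numpy as np

    # Hypothetical responses of nominally identical fabricated units (placeholder values)
    x = np.array([12.1, 11.8, 12.6, 13.0, 11.5, 12.3, 12.9, 12.0])

    mean = x.mean()            # sample mean
    var = x.var(ddof=1)        # unbiased sample variance
    std = x.std(ddof=1)        # sample standard deviation

    # Empirical cumulative distribution function at the observed points
    xs = np.sort(x)
    ecdf = np.arange(1, xs.size + 1) / xs.size
    print(mean, var, std)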

More Details

Density functional theory (DFT) simulations of shocked liquid xenon

AIP Conference Proceedings

Mattsson, Thomas M.; Magyar, Rudolph J.

Xenon is not only a technologically important element used in laser technologies and jet propulsion, but it is also one of the most accessible materials in which to study the metal-insulator transition with increasing pressure. Because of its closed shell electronic configuration, xenon is often assumed to be chemically inert, interacting almost entirely through the van der Waals interaction, and at liquid density, is typically modeled well using Lennard-Jones potentials. However, such modeling has a limited range of validity as xenon is known to form compounds under normal conditions and likely exhibits considerably more chemistry at higher densities when hybridization of occupied orbitals becomes significant. We present DFT-MD simulations of shocked liquid xenon with the goal of developing an improved equation of state. The calculated Hugoniot to 2 MPa compares well with available experimental shock data. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. © 2009 American Institute of Physics.
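
For reference, Hugoniot states extracted from DFT-MD shock simulations are conventionally related through the Rankine-Hugoniot jump conditions (standard forms, not quoted from this paper):

\rho_0 u_s = \rho\,(u_s - u_p), \qquad P - P_0 = \rho_0\, u_s\, u_p, \qquad E - E_0 = \tfrac{1}{2}\,(P + P_0)\big(\tfrac{1}{\rho_0} - \tfrac{1}{\rho}\big),

where u_s and u_p are the shock and particle velocities and the subscript 0 denotes the initial (unshocked) state.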

More Details

Label-invariant mesh quality metrics

Proceedings of the 18th International Meshing Roundtable, IMR 2009

Knupp, Patrick K.

Mappings from a master element to the physical mesh element, in conjunction with local metrics such as those appearing in the Target-matrix paradigm, are used to measure quality at points within an element. The approach is applied to both linear and quadratic triangular elements; this enables, for example, one to measure quality within a quadratic finite element. Quality within an element may also be measured on a set of symmetry points, leading to so-called symmetry metrics. An important issue having to do with the labeling of the element vertices is relevant to mesh quality tools such as Verdict and Mesquite. Certain quality measures like area, volume, and shape should be label-invariant, while others such as aspect ratio and orientation should not. It is shown that local metrics whose Jacobian matrix is non-constant are label-invariant only at the center of the element, while symmetry metrics can be label-invariant anywhere within the element, provided the reference element is properly restricted.

More Details

Analysis of micromixers and biocidal coatings on water-treatment membranes to minimize biofouling

Altman, Susan J.; Clem, Paul G.; Cook, Adam W.; Hart, William E.; Hibbs, Michael R.; Ho, Clifford K.; Jones, Howland D.; Sun, Amy C.; Webb, Stephen W.

Biofouling, the unwanted growth of biofilms on a surface, negatively impacts the performance of water-treatment membranes used in desalination and water treatment. With biofouling there is a decrease in permeate production, degradation of permeate water quality, and an increase in energy expenditure due to the increased cross-flow pressure required. To date, a universally successful and cost-effective method for controlling biofouling has not been implemented. The overall goal of the work described in this report was to use high-performance computing to direct polymer, material, and biological research to create the next generation of water-treatment membranes. Both physical (micromixers - UV-curable epoxy traces printed on the surface of a water-treatment membrane that promote chaotic mixing) and chemical (quaternary ammonium groups) modifications of the membranes for the purpose of increasing resistance to biofouling were evaluated. Creation of low-cost, efficient water-treatment membranes helps assure the availability of fresh water for human use, a growing need in both the U. S. and the world.

More Details

Summary of the CSRI Workshop on Combinatorial Algebraic Topology (CAT): Software, Applications, & Algorithms

Mitchell, Scott A.; Bennett, Janine C.; Day, David M.

This report summarizes the Combinatorial Algebraic Topology: software, applications & algorithms workshop (CAT Workshop). The workshop was sponsored by the Computer Science Research Institute of Sandia National Laboratories. It was organized by CSRI staff members Scott Mitchell and Shawn Martin. It was held in Santa Fe, New Mexico, August 29-30. The CAT Workshop website has links to some of the talk slides and other information, http://www.cs.sandia.gov/CSRI/Workshops/2009/CAT/index.html. The purpose of the report is to summarize the discussions and recap the sessions. There is a special emphasis on technical areas that are ripe for further exploration, and the plans for follow-up amongst the workshop participants. The intended audiences are the workshop participants, other researchers in the area, and the workshop sponsors.

More Details

Host suppression and bioinformatics for sequence-based characterization of unknown pathogens

Misra, Milind; Patel, Kamlesh P.; Kaiser, Julia N.; Meagher, Robert M.; Branda, Steven B.; Schoeniger, Joseph S.

Bioweapons and emerging infectious diseases pose formidable and growing threats to our national security. Rapid advances in biotechnology and the increasing efficiency of global transportation networks virtually guarantee that the United States will face potentially devastating infectious disease outbreaks caused by novel ('unknown') pathogens either intentionally or accidentally introduced into the population. Unfortunately, our nation's biodefense and public health infrastructure is primarily designed to handle previously characterized ('known') pathogens. While modern DNA assays can identify known pathogens quickly, identifying unknown pathogens currently depends upon slow, classical microbiological methods of isolation and culture that can take weeks to produce actionable information. In many scenarios that delay would be costly, in terms of casualties and economic damage; indeed, it can mean the difference between a manageable public health incident and a full-blown epidemic. To close this gap in our nation's biodefense capability, we will develop, validate, and optimize a system to extract nucleic acids from unknown pathogens present in clinical samples drawn from infected patients. This system will extract nucleic acids from a clinical sample and amplify pathogen and specific host-response nucleic acid sequences. These sequences will then be suitable for ultra-high-throughput sequencing (UHTS) carried out by a third party. The data generated from UHTS will then be processed through a new data assimilation and bioinformatic analysis pipeline that will allow us to characterize an unknown pathogen in hours to days instead of weeks to months. Our methods will require no a priori knowledge of the pathogen, and no isolation or culturing; therefore they will circumvent many of the major roadblocks confronting a clinical microbiologist or virologist when presented with an unknown or engineered pathogen.

More Details

Xyce Parallel Electronic Simulator: Users' Guide, Version 5.1

Keiter, Eric R.; Mei, Ting M.; Russo, Thomas V.; Pawlowski, Roger P.; Schiek, Richard S.; Santarelli, Keith R.; Coffey, Todd S.; Thornquist, Heidi K.

This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.

More Details

Xyce™ Parallel Electronic Simulator: Reference Guide, Version 5.1

Keiter, Eric R.; Mei, Ting M.; Russo, Thomas V.; Pawlowski, Roger P.; Schiek, Richard S.; Santarelli, Keith R.; Coffey, Todd S.; Thornquist, Heidi K.

This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is, to the extent possible, to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

More Details

Modeling aspects of human memory for scientific study

Bernard, Michael L.; Morrow, James D.; Taylor, Shawn E.; Verzi, Stephen J.; Vineyard, Craig M.

Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.

More Details

Crossing the mesoscale no-man's land via parallel kinetic Monte Carlo

Plimpton, Steven J.; Battaile, Corbett C.; Chandross, M.; Holm, Elizabeth A.; Thompson, Aidan P.; Tikare, Veena T.; Wagner, Gregory J.; Webb, Edmund B.; Zhou, Xiaowang Z.

The kinetic Monte Carlo method and its variants are powerful tools for modeling materials at the mesoscale, meaning at length and time scales in between the atomic and continuum. We have completed a 3 year LDRD project with the goal of developing a parallel kinetic Monte Carlo capability and applying it to materials modeling problems of interest to Sandia. In this report we give an overview of the methods and algorithms developed, and describe our new open-source code called SPPARKS, for Stochastic Parallel PARticle Kinetic Simulator. We also highlight the development of several Monte Carlo models in SPPARKS for specific materials modeling applications, including grain growth, bubble formation, diffusion in nanoporous materials, defect formation in erbium hydrides, and surface growth and evolution.
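
For readers unfamiliar with the method, the core of a rejection-free kinetic Monte Carlo step can be sketched in a few lines of Python; this is a generic illustration of the algorithm SPPARKS parallelizes, not code from SPPARKS itself.

    import math, random

    def kmc_step(rates, t):
        # rates: propensities of all currently possible events; t: current time
        total = sum(rates)
        target = random.random() * total        # pick an event proportionally to its rate
        acc, chosen = 0.0, 0
        for chosen, r in enumerate(rates):
            acc += r
            if acc >= target:
                break
        dt = -math.log(1.0 - random.random()) / total   # exponentially distributed waiting time
        return chosen, t + dt

    event, t = kmc_step([0.5, 1.2, 0.1], t=0.0)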

More Details

Evaluation of the impact chip multiprocessors have on SNL application performance

Doerfler, Douglas W.

This report describes trans-organizational efforts to investigate the impact of chip multiprocessors (CMPs) on the performance of important Sandia application codes. The impact of CMPs on the performance and applicability of Sandia's system software was also investigated. The goal of the investigation was to make algorithmic and architectural recommendations for next generation platform acquisitions.

More Details

Decision support for integrated water-energy planning

Tidwell, Vincent C.; Kobos, Peter H.; Malczynski, Leonard A.; Hart, William E.; Castillo, Cesar R.

Currently, electrical power generation uses about 140 billion gallons of water per day, accounting for over 39% of all freshwater withdrawals and competing with irrigated agriculture as the leading user of water. Coupled to this water use is the required pumping, conveyance, treatment, storage, and distribution of the water, which requires on average 3% of all electric power generated. While water and energy use are tightly coupled, planning and management of these fundamental resources are rarely treated in an integrated fashion. Toward this need, a decision support framework has been developed that targets the shared needs of energy and water producers, resource managers, regulators, and decision makers at the federal, state and local levels. The framework integrates analysis and optimization capabilities to identify trade-offs and 'best' alternatives among a broad list of energy/water options and objectives. The decision support framework is formulated in a modular architecture, facilitating tailored analyses over different geographical regions and scales (e.g., national, state, county, watershed, NERC region). An interactive interface allows direct control of the model and access to real-time results displayed as charts, graphs and maps. Ultimately, this open and interactive modeling framework provides a tool for evaluating competing policy and technical options relevant to the energy-water nexus.

More Details

Increasing fault resiliency in a message-passing environment

Ferreira, Kurt; Oldfield, Ron A.; Stearley, Jon S.; Laros, James H.; Pedretti, Kevin T.T.; Brightwell, Ronald B.

Petaflops systems will have tens to hundreds of thousands of compute nodes, which increases the likelihood of faults. Applications use checkpoint/restart to recover from these faults, but even under ideal conditions, applications running on more than 30,000 nodes will likely spend more than half of their total run time saving checkpoints, restarting, and redoing work that was lost. We created a library that performs redundant computations on additional nodes allocated to the application. An active node and its redundant partner form a node bundle which will only fail, and cause an application restart, when both nodes in the bundle fail. The goal of this library is to learn whether this can be done entirely at the user level, what requirements this library places on a Reliability, Availability, and Serviceability (RAS) system, and what its impact on performance and run time is. We find that our redundant MPI layer library imposes a relatively modest performance penalty for applications, but that it greatly reduces the number of application interrupts. This reduction in interrupts leads to huge savings in restart and rework time. For large-scale applications the savings compensate for the performance loss and the additional nodes required for redundant computations.
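
Conceptually, the pairing of an active rank with a redundant partner can be sketched with mpi4py communicator splits; this is only an illustration of the bundle idea under the assumption of an even rank count, not the library described in the report.

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    size, rank = world.Get_size(), world.Get_rank()
    n_active = size // 2                 # assume 2N ranks: N active + N redundant

    bundle_id = rank % n_active          # ranks r and r + n_active form one bundle
    role = rank // n_active              # 0 = active copy, 1 = redundant copy

    bundle = world.Split(color=bundle_id, key=role)   # communicator for one bundle
    app = world.Split(color=role, key=bundle_id)      # communicator each copy of the application runs on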

More Details

Graph algorithms in the Titan toolkit

McLendon, William C.

Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically we describe the design and implementation of an open source toolkit for doing graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.

More Details

Final report LDRD project 105816: model reduction of large dynamic systems with localized nonlinearities

Lehoucq, Richard B.; Dohrmann, Clark R.; Segalman, Daniel J.

Advanced computing hardware and software written to exploit massively parallel architectures greatly facilitate the computation of extremely large problems. On the other hand, these tools, though enabling higher fidelity models, have often resulted in much longer run-times and turn-around-times in providing answers to engineering problems. The impediments include smaller elements and consequently smaller time steps, much larger systems of equations to solve, and the inclusion of nonlinearities that had been ignored in days when lower fidelity models were the norm. The research effort reported focuses on accelerating the analysis process for structural dynamics through combinations of model reduction and mitigation of some factors that lead to over-meshing.

More Details

Performance of a parallel algebraic multilevel preconditioner for stabilized finite element semiconductor device modeling

Journal of Computational Physics

Lin, Paul T.; Shadid, John N.; Sala, Marzio; Tuminaro, Raymond S.; Hennigan, Gary L.; Hoekstra, Robert J.

In this study results are presented for the large-scale parallel performance of an algebraic multilevel preconditioner for solution of the drift-diffusion model for semiconductor devices. The preconditioner is the key numerical procedure determining the robustness, efficiency and scalability of the fully-coupled Newton-Krylov based, nonlinear solution method that is employed for this system of equations. The coupled system is comprised of a source term dominated Poisson equation for the electric potential, and two convection-diffusion-reaction type equations for the electron and hole concentration. The governing PDEs are discretized in space by a stabilized finite element method. Solution of the discrete system is obtained through a fully-implicit time integrator, a fully-coupled Newton-based nonlinear solver, and a restarted GMRES Krylov linear system solver. The algebraic multilevel preconditioner is based on an aggressive coarsening graph partitioning of the nonzero block structure of the Jacobian matrix. Representative performance results are presented for various choices of multigrid V-cycles and W-cycles and parameter variations for smoothers based on incomplete factorizations. Parallel scalability results are presented for solution of up to 10^8 unknowns on 4096 processors of a Cray XT3/4 and an IBM POWER eServer system. © 2009 Elsevier Inc. All rights reserved.
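
Written out schematically, with notation assumed here rather than taken from the paper, the coupled system referred to above is

-\nabla\cdot\big(\epsilon\,\nabla\psi\big) = q\,(p - n + C), \qquad
\frac{\partial n}{\partial t} - \nabla\cdot\big(D_n \nabla n - \mu_n n \nabla\psi\big) = -R(n,p), \qquad
\frac{\partial p}{\partial t} - \nabla\cdot\big(D_p \nabla p + \mu_p p \nabla\psi\big) = -R(n,p),

where \psi is the electric potential, n and p the electron and hole concentrations, C the net doping, and R the net recombination rate.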

More Details

A comparison of Lagrangian/Eulerian approaches for tracking the kinematics of high deformation solid motion

Ames, Thomas L.; Robinson, Allen C.

The modeling of solids is most naturally placed within a Lagrangian framework because it requires constitutive models which depend on knowledge of the original material orientations and subsequent deformations. Detailed kinematic information is needed to ensure material frame indifference which is captured through the deformation gradient F. Such information can be tracked easily in a Lagrangian code. Unfortunately, not all problems can be easily modeled using Lagrangian concepts due to severe distortions in the underlying motion. Either a Lagrangian/Eulerian or a pure Eulerian modeling framework must be introduced. We discuss and contrast several Lagrangian/Eulerian approaches for keeping track of the details of material kinematics.
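
The kinematic quantity at issue is the deformation gradient and its evolution; in the notation assumed here,

\mathbf{F} = \frac{\partial \mathbf{x}}{\partial \mathbf{X}}, \qquad \frac{\partial \mathbf{F}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{F} = \mathbf{L}\,\mathbf{F}, \qquad \mathbf{L} = \nabla_{\mathbf{x}}\mathbf{v},

so a Lagrangian code obtains F directly from the mesh motion, while Lagrangian/Eulerian and purely Eulerian schemes must transport F (or an equivalent set of variables) through a fixed or remapped grid.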

More Details

Investigating methods of supporting dynamically linked executables on high performance computing platforms

Laros, James H.; Kelly, Suzanne M.; Levenhagen, Michael J.; Pedretti, Kevin T.T.

Shared libraries have become ubiquitous and are used to achieve great resource efficiencies on many platforms. The same properties that enable efficiencies on time-shared computers and convenience on small clusters prove to be great obstacles to scalability on large clusters and High Performance Computing platforms. In addition, Light Weight operating systems such as Catamount have historically not supported the use of shared libraries specifically because they hinder scalability. In this report we will outline the methods of supporting shared libraries on High Performance Computing platforms using Light Weight kernels that we investigated. The considerations necessary to evaluate utility in this area are many and sometimes conflicting. While our initial path forward has been determined based on this evaluation, we consider this effort ongoing and remain prepared to re-evaluate any technology that might provide a scalable solution. This report is an evaluation of a range of possible methods of supporting dynamically linked executables on capability-class High Performance Computing platforms. Efforts are ongoing and extensive testing at scale is necessary to evaluate performance. While performance is a critical driving factor, supporting whatever method is used in a production environment is an equally important and challenging task.

More Details

Improving performance via mini-applications

Doerfler, Douglas W.; Crozier, Paul C.; Edwards, Harold C.; Williams, Alan B.; Rajan, Mahesh R.; Keiter, Eric R.; Thornquist, Heidi K.

Application performance is determined by a combination of many choices: hardware platform, runtime environment, languages and compilers used, algorithm choice and implementation, and more. In this complicated environment, we find that the use of mini-applications - small self-contained proxies for real applications - is an excellent approach for rapidly exploring the parameter space of all these choices. Furthermore, use of mini-applications enriches the interaction between application, library and computer system developers by providing explicit functioning software and concrete performance results that lead to detailed, focused discussions of design trade-offs, algorithm choices and runtime performance issues. In this paper we discuss a collection of mini-applications and demonstrate how we use them to analyze and improve application performance on new and future computer platforms.

More Details

Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics. Part I, algorithms and benchmark results

Eldred, Michael S.; Swiler, Laura P.

This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
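
Schematically, with notation assumed here, for a response f(a, e) with aleatory variables a and epistemic interval variables e in a set E, the mixed analysis produces bounds on an aleatory statistic s, such as

\Big[\;\min_{\mathbf{e}\in E}\; s\big(f(\cdot,\mathbf{e})\big),\;\; \max_{\mathbf{e}\in E}\; s\big(f(\cdot,\mathbf{e})\big)\;\Big], \qquad s = \mathbb{E}_{\mathbf{a}}[f] \ \text{or a percentile},

with the inner statistic evaluated from a stochastic expansion and the outer bounds obtained by interval optimization, as described above.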

More Details

Quantitative resilience analysis through control design

Vugrin, Eric D.; Camphouse, Russell C.; Sunderland, Daniel S.

Critical infrastructure resilience has become a national priority for the U. S. Department of Homeland Security. System resilience has been studied for several decades in many different disciplines, but no standards or unifying methods exist for critical infrastructure resilience analysis. Few quantitative resilience methods exist, and those existing approaches tend to be rather simplistic and, hence, not capable of sufficiently assessing all aspects of critical infrastructure resilience. This report documents the results of a late-start Laboratory Directed Research and Development (LDRD) project that investigated the development of quantitative resilience through application of control design methods. Specifically, we conducted a survey of infrastructure models to assess what types of control design might be applicable for critical infrastructure resilience assessment. As a result of this survey, we developed a decision process that directs the resilience analyst to the control method that is most likely applicable to the system under consideration. Furthermore, we developed optimal control strategies for two sets of representative infrastructure systems to demonstrate how control methods could be used to assess the resilience of the systems to catastrophic disruptions. We present recommendations for future work to continue the development of quantitative resilience analysis methods.

More Details

Toward improved branch prediction through data mining

Hemmert, Karl S.

Data mining and machine learning techniques can be applied to computer system design to aid in optimizing design decisions, improving system runtime performance. Data mining techniques have been investigated in the context of branch prediction. Specifically, a comparison of traditional branch predictor performance has been made to data mining algorithms. Additionally, whether additional features available within the architectural state might serve to further improve branch prediction has been evaluated. Results show that data mining techniques indicate potential for improved branch prediction, especially when register file contents are included as a feature set.
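
The kind of experiment described can be sketched as a small supervised-learning pipeline; the feature set, synthetic data, and classifier choice below are illustrative assumptions, not the study's actual traces or algorithms.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    history = rng.integers(0, 2, size=(5000, 16))   # last 16 branch outcomes (synthetic)
    regs = rng.integers(0, 2, size=(5000, 8))       # stand-in register-file feature bits (synthetic)
    X = np.hstack([history, regs])
    y = rng.integers(0, 2, size=5000)               # taken / not-taken labels (synthetic)

    clf = DecisionTreeClassifier(max_depth=8).fit(X[:4000], y[:4000])
    print("held-out accuracy:", clf.score(X[4000:], y[4000:]))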

More Details

Scalable analysis tools for sensitivity analysis and UQ (3160) results

Ice, Lisa I.; Fabian, Nathan D.; Moreland, Kenneth D.; Bennett, Janine C.; Thompson, David C.; Karelitz, David B.

The 9/30/2009 ASC Level 2 milestone, Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160), contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

More Details

A fully implicit method for 3D quasi-steady state magnetic advection-diffusion

Siefert, Christopher S.; Robinson, Allen C.

We describe the implementation of a prototype fully implicit method for solving three-dimensional quasi-steady state magnetic advection-diffusion problems. This method allows us to solve the magnetic advection-diffusion equations in an Eulerian frame with a fixed, user-prescribed velocity field. We have verified the correctness of the method and implementation on two standard verification problems, the Solberg-White magnetic shear problem and the Perry-Jones-White rotating cylinder problem.
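
In the notation assumed here, the magnetic advection-diffusion (resistive induction) equation solved with a prescribed velocity field v takes the form

\frac{\partial \mathbf{B}}{\partial t} = \nabla\times\big(\mathbf{v}\times\mathbf{B}\big) - \nabla\times\big(\eta\,\nabla\times\mathbf{B}\big), \qquad \nabla\cdot\mathbf{B} = 0,

with \eta the magnetic diffusivity; a fully implicit treatment couples the advective and diffusive terms within a single nonlinear solve rather than splitting them.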

More Details

Highly scalable linear solvers on thousands of processors

Siefert, Christopher S.; Tuminaro, Raymond S.; Domino, Stefan P.; Robinson, Allen C.

In this report we summarize research into new parallel algebraic multigrid (AMG) methods. We first provide an introduction to parallel AMG. We then discuss our research in parallel AMG algorithms for very large scale platforms. We detail significant improvements in the AMG setup phase to a matrix-matrix multiplication kernel. We present a smoothed aggregation AMG algorithm with fewer communication synchronization points, and discuss its links to domain decomposition methods. Finally, we discuss a multigrid smoothing technique that utilizes two message passing layers for use on multicore processors.
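
As background, the structure of a single multigrid V-cycle can be sketched on two levels with dense NumPy arrays; this generic illustration (weighted-Jacobi smoothing, Galerkin coarse operator) is not the smoothed-aggregation AMG implementation discussed in the report.

    import numpy as np

    def two_level_vcycle(A, b, x, P, pre=2, post=2, omega=0.7):
        # A: fine-level matrix, b: right-hand side, x: current iterate,
        # P: prolongation (coarse-to-fine interpolation) operator
        Dinv = 1.0 / np.diag(A)
        for _ in range(pre):                       # pre-smoothing
            x = x + omega * Dinv * (b - A @ x)
        r = b - A @ x                              # fine-level residual
        Ac = P.T @ A @ P                           # Galerkin coarse operator
        ec = np.linalg.solve(Ac, P.T @ r)          # direct coarse-level solve
        x = x + P @ ec                             # coarse-grid correction
        for _ in range(post):                      # post-smoothing
            x = x + omega * Dinv * (b - A @ x)
        return x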

More Details

Neural assembly models derived through nano-scale measurements

Fan, Hongyou F.; Forsythe, James C.; Branda, Catherine B.; Warrender, Christina E.; Schiek, Richard S.

This report summarizes accomplishments of a three-year project focused on developing technical capabilities for measuring and modeling neuronal processes at the nanoscale. It was successfully demonstrated that nanoprobes could be engineered that were biocompatible, could be biofunctionalized, and responded within the range of voltages typically associated with a neuronal action potential. Furthermore, the Xyce parallel circuit simulator was employed and models incorporated for simulating the ion channel and cable properties of neuronal membranes. The ultimate objective of the project had been to employ nanoprobes in vivo, with the nematode C. elegans, and derive a simulation based on the resulting data. Techniques were developed allowing the nanoprobes to be injected into the nematode and the neuronal response recorded. To the authors' knowledge, this is the first occasion in which nanoparticles have been successfully employed as probes for recording neuronal response in an in vivo animal experimental protocol.

More Details