MPI Task Placement on Multicores
Nuclear Engineering and Design
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that understanding of the predictive capability of a computational model is built on the level of achievement in V&V activities, how closely related the V&V benchmarks are to the actual application of interest, and the quantification of uncertainties related to the application of interest. © 2007 Elsevier B.V. All rights reserved.
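As a concrete illustration of code verification by manufactured solutions, the short symbolic sketch below derives the source term that forces a heat-equation solver toward a chosen exact solution; the PDE, the chosen solution u(x, t), and the use of sympy are illustrative assumptions, not details from the paper.

import sympy as sp

x, t, k = sp.symbols("x t k")

# 1. Manufacture a smooth exact solution with non-trivial derivatives.
u = sp.sin(sp.pi * x) * sp.exp(-t)

# 2. Apply the PDE operator (here u_t - k*u_xx) to obtain the source term
#    that, added to the code's right-hand side, makes u the exact solution.
source = sp.simplify(sp.diff(u, t) - k * sp.diff(u, x, 2))
print(source)  # (k*pi**2 - 1)*exp(-t)*sin(pi*x), up to term ordering

The discretization error measured against the manufactured solution can then be checked for the expected order of convergence as the mesh is refined.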
Computing in Science and Engineering
Large, complex graphs arise in many settings, including the Internet, social networks, and communication networks. To study such data sets, the authors explored the use of high-performance computing (HPC) for graph algorithms. They found that the challenges in these applications are quite different from those arising in traditional HPC applications and that massively multithreaded machines are well suited for graph problems. © 2008 IEEE.
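The kernel of many such graph computations is a level-synchronous breadth-first search, sketched minimally below; the adjacency-list representation and the toy graph are illustrative assumptions. The irregular, data-dependent indexing in the inner loop is what defeats cache-based nodes and suits massively multithreaded hardware.

from collections import deque

def bfs_levels(adj, source):
    """Return the BFS level of every vertex reachable from source."""
    level = {source: 0}
    frontier = deque([source])
    while frontier:
        v = frontier.popleft()
        for w in adj[v]:          # irregular, data-dependent memory access
            if w not in level:
                level[w] = level[v] + 1
                frontier.append(w)
    return level

# Example: an undirected path graph 0-1-2-3.
print(bfs_levels({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, 0))  # levels 0..3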
This report describes the Licensing Support Network (LSN) Assistant--a set of tools for categorizing e-mail messages and documents, and investigating and correcting existing archives of categorized e-mail messages and documents. The two main tools in the LSN Assistant are the LSN Archive Assistant (LSNAA) tool for recategorizing manually labeled e-mail messages and documents, and the LSN Realtime Assistant (LSNRA) tool for categorizing new e-mail messages and documents. This report focuses on the LSNAA tool. There are two main components of the LSNAA tool. The first is the Sandia Categorization Framework, which is responsible for providing categorizations for documents in an archive and storing them in an appropriate Categorization Database. The second is the actual user interface, which primarily interacts with the Categorization Database, providing a way to find and correct categorization errors in the database. A procedure for applying the LSNAA tool and an example use case of the LSNAA tool applied to a set of e-mail messages are provided. Performance results of the categorization model designed for this example use case are presented.
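The report's categorization model is not reproduced here, but the hypothetical scikit-learn pipeline below sketches the general shape of such a component: vectorize message text, fit a classifier, and emit predictions that a tool like the LSNAA would store in the Categorization Database for later review. The TF-IDF features and naive Bayes classifier are assumptions for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: (message text, category label) pairs.
messages = [
    "quarterly budget review attached",
    "invoice payment overdue notice",
    "meeting moved to 3pm tomorrow",
    "agenda for tomorrow's staff meeting",
]
labels = ["finance", "finance", "scheduling", "scheduling"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Predicted category for a new message; in the LSNAA workflow this label
# would be written to the Categorization Database and audited by a human.
print(model.predict(["please approve the attached invoice"]))  # ['finance']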
In 2004, the Responsive Neutron Generator Product Deployment department embarked upon a partnership with the Systems Engineering and Analysis knowledge management (KM) team to develop knowledge management systems for the neutron generator (NG) community. This partnership continues today. The most recent challenge was to improve the current KM system (KMS) development approach by identifying a process that will allow staff members to capture knowledge as they learn it. This 'as-you-go' approach will lead to a sustainable KM process for the NG community. This paper presents a historical overview of NG KMSs, as well as research conducted to move toward sustainable KM.
Balancing fairness, user performance, and system performance is a critical concern when developing and installing parallel schedulers. Sandia uses a customized scheduler to manage many of its parallel machines. A primary function of the scheduler is to ensure that the machines have good utilization and that users are treated in a 'fair' manner. A separate compute process allocator (CPA) ensures that the jobs on the machines are not too fragmented, in order to maximize throughput. Until recently, there was no established technique for measuring the fairness of parallel job schedulers. This paper introduces a 'hybrid' fairness metric that is similar to recently proposed metrics. The metric uses the Sandia version of a 'fairshare' queuing priority as the basis for fairness. The hybrid fairness metric is used to evaluate a Sandia workload. Using these results, multiple scheduling strategies are introduced to improve performance while satisfying user and system performance constraints.
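For concreteness, the sketch below shows one common convention for a fairshare queuing priority (exponentially decayed usage compared against an allocated share, similar in spirit to SLURM's fairshare factor); the abstract does not give Sandia's actual formula, so every constant and function here is an illustrative assumption.

def decayed_usage(records, half_life_days=7.0):
    """Sum past usage with exponential decay, so old jobs count less.
    records: list of (age_in_days, node_hours) for one user."""
    return sum(h * 0.5 ** (age / half_life_days) for age, h in records)

def fairshare_priority(user_usage, machine_usage, share):
    """Fairshare factor 2**(-U/S): 1.0 for an idle user, 0.5 when decayed
    usage exactly matches the allocated share, approaching 0 beyond it."""
    normalized = user_usage / machine_usage   # U: user's fraction of usage
    return 0.5 ** (normalized / share)        # S: allocated fraction

# A user with a 25% share whose usage is mostly old keeps high priority.
u = decayed_usage([(14.0, 100.0), (1.0, 40.0)])
print(fairshare_priority(u, machine_usage=1000.0, share=0.25))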
Proposed for publication in SIAM Review.
This paper demonstrates that the conditions for the existence of a dissipation-induced heteroclinic orbit between the inverted and noninverted states of a tippe top are determined by a complex version of the equations for a simple harmonic oscillator: the modified Maxwell-Bloch equations. A standard linear analysis reveals that the modified Maxwell-Bloch equations describe the spectral instability of the noninverted state and Lyapunov stability of the inverted state. Standard nonlinear analysis based on the energy-momentum method gives necessary and sufficient conditions for the existence of a dissipation-induced connecting orbit between these relative equilibria.
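Schematically, such a complex oscillator is a single second-order linear ODE over the complex numbers; the coefficient names below are illustrative and are not the paper's notation:

\[
  \ddot{x} + (\delta + i\gamma)\,\dot{x} + (\kappa + i\nu)\,x = 0,
  \qquad x(t) \in \mathbb{C},
\]

where the real parts $\delta$ and $\kappa$ model dissipative and potential forces, the imaginary parts $\gamma$ and $\nu$ model gyroscopic and circulatory forces, and a linear stability analysis of an equilibrium reduces to a quadratic characteristic polynomial with complex coefficients.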
46th AIAA Aerospace Sciences Meeting and Exhibit
ALEGRA is an arbitrary Lagrangian-Eulerian (multiphysics) computer code developed at Sandia National Laboratories since 1990. The code contains a variety of physics options including magnetics, radiation, and multimaterial flow. The code has been developed for nearly two decades, but recent work has dramatically improved its accuracy and robustness. These improvements include techniques applied to the basic Lagrangian differencing, the artificial viscosity, and the remap step of the method, including an important improvement in the scheme's basic conservation of energy. We discuss the various algorithmic improvements and their impact on the results for important applications, including magnetic implosions, ceramic fracture modeling, and electromagnetic launch. Copyright © 2008 by the American Institute of Aeronautics and Astronautics, Inc.
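Of the pieces named above, the artificial viscosity is the easiest to show in miniature. The sketch below is a textbook von Neumann-Richtmyer-style form for a 1-D staggered Lagrangian mesh, not ALEGRA's actual implementation, and the coefficients are typical illustrative values.

import numpy as np

def artificial_viscosity(rho, sound_speed, u, c_quad=2.0, c_lin=1.0):
    """Zone-centered viscous pressure q; u holds node velocities (len n+1),
    rho and sound_speed are zone-centered arrays (len n)."""
    du = np.diff(u)                     # velocity jump across each zone
    q = rho * (c_quad * du**2 - c_lin * sound_speed * du)
    return np.where(du < 0.0, q, 0.0)   # act only on compressing zones

# One compressing zone (right node moving left) gets a positive viscous
# pressure; an expanding zone would get zero.
print(artificial_viscosity(np.array([1.0]), np.array([1.0]),
                           np.array([0.0, -1.0])))  # [3.]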
Peridynamics is a nonlocal formulation of continuum mechanics. The discrete peridynamic model has the same computational structure as a molecular dynamics model. This document details the implementation of a discrete peridynamic model within the LAMMPS molecular dynamics code. It provides a brief overview of the peridynamic model of a continuum, discusses how the peridynamic model is discretized, and summarizes the LAMMPS implementation. A nontrivial example problem is also included.
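The pairwise structure the document exploits can be seen in a minimal force loop. The prototype-microelastic-brittle (PMB) bond force below is a standard bond-based choice used here for illustration, not necessarily the report's material model.

import numpy as np

def peridynamic_forces(x_ref, u, volumes, horizon, c):
    """Internal force density at each node: sum of bond forces over all
    neighbors within the horizon, exactly like a pairwise MD force loop."""
    force = np.zeros_like(u)
    n = len(x_ref)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xi = x_ref[j] - x_ref[i]              # reference bond vector
            xi_len = np.linalg.norm(xi)
            if xi_len > horizon:
                continue                          # j outside i's horizon
            eta = u[j] - u[i]                     # relative displacement
            r = np.linalg.norm(xi + eta)          # deformed bond length
            stretch = (r - xi_len) / xi_len
            # PMB bond force: linear in stretch, along the deformed bond.
            force[i] += c * stretch * (xi + eta) / r * volumes[j]
    return force

# Two nodes stretched apart pull back toward each other.
x = np.array([[0.0, 0.0], [1.0, 0.0]])
u = np.array([[0.0, 0.0], [0.1, 0.0]])
print(peridynamic_forces(x, u, np.ones(2), horizon=1.5, c=1.0))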