The 9/30/2009 ASC Level 2 milestone, Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160), delivers feature recognition capabilities required by the user community for certain verification and validation tasks focused on sensitivity analysis and uncertainty quantification (UQ). These capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on both sample and actual simulations. In addition, a number of stretch criteria were met, including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.
This document describes how to obtain, install, use, and enjoy a better life with OVIS version 2.0. The OVIS project targets scalable, real-time analysis of very large data sets. We characterize the behaviors of elements and aggregations of elements (e.g., across space and time) in data sets in order to detect anomalous behaviors. We are particularly interested in determining anomalous behaviors that can serve as advance indicators of significant events, so that notification can be made or action taken. The OVIS open source tool (BSD license) is available for download at ovis.ca.sandia.gov. While we intend for it to support a variety of application domains, the OVIS tool was initially developed for, and continues to be primarily tuned for, the investigation of High Performance Computing (HPC) cluster system health. In this application it is intended to be both a system administrator tool for monitoring and a system engineer tool for exploring the system state in depth. OVIS 2.0 provides a variety of statistical tools for examining the behavior of elements in a cluster (e.g., nodes, racks) and associated resources (e.g., storage appliances and network switches). It calculates and reports model values and outliers relative to those models. Additionally, it provides an interactive 3D physical view in which the cluster elements can be colored by raw element values (e.g., temperatures, memory errors) or by the comparison of those values to a given model. The analysis tools and the visual display allow the user to easily determine abnormal or outlier behaviors. The OVIS project envisions that the OVIS tool, when applied to compute cluster monitoring, will be used in conjunction with the scheduler or resource manager in order to enable intelligent resource utilization. For example, nodes that are deemed less healthy, that is, nodes that exhibit outlier behavior in some variable or set of variables that has been shown to be correlated with future failure, can be discovered and assigned to shorter duration or less important jobs. Further, applications with fault-tolerant capabilities can invoke those mechanisms on demand, based upon notification that a node is exhibiting impending failure conditions, rather than performing such mechanisms (e.g., checkpointing) unnecessarily at regular intervals.
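The following minimal C++ sketch (our illustration, not OVIS source code) shows the model-versus-outlier idea described above: a peer group of elements is summarized by a simple statistical model, here a mean and standard deviation, and any element deviating from that model by more than a chosen threshold is flagged.

    // Hypothetical illustration of model-based outlier flagging (not OVIS code).
    #include <cmath>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Reading { std::string node; double value; };

    int main() {
      // Example: per-node temperatures for one rack (values are made up).
      std::vector<Reading> readings = {
        {"node01", 41.2}, {"node02", 40.8}, {"node03", 42.1},
        {"node04", 57.9}, {"node05", 41.5}, {"node06", 40.9}};

      // Build a simple model of the peer group: mean and standard deviation.
      double sum = 0.0, sumSq = 0.0;
      for (const Reading& r : readings) { sum += r.value; sumSq += r.value * r.value; }
      const double n = static_cast<double>(readings.size());
      const double mean = sum / n;
      const double stdev = std::sqrt(sumSq / n - mean * mean);

      // Flag elements whose deviation from the model exceeds a chosen threshold.
      const double threshold = 2.0;  // "2 sigma" is an arbitrary illustrative choice
      for (const Reading& r : readings) {
        const double deviation = (r.value - mean) / stdev;
        if (std::fabs(deviation) > threshold)
          std::cout << r.node << " is an outlier (" << deviation << " sigma)\n";
      }
      return 0;
    }

OVIS supports a variety of richer models than this single mean-and-deviation example, but the flagging pattern is the same: compare each element's value to a model of its peer group and report the outliers.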
This report presents progress on identifying and classifying features involving combustion in turbulent flow using principal component analysis (PCA) and k-means clustering within an in situ analysis framework. We describe a process for extracting temporally- and spatially-varying information from the simulation, classifying that information, and then applying the classification algorithm either to portions of the simulation not used for training the classifier or to further simulations. Because the regions classified as being of interest occupy a small portion of the overall simulation domain, it will consume fewer resources to perform further analysis on these regions, or to save them at a higher fidelity, than was previously possible. The implementation of this process is partially complete, and results obtained from PCA of test data are presented that indicate the process may have merit: the basis vectors that PCA provides are significantly different in regions where combustion is occurring, and even when all 21 species of a lifted flame simulation are correlated, the computational cost of PCA is minimal. What remains to be determined is whether k-means (or other) clustering techniques will be able to identify combined combustion and flow features with an accuracy that makes further characterization of these regions feasible and meaningful.
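As a rough sketch of the PCA-plus-clustering pipeline described above (this is not the project's in situ implementation; the column names, data values, and cluster count are invented for illustration), the following C++ fragment drives VTK's vtkPCAStatistics and vtkKMeansStatistics classes over a table of per-point species values.

    // Illustrative sketch only: PCA followed by k-means on per-point species data.
    #include <vtkDoubleArray.h>
    #include <vtkKMeansStatistics.h>
    #include <vtkNew.h>
    #include <vtkPCAStatistics.h>
    #include <vtkStatisticsAlgorithm.h>
    #include <vtkTable.h>

    int main() {
      // Fabricated example data: one row per grid point, one column per species.
      const char* species[] = {"OH", "CH4", "O2"};
      vtkNew<vtkTable> table;
      for (const char* name : species) {
        vtkNew<vtkDoubleArray> col;
        col->SetName(name);
        for (int i = 0; i < 1000; ++i)
          col->InsertNextValue(static_cast<double>((i * 7 + name[0]) % 100) / 100.0);
        table->AddColumn(col.GetPointer());
      }

      // Learn a PCA model over the selected species columns.
      vtkNew<vtkPCAStatistics> pca;
      pca->SetInputData(vtkStatisticsAlgorithm::INPUT_DATA, table.GetPointer());
      for (const char* name : species) pca->SetColumnStatus(name, 1);
      pca->RequestSelectedColumns();
      pca->SetLearnOption(true);
      pca->SetDeriveOption(true);
      pca->SetAssessOption(true);
      pca->Update();

      // Cluster the points; the number of clusters is an illustrative assumption.
      vtkNew<vtkKMeansStatistics> kmeans;
      kmeans->SetInputData(vtkStatisticsAlgorithm::INPUT_DATA, table.GetPointer());
      for (const char* name : species) kmeans->SetColumnStatus(name, 1);
      kmeans->RequestSelectedColumns();
      kmeans->SetDefaultNumberOfClusters(4);
      kmeans->SetLearnOption(true);
      kmeans->SetDeriveOption(true);
      kmeans->SetAssessOption(true);
      kmeans->Update();
      return 0;
    }

A full pipeline would presumably cluster the PCA-projected coordinates rather than the raw columns, and would then apply the trained classifier to simulation regions not used for training, as described above.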
This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying, in polynomial time, the chamber cone containing a polynomial system in n variables with n+k terms, a significant improvement over previous algorithms, all of which have exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for systems with n+3 terms and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it.
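To illustrate what "a polynomial system in n variables with n+k terms" means (our example, not one from the report; we count the distinct monomials appearing across the whole system), the system below has n = 2 variables and 5 distinct monomials, i.e., k = 3, the case for which the implementation has been verified:

    \[
      \begin{aligned}
        f_1(x,y) &= 1 + 3x - 2y + x^2 y   = 0,\\
        f_2(x,y) &= 2 - x + 5y + 4\,x y^2 = 0,
      \end{aligned}
      \qquad
      \text{monomials appearing: } \{\, 1,\ x,\ y,\ x^2 y,\ x y^2 \,\}.
    \]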
This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized multi-correlative and principal component analysis engines. It is a sequel to [PT08], which studied the parallel descriptive and correlative engines. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
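In the spirit of the report's snippets, the sketch below shows how one of the parallel engines might be driven from C++ under MPI; the class vtkPPCAStatistics and the standard vtkStatisticsAlgorithm options come from VTK/Titan, while the table contents and column names are invented, and details such as controller wiring may differ from the report's own examples.

    // Illustrative sketch: driving the parallel PCA engine under MPI.
    #include <vtkDoubleArray.h>
    #include <vtkMPIController.h>
    #include <vtkMultiProcessController.h>
    #include <vtkNew.h>
    #include <vtkPPCAStatistics.h>
    #include <vtkStatisticsAlgorithm.h>
    #include <vtkTable.h>

    int main(int argc, char* argv[]) {
      vtkNew<vtkMPIController> controller;
      controller->Initialize(&argc, &argv);
      // Most parallel VTK filters pick up the global controller by default.
      vtkMultiProcessController::SetGlobalController(controller.GetPointer());

      // Each rank builds its local block of observations (values invented here).
      const char* columns[] = {"Temperature", "Pressure"};
      vtkNew<vtkTable> table;
      for (const char* name : columns) {
        vtkNew<vtkDoubleArray> col;
        col->SetName(name);
        for (int i = 0; i < 10000; ++i)
          col->InsertNextValue(0.001 * i + controller->GetLocalProcessId());
        table->AddColumn(col.GetPointer());
      }

      // The parallel engine aggregates statistics across all ranks during Learn.
      vtkNew<vtkPPCAStatistics> pca;
      pca->SetInputData(vtkStatisticsAlgorithm::INPUT_DATA, table.GetPointer());
      for (const char* name : columns) pca->SetColumnStatus(name, 1);
      pca->RequestSelectedColumns();
      pca->SetLearnOption(true);
      pca->SetDeriveOption(true);
      pca->SetAssessOption(true);
      pca->Update();

      controller->Finalize();
      return 0;
    }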
This report summarizes the existing statistical engines in VTK/Titan and presents the parallel versions thereof that have already been implemented. The ease of use of these parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; this theoretical property is then verified with test runs that demonstrate optimal parallel speed-up with up to 200 processors.
This report presents the novel functionality of parallel, edge-based tetrahedral mesh refinement that we have implemented in MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06], while information on the design, performance, and operation specific to MOAB is contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation which impacts the algorithm. This introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.
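As a reminder of what edge-based refinement produces, the following self-contained C++ sketch performs the classic uniform 1-to-8 midpoint subdivision of a single tetrahedron; it illustrates the underlying idea only and does not use MOAB's interfaces or its streaming refiner.

    // Illustrative uniform edge-midpoint (1-to-8) subdivision of one tetrahedron.
    // Not MOAB code; it only shows the connectivity pattern produced when every
    // edge of a tetrahedron is split at its midpoint. Orientations are not
    // normalized here.
    #include <array>
    #include <cstdio>

    int main() {
      // Corner vertex ids 0-3; midpoint ids 4-9 keyed by the edge they bisect.
      enum { V0, V1, V2, V3, M01, M02, M03, M12, M13, M23 };

      // Four corner tetrahedra (one per original vertex) plus four interior
      // tetrahedra from the central octahedron, split along the diagonal
      // M01-M23 (one of three possible diagonal choices).
      const std::array<std::array<int, 4>, 8> children = {{
          {V0, M01, M02, M03}, {V1, M01, M12, M13},
          {V2, M02, M12, M23}, {V3, M03, M13, M23},
          {M01, M23, M02, M03}, {M01, M23, M03, M13},
          {M01, M23, M13, M12}, {M01, M23, M12, M02}}};

      for (const auto& tet : children)
        std::printf("child tet: %d %d %d %d\n", tet[0], tet[1], tet[2], tet[3]);
      return 0;
    }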
This report specifies the way in which Gauss points shall be named and ordered when storing them in an EXODUS II file so that they may be properly interpreted by visualization tools. This naming convention covers hexahedra and tetrahedra. Future revisions of this document will cover quadrilaterals, triangles, and shell elements.
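The snippet below is a hypothetical illustration only; the actual naming and ordering convention is what the report defines. It shows the general mechanism: because each EXODUS II element variable stores one scalar per element, a quantity sampled at several Gauss points is typically written as several element variables whose names encode the point index.

    // Hypothetical illustration only; the report defines the real convention.
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
      const std::string quantity = "eqps";   // invented variable name
      const int num_gauss_points = 8;        // e.g., a 2x2x2 rule on a hexahedron
      std::vector<std::string> names;
      for (int p = 1; p <= num_gauss_points; ++p)
        names.push_back(quantity + "_GP" + std::to_string(p));  // pattern is made up
      for (const std::string& n : names) std::printf("%s\n", n.c_str());
      return 0;
    }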
The purpose of this report is to define a standard interface for storing and retrieving novel, non-traditional partial differential equation (PDE) discretizations. Although it focuses specifically on finite elements where state is associated with edges and faces of volumetric elements rather than nodes and the elements themselves (as implemented in ALEGRA), the proposed interface should be general enough to accommodate most discretizations, including hp-adaptive finite elements and even mimetic techniques that define fields over arbitrary polyhedra. This report reviews the representation of edge and face elements as implemented by ALEGRA. It then specifies a convention for storing these elements in EXODUS files by extending the EXODUS API to include edge and face blocks in addition to element blocks. Finally, it presents several techniques for rendering edge and face elements using VTK and ParaView, including the use of VTK's generic dataset interface for interpolating values interior to edges and faces.
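To make the notion of an edge block concrete, here is a purely illustrative C++ layout (our sketch; the actual EXODUS API extension and its calls are specified in the report itself): a block groups entities of one kind and carries their connectivity together with the field values attached to those entities.

    // Purely illustrative layout of an "edge block"; not the EXODUS API.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct EdgeBlock {
      int id = 0;                        // block identifier
      std::string entry_type = "EDGE2";  // two-node edges (illustrative name)
      int nodes_per_edge = 2;
      std::vector<int> connectivity;     // node ids, nodes_per_edge per edge
      // One value per edge for each edge-centered field, e.g., a degree of
      // freedom associated with an edge in an edge-element discretization.
      std::vector<std::vector<double>> edge_fields;
    };

    int main() {
      EdgeBlock blk;
      blk.id = 100;
      blk.connectivity = {0, 1, 1, 2, 2, 0};    // the three edges of a triangle
      blk.edge_fields = {{0.25, -0.5, 0.125}};  // one field, one value per edge
      std::printf("block %d holds %zu edges\n", blk.id,
                  blk.connectivity.size() / blk.nodes_per_edge);
      return 0;
    }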
Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to deliver order-of-magnitude performance gains on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system-level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).
Verdict is a collection of subroutines for evaluating the geometric qualities of triangles, quadrilaterals, tetrahedra, and hexahedra using a variety of metrics. A metric is a real number assigned to one of these shapes depending on its particular vertex coordinates. These metrics are used to evaluate the input to finite element, finite volume, boundary element, and other types of solvers that approximate the solution to partial differential equations defined over regions of space. The geometric qualities of these regions are usually strongly tied to the accuracy these solvers are able to obtain in their approximations. The subroutines are written in C++ and have a simple C interface. Each metric may be evaluated individually or in combination. When multiple metrics are evaluated at once, they share common calculations to lower the cost of the evaluation.
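As an example of the kind of metric Verdict evaluates (this snippet is illustrative and does not use Verdict's actual C interface), the following computes a common triangle shape quality that equals 1 for an equilateral triangle and approaches 0 as the triangle degenerates into a sliver.

    // Illustrative triangle shape metric (not Verdict's API):
    //   q = 4*sqrt(3)*area / (sum of squared edge lengths),
    // which is 1 for an equilateral triangle and tends to 0 for a sliver.
    #include <cmath>
    #include <cstdio>

    double triangle_quality(const double a[3], const double b[3], const double c[3]) {
      double ab[3], ac[3], bc[3];
      for (int i = 0; i < 3; ++i) {
        ab[i] = b[i] - a[i];
        ac[i] = c[i] - a[i];
        bc[i] = c[i] - b[i];
      }
      // Twice the area is the length of the cross product ab x ac.
      const double cx = ab[1] * ac[2] - ab[2] * ac[1];
      const double cy = ab[2] * ac[0] - ab[0] * ac[2];
      const double cz = ab[0] * ac[1] - ab[1] * ac[0];
      const double area = 0.5 * std::sqrt(cx * cx + cy * cy + cz * cz);
      const double edge_sq = (ab[0]*ab[0] + ab[1]*ab[1] + ab[2]*ab[2]) +
                             (ac[0]*ac[0] + ac[1]*ac[1] + ac[2]*ac[2]) +
                             (bc[0]*bc[0] + bc[1]*bc[1] + bc[2]*bc[2]);
      return 4.0 * std::sqrt(3.0) * area / edge_sq;
    }

    int main() {
      const double a[3] = {0.0, 0.0, 0.0};
      const double b[3] = {1.0, 0.0, 0.0};
      const double c[3] = {0.5, std::sqrt(3.0) / 2.0, 0.0};  // equilateral: q = 1
      std::printf("quality = %f\n", triangle_quality(a, b, c));
      return 0;
    }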
This report contains an algorithm for decomposing higher-order finite elements into regions appropriate for isosurfacing and proves the conditions under which the algorithm will terminate. Finite elements are used to create piecewise polynomial approximants to the solution of partial differential equations for which no analytical solution exists. These polynomials represent fields such as pressure, stress, and momentum. In the past, these polynomials have been linear in each parametric coordinate. Each polynomial coefficient must be uniquely determined by a simulation, and these coefficients are called degrees of freedom. When there are not enough degrees of freedom, simulations will typically fail to produce a valid approximation to the solution. Recent work has shown that increasing the number of degrees of freedom by increasing the order of the polynomial approximation (instead of increasing the number of finite elements, each of which has its own set of coefficients) can allow some types of simulations to produce a valid approximation with many fewer degrees of freedom than increasing the number of finite elements alone. However, once the simulation has determined the values of all the coefficients in a higher-order approximant, tools do not exist for visual inspection of the solution. This report focuses on a technique for the visual inspection of higher-order finite element simulation results based on decomposing each finite element into simplicial regions where existing visualization algorithms such as isosurfacing will work. The requirements of the isosurfacing algorithm are enumerated and related to the places where the partial derivatives of the polynomial become zero. The original isosurfacing algorithm is then applied to each of these regions in turn. Acknowledgement: The authors would like to thank David Day and Louis Romero for their insight into polynomial system solvers and the LDRD Senior Council for the opportunity to pursue this research. The authors were supported by the United States Department of Energy, Office of Defense Programs, by the Laboratory Directed Research and Development Senior Council, project 90499. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed-Martin Company, for the United States Department of Energy under contract DE-AC04-94-AL85000.
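A one-dimensional analogue (our illustration, not taken from the report) of why the zeros of the partial derivatives matter: within a region containing no such zeros the field is monotonic, so every isovalue is crossed at most once and the usual linear-element contouring logic applies piece by piece.

    % 1-D analogue: a quadratic field over the parametric coordinate r in [0,1]
    % is split where its derivative vanishes, yielding monotonic pieces.
    \[
      f(r) = 4\,r\,(1 - r), \qquad f'(r) = 4 - 8r = 0 \iff r = \tfrac{1}{2}.
    \]

Splitting the element at r = 1/2 gives two regions on which f is monotonic (rising from 0 to 1, then falling back to 0). An isovalue such as c = 0.9 is then found once in each region, whereas a single linear interpolation over [0, 1], where f = 0 at both endpoints, would miss both crossings entirely.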