Sandia has a legacy of leadership in the advancement of high performance computing (HPC) at extreme scales. First-of-a-kind scalable distributed-memory parallel platforms such as the Intel Paragon, ASCI Red (the world's first teraflops computer), and Red Storm (co-developed with Cray) helped form the basis for one of the most successful supercomputer product lines ever: the Cray XT series. Sandia also has pioneered system software elements—including lightweight operating systems, the Portals network programming interface, advanced interconnection network designs, and scalable I/O—that are critical to achieving scalability on large computing systems.
QMU stands for 'Quantification of Margins and Uncertainties'. QMU is a basic framework for consistently integrating simulation, data, and/or subject matter expertise to provide input into a risk-informed decision-making process. QMU is being applied to a wide range of NNSA stockpile issues, from performance to safety, and its implementation varies with laboratory and application focus. The Advanced Simulation and Computing (ASC) Program develops validated computational simulation tools to be applied in the context of QMU. The completeness aspect of QMU can benefit from the structured methodology and discipline of quantitative risk assessment (QRA)/probabilistic risk assessment (PRA). In characterizing uncertainties it is important to distinguish between those arising from incomplete knowledge ('epistemic', or systematic) and those arising from device-to-device variation ('aleatory', or random). The national security labs should investigate the utility of a probability of frequency (PoF) approach in presenting uncertainties in the stockpile. A QMU methodology is connected if the interactions between failure modes are included. The design labs should continue to focus attention on quantifying epistemic uncertainties such as poorly modeled phenomena, numerical errors, coding errors, and systematic uncertainties in experiments. The NNSA and design labs should ensure that the certification plan for any RRW is supported by strong, timely peer review and by ongoing, transparent QMU-based documentation and analysis, in order to achieve the confidence level necessary for eventual certification.
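To make the epistemic/aleatory distinction and the probability-of-frequency idea concrete, the following is a minimal sketch of a nested (double-loop) Monte Carlo calculation. The `performance_margin` function and all distributions are illustrative placeholders, not taken from the source: the outer loop samples an uncertain-but-fixed state of knowledge (epistemic), the inner loop samples unit-to-unit variation (aleatory), and the result is a distribution over failure frequencies rather than a single probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def performance_margin(threshold, variation):
    # Toy margin model: margin = threshold - unit-to-unit response
    # (illustrative only; a real assessment would use a physics model)
    return threshold - variation

n_epistemic = 200   # outer loop: states of knowledge (systematic uncertainty)
n_aleatory = 5000   # inner loop: device-to-device variation (random uncertainty)

failure_freqs = np.empty(n_epistemic)
for i in range(n_epistemic):
    # Outer draw: an uncertain-but-fixed model parameter (epistemic)
    threshold = rng.normal(3.0, 0.3)
    # Inner draws: unit-to-unit variability (aleatory)
    variation = rng.normal(0.0, 1.0, n_aleatory)
    failure_freqs[i] = np.mean(performance_margin(threshold, variation) < 0.0)

# The spread of failure_freqs is a "probability of frequency" statement:
# each outer sample yields a frequency, and the outer distribution expresses
# epistemic uncertainty about what that frequency is.
print("median frequency:", np.median(failure_freqs))
print("90% epistemic interval:", np.percentile(failure_freqs, [5, 95]))
```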
There is considerable interest in achieving a 1000-fold increase in supercomputing power in the next decade, but the challenges are formidable. In this paper, the authors discuss some of the driving science and security applications that require exascale computing (a million trillion, or 10^18, operations per second). Key architectural challenges include power, memory, interconnection networks, and resilience. The paper summarizes ongoing research aimed at overcoming these hurdles. Topics of interest are architecture-aware and scalable algorithms, system simulation, 3D integration, new approaches to system-directed resilience, and new benchmarks. Although significant progress is being made, a broader international program is needed.
As computational needs for structural finite element analysis increase, a robust implicit structural dynamics code is needed that can handle models with millions of degrees of freedom and produce results with quick turnaround time. A parallel code is needed to avoid the limitations of serial platforms. Salinas is an implicit structural dynamics code designed specifically for massively parallel platforms. It computes the structural response of very large, complex structures and provides solutions faster than any existing serial machine can. This paper gives the current status of Salinas and uses demonstration problems to illustrate its performance.
This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed toward developing methodologies that treat model-form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: development of a framework for the sources of uncertainty and error in the modeling and simulation process that impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
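As one simplified illustration of Bayesian model-uncertainty assessment, the sketch below weights two competing model forms by approximate posterior probabilities derived from the Bayesian information criterion (BIC) and forms a model-averaged prediction. The data, candidate model forms, and BIC approximation are assumptions chosen for illustration; they are one standard approach to model-form uncertainty, not necessarily the methodology developed in this effort.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observations from an unknown "true" process (illustrative)
x = np.linspace(0.0, 1.0, 25)
y = 1.0 + 2.0 * x + 0.6 * x**2 + rng.normal(0.0, 0.05, x.size)

def fit_and_bic(degree):
    """Fit one candidate polynomial model form; return its BIC and coefficients."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1
    sigma2 = np.mean(resid**2)
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return -2.0 * loglik + k * np.log(n), coeffs

# Competing model forms: linear vs. quadratic
bics, fits = zip(*(fit_and_bic(d) for d in (1, 2)))
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()  # approximate posterior model probabilities

# A model-averaged prediction carries model-form uncertainty into the result
x_new = 0.5
pred = sum(wi * np.polyval(ci, x_new) for wi, ci in zip(w, fits))
print("model weights:", w, "| averaged prediction:", pred)
```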
Salinas provides a massively parallel implementation of structural dynamics finite element analysis, required for the high-fidelity, validated models used in modal, vibration, static, and shock analysis of weapons systems. This document provides a user's guide to the input for Salinas. Details of input specifications for the different solution types, output options, element types, and parameters are included. The appendices contain detailed examples and instructions for running the software on parallel platforms.
A response surface methodology-based technique is presented for treating discretization error in non-deterministic analysis. The response surface, or metamodel, is estimated from computer experiments that vary both uncertain physical parameters and the fidelity of the computational mesh. The resultant metamodel is then used to propagate the variabilities in the continuous input parameters, while the mesh size is taken to zero, its asymptotic limit. With respect to mesh size, the metamodel is equivalent to Richardson extrapolation, in which solutions on coarser and finer meshes are used to estimate discretization error. The method is demonstrated on a one-dimensional prismatic bar, in which uncertainty in the third vibration frequency is estimated by propagating variations in material modulus, density, and bar length. The results demonstrate the efficiency of the method for combining non-deterministic analysis with error estimation to obtain estimates of total simulation uncertainty. The results also show the relative sensitivity of failure estimates to solution bias errors in a reliability analysis, particularly when the physical variability of the system is low.
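A minimal sketch of the technique follows, assuming a stand-in for the finite element solver in which the computed frequency carries a known O(h^2) discretization error; the `computed_freq` function and all input distributions are illustrative. The metamodel is linear in the physical inputs and in h^2, so evaluating it at h = 0 performs the Richardson-like extrapolation described above while the physical variabilities are propagated.

```python
import numpy as np

rng = np.random.default_rng(2)

def exact_freq(E, rho, L):
    # Third longitudinal frequency of a fixed-free prismatic bar (closed form)
    return (5.0 / (4.0 * L)) * np.sqrt(E / rho)

def computed_freq(E, rho, L, h):
    # Stand-in for a finite element solve: exact value plus an O(h^2)
    # discretization error (illustrative; a real code would be called here)
    return exact_freq(E, rho, L) * (1.0 + 2.0 * h**2)

# Computer experiments: vary the physical inputs AND the mesh size h
n = 200
E = rng.normal(200e9, 10e9, n)      # modulus (Pa)
rho = rng.normal(7800.0, 200.0, n)  # density (kg/m^3)
L = rng.normal(1.0, 0.02, n)        # length (m)
h = rng.choice([0.2, 0.1, 0.05], n) # mesh sizes

f = computed_freq(E, rho, L, h)

# Linear metamodel in (E, rho, L, h^2); the h^2 column encodes the
# leading-order error term, so h = 0 is the asymptotic limit
X = np.column_stack([np.ones(n), E, rho, L, h**2])
beta, *_ = np.linalg.lstsq(X, f, rcond=None)

# Propagate input variability with the mesh term zeroed out (h -> 0)
E_s = rng.normal(200e9, 10e9, 10000)
rho_s = rng.normal(7800.0, 200.0, 10000)
L_s = rng.normal(1.0, 0.02, 10000)
f_s = beta[0] + beta[1] * E_s + beta[2] * rho_s + beta[3] * L_s
print("extrapolated mean/std of f3:", f_s.mean(), f_s.std())
```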
A computational procedure for extracting substructure-by-substructure flexibility properties from global modal parameters is presented. The present procedure incorporates two key features: an element-based direct flexibility method, which uniquely determines the global flexibility without resorting to case-dependent redundancy selections, and the projection of kinematically inadmissible modes that are contained in the iterated substructural matrices. The direct flexibility method is used as the basis of an inverse problem, whose goal is to determine substructural flexibilities given the global flexibility, geometrically determined substructural rigid-body modes, and the local-to-global assembly operators. Given an accurate global flexibility, the resulting procedure extracts the exact element-by-element substructural flexibilities for determinate structures. For indeterminate structures, the accuracy depends on the iteration tolerance limits. The procedure is illustrated using both simple and complex numerical examples and appears to be effective for structural applications such as damage localization and finite element model reconciliation.
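The procedure itself involves redundancy-free element-based assembly and projection of inadmissible modes; the short sketch below illustrates only the modal-flexibility relation it builds on, namely that the global flexibility is recovered from mass-normalized modes as G = sum_i phi_i phi_i^T / omega_i^2. The 3-DOF chain and its matrices are illustrative, not from the source.

```python
import numpy as np

# Toy 3-DOF fixed-free spring-mass chain (illustrative, unit masses/stiffnesses)
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
M = np.eye(3)

# Global modal parameters; with M = I the eigenvectors of K are the
# mass-normalized mode shapes
w2, Phi = np.linalg.eigh(K)

# Modal superposition of flexibility: G = sum_i phi_i phi_i^T / omega_i^2
G_modal = sum(np.outer(Phi[:, i], Phi[:, i]) / w2[i] for i in range(3))

# With all modes retained this reproduces the direct flexibility K^{-1},
# the quantity the extraction procedure takes as its starting point
print(np.allclose(G_modal, np.linalg.inv(K)))  # True
```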
A linear least-squares procedure for the determination of modal residues using time-domain system realization theory is presented. The present procedure is intended to complement existing techniques for time-domain system identification and is shown to be theoretically equivalent to residue determination in realization algorithms such as the eigensystem realization algorithm and Q-Markov covariance equivalent realization method. However, isolating the optimal residue estimation problem from the general realization problem affords several alternative strategies as compared to standard realization algorithms for structural dynamics identification. Primary among these are alternative techniques for handling data sets with large numbers of sensors using small numbers of reference point responses and the inclusion of terms that accurately model the effects of residual flexibility. The accuracy and efficiency of the present realization theory-based procedure are demonstrated for both simulated and experimental data.
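A minimal sketch of the key observation follows, assuming the poles have already been identified by a realization algorithm: once the poles lambda_k are fixed, the free-decay response y(t) = sum_k R_k exp(lambda_k t) is linear in the residues R_k, so they follow from a single linear least-squares solve. All numerical values are illustrative.

```python
import numpy as np

# Assumed known: poles identified by a realization algorithm (illustrative)
poles = np.array([-0.05 + 2.0j, -0.05 - 2.0j, -0.10 + 5.0j, -0.10 - 5.0j])
residues_true = np.array([0.5 - 0.1j, 0.5 + 0.1j, 0.2 + 0.3j, 0.2 - 0.3j])

# Simulated free-decay (impulse response) measurements with noise
t = np.arange(0.0, 10.0, 0.01)
y = (np.exp(np.outer(t, poles)) @ residues_true).real
y += np.random.default_rng(3).normal(0.0, 1e-3, t.size)

# With the poles fixed, y(t) = sum_k R_k exp(lambda_k t) is linear in the
# residues, so one least-squares solve recovers them
A = np.exp(np.outer(t, poles))
residues_est, *_ = np.linalg.lstsq(A, y.astype(complex), rcond=None)
print(np.round(residues_est, 3))  # approximately residues_true
```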
We present a theory for transforming system-theory-based realization models into the corresponding physical coordinate-based structural models. The theory has been implemented in a computational procedure and applied to several example problems. Our results show that the present transformation theory yields an objective model basis possessing a unique set of structural parameters from an infinite set of equivalent system realization models. For proportionally damped systems, the transformation directly and systematically yields the normal modes and modal damping. Moreover, when nonproportional damping is present, the relative magnitude and phase of the damped mode shapes are separately characterized, and a corrective transformation is then employed to capture the undamped normal modes and nondiagonal modal damping matrix.
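For the proportionally damped case, the recovery of physical modal parameters from a realization model's eigenvalues is direct, as the short sketch below shows; the eigenvalues are illustrative, and the nonproportionally damped case requires the corrective transformation described above rather than this simple formula.

```python
import numpy as np

# Eigenvalues of an identified state-space (realization) model -- illustrative
lam = np.array([-0.20 + 9.998j, -0.20 - 9.998j,
                -1.00 + 24.980j, -1.00 - 24.980j])

# For a proportionally damped mode, lambda = -zeta*wn +/- i*wn*sqrt(1 - zeta^2),
# so each eigenvalue yields the physical modal parameters directly:
wn = np.abs(lam)        # undamped natural frequency (rad/s)
zeta = -lam.real / wn   # modal damping ratio

for w, z in zip(wn[::2], zeta[::2]):  # one entry per conjugate pair
    print(f"wn = {w:.3f} rad/s, zeta = {z:.4f}")
```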