The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, erroneous model predictions could have catastrophic consequences. Therefore, uncertainty and the associated risks due to model errors should be quantified to support robust systems engineering.
Presented in this document are tests that exist in the Sierra/SolidMechanics example problem suite, which is a subset of the Sierra/SM regression and performance test suite. These examples showcase common and advanced code capabilities. A wide variety of other regression and verification tests exist in the Sierra/SM test suite that are not included in this manual.
We consider the utilization of a computational model to guide the optimal acquisition of experimental data to inform the stochastic description of model input parameters. Our formulation is based on the recently developed consistent Bayesian approach for solving stochastic inverse problems, which seeks a posterior probability density that is consistent with the model and the data in the sense that the push-forward of the posterior (through the computational model) matches the observed density on the observations almost everywhere. Given a set of potential observations, our optimal experimental design (OED) seeks the observation, or set of observations, that maximizes the expected information gain from the prior probability density on the model parameters. We discuss the characterization of the space of observed densities and a computationally efficient approach for rescaling observed densities to satisfy the fundamental assumptions of the consistent Bayesian approach. Numerical results are presented to compare our approach with existing OED methodologies using the classical/statistical Bayesian approach and to demonstrate our OED on a set of representative partial differential equations (PDE)-based models.
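As a hedged sketch only (generic notation that may differ from the report's), the consistent Bayesian posterior and the information gain used to rank candidate observations can be written as

    \pi^{\mathrm{post}}_{\Lambda}(\lambda)
      = \pi^{\mathrm{prior}}_{\Lambda}(\lambda)\,
        \frac{\pi^{\mathrm{obs}}_{\mathcal{D}}\bigl(Q(\lambda)\bigr)}
             {\pi^{Q(\mathrm{prior})}_{\mathcal{D}}\bigl(Q(\lambda)\bigr)},
    \qquad
    \mathrm{IG}(Q) = \int_{\Lambda} \pi^{\mathrm{post}}_{\Lambda}(\lambda)\,
      \log\frac{\pi^{\mathrm{post}}_{\Lambda}(\lambda)}{\pi^{\mathrm{prior}}_{\Lambda}(\lambda)}\, d\lambda,

where Q denotes the observation map and \pi^{Q(\mathrm{prior})}_{\mathcal{D}} is the push-forward of the prior through Q; the OED then selects the candidate observation, or set of observations, whose information gain, averaged over the anticipated observed densities, is largest.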
The goal of the ExaWind project is to enable predictive simulations of wind farms composed of many MW-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations for which the mesh resolves the geometry of the turbines, and captures the rotation and large deflections of blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources.
The goal of the Eyes On the Ground project is to develop tools to aid IAEA inspectors. Our original vision was to produce a tool that would take three-dimensional measurements of an unknown piece of equipment, construct a semantic representation of the measured object, and then use the resulting data to infer possible explanations of equipment function. We report our tests of a 3-d laser scanner to obtain 3-d point cloud data, and subsequent tests of software to convert the resulting point clouds into primitive geometric objects such as planes and cylinders. These tests successfully identified pipes of moderate diameter and planar surfaces, but also incurred significant noise. We also investigated the IAEA inspector task context, and learned that task constraints may present significant obstacles to using 3-d laser scanners. We further learned that equipment scale and enclosing cases may confound our original goal of equipment diagnosis. Meanwhile, we also surveyed the rapidly evolving field of 3-d measurement technology, and identified alternative sensor modalities that may prove more suitable for inspector use in a safeguards context. We conclude with a detailed discussion of lessons learned and the resulting implications for project goals.
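The abstract does not name the point-cloud software that was tested; purely as an illustration of the kind of primitive fitting involved, the sketch below extracts a planar surface from a scan using RANSAC, assuming the open-source Open3D library and a hypothetical input file scan.pcd.

    # Illustrative sketch only: RANSAC plane extraction from a 3-d point cloud.
    # Assumes the open-source Open3D library; the input file name is hypothetical.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.pcd")

    # Fit one plane; points within 5 mm of the plane are counted as inliers.
    plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.005,
                                                ransac_n=3,
                                                num_iterations=1000)
    a, b, c, d = plane_model  # plane equation a*x + b*y + c*z + d = 0
    print(f"Plane: {a:.3f}x + {b:.3f}y + {c:.3f}z + {d:.3f} = 0 "
          f"({len(inlier_idx)} inlier points)")

    # Remove the planar points; the remainder would feed cylinder (pipe) fitting.
    remainder = pcd.select_by_index(inlier_idx, invert=True)

Repeating the segmentation on the remainder, or fitting cylinders to it, mirrors the plane-and-pipe identification described above.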
This document is a user's guide for capabilities that are not considered mature but are available in Sierra/SolidMechanics (Sierra/SM) for early adopters. The maturity of a capability is judged by several aspects: regression- and verification-level testing, documentation of functionality and syntax, and usability are such considerations. Capabilities in this document are lacking in one or more of these aspects.
Accurate and efficient constitutive modeling remains a cornerstone issue for solid mechanics analysis. Over the years, the LAME advanced material model library has grown to address this challenge by implementing models capable of describing material systems spanning soft polymers to stiff ceramics, including both isotropic and anisotropic responses. Inelastic behaviors including (visco)plasticity, damage, and fracture have all been incorporated for use in various analyses. This multitude of options and flexibility, however, brings with it a large number of capabilities, features, and responses, and the ensuing complexity in the resulting implementation. Therefore, to enhance confidence and enable the utilization of the LAME library in application, this effort seeks to document and verify the various models in the LAME library. Specifically, the broader strategy, organization, and interface of the library itself are first presented. The physical theory, numerical implementation, and user guide for a large set of models are then discussed. Importantly, a number of verification tests are performed with each model, not only to build confidence in the model itself but also to highlight some important response characteristics and features that may be of interest to end users. Finally, looking ahead to the future, approaches to add material models to this library and further expand its capabilities are presented.
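As a hedged, self-contained illustration of the class of inelastic updates such a library implements (this is not LAME source code, and the material parameters below are placeholders), a small-strain J2 plasticity step with linear isotropic hardening can be sketched as follows.

    # Illustrative sketch of a small-strain J2 (von Mises) radial-return update
    # with linear isotropic hardening. Not LAME source; parameters are placeholders.
    import numpy as np

    def j2_radial_return(strain_inc, stress_old, eqps_old,
                         E=200e9, nu=0.3, yield_stress=250e6, hardening=1.0e9):
        """Return (stress_new, eqps_new) for one strain increment (3x3 tensors)."""
        mu = E / (2.0 * (1.0 + nu))            # shear modulus
        kappa = E / (3.0 * (1.0 - 2.0 * nu))   # bulk modulus

        # Elastic trial stress
        dev_inc = strain_inc - np.trace(strain_inc) / 3.0 * np.eye(3)
        stress_trial = (stress_old
                        + kappa * np.trace(strain_inc) * np.eye(3)
                        + 2.0 * mu * dev_inc)
        s_trial = stress_trial - np.trace(stress_trial) / 3.0 * np.eye(3)
        s_norm = np.linalg.norm(s_trial)

        # Yield check against the current (hardened) yield surface
        f_trial = s_norm - np.sqrt(2.0 / 3.0) * (yield_stress + hardening * eqps_old)
        if f_trial <= 0.0:
            return stress_trial, eqps_old      # purely elastic step

        # Plastic correction: return radially to the updated yield surface
        dgamma = f_trial / (2.0 * mu + 2.0 / 3.0 * hardening)
        n = s_trial / s_norm
        stress_new = stress_trial - 2.0 * mu * dgamma * n
        eqps_new = eqps_old + np.sqrt(2.0 / 3.0) * dgamma
        return stress_new, eqps_new

Verification exercises of the kind described above typically drive such an update through prescribed single-element strain histories and compare the computed stresses against closed-form solutions.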
This is an addendum to the Sierra/SolidMechanics 4.48 User's Guide that documents additional capabilities available only in alternate versions of the Sierra/SolidMechanics (Sierra/SM) code. These alternate versions are enhanced to provide capabilities that are regulated under the U.S. Department of State's International Traffic in Arms Regulations (ITAR) export-control rules. The ITAR regulated codes are only distributed to entities that comply with the ITAR export-control requirements. The ITAR enhancements to Sierra/SM include material models with an energy-dependent pressure response (appropriate for very large deformations and strain rates) and capabilities for blast modeling. Since this is an addendum to the standard Sierra/SM user's guide, please refer to that document first for general descriptions of code capability and use.
Sierra/SolidMechanics (Sierra/SM) is a Lagrangian, three-dimensional code for finite element analysis of solids and structures. It provides capabilities for explicit dynamic, implicit quasistatic and dynamic analyses. The explicit dynamics capabilities allow for the efficient and robust solution of models with extensive contact subjected to large, suddenly applied loads. For implicit problems, Sierra/SM uses a multi-level iterative solver, which enables it to effectively solve problems with large deformations, nonlinear material behavior, and contact. Sierra/SM has a versatile library of continuum and structural elements, and a large library of material models. The code is written for parallel computing environments enabling scalable solutions of extremely large problems for both implicit and explicit analyses. It is built on the SIERRA Framework, which facilitates coupling with other SIERRA mechanics codes. This document describes the functionality and input syntax for Sierra/SM.
This document covers Sierra/SolidMechanics capabilities specific to Goodyear use cases. Some information may be duplicated directly from the Sierra/SolidMechanics User's Guide but is reproduced here to provide context for Goodyear-specific options.
This report documents the completion of milestone STPM12-2, Kokkos User Support Infrastructure. The goal of this milestone was to develop and deploy an initial Kokkos support infrastructure that facilitates communication and growth of the user community, provides a central place for user documentation, and manages access to technical experts. Multiple possible support infrastructure venues were considered, and a solution was put into place by Q1 of FY18 consisting of (1) a Wiki programming guide, (2) GitHub issues and projects for development planning and bug tracking, and (3) a “Slack” channel for low-latency support communication with the Kokkos user community. Furthermore, the desirability of a cloud-based training infrastructure was recognized, and such an infrastructure was put in place to support training events.
This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including the comparison of constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.
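As a generic sketch of the quantities underlying Bayesian model selection (standard notation that may differ from the report's), candidate models M_k are ranked by their posterior probabilities via the marginal likelihood, or evidence,

    p(M_k \mid D) \propto p(M_k)\, p(D \mid M_k),
    \qquad
    p(D \mid M_k) = \int p(D \mid \theta_k, M_k)\, \pi(\theta_k \mid M_k)\, d\theta_k,

and two competing models, such as two yield models, are compared through the Bayes factor B_{12} = p(D \mid M_1) / p(D \mid M_2).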
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
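For context, the test statistic in question and the classical null behavior it would have if LAN held are (in standard notation, as a sketch; the paper's replacement result is not reproduced here)

    \lambda = -2\bigl[\log L(\hat{\rho}_{\mathrm{small}}) - \log L(\hat{\rho}_{\mathrm{large}})\bigr],
    \qquad
    \lambda \;\xrightarrow{d}\; \chi^2_{\Delta k} \quad \text{(Wilks theorem, under LAN)},

where \Delta k is the difference in the number of free parameters between the nested models; it is precisely this \chi^2 null distribution that the positivity constraint \rho \ge 0 invalidates.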
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter spaces of up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
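The abstract does not specify which sensitivity measure is used; as a hedged example, variance-based (Sobol') indices are one standard formulation of such a global sensitivity analysis:

    S_i = \frac{\operatorname{Var}\bigl(\mathbb{E}[Y \mid X_i]\bigr)}{\operatorname{Var}(Y)},
    \qquad
    S_i^{T} = 1 - \frac{\operatorname{Var}\bigl(\mathbb{E}[Y \mid X_{\sim i}]\bigr)}{\operatorname{Var}(Y)},

where Y is a scalar output quantity of interest, X_i is the i-th uncertain input, and X_{\sim i} denotes all inputs except X_i; inputs with small total-effect indices S_i^{T} are candidates for fixing, which reduces the stochastic dimension.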
Here, we introduce a meshless method for solving both continuous and discrete variational formulations of a volume-constrained, nonlocal diffusion problem. We use the discrete solution to approximate the continuous solution. Our method is nonconforming and uses a localized Lagrange basis constructed from radial basis functions. By verifying that certain inf-sup conditions hold, we demonstrate that both the continuous and discrete problems are well-posed, and we also present numerical and theoretical results for the convergence behavior of the method. The stiffness matrix is assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, symmetric matrix, which is then used to compute the discrete solution.
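As a hedged sketch of the continuous problem class (standard notation from the nonlocal-diffusion literature; the report's kernel, normalization, and constraint set may differ), the volume-constrained problem seeks u satisfying

    \int_{\Omega \cup \Omega_I} \int_{\Omega \cup \Omega_I}
      \bigl(u(\mathbf{y}) - u(\mathbf{x})\bigr)\bigl(v(\mathbf{y}) - v(\mathbf{x})\bigr)\,
      \gamma(\mathbf{x}, \mathbf{y})\, d\mathbf{y}\, d\mathbf{x}
      = \int_{\Omega} f(\mathbf{x})\, v(\mathbf{x})\, d\mathbf{x}
      \quad \text{for all admissible } v,
    \qquad
    u = g \ \text{on the interaction domain } \Omega_I,

with \gamma a nonnegative, symmetric kernel; the meshless discretization then replaces the trial and test spaces by the span of the localized, RBF-based Lagrange functions.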