Publications

Results 1601–1700 of 9,998

Towards an integrated and efficient framework for leveraging reduced order models for multifidelity uncertainty quantification

AIAA Scitech 2020 Forum

Blonigan, Patrick J.; Geraci, Gianluca; Rizzi, Francesco; Eldred, Michael

Truly predictive numerical simulations can only be obtained by performing uncertainty quantification. However, many realistic engineering applications require extremely complex and computationally expensive high-fidelity numerical simulations for accurate performance characterization. Very often, the combination of complex physical models and extreme operating conditions can easily lead to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity uncertainty quantification approach, i.e., a workflow that uses only high-fidelity simulations, is infeasible due to its prohibitive overall computational cost. To overcome this difficulty, multifidelity strategies have emerged and gained popularity in recent years. Their core idea is to combine simulations of varying fidelity/accuracy in order to obtain estimators or surrogates with the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that decrease the complexity of a numerical model realization and thus its computational cost. Less attention has been dedicated to low-fidelity models that can be built directly from a small number of available high-fidelity simulations. In this work we focus on reduced order models (ROMs). Our main goal is to investigate the combination of multifidelity uncertainty quantification and ROMs in order to evaluate the possibility of obtaining an efficient framework for propagating uncertainties through expensive numerical codes. We concentrate on sampling-based multifidelity approaches, such as the multifidelity control variate, and consider several scenarios for a numerical test problem, the Kuramoto-Sivashinsky equation, for which the efficiency of the multifidelity-ROM estimator is compared to the standard (single-fidelity) Monte Carlo approach.
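
As a concrete illustration of the sampling-based multifidelity control variate mentioned above, the sketch below estimates a mean by correcting a small set of high-fidelity samples with a cheap low-fidelity model. The toy models `f_hi`/`f_lo` and all sample sizes are illustrative stand-ins, not the paper's ROM setup.

```python
# Minimal two-fidelity control-variate Monte Carlo estimator (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def f_hi(x):          # stand-in for an expensive high-fidelity model
    return np.sin(x) + 0.05 * x**2

def f_lo(x):          # stand-in for a cheap low-fidelity model (e.g., a ROM)
    return np.sin(x)

n_hi, n_lo = 50, 5000                       # few HF samples, many LF samples
x_hi = rng.normal(size=n_hi)
x_lo = rng.normal(size=n_lo)

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)                    # LF evaluated at the shared HF samples

# Optimal control-variate weight: alpha = cov(HF, LF) / var(LF)
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# MF estimator: HF mean corrected by the LF mean estimated on the larger set
q_mf = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo_paired.mean())
print(f"single-fidelity MC: {y_hi.mean():.4f}, multifidelity CV: {q_mf:.4f}")
```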

An Energy Consistent Discretization of the Nonhydrostatic Equations in Primitive Variables

Journal of Advances in Modeling Earth Systems

Taylor, Mark A.; Guba, Oksana; Steyer, Andrew J.; Ullrich, Paul A.; Hall; Eldred, Christopher

We derive a formulation of the nonhydrostatic equations in spherical geometry with a Lorenz-staggered vertical discretization. The combination conserves a discrete energy under exact time integration when coupled with a mimetic horizontal discretization. The formulation is a version of Dubos and Tort (2014, https://doi.org/10.1175/MWR-D-14-00069.1) rewritten in terms of primitive variables. It is valid for terrain-following mass or height coordinates and for both Eulerian and vertically Lagrangian discretizations. The discretization relies on an extension of the Simmons and Burridge (1981, https://doi.org/10.1175/1520-0493(1981)109<0758:AEAAMC>2.0.CO;2) vertical differencing, which we show obeys a discrete derivative product rule. This product rule allows us to simplify the treatment of the vertical transport terms. Energy conservation is obtained via a term-by-term balance in the kinetic, internal, and potential energy budgets, ensuring an energy-consistent discretization up to time-truncation error with no spurious sources of energy. We demonstrate convergence with respect to time-truncation error in a spectral element code with a horizontally explicit, vertically implicit (HEVI) implicit-explicit time-stepping algorithm.
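
A discrete derivative product rule of the kind the abstract refers to can be illustrated with the standard two-point staggered difference and average operators; the check below is a generic sketch and not the paper's actual Simmons–Burridge extension.

```python
# Exact two-point discrete product rule on a staggered grid:
# delta(a*b) = avg(a)*delta(b) + avg(b)*delta(a), holding up to round-off.
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(10)
b = rng.standard_normal(10)

delta = lambda u: u[1:] - u[:-1]            # difference across a layer interface
avg   = lambda u: 0.5 * (u[1:] + u[:-1])    # average across the same interface

lhs = delta(a * b)
rhs = avg(a) * delta(b) + avg(b) * delta(a)
print(np.max(np.abs(lhs - rhs)))            # zero up to round-off
```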

Group Formation Theory at Multiple Scales

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Doyle, Casey L.; Naugle, Asmeret B.; Bernard, Michael; Lakkaraju, Kiran; Kittinger, Robert; Sweitzer, Matthew D.; Rothganger, Fredrick R.

There is a wealth of psychological theory regarding the drive for individuals to congregate and form social groups, positing that people may organize out of fear, in response to social pressure, or even to manage their self-esteem. We evaluate three such theories for multi-scale validity by studying them not only at the individual scale for which they were originally developed, but also for applicability to group interactions and behavior. We implement this multi-scale analysis using a dataset of communications and group membership derived from a long-running online game, matching the intent behind the theories to quantitative measures that describe players' behavior. Once we establish that the theories hold for the dataset, we broaden the scope to test the theories at the higher scale of group interactions. Although these theories were formulated to describe individual cognition and motivation, we show that some of them hold at the higher level of group cognition and can effectively describe joint decision making and higher-level interactions.

FROSch: A Fast And Robust Overlapping Schwarz Domain Decomposition Preconditioner Based on Xpetra in Trilinos

Lecture Notes in Computational Science and Engineering

Heinlein, Alexander; Klawonn, Axel; Rajamanickam, Sivasankaran; Rheinbach, Oliver

This article describes a parallel implementation of a two-level overlapping Schwarz preconditioner with the GDSW (Generalized Dryja–Smith–Widlund) coarse space, described in previous work [12, 10, 15], within the Trilinos framework; cf. [16]. The software is a significant improvement over a previous implementation [12]; see Sec. 4 for results on the improved performance.
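
For readers unfamiliar with the method class, here is a minimal one-level overlapping additive Schwarz preconditioner for a 1D Laplacian, used inside CG. This is a textbook sketch in NumPy/SciPy with illustrative subdomain sizes; it is not the FROSch API and omits the GDSW coarse space that makes the actual preconditioner two-level.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nsub, overlap = 120, 4, 2
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian

size = n // nsub
subdomains = [np.arange(max(0, i * size - overlap), min(n, (i + 1) * size + overlap))
              for i in range(nsub)]
# Factor each overlapping local problem once (the subdomain solves)
local_solves = [spla.factorized(sp.csc_matrix(A[idx, :][:, idx])) for idx in subdomains]

def schwarz(r):
    z = np.zeros_like(r)
    for idx, solve in zip(subdomains, local_solves):
        z[idx] += solve(r[idx])      # restrict, solve locally, prolong, and sum
    return z

M = spla.LinearOperator((n, n), matvec=schwarz)
x, info = spla.cg(A, np.ones(n), M=M)
print("CG converged:", info == 0)
```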

srMO-BO-3GP: A sequential regularized multi-objective constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Foulk, James W.; Eldred, Michael; Mccann, Scott; Wang, Yan

Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To extend the capability of classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective (MO) extension, called srMO-BO-3GP, to solve MO optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, each assigned a different task: the first GP is used to approximate a single objective computed from the MO definition, the second GP is used to learn the unknown constraints, and the third GP is used to learn the uncertain Pareto frontier. At each iteration, an augmented Tchebycheff function converting the multiple objectives to a single objective is adopted and extended with a regularized ridge term, where the regularization is introduced to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the richness and diversity of the Pareto frontier, balancing exploitation and exploration through the acquisition function. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.
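
The augmented Tchebycheff scalarization mentioned in the abstract has a standard form, implemented below; the function name and the ridge weight `lam` are assumed stand-ins for the paper's regularized variant.

```python
import numpy as np

def srmobo_scalarize(f, w, z, rho=0.05, lam=1e-3, x=None):
    """Augmented Tchebycheff: max_i w_i|f_i - z_i| + rho * sum_i w_i|f_i - z_i|.

    f: objective values, w: positive weights, z: utopia/reference point.
    The ridge term lam*||x||^2 is an assumed form of the paper's regularizer.
    """
    d = w * np.abs(f - z)
    s = d.max() + rho * d.sum()
    if x is not None:
        s += lam * float(np.dot(x, x))
    return s

print(srmobo_scalarize(np.array([0.3, 0.7]), np.array([0.5, 0.5]), np.zeros(2)))
```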

An algebraic sparsified nested dissection algorithm using low-rank approximations

SIAM Journal on Matrix Analysis and Applications

Cambier, Leopold; Boman, Erik G.; Rajamanickam, Sivasankaran; Tuminaro, Raymond S.; Darve, Eric

We propose a new algorithm for the fast solution of large, sparse, symmetric positive-definite linear systems, spaND (sparsified Nested Dissection). It is based on nested dissection, sparsification, and low-rank compression. After eliminating all interiors at a given level of the elimination tree, the algorithm sparsifies all separators corresponding to the interiors. This operation reduces the size of the separators by eliminating some degrees of freedom without introducing any fill-in. This is done at the expense of a small and controllable approximation error. The result is an approximate factorization that can be used as an efficient preconditioner. We then perform several numerical experiments to evaluate the algorithm. We demonstrate that a version using orthogonal factorization and block-diagonal scaling takes fewer CG iterations to converge than previous similar algorithms on various kinds of problems. Furthermore, the algorithm is provably guaranteed never to break down, and the matrix stays symmetric positive-definite throughout the process. We evaluate the algorithm on some large problems and show that it exhibits near-linear scaling: the factorization time is roughly O(N), and the number of iterations grows slowly with N.
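
The low-rank compression at the heart of the sparsification step can be pictured as truncating the SVD of a separator's coupling block, which is numerically low-rank for well-separated clusters. This is a generic sketch; spaND's orthogonal-factorization variant is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
# Coupling between well-separated index sets is numerically low-rank.
B = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 150))
B += 1e-10 * rng.standard_normal(B.shape)

u, s, vt = np.linalg.svd(B, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))            # rank kept by the tolerance
B_k = (u[:, :k] * s[:k]) @ vt[:k]           # compressed block
print(k, np.linalg.norm(B - B_k) / np.linalg.norm(B))
```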

Regular sensitivity computation avoiding chaotic effects in particle-in-cell plasma methods

Journal of Computational Physics

Chung, Seung W.; Bond, Stephen D.; Cyr, Eric C.; Freund, Jonathan B.

Particle-in-cell (PIC) simulation methods are attractive for representing species distribution functions in plasmas. However, as a model, they introduce uncertain parameters, and for quantifying their prediction uncertainty it is useful to be able to assess the sensitivity of a quantity-of-interest (QoI) to these parameters. Such sensitivity information is likewise useful for optimization. However, computing sensitivity for PIC methods is challenging due to the chaotic particle dynamics, and sensitivity techniques remain underdeveloped compared to those for Eulerian discretizations. This challenge is examined from a dual particle–continuum perspective that motivates a new sensitivity discretization. Two routes to sensitivity computation are presented and compared: a direct, fully Lagrangian, particle-exact approach provides sensitivities of each particle trajectory, while a new particle-pdf discretization is formulated from a continuum perspective but discretized by particles to take advantage of the same type of Lagrangian particle description leveraged by PIC methods. Since the sensitivity particles in this approach are only indirectly linked to the plasma-PIC particles, they can be positioned and weighted independently for efficiency and accuracy. The corresponding numerical algorithms are presented in mathematical detail. The advantage of the particle-pdf approach in avoiding the spurious chaotic sensitivity of the particle-exact approach is demonstrated for Debye shielding and sheath configurations. In essence, the continuum perspective makes the distinctness of the particles implicit, which circumvents the Lyapunov instability of the N-body PIC system. The cost of the particle-pdf approach is comparable to that of the baseline PIC simulation.
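
A minimal illustration of the Lyapunov instability the abstract refers to: the exact tangent (trajectory) sensitivity of a chaotic map grows exponentially with the number of steps, which is why trajectory-exact sensitivities become useless in chaotic regimes. The logistic map serves here as a generic chaotic stand-in, not the paper's PIC system.

```python
def tangent_sensitivity(r, x0=0.3, nsteps=60):
    """d x_n / d r for the logistic map x_{n+1} = r * x * (1 - x)."""
    x, dxdr = x0, 0.0
    for _ in range(nsteps):
        dxdr = x * (1.0 - x) + r * (1.0 - 2.0 * x) * dxdr  # chain rule
        x = r * x * (1.0 - x)
    return dxdr

# Chaotic regime: the sensitivity is astronomically large after 60 steps.
print(tangent_sensitivity(3.9))
```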

Layer-Parallel Training of Deep Residual Neural Networks

SIAM Journal on Mathematics of Data Science

Gunther, Stefanie; Ruthotto, Lars; Schroder, Jacob B.; Cyr, Eric C.; Gauger, Nicolas R.

Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of those techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers with a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach where updates to the network parameters are based on inexact gradient information in order to speed up the training process. Using numerical examples from supervised classification, we demonstrate that the new approach achieves a training performance similar to that of traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.
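
The ODE interpretation underlying the layer-parallel approach is easy to state: a residual block is a forward Euler step x_{k+1} = x_k + h f(x_k, theta_k). A minimal sketch with random stand-in weights:

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth, h = 16, 8, 0.1
thetas = [rng.standard_normal((width, width)) / np.sqrt(width) for _ in range(depth)]

def f(x, theta):
    return np.tanh(theta @ x)     # the block's nonlinear right-hand side

x = rng.standard_normal(width)
for theta in thetas:              # the layer-serial propagation that the
    x = x + h * f(x, theta)       # parallel multigrid-in-layers iteration replaces
print(x[:4])
```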

Enabling Scalable Multifluid Plasma Simulations Through Block Preconditioning

Lecture Notes in Computational Science and Engineering

Phillips, Edward; Shadid, John N.; Cyr, Eric C.; Miller, Sean

Recent work has demonstrated that block preconditioning can scalably accelerate the performance of iterative solvers applied to linear systems arising in implicit multiphysics PDE simulations. The idea of block preconditioning is to decompose the system matrix into physical sub-blocks and apply individual specialized scalable solvers to each sub-block. It can be advantageous to block into simpler segregated physics systems or to block by discretization type. This strategy is particularly amenable to multiphysics systems in which existing solvers, such as multilevel methods, can be leveraged for component physics, and to problems with disparate discretizations for which scalable monolithic solvers are rare. This work extends our recent work on scalable block preconditioning methods for structure-preserving discretizations of the Maxwell equations, and our previous work on MHD system solvers, to the context of multifluid electromagnetic plasma systems. We show how a block preconditioner can address both the disparate discretizations and the strongly coupled off-diagonal physics that produces fast time scales (e.g., plasma and cyclotron frequencies). We propose a block preconditioner for plasma systems that allows reuse of existing multigrid solvers for different degrees of freedom while capturing important couplings, and we demonstrate the algorithmic scalability of this approach at time scales of interest.
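
A generic picture of a block-preconditioner apply (block Gauss–Seidel style): solve each physics block with its own solver and pass coupling through the off-diagonal block. Dense solves stand in for the multigrid components; all sizes and matrices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 8, 6
A11 = 4.0 * np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))  # physics block 1
A22 = 4.0 * np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))  # physics block 2
A12 = 0.3 * rng.standard_normal((n1, n2))                     # off-diagonal coupling

def apply_block_preconditioner(r1, r2):
    z2 = np.linalg.solve(A22, r2)               # sub-solve for block 2
    z1 = np.linalg.solve(A11, r1 - A12 @ z2)    # block 1 sees the coupling
    return z1, z2

z1, z2 = apply_block_preconditioner(rng.standard_normal(n1), rng.standard_normal(n2))
print(z1.shape, z2.shape)
```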

Synchronous and concurrent multidomain computing method for cloud computing platforms

SIAM Journal on Scientific Computing

Anguiano, Marcelino; Kuberry, Paul; Bochev, Pavel B.; Masud, Arif

We present a numerical method for the synchronous and concurrent solution of transient elastodynamics problems in which the computational domain is divided into subdomains that may reside on separate computational platforms. This work employs the variational multiscale discontinuous Galerkin (VMDG) method to develop interdomain transmission conditions for transient problems. The fine-scale modeling concept leads to variationally consistent coupling terms at the common interfaces. The method admits a large class of time discretization schemes, and decoupling of the subdomain solutions is achieved by selecting any explicit algorithm. Numerical tests with a manufactured-solution problem show optimal convergence rates. The energy history in a free-vibration problem is in agreement with that of the solution on a monolithic computational domain.

Linking pyrometry to porosity in additively manufactured metals

Additive Manufacturing

Mitchell, John A.; Ivanoff, Thomas; Dagel, Daryl; Madison, Jonathan D.; Jared, Bradley H.

Porosity in additively manufactured metals can reduce material strength and is generally undesirable. Although studies have shown relationships between process parameters and porosity, monitoring strategies for detecting defects and pore formation are still needed. In this paper, instantaneous anomalous conditions are detected in situ via pyrometry during laser powder bed fusion additive manufacturing and correlated with voids observed using post-build micro-computed tomography. Large two-color pyrometry data sets were used to estimate instantaneous temperatures, melt pool orientations, and aspect ratios. Machine learning algorithms were then applied to the processed pyrometry data to detect outlier images and conditions. It is shown that melt pool outliers are good predictors of voids observed post-build. With this approach, real-time process monitoring can be incorporated into systems to detect defect and void formation. Alternatively, using the methodology presented here, pyrometry data can be post-processed for porosity assessment.
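
The outlier-detection step on processed pyrometry features can be sketched with an off-the-shelf method such as scikit-learn's IsolationForest. The feature columns (temperature, melt-pool aspect ratio, orientation) and all values below are synthetic stand-ins; the paper's actual ML pipeline is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: temperature [K], melt-pool aspect ratio, orientation [deg] (synthetic)
normal = rng.normal([1900.0, 2.0, 0.0], [40.0, 0.2, 10.0], size=(1000, 3))
anomal = rng.normal([2150.0, 3.5, 0.0], [80.0, 0.5, 30.0], size=(15, 3))
X = np.vstack([normal, anomal])

labels = IsolationForest(contamination=0.02, random_state=0).fit_predict(X)
print("flagged as outliers:", int(np.sum(labels == -1)))   # -1 marks outliers
```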

KKT preconditioners for PDE-constrained optimization with the Helmholtz equation

SIAM Journal on Scientific Computing

Kouri, Drew P.; Ridzal, Denis; Tuminaro, Raymond S.

This paper considers preconditioners for the linear systems that arise from optimal control and inverse problems involving the Helmholtz equation. Specifically, we explore an all-at-once approach. The main contribution centers on the analysis of two block preconditioners. Variations of these preconditioners have been proposed and analyzed in prior works for optimal control problems where the underlying partial differential equation is a Laplace-like operator. In this paper, we extend some of the prior convergence results to Helmholtz-based optimization applications. Our analysis examines situations where control variables and observations are restricted to subregions of the computational domain. We prove that solver convergence rates do not deteriorate as the mesh is refined or as the wavenumber increases. More specifically, for one of the preconditioners we prove accelerated convergence as the wavenumber increases. Additionally, in situations where the control and observation subregions are disjoint, we observe that solver convergence rates have a weak dependence on the regularization parameter. We give a partial analysis of this behavior. We illustrate the performance of the preconditioners on control problems motivated by acoustic testing.
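
For orientation, the all-at-once approach couples state u, control z, and adjoint lambda in one saddle-point (KKT) system. A generic quadratic-tracking form is sketched below; the notation is illustrative, not the paper's.

```latex
% Generic KKT system for  min  1/2 ||u - d||^2_M + (alpha/2) ||z||^2_R
%                         s.t. K u + B z = f   (K a Helmholtz-type operator)
\begin{pmatrix}
  M & 0        & K^{*} \\
  0 & \alpha R & B^{*} \\
  K & B        & 0
\end{pmatrix}
\begin{pmatrix} u \\ z \\ \lambda \end{pmatrix}
=
\begin{pmatrix} M d \\ 0 \\ f \end{pmatrix}
```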

ExaWind: Exascale Predictive Wind Plant Flow Physics Modeling

Sprague, M.; Ananthan, S.; Brazell, M.; Glaws, A.; De Frahan, M.; King, R.; Natarajan, M.; Rood, J.; Sharma, A.; Sirydowicz, K.; Thomas, S.; Vijaykumar, G.; Yellapantula, S.; Crozier, Paul; Berger-Vergiat, Luc; Cheung, Lawrence; Glaze, David J.; Hu, Jonathan J.; Knaus, Robert C.; Lee, Dong H.; Okusanya, Tolulope O.; Overfelt, James R.; Rajamanickam, Sivasankaran; Sakievich, Philip; Smith, Timothy A.; Vo, Johnathan; Williams, Alan B.; Yamazaki, Ichitaro; Turner, J.; Prokopenko, A.; Wilson, R.; Moser, R.; Melvin, J.; Sitaraman, J.

Abstract not provided.

Hyper-Differential Sensitivity Analysis of Uncertain Parameters in PDE-Constrained Optimization

International Journal for Uncertainty Quantification

Van Bloemen Waanders, Bart

Many problems in engineering and the sciences require the solution of large-scale optimization problems constrained by partial differential equations (PDEs). Though PDE-constrained optimization is itself challenging, most applications pose additional complexity, namely, uncertain parameters in the PDEs. Uncertainty quantification (UQ) is necessary to characterize, prioritize, and study the influence of these uncertain parameters. Sensitivity analysis, a classical tool in UQ, is frequently used to study the sensitivity of a model to uncertain parameters. In this article, we introduce "hyper-differential sensitivity analysis," which considers the sensitivity of the solution of a PDE-constrained optimization problem to uncertain parameters. Our approach is a goal-oriented analysis which may be viewed as a tool to complement other UQ methods in the service of decision making and robust design. We formally define hyper-differential sensitivity indices and highlight their relationship to the existing optimization and sensitivity analysis literature. Assuming the presence of low-rank structure in the parameter space, computational efficiency is achieved by leveraging a generalized singular value decomposition in conjunction with a randomized solver, which converts the computational bottleneck of the algorithm into an embarrassingly parallel loop. Two multiphysics examples, consisting of nonlinear steady-state control and transient linear inversion, demonstrate efficient identification of the uncertain parameters having the greatest influence on the optimal solution.
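
The randomized-solver pattern the abstract describes, applying the sensitivity operator to independent random probes and then solving a small projected problem, can be sketched with a dense stand-in operator. In HDSA each apply would involve PDE solves, and the probe loop is the embarrassingly parallel part.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))  # low-rank stand-in

k, p = 40, 10                                   # target rank + oversampling
Omega = rng.standard_normal((300, k + p))
Y = A @ Omega                                   # k+p independent applies (parallel loop)
Q, _ = np.linalg.qr(Y)                          # orthonormal range basis
u, s, vt = np.linalg.svd(Q.T @ A, full_matrices=False)  # small projected SVD
A_k = (Q @ u) * s @ vt
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))      # ~ machine precision here
```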

30 cm Drop Tests

Kalinina, Elena A.; Ammerman, Douglas; Grey, Carissa A.; Arviso, Michael; Wright, Catherine; Lujan, Lucas A.; Flores, Gregg; Saltzstein, Sylvia J.

The data from the multi-modal transportation test conducted in 2017 demonstrated that the inputs from shock events during all transport modes (truck, rail, and ship) were amplified from the cask to the surrogate commercial spent nuclear fuel assemblies. These data do not support the common assumption that the cask contents experience the same accelerations as the cask itself. This was one of the motivations for conducting the 30 cm drop tests. The goal of the 30 cm drop test is to measure accelerations and strains on the surrogate spent nuclear fuel assembly and to determine whether the fuel rods can maintain their integrity inside a transportation cask when dropped from a height of 30 cm. The 30 cm drop is the remaining NRC normal-conditions-of-transport regulatory requirement (10 CFR 71.71) for which there are no data on an actual surrogate fuel assembly. Because a full-scale cask and impact limiters were not available (and their cost was prohibitive), it was proposed to achieve this goal by conducting three separate tests. This report describes the first two tests: the 30 cm drop test of the 1/3-scale cask (conducted in December 2018) and the 30 cm drop of the full-scale dummy assembly (conducted in June 2019). The dummy assembly represents the mass of a real spent nuclear fuel assembly. The third test (to be conducted in the spring of 2020) will be the 30 cm drop of the full-scale surrogate assembly. The surrogate assembly represents a real full-scale assembly in physical, material, and mechanical characteristics, as well as in mass.

Data Pallets: Containerizing Storage For Reproducibility and Traceability

Lecture Notes in Computer Science

Lofstead, Gerald F.; Baker, Joshua; Younge, Andrew J.

Trusting simulation output is crucial for Sandia's mission objectives. We rely on these simulations to perform our high-consequence mission tasks given national treaty obligations. Other science and modeling applications, while they may have high-consequence results, still require the strongest levels of trust to enable using the results as the foundation for both practical applications and future research. To this end, the computing community has developed workflow and provenance systems to aid both in automating simulation and modeling execution and in determining exactly how some output was created, so that conclusions can be drawn from the data. Current approaches for workflow and provenance systems operate entirely at the user level and have little to no system-level support, making them fragile, difficult to use, and incomplete solutions. The introduction of container technology is a first step towards encapsulating and tracking the artifacts used in creating data and the resulting insights, but its current implementation is focused solely on making it easy to deploy an application in an isolated "sandbox," maintaining a strictly read-only mode to avoid any potential changes to the application. All storage activities still use the system-level shared storage. This project explores extending the container concept to include storage as a new container type we call data pallets. Data pallets are potentially writeable, are auto-generated by the system based on I/O activities, and are usable as a way to link the contained data back to the application and input deck used to create it.

Two Problems in Knowledge Graph Embedding: Non-Exclusive Relation Categories and Zero Gradients

Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019

Lee, Kookjin L.; Nur, Nasheen; Park, Noseong; Kang, Hyunjoong; Kwon, Soonhyeon

Knowledge graph embedding (KGE) learns latent vector representations of the named entities (i.e., vertices) and relations (i.e., edge labels) of knowledge graphs. Herein, we address two problems in KGE. First, relations may belong to one or multiple categories, such as functional, symmetric, transitive, reflexive, and so forth; thus, relation categories are not exclusive. Some relation categories cause non-trivial challenges for KGE. Second, we found that zero gradients occur frequently in many translation-based embedding methods such as TransE and its variants. To solve these problems, we propose (i) converting a knowledge graph into a bipartite graph (although we do not physically convert the graph but rather use an equivalent trick); (ii) using multiple vector representations for a relation; and (iii) using a new hinge loss based on an energy ratio (rather than an energy gap) that does not cause zero gradients. We show that our method significantly improves the quality of the embedding.
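
For context, TransE scores a triple (h, r, t) by the energy ||h + r - t||, and the common margin (gap) hinge loss is max(0, gamma + E_pos - E_neg). The sketch below contrasts that with one plausible ratio-based variant; the paper's exact ratio loss may differ, so treat the ratio form as an assumption.

```python
import numpy as np

def transe_energy(h, r, t):
    return np.linalg.norm(h + r - t)    # small for plausible triples

rng = np.random.default_rng(0)
h, r, t = rng.standard_normal((3, 32))  # embeddings for a positive triple
t_neg = rng.standard_normal(32)         # corrupted (negative) tail

e_pos, e_neg = transe_energy(h, r, t), transe_energy(h, r, t_neg)
gap_loss = max(0.0, 1.0 + e_pos - e_neg)             # standard margin hinge
ratio_loss = max(0.0, 1.0 - e_neg / (e_pos + 1e-8))  # assumed ratio-based form
print(gap_loss, ratio_loss)
```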

Making social networks more human: A topological approach

Statistical Analysis and Data Mining

Berry, Jonathan

A key problem in social network analysis is to identify nonhuman interactions. State-of-the-art bot-detection systems like Botometer train machine-learning models on user-specific data. Unfortunately, these methods do not work on data sets in which only topological information is available. In this paper, we propose a new, purely topological approach. Our method removes edges that connect nodes exhibiting strong evidence of non-human activity from publicly available electronic-social-network datasets, including, for example, those in the Stanford Network Analysis Project repository (SNAP). Our methodology is inspired by classic work in evolutionary psychology by Dunbar that posits upper bounds on the total strength of the set of social connections in which a single human can be engaged. We model edge strength with Easley and Kleinberg's topological estimate; label nodes as “violators” if the sum of these edge strengths exceeds a Dunbar-inspired bound; and then remove the violator-to-violator edges. We run our algorithm on multiple social networks and show that our Dunbar-inspired bound appears to hold for social networks, but not for nonsocial networks. Our cleaning process classifies 0.04% of the nodes of the Twitter-2010 followers graph as violators, and we find that more than 80% of these violator nodes have Botometer scores of 0.5 or greater. Furthermore, after we remove the roughly 15 million violator-violator edges from the 1.2-billion-edge Twitter-2010 follower graph, 34% of the violator nodes experience a factor-of-two decrease in PageRank. PageRank is a key component of many graph algorithms such as node/edge ranking and graph sparsification. Thus, this artificial inflation would bias algorithmic output, and result in some incorrect decisions based on this output.
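
The cleaning procedure can be sketched end-to-end on a small graph: score each edge with Easley and Kleinberg's neighborhood-overlap strength, flag nodes whose summed strength exceeds a Dunbar-style budget, and drop violator-to-violator edges. The bound value here is an illustrative stand-in, not the paper's calibrated bound.

```python
import networkx as nx

G = nx.karate_club_graph()
BOUND = 3.0                                 # illustrative stand-in for the Dunbar bound

def overlap(G, u, v):
    """Neighborhood overlap (Easley & Kleinberg) as the edge-strength estimate."""
    nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

strength = {n: sum(overlap(G, n, m) for m in G[n]) for n in G}
violators = {n for n, s in strength.items() if s > BOUND}
to_remove = [(u, v) for u, v in G.edges if u in violators and v in violators]
G.remove_edges_from(to_remove)
print(len(violators), "violators,", len(to_remove), "edges removed")
```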

A mathematical programming approach for the optimal placement of flame detectors in petrochemical facilities

Process Safety and Environmental Protection

Zhen, Todd; Klise, Katherine A.; Cunningham, Sean; Marszal, Edward; Laird, Carl

Flame detectors provide an important layer of protection for personnel in petrochemical plants, but effective placement can be challenging. A mixed-integer nonlinear programming formulation is proposed for the optimal placement of flame detectors while considering non-uniform probabilities of detection failure. We show that this approach allows for the placement of fire detectors using a fixed sensor budget and outperforms models that do not account for imperfect detection. We develop a linear relaxation of the formulation and an efficient solution algorithm that achieves global optimality with reasonable computational effort. We integrate this problem formulation into the Python package Chama and demonstrate the effectiveness of the formulation on a small test case and on two real-world case studies using the fire and gas mapping software Kenexis Effigy.
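
To make the objective concrete, the sketch below greedily places a fixed budget of detectors to minimize the mean probability that a fire scenario goes undetected, given per-scenario, per-location failure probabilities. The data are synthetic and greedy selection is only a stand-in; the paper solves the placement exactly via its MINLP formulation, integrated in Chama.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scen, n_loc, budget = 200, 30, 5
# p_miss[s, d]: probability that a detector at location d misses fire scenario s
p_miss = rng.beta(2.0, 2.0, size=(n_scen, n_loc))

chosen = []
cur = np.ones(n_scen)          # running probability that no chosen detector fires
for _ in range(budget):
    candidates = [d for d in range(n_loc) if d not in chosen]
    best = min(candidates, key=lambda d: (cur * p_miss[:, d]).mean())
    chosen.append(best)
    cur = cur * p_miss[:, best]
print("chosen locations:", sorted(chosen),
      "| mean miss probability:", round(cur.mean(), 3))
```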

Development, Demonstration and Validation of Data-Driven Compact Diode Models for Circuit Simulation and Analysis

Aadithya, Karthik V.; Kuberry, Paul; Paskaleva, Biliana S.; Bochev, Pavel B.; Leeson, Kenneth M.; Mar, Alan; Mei, Ting; Keiter, Eric R.

Compact semiconductor device models are essential for efficiently designing and analyzing large circuits. However, traditional compact model development requires a large amount of manual effort and can span many years. Moreover, inclusion of new physics (e.g., radiation effects) into an existing model is not trivial and may require redevelopment from scratch. Machine Learning (ML) techniques have the potential to automate and significantly speed up the development of compact models. In addition, ML provides a range of modeling options that can be used to develop hierarchies of compact models tailored to specific circuit design stages. In this paper, we explore three such options: (1) table-based interpolation, (2) Generalized Moving Least-Squares, and (3) feedforward Deep Neural Networks, to develop compact models for a p-n junction diode. We evaluate the performance of these "data-driven" compact models by (1) comparing their voltage-current characteristics against laboratory data, and (2) building a bridge rectifier circuit using these devices, predicting the circuit's behavior using SPICE-like circuit simulations, and then comparing these predictions against laboratory measurements of the same circuit.
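
Of the three options, table-based interpolation is the easiest to sketch: tabulate I–V data (here generated from the Shockley diode equation as a stand-in for the laboratory data, with assumed parameter values) and interpolate in a transformed space where the curve is nearly linear.

```python
import numpy as np

Is, n, Vt = 1e-12, 1.8, 0.02585            # assumed diode parameters (illustrative)
v_tab = np.linspace(0.0, 0.8, 41)          # tabulated bias points
i_tab = Is * np.expm1(v_tab / (n * Vt))    # Shockley equation stand-in for lab data

def i_of_v(v):
    """Table-based compact model: interpolate log(1 + I/Is), which is ~linear in V."""
    return Is * np.expm1(np.interp(v, v_tab, np.log1p(i_tab / Is)))

v_test = np.array([0.33, 0.61])
print(i_of_v(v_test))                      # compare: Is * np.expm1(v_test / (n * Vt))
```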
