Publications

Results 3451–3475 of 9,998

Nonlocal and mixed-locality multiscale finite element methods

Multiscale Modeling and Simulation

Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. In this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. We conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.
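
For orientation, the peridynamic theory referenced above replaces spatial derivatives with an integral over a neighborhood (the "horizon") of each point, which is what makes the discontinuous Galerkin setting natural. A standard statement of the peridynamic balance of linear momentum is shown below; the exact constitutive form used in the paper may differ.

\[
\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t) \;=\; \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\, \mathbf{x}'-\mathbf{x}\big)\, dV_{\mathbf{x}'} \;+\; \mathbf{b}(\mathbf{x},t),
\]

where \(\mathcal{H}_{\mathbf{x}}\) is the horizon of \(\mathbf{x}\), \(\mathbf{f}\) is the pairwise force function, and \(\mathbf{b}\) is a body force density.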

Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations

IEEE Transactions on Visualization and Computer Graphics

Matzen, Laura E.; Haass, Michael J.; Divis, Kristin; Wang, Zhiyuan; Wilson, Andrew T.

Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
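
As a concrete illustration of the comparison described above, a saliency map can be scored against eye-tracking data with standard metrics such as the Pearson correlation coefficient (CC) between the model's map and a smoothed fixation-density map. The sketch below shows that generic evaluation idea only; it is not the DVS model, and the array contents are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, shape, sigma=25.0):
    """Turn (row, col) fixation points from eye tracking into a smoothed density map."""
    fmap = np.zeros(shape, dtype=float)
    for r, c in fixations:
        fmap[int(r), int(c)] += 1.0
    return gaussian_filter(fmap, sigma=sigma)

def correlation_coefficient(saliency, fmap):
    """Pearson correlation (CC metric) between a saliency map and a fixation map."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    f = (fmap - fmap.mean()) / (fmap.std() + 1e-12)
    return float((s * f).mean())

# Placeholder data: any model's saliency map and a few human fixation points.
model_map = np.random.rand(480, 640)
fixations = [(100, 320), (240, 400), (300, 150)]
print(correlation_coefficient(model_map, fixation_map(fixations, model_map.shape)))
```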

A class of simple and effective UQ methods for sparse replicate data applied to the cantilever beam end-to-end UQ problem

AIAA Non-Deterministic Approaches Conference, 2018

Romero, Vicente J.; Weirs, Vincent G.

When very few samples of a random quantity are available from a source distribution or probability density function (PDF) of unknown shape, it is usually not possible to accurately infer the PDF from which the data samples come. A significant component of epistemic uncertainty then exists concerning the source distribution of random or aleatory variability. For many engineering purposes, including design and risk analysis, one would normally want to avoid inference-related underestimation of important quantities such as response variance and failure probabilities. Recent research has established the practicality and effectiveness of a class of simple and inexpensive UQ methods for reasonably conservative estimation of such quantities when only sparse samples of a random quantity are available. This class of UQ methods is explained, demonstrated, and analyzed in this paper within the context of the Sandia Cantilever Beam End-to-End UQ Problem, Part A.1. Several sets of sparse replicate data are involved, and several representative uncertainty quantities are to be estimated: A) beam deflection variability, in particular the 2.5 to 97.5 percentile “central 95%” range of the sparsely sampled PDF of deflection; and B) a small exceedance probability associated with a tail of the PDF integrated beyond a specified deflection tolerance.
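
The two target quantities can be illustrated with plain empirical estimates from a handful of samples, as in the sketch below; this is only to fix ideas about what is being estimated, since the paper's methods add conservatism precisely because such naive estimates tend to under-predict variability and tail probabilities from sparse data. The sample values and tolerance are hypothetical.

```python
import numpy as np

deflections = np.array([2.31, 2.47, 2.28, 2.55, 2.40, 2.62])  # hypothetical sparse replicate samples
tolerance = 2.60                                               # hypothetical deflection tolerance

lo, hi = np.percentile(deflections, [2.5, 97.5])  # naive "central 95%" range estimate
p_exceed = np.mean(deflections > tolerance)       # naive tail exceedance-probability estimate

print(f"central 95% range estimate: [{lo:.3f}, {hi:.3f}]")
print(f"exceedance probability estimate: {p_exceed:.3f}")
```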

Unstructured grid adaptation and solver technology for turbulent flows

AIAA Aerospace Sciences Meeting, 2018

Park, Michael A.; Barral, Nicolas; Ibanez-Granados, Daniel A.; Kamenetskiy, Dmitry S.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien

Unstructured grid adaptation is a tool to control Computational Fluid Dynamics (CFD) discretization error. However, adaptive grid techniques have made limited impact on production analysis workflows where the control of discretization error is critical to obtaining reliable simulation results. Issues that prevent the use of adaptive grid methods are identified by applying unstructured grid adaptation methods to a series of benchmark cases. Once identified, these challenges to existing adaptive workflows can be addressed. Unstructured grid adaptation is evaluated for test cases described on the Turbulence Modeling Resource (TMR) web site, which documents uniform grid refinement of multiple schemes. The cases are turbulent flow over a Hemisphere Cylinder and an ONERA M6 Wing. Adaptive grid force and moment trajectories are shown for three integrated grid adaptation processes with Mach interpolation control and output error based metrics. The integrated grid adaptation process with a finite element (FE) discretization produced results consistent with uniform grid refinement of fixed grids. The integrated grid adaptation processes with finite volume schemes were slower to converge to the reference solution than the FE method. Metric conformity is documented on grid/metric snapshots for five grid adaptation mechanics implementations. These tools produce anisotropic boundary conforming grids requested by the adaptation process.
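
The notion of metric conformity mentioned above is conventionally defined through edge lengths measured in a prescribed Riemannian metric field \(\mathcal{M}(\mathbf{x})\): an adapted grid conforms to the metric when each edge \(\mathbf{e}\) has approximately unit metric length,

\[
\ell_{\mathcal{M}}(\mathbf{e}) \;=\; \int_0^1 \sqrt{\mathbf{e}^{\mathsf{T}}\, \mathcal{M}\big(\mathbf{x}(t)\big)\, \mathbf{e}}\;\, dt \;\approx\; 1 .
\]

This is the standard definition, given here for orientation; the specific conformity measures documented in the paper may be evaluated differently.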

Towards a scalable multifidelity simulation approach for electrokinetic problems at the mesoscale

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hong, Brian D.; Perego, Mauro P.; Bochev, Pavel B.; Frischknecht, Amalie F.; Phillips, Edward G.

In this work we present a computational capability featuring a hierarchy of models with different fidelities for the solution of electrokinetics problems at the micro-/nano-scale. A multifidelity approach allows the selection of the most appropriate model, in terms of accuracy and computational cost, for the particular application at hand. We demonstrate the proposed multifidelity approach by studying the mobility of a colloid in a micro-channel as a function of the colloid charge and of the size of the ions dissolved in the fluid.
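
As a purely generic sketch of the selection idea described above (not the paper's software or model hierarchy), one can think of choosing the cheapest model in the hierarchy whose estimated error for the quantity of interest meets a requested tolerance; all names, costs, and error estimates below are hypothetical.

```python
def select_model(hierarchy, tolerance):
    """hierarchy: list of (name, cost, estimated_error), ordered cheapest to most expensive."""
    for name, cost, est_error in hierarchy:
        if est_error <= tolerance:
            return name, cost                        # cheapest model that is accurate enough
    return hierarchy[-1][0], hierarchy[-1][1]        # otherwise fall back to highest fidelity

models = [("model_A (low fidelity)", 1.0, 0.08),
          ("model_B (medium fidelity)", 10.0, 0.03),
          ("model_C (high fidelity)", 100.0, 0.01)]
print(select_model(models, tolerance=0.05))
```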

A virtual control coupling approach for problems with non-coincident discrete interfaces

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Bochev, Pavel B.; Kuberry, Paul A.; Peterson, Kara J.

Independent meshing of subdomains separated by an interface can lead to spatially non-coincident discrete interfaces. We present an optimization-based coupling method for such problems, which does not require a common mesh refinement of the interface, has optimal H1 convergence rates, and passes a patch test. The method minimizes the mismatch of the state and normal stress extensions on discrete interfaces subject to the subdomain equations, while interface “fluxes” provide virtual Neumann controls.
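
Schematically, the coupling described above can be written as a constrained minimization in which the virtual Neumann controls \(g_1, g_2\) play the role of interface "fluxes"; the notation below is simplified relative to the paper, where states and normal stresses are compared through extensions between the non-coincident discrete interfaces.

\[
\min_{g_1,\,g_2}\ \tfrac{1}{2}\,\big\|\widetilde{u}_1 - \widetilde{u}_2\big\|_{\Gamma_h}^2
\;+\;\tfrac{1}{2}\,\big\|\widetilde{\sigma}_n(u_1) - \widetilde{\sigma}_n(u_2)\big\|_{\Gamma_h}^2
\quad\text{subject to}\quad
\mathcal{L}u_i = f \ \text{in } \Omega_i,\qquad
\sigma_n(u_i) = g_i \ \text{on } \Gamma_{h,i},\quad i=1,2,
\]

where the tildes denote extensions of the subdomain traces and normal stresses to a common comparison interface \(\Gamma_h\).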

It’s not the heat, it’s the humidity: Scheduling resilience activity at scale

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Widener, Patrick W.; Ferreira, Kurt B.; Levy, Scott L.

Maintaining the performance of high-performance computing (HPC) applications with the expected increase in failures is a major challenge for next-generation extreme-scale systems. With increasing scale, resilience activities (e.g. checkpointing) are expected to become more diverse, less tightly synchronized, and more computationally intensive. Few existing studies, however, have examined how decisions about scheduling resilience activities impact application performance. In this work, we examine the relationship between the duration and frequency of resilience activities and application performance. Our study reveals several key findings: (i) the aggregate amount of time consumed by resilience activities is not an effective metric for predicting application performance; (ii) the duration of the interruptions due to resilience activities has the greatest influence on application performance; shorter, but more frequent, interruptions are correlated with better application performance; and (iii) the differential impact of resilience activities across applications is related to the applications’ inter-collective frequencies; the performance of applications that perform infrequent collective operations scales better in the presence of resilience activities than the performance of applications that perform more frequent collective operations. This initial study demonstrates the importance of considering how resilience activities are scheduled. We provide critical analysis and direct guidance on how the resilience challenges of future systems can be met while minimizing the impact on application performance.
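
The interaction between interruption granularity and collective frequency can be made concrete with a toy bulk-synchronous model, shown below: if every collective waits for the slowest rank, a long interruption on any one rank delays all of them, so for the same aggregate resilience time, shorter and more frequent interruptions cost less. This toy model is purely illustrative and is not taken from the paper.

```python
import random

def toy_runtime(n_ranks, n_collectives, work_per_interval, interrupt_len, interrupt_prob):
    """Simulated runtime of a bulk-synchronous app; each rank may be interrupted once per interval."""
    total = 0.0
    for _ in range(n_collectives):
        delays = [interrupt_len if random.random() < interrupt_prob else 0.0
                  for _ in range(n_ranks)]
        total += work_per_interval + max(delays)  # the collective waits for the slowest rank
    return total

random.seed(0)
# Same expected aggregate resilience time per rank (len * prob * n_collectives), different granularity:
print("short, frequent :", toy_runtime(64, 1000, 1.0, interrupt_len=0.1, interrupt_prob=0.10))
print("long, infrequent:", toy_runtime(64, 1000, 1.0, interrupt_len=1.0, interrupt_prob=0.01))
```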

Physical foundations of Landauer’s principle

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Frank, Michael P.

We review the physical foundations of Landauer’s Principle, which relates the loss of information from a computational process to an increase in thermodynamic entropy. Despite the long history of the Principle, its fundamental rationale and proper interpretation remain frequently misunderstood. Contrary to some misinterpretations of the Principle, the mere transfer of entropy between computational and non-computational subsystems can occur in a thermodynamically reversible way without increasing total entropy. However, Landauer’s Principle is not about general entropy transfers; rather, it more specifically concerns the ejection of (all or part of) some correlated information from a controlled, digital form (e.g., a computed bit) to an uncontrolled, non-computational form, i.e., as part of a thermal environment. Any uncontrolled thermal system will, by definition, continually re-randomize the physical information in its thermal state, from our perspective as observers who cannot predict the exact dynamical evolution of the microstates of such environments. Thus, any correlations involving information that is ejected into and subsequently thermalized by the environment will be lost from our perspective, resulting directly in an irreversible increase in thermodynamic entropy. Avoiding the ejection and thermalization of correlated computational information motivates the reversible computing paradigm, although the requirements for computations to be thermodynamically reversible are less restrictive than frequently described, particularly in the case of stochastic computational operations. There remain interesting, not yet fully explored possibilities for designing computational processes that utilize stochastic, many-to-one computational operations while nevertheless avoiding net entropy increase.
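
For reference, the quantitative content of the Principle discussed above is the familiar lower bound on the thermodynamic cost of ejecting and thermalizing one bit of correlated information: total entropy increases by at least \(k_B \ln 2\), corresponding to at least \(k_B T \ln 2\) of heat dissipated into an environment at temperature \(T\),

\[
\Delta S \;\ge\; k_B \ln 2, \qquad Q \;\ge\; k_B T \ln 2 \ \text{per bit lost.}
\]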

Global sensitivity analysis and estimation of model error, toward uncertainty quantification in scramjet computations

AIAA Journal

Huan, Xun H.; Safta, Cosmin S.; Sargsyan, Khachik S.; Geraci, Gianluca G.; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, H.N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
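
Global sensitivity analyses of this kind are commonly expressed through variance-based (Sobol') indices; for a scalar output \(Y = f(\xi_1,\dots,\xi_d)\), the first-order and total indices of input \(\xi_i\) are

\[
S_i \;=\; \frac{\operatorname{Var}\big(\mathbb{E}[\,Y \mid \xi_i\,]\big)}{\operatorname{Var}(Y)},
\qquad
S_i^{T} \;=\; 1 - \frac{\operatorname{Var}\big(\mathbb{E}[\,Y \mid \boldsymbol{\xi}_{\sim i}\,]\big)}{\operatorname{Var}(Y)},
\]

where \(\boldsymbol{\xi}_{\sim i}\) denotes all inputs except \(\xi_i\). These standard definitions are given for orientation; the screening criteria actually used in the paper may differ.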

Ensemble grouping strategies for embedded stochastic collocation methods applied to anisotropic diffusion problems

SIAM-ASA Journal on Uncertainty Quantification

D'Elia, Marta D.; Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.; Rajamanickam, Sivasankaran R.

Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162-C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this work we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen-Loève expansions. We demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
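
The grouping idea can be sketched very simply: if a cheap scalar surrogate (here, a stand-in "total anisotropy" value per sample) tracks the expected iteration count, then sorting samples by the surrogate and grouping neighbors keeps solver behavior similar within each ensemble. The sketch below is illustrative only and is not the paper's implementation.

```python
import numpy as np

def group_into_ensembles(surrogate_values, ensemble_size):
    """Return lists of sample indices, grouped after sorting by the scalar surrogate."""
    order = np.argsort(surrogate_values)
    return [order[i:i + ensemble_size].tolist()
            for i in range(0, len(order), ensemble_size)]

anisotropy = np.random.rand(32)   # hypothetical per-sample total-anisotropy measure
print(group_into_ensembles(anisotropy, ensemble_size=8))
```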

Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures

Deveci, Mehmet D.; Trott, Christian R.; Rajamanickam, Sivasankaran R.

Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
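
The role of the accumulator data structure can be seen in a minimal row-by-row (Gustavson-style) sparse matrix-matrix multiply that accumulates each output row in a hash map, one of the accumulator types such comparisons consider alongside dense accumulators. The serial Python sketch below is for illustration only; the paper's kernels are parallel and performance-portable.

```python
def spgemm(a_rows, b_rows):
    """Multiply sparse matrices stored as dict-of-dicts {row: {col: value}}; returns A*B."""
    c_rows = {}
    for i, a_row in a_rows.items():
        acc = {}                                         # hash-map accumulator for row i of C
        for k, a_ik in a_row.items():                    # nonzeros in row i of A
            for j, b_kj in b_rows.get(k, {}).items():    # nonzeros in row k of B
                acc[j] = acc.get(j, 0.0) + a_ik * b_kj
        c_rows[i] = acc
    return c_rows

A = {0: {0: 1.0, 2: 2.0}, 1: {1: 3.0}}
B = {0: {1: 4.0}, 1: {0: 5.0}, 2: {1: 6.0}}
print(spgemm(A, B))   # {0: {1: 16.0}, 1: {0: 15.0}}
```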

Multifidelity statistical analysis of large eddy simulations in scramjet computations

AIAA Non-Deterministic Approaches Conference, 2018

Huan, Xun H.; Geraci, Gianluca G.; Safta, Cosmin S.; Eldred, Michael S.; Sargsyan, Khachik S.; Vane, Zachary P.; Oefelein, Joseph C.; Najm, H.N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress towards optimal engine designs requires accurate and computationally affordable flow simulations, as well as uncertainty quantification (UQ). While traditional UQ techniques can become prohibitive under expensive simulations and high-dimensional parameter spaces, polynomial chaos (PC) surrogate modeling is a useful tool for alleviating some of the computational burden. However, non-intrusive quadrature-based constructions of PC expansions relying on a single high-fidelity model can still be quite expensive. We thus introduce a two-stage numerical procedure for constructing PC surrogates while making use of multiple models of different fidelity. The first stage involves an initial dimension reduction through global sensitivity analysis using compressive sensing. The second stage utilizes adaptive sparse quadrature on a multifidelity expansion to compute PC surrogate coefficients in the reduced parameter space where quadrature methods can be more effective. The overall method is used to produce accurate surrogates and to propagate uncertainty induced by uncertain boundary conditions and turbulence model parameters, for performance quantities of interest from large eddy simulations of supersonic reactive flows inside a scramjet engine.
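
The PC surrogates referred to above take the generic form below, with coefficients computed here by quadrature-based projection; the multifidelity construction and adaptive sparse quadrature in the reduced parameter space are the paper's contribution and are not captured by this generic statement.

\[
Y(\boldsymbol{\xi}) \;\approx\; \sum_{k=0}^{P} c_k\, \Psi_k(\boldsymbol{\xi}),
\qquad
c_k \;=\; \frac{\mathbb{E}\!\left[\,Y\,\Psi_k\,\right]}{\mathbb{E}\!\left[\Psi_k^2\right]}
\;\approx\; \frac{1}{\mathbb{E}\!\left[\Psi_k^2\right]} \sum_{q} w_q\, Y(\boldsymbol{\xi}_q)\, \Psi_k(\boldsymbol{\xi}_q),
\]

where the \(\Psi_k\) are polynomials orthogonal with respect to the input density and \((\boldsymbol{\xi}_q, w_q)\) are quadrature nodes and weights.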
