Publications

Results 3426–3450 of 9,998

DARPA TRADES Annual Report

Valentin, Miguel A.

During calendar year 2017, Sandia National Laboratories (SNL) made strides toward developing an open, portable design platform rich in high-performance computing (HPC)-enabled modeling, analysis, and synthesis tools. The main focus was to lay the foundations of the core interfaces that will enable plug-and-play insertion of synthesis optimization technologies in the areas of modeling, analysis, and synthesis.

Multiscale modeling of shock wave localization in porous energetic material

Physical Review B

Wood, M.A.; Kittell, D.E.; Yarrington, C.D.; Thompson, A.P.

Shock wave interactions with defects, such as pores, are known to play a key role in the chemical initiation of energetic materials. The shock response of hexanitrostilbene is studied through a combination of large-scale reactive molecular dynamics and mesoscale hydrodynamic simulations. In order to extend our simulation capability at the mesoscale to include weak shock conditions (<6 GPa), atomistic simulations of pore collapse are used to define a strain-rate-dependent strength model. Comparing these simulation methods allows us to impose physically reasonable constraints on the mesoscale model parameters. In doing so, we have been able to study shock waves interacting with pores as a function of this viscoplastic material response. We find that the pore collapse behavior of weak shocks is characteristically different from that of strong shocks.
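The strain-rate-dependent strength idea can be sketched generically; the power-law form, constants, and reference rate below are illustrative placeholders, not the hexanitrostilbene fit calibrated in the paper:

```python
def rate_dependent_strength(strain_rate, y0=0.1, ref_rate=1.0e6, m=0.5):
    """Generic viscoplastic yield strength (GPa): quasi-static strength y0
    augmented by a power-law term in the strain rate (1/s).  All constants
    here are illustrative, not the paper's calibrated values."""
    return y0 * (1.0 + (strain_rate / ref_rate) ** m)
```

At low rates the strength reduces to its quasi-static value, while strong shocks drive it upward, which is the qualitative behavior a mesoscale strength model of this kind needs to reproduce.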

Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

SIAM/ASA Journal on Uncertainty Quantification

D'Elia, Marta; Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.; Rajamanickam, Sivasankaran

Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
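A minimal sketch of the grouping idea, assuming a scalar cost surrogate such as the anisotropy measure; the function names, sample sizes, and surrogate below are hypothetical:

```python
import numpy as np

def group_by_surrogate(samples, surrogate, ensemble_size):
    """Sort samples by a scalar surrogate for per-sample solver cost and
    cut the sorted order into fixed-size ensembles, so that samples with
    similar expected iteration counts are propagated together."""
    order = np.argsort([surrogate(s) for s in samples])
    return [order[i:i + ensemble_size] for i in range(0, len(order), ensemble_size)]

# Illustrative use: 2-D diffusion-coefficient samples, with the ratio of
# largest to smallest component as a stand-in anisotropy measure.
rng = np.random.default_rng(0)
samples = rng.uniform(0.5, 2.0, size=(16, 2))
anisotropy = lambda s: max(s) / min(s)
ensembles = group_by_surrogate(samples, anisotropy, ensemble_size=4)
```

Sorting by the surrogate before cutting ensembles is what keeps the within-ensemble iteration counts homogeneous, which is the property the grouping strategies in the paper optimize for.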

ECP ALCC Quarterly Report (Oct-Dec 2017)

Hu, Jonathan J.

The scientific goal of the ExaWind Exascale Computing Project (ECP) is to advance our fundamental understanding of the flow physics governing whole wind plant performance, including wake formation, complex-terrain impacts, and turbine-turbine interaction effects. Current methods for modeling wind plant performance fall short due to insufficient model fidelity and inadequate treatment of key phenomena, combined with a lack of the computational power necessary to address the wide range of relevant length scales associated with wind plants. Thus, our ten-year exascale challenge is the predictive simulation of a wind plant composed of O(100) multi-MW wind turbines sited within a 100 km² area with complex terrain, involving simulations with O(100) billion grid points. The project plan builds progressively from predictive petascale simulations of a single turbine, where the detailed blade geometry is resolved, meshes rotate and deform with blade motions, and atmospheric turbulence is realistically modeled, to a multi-turbine array in complex terrain. The ALCC allocation will be used continually throughout the allocation period. In the first half of the allocation period, small (e.g., for testing Kokkos algorithms) and medium (e.g., 10K cores for highly resolved ABL simulations) sized jobs will be typical. In the second half of the allocation period, we will also have a number of large submittals for our resolved-turbine simulations. A challenge in the latter period is that small time step sizes will require long wall-clock times for statistically meaningful solutions. As such, we expect our allocation-hour burn rate to increase as we move through the allocation period.

Numerical methods for the inverse problem of density functional theory

International Journal of Quantum Chemistry

Jensen, Daniel S.; Wasserman, Adam

The inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
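For the single-particle case the inverse problem even has a closed form, which makes a compact illustration of the forward/inverse asymmetry the tutorial discusses; the finite-difference discretization below is a self-contained sketch, not one of the paper's inversion methods:

```python
import numpy as np

def invert_one_particle(rho, h):
    """Exact one-particle inversion: a nodeless ground-state density rho
    determines its potential via v(x) = (1/2) * (sqrt(rho))'' / sqrt(rho),
    up to an additive constant (the eigenvalue).  Second derivative by
    central differences on a uniform grid of spacing h; endpoints dropped."""
    psi = np.sqrt(rho)
    lap = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / (h * h)
    return 0.5 * lap / psi[1:-1]

# Check against the particle in a box: rho = 2 sin^2(pi x) on [0, 1]
# should invert to a constant potential (shifted by the eigenvalue -pi^2/2).
x = np.linspace(0.0, 1.0, 201)
rho = 2.0 * np.sin(np.pi * x) ** 2
v = invert_one_particle(rho, h=x[1] - x[0])
```

The division by psi is exactly the source of the numerical delicacy the tutorial emphasizes: wherever the density is small, discretization error in the numerator is amplified, so practical inversions need the careful error analysis described above.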

Multilevel-multifidelity approaches for forward UQ in the DARPA SEQUOIA project

AIAA Non-Deterministic Approaches Conference, 2018

Eldred, Michael; Geraci, Gianluca; Gorodetsky, Alex; Jakeman, John D.

Within the SEQUOIA project, funded by the DARPA EQUiPS program, we pursue algorithmic approaches that enable comprehensive design under uncertainty, through inclusion of aleatory/parametric and epistemic/model-form uncertainties within scalable forward/inverse UQ approaches. These statistical methods are embedded within design processes that manage computational expense through active subspace, multilevel-multifidelity, and reduced-order modeling approximations. To demonstrate these methods, we focus on the design of devices that involve multi-physics interactions in advanced aerospace vehicles. A particular problem of interest is the shape design of nozzles for advanced vehicles such as the Northrop Grumman UCAS X-47B, involving coupled aero-structural-thermal simulations for nozzle performance. In this paper, we explore a combination of multilevel and multifidelity forward and inverse UQ algorithms to reduce the overall computational cost of the analysis by leveraging hierarchies of model form (i.e., multifidelity hierarchies) and solution discretization (i.e., multilevel hierarchies) in order to exploit trade-offs between solution accuracy and cost. In particular, we seek the most cost-effective fusion of information across complex multi-dimensional modeling hierarchies. Results to date indicate the utility of multiple approaches, including methods that optimally allocate resources when estimator variance varies smoothly across levels, methods that allocate sufficient sampling density based on sparsity estimates, and methods that employ greedy multilevel refinement.
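One standard building block behind such optimal resource allocation is the classical multilevel Monte Carlo sample-allocation rule; a minimal sketch (not the SEQUOIA implementation), assuming per-level variances and costs are known:

```python
import math

def mlmc_allocation(variances, costs, budget):
    """Classical multilevel Monte Carlo allocation: sample counts N_l
    proportional to sqrt(V_l / C_l), scaled so that the total cost
    sum(N_l * C_l) exhausts the budget."""
    weights = [math.sqrt(v / c) for v, c in zip(variances, costs)]
    scale = budget / sum(w * c for w, c in zip(weights, costs))
    return [scale * w for w in weights]

# Two-level example: a coarse level (cheap, high variance) and a
# correction level (expensive, low variance).
alloc = mlmc_allocation(variances=[1.0, 0.25], costs=[1.0, 4.0], budget=100.0)
```

The rule concentrates samples where variance per unit cost is highest, which is the regime the abstract describes as "estimator variance varies smoothly across levels."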

Multifidelity statistical analysis of large eddy simulations in scramjet computations

AIAA Non-Deterministic Approaches Conference, 2018

Huan, Xun H.; Geraci, Gianluca; Safta, Cosmin; Eldred, Michael; Sargsyan, Khachik; Vane, Zachary P.; Oefelein, Joseph C.; Najm, Habib N.

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress towards optimal engine designs requires accurate and computationally affordable flow simulations, as well as uncertainty quantification (UQ). While traditional UQ techniques can become prohibitive under expensive simulations and high-dimensional parameter spaces, polynomial chaos (PC) surrogate modeling is a useful tool for alleviating some of the computational burden. However, non-intrusive quadrature-based constructions of PC expansions relying on a single high-fidelity model can still be quite expensive. We thus introduce a two-stage numerical procedure for constructing PC surrogates while making use of multiple models of different fidelity. The first stage involves an initial dimension reduction through global sensitivity analysis using compressive sensing. The second stage utilizes adaptive sparse quadrature on a multifidelity expansion to compute PC surrogate coefficients in the reduced parameter space where quadrature methods can be more effective. The overall method is used to produce accurate surrogates and to propagate uncertainty induced by uncertain boundary conditions and turbulence model parameters, for performance quantities of interest from large eddy simulations of supersonic reactive flows inside a scramjet engine.
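The first (dimension-reduction) stage can be sketched as follows, using an ordinary least-squares fit with coefficient thresholding as a stand-in for the compressive-sensing step; the sizes, threshold, and test function are illustrative:

```python
import numpy as np

def active_dims(X, y, thresh=1e-2):
    """Stage-1 sketch: fit a degree-1 surrogate y ~ c0 + sum_d c_d * x_d
    and keep the dimensions whose coefficients survive a threshold.  (The
    paper uses compressive sensing on a polynomial chaos basis; plain
    least squares stands in here to keep the sketch self-contained.)"""
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return [d for d in range(X.shape[1]) if abs(coef[d + 1]) > thresh]

# Six nominal inputs, only two of which actually drive the response:
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 4] + 0.001 * rng.standard_normal(200)
active = active_dims(X, y)
```

Stage 2 would then place the adaptive sparse quadrature only over the surviving dimensions, where quadrature-based PC construction remains affordable.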

Towards a scalable multifidelity simulation approach for electrokinetic problems at the mesoscale

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hong, Brian D.; Perego, Mauro; Bochev, Pavel B.; Frischknecht, Amalie L.; Phillips, Edward

In this work we present a computational capability featuring a hierarchy of models with different fidelities for the solution of electrokinetics problems at the micro-/nano-scale. A multifidelity approach allows the selection of the most appropriate model, in terms of accuracy and computational cost, for the particular application at hand. We demonstrate the proposed multifidelity approach by studying the mobility of a colloid in a micro-channel as a function of the colloid charge and of the size of the ions dissolved in the fluid.
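The fidelity-selection logic can be sketched abstractly; the model names, error estimates, and costs below are hypothetical placeholders, not the paper's hierarchy:

```python
def select_model(models, tol):
    """Pick the cheapest model whose estimated error meets the tolerance;
    fall back to the highest-fidelity (most expensive) model otherwise.
    'models' is a list of (name, estimated_error, cost) tuples."""
    for name, err, cost in sorted(models, key=lambda m: m[2]):
        if err <= tol:
            return name
    return max(models, key=lambda m: m[2])[0]

# Hypothetical hierarchy for a colloid-mobility study:
hierarchy = [("continuum-PNP", 0.05, 1.0),
             ("coarse-grained", 0.01, 50.0),
             ("particle-resolved", 0.001, 2000.0)]
```

The point of the hierarchy is exactly this trade-off: a loose accuracy target lets the cheap continuum model answer the question, while tight targets force progressively more expensive models.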

Nonlocal and mixed-locality multiscale finite element methods

Multiscale Modeling and Simulation

Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. In this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. We conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.
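In the classical (local) 1-D setting, the enriched multiscale basis functions come from solving -(a φ')' = 0 on each coarse element; the sketch below shows that local construction (the paper's nonlocal/peridynamic variant replaces this local solve):

```python
import numpy as np

def msfem_basis_1d(a):
    """Multiscale FE basis on one coarse element with an oscillatory
    coefficient a(x) sampled on a fine grid: the solution of
    -(a phi')' = 0 with phi = 0 at the left end and phi = 1 at the right
    is the normalized running integral of 1/a (discretized by a
    cumulative sum here, as a sketch)."""
    w = np.cumsum(1.0 / a)
    w -= w[0]          # enforce phi = 0 at the left endpoint
    return w / w[-1]   # enforce phi = 1 at the right endpoint

# Sanity check: a constant coefficient recovers the standard linear hat.
phi = msfem_basis_1d(np.ones(5))
```

Where the coefficient is large the basis flattens and where it is small the basis varies quickly, which is how the fine-scale material variability gets encoded into the coarse-scale stiffness.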

It’s not the heat, it’s the humidity: Scheduling resilience activity at scale

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Widener, Patrick; Ferreira, Kurt; Levy, Scott L.N.

Maintaining the performance of high-performance computing (HPC) applications with the expected increase in failures is a major challenge for next-generation extreme-scale systems. With increasing scale, resilience activities (e.g. checkpointing) are expected to become more diverse, less tightly synchronized, and more computationally intensive. Few existing studies, however, have examined how decisions about scheduling resilience activities impact application performance. In this work, we examine the relationship between the duration and frequency of resilience activities and application performance. Our study reveals several key findings: (i) the aggregate amount of time consumed by resilience activities is not an effective metric for predicting application performance; (ii) the duration of the interruptions due to resilience activities has the greatest influence on application performance; shorter, but more frequent, interruptions are correlated with better application performance; and (iii) the differential impact of resilience activities across applications is related to the applications’ inter-collective frequencies; the performance of applications that perform infrequent collective operations scales better in the presence of resilience activities than the performance of applications that perform more frequent collective operations. This initial study demonstrates the importance of considering how resilience activities are scheduled. We provide critical analysis and direct guidance on how the resilience challenges of future systems can be met while minimizing the impact on application performance.
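The second finding can be illustrated with a toy model in which every collective operation waits for the slowest rank; the interval/duration pairs below hold aggregate resilience time fixed while varying granularity (all numbers are illustrative, not the study's workloads):

```python
import random

def toy_collective_wait(num_ranks, num_collectives, interval, duration, seed=0):
    """Toy model: each rank is mid-interruption at a given collective with
    probability duration/interval; the collective then waits for the
    largest remaining interruption time across ranks.  Returns the total
    extra wait accumulated over all collectives."""
    rng = random.Random(seed)
    p_busy = duration / interval
    total = 0.0
    for _ in range(num_collectives):
        waits = [rng.uniform(0.0, duration) if rng.random() < p_busy else 0.0
                 for _ in range(num_ranks)]
        total += max(waits)
    return total

# Same 10% aggregate resilience overhead, different granularity:
short_frequent = toy_collective_wait(64, 1000, interval=10.0, duration=1.0)
long_rare = toy_collective_wait(64, 1000, interval=100.0, duration=10.0)
```

Even at equal aggregate overhead, the long-rare schedule stalls collectives far longer in this toy model, consistent with the finding that interruption duration, not total resilience time, drives the slowdown.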
