Publications

Optimization-based property-preserving solution recovery for fault-tolerant scalar transport

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Ridzal, Denis R.; Bochev, Pavel B.

As the mean time between failures on the future high-performance computing platforms is expected to decrease to just a few minutes, the development of “smart”, property-preserving checkpointing schemes becomes imperative to avoid dramatic decreases in application utilization. In this paper we formulate a generic optimization-based approach for fault-tolerant computations, which separates property preservation from the compression and recovery stages of the checkpointing processes. We then specialize the approach to obtain a fault recovery procedure for a model scalar transport equation, which preserves local solution bounds and total mass. Numerical examples showing solution recovery from a corrupted application state for three different failure modes illustrate the potential of the approach.
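
A minimal sketch of what such a property-preserving recovery step could look like, assuming cell values \tilde{u}_i recovered from the compressed checkpoint, local bounds u_i^{\min} and u_i^{\max}, cell masses m_i, and a target total mass M (the notation is illustrative and not taken from the paper):

    \min_{u} \; \tfrac{1}{2} \sum_i m_i \, (u_i - \tilde{u}_i)^2
    \quad \text{s.t.} \quad u_i^{\min} \le u_i \le u_i^{\max} \;\; \forall i,
    \qquad \sum_i m_i \, u_i = M .

The recovered field stays as close as possible to the decompressed data, while the constraints restore the local solution bounds and the total mass.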

Operational, gauge-free quantum tomography

Quantum

Di Matteo, Olivia; Gamble, John; Granade, Chris; Rudinger, Kenneth M.; Wiebe, Nathan

As increasingly impressive quantum information processors are realized in laboratories around the world, robust and reliable characterization of these devices is now more urgent than ever. These diagnostics can take many forms, but one of the most popular categories is tomography, where an underlying parameterized model is proposed for a device and inferred by experiments. Here, we introduce and implement efficient operational tomography, which uses experimental observables as these model parameters. This addresses a problem of ambiguity in representation that arises in current tomographic approaches (the gauge problem). Solving the gauge problem enables us to efficiently implement operational tomography in a Bayesian framework computationally, and hence gives us a natural way to include prior information and discuss uncertainty in fit parameters. We demonstrate this new tomography in a variety of different experimentally-relevant scenarios, including standard process tomography, Ramsey interferometry, randomized benchmarking, and gate set tomography.

Towards an integrated and efficient framework for leveraging reduced order models for multifidelity uncertainty quantification

AIAA Scitech 2020 Forum

Blonigan, Patrick J.; Geraci, Gianluca G.; Rizzi, Francesco N.; Eldred, Michael S.

Truly predictive numerical simulations can only be obtained by performing Uncertainty Quantification. However, many realistic engineering applications require extremely complex and computationally expensive high-fidelity numerical simulations for their accurate performance characterization. Very often the combination of complex physical models and extreme operative conditions can easily lead to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity uncertainty quantification approach, i.e., a workflow that only uses high-fidelity simulations, is infeasible due to its prohibitive overall computational cost. To overcome this difficulty, multifidelity strategies have emerged and gained popularity in recent years. Their core idea is to combine simulations with varying levels of fidelity/accuracy in order to obtain estimators or surrogates that can yield the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that can be used to decrease the complexity of a numerical model realization and thus its computational cost. Less attention has been dedicated to low-fidelity models that can be built directly from a small number of available high-fidelity simulations. In this work we focus our attention on reduced order models (ROMs). Our main goal is to investigate the combination of multifidelity uncertainty quantification and ROMs in order to evaluate the possibility of obtaining an efficient framework for propagating uncertainties through expensive numerical codes. We focus on sampling-based multifidelity approaches, such as the multifidelity control variate, and we consider several scenarios for a numerical test problem, namely the Kuramoto-Sivashinsky equation, for which the efficiency of the multifidelity-ROM estimator is compared to the standard (single-fidelity) Monte Carlo approach.
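
For reference, a schematic form of the two-model control-variate estimator mentioned above, with \hat{Q}^{HF}_{N} the high-fidelity sample mean over N samples and \hat{Q}^{LF}_{N}, \hat{Q}^{LF}_{M} the low-fidelity (e.g., ROM) sample means over the shared N samples and a larger set of M samples (the estimators used in the paper may differ in detail):

    \hat{Q}^{MF} = \hat{Q}^{HF}_{N} + \alpha \left( \hat{Q}^{LF}_{M} - \hat{Q}^{LF}_{N} \right),
    \qquad \alpha = \rho \, \frac{\sigma_{HF}}{\sigma_{LF}},

where \rho is the correlation between the high- and low-fidelity outputs; the closer |\rho| is to one, the larger the variance reduction over single-fidelity Monte Carlo.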

A Portable SIMD Primitive Using Kokkos for Heterogeneous Architectures

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Sahasrabudhe, Damodar; Phipps, Eric T.; Rajamanickam, Sivasankaran R.; Berzins, Martin

As computer architectures rapidly evolve (e.g., those designed for exascale), multiple portability frameworks have been developed to avoid new architecture-specific development and tuning. However, portability frameworks depend on compilers for auto-vectorization and may lack support for explicit vectorization on heterogeneous platforms. Alternatively, programmers can use intrinsics-based primitives to achieve more efficient vectorization, but the lack of a GPU back-end for these primitives makes such code non-portable. A unified, portable Single Instruction Multiple Data (SIMD) primitive proposed in this work allows intrinsics-based vectorization on CPUs and many-core architectures such as Intel Knights Landing (KNL), and also facilitates Single Instruction Multiple Threads (SIMT) based execution on GPUs. This unified primitive, coupled with the Kokkos portability ecosystem, makes it possible to develop explicitly vectorized code that is portable across heterogeneous platforms. The new SIMD primitive is used on different architectures to test the performance boost against a hard-to-auto-vectorize baseline, to measure the overhead against an efficiently vectorized baseline, and to evaluate the new feature called the “logical vector length” (LVL). The SIMD primitive provides portability across CPUs and GPUs without any performance degradation being observed experimentally.

Robust Training and Initialization of Deep Neural Networks: An Adaptive Basis Viewpoint

Proceedings of Machine Learning Research

Cyr, Eric C.; Gulian, Mamikon G.; Patel, Ravi G.; Perego, Mauro P.; Trask, Nathaniel A.

Motivated by the gap between theoretical optimal approximation rates of deep neural networks (DNNs) and the accuracy realized in practice, we seek to improve the training of DNNs. The adoption of an adaptive basis viewpoint of DNNs leads to novel initializations and a hybrid least squares/gradient descent optimizer. We provide analysis of these techniques and illustrate via numerical examples dramatic increases in accuracy and convergence rate for benchmarks characterizing scientific applications where DNNs are currently used, including regression problems and physics-informed neural networks for the solution of partial differential equations.
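
As a concrete illustration of the adaptive-basis idea, here is a minimal NumPy sketch of a hybrid least-squares/gradient-descent step for a network whose output layer is linear: the hidden layer is treated as a set of basis functions, the output weights are re-solved by least squares at every iteration, and only the hidden-layer parameters take gradient steps. The toy data, network width, and step size are placeholders; this is a sketch of the viewpoint, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1D regression problem (placeholder data).
    x = np.linspace(-1.0, 1.0, 200)[:, None]
    y = np.sin(np.pi * x)

    # One hidden layer of width m acts as the adaptive basis Phi(x; W, b).
    m = 20
    W = rng.normal(size=(1, m))
    b = rng.normal(size=(m,))

    def basis(x, W, b):
        return np.tanh(x @ W + b)          # hidden activations = basis functions

    lr = 1e-2
    for it in range(500):
        Phi = basis(x, W, b)
        # Least-squares solve for the linear output-layer weights.
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        r = Phi @ c - y                    # residual with optimal output weights
        # Plain gradient-descent update on the hidden-layer parameters.
        dact = 1.0 - Phi**2                # tanh'(z)
        g = (r @ c.T) * dact               # dL/dZ for L = 0.5 * ||r||^2
        W -= lr * (x.T @ g)
        b -= lr * g.sum(axis=0)

    Phi = basis(x, W, b)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print("final RMSE:", float(np.sqrt(np.mean((Phi @ c - y) ** 2))))

The least-squares solve removes the ill-conditioning of the linear output weights from the gradient loop, which is one way to realize the hybrid optimizer described in the abstract.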

Hyper-Differential Sensitivity Analysis of Uncertain Parameters in PDE-Constrained Optimization

International Journal for Uncertainty Quantification

van Bloemen Waanders, Bart G.

Many problems in engineering and sciences require the solution of large scale optimization constrained by partial differential equations (PDEs). Though PDE-constrained optimization is itself challenging, most applications pose additional complexity, namely, uncertain parameters in the PDEs. Uncertainty quantification (UQ) is necessary to characterize, prioritize, and study the influence of these uncertain parameters. Sensitivity analysis, a classical tool in UQ, is frequently used to study the sensitivity of a model to uncertain parameters. In this article, we introduce "hyper-differential sensitivity analysis" which considers the sensitivity of the solution of a PDE-constrained optimization problem to uncertain parameters. Our approach is a goal-oriented analysis which may be viewed as a tool to complement other UQ methods in the service of decision making and robust design. We formally define hyper-differential sensitivity indices and highlight their relationship to the existing optimization and sensitivity analysis literatures. Assuming the presence of low rank structure in the parameter space, computational efficiency is achieved by leveraging a generalized singular value decomposition in conjunction with a randomized solver which converts the computational bottleneck of the algorithm into an embarrassingly parallel loop. Two multi-physics examples, consisting of nonlinear steady state control and transient linear inversion, demonstrate efficient identification of the uncertain parameters which have the greatest influence on the optimal solution.
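
As background, the sensitivity indices referred to above build on classical post-optimality sensitivity analysis: schematically, if z^*(\theta) minimizes a reduced objective J(z, \theta) (with the PDE constraint eliminated), then differentiating the stationarity condition \nabla_z J(z^*(\theta), \theta) = 0 gives

    \frac{\mathrm{d} z^{*}}{\mathrm{d} \theta} = -\left[ \nabla_{zz} J \right]^{-1} \nabla_{z\theta} J ,

and hyper-differential sensitivity indices summarize the dominant directions of this parameter-to-solution map, which the article extracts efficiently via a generalized singular value decomposition combined with a randomized solver. (This is a schematic reduced-space statement; the article works with the full constrained formulation.)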

Evaluating the efficiency of OpenMP tasking for unbalanced computation on diverse CPU architectures

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Olivier, Stephen L.

In the decade since support for task parallelism was incorporated into OpenMP, its use has remained limited in part due to concerns about its performance and scalability. This paper revisits a study from the early days of OpenMP tasking that used the Unbalanced Tree Search (UTS) benchmark as a stress test to gauge implementation efficiency. The present UTS study includes both Clang/LLVM and vendor OpenMP implementations on four different architectures. We measure parallel efficiency to examine each implementation’s performance in response to varying task granularity. We find that most implementations achieve over 90% efficiency using all available cores for tasks of O(100k) instructions, and the best even manage tasks of O(10k) instructions well.

Linking pyrometry to porosity in additively manufactured metals

Additive Manufacturing

Mitchell, John A.; Ivanoff, Thomas I.; Dagel, Daryl; Madison, Jonathan D.; Jared, Bradley H.

Porosity in additively manufactured metals can reduce material strength and is generally undesirable. Although studies have shown relationships between process parameters and porosity, monitoring strategies for defect detection and pore formation are still needed. In this paper, instantaneous anomalous conditions are detected in-situ via pyrometry during laser powder bed fusion additive manufacturing and correlated with voids observed using post-build micro-computed tomography. Large two-color pyrometry data sets were used to estimate instantaneous temperatures, melt pool orientations and aspect ratios. Machine learning algorithms were then applied to processed pyrometry data to detect outlier images and conditions. It is shown that melt pool outliers are good predictors of voids observed post-build. With this approach, real time process monitoring can be incorporated into systems to detect defect and void formation. Alternatively, using the methodology presented here, pyrometry data can be post processed for porosity assessment.
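
As an illustration of the kind of in-situ outlier screening described above, here is a small scikit-learn sketch that flags anomalous melt-pool images from per-frame pyrometry features; the feature set, values, and the IsolationForest model are illustrative assumptions, not the algorithms or data used in the study.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-image melt-pool features from two-color pyrometry:
    # columns = [peak temperature (K), aspect ratio, orientation (deg)].
    rng = np.random.default_rng(1)
    features = rng.normal(loc=[2100.0, 1.8, 0.0],
                          scale=[60.0, 0.2, 15.0],
                          size=(5000, 3))

    # Unsupervised outlier detection over the processed pyrometry features.
    model = IsolationForest(contamination=0.01, random_state=0).fit(features)
    labels = model.predict(features)      # -1 = outlier melt-pool image, +1 = nominal

    outlier_idx = np.flatnonzero(labels == -1)
    print(f"{outlier_idx.size} candidate anomalies out of {features.shape[0]} frames")

Frames flagged as outliers would then be compared against the locations of voids observed in the post-build micro-computed tomography.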

Multifidelity uncertainty propagation for cardiovascular hemodynamics

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Schiavazzi, Daniele E.; Fleeter, Casey M.; Geraci, Gianluca G.; Marsden, Alison L.

Predictions from numerical hemodynamics are increasingly adopted and trusted in the diagnosis and treatment of cardiovascular disease. However, the predictive abilities of deterministic numerical models are limited due to the large number of possible sources of uncertainty including boundary conditions, vessel wall material properties, and patient specific model anatomy. Stochastic approaches have been proposed as a possible improvement, but are penalized by the large computational cost associated with repeated solutions of the underlying deterministic model. We propose a stochastic framework which leverages three cardiovascular model fidelities, i.e., three-, one- and zero-dimensional representations of cardiovascular blood flow. Specifically, we employ multilevel and multifidelity estimators from Sandia's open-source Dakota toolkit to reduce the variance in our estimated quantities of interest, while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for both global and local hemodynamic indicators.
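
For context, the multilevel estimators mentioned above rest on the standard telescoping decomposition across model fidelities/resolutions \ell = 0, \dots, L, with each difference estimated from its own set of samples (written generically here, not in the exact form used in the study):

    \mathbb{E}[Q_L] = \mathbb{E}[Q_0] + \sum_{\ell=1}^{L} \mathbb{E}[Q_\ell - Q_{\ell-1}],
    \qquad
    \hat{Q}^{\mathrm{ML}} = \frac{1}{N_0} \sum_{i=1}^{N_0} Q_0^{(i)}
      + \sum_{\ell=1}^{L} \frac{1}{N_\ell} \sum_{i=1}^{N_\ell} \left( Q_\ell^{(i)} - Q_{\ell-1}^{(i)} \right) .

Because most of the sampling burden falls on the cheap zero- and one-dimensional models, far fewer expensive three-dimensional simulations are needed to reach a prescribed estimator variance.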

Krylov Smoothing for Fully-Coupled AMG Preconditioners for VMS Resistive MHD

Lecture Notes in Computational Science and Engineering

Lin, Paul L.; Shadid, John N.; Tsuji, Paul H.

This study explores the use of a Krylov iterative method (GMRES) as a smoother for an algebraic multigrid (AMG) preconditioned Newton–Krylov iterative solution approach for a fully-implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. The efficiency of this approach depends critically on the scalability and performance of the AMG preconditioner for the linear solves, and the performance of the smoothers plays an essential role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. This brief study presents three time-dependent resistive MHD test cases to evaluate the method. The results demonstrate that the GMRES smoother can be faster, due to a decrease in the preconditioner setup time and a reduction in outer GMRESR solver iterations, and requires less memory (typically 35% less for the global GMRES smoother) than the DD ILU smoother.

Multifidelity optimization under uncertainty for a scramjet-inspired problem

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Menhorn, Friedrich M.; Geraci, Gianluca G.; Eldred, Michael S.; Marzouk, Youssef M.

SNOWPAC (Stochastic Nonlinear Optimization With Path-Augmented Constraints) is a method for stochastic nonlinear constrained derivative-free optimization. For such problems, it extends the path-augmented constraints framework introduced by the deterministic optimization method NOWPAC and uses a noise-adapted trust region approach and Gaussian processes for noise reduction. SNOWPAC has recently become available in the DAKOTA framework, which offers a highly flexible interface to couple the optimizer with different sampling strategies or surrogate models. In this paper we discuss details of SNOWPAC and demonstrate the coupling with DAKOTA. We showcase the approach by presenting design optimization results for a shape in a 2D supersonic duct. This simulation is intended to imitate the behavior of the flow in a SCRAMJET simulation, but at a much lower computational cost. Additionally, different mesh or model fidelities can be tested. Thus, it serves as a convenient test case before moving to costly SCRAMJET computations. Here, we study deterministic results and results obtained by introducing uncertainty in the inflow parameters. As sampling strategies we compare classical Monte Carlo sampling with multilevel Monte Carlo approaches, for which we developed new error estimators. All approaches show a reasonable optimization of the design over the objective while maintaining or seeking feasibility. Furthermore, we achieve significant reductions in computational cost by using multilevel approaches that combine solutions from different grid resolutions.

srMO-BO-3GP: A sequential regularized multi-objective constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Laros, James H.; Eldred, Michael S.; Mccann, Scott; Wang, Yan

Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To leverage the capability of the classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective (MO) extension, called srMO-BO-3GP, to solve MO optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each GP is assigned a different task: the first GP is used to approximate a single objective computed from the MO definition, the second GP is used to learn the unknown constraints, and the third GP is used to learn the uncertain Pareto frontier. At each iteration, an augmented Tchebycheff function, which converts the MO problem to a single objective, is adopted and extended with a regularized ridge term, where the regularization is introduced to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the richness and diversity of the Pareto frontier through an acquisition function that balances exploitation and exploration. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.
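
A schematic form of the regularized scalarization described above, with objectives f_i, weights w_i, ideal (utopia) point z_i^*, augmentation parameter \rho, and ridge coefficient \lambda (the exact regularization used in the paper may differ):

    F(x) = \max_i \, w_i \left( f_i(x) - z_i^{*} \right)
         + \rho \sum_i w_i \left( f_i(x) - z_i^{*} \right)
         + \lambda \, \lVert x \rVert_2^2 .

The max term drives the search toward the Pareto frontier, the augmentation term avoids weakly Pareto-optimal points, and the ridge term smooths the resulting single-objective function.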

Group Formation Theory at Multiple Scales

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Doyle, Casey L.; Naugle, Asmeret B.; Bernard, Michael L.; Lakkaraju, Kiran L.; Kittinger, Robert; Sweitzer, Matthew; Rothganger, Fredrick R.

There is a wealth of psychological theory regarding the drive for individuals to congregate and form social groups, positing that people may organize out of fear, social pressure, or even to manage their self-esteem. We evaluate three such theories for multi-scale validity by studying them not only at the individual scale for which they were originally developed, but also for applicability to group interactions and behavior. We implement this multi-scale analysis using a dataset of communications and group membership derived from a long-running online game, matching the intent behind the theories to quantitative measures that describe players’ behavior. Once we establish that the theories hold for the dataset, we increase the scope to test the theories at the higher scale of group interactions. Despite being formulated to describe individual cognition and motivation, we show that some group dynamics theories hold at the higher level of group cognition and can effectively describe the behavior of joint decision making and higher-level interactions.

ExaWind: Exascale Predictive Wind Plant Flow Physics Modeling

Sprague, M.; Ananthan, S.; Brazell, M.; Glaws, A.; De Frahan, M.; King, R.; Natarajan, M.; Rood, J.; Sharma, A.; Sirydowicz, K.; Thomas, S.; Vijaykumar, G.; Yellapantula, S.; Crozier, Paul C.; Berger-Vergiat, Luc B.; Cheung, Lawrence C.; Glaze, D.J.; Hu, Jonathan J.; Knaus, Robert C.; Lee, Dong H.; Okusanya, Tolulope O.; Overfelt, James R.; Rajamanickam, Sivasankaran R.; Sakievich, Philip S.; Smith, Timothy A.; Vo, Johnathan V.; Williams, Alan B.; Yamazaki, Ichitaro Y.; Turner, J.; Prokopenko, A.; Wilson, R.; Moser, R.; Melvin, J.; Sitaraman, J.

Abstract not provided.

Optimization Based Particle-Mesh Algorithm for High-Order and Conservative Scalar Transport

Lecture Notes in Computational Science and Engineering

Maljaars, Jakob M.; Labeur, Robert J.; Trask, Nathaniel A.; Sulsky, Deborah L.

A particle-mesh strategy is presented for scalar transport problems that provides diffusion-free advection, conserves mass locally (i.e., cellwise), and exhibits optimal convergence on arbitrary polyhedral meshes. This is achieved by expressing the convective field, naturally located on the Lagrangian particles, as a mesh quantity via a dedicated particle-mesh projection formulated as a PDE-constrained optimization problem. Optimal convergence and local conservation are demonstrated for a benchmark test, and the application of the scheme to mass-conservative density tracking is illustrated for the Rayleigh–Taylor instability.
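
In schematic form, such a projection determines the mesh field \psi_h from the particle values \psi_p carried at positions x_p by solving a constrained least-squares problem (shown here only to convey the structure; the paper's formulation specifies the constraint in detail):

    \min_{\psi_h} \; \tfrac{1}{2} \sum_p \left( \psi_h(x_p) - \psi_p \right)^2
    \quad \text{s.t.} \quad B \, \psi_h = g ,

where the linear constraint B \psi_h = g collects the cellwise conservation statements the recovered mesh field must satisfy.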

WearGP: A UQ/ML wear prediction framework for slurry pump impellers and casings

American Society of Mechanical Engineers, Fluids Engineering Division (Publication) FEDSM

Laros, James H.; Visintainer, Robert; Furlan, John; Pagalthivarthi, Krishnan V.; Garman, Mohamed; Cutright, Aaron; Wang, Yan

Wear prediction is important in designing reliable machinery for the slurry industry. It usually relies on multi-phase computational fluid dynamics, which is accurate but computationally expensive. Each run of the simulations can take hours or days even on a high-performance computing platform. The high computational cost prohibits a large number of simulations in the process of design optimization. In contrast to physics-based simulations, data-driven approaches such as machine learning are capable of providing accurate wear predictions at a small fraction of the computational cost, if the models are trained properly. In this paper, the recently developed WearGP framework [1] is extended to predict the global wear quantities of interest by constructing Gaussian process surrogates. The effects of different operating conditions are investigated. The advantages of the WearGP framework are demonstrated by its high accuracy and low computational cost in predicting wear rates.
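
For illustration, a short scikit-learn sketch of fitting a Gaussian process surrogate for a scalar wear quantity of interest from a handful of simulation runs and querying it at a new operating condition; the input variables, data values, and kernel below are placeholders and not the WearGP setup.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Hypothetical training data: operating conditions -> wear rate from CFD runs.
    # columns = [flow rate, solids concentration, particle size]
    X_train = np.array([[0.8, 0.20, 150.0],
                        [1.0, 0.25, 180.0],
                        [1.2, 0.30, 210.0],
                        [1.4, 0.22, 160.0],
                        [1.1, 0.28, 200.0]])
    y_train = np.array([0.31, 0.42, 0.58, 0.49, 0.46])   # wear QoI (placeholder units)

    kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 0.1, 50.0])
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

    # Cheap prediction (with uncertainty) at a new operating condition.
    X_new = np.array([[1.3, 0.26, 190.0]])
    mean, std = gp.predict(X_new, return_std=True)
    print(f"predicted wear: {mean[0]:.3f} +/- {std[0]:.3f}")

Once trained, such a surrogate replaces hours of multi-phase CFD per evaluation with a prediction that takes milliseconds, which is what enables its use inside design optimization loops.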

GMLS-Nets: A machine learning framework for unstructured data

CEUR Workshop Proceedings

Trask, Nathaniel A.; Patel, Ravi G.; Gross, Ben J.; Atzberger, Paul J.

Data fields sampled on irregularly spaced points arise in many science and engineering applications. For regular grids, Convolutional Neural Networks (CNNs) gain benefits from weight sharing and invariances. We generalize CNNs by introducing methods for data on unstructured point clouds using Generalized Moving Least Squares (GMLS). GMLS is a nonparametric meshfree technique for estimating linear bounded functionals from scattered data, and has emerged as an effective technique for solving partial differential equations (PDEs). By parameterizing the GMLS estimator, we obtain learning methods for linear and non-linear operators with unstructured stencils. The requisite calculations are local, embarrassingly parallelizable, and supported by a rigorous approximation theory. We show how the framework may be used for unstructured physical data sets to perform operator regression, develop predictive dynamical models, and obtain feature extractors for engineering quantities of interest. The results show the promise of these architectures as foundations for data-driven model development in scientific machine learning applications.

An Energy Consistent Discretization of the Nonhydrostatic Equations in Primitive Variables

Journal of Advances in Modeling Earth Systems

Taylor, Mark A.; Guba, Oksana G.; Steyer, Andrew S.; Ullrich, Paul A.; Hall; Eldred, Christopher

We derive a formulation of the nonhydrostatic equations in spherical geometry with a Lorenz staggered vertical discretization. The combination conserves a discrete energy in exact time integration when coupled with a mimetic horizontal discretization. The formulation is a version of Dubos and Tort (2014, https://doi.org/10.1175/MWR-D-14-00069.1) rewritten in terms of primitive variables. It is valid for terrain-following mass or height coordinates and for both Eulerian and vertically Lagrangian discretizations. The discretization relies on an extension of Simmons and Burridge (1981, https://doi.org/10.1175/1520-0493(1981)109<0758:AEAAMC>2.0.CO;2) vertical differencing, which we show obeys a discrete derivative product rule. This product rule allows us to simplify the treatment of the vertical transport terms. Energy conservation is obtained via a term-by-term balance in the kinetic, internal, and potential energy budgets, ensuring an energy-consistent discretization up to time truncation error with no spurious sources of energy. We demonstrate convergence with respect to time truncation error in a spectral element code with a horizontally explicit, vertically implicit (HEVI) implicit-explicit time-stepping algorithm.

FROSch: A Fast And Robust Overlapping Schwarz Domain Decomposition Preconditioner Based on Xpetra in Trilinos

Lecture Notes in Computational Science and Engineering

Heinlein, Alexander; Klawonn, Axel; Rajamanickam, Sivasankaran R.; Rheinbach, Oliver

This article describes a parallel implementation of a two-level overlapping Schwarz preconditioner with the GDSW (Generalized Dryja–Smith–Widlund) coarse space described in previous work [12, 10, 15] into the Trilinos framework; cf. [16]. The software is a significant improvement of a previous implementation [12]; see Sec. 4 for results on the improved performance.

Multilevel uncertainty quantification using CFD and OpenFAST simulations of the SWiFT facility

AIAA Scitech 2020 Forum

Laros, James H.; Maniaci, David C.; Herges, Thomas H.; Geraci, Gianluca G.; Seidl, Daniel T.; Eldred, Michael S.; Blaylock, Myra L.; Houchens, Brent C.

Uncertainty is present in all wind energy problems of interest, but quantifying its impact for wind energy research, design and analysis applications often requires the collection of large ensembles of numerical simulations. These predictions require a range of model fidelities, as predictive models that include the interaction of atmospheric and wind turbine wake physics can require weeks or months to solve on institutional high-performance computing systems. The need for these extremely expensive numerical simulations extends the computational resource requirements usually associated with uncertainty quantification analysis. To alleviate the computational burden, we propose here to adopt several Multilevel-Multifidelity sampling strategies that we compare for a realistic test case. A demonstration study was completed using simulations of a V27 turbine at Sandia National Laboratories’ SWiFT facility in a neutral atmospheric boundary layer. The flow was simulated with three models of disparate fidelity. OpenFAST with TurbSim was used stand-alone as the most computationally efficient, lower-fidelity model. The computational fluid dynamics code Nalu-Wind was used for large eddy simulations with both medium-fidelity actuator disk and high-fidelity actuator line models, with various mesh resolutions. In an uncertainty quantification study, we considered five different turbine properties as random parameters: yaw offset, generator torque constant, collective blade pitch, gearbox efficiency and blade mass. For all quantities of interest, the Multilevel-Multifidelity estimators demonstrated greater efficiency compared to standard and multilevel Monte Carlo estimators.

Lightweight Software Process Improvement Using Productivity and Sustainability Improvement Planning (PSIP)

Communications in Computer and Information Science

Milewicz, Reed M.; Heroux, Michael A.; Gonsiorowski, Elsa; Gupta, Rinku; Moulton, J.D.; Watson, Gregory R.; Willenbring, James M.; Zamora, Richard J.; Raybourn, Elaine M.

Productivity and Sustainability Improvement Planning (PSIP) is a lightweight, iterative workflow that allows software development teams to identify development bottlenecks and track progress to overcome them. In this paper, we present an overview of PSIP and how it compares to other software process improvement (SPI) methodologies, and provide two case studies that describe how the use of PSIP led to successful improvements in team effectiveness and efficiency.

Fourier analyses of high-order continuous and discontinuous Galerkin methods

SIAM Journal on Numerical Analysis

Le Roux, Daniel Y.; Eldred, Christopher; Taylor, Mark A.

We present a Fourier analysis of wave propagation problems subject to a class of continuous and discontinuous discretizations using high-degree Lagrange polynomials. This allows us to obtain explicit analytical formulas for the dispersion relation and group velocity and, for the first time to our knowledge, characterize analytically the emergence of gaps in the dispersion relation at specific wavenumbers, when they exist, and compute their specific locations. Wave packets with energy at these wavenumbers will fail to propagate correctly, leading to significant numerical dispersion. We also show that the Fourier analysis generates mathematical artifacts, and we explain how to remove them through a branch selection procedure conducted by analysis of eigenvectors and associated reconstructed solutions. The higher frequency eigenmodes, named erratic in this study, are also investigated analytically and numerically.

30 cm Drop Tests

Kalinina, Elena A.; Ammerman, Douglas J.; Grey, Carissa A.; Arviso, Michael A.; Wright, Catherine W.; Lujan, Lucas A.; Flores, Gregg J.; Saltzstein, Sylvia J.

The data from the multi-modal transportation test conducted in 2017 demonstrated that the inputs from the shock events during all transport modes (truck, rail, and ship) were amplified from the cask to the spent commercial nuclear fuel surrogate assemblies. These data do not support the common assumption that the cask content experiences the same accelerations as the cask itself. This was one of the motivations for conducting the 30 cm drop tests. The goal of the 30 cm drop test is to measure accelerations and strains on the surrogate spent nuclear fuel assembly and to determine whether the fuel rods can maintain their integrity inside a transportation cask when dropped from a height of 30 cm. The 30 cm drop is the remaining NRC normal conditions of transportation regulatory requirement (10 CFR 71.71) for which there are no data on the actual surrogate fuel. Because the full-scale cask and impact limiters were not available (and their cost was prohibitive), it was proposed to achieve this goal by conducting three separate tests. This report describes the first two tests — the 30 cm drop test of the 1/3 scale cask (conducted in December 2018) and the 30 cm drop of the full-scale dummy assembly (conducted in June 2019). The dummy assembly represents the mass of a real spent nuclear fuel assembly. The third test (to be conducted in the spring of 2020) will be the 30 cm drop of the full-scale surrogate assembly. The surrogate assembly represents a real full-scale assembly in physical, material, and mechanical characteristics, as well as in mass.

Data Pallets: Containerizing Storage For Reproducibility and Traceability

Lecture Notes in Computer Science

Lofstead, Gerald F.; Baker, Joshua B.; Younge, Andrew J.

Trusting simulation output is crucial for Sandia’s mission objectives. Here, we rely on these simulations to perform our high-consequence mission tasks given national treaty obligations. Other science and modeling applications, while they may have high-consequence results, still require the strongest levels of trust to enable using the result as the foundation for both practical applications and future research. To this end, the computing community has developed workflow and provenance systems to aid in both automating simulation and modeling execution and determining exactly how some output was created, so that conclusions can be drawn from the data. Current approaches for workflows and provenance systems are all at the user level and have little to no system-level support, making them fragile, difficult to use, and incomplete solutions. The introduction of container technology is a first step towards encapsulating and tracking artifacts used in creating data and resulting insights, but current implementations focus solely on making it easy to deploy an application in an isolated “sandbox” while maintaining a strictly read-only mode to avoid any potential changes to the application. All storage activities still use system-level shared storage. This project explores extending the container concept to include storage as a new container type we call data pallets. Data Pallets are potentially writeable, auto-generated by the system based on I/O activities, and usable as a way to link the contained data back to the application and input deck used to create them.

Making social networks more human: A topological approach

Statistical Analysis and Data Mining

Berry, Jonathan W.

A key problem in social network analysis is to identify nonhuman interactions. State-of-the-art bot-detection systems like Botometer train machine-learning models on user-specific data. Unfortunately, these methods do not work on data sets in which only topological information is available. In this paper, we propose a new, purely topological approach. Our method removes edges that connect nodes exhibiting strong evidence of non-human activity from publicly available electronic-social-network datasets, including, for example, those in the Stanford Network Analysis Project repository (SNAP). Our methodology is inspired by classic work in evolutionary psychology by Dunbar that posits upper bounds on the total strength of the set of social connections in which a single human can be engaged. We model edge strength with Easley and Kleinberg's topological estimate; label nodes as “violators” if the sum of these edge strengths exceeds a Dunbar-inspired bound; and then remove the violator-to-violator edges. We run our algorithm on multiple social networks and show that our Dunbar-inspired bound appears to hold for social networks, but not for nonsocial networks. Our cleaning process classifies 0.04% of the nodes of the Twitter-2010 followers graph as violators, and we find that more than 80% of these violator nodes have Botometer scores of 0.5 or greater. Furthermore, after we remove the roughly 15 million violator-violator edges from the 1.2-billion-edge Twitter-2010 follower graph, 34% of the violator nodes experience a factor-of-two decrease in PageRank. PageRank is a key component of many graph algorithms such as node/edge ranking and graph sparsification. Thus, this artificial inflation would bias algorithmic output, and result in some incorrect decisions based on this output.
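
A minimal networkx sketch of the cleaning procedure as described: score each edge with the Easley-Kleinberg neighborhood-overlap measure of tie strength, flag nodes whose total strength exceeds a Dunbar-inspired bound, and remove only violator-to-violator edges. The generated graph and the numerical bound are placeholders, and the paper's exact strength estimate and threshold may differ.

    import networkx as nx

    def neighborhood_overlap(G, u, v):
        """Easley-Kleinberg tie-strength proxy: shared neighbors / union of neighbors."""
        nu, nv = set(G[u]) - {v}, set(G[v]) - {u}
        union = nu | nv
        return len(nu & nv) / len(union) if union else 0.0

    def remove_violator_edges(G, strength_bound=15.0):
        # Total tie strength per node.
        total = {n: 0.0 for n in G}
        for u, v in G.edges():
            w = neighborhood_overlap(G, u, v)
            total[u] += w
            total[v] += w
        violators = {n for n, s in total.items() if s > strength_bound}
        # Remove only edges that connect two violators.
        to_drop = [(u, v) for u, v in G.edges() if u in violators and v in violators]
        G.remove_edges_from(to_drop)
        return violators, to_drop

    G = nx.barabasi_albert_graph(2000, 10, seed=0)   # stand-in for a social network
    violators, dropped = remove_violator_edges(G)
    print(len(violators), "violator nodes,", len(dropped), "edges removed")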

Two Problems in Knowledge Graph Embedding: Non-Exclusive Relation Categories and Zero Gradients

Proceedings - 2019 IEEE International Conference on Big Data, Big Data 2019

Lee, Kookjin L.; Nur, Nasheen; Park, Noseong; Kang, Hyunjoong; Kwon, Soonhyeon

Knowledge graph embedding (KGE) learns latent vector representations of named entities (i.e., vertices) and relations (i.e., edge labels) of knowledge graphs. Herein, we address two problems in KGE. First, relations may belong to one or multiple categories, such as functional, symmetric, transitive, reflexive, and so forth; thus, relation categories are not exclusive. Some relation categories cause non-trivial challenges for KGE. Second, we found that zero gradients happen frequently in many translation-based embedding methods such as TransE and its variations. To solve these problems, we propose i) converting a knowledge graph into a bipartite graph, although we do not physically convert the graph but rather use an equivalent trick; ii) using multiple vector representations for a relation; and iii) using a new hinge loss based on energy ratio (rather than energy gap) that does not cause zero gradients. We show that our method significantly improves the quality of embedding.
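
To make the contrast concrete: a conventional gap-based hinge loss for a true triple with energy E^{+} and a corrupted triple with energy E^{-} is \max(0, \gamma + E^{+} - E^{-}), whose gradient vanishes as soon as the margin \gamma is satisfied. One plausible way to write a ratio-based alternative of the kind described here (the paper's exact form may differ) is

    \mathcal{L} = \max\left( 0, \; \gamma - \frac{E^{-}}{E^{+} + \epsilon} \right) ,

which compares the two energies in relative rather than absolute terms.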

Development, Demonstration and Validation of Data-Driven Compact Diode Models for Circuit Simulation and Analysis

Aadithya, Karthik V.; Kuberry, Paul A.; Paskaleva, Biliana S.; Bochev, Pavel B.; Leeson, Kenneth M.; Mar, Alan M.; Mei, Ting M.; Keiter, Eric R.

Compact semiconductor device models are essential for efficiently designing and analyzing large circuits. However, traditional compact model development requires a large amount of manual effort and can span many years. Moreover, inclusion of new physics (e.g., radiation effects) into an existing model is not trivial and may require redevelopment from scratch. Machine Learning (ML) techniques have the potential to automate and significantly speed up the development of compact models. In addition, ML provides a range of modeling options that can be used to develop hierarchies of compact models tailored to specific circuit design stages. In this paper, we explore three such options: (1) table-based interpolation, (2) Generalized Moving Least-Squares, and (3) feedforward Deep Neural Networks, to develop compact models for a p-n junction diode. We evaluate the performance of these "data-driven" compact models by (1) comparing their voltage-current characteristics against laboratory data, and (2) building a bridge rectifier circuit using these devices, predicting the circuit's behavior using SPICE-like circuit simulations, and then comparing these predictions against laboratory measurements of the same circuit.

A mathematical programming approach for the optimal placement of flame detectors in petrochemical facilities

Process Safety and Environmental Protection

Zhen, Todd; Klise, Katherine A.; Cunningham, Sean; Marszal, Edward; Laird, Carl D.

Flame detectors provide an important layer of protection for personnel in petrochemical plants, but effective placement can be challenging. A mixed-integer nonlinear programming formulation is proposed for optimal placement of flame detectors while considering non-uniform probabilities of detection failure. We show that this approach allows for the placement of fire detectors using a fixed sensor budget and outperforms models that do not account for imperfect detection. We develop a linear relaxation to the formulation and an efficient solution algorithm that achieves global optimality with reasonable computational effort. We integrate this problem formulation into the Python package, Chama, and demonstrate the effectiveness of this formulation on a small test case and on two real-world case studies using the fire and gas mapping software, Kenexis Effigy.
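
A generic statement of this class of placement models, with binary variables x_s for placing detector s, scenario weights \alpha_a, the set S_a of detectors able to see fire scenario a, non-uniform failure probabilities q_{s,a}, and a budget of N detectors (this is schematic, not the paper's exact formulation):

    \max_{x \in \{0,1\}^{|S|}} \; \sum_{a \in A} \alpha_a \left( 1 - \prod_{s \in S_a} \bigl( 1 - (1 - q_{s,a}) \, x_s \bigr) \right)
    \quad \text{s.t.} \quad \sum_{s \in S} x_s \le N .

The product term is the probability that every placed detector covering scenario a fails to see it, which is what makes the model a mixed-integer nonlinear program.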

Scalable generation of graphs for benchmarking HPC community-detection algorithms

International Conference for High Performance Computing, Networking, Storage and Analysis, SC

Slota, George M.; Berry, Jonathan W.; Hammond, Simon D.; Olivier, Stephen L.; Phillips, Cynthia A.; Rajamanickam, Sivasankaran R.

Community detection in graphs is a canonical social network analysis method. We consider the problem of generating suites of terascale synthetic social networks to compare the solution quality of parallel community-detection methods. The standard method, based on the graph generator of Lancichinetti, Fortunato, and Radicchi (LFR), has been used extensively for modest-scale graphs, but has inherent scalability limitations. We provide an alternative, based on the scalable Block Two-Level Erdos-Renyi (BTER) graph generator, that enables HPC-scale evaluation of solution quality in the style of LFR. Our approach varies community coherence, and retains other important properties. Our methods can scale real-world networks, e.g., to create a version of the Friendster network that is 512 times larger. With BTER's inherent scalability, we can generate a 15-terabyte graph (4.6B vertices, 925B edges) in just over one minute. We demonstrate our capability by showing that a label-propagation community-detection algorithm can be strong-scaled with negligible solution-quality loss.

Milestone 1261

Trujillo, Gabrielle T.; Trott, Christian R.

Supporting the latest hardware and compiler versions is important to leverage improvements in the software environment and new HPC platforms. We will provide certified support for the latest releases of vendor compilers from Intel, AMD, IBM, NVIDIA, ARM and Cray as well as of open source compilers GCC and Clang.

ECP ST Capability Assessment Report (CAR) for VTK-m (FY19)

Moreland, Kenneth D.

The ECP/VTK-m project is providing the core capabilities to perform scientific visualization on exascale architectures. The ECP/VTK-m project fills the critical feature gap of performing visualization and analysis on accelerator processors such as graphics processors. The results of this project will be delivered in tools like ParaView, VisIt, and Ascent as well as in stand-alone form. Moreover, these projects depend on this ECP effort to make effective use of ECP architectures.

Evaluation of Programming Models to Address Load Imbalance on Distributed Multi-Core CPUs: A Case Study with Block Low-Rank Factorization

Proceedings of PAW-ATM 2019: Parallel Applications Workshop, Alternatives to MPI+X, Held in conjunction with SC 2019: The International Conference for High Performance Computing, Networking, Storage and Analysis

Pei, Yu; Bosilca, George; Yamazaki, Ichitaro Y.; Ida, Akihiro; Dongarra, Jack

To minimize data movement, many parallel applications statically distribute computational tasks among the processes. However, modern simulations often encounter irregular computational tasks whose computational loads change dynamically at runtime or are data dependent. As a result, load imbalance among the processes at each step of simulation is a natural situation that must be dealt with at the programming level. The de facto parallel programming approach, flat MPI (one process per core), is hardly suitable for managing this lack of balance, imposing significant idle time on the simulation as processes have to wait for the slowest process at each step of simulation. One critical application for many domains is the LU factorization of a large dense matrix stored in the Block Low-Rank (BLR) format. Using the low-rank format can significantly reduce the cost of factorization in many scientific applications, including the boundary element analysis of electrostatic fields. However, the partitioning of the matrix based on the underlying geometry leads to different sizes of the matrix blocks, whose numerical ranks change at each step of factorization, leading to load imbalance among the processes at each step of factorization. We use BLR LU factorization as a test case to study the programmability and performance of five different programming approaches: (1) flat MPI, (2) Adaptive MPI (Charm++), (3) MPI + OpenMP, (4) parameterized task graph (PTG), and (5) dynamic task discovery (DTD). The last two versions use a task-based paradigm to express the algorithm; we rely on the PaRSEC runtime system to execute the tasks. We first point out programming features needed to efficiently solve this category of problems, hinting at possible alternatives to the MPI+X programming paradigm. We then evaluate the programmability of the different approaches, detailing our experience implementing the algorithm using each of the models. Finally, we show the performance results on the Intel Haswell-based Bridges system at the Pittsburgh Supercomputing Center (PSC) and analyze the effectiveness of the implementations in addressing the load imbalance.

A dynamic, unified design for dedicated message matching engines for collective and point-to-point communications

Parallel Computing

Ghazimirsaeed, S.M.; Grant, Ryan E.; Afsahi, Ahmad

The Message Passing Interface (MPI) libraries use message queues to guarantee correct message ordering between communicating processes. Message queues are in the critical path of MPI communications and thus, the performance of message queue operations can have significant impact on the performance of applications. Collective communications are widely used in MPI applications and they can have considerable impact on generating long message queues. In this paper, we propose a unified message matching mechanism that improves the message queue search time by distinguishing messages coming from point-to-point and collective communications and using a distinct message queue data structure for them. For collective operations, it dynamically profiles the impact of each collective call on message queues during the application runtime and uses this information to adapt the message queue data structure for each collective dynamically. Moreover, we use a partner/non-partner message queue data structure for the messages coming from point-to-point communications. The proposed approach can successfully reduce the queue search time while maintaining scalable memory consumption. The evaluation results show that we can obtain up to 5.5x runtime speedup for applications with long list traversals. Moreover, we can gain up to 15% and 94% queue search time improvement for all elements in applications with short and medium list traversals, respectively.

Fine-Grained Analysis of Communication Similarity between Real and Proxy Applications

Proceedings of PMBS 2019: Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems - Held in conjunction with SC 2019: The International Conference for High Performance Computing, Networking, Storage and Analysis

Aaziz, Omar R.; Vaughan, Courtenay T.; Cook, Jonathan E.; Cook, Jeanine C.; Kuehn, Jeffery; Richards, David

In this work we investigate the dynamic communication behavior of parent and proxy applications, and investigate whether or not the dynamic communication behavior of the proxy matches that of its respective parent application. The idea of proxy applications is that they should match their parent well, and should exercise the hardware and perform similarly, so that from them lessons can be learned about how the HPC system and the application can best be utilized. We show here that some proxy/parent pairs do not need the extra detail of dynamic behavior analysis, while others can benefit from it, and through this we also identified a parent/proxy mismatch and improved the proxy application.

Enabling HPC Workloads on Cloud Infrastructure Using Kubernetes Container Orchestration Mechanisms

Proceedings of CANOPIE-HPC 2019: 1st International Workshop on Containers and New Orchestration Paradigms for Isolated Environments in HPC - Held in conjunction with SC 2019: The International Conference for High Performance Computing, Networking, Storage and Analysis

Beltre, Angel M.; Saha, Pankaj; Govindaraju, Madhusudhan; Grant, Ryan E.; Younge, Andrew J.

Containers offer a broad array of benefits, including a consistent lightweight runtime environment through OS-level virtualization, as well as low overhead to maintain and scale applications with high efficiency. Moreover, containers are known to package and deploy applications consistently across varying infrastructures. Container orchestrators manage a large number of containers for microservices based cloud applications. However, the use of such service orchestration frameworks towards HPC workloads remains relatively unexplored. In this paper we study the potential use of Kubernetes on HPC infrastructure for use by the scientific community. We directly compare both its features and performance against Docker Swarm and bare metal execution of HPC applications. Herein, we detail the configurations required for Kubernetes to operate with containerized MPI applications, specifically accounting for operations such as (1) underlying device access, (2) inter-container communication across different hosts, and (3) configuration limitations. This evaluation quantifies the performance difference between representative MPI workloads running both on bare metal and containerized orchestration frameworks with Kubernetes, operating over both Ethernet and InfiniBand interconnects. Our results show that Kubernetes and Docker Swarm can achieve near bare metal performance over RDMA communication when high performance transports are enabled. Our results also show that Kubernetes presents overheads for several HPC applications over TCP/IP protocol. However, Docker Swarm's throughput is near bare metal performance for the same applications.

A Case for Portability and Reproducibility of HPC Containers

Proceedings of CANOPIE-HPC 2019: 1st International Workshop on Containers and New Orchestration Paradigms for Isolated Environments in HPC - Held in conjunction with SC 2019: The International Conference for High Performance Computing, Networking, Storage and Analysis

Canon, Richard S.; Younge, Andrew J.

Containerized computing is quickly changing the landscape for the development and deployment of many HPC applications. Containers are able to lower the barrier of entry for emerging workloads to leverage supercomputing resources. However, containers are no silver bullet for deploying HPC software, and there are several challenges ahead that the community must address to ensure container workloads can be reproducible and interoperable. In this paper, we discuss several challenges in utilizing containers for HPC applications and the current approaches used in many HPC container runtimes. These approaches have been proven to enable high-performance execution of containers at scale with the appropriate runtimes. However, the use of these techniques is still ad hoc and tests the limits of container workload portability, and several gaps likely remain. We discuss those remaining gaps and propose several potential solutions, including custom container label tagging and runtime hooks as a first step in managing HPC system library complexity.

Adaptive multi-index collocation for uncertainty quantification and sensitivity analysis

Jakeman, John D.; Eldred, Michael S.; Geraci, G.; Gorodetsky, A.

In this paper, we present an adaptive algorithm to construct response surface approximations of high-fidelity models using a hierarchy of lower fidelity models. Our algorithm is based on multi-index stochastic collocation and automatically balances physical discretization error and response surface error to construct an approximation of model outputs. This surrogate can be used for uncertainty quantification (UQ) and sensitivity analysis (SA) at a fraction of the cost of a purely high-fidelity approach. We demonstrate the effectiveness of our algorithm on a canonical test problem from the UQ literature and a complex multi-physics model that simulates the performance of an integrated nozzle for an unmanned aerospace vehicle. We find that when the input-output response is sufficiently smooth, our algorithm produces approximations that can be up to orders of magnitude more accurate than single-fidelity approximations for a fixed computational budget.

Multi-Level Memory Algorithmics for Large, Sparse Problems

Berry, Jonathan W.; Butcher, Neil; Catalyurek, Umit; Kogge, Peter; Lin, Paul; Olivier, Stephen L.; Phillips, Cynthia A.; Rajamanickam, Sivasankaran R.; Slota, George M.; Voskuilen, Gwendolyn R.; Yasar, Abdurrahman; Young, Jeffrey G.

In this report, we abstract eleven papers published during the project and describe preliminary unpublished results that warrant follow-up work. The topic is multi-level memory algorithmics, or how to effectively use multiple layers of main memory. Modern compute nodes all have this feature in some form.

Fast and Robust Linear Solvers based on Hierarchical Matrices (LDRD Final Report)

Boman, Erik G.; Darve, Eric; Lehoucq, Richard B.; Rajamanickam, Sivasankaran R.; Tuminaro, Raymond S.; Yamazaki, Ichitaro Y.

This report is the final report for the LDRD project "Fast and Robust Linear Solvers using Hierarchical Matrices". The project was a success. We developed two novel algorithms for solving sparse linear systems. We demonstrated their effectiveness on ill-conditioned linear systems from ice sheet simulations. We showed that in many cases, we can obtain near-linear scaling. We believe this approach has strong potential for difficult linear systems and should be considered for other Sandia and DOE applications. We also report on some related research activities in dense solvers and randomized linear algebra.

A robust hierarchical solver for ill-conditioned systems with applications to ice sheet modeling

Journal of Computational Physics

Chen, Chao; Cambier, Leopold; Boman, Erik G.; Rajamanickam, Sivasankaran R.; Tuminaro, Raymond S.; Darve, Eric

A hierarchical solver is proposed for solving sparse ill-conditioned linear systems in parallel. The solver is based on a modification of the LoRaSp method, but employs a deferred-compression technique, which provably reduces the approximation error and significantly improves efficiency. Moreover, the deferred-compression technique introduces minimal overhead and does not affect parallelism. As a result, the new solver achieves linear computational complexity under mild assumptions and excellent parallel scalability. To demonstrate the performance of the new solver, we focus on applying it to solve sparse linear systems arising from ice sheet modeling. The strong anisotropic phenomena associated with the thin structure of ice sheets create serious challenges for existing solvers. To address the anisotropy, we additionally developed a customized partitioning scheme for the solver, which captures the strong-coupling direction accurately. In general, the partitioning can be computed algebraically with existing software packages, and thus the new solver is generalizable for solving other sparse linear systems. Our results show that ice sheet problems of about 300 million degrees of freedom have been solved in just a few minutes using 1024 processors.

SECURE: An Evidence-based Approach to Cyber Experimentation

Proceedings - 2019 Resilience Week, RWS 2019

Pinar, Ali P.; Benz, Zachary O.; Castillo, Anya; Hart, William E.; Swiler, Laura P.; Tarman, Thomas D.

Securing cyber systems is of paramount importance, but rigorous, evidence-based techniques to support decision makers for high-consequence decisions have been missing. The need for bringing rigor into cybersecurity is well-recognized, but little progress has been made over the last decades. We introduce a new project, SECURE, that aims to bring more rigor into cyber experimentation. The core idea is to follow the footsteps of computational science and engineering and expand similar capabilities to support rigorous cyber experimentation. In this paper, we review the cyber experimentation process, present the research areas that underlie our effort, discuss the underlying research challenges, and report on our progress to date. This paper is based on work in progress, and we expect to have more complete results for the conference.

More Details

Formulation, analysis, and computation of an optimization-based local-to-nonlocal coupling method

D'Elia, Marta D.; Bochev, Pavel B.

We present an optimization-based coupling method for local and nonlocal continuum models. Our approach casts the coupling of the models as a control problem in which the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of Local-to-Nonlocal diffusion coupling. Numerical examples illustrate the theoretical properties of the approach.
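
Schematically, and with notation assumed here rather than taken from the paper (Omega_n and Omega_l the nonlocal and local subdomains, Omega_o their overlap, L_delta a nonlocal diffusion operator, theta_n and theta_l the virtual controls), the control problem described above can be written in LaTeX as:

    \begin{aligned}
    \min_{\theta_n,\,\theta_l}\quad & \tfrac{1}{2}\,\| u_n - u_l \|_{L^2(\Omega_o)}^2 \\
    \text{subject to}\quad & \mathcal{L}_\delta u_n = f \ \text{in } \Omega_n,
      \qquad u_n = \theta_n \ \text{on the nonlocal volume-constraint region}, \\
    & -\Delta u_l = f \ \text{in } \Omega_l,
      \qquad u_l = \theta_l \ \text{on } \partial\Omega_l .
    \end{aligned}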

More Details

An error estimation driven adaptive tetrahedral workflow for full engineering models

Foulk III, James W.; Granzow, Brian N.; Mota, Alejandro M.; Ibanez-Granados, Daniel A.

Tetrahedral finite element workflows have the potential to drastically reduce time to solution for computational solid mechanics simulations when compared to traditional hexahedral finite element analogues. A recently developed higher-order composite tetrahedral element has shown promise in the space of incompressible computational plasticity. Mesh adaptivity has the potential to increase both solution accuracy and solution robustness. In this work, we demonstrate an initial strategy to perform conformal mesh adaptivity for this higher-order composite tetrahedral element using well-established mesh modification operations for linear tetrahedra. We propose potential extensions to improve this initial strategy in terms of robustness and accuracy.

More Details

Investigations of irradiation effects in crystalline and amorphous SiC

Journal of Applied Physics

Cowen, Benjamin J.; El-Genk, Mohamed S.; Hattar, Khalid M.; Briggs, Samuel A.

The effects of irradiation on 3C-silicon carbide (SiC) and amorphous SiC (a-SiC) are investigated using both in situ transmission electron microscopy (TEM) and complementary molecular dynamics (MD) simulations. The single ion strikes identified in the in situ TEM irradiation experiments, utilizing a 1.7 MeV Au3+ ion beam with nanosecond resolution, are contrasted with MD simulation results of the defect cascades produced by 10-100 keV Si primary knock-on atoms (PKAs). The MD simulations also investigated defect structures that could be responsible for the observed strain fields produced by single ion strikes in the TEM ion beam irradiation experiments. Both MD simulations and in situ TEM experiments show evidence of radiation damage in 3C-SiC but none in a-SiC. Selected area electron diffraction patterns, based on the results of MD simulations and in situ TEM irradiation experiments, show no evidence of structural changes in either 3C-SiC or a-SiC.

More Details

Deep Conservation: A latent dynamics model for exact satisfaction of physical conservation laws [Report]

Lee, Kookjin L.; Carlberg, Kevin

This work proposes an approach for latent dynamics learning that exactly enforces physical conservation laws. The method comprises two steps. First, we compute a low-dimensional embedding of the high-dimensional dynamical-system state using deep convolutional autoencoders. This defines a low-dimensional nonlinear manifold on which the state is subsequently constrained to evolve. Second, we define a latent dynamics model whose time step is given by the solution of a constrained optimization problem. Specifically, the objective function is defined as the sum of squares of conservation-law violations over control volumes in a finite-volume discretization of the problem; nonlinear equality constraints explicitly enforce conservation over prescribed subdomains of the problem. The resulting dynamics model, which can be viewed as a projection-based reduced-order model, ensures that the time evolution of the latent state exactly satisfies conservation laws over the prescribed subdomains. In contrast to existing methods for latent dynamics learning, this is the only method that both employs a nonlinear embedding and computes dynamics for the latent state that guarantee the satisfaction of prescribed physical properties. Numerical experiments on a benchmark advection problem illustrate the method's ability to significantly reduce the dimensionality while enforcing physical conservation.
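
In compressed form, and with symbols introduced here purely for illustration (d the convolutional decoder, x_hat^n the latent state at time step n, r_i the conservation-law residual on control volume i, and rbar_j the aggregated residual on prescribed subdomain j), each latent time step solves a constrained least-squares problem of the type:

    \hat{x}^{\,n+1} \;\in\; \arg\min_{\hat{x}}\ \sum_i \big[\, r_i\big(d(\hat{x});\, d(\hat{x}^{\,n})\big) \,\big]^2
    \qquad \text{subject to} \qquad
    \bar{r}_j\big(d(\hat{x});\, d(\hat{x}^{\,n})\big) = 0, \quad j = 1,\dots,m .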

More Details

Molecular dynamics investigation of threshold displacement energies in CaF2

Computational Materials Science

Morris, Joseph; Cowen, Benjamin J.; Teysseyre, S.; Hecht, Adam A.

Understanding the propagation of radiation damage in a material is paramount to predicting material damage effects. To date, the literature has not investigated the Threshold Displacement Energy (TDE) of Ca and F atoms in CaF2 through molecular dynamics and statistical analysis of simulations. A set of interatomic potentials for Ca-Ca, F-F, and F-Ca interactions was splined from published Born-Mayer-Huggins, standard ZBL, and Coulomb potentials, fully characterizing a pure CaF2 simulation cell whose resulting structure is within 1% of the standard density and published lattice constants. Using this simulation cell, molecular dynamics simulations were performed with LAMMPS: one set randomly generated 500 Ca and F PKA directions for each incremental energy, and another set ran 500 trials per incremental energy in each of the [1 0 0], [1 1 0], and [1 1 1] directions. MD simulations of radiation damage in CaF2 were carried out using F and Ca PKAs with energies ranging from 2 to 200 eV. Probabilistic determinations of the TDE and Threshold Vacancy Energy (TVE) of Ca and F atoms in CaF2 were performed, and vacancy, interstitial, and antisite production rates were examined over the range of PKA energies. Many more F atoms than Ca atoms were displaced for both PKA species, and although F recombination appears more probable than Ca recombination, F vacancy counts remain higher. In conclusion, the higher number of F vacancies than Ca vacancies suggests that F Frenkel pairs dominate damage in CaF2.
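
The probabilistic workflow lends itself to a simple outer loop. The sketch below is an editorial illustration, not the authors' LAMMPS driver; run_cascade is a hypothetical hook standing in for an external MD cascade run. It shows how 500 random PKA directions per energy can be sampled and turned into a defect-production probability curve from which a threshold energy is read off:

    import numpy as np

    rng = np.random.default_rng(12345)

    def random_unit_vectors(n):
        # Directions uniformly distributed on the sphere, used as PKA velocities.
        v = rng.normal(size=(n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    def defect_probability(run_cascade, energies_eV, trials=500):
        """run_cascade(energy_eV, direction) -> True if a stable defect survives.
        Placeholder for an external MD cascade simulation (hypothetical hook)."""
        probs = []
        for E in energies_eV:
            hits = sum(run_cascade(E, d) for d in random_unit_vectors(trials))
            probs.append(hits / trials)
        return np.array(probs)

    def threshold_energy(energies_eV, probs, level=0.5):
        # One possible probabilistic convention: quote the energy at which the
        # defect-production probability first reaches a chosen level (e.g. 50%).
        idx = np.argmax(probs >= level)
        return energies_eV[idx] if probs[idx] >= level else None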

More Details

A Fast Solver for the Fractional Helmholtz Equation

Glusa, Christian A.; D'Elia, Marta D.; Antil, Harbir; Weiss, Chester J.; van Bloemen Waanders, Bart G.

The purpose of this paper is to study a Helmholtz problem with a spectral fractional Laplacian, instead of the standard Laplacian. Recently, it has been established that such a fractional Helmholtz problem better captures the underlying behavior in geophysical electromagnetics. We establish the well-posedness and regularity of this problem. We introduce a hybrid finite element-spectral approach to discretize it and show well-posedness of the discrete system. In addition, we derive a priori discretization error estimates. Finally, we introduce an efficient solver that scales as well as the best possible solver for the classical integer-order Helmholtz equation. We conclude with several illustrative examples that confirm our theoretical findings.
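
For orientation only (standard definitions, not the paper's specific formulation; the wavenumber k, the exponent s, and homogeneous Dirichlet conditions are assumptions made here), the spectral fractional Laplacian on a bounded domain Omega is defined through the Dirichlet eigenpairs (lambda_j, phi_j) of the Laplacian, and a fractional Helmholtz problem then reads, in LaTeX:

    (-\Delta)^{s} u \;=\; \sum_{j=1}^{\infty} \lambda_j^{\,s}\, (u, \varphi_j)_{L^2(\Omega)}\, \varphi_j,
    \qquad 0 < s < 1,

    (-\Delta)^{s} u - k^{2} u = f \ \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega .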

More Details

An Anisotropic Adaptive Voronoi Meshing Method

Ebeida, Mohamed S.

We propose a novel method for generating anisotropic adaptive Voronoi meshes that conform to non-manifold curved boundaries. Our method modifies the sampling rules of the VoroCrust software to bring the VoroCrust seeds closer to the surface they represent. This enables the reconstruction of two surfaces bounding a narrow region while filling the space in between with stretched Voronoi cells.
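
To convey the flavor of such a modified sampling rule (an editorial sketch: the mirrored seed-pair construction and the pull factor alpha are assumptions for illustration, not the VoroCrust implementation), one can place a pair of Voronoi seeds symmetrically across each surface sample and shrink their offset so the reconstructed surface tightens around thin regions:

    import numpy as np

    def mirrored_seeds(points, normals, radii, alpha=0.5):
        """For each surface sample (point, unit normal, local sizing radius),
        return a pair of seeds placed symmetrically on either side of the
        surface at a reduced offset alpha * radius. Illustrative only."""
        offset = alpha * radii[:, None] * normals   # (n, 3) offsets along normals
        inside = points - offset                    # seeds just below the surface
        outside = points + offset                   # seeds just above the surface
        return inside, outside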

More Details

Complex Fracture Nucleation and Evolution with Nonlocal Elastodynamics

Journal of Peridynamics and Nonlocal Modeling

Lehoucq, Richard B.; Lipton, Robert P.; Jha, Prashant K.

A mechanical model is introduced for predicting the initiation and evolution of complex fracture patterns without the need for a damage variable or law. The model, a continuum variant of Newton's second law, uses integral rather than partial differential operators, where the region of integration is a finite domain. The force interaction is derived from a novel nonconvex strain energy density function, resulting in a nonmonotonic material model. The resulting equation of motion is proved to be mathematically well-posed. The model has the capacity to simulate nucleation and growth of multiple, mutually interacting dynamic fractures. In the limit of a vanishing integration region, the model reproduces the classic Griffith model of brittle fracture. The simplicity of the formulation avoids the need for supplemental kinetic relations that dictate crack growth or for an explicit damage evolution law.
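
The "continuum variant of Newton's second law" described above has the familiar peridynamics-type form sketched below, with notation assumed here rather than taken from the paper (rho the density, u the displacement, b a body force, H_epsilon(x) the finite neighborhood of radius epsilon over which interactions act, and f the pairwise force density derived from the nonconvex strain energy):

    \rho(x)\, \ddot{u}(x,t) \;=\; \int_{H_{\epsilon}(x)} f\big(u(y,t) - u(x,t),\; y - x\big)\, dy \;+\; b(x,t).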

More Details
Results 1601–1800 of 9,998