Publications

Results 1001–1100 of 9,998

FROSch Preconditioners for Land Ice Simulations of Greenland and Antarctica

Heinlein, Alexander; Perego, Mauro P.; Rajamanickam, Sivasankaran R.

Numerical simulations of Greenland and Antarctic ice sheets involve the solution of large-scale, highly nonlinear systems of equations on complex shallow geometries. This work is concerned with the construction of Schwarz preconditioners for the solution of the associated tangent problems, which are challenging for solvers mainly because of the strong anisotropy of the meshes and wildly changing boundary conditions that can lead to poorly constrained problems on large portions of the domain. Here, two-level GDSW (Generalized Dryja–Smith–Widlund) type Schwarz preconditioners are applied to different land ice problems: a velocity problem, a temperature problem, and the coupling of the two. We employ the MPI-parallel implementation of multilevel Schwarz preconditioners provided by the package FROSch (Fast and Robust Schwarz) from the Trilinos library. The strength of the proposed approach is that it yields out-of-the-box scalable and robust preconditioners for the single-physics problems. To our knowledge, this is the first time two-level Schwarz preconditioners have been applied to the ice sheet problem and a scalable preconditioner has been used for the coupled problem. The preconditioner for the coupled problem differs from previous monolithic GDSW preconditioners in that decoupled extension operators are used to compute the values in the interior of the subdomains. Several approaches for improving the performance, such as reuse strategies and shared-memory OpenMP parallelization, are explored as well. Our numerical study targets both uniform meshes of varying resolution for the Antarctic ice sheet and nonuniform meshes for the Greenland ice sheet. We present several weak and strong scaling studies confirming the robustness of the approach and the parallel scalability of the FROSch implementation. Highlights of the numerical results include a weak scaling study for up to 32K processor cores (8K MPI ranks, each with 4 OpenMP threads) and 566M degrees of freedom for the velocity problem, as well as a strong scaling study for up to 4K processor cores (and MPI ranks) and 68M degrees of freedom for the coupled problem.
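
To make the method class concrete, the sketch below shows the generic action of a two-level additive Schwarz preconditioner: one coarse correction plus independent solves on overlapping subdomains. It is a minimal illustration under assumed inputs (the overlapping index sets and a coarse basis Phi, which GDSW constructs from energy-minimizing extensions of interface values), not the FROSch implementation.

```python
# Minimal sketch: M^{-1} r = Phi (Phi^T A Phi)^{-1} Phi^T r
#                          + sum_i R_i^T (R_i A R_i^T)^{-1} R_i r
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def two_level_asm(A, subdomains, Phi):
    """subdomains: list of overlapping DOF index arrays; Phi: (n, m) coarse basis."""
    A = sp.csr_matrix(A)
    local_solves = []
    for dofs in subdomains:
        Ai = A[dofs, :][:, dofs].tocsc()          # local subdomain matrix
        local_solves.append((dofs, spla.factorized(Ai)))
    A0 = sp.csc_matrix(Phi.T @ (A @ Phi))         # Galerkin coarse operator
    coarse_solve = spla.factorized(A0)

    def apply(r):
        z = Phi @ coarse_solve(Phi.T @ r)         # coarse-level correction
        for dofs, solve in local_solves:          # additive local corrections
            z[dofs] += solve(r[dofs])
        return z

    return spla.LinearOperator(A.shape, matvec=apply)
```

The returned operator can be passed as the preconditioner M to a Krylov solver such as scipy.sparse.linalg.gmres.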

Using Computation Effectively for Scalable Poisson Tensor Factorization: Comparing Methods beyond Computational Efficiency

2021 IEEE High Performance Extreme Computing Conference, HPEC 2021

Myers, Jeremy M.; Dunlavy, Daniel D.

Poisson Tensor Factorization (PTF) is an important data analysis method for analyzing patterns and relationships in multiway count data. In this work, we consider several algorithms for computing a low-rank PTF of tensors with sparse count data values via maximum likelihood estimation. Such an approach reduces to solving a nonlinear, non-convex optimization problem, which can leverage considerable parallel computation due to the structure of the problem. However, since the maximum likelihood estimator corresponds to the global minimizer of this optimization problem, it is important to consider how effective methods are both at leveraging this inherent parallelism and at computing a good approximation to the global minimizer. We present comparisons of multiple methods for PTF that illustrate the trade-offs between computational efficiency and accuracy in computing the maximum likelihood estimator. We present results using synthetic and real-world data tensors to demonstrate some of the challenges when choosing a method for a given tensor.
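
As background, PTF fits a rank-R CP model whose entries are Poisson means, m_i = sum_r prod_k U_k[i_k, r], by minimizing the Poisson negative log-likelihood. A minimal sketch of that objective for a sparse count tensor follows; the data layout and names are assumptions for illustration, not the implementations compared in the paper (CP-APR-style methods minimize this quantity, up to constants, over the factor matrices).

```python
# Poisson negative log-likelihood of a rank-R CP model on a sparse
# count tensor: the sum of all model entries minus the data-weighted
# log of the model at the stored nonzeros (constant log(x!) dropped).
import numpy as np

def poisson_nll(indices, values, factors):
    """indices: (nnz, d) coordinates; values: (nnz,) counts;
    factors: list of d nonnegative factor matrices, each (n_k, R)."""
    rows = np.ones((indices.shape[0], factors[0].shape[1]))
    for k, U in enumerate(factors):
        rows *= U[indices[:, k], :]        # Khatri-Rao rows at nonzeros
    m = rows.sum(axis=1)                   # model values at nonzeros
    col = np.ones(factors[0].shape[1])
    for U in factors:
        col *= U.sum(axis=0)               # sum of model over full tensor
    return col.sum() - np.sum(values * np.log(m + 1e-300))
```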

Error estimates for the optimal control of a parabolic fractional PDE

SIAM Journal on Numerical Analysis

Glusa, Christian A.; Otarola, Enrique

We consider the integral definition of the fractional Laplacian and analyze a linear-quadratic optimal control problem for the so-called fractional heat equation; control constraints are also considered. We derive existence and uniqueness results, first-order optimality conditions, and regularity estimates for the optimal variables. To discretize the state equation we propose a fully discrete scheme that relies on an implicit finite difference discretization in time combined with a piecewise linear finite element discretization in space. We derive stability results and a novel L2(0,T;L2(Ω)) a priori error estimate. On the basis of the aforementioned solution technique, we propose a fully discrete scheme for our optimal control problem that discretizes the control variable with piecewise constant functions, and we derive a priori error estimates for it. We illustrate the theory with one- and two-dimensional numerical experiments.
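
The problem class treated above can be stated compactly. The notation below (desired state u_d, regularization weight alpha, admissible control set Z_ad) is assumed for illustration:

```latex
% Assumed notation: u_d desired state, alpha > 0, Z_ad admissible controls.
\min_{z \in Z_{\mathrm{ad}}}\; J(u,z)
  = \frac{1}{2}\,\| u - u_d \|_{L^2(0,T;L^2(\Omega))}^2
  + \frac{\alpha}{2}\,\| z \|_{L^2(0,T;L^2(\Omega))}^2
\quad\text{subject to}\quad
\partial_t u + (-\Delta)^s u = z \ \ \text{in } \Omega \times (0,T),
\qquad
u = 0 \ \ \text{in } (\mathbb{R}^n \setminus \Omega) \times (0,T),
\qquad
u(\cdot,0) = u_0, \qquad s \in (0,1).
```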

Scalable3-BO: Big data meets HPC - A scalable asynchronous parallel high-dimensional Bayesian optimization framework on supercomputers

Proceedings of the ASME Design Engineering Technical Conference

Laros, James H.

Bayesian optimization (BO) is a flexible and powerful framework that is suitable for computationally expensive simulation-based applications and guarantees statistical convergence to the global optimum. While it remains one of the most popular optimization methods, its capability is hindered by the size of the data, the dimensionality of the considered problem, and the sequential nature of the optimization. These scalability issues are intertwined and must be tackled simultaneously. In this work, we propose the Scalable3-BO framework, which employs a sparse Gaussian process as the underlying surrogate model to cope with big data and is equipped with a random embedding to efficiently optimize high-dimensional problems with low effective dimensionality. The Scalable3-BO framework is further equipped with an asynchronous parallelization feature, which fully exploits the computational resources of an HPC platform within a given computational budget. As a result, the proposed Scalable3-BO framework is scalable along three independent axes: data size, dimensionality, and computational resources on HPC platforms. The goal of this work is to push the frontiers of BO beyond its well-known scalability issues and minimize the wall-clock waiting time for optimizing high-dimensional, computationally expensive applications. We demonstrate the capability of Scalable3-BO with 1 million data points and 10,000-dimensional problems, using 20 concurrent workers in an HPC environment.
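
A minimal sketch of the random-embedding idea mentioned in the abstract: draw a fixed random matrix A in R^(D x d) and run the optimization loop in the low-dimensional y-space, evaluating the expensive objective at the embedded point. All names are illustrative and the acquisition step is replaced by random search; this is not the Scalable3-BO API.

```python
# Random-embedding sketch: optimize in R^d, evaluate in R^D.
import numpy as np

rng = np.random.default_rng(0)
D, d = 10_000, 10                        # ambient vs. effective dimension
A = rng.standard_normal((D, d))          # fixed random embedding matrix

def embed(y):
    """Map a low-dimensional candidate into the full design box."""
    return np.clip(A @ y, -1.0, 1.0)

def expensive_f(x):                      # stand-in for a costly simulation
    return float(np.sum((x - 0.1) ** 2))

# A BO loop would fit a sparse GP on (y, f) pairs and maximize an
# acquisition function over y; random search stands in for that here.
Y = rng.uniform(-1.0, 1.0, size=(50, d))
vals = [expensive_f(embed(y)) for y in Y]
y_best = Y[int(np.argmin(vals))]
```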

Proctor: A Semi-Supervised Performance Anomaly Diagnosis Framework for Production HPC Systems

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Aksar, Burak; Zhang, Yijia; Ates, Emre; Schwaller, Benjamin S.; Aaziz, Omar R.; Leung, Vitus J.; Brandt, James M.; Egele, Manuel; Coskun, Ayse K.

Performance variation diagnosis in High-Performance Computing (HPC) systems is a challenging problem due to the size and complexity of the systems. Application performance variation leads to premature termination of jobs, decreased energy efficiency, or wasted computing resources. Manual root-cause analysis of performance variation based on system telemetry has become an increasingly time-intensive process, as it relies on human experts and as the volume of telemetry data has grown. Recent methods use supervised machine learning models to automatically diagnose previously encountered performance anomalies in compute nodes. However, supervised machine learning models require large labeled data sets for training. This labeled data requirement is restrictive for many real-world application domains, including HPC systems, because collecting labeled data is challenging and time-consuming, especially for anomalies that occur only sparsely. This paper proposes a novel semi-supervised framework that diagnoses previously encountered performance anomalies in HPC systems using a limited number of labeled data points, which is more suitable for production system deployment. Our framework first learns the characteristics of performance anomalies from historical telemetry data in an unsupervised fashion. We then leverage supervised classifiers to identify anomaly types. While most semi-supervised approaches do not typically use anomalous samples, our framework takes advantage of a few labeled anomalous samples to classify anomaly types. We evaluate our framework on a production HPC system and on a testbed HPC cluster. We show that our proposed framework achieves a 60% F1-score on average, outperforming state-of-the-art supervised methods by 11%, and maintains an average 0.06% anomaly miss rate.
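
The two-stage structure described above can be sketched as follows, with stock scikit-learn estimators standing in for the framework's actual representation learner and classifiers; the data shapes and estimator choices are assumptions, not the Proctor implementation.

```python
# Two-stage sketch: unsupervised representation on unlabeled telemetry,
# then a supervised classifier on the few labeled samples.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_unlabeled = rng.random((5_000, 64))   # plentiful unlabeled feature windows
X_labeled = rng.random((200, 64))       # scarce labeled windows
y_labeled = rng.integers(0, 4, 200)     # 0 = healthy, 1..3 = anomaly types

rep = PCA(n_components=8).fit(X_unlabeled)        # unsupervised stage
clf = RandomForestClassifier(random_state=0)
clf.fit(rep.transform(X_labeled), y_labeled)      # supervised stage
print(clf.predict(rep.transform(X_labeled[:5])))
```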

Inelastic equation of state for solids

Computer Methods in Applied Mechanics and Engineering

Sanchez, Jason J.

In this study, a complete inelastic equation of state (IEOS) for solids is developed based on a superposition of thermodynamic energy potentials. The IEOS allows for a tensorial stress state by including an isochoric hyperelastic Helmholtz potential in addition to the zero-kelvin isotherm and lattice vibration energy contributions. Inelasticity is introduced through the nonlinear equations of finite strain plasticity which utilize the temperature dependent Johnson–Cook yield model. Material failure is incorporated into the model by a coupling of the damage history variable to the energy potentials. The numerical evaluation of the IEOS requires a nonlinear solution of stress, temperature and history variables associated with elastic trial states for stress and temperature. The model is implemented into the ALEGRA shock and multi-physics code and the applications presented include single element deformation paths, the Taylor anvil problem and an energetically driven thermo-mechanical problem.

A framework to evaluate IMEX schemes for atmospheric models

Geoscientific Model Development (Online)

Guba, Oksana G.; Taylor, Mark A.; Bradley, Andrew M.; Bosler, Peter A.; Steyer, Andrew S.

We present a new evaluation framework for implicit-explicit (IMEX) Runge–Kutta time-stepping schemes. The new framework uses a linearized nonhydrostatic system of normal modes. We utilize the framework to investigate the stability of IMEX methods and their dispersion and dissipation of gravity, Rossby, and acoustic waves. We test the new framework on a variety of IMEX schemes and use it to develop and analyze a set of second-order low-storage IMEX Runge–Kutta methods with a high Courant–Friedrichs–Lewy (CFL) number. We show that the new framework is more selective than the 2-D acoustic system previously used in the literature. Schemes that are stable for the 2-D acoustic system are not stable for the system of normal modes.
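
For orientation, the simplest member of the IMEX Runge–Kutta family treats the nonstiff tendency explicitly and the stiff (e.g., acoustic) linear term implicitly. The first-order sketch below shows only this splitting; the schemes analyzed in the paper are higher-order, low-storage variants.

```python
# First-order IMEX (forward-backward) Euler for u' = N(u) + L u:
# nonstiff N explicit, stiff linear L implicit.
import numpy as np

def imex_euler_step(u, dt, N, L):
    """Solve (I - dt*L) u_next = u + dt*N(u)."""
    I = np.eye(L.shape[0])
    return np.linalg.solve(I - dt * L, u + dt * N(u))

# Example: fast linear damping (implicit) plus a mild nonlinearity.
L = np.diag([-1.0e4, -2.0e4])
u = np.array([1.0, 1.0])
u = imex_euler_step(u, 1.0e-2, lambda v: 0.1 * np.sin(v), L)
```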

Extreme Scale Infrasound Inversion and Prediction for Weather Characterization and Acute Event Detection

van Bloemen Waanders, Bart G.; Ober, Curtis C.

Accurate and timely weather predictions are critical to many aspects of society, with a profound impact on our economy, general well-being, and national security. In particular, our ability to forecast severe weather systems is necessary to avoid injuries and fatalities, but it is also important for minimizing infrastructure damage and maximizing mitigation strategies. The weather community has developed a range of sophisticated numerical models that are executed at various spatial and temporal scales in an attempt to issue global, regional, and local forecasts in pseudo real time. The accuracy, however, depends on the time period of the forecast, the nonlinearities of the dynamics, and the target spatial resolution. Significant uncertainties plague these predictions, including errors in initial conditions, material properties, data, and model approximations. To address these shortcomings, data are collected continuously, at an effort level even larger than that of the modeling process. It has been demonstrated that the accuracy of the predictions depends on the quality of the data and is, to a certain extent, independent of the sophistication of the numerical models. Data assimilation has therefore become one of the most critical steps in the overall weather prediction enterprise, and substantial improvements in the quality of the data would have transformational benefits. This paper describes the use of infrasound inversion technology, enabled through exascale computing, that could potentially achieve orders-of-magnitude improvement in data quality and therefore transform weather predictions, with significant impact on many aspects of our society.

The Hardware of Smaller Clusters (V.3.0)

Lacy, Susan L.; Brightwell, Ronald B.

Chris Saunders and three technologists are in high demand from Sandia’s deep learning teams, and they’re kept busy building new clusters of computer nodes for researchers who need the power of supercomputing on a smaller scale. Sandia researchers working on Laboratory Directed Research & Development (LDRD) projects, or on innovative ideas for solutions on short timeframes, formulate new ideas on old themes and frequently rely on smaller cluster machines to help solve problems before introducing their code to larger HPC resources. These research teams need an agile hardware and software environment where nascent ideas can be tested and cultivated on a smaller scale.

Review of the Carbon Capture Multidisciplinary Science Center (CCMSC) at the University of Utah (2017)

Hoekstra, Robert J.; Malone, C.M.; Montoya, D.R.; Ferencz, M.R.; Kuhl, A.L.; Wagner, J.

The review was conducted on May 8-9, 2017 at the University of Utah. Overall the review team was impressed with the work presented and found that the CCMSC had met or exceeded the Year 3 milestones. Specific details, comments, and recommendations are included in this document.

Efficient, Predictive Tomography of Multi-Qubit Quantum Processors

Blume-Kohout, Robin J.; Nielsen, Erik N.; Rudinger, Kenneth M.; Sarovar, Mohan S.; Young, Kevin C.

After decades of R&D, quantum computers comprising more than 2 qubits are appearing. If this progress is to continue, the research community requires a capability for precise characterization (“tomography”) of these enlarged devices, which will enable benchmarking, improvement, and finally certification as mission-ready. As world leaders in characterization, whose gate set tomography (GST) method is the current state of the art, the project team is keenly aware that every existing protocol is either (1) catastrophically inefficient for more than 2 qubits, or (2) not rich enough to predict device behavior. GST scales poorly, while the popular randomized benchmarking technique only measures a single aggregated error probability. This project explored a new insight: that the combinatorial explosion plaguing standard GST could be avoided by using an ansatz of few-qubit interactions to build a complete, efficient model for multi-qubit errors. We developed this approach, prototyped it, and tested it on a cutting-edge quantum processor developed by Rigetti Quantum Computing (RQC), a US-based startup. We implemented our new models within Sandia’s PyGSTi open-source code and tested them experimentally on the RQC device by probing crosstalk. We found two major results: first, our schema worked and is viable for further development; second, while the Rigetti device is indeed a “real” 8-qubit quantum processor, its behavior fluctuated significantly over time while we were experimenting with it, and this drift made it difficult to fit our models of crosstalk to the data.

MFNets: Multifidelity data-driven networks for Bayesian learning and prediction

International Journal for Uncertainty Quantification

Gorodetsky, Alex; Jakeman, John D.; Geraci, Gianluca G.; Eldred, Michael S.

This paper presents a multifidelity uncertainty quantification framework called MFNets. We seek to address three existing challenges that arise when experimental and simulation data from different sources are used to enhance statistical estimation and prediction with quantified uncertainty. Specifically, we demonstrate that MFNets can (1) fuse heterogeneous data sources arising from simulations with different parameterizations, e.g., simulation models with different uncertain parameters or data sets collected under different environmental conditions; (2) encode known relationships among data sources to reduce data requirements; and (3) improve the robustness of existing multifidelity approaches to corrupted data. MFNets construct a network of latent variables (LVs) to facilitate the fusion of data from an ensemble of sources of varying credibility and cost. These LVs are posited as explanatory variables that provide the source of correlation in the observed data. Furthermore, MFNets provide a way to encode prior physical knowledge to enable efficient estimation of statistics and/or construction of surrogates via conditional independence relations on the LVs. We highlight the utility of our framework with a number of theoretical results that assess the quality of the posterior mean as a frequentist estimator and compare it to standard sampling approaches that use single-fidelity, multilevel, and control variate Monte Carlo estimators. We also use the proposed framework to derive the Monte Carlo-based control variate estimator entirely from the use of Bayes' rule and linear-Gaussian models, to our knowledge the first such derivation. Finally, we demonstrate the ability to work with different uncertain parameters across different models.
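
For context, the classical control variate estimator that the abstract re-derives from Bayes' rule looks as follows; the toy models and the assumption that the low-fidelity mean mu_g is known are illustrative.

```python
# Classical control variate Monte Carlo estimator of E[f], using a
# correlated low-fidelity model g with (assumed) known mean mu_g.
import numpy as np

def control_variate_mean(f_samples, g_samples, mu_g):
    cov = np.cov(f_samples, g_samples)
    beta = cov[0, 1] / cov[1, 1]          # variance-optimal coefficient
    return f_samples.mean() - beta * (g_samples.mean() - mu_g)

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000)
f = x**2 + 0.1 * rng.standard_normal(1_000)   # "high-fidelity" samples
g = x**2                                      # correlated low-fidelity model
print(control_variate_mean(f, g, mu_g=1.0))   # E[x^2] = 1 for x ~ N(0, 1)
```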

Scalable asynchronous domain decomposition solvers

SIAM Journal on Scientific Computing

Glusa, Christian A.; Boman, Erik G.; Chow, Edmond; Rajamanickam, Sivasankaran R.; Szyld, Daniel B.

Parallel implementations of linear iterative solvers generally alternate between phases of data exchange and phases of local computation. Increasingly large problem sizes and more heterogeneous compute architectures make load balancing and the design of low-latency network interconnects that are able to satisfy the communication requirements of linear solvers very challenging tasks. In particular, global communication patterns such as inner products become increasingly limiting at scale. We explore the use of asynchronous communication based on one-sided Message Passing Interface primitives in the context of domain decomposition solvers. Specifically, a scalable asynchronous two-level Schwarz method is presented. We discuss practical issues encountered in the development of a scalable solver and show experimental results obtained on a state-of-the-art supercomputer system that illustrate the benefits of asynchronous solvers in load-balanced as well as load-imbalanced scenarios. Using the novel method, we observe speedups of up to four times over the classical synchronous equivalent.

Suppression of helium bubble nucleation in beryllium exposed tungsten surfaces

Nuclear Fusion

Cusentino, Mary A.; Wood, Mitchell A.; Thompson, Aidan P.

One of the most severe obstacles to increasing the longevity of tungsten-based plasma-facing components, such as divertor tiles, is the surface deterioration driven by sub-surface helium bubble formation and rupture. Supported by experimental observations at PISCES, this work uses molecular dynamics simulations to identify the microscopic mechanisms underlying the suppression of helium bubble formation by the introduction of plasma-borne beryllium. Simulations of the initial surface material (crystalline W), early-time Be exposure (amorphous W-Be), and final WBe2 intermetallic surfaces were used to highlight the effect of Be. Significant differences in He retention, depth distribution, and cluster size were observed in the cases with beryllium present. Helium resided much closer to the surface in the Be cases, with nearly 80% of the total helium inventory located within the first 2 nm. Moreover, coarsening of the He depth profile due to bubble formation is suppressed, owing to a hundredfold decrease in He mobility in WBe2 relative to crystalline W. This is further evidenced by the drastic reduction in He cluster sizes, even though both the amorphous W-Be and WBe2 intermetallic phases were observed to retain nearly twice as much He during cumulative implantation studies.

On the peridynamic effective force state and multiphase constitutive correspondence principle

Journal of the Mechanics and Physics of Solids

Song, Xiaoyu; Silling, Stewart A.

This article concerns modeling unsaturated deformable porous media as an equivalent single-phase, single-force-state peridynamic material through the effective force state. The balance equations of linear momentum and mass of unsaturated porous media are presented by defining relevant peridynamic states. The energy balance of unsaturated porous media is utilized to derive the effective force state for the solid skeleton, which is energy-conjugate to the nonlocal deformation state of the solid, as well as the suction force state. Through an energy equivalence, a multiphase constitutive correspondence principle is built between classical unsaturated poromechanics and peridynamic unsaturated poromechanics. The multiphase correspondence principle provides a means to incorporate advanced constitutive models from classical unsaturated porous media theory directly into unsaturated peridynamic poromechanics. Numerical simulations of localized failure in unsaturated porous media under different matric suctions are presented to demonstrate the feasibility of modeling the mechanical behavior of such three-phase materials as an equivalent single-phase peridynamic material through the effective force state concept.

Non-destructive simulation of node defects in additively manufactured lattice structures

Additive Manufacturing

Lozanovski, Bill; Downing, David; Tino, Rance; Du Plessis, Anton; Tran, Phuong; Jakeman, John D.; Shidid, Darpan; Emmelmann, Claus; Qian, Ma; Choong, Peter; Brandt, Milan; Leary, Martin

Additive Manufacturing (AM), commonly referred to as 3D printing, offers the ability to fabricate not only geometrically complex lattice structures but also parts in which lattice topologies in-fill volumes bounded by complex surface geometries. However, current AM processes produce defects on the strut and node elements that make up the lattice structure. This creates an inherent difference between the as-designed and as-fabricated geometries, which negatively affects predictions (via numerical simulation) of the lattice's mechanical performance. Although experimental and numerical analyses of an AM lattice's bulk structure, unit cells, and struts have been performed, almost no research data exist on the mechanical response of the individual as-manufactured lattice node elements. This research proposes a methodology that, for the first time, allows non-destructive quantification of the mechanical response of node elements within an as-manufactured lattice structure. A custom-developed tool is used to extract and classify each individual node geometry from micro-computed tomography scans of an AM fabricated lattice. Voxel-based finite element meshes are generated for numerical simulation and the mechanical response distribution is compared to that of the idealised computer-aided design model. The method demonstrates compatibility with Uncertainty Quantification methods that provide opportunities for efficient prediction of a population of nodal responses from sampled data. Overall, the non-destructive and automated nature of the node extraction and response evaluation is promising for its application in qualification and certification of additively manufactured lattice structures.

Evaluating Trade-offs in Potential Exascale Interconnect Technologies

Hemmert, Karl S.; Bair, Ray; Bhatale, Abhinav; Groves, Taylor; Jain, Nikhil; Lewis, Cannada L.; Mubarak, Misbah; Pakin, Scott D.; Ross, Robert; Wilke, Jeremiah J.

This report details work to study trade-offs in topology and network bandwidth for potential interconnects in the exascale (2021-2022) timeframe. The work was done using multiple interconnect models across two parallel discrete event simulators. Results from each independent simulator are shown and discussed and the areas of agreement and disagreement are explored.

Slycat Enables Synchronized 3D Comparison of Surface Mesh Ensembles [Brief]

Crossno, Patricia J.

In support of analyst requests for Mobile Guardian Transport studies, researchers at Sandia National Laboratories have expanded data types for the Slycat ensemble-analysis and visualization tool to include 3D surface meshes. This new capability represents a significant advance in our ability to perform detailed comparative analysis of simulation results. Analyzing mesh data rather than images provides greater flexibility for post-processing exploratory analysis.

Detecting and tracking drift in quantum information processors

Nature Communications

Proctor, Timothy J.; Revelle, Melissa R.; Nielsen, Erik N.; Rudinger, Kenneth M.; Lobser, Daniel L.; Maunz, Peter; Blume-Kohout, Robin J.; Young, Kevin C.

If quantum information processors are to fulfill their potential, the diverse errors that affect them must be understood and suppressed. But errors typically fluctuate over time, and the most widely used tools for characterizing them assume static error modes and rates. This mismatch can cause unheralded failures, misidentified error modes, and wasted experimental effort. Here, we demonstrate a spectral analysis technique for resolving time dependence in quantum processors. Our method is fast, simple, and statistically sound. It can be applied to time-series data from any quantum processor experiment. We use data from simulations and trapped-ion qubit experiments to show how our method can resolve time dependence when applied to popular characterization protocols, including randomized benchmarking, gate set tomography, and Ramsey spectroscopy. In the experiments, we detect instability and localize its source, implement drift control techniques to compensate for this instability, and then demonstrate that the instability has been suppressed.
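
The core of such a spectral analysis can be sketched in a few lines: Fourier-transform the outcome time series and flag frequencies whose power exceeds what a static error model would produce. The chi-squared threshold below is a simplified stand-in for the calibrated statistics of the published method.

```python
# Spectral drift check: flag Fourier components of an outcome time
# series whose power exceeds a static-model (constant-rate) bound.
import numpy as np
from scipy.stats import chi2

def drifting_frequencies(outcomes, alpha=1e-3):
    """outcomes: 0/1 array of one circuit's results over time."""
    x = outcomes - outcomes.mean()
    power = np.abs(np.fft.rfft(x)) ** 2 / len(x)     # periodogram
    var = outcomes.mean() * (1.0 - outcomes.mean())  # static-model variance
    threshold = var * chi2.ppf(1.0 - alpha, df=2) / 2.0
    return np.nonzero(power[1:] > threshold)[0] + 1  # drifting components
```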

Low-cost MPI Multithreaded Message Matching Benchmarking

Proceedings - 2020 IEEE 22nd International Conference on High Performance Computing and Communications, IEEE 18th International Conference on Smart City and IEEE 6th International Conference on Data Science and Systems, HPCC-SmartCity-DSS 2020

Schonbein, William W.; Levy, Scott L.; Marts, William P.; Dosanjh, Matthew D.; Grant, Ryan E.

The Message Passing Interface (MPI) standard allows user-level threads to concurrently call into an MPI library. While this feature is currently rarely used, there is considerable interest from developers in adopting it in the near future. There is reason to believe that multithreaded communication may incur additional message processing overheads, in terms of the number of items searched during demultiplexing and the amount of time spent searching, because it has the potential to increase the number of messages exchanged and to introduce non-deterministic message ordering. Therefore, understanding the implications of adding multithreading to MPI applications is important for future application development. One strategy for advancing this understanding is through 'low-cost' benchmarks that emulate full communication patterns using fewer resources. For example, while a complete, 'real-world' multithreaded halo exchange requires 9 or 27 nodes, the low-cost alternative needs only two, making it deployable on systems where acquiring resources is difficult because of high utilization (e.g., busy capacity-computing systems), or impossible because the necessary resources do not exist (e.g., testbeds with too few nodes). While such benchmarks have been proposed, the reported results have been limited to a single architecture or derived indirectly through simulation, and no attempt has been made to confirm that a low-cost benchmark accurately captures features of full (non-emulated) exchanges. Moreover, benchmark code has not been made publicly available. The purpose of the study presented in this paper is to quantify how accurately the low-cost benchmark captures the matching behavior of the full, real-world benchmark. In the process, we also advocate for the feasibility and utility of the low-cost benchmark. We present a 'real-world' benchmark implementing a full multithreaded halo exchange on 9 and 27 nodes, as defined by 5-point and 9-point 2D stencils and 7-point and 27-point 3D stencils. Likewise, we present a 'low-cost' benchmark that emulates these communication patterns using only two nodes. We then confirm, across multiple architectures, that the low-cost benchmark gives accurate estimates of both the number of items searched during message processing and the time spent processing those messages. Finally, we demonstrate the utility of the low-cost benchmark by using it to profile the performance impact of state-of-the-art Mellanox ConnectX-5 hardware support for offloaded MPI message demultiplexing. To facilitate further research on the effects of multithreaded MPI on message matching behavior, the source of our two benchmarks is to be included in the next release of the Sandia MPI Micro-Benchmark Suite.

Hurricane-induced power outage risk under climate change is primarily driven by the uncertainty in projections of future hurricane frequency

Scientific Reports

Alemazkoor, Negin; Rachunok, Benjamin; Chavas, Daniel R.; Staid, Andrea S.; Louhghalam, Arghavan; Nateghi, Roshanak; Tootkaboni, Mazdak

Nine in ten major outages in the US have been caused by hurricanes. Long-term outage risk is a function of climate change-triggered shifts in hurricane frequency and intensity; yet projections of both remain highly uncertain. However, outage risk models do not account for the epistemic uncertainties in physics-based hurricane projections under climate change, largely due to the extreme computational complexity. Instead, they use simple probabilistic assumptions to model such uncertainties. Here, we propose a transparent and efficient framework to, for the first time, bridge physics-based hurricane projections and intricate outage risk models. We find that uncertainty in projections of the frequency of weaker storms explains over 95% of the uncertainty in outage projections; thus, reducing this uncertainty will greatly improve outage risk management. We also show that the expected annual fraction of affected customers exhibits large variances, warranting the adoption of robust resilience investment strategies and climate-informed regulatory frameworks.

Numerical methods for nonlocal and fractional models

Acta Numerica

D'Elia, Marta D.; Du, Qiang; Glusa, Christian A.; Gunzburger, Max D.; Tian, Xiaochuan; Zhou, Zhi

Partial differential equations (PDEs) are used with huge success to model phenomena across all scientific and engineering disciplines. However, across an equally wide swath, there exist situations in which PDEs fail to adequately model observed phenomena, or are not the best available model for that purpose. On the other hand, in many situations, nonlocal models that account for interaction occurring at a distance have been shown to more faithfully and effectively model observed phenomena that involve possible singularities and other anomalies. Here, we consider a generic nonlocal model, beginning with a short review of its definition, the properties of its solution, its mathematical analysis and of specific concrete examples. We then provide extensive discussions about numerical methods, including finite element, finite difference and spectral methods, for determining approximate solutions of the nonlocal models considered. In that discussion, we pay particular attention to a special class of nonlocal models that are the most widely studied in the literature, namely those involving fractional derivatives. The article ends with brief considerations of several modelling and algorithmic extensions, which serve to show the wide applicability of nonlocal modelling.
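
For reference, the generic nonlocal operator considered in such reviews, and the integral fractional Laplacian as its most widely studied special case, take the following forms (constants and sign conventions vary by author):

```latex
% Generic nonlocal diffusion operator with kernel gamma, and the integral
% fractional Laplacian as the special case gamma(x,y) ~ |x-y|^{-d-2s}:
\mathcal{L}u(x) = \int_{\mathbb{R}^d} \big( u(y) - u(x) \big)\, \gamma(x,y)\, dy,
\qquad
(-\Delta)^s u(x) = C(d,s)\ \mathrm{p.v.} \int_{\mathbb{R}^d}
  \frac{u(x) - u(y)}{|x - y|^{d + 2s}}\, dy, \qquad s \in (0,1).
```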

Size- and temperature-dependent magnetization of iron nanoclusters

Physical Review B

Tranchida, Julien G.; Dos Santos, Gonzalo; Aparicio, Romina; Linares, D.; Miranda, E.N.; Pastor, Gustavo M.; Bringa, Eduardo M.

In this paper, the magnetic behavior of bcc iron nanoclusters, with diameters between 2 and 8 nm, is investigated by means of spin dynamics simulations coupled to molecular dynamics, using a distance-dependent exchange interaction. Finite-size effects in the total magnetization as well as the influence of the free surface and the surface/core proportion of the nanoclusters are analyzed in detail for a wide temperature range, going beyond the cluster and bulk Curie temperatures. Comparison is made with experimental data and with theoretical models based on the mean-field Ising model adapted to small clusters, taking into account the influence of low-coordinated spins at free surfaces. Our results for the temperature dependence of the average magnetization per atom M(T), including the thermalization of the translational lattice degrees of freedom, are in very good agreement with available experimental measurements on small Fe nanoclusters. In contrast, significant discrepancies with experiment are observed if the translational degrees of freedom are artificially frozen. The finite-size effects on M(T) are found to be particularly important near the cluster Curie temperature. Simulated magnetization above the Curie temperature scales with cluster size as predicted by models assuming short-range magnetic ordering. Analytical approximations to the magnetization as a function of temperature and size are proposed.

Towards Use of Mixed Precision in ECP Math Libraries [Exascale Computing Project]

Anzt, Hartwig; Boman, Erik G.; Gates, Mark; Kruger, Scott; Li, Sherry; Loe, Jennifer A.; Osei-Kuffuor, Daniel; Tomov, Stan; Tsai, Yaohung M.; Meier Yang, Ulrike

The use of multiple types of precision in mathematical software has the potential to increase its performance on new heterogeneous architectures. The xSDK project focuses on both the investigation and development of multiprecision algorithms and their inclusion into xSDK member libraries. This report summarizes current efforts on including and/or using mixed-precision capabilities in the math libraries Ginkgo, heFFTe, hypre, MAGMA, PETSc/TAO, SLATE, SuperLU, and Trilinos, including KokkosKernels. It contains both numerical results from libraries that already provide mixed-precision capabilities and descriptions of the strategies to incorporate multiprecision into established libraries.
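
A prototypical multiprecision algorithm in this setting is mixed-precision iterative refinement: factor once in single precision, then recover double-precision accuracy with cheap refinement sweeps. The sketch below illustrates the idea generically on a dense system; the library-specific APIs in Ginkgo, MAGMA, Trilinos, and the others differ.

```python
# Mixed-precision iterative refinement (dense, for brevity): factor in
# float32 once, then refine in float64 with cheap triangular solves.
import numpy as np
import scipy.linalg as sla

def mixed_precision_solve(A, b, sweeps=5):
    lu, piv = sla.lu_factor(A.astype(np.float32))   # low-precision factor
    x = sla.lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(sweeps):
        r = b - A @ x                                # residual in float64
        dx = sla.lu_solve((lu, piv), r.astype(np.float32))
        x += dx.astype(np.float64)
    return x
```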

A Roadmap for Reaching the Potential of Brain-Derived Computing

Advanced Intelligent Systems

Aimone, James B.

Neuromorphic computing is a critical future technology for the computing industry, but it has yet to achieve its promise and has struggled to establish a cohesive research community. A large part of the challenge is that full realization of the potential of brain inspiration requires advances in device hardware, computing architectures, and algorithms alike. This simultaneous development across technology scales is unprecedented in the computing field. This article presents a strategy, framed by market and policy pressures, for moving past the current technological and cultural hurdles to realize the full impact of neuromorphic computing across technology. Achieving the full potential of brain-derived algorithms as well as post-complementary metal-oxide-semiconductor (CMOS) scaling neuromorphic hardware requires appropriately balancing the near-term opportunities of deep learning applications with the long-term potential of less understood opportunities in neural computing.

A physics-informed operator regression framework for extracting data-driven continuum models

Computer Methods in Applied Mechanics and Engineering

Patel, Ravi G.; Trask, Nathaniel A.; Wood, Mitchell A.; Cyr, Eric C.

The application of deep learning toward the discovery of data-driven models requires careful application of inductive biases to obtain a description of physics that is both accurate and robust. We present here a framework for discovering continuum models from high-fidelity molecular simulation data. Our approach applies a neural network parameterization of governing physics in modal space, allowing a characterization of differential operators while providing structure that may be used to impose biases related to symmetry, isotropy, and conservation form. Here, we demonstrate the effectiveness of our framework for a variety of physics, including local and nonlocal diffusion processes and single- and multiphase flows. For the flow physics, we demonstrate that this approach leads to a learned operator that generalizes to system characteristics not included in the training sets, such as variable particle sizes, densities, and concentrations.

Scale and rate in CdS pressure-induced phase transition

AIP Conference Proceedings

Lane, James M.; Koski, Jason K.; Thompson, Aidan P.; Srivastava, Ishan S.; Grest, Gary S.; Ao, Tommy A.; Stoltzfus, Brian S.; Austin, Kevin N.; Fan, Hongyou F.; Morgan, Dane; Knudson, Marcus D.

Here, we describe recent efforts to improve predictive modeling of rate-dependent behavior at, or near, a phase transition using molecular dynamics simulations. Cadmium sulfide (CdS) is a well-studied material that undergoes a solid-solid phase transition from the wurtzite to the rock salt structure between 3 and 9 GPa. Atomistic simulations are used to investigate the dominant transition mechanisms as a function of orientation, size, and rate. We find that the final rock salt orientations are determined relative to the initial wurtzite orientation, and that they differ between the two initial orientations and the two pressure regimes studied. The CdS solid-solid phase transition is studied both for a bulk single crystal and for polymer-encapsulated spherical nanoparticles of various sizes.

Generating Uncertainty Distributions for Seismic Signal Onset Times

Bulletin of the Seismological Society of America

Peterson, Matthew G.; Stracuzzi, David J.; Young, Christopher J.; Vollmer, Charles V.; Brogan, Ronald

Signal arrival-time estimation plays a critical role in a variety of downstream seismic analyses, including location estimation and source characterization. Any arrival-time errors propagate through subsequent data-processing results. In this article, we detail a general framework for refining estimated seismic signal arrival times along with full estimation of their associated uncertainty. Using the standard short-term average/long-term average threshold algorithm to identify a search window, we demonstrate how to refine the pick estimate through two different approaches. In both cases, new waveform realizations are generated through bootstrap algorithms to produce full a posteriori estimates of uncertainty of onset arrival time of the seismic signal. The onset arrival uncertainty estimates provide additional data-derived information from the signal and have the potential to influence seismic analysis along several fronts.
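
The short-term average/long-term average (STA/LTA) trigger the article builds on can be sketched as follows; window lengths and the trigger threshold are illustrative defaults, not the authors' settings.

```python
# Classical STA/LTA trigger: flag the first sample where short-term
# average energy exceeds a multiple of the long-term average.
import numpy as np

def sta_lta_onset(x, fs, sta_s=1.0, lta_s=30.0, threshold=4.0):
    e = np.asarray(x, dtype=float) ** 2
    n_sta, n_lta = int(sta_s * fs), int(lta_s * fs)
    sta = np.convolve(e, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(e, np.ones(n_lta) / n_lta, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)
    hits = np.nonzero(ratio > threshold)[0]
    return hits[0] if hits.size else None   # start of the search window
```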

Implementing Flexible Threading Support in Open MPI

Proceedings of ExaMPI 2020: Exascale MPI Workshop, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis

Evans, Noah; Ciesko, Jan; Olivier, Stephen L.; Pritchard, Howard; Iwasaki, Shintaro; Raffenetti, Ken; Balaji, Pavan

Multithreaded MPI applications are gaining popularity in scientific and high-performance computing. While the combination of programming models is suited to support current parallel hardware, it moves threading models and their interaction with MPI into focus. With the advent of new threading libraries, the flexibility to select threading implementations of choice is becoming an important usability feature. Open MPI has traditionally avoided componentizing its threading model, relying on code inlining and static initialization to minimize potential impacts on runtime fast paths and synchronization. This paper describes the implementation of generic threading runtime support in Open MPI using the Opal Modular Component Architecture. This architecture allows the programmer to select a threading library at compile- or run-time, providing both static initialization of threading primitives and dynamic instantiation of threading objects. In this work, we present the implementation, define required interfaces, and discuss trade-offs of dynamic and static initialization.

Novel Geometric Operations for Linear Programming

Ebeida, Mohamed S.; Abdelkader, Ahmed; Amenta, Nina; Kouri, Drew P.; Parekh, Ojas D.; Phillips, Cynthia A.; Winovich, Nickolas W.

This report summarizes the work performed under the project "Linear Programming in Strongly Polynomial Time." Linear programming (LP) is a classic combinatorial optimization problem heavily used directly and as an enabling subroutine in integer programming (IP). Specifically, IP is the same as LP except that some solution variables must take integer values (e.g., to represent yes/no decisions). Together LP and IP have many applications in resource allocation, including general logistics, and infrastructure design and vulnerability analysis. The project was motivated by the PI's recent success developing methods to efficiently sample Voronoi vertices (essentially finding nearest neighbors in high-dimensional point sets) in arbitrary dimension. His method seems applicable to exploring the high-dimensional convex feasible space of an LP problem. Although the project did not find a provably strongly polynomial algorithm, it explored multiple algorithm classes. The new medial simplex algorithms may still lead to solvers with improved provable complexity. We describe medial simplex algorithms and some relevant structural/complexity results. We also designed a novel parallel LP algorithm based on our geometric insights and implemented it in the Spoke-LP code. A major part of the computational step is many independent vector dot products. Our parallel algorithm distributes the problem constraints across processors. Current commercial and high-quality free LP solvers require all problem details to fit onto a single processor or multicore. Our new algorithm might enable the solution of problems too large for any current LP solvers. We describe our new algorithm, give preliminary proof-of-concept experiments, and describe a new generator for arbitrarily large LP instances.

Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis: Version 6.13 User's Manual

Adams, Brian M.; Bohnhoff, William J.; Dalbey, Keith R.; Ebeida, Mohamed S.; Eddy, John P.; Eldred, Michael S.; Hooper, Russell W.; Hough, Patricia D.; Hu, Kenneth T.; Jakeman, John D.; Khalil, Mohammad; Maupin, Kathryn A.; Monschke, Jason A.; Ridgway, Elliott M.; Rushdi, Ahmad; Seidl, Daniel T.; Stephens, John A.; Winokur, Justin G.

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user’s manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

Radd runtimes: Radical and different distributed runtimes with SmartNICs

Proceedings of IPDRM 2020: 4th Annual Workshop on Emerging Parallel and Distributed Runtime Systems and Middleware, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis

Grant, Ryan E.; Schonbein, Whit; Levy, Scott L.

As network speeds increase, the overhead of processing incoming messages is becoming onerous enough that many manufacturers now provide network interface cards (NICs) with offload capabilities to handle these overheads. This increase in NIC capabilities creates an opportunity to enable computation on data in situ on the NIC. These enhanced NICs can be classified into several different categories of SmartNICs. SmartNICs present an interesting opportunity for future runtime software designs. Designing runtime software to be located in the network, as opposed to at the host level, leads to radical new distributed runtime possibilities that were not practical prior to SmartNICs. In transitioning to a radically different runtime software design for SmartNICs, there are also intermediate steps, such as offloading current runtime software onto a SmartNIC, that present interesting possibilities. This paper describes SmartNIC design and how SmartNICs can be leveraged to offload current-generation runtime software and lead to future, radically different in-network distributed runtime systems.

CSRI Summer Proceedings 2020

Rushdi, Ahmad R.

The Computer Science Research Institute (CSRI) brings university faculty and students to Sandia for focused collaborative research on Department of Energy (DOE) computer and computational science problems. The institute provides an opportunity for university researchers to learn about problems in computer and computational science at DOE laboratories. Participants conduct leading-edge research, interact with scientists and engineers at the laboratories, and help transfer results of their research to programs at the labs. Some specific CSRI research interest areas are: scalable solvers, optimization, adaptivity and mesh refinement, graph-based, discrete, and combinatorial algorithms, uncertainty estimation, mesh generation, dynamic load-balancing, virus and other malicious-code defense, visualization, scalable cluster computers, data-intensive computing, environments for scalable computing, parallel input/output, advanced architectures, and theoretical computer science. The CSRI Summer Program is organized by CSRI and typically includes the organization of a weekly seminar series and the publication of a summer proceedings. In 2020, the CSRI summer program was executed completely virtually; all student interns worked from home, due to the COVID-19 pandemic.

Distributed Memory Graph Coloring Algorithms for Multiple GPUs

Proceedings of IA3 2020: 10th Workshop on Irregular Applications: Architectures and Algorithms, Held in conjunction with SC 2020: The International Conference for High Performance Computing, Networking, Storage and Analysis

Bogle, Ian A.; Boman, Erik G.; Devine, Karen D.; Rajamanickam, Sivasankaran R.; Slota, George M.

Graph coloring is often used in parallelizing scientific computations that run in distributed and multi-GPU environments; it identifies sets of independent data that can be updated in parallel. Many algorithms exist for graph coloring on a single GPU or in distributed memory, but hybrid MPI+GPU algorithms have been unexplored until this work, to the best of our knowledge. We present several MPI+GPU coloring approaches that use implementations of the distributed coloring algorithms of Gebremedhin et al. and the shared-memory algorithms of Deveci et al. The on-node parallel coloring uses implementations in KokkosKernels, which provide parallelization for both multicore CPUs and GPUs. We further extend our approaches to solve for distance-2 coloring, giving the first known distributed and multi-GPU algorithm for this problem. In addition, we propose novel methods to reduce communication in distributed graph coloring. Our experiments show that our approaches operate efficiently on inputs too large to fit on a single GPU and scale up to graphs with 76.7 billion edges running on 128 GPUs.
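
For orientation, the sequential greedy baseline that distributed coloring algorithms parallelize (typically via speculative coloring followed by conflict resolution) is simply the following.

```python
# Sequential greedy distance-1 coloring: each vertex takes the smallest
# color not already used by a neighbor.
def greedy_color(adj):
    """adj: dict mapping each vertex to an iterable of its neighbors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A triangle with a pendant vertex: three colors for the triangle.
print(greedy_color({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}))
```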
