Projection-based reduced-order models (pROMs) show great promise as a means to accelerate many-query applications such as forward error propagation, solving inverse problems, and design optimization. To deploy pROMs in the context of high-consequence decision making, accurate error estimates are required to determine their region(s) of applicability in the parameter space. This paper considers the dual-weighted residual (DWR) error estimate for pROMs and compares it to another promising pROM error estimate, machine-learned error models (MLEM). We show how DWR can be applied to pROMs and then evaluate it on two partial differential equations (PDEs): a two-dimensional linear convection–reaction–diffusion equation and a three-dimensional static hyper-elastic beam. We find that DWR is able to estimate errors for pROMs extrapolating outside of their training set, while MLEM is best suited for pROMs interpolating within the training set.
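For context, a standard form of the DWR output-error estimate is sketched below; the notation is introduced here for illustration, and the paper's exact formulation may differ. For a full-order residual r(x; μ) = 0 with output s = g(x), a pROM solution x̃ yields

```latex
% Dual-weighted residual (DWR) output-error estimate
% (illustrative notation; sign conventions vary).
\[
  s - \tilde{s} \;\approx\; -\,\psi^{\mathsf{T}} r(\tilde{x};\mu),
  \qquad \text{where } \psi \text{ solves }
  \left(\frac{\partial r}{\partial x}\right)^{\!\mathsf{T}} \psi
  = \left(\frac{\partial g}{\partial x}\right)^{\!\mathsf{T}}.
\]
```

Because the estimate is driven by the full-order residual evaluated at the ROM state, rather than by training data, it can remain informative when the pROM extrapolates beyond its training set.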
Solving large numbers of small sparse linear systems is increasingly becoming a bottleneck in computational science applications. While batched dense linear solvers for such systems have been studied before, batched sparse linear solvers are only starting to emerge. In this paper, we discuss algorithms for solving batched sparse linear systems and their implementation in the Kokkos Kernels library. The new algorithms are performance portable and map well to the hierarchical parallelism available in modern accelerator architectures. The sparse matrix–vector product (SPMV) kernel is the main performance bottleneck of the Krylov solvers we implement in this work, so the implementation of the batched SPMV and its performance are discussed thoroughly. The implemented kernels are tested on different Central Processing Unit (CPU) and Graphics Processing Unit (GPU) architectures. We also develop batched Conjugate Gradient (CG) and batched Generalized Minimum Residual (GMRES) solvers on top of the batched SPMV. Our proposed solver solves 20,000 sparse linear systems on V100 GPUs with mean speedups of 76x over a parallel sparse solver applied to a single block-diagonal system assembled from all the small systems, and 924x over solving the small systems one at a time. We observe a mean speedup of 0.51x relative to the dense batched solver of cuSOLVER on the V100, while using far less memory. A thorough performance evaluation and analysis on three different architectures is presented.
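As a concrete illustration of the central kernel, the sketch below gives a minimal plain-C++ batched CSR SPMV in which all systems share one sparsity pattern, a common layout for batched workloads. The struct and function names are hypothetical, and this is not the Kokkos Kernels API; on an accelerator, the outer loop would map to teams (one per system) and the inner loop to threads within a team.

```cpp
// Minimal sketch of a batched sparse matrix-vector product (SPMV):
// y[b] = A[b] * x[b] for every system b. All systems share one CSR
// sparsity pattern but carry distinct values (batch-major storage).
// Illustrative only -- not the Kokkos Kernels API.
#include <cstddef>
#include <vector>

struct BatchedCsr {
    int num_rows = 0;
    int num_batches = 0;
    std::vector<int> row_ptr;     // shared pattern, size num_rows + 1
    std::vector<int> col_idx;     // shared pattern, size nnz
    std::vector<double> values;   // size num_batches * nnz, batch-major
};

void batched_spmv(const BatchedCsr& A,
                  const std::vector<double>& x,  // size num_batches * num_rows
                  std::vector<double>& y) {      // size num_batches * num_rows
    const std::size_t nnz = A.col_idx.size();
    const std::size_t n = static_cast<std::size_t>(A.num_rows);
    for (int b = 0; b < A.num_batches; ++b) {        // outer level: one team per system
        const double* vals = &A.values[static_cast<std::size_t>(b) * nnz];
        const double* xb   = &x[static_cast<std::size_t>(b) * n];
        double* yb         = &y[static_cast<std::size_t>(b) * n];
        for (int row = 0; row < A.num_rows; ++row) { // inner level: threads over rows
            double sum = 0.0;
            for (int k = A.row_ptr[row]; k < A.row_ptr[row + 1]; ++k)
                sum += vals[k] * xb[A.col_idx[k]];
            yb[row] = sum;
        }
    }
}
```

Note that the batch-major value layout shown here is only one option; an interleaved layout, in which adjacent GPU threads touch adjacent memory, can improve coalescing, and the best choice is architecture-dependent.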
We propose primal–dual mesh optimization algorithms that overcome shortcomings of the standard algorithm while retaining some of its desirable features. “Hodge-Optimized Triangulations” defines the “HOT energy” as a bound on the discretization error of the diagonalized Delaunay Hodge star operator. The HOT energy is a natural choice for an objective function, but it is unstable for both mathematical and algorithmic reasons: it has minima at collapsed edges, and its extrapolation to non-regular triangulations is inaccurate and has unbounded minima. We propose a different extrapolation with a stronger theoretical foundation, and we avoid extrapolation altogether by recalculating the objective just beyond the flip threshold. We also propose new objectives, based on normalizations of the HOT energy, with barriers against edge collapses and other undesirable configurations. We then propose mesh improvement algorithms that couple these objectives with discrete operations: when HOT optimization nearly collapses an edge, we actually collapse it; otherwise, we use the barrier objective to update positions and weights and to remove vertices. By combining discrete connectivity changes with continuous optimization, we more fully explore the space of possible meshes and obtain higher-quality solutions.
Nuclear power plants (NPPs) are considering flexible plant operations to take advantage of excess thermal and electrical energy. One option for NPPs is to pursue hydrogen production through a high temperature electrolysis facility (HTEF) as an alternate revenue stream to remain economically viable. The intent of this study is to investigate the risk posed by a 100 MW hydrogen production facility in close proximity to an NPP. Previous analyses have evaluated preliminary designs of a hydrogen production facility in a conservative manner to determine whether it is feasible to co-locate the facility within 1 km of an NPP. This analysis specifically evaluates the risk components of a 100 MW hydrogen production facility design, including the likelihood of a leak within the system and the associated consequence to critical NPP targets. It shows that although the likelihood of a leak in an HTEF is not negligible, the consequence to critical NPP targets is not expected to lead to failure, given adequate separation distance from the plant.
Compressible wall-modeled large-eddy simulations of Mach 8 turbulent boundary-layer flows over a flat plate were carried out for the conditions of the hypersonic wind tunnel at Sandia National Laboratories. The simulations provide new insight into the effect of wall cooling on the aero-optical path distortions for hypersonic turbulent boundary-layer flows. Four different wall-to-recovery temperature ratios, 0.3, 0.48, 0.71, and 0.89, are considered. Despite the much lower grid resolution, the mean velocity, temperature, and resolved Reynolds stress profiles from the simulation for a temperature ratio of 0.48 are in good agreement with those from a reference direct numerical simulation. The normalized root-mean-square optical path difference obtained from the present simulations is compared with that from reference direct numerical simulations, Sandia experiments, and predictions from a semi-analytical model by the University of Notre Dame. The present analysis focuses on the effect of wall cooling on the wall-normal density correlations, on key underlying assumptions of the aforementioned model such as the strong Reynolds analogy, and on the effect of elevation angle on the optical path difference. Wall cooling is found to increase the velocity fluctuations and decrease the density fluctuations, resulting in an overall reduction of the normalized optical path distortion. Compared to the simulations, the basic strong Reynolds analogy overpredicts the temperature fluctuations for cooled walls. Also, contrary to the strong Reynolds analogy, the velocity and temperature fluctuations are not perfectly anticorrelated. Finally, as the wall temperature is raised, the density correlation length, away from the wall but inside the boundary layer, increases significantly for beam paths tilted in the downstream direction.
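The basic strong Reynolds analogy invoked above relates temperature and velocity fluctuations; a common textbook statement, given here only for context, is

```latex
% Basic strong Reynolds analogy (SRA), textbook form.
\[
  \frac{T'_{\mathrm{rms}}}{\bar{T}}
  \approx (\gamma - 1)\,\bar{M}^{2}\,\frac{u'_{\mathrm{rms}}}{\bar{u}},
  \qquad R_{u'T'} = -1,
\]
```

where M̄ is the local mean Mach number and R_{u'T'} is the velocity–temperature correlation coefficient. The findings above indicate that both relations degrade for cooled walls: the temperature fluctuations are overpredicted and the anticorrelation is imperfect.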
Characterizing the shallow structure of the Rock Valley region of the Nevada National Security Site is a critical component of the Rock Valley Direct Comparison project. Geophysical data for the region are needed for operational decisions, to constrain geologic models used for simulation, and to facilitate the analysis of future explosive-source data. Local measurements of gravity are a key piece of geophysical information that helps resolve the underlying geologic composition, fault structure, and density characteristics; yet in the Rock Valley region these measurements are sparse on the scale of the testbed. In this report, we present the details of a recent gravity data acquisition survey designed to collect a dense dataset in the region of interest that complements the existing gravity work while greatly enhancing resolution. This dataset will be integrated with a complementary Los Alamos National Laboratory gravity collection and combined with the existing seismic data in a joint inversion. The measurements were conducted over two weeks with a portable gravimeter and high-resolution GPS, and include repeat measurements at a USGS base station as well as reoccupation of gravity sites in the regional dataset. This collection of over 100 new dense gravity measurements will facilitate refinement of the existing Geologic Framework Model and directly complement newly acquired dense seismic data, ultimately improving the project’s ability to investigate the direct comparison of shallow earthquake and explosive sources.
High-speed analog-to-digital converters (ADCs), switched-capacitor delay elements, and pulsed radio frequency (RF) systems all require switches in the signal path that operate at high switching speeds, provide low resistance when enabled, and provide high signal isolation when disabled. In semiconductor technologies such as CMOS, the enabled-state resistance scales inversely with the width of the switch device: a wider switch provides a lower on-resistance. However, as the device width is increased, so is the parasitic capacitance formed between the gate, drain, and source of the device.
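A first-order long-channel MOSFET model (a textbook approximation, not a claim about any specific technology discussed here) makes this trade-off concrete:

```latex
% Triode-region on-resistance of a MOSFET switch (long-channel
% approximation); parasitic gate capacitances grow in proportion to width.
\[
  R_{\mathrm{on}} \approx
  \frac{1}{\mu_n C_{\mathrm{ox}} \frac{W}{L}\left(V_{GS} - V_{\mathrm{th}}\right)},
  \qquad C_{gs},\, C_{gd} \propto W,
\]
```

so widening the switch lowers its enabled-state resistance at the cost of larger parasitic capacitances, which degrade isolation when the switch is disabled and limit switching speed.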
Aftershock sequences are a burden to real-time seismic monitoring. Cross-correlation can be used because aftershocks exhibit similar waveforms, but the method is computationally expensive. Deep learning may be an alternative, as it is computationally efficient, but great care in training and testing is required in order to trust that a model will generalize to new aftershock sequences. This is especially problematic for aftershock sequences because large-magnitude earthquakes are unpredictable and globally widespread. Here, we test several paired neural network (PNN) models, trained on an augmented (noise-added) earthquake dataset, to determine whether they generalize to real aftershock sequences. We used two aftershock datasets that were originally detected by cross-correlation and subsequently validated by an expert analyst. We found that current PNN models struggle to generalize to aftershock sequences. However, we identify approaches to improve the training of future PNN models and believe that further improvements may be achieved through transfer learning.
The purpose of this protocol is to define procedures and practices to be used by the PACT center for field testing of metal halide perovskite (MHP) photovoltaic (PV) modules. The protocol defines the physical, electrical, and analytical configuration of the tests and applies equally to fixed-orientation mounting systems and sun-tracking systems. While standards exist for outdoor testing of conventional PV modules, these do not anticipate the unique electrical behavior of perovskite cells. Further, the existing standards are oriented toward mature, relatively stable products with lifetimes that can be measured on the scale of years to decades. The state of the art for MHP modules is still immature, with considerable sample-to-sample variation among nominally identical modules. Version 0.0 of this protocol does not define a minimum test duration, although the intent is for modules to be fielded for periods ranging from weeks to months. This protocol draws from relevant parts of existing standards and, where necessary, includes modifications specific to the behavior of perovskites.
This document provides the instructions for participating in the 2021 blind photovoltaic (PV) modeling intercomparison organized by the PV Performance Modeling Collaborative (PVPMC). It describes the system configurations, metadata, and other information necessary for the modeling exercise. The practical details of the validation datasets are also described. The datasets were published online with open access in April 2023, after the analysis of the results was completed.