Uncertainty quantification is recognized as a fundamental task for obtaining predictive numerical simulations. However, many realistic engineering applications require complex and computationally expensive high-fidelity numerical simulations to accurately characterize the system responses. Moreover, complex physical models and extreme operating conditions can easily lead to hundreds of uncertain parameters that need to be propagated through high-fidelity codes. Under these circumstances, a single-fidelity approach, i.e. a workflow that uses only high-fidelity simulations to perform the uncertainty quantification task, is unfeasible due to the prohibitive overall computational cost. In recent years, multifidelity strategies have been introduced to overcome this issue. The core idea of this family of methods is to combine simulations with varying levels of fidelity/accuracy in order to obtain multifidelity estimators or surrogates with the same accuracy as their single-fidelity counterparts at a much lower computational cost. This goal is usually accomplished by defining a priori a sequence of discretization levels or physical modeling assumptions that can be used to decrease the complexity of a numerical realization and thus its computational cost. However, less attention has been dedicated to low-fidelity models that can be built directly from the small number of high-fidelity simulations available. In this work we focus our attention on reduced-order models, which can be considered a particular class of data-driven approaches. Our main goal is to explore the combination of multifidelity uncertainty quantification and reduced-order models to obtain an efficient framework for propagating uncertainties through expensive numerical codes.
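As an illustration of the general multifidelity idea (not the specific method developed in this work), the sketch below shows a two-fidelity control-variate Monte Carlo estimator of a mean output statistic; the model functions `high_fidelity` and `low_fidelity` and the sample sizes are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for an expensive solver and a cheap surrogate/ROM.
def high_fidelity(x):
    return np.sin(x) + 0.05 * x**2

def low_fidelity(x):
    return np.sin(x)  # cheaper, correlated approximation

n_hf, n_lf = 20, 2000                      # few HF runs, many LF runs
x_hf = rng.normal(size=n_hf)               # uncertain input samples
x_lf = rng.normal(size=n_lf)

y_hf = high_fidelity(x_hf)
y_lf_on_hf = low_fidelity(x_hf)            # LF evaluated at the shared HF samples
y_lf = low_fidelity(x_lf)

# Control-variate weight from the sample covariance on the shared samples.
alpha = np.cov(y_hf, y_lf_on_hf)[0, 1] / np.var(y_lf_on_hf, ddof=1)

# Multifidelity estimate of E[high_fidelity(X)].
mf_mean = y_hf.mean() + alpha * (y_lf.mean() - y_lf_on_hf.mean())
print(mf_mean)
```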
Motivated by the need for improved forward modeling and inversion capabilities of geophysical responses in geologic settings whose fine-scale features must be accounted for, this project describes two novel approaches that advance the current state of the art. First is a hierarchical material properties representation for finite element analysis, whereby material properties can be prescribed on volumetric elements as well as on their facets and edges. Hence, thin or fine-scale features can be economically represented by small numbers of connected edges or facets, rather than tens of millions of very small volumetric elements. Examples of this approach are drawn from oilfield and near-surface geophysics where, for example, the electrostatic response of metallic infrastructure or fracture swarms is easily calculable on a laptop computer, with an estimated reduction in resource allocation of four orders of magnitude over traditional methods. Second is a first-ever solution method for the space-fractional Helmholtz equation in geophysical electromagnetics, accompanied by newly found magnetotelluric evidence supporting a fractional calculus representation of multi-scale geomaterials. While these two achievements are significant in themselves, a clear understanding of the intermediate length scale where these two endmember viewpoints must converge remains unresolved and is a natural direction for future research. Additionally, an explicit mapping from a known multi-scale geomaterial model to its equivalent fractional calculus representation proved beyond the scope of the present research and, similarly, remains fertile ground for future exploration.
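For orientation only, a schematic form of a space-fractional Helmholtz-type equation is sketched below, with a fractional Laplacian of order alpha standing in for the usual Laplacian; the exact formulation, sign conventions, boundary conditions, and source term used in this work are not reproduced here, so the symbols should be read as assumptions.

```latex
% Schematic space-fractional Helmholtz-type equation (assumed form):
% (-\Delta)^{\alpha/2} is the fractional Laplacian, k a (possibly complex)
% wavenumber, and f a source term; conventions vary across formulations.
\begin{equation}
  (-\Delta)^{\alpha/2} u(\mathbf{x}) - k^{2}\, u(\mathbf{x}) = f(\mathbf{x}),
  \qquad 1 < \alpha \le 2 .
\end{equation}
% For \alpha = 2 this reduces (up to sign convention) to the classical
% Helmholtz equation \Delta u + k^2 u = -f.
```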
There are numerous applications that combine data collected from sensors with machine-learning-based classification models to predict the type of event or object observed. Both the collection of the data itself and the classification models can be tuned for optimal performance, but we hypothesize that additional gains can be realized by jointly assessing both factors together. In this research, we used a seismic event dataset and two neural network classification models that issued probabilistic predictions on each event to determine whether it was an earthquake or a quarry blast. Real-world applications will have constraints on data collection, perhaps in terms of a budget for the number of sensors or on where, when, or how data can be collected. We mimicked such constraints by creating subnetworks of sensors with both size and locational constraints. We compare different methods of determining the set of sensors in each subnetwork in terms of their predictive accuracy and the number of events that they observe overall. Additionally, we take the classifiers into account, treating them both as black-box models and testing various ways of combining predictions among models and among the set of sensors that observe any given event. We find that comparable overall performance can be achieved with less than half the number of sensors in the full network. Additionally, a voting scheme that uses the average confidence across the sensors for a given event shows improved predictive accuracy across nearly all subnetworks. Lastly, locational constraints matter, but sometimes in unintuitive ways, as better-performing sensors may be chosen instead of the ones excluded based on location. Because this was a short-term research effort, we offer a lengthy discussion of interesting next steps and ties to other ongoing research efforts that we did not have time to pursue. These include a detailed analysis of subnetwork performance broken down by event type, specific location, and model confidence. This project also included a Campus Executive research partnership with Texas A&M University. Through this partnership, we worked with a professor and student to study information gain for UAV routing. This was an alternative way of looking at the similar problem space that includes sensor operation for data collection and the resulting benefit to be gained from it. This work is described in an Appendix.
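As a minimal illustration of the average-confidence voting idea (with made-up probabilities, not the project's data), the sketch below averages each observing sensor's predicted class probabilities for an event and takes the class with the highest mean confidence.

```python
import numpy as np

# Hypothetical per-sensor class probabilities for one event:
# rows = sensors that observed the event, columns = (earthquake, quarry blast).
sensor_probs = np.array([
    [0.80, 0.20],
    [0.55, 0.45],
    [0.30, 0.70],
])

classes = ["earthquake", "quarry blast"]

# Average-confidence voting: mean probability per class across sensors,
# then pick the class with the highest mean confidence.
mean_conf = sensor_probs.mean(axis=0)
prediction = classes[int(np.argmax(mean_conf))]
print(prediction, mean_conf)
```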
Due to its balance of accuracy and computational cost, density functional theory has become the method of choice for computing the electronic structure and related properties of materials. However, present-day semi-local approximations to the exchange-correlation energy of density functional theory break down for materials containing d and f electrons. In this report we summarize the results of our research efforts within the LDRD 200202 titled "Making density functional theory work for all materials" in addressing this issue. Our efforts are grouped into two research thrusts. In the first thrust, we develop an exchange-correlation functional (the BSC functional) within the subsystem functional formalism. It enables us to capture bulk, surface, and confinement physics with a single, semi-local exchange-correlation functional in density functional theory calculations. We present the analytical properties of the BSC functional and demonstrate that it captures confinement physics more accurately than standard semi-local exchange-correlation functionals. The second research thrust focuses on developing a database for transition metal binary compounds. The database consists of materials properties (formation energies, ground-state energies, lattice constants, and elastic constants) of 26 transition metal elements and 89 transition metal alloys. It serves as a reference for benchmarking computational models (such as lower-level modeling methods and exchange-correlation functionals). We expect that our database will significantly impact the materials science community. We conclude with a brief discussion of future research directions and the impact of our results.
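As context for the tabulated formation energies (this is the standard textbook definition, not a detail specific to this database), the formation energy per atom of a binary compound is typically obtained from DFT total energies as sketched below.

```latex
% Standard definition of the formation energy per atom of a binary A_xB_y
% compound from DFT total energies, with E_tot(A), E_tot(B) the per-atom
% energies of the elemental ground-state structures (generic notation,
% not database-specific).
\begin{equation}
  E_{f} \;=\;
  \frac{E_{\mathrm{tot}}(\mathrm{A}_x\mathrm{B}_y)
        - x\,E_{\mathrm{tot}}(\mathrm{A})
        - y\,E_{\mathrm{tot}}(\mathrm{B})}{x + y} .
\end{equation}
```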
Approximation algorithms for constraint satisfaction problems (CSPs) are a central direction of study in theoretical computer science. In this work, we study classical product state approximation algorithms for a physically motivated quantum generalization of Max-Cut, known as the quantum Heisenberg model. This model is notoriously difficult to solve exactly, even on bipartite graphs, in stark contrast to the classical setting of Max-Cut. Here we show, for any interaction graph, how to classically and efficiently obtain approximation ratios of 0.649 (anti-ferromagnetic XY model) and 0.498 (anti-ferromagnetic Heisenberg XYZ model). These are almost optimal; we show that the best possible ratios achievable by a product state for these models are 2/3 and 1/2, respectively.
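For reference, a common Max-Cut-like form of the anti-ferromagnetic Heisenberg Hamiltonian on an interaction graph is sketched below; the normalization and edge weights are assumptions, and the XY model corresponds to dropping the $Z_i Z_j$ term.

```latex
% Schematic anti-ferromagnetic Heisenberg Hamiltonian on a weighted
% interaction graph G = (V, E); X_i, Y_i, Z_i are Pauli operators on
% qubit i. Normalization conventions vary, so this is indicative only.
\begin{equation}
  H \;=\; \sum_{(i,j) \in E} w_{ij}
  \bigl( I - X_i X_j - Y_i Y_j - Z_i Z_j \bigr),
  \qquad w_{ij} \ge 0 .
\end{equation}
% Product state approximation: maximize \langle \psi | H | \psi \rangle over
% states of the form |\psi\rangle = \bigotimes_{i \in V} |\psi_i\rangle.
```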
In stochastic optimization, probabilities naturally arise as cost functionals and chance constraints. Unfortunately, these functions are difficult to handle both theoretically and computationally. The buffered probability of failure and its subsequent extensions were developed as numerically tractable, conservative surrogates for probabilistic computations. In this manuscript, we introduce the higher-moment buffered probability. Whereas the buffered probability is defined using the conditional value-at-risk, the higher-moment buffered probability is defined using higher-moment coherent risk measures. In this way, the higher-moment buffered probability encodes information about the magnitude of tail moments, not simply the tail average. We prove that the higher-moment buffered probability is closed, monotonic, quasi-convex and can be computed by solving a smooth one-dimensional convex optimization problem. These properties enable smooth reformulations of both higher-moment buffered probability cost functionals and constraints.
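For orientation, a commonly used minimization form of the CVaR-based buffered probability of exceeding a threshold is sketched below; the precise higher-moment definition introduced in the manuscript is not reproduced here, and the notation should be read as an assumption.

```latex
% A common minimization form of the (CVaR-based) buffered probability of
% exceedance of a threshold x for a random variable X; [\,\cdot\,]_+ denotes
% the positive part. The higher-moment variant replaces the tail average
% (an L^1 quantity) with an L^p norm via higher-moment coherent risk measures.
\begin{equation}
  \bar{p}_x(X) \;=\; \min_{a \ge 0}\;
  \mathbb{E}\bigl[\, a\,(X - x) + 1 \,\bigr]_+ .
\end{equation}
```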
We present a new method for reducing parallel applications’ communication time by mapping their MPI tasks to processors in a way that lowers the distance messages travel and the amount of congestion in the network. Assuming geometric proximity among the tasks is a good approximation of their communication interdependence, we use a geometric partitioning algorithm to order both the tasks and the processors, assigning task parts to the corresponding processor parts. In this way, interdependent tasks are assigned to “nearby” cores in the network. We also present a number of algorithmic optimizations that exploit specific features of the network or application to further improve the quality of the mapping. We specifically address the case of sparse node allocation, where the nodes assigned to a job are not necessarily located in a contiguous block nor within close proximity to each other in the network. However, our methods generalize to contiguous allocations as well, and results are shown for both contiguous and non-contiguous allocations. We show that, for the structured finite difference mini-application MiniGhost, our mapping methods reduced communication time by up to 75% relative to MiniGhost’s default mapping on 128K cores of a Cray XK7 with sparse allocation. For the atmospheric modeling code E3SM/HOMME, our methods reduced communication time by up to 31% on 16K cores of an IBM BlueGene/Q with contiguous allocation.
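A toy sketch of the underlying idea (not the actual implementation, which uses a geometric partitioner): both the task coordinates and the coordinates of the allocated nodes are ordered geometrically, here with a simple Morton-like key, and the i-th task in that order is assigned to the i-th node, so geometrically nearby tasks land on nearby nodes. All names and coordinates below are illustrative assumptions.

```python
import numpy as np

def morton_key(ix, iy, bits=16):
    """Interleave the bits of integer coordinates (ix, iy) into a Morton key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

def geometric_order(coords):
    """Order 2-D points by a Morton (Z-order) key over quantized coordinates."""
    coords = np.asarray(coords, dtype=float)
    mins, maxs = coords.min(axis=0), coords.max(axis=0)
    scaled = ((coords - mins) / np.maximum(maxs - mins, 1e-12) * 65535).astype(int)
    keys = [morton_key(ix, iy) for ix, iy in scaled]
    return np.argsort(keys)

# Hypothetical task coordinates (e.g., mesh-part centroids) and the network
# coordinates of the (possibly sparsely allocated) nodes assigned to the job.
rng = np.random.default_rng(1)
task_xy = rng.random((8, 2))
node_xy = rng.random((8, 2))

# Pair tasks and nodes by their positions in the two geometric orderings.
mapping = dict(zip(geometric_order(task_xy), geometric_order(node_xy)))
print(mapping)   # task index -> node index
```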
Tallman, Aaron E.; Stopka, Krzysztof S.; Swiler, Laura P.; Wang, Yan; Kalidindi, Surya R.; McDowell, David L.
Data-driven tools for finding structure–property (S–P) relations, such as the Materials Knowledge System (MKS) framework, can accelerate materials design once the costly and technical calibration process has been completed. A three-model method is proposed to reduce the expense of calibrating S–P relation models: (1) direct simulations, performed according to (2) a Gaussian process-based data collection model, are used to calibrate (3) an MKS homogenization model, in an application to α-Ti. The new method compares favorably with expert texture selection in terms of the performance of the resulting calibrated MKS models. Benefits for the development of new and improved materials are discussed.
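A minimal sketch of the kind of Gaussian process-driven data collection referred to above is given below: a generic active-sampling loop that runs the expensive simulation at the candidate where the GP is most uncertain. The kernel, the candidate parameterization, and the stand-in simulation are all placeholders, not the paper's model.

```python
import numpy as np

def rbf(a, b, length=0.2):
    """Squared-exponential kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_variance(x_train, x_cand, noise=1e-6):
    """Posterior predictive variance of a unit-variance, zero-mean GP."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_cand)
    solve = np.linalg.solve(K, Ks)
    return 1.0 - np.sum(Ks * solve, axis=0)

def expensive_sim(x):
    """Hypothetical stand-in for a direct (e.g., crystal-plasticity) simulation."""
    return np.sin(6 * x)

# Hypothetical candidates parameterized by a scalar in [0, 1] and a seed design.
candidates = np.linspace(0.0, 1.0, 101)
x_train = np.array([0.1, 0.9])
y_train = expensive_sim(x_train)

# Sequential data collection: simulate where the GP is most uncertain; the
# collected (x_train, y_train) pairs would then calibrate the MKS-style model.
for _ in range(5):
    var = gp_variance(x_train, candidates)
    x_next = candidates[np.argmax(var)]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, expensive_sim(x_next))

print(np.sort(x_train))
```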
This report presents the code verification of EMPIRE-PIC against the analytic solution for a cold diode, first derived by Jaffe. The cold diode was simulated using EMPIRE-PIC, and error norms were computed with respect to the Jaffe solution. The diode geometry is one-dimensional, and the simulations use the EMPIRE electrostatic field solver. After a transient start-up phase as the electrons first cross the anode-cathode gap, the simulations reach an equilibrium where the electric potential and electric field are approximately steady. The expected spatial orders of convergence for the potential, electric field, and particle velocity are observed.
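As a generic illustration of how an observed order of convergence is extracted in such a study (the error values and mesh sizes below are invented, not EMPIRE-PIC results):

```python
import numpy as np

# Hypothetical L2 error norms of the potential on a sequence of 1-D meshes
# with cell size h; real values would come from the EMPIRE-PIC vs. Jaffe comparison.
h   = np.array([1/32, 1/64, 1/128, 1/256])
err = np.array([2.1e-3, 5.4e-4, 1.35e-4, 3.4e-5])

# Observed order between successive refinements:
# p = log(e_i / e_{i+1}) / log(h_i / h_{i+1}).
p = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(p)   # values near 2 would indicate second-order spatial convergence
```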
We present a new, distributed-memory parallel algorithm for detecting degenerate mesh features that can cause singularities in ice sheet simulations. Identifying and removing mesh features such as disconnected components (icebergs) or hinge vertices (peninsulas of ice detached from the land) can significantly improve the convergence of iterative solvers. Because the ice sheet evolves during the course of a simulation, it is important that the detection algorithm can run in situ with the simulation, running in parallel and taking a negligible amount of computation time, so that degenerate features (e.g., calving icebergs) can be detected as they develop. We present a distributed-memory, BFS-based label-propagation approach to degenerate feature detection that is efficient enough to be called at each step of an ice sheet simulation, while correctly identifying all degenerate features of an ice sheet mesh. Our method finds all degenerate features in a mesh with 13 million vertices in 0.0561 seconds on 1536 cores in the MPAS Albany Land Ice (MALI) model. Compared to the previously used serial pre-processing approach, we observe a 46,000x speedup for our algorithm, and we provide the additional capability of dynamic detection of degenerate features during the simulation.
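A serial toy sketch of the label-propagation idea is shown below: BFS from land-connected "anchor" vertices, with anything unreached flagged as a disconnected component (an iceberg). The small mesh, the grounding rule, and all names are illustrative assumptions; the actual algorithm is distributed-memory and also detects hinge vertices.

```python
from collections import deque

# Hypothetical vertex adjacency of a tiny mesh and the set of vertices
# grounded on land; vertices not reachable from a grounded vertex form
# disconnected components ("icebergs").
adjacency = {
    0: [1], 1: [0, 2], 2: [1],      # ice connected to land
    3: [4], 4: [3],                 # floating fragment detached from land
}
grounded = {0}

def find_icebergs(adjacency, grounded):
    """BFS label propagation from grounded vertices; return unreached vertices."""
    reached = set(grounded)
    queue = deque(grounded)
    while queue:
        v = queue.popleft()
        for w in adjacency[v]:
            if w not in reached:
                reached.add(w)
                queue.append(w)
    return set(adjacency) - reached

print(find_icebergs(adjacency, grounded))   # {3, 4}
```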
Use of insensitive high explosives (IHEs) has significantly improved ammunition safety because of their remarkable insensitivity to violent cook-off, shock, and impact. Triamino-trinitrobenzene (TATB) is the IHE used in many modern munitions. Previously, lightning simulations in different test configurations have shown that the required detonation threshold for standard-density TATB at ambient and elevated temperatures (250 °C) has a sufficient margin over the shock caused by an arc from the most severe lightning. In this paper, the Braginskii model with the Lee-More channel conductivity prescription is used to demonstrate how electrical arcs from lightning could cause detonation in TATB. The steep rise and slow decay of a typical lightning pulse are used to demonstrate that the shock pressure from an electrical arc, after reaching its peak, falls off faster than the inverse of the arc radius. For detonation to occur, two necessary conditions must be met: the Pop-Plot criterion and a minimum spot size requirement. The relevant Pop-Plot for TATB at 250 °C was converted into an empirical detonation criterion that is applicable to explosives subjected to shocks of variable pressure. The arc cross-section was required to meet the minimum detonation spot size reported in the literature. One caveat is that when the shock pressure exceeds the detonation pressure, the Pop-Plot may not be applicable, and the minimum spot size requirement may be smaller.
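For illustration only, the sketch below shows one generic way a Pop-Plot (a log-log linear relation between shock pressure and run distance to detonation) can be recast as a criterion for a time-varying shock by accumulating fractional progress toward detonation. The coefficients, the constant shock-speed assumption, and the pressure pulse are all hypothetical and do not reproduce the paper's empirical criterion or TATB data.

```python
import numpy as np

# Hypothetical Pop-Plot fit: log10(x_run [mm]) = A - B * log10(P [GPa]).
A, B = 1.8, 1.5                     # illustrative coefficients, not TATB values

def run_distance(P_GPa):
    """Run distance to detonation from the assumed Pop-Plot power law."""
    return 10.0 ** (A - B * np.log10(P_GPa))

# Hypothetical decaying shock pressure history behind the arc (GPa vs. microseconds).
t = np.linspace(0.0, 2.0, 2001)                  # microseconds
P = 12.0 * np.exp(-t / 0.5) + 1e-6               # keep pressure positive
u_s = 4.0                                        # assumed shock speed, mm/us

# Accumulate fractional run-to-detonation; one generic criterion is that
# detonation occurs if the accumulated fraction reaches 1 before the pulse decays.
frac = np.sum(u_s / run_distance(P[:-1]) * np.diff(t))
print("detonation criterion met:", frac >= 1.0)
```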