The 2022 National Defense Strategy of the United States listed climate change as a serious threat to national security. Climate intervention methods, such as stratospheric aerosol injection, have been proposed as mitigation strategies, but the downstream effects of such actions on a complex climate system are not well understood. The development of algorithmic techniques for quantifying relationships between source and impact variables related to a climate event (i.e., a climate pathway) would help inform policy decisions. Data-driven deep learning models have become powerful tools for modeling highly nonlinear relationships and may provide a route to characterize climate variable relationships. In this paper, we explore the use of an echo state network (ESN) for characterizing climate pathways. ESNs are a computationally efficient neural network variation designed for temporal data, and recent work proposes ESNs as a useful tool for forecasting spatiotemporal climate data. However, like other neural networks, ESNs are noninterpretable black-box models, and this lack of transparency poses a hurdle for understanding variable relationships. We address this issue by developing feature importance methods for ESNs in the context of spatiotemporal data to quantify variable relationships captured by the model. We conduct a simulation study to assess and compare the feature importance techniques, and we demonstrate the approach on reanalysis climate data. In the climate application, we consider a time period that includes the 1991 volcanic eruption of Mount Pinatubo. This event was a significant stratospheric aerosol injection, which serves as a proxy for an anthropogenic stratospheric aerosol injection. Using the proposed approach, we characterize relationships between pathway variables associated with this event that agree with relationships previously identified by climate scientists.
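To make the modeling setup concrete, the following minimal sketch (illustrative only, not the authors' implementation; all function names and hyperparameters are hypothetical) shows an ESN with a fixed random reservoir and ridge-regression readout, together with a simple permutation-style feature importance score of the general kind developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_esn(X, y, n_res=200, rho=0.9, ridge=1e-4):
    """Fit a basic ESN: fixed random reservoir, ridge-regression readout.
    X: (T, p) input series, y: (T,) target series."""
    p = X.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_res, p))
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
    h, H = np.zeros(n_res), np.empty((X.shape[0], n_res))
    for t, x in enumerate(X):
        h = np.tanh(W_in @ x + W @ h)  # reservoir state update
        H[t] = h
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ y)
    return W_in, W, beta

def esn_predict(params, X):
    W_in, W, beta = params
    h, out = np.zeros(W.shape[0]), np.empty(X.shape[0])
    for t, x in enumerate(X):
        h = np.tanh(W_in @ x + W @ h)
        out[t] = h @ beta
    return out

def permutation_importance(params, X, y, j, n_perm=20):
    """Score feature j by the increase in MSE after permuting its series."""
    base = np.mean((esn_predict(params, X) - y) ** 2)
    deltas = []
    for _ in range(n_perm):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # plain shuffle, for brevity
        deltas.append(np.mean((esn_predict(params, Xp) - y) ** 2) - base)
    return np.mean(deltas)
```

For spatiotemporal data, permutations that preserve temporal dependence (e.g., block or circular shifts) are generally preferable to the plain shuffle shown here.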
As global temperatures continue to rise, climate mitigation strategies such as stratospheric aerosol injections (SAI) are increasingly discussed, but the downstream effects of these strategies are not well understood. As such, there is interest in developing statistical methods to quantify the evolution of climate variable relationships during the time period surrounding an SAI. Feature importance applied to echo state network (ESN) models has been proposed as a way to understand the effects of SAI using a data-driven model. This approach depends on the ESN fitting the data well; if it does not, the feature importance may place importance on features that are not representative of the underlying relationships. Typically, time series prediction models such as ESNs are assessed using out-of-sample performance metrics that divide the time series into separate training and testing sets. However, this model assessment approach is geared towards forecasting applications, not scenarios such as the motivating SAI example, where the objective is to use a data-driven model to capture variable relationships. In this paper, we demonstrate a novel use of climate model replicates to investigate the applicability of the commonly used repeated hold-out model assessment approach for the SAI application. Simulations of an SAI are generated using a simplified climate model, and different initialization conditions are used to provide independent training and testing sets containing the same SAI event. The climate model replicates enable out-of-sample measures of model performance, which are compared to the single time series hold-out validation approach. For our case study, we find that the repeated hold-out performance is comparable to, though conservative relative to, the replicate out-of-sample performance when the training set contains enough time after the aerosol injection.
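The two assessment strategies being compared can be sketched as follows (a schematic in Python; `fit` and `predict` stand in for any ESN fitting and prediction routines and are assumptions, not the authors' code):

```python
import numpy as np

def mse(y, yhat):
    return np.mean((y - yhat) ** 2)

def replicate_assessment(fit, predict, replicates):
    """Out-of-sample error using climate model replicates: train on one
    initialization, test on another that contains the same SAI event.
    `replicates` is a list of (X, y) pairs from different initial conditions."""
    errs = []
    for i, (X_tr, y_tr) in enumerate(replicates):
        model = fit(X_tr, y_tr)
        for k, (X_te, y_te) in enumerate(replicates):
            if k != i:
                errs.append(mse(y_te, predict(model, X_te)))
    return np.mean(errs)

def repeated_holdout(fit, predict, X, y, splits):
    """Standard repeated hold-out on a single time series: each split keeps
    temporal order, training on [:t] and testing on [t:]."""
    errs = []
    for t in splits:
        model = fit(X[:t], y[:t])
        errs.append(mse(y[t:], predict(model, X[t:])))
    return np.mean(errs)
```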
Physical experiments are often expensive and time-consuming. Test engineers must certify the compatibility of aircraft and their weapon systems before they can be deployed in the field, but the required testing is time-consuming, expensive, and resource-limited. Adopting Bayesian adaptive designs is a promising way to borrow from the successes seen in the clinical trials domain. The use of predictive probability (PP) to stop testing early and make faster decisions is particularly appealing given the aforementioned constraints. Given the high-consequence nature of the tests performed in the national security space, a strong understanding of new methods is required before they are deployed. Although PP has been thoroughly studied for binary data, there is less work with continuous data, where many reliability studies are interested in certifying the specification limits of components. A simulation study evaluating the robustness of this approach indicates that early stopping based on PP is reasonably robust to minor assumption violations, especially when only a few interim analyses are conducted. The simulation study also compares PP to conditional power, showing its relative strengths and weaknesses. A post hoc analysis exploring whether the release requirements of a weapon system from an aircraft are within specification with the desired reliability resulted in stopping the experiment early, saving 33% of the experimental runs.
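As an illustration of the core computation, the sketch below estimates the predictive probability of success at an interim analysis for a normal model with known variance and a conjugate prior; the priors, thresholds, and names are hypothetical simplifications of the continuous-data setting described above:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def predictive_probability(y_obs, n_total, spec, sigma=1.0, mu0=0.0,
                           tau0=10.0, post_thresh=0.95, n_sim=5000):
    """Monte Carlo predictive probability of trial success for a normal model
    with known sigma and a N(mu0, tau0^2) prior on the mean. 'Success' is
    defined here as posterior Pr(mean < spec) > post_thresh at the final n."""
    n, m = len(y_obs), n_total - len(y_obs)    # observed and remaining runs
    prec_n = 1 / tau0**2 + n / sigma**2        # current posterior precision
    mu_n = (mu0 / tau0**2 + y_obs.sum() / sigma**2) / prec_n
    successes = 0
    for _ in range(n_sim):
        mu_draw = rng.normal(mu_n, prec_n**-0.5)   # draw from current posterior
        y_new = rng.normal(mu_draw, sigma, m)      # imagined future data
        prec_f = 1 / tau0**2 + n_total / sigma**2  # posterior at full sample
        mu_f = (mu0 / tau0**2 + (y_obs.sum() + y_new.sum()) / sigma**2) / prec_f
        if norm.cdf(spec, mu_f, prec_f**-0.5) > post_thresh:
            successes += 1
    return successes / n_sim

# At each interim analysis, one would stop early for success if the PP
# exceeds an upper cutoff, or for futility if it falls below a lower cutoff.
```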
Physical fatigue can have adverse effects on humans in extreme environments. Therefore, being able to predict fatigue using easy-to-measure metrics such as heart rate (HR) signatures has the potential for real-life impact. We apply a functional logistic regression model that uses HR signatures to predict physical fatigue, where physical fatigue is defined in a data-driven manner. Data were collected using commercially available wearable devices on 47 participants hiking the 20.7-mile Grand Canyon rim-to-rim trail in a single day. The fitted model provides good predictions and interpretable parameters for real-life application.
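A minimal sketch of one common way to set up such a model, assuming HR curves observed on a common grid and a Fourier basis expansion (an illustration, not the authors' exact specification):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fourier_basis(t, K):
    """Fourier basis on [0, 1]: intercept plus K sine/cosine pairs."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
    return np.column_stack(cols)

def fit_functional_logistic(curves, y, K=4):
    """curves: (n, T) heart-rate trajectories on a common grid; y: (n,) binary
    fatigue labels. Projecting each curve onto a small basis turns the
    functional covariate into a finite score vector for a logistic fit."""
    n, T = curves.shape
    t = np.linspace(0, 1, T)
    B = fourier_basis(t, K)                 # (T, 2K+1) basis matrix
    scores = curves @ B / T                 # per-subject basis coefficients
    model = LogisticRegression(C=1.0, max_iter=1000).fit(scores, y)
    beta_t = B @ model.coef_.ravel()        # interpretable coefficient function
    return model, beta_t
```

The recovered coefficient function beta(t) shows which portions of the HR trajectory drive the fatigue prediction, which is the kind of interpretability the abstract refers to.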
This project evaluated the use of emerging spintronic memory devices for robust and efficient variational inference schemes. Variational inference (VI) schemes, which constrain the distribution for each weight to be a Gaussian with a mean and standard deviation, are a tractable method for calculating posterior distributions of weights in a Bayesian neural network such that the network can also be trained using the powerful backpropagation algorithm. Our project focuses on domain-wall magnetic tunnel junctions (DW-MTJs), a powerful multi-functional spintronic synapse design that can achieve low-power switching while also opening a pathway towards repeatable, analog operation using fabricated notches. Our initial efforts to employ DW-MTJs as an all-in-one stochastic synapse encoding both a mean and a standard deviation did not meet the quality metrics for hardware-friendly VI; new device stacks and methods for expressive anisotropy modification may yet make this idea possible. However, as a fallback that immediately satisfies our requirements, we invented and detailed how the combination of a DW-MTJ synapse encoding the mean and a probabilistic Bayes-MTJ device, programmed via a ferroelectric or ionically modifiable layer, can robustly and expressively implement VI. This design includes a physics-informed compact circuit model, which was scaled up to demonstrate rigorous uncertainty quantification applications, up to and including small convolutional networks on a grayscale image classification task and larger (residual) networks implementing multi-channel image classification. Lastly, because these results all depend on an inference application in which the weights (spintronic memory states) remain non-volatile, we further interrogated the retention of these synapses for the notched case. These investigations revealed and emphasized the importance of both notch geometry and anisotropy modification for further enhancing the endurance of written spintronic states. In the near future, these results will be mapped to effective predictions for room-temperature and elevated-temperature DW-MTJ memory retention and experimentally verified when devices become available.
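For readers less familiar with VI, the software analogue of the mean-plus-noise hardware pairing described above is a mean-field Gaussian weight sampled via the reparameterization trick; the sketch below is purely conceptual (hypothetical names, no device physics):

```python
import numpy as np

rng = np.random.default_rng(2)

class GaussianVISynapse:
    """Mean-field Gaussian variational weight: w = mu + sigma * eps.
    Conceptually, mu maps to the DW-MTJ state and the stochastic term to the
    probabilistic Bayes-MTJ device; this software sketch is illustrative only."""
    def __init__(self, shape):
        self.mu = rng.normal(0, 0.1, shape)   # deterministic mean state
        self.rho = np.full(shape, -3.0)       # sigma = softplus(rho) > 0

    @property
    def sigma(self):
        return np.log1p(np.exp(self.rho))

    def sample(self):
        eps = rng.standard_normal(self.mu.shape)   # noise source
        return self.mu + self.sigma * eps          # reparameterization trick

def predict_with_uncertainty(synapse, x, n_samples=50):
    """Monte Carlo forward passes through a single linear layer: the spread
    across sampled weights gives a per-input uncertainty estimate."""
    outs = np.stack([x @ synapse.sample() for _ in range(n_samples)])
    return outs.mean(axis=0), outs.std(axis=0)
```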
Inverse prediction models have commonly been developed to handle scalar data from physical experiments. However, it is not uncommon for data to be collected in functional form; such data must then be aggregated to fit the form of traditional methods, which often results in a loss of information. For expensive experiments, this loss of information can be costly. In this study, we introduce the functional inverse prediction (FIP) framework, a general approach that uses the full information in functional response data to provide inverse predictions with probabilistic prediction uncertainties obtained via the bootstrap. The FIP framework can be modified by practitioners to accommodate many different applications and types of data. We demonstrate the framework, highlighting points of flexibility, with a simulation example and applications to weather data and to nuclear forensics. Results show how functional models can improve the accuracy and precision of predictions.
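A rough sketch of the generic pattern, assuming a user-supplied forward-model fitting routine and a grid-search inversion (names and interfaces are hypothetical; the actual FIP framework is more general):

```python
import numpy as np

rng = np.random.default_rng(3)

def inverse_predict(forward, y_obs, x_grid):
    """Invert a fitted forward model by grid search: return the x whose
    predicted response curve is closest in L2 to the observed curve."""
    errs = [np.sum((forward(x) - y_obs) ** 2) for x in x_grid]
    return x_grid[int(np.argmin(errs))]

def fip_bootstrap(fit_forward, X, Y, y_obs, x_grid, n_boot=200):
    """FIP-style uncertainty: refit the forward model on bootstrap resamples
    of the training pairs (X, Y), invert each refit at y_obs, and summarize
    the sample of inverse predictions. `fit_forward` returns a callable that
    maps a scalar x to a predicted response curve."""
    preds = []
    n = len(X)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample with replacement
        forward_b = fit_forward(X[idx], Y[idx])
        preds.append(inverse_predict(forward_b, y_obs, x_grid))
    preds = np.asarray(preds)
    return preds.mean(), np.percentile(preds, [2.5, 97.5])  # point est., 95% interval
```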