This manuscript presents a complete framework for the development and verification of physics-informed neural networks with application to the alternating-current power flow (ACPF) equations. Physics-informed neural networks (PINNs) have received considerable interest within the power systems community for their ability to harness the underlying physical equations to produce simple neural network architectures that achieve high accuracy using limited training data. The methodology developed in this work builds on existing methods and explores important new aspects of the implementation of PINNs, including: (i) obtaining operationally relevant training data, (ii) efficiently training PINNs and using pruning techniques to reduce their complexity, and (iii) globally verifying the worst-case predictions given known physical constraints. The methodology is applied to the IEEE 14-bus and 118-bus systems, where PINNs show substantially improved accuracy in a data-limited setting and attain tighter guarantees on worst-case predictions.
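As a rough illustration of the physics-informed training idea described above, the sketch below combines a standard supervised loss with a penalty on the AC power flow residuals. It is a minimal NumPy sketch under assumed notation (`Vm`, `Va`, `G`, `B`, and the injection vectors are illustrative names, not taken from the manuscript), not the manuscript's implementation:

```python
import numpy as np

def acpf_residual(Vm, Va, G, B, P_inj, Q_inj):
    """Active/reactive power mismatch at each bus for the AC power flow
    equations, given voltage magnitudes Vm and angles Va (toy version)."""
    n = len(Vm)
    dP = np.zeros(n)
    dQ = np.zeros(n)
    for i in range(n):
        for j in range(n):
            th = Va[i] - Va[j]
            dP[i] += Vm[i] * Vm[j] * (G[i, j] * np.cos(th) + B[i, j] * np.sin(th))
            dQ[i] += Vm[i] * Vm[j] * (G[i, j] * np.sin(th) - B[i, j] * np.cos(th))
    return dP - P_inj, dQ - Q_inj

def pinn_loss(pred, target, Vm, Va, G, B, P_inj, Q_inj, lam=1.0):
    """Supervised MSE plus a weighted physics-residual penalty; the weight
    lam trades data fit against physical consistency."""
    data_loss = np.mean((pred - target) ** 2)
    dP, dQ = acpf_residual(Vm, Va, G, B, P_inj, Q_inj)
    physics_loss = np.mean(dP ** 2) + np.mean(dQ ** 2)
    return data_loss + lam * physics_loss
```

Because the physics term can be evaluated at arbitrary unlabeled operating points, it is what lets PINNs learn from limited labeled data.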
In many areas of constrained optimization, representing all of the constraints that define an accurate feasible region can be difficult and computationally prohibitive for online use. Satisfying feasibility constraints becomes more challenging in the high-dimensional, non-convex regimes that are common in engineering applications. A prominent example explored in this manuscript is the security-constrained optimal power flow (SCOPF) problem, which minimizes power generation costs while enforcing system feasibility under contingency failures in the transmission network. In its full form, this problem has been modeled as a nonlinear two-stage stochastic programming problem. In this work, we propose a hybrid structure that incorporates and takes advantage of both a high-fidelity physical model and fast machine learning surrogates. Neural network (NN) models have been shown to represent highly nonlinear classification boundaries and can be trained offline, but they require large training sets. We present how model-guided sampling can efficiently create datasets that are highly informative to an NN classifier for non-convex functions. We show how the resulting NN surrogates can be integrated into a nonlinear program as smooth, continuous functions, allowing existing nonlinear solvers to simultaneously optimize the objective function and enforce feasibility. Overall, this allows us to solve instances of the SCOPF problem with an order-of-magnitude improvement in CPU time over existing methods.
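One way to see why an NN surrogate can be handed to an off-the-shelf nonlinear solver is that, with smooth activations, both the feasibility score and its gradient are cheap closed-form expressions. The following hypothetical sketch uses hand-set weights standing in for a trained classifier (it is not the paper's model) and computes the analytic gradient a gradient-based NLP solver would query:

```python
import numpy as np

# Toy, hand-set weights standing in for a trained feasibility classifier;
# in the setting above these would come from offline training.
W1 = np.array([[1.5, -0.7], [0.3, 0.9]])
b1 = np.array([0.1, -0.2])
w2 = np.array([0.8, -1.1])
b2 = 0.05

def surrogate(x):
    """Smooth NN feasibility score: tanh hidden layer + sigmoid output,
    so it enters the NLP as a C1-continuous constraint function."""
    h = np.tanh(W1 @ x + b1)
    z = w2 @ h + b2
    return 1.0 / (1.0 + np.exp(-z))

def surrogate_grad(x):
    """Analytic gradient of the score, via the chain rule."""
    h = np.tanh(W1 @ x + b1)
    z = w2 @ h + b2
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s) * (W1.T @ (w2 * (1.0 - h ** 2)))
```

A solver would then impose, e.g., `surrogate(x) >= 0.5` as an ordinary smooth inequality constraint alongside the objective.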
Efficiently embedding and/or integrating mechanistic information with data-driven models is essential if one wishes to simultaneously take advantage of both engineering principles and data science. The opportunity for hybridization arises in many scenarios, such as the development of a faster surrogate for an accurate, high-fidelity computer model; the correction of a mechanistic model that does not fully capture the physical phenomena of the system; or the integration of a data-driven component that approximates an unknown correlation within a mechanistic model. At the same time, different techniques have been proposed in different bodies of literature to achieve this hybridization, such as hybrid modeling, physics-informed machine learning (ML), and model calibration. In this paper we review the methods, challenges, applications, and algorithms of these three research areas and discuss them in the context of the different hybridization scenarios. Moreover, we provide a comprehensive comparison of the hybridization techniques with respect to their differences and similarities, as well as their advantages, limitations, and future perspectives. Finally, we apply and illustrate hybrid modeling, physics-informed ML, and model calibration via a chemical reactor case study.
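To make the mechanistic-correction scenario concrete, the toy sketch below augments a first-order reaction rate law with a data-driven residual fitted to synthetic "plant" data. All kinetics, names, and numbers are invented for illustration and are not from the paper's reactor case study:

```python
import numpy as np

def mechanistic_rate(cA, k=0.5):
    """First-order kinetics r = k * cA -- the known engineering model."""
    return k * cA

def hybrid_rate(cA, correction):
    """Hybrid model: mechanistic backbone plus a learned residual term."""
    return mechanistic_rate(cA) + correction(cA)

# Synthetic 'plant' measurements include a second-order effect that the
# mechanistic model misses; fit the residual with a simple regressor.
cA = np.linspace(0.1, 2.0, 20)
plant = 0.5 * cA + 0.1 * cA ** 2        # hypothetical true kinetics
residual = plant - mechanistic_rate(cA)
corr = np.poly1d(np.polyfit(cA, residual, 2))  # data-driven correction
```

The same pattern extends to richer regressors (e.g., an NN in place of the polynomial) without changing the mechanistic part.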
This report documents the Resilience Enhancements through Deep Learning Yields (REDLY) project, a three-year effort to improve electrical grid resilience by developing scalable methods for system operators to protect the grid against threats that lead to interrupted service or physical damage. The computational complexity and uncertain nature of current real-world contingency analysis present significant barriers to automated, real-time monitoring. While there has been a significant push to explore the use of accurate, high-performance machine learning (ML) surrogate models to address this gap, their reliability is unclear when they are deployed in high-consequence applications such as power grid systems. Contemporary optimization techniques used to validate surrogate performance can exploit ML model prediction errors, which necessitates verifying the worst-case performance of these models.
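Worst-case verification of an ML surrogate amounts to bounding the model's output over a whole input region rather than at sampled points. The sketch below uses interval bound propagation, one simple sound-bounding technique (not necessarily the one used in the REDLY project), on a tiny hypothetical ReLU network:

```python
import numpy as np

def interval_bounds(weights, biases, lo, hi):
    """Interval bound propagation (IBP): sound (possibly loose) bounds on a
    ReLU network's outputs over the input box [lo, hi]."""
    for k, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = Wp @ lo + Wn @ hi + b   # worst case per sign of each weight
        new_hi = Wp @ hi + Wn @ lo + b
        lo, hi = new_lo, new_hi
        if k < len(weights) - 1:         # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Small hypothetical surrogate network for the demonstration.
W_list = [np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([[1.0, 1.0]])]
b_list = [np.array([0.0, -0.5]), np.array([0.0])]

def forward(x):
    h = np.maximum(W_list[0] @ x + b_list[0], 0.0)   # ReLU hidden layer
    return W_list[1] @ h + b_list[1]
```

Because the bounds hold for every point in the box, no adversarially chosen input can produce an output outside them, which is exactly the worst-case guarantee the validation step requires.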