To improve the resiliency of both small and large distribution systems, the concept of microgrids is emerging. The ability of sections of the distribution system to be 'self-sufficient' and operate on their own generation is desirable. It would leave only small sections of the system without power after abnormal events such as a fault or a natural disaster, allowing a greater number of consumers to carry on their lives as normal. Research is needed to determine how different forms of generation will perform in a microgrid, as well as how to properly protect an islanded system. While synchronous generators are well understood and generally accepted amongst utility operators, inverter-based resources (IBRs) are less common. An IBR's fault characteristic varies between manufacturers and depends heavily on the internal control scheme. Additionally, because of internal protections that prevent damage to the switching components, IBRs are usually limited to only 1.1-2.5 p.u. of rated current, depending on the technology. As a result, traditional protection methods such as overcurrent devices are unable to 'trip' in a microgrid with high IBR penetration. Moreover, grid-following inverters (commonly used for photovoltaic systems) require a voltage source to synchronize with before operating, and they do not provide any inertia to the system. On the other hand, grid-forming inverters can operate as a primary voltage source and provide an 'emulated inertia' to the system. This study examines a small islanded system with a grid-forming inverter and a grid-following inverter subjected to a line-to-ground fault.
A novel method for fault classification and location is presented in this paper. The method is divided into an initial signal processing stage followed by a machine learning stage. The initial stage analyzes voltages and currents with a window-based approach built on the dynamic mode decomposition (DMD) and then applies signal norms to the resulting DMD data. The outputs of the signal norms are used as features for a random forest that classifies the type of fault in the system and estimates the fault location. The method was tested on a small distribution system, where it showed an accuracy of 100% in fault classification and a mean error of approximately 30 m when predicting the fault location.
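As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below extracts window-based DMD features with numpy and feeds their norms to a scikit-learn random forest; the window length, the particular norms, and the placeholder data are all assumptions.

```python
# Hedged sketch: windowed DMD feature extraction followed by a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dmd_features(window):
    """window: (n_channels, n_samples) of voltages/currents for one window."""
    X1, X2 = window[:, :-1], window[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    r = np.sum(s > 1e-10 * s[0])                  # simple rank truncation
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals = np.linalg.eigvals(A_tilde)
    # Signal norms applied to the DMD output (the choice of norms is assumed)
    return np.array([np.linalg.norm(eigvals, 1),
                     np.linalg.norm(eigvals, 2),
                     np.max(np.abs(eigvals))])

def build_features(records, win=64, step=32):
    """records: list of (waveform, label) pairs; waveform is (channels, samples)."""
    X, y = [], []
    for wf, label in records:
        for start in range(0, wf.shape[1] - win + 1, step):
            X.append(dmd_features(wf[:, start:start + win]))
            y.append(label)
    return np.asarray(X), np.asarray(y)

# Example with synthetic placeholder data (real data would come from simulation)
rng = np.random.default_rng(0)
records = [(rng.standard_normal((6, 512)), k % 4) for k in range(20)]
Xf, yf = build_features(records)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xf, yf)
```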
Identifying the location of faults in a fast and accurate manner is critical for effective protection and restoration of distribution networks. This paper describes an efficient method for detecting, localizing, and classifying faults using advanced signal processing and machine learning tools. The method uses an Isolation Forest technique to detect the fault. Then, the Continuous Wavelet Transform (CWT) is used to analyze the traveling waves produced by the faults. The CWT coefficients of the current signals at the time of arrival of the traveling wave present unique characteristics for different fault types and locations. These CWT coefficients are fed into a Convolutional Neural Network (CNN) to train and classify fault events. The results show that for multiple fault scenarios and solar PV conditions, the method is able to determine the fault type and location with high accuracy.
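A minimal sketch of the three stages described above, under assumed tooling (scikit-learn's IsolationForest, PyWavelets for the CWT, and PyTorch for the CNN); window sizes, the wavelet, the scales, and the number of fault classes are illustrative choices, not the paper's settings.

```python
# Hedged sketch: Isolation Forest detection, CWT scalogram, small CNN classifier.
import numpy as np
import pywt
import torch
import torch.nn as nn
from sklearn.ensemble import IsolationForest

def detect_fault(current, win=32):
    """Flag anomalous windows of the measured current with an Isolation Forest."""
    windows = np.lib.stride_tricks.sliding_window_view(current, win)[::win]
    iso = IsolationForest(contamination=0.01, random_state=0).fit(windows)
    flags = iso.predict(windows)               # -1 marks an anomalous window
    hits = np.where(flags == -1)[0]
    return None if hits.size == 0 else hits[0] * win   # sample index of arrival

def scalogram(current, t_arrival, width=128, scales=np.arange(1, 33)):
    """CWT coefficients in a window around the traveling-wave arrival."""
    seg = current[max(0, t_arrival - width // 2): t_arrival + width // 2]
    coef, _ = pywt.cwt(seg, scales, "morl")
    return coef.astype(np.float32)             # (n_scales, width) image for the CNN

class FaultCNN(nn.Module):
    def __init__(self, n_classes=11):          # number of fault types is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 16, n_classes))
    def forward(self, x):                      # x: (batch, 1, n_scales, width)
        return self.net(x)
```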
With the increase in penetration of inverter-based resources (IBRs) in the electrical power system, the ability of these devices to provide grid support has become a necessity. With standards previously developed for the interconnection requirements of grid-following inverters (GFLIs) (most commonly photovoltaic inverters), it has been well documented how these inverters 'should' respond to changes in voltage and frequency. However, for other IBRs such as grid-forming inverters (GFMIs) (used for energy storage systems, standalone systems, and as uninterruptible power supplies), these requirements are either not yet documented or require a more in-depth analysis. With the increased interest in microgrids, GFMIs that can be paralleled onto a distribution system have become desirable. With the proper control schemes, a GFMI can help maintain grid stability through a faster response than rotating machines. This paper presents an experimental comparison of commercially available GFMI and GFLI responses to voltage and frequency deviations, as well as the GFMI operating as a standalone system subjected to various changes in load.
In the near future, grid operators are expected to regularly use advanced distributed energy resource (DER) functions, defined in IEEE 1547-2018, to perform a range of grid-support operations. Many of these functions adjust the active and reactive power of the device through commanded or autonomous modes, which will produce new stresses on the grid-interfacing power electronics components, such as DC/AC inverters. In previous work, multiple DER devices were instrumented to evaluate additional component stress under multiple reactive power setpoints. We utilize quasi-static time-series simulations to determine the voltage-reactive power mode (volt-var) mission profiles of inverters in an active power system. Mission profiles and loss estimates are then combined to estimate the reduction in the useful life of inverters under different reactive power profiles. It was found that the average lifetime reduction was approximately 0.15% for an inverter between standard unity power factor operation and the IEEE 1547 default volt-var curve, based on thermal damage due to switching in the power transistors. For an inverter with an expected 20-year lifetime, the 1547 volt-var curve would reduce the expected life of the device by 12 days. This framework for determining an inverter's useful life from experimental and modeling data can be applied to any failure mechanism and advanced inverter operation.
Frequent changes in penetration levels of distributed energy resources (DERs) and grid control objectives have caused the maintenance of accurate and reliable grid models for behind-the-meter (BTM) photovoltaic (PV) system impact studies to become an increasingly challenging task. At the same time, high adoption rates of advanced metering infrastructure (AMI) devices have improved load modeling techniques and have enabled the application of machine learning algorithms to a wide variety of model calibration tasks. Therefore, we propose that these algorithms can be applied to improve the quality of the input data and grid models used for PV impact studies. In this paper, these potential improvements were assessed for their ability to improve the accuracy of locational BTM PV hosting capacity analysis (HCA). Specifically, the voltage- and thermal-constrained hosting capacities of every customer location on a distribution feeder (1,379 in total) were calculated every 15 minutes for an entire year before and after each calibration algorithm or load modeling technique was applied. Overall, the HCA results were found to be highly sensitive to the various modeling deficiencies under investigation, illustrating the opportunity for more data-centric/model-free approaches to PV impact studies.
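For readers unfamiliar with locational hosting capacity analysis, the sketch below shows one common way a voltage- and thermal-constrained hosting capacity value can be computed for a single customer at a single time step by bisection over PV size; the power flow call is a hypothetical placeholder (e.g., a wrapper around a feeder model) and the limits and tolerance are illustrative, not this study's procedure.

```python
# Hedged illustration of a per-customer, per-time-step hosting capacity search.
def hosting_capacity(solve_snapshot, kw_max=1000.0, v_limit=1.05,
                     loading_limit=1.0, tol=1.0):
    """solve_snapshot(pv_kw) -> (max_voltage_pu, max_loading) for one customer
    at one 15-minute interval; returns the largest PV size with no violation."""
    lo, hi = 0.0, kw_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        v, loading = solve_snapshot(mid)
        if v <= v_limit and loading <= loading_limit:
            lo = mid                  # feasible, try a larger PV size
        else:
            hi = mid                  # violation, shrink the interval
    return lo
```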
The paper proposes an implementation of Graph Neural Networks (GNNs) for distribution power system Traveling Wave (TW)-based protection schemes. Simulated faults on the IEEE 34 system are processed using the Karrenbauer transform and the Stationary Wavelet Transform (SWT), and the energy of the resulting signals is calculated using Parseval's theorem. These data are used to train Graph Convolutional Networks (GCNs) to perform fault zone location. Several levels of measurement noise are considered for comparison. The results show strong performance, with accuracy above 90% for the most developed models, and outline a fast, reliable, asynchronous, and distributed protection scheme for distribution-level networks.
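A hedged sketch of the signal-processing front end, under assumed parameters (wavelet, decomposition level, window length): phase currents are mapped to modal quantities with one common form of the Karrenbauer matrix, decomposed with the SWT, and reduced to per-band energies via Parseval's relation. A graph neural network (e.g., built with PyTorch Geometric) would then consume one such feature vector per measurement node of the graph.

```python
# Hedged sketch of the Karrenbauer + SWT + band-energy feature extraction.
import numpy as np
import pywt

# One common form of the Karrenbauer phase-to-modal matrix (assumed here)
K = (1.0 / 3.0) * np.array([[1, 1, 1],
                            [1, -2, 1],
                            [1, 1, -2]], dtype=float)

def band_energies(i_abc, wavelet="db4", level=4):
    """i_abc: (3, n_samples) phase currents, n_samples divisible by 2**level.
    Returns a (3 modes x level) array of detail-band energies."""
    modes = K @ i_abc                               # ground mode + two aerial modes
    feats = []
    for m in modes:
        coeffs = pywt.swt(m, wavelet, level=level)  # [(cA_k, cD_k), ...]
        feats.append([np.sum(cD ** 2) for _, cD in coeffs])   # Parseval energy
    return np.asarray(feats)

# Example on a synthetic window (real windows would come from EMT simulation)
rng = np.random.default_rng(1)
features = band_energies(rng.standard_normal((3, 256)))
```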
The proper coordination of power system protective devices is essential for maintaining grid safety and reliability but requires precise knowledge of fault current contributions from generators like solar photovoltaic (PV) systems. PV inverter fault response is known to change with atmospheric conditions, grid conditions, and inverter control settings, but this time-varying behavior may not be fully captured by conventional static fault studies that are used to evaluate protection constraints in PV hosting capacity analyses. To address this knowledge gap, hosting capacity protection constraints were evaluated on a simplified test circuit using both a time-series fault analysis and a conventional static fault study approach. A PV fault contribution model was developed and utilized in the test circuit after being validated by hardware experiments under various irradiances, fault voltages, and advanced inverter control settings. While the results were comparable for certain protection constraints, the time-series fault study identified additional impacts that would not have been captured with the conventional static approach. Overall, while conducting full time-series fault studies may become prohibitively burdensome, these findings indicate that existing fault study practices may be improved by including additional test scenarios to better capture the time-varying impacts of PV on hosting capacity protection constraints.
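As a rough illustration of why PV fault contributions are time-varying (not the validated model from this work), the snippet below sketches a current-limited inverter response that depends on irradiance, retained voltage, and a dynamic reactive-support setting; the 1.2 p.u. current limit and the reactive gain are assumptions.

```python
# Hedged, simplified PV fault-contribution model for illustration only.
def pv_fault_current_pu(irradiance_pu, v_retained_pu, i_limit_pu=1.2,
                        k_reactive=2.0, dynamic_support=False):
    """Return (active, reactive) fault-current components in per unit of rating."""
    i_active = min(irradiance_pu, 1.0)                 # pre-fault current from MPPT
    i_reactive = 0.0
    if dynamic_support:
        # Reactive current proportional to the voltage depression (assumed gain)
        i_reactive = min(k_reactive * max(0.0, 1.0 - v_retained_pu), i_limit_pu)
    # Enforce the inverter's total current limit, prioritizing reactive current
    i_active = min(i_active, max(0.0, (i_limit_pu**2 - i_reactive**2) ** 0.5))
    return i_active, i_reactive

# Example: 70% irradiance, fault drops the terminal voltage to 0.4 pu
print(pv_fault_current_pu(0.7, 0.4, dynamic_support=True))
```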
This paper proposes a framework to explain and quantify how a Traveling Wave (TW)-based fault location classifier, a Random Forest, is affected by different TW propagation factors. The classifier's goal is to determine the faulted protection zone. In order to work with a simplified yet realistic distribution system, this work considers a use case with different configurations obtained by optionally including several common distribution elements such as voltage regulators, capacitor banks, laterals, and extra loads. Simulated faults are decomposed into frequency bands using the Stationary Wavelet Transform, and the classifier is trained on the energies of the resulting signals. SHapley Additive exPlanations (SHAP) are used to identify the most important features, and the effect of different fault configurations is quantified using the Jensen-Shannon divergence. Results show that distance, the presence of voltage regulators, and the fault type are the main factors that affect the classifier's behavior.
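A minimal sketch of the explainability workflow, assuming scikit-learn, the shap package, and scipy; the features, labels, and the two synthetic configurations are placeholders used only to show how SHAP importances and a Jensen-Shannon comparison fit together.

```python
# Hedged sketch: random forest on band energies, SHAP importances, and a
# Jensen-Shannon comparison of feature distributions across configurations.
import numpy as np
import shap
from scipy.spatial.distance import jensenshannon
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_base = rng.gamma(2.0, 1.0, size=(500, 12))      # band energies, base config (synthetic)
X_vreg = rng.gamma(2.5, 1.0, size=(500, 12))      # with a voltage regulator (synthetic)
y = rng.integers(0, 4, size=500)                  # protection-zone labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_base, y)

# SHAP values identify which band-energy features drive the zone prediction
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_base)

# Jensen-Shannon divergence between feature histograms of two configurations
# (scipy returns the JS *distance*, i.e. the square root of the divergence)
def js_divergence(a, b, bins=50):
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    return jensenshannon(p, q) ** 2

shift_per_feature = [js_divergence(X_base[:, j], X_vreg[:, j]) for j in range(12)]
```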
Modern distribution systems can accommodate different topologies through controllable tie lines to increase the reliability of the system. Estimating the prevailing circuit topology or configuration is of particular importance at the substation for different applications to properly operate and control the distribution system. One of the applications of circuit configuration estimation is adaptive protection. An adaptive protection system relies on the communication system infrastructure to identify the latest status of the power system. However, when the communication links to some of the equipment are out of service, the adaptive protection system may lose its awareness of the status of the system. Therefore, it is necessary to estimate the circuit status using the available healthy communicated data. This paper proposes the use of machine learning algorithms at the substation to estimate the circuit configuration when communication to the tie breakers is compromised. In doing so, the adaptive protection system can identify the correct protection settings corresponding to the estimated circuit topology. The effectiveness of the proposed approach is verified on the IEEE 123-bus test system.
Interest in the application of DC microgrids to distribution systems has been spurred by the continued rise of renewable energy resources and the dependence on DC loads. However, in comparison to AC systems, the lack of a natural zero crossing in DC microgrids makes the interruption of fault currents with fuses and circuit breakers more difficult. DC faults can cause severe damage to voltage-source converters within a few milliseconds; hence, faults need to be detected and isolated quickly. In this paper, the potential for five different Machine Learning (ML) classifiers to identify fault type and fault resistance in a DC microgrid is explored. The ML algorithms are trained using simulated fault data recorded from a 750 VDC microgrid modeled in PSCAD/EMTDC. The performance of the trained algorithms is tested using real fault data gathered from an operational DC microgrid located on the Kirtland Air Force Base. Of the five ML algorithms, three could detect the fault and determine the fault type with at least 99% accuracy, and only one could estimate the fault resistance with at least 99% accuracy. By performing self-learning monitoring and decision-making analysis, protection relays equipped with ML algorithms can quickly detect and isolate faults to improve protection operations on DC microgrids.
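A hedged sketch of how such a classifier comparison can be set up with scikit-learn; the specific five classifiers, the fault features, and the data below are illustrative stand-ins rather than the paper's choices, and the real evaluation used field data rather than synthetic arrays.

```python
# Hedged sketch: compare several classifiers on fault-record features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 6))       # placeholder features (e.g., dv, di/dt, I_fault)
y = rng.integers(0, 3, size=2000)        # e.g., pole-pole, pole-ground, no fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(kernel="rbf"),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```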
Adaptive protection is defined as a real-time system that can modify the protective actions according to changes in the system condition. An adaptive protection system (APS) is conventionally coordinated through a central management system located at the distribution system substation. An APS depends significantly on the communication infrastructure to monitor the latest status of the electric power grid and send appropriate settings to all of the protection relays in the grid. This makes an APS highly vulnerable to communication system failures (e.g., broken communication links due to natural disasters as well as wide-ranging cyber-attacks). To this end, this paper presents the addition of local adaptive modular protection (LAMP) units to the protection system to guarantee its reliable operation under extreme events when the operation of the APS is compromised. LAMP units operate in parallel with the conventional APS. As a backup, if the APS fails to operate because of an issue in the communication system, LAMP units can provide reliable fault detection and location on behalf of the protection relay. The performance of the proposed APS is verified using the IEEE 123-node test system.
Communication-assisted adaptive protection can improve the speed and selectivity of the protection system. However, in the event that communication from the centralized adaptive protection system to the relays is disrupted, predicting the local relay protection settings is a viable alternative. This work evaluates the potential for machine learning to overcome these challenges by using the Prophet algorithm programmed into each relay to individually predict the time-dial (TDS) and pickup current (IPICKUP) settings. A modified IEEE 123 feeder was used to generate the data needed to train and test the Prophet algorithm to individually predict the TDS and IPICKUP settings. The models were evaluated using the mean absolute percentage error (MAPE) and the root mean squared error (RMSE) as metrics. The results show that the algorithms could accurately predict the IPICKUP setting with an average MAPE accuracy of 99.961% and the TDS setting with an average MAPE accuracy of 94.32%, which is sufficient for protection parameter prediction.
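A minimal sketch of per-setting forecasting with the Prophet library, under assumed data granularity and horizon; in practice one model would be fit per relay and per setting (TDS and IPICKUP), and the placeholder series below stands in for the historical settings produced by the adaptive protection system.

```python
# Hedged sketch: forecast one relay setting with Prophet and score it.
import numpy as np
import pandas as pd
from prophet import Prophet
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error

def fit_and_forecast(history, periods=96, freq="15min"):
    """history: DataFrame with columns 'ds' (timestamp) and 'y' (setting value)."""
    model = Prophet()
    model.fit(history)
    future = model.make_future_dataframe(periods=periods, freq=freq)
    return model.predict(future)["yhat"].tail(periods).to_numpy()

# Placeholder historical TDS values for one relay (real values come from the APS)
ds = pd.date_range("2022-01-01", periods=2000, freq="15min")
history = pd.DataFrame({"ds": ds, "y": 0.5 + 0.05 * np.sin(np.arange(2000) / 96)})
y_true = 0.5 + 0.05 * np.sin(np.arange(2000, 2096) / 96)

y_pred = fit_and_forecast(history)
mape = mean_absolute_percentage_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
```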
The installation of digital sensors, such as advanced metering infrastructure (AMI) meters, has provided the means to implement a wide variety of techniques to increase visibility into the distribution system, including the ability to calibrate utility models using data-driven algorithms. One challenge in maintaining accurate and up-to-date distribution system models is identifying changes and events that occur during the year, such as customers whose phases have changed due to maintenance or other events. This work proposes a method for the detection of phase change events that utilizes techniques from an existing phase identification algorithm. The method applies an ensemble step to obtain predicted phases for windows of data, thereby allowing the predicted phase of each customer to be observed over time. The proposed algorithm was tested on four utility datasets as well as a synthetic dataset. The synthetic tests showed the algorithm was capable of accurately detecting true phase change events while limiting the number of false-positive events flagged. In addition, the algorithm identified possible phase change events on two of the real datasets.
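As an illustrative stand-in (not the paper's algorithm, which builds on a prior ensemble phase-identification method), the sketch below predicts a phase per data window from voltage correlation and flags a change only when the new phase persists across several windows, which is the intuition behind limiting false positives.

```python
# Hedged sketch: windowed phase prediction and persistence-based change flagging.
import numpy as np

def predict_phase_per_window(v_customer, v_ref_phases, win=96 * 7):
    """v_customer: (n_samples,) voltage magnitudes; v_ref_phases: (3, n_samples).
    Returns one predicted phase index per non-overlapping window."""
    preds = []
    for s in range(0, len(v_customer) - win + 1, win):
        seg = v_customer[s:s + win]
        corr = [np.corrcoef(seg, ref[s:s + win])[0, 1] for ref in v_ref_phases]
        preds.append(int(np.argmax(corr)))
    return np.array(preds)

def flag_phase_change(window_preds, persist=3):
    """Flag a change only when the new phase persists for several windows,
    which limits false positives from single noisy windows."""
    base = window_preds[0]
    for k in range(len(window_preds) - persist + 1):
        block = window_preds[k:k + persist]
        if np.all(block == block[0]) and block[0] != base:
            return k, int(block[0])        # (window index of change, new phase)
    return None
```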
Downtown low-voltage (LV) distribution networks are generally protected with network protectors that detect faults by restricting reverse power flow out of the network. This creates challenges for protecting the system as new smart grid technologies and distributed generation are installed. This report summarizes well-established methods for the control and protection of LV secondary network systems and spot networks, including operating features of network relays. Some current challenges and findings are presented from interviews with three utilities: PHI PEPCO, Oncor Energy Delivery, and Consolidated Edison Company of New York. Opportunities for technical exploration are presented with an assessment of their importance or value and their difficulty or cost. Finally, this leads to recommendations for research to improve protection in secondary networks.
2022 IEEE Texas Power and Energy Conference, TPEC 2022
Biswal, Milan; Pati, Shubhasmita; Ranade, Satish J.; Lavrova, Olga; Reno, Matthew J.
The application of traveling wave principles for fault detection in distribution systems is challenging because of multiple reflections from the laterals and other lumped elements, particularly when we consider communication-free applications. We propose and explore the use of Shapelets to characterize fault signatures and a data-driven machine learning model to accurately classify the faults based on their distance. Studies of a simple 5-bus system suggest that the use of Shapelets for detecting faults is promising. The application to practical three-phase distribution feeders is the subject of continuing research.
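A hedged sketch of the shapelet idea with a naive random shapelet draw; the minimum sliding-window distance from each waveform to each shapelet is used as a feature for a classifier over fault-distance classes. The shapelet length, the candidate-selection strategy, and the placeholder data are assumptions, not the paper's method.

```python
# Hedged sketch: shapelet-distance features for fault-distance classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def min_distance(series, shapelet):
    """Smallest Euclidean distance between the shapelet and any window of series."""
    L = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, L)
    return np.min(np.linalg.norm(windows - shapelet, axis=1))

def shapelet_features(series_list, shapelets):
    return np.array([[min_distance(s, sh) for sh in shapelets] for s in series_list])

rng = np.random.default_rng(0)
waveforms = [rng.standard_normal(400) for _ in range(60)]   # placeholder TW records
labels = rng.integers(0, 3, size=60)                        # fault-distance classes
# Draw candidate shapelets from the training waveforms (length is assumed)
shapelets = [w[i:i + 40] for w in waveforms[:10] for i in (50, 200)]

X = shapelet_features(waveforms, shapelets)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
```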
As legacy distance protection schemes begin to transition from impedance-based to traveling wave (TW) time-based principles, it is important to perform diligent simulations prior to commissioning a TW relay. Since Control-Hardware-In-the-Loop (CHIL) simulations have recently become common practice in power system research, this work aims to illustrate some limitations in the integration of commercially available TW relays in CHIL for transmission-level simulations. The interconnection of Frequency-Dependent (FD) with PI-modeled transmission lines, which is a common practice in CHIL, may lead to sharp reflections that ease the relaying task. However, modeling contiguous lines as FD, or the presence of certain shunt loads, may mask certain TW reflections. As a consequence, the fault location algorithm in the relay may produce an incorrect result. In this paper, a qualitative comparison of the performance of a commercially available TW relay is carried out to show how the system modeling in CHIL may affect the fault location accuracy.
Incorrect modeling of control characteristics for inverter-based resources (IBRs) can affect the accuracy of electric power system studies. In many distribution system contexts, the control settings for behind-the-meter (BTM) IBRs are unknown. This paper presents an efficient method for selecting a small number of time series samples from net load meter data that can be used for reconstructing or classifying the control settings of BTM IBRs. Sparse approximation techniques are used to select the time series samples that cause the inversion of a matrix of candidate responses to be as well-conditioned as possible. We verify these methods on 451 actual advanced metering infrastructure (AMI) datasets from loads with BTM IBRs. Selecting 60 15-minute granularity time series samples, we recover BTM control characteristics with a mean error less than 0.2 kVAR.
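As a simplified stand-in for the sparse-approximation selection (a greedy condition-number heuristic rather than the paper's technique), the sketch below picks time samples that keep the selected candidate-response submatrix well conditioned and then recovers the control-curve weights by least squares.

```python
# Hedged sketch: greedy sample selection for a well-conditioned inversion.
import numpy as np

def select_samples(A, k):
    """A: (n_times, n_candidates) candidate responses; pick k informative rows."""
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(A.shape[0]):
            if i in chosen:
                continue
            sub = A[chosen + [i], :]
            score = np.linalg.svd(sub, compute_uv=False)[-1]   # smallest singular value
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

def recover_weights(A, y, rows):
    """Least-squares fit of candidate-response weights using only selected samples."""
    w, *_ = np.linalg.lstsq(A[rows, :], y[rows], rcond=None)
    return w

# Placeholder example: 4 candidate volt-var responses observed over 960 samples
rng = np.random.default_rng(0)
A = rng.standard_normal((960, 4))
true_w = np.array([0.0, 1.0, 0.0, 0.0])          # the device follows candidate #2
y = A @ true_w + 0.01 * rng.standard_normal(960)
rows = select_samples(A, 60)
print(recover_weights(A, y, rows))
```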
Reno, Matthew J.; Blakely, Logan; Trevizan, Rodrigo D.; Pena, Bethany D.; Lave, Matthew S.; Azzolini, Joseph A.; Yusuf, Jubair; Jones, Christian B.; Furlani Bastos, Alvaro F.; Chalamala, Rohit; Korkali, Mert; Sun, Chih-Che; Donadee, Jonathan; Stewart, Emma M.; Donde, Vaibhav; Peppanen, Jouni; Hernandez, Miguel; Deboever, Jeremiah; Rocha, Celso; Rylander, Matthew; Siratarnsophon, Piyapath; Grijalva, Santiago; Talkington, Samuel; Gomez-Peces, Cristian; Mason, Karl; Vejdan, Sadegh; Khan, Ahmad U.; Mbeleg, Jordan S.; Ashok, Kavya; Divan, Deepak; Li, Feng; Therrien, Francis; Jacques, Patrick; Rao, Vittal; Francis, Cody; Zaragoza, Nicholas; Nordy, David; Glass, Jim
This report summarizes the work performed under a project funded by U.S. DOE Solar Energy Technologies Office (SETO) to use grid edge measurements to calibrate distribution system models for improved planning and grid integration of solar PV. Several physics-based data-driven algorithms are developed to identify inaccuracies in models and to bring increased visibility into distribution system planning. This includes phase identification, secondary system topology and parameter estimation, meter-to-transformer pairing, medium-voltage reconfiguration detection, determination of regulator and capacitor settings, PV system detection, PV parameter and setting estimation, PV dynamic models, and improved load modeling. Each of the algorithms is tested using simulation data and demonstrated on real feeders with our utility partners. The final algorithms demonstrate the potential for future planning and operations of the electric power grid to be more automated and data-driven, with more granularity, higher accuracy, and more comprehensive visibility into the system.
This paper presents a simulation and analysis of traveling waves in a 5-bus distribution system connected to a grid-forming inverter (GFMI). The goal is to analyze the numerical differences in traveling waves when a GFMI is used in place of a traditional generator. The paper introduces the topic of traveling waves and their use in distribution systems for fault clearing. It then introduces the Simulink design of the 5-bus system around which this paper is centered. The system is subjected to various simulation tests, whose design and results are explained to discuss whether and how inverters affect traveling waves and how different design choices for the system can impact these waves. Finally, consideration is given to what these traveling waves represent in a practical environment and how to properly address them using the information derived in this study.
For the protection engineer, it is often the case that full coverage, and thus perfect selectivity, is not an option, because perfect selectivity requires protection devices on every line section of the network. Due to cost limitations, relays may not be placed on each branch of a network. Therefore, a method is needed to allow for optimal coordination of relays with sparse relay placement. In this paper, methods for optimal coordination of networks with sparse relay placement introduced in prior work are applied to a system where both overcurrent and distance relays are present. Additionally, a method for defining primary (Zone 1) and secondary (Zone 2) protection zones for the distance relays in such a sparse system is proposed. The proposed method is applied to the IEEE 123-bus test case and is found to successfully coordinate the system while limiting the maximum relay operating time to 1.78 s, which approaches the theoretical lower bound of 1.75 s.
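For context on the overcurrent side of the coordination problem, the snippet below evaluates the standard inverse-time characteristic t = TDS * (A / (M^p - 1) + B), where M is the fault current divided by the pickup current, and checks a coordination time interval between a primary and a backup relay; the curve constants are the IEEE very-inverse values, and the settings and margin are illustrative, not taken from the paper.

```python
# Hedged illustration: inverse-time overcurrent operating times and a CTI check.
def op_time(i_fault, i_pickup, tds, A=19.61, B=0.491, p=2.0):
    M = i_fault / i_pickup
    return tds * (A / (M ** p - 1.0) + B)

i_fault = 2500.0                                   # A, fault current seen by both relays
t_primary = op_time(i_fault, i_pickup=400.0, tds=0.5)
t_backup = op_time(i_fault, i_pickup=600.0, tds=1.0)
cti = 0.3                                          # s, typical coordination margin (assumed)
print(f"primary {t_primary:.2f}s, backup {t_backup:.2f}s, "
      f"coordinated: {t_backup - t_primary >= cti}")
```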