Cyber-physical systems have behavior that crosses domain boundaries during events such as planned operational changes and malicious disturbances. Traditionally, the cyber and physical systems are monitored separately and use very different toolsets and analysis paradigms. The security and privacy of these cyber-physical systems require an improved understanding of the combined cyber-physical system behavior and methods for holistic analysis. Therefore, the authors propose leveraging clustering techniques on cyber-physical data from smart grid systems to analyze differences and similarities in behavior during cyber, physical, and cyber-physical disturbances. Since clustering methods are commonly used in data science to examine statistical similarities and sort large datasets, these algorithms can assist in identifying useful relationships in cyber-physical systems. Through this analysis, deeper insights can be shared with decision-makers on which cyber and physical components are strongly or weakly linked, which cyber-physical pathways are most traversed, and which cyber-physical nodes or edges are most critical. This paper presents several types of clustering methods for cyber-physical graphs of smart grid systems and their application in assessing different types of disturbances to inform cyber-physical situational awareness. The collection of these clustering techniques provides a foundational basis for cyber-physical graph interdependency analysis.
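As a concrete illustration of the clustering idea, the following sketch applies spectral clustering to a toy cyber-physical graph; the graph, node names, and cluster count are illustrative assumptions, not the models or algorithms used in the paper.

```python
# Minimal sketch: spectral clustering of a small cyber-physical graph.
# Node names and edges are illustrative, not taken from the paper.
import networkx as nx
from sklearn.cluster import SpectralClustering

G = nx.Graph()
# Physical nodes (buses) and cyber nodes (relays / network switches)
G.add_edges_from([
    ("bus1", "bus2"), ("bus2", "bus3"),            # physical edges
    ("relay1", "switch1"), ("relay2", "switch1"),  # cyber edges
    ("bus1", "relay1"), ("bus3", "relay2"),        # cyber-physical couplings
])

A = nx.to_numpy_array(G)  # adjacency matrix used as the affinity matrix
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
for node, lab in zip(G.nodes(), labels):
    print(node, "-> cluster", lab)
```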
This report summarizes the work and accomplishments of DOE SETO-funded project 36533, “Adaptive Protection and Control for High Penetration PV and Grid Resilience.” To increase the amount of distributed solar power that can be integrated into the distribution system, new methods for optimal adaptive protection, artificial intelligence/machine learning-based protection, and time-domain traveling-wave protection are developed and demonstrated both in hardware-in-the-loop testing and in a field demonstration.
As the electric grid becomes increasingly cyber-physical, it is important to characterize its inherent cyber-physical interdependencies and explore how that characterization can be leveraged to improve grid operation. It is crucial to investigate what data features are transferred at the system boundaries, how disturbances cascade between the systems, and how planning and/or mitigation measures can leverage that information to increase grid resilience. In this paper, we explore several numerical analysis and graph decomposition techniques that may be suitable for modeling these cyber-physical system interdependencies and for understanding their significance. An augmented WSCC 9-bus cyber-physical system model is used as a small use case to assess these techniques and their ability to characterize different events within the cyber-physical system. These initial results are then analyzed to formulate a high-level approach for characterizing cyber-physical interdependencies.
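One common graph decomposition of the kind surveyed here is spectral bisection using the Fiedler vector of the graph Laplacian; the sketch below runs it on a stand-in networkx topology rather than the augmented WSCC 9-bus model from the paper.

```python
# Minimal sketch: spectral bisection of an illustrative graph using the Fiedler
# vector (eigenvector of the second-smallest Laplacian eigenvalue).
import networkx as nx
import numpy as np

G = nx.karate_club_graph()  # stand-in topology; replace with a cyber-physical model
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, eigvecs = np.linalg.eigh(L)        # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                     # Fiedler vector

part_a = [n for n, v in zip(G.nodes(), fiedler) if v >= 0]
part_b = [n for n, v in zip(G.nodes(), fiedler) if v < 0]
print("Partition A:", part_a)
print("Partition B:", part_b)
print("Algebraic connectivity:", eigvals[1])
```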
The harmonized automatic relay mitigation of nefarious intentional events (HARMONIE) special protection scheme (SPS) was developed to provide adaptive, cyber-physical response to unpredictable disturbances in the electric grid. The HARMONIE-SPS methodology includes a machine learning classification framework that analyzes real-time cyber-physical data and determines whether the system is in normal conditions, a cyber disturbance, a physical disturbance, or a cyber-physical disturbance. This classification then informs the response, if one is needed and/or suitable, which includes cyber-physical corrective actions. Beyond standard power system mitigations, a few novel approaches were developed, including a consensus algorithm-based relay voting scheme, an automated power system triggering condition and corrective action pairing algorithm, and a cyber traffic routing optimization algorithm. Both the classification and response techniques were tested within a newly integrated emulation environment composed of a real-time digital simulator (RTDS) and SCEPTRE™. This report details the HARMONIE-SPS methodology, highlighting both the classification and response techniques, and the subsequent testing results from the emulation environment.
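For orientation, a four-class disturbance classifier of the general kind described above could be prototyped as follows; the features, synthetic data, and random-forest model are illustrative assumptions, not the HARMONIE-SPS classification framework itself.

```python
# Minimal sketch of a four-class disturbance classifier on joint cyber-physical
# features; all feature names and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: [bus_voltage_pu, line_current_pu, packet_rate, failed_logins]
X = rng.normal(size=(400, 4))
y = rng.integers(0, 4, size=400)  # 0=normal, 1=cyber, 2=physical, 3=cyber-physical

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("Held-out accuracy on synthetic data:", clf.score(X_te, y_te))
```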
The electric grid has undergone rapid, revolutionary changes in recent years: from the addition of advanced smart technologies to the growing penetration of distributed energy resources (DERs) to increased interconnectivity and communications. However, the communications, access interfaces, and third-party software added to enable autonomous control schemes and interconnectivity also expand the attack surface of the grid. To address this gap in DER cybersecurity and secure the grid edge as part of a holistic, defense-in-depth approach, a proactive intrusion detection and mitigation system (PIDMS) device was developed to secure PV smart inverter communications. The PIDMS was developed as a distributed, flexible bump-in-the-wire (BITW) solution for protecting PV smart inverter communications. Both cyber data (network traffic) and physical data (power system measurements) are processed using network intrusion monitoring tools and custom machine-learning algorithms for deep packet analysis and cyber-physical event correlation. The PIDMS not only detects abnormal events but also deploys mitigations to limit or eliminate system impact; the PIDMS communicates with peer PIDMSs at different locations using the MQTT protocol for increased situational awareness and alerting. The PIDMS methodology and prototype development are detailed in this report, as well as the evaluation results within a cyber-physical emulation environment and subsequent industry feedback.
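As an illustration of the peer-alerting mechanism, the snippet below publishes a JSON alert over MQTT with the paho-mqtt client; the topic name, broker address, and payload fields are hypothetical, chosen only to show the pattern rather than the PIDMS message format.

```python
# Minimal sketch of a peer alert published over MQTT; topic, broker, and payload
# fields are illustrative, not the PIDMS message schema.
import json
import paho.mqtt.publish as publish

alert = {
    "device": "pidms-01",                    # hypothetical device identifier
    "event": "packet_replay_suspected",
    "inverter": "pv_inverter_03",
    "severity": "high",
}
publish.single(topic="pidms/alerts",
               payload=json.dumps(alert),
               hostname="broker.example.local")  # hypothetical broker address
```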
For the protection engineer, it is often the case that full coverage, and thus perfect selectivity, of the system is not an option for protection devices, because perfect selectivity requires protection devices on every line section of the network. Due to cost limitations, relays may not be placed on each branch of a network. Therefore, a method is needed that allows for optimal coordination of relays with sparse relay placement. In this paper, methods for optimal coordination of networks with sparse relay placement introduced in prior work are applied to a system where both overcurrent and distance relays are present. Additionally, a method for defining primary (Zone 1) and secondary (Zone 2) protection zones for the distance relays in such a sparse system is proposed. The proposed method is applied to the IEEE 123-bus test case and is found to successfully coordinate the system while limiting the maximum relay operating time to 1.78 s, which approaches the theoretical lower bound of 1.75 s.
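The coordination problem rests on the inverse-time overcurrent characteristic; the sketch below evaluates a standard IEEE C37.112-style operating-time curve and a simple primary/backup margin check, with settings that are illustrative rather than taken from the 123-bus study.

```python
# Minimal sketch of inverse-time overcurrent operating time; constants follow the
# IEEE C37.112 very-inverse curve, and the example settings are illustrative.
def operating_time(tds, i_fault, i_pickup, A=19.61, B=0.491, p=2.0):
    """Trip time t = TDS * (A / (M**p - 1) + B), where M = I_fault / I_pickup."""
    M = i_fault / i_pickup
    if M <= 1.0:
        return float("inf")  # relay does not pick up
    return tds * (A / (M**p - 1.0) + B)

# Coordination check: the backup (upstream) relay should trip at least a
# coordination time interval (CTI) after the primary relay.
t_primary = operating_time(tds=0.5, i_fault=2000.0, i_pickup=400.0)
t_backup = operating_time(tds=1.0, i_fault=2000.0, i_pickup=600.0)
print(f"primary={t_primary:.2f}s backup={t_backup:.2f}s "
      f"margin={t_backup - t_primary:.2f}s")
```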
Unpredictable disturbances with dynamic trajectories such as extreme weather events and cyber attacks require adaptive, cyber-physical special protection schemes to mitigate cascading impact in the electric grid. A harmonized automatic relay mitigation of nefarious intentional events (HARMONIE) special protection scheme (SPS) is being developed to address that need. However, to evaluate the HARMONIE-SPS performance in classifying system disturbances and mitigating consequences, a cyber-physical testbed is required to further develop and validate the methodology. In this paper, we present a design for a co-simulation testbed leveraging the SCEPTRE™ platform and the real-time digital simulator (RTDS). The integration of these two platforms is detailed, as well as the unique, specific needs for testing HARMONIE-SPS within the environment. Results are presented from tests involving a WSCC 9-bus system under different load-shedding scenarios with varying cyber-physical impact.
Adaptive protection is defined as a real-time system that can modify the protective actions according to changes in the system condition. An adaptive protection system (APS) is conventionally coordinated through a central management system located at the distribution system substation. An APS depends significantly on the communication infrastructure to monitor the latest status of the electric power grid and send appropriate settings to all of the protection relays in the grid. This makes an APS highly vulnerable to communication system failures (e.g., broken communication links due to natural disasters as well as wide-range cyber-attacks). To this end, this paper presents the addition of local adaptive modular protection (LAMP) units to the protection system to guarantee its reliable operation under extreme events when the operation of the APS is compromised. LAMP units operate in parallel with the conventional APS. As a backup, if the APS fails to operate because of an issue in the communication system, LAMP units can provide reliable fault detection and location on behalf of the protection relay. The performance of the proposed APS is verified using the IEEE 123-node test system.
Penetration of the power grid by renewable energy sources, distributed storage, and distributed generators is becoming more widespread. Increased utilization of these distributed energy resources (DERs) has given rise to additional protection concerns. With radial feeders terminating in DERs or in microgrids containing DERs, standard non-directional radial protection may be rendered useless. Moreover, coordination will first require the protection engineer to determine what combination of directional and non-directional elements is required to properly protect the system at a reasonable cost. In this paper, a method is proposed to determine the type of protection that should be placed on each line. Further, an extreme cost constraint is assumed so that an attempt is made to protect a meshed network using only overcurrent protection devices. In the proposed method, instantaneous reclosers are placed in locations that cause the system to temporarily become radial when a fault occurs. Directional and non-directional overcurrent (OC) relays are placed in locations that allow standard radial coordination techniques to be utilized while the reclosers are open to clear any sustained faults. The proposed algorithm is found to effectively determine the placement of protection devices while utilizing a minimal number of directional devices. Additionally, it was shown for the IEEE 14-bus case that the proposed relay placement algorithm results in a system where relay coordination remains feasible.
Communication-assisted adaptive protection can improve the speed and selectivity of the protection system. However, in the event that communication from the centralized adaptive protection system to the relays is disrupted, predicting the local relay protection settings is a viable alternative. This work evaluates the potential for machine learning to overcome these challenges by using the Prophet algorithm programmed into each relay to individually predict the time-dial (TDS) and pickup current (IPICKUP) settings. A modified IEEE 123 feeder was used to generate the data needed to train and test the Prophet algorithm to individually predict the TDS and IPICKUP settings. The models were evaluated using the mean absolute percentage error (MAPE) and the root mean squared error (RMSE) as metrics. The results show that the algorithms could accurately predict the IPICKUP setting with an average MAPE accuracy of 99.961% and the TDS setting with an average MAPE accuracy of 94.32%, which is sufficient for protection parameter prediction.
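A minimal sketch of this workflow, assuming the open-source prophet package and a synthetic daily TDS series, could look like the following; the data, train/test split, and horizon are illustrative, not the modified IEEE 123 feeder dataset.

```python
# Minimal sketch: forecast a relay setting with Prophet and score it with MAPE.
# The synthetic daily TDS series is illustrative, not the paper's dataset.
import numpy as np
import pandas as pd
from prophet import Prophet

dates = pd.date_range("2022-01-01", periods=365, freq="D")
tds = 1.0 + 0.1 * np.sin(2 * np.pi * np.arange(365) / 365)  # synthetic TDS history
df = pd.DataFrame({"ds": dates, "y": tds})

train, test = df.iloc[:335], df.iloc[335:]
model = Prophet().fit(train)
forecast = model.predict(model.make_future_dataframe(periods=30))
pred = forecast["yhat"].iloc[-30:].to_numpy()   # the 30 forecast days

mape = 100 * np.mean(np.abs((test["y"].to_numpy() - pred) / test["y"].to_numpy()))
print(f"MAPE: {mape:.2f}%  (accuracy ~ {100 - mape:.2f}%)")
```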
We present our research findings on the novel NDN protocol. In this work, we defined key attack scenarios for possible exploitation and detailed software security testing procedures to evaluate the security of the NDN software. This work was done in the context of distributed energy resources (DER). The software security testing included the execution of unit tests and static code analyses to better understand the software rigor and the security that has been implemented. The results from the penetration testing are presented, and recommendations are discussed to provide additional defense for secure end-to-end NDN communications.
There are now over 2.5 million Distributed Energy Resource (DER) installations connected to the U.S. power system. These installations represent a major portion of American electricity critical infrastructure, and a cyberattack on these assets in aggregate would significantly affect grid operations. Virtualized Operational Technology (OT) equipment has been shown to provide practitioners with situational awareness and a better understanding of adversary tactics, techniques, and procedures (TTPs). Deploying synthetic DER devices as honeypots and canaries would open new avenues for operational defense and threat-intelligence gathering and would empower DER owners and operators with new cyber-defense mechanisms against the growing intensity and sophistication of cyberattacks on OT systems. Well-designed DER canary field deployments would deceive adversaries and provide early-warning notifications of adversary presence and malicious activities on OT networks. In this report, we present progress toward designing a high-fidelity DER honeypot/canary prototype in a late-start Laboratory Directed Research and Development (LDRD) project.
As conventional generation sources continue to be replaced with inverter-based resources, the traditional fixed overcurrent protection schemes used at the distribution level will no longer be valid. Adaptive protection will provide the ability to update the protection scheme in near real-time to ensure reliability and increase the resilience of the grid. However, determining when the protection parameters calculated by an adaptive protection algorithm need to be updated, so that relays are not communicated with unnecessarily, is still not well understood. The proposed method provides a sensitivity analysis to determine when it is necessary to issue new parameters to the relays. The results show that settings do not need to be issued at each available time step. Instead, the proposed sensitivity analysis method can be used to ensure that only the imperative protection parameters are communicated to the relay, allowing for more efficient utilization of the communications. The results show that the sensitivity analysis reduces the settings communicated to the devices by 93% over the year.
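One simple way to render the "issue settings only when they matter" idea in code is a relative-change threshold test; the function below is a hypothetical sketch, and the 5% tolerance is an assumed value, not the sensitivity criterion used in the paper.

```python
# Hypothetical sketch of a threshold-style check for deciding whether newly
# computed settings should be communicated to a relay; tolerance is illustrative.
def settings_update_needed(current, proposed, rel_tol=0.05):
    """Issue new settings only if any parameter changes by more than rel_tol."""
    for key, old in current.items():
        new = proposed[key]
        if abs(new - old) / abs(old) > rel_tol:
            return True
    return False

current = {"tds": 0.50, "i_pickup": 400.0}
proposed = {"tds": 0.51, "i_pickup": 402.0}
print("Communicate new settings?", settings_update_needed(current, proposed))
```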
The electric grid is becoming increasingly cyber-physical with the addition of smart technologies, new communication interfaces, and automated grid-support functions. Because of this, it is no longer sufficient to study only the physical system dynamics; the cyber system must also be monitored to examine cyber-physical interactions and effects on the overall system. To address this gap for both operational and security needs, cyber-physical situational awareness is needed to monitor the system and detect any faults or malicious activity. Techniques and models to understand the physical system (the power system operation) exist, but methods to study the cyber system are needed that can assist in understanding how network traffic and changes to network conditions affect applications such as data analysis, intrusion detection systems (IDSs), and anomaly detection. In this paper, we examine and develop models of data flows in communication networks of cyber-physical systems (CPSs) and explore how network calculus can be utilized to develop those models for CPSs, with a focus on anomaly and intrusion detection. This provides a foundation for methods to examine how changes to behavior in the CPS can be modeled and for investigating cyber effects in CPSs in anomaly detection applications.
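For readers unfamiliar with network calculus, the classic single-flow bounds illustrate the flavor of such models: a token-bucket arrival curve alpha(t) = b + r*t served by a rate-latency service curve beta(t) = R*max(0, t - T) yields a worst-case delay of T + b/R and a worst-case backlog of b + r*T. The sketch below computes these standard bounds for parameter values that are illustrative, not from the paper.

```python
# Minimal sketch of classic network-calculus bounds for a single flow with a
# token-bucket arrival curve and a rate-latency service curve. Values illustrative.
def delay_bound(b, r, R, T):
    """Worst-case delay T + b/R, valid when service rate R exceeds arrival rate r."""
    assert R > r, "service rate must exceed sustained arrival rate for a finite bound"
    return T + b / R

def backlog_bound(b, r, R, T):
    """Worst-case backlog (buffer occupancy) b + r*T."""
    return b + r * T

# Example: SCADA-like flow with 2 kB bursts at 10 kB/s through a 100 kB/s
# link with 5 ms latency (illustrative numbers).
print("delay bound  :", delay_bound(b=2000, r=10_000, R=100_000, T=0.005), "s")
print("backlog bound:", backlog_bound(b=2000, r=10_000, R=100_000, T=0.005), "bytes")
```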
The goal of this paper is to utilize machine learning (ML) techniques for estimating the distribution circuit topology in an adaptive protection system. In a reconfigurable distribution system with multiple tie lines, the adaptive protection system requires knowledge of the existing circuit topology to adapt the correct settings for the relay. Relays rely on the communication system to identify the latest status of remote breakers and tie lines. However, in the case of a communication system failure, the performance of the adaptive protection system can be significantly impacted. To tackle this challenge, the statuses of remote circuit breakers and tie lines are estimated locally at a relay to identify the circuit topology in a reconfigurable distribution system. This paper utilizes a Support Vector Machine (SVM) to forecast the status of remote circuit breakers and identify the circuit topology. The effectiveness of the proposed approach is verified on two sample test systems.
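A minimal version of the SVM-based status estimation, assuming synthetic local measurements as features and scikit-learn's SVC, might look like this; the feature set and data are stand-ins for the measurements used in the paper.

```python
# Minimal sketch: classify remote breaker status from local measurements with an
# SVM; features and data are synthetic stand-ins for relay measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Columns: [local_voltage_pu, local_current_pu, power_factor]
X = rng.normal(loc=[1.0, 0.8, 0.95], scale=0.05, size=(300, 3))
y = rng.integers(0, 2, size=300)  # 0 = remote breaker open, 1 = closed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("Topology-classification accuracy on synthetic data:", clf.score(X_te, y_te))
```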
The electric grid is rapidly being modernized with novel technologies, adaptive and automated grid-support functions, and added connectivity with internet-based communications and remote interfaces. These advancements render the grid increasingly 'smart' and cyber-physical, but also broaden the vulnerability landscape and potential for malicious, cascading disturbances. The grid must be properly defended with security mechanisms such as intrusion detection systems (IDSs), but these tools must account for power system behavior as well as network traffic to be effective. In this paper, we present a cyber-physical IDS, the proactive intrusion detection and mitigation system (PIDMS), that analyzes both cyber and physical data streams in parallel, detects intrusion, and deploys proactive response. We demonstrate the PIDMS with an exemplar case study exploring a packet replay attack scenario focused on photovoltaic inverter communications; the scenario is tested with an emulated, cyber-physical grid environment with hardware-in-the-loop inverters.
Traditional protective relay voting schemes utilize simple logic to achieve confidence in relay trip actions. However, the smart grid is rapidly evolving, and there are new needs for a next-generation relay voting scheme. In such new schemes, aspects such as inter-relay relationships and out-of-band data can be included. In this work, we explore the use of consensus algorithms and how they can be utilized for groups of relays to vote on system protection actions and also reach consensus on the values of variables in the system. A proposed design is explored through a simple case study with two different scenarios, including simulation in PowerWorld Simulator, to demonstrate the benefits of the consensus algorithm, and future directions are discussed.
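To show the basic mechanism, the sketch below runs a discrete-time average-consensus iteration over a small relay communication graph; the topology, the shared quantity (a local trip-confidence score), and the decision threshold are illustrative assumptions, not the scheme from the paper.

```python
# Minimal sketch of discrete-time average consensus over a relay communication
# graph: x_i(k+1) = x_i(k) + eps * sum_j a_ij * (x_j(k) - x_i(k)).
import numpy as np

# Ring of 4 relays; entry (i, j) = 1 if relay i hears relay j
adjacency = np.array([[0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)
x = np.array([0.9, 0.2, 0.8, 0.7])  # each relay's local trip-confidence score
eps = 0.2                           # step size; must be < 1 / max_degree

for _ in range(50):
    x = x + eps * (adjacency @ x - adjacency.sum(axis=1) * x)

print("Consensus values:", np.round(x, 3),
      "-> trip" if x.mean() > 0.5 else "-> no trip")
```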
Recent trends in the growth of distributed energy resources (DER) in the electric grid and newfound malware frameworks that target internet of things (IoT) devices are driving an urgent need for more reliable and effective methods for intrusion detection and prevention. Cybersecurity intrusion detection systems (IDSs) are responsible for detecting threats by monitoring and analyzing network data, which can originate either from networking equipment or end-devices. Creating intrusion detection systems for PV/DER networks is a challenging undertaking because of the diversity of attack types and the intermittency and variability of the data. Distinguishing malicious events from other sources of anomalies or system faults is particularly difficult. New approaches are needed that not only sense anomalies in the power system but also determine causational factors for the detected events. In this report, a range of IDS approaches are summarized along with their pros and cons. Using this review of IDS approaches and a subsequent gap analysis for application to DER systems, a preliminary hybrid IDS approach to protect PV/DER communications is formulated in the conclusion of this report to inform ongoing and future research regarding the cybersecurity and resilience enhancement of DER systems.
Special protection schemes (SPSs) safeguard the grid by detecting predefined abnormal conditions and deploying predefined corrective actions. Utilities leverage SPSs to maintain stability, acceptable voltages, and loading limits during disturbances. However, traditional SPSs cannot defend against unpredictable disturbances. Events such as cyber attacks, extreme weather, and electromagnetic pulses have unpredictable trajectories and require adaptive response. Therefore, we propose a harmonized automatic relay mitigation of nefarious intentional events (HARMONIE)-SPS that learns system conditions, mitigates cyber-physical consequences, and preserves grid operation during both predictable and unpredictable disturbances. In this paper, we define the HARMONIE-SPS approach, detail progress on its development, and provide initial results using a WSCC 9-bus system.
The benefits and risks associated with Volt-Var Curve (VVC) control for management of voltages in electric feeders with distributed, roof-top photovoltaic (PV) can be defined using a stochastic hosting capacity analysis methodology. Although past work showed that a PV inverter's reactive power can improve grid voltages for large PV installations, this study adds to the past research by evaluating the control method's impact (both good and bad) when deployed throughout the feeder within small, distributed PV systems. The stochastic hosting capacity simulation effort iterated through hundreds of load and PV generation scenarios and various control types. The simulations also tested the impact of VVCs with tampered settings to understand the potential risks associated with a cyber-attack on all of the PV inverters scattered throughout a feeder. The simulation effort found that the VVC can have an insignificant role in managing the voltage when deployed in distributed roof-top PV inverters. This type of integration strategy will result in little to no harm when subjected to a successful cyber-attack that alters the VVC settings.
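For context on what the tampered settings target, a volt-var curve maps measured voltage to a reactive-power command; the sketch below implements a generic piecewise-linear curve with illustrative IEEE 1547-style breakpoints, not the specific settings studied in this work.

```python
# Minimal sketch of a piecewise-linear volt-var curve; breakpoints follow a
# generic IEEE 1547-style shape and are illustrative, not the study's settings.
import numpy as np

# Voltage breakpoints (pu) and reactive power commands (pu of rated VA)
v_points = np.array([0.92, 0.98, 1.02, 1.08])
q_points = np.array([0.44, 0.00, 0.00, -0.44])  # inject vars when low, absorb when high

def volt_var(v):
    """Reactive power setpoint for a measured voltage, clamped at the end points."""
    return float(np.interp(v, v_points, q_points))

for v in (0.90, 0.97, 1.00, 1.05, 1.10):
    print(f"V = {v:.2f} pu -> Q = {volt_var(v):+.3f} pu")
```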
Grid operators are now considering using distributed energy resources (DERs) to provide distribution voltage regulation rather than installing costly voltage regulation hardware. DER devices include multiple adjustable reactive power control functions, so grid operators have the difficult decision of selecting the best operating mode and settings for the DER. In this work, we develop a novel state estimation-based particle swarm optimization (PSO) for distribution voltage regulation using DER reactive power setpoints and establish a methodology to validate and compare it against alternative DER control technologies (volt-VAR (VV) and extremum seeking control (ESC)) in increasingly higher fidelity environments. Distribution system real-time simulations with virtualized and power hardware-in-the-loop (PHIL)-interfaced DER equipment were run to evaluate the implementations and select the best voltage regulation technique. Each method improved the distribution system voltage profile; VV did not reach the global optimum, but the PSO and ESC methods optimized the reactive power contributions of multiple DER devices to approach the optimal solution.
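The core PSO loop over DER reactive-power setpoints can be sketched as below; the voltage-deviation objective is a toy linearized placeholder for the state-estimation-based evaluation used in this work, and all constants are assumed.

```python
# Minimal sketch of particle swarm optimization over DER reactive-power setpoints,
# minimizing a toy voltage-deviation objective. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_der, n_particles, iters = 3, 20, 100
q_min, q_max = -0.44, 0.44  # reactive power limits (pu)

def voltage_deviation(q):
    """Toy objective: squared deviation of a linearized voltage estimate from 1.0 pu."""
    sensitivity = np.array([0.02, 0.015, 0.01])  # assumed dV/dQ sensitivities
    v = 0.97 + sensitivity @ q                   # assumed base voltage 0.97 pu
    return (v - 1.0) ** 2

pos = rng.uniform(q_min, q_max, (n_particles, n_der))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([voltage_deviation(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, q_min, q_max)
    vals = np.array([voltage_deviation(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("Best DER reactive setpoints (pu):", np.round(gbest, 3))
```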
During the last decade, utility companies around the world have experienced a significant increase in the occurrence of both planned and unplanned blackouts, and microgrids have emerged as a viable solution to improve grid resiliency and robustness. Recently, power converters with grid-forming capabilities have attracted interest from researchers and utilities as keystone devices enabling modern microgrid architectures. Therefore, proper and thorough testing of Grid-Forming Inverters (GFMIs) is crucial to understand their dynamics and limitations before they are deployed. The use of closed-loop real-time Power Hardware-in-the-Loop (PHIL) simulations will facilitate the testing of GFMIs using a digital twin of the power system under various contingency scenarios within a controlled environment. So far, lower- to medium-scale commercially available GFMIs have been difficult to interface into PHIL simulations because they lack a synchronization mechanism that allows a smooth and stable interconnection with a voltage source such as a power amplifier. In this scenario, the use of the well-known Ideal Transformer Method to create a PHIL setup can lead to catastrophic damage to the GFMI. This paper presents a simple but novel method to interface commercially available GFMIs into a PHIL testbed. Experimental results showed that the proposed method is stable and accurate under standalone operation with abrupt (step) load-changing dynamics, followed by the corresponding steady-state behavior. These results were validated against the dynamics of the GFMI connected to a linear load bank.
The penetration of Internet-of-Things (IoT) devices in the electric grid is growing at a rapid pace; from smart meters at residential homes to distributed energy resource (DER) system technologies such as smart inverters, various devices are being integrated into the grid with added connectivity and communications. Furthermore, with these increased capabilities, automated grid-support functions, demand response, and advanced communication-assisted control schemes are being implemented to improve the operation of the grid. These advancements render our power systems increasingly cyber-physical. It is no longer sufficient to only focus on the physical interactions, especially when implementing cybersecurity mechanisms such as intrusion detection systems (IDSs) and mitigation schemes that need to access both cyber and physical data. This new landscape necessitates novel methods and technologies to successfully interact and understand the overall cyber-physical system. Specifically, this paper will investigate the need and definition of cyber-physical observability for the grid.
The energy grid is becoming more complex with the increasing penetration of renewable resources, distributed energy storage, distributed generators, and more diverse loads such as electric vehicle charging stations. The presence of distributed energy resources (DERs) requires directional protection due to the added potential for energy to flow in both directions down the line. Additionally, contingency requirements for critical loads within a microgrid may result in looped or meshed systems. Computation speeds of the iterative methods required to coordinate loops are improved by starting with a minimum breakpoint set (MBPS) of relays. A breakpoint set (BPS) is a set of breakers that, when opened, breaks all loops in a meshed grid, creating a radial system. An MBPS is a BPS that consists of the minimum possible number of relays required to accomplish this goal. In this paper, a method is proposed in which a minimum spanning tree is computed to indirectly break all loops in the system, and a set difference is used to identify the MBPS. The proposed method is found to minimize the cardinality of the BPS, achieving an MBPS.
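The spanning-tree-plus-set-difference idea can be seen in a few lines with networkx, as shown below; the 5-bus meshed topology is illustrative, not the test system used in the paper.

```python
# Minimal sketch of the breakpoint-set idea: compute a spanning tree of a meshed
# network and take the set difference of edges; the remaining (non-tree) edges
# break every loop. The 5-bus topology is illustrative.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("b1", "b2", 1.0), ("b2", "b3", 1.0), ("b3", "b4", 1.0),
    ("b4", "b1", 1.0), ("b2", "b4", 1.0), ("b4", "b5", 1.0),
])

mst = nx.minimum_spanning_tree(G)
breakpoint_set = set(map(frozenset, G.edges())) - set(map(frozenset, mst.edges()))
print("Breakpoint set (open these to radialize):",
      [tuple(e) for e in breakpoint_set])
# A connected graph has |E| - |V| + 1 independent loops, which gives the
# minimum cardinality of the breakpoint set.
print("Expected minimum size:", G.number_of_edges() - G.number_of_nodes() + 1)
```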
Historically, photovoltaic inverters have been grid-following controlled, but with increasing penetrations of inverter-based generation on the grid, grid-forming inverters (GFMIs) are gaining interest. GFMIs can also be used in microgrids that require the ability to interact and operate with the grid (grid-tied) or to operate autonomously (islanded) while supplying their corresponding loads. This approach can substantially improve the response of the grid to severe contingencies such as hurricanes or to high load demands. During islanded conditions, GFMIs play an important role in dictating the system's voltage and frequency, in the same way that synchronous generators do in large interconnected systems. For this reason, it is important to understand the behavior of such grid-forming inverters under fault scenarios. This paper focuses on testing different commercially available grid-forming inverters under fault conditions.
Modern power grids include a variety of renewable Distributed Energy Resources (DERs) as a strategy to comply with new environmental and renewable portfolio standards (RPSs) imposed by state and federal agencies. Typically, DERs include the use of power electronic (PE) interfaces to interact with the power grid. Recently, this interaction has not only been focused on supplying maximum available energy, but also on supporting the power grid under abnormal conditions such as low voltage/frequency conditions or non-unity power factor. Over the last few years, grid-following inverters (GFLIs) have proven their value while providing these ancillary grid-support services at either residential or utility scale. However, the use of grid-forming inverters (GFMIs) is gaining momentum as the penetration level of DERs increases and system inertia decreases. Under abnormal operating conditions, GFMIs tend to better preserve grid stability due to their intrinsic ability to balance loads without the aid of coordination controls. In order to gain and propose fundamental insights into the interfacing of GFMIs to real-time simulation, this paper analyzes the dynamics of two different GFMI simulation models in terms of stability and load changes using a Power Hardware-in-the-Loop (PHIL) simulation testbed.
Power outages are a challenge that utility companies must face, with the potential to affect millions of customers and cost billions in damage. For this reason, there is a need for developing approaches that help in understanding the effects of fault conditions on the power grid. In distribution circuits with high renewable penetrations, the fault currents from DER equipment can impact coordinated protection scheme implementations, so it is critical to accurately analyze fault contributions from DER systems. To do this, MATLAB/Simulink/RT-Labs was used to simulate the reduced-order distribution system, and three different faults were applied at three different bus locations in the distribution system. Real-Time (RT) Power Hardware-in-the-Loop (PHIL) simulation was also used to further improve the fidelity of the model. A comparison between the OpenDSS simulation results and the Opal-RT experimental fault currents was conducted to determine the steady-state and dynamic accuracy of each method as well as the response of using simulated and hardware PV inverters. It was found that all methods were closely correlated in steady state, but the transient response of the inverter was difficult to capture with a PV model, and the physical device behavior could not be represented completely without incorporating it through PHIL.
The Ideal Transformer Method (ITM) and the Damping Impedance Method (DIM) are the most widely used techniques for connecting power equipment to a Power-Hardware-in-the-Loop (PHIL) real-time simulation. Both methods have been studied for their stability and accuracy in PHIL simulations, but neither has been analyzed when the hardware is providing grid-support services with volt-var, frequency-watt, and fixed power factor functions. In this work, we experimentally validate the two methods of connecting a physical PV inverter to a PHIL system and evaluate them for dynamic stability and accuracy when operating with grid-support functions. It was found that the DIM Low Pass Lead Filter (LPF LD) method was the best under unity and negative power factor conditions, but the ITM LPF LD method was preferred under positive power factor conditions.
The integration of communication-enabled grid-support functions in distributed energy resources (DER) and other smart grid features will increase the U.S. power grid's exposure to cyber-physical attacks. Unwanted changes in DER system data and control signals can damage electrical infrastructure and lead to outages. To protect against these threats, intrusion detection systems (IDSs) can be deployed, but their implementation presents a unique set of challenges in industrial control systems (ICSs). New approaches need to be developed that not only sense cyber anomalies but also detect undesired physical system behaviors. For DER systems, a combination of cybersecurity data and power system and control information should be collected by the IDS to provide insight into the nature of an anomalous event. This allows joint forensic analysis to be conducted to reveal any relationships between the observed cyber and physical events. In this paper, we propose a hybrid IDS approach that monitors and evaluates both physical and cyber network data in DER systems, and we present a series of scenarios to demonstrate how our approach enables the cyber-physical IDS to achieve more robust identification and mitigation of malicious events on the DER system.
This report documents the use of wind turbine inertial energy for the supply of two specific electric power grid services: system balancing and real power modulation to improve grid stability. Each service is designed to require zero net energy consumption. Grid stability support was accomplished by modulating the real power output of the wind turbine at a frequency and phase associated with wide-area modes. System balancing was conducted using a grid frequency signal that was high-pass filtered to ensure zero net energy. Both services used Phasor Measurement Units (PMUs) as their primary source of system data, in a feedforward control (for system balancing) and a feedback control (for system stability).
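A minimal sketch of the high-pass filtering step, assuming a synthetic PMU frequency stream and SciPy's Butterworth filter, follows; the cutoff frequency, filter order, and droop-style gain are illustrative choices rather than the report's tuned values.

```python
# Minimal sketch: derive a near-zero-net-energy balancing command by high-pass
# filtering a PMU frequency signal. Cutoff, gain, and the signal are illustrative.
import numpy as np
from scipy.signal import butter, lfilter

fs = 30.0                               # assumed PMU reporting rate, samples/s
t = np.arange(0, 600, 1 / fs)           # ten minutes of data
freq = 60.0 + 0.01 * np.sin(2 * np.pi * 0.05 * t) - 0.002 * t / 600  # synthetic drift

b, a = butter(2, 0.01, btype="highpass", fs=fs)  # remove slow trend -> zero-mean signal
delta_p = -50.0 * lfilter(b, a, freq - 60.0)     # kW command with a droop-style gain

print("Mean power command (kW):", delta_p.mean())  # near zero => ~zero net energy
```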