Most recently, stochastic control methods such as deep reinforcement learning (DRL) have proven to be efficient, quickly converging methods for providing localized grid voltage control. Because of the random dynamical characteristics of grid reactive loads and bus voltages, such stochastic control methods are particularly useful for accurately predicting future voltage levels and minimizing the associated cost functions. Although DRL can quickly infer future voltage levels given specific voltage control actions, it is prone to high variance when the learning rate or discount factor is set for rapid convergence in the presence of bus noise. Evolutionary learning is also capable of minimizing a cost function and can be leveraged for localized grid control, but it does not infer future voltage levels given specific control inputs; instead, it simply selects the control actions that yield the best voltage regulation. For this reason, evolutionary learning is better suited than DRL for voltage control in noisy grid environments. To illustrate this, we use a cyber adversary to inject random noise and compare evolutionary learning with DRL for autonomous voltage control (AVC) under noisy control conditions, showing that high mean voltage regulation can be achieved using a genetic algorithm (GA). We show that the GA can additionally provide AVC superior to DRL with comparable computational efficiency. We illustrate that the superior noise immunity properties of evolutionary learning make it a good choice for implementing AVC in noisy environments or in the presence of random cyber-attacks.
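As a concrete illustration of the GA-based control described above, the following is a minimal sketch of a genetic algorithm searching over voltage setpoint adjustments to minimize deviation from a nominal bus voltage under injected measurement noise. The bus model, noise level, and fitness function are illustrative assumptions, not the specific grid model or parameters used in the paper.

# Minimal sketch of GA-based autonomous voltage control (AVC).
# The toy bus model and parameters below are assumptions for illustration only.
import random

N_BUSES = 5
TARGET_V = 1.0          # per-unit target voltage
NOISE_STD = 0.02        # adversarial/random noise injected on measurements
ACTIONS = [-0.05, -0.025, 0.0, 0.025, 0.05]  # candidate setpoint adjustments

def measure_voltages(setpoints):
    """Toy bus response: voltage follows the setpoint plus injected noise."""
    return [s + random.gauss(0.0, NOISE_STD) for s in setpoints]

def fitness(individual):
    """Negative mean absolute deviation from the target voltage."""
    v = measure_voltages(individual)
    return -sum(abs(vi - TARGET_V) for vi in v) / N_BUSES

def evolve(pop_size=40, generations=60, mutation_rate=0.1):
    pop = [[1.0 + random.choice(ACTIONS) for _ in range(N_BUSES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # rank by (noisy) fitness
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_BUSES)       # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:
                child[random.randrange(N_BUSES)] += random.choice(ACTIONS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best setpoints:", [round(s, 3) for s in best])

Because the fitness evaluation itself is noisy, selection naturally averages over the injected disturbance, which is the noise-immunity property the abstract highlights.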
This paper presents a literature review of current practices and trends in the cyberphysical security of grid-connected battery energy storage systems (BESSs). Energy storage is critical to the operation of smart grids powered by intermittent renewable energy resources. To achieve this goal, utility-scale and consumer-scale BESSs will have to be fully integrated into power system operations, providing ancillary services and performing functions that improve grid reliability and balance supply and demand, among others. This vision of the future power grid will only become a reality if BESSs are able to operate in a coordinated way with other grid entities, which requires significant communication capabilities. The pervasive networking infrastructure necessary to fully leverage the potential of storage increases the attack surface for cyberthreats, and the unique characteristics of battery systems pose challenges for cyberphysical security. This paper discusses a number of such threats, their associated attack vectors, detection methods, protective measures, research gaps in the literature, and future research trends.
To meet the challenges of low-carbon power generation, distributed energy resources (DERs) such as solar and wind power generators are now widely integrated into the power grid. Because of the autonomous nature of DERs, ensuring properly regulated output voltages from the individual sources to the grid distribution system poses a technical challenge to grid operators. Stochastic, model-free voltage regulation methods such as deep reinforcement learning (DRL) have proven effective in regulating DER output voltages; however, deriving an optimal voltage control policy using DRL over a large state space incurs a large computational time complexity. In this paper we illustrate a computationally efficient method for deriving an optimal voltage control policy using a parallelized DRL ensemble. Additionally, we illustrate the resiliency of the control ensemble when random noise is introduced by a cyber adversary.
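The following sketch illustrates the parallelized ensemble idea in miniature: several independently seeded agents are trained in parallel processes and their value estimates are averaged at decision time. Tabular Q-learning on a discretized voltage state stands in for the deep RL agents described in the paper; the environment, reward, and all parameters are assumptions.

# Illustrative sketch of a parallelized RL ensemble for DER voltage regulation.
import random
from multiprocessing import Pool

ACTIONS = [-0.01, 0.0, 0.01]                              # setpoint adjustments (p.u.)
STATES = [round(0.90 + 0.01 * i, 2) for i in range(21)]   # 0.90 .. 1.10 p.u.

def step(v, action, noise_std=0.005):
    """Toy DER bus: the voltage drifts with the adjustment plus random noise."""
    v_next = min(max(v + action + random.gauss(0.0, noise_std), 0.90), 1.10)
    reward = -abs(v_next - 1.0)
    return round(v_next, 2), reward

def train_agent(seed, episodes=2000, alpha=0.2, gamma=0.9, eps=0.1):
    random.seed(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        v = random.choice(STATES)
        for _ in range(20):
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(v, x)]))
            v2, r = step(v, a)
            q[(v, a)] += alpha * (r + gamma * max(q[(v2, x)] for x in ACTIONS) - q[(v, a)])
            v = v2
    return q

def ensemble_action(q_tables, state):
    """Average value estimates across the ensemble and act greedily."""
    return max(ACTIONS, key=lambda a: sum(q[(state, a)] for q in q_tables))

if __name__ == "__main__":
    with Pool(4) as pool:                    # ensemble members trained in parallel
        q_tables = pool.map(train_agent, range(4))
    print("ensemble action at 0.95 p.u.:", ensemble_action(q_tables, 0.95))

Averaging across independently trained members is one simple way to damp the variance a single noisy agent would exhibit, which is the resiliency property the abstract emphasizes.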
In recent years, the pervasive use of lithium-ion (Li-ion) batteries in applications such as cell phones, laptop computers, electric vehicles, and grid energy storage systems has prompted the development of specialized battery management systems (BMS). The primary goal of a BMS is to maintain a reliable and safe battery power source while maximizing the calendar life and performance of the cells. To maintain safe operation, a BMS should be programmed to minimize degradation and prevent damage to a Li-ion cell, which can lead to thermal runaway. Cell damage can occur over time if a BMS is not properly configured to avoid overcharging and over-discharging. To prevent cell damage, efficient and accurate algorithms for characterizing cell charging cycles must be employed. In this paper, computationally efficient and accurate ensemble learning algorithms capable of detecting Li-ion cell charging irregularities are described. Additionally, it is shown using machine and deep learning that it is possible to accurately and efficiently detect when a cell has experienced thermal and electrical stress due to overcharging by measuring charging cycle divergence.
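A minimal sketch of the ensemble-based irregularity detection follows. Synthetic charge-cycle features stand in for real BMS telemetry, and an isolation forest is used as a representative ensemble detector; the feature set, stress offsets, and thresholds are assumptions rather than the paper's actual models or data.

# Sketch of ensemble-based detection of Li-ion charging irregularities.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def cycle_features(n, stressed=False):
    """Per-cycle features: charge time (h), end-of-charge voltage (V),
    mean cell temperature (C), and capacity fade fraction (synthetic)."""
    charge_time = rng.normal(2.0, 0.05, n) + (0.4 if stressed else 0.0)
    eoc_voltage = rng.normal(4.20, 0.01, n) + (0.08 if stressed else 0.0)
    temperature = rng.normal(30.0, 1.0, n) + (8.0 if stressed else 0.0)
    fade = rng.normal(0.01, 0.002, n) + (0.02 if stressed else 0.0)
    return np.column_stack([charge_time, eoc_voltage, temperature, fade])

normal_cycles = cycle_features(500)
stressed_cycles = cycle_features(20, stressed=True)

# Train the ensemble on nominal cycles only, then score new cycles.
detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
detector.fit(normal_cycles)

flags = detector.predict(stressed_cycles)        # -1 = irregular, +1 = nominal
print("flagged as irregular:", int((flags == -1).sum()), "of", len(flags))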
The ever-increasing need to ensure that code is constructed reliably, efficiently, and safely has fueled the evolution of popular static binary code analysis tools. To identify potential coding flaws in binaries, tools such as IDA Pro are used to disassemble the binaries into an opcode/assembly language format in support of manual static code analysis. Because analyzing large binaries is highly manual and resource intensive, the probability of overlooking potential coding irregularities and inefficiencies is quite high. In this paper, a lightweight, unsupervised data flow methodology is described that uses highly correlated data flow graphs (CDFGs) to identify coding irregularities while minimizing analysis time and required computing resources. These accuracy and efficiency gains are achieved by combining graph analysis with unsupervised machine learning techniques, allowing an analyst to focus on the most statistically significant flow patterns while performing binary static code analysis.
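The toy sketch below conveys the general flavor of flagging statistically unusual data-flow patterns in disassembled code. The mock "disassembly", the def-use edge extraction, and the z-score threshold are all illustrative assumptions; they are not IDA Pro output and not the paper's CDFG construction or unsupervised model.

# Lightweight sketch of ranking uncommon data-flow patterns in disassembly.
from collections import Counter
import math

# Each tuple is (instruction, destination_operand, source_operand).
functions = {
    "f1": [("mov", "eax", "ebx"), ("add", "eax", "ecx"), ("mov", "edx", "eax")],
    "f2": [("mov", "eax", "ebx"), ("add", "eax", "ecx"), ("mov", "edx", "eax")],
    "f3": [("mov", "eax", "ebx"), ("xor", "eax", "eax"), ("mov", "[esp]", "eax")],
}

def dataflow_edges(instrs):
    """Def-use edges: the source operand flows into the destination operand."""
    return [(src, dst) for _, dst, src in instrs]

# Count how often each flow edge appears across all functions.
edge_counts = Counter(e for ins in functions.values() for e in dataflow_edges(ins))
mean = sum(edge_counts.values()) / len(edge_counts)
std = math.sqrt(sum((c - mean) ** 2 for c in edge_counts.values()) / len(edge_counts))

# Edges far below the mean frequency are candidate irregular flows for review.
for edge, count in edge_counts.items():
    z = (count - mean) / std if std else 0.0
    if z < -0.5:
        print("uncommon flow:", edge, "count:", count)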
Graph analysis of large integrated circuit (IC) designs is an essential tool for verifying design logic and timing via dynamic timing analysis (DTA). IC designs resemble graphs, with each logic gate as a vertex and the conductive connections between gates as edges. Using DTA digital statistical correlations, graph condensation, and graph partitioning, it is possible to identify high-entropy component centers and paths within an IC design. Identification of high-entropy component centers (HECCs) enables focused DTA, effectively lowering the computational complexity of DTA on large integrated circuit graphs. In this paper, a methodology termed IC layout subgraph component center identification (CCI) is described. CCI lowers DTA computational complexity by condensing IC graphs into reduced subgraphs in which dominant logic functions are verified.
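The following is a minimal sketch of the condensation-and-ranking idea: a toy netlist graph is condensed so that feedback loops collapse into single nodes, and the condensed components are ranked by a simple degree-entropy score. The netlist and the entropy measure are illustrative assumptions standing in for the paper's CCI methodology.

# Sketch of condensing an IC netlist graph and ranking component centers.
import math
import networkx as nx

# Toy netlist: vertices are gates, directed edges are driven connections.
netlist = nx.DiGraph([
    ("in1", "g1"), ("in2", "g1"), ("g1", "g2"), ("g2", "g3"), ("g3", "g1"),
    ("g3", "g4"), ("g4", "out1"), ("in3", "g5"), ("g5", "g4"),
])

# Collapse strongly connected components (e.g. feedback loops) into single nodes.
condensed = nx.condensation(netlist)

def degree_entropy(g, node):
    """Shannon entropy of the fan-in/fan-out split at a node (toy measure)."""
    deg_in, deg_out = g.in_degree(node), g.out_degree(node)
    total = deg_in + deg_out
    if total == 0:
        return 0.0
    probs = [d / total for d in (deg_in, deg_out) if d]
    return -sum(p * math.log2(p) for p in probs)

# Rank condensed components; the highest-entropy centers receive focused DTA.
ranked = sorted(condensed.nodes, key=lambda n: degree_entropy(condensed, n), reverse=True)
for n in ranked[:3]:
    print("component:", sorted(condensed.nodes[n]["members"]),
          "entropy:", round(degree_entropy(condensed, n), 3))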
There are multiple factors involved in successfully manufacturing ASICIVLSI chips, and ensuring operational specifications are maintained throughout the design and manufacturing process is often challenging. Dynamic timing analysis (DTA) is the principal method used to validate that a manufactured chip complies to its design specifications. In DTA functionality of both synchronous and asynchronous designs are verified by applying input signals and checking for correct output signals. In complex designs where the number of input signal permutations is extremely large, the computing resources required to properly verify the functionality of a chip is prohibitive. In this paper, a strategy using reinforcement learning (RL) for reducing DTA time and resources in such cases is discussed. RL assisted DTA holds much promise in ensuring that VLSI chip design and functionality are fully and optimally verified.
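One way such an RL strategy could look in miniature is sketched below: an epsilon-greedy agent learns which groups of input vectors yield the most new signal coverage per simulation, so that scarce DTA runs are spent where they matter. The toy simulate function, the coverage model, and all parameters are assumptions; a real flow would query the timing simulator.

# Sketch of an epsilon-greedy RL loop prioritizing DTA input vectors by coverage gain.
import random

random.seed(1)
GROUPS = [format(i, "03b") for i in range(8)]   # candidate input-vector groups

# Hidden ground truth: signals toggled by each group (unknown to the agent).
true_coverage = {g: set(random.sample(range(64), random.randint(2, 20))) for g in GROUPS}

def simulate(group, covered):
    """Toy DTA run: reward is the number of newly exercised signal toggles."""
    new = true_coverage[group] - covered
    covered |= new
    return len(new)

q = {g: 0.0 for g in GROUPS}
covered = set()
eps, alpha = 0.2, 0.3

for step in range(100):
    g = random.choice(GROUPS) if random.random() < eps else max(GROUPS, key=lambda x: q[x])
    reward = simulate(g, covered)
    q[g] += alpha * (reward - q[g])              # incremental value estimate

print("signals covered:", len(covered), "of 64")
print("most valuable groups:", sorted(GROUPS, key=lambda x: -q[x])[:3])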
To ensure reliable and predictable service in the electrical grid among renewable distributed energy resources (DERs), it is important to gauge the level of trust present within critical components and DER aggregators (DERAs). Although trust throughout a smart grid is temporal and varies dynamically according to measured states, it is possible to accurately formulate communications and service-level strategies based on such trust measurements. Utilizing an effective set of machine learning and statistical methods, it is shown that trust levels between DERAs can be established using behavioral pattern analysis. Further, it is shown that the establishment of such trust can facilitate simple secure communications routing between DERAs. Providing secure routing between DERAs enables a grid operator to maintain service level agreements with its customers, reduce the attack surface, and increase operational resiliency.
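A minimal sketch of trust-informed routing follows: per-link trust is estimated from simple behavioral statistics (anomaly rate per message volume), and messages are routed over the path that maximizes the product of link trusts. The behavioral scores, topology, and trust formula are illustrative assumptions, not the paper's models.

# Sketch of deriving per-link trust and routing over the most trusted path.
import math
import networkx as nx

# Observed behavioral statistics per DERA link: (messages seen, anomalies flagged).
observations = {
    ("DERA1", "DERA2"): (1000, 2),
    ("DERA2", "DERA3"): (800, 40),
    ("DERA1", "DERA4"): (1200, 5),
    ("DERA4", "DERA3"): (900, 3),
}

g = nx.Graph()
for (a, b), (msgs, anomalies) in observations.items():
    trust = 1.0 - anomalies / msgs               # simple behavioral trust estimate
    # Use -log(trust) so the shortest path maximizes the product of link trusts.
    g.add_edge(a, b, trust=trust, cost=-math.log(trust))

path = nx.shortest_path(g, "DERA1", "DERA3", weight="cost")
print("most trusted route:", " -> ".join(path))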
In recent years the use of security gateways (SGs) located within the electrical grid distribution network has become pervasive. SGs in substations and renewable distributed energy resource aggregators (DERAs) protect power distribution control devices from cyber and cyber-physical attacks. When encrypted communication is used within a DER network, TCP/IP packet inspection is restricted to packet header behavioral analysis, which in most cases only allows the SG to perform anomaly detection on blocks of time-series data (event windows). Packet header anomaly detection calculates the probability of the presence of a threat within an event window, but fails in cases where the unreadable encrypted payload contains the attack content. The SG system log (syslog) is a time-series record of the behavioral patterns of network users and processes accessing and transferring data through the SG network interfaces. Threatening behavioral patterns in the syslog are measurable using both anomaly detection and graph theory. In this paper it is shown that it is possible to efficiently detect and classify a potential threat within an SG syslog using lightweight anomaly detection and graph theory.
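The sketch below shows the two ingredients side by side on mock syslog records: event windows are scored by volume deviation, and a user-to-interface access view exposes unusual fan-out. The log format, thresholds, and the "intruder" entry are illustrative assumptions, not the paper's detection or classification pipeline.

# Sketch of lightweight syslog anomaly detection plus a simple graph-style view.
from collections import Counter
import statistics

syslog = [
    ("10:00", "opuser", "eth0"), ("10:00", "opuser", "eth0"),
    ("10:01", "opuser", "eth0"), ("10:01", "scada", "eth1"),
    ("10:02", "opuser", "eth0"), ("10:02", "intruder", "eth0"),
    ("10:02", "intruder", "eth1"), ("10:02", "intruder", "mgmt0"),
]

# Anomaly detection: flag event windows whose volume deviates from the mean.
per_window = Counter(minute for minute, _, _ in syslog)
mean = statistics.mean(per_window.values())
stdev = statistics.pstdev(per_window.values()) or 1.0
for window, count in per_window.items():
    if (count - mean) / stdev > 1.0:
        print("suspicious event window:", window, "events:", count)

# Graph view: users touching unusually many interfaces are candidate threats.
edges = {(user, iface) for _, user, iface in syslog}
per_user = Counter(user for user, _ in edges)
for user, n_ifaces in per_user.items():
    if n_ifaces >= 3:
        print("high fan-out user:", user, "interfaces:", n_ifaces)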
To ensure reliable and predictable service in the electrical grid it is important to gauge the level of trust present within critical components and substations. Although trust throughout a smart grid is temporal and dynamically varies according to measured states, it is possible to accurately formulate communications and service level strategies based on such trust measurements. Utilizing an effective set of machine learning and statistical methods, it is shown that establishment of trust levels between substations using behavioral pattern analysis is possible. It is also shown that the establishment of such trust can facilitate simple secure communications routing between substations.
To counter manufacturing irregularities and ensure ASIC design integrity, it is essential that robust design verification methods are employed. It is possible to ensure such integrity using ASIC static timing analysis (STA) and machine learning. In this research, uniquely devised machine and statistical learning methods that quantify anomalous variations in Register Transfer Level (RTL) or Graphic Design System II (GDSII) formats are discussed. To measure the variations in ASIC analysis data, the timing delays in relation to path electrical characteristics are explored. It is shown that semi-supervised learning techniques are powerful tools for characterizing variations within STA path data and have much potential for identifying anomalies in ASIC RTL and GDSII design data.
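A minimal sketch of the semi-supervised idea follows: a few STA paths are labeled nominal or anomalous, the remainder are left unlabeled, and a label-spreading model infers labels for the rest. The path features, labels, and model choice are synthetic, illustrative assumptions rather than the paper's datasets or methods.

# Sketch of semi-supervised characterization of STA path data.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Features per timing path: [path delay (ns), total wire length (um), fanout].
nominal = np.column_stack([rng.normal(1.0, 0.05, 200),
                           rng.normal(50, 5, 200),
                           rng.integers(1, 6, 200)])
anomalous = np.column_stack([rng.normal(1.6, 0.05, 10),
                             rng.normal(80, 5, 10),
                             rng.integers(6, 12, 10)])
X = np.vstack([nominal, anomalous])

# Only a handful of paths are labeled (0 = nominal, 1 = anomalous); the rest
# are marked -1 and inferred by the semi-supervised model.
y = np.full(len(X), -1)
y[:5] = 0
y[-3:] = 1

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y)
pred = model.transduction_

print("paths flagged anomalous:", int((pred[200:] == 1).sum()), "of 10")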