Publications

3 Results

Machine Learning for Single-Axis Tracker Fault Detection and Classification

Conference Record of the IEEE Photovoltaic Specialists Conference

Transue, Taos; Theristis, Marios; Riley, Daniel

More than 90% of utility-scale photovoltaic (PV) power plants in the US use single-axis trackers (SATs) because of their potential for substantially higher power production than fixed-tilt systems. However, SATs are subject to software misconfigurations and mechanical failures that degrade tracking accuracy. If such failures go undetected, the overall power yield of the PV power plant is reduced significantly. Robust detection and diagnosis of SAT faults are therefore needed to minimize downtime and ensure continuous, efficient operation. This work presents analytic tools based on machine learning to detect deviations in SAT tracking performance and classify SAT faults.
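As an illustration of the kind of tracking-deviation detection the abstract describes, the sketch below flags timestamps where measured tracker rotation departs from a modeled profile. The simplified linear tracking profile, the stuck-tracker fault scenario, and the 5-degree threshold are illustrative assumptions, not details from the paper:

```python
import numpy as np

def detect_tracking_deviation(expected_deg, measured_deg, threshold_deg=5.0):
    """Flag samples where measured rotation deviates from the expected model."""
    residual = np.abs(np.asarray(measured_deg) - np.asarray(expected_deg))
    return residual > threshold_deg

# Simplified expected rotation over a day: a linear east-to-west sweep
# clipped at the tracker's +/-60 degree rotation limits.
hours = np.linspace(6.0, 18.0, 25)
expected = np.clip(15.0 * (hours - 12.0), -60.0, 60.0)

# Simulate a mechanical fault: the tracker sticks at -20 degrees after 10:00.
measured = expected.copy()
measured[hours > 10.0] = -20.0

flags = detect_tracking_deviation(expected, measured)
```

A residual threshold like this only detects a deviation; the classification step in the paper would additionally assign a fault type (e.g. stuck versus misconfigured) from the shape of the residual signal.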

Benchmark Tests for IV Fitting Algorithms

Conference Record of the IEEE Photovoltaic Specialists Conference

Hansen, Clifford; Jones, Abigail R.; Transue, Taos; Theristis, Marios

We propose a set of benchmark tests for current-voltage (IV) curve fitting algorithms. Benchmark tests enable transparent and repeatable comparisons among algorithms and allow algorithm improvement to be measured over time. The absence of such tests contributes to the proliferation of fitting methods and inhibits consensus on best practices. The benchmarks include simulated curves with known parameter solutions, both with and without simulated measurement error. We implement the reference tests on an automated scoring platform and invite algorithm submissions in an open competition for accurate and performant algorithms.
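The benchmarking idea can be sketched with a toy scoring function: simulate an IV curve from known parameters, optionally add measurement noise, and score a candidate fit by the distance between its curve and the truth. The simplified single-diode form (series resistance neglected so current is explicit in voltage) and all parameter values below are illustrative assumptions, not the paper's benchmark definition:

```python
import numpy as np

def simulated_iv_curve(v, il, i0, n, rsh, vt=0.02585):
    """Single-diode current with series resistance neglected, so the
    current is explicit in voltage (thermal voltage vt at ~25 C)."""
    return il - i0 * (np.exp(v / (n * vt)) - 1.0) - v / rsh

def curve_rmse(params_a, params_b, v_grid):
    """Score two parameter sets by RMSE between their IV curves."""
    i_a = simulated_iv_curve(v_grid, *params_a)
    i_b = simulated_iv_curve(v_grid, *params_b)
    return float(np.sqrt(np.mean((i_a - i_b) ** 2)))

# Hypothetical "true" parameters: IL [A], I0 [A], ideality n, Rsh [ohm].
true_params = (8.0, 1e-9, 1.2, 500.0)
v = np.linspace(0.0, 0.65, 100)

# Simulated measurement error: additive Gaussian noise on the current.
rng = np.random.default_rng(0)
i_measured = simulated_iv_curve(v, *true_params) + rng.normal(0.0, 0.01, v.size)

# An exact recovery scores 0; any other candidate fit scores worse.
perfect = curve_rmse(true_params, true_params, v)
biased = curve_rmse(true_params, (7.9, 1e-9, 1.2, 500.0), v)
```

Because the true parameters are known by construction, a score like this can rank submitted algorithms objectively, which is the property that an automated scoring platform needs.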
