Publications


On mixed-integer programming formulations for the unit commitment problem

INFORMS Journal on Computing

Knueven, Bernard; Ostrowski, James; Watson, Jean-Paul

We provide a comprehensive overview of mixed-integer programming formulations for the unit commitment (UC) problem. UC formulations have been an especially active area of research over the past 12 years due to their practical importance in power grid operations, and this paper serves as a capstone for this line of work. We additionally provide publicly available reference implementations of all formulations examined. We computationally test existing and novel UC formulations on a suite of instances drawn from both academic and real-world data sources. Driven by our computational experience from this and previous work, we contribute some additional formulations for both generator production upper bounds and piecewise linear production costs. By composing new UC formulations using existing components found in the literature and new components introduced in this paper, we demonstrate that performance can be significantly improved—and in the process, we identify a new state-of-the-art UC formulation.
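
The piecewise linear production cost component mentioned above admits a compact illustration. As a minimal sketch (toy numbers and a hypothetical `pw_cost` helper, not a formulation from the paper), a convex piecewise-linear cost curve can be evaluated as the pointwise maximum of its affine pieces:

```python
# Sketch: a convex piecewise-linear production cost curve represented as
# the pointwise max of affine pieces; many UC formulations encode costs
# above minimum power output in exactly this shape.
def pw_cost(p, pieces):
    """pieces: list of (slope, intercept) affine underestimators."""
    return max(m * p + b for m, b in pieces)

# Hypothetical 3-piece cost curve (marginal cost increases with output).
pieces = [(10.0, 0.0), (15.0, -250.0), (25.0, -1250.0)]
```

The breakpoints fall where adjacent pieces intersect (here at p = 50 and p = 100), so the marginal cost is nondecreasing in output, which is what makes the max-of-affine representation valid.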

More Details

Modeling Flexible Generator Operating Regions via Chance-Constrained Stochastic Unit Commitment

Computational Management Science

Singh, Bismark; Knueven, Bernard; Watson, Jean-Paul

Here, we introduce a novel chance-constrained stochastic unit commitment model to address uncertainty in renewables' production in power systems operation. For most thermal generators, underlying technical constraints that are universally treated as "hard" by deterministic unit commitment models are in fact based on engineering judgments, such that system operators can periodically request operation outside these limits in non-nominal situations, e.g., to ensure reliability. We incorporate this practical consideration into a chance-constrained stochastic unit commitment model, specifically by infrequently allowing minor deviations from the minimum and maximum thermal generator power output levels. We demonstrate that an extensive form of our model is computationally tractable for medium-sized power systems given modest numbers of scenarios for renewables' production. We show that the model can potentially save significant annual production costs by allowing infrequent and controlled violation of the traditionally hard bounds imposed on thermal generator production limits. Finally, we conduct a sensitivity analysis of optimal solutions to our model under two restricted regimes and observe similar qualitative results.

More Details

A novel matching formulation for startup costs in unit commitment

Mathematical Programming Computation

Knueven, Bernard; Watson, Jean-Paul

We present a novel formulation for startup cost computation in the unit commitment problem (UC). Both our proposed formulation and existing formulations in the literature are placed in a formal, theoretical dominance hierarchy based on their respective linear programming relaxations. Our proposed formulation is tested empirically against existing formulations on large-scale UC instances drawn from real-world data. While requiring more variables than the current state-of-the-art formulation, our proposed formulation requires fewer constraints, and is empirically demonstrated to be as tight as a perfect formulation for startup costs. This tightening can reduce the computational burden in comparison to existing formulations, especially for UC instances with large reserve margins and high penetration levels of renewables.
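
Startup costs in UC are typically tiered by how long a unit has been offline. The following is a generic sketch of that tiered structure (hypothetical `startup_cost` helper and tier values, not the paper's matching formulation), which is the quantity the competing formulations all compute:

```python
def startup_cost(hours_off, tiers):
    """tiers: list of (min_hours_off, cost) sorted by min_hours_off.
    Returns the cost of the most expensive applicable startup category:
    the longer a unit has been down, the costlier the startup."""
    cost = 0.0
    for lag, c in tiers:
        if hours_off >= lag:
            cost = c
    return cost

# Hypothetical generator: hot start within 4 h, warm within 12 h, else cold.
tiers = [(1, 500.0), (4, 1200.0), (12, 3000.0)]
```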

More Details

Approximating two-stage chance-constrained programs with classical probability bounds

Optimization Letters

Singh, Bismark; Watson, Jean-Paul

We consider a joint-chance constraint (JCC) as a union of sets, and approximate this union using bounds from classical probability theory. When these bounds are used in an optimization model constrained by the JCC, we obtain corresponding upper and lower bounds on the optimal objective function value. We compare the strength of these bounds against each other under two different sampling schemes, and observe that a larger correlation between the uncertainties tends to result in more computationally challenging optimization models. We also observe the same set of inequalities to provide the tightest upper and lower bounds in our computational experiments.
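
The classical bounds referred to above include the elementary ones: for events A_1, ..., A_m, max_i P(A_i) <= P(A_1 ∪ ... ∪ A_m) <= sum_i P(A_i) (Boole's inequality). A minimal sketch (illustrative function and data, not the paper's code) estimates both bounds from scenario indicator data:

```python
def union_probability_bounds(indicators):
    """indicators: list of scenarios, each a list of 0/1 flags, one flag
    per individual violation event A_i. Returns (lower, upper) bounds on
    P(A_1 u ... u A_m) using only the estimated marginals P(A_i):
    max_i P(A_i) <= P(union) <= min(1, sum_i P(A_i))."""
    n, m = len(indicators), len(indicators[0])
    marginals = [sum(row[i] for row in indicators) / n for i in range(m)]
    return max(marginals), min(1.0, sum(marginals))

# Four equally likely scenarios, two events; true P(union) is 0.75 here.
scenarios = [[1, 0], [0, 1], [0, 0], [1, 1]]
```

Sharper inequalities (e.g., Bonferroni-type bounds using pairwise joint probabilities) tighten this interval at the cost of more data and more constraints in the resulting optimization model.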

More Details


Global Solution Strategies for the Network-Constrained Unit Commitment Problem with AC Transmission Constraints

IEEE Transactions on Power Systems

Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

We propose a novel global solution algorithm for the network-constrained unit commitment problem that incorporates a nonlinear alternating current (ac) model of the transmission network, which is a nonconvex mixed-integer nonlinear programming problem. Our algorithm is based on the multi-tree global optimization methodology, which iterates between a mixed-integer lower-bounding problem and a nonlinear upper-bounding problem. We exploit the mathematical structure of the unit commitment problem with ac power flow constraints and leverage second-order cone relaxations, piecewise outer approximations, and optimization-based bounds tightening to provide a globally optimal solution at convergence. Numerical results on four benchmark problems illustrate the effectiveness of our algorithm, both in terms of convergence rate and solution quality.

More Details


Evaluating demand response opportunities for power systems resilience using MILP and MINLP Formulations

AIChE Journal

Bynum, Michael L.; Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

While peak shaving is commonly used to reduce power costs, chemical process facilities that can reduce power consumption on demand during emergencies (e.g., extreme weather events) bring additional value through improved resilience. For process facilities to effectively negotiate demand response (DR) contracts and make investment decisions regarding flexibility, they need to quantify their additional value to the grid. We present a grid-centric mixed-integer stochastic programming framework to determine the value of DR for improving grid resilience in place of capital investments that can be cost prohibitive for system operators. We formulate problems using both a linear approximation and a nonlinear alternating current power flow model. Our numerical results with both models demonstrate that DR can be used to reduce the capital investment necessary for resilience, increasing the value that chemical process facilities bring through DR. Furthermore, the linearized model often underestimates the amount of DR needed in our case studies.

More Details

Stochastic Optimization with Risk Aversion for Virtual Power Plant Operations: A Rolling Horizon Control

IET Generation, Transmission & Distribution

Castillo, Andrea; Flicker, Jack D.; Hansen, Clifford; Watson, Jean-Paul; Johnson, Jay

While the concept of aggregating and controlling renewable distributed energy resources (DERs) to provide grid services is not new, increasing policy support of DER market participation has driven research and development in algorithms to pool DERs for economically viable market participation. Sandia National Laboratories recently undertook a three-year research program to create the components of a real-world virtual power plant (VPP) that can simultaneously participate in multiple markets. Our research extends current state-of-the-art rolling horizon control through the application of stochastic programming with risk aversion at various time resolutions. Our rolling horizon control consists of (1) day-ahead optimization to produce an hourly aggregate schedule for the VPP operator and (2) sub-hourly optimization for real-time dispatch of each VPP subresource. Both optimization routines leverage a two-stage stochastic program (SP) with risk aversion, and integrate the most up-to-date forecasts to generate probabilistic scenarios in real operating time. Our results demonstrate the benefits to the VPP operator of constructing a stochastic solution regardless of the weather. In more extreme weather, applying risk optimization strategies can dramatically increase the financial viability of the VPP. As a result, the methodologies presented here can be further tailored for optimal control of any VPP asset fleet and its operational requirements.
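
The rolling horizon control loop described above (plan over a window, commit only the first decision, then roll forward) has a simple generic skeleton. This is an illustrative sketch with a toy policy, not the VPP optimization itself, which uses two-stage stochastic programs at each step:

```python
def rolling_horizon(state, forecasts, horizon, plan, step):
    """Skeleton of rolling-horizon control: plan over the next `horizon`
    forecast periods, implement only the first decision, advance the
    state, and repeat with updated information."""
    actions = []
    for t in range(len(forecasts)):
        window = forecasts[t:t + horizon]
        action = plan(state, window)[0]   # only the first decision is used
        state = step(state, action)
        actions.append(action)
    return actions

# Toy stand-ins (illustrative only): steer the state toward the mean of
# the forecast window; the state accumulates the applied actions.
plan = lambda s, w: [sum(w) / len(w) - s for _ in w]
step = lambda s, a: s + a
```

In the paper's setting, `plan` would be a risk-averse stochastic program over probabilistic scenarios regenerated from the latest forecasts at each roll.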

More Details

Mixed-integer programming models for optimal constellation scheduling given cloud cover uncertainty

European Journal of Operational Research

Valicka, Christopher G.; Garcia, Deanna; Staid, Andrea; Watson, Jean-Paul; Hackebeil, Gabriel; Rathinam, Sivakumar; Ntaimo, Lewis

We introduce the problem of scheduling observations on a constellation of remote sensors, to maximize the aggregate quality of the collections obtained. While automated tools exist to schedule remote sensors, they are often based on heuristic scheduling techniques, which typically fail to provide bounds on the quality of the resultant schedules. To address this issue, we first introduce a novel deterministic mixed-integer programming (MIP) model for scheduling a constellation of one to n satellites, which relies on extensive pre-computations associated with orbital propagators and sensor collection simulators to mitigate model size and complexity. Our MIP model captures realistic and complex constellation-target geometries, with solutions providing optimality guarantees. We then extend our base deterministic MIP model to obtain two-stage and three-stage stochastic MIP models that proactively schedule to maximize expected collection quality across a set of scenarios representing cloud cover uncertainty. Our experimental conclusions on instances of one and two satellites demonstrate that our stochastic MIP models yield significantly improved collection quality relative to our base deterministic MIP model. We further demonstrate that commercial off-the-shelf MIP solvers can produce provably optimal or near-optimal schedules from these models in time frames suitable for sensor operations.

More Details

On Mixed Integer Programming Formulations for the Unit Commitment Problem

Optimization Online Repository

Knueven, Bernard; Watson, Jean-Paul; Ostrowski, James

We provide a comprehensive overview of mixed integer programming formulations for the unit commitment problem (UC). UC formulations have been an especially active area of research over the past twelve years, due to their practical importance in power grid operations, and this paper serves as a capstone for this line of work. We additionally provide publicly available reference implementations of all formulations examined. We computationally test existing and novel UC formulations on a suite of instances drawn from both academic and real-world data sources. Driven by our computational experience from this and previous work, we contribute some additional formulations for both production upper bounds and piecewise linear production costs. By composing new UC formulations using existing components found in the literature and new components introduced in this paper, we demonstrate that performance can be significantly improved, and in the process, we identify a new state-of-the-art UC formulation.

More Details

Tightening McCormick Relaxations Toward Global Solution of the ACOPF Problem

IEEE Transactions on Power Systems

Bynum, Michael L.; Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

In this work, we show that a strong upper bound on the objective of the alternating current optimal power flow (ACOPF) problem can significantly improve the effectiveness of optimization-based bounds tightening (OBBT) on a number of relaxations. We additionally compare the performance of relaxations of the ACOPF problem, including the rectangular form without reference bus constraints, the rectangular form with reference bus constraints, and the polar form. We find that relaxations of the rectangular form significantly strengthen existing relaxations if reference bus constraints are included. Overall, relaxations of the polar form perform the best. However, neither the rectangular nor the polar form dominates the other. In conclusion, with these strategies, we are able to reduce the optimality gap to less than 0.1% on all but 5 NESTA test cases with up to 300 buses by performing OBBT alone.

More Details


A Novel Matching Formulation for Startup Costs in Unit Commitment

Optimization Online Repository

Knueven, Bernard; Watson, Jean-Paul

We present a novel formulation for startup cost computation in the unit commitment problem (UC). Both the proposed formulation and existing formulations in the literature are placed in a formal, theoretical dominance hierarchy based on their respective linear programming relaxations. The proposed formulation is tested empirically against existing formulations on large-scale unit commitment instances drawn from real-world data. While requiring more variables than the current state-of-the-art formulation, our proposed formulation requires fewer constraints, and is empirically demonstrated to be as tight as a perfect formulation for startup costs. This tightening reduces the computational burden in comparison to existing formulations, especially for UC instances with large variability in net-load due to renewables production.

More Details

Chance-Constrained Optimization for Critical Infrastructure Protection

Singh, Bismark; Watson, Jean-Paul

Stochastic optimization deals with making highly reliable decisions under uncertainty. Chance constraints are a crucial tool of stochastic optimization for developing mathematical optimization models; they form the backbone of many important national security data science applications, including critical infrastructure resiliency, cyber security, power system operations, and disaster relief management. However, existing algorithms to solve chance-constrained optimization models are severely limited by problem size and structure. In this investigative study, we (i) develop new algorithms to approximate chance-constrained optimization models, (ii) demonstrate the application of chance constraints to a national security problem, and (iii) investigate related stochastic optimization problems. We believe our work will pave the way for new research in stochastic optimization, as well as help secure national infrastructure against unforeseen attacks.

More Details

Proactive Operations and Investment Planning via Stochastic Optimization to Enhance Power Systems Extreme Weather Resilience

Optimization Online Repository

Bynum, Michael L.; Staid, Andrea; Arguello, Bryan; Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

We present novel stochastic optimization models to improve power systems resilience to extreme weather events. We consider proactive redispatch, transmission line hardening, and transmission line capacity increases as alternatives for mitigating expected load shed due to extreme weather. Our model is based on linearized or "DC" optimal power flow, similar to models in widespread use by independent system operators (ISOs) and regional transmission operators (RTOs). Our computational experiments indicate that proactive redispatch alone can reduce the expected load shed by as much as 25% relative to standard economic dispatch. This resiliency enhancement strategy requires no capital investments and is implementable by ISOs and RTOs solely through operational adjustments. We additionally demonstrate that transmission line hardening and increases in transmission capacity can, in limited quantities, be effective strategies to further enhance power grid resiliency, although at significant capital investment cost. We perform a cross validation analysis to demonstrate the robustness of proposed recommendations. Our proposed model can be augmented to incorporate a variety of other operational and investment resilience strategies, or combination of such strategies.

More Details

Stochastic unit commitment performance considering Monte Carlo wind power scenarios

2018 International Conference on Probabilistic Methods Applied to Power Systems, PMAPS 2018 - Proceedings

Rachunok, Benjamin; Staid, Andrea; Watson, Jean-Paul; Woodruff, David L.; Yang, Dominic

Stochastic versions of the unit commitment problem have been advocated for addressing the uncertainty presented by high levels of wind power penetration. However, little work has been done to study trade-offs between computational complexity and the quality of solutions obtained as the number of probabilistic scenarios is varied. Here, we describe extensive experiments using real publicly available wind power data from the Bonneville Power Administration. Solution quality is measured by re-enacting day-ahead reliability unit commitment (which selects the thermal units that will be used each hour of the next day) and real-time economic dispatch (which determines generation levels) for an enhanced WECC-240 test system in the context of a production cost model simulator; outputs from the simulation, including cost, reliability, and computational performance metrics, are then analyzed. Unsurprisingly, we find that both solution quality and computational difficulty increase with the number of probabilistic scenarios considered. However, we find unexpected transitions in computational difficulty at a specific threshold in the number of scenarios, and report on key trends in solution performance characteristics. Our findings are novel in that we examine these tradeoffs using real-world wind power data in the context of an out-of-sample production cost model simulation, and are relevant for both practitioners interested in deploying and researchers interested in developing scalable solvers for stochastic unit commitment.

More Details

Exploiting Identical Generators in Unit Commitment

IEEE Transactions on Power Systems

Watson, Jean-Paul; Knueven, Bernard

We present sufficient conditions under which thermal generators can be aggregated in mixed-integer linear programming (MILP) formulations of the unit commitment (UC) problem, while maintaining feasibility and optimality for the original disaggregated problem. Aggregating thermal generators with identical characteristics (e.g., minimum/maximum power output, minimum up/down time, and cost curves) into a single unit reduces redundancy in the search space induced by both exact symmetry (permutations of generator schedules) and certain classes of mutually nondominated solutions. We study the impact of aggregation on two large-scale UC instances: one from the academic literature and the other based on real-world operator data. Our computational tests demonstrate that, when present, identical generators can negatively affect the performance of modern MILP solvers on UC formulations. Furthermore, we show that our reformulation of the UC MILP through aggregation is an effective method for mitigating this source of computational difficulty.
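
The grouping step behind such aggregation can be sketched in a few lines. This is a simplified illustration with a hypothetical fleet; the paper's sufficient conditions involve more than the trait equality shown here, and the aggregate unit's commitment variable becomes an integer count of on units:

```python
from collections import defaultdict

def aggregate_identical(generators):
    """Group generators whose operating characteristics are identical.
    Each group can be replaced by a single aggregate unit, removing the
    symmetry of permuting schedules among interchangeable units."""
    groups = defaultdict(list)
    for name, traits in generators.items():
        groups[traits].append(name)
    return dict(groups)

# Hypothetical fleet: traits = (p_min, p_max, min_up, min_down).
fleet = {
    "G1": (50, 200, 4, 4),
    "G2": (50, 200, 4, 4),
    "G3": (30, 100, 2, 2),
}
```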

More Details

A multitree approach for global solution of ACOPF problems using piecewise outer approximations

Computers and Chemical Engineering

Liu, Jianfeng; Bynum, Michael L.; Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

Electricity markets rely on the rapid solution of the optimal power flow (OPF) problem to determine generator power levels and set nodal prices. Traditionally, the OPF problem has been formulated using linearized, approximate models, ignoring nonlinear alternating current (AC) physics. These approaches do not guarantee global optimality or even feasibility in the real ACOPF problem. We introduce an outer-approximation approach to solve the ACOPF problem to global optimality based on alternating solution of upper- and lower-bounding problems. The lower-bounding problem is a piecewise relaxation based on strong second-order cone relaxations of the ACOPF, and these piecewise relaxations are selectively refined at each major iteration through increased variable domain partitioning. Our approach is able to efficiently solve all but one of the test cases considered to an optimality gap below 0.1%. Furthermore, this approach opens the door for global solution of MINLP problems with AC power flow equations.
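
The alternation between lower- and upper-bounding problems follows a standard skeleton. The sketch below uses stand-in bound sequences purely for illustration (the real lower step solves a refinable piecewise SOC relaxation, and the upper step a feasible nonlinear problem):

```python
def multitree(lower_step, upper_step, tol=1e-3, max_iter=50):
    """Skeleton of a multi-tree scheme: alternate a (refinable) relaxed
    lower-bounding problem with a feasible upper-bounding problem until
    the relative optimality gap closes."""
    best_ub, lb = float("inf"), float("-inf")
    for _ in range(max_iter):
        lb = max(lb, lower_step())        # refinement never weakens the bound
        best_ub = min(best_ub, upper_step())
        if best_ub - lb <= tol * max(1.0, abs(best_ub)):
            break
    return lb, best_ub

# Stand-in sequences: the lower bound tightens as the piecewise relaxation
# is refined; the upper bound here is already at the optimum.
lbs = iter([0.5, 0.8, 0.95, 0.999])
lb, ub = multitree(lambda: next(lbs), lambda: 1.0, tol=1e-2)
```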

More Details

Chance-constrained economic dispatch with renewable energy and storage

Computational Optimization and Applications

Safta, Cosmin; Cheng, Jianqiang; Najm, Habib N.; Pinar, Ali P.; Chen, Richard L.Y.; Watson, Jean-Paul

Increasing penetration levels of renewables have transformed how power systems are operated. High levels of uncertainty in production make it increasingly difficult to guarantee operational feasibility; instead, constraints may only be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, we require that wind energy contribute at least a prespecified proportion of the total demand and that the scheduled wind energy is deliverable with high probability. We develop an approximate partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed satisfaction tolerance, and approximately 100 times faster than standard sample average approximation. Finally, the improved efficiency of our PSAA approach enables solution of a larger WECC-240 test system in minutes.
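
For context, the plain sample average approximation that PSAA improves upon replaces a chance constraint P(g(x, xi) <= 0) >= 1 - eps with an empirical frequency over samples. A minimal sketch with a toy constraint (hypothetical numbers, not the paper's dispatch model):

```python
import random

def chance_satisfied(x, samples, g, eps):
    """Sample-average check of P(g(x, xi) <= 0) >= 1 - eps:
    count the fraction of sampled scenarios in which x is feasible."""
    ok = sum(1 for xi in samples if g(x, xi) <= 0)
    return ok / len(samples) >= 1 - eps

random.seed(0)
samples = [random.gauss(0, 1) for _ in range(10000)]
# Toy constraint: scheduled wind x must not exceed available wind 5 + xi.
g = lambda x, xi: x - (5 + xi)
```

Embedding this check in an optimization model requires one binary indicator per sample, which is what makes large sample sizes expensive and motivates partial-sampling approximations.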

More Details

pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

Mathematical Programming Computation

Watson, Jean-Paul; Siirola, John D.; Nicholson, Bethany; Zavala, Victor M.; Biegler, Lorenz T.

We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
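
What "automatic discretization" means can be shown with a hand-rolled example. The sketch below is conceptual only and does not use the pyomo.dae API: it applies a backward-Euler scheme to the differential constraint dx/dt = -x, producing one algebraic equation per grid point, which is the kind of finite-dimensional problem pyomo.dae generates automatically:

```python
import math

# Conceptual sketch (not pyomo.dae code): backward Euler turns the
# differential constraint dx/dt = -x into one algebraic equation per
# grid point, x[k] = x[k-1] + h * (-x[k]), which solves to
# x[k] = x[k-1] / (1 + h).
def discretize(x0, h, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] / (1.0 + h))
    return xs

trajectory = discretize(1.0, 0.01, 100)  # approximates x(t) = exp(-t) on [0, 1]
```

In pyomo.dae the continuous model is stated once and a transformation selects and applies a scheme like this across all differential constraints, so the model and its discretization stay decoupled.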

More Details

Strengthened SOCP Relaxations for ACOPF with McCormick Envelopes and Bounds Tightening

Computer Aided Chemical Engineering

Bynum, Michael L.; Castillo, Andrea; Watson, Jean-Paul; Laird, Carl

The solution of the Optimal Power Flow (OPF) and Unit Commitment (UC) problems (i.e., determining generator schedules and set points that satisfy demands) is critical for efficient and reliable operation of the electricity grid. For computational efficiency, the alternating current OPF (ACOPF) problem is usually formulated with a linearized transmission model, often referred to as the DCOPF problem. However, these linear approximations do not guarantee global optimality or even feasibility for the true nonlinear alternating current (AC) system. Nonlinear AC power flow models can and should be used to improve model fidelity, but successful global solution of problems with these models requires the availability of strong relaxations of the AC optimal power flow constraints. In this paper, we use McCormick envelopes to strengthen the well-known second-order cone (SOC) relaxation of the ACOPF problem. With this improved relaxation, we can further impose tight bounds on the voltage at the reference bus, and we demonstrate the effectiveness of this for improved bounds tightening. We present results on the optimality gap of both the base SOC relaxation and our Strengthened SOC (SSOC) relaxation for the National Information and Communications Technology Australia (NICTA) Energy System Test Case Archive (NESTA). For the cases where the SOC relaxation yields an optimality gap of more than 0.1%, the SSOC relaxation with bounds tightening further reduces the optimality gap by an average of 67% and ultimately reduces it to less than 0.1% for 58% of all the NESTA cases considered. Stronger relaxations enable more efficient global solution of the ACOPF problem and can improve the computational efficiency of MINLP problems with AC power flow constraints, e.g., unit commitment.
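
The McCormick envelope used here is the standard convex relaxation of a bilinear term w = x*y over box bounds. A small sketch (standard formulas, toy bounds) evaluates the interval on w that the four McCormick inequalities imply at a given point:

```python
def mccormick_bounds(xl, xu, yl, yu, x, y):
    """McCormick envelope of w = x*y over the box [xl, xu] x [yl, yu]:
    the four linear inequalities imply, at the point (x, y), the bounds
    returned here as (lower, upper)."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper
```

Tighter variable bounds (e.g., from bounds tightening) shrink the box, which directly shrinks this envelope; that interaction is why bounds tightening and McCormick strengthening reinforce each other.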

More Details

Constructing probabilistic scenarios for wide-area solar power generation

Solar Energy

Watson, Jean-Paul; Woodruff, David L.; Deride Silva, Julio A.; Slevogt, Gerrit; Silva-Monroy, Cesar

Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions. Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.
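
The quantile step of the pipeline can be sketched with a hypothetical helper using raw empirical percentiles (the paper instead fits smooth nonparametric error densities with epi-splines before extracting quantiles, and segments the history first):

```python
import statistics

def quantile_scenarios(errors, point_forecast, probs=(0.1, 0.5, 0.9), cap=None):
    """Build scenario values by shifting a point forecast by empirical
    quantiles of historical forecast errors; optionally clip at an upper
    bound on output (e.g., an estimated capacity limit)."""
    qs = statistics.quantiles(errors, n=100)   # empirical percentiles 1..99
    scen = []
    for p in probs:
        e = qs[int(round(p * 100)) - 1]        # p must map to a whole percentile
        v = max(0.0, point_forecast + e)
        scen.append(min(v, cap) if cap is not None else v)
    return scen

# Toy history: 100 symmetric forecast errors around zero.
errors = list(range(-49, 51))
```

The `cap` argument reflects the upper-bound issue discussed in the abstract: solar output cannot exceed the plant's clear-sky maximum, so scenarios must be truncated accordingly.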

More Details


Generating short-term probabilistic wind power scenarios via nonparametric forecast error density estimators

Wind Energy

Watson, Jean-Paul; Staid, Andrea; Wets, Roger J.B.; Woodruff, David L.

Forecasts of available wind power are critical in key electric power systems operations planning problems, including economic dispatch and unit commitment. Such forecasts are necessarily uncertain, limiting the reliability and cost effectiveness of operations planning models based on a single deterministic or “point” forecast. A common approach to address this limitation involves the use of a number of probabilistic scenarios, each specifying a possible trajectory of wind power production, with associated probability. We present and analyze a novel method for generating probabilistic wind power scenarios, leveraging available historical information in the form of forecasted and corresponding observed wind power time series. We estimate non-parametric forecast error densities, specifically using epi-spline basis functions, allowing us to capture the skewed and non-parametric nature of error densities observed in real-world data. We then describe a method to generate probabilistic scenarios from these basis functions that allows users to control for the degree to which extreme errors are captured. We compare the performance of our approach to the current state-of-the-art considering publicly available data associated with the Bonneville Power Administration, analyzing aggregate production of a number of wind farms over a large geographic region. Finally, we discuss the advantages of our approach in the context of specific power systems operations planning problems: stochastic unit commitment and economic dispatch. Here, our methodology is embodied in the joint Sandia/University of California Davis Prescient software package for assessing and analyzing stochastic operations strategies.

More Details

Efficient Uncertainty Quantification in Stochastic Economic Dispatch

IEEE Transactions on Power Systems

Safta, Cosmin; Chen, Richard L.Y.; Najm, Habib N.; Pinar, Ali P.; Watson, Jean-Paul

Stochastic economic dispatch models address uncertainties in forecasts of renewable generation output by considering a finite number of realizations drawn from a stochastic process model, typically via Monte Carlo sampling. Accurate evaluations of expectations or higher order moments for quantities of interest, e.g., generating cost, can require a prohibitively large number of samples. We propose an alternative to Monte Carlo sampling based on polynomial chaos expansions. These representations enable efficient and accurate propagation of uncertainties in model parameters, using sparse quadrature methods. We also use Karhunen-Loève expansions for efficient representation of uncertain renewable energy generation that follows geographical and temporal correlations derived from historical data at each wind farm. Considering expected production cost, we demonstrate that the proposed approach can yield several orders of magnitude reduction in computational cost for solving stochastic economic dispatch relative to Monte Carlo sampling, for a given target error threshold.
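The efficiency gain from quadrature over Monte Carlo sampling can be illustrated with a minimal one-dimensional sketch. The paper uses multi-dimensional sparse quadrature with Karhunen-Loève inputs; the quadratic cost curve and node count below are illustrative assumptions, not the paper's model:

```python
import math
import random

# Hypothetical quadratic generating-cost curve under a normally
# distributed renewable forecast error z ~ N(0, 1).
def cost(z):
    return (10.0 + z) ** 2  # exact expectation is 100 + 1 = 101

# 3-point Gauss-Hermite rule mapped to the standard normal measure:
# nodes 0, +/-sqrt(3) with weights 2/3, 1/6, 1/6 (exact up to degree 5).
nodes = [0.0, math.sqrt(3.0), -math.sqrt(3.0)]
weights = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]
quad_estimate = sum(w * cost(x) for w, x in zip(weights, nodes))

# Monte Carlo needs many samples to reach comparable accuracy.
random.seed(0)
mc_estimate = sum(cost(random.gauss(0, 1)) for _ in range(100_000)) / 100_000

print(quad_estimate)  # ~101.0 (exact for this quadratic) with only 3 cost evaluations
print(mc_estimate)    # close to 101, but after 100,000 evaluations
```

The quadrature estimate matches the true expectation with three model evaluations, whereas the Monte Carlo estimate still carries sampling error after 100,000 evaluations; this is the cost gap the polynomial chaos surrogate exploits.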

More Details

Co-Planning of Investments in Transmission and Merchant Energy Storage

IEEE Transactions on Power Systems

Dvorkin, Yury; Fernandez-Blanco, Ricardo; Wang, Yishen; Xu, Bolun; Kirschen, Daniel S.; Pandzic, Hrvoje; Watson, Jean-Paul; Silva-Monroy, Cesar A.

We observe that suitably located energy storage systems are able to collect significant revenue through spatiotemporal arbitrage in congested transmission networks. However, transmission capacity expansion can significantly reduce or eliminate this source of revenue. Investment decisions by merchant storage operators must, therefore, account for the consequences of potential investments in transmission capacity by central planners. This paper presents a tri-level model to co-optimize merchant electrochemical storage siting and sizing with centralized transmission expansion planning. The upper level takes the merchant storage owner's perspective and aims to maximize the lifetime profits of the storage, while ensuring a given rate of return on investments. The middle level optimizes centralized decisions about transmission expansion. The lower level simulates market clearing. The proposed model is recast as a bi-level equivalent, which is solved using the column-and-constraint generation technique. A case study based on a 240-bus, 448-line testbed of the Western Electricity Coordinating Council interconnection demonstrates the usefulness of the proposed tri-level model.

More Details

Does risk aversion affect transmission and generation planning? A Western North America case study

Energy Economics

Watson, Jean-Paul; Munoz, Francisco D.; Van Der Weijde, Adriaan H.; Hobbs, Benjamin F.

We investigate the effects of risk aversion on optimal transmission and generation expansion planning in a competitive and complete market. To do so, we formulate a stochastic model that minimizes a weighted average of expected transmission and generation costs and their conditional value at risk (CVaR). We show that the solution of this optimization problem is equivalent to the solution of a perfectly competitive risk-averse Stackelberg equilibrium, in which a risk-averse transmission planner maximizes welfare after which risk-averse generators maximize profits. This model is then applied to a 240-bus representation of the Western Electricity Coordinating Council, in which we examine the impact of risk aversion on levels and spatial patterns of generation and transmission investment. Although the impact of risk aversion remains small at an aggregate level, state-level impacts on generation and transmission investment can be significant, which emphasizes the importance of explicit consideration of risk aversion in planning models.
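On a discrete scenario set, the weighted expected-cost/CVaR objective described above reduces to a short computation. The following sketch uses illustrative scenario costs and weights (assumptions for exposition, not data from the paper):

```python
def cvar(costs, probs, alpha):
    """Expected cost over the worst (1 - alpha) probability tail."""
    tail = 1.0 - alpha
    covered, acc = 0.0, 0.0
    for c, p in sorted(zip(costs, probs), reverse=True):
        take = min(p, tail - covered)
        acc += c * take
        covered += take
        if covered >= tail - 1e-12:
            break
    return acc / tail

costs = [10.0, 12.0, 15.0, 30.0]   # hypothetical scenario expansion costs
probs = [0.25, 0.25, 0.25, 0.25]   # scenario probabilities
expected = sum(c * p for c, p in zip(costs, probs))

lam = 0.5  # weight on risk; lam = 0 recovers risk-neutral planning
objective = (1 - lam) * expected + lam * cvar(costs, probs, alpha=0.75)
print(expected, cvar(costs, probs, 0.75), objective)  # 16.75 30.0 23.375
```

As lam moves from 0 to 1, the planner shifts weight from the expected cost (16.75) toward the worst-tail cost (30.0), which is the mechanism by which risk aversion reshapes investment decisions.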

More Details

BBPH: Using progressive hedging within branch and bound to solve multi-stage stochastic mixed integer programs

Operations Research Letters

Watson, Jean-Paul; Woodruff, David L.; Barnett, Jason

Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in the mixed-integer case. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent “wrapper” for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.

More Details

Ensuring Profitability of Energy Storage

IEEE Transactions on Power Systems

Dvorkin, Yury; Fernandez-Blanco, Ricardo; Kirschen, Daniel S.; Pandzic, Hrvoje; Watson, Jean-Paul; Silva-Monroy, Cesar A.

Energy storage (ES) is a pivotal technology for dealing with the challenges caused by the integration of renewable energy sources. It is expected that a decrease in the capital cost of storage will eventually spur the deployment of large amounts of ES. These devices will provide transmission services, such as spatiotemporal energy arbitrage, i.e., storing surplus energy from intermittent renewable sources for later use by loads while reducing the congestion in the transmission network. This paper proposes a bilevel program that determines the optimal location and size of storage devices to perform this spatiotemporal energy arbitrage. This method aims to simultaneously reduce the system-wide operating cost and the cost of investments in ES while ensuring that merchant storage devices collect sufficient profits to fully recover their investment cost. The usefulness of the proposed method is illustrated using a representative case study of the ISO New England system with a prospective wind generation portfolio.

More Details

Final Report LDRD Project 173090 An Advanced Decision Framework for Power Grid Resiliency

Watson, Jean-Paul

The purpose of this report is to briefly survey the major contributions of the FY14–FY16 LDRD project titled “An Advanced Decision Framework for Power Grid Resiliency”. The primary contributions of the project are described in detailed technical reports and journal articles, references to which we provide in a bibliography.

More Details

Dynamic Multi-Sensor Multi-Mission Optimal Planning Tool

Valicka, Christopher G.; Rowe, Stephen; Zou, Simon; Mitchell, Scott A.; Irelan, William R.; Pollard, Eric L.; Garcia, Deanna; Hackebeil, Gabriel; Staid, Andrea; Foulk, James W.; Watson, Jean-Paul; Hart, William E.; Rathinam, Sivakumar; Ntaimo, Lewis

Remote sensing systems have firmly established a role in providing immense value to commercial industry, scientific exploration, and national security. Continued maturation of sensing technology has reduced the cost of deploying highly-capable sensors while at the same time increased reliance on the information these sensors can provide. The demand for time on these sensors is unlikely to diminish. Coordination of next-generation sensor systems, larger constellations of satellites, unmanned aerial vehicles, ground telescopes, etc. is prohibitively complex for existing heuristics-based scheduling techniques. The project was a two-year collaboration spanning multiple Sandia centers and included a partnership with Texas A&M University. We have developed algorithms and software for collection scheduling, remote sensor field-of-view pointing models, and bandwidth-constrained prioritization of sensor data. Our approach followed best practices from the operations research and computational geometry communities. These models provide several advantages over state-of-the-art techniques. In particular, our approach is more flexible compared to heuristics that tightly couple models and solution techniques. First, our mixed-integer linear models afford a rigorous analysis so that sensor planners can quantitatively describe a schedule relative to the best possible. Optimal or near-optimal schedules can be produced with commercial solvers in operational run-times. These models can be modified and extended to incorporate different scheduling and resource constraints and objective function definitions. Further, we have extended these models to proactively schedule sensors under weather and ad hoc collection uncertainty. This approach stands in contrast to existing deterministic schedulers which assume a single future weather or ad hoc collection scenario.
The field-of-view pointing algorithm produces a mosaic with the fewest number of images required to fully cover a region of interest. The bandwidth-constrained algorithms find the highest priority information that can be transmitted. All of these are based on mixed-integer linear programs so that, in the future, collection scheduling, field-of-view, and bandwidth prioritization can be combined into a single problem. Experiments conducted using the developed models, commercial solvers, and benchmark datasets have demonstrated that proactively scheduling against uncertainty regularly and significantly outperforms deterministic schedulers.

More Details

Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs

Mathematical Programming

Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; Watson, Jean-Paul; Wets, Roger J.B.; Woodruff, David L.

We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. We report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
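The bound itself is inexpensive to evaluate: with scenario weights satisfying the probability-weighted zero-sum condition that PH maintains, the probability-weighted sum of independently solved, weight-penalized scenario subproblems is a valid lower bound. A toy two-scenario sketch (the quadratic scenario costs and the weight values are illustrative assumptions, not instances from the paper):

```python
# Two equally likely scenarios; first-stage decision x is an integer in [0, 5].
# Scenario cost f_s(x) = (x - d_s)^2 with targets d = (1, 4).
P = [0.5, 0.5]
D = [1, 4]

def scenario_min(s, w):
    # Solve scenario s independently with linear penalty w * x.
    return min((x - D[s]) ** 2 + w * x for x in range(6))

def lower_bound(weights):
    # Valid whenever sum_s p_s * w_s == 0, a condition PH preserves.
    assert abs(sum(p * w for p, w in zip(P, weights))) < 1e-12
    return sum(p * scenario_min(s, w)
               for s, (p, w) in enumerate(zip(P, weights)))

true_opt = min(sum(p * (x - d) ** 2 for p, d in zip(P, D)) for x in range(6))
print(true_opt)                    # 2.5
print(lower_bound([0.0, 0.0]))     # 0.0  (weak bound from zero weights)
print(lower_bound([-1.5, 1.5]))    # 1.75 (tighter bound from nonzero weights)
```

Better weights tighten the bound toward the true optimum of 2.5, which is why evaluating it contemporaneously with PH iterations gives a running optimality certificate.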

More Details

Cutting planes for the multistage stochastic unit commitment problem

Mathematical Programming

Watson, Jean-Paul; Guan, Yongpei; Jiang, Ruiwei

As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Finally, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.

More Details

Strengthened MILP Formulation for Certain Gas Turbine Unit Commitment Problems

IEEE Transactions on Power Systems

Watson, Jean-Paul; Pan, Kai; Guan, Yongpei; Wang, Jianhui

In this paper, we derive a strengthened MILP formulation for certain gas turbine unit commitment problems in which the ramping rates are no smaller than the minimum generation amounts. Gas turbines of this type can usually start up faster and have larger ramping rates than traditional coal-fired power plants. Recently, the number of such gas turbines has increased significantly due to affordable gas prices and the scheduling flexibility they offer to accommodate intermittent renewable energy generation. In this study, several new families of strong valid inequalities are developed to help reduce the computational time required to solve these types of problems. Validity and facet-defining proofs are provided for certain inequalities. Finally, numerical experiments on a modified IEEE 118-bus system and power system data based on recent studies verify the effectiveness of applying our formulation to model and solve this type of gas turbine unit commitment problem, including reducing the computational time to obtain an optimal solution, or obtaining a much smaller optimality gap than default CPLEX when the time limit is reached without an optimal solution.

More Details

Modeling Bilevel Programs in Pyomo

Hart, William E.; Watson, Jean-Paul; Siirola, John D.; Chen, Richard L.Y.

We describe new capabilities for modeling bilevel programs within the Pyomo modeling software. These capabilities include new modeling components that represent subproblems, modeling transformations for re-expressing models with bilevel structure in other forms, and meta-solvers that apply transformations and then perform optimization on the resulting model. We illustrate the breadth of Pyomo's modeling capabilities for bilevel programs, and we describe how Pyomo's meta-solvers can perform local and global optimization of bilevel programs.

More Details

Security-Constrained Unit Commitment with Linearized AC Optimal Power Flow

IEEE Transactions on Power Systems

Watson, Jean-Paul; Silva-Monroy, Cesar A.; Castillo, Anya; Laird, Carl; O'Neill, Richard

We propose a mathematical programming-based approach to optimize the security-constrained unit commitment problem with a full AC transmission network representation. Our approach is based on our previously introduced successive linear programming (SLP) approach to solving the nonlinear, nonconvex AC optimal power flow (ACOPF) problem. By linearizing the ACOPF, we are able to leverage powerful commercial mixed-integer solvers to iteratively optimize the combined unit commitment plus ACOPF model. We demonstrate our approach on six-bus, IEEE RTS-96, and IEEE 118-bus test systems. We perform a comparative analysis of the relative impacts of single-bus, DC, and AC transmission network models on the unit commitment and dispatch solutions and their associated costs.

More Details

Optimizing Your Options: Extracting the Full Economic Value of Transmission When Planning Under Uncertainty

Electricity Journal

Watson, Jean-Paul; Munoz, Francisco D.; Hobbs, Benjamin F.

The anticipated magnitude of needed investments in new transmission infrastructure in the U.S. requires that these be allocated in a way that maximizes the likelihood of achieving society's goals for power system operation. The use of state-of-the-art optimization tools can identify cost-effective investment alternatives, extract more benefits out of transmission expansion portfolios, and account for the huge economic, technology, and policy uncertainties that the power sector faces over the next several decades.

More Details

Integration of progressive hedging and dual decomposition in stochastic integer programs

Operations Research Letters

Watson, Jean-Paul; Guo, Ge; Hackebeil, Gabriel; Ryan, Sarah M.; Woodruff, David L.

We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, we derive a method to transform weights from PH into Lagrange multipliers for DD. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. We report computational results on server location and unit commitment instances.

More Details

A scalable solution framework for stochastic transmission and generation planning problems

Computational Management Science

Munoz, Francisco D.; Watson, Jean-Paul

Current commercial software tools for transmission and generation investment planning have limited stochastic modeling capabilities. Because of this limitation, electric power utilities generally rely on scenario planning heuristics to identify potentially robust and cost effective investment plans for a broad range of system, economic, and policy conditions. Several research studies have shown that stochastic models perform significantly better than deterministic or heuristic approaches, in terms of overall costs. However, there is a lack of practical solution techniques to solve such models. In this paper we propose a scalable decomposition algorithm to solve stochastic transmission and generation planning problems, respectively considering discrete and continuous decision variables for transmission and generation investments. Given stochasticity restricted to loads and wind, solar, and hydro power output, we develop a simple scenario reduction framework based on a clustering algorithm, to yield a more tractable model. The resulting stochastic optimization model is decomposed on a scenario basis and solved using a variant of the Progressive Hedging (PH) algorithm. We perform numerical experiments using a 240-bus network representation of the Western Electricity Coordinating Council in the US. Although convergence of PH to an optimal solution is not guaranteed for mixed-integer linear optimization models, we find that it is possible to obtain solutions with acceptable optimality gaps for practical applications. Our numerical simulations are performed both on a commodity workstation and on a high-performance cluster. The results indicate that large-scale problems can be solved to a high degree of accuracy in at most 2 h of wall clock time.
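The scenario-reduction step can be sketched with a minimal one-dimensional Lloyd's-algorithm clustering. The paper's framework clusters multi-attribute load and renewable trajectories; the data, deterministic initialization, and k value here are illustrative assumptions:

```python
def reduce_scenarios(values, k, iters=20):
    """Cluster 1-D scenario values; return (representative, probability) pairs."""
    vals = sorted(values)
    # Simple deterministic initialization: spread centers over the range.
    centers = [vals[0], vals[-1]] if k == 2 else vals[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vals:
            j = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    # Each representative carries the probability mass of its cluster.
    return [(centers[j], len(clusters[j]) / len(vals)) for j in range(k)]

# Six hypothetical load scenarios collapse to two representatives with weights.
loads = [100.0, 102.0, 98.0, 200.0, 205.0, 195.0]
print(reduce_scenarios(loads, k=2))  # [(100.0, 0.5), (200.0, 0.5)]
```

The reduced set of weighted representatives replaces the full scenario set in the stochastic model, trading a small approximation for a much smaller decomposed problem.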

More Details

A Scalable Solution Framework for Stochastic Transmission and Generation Planning Problems. Draft

Munoz, Francisco D.; Watson, Jean-Paul

Current commercial software tools for transmission and generation investment planning have limited stochastic modeling capabilities. Because of this limitation, electric power utilities generally rely on scenario planning heuristics to identify potentially robust and cost effective investment plans for a broad range of system, economic, and policy conditions. Several research studies have shown that stochastic models perform significantly better than deterministic or heuristic approaches, in terms of overall costs. However, there is a lack of practical solution approaches to solve such models. In this paper we propose a scalable decomposition algorithm to solve stochastic transmission and generation planning problems, respectively considering discrete and continuous decision variables for transmission and generation investments. Given stochasticity restricted to loads and wind, solar, and hydro power output, we develop a simple scenario reduction framework based on a clustering algorithm, to yield a more tractable model. The resulting stochastic optimization model is decomposed on a scenario basis and solved using a variant of the Progressive Hedging (PH) algorithm. We perform numerical experiments using a 240-bus network representation of the Western Electricity Coordinating Council in the US. Although convergence of PH to an optimal solution is not guaranteed for mixed-integer linear optimization models, we find that it is possible to obtain solutions with acceptable optimality gaps for practical applications. Our numerical simulations are performed both on a commodity workstation and on a high-performance cluster. The results indicate that large-scale problems can be solved to a high degree of accuracy in at most two hours of wall clock time.

More Details

Toward using surrogates to accelerate solution of stochastic electricity grid operations problems

2014 North American Power Symposium, NAPS 2014

Safta, Cosmin; Chen, Richard L.Y.; Najm, Habib N.; Pinar, Ali P.; Watson, Jean-Paul

Stochastic unit commitment models typically handle uncertainties in forecast demand by considering a finite number of realizations from a stochastic process model for loads. Accurate evaluations of expectations or higher moments for the quantities of interest require a prohibitively large number of model evaluations. In this paper we propose an alternative approach based on using surrogate models valid over the range of the forecast uncertainty. We consider surrogate models based on Polynomial Chaos expansions, constructed using sparse quadrature methods. Considering expected generation cost, we demonstrate that the approach can lead to several orders of magnitude reduction in computational cost relative to using Monte Carlo sampling on the original model, for a given target error threshold.

More Details

Conceptual Framework for Developing Resilience Metrics for the Electricity, Oil, and Gas Sectors in the United States

Watson, Jean-Paul; Guttromson, Ross; Silva-Monroy, Cesar A.; Jeffers, Robert; Jones, Katherine; Ellison, James; Rath, Charles; Gearhart, Jared L.; Jones, Dean A.; Corbet Jr., Thomas F.; Hanley, Charles; La Jenkins, Tonya N.

This report has been written for the Department of Energy’s Energy Policy and Systems Analysis Office to inform their writing of the Quadrennial Energy Review in the area of energy resilience. The topics of measuring and increasing energy resilience are addressed, including definitions, means of measuring, and analytic methodologies that can be used to make decisions for policy, infrastructure planning, and operations. A risk-based framework is presented which provides a standard definition of a resilience metric. Additionally, a process is identified which explains how the metrics can be applied. Research and development needs that would further accelerate the resilience of energy infrastructures are also articulated.

More Details

Encoding and Analyzing Aerial Imagery Using Geospatial Semantic Graphs

Rintoul, Mark D.; Watson, Jean-Paul; Mclendon, William; Parekh, Ojas D.; Martin, Shawn

While collection capabilities have yielded an ever-increasing volume of aerial imagery, analytic techniques for identifying patterns in and extracting relevant information from this data have seriously lagged. The vast majority of imagery is never examined, due to a combination of the limited bandwidth of human analysts and limitations of existing analysis tools. In this report, we describe an alternative, novel approach to both encoding and analyzing aerial imagery, using the concept of a geospatial semantic graph. The advantages of our approach are twofold. First, intuitive templates can be easily specified in terms of the domain language in which an analyst converses. These templates can be used to automatically and efficiently search large graph databases, for specific patterns of interest. Second, unsupervised machine learning techniques can be applied to automatically identify patterns in the graph databases, exposing recurring motifs in imagery. We illustrate our approach using real-world data for Anne Arundel County, Maryland, and compare the performance of our approach to that of an expert human analyst.

More Details

Quantifiably secure power grid operation, management, and evolution

Watson, Jean-Paul; Silva-Monroy, Cesar A.

This report summarizes findings and results of the Quantifiably Secure Power Grid Operation, Management, and Evolution LDRD. The focus of the LDRD was to develop decision-support technologies to enable rational and quantifiable risk management for two key grid operational timescales: scheduling (day-ahead) and planning (month-to-year-ahead). Risk or resiliency metrics are foundational in this effort. The 2003 Northeast Blackout investigative report stressed the criticality of enforceable metrics for system resiliency: the grid's ability to satisfy demands subject to perturbation. However, we neither have well-defined risk metrics for addressing the pervasive uncertainties in a renewable energy era, nor decision-support tools for their enforcement, which severely impacts efforts to rationally improve grid security. For day-ahead unit commitment, decision-support tools must account for topological security constraints, loss-of-load (economic) costs, and supply and demand variability, especially given high renewables penetration. For long-term planning, transmission and generation expansion must ensure realized demand is satisfied for various projected technological, climate, and growth scenarios. The decision-support tools investigated in this project paid particular attention to tail-oriented risk metrics for explicitly addressing high-consequence events. Historically, decision-support tools for the grid have considered expected cost minimization, largely ignoring risk and instead penalizing loss-of-load through artificial parameters. The technical focus of this work was the development of scalable solvers for enforcing risk metrics. Advanced stochastic programming solvers were developed to address generation and transmission expansion and unit commitment, minimizing cost subject to pre-specified risk thresholds. Particular attention was paid to renewables, where security critically depends on production and demand prediction accuracy.
To address this concern, powerful filtering techniques for spatio-temporal measurement assimilation were used to develop short-term predictive stochastic models. To achieve uncertainty-tolerant solutions, very large numbers of scenarios must be simultaneously considered. One focus of this work was investigating ways of reasonably reducing this number.

More Details

Formulating and analyzing multi-stage sensor placement problems

Water Distribution Systems Analysis 2010 - Proceedings of the 12th International Conference, WDSA 2010

Watson, Jean-Paul; Hart, William E.; Woodruff, David L.; Murray, Regan

The optimization of sensor placements is a key aspect of the design of contaminant warning systems for automatically detecting contaminants in water distribution systems. Although researchers have generally assumed that all sensors are placed at the same time, in practice sensor networks will likely grow and evolve over time. For example, limitations for a water utility's budget may dictate a staged, incremental deployment of sensors over many years. We describe optimization formulations of multi-stage sensor placement problems. The objective of these formulations includes an explicit trade-off between the value of the initially deployed and final sensor networks. This trade-off motivates the deployment of sensors in initial stages of the deployment schedule, even though these choices typically lead to a solution that is suboptimal when compared to placing all sensors at once. These multi-stage sensor placement problems can be represented as mixed-integer programs, and we illustrate the impact of this trade-off using standard commercial solvers. We also describe a multi-stage formulation that models budget uncertainty, expressed as a tree of potential budget scenarios through time. Budget uncertainty is used to assess and hedge against risks due to a potentially incomplete deployment of a planned sensor network. This formulation is a multi-stage stochastic mixed-integer program, a class that is notoriously difficult to solve. We apply standard commercial solvers to small-scale test problems, enabling us to effectively analyze multi-stage sensor placement problems subject to budget uncertainties, and assess the impact of accounting for such uncertainty relative to a deterministic multi-stage model. © 2012 ASCE.

More Details

Optimization of Large-Scale Heterogeneous System-of-Systems Models

Gray, Genetha A.; Hart, William E.; Hough, Patricia D.; Parekh, Ojas D.; Phillips, Cynthia A.; Siirola, John D.; Swiler, Laura P.; Watson, Jean-Paul

Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

More Details

Sensor placement for municipal water networks

Phillips, Cynthia A.; Boman, Erik G.; Carr, Robert D.; Hart, William E.; Berry, Jonathan; Watson, Jean-Paul; Hart, David; Mckenna, Sean A.; Riesen, Lee A.

We consider the problem of placing a limited number of sensors in a municipal water distribution network to minimize the impact over a given suite of contamination incidents. In its simplest form, the sensor placement problem is a p-median problem that has structure extremely amenable to exact and heuristic solution methods. We describe the solution of real-world instances using integer programming, local search, or a Lagrangian method. The Lagrangian method is necessary for solution of large problems on small PCs. We summarize a number of other heuristic methods for effectively addressing issues such as sensor failures, tuning sensors based on local water quality variability, and problem size/approximation quality tradeoffs. These algorithms are incorporated into the TEVA-SPOT toolkit, a software suite that the US Environmental Protection Agency has used and is using to design contamination warning systems for US municipal water systems.
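At its core, the p-median placement model selects p locations minimizing expected contamination impact; for tiny instances the structure can even be enumerated directly. The impact matrix below is an illustrative assumption, and real instances are solved with the integer-programming and heuristic methods described above:

```python
from itertools import combinations

# impact[i][j]: damage of contamination incident i if the first sensor to
# detect it is at candidate location j (hypothetical numbers).
impact = [
    [5, 2, 9, 7],
    [8, 6, 1, 4],
    [3, 7, 8, 2],
]
prob = [1 / 3, 1 / 3, 1 / 3]  # incident likelihoods
p = 2                         # sensor budget

def expected_impact(locations):
    # Each incident is handled by its best (lowest-impact) chosen location.
    return sum(pi * min(row[j] for j in locations)
               for pi, row in zip(prob, impact))

best = min(combinations(range(4), p), key=expected_impact)
print(best, expected_impact(best))  # (1, 3), roughly 2.667
```

Real networks have thousands of candidate locations and incidents, so enumeration is replaced by the MIP, local search, and Lagrangian approaches in TEVA-SPOT, but the objective evaluated is exactly this one.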

More Details

Computing confidence intervals on solution costs for stochastic grid generation expansion problems

Watson, Jean-Paul

A range of core operations and planning problems for the national electrical grid are naturally formulated and solved as stochastic programming problems, which minimize expected costs subject to a range of uncertain outcomes relating to, for example, uncertain demands or generator output. A critical decision issue relating to such stochastic programs is: How many scenarios are required to ensure a specific error bound on the solution cost? Scenarios are the key mechanism used to sample from the uncertainty space, and the number of scenarios drives computational difficulty. We explore this question in the context of a long-term grid generation expansion problem, using a bounding procedure introduced by Mak, Morton, and Wood. We discuss experimental results using problem formulations independently minimizing expected cost and down-side risk. Our results indicate that we can use a surprisingly small number of scenarios to yield tight error bounds in the case of expected cost minimization, which has key practical implications. In contrast, error bounds in the case of risk minimization are significantly larger, suggesting more research is required in this area in order to achieve rigorous solutions for decision makers.
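The batch-means step underlying the bounding procedure of Mak, Morton, and Wood is straightforward: solve the sampled problem over several independent scenario batches and form a normal-approximation confidence interval on the resulting optimal values. A sketch of just that interval computation (the batch values are illustrative assumptions, not results from this report):

```python
import math

# Optimal objective values from independent scenario batches (hypothetical).
batch_values = [101.2, 99.8, 100.5, 100.1, 100.9, 99.5, 100.7, 100.3]

n = len(batch_values)
mean = sum(batch_values) / n
s2 = sum((v - mean) ** 2 for v in batch_values) / (n - 1)  # sample variance
half_width = 1.96 * math.sqrt(s2 / n)  # approximate 95% interval half-width

print(f"{mean:.3f} +/- {half_width:.3f}")
```

Shrinking the half-width to a target tolerance, by adding batches or enlarging each batch, is what determines "how many scenarios are enough" in this framework.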

More Details

Pyomo: Python Optimization Modeling Objects

Siirola, John D.; Watson, Jean-Paul; Hart, William E.

The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.

More Details

PySP: modeling and solving stochastic mixed-integer programs in Python

Watson, Jean-Paul

Although stochastic programming is a powerful tool for modeling decision-making under uncertainty, various impediments have historically prevented its widespread use. One key factor involves the ability of non-specialists to easily express stochastic programming problems as extensions of deterministic models, which are often formulated first. A second key factor relates to the difficulty of solving stochastic programming models, particularly the general mixed-integer, multi-stage case. Intricate, configurable, and parallel decomposition strategies are frequently required to achieve tractable run-times. We simultaneously address both of these factors in our PySP software package, which is part of the COIN-OR Coopr open-source Python project for optimization. To formulate a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree with associated uncertain parameters in the Pyomo open-source algebraic modeling language. Given these two models, PySP provides two paths for solution of the corresponding stochastic program. The first alternative involves writing the extensive form and invoking a standard deterministic (mixed-integer) solver. For more complex stochastic programs, we provide an implementation of Rockafellar and Wets' Progressive Hedging algorithm. Our particular focus is on the use of Progressive Hedging as an effective heuristic for approximating general multi-stage, mixed-integer stochastic programs. By leveraging the combination of a high-level programming language (Python) and the embedding of the base deterministic model in that language (Pyomo), we are able to provide completely generic and highly configurable solver implementations. PySP has been used by a number of research groups, including our own, to rapidly prototype and solve difficult stochastic programming problems.
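
The Progressive Hedging loop at the heart of PySP can be sketched on a toy two-stage problem: each scenario subproblem is min_x (x − t_s)², so the nonanticipative optimum is the probability-weighted mean of the targets. This is a minimal, solver-free illustration of the algorithm's structure, not PySP's actual implementation; all names are illustrative, and the closed-form subproblem solve stands in for a Pyomo model solved by a real subsolver:

```python
def progressive_hedging(targets, probs=None, rho=1.0, iters=100):
    """Toy Progressive Hedging (Rockafellar & Wets) sketch. Each iteration:
    (1) solve each scenario's augmented subproblem
        min_x (x - t_s)^2 + w_s*x + (rho/2)*(x - xbar)^2  (closed form here),
    (2) average the scenario solutions into the implementable xbar,
    (3) update the dual weights w_s to penalize disagreement with xbar."""
    n = len(targets)
    probs = probs or [1.0 / n] * n
    w = [0.0] * n
    xbar = sum(p * t for p, t in zip(probs, targets))
    for _ in range(iters):
        # per-scenario augmented subproblem: stationarity gives
        # 2(x - t) + w + rho*(x - xbar) = 0
        xs = [(2 * t - wi + rho * xbar) / (2 + rho)
              for t, wi in zip(targets, w)]
        xbar = sum(p * x for p, x in zip(probs, xs))
        w = [wi + rho * (x - xbar) for wi, x in zip(w, xs)]
    return xbar
```

For mixed-integer subproblems the subproblem solves are no longer convex and PH becomes the heuristic described in the abstract, but the hedging structure is the same.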

More Details

Limited-memory techniques for sensor placement in water distribution networks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Hart, William E.; Berry, Jonathan; Boman, Erik G.; Phillips, Cynthia A.; Riesen, Lee A.; Watson, Jean-Paul

The practical utility of optimization technologies is often impacted by factors that reflect how these tools are used in practice, including whether various real-world constraints can be adequately modeled, the sophistication of the analysts applying the optimizer, and related environmental factors (e.g. whether a company is willing to trust predictions from computational models). Other features are less appreciated, but of equal importance in terms of dictating the successful use of optimization. These include the scale of problem instances, which in practice drives the development of approximate solution techniques, and constraints imposed by the target computing platforms. End-users often lack state-of-the-art computers, and thus runtime and memory limitations are often a significant, limiting factor in algorithm design. When coupled with large problem scale, the result is a significant technological challenge. We describe our experience developing and deploying both exact and heuristic algorithms for placing sensors in water distribution networks to mitigate damage due to intentional or accidental introduction of contaminants. The target computing platforms for this application have motivated limited-memory techniques that can optimize large-scale sensor placement problems. © 2008 Springer Berlin Heidelberg.

More Details

The TEVA-SPOT toolkit for drinking water contaminant warning system design

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Hart, William E.; Berry, Jonathan; Boman, Erik G.; Murray, Regan; Phillips, Cynthia A.; Riesen, Lee A.; Watson, Jean-Paul

We present the TEVA-SPOT Toolkit, a sensor placement optimization tool developed within the USEPA TEVA program. The TEVA-SPOT Toolkit provides a sensor placement framework that facilitates research in sensor placement optimization and enables the practical application of sensor placement solvers to real-world contaminant warning system (CWS) design applications. This paper provides an overview of its key features, and then illustrates how this tool can be flexibly applied to solve a variety of different types of sensor placement problems. © 2008 ASCE.

More Details

A hybrid constraint programming / local search approach to the job-shop scheduling problem

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Watson, Jean-Paul; Beck, J.C.

Since their introduction, local search algorithms - and in particular tabu search algorithms - have consistently represented the state-of-the-art in solution techniques for the classical job-shop scheduling problem. This is despite the availability of powerful search and inference techniques for scheduling problems developed by the constraint programming community. In this paper, we introduce a simple hybrid algorithm for job-shop scheduling that leverages both the fast, broad search capabilities of modern tabu search and the scheduling-specific inference capabilities of constraint programming. The hybrid algorithm significantly improves the performance of a state-of-the-art tabu search for the job-shop problem, and represents the first instance in which a constraint programming algorithm obtains performance competitive with the best local search algorithms. Further, the variability in solution quality obtained by the hybrid is significantly lower than that of pure local search algorithms. As an illustrative example, we identify twelve new best-known solutions on Taillard's widely studied benchmark problems. © 2008 Springer-Verlag Berlin Heidelberg.
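
The short-term-memory mechanism that makes tabu search effective can be shown on a generic problem. The sketch below is not the scheduling-specific algorithm from the paper: it minimizes an arbitrary cost function over bit vectors with a single-bit-flip neighborhood, a tabu list of recently flipped bits, and the standard aspiration criterion (a tabu move is permitted if it improves on the best solution found so far). All names are illustrative:

```python
from collections import deque

def tabu_search(cost, n_bits, tenure=3, iters=200):
    """Minimal tabu search: best non-tabu single-bit flip each iteration,
    with recently flipped bits held tabu for `tenure` iterations and an
    aspiration override for moves beating the incumbent."""
    x = [0] * n_bits
    best_x, best_c = x[:], cost(x)
    tabu = deque(maxlen=tenure)          # short-term memory of flipped bits
    for _ in range(iters):
        move, move_c = None, None
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            if i in tabu and c >= best_c:  # tabu and no aspiration: skip
                continue
            if move_c is None or c < move_c:
                move, move_c = i, c
        if move is None:                   # entire neighborhood tabu
            break
        x[move] ^= 1                       # accept best move even if worsening
        tabu.append(move)
        if move_c < best_c:
            best_x, best_c = x[:], move_c
    return best_x, best_c
```

Accepting the best available move even when it worsens the current solution is what lets tabu search escape local optima; the tabu list prevents it from immediately undoing that move and cycling. The hybrid described above periodically hands such incumbent solutions to a constraint programming component for inference-driven improvement.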

More Details

LDRD final report: robust analysis of large-scale combinatorial applications

Hart, William E.; Carr, Robert D.; Phillips, Cynthia A.; Watson, Jean-Paul

Discrete models of large, complex systems like national infrastructures and complex logistics frameworks naturally incorporate many modeling uncertainties. Consequently, there is a clear need for optimization techniques that can robustly account for risks associated with modeling uncertainties. This report summarizes the progress of the Late-Start LDRD 'Robust Analysis of Large-scale Combinatorial Applications'. This project developed new heuristics for solving robust optimization models, and developed new robust optimization models for describing uncertainty scenarios.

More Details

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 reference manual

Brown, Shannon L.; Griffin, Joshua D.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Adams, Brian M.; Dunlavy, Daniel M.; Gay, David M.; Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul; Eddy, John P.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

More Details

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual

Brown, Shannon L.; Griffin, Joshua D.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Adams, Brian M.; Dunlavy, Daniel M.; Gay, David M.; Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul; Eddy, John P.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

More Details

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual

Swiler, Laura P.; Giunta, Anthony A.; Hart, William E.; Watson, Jean-Paul; Eddy, John P.; Griffin, Joshua D.; Hough, Patricia D.; Kolda, Tamara G.; Martinez-Canales, Monica L.; Williams, Pamela J.; Eldred, Michael; Brown, Shannon L.; Adams, Brian M.; Dunlavy, Daniel M.; Gay, David M.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

More Details
Results 1–200 of 214