Journal Article

Benchmarking ADMM in nonconvex NLPs

Rodriguez, Jose S.; Nicholson, Bethany L.; Laird, Carl D.; Zavala, Victor M.

We study connections between the alternating direction method of multipliers (ADMM), the classical method of multipliers (MM), and progressive hedging (PH). We use these connections to derive benchmark metrics and strategies for monitoring and accelerating convergence, and to help explain why ADMM and PH are capable of solving complex nonconvex NLPs. Specifically, we observe that ADMM is an inexact version of MM and approaches its performance when multiple coordination steps are performed. In addition, we use the observation that PH is a specialization of ADMM and borrow the Lyapunov function and primal-dual feasibility metrics used in ADMM to explain why PH is capable of solving nonconvex NLPs. This analysis also highlights that specialized PH schemes can be derived to tackle a wider range of stochastic programs and even other problem classes. Our exposition is tutorial in nature and seeks to motivate algorithmic improvements and new decomposition strategies.
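For context, a minimal sketch of the standard scaled-form ADMM iteration and the primal-dual residual metrics commonly used to monitor it (a generic textbook formulation, not an excerpt from the article; \(f\), \(g\), \(A\), \(B\), \(c\), and \(\rho\) denote the usual ADMM problem data for \(\min\, f(x) + g(z)\) subject to \(Ax + Bz = c\)):

\begin{align*}
x^{k+1} &= \operatorname*{arg\,min}_x \; f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{k} - c + u^{k}\|_2^2, \\
z^{k+1} &= \operatorname*{arg\,min}_z \; g(z) + \tfrac{\rho}{2}\,\|Ax^{k+1} + Bz - c + u^{k}\|_2^2, \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{align*}

with primal residual \(r^{k+1} = Ax^{k+1} + Bz^{k+1} - c\) and dual residual \(s^{k+1} = \rho\, A^{\top}B\,(z^{k+1} - z^{k})\). Performing several \(x\)/\(z\) coordination passes before updating the scaled multiplier \(u\) drives the primal residual toward zero within each outer iteration, recovering the behavior of the classical method of multipliers; this is the inexactness relationship described in the abstract.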