Publications

10 Results

Efficient proximal subproblem solvers for a nonsmooth trust-region method

Computational Optimization and Applications

Baraldi, Robert J.; Kouri, Drew P.

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1–40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex and nonsmooth convex function. The principal expense of this method is in computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition—a bound that ensures the trial iterate produces sufficient decrease of the subproblem model. In this paper, we expound on various proximal trust-region subproblem solvers that generalize traditional trust-region methods for smooth unconstrained and convex-constrained problems. We introduce a simplified spectral proximal gradient solver, a truncated nonlinear conjugate gradient solver, and a dogleg method. We compare algorithm performance on examples from data science and PDE-constrained optimization.
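As a rough illustration of the spectral proximal gradient idea mentioned in the abstract, the sketch below implements a generic Barzilai–Borwein proximal gradient iteration for minimizing f(x) + φ(x) with φ = λ‖·‖₁. All names, parameters, and the LASSO-type test problem are illustrative assumptions, not code or data from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def spectral_prox_gradient(grad_f, prox, x0, t0, n_iter=200):
    """Proximal gradient iteration x+ = prox_t(x - t*grad_f(x)) with a
    Barzilai-Borwein (spectral) step size t (schematic sketch)."""
    x = np.array(x0, dtype=float)
    g = grad_f(x)
    t = t0
    for _ in range(n_iter):
        x_new = prox(x - t * g, t)
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:          # BB1 step; otherwise keep the previous t
            t = (s @ s) / sy
        x, g = x_new, g_new
    return x

# Illustrative LASSO-type problem: min_x 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox = lambda v, t: soft_threshold(v, lam * t)
x_hat = spectral_prox_gradient(grad_f, prox, np.zeros(10),
                               t0=1.0 / np.linalg.norm(A, 2) ** 2)
```

The spectral step adapts t from successive gradient differences rather than from a fixed Lipschitz estimate, which is what makes such solvers cheap per iteration.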

Local convergence analysis of an inexact trust-region method for nonsmooth optimization

Optimization Letters

Kouri, Drew P.; Baraldi, Robert J.

In Baraldi (Math Program 20:1–40, 2022), we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function in Hilbert space—a class of problems that is ubiquitous in data science, learning, optimal control, and inverse problems. This algorithm has demonstrated excellent performance and scalability with problem size. In this paper, we enrich the convergence analysis for this algorithm, proving strong convergence of the iterates with guaranteed rates. In particular, we demonstrate that the trust-region algorithm recovers superlinear, even quadratic, convergence rates when using a second-order Taylor approximation of the smooth objective function term.
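The role of the second-order Taylor model in recovering quadratic convergence can be seen already in the smooth scalar case. The toy function and helper below are illustrative assumptions, not an example from the paper.

```python
import math

def newton_min(fprime, fsecond, x0, n_iter=6):
    """Newton's method for smooth scalar minimization: each step exactly
    minimizes the second-order Taylor model at the current iterate."""
    xs = [x0]
    for _ in range(n_iter):
        x = xs[-1]
        xs.append(x - fprime(x) / fsecond(x))
    return xs

# Toy problem: minimize f(x) = exp(x) - 2x, whose unique minimizer is ln 2.
xs = newton_min(lambda x: math.exp(x) - 2.0, math.exp, x0=2.0)
errs = [abs(x - math.log(2.0)) for x in xs]
# Once the iterates are close, each error is roughly the square of the
# previous one: the quadratic-convergence behavior discussed above.
```

With only a first-order (gradient) model the same iteration would contract the error by a constant factor per step instead of squaring it.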

A proximal trust-region method for nonsmooth optimization with inexact function and gradient evaluations

Mathematical Programming

Kouri, Drew P.; Baraldi, Robert J.

Many applications require minimizing the sum of smooth and nonsmooth functions. For example, basis pursuit denoising problems in data science require minimizing a measure of data misfit plus an $\ell^1$-regularizer. Similar problems arise in the optimal control of partial differential equations (PDEs) when sparsity of the control is desired. Here, we develop a novel trust-region method to minimize the sum of a smooth nonconvex function and a nonsmooth convex function. Our method is unique in that it permits and systematically controls the use of inexact objective function and derivative evaluations. When using a quadratic Taylor model for the trust-region subproblem, our algorithm is an inexact, matrix-free proximal Newton-type method that permits indefinite Hessians. We prove global convergence of our method in Hilbert space and demonstrate its efficacy on three examples from data science and PDE-constrained optimization.
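The accept/reject mechanism of a proximal trust-region method can be sketched in greatly simplified form. The scaled-identity Hessian model, the ℓ¹-denoising test problem, and all names below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_trust_region(f, grad_f, phi, prox, x0, t=0.5, delta0=10.0,
                      eta=0.1, n_iter=60):
    """Toy proximal trust-region loop: the trial step is a proximal
    gradient step truncated to the trust region, accepted or rejected by
    comparing actual decrease to the decrease predicted by a simple
    quadratic-plus-nonsmooth model with Hessian approximation B = I/t."""
    x, delta = np.array(x0, dtype=float), delta0
    F = lambda z: f(z) + phi(z)
    for _ in range(n_iter):
        g = grad_f(x)
        s = prox(x - t * g, t) - x               # proximal gradient trial step
        ns = np.linalg.norm(s)
        if ns > delta:                           # truncate to the trust region
            s *= delta / ns
        pred = F(x) - (f(x) + g @ s + 0.5 * (s @ s) / t + phi(x + s))
        ared = F(x) - F(x + s)                   # actual decrease
        if pred > 0 and ared >= eta * pred:      # accept and expand the radius
            x, delta = x + s, 2.0 * delta
        else:                                    # reject and shrink the radius
            delta *= 0.25
    return x

# Illustrative l1-denoising problem with a known closed-form solution:
# min_x 0.5*||x - b||^2 + lam*||x||_1  is solved by soft_threshold(b, lam).
b = np.array([2.0, -1.0, 0.3, 0.05, -0.5])
lam = 0.1
x_hat = prox_trust_region(lambda x: 0.5 * np.sum((x - b) ** 2),
                          lambda x: x - b,
                          lambda x: lam * np.sum(np.abs(x)),
                          lambda v, t: soft_threshold(v, lam * t),
                          np.zeros(5))
```

The ratio test `ared >= eta * pred` is what lets such methods tolerate crude (or inexact) models: a poor step is simply rejected and the radius shrunk.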
