Publications

Results 1–25 of 121

Efficient proximal subproblem solvers for a nonsmooth trust-region method

Computational Optimization and Applications

Baraldi, Robert J.; Kouri, Drew P.

In [R. J. Baraldi and D. P. Kouri, Mathematical Programming, (2022), pp. 1–40], we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function. The principal expense of this method is computing a trial iterate that satisfies the so-called fraction of Cauchy decrease condition: a bound that ensures the trial iterate produces sufficient decrease of the subproblem model. In this paper, we expound on various proximal trust-region subproblem solvers that generalize traditional trust-region methods for smooth unconstrained and convex-constrained problems. We introduce a simplified spectral proximal gradient solver, a truncated nonlinear conjugate gradient solver, and a dogleg method. We compare algorithm performance on examples from data science and PDE-constrained optimization.
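
As a rough illustration of the first of these solvers, the sketch below applies a spectral (Barzilai–Borwein) proximal gradient iteration to the composite problem min f(x) + lam*||x||_1. All names here are illustrative assumptions, and the paper's solver additionally enforces the trust-region constraint and the fraction of Cauchy decrease condition; this is only the bare iteration.

```python
import numpy as np

def prox_l1(x, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def spectral_prox_grad(grad_f, x, lam, n_iter=100):
    # Spectral (Barzilai-Borwein) proximal gradient for f(x) + lam*||x||_1.
    # Minimal sketch: no trust region, no inexactness control.
    t = 1.0
    g = grad_f(x)
    for _ in range(n_iter):
        x_new = prox_l1(x - t * g, lam * t)
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        t = (s @ s) / sy if sy > 1e-12 else 1.0  # BB1 spectral step length
        x, g = x_new, g_new
    return x
```

For a quadratic misfit f(x) = 0.5*||x - b||^2, the iteration reduces to soft-thresholding b and recovers the known closed-form minimizer.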

Local convergence analysis of an inexact trust-region method for nonsmooth optimization

Optimization Letters

Kouri, Drew P.; Baraldi, Robert J.

In Baraldi and Kouri (Math. Program., 2022, pp. 1–40), we introduced an inexact trust-region algorithm for minimizing the sum of a smooth nonconvex function and a nonsmooth convex function in Hilbert space, a class of problems that is ubiquitous in data science, learning, optimal control, and inverse problems. This algorithm has demonstrated excellent performance and scalability with problem size. In this paper, we enrich the convergence analysis for this algorithm, proving strong convergence of the iterates with guaranteed rates. In particular, we demonstrate that the trust-region algorithm recovers superlinear, even quadratic, convergence rates when using a second-order Taylor approximation of the smooth objective function term.
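
The fast local rates are visible already in one dimension. The hypothetical sketch below runs a proximal Newton iteration (exact second-order Taylor model of the smooth term, followed by soft-thresholding) on the toy objective e^x + 0.5*|x|, whose minimizer is ln(0.5); the objective and function name are illustrative assumptions, not taken from the paper.

```python
import math

def prox_newton_1d(x, lam=0.5, n_iter=6):
    # Proximal Newton iteration for e^x + lam*|x| in one dimension.
    # Each step minimizes the second-order Taylor model of e^x plus the
    # nonsmooth term, which reduces to a scaled soft-thresholding.
    hist = []
    for _ in range(n_iter):
        g = h = math.exp(x)                          # f'(x) = f''(x) = e^x
        u, t = x - g / h, lam / h                    # Newton step and threshold
        x = math.copysign(max(abs(u) - t, 0.0), u)   # soft-thresholding
        hist.append(x)
    return hist
```

The error roughly squares at each step once the iterate is close, matching the quadratic rate the analysis predicts for exact second-order models.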

An inexact semismooth Newton method with application to adaptive randomized sketching for dynamic optimization

Finite Elements in Analysis and Design

Kouri, Drew P.; Antil, Harbir; Alshehri, Mohammed; Herberg, Evelyn

In many applications, one can access only inexact gradients and inexact Hessian-vector products. It is therefore essential to consider algorithms that can handle such inexact quantities while retaining guaranteed convergence to a solution. We consider an inexact, adaptive, and provably convergent semismooth Newton method for constrained optimization problems. In particular, we focus on dynamic optimization problems, which are known to be computationally expensive. We introduce a memory-efficient semismooth Newton algorithm for these problems, in which randomized matrix sketching is the source of both the efficiency and the inexactness. Applications to optimization problems constrained by partial differential equations are also considered.
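
As a minimal illustration of the sketching idea, the snippet below builds a randomized range sketch of a symmetric operator from matrix-vector products alone, using a generic Gaussian test matrix. The function name and dimensions are hypothetical; the paper embeds such sketches inside an adaptive semismooth Newton method rather than using them standalone.

```python
import numpy as np

def sketch_range(A_mv, n, k, rng):
    # Randomized range sketch: form Y = A @ Omega using only k
    # matrix-vector products, then orthonormalize.  The projector
    # Q @ Q.T gives a low-memory, inexact surrogate for the action of A.
    Omega = rng.standard_normal((n, k))
    Y = np.column_stack([A_mv(Omega[:, j]) for j in range(k)])
    Q, _ = np.linalg.qr(Y)
    return Q
```

When A has numerical rank below the sketch size k, Q @ (Q.T @ A) reproduces A to near machine precision, which is the regime where sketched Hessian information is cheap yet accurate.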

A greedy Galerkin method to efficiently select sensors for linear dynamical systems

Linear Algebra and Its Applications

Kouri, Drew P.; Udell, Madeleine; Hua, Zuhao

A key challenge in inverse problems is the selection of sensors to gather the most effective data. In this paper, we consider the problem of inferring the initial condition to a linear dynamical system and develop an efficient control-theoretical approach for greedily selecting sensors. Our method employs a Galerkin projection to reduce the size of the inverse problem, resulting in a computationally efficient algorithm for sensor selection. As a byproduct of our algorithm, we obtain a preconditioner for the inverse problem that enables the rapid recovery of the initial condition. We analyze the theoretical performance of our greedy sensor selection algorithm as well as the performance of the associated preconditioner. Finally, we verify our theoretical results on various inverse problems involving partial differential equations.
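
A generic version of greedy sensor selection can be sketched as D-optimal row selection: repeatedly pick the candidate observation row that most increases log det(I + C_S^T C_S) over the selected set S. The code below is an illustrative stand-in assuming a dense candidate matrix; the paper's contribution is to accelerate this kind of loop with a Galerkin projection that exploits the linear dynamics, which is not shown here.

```python
import numpy as np

def greedy_sensors(C, k):
    # Greedy D-optimal selection of k rows from the candidate
    # observation matrix C (one row per candidate sensor).
    n, d = C.shape
    chosen, M = [], np.eye(d)
    for _ in range(k):
        Minv = np.linalg.inv(M)
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            c = C[i]
            gain = np.log1p(c @ Minv @ c)  # log-det increment of a rank-1 update
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        M = M + np.outer(C[best], C[best])
    return chosen
```

On the first pass M = I, so the rule simply picks the most informative (largest-norm) candidate row; later passes discount directions already well observed.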

ALESQP: An augmented Lagrangian equality-constrained SQP method for optimization with general constraints

SIAM Journal on Optimization

Kouri, Drew P.; Ridzal, Denis; Antil, Harbir

We present a new algorithm for infinite-dimensional optimization with general constraints, called ALESQP. In short, ALESQP is an augmented Lagrangian method that penalizes inequality constraints and solves equality-constrained nonlinear optimization subproblems at every iteration. The subproblems are solved using a matrix-free trust-region sequential quadratic programming (SQP) method that takes advantage of iterative (i.e., inexact) linear solvers and is suitable for large-scale applications. A key feature of ALESQP is a constraint decomposition strategy that allows it to exploit problem-specific variable scalings and inner products. We analyze convergence of ALESQP under different assumptions. We show that strong accumulation points are stationary. Consequently, in finite dimensions ALESQP converges to a stationary point. In infinite dimensions we establish that weak accumulation points are feasible in many practical situations. Under additional assumptions we show that weak accumulation points are stationary. We present several infinite-dimensional examples where ALESQP shows remarkable discretization-independent performance in all of its iterative components, requiring a modest number of iterations to meet constraint tolerances at the level of machine precision. We also demonstrate a fully matrix-free solution of an infinite-dimensional problem with nonlinear inequality constraints.
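
The outer augmented Lagrangian structure can be sketched on a toy equality-constrained problem: approximately minimize the augmented Lagrangian, then apply the first-order multiplier update lam <- lam + mu*c(x). The inner solve below is plain gradient descent purely for illustration, and all names are assumptions; ALESQP instead solves each equality-constrained subproblem with a matrix-free trust-region SQP method.

```python
import numpy as np

def augmented_lagrangian(f_grad, c, c_jac, x, mu=10.0, outer=20, inner=500, lr=1e-2):
    # Outer loop: approximately minimize
    #   f(x) + lam*c(x) + (mu/2)*c(x)^2
    # (here by fixed-step gradient descent), then update the multiplier.
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):
            g = f_grad(x) + (lam + mu * c(x)) * c_jac(x)
            x = x - lr * g
        lam = lam + mu * c(x)  # first-order multiplier update
    return x, lam
```

For min x1 + x2 subject to x1^2 + x2^2 = 2, the method drives the iterate to (-1, -1) with multiplier 0.5, the values dictated by the stationarity condition 1 + 2*lam*x_i = 0.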

A proximal trust-region method for nonsmooth optimization with inexact function and gradient evaluations

Mathematical Programming

Kouri, Drew P.; Baraldi, Robert J.

Many applications require minimizing the sum of smooth and nonsmooth functions. For example, basis pursuit denoising problems in data science require minimizing a measure of data misfit plus an $\ell^1$-regularizer. Similar problems arise in the optimal control of partial differential equations (PDEs) when sparsity of the control is desired. Here, we develop a novel trust-region method to minimize the sum of a smooth nonconvex function and a nonsmooth convex function. Our method is unique in that it permits and systematically controls the use of inexact objective function and derivative evaluations. When using a quadratic Taylor model for the trust-region subproblem, our algorithm is an inexact, matrix-free proximal Newton-type method that permits indefinite Hessians. We prove global convergence of our method in Hilbert space and demonstrate its efficacy on three examples from data science and PDE-constrained optimization.
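
The overall loop can be sketched as follows for f(x) + lam*||x||_1 with the identity as Hessian model: take a prox-gradient trial step, truncate it to the trust region, and accept or reject via the actual-versus-predicted reduction ratio. This is a simplified sketch under those assumptions only; the paper's method additionally controls inexact function and gradient evaluations and admits indefinite Hessians.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding: proximal operator of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_trust_region(f, grad, x, lam, delta=1.0, n_iter=50):
    # Proximal trust-region loop for f(x) + lam*||x||_1 (Hessian model = I).
    l1 = lambda z: lam * np.sum(np.abs(z))
    for _ in range(n_iter):
        g = grad(x)
        s = soft(x - g, lam) - x                  # prox-gradient trial step
        ns = np.linalg.norm(s)
        if ns <= 1e-12:                           # prox step vanishes: stationary
            break
        if ns > delta:
            s *= delta / ns                       # truncate to the trust region
        pred = -(g @ s + 0.5 * (s @ s) + l1(x + s) - l1(x))  # model decrease
        ared = f(x) + l1(x) - f(x + s) - l1(x + s)           # actual decrease
        if pred > 0 and ared / pred >= 0.1:       # sufficient decrease: accept
            x, delta = x + s, 2.0 * delta
        else:
            delta *= 0.25                         # reject and shrink
    return x
```

For a quadratic misfit the model is exact, every trial step is accepted, and the iteration reaches the soft-thresholded solution in a few steps.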

Surrogate modeling for efficiently, accurately and conservatively estimating measures of risk

Reliability Engineering and System Safety

Jakeman, John D.; Kouri, Drew P.; Huerta, Jose G.

We present a surrogate modeling framework for conservatively estimating measures of risk from limited realizations of an expensive physical experiment or computational simulation. Risk measures combine objective probabilities with the subjective values of a decision maker to quantify anticipated outcomes. Given a set of samples, we construct a surrogate model that produces estimates of risk measures that are always greater than their empirical approximations obtained from the training data. These surrogate models limit over-confidence in reliability and safety assessments and produce estimates of risk measures that converge much faster to the true value than purely sample-based estimates. We first detail the construction of conservative surrogate models that can be tailored to a stakeholder's risk preferences and then present an approach, based on stochastic orders, for constructing surrogate models that are conservative with respect to families of risk measures. Our surrogate models include biases that permit them to conservatively estimate the target risk measures. We provide theoretical results that show that these biases decay at the same rate as the $L^2$ error in the surrogate model. Numerical demonstrations confirm that risk-adapted surrogate models do indeed overestimate the target risk measures while converging at the expected rate.
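
A crude stand-in for the bias idea: fit a least-squares surrogate, then shift it upward by the largest positive training residual so that it dominates every training sample; pointwise dominance makes any monotone risk-measure estimate computed from the surrogate conservative on the training data. This is an illustrative assumption-laden sketch, not the paper's construction, which tailors the bias to specific risk measures and stochastic orders.

```python
import numpy as np

def conservative_surrogate(X, y, degree=3):
    # Least-squares polynomial surrogate, lifted by its largest positive
    # residual so that surrogate(x_i) >= y_i at every training sample.
    V = np.vander(X, degree + 1)
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    bias = max(np.max(y - V @ coef), 0.0)  # lift above all samples
    return lambda x: np.polyval(coef, x) + bias
```

Because the lifted surrogate sits above the data, sample-based estimates of monotone risk measures (mean, quantiles, CVaR) evaluated on the surrogate never fall below their empirical counterparts.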
