The exact information-based complexity of smooth convex minimization

@article{Drori2016TheEI,
  title={The exact information-based complexity of smooth convex minimization},
  author={Yoel Drori},
  journal={J. Complex.},
  year={2016},
  volume={39},
  pages={1-16},
  url={https://api.semanticscholar.org/CorpusID:205861966}
}

Optimizing the Efficiency of First-Order Methods for Decreasing the Gradient of Smooth Convex Functions

It is illustrated that the proposed method has a computationally efficient form similar to that of the optimized gradient method, and that its worst-case gradient bound is optimal up to a constant for large-dimensional smooth convex minimization problems under an initial bound on the cost function value.
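
For reference, the order of the resulting guarantee is (a sketch of the known rate under the initial condition on f(x_0) - f_\star, not the paper's exact constants):

\[
\|\nabla f(x_N)\| \;=\; O\!\left(\frac{\sqrt{L\,(f(x_0)-f_\star)}}{N}\right),
\]

where L is the smoothness constant and N the number of gradient evaluations.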

An optimal gradient method for smooth strongly convex minimization

This work presents an optimal gradient method for smooth strongly convex minimization, provides a constructive recipe for obtaining the method's algorithmic parameters, and illustrates that the recipe can also be used to derive methods for other optimality criteria.

An optimal lower bound for smooth convex functions

This work defines a global lower bound for smooth differentiable objectives that is optimal with respect to the collected oracle information and can be readily employed by the Gradient Method with Memory to improve its performance.

Better Worst-Case Complexity Analysis of the Block Coordinate Descent Method for Large Scale Machine Learning

A new lower bound on the information-based complexity of the BCD method is obtained, 16√3 times smaller than the best previously known, by using the Performance Estimation Problem (PEP) approach, an effective technique for analyzing the worst-case performance of first-order black-box optimization methods.
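
As a schematic illustration of the PEP idea (notation ours, not the paper's): the worst case of a fixed-step first-order method is itself posed as an optimization problem over the function class,

\[
\max_{f,\;x_0}\; f(x_N)-f_\star
\quad\text{s.t.}\quad
f \text{ is } L\text{-smooth convex},\;\;
\|x_0-x_\star\|\le R,\;\;
x_{k+1}=x_k-\tfrac{h_k}{L}\nabla f(x_k),
\]

which can then be reformulated or relaxed into a tractable semidefinite program whose optimal value is the exact worst-case bound.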

The Complexity of Finding Stationary Points with Stochastic Gradient Descent

It is shown that for nonconvex functions, the feasibility of minimizing gradients with SGD is surprisingly sensitive to the choice of optimality criterion, and that this holds even when attention is restricted to convex quadratic functions.

Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization

A new analytical worst-case guarantee is presented for the proximal point algorithm that is twice as strong as the previously best known, and the standard worst-case guarantee for the conditional gradient method is improved by more than a factor of two.
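
For context, the classical guarantee for the proximal point algorithm with step sizes \lambda_i is f(x_N) - f_\star \le \|x_0 - x_\star\|^2 / (2\sum_{i=1}^N \lambda_i); a factor-of-two improvement, as described above, would read (our reading of the claim, constants hedged)

\[
f(x_N)-f_\star \;\le\; \frac{\|x_0-x_\star\|^2}{4\sum_{i=1}^{N}\lambda_i}.
\]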

On the Properties of Convex Functions over Open Sets

We consider the class of smooth convex functions defined over an open convex set. We show that this class is essentially different from the class of smooth convex functions defined over the entire space.

Efficient first-order methods for convex minimization: a constructive approach

We describe a novel constructive technique for devising efficient first-order methods for a wide range of large-scale convex minimization settings, including smooth, non-smooth, and strongly convex problems.

Exact Worst-Case Convergence Rates of the Proximal Gradient Method for Composite Convex Minimization

We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available.
...
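
To fix ideas, here is a minimal sketch of the proximal gradient method on the classic composite instance, the lasso, where the non-smooth term is lam * ||x||_1 and its proximal operator is soft-thresholding; the names and the standard 1/L step size are our choices, not the paper's:

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, n_iters=500):
    # minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
    L = np.linalg.norm(A, 2) ** 2   # gradient Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)    # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x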

Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization

A new notion of discrepancy between functions is introduced and used to reduce problems of stochastic convex optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods.

Performance of first-order methods for smooth convex minimization: a novel approach

A novel approach is proposed for analyzing the worst-case performance of first-order black-box optimization methods; it focuses on smooth unconstrained convex minimization over the Euclidean space and derives a new and tight analytical bound on the performance of the fixed-step gradient method.
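
For example, for the fixed-step gradient method with step size 1/L, the approach yields the bound (as usually quoted; we hedge the exact constants)

\[
f(x_N)-f_\star \;\le\; \frac{L\,\|x_0-x_\star\|^2}{4N+2},
\]

which is tight: a one-dimensional worst-case function attains it.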

Information-based complexity

Information-based complexity seeks to develop general results about the intrinsic difficulty of solving problems where the available information is partial or approximate, and to apply these results to specific problems in varied disciplines.

Information-Based Complexity, Feedback and Dynamics in Convex Programming

The present work connects the intuitive notions of “information” in optimization, experimental design, estimation, and active learning to the quantitative notion of Shannon information and shows that optimization algorithms often obey the law of diminishing returns.

Smooth strongly convex interpolation and exact worst-case performance of first-order methods

We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex programs.
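
For the smooth convex case, the interpolation conditions at the heart of this approach take a well-known form: a finite set of triples (x_i, g_i, f_i) can be interpolated by an L-smooth convex function f with f(x_i) = f_i and \nabla f(x_i) = g_i if and only if, for all pairs i, j,

\[
f_i \;\ge\; f_j + \langle g_j,\, x_i - x_j\rangle + \frac{1}{2L}\,\|g_i-g_j\|^2 .
\]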

An optimal variant of Kelley’s cutting-plane method

A new variant of Kelley’s cutting-plane method for minimizing a nonsmooth convex Lipschitz-continuous function over the Euclidean space is proposed, and it is proven to attain the optimal rate of convergence for this class of problems.
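
For intuition, a minimal sketch of the classical (non-optimal) Kelley scheme over a box; the paper's variant modifies the scheme to attain the optimal rate, and the function and variable names here are ours:

import numpy as np
from scipy.optimize import linprog

def kelley(f, grad, x0, lb, ub, n_iters=50):
    # Classical cutting-plane loop: repeatedly minimize the piecewise-linear model
    # m(x) = max_i [ f(x_i) + <g_i, x - x_i> ] over the box [lb, ub].
    d = len(x0)
    cuts = []                                   # pairs (g_i, f(x_i) - <g_i, x_i>)
    x = np.asarray(x0, dtype=float)
    best = f(x)
    for _ in range(n_iters):
        g = grad(x)
        cuts.append((g, f(x) - g @ x))
        c = np.r_[np.zeros(d), 1.0]             # LP variables (x, t): minimize t
        A = np.array([np.r_[gi, -1.0] for gi, _ in cuts])  # <gi, x> - t <= -ci
        b = np.array([-ci for _, ci in cuts])
        bounds = [(l, u) for l, u in zip(lb, ub)] + [(None, None)]
        x = linprog(c, A_ub=A, b_ub=b, bounds=bounds).x[:d]
        best = min(best, f(x))
    return best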

Introductory Lectures on Convex Optimization - A Basic Course

It was in the mid-1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization, that it became more and more common for new methods to be accompanied by a complexity analysis, which was considered a better justification of their efficiency than computational experiments.

On Complexity of Stochastic Programming Problems

It is argued that two-stage (linear) stochastic programming problems with recourse can be solved to reasonable accuracy using Monte Carlo sampling techniques, while multistage stochastic programs are, in general, intractable.
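
As a toy illustration of the Monte Carlo (sample-average approximation) argument, consider the newsvendor model, the simplest two-stage problem with recourse; the instance and all names below are hypothetical:

import numpy as np

def saa_newsvendor(c, p, sample_demand, n_samples=100_000, seed=0):
    # min_x  c*x + E[ p * max(D - x, 0) ]  via sample-average approximation:
    # the SAA minimizer is the empirical (1 - c/p)-quantile of the demand sample.
    rng = np.random.default_rng(seed)
    d = sample_demand(rng, n_samples)
    return float(np.quantile(d, 1.0 - c / p))

# e.g. unit order cost c = 1, unit shortage penalty p = 4, lognormal demand:
x_star = saa_newsvendor(1.0, 4.0, lambda rng, n: rng.lognormal(3.0, 0.5, n))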