Topic

Rate of convergence

About: Rate of convergence is a research topic. Over the lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.


Papers
Journal ArticleDOI
TL;DR: A multistage AC-SA algorithm is introduced that possesses an optimal rate of convergence for solving strongly convex SCO problems, in terms of its dependence not only on the target accuracy but also on a number of problem parameters and the selection of initial points.
Abstract: In this paper we study new stochastic approximation (SA) type algorithms, namely, the accelerated SA (AC-SA), for solving strongly convex stochastic composite optimization (SCO) problems. Specifically, by introducing a domain shrinking procedure, we significantly improve the large-deviation results associated with the convergence rate of a nearly optimal AC-SA algorithm presented by Ghadimi and Lan in [SIAM J. Optim., 22 (2012), pp. 1469--1492]. Moreover, we introduce a multistage AC-SA algorithm, which possesses an optimal rate of convergence for solving strongly convex SCO problems in terms of the dependence on not only the target accuracy, but also a number of problem parameters and the selection of initial points. To the best of our knowledge, this is the first time that such an optimal method has been presented in the literature. Our computational results show that these AC-SA algorithms can substantially outperform the classical SA and some other SA type algorithms for solving certain classes of strongly...

226 citations
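
As a rough, non-authoritative illustration of the accelerated stochastic approximation idea summarized above, the sketch below runs a Nesterov-style stochastic gradient scheme on a strongly convex problem. The extrapolation weight and step-size schedule are plausible assumptions for the mu-strongly convex setting, not the exact AC-SA parameters of Ghadimi and Lan; `grad`, `x_ag`, and the toy quadratic are all illustrative names.

```python
import numpy as np

def ac_sa_sketch(grad, x0, mu, iters=1000, seed=0):
    """Accelerated stochastic approximation sketch (illustrative schedules).

    grad(x, rng) must return an unbiased stochastic gradient at x.
    The schedules below are plausible for mu-strongly convex problems
    but are NOT the exact AC-SA step sizes from the paper.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()       # "raw" iterate
    x_ag = x0.copy()    # aggregated iterate (the one returned)
    for k in range(1, iters + 1):
        alpha = 2.0 / (k + 1)                  # extrapolation weight
        gamma = 2.0 / (mu * (k + 1))           # O(1/(mu*k)) step size
        x_md = (1 - alpha) * x_ag + alpha * x  # middle point: gradient is sampled here
        x = x - gamma * grad(x_md, rng)        # stochastic gradient step
        x_ag = (1 - alpha) * x_ag + alpha * x  # update the aggregate
    return x_ag

# Toy usage: f(x) = ||x - x_star||^2 (so mu = 2) with Gaussian gradient noise.
x_star = np.ones(5)
noisy_grad = lambda x, rng: 2.0 * (x - x_star) + 0.1 * rng.standard_normal(5)
print(ac_sa_sketch(noisy_grad, np.zeros(5), mu=2.0))  # should approach x_star
```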

Journal ArticleDOI
TL;DR: In this paper, the Jacobi-Davidson iterative method is used to solve generalized eigenproblems, where the projection operators are chosen according to the desired eigenvalues and eigenvectors so that quadratic convergence is achieved.
Abstract: In this paper we will show how the Jacobi-Davidson iterative method can be used to solve generalized eigenproblems. Ideas similar to those for the standard eigenproblem are used, but the projections that are required to reduce the given problem to a small, manageable size need more attention. We show that by proper choices for the projection operators quadratic convergence can be achieved. The advantage of our approach is that none of the involved operators needs to be inverted. It turns out that similar projections can be used for the iterative approximation of selected eigenvalues and eigenvectors of polynomial eigenvalue equations. This approach has already been used with great success for the solution of quadratic eigenproblems associated with acoustic problems.

225 citations
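
For readers unfamiliar with the method, the following is a heavily simplified sketch of a Jacobi-Davidson-style loop for the generalized eigenproblem A u = theta B u. The target (the smallest-magnitude Ritz value) is an assumed choice, and the correction equation is replaced by a crude diagonal solve in the spirit of Davidson's original preconditioner; a faithful implementation would solve the projected correction equation iteratively and, as the abstract stresses, never invert the involved operators.

```python
import numpy as np

def jd_generalized_sketch(A, B, iters=25, tol=1e-10, seed=0):
    """Heavily simplified Jacobi-Davidson sketch for A u = theta B u.

    Targets the eigenvalue of smallest magnitude (an assumed choice) and
    replaces the projected correction equation with a diagonal solve.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    v = rng.standard_normal(n)
    V = (v / np.linalg.norm(v)).reshape(n, 1)      # orthonormal search-space basis
    theta, u = 0.0, V[:, 0]
    for _ in range(iters):
        Am, Bm = V.T @ A @ V, V.T @ B @ V          # Rayleigh-Ritz projection
        w, S = np.linalg.eig(np.linalg.solve(Bm, Am))
        i = np.argmin(np.abs(w))                   # pick the targeted Ritz value
        theta, s = w[i].real, S[:, i].real
        u = V @ s
        u /= np.linalg.norm(u)
        r = A @ u - theta * (B @ u)                # residual of the Ritz pair
        if np.linalg.norm(r) < tol:
            break
        t = r / (np.diag(A) - theta * np.diag(B))  # crude (diagonal) correction
        t -= V @ (V.T @ t)                         # keep the basis orthonormal
        t_norm = np.linalg.norm(t)
        if t_norm < 1e-14:
            break
        V = np.hstack([V, (t / t_norm).reshape(n, 1)])
    return theta, u

# Toy usage: symmetric positive-definite pencil with B = I.
rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
A = M @ M.T + 8 * np.eye(8)
print(jd_generalized_sketch(A, np.eye(8))[0])      # smallest-magnitude eigenvalue
```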

Journal ArticleDOI
TL;DR: In this paper, a rate-sharp minimax lower bound for estimating sparse covariance matrices under a range of matrix operator norm and Bregman divergence losses was derived, and a thresholding estimator was shown to attain the optimal rate of convergence under the spectral norm.
Abstract: This paper considers estimation of sparse covariance matrices and establishes the optimal rate of convergence under a range of matrix operator norm and Bregman divergence losses. A major focus is on the derivation of a rate-sharp minimax lower bound. The problem exhibits new features that are significantly different from those that occur in the conventional nonparametric function estimation problems. Standard techniques fail to yield good results, and new tools are thus needed. We first develop a lower bound technique that is particularly well suited for treating “two-directional” problems such as estimating sparse covariance matrices. The result can be viewed as a generalization of Le Cam’s method in one direction and Assouad’s Lemma in another. This lower bound technique is of independent interest and can be used for other matrix estimation problems. We then establish a rate-sharp minimax lower bound for estimating sparse covariance matrices under the spectral norm by applying the general lower bound technique. A thresholding estimator is shown to attain the optimal rate of convergence under the spectral norm. The results are then extended to

224 citations
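
The thresholding estimator attaining the optimal spectral-norm rate admits a very compact sketch: hard-threshold the entries of the sample covariance at a level of order sqrt(log p / n). The constant `c` below is a tuning parameter chosen purely for illustration; the paper's precise threshold and regularity conditions are not reproduced here.

```python
import numpy as np

def threshold_covariance(X, c=1.0):
    """Entrywise hard-thresholding of the sample covariance (sketch).

    The threshold level c * sqrt(log(p) / n) matches the usual
    sparse-covariance rate; c is an illustrative tuning parameter.
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)             # p x p sample covariance
    lam = c * np.sqrt(np.log(p) / n)        # threshold level
    T = np.where(np.abs(S) >= lam, S, 0.0)  # zero out small entries
    np.fill_diagonal(T, np.diag(S))         # variances are kept unthresholded
    return T

# Toy usage: n = 200 samples from a sparse 50-dimensional covariance.
rng = np.random.default_rng(0)
Sigma = np.eye(50)
Sigma[0, 1] = Sigma[1, 0] = 0.5
X = rng.multivariate_normal(np.zeros(50), Sigma, size=200)
print(np.count_nonzero(threshold_covariance(X)))  # far fewer than 50*50 entries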

Journal ArticleDOI
TL;DR: It is proved that the rate of convergence of a slight variant of Nesterov's accelerated forward-backward method, which produces convergent sequences, is actually $o(k^{-2})$, rather than $\mathcal O(k^{-2})$.
Abstract: The forward-backward algorithm is a powerful tool for solving optimization problems with an additively separable and smooth plus nonsmooth structure. In the convex setting, a simple but ingenious acceleration scheme developed by Nesterov improves the theoretical rate of convergence for the function values from the standard $\mathcal O(k^{-1})$ down to $\mathcal O(k^{-2})$. In this short paper, we prove that the rate of convergence of a slight variant of Nesterov's accelerated forward-backward method, which produces convergent sequences, is actually $o(k^{-2})$, rather than $\mathcal O(k^{-2})$. Our arguments rely on the connection between this algorithm and a second-order differential inclusion with vanishing damping.

224 citations
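
A minimal sketch of the accelerated forward-backward (FISTA-type) iteration discussed above is given below. The inertial coefficient (k - 1)/(k + alpha) with alpha > 3 corresponds to the variant with o(k^{-2}) function-value decay, while alpha = 3 recovers the classical O(k^{-2}) scheme; the LASSO test problem and all parameter names are illustrative assumptions.

```python
import numpy as np

def accelerated_forward_backward(grad_f, prox_g, x0, step, alpha=4.0, iters=500):
    """Accelerated forward-backward sketch with inertia (k-1)/(k+alpha).

    alpha > 3 corresponds to the o(k^-2) variant; alpha = 3 gives the
    classical Nesterov/FISTA rate. step should be <= 1/L for an
    L-Lipschitz grad_f.
    """
    x = np.asarray(x0, dtype=float)
    x_prev, y = x.copy(), x.copy()
    for k in range(1, iters + 1):
        x = prox_g(y - step * grad_f(y), step)          # forward (gradient) + backward (prox)
        y = x + ((k - 1) / (k + alpha)) * (x - x_prev)  # inertial extrapolation
        x_prev = x.copy()
    return x

# Toy usage: LASSO, f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((40, 100)), rng.standard_normal(40), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)  # soft-threshold
x_hat = accelerated_forward_backward(grad_f, prox_g, np.zeros(100),
                                     1.0 / np.linalg.norm(A, 2) ** 2)
print(np.count_nonzero(x_hat))  # sparse solution
```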

Journal ArticleDOI
TL;DR: A framework of block-decomposition prox-type algorithms is introduced for solving the monotone inclusion problem, and any method in this framework is shown to be a special instance of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter.
Abstract: In this paper, we consider the monotone inclusion problem consisting of the sum of a continuous monotone map and a point-to-set maximal monotone operator with a separable two-block structure, and we introduce a framework of block-decomposition prox-type algorithms for solving it which allows each of the single-block proximal subproblems to be solved only approximately. Moreover, by showing that any method in this framework is also a special instance of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter, we derive corresponding convergence rate results. We also describe some instances of the framework based on specific and inexpensive schemes for solving the single-block proximal subproblems. Finally, we consider some applications of our methodology to establish for the first time (i) the iteration-complexity of an algorithm for finding a zero of the sum of two arbitrary maximal monotone operators and, as a consequence, the ergodic iteration-complexity of the Douglas-...

224 citations
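
The paper's block-decomposition/HPE framework is considerably more general than any single splitting scheme, but as a minimal, concrete point of contact, here is a sketch of Douglas-Rachford-type splitting for the two-operator inclusion 0 in A(x) + B(x), assuming both resolvents are available in closed form (an assumption made purely for illustration, and not the paper's method).

```python
import numpy as np

def douglas_rachford_sketch(res_A, res_B, z0, lam=1.0, iters=500):
    """Douglas-Rachford splitting sketch for 0 in A(x) + B(x).

    res_A and res_B are the resolvents (I + lam*A)^{-1} and (I + lam*B)^{-1},
    assumed available in closed form for this illustration.
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        x = res_B(z, lam)          # backward step on B
        y = res_A(2 * x - z, lam)  # backward step on A at the reflected point
        z = z + y - x              # fixed-point update of the governing sequence
    return res_B(z, lam)

# Toy usage: A = subdifferential of ||.||_1, B(x) = x - b (strongly monotone affine map).
b = np.array([3.0, -0.2, 0.0])
res_A = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # soft-threshold
res_B = lambda v, t: (v + t * b) / (1 + t)                        # solves x + t*(x - b) = v
print(douglas_rachford_sketch(res_A, res_B, np.zeros(3)))  # ~ [2, 0, 0]
```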


Network Information
Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations, 89% related
Markov chain: 51.9K papers, 1.3M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 88% related
Differential equation: 88K papers, 2M citations, 88% related
Nonlinear system: 208.1K papers, 4M citations, 88% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995