Rate of convergence

About: Rate of convergence is a research topic. Over the lifetime of the topic, 31,257 publications have been published, receiving 795,334 citations. The topic is also known as: convergence rate.


Papers
Journal ArticleDOI
TL;DR: In this article, a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite-dimensional Hilbert spaces is presented, together with a new convergence analysis.
Abstract: In this article, a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite-dimensional Hilbert spaces is presented. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis. As the main result, we show that the algorithm converges with linear rate as soon as the underlying operator satisfies the so-called finite basis injectivity property or the minimizer possesses a so-called strict sparsity pattern. Moreover, it is shown that the constants can be calculated explicitly in special cases (i.e. for compact operators). Furthermore, the techniques can also be used to establish linear convergence for related methods such as the iterative thresholding algorithm for joint sparsity and the accelerated gradient projection method.

239 citations
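In finite dimensions, the iteration analyzed above reduces to the familiar iterative soft-thresholding (ISTA) update: a gradient step on the quadratic data-fidelity term followed by componentwise soft thresholding. A minimal sketch, assuming a matrix operator A, data b, and an illustrative l1 weight lam (all placeholders, not from the paper):

```python
import numpy as np

def soft_threshold(x, tau):
    """Componentwise soft thresholding: the proximal map of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = ||A||^2 is the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # gradient of the data-fidelity term
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Under the paper's finite basis injectivity property (or a strict sparsity pattern at the minimizer), this iteration converges at a linear rate rather than the generic sublinear one.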

Journal ArticleDOI
TL;DR: Rates of convergence of subgradient optimization are studied, and it is shown that if the step size is chosen to be a geometric progression with ratio ρ, the convergence, if it occurs, is geometric with rate ρ.
Abstract: Rates of convergence of subgradient optimization are studied. If the step size is chosen to be a geometric progression with ratio ρ, the convergence, if it occurs, is geometric with rate ρ. For convergence to occur, it is necessary that the initial step size be large enough, and that the ratio ρ be greater than a sustainable rate z(μ), which depends upon a condition number μ, defined for both differentiable and nondifferentiable functions. The sustainable rate z(μ) is closely related to the rate of convergence of the steepest ascent method for differentiable functions: in fact, it is identical if the function is not too well conditioned.

238 citations
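A generic sketch of the scheme under study: a normalized subgradient step with geometrically decreasing step sizes s0 * rho**k. The objective, starting point, and constants below are illustrative, and the paper's precise conditions relating s0, rho, and z(μ) are not checked here:

```python
import numpy as np

def subgradient_geometric(subgrad, x0, s0=1.0, rho=0.9, n_iter=200):
    """Subgradient method with step sizes forming a geometric progression.

    Per the paper, convergence (when it occurs) is geometric with rate rho,
    provided s0 is large enough and rho exceeds the sustainable rate z(mu)."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = subgrad(x)
        x = x - s0 * rho**k * g / max(np.linalg.norm(g), 1e-12)  # normalized step
    return x

# toy nondifferentiable objective f(x) = |x_0| + 2*|x_1|; a subgradient is sign-based
x_min = subgradient_geometric(lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])]),
                              x0=[3.0, -2.0])
```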

Journal ArticleDOI
TL;DR: For bandwidth selection of a kernel density estimator, a generalization of the widely studied least squares cross-validation method is considered; the analysis reveals that a rather large amount of presmoothing yields excellent asymptotic performance.
Abstract: For bandwidth selection of a kernel density estimator, a generalization of the widely studied least squares cross-validation method is considered. The essential idea is to do a particular type of “presmoothing” of the data. This is seen to be essentially the same as using the smoothed bootstrap estimate of the mean integrated squared error. Analysis reveals that a rather large amount of presmoothing yields excellent asymptotic performance. The rate of convergence to the optimum is known to be best possible under a wide range of smoothness conditions. The method is more appealing than other selectors with this property, because its motivation is not heavily dependent on precise asymptotic analysis, and because its form is simple and intuitive. Theory is also given for choice of the amount of presmoothing, and this is used to derive a data-based method for this choice.

237 citations
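For reference, a minimal sketch of the plain least squares cross-validation criterion that the paper generalizes, written for a Gaussian-kernel density estimator in one dimension. The paper's method would first presmooth the data (equivalently, minimize the smoothed bootstrap estimate of the MISE); that presmoothing step is not shown here:

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def lscv_score(h, x):
    """Least squares CV criterion: int f_hat^2 - (2/n) * sum_i f_hat_{-i}(x_i)."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # closed form for int f_hat^2: a Gaussian kernel convolved with itself
    # is a Gaussian kernel with bandwidth h * sqrt(2)
    int_f2 = gauss(d / (h * np.sqrt(2))).sum() / (n**2 * h * np.sqrt(2))
    k = gauss(d / h)
    loo = (k.sum() - n * gauss(0.0)) / (n * (n - 1) * h)  # leave-one-out average
    return int_f2 - 2.0 * loo

# pick the bandwidth minimizing the criterion over a grid (illustrative data)
x = np.random.default_rng(0).normal(size=200)
grid = np.linspace(0.05, 1.0, 60)
h_star = grid[np.argmin([lscv_score(h, x) for h in grid])]
```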

Journal ArticleDOI
TL;DR: An accelerated version of the cubic regularization of Newton's method is proposed that converges for the same problem class with order O(1/k^3) instead of O(1/k^2), keeping the complexity of each iteration unchanged; it is argued that, for second-order schemes, the class of non-degenerate problems differs from the standard class.
Abstract: In this paper we propose an accelerated version of the cubic regularization of Newton's method (Nesterov and Polyak, in Math Program 108(1): 177–205, 2006). The original version, used for minimizing a convex function with Lipschitz-continuous Hessian, guarantees a global rate of convergence of order $O(1/k^2)$, where $k$ is the iteration counter. Our modified version converges for the same problem class with order $O(1/k^3)$, keeping the complexity of each iteration unchanged. We study the complexity of both schemes on different classes of convex problems. In particular, we argue that for the second-order schemes, the class of non-degenerate problems is different from the standard class.

237 citations
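A sketch of the basic (non-accelerated) cubic-regularized Newton step of Nesterov and Polyak, i.e. the O(1/k^2) scheme that the paper accelerates; the accelerated O(1/k^3) variant adds an estimating-sequence extrapolation omitted here. M stands for the (assumed known) Lipschitz constant of the Hessian, and the subproblem is solved by a simple bisection on the step norm:

```python
import numpy as np

def cubic_step(g, H, M, tol=1e-10):
    """Minimize g^T s + 0.5*s^T H s + (M/6)*||s||^3 over s (convex case, H >= 0).

    The minimizer satisfies s(r) = -(H + 0.5*M*r*I)^{-1} g with r = ||s(r)||;
    since ||s(r)|| - r is decreasing in r, bisection finds the fixed point."""
    I = np.eye(len(g))

    def s_of(r):
        return np.linalg.solve(H + 0.5 * M * r * I, -g)

    lo, hi = 0.0, 1.0
    while np.linalg.norm(s_of(hi)) > hi:  # expand bracket upward
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(s_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return s_of(hi)

def cubic_newton(grad, hess, x0, M, n_iter=50):
    """Basic cubic regularization of Newton's method (no acceleration)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x + cubic_step(grad(x), hess(x), M)
    return x
```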

Journal ArticleDOI
TL;DR: In this article, a sampling technique based on the Euler discretization of the Langevin stochastic differential equation is studied, and for both constant and decreasing step sizes, non-asymptotic bounds for the convergence to stationarity in both total variation and Wasserstein distances are obtained.
Abstract: Sampling a distribution over a high-dimensional state space is a problem which has recently attracted a lot of research effort; applications include Bayesian non-parametrics, Bayesian inverse problems and aggregation of estimators. All these problems boil down to sampling a target distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalisation factor, $x \mapsto \mathrm{e}^{-U(x)}/\int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \mathrm{d} y$, where $U$ is continuously differentiable and smooth. In this paper, we study a sampling technique based on the Euler discretization of the Langevin stochastic differential equation. Contrary to the Metropolis Adjusted Langevin Algorithm (MALA), we do not apply a Metropolis-Hastings correction. We obtain, for both constant and decreasing step sizes in the Euler discretization, non-asymptotic bounds for the convergence to stationarity in both total variation and Wasserstein distances. Particular attention is paid to the dependence on the dimension of the state space, to demonstrate the applicability of this method in the high-dimensional setting, at least when $U$ is convex. These bounds are based on recently obtained estimates of the convergence of the Langevin diffusion to stationarity using Poincaré and log-Sobolev inequalities. These bounds improve and extend the results of (Dalalyan, 2014). We also investigate the convergence of an appropriately weighted empirical measure, and we report sharp bounds for the mean square error and an exponential deviation inequality for Lipschitz functions. A limited Monte Carlo experiment is carried out to support our findings.

236 citations
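A minimal sketch of the sampler studied here, the unadjusted Langevin algorithm (ULA): the Euler discretization of dX_t = -grad U(X_t) dt + sqrt(2) dW_t with a constant step size and, unlike MALA, no Metropolis-Hastings correction, so the chain targets pi(x) proportional to exp(-U(x)) only up to a discretization bias. The gradient, step size, and toy Gaussian target are illustrative:

```python
import numpy as np

def ula(grad_U, x0, step, n_samples, rng=None):
    """Unadjusted Langevin algorithm with constant step size.

    One step: x <- x - step * grad U(x) + sqrt(2*step) * N(0, I)."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    out = np.empty((n_samples, x.size))
    for k in range(n_samples):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        out[k] = x
    return out

# toy target: standard Gaussian in R^2, U(x) = ||x||^2 / 2, so grad U(x) = x
samples = ula(lambda x: x, x0=np.zeros(2), step=0.1, n_samples=10_000)
```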


Network Information
Related Topics (5)

Topic                           Papers    Citations    Relatedness
Partial differential equation   70.8K     1.6M         89%
Markov chain                    51.9K     1.3M         88%
Optimization problem            96.4K     2.1M         88%
Differential equation           88K       2M           88%
Nonlinear system                208.1K    4M           88%
Performance Metrics

No. of papers in the topic in previous years:

Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995