Topic

Rate of convergence

About: Rate of convergence is a research topic. Over the lifetime, 31257 publications have been published within this topic receiving 795334 citations. The topic is also known as: convergence rate.


Papers
Journal ArticleDOI
TL;DR: This work presents a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPPs) and demonstrates an optimal rate of convergence not only in terms of its dependence on the number of iterations, but also on a variety of problem parameters.
Abstract: We present a novel accelerated primal-dual (APD) method for solving a class of deterministic and stochastic saddle point problems (SPPs). The basic idea of this algorithm is to incorporate a multistep acceleration scheme into the primal-dual method without smoothing the objective function. For deterministic SPP, the APD method achieves the same optimal rate of convergence as Nesterov's smoothing technique. Our stochastic APD method exhibits an optimal rate of convergence for stochastic SPP not only in terms of its dependence on the number of iterations, but also on a variety of problem parameters. To the best of our knowledge, this is the first time that such an optimal algorithm has been developed for stochastic SPP in the literature. Furthermore, for both deterministic and stochastic SPP, the developed APD algorithms can deal with the situation when the feasible region is unbounded, as long as a saddle point exists. In the unbounded case, we incorporate the modified termination criterion introduced b...

232 citations
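The primal-dual template behind methods like the one above can be sketched on a toy bilinear saddle point problem. The following is an illustrative Chambolle-Pock-style scheme with primal extrapolation, not the paper's APD method; the matrix K, the step sizes, and the quadratic terms are all assumptions made for the example.

```python
import numpy as np

# Minimal primal-dual sketch for the bilinear saddle point problem
#   min_x max_y  0.5*||x||^2 + <K x, y> - 0.5*||y||^2.
# Illustrative only: this is a Chambolle-Pock-style scheme, not the APD method.

rng = np.random.default_rng(0)
K = rng.standard_normal((3, 3))
L = np.linalg.norm(K, 2)          # operator norm ||K||
tau = sigma = 0.9 / L             # step sizes chosen so tau * sigma * L**2 < 1
theta = 1.0                       # extrapolation weight (the "multistep" ingredient)

x = rng.standard_normal(3)
y = rng.standard_normal(3)
x_prev = x.copy()

for _ in range(5000):
    x_bar = x + theta * (x - x_prev)             # primal extrapolation
    y = (y + sigma * (K @ x_bar)) / (1 + sigma)  # prox step for 0.5*||y||^2
    x_prev = x.copy()
    x = (x - tau * (K.T @ y)) / (1 + tau)        # prox step for 0.5*||x||^2

# The unique saddle point of this toy problem is (0, 0).
print(np.linalg.norm(x), np.linalg.norm(y))
```

The extrapolation step `x_bar` is what distinguishes this template from a plain alternating gradient scheme; without it, the iteration can cycle around the saddle point instead of converging.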

Journal ArticleDOI
TL;DR: In this paper, the authors consider the time evolution of a system of identical bosons whose interaction potential is rescaled by N−1 and derive bounds on the rate of convergence of the quantum N-body dynamics to the Hartree equation.
Abstract: We consider the time evolution of a system of N identical bosons whose interaction potential is rescaled by N−1. We choose the initial wave function to describe a condensate in which all particles are in the same one-particle state. It is well known that in the mean-field limit N → ∞ the quantum N-body dynamics is governed by the nonlinear Hartree equation. Using a nonperturbative method, we extend previous results on the mean-field limit in two directions. First, we allow a large class of singular interaction potentials as well as strong, possibly time-dependent external potentials. Second, we derive bounds on the rate of convergence of the quantum N-body dynamics to the Hartree dynamics.

232 citations
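The Hartree dynamics referred to above is the one-particle nonlinear equation that replaces the linear N-body Schrödinger evolution in the mean-field limit. In a standard formulation (the notation, including the interaction potential v, is conventional and not taken from the abstract):

```latex
i\,\partial_t \varphi_t = -\Delta \varphi_t + \bigl(v * |\varphi_t|^2\bigr)\,\varphi_t
```

Here the convolution term is the mean-field potential generated by the particle density itself, which is what makes the equation nonlinear.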

Journal Article
TL;DR: In the gradient oracle model, this paper gives a deterministic algorithm that returns an O(1/T)-approximate solution for stochastic strongly-convex optimization, improving on the O(log(T)/T) rate obtained from online algorithms with O(log(T)) regret.
Abstract: We give novel algorithms for stochastic strongly-convex optimization in the gradient oracle model which return a O(1/T)-approximate solution after T iterations. The first algorithm is deterministic, and achieves this rate via gradient updates and historical averaging. The second algorithm is randomized, and is based on pure gradient steps with a random step size. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.

231 citations
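The flavor of "gradient steps plus averaging" for strongly convex objectives can be sketched as follows. This is an illustration of the general idea (SGD with 1/(λt) steps and suffix averaging on a quadratic), under assumptions of our own, not the paper's exact algorithms.

```python
import numpy as np

# Illustrative sketch: SGD with step 1/(lam*t) plus suffix averaging on the
# strongly convex quadratic f(x) = (lam/2) * ||x - x_star||^2, observed through
# a noisy gradient oracle. The target, noise model, and averaging window are
# assumptions made for this example.

rng = np.random.default_rng(1)
lam = 1.0                         # strong convexity parameter
x_star = np.array([2.0, -1.0])    # minimizer (hypothetical)
T = 20000

x = np.zeros(2)
iterates = []
for t in range(1, T + 1):
    noise = rng.standard_normal(2)          # stochastic gradient oracle
    grad = lam * (x - x_star) + noise
    x = x - grad / (lam * t)                # step size 1/(lam * t)
    iterates.append(x.copy())

# Average only the last half of the iterates ("suffix averaging").
x_hat = np.mean(iterates[T // 2 :], axis=0)
err = 0.5 * lam * np.sum((x_hat - x_star) ** 2)
print(err)  # roughly of order 1/T
```

Averaging the last iterates rather than all of them avoids the early, high-error iterates dominating the average, which is one way the log(T) factor of naive averaging can be removed.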

Journal ArticleDOI
TL;DR: In this paper, the authors studied the convergence rate of the posterior distribution for Bayesian density estimation with Dirichlet mixtures of normal distributions as the prior and derived a new general rate theorem by considering a countable covering of the parameter space whose prior probabilities satisfy a summability condition.
Abstract: We study the rates of convergence of the posterior distribution for Bayesian density estimation with Dirichlet mixtures of normal distributions as the prior. The true density is assumed to be twice continuously differentiable. The bandwidth is given a sequence of priors which is obtained by scaling a single prior by an appropriate order. In order to handle this problem, we derive a new general rate theorem by considering a countable covering of the parameter space whose prior probabilities satisfy a summability condition together with certain individual bounds on the Hellinger metric entropy. We apply this new general theorem on posterior convergence rates by computing bounds for Hellinger (bracketing) entropy numbers for the involved class of densities, the error in the approximation of a smooth density by normal mixtures and the concentration rate of the prior. The best obtainable rate of convergence of the posterior turns out to be equivalent to the well-known frequentist rate for integrated mean squared error n^(-2/5) up to a logarithmic factor.

231 citations
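The n^(-2/5) rate quoted above is the β = 2 case of the standard minimax rate for β-smooth densities (a textbook identity, not stated in the abstract), attained here up to a logarithmic factor with some constant κ:

```latex
\varepsilon_n = n^{-\beta/(2\beta+1)} (\log n)^{\kappa},
\qquad \beta = 2 \;\Longrightarrow\; \varepsilon_n = n^{-2/5} (\log n)^{\kappa}.
```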

Posted Content
TL;DR: In this article, the authors use meta-analysis to investigate whether there is substance to the "myth" of the 2% convergence rate, and to assess several unresolved issues of interpretation and estimation.
Abstract: The topic of convergence is at the heart of a wide-ranging debate in the growth literature. Empirical studies of convergence differ widely in their theoretical backgrounds, empirical specifications and in their treatment of cross-sectional heterogeneity. Despite these differences, a rate of convergence of about 2% has been found under a variety of different conditions, resulting in the widespread belief that the rate of convergence is a natural constant. We use meta-analysis to investigate whether there is substance to the ‘myth’ of the legendary 2% convergence rate, and to assess several unresolved issues of interpretation and estimation. Our dataset contains approximately 600 estimates taken from a random sample of empirical growth studies published in peer-reviewed journals. We show that publication bias does not interfere with the analysis, and that it is misleading to speak of a natural convergence rate, since estimates of different growth regressions come from different populations. We find that correcting for the bias resulting from unobserved heterogeneity in technology levels leads to higher estimates of the rate of convergence. We also find that correcting for endogeneity in the explanatory variables has a substantial effect on the estimates, and that measures of financial and fiscal development are important determinants of long-run differences in per-capita income levels.

230 citations
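The 2% figure discussed above comes from mapping a growth-regression coefficient to an implied convergence rate. The mapping below is the standard textbook formula; the specific coefficient and horizon are illustrative numbers, not estimates from the meta-analysis.

```python
import math

# Standard mapping from a cross-country growth-regression coefficient to the
# implied convergence rate. With the regression
#   (1/T) * ln(y_T / y_0) = a + b * ln(y_0),
# the implied rate is lambda = -ln(1 + T*b) / T.
# The inputs below (b = -0.015, T = 30 years) are illustrative assumptions.

def implied_convergence_rate(b: float, T: float) -> float:
    """Convergence rate lambda implied by coefficient b over a T-year horizon."""
    return -math.log(1.0 + T * b) / T

# A coefficient of -0.015 over 30 years implies roughly the "legendary" 2% rate:
print(round(implied_convergence_rate(-0.015, 30.0), 3))  # → 0.02
```

Because the mapping from b to lambda is nonlinear in the horizon T, pooling estimates from studies with different horizons (as the meta-analysis must) is not a trivial averaging exercise.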


Network Information
Related Topics (5)
- Partial differential equation: 70.8K papers, 1.6M citations (89% related)
- Markov chain: 51.9K papers, 1.3M citations (88% related)
- Optimization problem: 96.4K papers, 2.1M citations (88% related)
- Differential equation: 88K papers, 2M citations (88% related)
- Nonlinear system: 208.1K papers, 4M citations (88% related)
Performance
Metrics
No. of papers in the topic in previous years

Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995