Topic

Rate of convergence

About: Rate of convergence, the speed at which a convergent sequence or iterative method approaches its limit, is a research topic. Over the lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.
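To make the notion concrete, here is a minimal, purely illustrative Python sketch (not drawn from any paper below): it estimates the order of convergence p of an iterative method from successive errors via p ≈ log(e_{k+1}/e_k) / log(e_k/e_{k-1}), comparing a linearly convergent fixed-point iteration with quadratically convergent Newton steps. The test equation x = cos(x) and all constants are textbook choices, not taken from this topic's literature.

```python
import math

def estimate_order(errors):
    """Estimate the order of convergence from successive absolute errors."""
    estimates = []
    for e_prev, e_cur, e_next in zip(errors, errors[1:], errors[2:]):
        if min(e_prev, e_cur, e_next) <= 0:
            break  # stop once errors hit machine zero
        estimates.append(math.log(e_next / e_cur) / math.log(e_cur / e_prev))
    return estimates

# Root of x = cos(x), used only to measure the error of each iterate.
root = 0.7390851332151607

# Fixed-point iteration x_{k+1} = cos(x_k): linear convergence (order ~ 1).
x, fp_errors = 1.0, []
for _ in range(10):
    x = math.cos(x)
    fp_errors.append(abs(x - root))

# Newton's method for f(x) = x - cos(x): quadratic convergence (order ~ 2).
x, newton_errors = 1.0, []
for _ in range(6):
    x = x - (x - math.cos(x)) / (1.0 + math.sin(x))
    newton_errors.append(abs(x - root))

print("fixed-point order ~", estimate_order(fp_errors)[-1])    # roughly 1
print("Newton order     ~", estimate_order(newton_errors)[0])  # roughly 2
```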


Papers
Journal ArticleDOI
TL;DR: In this article, conditions on non-gradient drift-diffusion Fokker-Planck equations are described under which their solutions converge to equilibrium at a uniform exponential rate in Wasserstein distance.

129 citations

Journal ArticleDOI
George Zames
TL;DR: An operator theory is outlined for the general nonlinear feedback loop; it is shown that feedback reduces distortion for band-limited inputs, and an iteration whose rate of convergence is optimized is derived.
Abstract: An operator theory is outlined for the general nonlinear feedback loop. Methods for bounding system responses and investigating stability are introduced. An iterative expansion of the feedback loop, valid for large nonlinearities and unstable systems, is derived. The theory is applied to the study of nonlinear distortion in a class of amplifiers; it is shown that feedback reduces distortion for band-limited inputs. A model of the distortion is obtained, shown to be stable, and an iteration whose rate of convergence is optimized is derived.

129 citations
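As a loose, self-contained illustration of how an iterative expansion of a feedback loop converges (a toy discrete example, not Zames's operator-theoretic construction; the plant G, feedback nonlinearity F, and gains are invented for the sketch), the closed-loop relation y = G(u - F(y)) can be solved by successive substitution. When the composed map is a contraction with factor q < 1, the iterates converge geometrically, so q sets the rate of convergence.

```python
import numpy as np

def iterate_loop(u, G, F, steps=50):
    """Successive-substitution iterates of the closed-loop relation y = G(u - F(y))."""
    y = np.zeros_like(u)
    history = [y]
    for _ in range(steps):
        y = G(u - F(y))
        history.append(y)
    return history

# Hypothetical example: a linear plant G with gain 0.8 and a memoryless
# feedback nonlinearity F with slope at most 0.3, so the loop map is a
# contraction with factor at most 0.8 * 0.3 = 0.24.
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
G = lambda x: 0.8 * x
F = lambda y: 0.3 * np.tanh(y)

hist = iterate_loop(u, G, F)
errs = [np.linalg.norm(hist[k] - hist[-1]) for k in range(10)]
ratios = [errs[k + 1] / errs[k] for k in range(9) if errs[k] > 0]
print("observed contraction factor ~", ratios[-1])  # well below 1, bounded by 0.24
```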

Journal ArticleDOI
01 Mar 2015
TL;DR: A new adaptive inertia-weight adjustment approach based on Bayesian techniques is proposed for PSO to set up a sound trade-off between exploration and exploitation; compared with other types of improved PSO algorithms, it also performs well.
Abstract: Particle swarm optimization (PSO) is a stochastic population-based algorithm modeled on the social interaction of bird flocking or fish schooling. In this paper, a new adaptive inertia weight adjusting approach based on Bayesian techniques (BPSO) is proposed, which is used to set up a sound trade-off between the exploration and exploitation characteristics. It applies Bayesian techniques to enhance the PSO's searching ability in the exploitation of past particle positions and uses a Cauchy mutation to explore better solutions. A suite of benchmark functions is employed to test the performance of the proposed method. The results demonstrate that the new method exhibits higher accuracy and a faster convergence rate than other inertia weight adjusting methods on multimodal and unimodal functions. Furthermore, to show the generalization ability of the BPSO method, it is compared with other types of improved PSO algorithms, where it also performs well. (Graphical abstract and highlights omitted: the figures compare inertia weight strategies and PSO variants on benchmark f5 in 10 dimensions and analyze the parameter s, the interval between inertia-weight updates, and the evolution of ω over the iterations.)

129 citations
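For readers unfamiliar with the inertia weight that the paper above adapts, the following generic PSO sketch shows where the exploration/exploitation trade-off enters. This is not the paper's BPSO: it uses a plain linearly decreasing inertia weight instead of the Bayesian adaptation and omits the Cauchy mutation, and all hyperparameters and the sphere test function are illustrative.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()      # global best

    for t in range(iters):
        # Linearly decreasing inertia weight: large w early (exploration),
        # small w late (exploitation).  BPSO replaces this fixed schedule
        # with a Bayesian adaptive rule.
        w = w_max - (w_max - w_min) * t / iters
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example on the 10-dimensional sphere function.
best_x, best_f = pso(lambda z: float(np.sum(z * z)), dim=10)
print(best_f)
```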

Journal ArticleDOI
TL;DR: For structural optimization by the level-set method, an extension of the velocity of the underlying Hamilton-Jacobi equation is proposed, and the gradient method is endowed with a Hilbertian structure based on the H1 Sobolev space.
Abstract: In the context of structural optimization by the level-set method, we propose an extension of the velocity of the underlying Hamilton-Jacobi equation. The gradient method is endowed with a Hilbertian structure based on the H1 Sobolev space. Numerical results for compliance minimization and mechanism design show a strong improvement of the rate of convergence of the level-set method. Another important application is the optimization of multiple eigenvalues.

129 citations
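The Hilbertian regularization can be made concrete with a small sketch: instead of advecting the level set with the raw shape gradient g, one solves (I - alpha * Laplacian) v = g and uses v, the H1 Riesz representative of the descent direction, as the velocity. The paper works with finite elements in two and three dimensions; the periodic one-dimensional grid, the parameter alpha, and the sample gradient below are assumptions made purely for illustration.

```python
import numpy as np

def h1_extension(g, dx, alpha=1e-2):
    """Solve (I - alpha * Laplacian) v = g on a periodic 1-D grid (dense solve)."""
    n = len(g)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * alpha / dx**2
        A[i, (i - 1) % n] = -alpha / dx**2
        A[i, (i + 1) % n] = -alpha / dx**2
    return np.linalg.solve(A, g)

# A discontinuous "shape gradient" is smoothed into an H1 velocity field;
# per the abstract, this regularization strongly improves the convergence
# rate of the level-set descent in the paper's 2-D/3-D examples.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
raw_gradient = np.sign(np.sin(6.0 * np.pi * x))
smooth_velocity = h1_extension(raw_gradient, dx=x[1] - x[0], alpha=1e-3)
print(smooth_velocity[:5])
```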

Posted Content
TL;DR: A sharp analysis of a recently proposed adaptive gradient method, the partially adaptive momentum estimation method (Padam) (Chen and Gu, 2018), which admits many existing adaptive gradient methods such as RMSProp and AMSGrad as special cases, shows that Padam converges to a first-order stationary point, with a convergence rate bound that improves on existing results.
Abstract: Adaptive gradient methods are workhorses in deep learning. However, the convergence guarantees of adaptive gradient methods for nonconvex optimization have not been thoroughly studied. In this paper, we provide a fine-grained convergence analysis for a general class of adaptive gradient methods including AMSGrad, RMSProp and AdaGrad. For smooth nonconvex functions, we prove that adaptive gradient methods in expectation converge to a first-order stationary point. Our convergence rate is better than existing results for adaptive gradient methods in terms of dimension, and is strictly faster than stochastic gradient descent (SGD) when the stochastic gradients are sparse. To the best of our knowledge, this is the first result showing the advantage of adaptive gradient methods over SGD in the nonconvex setting. In addition, we also prove high probability bounds on the convergence rates of AMSGrad, RMSProp as well as AdaGrad, which have not been established before. Our analyses shed light on better understanding the mechanism behind adaptive gradient methods in optimizing nonconvex objectives.

129 citations
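A hedged sketch of a Padam-style update as described in the abstract: an AMSGrad-type running maximum of the second-moment estimate combined with a partial adaptivity exponent p, where p = 1/2 roughly recovers an AMSGrad step and p = 0 a momentum-SGD step. The hyperparameter values and the toy quadratic objective are illustrative, not taken from the paper.

```python
import numpy as np

def padam_step(x, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, p=0.125, eps=1e-8):
    """One partially adaptive momentum (Padam-style) update; state = (m, v, v_max)."""
    m, v, v_max = state
    m = beta1 * m + (1 - beta1) * grad        # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    v_max = np.maximum(v_max, v)              # AMSGrad running maximum
    x = x - lr * m / (v_max ** p + eps)       # partial adaptivity via exponent p
    return x, (m, v, v_max)

# Toy usage on f(x) = ||x||^2 (gradient 2x); a nonconvex objective would be
# handled the same way, only the gradient oracle changes.
x = np.ones(5)
state = (np.zeros(5), np.zeros(5), np.zeros(5))
for _ in range(1000):
    x, state = padam_step(x, 2.0 * x, state)
print(np.linalg.norm(x))
```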


Network Information
Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations (89% related)
Markov chain: 51.9K papers, 1.3M citations (88% related)
Optimization problem: 96.4K papers, 2.1M citations (88% related)
Differential equation: 88K papers, 2M citations (88% related)
Nonlinear system: 208.1K papers, 4M citations (88% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995