Topic

Rate of convergence

About: Rate of convergence is a research topic. Over its lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.


Papers
Journal ArticleDOI
TL;DR: In this article, a necessary and sufficient condition for consensusability under a common control protocol is given, which explicitly reveals how the intrinsic entropy rate of the agent dynamics and the communication graph jointly affect consensusability.
Abstract: This paper investigates the joint effect of agent dynamics, network topology and communication data rate on the consensusability of linear discrete-time multi-agent systems. Neglecting the finite communication data rate constraint and under undirected graphs, a necessary and sufficient condition for consensusability under a common control protocol is given, which explicitly reveals how the intrinsic entropy rate of the agent dynamics and the communication graph jointly affect consensusability. The result is established by solving a discrete-time simultaneous stabilization problem. A lower bound on the optimal convergence rate to consensus, which is shown to be tight for some special cases, is provided as well. Moreover, a necessary and sufficient condition for the formationability of multi-agent systems is obtained. As a special case, discrete-time second-order consensus is discussed, where an optimal control gain is designed to achieve the fastest convergence. The effects of undirected graphs on consensusability/formationability and the optimal convergence rate are exactly quantified by the ratio of the second smallest to the largest eigenvalue of the graph Laplacian matrix. An extension to directed graphs is also made. The consensus problem under a finite communication data rate is finally investigated.

537 citations
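
The eigenvalue ratio cited at the end of the abstract above can be computed directly. Below is a minimal sketch (assuming numpy; the example path graph and the function name laplacian_eigen_ratio are illustrative, not taken from the paper) that forms the Laplacian of an undirected graph and returns the ratio of its second-smallest to its largest eigenvalue:

import numpy as np

def laplacian_eigen_ratio(adjacency):
    """Ratio of the second-smallest to the largest eigenvalue of the graph
    Laplacian of an undirected graph (the quantity cited in the abstract)."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A           # Laplacian L = D - A
    eigs = np.sort(np.linalg.eigvalsh(L))    # eigenvalues in ascending order
    return eigs[1] / eigs[-1]                # lambda_2 / lambda_N

# Illustrative 4-node path graph 1-2-3-4; a smaller ratio corresponds to a
# graph on which consensus is harder / slower to reach.
A_path = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0]])
print(laplacian_eigen_ratio(A_path))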

Book ChapterDOI
01 Jan 1978
TL;DR: The given theory helps to explain the excellent numerical results obtained by a recent algorithm (Powell, 1977), which regards the positive definite matrix revised on each iteration as an approximation to the second derivative matrix of the Lagrangian function.
Abstract: Variable metric methods for unconstrained optimization calculations can be extended to the constrained case by regarding the positive definite matrix that is revised on each iteration as an approximation to the second derivative matrix of the Lagrangian function. Linear approximations to the constraints are used. Han (1976) has analyzed the convergence of these methods in the case when the true second derivative matrix of the Lagrangian function is positive definite at the solution. However, this matrix sometimes has negative eigenvalues, so we analyze the rate of convergence in this case. We find that it is still superlinear. Therefore we may continue to use positive definite second derivative approximations, and there is no need to introduce any penalty terms. The given theory helps to explain the excellent numerical results that are obtained by a recent algorithm (Powell, 1977).

534 citations
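
A standard way to keep the revised matrix positive definite when the true Lagrangian Hessian has negative eigenvalues is a damped quasi-Newton (BFGS) update. The sketch below is a generic version of that safeguard, assuming numpy; the threshold 0.2 and the variable names are conventional choices for illustration, not necessarily the exact update analyzed in this chapter:

import numpy as np

def damped_bfgs_update(B, s, y, threshold=0.2):
    """One damped BFGS update of a positive definite approximation B to the
    Lagrangian Hessian, where s is the step in x and y is the change in the
    Lagrangian gradient. Damping blends y with B s whenever the curvature
    s'y is too small, so B stays positive definite even if the true second
    derivative matrix is indefinite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= threshold * sBs:
        theta = 1.0
    else:
        theta = (1.0 - threshold) * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs      # damped secant vector
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)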

Journal ArticleDOI
TL;DR: In this paper, the authors consider non-linear ill-posed problems in a Hilbert space setting, show that Tikhonov regularisation is a stable method for solving such problems, and give conditions that guarantee the convergence rate O(√δ) for the regularised solutions.
Abstract: The authors consider non-linear ill-posed problems in a Hilbert space setting. They show that Tikhonov regularisation is a stable method for solving non-linear ill-posed problems and give conditions that guarantee the convergence rate O(√δ) for the regularised solutions, where δ is a norm bound for the noise in the data. They illustrate these conditions for several examples, including parameter estimation problems. In an appendix, they study the connection between the ill-posedness of a non-linear problem and its linearisation and show that this connection is rather weak. A sufficient condition for ill-posedness is given in the case that the non-linear operator is compact.

534 citations
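
As a toy numerical illustration of the O(√δ) rate, the sketch below (assuming numpy) applies Tikhonov regularisation to a discretized linear integration operator with the a priori choice α ≈ δ; the linear operator, grid size and step choices are purely illustrative, since the paper treats general non-linear problems under a source condition. Up to discretization effects, the reconstruction error should shrink roughly like √δ as the noise level δ decreases:

import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed problem: a discretized integration (smoothing) operator K.
n = 200
t = np.linspace(0, 1, n)
K = (t[:, None] >= t[None, :]).astype(float) / n
x_true = np.sin(np.pi * t)          # smooth exact solution
y_exact = K @ x_true

for delta in [1e-1, 1e-2, 1e-3]:
    noise = rng.standard_normal(n)
    y_delta = y_exact + delta * noise / np.linalg.norm(noise)   # noise of norm delta
    alpha = delta                                               # a priori choice alpha ~ delta
    x_alpha = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y_delta)
    err = np.linalg.norm(x_alpha - x_true) / np.sqrt(n)
    print(f"delta={delta:.0e}  error={err:.2e}  error/sqrt(delta)={err / np.sqrt(delta):.2f}")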

Journal ArticleDOI
TL;DR: In this paper, a variant of the Byzantine Generals problem is considered, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal.
Abstract: This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous as well as synchronous systems. The asynchronous agreement algorithm is an interesting contrast to a result of Fischer et al., who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal.

531 citations
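
The successive-approximation idea can be sketched in a few lines. The round below uses a generic trim-and-average rule (discard the f smallest and f largest received values, then average the rest); the exact averaging functions and message-exchange details analyzed in the paper differ, so treat this purely as an illustration of how the influence of faulty extremes is filtered out:

def approximate_agreement_round(received_values, f):
    """One round of a trim-and-average rule: drop the f smallest and f largest
    received values, which bounds the influence of up to f faulty processes,
    then average the remainder. Correct processes applying such a rule draw
    their values closer together on each round."""
    vals = sorted(received_values)
    trimmed = vals[f:len(vals) - f] if f > 0 else vals
    return sum(trimmed) / len(trimmed)

# Illustration: 7 processes, at most f = 2 faulty, two wildly faulty values.
values = [0.9, 1.1, 1.0, 1.05, 0.95, 100.0, -50.0]
print(approximate_agreement_round(values, f=2))   # 1.0, unaffected by the outliers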

Journal ArticleDOI
TL;DR: The accelerated stochastic approximation (AC-SA) algorithm based on Nesterov’s optimal method for smooth CP is introduced, and it is shown that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for SCO.
Abstract: This paper considers an important class of convex programming (CP) problems, namely, the stochastic composite optimization (SCO), whose objective function is given by the summation of general nonsmooth and smooth stochastic components. Since SCO covers non-smooth, smooth and stochastic CP as certain special cases, a valid lower bound on the rate of convergence for solving these problems is known from the classic complexity theory of convex programming. Note however that the optimization algorithms that can achieve this lower bound had never been developed. In this paper, we show that the simple mirror-descent stochastic approximation method exhibits the best-known rate of convergence for solving these problems. Our major contribution is to introduce the accelerated stochastic approximation (AC-SA) algorithm based on Nesterov’s optimal method for smooth CP (Nesterov in Doklady AN SSSR 269:543–547, 1983; Nesterov in Math Program 103:127–152, 2005), and show that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for SCO. To the best of our knowledge, it is also the first universally optimal algorithm in the literature for solving non-smooth, smooth and stochastic CP problems. We illustrate the significant advantages of the AC-SA algorithm over existing methods in the context of solving a special but broad class of stochastic programming problems.

531 citations
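
A much-simplified version of the accelerated stochastic approximation idea, specialized to an unconstrained smooth objective with a Euclidean prox term, is sketched below (assuming numpy; the step-size policy and the toy least-squares problem are illustrative assumptions, whereas the paper's AC-SA handles the general nonsmooth composite setting with a more careful policy):

import numpy as np

def accelerated_sa(stoch_grad, x0, L, n_iters=200):
    """Sketch of a Nesterov-style accelerated stochastic approximation step:
    a 'middle' point x_md is formed between the prox iterate x and the
    aggregated iterate x_ag, a stochastic gradient is taken at x_md, and the
    aggregate is updated. Step sizes are a simple smooth-case choice."""
    x = np.array(x0, dtype=float)
    x_ag = x.copy()
    for t in range(1, n_iters + 1):
        beta = (t + 1) / 2.0
        gamma = t / (4.0 * L)                        # conservative, illustrative
        x_md = x / beta + (1.0 - 1.0 / beta) * x_ag  # middle point
        x = x - gamma * stoch_grad(x_md)             # stochastic gradient step
        x_ag = x / beta + (1.0 - 1.0 / beta) * x_ag  # aggregated iterate
    return x_ag

# Toy usage on noisy gradients of f(x) = 0.5 * ||A x - b||^2 (illustrative).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient

def noisy_grad(x):
    return A.T @ (A @ x - b) + 0.01 * rng.standard_normal(10)

x_hat = accelerated_sa(noisy_grad, np.zeros(10), L)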


Network Information
Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations (89% related)
Markov chain: 51.9K papers, 1.3M citations (88% related)
Optimization problem: 96.4K papers, 2.1M citations (88% related)
Differential equation: 88K papers, 2M citations (88% related)
Nonlinear system: 208.1K papers, 4M citations (88% related)
Performance Metrics
No. of papers in the topic in previous years
Year: Papers
2024: 1
2023: 693
2022: 1,530
2021: 2,129
2020: 2,036
2019: 1,995