Topic

Rate of convergence

About: Rate of convergence is a research topic. Over the lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.


Papers
Journal ArticleDOI
TL;DR: It is shown that such a one-step method cannot be optimal when different coefficient functions admit different degrees of smoothness, and this drawback can be repaired by using the proposed two-step estimation procedure.
Abstract: Varying coefficient models are a useful extension of classical linear models. They arise naturally when one wishes to examine how regression coefficients change over different groups characterized by certain covariates such as age. The appeal of these models is that the coefficient functions can easily be estimated via a simple local regression. This yields a simple one-step estimation procedure. We show that such a one-step method cannot be optimal when different coefficient functions admit different degrees of smoothness. This drawback can be repaired by using our proposed two-step estimation procedure. The asymptotic mean-squared error for the two-step procedure is obtained and is shown to achieve the optimal rate of convergence. A few simulation studies show that the gain by the two-step procedure can be quite substantial. The methodology is illustrated by an application to an environmental data set.

643 citations
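
To make the one-step/two-step contrast concrete, here is a minimal sketch of kernel-weighted local least squares for a varying-coefficient model, in the spirit of the estimator the abstract describes. This is not the authors' code: the Epanechnikov kernel, the local-linear expansion, the bandwidths, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def vc_fit(y, X, u, u0, h):
    """One-step local linear estimate of the coefficient functions
    a_j(u0) in y_i = sum_j a_j(u_i) * x_ij + eps_i, using an
    Epanechnikov kernel with bandwidth h."""
    t = (u - u0) / h
    w = np.where(np.abs(t) <= 1, 0.75 * (1 - t ** 2), 0.0)
    # Local linear expansion: a_j(u) ~ a_j(u0) + a_j'(u0) * (u - u0).
    Z = np.hstack([X, X * (u - u0)[:, None]])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * Z, sw * y, rcond=None)
    return beta[:X.shape[1]]  # the level terms a_j(u0)

def two_step(y, X, u, grid, h1, h2, j):
    """Two-step refit of coefficient j: a preliminary fit of all
    coefficients with a small bandwidth h1, then a refit of a_j on
    its partial residuals with a larger bandwidth h2."""
    prelim = np.array([vc_fit(y, X, u, g, h1) for g in u])
    partial = y - (prelim * X).sum(axis=1) + prelim[:, j] * X[:, j]
    return np.array([vc_fit(partial, X[:, [j]], u, g, h2)[0]
                     for g in grid])

# Toy usage with synthetic data: a_1 is wiggly, a_2 is smooth, so the
# second coefficient benefits from the larger second-stage bandwidth.
rng = np.random.default_rng(0)
n = 400
u = rng.uniform(0, 1, n)
X = rng.normal(size=(n, 2))
y = (np.sin(2 * np.pi * u) * X[:, 0] + u ** 2 * X[:, 1]
     + 0.1 * rng.normal(size=n))
grid = np.linspace(0.05, 0.95, 19)
a2_hat = two_step(y, X, u, grid, h1=0.05, h2=0.2, j=1)
```

The point of the second step is that the partial residuals for one coefficient can be smoothed with a bandwidth suited to that coefficient alone, which is what lets the procedure attain the optimal rate when smoothness differs across coefficients.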

Journal ArticleDOI
TL;DR: Recent results in Markov chain theory are applied to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and it is shown that, in the symmetric case, geometric convergence essentially occurs if and only if $\pi$ has geometric tails.
Abstract: We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution $\pi$. In the independence case (in $\mathbb{R}^k$) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of $\pi$; in the symmetric case (in $\mathbb{R}$ only) we show geometric convergence essentially occurs if and only if $\pi$ has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective.

639 citations
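
As a concrete illustration of the independence case, here is a minimal independence Metropolis-Hastings sampler. This is a sketch, not code from the paper: the standard normal target and Student-t candidate are illustrative assumptions. A t(3) candidate has heavier tails than the target, so the candidate density is bounded below by a multiple of $\pi$ and the paper's condition for geometric convergence holds; a lighter-tailed candidate such as N(0, 0.5) would violate it.

```python
import numpy as np
from scipy import stats

def independence_mh(n, target_logpdf, cand, seed=0):
    """Independence Metropolis-Hastings: candidates are drawn i.i.d.
    from `cand`; a move x -> y is accepted with probability
    min(1, pi(y) q(x) / (pi(x) q(y)))."""
    rng = np.random.default_rng(seed)
    x = cand.rvs(random_state=rng)
    out = np.empty(n)
    for i in range(n):
        y = cand.rvs(random_state=rng)
        log_a = (target_logpdf(y) - target_logpdf(x)
                 + cand.logpdf(x) - cand.logpdf(y))
        if np.log(rng.uniform()) < log_a:
            x = y
        out[i] = x
    return out

# Target pi = N(0, 1) with a Student-t(3) candidate: geometrically
# ergodic, since the candidate dominates the target's tails.
draws = independence_mh(10_000, stats.norm.logpdf, stats.t(df=3))
```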

Journal ArticleDOI
TL;DR: The convergence properties of a number of variants of incremental subgradient methods, including stochastic ones, are established for minimizing a convex function that consists of the sum of a large number of component functions.
Abstract: We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. In this paper, we establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems. A particularly interesting discovery is that by randomizing the order of selection of component functions for iteration, the convergence rate is substantially improved.

611 citations
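
The incremental idea is easy to state in code. Below is a minimal sketch, not the paper's implementation: each iteration picks one component function at random, in line with the randomized variant the abstract highlights, and steps along its subgradient. The least-absolute-deviations objective, the step-size schedule, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def incremental_subgradient(subgrads, x0, steps, seed=0):
    """Randomized incremental subgradient method for
    minimize f(x) = sum_i f_i(x): each iteration steps along the
    subgradient of one randomly chosen component function."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for alpha in steps:
        i = rng.integers(len(subgrads))  # randomized component order
        x = x - alpha * subgrads[i](x)
    return x

# Example: least absolute deviations, f_i(x) = |a_i . x - b_i|, whose
# subgradient is sign(a_i . x - b_i) * a_i (the data are synthetic).
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
subgrads = [lambda x, a=a, c=c: np.sign(a @ x - c) * a
            for a, c in zip(A, b)]
steps = [0.1 / np.sqrt(k + 1) for k in range(5000)]  # diminishing steps
x_hat = incremental_subgradient(subgrads, np.zeros(3), steps)
```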

Journal ArticleDOI
TL;DR: In this paper, it is shown that the annealing algorithm converges with probability arbitrarily close to 1, but also that there are cases where convergence takes exponentially long, that is, where it is no better than a deterministic method.
Abstract: The annealing algorithm is a stochastic optimization method which has attracted attention because of its success with certain difficult problems, including NP-hard combinatorial problems such as the travelling salesman, Steiner trees and others. There is an appealing physical analogy for its operation, but a more formal model seems desirable. In this paper we present such a model and prove that the algorithm converges with probability arbitrarily close to 1. We also show that there are cases where convergence takes exponentially long—that is, it is no better than a deterministic method. We study how the convergence rate is affected by the form of the problem. Finally we describe a version of the algorithm that terminates in polynomial time and allows a good deal of ‘practical’ confidence in the solution.

609 citations
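
For reference, here is a minimal sketch of the annealing loop the paper analyzes, applied to a toy travelling-salesman instance. The geometric cooling schedule, the 2-opt neighborhood, and all parameters are illustrative assumptions, not the paper's model.

```python
import math
import random

def anneal(energy, neighbor, x0, T0=1.0, cooling=0.999, n_iter=50_000):
    """Minimal simulated annealing: propose a neighboring state,
    accept worse moves with probability exp(-dE/T), and lower the
    temperature geometrically."""
    x, e = x0, energy(x0)
    T = T0
    for _ in range(n_iter):
        y = neighbor(x)
        de = energy(y) - e
        if de <= 0 or random.random() < math.exp(-de / T):
            x, e = y, e + de
        T *= cooling  # geometric cooling schedule
    return x, e

# Toy travelling-salesman instance: energy = tour length,
# neighbor = reverse a random segment (a 2-opt move).
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]

def tour_len(t):
    return sum(math.dist(pts[t[i]], pts[t[(i + 1) % len(t)]])
               for i in range(len(t)))

def two_opt(t):
    i, j = sorted(random.sample(range(len(t)), 2))
    return t[:i] + t[i:j + 1][::-1] + t[j + 1:]

best_tour, best_len = anneal(tour_len, two_opt, list(range(30)))
```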

Book
01 Jan 1989
TL;DR: Inverse problems arising in applications such as ecology, lake and sea sedimentation analysis, the study of flexible structures, and physiology are surveyed, and parameter estimation theory is developed for linear parabolic and elliptic systems.
Abstract (table of contents):
I. Examples of Inverse Problems Arising in Applications
  I.1. Inverse Problems in Ecology
  I.2. Inverse Problems in Lake and Sea Sedimentation Analysis
  I.3. Inverse Problems in the Study of Flexible Structures
  I.4. Inverse Problems in Physiology
II. Operator Theory Preliminaries
  II.1. Linear Semigroups
  II.2. Galerkin Schemes
III. Parameter Estimation: Basic Concepts and Examples
  III.1. The Parameter Estimation Problem
  III.2. Application of the Theory to Special Schemes for Linear Parabolic Systems
    III.2.1. Modal Approximations
    III.2.2. Cubic Spline Approximations
  III.3. Parameter Dependent Approximation and the Nonlinear Variation of Constants Formula
IV. Identifiability and Stability
  IV.1. Generalities
  IV.2. Examples
  IV.3. Identifiability and Stability Concepts
  IV.4. A Sufficient Condition for Identifiability
  IV.5. Output Least Squares Identifiability
    IV.5.1. Theory
    IV.5.2. Applications
  IV.6. Output Least Squares Stability
    IV.6.1. Theory
    IV.6.2. An Example
  IV.7. Regularization
    IV.7.1. Tikhonov's Lemma and Its Application
    IV.7.2. Regularization Revisited
  IV.8. Concluding Remarks on Stability
    IV.8.1. A Summary of Possible Approaches
    IV.8.2. Remarks on Implementation
V. Parabolic Equations
  V.1. Modal Approximations: Discrete Fit-to-Data Criteria
  V.2. Quasimodal Approximations
  V.3. Operator Factorization: A = -C*C
  V.4. Operator Factorization: A = A^{1/2} A^{1/2}
  V.5. Numerical Considerations
  V.6. Numerical Test Examples
  V.7. Examples with Experimental Data
VI. Approximation of Unknown Coefficients in Linear Elliptic Equations
  VI.1. Parameter Estimation Convergence
  VI.2. Function Space Parameter Estimation Convergence
  VI.3. Rate of Convergence for a Special Case
  VI.4. Methods Other Than Output-Least-Squares
    VI.4.1. Method of Characteristics
    VI.4.2. Equation Error Method
    VI.4.3. A Variational Technique
    VI.4.4. Singular Perturbation Techniques
    VI.4.5. Adaptive Control Methods
    VI.4.6. An Augmented Lagrangian Technique
  VI.5. Numerical Test Examples
VII. An Annotated Bibliography
  A1) Preliminaries
  A2) Linear Splines
  A3) Cubic Hermite Splines
  A5) Polynomial Splines, Quasi-Interpolation

606 citations
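
Chapter IV.7 of the book treats Tikhonov regularization. As a point of reference only, here is a minimal sketch of the idea in the linear, finite-dimensional case; the matrix, the synthetic data, and the regularization parameter are illustrative assumptions, whereas the book works in an infinite-dimensional parameter estimation setting.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularized least squares for the linear inverse
    problem A x = y: minimize ||A x - y||^2 + alpha * ||x||^2,
    solved via the normal equations (A^T A + alpha I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# An ill-conditioned toy problem: the plain least-squares solution
# amplifies the noise in y; a small alpha stabilizes the estimate.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 50), 12, increasing=True)
x_true = rng.normal(size=12)
y = A @ x_true + 1e-4 * rng.normal(size=50)
x_hat = tikhonov(A, y, alpha=1e-8)
```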


Network Information
Related Topics (5)
- Partial differential equation: 70.8K papers, 1.6M citations, 89% related
- Markov chain: 51.9K papers, 1.3M citations, 88% related
- Optimization problem: 96.4K papers, 2.1M citations, 88% related
- Differential equation: 88K papers, 2M citations, 88% related
- Nonlinear system: 208.1K papers, 4M citations, 88% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995