Topic

Rate of convergence

About: Rate of convergence is a research topic. Over the lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.


Papers
Journal Article
TL;DR: This work considers the enhancement of accuracy, by means of a simple post-processing technique, for finite element approximations to transient hyperbolic equations, and presents numerical results displaying the sharpness of the estimates.
Abstract: We consider the enhancement of accuracy, by means of a simple post-processing technique, for finite element approximations to transient hyperbolic equations. The post-processing is a convolution with a kernel whose support has measure of order one in the case of arbitrary unstructured meshes; if the mesh is locally translation invariant, the support of the kernel is a cube whose edges are of size of the order of Δx only. For example, when polynomials of degree k are used in the discontinuous Galerkin (DG) method, and the exact solution is globally smooth, the DG method is of order k+1/2 in the L2-norm, whereas the post-processed approximation is of order 2k + 1; if the exact solution is in L2 only, in which case no order of convergence is available for the DG method, the post-processed approximation converges with order k + 1/2 in L2(Ω0), where Ω0 is a subdomain over which the exact solution is smooth. Numerical results displaying the sharpness of the estimates are presented.

170 citations
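
The post-processing described above is, at bottom, a convolution of the computed approximation with a compactly supported kernel whose support scales with the mesh size. The sketch below shows only that mechanical step on a uniform 1D grid, using a simple hat kernel (a degree-1 B-spline) as a stand-in; the paper's kernel is a specific linear combination of B-splines chosen to achieve the higher order, and the grid, kernel width, and test function here are assumptions made for the example.

```python
import numpy as np

# Toy post-processing by convolution with a compactly supported kernel.
# A hat (degree-1 B-spline) kernel is used purely for illustration; it is
# NOT the paper's accuracy-enhancing kernel, which is a particular linear
# combination of B-splines.

def hat_kernel(width, h):
    """Sample a hat kernel of half-width `width` on a grid of spacing h,
    normalized so that its discrete integral equals one."""
    t = np.arange(-width, width + h, h)
    k = np.maximum(0.0, 1.0 - np.abs(t) / width)
    return k / (k.sum() * h)

def postprocess(u, width, h):
    """Approximate the convolution (kernel * u) of a grid function u."""
    return np.convolve(u, hat_kernel(width, h), mode="same") * h

# Usage: the kernel support is a small multiple of the mesh size h.
h = 2 * np.pi / 400
x = np.arange(0.0, 2 * np.pi, h)
u_h = np.sin(x) + 1e-3 * np.cos(40 * x)   # stand-in for a computed approximation
u_star = postprocess(u_h, width=4 * h, h=h)
```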

Journal Article
TL;DR: An analysis of convergence of the class of Sequential Partial Update LMS algorithms (S-LMS) under various assumptions is presented, and it is shown that divergence can be prevented by scheduling coefficient updates at random, which is called the Stochastic Partial Update LMS algorithm (SPU-LMS).
Abstract: Partial updating of LMS filter coefficients is an effective method for reducing computational load and power consumption in adaptive filter implementations. This paper presents an analysis of convergence of the class of Sequential Partial Update LMS algorithms (S-LMS) under various assumptions and shows that divergence can be prevented by scheduling coefficient updates at random, which we call the Stochastic Partial Update LMS algorithm (SPU-LMS). Specifically, under the standard independence assumptions, for wide-sense stationary signals, the S-LMS algorithm converges in the mean if the step-size parameter μ is in the convergent range of ordinary LMS. Relaxing the independence assumption, it is shown that the S-LMS and LMS algorithms have the same sufficient conditions for exponential stability. However, there exist nonstationary signals for which the existing algorithms, S-LMS included, are unstable and do not converge for any value of μ. On the other hand, under broad conditions, the SPU-LMS algorithm remains stable for nonstationary signals. Expressions for the convergence rate and steady-state mean-square error of SPU-LMS are derived. The theoretical results of this paper are validated and compared by simulation on numerical examples.

170 citations
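
To make the partial-update idea concrete, the sketch below implements an LMS filter that updates only one randomly chosen block of coefficients per sample, in the spirit of the SPU-LMS scheme described above. The block partitioning, step size, and system-identification test signal are assumptions chosen for the example, not the paper's specification.

```python
import numpy as np

def spu_lms(x, d, n_taps=8, mu=0.01, n_blocks=4, seed=0):
    """LMS adaptive filter that updates one randomly selected coefficient
    block per iteration (a stochastic partial update)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n_taps)
    block = n_taps // n_blocks
    err = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor [x[n], ..., x[n-n_taps+1]]
        e = d[n] - w @ u                    # a priori error
        b = rng.integers(n_blocks)          # pick which block to update
        sl = slice(b * block, (b + 1) * block)
        w[sl] += mu * e * u[sl]             # update only that block
        err[n] = e
    return w, err

# Usage: identify an unknown 8-tap FIR system from noisy observations.
rng = np.random.default_rng(1)
h_true = rng.standard_normal(8)
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, err = spu_lms(x, d)
```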

Journal Article
TL;DR: This paper proposes LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-block separable convex programs efficiently, proposes a simple optimality measure, and reveals the convergence rate of LADMPSAP in an ergodic sense.
Abstract: Many problems in machine learning and other fields can be (re)formulated as linearly constrained separable convex programs. In most of the cases, there are multiple blocks of variables. However, the traditional alternating direction method (ADM) and its linearized version (LADM, obtained by linearizing the quadratic penalty term) are for the two-block case and cannot be naively generalized to solve the multi-block case. There is therefore great demand for extending ADM-based methods to the multi-block case. In this paper, we propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve multi-block separable convex programs efficiently. When all the component objective functions have bounded subgradients, we obtain convergence results that are stronger than those of ADM and LADM, e.g., allowing the penalty parameter to be unbounded and proving the sufficient and necessary conditions for global convergence. We further propose a simple optimality measure and reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with extra convex set constraints, with refined parameter estimation we devise a practical version of LADMPSAP for faster convergence. Finally, we generalize LADMPSAP to handle programs with more difficult objective functions by linearizing part of the objective function as well. LADMPSAP is particularly suitable for sparse representation and low-rank recovery problems because its subproblems have closed-form solutions and the sparsity and low-rankness of the iterates can be preserved during the iteration. It is also highly parallelizable and hence well suited to parallel or distributed computing. Numerical experiments testify to the advantages of LADMPSAP in speed and numerical accuracy.

170 citations
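
The iteration structure behind the method can be sketched for a problem of the form min sum_i f_i(x_i) subject to sum_i A_i x_i = b: all blocks are updated in parallel from a common residual via a proximal (linearized) step, followed by a dual ascent step. The sketch below keeps the penalty fixed and omits the adaptive penalty rule, stopping criteria, and extra set constraints of LADMPSAP, so the parameter choices are simplifying assumptions rather than the paper's algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (closed form, useful for sparse blocks)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladm_parallel(A_list, b, prox_list, n_iter=500, beta=1.0):
    """Linearized alternating-direction iteration with parallel splitting for
    min sum_i f_i(x_i)  s.t.  sum_i A_i x_i = b.
    prox_list[i](v, t) must return the proximal map of t * f_i at v.
    The adaptive penalty update of LADMPSAP is omitted (beta stays fixed)."""
    xs = [np.zeros(A.shape[1]) for A in A_list]
    lam = np.zeros(b.shape[0])
    # eta_i proportional to (number of blocks) * ||A_i||_2^2, as in linearized schemes
    etas = [1.01 * len(A_list) * np.linalg.norm(A, 2) ** 2 for A in A_list]
    for _ in range(n_iter):
        residual = sum(A @ x for A, x in zip(A_list, xs)) - b
        # all blocks are updated in parallel from the same residual
        xs = [prox(x - A.T @ (lam + beta * residual) / (beta * eta), 1.0 / (beta * eta))
              for x, A, prox, eta in zip(xs, A_list, prox_list, etas)]
        residual = sum(A @ x for A, x in zip(A_list, xs)) - b
        lam = lam + beta * residual          # dual ascent step
    return xs

# Usage: recover two sparse blocks from linear measurements (toy example).
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((10, 30)), rng.standard_normal((10, 30))
x_true = np.zeros(30); x_true[:3] = 1.0
b = A1 @ x_true + A2 @ x_true
x1, x2 = ladm_parallel([A1, A2], b, [soft_threshold, soft_threshold])
```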

Journal Article
TL;DR: This paper analyzes the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic.
Abstract: In this paper, we analyze the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic. We also analyze a dual counterpart, the entropy minimization algorithm, which operates like the proximal minimization algorithm, except that it uses a logarithmic/entropy “proximal” term in place of a quadratic. We strengthen substantially the available convergence results for these methods, and we derive the convergence rate of these methods when applied to linear programs.

170 citations
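
The method alternates an unconstrained minimization of an exponentially penalized Lagrangian with a multiplicative update of the multipliers. The sketch below shows that outer loop for inequality-constrained problems, using scipy's BFGS for the smooth inner subproblem; the toy objective, constraint, and parameter values are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def exp_multiplier_method(f, gs, x0, c=1.0, n_outer=30):
    """Exponential method of multipliers for  min f(x)  s.t.  g_j(x) <= 0.
    Inner subproblems are solved approximately with BFGS; the exponential
    penalty (1/c) * sum_j lam_j * exp(c * g_j(x)) replaces the quadratic
    augmented-Lagrangian term."""
    x = np.asarray(x0, dtype=float)
    lam = np.ones(len(gs))
    for _ in range(n_outer):
        def aug(z):
            return f(z) + sum(l * np.exp(c * g(z)) for l, g in zip(lam, gs)) / c
        x = minimize(aug, x, method="BFGS").x
        lam = lam * np.exp(c * np.array([g(x) for g in gs]))   # multiplicative update
    return x, lam

# Toy usage: minimize (x1 - 2)^2 + (x2 - 1)^2 subject to x1 + x2 <= 2;
# the solution is (1.5, 0.5) with multiplier 1.
f = lambda z: (z[0] - 2.0) ** 2 + (z[1] - 1.0) ** 2
gs = [lambda z: z[0] + z[1] - 2.0]
x_star, lam_star = exp_multiplier_method(f, gs, x0=[0.0, 0.0])
```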

Journal Article
TL;DR: Two classes of iterative methods for saddle point problems are considered: inexact Uzawa algorithms and a class of methods with symmetric preconditioners; the obtained estimates are partially sharper than the known estimates in the literature, and it is shown that divergence can be avoided by suitable preconditioning.
Abstract: In this paper two classes of iterative methods for saddle point problems are considered: inexact Uzawa algorithms and a class of methods with symmetric preconditioners. In both cases the iteration matrix can be transformed to a symmetric matrix by block diagonal matrices, a simple but essential observation which allows one to estimate the convergence rate of both classes by studying associated eigenvalue problems. The obtained estimates apply to a wider range of situations and are partially sharper than the known estimates in the literature. A few numerical tests are given which confirm the sharpness of the estimates.

169 citations
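
A bare-bones version of the preconditioned Uzawa iteration for the saddle point system [[A, B^T], [B, 0]] [x; y] = [f; g] is sketched below. The preconditioner choices in the usage snippet (an exact inverse for A, which reduces to a damped exact Uzawa step, and a scaled identity for the Schur complement) are illustrative assumptions, not the configurations analyzed in the paper; replacing the exact inverse with a cheap approximation of the inverse of A gives the inexact variant.

```python
import numpy as np

def uzawa(A, B, f, g, QA_inv, QB_inv, n_iter=500):
    """Preconditioned (possibly inexact) Uzawa iteration for
    [[A, B^T], [B, 0]] [x; y] = [f; g]."""
    x = np.zeros(A.shape[0])
    y = np.zeros(B.shape[0])
    for _ in range(n_iter):
        x = x + QA_inv @ (f - A @ x - B.T @ y)   # (approximate) primal solve
        y = y + QB_inv @ (B @ x - g)             # preconditioned dual update
    return x, y

# Usage: random SPD A and full-rank B; QA_inv is the exact inverse here,
# and the dual step is scaled by the Schur complement's spectral norm.
rng = np.random.default_rng(0)
n, m = 20, 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((m, n))
f, g = rng.standard_normal(n), rng.standard_normal(m)
QA_inv = np.linalg.inv(A)
S = B @ QA_inv @ B.T                               # Schur complement
QB_inv = (1.0 / np.linalg.norm(S, 2)) * np.eye(m)  # safe dual step size
x, y = uzawa(A, B, f, g, QA_inv, QB_inv)
```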


Network Information
Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations, 89% related
Markov chain: 51.9K papers, 1.3M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 88% related
Differential equation: 88K papers, 2M citations, 88% related
Nonlinear system: 208.1K papers, 4M citations, 88% related
Performance Metrics
No. of papers in the topic in previous years:
2024: 1
2023: 693
2022: 1,530
2021: 2,129
2020: 2,036
2019: 1,995