Topic

Rate of convergence

About: Rate of convergence is a research topic. Over its lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.


Papers
Journal ArticleDOI
TL;DR: A high-order discontinuous Galerkin (dG) scheme for the numerical solution of three-dimensional wave propagation problems in coupled elastic-acoustic media is introduced, and consistency and stability of the proposed dG scheme are proved.

244 citations

Journal ArticleDOI
TL;DR: It is shown that many of the existing extrapolation algorithms for noiseless observations are unified under the criterion of minimum norm least squares (MNLS) extrapolation, and some new algorithms useful for extrapolation and spectral estimation of band-limited sequences in one and two dimensions are presented.
Abstract: In this paper we present some new algorithms useful for extrapolation and spectral estimation of band-limited sequences in one and two dimensions. First we show that many of the existing extrapolation algorithms for noiseless observations are unified under the criterion of minimum norm least squares (MNLS) extrapolation. For example, the iterative algorithms proposed in [2] and [8]-[10] are shown to be special cases of a one-step gradient algorithm which has linear convergence. Convergence and other numerical properties are improved by going to a conjugate gradient algorithm. For noisy observations, these algorithms could be extended by considering a mean-square extrapolation criterion which gives rise to a mean-square extrapolation filter and also to a recursive extrapolation filter. Examples and application of these methods are given. Extension of these algorithms is made for problems where the signal is known to be periodic. A new set of functions called the periodic-discrete prolate spheroidal sequences (P-DPSS), analogous to DPSS [21], [22], are introduced and their properties are studied. Finally, several of these algorithms are generalized to two dimensions and the relevant equations are given.
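The abstract contrasts a one-step gradient iteration, which converges only linearly, with a conjugate gradient variant that behaves much better numerically. As a hedged illustration only (not the paper's MNLS extrapolation algorithm), the sketch below compares the two on a generic least-squares problem A x ≈ b; the matrix, dimensions, and iteration counts are made-up placeholders.

```python
# Minimal sketch (not the paper's algorithm): compare a one-step gradient
# iteration with conjugate gradients on a generic least-squares problem
# A x ~= b, the setting in which MNLS extrapolation is typically posed.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))      # hypothetical observation operator
x_true = rng.standard_normal(20)
b = A @ x_true

M = A.T @ A                            # normal-equations matrix
g = A.T @ b

def gradient_iteration(iters=500):
    """Fixed-step gradient descent on ||A x - b||^2; converges linearly."""
    x = np.zeros(20)
    step = 1.0 / np.linalg.norm(M, 2)  # safe step from the largest eigenvalue of M
    for _ in range(iters):
        x = x - step * (M @ x - g)
    return x

def conjugate_gradient(iters=20):
    """Conjugate gradients on M x = g; far fewer iterations for the same accuracy."""
    x = np.zeros(20)
    r = g - M @ x
    p = r.copy()
    for _ in range(iters):
        Mp = M @ p
        alpha = (r @ r) / (p @ Mp)
        x = x + alpha * p
        r_new = r - alpha * Mp
        if np.linalg.norm(r_new) < 1e-12:   # residual essentially zero: done
            break
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x

print(np.linalg.norm(gradient_iteration() - x_true))
print(np.linalg.norm(conjugate_gradient() - x_true))
```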

243 citations

Journal ArticleDOI
TL;DR: This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
Abstract: We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives. In a time $O(\epsilon^{-7/4} \log(1/\epsilon))$, the method finds an $\epsilon$-stationary point, that is, a point $x$ with $\|\nabla f(x)\| \le \epsilon$.
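For illustration only, and not the paper's full method (which couples acceleration with additional safeguards), the sketch below shows a basic Nesterov-style accelerated gradient loop that stops once the gradient norm drops below a tolerance, the kind of stationarity criterion such complexity bounds refer to. The test function, step size, and tolerance are assumptions.

```python
# Minimal sketch: Nesterov-style accelerated gradient descent with an
# eps-stationarity stopping rule (||grad|| <= eps). Not the paper's method.
import numpy as np

def accelerated_gradient(grad, x0, step, eps=1e-6, max_iter=10_000):
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(max_iter):
        g = grad(y)
        if np.linalg.norm(g) <= eps:        # eps-stationary point reached
            return y
        x_new = y - step * g                # gradient step from the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return y

# Hypothetical smooth nonconvex test function: f(x) = sum(x^2) + cos(sum(x)).
grad_f = lambda x: 2.0 * x - np.sin(np.sum(x)) * np.ones_like(x)
x_star = accelerated_gradient(grad_f, x0=np.ones(5), step=0.1)
print(np.linalg.norm(grad_f(x_star)))       # small gradient norm at the returned point
```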

243 citations

Journal ArticleDOI
TL;DR: The Kurdyka–Łojasiewicz exponent is studied, an important quantity for analyzing the convergence rate of first-order methods, and various calculus rules are developed to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents.
Abstract: In this paper, we study the Kurdyka–Łojasiewicz (KL) exponent, an important quantity for analyzing the convergence rate of first-order methods. Specifically, we develop various calculus rules to deduce the KL exponent of new (possibly nonconvex and nonsmooth) functions formed from functions with known KL exponents. In addition, we show that the well-studied Luo–Tseng error bound together with a mild assumption on the separation of stationary values implies that the KL exponent is $\frac{1}{2}$. The Luo–Tseng error bound is known to hold for a large class of concrete structured optimization problems, and thus we deduce the KL exponent of a large class of functions whose exponents were previously unknown. Building upon this and the calculus rules, we are then able to show that for many convex or nonconvex optimization models for applications such as sparse recovery, their objective function's KL exponent is $\frac{1}{2}$. This includes the least squares problem with smoothly clipped absolute deviation regularization or minimax concave penalty regularization and the logistic regression problem with $\ell_1$ regularization. Since many existing local convergence rate analyses for first-order methods in the nonconvex scenario rely on the KL exponent, our results enable us to obtain explicit convergence rates for various first-order methods when they are applied to a large variety of practical optimization models. Finally, we further illustrate how our results can be applied to establishing local linear convergence of the proximal gradient algorithm and the inertial proximal algorithm with constant step sizes for some specific models that arise in sparse recovery.
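As a minimal illustration of one model the abstract mentions, the sketch below runs the proximal gradient method (ISTA) with a constant step size on l1-regularized least squares, the sparse-recovery setting where a KL exponent of 1/2 yields local linear convergence. The synthetic data and parameters are placeholders, not taken from the paper.

```python
# Minimal sketch: proximal gradient (ISTA) with a constant step size for
# l1-regularized least squares, min_x 0.5*||A x - b||^2 + lam*||x||_1,
# one of the models for which a KL exponent of 1/2 gives local linear
# convergence. Problem data below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40))
b = A @ (rng.standard_normal(40) * (rng.random(40) < 0.2))   # sparse ground truth
lam = 0.1

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2     # constant step 1/L with L = ||A^T A||
x = np.zeros(40)
objectives = []
for _ in range(300):
    x = soft_threshold(x - step * (A.T @ (A @ x - b)), lam * step)
    objectives.append(0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())

# Near the limit, the objective gap typically decays geometrically (linear rate).
print(objectives[-5:])
```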

242 citations

Journal ArticleDOI
TL;DR: A modified formulation of Maxwell's equations is presented that includes a complex and nonlinear coordinate transform along one or two Cartesian coordinates that allows one to map an infinite space to a finite space and to specify graded perfectly matched absorbing boundaries that allow the outgoing wave condition to be satisfied.
Abstract: A modified formulation of Maxwell’s equations is presented that includes a complex and nonlinear coordinate transform along one or two Cartesian coordinates. The added degrees of freedom in the modified Maxwell’s equations allow one to map an infinite space to a finite space and to specify graded perfectly matched absorbing boundaries that allow the outgoing wave condition to be satisfied. The approach is validated by numerical results obtained by using Fourier-modal methods and shows enhanced convergence rate and accuracy.
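A minimal sketch of the complex coordinate-stretching idea the abstract describes, assuming a simple polynomially graded absorption profile and unit wave speed (illustrative choices, not the paper's formulation): an outgoing plane wave evaluated at the stretched coordinate keeps unit amplitude outside the layer and decays rapidly inside it, which is what makes the boundary effectively reflectionless.

```python
# Minimal sketch of the complex coordinate-stretching idea behind perfectly
# matched layers: inside the absorbing layer the real coordinate x is replaced
# by x_tilde(x) = x + (i/omega) * integral of sigma, with a polynomially graded
# sigma, so an outgoing wave exp(i*k*x_tilde) decays without reflection.
# Layer width, grading order, and peak absorption below are illustrative.
import numpy as np

omega = 2.0 * np.pi          # angular frequency (assumed)
k = omega                    # wavenumber for unit wave speed (assumed)
x0, d = 1.0, 0.5             # layer starts at x0 and has width d
sigma_max, p = 40.0, 3       # peak absorption and polynomial grading order

x = np.linspace(0.0, x0 + d, 400)
inside = np.clip(x - x0, 0.0, None)
sigma = sigma_max * (inside / d) ** p                        # graded absorption profile
sigma_int = sigma_max * d / (p + 1) * (inside / d) ** (p + 1)  # antiderivative of sigma
x_tilde = x + 1j * sigma_int / omega                         # complex stretched coordinate

wave = np.exp(1j * k * x_tilde)                              # outgoing wave, stretched coords
print(abs(wave[x <= x0]).max(), abs(wave[-1]))               # ~1 before the layer, tiny at its end
```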

242 citations


Network Information

Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations, 89% related
Markov chain: 51.9K papers, 1.3M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 88% related
Differential equation: 88K papers, 2M citations, 88% related
Nonlinear system: 208.1K papers, 4M citations, 88% related
Performance
Metrics: no. of papers in the topic in previous years

Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995