Topic

Rate of convergence

About: Rate of convergence is a research topic. Over its lifetime, 31,257 publications have been published on this topic, receiving 795,334 citations. The topic is also known as: convergence rate.


Papers
Journal Article (DOI)
TL;DR: A Fourier method is proposed for analyzing the stability and convergence of the implicit difference approximation scheme (IDAS), deriving its global accuracy and discussing its solvability.
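The Fourier method referred to here is the von Neumann approach: substitute a single Fourier mode into the difference scheme and check that its amplification factor stays at or below one in magnitude. A minimal sketch of that check, using backward Euler for the heat equation purely as a stand-in for the paper's IDAS (the scheme and step-size ratios below are illustrative assumptions):

```python
# Von Neumann (Fourier) stability check for an implicit scheme, sketched
# for backward Euler on the heat equation u_t = u_xx as a stand-in.
# Substituting the mode u_j^n = g^n * exp(i*j*theta) into the scheme
# yields the amplification factor g(theta) = 1 / (1 + 4*r*sin^2(theta/2)),
# with r = dt/dx^2. Stability requires |g(theta)| <= 1 for all theta.
import numpy as np

theta = np.linspace(-np.pi, np.pi, 1001)
for r in [0.1, 1.0, 100.0]:    # implicit: even very large r stays stable
    g = 1.0 / (1.0 + 4.0 * r * np.sin(theta / 2.0) ** 2)
    print(r, np.abs(g).max())  # max |g| is always <= 1
```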

351 citations

Journal Article (DOI)
Hakan Erdogan, Jeffrey A. Fessler
TL;DR: The new algorithms are based on paraboloidal surrogate functions for the log likelihood, which lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences.
Abstract: We present a framework for designing fast and monotonic algorithms for transmission tomography penalized-likelihood image reconstruction. The new algorithms are based on paraboloidal surrogate functions for the log likelihood. Due to the form of the log-likelihood function, it is possible to find low-curvature surrogate functions that guarantee monotonicity. Unlike previous methods, the proposed surrogate functions lead to monotonic algorithms even for the nonconvex log likelihood that arises due to background events, such as scatter and random coincidences. The gradient and the curvature of the likelihood terms are evaluated only once per iteration. Since the problem is simplified at each iteration, the CPU time is less than that of current algorithms that directly minimize the objective, yet the convergence rate is comparable. The simplicity, monotonicity, and speed of the new algorithms are quite attractive. The convergence rates of the algorithms are demonstrated using real and simulated PET transmission scans.
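The monotonicity argument behind these surrogates is the majorize-minimize pattern: at each iterate, replace the objective by a parabola that touches it at the current point and lies above it everywhere, then minimize the parabola. A minimal one-dimensional sketch of that pattern; the toy nonconvex objective and curvature bound below are illustrative assumptions, not the paper's PET log likelihood:

```python
# Majorize-minimize with a parabolic (paraboloidal in 1-D) surrogate.
# q(x) = f(xk) + f'(xk)(x - xk) + (c/2)(x - xk)^2 majorizes f whenever
# c >= sup f''; minimizing q then can never increase f (monotonicity),
# even though f itself is nonconvex.
import numpy as np

def f(x):                        # toy nonconvex objective
    return x**2 + 4.0 * np.cos(x)

def df(x):
    return 2.0 * x - 4.0 * np.sin(x)

c = 6.0                          # f''(x) = 2 - 4*cos(x) <= 6 everywhere

x = 3.0
for k in range(100):
    x_new = x - df(x) / c        # exact minimizer of the surrogate
    assert f(x_new) <= f(x) + 1e-12   # descent is guaranteed
    x = x_new
print(x, f(x))                   # converges to a stationary point of f
```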

351 citations

Journal Article (DOI)
TL;DR: It is shown that a very modest degree of convergence of an extreme Ritz value already suffices for an increased rate of convergence to occur, a phenomenon known as the superlinear convergence of CG.
Abstract: It has been observed that the rate of convergence of Conjugate Gradients increases when one or more of the extreme Ritz values have sufficiently converged to the corresponding eigenvalues (the “superlinear convergence” of CG). In this paper this will be proved and made quantitative. It will be shown that a very modest degree of convergence of an extreme Ritz value already suffices for an increased rate of convergence to occur.
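A minimal numpy sketch that makes the effect visible: run plain CG on a symmetric positive definite matrix whose spectrum has one outlying eigenvalue and watch the residual norms. Once the Ritz value for the outlier has converged, the iteration behaves as if that eigenvalue were absent and the observed rate improves. The spectrum and problem size below are illustrative assumptions:

```python
# Conjugate Gradients on an SPD matrix with one outlying eigenvalue.
# The printed residual norms typically show a slow initial phase and
# then a faster ("superlinear") phase once the extreme Ritz value has
# locked onto the outlier.
import numpy as np

rng = np.random.default_rng(0)
n = 200
eigs = np.concatenate([np.linspace(1.0, 10.0, n - 1), [1000.0]])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
A = Q @ np.diag(eigs) @ Q.T                        # SPD with known spectrum
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x
p = r.copy()
rs = r @ r
for k in range(60):
    Ap = A @ p
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    print(k, np.sqrt(rs_new))   # watch the rate change partway through
    p = r + (rs_new / rs) * p
    rs = rs_new
```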

351 citations

Journal Article (DOI)
TL;DR: It is proved that the curvelet shrinkage can be tuned so that the estimator will attain, within logarithmic factors, the MSE $O(\varepsilon^{4/5})$ as noise level $\varepsilon\to 0$.
Abstract: We consider a model problem of recovering a function $f(x_1,x_2)$ from noisy Radon data. The function $f$ to be recovered is assumed smooth apart from a discontinuity along a $C^2$ curve, that is, an edge. We use the continuum white-noise model, with noise level $\varepsilon$. Traditional linear methods for solving such inverse problems behave poorly in the presence of edges. Qualitatively, the reconstructions are blurred near the edges; quantitatively, they give in our model mean squared errors (MSEs) that tend to zero with noise level $\varepsilon$ only as $O(\varepsilon^{1/2})$ as $\varepsilon\to 0$. A recent innovation--nonlinear shrinkage in the wavelet domain--visually improves edge sharpness and improves MSE convergence to $O(\varepsilon^{2/3})$. However, as we show here, this rate is not optimal. In fact, essentially optimal performance is obtained by deploying the recently-introduced tight frames of curvelets in this setting. Curvelets are smooth, highly anisotropic elements ideally suited for detecting and synthesizing curved edges. To deploy them in the Radon setting, we construct a curvelet-based biorthogonal decomposition of the Radon operator and build "curvelet shrinkage" estimators based on thresholding of the noisy curvelet coefficients. In effect, the estimator detects edges at certain locations and orientations in the Radon domain and automatically synthesizes edges at corresponding locations and directions in the original domain. We prove that the curvelet shrinkage can be tuned so that the estimator will attain, within logarithmic factors, the MSE $O(\varepsilon^{4/5})$ as noise level $\varepsilon\to 0$. This rate of convergence holds uniformly over a class of functions which are $C^2$ except for discontinuities along $C^2$ curves, and (except for log terms) is the minimax rate for that class. Our approach is an instance of a general strategy which should apply in other inverse problems; we sketch a deconvolution example.
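The curvelet transform itself is not part of standard numerical libraries, but the shrinkage step is easy to sketch. Below, an orthonormal DCT stands in for the curvelet frame (an assumption made purely for illustration): coefficients of the noisy data are soft-thresholded at a level proportional to the noise, and the estimate is synthesized from what survives:

```python
# Transform-domain shrinkage, sketched with an orthonormal DCT standing
# in for the curvelet frame. Strong coefficients (carrying the edge and
# the smooth part) survive the threshold; small noise-only coefficients
# are set to zero.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n = 512
t = np.linspace(0.0, 1.0, n)
f = np.where(t < 0.5, np.sin(2 * np.pi * t), 1.0 + np.sin(2 * np.pi * t))
eps = 0.1
y = f + eps * rng.standard_normal(n)       # white-noise observation model

coef = dct(y, norm='ortho')                # analysis
lam = 3.0 * eps                            # threshold ~ noise level
coef = np.sign(coef) * np.maximum(np.abs(coef) - lam, 0.0)  # soft threshold
f_hat = idct(coef, norm='ortho')           # synthesis

print(np.mean((y - f) ** 2), np.mean((f_hat - f) ** 2))  # MSE before/after
```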

347 citations

Journal Article (DOI)
TL;DR: In this article, the authors proposed an approach based on maximizing the determinant of the Fisher information matrix (FIM) subject to state constraints imposed on the observer trajectory (e.g., by the target defense system).
Abstract: The problem of bearings-only target localization is to estimate the location of a fixed target from a sequence of noisy bearing measurements. Although, in theory, this process is observable even without an observer maneuver, estimation performance (i.e., accuracy, stability and convergence rate) can be greatly enhanced by properly exploiting observer motion to increase observability. This work addresses the optimization of observer trajectories for bearings-only fixed-target localization. The approach presented herein is based on maximizing the determinant of the Fisher information matrix (FIM), subject to state constraints imposed on the observer trajectory (e.g., by the target defense system). Direct optimal control numerical schemes, including the recently introduced differential inclusion (DI) method, are used to solve the resulting optimal control problem. Computer simulations, utilizing the familiar Stansfield and maximum likelihood (ML) estimators, demonstrate the enhancement to target position estimability using the optimal observer trajectories.
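A hedged sketch of the objective being maximized: for bearings-only measurements with independent Gaussian noise, the FIM is a sum of outer products of bearing gradients along the trajectory, and det(FIM) scores how well a given observer path localizes the target (via the Cramer-Rao bound). The trajectories and noise level below are illustrative assumptions, not the paper's optimal-control formulation:

```python
# Fisher information for bearings-only localization of a fixed target.
# Each fix contributes (1/sigma^2) * g g^T, where g is the gradient of
# the bearing atan2(dy, dx) with respect to the target position.
import numpy as np

def bearing_fim(observer_xy, target_xy, sigma=np.deg2rad(1.0)):
    fim = np.zeros((2, 2))
    for ox, oy in observer_xy:
        dx, dy = target_xy[0] - ox, target_xy[1] - oy
        r2 = dx * dx + dy * dy
        g = np.array([-dy, dx]) / r2      # d(bearing)/d(target position)
        fim += np.outer(g, g) / sigma**2
    return fim

target = np.array([1000.0, 500.0])
straight = [(50.0 * k, 0.0) for k in range(20)]            # no maneuver
dogleg = ([(50.0 * k, 0.0) for k in range(10)]
          + [(500.0, 50.0 * k) for k in range(1, 11)])     # one maneuver

for name, traj in [("straight", straight), ("dog-leg", dogleg)]:
    print(name, np.linalg.det(bearing_fim(traj, target)))  # larger is better
```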

347 citations


Network Information
Related Topics (5)
- Partial differential equation: 70.8K papers, 1.6M citations (89% related)
- Markov chain: 51.9K papers, 1.3M citations (88% related)
- Optimization problem: 96.4K papers, 2.1M citations (88% related)
- Differential equation: 88K papers, 2M citations (88% related)
- Nonlinear system: 208.1K papers, 4M citations (88% related)
Performance Metrics
No. of papers in the topic in previous years:

Year | Papers
2024 | 1
2023 | 693
2022 | 1,530
2021 | 2,129
2020 | 2,036
2019 | 1,995