scispace - formally typeset
Topic

Rate of convergence

About: Rate of convergence is a research topic. Over the lifetime, 31257 publications have been published within this topic receiving 795334 citations. The topic is also known as: convergence rate.


Papers
Journal ArticleDOI
TL;DR: Two practical and effective, h–p-type, finite element adaptive procedures are presented, which allow not only the final global energy norm error to be well estimated using hierarchic p-refinement, but in addition give a nearly optimal mesh.
Abstract: Two practical and effective, h–p-type, finite element adaptive procedures are presented. The procedures allow not only the final global energy norm error to be well estimated using hierarchic p-refinement, but in addition give a nearly optimal mesh. The design of this is guided by the local information computed on the previous mesh. The desired accuracy can always be obtained within one or at most two h–p-refinements. The rate of convergence of the adaptive h–p-version analysis procedures has been tested for some examples and found to be very strong. The presented procedures can easily be incorporated into existing p- or h-type code structures.

154 citations
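The rate-of-convergence test the abstract mentions is typically done by comparing errors on successively refined meshes and fitting the observed order. A minimal sketch (not the authors' code; the function name and the synthetic error values are illustrative):

```python
# Estimate the observed order of convergence p from the model
# e(h) ~ C * h**p, using errors measured on two mesh sizes.
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Fit p from errors on a coarse and a fine mesh."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Synthetic example: errors that decay exactly like h**2 (second order).
h1, h2 = 0.1, 0.05
e1, e2 = 3.0 * h1**2, 3.0 * h2**2
p = observed_order(e1, e2, h1, h2)
print(round(p, 6))  # → 2.0
```

In an adaptive h-p setting the same two-mesh fit is applied locally, which is how the "very strong" observed rates above would be measured.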

Journal ArticleDOI
TL;DR: A general framework for tensor singular value decomposition (tensor SVD) is proposed, which focuses on the methodology and theory for extracting the hidden low-rank structure from high-dimensional tensor data.
Abstract: In this paper, we propose a general framework for tensor singular value decomposition (tensor SVD), which focuses on the methodology and theory for extracting the hidden low-rank structure from high-dimensional tensor data. Comprehensive results are developed on both the statistical and computational limits for tensor SVD. This problem exhibits three different phases according to the signal-to-noise ratio (SNR). In particular, with strong SNR, we show that the classical higher-order orthogonal iteration achieves the minimax optimal rate of convergence in estimation; with weak SNR, the information-theoretic lower bound implies that consistent estimation is impossible in general; with moderate SNR, we show that non-convex maximum likelihood estimation provides an optimal solution, but at NP-hard computational cost; moreover, under the hardness hypothesis of hypergraphic planted clique detection, no polynomial-time algorithm performs consistently in general.

154 citations
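Higher-order orthogonal iteration (HOOI), the classical algorithm the abstract refers to, alternates between projecting the tensor onto the current factor subspaces and updating each factor by an SVD of the projected unfolding. A minimal sketch for a 3-way tensor (sizes, ranks, and function names below are illustrative assumptions, not the paper's code):

```python
import numpy as np

def unfold(T, mode):
    """Matricize T: the given mode becomes the rows, all others the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply T along `mode` by the matrix M (M @ unfolding, refolded)."""
    Tm = np.moveaxis(T, mode, 0)
    out = (M @ Tm.reshape(Tm.shape[0], -1)).reshape((M.shape[0],) + Tm.shape[1:])
    return np.moveaxis(out, 0, mode)

def hooi(T, ranks, n_iter=10):
    """Higher-order orthogonal iteration for a low-multilinear-rank fit."""
    # Initialize with truncated HOSVD: leading left singular vectors per mode.
    U = [np.linalg.svd(unfold(T, k))[0][:, :r] for k, r in enumerate(ranks)]
    for _ in range(n_iter):
        for k in range(T.ndim):
            G = T
            for m in range(T.ndim):
                if m != k:                        # project every other mode
                    G = mode_product(G, U[m].T, m)
            U[k] = np.linalg.svd(unfold(G, k))[0][:, :ranks[k]]
    return U

# Demo on an exactly rank-(2, 2, 2) tensor: HOOI should recover it.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2, 2))
for m in range(3):
    T = mode_product(T, np.linalg.qr(rng.standard_normal((8, 2)))[0], m)
U = hooi(T, (2, 2, 2))
C = T
for m in range(3):
    C = mode_product(C, U[m].T, m)    # compress to the estimated core
R = C
for m in range(3):
    R = mode_product(R, U[m], m)      # expand back to a reconstruction
err = np.linalg.norm(R - T) / np.linalg.norm(T)
print(err < 1e-8)  # → True
```

In the noiseless demo the recovery is exact up to floating point; the paper's strong-SNR result says the same iteration remains minimax rate-optimal when noise is added.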

Journal ArticleDOI
TL;DR: A numerical validation of the continuous mesh framework is presented, showing the ability of this framework to predict the order of convergence for a specific adaptive strategy defined by a sequence of continuous meshes, and deriving the optimal continuous mesh that minimizes the L^p norm of the continuous interpolation error.
Abstract: This paper gives a numerical validation of the continuous mesh framework introduced in Part I [A. Loseille and F. Alauzet, SIAM J. Numer. Anal., 49 (2011), pp. 38-60]. We numerically show that the interpolation error can be evaluated analytically once analytical expressions of a mesh and a function are given. In particular, the strong duality between discrete and continuous views of the interpolation error is emphasized on two-dimensional and three-dimensional examples. In addition, we show the ability of this framework to predict the order of convergence, given a specific adaptive strategy defined by a sequence of continuous meshes. The continuous mesh concept is then used to devise an adaptive strategy to control the $\mathbf{L}^p$ norm of the continuous interpolation error. Given the $\mathbf{L}^p$ norm of the continuous interpolation error, we derive the optimal continuous mesh minimizing this error. This exemplifies the potential of the framework, as we use a calculus of variations that is not defined on the space of discrete meshes. Anisotropic adaptations on analytical functions correlate with the predicted optimal theoretical order of convergence. The extension to solutions of nonlinear PDEs is also given. Comparisons with experiments show the efficiency and the accuracy of this approach.

154 citations

Proceedings Article
21 Jun 2014
TL;DR: A new method is introduced with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms, that is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups.
Abstract: Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.

153 citations
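The finite-sum setup and the without-replacement sampling the abstract mentions can be illustrated generically. The sketch below is plain incremental gradient descent with one random permutation per epoch on a consistent least-squares problem, not the paper's faster method; all names and constants are illustrative:

```python
import numpy as np

def finite_sum_sgd(A, b, lr=0.05, epochs=200, seed=0):
    """Minimize (1/n) * sum_i 0.5 * (a_i @ x - b_i)**2, taking one pass
    per epoch over a fresh random permutation of the terms
    (i.e., sampling without replacement within each epoch)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):          # without-replacement order
            grad = (A[i] @ x - b[i]) * A[i]   # gradient of the i-th term
            x -= lr * grad
    return x

# Consistent system b = A @ x_true, so every term is minimized at x_true
# and the incremental iterates converge to it.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_hat = finite_sum_sgd(A, b)
print(bool(np.allclose(x_hat, x_true, atol=1e-3)))  # → True
```

The paper's point is that exploiting the finite-sum structure (rather than this black-box pass) yields a provably faster rate; the permutation loop above is only the sampling scheme it shares.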

Journal ArticleDOI
TL;DR: To annihilate the effect of the extreme eigenvalues, a deflated CG method is used: the convergence rate improves considerably, the termination criterion becomes reliable again, and a cheap approximation of the eigenvectors is proposed.

153 citations
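The improvement the TL;DR describes can be read off the classical CG error bound, whose contraction factor depends on the square root of the condition number: deflating the few extreme eigenvalues shrinks the effective condition number. A minimal numerical sketch with an assumed spectrum (not the paper's data):

```python
import numpy as np

# Hypothetical SPD spectrum: two very small "extreme" eigenvalues plus a
# well-behaved cluster, mimicking the situation deflation addresses.
eigs = np.concatenate(([1e-4, 1e-3], np.linspace(1.0, 10.0, 98)))

kappa_full = eigs.max() / eigs.min()   # condition number, no deflation
kappa_defl = eigs.max() / eigs[2]      # two smallest modes deflated away

def cg_rate(kappa):
    """Per-iteration contraction factor in the classical CG error bound."""
    s = np.sqrt(kappa)
    return (s - 1.0) / (s + 1.0)

# Deflation turns a near-stagnating rate into a fast one.
print(cg_rate(kappa_full) > cg_rate(kappa_defl))  # → True
```

This is only the bound-level intuition; the paper additionally shows how to obtain the needed eigenvector approximations cheaply and why the termination criterion becomes trustworthy again.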


Network Information
Related Topics (5)
Partial differential equation
70.8K papers, 1.6M citations
89% related
Markov chain
51.9K papers, 1.3M citations
88% related
Optimization problem
96.4K papers, 2.1M citations
88% related
Differential equation
88K papers, 2M citations
88% related
Nonlinear system
208.1K papers, 4M citations
88% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995