scispace - formally typeset

Rate of convergence

About: Rate of convergence is a research topic. Over the lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.
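As a concrete illustration of the concept, the observed order of convergence q of a sequence x_n → x* can be estimated from three successive errors via q ≈ log(e_{n+1}/e_n) / log(e_n/e_{n-1}). A minimal Python sketch (all names are illustrative), using Newton's method for √2 as the test sequence:

```python
import math

def estimated_order(errors):
    """Estimate the order of convergence q from the last three errors
    e_{n-1}, e_n, e_{n+1} via q ~ log(e_{n+1}/e_n) / log(e_n/e_{n-1})."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton's method for f(x) = x^2 - 2, which converges to sqrt(2)
x, root = 3.0, math.sqrt(2)
errors = []
for _ in range(5):
    x = x - (x * x - 2) / (2 * x)
    errors.append(abs(x - root))

q = estimated_order(errors)   # close to 2: Newton's method is quadratic
```

Stopping at five iterations keeps the last errors well above machine precision; once the error reaches rounding level the ratio formula is no longer meaningful.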


Papers
Journal ArticleDOI
TL;DR: In this article, a refined iterative likelihood-maximization algorithm for reconstructing a quantum state from a set of tomographic measurements is proposed, which is characterized by a very high convergence rate and features a simple adaptive procedure that ensures likelihood increase in every iteration and convergence to the maximum likelihood state.
Abstract: We propose a refined iterative likelihood-maximization algorithm for reconstructing a quantum state from a set of tomographic measurements. The algorithm is characterized by a very high convergence rate and features a simple adaptive procedure that ensures likelihood increase in every iteration and convergence to the maximum-likelihood state. We apply the algorithm to homodyne tomography of optical states and quantum tomography of entangled spin states of trapped ions and investigate its convergence properties.

193 citations
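For orientation, the basic RρR iteration underlying such maximum-likelihood reconstruction can be sketched for a single qubit measured in the three Pauli bases. Everything below (the state, the measurement set, the noiseless test frequencies) is an illustrative assumption; the paper's refined "diluted" adaptive step, which guarantees a likelihood increase in every iteration, is only noted in a comment:

```python
import numpy as np

# Six Pauli eigenstates: an informationally complete measurement set for one qubit
kets = [
    np.array([1, 0], complex),                 # |0>
    np.array([0, 1], complex),                 # |1>
    np.array([1, 1], complex) / np.sqrt(2),    # |+>
    np.array([1, -1], complex) / np.sqrt(2),   # |->
    np.array([1, 1j], complex) / np.sqrt(2),   # |+i>
    np.array([1, -1j], complex) / np.sqrt(2),  # |-i>
]
# Scale by 1/3 so the six projectors form a POVM (they sum to the identity)
povm = [np.outer(k, k.conj()) / 3 for k in kets]

def R(rho, freqs):
    """R(rho) = sum_j f_j / p_j * Pi_j, with p_j = Tr(Pi_j rho)."""
    return sum(f / np.real(np.trace(P @ rho)) * P for f, P in zip(freqs, povm))

def rrr_mle(freqs, steps=500):
    """Plain R*rho*R iteration; the paper's refinement 'dilutes' this
    update adaptively to ensure the likelihood increases every step."""
    rho = np.eye(2, dtype=complex) / 2         # start from the maximally mixed state
    for _ in range(steps):
        r = R(rho, freqs)
        rho = r @ rho @ r
        rho /= np.real(np.trace(rho))          # keep unit trace
    return rho

# Noiseless "measured" frequencies generated from a known pure state
psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.5j)])
freqs = [np.real(psi.conj() @ P @ psi) for P in povm]

rho_ml = rrr_mle(freqs)
fidelity = np.real(psi.conj() @ rho_ml @ psi)  # approaches 1 for noiseless data
```

With noiseless frequencies the maximum-likelihood state coincides with the true state, so the fidelity of the reconstruction should approach one as the iteration converges.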

Journal ArticleDOI
TL;DR: A modification of the Newton method based on quadrature formulas of order at least one is extended to produce iterative methods with third-order convergence; these may be more efficient than other third-order methods because they do not require the second-order Fréchet derivative.

193 citations
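The paper's exact scheme is not reproduced here, but a representative quadrature-based third-order Newton variant (the midpoint rule) that likewise uses only first derivatives can be sketched as:

```python
def midpoint_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Third-order Newton variant obtained from the midpoint quadrature rule:
        x_{n+1} = x_n - f(x_n) / f'(x_n - f(x_n) / (2 f'(x_n)))
    Only first derivatives are needed (no second-order Frechet derivative)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / (2 * df(x))   # half of a Newton step
        x = x - fx / df(y)         # full step using the midpoint slope
    return x

# Example: solve x^3 - 2x - 5 = 0 (Wallis's classic test equation)
root = midpoint_newton(lambda x: x**3 - 2*x - 5,
                       lambda x: 3*x**2 - 2,
                       x0=2.0)   # root near 2.0945515
```

Each iteration costs one function evaluation and two first-derivative evaluations, which is the trade-off such methods make for raising the order from two to three.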

01 Jun 2008
TL;DR: In this article, the authors present a number of test cases and meshes which were designed to form a benchmark for finite volume schemes and give a summary of some of the results which were presented by the participants to this benchmark.
Abstract: We present here a number of test cases and meshes which were designed to form a benchmark for finite volume schemes and give a summary of some of the results which were presented by the participants to this benchmark. We address a two-dimensional anisotropic diffusion problem, which is discretized on general, possibly non-conforming meshes. In most cases, the diffusion tensor is taken to be anisotropic, and at times heterogeneous and/or discontinuous. The meshes are either triangular or quadrangular, and sometimes quite distorted. Several methods were tested, among which were finite element, discontinuous Galerkin, cell centred and vertex centred finite volume methods, discrete duality finite volume methods, and mimetic methods. The results given by the participants to the benchmark range from the number of unknowns, the errors on the fluxes or the minimum and maximum values and energy, to the order of convergence (when available).

193 citations

Journal ArticleDOI
TL;DR: A new, easy-to-implement, nonparametric VSS-NLMS algorithm employs the mean-square error and the estimated system noise power to control the step-size update; its theoretical steady-state behavior is in very good agreement with experimental results.
Abstract: Numerous variable step-size normalized least mean-square (VSS-NLMS) algorithms have been derived to solve the dilemma of fast convergence rate or low excess mean-square error in the past two decades. This paper proposes a new, easy to implement, nonparametric VSS-NLMS algorithm that employs the mean-square error and the estimated system noise power to control the step-size update. Theoretical analysis of its steady-state behavior shows that, when the input is zero-mean Gaussian distributed, the misadjustment depends only on a parameter β controlling the update of step size. Simulation experiments show that the proposed algorithm performs very well. Furthermore, the theoretical steady-state behavior is in very good agreement with the experimental results.

193 citations
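As context for the abstract above, a baseline NLMS update can be sketched as follows. The paper's nonparametric variable-step rule is not reproduced here, so the step size mu is fixed, and all signal data are illustrative:

```python
import random

def nlms(x, d, num_taps=4, mu=0.5, eps=1e-8):
    """Baseline normalized LMS adaptive filter:
        e = d[n] - w . u,   w <- w + mu * e * u / (||u||^2 + eps)
    VSS-NLMS schemes replace the fixed mu with a data-driven step size
    to reconcile fast convergence with low excess mean-square error."""
    w = [0.0] * num_taps
    errors = []
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., most recent first]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        norm = sum(ui * ui for ui in u) + eps
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errors.append(e)
    return w, errors

# System identification: recover a known FIR response h from input/output data
random.seed(0)
h = [0.5, -0.3, 0.2, 0.1]
x = [random.gauss(0, 1) for _ in range(2000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]

w, errs = nlms(x, d)   # w converges toward h for this noiseless data
```

With noiseless data the weight vector converges to the true response; adding observation noise is what creates the step-size dilemma that variable-step schemes address.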

Journal ArticleDOI
TL;DR: The framework proposed and the numerical methods derived from it provide a new and powerful tool for the exploration of neural behaviors at different scales and shed some new light on such neural mass models as the one of Jansen and Rit (1995).
Abstract: We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales.

193 citations


Network Information
Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations (89% related)
Markov chain: 51.9K papers, 1.3M citations (88% related)
Optimization problem: 96.4K papers, 2.1M citations (88% related)
Differential equation: 88K papers, 2M citations (88% related)
Nonlinear system: 208.1K papers, 4M citations (88% related)
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995