Topic

Rate of convergence

About: Rate of convergence is a research topic. Over the lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors consider the minimization of a convex, but not necessarily differentiable, functional in a convex set of Hilbert space, and show how the method may be modified if the minimum of the functional is unknown.
Abstract: We consider the minimization of a convex, but not necessarily differentiable, functional in a convex set of Hilbert space. The minimization method amounts to movement along a support functional. The step length is evaluated here, not assigned; all we require for its evaluation is a knowledge of the minimum value of the functional. Under some natural assumptions, this method proves to be convergent both for smooth and non-differentiable functionals, at the rate of a geometric progression. We show how the method may be modified if the minimum of the functional is unknown. We also consider another minimization method, whereby the convergence rate can be increased considerably; this is based on approximation of the functional close to its minimum by a piecewise linear functional. Finally, we quote examples of problems to which our methods are applicable, and examine the computational aspects of the methods. 1. The gradient type method. Consider the minimization of a functional $f(x)$ in a set $Q$ of Hilbert space $H$. We shall assume that $f(x)$ has a support functional at every point (we recall that a linear functional $c$ is a support of $f(x)$ at $x$ if $f(x + y) \ge f(x) + (c, y)$ for all $y$), call it $f'(x)$, while the set $Q$ is "simple" enough for projection on it to be feasible, i.e. we can find $P_Q(x) \in Q$ with $\|x - P_Q(x)\| = \inf_{y \in Q} \|x - y\|$. We minimize $f(x)$ by an iterative method in which, starting with some $x^0$, we construct a sequence of $x^n$ in accordance with the relation

555 citations
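The relation truncated at the end of the abstract above is, in all likelihood, the projected subgradient step with what is now called the Polyak step length. A minimal sketch under that assumption (the function names and the box-constrained example are illustrative, not from the paper):

```python
import numpy as np

def polyak_subgradient(f, subgrad, project, x0, f_star, n_iters=1000):
    """Projected subgradient method with the Polyak step length.

    Requires the minimum value f_star = min f, exactly as in the
    abstract: the step length is computed, not assigned in advance.
    """
    x = np.asarray(x0, dtype=float)
    best = x.copy()
    for _ in range(n_iters):
        g = subgrad(x)                                # support functional f'(x^n)
        step = (f(x) - f_star) / max(g @ g, 1e-12)    # Polyak step length
        x = project(x - step * g)                     # projection P_Q onto Q
        if f(x) < f(best):
            best = x.copy()
    return best

# Example: minimize f(x) = ||x||_1 over the box Q = [-1, 1]^2; f_star = 0.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)                        # a subgradient of ||.||_1
project = lambda x: np.clip(x, -1.0, 1.0)
x_min = polyak_subgradient(f, subgrad, project, [0.9, -0.5], f_star=0.0)
```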

Journal ArticleDOI
TL;DR: This article surveys iterative domain decomposition techniques that have been developed in recent years for solving several kinds of partial differential equations, including elliptic and parabolic equations, as well as differential systems such as the Stokes problem and mixed formulations of elliptic problems.
Abstract: Domain decomposition (DD) has been widely used to design efficient parallel algorithms for solving elliptic problems. In this thesis, we focus on improving the efficiency of DD methods and applying them to more general problems. Specifically, we propose efficient variants of the vertex space DD method and minimize the complexity of general DD methods. In addition, we apply DD algorithms to coupled elliptic systems, singular Neumann boundary problems and linear algebraic systems. We successfully improve the vertex space DD method of Smith by replacing the exact edge and vertex dense matrices with approximate sparse matrices. It is extremely expensive to calculate, invert and store the exact vertex and edge Schur complement dense sub-matrices in the vertex space DD algorithm. We propose several approximations for these dense matrices, using Fourier approximation and an algebraic probing technique. Our numerical and theoretical results show that these variants retain the fast convergence rate and greatly reduce the computational cost. We develop a simple way to reduce the overall complexity of domain decomposition methods through the choice of coarse grid size. For sub-domain solvers with different complexities, we derive the optimal coarse grid size $H_{opt}$, which asymptotically minimizes the total computational cost of DD methods in both sequential and parallel environments. The overall complexity of DD methods is significantly reduced by using this optimal coarse grid size. We apply the additive and multiplicative Schwarz algorithms to solving coupled elliptic systems. Using the Dryja-Widlund framework, we prove that their convergence rates are independent of both the mesh and the coupling parameters. We also construct several approximate interface sparse matrices by using Sobolev inequalities, Fourier analysis and the probing technique. We further discuss the application of DD to singular Neumann boundary value problems. We extend the general framework to these problems and show how to deal with the null space in practice. Numerical and theoretical results show that these modified DD methods still have an optimal convergence rate. Using the DD methodology, we propose algebraic additive and multiplicative Schwarz methods to solve general sparse linear algebraic systems. We analyze the eigenvalue distribution of the iteration matrix of each algebraic DD method to study its convergence behavior.

550 citations
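To make the overlapping Schwarz idea concrete, here is a minimal one-level damped additive Schwarz iteration for a 1D Poisson model problem. This is an illustrative sketch, not the thesis's algorithm; the subdomain count, overlap width, and damping factor are assumptions:

```python
import numpy as np

def additive_schwarz_1d(n=100, n_sub=4, overlap=4, tol=1e-8, max_iter=500):
    """One-level (damped) additive Schwarz for -u'' = 1 on (0,1), u(0)=u(1)=0.

    Each sweep solves the residual equation on overlapping subdomains
    independently (hence parallel) and adds the damped corrections.
    """
    h = 1.0 / (n + 1)
    # Standard tridiagonal finite-difference stiffness matrix.
    A = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = np.ones(n)

    # Overlapping index blocks that together cover all unknowns.
    size = n // n_sub
    blocks = [np.arange(max(0, k * size - overlap),
                        min(n, (k + 1) * size + overlap))
              for k in range(n_sub)]

    u = np.zeros(n)
    for it in range(max_iter):
        r = b - A @ u
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        du = np.zeros(n)
        for idx in blocks:  # local solves are independent of each other
            du[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        u += 0.5 * du       # damping keeps the summed overlapping corrections stable
    return u, it
```

Without a coarse-grid correction, the iteration count of such a one-level method grows as subdomains are added, which is exactly why the choice of coarse grid size $H_{opt}$ discussed in the abstract matters.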

Journal ArticleDOI
TL;DR: This paper suggests a method for the exact simulation of the stock price and variance under Heston's stochastic volatility model and other affine jump diffusion processes and achieves an $O(s^{-1/2})$ convergence rate, where $s$ is the total computational budget.
Abstract: The stochastic differential equations for affine jump diffusion models do not yield exact solutions that can be directly simulated. Discretization methods can be used for simulating security prices under these models. However, discretization introduces bias into the simulation results, and a large number of time steps may be needed to reduce the discretization bias to an acceptable level. This paper suggests a method for the exact simulation of the stock price and variance under Heston's stochastic volatility model and other affine jump diffusion processes. The sample stock price and variance from the exact distribution can then be used to generate an unbiased estimator of the price of a derivative security. We compare our method with the more conventional Euler discretization method and demonstrate the faster convergence rate of the error in our method. Specifically, our method achieves an $O(s^{-1/2})$ convergence rate, where $s$ is the total computational budget. The convergence rate for the Euler discretization method is $O(s^{-1/3})$ or slower, depending on the model coefficients and option payoff function.

543 citations
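For context, here is a sketch of the Euler discretization baseline the paper compares against (not its exact-simulation scheme). The parameter values are illustrative, and the full-truncation clipping of negative variance is one common variant of the discretization:

```python
import numpy as np

def heston_euler(S0=100.0, V0=0.04, r=0.05, kappa=2.0, theta=0.04,
                 sigma=0.3, rho=-0.7, T=1.0, n_steps=200, n_paths=10000,
                 seed=0):
    """Euler (full-truncation) discretization of the Heston model.

    dS = r*S dt + sqrt(V)*S dW1,  dV = kappa*(theta - V) dt + sigma*sqrt(V) dW2,
    with corr(dW1, dW2) = rho.  The bias shrinks only as the step count grows,
    which is the source of the slower convergence rate in the total budget s.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    logS = np.full(n_paths, np.log(S0))
    V = np.full(n_paths, V0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        Vp = np.maximum(V, 0.0)            # full truncation: clip variance at zero
        logS += (r - 0.5 * Vp) * dt + np.sqrt(Vp * dt) * z1
        V += kappa * (theta - Vp) * dt + sigma * np.sqrt(Vp * dt) * z2
    return np.exp(logS)

# Biased Monte Carlo estimate of a European call price with strike 100.
S_T = heston_euler()
price = np.exp(-0.05) * np.maximum(S_T - 100.0, 0.0).mean()
```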

Journal ArticleDOI
08 Dec 2014
TL;DR: In this article, an improved finite-sample guarantee on the linear convergence of stochastic gradient descent for smooth and strongly convex objectives was obtained, which is based on a connection between SGD and the randomized Kaczmarz algorithm.
Abstract: We obtain an improved finite-sample guarantee on the linear convergence of stochastic gradient descent for smooth and strongly convex objectives, improving from a quadratic dependence on the conditioning, $(L/\mu)^2$ (where $L$ is a bound on the smoothness and $\mu$ on the strong convexity), to a linear dependence on $L/\mu$. Furthermore, we show how reweighting the sampling distribution (i.e. importance sampling) is necessary in order to further improve convergence, and obtain a linear dependence on the average smoothness, dominating previous results. We also discuss importance sampling for SGD more broadly and show how it can improve convergence in other scenarios as well. Our results are based on a connection we make between SGD and the randomized Kaczmarz algorithm, which allows us to transfer ideas between the separate bodies of literature studying each of the two methods. In particular, we recast the randomized Kaczmarz algorithm as an instance of SGD, and apply our results to prove its exponential convergence, but to the solution of a weighted least squares problem rather than the original least squares problem. We then present a modified Kaczmarz algorithm with partially biased sampling which does converge to the original least squares solution with the same exponential convergence rate.

542 citations
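A minimal sketch of the randomized Kaczmarz iteration with rows sampled proportionally to their squared norms, the importance-sampling weighting under which the paper recasts it as SGD; the synthetic test system is illustrative:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=5000, seed=0):
    """Randomized Kaczmarz: project onto one row's hyperplane per step.

    Sampling row i with probability ||a_i||^2 / ||A||_F^2 makes each step
    an SGD step on a weighted least-squares objective, which is what
    yields the exponential (linear) convergence rate discussed above.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = (A * A).sum(axis=1)
    probs = row_norms_sq / row_norms_sq.sum()   # importance sampling weights
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        a_i = A[i]
        # Orthogonal projection of x onto the hyperplane {y : <a_i, y> = b_i}.
        x += (b[i] - a_i @ x) / row_norms_sq[i] * a_i
    return x

# Consistent synthetic system: the iterates converge to the true solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
x_hat = randomized_kaczmarz(A, A @ x_true)
```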


Network Information
Related Topics (5)
Partial differential equation: 70.8K papers, 1.6M citations, 89% related
Markov chain: 51.9K papers, 1.3M citations, 88% related
Optimization problem: 96.4K papers, 2.1M citations, 88% related
Differential equation: 88K papers, 2M citations, 88% related
Nonlinear system: 208.1K papers, 4M citations, 88% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2024    1
2023    693
2022    1,530
2021    2,129
2020    2,036
2019    1,995