Topic
Rate of convergence
About: Rate of convergence is a research topic. Over its lifetime, 31,257 publications have been published within this topic, receiving 795,334 citations. The topic is also known as: convergence rate.
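Concretely, a sequence of errors e_k converging with order q satisfies e_{k+1} ≈ C·e_k^q, and q can be estimated numerically from successive errors. A minimal sketch (my own illustrative example, not drawn from any paper below) uses Newton's method on f(x) = x² − 2, a standard case of quadratic convergence (q = 2):

```python
import math

def newton(f, df, x0, steps):
    """Run Newton's method and return all iterates."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / df(x))
    return xs

# Illustrative problem: f(x) = x^2 - 2, root sqrt(2)
root = math.sqrt(2.0)
xs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=2.0, steps=5)
errors = [abs(x - root) for x in xs]

# Estimate the order q from e_{k+1} ~ C * e_k^q:
#   q ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1})
q = math.log(errors[3] / errors[2]) / math.log(errors[2] / errors[1])
print(q)  # close to 2, i.e. quadratic convergence
```

The same ratio test applies to any iterative method: linear convergence gives q ≈ 1 with a constant error-reduction factor, while superlinear methods push q above 1.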
Papers published on a yearly basis
Papers
TL;DR: A rigorous error analysis is provided for a proposed spectral Jacobi-collocation approximation for linear Volterra integral equations (VIEs) of the second kind with weakly singular kernels, showing that the numerical errors decay exponentially in both the infinity norm and weighted Sobolev space norms.
01 Jan 2013 · 138 citations
TL;DR: The order of convergence of the Decomposition method is examined, and the results are applied to several example problems.
137 citations
TL;DR: These results show that SGD is robust to compressed and/or delayed stochastic gradient updates, which is particularly important for distributed parallel implementations, where asynchronous and communication-efficient methods are key to achieving linear speedups for optimization with multiple devices.
Abstract: We analyze (stochastic) gradient descent (SGD) with delayed updates on smooth quasi-convex and non-convex functions and derive concise, non-asymptotic convergence rates. We show that the rate of convergence in all cases consists of two terms: (i) a stochastic term which is not affected by the delay, and (ii) a higher-order deterministic term which is only linearly slowed down by the delay. Thus, in the presence of noise, the effects of the delay become negligible after a few iterations and the algorithm converges at the same optimal rate as standard SGD. This result extends a line of research that showed similar results only in the asymptotic regime or for strongly convex quadratic functions. We further show similar results for SGD with more intricate forms of delayed gradients -- compressed gradients under error compensation, and local SGD, where multiple workers perform local steps before communicating with each other. In all of these settings, we improve upon the best known rates. These results show that SGD is robust to compressed and/or delayed stochastic gradient updates. This is particularly important for distributed parallel implementations, where asynchronous and communication-efficient methods are key to achieving linear speedups for optimization with multiple devices.
137 citations
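The delayed-update scheme analyzed in the abstract above can be illustrated with a minimal sketch: gradient descent on a one-dimensional quadratic, where each update applies the gradient of the iterate from `delay` steps earlier, as in asynchronous execution. The objective, step size, and delay here are illustrative assumptions, not the paper's experimental setup.

```python
def delayed_gd(grad, x0, lr, delay, steps):
    """Gradient descent where each update uses the gradient of the
    iterate computed `delay` steps ago (a toy model of asynchronous SGD)."""
    iterates = [x0]
    x = x0
    for t in range(steps):
        stale = iterates[max(0, t - delay)]  # gradient arrives late
        x = x - lr * grad(stale)
        iterates.append(x)
    return x

# Toy objective f(x) = x^2 / 2, so grad(x) = x; the minimizer is 0.
grad = lambda x: x

exact = delayed_gd(grad, x0=1.0, lr=0.05, delay=0, steps=300)
stale = delayed_gd(grad, x0=1.0, lr=0.05, delay=5, steps=300)
print(abs(exact), abs(stale))  # both near 0: the delay slows, but does not prevent, convergence
```

Consistent with the paper's message, the delay only damps the deterministic contraction by a bounded factor; with a sufficiently small step size the delayed run still converges to the optimum.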
TL;DR: In this article, an improved meshless method is proposed based on the combination of the natural neighbour finite element method with the radial point interpolation method: the Natural Neighbour Radial Point Interpolation Method (NNRPIM).
137 citations