
Convergence (routing)

About: Convergence (routing) is a research topic. Over the lifetime, 23,702 publications have been published within this topic, receiving 415,745 citations.


Papers
Journal Article / DOI
TL;DR: The convergence of a wide class of approximation schemes to the viscosity solution of fully nonlinear second-order elliptic or parabolic, possibly degenerate, partial differential equations is studied in this paper.
Abstract: The convergence of a wide class of approximation schemes to the viscosity solution of fully nonlinear second-order elliptic or parabolic, possibly degenerate, partial differential equations is studied. It is proved that any monotone, stable, and consistent scheme converges (to the correct solution), provided that there exists a comparison principle for the limiting equation. Several examples are given where the result applies.

1,063 citations
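For readers who want the conditions spelled out, the three properties named in the abstract can be written compactly. The following is a minimal LaTeX sketch of the standard simplified formulation, writing the scheme as S(h, x, u_h(x), u_h) = 0 with mesh parameter h; the notation is supplied here for illustration and is not quoted from the paper.

\[
\begin{aligned}
&\textbf{Monotone: } u \le v \;\Longrightarrow\; S(h, x, r, u) \ge S(h, x, r, v),\\
&\textbf{Stable: } \text{solutions } u_h \text{ exist with } \sup_{h > 0} \|u_h\|_\infty < \infty,\\
&\textbf{Consistent: } \lim_{h \to 0,\; y \to x,\; \xi \to 0} S\bigl(h, y, \phi(y) + \xi, \phi + \xi\bigr) = F\bigl(x, \phi(x), D\phi(x), D^2\phi(x)\bigr)
\end{aligned}
\]

for every smooth test function \phi, where F denotes the limiting elliptic or parabolic operator. Given these three properties together with a comparison principle for F, the scheme solutions u_h converge locally uniformly to the unique viscosity solution as h \to 0.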

Journal Article / DOI
TL;DR: This paper studies an alternative inexact BCD approach which updates the variable blocks by successively minimizing a sequence of approximations of f that are either locally tight upper bounds of f or strictly convex local approximations of f.
Abstract: The block coordinate descent (BCD) method is widely used for minimizing a continuous function f of several block variables. At each iteration of this method, a single block of variables is optimized, while the remaining variables are held fixed. To ensure the convergence of the BCD method, the subproblem of each block variable needs to be solved to its unique global optimal solution. Unfortunately, this requirement is often too restrictive for many practical scenarios. In this paper, we study an alternative inexact BCD approach which updates the variable blocks by successively minimizing a sequence of approximations of f that are either locally tight upper bounds of f or strictly convex local approximations of f. The main contributions of this work include the characterization of the convergence conditions for a fairly wide class of such methods, especially for the cases where the objective functions are either nondifferentiable or nonconvex. Our results unify and extend the existing convergence results ...

1,032 citations
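To make the idea concrete, here is a small Python sketch, in the spirit of the approach described above, of a two-block inexact BCD iteration: each block is updated by minimizing a locally tight quadratic upper bound of f (tight at the current iterate) instead of solving the block subproblem exactly. The test problem, names, and constants are illustrative assumptions, not taken from the paper.

# Illustrative sketch of inexact block coordinate descent: each block is
# updated by minimizing a locally tight quadratic upper bound of f rather
# than solving the block subproblem exactly. Test problem is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(30, 10)), rng.normal(size=(30, 10))
c = rng.normal(size=30)

def f(x, y):
    r = A @ x + B @ y - c
    return 0.5 * r @ r

# Per-block Lipschitz constants of the partial gradients give tight majorizers:
# f(x, y_k) <= f(x_k, y_k) + g_x.(x - x_k) + (L_x/2)||x - x_k||^2, equal at x_k.
L_x = np.linalg.norm(A, 2) ** 2
L_y = np.linalg.norm(B, 2) ** 2

x, y = np.zeros(10), np.zeros(10)
for it in range(200):
    r = A @ x + B @ y - c
    x = x - (A.T @ r) / L_x          # minimizer of the upper bound in block x
    r = A @ x + B @ y - c
    y = y - (B.T @ r) / L_y          # minimizer of the upper bound in block y

print(f"final objective: {f(x, y):.6f}")

Because each majorizer touches f at the current iterate, every block update is guaranteed not to increase the objective, which is the key mechanism behind the convergence analysis the abstract describes.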

Journal Article / DOI
TL;DR: Modifications to the back-propagation algorithm described by Rumelhart et al. (1986) can greatly accelerate convergence, which matters because in many applications the number of iterations required before convergence can be large.
Abstract: The utility of the back-propagation method in establishing suitable weights in a distributed adaptive network has been demonstrated repeatedly. Unfortunately, in many applications, the number of iterations required before convergence can be large. Modifications to the back-propagation algorithm described by Rumelhart et al. (1986) can greatly accelerate convergence. The modifications consist of three changes: (1) instead of updating the network weights after each pattern is presented to the network, the network is updated only after the entire repertoire of patterns to be learned has been presented, at which time the algebraic sums of all the weight changes are applied; (2) instead of keeping ε, the "learning rate" (i.e., the multiplier on the step size), constant, it is varied dynamically so that the algorithm utilizes a near-optimum ε, as determined by the local optimization topography; and (3) the momentum factor α is set to zero when, as signified by a failure of a step to reduce the total error, the information inherent in prior steps is more likely to be misleading than beneficial; only after the network takes a useful step, i.e., one that reduces the total error, does α again assume a non-zero value. Considering the selection of weights in neural nets as a problem in classical nonlinear optimization theory, the rationale for algorithms seeking only those weights that produce the globally minimum error is reviewed and rejected.

1,017 citations
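The three modifications can be sketched in a few lines of Python on a toy objective standing in for batch back-propagation; the growth/shrink factors (1.05, 0.7) and the toy problem are illustrative assumptions, not values from the paper.

# Sketch of the three modifications described above, applied to plain gradient
# descent on a toy objective (a stand-in for batch back-propagation).
import numpy as np

rng = np.random.default_rng(1)
X, t = rng.normal(size=(50, 3)), rng.normal(size=50)

def error(w):                      # total error over the whole pattern set
    return 0.5 * np.sum((np.tanh(X @ w) - t) ** 2)

def grad(w):                       # summed over all patterns (epoch update)
    y = np.tanh(X @ w)
    return X.T @ ((y - t) * (1 - y ** 2))

w, v = np.zeros(3), np.zeros(3)    # weights and momentum buffer
lr, momentum = 0.01, 0.9
E = error(w)
for epoch in range(500):
    step = -lr * grad(w) + momentum * v
    E_new = error(w + step)
    if E_new < E:                  # useful step: accept it and grow the rate
        w, v, E = w + step, step, E_new
        lr *= 1.05
    else:                          # bad step: reject, shrink rate, zero momentum
        lr *= 0.7
        v = np.zeros(3)

print(f"final error: {error(w):.4f}")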

Journal Article / DOI
TL;DR: The aim of this paper is to present a survey of convergence results on particle filtering methods to make them accessible to practitioners.
Abstract: Optimal filtering problems are ubiquitous in signal processing and related fields. Except for a restricted class of models, the optimal filter does not admit a closed-form expression. Particle filtering methods are a set of flexible and powerful sequential Monte Carlo methods designed to solve the optimal filtering problem numerically. The posterior distribution of the state is approximated by a large set of Dirac-delta masses (samples/particles) that evolve randomly in time according to the dynamics of the model and the observations. The particles are interacting; thus, classical limit theorems relying on statistically independent samples do not apply. In this paper, our aim is to present a survey of convergence results on this class of methods to make them accessible to practitioners.

1,013 citations
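As a concrete illustration of the interacting-particle approximation described above, here is a minimal bootstrap particle filter in Python for a one-dimensional linear-Gaussian model; the model, noise scales, and particle count are illustrative assumptions (for this particular model a Kalman filter would of course be exact).

# Minimal bootstrap particle filter sketch for a 1-D linear-Gaussian model.
import numpy as np

rng = np.random.default_rng(2)
T, N = 50, 1000                    # time steps, particles

# Simulate a hidden state x_t = 0.9 x_{t-1} + noise, observed as y_t = x_t + noise.
x_true, ys = 0.0, []
for _ in range(T):
    x_true = 0.9 * x_true + rng.normal(scale=1.0)
    ys.append(x_true + rng.normal(scale=0.5))

particles = rng.normal(size=N)
for y in ys:
    # Propagate particles through the state dynamics (prior proposal).
    particles = 0.9 * particles + rng.normal(scale=1.0, size=N)
    # Weight by the observation likelihood (Gaussian, std 0.5).
    w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
    w /= w.sum()
    # Resample: this step makes the particles interact, which is why the
    # classical i.i.d. limit theorems mentioned in the abstract do not apply.
    particles = particles[rng.choice(N, size=N, p=w)]

print(f"posterior mean estimate at final step: {particles.mean():.3f}")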

Proceedings Article
07 Dec 2009
TL;DR: A unified framework for establishing consistency and convergence rates for regularized M-estimators under high-dimensional scaling is provided; one main theorem is stated and shown to re-derive several existing results and to yield several new ones.
Abstract: High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of structure (e.g., sparse vectors; block-structured matrices; low-rank matrices; Markov assumptions). In such settings, a general approach to estimation is to solve a regularized convex program (known as a regularized M-estimator) which combines a loss function (measuring how well the model fits the data) with some regularization function that encourages the assumed structure. The goal of this paper is to provide a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive several existing results, and also to obtain several new results on consistency and convergence rates. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure the corresponding regularized M-estimators have fast convergence rates.

974 citations
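A concrete instance of such a regularized M-estimator is the lasso, which pairs a squared loss with the decomposable l1 regularizer; the Python sketch below solves it with plain iterative soft thresholding (ISTA). The dimensions, regularization level, and step size are illustrative assumptions.

# Sketch of a regularized M-estimator in the sense of the abstract: a lasso
# (squared loss + l1 penalty, a decomposable regularizer) solved by ISTA.
import numpy as np

rng = np.random.default_rng(3)
n, p, s = 50, 200, 5               # high-dimensional regime: p > n, sparse truth
beta_true = np.zeros(p)
beta_true[:s] = 1.0
X = rng.normal(size=(n, p))
y = X @ beta_true + 0.1 * rng.normal(size=n)

lam = 0.1 * np.max(np.abs(X.T @ y)) / n   # regularization level (heuristic)
step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)

beta = np.zeros(p)
for _ in range(500):
    g = X.T @ (X @ beta - y) / n          # gradient of the squared loss
    z = beta - step * g
    beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print(f"estimation error: {np.linalg.norm(beta - beta_true):.3f}")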


Network Information
Related Topics (5)
Nonlinear system: 208.1K papers, 4M citations, 91% related
Optimization problem: 96.4K papers, 2.1M citations, 90% related
Differential equation: 88K papers, 2M citations, 90% related
Partial differential equation: 70.8K papers, 1.6M citations, 90% related
Matrix (mathematics): 105.5K papers, 1.9M citations, 89% related
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2022    62
2021    1,831
2020    1,524
2019    1,346
2018    1,321
2017    1,075