
Showing papers on "Convergence (routing)" published in 1997


Journal ArticleDOI
TL;DR: A novel fast algorithm for independent component analysis is introduced, which can be used for blind source separation and feature extraction, and the convergence speed is shown to be cubic.
Abstract: We introduce a novel fast algorithm for independent component analysis, which can be used for blind source separation and feature extraction. We show how a neural network learning rule can be transformed into a fixed-point iteration, which provides an algorithm that is very simple, does not depend on any user-defined parameters, and converges quickly to the most accurate solution allowed by the data. The algorithm finds, one at a time, all non-Gaussian independent components, regardless of their probability distributions. The computations can be performed in either batch mode or a semiadaptive manner. The convergence of the algorithm is rigorously proved, and the convergence speed is shown to be cubic. Some comparisons to gradient-based algorithms are made, showing that the new algorithm is usually 10 to 100 times faster, sometimes giving the solution in just a few iterations.
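As a rough, hypothetical sketch (not the authors' implementation), the kurtosis-based fixed-point iteration described above can be illustrated in Python on whitened data; the sources, mixing matrix, and iteration count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two non-Gaussian (uniform, unit-variance) sources, linearly mixed.
n = 20000
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# Fixed-point iteration with the kurtosis nonlinearity g(u) = u**3:
# w <- E[z (w^T z)^3] - 3 w, followed by normalization.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    done = abs(abs(w_new @ w) - 1) < 1e-10   # converged up to sign
    w = w_new
    if done:
        break

# The estimated component should match one true source up to sign/scale.
y = w @ Z
corr = max(abs(np.corrcoef(y, S[0])[0, 1]), abs(np.corrcoef(y, S[1])[0, 1]))
print(round(corr, 2))
```

In practice convergence is observed within a handful of iterations, consistent with the cubic rate claimed above.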

3,215 citations


Journal ArticleDOI
TL;DR: The characterization of pattern search methods is exploited to establish a global convergence theory that does not enforce a notion of sufficient decrease, and is possible because the iterates of a pattern search method lie on a scaled, translated integer lattice.
Abstract: We introduce an abstract definition of pattern search methods for solving nonlinear unconstrained optimization problems. Our definition unifies an important collection of optimization methods that neither compute nor explicitly approximate derivatives. We exploit our characterization of pattern search methods to establish a global convergence theory that does not enforce a notion of sufficient decrease. Our analysis is possible because the iterates of a pattern search method lie on a scaled, translated integer lattice. This allows us to relax the classical requirements on the acceptance of the step, at the expense of stronger conditions on the form of the step, and still guarantee global convergence.
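A minimal compass-search sketch (one simple member of the pattern search class described above, with an assumed halving contraction) illustrates how steps can be accepted on simple decrease alone, with iterates staying on a scaled, translated lattice:

```python
import numpy as np

def compass_search(f, x0, delta=1.0, tol=1e-8, max_iter=10000):
    """Coordinate (compass) pattern search: poll x +/- delta*e_i,
    accept any simple decrease, otherwise contract the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * delta
                ft = f(trial)
                if ft < fx:          # simple (not sufficient) decrease
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5             # contract; iterates stay on a scaled lattice
            if delta < tol:
                break
    return x

xmin = compass_search(lambda x: (x[0] - 1) ** 2 + 4 * (x[1] + 2) ** 2, [0.0, 0.0])
print(np.round(xmin, 4))
```

No derivatives are computed or approximated anywhere; the lattice structure is what substitutes for a sufficient-decrease condition in the global convergence theory.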

1,229 citations


Book
01 Jan 1997
TL;DR: A book on stochastic approximation covering applications to learning, state-dependent noise and queueing, signal processing and adaptive control, with proofs of convergence with probability one and by weak convergence methods, rates of convergence, averaging of the iterates, and distributed/decentralized and asynchronous algorithms.
Abstract: Contents: applications and issues; application to learning, state-dependent noise and queueing; applications to signal processing and adaptive control; mathematical background; convergence with probability one (martingale difference noise); convergence with probability one (correlated noise); weak convergence: introduction; weak convergence methods for general algorithms; applications: proofs of convergence; rate of convergence; averaging of the iterates; distributed/decentralized and asynchronous algorithms.

1,172 citations


Journal ArticleDOI
TL;DR: Exact computable rates of convergence for Gaussian target distributions are obtained, and different random and non-random updating strategies and blocking combinations are compared using the rates.
Abstract: In this paper many convergence issues concerning the implementation of the Gibbs sampler are investigated. Exact computable rates of convergence for Gaussian target distributions are obtained. Different random and non-random updating strategies and blocking combinations are compared using the rates. The effects of dimensionality and correlation structure on the convergence rates are studied. Some examples are considered to demonstrate the results. For a Gaussian image analysis problem several updating strategies are described and compared. For problems in Bayesian linear models several possible parameterizations are analysed in terms of their convergence rates characterizing the optimal choice.
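For the simplest Gaussian target, a bivariate normal with correlation rho, the full conditionals are available in closed form, and this is the kind of case where such analyses yield exact rates (for this two-variable scan the geometric rate works out to rho squared). The following sketch is an illustration, not the paper's code; all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.9          # target: bivariate normal, unit variances, correlation rho

# Systematic-scan Gibbs sampler: x | y ~ N(rho*y, 1 - rho^2), and symmetrically.
n, burn = 20000, 1000
x, y = 0.0, 0.0
xs, ys = [], []
for t in range(n + burn):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    if t >= burn:
        xs.append(x)
        ys.append(y)

xs, ys = np.array(xs), np.array(ys)
corr = float(np.corrcoef(xs, ys)[0, 1])
print(round(corr, 2))
```

The stronger the correlation, the slower the sampler mixes, which is exactly the dependence on correlation structure studied above.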

448 citations


Proceedings ArticleDOI
01 Oct 1997
TL;DR: The analysis in this paper is based on data collected from BGP routing messages generated by border routers at five of the Internet core's public exchange points during a nine month period and reveals several unexpected trends and ill-behaved systematic properties in Internet routing.
Abstract: This paper examines the network inter-domain routing information exchanged between backbone service providers at the major U.S. public Internet exchange points. Internet routing instability, or the rapid fluctuation of network reachability information, is an important problem currently facing the Internet engineering community. High levels of network instability can lead to packet loss, increased network latency and time to convergence. At the extreme, high levels of routing instability have led to the loss of internal connectivity in wide-area, national networks. In this paper, we describe several unexpected trends in routing instability, and examine a number of anomalies and pathologies observed in the exchange of inter-domain routing information. The analysis in this paper is based on data collected from BGP routing messages generated by border routers at five of the Internet core's public exchange points during a nine month period. We show that the volume of these routing updates is several orders of magnitude more than expected and that the majority of this routing information is redundant, or pathological. Furthermore, our analysis reveals several unexpected trends and ill-behaved systematic properties in Internet routing. We finally posit a number of explanations for these anomalies and evaluate their potential impact on the Internet infrastructure.

380 citations


Journal ArticleDOI
TL;DR: For the first time in a general setting, global and local contraction rates are derived, in a form that makes it possible to determine the optimal step size relative to certain constants associated with the given problem.
Abstract: Forward--backward splitting methods provide a range of approaches to solving large-scale optimization problems and variational inequalities in which structures conducive to decomposition can be utilized. Apart from special cases where the forward step is absent and a version of the proximal point algorithm comes out, efforts at evaluating the convergence potential of such methods have so far relied on Lipschitz properties and strong monotonicity, or inverse strong monotonicity, of the mapping involved in the forward step, the perspective mainly being that of projection algorithms. Here, convergence is analyzed by a technique that allows properties of the mapping in the backward step to be brought in as well. For the first time in such a general setting, global and local contraction rates are derived; moreover, they are derived in a form which makes it possible to determine the optimal step size relative to certain constants associated with the given problem. Insights are thereby gained into the effects of shifting strong monotonicity between the forward and backward mappings when a splitting is selected.
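Forward-backward splitting can be illustrated on a concrete problem where both steps are cheap: minimizing 0.5*||Ax - b||^2 + lam*||x||_1, with the gradient of the smooth term as the forward step and the l1 prox (soft thresholding) as the backward step. This is a generic sketch of the scheme, not the paper's analysis; the fixed step 1/L and the problem data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# min_x 0.5*||Ax - b||^2 + lam*||x||_1  (sparse ground truth, noiseless data)
A = rng.normal(size=(40, 10))
x_true = np.zeros(10)
x_true[[1, 5]] = [2.0, -3.0]
b = A @ x_true
lam = 0.5

step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(10)
for _ in range(2000):
    g = A.T @ (A @ x - b)                              # forward (gradient) step
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward: soft threshold

print(np.round(x[[1, 5]], 1))
```

When the forward step is absent this collapses to the proximal point algorithm, the special case mentioned above; the step size bound is where the contraction-rate constants enter.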

290 citations


Proceedings Article
23 Aug 1997
TL;DR: Two new distributed routing algorithms for data networks are investigated, based on simple biological "ants" that explore the network and rapidly learn good routes using a novel variation of reinforcement learning; the algorithms are shown to scale well with network size on a realistic topology.
Abstract: We investigate two new distributed routing algorithms for data networks based on simple biological "ants" that explore the network and rapidly learn good routes, using a novel variation of reinforcement learning. These two algorithms are fully adaptive to topology changes and changes in link costs in the network, and have space and computational overheads that are competitive with traditional packet routing algorithms: although they can generate more routing traffic when the rate of failures in a network is low, they perform much better under higher failure rates. Both algorithms are more resilient than traditional algorithms, in the sense that random corruption of routing state has limited impact on the computation of paths. We present convergence theorems for both of our algorithms drawing on the theory of non-stationary and stationary discrete-time Markov chains over the reals. We present an extensive empirical evaluation of our algorithms on a simulator that is widely used in the computer networks community for validating and testing protocols. We present comparative results on data delivery performance, aggregate routing traffic (algorithm overhead), as well as the degree of resilience for our new algorithms and two traditional routing algorithms in current use. We also show that the performance of our algorithms scales well with increases in network size, using a realistic topology.
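The reinforcement of next-hop probabilities can be sketched with a hypothetical update rule (the rule and constants here are illustrative assumptions, not the paper's): probability mass shifts toward a neighbor in proportion to a reinforcement r derived from trip quality, and the table remains a probability distribution:

```python
def reinforce(table, neighbor, r):
    """Shift probability mass toward `neighbor` (hypothetical update rule):
    p_neighbor <- (p + r)/(1 + r), all other entries <- p/(1 + r)."""
    for n in table:
        if n == neighbor:
            table[n] = (table[n] + r) / (1.0 + r)
        else:
            table[n] = table[n] / (1.0 + r)

# Node A's routing table for destination D, with neighbors B and C.
table = {"B": 0.5, "C": 0.5}
for _ in range(20):
    reinforce(table, "B", 0.1)   # ants via B keep reporting shorter trips
print(round(table["B"], 2), round(sum(table.values()), 2))
```

Because the update is normalized, corrupting one entry perturbs but does not break the distribution, which hints at the resilience property claimed above.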

264 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the iteratively regularized Gauss-Newton method is a locally convergent method for solving nonlinear ill-posed problems, provided the nonlinear operator satisfies a certain smoothness condition.
Abstract: In this paper we prove that the iteratively regularized Gauss-Newton method is a locally convergent method for solving nonlinear ill-posed problems, provided the nonlinear operator satisfies a certain smoothness condition. For perturbed data we propose a priori and a posteriori stopping rules that guarantee convergence of the iterates, if the noise level goes to zero. Under appropriate closeness and smoothness conditions on the exact solution we obtain the same convergence rates as for linear ill-posed problems.
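The iteratively regularized Gauss-Newton step has the form x_{k+1} = x_k + (J_k^T J_k + a_k I)^{-1} (J_k^T (y - F(x_k)) - a_k (x_k - x_0)) with regularization a_k decreasing to zero. The toy sketch below applies it to a small, well-posed nonlinear system purely to show the mechanics; the paper's setting is ill-posed and infinite-dimensional, and the operator and schedule here are assumptions:

```python
import numpy as np

def F(x):
    # Toy nonlinear forward operator (illustrative, not ill-posed).
    return np.array([x[0] + x[1] ** 2, x[1]])

def J(x):
    # Jacobian of F at x.
    return np.array([[1.0, 2 * x[1]], [0.0, 1.0]])

x_star = np.array([1.0, 2.0])
y = F(x_star)                      # exact (noise-free) data

x0 = np.array([0.0, 0.0])          # a priori guess
x = x0.copy()
alpha = 1.0                        # regularization, geometrically decreased
for k in range(30):
    Jk = J(x)
    lhs = Jk.T @ Jk + alpha * np.eye(2)
    rhs = Jk.T @ (y - F(x)) - alpha * (x - x0)
    x = x + np.linalg.solve(lhs, rhs)
    alpha *= 0.5

print(np.round(x, 4))
```

With noisy data one would instead stop the iteration early via the a priori or a posteriori rules described above, rather than driving alpha to zero.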

249 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyzed the convergence of an iterative method for total variation denoising with discontinuities, which involves a "lagged diffusivity" approach in which a sequence of linear diffusion problems are solved.
Abstract: In total variation denoising, one attempts to remove noise from a signal or image by solving a nonlinear minimization problem involving a total variation criterion. Several approaches based on this idea have recently been shown to be very effective, particularly for denoising functions with discontinuities. This paper analyzes the convergence of an iterative method for solving such problems. The iterative method involves a "lagged diffusivity" approach in which a sequence of linear diffusion problems is solved. Global convergence in a finite-dimensional setting is established, and local convergence properties, including rates and their dependence on various parameters, are examined.
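The "lagged diffusivity" fixed point can be sketched in one dimension: each sweep freezes the diffusivity weights 1/sqrt((Du)^2 + beta) at the current iterate and solves a single linear system. The signal, regularization weight, and smoothing constant beta below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy 1-D step signal.
n = 100
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)
noisy = clean + 0.1 * rng.normal(size=n)

# Forward-difference operator D ((n-1) x n).
D = np.diff(np.eye(n), axis=0)

lam, beta = 0.5, 1e-6
u = noisy.copy()
for _ in range(30):
    w = 1.0 / np.sqrt((D @ u) ** 2 + beta)    # lagged diffusivity weights
    Asys = np.eye(n) + lam * D.T @ (w[:, None] * D)
    u = np.linalg.solve(Asys, noisy)          # one linear diffusion solve

err_noisy = np.linalg.norm(noisy - clean)
err_tv = np.linalg.norm(u - clean)
print(err_tv < err_noisy)
```

The discontinuity at the midpoint survives denoising, which is the behavior that makes the total variation criterion attractive for such signals.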

221 citations


Journal ArticleDOI
TL;DR: A discrete time algorithm is proposed, the Euler method, for the computation of the traffic equilibrium and its notable feature is that it decomposes the traffic problem into network subproblems of special structure, each of which can then be solved simultaneously and in closed form using exact equilibration.
Abstract: This paper proposes, for a fixed demand traffic network problem, a route travel choice adjustment process formulated as a projected dynamical system, whose stationary points correspond to the traffic equilibria. Stability analysis is then conducted in order to investigate conditions under which the route travel choice adjustment process approaches equilibria. We also propose a discrete time algorithm, the Euler method, for the computation of the traffic equilibrium and provide convergence results. The notable feature of the algorithm is that it decomposes the traffic problem into network subproblems of special structure, each of which can then be solved simultaneously and in closed form using exact equilibration. Finally, we illustrate the computational performance of the Euler method through various numerical examples.
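On a two-route network with linear link costs and fixed demand, the Euler iterates of the projected dynamical system reduce to a one-dimensional projection step, and the computed equilibrium (equal route costs) can be checked against the closed form. This is a hypothetical toy instance, not one of the paper's examples:

```python
# Two-route network, fixed demand d; route cost c_i(x) = a_i*x + b_i.
d = 10.0
a1, b1 = 1.0, 0.0    # route 1 cost: x1
a2, b2 = 2.0, 5.0    # route 2 cost: 2*x2 + 5

# Euler iterates of the projected dynamical system: shift flow toward the
# cheaper route, projecting onto the feasible set {0 <= x1 <= d}.
x1 = 5.0
for k in range(1, 2001):
    c1 = a1 * x1 + b1
    c2 = a2 * (d - x1) + b2
    x1 = min(max(x1 - (1.0 / k) * (c1 - c2), 0.0), d)   # diminishing step + projection

# Analytic equilibrium: a1*x1 + b1 = a2*(d - x1) + b2  =>  x1 = 25/3.
print(round(x1, 3))
```

At the stationary point both routes have equal travel cost, which is exactly the correspondence between stationary points and traffic equilibria stated above.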

191 citations



Journal ArticleDOI
TL;DR: The PSA algorithm proposed in the paper has shown significant improvements in solution quality for the largest of the test networks, and the conditions under which the parallel algorithm is most efficient are investigated.
Abstract: The simulated annealing optimization technique has been successfully applied to a number of electrical engineering problems, including transmission system expansion planning. The method is general in the sense that it does not assume any particular property of the problem being solved, such as linearity or convexity. Moreover, it has the ability to provide solutions arbitrarily close to an optimum (i.e. it is asymptotically convergent) as the cooling process slows down. The drawback of the approach is the computational burden: finding optimal solutions may be extremely expensive in some cases. This paper presents a parallel simulated annealing (PSA) algorithm for solving the long-term transmission network expansion planning problem. A strategy that does not affect the basic convergence properties of the sequential simulated annealing algorithm has been implemented and tested. The paper investigates the conditions under which the parallel algorithm is most efficient. The parallel implementations have been tested on three example networks: a small 6-bus network and two complex real-life networks. Excellent results are reported in the test section of the paper: in addition to reductions in computing times, the PSA algorithm proposed in the paper has shown significant improvements in solution quality for the largest of the test networks.
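A minimal (sequential, not parallel) simulated annealing sketch on a one-dimensional multimodal objective shows the ingredients the paper relies on: Boltzmann acceptance of uphill moves and a slow geometric cooling schedule. All constants here are illustrative assumptions:

```python
import math
import random

random.seed(4)

def f(x):
    # 1-D Rastrigin-style objective: global minimum 0 at x = 0,
    # with local minima near the other integers.
    return x * x + 4 * (1 - math.cos(2 * math.pi * x))

x = 4.3
fx = f(x)
best_x, best_f = x, fx
T = 10.0
while T > 1e-3:                      # slow geometric cooling
    for _ in range(200):
        cand = x + random.gauss(0, 0.5)
        fc = f(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    T *= 0.9
print(round(best_f, 2))
```

The acceptance of uphill moves is what lets the chain escape local minima, and the asymptotic convergence claimed above holds only in the limit of arbitrarily slow cooling, which is the computational burden the parallel algorithm targets.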

Book ChapterDOI
Jean Jacod
01 Jan 1997
TL;DR: No abstract is available for this chapter; the indexed text is the publisher's copyright notice for the Séminaire de Probabilités (Strasbourg) archives.
Abstract: © Springer-Verlag, Berlin Heidelberg New York, 1997, all rights reserved. Access to the archives of the Séminaire de Probabilités (Strasbourg) (http://portail.mathdoc.fr/SemProba/) implies agreement with the general conditions of use (http://www.numdam.org/legal.php). Any commercial use or systematic printing constitutes a criminal offense. Any copy or printout of this file must contain this copyright notice.


Journal ArticleDOI
TL;DR: An extensive numerical study with a wide spectrum of test problems indicates that there are substantial differences between the rules in terms of the required CPU time, the number of function and derivative evaluations and space complexity.
Abstract: The role of the interval subdivision-selection rule is investigated in branch-and-bound algorithms for global optimization. The class of rules that allows convergence for the model algorithm is characterized, and it is shown that the four rules investigated satisfy the conditions of convergence. A numerical study with a wide spectrum of test problems indicates that there are substantial differences between the rules in terms of the required CPU time, the number of function and derivative evaluations, and space complexity, and two rules can provide substantial improvements in efficiency.

Journal ArticleDOI
TL;DR: New learning (adaptive) laws are proposed which, when applied to recurrent high-order neural networks (RHONN), ensure that the identification error converges to zero exponentially fast; moreover, if the identification error is initially zero, it remains zero during the whole identification process.

Journal Article
TL;DR: This paper analyzes, in an abstract framework, necessary and sufficient conditions for the convergence of eigenvalue problems for mixed formulation.
Abstract: Eigenvalue problems for mixed formulation show peculiar features that make them substantially different from the corresponding mixed direct problems. In this paper we analyze, in an abstract framework, necessary and sufficient conditions for their convergence.

01 Feb 1997
TL;DR: This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm; simulation results show convergence to a good solution within two iterations, and timing data show that real-time control is possible.
Abstract: An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced from other techniques. The main cost of the Newton-Raphson algorithm is in the calculation of the Hessian, but even with this overhead the low iteration numbers make Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations, and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.

Journal ArticleDOI
TL;DR: This work shows that when the underlying linear program is nondegenerate, convergence to arbitrarily high accuracy occurs, and explains and demonstrates what happens when the linear program is degenerate, where convergence to acceptable accuracy is usually obtained.
Abstract: Some implementations of interior-point algorithms obtain their search directions by solving symmetric indefinite systems of linear equations. The conditioning of the coefficient matrices in these so-called augmented systems deteriorates on later iterations, as some of the diagonal elements grow without bound. Despite this apparent difficulty, the steps produced by standard factorization procedures are often accurate enough to allow the interior-point method to converge to high accuracy. When the underlying linear program is nondegenerate, we show that convergence to arbitrarily high accuracy occurs, at a rate that closely approximates the theory. We also explain and demonstrate what happens when the linear program is degenerate, where convergence to acceptable accuracy (but not arbitrarily high accuracy) is usually obtained.

Proceedings ArticleDOI
24 Sep 1997
TL;DR: In this article, a new set of learning rules for the nonlinear blind source separation problem based on the information maximization criterion is presented, which focuses on a parametric sigmoidal nonlinearity and higher order polynomials.
Abstract: We present a new set of learning rules for the nonlinear blind source separation problem based on the information maximization criterion. The mixing model is divided into a linear mixing part and a nonlinear transfer channel. The proposed model focuses on a parametric sigmoidal nonlinearity and higher order polynomials. Our simulation results verify the convergence of the proposed algorithms.

Patent
07 May 1997
TL;DR: In this article, a feasibility condition that provides multiple loop-free paths through a computer network and that minimizes the amount of synchronization among routers necessary for the correct operation of a routing algorithm is presented.
Abstract: A system for maintaining routing tables at each router in a computer network. The system is based on (a) a feasibility condition that provides multiple loop-free paths through a computer network and that minimizes the amount of synchronization among routers necessary for the correct operation of a routing algorithm, and (b) a method that manages the set of successors during the time a router synchronizes its routing-table update activity with other routers, in order to efficiently compute multiple loop-free paths, including the shortest path, through a computer network.
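The flavor of such a feasibility condition can be sketched hypothetically: a neighbor qualifies as a loop-free successor only if its reported distance to the destination is strictly smaller than this router's feasible distance. The rule and data below are illustrative assumptions, not the patent's exact mechanism:

```python
def feasible_successors(feasible_distance, reported, link_cost):
    """Return loop-free successor candidates, best path first.
    reported[n]: neighbor n's reported distance to the destination;
    link_cost[n]: cost of the link to neighbor n."""
    ok = {n: d + link_cost[n] for n, d in reported.items()
          if d < feasible_distance}          # the feasibility condition
    return sorted(ok, key=ok.get)            # prefer the shortest total path

# Neighbors' reported distances and our link costs to them.
reported = {"B": 3, "C": 6, "E": 1}
link_cost = {"B": 1, "C": 1, "E": 4}
# This router's feasible distance to the destination is 4.
print(feasible_successors(4, reported, link_cost))
```

Neighbor C is excluded even though its total path cost is attractive, because a neighbor reporting a distance no smaller than ours could be routing through us, which is exactly the loop the condition rules out.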

Journal Article
TL;DR: In this article, implicit and explicit proximal regularization methods using conventional and modified Lagrange functions are suggested to solve the equi-programming problem, and the convergence of these methods to the equilibrium solutions is proven.
Abstract: The equi-programming problem is formulated. Its relation to game settings is discussed. To solve this problem, implicit and explicit proximal-regularization methods using conventional and modified Lagrange functions are suggested. The convergence of these methods to the equilibrium solutions is proven.

Journal ArticleDOI
TL;DR: A method, DDVERK, is described that implements this approach and justifies the strategies and heuristics that have been adopted, showing how the assumptions related to error control, stepsize control, and discontinuity detection can be efficiently realized for a particular sixth-order numerical method.
Abstract: We have recently developed a generic approach for solving neutral delay differential equations based on the use of a continuous Runge–Kutta formula with defect control and investigated its convergence properties. In this paper, we describe a method, DDVERK, which implements this approach and justify the strategies and heuristics that have been adopted. In particular we show how the assumptions related to error control, stepsize control, and discontinuity detection (required for convergence) can be efficiently realized for a particular sixth-order numerical method. Summaries of extensive testing are also reported.

Journal ArticleDOI
TL;DR: In this paper, the authors consider pseudoorbits of discrete dynamical systems whose one-step errors tend to zero with increasing indices; the rates of convergence are studied by considering pseudoorbits whose error sequences belong to certain (weighted) lp-spaces and showing that the corresponding shadowing errors lie in those spaces as well.
Abstract: In this paper pseudoorbits of discrete dynamical systems are considered such that the one-step errors of the orbits tend to zero with increasing indices. First it is shown that close to hyperbolic sets such orbits are shadowed by true trajectories of the system with shadowing errors also tending to zero. Then the rates of convergence are studied via considering pseudoorbits such that the error sequences belong to certain (weighted) lp-spaces and showing that the corresponding shadowing errors are there, too. Under certain conditions on the weights we establish weighted shadowing near nonhyperbolic sets.

Journal ArticleDOI
TL;DR: This work employs a "feedback" technique in which the LC vectors are used to produce a new "data" vector and the algorithm is restarted; convergence of this nested iterative scheme to an approximate solution is proven.
Abstract: It has been shown that convergence to a solution can be significantly accelerated for a number of iterative image reconstruction algorithms, including simultaneous Cimmino-type algorithms, the "expectation maximization" method for maximizing likelihood (EMML) and the simultaneous multiplicative algebraic reconstruction technique (SMART), through the use of rescaled block-iterative (BI) methods. These BI methods involve partitioning the data into disjoint subsets and using only one subset at each step of the iteration. One drawback of these methods is their failure to converge to an approximate solution in the inconsistent case, in which no image consistent with the data exists; they are always observed to produce limit cycles (LCs) of distinct images, through which the algorithm cycles. No one of these images provides a suitable solution, in general. The question that arises then is whether or not these LC vectors retain sufficient information to construct from them a suitable approximate solution; we show that they do. To demonstrate that, we employ a "feedback" technique in which the LC vectors are used to produce a new "data" vector, and the algorithm restarted. Convergence of this nested iterative scheme to an approximate solution is then proven. Preliminary work also suggests that this feedback method may be incorporated in a practical reconstruction method.

Journal ArticleDOI
TL;DR: This paper proves tracking error convergence without persistence of excitation and shows that the adaptive controller is robust with respect to sufficiently small bounded disturbances; adding a robustifying control component then shows that the controller is robust for a wide class of not-necessarily-small bounded disturbances.

Journal ArticleDOI
TL;DR: This paper addresses a global optimization approach to a water distribution network design problem, employing an arc-based formulation that is linear except for certain complicating head-loss constraints and developing a first global optimization scheme for this model.
Abstract: In this paper, we address a global optimization approach to a water distribution network design problem. Traditionally, a variety of local optimization schemes have been developed for such problems, each new method discovering improved solutions for some standard test problems, with no known lower bound to test the quality of the solutions obtained. A notable exception is a recent paper by Eiger et al. (1994) who present a first global optimization approach for a loop and path-based formulation of this problem, using a semi-infinite linear program to derive lower bounds. In contrast, we employ an arc-based formulation that is linear except for certain complicating head-loss constraints and develop a first global optimization scheme for this model. Our lower bounds are derived through the design of a suitable Reformulation-Linearization Technique (RLT) that constructs a tight linear programming relaxation for the given problem, and this is embedded within a branch-and-bound algorithm. Convergence to an optimal solution is induced by coordinating this process with an appropriate partitioning scheme. Some preliminary computational experience is provided on two versions of a particular standard test problem from the literature, for which an even further improved solution is discovered, one that is verified for the first time to be an optimum, without any assumed bounds on the flows. Two other variants of this problem are also solved exactly for illustrative purposes and to provide researchers with additional test cases having known optimal solutions. Suggestions on a more elaborate study involving several algorithmic enhancements are presented for future research.

Journal ArticleDOI
TL;DR: In this article, the convergence of the higher-order quasi-linear approximations of the Born series is considered and a new approach is proposed to estimate the accuracy of the original QL approximation.
Abstract: We have recently introduced a quasi-linear (QL) approximation for the solution of the three-dimensional (3-D) electromagnetic modeling problem. In this paper we discuss an approach to improving its accuracy by considering higher-order QL approximations. This approach can be considered a natural generalization of the Born series. We use the modified Green's operator with norm less than 1 to ensure the convergence of the higher-order QL approximations to the true solution. This new approach produces a convergent QL series, which makes it possible to estimate the accuracy of the original QL approximation without direct comparison with the rigorous full integral equation solution. It also opens fundamentally new possibilities for fast and accurate 3-D EM modeling and inversion.
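The convergence argument rests on a standard fact: if an operator G has norm less than 1, the Born-type series b + Gb + G^2 b + ... converges to (I - G)^(-1) b. A small random matrix standing in for the modified Green's operator (an illustrative assumption, not an electromagnetic model) shows this numerically:

```python
import numpy as np

rng = np.random.default_rng(5)

# A stand-in "Green's operator" G, scaled so its spectral norm is 0.9 < 1,
# which guarantees convergence of the Neumann (Born-type) series.
G = rng.normal(size=(6, 6))
G *= 0.9 / np.linalg.norm(G, 2)
b = rng.normal(size=6)

x_exact = np.linalg.solve(np.eye(6) - G, b)

# Partial sums of b + G b + G^2 b + ...
x, term = b.copy(), b.copy()
for _ in range(300):
    term = G @ term
    x += term

print(np.allclose(x, x_exact, atol=1e-8))
```

The geometric decay of the terms, at rate given by the operator norm, is also what allows the truncation error of a low-order approximation to be estimated without the full solution.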

Journal ArticleDOI
TL;DR: This work investigates explicit and implicit continuous Runge-Kutta methods for solving neutral Volterra integro-differential equations with delay, considering both the convergence of the iterative scheme required on each step and the convergence of the numerical solution to the true solution of the Volterra system.

Journal ArticleDOI
TL;DR: This paper proposes a subspace limited memory quasi-Newton method for solving large-scale optimization with simple bounds on the variables and the global convergence of the method is proved.
Abstract: In this paper we propose a subspace limited memory quasi-Newton method for solving large-scale optimization with simple bounds on the variables. The limited memory quasi-Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. The search direction consists of three parts: a subspace quasi-Newton direction, and two subspace gradient and modified gradient directions. Our algorithm can be applied to large-scale problems as there is no need to solve any subproblems. The global convergence of the method is proved and some numerical results are also given.
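The projected-gradient ingredient of the method can be illustrated on a tiny bound-constrained quadratic (the subspace limited-memory quasi-Newton part is omitted; this sketch only shows projection onto the box, with assumed problem data):

```python
import numpy as np

# Bound-constrained quadratic: min 0.5*x^T A x - b^T x  s.t.  l <= x <= u.
# Plain gradient-projection sketch; the paper accelerates the free variables
# with limited-memory quasi-Newton steps, which we do not reproduce here.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([10.0, 10.0])
l, u = np.zeros(2), np.array([2.0, 2.0])

x = np.zeros(2)
step = 1.0 / np.linalg.norm(A, 2)          # safe step for this convex quadratic
for _ in range(500):
    x = np.clip(x - step * (A @ x - b), l, u)   # gradient step, then projection

print(x)   # both upper bounds are active: the unconstrained minimizer (2, 4) leaves the box
```

Identifying which bounds are active is what splits the variables into the active set and the subspace updated by the quasi-Newton recursion in the method above.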