
Showing papers on "Convergence (routing) published in 1984"


Journal ArticleDOI
TL;DR: It is shown that under certain conditions the K-means algorithm may fail to converge to a local minimum, and that it converges under differentiability conditions to a Kuhn-Tucker point.
Abstract: The K-means algorithm is a commonly used technique in cluster analysis. In this paper, several questions about the algorithm are addressed. The clustering problem is first cast as a nonconvex mathematical program. Then, a rigorous proof of the finite convergence of the K-means-type algorithm is given for any metric. It is shown that under certain conditions the algorithm may fail to converge to a local minimum, and that it converges under differentiability conditions to a Kuhn-Tucker point. Finally, a method for obtaining a local-minimum solution is given.
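The alternating assignment/update iteration whose finite convergence the paper proves can be sketched in a few lines. This is a generic illustration of the K-means-type algorithm (the function name and toy data are ours, not the paper's): the objective never increases and only finitely many partitions exist, which is the essence of the finite-convergence argument.

```python
import numpy as np

def kmeans(X, init, iters=100):
    """Plain K-means iteration. Each pass never increases the
    within-cluster sum of squares, and only finitely many partitions
    exist, so the loop stops after finitely many steps; as the paper
    shows, the stopping point need not be a local minimum."""
    centers = init.astype(float).copy()
    k = len(centers)
    for _ in range(iters):
        # assignment step: nearest centre in the Euclidean metric
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # update step: each centre moves to the mean of its cluster
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):   # partition repeated: finite convergence
            break
        centers = new
    return centers, labels
```

On two well-separated pairs of points, two passes suffice to reach a fixed partition.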

1,180 citations


Journal ArticleDOI
TL;DR: This paper presents a geometrical discussion of the origin of that defect and proposes a new adaptive algorithm, called APA (affine projection algorithm), based on the result of the investigation.
Abstract: The LMS algorithm and learning identification, which presently are typical adaptive algorithms, share a problem: the speed of convergence may decrease greatly depending on the properties of the input signal. To address this problem, this paper presents a geometrical discussion of the origin of that defect and proposes a new adaptive algorithm based on the result of the investigation. Comparing the convergence speeds of the proposed algorithm and learning identification in numerical experiments by computer, a great improvement was verified. The algorithm is extended to a group of algorithms, called APA (affine projection algorithm), which includes the original algorithm and learning identification. It is shown that APA has some desirable properties: the coefficient vector approaches the true value monotonically, and the convergence speed is independent of the amplitude of the input signal. Clear conclusions are also obtained on what noise appears in the output signal when an external disturbance is impressed or the order of the adaptive filter is insufficient.
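As a rough illustration of the affine projection idea (not the paper's own notation), the update below projects the weight vector using the p most recent regressors; the names `taps`, `p` and the small regularisation `delta` are our choices for the sketch.

```python
import numpy as np

def apa(x, d, taps, p=2, mu=1.0, delta=1e-8):
    """Affine projection algorithm (APA) sketch: each update moves the
    weight vector onto the affine subspace consistent with the p most
    recent input/desired pairs, which makes the convergence speed
    independent of the input amplitude (unlike plain LMS)."""
    w = np.zeros(taps)
    for n in range(taps + p - 2, len(x)):
        # U: taps x p matrix whose columns are the p most recent regressors
        U = np.column_stack([x[n - j - np.arange(taps)] for j in range(p)])
        e = d[n - np.arange(p)] - U.T @ w          # a-priori errors
        w += mu * U @ np.linalg.solve(U.T @ U + delta * np.eye(p), e)
    return w
```

With a noiseless FIR target and white input, the identified coefficients match the true ones to high accuracy after a few hundred updates.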

843 citations


Journal ArticleDOI
TL;DR: In this paper, a direct simultaneous solution for all of the unknowns in the Lagrangian function on each iteration is proposed, where each iteration minimizes a quadratic approximation of the Lagrangian.
Abstract: The classical optimal power flow problem with a nonseparable objective function can be solved by an explicit Newton approach. Efficient, robust solutions can be obtained for problems of any practical size or kind. Solution effort is approximately proportional to network size, and is relatively independent of the number of controls or binding inequalities. The key idea is a direct simultaneous solution for all of the unknowns in the Lagrangian function on each iteration. Each iteration minimizes a quadratic approximation of the Lagrangian. For any given set of binding constraints the process converges to the Kuhn-Tucker conditions in a few iterations. The challenge in algorithm development is to efficiently identify the binding inequalities.
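The "direct simultaneous solution for all of the unknowns" amounts to solving one KKT system per iteration. A minimal sketch for the equality-constrained case follows; the helper names are ours, and a real OPF implementation additionally identifies and enforces binding inequalities, which the abstract notes is the hard part.

```python
import numpy as np

def newton_kkt_step(grad_f, hess_L, jac_c, c_val, x, lam):
    """One Newton iteration on the Lagrangian: minimize a quadratic
    model of the Lagrangian subject to linearized equality constraints
    by solving the full KKT system for the primal step and the
    multiplier update simultaneously."""
    H, J = hess_L(x, lam), jac_c(x)
    m = len(c_val)
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f(x) + J.T @ lam, c_val])
    sol = np.linalg.solve(K, rhs)
    return x + sol[:len(x)], lam + sol[len(x):]
```

For a quadratic objective with linear constraints the step is exact: minimizing ||x||^2/2 subject to x0 + x1 = 1 lands on x = (0.5, 0.5) in one iteration.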

817 citations


Journal ArticleDOI
TL;DR: A theoretical analysis of self-adaptive equalization for data-transmission is carried out starting from known convergence results for the corresponding trained adaptive filter and it can be proved that the algorithm is bounded.
Abstract: A theoretical analysis of self-adaptive equalization for data-transmission is carried out starting from known convergence results for the corresponding trained adaptive filter. The development relies on a suitable ergodicity model for the sequence of observations at the output of the transmission channel. Thanks to the boundedness of the decision function used for data recovery, it can be proved that the algorithm is bounded. Strong convergence results can be reached when a perfect (noiseless) equalizer exists: the algorithm will converge to it if the eye pattern is initially open. Otherwise convergence may take place towards certain other stationary points of the algorithm for which domains of attraction have been defined. Some of them will result in a poor error rate. The case of a noisy channel exhibits limit points for the algorithm that differ from those of the classical (trained) algorithm. The stronger the noise, the greater the difference is. One of the principal results of this study is the proof of the stability of the usual decision feedback algorithms once the learning period is over.

190 citations


Journal ArticleDOI
TL;DR: This work considers how large these approximations have to be, if they prevent convergence when the objective function is bounded below and continuously differentiable, and obtains a useful convergence result in the case when there is a bound on the second derivative approximation that depends linearly on the iteration number.
Abstract: Many trust region algorithms for unconstrained minimization have excellent global convergence properties if their second derivative approximations are not too large [2]. We consider how large these approximations have to be, if they prevent convergence when the objective function is bounded below and continuously differentiable. Thus we obtain a useful convergence result in the case when there is a bound on the second derivative approximations that depends linearly on the iteration number.
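A generic trust-region loop of the kind the analysis applies to might look like the following sketch. The clipped-Newton model step and the radius-update constants are standard textbook choices, not taken from the paper.

```python
import numpy as np

def trust_region(f, grad, hess, x, radius=1.0, iters=50, eta=0.1):
    """Minimal trust-region sketch: a quadratic model (gradient plus a
    second derivative approximation B) is approximately minimized inside
    a ball of the current radius, and the radius grows or shrinks with
    the agreement ratio rho between actual and predicted decrease."""
    for _ in range(iters):
        g, B = grad(x), hess(x)
        try:
            p = np.linalg.solve(B, -g)            # Newton step if usable
        except np.linalg.LinAlgError:
            p = -g                                # fall back to steepest descent
        if np.linalg.norm(p) > radius:            # clip to the trust region
            p = p * radius / np.linalg.norm(p)
        pred = -(g @ p + 0.5 * p @ B @ p)         # model decrease
        actual = f(x) - f(x + p)
        rho = actual / pred if pred > 0 else -1.0
        if rho > eta:                             # accept the step
            x = x + p
        radius = radius * 2 if rho > 0.75 else (radius * 0.25 if rho < 0.25 else radius)
    return x
```

On a convex quadratic the model is exact, so every step is accepted and the iterates march straight to the minimizer.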

171 citations



Journal ArticleDOI
TL;DR: A further advantage of the nonlinear approach of this paper is that the identified parameters have a clear physical meaning and can give useful information on the state of the biomass.

165 citations


Journal ArticleDOI
TL;DR: The algorithms are based on Gallager's method and provide methods for iteratively updating the routing table entries of each node in a manner that guarantees convergence to a minimum delay routing and utilize second derivatives of the objective function.
Abstract: We propose a class of algorithms for finding an optimal quasi-static routing in a communication network. The algorithms are based on Gallager's method [1] and provide methods for iteratively updating the routing table entries of each node in a manner that guarantees convergence to a minimum delay routing. Their main feature is that they utilize second derivatives of the objective function and may be viewed as approximations to a constrained version of Newton's method. The use of second derivatives results in improved speed of convergence and automatic stepsize scaling with respect to level of traffic input. These advantages are of crucial importance for the practical implementation of the algorithm using distributed computation in an environment where input traffic statistics gradually change.

162 citations


Book ChapterDOI
01 Jan 1984
TL;DR: This paper develops a method to approximate expectations of functionals of the solution of an S.D.E.; this method is efficiently implementable on a computer, and is based on both Monte-Carlo methods and discretizations of S.D.E.s.

Abstract: This paper presents results appearing in Talay [8] and shows their possible applications, in particular to nonlinear filtering. We develop a method to approximate expectations of functionals of the solution of an S.D.E.; this method is efficiently implementable on a computer, and is based on both Monte-Carlo methods and discretizations of S.D.E.s. Convergence is shown and bounds on the error are given.
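The combination of S.D.E. discretisation and Monte-Carlo averaging can be sketched as follows, using the simplest (Euler) time discretisation. This illustrates the general approach only, not Talay's specific scheme or error bounds.

```python
import numpy as np

def mc_functional(drift, sigma, x0, T, f, n_steps=100, n_paths=20000, seed=0):
    """Estimate E[f(X_T)] for dX = drift(X) dt + sigma(X) dW by
    (1) discretising the S.D.E. in time (Euler-Maruyama) and
    (2) averaging f over many simulated paths (Monte-Carlo)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)   # Brownian increments
        x = x + drift(x) * dt + sigma(x) * dW
    return f(x).mean()
```

As a sanity check, with zero drift and unit diffusion X_T is N(0, T), so E[X_T^2] at T = 1 should come out near 1.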

127 citations



Journal ArticleDOI
TL;DR: In this paper, the convergence of one-step method of lines (MOL) schemes for evolutionary problems in PDEs was studied and the stability materials for this framework were taken from the field of nonlinear stiff ODEs.
Abstract: Many existing numerical schemes for evolutionary problems in partial differential equations (PDEs) can be viewed as method of lines (MOL) schemes. This paper treats the convergence of one-step MOL schemes. Our main purpose is to set up a general framework for a convergence analysis applicable to nonlinear problems. The stability materials for this framework are taken from the field of nonlinear stiff ODEs. In this connection, important concepts are the logarithmic matrix norm and C-stability. A nonlinear parabolic equation and the cubic Schrödinger equation are used for illustrating the ideas.
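A minimal MOL example in the spirit of this framework: discretise the heat equation u_t = nu*u_xx in space, then apply a one-step (trapezoidal/Crank-Nicolson) integrator to the resulting stiff ODE system. The discretisation choices below are ours, not the paper's.

```python
import numpy as np

def heat_mol(u0, nu=1.0, L=1.0, T=0.1, nx=50, nt=2000):
    """Method of lines for u_t = nu * u_xx with zero boundary values:
    second differences in space give the stiff linear system u' = A u,
    which is then marched with the (A-stable) trapezoidal rule."""
    dx, dt = L / (nx + 1), T / nt
    main = -2.0 * np.eye(nx) + np.eye(nx, k=1) + np.eye(nx, k=-1)
    A = nu / dx ** 2 * main                  # semi-discrete operator
    I = np.eye(nx)
    # one trapezoidal step: (I - dt/2 A) u_new = (I + dt/2 A) u_old
    M = np.linalg.solve(I - dt / 2 * A, I + dt / 2 * A)
    u = u0.copy()
    for _ in range(nt):
        u = M @ u
    return u
```

Starting from sin(pi*x), the numerical solution tracks the exact decay exp(-nu*pi^2*T)*sin(pi*x) closely.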

Journal ArticleDOI
TL;DR: In this paper, the authors present a mathematical and numerical analysis of the rates of convergence of variational calculations and their impact on the issue of the convergence or divergence of expectation values obtained from variational wave functions.
Abstract: We present a mathematical and numerical analysis of the rates of convergence of variational calculations and their impact on the issue of the convergence or divergence of expectation values obtained from variational wave functions. The rate of convergence of a variational calculation is critically dependent on the ability of finite linear combinations of basis functions to simulate the nonanalyticities (cusps) in the exact wave function being approximated. A slow rate of convergence of the variational energy can imply that the corresponding variational wave functions will yield divergent expectation values of physical operators not relatively bounded by the Hamiltonian. We illustrate the sorts of problems which can arise by examining Gauss‐type approximations to hydrogenic orbitals. Since all many‐electron wave functions have cusps similar to those in hydrogenic wave functions, this simple example is relevant to variational calculations performed on atoms and molecules. Finally, we offer suggestions on what types of variational wave functions are likely to yield rapid rates of convergence for the energy and reasonable rates of convergence for physical operators such as the dipole moment operator.

Journal ArticleDOI
TL;DR: In this article, the stability, parameter convergence and robustness aspects of single input-single output model reference adaptive systems were studied, and conditions on the exogenous input to the adaptive loop, the reference signal, to guarantee exponential para meter and error convergence.
Abstract: We study stability, parameter convergence and robustness aspects of single input-single output model reference adaptive systems. We begin by establishing a framework for studying parametrizable and unparametrizable uncertainty in the plant to be controlled. Using the standard assumptions on the parametrizable part of the plant dynamics we give a corrected proof (of Narendra, Lin and Valavani) of the stability of the nominal adaptive scheme. Next, we give conditions on the exogenous input to the adaptive loop, the reference signal, to guarantee exponential para meter and error convergence. Using our framework for studying unmodelled (unparametrized) dynamics; we show how the model should be chosen, and the update law modified (by a deadzone in the update law) to preserve stability of the adaptive loop in the presence of output disturbances and unmodelled dynamics. Finally, we compare adaptive and non-adaptive con trol and list directions of ongoing research.


Journal ArticleDOI
TL;DR: The algorithm of Pshenichny had remained largely unnoticed until now, and is examined here for the first time; it is found that the proof of global convergence by Han requires computing sensitivity coefficients (derivatives) of all constraint functions of the problem at every iteration, which is prohibitively expensive for large-scale applications in optimal design.
Abstract: Recursive quadratic programming methods have become popular in the field of mathematical programming owing to their excellent convergence characteristics. There are two recursive quadratic programming methods that have been published in the literature. One is by Han and the other is by Pshenichny, published in 1977 and 1970, respectively. The algorithm of Pshenichny had remained largely unnoticed until now, and is examined here for the first time. It is found that the proof of global convergence by Han requires computing sensitivity coefficients (derivatives) of all constraint functions of the problem at every iteration. This is prohibitively expensive for large-scale applications in optimal design. In contrast, Pshenichny has proved global convergence of his algorithm using only an active-set strategy. This is clearly preferable for large-scale applications. The method of Pshenichny has been coded into a FORTRAN program. Applications of this method to four example problems are presented. The method is found to be very reliable. However, the method is found to be very sensitive to local minima, i.e. it converges to a local minimum nearest to the starting design. Thus, for optimal design problems (which usually possess multiple local minima) it is suggested that Pshenichny's method be used as part of a hybrid method.

Journal ArticleDOI
TL;DR: In this paper, the convergence of an adaptive linear estimator governed by a stochastic gradient algorithm with decreasing step size in the presence of correlated observations was shown to be almost sure.
Abstract: In this work we prove the almost sure convergence of an adaptive linear estimator governed by a stochastic gradient algorithm with decreasing step size in the presence of correlated observations. Two complementary contributions are added to the famous 1977 Ljung theorem. First, we drop the condition of nondivergence of the algorithm assumed by Ljung. While that condition can be ensured by adding a barrier, the convergence of the suitably bounded algorithm itself is not established even on the basis of Ljung's theorem. Here, the barrier problem is overcome by proving that it is not necessary for the convergence. Our second contribution is to generalize the model describing the correlated observations. No state-space model is used, and no linear relationship between the observations and the signal to be estimated needs to be assumed. Instead, we use a decreasing covariance model that agrees with a very wide class of practical applications.
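The estimator under study, a stochastic gradient recursion with step size decreasing like 1/n and no barrier or projection step, can be sketched as follows; the AR(1) input in the usage note is our own illustration of correlated observations, not the paper's model.

```python
import numpy as np

def adaptive_estimator(u, y, gamma0=1.0):
    """Adaptive linear estimator driven by a stochastic gradient
    algorithm with decreasing step size gamma_n = gamma0 / n.
    No barrier keeps the iterates bounded; the paper shows that
    none is needed for almost sure convergence."""
    w = np.zeros(u.shape[1])
    for n in range(len(y)):
        e = y[n] - u[n] @ w               # instantaneous prediction error
        w += gamma0 / (n + 1) * e * u[n]  # gradient step, step size ~ 1/n
    return w
```

With observations built from a correlated (AR(1)) signal and a noiseless linear target, the iterates approach the true coefficient vector, albeit slowly, as is typical for 1/n steps.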

Journal ArticleDOI
TL;DR: New results are obtained, giving the exact convergence and divergence domains for the block-SOR iterative method applied to a consistently ordered block-Jacobi matrix that is weakly cyclic of index 3.

Book ChapterDOI
T. Zolezzi1
01 Jan 1984
TL;DR: In this article, sufficient conditions are obtained for upper and approximate lower semicontinuity of solutions and multipliers in infinite dimensional convex programming under gamma convergence of the data.
Abstract: Sufficient conditions for upper semicontinuity of approximate solutions and continuity of the values of mathematical programming problems with respect to data perturbations are obtained by using the variational convergence, thereby generalizing many known results. Upper and approximate lower semicontinuity of solutions and multipliers in infinite dimensional convex programming are obtained under gamma convergence of the data.

Journal ArticleDOI
TL;DR: In this paper, an analytical procedure based on the same topological approach as the trajectory reversing method is developed, and a comparison is made with the classical Zubov method, noting the possibility of overcoming some of the classical method's drawbacks (e.g., its nonuniform convergence).
Abstract: The paper deals with the problem of the estimation of regions of asymptotic stability for continuous, autonomous, nonlinear systems. After an outline of the main approaches available in the literature, the "trajectory reversing method" is presented as a powerful numerical technique for low order systems. Then, an analytical procedure based on the same topological approach is developed, and a comparison is made with the classical Zubov method, noting the possibility of overcoming some of the classical method's drawbacks (e.g., its nonuniform convergence). Several examples of applications of the "trajectory reversing method" in both the numerical and analytical formulations are reported.

Journal ArticleDOI
TL;DR: This paper presents a family of descent or merit functions which are shown to be compatible with local Q-superlinear convergence of Newton and quasi-Newton methods.
Abstract: In order to achieve a robust implementation of methods for nonlinear programming problems, it is necessary to devise a procedure which can be used to test whether or not a prospective step would yield a “better” approximation to the solution than the current iterate. In this paper, we present a family of descent or merit functions which are shown to be compatible with local Q-superlinear convergence of Newton and quasi-Newton methods. A simple algorithm is used to verify that good descent and convergence properties are possible using this merit function.

Proceedings ArticleDOI
01 Dec 1984
TL;DR: The work shows that iterative ARE solutions offer a viable alternative to the Schur eigenvector approach that is a generally accepted reference and demonstrates numerical robustness, accurate results, rapid (superlinear) convergence, algorithmic simplicity, and modest storage requirements.
Abstract: Roberts' matrix sign function solution to the ARE is defined so as to speed convergence and reduce storage requirements; our work extends ideas proposed by R. Byers [8]. Features of the sign function method presented here are: (a) our formulation of the Roberts-Byers algorithm recurses on the symmetric transformed Hamiltonian, which reduces storage requirements; (b) the symmetric indefinite matrix inversion required by the algorithm is carried out using LINPACK (and our excellent numerical results reflect the wisdom of this choice); (c) corrections to the computed (approximate) solution are obtained by applying the same algorithms to the translated problem (which improves upon the linear Lyapunov equation correction that has been used); and (d) simple (but somewhat ad hoc) convergence criteria are proposed to reduce computation. The algorithm described in this work has been tested on a variety of continuous-time ARE test problems, and the results have been very satisfactory. Tests on numerically ill-conditioned problems produced results of comparable accuracy with those obtained by the Schur vector RICPACK method. Our sign function iterative ARE solution demonstrates numerical robustness, accurate results, rapid (superlinear) convergence, algorithmic simplicity, and modest storage requirements. Our work shows that iterative ARE solutions offer a viable alternative to the Schur eigenvector approach that is a generally accepted reference.
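A bare-bones version of the matrix sign function approach (without the paper's symmetric storage, scaling, and correction refinements) can be written directly from the Roberts iteration Z <- (Z + Z^{-1})/2 applied to the Hamiltonian; the extraction of P from sign(H) below is the standard one for the stable invariant subspace.

```python
import numpy as np

def care_sign(A, B, Q, R, iters=60):
    """Solve the continuous ARE  A'P + PA - P B R^{-1} B' P + Q = 0
    via the matrix sign function: iterate Z <- (Z + Z^{-1})/2 on the
    Hamiltonian, then recover P from the stable invariant subspace,
    which is the null space of sign(H) + I."""
    n = A.shape[0]
    G = B @ np.linalg.solve(R, B.T)
    Z = np.block([[A, -G], [-Q, -A.T]])       # Hamiltonian matrix
    for _ in range(iters):
        Z = 0.5 * (Z + np.linalg.inv(Z))      # converges to sign(H)
    W11, W12 = Z[:n, :n], Z[:n, n:]
    W21, W22 = Z[n:, :n], Z[n:, n:]
    # (sign(H) + I) [I; P] = 0  =>  [W12; W22 + I] P = -[W11 + I; W21]
    M = np.vstack([W12, W22 + np.eye(n)])
    rhs = -np.vstack([W11 + np.eye(n), W21])
    P, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return P
```

For the double-integrator LQR problem the returned P is symmetric and makes the ARE residual vanish to machine precision.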

Journal ArticleDOI
TL;DR: The convergence of Gauss-Seidel and nonlinear successive overrelaxation methods for finding the minimum of a strictly convex functional defined on R^n is studied.
Abstract: We study the convergence of Gauss-Seidel and nonlinear successive overrelaxation methods for finding the minimum of a strictly convex functional defined on R^n.
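For the quadratic special case f(x) = x'Ax/2 - b'x, the coordinate-wise minimisation scheme reduces to classical (S)OR; the sketch below is that special case only, not the paper's general strictly convex setting.

```python
import numpy as np

def gauss_seidel_min(A, b, x0, omega=1.0, sweeps=200):
    """Successive overrelaxation for minimising f(x) = x'Ax/2 - b'x
    with A symmetric positive definite: each sweep minimises f exactly
    along every coordinate in turn and over-relaxes the move by omega
    (omega = 1 gives plain Gauss-Seidel)."""
    x = x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(len(b)):
            # exact minimiser of f in coordinate i, holding the others fixed
            xi_star = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
            x[i] = x[i] + omega * (xi_star - x[i])
    return x
```

For SPD A the sweeps converge to the unique minimiser, i.e. the solution of Ax = b.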

Journal Article
01 Jan 1984-Networks
TL;DR: In this paper, the authors discuss the solution of the fixed-demand traffic equilibrium problem by certain linearized simplicial decomposition methods derived from the family of linear approximation methods for solving a general variational inequality problem.
Abstract: This article discusses the solution of the fixed-demand traffic equilibrium problem by certain linearized simplicial decomposition methods. These methods are derived from the family of linear approximation methods for solving a general variational inequality problem. The central idea of a linearized simplicial decomposition method is that instead of solving linear variational inequality subproblems over the entire set of feasible flows as in a typical linear approximation method, one solves the same subproblems over subsets of feasible flows where each such subset is defined explicitly by certain extreme points of the (polyhedral) set of feasible flows. A global convergence result of the linearized decomposition methods will be established under suitable assumptions on the change of the set of "working" extreme points in each iteration plus some standard conditions on the linear approximating mappings used. Extensive computational results with the use of such methods are reported. Sizes of problems solved range from relatively small to reasonably large.

Journal ArticleDOI
K.F. Pratt1, J.G. Wilson1
TL;DR: A computer program for the steady-state optimisation of the operation of a gas transmission system solves the nonlinear optimisation problem iteratively, linearising the gas-flow equations to give a linear constrained problem; optimising by mixed-integer linear programming; and re-linearising about the optimum; until convergence is obtained.
Abstract: A computer program for the steady-state optimisation of the operation of a gas transmission system is described. The program solves the nonlinear optimisation problem iteratively, linearising the gas-flow equations to give a linear constrained problem; optimising by mixed-integer linear programming; and re-linearising about the optimum; until convergence is obtained. Costs may include compressor fuel, flows from sources, or any chosen control variables. Examples of applications are given. Some other published methods are briefly described.

Journal ArticleDOI
TL;DR: In this paper, the authors derived stability and convergence results for the box and trapezoidal schemes applied to boundary value problems for linear singularly perturbed first order systems of o.d.e.'s without turning points.
Abstract: Stability and convergence results are derived for the box and trapezoidal schemes applied to boundary value problems for linear singularly perturbed first order systems of o.d.e.'s without turning points.

Journal ArticleDOI
TL;DR: In this article, a related theory of convergence is developed and the optimum values of the involved parameters for each considered scheme are determined; it reveals that under the aforementioned assumptions, the Extrapolated Successive Underrelaxation method attains a rate of convergence which is clearly superior over the Successive Underrelaxation method when the Jacobi iteration matrix is non-singular.
Abstract: A variety of iterative methods considered in [3] are applied to linear algebraic systems of the form Au=b, where the matrix A is consistently ordered [12] and the iteration matrix of the Jacobi method is skew-symmetric. The related theory of convergence is developed and the optimum values of the involved parameters for each considered scheme are determined. It reveals that under the aforementioned assumptions the Extrapolated Successive Underrelaxation method attains a rate of convergence which is clearly superior over the Successive Underrelaxation method [5] when the Jacobi iteration matrix is non-singular.

Journal ArticleDOI
TL;DR: The method combines polyhedral and quadratic approximation, a new type of penalty technique and a safeguard in such a way as to give convergence to a stationary point and is shown to be superlinear under somewhat stronger assumptions that allow both nonsmooth and nonconvex cases.
Abstract: This paper introduces an algorithm for minimizing a single-variable locally Lipschitz function subject to a like function being nonpositive. The method combines polyhedral and quadratic approximation, a new type of penalty technique and a safeguard in such a way as to give convergence to a stationary point. The convergence is shown to be superlinear under somewhat stronger assumptions that allow both nonsmooth and nonconvex cases. The algorithm can be an effective subroutine for solving line search subproblems called for by multivariable optimization algorithms.

Journal ArticleDOI
TL;DR: In this article, the authors provide asymptotic convergence rates for Arcangeli's method of regularizing an ill-posed operator equation of the first kind, in terms of the error level in the data.
Abstract: We provide asymptotic convergence rates, in terms of the error level in the data, for Arcangeli's method of regularizing an ill-posed operator equation of the first kind.

Journal ArticleDOI
TL;DR: This paper presents efficient and reliable algorithms to solve the classic economic load dispatch problem, i.e., the parametric quadratic programming, the modified parametric quadratic programming and the recursive quadratic programming algorithms.

Abstract: This paper presents efficient and reliable algorithms to solve the classic economic load dispatch problem. In the conventional equal incremental method, an ambiguity exists in selecting a value of the relaxation coefficient. Since the value of the relaxation coefficient can be determined only from experience, there exist examples whose convergence is very slow or for which convergence is not achievable. In order to overcome this defect, three algorithms are proposed, i.e., the parametric quadratic programming, the modified parametric quadratic programming and the recursive quadratic programming algorithms. A number of numerical tests for a real system have been carried out to demonstrate the effectiveness of the proposed algorithms. The numerical results show that the proposed algorithms are practical for real-time applications.
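For context, the equal-incremental-cost condition that the conventional method iterates on can also be solved without any relaxation coefficient by bisecting on the incremental cost lambda. The sketch below (our own, assuming quadratic costs C_i = a_i + b_i p + c_i p^2 and generator limits) illustrates that baseline, not the paper's quadratic programming algorithms.

```python
import numpy as np

def dispatch_lambda(b, c, pmin, pmax, demand, tol=1e-9):
    """Equal-incremental-cost dispatch with quadratic costs: every unit
    runs where its marginal cost b_i + 2 c_i p equals lambda (clipped to
    its limits), and lambda is found by bisection so that total
    generation meets demand. Bisection needs no tuning parameter."""
    def output(lam):
        return np.clip((lam - b) / (2 * c), pmin, pmax)
    lo, hi = (b + 2 * c * pmin).min(), (b + 2 * c * pmax).max()
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if output(lam).sum() < demand:
            lo = lam
        else:
            hi = lam
    return output(0.5 * (lo + hi))
```

With two units of marginal costs 2 + p and 3 + p and a demand of 5, equalising marginal costs gives the split (3, 2).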

Journal ArticleDOI
01 Jan 1984
TL;DR: The properties of convergence to the optimal point for HOC problems that have linear equality constraints and linear inequality constraints, respectively, are explored in this paper.
Abstract: Hierarchical overlapping coordination (HOC) has been developed in order to coordinate decision-making in a large-scale system in terms of its various hierarchical structures (i.e. decompositions), which are derived from the various aspects of and databases on the system. The main drawback of HOC has been the convergence problem. The properties of convergence to the optimal point for HOC problems that have linear equality constraints and linear inequality constraints, respectively, are explored in this paper. Sufficient conditions for achieving convergence are presented and several examples are given.