
Showing papers on "Convergence (routing)" published in 1986


Journal ArticleDOI
TL;DR: In this paper, it is shown that the annealing algorithm converges with probability arbitrarily close to 1. It is also shown that there are cases where convergence takes exponentially long, that is, where it is no better than a deterministic method.
Abstract: The annealing algorithm is a stochastic optimization method which has attracted attention because of its success with certain difficult problems, including NP-hard combinatorial problems such as the travelling salesman, Steiner trees and others. There is an appealing physical analogy for its operation, but a more formal model seems desirable. In this paper we present such a model and prove that the algorithm converges with probability arbitrarily close to 1. We also show that there are cases where convergence takes exponentially long—that is, it is no better than a deterministic method. We study how the convergence rate is affected by the form of the problem. Finally we describe a version of the algorithm that terminates in polynomial time and allows a good deal of ‘practical’ confidence in the solution.
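The scheme the abstract analyzes can be illustrated with a minimal simulated-annealing sketch; the geometric cooling schedule, acceptance rule, and all names below are our illustrative choices, not the paper's formal model:

```python
import math
import random

def anneal(f, x0, neighbor, t0=1.0, cooling=0.995, steps=2000, seed=0):
    """Minimal simulated-annealing sketch: accept uphill moves with
    probability exp(-delta/T), and let the temperature T decay geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        # always accept improvements; accept worsenings with Boltzmann probability
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

On a toy problem such as minimizing (x - 3)^2 with uniform moves, the best-so-far value settles near the minimum well before the temperature bottoms out; the paper's point is that on hard instances this settling can take exponentially many steps.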

609 citations


Journal ArticleDOI
TL;DR: In this paper, it has been shown that a very modest degree of convergence of an extreme Ritz value already suffices for an increased rate of convergence to occur, which is known as superlinear convergence.
Abstract: It has been observed that the rate of convergence of Conjugate Gradients increases when one or more of the extreme Ritz values have sufficiently converged to the corresponding eigenvalues (the “superlinear convergence” of CG). In this paper this will be proved and made quantitative. It will be shown that a very modest degree of convergence of an extreme Ritz value already suffices for an increased rate of convergence to occur.
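The effect can be observed numerically with a plain conjugate-gradient implementation; the pure-Python sketch below is our illustration, not code from the paper, and it returns the residual history so the changing convergence rate is visible:

```python
def cg(A, b, tol=1e-10, maxit=200):
    """Plain conjugate gradients for a symmetric positive-definite matrix A
    (a list of rows); returns the iterate and the residual-norm history."""
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]              # residual b - A x for the zero initial guess
    p = r[:]
    rs = dot(r, r)
    hist = [rs ** 0.5]
    for _ in range(maxit):
        Ap = mv(p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        hist.append(rs_new ** 0.5)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, hist
```

On larger systems with clustered eigenvalues, the history typically shows the rate improving once the extreme Ritz values have converged, which is the phenomenon the paper quantifies.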

351 citations


Journal ArticleDOI
TL;DR: Based on the concept of a self-orthogonalizing algorithm in the transform domain, it is shown that the convergence speed of the TRLMS ADF can be improved significantly for the same excess MSE as that of the LMS ADF.
Abstract: In this paper we analyze the performance, particularly the convergence behavior, of the transform-domain least mean-square (LMS) adaptive digital filter (ADF) using the discrete Fourier transform and discrete orthogonal transforms such as discrete cosine and sine transforms. We first obtain the optimum Wiener solution and the minimum mean-squared error (MSE) in the transform domain. It is shown that the two minimum MSE's in the time and transform domains are identical independently of the transforms used. We then study the convergence conditions and the steady-state excess MSE's of the transform-domain LMS (TRLMS) algorithms both for the case of a constant and that of a time-varying convergence factor. When a constant convergence factor is used, the convergence behaviors of the LMS and TRLMS ADF's appear to be almost identical, provided that each has an appropriate value of the convergence factor depending on the transform used. Also, based on the concept of a self-orthogonalizing algorithm in the transform domain, it is shown that the convergence speed of the TRLMS ADF can be improved significantly for the same excess MSE as that of the LMS ADF. In addition, we compare the computational complexities of the LMS and TRLMS ADF's. Finally, we investigate by computer simulation the effects of system parameter values and different transforms on the convergence behavior of the TRLMS ADF.
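The self-orthogonalizing idea (normalizing each transform-domain coefficient's step size by a running estimate of its power) can be sketched as follows; the DCT-II transform, the step-size and smoothing constants, and the noise-free identification setup are our illustrative assumptions, not the paper's simulations:

```python
import math
import random

def dct_matrix(n):
    """Orthonormal DCT-II matrix, one example of the transforms considered."""
    t = []
    for i in range(n):
        s = math.sqrt(1.0 / n) if i == 0 else math.sqrt(2.0 / n)
        t.append([s * math.cos(math.pi * (2 * j + 1) * i / (2 * n)) for j in range(n)])
    return t

def trlms_identify(h, n_iter=2000, mu=0.1, beta=0.95, eps=1e-6, seed=1):
    """Identify an FIR filter h with a transform-domain LMS filter whose
    per-coefficient step size is normalized by a running power estimate
    (the self-orthogonalizing idea). Returns the error magnitudes."""
    n = len(h)
    t = dct_matrix(n)
    rng = random.Random(seed)
    x = [0.0] * n          # time-domain tap delay line
    w = [0.0] * n          # transform-domain weights
    power = [1.0] * n      # running power per transform bin
    errs = []
    for _ in range(n_iter):
        x = [rng.gauss(0.0, 1.0)] + x[:-1]
        u = [sum(t[i][j] * x[j] for j in range(n)) for i in range(n)]
        d = sum(hi * xi for hi, xi in zip(h, x))      # desired (noise-free) output
        e = d - sum(wi * ui for wi, ui in zip(w, u))
        power = [beta * p + (1 - beta) * ui * ui for p, ui in zip(power, u)]
        w = [wi + mu * e * ui / (p + eps) for wi, ui, p in zip(w, u, power)]
        errs.append(abs(e))
    return errs
```

With correlated inputs the per-bin normalization is what speeds convergence relative to a single fixed factor; with white input, as here, it reduces to ordinary normalized LMS.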

199 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a practical computer implementation of a technique which dramatically speeds up the convergence of the infinite series Green's function associated with the Helmholtz operator in the case of periodic structures.

192 citations


Journal ArticleDOI
Masao Fukushima1
TL;DR: Each iteration of the proposed algorithm consists of projection onto a halfspace containing the given closed convex set rather than the latter set itself, so its global convergence to the solution can be established under suitable conditions.
Abstract: This paper presents a modification of the projection methods for solving variational inequality problems. Each iteration of the proposed algorithm consists of projection onto a halfspace containing the given closed convex set rather than the latter set itself. The algorithm can thus be implemented very easily and its global convergence to the solution can be established under suitable conditions.
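Part of the practical appeal is that projection onto a halfspace has a closed form, unlike projection onto a general closed convex set; a sketch in our notation:

```python
def project_halfspace(x, a, b):
    """Closed-form Euclidean projection of x onto {y : <a, y> <= b}.
    If x already satisfies the constraint it is returned unchanged;
    otherwise it is moved along a by the violation over ||a||^2."""
    ax = sum(ai * xi for ai, xi in zip(a, x))
    if ax <= b:
        return list(x)
    t = (ax - b) / sum(ai * ai for ai in a)
    return [xi - t * ai for xi, ai in zip(x, a)]
```

In the algorithm, the halfspace is chosen at each iteration to contain the given closed convex set, so this one-line projection replaces a potentially expensive exact projection.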

188 citations


Journal ArticleDOI
TL;DR: This paper shows convergence to an optimal routing without assuming synchronization of computation at all nodes and measurement of link lengths at all links, while taking into account the possibility of link flow transients caused by routing updates.
Abstract: In this paper we study the performance of a class of distributed optimal routing algorithms of the gradient projection type under weaker and more realistic assumptions than those considered thus far. In particular, we show convergence to an optimal routing without assuming synchronization of computation at all nodes and measurement of link lengths at all links, while taking into account the possibility of link flow transients caused by routing updates. This demonstrates the robustness of these algorithms in a realistic distributed operating environment.
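To fix ideas, one synchronous gradient-projection routing update can be sketched for a single origin-destination pair splitting traffic over parallel links; the simplex projection and the constant toy delays below are our illustration, while the paper's contribution is that convergence survives asynchronous computation and flow transients:

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex,
    via the standard sort-and-threshold rule."""
    u = sorted(v, reverse=True)
    css, rho = 0.0, 1
    for j, uj in enumerate(u, start=1):
        css += uj
        if uj - (css - 1.0) / j > 0:
            rho = j
    tau = (sum(u[:rho]) - 1.0) / rho
    return [max(vi - tau, 0.0) for vi in v]

def route_step(x, delays, step_size=0.5):
    """One gradient-projection update of the routing fractions x:
    step against the measured link delays, project back onto the simplex."""
    return project_simplex([xi - step_size * di for xi, di in zip(x, delays)])
```

With constant delays, repeated updates drive all traffic onto the cheapest link; in the paper's setting the link lengths are flow-dependent and measured asynchronously, which is what makes the convergence proof delicate.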

137 citations


Journal ArticleDOI
TL;DR: In this paper, a family of stochastic approximation variants of the Steiglitz-McBride identification scheme was developed for adaptive IIR filtering, and the convergence was shown by computer simulation.
Abstract: A family of stochastic approximation variants of the Steiglitz-McBride identification scheme [1]-[3] is developed for adaptive IIR filtering. Parameter convergence is shown by computer simulation. An interesting phenomenon, global convergence regardless of local minima, is observed.

129 citations


Patent
03 Jun 1986
TL;DR: In this article, a recognize-only embodiment of a recognition matrix is disclosed, comprised of a forward matrix and a reverse matrix, each having a plurality of contacts which cause convergence responses on target lines when an input signal is received.
Abstract: There is disclosed herein a recognize-only embodiment of a recognition matrix comprised of a forward matrix and a reverse matrix each having a plurality of contacts which cause convergence responses on target lines when an input signal is received by said contacts. Learning is performed by changing the characteristics of the contacts to alter the convergence responses they cause in accordance with a learning rule involving the comparison of total convergence response on each target line to a convergence threshold. The contacts are not programmed ad hoc in the field as events are individually learned. Instead each contact is programmed permanently by the user for a class of events which is fixed and which can never change. The user typically performs the learning on a computer simulator for all the events which a particular system is to be used to recognize. The patterns of convergence responses and contact structure characteristics which cause these convergence responses for the class of events as a whole are then examined and optimized for maximum recognition power and minimum confusion. This pattern of convergence responses or contact characteristics is then permanently programmed in the contacts of the forward and reverse matrices. A no-confusion embodiment is also disclosed whereby an array of recognition machines are each programmed to recognize only one event, and all are coupled in parallel to an input bus carrying the signals characterizing the event to be recognized. The outputs are OR'ed together.

127 citations


Journal ArticleDOI
TL;DR: This paper proposes and investigates two algorithms for adjusting individual convergence factors, individual adaptation (IA) and homogeneous adaptation (HA), and shows that the individual adaptation approach yields much better filters than the conventional fixed group adaptation approach.
Abstract: Conventional gradient-type adaptive filters use the fixed convergence factor \mu which is normally chosen to be the same for all the filter parameters. In this paper, we propose to use individual convergence factors which are optimally tailored to adapt individual filter parameters. Furthermore, we propose to adjust the individual convergence factors in real time so that their values are kept optimum for a new set of input variables. We call this approach "individual" adaptation as opposed to the conventional fixed "group" adaptation using the same fixed \mu for all the filter parameters. Computer simulation results show that the individual adaptation approach yields much better filters than the conventional fixed group adaptation approach. Optimization of individual time-varying convergence factors leads to a constraint which may be satisfied by several different algorithms. We propose and investigate here two algorithms satisfying the above constraint: individual adaptation (IA) and homogeneous adaptation (HA). The HA algorithm turns out to have the same general form as some well-known gradient algorithms that normalize the step size, which were previously obtained either intuitively or using involved derivations. Both IA and HA are shown to provide much better performance than the conventional "group" adaptation. However, for several simulations, IA provides better performance than HA, at the expense of increased computation.

118 citations


Journal ArticleDOI
TL;DR: A recursive quadratic programming algorithm for solving equality constrained optimization problems is proposed and studied, and some numerical results are given.
Abstract: In this paper, a recursive quadratic programming algorithm for solving equality constrained optimization problems is proposed and studied. The line search functions used are approximations to Fletcher's differentiable exact penalty function. Global convergence and local superlinear convergence results are proved, and some numerical results are given.

107 citations


Journal ArticleDOI
TL;DR: This paper clarifies the relative roles of the gradient and lag errors by proving their decoupled character and quantitative evaluations of upper and lower bounds allow an approximate optimization of the gain.
Abstract: Adaptive identification in a time-varying context is studied when controlled by the LMS algorithm with constant gain μ, under the assumption of correlated successive input vectors. It is well known by experience that the tracking mean square error (MSE) \epsilon(\mu) results from the tradeoff between the gradient part, which is μ-increasing, and the lag contribution, which is μ-decreasing. In this note we clarify the relative roles of the gradient and lag errors by proving their decoupled character. This property relies upon independence between the additive noise at the output of the plant to be identified and the information vector at the plant input. Convergence of the MSE is established rather than assumed. Quantitative evaluations of upper and lower bounds allow an approximate optimization of the gain. In two important cases the optimum is exact. One of these cases is "slow variations," which is defined in a quantitative manner through the ratio of the "variation" noise to the output additive noise.

Journal ArticleDOI
TL;DR: In this article, a derivative-free line search in the range of g is used to establish superlinear convergence from within any compact level set of γ on which g has a differentiable inverse function g−1.
Abstract: Iterative methods for solving a square system of nonlinear equations g(x) = 0 often require that the sum of squares residual γ(x) ≡ ½∥g(x)∥² be reduced at each step. Since the gradient of γ depends on the Jacobian ∇g, this stabilization strategy is not easily implemented if only approximations Bk to ∇g are available. Therefore most quasi-Newton algorithms either include special updating steps or reset Bk to a divided difference estimate of ∇g whenever no satisfactory progress is made. Here the need for such back-up devices is avoided by a derivative-free line search in the range of g. Assuming that the Bk are generated from an arbitrary B0 by fixed scale updates, we establish superlinear convergence from within any compact level set of γ on which g has a differentiable inverse function g−1.

Journal ArticleDOI
TL;DR: In this paper, the authors present a feasible directions algorithm based on Lagrangian concepts for the solution of the nonlinear programming problem with equality and inequality constraints, and prove the global convergence of the algorithm and apply it to some test problems.
Abstract: We present a feasible directions algorithm, based on Lagrangian concepts, for the solution of the nonlinear programming problem with equality and inequality constraints. At each iteration a descent direction is defined; by modifying it, we obtain a feasible descent direction. The line search procedure assures the global convergence of the method and the feasibility of all the iterates. We prove the global convergence of the algorithm and apply it to the solution of some test problems. Although the present version of the algorithm does not include any second-order information, like quasi-Newton methods, these numerical results exhibit a behavior comparable to that of the best methods known at present for nonlinear programming.

Journal ArticleDOI
TL;DR: A new continuously differentiable exact penalty function is introduced for the solution of nonlinear programming problems with compact feasible set that is defined on a suitable bounded open set containing the feasible region and that it goes to infinity on the boundary of this set.
Abstract: In this paper a new continuously differentiable exact penalty function is introduced for the solution of nonlinear programming problems with compact feasible set. A distinguishing feature of the penalty function is that it is defined on a suitable bounded open set containing the feasible region and that it goes to infinity on the boundary of this set. This allows the construction of an implementable unconstrained minimization algorithm, whose global convergence towards Kuhn-Tucker points of the constrained problem can be established.


Journal ArticleDOI
TL;DR: A local convergence property is proved which shows that whenever an FCM algorithm is started sufficiently near a minimizer of the corresponding objective function, then the iteration sequence must converge to that particular minimizer.

Journal ArticleDOI
TL;DR: In this paper, an accelerated version of Cimmino's algorithm for solving the convex feasibility problem in finite dimension is presented, which is similar to that given by Censor and Elfving for linear inequalities.
Abstract: We present an accelerated version of Cimmino's algorithm for solving the convex feasibility problem in finite dimension. The algorithm is similar to that given by Censor and Elfving for linear inequalities. We show that the nonlinear version converges locally to a weighted least squares solution in the general case and globally to a feasible solution in the consistent case. Applications to the linear problem are suggested.
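For the linear-inequality case the underlying Cimmino iteration is easy to state: project the current point onto every halfspace simultaneously and average. The equal-weight sketch below is our illustration and omits the paper's acceleration:

```python
def cimmino(halfspaces, x0, iters=200):
    """Cimmino-type iteration for a system of linear inequalities.
    Each constraint is a pair (a, b) encoding {y : <a, y> <= b}; every
    step averages the projections of the current point onto all of them."""
    x = list(x0)
    m = len(halfspaces)
    for _ in range(iters):
        acc = [0.0] * len(x)
        for a, b in halfspaces:
            ax = sum(ai * xi for ai, xi in zip(a, x))
            t = max(0.0, (ax - b) / sum(ai * ai for ai in a))
            for i, ai in enumerate(a):
                acc[i] += x[i] - t * ai     # projection onto this halfspace
        x = [s / m for s in acc]
    return x
```

In the consistent case the iterates approach a feasible point, matching the global convergence result quoted above; the inconsistent case is where the weighted least-squares characterization matters.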

Journal ArticleDOI
TL;DR: In this article, it is shown that the initial-boundary value problem for Burgers' equation converges in time to a unique steady state, but the speed of convergence depends on the boundary conditions and can be exponentially slow.

Journal ArticleDOI
TL;DR: The applicability of Broyden's second method for accelerating the convergence of self-consistent electronic-structure calculations based on the linearized augmented-plane-wave method is discussed in terms of a W(001) surface calculation and it is concluded that it should increase the size of the systems for which such calculations are feasible.
Abstract: The applicability of Broyden's second method for accelerating the convergence of self-consistent electronic-structure calculations based on the linearized augmented-plane-wave method is discussed in terms of a W(001) surface calculation. It is found that its use results in a significant improvement in the convergence of the calculation, and based on this it is concluded that its use should increase the size of the systems for which such calculations are feasible.

Journal ArticleDOI
01 Dec 1986-Metrika
TL;DR: In this paper, a vertex-exchange method for finding D-optimal experimental designs is presented; convergence of the procedure is established, and the convergence behavior is investigated through an example, including comparisons to other known iteration procedures.
Abstract: A vertex-exchange method for finding D-optimal experimental designs is presented. Formulae for the optimal step-length and for the inversion of the information matrix are provided. Convergence of the procedure is established, and the convergence behavior is investigated through an example, including comparisons to other known iteration procedures.

Journal ArticleDOI
Peter Kall1
TL;DR: The aim of this review is to show in an elementary way how closely the arguments in the epi-convergence approach are related to those of the classical theory of convergence of functions.
Abstract: During the last two decades the concept of epi-convergence was introduced and then was used in various investigations in optimization and related areas. The aim of this review is to show in an elementary way how closely the arguments in the epi-convergence approach are related to those of the classical theory of convergence of functions.

Journal ArticleDOI
TL;DR: In this paper, the theoretical prediction of the static equilibrium properties of solids by means of total-energy minimisation is investigated, and practical aspects and the pitfalls connected with finite plane-wave basis sets are discussed.
Abstract: Methods for the theoretical prediction of the static equilibrium properties of solids by means of total-energy minimisation are investigated. The authors discuss practical aspects and the pitfalls connected with finite plane-wave basis sets, and show how a number of ambiguities can be overcome with the aid of the stress theorem. The procedures developed are very efficient for achieving full convergence of calculated properties. Precautions necessary when performing calculations that are not fully converged, which are unavoidable for complex systems, are discussed.

Journal ArticleDOI
TL;DR: In this article, a modification of an algorithm recently suggested by the same authors in this journal is presented; the speed of convergence is improved for the same complexity of computation.
Abstract: We present a modification of an algorithm recently suggested by the same authors in this journal (Ref. 1). The speed of convergence is improved for the same complexity of computation.

Journal ArticleDOI
TL;DR: In this article, a convergence analysis for the Gauss-Newton-Method is presented, which reduces to the wellknown Newton-Kantorovich-Theorem for the Newton-Method in a natural way.
Abstract: We present a (semilocal) "Kantorovich-type" convergence analysis for the Gauss-Newton-Method which reduces to the wellknown Newton-Kantorovich-Theorem for the Newton-Method in a natural way. Additionnally a classification of the nonlinear regression problem into "adequate" and "not-adequate" models is obtained.

22 Sep 1986
TL;DR: In this article, a general computational approach to limit solutions is proposed, which is robust such that from any initial trial solution, the first iteration falls into a convex hull that contains the exact solution(s) of the problem.
Abstract: A computational approach to limit solutions is considered most challenging for two major reasons. A limit solution is likely to be non-smooth such that certain non-differentiable functions are perfectly admissible and make physical and mathematical sense. Moreover, the possibility of non-unique solutions makes it difficult to analyze the convergence of an iterative algorithm or even to define a criterion of convergence. In this paper, we use two mathematical tools to resolve these difficulties. A duality theorem defines convergence from above and from below the exact solution. A combined smoothing and successive approximation applied to the upper bound formulation perturbs the original problem into a smooth one by a small parameter e. As e → 0, the solution of the original problem is recovered. This general computational algorithm is robust such that from any initial trial solution, the first iteration falls into a convex hull that contains the exact solution(s) of the problem. Unlike an incremental method that invariably renders the limit problem ill-conditioned, the algorithm is numerically stable. Limit analysis itself is a highly efficient concept which bypasses the tedium of the intermediate elastic-plastic deformation and seeks the most important information directly. With the said algorithm, we have produced many limit solutions of plane stress problems. Certain non-smooth characters of the limit solutions are shown in the examples presented. Two well-known as well as one parametric family of yield functions are used to allow comparison with some classical solutions.

Journal ArticleDOI
TL;DR: The problem of the estimation of functions and their derivatives from noisy observations is discussed; a general algorithm is proposed and its asymptotic properties are investigated, with three special cases derived from orthogonal series, the Parzen kernels, and the k_n nearest neighbor rules.
Abstract: The problem of the estimation of functions and their derivatives from noisy observations is discussed. The study is motivated by the interest in nonparametric identification of linear circuits. A general algorithm is proposed and its asymptotic properties are investigated. Three special cases of this algorithm, derived from orthogonal series, the Parzen kernels, and the k_n nearest neighbor rules, are presented. In each case, mean square error convergence and strong convergence are established. The best speed of convergence is found under some assumptions.
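As an illustration of the Parzen-kernel special case, a Nadaraya-Watson-style function estimate from (possibly noisy) samples; the Gaussian kernel and the bandwidth h are our illustrative choices, not the paper's general algorithm:

```python
import math

def kernel_estimate(xs, ys, x, h=0.3):
    """Parzen-kernel estimate of a function value at x from samples
    (xs, ys): a kernel-weighted average of the observed responses."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

The convergence results quoted above concern what happens as the sample size grows and the bandwidth shrinks at a suitable rate; with a fixed h the estimate carries a smoothing bias.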

Journal ArticleDOI
TL;DR: In this paper, an algorithm was presented to find an approximant to the maximal state constraint set for a linear discrete-time dynamical system with polyhedral state and input hounds.
Abstract: In [1] an algorithm was presented to find an approximant to the maximal state constraint set for a linear discrete-time dynamical system with polyhedral state and input bounds. Here it is shown that the algorithm will yield an approximant arbitrarily close to the maximal state constraint set, and the number of iterations is given as a function of the prescribed precision of the approximant.

Journal ArticleDOI
TL;DR: In this article, multi-grid algorithms for the numerical solution of Hamilton-Jacobi-Bellman equations were developed for numerical solutions of Hamilton and Jacobi Bellman equations using a combination of standard multigrid techniques and the iterative methods used by Lions and mercier in [11].
Abstract: In this paper we develop multi-grid algorithms for the numerical solution of Hamilton-Jacobi-Bellman equations. The proposed schemes result from a combination of standard multi-grid techniques and the iterative methods used by Lions and Mercier in [11]. A convergence result is given, and the efficiency of the algorithms is illustrated by some numerical examples.

Proceedings ArticleDOI
01 Jan 1986
TL;DR: In this paper, a technique for upwind differencing of the three-dimensional species continuity equations is presented, which permits computation of steady flows in chemical equilibrium and nonequilibrium, respectively.
Abstract: A technique for upwind differencing of the three-dimensional species continuity equations is presented which permits computation of steady flows in chemical equilibrium and nonequilibrium. The capabilities and shortcomings of the present approach for equilibrium and nonequilibrium flows are discussed. Modifications now being investigated to improve computational time are outlined.

Journal ArticleDOI
TL;DR: Global convergence is proved for a partitioned BFGS algorithm applied to a partially separable problem with a convex decomposition; this covers a known practical optimization method for large-dimensional unconstrained problems.
Abstract: Global convergence is proved for a partitioned BFGS algorithm, when applied on a partially separable problem with a convex decomposition. This case covers a known practical optimization method for large dimensional unconstrained problems. Inexact solution of the linear system defining the search direction and variants of the steplength rule are also shown to be acceptable without affecting the global convergence properties.