
Showing papers on "Convergence (routing) published in 1987"


Book
01 Jan 1987
TL;DR: Finite representations; finite evaluation; finite convergence; computable sufficient conditions for existence and convergence; safe starting regions for iterative methods.
Abstract: Finite representations; finite evaluation; finite convergence; computable sufficient conditions for existence and convergence; safe starting regions for iterative methods; applications to mathematical programming; applications to operator equations; an application in finance; internal rates of return.

2,983 citations


Journal ArticleDOI
TL;DR: In this paper, a new stochastic process, a collection of $U$-statistics indexed by a family of symmetric kernels, is introduced and conditions for the uniform almost-sure convergence of a sequence of such processes are obtained.
Abstract: This paper introduces a new stochastic process, a collection of $U$-statistics indexed by a family of symmetric kernels. Conditions are found for the uniform almost-sure convergence of a sequence of such processes. Rates of convergence are obtained. An application to cross-validation in density estimation is given. The proofs adapt methods from the theory of empirical processes.

440 citations


Journal ArticleDOI
TL;DR: The multigrid method is applied to two different discretizations of the Euler equations, and results for flows about multi-element airfoils are presented, with convergence rates comparable to those obtained with structured multigrid algorithms.
Abstract: A multigrid algorithm has been developed to accelerate the convergence of the Euler equations to a steady state on unstructured triangular meshes. The method operates on a sequence of coarse and fine unstructured meshes and assumes no relation exists between the various meshes. A tree-search algorithm is used to efficiently identify regions of overlap between coarse and fine grid cells. The multigrid method is applied to two different discretizations of the Euler equations, and results for flows about multi-element airfoils are presented. For both discretization schemes, convergence is accelerated by an order of magnitude, resulting in convergence rates comparable to those obtained with structured multigrid algorithms.

281 citations


Journal ArticleDOI
TL;DR: Convergence analysis of stochastic gradient adaptive filters using the sign algorithm is presented, and the theoretical and empirical curves show a very good match.
Abstract: Convergence analysis of stochastic gradient adaptive filters using the sign algorithm is presented in this paper. The methods of analysis currently available in the literature assume that the input signals to the filter are white. This restriction is removed for Gaussian signals in our analysis. Expressions for the second moment of the coefficient vector and the steady-state error power are also derived. Simulation results are presented, and the theoretical and empirical curves show a very good match.

279 citations
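The sign-algorithm update itself is easy to sketch. Below is a minimal, hypothetical system-identification setup (the taps `w_true`, step size `mu`, and white Gaussian input are illustrative choices, not the paper's; the paper's contribution is precisely that its analysis does not require white input):

```python
import numpy as np

# Sign-algorithm adaptive filter: the coefficient update uses only the sign of
# the error, so each sample costs one comparison and a scaled add per tap.
rng = np.random.default_rng(0)
n_taps, n_samples, mu = 4, 20000, 2e-3
w_true = np.array([1.0, -0.5, 0.25, 0.1])   # unknown system (hypothetical)
x = rng.standard_normal(n_samples)           # white Gaussian input, demo only
w = np.zeros(n_taps)
for k in range(n_taps, n_samples):
    u = x[k - n_taps:k][::-1]                # regressor, most recent sample first
    d = w_true @ u                           # desired (noise-free) response
    e = d - w @ u                            # a-priori error
    w += mu * np.sign(e) * u                 # sign-algorithm update
misadjustment = np.linalg.norm(w - w_true)
```

Because only sign(e) enters the update, the iterate never settles exactly; it jitters around `w_true` at a scale set by `mu`, which is the steady-state error power the paper derives expressions for.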


Journal ArticleDOI
TL;DR: This paper sets up a theoretical framework to analyze the convergence of iterative methods called waveform relaxation methods, restricting the discussion to linear systems and not considering the effects of time discretization.
Abstract: In VLSI simulation there has recently been interest in an iterative technique called the waveform relaxation method. In this paper we set up a theoretical framework to analyze the convergence of such methods. We restrict the discussion to linear systems and do not consider the effects of time discretization, but assume that the initial value problems are solved exactly.

224 citations
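The waveform relaxation idea can be sketched on a small linear IVP x' = Ax: each sweep re-integrates every component against the previous iterate's waveform of the remaining components. The matrix, interval, and forward-Euler discretization below are illustrative assumptions (the paper assumes the subsystem IVPs are solved exactly), chosen to show the superlinear convergence of the sweeps on a finite interval:

```python
import numpy as np

# Jacobi-splitting waveform relaxation for x' = A x, x(0) = x0, on [0, T].
A = np.array([[-2.0, 1.0], [1.0, -2.0]])
x0 = np.array([1.0, 0.0])
T, n_steps = 1.0, 2000
dt = T / n_steps

def euler_reference():
    # Reference: forward Euler on the full coupled system.
    x = np.empty((n_steps + 1, 2)); x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + dt * A @ x[k]
    return x

def wr_sweep(prev):
    # One sweep: diagonal part advanced normally, off-diagonal coupling
    # taken from the previous iterate's waveform `prev`.
    D = np.diag(np.diag(A)); N = A - D
    x = np.empty_like(prev); x[0] = x0
    for k in range(n_steps):
        x[k + 1] = x[k] + dt * (D @ x[k] + N @ prev[k])
    return x

ref = euler_reference()
wave = np.tile(x0, (n_steps + 1, 1))   # initial guess: constant waveform
for _ in range(12):                    # a handful of sweeps suffice on [0, 1]
    wave = wr_sweep(wave)
wr_error = np.abs(wave - ref).max()
```

At a fixed point of the sweep, the decoupled recursion reproduces the coupled Euler recursion exactly, which is why `wave` converges to `ref`.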


Journal ArticleDOI
TL;DR: In this article, a closed form solution for the wall displacements and the ground pressure acting on the lining is given for the case of a circular tunnel driven in a homogeneous and isotropic medium with time-dependent behaviour.

177 citations


Journal ArticleDOI
TL;DR: In this article, a local convergence analysis is done for this vector (grouped variable) version of coordinate descent, and assuming certain regularity conditions, it is shown that such an approach is locally convergent to a minimizer and that the rate of convergence in each vector variable is linear.
Abstract: Let F(x, y) be a function of the vector variables x ∈ R^n and y ∈ R^m. One possible scheme for minimizing F(x, y) is to successively alternate minimizations in one vector variable while holding the other fixed. Local convergence analysis is done for this vector (grouped variable) version of coordinate descent, and assuming certain regularity conditions, it is shown that such an approach is locally convergent to a minimizer and that the rate of convergence in each vector variable is linear. Examples where the algorithm is useful in clustering and mixture density decomposition are given, and global convergence properties are briefly discussed.

140 citations
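The alternating scheme is simple to illustrate on a toy quadratic where each block minimization has a closed form (the particular F below and its minimizer (1.25, 1.75) are illustrative, not from the paper; linear convergence of each block update is the behaviour the paper proves locally):

```python
# Grouped-variable coordinate descent (alternating minimization) on
#   F(x, y) = (x - 1)^2 + (y - 2)^2 + 0.5 * (x - y)^2,
# with each block update solved exactly in closed form.
def argmin_x(y):        # solve dF/dx = 2(x - 1) + (x - y) = 0
    return (2.0 + y) / 3.0

def argmin_y(x):        # solve dF/dy = 2(y - 2) + (y - x) = 0
    return (4.0 + x) / 3.0

x, y = 0.0, 0.0
for _ in range(50):     # each full cycle contracts the error by a factor 1/9
    x = argmin_x(y)
    y = argmin_y(x)
# the unique minimizer of this F is (x, y) = (1.25, 1.75)
```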


Journal ArticleDOI
TL;DR: In this article, it was shown that if the initial iteration matrix is nonnegative, then the Gauss-Seidel iteration matrix can improve the convergence rate of the original linear system.

105 citations


Journal ArticleDOI
TL;DR: In this article, a class of models with fixed and random factors, including an additive relationship matrix, was considered, and two iterative procedures, Gauss-Seidel and Jacobi, were investigated.

103 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an analysis of the asymptotic behavior of the solutions to various iterative linear least-squares methods that synthesize transfer functions from frequency-response data.
Abstract: This paper presents an analysis of the asymptotic behaviour of the solutions to various iterative linear least-squares methods that synthesize transfer functions from frequency-response data. The methods of Sanathanan and Koerner and Lawrence and Rogers are shown to possess asymptotic solutions that do not coincide with the solution to a fundamental non-linear least-squares criterion. Several methods with more appropriate asymptotic behaviour are then suggested. The analytical tools presented should allow analysis of conditions for the convergence of iterative linear least-squares methods.

75 citations


Journal ArticleDOI
G.A. Maria1, J. A. Findlay1
TL;DR: In this paper, a Newton optimal power flow program was developed for the Ontario Hydro Energy Management System, which combines the fast convergence of the Newton technique with the speed and reliability of Linear Programming.
Abstract: A Newton optimal power flow program was developed for the Ontario Hydro Energy Management System. Each iteration minimizes a quadratic approximation of the Lagrangian. All the equations are solved simultaneously for all the unknowns. A new technique based on linear programming is used to identify the binding inequalities. All binding constraints are enforced using Lagrange multipliers. The algorithm combines the fast convergence of the Newton technique with the speed and reliability of linear programming. Most cases converged in three iterations or less.

Journal ArticleDOI
TL;DR: In this article, the authors established the uniform convergence and obtained convergence rates for several algorithms for solving a class of Hadamard singular integral equations, and improved on the mean square convergence shown in [2].

Journal ArticleDOI
TL;DR: Six aggregation/disaggregation procedures constructed to accelerate the convergence of successive approximation methods suitable for computing the stationary distribution of a finite Markov chain are implemented and analysed.
Abstract: We implement and analyse aggregation/disaggregation procedures constructed to accelerate the convergence of successive approximation methods suitable for computing the stationary distribution of a finite Markov chain. We define six of these methods and analyse them in detail. In particular, we show that some existing procedures lie in the aggregation/disaggregation framework we set, and hence can be considered as special cases. Also, for all described methods, we identify cases where they are promising. Numerical examples for the applications of some of the methods for nearly completely decomposable stochastic matrices are given as well.
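The baseline iteration these procedures accelerate is plain successive approximation, pi_{k+1} = pi_k P. A minimal sketch on a toy 3-state chain (the transition matrix below is illustrative; the paper's aggregation/disaggregation machinery is not attempted here):

```python
import numpy as np

# Successive approximation for the stationary distribution of a finite
# Markov chain: repeatedly apply the row-stochastic transition matrix P.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])     # toy transition matrix
pi = np.full(3, 1.0 / 3.0)          # any probability vector works as a start
for _ in range(500):
    pi = pi @ P                     # one step of successive approximation
residual = np.abs(pi @ P - pi).max()
```

For this chain the iteration converges geometrically at the rate of the subdominant eigenvalue of P (here 0.8), which is exactly the slow convergence that nearly completely decomposable matrices make worse and that aggregation/disaggregation targets.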

Proceedings ArticleDOI
Ronald C. Wong1
21 Jun 1987
TL;DR: Newton-Raphson iteration is then applied to compute the steady state, substantially speeding up convergence to the solution as compared to brute-force simulation methods.
Abstract: Accelerated convergence to the steady-state solution of closed-loop regulated switching-mode systems is achieved by more accurately computing the sensitivity matrices involved through a new matrix-based algorithm. By taking into consideration the variation of the timing of the intracycle intervals of a switching-mode system with respect to changes in the state vector, the new algorithm is able to predict system sensitivity with considerably greater accuracy but only slight penalty in computation time over existing algorithms. Newton-Raphson iteration is then applied to compute the steady state, substantially speeding up convergence to the solution as compared to brute-force simulation methods.
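The outer structure of such an approach can be sketched generically: the steady state is a fixed point x* = Phi(x*) of the cycle map Phi, the sensitivity matrix dPhi/dx is estimated (here crudely, by finite differences; the paper's point is computing it more accurately), and Newton-Raphson is applied to F(x) = Phi(x) - x. The cycle map below is a hypothetical stand-in, not a switching-converter model:

```python
import numpy as np

# Newton-Raphson for the periodic steady state x* = Phi(x*), where Phi maps
# the state at the start of one cycle to the start of the next.
A = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([0.3, -0.1])

def phi(x):
    return A @ np.tanh(x) + b       # stand-in for one simulated cycle

def sensitivity(x, h=1e-6):
    # Finite-difference estimate of the sensitivity matrix dPhi/dx.
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (phi(x + e) - phi(x - e)) / (2 * h)
    return J

x = np.zeros(2)
for _ in range(20):
    F = phi(x) - x                  # fixed-point residual
    x = x - np.linalg.solve(sensitivity(x) - np.eye(2), F)
fixed_point_residual = np.abs(phi(x) - x).max()
```

Brute-force simulation would instead iterate x ← phi(x) until the transient dies out; Newton on the fixed-point equation reaches the steady state in a handful of cycle evaluations.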

Journal ArticleDOI
TL;DR: In this paper, the numerical characteristics of penalty methods for evaluating the solution of symmetric systems of equations with imposed constraints are analyzed and the sources of error resulting from this approach are identified and an estimate for the penalty parameter that minimizes this error is obtained.
Abstract: This paper looks at the numerical characteristics of penalty methods for evaluating the solution of symmetric systems of equations with imposed constraints. The sources of error resulting from this approach are identified and an estimate for the penalty parameter that minimizes this error is obtained. The results of the error analysis and the effect of penalty parameter on the accuracy and rates of convergence of the solution algorithm are demonstrated with the aid of some numerical examples.
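The basic penalty trade-off can be seen on a two-variable example (the quadratic objective and single equality constraint below are illustrative; the paper's error estimate for choosing the penalty parameter is not reproduced here):

```python
import numpy as np

# Quadratic-penalty sketch: minimize (x1-1)^2 + (x2-2)^2 subject to x1+x2 = 1
# by solving the penalized symmetric system (K + r c c^T) x = f + r d c.
K = 2.0 * np.eye(2)                # Hessian of the objective
f = np.array([2.0, 4.0])           # so objective = 0.5 x^T K x - f^T x + const
c = np.array([1.0, 1.0])           # constraint c^T x = d
d = 1.0
exact = np.array([0.0, 1.0])       # exact constrained minimizer, for reference
violations = []
for r in (1e2, 1e4, 1e6):
    x = np.linalg.solve(K + r * np.outer(c, c), f + r * d * c)
    violations.append(abs(c @ x - d))
# the constraint violation decays like O(1/r) (here exactly 2/(1+r)),
# while the conditioning of the penalized system grows with r
```

Too small a penalty parameter leaves a large constraint violation; too large a one amplifies roundoff in the solve. Balancing these two error sources is what the paper's estimate for the penalty parameter does.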

01 Jul 1987
TL;DR: A class of multiscale algorithms for the solution of large sparse linear systems that are particularly well adapted to massively parallel supercomputers is described, using an approximate inverse for smoothing and a super-interpolation operator to move the correction from coarse to fine scales, chosen to optimize the rate of convergence.
Abstract: We describe a class of multiscale algorithms for the solution of large sparse linear systems that are particularly well adapted to massively parallel supercomputers. While standard multigrid algorithms are unable to effectively use all processors when computing on coarse grids, the new algorithms utilize the same number of processors at all times. The basic idea is to solve many coarse scale problems simultaneously, combining the results in an optimal way to provide an improved fine scale solution. As a result, convergence rates are much faster than for standard multigrid methods: we have obtained V-cycle convergence rates as good as 0.0046 with one smoothing application per cycle, and 0.0013 with two smoothings. On massively parallel machines, the improved convergence rate is attained at no extra computational cost, since processors that would otherwise be sitting idle are utilized to provide the better convergence. On serial machines, the algorithm is slower because of the extra time spent on multiple coarse scales, though the improved convergence rate may justify this, particularly in cases where other methods do not converge. In constant coefficient situations the algorithm is easily analyzed theoretically using Fourier methods on a single grid. The fact that only one grid is involved substantially simplifies convergence proofs. A feature of the algorithms is the use of a matched pair of operators: an approximate inverse for smoothing and a super-interpolation operator to move the correction from coarse to fine scales, chosen to optimize the rate of convergence.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the convergence of dynamic iteration methods for large systems of linear initial value problems and showed how the convergence can be reduced to a graphical test relating the splitting of the matrix to the stability properties of the discretization method.
Abstract: This paper continues the authors' study of the convergence of dynamic iteration methods for large systems of linear initial value problems. We ask for convergence on [0, ∞) and show how the convergence can be reduced to a graphical test relating the splitting of the matrix to the stability properties of the discretization method.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the problem of maximizing the entropy under linear equality constraints and the convergence of a special-purpose block-iterative algorithm for its solution is investigated.
Abstract: Entropy maximization under linear equality constraints is considered and convergence of a special-purpose block-iterative algorithm for its solution is investigated. This algorithm processes in each iterative step a group of constraints. When each block contains a single equation then a well-known row-action method, called MART (for “Multiplicative Algebraic Reconstruction Technique”), is obtained. At the other extreme, when all equations are grouped into a single block, a fully simultaneous entropy maximization algorithm is obtained. In the general case the algorithm permits assignment of weights to the equations within each block to reflect any a priori knowledge, which might be available from the modelling of some real-world process, about their relative importance. The blocks are processed in an almost cyclic order and, in contrast with Bregman’s method, the iterative step does not require inner-loop iterations but is rather represented by a closed form formula. The iterative step of the algo...
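The single-equation block case, MART, has a one-line multiplicative update. A minimal sketch on a small consistent system A x = b with nonnegative A and a strictly positive start (the system and starting point are illustrative; the general weighted-block version is not attempted):

```python
import numpy as np

# MART row-action sketch: rows are processed cyclically, and each update is
# the closed-form multiplicative step x_j <- x_j * (b_i / <a_i, x>)^{a_ij}.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = np.array([0.2, 0.5, 0.9])     # strictly positive starting point
for sweep in range(200):
    for i in range(2):            # almost cyclic control: plain cyclic here
        ratio = b[i] / (A[i] @ x)
        x = x * ratio ** A[i]     # exponents a_ij are 0 or 1 in this toy system
mart_residual = np.abs(A @ x - b).max()
```

Each update leaves the iterate strictly positive, and for a consistent system the cyclic sweeps converge to a feasible point; no inner-loop iteration is needed, which is the contrast with Bregman's method drawn above.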

Journal ArticleDOI
TL;DR: In this paper, two classes of numerical approaches have been adopted to solve the single-stage isothermal flash problem: equation-solving methods that try to solve a nonlinear equation system and a minimization of the total Gibbs free energy.
Abstract: Various numerical approaches have been adopted to solve the single-stage isothermal flash problem. These approaches result in two classes of methods. The first includes equation-solving methods that try to solve a nonlinear equation system; the second is based on a minimization of the total Gibbs free energy. Most of these methods may fail to find a solution or may lead to erroneous solutions near critical conditions when an equation of state is applied to both the vapor and liquid phases. New methods for solving the problem are proposed. Combining the simplicity in structure of the conventional successive-substitution method and the efficiency of some unconstrained minimization algorithms, they all ensure convergence to local minima of the Gibbs free energy. The new methods are compared from the standpoints of computer storage and calculational effort requirements. The performance of these methods is tested on four multicomponent systems taken from literature, and a comparison is made with other published methods.
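To make the equation-solving flavour concrete, here is only the inner step of a successive-substitution scheme: with the equilibrium ratios K_i held fixed (an assumption — in practice they are updated from an equation of state, and this sketch is not one of the paper's proposed methods), the vapor fraction V solves the Rachford-Rice equation, here by Newton's method. Feed composition and K-values are hypothetical:

```python
import numpy as np

# Rachford-Rice equation for the isothermal flash with fixed K-values:
#   f(V) = sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0
z = np.array([0.5, 0.3, 0.2])     # feed mole fractions (toy mixture)
K = np.array([2.0, 1.2, 0.3])     # fixed equilibrium ratios (hypothetical)

def f(V):
    return np.sum(z * (K - 1) / (1 + V * (K - 1)))

def fprime(V):
    return -np.sum(z * (K - 1) ** 2 / (1 + V * (K - 1)) ** 2)

V = 0.5                           # start inside the physical range (0, 1)
for _ in range(30):
    V -= f(V) / fprime(V)         # Newton step; f is monotone decreasing
vapor_frac, rr_residual = V, abs(f(V))
```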


Journal ArticleDOI
F. Yassa1
TL;DR: The existence of an optimal value for this convergence factor μ is investigated for two classes of algorithms, including the complex adaptive-linear-combiner (ALC) and algorithms which are linear only in a subset of their adaptive coefficients.
Abstract: The convergence and the adaptation speed of gradient-based adaptive algorithms are controlled by the chosen value for the convergence factor μ. In this paper, the existence of an optimal value for this convergence factor is investigated for two classes of algorithms. A proof is first presented for the general case of the complex adaptive-linear-combiner (ALC). The results are applied to the complex and real LMS algorithms. This is followed by a second proof for algorithms which are linear only in a subset of their adaptive coefficients. These cases are found in IIR applications such as the hybrid-recursive, lattice-recursive, and recursive algorithms using the direct realization IIR. For each case, the optimal value is shown to be generated using instantaneous signal estimates. The resulting adaptive algorithms become self-optimizing in terms of their convergence factor, and dependence on incoming training signal levels is reduced. Moreover, a correction factor is introduced in each case to regulate the adaptation process and accommodate practical applications where additive signals are present with the desired signal.
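For the ALC case, the flavour of a self-optimizing convergence factor can be sketched with a normalized-LMS-style rule, where mu is generated from an instantaneous estimate of the regressor power so that adaptation no longer depends on the incoming signal level. This is a simplified stand-in for the paper's derivation, with the small constant `eps` playing the role of a regularizing correction term (all values below are illustrative):

```python
import numpy as np

# Gradient-based adaptive linear combiner with an instantaneous,
# signal-level-independent convergence factor.
rng = np.random.default_rng(1)
n_taps, n_samples, eps = 4, 5000, 1e-8
w_true = np.array([0.8, -0.4, 0.2, -0.1])
x = 5.0 * rng.standard_normal(n_samples)   # note the large input level
w = np.zeros(n_taps)
for k in range(n_taps, n_samples):
    u = x[k - n_taps:k][::-1]              # regressor
    e = w_true @ u - w @ u                 # a-priori error
    mu = 0.5 / (eps + u @ u)               # convergence factor from an
    w += mu * e * u                        # instantaneous power estimate
nlms_error = np.linalg.norm(w - w_true)
```

Scaling the input by any constant leaves the normalized update unchanged, which is the "reduced dependence on incoming training signal levels" claimed above.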

Journal ArticleDOI
TL;DR: The Chow-Yorke algorithm is a homotopy method that has been proved globally convergent for Brouwer fixed-point problems, certain classes of zero finding and nonlinear programming problems, and two-point boundary-value approximations based on shooting and finite differences as discussed by the authors.

Journal ArticleDOI
TL;DR: Results are presented on the convergence and asymptotic agreement of a class of asynchronous distributed algorithms which are in general time-varying, memory-dependent, and not necessarily associated with the optimization of a common cost functional.
Abstract: In this paper, we present results on the convergence and asymptotic agreement of a class of asynchronous stochastic distributed algorithms which are in general time-varying, memory-dependent, and not necessarily associated with the optimization of a common cost functional. We show that convergence and agreement can be reached by distributed learning and computation under a number of conditions, in which case a separation of fast and slow parts of the algorithm is possible, leading to a separation of the estimation part from the main algorithm.

Journal ArticleDOI
TL;DR: A quasi-Newton method is developed which preserves known bounds on the Jacobian matrix and can be computed with the same amount of work as competitive methods, and it is proved that the number of operations required to obtain this update is proportional to the number of nonzeros in the sparsity pattern of the Jacobian matrix.
Abstract: We develop a quasi-Newton method which preserves known bounds on the Jacobian matrix. We show that this update can be computed with the same amount of work as competitive methods. In particular, we prove that the number of operations required to obtain this update is proportional to the number of nonzeros in the sparsity pattern of the Jacobian matrix. The method is also shown to share the local convergence properties of Broyden’s and Schubert’s method.
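For reference, the plain Broyden update the paper builds on can be sketched on a small nonlinear system (this sketch does not attempt the paper's bound-preserving or sparsity-exploiting features; the test problem and starting point are illustrative):

```python
import numpy as np

# Broyden's quasi-Newton method: one Jacobian estimate up front, then
# rank-one secant updates B <- B + (y - B s) s^T / (s^T s).
def F(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0,   # unit circle
                     x[0] - x[1]])                  # line x1 = x2

def fd_jacobian(x, h=1e-6):
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (F(x + e) - F(x - e)) / (2 * h)
    return J

x = np.array([1.0, 0.5])
B = fd_jacobian(x)                 # initial Jacobian approximation
Fx = F(x)
for _ in range(30):
    if np.abs(Fx).max() < 1e-12:   # converged; avoid a degenerate update
        break
    s = np.linalg.solve(B, -Fx)    # quasi-Newton step
    x = x + s
    Fnew = F(x)
    B += np.outer(Fnew - Fx - B @ s, s) / (s @ s)   # secant (rank-one) update
    Fx = Fnew
root_residual = np.abs(Fx).max()
```

Each iteration costs one function evaluation and one rank-one update rather than a fresh Jacobian, while retaining local superlinear convergence.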

Journal ArticleDOI
TL;DR: This paper presents some convergence results on the Phase II and Phase I portions of the scaling algorithm and presents results of numerical experiments on examples of Klee–Minty type which show sensitivity to the starting interior-feasible point and steepest-descent step size.

Journal ArticleDOI
TL;DR: In this paper, a population of individuals was simulated to study convergence rate of an iterative method, a mix of Gauss-Seidel and second-order Jacobi, for solving mixed model equations for an animal model.

Book ChapterDOI
01 Jan 1987
TL;DR: A method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function and an algorithm with global convergence for equality constrained problems is presented.
Abstract: This paper describes a method for constrained optimization which obtains its search directions from a quadratic programming subproblem based on the well-known augmented Lagrangian function. The method can be viewed as a development of the algorithm REQP, which is related to the classical exterior point penalty function, and it is argued that the new technique will have certain computational advantages arising from the fact that it need not involve a sequence of penalty parameters tending to zero. An algorithm with global convergence for equality constrained problems is presented. Computational results are also given for this algorithm and some alternative strategies are briefly considered for extending it to deal with inequality constraints.

Journal ArticleDOI
TL;DR: In this article, the convergence and optimality properties of the modified two-step algorithm for on-line determination of the optimum steady-state operating point of an industrial process were investigated.
Abstract: This paper investigates convergence and optimality properties of the modified two-step algorithm for on-line determination of the optimum steady-state operating point of an industrial process. Mild sufficient conditions are derived for the convergence and feasibility of the algorithm. It is shown that every point within the solution set of the algorithm satisfies first-order necessary conditions for optimality, and that every optimal solution belongs to this set. It is also shown that there are advantages to be gained by using a linear mathematical model of the process within the implementation of the algorithm.

Journal ArticleDOI
TL;DR: In this paper, an extension to nonlinear algebraic systems of the class of algorithms recently proposed by Abaffy, Broyden and Spedicato for general linear systems is considered.
Abstract: In this paper we consider an extension to nonlinear algebraic systems of the class of algorithms recently proposed by Abaffy, Broyden and Spedicato for general linear systems. We analyze the convergence properties, showing that under the usual assumptions on the function and some mild assumptions on the free parameters available in the class, the algorithm is locally convergent and has a superlinear rate of convergence (per major iteration, which is computationally comparable to a single Newton's step). Some particular algorithms satisfying the conditions on the free parameters are considered.

Journal Article
TL;DR: In this paper, the convergence analysis of a class of one and two-time scale time-varying nonlinear systems using averaging theory is studied, and new stability theorems are developed.
Abstract: We develop new stability theorems for the convergence analysis of a class of one and two-time scale time-varying nonlinear systems using averaging theory. These theorems are applied to a class of continuous time adaptive identifiers and model reference adaptive controllers to obtain estimates of the parameter rate of exponential convergence.