
Showing papers on "Rate of convergence published in 1976"


Journal ArticleDOI
TL;DR: The results discussed highlight the operational aspects of multiplier methods and demonstrate their significant advantages over ordinary penalty methods.

332 citations


Journal ArticleDOI
TL;DR: In this paper, approximate methods for solving the Cauchy problem for a quasi-linear equation in the class of measurable bounded functions were investigated, and the convergence rate in L_1(E_n) was estimated.
Abstract: Approximate methods for solving the Cauchy problem for a quasi-linear equation in the class of measurable bounded functions are investigated. The convergence rate in L_1(E_n) is estimated.

254 citations


Journal ArticleDOI
TL;DR: In this paper, a duality theory for linear and concave-convex fractional programs is developed and related to recent results by Bector, Craven-Mond, Jagannathan, Sharma-Swarup, et al.
Abstract: This paper, which is presented in two parts, is a contribution to the theory of fractional programming, i.e., maximization of quotients subject to constraints. In Part I a duality theory for linear and concave-convex fractional programs is developed and related to recent results by Bector, Craven-Mond, Jagannathan, Sharma-Swarup, et al. Basic duality theorems of linear, quadratic and convex programming are extended. In Part II Dinkelbach's algorithm solving fractional programs is considered. The rate of convergence as well as a priori and a posteriori error estimates are determined. In view of these results the stopping rule of the algorithm is changed. Also the starting rule is modified using duality as introduced in Part I. Furthermore a second algorithm is proposed. In contrast to Dinkelbach's procedure the rate of convergence is still controllable. Error estimates are obtained too.
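Dinkelbach's iteration reduces the fractional program max f(x)/g(x) to a sequence of parametric subproblems max f(x) − λg(x). The following Python sketch is illustrative only; the grid-based subproblem solver and the example ratio (x + 1)/(x² + 1) are assumptions, not taken from the paper:

```python
import numpy as np

def dinkelbach(f, g, xs, tol=1e-10, max_iter=100):
    """Dinkelbach's parametric iteration for max f(x)/g(x) (g > 0),
    with the subproblem max f(x) - lam*g(x) solved by grid search."""
    lam = f(xs[0]) / g(xs[0])            # ratio at any feasible point
    for _ in range(max_iter):
        vals = f(xs) - lam * g(xs)       # parametric subproblem F(lam)
        i = np.argmax(vals)
        if vals[i] < tol:                # F(lam) = 0  <=>  lam is optimal
            break
        lam = f(xs[i]) / g(xs[i])        # update parameter to the new ratio
    return xs[i], lam

# maximize (x + 1) / (x^2 + 1) over a grid on [0, 2];
# the true maximizer is x = sqrt(2) - 1 with ratio (sqrt(2) + 1) / 2
xs = np.linspace(0.0, 2.0, 200001)
x_star, lam_star = dinkelbach(lambda x: x + 1.0, lambda x: x * x + 1.0, xs)
```

The update λ ← f(x*)/g(x*) and the test F(λ) ≈ 0 are the stopping/starting structure the paper modifies; for concave-convex problems the sequence λ_k is known to converge superlinearly.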

150 citations


Journal ArticleDOI
TL;DR: A class of algorithms for nonlinearly constrained optimization problems is proposed in which the subproblems contain an updated estimate of the Hessian of the Lagrangian; rapid convergence has been obtained in a number of test problems using a program based on these algorithms.
Abstract: A class of algorithms for nonlinearly constrained optimization problems is proposed. The subproblems of the algorithms are linearly constrained quadratic minimization problems which contain an updated estimate of the Hessian of the Lagrangian. Under suitable conditions and updating schemes local convergence and a superlinear rate of convergence are established. The convergence proofs require among other things twice differentiable objective and constraint functions, while the calculations use only first derivative data. Rapid convergence has been obtained in a number of test problems by using a program based on the algorithms proposed here.

120 citations


Journal ArticleDOI
TL;DR: In this article, a generalized class of quadratic penalty function methods for nonconvex nonlinear programming problems is considered and convergence and rate of convergence results for the sequences of primal and dual variables generated are obtained.
Abstract: In this paper we consider a generalized class of quadratic penalty function methods for the solution of nonconvex nonlinear programming problems. This class contains as special cases both the usual quadratic penalty function method and the recently proposed multiplier method. We obtain convergence and rate of convergence results for the sequences of primal and dual variables generated. The convergence results for the multiplier method are global in nature and constitute a substantial improvement over existing local convergence results. The rate of convergence results show that the multiplier method should be expected to converge considerably faster than the pure penalty method. At the same time, we construct a global duality framework for nonconvex optimization problems. The dual functional is concave, everywhere finite, and has strong differentiability properties. Furthermore, its value, gradient and Hessian matrix within an arbitrary bounded set can be obtained by unconstrained minimization of a certain...
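The multiplier method discussed above alternates an unconstrained minimization of the augmented Lagrangian L_c(x, λ) = f(x) + λh(x) + (c/2)h(x)² with a first-order multiplier update. A minimal sketch, assuming a crude gradient-descent inner solver and the toy problem min x² + y² subject to x + y = 1 (all step sizes and the test problem are illustrative assumptions):

```python
import numpy as np

def multiplier_method(f, h, x0, c=10.0, outer=20, inner=500, lr=1e-2):
    """Method-of-multipliers sketch for min f(x) s.t. h(x) = 0:
    inner loop: crude gradient descent on the augmented Lagrangian
    L_c(x, lam) = f + lam*h + (c/2) h^2; outer loop: lam <- lam + c*h(x)."""
    x, lam = np.asarray(x0, dtype=float), 0.0

    def grad(F, x, eps=1e-6):            # central-difference gradient
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = eps
            g[i] = (F(x + e) - F(x - e)) / (2 * eps)
        return g

    for _ in range(outer):
        L = lambda y: f(y) + lam * h(y) + 0.5 * c * h(y) ** 2
        for _ in range(inner):
            x = x - lr * grad(L, x)
        lam = lam + c * h(x)             # first-order multiplier update
    return x, lam

# min x^2 + y^2 subject to x + y = 1: solution x = y = 1/2, multiplier -1
x, lam = multiplier_method(lambda v: v @ v, lambda v: v[0] + v[1] - 1.0,
                           [0.0, 0.0])
```

On this quadratic example the multiplier error contracts by a factor of roughly 1/(1 + c) per outer iteration, illustrating the linear rate that improves as the penalty parameter c grows, in line with the comparison above between multiplier and pure penalty methods.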

115 citations


Book
01 Jun 1976
TL;DR: The book represents an introduction to computation in control by an iterative, gradient, numerical method, where linearity is not assumed, and conjugate gradient descent is used.
Abstract: The book represents an introduction to computation in control by an iterative, gradient, numerical method, where linearity is not assumed. The general language and approach used are those of elementary functional analysis. The particular gradient method that is emphasized and used is conjugate gradient descent, a well known method exhibiting quadratic convergence while requiring very little more computation than simple steepest descent. Constraints are not dealt with directly, but rather the approach is to introduce them as penalty terms in the criterion. General conjugate gradient descent methods are developed and applied to problems in control.
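In the quadratic case, the conjugate gradient descent the book emphasizes reduces to the standard linear CG recursion, which terminates in at most n steps. A finite-dimensional NumPy sketch (the 2x2 example system is an assumption for illustration, not from the book):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Linear conjugate gradient for min 0.5 x'Ax - b'x with A symmetric
    positive definite; terminates in at most n steps in exact arithmetic."""
    x = np.zeros_like(b)
    r = b - A @ x                        # residual = negative gradient
    d = r.copy()                         # first direction: steepest descent
    for _ in range(len(b)):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)       # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r) # Fletcher-Reeves update
        d = r_new + beta * d             # next direction, A-conjugate
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)             # minimizer solves Ax = b
```

Note the extra work over steepest descent is only the scalar beta and one vector update per step, which is the "very little more computation" the abstract refers to.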

115 citations


Journal ArticleDOI
TL;DR: It has been conjectured that the conjugate gradient method for minimizing functions of several variables has a superlinear rate of convergence, but Crowder and Wolfe show by example that the conjecture is false.
Abstract: It has been conjectured that the conjugate gradient method for minimizing functions of several variables has a superlinear rate of convergence, but Crowder and Wolfe show by example that the conjecture is false. Now the stronger result is given that, if the objective function is a convex quadratic and if the initial search direction is an arbitrary downhill direction, then either termination occurs or the rate of convergence is only linear, the second possibility being more usual. Relations between the starting point and the initial search direction that are necessary and sufficient for termination in the quadratic case are studied.

108 citations


Journal ArticleDOI
M.M. Sondhi1, Debasis Mitra
01 Nov 1976
TL;DR: In this article, the authors derived a broad range of theoretical results concerning the performance and limitations of a class of analog adaptive filters and proved the exponential convergence to zero of the norm ||r(t)|| with weak nondegeneracy requirements on x(t).
Abstract: We derive a broad range of theoretical results concerning the performance and limitations of a class of analog adaptive filters. Applications of these filters have been proposed in many different engineering contexts which have in common the following idealized identification problem: a system has a vector input x(t) and a scalar output z(t) = h'x(t), where h is an unknown time-invariant coefficient vector. From a knowledge of x(t) and z(t) it is required to estimate h. The filter considered here adjusts an estimate vector ĥ(t) in a control loop, thus dĥ/dt = K F[z(t) - ẑ(t)] x(t), where ẑ(t) = ĥ'x(t), F is a suitable, in general nonlinear, function, and K is the loop gain. The effectiveness of the filter is determined by the convergence properties of the misalignment vector r = h - ĥ. With weak nondegeneracy requirements on x(t) we prove the exponential convergence to zero of the norm ||r(t)||. For all values of K, we give upper and lower bounds on the convergence rate which are tight in that both bounds have similar qualitative dependence on K. The dependence of these bounds on K is unexpected and important, since it reveals basic limitations of the filters which are not predicted by the conventional approximate method of analysis, the "method of averaging." By analyzing the effects of an added forcing term u(t) in the control equation, we obtain upper bounds on the effects on the convergence process of various important departures from the idealized model, as when noise is present as an additional component of z(t), the coefficient vector h is time-varying, and the integrators in a hardware implementation have finite memory.
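A discrete-time Euler simulation of the identification loop above, taking F as the identity (the simplest admissible choice), illustrates the exponential decay of the misalignment norm ||r(t)||. All numerical values here (the gain K, step dt, and the true vector h) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, -2.0, 0.5])     # unknown true coefficient vector
h_hat = np.zeros(3)                # adaptive estimate
K, dt = 5.0, 1e-3                  # loop gain and Euler step

r0 = np.linalg.norm(h - h_hat)
for _ in range(20000):
    x = rng.standard_normal(3)                # persistently exciting input
    z, z_hat = h @ x, h_hat @ x               # plant and model outputs
    h_hat = h_hat + dt * K * (z - z_hat) * x  # Euler step of the control law
r = np.linalg.norm(h - h_hat)                 # final misalignment norm
```

The random input plays the role of the paper's nondegeneracy requirement on x(t): with a degenerate input (e.g. x(t) confined to a subspace), the component of r in the unexcited directions would never decay.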

100 citations


Journal ArticleDOI
TL;DR: A class of combined primal–dual and penalty methods for constrained minimization which generalizes the method of multipliers is proposed and analyzed and it is shown that the rate of convergence may be linear or superlinear with arbitrary Q-order of convergence depending on the problem at hand.
Abstract: In this paper we propose and analyze a class of combined primal–dual and penalty methods for constrained minimization which generalizes the method of multipliers. We provide a convergence and rate of convergence analysis for these methods for the case of a convex programming problem. We prove global convergence in the presence of both exact and inexact unconstrained minimization, and we show that the rate of convergence may be linear or superlinear with arbitrary Q-order of convergence depending on the problem at hand and the form of the penalty function employed.

79 citations


Journal ArticleDOI
TL;DR: In this article, the convergence rates for the error between the solution to a discrete approximation of a fixed time, unconstrained control problem and the corresponding continuous optimal control were derived for one-step and multistep integration schemes.
Abstract: Convergence rates for the error between the solution to a discrete approximation of a fixed-time, unconstrained control problem and the corresponding continuous optimal control are derived for one-step and multistep integration schemes. The convergence rate for multistep schemes depends on the order of the integration scheme and on the approximation properties of the discrete costate equation at the right endpoint. Furthermore, the order is ≤ 3, and the error in the optimal discrete control exhibits a boundary layer with most of the error concentrated at the right endpoint. For a class of one-step integration schemes satisfying a symmetry condition, second-order convergence of the optimal discrete control is both proved and observed experimentally. The computations also indicate that the convergence rate of the optimal discrete state and costate variables equals the order of the integration scheme. By an auxiliary computation, this order can also be recovered for the control approximation. Some numeric...

69 citations


Journal ArticleDOI
TL;DR: In this article, the convergence rate of the generalized Brillouin theorem based multiconfiguration method (MCGBT) has been analyzed and found to converge quadratically, compared with other optimization methods such as the first-order Rayleigh-Schroedinger perturbation, the steepest descent, the single vector diagonalization, the Newton-Raphson, and the conventional SCF method.
Abstract: The optimization scheme of the "multiconfiguration method based on the generalized Brillouin theorem" (MCGBT) has been analyzed and found to converge quadratically. Its rate of convergence has been compared with that of other optimization methods such as the first-order Rayleigh-Schroedinger perturbation, the steepest descent, the single vector diagonalization, the Newton-Raphson, and the conventional SCF method. The convergence of the MCGBT scheme has been found to be superior to the convergence of the above-mentioned schemes.

Journal ArticleDOI
TL;DR: A new feature of the PN approach to the solution of the Schrodinger equation is reported, namely, the remarkable stability of the present PN algorithm against round-off errors.

Journal ArticleDOI
TL;DR: In this paper, the convergence properties of iterative orthogonalization processes are investigated using polar decomposition of matrices; the convergence rate (order) and the range of convergence are established in terms of the spectral radius of the modulus of the matrix which is being orthogonalized.
Abstract: Polar decomposition of matrices is used here to investigate the convergence properties of iterative orthogonalization processes. It is shown that, applying this decomposition, the investigation of a general iterative process of a certain form can be reduced to the investigation of a simple scalar iterative process. Three known iterative orthogonalization processes, which are special cases of the general process, are analyzed, their convergence rate (order) is determined, and their range of convergence is established in terms of the spectral radius of the modulus of the matrix which is being orthogonalized.
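One classical iterative orthogonalization process of this general form (whether it is among the paper's three analyzed cases is an assumption) is the Bjorck-Bowie-type update X ← X(3I − X'X)/2. It converges quadratically to the orthogonal polar factor when the singular values of the starting matrix lie in (0, √3), a range-of-convergence condition of exactly the spectral-radius type described above. A sketch:

```python
import numpy as np

def iterative_orthogonalize(A, iters=30):
    """Bjorck-Bowie-type orthogonalization: X <- X (3I - X'X) / 2.
    Converges quadratically to the orthogonal polar factor of A provided
    the singular values of A lie in (0, sqrt(3))."""
    X = np.array(A, dtype=float)
    I = np.eye(A.shape[1])
    for _ in range(iters):
        X = X @ (3.0 * I - X.T @ X) / 2.0
    return X

A = np.array([[1.0, 0.2], [0.1, 0.9]])   # near-orthogonal starting matrix
Q = iterative_orthogonalize(A)
```

Via the polar decomposition A = UP, the matrix iteration reduces to the scalar map σ ← σ(3 − σ²)/2 on each singular value, with fixed point 1, which is the paper's reduction of a general matrix process to a simple scalar one.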

Journal ArticleDOI
TL;DR: Among the iterative schemes for computing the Moore-Penrose inverse of a well-conditioned matrix, only those which have an order of convergence three or two are computationally efficient.
Abstract: Among the iterative schemes for computing the Moore-Penrose inverse of a well-conditioned matrix, only those which have an order of convergence three or two are computationally efficient. A Fortran programme for these schemes is provided.
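A standard second-order scheme of this family is the Newton-Schulz (hyperpower) iteration X ← X(2I − AX); with the scaled start X0 = A'/(||A||_1 ||A||_inf) it converges quadratically to the Moore-Penrose inverse. The paper provides Fortran; this Python translation and the example matrix are assumptions for illustration:

```python
import numpy as np

def pinv_newton_schulz(A, iters=50):
    """Second-order (Newton-Schulz) iteration X <- X (2I - A X) for the
    Moore-Penrose inverse, with the classical scaled start
    X0 = A' / (||A||_1 ||A||_inf), which guarantees convergence."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)        # residual squares at every step
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3x2, full column rank
X = pinv_newton_schulz(A)
```

The corresponding third-order variant nests one more factor per step, trading an extra matrix multiply for cubic residual reduction, which is the order-two-versus-three efficiency trade-off the abstract refers to.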

Journal ArticleDOI
TL;DR: In this paper, a general procedure for the sequential construction of D-optimal designs is given, which includes Wynn's procedure as a special case, and an example is presented to compare various alternative procedures.
Abstract: A very general procedure for the sequential construction of D-optimal designs is given, which includes Wynn's procedure as a special case. An example is presented to compare various alternative procedures. Properties of the weights α_n in these procedures are discussed, and the convergence rate of Fedorov's procedure is obtained.
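A Wynn-type sequential step augments the current design with the candidate point maximizing the variance function d(x) = f(x)'M⁻¹f(x), where M is the information matrix of the current design. A sketch for a quadratic regression model on [−1, 1] (the model, candidate grid, and starting design are illustrative assumptions, not from the paper):

```python
import numpy as np

def wynn_d_optimal(candidates, init, steps=300):
    """Wynn-type sequential construction: repeatedly augment the design
    with the candidate maximizing the variance function d(x) = f'M^{-1}f,
    for the quadratic regression model f(x) = (1, x, x^2)."""
    F = np.array([[1.0, x, x * x] for x in candidates])

    def variance(design):
        Fd = np.array([[1.0, x, x * x] for x in design])
        M = Fd.T @ Fd / len(design)               # information matrix
        return np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)

    design = list(init)
    for _ in range(steps):
        design.append(candidates[int(np.argmax(variance(design)))])
    return design, float(variance(design).max())

xs = np.linspace(-1.0, 1.0, 201)
design, d_max = wynn_d_optimal(xs, [-1.0, -0.5, 0.6])
```

By the Kiefer-Wolfowitz equivalence theorem, max_x d(x) equals the number of parameters (here 3) exactly at the D-optimum, so the gap above 3 measures how far the sequential design still is from optimal; the convergence rate of that gap is the kind of quantity studied for Fedorov's procedure.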

Journal ArticleDOI
TL;DR: In this article, a class of algorithms for the minimization of a function of n variables subject to linear inequality constraints is described, and the convergence to a stationary point is demonstrated under weak conditions.
Abstract: A class of algorithms is described for the minimization of a function of n variables subject to linear inequality constraints. Under weak conditions convergence to a stationary point is demonstrated. The method uses a mixture of conjugate direction constructing and accelerating steps. Any mixture, for example alternation, may be used provided that the subsequence of conjugate direction constructing steps is infinite. The mixture of steps may be specified so that, under appropriate assumptions, the rate of convergence of the method is two-step superlinear or (n - p + 1)-step cubic, where p is the number of constraints active at a stationary point. The accelerating step is always superlinearly convergent. A condition is given under which the alternating policy is every-step superlinear. Computational results are given for several test problems.

1. Introduction. In (3) a conjugate direction method is described for minimizing a nonlinear function subject to linear inequality constraints. An accelerating step is always performed after the construction of (n - p) conjugate directions, where n is the number of variables and p is the number of constraints active at the limit point of the sequence of points constructed by the method. Under appropriate assumptions this results in an (n - p + 1)-step cubic rate of convergence. The idea of accelerating the rate of convergence of methods of conjugate directions for unconstrained optimization has further been pursued in (2) and (9). In (2) the construction of conjugate directions is based on Zoutendijk's projection method (11), and the accelerating direction is obtained using an approximation to the solution of certain linear equations involving differences of gradients at previous iterations. In (9), conjugate directions are obtained by always choosing the descent direction orthogonal to n - 1 differences of gradients; therefore, a set of n conjugate directions is available at every iteration. This allows an accelerating direction to be used more frequently than every n iterations. It is the purpose of this paper to extend these methods to minimization problems with linear inequality constraints. The algorithm allows considerable flexibility in the mixture of accelerating and conjugate direction constructing steps. If the algorithm does not terminate in a finite number of steps, it is only required that the number of conjugate direction constructing steps be infinite. Under appropriate assumptions each accelerating step is then a superlinear step, and this results in a 1-step superlinear rate

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a fast, stable and easy to code matrix method of which the number of nodes and rate of convergence (and therefore the computing time) are independent of injection rate.
Abstract: The structure of a laminar boundary layer with massive blowing is greatly complicated in view of the viscid-inviscid interactions. These interactions result in a larger deflection of the incident flow and also a significant alteration in the major flow variables, including the pressure distribution. To date, various analytical models have been developed for the prediction of the flowfield. More recently, numerical studies based on the laminar boundary-layer approximation have been developed for the prediction of a boundary-layer flow structure with a prescribed pressure gradient. Methods adopted in the numerical investigations include backward shooting (Libby, Nachtsheim and Green; Liu and Nachtsheim) and the matrix method (Garrett, Smith, and Perkins). Although it has been shown by Liu and Nachtsheim that the backward shooting technique is stable, its computing time is long and increases as injection rates increase. Therefore, from an economic point of view, it is highly desirable to devise another stable and yet fast numerical scheme. With this motivation, the authors tried the matrix method resulting from various finite differencing. However, for differential equations with large parasitic eigenvalues, such as the massive blowing problem, most differencing schemes result in a matrix with a large condition number and are therefore unstable. It is the purpose of the present study to propose a fast, stable, and easy-to-code matrix method for which the number of nodes and the rate of convergence (and therefore the computing time) are independent of the injection rate.

Journal ArticleDOI
TL;DR: Numerical experiments suggest that on a parallel computer this new algorithm is the best of the iterative algorithms considered, whose rate of convergence is comparable to that of the optimal two-cyclic Chebyshev iteration.
Abstract: Iterative methods for the solution of tridiagonal systems are considered, and a new iteration is presented, whose rate of convergence is comparable to that of the optimal two-cyclic Chebyshev iteration but which does not require the calculation of optimal parameters. The convergence rate depends only on the magnitude of the elements of the tridiagonal matrix and not on its dimension or spectrum. The theory also has a natural extension to block tridiagonal systems. Numerical experiments suggest that on a parallel computer this new algorithm is the best of the iterative algorithms considered.
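The flavor of such iterations can be seen in a plain Jacobi sweep on a diagonally dominant tridiagonal system (illustrative only, not the paper's new algorithm): the contraction factor is governed by the magnitudes of the off-diagonal elements relative to the diagonal, independent of the dimension, and every component updates independently, which is what makes this style of iteration attractive on a parallel computer.

```python
import numpy as np

def jacobi_tridiagonal(a, b, c, d, iters=200):
    """Jacobi iteration for a tridiagonal system with sub-diagonal a,
    diagonal b, super-diagonal c and right-hand side d. For a strictly
    diagonally dominant matrix the contraction factor is bounded by
    (|a| + |c|) / |b|, independent of the dimension."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x_new = d.copy()
        x_new[1:] -= a[1:] * x[:-1]     # subtract sub-diagonal couplings
        x_new[:-1] -= c[:-1] * x[1:]    # subtract super-diagonal couplings
        x = x_new / b                   # each component updates independently
    return x

n = 100
a = np.full(n, -1.0); a[0] = 0.0        # sub-diagonal (a[0] unused)
c = np.full(n, -1.0); c[-1] = 0.0       # super-diagonal (c[-1] unused)
b = np.full(n, 4.0)                     # strictly dominant diagonal
d = np.ones(n)
x = jacobi_tridiagonal(a, b, c, d)
```

Here the error shrinks by at least a factor 2/4 = 0.5 per sweep regardless of n, echoing the abstract's point that the rate depends on element magnitudes rather than on the dimension or spectrum.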

Journal ArticleDOI
TL;DR: An algorithm for nonlinear minimax optimization is developed that maintains the quadratic convergence property of a recent algorithm by Madsen et al. when applied to regular problems and it is demonstrated to significantly improve the final convergence on singular problems.
Abstract: A theoretical treatment of singularities in nonlinear minimax optimization problems, which allows for a classification in regular and singular problems, is presented. A theorem for determining a singularity that is present in a given problem is formulated. A group of problems often used in the literature to test nonlinear minimax algorithms, i.e., minimax design of multisection quarter-wave transformers, is shown to exhibit singularities and the reason for this is pointed out. Based on the theoretical results presented an algorithm for nonlinear minimax optimization is developed. The new algorithm maintains the quadratic convergence property of a recent algorithm by Madsen et al. when applied to regular problems and it is demonstrated to significantly improve the final convergence on singular problems.

Journal ArticleDOI
TL;DR: Quadratic convergence is proven and the condition for convergence is determined and illustrated with numerical examples.
Abstract: There exist several algorithms for the optimal orthogonalization of the Direction Cosine Matrix used in navigation, control and simulation. One of these recursive algorithms is shown to be derived from a dual solution to the optimal orthogonalization problem. The duality of the algorithm is demonstrated and its convergence properties are investigated. Quadratic convergence is proven and the condition for convergence is determined and illustrated with numerical examples.

Journal ArticleDOI
TL;DR: In this article, it was shown under weak moment conditions that for Markov chains the normal approximation is of order n^(-α) for each α < 1/4, where n is the number of random variables summed.
Abstract: All results concerning the accuracy of the normal approximation for sums of not necessarily independent random variables assume Doeblin's condition or the condition of φ-mixing (see e.g. [1, 3, 5, 7, 9]). Both assumptions mean in some sense that the random variables are "asymptotically independent", and they are rarely fulfilled for Markov chains. Using Doeblin's condition or the condition of φ-mixing, the rate of convergence to the normal distribution obtained in some of the papers cited above is of order n^(-1/2). The authors do not know of any results on the accuracy of the normal approximation holding without such conditions. In this paper we prove under weak moment conditions that for Markov chains the normal approximation is of order n^(-α) for each α < 1/4.

Journal ArticleDOI
TL;DR: In this paper, the authors derived the order of convergence of line search techniques based on fitting polynomials using function values only and showed that the order increases with the degree of the polynomial.
Abstract: In this study we derive the order of convergence of some line search techniques based on fitting polynomials, using function values only. It is shown that the order of convergence increases with the degree of the polynomial. Viewed as a sequence, the orders approach the Golden Section Ratio as the degree of the polynomial tends to infinity.
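The degree-2 member of this family is successive parabolic interpolation: fit a quadratic through three points using function values only and move to its vertex, giving convergence of order about 1.32 (the orders for higher degrees climb toward the golden section ratio, as stated above). A sketch; the test function is an illustrative assumption:

```python
def parabolic_line_search(phi, x0, x1, x2, iters=20):
    """Successive parabolic interpolation: fit a quadratic through three
    points using function values only and move to its vertex; converges
    superlinearly (order about 1.32) to a local minimizer."""
    pts = [x0, x1, x2]
    for _ in range(iters):
        a, b, c = pts
        fa, fb, fc = phi(a), phi(b), phi(c)
        num = (b - a) ** 2 * (fb - fc) - (b - c) ** 2 * (fb - fa)
        den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
        if den == 0.0:                  # points (numerically) coalesced
            break
        v = b - 0.5 * num / den         # vertex of the fitted parabola
        if abs(v - c) < 1e-10:          # successive iterates agree: converged
            return v
        pts = [b, c, v]
    return pts[-1]

# line search along t for phi(t) = (t - 1.3)^2 + 0.1 t^4 (minimum near 1.061)
t_star = parabolic_line_search(lambda t: (t - 1.3) ** 2 + 0.1 * t ** 4,
                               0.0, 1.0, 2.0)
```

Only function values enter the vertex formula, so no derivatives of phi are required, which is the defining feature of the techniques analyzed in this paper.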

Proceedings ArticleDOI
01 Dec 1976
TL;DR: An improved adaptive observer is formulated for nth order, linear, single-output, observable and time invariant systems and uses only input and output measurements and has an arbitrarily fast rate of convergence for both the system state and the system parameters.
Abstract: Based on the implementation and the method of approach, the adaptive observers are herein classified as explicit and implicit observers. Both types are discussed from a convergence point of view, and a geometrical interpretation of the adaptive algorithms is given in order to account for their slow rate of convergence. Finally, an improved adaptive observer is formulated for nth order, linear, single-output, observable and time invariant systems. The proposed scheme uses only input and output measurements, is globally asymptotically stable and has an arbitrarily fast rate of convergence for both the system state and the system parameters. Simulation results are included.

Journal ArticleDOI
R. Mifflin1
TL;DR: In this paper, the convergence of a method of centers algorithm for nonlinear programming problems is considered, where the subproblems that must be solved during its execution may be solved by finite-step procedures.
Abstract: Convergence of a method of centers algorithm for solving nonlinear programming problems is considered. The algorithm is defined so that the subproblems that must be solved during its execution may be solved by finite-step procedures. Conditions are given under which the algorithm generates sequences of feasible points and constraint multiplier vectors that have accumulation points satisfying the Fritz John or the Kuhn-Tucker optimality conditions. Under stronger assumptions, linear convergence rates are established for the sequences of objective function, constraint function, feasible point, and multiplier values.

Journal ArticleDOI
TL;DR: Pironneau and Polak as mentioned in this paper proved theorems which show that three of these methods have a linear rate of convergence for certain convex problems in which the objective functions have positive definite Hessians near the solutions.
Abstract: This paper is concerned with first-order methods of feasible directions. Pironneau and Polak have recently proved theorems which show that three of these methods have a linear rate of convergence for certain convex problems in which the objective functions have positive definite Hessians near the solutions. In the present note, it is shown that these theorems on rate of convergence can be extended to larger classes of problems. These larger classes are determined in part by certain second-order sufficiency conditions, and they include many nonconvex problems. The arguments used here are based on the finite-dimensional version of Hestenes' indirect sufficiency method.

Journal ArticleDOI
01 Dec 1976-Metrika
TL;DR: It is shown that minimum distance estimators for families of unimodal densities are always consistent and the rate of convergence is indicated.
Abstract: It is shown that minimum distance estimators for families of unimodal densities are always consistent; the rate of convergence is indicated. An algorithm is proposed for computing the minimum distance estimator for the family of all unimodal densities. References are given to the maximum likelihood method and the kernel method.


Journal ArticleDOI
TL;DR: In this article, large-matrix extended-shell model calculations are used to compute self-consistency corrections to the effective interaction and to the linked-cluster effective interaction, and the influence of various partial corrections is tested.

Journal ArticleDOI
TL;DR: In this paper, the authors consider finite element projection methods for linear partial differential equations, in which the spaces of trial functions and test functions may be different and conditions implying equality of dimensions and uniform coerciveness are required, the most important of which resembles a strong form of an inverse assumption.
Abstract: We consider finite element projection methods for linear partial differential equations, in which the spaces of trial functions and test functions may be different. In addition to approximation and smoothness properties, conditions implying equality of dimensions and uniform coerciveness are required, the most important of which resembles a strong form of an inverse assumption. Our results provide a mechanism for the difference in the rate of convergence of Galerkin procedures with cubic splines and Hermite cubics, applied to first-order symmetric hyperbolic problems [13].