Journal ArticleDOI

Numerical linear algebra aspects of globally convergent homotopy methods

01 Dec 1986-Siam Review (Society for Industrial and Applied Mathematics)-Vol. 28, Iss: 4, pp 529-545
TL;DR: Probability-one homotopy algorithms are a class of methods for solving nonlinear systems of equations that are globally convergent with probability one; if constructed and implemented properly, they are robust, numerically stable, accurate, and practical.
Abstract: Probability one homotopy algorithms are a class of methods for solving nonlinear systems of equations that are globally convergent with probability one. These methods are theoretically powerful, and if constructed and implemented properly, are robust, numerically stable, accurate, and practical. The concomitant numerical linear algebra problems deal with rectangular matrices, and good algorithms require a delicate balance (not always achieved) of accuracy, robustness, and efficiency in both space and time. The author's experience with globally convergent homotopy algorithms is surveyed here, and some of the linear algebra difficulties for dense and sparse problems are discussed.
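The homotopy idea behind the paper can be illustrated with a toy sketch. The Python snippet below embeds a single equation F(x) = 0 in H(x, t) = t·F(x) + (1 − t)(x − a), whose solution at t = 0 is the chosen starting point a, and follows the solution curve to t = 1 with a Newton corrector at each step. The test function, step count, and tolerances are illustrative assumptions; the paper's actual algorithms use probability-one homotopy maps and curve tracking with rectangular Jacobian matrices.

```python
def newton_homotopy(F, dF, a, steps=100, newton_iters=20, tol=1e-12):
    """Track H(x, t) = t*F(x) + (1-t)*(x - a) = 0 from t=0 to t=1."""
    x = a  # at t = 0 the solution is x = a by construction
    for k in range(1, steps + 1):
        t = k / steps
        # Newton corrector on H(., t), warm-started from the previous point
        for _ in range(newton_iters):
            h = t * F(x) + (1 - t) * (x - a)
            dh = t * dF(x) + (1 - t)
            step = h / dh
            x -= step
            if abs(step) < tol:
                break
    return x

F = lambda x: x**3 - 2*x - 5        # classic test equation, root near 2.09455
dF = lambda x: 3*x**2 - 2
root = newton_homotopy(F, dF, a=1.0)
print(round(root, 7))  # 2.0945515
```

Note that the corrector here solves a square 1-by-1 system; for systems of n equations the homotopy map has an n-by-(n+1) Jacobian, which is exactly the rectangular linear algebra the paper discusses.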


Citations
Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

BookDOI
01 Jan 1990

1,149 citations

Journal ArticleDOI
TL;DR: Modifications to the back-propagation algorithm described by Rumelhart et al. (1986) can greatly accelerate convergence, which matters because in many applications the number of iterations required before convergence can be large.
Abstract: The utility of the back-propagation method in establishing suitable weights in a distributed adaptive network has been demonstrated repeatedly. Unfortunately, in many applications, the number of iterations required before convergence can be large. Modifications to the back-propagation algorithm described by Rumelhart et al. (1986) can greatly accelerate convergence. The modifications consist of three changes: 1) instead of updating the network weights after each pattern is presented to the network, the network is updated only after the entire repertoire of patterns to be learned has been presented, at which time the algebraic sums of all the weight changes are applied; 2) instead of keeping the learning rate (i.e., the multiplier on the step size) constant, it is varied dynamically so that the algorithm utilizes a near-optimum value, as determined by the local optimization topography; and 3) the momentum factor is set to zero when, as signified by the failure of a step to reduce the total error, the information inherent in prior steps is more likely to be misleading than beneficial. Only after the network takes a useful step, i.e., one that reduces the total error, does the momentum factor again assume a non-zero value. Considering the selection of weights in neural nets as a problem in classical nonlinear optimization theory, the rationale for algorithms seeking only those weights that produce the globally minimum error is reviewed and rejected.
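The three modifications can be sketched in a few lines. The following Python toy applies them to batch gradient descent on a small linear least-squares fit: (1) weights are updated once per epoch from the summed gradient, (2) the learning rate grows after a successful step and shrinks otherwise, and (3) momentum is reset to zero whenever a step fails to reduce the total error. The data, model, and tuning constants are illustrative assumptions, not the authors' experimental setup.

```python
# Fit y = w*x + b to data generated from w=3, b=1 (assumed toy problem).
data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]

def total_error(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data)

w, b = 0.0, 0.0
lr = 0.01                 # learning rate, adapted dynamically
vw = vb = 0.0             # momentum terms
mu = 0.9                  # momentum factor
err = total_error(w, b)
for epoch in range(1000):
    # (1) accumulate the gradient over the whole pattern set
    gw = sum(2 * (w * x + b - y) * x for x, y in data)
    gb = sum(2 * (w * x + b - y) for x, y in data)
    vw, vb = mu * vw - lr * gw, mu * vb - lr * gb
    new_w, new_b = w + vw, b + vb
    new_err = total_error(new_w, new_b)
    if new_err < err:     # useful step: accept it and (2) grow the rate
        w, b, err = new_w, new_b, new_err
        lr *= 1.1
    else:                 # failed step: reject it, shrink the rate,
        lr *= 0.5         # and (3) zero the momentum
        vw = vb = 0.0
print(round(w, 3), round(b, 3))
```

Because steps that increase the total error are rejected outright, the error is monotone non-increasing, and the rate adaptation keeps the step size near the largest stable value.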

1,017 citations

Book
01 Jan 1987
TL;DR: Introduction to Numerical Continuation Methods is an accessible introduction to numerical continuation methods for solving nonlinear systems of equations.
Abstract: From the Publisher: Introduction to Numerical Continuation Methods continues to be useful for researchers and graduate students in mathematics, sciences, engineering, economics, and business looking for an introduction to computational methods for solving a large variety of nonlinear systems of equations. A background in elementary analysis and linear algebra is adequate preparation for reading this book; some knowledge from a first course in numerical analysis may also be helpful.

889 citations

Journal ArticleDOI
TL;DR: Describes the structure and design of the software package PHC, which features a great variety of root-counting methods among its tools; its portability is ensured by the GNU Ada compiler.
Abstract: Polynomial systems occur in a wide variety of application domains. Homotopy continuation methods are reliable and powerful methods for numerically computing approximations to all isolated complex solutions. During the last decade considerable progress has been made in exploiting structure in a polynomial system, in particular its sparsity. In this article the structure and design of the software package PHC are described. The main program operates in several modes, is menu driven, and is file oriented. The package features a great variety of root-counting methods among its tools. The outline of one black-box solver is sketched, and a report is given on its performance on a large database of test problems. The software has been developed on four different machine architectures. Its portability is ensured by the GNU Ada compiler.
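In one variable the total-degree root count reduces to something very small: to approximate all d roots of p(z), start from the d-th roots of unity (the solutions of z^d − 1 = 0) and track each along H(z, t) = (1 − t)·γ·(z^d − 1) + t·p(z), where a random complex constant γ (the "gamma trick") makes path crossings improbable. The polynomial, the fixed γ, and the step counts below are illustrative assumptions; PHC itself handles multivariate systems, sparse structure, and far more sophisticated root counts.

```python
import cmath

def track(p, dp, d, z0, g, steps=200):
    """Follow one path of H(z,t) = (1-t)*g*(z^d - 1) + t*p(z) to t = 1."""
    z = z0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(30):                      # Newton corrector
            h = (1 - t) * g * (z**d - 1) + t * p(z)
            dh = (1 - t) * g * d * z**(d - 1) + t * dp(z)
            step = h / dh
            z -= step
            if abs(step) < 1e-12:
                break
    return z

p = lambda z: z**3 - 1j * z - 2      # assumed example target polynomial
dp = lambda z: 3 * z**2 - 1j
d = 3
g = cmath.exp(1j * 0.7)              # fixed "random" gamma for the sketch
starts = [cmath.exp(2j * cmath.pi * k / d) for k in range(d)]
roots = [track(p, dp, d, z0, g) for z0 in starts]
for r in roots:
    print(abs(p(r)) < 1e-8)          # expect small residuals at t = 1
```

Each start point is an exact solution of the start system, so the corrector begins on the path and stays near it as t advances.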

708 citations

References
Book
01 Feb 1996
TL;DR: Presents Newton's method and its globally convergent and secant modifications for unconstrained minimization and nonlinear equations, together with a modular system of algorithms (Appendix A, by Robert Schnabel).
Abstract (table of contents):
Preface
1. Introduction: Problems to be considered; Characteristics of 'real-world' problems; Finite-precision arithmetic and measurement of error; Exercises
2. Nonlinear Problems in One Variable: What is not possible; Newton's method for solving one equation in one unknown; Convergence of sequences of real numbers; Convergence of Newton's method; Globally convergent methods for solving one equation in one unknown; Methods when derivatives are unavailable; Minimization of a function of one variable; Exercises
3. Numerical Linear Algebra Background: Vector and matrix norms and orthogonality; Solving systems of linear equations (matrix factorizations); Errors in solving linear systems; Updating matrix factorizations; Eigenvalues and positive definiteness; Linear least squares; Exercises
4. Multivariable Calculus Background: Derivatives and multivariable models; Multivariable finite-difference derivatives; Necessary and sufficient conditions for unconstrained minimization; Exercises
5. Newton's Method for Nonlinear Equations and Unconstrained Minimization: Newton's method for systems of nonlinear equations; Local convergence of Newton's method; The Kantorovich and contractive mapping theorems; Finite-difference derivative methods for systems of nonlinear equations; Newton's method for unconstrained minimization; Finite-difference derivative methods for unconstrained minimization; Exercises
6. Globally Convergent Modifications of Newton's Method: The quasi-Newton framework; Descent directions; Line searches; The model-trust region approach; Global methods for systems of nonlinear equations; Exercises
7. Stopping, Scaling, and Testing: Scaling; Stopping criteria; Testing; Exercises
8. Secant Methods for Systems of Nonlinear Equations: Broyden's method; Local convergence analysis of Broyden's method; Implementation of quasi-Newton algorithms using Broyden's update; Other secant updates for nonlinear equations; Exercises
9. Secant Methods for Unconstrained Minimization: The symmetric secant update of Powell; Symmetric positive definite secant updates; Local convergence of positive definite secant methods; Implementation of quasi-Newton algorithms using the positive definite secant update; Another convergence result for the positive definite secant method; Other secant updates for unconstrained minimization; Exercises
10. Nonlinear Least Squares: The nonlinear least-squares problem; Gauss-Newton-type methods; Full Newton-type methods; Other considerations in solving nonlinear least-squares problems; Exercises
11. Methods for Problems with Special Structure: The sparse finite-difference Newton method; Sparse secant methods; Deriving least-change secant updates; Analyzing least-change secant methods; Exercises
Appendix A. A Modular System of Algorithms for Unconstrained Minimization and Nonlinear Equations (by Robert Schnabel)
Appendix B. Test Problems (by Robert Schnabel)
References; Author Index; Subject Index
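As a taste of Chapters 5 and 8, here is a minimal Python sketch of Broyden's method with a finite-difference initial Jacobian; the 2-by-2 test system (a circle intersected with a line) is an assumed example, not one from the book.

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """Forward-difference approximation to the Jacobian of F at x."""
    n = len(x)
    J = np.empty((n, n))
    Fx = F(x)
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(x + e) - Fx) / h
    return J

def broyden(F, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)            # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)  # quasi-Newton step: B s = -F(x)
        x = x + s
        Fx_new = F(x)
        y = Fx_new - Fx
        # Broyden's least-change secant update:
        #   B+ = B + (y - B s) s^T / (s^T s)
        B += np.outer(y - B @ s, s) / (s @ s)
        Fx = Fx_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

def F(v):
    x, y = v
    return np.array([x * x + y * y - 4.0, x - y])   # circle and line

sol = broyden(F, [1.3, 1.2])
print(np.round(sol, 6))
```

The update touches only a rank-one correction of B per iteration, which is what makes the factorization-updating machinery of Chapter 3 pay off in practice.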

6,831 citations

Book
01 Mar 1983
TL;DR: Covers Newton's method for nonlinear equations and unconstrained minimization, its globally convergent modifications, secant methods, nonlinear least squares, and methods for problems with special structure.

6,217 citations

Journal ArticleDOI
TL;DR: Explores an incremental approach to the solution of buckling and snapping problems, using the length of the equilibrium path as a control parameter together with Newton's second-order iteration method.
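The idea of using arc length rather than the load parameter as the control variable can be sketched on a toy curve: pseudo-arclength continuation augments g(x, l) = 0 with a step-length constraint and solves the pair by Newton, which lets the tracker pass turning points where parameter continuation in l alone would fail. The unit circle x² + l² = 1 (which has folds at l = ±1) and all step sizes below are assumptions for illustration.

```python
import math

def tangent(x, l, px, pl):
    # Unit null vector of the gradient [2x, 2l], oriented like (px, pl)
    tx, tl = -l, x
    nrm = math.hypot(tx, tl)
    tx, tl = tx / nrm, tl / nrm
    if tx * px + tl * pl < 0:
        tx, tl = -tx, -tl
    return tx, tl

def step(x0, l0, tx, tl, ds):
    x, l = x0 + ds * tx, l0 + ds * tl            # Euler predictor
    for _ in range(30):                          # Newton corrector on
        g = x * x + l * l - 1.0                  # g(x, l) = 0 plus the
        n = tx * (x - x0) + tl * (l - l0) - ds   # arclength constraint
        det = 2 * x * tl - 2 * l * tx            # augmented Jacobian det
        dx = (2 * l * n - tl * g) / det
        dl = (tx * g - 2 * x * n) / det
        x, l = x + dx, l + dl
        if abs(dx) + abs(dl) < 1e-12:
            break
    return x, l

x, l, tx, tl = 1.0, 0.0, 0.0, 1.0    # start on the curve at (1, 0)
xs = []
for _ in range(30):
    x, l = step(x, l, tx, tl, 0.1)
    tx, tl = tangent(x, l, tx, tl)
    xs.append(x)
print(min(xs) < 0, abs(x * x + l * l - 1.0) < 1e-10)
```

At the fold (0, 1) the plain Jacobian in x is singular, but the augmented determinant stays away from zero, so the tracker moves smoothly through the turning point and x changes sign.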

1,821 citations