Author

Ya-xiang Yuan

Bio: Ya-xiang Yuan is an academic researcher from the Chinese Academy of Sciences. The author has contributed to research in topics: Trust region & Gradient method. The author has an h-index of 36, co-authored 132 publications receiving 6,667 citations. Previous affiliations of Ya-xiang Yuan include Academia Sinica & the University of Cambridge.


Papers
Journal ArticleDOI
TL;DR: This paper presents a new version of the conjugate gradient method, which converges globally, provided the line search satisfies the standard Wolfe conditions.
Abstract: Conjugate gradient methods are widely used for unconstrained optimization, especially large scale problems. The strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method, which converges globally, provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, being similar to those required by the Zoutendijk condition.
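The standard (weak) Wolfe conditions and a conjugate gradient iteration of this flavor can be sketched as follows. The β formula below is the Dai-Yuan choice; the coarse grid "line search", the restart safeguard, and all helper names are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def wolfe_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the standard (weak) Wolfe conditions for step length alpha."""
    g_d = grad(x) @ d
    sufficient_decrease = f(x + alpha * d) <= f(x) + c1 * alpha * g_d
    curvature = grad(x + alpha * d) @ d >= c2 * g_d
    return sufficient_decrease and curvature

def cg_wolfe(f, grad, x0, tol=1e-8, max_iter=200):
    """Nonlinear CG with the Dai-Yuan beta; the coarse grid step search
    below is an illustrative stand-in for a real Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:           # safeguard: restart with steepest descent
            d = -g
        alpha = next((a for a in (1.0, 0.5, 0.25, 0.1, 0.05, 0.01)
                      if wolfe_ok(f, grad, x, d, a)), 0.01)
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (d @ (g_new - g))   # Dai-Yuan choice
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# convex quadratic with minimizer [1, 2]
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = A @ np.array([1.0, 2.0])
x_star = cg_wolfe(lambda x: 0.5 * x @ A @ x - b @ x,
                  lambda x: A @ x - b, [0.0, 0.0])
```

On this convex quadratic the iterates approach the minimizer; a production implementation would replace the step grid with a proper Wolfe line search.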

1,065 citations

Book
25 Nov 2010
TL;DR: This book develops unconstrained optimization methods with their convergence theory, covering exact and inexact line search, Newton-type methods, conjugate gradient methods, quasi-Newton methods, and trust-region and conic model methods, with exercises for each chapter.
Abstract: Preface
1 Introduction
  1.1 Introduction
  1.2 Mathematics Foundations
    1.2.1 Norm
    1.2.2 Inverse and Generalized Inverse of a Matrix
    1.2.3 Properties of Eigenvalues
    1.2.4 Rank-One Update
    1.2.5 Function and Differential
  1.3 Convex Sets and Convex Functions
    1.3.1 Convex Sets
    1.3.2 Convex Functions
    1.3.3 Separation and Support of Convex Sets
  1.4 Optimality Conditions for Unconstrained Case
  1.5 Structure of Optimization Methods
  Exercises
2 Line Search
  2.1 Introduction
  2.2 Convergence Theory for Exact Line Search
  2.3 Section Methods
    2.3.1 The Golden Section Method
    2.3.2 The Fibonacci Method
  2.4 Interpolation Method
    2.4.1 Quadratic Interpolation Methods
    2.4.2 Cubic Interpolation Method
  2.5 Inexact Line Search Techniques
    2.5.1 Armijo and Goldstein Rule
    2.5.2 Wolfe-Powell Rule
    2.5.3 Goldstein Algorithm and Wolfe-Powell Algorithm
    2.5.4 Backtracking Line Search
    2.5.5 Convergence Theorems of Inexact Line Search
  Exercises
3 Newton's Methods
  3.1 The Steepest Descent Method
    3.1.1 The Steepest Descent Method
    3.1.2 Convergence of the Steepest Descent Method
    3.1.3 Barzilai and Borwein Gradient Method
    3.1.4 Appendix: Kantorovich Inequality
  3.2 Newton's Method
  3.3 Modified Newton's Method
  3.4 Finite-Difference Newton's Method
  3.5 Negative Curvature Direction Method
    3.5.1 Gill-Murray Stable Newton's Method
    3.5.2 Fiacco-McCormick Method
    3.5.3 Fletcher-Freeman Method
    3.5.4 Second-Order Step Rules
  3.6 Inexact Newton's Method
  Exercises
4 Conjugate Gradient Method
  4.1 Conjugate Direction Methods
  4.2 Conjugate Gradient Method
    4.2.1 Conjugate Gradient Method
    4.2.2 Beale's Three-Term Conjugate Gradient Method
    4.2.3 Preconditioned Conjugate Gradient Method
  4.3 Convergence of Conjugate Gradient Methods
    4.3.1 Global Convergence of Conjugate Gradient Methods
    4.3.2 Convergence Rate of Conjugate Gradient Methods
  Exercises
5 Quasi-Newton Methods
  5.1 Quasi-Newton Methods
    5.1.1 Quasi-Newton Equation
    5.1.2 Symmetric Rank-One (SR1) Update
    5.1.3 DFP Update
    5.1.4 BFGS Update and PSB Update
    5.1.5 The Least Change Secant Update
  5.2 The Broyden Class
  5.3 Global Convergence of Quasi-Newton Methods
    5.3.1 Global Convergence under Exact Line Search
    5.3.2 Global Convergence under Inexact Line Search
  5.4 Local Convergence of Quasi-Newton Methods
    5.4.1 Superlinear Convergence of General Quasi-Newton Methods
    5.4.2 Linear Convergence of General Quasi-Newton Methods
    5.4.3 Local Convergence of Broyden's Rank-One Update
    5.4.4 Local and Linear Convergence of DFP Method
    5.4.5 Superlinear Convergence of BFGS Method
    5.4.6 Superlinear Convergence of DFP Method
    5.4.7 Local Convergence of Broyden's Class Methods
  5.5 Self-Scaling Variable Metric (SSVM) Methods
    5.5.1 Motivation to SSVM Method
    5.5.2 Self-Scaling Variable Metric (SSVM) Method
    5.5.3 Choices of the Scaling Factor
  5.6 Sparse Quasi-Newton Methods
  5.7 Limited Memory BFGS Method
  Exercises
6 Trust-Region and Conic Model Methods
  6.1 Trust-Region Methods
    6.1.1 Trust-Region Methods
    6.1.2 Convergence of Trust-Region Methods
    6.1.3 Solving A Trust-Region Subproblem
  6.2 Conic Model and Collinear Scaling Algorithm
    6.2.1 Conic Model
    6.2.2 Generalized Quasi-Newton Equation
    6.2.3 Updates that Preserve Past Information
    6.2.4 Collinear Scaling BFGS Algorithm
  6.3 Tensor Methods
    6.3.1 Tensor Method for Nonlinear Equations
    6.3.2 Tensor Methods for Unconstrained Optimization
  Exercises
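The Armijo backtracking rule covered in the line-search chapter admits a very short sketch; the function name, constants, and test problem below are illustrative:

```python
import numpy as np

def backtracking(f, grad, x, d, alpha0=1.0, rho=0.5, c1=1e-4):
    """Armijo backtracking: shrink alpha until f(x + alpha*d) falls below
    the sufficient-decrease line f(x) + c1*alpha*grad(x)^T d."""
    alpha = alpha0
    fx, slope = f(x), grad(x) @ d
    while f(x + alpha * d) > fx + c1 * alpha * slope:
        alpha *= rho
    return alpha

# steepest-descent step on f(x) = x1^2 + 10*x2^2
f = lambda x: x[0]**2 + 10 * x[1]**2
grad = lambda x: np.array([2 * x[0], 20 * x[1]])
x = np.array([1.0, 1.0])
alpha = backtracking(f, grad, x, -grad(x))   # accepts alpha = 0.0625 here
```

For a descent direction the loop terminates, since the Armijo inequality holds for all sufficiently small steps.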

837 citations

Book
01 Jan 2006
TL;DR: This book systematically describes optimization theory and several powerful methods, including recent results, which will be very beneficial as a research reference.
Abstract: This book, a result of the authors' teaching and research experience in various universities and institutes over the past ten years, can be used as a textbook for an optimization course for graduates and senior undergraduates. It systematically describes optimization theory and several powerful methods, including recent results. For most methods, the authors discuss the idea's motivation, study the derivation, establish the global and local convergence, describe algorithmic steps, and discuss the numerical performance. The book deals with both theory and algorithms of optimization concurrently. It also contains an extensive bibliography. Finally, apart from its use for teaching, Optimization Theory and Methods will be very beneficial as a research reference.

591 citations

Journal ArticleDOI
TL;DR: In this article, the authors study the global convergence properties of the restricted Broyden class of quasi-Newton methods, when applied to a convex objective function, assuming that the line search satisfies a standard sufficient decrease condition and that the initial Hessian approximation is any positive definite matrix.
Abstract: We study the global convergence properties of the restricted Broyden class of quasi-Newton methods, when applied to a convex objective function. We assume that the line search satisfies a standard sufficient decrease condition and that the initial Hessian approximation is any positive definite matrix. We show global and superlinear convergence for this class of methods, except for DFP. This generalizes Powell’s well-known result for the BFGS method. The analysis gives us insight into the properties of these algorithms; in particular it shows that DFP lacks a very desirable self-correcting property possessed by BFGS.
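The restricted Broyden class studied above interpolates between the BFGS and DFP updates; a minimal sketch (helper name and test values are assumptions, not from the paper):

```python
import numpy as np

def broyden_class_update(B, s, y, phi):
    """Restricted Broyden class (hypothetical helper): phi = 0 gives the
    BFGS update, phi = 1 gives DFP, phi in [0, 1] interpolates."""
    sy = y @ s                       # curvature s^T y; must be positive
    Bs = B @ s
    bfgs = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
    rho = 1.0 / sy
    E = np.eye(len(s)) - rho * np.outer(y, s)
    dfp = E @ B @ E.T + rho * np.outer(y, y)
    return (1 - phi) * bfgs + phi * dfp

# every member of the class satisfies the secant equation B_new @ s = y
rng = np.random.default_rng(0)
s, y = rng.standard_normal(3), rng.standard_normal(3)
if y @ s < 0:
    y = -y                           # ensure positive curvature
B_new = broyden_class_update(np.eye(3), s, y, phi=0.3)
```

Both endpoint updates satisfy the secant equation and preserve symmetry, so any convex combination does too.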

338 citations

Journal ArticleDOI
TL;DR: If ||F(x)|| provides a local error bound for the system of nonlinear equations F(x)=0, it is shown that the sequence {xk} generated by the new method converges to a solution quadratically, which is stronger than dist(xk,X*)→0 given by Yamashita and Fukushima.
Abstract: Recently Yamashita and Fukushima [11] established an interesting quadratic convergence result for the Levenberg-Marquardt method without the nonsingularity assumption. This paper extends the result of Yamashita and Fukushima by using µk = ||F(xk)||^δ, where δ ∈ (1,2), instead of µk = ||F(xk)||² as the Levenberg-Marquardt parameter. If ||F(x)|| provides a local error bound for the system of nonlinear equations F(x) = 0, it is shown that the sequence {xk} generated by the new method converges to a solution quadratically, which is stronger than dist(xk, X*) → 0 given by Yamashita and Fukushima. Numerical results show that the method performs well for singular problems.
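A minimal sketch of a Levenberg-Marquardt iteration with a parameter of the form µk = ||F(xk)||^δ, δ ∈ (1,2); the toy system, the absence of any trust-region or line-search safeguard, and all names are simplifying assumptions, not the paper's method:

```python
import numpy as np

def lm_sketch(F, J, x0, delta=1.5, tol=1e-10, max_iter=100):
    """Levenberg-Marquardt iteration with mu_k = ||F(x_k)||^delta,
    delta in (1, 2); no globalization safeguard (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        nf = np.linalg.norm(Fx)
        if nf < tol:
            break
        mu = nf ** delta
        Jx = J(x)
        # J^T J + mu*I is positive definite for mu > 0, so solve is safe
        x = x + np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), -Jx.T @ Fx)
    return x

# toy system: the line x1 + x2 = 3 intersected with a circle of radius 3
F = lambda x: np.array([x[0] + x[1] - 3, x[0]**2 + x[1]**2 - 9])
J = lambda x: np.array([[1.0, 1.0], [2 * x[0], 2 * x[1]]])
x_sol = lm_sketch(F, J, [2.5, 0.5])
```

From this starting point the residual shrinks rapidly once the iterates enter the region of fast local convergence.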

250 citations


Cited by
Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Book
01 Jan 1987
TL;DR: Iterative Methods for Optimization does more than cover traditional gradient-based optimization: it is the first book to treat sampling methods, including the Hooke & Jeeves, implicit filtering, MDS, and Nelder & Mead schemes in a unified way.
Abstract: This book presents a carefully selected group of methods for unconstrained and bound constrained optimization problems and analyzes them in depth both theoretically and algorithmically. It focuses on clarity in algorithmic description and analysis rather than generality, and while it provides pointers to the literature for the most general theoretical results and robust software, the author thinks it is more important that readers have a complete understanding of special cases that convey essential ideas. A companion to Kelley's book, Iterative Methods for Linear and Nonlinear Equations (SIAM, 1995), this book contains many exercises and examples and can be used as a text, a tutorial for self-study, or a reference. Iterative Methods for Optimization does more than cover traditional gradient-based optimization: it is the first book to treat sampling methods, including the Hooke & Jeeves, implicit filtering, MDS, and Nelder & Mead schemes in a unified way.
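In the spirit of the sampling methods mentioned above, a minimal compass (pattern) search, a simpler relative of Hooke & Jeeves, can be sketched as follows (all names and constants are illustrative):

```python
import numpy as np

def compass_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=10000):
    """Minimal compass (pattern) search: poll +/- each coordinate
    direction; if no poll point improves, shrink the step."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= shrink       # refine the mesh and poll again
    return x

x_min = compass_search(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [0.0, 0.0])
```

No gradients are evaluated anywhere, which is the defining trait of this family of methods.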

1,980 citations

Journal ArticleDOI
TL;DR: Since its popularization in the late 1970s, Sequential Quadratic Programming (SQP) has arguably become the most successful method for solving nonlinearly constrained optimization problems.
Abstract: Since its popularization in the late 1970s, Sequential Quadratic Programming (SQP) has arguably become the most successful method for solving nonlinearly constrained optimization problems. As with most optimization methods, SQP is not a single algorithm, but rather a conceptual method from which numerous specific algorithms have evolved. Backed by a solid theoretical and computational foundation, both commercial and public-domain SQP algorithms have been developed and used to solve a remarkably large set of important practical problems. Recently large-scale versions have been devised and tested with promising results.
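For an equality-constrained problem, a single local SQP step reduces to solving a linear KKT system; a minimal sketch under that simplification (the helper name and toy problem are assumptions, not any specific SQP code):

```python
import numpy as np

def sqp_step(grad_f, hess_L, c, jac_c, x, lam):
    """One local SQP step for min f(x) s.t. c(x) = 0 (Lagrangian
    L = f + lam^T c): solve the KKT system
        [H  A^T] [dx  ]   [-grad f(x)]
        [A   0 ] [lam+] = [  -c(x)   ]
    with H = hess_L(x, lam) and A = jac_c(x)."""
    H, A = hess_L(x, lam), jac_c(x)
    m, n = A.shape
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-grad_f(x), -c(x)]))
    return x + sol[:n], sol[n:]

# toy problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1  (solution (0.5, 0.5))
grad_f = lambda x: 2 * x
hess_L = lambda x, lam: 2 * np.eye(2)      # constraint is linear
c = lambda x: np.array([x[0] + x[1] - 1])
jac_c = lambda x: np.array([[1.0, 1.0]])
x1, lam1 = sqp_step(grad_f, hess_L, c, jac_c, np.array([2.0, 0.0]), np.zeros(1))
```

Because the objective is quadratic and the constraint linear, one step lands exactly on the solution; for general nonlinear problems this step is iterated, usually with a line search or trust region for globalization.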

1,765 citations