Open Access Book

Optimization Theory and Methods: Nonlinear Programming

Wenyu Sun, +1 more
TLDR
In this book, convergence theory for exact line search and convergence theorems for inexact line search are developed and used to establish convergence conditions for unconstrained optimization methods.
Abstract
Preface
1 Introduction
1.1 Introduction
1.2 Mathematics Foundations
1.2.1 Norm
1.2.2 Inverse and Generalized Inverse of a Matrix
1.2.3 Properties of Eigenvalues
1.2.4 Rank-One Update
1.2.5 Function and Differential
1.3 Convex Sets and Convex Functions
1.3.1 Convex Sets
1.3.2 Convex Functions
1.3.3 Separation and Support of Convex Sets
1.4 Optimality Conditions for Unconstrained Case
1.5 Structure of Optimization Methods
Exercises
2 Line Search
2.1 Introduction
2.2 Convergence Theory for Exact Line Search
2.3 Section Methods
2.3.1 The Golden Section Method
2.3.2 The Fibonacci Method
2.4 Interpolation Method
2.4.1 Quadratic Interpolation Methods
2.4.2 Cubic Interpolation Method
2.5 Inexact Line Search Techniques
2.5.1 Armijo and Goldstein Rule
2.5.2 Wolfe-Powell Rule
2.5.3 Goldstein Algorithm and Wolfe-Powell Algorithm
2.5.4 Backtracking Line Search
2.5.5 Convergence Theorems of Inexact Line Search
Exercises
3 Newton's Methods
3.1 The Steepest Descent Method
3.1.1 The Steepest Descent Method
3.1.2 Convergence of the Steepest Descent Method
3.1.3 Barzilai and Borwein Gradient Method
3.1.4 Appendix: Kantorovich Inequality
3.2 Newton's Method
3.3 Modified Newton's Method
3.4 Finite-Difference Newton's Method
3.5 Negative Curvature Direction Method
3.5.1 Gill-Murray Stable Newton's Method
3.5.2 Fiacco-McCormick Method
3.5.3 Fletcher-Freeman Method
3.5.4 Second-Order Step Rules
3.6 Inexact Newton's Method
Exercises
4 Conjugate Gradient Method
4.1 Conjugate Direction Methods
4.2 Conjugate Gradient Method
4.2.1 Conjugate Gradient Method
4.2.2 Beale's Three-Term Conjugate Gradient Method
4.2.3 Preconditioned Conjugate Gradient Method
4.3 Convergence of Conjugate Gradient Methods
4.3.1 Global Convergence of Conjugate Gradient Methods
4.3.2 Convergence Rate of Conjugate Gradient Methods
Exercises
5 Quasi-Newton Methods
5.1 Quasi-Newton Methods
5.1.1 Quasi-Newton Equation
5.1.2 Symmetric Rank-One (SR1) Update
5.1.3 DFP Update
5.1.4 BFGS Update and PSB Update
5.1.5 The Least Change Secant Update
5.2 The Broyden Class
5.3 Global Convergence of Quasi-Newton Methods
5.3.1 Global Convergence under Exact Line Search
5.3.2 Global Convergence under Inexact Line Search
5.4 Local Convergence of Quasi-Newton Methods
5.4.1 Superlinear Convergence of General Quasi-Newton Methods
5.4.2 Linear Convergence of General Quasi-Newton Methods
5.4.3 Local Convergence of Broyden's Rank-One Update
5.4.4 Local and Linear Convergence of DFP Method
5.4.5 Superlinear Convergence of BFGS Method
5.4.6 Superlinear Convergence of DFP Method
5.4.7 Local Convergence of Broyden's Class Methods
5.5 Self-Scaling Variable Metric (SSVM) Methods
5.5.1 Motivation to SSVM Method
5.5.2 Self-Scaling Variable Metric (SSVM) Method
5.5.3 Choices of the Scaling Factor
5.6 Sparse Quasi-Newton Methods
5.7 Limited Memory BFGS Method
Exercises
6 Trust-Region and Conic Model Methods
6.1 Trust-Region Methods
6.1.1 Trust-Region Methods
6.1.2 Convergence of Trust-Region Methods
6.1.3 Solving A Trust-Region Subproblem
6.2 Conic Model and Collinear Scaling Algorithm
6.2.1 Conic Model
6.2.2 Generalized Quasi-Newton Equation
6.2.3 Updates that Preserve Past Information
6.2.4 Collinear Scaling BFGS Algorithm
6.3 Tensor Methods
6.3.1 Tensor Method for Nonlinear Equations
6.3.2 Tensor Methods for Unconstrained Optimization
Exercises
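Among the Chapter 2 techniques above, the backtracking line search with the Armijo rule is the simplest to state. Below is a minimal sketch (not code from the book) of backtracking enforcing the Armijo sufficient-decrease condition; the function names, the test objective, and all parameter values are illustrative assumptions.

```python
import numpy as np

def backtracking_line_search(f, grad, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Backtracking line search enforcing the Armijo (sufficient decrease) condition
    f(x + alpha*d) <= f(x) + c * alpha * grad(x)^T d, where d is a descent direction."""
    alpha = alpha0
    fx = f(x)
    slope = np.dot(grad(x), d)          # directional derivative along d (must be < 0)
    while f(x + alpha * d) > fx + c * alpha * slope:
        alpha *= rho                    # shrink the step until sufficient decrease holds
    return alpha

# Usage: one steepest-descent step on a simple quadratic f(x) = 0.5 * x^T A x
A = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = np.array([1.0, -2.0])
d = -grad(x)                            # steepest descent direction
alpha = backtracking_line_search(f, grad, x, d)
x_new = x + alpha * d
```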


Citations
Journal Article

Kalman filtering with state constraints: a survey of linear and nonlinear algorithms

TL;DR: In this paper, the authors provide an overview of various ways to incorporate state constraints into the Kalman filter and its nonlinear modifications, including the unscented Kalman filter, the particle filter, and the extended Kalman filter.
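One family of approaches surveyed there handles linear equality constraints by projecting the unconstrained estimate onto the constraint set. The sketch below shows a covariance-weighted projection for constraints of the form D x = d; it is an illustrative outline, not the survey's code, and all names and numerical values are assumptions.

```python
import numpy as np

def project_state(x, P, D, d):
    """Project a Kalman filter state estimate x (covariance P) onto the linear
    equality constraint D x = d via the covariance-weighted projection
    x* = x - P D^T (D P D^T)^{-1} (D x - d)."""
    S = D @ P @ D.T
    K = P @ D.T @ np.linalg.inv(S)
    return x - K @ (D @ x - d)

# Usage: enforce that the two state components sum to 1
x_hat = np.array([0.7, 0.4])            # unconstrained estimate
P = np.diag([0.2, 0.1])                 # estimate covariance
D = np.array([[1.0, 1.0]])
d = np.array([1.0])
x_constrained = project_state(x_hat, P, D, d)
```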
Journal Article

Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics

TL;DR: The basic idea is to perform nonlinear logistic regression to discriminate between the observed data and artificially generated noise; the new method is shown to strike a competitive trade-off in comparison to other estimation methods for unnormalized models.
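As a rough illustration of that idea, the sketch below estimates the parameters of an unnormalized 1-D Gaussian (with the log-normalizer treated as a free parameter) by logistic regression between data and noise samples. It is a minimal sketch of the noise-contrastive objective, not the authors' code; the model, noise distribution, and sample sizes are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Noise-contrastive estimation of an unnormalized 1-D Gaussian model.
# theta = (mu, log_sigma, c); c plays the role of the log-normalizer and
# is estimated like any other parameter.
rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.5, size=1000)          # observed samples
noise = rng.normal(0.0, 3.0, size=1000)         # artificially generated noise
log_pn = lambda u: norm.logpdf(u, 0.0, 3.0)     # known noise log-density

def log_pm(u, theta):
    mu, log_sigma, c = theta
    return -0.5 * ((u - mu) / np.exp(log_sigma)) ** 2 + c   # unnormalized model

def nce_loss(theta):
    # Negative log-likelihood of logistic regression labelling data=1, noise=0,
    # based on the log-density ratio G(u) = log p_m(u) - log p_n(u).
    G_data = log_pm(data, theta) - log_pn(data)
    G_noise = log_pm(noise, theta) - log_pn(noise)
    return np.mean(np.logaddexp(0.0, -G_data)) + np.mean(np.logaddexp(0.0, G_noise))

theta_hat = minimize(nce_loss, x0=np.zeros(3), method="BFGS").x
mu_hat, sigma_hat = theta_hat[0], np.exp(theta_hat[1])
```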
Journal Article

Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization

TL;DR: This paper proposes to linearize the ALM and the ADM for some minimization problems involving the nuclear norm, so that closed-form solutions of the linearized subproblems can be easily derived.
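The closed-form solution that arises in such nuclear-norm subproblems is singular value thresholding, i.e. the proximal operator of the nuclear norm. A minimal sketch follows; it illustrates only this building block, not the full linearized ALM/ADM schemes, and the test matrix and threshold are assumptions.

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the closed-form minimizer of
    tau * ||X||_* + 0.5 * ||X - Y||_F^2, i.e. the prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)          # soft-threshold the singular values
    return (U * s_shrunk) @ Vt

# Usage: shrink a noisy low-rank matrix toward a low-rank solution
rng = np.random.default_rng(1)
Y = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 8))
Y_noisy = Y + 0.1 * rng.standard_normal(Y.shape)
X = svt(Y_noisy, tau=0.5)
```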
Journal Article

A Fast Algorithm for Sparse Reconstruction Based on Shrinkage, Subspace Optimization, and Continuation

TL;DR: The code FPC_AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to $\mu$, and it exhibits state-of-the-art performance in terms of both speed and the ability to recover sparse signals.
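The shrinkage-plus-continuation idea can be illustrated with plain iterative soft thresholding warm-started along a decreasing sequence of $\mu$ values. The sketch below is not the FPC_AS code; the problem sizes, stage count, and step rule are assumptions.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator, the prox of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_continuation(A, b, mu_target, n_stages=5, iters=200):
    """Solve min_x mu*||x||_1 + 0.5*||Ax - b||^2 by iterative shrinkage,
    warm-starting along a decreasing sequence of mu values (continuation)."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    mus = np.geomspace(10 * mu_target, mu_target, n_stages)
    for mu in mus:                                # decreasing sequence of mu values
        for _ in range(iters):
            grad = A.T @ (A @ x - b)
            x = soft(x - grad / L, mu / L)        # gradient step followed by shrinkage
    return x

# Usage: recover a sparse vector from a few random measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 60]] = [1.5, -2.0, 0.8]
b = A @ x_true
x_rec = ista_continuation(A, b, mu_target=0.01)
```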
Journal Article

A Survey of Optimization Methods From a Machine Learning Perspective

TL;DR: This article provides a systematic retrospective and summary of optimization methods from the perspective of machine learning, offering guidance for developments in both optimization and machine learning research.