Book

Practical Methods of Optimization

01 Jan 2009
TL;DR: The aim of this book is to provide a discussion of constrained optimization and its applications to linear programming and other optimization problems.
Abstract: Preface; Table of Notation; Part 1: Unconstrained Optimization (Introduction; Structure of Methods; Newton-like Methods; Conjugate Direction Methods; Restricted Step Methods; Sums of Squares and Nonlinear Equations); Part 2: Constrained Optimization (Introduction; Linear Programming; The Theory of Constrained Optimization; Quadratic Programming; General Linearly Constrained Optimization; Nonlinear Programming; Other Optimization Problems; Non-Smooth Optimization); References; Subject Index.
Citations
Journal ArticleDOI
TL;DR: QUANTUM ESPRESSO as discussed by the authors is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave).
Abstract: QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.

19,985 citations

Journal ArticleDOI
TL;DR: Several arguments that support the observed high accuracy of SVMs are reviewed, and numerous examples and proofs of most of the key theorems are given.
Abstract: The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
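
To make the kernel mapping concrete: the decision function of a kernel SVM has the form f(x) = Σ_i α_i y_i K(x_i, x) + b, which is nonlinear in the input even though it is linear in the feature space induced by the kernel. The sketch below is illustrative only; the toy support vectors, multipliers, and RBF width are made-up placeholders, not values from the tutorial.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian radial basis function kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def svm_decision(x, support_vectors, labels, alphas, b, gamma=1.0):
    """Kernel SVM decision function f(x) = sum_i alpha_i * y_i * K(x_i, x) + b."""
    return sum(a * y * rbf_kernel(sv, x, gamma)
               for sv, y, a in zip(support_vectors, labels, alphas)) + b

# Made-up placeholders: two support vectors with labels +1 / -1.
svs = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
ys = [+1.0, -1.0]
alphas = [0.5, 0.5]   # hypothetical dual variables
b = 0.0

print(np.sign(svm_decision(np.array([0.2, 0.1]), svs, ys, alphas, b)))
```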

15,696 citations


Cites background from "Practical Methods of Optimization"

  • ...This particular dual formulation of the problem is called the Wolfe dual (Fletcher, 1987)....

  • ...This is a property of any convex programming problem (Fletcher, 1987)....

  • ...For the primal problem above, the KKT conditions may be stated (Fletcher, 1987): $\partial L_P / \partial w_\nu = w_\nu - \sum_i \alpha_i y_i x_{i\nu} = 0$, $\nu = 1, \dots, d$ (17); $\partial L_P / \partial b = -\sum_i \alpha_i y_i = 0$ (18); $y_i(x_i \cdot w + b) - 1 \ge 0$, $i = 1, \dots, l$ (19); $\alpha_i \ge 0\ \forall i$ (20); $\alpha_i\,(y_i(w \cdot x_i + b) - 1) = 0\ \forall i$ (21). The KKT conditions are satisfied at the…...

  • ...For more on nonlinear programming techniques see (Fletcher, 1987; Mangasarian, 1969; McCormick, 1983)....

  • ...…with any kind of constraints, provided that the intersection of the set of feasible directions with the set of descent directions coincides with the intersection of the set of feasible directions for linearized constraints with the set of descent directions (see Fletcher, 1987; McCormick, 1983)....

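The KKT conditions (17)-(21) quoted in the excerpts above can be checked numerically. The sketch below is illustrative only: the two-point dataset is made up, and SciPy's general-purpose SLSQP solver merely stands in for the quadratic programming methods treated in Fletcher's book. It solves the Wolfe dual of a hard-margin linear SVM and then evaluates each condition.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data (made up for illustration).
X = np.array([[1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, -1.0])
Q = (y[:, None] * y[None, :]) * (X @ X.T)   # Q_ij = y_i y_j x_i . x_j

# Wolfe dual of the hard-margin linear SVM:
#   max_a  sum_i a_i - 1/2 a^T Q a   s.t.  a >= 0,  sum_i a_i y_i = 0
res = minimize(lambda a: 0.5 * a @ Q @ a - a.sum(),
               x0=np.zeros(len(y)),
               method="SLSQP",
               bounds=[(0, None)] * len(y),
               constraints=[{"type": "eq", "fun": lambda a: a @ y}])
alpha = res.x

# Recover the primal variables and check the KKT conditions (17)-(21).
w = (alpha * y) @ X                         # (17): w = sum_i a_i y_i x_i
sv = alpha > 1e-8                           # support vectors
b = np.mean(y[sv] - X[sv] @ w)              # from the active constraints in (21)

print("stationarity (18):", alpha @ y)                          # ~ 0
print("primal feasibility (19):", y * (X @ w + b) - 1)          # >= 0
print("dual feasibility (20):", alpha)                          # >= 0
print("complementarity (21):", alpha * (y * (X @ w + b) - 1))   # ~ 0
```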

Journal ArticleDOI
TL;DR: This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for function estimation, and includes a summary of currently used algorithms for training SV machines, covering both the quadratic programming part and advanced methods for dealing with large datasets.
Abstract: In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
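
As a concrete pointer to what SV "function estimation" optimizes, the sketch below evaluates Vapnik's epsilon-insensitive loss and the corresponding regularized risk that the quadratic programming algorithms surveyed in the tutorial minimize. It is illustrative only; the data, the constant C, and the tube width eps are made-up placeholders.

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Epsilon-insensitive loss |y - f(x)|_eps = max(0, |y - f(x)| - eps).

    Residuals inside the epsilon-tube contribute nothing, which is what makes
    the SV regression solution sparse in the training points.
    """
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

def svr_primal_objective(w, b, X, y, C=1.0, eps=0.1):
    """Regularized risk 0.5 * ||w||^2 + C * sum_i |y_i - (w.x_i + b)|_eps."""
    return 0.5 * w @ w + C * eps_insensitive_loss(y, X @ w + b, eps).sum()

# Made-up toy data and parameters, purely to show the quantities involved.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.05, 1.0, 2.1])
print(svr_primal_objective(np.array([1.0]), 0.0, X, y))
```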

10,696 citations


Cites background from "Practical Methods of Optimization"

  • ...This requirement is made, as we want to ensure the existence and uniqueness (for strict convexity) of a minimum of optimization problems [Fletcher, 1989]....

  • ...[Fletcher, 1989]....

Journal ArticleDOI
TL;DR: A least squares version of support vector machine (SVM) classifiers is presented whose solution follows from solving a set of linear equations rather than the quadratic programming required for classical SVMs.
Abstract: In this letter we discuss a least squares version for support vector machine (SVM) classifiers. Due to equality type constraints in the formulation, the solution follows from solving a set of linear equations, instead of quadratic programming for classical SVM's. The approach is illustrated on a two-spiral benchmark classification problem.
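
The abstract's central point, that the LS-SVM classifier is obtained from one linear system rather than a QP, can be sketched directly. The numpy snippet below follows the standard LS-SVM classifier formulation; the RBF kernel choice, the regularization constant gam, and the toy data are assumptions for illustration, not values from the paper.

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    """Gram matrix of the Gaussian RBF kernel."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

def lssvm_train(X, y, gam=10.0, kernel_gamma=1.0):
    """Train an LS-SVM classifier by solving one linear system (no QP).

    System (standard LS-SVM classifier formulation):
        [ 0      y^T        ] [ b     ]   [ 0 ]
        [ y   Omega + I/gam ] [ alpha ] = [ 1 ]
    with Omega_kl = y_k * y_l * K(x_k, x_l).
    """
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel_matrix(X, kernel_gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gam
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # b, alpha

def lssvm_predict(x, X, y, alpha, b, kernel_gamma=1.0):
    k = np.exp(-kernel_gamma * np.sum((X - x) ** 2, axis=1))
    return np.sign(alpha @ (y * k) + b)

# Made-up two-class toy data, purely for illustration.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
y = np.array([1.0, 1.0, -1.0, -1.0])
b, alpha = lssvm_train(X, y)
print(lssvm_predict(np.array([0.1, 0.0]), X, y, alpha, b))   # expect +1
```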

8,811 citations


Cites background from "Practical Methods of Optimization"

  • ...Because the matrix associated with this quadratic programming problem is not indefinite, the solution to (11) will be global (Fletcher, 1987)....

  • ...…(16) One defines the Lagrangian $\mathcal{L}_3(w, b, e; \alpha) = J_3(w, b, e) - \sum_{k=1}^{N} \alpha_k \{ y_k [w^T \varphi(x_k) + b] - 1 + e_k \}$ (17), where $\alpha_k$ are Lagrange multipliers (which can be either positive or negative now due to the equality constraints as follows from the Kuhn-Tucker conditions (Fletcher, 1987))....

  • ...which is equivalent to $y_k [w^T \varphi(x_k) + b] \ge 1$, $k = 1, \ldots, N$, (3)...

  • ...Furthermore, one can show that hyperplanes (3) satisfying the constraint $\|w\|_2 \le a$ have a VC-dimension $h$ which is bounded by...

  • ...In order to have the possibility to violate (3), in case a separating hyperplane in this higher dimensional space does not exist, variables $\xi_k$ are introduced such that $y_k [w^T \varphi(x_k) + b] \ge 1 - \xi_k$, $k = 1, \ldots, N$,...

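For completeness, here is the standard derivation step linking the Lagrangian quoted above to the linear system solved in the sketch further up. This is a reconstruction, not text from the paper, with $J_3(w,b,e) = \tfrac{1}{2} w^T w + \gamma\,\tfrac{1}{2}\sum_k e_k^2$:

```latex
\begin{aligned}
\frac{\partial \mathcal{L}_3}{\partial w}        = 0 &\;\Rightarrow\; w = \sum_{k=1}^{N} \alpha_k y_k \,\varphi(x_k), \\
\frac{\partial \mathcal{L}_3}{\partial b}        = 0 &\;\Rightarrow\; \sum_{k=1}^{N} \alpha_k y_k = 0, \\
\frac{\partial \mathcal{L}_3}{\partial e_k}      = 0 &\;\Rightarrow\; \alpha_k = \gamma\, e_k, \\
\frac{\partial \mathcal{L}_3}{\partial \alpha_k} = 0 &\;\Rightarrow\; y_k\!\left[w^{T}\varphi(x_k)+b\right] - 1 + e_k = 0 .
\end{aligned}
```

Eliminating $w$ and $e$ from these conditions leaves the $(N{+}1)\times(N{+}1)$ linear system in $(b, \alpha)$ with $\Omega_{kl} = y_k y_l\, K(x_k, x_l)$.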

Journal ArticleDOI
TL;DR: A comprehensive description of the primal-dual interior-point algorithm with a filter line-search method for nonlinear programming is provided, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix.
Abstract: We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
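
As a pocket illustration of the interior-point idea only (this is not the primal-dual filter line-search algorithm described in the abstract, and every constant in it is made up), the snippet below minimizes a two-variable quadratic subject to nonnegativity bounds by damped Newton steps on a log-barrier function while driving the barrier parameter toward zero.

```python
import numpy as np

# Toy barrier-method sketch: minimize f(x) = (x1 + 1)^2 + (x2 - 2)^2 s.t. x >= 0.
# The constrained minimizer sits on the boundary at (0, 2).

def grad_barrier(x, mu):
    # gradient of f(x) - mu * sum(log(x_i))
    return np.array([2.0 * (x[0] + 1.0), 2.0 * (x[1] - 2.0)]) - mu / x

def hess_barrier(x, mu):
    # Hessian of the barrier function (diagonal for this separable problem)
    return np.diag(np.array([2.0, 2.0]) + mu / x**2)

x = np.array([1.0, 1.0])        # strictly feasible starting point
mu = 1.0
for _ in range(8):              # outer loop: drive the barrier parameter to 0
    for _ in range(20):         # inner loop: damped Newton on the barrier problem
        step = np.linalg.solve(hess_barrier(x, mu), -grad_barrier(x, mu))
        t = 1.0
        while np.any(x + t * step <= 0):   # stay strictly inside the feasible region
            t *= 0.5
        x = x + t * step
    mu *= 0.2

print(x)   # approaches the constrained minimizer (0, 2)
```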

7,966 citations


Cites background from "Practical Methods of Optimization"

  • ..., [7, 12]) to improve the proposed step if a trial point has been rejected....

  • ...…Note that if the regularization parameter ζ > 0 is chosen sufficiently small, the optimization problem (30) is the exact penalty formulation [12] of the problem "find the feasible point that is closest (in a weighted norm) to the reference point x̄R,"...
