Topic

Nonlinear programming

About: Nonlinear programming is a research topic. Over the lifetime of the topic, 19,486 publications have been published, receiving 656,602 citations. The topic is also known as: non-linear programming & NLP.


Papers
Journal Article (DOI)
TL;DR: This work gives a pattern search method for nonlinearly constrained optimization that is an adaptation of a bound constrained augmented Lagrangian method first proposed by Conn, Gould, and Toint, and is the first provably convergent direct search method for general nonlinear programming.
Abstract: We give a pattern search method for nonlinearly constrained optimization that is an adaptation of a bound constrained augmented Lagrangian method first proposed by Conn, Gould, and Toint [SIAM J. Numer. Anal., 28 (1991), pp. 545--572]. In the pattern search adaptation, we solve the bound constrained subproblem approximately using a pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of the subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. As far as we know, this is the first provably convergent direct search method for general nonlinear programming.
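To illustrate the key ingredient, here is a minimal, hypothetical sketch (not the authors' algorithm) of a bound constrained compass search in which the pattern size, rather than any derivative information, serves as the stopping criterion; all names and tolerances are illustrative:

```python
import numpy as np

def pattern_search(f, x0, lower, upper, delta=1.0, delta_tol=1e-6, max_iter=10000):
    """Compass (pattern) search over a box, stopping on pattern size.

    The pattern size `delta` replaces the derivative-based stopping
    test: the search terminates once `delta < delta_tol`.
    """
    x = np.clip(np.asarray(x0, dtype=float), lower, upper)
    fx = f(x)
    for _ in range(max_iter):
        if delta < delta_tol:              # pattern-size stopping criterion
            return x, fx, delta
        improved = False
        for i in range(x.size):            # poll the 2n coordinate directions
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(x[i] + sign * delta, lower[i], upper[i])
                ft = f(trial)
                if ft < fx:                # accept the first improving point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5                   # unsuccessful poll: contract pattern
    return x, fx, delta

# Illustrative use: minimize a smooth quadratic over a box.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2
x, fx, delta = pattern_search(f, [3.0, 3.0],
                              lower=np.array([-5.0, -5.0]),
                              upper=np.array([5.0, 5.0]))
print(x, fx, delta)
```

In the augmented Lagrangian adaptation described above, such an inner loop would minimize the augmented Lagrangian over the bounds, with the final pattern size implicitly certifying how inexact that minimization was.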

332 citations

Book Chapter (DOI)
01 Jan 1982
TL;DR: An algorithm is described for solving large-scale nonlinear programs whose objective and constraint functions are smooth and continuously differentiable.
Abstract: An algorithm is described for solving large-scale nonlinear programs whose objective and constraint functions are smooth and continuously differentiable. The algorithm is of the projected Lagrangian type, involving a sequence of sparse, linearly constrained subproblems whose objective functions include a modified Lagrangian term and a modified quadratic penalty function.
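To make the subproblem structure concrete, one standard form such a projected Lagrangian iteration can take is the following (a sketch in our own notation, not necessarily the authors' exact formulation). At iterate $x_k$ with multiplier estimate $\lambda_k$, let $\tilde{c}_k(x) = c(x_k) + J(x_k)(x - x_k)$ denote the linearization of the constraints $c(x) = 0$; the subproblem is then

```latex
\min_{x}\; f(x) \;-\; \lambda_k^{\top}\bigl(c(x) - \tilde{c}_k(x)\bigr)
\;+\; \frac{\rho}{2}\,\bigl\lVert c(x) - \tilde{c}_k(x) \bigr\rVert^{2}
\qquad \text{subject to} \qquad \tilde{c}_k(x) = 0,\;\; l \le x \le u,
```

so the objective combines a modified Lagrangian term with a modified quadratic penalty on the deviation of $c$ from its linearization, while the subproblem's constraints stay linear (and sparse when $J$ is sparse).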

331 citations

Journal Article (DOI)
05 Apr 2018
TL;DR: The aim of the Optim package is to enable researchers, users, and other Julia packages to solve optimization problems without writing such algorithms themselves.
Abstract: Optim provides a range of optimization capabilities written in the Julia programming language (Bezanson et al. 2017). Our aim is to enable researchers, users, and other Julia packages to solve optimization problems without writing such algorithms themselves. The package supports optimization on manifolds, functions of complex numbers, and input types such as arbitrary precision vectors and matrices. We have implemented routines for derivative free, first-order, and second-order optimization methods. The user can provide derivatives themselves, or request that they be calculated using automatic differentiation or finite difference methods. The main focus of the package has so far been on unconstrained optimization; however, box-constrained optimization is supported, and more comprehensive support for constraints is underway. Similar to Optim, the C library NLopt (Johnson 2008) contains a collection of nonlinear optimization routines. In Python, scipy.optimize supports many of the same algorithms as Optim does, and Pymanopt (Townsend, Koep, and Weichwald 2016) is a toolbox for manifold optimization. Within the Julia community, the packages BlackBoxOptim.jl and Optimize.jl provide optimization capabilities focusing on derivative-free and large-scale smooth problems, respectively. The packages Convex.jl and JuMP.jl (Dunning, Huchette, and Lubin 2017) define modelling languages in which users can formulate optimization problems. In contrast to the previously mentioned optimization codes, Convex and JuMP work as abstraction layers between the user and solvers from other packages.
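As a point of comparison with the interfaces mentioned above, a typical scipy.optimize call with a user-supplied gradient and box constraints looks like the following; the test problem is illustrative, and only standard SciPy names are used:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function with an analytic gradient (user-supplied first
# derivatives, as the Optim abstract describes for Julia).
def rosen(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def rosen_grad(x):
    return np.array([
        -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
        200.0 * (x[1] - x[0] ** 2),
    ])

# First-order method with box constraints, analogous to the
# box-constrained support described for Optim.
res = minimize(rosen, x0=np.array([-1.2, 1.0]), jac=rosen_grad,
               method="L-BFGS-B", bounds=[(-2.0, 2.0), (-2.0, 2.0)])
print(res.x, res.fun)
```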

329 citations

Journal Article (DOI)
TL;DR: In this paper, the authors derive linear matrix inequality characterizations and dual decomposition algorithms for certain matrix cones generated by a given set via generalized co-positivity; these cones are in fact cones of nonconvex quadratic functions that are nonnegative on a certain domain.
Abstract: We derive linear matrix inequality (LMI) characterizations and dual decomposition algorithms for certain matrix cones which are generated by a given set using generalized co-positivity. These matrix cones are in fact cones of nonconvex quadratic functions that are nonnegative on a certain domain. As a domain, we consider for instance the intersection of an (upper) level set of a quadratic function and a half-plane. Consequently, we arrive at a generalization of Yakubovich's S-procedure result. Although the primary concern of this paper is to characterize the matrix cones by LMIs, we show, as an application of our results, that optimizing a general quadratic function over the intersection of an ellipsoid and a half-plane can be formulated as semidefinite programming (SDP), thus proving the polynomiality of this class of optimization problems, which arise, e.g., from the application of the trust region method for nonlinear programming. Other applications are in control theory and robust optimization.
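For context, the classical S-procedure result of Yakubovich that the paper generalizes can be stated, in its standard homogeneous form (our statement, not the paper's), as follows: for symmetric matrices $A$ and $B$ such that $\bar{x}^{\top} B \bar{x} > 0$ for some $\bar{x}$,

```latex
x^{\top} A x \ge 0 \;\;\text{for all } x \text{ with } x^{\top} B x \ge 0
\quad\Longleftrightarrow\quad
\exists\, \tau \ge 0 \;:\; A - \tau B \succeq 0.
```

The right-hand side is a linear matrix inequality, which is what makes semidefinite reformulations of such nonconvex quadratic problems tractable.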

328 citations

Journal Article (DOI)
TL;DR: This work studies in detail the formulation of the primal-dual interior-point method for linear programming, extends the formulation to general nonlinear programming, and proves that the algorithm can be implemented so that it is locally and Q-quadratically convergent under only the standard Newton method assumptions.
Abstract: In this work, we first study in detail the formulation of the primal-dual interior-point method for linear programming. We show that, contrary to popular belief, it cannot be viewed as a damped Newton method applied to the Karush-Kuhn-Tucker conditions for the logarithmic barrier function problem. Next, we extend the formulation to general nonlinear programming, and then validate this extension by demonstrating that this algorithm can be implemented so that it is locally and Q-quadratically convergent under only the standard Newton method assumptions. We also establish a global convergence theory for this algorithm and include promising numerical experimentation.
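For reference, the linear programming setting the paper starts from is the standard one (stated here in the usual notation, which may differ from the paper's): for the primal problem $\min\, c^{\top} x$ subject to $Ax = b$, $x \ge 0$, a primal-dual interior-point method applies Newton steps to the perturbed KKT system

```latex
Ax = b, \qquad A^{\top} y + z = c, \qquad XZe = \mu e, \qquad x, z > 0,
```

where $X = \operatorname{diag}(x)$, $Z = \operatorname{diag}(z)$, $e$ is the vector of ones, and the barrier parameter $\mu$ is driven to zero. The paper's observation is that this iteration is not simply a damped Newton method on the log-barrier KKT conditions, and that the same primal-dual structure extends to nonlinear programming.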

328 citations


Network Information
Related Topics (5)
- Optimization problem: 96.4K papers, 2.1M citations (93% related)
- Scheduling (computing): 78.6K papers, 1.3M citations (86% related)
- Robustness (computer science): 94.7K papers, 1.6M citations (86% related)
- Linear system: 59.5K papers, 1.4M citations (85% related)
- Control theory: 299.6K papers, 3.1M citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2023  113
2022  259
2021  615
2020  650
2019  640
2018  630