Topic

Nonlinear programming

About: Nonlinear programming is a research topic. Over its lifetime, 19,486 publications have been published within this topic, receiving 656,602 citations. The topic is also known as: non-linear programming & NLP.


Papers
Book
31 Mar 1998
TL;DR: This book discusses the role of the ellipsoid method in the complexity analysis of combinatorial problems and the use of semidefinite programming bounds for extremal graph problems.
Abstract: Preface. 1. Elements of Convex Analysis, Linear Algebra, and Graph Theory. 2. Subgradient and epsilon-Subgradient Methods. 3. Subgradient-Type Methods with Space Dilation. 4. Elements of Information and Numerical Complexity of Polynomial Extremal Problems. 5. Decomposition Methods Based on Nonsmooth Optimization. 6. Algorithms for Constructing Optimal-Volume Ellipsoids and Semidefinite Programming. 7. The Role of the Ellipsoid Method for Complexity Analysis of Combinatorial Problems. 8. Semidefinite Programming Bounds for Extremal Graph Problems. 9. Global Minimization of Polynomial Functions and the 17th Hilbert Problem. References.

291 citations
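Since the ellipsoid method is central to several chapters above, a minimal central-cut ellipsoid step may help orient readers. This is the standard textbook update rather than code from the book; the quadratic test function, radius, and iteration count are illustrative assumptions.

```python
# Minimal central-cut ellipsoid method for unconstrained convex minimization.
# Standard textbook update, not taken from the book; test problem is a toy.
import numpy as np

def ellipsoid_minimize(f, grad, c, radius=10.0, iters=200):
    n = len(c)
    P = radius ** 2 * np.eye(n)          # E = {x : (x-c)^T P^{-1} (x-c) <= 1}
    best_x, best_f = c.copy(), f(c)
    for _ in range(iters):
        g = grad(c)
        if np.allclose(g, 0):            # center is already optimal
            return c, f(c)
        gt = g / np.sqrt(g @ P @ g)      # normalize the cut g^T (x - c) <= 0
        c = c - (P @ gt) / (n + 1)       # move center into the kept half
        P = n**2 / (n**2 - 1) * (P - 2 / (n + 1) * np.outer(P @ gt, P @ gt))
        if f(c) < best_f:
            best_x, best_f = c.copy(), f(c)
    return best_x, best_f

f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 0.5)])
print(ellipsoid_minimize(f, grad, np.zeros(2)))   # converges near (1, -0.5)
```

Each update shrinks the ellipsoid volume by a fixed dimension-dependent factor, which is what drives the polynomial-time complexity arguments the book builds on.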

Journal ArticleDOI
TL;DR: A new algorithm for MINLP problems is presented that is based on branch-and-bound but does not require the NLP problem at each node to be solved to optimality; instead, branching is allowed after each iteration of a Sequential Quadratic Programming solver.
Abstract: This paper considers the solution of Mixed Integer Nonlinear Programming (MINLP) problems. Classical methods for the solution of MINLP problems decompose the problem by separating the nonlinear part from the integer part. This approach is largely due to the existence of packaged software for solving Nonlinear Programming (NLP) and Mixed Integer Linear Programming problems. In contrast, an integrated approach to solving MINLP problems is considered here. This new algorithm is based on branch-and-bound, but does not require the NLP problem at each node to be solved to optimality. Instead, branching is allowed after each iteration of the NLP solver. In this way, the nonlinear part of the MINLP problem is solved whilst searching the tree. The nonlinear solver that is considered in this paper is a Sequential Quadratic Programming solver. A numerical comparison of the new method with nonlinear branch-and-bound is presented and a factor of up to 3 improvement over branch-and-bound is observed.

291 citations
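For orientation, here is a toy nonlinear branch-and-bound in the classical style the paper compares against, with each node's NLP relaxation solved to optimality by SciPy's SLSQP (an SQP method); the paper's algorithm would instead branch after individual SQP iterations rather than waiting for convergence. The objective and bounds are illustrative assumptions, not the paper's test problems.

```python
# Classical nonlinear branch-and-bound sketch: solve each node's continuous
# relaxation to optimality, then branch on a fractional variable.
import math
from scipy.optimize import minimize

def objective(v):
    x, y = v
    return (x - 1.2) ** 2 + (y - 2.7) ** 2      # toy convex objective

def branch_and_bound(bounds, tol=1e-6):
    best_val, best_sol = math.inf, None
    stack = [bounds]                            # each node = variable bounds
    while stack:
        node = stack.pop()
        x0 = [(lo + hi) / 2 for lo, hi in node]
        res = minimize(objective, x0, bounds=node, method="SLSQP")
        if not res.success or res.fun >= best_val:
            continue                            # prune: infeasible or dominated
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > tol]
        if not frac:                            # integral: update incumbent
            best_val, best_sol = res.fun, [round(v) for v in res.x]
            continue
        i = frac[0]                             # branch on a fractional variable
        lo, hi = node[i]
        down, up = list(node), list(node)
        down[i] = (lo, math.floor(res.x[i]))
        up[i] = (math.ceil(res.x[i]), hi)
        stack += [down, up]
    return best_val, best_sol

print(branch_and_bound([(0, 5), (0, 5)]))       # -> (0.13, [1, 3])
```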

Journal ArticleDOI
TL;DR: A statistical inference is developed and applied to estimation of the error, validation of optimality of a calculated solution, and statistically based stopping criteria for an iterative algorithm, in the setting of two-stage stochastic programming with recourse where the random data have a continuous distribution.
Abstract: In this paper we consider stochastic programming problems where the objective function is given as an expected value function. We discuss Monte Carlo simulation based approaches to a numerical solution of such problems. In particular, we discuss in detail and present numerical results for two-stage stochastic programming with recourse where the random data have a continuous (multivariate normal) distribution. We think that the novelty of the numerical approach developed in this paper is twofold. First, various variance reduction techniques are applied in order to enhance the rate of convergence. Successful application of those techniques is what makes the whole approach numerically feasible. Second, a statistical inference is developed and applied to estimation of the error, validation of optimality of a calculated solution and statistically based stopping criteria for an iterative algorithm. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.

287 citations
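A minimal sample-average-approximation sketch showing two standard variance-reduction devices of the kind the abstract mentions, antithetic variates and a fixed sample shared across candidate decisions (common random numbers); the newsvendor-style recourse cost and all parameters are illustrative assumptions, not the paper's two-stage model.

```python
# Sample-average approximation (SAA) of an expected-value objective, with
# antithetic variates and a fixed shared sample as variance-reduction devices.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(500)
demand = np.exp(1.0 + 0.5 * np.concatenate([z, -z]))   # antithetic pairs

def saa_objective(x, price=5.0, cost=3.0):
    # first stage: order x units at `cost`; second stage: sell min(x, demand)
    return (cost * x - price * np.minimum(x, demand)).mean()

xs = np.linspace(0.0, 10.0, 201)                       # crude first-stage search
best = min(xs, key=saa_objective)
print("approx. optimal order quantity:", round(best, 2))
```

Because every candidate x is evaluated on the same fixed sample, differences between objective estimates are far less noisy than with independent samples, which is what makes the crude search above behave stably.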

Book ChapterDOI
13 Jun 2001
TL;DR: This work considers the general nonlinear optimization problem in 0-1 variables and provides an explicit equivalent convex positive semidefinite program in 2^n - 1 variables; the optimal values of the two problems are identical.
Abstract: We consider the general nonlinear optimization problem in 0-1 variables and provide an explicit equivalent convex positive semidefinite program in 2^n - 1 variables. The optimal values of both problems are identical. From every optimal solution of the former one may easily find an optimal solution of the latter, and conversely, from every solution of the latter one may construct an optimal solution of the former.

287 citations
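As a hedged sketch of where a 2^n - 1 variable count can arise, consider the standard zero-one moment formulation (my illustration of the idea, not necessarily the paper's exact construction): on {0,1}^n every monomial satisfies x_i^2 = x_i, so each reduces to a square-free product over some subset S, and one moment variable y_S per nonempty subset suffices.

```latex
% Sketch (standard moment formulation, not quoted from the paper): minimize a
% polynomial f(x) = \sum_S f_S \prod_{i \in S} x_i over x in {0,1}^n via one
% moment variable y_S per nonempty subset S, with y_S standing in for
% E[ \prod_{i \in S} x_i ].
\[
  \min_{y}\ \sum_{\emptyset \neq S \subseteq \{1,\dots,n\}} f_S\, y_S
  \quad \text{s.t.} \quad M(y) \succeq 0,\ \ y_\emptyset = 1,
  \qquad M(y)_{S,T} = y_{S \cup T},
\]
```

One variable y_S per nonempty subset S gives exactly 2^n - 1 variables, matching the count in the abstract.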

01 Jan 1988
TL;DR: It is shown that in plateau regions of relatively constant gradient, the momentum term acts to increase the step size by a factor of 1/(1-μ), where μ is the momentum constant, and that in valley regions with steep sides, the momentum term acts to focus the search direction toward the local minimum by averaging oscillations in the gradient.
Abstract: The problem of learning using connectionist networks, in which network connection strengths are modified systematically so that the response of the network increasingly approximates the desired response, can be structured as an optimization problem. The widely used back propagation method of connectionist learning [19, 21, 18] is set in the context of nonlinear optimization. In this framework, the issues of stability, convergence and parallelism are considered. As a form of gradient descent with fixed step size, back propagation is known to be unstable, which is illustrated using Rosenbrock's function. This is contrasted with stable methods which involve a line search in the gradient direction. The convergence criterion for connectionist problems involving binary functions is discussed relative to the behavior of gradient descent in the vicinity of local minima. A minimax criterion is compared with the least squares criterion. The contribution of the momentum term [19, 18] to more rapid convergence is interpreted relative to the geometry of the weight space. It is shown that in plateau regions of relatively constant gradient, the momentum term acts to increase the step size by a factor of 1/(1-μ), where μ is the momentum constant. In valley regions with steep sides, the momentum term acts to focus the search direction toward the local minimum by averaging oscillations in the gradient. (University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-88-62, available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/597.)

286 citations
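The plateau claim is easy to check numerically: on a region of constant gradient g, the momentum velocity v <- μv + ηg converges to ηg/(1-μ), so the effective step is 1/(1-μ) times the plain gradient step. A minimal sketch with illustrative parameter values of my own choosing:

```python
# Numerical check of the plateau claim: with constant gradient g, the momentum
# velocity converges to eta*g/(1 - mu), amplifying the step by 1/(1 - mu).
g, eta, mu = 1.0, 0.1, 0.9
v = 0.0
for _ in range(200):
    v = mu * v + eta * g                 # momentum update on a constant slope
print(v, eta * g / (1 - mu))             # both ~= 1.0, a 10x amplified step
```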


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 93% related
Scheduling (computing): 78.6K papers, 1.3M citations, 86% related
Robustness (computer science): 94.7K papers, 1.6M citations, 86% related
Linear system: 59.5K papers, 1.4M citations, 85% related
Control theory: 299.6K papers, 3.1M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 113
2022: 259
2021: 615
2020: 650
2019: 640
2018: 630