Topic

Nonlinear programming

About: Nonlinear programming is a research topic. Over its lifetime, 19,486 publications have been published within this topic, receiving 656,602 citations. The topic is also known as: non-linear programming & NLP.


Papers
Posted Content
TL;DR: A major theme of this study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter, leading to a discussion about the next generation of optimization methods for large-scale machine learning.
Abstract: This paper provides a review and commentary on the past, present, and future of numerical optimization algorithms in the context of machine learning applications. Through case studies on text classification and the training of deep neural networks, we discuss how optimization problems arise in machine learning and what makes them challenging. A major theme of our study is that large-scale machine learning represents a distinctive setting in which the stochastic gradient (SG) method has traditionally played a central role while conventional gradient-based nonlinear optimization techniques typically falter. Based on this viewpoint, we present a comprehensive theory of a straightforward, yet versatile SG algorithm, discuss its practical behavior, and highlight opportunities for designing algorithms with improved performance. This leads to a discussion about the next generation of optimization methods for large-scale machine learning, including an investigation of two main streams of research on techniques that diminish noise in the stochastic directions and methods that make use of second-order derivative approximations.
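
As a concrete illustration of the basic SG method this abstract refers to, the sketch below runs minibatch stochastic gradient descent on L2-regularized logistic regression with synthetic data. The data, step size, batch size, and regularization weight are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np
from scipy.special import expit

# Minibatch SGD for L2-regularized logistic regression on synthetic data.
# The data, step size, batch size, and regularization are illustrative assumptions.
rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)   # labels in {0, 1}

def minibatch_grad(w, Xb, yb, lam=1e-3):
    """Gradient of the average logistic loss on a minibatch, plus the L2 term."""
    p = expit(Xb @ w)                        # predicted probabilities
    return Xb.T @ (p - yb) / len(yb) + lam * w

w, step, batch = np.zeros(d), 0.5, 32
for _ in range(2000):
    idx = rng.integers(0, n, size=batch)     # sample a random minibatch
    w -= step * minibatch_grad(w, X[idx], y[idx])

print("training accuracy:", np.mean((X @ w > 0) == (y == 1)))
```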

178 citations

Journal ArticleDOI
TL;DR: A new learning procedure is presented, based on a linearization of the nonlinear processing elements and layer-by-layer optimization of the multilayer perceptron; it yields accuracy and convergence rates that are orders of magnitude better than those of conventional backpropagation learning.
Abstract: Multilayer perceptrons are successfully used in an increasing number of nonlinear signal processing applications. The backpropagation learning algorithm, or variations thereof, is the standard method applied to the nonlinear optimization problem of adjusting the weights in the network in order to minimize a given cost function. However, backpropagation as a steepest descent approach is too slow for many applications. In this paper a new learning procedure is presented which is based on a linearization of the nonlinear processing elements and the optimization of the multilayer perceptron layer by layer. In order to limit the introduced linearization error, a penalty term is added to the cost function. The new learning algorithm is applied to the problem of nonlinear prediction of chaotic time series. The proposed algorithm yields accuracy and convergence rates that are orders of magnitude better than those of conventional backpropagation learning.
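
The following is a minimal sketch of the underlying idea for a single tanh unit rather than a full multilayer perceptron: the nonlinearity is linearized around the current weights, and the new weights are obtained from a linear least-squares problem whose penalty term limits the linearization error. The synthetic data and the penalty weight lam are assumptions for illustration; this is not the paper's exact layer-by-layer procedure.

```python
import numpy as np

# Single tanh unit fitted by repeated linearization: tanh is linearized around
# the current pre-activations, and the new weights solve a linear least-squares
# problem with a penalty (lam) that limits the linearization error.
# The synthetic data and the penalty weight are illustrative assumptions.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([0.8, -0.5, 0.3])
y = np.tanh(X @ w_true) + 0.05 * rng.normal(size=200)

w, lam = np.zeros(3), 0.1
for _ in range(30):
    z0 = X @ w                                # current pre-activations
    d = 1.0 - np.tanh(z0) ** 2                # tanh'(z0)
    a = np.tanh(z0) - d * z0                  # intercepts of the linearization
    # Linearized model: yhat ~ a + d * (X @ w_new); penalty keeps X @ w_new near z0.
    A = np.vstack([d[:, None] * X, np.sqrt(lam) * X])
    b = np.concatenate([y - a, np.sqrt(lam) * z0])
    w = np.linalg.lstsq(A, b, rcond=None)[0]

print("recovered weights:", np.round(w, 2))   # expected to be close to w_true
```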

178 citations

Journal ArticleDOI
TL;DR: A detailed description of an efficient, reliable SLP algorithm is given, along with a convergence theorem for linearly constrained problems; extensive computational results show that SLP compares favorably with the Generalized Reduced Gradient code GRG2 and with MINOS/GRG.
Abstract: Successive Linear Programming (SLP), which is also known as the Method of Approximation Programming, solves nonlinear optimization problems via a sequence of linear programs. This paper reports on promising computational results with SLP that contrast with the poor performance indicated by previously published comparative tests. The paper provides a detailed description of an efficient, reliable SLP algorithm along with a convergence theorem for linearly constrained problems and extensive computational results. It also discusses several alternative strategies for implementing SLP. The computational results show that SLP compares favorably with the Generalized Reduced Gradient code GRG2 and with MINOS/GRG. It appears that SLP will be most successful when applied to large problems with low degrees of freedom.
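
A minimal SLP-style sketch on a toy linearly constrained problem follows: at each iterate the objective is linearized, a linear program over a box trust region is solved with scipy.optimize.linprog, improving steps are accepted, and failed steps shrink the trust region. The test problem, trust-region update, and acceptance rule are illustrative assumptions, not the algorithm described in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linearly constrained problem used to illustrate an SLP iteration loop:
#   minimize (x0 - 2)^2 + (x1 - 2)^2   s.t.   x0 + x1 <= 2,  x >= 0.
def f(x):    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
def grad(x): return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0)])

A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x, delta = np.zeros(2), 1.0                  # starting point and trust-region radius
for _ in range(100):
    g = grad(x)
    # LP in the step d: minimize g.d  s.t.  A(x + d) <= b,  x + d >= 0,  |d_i| <= delta.
    step_bounds = [(max(-delta, -xi), delta) for xi in x]
    d = linprog(c=g, A_ub=A, b_ub=b - A @ x, bounds=step_bounds, method="highs").x
    if f(x + d) < f(x):                      # accept improving steps ...
        x = x + d
    else:                                    # ... otherwise shrink the trust region
        delta *= 0.5
    if delta < 1e-8:
        break

print("SLP solution:", x)                    # converges to the optimum (1, 1)
```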

177 citations

Journal ArticleDOI
TL;DR: This paper provides a recursive procedure for solving knapsack problems; the method differs from classical convex programming algorithms in that it determines the optimal value of at least one variable at each iteration.
Abstract: The allocation of a specific amount of a given resource among competitive alternatives can often be modelled as a knapsack problem. This model formulation is extremely efficient because it allows convex cost representations with bounded variables to be solved without great computational effort. Practical applications of this problem abound in the fields of operations management, finance, manpower planning, marketing, etc. In particular, knapsack problems emerge in hierarchical planning systems when a first level of decisions needs to be further allocated among specific activities which have previously been treated in an aggregate way. In this paper we provide a recursive procedure to solve such problems. The method differs from classical optimization algorithms of convex programming in that it determines at each iteration the optimal value of at least one variable. Applications and computational results are presented.
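
The sketch below illustrates a pegging-style recursion of this kind on a continuous quadratic knapsack: the equality-constrained relaxation over the free variables is solved in closed form, variables that violate their upper bounds are pegged there, and the procedure recurses on the rest, fixing at least one variable per pass. The quadratic cost and the data are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def allocate(c, u, R):
    """Pegging-style recursion (illustrative) for the continuous problem
         minimize   sum_i c_i * x_i**2
         subject to sum_i x_i = R,   0 <= x_i <= u_i,   c_i > 0.
       Each pass solves the equality-constrained relaxation on the free
       variables in closed form and pegs every variable that violates its
       upper bound, so at least one variable is fixed per iteration."""
    c, u = np.asarray(c, dtype=float), np.asarray(u, dtype=float)
    x = np.zeros_like(c)
    free = np.ones(len(c), dtype=bool)
    remaining = float(R)
    while free.any():
        # Relaxation on the free variables: 2*c_i*x_i = lam, sum_i x_i = remaining.
        lam = 2.0 * remaining / np.sum(1.0 / c[free])
        trial = lam / (2.0 * c)
        over = free & (trial >= u)
        if not over.any():                   # all free values are within bounds: done
            x[free] = trial[free]
            break
        x[over] = u[over]                    # peg violating variables at their bounds
        remaining -= u[over].sum()
        free &= ~over
    return x

print(allocate(c=[1.0, 2.0, 4.0], u=[1.0, 1.0, 1.0], R=2.5))   # -> [1.0, 1.0, 0.5]
```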

177 citations

Proceedings ArticleDOI
01 May 2007
TL;DR: Simulation results show that the solutions obtained by this algorithm are very close to the lower bounds obtained via relaxation, suggesting that they are near-optimal.
Abstract: Software defined radio (SDR) capitalizes on advances in signal processing and radio technology and is capable of reconfiguring RF and switching to desired frequency bands. It is a frequency-agile data communication device that is vastly more powerful than recently proposed multi-channel multi-radio (MC-MR) technology. In this paper, we investigate the important problem of multi-hop networking with SDR nodes. In such a network, each node has a pool of frequency bands (not necessarily of equal size) that can be used for communication. The uneven size of bands in the radio spectrum prompts the need for further division into sub-bands for optimal spectrum sharing. We characterize behaviors and constraints for such a multi-hop SDR network across multiple layers, including modeling of spectrum sharing and sub-band division, scheduling and interference constraints, and flow routing. We give a formal mathematical formulation with the objective of minimizing the required network-wide radio spectrum resource for a set of user sessions. Since this formulation is a mixed integer non-linear programming (MINLP) problem, which is NP-hard in general, we develop a lower bound on the objective by relaxing the integer variables and applying linearization. Subsequently, we develop a near-optimal algorithm for this MINLP problem. This algorithm is based on a novel sequential fixing procedure, in which the integer variables are determined iteratively via a sequence of linear programs. Simulation results show that solutions obtained by this algorithm are very close to the lower bounds obtained via relaxation, suggesting that the solutions produced by the algorithm are near-optimal.
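
To convey the flavor of a sequential fixing procedure, the sketch below applies it to a toy 0-1 knapsack: the LP relaxation is solved with scipy.optimize.linprog, the free variable whose relaxed value is closest to an integer is fixed at that integer, and the LP is re-solved until every variable is integral. The problem data and the details of the fixing rule are assumptions; the paper's actual formulation includes scheduling, interference, and routing constraints and is far richer.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 0-1 knapsack used only to illustrate sequential fixing:
#   maximize v.x  subject to  w.x <= W,  x_i in {0, 1}.
v = np.array([12.0, 7.0, 6.0, 3.0])
w = np.array([5.0, 4.0, 3.0, 2.0])
W = 10.0

bounds = [(0.0, 1.0)] * len(v)               # start from the full LP relaxation
while True:
    res = linprog(c=-v, A_ub=w[None, :], b_ub=[W], bounds=bounds, method="highs")
    x = res.x
    # Distance of each still-free variable to the nearest integer (-1 if fixed).
    dist = np.array([min(xi, 1.0 - xi) if lo < hi else -1.0
                     for xi, (lo, hi) in zip(x, bounds)])
    if dist.max() <= 1e-6:                   # all free variables are integral: done
        break
    # Fix the free variable whose relaxed value is closest to 0 or 1.
    # (A production version would also guard against fixings that make the
    #  restricted LP infeasible; this toy instance never does.)
    i = min((j for j, (lo, hi) in enumerate(bounds) if lo < hi),
            key=lambda j: dist[j])
    val = float(round(x[i]))
    bounds[i] = (val, val)

x_int = np.round(x)
lp_bound = -linprog(c=-v, A_ub=w[None, :], b_ub=[W],
                    bounds=[(0.0, 1.0)] * len(v), method="highs").fun
print("fixed solution:", x_int, "value:", v @ x_int, "LP bound:", lp_bound)
```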

177 citations


Network Information
Related Topics (5)
Optimization problem: 96.4K papers, 2.1M citations, 93% related
Scheduling (computing): 78.6K papers, 1.3M citations, 86% related
Robustness (computer science): 94.7K papers, 1.6M citations, 86% related
Linear system: 59.5K papers, 1.4M citations, 85% related
Control theory: 299.6K papers, 3.1M citations, 84% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    113
2022    259
2021    615
2020    650
2019    640
2018    630