Author
David G. Luenberger
Bio: David G. Luenberger is an academic researcher from Stanford University. The author has contributed to research in topics including linear programming and penalty methods. The author has an h-index of 5 and has co-authored 15 publications receiving 5,315 citations.
Papers
•
01 Jan 1984
TL;DR: The fourth edition of this classic optimization textbook covers linear programming, unconstrained optimization, and constrained optimization, adds a chapter on conic linear programming (a powerful generalization of linear programming), and presents convergence proofs for both standard and accelerated steepest descent methods.
Abstract: This new edition covers the central concepts of practical optimization techniques, with an emphasis on methods that are both state-of-the-art and popular. One major insight is the connection between the purely analytical character of an optimization problem and the behavior of algorithms used to solve it. This was a major theme of the first edition of this book, and the fourth edition expands and further illustrates this relationship. As in the earlier editions, the material in this fourth edition is organized into three separate parts. Part I is a self-contained introduction to linear programming. The presentation in this part is fairly conventional, covering the main elements of the underlying theory of linear programming, many of the most effective numerical algorithms, and many of its important special applications. Part II, which is independent of Part I, covers the theory of unconstrained optimization, including both derivations of the appropriate optimality conditions and an introduction to basic algorithms. This part of the book explores the general properties of algorithms and defines various notions of convergence. Part III extends the concepts developed in the second part to constrained optimization problems. Except for a few isolated sections, this part is also independent of Part I. It is possible to go directly into Parts II and III, omitting Part I, and, in fact, the book has been used in this way in many universities. New to this edition is a chapter devoted to Conic Linear Programming, a powerful generalization of Linear Programming. Indeed, many conic structures are possible and useful in a variety of applications. It must be recognized, however, that conic linear programming is an advanced topic, requiring special study. Another important topic is an accelerated steepest descent method that exhibits superior convergence properties and, for this reason, has become quite popular. Proofs of the convergence properties of both the standard and accelerated steepest descent methods are presented in Chapter 8. As in previous editions, end-of-chapter exercises appear for all chapters. From the reviews of the Third Edition: "This very well-written book is a classic textbook in Optimization. It should be present in the bookcase of each student, researcher, and specialist from the host of disciplines from which practical optimization applications are drawn." (Jean-Jacques Strodiot, Zentralblatt MATH, Vol. 1207, 2011)
4,908 citations
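As a rough illustration of the standard and accelerated steepest descent methods mentioned in the abstract above (a minimal sketch, not the book's Chapter 8 presentation; the quadratic objective, fixed step size, and Nesterov-style momentum variant below are illustrative choices):

```python
import numpy as np

# Illustrative smooth convex objective: f(x) = 0.5 x'Qx - b'x.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])

def grad(x):
    return Q @ x - b

L = np.linalg.eigvalsh(Q).max()   # Lipschitz constant of the gradient
step = 1.0 / L

def steepest_descent(x0, iters=100):
    """Standard steepest (gradient) descent with a fixed step size."""
    x = x0.copy()
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def accelerated_descent(x0, iters=100):
    """Nesterov-style accelerated gradient descent (one common variant)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - step * grad(y)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

x_star = np.linalg.solve(Q, b)    # exact minimizer, for comparison
print(steepest_descent(np.zeros(2)), accelerated_descent(np.zeros(2)), x_star)
```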
•
TL;DR: This textbook covers linear programming, unconstrained optimization, and constrained optimization, including a chapter on conic linear programming and convergence results for both standard and accelerated steepest descent methods.
364 citations
••
01 Jan 2016
TL;DR: Although CLPs have long been known to be convex optimization problems, no efficient solution algorithm was known until about two decades ago, when it was discovered that interior-point algorithms for LP can be adapted to solve certain CLPs with both theoretical and practical efficiency.
Abstract: Conic Linear Programming, hereafter CLP, is a natural extension of Linear Programming (LP). In LP the variables form a vector that is required to be componentwise nonnegative, while in CLP they are points in a pointed convex cone (see Appendix B.1) of a Euclidean space, such as vectors or matrices of finite dimension. For example, Semidefinite Programming (SDP) is a kind of CLP in which the variable points are symmetric matrices constrained to be positive semidefinite. Both types of problems may have linear equality constraints as well. Although CLPs have long been known to be convex optimization problems, no efficient solution algorithm was known until about two decades ago, when it was discovered that the interior-point algorithms for LP discussed in Chap. 5 can be adapted to solve certain CLPs with both theoretical and practical efficiency. During the same period, it was discovered that CLP, especially SDP, is representative of a wide assortment of applications, including combinatorial optimization, statistical computation, robust optimization, Euclidean distance geometry, quantum computing, optimal control, etc. CLP is now widely recognized as a powerful mathematical computation model of general importance.
18 citations
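To make the SDP special case concrete, the following is a minimal sketch of a semidefinite program: a linear objective over a symmetric matrix variable constrained to the positive semidefinite cone, with one linear equality constraint. It uses the third-party CVXPY package and random data purely as an illustration; neither is part of the chapter.

```python
import numpy as np
import cvxpy as cp

n = 3
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2   # symmetric cost matrix
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric data matrix
b = 1.0

# Variable: a symmetric matrix constrained to the PSD cone (the conic constraint).
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,                  # X is positive semidefinite
               cp.trace(A @ X) == b]    # one linear equality constraint

# Minimize the linear objective <C, X> over the cone intersected with the affine set.
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print(prob.status, prob.value)
```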
••
01 Jan 2016
TL;DR: A linear program (LP) is an optimization problem in which the objective function is linear in the unknowns and the constraints consist of linear equalities and linear inequalities; the exact form of these constraints may differ from one problem to another, but any linear program can be transformed into a standard form.
Abstract: A linear program (LP) is an optimization problem in which the objective function is linear in the unknowns and the constraints consist of linear equalities and linear inequalities. The exact form of these constraints may differ from one problem to another, but as shown below, any linear program can be transformed into the following standard form:
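The standard form referred to above is the familiar one, stated here with the usual symbols (A an m × n matrix, b and c vectors):

\[
\begin{aligned}
\text{minimize}\quad & c^{\mathsf{T}} x \\
\text{subject to}\quad & A x = b, \quad x \ge 0 .
\end{aligned}
\]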
8 citations
••
01 Jan 2016
TL;DR: For a problem with n variables and m constraints, penalty and barrier methods work directly in the n-dimensional space of variables, as compared to primal methods that work in (n − m)-dimensional space.
Abstract: Penalty and barrier methods are procedures for approximating constrained optimization problems by unconstrained problems. The approximation is accomplished in the case of penalty methods by adding to the objective function a term that prescribes a high cost for violation of the constraints, and in the case of barrier methods by adding a term that favors points interior to the feasible region over those near the boundary. Associated with these methods is a parameter c or μ that determines the severity of the penalty or barrier and consequently the degree to which the unconstrained problem approximates the original constrained problem. For a problem with n variables and m constraints, penalty and barrier methods work directly in the n-dimensional space of variables, as compared to primal methods that work in (n − m)-dimensional space.
7 citations
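A minimal sketch of the quadratic penalty idea described above: the constrained problem is replaced by a sequence of unconstrained problems with an increasing penalty parameter c. The toy objective, constraints, and penalty schedule are made up for illustration, and SciPy is used only for the inner unconstrained solves; this is not the book's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained problem: minimize f(x) subject to h(x) = 0 and g(x) <= 0.
def f(x): return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
def h(x): return x[0] + x[1] - 2.0          # equality constraint h(x) = 0
def g(x): return x[0] ** 2 - x[1]           # inequality constraint g(x) <= 0

def penalized(x, c):
    # Quadratic penalty: a high cost for violating the constraints.
    return f(x) + c * (h(x) ** 2 + max(0.0, g(x)) ** 2)

x = np.zeros(2)
for c in [1.0, 10.0, 100.0, 1000.0]:        # increase the penalty severity
    res = minimize(lambda z: penalized(z, c), x, method="Nelder-Mead")
    x = res.x                               # warm-start the next subproblem
print(x)   # approaches a solution of the constrained problem as c grows
```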
Cited by
••
TL;DR: In this paper, the authors describe a general-purpose representation-independent method for the accurate and computationally efficient registration of 3D shapes including free-form curves and surfaces, based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point.
Abstract: The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.
17,598 citations
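A minimal reimplementation sketch of the ICP loop described above, for plain point sets: nearest-point correspondence followed by a least-squares rigid update. It uses NumPy and SciPy, is illustrative only, and is not the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t

def icp(data, model, iters=50):
    """Iterative closest point: register 'data' onto 'model' (both N x 3 arrays)."""
    tree = cKDTree(model)
    P = data.copy()
    for _ in range(iters):
        _, idx = tree.query(P)                    # closest model point to each data point
        R, t = best_rigid_transform(P, model[idx])
        P = P @ R.T + t                           # apply the rigid update
    return P
```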
••
01 Jul 1992
TL;DR: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented, applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions.
Abstract: A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
11,211 citations
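As an illustration of the margin-maximization idea, the sketch below uses scikit-learn's linear SVM with a large C (approximating a hard margin) on synthetic data; this is not the paper's original optimal-margin training algorithm, only a stand-in that exposes the supporting patterns (support vectors).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic, well-separated classes in the plane.
X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
               rng.normal(loc=+2.0, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)

# A large C approximates the hard-margin (maximal margin) classifier.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# The solution is determined by the supporting patterns (support vectors):
print(clf.support_vectors_.shape)   # training points closest to the boundary
print(clf.coef_, clf.intercept_)    # w and b of the separating hyperplane
```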
••
TL;DR: This work has developed a code able to pack millions of atoms, grouped in arbitrarily complex molecules, inside a variety of three‐dimensional regions, which can be intersections of spheres, ellipses, cylinders, planes, or boxes.
Abstract: Adequate initial configurations for molecular dynamics simulations consist of arrangements of molecules distributed in space in such a way as to approximately represent the system's overall structure. In order that the simulations are not disrupted by large van der Waals repulsive interactions, atoms from different molecules must keep safe pairwise distances. Obtaining such a molecular arrangement can be considered a packing problem: each type of molecule must satisfy spatial constraints related to the geometry of the system, and the distance between atoms of different molecules must be greater than some specified tolerance. We have developed a code able to pack millions of atoms, grouped in arbitrarily complex molecules, inside a variety of three-dimensional regions. The regions may be intersections of spheres, ellipses, cylinders, planes, or boxes. The user must provide only the structure of one molecule of each type and the geometrical constraints that each type of molecule must satisfy. Building complex mixtures, interfaces, or systems of biomolecules solvated in water, other solvents, or mixtures of solvents is straightforward. In addition, different atoms belonging to the same molecule may also be restricted to different spatial regions, so that more ordered molecular arrangements can be built, such as micelles, lipid double layers, etc. The packing time for state-of-the-art molecular dynamics systems varies from a few seconds to a few minutes on a personal computer. The input files are simple and currently compatible with PDB, Tinker, Molden, or Moldy coordinate files. The package is distributed as free software and can be downloaded from http://www.ime.unicamp.br/~martinez/packmol/.
5,322 citations
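The core feasibility requirement described above (every pair of atoms from different molecules at least a tolerance apart) can be checked cheaply with a spatial tree; the snippet below is only a sketch of that check with made-up array conventions and is not part of Packmol.

```python
import numpy as np
from scipy.spatial import cKDTree

def packing_ok(coords, mol_id, tol=2.0):
    """True if every pair of atoms from *different* molecules is at least `tol` apart.

    coords : (N, 3) array of atomic coordinates
    mol_id : (N,) array giving the molecule index of each atom
    """
    tree = cKDTree(coords)
    pairs = tree.query_pairs(r=tol)              # all atom pairs closer than tol
    return all(mol_id[i] == mol_id[j] for i, j in pairs)
```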
••
TL;DR: Experiments show that the scaled conjugate gradient algorithm (SCG) is considerably faster than standard backpropagation (BP), conjugate gradient with line search (CGL), and BFGS, and that it avoids a time-consuming line search.
3,882 citations
••
07 Aug 2002
TL;DR: In this paper, the authors describe decentralized control laws for the coordination of multiple vehicles performing spatially distributed tasks, which are based on a gradient descent scheme applied to a class of decentralized utility functions that encode optimal coverage and sensing policies.
Abstract: This paper describes decentralized control laws for the coordination of multiple vehicles performing spatially distributed tasks. The control laws are based on a gradient descent scheme applied to a class of decentralized utility functions that encode optimal coverage and sensing policies. These utility functions are studied in geographical optimization problems and they arise naturally in vector quantization and in sensor allocation tasks. The approach exploits the computational geometry of spatial structures such as Voronoi diagrams.
2,445 citations
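For the standard coverage cost, the gradient scheme described above amounts to moving each vehicle toward the centroid of its Voronoi cell. The following is a discretized Lloyd-type sketch of that idea with uniform density on the unit square; it is sample-based, synchronous, and centralized for simplicity, not the paper's decentralized continuous-time controller.

```python
import numpy as np

rng = np.random.default_rng(0)
agents = rng.uniform(0.0, 1.0, size=(5, 2))      # vehicle positions in the unit square
samples = rng.uniform(0.0, 1.0, size=(5000, 2))  # discretization of the region (uniform density)

for _ in range(30):
    # Voronoi assignment: each sample point goes to its nearest agent.
    d2 = ((samples[:, None, :] - agents[None, :, :]) ** 2).sum(axis=2)
    owner = d2.argmin(axis=1)
    # Gradient/Lloyd step: move each agent toward the centroid of its cell.
    for k in range(len(agents)):
        cell = samples[owner == k]
        if len(cell):
            agents[k] = cell.mean(axis=0)

print(agents)   # an (approximately) centroidal Voronoi configuration
```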