Journal ArticleDOI

Decomposition Principle for Linear Programs

01 Feb 1960 · Operations Research (INFORMS) · Vol. 8, Iss. 1, pp. 101–111
TL;DR: A technique is presented for the decomposition of a linear program that permits the problem to be solved by alternate solutions of linear sub-programs representing its several parts and a coordinating program that is obtained from the parts by linear transformations.
Abstract: A technique is presented for the decomposition of a linear program that permits the problem to be solved by alternate solutions of linear sub-programs representing its several parts and a coordinating program that is obtained from the parts by linear transformations. The coordinating program generates at each cycle new objective forms for each part, and each part generates in turn from its optimal basic feasible solutions new activity columns for the interconnecting program. Viewed as an instance of a “generalized programming problem” whose columns are drawn freely from given convex sets, such a problem can be studied by an appropriate generalization of the duality theorem for linear programming, which permits a sharp distinction to be made between those constraints that pertain only to a part of the problem and those that connect its parts. This leads to a generalization of the Simplex Algorithm, for which the decomposition procedure becomes a special case. Besides holding promise for the efficient computation of large-scale systems, the principle yields a certain rationale for the “decentralized decision process” in the theory of the firm. Formally, the prices generated by the coordinating program cause the manager of each part to look for a “pure” sub-program, the analogue of a pure strategy in game theory, which he proposes to the coordinator as the best he can do. The coordinator finds the optimum “mix” of pure sub-programs, using new proposals and earlier ones, consistent with over-all demands and supply, and thereby generates new prices that again generate new proposals by each of the parts, etc. The iterative process is finite.
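
To make the iterative process concrete, here is a minimal sketch of the decomposition on a tiny block problem. The block data (one subproblem polytope, one coupling constraint) are invented for illustration, and the LP solves use SciPy's HiGHS interface rather than the revised-simplex machinery of the original paper; the structure, however, is the one the abstract describes: a restricted master mixes previously proposed extreme points, its prices drive the sub-program, and the sub-program proposes a new “pure” solution until none prices out.

```python
# A minimal Dantzig-Wolfe sketch:
#   minimize c'x  subject to  A x <= b (coupling rows) and x in X,
# where X = {x : D x <= d, x >= 0} is a bounded "part" polytope, so any
# feasible x is a convex mix of X's extreme points. Data are illustrative.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -5.0])                          # overall objective
A = np.array([[3.0, 2.0]]); b = np.array([18.0])    # coupling constraint
D = np.array([[1.0, 0.0], [0.0, 2.0]]); d = np.array([4.0, 12.0])  # part X

points = [np.zeros(2)]                  # known extreme points of X (proposals)
for _ in range(20):
    P = np.column_stack(points)         # columns = proposals so far
    cost = np.array([c @ p for p in points])
    # Restricted master ("coordinating program"): best mix of proposals.
    res = linprog(cost, A_ub=A @ P, b_ub=b,
                  A_eq=np.ones((1, len(points))), b_eq=[1.0],
                  bounds=[(0, None)] * len(points), method="highs")
    pi = res.ineqlin.marginals          # prices on the coupling rows
    sigma = res.eqlin.marginals[0]      # dual of the convexity row
    # Sub-program: under prices pi, find the part's best "pure" proposal.
    sub = linprog(c - pi @ A, A_ub=D, b_ub=d,
                  bounds=[(0, None)] * 2, method="highs")
    if sub.fun - sigma >= -1e-9:        # no proposal prices out: optimal
        break
    points.append(sub.x)                # new activity column for the master

x_opt = P @ res.x                       # recover x from the optimal mix
print("x* =", x_opt, " objective =", c @ x_opt)   # expect x* = [2, 6]
```

The loop mirrors the abstract directly: the coordinator's prices elicit a pure proposal from the part, the coordinator re-optimizes the mix, and the exchange terminates finitely when no proposal has negative reduced cost.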
Citations
Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
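
As a concrete companion, the sketch below applies ADMM to one of the problems the review surveys, the lasso: minimize (1/2)‖Ax − b‖² + λ‖x‖₁. The random data, λ, and the penalty ρ = 1 are illustrative assumptions; the three-step iteration (a ridge-type x-update, a soft-thresholding z-update, and a running dual update) is the standard splitting for this objective.

```python
# ADMM for the lasso: minimize (1/2)||Ax - b||^2 + lam*||x||_1,
# split as f(x) + g(z) subject to x = z. Illustrative data only.
import numpy as np

def lasso_admm(A, b, lam, rho=1.0, iters=300):
    m, n = A.shape
    z, u = np.zeros(n), np.zeros(n)
    # Factor A'A + rho*I once; each x-update is then two triangular solves.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: minimize (1/2)||Ax-b||^2 + (rho/2)||x - z + u||^2
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: prox of (lam/rho)*||.||_1, i.e. soft thresholding
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + x - z                     # dual update (scaled form)
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print("recovered support:", np.flatnonzero(np.abs(lasso_admm(A, b, 1.0)) > 1e-6))
```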

17,433 citations


Additional excerpts

  • …6.3 General ℓ1 regularized loss minimization; 6.4 Lasso; 6.5 Sparse inverse covariance selection…

Book
27 Nov 2013
TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
Abstract: This monograph is about a class of optimization algorithms called proximal algorithms. Much like Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
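
To illustrate the “base operation” the monograph describes, here is a small sketch: a proximal operator with a closed form (soft thresholding, the prox of the ℓ1 norm; projection onto a convex set is the analogous prox of the set's indicator) plugged into the proximal gradient method. Data and parameters are invented for illustration.

```python
# The base operation: prox_{t g}(v) = argmin_x g(x) + (1/2t)||x - v||^2.
# For g = t*||.||_1 it has the closed form below (soft thresholding).
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t*||.||_1, evaluated elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(gradf, prox, x0, step, iters=500):
    """Minimize f(x) + g(x): gradient step on smooth f, prox step on g."""
    x = x0
    for _ in range(iters):
        x = prox(x - step * gradf(x), step)
    return x

# Example: the lasso, solved by proximal gradient (ISTA).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
b = A @ (np.arange(60) < 3).astype(float)
lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const of gradf
x = proximal_gradient(lambda x: A.T @ (A @ x - b),
                      lambda v, t: prox_l1(v, lam * t),
                      np.zeros(60), step)
print("approximate support:", np.flatnonzero(np.abs(x) > 1e-6))
```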

3,627 citations


Cites background from "Decomposition Principle for Linear ..."

  • ...Some classic and important modern references include those by Dantzig and Wolfe [66], Benders [23], Lasdon [118], Geoffrion [93], Tsitsiklis [189], Bertsekas and Tsitsiklis [27], Censor and Zenios [50], and Nedić and Ozdaglar [144, 145]....


Journal ArticleDOI
J. F. Benders1
TL;DR: Presents Benders' partitioning procedure for solving mixed-variables programming problems; the paper was presented to the 8th International Meeting of the Institute of Management Sciences, Brussels, August 23-26, 1961.
Abstract: Paper presented to the 8th International Meeting of the Institute of Management Sciences, Brussels, August 23-26, 1961.

2,782 citations


Cites methods from "Decomposition Principle for Linear ..."

  • ...We assume the reader to be familiar with the theory of convex polyhedral sets and with the computational aspects of solving a linear programming problem by the simplex method; see e.g. TUCKER [13], GOLDMAN [8] and GASS [6]....


Journal ArticleDOI
TL;DR: In this paper, column generation methods for integer programs with a huge number of variables are discussed, including implicit pricing of nonbasic variables to generate new columns or to prove LP optimality at a node of the branch-and-bound tree.
Abstract: We discuss formulations of integer programs with a huge number of variables and their solution by column generation methods, i.e., implicit pricing of nonbasic variables to generate new columns or to prove LP optimality at a node of the branch-and-bound tree. We present classes of models for which this approach decomposes the problem, provides tighter LP relaxations, and eliminates symmetry. We then discuss computational issues and implementation of column generation, branch-and-bound algorithms, including special branching rules and efficient ways to solve the LP relaxation. We also discuss the relationship with Lagrangian duality.
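
The core mechanism, implicit pricing, reduces to a reduced-cost test against the duals of the restricted master LP. The sketch below is a generic illustration with made-up duals and columns; in a real branch-and-price code the candidates are not enumerated but produced implicitly by an oracle (a knapsack, shortest-path, or similar subproblem).

```python
# Pricing step: given simplex duals y from the restricted master LP, a
# column a_j with cost c_j may enter only if c_j - y @ a_j < 0. If no
# column prices out, LP optimality is proved at the current node.
import numpy as np

def price_out(y, candidates, tol=1e-9):
    """Return index of the most negative reduced-cost column, else None."""
    best_j, best_rc = None, -tol
    for j, (c_j, a_j) in enumerate(candidates):
        rc = c_j - y @ a_j
        if rc < best_rc:
            best_j, best_rc = j, rc
    return best_j

y = np.array([0.5, 1.25])                        # duals (made up)
cols = [(1.0, np.array([1.0, 0.0])),             # reduced cost +0.50
        (1.0, np.array([1.0, 1.0]))]             # reduced cost -0.75
print(price_out(y, cols))                        # -> 1: this column enters
```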

2,248 citations

Journal ArticleDOI
P. C. Gilmore1, Ralph E. Gomory1
TL;DR: In this paper, a technique is described for overcoming the difficulty in the linear programming formulation of the cutting-stock problem, which enables one to compute always with a matrix which has no more columns than it has rows.
Abstract: The cutting-stock problem is the problem of filling an order at minimum cost for specified numbers of lengths of material to be cut from given stock lengths of given cost. When expressed as an integer programming problem the large number of variables involved generally makes computation infeasible. This same difficulty persists when only an approximate solution is being sought by linear programming. In this paper, a technique is described for overcoming the difficulty in the linear programming formulation of the problem. The technique enables one to compute always with a matrix which has no more columns than it has rows.
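
Here is a minimal sketch of the technique on invented data: the restricted master LP chooses how often to use each known cutting pattern, and the pricing step, an unbounded knapsack solved by dynamic programming, either proposes a pattern with value above one or proves LP optimality. (Gilmore and Gomory maintain a revised-simplex basis of fixed size, which is what keeps the working matrix no wider than it is tall; re-solving the restricted master with SciPy, as below, is a simplification.)

```python
# Cutting stock by delayed pattern (column) generation. Data are invented.
import numpy as np
from scipy.optimize import linprog

W = 100                                  # stock length
w = np.array([45, 36, 31, 14])           # ordered piece lengths
dem = np.array([97, 610, 395, 211])      # demands

def best_pattern(y, w, cap):
    """Unbounded knapsack by DP: pattern a maximizing y @ a, w @ a <= cap."""
    best = np.zeros(cap + 1)
    take = np.full(cap + 1, -1)
    for c in range(1, cap + 1):
        best[c], take[c] = best[c - 1], -1       # leaving length unused is ok
        for i, wi in enumerate(w):
            if wi <= c and best[c - wi] + y[i] > best[c]:
                best[c], take[c] = best[c - wi] + y[i], i
    a, c = np.zeros(len(w), dtype=int), cap      # walk back to read the pattern
    while c > 0:
        if take[c] < 0:
            c -= 1
        else:
            a[take[c]] += 1; c -= w[take[c]]
    return a

# Start from single-piece patterns so the master is feasible.
patterns = [np.eye(len(w), dtype=int)[i] * (W // w[i]) for i in range(len(w))]
for _ in range(50):
    P = np.column_stack(patterns)
    res = linprog(np.ones(P.shape[1]), A_ub=-P, b_ub=-dem,   # P x >= dem
                  bounds=[(0, None)] * P.shape[1], method="highs")
    y = -res.ineqlin.marginals           # shadow price of each demand row
    a = best_pattern(y, w, W)
    if y @ a <= 1 + 1e-9:                # no pattern prices out: LP optimal
        break
    patterns.append(a)

print("LP bound on stock pieces:", res.fun, "| patterns kept:", len(patterns))
```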

1,933 citations