Journal ArticleDOI

A variation on Karmarkar's algorithm for solving linear programming problems

Earl R. Barnes1
01 Nov 1986-Mathematical Programming (Springer-Verlag New York, Inc.)-Vol. 36, Iss: 2, pp 174-182
TL;DR: The algorithm described here is a variation on Karmarkar’s algorithm for linear programming that applies to the standard form of a linear programming problem and produces a monotone decreasing sequence of values of the objective function.
Abstract: The algorithm described here is a variation on Karmarkar's algorithm for linear programming. It has several advantages over Karmarkar's original algorithm. In the first place, it applies to the standard form of a linear programming problem and produces a monotone decreasing sequence of values of the objective function. The minimum value of the objective function does not have to be known in advance. Secondly, in the absence of degeneracy, the algorithm converges to an optimal basic feasible solution with the nonbasic variables converging monotonically to zero. This makes it possible to identify an optimal basis before the algorithm converges.
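The iteration the abstract describes is what is now called primal affine scaling. A minimal sketch, assuming a standard-form LP (minimize cᵀx subject to Ax = b, x ≥ 0) and a strictly interior starting point; the function name, step fraction, and toy problem below are illustrative and not taken from the paper:

```python
import numpy as np

def affine_scaling(A, b, c, x, gamma=0.9, iters=50):
    """Sketch of a primal affine-scaling iteration.

    At each step the problem is rescaled by the current iterate
    (D = diag(x)), a dual estimate and reduced costs are computed,
    and a step is taken that keeps x strictly positive.
    """
    history = [c @ x]                      # objective values, should decrease monotonically
    for _ in range(iters):
        D2 = np.diag(x ** 2)               # squared scaling matrix D^2
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
        r = c - A.T @ w                    # reduced costs
        dx = -D2 @ r                       # descent direction in the original space
        if np.all(dx >= -1e-12):           # no decreasing component: stop
            break
        # largest step keeping x > 0, damped by gamma < 1
        step = gamma * min(-x[i] / dx[i] for i in range(len(x)) if dx[i] < 0)
        x = x + step * dx
        history.append(c @ x)
    return x, history

# Toy problem: minimize x1 + 2*x2 + 3*x3 subject to x1 + x2 + x3 = 1, x >= 0.
# The optimum is x = (1, 0, 0) with objective value 1.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])
x0 = np.array([1 / 3, 1 / 3, 1 / 3])       # strictly interior starting point
x, hist = affine_scaling(A, b, c, x0)
```

On this nondegenerate example the nonbasic variables x2 and x3 shrink toward zero while the objective decreases at every step, matching the behavior the abstract claims.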
Citations
Book
01 Jan 1996
TL;DR: The Simplex Method in Matrix Notation and Duality Theory, and Applications: Foundations of Convex Programming.
Abstract: Preface. Part 1: Basic Theory - The Simplex Method and Duality. 1. Introduction. 2. The Simplex Method. 3. Degeneracy. 4. Efficiency of the Simplex Method. 5. Duality Theory. 6. The Simplex Method in Matrix Notation. 7. Sensitivity and Parametric Analyses. 8. Implementation Issues. 9. Problems in General Form. 10. Convex Analysis. 11. Game Theory. 12. Regression. Part 2: Network-Type Problems. 13. Network Flow Problems. 14. Applications. 15. Structural Optimization. Part 3: Interior-Point Methods. 16. The Central Path. 17. A Path-Following Method. 18. The KKT System. 19. Implementation Issues. 20. The Affine-Scaling Method. 21. The Homogeneous Self-Dual Method. Part 4: Extensions. 22. Integer Programming. 23. Quadratic Programming. 24. Convex Programming. Appendix A: Source Listings. Answers to Selected Exercises. Bibliography. Index.

1,194 citations


Cites background or methods from "A variation on Karmarkar's algorith..."

  • ...(1977), Kennington & Helgason (1980), Jensen & Barnes (1980), Bertsekas (1991), and Ahuja et al. (1993). The two “original” algorithms for solving minimum-cost network flow problems are the network simplex method developed by Dantzig (1951a) and the primal–dual method developed by Ford & Fulkerson (1958)....


  • ...Of these independent rediscoveries, only two papers offered a convergence analysis: one by Barnes (1986) and the other by Vanderbei et al. (1986). It is interesting to note that Karmarkar himself was one of the independent rediscoverers, but he mistakenly believed that the algorithm enjoyed the same convergence properties as his algorithm (i.e., that it would get within any fixed tolerance of optimality within a specific number of iterations bounded by a polynomial in n). Theorem 20.1(a) was proved by Vanderbei et al. (1986). Part (b) of the Theorem was proved by Tsuchiya & Muramatsu (1992) who also show that the result is sharp. A sharper sharpness result can be found in Hall & Vanderbei (1993). Part (c) of the Theorem was established by Mascarenhas (1997)....



  • ...(1977), Kennington & Helgason (1980), Jensen & Barnes (1980), Bertsekas (1991), and Ahuja et al. (1993). The two “original” algorithms for solving minimum-cost network flow problems are the network simplex method developed by Dantzig (1951a) and the primal–dual method developed by Ford & Fulkerson (1958). The self-dual algorithm described in this chapter is neither of these....



Book
26 Mar 2009
TL;DR: This chapter discusses the foundations of optimization, and some of the methods for unconstrained optimization, as well as topics from linear algebra, including the simplex method and other fundamentals.
Abstract: Preface Part I. Basics: 1. Optimization models 2. Fundamentals of optimization 3. Representation of linear constraints Part II. Linear Programming: 4. Geometry of linear programming 5. The simplex method 6. Duality and sensitivity 7. Enhancements of the simplex method 8. Network problems 9. Computational complexity of linear programming 10. Interior-point methods of linear programming Part III. Unconstrained Optimization: 11. Basics of unconstrained optimization 12. Methods for unconstrained optimization 13. Low-storage methods for unconstrained problems Part IV. Nonlinear Optimization: 14. Optimality conditions for constrained problems 15. Feasible-point methods 16. Penalty and barrier methods Part V. Appendices: Appendix A. Topics from linear algebra Appendix B. Other fundamentals Appendix C. Software Bibliography Index.

524 citations


Additional excerpts

  • ...Then M⁻¹A has nine distinct eigenvalues (1, 8, 9, 10, 11, 12, 13, 14, 15)....


Journal ArticleDOI
TL;DR: Based on a continuous version of Karmarkar's algorithm, two variants resulting from first- and second-order approximations of the continuous trajectory are implemented and tested; the resulting code compares favorably with the simplex code MINOS 4.0.
Abstract: This paper describes the implementation of power series dual affine scaling variants of Karmarkar's algorithm for linear programming. Based on a continuous version of Karmarkar's algorithm, two variants resulting from first and second order approximations of the continuous trajectory are implemented and tested. Linear programs are expressed in an inequality form, which allows for the inexact computation of the algorithm's direction of improvement, resulting in a significant computational advantage. Implementation issues particular to this family of algorithms, such as treatment of dense columns, are discussed. The code is tested on several standard linear programming problems and compares favorably with the simplex code MINOS 4.0.

386 citations


Additional excerpts

  • ...Similar algorithms have been discussed by Barnes (1986) and Vanderbei, Meketon & Freedman (1986)....


Journal ArticleDOI
TL;DR: A unified treatment of algorithms is described for linear programming methods based on the central path, a curve along which the cost decreases and which always stays far from the centre.
Abstract: In this paper a unified treatment of algorithms is described for linear programming methods based on the central path. This path is a curve along which the cost decreases, and that stays always far...

347 citations

Journal ArticleDOI
TL;DR: A modification of Karmarkar's linear programming algorithm that uses a recentered projected gradient approach thereby obviating a priori knowledge of the optimal objective function value and proves that the algorithm converges.
Abstract: We present a modification of Karmarkar's linear programming algorithm. Our algorithm uses a recentered projected gradient approach, thereby obviating a priori knowledge of the optimal objective function value. Assuming primal and dual nondegeneracy, we prove that our algorithm converges. We present computational comparisons between our algorithm and the revised simplex method. For small, dense constraint matrices we saw little difference between the two methods.

325 citations


Cites background or methods from "A variation on Karmarkar's algorith..."

  • ...There is a convergence proof in [ 2 ] that is substantially different from the one presented here....


  • ...Some recent papers on Karmarkar's algorithm are [ 2 ], [3], [8], [9], and [10]....


  • ...This algorithm has been independently proposed in [ 2 ] and [3]....


References
Journal ArticleDOI
Narendra Karmarkar1
TL;DR: It is proved that given a polytope P and a strictly interior point a ∈ P, there is a projective transformation of the space that maps P, a to P′, a′ having the following property: the ratio of the radius of the smallest sphere with center a′ containing P′ to the radius of the largest sphere with center a′ contained in P′ is O(n).
Abstract: We present a new polynomial-time algorithm for linear programming. In the worst case, the algorithm requires O(n^3.5 L) arithmetic operations on O(L)-bit numbers, where n is the number of variables and L is the number of bits in the input. The running time of this algorithm is better than that of the ellipsoid algorithm by a factor of O(n^2.5). We prove that given a polytope P and a strictly interior point a ∈ P, there is a projective transformation of the space that maps P, a to P′, a′ having the following property: the ratio of the radius of the smallest sphere with center a′ containing P′ to the radius of the largest sphere with center a′ contained in P′ is O(n). The algorithm consists of repeated application of such projective transformations, each followed by optimization over an inscribed sphere, to create a sequence of points which converges to the optimal solution in polynomial time.
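The projective transformation referred to in this abstract has a well-known closed form. As a sketch (notation assumed, not taken from the excerpt: a is the current interior point, D = diag(a), and e is the all-ones vector):

```latex
T(x) \;=\; \frac{D^{-1} x}{e^{\top} D^{-1} x}
```

This maps the current point a to the center e/n of the unit simplex, so the subsequent optimization over an inscribed sphere always starts from a well-centered point.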

4,806 citations

Journal ArticleDOI
TL;DR: This work reviews classical barrier-function methods for nonlinear programming based on applying a logarithmic transformation to inequality constraints and shows a “projected Newton barrier” method to be equivalent to Karmarkar's projective method for a particular choice of the barrier parameter.
Abstract: Interest in linear programming has been intensified recently by Karmarkar's publication in 1984 of an algorithm that is claimed to be much faster than the simplex method for practical problems. We review classical barrier-function methods for nonlinear programming based on applying a logarithmic transformation to inequality constraints. For the special case of linear programming, the transformed problem can be solved by a "projected Newton barrier" method. This method is shown to be equivalent to Karmarkar's projective method for a particular choice of the barrier parameter. We then present details of a specific barrier algorithm and its practical implementation. Numerical results are given for several non-trivial test problems, and the implications for future developments in linear programming are discussed.

431 citations

Journal ArticleDOI
TL;DR: A modification of Karmarkar's linear programming algorithm that uses a recentered projected gradient approach thereby obviating a priori knowledge of the optimal objective function value and proves that the algorithm converges.
Abstract: We present a modification of Karmarkar's linear programming algorithm. Our algorithm uses a recentered projected gradient approach, thereby obviating a priori knowledge of the optimal objective function value. Assuming primal and dual nondegeneracy, we prove that our algorithm converges. We present computational comparisons between our algorithm and the revised simplex method. For small, dense constraint matrices we saw little difference between the two methods.

325 citations