Journal ArticleDOI

A combined phase I--phase II scaled potential algorithm for linear programming

01 Dec 1991-Mathematical Programming (Springer-Verlag New York, Inc.)-Vol. 52, Iss: 3, pp 429-439
TL;DR: An extension of the affinely scaled potential reduction algorithm which simultaneously obtains feasibility and optimality in a standard form linear program, without the addition of any “M” terms is developed.
Abstract: We develop an extension of the affinely scaled potential reduction algorithm which simultaneously obtains feasibility and optimality in a standard form linear program, without the addition of any "M" terms. The method, and its lower-bounding procedure, are particularly simple compared with previous interior algorithms not requiring feasibility.
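The affine-scaling machinery the abstract builds on can be illustrated with a minimal sketch. This is a generic textbook primal affine-scaling step, not the paper's combined phase I--phase II method; the function name, damping rule, and example data are illustrative assumptions.

```python
import numpy as np

def affine_scaling_step(A, c, x, alpha=0.5):
    """One generic primal affine-scaling step for min c^T x s.t. Ax = b, x > 0.

    A textbook sketch, not the paper's combined phase I--phase II method.
    Assumes x is strictly feasible (Ax = b, x > 0).
    """
    X = np.diag(x)                      # scaling matrix built from the iterate
    AX = A @ X
    g = X @ c                           # objective gradient in scaled space
    # Project g onto the null space of AX (via the normal equations).
    y, *_ = np.linalg.lstsq(AX @ AX.T, AX @ g, rcond=None)
    d = -(g - AX.T @ y)                 # scaled descent direction
    if np.any(d < 0):                   # damp the step to stay strictly interior
        step = alpha / np.max(-d)
    else:
        step = alpha
    return x + step * (X @ d)

# Minimal example: minimize x1 subject to x1 + x2 = 1, x > 0.
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 0.0])
x = affine_scaling_step(A, c, np.array([0.5, 0.5]))
```

The step stays feasible (Ax = b is preserved by the null-space projection) and strictly interior, while decreasing the objective.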
Citations
Journal ArticleDOI
TL;DR: A unified treatment of algorithms is described for linear programming methods based on the central path, which is a curve along which the cost decreases and that always stays far from the centre.
Abstract: In this paper a unified treatment of algorithms is described for linear programming methods based on the central path. This path is a curve along which the cost decreases, and that always stays far...

347 citations

Journal ArticleDOI
TL;DR: The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming; an overall worst-case operation count of O(m^5.5 L^1.5) is proved.
Abstract: We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration. We describe in detail how the algorithm works for optimization problems with L Lyapunov inequalities, each of size m. We prove an overall worst-case operation count of O(m^5.5 L^1.5). The average-case complexity appears to be closer to O(m^4 L^1.5). This estimate is justified by extensive numerical experimentation, and is consistent with other researchers' experience with the practical performance of interior-point algorithms for linear programming. This result means that the computational cost of extending current control theory based on the solution of Lyapunov or Riccati equations to a theory that is based on the solution of (multiple, coupled) Lyapunov or Riccati inequalities is modest.

249 citations


Additional excerpts

  • ...The easiest is to precede the algorithm with a phase I algorithm to find feasible initial points (see, e.g., [30, §4.3.5]). In other approaches both phases are combined; see, e.g., [4, 14, 26, 28]....

    [...]

Book
07 Sep 2005
TL;DR: Duality theory for linear optimization is developed, a polynomial algorithm for the self-dual model is given, and the canonical problem is solved by the method of the central path.
Abstract: List of figures.- List of tables.- Preface.- Acknowledgements.- Introduction.- I. Introduction: Theory and Complexity.- Duality Theory for Linear Optimization.- A Polynomial Algorithm for the Self-dual Model.- Solving the Canonical Problem.- II. The Logarithmic Barrier Approach.- Preliminaries.- The Dual Logarithmic Barrier Method.- The Primal-Dual Logarithmic Barrier Method.- Initialization.- III. The Target-Following Approach.- Preliminaries.- The Primal-Dual Newton Method.- Applications.- The Dual Newton Method.- The Primal Newton Method.- Application to the Method of Centers.- IV. Miscellaneous Topics.- Karmarkar's Projective Method.- More Properties of the Central Path.- Partial Updating.- Higher-Order Methods.- Parametric and Sensitivity Analysis.- Implementing Interior Point Methods.- Appendices.- Bibliography.- Author Index.- Subject Index.- Symbol Index.

180 citations


Cites background from "A combined phase I--phase II scaled..."

  • ...1 Papers in that stream were written by Anstreicher [14, 15, 16, 18, 19, 20, 21, 22, 23, 24], Freund [83, 85], de Ghellinck and Vial [95, 96], Goffin and Vial [102, 103], Goldfarb and Mehrotra [105, 106, 107], Goldfarb and Xiao [110], Goldfarb and Shaw [108], Shaw and Goldfarb [254], Gonzaga [117, 119], Roos [239], Vial [282, 283, 284], Xu, Yao and Chen [300], Yamashita [301], Ye [304, 305, 306, 307], Ye and Todd [315] and Todd and Burrell [266]....

    [...]

01 Jan 2003
TL;DR: The purpose of the thesis is to elaborate new interior point algorithms for solving linear optimization problems, and it is proved that these algorithms are polynomial.
Abstract: In this paper the abstract of the thesis "New Interior Point Algorithms in Linear Programming" is presented. The purpose of the thesis is to elaborate new interior point algorithms for solving linear optimization problems. The theoretical complexity of the new algorithms is calculated. We also prove that these algorithms are polynomial. The thesis is composed of seven chapters. In the first chapter a short history of interior point methods is discussed. In the following three chapters some variants of the affine scaling, the projective and the path-following algorithms are presented. In the last three chapters new path-following interior point algorithms are defined. In the fifth chapter a new method for constructing search directions for interior point algorithms is introduced, and a new primal-dual path-following algorithm is defined. Polynomial complexity of this algorithm is proved. We mention that this complexity is identical with the best complexity known at present. In the sixth chapter, using an approach similar to the one defined in the previous chapter, a new class of search directions for the self-dual problem is introduced. A new primal-dual algorithm is defined for solving the self-dual linear optimization problem, and polynomial complexity is proved. In the last chapter the method proposed in the fifth chapter is generalized to target-following methods. A conceptual target-following algorithm is defined, and this algorithm is particularized in order to obtain a new primal-dual weighted-path-following method. The complexity of this algorithm is computed.

89 citations


Cites background from "A combined phase I--phase II scaled..."

  • ...IPMs are classified as affine scaling algorithms [38, 39, 40, 1, 21, 119, 17, 116, 26], projective algorithms [76, 11, 13, 14, 15, 45, 46, 43, 47, 48, 50, 51, 52, 58, 114, 124, 125, 126, 127, 12, 16, 54, 62, 61, 64, 113] and path-following algorithms [109, 99, 57, 105, 91, 81, 35, 36, 71, 88, 89, 59, 60, 104, 90, 65, 70]....

    [...]

03 Oct 1996
TL;DR: In this thesis, a proximal perturbation strategy is proposed that improves the robustness of Newton-based complementarity solvers, enabling algorithms to reliably find solutions even for problems whose natural merit functions have strict local minima that are not solutions.
Abstract: Recent improvements in the capabilities of complementarity solvers have led to an increased interest in using the complementarity problem framework to address practical problems arising in mathematical programming, economics, engineering, and the sciences. As a result, increasingly more difficult problems are being proposed that exceed the capabilities of even the best algorithms currently available. There is, therefore, an immediate need to improve the capabilities of complementarity solvers. This thesis addresses this need in two significant ways. First, the thesis proposes and develops a proximal perturbation strategy that enhances the robustness of Newton-based complementarity solvers. This strategy enables algorithms to reliably find solutions even for problems whose natural merit functions have strict local minima that are not solutions. Based upon this strategy, three new algorithms are proposed for solving nonlinear mixed complementarity problems that represent a significant improvement in robustness over previous algorithms. These algorithms have local Q-quadratic convergence behavior, yet depend only on a pseudo-monotonicity assumption to achieve global convergence from arbitrary starting points. Using the MCPLIB and GAMSLIB test libraries, we perform extensive computational tests that demonstrate the effectiveness of these algorithms on realistic problems. Second, the thesis extends some previously existing algorithms to solve more general problem classes. Specifically, the NE/SQP method of Pang & Gabriel (1993), the semismooth equations approach of De Luca, Facchinei & Kanzow (1995), and the infeasible-interior point method of Wright (1994) are all generalized to the mixed complementarity problem framework. In addition, the pivotal method of Cao & Ferris (1995b), which solves affine variational inequalities, is extended to solve affine generalized equations. 
To develop this extension, the piecewise-linear homotopy framework of Eaves (1976) is used to generate an algorithm for finding zeros of piecewise affine maps. We show that the resulting algorithm finds a solution in a finite number of iterations under the assumption that the piecewise affine map is coherently oriented.

77 citations


Additional excerpts

  • ...(Anstreicher 1991, Anstreicher 1989, Lustig, Marsten & Shanno 1991, Lustig, Marsten & Shanno 1992, McShane, Monma & Shanno 1989, Mehrotra 1992, Kojima, Mizuno & Todd 1992, Mizuno 1993, Potra 1992a, Potra 1992c)....

    [...]

References
Journal ArticleDOI
Narendra Karmarkar1
TL;DR: It is proved that given a polytope P and a strictly interior point a ∈ P, there is a projective transformation of the space that maps P, a to P′, a′ having the following property: the ratio of the radius of the smallest sphere with center a′ containing P′ to the radius of the largest sphere with center a′ contained in P′ is O(n).
Abstract: We present a new polynomial-time algorithm for linear programming. In the worst case, the algorithm requires O(n^3.5 L) arithmetic operations on O(L)-bit numbers, where n is the number of variables and L is the number of bits in the input. The running-time of this algorithm is better than the ellipsoid algorithm by a factor of O(n^2.5). We prove that given a polytope P and a strictly interior point a ∈ P, there is a projective transformation of the space that maps P, a to P′, a′ having the following property. The ratio of the radius of the smallest sphere with center a′ containing P′ to the radius of the largest sphere with center a′ contained in P′ is O(n). The algorithm consists of repeated application of such projective transformations, each followed by optimization over an inscribed sphere, to create a sequence of points which converges to the optimal solution in polynomial time.

4,806 citations

Journal ArticleDOI
Yinyu Ye1
TL;DR: In this paper, a primal-dual potential function for linear programming is described, and an interior point algorithm is developed that converges to the optimal solution set in O(√n L) iterations using O(n^3 L) total arithmetic operations.
Abstract: We describe a primal-dual potential function for linear programming: φ(x, s) = ρ ln(x^T s) − Σ_{j=1}^n ln(x_j s_j), where ρ ⩾ n, x is the primal variable, and s is the dual-slack variable. As a result, we develop an interior point algorithm seeking reductions in the potential function with ρ = n + √n. Neither tracing the central path nor using the projective transformation, the algorithm converges to the optimal solution set in O(√n L) iterations and uses O(n^3 L) total arithmetic operations. We also suggest a practical approach to implementing the algorithm.
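The potential function above is simple to evaluate numerically. A minimal sketch, following the notation of the abstract (the helper name and example iterates are ours, not from the paper):

```python
import numpy as np

def primal_dual_potential(x, s, rho):
    """phi(x, s) = rho * ln(x^T s) - sum_j ln(x_j * s_j).

    Requires strictly interior iterates x > 0, s > 0, with rho >= n
    as in the abstract.
    """
    return rho * np.log(x @ s) - np.sum(np.log(x * s))

n = 3
rho = n + np.sqrt(n)                   # the rho = n + sqrt(n) choice from the abstract
x = np.array([1.0, 2.0, 0.5])          # primal iterate (illustrative)
s = np.array([0.5, 1.0, 2.0])          # dual-slack iterate (illustrative)
phi = primal_dual_potential(x, s, rho)
```

Driving the first term down forces the duality gap x^T s toward zero, while the barrier sum keeps the iterates away from the boundary; that tension is what the potential reduction exploits.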

265 citations

Journal ArticleDOI
TL;DR: An extension of Karmarkar's algorithm for linear programming that handles problems with unknown optimal value and generates primal and dual solutions with objective values converging to the common optimal primal and dual value is described.
Abstract: We describe an extension of Karmarkar's algorithm for linear programming that handles problems with unknown optimal value and generates primal and dual solutions with objective values converging to the common optimal primal and dual value. We also describe an implementation for the dense case and show how extreme point solutions can be obtained naturally, with little extra computation.

137 citations

Journal ArticleDOI
TL;DR: An algorithm is presented for solving a set of linear equations on the nonnegative orthant which can be made equivalent to the maximization of a simple concave function subject to a similar set oflinear equations and bounds on the variables.
Abstract: An algorithm is presented for solving a set of linear equations on the nonnegative orthant. This problem can be made equivalent to the maximization of a simple concave function subject to a similar set of linear equations and bounds on the variables. A Newton method can then be used which enforces a uniform lower bound which increases geometrically with the number of iterations. The basic steps are a projection operation and a simple line search. It is shown that this procedure either proves in at most O(n^2 m^2 L) operations that there is no solution or, else, computes an exact solution in at most O(n^3 m^2 L) operations.
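The projection operation named as a basic step above can be sketched as a standard null-space projection (a generic formulation; the function name and example data are ours, not from the paper):

```python
import numpy as np

def nullspace_projection(A, g):
    """Project g onto the null space of A: p = g - A^T (A A^T)^{-1} A g.

    A generic sketch of the projection step; the normal equations are
    solved via least squares rather than forming the inverse explicitly.
    """
    y, *_ = np.linalg.lstsq(A @ A.T, A @ g, rcond=None)
    return g - A.T @ y

A = np.array([[1.0, 1.0, 1.0]])        # one linear equation in three variables
g = np.array([1.0, 0.0, 0.0])
p = nullspace_projection(A, g)         # A @ p is (numerically) zero
```

Any step taken along p preserves A x = b exactly, which is why such projections pair naturally with the line search the abstract describes.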

113 citations

Journal ArticleDOI
TL;DR: It is demonstrated that Karmarkar's projective algorithm is fundamentally an algorithm for fractional linear programming on the simplex and can be easily modified so as to assure monotonicity of the true objective values, while retaining all global convergence properties.
Abstract: We demonstrate that Karmarkar's projective algorithm is fundamentally an algorithm for fractional linear programming on the simplex. Convergence for the latter problem is established assuming only an initial lower bound on the optimal objective value. We also show that the algorithm can be easily modified so as to assure monotonicity of the true objective values, while retaining all global convergence properties. Finally, we show how the monotonic algorithm can be used to obtain an initial lower bound when none is otherwise available.

91 citations