
Showing papers on "Linear-fractional programming" published in 1986


Book
01 Dec 1986
TL;DR: A monograph covering linear algebra and complexity, lattices and linear Diophantine equations, polyhedra, linear inequalities and linear programming, and integer linear programming.
Abstract: Introduction and Preliminaries. Problems, Algorithms, and Complexity. LINEAR ALGEBRA. Linear Algebra and Complexity. LATTICES AND LINEAR DIOPHANTINE EQUATIONS. Theory of Lattices and Linear Diophantine Equations. Algorithms for Linear Diophantine Equations. Diophantine Approximation and Basis Reduction. POLYHEDRA, LINEAR INEQUALITIES, AND LINEAR PROGRAMMING. Fundamental Concepts and Results on Polyhedra, Linear Inequalities, and Linear Programming. The Structure of Polyhedra. Polarity, and Blocking and Anti-Blocking Polyhedra. Sizes and the Theoretical Complexity of Linear Inequalities and Linear Programming. The Simplex Method. Primal-Dual, Elimination, and Relaxation Methods. Khachiyan's Method for Linear Programming. The Ellipsoid Method for Polyhedra More Generally. Further Polynomiality Results in Linear Programming. INTEGER LINEAR PROGRAMMING. Introduction to Integer Linear Programming. Estimates in Integer Linear Programming. The Complexity of Integer Linear Programming. Totally Unimodular Matrices: Fundamental Properties and Examples. Recognizing Total Unimodularity. Further Theory Related to Total Unimodularity. Integral Polyhedra and Total Dual Integrality. Cutting Planes. Further Methods in Integer Linear Programming. References. Indexes.

7,005 citations


Book
01 Jan 1986
TL;DR: A textbook covering linear programming, nonlinear optimization with and without constraints, integer programming, generalized linear programming and decomposition techniques for large-scale problems, dynamic programming, and optimization in infinite dimension.
Abstract: Preface. Foreword. Notation. Fundamental Concepts. Linear Programming. One-dimensional Optimization. Nonlinear, Unconstrained Optimization. Nonlinear Optimization with Constraints. Nonlinear Constrained Optimization. Integer Programming. Solution of Large-scale Programming Problems: Generalized Linear Programming and Decomposition Techniques. Dynamic Programming. Optimization in Infinite Dimension and Applications. References. Appendices. Index.

564 citations


Journal ArticleDOI
TL;DR: This work reviews classical barrier-function methods for nonlinear programming based on applying a logarithmic transformation to inequality constraints and shows a “projected Newton barrier” method to be equivalent to Karmarkar's projective method for a particular choice of the barrier parameter.
Abstract: Interest in linear programming has been intensified recently by Karmarkar's publication in 1984 of an algorithm that is claimed to be much faster than the simplex method for practical problems. We review classical barrier-function methods for nonlinear programming based on applying a logarithmic transformation to inequality constraints. For the special case of linear programming, the transformed problem can be solved by a "projected Newton barrier" method. This method is shown to be equivalent to Karmarkar's projective method for a particular choice of the barrier parameter. We then present details of a specific barrier algorithm and its practical implementation. Numerical results are given for several non-trivial test problems, and the implications for future developments in linear programming are discussed.
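As a concrete illustration of the barrier transformation just described, the sketch below applies damped Newton steps to the logarithmic-barrier subproblem min c^T x - mu * sum(log x_i) subject to Ax = b, shrinking mu between rounds. It is a minimal sketch assuming a strictly feasible starting point; the function name and parameters are illustrative, and it is not the authors' projected Newton barrier implementation.

```python
import numpy as np

def barrier_lp(A, b, c, x0, mu=1.0, shrink=0.5, tol=1e-8, inner=50):
    """Minimal log-barrier sketch for min c^T x s.t. Ax = b, x > 0.

    Assumes x0 is strictly feasible (A @ x0 == b, x0 > 0). Illustrative
    only; not the paper's projected Newton barrier method."""
    x = x0.astype(float).copy()
    m, n = A.shape
    while mu > tol:
        for _ in range(inner):
            g = c - mu / x                     # gradient of barrier objective
            H = np.diag(mu / x**2)             # Hessian (diagonal)
            # KKT system for the equality-constrained Newton step (A dx = 0)
            K = np.block([[H, A.T], [A, np.zeros((m, m))]])
            rhs = np.concatenate([-g, np.zeros(m)])
            dx = np.linalg.solve(K, rhs)[:n]
            # damped step keeping x strictly positive
            t = 1.0
            while np.any(x + t * dx <= 0):
                t *= 0.5
            x = x + t * dx
            if np.linalg.norm(dx) < tol:
                break
        mu *= shrink                           # tighten the barrier
    return x
```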

431 citations


Journal ArticleDOI
TL;DR: A modification of Karmarkar's linear programming algorithm that uses a recentered projected gradient approach, thereby obviating a priori knowledge of the optimal objective function value; convergence is proved assuming primal and dual nondegeneracy.
Abstract: We present a modification of Karmarkar's linear programming algorithm. Our algorithm uses a recentered projected gradient approach thereby obviating a priori knowledge of the optimal objective function value. Assuming primal and dual nondegeneracy, we prove that our algorithm converges. We present computational comparisons between our algorithm and the revised simplex method. For small, dense constraint matrices we saw little difference between the two methods.

325 citations


Journal ArticleDOI
Earl R. Barnes1
TL;DR: The algorithm described here is a variation on Karmarkar’s algorithm for linear programming that applies to the standard form of a linear programming problem and produces a monotone decreasing sequence of values of the objective function.
Abstract: The algorithm described here is a variation on Karmarkar's algorithm for linear programming. It has several advantages over Karmarkar's original algorithm. In the first place, it applies to the standard form of a linear programming problem and produces a monotone decreasing sequence of values of the objective function. The minimum value of the objective function does not have to be known in advance. Secondly, in the absence of degeneracy, the algorithm converges to an optimal basic feasible solution with the nonbasic variables converging monotonically to zero. This makes it possible to identify an optimal basis before the algorithm converges.
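A single iteration of this kind of rescale-and-project ("affine scaling") method might look as follows; this is a hedged sketch of the general idea, not Barnes's exact procedure, and all names and tolerances are illustrative. The direction satisfies A d = 0 and c^T d = -||D r||^2 <= 0, which is what makes the objective values monotone decreasing.

```python
import numpy as np

def affine_scaling(A, b, c, x0, alpha=0.9, tol=1e-8, max_iter=500):
    """Illustrative affine-scaling iteration for min c^T x s.t. Ax = b,
    x > 0. Assumes x0 > 0 with A @ x0 == b; not Barnes's exact method."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        D2 = x**2                                   # diag(x)^2 as a vector
        # dual estimate w solves (A D^2 A^T) w = A D^2 c
        w = np.linalg.solve(A @ (D2[:, None] * A.T), A @ (D2 * c))
        r = c - A.T @ w                             # reduced-cost estimate
        d = -D2 * r                                 # search direction, A d = 0
        if np.linalg.norm(x * r) < tol:             # scaled optimality measure
            break
        # step a fraction alpha of the way to the boundary of x + t d >= 0
        neg = d < 0
        t = alpha * np.min(-x[neg] / d[neg]) if np.any(neg) else 1.0
        x = x + t * d
    return x
```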

307 citations


Journal ArticleDOI
TL;DR: A method for solving a multicriteria linear program, in which the coefficients of the objective functions and the constraints are fuzzy numbers of the L-R type, is elaborated with a view to its application to the development of a water supply system.

240 citations


Journal ArticleDOI
TL;DR: An extension of Karmarkar's algorithm for linear programming that handles problems with unknown optimal value and generates primal and dual solutions with objective values converging to the common optimal primal and dual value is described.
Abstract: We describe an extension of Karmarkar's algorithm for linear programming that handles problems with unknown optimal value and generates primal and dual solutions with objective values converging to the common optimal primal and dual value. We also describe an implementation for the dense case and show how extreme point solutions can be obtained naturally, with little extra computation.

137 citations


Journal ArticleDOI
TL;DR: An algorithm is presented for solving a set of linear equations on the nonnegative orthant which can be made equivalent to the maximization of a simple concave function subject to a similar set of linear equations and bounds on the variables.
Abstract: An algorithm is presented for solving a set of linear equations on the nonnegative orthant. This problem can be made equivalent to the maximization of a simple concave function subject to a similar set of linear equations and bounds on the variables. A Newton method can then be used which enforces a uniform lower bound which increases geometrically with the number of iterations. The basic steps are a projection operation and a simple line search. It is shown that this procedure either proves in at most O(n^2 m^2 L) operations that there is no solution or else computes an exact solution in at most O(n^3 m^2 L) operations.
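For illustration only, the sketch below finds a nonnegative solution of Ax = b (when one exists) by the classical alternating-projections idea, which shares the paper's two basic steps of a projection and a simple update but not its Newton machinery or its O(n^2 m^2 L) / O(n^3 m^2 L) guarantees. It assumes A has full row rank.

```python
import numpy as np

def nonneg_solve(A, b, tol=1e-10, max_iter=10000):
    """Find x >= 0 with A x = b by alternating projections onto the
    affine set {x : A x = b} and the nonnegative orthant. A classical
    stand-in for illustration; not the paper's Newton method."""
    m, n = A.shape
    AAt = A @ A.T                 # formed once; assumes full row rank
    x = np.zeros(n)
    for _ in range(max_iter):
        x = x - A.T @ np.linalg.solve(AAt, A @ x - b)   # onto {Ax = b}
        x = np.maximum(x, 0.0)                          # onto x >= 0
        if np.linalg.norm(A @ x - b) < tol:
            return x
    # unlike the paper's method, this sketch cannot certify infeasibility
    return None
```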

113 citations


Book
01 Aug 1986
TL;DR: A monograph on stochastic linear programming and its duality theory, covering linear programs in probabilities and their duals, integrated chance constraints, and a dual of a dynamic inventory control model in both the deterministic and the stochastic case.
Abstract: 1. Introduction and Summary. 2. Mathematical Programming and Duality Theory. 3. Stochastic Linear Programming Models. 4. Some Linear Programs in Probabilities and Their Duals. 5. On Integrated Chance Constraints. 6. On the Behaviour of the Optimal Value Operator of Dynamic Programming. 7. Robustness against Dependence in PERT. 8. A Dual of a Dynamic Inventory Control Model: The Deterministic and the Stochastic Case. List of Optimization Problems.

99 citations


01 Jan 1986
TL;DR: This report forms the user's guide for Version 1.0 of LSSOL, a set of Fortran 77 subroutines for linearly constrained linear least-squares and convex quadratic programming; its two main features are the exploitation of convexity and the treatment of singularity.
Abstract: This report forms the user's guide for Version 1.0 of LSSOL, a set of Fortran 77 subroutines for linearly constrained linear least-squares and convex quadratic programming. The method of LSSOL is of the two-phase, active-set type, and is related to the method used in the package SOL/QPSOL. Two main features of LSSOL are its exploitation of convexity and treatment of singularity. LSSOL may also be used for linear programming, and to find a feasible point with respect to a set of linear inequality constraints. LSSOL treats all matrices as dense, and hence is not intended for large sparse problems.

99 citations


Journal ArticleDOI
TL;DR: It is demonstrated that Karmarkar's projective algorithm is fundamentally an algorithm for fractional linear programming on the simplex and can be easily modified so as to assure monotonicity of the true objective values, while retaining all global convergence properties.
Abstract: We demonstrate that Karmarkar's projective algorithm is fundamentally an algorithm for fractional linear programming on the simplex. Convergence for the latter problem is established assuming only an initial lower bound on the optimal objective value. We also show that the algorithm can be easily modified so as to assure monotonicity of the true objective values, while retaining all global convergence properties. Finally, we show how the monotonic algorithm can be used to obtain an initial lower bound when none is otherwise available.
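For reference, the linear-fractional problem alluded to here has the standard form below, and the classical Charnes-Cooper change of variables reduces it to an ordinary linear program whenever the denominator is positive on the feasible set (standard material, not taken from the paper):

```latex
\begin{aligned}
\text{(LFP)}\qquad & \min_{x}\ \frac{c^{\top}x+\alpha}{d^{\top}x+\beta}
 \quad\text{s.t.}\quad Ax \le b,\ x \ge 0,\qquad d^{\top}x+\beta>0,\\
\text{substitute}\qquad & t=\frac{1}{d^{\top}x+\beta},\qquad y=t\,x,\\
\text{(LP)}\qquad & \min_{y,\,t}\ c^{\top}y+\alpha t
 \quad\text{s.t.}\quad Ay \le b\,t,\quad d^{\top}y+\beta t=1,\quad y\ge 0,\ t\ge 0.
\end{aligned}
```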

Journal ArticleDOI
TL;DR: It is shown that a particular pivoting algorithm, which is called the lexicographic Lemke algorithm, takes an expected number of steps that is bounded by a quadratic in n, when applied to a random linear complementarity problem of dimension n.
Abstract: We show that a particular pivoting algorithm, which we call the lexicographic Lemke algorithm, takes an expected number of steps that is bounded by a quadratic in n, when applied to a random linear complementarity problem of dimension n. We present two probabilistic models, both requiring some nondegeneracy and sign-invariance properties. The second distribution is concerned with linear complementarity problems that arise from linear programming. In this case we give bounds that are quadratic in the smaller of the two dimensions of the linear programming problem, and independent of the larger. Similar results have been obtained by Adler and Megiddo.

Journal ArticleDOI
TL;DR: The problem of processing and combining possibilistic information in the scope of linear programming is approached using the concepts of ‘more possible value’, ‘α-possibly feasible action’ and ‘α-possibly efficient action’.

Book ChapterDOI
TL;DR: A general approach to global optimization based on a solution method for d.c. programming problems is presented.
Abstract: A d.c. function is a function which can be represented as a difference of two convex functions. A d.c. programming problem is a mathematical programming problem involving a d.c. objective function and (or) d.c. constraints. We present a general approach to global optimization based on a solution method for d.c. programming problems.
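A one-line example may help fix ideas (standard material, not from the chapter): every bilinear term is d.c., since

```latex
x_1 x_2 \;=\; \underbrace{\tfrac{1}{4}(x_1+x_2)^2}_{\text{convex}} \;-\; \underbrace{\tfrac{1}{4}(x_1-x_2)^2}_{\text{convex}},
```

so, for instance, any indefinite quadratic can be written as a difference of two convex quadratics by splitting its Hessian into positive semidefinite parts.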

Journal ArticleDOI
TL;DR: A simple Newton-like descent algorithm for linear programming is proposed, together with results of preliminary computational experiments on small- and medium-size problems; it gives local superlinear convergence and, experimentally, shows global linear convergence.
Abstract: A simple Newton-like descent algorithm for linear programming is proposed together with results of preliminary computational experiments on small- and medium-size problems. The proposed algorithm gives local superlinear convergence to the optimum and, experimentally, shows global linear convergence. It is similar to Karmarkar's algorithm in that it is an interior feasible direction method and self-correcting, while it is quite different from Karmarkar's in that it gives superlinear convergence and that no artificial extra constraint is introduced nor is projective geometry needed; affine geometry suffices.

Proceedings ArticleDOI
01 Dec 1986
TL;DR: A dual problem which is unconstrained, piecewise linear, and involves a dual variable for each node is formulated, and a dual algorithm that resembles a Gauss-Seidel relaxation method is proposed.
Abstract: We consider distributed solution of the classical linear minimum cost network flow problem. We formulate a dual problem which is unconstrained, piecewise linear, and involves a dual variable for each node. We propose a dual algorithm that resembles a Gauss-Seidel relaxation method. At each iteration the dual variable of a single node is changed based on local information from adjacent nodes. In a distributed setting each node can change its variable independently of the variable changes of other nodes. The algorithm is efficient for some classes of problems, notably for the max-flow problem for which it resembles a recent algorithm by Goldberg [11].
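In one common sign convention (conventions vary, so this block is a hedged reconstruction rather than the paper's exact notation), the unconstrained piecewise-linear dual has the form

```latex
\max_{p}\; q(p)=\sum_{(i,j)\in\mathcal{A}} q_{ij}(p_i-p_j)+\sum_{i\in\mathcal{N}} s_i\,p_i,
\qquad
q_{ij}(t)=\min_{\,l_{ij}\le f_{ij}\le u_{ij}}\,(a_{ij}-t)\,f_{ij},
```

with one price p_i per node, supplies s_i, arc costs a_ij, and flow bounds [l_ij, u_ij]; the Gauss-Seidel step maximizes q over a single p_i while all other prices are held fixed.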

Journal ArticleDOI
TL;DR: This paper presents a procedure, based on a relaxation, from which more efficient iterations can be expected in Phase I, especially in the presence of degeneracy, owing to a new way of determining the outgoing variable.

Journal ArticleDOI
TL;DR: The new projective scaling algorithm for linear programming has caused quite a stir in the press, mainly because of reports that it is 50 times faster than the simplex method on large problems and has a polynomial bound on worst-case running time that is better than the ellipsoid algorithm's.
Abstract: N. Karmarkar’s new projective scaling algorithm for linear programming has caused quite a stir in the press, mainly because of reports that it is 50 times faster than the simplex method on large problems. It also has a polynomial bound on worst-case running time that is better than the ellipsoid algorithm’s. Radically different from the simplex method, it moves through the interior of the polytope, transforming the space at each step to place the current point at the polytope’s center. The algorithm is described in enough detail to enable one to write one’s own computer code and to understand why it has polynomial running time. Some recent attempts to make the algorithm live up to its promise are also reviewed.
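Taking up that invitation, here is a minimal sketch of one projective-scaling step for the canonical form min c^T x subject to Ax = 0, sum(x) = 1, x > 0, whose optimal value is assumed to be 0. It follows the standard textbook description rather than any particular implementation; the step size alpha and all names are illustrative.

```python
import numpy as np

def karmarkar_step(A, c, x, alpha=0.25):
    """One projective-scaling step for the canonical form
        min c^T x  s.t.  A x = 0,  sum(x) = 1,  x > 0,
    with optimal value 0. Illustrative sketch only."""
    n = len(x)
    D = np.diag(x)
    # constraints of the transformed problem: A D y = 0 and sum(y) = 1
    B = np.vstack([A @ D, np.ones(n)])
    # project the scaled cost D c onto the null space of B
    Dc = D @ c
    cp = Dc - B.T @ np.linalg.solve(B @ B.T, B @ Dc)
    # move from the simplex center against the projected cost
    r = 1.0 / np.sqrt(n * (n - 1))          # radius of the inscribed sphere
    y = np.ones(n) / n - alpha * r * cp / np.linalg.norm(cp)
    # map back with the inverse projective transformation
    x_new = D @ y
    return x_new / x_new.sum()
```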

Journal ArticleDOI
TL;DR: An algorithm based on a linear function of the standard deviation of the random nutrient within each feedstuff, for which a penalty parameter is iterated in a search for a desired probability, yielded results very similar to the exact solutions found by nonlinear programming methods.

Journal ArticleDOI
TL;DR: It is shown that the problem of exiting a degenerate vertex is as hard as the general linear programming problem and that to solve the latter, it is sufficient to exit that vertex in a direction that improves the objective function value.
Abstract: We show that the problem of exiting a degenerate vertex is as hard as the general linear programming problem. More precisely, every linear programming problem can easily be reduced to one where the second best vertex (which is highly degenerate) is already given. So, to solve the latter, it is sufficient to exit that vertex in a direction that improves the objective function value.

Book ChapterDOI
01 Jan 1986
TL;DR: Since an evident framework for the quantitative analysis of uncertainty is provided by probability theory, it seems only natural to interpret the uncertain coefficient values as realizations of random variables; this approach characterizes stochastic linear programming.
Abstract: Linear programming has proven to be a suitable framework for the quantitative analysis of many decision problems. The reasons for its popularity are obvious: many practical problems can be modeled, at least approximately, as linear programs, and powerful software is available. Nevertheless, even if the problem has the necessary linear structure it is not sure that the linear programming approach works. One of the reasons is that the model builder must be able to provide numerical values for each of the coefficients. But in practical situations one often is not sure about the “true” values of all coefficients. Usually the uncertainty is exorcized by taking reasonable guesses or maybe by making careful estimates. In combination with a sensitivity analysis with respect to the most inaccurate coefficients this approach is satisfactory in many cases. However, if it appears that the optimal solution depends heavily on the value of some inaccurate data, it might be sensible to take the uncertainty of the coefficients into consideration in a more fundamental way. Since an evident framework for the quantitative analysis of uncertainty is provided by probability theory it seems only natural to interpret the uncertain coefficient values as realizations of random variables. This approach characterizes stochastic linear programming.
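The canonical formalization of this idea is the two-stage linear program with recourse, shown here in standard notation as an illustrative aside (q, W, h, T are the recourse data and ξ the random element; this block is not taken from the chapter):

```latex
\min_{x\ge 0}\ c^{\top}x+\mathbb{E}_{\xi}\bigl[Q(x,\xi)\bigr]
\quad\text{s.t.}\quad Ax=b,
\qquad
Q(x,\xi)=\min_{y\ge 0}\bigl\{\,q^{\top}y \;:\; Wy=h(\xi)-T(\xi)\,x \,\bigr\}.
```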

Journal ArticleDOI
TL;DR: In this paper, a linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure.
Abstract: A linear optimization approach with a simple, real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss-beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to the initial conditions of the linear optimization approach is also demonstrated.
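A generic sequential-linear-programming loop of the kind described might be sketched as follows: the user supplies the objective gradient and the constraint values and Jacobian, and a box trust region keeps each LP bounded. This is an illustrative skeleton under those assumptions, not the paper's eigenvalue-sensitivity formulation.

```python
import numpy as np
from scipy.optimize import linprog

def slp(f_grad, g, g_jac, x0, radius=0.1, iters=20):
    """Sequential linear programming sketch: repeatedly linearize the
    nonlinear constraints g(x) <= 0 and minimize the first-order model
    of the objective inside a box trust region. Illustrative only."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        c = f_grad(x)                     # gradient of the objective at x
        J, gval = g_jac(x), g(x)
        # linearized constraints: g(x) + J d <= 0, with |d_i| <= radius
        res = linprog(c, A_ub=J, b_ub=-gval,
                      bounds=[(-radius, radius)] * len(x))
        if not res.success or np.linalg.norm(res.x) < 1e-10:
            break
        x = x + res.x
    return x
```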

Journal ArticleDOI
TL;DR: In this article, the authors give a different proof of convergence of the polynomial-time algorithm for linear programming based on projective transformations and show that it converges in time O(n log n).

Book ChapterDOI
01 Jan 1986
TL;DR: This presentation explains how, in many real-world applications, discrete variables must be introduced to represent, for instance, an investment choice or a production level, giving rise to a MOILP problem.
Abstract: For the last 15 years, many Multi-Objective Linear Programming (MOLP) methods with continuous solutions have been developed. In many real world applications, however, discrete variables must be introduced representing, for instance, an investment choice, a production level, etc. The linear mathematical structure is then Integer Linear Programming (ILP), associated with MOLP giving a MOILP problem. Unfortunately, this type of problem has its own difficulties, as it cannot be solved by simply combining ILP and MOLP methods.

Journal ArticleDOI
TL;DR: A family of variants of the Simplex method, which are based on a Constraint-By-Constraint procedure: the solution to a linear program is obtained by solving a sequence of subproblems with an increasing number of constraints.
Abstract: We present a family of variants of the Simplex method, which are based on a Constraint-By-Constraint procedure: the solution to a linear program is obtained by solving a sequence of subproblems with an increasing number of constraints. We discuss several probabilistic models for generating linear programs. In all of them the underlying distribution is assumed to be invariant under changing the signs of rows or columns in the problem data. A weak regularity condition is also assumed. Under these models, for linear programs with d variables and m + d inequality constraints, the expected number of pivots required by these algorithms is bounded by a function of min{m, d} only. In particular this means that, for a fixed number of variables, the expected number of pivots is bounded by a constant when the number of constraints tends to infinity. Since Smale's original model [Smale, S. 1983. On the average speed of the simplex method of linear programming. Math. Programming 27.] satisfies our probabilistic assumptions, the same results apply to his model, although not to the particular algorithm he analyzes. We also present some results for models generating only feasible linear programs, and for Bland's pivoting rule. We conclude with a discussion of our probabilistic models, and show why they are inadequate for obtaining meaningful results unless d and m are of the same order of magnitude.
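The constraint-by-constraint idea can be imitated with a black-box LP solver, as in the hedged sketch below; the paper's variants instead reuse simplex pivots across subproblems, so this illustrates the scheme, not their algorithm. Variable bounds are assumed to keep every subproblem bounded.

```python
import numpy as np
from scipy.optimize import linprog

def constraint_by_constraint(c, A, b, bounds=None, start=1):
    """Solve min c^T x s.t. A x <= b by starting from a few rows of A
    and repeatedly adding the most violated remaining constraint.
    Illustrative; assumes bounds keep every subproblem bounded."""
    active = list(range(start))               # indices of rows in use
    while True:
        res = linprog(c, A_ub=A[active], b_ub=b[active], bounds=bounds)
        if not res.success:
            return res                        # infeasible or unbounded subproblem
        viol = A @ res.x - b                  # residuals of all constraints
        worst = int(np.argmax(viol))
        if viol[worst] <= 1e-9:
            return res                        # feasible for the full problem
        active.append(worst)                  # add the most violated row
```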

Journal ArticleDOI
TL;DR: This note considers the solution of a linear program using suitably adapted homotopy techniques of nonlinear programming and equation solving that move through the interior of the polytope of feasible solutions, with the homotopy defined by a quadratic regularizing term in an appropriate metric.
Abstract: In this note, we consider the solution of a linear program, using suitably adapted homotopy techniques of nonlinear programming and equation solving that move through the interior of the polytope of feasible solutions. The homotopy is defined by means of a quadratic regularizing term in an appropriate metric. We also briefly discuss algorithmic implications and connections with the affine variant of Karmarkar's method.
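In the simplest Euclidean metric, a quadratic-regularization homotopy for min{c^T x : Ax = b, x >= 0} can be written as follows (a generic form consistent with the abstract, not necessarily the authors' exact metric):

```latex
x(t)=\arg\min_{x}\Bigl\{\,c^{\top}x+\tfrac{1}{2t}\,\lVert x-x^{0}\rVert^{2} \;:\; Ax=b,\ x\ge 0 \Bigr\},\qquad t>0,
```

so that x(t) traces a smooth path from the regularization center x^0 (as t goes to 0) toward the optimal face of the polytope (as t grows).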

Journal ArticleDOI
TL;DR: This work extends and simplifies Smale's analysis of the expected number of pivots for a linear program with many variables and few constraints, applying it to new versions of the simplex algorithm and to new random distributions.
Abstract: We extend and simplify Smale's work on the expected number of pivots for a linear program with many variables and few constraints. Our analysis applies to new versions of the simplex algorithm and to new random distributions.

Journal ArticleDOI
TL;DR: Two algorithms for the most general bilinear programming problem are discussed: one is a specialization of the recent biconvex programming algorithm of Al-Khayyal and Falk, and the other is an entirely new implicit enumeration procedure.

Book
01 Jan 1986
TL;DR: A textbook presenting linear programming modeling, graphical solution of linear programs in two variables, the simplex method, sensitivity (or postoptimality) analysis, and computer solutions to linear programming problems.
Abstract: Introduction. Linear Programming Modeling. Graphical Solution of Linear Programs in Two Variables. The Simplex Method for Solving Linear Programs. Sensitivity (or Postoptimality) Analysis. Computer Solutions to Linear Programming Problems.

Journal ArticleDOI
TL;DR: In this paper, an algorithm for quasiconcave nonlinear fractional programming problems is proposed, based on ranking the vertices of a linear fractional programming problem and techniques from global optimization.
Abstract: In this note we consider an algorithm for quasiconcave nonlinear fractional programming problems, based on ranking the vertices of a linear fractional programming problem and techniques from global optimization.