
Showing papers on "Linear programming published in 1990"


Journal ArticleDOI
TL;DR: It is proved that no polynomial algorithm can achieve a worst-case ratio less than 3/2 unless P = NP, and a complexity classification for all special cases with a fixed number of processing times is obtained.
Abstract: We consider the following scheduling problem. There are m parallel machines and n independent jobs. Each job is to be assigned to one of the machines. The processing of job j on machine i requires time p_ij. The objective is to find a schedule that minimizes the makespan. Our main result is a polynomial algorithm which constructs a schedule that is guaranteed to be no longer than twice the optimum. We also present a polynomial approximation scheme for the case that the number of machines is fixed. Both approximation results are corollaries of a theorem about the relationship of a class of integer programming problems and their linear programming relaxations. In particular, we give a polynomial method to round the fractional extreme points of the linear program to integral points that nearly satisfy the constraints. In contrast to our main result, we prove that no polynomial algorithm can achieve a worst-case ratio less than 3/2 unless P = NP. We finally obtain a complexity classification for all special cases with a fixed number of processing times.
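The paper's LP rounding argument is beyond a short snippet, but the makespan objective it approximates is easy to state in code. Below is a minimal brute-force baseline for the unrelated-machines problem (all names are illustrative, and the exhaustive search is only viable for tiny instances):

```python
from itertools import product

def makespan(assign, p):
    # assign[j] = machine chosen for job j; p[i][j] = time of job j on machine i
    loads = [0] * len(p)
    for j, i in enumerate(assign):
        loads[i] += p[i][j]
    return max(loads)

def opt_makespan(p):
    # exhaustive search over all m^n assignments -- exponential, tiny instances only
    m, n = len(p), len(p[0])
    return min(makespan(a, p) for a in product(range(m), repeat=n))

p = [[2, 3, 4],   # processing times of the three jobs on machine 0
     [3, 1, 2]]   # processing times of the three jobs on machine 1
print(opt_makespan(p))  # prints 3
```

A polynomial 2-approximation such as the paper's would be checked against `opt_makespan` on instances this small.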

953 citations



Journal ArticleDOI
TL;DR: In this article, a two-phase decomposition method is proposed for the optimal design of new looped water distribution networks as well as for the parallel expansion of existing ones, where the main feature of the method is that it generates a sequence of improving local optimal solutions.
Abstract: A two-phase decomposition method is proposed for the optimal design of new looped water distribution networks as well as for the parallel expansion of existing ones. The main feature of the method is that it generates a sequence of improving local optimal solutions. The first phase of the method takes a gradient approach with the flow distribution and pumping heads as decision variables and is an extension of the linear programming gradient method proposed by Alperovits and Shamir (1977) for nonlinear modeling. The technique is iterative and produces a local optimal solution. In the second phase the link head losses of this local optimal solution are fixed, and the resulting concave program is solved for the link flows and pumping heads; these then serve to restart the first phase to obtain an improved local optimal solution. The whole procedure continues until no further improvement can be achieved. Some applications and extensions of the method are also discussed.

532 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe developments that have transformed the LP (linear programming) approach into a truly general-purpose OPF (optimal power flow) solver, with computational and other advantages over even recent nonlinear programming (NLP) methods.
Abstract: The authors describe developments that have transformed the LP (linear programming) approach into a truly general-purpose OPF (optimal power flow) solver, with computational and other advantages over even recent nonlinear programming (NLP) methods. It is pointed out that the nonseparable loss-minimization problem can now be solved, giving the same results as NLP on power systems of any size and type. Coupled formulations, where for instance voltages and VArs become constraints on MW scheduling, are handled. Former limitations on the modeling of generator cost curves have been eliminated. In addition, the approach accommodates a large variety of power system operating limits, including the very important category of contingency constraints. All of the reported enhancements are fully implemented in the production OPF software described here, and most have already been utilized within the industry.

517 citations


Journal ArticleDOI
TL;DR: An algorithm is presented for solving the linear/quadratic case of the bilevel programming problem, which is reformulated as a standard mathematical program by exploiting the follower's Kuhn–Tucker conditions.
Abstract: The bilevel programming problem is a static Stackelberg game in which two players try to maximize their individual objective functions. Play is sequential and uncooperative in nature. This paper presents an algorithm for solving the linear/quadratic case. In order to make the problem more manageable, it is reformulated as a standard mathematical program by exploiting the follower's Kuhn–Tucker conditions. A branch and bound scheme suggested by Fortuny-Amat and McCarl is used to enforce the underlying complementary slackness conditions. An example is presented to illustrate the computations, and results are reported for a wide range of problems containing up to 60 leader variables, 40 follower variables, and 40 constraints. The main contributions of the paper are in the step-by-step details of the implementation, and in the scope of the testing.
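The sequential, uncooperative structure of the game can be illustrated on a toy discrete instance: the leader enumerates his moves, and for each one the follower's rational reaction is computed first. The numbers below are made up for illustration; this is not the paper's Kuhn–Tucker/branch-and-bound method:

```python
def follower_best(x):
    # follower maximizes y subject to x + y <= 8 and 0 <= y <= 6
    return min(6, 8 - x)

def solve_bilevel():
    # leader maximizes x + 2*y over integer x in 0..8, anticipating the follower
    best = None
    for x in range(9):
        y = follower_best(x)
        val = x + 2 * y
        if best is None or val > best[0]:
            best = (val, x, y)
    return best  # (leader objective, leader move, follower response)

print(solve_bilevel())  # prints (14, 2, 6)
```

Note how the leader's best move (x = 2) is chosen only after anticipating the follower's reaction, which is the essence of the Stackelberg setting.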

388 citations


Journal ArticleDOI
TL;DR: It is shown, using small examples, that two algorithms previously published for the Bilevel Linear Programming problem BLP may fail to find the optimal solution and thus must be considered to be heuristics.
Abstract: We show, using small examples, that two algorithms previously published for the Bilevel Linear Programming problem BLP may fail to find the optimal solution and thus must be considered to be heuristics. A proof is given that solving BLP problems is NP-hard, which makes it unlikely that there is a good, exact algorithm.

372 citations


Journal ArticleDOI
TL;DR: A basic implicit enumeration scheme is developed that finds good feasible solutions within relatively few iterations in the case where each player tries to maximize the individual objective function over a jointly constrained polyhedron.
Abstract: A two-person, noncooperative game in which the players move in sequence can be modeled as a bilevel optimization problem. In this paper, we examine the case where each player tries to maximize the individual objective function over a jointly constrained polyhedron. The decision variables are variously partitioned into continuous and discrete sets. The leader goes first, and through his choice may influence but not control the responses available to the follower. For two reasons the resultant problem is extremely difficult to solve, even by complete enumeration. First, it is not possible to obtain tight upper bounds from the natural relaxation; and second, two of the three standard fathoming rules common to branch and bound cannot be applied fully. In light of these limitations, we develop a basic implicit enumeration scheme that finds good feasible solutions within relatively few iterations. A series of heuristics are then proposed in an effort to strike a balance between accuracy and speed. The computational results suggest that some compromise is needed when the problem contains more than a modest number of integer variables.

349 citations


Journal ArticleDOI
TL;DR: In this paper, a mathematical framework is presented for the solution of the economic dispatch problem, and the application of the Dantzig-Wolfe decomposition method to this problem is emphasized.
Abstract: A mathematical framework is presented for the solution of the economic dispatch problem. The application of the Dantzig-Wolfe decomposition method for the solution of this problem is emphasized. The system's optimization problem is decomposed into several subproblems corresponding to specific areas in the power system. The upper bound technique along with the decomposition method is applied to a 16-bus system and a modified IEEE 30-bus system, and numerical results are presented for larger systems. The results indicate that the presented formulation of the reactive power optimization and the application of the decomposition procedure facilitate the solution of the problem. The algorithm can be applied to a large-scale power network, where its solution represents a significant reduction in the number of iterations and the required computation time.

342 citations



BookDOI
01 Nov 1990
TL;DR: This book discusses fuzzy logic with linguistic quantifiers in multiobjective decision making and optimization, a step towards more human-consistent models, and Stochastic Versus Fuzzy Approaches and Related Issues.
Abstract: I. The General Framework.- 1. Multiobjective programming under uncertainty : scope and goals of the book.- 2. Multiobjective programming : basic concepts and approaches.- 3. Stochastic programming : numerical solution techniques by semi-stochastic approximation methods.- 4. Fuzzy programming : a survey of recent developments.- II. The Stochastic Approach.- 1. Overview of different approaches for solving stochastic programming problems with multiple objective functions.- 2. "STRANGE" : an interactive method for multiobjective stochastic linear programming, and "STRANGE-MOMIX" : its extension to integer variables.- 3. Application of STRANGE to energy studies.- 4. Multiobjective stochastic linear programming with incomplete information : a general methodology.- 5. Computation of efficient solutions of stochastic optimization problems with applications to regression and scenario analysis.- III. The Fuzzy Approach.- 1. Interactive decision-making for multiobjective programming problems with fuzzy parameters.- 2. A possibilistic approach for multiobjective programming problems. Efficiency of solutions.- 3. "FLIP" : an interactive method for multiobjective linear programming with fuzzy coefficients.- 4. Application of "FLIP" method to farm structure optimization under uncertainty.- 5. "FULPAL" : an interactive method for solving (multiobjective) fuzzy linear programming problems.- 6. Multiple objective linear programming problems in the presence of fuzzy coefficients.- 7. Inequality constraints between fuzzy numbers and their use in mathematical programming.- 8. Using fuzzy logic with linguistic quantifiers in multiobjective decision making and optimization: A step towards more human-consistent models.- IV. Stochastic Versus Fuzzy Approaches and Related Issues.- 1. Stochastic versus possibilistic multiobjective programming.- 2. A comparison study of "STRANGE" and "FLIP".- 3. Multiobjective mathematical programming with inexact data.

291 citations


Journal ArticleDOI
TL;DR: A general-purpose algorithm is given for converting procedures that solve linear programming problems; the conversion is polynomial for constraint matrices with polynomially bounded subdeterminants, and an algorithm is given for finding an ε-accurate optimal continuous solution to the nonlinear problem.
Abstract: The polynomiality of nonlinear separable convex (concave) optimization problems on linear constraints with a matrix with "small" subdeterminants, and the polynomiality of such integer problems, provided the integer linear version of such problems is polynomial, are proven. This paper presents a general-purpose algorithm for converting procedures that solve linear programming problems. The conversion is polynomial for constraint matrices with polynomially bounded subdeterminants. Among the important corollaries of the algorithm is the extension of the polynomial solvability of integer linear programming problems with totally unimodular constraint matrix to integer separable convex programming. An algorithm for finding an ε-accurate optimal continuous solution to the nonlinear problem that is polynomial in log(1/ε), the input size, and the largest subdeterminant of the constraint matrix is also presented. These developments are based on proximity results between the continuous and integral optimal solutions for problems with any nonlinear separable convex objective function. The practical feature of our algorithm is that it does not demand an explicit representation of the nonlinear function, only a polynomial number of function evaluations on a prespecified grid.
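The "function evaluations on a prespecified grid" idea can be illustrated in one dimension: a convex function over an integer range can be minimized exactly with O(log n) evaluations by binary search on the discrete slope. This is only a one-variable sketch, not the paper's multidimensional proximity-based algorithm:

```python
def convex_grid_argmin(f, lo, hi):
    # exact minimizer of a convex function over the integers in [lo, hi],
    # using only function evaluations: binary search on the discrete slope
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid) <= f(mid + 1):
            hi = mid          # slope is nonnegative: minimizer at mid or to its left
        else:
            lo = mid + 1      # slope still negative: minimizer is to the right
    return lo

print(convex_grid_argmin(lambda x: (x - 3) ** 2, 0, 10))  # prints 3
```

The key property exploited is that a convex function's discrete slope f(x+1) − f(x) is nondecreasing, so its sign change can be located by bisection.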

Journal ArticleDOI
TL;DR: A hierarchy of relaxations obtained by combining enumeration of initial sequences with Smith's rule can be formulated as a linear programming problem in an enlarged space of variables and new valid inequalities for the problem are obtained.

Journal ArticleDOI
TL;DR: This paper describes an efficient implementation of a nested decomposition algorithm for the multistage stochastic linear programming problem and results compare the performance of the algorithm to MINOS 5.0.
Abstract: This paper describes an efficient implementation of a nested decomposition algorithm for the multistage stochastic linear programming problem. Many of the computational tricks developed for deterministic staircase problems are adapted to the stochastic setting and their effect on computation times is investigated. The computer code supports an arbitrary number of time periods and various types of random structures for the input data. Numerical results compare the performance of the algorithm to MINOS 5.0.

Journal ArticleDOI
TL;DR: It is shown how to eliminate a previously undetected distortion and thereby increase the scope and flexibility of the LP discriminant analysis models, including the use of a successive goal method for establishing a series of conditional objectives to achieve improved discrimination.
Abstract: Discriminant analysis is an important tool for practical problem solving. Classical statistical applications have been joined recently by applications in the fields of management science and artificial intelligence. In a departure from the methodology of statistics, a series of proposals have appeared for capturing the goals of discriminant analysis in a collection of linear programming formulations. The evolution of these formulations has brought advances that have removed a number of initial shortcomings and deepened our understanding of how these models differ in essential ways from other familiar classes of LP formulations. We will demonstrate, however, that the full power of the LP discriminant analysis models has not been achieved, due to a previously undetected distortion that inhibits the quality of solutions generated. The purpose of this paper is to show how to eliminate this distortion and thereby increase the scope and flexibility of these models. We additionally show how these outcomes open the door to special model manipulations and simplifications, including the use of a successive goal method for establishing a series of conditional objectives to achieve improved discrimination.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a two-stage synthesis procedure that employs one minimum allowable composition difference for all possible rich-lean stream pairs; a mixed-integer linear program is then used to yield minimum-utility-cost networks in which the number of mass-exchanger units is minimized.

Journal ArticleDOI
01 Apr 1990
TL;DR: An efficient algorithm, the compact-dual linear programming (LP) method, is presented to solve the force distribution problem and is applicable to a wide range of systems, constraints, and objective functions and yet is computationally efficient.
Abstract: An efficient algorithm, the compact-dual linear programming (LP) method, is presented to solve the force distribution problem. In this method, the general solution of the linear equality constraints is obtained by transforming the underspecified matrix into row-reduced echelon form; then, the linear equality constraints of the force distribution problem are eliminated. In addition, the duality theory of linear programming is applied. The resulting method is applicable to a wide range of systems, constraints, and objective functions and yet is computationally efficient. The significance of this method is demonstrated by solving the force distribution problem of a grasping system under development at Ohio State called DIGITS. With two fingers grasping an object and hard point contact with friction considered, the CPU time on a VAX-11/785 computer is only 1.47 ms. If four fingers are considered and a linear programming package in the IMSL library is utilized, the CPU time is then less than 45 ms.
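The first step of the method — transforming the equality-constraint matrix into row-reduced echelon form so the equalities can be eliminated — can be sketched with a plain Gauss–Jordan routine. This is illustrative only; a production implementation would use a numerically robust factorization:

```python
def rref(A, eps=1e-9):
    # Gauss-Jordan elimination with partial pivoting -> row-reduced echelon form
    A = [row[:] for row in A]              # work on a copy
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        piv = max(range(r, rows), key=lambda i: abs(A[i][c]))
        if abs(A[piv][c]) < eps:
            continue                       # no usable pivot in this column
        A[r], A[piv] = A[piv], A[r]
        pv = A[r][c]
        A[r] = [x / pv for x in A[r]]      # scale pivot row so the pivot is 1
        for i in range(rows):
            if i != r:                     # zero out the column elsewhere
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == rows:
            break
    return A
```

Once the matrix is in this form, the pivot variables can be expressed in terms of the free variables, which is what allows the equality constraints to be eliminated from the LP.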

Journal ArticleDOI
P. M. Vaidya
TL;DR: The worst-case running time of the algorithm is better than that of Karmarkar's algorithm by a factor of √(m+n).
Abstract: We present an algorithm for linear programming which requires O(((m+n)n^2 + (m+n)^1.5 n)L) arithmetic operations, where m is the number of constraints and n is the number of variables. Each operation is performed to a precision of O(L) bits. L is bounded by the number of bits in the input. The worst-case running time of the algorithm is better than that of Karmarkar's algorithm by a factor of √(m+n).

Journal ArticleDOI
TL;DR: Two different approaches to parameter estimation when the data are corrupted by unknown but bounded errors are reviewed and compared, the first based on a recursive parameter ellipsoidal-bounding algorithm, the other on an orthotopic bounding set, obtained by solving linear programming problems.

Proceedings ArticleDOI
01 May 1990
TL;DR: Two randomized algorithms are presented: one solves linear programs involving m constraints in d variables in expected time O(m), and the other constructs convex hulls of n points in R^d, d > 3, in expected time O(n^⌈d/2⌉).
Abstract: We present two randomized algorithms. One solves linear programs involving m constraints in d variables in expected time O(m). The other constructs convex hulls of n points in R^d, d > 3, in expected time O(n^⌈d/2⌉). In both bounds d is considered to be a constant. In the linear programming algorithm the dependence of the time bound on d is of the form d!. The main virtue of our results lies in the utter simplicity of the algorithms as well as their analyses.
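For context, a tiny LP in two variables can be solved by brute force over all pairs of constraint boundaries. This is emphatically not the randomized incremental algorithm of the paper — its O(m·m²) enumeration is exactly what the expected-O(m) method avoids — but it makes a handy oracle when testing a faster implementation on small instances:

```python
from itertools import combinations

def solve_lp_2d(c, constraints, eps=1e-9):
    # maximize c[0]*x + c[1]*y subject to a[0]*x + a[1]*y <= b for each (a, b),
    # by enumerating the intersection points of all pairs of constraint boundaries
    best = None
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue                       # parallel boundaries, no vertex
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(a[0] * x + a[1] * y <= b + eps for a, b in constraints):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best   # None if the feasible region has no vertex

# maximize x + y over the box 0 <= x <= 2, 0 <= y <= 3
print(solve_lp_2d((1, 1), [((1, 0), 2), ((0, 1), 3), ((-1, 0), 0), ((0, -1), 0)]))
```

This relies on the fact that a bounded feasible LP attains its optimum at a vertex, i.e. at the intersection of two constraint boundaries in the plane.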

Journal ArticleDOI
TL;DR: This work demonstrates for an important class of multistage stochastic models that three techniques — namely nested decomposition, Monte Carlo importance sampling, and parallel computing — can be effectively combined to solve this fundamental problem of large-scale linear programming.
Abstract: Our goal is to demonstrate for an important class of multistage stochastic models that three techniques — namely nested decomposition, Monte Carlo importance sampling, and parallel computing — can be effectively combined to solve this fundamental problem of large-scale linear programming.
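Of the three techniques, Monte Carlo importance sampling is the easiest to isolate: sample scenarios from a proposal distribution and reweight each one by the ratio of target to proposal probability, so that rare but important scenarios can be sampled more often without biasing the estimate. A self-contained discrete sketch — the distributions and names are made up for illustration, not taken from the paper:

```python
import random

def importance_sampling_mean(f, target_p, proposal_p, outcomes, n=100000, seed=1):
    # estimate sum_x target_p[x] * f(x) by drawing x ~ proposal_p and
    # reweighting each sample by the likelihood ratio target_p[x] / proposal_p[x]
    rng = random.Random(seed)
    xs = rng.choices(outcomes, weights=[proposal_p[x] for x in outcomes], k=n)
    return sum(f(x) * target_p[x] / proposal_p[x] for x in xs) / n

# true mean of f(x) = x under the target {0: 0.5, 1: 0.5} is 0.5
est = importance_sampling_mean(lambda x: x, {0: 0.5, 1: 0.5}, {0: 0.9, 1: 0.1}, [0, 1])
print(est)
```

In the stochastic programming setting, a well-chosen proposal concentrates samples on the scenarios that drive the recourse cost, which is what makes the nested decomposition tractable at scale.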

Journal ArticleDOI
TL;DR: In this paper, the problem of stabilizing linear control synthesis in the presence of state and input bounds for systems with additive unknown disturbances is considered, and it is proved that a solution of the problem is achieved by the selection of a polyhedral set S and the computation of a feedback matrix K such that S is positively D-invariant for the closed-loop system.
Abstract: The problem of stabilizing linear control synthesis in the presence of state and input bounds for systems with additive unknown disturbances is considered. The only information required about the disturbances is a finite convex polyhedral bound. Discrete- and continuous-time systems are considered. The property of positive D-invariance of a region is introduced, and it is proved that a solution of the problem is achieved by the selection of a polyhedral set S and the computation of a feedback matrix K such that S is positively D-invariant for the closed-loop system. It is shown that if polyhedral sets are considered, the solution involves simple linear programming algorithms. However, the procedure suggested requires a great amount of computational work offline if the state-space dimension is large, because the feedback matrix K is obtained as a solution of a large set of linear inequalities. All of the vertices of S are required.

Journal ArticleDOI
TL;DR: A new portfolio optimization model using a piecewise linear risk function is proposed, which has several advantages over the classical Markowitz's quadratic risk model and can generate the capital-market line and derive CAPM type equilibrium relations.
Abstract: A new portfolio optimization model using a piecewise linear risk function is proposed. This model is similar to, but has several advantages over, the classical Markowitz quadratic risk model. First, it is much easier to generate an optimal portfolio, since the problem to be solved is a linear program instead of a quadratic program. Second, integer constraints associated with real transactions can be incorporated without making the problem intractable. Third, it enables us to distinguish two distributions with the same first and second moments but different third moments. Fourth, we can generate the capital-market line and derive CAPM type equilibrium relations. We compared the piecewise linear risk model with the quadratic risk model using historical data from the Tokyo Stock Market; the results partly support the claims stated above.
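One widely used piecewise linear risk measure is the mean absolute deviation of portfolio returns, which is what keeps the optimization a linear program; whether this is exactly the paper's risk function is an assumption here. A minimal evaluation of that measure on illustrative data:

```python
def mad_risk(weights, returns):
    # mean absolute deviation of portfolio returns: a piecewise linear
    # risk measure that keeps the optimization a linear program
    # weights: one weight per asset; returns: one row per period
    port = [sum(w * r for w, r in zip(weights, row)) for row in returns]
    mean = sum(port) / len(port)
    return sum(abs(x - mean) for x in port) / len(port)

# two assets, three periods of (made-up) returns
print(mad_risk([0.5, 0.5], [[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0]]))
```

In the full model, each absolute deviation |x − mean| is replaced by a pair of nonnegative deviation variables, turning the risk minimization into a standard LP.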

Proceedings ArticleDOI
12 Mar 1990
TL;DR: Results show that circuits can be sped up by a factor of 2 at a cost of only 10 to 30% extra power, and the method has proven feasible for circuits of up to several thousand cells.
Abstract: In this paper a solution is presented to tune the delay of a circuit composed of cells to a prescribed value, while minimizing power consumption. The tuning is performed by adapting the load drive capabilities of the cells. This optimization problem is mapped onto a linear program, which is then solved by the simplex algorithm. This approach is guaranteed to find the global optimum, and has proven feasible for circuits of up to several thousand cells. The method can be used with any convex delay model. Results show that circuits can be sped up by a factor of 2 at a cost of only 10 to 30% of extra power.

PatentDOI
TL;DR: The system is different in principle from neural networks and statistical classifiers and can deal with very difficult pattern classification problems that arise in speech and image recognition, robotics, medical diagnosis, warfare systems, and others.
Abstract: A method and apparatus (system) is provided for the separation into, and the identification of, classes of events, wherein each event is represented by a signal vector comprising the signals x1, x2, . . . , xj, . . . , xn. The system is different in principle from neural networks and statistical classifiers. The system comprises a plurality of assemblies. The training or adaptation module stores a set of training examples and has a set of procedures (linear programming, clustering, and others) that operate on the training examples and determine a set of transfer functions and threshold values. These transfer functions and threshold values are installed on a recognition module for use in the identification phase. The training module is extremely fast and can deal with very difficult pattern classification problems that arise in speech and image recognition, robotics, medical diagnosis, warfare systems, and others. The system also exploits parallelism in both the learning and recognition phases.

Journal ArticleDOI
TL;DR: It is shown that the Held-Karp 1-trees have a certain monotonicity property: given a particular instance of the symmetric TSP with triangle inequality, the cost of the minimum weighted 1-tree is monotonic with respect to the set of nodes included.

Journal ArticleDOI
TL;DR: A new method for obtaining an initial feasible interior-point solution to a linear program is presented, which avoids the use of a “big-M”, and is shown to work well on a standard set of test problems.
Abstract: A new method for obtaining an initial feasible interior-point solution to a linear program is presented. This method avoids the use of a "big-M", and is shown to work well on a standard set of test problems. Conditions are developed for obtaining a near-optimal solution that is feasible for an associated problem, and details of the computational testing are presented. Other issues related to obtaining and maintaining accurate feasible solutions to linear programs with an interior-point method are discussed. These issues are important to consider when solving problems that have no primal or dual interior-point feasible solutions.

Journal ArticleDOI
TL;DR: This work explicitly introduces the structure of the decision-maker's preferences into the GP model in order to evaluate the impact of deviations from the aspiration levels, and uses the idea of a generalized criterion, as introduced in the Promethee outranking method, to build this structure of preferences.
Abstract: Many algorithms have been developed for multiple-criteria decision-making problems. Goal programming (GP) is one of these algorithms. This model is a special extension of linear programming. Usually, it is not easy for the decision-maker to choose his aspiration levels a priori. Moreover, the incommensurability of the measurement units of the various objectives creates an aggregation problem. However, in the standard GP formulation, the decision-maker is not required to arbitrate among conflicting objectives. To deal with these difficulties, we explicitly introduce the structure of the decision-maker's preferences into the GP model in order to evaluate the impact of deviations from the decision-maker's aspiration levels. Easily and naturally, the idea of a generalized criterion, as introduced in the Promethee outranking method, will be used to build this structure of preferences.

Journal ArticleDOI
TL;DR: A projective algorithm for linear programming is described that shares features with Karmarkar's projective algorithm and its variants and with the path-following methods of Gonzaga, Kojima-Mizuno-Yoshise, Monteiro-Adler, Renegar, Vaidya and Ye.
Abstract: We describe a projective algorithm for linear programming that shares features with Karmarkar's projective algorithm and its variants and with the path-following methods of Gonzaga, Kojima-Mizuno-Yoshise, Monteiro-Adler, Renegar, Vaidya and Ye. It operates in a primal-dual setting, stays close to the central trajectories, and converges in O(√n L) iterations like the latter methods. Here n is the number of variables and L the input size of the problem. However, it is motivated by seeking reductions in a suitable potential function as in projective algorithms, and the approximate centering is an automatic byproduct of our choice of potential function.

Journal ArticleDOI
TL;DR: This work offers an elementary approach to the problem of maximizing the net present value of a project through the manipulation of the times of realization of its key events that maintains the essential simplicity of the problem.

Journal ArticleDOI
TL;DR: This work shows that existing convergence results for this projection algorithm follow from one given by Gabay for a splitting algorithm for finding a zero of the sum of two maximal monotone operators, and obtains a decomposition method that can simultaneously dualize the linear constraints and diagonalize the cost function.
Abstract: A classical method for solving the variational inequality problem is the projection algorithm. We show that existing convergence results for this algorithm follow from one given by Gabay for a splitting algorithm for finding a zero of the sum of two maximal monotone operators. Moreover, we extend the projection algorithm to solve any monotone affine variational inequality problem. When applied to linear complementarity problems, we obtain a matrix splitting algorithm that is simple and, for linear/quadratic programs, massively parallelizable. Unlike existing matrix splitting algorithms, this algorithm converges under no additional assumption on the problem. When applied to generalized linear/quadratic programs, we obtain a decomposition method that, unlike existing decomposition methods, can simultaneously dualize the linear constraints and diagonalize the cost function. This method gives rise to highly parallelizable algorithms for solving a problem of deterministic control in discrete time and for computing the orthogonal projection onto the intersection of convex sets.
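For the linear complementarity problem — find z ≥ 0 with Mz + q ≥ 0 and zᵀ(Mz + q) = 0 — a simple member of the matrix-splitting family is projected Gauss–Seidel. The sketch below is one such splitting, not necessarily the paper's specific scheme, and assumes M is symmetric positive definite with positive diagonal:

```python
def projected_gauss_seidel(M, q, iters=200):
    # matrix-splitting iteration for the LCP: find z >= 0 with
    # M z + q >= 0 and z'(M z + q) = 0; assumes M[i][i] > 0
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual of row i with z[i] held out, then project onto z[i] >= 0
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / M[i][i])
    return z

print(projected_gauss_seidel([[2, 1], [1, 2]], [-2, 1]))  # prints [1.0, 0.0]
```

Each coordinate update is an exact projection step, which is why the whole iteration fits the projection/splitting framework the paper analyzes.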