
Showing papers on "Linear-fractional programming" published in 1969


Book
01 Jan 1969
TL;DR: Book on the theory of optimal control and mathematical programming, covering linear, nonlinear, and quadratic programming, among other topics.
Abstract: Book on the theory of optimal control and mathematical programming, covering linear, nonlinear, and quadratic programming, among other topics.

271 citations


Journal ArticleDOI
TL;DR: In this article, a simple implicit enumeration algorithm fitted with optional imbedded linear programming machinery was implemented and tested extensively on an IBM 7044 and shown to reduce solution times by a factor of about 100.
Abstract: This paper synthesizes the Balasian implicit enumeration approach to integer linear programming with the approach typified by Land and Doig and by Roy, Bertier, and Nghiem. The synthesis results from the use of an imbedded linear program to compute surrogate constraints that are as "strong" as possible in a sense slightly different from that originally used by Glover. A simple implicit enumeration algorithm fitted with optional imbedded linear programming machinery was implemented and tested extensively on an IBM 7044. Use of the imbedded linear program greatly reduced solution times in virtually every case, and seemed to render the tested algorithm superior to the five other implicit enumeration algorithms for which comparable published experience was available. The crucial issue of the sensitivity of solution time to the number of integer variables was given special attention. Sequences were run of set-covering, optimal-routing, and knapsack problems with multiple constraints of varying sizes up to 90 variables. The results suggest that use of the imbedded linear program in the prescribed way may mitigate solution-time dependence on the number of variables from an exponential to a low-order polynomial increase. The dependence appeared to be approximately linear for the first two problem classes, with 90-variable problems typically being solved in about 15 seconds; and approximately cubic for the third class, with 80-variable problems typically solved in less than 2 minutes. In the 35-variable range for all three classes, use of the imbedded linear program reduced solution times by a factor of about 100.
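The embedded-LP idea can be made concrete with a minimal sketch (my own illustration, not the paper's algorithm): implicit enumeration over 0-1 variables for a knapsack-type problem, where the LP-relaxation optimum, computable greedily by value density when all constraint coefficients are positive, serves as the pruning bound. The function names and the toy instance are illustrative.

```python
# Sketch of implicit enumeration with an embedded LP bound, for
# max c'x s.t. a'x <= b, x in {0,1}^n (assumes all a[i] > 0).

def lp_bound(c, a, b, fixed):
    """Upper bound from the LP relaxation with some variables fixed to 0/1.
    For a single knapsack constraint, the LP optimum is the greedy fill
    by density c[i]/a[i], taking at most one fractional item."""
    cap = b - sum(a[i] for i, v in fixed.items() if v == 1)
    if cap < 0:
        return float("-inf")                      # infeasible partial solution
    val = sum(c[i] for i, v in fixed.items() if v == 1)
    free = sorted((i for i in range(len(c)) if i not in fixed),
                  key=lambda i: c[i] / a[i], reverse=True)
    for i in free:
        take = min(1.0, cap / a[i])
        val += take * c[i]
        cap -= take * a[i]
        if cap <= 0:
            break
    return val

def solve(c, a, b):
    """Best objective value found by LP-bounded implicit enumeration."""
    best = [0.0]
    def branch(fixed, k):
        if lp_bound(c, a, b, fixed) <= best[0] + 1e-9:
            return                                # pruned by the embedded LP
        if k == len(c):                           # all variables fixed
            best[0] = sum(c[i] for i, v in fixed.items() if v == 1)
            return
        for v in (1, 0):                          # enumerate implicitly
            fixed[k] = v
            branch(fixed, k + 1)
            del fixed[k]
    branch({}, 0)
    return best[0]
```

For the classic instance c = (60, 100, 120), a = (10, 20, 30), b = 50 the sketch returns 220, pruning most of the 2^3 tree via the LP bound.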

214 citations


ReportDOI
01 Jun 1969
TL;DR: Duality is discussed for linear and nonlinear programs in which some of the variables are arbitrarily constrained, along with algorithms based on the resulting duality constructions. The most important class of such problems is that of mixed-integer (linear and nonlinear) programs.
Abstract: The paper discusses duality for linear and nonlinear programs in which some of the variables are arbitrarily constrained. The most important class of such problems is that of mixed-integer (linear and nonlinear) programs. The paper introduces the duality constructions and discusses algorithms based on them.

65 citations




Journal ArticleDOI
TL;DR: A new algorithm for solving the pure-integer linear programming problem with general integer variables is presented and evaluated, and encouraging computational experience is reported that suggests this algorithm should compare favorably in efficiency with existing algorithms.
Abstract: A new algorithm for solving the pure-integer linear programming problem with general integer variables is presented and evaluated. Roughly speaking, this algorithm proceeds by obtaining tight bounds or conditional bounds on the relevant values of the respective variables, and then identifying a sequence of constantly improving feasible solutions by scanning the relevant solutions. Encouraging computational experience is reported that suggests that this algorithm should compare favorably in efficiency with existing algorithms. Plans for investigating ways of further increasing the efficiency of the algorithm and of extending it to more general problems also are outlined.

30 citations


Journal ArticleDOI
TL;DR: The analysis attempts to define relatively distribution-free tolerance levels and the incidence of nonnormality in chance-constrained linear programming.
Abstract: The approach of chance-constrained linear programming is analyzed here in the context of safety-first principles based on Tchebycheff-type inequalities. The analysis attempts to define relatively distribution-free tolerance levels and the incidence of nonnormality in chance-constrained linear programming.
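The Tchebycheff-type construction can be illustrated with a small sketch (my own, not the paper's derivation): the one-sided Tchebycheff (Cantelli) inequality converts a chance constraint with a random right-hand side into a deterministic constraint that holds regardless of the distribution, using only its mean and standard deviation.

```python
import math

def chebyshev_rhs(mu, sigma, alpha):
    """Distribution-free deterministic right-hand side for the chance
    constraint P(a'x <= b) >= alpha, where b has mean mu and std sigma.
    By the one-sided Tchebycheff (Cantelli) inequality,
    P(b <= mu - k*sigma) <= 1/(1 + k^2), so with k = sqrt(alpha/(1-alpha))
    the deterministic constraint a'x <= mu - k*sigma is sufficient
    for any distribution of b."""
    k = math.sqrt(alpha / (1.0 - alpha))
    return mu - k * sigma
```

For example, with mu = 100, sigma = 10, and tolerance level alpha = 0.5 the safe right-hand side is 90; tightening alpha to 0.9 shrinks it to 70, the price of making no distributional (e.g., normality) assumption.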

28 citations


Book
01 Jan 1969

24 citations



Journal ArticleDOI
TL;DR: A decomposition method for nonlinear programming problems with structured linear constraints is described, and an algorithm for performing post-optimality analysis (ranging and parametric programming) for such structured linear programs is included.

17 citations


Journal ArticleDOI
TL;DR: In this article, it is shown how to obtain the dual variables of a linear program when the problem is solved by the Dantzig-Wolfe decomposition principle rather than by the simplex method.
Abstract: It is well known that the dual variables of a linear program may be obtained easily if the simplex method is used to solve the problem. This note shows how to obtain these dual variables if the problem is solved by using the Dantzig-Wolfe decomposition principle.
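In either solution method, the identity underneath is that the duals are y' = c_B' B^{-1}, where B is the optimal basis matrix and c_B its objective coefficients. A hand-sized sketch (a toy LP of my own, not the note's example): for max 3x + 2y s.t. x + y <= 4, x + 3y <= 6, x, y >= 0, the optimum is (4, 0), with basis {x, slack of constraint 2}.

```python
def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def duals(B, c_B):
    """Dual variables y' = c_B' B^{-1} for a 2x2 optimal basis."""
    Binv = inv2(B)
    return [sum(c_B[i] * Binv[i][j] for i in range(2)) for j in range(2)]

B = [[1, 0],       # columns of the basic variables: x, slack s2
     [1, 1]]
c_B = [3, 0]       # objective coefficients of x and s2
y = duals(B, c_B)  # -> [3.0, 0.0]: constraint 1 is binding with price 3
```

The first constraint carries a shadow price of 3 (it is binding at the optimum), the second a price of 0 (it is slack), consistent with complementary slackness.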

01 Jan 1969
TL;DR: In this paper, the authors apply mathematical control theory to accounting network flows in which the flow rates are constrained by linear inequalities; the optimal control policy is of the "generalized bang-bang" variety, obtained by solving at each instant in time a linear programming problem whose objective-function parameters are determined by the "switching function" derived from the Hamiltonian function.
Abstract: The paper applies mathematical control theory to accounting network flows, where the flow rates are constrained by linear inequalities. The optimal control policy is of the 'generalized bang-bang' variety, which is obtained by solving at each instant in time a linear programming problem whose objective function parameters are determined by the 'switching function' derived from the Hamiltonian function. The interpretation of the adjoint variables of the control problem and the dual evaluators of the linear programming problem demonstrates an interesting interaction of the cross-section phase of the problem, which is characterized by linear programming, and the dynamic phase of the problem, which is characterized by control theory. (Author)


01 Sep 1969
TL;DR: The problem is discussed as one of finding the point where a moving hyperplane last touches a convex set; an approximate procedure based on linear programming methods and an algorithm for solving the problem are given.
Abstract: The paper discusses how to approximate a function g(x) from one side by a linear combination of functions f_1(x), ..., f_n(x) so as to minimize the area between the two. It discusses the problem as one of finding the point where a moving hyperplane last touches a convex set, and gives an approximate procedure based on linear programming methods. It gives details of an algorithm for solving the problem, examples, and applications to Monte Carlo theory (generating random variables in a computer). (Author)
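Discretized on a grid, the one-sided approximation problem becomes an ordinary LP: find h(x) = a + b*x with h(x_j) >= g(x_j), minimizing the integral of h (which differs from the area between h and g by the constant integral of g). The sketch below is my own illustration of that LP, not the report's algorithm; for a 2-variable LP with a bounded optimum we can simply enumerate the vertices formed by pairs of active constraints.

```python
from itertools import combinations

def one_sided_line(xs, g, x0, x1):
    """Best h(x) = a + b*x with h(x_j) >= g(x_j) on grid xs,
    minimizing the integral of h over [x0, x1] (grid must bound the LP)."""
    # constraints: 1*a + x_j*b >= g(x_j); stored as (coef_a, coef_b, rhs)
    cons = [(1.0, xj, g(xj)) for xj in xs]
    obj = lambda a, b: a * (x1 - x0) + b * (x1**2 - x0**2) / 2
    best = None
    for c1, c2 in combinations(cons, 2):          # candidate vertices
        det = c1[0] * c2[1] - c1[1] * c2[0]
        if abs(det) < 1e-12:
            continue
        a = (c1[2] * c2[1] - c1[1] * c2[2]) / det
        b = (c1[0] * c2[2] - c1[2] * c2[0]) / det
        if all(ca * a + cb * b >= rhs - 1e-9 for ca, cb, rhs in cons):
            if best is None or obj(a, b) < best[0]:
                best = (obj(a, b), a, b)
    return best   # (integral of h, a, b)

# Majorize g(x) = x^2 on [0, 1]: the chord h(x) = x is optimal.
area, a, b = one_sided_line([0.0, 0.5, 1.0], lambda x: x * x, 0.0, 1.0)
```

The convexity of x^2 makes the optimal majorizing line the chord through the endpoints, so the sketch returns a = 0, b = 1, with integral 1/2; this is precisely the "hyperplane last touching a convex set" picture, discretized.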

Journal ArticleDOI
TL;DR: The purpose of this paper is to discuss a framework that appears to offer some potential for devising new algorithms, and provides a theory that helps to unify some of the previously advanced methods for solving integer linear programming problems.
Abstract: During the last decade a great number of approaches and schemes have been proposed for solving integer linear programming problems. These range from the implicit enumeration schemes including, for example, the additive algorithm of Balas to schemes that proceed in a simplicial fashion to an optimal solution as does, for example, the All-Integer Method of Gomory. Still other approaches have been heuristic in nature and strive to achieve so-called good (not necessarily optimal) solutions. The purpose of this paper is to discuss a framework that appears to offer some potential for devising new algorithms, and, at the same time, provides a theory that helps to unify some of the previously advanced methods for solving integer linear programming problems. The framework involves the use of bounds on variables and is related to some of the author's earlier work on the Geometric Definition Method of linear programming.

Journal ArticleDOI
TL;DR: An algorithm is presented for determining an approximate solution of a large class of discrete linear programming problems; an upper bound on the profit loss due to the approximation is computed, and a geometrical interpretation of the algorithm is given.
Abstract: An algorithm is presented for determining an approximate solution of a large class of discrete linear programming problems, and an upper bound on the profit loss due to the approximation is computed. A subregion of the original polyhedron of feasible solutions is also defined; such a subregion certainly contains the optimal solution of the discrete linear programming problem considered. A geometrical interpretation of the algorithm is given.

Journal ArticleDOI
TL;DR: The redundancy in the author's aggregation theorem is exposed by Marenco; the author concludes that a proportionally heterogeneous aggregate can be represented by a single linear programming problem based on aggregate resources, as the aggregation theorem states, with no further conditions.
Abstract: The redundancy in my aggregation theorem is clearly exposed by Marenco, and we can conclude that a proportionally heterogeneous aggregate can be represented by a single linear programming problem based on aggregate resources, as the aggregation theorem states, but with no further conditions. However, my statement about the choice of the aggregation weights (the a's) [1, p. 803] is not entirely meaningless, as Marenco asserts. We should merely make it stronger. For a pecuniously proportional aggregate, any weighted average of objective functions will do. Since the objective functions are proportional to each other, they are proportional to their average, and the γ_i's needed for the theorem always exist. This does not mean that any conditions of the theorem have been relaxed, only that one condition is implied by the others. Marenco is clear on this point. But a desire to obtain less restrictive hypotheses that would justify larger aggregates is a kind of theoretical snipe, which in between times has led some investigators off on what seems to me to be a pointless hunt. Miller [6, p. 54], for example, hopes to get around the straight-jacket of proportionality by defining exact aggregation in terms of "qualitatively homogeneous output vectors," vectors whose optimal bases (including slacks) are the same for each firm. Unfortunately, his "theorem" leads to the following procedure: (1) Estimate the z_i, B_i, and c_i matrices for each farm i = 1, ..., n. (2) Formulate and solve each l.p. problem; observe for each problem the optimal basis. (3) Group those farms together that have common optimal bases. (4) Aggregate the resources for these farms and solve the aggregate problem.

Journal ArticleDOI
TL;DR: An iterative scheme in which all the components of the matrix of unknowns are varied at each step is discussed in this paper for an absolute-value linear programming problem from structural design; global convergence is shown for this scheme, and several cases in which it can be applied to more common programming problems are considered.



Journal ArticleDOI
TL;DR: In this article, a linear programming problem subject to uncertainty in the requirements vector is solved deterministically, and sensitivity analysis is performed to determine the effect of the random variation on this solution.
Abstract: A linear programming problem subject to uncertainty in the requirements vector is solved deterministically, then sensitivity analysis is performed to determine the effect of the random variation on this solution. Since there could be an appreciable cost of modifying a solution once implemented, the probability of the random components perturbing a solution is considered. Unlike existing methods of linear programming under uncertainty, this article assumes no knowledge of the distributions of the random variables. Rather, the notion of non-parametric tolerance limits is employed to establish a criterion for changing basic solutions.
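The nonparametric tolerance-limit idea rests on a standard distribution-free fact (the sketch below is my own illustration, not the article's criterion): the largest of n independent observations of a random requirement exceeds its p-quantile with probability 1 - p^n, whatever the underlying distribution. Choosing n large enough therefore bounds, without any distributional assumption, the chance that a future requirement perturbs the implemented basic solution.

```python
import math

def sample_size(p, gamma):
    """Smallest n of past observations such that, with confidence gamma,
    the largest observation covers at least proportion p of the (unknown)
    requirement distribution: solve 1 - p**n >= gamma for integer n."""
    return math.ceil(math.log(1.0 - gamma) / math.log(p))
```

For instance, a 95/95 one-sided tolerance limit needs n = 59 observations, and a 90/90 limit needs n = 22; these are the classic distribution-free sample sizes.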


01 May 1969
TL;DR: In this paper, an algorithm for sorting feasible basic solutions of a linear program is proposed, which proceeds from one basic solution to the next in order of nonincreasing values of the objective function.
Abstract: An algorithm for sorting feasible basic solutions of a linear program is proposed. The algorithm proceeds from one basic solution to the next in order of non-increasing values of the objective function. The use of the proposed procedure is demonstrated through numerical examples and geometrical interpretations. The algorithm solves problems where a basic feasible solution is sought that maximizes a linear objective function and satisfies some prescribed conditions. This category includes the fixed-charge problem, the travelling salesman problem, the mixed 0-1 linear program, etc. (Author)



Journal ArticleDOI
TL;DR: In this article, a heuristic method is developed for linear programming problems with homogeneous costs in the objective function and upper bounds on the variables; the iterative procedure starts with an initial nontrivial feasible solution vector and improves the objective function value at each iteration by improving the value of a variable in the solution vector.
Abstract: A heuristic method is developed for linear programming problems with homogeneous costs in the objective function and upper bounds on the variables. The iterative procedure developed starts with an initial nontrivial feasible solution vector and improves the objective function value at each iteration by improving the value of a variable in the solution vector. No artificial or slack variables are used in the solution of the problem. Computational experience with the method and comparison with standard LP codes are presented. Advantages and disadvantages of the method are also discussed.



01 Oct 1969
TL;DR: The required conditions for explicit solution and one iterative algorithm for solving the general IP are summarized in this paper, and a FORTRAN listing for SUBOPT is given in Appendix A.
Abstract: The term 'interval linear programming' refers to the theory, computational methods, and applications of problems having the form (denoted by IP): maximize c^T x subject to b^- <= Ax <= b^+, where the matrix A and the vectors b^-, b^+, and c are given. IP is an alternative model for linear programming that offers the advantages of (a) explicit solution in some cases and (b) efficient algorithms that save considerable computational effort on applications that may be put in form IP more compactly than in the standard linear programming form. The required conditions for explicit solution and one iterative algorithm, called SUBOPT, for solving the general IP are summarized in this paper. A FORTRAN listing for SUBOPT is given in Appendix A. Some applications for which interval linear programming may save computational effort are also discussed. (Author)
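One case where IP admits the explicit solution the abstract mentions is A square and invertible: substituting y = Ax turns the problem into maximizing (c^T A^{-1}) y over the box b^- <= y <= b^+, so each y_i sits at b^+_i or b^-_i according to the sign of (c^T A^{-1})_i. A 2x2 sketch with hand-rolled linear algebra (my own illustration, not the SUBOPT algorithm):

```python
def solve_interval_lp(A, c, b_lo, b_hi):
    """Explicit solution of max c'x s.t. b_lo <= A x <= b_hi,
    for a 2x2 invertible A: set y = A x, pick each y_i at the bound
    selected by the sign of w = c' A^{-1}, then recover x = A^{-1} y."""
    (a, b), (d, e) = A
    det = a * e - b * d                           # assumed nonzero
    Ainv = [[e / det, -b / det], [-d / det, a / det]]
    w = [c[0] * Ainv[0][j] + c[1] * Ainv[1][j] for j in range(2)]
    y = [b_hi[i] if w[i] > 0 else b_lo[i] for i in range(2)]
    x = [Ainv[i][0] * y[0] + Ainv[i][1] * y[1] for i in range(2)]
    return x, sum(ci * xi for ci, xi in zip(c, x))

# Trivial check with A = I: each constraint bounds one variable directly.
x, obj = solve_interval_lp([[1, 0], [0, 1]], [1, -1], [0, 0], [2, 3])
```

With A = I, c = (1, -1), and bounds 0 <= x_1 <= 2, 0 <= x_2 <= 3, the rule picks x = (2, 0) with objective 2, with no iteration required; this is the computational saving the interval form offers when the structure permits.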