
Showing papers in "Mathematical Programming in 1975"


Journal ArticleDOI
TL;DR: A sufficient “local” optimality condition for (VP) is given and this result is used to derive relations between (VP) and the linear program (VLP) obtained by deleting the integrality restrictions in (VP).
Abstract: We consider a binary integer programming formulation (VP) for the weighted vertex packing problem in a simple graph. A sufficient “local” optimality condition for (VP) is given and this result is used to derive relations between (VP) and the linear program (VLP) obtained by deleting the integrality restrictions in (VP). Our most striking result is that those variables which assume binary values in an optimum (VLP) solution retain the same values in an optimum (VP) solution. This result is of interest because variables are (0, 1/2, 1)-valued in basic feasible solutions to (VLP) and (VLP) can be solved by a “good” algorithm. This relationship and other optimality conditions are incorporated into an implicit enumeration algorithm for solving (VP). Some computational experience is reported.
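The half-integrality of (VLP) basic solutions makes the relaxation easy to probe by brute force on small graphs. A minimal sketch (my own illustration, not the paper's enumeration algorithm), using a triangle with unit weights, where the LP and IP optima differ:

```python
from itertools import product

def vlp_and_vp_optima(n, edges, weights):
    """Brute-force the vertex packing LP relaxation (VLP) and the integer
    program (VP) for a small graph.  Because basic feasible solutions of
    (VLP) are (0, 1/2, 1)-valued, searching the grid {0, 1/2, 1}^n
    suffices to find an optimal (VLP) vertex."""
    best_lp, best_ip = 0.0, 0.0
    for x in product((0.0, 0.5, 1.0), repeat=n):
        if all(x[u] + x[v] <= 1.0 for u, v in edges):
            val = sum(w * xi for w, xi in zip(weights, x))
            best_lp = max(best_lp, val)
            if all(xi in (0.0, 1.0) for xi in x):
                best_ip = max(best_ip, val)
    return best_lp, best_ip

# A triangle with unit weights: the LP optimum is all-1/2 (value 1.5),
# strictly above the integer optimum (value 1).
lp, ip = vlp_and_vp_optima(3, [(0, 1), (1, 2), (0, 2)], [1, 1, 1])
```

On the triangle no variable is binary in the (VLP) optimum, so the paper's persistency result fixes nothing; on graphs where some variables do come out 0 or 1, they can be fixed before enumeration.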

718 citations


Journal ArticleDOI
Guillermo Owen1
TL;DR: An economic production game is treated, in which players pool resources to produce finished goods which can be sold at a given market price, and duality theory of linear programming is used to obtain equilibrium price vectors and to prove the non-emptiness of the core.
Abstract: An economic production game is treated, in which players pool resources to produce finished goods which can be sold at a given market price. The production process is linear, so that the characteristic function can be obtained by solving linear programs.

468 citations


Journal ArticleDOI
TL;DR: A branch and bound algorithm is developed that solves the generalized assignment problem by solving a series of binary knapsack problems to determine the bounds.
Abstract: This paper describes what is termed the “generalized assignment problem”. It is a generalization of the ordinary assignment problem of linear programming in which multiple assignments of tasks to agents are limited by some resource available to the agents. A branch and bound algorithm is developed that solves the generalized assignment problem by solving a series of binary knapsack problems to determine the bounds. Computational results are cited for problems with up to 4000 0–1 variables, and comparisons are made with other algorithms.
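The bounding subproblems are ordinary 0–1 knapsacks, which are cheap to solve exactly. A textbook dynamic-programming sketch with made-up data (the paper's own implementation is not reproduced here):

```python
def knapsack(values, weights, capacity):
    """0-1 knapsack by dynamic programming over capacities.
    Returns the maximum total value; O(n * capacity) time."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # descending: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Three hypothetical items as (value, weight) pairs, capacity 8:
# taking items 2 and 3 (weight 3 + 5 = 8) gives value 7 + 12 = 19.
opt = knapsack([10, 7, 12], [4, 3, 5], 8)
```

In the branch and bound scheme one such knapsack is solved per agent, and the sum of the optima gives an upper bound on the assignment value at a node of the search tree.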

464 citations


Journal ArticleDOI
TL;DR: A necessary and sufficient condition is given for an inequality with coefficients 0 or 1 to define a facet of the knapsack polytope, i.e., of the convex hull of 0–1 points satisfying a given linear inequality.
Abstract: A necessary and sufficient condition is given for an inequality with coefficients 0 or 1 to define a facet of the knapsack polytope, i.e., of the convex hull of 0–1 points satisfying a given linear inequality. A sufficient condition is also established for a larger class of inequalities (with coefficients not restricted to 0 and 1) to define a facet for the same polytope, and a procedure is given for generating all facets in the above two classes. The procedure can be viewed as a way of generating cutting planes for 0–1 programs.
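The best-known members of the 0–1-coefficient class are the minimal cover inequalities: any index set C with sum of a_i over C exceeding b yields the valid inequality sum of x_i over C <= |C| - 1. A brute-force sketch for generating them (illustrative data; only validity and minimality are checked here, not the paper's facet condition):

```python
from itertools import combinations

def minimal_cover_inequalities(a, b):
    """For the knapsack constraint a.x <= b in 0-1 variables, a cover is
    an index set C with sum(a_i, i in C) > b; every feasible 0-1 point
    then satisfies sum(x_i, i in C) <= |C| - 1.  Returns the minimal
    covers (removing any single index destroys the cover) by brute force."""
    n = len(a)
    covers = []
    for k in range(1, n + 1):
        for C in combinations(range(n), k):
            total = sum(a[i] for i in C)
            if total > b and all(total - a[j] <= b for j in C):
                covers.append(C)
    return covers

# For 3*x1 + 4*x2 + 5*x3 + 6*x4 <= 8 the minimal covers are
# {1,4}, {2,3}, {2,4}, {3,4} (1-based), each giving x_i + x_j <= 1.
covers = minimal_cover_inequalities([3, 4, 5, 6], 8)
```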

411 citations


Journal ArticleDOI
TL;DR: Special subclasses of inequalities for which all faces can be generated are demonstrated, including the “matroidal” and “graphic” inequalities, where a count on the number of such inequalities is obtained, and inequalities where all Faces can be derived from lower dimensional faces.
Abstract: Given a linear inequality in 0–1 variables we attempt to obtain the faces of the integer hull of 0–1 feasible solutions. For the given inequality we specify how faces of a variety of lower-dimensional inequalities can be raised to give full-dimensional faces. In terms of a set, called a “strong cover”, we obtain necessary and sufficient conditions for any inequality with 0–1 coefficients to be a face, and characterize different forms that the integer hull must take. In general the suggested procedures fail to produce the complete integer hull. Special subclasses of inequalities for which all faces can be generated are demonstrated. These include the “matroidal” and “graphic” inequalities, where a count on the number of such inequalities is obtained, and inequalities where all faces can be derived from lower-dimensional faces.

327 citations


Journal ArticleDOI
TL;DR: New variable metric algorithms are presented with three distinguishing features: they make no line searches and allow quite arbitrary step directions while maintaining quadratic termination and positive updates for the matrix H, whose inverse is the Hessian matrix of second derivatives for a quadratic approximation to the objective function.
Abstract: New variable metric algorithms are presented with three distinguishing features: (1) They make no line searches and allow quite arbitrary step directions while maintaining quadratic termination and positive updates for the matrix H, whose inverse is the Hessian matrix of second derivatives for a quadratic approximation to the objective function. (2) The updates from H to H+ are optimally conditioned in the sense that they minimize the ratio of the largest to smallest eigenvalue of H^{-1}H+. (3) Instead of working with the matrix H directly, these algorithms represent it as JJ^T, and only store and update the Jacobian matrix J. A theoretical basis is laid for this family of algorithms and an example is given along with encouraging numerical results obtained with several standard test functions.
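For orientation, one familiar member of the variable metric family is the BFGS-form inverse update, shown here in a hand-rolled 2-D version (my choice for illustration; the paper's optimally conditioned, factored JJ^T updates are not reproduced). Whatever the member, a rank-2 update built from a step s and gradient change y must satisfy the secant equation H+ y = s:

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def bfgs_inverse_update(H, s, y):
    """Rank-2 (BFGS-form) update of an inverse-Hessian approximation H:
    H+ = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T, rho = 1/(y^T s),
    expanded entrywise for symmetric H."""
    n = len(s)
    rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
    Hy = mat_vec(H, y)
    yHy = sum(yi * hi for yi, hi in zip(y, Hy))
    return [[H[i][j]
             - rho * (s[i] * Hy[j] + Hy[i] * s[j])
             + rho * (1.0 + rho * yHy) * s[i] * s[j]
             for j in range(n)] for i in range(n)]

# Step s and gradient change y (illustrative numbers); start from H = I.
H1 = bfgs_inverse_update([[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0], [3.0, 1.0])
```

The secant property is what makes H+ behave like the inverse Hessian of a quadratic model along the step just taken.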

281 citations


Journal ArticleDOI
TL;DR: An algorithm to detect structure is described and this algorithm identifies sets of variables and the corresponding constraint relationships so that the total number of GUB-type constraints is maximized.
Abstract: Large practical linear and integer programming problems are not always presented in a form which is the most compact representation of the problem. Such problems are likely to possess generalized upper bound (GUB) and related structures which may be exploited by algorithms designed to solve them efficiently. The steps of an algorithm which by repeated application reduces the rows, columns, and bounds in a problem matrix and leads to the freeing of some variables are first presented. The ‘unbounded solution’ and ‘no feasible solution’ conditions may also be detected by this. Computational results of applying this algorithm are presented and discussed. An algorithm to detect structure is then described. This algorithm identifies sets of variables and the corresponding constraint relationships so that the total number of GUB-type constraints is maximized. Comparisons of computational results of applying different heuristics in this algorithm are presented and discussed.

252 citations


Journal ArticleDOI
TL;DR: This note shows how a dynamic programming solution to this problem generalizes a number of previously published algorithms in diverse metric spaces, each of which has direct and significant applications to biological systematics or evolutionary theory.
Abstract: Given a tree each of whose terminal vertices is associated with a given point in a compact metric space, the problem is to optimally associate a point in this space to each nonterminal vertex of the tree. The optimality criterion is the minimization of the sum of the lengths, in the metric space, over all edges of the tree. This note shows how a dynamic programming solution to this problem generalizes a number of previously published algorithms in diverse metric spaces, each of which has direct and significant applications to biological systematics or evolutionary theory.
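The dynamic program is easy to state for a concrete finite metric space. A sketch (a hypothetical instance of my own, with the metric |a - b| on an integer grid standing in for the compact metric space of the paper):

```python
def tree_min_length(children, leaf_point, root, points):
    """Dynamic program over a rooted tree: cost[v][p] is the minimum total
    edge length in the subtree of v when v is placed at point p; internal
    nodes take the cheapest child placements plus connecting edge lengths."""
    def solve(v):
        if v in leaf_point:                       # terminal vertex: fixed point
            return {p: (0 if p == leaf_point[v] else float("inf"))
                    for p in points}
        tables = [solve(c) for c in children[v]]
        return {p: sum(min(t[q] + abs(p - q) for q in points) for t in tables)
                for p in points}
    return min(solve(root).values())

# Root r with children u, v; u has leaves at 0 and 2, v has leaves at 10 and 12.
children = {"r": ["u", "v"], "u": ["a", "b"], "v": ["c", "d"]}
leaves = {"a": 0, "b": 2, "c": 10, "d": 12}
total = tree_min_length(children, leaves, "r", range(13))
```

Here the optimum places u near its leaf pair, v near its pair, and r anywhere between them, for a total length of 12; in phylogenetics the nonterminal placements are the inferred ancestral states.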

213 citations


Journal ArticleDOI
TL;DR: A special class of prime implicants is described for regular functions and it is shown that for any P in this class, F(P) consists of one facet of H, and this facet has 0–1 coefficients, and every nontrivial facet of H with 0–1 coefficients is obtained from this class.
Abstract: The role of 0–1 programming problems having monotone or regular feasible sets was pointed out in [6]. The solution sets of covering and of knapsack problems are examples of monotone and of regular sets respectively. Some connections are established between prime implicants of a monotone or a regular Boolean function β on the one hand, and facets of the convex hull H of the zeros of β on the other. In particular (Corollary 2) a necessary and sufficient condition is given for a constraint of a covering problem to be a facet of the corresponding integer polyhedron. For any prime implicant P of β, a nonempty family F(P) of facets of H is constructed. Proposition 17 gives easy-to-determine sharp upper bounds for the coefficients of these facets when β is regular. A special class of prime implicants is described for regular functions and it is shown that for any P in this class, F(P) consists of one facet of H, and this facet has 0–1 coefficients. Every nontrivial facet of H with 0–1 coefficients is obtained from this class.

206 citations


Journal ArticleDOI
TL;DR: Three matroid intersection algorithms are presented and provide constructive proofs of various important theorems of matroid theory, such as the Matroid Intersection Duality Theorem and Edmonds' Matroid Polyhedral Intersection Theorem.
Abstract: Let M1 = (E, ℐ1), M2 = (E, ℐ2) be two matroids over the same set of elements E, and with families of independent sets ℐ1, ℐ2. A set I ∈ ℐ1 ∩ ℐ2 is said to be an intersection of the matroids M1, M2. An important problem of combinatorial optimization is that of finding an optimal intersection of M1, M2. In this paper three matroid intersection algorithms are presented. One algorithm computes an intersection containing a maximum number of elements. The other two algorithms compute intersections which are of maximum total weight, for a given weighting of the elements in E. One of these algorithms is “primal-dual”, being based on duality considerations of linear programming, and the other is “primal”. All three algorithms are based on the computation of an “augmenting sequence” of elements, a generalization of the notion of an augmenting path from network flow theory and matching theory. The running time of each algorithm is polynomial in m, the number of elements in E, and in the running times of subroutines for independence testing in M1, M2. The algorithms provide constructive proofs of various important theorems of matroid theory, such as the Matroid Intersection Duality Theorem and Edmonds' Matroid Polyhedral Intersection Theorem.
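For intuition, the optimum that the augmenting-sequence algorithms compute can be checked by brute force on tiny instances. A sketch using bipartite matching viewed as the intersection of two partition matroids (this exponential check is emphatically not the paper's polynomial algorithm):

```python
from itertools import combinations

def max_common_independent(ground, indep1, indep2):
    """Maximum-cardinality common independent set of two matroids given
    by independence oracles, found by brute force from the largest
    candidate size downward."""
    for k in range(len(ground), -1, -1):
        for S in combinations(ground, k):
            if indep1(S) and indep2(S):
                return set(S)

# Bipartite matching as matroid intersection: ground set = edges, with two
# partition matroids (each left / each right vertex used at most once).
edges = [("l1", "r1"), ("l1", "r2"), ("l2", "r2"), ("l3", "r2")]
left_ok = lambda S: len({e[0] for e in S}) == len(S)
right_ok = lambda S: len({e[1] for e in S}) == len(S)
best = max_common_independent(edges, left_ok, right_ok)
```

With only two right-hand vertices the maximum matching, and hence the maximum intersection, has two elements; the paper's algorithms reach it through augmenting sequences instead of enumeration.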

180 citations


Journal ArticleDOI
TL;DR: This paper identifies necessary and sufficient conditions for a penalty method to yield an optimal solution or a Lagrange multiplier of a convex programming problem by means of a single unconstrained minimization.
Abstract: This paper identifies necessary and sufficient conditions for a penalty method to yield an optimal solution or a Lagrange multiplier of a convex programming problem by means of a single unconstrained minimization. The conditions are given in terms of properties of the objective and constraint functions of the problem as well as the penalty function adopted. It is shown among other things that all linear programs with finite optimal value satisfy such conditions when the penalty function is quadratic.

Journal ArticleDOI
TL;DR: This paper gives a precise definition for sets of vectors, called test sets, which will include those sets described above arising in the simplex and flow algorithms, and proves that any “improvement process” which searches through a test set at each stage converges to an optimal point in a finite number of steps.
Abstract: In this paper we consider the question: how do the flow algorithm and the simplex algorithm work? The usual answer has two parts: first a description of the “improvement process”, and second a proof that if no further improvement can be made by this process, an optimal vector has been found. This second part is usually based on duality, a technique not available in the case of an arbitrary integer programming problem. We wish to give a general description of “improvement processes” which will include both the simplex and flow algorithms, which will be applicable to arbitrary integer programming problems, and which will in themselves assure convergence to a solution. Geometrically both the simplex algorithm and the flow algorithm may be described as follows. At the i-th stage, we have a vertex (or feasible flow) to which is associated a finite set of vectors, namely the set of edges leaving that vertex (or the set of unsaturated paths). The algorithm proceeds by searching among this special set for a vector along which the gain function is increasing. If such a vector is found, the algorithm continues by moving along this vector as far as is possible while still remaining feasible. The search is then repeated at this new feasible point. We give a precise definition for sets of vectors, called test sets, which will include those sets described above arising in the simplex and flow algorithms. We will then prove that any “improvement process” which searches through a test set at each stage converges to an optimal point in a finite number of steps. We also construct specific test sets which are the natural extensions of the test sets employed by the flow algorithm to arbitrary linear and integer linear programming problems.

Journal ArticleDOI
TL;DR: An algorithm is developed for solving the convex programming problem by constructing a cutting plane through the center of a polyhedral approximation to the optimum, which generates a sequence of primal feasible points whose limit points satisfy the Kuhn–Tucker conditions of the problem.
Abstract: An algorithm is developed for solving the convex programming problem which iteratively proceeds to the optimum by constructing a cutting plane through the center of a polyhedral approximation to the optimum. This generates a sequence of primal feasible points whose limit points satisfy the Kuhn–Tucker conditions of the problem. Additionally, we present a simple, effective rule for dropping prior cuts, an easily calculated bound on the objective function, and a rate of convergence.

Journal ArticleDOI
TL;DR: This paper presents a new algorithm for the solution of multi-state dynamic programming problems, referred to as the Progressive Optimality Algorithm, a method of successive approximation using a general two-stage solution that is computationally efficient and has minimal storage requirements.
Abstract: This paper presents a new algorithm for the solution of multi-state dynamic programming problems, referred to as the Progressive Optimality Algorithm. It is a method of successive approximation using a general two-stage solution. The algorithm is computationally efficient and has minimal storage requirements. A description of the algorithm is given including a proof of convergence. Performance characteristics for a trial problem are summarized.
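The two-stage idea can be illustrated on a toy trajectory problem of my own choosing: holding the neighbours of stage t fixed, re-solve for the state at t, and sweep until convergence.

```python
def progressive_optimality(x, sweeps):
    """Successive two-stage re-optimization in the spirit of the
    Progressive Optimality Algorithm: for each interior stage t, re-solve
    for x[t] with its neighbours held fixed.  For the toy stage cost
    (x[t+1] - x[t])**2 with fixed endpoints, the two-stage optimum is the
    midpoint of the neighbours.  (Toy problem, not from the paper.)"""
    for _ in range(sweeps):
        for t in range(1, len(x) - 1):
            x[t] = 0.5 * (x[t - 1] + x[t + 1])
    return x

# Endpoints fixed at 0 and 1; the optimal trajectory is the straight line
# 0, 0.25, 0.5, 0.75, 1, which the sweeps approach from any start.
traj = progressive_optimality([0.0, 0.0, 0.0, 0.0, 1.0], 200)
```

Only two stages are active at a time, which is why the method needs so little storage compared with full state-space dynamic programming.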

Journal ArticleDOI
TL;DR: An algorithm is presented for estimating the density distribution in a cross section of an object from X-ray data, which in practice is unavoidably noisy, and a finite convergence result is proved.
Abstract: An algorithm is presented for estimating the density distribution in a cross section of an object from X-ray data, which in practice is unavoidably noisy. The data give rise to a large sparse system of inconsistent equations, not untypically 10^5 equations with 10^4 unknowns, with only about 1% of the coefficients non-zero. Using the physical interpretation of the equations, each equality can in principle be replaced by a pair of inequalities, giving us the limits within which we believe the sum must lie. An algorithm is proposed for solving this set of inequalities. The algorithm is basically a relaxation method. A finite convergence result is proved. In spite of the large size of the system, in the application area of interest practical solution on a computer is possible because of the simple geometry of the problem and the redundancy of equations obtained from nearby X-rays. The algorithm has been implemented, and is demonstrated by actual reconstructions.
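The core relaxation step is easy to sketch: cycle through the inequalities and, whenever one is violated, move the iterate back toward its half-space. A Kaczmarz/ART-style sweep on illustrative data (not the paper's X-ray system):

```python
def relax_inequalities(rows, bounds, x, passes, lam=1.0):
    """Relaxation method for a system of linear inequalities a_i . x <= b_i:
    cycle through the rows and project the current point onto the
    bounding hyperplane of any violated row (lam = 1 gives the exact
    orthogonal projection)."""
    for _ in range(passes):
        for a, b in zip(rows, bounds):
            r = sum(ai * xi for ai, xi in zip(a, x)) - b
            if r > 0:                          # row violated: step back
                nrm2 = sum(ai * ai for ai in a)
                x = [xi - lam * r * ai / nrm2 for ai, xi in zip(a, x)]
    return x

# Pairs of inequalities encoding x1 = 1 and x2 = 2, started infeasibly.
sol = relax_inequalities([[1, 0], [-1, 0], [0, 1], [0, -1]],
                         [1, -1, 2, -2], [5.0, -3.0], 10)
```

Replacing each noisy equality by a pair of inequalities, as the abstract describes, widens each such band so the method need not chase inconsistent exact equations.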



Journal ArticleDOI
TL;DR: An algorithm is developed and proved that finds the exact maximum of certain nonlinear functions on polytopes by performing a finite number of logical and arithmetic operations, selectively decomposing the feasible set into simplices of varying dimensions.
Abstract: This paper develops and proves an algorithm that finds the exact maximum of certain nonlinear functions on polytopes by performing a finite number of logical and arithmetic operations. Permissible objective functions need to be pseudoconcave and allow the closed-form solution of sets of equations ∂f(Dy + x̂^k)/∂y = 0, which are first-order conditions associated with the unconstrained, but affinely transformed objective function. Examples are pseudoconcave quadratics and especially the homogeneous function cx + m(xVx)^{1/2}, m < 0, V positive definite, for which so far no finite algorithm existed.

Journal ArticleDOI
TL;DR: A method of solving the 0–1 knapsack problem which derives from the “shrinking boundary method” is described and compared to other methods through extensive computational experimentation.
Abstract: A method of solving the 0–1 knapsack problem which derives from the “shrinking boundary method” is described and compared to other methods through extensive computational experimentation.

Journal ArticleDOI
Masakazu Kojima1
TL;DR: Eaves's basic theorem of complementarity is generalized, and the generalized theorem is used as a unified framework for several typical existence theorems.
Abstract: The nonlinear complementarity problem is the problem of finding a point x in the n-dimensional Euclidean space, R^n, such that x ⩾ 0, f(x) ⩾ 0 and ⟨x, f(x)⟩ = 0, where f is a nonlinear continuous function from R^n into itself. Many existence theorems for the problem have been established in various ways. The aim of the present paper is to treat them in a unified manner. Eaves's basic theorem of complementarity is generalized, and the generalized theorem is used as a unified framework for several typical existence theorems.

Journal ArticleDOI
TL;DR: This paper presents a globally convergent multiplier method that utilizes an explicit formula for the multiplier and automatically calculates a value for the penalty coefficient which, under certain assumptions, leads to global convergence.
Abstract: This paper presents a globally convergent multiplier method which utilizes an explicit formula for the multiplier. The algorithm solves finite dimensional optimization problems with equality constraints. A unique feature of the algorithm is that it automatically calculates a value for the penalty coefficient, which, under certain assumptions, leads to global convergence.
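A generic method-of-multipliers iteration on a one-dimensional toy problem shows the shape of such algorithms (the toy problem and the closed-form inner minimization are mine; the paper's explicit multiplier formula and automatic penalty rule are not reproduced):

```python
def multiplier_method(c=10.0, iters=20):
    """Method-of-multipliers sketch for: min x^2 subject to x - 1 = 0.
    Each outer step minimizes the augmented Lagrangian
        L(x) = x^2 + lam*(x - 1) + (c/2)*(x - 1)^2
    exactly (closed form for this quadratic toy), then updates the
    multiplier with the standard rule lam <- lam + c*(constraint value)."""
    lam = 0.0
    for _ in range(iters):
        x = (c - lam) / (2.0 + c)      # argmin of the augmented Lagrangian
        lam += c * (x - 1.0)           # multiplier update
    return x, lam

x, lam = multiplier_method()
```

The iterates contract toward the solution x = 1 with multiplier lam = -2, the value at which the Lagrangian stationarity condition 2x + lam = 0 holds at the constrained optimum.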

Journal ArticleDOI
TL;DR: Goldfarb's algorithm can be regarded as an extension of linear programming to allow a non-linear objective function.
Abstract: Goldfarb's algorithm, which is one of the most successful methods for minimizing a function of several variables subject to linear constraints, uses a single matrix to keep second derivative information and to ensure that search directions satisfy any active constraints. In the original version of the algorithm this matrix is full, but by making a change of variables so that the active constraints become bounds on vector components, this matrix is transformed so that the dimension of its non-zero part is only the number of variables less the number of active constraints. It is shown how this transformation may be used to give a version of the algorithm that usually provides a good saving in the amount of computation over the original version. Also it allows the use of sparse matrix techniques to take advantage of zeros in the matrix of linear constraints. Thus the method described can be regarded as an extension of linear programming to allow a non-linear objective function.

Journal ArticleDOI
Robert Mifflin1
TL;DR: An algorithm for unconstrained minimization of a function of n variables that does not require the evaluation of partial derivatives is presented and its convergence is superlinear for a twice continuously differentiable strongly convex function.
Abstract: An algorithm for unconstrained minimization of a function of n variables that does not require the evaluation of partial derivatives is presented. It is a second-order extension of the method of local variations and it does not require any exact one-variable minimizations. This method retains the local variations property of accumulation points being stationary for a continuously differentiable function. Furthermore, because this extension makes the algorithm an approximate Newton method, its convergence is superlinear for a twice continuously differentiable strongly convex function.

Journal ArticleDOI
TL;DR: Polyhedral annexation is a new approach for generating all valid inequalities in mixed integer and combinatorial programming, including the facets of the convex hull of feasible integer solutions, without resorting to specialized notions of group theory, convex analysis or projective geometry.
Abstract: Polyhedral annexation is a new approach for generating all valid inequalities in mixed integer and combinatorial programming. These include the facets of the convex hull of feasible integer solutions. The approach is capable of exploiting the characteristics of the feasible solution space in regions both “adjacent to” and “distant from” the linear programming vertex without resorting to specialized notions of group theory, convex analysis or projective geometry. The approach also provides new ways for exploiting the “branching inequalities” of branch and bound.

Journal ArticleDOI
TL;DR: A local convergence analysis of Broyden's class of rank-2 algorithms for solving unconstrained minimization problems, assuming that the step-size α_i in each iteration x_{i+1} = x_i − α_i H_i ∇h(x_i) is determined by approximate line searches only.
Abstract: This paper presents a local convergence analysis of Broyden's class of rank-2 algorithms for solving unconstrained minimization problems, h(x̄) = min h(x), h ∈ C^1(R^n), assuming that the step-size α_i in each iteration x_{i+1} = x_i − α_i H_i ∇h(x_i) is determined by approximate line searches only. Many of these methods, including the ones most often used in practice, converge locally at least with R-order τ ≥ 2^{1/n}.

Journal ArticleDOI
P. Huard1
TL;DR: Two general nonlinear optimization algorithms generating a sequence of feasible solutions based on the concept of point-to-set mapping continuity are described and the results unify these apparently diverse approaches.
Abstract: Two general nonlinear optimization algorithms generating a sequence of feasible solutions are described. The justifications for their convergence are based on the concept of point-to-set mapping continuity. These two algorithms cover many conventional feasible solution methods. The convergence results unify these apparently diverse approaches.

Journal ArticleDOI
TL;DR: A general formulation of the theorem of the alternative is presented and its consequences are exploited.
Abstract: Consequences of a general formulation of the theorem of the alternative are exploited.

Journal ArticleDOI
TL;DR: A family of test-problems is described which is designed to investigate the relative efficiencies of general optimisation algorithms and specialised algorithms for the solution of nonlinear sums-of-squares problems, and it is shown that the best choice of algorithms is critically affected by the value of one parameter in the test functions.
Abstract: A family of test-problems is described which is designed to investigate the relative efficiencies of general optimisation algorithms and specialised algorithms for the solution of nonlinear sums-of-squares problems. Five algorithms are tested on three members of the family, and it is shown that the best choice of algorithms is critically affected by the value of one parameter in the test functions.

Journal ArticleDOI
Robert Mifflin1
TL;DR: Convergence bounds are given for a general class of nonlinear programming algorithms such that, under certain specified assumptions, accumulation points of the multiplier and solution sequences satisfy the Fritz John or the Kuhn–Tucker optimality conditions.
Abstract: Bounds on convergence are given for a general class of nonlinear programming algorithms. Methods in this class generate at each iteration both constraint multipliers and approximate solutions such that, under certain specified assumptions, accumulation points of the multiplier and solution sequences satisfy the Fritz John or the Kuhn–Tucker optimality conditions. Under stronger assumptions, convergence bounds are derived for the sequences of approximate solution, multiplier and objective function values. The theory is applied to an interior–exterior penalty function algorithm modified to allow for inexact subproblem solutions. An entirely new convergence bound in terms of the square root of the penalty controlling parameter is given for this algorithm.

Journal ArticleDOI
TL;DR: This work considers the linear programming formulation of the asymmetric travelling salesman problem and states several new inequalities which yield a sharper characterization in terms of linear inequalities of the travelling salesman polytope, i.e., the convex hull of tours.
Abstract: We consider the linear programming formulation of the asymmetric travelling salesman problem. Several new inequalities are stated which yield a sharper characterization in terms of linear inequalities of the travelling salesman polytope, i.e., the convex hull of tours.