# Showing papers in "Mathematical Programming in 1986"

••

TL;DR: An outer-approximation algorithm is presented for solving mixed-integer nonlinear programming problems of a particular class and a theoretical comparison with generalized Benders decomposition is presented on the lower bounds predicted by the relaxed master programs.

Abstract: An outer-approximation algorithm is presented for solving mixed-integer nonlinear programming problems of a particular class. Linearity of the integer (or discrete) variables, and convexity of the nonlinear functions involving continuous variables are the main features in the underlying mathematical structure. Based on principles of decomposition, outer-approximation and relaxation, the proposed algorithm effectively exploits the structure of the problems, and consists of solving an alternating finite sequence of nonlinear programming subproblems and relaxed versions of a mixed-integer linear master program. Convergence and optimality properties of the algorithm are presented, as well as a general discussion on its implementation. Numerical results are reported for several example problems to illustrate the potential of the proposed algorithm for programs in the class addressed in this paper. Finally, a theoretical comparison with generalized Benders decomposition is presented on the lower bounds predicted by the relaxed master programs.

1,157 citations

••

TL;DR: In this paper, it is shown that the annealing algorithm converges with probability arbitrarily close to 1, but that there are cases where convergence takes exponentially long, so that it is no better than a deterministic method.

Abstract: The annealing algorithm is a stochastic optimization method which has attracted attention because of its success with certain difficult problems, including NP-hard combinatorial problems such as the travelling salesman, Steiner trees and others. There is an appealing physical analogy for its operation, but a more formal model seems desirable. In this paper we present such a model and prove that the algorithm converges with probability arbitrarily close to 1. We also show that there are cases where convergence takes exponentially long—that is, it is no better than a deterministic method. We study how the convergence rate is affected by the form of the problem. Finally we describe a version of the algorithm that terminates in polynomial time and allows a good deal of ‘practical’ confidence in the solution.
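The acceptance rule at the heart of the annealing algorithm can be sketched in a few lines; the toy objective, neighbourhood move, and geometric cooling schedule below are illustrative assumptions, not details from the paper:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    """Generic simulated annealing: accept an uphill move with
    probability exp(-delta / T), cooling T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy instance: minimize a multimodal function over integers in [-50, 50].
f = lambda k: (k - 7) ** 2 + 10 * math.cos(k)
step = lambda k, rng: max(-50, min(50, k + rng.choice([-3, -1, 1, 3])))
sol, val = anneal(f, step, x0=-40)
```

The exponential acceptance probability is what the convergence analysis formalizes; the cooling rate is the knob that trades solution quality against running time.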

593 citations

••

TL;DR: This work reviews classical barrier-function methods for nonlinear programming based on applying a logarithmic transformation to inequality constraints and shows a “projected Newton barrier” method to be equivalent to Karmarkar's projective method for a particular choice of the barrier parameter.

Abstract: Interest in linear programming has been intensified recently by Karmarkar's publication in 1984 of an algorithm that is claimed to be much faster than the simplex method for practical problems. We review classical barrier-function methods for nonlinear programming based on applying a logarithmic transformation to inequality constraints. For the special case of linear programming, the transformed problem can be solved by a "projected Newton barrier" method. This method is shown to be equivalent to Karmarkar's projective method for a particular choice of the barrier parameter. We then present details of a specific barrier algorithm and its practical implementation. Numerical results are given for several non-trivial test problems, and the implications for future developments in linear programming are discussed.
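As a minimal illustration of the logarithmic-barrier idea (a toy one-dimensional problem, not the projected Newton barrier method itself): for minimizing x subject to x ≥ 1, the barrier objective x − μ log(x − 1) has exact minimizer 1 + μ, which approaches the constrained optimum as μ shrinks.

```python
import math

def newton_barrier(mu, x=2.0, iters=50):
    """Damped Newton on the barrier objective phi(x) = x - mu*log(x - 1)
    for the toy problem: minimize x subject to x >= 1."""
    for _ in range(iters):
        g = 1.0 - mu / (x - 1.0)        # phi'(x)
        h = mu / (x - 1.0) ** 2         # phi''(x)
        step = g / h
        while x - step <= 1.0:          # damp to stay strictly feasible
            step *= 0.5
        x -= step
    return x

# Barrier minimizers 1 + mu trace a "central path" toward x* = 1.
path = [newton_barrier(mu) for mu in (1.0, 0.1, 0.01)]
```

Following these minimizers as the barrier parameter decreases is the mechanism the barrier methods reviewed here share.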

423 citations

••

[...]

TL;DR: It is shown that inequalities associated with chordless cycles define facets of this polytope; moreover, for these inequalities a polynomial algorithm to solve the separation problem is presented.

Abstract: The cut polytope P_C(G) of a graph G = (V, E) is the convex hull of the incidence vectors of all edge sets of cuts of G. We show some classes of facet-defining inequalities of P_C(G). We describe three methods with which new facet-defining inequalities of P_C(G) can be constructed from known ones. In particular, we show that inequalities associated with chordless cycles define facets of this polytope; moreover, for these inequalities a polynomial algorithm to solve the separation problem is presented. We characterize the facet-defining inequalities of P_C(G) if G is not contractible to K_5. We give a simple characterization of adjacency in P_C(G) and prove that for complete graphs this polytope has diameter one and that P_C(G) has the Hirsch property. A relationship between P_C(G) and the convex hull of incidence vectors of balancing edge sets of a signed graph is studied.

392 citations

••

IBM

TL;DR: The algorithm described here is a variation on Karmarkar's algorithm for linear programming that applies to the standard form of a linear programming problem and produces a monotone decreasing sequence of values of the objective function.

Abstract: The algorithm described here is a variation on Karmarkar's algorithm for linear programming. It has several advantages over Karmarkar's original algorithm. In the first place, it applies to the standard form of a linear programming problem and produces a monotone decreasing sequence of values of the objective function. The minimum value of the objective function does not have to be known in advance. Secondly, in the absence of degeneracy, the algorithm converges to an optimal basic feasible solution with the nonbasic variables converging monotonically to zero. This makes it possible to identify an optimal basis before the algorithm converges.
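One iteration of this kind of Karmarkar variant is commonly presented in affine-scaling form for the standard-form problem min c·x subject to Ax = b, x > 0; the sketch below follows that presentation, with the damping factor and step rule as my own assumptions rather than the authors' implementation:

```python
import numpy as np

def scaled_step(A, c, x, alpha=0.9):
    """One affine-scaling-style iteration: rescale by X = diag(x),
    project the scaled cost onto the null space of A X, and move a
    fraction alpha of the way to the positivity boundary."""
    X = np.diag(x)
    B = A @ X
    y = np.linalg.solve(B @ B.T, B @ (X @ c))   # dual estimate
    z = X @ (c - A.T @ y)                       # projected scaled cost
    d = -X @ z                                  # direction; A @ d = 0
    if np.all(z <= 0):
        return x + d                            # no blocking constraint
    t = alpha / np.max(z)                       # keep x + t*d > 0
    return x + t * d

# Toy LP: minimize x1 subject to x1 + x2 = 1, x >= 0; optimum (0, 1).
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 0.0])
x = np.array([0.5, 0.5])
for _ in range(30):
    x = scaled_step(A, c, x)
```

Since A d = 0 the iterates stay feasible, and c·d = −‖z‖² makes the objective sequence monotone decreasing, the property the abstract emphasizes.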

303 citations

••

TL;DR: A new decomposition method that may start from an arbitrary point and simultaneously processes objective and feasibility cuts for each component and is finitely convergent without any nondegeneracy assumptions is proposed.

Abstract: A problem of minimizing a sum of many convex piecewise-linear functions is considered. In view of applications to two-stage linear programming, where objectives are marginal values of lower level problems, it is assumed that domains of objectives may be proper polyhedral subsets of the space of decision variables and are defined by piecewise-linear induced feasibility constraints. We propose a new decomposition method that may start from an arbitrary point and simultaneously processes objective and feasibility cuts for each component. The master program is augmented with a quadratic regularizing term and comprises an a priori bounded number of cuts. The method goes through nonbasic points, in general, and is finitely convergent without any nondegeneracy assumptions. Next, we present a special technique for solving the regularized master problem that uses an active set strategy and QR factorization and exploits the structure of the master. Finally, some numerical evidence is given.

295 citations

••

TL;DR: The network design problem with continuous decision variables representing link capacities can be cast into a framework of multilevel programming; a formal description of the problem is given and various suboptimal procedures to solve it are developed.

Abstract: Recently much attention has been focused on multilevel programming, a branch of mathematical programming that can be viewed either as a generalization of min-max problems or as a particular class of Stackelberg games with continuous variables. The network design problem with continuous decision variables representing link capacities can be cast into such a framework. We first give a formal description of the problem and then develop various suboptimal procedures to solve it. Worst-case behaviour results concerning the heuristics, as well as numerical results on a small network, are presented.

206 citations

••

TL;DR: A detailed proof is given, under slightly weaker conditions on the objective function, that a modified Frank-Wolfe algorithm based on Wolfe's ‘away step’ strategy can achieve geometric convergence, provided a strict complementarity assumption holds.

Abstract: We give a detailed proof, under slightly weaker conditions on the objective function, that a modified Frank-Wolfe algorithm based on Wolfe's ‘away step’ strategy can achieve geometric convergence, provided a strict complementarity assumption holds.
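For context, here is a sketch of the classic (unmodified) Frank-Wolfe method on the probability simplex; the ‘away step’ modification the paper analyzes is not implemented, and the toy objective is an assumption for illustration:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, steps=200):
    """Classic Frank-Wolfe over the probability simplex. The linear
    minimization oracle returns a vertex: the coordinate with the
    most negative gradient component."""
    x = np.array(x0, dtype=float)
    for k in range(steps):
        g = grad(x)
        i = int(np.argmin(g))           # vertex e_i minimizes g @ s
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (k + 2.0)         # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy: minimize ||x - c||^2 over the simplex with c in the simplex,
# so the optimum is x* = c.
c = np.array([0.2, 0.5, 0.3])
g = lambda x: 2 * (x - c)
x = frank_wolfe_simplex(g, [1.0, 0.0, 0.0])
```

The plain method only converges at a sublinear O(1/k) rate here; Wolfe's away steps, which the paper analyzes, are what recover geometric convergence.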

191 citations

••

Kyoto University

TL;DR: Each iteration of the proposed algorithm consists of projection onto a halfspace containing the given closed convex set rather than the latter set itself, so the algorithm can be implemented very easily and its global convergence to the solution can be established under suitable conditions.

Abstract: This paper presents a modification of the projection methods for solving variational inequality problems. Each iteration of the proposed algorithm consists of projection onto a halfspace containing the given closed convex set rather than the latter set itself. The algorithm can thus be implemented very easily and its global convergence to the solution can be established under suitable conditions.
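The projection onto a halfspace that replaces the projection onto the convex set has a simple closed form, which is why the modified method is so easy to implement; a sketch (the numeric example is illustrative):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Euclidean projection of x onto the halfspace {z : a @ z <= b}.
    If x is already inside, it is its own projection; otherwise move
    along a by the normalized violation."""
    a = np.asarray(a, dtype=float)
    x = np.asarray(x, dtype=float)
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

p = project_halfspace([3.0, 4.0], a=[1.0, 0.0], b=1.0)   # -> [1., 4.]
```

Projecting onto a general closed convex set usually requires an inner optimization; this one-line formula is the whole point of using a containing halfspace instead.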

175 citations

••

TL;DR: It is shown that the Chvátal rank of a polyhedron can be bounded above by a function of the matrix A, independent of the vector b, a result which, as Blair observed, is equivalent to Blair and Jeroslow's theorem that ‘each integer programming value function is a Gomory function.’

Abstract: We consider integer linear programming problems with a fixed coefficient matrix and varying objective function and right-hand-side vector. Among our results, we show that, for any optimal solution to a linear program max{wx: Ax ≤ b}, the distance to the nearest optimal solution to the corresponding integer program is at most the dimension of the problem multiplied by the largest subdeterminant of the integral matrix A. Using this, we strengthen several integer programming ‘proximity’ results of Blair and Jeroslow; Graver; and Wolsey. We also show that the Chvátal rank of a polyhedron {x: Ax ≤ b} can be bounded above by a function of the matrix A, independent of the vector b, a result which, as Blair observed, is equivalent to Blair and Jeroslow's theorem that ‘each integer programming value function is a Gomory function.’

168 citations

••

Rider University

TL;DR: This algorithm for global optimization uses an arbitrary starting point, requires no derivatives, uses comparatively few function evaluations and is not side-tracked by nearby relative optima; it builds a gradually closer piecewise-differentiable approximation to the objective function.

Abstract: This algorithm for global optimization uses an arbitrary starting point, requires no derivatives, uses comparatively few function evaluations and is not side-tracked by nearby relative optima. The algorithm builds a gradually closer piecewise-differentiable approximation to the objective function. The computer program exhibits a (theoretically expected) strong tendency to cluster around relative optima close to the global. Results of testing with several standard functions are given.
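The "gradually closer piecewise approximation" idea can be illustrated with a generic Piyavskii-style sketch for Lipschitz functions in one dimension; the paper's actual construction is different (piecewise-differentiable, in higher dimensions), so everything below is a simplified analogy under stated assumptions:

```python
def piyavskii(f, a, b, L, iters=60):
    """Piyavskii-style global minimization of an L-Lipschitz f on [a, b]:
    repeatedly evaluate f where the piecewise-linear lower bound,
    built from Lipschitz cones at sampled points, is smallest."""
    pts = [(a, f(a)), (b, f(b))]
    for _ in range(iters):
        pts.sort()
        best = None
        for (x1, f1), (x2, f2) in zip(pts, pts[1:]):
            # Minimum of the two-cone lower bound on [x1, x2].
            xm = 0.5 * (x1 + x2) + (f1 - f2) / (2 * L)
            lb = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if best is None or lb < best[0]:
                best = (lb, xm)
        x = min(max(best[1], a), b)
        if any(abs(x - p0) < 1e-12 for p0, _ in pts):
            break                       # interval resolved to tolerance
        pts.append((x, f(x)))
    return min(pts, key=lambda p: p[1])

# Toy run: global minimum of (t - 0.3)^2 on [0, 1], Lipschitz bound L = 2.
xbest, fbest = piyavskii(lambda t: (t - 0.3) ** 2, 0.0, 1.0, L=2.0)
```

Like the method in the abstract, the sketch needs no derivatives and is not trapped by local minima, because the underestimating approximation is global.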

••

TL;DR: This paper studies inexact Newton methods for solving the nonlinear complementarity problem, and establishes and analyzes the accuracies needed to preserve the nice features of the exact Newton method.

Abstract: An exact Newton method for solving a nonlinear complementarity problem consists of solving a sequence of linear complementarity subproblems. For problems of large size, solving the subproblems exactly can be very expensive. In this paper we study inexact Newton methods for solving the nonlinear complementarity problem. In such an inexact method, the subproblems are solved only up to a certain degree of accuracy. The necessary accuracies that are needed to preserve the nice features of the exact Newton method are established and analyzed. We also discuss some extensions as well as an application.

••

TL;DR: In this paper, the concept of a generalized critical point (g.c. point) is introduced for one-parameter families of optimization problems; the set Σ of all g.c. points is pieced together from one-dimensional manifolds, and its points can be divided into five (characteristic) types, with the nondegenerate critical points forming an open and dense subset.

Abstract: We deal with one-parameter families of optimization problems in finite dimensions. The constraints are both of equality and inequality type. The concept of a ‘generalized critical point’ (g.c. point) is introduced. In particular, every local minimum, Kuhn-Tucker point, and point of Fritz John type is a g.c. point. Under fairly weak (even generic) conditions we study the set Σ consisting of all g.c. points. Due to the parameter, the set Σ is pieced together from one-dimensional manifolds. The points of Σ can be divided into five (characteristic) types. The subset of ‘nondegenerate critical points’ (first type) is open and dense in Σ (nondegenerate means: strict complementarity, nondegeneracy of the corresponding quadratic form and linear independence of the gradients of binding constraints). A nondegenerate critical point is completely characterized by means of four indices. The change of these indices along Σ is presented. Finally, the Kuhn-Tucker subset of Σ is studied in more detail, in particular in connection with the (failure of the) Mangasarian-Fromowitz constraint qualification.

••

Kyoto University

TL;DR: This paper presents a successive quadratic programming algorithm for solving general nonlinear programming problems and proves that the algorithm possesses global and superlinear convergence properties.

Abstract: This paper presents a successive quadratic programming algorithm for solving general nonlinear programming problems. In order to avoid the Maratos effect, direction-finding subproblems are derived by modifying the second-order approximations to both objective and constraint functions of the problem. We prove that the algorithm possesses global and superlinear convergence properties.

••

TL;DR: It appears that the generalized linear production model is a unifying model which can be used to explain the non-emptiness of the core of cooperative games generated by various, seemingly different, optimization models.

Abstract: We introduce a generalized linear production model whose attractive feature is that the resources held by any subset of producers S are not restricted to be the vector sum of the resources held by the members of S. We provide sufficient conditions for the non-emptiness of the core of the associated generalized linear production game, and show that if the core of the game is not empty then a solution in it can be produced from a dual optimal solution to the associated linear programming problem. Our generalized linear production model is a proper generalization of the linear production model introduced by Owen, and it can be used to analyze cooperative games which cannot be studied in the ordinary linear production model framework. We use the generalized model to show that the cooperative game induced by a network optimization problem in which players are the nodes of the network has a non-empty core. We further employ our model to prove the non-emptiness of the core of two other classes of cooperative games, which were not previously studied in the literature, and we also use our generalized model to provide an alternative proof for the non-emptiness of the core of the class of minimum cost spanning tree games. Thus, it appears that the generalized linear production model is a unifying model which can be used to explain the non-emptiness of the core of cooperative games generated by various, seemingly different, optimization models.

••

TL;DR: A recursive quadratic programming algorithm for solving equality constrained optimization problems is proposed and studied, and some numerical results are given.

Abstract: In this paper, a recursive quadratic programming algorithm for solving equality constrained optimization problems is proposed and studied. The line search functions used are approximations to Fletcher's differentiable exact penalty function. Global convergence and local superlinear convergence results are proved, and some numerical results are given.

••

TL;DR: The global minimization of a large-scale linearly constrained concave quadratic problem is considered, and a guaranteed ε-approximate solution is obtained by solving a single linear zero–one mixed integer programming problem.

Abstract: The global minimization of a large-scale linearly constrained concave quadratic problem is considered. The concave quadratic part of the objective function is given in terms of the nonlinear variables x ∈ R^n, while the linear part is in terms of y ∈ R^k. For large-scale problems we may have k much larger than n. The original problem is reduced to an equivalent separable problem by solving a multiple-cost-row linear program with 2n cost rows. The solution of one additional linear program gives an incumbent vertex which is a candidate for the global minimum, and also gives a bound on the relative error in the function value of this incumbent. An a priori bound on this relative error is obtained, which is shown to be ≤ 0.25 in important cases. If the incumbent is not a satisfactory approximation to the global minimum, a guaranteed ε-approximate solution is obtained by solving a single linear zero–one mixed integer programming problem. This integer problem is formulated by a simple piecewise-linear underestimation of the separable problem.

••

TL;DR: In this paper, the authors present a feasible directions algorithm based on Lagrangian concepts for the solution of the nonlinear programming problem with equality and inequality constraints, and prove the global convergence of the algorithm and apply it to some test problems.

Abstract: We present a feasible directions algorithm, based on Lagrangian concepts, for the solution of the nonlinear programming problem with equality and inequality constraints. At each iteration a descent direction is defined; by modifying it, we obtain a feasible descent direction. The line search procedure assures the global convergence of the method and the feasibility of all the iterates.
We prove the global convergence of the algorithm and apply it to the solution of some test problems. Although the present version of the algorithm does not include any second-order information, like quasi-Newton methods, these numerical results exhibit a behavior comparable to that of the best methods known at present for nonlinear programming.

••

TL;DR: The results help to explain why the DFP method is often less suitable than the BFGS algorithm for general unconstrained optimization calculations, and they show that quadratic functions provide much information about efficiency when the current vector of variables is too far from the solution for an asymptotic convergence analysis.

Abstract: We study the use of the BFGS and DFP algorithms with step-lengths of one for minimizing quadratic functions of only two variables. The updating formulae in this case imply nonlinear three term recurrence relations between the eigenvalues of consecutive second derivative approximations, which are analysed in order to explain some gross inefficiencies that can occur. Specifically, the BFGS algorithm may require more than 10 iterations to achieve the first decimal place of accuracy, while the performance of the DFP method is far worse. The results help to explain why the DFP method is often less suitable than the BFGS algorithm for general unconstrained optimization calculations, and they show that quadratic functions provide much information about efficiency when the current vector of variables is too far from the solution for an asymptotic convergence analysis.
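The updating formulae referred to are the standard BFGS and DFP updates of the Hessian approximation; a sketch, using a two-variable quadratic in the spirit of the paper's setting (the specific matrix and step are illustrative):

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update of a Hessian approximation B from step s and gradient
    change y; the result satisfies the secant equation B_new @ s = y."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def dfp_update(B, s, y):
    """DFP update of the Hessian approximation (the dual of BFGS);
    it also satisfies the secant equation."""
    r = y - B @ s
    return (B + (np.outer(r, y) + np.outer(y, r)) / (y @ s)
              - (r @ s) * np.outer(y, y) / (y @ s) ** 2)

# Two-variable quadratic f(x) = 0.5 * x @ A @ x, as in the paper's analysis.
A = np.array([[4.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
s = np.array([1.0, 0.5])
y = A @ s              # exact gradient change along s for a quadratic
Bp = bfgs_update(B, s, y)
```

The paper's recurrence relations track how the eigenvalues of successive such approximations evolve under unit step-lengths, which is where the gross inefficiencies of DFP show up.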

••

TL;DR: A dual simplex method for the assignment problem leaves open to choice the activity (i, j) of row i and column j that is to be dropped in pivoting so long as x_ij < 0, and it is argued that on average the number of pivots is at most n log n.

Abstract: “Where there is abundance of mystery and confusion in every direction, the truth seldom remains hidden for long. It's a matter of having plenty of angles to go at it from. Only the utterly simple crimes - the simplex crimes, you may say - have the trick of remaining baffling.” - Sir John (from Michael Innes, The Open House (A Sir John Appleby Mystery), Penguin Books, 1974).

••

TL;DR: It is shown that a particular pivoting algorithm, which is called the lexicographic Lemke algorithm, takes an expected number of steps that is bounded by a quadratic in n, when applied to a random linear complementarity problem of dimension n.

Abstract: We show that a particular pivoting algorithm, which we call the lexicographic Lemke algorithm, takes an expected number of steps that is bounded by a quadratic in n, when applied to a random linear complementarity problem of dimension n. We present two probabilistic models, both requiring some nondegeneracy and sign-invariance properties. The second distribution is concerned with linear complementarity problems that arise from linear programming. In this case we give bounds that are quadratic in the smaller of the two dimensions of the linear programming problem, and independent of the larger. Similar results have been obtained by Adler and Megiddo.

••

TL;DR: A new continuously differentiable exact penalty function is introduced for the solution of nonlinear programming problems with compact feasible set; it is defined on a suitable bounded open set containing the feasible region and goes to infinity on the boundary of this set.

Abstract: In this paper a new continuously differentiable exact penalty function is introduced for the solution of nonlinear programming problems with compact feasible set. A distinguishing feature of the penalty function is that it is defined on a suitable bounded open set containing the feasible region and that it goes to infinity on the boundary of this set. This allows the construction of an implementable unconstrained minimization algorithm, whose global convergence towards Kuhn-Tucker points of the constrained problem can be established.

••

TL;DR: For linear semi-infinite programming problems a discretization method is presented and a numerically stable Simplex-algorithm is used to solve the LP-subproblems.

Abstract: For linear semi-infinite programming problems a discretization method is presented. A first coarse grid is successively refined in such a way that the solution on the foregoing grids can be used on the one hand as starting points for the subsequent grids and on the other hand to considerably reduce the number of constraints which have to be considered in the subsequent problems. This enables an efficient treatment of large problems with moderate storage requirements. A numerically stable Simplex-algorithm is used to solve the LP-subproblems. Numerical examples from bivariate Chebyshev approximation are presented.

••

TL;DR: A bound on the distance between an arbitrary point and the solution set of a monotone linear complementarity problem is given in terms of a condition constant that depends on the problem data only and a residual function of the violations of the complementarity problem conditions by the point considered.

Abstract: We give a bound on the distance between an arbitrary point and the solution set of a monotone linear complementarity problem in terms of a condition constant that depends on the problem data only and a residual function of the violations of the complementarity problem conditions by the point considered. When the point satisfies the linear inequalities of the complementarity problem, the residual consists of the complementarity condition plus its square root. This latter term is essential and without it the error bound cannot hold. We also show that another natural residual that has been employed to bound errors for strictly monotone linear complementarity problems fails to bound errors for the monotone case considered here.
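A sketch of the residual described (the complementarity term plus its square root, for points satisfying the linear inequalities), evaluated on a toy monotone LCP; the condition constant itself is not computed here, and the example data are assumptions:

```python
import numpy as np

def lcp_residual(x, M, q):
    """Residual of the form studied here for the LCP
    w = M x + q, x >= 0, w >= 0, x @ w = 0: the complementarity
    term plus its square root, at a point satisfying the linear
    inequalities of the problem."""
    w = M @ x + q
    comp = float(x @ w)
    return comp + np.sqrt(comp)

# Toy monotone LCP: M = I, q = -1; the solution is x* = (1, 1), w* = 0.
M = np.eye(2)
q = -np.ones(2)
r_at_solution = lcp_residual(np.ones(2), M, q)      # -> 0.0
r_off = lcp_residual(np.array([2.0, 2.0]), M, q)    # positive away from x*
```

The square-root term is what the paper shows to be essential: near the solution set the plain complementarity term can shrink much faster than the distance it is supposed to bound.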

••

TL;DR: A new brief proof of Fan's minimax theorem for convex-concave like functions is established using separation arguments.

Abstract: A new brief proof of Fan's minimax theorem for convex-concave like functions is established using separation arguments.

••

TL;DR: An iterative algorithm for solving a convex quadratic program with one equality constraint and bounded variables and preliminary testing suggests that this approach is efficient for problems with diagonally dominant matrices.

Abstract: In this paper we propose an iterative algorithm for solving a convex quadratic program with one equality constraint and bounded variables. At each iteration, a separable convex quadratic program with the same constraint set is solved. Two variants are analyzed: one that uses an exact line search, and the other a unit step size. Preliminary testing suggests that this approach is efficient for problems with diagonally dominant matrices.

••

TL;DR: This work considers linear programs in which the objective function (cost) coefficients are independent non-negative random variables, and gives upper bounds for the random minimum cost.

Abstract: We consider linear programs in which the objective function (cost) coefficients are independent non-negative random variables, and give upper bounds for the random minimum cost. One application shows that for quadratic assignment problems with such costs certain branch-and-bound algorithms usually take more than exponential time.

••

TL;DR: Results from topology are reported showing that, in general, there is no continuous function that generates the null space basis of all full rank rectangular matrices of a fixed size; thus constrained optimization algorithms cannot assume an everywhere continuous null space basis.

Abstract: Many constrained optimization algorithms use a basis for the null space of the matrix of constraint gradients. Recently, methods have been proposed that enable this null space basis to vary continuously as a function of the iterates in a neighborhood of the solution. This paper reports results from topology showing that, in general, there is no continuous function that generates the null space basis of all full rank rectangular matrices of a fixed size. Thus constrained optimization algorithms cannot assume an everywhere continuous null space basis. We also give some indication of where these discontinuities must occur. We then propose an alternative implementation of a class of constrained optimization algorithms that uses approximations to the reduced Hessian of the Lagrangian but is independent of the choice of null space basis. This approach obviates the need for a continuously varying null space basis.
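A pointwise null space basis is easy to compute, for instance from the SVD; the paper's point is that no such construction can depend continuously on the matrix everywhere. A sketch (the example matrix is illustrative):

```python
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Orthonormal basis Z for the null space of A (so A @ Z = 0),
    taken from the right singular vectors with zero singular value.
    Well defined at each A, but not continuous in A everywhere."""
    u, sing, vt = np.linalg.svd(A)
    rank = int((sing > tol).sum())
    return vt[rank:].T

A = np.array([[1.0, 2.0, 3.0]])      # 1 x 3, full row rank
Z = null_space_basis(A)              # 3 x 2 basis of the null space
```

The alternative the paper proposes sidesteps the issue entirely by working with reduced-Hessian approximations that do not depend on which basis Z is chosen.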

••

TL;DR: This paper presents an algorithm of complexity O(nr(r + c + log n)) for the weighted matroid intersection problem, and presents a second algorithm that, given a feasible solution of cardinality k, finds an optimal one of the same cardinality.

Abstract: Consider a finite set E, a weight function w: E → R, and two matroids M_1 and M_2 defined on E. The weighted matroid intersection problem consists of finding a set I ⊆ E, independent in both matroids, that maximizes Σ{w(e): e in I}. We present an algorithm of complexity O(nr(r + c + log n)) for this problem, where n = |E|, r = min(rank(M_1), rank(M_2)), c = max(c_1, c_2) and, for i = 1, 2, c_i is the complexity of finding the circuit of I ∪ {e} in M_i (or showing that none exists), where e is in E and I ⊆ E is independent in M_1 and M_2. A related problem is to find a maximum weight set, independent in both matroids, and of given cardinality k (if one exists). Our algorithm also solves this problem. In addition, we present a second algorithm that, given a feasible solution of cardinality k, finds an optimal one of the same cardinality. A sensitivity analysis on the weights is easy to perform using this approach. Our two algorithms are related to existing algorithms; in fact, our framework provides new simple proofs of their validity. Other contributions of this paper are the existence of nonnegative reduced weights (Theorem 6), allowing the improved complexity bound, and the introduction of artificial elements, allowing an improved start and flexibility in the implementation of the algorithms.

••

TL;DR: Efficient algorithms based upon Balinski's signature method are described for solving the n × n assignment problem and are shown to have computational bounds of O(n³); variants for sparse problems with m arcs require O(m) space and O(mn + n² log n) time in the worst case.

Abstract: Efficient algorithms based upon Balinski's signature method are described for solving the n × n assignment problem. These algorithms are special variants of the dual simplex method and are shown to have computational bounds of O(n³). Variants for solving sparse assignment problems with m arcs that require O(m) space and O(mn + n² log n) time in the worst case are also presented.