
Showing papers in "Mathematical Programming in 1996"


Journal ArticleDOI
TL;DR: In this article, the authors conduct an extensive computational study of shortest path algorithms, including some very recent algorithms, and suggest new algorithms motivated by the experimental results and prove interesting theoretical results suggested by the test data.
Abstract: We conduct an extensive computational study of shortest paths algorithms, including some very recent algorithms. We also suggest new algorithms motivated by the experimental results and prove interesting theoretical results suggested by the experimental data. Our computational study is based on several natural problem classes which identify strengths and weaknesses of various algorithms. These problem classes and algorithm implementations form an environment for testing the performance of shortest paths algorithms. The interaction between the experimental evaluation of algorithm behavior and the theoretical analysis of algorithm performance plays an important role in our research.

686 citations
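As a rough illustration of the kind of comparison the study performs (the graph generator and both routines below are illustrative stand-ins, not the authors' codes or problem classes), the following Python sketch times a binary-heap Dijkstra implementation against Bellman-Ford on a random digraph with nonnegative weights:

import heapq
import random
import time

def random_digraph(n, m, max_w=100, seed=0):
    # adjacency list: node -> list of (neighbor, weight), nonnegative weights
    rng = random.Random(seed)
    adj = {v: [] for v in range(n)}
    for _ in range(m):
        adj[rng.randrange(n)].append((rng.randrange(n), rng.randint(0, max_w)))
    return adj

def dijkstra(adj, s):
    # label-setting with a binary heap; returns distances of reachable nodes only
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def bellman_ford(adj, s):
    # label-correcting; scans every arc each pass, with an early-exit test
    dist = {v: float("inf") for v in adj}
    dist[s] = 0
    for _ in range(len(adj) - 1):
        changed = False
        for u in adj:
            for v, w in adj[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    changed = True
        if not changed:
            break
    return dist

adj = random_digraph(n=1000, m=5000)
for solver in (dijkstra, bellman_ford):
    t0 = time.perf_counter()
    solver(adj, 0)
    print(solver.__name__, round(time.perf_counter() - t0, 3), "s")

On sparse graphs with nonnegative weights the heap-based label-setting code typically dominates; the paper's problem classes are chosen precisely to expose where such simple expectations break down.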


Journal ArticleDOI
TL;DR: A branch-and-cut algorithm to solve quadratic programming problems where there is an upper bound on the number of positive variables and the algorithm solves the largest real-life problems in a few minutes of run-time.
Abstract: We present computational experience with a branch-and-cut algorithm to solve quadratic programming problems where there is an upper bound on the number of positive variables. Such problems arise in financial applications. The algorithm solves the largest real-life problems in a few minutes of run-time.

409 citations
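The abstract does not reproduce the model, but a standard way to write a quadratic program with an upper bound K on the number of positive variables (a hedged sketch; Q, c, the bounds u_i and K are generic placeholders, not the paper's data) is with binary indicator variables:

\begin{align*}
\min_{x,\,y}\quad & x^{\mathsf T} Q x + c^{\mathsf T} x \\
\text{s.t.}\quad  & 0 \le x_i \le u_i\, y_i, \qquad y_i \in \{0,1\}, \quad i = 1,\dots,n,\\
                  & \textstyle\sum_{i=1}^{n} y_i \le K .
\end{align*}

A branch-and-cut scheme of this general type branches on the indicators y_i and strengthens the continuous relaxation with cutting planes.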


Journal ArticleDOI
TL;DR: The recent extension of Newton's method to semismooth systems of equations and the fact that the natural merit function associated to the equation reformulation is continuously differentiable are exploited to develop an algorithm whose global and quadratic convergence properties can be established under very mild assumptions.
Abstract: In this paper we present a new algorithm for the solution of nonlinear complementarity problems. The algorithm is based on a semismooth equation reformulation of the complementarity problem. We exploit the recent extension of Newton's method to semismooth systems of equations and the fact that the natural merit function associated to the equation reformulation is continuously differentiable to develop an algorithm whose global and quadratic convergence properties can be established under very mild assumptions. Other interesting features of the new algorithm are an extreme simplicity along with a low computational burden per iteration. We include numerical tests which show the viability of the approach.

372 citations
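The algorithm itself is not reproduced here; the minimal Python sketch below applies a bare semismooth Newton iteration to the Fischer-Burmeister reformulation of a small linear complementarity problem (a special case of the NCP), omitting the merit-function line search that gives the paper's method its global convergence properties:

import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0
    return np.sqrt(a**2 + b**2) - a - b

def semismooth_newton_lcp(M, q, x0, tol=1e-10, max_iter=50):
    """Solve x >= 0, Mx + q >= 0, x'(Mx + q) = 0 via the FB reformulation."""
    x = x0.astype(float)
    for _ in range(max_iter):
        F = M @ x + q
        Phi = fischer_burmeister(x, F)
        if np.linalg.norm(Phi) < tol:
            break
        r = np.sqrt(x**2 + F**2)
        r[r == 0.0] = 1.0                      # pick an arbitrary subgradient at (0, 0)
        Da = x / r - 1.0                       # d phi / d a
        Db = F / r - 1.0                       # d phi / d b
        J = np.diag(Da) + np.diag(Db) @ M      # an element of the generalized Jacobian
        x = x - np.linalg.solve(J, Phi)        # plain Newton step, no line search
    return x

# toy LCP with solution x = (0.5, 0)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
x = semismooth_newton_lcp(M, q, np.ones(2))
print(x, M @ x + q)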


Journal ArticleDOI
P.M. Vaidya1
TL;DR: In this article, a new algorithm for the feasibility problem was proposed, based on the notion of a volumetric center of a polytope and a related ellipsoid of maximum volume inscribable in the polytope.
Abstract: Let S ⊆ ℝ^n be a convex set for which there is an oracle with the following property. Given any point z ∈ ℝ^n, the oracle returns a “Yes” if z ∈ S; whereas if z ∉ S then the oracle returns a “No” together with a hyperplane that separates z from S. The feasibility problem is the problem of finding a point in S; the convex optimization problem is the problem of minimizing a convex function over S. We present a new algorithm for the feasibility problem. The notion of a volumetric center of a polytope and a related ellipsoid of maximum volume inscribable in the polytope are central to the algorithm. Our algorithm has a significantly better global convergence rate and time complexity than the ellipsoid algorithm. The algorithm for the feasibility problem easily adapts to the convex optimization problem.

232 citations
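This is not Vaidya's volumetric-center method; the Python sketch below only illustrates the oracle interface described in the abstract, using a generic cutting-plane loop that queries the Chebyshev center of the current polytope (computed with scipy's linprog) and a Euclidean ball standing in for the convex set S:

import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    # maximize r  s.t.  A x + ||a_i|| r <= b  (center of the largest inscribed ball)
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                                         # maximize r
    res = linprog(c, A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1], res.x[-1]                         # LP stays feasible while the box is present

def ball_oracle(z, center, radius):
    """Separation oracle for a Euclidean ball (stands in for the convex set S)."""
    g = z - center
    if np.linalg.norm(g) <= radius:
        return None                                      # "Yes": z is in S
    return g, g @ z                                      # "No": halfspace g'x <= g'z contains S

def cutting_plane_feasibility(oracle, n, box=10.0, max_iter=200):
    # start from the bounding box -box <= x_i <= box
    A = np.vstack([np.eye(n), -np.eye(n)])
    b = np.full(2 * n, box)
    for _ in range(max_iter):
        z, _ = chebyshev_center(A, b)
        cut = oracle(z)
        if cut is None:
            return z                                     # feasible point found
        g, rhs = cut
        A = np.vstack([A, g])
        b = np.append(b, rhs)
    return None

x = cutting_plane_feasibility(lambda z: ball_oracle(z, np.array([3.0, -2.0]), 0.5), n=2)
print(x)

Vaidya's contribution is a far better choice of query point (the volumetric center) with provable complexity; the Chebyshev center above is a deliberate simplification.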


Journal ArticleDOI
TL;DR: The nonlinearity of Coulomb's law leads to a nonlinear complementarity formulation of the system model, which is used in conjunction with the theory of quasi-variational inequalities to prove for the first time that multi-rigid-body systems with all contacts rolling always have a solution under a feasibility-type condition.
Abstract: In this paper, we study the problem of predicting the acceleration of a set of rigid, 3-dimensional bodies in contact with Coulomb friction. The nonlinearity of Coulomb's law leads to a nonlinear complementarity formulation of the system model. This model is used in conjunction with the theory of quasi-variational inequalities to prove for the first time that multi-rigid-body systems with all contacts rolling always have a solution under a feasibility-type condition. The analysis of the more general problem with sliding and rolling contacts presents difficulties that motivate our consideration of a relaxed friction law. The corresponding complementarity formulations of the multi-rigid-body contact problem are derived and existence of solutions of these models is established.

220 citations
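For a single contact, the complementarity structure referred to here can be written, in a simplified acceleration-level form that is standard in this literature (not the paper's full frictional model), as

\[ 0 \;\le\; \lambda_n \;\perp\; a_n \;\ge\; 0 , \]

where λ_n is the normal contact force and a_n the relative normal acceleration: either the bodies separate (a_n > 0) and the force vanishes, or the contact persists (a_n = 0) and the force may be positive.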


Journal ArticleDOI
TL;DR: It is shown that the original problem is equivalent to a convex minimization problem with simple linear constraints, and a special problem of minimizing a concave quadratic function subject to finitely many convex quadratic constraints is also shown to be equivalent to a minimax convex problem.
Abstract: We consider the problem of minimizing an indefinite quadratic objective function subject to two-sided indefinite quadratic constraints. Under a suitable simultaneous diagonalization assumption (which trivially holds for trust region type problems), we prove that the original problem is equivalent to a convex minimization problem with simple linear constraints. We then consider a special problem of minimizing a concave quadratic function subject to finitely many convex quadratic constraints, which is also shown to be equivalent to a minimax convex problem. In both cases we derive the explicit nonlinear transformations which allow for recovering the optimal solution of the nonconvex problems via their equivalent convex counterparts. Special cases and applications are also discussed. We outline interior-point polynomial-time algorithms for the solution of the equivalent convex programs.

172 citations
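As a concrete instance of the problem class (a hedged sketch of the single-constraint, trust-region-type case only, with generic data A, a, B, b, ℓ, u):

\[ \min_{x}\; x^{\mathsf T} A x + 2 a^{\mathsf T} x
\quad \text{s.t.} \quad
\ell \;\le\; x^{\mathsf T} B x + 2 b^{\mathsf T} x \;\le\; u , \]

with A and B symmetric and possibly indefinite. Simultaneous diagonalization means there is a nonsingular S such that S^T A S and S^T B S are both diagonal; in the variables y = S^{-1} x both quadratic forms become separable, which is what underlies the convex reformulation described in the abstract.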


Journal ArticleDOI
TL;DR: This paper proposes a method for optimizing convex performance functions in stochastic systems, which can include expected performance in static systems and steady-state performance in discrete-event dynamic systems; they may be nonsmooth.
Abstract: In this paper we propose a method for optimizing convex performance functions in stochastic systems. These functions can include expected performance in static systems and steady-state performance in discrete-event dynamic systems; they may be nonsmooth. The method is closely related to retrospective simulation optimization; it appears to overcome some limitations of stochastic approximation, which is often applied to such problems. We explain the method and give computational results for two classes of problems: tandem production lines with up to 50 machines, and stochastic PERT (Program Evaluation and Review Technique) problems with up to 70 nodes and 110 arcs.

167 citations
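The retrospective method itself is not reproduced here; as a generic, hedged illustration of optimizing an expected (possibly nonsmooth) convex performance function by fixing the random outcomes up front, the Python sketch below minimizes a sample-average newsvendor-type cost (the distribution and cost coefficients are arbitrary choices):

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
demand = rng.gamma(shape=4.0, scale=25.0, size=5000)   # fixed sample ("retrospective" scenarios)

def avg_cost(order_qty, c_over=1.0, c_under=4.0):
    # convex, nonsmooth sample-average of overage/underage costs
    over = np.maximum(order_qty - demand, 0.0)
    under = np.maximum(demand - order_qty, 0.0)
    return np.mean(c_over * over + c_under * under)

res = minimize_scalar(avg_cost, bounds=(0.0, 500.0), method="bounded")
print("approx. optimal order quantity:", round(res.x, 1))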


Journal ArticleDOI
TL;DR: If the LIP algorithm is applied to integer data, one obtains as a corollary a new proof of a well-known theorem by Tardos that linear programming can be solved in strongly polynomial time provided that A contains small integer entries.
Abstract: We propose a primal-dual "layered-step" interior point (LIP) algorithm for linear programming with data given by real numbers. This algorithm follows the central path, either with short steps or with a new type of step called a "layered least squares" (LLS) step. The algorithm returns an exact optimum after a finite number of steps--in particular, after O(n^{3.5} c(A)) iterations, where c(A) is a function of the coefficient matrix. The LLS steps can be thought of as accelerating a classical path-following interior point method. One consequence of the new method is a new characterization of the central path: we show that it is composed of at most n^2 alternating straight and curved segments. If the LIP algorithm is applied to integer data, we get as another corollary a new proof of a well-known theorem by Tardos that linear programming can be solved in strongly polynomial time provided that A contains small integer entries.

163 citations


Journal ArticleDOI
TL;DR: Methodology is provided for sharing cuts in decomposition algorithms for stochastic programs that satisfy certain interstage dependency models; these techniques enable sampling-based algorithms to handle a richer class of multistage problems and may also be used to accelerate the convergence of exact decomposition algorithms.
Abstract: Multistage stochastic programs with interstage independent random parameters have recourse functions that do not depend on the state of the system. Decomposition-based algorithms can exploit this structure by sharing cuts (outer-linearizations of the recourse function) among different scenario subproblems at the same stage. The ability to share cuts is necessary in practical implementations of algorithms that incorporate Monte Carlo sampling within the decomposition scheme. In this paper, we provide methodology for sharing cuts in decomposition algorithms for stochastic programs that satisfy certain interstage dependency models. These techniques enable sampling-based algorithms to handle a richer class of multistage problems, and may also be used to accelerate the convergence of exact decomposition algorithms.

156 citations
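The cuts being shared are outer linearizations of the expected recourse function; in a hedged, generic notation, a cut generated at one stage-t subproblem has the form

\[ \mathcal{Q}_{t+1}(x_t) \;\ge\; \alpha_j + \beta_j^{\mathsf T} x_t , \]

and under interstage independence the same pair (α_j, β_j) is valid for every scenario subproblem at stage t, because the expected recourse function does not depend on the realized history. The paper extends this idea to certain interstage dependency models.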


Journal ArticleDOI
TL;DR: This paper shows that {0, 1/2}-SEP can be solved in polynomial time for a convenient relaxation of the system Ax ⩽ b, which leads to an efficient separation algorithm for a subclass of {0, 1/2}-CG cuts, which often contains wide families of strong inequalities for P_I.
Abstract: Given the integer polyhedron P_I := conv{x ∈ ℤ^n : Ax ⩽ b}, where A ∈ ℤ^{m×n} and b ∈ ℤ^m, a Chvátal-Gomory (CG) cut is a valid inequality for P_I of the type λ^T Ax ⩽ ⌊λ^T b⌋ for some λ ∈ ℝ^m_+ such that λ^T A ∈ ℤ^n. In this paper we study {0, 1/2}-CG cuts, arising for λ ∈ {0, 1/2}^m. We show that the associated separation problem, {0, 1/2}-SEP, is equivalent to finding a minimum-weight member of a binary clutter. This implies that {0, 1/2}-SEP is NP-complete in the general case, but polynomially solvable when A is related to the edge-path incidence matrix of a tree. We show that {0, 1/2}-SEP can be solved in polynomial time for a convenient relaxation of the system Ax ⩽ b. This leads to an efficient separation algorithm for a subclass of {0, 1/2}-CG cuts, which often contains wide families of strong inequalities for P_I. Applications to the clique partitioning, asymmetric traveling salesman, plant location, acyclic subgraph and linear ordering polytopes are briefly discussed.

149 citations
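A small worked example (standard, not taken from the paper): for integer x satisfying

\[ x_1 + x_2 \le 1, \qquad x_2 + x_3 \le 1, \qquad x_1 + x_3 \le 1, \]

the multiplier vector λ = (1/2, 1/2, 1/2) gives λ^T A = (1, 1, 1) ∈ ℤ^3 and λ^T b = 3/2, so the resulting {0, 1/2}-CG cut is

\[ x_1 + x_2 + x_3 \;\le\; \lfloor 3/2 \rfloor \;=\; 1 , \]

which is valid for the integer hull but cuts off the fractional point (1/2, 1/2, 1/2).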


Journal ArticleDOI
TL;DR: Various exact penalty functions for mathematical programs subject to equilibrium constraints are derived, and stationary points of these programs are characterized.
Abstract: Using the theory of exact penalization for mathematical programs with subanalytic constraints, the theory of error bounds for quadratic inequality systems, and the theory of parametric normal equations, we derive various exact penalty functions for mathematical programs subject to equilibrium constraints, and we also characterize stationary points of these programs.

Journal ArticleDOI
TL;DR: This paper explores the advantages of such parallel implementations over serial implementations and compares alternative sequencing protocols for parallel processors; the results suggest that parallel implementations work well but require careful attention to processor load balancing.
Abstract: Multistage stochastic linear programs can represent a variety of practical decision problems. Solving a multistage stochastic program can be viewed as solving a large tree of linear programs. A common approach for solving these problems is the nested decomposition algorithm, which moves up and down the tree by solving nodes and passing information among nodes. The natural independence of subtrees suggests that much of the computational effort of the nested decomposition algorithm can run in parallel across small numbers of fast processors. This paper explores the advantages of such parallel implementations over serial implementations and compares alternative sequencing protocols for parallel processors. Computational experience on a large test set of practical problems with up to 1.5 million constraints and almost 5 million variables suggests that parallel implementations may indeed work well, but they require careful attention to processor load balancing.

Journal ArticleDOI
TL;DR: For the extended linear complementarity problem, column-sufficiency, row-sufficiency and P-properties are introduced and characterized, and these properties are then specialized to the vertical, horizontal and mixed linear complementarity problems.
Abstract: For the extended linear complementarity problem (Mangasarian and Pang, 1995), we introduce and characterize column-sufficiency, row-sufficiency and P-properties. These properties are then specialized to the vertical, horizontal and mixed linear complementarity problems.

Journal ArticleDOI
TL;DR: This paper proves local convergence rates of primal-dual interior point methods for general nonlinearly constrained optimization problems, establishing local and quadratic convergence of the Newton method and local and superlinear convergence of the quasi-Newton method.
Abstract: This paper proves local convergence rates of primal-dual interior point methods for general nonlinearly constrained optimization problems. Conditions to be satisfied at a solution are those given by the usual Jacobian uniqueness conditions. Proofs about convergence rates are given for three kinds of step size rules. They are: (i) the step size rule adopted by Zhang et al. in their convergence analysis of a primal-dual interior point method for linear programs, in which they used a single step size for primal and dual variables; (ii) the step size rule used in the software package OB1, which uses different step sizes for primal and dual variables; and (iii) the step size rule used by Yamashita for his globally convergent primal-dual interior point method for general constrained optimization problems, which also uses different step sizes for primal and dual variables. Conditions on the barrier parameter and on the parameters in the step size rules are given for each case. For these step size rules, local and quadratic convergence of the Newton method and local and superlinear convergence of the quasi-Newton method are proved.

Journal ArticleDOI
TL;DR: This work investigates the convex hull of incidence vectors of feasible multicuts in the space of edges of the multicut, and several classes of inequalities are introduced, and their strength and robustness are analyzed as various problem parameters change.
Abstract: We investigate the problem of partitioning the nodes of a graph under capacity restriction on the sum of the node weights in each subset of the partition. The objective is to minimize the sum of the costs of the edges between the subsets of the partition. This problem has a variety of applications, for instance in the design of electronic circuits and devices. We present alternative integer programming formulations for this problem and discuss the links between these formulations. Having chosen to work in the space of edges of the multicut, we investigate the convex hull of incidence vectors of feasible multicuts. In particular, several classes of inequalities are introduced, and their strength and robustness are analyzed as various problem parameters change.

Journal ArticleDOI
TL;DR: The Douglas-Rachford splitting algorithm is applied to a class of multi-valued equations consisting of the sum of two monotone mappings and shown to derive decomposition algorithms for solving the variational inequality formulation of the traffic equilibrium problem.
Abstract: We apply the Douglas-Rachford splitting algorithm to a class of multi-valued equations consisting of the sum of two monotone mappings. Compared with the dual application of the same algorithm, which is known as the alternating direction method of multipliers, the primal application yields algorithms that seem somewhat involved. However, the resulting algorithms may be applied effectively to problems with certain special structure. In particular we show that they can be used to derive decomposition algorithms for solving the variational inequality formulation of the traffic equilibrium problem.
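The paper applies the splitting to variational inequalities arising from traffic equilibrium; the Python sketch below shows the same Douglas-Rachford iteration in a much simpler setting, the sum of two convex functions whose proximal maps (resolvents) are cheap (an illustrative lasso-type problem, not the traffic application):

import numpy as np

def prox_l1(v, t, lam):
    # proximal map of t * lam * ||x||_1 (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def prox_ls(v, t, A, b):
    # proximal map of (t/2) * ||Ax - b||^2 : solve (I + t A'A) x = v + t A'b
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + t * A.T @ A, v + t * A.T @ b)

def douglas_rachford(A, b, lam=0.1, t=1.0, iters=500):
    """Minimize 0.5*||Ax-b||^2 + lam*||x||_1 by splitting it into two prox-able pieces."""
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        x = prox_ls(z, t, A, b)            # resolvent of the first monotone operator
        y = prox_l1(2 * x - z, t, lam)     # resolvent of the second, at the reflected point
        z = z + y - x                      # Douglas-Rachford update of the governing sequence
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)
print(np.round(douglas_rachford(A, b), 3))

Applied to the dual, the same iteration is the alternating direction method of multipliers; the paper studies the primal application instead.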

Journal ArticleDOI
TL;DR: The vehicle routing cost allocation problem is formulated as a co-operative game in characteristic function form, conditions are given for when the core of the vehicle routing game is nonempty, and the nucleolus, which minimizes maximum discontent among the players, is computed by a constraint generation approach.
Abstract: In the vehicle routing cost allocation problem the aim is to find a good cost allocation method, i.e., a method that according to specified criteria allocates the cost of an optimal route configuration among the customers. We formulate this problem as a co-operative game in characteristic function form and give conditions for when the core of the vehicle routing game is nonempty. One specific solution concept to the cost allocation problem is the nucleolus, which minimizes maximum discontent among the players in a co-operative game. The class of games we study is such that the values of the characteristic function are obtained from the solution of a set of mathematical programming problems. We do not require an explicit description of the characteristic function for all coalitions. Instead, by applying a constraint generation approach, we evaluate information about the function only when it is needed for the computation of the nucleolus.

Journal ArticleDOI
TL;DR: Under natural regularity conditions, the exactness of a certain penalty function is proved, and strong necessary optimality conditions are given for a class of generalized bilevel programs.
Abstract: We consider a hierarchical system where a leader incorporates into its strategy the reaction of the follower to its decision. The follower's reaction is quite generally represented as the solution set to a monotone variational inequality. For the solution of this nonconvex mathematical program a penalty approach is proposed, based on the formulation of the lower level variational inequality as a mathematical program. Under natural regularity conditions, we prove the exactness of a certain penalty function, and give strong necessary optimality conditions for a class of generalized bilevel programs.

Journal ArticleDOI
TL;DR: This paper uses a cutting plane algorithm based on the polyhedral theory for the Steiner tree packing polyhedron to solve some switchbox routing problems of VLSI-design and reports on the computational experience.
Abstract: In this paper we describe a cutting plane algorithm for the Steiner tree packing problem. We use our algorithm to solve some switchbox routing problems of VLSI-design and report on our computational experience. This includes a brief discussion of separation algorithms, a new LP-based primal heuristic and implementation details. The paper is based on the polyhedral theory for the Steiner tree packing polyhedron developed in our companion paper (this issue) and meant to turn this theory into an algorithmic tool for the solution of practical problems.

Journal ArticleDOI
TL;DR: An algorithm for convex minimization is introduced which includes quasi-Newton updates within a proximal point algorithm that depends on a preconditioned bundle subalgorithm; convergence is proved under boundedness assumptions on the preconditioner sequence.
Abstract: This paper introduces an algorithm for convex minimization which includes quasi-Newton updates within a proximal point algorithm that depends on a preconditioned bundle subalgorithm. The method uses the Hessian of a certain outer function which depends on the Jacobian of a proximal point mapping which, in turn, depends on the preconditioner matrix and on a Lagrangian Hessian relative to a certain tangent space. Convergence is proved under boundedness assumptions on the preconditioner sequence.

Journal ArticleDOI
TL;DR: Sequences of barycentric scenario trees with associated probability trees are derived for minorizing and majorizing the given problem, and convergence of the approximate solutions is proven under the stated assumptions.
Abstract: This work deals with the approximation of convex stochastic multistage programs allowing prices and demand to be stochastic with compact support. Based on earlier results, sequences of barycentric scenario trees with associated probability trees are derived for minorizing and majorizing the given problem. Error bounds for the optimal policies of the approximate problem and duality analysis with respect to the stochastic data determine the scenarios which improve the approximation. Convergence of the approximate solutions is proven under the stated assumptions. Preliminary computational results are outlined.

Journal ArticleDOI
TL;DR: A new dual problem for convex generalized fractional programs with no duality gap is presented, and it is shown how this dual problem can be efficiently solved using a parametric approach; the resulting algorithm can be seen as dual to the Dinkelbach-type algorithm for generalized fractional programs.
Abstract: A new dual problem for convex generalized fractional programs with no duality gap is presented and it is shown how this dual problem can be efficiently solved using a parametric approach. The resulting algorithm can be seen as “dual” to the Dinkelbach-type algorithm for generalized fractional programs since it approximates the optimal objective value of the dual (primal) problem from below. Convergence results for this algorithm are derived and an easy condition to achieve superlinear convergence is also established. Moreover, under some additional assumptions the algorithm also recovers at the same time an optimal solution of the primal problem. We also consider a variant of this new algorithm, based on scaling the “dual” parametric function. The numerical results, in case of quadratic-linear ratios and linear constraints, show that the performance of the new algorithm and its scaled version is superior to that of the Dinkelbach-type algorithms. From the computational results it also appears that contrary to the primal approach, the “dual” approach is less influenced by scaling.
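For orientation (standard background on the primal approach, not the paper's new dual construction): a convex generalized fractional program

\[ \min_{x \in X}\; \max_{1 \le i \le m} \frac{f_i(x)}{g_i(x)}, \qquad g_i(x) > 0 \text{ on } X, \]

is handled by Dinkelbach-type algorithms through the parametric function

\[ F(\lambda) \;=\; \min_{x \in X}\; \max_{1 \le i \le m} \bigl( f_i(x) - \lambda\, g_i(x) \bigr), \]

whose root F(λ*) = 0 characterizes the optimal value λ*; the primal scheme decreases to λ* from above, whereas the dual algorithm of this paper approximates it from below through an analogous parametric function built from the dual.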

Journal ArticleDOI
Adam B. Levy1
TL;DR: A new second-order condition is derived which guarantees that the stationary points associated with the Karush-Kuhn-Tucker conditions exhibit generalized Lipschitz continuity with respect to the parameter.
Abstract: We study implicit multifunctions (set-valued mappings) obtained from inclusions of the form 0 ∈ M(p, x), where M is a multifunction. Our basic implicit multifunction theorem provides an approximation for a generalized derivative of the implicit multifunction in terms of the derivative of the multifunction M. Our primary focus is on three special cases of inclusions 0 ∈ M(p, x) which represent different kinds of generalized variational inequalities, called "variational conditions". Appropriate versions of our basic implicit multifunction theorem yield approximations for generalized derivatives of the solutions to each kind of variational condition. We characterize a well-known generalized Lipschitz property in terms of generalized derivatives, and use our implicit multifunction theorems to state sufficient conditions (and necessary in one case) for solutions of variational conditions to possess this Lipschitz property. We apply our results to a general parameterized nonlinear programming problem, and derive a new second-order condition which guarantees that the stationary points associated with the Karush-Kuhn-Tucker conditions exhibit generalized Lipschitz continuity with respect to the parameter.

Journal ArticleDOI
TL;DR: This work considers a mixed integer model for multi-item single machine production planning, incorporating both start-up costs and machine capacity, and several families of valid inequalities are derived.
Abstract: We consider a mixed integer model for multi-item single machine production planning, incorporating both start-up costs and machine capacity. The single-item version of this model is studied from the polyhedral point of view and several families of valid inequalities are derived. For some of these inequalities, we give necessary and sufficient facet inducing conditions, and efficient separation algorithms. We use these inequalities in a cutting plane/branch and bound procedure. A set of real life based problems with 5 items and up to 36 periods is solved to optimality.

Journal ArticleDOI
TL;DR: The more challenging open questions in the field of stochastic programming are identified and many go to the foundations of designing models for decision making under uncertainty.
Abstract: Remarkable progress has been made in the development of algorithmic procedures and the availability of software for stochastic programming problems. However, some fundamental questions have remained unexplored. This paper identifies the more challenging open questions in the field of stochastic programming. Some are purely technical in nature, but many also go to the foundations of designing models for decision making under uncertainty.

Journal ArticleDOI
TL;DR: A Newton-type method will be described for the solution of LCPs and it will be shown that this method has a finite termination property, i.e., if an iterate is sufficiently close to a solution of the LCP, the method finds this solution in one step.
Abstract: Based on a well-known reformulation of the linear complementarity problem (LCP) as a nondifferentiable system of nonlinear equations, a Newton-type method will be described for the solution of LCPs. Under certain assumptions, it will be shown that this method has a finite termination property, i.e., if an iterate is sufficiently close to a solution of the LCP, the method finds this solution in one step. This result will be applied to a recently proposed algorithm by Harker and Pang in order to prove that their algorithm also has the finite termination property.
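One well-known reformulation of this kind (the abstract does not specify which one the paper uses) is the componentwise minimum map:

\[ x \text{ solves } \mathrm{LCP}(q, M) \iff \min\{x,\; Mx + q\} = 0 \quad (\text{componentwise}), \]

a nondifferentiable system whose nonsmooth points are exactly the components with x_i = (Mx + q)_i; near a solution where the two arguments are separated componentwise, the minimum map is locally affine, which is one intuition behind one-step (finite) termination.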

Journal ArticleDOI
TL;DR: The production-transportation problem involving an arbitrary fixed number of factories with concave production cost is solvable in strongly polynomial time by a parametric algorithm that exploits monotonicity of the objective function along certain directions.
Abstract: We show that the production-transportation problem involving an arbitrary fixed number of factories with concave production cost is solvable in strongly polynomial time. The algorithm is based on a parametric approach which takes full advantage of the specific structure of the problem: monotonicity of the objective function along certain directions, small proportion of nonlinear variables and combinatorial properties implied by transportation constraints.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the algorithmic options of Release A of LANCELOT, a Fortran package for large-scale nonlinear optimization, and present the results of intensive numerical tests.
Abstract: In this paper, we describe the algorithmic options of Release A of LANCELOT, a Fortran package for large-scale nonlinear optimization. We then present the results of intensive numerical tests and discuss the relative merits of the options. The experiments described involve both academic and applied problems. Finally, we propose conclusions, both specific to LANCELOT and of more general scope.

Journal ArticleDOI
TL;DR: It turns out that, under mild assumptions, each inequality that defines a facet for the (single) Steiner tree polyhedron can be lifted to a facet-defining inequality for the Steiner tree packing polyhedron.
Abstract: Let G = (V, E) be a graph and T ⊆ V be a node set. We call an edge set S a Steiner tree for T if S connects all pairs of nodes in T. In this paper we address the following problem, which we call the weighted Steiner tree packing problem. Given a graph G = (V, E) with edge weights w_e, edge capacities c_e, e ∈ E, and node sets T_1, …, T_N, find edge sets S_1, …, S_N such that each S_k is a Steiner tree for T_k, at most c_e of these edge sets use edge e for each e ∈ E, and the sum of the weights of the edge sets is minimal. Our motivation for studying this problem arises from a routing problem in VLSI-design, where given sets of points have to be connected by wires. We consider the Steiner tree packing problem from a polyhedral point of view and define an associated polyhedron, called the Steiner tree packing polyhedron. The goal of this paper is to (partially) describe this polyhedron by means of inequalities. It turns out that, under mild assumptions, each inequality that defines a facet for the (single) Steiner tree polyhedron can be lifted to a facet-defining inequality for the Steiner tree packing polyhedron. The main emphasis of this paper lies on the presentation of so-called joint inequalities that are valid and facet-defining for this polyhedron. Inequalities of this kind involve at least two Steiner trees. The classes of inequalities we have found form the basis of a branch & cut algorithm. This algorithm is described in our companion paper (in this issue).
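In compact and hedged notation (the paper's exact polyhedral definition may differ in details), one natural integer programming formulation of the problem just described is

\[ \min \sum_{k=1}^{N} \sum_{e \in E} w_e\, x_e^k
\quad\text{s.t.}\quad
\sum_{k=1}^{N} x_e^k \le c_e \;\; (e \in E), \qquad
x^k \in \{0,1\}^E \text{ the incidence vector of a Steiner tree for } T_k,\; k = 1,\dots,N, \]

and the Steiner tree packing polyhedron studied here is the convex hull of the feasible solutions of such a model.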