
Showing papers in "Mathematical Programming in 1995"


Journal ArticleDOI
TL;DR: The fleet assignment problem is modeled as a large multi-commodity flow problem with side constraints defined on a time-expanded network; the algorithm finds solutions with a maximum optimality gap of 0.02% and is more than two orders of magnitude faster than the default options of a standard LP-based branch-and-bound code.
Abstract: Given a flight schedule and set of aircraft, the fleet assignment problem is to determine which type of aircraft should fly each flight segment. This paper describes a basic daily, domestic fleet assignment problem and then presents chronologically the steps taken to solve it efficiently. Our model of the fleet assignment problem is a large multi-commodity flow problem with side constraints defined on a time-expanded network. These problems are often severely degenerate, which leads to poor performance of standard linear programming techniques. Also, the large number of integer variables can make finding optimal integer solutions difficult and time-consuming. The methods used to attack this problem include an interior-point algorithm, dual steepest edge simplex, cost perturbation, model aggregation, branching on set-partitioning constraints and prioritizing the order of branching. The computational results show that the algorithm finds solutions with a maximum optimality gap of 0.02% and is more than two orders of magnitude faster than using default options of a standard LP-based branch-and-bound code.
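
A hedged sketch of the model's typical structure (the notation below is illustrative, not necessarily the paper's): with binary variables x_{f,i} = 1 if fleet type f is assigned to flight leg i, and ground-arc variables y counting aircraft parked at a station, the core of such a model is

    (cover)    Σ_f x_{f,i} = 1                        for every flight leg i
    (balance)  flow conservation in x and y at every (fleet, station, time) node
               of the time-expanded network
    (count)    aircraft of fleet f in the air or on the ground at a fixed count
               time  ⩽  number of aircraft of type f available

with an assignment-cost objective; the side constraints mentioned above are added on top of this multi-commodity flow structure.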

420 citations


Journal ArticleDOI
TL;DR: A number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex—concave games and variational inequalities are described.
Abstract: In this paper we describe a number of new variants of bundle methods for nonsmooth unconstrained and constrained convex optimization, convex—concave games and variational inequalities. We outline the ideas underlying these methods and present rate-of-convergence estimates.

419 citations


Journal ArticleDOI
TL;DR: A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(−αx)), commonly used in neural networks; by means of this approximation, an exact solution can be obtained for finite α when a Slater constraint qualification is satisfied.
Abstract: A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(−αx)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 1142 times for linear inequalities of size 2000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 10 000 variables, the proposed approach was as much as 63 times faster than Lemke's method.
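
For concreteness, integrating the sigmoid gives the closed form p(x, α) = x + (1/α) ln(1 + e^(−αx)), which overestimates max{x, 0} by at most (ln 2)/α. The short Python check below is ours (the function name, test grid and parameter values are assumptions, not from the paper):

    import numpy as np

    def p(x, a):
        # p(x, a) = x + (1/a) * ln(1 + exp(-a*x)), written in an overflow-safe form:
        # x + (1/a) ln(1 + e^(-a x)) = (1/a) ln(1 + e^(a x))
        return np.logaddexp(0.0, a * x) / a

    x = np.linspace(-2.0, 2.0, 401)
    for a in (1.0, 10.0, 100.0):
        gap = np.max(np.abs(p(x, a) - np.maximum(x, 0.0)))
        print(a, gap)   # maximum gap is ln(2)/a, attained at x = 0

As α grows the approximation tightens, which is exactly the mechanism that lets the smooth unconstrained problems approach the original constrained ones.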

276 citations


Journal ArticleDOI
James Renegar
TL;DR: This work proposes analyzing interior-point methods using notions of problem-instance size that are direct generalizations of the condition number of a matrix and that are appropriate, in particular, in the context of semi-definite programming.
Abstract: We propose analyzing interior-point methods using notions of problem-instance size which are direct generalizations of the condition number of a matrix. The notions pertain to linear programming quite generally; the underlying vector spaces are not required to be finite-dimensional and, more importantly, the cones defining nonnegativity are not required to be polyhedral. Thus, for example, the notions are appropriate in the context of semi-definite programming. We prove various theorems to demonstrate how the notions can be used in analyzing interior-point methods. These theorems assume little more than that the interiors of the cones (defining nonnegativity) are the domains of self-concordant barrier functions.

267 citations


Journal ArticleDOI
TL;DR: The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming; an overall worst-case operation count of O(m^5.5 L^1.5) is proved.
Abstract: We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration. We describe in detail how the algorithm works for optimization problems with L Lyapunov inequalities, each of size m. We prove an overall worst-case operation count of O(m^5.5 L^1.5). The average-case complexity appears to be closer to O(m^4 L^1.5). This estimate is justified by extensive numerical experimentation, and is consistent with other researchers' experience with the practical performance of interior-point algorithms for linear programming. This result means that the computational cost of extending current control theory based on the solution of Lyapunov or Riccati equations to a theory that is based on the solution of (multiple, coupled) Lyapunov or Riccati inequalities is modest.
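
For reference, "convex optimization problems involving matrix inequalities" here means problems that can be written in the standard linear-matrix-inequality form (generic notation, not necessarily the paper's):

    minimize    c^T x
    subject to  F(x) = F_0 + x_1 F_1 + ... + x_n F_n ⪰ 0,

where the F_i are fixed symmetric matrices and ⪰ 0 denotes positive semidefiniteness; Lyapunov and algebraic Riccati inequalities are special cases of this constraint.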

249 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive survey of presolve methods and discusses the restoration procedure in detail, i.e., the procedure that undoes the presolve.
Abstract: Most modern linear programming solvers analyze the LP problem before submitting it to optimization. Some examples are the solvers WHIZARD (Tomlin and Welch, 1983), OB1 (Lustig et al., 1994), OSL (Forrest and Tomlin, 1992), Sciconic (1990) and CPLEX (Bixby, 1994). The purpose of the presolve phase is to reduce the problem size and to discover whether the problem is unbounded or infeasible. In this paper we present a comprehensive survey of presolve methods. Moreover, we discuss the restoration procedure in detail, i.e., the procedure that undoes the presolve. Computational results on the NETLIB problems (Gay, 1985) are reported to illustrate the efficiency of the presolve methods.
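
As a toy illustration of one classical presolve rule (our own sketch, not the paper's procedure): a singleton equality row a_ij x_j = b_i fixes x_j = b_i / a_ij, after which the fixed value is substituted into the remaining rows; the restoration step simply re-inserts the fixed components into the reduced solution. Consistency checks and iteration of the rule are omitted:

    import numpy as np

    def singleton_row_presolve(A, b):
        """Fix variables that appear alone in an equality row of A x = b."""
        A, b = A.astype(float), b.astype(float)
        fixed = {}
        for i in range(A.shape[0]):
            nz = np.flatnonzero(A[i])
            if len(nz) == 1:                     # singleton row: a_ij * x_j = b_i
                j = nz[0]
                fixed[j] = b[i] / A[i, j]
        for j, val in fixed.items():             # substitute the fixed values
            b = b - A[:, j] * val
            A[:, j] = 0.0
        return A, b, fixed

    A = np.array([[2.0, 0.0, 0.0],
                  [1.0, 1.0, 3.0]])
    b = np.array([4.0, 9.0])
    print(singleton_row_presolve(A, b))          # x_0 fixed at 2; second row becomes x_1 + 3 x_2 = 7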

229 citations


Journal ArticleDOI
TL;DR: This work gives a new algorithm that finds a feasible point in S in cases where an oracle is available; the algorithm uses the analytic center of a polytope as test point, and successively modifies the polytope with the separating hyperplanes returned by the oracle.
Abstract: An oracle for a convex set S ⊂ ℝ^n accepts as input any point z in ℝ^n, and if z ∈ S, then it returns ‘yes’, while if z ∉ S, then it returns ‘no’ along with a separating hyperplane. We give a new algorithm that finds a feasible point in S in cases where an oracle is available. Our algorithm uses the analytic center of a polytope as test point, and successively modifies the polytope with the separating hyperplanes returned by the oracle. The key to establishing convergence is that hyperplanes judged to be ‘unimportant’ are pruned from the polytope. If a ball of radius 2^(−L) is contained in S, and S is contained in a cube of side 2^(L+1), then we can show our algorithm converges after O(nL^2) iterations and performs a total of O(n^4 L^3 + TnL^2) arithmetic operations, where T is the number of arithmetic operations required for a call to the oracle. The bound is independent of the number of hyperplanes generated in the algorithm. An important application in which an oracle is available is minimizing a convex function over S.
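
The sketch below is our own minimal reading of such an analytic-center cutting-plane loop on a toy instance, not the paper's algorithm: in particular, the pruning of ‘unimportant’ hyperplanes that drives the bounds above is omitted, and the oracle (a Euclidean ball), its centre and radius, and the warm-start point are assumptions made only for the example:

    import numpy as np

    def barrier(A, b, x):
        s = b - A @ x
        return np.inf if np.any(s <= 0) else -np.sum(np.log(s))

    def analytic_center(A, b, x):
        # damped Newton on phi(x) = -sum_i log(b_i - a_i^T x), warm-started at x
        for _ in range(100):
            s = b - A @ x
            g = A.T @ (1.0 / s)                    # gradient of phi
            H = (A.T * (1.0 / s**2)) @ A           # Hessian of phi
            dx = np.linalg.solve(H, -g)            # Newton direction
            if -g @ dx < 1e-16:                    # squared Newton decrement
                break
            Adx = A @ dx
            t = 1.0
            if np.any(Adx > 0):                    # stay strictly inside the polytope
                t = min(1.0, 0.99 * np.min(s[Adx > 0] / Adx[Adx > 0]))
            while barrier(A, b, x + t * dx) > barrier(A, b, x) + 0.25 * t * (g @ dx):
                t *= 0.5                           # simple backtracking
            x = x + t * dx
        return x

    def oracle(z):
        # toy membership oracle for S = {x : ||x - c|| <= r}
        c, r = np.array([0.6, 0.2]), 0.3
        if np.linalg.norm(z - c) <= r:
            return True, None
        return False, (z - c) / np.linalg.norm(z - c)   # separating hyperplane normal

    A = np.vstack([np.eye(2), -np.eye(2)])   # initial localization polytope: the box [-1, 1]^2
    b = np.ones(4)
    x = analytic_center(A, b, np.zeros(2))
    for _ in range(50):
        inside, a = oracle(x)
        if inside:
            break
        A, b = np.vstack([A, a]), np.append(b, a @ x)    # cut a^T x <= a^T z through the test point
        x = analytic_center(A, b, np.array([0.6, 0.2]))  # warm start from a point known to be interior
    print(x)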

161 citations


Journal ArticleDOI
TL;DR: Most of the known classes of valid inequalities for the graphical travelling salesman polyhedron are considered and it is shown that the comb inequalities cannot improve the subtour bound by a factor greater than 10/9.
Abstract: We consider most of the known classes of valid inequalities for the graphical travelling salesman polyhedron and compute the worst-case improvement resulting from their addition to the subtour polyhedron. For example, we show that the comb inequalities cannot improve the subtour bound by a factor greater than 10/9. The corresponding factor for the class of clique tree inequalities is 8/7, while it is 4/3 for the path configuration inequalities.

158 citations


Journal ArticleDOI
TL;DR: A linear time on-line algorithm is proposed for which the expected difference between the optimum and the approximate solution value is O(log^(3/2) n), and an Ω(1) lower bound on the expected difference between the optimum and the solution found by any on-line algorithm is shown to hold.
Abstract: Different classes of on-line algorithms are developed and analyzed for the solution of {0, 1} and relaxed stochastic knapsack problems, in which both profit and size coefficients are random variables. In particular, a linear time on-line algorithm is proposed for which the expected difference between the optimum and the approximate solution value is O(log^(3/2) n). An Ω(1) lower bound on the expected difference between the optimum and the solution found by any on-line algorithm is also shown to hold.

149 citations


Journal ArticleDOI
TL;DR: This paper studies the Precedence-Constrained Asymmetric Traveling Salesman (PCATS) polytope, i.e. the convex hull of incidence vectors of tours in a precedence-constrained directed graph, and derives several families of valid inequalities.
Abstract: Many applications of the traveling salesman problem require the introduction of additional constraints. One of the most frequently occurring classes of such constraints are those requiring that certain cities be visited before others (precedence constraints). In this paper we study the Precedence-Constrained Asymmetric Traveling Salesman (PCATS) polytope, i.e. the convex hull of incidence vectors of tours in a precedence-constrained directed graph. We derive several families of valid inequalities, and give polynomial time separation algorithms for important subfamilies. We then establish the dimension of the PCATS polytope and show that, under reasonable assumptions, the two main classes of inequalities derived are facet inducing.

147 citations


Journal ArticleDOI
TL;DR: A global convergence result is established, with a quadratic rate under the regularity assumption, for the minimization of the convex composite function h ∘ F.
Abstract: An extension of the Gauss—Newton method for nonlinear equations to convex composite optimization is described and analyzed. Local quadratic convergence is established for the minimization of h ∘ F under two conditions, namely h has a set of weak sharp minima, C, and there is a regular point of the inclusion F(x) ∈ C. This result extends a similar convergence result due to Womersley (this journal, 1985) which employs the assumption of a strongly unique solution of the composite function h ∘ F. A backtracking line-search is proposed as a globalization strategy. For this algorithm, a global convergence result is established, with a quadratic rate under the regularity assumption.

Journal ArticleDOI
TL;DR: This work considers the problem of finding the maximum of a multivariate polynomial inside a convex polytope and shows that even when the polynomial is quadratic there is no polynomial time approximation unless NP is contained in quasi-polynomial time.
Abstract: We consider the problem of finding the maximum of a multivariate polynomial inside a convex polytope. We show that there is no polynomial time approximation algorithm for this problem, even one with a very poor guarantee, unless P = NP. We show that even when the polynomial is quadratic (i.e. quadratic programming) there is no polynomial time approximation unless NP is contained in quasi-polynomial time.

Journal ArticleDOI
TL;DR: This paper applies the cost scaling push-relabel method to the assignment problem and investigates implementations of the method that take advantage of assignment's special structure to show that it is very promising for practical use.
Abstract: The cost scaling push-relabel method has been shown to be efficient for solving minimum-cost flow problems. In this paper we apply the method to the assignment problem and investigate implementations of the method that take advantage of assignment's special structure. The results show that the method is very promising for practical use.

Journal ArticleDOI
TL;DR: For the first time to the authors' knowledge, a practical application of quadratic programming is presented for calculating the directional derivative in the case when the optimal multipliers are not unique.
Abstract: Consider a parametric nonlinear optimization problem subject to equality and inequality constraints. Conditions under which a locally optimal solution exists and depends in a continuous way on the parameter are well known. We show, under the additional assumption of constant rank of the active constraint gradients, that the optimal solution is actually piecewise smooth, hence B-differentiable. We show, for the first time to our knowledge, a practical application of quadratic programming to calculate the directional derivative in the case when the optimal multipliers are not unique.

Journal ArticleDOI
TL;DR: This work considers conceptual optimization methods combining two ideas: the Moreau—Yosida regularization in convex analysis, and quasi-Newton approximations of smooth functions, and outlines several approaches based on this combination, and establishes their global convergence.
Abstract: We consider conceptual optimization methods combining two ideas: the Moreau–Yosida regularization in convex analysis, and quasi-Newton approximations of smooth functions. Several approaches based on this combination are outlined and their global convergence is established.
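
For reference, the Moreau–Yosida regularization of a closed proper convex function f with parameter λ > 0 is

    F_λ(x) = min_y { f(y) + (1/(2λ)) ‖y − x‖² },

a finite, convex, differentiable function with the same minimizers as f and with ∇F_λ(x) = (x − p_λ(x))/λ, where p_λ(x) is the unique minimizer (the proximal point). It is this smooth function to which the quasi-Newton approximations are applied.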

Journal ArticleDOI
TL;DR: Optimization problems with variational inequality constraints are converted to constrained minimization of a locally Lipschitz function, which is solved by a non-differentiable optimization method; the approach is applied to the computation of Stackelberg—Cournot—Nash equilibria and to the numerical solution of a class of quasi-variational inequalities.
Abstract: Optimization problems with variational inequality constraints are converted to constrained minimization of a locally Lipschitz function. A non-differentiable optimization method is applied to this minimization; the required subgradients of the objective are computed by means of a special adjoint equation. Besides tests with some academic examples, the approach is applied to the computation of the Stackelberg—Cournot—Nash equilibria and to the numerical solution of a class of quasi-variational inequalities.

Journal ArticleDOI
TL;DR: It is shown that the ELCP can be viewed as a kind of unifying framework for the LCP and its various generalizations and an algorithm to find all its solutions is developed.
Abstract: In this paper we define the Extended Linear Complementarity Problem (ELCP), an extension of the well-known Linear Complementarity Problem (LCP). We show that the ELCP can be viewed as a kind of unifying framework for the LCP and its various generalizations. We study the general solution set of an ELCP and we develop an algorithm to find all its solutions. We also show that the general ELCP is an NP-hard problem.
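
For reference, the standard LCP reads: given M ∈ ℝ^(n×n) and q ∈ ℝ^n, find z such that

    z ⩾ 0,   w = Mz + q ⩾ 0,   z^T w = 0   (componentwise complementarity).

The ELCP keeps a system of linear (in)equalities but generalizes this complementarity condition; its precise form is the subject of the paper.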

Journal ArticleDOI
TL;DR: These methods require bounded storage, in contrast to the original level methods of Lemaréchal, Nemirovskii and Nesterov; extensions are given for solving convex constrained problems, convex-concave saddle-point problems and variational inequalities with monotone operators.
Abstract: We study proximal level methods for convex optimization that use projections onto successive approximations of level sets of the objective corresponding to estimates of the optimal value. We show that they enjoy almost optimal efficiency estimates. We give extensions for solving convex constrained problems, convex-concave saddle-point problems and variational inequalities with monotone operators. We present several variants, establish their efficiency estimates, and discuss possible implementations. In particular, our methods require bounded storage in contrast to the original level methods of Lemaréchal, Nemirovskii and Nesterov.

Journal ArticleDOI
TL;DR: New strongly polynomial algorithms are presented for special cases of convex separable quadratic minimization over submodular constraints, including an O(NM log(N^2/M)) algorithm for the problem Network, defined on a network on M arcs and N nodes.
Abstract: We present new strongly polynomial algorithms for special cases of convex separable quadratic minimization over submodular constraints. The main results are: an O(NM log(N^2/M)) algorithm for the problem Network, defined on a network on M arcs and N nodes; an O(n log n) algorithm for the tree problem on n variables; an O(n log n) algorithm for the Nested problem, and a linear time algorithm for the Generalized Upper Bound problem. These algorithms are the best known so far for these problems. The status of the general problem and open questions are presented as well.

Journal ArticleDOI
TL;DR: The theoretical analysis conducted in this paper seems to indicate that a sensitivity-based approach is rather promising for solving general nonlinear BLPP.
Abstract: This paper is concerned with general nonlinear nonconvex bilevel programming problems (BLPP). We derive necessary and sufficient conditions at a local solution and investigate the stability and sensitivity analysis at a local solution in the BLPP. We then explore an approach in which a bundle method is used in the upper-level problem with subgradient information from the lower-level problem. Two algorithms are proposed to solve the general nonlinear BLPP and are shown to converge to regular points of the BLPP under appropriate conditions. The theoretical analysis conducted in this paper seems to indicate that a sensitivity-based approach is rather promising for solving general nonlinear BLPP.

Journal ArticleDOI
TL;DR: It is proved that the rate of convergence of the second method is optimal uniformly in the number of variables, and that the approximate Hessian strategy significantly improves the total arithmetical complexity of the method.
Abstract: In this paper we establish the efficiency estimates for two cutting plane methods based on the analytic barrier. We prove that the rate of convergence of the second method is optimal uniformly in the number of variables. We present a modification of the second method. In this modified version each test point satisfies an approximate centering condition. We also use the standard strategy for updating approximate Hessians of the logarithmic barrier function. We prove that the rate of convergence of the modified scheme remains optimal and demonstrate that the number of Newton steps in the auxiliary minimization processes is bounded by an absolute constant. We also show that the approximate Hessian strategy significantly improves the total arithmetical complexity of the method.

Journal ArticleDOI
TL;DR: Under the monotonicity assumption, polynomial complexity bounds are established for two variants of the Mehrotra-type predictor—corrector interior-point algorithms.
Abstract: Recently, Mehrotra [3] proposed a predictor—corrector primal—dual interior-point algorithm for linear programming. At each iteration, this algorithm utilizes a combination of three search directions: the predictor, the corrector and the centering directions, and requires only one matrix factorization. At present, Mehrotra's algorithmic framework is widely regarded as the most practically efficient one and has been implemented in the highly successful interior-point code OB1 [2]. In this paper, we study the theoretical convergence properties of Mehrotra's interior-point algorithmic framework. For generality, we carry out our analysis on a horizontal linear complementarity problem that includes linear and quadratic programming, as well as the standard linear complementarity problem. Under the monotonicity assumption, we establish polynomial complexity bounds for two variants of the Mehrotra-type predictor—corrector interior-point algorithms. These results are summarized in the last section in a table.
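
As a rough sketch of the mechanism for the LP case (illustrative notation, not quoted from the paper): at an iterate (x, y, s) with duality measure μ = x^T s / n, the predictor (affine-scaling) direction solves the primal–dual Newton system with complementarity right-hand side −XSe; the step to the boundary along that direction yields μ_aff, the centering parameter is chosen heuristically as σ = (μ_aff/μ)^3, and the corrector (combined) direction re-solves the same system, reusing the single factorization, with right-hand side σμe − XSe − ΔX_aff ΔS_aff e.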

Journal ArticleDOI
TL;DR: Even small instances of this combined design/routing problem are extremely intractable; computational experience with a cutting plane algorithm for this problem is described.
Abstract: The following problem arises in the study of lightwave networks. Given a demand matrix containing amounts to be routed between corresponding nodes, we wish to design a network with certain topological features, and in this network, route all the demands, so that the maximum load (total flow) on any edge is minimized. As we show, even small instances of this combined design/routing problem are extremely intractable. We describe computational experience with a cutting plane algorithm for this problem.

Journal ArticleDOI
TL;DR: Conclusions are presented on the stability of optimal values and optimal solutions of the two-stage stochastic program when the underlying probability measure is subjected to perturbations.
Abstract: For two-stage stochastic programs with integrality constraints in the second stage, we study continuity properties of the expected recourse as a function both of the first-stage policy and the integrating probability measure.

Journal ArticleDOI
TL;DR: The method of cutting planes from analytic centers applied to similar formulations is investigated; new cutting planes are generated not from the optimum of the linear programming relaxation, but from the analytic center of the set of localization.
Abstract: The stochastic linear programming problem with recourse has a dual block-angular structure. It can thus be handled by Benders' decomposition or by Kelley's method of cutting planes; equivalently the dual problem has a primal block-angular structure and can be handled by Dantzig-Wolfe decomposition—the two approaches are in fact identical by duality. Here we shall investigate the use of the method of cutting planes from analytic centers applied to similar formulations. The only significant difference from the aforementioned methods is that new cutting planes (or columns, by duality) will be generated not from the optimum of the linear programming relaxation, but from the analytic center of the set of localization.

Journal ArticleDOI
TL;DR: It is proved that Weiszfeld's algorithm converges to the unique optimal solution for all but a denumerable set of starting points if, and only if, the convex hull of the given points is of dimension N.
Abstract: The Fermat—Weber location problem requires finding a point in ℝ^N that minimizes the sum of weighted Euclidean distances to m given points. A one-point iterative method was first introduced by Weiszfeld in 1937 to solve this problem. Since then several research articles have been published on the method and generalizations thereof. Global convergence of Weiszfeld's algorithm was proven in a seminal paper by Kuhn in 1973. However, since the m given points are singular points of the iteration functions, convergence is conditional on none of the iterates coinciding with one of the given points. In addressing this problem, Kuhn concluded that whenever the m given points are not collinear, Weiszfeld's algorithm will converge to the unique optimal solution except for a denumerable set of starting points. As late as 1989, Chandrasekaran and Tamir demonstrated with counter-examples that convergence may not occur for continuous sets of starting points when the given points are contained in an affine subspace of ℝ^N. We resolve this open question by proving that Weiszfeld's algorithm converges to the unique optimal solution for all but a denumerable set of starting points if, and only if, the convex hull of the given points is of dimension N.
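
For concreteness, Weiszfeld's iteration maps the current point x to a weighted average of the given points a_1, …, a_m: x ← (Σ_i w_i a_i/‖x − a_i‖) / (Σ_i w_i/‖x − a_i‖), which is undefined whenever x coincides with one of the a_i. A minimal Python sketch (the function name, tolerances and the small example are ours):

    import numpy as np

    def weiszfeld(points, weights, x0, tol=1e-8, max_iter=1000):
        # one-point iterative scheme for the weighted Fermat-Weber problem;
        # points is an (m, N) array of the given points, weights an (m,) array
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            d = np.linalg.norm(points - x, axis=1)
            if np.any(d < 1e-12):          # iterate hit one of the given points; stop
                break
            inv = weights / d
            x_new = (points * inv[:, None]).sum(axis=0) / inv.sum()
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
    print(weiszfeld(pts, np.ones(3), x0=pts.mean(axis=0)))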

Journal ArticleDOI
TL;DR: In this paper, the authors considered the special class of cooperative sequencing games that arise from one-machine sequencing situations in which all jobs have equal processing times and the ready time of each job is a multiple of the processing time.
Abstract: This paper considers the special class of cooperative sequencing games that arise from one-machine sequencing situations in which all jobs have equal processing times and the ready time of each job is a multiple of the processing time. By establishing relations between optimal orders of subcoalitions, it is shown that each sequencing game within this class is convex.

Journal ArticleDOI
TL;DR: A transformation between the two classes is considered which enables Lehman's modified theorem about deletion-minimal nonideal matrices to be applied to obtain new results about packing polyhedra; this yields a polyhedral description of the stable set polytopes of near-bipartite graphs.
Abstract: A 0–1 matrix A is ideal if the polyhedron Q(A) = conv{x ∈ Q^V : A ⋅ x ⩾ 1, x ⩾ 0} (V denotes the column index set of A) is integral. Similarly a matrix is perfect if P(A) = conv{x ∈ Q^V : A ⋅ x ⩽ 1, x ⩾ 0} is integral. Little is known about the relationship between these two classes of matrices. We consider a transformation between the two classes which enables us to apply Lehman's modified theorem about deletion-minimal nonideal matrices to obtain new results about packing polyhedra. This results in a polyhedral description for the stable set polytopes of near-bipartite graphs (the deletion of any neighbourhood produces a bipartite graph). Note that this class includes the complements of line graphs. To date, this is the only natural class, besides the perfect graphs, for which such a description is known for the graphs and their complements. Some remarks are also made on possible approaches to describing the stable set polyhedra of quasi-line graphs, and more generally claw-free graphs. These results also yield a new class of t-perfect graphs.

Journal ArticleDOI
TL;DR: An entropy-like proximal method for the minimization of a convex function subject to positivity constraints is extended to an interior algorithm in two directions, to general linearly constrained convex minimization problems and to variational inequalities on polyhedra.
Abstract: In this paper, an entropy-like proximal method for the minimization of a convex function subject to positivity constraints is extended to an interior algorithm in two directions. First, to general linearly constrained convex minimization problems and second, to variational inequalities on polyhedra. For linear programming, numerical results are presented and quadratic convergence is established.
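
For orientation, one standard form of such an entropy-like proximal step (a generic form; the paper's precise kernel may differ) replaces the quadratic term of the classical proximal point method by a Kullback–Leibler-type distance:

    x^(k+1) ∈ argmin_{x > 0} { f(x) + (1/c_k) Σ_i ( x_i ln(x_i / x_i^k) − x_i + x_i^k ) },

which keeps all iterates strictly positive and therefore yields an interior method; the extensions described above carry this idea to general linear constraints and to variational inequalities on polyhedra.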

Journal ArticleDOI
TL;DR: This work develops and introduces a new approach, based on a geometric interpretation of some recent results in Gröbner basis theory, to provide a solution method applicable to a general class of chance constrained integer programming problems.
Abstract: We study here a problem of scheduling n job types on m parallel machines, when setups are required and the demands for the products are correlated random variables. We model this problem as a chance constrained integer program. Methods of solution currently available—in integer programming and stochastic programming—are not sufficient to solve this model exactly. We develop and introduce here a new approach, based on a geometric interpretation of some recent results in Gröbner basis theory, to provide a solution method applicable to a general class of chance constrained integer programming problems. Our algorithm is conceptually simple and easy to implement. Starting from a (possibly) infeasible solution, we move from one lattice point to another in a monotone manner regularly querying a membership oracle for feasibility until the optimal solution is found. We illustrate this methodology by solving a problem based on a real system.