
Showing papers in "Mathematical Programming in 1997"


Journal ArticleDOI
TL;DR: In this article, a survey of clustering from a mathematical programming viewpoint is presented, focusing on solution methods, i.e., dynamic programming, graph theoretical algorithms, branch-and-bound, cutting planes, column generation and heuristics.
Abstract: Given a set of entities, Cluster Analysis aims at finding subsets, called clusters, which are homogeneous and/or well separated. As many types of clustering and criteria for homogeneity or separation are of interest, this is a vast field. A survey is given from a mathematical programming viewpoint. Steps of a clustering study, types of clustering and criteria are discussed. Then algorithms for hierarchical, partitioning, sequential, and additive clustering are studied. Emphasis is on solution methods, i.e., dynamic programming, graph theoretical algorithms, branch-and-bound, cutting planes, column generation and heuristics.

600 citations


Journal ArticleDOI
TL;DR: This paper gives a comprehensive, state-of-the-art survey of the extensive theory and rich applications of error bounds for inequality and optimization systems and solution sets of equilibrium problems.
Abstract: Originated from the practical implementation and numerical considerations of iterative methods for solving mathematical programs, the study of error bounds has grown and proliferated in many interesting areas within mathematical programming. This paper gives a comprehensive, state-of-the-art survey of the extensive theory and rich applications of error bounds for inequality and optimization systems and solution sets of equilibrium problems.

514 citations


Journal ArticleDOI
TL;DR: Under a monotonicity hypothesis it is shown that equilibrium solutions can be found via iterative convex minimization.
Abstract: We compute constrained equilibria satisfying an optimality condition. Important examples include convex programming, saddle problems, noncooperative games, and variational inequalities. Under a monotonicity hypothesis we show that equilibrium solutions can be found via iterative convex minimization. In the main algorithm each stage of computation requires two proximal steps, possibly using Bregman functions. One step serves to predict the next point; the other helps to correct the new prediction. To enhance practical applicability we tolerate numerical errors.

452 citations
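The predict-then-correct structure described in the abstract can be sketched with plain Euclidean projections standing in for the proximal/Bregman steps; the function names, the bilinear example, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def extragradient(F, project, x0, step=0.3, iters=500):
    """Each iteration takes two projected (proximal-style) steps:
    one to predict the next point, one to correct the prediction."""
    x = x0
    for _ in range(iters):
        y = project(x - step * F(x))   # prediction step
        x = project(x - step * F(y))   # correction step
    return x

# Illustrative monotone problem: the bilinear saddle point min_u max_v u*v
# over the box [-1, 1]^2, whose VI operator F(u, v) = (v, -u) is monotone.
F = lambda z: np.array([z[1], -z[0]])
project = lambda z: np.clip(z, -1.0, 1.0)
sol = extragradient(F, project, np.array([0.7, -0.5]))
# sol is (numerically) the unique saddle point (0, 0)
```

Note that a single projected step with this operator merely rotates around the solution; it is the second, corrected step that produces convergence.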


Journal ArticleDOI
TL;DR: The main topics covered include the Lovász theta function and its applications to stable sets, perfect graphs, and coding theory, the automatic generation of strong valid inequalities, and the embedding of finite metric spaces.
Abstract: We discuss the use of semidefinite programming for combinatorial optimization problems. The main topics covered include (i) the Lovász theta function and its applications to stable sets, perfect graphs, and coding theory, (ii) the automatic generation of strong valid inequalities, (iii) the maximum cut problem and related problems, and (iv) the embedding of finite metric spaces and its relationship to the sparsest cut problem.

302 citations


Journal ArticleDOI
TL;DR: If F is monotone in a neighbourhood of x, it is proved that 0 ∈ ∂Ψ(x) is necessary and sufficient for x to be a solution of CP(F), and the generalized Newton method is shown to be locally well defined and superlinearly convergent with order 1+p.
Abstract: The paper deals with complementarity problems CP(F), where the underlying function F is assumed to be locally Lipschitzian. Based on a special equivalent reformulation of CP(F) as a system of equations Φ(x) = 0, or as the problem of minimizing the merit function Ψ = ½‖Φ‖₂², we extend results which hold for sufficiently smooth functions F to the nonsmooth case. In particular, if F is monotone in a neighbourhood of x, it is proved that 0 ∈ ∂Ψ(x) is necessary and sufficient for x to be a solution of CP(F). Moreover, for monotone functions F, a simple derivative-free algorithm that reduces Ψ is shown to possess global convergence properties. Finally, the local behaviour of a generalized Newton method is analyzed. To this end, the result by Mifflin that the composition of semismooth functions is again semismooth is extended to p-order semismooth functions. Under a suitable regularity condition, and if F is p-order semismooth, the generalized Newton method is shown to be locally well defined and superlinearly convergent with order 1+p.

296 citations


Journal ArticleDOI
TL;DR: An introduction to a new class of derivative free methods for unconstrained optimization in the context of a trust region framework that focuses on techniques that ensure a suitable “geometric quality” of the considered models.
Abstract: We present an introduction to a new class of derivative free methods for unconstrained optimization. We start by discussing the motivation for such methods and why they are in high demand by practitioners. We then review the past developments in this field, before introducing the features that characterize the newer algorithms. In the context of a trust region framework, we focus on techniques that ensure a suitable “geometric quality” of the considered models. We then outline the class of algorithms based on these techniques, as well as their respective merits. We finally conclude the paper with a discussion of open questions and perspectives.

278 citations


Journal ArticleDOI
TL;DR: It is shown that primal and dual nondegeneracy and strict complementarity all hold generically, and numerical experiments suggest probability distributions for the ranks of X and Z which are consistent with the nondegeneracy conditions.
Abstract: Primal and dual nondegeneracy conditions are defined for semidefinite programming. Given the existence of primal and dual solutions, it is shown that primal nondegeneracy implies a unique dual solution and that dual nondegeneracy implies a unique primal solution. The converses hold if strict complementarity is assumed. Primal and dual nondegeneracy assumptions do not imply strict complementarity, as they do in LP. The primal and dual nondegeneracy assumptions imply a range of possible ranks for primal and dual solutions X and Z. This is in contrast with LP, where nondegeneracy assumptions exactly determine the number of variables which are zero. It is shown that primal and dual nondegeneracy and strict complementarity all hold generically. Numerical experiments suggest probability distributions for the ranks of X and Z which are consistent with the nondegeneracy conditions.

277 citations


Journal ArticleDOI
TL;DR: In this paper, an exact dual is derived for Semidefinite Programming (SDP), for which strong duality properties hold without any regularity assumptions, and the dual is then applied to derive certain complexity results for SDP.
Abstract: In this paper, we present a new and more complete duality for Semidefinite Programming (SDP), with the following features: (i) the dual is an explicit semidefinite program, whose number of variables and coefficient bitlengths are polynomial in those of the primal; (ii) if the primal is feasible, then it is bounded if and only if the dual is feasible; (iii) the duality gap, i.e., the difference between the primal and the dual objective function values, is zero whenever the primal is feasible and bounded, and in this case the dual attains its optimum; (iv) it yields a precise Farkas Lemma for semidefinite feasibility systems, i.e., a characterization of the infeasibility of a semidefinite inequality in terms of the feasibility of another polynomial-size semidefinite inequality. Note that the standard duality for Linear Programming satisfies all of the above features, but no such duality theory was previously known for SDP without Slater-like conditions being assumed. We then apply the dual to derive certain complexity results for Semidefinite Programming. The decision problem of Semidefinite Feasibility (SDFP), i.e., that of determining if a given semidefinite inequality system is feasible, is the central problem of interest. The complexity of SDFP is unknown, but we show the following: 1) in the Turing machine model, SDFP is not NP-Complete unless NP = Co-NP; 2) in the real number model of Blum, Shub and Smale, SDFP is in NP ∩ Co-NP. We then give polynomial reductions from the following problems to SDFP: 1) checking whether an SDP is bounded; 2) checking whether a feasible and bounded SDP attains the optimum; 3) checking the optimality of a feasible solution.

257 citations


Journal ArticleDOI
TL;DR: A dual simplex type method is studied that solves (TRS) as a parametric eigenvalue problem and the essential cost of the algorithm is the matrix-vector multiplication and, thus, sparsity can be exploited.
Abstract: Primal-dual pairs of semidefinite programs provide a general framework for the theory and algorithms for the trust region subproblem (TRS). This latter problem consists in minimizing a general quadratic function subject to a convex quadratic constraint and, therefore, it is a generalization of the minimum eigenvalue problem. The importance of (TRS) is due to the fact that it provides the step in trust region minimization algorithms. The semidefinite framework is studied as an interesting instance of semidefinite programming as well as a tool for viewing known algorithms and deriving new algorithms for (TRS). In particular, a dual simplex type method is studied that solves (TRS) as a parametric eigenvalue problem. This method uses the Lanczos algorithm for the smallest eigenvalue as a black box. Therefore, the essential cost of the algorithm is the matrix-vector multiplication and, thus, sparsity can be exploited. A primal simplex type method provides steps for the so-called hard case. Extensive numerical tests for large sparse problems are discussed. These tests show that the cost of the algorithm is 1 + α(n) times the cost of finding a minimum eigenvalue using the Lanczos algorithm, where 0 < α(n) < 1 is a fraction which decreases as the dimension increases.

250 citations


Journal ArticleDOI
TL;DR: Many problems arising in traffic planning can be modelled and solved using discrete optimization, and this chapter focuses on recent developments which were applied to large scale real world instances.
Abstract: Many problems arising in traffic planning can be modelled and solved using discrete optimization. We will focus on recent developments which were applied to large scale real world instances. Most railroad companies apply a hierarchically structured planning process. Starting with the definition of the underlying network used for transport one has to decide which infrastructural improvements are necessary. Usually, the rail system is periodically scheduled. A fundamental component of the schedule is the set of lines connecting several stations with a fixed frequency. Possible objectives for the construction of the line plan may be the minimization of the total cost or the maximization of the passengers' comfort satisfying certain regulations. After the lines of the system are fixed, the train schedule can be determined. A criterion for the quality of a schedule is the total transit time of the passengers including the waiting time which should be minimized satisfying some operational constraints. For each trip of the schedule a train consisting of a locomotive and some carriages is needed for service. The assignment of rolling stock to schedule trips has to satisfy operational requirements. A comprehensible objective is to minimize the total cost. After all strategic and tactical planning the schedule has to be realized. Several external influences, for example delayed trains, force the dispatcher to recompute parts of the schedule on-line.

243 citations


Journal ArticleDOI
TL;DR: A new line search algorithm that ensures global convergence of the Polak-Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions and defines adaptive rules for the choice of the parameters in a way that the first stationary point along a search direction can be eventually accepted when the algorithm is converging to a minimum point with positive definite Hessian matrix.
Abstract: In this paper we propose a new line search algorithm that ensures global convergence of the Polak-Ribière conjugate gradient method for the unconstrained minimization of nonconvex differentiable functions. In particular, we show that with this line search every limit point produced by the Polak-Ribière iteration is a stationary point of the objective function. Moreover, we define adaptive rules for the choice of the parameters in a way that the first stationary point along a search direction can be eventually accepted when the algorithm is converging to a minimum point with positive definite Hessian matrix. Under strong convexity assumptions, the known global convergence results can be reobtained as a special case. From a computational point of view, we may expect that an algorithm incorporating the step-size acceptance rules proposed here will retain the same good features of the Polak-Ribière method, while avoiding pathological situations.
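For orientation, a minimal sketch of the Polak-Ribière iteration: it uses the common PR+ truncation and a plain Armijo backtracking search as stand-ins for the paper's specially designed acceptance rules, and restarts with the steepest-descent direction as a safeguard; all names and constants are illustrative, not from the paper.

```python
import numpy as np

def prplus_cg(f, grad, x0, iters=200, tol=1e-8, c=1e-4):
    """Conjugate gradient with the Polak-Ribiere(+) beta and an Armijo
    backtracking line search; restarts with -g whenever the updated
    direction fails to be a descent direction."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        a = 1.0                                    # Armijo backtracking
        while f(x + a * d) > f(x) + c * a * (g @ d):
            a *= 0.5
        x_new = x + a * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
        d = -g_new + beta * d
        if g_new @ d >= 0.0:                       # safeguard: restart
            d = -g_new
        x, g = x_new, g_new
    return x

# Illustrative strictly convex quadratic f(x) = x^T Q x / 2
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
xmin = prplus_cg(f, grad, np.array([4.0, -3.0]))
# xmin is (numerically) the minimizer (0, 0)
```

The paper's point is precisely that, with the right line search, the unmodified Polak-Ribière beta is globally convergent without the truncation used here.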

Journal ArticleDOI
TL;DR: The theoretical foundations of the binarization process studying the combinatorial optimization problems related to the minimization of the number of binary variables are developed and compact linear integer programming formulations of them are constructed.
Abstract: “Logical analysis of data” (LAD) is a methodology developed since the late eighties, aimed at discovering hidden structural information in data sets. LAD was originally developed for analyzing binary data by using the theory of partially defined Boolean functions. An extension of LAD for the analysis of numerical data sets is achieved through the process of “binarization” consisting in the replacement of each numerical variable by binary “indicator” variables, each showing whether the value of the original variable is above or below a certain level. Binarization was successfully applied to the analysis of a variety of real life data sets. This paper develops the theoretical foundations of the binarization process studying the combinatorial optimization problems related to the minimization of the number of binary variables. To provide an algorithmic framework for the practical solution of such problems, we construct compact linear integer programming formulations of them. We develop polynomial time algorithms for some of these minimization problems, and prove NP-hardness of others.
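The binarization step itself is easy to sketch; the data and cut points below are made-up illustrations (the paper's contribution is the optimization problem of choosing as few such indicator variables as possible).

```python
def binarize(values, cutpoints):
    """Replace a numerical variable by binary indicator variables:
    indicator j of a value v is 1 exactly when v lies above cutpoints[j]."""
    return [[1 if v > t else 0 for t in cutpoints] for v in values]

# Made-up data: cut points placed between consecutive distinct values
vals = [2.0, 3.5, 5.0]
cuts = [2.75, 4.25]
rows = binarize(vals, cuts)
# rows == [[0, 0], [1, 0], [1, 1]]
```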

Journal ArticleDOI
TL;DR: In this paper, the minimum cost flow problem is solved in O(min(n²m log nC, n²m² log n)) time, where n is the number of nodes in the network, m is the number of arcs, and C denotes the maximum absolute arc cost if arc costs are integer and ∞ otherwise.
Abstract: Developing a polynomial time primal network simplex algorithm for the minimum cost flow problem has been a long standing open problem. In this paper, we develop one such algorithm that runs in O(min(n²m log nC, n²m² log n)) time, where n is the number of nodes in the network, m is the number of arcs, and C denotes the maximum absolute arc cost if arc costs are integer and ∞ otherwise. We first introduce a pseudopolynomial variant of the network simplex algorithm called the “premultiplier algorithm”. We then develop a cost-scaling version of the premultiplier algorithm that solves the minimum cost flow problem in O(min(nm log nC, nm² log n)) pivots. With certain simple data structures, the average time per pivot can be shown to be O(n). We also show that the diameter of the network polytope is O(nm log n).

Journal ArticleDOI
TL;DR: This paper discusses in some detail the solution techniques currently adopted at the Italian railway company, Ferrovie dello Stato SpA, for solving crew scheduling and rostering problems.
Abstract: Crew management is concerned with building the work schedules of crews needed to cover a planned timetable. This is a well-known problem in Operations Research and has been historically associated with airlines and mass-transit companies. More recently, railway applications have also come on the scene, especially in Europe. In practice, the overall crew management problem is decomposed into two subproblems, called crew scheduling and crew rostering. In this paper, we give an outline of different ways of modeling the two subproblems and possible solution methods. Two main solution approaches are illustrated for real-world applications. In particular we discuss in some detail the solution techniques currently adopted at the Italian railway company, Ferrovie dello Stato SpA, for solving crew scheduling and rostering problems.

Journal ArticleDOI
TL;DR: Basic ideas of cutting plane methods, augmented Lagrangian and splitting methods, and stochastic decomposition methods for convex polyhedral multi-stage stochastic programming problems are reviewed.
Abstract: Stochastic programming problems have very large dimension and characteristic structures which are tractable by decomposition. We review basic ideas of cutting plane methods, augmented Lagrangian and splitting methods, and stochastic decomposition methods for convex polyhedral multi-stage stochastic programming problems.

Journal ArticleDOI
TL;DR: Convexity, duality and first-order optimality conditions for nonlinear semidefinite programming problems are presented and sensitivity analysis of such programs is discussed.
Abstract: In this paper we study nonlinear semidefinite programming problems. Convexity, duality and first-order optimality conditions for such problems are presented. A second-order analysis is also given. Second-order necessary and sufficient optimality conditions are derived. Finally, sensitivity analysis of such programs is discussed.

Journal ArticleDOI
TL;DR: An efficient method for computing the two directions when the semidefinite program to be solved is large scale and sparse is proposed.
Abstract: The Helmberg-Rendl-Vanderbei-Wolkowicz/Kojima-Shindoh-Hara/Monteiro and Nesterov-Todd search directions have been used in many primal-dual interior-point methods for semidefinite programs. This paper proposes an efficient method for computing the two directions when the semidefinite program to be solved is large scale and sparse.

Journal ArticleDOI
TL;DR: To minimize a convex function, this work combines Moreau-Yosida regularizations, quasi-Newton matrices and bundling mechanisms, and incorporates a bundle strategy together with a “curve-search”.
Abstract: To minimize a convex function, we combine Moreau-Yosida regularizations, quasi-Newton matrices and bundling mechanisms. First we develop conceptual forms using "reversal" quasi-Newton formulae and we state their global and local convergence. Then, to produce implementable versions, we incorporate a bundle strategy together with a "curve-search". No convergence results are given for the implementable versions; however some numerical illustrations show their good behaviour even for large-scale problems.

Journal ArticleDOI
TL;DR: A new algorithm for the solution of large-scale nonlinear complementarity problems is introduced, based on a nonsmooth equation reformulation of the complementarity problem and on an inexact Levenberg-Marquardt-type algorithm for its solution.
Abstract: A new algorithm for the solution of large-scale nonlinear complementarity problems is introduced. The algorithm is based on a nonsmooth equation reformulation of the complementarity problem and on an inexact Levenberg-Marquardt-type algorithm for its solution. Under mild assumptions, and requiring only the approximate solution of a linear system at each iteration, the algorithm is shown to be both globally and superlinearly convergent, even on degenerate problems. Numerical results for problems with up to 10 000 variables are presented.
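To illustrate the reformulation idea: the Fischer-Burmeister function is one standard way (not necessarily the paper's exact choice) to turn the complementarity conditions x ≥ 0, F(x) ≥ 0, x·F(x) = 0 into a square system of nonsmooth equations. The plain finite-difference Newton loop below is only a stand-in for the paper's inexact Levenberg-Marquardt method, and all names are illustrative.

```python
import numpy as np

def fischer_burmeister(a, b):
    """phi(a, b) = 0  <=>  a >= 0, b >= 0 and a*b = 0."""
    return np.sqrt(a * a + b * b) - a - b

def ncp_residual(F, x):
    # Componentwise FB function: Phi(x) = 0 iff x solves the NCP for F
    return fischer_burmeister(x, F(x))

def solve_ncp(F, x0, iters=50, tol=1e-10):
    """Plain Newton on the FB system with a tiny Tikhonov regularization
    and a forward-difference Jacobian (illustration only)."""
    x = x0.astype(float)
    n = x.size
    for _ in range(iters):
        r = ncp_residual(F, x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):                 # forward-difference Jacobian
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (ncp_residual(F, x + e) - r) / h
        x = x - np.linalg.solve(J + 1e-8 * np.eye(n), r)
    return x

# Illustrative NCP with F(x) = x - 1: its unique solution is x = (1, 1)
F = lambda x: x - 1.0
sol = solve_ncp(F, np.array([3.0, 0.5]))
```

For large-scale problems, the paper replaces the exact linear solve with an approximate one, which is what makes the inexact approach practical at 10 000 variables.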

Journal ArticleDOI
TL;DR: Two new trust-region methods for solving nonlinear optimization problems over convex feasible domains are presented, distinguished by the fact that they do not enforce strict monotonicity of the objective function values at successive iterates.
Abstract: This paper presents two new trust-region methods for solving nonlinear optimization problems over convex feasible domains. These methods are distinguished by the fact that they do not enforce strict monotonicity of the objective function values at successive iterates. The algorithms are proved to be convergent to critical points of the problem from any starting point. Extensive numerical experiments show that this approach is competitive with the LANCELOT package.

Journal ArticleDOI
TL;DR: An algorithm is presented for solving the classical posynomial geometric programming dual pair of problems simultaneously by means of a primal-dual infeasible algorithm; computational results indicate that the algorithm is effective regardless of the degree of difficulty.
Abstract: In this paper an algorithm is presented for solving the classical posynomial geometric programming dual pair of problems simultaneously. The approach is by means of a primal-dual infeasible algorithm developed simultaneously for (i) the dual geometric program after logarithmic transformation of its objective function, and (ii) its Lagrangian dual program. Under rather general assumptions, the mechanism defines a primal-dual infeasible path from a specially constructed, perturbed Karush-Kuhn-Tucker system. Subfeasible solutions, as described by Duffin in 1956, are generated for each program whose primal and dual objective function values converge to the respective primal and dual program values. The basic technique is one of a predictor-corrector type involving Newton’s method applied to the perturbed KKT system, coupled with effective techniques for choosing iterate directions and step lengths. We also discuss implementation issues and some sparse matrix factorizations that take advantage of the very special structure of the Hessian matrix of the logarithmically transformed dual objective function. Our computational results on 19 of the most challenging GP problems found in the literature are encouraging. The performance indicates that the algorithm is effective regardless of the degree of difficulty, which is a generally accepted measure in geometric programming.

Journal ArticleDOI
TL;DR: The paper deals with nonlinear multicommodity flow problems with convex costs and proposes a decomposition method that takes full advantage of the supersparsity of the network in the linear algebra operations.
Abstract: The paper deals with nonlinear multicommodity flow problems with convex costs. A decomposition method is proposed to solve them. The approach applies a potential reduction algorithm to solve the master problem approximately and a column generation technique to define a sequence of primal linear programming problems. Each subproblem consists of finding a minimum cost flow between an origin and a destination node in an uncapacitated network. It is thus formulated as a shortest path problem and solved with Dijkstra's d-heap algorithm. An implementation is described that takes full advantage of the supersparsity of the network in the linear algebra operations. Computational results show the efficiency of this approach on well-known nondifferentiable problems and also large scale randomly generated problems (up to 1000 arcs and 5000 commodities).
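The column-generation subproblem above is an ordinary shortest-path computation; a minimal sketch using Python's heapq binary heap where the paper uses Dijkstra's d-heap variant (the tiny network and node names are made up for illustration).

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source; adj maps a node to a list of
    (neighbor, arc_cost) pairs with nonnegative costs."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, cost in adj.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny illustrative network: origin "o", destination "d"
adj = {"o": [("a", 1.0), ("b", 4.0)],
       "a": [("b", 2.0), ("d", 7.0)],
       "b": [("d", 1.0)]}
dist = dijkstra(adj, "o")
# dist["d"] == 4.0, along o -> a -> b -> d
```

A d-heap merely changes the branching factor of the heap; the algorithm and its output are the same.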

Journal ArticleDOI
TL;DR: The emphasis of the presentation is on numerical procedures for this type of problem, and it is shown that the problems after discretization can be rewritten as mathematical programming problems of special form.
Abstract: This paper deals with a central question of structural optimization which is formulated as the problem of finding the stiffest structure which can be made when both the distribution of material as well as the material itself can be freely varied. We consider a general multi-load formulation and include the possibility of unilateral contact. The emphasis of the presentation is on numerical procedures for this type of problem, and we show that the problems after discretization can be rewritten as mathematical programming problems of special form. We propose iterative optimization algorithms based on penalty-barrier methods and interior-point methods and show a broad range of numerical examples that demonstrates the efficiency of our approach.

Journal ArticleDOI
TL;DR: A polytope St(A) is constructed whose normal cones are the equivalence classes, and the union of the reduced Gröbner bases as c varies (called the universal Gröbner basis) consists precisely of the edge directions of St(A).
Abstract: We study the problem of minimizing c·x subject to A·x = b, x ≥ 0 and x integral, for a fixed matrix A. Two cost functions c and c′ are considered equivalent if they give the same optimal solutions for each b. We construct a polytope St(A) whose normal cones are the equivalence classes. Explicit inequality presentations of these cones are given by the reduced Gröbner bases associated with A. The union of the reduced Gröbner bases as c varies (called the universal Gröbner basis) consists precisely of the edge directions of St(A). We present geometric algorithms for computing St(A), the Graver basis, and the universal Gröbner basis.

Journal ArticleDOI
TL;DR: Representing trees by Euler tours leads to a simple implementation with O(log n) time per tree operation, where n is the number of tree vertices; applied to the network simplex algorithm, this gives O(log n) time per pivot.
Abstract: The dynamic tree is an abstract data type that allows the maintenance of a collection of trees subject to joining by adding edges (linking) and splitting by deleting edges (cutting), while at the same time allowing reporting of certain combinations of vertex or edge values. For many applications of dynamic trees, values must be combined along paths. For other applications, values must be combined over entire trees. For the latter situation, an idea used originally in parallel graph algorithms, to represent trees by Euler tours, leads to a simple implementation with a time of O(log n) per tree operation, where n is the number of tree vertices. We apply this representation to the implementation of two versions of the network simplex algorithm, resulting in a time of O(log n) per pivot, where n is the number of vertices in the problem network.

Journal ArticleDOI
TL;DR: The class of weight inequalities is introduced, needed to describe the knapsack polyhedron when the weights of the items lie in certain intervals, and the properties of lifted minimal cover inequalities are extended to this general class of inequalities.
Abstract: This paper deals with the 0/1 knapsack polytope. In particular, we introduce the class of weight inequalities. This class of inequalities is needed to describe the knapsack polyhedron when the weights of the items lie in certain intervals. A generalization of weight inequalities yields the so-called "weight-reduction principle" and the class of extended weight inequalities. The latter class of inequalities includes minimal cover and (l,k)-configuration inequalities. The properties of lifted minimal cover inequalities are extended to this general class of inequalities.

Journal ArticleDOI
TL;DR: A Newton-type method for solving a semismooth reformulation of monotone complementarity problems, which has a superlinear, or possibly quadratic, rate of convergence under suitable assumptions and some numerical results are presented.
Abstract: In this paper, we propose a Newton-type method for solving a semismooth reformulation of monotone complementarity problems. In this method, a direction-finding subproblem, which is a system of linear equations, is uniquely solvable at each iteration. Moreover, the obtained search direction always affords a direction of sufficient decrease for the merit function defined as the squared residual for the semismooth equation equivalent to the complementarity problem. We show that the algorithm is globally convergent under some mild assumptions. Next, by slightly modifying the direction-finding problem, we propose another Newton-type method, which may be considered a restricted version of the first algorithm. We show that this algorithm has a superlinear, or possibly quadratic, rate of convergence under suitable assumptions. Finally, some numerical results are presented.

Journal ArticleDOI
TL;DR: Necessary and sufficient conditions for the set of solutions of a pseudomonotone variational inequality problem to be nonempty and compact are given.
Abstract: Necessary and sufficient conditions for the set of solutions of a pseudomonotone variational inequality problem to be nonempty and compact are given.

Journal ArticleDOI
Jiming Peng
TL;DR: A class of merit functions for variational inequality problems (VI) is proposed and conditions under which the stationary points of these functions are the solutions of VI are given.
Abstract: In this paper we propose a class of merit functions for variational inequality problems (VI). Through these merit functions, the variational inequality problem is cast as an unconstrained minimization problem. We estimate the growth rate of these merit functions and give conditions under which the stationary points of these functions are the solutions of VI.

Journal ArticleDOI
TL;DR: This work shows that the NRP leads naturally to proximal methods with an entropy-like kernel, which is defined by the conjugate of the scaling function, and establishes that the two methods are dually equivalent for convex constrained minimization problems.
Abstract: The nonlinear rescaling principle (NRP) consists of transforming the objective function and/or the constraints of a given constrained optimization problem into another problem which is equivalent to the original one in the sense that their optimal set of solutions coincides. A nonlinear transformation parameterized by a positive scalar parameter and based on a smooth scaling function is used to transform the constraints. The methods based on NRP consist of sequential unconstrained minimization of the classical Lagrangian for the equivalent problem, followed by an explicit formula updating the Lagrange multipliers. We first show that the NRP leads naturally to proximal methods with an entropy-like kernel, which is defined by the conjugate of the scaling function, and establish that the two methods are dually equivalent for convex constrained minimization problems. We then study the convergence properties of the nonlinear rescaling algorithm and the corresponding entropy-like proximal methods for convex constrained optimization problems. Special cases of the nonlinear rescaling algorithm are presented. In particular a new class of exponential penalty-modified barrier functions methods is introduced.