
Showing papers in "Mathematical Programming" in 2013


Journal ArticleDOI
TL;DR: This paper analyzes several new methods for solving optimization problems with the objective function formed as a sum of two terms, one is smooth and given by a black-box oracle, and another is a simple general convex function with known structure.
Abstract: In this paper we analyze several new methods for solving optimization problems with the objective function formed as a sum of two terms: one is smooth and given by a black-box oracle, and another is a simple general convex function with known structure. Despite the absence of good properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the first part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (with convergence rate $$O\left({1 \over k}\right)$$ ), and an accelerated multistep version with convergence rate $$O\left({1 \over k^2}\right)$$ , where $$k$$ is the iteration counter. For nonconvex problems with this structure, we prove convergence to a point from which there is no descent direction. In contrast, we show that for general nonsmooth, nonconvex problems, even resolving the question of whether a descent direction exists from a point is NP-hard. For all methods, we suggest some efficient “line search” procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We present also the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.

1,444 citations
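
As a concrete illustration of the composite setting, the sketch below runs an accelerated proximal-gradient iteration of the kind analyzed in the paper on an l1-regularized least-squares instance. The function names, the fixed 1/L step and the test data are illustrative assumptions, not the paper's exact scheme (which also covers adaptive "line search" variants).

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: the "simple" nonsmooth term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def accelerated_proximal_gradient(A, b, lam, iters=500):
    """Multistep O(1/k^2)-type scheme for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    The smooth part is accessed only through its gradient (a black-box
    oracle); the l1 term is handled in closed form via its prox.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # prox-gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = 1.0
    b = A @ x_true
    x_hat = accelerated_proximal_gradient(A, b, lam=0.01)
    obj = 0.5 * np.linalg.norm(A @ x_hat - b) ** 2 + 0.01 * np.abs(x_hat).sum()
    print("objective:", obj, " nonzeros (>1e-3):", int(np.sum(np.abs(x_hat) > 1e-3)))
```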


Journal ArticleDOI
TL;DR: This work proves an abstract convergence result for descent methods satisfying a sufficient-decrease assumption, and allowing a relative error tolerance, that guarantees the convergence of bounded sequences under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality.
Abstract: In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption, and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption allows us to cover a wide range of problems, including nonsmooth semi-algebraic (or more generally tame) minimization. The specialization of our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection and some proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems, or iterative thresholding procedures for compressive sensing.

1,282 citations
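
For orientation, the two quantitative hypotheses in abstract descent frameworks of this kind can be stated roughly as follows (generic notation and constants a, b > 0 assumed here, not quoted from the paper): a sufficient-decrease condition $$f(x^{k+1}) + a\,\Vert x^{k+1} - x^{k}\Vert^{2} \;\le\; f(x^{k}),$$ and a relative-error condition requiring some subgradient $$w^{k+1} \in \partial f(x^{k+1})$$ with $$\Vert w^{k+1}\Vert \;\le\; b\,\Vert x^{k+1} - x^{k}\Vert .$$ Combined with the Kurdyka–Łojasiewicz inequality for f, these two conditions are what drive the convergence of bounded sequences.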


Journal ArticleDOI
TL;DR: The Cayley transform—a Crank-Nicolson-like update scheme—is applied to preserve the constraints, and curvilinear search algorithms with lower flop counts are developed on this basis; they run efficiently on polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems.
Abstract: Minimization with orthogonality constraints (e.g., $$X^\top X = I$$) and/or spherical constraints (e.g., $$\Vert x\Vert _2 = 1$$) has wide applications in polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, p-harmonic flows, 1-bit compressive sensing, matrix rank minimization, etc. These problems are difficult because the constraints are not only non-convex but also numerically expensive to preserve during iterations. To deal with these difficulties, we apply the Cayley transform--a Crank-Nicolson-like update scheme--to preserve the constraints and, based on it, develop curvilinear search algorithms with lower flop counts than those based on projections and geodesics. The efficiency of the proposed algorithms is demonstrated on a variety of test problems. In particular, for the maxcut problem, it exactly solves a decomposition formulation for the SDP relaxation. For polynomial optimization, nearest correlation matrix estimation and extreme eigenvalue problems, the proposed algorithms run very fast and return solutions no worse than those from state-of-the-art algorithms. For the quadratic assignment problem, a gap of 0.842 % to the best known solution on the largest problem "tai256c" in QAPLIB can be reached in 5 min on a typical laptop.

855 citations
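
A minimal sketch of the constraint-preserving update at the heart of this approach is given below; the toy objective, the simple monotone backtracking rule and all names are illustrative assumptions, not the paper's curvilinear search (which uses more refined step-size rules).

```python
import numpy as np

def cayley_step(X, G, tau):
    """Trial point X(tau) for min f(X) s.t. X^T X = I, with G = grad f(X).

    With the skew-symmetric W = G X^T - X G^T, the Crank-Nicolson-like update
        X(tau) = (I + (tau/2) W)^{-1} (I - (tau/2) W) X
    has exactly orthonormal columns for every tau, so feasibility is preserved
    and the step size can be chosen by a search along tau.
    """
    n = X.shape[0]
    W = G @ X.T - X @ G.T
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * tau * W, (I - 0.5 * tau * W) @ X)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p = 30, 4
    B = rng.standard_normal((n, p))
    f = lambda X: 0.5 * np.linalg.norm(X - B) ** 2      # toy Procrustes-like objective
    X = np.linalg.qr(rng.standard_normal((n, p)))[0]    # feasible starting point
    f0 = f(X)
    for _ in range(100):
        G = X - B                                        # Euclidean gradient of f
        tau = 1.0
        while f(cayley_step(X, G, tau)) >= f(X) and tau > 1e-10:
            tau *= 0.5                                   # simple backtracking along the curve
        X = cayley_step(X, G, tau)
    print("feasibility ||X^T X - I||_F:", np.linalg.norm(X.T @ X - np.eye(p)))
    print("objective: initial", f0, "-> final", f(X))
```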


Journal ArticleDOI
TL;DR: It is proved that this approximation is exact for robust individual chance constraints with concave or (not necessarily concave) quadratic constraint functions, and it is demonstrated that the Worst-Case CVaR can be computed efficiently for these classes of constraint functions.
Abstract: We develop tractable semidefinite programming based approximations for distributionally robust individual and joint chance constraints, assuming that only the first- and second-order moments as well as the support of the uncertain parameters are given. It is known that robust chance constraints can be conservatively approximated by Worst-Case Conditional Value-at-Risk (CVaR) constraints. We first prove that this approximation is exact for robust individual chance constraints with concave or (not necessarily concave) quadratic constraint functions, and we demonstrate that the Worst-Case CVaR can be computed efficiently for these classes of constraint functions. Next, we study the Worst-Case CVaR approximation for joint chance constraints. This approximation affords intuitive dual interpretations and is provably tighter than two popular benchmark approximations. The tightness depends on a set of scaling parameters, which can be tuned via a sequential convex optimization algorithm. We show that the approximation becomes essentially exact when the scaling parameters are chosen optimally and that the Worst-Case CVaR can be evaluated efficiently if the scaling parameters are kept constant. We evaluate our joint chance constraint approximation in the context of a dynamic water reservoir control problem and numerically demonstrate its superiority over the two benchmark approximations.

529 citations


Journal ArticleDOI
TL;DR: It is found that when f is locally Lipschitz and semi-algebraic with bounded sublevel sets, the BFGS method with the inexact line search almost always generates sequences whose cluster points are Clarke stationary and with function values converging R-linearly to a Clarke stationary value.
Abstract: We investigate the behavior of quasi-Newton algorithms applied to minimize a nonsmooth function f, not necessarily convex. We introduce an inexact line search that generates a sequence of nested intervals containing a set of points of nonzero measure that satisfy the Armijo and Wolfe conditions if f is absolutely continuous along the line. Furthermore, the line search is guaranteed to terminate if f is semi-algebraic. It seems quite difficult to establish a convergence theorem for quasi-Newton methods applied to such general classes of functions, so we give a careful analysis of a special but illuminating case, the Euclidean norm, in one variable using the inexact line search and in two variables assuming that the line search is exact. In practice, we find that when f is locally Lipschitz and semi-algebraic with bounded sublevel sets, the BFGS (Broyden-Fletcher-Goldfarb-Shanno) method with the inexact line search almost always generates sequences whose cluster points are Clarke stationary and with function values converging R-linearly to a Clarke stationary value. We give references documenting the successful use of BFGS in a variety of nonsmooth applications, particularly the design of low-order controllers for linear dynamical systems. We conclude with a challenging open question.

311 citations
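
The flavor of the method can be conveyed by a compact sketch: BFGS in inverse form combined with a bracketing/bisection line search enforcing the Armijo and weak Wolfe conditions, run on the Euclidean norm (nonsmooth at its minimizer). This is a generic illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def weak_wolfe_search(f, grad, x, d, c1=1e-4, c2=0.9, max_bisect=50):
    """Inexact line search returning a step t satisfying the Armijo
    (sufficient decrease) and weak Wolfe (directional derivative)
    conditions, found by expansion and bisection of a bracket."""
    f0, g0 = f(x), grad(x) @ d
    t, lo, hi = 1.0, 0.0, np.inf
    for _ in range(max_bisect):
        if f(x + t * d) > f0 + c1 * t * g0:        # Armijo fails: shrink
            hi = t
        elif grad(x + t * d) @ d < c2 * g0:        # weak Wolfe fails: expand
            lo = t
        else:
            return t
        t = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
    return t

def bfgs_nonsmooth(f, grad, x0, iters=100):
    """Plain inverse-form BFGS with the weak Wolfe search above, applied to a
    function that may be nonsmooth (gradients exist almost everywhere)."""
    n = len(x0)
    H = np.eye(n)                      # inverse Hessian approximation
    x = x0.astype(float)
    for _ in range(iters):
        g = grad(x)
        d = -H @ g
        t = weak_wolfe_search(f, grad, x, d)
        s = t * d
        y = grad(x + s) - g
        if s @ y > 1e-12:              # curvature condition keeps H positive definite
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        x = x + s
    return x

if __name__ == "__main__":
    # Nonsmooth convex test: f(x) = ||x||_2, nondifferentiable at the minimizer 0.
    f = lambda x: np.linalg.norm(x)
    grad = lambda x: x / max(np.linalg.norm(x), 1e-16)
    x = bfgs_nonsmooth(f, grad, np.array([3.0, -4.0, 12.0]), iters=50)
    print("final f(x):", f(x))
```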


Journal ArticleDOI
TL;DR: Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in (Fast multiple splitting algorithms for convex optimization, Columbia University, 2009) where the algorithms are Jacobi type methods.
Abstract: We present in this paper alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most $${O(1/\epsilon)}$$ iterations to obtain an $${\epsilon}$$ -optimal solution, while our accelerated (i.e., fast) versions of them require at most $${O(1/\sqrt{\epsilon})}$$ iterations, with little change in the computational effort required at each iteration. For both types of methods, we present one algorithm that requires both functions to be smooth with Lipschitz continuous gradients and one algorithm that needs only one of the functions to be so. Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in (Fast multiple splitting algorithms for convex optimization, Columbia University, 2009) where the algorithms are Jacobi type methods. Numerical results are reported to support our theoretical conclusions and demonstrate the practical potential of our algorithms.

241 citations


Journal ArticleDOI
TL;DR: This paper improves the convergence theorems of several existing relaxation methods and takes a closer look at the properties of the feasible sets of the relaxed problems and shows which standard constraint qualifications are satisfied for these relaxed problems.
Abstract: Mathematical programs with equilibrium constraints (MPECs) are difficult optimization problems whose feasible sets do not satisfy most of the standard constraint qualifications. Hence MPECs cause difficulties both from a theoretical and a numerical point of view. As a consequence, a number of MPEC-tailored solution methods have been suggested during the last decade which are known to converge under suitable assumptions. Among these MPEC-tailored solution schemes, the relaxation methods are certainly one of the most prominent classes of solution methods. By now, several different relaxation schemes are available, and the aim of this paper is to provide a theoretical and numerical comparison of these schemes. More precisely, in the theoretical part, we improve the convergence theorems of several existing relaxation methods. There, we also take a closer look at the properties of the feasible sets of the relaxed problems and show which standard constraint qualifications are satisfied for these relaxed problems. Finally, the numerical comparison is based on the MacMPEC test problem collection.

164 citations
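
To fix ideas, one classical scheme of the kind compared in the paper (the Scholtes-type global relaxation, named here only as an example in generic notation) replaces the complementarity constraints $$0 \le G_i(x) \perp H_i(x) \ge 0$$ of the MPEC by the relaxed system $$G_i(x) \ge 0, \qquad H_i(x) \ge 0, \qquad G_i(x)\,H_i(x) \le t_k, \qquad i = 1,\ldots,q,$$ and solves the resulting standard nonlinear programs for a sequence $$t_k \downarrow 0$$. The theoretical questions are then exactly those addressed in the paper: which constraint qualifications the relaxed feasible sets satisfy, and what kind of stationary points the sequence of solutions converges to.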


Journal ArticleDOI
TL;DR: It is shown that unless P = NP, there exists no polynomial time (or even pseudo-polynomial time) algorithm that can decide whether a multivariate polynomial of degree four (or higher even degree) is globally convex, and it is proved that deciding strict convexity, strong convexity, quasiconvexity, and pseudoconvexity of polynomials of even degree four or higher is strongly NP-hard.
Abstract: We show that unless P = NP, there exists no polynomial time (or even pseudo-polynomial time) algorithm that can decide whether a multivariate polynomial of degree four (or higher even degree) is globally convex. This solves a problem that has been open since 1992 when N. Z. Shor asked for the complexity of deciding convexity for quartic polynomials. We also prove that deciding strict convexity, strong convexity, quasiconvexity, and pseudoconvexity of polynomials of even degree four or higher is strongly NP-hard. By contrast, we show that quasiconvexity and pseudoconvexity of odd degree polynomials can be decided in polynomial time.

130 citations


Journal ArticleDOI
TL;DR: This work considers the bilevel programming problem and its optimal value and KKT one level reformulations and shows how KKT type optimality conditions can be obtained under the partial calmness, using the differential calculus of Mordukhovich.
Abstract: We consider the bilevel programming problem and its optimal value and KKT one level reformulations. The two reformulations are studied in a unified manner and compared in terms of optimal solutions, constraint qualifications and optimality conditions. We also show that any bilevel programming problem where the lower level problem is linear with respect to the lower level variable is partially calm without any restrictive assumption. Finally, we consider the bilevel demand adjustment problem in transportation, and show how KKT type optimality conditions can be obtained under the partial calmness, using the differential calculus of Mordukhovich.

126 citations
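
For readers less familiar with the two reformulations, they can be sketched as follows (generic notation assumed here, not the paper's). For the bilevel problem of minimizing $$F(x,y)$$ subject to $$G(x,y) \le 0$$ and y solving the lower level problem $$\min_z \{ f(x,z) : g(x,z) \le 0 \}$$, the optimal value reformulation uses the lower level value function $$\varphi(x) = \min_z \{ f(x,z) : g(x,z) \le 0 \}$$ and reads $$\min_{x,y}\; F(x,y) \quad \text{s.t.} \quad G(x,y) \le 0,\; g(x,y) \le 0,\; f(x,y) \le \varphi(x),$$ while the KKT reformulation replaces the lower level problem by its KKT system: $$\min_{x,y,\lambda}\; F(x,y) \quad \text{s.t.} \quad G(x,y) \le 0,\; \nabla_y f(x,y) + \nabla_y g(x,y)^{\top}\lambda = 0,\; 0 \le \lambda \perp -g(x,y) \ge 0.$$ Partial calmness is the property that allows the value function constraint $$f(x,y) \le \varphi(x)$$ to be moved into the objective as an exact penalty, which is how KKT-type optimality conditions are then derived.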


Journal ArticleDOI
TL;DR: In this paper, it was shown that there are 0/1 polytopes that do not admit a compact LP formulation; assuming \({\mathbf{NP} \not\subseteq \mathbf{P_{/poly}}}\), this rules out the existence of a compact formulation for any NP-hard optimization problem, even if the formulation may contain arbitrary real numbers.
Abstract: We prove that there are 0/1 polytopes \({P \subseteq \mathbb{R}^{n}}\) that do not admit a compact LP formulation. More precisely we show that for every n there is a set \({X \subseteq \{ 0,1\}^n}\) such that conv(X) must have extension complexity at least \({2^{n/2\cdot(1-o(1))}}\). In other words, every polyhedron Q that can be linearly projected on conv(X) must have exponentially many facets. In fact, the same result also applies if conv(X) is restricted to be a matroid polytope. Conditioning on \({\mathbf{NP} \not\subseteq \mathbf{P_{/poly}}}\), our result rules out the existence of a compact formulation for any \({\mathbf{NP}}\) -hard optimization problem even if the formulation may contain arbitrary real numbers.

108 citations


Journal ArticleDOI
TL;DR: This work studies the so-called KKT-approach for solving bilevel problems, where the lower level minimality condition is replaced by the KKT- or the FJ-condition, which leads to a special structured mathematical program with complementarity constraints.
Abstract: Bilevel programs (BL) form a special class of optimization problems. They appear in many models in economics, game theory and mathematical physics. BL programs show a more complicated structure than standard finite problems. We study the so-called KKT-approach for solving bilevel problems, where the lower level minimality condition is replaced by the KKT- or the FJ-condition. This leads to a special structured mathematical program with complementarity constraints. We analyze the KKT-approach from a generic viewpoint and reveal the advantages and possible drawbacks of this approach for solving BL problems numerically.

Journal ArticleDOI
TL;DR: This paper studies the computational complexity of quadratic penalty based methods for solving a special but broad class of convex programming problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation.
Abstract: This paper considers a special but broad class of convex programming problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation. It studies the computational complexity of quadratic penalty based methods for solving the above class of problems. An iteration of these methods, which is simply an iteration of Nesterov’s optimal method (or one of its variants) for approximately solving a smooth penalization subproblem, consists of one or two projections onto the simple convex set. Iteration-complexity bounds expressed in terms of the latter type of iterations are derived for two quadratic penalty based variants, namely: one which applies the quadratic penalty method directly to the original problem and another one which applies the latter method to a perturbation of the original problem obtained by adding a small quadratic term to its objective function.
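
In generic notation (assumed here, not quoted from the paper), the feasible region is $$\{x \in X : \mathcal{A}(x) - b \in \mathcal{K}\}$$ with X a simple compact convex set, $$\mathcal{K}$$ a closed convex cone and $$\mathcal{A}$$ affine, and the quadratic penalty subproblem for a penalty parameter $$\rho > 0$$ reads $$\min_{x \in X}\; f(x) + \frac{\rho}{2}\,\mathrm{dist}^{2}\!\left(\mathcal{A}(x) - b,\; \mathcal{K}\right).$$ The penalty term is smooth and its gradient involves the projection onto $$\mathcal{K}$$, so each inner iteration of Nesterov's optimal method applied to the subproblem costs one or two projections onto X; that projection count is the unit in which the paper's iteration-complexity bounds are expressed.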

Journal ArticleDOI
TL;DR: In this article, a generalization of the matrix variable relaxation to the matrix cube problem is presented. But the relaxation is based on substituting matrices for the variables x j.
Abstract: Given linear matrix inequalities (LMIs) L_1 and L_2 it is natural to ask: (Q1) when does one dominate the other, that is, does \({L_1(X) \succeq 0}\) imply \({L_2(X) \succeq 0}\)? (Q2) when are they mutually dominant, that is, when do they have the same solution set? The matrix cube problem of Ben-Tal and Nemirovski (SIAM J Optim 12:811–833, 2002) is an example of LMI domination. Hence such problems can be NP-hard. This paper describes a natural relaxation of an LMI, based on substituting matrices for the variables x_j. With this relaxation, the domination questions (Q1) and (Q2) have elegant answers, and indeed reduce to constructible semidefinite programs. As an example, to test the strength of this relaxation we specialize it to the matrix cube problem and obtain essentially the relaxation given in Ben-Tal and Nemirovski (SIAM J Optim 12:811–833, 2002). Thus our relaxation could be viewed as generalizing it. Assume there is an X such that L_1(X) and L_2(X) are both positive definite, and suppose the positivity domain of L_1 is bounded. For our “matrix variable” relaxation a positive answer to (Q1) is equivalent to the existence of matrices V_j such that $$\begin{array}{ll}L_2(x) = V_1^{*} L_1(x) V_1 + \cdots + V_\mu^{*} L_1(x) V_{\mu}. \quad \quad \quad ({\rm A}_1)\end{array}$$ As for (Q2), we show that L_1 and L_2 are mutually dominant if and only if, up to certain redundancies described in the paper, L_1 and L_2 are unitarily equivalent. Algebraic certificates for positivity, such as (A_1) for linear polynomials, are typically called Positivstellensätze. The paper goes on to derive a Putinar-type Positivstellensatz for polynomials with a cleaner and more powerful conclusion under the stronger hypothesis of positivity on an underlying bounded domain of the form \({ \{X \mid L(X)\succeq0\} }\). An observation at the core of the paper is that the relaxed LMI domination problem is equivalent to a classical problem, namely the problem of determining whether a linear map τ from a subspace of matrices to a matrix algebra is “completely positive”. Complete positivity is one of the main techniques of modern operator theory and the theory of operator algebras. On the one hand it provides tools for studying LMIs; on the other hand, since completely positive maps are not so far from representations and generally are more tractable than their merely positive counterparts, the theory of completely positive maps provides perspective on the difficulties in solving LMI domination problems.

Journal ArticleDOI
TL;DR: By examining the recession properties of convex polynomials, this paper provides a necessary and sufficient condition for a piecewise convex polynomial to have a Hölder-type global error bound with an explicit Hölder exponent.
Abstract: In this paper, by examining the recession properties of convex polynomials, we provide a necessary and sufficient condition for a piecewise convex polynomial to have a Hölder-type global error bound with an explicit Hölder exponent. Our result extends the corresponding results of Li (SIAM J Control Optim 33(5):1510–1529, 1995) from piecewise convex quadratic functions to piecewise convex polynomials.

Journal ArticleDOI
TL;DR: In this paper, a detailed polyhedral study of the SRFLP is performed, and several huge classes of valid and facet-inducing inequalities are derived.
Abstract: The single row facility layout problem (SRFLP) is the NP-hard problem of arranging facilities on a line, while minimizing a weighted sum of the distances between facility pairs. In this paper, a detailed polyhedral study of the SRFLP is performed, and several huge classes of valid and facet-inducing inequalities are derived. Some separation heuristics are presented, along with a primal heuristic based on multi-dimensional scaling. Finally, a branch-and-cut algorithm is described and some encouraging computational results are given.

Journal ArticleDOI
TL;DR: The fastest previously known algorithm for n-fold integer programming runs in time O(n^{g(A)} L), where L is the binary length of the numerical part of the input and g(A) is the Graver complexity of the bimatrix A defining the system; as discussed by the authors, this is improved to an algorithm running in time O(n^3 L).
Abstract: n-Fold integer programming is a fundamental problem with a variety of natural applications in operations research and statistics. Moreover, it is universal and provides a new, variable-dimension, parametrization of all of integer programming. The fastest algorithm for n-fold integer programming predating the present article runs in time \({O \left(n^{g(A)}L\right)}\) with L the binary length of the numerical part of the input and g(A) the so-called Graver complexity of the bimatrix A defining the system. In this article we provide a drastic improvement and establish an algorithm which runs in time O(n^3 L), having cubic dependency on n regardless of the bimatrix A. Our algorithm works for separable convex piecewise affine objectives as well. Moreover, it can be used to define a hierarchy of approximations for any integer programming problem.
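
For context, in one common convention the bimatrix A consists of two blocks, $$A_1$$ of size $$r \times d$$ and $$A_2$$ of size $$s \times d$$, and the constraint matrix of the n-fold integer program is $$A^{(n)} = \begin{pmatrix} A_1 & A_1 & \cdots & A_1 \\ A_2 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_2 \end{pmatrix},$$ with the problem taking the form $$\min \{\, wx : A^{(n)} x = b,\; l \le x \le u,\; x \in \mathbb{Z}^{nd} \,\}.$$ The Graver complexity g(A) measures how complicated the Graver bases of the matrices $$A^{(n)}$$ can become as n grows, which is why removing the exponent g(A) from the running time is a substantial improvement.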

Journal ArticleDOI
TL;DR: It is shown that flat truncation can be used as a certificate to check exactness of standard SOS relaxations and Jacobian SDP relaxations.
Abstract: Consider the optimization problem of minimizing a polynomial function subject to polynomial constraints. A typical approach for solving it globally is applying Lasserre’s hierarchy of semidefinite relaxations, based on either Putinar’s or Schmüdgen’s Positivstellensatz. A practical question in applications is: how to certify its convergence and get minimizers? In this paper, we propose flat truncation as a certificate for this purpose. Assume the set of global minimizers is nonempty and finite. Our main results are: (1) Putinar type Lasserre’s hierarchy has finite convergence if and only if flat truncation holds, under some generic assumptions; the same conclusion holds for the Schmüdgen type one under weaker assumptions. (2) Flat truncation is asymptotically satisfied for Putinar type Lasserre’s hierarchy if the Archimedean condition holds; the same conclusion holds for the Schmüdgen type one if the feasible set is compact. (3) We show that flat truncation can be used as a certificate to check exactness of standard SOS relaxations and Jacobian SDP relaxations.
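
Stated roughly, in generic moment-relaxation notation assumed here, flat truncation asks that the optimal moment sequence y of the N-th order relaxation satisfy a rank stabilization condition $$\operatorname{rank} M_{t-d}(y) \;=\; \operatorname{rank} M_{t}(y) \qquad \text{for some } d \le t \le N,$$ where $$M_t(y)$$ is the order-t moment matrix and d is determined by the degrees of the constraint polynomials. When it holds, finitely many global minimizers can be extracted from y, which is what makes it usable as a practical certificate of convergence.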

Journal ArticleDOI
TL;DR: It is shown that for any choice of conjectural variations ranging from perfect competition to Cournot, the closed loop equilibrium coincides with the Cournot open loop equilibrium, thereby obtaining a ‘Kreps and Scheinkman’-like result and extending it to arbitrary strategic behavior.
Abstract: We consider two game-theoretic models of the generation capacity expansion problem in liberalized electricity markets. The first is an open loop equilibrium model, where generation companies simultaneously choose capacities and quantities to maximize their individual profit. The second is a closed loop model, in which companies first choose capacities maximizing their profit anticipating the market equilibrium outcomes in the second stage. The latter problem is an equilibrium problem with equilibrium constraints. In both models, the intensity of competition among producers in the energy market is frequently represented using conjectural variations. Considering one load period, we show that for any choice of conjectural variations ranging from perfect competition to Cournot, the closed loop equilibrium coincides with the Cournot open loop equilibrium, thereby obtaining a ‘Kreps and Scheinkman’-like result and extending it to arbitrary strategic behavior. When expanding the model framework to multiple load periods, the closed loop equilibria for different conjectural variations can diverge from each other and from open loop equilibria. We also present and analyze alternative conjectured price response models with switching conjectures. Surprisingly, the rank ordering of the closed loop equilibria in terms of consumer surplus and market efficiency (as measured by total social welfare) is ambiguous. Thus, regulatory approaches that force marginal cost-based bidding in spot markets may diminish market efficiency and consumer welfare by dampening incentives for investment. We also show that the closed loop capacity yielded by a conjectured price response second stage competition can be less or equal to the closed loop Cournot capacity, and that the former capacity cannot exceed the latter when there are symmetric agents and two load periods.

Journal ArticleDOI
TL;DR: In this article, a new type semidefinite programming (SDP) relaxation was proposed for solving the problem of minimizing the minimum of f on the set of polynomials f(x), gi (x), hj (x).
Abstract: Given polynomials f(x), g_i(x), h_j(x), we study how to minimize f(x) on the set $$S = \left\{ x \in \mathbb{R}^n:\, h_1(x) = \cdots = h_{m_1}(x) = 0,\\ g_1(x)\geq 0, \ldots, g_{m_2}(x) \geq 0 \right\}.$$ Let f_min be the minimum of f on S. Suppose S is nonsingular and f_min is achievable on S, which are true generically. This paper proposes a new type of semidefinite programming (SDP) relaxation which is the first one for solving this problem exactly. First, we construct new polynomials \({\varphi_1, \ldots, \varphi_r}\), by using the Jacobian of f, h_i, g_j, such that the above problem is equivalent to $$\begin{gathered}\underset{x\in\mathbb{R}^n}{\min} f(x) \hfill \\ \, \, {\rm s.t.}\; h_i(x) = 0, \, \varphi_j(x) = 0, \, 1\leq i \leq m_1, 1 \leq j \leq r, \hfill \\ \quad \, \, \, g_1(x)^{\nu_1}\cdots g_{m_2}(x)^{\nu_{m_2}}\geq 0, \, \quad\forall\, \nu \,\in \{0,1\}^{m_2} .\hfill \end{gathered}$$ Second, we prove that for all N big enough, the standard N-th order Lasserre’s SDP relaxation is exact for solving this equivalent problem, that is, its optimal value is equal to f_min. Some variations and examples are also shown.

Journal ArticleDOI
TL;DR: This paper develops a new error criterion for the approximate minimization of augmented Lagrangian subproblems that uses a single relative tolerance parameter, rather than a summable parameter sequence, and proves a global convergence result for the resulting algorithm.
Abstract: This paper develops a new error criterion for the approximate minimization of augmented Lagrangian subproblems. This criterion is practical since it is readily testable given only a gradient (or subgradient) of the augmented Lagrangian. It is also “relative” in the sense of relative error criteria for proximal point algorithms: in particular, it uses a single relative tolerance parameter, rather than a summable parameter sequence. Our analysis first describes an abstract version of the criterion within Rockafellar’s general parametric convex duality framework, and proves a global convergence result for the resulting algorithm. Specializing this algorithm to a standard formulation of convex programming produces a version of the classical augmented Lagrangian method with a novel inexact solution condition for the subproblems. Finally, we present computational results drawn from the CUTE test set—including many nonconvex problems—indicating that the approach works well in practice.

Journal ArticleDOI
TL;DR: It is shown that these differential variational inequalities, when considering slow solutions and the more general level of a Hilbert space, contain projected dynamical systems, another recent subclass of general differential inclusions, and a stability result for linear complementarity systems is obtained.
Abstract: This paper addresses a new class of differential variational inequalities that have recently been introduced and investigated in finite dimensions as a new modeling paradigm of variational analysis to treat many applied problems in engineering, operations research, and physical sciences. This new subclass of general differential inclusions unifies ordinary differential equations with possibly discontinuous right-hand sides, differential algebraic systems with constraints, dynamic complementarity systems, and evolutionary variational systems. The purpose of this paper is two-fold. Firstly, we show that these differential variational inequalities, when considering slow solutions and the more general level of a Hilbert space, contain projected dynamical systems, another recent subclass of general differential inclusions. This relation follows from a precise geometric description of the directional derivative of the metric projection in Hilbert space, which is based on the notion of the quasi relative interior. Secondly we are concerned with stability of the solution set to this class of differential variational inequalities. Here we present a novel upper set convergence result with respect to perturbations in the data, including perturbations of the associated set-valued maps and the constraint set. Here we impose weak convergence assumptions on the perturbed set-valued maps, use the monotonicity method of Browder and Minty, and employ Mosco convergence as set convergence. Also as a consequence, we obtain a stability result for linear complementarity systems.

Journal ArticleDOI
TL;DR: In this paper, explicit characterizations of convex and concave envelopes of several nonlinear functions over various subsets of a hyper-rectangle are derived by identifying polyhedral subdivisions of the hyper- rectangle over which the envelopes can be constructed easily.
Abstract: In this paper, we derive explicit characterizations of convex and concave envelopes of several nonlinear functions over various subsets of a hyper-rectangle. These envelopes are obtained by identifying polyhedral subdivisions of the hyper-rectangle over which the envelopes can be constructed easily. In particular, we use these techniques to derive, in closed-form, the concave envelopes of concave-extendable supermodular functions and the convex envelopes of disjunctive convex functions.
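
The simplest textbook instance of such closed-form envelopes over a box (included here only as background; the paper treats far more general function classes) is the bilinear term xy on $$[x_L, x_U] \times [y_L, y_U]$$, whose convex envelope is the pointwise maximum of the two McCormick underestimators, $$\mathrm{vex}[xy](x,y) \;=\; \max\{\, x_L y + y_L x - x_L y_L,\;\; x_U y + y_U x - x_U y_U \,\},$$ with the concave envelope given by the analogous minimum of the two overestimators. The paper's constructions similarly reduce to identifying polyhedral subdivisions of the hyper-rectangle on which the envelopes are easy to describe.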

Journal ArticleDOI
TL;DR: A four-dimensional example is constructed for which the BFGS method need not converge, even though the objective function is a polynomial and hence infinitely continuously differentiable.
Abstract: Consider the BFGS quasi-Newton method applied to a general non-convex function that has continuous second derivatives. This paper aims to construct a four-dimensional example such that the BFGS method need not converge. The example is perfect in the following sense: (a) All the stepsizes are exactly equal to one; the unit stepsize can also be accepted by various line searches including the Wolfe line search and the Armijo line search; (b) The objective function is strongly convex along each search direction although it is not convex in itself. The unit stepsize is the unique minimizer of each line search function. Hence the example also applies to the global line search and the line search that always picks the first local minimizer; (c) The objective function is polynomial and hence is infinitely continuously differentiable. If the convexity requirement (b) on the line search function is relaxed, we are able to construct a relatively simple polynomial example.

Journal ArticleDOI
TL;DR: An algorithm for solving highly symmetric integer linear programs whose running time is only linear in the number of constraints and quadratic in the dimension is proposed.
Abstract: This paper deals with exploiting symmetry for solving linear and integer programming problems. Basic properties of linear representations of finite groups can be used to reduce symmetric linear programming to solving linear programs of lower dimension. Combining this approach with knowledge of the geometry of feasible integer solutions yields an algorithm for solving highly symmetric integer linear programs whose running time is only linear in the number of constraints and quadratic in the dimension.

Journal ArticleDOI
TL;DR: A unified analysis of the recovery of simple objects from random linear measurements shows that an s-sparse vector in $${\mathbb{R}^n}$$ can be efficiently recovered from 2s log n measurements with high probability and a rank r, n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability.
Abstract: This note presents a unified analysis of the recovery of simple objects from random linear measurements. When the linear functionals are Gaussian, we show that an s-sparse vector in $${\mathbb{R}^n}$$ can be efficiently recovered from 2s log n measurements with high probability and a rank r, n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability. For sparse vectors, this is within an additive factor of the best known nonasymptotic bounds. For low-rank matrices, this matches the best known bounds. We present a parallel analysis for block-sparse vectors obtaining similarly tight bounds. In the case of sparse and block-sparse signals, we additionally demonstrate that our bounds are only slightly weakened when the measurement map is a random sign matrix. Our results are based on analyzing a particular dual point which certifies optimality conditions of the respective convex programming problem. Our calculations rely only on standard large deviation inequalities and our analysis is self-contained.
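
A small, self-contained experiment in the spirit of the sparse-vector result is sketched below; the LP reformulation of l1 minimization and all parameter choices are standard illustrations under assumed names, not the note's own code. It draws a Gaussian map with roughly 2s log n rows and attempts recovery of an s-sparse vector by basis pursuit.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1  s.t.  A x = b, written as an LP over [x; u] with |x| <= u."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[np.eye(n), -np.eye(n)],      #  x - u <= 0
                     [-np.eye(n), -np.eye(n)]])    # -x - u <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, s = 200, 5
    m = int(np.ceil(2 * s * np.log(n)))            # ~ the 2 s log n measurement count
    A = rng.standard_normal((m, n))                # Gaussian measurement map
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
    x_hat = basis_pursuit(A, A @ x_true)
    print("m =", m, " recovery error:", np.linalg.norm(x_hat - x_true))
```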

Journal ArticleDOI
TL;DR: Semidefinite relaxations for unconstrained non-convex quadratic mixed-integer optimization problems are presented and are computationally easy to solve for medium-sized instances, even if some of the variables are integer and unbounded.
Abstract: We present semidefinite relaxations for unconstrained non-convex quadratic mixed-integer optimization problems. These relaxations yield tight bounds and are computationally easy to solve for medium-sized instances, even if some of the variables are integer and unbounded. In this case, the problem contains an infinite number of linear constraints; these constraints are separated dynamically. We use this approach as a bounding routine in an SDP-based branch-and-bound framework. In case of a convex objective function, the new SDP bound improves the bound given by the continuous relaxation of the problem. Numerical experiments show that our algorithm performs well on various types of non-convex instances.

Journal ArticleDOI
TL;DR: This paper establishes strong duality between the robust counterpart of an uncertain semi-infinite linear program and the optimistic counterpart of its uncertain Lagrangian dual and shows that robust duality holds whenever a robust moment cone is closed and convex.
Abstract: In this paper, we propose a duality theory for semi-infinite linear programming problems under uncertainty in the constraint functions, the objective function, or both, within the framework of robust optimization. We present robust duality by establishing strong duality between the robust counterpart of an uncertain semi-infinite linear program and the optimistic counterpart of its uncertain Lagrangian dual. We show that robust duality holds whenever a robust moment cone is closed and convex. We then establish that the closed-convex robust moment cone condition in the case of constraint-wise uncertainty is in fact necessary and sufficient for robust duality. In other words, the robust moment cone is closed and convex if and only if robust duality holds for every linear objective function of the program. In the case of uncertain problems with affinely parameterized data uncertainty, we establish that robust duality is easily satisfied under a Slater type constraint qualification. Consequently, we derive robust forms of the Farkas lemma for systems of uncertain semi-infinite linear inequalities.

Journal ArticleDOI
TL;DR: The paper studies regularity properties of set-valued mappings between metric spaces by addressing equivalence of the corresponding concepts of openness and “pseudo-Hölder” behavior, general and local regularity criteria with special emphasis on “regularity of order $$k$$”, for local settings, and variational methods to estimate regularity moduli in the case of length range spaces.
Abstract: The paper studies regularity properties of set-valued mappings between metric spaces. In the context of metric regularity, nonlinear models correspond to nonlinear dependencies of estimates of error bounds in terms of residuals. Among the questions addressed in the paper are equivalence of the corresponding concepts of openness and “pseudo-Hölder” behavior, general and local regularity criteria with special emphasis on “regularity of order $$k$$”, for local settings, and variational methods to estimate regularity moduli in the case of length range spaces. The majority of the results presented in the paper are new.

Journal ArticleDOI
TL;DR: A simultaneous column-and-row generation algorithm that could be applied to a general class of large-scale linear programming problems, which typically arise in the context of linear programming formulations with exponentially many variables.
Abstract: In this paper, we develop a simultaneous column-and-row generation algorithm that could be applied to a general class of large-scale linear programming problems. These problems typically arise in the context of linear programming formulations with exponentially many variables. The defining property for these formulations is a set of linking constraints, which are either too many to be included in the formulation directly, or the full set of linking constraints can only be identified, if all variables are generated explicitly. Due to this dependence between columns and rows, we refer to this class of linear programs as problems with column-dependent-rows. To solve these problems, we need to be able to generate both columns and rows on-the-fly within an efficient solution approach. We emphasize that the generated rows are structural constraints and distinguish our work from the branch-and-cut-and-price framework. We first characterize the underlying assumptions for the proposed column-and-row generation algorithm. These assumptions are general enough and cover all problems with column-dependent-rows studied in the literature up until now to the best of our knowledge. We then introduce in detail a set of pricing subproblems, which are used within the proposed column-and-row generation algorithm. This is followed by a formal discussion on the optimality of the algorithm. To illustrate our approach, the paper is concluded by applying the proposed framework to the multi-stage cutting stock and the quadratic set covering problems.

Journal ArticleDOI
TL;DR: In this article, a new lower bounding method for the capacitated arc routing problem (CARP) based on a set partitioning-like formulation of the problem with additional cuts is presented.
Abstract: In the capacitated arc routing problem (CARP), a subset of the edges of an undirected graph has to be serviced at least cost by a fleet of identical vehicles in such a way that the total demand of the edges serviced by each vehicle does not exceed its capacity. This paper describes a new lower bounding method for the CARP based on a set partitioning-like formulation of the problem with additional cuts. This method uses cut-and-column generation to solve different relaxations of the problem, and a new dynamic programming method for generating routes. An exact algorithm based on the new lower bounds was also implemented to assess their effectiveness. Computational results over a large set of classical benchmark instances show that the proposed method improves most of the best known lower bounds for the open instances, and can solve several of these for the first time.