
Showing papers in "Mathematical Programming in 2007"


Journal ArticleDOI
TL;DR: This paper overviews several selected topics in this popular area, specifically, recent extensions of the basic concept of robust counterpart of an optimization problem with uncertain data, tractability of robust counterparts, links between RO and traditional chance constrained settings of problems with stochastic data, and a novel generic application of the RO methodology in Robust Linear Control.
Abstract: Robust Optimization is a rapidly developing methodology for handling optimization problems affected by non-stochastic “uncertain-but-bounded” data perturbations. In this paper, we overview several selected topics in this popular area, specifically, (1) recent extensions of the basic concept of robust counterpart of an optimization problem with uncertain data, (2) tractability of robust counterparts, (3) links between RO and traditional chance constrained settings of problems with stochastic data, and (4) a novel generic application of the RO methodology in Robust Linear Control.
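As a concrete illustration of the robust-counterpart concept surveyed here (a generic textbook-style example with made-up data, not one taken from the paper): for a linear constraint a^T x ≤ b whose coefficient vector is only known to lie in a box [a_nom − δ, a_nom + δ], taking the worst case over the box yields the deterministic constraint a_nom^T x + δ^T |x| ≤ b. A minimal Python check:

import numpy as np

# Box-uncertain linear constraint: a^T x <= b must hold for every a with
# |a_j - a_nom_j| <= delta_j.  Worst case over the box gives the robust
# counterpart  a_nom^T x + delta^T |x| <= b.
a_nom = np.array([1.0, 2.0])
delta = np.array([0.1, 0.3])
b = 4.0
x = np.array([1.5, 0.8])                      # a candidate decision

robust_lhs = a_nom @ x + delta @ np.abs(x)    # worst-case value of a^T x over the box
print("robust counterpart satisfied:", robust_lhs <= b)

# Sanity check: the worst case is attained at a vertex of the uncertainty box.
vertices = [a_nom + np.array(s) * delta for s in [(1, 1), (1, -1), (-1, 1), (-1, -1)]]
print("vertex enumeration agrees:", np.isclose(max(a @ x for a in vertices), robust_lhs))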

339 citations


Journal ArticleDOI
TL;DR: In this article, dual methods for solving variational inequalities with monotone operators were proposed and shown to be optimal both for Lipschitz continuous operators and for operators with bounded variations.
Abstract: In this paper we suggest new dual methods for solving variational inequalities with monotone operators. We show that, with an appropriate step-size strategy, our method is optimal both for Lipschitz continuous operators ($$O({1 \over \epsilon})$$ iterations) and for operators with bounded variation ($$O({1 \over \epsilon^2})$$ iterations). Our technique can be applied to solving non-smooth convex minimization problems with known structure. In this case the worst-case complexity bound is $$O({1 \over \epsilon})$$ iterations.

290 citations


Journal ArticleDOI
TL;DR: In this paper, a semidefinite programming (SDP) based model and method for the position estimation problem in sensor network localization and other Euclidean distance geometry applications is presented.
Abstract: We analyze the semidefinite programming (SDP) based model and method for the position estimation problem in sensor network localization and other Euclidean distance geometry applications. We use SDP duality and interior-point algorithm theories to prove that the SDP localizes any network or graph that has unique sensor positions to fit given distance measures. Therefore, we show, for the first time, that these networks can be localized in polynomial time. We also give a simple and efficient criterion for checking whether a given instance of the localization problem has a unique realization in $$\mathcal{R}^2$$ using graph rigidity theory. Finally, we introduce a notion called strong localizability and show that the SDP model will identify all strongly localizable sub-networks in the input network.

271 citations


Journal ArticleDOI
TL;DR: This survey paper gives an overview of the fundamental properties of submodular functions and of recent algorithmic developments in their minimization.
Abstract: Submodular functions often arise in various fields of operations research, including discrete optimization, game theory, queueing theory and information theory. In this survey paper, we give an overview of the fundamental properties of submodular functions and of recent algorithmic developments in their minimization.
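For readers meeting the topic for the first time: a set function f on a ground set V is submodular if f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ V. A brute-force check for tiny ground sets (our own illustrative code, not from the survey; coverage functions are a standard submodular example):

from itertools import combinations

def subsets(V):
    return [frozenset(c) for r in range(len(V) + 1) for c in combinations(V, r)]

def is_submodular(f, V):
    # f(A) + f(B) >= f(A | B) + f(A & B) for all A, B contained in V
    S = subsets(V)
    return all(f(A) + f(B) >= f(A | B) + f(A & B) for A in S for B in S)

# Coverage function: f(A) = number of elements covered by the chosen sets.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
coverage = lambda A: len(set().union(*(sets[i] for i in A))) if A else 0
print(is_submodular(coverage, {1, 2, 3}))   # True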

244 citations


Journal ArticleDOI
TL;DR: An accelerated version of the cubic regularization of Newton’s method that converges for the same problem class with order $$O\big({1 \over k^3}\big)$$, keeping the complexity of each iteration unchanged; it is also argued that for second-order schemes the class of non-degenerate problems differs from the standard class.
Abstract: In this paper we propose an accelerated version of the cubic regularization of Newton’s method (Nesterov and Polyak, in Math Program 108(1): 177–205, 2006). The original version, used for minimizing a convex function with Lipschitz-continuous Hessian, guarantees a global rate of convergence of order $$O\big({1 \over k^2}\big)$$, where k is the iteration counter. Our modified version converges for the same problem class with order $$O\big({1 \over k^3}\big)$$, keeping the complexity of each iteration unchanged. We study the complexity of both schemes on different classes of convex problems. In particular, we argue that for the second-order schemes, the class of non-degenerate problems is different from the standard class.

237 citations


Journal ArticleDOI
TL;DR: This tutorial introduces the necessary tools from polyhedral theory and gives a geometric understanding of several classical families of valid inequalities such as lift-and-project cuts, Gomory mixed integer cuts, mixed integer rounding cuts, split cuts and intersection cuts, and it reveals the relationships between these families.
Abstract: This tutorial presents a theory of valid inequalities for mixed integer linear sets. It introduces the necessary tools from polyhedral theory and gives a geometric understanding of several classical families of valid inequalities such as lift-and-project cuts, Gomory mixed integer cuts, mixed integer rounding cuts, split cuts and intersection cuts, and it reveals the relationships between these families. The tutorial also discusses computational aspects of generating the cuts and their strength.
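As a tiny worked example of one family covered by the tutorial, the basic mixed integer rounding (MIR) inequality: for the set {(x, y) : x + y ≥ b, y ≥ 0, x integer} with fractional part f = b − ⌊b⌋ > 0, the inequality x + y/f ≥ ⌈b⌉ is valid and cuts off part of the LP relaxation. A quick numerical sanity check (our own illustration, not code from the tutorial):

import math, random

b = 2.6
f = b - math.floor(b)                     # fractional part of b, here 0.6

def mir_satisfied(x, y):
    # basic MIR inequality for {x + y >= b, y >= 0, x integer}
    return x + y / f >= math.ceil(b) - 1e-9

# Every feasible mixed integer point satisfies the cut ...
for _ in range(10000):
    x = random.randint(-5, 10)
    y = max(0.0, b - x) + 3 * random.random()     # guarantees x + y >= b and y >= 0
    assert mir_satisfied(x, y)

# ... while the fractional LP point (x, y) = (2.6, 0) violates it.
print("fractional point cut off:", not mir_satisfied(2.6, 0.0))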

227 citations


Journal ArticleDOI
TL;DR: It is argued that two-stage (say, linear) stochastic programming problems can be solved to reasonable accuracy by Monte Carlo sampling techniques, while there are indications that the complexity of multistage programs grows quickly with the number of stages.
Abstract: In this paper we discuss computational complexity and risk averse approaches to two- and multistage stochastic programming problems. We argue that two-stage (say, linear) stochastic programming problems can be solved to reasonable accuracy by Monte Carlo sampling techniques, while there are indications that the complexity of multistage programs grows quickly with the number of stages. We discuss an extension of coherent risk measures to a multistage setting and, in particular, dynamic programming equations for such problems.
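A minimal sketch of the Monte Carlo sampling (sample average approximation) idea for a two-stage problem, on a toy newsvendor-style instance with invented data (illustrative only, not from the paper):

import numpy as np

rng = np.random.default_rng(0)

# First stage: order quantity x at unit cost c.  Second stage: demand D is revealed
# and the recourse cost charges lost sales at p and leftover stock at h.
c, p, h = 1.0, 3.0, 0.5
recourse = lambda x, d: p * np.maximum(d - x, 0.0) + h * np.maximum(x - d, 0.0)

# Sample average approximation: replace E[recourse(x, D)] by an average over N scenarios.
N = 10000
demand = rng.exponential(scale=10.0, size=N)
saa_objective = lambda x: c * x + recourse(x, demand).mean()

# The SAA objective is convex piecewise linear in x; a coarse grid search suffices here.
grid = np.linspace(0.0, 60.0, 1201)
x_star = grid[np.argmin([saa_objective(x) for x in grid])]
print("SAA order quantity:", round(float(x_star), 2))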

221 citations


Journal ArticleDOI
TL;DR: This paper develops a simple framework for estimating a Lipschitz constant for the gradient of some symmetric functions of eigenvalues of symmetric matrices and analyzes the efficiency of the special gradient-type schemes on the problems of minimizing the maximal eigenvalue or the spectral radius of the matrix.
Abstract: In this paper we extend the smoothing technique (Nesterov in Math Program 103(1): 127–152, 2005; Nesterov in Unconstrained convex minimization in relative scale, 2003) to problems of semidefinite optimization. For that, we develop a simple framework for estimating a Lipschitz constant for the gradient of some symmetric functions of eigenvalues of symmetric matrices. Using this technique, we can justify the Lipschitz constants for some natural approximations of the maximal eigenvalue and the spectral radius of symmetric matrices. We analyze the efficiency of special gradient-type schemes on the problems of minimizing the maximal eigenvalue or the spectral radius of a matrix that depends linearly on the design variables. We show that in the first case the number of iterations of the method is bounded by $$O({1}/{\epsilon})$$, where $$\epsilon$$ is the required absolute accuracy of the problem. In the second case, the number of iterations is bounded by $${({4}/{\delta})} \sqrt{(1 + \delta) r\, \ln r }$$, where δ is the required relative accuracy and r is the maximal rank of the corresponding linear matrix inequality. Thus, the latter method is a fully polynomial approximation scheme.
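To make the smoothing idea concrete, here is the standard log-sum-exp smoothing of the maximal eigenvalue, f_mu(X) = mu · log Σ_i exp(λ_i(X)/mu), which satisfies λ_max(X) ≤ f_mu(X) ≤ λ_max(X) + mu·log n and has a (1/mu)-Lipschitz gradient. This is an illustrative computation in that spirit, not code from the paper:

import numpy as np

def smoothed_lambda_max(X, mu):
    # f_mu(X) = mu * log(sum_i exp(lambda_i(X)/mu)); its gradient is the softmax of the
    # spectrum pushed back through the eigenvectors: a symmetric PSD matrix of trace 1.
    lam, U = np.linalg.eigh(X)
    e = np.exp((lam - lam.max()) / mu)              # shift for numerical stability
    value = lam.max() + mu * np.log(e.sum())
    grad = U @ np.diag(e / e.sum()) @ U.T
    return value, grad

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
X = (A + A.T) / 2.0
val, grad = smoothed_lambda_max(X, mu=0.05)
lam_max = np.linalg.eigvalsh(X).max()
print(lam_max, "<=", val, "<=", lam_max + 0.05 * np.log(5))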

184 citations


Journal ArticleDOI
TL;DR: It is proved that feasible limit points satisfying the Constant Positive Linear Dependence constraint qualification are KKT points, and boundedness of the penalty parameters is established under suitable assumptions.
Abstract: Two Augmented Lagrangian algorithms for solving KKT systems are introduced. The algorithms differ in the way in which penalty parameters are updated. Possibly infeasible accumulation points are characterized. It is proved that feasible limit points that satisfy the Constant Positive Linear Dependence constraint qualification are KKT solutions. Boundedness of the penalty parameters is proved under suitable assumptions. Numerical experiments are presented.
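For orientation, a bare-bones augmented Lagrangian iteration on an equality-constrained toy problem (a generic sketch of the framework with invented data, not the safeguarded algorithms of the paper):

import numpy as np

# Toy problem: minimize ||x - x0||^2 subject to a^T x = 1.
x0 = np.array([2.0, 0.5])
a = np.array([1.0, 1.0])
h = lambda x: a @ x - 1.0                     # equality constraint residual

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(15):
    # Inner step: minimize ||x - x0||^2 + lam*h(x) + (rho/2)*h(x)^2 exactly
    # (a convex quadratic, so one linear solve suffices).
    H = 2.0 * np.eye(2) + rho * np.outer(a, a)
    x = np.linalg.solve(H, 2.0 * x0 - lam * a + rho * a)
    lam += rho * h(x)                         # first-order multiplier update
print("x =", x, " multiplier =", lam, " violation =", h(x))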

174 citations


Journal ArticleDOI
TL;DR: Key ingredients of the algorithm are a delayed column-and-row generation technique, exploiting the special structure of the formulation, to solve the LP-relaxation, and cutting planes to strengthen the formulation and limit the size of the enumeration tree.
Abstract: Given a directed graph G(V,A), the p-Median problem consists of determining p nodes (the median nodes) minimizing the total distance from the other nodes of the graph. We present a Branch-and-Cut-and-Price algorithm yielding provably good solutions for instances with |V|≤3795. Key ingredients of the algorithm are a delayed column-and-row generation technique, exploiting the special structure of the formulation, to solve the LP-relaxation, and cutting planes to strengthen the formulation and limit the size of the enumeration tree.
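To fix the problem definition, here is a brute-force evaluation on a tiny symmetric toy instance (purely illustrative; the paper's Branch-and-Cut-and-Price algorithm is of course far more sophisticated and handles instances with thousands of nodes):

from itertools import combinations

# Distance matrix of a tiny 5-node instance; choose p = 2 medians minimizing
# the total distance from every node to its nearest median.
d = [
    [0, 2, 7, 4, 9],
    [2, 0, 3, 5, 8],
    [7, 3, 0, 6, 2],
    [4, 5, 6, 0, 3],
    [9, 8, 2, 3, 0],
]
p, n = 2, len(d)

cost = lambda medians: sum(min(d[i][j] for j in medians) for i in range(n))
best = min(combinations(range(n), p), key=cost)
print("median nodes:", best, " total distance:", cost(best))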

169 citations


Journal ArticleDOI
TL;DR: This paper considers the problem (P) of minimizing a quadratic function q(x)=x^tQx+c^tx of binary variables and devises two preprocessing methods for convexifying the objective: the first computes the smallest eigenvalue of Q, while the second obtains the vector u from a classical SDP relaxation of (P).
Abstract: In this paper, we consider problem (P) of minimizing a quadratic function q(x)=x^tQx+c^tx of binary variables. Our main idea is to use the recent Mixed Integer Quadratic Programming (MIQP) solvers. But, for this, we have to first convexify the objective function q(x). A classical trick is to raise up the diagonal entries of Q by a vector u until (Q+diag(u)) is positive semidefinite. Then, using the fact that x_i^2=x_i, we can obtain an equivalent convex objective function, which can then be handled by an MIQP solver. Hence, computing a suitable vector u constitutes a preprocessing phase in this exact solution method. We devise two different preprocessing methods. The first one is straightforward and consists in computing the smallest eigenvalue of Q. In the second method, vector u is obtained once a classical SDP relaxation of (P) is solved. We carry out computational tests using the generator of (Pardalos and Rodgers, 1990) and we compare our two solution methods to several other exact solution methods. Furthermore, we report computational results for the max-cut problem.
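A minimal numerical sketch of the eigenvalue-based preprocessing described above, with randomly generated data: shifting the diagonal of Q by u = max(0, −λ_min(Q)) and compensating in the linear term via x_i^2 = x_i makes the objective convex while leaving it unchanged on binary points.

import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 6
Q = rng.integers(-5, 6, size=(n, n)).astype(float)
Q = (Q + Q.T) / 2.0                            # symmetric, typically indefinite
c = rng.integers(-5, 6, size=n).astype(float)
q = lambda x: x @ Q @ x + c @ x

# Convexification: Q + u*I is positive semidefinite for u = max(0, -lambda_min(Q)),
# and since x_i^2 = x_i on binary vectors, subtracting u from c preserves q there.
u = max(0.0, -np.linalg.eigvalsh(Q).min())
Q_u, c_u = Q + u * np.eye(n), c - u * np.ones(n)
q_convex = lambda x: x @ Q_u @ x + c_u @ x

print("Q_u PSD:", np.linalg.eigvalsh(Q_u).min() >= -1e-9)
print("objectives agree on {0,1}^n:",
      all(np.isclose(q(np.array(v, float)), q_convex(np.array(v, float)))
          for v in product([0, 1], repeat=n)))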

Journal ArticleDOI
Yinyu Ye1
TL;DR: A continuous path leading to the set of Arrow–Debreu equilibria, similar to the central path developed for linear programming interior-point methods, is presented; it is derived from weighted logarithmic utility and barrier functions and the Brouwer fixed-point theorem.
Abstract: We present polynomial-time interior-point algorithms for solving the Fisher and Arrow–Debreu competitive market equilibrium problems with linear utilities and n players. Both of them have the arithmetic operation complexity bound of $$O(n^{4}\log(1/\epsilon))$$ for computing an $$\epsilon$$-equilibrium solution. If the problem data are rational numbers and their bit-length is L, then the bound to generate an exact solution is O(n^4 L), which is in line with the best complexity bound for linear programming of the same dimension and size. This is a significant improvement over the previously best bound $$O(n^{8}\log(1/\epsilon))$$ for approximating the two problems using other methods. The key ingredient to derive these results is to show that these problems admit convex optimization formulations, efficient barrier functions and fast rounding techniques. We also present a continuous path leading to the set of the Arrow–Debreu equilibrium, similar to the central path developed for linear programming interior-point methods. This path is derived from the weighted logarithmic utility and barrier functions and the Brouwer fixed-point theorem. The defining equations are bilinear and possess some primal-dual structure for the application of the Newton-based path-following method.
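For context, the convex optimization formulation underlying the Fisher case with linear utilities is the classical Eisenberg–Gale program (recalled here in our own notation; it is not restated in the abstract): with buyer budgets w_i and utility coefficients u_{ij},
$$\max_{x\ge 0}\ \sum_i w_i\,\log\Big(\sum_j u_{ij}x_{ij}\Big)\quad\text{s.t.}\quad \sum_i x_{ij}\le 1\ \ \text{for every good } j,$$
whose optimal allocations, together with the multipliers of the supply constraints as prices, constitute the Fisher market equilibrium.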

Journal ArticleDOI
TL;DR: Bounds on the error between an interpolating polynomial and the true function can be used in the convergence theory of derivative free sampling methods; these bounds involve a constant reflecting the quality of the interpolation set, which is shown to be related to the condition number of a certain matrix.
Abstract: We consider derivative free methods based on sampling approaches for nonlinear optimization problems where derivatives of the objective function are not available and cannot be directly approximated. We show how the bounds on the error between an interpolating polynomial and the true function can be used in the convergence theory of derivative free sampling methods. These bounds involve a constant that reflects the quality of the interpolation set. The main task of such a derivative free algorithm is to maintain an interpolation sampling set so that this constant remains small, and at least uniformly bounded. This constant is often described through the basis of Lagrange polynomials associated with the interpolation set. We provide an alternative, more intuitive, definition for this concept and show how this constant is related to the condition number of a certain matrix. This relation enables us to provide a range of algorithms whilst maintaining the interpolation set so that this condition number or the geometry constant remain uniformly bounded. We also derive bounds on the error between the model and the function and between their derivatives, directly in terms of this condition number and of this geometry constant.
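As a small illustration of the connection between the geometry of the sample set and a matrix condition number (our own toy computation, in the spirit of the definitions above): for linear interpolation in R^2 the relevant matrix evaluates the basis {1, x_1, x_2} at the sample points, and nearly affinely dependent ("badly poised") points make it ill-conditioned.

import numpy as np

def interpolation_matrix(points):
    # basis {1, x1, x2} evaluated at the sample points (linear interpolation in R^2)
    return np.array([[1.0, p[0], p[1]] for p in points])

well_poised = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
badly_poised = [(0.0, 0.0), (1.0, 0.001), (2.0, 0.001)]   # nearly collinear sample points

for pts in (well_poised, badly_poised):
    print(pts, " condition number:", np.linalg.cond(interpolation_matrix(pts)))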

Journal ArticleDOI
TL;DR: Various examples of applications in optimization, probability, financial economics and optimal control are investigated, which all can be viewed as particular instances of the GPM.
Abstract: We consider the generalized problem of moments (GPM) from a computational point of view and provide a hierarchy of semidefinite programming relaxations whose sequence of optimal values converges to the optimal value of the GPM. We then investigate in detail various examples of applications in optimization, probability, financial economics and optimal control, which all can be viewed as particular instances of the GPM.

Journal ArticleDOI
TL;DR: A general technique is described to reduce the size of semidefinite programming problems on which a permutation group is acting, based on a low-order matrix $$*$$-representation of the commutant (centralizer ring) of the matrix algebra generated by the permutation matrices.
Abstract: We consider semidefinite programming problems on which a permutation group is acting. We describe a general technique to reduce the size of such problems, exploiting the symmetry. The technique is based on a low-order matrix $$*$$-representation of the commutant (centralizer ring) of the matrix algebra generated by the permutation matrices. We apply it to extending a method of de Klerk et al. that gives a semidefinite programming lower bound to the crossing number of complete bipartite graphs. It implies that cr(K_{8,n}) ≥ 2.9299n^2 − 6n, cr(K_{9,n}) ≥ 3.8676n^2 − 8n, and (for any m ≥ 9) $$\lim_{n\to\infty}\frac{{\rm cr}(K_{m,n})}{Z(m,n)}\geq 0.8594\frac{m}{m-1},$$ where Z(m,n) is the Zarankiewicz number $$\lfloor\frac{1}{4}(m-1)^2\rfloor\lfloor\frac{1}{4}(n-1)^2\rfloor$$, which is the conjectured value of cr(K_{m,n}). Here the best factor previously known was 0.8303 instead of 0.8594.

Journal ArticleDOI
TL;DR: This work proves the finite convergence of a hierarchy of semidefinite relaxations introduced by Lasserre; the semidefinite representation involves combinatorial moment matrices indexed by a basis of the quotient vector space ℝ[x_1, . . . , x_n]/I.
Abstract: We consider the problem of minimizing a polynomial over a set defined by polynomial equations and inequalities. When the polynomial equations have a finite set of complex solutions, we can reformulate this problem as a semidefinite programming problem. Our semidefinite representation involves combinatorial moment matrices, which are matrices indexed by a basis of the quotient vector space ℝ[x_1, . . . , x_n]/I, where I is the ideal generated by the polynomial equations in the problem. Moreover, we prove the finite convergence of a hierarchy of semidefinite relaxations introduced by Lasserre. Semidefinite approximations can be constructed by considering truncated combinatorial moment matrices; rank conditions are given (in a grid case) that ensure that the approximation solves the original problem to optimality.

Journal ArticleDOI
TL;DR: This work proposes primal–dual path-following Mehrotra-type predictor–corrector methods for solving convex quadratic semidefinite programming problems, and shows that the corresponding preconditioned matrices have favorable asymptotic eigenvalue distributions for fast convergence under suitable nondegeneracy assumptions.
Abstract: We propose primal–dual path-following Mehrotra-type predictor–corrector methods for solving convex quadratic semidefinite programming (QSDP) problems of the form: $$\min_{X} \{\frac{1}{2} X\bullet \mathcal{Q}(X) + C\bullet X : \mathcal{A} (X) = b, X\succeq 0\}$$, where $$\mathcal{Q}$$ is a self-adjoint positive semidefinite linear operator on $$\mathcal{S}^n$$, b ∈ R^m, and $$\mathcal{A}$$ is a linear map from $$\mathcal{S}^n$$ to R^m. At each interior-point iteration, the search direction is computed from a dense symmetric indefinite linear system (called the augmented equation) of dimension m + n(n + 1)/2. Such linear systems are typically very large and can only be solved by iterative methods. We propose three classes of preconditioners for the augmented equation, and show that the corresponding preconditioned matrices have favorable asymptotic eigenvalue distributions for fast convergence under suitable nondegeneracy assumptions. Numerical experiments on a variety of QSDPs with n up to 1600 are performed and the computational results show that our methods are efficient and robust.

Journal ArticleDOI
TL;DR: The projective algorithms converge under more general conditions than prior splitting methods, allowing the proximal parameter to vary from iteration to iteration, and even from operator to operator, while retaining convergence for essentially arbitrary pairs of operators.
Abstract: A splitting method for two monotone operators A and B is an algorithm that attempts to converge to a zero of the sum A + B by solving a sequence of subproblems, each of which involves only the operator A, or only the operator B. Prior algorithms of this type can all in essence be categorized into three main classes: the Douglas/Peaceman-Rachford class, the forward-backward class, and the little-used double-backward class. Through a certain “extended” solution set in a product space, we construct a fundamentally new class of splitting methods for pairs of general maximal monotone operators in Hilbert space. Our algorithms are essentially standard projection methods, using splitting decomposition to construct separators. We prove convergence through Fejér monotonicity techniques, but showing Fejér convergence of a different sequence to a different set than in earlier splitting methods. Our projective algorithms converge under more general conditions than prior splitting methods, allowing the proximal parameter to vary from iteration to iteration, and even from operator to operator, while retaining convergence for essentially arbitrary pairs of operators. The new projective splitting class also contains noteworthy preexisting methods either as conventional special cases or excluded boundary cases.
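For readers unfamiliar with the prior classes named above, here is the classical Douglas–Rachford iteration on a toy feasibility problem, where the two resolvents reduce to projections; this illustrates the Douglas/Peaceman–Rachford class only, not the paper's new projective splitting methods.

import numpy as np

# Find a zero of N_C + N_D (normal cones of two convex sets), i.e. a point of C ∩ D,
# where C is the unit ball and D is the half-space x1 + x2 <= 1.
norm = np.linalg.norm
proj_C = lambda z: z if norm(z) <= 1.0 else z / norm(z)
proj_D = lambda z: z - max(0.0, z[0] + z[1] - 1.0) / 2.0 * np.ones(2)

z = np.array([5.0, -3.0])
for _ in range(200):
    x = proj_C(z)                 # resolvent of N_C
    y = proj_D(2.0 * x - z)       # resolvent of N_D at the reflected point
    z = z + (y - x)               # Douglas-Rachford update
print("x =", x, " in ball:", norm(x) <= 1 + 1e-6, " in half-space:", x[0] + x[1] <= 1 + 1e-6)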

Journal ArticleDOI
TL;DR: A new variant of this method is introduced and its global convergence is proved for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex.
Abstract: Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of thousands of variables. In the paper [Haarala, Miettinen, Makela, Optimization Methods and Software, 19, (2004), pp. 673–692] we have described an efficient method for large-scale nonsmooth optimization. In this paper, we introduce a new variant of this method and prove its global convergence for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex. In addition, we give some encouraging results from numerical experiments.

Journal ArticleDOI
TL;DR: This paper examines how global optimality of non-convex constrained optimization problems is related to Lagrange multiplier conditions, and obtains necessary global optimality conditions, which are different from the Lagrange multiplier conditions, for special classes of quadratic optimization problems.
Abstract: In this paper, we first examine how global optimality of non-convex constrained optimization problems is related to Lagrange multiplier conditions. We then establish Lagrange multiplier conditions for global optimality of general quadratic minimization problems with quadratic constraints. We also obtain necessary global optimality conditions, which are different from the Lagrange multiplier conditions for special classes of quadratic optimization problems. These classes include weighted least squares with ellipsoidal constraints, and quadratic minimization with binary constraints. We discuss examples which demonstrate that our optimality conditions can effectively be used for identifying global minimizers of certain multi-extremal non-convex quadratic optimization problems.

Journal ArticleDOI
TL;DR: It is shown that when F is convex in each of the X_j, a natural semidefinite relaxation of the problem is tight within a factor slowly growing with the size m of the matrices.
Abstract: Let B_i be deterministic real symmetric m × m matrices, and ξ_i be independent random scalars with zero mean and “of order of one” (e.g., $$\xi_{i}\sim \mathcal{N}(0,1)$$). We are interested in knowing under what conditions the “typical norm” of the random matrix $$S_N = \sum_{i=1}^N\xi_{i}B_{i}$$ is of order of 1. An evident necessary condition is $${\bf E}\{S_{N}^{2}\}\preceq O(1)I$$, which, essentially, translates to $$\sum_{i=1}^{N}B_{i}^{2}\preceq I$$; a natural conjecture is that the latter condition is sufficient as well. In the paper, we prove a relaxed version of this conjecture, specifically, that under the above condition the typical norm of S_N is $$\leq O(1)m^{{1\over 6}}$$: $${\rm Prob}\{||S_N||>\Omega m^{1/6}\}\leq O(1)\exp\{-O(1)\Omega^2\}$$ for all Ω > 0. We outline some applications of this result, primarily in investigating the quality of semidefinite relaxations of a general quadratic optimization problem with orthogonality constraints $${\rm Opt} = \max\limits_{X_{j}\in{\bf R}^{m\times m}}\left\{F(X_1,\ldots ,X_k): X_jX_j^{\rm T}=I,\,j=1,\ldots ,k\right\}$$, where F is quadratic in X = (X_1, ... , X_k). We show that when F is convex in each of the X_j, a natural semidefinite relaxation of the problem is tight within a factor slowly growing with the size m of the matrices X_j: $${\rm Opt}\leq {\rm Opt}(SDP)\leq O(1) [m^{1/3}+\ln k]{\rm Opt}$$.
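A quick Monte Carlo illustration of the setting (our own sanity check with random data, not an experiment from the paper): generate symmetric B_i, rescale them so that Σ_i B_i^2 ⪯ I, and observe that the spectral norm of S_N = Σ_i ξ_i B_i stays moderate.

import numpy as np

rng = np.random.default_rng(3)
m, N = 30, 50

# Random symmetric matrices, rescaled by a scalar so that sum_i B_i^2 <= I (PSD order).
B = [(A + A.T) / 2.0 for A in (rng.standard_normal((m, m)) for _ in range(N))]
scale = np.sqrt(np.linalg.eigvalsh(sum(Bi @ Bi for Bi in B)).max())
B = [Bi / scale for Bi in B]

norms = []
for _ in range(200):
    xi = rng.standard_normal(N)                       # independent N(0,1) scalars
    S = sum(x * Bi for x, Bi in zip(xi, B))
    norms.append(np.linalg.norm(S, 2))                # spectral norm of S_N
print("typical ||S_N||:", round(float(np.mean(norms)), 3), "  m^(1/6):", round(m ** (1 / 6), 3))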

Journal ArticleDOI
TL;DR: In this article, the authors used semidefinite programming (SDP) relaxations of the quadratic assignment problem (QAP) and solved them approximately using a dynamic version of the bundle method.
Abstract: Semidefinite programming (SDP) has recently turned out to be a very powerful tool for approximating some NP-hard problems. The nature of the quadratic assignment problem (QAP) suggests SDP as a way to derive tractable relaxations. We recall some SDP relaxations of QAP and solve them approximately using a dynamic version of the bundle method. The computational results demonstrate the efficiency of the approach. Our bounds are currently among the strongest ones available for QAP. We investigate their potential for branch and bound settings by looking also at the bounds in the first levels of the branching tree.

Journal ArticleDOI
TL;DR: The existence of a unique equilibrium, characterized as the solution of an unconstrained strictly convex minimization problem of low dimension, is established, providing a formal convergence proof for MSA.
Abstract: We analyze an equilibrium model for traffic networks based on stochastic dynamic programming. In this model passengers move towards their destinations by a sequential process of arc selection based on a discrete choice model at every intermediate node in their trip. Route selection is the outcome of this sequential process while network flows correspond to the invariant measures of the underlying Markov chains. The approach may handle different discrete choice models at every node, including the possibility of mixing deterministic and stochastic distribution rules. It can also be used over a multi-modal network in order to model the simultaneous selection of mode and route, as well as to treat the case of elastic demands. We establish the existence of a unique equilibrium, which is characterized as the solution of an unconstrained strictly convex minimization problem of low dimension. We report some numerical experiences comparing the performance of the method of successive averages (MSA) and Newton’s method on one small and one large network, providing a formal convergence proof for MSA.
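For reference, the method of successive averages (MSA) used in the comparison is the averaging scheme x_{k+1} = x_k + (y(x_k) − x_k)/(k+1), where y(x) is the response (e.g., the network loading) computed at the current flows. Below is a generic sketch on an abstract fixed-point map with made-up data, not the paper's traffic model:

import numpy as np

def msa(response, x0, iterations=2000):
    # x_{k+1} = x_k + (response(x_k) - x_k) / (k + 1)
    x = np.array(x0, dtype=float)
    for k in range(1, iterations + 1):
        x += (response(x) - x) / (k + 1)
    return x

# Toy contraction whose fixed point is (1, 2); MSA's diminishing steps are robust but slow.
response = lambda x: 0.5 * x + np.array([0.5, 1.0])
print(msa(response, [10.0, -4.0]))        # slowly tends to [1., 2.]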

Journal ArticleDOI
TL;DR: It is shown that the problem of accumulating Jacobian matrices by using a minimal number of floating-point operations is NP-complete by reduction from Ensemble Computation.
Abstract: We show that the problem of accumulating Jacobian matrices by using a minimal number of floating-point operations is NP-complete by reduction from Ensemble Computation. The proof makes use of the fact that, deviating from the state-of-the-art assumption, algebraic dependences can exist between the local partial derivatives. It follows immediately that the same problem for directional derivatives, adjoints, and higher derivatives is NP-complete, too.

Journal ArticleDOI
TL;DR: A sensitivity result for nonlinear semidefinite programs is presented, and a self-contained proof of local quadratic convergence of the SSP method is given; the SSP method is a generalization of the well-known sequential quadratic programming method for standard nonlinear programs.
Abstract: We consider the solution of nonlinear programs with nonlinear semidefiniteness constraints. The need for an efficient exploitation of the cone of positive semidefinite matrices makes the solution of such nonlinear semidefinite programs more complicated than the solution of standard nonlinear programs. This paper studies a sequential semidefinite programming (SSP) method, which is a generalization of the well-known sequential quadratic programming method for standard nonlinear programs. We present a sensitivity result for nonlinear semidefinite programs, and then based on this result, we give a self-contained proof of local quadratic convergence of the SSP method. We also describe a class of nonlinear semidefinite programs that arise in passive reduced-order modeling, and we report results of some numerical experiments with the SSP method applied to problems in that class.

Journal ArticleDOI
TL;DR: A computational framework based on a population-based stochastic method in which different candidate solutions for a single problem are maintained in a population which evolves in such a way as to guarantee a sufficient diversity among solutions.
Abstract: When dealing with extremely hard global optimization problems, i.e. problems with a large number of variables and a huge number of local optima, heuristic procedures are the only possible choice. In this situation, lacking any possibility of guaranteeing global optimality for most problem instances, it is quite difficult to establish rules for discriminating among different algorithms. We think that in order to judge the quality of new global optimization methods, different criteria might be adopted like, e.g.:

Journal ArticleDOI
TL;DR: The RDM method has several advantages including robustness and provision of high accuracy compared to traditional electronic structure methods, although its computational time and memory consumption are still extremely large.
Abstract: It has been a long-time dream in electronic structure theory in physical chemistry/chemical physics to compute ground state energies of atomic and molecular systems by employing a variational approach in which the two-body reduced density matrix (RDM) is the unknown variable. Realization of the RDM approach has benefited greatly from recent developments in semidefinite programming (SDP). We present the actual state of this new application of SDP as well as the formulation of these SDPs, which can be arbitrarily large. Numerical results using parallel computation on high performance computers are given. The RDM method has several advantages including robustness and provision of high accuracy compared to traditional electronic structure methods, although its computational time and memory consumption are still extremely large.

Journal ArticleDOI
TL;DR: Several linear time algorithms for the continuous quadratic knapsack problem are given, and cycling and wrong-convergence examples in a number of existing algorithms are reported.
Abstract: We give several linear time algorithms for the continuous quadratic knapsack problem. In addition, we report cycling and wrong-convergence examples in a number of existing algorithms, and give encouraging computational results for large-scale problems.
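To fix notation for the problem treated above (minimize Σ_i d_i x_i^2/2 − a_i x_i subject to Σ_i b_i x_i = r and bound constraints), here is a simple bisection on the multiplier of the equality constraint; it is only a sketch of the structure the paper's algorithms exploit, not one of their linear time breakpoint methods.

import numpy as np

def cqk_bisection(d, a, b, r, lo, hi, tol=1e-10):
    # minimize 0.5*sum(d_i x_i^2) - sum(a_i x_i)  s.t.  sum(b_i x_i) = r,  lo <= x <= hi,
    # with d > 0 and b > 0.  For a multiplier t the minimizer is the clipped point below,
    # and g(t) = b^T x(t) - r is nondecreasing, so bisection recovers the right t.
    x = lambda t: np.clip((a + t * b) / d, lo, hi)
    g = lambda t: b @ x(t) - r
    t_lo, t_hi = -1e6, 1e6
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        t_lo, t_hi = (t_mid, t_hi) if g(t_mid) < 0 else (t_lo, t_mid)
    return x(0.5 * (t_lo + t_hi))

n = 5
d, b = np.ones(n), np.ones(n)
a = np.array([3.0, -1.0, 2.0, 0.5, 1.0])
x = cqk_bisection(d, a, b, r=2.5, lo=np.zeros(n), hi=np.ones(n))
print(x, " constraint value:", x @ b)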

Journal ArticleDOI
TL;DR: This work investigates computational simplifications for graphs with rich automorphism groups, and explores several strengthenings of the Lovász theta number that can be tightened toward either χ(G) or ω(G).
Abstract: The semidefinite programming formulation of the Lovász theta number does not only give one of the best polynomial simultaneous bounds on the chromatic number χ(G) or the clique number ω(G) of a graph, but also leads to heuristics for graph coloring and extracting large cliques. This semidefinite programming formulation can be tightened toward either χ(G) or ω(G) by adding several types of cutting planes. We explore several such strengthenings, and show that some of them can be computed with the same effort as the theta number. We also investigate computational simplifications for graphs with rich automorphism groups.
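For reference, one standard SDP formulation of the theta number discussed above (classical, stated here in our own notation, with J the all-ones matrix):
$$\vartheta(G)=\max\big\{\langle J,X\rangle \,:\, \mathrm{tr}(X)=1,\ X_{ij}=0\ \text{for all edges } ij\in E(G),\ X\succeq 0\big\},$$
and Lovász's sandwich theorem applied to the complement graph gives ω(G) ≤ ϑ(Ḡ) ≤ χ(G), which is the simultaneous bound referred to above.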

Journal ArticleDOI
TL;DR: The analysis and the numerical evidence show that exact complementarity can be achieved finitely even when the elastic-mode formulation is solved inexactly, and global convergence properties of methods based on this formulation are studied.
Abstract: The elastic-mode formulation of the problem of minimizing a nonlinear function subject to equilibrium constraints has appealing local properties in that, for a finite value of the penalty parameter, local solutions satisfying first- and second-order necessary optimality conditions for the original problem are also first- and second-order points of the elastic-mode formulation. Here we study global convergence properties of methods based on this formulation, which involve generating an (exact or inexact) first- or second-order point of the formulation, for nondecreasing values of the penalty parameter. Under certain regularity conditions on the active constraints, we establish finite or asymptotic convergence to points having a certain stationarity property (such as strong stationarity, M-stationarity, or C-stationarity). Numerical experience with these approaches is discussed. In particular, our analysis and the numerical evidence show that exact complementarity can be achieved finitely even when the elastic-mode formulation is solved inexactly.