
Showing papers in "SIAM Journal on Optimization" in 2006


Journal ArticleDOI
TL;DR: The main result of this paper is that the general MADS framework is flexible enough to allow the generation of an asymptotically dense set of refining directions along which the Clarke derivatives are nonnegative.
Abstract: This paper addresses the problem of minimization of a nonsmooth function under general nonsmooth constraints when no derivatives of the objective or constraint functions are available. We introduce the mesh adaptive direct search (MADS) class of algorithms which extends the generalized pattern search (GPS) class by allowing local exploration, called polling, in an asymptotically dense set of directions in the space of optimization variables. This means that under certain hypotheses, including a weak constraint qualification due to Rockafellar, MADS can treat constraints by the extreme barrier approach of setting the objective to infinity for infeasible points and treating the problem as unconstrained. The main GPS convergence result is to identify limit points $\hat{x}$, where the Clarke generalized derivatives are nonnegative in a finite set of directions, called refining directions. Although in the unconstrained case, nonnegative combinations of these directions span the whole space, the fact that there can only be finitely many GPS refining directions limits rigorous justification of the barrier approach to finitely many linear constraints for GPS. The main result of this paper is that the general MADS framework is flexible enough to allow the generation of an asymptotically dense set of refining directions along which the Clarke derivatives are nonnegative. We propose an instance of MADS for which the refining directions are dense in the hypertangent cone at $\hat{x}$ with probability 1 whenever the iterates associated with the refining directions converge to a single $\hat{x}$. The instance of MADS is compared to versions of GPS on some test problems. We also illustrate the limitation of our results with examples.

1,207 citations
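A worked statement of the extreme barrier approach described in the abstract (the notation is generic: $\Omega$ denotes the feasible set and $f_\Omega$ the barrier function, not necessarily the paper's symbols):
\[
f_\Omega(x)=\begin{cases} f(x) & \text{if } x\in\Omega,\\ +\infty & \text{otherwise,}\end{cases}
\]
so that MADS is simply applied to $f_\Omega$ as if the problem were unconstrained.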


Journal ArticleDOI
TL;DR: A large deviation-type approximation, referred to as “Bernstein approximation,” of the chance constrained problem is built that is convex and efficiently solvable; the construction is extended to ambiguous chance constrained problems, where the random perturbations are independent and their distributions are only known to belong to a given convex compact set.
Abstract: We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given probability close to one, a system of randomly perturbed convex constraints. This problem may happen to be computationally intractable; our goal is to build a computationally tractable approximation, i.e., an efficiently solvable deterministic optimization program whose feasible set is contained in that of the chance constrained problem. We construct a general class of such convex conservative approximations of the corresponding chance constrained problem. Moreover, under the assumptions that the constraints are affine in the perturbations and the entries in the perturbation vector are mutually independent random variables, we build a large deviation-type approximation, referred to as “Bernstein approximation,” of the chance constrained problem. This approximation is convex and efficiently solvable. We propose a simulation-based scheme for bounding the optimal value in the chance constrained problem and report numerical experiments aimed at comparing the Bernstein and well-known scenario approximation approaches. Finally, we extend our construction to the case of ambiguous chance constrained problems, where the random perturbations are independent and the collection of their distributions is only known to belong to a given convex compact set rather than being known exactly, while the chance constraint must be satisfied for every distribution in this set.

1,099 citations
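A hedged sketch of the generic shape of such an approximation, assuming the perturbed constraint is written as $g_0(x)+\sum_{i=1}^d \xi_i g_i(x)\le 0$ with mutually independent $\xi_i$ whose log-moment-generating functions are $\Lambda_i(s)=\log\mathbb{E}[e^{s\xi_i}]$ (this notation is illustrative, not the paper's): by the Chernoff bound, the deterministic constraint
\[
\inf_{t>0}\Big[g_0(x)+t\sum_{i=1}^{d}\Lambda_i\big(g_i(x)/t\big)-t\log\varepsilon\Big]\;\le\;0
\]
implies $\mathrm{Prob}\{g_0(x)+\sum_i\xi_i g_i(x)>0\}\le\varepsilon$, and it is efficiently computable whenever the $\Lambda_i$ are available or can be upper bounded.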


Journal ArticleDOI
TL;DR: An analogous inequality in which the derivative $\nabla f(x)$ can be replaced by any element $x^{\ast}$ of the subdifferential $\partial f(x)$ of $f$ is established, which provides new insights into the convergence aspects of subgradient-type dynamical systems.
Abstract: Given a real-analytic function $f:\mathbb{R}^{n} \rightarrow \mathbb{R}$ and a critical point $a \in \mathbb{R}^{n}$, the Łojasiewicz inequality asserts that there exists $\theta\in\lbrack\frac{1}{2},1)$ such that the function $|f-f(a)|^{\theta}\,\Vert\nabla f\Vert^{-1}$ remains bounded around $a$. In this paper, we extend the above result to a wide class of nonsmooth functions (that possibly admit the value $+\infty$), by establishing an analogous inequality in which the derivative $\nabla f(x)$ can be replaced by any element $x^{\ast}$ of the subdifferential $\partial f(x)$ of $f$. Like its smooth version, this result provides new insights into the convergence aspects of subgradient-type dynamical systems. Provided that the function $f$ is sufficiently regular (for instance, convex or lower-$C^{2}$), the bounded trajectories of the corresponding subgradient dynamical system can be shown to be of finite length. Explicit estimates of the rate of convergence are also derived.

732 citations
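In symbols, the nonsmooth inequality established here reads (a restatement of the description above): there exist $\theta\in\lbrack\frac{1}{2},1)$, $C>0$ and a neighborhood $U$ of the critical point $a$ such that
\[
|f(x)-f(a)|^{\theta}\;\le\;C\,\Vert x^{\ast}\Vert \quad\text{for all } x\in U \text{ and all } x^{\ast}\in\partial f(x),
\]
which reduces to the classical Łojasiewicz inequality when $f$ is real-analytic and $\partial f(x)=\{\nabla f(x)\}$.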


Journal ArticleDOI
TL;DR: Using a correlative sparsity pattern graph, sets of the supports for sums of squares polynomials that lead to efficient SOS and semidefinite program (SDP) relaxations are obtained.
Abstract: Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of the supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite program (SDP) relaxations are obtained. Numerical results from various test problems are included to show the improved performance of the SOS and SDP relaxations.

525 citations


Journal ArticleDOI
TL;DR: A class of interior gradient algorithms is derived which exhibits an $O(k^{-2})$ global convergence rate estimate and is illustrated with many applications and examples, including some new explicit and simple algorithms for conic optimization problems.
Abstract: Interior gradient (subgradient) and proximal methods for convex constrained minimization have been much studied, in particular for optimization problems over the nonnegative orthant. These methods use non-Euclidean projections and proximal distance functions to exploit the geometry of the constraints. In this paper, we identify a simple mechanism that allows us to derive global convergence results for the produced iterates as well as improved global rates of convergence estimates for a wide class of such methods, and with more general convex constraints. Our results are illustrated with many applications and examples, including some new explicit and simple algorithms for conic optimization problems. In particular, we derive a class of interior gradient algorithms which exhibits an $O(k^{-2})$ global convergence rate estimate.

307 citations
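A hedged sketch of the kind of interior (sub)gradient step covered by this analysis, with $C$ the constraint set and $D(\cdot,\cdot)$ a proximal (e.g., Bregman-type) distance adapted to $C$; the paper's exact schemes and stepsize rules may differ:
\[
x_{k+1}\in\operatorname*{argmin}_{x\in C}\Big\{\langle \nabla f(x_k),\,x-x_k\rangle+\tfrac{1}{\lambda_k}\,D(x,x_k)\Big\},
\]
where the non-Euclidean $D$ keeps the iterates in the interior of $C$ without an explicit projection.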


Journal ArticleDOI
TL;DR: Convergence to the global optimum of $\mathbf{P}$ is proved when the sparsity pattern satisfies a condition often encountered in large-scale problems of practical applications, known as the running intersection property in graph theory.
Abstract: We consider a polynomial programming problem $\mathbf{P}$ on a compact basic semialgebraic set $\mathbf{K}\subset\mathbb{R}^n$, described by $m$ polynomial inequalities $g_j(X)\geq0$, and with criterion $f\in\mathbb{R}[X]$. We propose a hierarchy of semidefinite relaxations in the spirit of those of Waki et al. [SIAM J. Optim., 17 (2006), pp. 218-242]. In particular, the SDP-relaxation of order $r$ has the following two features: (a) The number of variables is $O(\kappa^{2r})$, where $\kappa=\max[\kappa_1,\kappa_2]$ with $\kappa_1$ (resp., $\kappa_2$) being the maximum number of variables appearing in the monomials of $f$ (resp., appearing in a single constraint $g_j(X)\geq0$). (b) The largest size of the linear matrix inequalities (LMIs) is $O(\kappa^r)$. This is to compare with the respective number of variables $O(n^{2r})$ and LMI size $O(n^r)$ in the original SDP-relaxations defined in [J. B. Lasserre, SIAM J. Optim., 11 (2001), pp. 796-817]. Therefore, great computational savings are expected in case of sparsity in the data $\{g_j,f\}$, i.e., when $\kappa$ is small, a frequent case in practical applications of interest. The novelty with respect to [H. Waki, S. Kim, M. Kojima, and M. Muramatsu, SIAM J. Optim., 17 (2006), pp. 218-242] is that we prove convergence to the global optimum of $\mathbf{P}$ when the sparsity pattern satisfies a condition often encountered in large size problems of practical applications, and known as the running intersection property in graph theory. In such cases, and as a by-product, we also obtain a new representation result for polynomials positive on a compact basic semialgebraic set, a sparse version of Putinar’s Positivstellensatz [M. Putinar, Indiana Univ. Math. J., 42 (1993), pp. 969-984].

305 citations
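The running intersection property invoked above can be stated as follows (standard graph-theoretic formulation; the index sets $I_1,\dots,I_p\subseteq\{1,\dots,n\}$ collect the variables of the blocks in the sparsity pattern): for every $k=2,\dots,p$ there exists $j<k$ such that
\[
I_k\cap\bigl(I_1\cup\cdots\cup I_{k-1}\bigr)\subseteq I_j.
\]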


Journal ArticleDOI
TL;DR: This paper examines the local convergence properties of SQP methods applied to MPECs and SQP is shown to converge superlinearly under reasonable assumptions near a strongly stationary point.
Abstract: Recently, nonlinear programming solvers have been used to solve a range of mathematical programs with equilibrium constraints (MPECs). In particular, sequential quadratic programming (SQP) methods have been very successful. This paper examines the local convergence properties of SQP methods applied to MPECs. SQP is shown to converge superlinearly under reasonable assumptions near a strongly stationary point. A number of examples are presented that show that some of the assumptions are difficult to relax.

267 citations


Journal ArticleDOI
TL;DR: A connection between the image of the real and complex spaces under a quadratic mapping is developed, which together with the results in the complex case leads to a condition that ensures strong duality in the real setting.
Abstract: We consider the problem of minimizing an indefinite quadratic function subject to two quadratic inequality constraints. When the problem is defined over the complex plane we show that strong duality holds and obtain necessary and sufficient optimality conditions. We then develop a connection between the image of the real and complex spaces under a quadratic mapping, which together with the results in the complex case lead to a condition that ensures strong duality in the real setting. Preliminary numerical simulations suggest that for random instances of the extended trust region subproblem, the sufficient condition is satisfied with a high probability. Furthermore, we show that the sufficient condition is always satisfied in two classes of nonconvex quadratic problems. Finally, we discuss an application of our results to robust least squares problems.

259 citations


Journal ArticleDOI
TL;DR: An active set algorithm (ASA) for box constrained optimization is developed which exploits the recently developed cyclic Barzilai-Borwein (CBB) algorithm for the gradient projection step and the conjugate gradient algorithm CG_DESCENT for unconstrained optimization.
Abstract: An active set algorithm (ASA) for box constrained optimization is developed. The algorithm consists of a nonmonotone gradient projection step, an unconstrained optimization step, and a set of rules for branching between the two steps. Global convergence to a stationary point is established. For a nondegenerate stationary point, the algorithm eventually reduces to unconstrained optimization without restarts. Similarly, for a degenerate stationary point, where the strong second-order sufficient optimality condition holds, the algorithm eventually reduces to unconstrained optimization without restarts. A specific implementation of the ASA is given which exploits the recently developed cyclic Barzilai-Borwein (CBB) algorithm for the gradient projection step and the recently developed conjugate gradient algorithm CG_DESCENT for unconstrained optimization. Numerical experiments are presented using box constrained problems in the CUTEr and MINPACK-2 test problem libraries.

244 citations
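A minimal Python sketch of a projected cyclic Barzilai-Borwein step of the kind used in the gradient projection phase (the function names, the cycle length m, and the safeguards are illustrative assumptions, not the paper's implementation, which also adds a nonmonotone line search and an unconstrained CG phase):

import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box {x : lo <= x <= hi}
    return np.clip(x, lo, hi)

def cbb_gradient_projection(grad, x0, lo, hi, m=4, max_iter=200, tol=1e-8):
    # Gradient projection with a cyclic Barzilai-Borwein (CBB) steplength:
    # the BB steplength s's / s'y is refreshed every m iterations and reused
    # in between (a sketch only, not the ASA paper's full algorithm).
    x = project_box(np.asarray(x0, dtype=float), lo, hi)
    g = grad(x)
    alpha = 1.0
    for k in range(max_iter):
        x_new = project_box(x - alpha * g, lo, hi)
        if np.linalg.norm(x_new - x) <= tol:
            break
        g_new = grad(x_new)
        if k % m == 0:  # cyclic refresh of the BB steplength
            s, y = x_new - x, g_new - g
            sty = float(s @ y)
            alpha = float(s @ s) / sty if sty > 1e-12 else 1.0
        x, g = x_new, g_new
    return x

# usage sketch: a convex quadratic over a box
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_star = cbb_gradient_projection(lambda x: A @ x - b,
                                 x0=np.zeros(3),
                                 lo=-np.ones(3), hi=2.0 * np.ones(3))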


Journal ArticleDOI
TL;DR: If the objective is to maximize a positive semidefinite Hermitian form, then the randomization-rounding procedure guarantees a worst-case performance ratio of 0.7854, which is better than the ratio of $2/\pi \approx 0.6366$ for its counterpart in the real case due to Nesterov.
Abstract: In this paper we study the approximation algorithms for a class of discrete quadratic optimization problems in the Hermitian complex form. A special case of the problem that we study corresponds to the max-3-cut model used in a recent paper of Goemans and Williamson [J. Comput. System Sci., 68 (2004), pp. 442-470]. We first develop a closed-form formula to compute the probability of a complex-valued normally distributed bivariate random vector to be in a given angular region. This formula allows us to compute the expected value of a randomized (with a specific rounding rule) solution based on the optimal solution of the complex semidefinite programming relaxation problem. In particular, we present an $[m^2(1-\cos\frac{2\pi}{m})/8\pi]$-approximation algorithm, and then study the limit of that model, in which the problem remains NP-hard. We show that if the objective is to maximize a positive semidefinite Hermitian form, then the randomization-rounding procedure guarantees a worst-case performance ratio of $\pi/4 \approx 0.7854$, which is better than the ratio of $2/\pi \approx 0.6366$ for its counterpart in the real case due to Nesterov. Furthermore, if the objective matrix is real-valued positive semidefinite with nonpositive off-diagonal elements, then the performance ratio improves to 0.9349.

217 citations
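As a quick check of the limiting behavior, the $m$-dependent ratio above indeed tends to the continuous-case constant: using $1-\cos t=\tfrac{t^2}{2}+O(t^4)$ with $t=\tfrac{2\pi}{m}$,
\[
\lim_{m\to\infty}\frac{m^{2}\bigl(1-\cos\frac{2\pi}{m}\bigr)}{8\pi}
=\frac{1}{8\pi}\cdot\frac{(2\pi)^{2}}{2}
=\frac{\pi}{4}\approx 0.7854.
\]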


Journal ArticleDOI
TL;DR: The algorithm is shown to be globally convergent to strongly stationary points, under standard assumptions, and the results are then extended to an interior-relaxation approach.
Abstract: This paper studies theoretical and practical properties of interior-penalty methods for mathematical programs with complementarity constraints. A framework for implementing these methods is presented, and the need for adaptive penalty update strategies is motivated with examples. The algorithm is shown to be globally convergent to strongly stationary points, under standard assumptions. These results are then extended to an interior-relaxation approach. Superlinear convergence to strongly stationary points is also established. Two strategies for updating the penalty parameter are proposed, and their efficiency and robustness are studied on an extensive collection of test problems.

Journal ArticleDOI
TL;DR: This paper constructs an iterative process for finding a common fixed point of two mappings, such that one of these mappings is nonexpansive and the other is taken from the more general class of Lipschitz pseudocontractive mappings.
Abstract: In this paper we introduce an iterative process for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequality problem for a monotone, Lipschitz-continuous mapping. The iterative process is based on two well-known methods: hybrid and extragradient. We obtain a strong convergence theorem for three sequences generated by this process. Based on this result, we also construct an iterative process for finding a common fixed point of two mappings, such that one of these mappings is nonexpansive and the other is taken from the more general class of Lipschitz pseudocontractive mappings.
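For reference, the classical extragradient step on which the hybrid process builds, for the variational inequality with a monotone, Lipschitz-continuous mapping $A$ over the closed convex set $C$ (the paper's iteration additionally involves a nonexpansive mapping and a hybrid projection step, so this is only the core ingredient):
\[
y_n=P_C(x_n-\lambda_n A x_n),\qquad x_{n+1}=P_C(x_n-\lambda_n A y_n),
\]
where $P_C$ denotes the metric projection onto $C$.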

Journal ArticleDOI
TL;DR: In this paper, the global optimization problem of a multidimensional "black-box" function satisfying the Lipschitz condition over a hyperinterval with an unknown Lipschitz constant is considered.
Abstract: In the paper, the global optimization problem of a multidimensional "black-box" function satisfying the Lipschitz condition over a hyperinterval with an unknown Lipschitz constant is considered. A new efficient algorithm for solving this problem is presented. At each iteration of the method a number of possible Lipschitz constants are chosen from a set of values varying from zero to infinity. This idea is unified with an efficient diagonal partition strategy. A novel technique balancing usage of local and global information during partitioning is proposed. A new procedure for finding lower bounds of the objective function over hyperintervals is also considered. It is demonstrated by extensive numerical experiments performed on more than 1600 multidimensional test functions that the new algorithm shows a very promising performance.
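A generic example of the Lipschitz-based lower bounds such methods rely on (this is the textbook one-point bound, not the paper's sharper diagonal-based bound): if $f$ has Lipschitz constant $L$ and $c$ is any evaluated point in a hyperinterval $D$, then
\[
\min_{x\in D} f(x)\;\ge\; f(c)-L\max_{x\in D}\Vert x-c\Vert,
\]
an idea the algorithm above refines via its diagonal partition strategy and a whole set of candidate Lipschitz constants.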

Journal ArticleDOI
TL;DR: A general framework for identifying locally optimal algorithmic parameters in unconstrained optimization is devised and the derivative-free method chosen to guide the process is the mesh adaptive direct search, a generalization of pattern search methods.
Abstract: The objectives of this paper are twofold. We devise a general framework for identifying locally optimal algorithmic parameters. Algorithmic parameters are treated as decision variables in a problem for which no derivative knowledge or existence is assumed. A derivative-free method for optimization seeks to minimize some measure of performance of the algorithm being fine-tuned. This measure is treated as a black-box and may be chosen by the user. Examples are given in the text. The second objective is to illustrate this framework by specializing it to the identification of locally optimal trust-region parameters in unconstrained optimization. The derivative-free method chosen to guide the process is the mesh adaptive direct search, a generalization of pattern search methods. We illustrate the flexibility of the latter and in particular make provision for surrogate objectives. Locally optimal parameters with respect to overall computational time on a set of test problems are identified. Each function call may take several hours and may not always return a predictable result. A tailored surrogate function is used to guide the search towards a local solution. The parameters thus identified differ from traditionally used values, and allow one to solve a problem that remained otherwise unsolved in a reasonable time using traditional values.

Journal ArticleDOI
TL;DR: It is shown that at most $O(n)$ iterations suffice to reduce the duality gap and the residuals by the factor $1/e$, which implies an $O(n\log(n/\varepsilon))$ iteration bound for obtaining an $\varepsilon$-solution of the problem at hand, coinciding with the best known bound for infeasible interior-point algorithms.
Abstract: We present a primal-dual infeasible interior-point algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists, it is shown that at most $O(n)$ iterations suffice to reduce the duality gap and the residuals by the factor $1/{e}$. This implies an $O(n\log(n/\varepsilon))$ iteration bound for getting an $\varepsilon$-solution of the problem at hand, which coincides with the best known bound for infeasible interior-point algorithms. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. A special feature of the algorithm is that it uses only full-Newton steps. Two types of full-Newton steps are used, so-called feasibility steps and usual (centering) steps. Starting at strictly feasible iterates of a perturbed pair, (very) close to its central path, feasibility steps serve to generate strictly feasible iterates for the next perturbed pair. By accomplishing a few centering steps for the new perturbed pair we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The algorithm finds an optimal solution or detects infeasibility or unboundedness of the given problem.

Journal ArticleDOI
TL;DR: Path-following methods for primal-dual active set strategies requiring a regularization parameter are introduced and their efficiency is demonstrated by means of examples.
Abstract: Path-following methods for primal-dual active set strategies requiring a regularization parameter are introduced. Existence of a primal-dual path and its differentiability properties are analyzed. Monotonicity and convexity of the primal-dual path value function are investigated. Both feasible and infeasible approximations are considered. Numerical path-following strategies are developed and their efficiency is demonstrated by means of examples.

Journal ArticleDOI
TL;DR: Quantitative stability of linear multistage stochastic programs is studied and it is shown that the infima of such programs are (locally) Lipschitz continuous with respect to the sum of an $L_{r}$-distance and a distance measure for the filtrations of the original and approximate stochastic processes.
Abstract: Quantitative stability of linear multistage stochastic programs is studied. It is shown that the infima of such programs are (locally) Lipschitz continuous with respect to the sum of an $L_{r}$-distance and a distance measure for the filtrations of the original and approximate stochastic (input) processes. Various aspects of the result are discussed and an illustrative example is given. Consequences for the reduction of scenario trees are also discussed.

Journal ArticleDOI
TL;DR: This work is able to establish reasonable conditions under which a subsequence of MADS iterates converges to a limit point satisfying second-order necessary or sufficient optimality conditions for general set-constrained optimization problems.
Abstract: A previous analysis of second-order behavior of generalized pattern search algorithms for unconstrained and linearly constrained minimization is extended to the more general class of mesh adaptive direct search (MADS) algorithms for general constrained optimization. Because of the ability of MADS to generate an asymptotically dense set of search directions, we are able to establish reasonable conditions under which a subsequence of MADS iterates converges to a limit point satisfying second-order necessary or sufficient optimality conditions for general set-constrained optimization problems.

Journal ArticleDOI
TL;DR: For the special case of regularization with a squared Euclidean norm, it is shown that ${\mathcal{G}}$ is unimodal, and an alternative algorithm is provided which requires only one spectral decomposition.
Abstract: Total least squares (TLS) is a method for treating an overdetermined system of linear equations ${\bf A} {\bf x} \approx {\bf b}$, where both the matrix ${\bf A}$ and the vector ${\bf b}$ are contaminated by noise. Tikhonov regularization of the TLS (TRTLS) leads to an optimization problem of minimizing the sum of fractional quadratic and quadratic functions. As such, the problem is nonconvex. We show how to reduce the problem to a single variable minimization of a function ${\mathcal{G}}$ over a closed interval. Computing a value and a derivative of ${\mathcal{G}}$ consists of solving a single trust region subproblem. For the special case of regularization with a squared Euclidean norm we show that ${\mathcal{G}}$ is unimodal and provide an alternative algorithm, which requires only one spectral decomposition. A numerical example is given to illustrate the effectiveness of our method.
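Under the usual formulation (notation assumed here: $\rho>0$ is the regularization parameter and ${\bf L}$ the regularization matrix), the TRTLS problem described above can be written as
\[
\min_{{\bf x}}\;\frac{\Vert {\bf A}{\bf x}-{\bf b}\Vert^{2}}{1+\Vert {\bf x}\Vert^{2}}+\rho\,\Vert {\bf L}{\bf x}\Vert^{2},
\]
i.e., a fractional quadratic term plus a quadratic term, which matches the nonconvex structure referred to in the abstract.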

Journal ArticleDOI
TL;DR: This paper considers Levitin--Polyak-type well-posedness for a general constrained optimization problem and introduces generalized and strongly generalized Levitin--Polyak well-posedness.
Abstract: In this paper, we consider Levitin--Polyak-type well-posedness for a general constrained optimization problem. We introduce generalized Levitin--Polyak well-posedness and strongly generalized Levitin--Polyak well-posedness. Necessary and sufficient conditions for these types of well-posedness are given. Relations among these types of well-posedness are investigated. Finally, we consider convergence of a class of penalty methods and a class of augmented Lagrangian methods under the assumption of strongly generalized Levitin--Polyak well-posedness.

Journal ArticleDOI
TL;DR: A homogeneous model for standard monotone nonlinear complementarity problems over symmetric cones is proposed, and the existence of a path having the following properties is shown: the path is bounded and has a trivial starting point without any regularity assumption concerning the existence of feasible or strictly feasible solutions.
Abstract: We study the continuous trajectories for solving monotone nonlinear mixed complementarity problems over symmetric cones. While the analysis in [L. Faybusovich, Positivity, 1 (1997), pp. 331-357] depends on the optimization theory of convex log-barrier functions, our approach is based on the paper of Monteiro and Pang [Math. Oper. Res., 23 (1998), pp. 39-60], where a vast set of conclusions concerning continuous trajectories is shown for monotone complementarity problems over the cone of symmetric positive semidefinite matrices. As an application of the results, we propose a homogeneous model for standard monotone nonlinear complementarity problems over symmetric cones and discuss its theoretical aspects. Consequently, we show the existence of a path having the following properties: (a) The path is bounded and has a trivial starting point without any regularity assumption concerning the existence of feasible or strictly feasible solutions. (b) Any accumulation point of the path is a solution of the homogeneous model. (c) If the original problem is solvable, then every accumulation point of the path gives us a finite solution. (d) If the original problem is strongly infeasible, then, under the assumption of Lipschitz continuity, any accumulation point of the path gives us a finite certificate proving infeasibility.

Journal ArticleDOI
TL;DR: The key idea is a restructuring of the relaxations, which isolates the complicating constraints and allows for a Lagrangian approach to the lift-and-project relaxations of binary integer programs.
Abstract: We propose a method for optimizing the lift-and-project relaxations of binary integer programs introduced by Lovasz and Schrijver. In particular, we study both linear and semidefinite relaxations. The key idea is a restructuring of the relaxations, which isolates the complicating constraints and allows for a Lagrangian approach. We detail an enhanced subgradient method and discuss its efficient implementation. Computational results illustrate that our algorithm produces tight bounds more quickly than state-of-the-art linear and semidefinite solvers.

Journal ArticleDOI
TL;DR: The proximal bundle method for minimizing a convex function $f$ over a closed convex set only requires evaluating $f$ and its subgradients with an accuracy $\epsilon>0$ and asymptotically finds points that are $\epsilon$-optimal.
Abstract: We give a proximal bundle method for minimizing a convex function $f$ over a closed convex set. It only requires evaluating $f$ and its subgradients with an accuracy $\epsilon>0$, which is fixed but possibly unknown. It asymptotically finds points that are $\epsilon$-optimal. When applied to Lagrangian relaxation, it allows for $\epsilon$-accurate solutions of Lagrangian subproblems and finds $\epsilon$-optimal solutions of convex programs.
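A common way to formalize such an inexact oracle (stated here as an assumption; the paper's precise conditions may differ): at each query point $x$ the oracle returns a value $f_x$ and a vector $g_x$ satisfying
\[
f_x\ge f(x)-\epsilon \qquad\text{and}\qquad f(z)\ge f_x+\langle g_x,\,z-x\rangle-\epsilon\quad\text{for all }z,
\]
so that $g_x$ is an approximate subgradient consistent with the approximate value $f_x$.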

Journal ArticleDOI
TL;DR: Polynomial-time convergence of infeasible-interior-point methods for conic programs over symmetric cones is established, using a wide neighborhood of the central path.
Abstract: We establish polynomial-time convergence of infeasible-interior-point methods for conic programs over symmetric cones using a wide neighborhood of the central path. The convergence is shown for a commutative family of search directions used in Schmieta and Alizadeh [Math. Program., 96 (2003), pp. 409-438]. Monteiro and Zhang [Math. Program., 81 (1998), pp. 281-299] introduced this family of directions when analyzing semidefinite programs. These conic programs include linear and semidefinite programs. This extends the work of Rangarajan and Todd [Tech. rep. 1388, School of OR & IE, Cornell University, Ithaca, NY, 2003], which established convergence of infeasible-interior-point methods for self-scaled conic programs using the NT direction. Our work is built on earlier analyses by Faybusovich [J. Comput. Appl. Math., 86 (1997), pp. 149-175] and Schmieta and Alizadeh [Math. Program., 96 (2003), pp. 409-438]. Of independent interest, we provide a constructive proof of the Lyapunov lemma in the Jordan algebraic setting.

Journal ArticleDOI
TL;DR: A new condition to define the set of conforming search directions that admits several computational advantages is added, and a bound is derived relating a measure of stationarity, which is equivalent to the norm of the gradient of the objective in the unconstrained case, to a parameter used by GSS algorithms to control the lengths of the steps.
Abstract: We present a new generating set search (GSS) approach for minimizing functions subject to linear constraints. GSS is a class of direct search optimization methods that includes generalized pattern search. One of our main contributions in this paper is a new condition to define the set of conforming search directions that admits several computational advantages. For continuously differentiable functions we also derive a bound relating a measure of stationarity, which is equivalent to the norm of the gradient of the objective in the unconstrained case, and a parameter used by GSS algorithms to control the lengths of the steps. With the additional assumption that the derivative is Lipschitz, we obtain a big-$O$ bound. As a consequence of this relationship, we obtain subsequence convergence to a KKT point, even though GSS algorithms lack explicit gradient information. Numerical results indicate that the bound provides a reasonable estimate of stationarity.
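The measure of stationarity in question is typically taken to be of the following form (a standard choice in this literature, stated here as an assumption; $\Omega$ denotes the feasible polyhedron):
\[
\chi(x)=\max_{\substack{x+w\in\Omega\\ \Vert w\Vert\le 1}}\;-\nabla f(x)^{T}w,
\]
which equals $\Vert\nabla f(x)\Vert$ when there are no constraints and vanishes exactly at KKT points.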

Journal ArticleDOI
TL;DR: This work constructs a specially devised semidefinite relaxation (SDR) and dual for the QMP problem and shows that under some mild conditions strong duality holds for QMP problems with at most $r$ constraints.
Abstract: We introduce and study a special class of nonconvex quadratic problems in which the objective and constraint functions have the form $f(X)=\operatorname{Tr}(X^{T}AX)+2\operatorname{Tr}(B^{T}X)+c$, $X\in\mathbb{R}^{n\times r}$. The latter formulation is termed quadratic matrix programming (QMP) of order $r$. We construct a specially devised semidefinite relaxation (SDR) and dual for the QMP problem and show that under some mild conditions strong duality holds for QMP problems with at most $r$ constraints. Using a result on the equivalence of two characterizations of the nonnegativity property of quadratic functions of the above form, we are able to compare the constructed SDR and dual problems to other known SDRs and dual formulations of the problem. An application to robust least squares problems is discussed.

Journal ArticleDOI
TL;DR: An adaptive rule-based algorithm, SpaseLoc, is described to solve localization problems for ad hoc wireless sensor networks that scales well and provides excellent localization accuracy.
Abstract: An adaptive rule-based algorithm, SpaseLoc, is described to solve localization problems for ad hoc wireless sensor networks. A large problem is solved as a sequence of very small subproblems, each of which is solved by semidefinite programming relaxation of a geometric optimization model. The subproblems are generated according to a set of sensor/anchor selection rules. Computational results compared with existing approaches show that the SpaseLoc algorithm scales well and provides excellent localization accuracy.

Journal ArticleDOI
TL;DR: It is shown that every real nonnegative polynomial $f$ can be approximated as closely as desired (in the $l_1$-norm of its coefficient vector) by a sequence of polynomials that are sums of squares.
Abstract: We show that every real nonnegative polynomial $f$ can be approximated as closely as desired (in the $l_1$-norm of its coefficient vector) by a sequence of polynomials $\{f_\epsilon\}$ that are sums of squares. The novelty is that each $f_\epsilon$ has a simple and explicit form in terms of $f$ and $\epsilon$.
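A hedged sketch of the shape of this approximation (the truncation order $r=r(\epsilon)$ and the exact constants are as specified in the paper and are not reproduced here):
\[
f_\epsilon(x)=f(x)+\epsilon\sum_{i=1}^{n}\sum_{k=0}^{r}\frac{x_i^{2k}}{k!},
\]
i.e., $f$ plus an $\epsilon$-multiple of a truncated expansion of $\sum_{i} e^{x_i^{2}}$, which is what makes $f_\epsilon$ both explicit and a sum of squares for a suitable $r$.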

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a new V-efficiency concept which extends and unifies different approximate solution notions from the literature, and obtain necessary and sufficient conditions via nonlinear scalarization, which allow them to study this new class of approximate solutions in a general framework, since no convexity hypothesis is required.
Abstract: This paper deals with approximate (V-efficient) solutions of vector optimization problems. We introduce a new V-efficiency concept which extends and unifies different approximate solution notions introduced in the literature. We obtain necessary and sufficient conditions via nonlinear scalarization, which allow us to study this new class of approximate solutions in a general framework, since no convexity hypothesis is required. Several examples are given to illustrate the concepts introduced and the results obtained.

Journal ArticleDOI
TL;DR: This work replaces the gradient variety of a real polynomial with larger semialgebraic subsets of $\mathbb R^n$, called gradient tentacles, for which it becomes substantially harder to prove the existence of the necessary sums of squares certificates.
Abstract: We consider the problem of computing the global infimum of a real polynomial $f$ on $\mathbb R^n$. Every global minimizer of $f$ lies on its gradient variety, i.e., the algebraic subset of $\mathbb R^n$ where the gradient of $f$ vanishes. If $f$ attains a minimum on $\mathbb R^n$, it is therefore equivalent to look for the greatest lower bound of $f$ on its gradient variety. Nie, Demmel, and Sturmfels proved recently a theorem about the existence of sums of squares certificates for such lower bounds. Based on these certificates, they find arbitrarily tight relaxations of the original problem that can be formulated as semidefinite programs and thus be solved efficiently. We deal here with the more general case when $f$ is bounded from below but does not necessarily attain a minimum. In this case, the method of Nie, Demmel, and Sturmfels might yield completely wrong results. In order to overcome this problem, we replace the gradient variety by larger semialgebraic subsets of $\mathbb R^n$ which we call gradient tentacles. It now gets substantially harder to prove the existence of the necessary sums of squares certificates.
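In symbols, the gradient variety referred to above is the set
\[
V(\nabla f)=\{x\in\mathbb{R}^{n}:\nabla f(x)=0\},
\]
and the gradient tentacles introduced in the paper are larger semialgebraic subsets of $\mathbb{R}^{n}$ containing it, on which the sums of squares certificates are then sought.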