Topic
Convex optimization
About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.
Papers
TL;DR: The main result is that a minimal confidence ellipsoid for the state, consistent with the measured output and the uncertainty description, may be recursively computed in polynomial time, using interior-point methods for convex optimization.
Abstract: This note presents a new approach to finite-horizon guaranteed state prediction for discrete-time systems affected by bounded noise and unknown-but-bounded parameter uncertainty. Our framework handles possibly nonlinear dependence of the state-space matrices on the uncertain parameters. The main result is that a minimal confidence ellipsoid for the state, consistent with the measured output and the uncertainty description, may be recursively computed in polynomial time, using interior-point methods for convex optimization. With n states and l uncertain parameters appearing linearly in the state-space matrices, with rank-one matrix coefficients, the worst-case complexity grows as O(l(n + l)^3.5). With unstructured uncertainty in all system matrices, the worst-case complexity reduces to O(n^3.5).
277 citations
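The note's recursion is a convex program solved by interior-point methods. As a far simpler illustration of ellipsoidal state bounding (a classical formula from ellipsoidal calculus, not this note's algorithm), one can propagate a shape matrix P through x⁺ = Ax + w, with w confined to an ellipsoid of shape Q, using the trace-optimal outer bound on the Minkowski sum of two ellipsoids:

```python
import numpy as np

def propagate_ellipsoid(A, P, Q):
    """One-step outer ellipsoidal bound for x+ = A x + w, w in ellipsoid Q.

    Uses the classical outer approximation of the Minkowski sum of two
    ellipsoids, P+ = (1 + 1/beta) A P A^T + (1 + beta) Q, with the
    trace-minimizing choice beta = sqrt(tr(A P A^T) / tr(Q)).
    """
    APA = A @ P @ A.T
    beta = np.sqrt(np.trace(APA) / np.trace(Q))
    return (1.0 + 1.0 / beta) * APA + (1.0 + beta) * Q

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
P = np.eye(2)            # initial uncertainty ellipsoid {x : x^T P^-1 x <= 1}
Q = 0.01 * np.eye(2)     # bounded-noise ellipsoid
for _ in range(5):       # recursive prediction over 5 steps
    P = propagate_ellipsoid(A, P, Q)
print(P)
```

The resulting P stays symmetric positive definite, so each step yields a valid (if conservative) confidence ellipsoid for the state.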
TL;DR: In this article, the authors formulate phase retrieval as a convex optimization problem, which they call PhaseMax, and develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and analyze the impact of measurement noise on the solution accuracy.
Abstract: We consider the recovery of a (real- or complex-valued) signal from magnitude-only measurements, known as phase retrieval. We formulate phase retrieval as a convex optimization problem, which we call PhaseMax. Unlike other convex methods that use semidefinite relaxation and lift the phase retrieval problem to a higher dimension, PhaseMax is a “non-lifting” relaxation that operates in the original signal dimension. We show that the dual problem to PhaseMax is basis pursuit, which implies that the phase retrieval can be performed using algorithms initially designed for sparse signal recovery. We develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and we analyze the impact of measurement noise on the solution accuracy. We use numerical results to demonstrate the accuracy of our recovery guarantees, and we showcase the efficacy and limits of PhaseMax in practice.
276 citations
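In the real-valued case, the PhaseMax relaxation is itself a linear program: maximize ⟨x̂, x⟩ subject to |aᵢᵀx| ≤ bᵢ. A minimal sketch of that reading with `scipy.optimize.linprog` follows; note that the anchor x̂ is fabricated here by perturbing the true signal purely for illustration, whereas in practice it would come from an initializer such as a spectral method.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 10, 80
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.abs(A @ x_true)                 # magnitude-only measurements

# Anchor vector: for illustration only, a noisy copy of the true signal.
x_hat = x_true + 0.3 * rng.standard_normal(n)

# PhaseMax (real case): maximize <x_hat, x>  s.t.  -b <= A x <= b
res = linprog(c=-x_hat,
              A_ub=np.vstack([A, -A]),
              b_ub=np.concatenate([b, b]),
              bounds=[(None, None)] * n)
x_rec = res.x

# Recovery is only possible up to a global sign.
err = min(np.linalg.norm(x_rec - x_true), np.linalg.norm(x_rec + x_true))
print(err / np.linalg.norm(x_true))
```

With m well above the sample-complexity threshold and a well-correlated anchor, the LP solution coincides with the true signal up to sign.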
22 Mar 2006 · TL;DR: In this paper, the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements are proved; unlike earlier guarantees, which involve huge constants despite the very good performance of the algorithms in practice, these come with reasonable constants.
Abstract: This paper proves the best known guarantees for exact reconstruction of a sparse signal f from few non-adaptive universal linear measurements. We consider Fourier measurements (a random sample of frequencies of f) and random Gaussian measurements. The method for reconstruction that has recently gained momentum in sparse approximation theory is to relax this highly non-convex problem to a convex problem, and then solve it as a linear program. What the best guarantees are for the reconstruction problem to be equivalent to its convex relaxation remains an open question. Recent work shows that the number of measurements k(r,n) needed to exactly reconstruct any r-sparse signal f of length n from its linear measurements with convex relaxation is usually O(r polylog(n)). However, known guarantees involve huge constants, in spite of the very good performance of the algorithms in practice. In an attempt to reconcile theory with practice, we prove the first guarantees for universal measurements (i.e., those which work for all sparse functions) with reasonable constants. For Gaussian measurements, k(r,n) ≲ 11.7 r [1.5 + log(n/r)], which is optimal up to constants. For Fourier measurements, we prove the best known bound k(r,n) = O(r log(n) · log²(r) · log(r log n)), which is optimal within the log log n and log³ r factors. Our arguments are based on techniques of geometric functional analysis and probability in Banach spaces.
276 citations
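The convex relaxation described here, ℓ1 minimization (basis pursuit), can be posed as a linear program by splitting x = u − v with u, v ≥ 0. A small sketch with random Gaussian measurements, with problem sizes chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, r, k = 60, 3, 30                 # signal length, sparsity, measurements
x_true = np.zeros(n)
support = rng.choice(n, size=r, replace=False)
x_true[support] = rng.standard_normal(r)

A = rng.standard_normal((k, n)) / np.sqrt(k)   # random Gaussian measurements
b = A @ x_true

# Basis pursuit  min ||x||_1  s.t.  A x = b,  as an LP via x = u - v:
#   min 1^T u + 1^T v   s.t.  A u - A v = b,  u >= 0, v >= 0
c = np.ones(2 * n)
res = linprog(c,
              A_eq=np.hstack([A, -A]),
              b_eq=b,
              bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_rec - x_true))
```

Here k = 10r measurements comfortably exceed the O(r log(n/r)) requirement, so the LP recovers the sparse signal exactly (up to solver tolerance).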
01 Jun 2004 · TL;DR: This tutorial paper considers the problem of minimizing the rank of a matrix over a convex set and focuses on how convex optimization can be used to develop heuristic methods for this problem.
Abstract: In this tutorial paper, we consider the problem of minimizing the rank of a matrix over a convex set. The rank minimization problem (RMP) arises in diverse areas such as control, system identification, statistics and signal processing, and is known to be computationally NP-hard. We give an overview of the problem, its interpretations, applications, and solution methods. In particular, we focus on how convex optimization can be used to develop heuristic methods for this problem.
276 citations
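A common convex heuristic for the RMP replaces rank with the nuclear norm (sum of singular values), which is the convex envelope of rank over the spectral-norm unit ball; one consequence is the lower bound rank(X) ≥ ‖X‖_* / ‖X‖₂. A quick numerical check of that bound, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
# A random matrix of rank 4 (product of 15x4 and 4x12 Gaussian factors).
X = rng.standard_normal((15, 4)) @ rng.standard_normal((4, 12))

s = np.linalg.svd(X, compute_uv=False)
rank = int(np.sum(s > 1e-10))
nuclear = s.sum()                  # ||X||_*  (nuclear norm)
spectral = s.max()                 # ||X||_2  (spectral norm)
lower_bound = nuclear / spectral   # convex surrogate lower bound on rank

print(rank, lower_bound)
```

Since each singular value is at most the largest one, nuclear/spectral can never exceed the number of nonzero singular values, so the bound holds for any matrix.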
TL;DR: This paper establishes several important properties of the distance functions with respect to the global optimal solution set and a class of invariant sets with the help of convex and non-smooth analysis.
Abstract: In this paper, multi-agent systems that minimize a sum of objective functions, where each component is known only to a particular node, are considered for continuous-time dynamics with time-varying interconnection topologies. Assuming that each node can observe a convex solution set of its optimization component, and that the intersection of all such sets is nonempty, the considered optimization problem is converted to an intersection computation problem. By a simple distributed control rule, the considered multi-agent system with continuous-time dynamics achieves not only consensus, but also optimal agreement within the optimal solution set of the overall optimization objective. Directed and bidirectional communications are studied, respectively, and connectivity conditions are given to ensure a global optimal consensus. In this way, the corresponding intersection computation problem is solved by the proposed decentralized continuous-time algorithm. We establish several important properties of the distance functions with respect to the global optimal solution set and a class of invariant sets with the help of convex and non-smooth analysis.
275 citations
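The intersection-computation idea can be sketched with a toy Euler discretization of a projection-plus-consensus flow. This is an illustration in the spirit of the paper, not its exact control rule: each agent knows only its own constraint interval, and the flow drives all states to a common point of the intersection.

```python
import numpy as np

# Each agent i only knows its own constraint interval [lo_i, hi_i];
# the three intervals intersect in [2, 3].
intervals = [(0.0, 3.0), (2.0, 5.0), (1.0, 4.0)]
proj = lambda x, lo, hi: min(max(x, lo), hi)

x = np.array([-2.0, 6.0, 10.0])        # initial agent states
dt = 0.1
for _ in range(2000):
    consensus = x.mean() - x           # all-to-all averaging term
    correction = np.array([proj(xi, lo, hi) - xi
                           for xi, (lo, hi) in zip(x, intervals)])
    # Euler step of  x_i' = (mean(x) - x_i) + (P_i(x_i) - x_i)
    x = x + dt * (consensus + correction)
print(x)
```

At an all-equal equilibrium the projection corrections vanish only when the common value lies in every agent's set, so the agents agree on a point of the intersection.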