Author

Pablo A. Parrilo

Bio: Pablo A. Parrilo is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research in topics including semidefinite programming and polynomials. The author has an h-index of 65 and has co-authored 290 publications receiving 25,859 citations. Previous affiliations of Pablo A. Parrilo include the Université catholique de Louvain and the California Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
Abstract: The affine rank minimization problem consists of finding a matrix of minimum rank that satisfies a given system of linear equality constraints. Such problems have appeared in the literature of a diverse set of fields including system identification and control, Euclidean embedding, and collaborative filtering. Although specific instances can often be solved with specialized algorithms, the general affine rank minimization problem is NP-hard because it contains vector cardinality minimization as a special case. In this paper, we show that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space. We present several random ensembles of equations where the restricted isometry property holds with overwhelming probability, provided the codimension of the subspace is sufficiently large. The techniques used in our analysis have strong parallels in the compressed sensing framework. We discuss how affine rank minimization generalizes this preexisting concept and outline a dictionary relating concepts from cardinality minimization to those of rank minimization. We also discuss several algorithmic approaches to minimizing the nuclear norm and illustrate our results with numerical examples.
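
For illustration, here is a minimal numerical sketch of the nuclear-norm heuristic described in this abstract. The problem sizes, the random Gaussian measurement ensemble, and the use of the CVXPY modeling library are illustrative assumptions, not the paper's own setup.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r, m = 15, 2, 160                       # matrix size, true rank, number of linear measurements

# Rank-r matrix to recover, observed through a random linear map A(X)_i = <A_i, X>.
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A_mats = [rng.standard_normal((n, n)) for _ in range(m)]
b = np.array([np.sum(Ai * X_true) for Ai in A_mats])

# Nuclear norm minimization over the affine space {X : A(X) = b}.
X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A_mats, b)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

# With enough generic measurements, the minimizer coincides with the low-rank matrix.
print("relative recovery error:", np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))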

3,432 citations

DissertationDOI
01 Jan 2000
TL;DR: In this paper, the authors introduce a specific class of linear matrix inequalities (LMIs) whose optimal solution can be characterized exactly: the optimal value equals the spectral radius of the associated linear operator.
Abstract: In the first part of this thesis, we introduce a specific class of Linear Matrix Inequalities (LMIs) whose optimal solution can be characterized exactly. This family corresponds to the case where the associated linear operator maps the cone of positive semidefinite matrices onto itself. In this case, the optimal value equals the spectral radius of the operator. It is shown that some rank minimization problems, as well as generalizations of the structured singular value (μ) LMIs, have exactly this property. In the same spirit of exploiting structure to achieve computational efficiency, an algorithm for the numerical solution of a special class of frequency-dependent LMIs is presented. These optimization problems arise from robustness analysis questions, via the Kalman-Yakubovich-Popov lemma. The procedure is an outer approximation method based on the algorithms used in the computation of H∞ norms for linear time-invariant systems. The result is especially useful for systems with large state dimension. The other main contribution of this thesis is the formulation of a convex optimization framework for semialgebraic problems, i.e., those that can be expressed by polynomial equalities and inequalities. The key element is the interaction of concepts in real algebraic geometry (Positivstellensatz) and semidefinite programming. To this end, an LMI formulation for the sum of squares decomposition of multivariable polynomials is presented. Based on this, it is shown how to construct sufficient Positivstellensatz-based convex tests to prove that certain sets are empty. Among other applications, this leads to a nonlinear extension of many LMI-based results in uncertain linear system analysis. Within the same framework, we develop stronger criteria for matrix copositivity, and generalizations of the well-known standard semidefinite relaxations for quadratic programming. Some applications to new and previously studied problems are presented; a few examples are Lyapunov function computation, robust bifurcation analysis, and structured singular value analysis. It is shown that the proposed methods allow for improved solutions to very diverse questions in continuous and combinatorial optimization.
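
For concreteness, a small worked instance of the sum of squares decomposition mentioned in this abstract (a standard illustrative example; the identity can be checked by expanding the squares):

2x^4 + 2x^3 y - x^2 y^2 + 5y^4 = (1/2)(2x^2 - 3y^2 + x y)^2 + (1/2)(y^2 + 3 x y)^2.

Writing the left-hand side as z^T Q z with z = (x^2, y^2, x y), matching coefficients, and requiring Q to be positive semidefinite is exactly the LMI feasibility problem referred to above; the decomposition shown corresponds to the feasible Gram matrix Q = [[2, -3, 1], [-3, 5, 0], [1, 0, 5]].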

2,269 citations

Journal ArticleDOI
TL;DR: In this article, the authors present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity.
Abstract: We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimates of each agent are restricted to lie in different convex sets. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.
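
The following toy sketch illustrates the "projected subgradient" update described in this abstract: each agent averages with its neighbors, takes a gradient step on its private objective, and projects onto its constraint set. The objectives f_i(x) = ||x - c_i||^2, the common box constraint, the equal mixing weights, and the stepsize rule are illustrative assumptions, not the paper's setting.

import numpy as np

rng = np.random.default_rng(1)
n_agents, dim = 4, 2
c = rng.standard_normal((n_agents, dim))            # each agent's private target c_i
lo, hi = -0.5, 0.5                                  # box constraint X_i = [lo, hi]^dim (same for all agents here)

W = np.full((n_agents, n_agents), 1.0 / n_agents)   # doubly stochastic weights (complete graph, equal weights)
x = rng.standard_normal((n_agents, dim))            # initial estimates

for k in range(1, 2001):
    alpha = 1.0 / k                                 # diminishing stepsize
    v = W @ x                                       # local averaging with neighbors
    grad = 2.0 * (v - c)                            # gradient of f_i(x) = ||x - c_i||^2 at the averaged point
    x = np.clip(v - alpha * grad, lo, hi)           # Euclidean projection onto the box X_i

# All agents should agree on the constrained minimizer of sum_i f_i.
print(np.round(x, 3))

Dropping the gradient step from the loop recovers the "projected consensus" variant discussed in the same abstract.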

1,773 citations

Journal ArticleDOI
TL;DR: It is shown how to construct a complete family of polynomially sized semidefinite programming conditions that prove infeasibility and provide a constructive approach for finding bounded degree solutions to the Positivstellensatz.
Abstract: A hierarchy of convex relaxations for semialgebraic problems is introduced. For questions reducible to a finite number of polynomial equalities and inequalities, it is shown how to construct a complete family of polynomially sized semidefinite programming conditions that prove infeasibility. The main tools employed are a semidefinite programming formulation of the sum of squares decomposition for multivariate polynomials, and some results from real algebraic geometry. The techniques provide a constructive approach for finding bounded degree solutions to the Positivstellensatz, and are illustrated with examples from diverse application fields.
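
As a concrete illustration of the semidefinite programming formulation of the sum of squares decomposition, the sketch below searches for a positive semidefinite Gram matrix Q certifying that the quartic used as an example under the thesis entry above, p(x, y) = 2x^4 + 2x^3 y - x^2 y^2 + 5y^4, is a sum of squares. The use of CVXPY is an assumption of this sketch; any SDP solver would do.

import numpy as np
import cvxpy as cp

# Find Q >= 0 (positive semidefinite) with p = z^T Q z, where z = (x^2, y^2, x y).
Q = cp.Variable((3, 3), symmetric=True)
constraints = [
    Q >> 0,
    Q[0, 0] == 2,                  # coefficient of x^4
    Q[1, 1] == 5,                  # coefficient of y^4
    2 * Q[0, 2] == 2,              # coefficient of x^3 y
    2 * Q[1, 2] == 0,              # coefficient of x y^3
    Q[2, 2] + 2 * Q[0, 1] == -1,   # coefficient of x^2 y^2
]
cp.Problem(cp.Minimize(0), constraints).solve()

# Any factorization Q = L L^T turns p into the sum of squares sum_j (L[:, j]^T z)^2.
w, V = np.linalg.eigh(Q.value)
L = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
print("Gram matrix Q:\n", np.round(Q.value, 3))
print("each row below gives the (x^2, y^2, x y) coefficients of one square:\n", np.round(L.T, 3))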

1,747 citations


Cited by
Proceedings ArticleDOI
02 Sep 2004
TL;DR: The free MATLAB toolbox YALMIP, developed initially to model SDPs and solve them by interfacing external solvers, is introduced; it makes development of optimization problems in general, and control-oriented SDP problems in particular, extremely simple.
Abstract: The MATLAB toolbox YALMIP is introduced, and it is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. The free toolbox was developed initially to model SDPs and solve them by interfacing external solvers. It makes development of optimization problems in general, and control-oriented SDP problems in particular, extremely simple; in fact, learning three YALMIP commands is enough for most users to model and solve their optimization problems.

7,676 citations

Proceedings ArticleDOI
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations

Journal ArticleDOI
TL;DR: In this article, the basic aspects of entanglement, including its characterization, detection, distillation, and quantification, are discussed, as is the basic role of entanglement in quantum communication within the distant-labs paradigm.
Abstract: All our former experience with the application of quantum theory seems to say that what is predicted by the quantum formalism must occur in the laboratory. But the essence of the quantum formalism - entanglement, recognized by Einstein, Podolsky, Rosen, and Schrödinger - waited over 70 years to enter laboratories as a new resource as real as energy. This holistic property of compound quantum systems, which involves nonclassical correlations between subsystems, is a potential for many quantum processes, including "canonical" ones: quantum cryptography, quantum teleportation, and dense coding. However, it appeared that this new resource is very complex and difficult to detect. Being usually fragile to the environment, it is robust against the conceptual and mathematical tools whose task is to decipher its rich structure. This article reviews basic aspects of entanglement, including its characterization, detection, distillation, and quantification. In particular, the authors discuss various manifestations of entanglement via Bell inequalities, entropic inequalities, entanglement witnesses, and quantum cryptography, and point out some interrelations. They also discuss the basic role of entanglement in quantum communication within the distant-labs paradigm and stress some peculiarities, such as the irreversibility of entanglement manipulations, including its extremal form - the bound entanglement phenomenon. The basic role of entanglement witnesses in the detection of entanglement is emphasized.
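
For readers unfamiliar with one of the central tools mentioned in this abstract, the defining property of an entanglement witness can be stated in one line: W is a Hermitian operator with Tr(W ρ) >= 0 for every separable state ρ, while Tr(W ρ_e) < 0 for at least one entangled state ρ_e, so a measured negative expectation value of W certifies entanglement.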

6,980 citations

Journal ArticleDOI
TL;DR: In this paper, the authors prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm.
Abstract: This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
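
A minimal sketch of the Principal Component Pursuit program described in this abstract, on synthetic data. The matrix sizes, the 5% corruption model, the weight lambda = 1/sqrt(max(n1, n2)) (a standard choice in the robust PCA literature), and the use of CVXPY are assumptions of this sketch.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n1, n2, r = 30, 30, 2
L0 = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # low-rank component
S0 = np.zeros((n1, n2))
mask = rng.random((n1, n2)) < 0.05                                  # 5% of entries grossly corrupted
S0[mask] = 10.0 * rng.standard_normal(int(mask.sum()))
M = L0 + S0                                                         # observed superposition

# Principal Component Pursuit: minimize ||L||_* + lambda * ||S||_1 subject to L + S = M.
lam = 1.0 / np.sqrt(max(n1, n2))
L = cp.Variable((n1, n2))
S = cp.Variable((n1, n2))
cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))), [L + S == M]).solve()

print("relative error in low-rank part:", np.linalg.norm(L.value - L0) / np.linalg.norm(L0))
print("relative error in sparse part  :", np.linalg.norm(S.value - S0) / np.linalg.norm(S0))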

6,783 citations