Topic
Convex optimization
About: Convex optimization is a research topic. Over its lifetime, 24,906 publications have been published within this topic, receiving 908,795 citations. The topic is also known as: convex optimisation.
Papers published on a yearly basis
Papers
•
TL;DR: In this article, the authors prove that if the vectors z_i are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program.
Abstract: Suppose we wish to recover a signal x in C^n from m intensity measurements of the form |⟨x, z_i⟩|^2, i = 1, 2, ..., m; that is, from data in which phase information is missing. We prove that if the vectors z_i are sampled independently and uniformly at random on the unit sphere, then the signal x can be recovered exactly (up to a global phase factor) by solving a convenient semidefinite program---a trace-norm minimization problem; this holds with large probability provided that m is on the order of n log n, and without any assumption about the signal whatsoever. This novel result demonstrates that in some instances, the combinatorial phase retrieval problem can be solved by convex programming techniques. Finally, we also prove that our methodology is robust vis-à-vis additive noise.
878 citations
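The trace-norm relaxation above can be written down almost verbatim as a semidefinite program. Below is a minimal sketch using CVXPY, with a real-valued signal for simplicity (the paper treats complex signals); the problem sizes, the random measurement vectors, and the rank-one extraction step are illustrative assumptions, not the authors' code.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 10, 60                                   # m on the order of n log n
x_true = rng.standard_normal(n)
Z = rng.standard_normal((m, n))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # unit-norm measurement vectors z_i
b = (Z @ x_true) ** 2                           # phaseless intensity measurements

# Lift x to X = x x^T and relax the rank-one constraint to trace minimization over PSD matrices.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [Z[i] @ X @ Z[i] == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
prob.solve()

# Recover x up to a global sign from the leading eigenvector of the lifted solution.
w, V = np.linalg.eigh(X.value)
x_hat = np.sqrt(max(w[-1], 0.0)) * V[:, -1]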
•
TL;DR: This work brings together and notably extends several classical splitting schemes, like the forward–backward and Douglas–Rachford methods, as well as the recent primal–dual method of Chambolle and Pock designed for problems with linear composite terms.
Abstract: We propose a new first-order splitting algorithm for solving jointly the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitting approach, in the sense that the gradient and the linear operators involved are applied explicitly without any inversion, while the nonsmooth functions are processed individually via their proximity operators. This work brings together and notably extends several classical splitting schemes, like the forward–backward and Douglas–Rachford methods, as well as the recent primal–dual method of Chambolle and Pock designed for problems with linear composite terms.
877 citations
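A rough sketch of a primal-dual iteration of this kind, with an explicit gradient step on the smooth term, a proximal step on the dual variable of the composite term, and no operator inversions, is given below for the toy problem min_x ½||Ax − b||² + λ||Dx||_1. The toy data, the step-size choice, and the fixed iteration count are assumptions for illustration, not the algorithm exactly as stated in the paper.

import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
D = np.diff(np.eye(n), axis=0)                  # first-difference operator, playing the role of the linear composite
lam = 0.5

grad_f = lambda x: A.T @ (A @ x - b)            # gradient of the smooth term f(x) = 0.5*||Ax - b||^2
prox_h_conj = lambda y: np.clip(y, -lam, lam)   # prox of h* for h = lam*||.||_1 (projection onto the l-inf ball)

# Step sizes chosen to satisfy 1/tau - sigma*||D||^2 >= ||A||^2 / 2, a standard convergence condition.
beta = np.linalg.norm(A, 2) ** 2
sigma = 1.0
tau = 0.9 / (beta / 2 + sigma * np.linalg.norm(D, 2) ** 2)

x, y = np.zeros(n), np.zeros(n - 1)
for _ in range(2000):
    x_new = x - tau * (grad_f(x) + D.T @ y)            # primal step (no nonsmooth proximable term g in this toy problem)
    y = prox_h_conj(y + sigma * D @ (2 * x_new - x))   # dual step on the composite term h(Dx)
    x = x_new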
•
TL;DR: CVXPY as mentioned in this paper is a domain-specific language for convex optimization embedded in Python, which allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers.
Abstract: CVXPY is a domain-specific language for convex optimization embedded in Python. It allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers. CVXPY makes it easy to combine convex optimization with high-level features of Python such as parallelism and object-oriented design. CVXPY is available at http://www.cvxpy.org/ under the GPL license, along with documentation and examples.
873 citations
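As an illustration of the modeling style described above, here is a tiny made-up nonnegative least-squares problem written in CVXPY, assuming a recent release of the library; only Variable, Minimize, Problem, and sum_squares from the documented API are used.

import numpy as np
import cvxpy as cp

# A small made-up problem: nonnegative least squares, written the way the math reads.
A = np.random.randn(20, 10)
b = np.random.randn(20)

x = cp.Variable(10)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [x >= 0]
prob = cp.Problem(objective, constraints)
prob.solve()                 # CVXPY converts the problem to a solver's standard form and calls it

print(prob.status, prob.value)
print(x.value)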
•
TL;DR: In this paper, the search for a piecewise quadratic Lyapunov function is formulated as a convex optimization problem in terms of linear matrix inequalities, and the relation to frequency domain methods such as the circle and Popov criteria is explained.
Abstract: This paper presents a computational approach to stability analysis of nonlinear and hybrid systems. The search for a piecewise quadratic Lyapunov function is formulated as a convex optimization problem in terms of linear matrix inequalities. The relation to frequency domain methods such as the circle and Popov criteria is explained. Several examples are included to demonstrate the flexibility and power of the approach.
862 citations
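A heavily simplified sketch of the LMI formulation is shown below: the search for a single quadratic Lyapunov function V(x) = xᵀPx for one linear mode ẋ = Ax, posed as a semidefinite feasibility problem in CVXPY. The example system matrix and the margin eps are assumptions, and the paper's piecewise construction over a polyhedral partition is more involved.

import numpy as np
import cvxpy as cp

# Illustrative stable system matrix (not from the paper); eigenvalues are -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
n = A.shape[0]
eps = 1e-3

# Search for V(x) = x^T P x with P > 0 and dV/dt = x^T (A^T P + P A) x < 0,
# expressed as the two linear matrix inequalities below.
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print(prob.status)   # "optimal" means a quadratic Lyapunov function was found
print(P.value)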
•
07 Dec 2009
TL;DR: A new online algorithm is developed, the regularized dual averaging (RDA) method, that can explicitly exploit the regularization structure in an online setting and can be very effective for sparse online learning with l1-regularization.
Abstract: We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the l1-norm for promoting sparsity. We develop extensions of Nesterov's dual averaging method that can exploit the regularization structure in an online setting. At each iteration of these methods, the learning variables are adjusted by solving a simple minimization problem that involves the running average of all past subgradients of the loss function and the whole regularization term, not just its subgradient. In the case of l1-regularization, our method is particularly effective in obtaining sparse solutions. We show that these methods achieve the optimal convergence rates or regret bounds that are standard in the literature on stochastic and online convex optimization. For stochastic learning problems in which the loss functions have Lipschitz continuous gradients, we also present an accelerated version of the dual averaging method.
859 citations
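A small sketch of the l1-regularized RDA update is given below, assuming the beta_t = gamma*sqrt(t) schedule from the paper and a made-up least-squares data stream; the parameter values gamma and lam are illustrative. Because each update minimizes the averaged subgradient term plus the whole l1 regularizer, it reduces to coordinate-wise soft-thresholding of the running gradient average.

import numpy as np

rng = np.random.default_rng(0)
n, T = 50, 1000
w_star = np.zeros(n)
w_star[:5] = 1.0                                # sparse ground truth for the toy data stream
lam, gamma = 0.1, 1.0                           # l1 weight and step-size parameter (assumed values)

w = np.zeros(n)
g_bar = np.zeros(n)                             # running average of all past subgradients
for t in range(1, T + 1):
    a = rng.standard_normal(n)
    y = a @ w_star + 0.1 * rng.standard_normal()
    g = (a @ w - y) * a                         # subgradient of the squared loss at the current w
    g_bar += (g - g_bar) / t

    # Closed-form l1-RDA update: soft-threshold the averaged gradient, scale by sqrt(t)/gamma.
    w = np.where(np.abs(g_bar) <= lam, 0.0,
                 -(np.sqrt(t) / gamma) * (g_bar - lam * np.sign(g_bar)))

print("nonzero coordinates:", np.count_nonzero(w))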