Open Access · Journal Article (DOI)

Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding

TL;DR
In this article, the alternating direction method of multipliers (ADMM) is used to solve the homogeneous self-dual embedding, an equivalent feasibility problem of finding a nonzero point in the intersection of a subspace and a cone.
Abstract
We introduce a first-order method for solving very large convex cone programs. The method applies an operator splitting technique, the alternating direction method of multipliers (ADMM), to the homogeneous self-dual embedding, an equivalent feasibility problem of finding a nonzero point in the intersection of a subspace and a cone. This approach has several favorable properties. Compared to interior-point methods, first-order methods scale to very large problems, at the cost of requiring more time to reach very high accuracy. Compared to other first-order methods for cone programs, our approach finds both primal and dual solutions when they exist, or a certificate of infeasibility or unboundedness otherwise; it is parameter-free; and its per-iteration cost is the same as applying a splitting method to the primal or dual alone. We discuss efficient implementation of the method in detail, including direct and indirect methods for computing the projection onto the subspace, scaling of the original problem data, and stopping criteria. We describe an open-source implementation, which handles the usual (symmetric) nonnegative, second-order, and semidefinite cones as well as the (non-self-dual) exponential and power cones and their duals. We report numerical results that show speedups over interior-point cone solvers for large problems, and scaling to very large general cone programs.
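To make the embedding concrete, here is a sketch in the standard cone-program notation (a reconstruction of the paper's formulation, with the usual sign conventions): the primal is minimize c^T x subject to Ax + s = b, s in K, with dual variable y in the dual cone K*.

```latex
% Homogeneous self-dual embedding (sketch): stack the optimality conditions
% of the primal-dual pair into one feasibility problem.
\[
Q =
\begin{bmatrix}
0 & A^{T} & c \\
-A & 0 & b \\
-c^{T} & -b^{T} & 0
\end{bmatrix},
\qquad
\text{find } (u, v) \neq 0 \ \text{ with } \ v = Qu,\ (u, v) \in \mathcal{C} \times \mathcal{C}^{*},
\]
% where u = (x, y, \tau), v = (r, s, \kappa), and
% \mathcal{C} = \mathbf{R}^{n} \times \mathcal{K}^{*} \times \mathbf{R}_{+}.
% ADMM applied to this problem reduces to the iteration
\[
\tilde{u}^{k+1} = (I + Q)^{-1}(u^{k} + v^{k}), \qquad
u^{k+1} = \Pi_{\mathcal{C}}\!\left(\tilde{u}^{k+1} - v^{k}\right), \qquad
v^{k+1} = v^{k} - \tilde{u}^{k+1} + u^{k+1}.
\]
```

A solution with tau > 0 yields a primal-dual optimal pair (x/tau, y/tau), while a solution with tau = 0 and kappa > 0 encodes a certificate of primal infeasibility or unboundedness, which is how the method returns both kinds of answers.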


Citations
Posted Content

CVXPY: A Python-Embedded Modeling Language for Convex Optimization

TL;DR: CVXPY allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers.
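As an illustration of that syntax (a minimal sketch using the public CVXPY API; the problem data below are invented), a nonnegative least-squares problem reads almost exactly like its mathematical statement:

```python
import cvxpy as cp
import numpy as np

# Illustrative random data, not from the paper.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)

# minimize ||Ax - b||_2^2  subject to  x >= 0, written as it appears in the math.
x = cp.Variable(10)
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)), [x >= 0])
prob.solve(solver=cp.SCS)  # CVXPY rewrites the problem into conic form for SCS
print(prob.status, prob.value)
```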
Journal Article

CVXPY: A Python-Embedded Modeling Language for Convex Optimization

TL;DR: CVXPY is a domain-specific language for convex optimization embedded in Python, which allows the user to express convex optimization problems in a natural syntax that follows the math, rather than in the restrictive standard form required by solvers.
Journal Article (DOI)

OSQP: An Operator Splitting Solver for Quadratic Programs

TL;DR: This work presents a general-purpose solver for convex quadratic programs based on the alternating direction method of multipliers, employing a novel operator splitting technique that requires the solution of a quasi-definite linear system with the same coefficient matrix at almost every iteration.
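A minimal usage sketch against OSQP's public Python interface (the toy data below are assumptions): P and A are fixed at setup, which is what lets the solver factorize one quasi-definite KKT matrix and reuse it across iterations.

```python
import numpy as np
import osqp
from scipy import sparse

# minimize 0.5 x'Px + q'x  subject to  l <= Ax <= u  (toy data)
P = sparse.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sparse.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

solver = osqp.OSQP()
solver.setup(P, q, A, l, u)  # the KKT matrix is factorized once here
res = solver.solve()         # ADMM iterations reuse that factorization
print(res.info.status, res.x)
```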
Journal Article (DOI)

Fast online deconvolution of calcium imaging data

TL;DR: The algorithm is a generalization of the pool adjacent violators algorithm (PAVA) for isotonic regression, inheriting its linear-time computational complexity, and achieves remarkable increases in processing speed: more than an order of magnitude over currently employed state-of-the-art convex solvers relying on interior-point methods.
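For orientation, here is a minimal pool-adjacent-violators sketch for least-squares isotonic (nondecreasing) regression; the deconvolution algorithm in the paper generalizes this idea, and the code below is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def pava(y):
    """Pool adjacent violators: least-squares nondecreasing fit to y."""
    means, counts = [], []  # block means and block sizes
    for v in y:
        means.append(float(v))
        counts.append(1)
        # Merge adjacent blocks while the nondecreasing constraint is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, c2 = means.pop(), counts.pop()
            m1, c1 = means.pop(), counts.pop()
            counts.append(c1 + c2)
            means.append((c1 * m1 + c2 * m2) / (c1 + c2))
    # Expand each block's mean over the indices it covers.
    return np.repeat(means, counts)

print(pava([1.0, 3.0, 2.0, 4.0]))  # -> [1.  2.5 2.5 4. ]
```

Each element is pushed and merged at most once, which is where the linear-time behavior comes from.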
Journal Article (DOI)

A rewriting system for convex optimization problems

TL;DR: This paper describes a modular rewriting system for translating optimization problems written in a domain-specific language (DSL) into forms compatible with low-level solver interfaces.
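One way to see such a rewriting pipeline in action is to ask a DSL frontend for the low-level data it would hand a solver; this sketch assumes CVXPY's get_problem_data interface and is not specific to the system in this paper.

```python
import cvxpy as cp

x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.norm(x, 1)), [cp.sum(x) == 1])

# The rewriting chain reduces the DSL problem to the solver's conic standard form.
data, chain, inverse_data = prob.get_problem_data(cp.SCS)
print(data["A"].shape, data["b"], data["c"])  # low-level data passed to SCS
```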
References
Book

Feedback Control of Dynamic Systems

TL;DR: This introductory book provides an in-depth, comprehensive treatment of a collection of classical and state-space approaches to control system design and ties the methods together so that a designer is able to pick the method that best fits the problem at hand.
Journal Article (DOI)

The Split Bregman Method for L1-Regularized Problems

TL;DR: This paper proposes a “split Bregman” method, which can solve a very broad class of L1-regularized problems, and applies this technique to the Rudin-Osher-Fatemi functional for image denoising and to a compressed sensing problem that arises in magnetic resonance imaging.
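For orientation, the generic split Bregman iteration for minimizing |Phi(u)|_1 + H(u) can be sketched as follows (the notation is a standard reconstruction, not a quotation from the paper):

```latex
% Introduce d \approx \Phi(u) to split off the nonsmooth term, then alternate
% minimization with a Bregman (running-residual) update:
\[
(u^{k+1}, d^{k+1}) = \arg\min_{u,\,d}\; \|d\|_{1} + H(u)
  + \frac{\lambda}{2}\,\bigl\| d - \Phi(u) - b^{k} \bigr\|_{2}^{2},
\qquad
b^{k+1} = b^{k} + \Phi(u^{k+1}) - d^{k+1}.
\]
% The joint minimization is itself done alternately: the u-update is a smooth
% (often linear) solve, and the d-update is elementwise soft-thresholding.
```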
Posted Content

An iterative thresholding algorithm for linear inverse problems with a sparsity constraint

Abstract: We consider linear inverse problems where the solution is assumed to have a sparse expansion on an arbitrary pre-assigned orthonormal basis. We prove that replacing the usual quadratic regularizing penalties by weighted ℓ^p-penalties on the coefficients of such expansions, with 1 ≤ p ≤ 2, still regularizes the problem. If p < 2, regularized solutions of such ℓ^p-penalized problems will have sparser expansions, with respect to the basis under consideration. To compute the corresponding regularized solutions we propose an iterative algorithm that amounts to a Landweber iteration with thresholding (or nonlinear shrinkage) applied at each iteration step. We prove that this algorithm converges in norm. We also review some potential applications of this method.
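A minimal sketch of the p = 1 case, a Landweber (gradient) step followed by soft-thresholding; the step-size rule and iteration count below are assumptions, not from the paper.

```python
import numpy as np

def ista(A, b, lam, step, iters=500):
    """Landweber iteration with soft-thresholding (the p = 1 case)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (b - A @ x)  # Landweber step toward the data
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

# Usage: step <= 1 / ||A||_2^2 ensures the iteration converges in norm.
A, b = np.random.randn(20, 50), np.random.randn(20)
x = ista(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)
```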
Book

Proximal Algorithms

TL;DR: The many different interpretations of proximal operators and algorithms are discussed, their connections to many other topics in optimization and applied mathematics are described, some popular algorithms are surveyed, and a large number of examples of proximal operators that commonly arise in practice are provided.
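As a brief reminder of the central object (the standard definition, not a quotation from the book):

```latex
\[
\operatorname{prox}_{\lambda f}(v) \;=\; \arg\min_{x}\Bigl( f(x) + \tfrac{1}{2\lambda}\,\|x - v\|_{2}^{2} \Bigr),
\qquad \lambda > 0.
\]
% Example: for f = \|\cdot\|_1 this is elementwise soft-thresholding,
% [\operatorname{prox}_{\lambda f}(v)]_i = \operatorname{sign}(v_i)\,\max(|v_i| - \lambda, 0).
```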
Journal Article (DOI)

Monotone Operators and the Proximal Point Algorithm

TL;DR: In this paper, the proximal point algorithm is investigated in a more general form in which the requirement of exact minimization at each iteration is weakened and the subdifferential $\partial f$ is replaced by an arbitrary maximal monotone operator T.
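In symbols (a standard statement of the algorithm, included for orientation):

```latex
% Proximal point iteration for a maximal monotone operator T, seeking 0 \in T(z):
\[
z^{k+1} = (I + c_{k} T)^{-1} z^{k}, \qquad c_{k} > 0,
\]
% where the resolvent (I + c_k T)^{-1} is single-valued and firmly nonexpansive.
% With T = \partial f it equals \operatorname{prox}_{c_k f}, recovering proximal
% minimization of a convex function f.
```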