Open Access · Journal Article · DOI

Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding

TLDR
In this article, the alternating direction method of multipliers is used to solve the homogeneous self-dual embedding, an equivalent feasibility problem that involves finding a nonzero point in the intersection of a subspace and a cone.
Abstract
We introduce a first-order method for solving very large convex cone programs. The method uses an operator splitting method, the alternating direction method of multipliers (ADMM), to solve the homogeneous self-dual embedding, an equivalent feasibility problem involving finding a nonzero point in the intersection of a subspace and a cone. This approach has several favorable properties. Compared to interior-point methods, first-order methods scale to very large problems, at the cost of requiring more time to reach very high accuracy. Compared to other first-order methods for cone programs, our approach finds both primal and dual solutions when available, or a certificate of infeasibility or unboundedness otherwise; it is parameter free; and the per-iteration cost of the method is the same as applying a splitting method to the primal or dual alone. We discuss efficient implementation of the method in detail, including direct and indirect methods for computing the projection onto the subspace, scaling the original problem data, and stopping criteria. We describe an open-source implementation, which handles the usual (symmetric) nonnegative, second-order, and semidefinite cones as well as the (non-self-dual) exponential and power cones and their duals. We report numerical results that show speedups over interior-point cone solvers for large problems, and scaling to very large general cone programs.
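To make the core feasibility problem concrete (finding a nonzero point in the intersection of a subspace and a cone), here is a minimal numpy sketch that alternates projections onto the null space of a matrix A and the nonnegative cone. This is a simplified stand-in for illustration only, not the paper's ADMM on the homogeneous self-dual embedding; the matrix A and the starting point are invented for the example.

```python
import numpy as np

def proj_subspace(A, x):
    """Orthogonal projection onto the null space of A."""
    return x - A.T @ np.linalg.solve(A @ A.T, A @ x)

def proj_cone(x):
    """Projection onto the nonnegative orthant (a simple cone)."""
    return np.maximum(x, 0.0)

A = np.array([[1.0, -1.0, 0.0]])   # null space contains (1, 1, t), so a
x = np.array([2.0, 0.5, 1.0])      # nonzero intersection with R^3_+ exists

for _ in range(100):               # alternating projections
    x = proj_cone(proj_subspace(A, x))

print(x)  # a nonzero point with A x = 0 and x >= 0
```

In the paper's setting the subspace comes from the embedded optimality conditions and the cone is a product of the problem's cones, so a nonzero solution of this feasibility problem encodes either a primal-dual solution or an infeasibility certificate.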



Citations
Proceedings Article

Local Linear Convergence of Douglas-Rachford for Linear Programming: a Probabilistic Analysis

Oisín Faust, +1 more
TL;DR: In this paper, the authors analyze the local linear convergence rate r of the Douglas-Rachford splitting (DRS) method for random linear programs and give explicit, tight bounds on r. They show that 1 − r² is typically of the order of m⁻¹(n − m)⁻¹, where n is the number of variables and m is the total number of constraints.
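The entry above concerns Douglas-Rachford splitting (DRS). A minimal numpy sketch of the DRS iteration for a two-set feasibility problem (find x in C ∩ D) follows, with toy sets chosen for illustration (a line in R² and the nonnegative orthant); this is not the randomized LP analysis of the cited work.

```python
import numpy as np

def P_C(x):
    """Projection onto the line {x in R^2 : x1 = x2}."""
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

def P_D(x):
    """Projection onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

z = np.array([3.0, -1.0])
for _ in range(200):
    xc = P_C(z)
    z = z + P_D(2 * xc - z) - xc   # Douglas-Rachford update

x = P_C(z)  # the shadow sequence P_C(z) converges to a point in C ∩ D
```

For polyhedral sets such as those arising in linear programming, DRS is known to converge locally at a linear rate, which is what the cited analysis quantifies for random instances.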

Classical-Quantum Combs, their Min-Entropy and their Measurement-Based Applications

Posted Content

A Cutting-plane Method for Semidefinite Programming with Potential Applications on Noisy Quantum Devices

TL;DR: In this paper, the authors leverage the quantum speed-up of an eigensolver to accelerate an SDP solver based on the cutting-plane method, and demonstrate a practical implementation of a randomized variant of the cutting-plane method for semidefinite programming on instances from SDPLIB.
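To illustrate the eigenvector-cut idea behind cutting-plane methods for SDP (this toy example is mine, not the cited paper's algorithm): a violated semidefiniteness constraint M ⪰ 0 yields a linear cut vᵀMv ≥ 0, where v is an eigenvector for the most negative eigenvalue. The sketch below maximizes t subject to M(t) = [[1, t], [t, 1]] ⪰ 0, whose true optimum is t = 1, by solving a sequence of LPs.

```python
import numpy as np
from scipy.optimize import linprog

A_ub, b_ub = [], []          # accumulated linear cuts on the scalar t
t = 2.0                      # start from an arbitrary initial bound
for _ in range(20):
    M = np.array([[1.0, t], [t, 1.0]])
    w, V = np.linalg.eigh(M)
    if w[0] >= -1e-9:        # M(t) is (numerically) PSD: done
        break
    v = V[:, 0]              # eigenvector of the most negative eigenvalue
    # v^T M(t) v = v1^2 + v2^2 + 2 t v1 v2 >= 0, linear in t:
    A_ub.append([-2.0 * v[0] * v[1]])
    b_ub.append(v[0] ** 2 + v[1] ** 2)
    res = linprog(c=[-1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(-2.0, 2.0)])
    t = res.x[0]

print(round(t, 6))  # -> 1.0
```

Each iteration only needs an extreme eigenpair, which is why a faster (e.g. quantum) eigensolver directly accelerates this kind of scheme.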
Journal Article · DOI

<i>In Vivo</i> Fast Nonlinear Microscopy Reveals Impairment of Fast Axonal Transport Induced by Molecular Motor Imbalances in the Brain of Zebrafish Larvae

- 02 Dec 2022 - 
TL;DR: In this paper, the authors measure axonal transport by tracing the second harmonic generation (SHG) signal of potassium titanyl phosphate (KTP) nanocrystals endocytosed by brain neurons of zebrafish (Zf) larvae.
Journal Article · DOI

Storage and retrieval of von Neumann measurements

- 18 Nov 2022 - 
TL;DR: In this paper, the authors examine the problem of learning an unknown von Neumann measurement of dimension $d$ from a finite number of copies, and compare various learning schemes for arbitrary but fixed dimension.