Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding
TL;DR
In this article, the alternating directions method of multipliers is used to solve the homogeneous self-dual embedding, an equivalent feasibility problem involving finding a nonzero point in the intersection of a subspace and a cone.

Abstract
We introduce a first-order method for solving very large convex cone programs. The method uses an operator splitting method, the alternating directions method of multipliers, to solve the homogeneous self-dual embedding, an equivalent feasibility problem involving finding a nonzero point in the intersection of a subspace and a cone. This approach has several favorable properties. Compared to interior-point methods, first-order methods scale to very large problems, at the cost of requiring more time to reach very high accuracy. Compared to other first-order methods for cone programs, our approach finds both primal and dual solutions when available or a certificate of infeasibility or unboundedness otherwise, is parameter free, and the per-iteration cost of the method is the same as applying a splitting method to the primal or dual alone. We discuss efficient implementation of the method in detail, including direct and indirect methods for computing projection onto the subspace, scaling the original problem data, and stopping criteria. We describe an open-source implementation, which handles the usual (symmetric) nonnegative, second-order, and semidefinite cones as well as the (non-self-dual) exponential and power cones and their duals. We report numerical results that show speedups over interior-point cone solvers for large problems, and scaling to very large general cone programs.
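The splitting described in the abstract can be sketched in a few lines of numpy: alternate a linear-system solve (projection onto the subspace defined by the embedding) with a projection onto the cone. The sketch below, for a tiny LP over the nonnegative cone, follows the standard form minimize c^T x subject to Ax + s = b, s in K; it is a minimal illustration under those assumptions, not the optimized open-source implementation the abstract describes.

```python
import numpy as np

# Tiny LP in cone form: minimize c^T x  s.t.  Ax + s = b, s in R_+^m
# (the constraints x1 >= 1, x2 >= 1 written as -x <= -1)
A = -np.eye(2)
b = np.array([-1.0, -1.0])
c = np.array([1.0, 1.0])
n, m = 2, 2

# Skew-symmetric matrix Q of the homogeneous self-dual embedding,
# acting on u = (x, y, tau) with residual v = (r, s, kappa)
Q = np.block([
    [np.zeros((n, n)),  A.T,               c.reshape(-1, 1)],
    [-A,                np.zeros((m, m)),  b.reshape(-1, 1)],
    [-c.reshape(1, -1), -b.reshape(1, -1), np.zeros((1, 1))],
])
I = np.eye(n + m + 1)

def proj_C(u):
    """Project u = (x, y, tau) onto R^n x K* x R_+ (here K* = R_+^m)."""
    w = u.copy()
    w[n:] = np.maximum(w[n:], 0.0)   # y onto R_+^m, tau onto R_+
    return w

# ADMM iteration: subspace projection, cone projection, dual update
u = np.zeros(n + m + 1); u[-1] = 1.0   # start with tau = 1
v = u.copy()                           # start with kappa = 1
for _ in range(5000):
    u_tilde = np.linalg.solve(I + Q, u + v)  # project onto {Qu = v}
    u = proj_C(u_tilde - v)                  # project onto the cone
    v = v - u_tilde + u                      # running residual update

tau = u[-1]
x = u[:n] / tau        # recovered primal solution (tau > 0: solvable)
y = u[n:n + m] / tau   # recovered dual solution
```

When the iterates converge with tau > 0, dividing by tau recovers a primal-dual pair; tau = 0 with kappa > 0 instead yields the certificate of infeasibility or unboundedness mentioned in the abstract. In a serious implementation the linear system would be factored once (or solved approximately by an indirect method), which is exactly the direct/indirect distinction the abstract raises.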
Citations
Journal Article
Flexible Differentiable Optimization via Model Transformations
TL;DR: DiffOpt.jl is introduced, a Julia library to differentiate solutions of convex optimization problems with respect to arbitrary parameters present in the objective and/or constraints, enabling multiple use cases from hyperparameter optimization to backpropagation and sensitivity analysis, bridging constrained optimization with end-to-end differentiable programming.
Posted Content
Support recovery and sup-norm convergence rates for sparse pivotal estimation
TL;DR: In this paper, the authors show minimax sup-norm convergence rates for non-smooth and smoothed, single task and multitask square-root Lasso-type estimators.
Journal ArticleDOI
Projection onto the exponential cone: a univariate root-finding problem
TL;DR: In this paper, it was shown that projecting a point onto these convex sets reduces in every case to a single univariate root-finding problem, which leads to a fast projection algorithm shown to be numerically robust over a wide range of inputs.
Posted Content
Nonsmooth Minimization Using Smooth Envelope Functions
Pontus Giselsson, Mattias Fält +1 more
TL;DR: In this article, a general envelope function is proposed for convex feasibility problems with two sets, one of which is affine; such problems can be solved by finding any stationary point of the smooth, and under some assumptions convex, GAP envelope.
Proceedings Article
A semidefinite programming method for moment approximation in stochastic differential algebraic systems
TL;DR: Numerical simulations demonstrate how the method can be applied to solve moment-closure problems in representative systems described by stochastic differential equations with trigonometric and polynomial nonlinearities.
References
Journal Article
Regression Shrinkage and Selection via the Lasso
TL;DR: A new method for estimation in linear models called the lasso, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant, is proposed.
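The constrained form in the TL;DR is equivalent, for a suitable penalty weight, to the penalized form minimize 0.5*||Ax - b||^2 + lam*||x||_1. A minimal numpy sketch solving the penalized form by proximal gradient descent (ISTA; an illustrative solver, not the paper's original algorithm, and the problem data and lam value are made up):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, b, lam, n_iter=2000):
    """Solve min 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L the gradient's Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the smooth part, then prox of the l1 part
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
b = A @ x_true
x_hat = lasso_ista(A, b, lam=0.5)
```

The l1 penalty drives most coefficients exactly to zero, which is the selection behavior the TL;DR highlights.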
Book
Convex Optimization
Stephen Boyd, Lieven Vandenberghe +1 more
TL;DR: This book gives a comprehensive introduction to convex optimization, with a focus not on the optimization problems themselves but on recognizing convex optimization problems and then finding the most appropriate technique for solving them.
Book
Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
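A canonical small-scale instance of the alternating direction method of multipliers is the lasso split as minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z: a quadratic x-step, a soft-thresholding z-step, and a dual update. A minimal numpy sketch under those assumptions (rho, lam, and the toy data are illustrative choices, not tuned):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=500):
    """min 0.5*||Ax-b||^2 + lam*||z||_1  s.t.  x = z, via ADMM."""
    m, n = A.shape
    # The x-update solves the same linear system every iteration
    AtA_rho = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)   # scaled dual variable
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))              # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # prox of l1
        u = u + x - z                                                  # dual ascent
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10); x_true[0] = 3.0
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

The same structure scales to the distributed settings the TL;DR mentions: the quadratic step splits across data partitions, with the z-step acting as a global consensus/regularization step.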