Open Access · Posted Content

Faster convergence rates of relaxed Peaceman-Rachford and ADMM under regularity assumptions

TLDR
This paper presents a convergence rate analysis of the Douglas-Rachford splitting (DRS), Peaceman-Rachford splitting (PRS), and alternating direction method of multipliers (ADMM) algorithms under various regularity assumptions, including strong convexity, Lipschitz differentiability, and bounded linear regularity.
Abstract
Splitting schemes are a class of powerful algorithms that solve complicated monotone inclusion and convex optimization problems built from many simpler pieces. They give rise to algorithms in which the simple pieces of the decomposition are processed individually. This leads to easily implementable and highly parallelizable algorithms, which often attain nearly state-of-the-art performance. In this paper, we provide a comprehensive convergence rate analysis of the Douglas-Rachford splitting (DRS), Peaceman-Rachford splitting (PRS), and alternating direction method of multipliers (ADMM) algorithms under various regularity assumptions, including strong convexity, Lipschitz differentiability, and bounded linear regularity. The main consequence of this work is that relaxed PRS and ADMM automatically adapt to the regularity of the problem and achieve convergence rates that improve upon the (tight) worst-case rates that hold in the absence of such regularity. All of the results are obtained using simple techniques.
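To make the iteration concrete, here is a minimal sketch of relaxed PRS in the fixed-point form the splitting literature usually states for minimizing f(x) + g(x); the step size gamma, relaxation parameter lam, and the lasso instance below are illustrative assumptions, not details taken from the paper. Setting lam = 1 recovers DRS and lam = 2 the unrelaxed PRS.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: elementwise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def relaxed_prs(prox_f, prox_g, z0, gamma=1.0, lam=1.5, iters=500):
    # Relaxed PRS for min f(x) + g(x); lam = 1 is DRS, lam = 2 is PRS.
    z = z0.copy()
    for _ in range(iters):
        x = prox_g(z, gamma)            # backward step on g
        y = prox_f(2.0 * x - z, gamma)  # backward step at the reflected point
        z = z + lam * (y - x)           # relaxed fixed-point update
    return x

# Illustrative use on min 0.5*||A x - b||^2 + mu*||x||_1 (assumed toy instance):
rng = np.random.default_rng(0)
A, b, mu = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
prox_f = lambda v, g: np.linalg.solve(A.T @ A + np.eye(50) / g, A.T @ b + v / g)
prox_g = lambda v, g: soft_threshold(v, g * mu)
x_star = relaxed_prs(prox_f, prox_g, np.zeros(50))
```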


Citations
Journal Article

Global Convergence of ADMM in Nonconvex Nonsmooth Optimization

TL;DR: This paper analyzes the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function subject to coupled linear equality constraints.
Journal Article

On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers

TL;DR: This paper shows that global linear convergence can be guaranteed under the assumptions of strong convexity and Lipschitz gradient on one of the two functions, along with certain rank assumptions on A and B.
Journal Article

An introduction to continuous optimization for imaging

TL;DR: This paper surveys the state of the art in continuous optimization methods for imaging problems, with particular emphasis on optimal first-order schemes that can handle the typical non-smooth and large-scale objective functions arising in imaging.
Journal Article

Parallel Multi-Block ADMM with o(1/k) Convergence

TL;DR: The classic ADMM can be extended to an N-block Jacobi fashion and preserve convergence in the following two cases: (i) the matrices A_i are mutually near-orthogonal and have full column rank, or (ii) proximal terms are added to the N subproblems (without any assumption on the matrices A_i).
Journal Article

A Proximal Gradient Algorithm for Decentralized Composite Optimization

TL;DR: This paper proposes a proximal gradient exact first-order algorithm (PG-EXTRA) that exploits the composite structure, achieves the best known convergence rate, and is a nontrivial extension of the recent algorithm EXTRA.
References
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Journal Article

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

TL;DR: This paper proposes a new fast iterative shrinkage-thresholding algorithm (FISTA) that preserves the computational simplicity of ISTA but achieves a global rate of convergence that is provably significantly better, both theoretically and practically.
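For reference, a minimal FISTA sketch for min f(x) + g(x), assuming f has an L-Lipschitz gradient and that callables grad_f and prox_g are available; the names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, iters=500):
    # FISTA: a proximal gradient (ISTA) step plus Nesterov-style extrapolation.
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)        # ISTA step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum sequence
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x
```

The extrapolation step is what lifts ISTA's O(1/k) objective-error rate to FISTA's O(1/k^2) while keeping the per-iteration cost essentially unchanged.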
Journal Article

Theory of Reproducing Kernels.

TL;DR: This paper gives a short historical introduction indicating the different ways in which reproducing kernels have been used by various investigators and discusses the more important trends in their application, without attempting a complete bibliography of the subject.
Book

Parallel and Distributed Computation: Numerical Methods

TL;DR: This work discusses parallel and distributed architectures, complexity measures, and communication and synchronization issues, and it presents both the Jacobi and Gauss-Seidel iterations, which serve as reference algorithms for many of the computational approaches addressed later.
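To illustrate the distinction, here is a sketch of one sweep of each iteration for a linear system Ax = b (an assumed toy setting): the Jacobi sweep updates every coordinate from the previous iterate and so parallelizes trivially, while Gauss-Seidel updates coordinates sequentially using the freshest values.

```python
import numpy as np

def jacobi_step(A, b, x):
    # All coordinates updated from the previous iterate: fully parallelizable.
    D = np.diag(A)
    return (b - A @ x + D * x) / D

def gauss_seidel_step(A, b, x):
    # Coordinates updated in sequence, each using already-updated values.
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Toy usage: for a diagonally dominant system, both sweeps converge.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(50):
    x = jacobi_step(A, b, x)
```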
Journal Article

The Split Bregman Method for L1-Regularized Problems

TL;DR: This paper proposes a “split Bregman” method, which can solve a very broad class of L1-regularized problems, and applies this technique to the Rudin-Osher-Fatemi functional for image denoising and to a compressed sensing problem that arises in magnetic resonance imaging.
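A minimal sketch of the split Bregman iteration on the illustrative problem min ||Phi u||_1 + 0.5*||u - f||^2 via the splitting d = Phi u; the problem instance and names are assumptions for illustration, not the paper's imaging setup. The u-update solves a small quadratic subproblem, the d-update is a shrinkage, and b accumulates the Bregman (dual) residual.

```python
import numpy as np

def split_bregman(Phi, f, lam=1.0, iters=200):
    # Split Bregman for min ||Phi u||_1 + 0.5*||u - f||^2 with d = Phi u.
    n, m = Phi.shape[1], Phi.shape[0]
    u, d, b = np.zeros(n), np.zeros(m), np.zeros(m)
    M = np.eye(n) + lam * Phi.T @ Phi  # normal matrix of the u-subproblem
    for _ in range(iters):
        u = np.linalg.solve(M, f + lam * Phi.T @ (d - b))         # quadratic u-update
        v = Phi @ u + b
        d = np.sign(v) * np.maximum(np.abs(v) - 1.0 / lam, 0.0)   # shrinkage (prox of ||.||_1)
        b = v - d                                                 # Bregman update
    return u
```

Viewed this way, split Bregman coincides with ADMM applied to the same splitting, which is why it inherits ADMM-style convergence behavior on L1-regularized problems.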