Open Access · Journal Article · DOI

Tight global linear convergence rate bounds for Douglas–Rachford splitting

Pontus Giselsson
13 Mar 2017 · Vol. 19, Iss. 4, pp. 2241–2270
TLDR
In this article, the authors show that the known linear convergence rate bound of Lions and Mercier for Douglas–Rachford splitting is not tight, meaning that no problem from the considered class converges exactly with that rate, and they derive tight global bounds in its place.
Abstract
Recently, several authors have shown local and global convergence rate results for Douglas–Rachford splitting under strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. Most of these focus on the convex optimization setting. In the more general monotone inclusion setting, Lions and Mercier showed a linear convergence rate bound under the assumption that one of the two operators is strongly monotone and Lipschitz continuous. We show that this bound is not tight, meaning that no problem from the considered class converges exactly with that rate. In this paper, we present tight global linear convergence rate bounds for that class of problems. We also provide tight linear convergence rate bounds under the assumptions that one of the operators is strongly monotone and cocoercive, and that one of the operators is strongly monotone and the other is cocoercive. All our linear convergence results are obtained by proving the stronger property that the Douglas–Rachford operator is contractive.
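For context, a minimal sketch of the method the abstract refers to, in standard notation assumed here (gamma > 0 is the step-size parameter and J_{\gamma A} = (I + \gamma A)^{-1} is the resolvent of A): Douglas–Rachford splitting finds a zero of the sum of two maximally monotone operators, 0 \in Ax + Bx, by iterating

\begin{align*}
  x^{k}   &= J_{\gamma B}(z^{k}), \\
  y^{k}   &= J_{\gamma A}(2x^{k} - z^{k}), \\
  z^{k+1} &= z^{k} + y^{k} - x^{k},
\end{align*}

that is, z^{k+1} = T z^{k} with T = \tfrac{1}{2}(I + R_{\gamma A} R_{\gamma B}), where R_{\gamma A} = 2 J_{\gamma A} - I is the reflected resolvent. If T is contractive with factor \delta < 1, i.e. \|Tz - Tz'\| \le \delta \|z - z'\|, then \|z^{k} - z^{\star}\| \le \delta^{k} \|z^{0} - z^{\star}\|, which is exactly the kind of global linear convergence the paper quantifies.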



Citations
Posted Content

FedSplit: An algorithmic framework for fast federated optimization

TL;DR: Introduces FedSplit, a class of algorithms based on operator splitting procedures for solving distributed convex minimization with additive structure; the accompanying theory shows that these methods are provably robust to inexact computation of intermediate local quantities.
Journal Article · DOI

A review of nonlinear FFT-based computational homogenization methods

TL;DR: Provides a condensed overview of results scattered throughout the literature and guides the reader to the current state of the art in nonlinear computational homogenization methods based on the fast Fourier transform.
Journal Article · DOI

On the equivalence of the primal-dual hybrid gradient method and Douglas–Rachford splitting

TL;DR: It is shown that the PDHG algorithm can be viewed as a special case of the Douglas–Rachford splitting algorithm for minimizing the sum of two convex functions.
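For reference, the PDHG iteration for \min_x f(x) + g(Ax) in its standard form (notation assumed here, with step sizes \tau, \sigma > 0 and g^{*} the convex conjugate of g) reads

\begin{align*}
  x^{k+1} &= \operatorname{prox}_{\tau f}\!\big(x^{k} - \tau A^{T} y^{k}\big), \\
  y^{k+1} &= \operatorname{prox}_{\sigma g^{*}}\!\big(y^{k} + \sigma A (2x^{k+1} - x^{k})\big);
\end{align*}

the cited paper shows that this update can be rewritten, after a change of variables, as a Douglas–Rachford iteration.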
Posted Content

Plug-and-Play Unplugged: Optimization Free Reconstruction using Consensus Equilibrium

TL;DR: Introduces Consensus Equilibrium (CE), which is based on the solution of a set of equilibrium equations that balance data fit and regularity; these equations can be solved in multiple ways, including ADMM with a novel form of preconditioning and Newton's method.
Journal Article · DOI

Tight Global Linear Convergence Rate Bounds for Operator Splitting Methods

TL;DR: This work establishes necessary and sufficient conditions for global linear convergence rate bounds in operator splitting methods for a general class of convex optimization problems, where the associated fixed-point operator is strongly quasi-nonexpansive.
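As background (standard definition, assumed here rather than taken from this page): an operator T with fixed-point set \operatorname{fix} T is \rho-strongly quasi-nonexpansive, with \rho > 0, if

\[
  \|Tx - \bar{x}\|^{2} \le \|x - \bar{x}\|^{2} - \rho \|Tx - x\|^{2}
  \quad \text{for all } x \text{ and all } \bar{x} \in \operatorname{fix} T,
\]

which guarantees that the distance to the fixed-point set never increases and that each iteration makes quantifiable progress.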
References
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
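For reference, the ADMM iteration in the scaled form used in this book, for \min f(x) + g(z) subject to Ax + Bz = c, with penalty parameter \rho > 0 and scaled dual variable u:

\begin{align*}
  x^{k+1} &= \operatorname*{argmin}_{x} \Big( f(x) + \tfrac{\rho}{2}\|Ax + Bz^{k} - c + u^{k}\|_{2}^{2} \Big), \\
  z^{k+1} &= \operatorname*{argmin}_{z} \Big( g(z) + \tfrac{\rho}{2}\|Ax^{k+1} + Bz - c + u^{k}\|_{2}^{2} \Big), \\
  u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{align*}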
Book

Convex Analysis and Monotone Operator Theory in Hilbert Spaces

TL;DR: This book provides a largely self-contained account of the main results of convex analysis and optimization in Hilbert space, and a concise exposition of related constructive fixed point theory that allows for a wide range of algorithms to construct solutions to problems in optimization, equilibrium theory, monotone inclusions, variational inequalities, and convex feasibility.
Journal Article · DOI

A dual algorithm for the solution of nonlinear variational problems via finite element approximation

TL;DR: A dual method is proposed that decouples the difficulties related to the functionals f and g from the possible ill-conditioning effects of the linear operator A, leading to an efficient and easily implementable algorithm.
Journal Article · DOI

Mean value methods in iteration

TL;DR: In this article, it is shown that the Schauder fixed-point theorem can play a somewhat analogous role in the theory of divergent iteration processes, and that the same methods can be used to prove that a given problem has a solution.
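The iteration scheme now named after this paper, the Mann iteration, replaces the plain fixed-point step x^{k+1} = T x^{k} by an averaged step (standard modern form, notation assumed):

\[
  x^{k+1} = (1 - \alpha_{k})\, x^{k} + \alpha_{k}\, T x^{k}, \qquad \alpha_{k} \in (0, 1),
\]

so that averaging can force convergence in cases where the unaveraged iteration diverges.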