Open Access Journal Article (DOI)

Fast alternating linearization methods for minimizing the sum of two convex functions

TL;DR: Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in "Fast multiple splitting algorithms for convex optimization" (Columbia University, 2009), where the algorithms are Jacobi type methods.
Abstract
We present in this paper alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while our accelerated (i.e., fast) versions of them require at most $O(1/\sqrt{\epsilon})$ iterations, with little change in the computational effort required at each iteration. For both types of methods, we present one algorithm that requires both functions to be smooth with Lipschitz continuous gradients and one algorithm that needs only one of the functions to be so. Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in (Fast multiple splitting algorithms for convex optimization, Columbia University, 2009) where the algorithms are Jacobi type methods. Numerical results are reported to support our theoretical conclusions and demonstrate the practical potential of our algorithms.
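To make the scheme concrete, the following is a minimal sketch of the Gauss-Seidel alternating pattern for minimizing f(x) + g(x): each step minimizes one function exactly while linearizing the other at the latest iterate and adding a proximal term, which reduces to a proximal step at a forward (gradient) point. The function names and the least-squares-plus-ridge instance are illustrative assumptions, not the paper's algorithms or test problems verbatim.

```python
# A minimal sketch of one Gauss-Seidel alternating linearization loop,
# assuming both f and g are smooth. Minimizing f plus a linearization of g
# at y plus a proximal term equals a prox step on f at the forward point
# y - mu * grad_g(y), and symmetrically for the second step.
import numpy as np

def alternating_linearization(grad_f, grad_g, prox_f, prox_g, x0, mu, iters=200):
    """prox_h(v, mu) = argmin_x h(x) + ||x - v||^2 / (2 * mu)."""
    y = np.copy(x0)
    for _ in range(iters):
        x = prox_f(y - mu * grad_g(y), mu)  # linearize g, keep f exact
        y = prox_g(x - mu * grad_f(x), mu)  # linearize f, keep g exact
    return y

# Illustrative instance (an assumption, not the paper's experiments):
# f(x) = 0.5 * ||A x - b||^2 and g(x) = 0.5 * lam * ||x||^2, where both
# proximal subproblems have closed forms.
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
mu = 1.0 / np.linalg.norm(A, 2) ** 2        # step bounded by 1 / Lipschitz(grad f)
grad_f = lambda x: A.T @ (A @ x - b)
grad_g = lambda x: lam * x
prox_f = lambda v, m: np.linalg.solve(A.T @ A + np.eye(5) / m, A.T @ b + v / m)
prox_g = lambda v, m: v / (1 + lam * m)
x_hat = alternating_linearization(grad_f, grad_g, prox_f, prox_g, np.zeros(5), mu)
```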


Citations
Journal Article (DOI)

Fast Alternating Direction Optimization Methods

TL;DR: This paper considers accelerated variants of two common alternating direction methods: the alternating direction method of multipliers (ADMM) and the alternating minimization algorithm (AMA), of the form first proposed by Nesterov for gradient descent methods.
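As a rough illustration of that acceleration pattern, here is a minimal sketch of ADMM for min f(x) + g(x) with a Nesterov-type extrapolation applied to the z and dual iterates; the function names and the plain (no-restart) momentum schedule are assumptions for illustration, not the paper's exact method.

```python
# A minimal sketch of accelerated ADMM: standard scaled-dual ADMM proximal
# steps, plus Nesterov-style momentum on the z and dual variables.
import numpy as np

def fast_admm(prox_f, prox_g, x0, iters=100):
    """prox_h(v) = argmin_x h(x) + (rho/2) * ||x - v||^2 for a fixed rho."""
    z = z_hat = np.copy(x0)
    u = u_hat = np.zeros_like(x0)
    alpha = 1.0
    for _ in range(iters):
        x = prox_f(z_hat - u_hat)                     # x-minimization step
        z_new = prox_g(x + u_hat)                     # z-minimization step
        u_new = u_hat + x - z_new                     # scaled dual update
        alpha_new = (1 + np.sqrt(1 + 4 * alpha ** 2)) / 2
        beta = (alpha - 1) / alpha_new                # momentum weight
        z_hat = z_new + beta * (z_new - z)            # extrapolate z
        u_hat = u_new + beta * (u_new - u)            # extrapolate dual
        z, u, alpha = z_new, u_new, alpha_new
    return z
```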
Journal Article (DOI)

On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers

TL;DR: This paper shows that global linear convergence can be guaranteed under the assumptions of strong convexity and Lipschitz gradient on one of the two functions, along with certain rank assumptions on A and B.
Journal Article (DOI)

On the linear convergence of the alternating direction method of multipliers

TL;DR: This paper establishes the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds true and the dual stepsize is sufficiently small.
Journal Article (DOI)

Robust PCA via Principal Component Pursuit: A review for a comparative evaluation in video surveillance

TL;DR: This work aims to initiate a rigorous and comprehensive review of RPCA-PCP based methods for testing and ranking existing algorithms for foreground detection, and investigates how these methods are solved and whether incremental algorithms and real-time implementations can be achieved.
Posted Content

Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey.

TL;DR: A unified algorithmic framework is introduced for incremental methods for minimizing a sum $\sum_{i=1}^{m} f_i(x)$ consisting of a large number of convex component functions $f_i$, covering the advantages offered by randomization in the selection of components.
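For illustration, a minimal sketch of the incremental pattern the survey covers: sweep over the component functions f_i one gradient step at a time, with the randomized component order the TL;DR mentions. The helper names and the quadratic components are illustrative assumptions.

```python
# A minimal sketch of an incremental gradient method for sum_i f_i(x):
# one component gradient step at a time, in a random order per epoch.
import numpy as np

def incremental_gradient(grads, x0, step=0.01, epochs=50, rng=None):
    """grads[i](x) returns the gradient of the i-th component f_i at x."""
    rng = rng or np.random.default_rng(0)
    x = np.copy(x0)
    for _ in range(epochs):
        for i in rng.permutation(len(grads)):   # randomized component order
            x = x - step * grads[i](x)
    return x

# Example: f_i(x) = 0.5 * (a_i . x - b_i)^2, so the sum is a least-squares loss.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grads = [lambda x, a=A[i], bi=b[i]: (a @ x - bi) * a for i in range(100)]
x_hat = incremental_gradient(grads, np.zeros(5))
```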
References
Book

Compressed sensing

TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
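For concreteness, here is a minimal sketch of Basis Pursuit posed as the linear program min ||x||_1 subject to Ax = b, via the standard split x = x_plus - x_minus with nonnegative parts; using scipy's generic LP solver and a random instance are assumptions for illustration, not the paper's setup.

```python
# A minimal sketch of Basis Pursuit as a linear program: minimize
# sum(x_plus + x_minus) = ||x||_1 subject to A @ (x_plus - x_minus) = b.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    n = A.shape[1]
    c = np.ones(2 * n)                       # objective: the l1 norm of x
    A_eq = np.hstack([A, -A])                # equality constraint in split form
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 30))                # underdetermined system
x_true = np.zeros(30)
x_true[[3, 7, 20]] = [1.0, -2.0, 0.5]        # sparse signal to recover
x_rec = basis_pursuit(A, A @ x_true)         # recovers x_true up to tolerance
```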
Journal Article (DOI)

Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information

TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least $1-O(N^{-M})$, $f$ can be reconstructed exactly as the solution to the $\ell_1$ minimization problem.
Book

Nonlinear Programming

Journal Article (DOI)

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) preserves the computational simplicity of ISTA but has a global rate of convergence that is proven to be significantly better, both theoretically and practically.
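Since FISTA-style acceleration underlies the fast methods above, a minimal sketch may help: a proximal-gradient (ISTA) step followed by Nesterov extrapolation with the familiar t-sequence. The lasso instance below is an illustrative assumption, not the paper's experiments.

```python
# A minimal sketch of FISTA for min_x f(x) + g(x), with smooth f whose
# gradient has Lipschitz constant L, and a proximable g.
import numpy as np

def fista(grad_f, prox_g, L, x0, iters=200):
    """prox_g(v, t) = argmin_x g(x) + ||x - v||^2 / (2 * t)."""
    x, y, t = np.copy(x0), np.copy(x0), 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)   # proximal gradient (ISTA) step
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2    # momentum schedule
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Lasso example: f(x) = 0.5 * ||A x - b||^2, g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(40, 10)), rng.normal(size=40), 0.1
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of grad f
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x_hat = fista(lambda x: A.T @ (A @ x - b), lambda v, t: soft(v, lam * t),
              L, np.zeros(10))
```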