
Efficient Schemes for Total Variation Minimization Under Constraints in Image Processing

TLDR
New fast algorithms are presented for minimizing total variation, and more generally $l^1$-norms, under a general convex constraint; they build on a recent advance in convex optimization proposed by Yurii Nesterov.
Abstract
This paper presents new fast algorithms to minimize total variation and more generally $l^1$-norms under a general convex constraint. Such problems are standard in image processing. The algorithms are based on a recent advance in convex optimization proposed by Yurii Nesterov. Depending on the regularity of the data fidelity term, we solve either a primal problem or a dual problem. First we show that standard first-order schemes allow one to get solutions of precision $\epsilon$ in $O(\frac{1}{\epsilon^2})$ iterations at worst. We propose a scheme that allows one to obtain a solution of precision $\epsilon$ in $O(\frac{1}{\epsilon})$ iterations for a general convex constraint. For a strongly convex constraint, we solve a dual problem with a scheme that requires $O(\frac{1}{\sqrt{\epsilon}})$ iterations to get a solution of precision $\epsilon$. Finally we perform some numerical experiments which confirm the theoretical results on various problems of image processing.
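For reference, the generic problem class treated in the paper can be written schematically as follows (assembled from the abstract and the excerpts below, using the report's notation $J$, $B$, $K$):

```latex
\inf_{u \in K} J(u), \qquad J(u) = \|Bu\|_1, \qquad
K \subset X \ \text{closed and convex, e.g. } K = \{u \in X : |u - f|_p \le \alpha\},
```

with $B = \nabla$, so that $J$ is the discrete total variation.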



HAL Id: inria-00166096
https://hal.inria.fr/inria-00166096v3
Submitted on 7 Mar 2008
Ecient schemes for total variation minimization under
constraints in image processing
Pierre Weiss, Gilles Aubert, Laure Blanc-Féraud
To cite this version:
Pierre Weiss, Gilles Aubert, Laure Blanc-Féraud. Ecient schemes for total variation minimization
under constraints in image processing. [Research Report] RR-6260, INRIA. 2007, pp.36. �inria-
00166096v3�

Efficient schemes for total variation minimization under constraints in image processing
Pierre Weiss, Gilles Aubert, Laure Blanc-Féraud
Theme COG: Cognitive Systems
Ariana project team
Research Report no. 6260, July 2007, 36 pages
Abstract: This paper presents new algorithms to minimize total variation and more generally $l^1$-norms under a general convex constraint. The algorithms are based on a recent advance in convex optimization proposed by Yurii Nesterov [34]. Depending on the regularity of the data fidelity term, we solve either a primal problem or a dual problem. First we show that standard first-order schemes allow one to get solutions of precision $\epsilon$ in $O(\frac{1}{\epsilon^2})$ iterations at worst. For a general convex constraint, we propose a scheme that allows one to obtain a solution of precision $\epsilon$ in $O(\frac{1}{\epsilon})$ iterations. For a strongly convex constraint, we solve a dual problem with a scheme that requires $O(\frac{1}{\sqrt{\epsilon}})$ iterations to get a solution of precision $\epsilon$. Thus, depending on the regularity of the data term, we gain from one to two orders of magnitude in the convergence rates with respect to standard schemes. Finally we perform some numerical experiments which confirm the theoretical results on various problems.
Key-words: $l^1$-norm minimization, total variation minimization, $l^p$-norms, duality, gradient and subgradient descents, Nesterov scheme, bounded and non-bounded noises, texture+geometry decomposition, complexity.

Efficient algorithms for total variation minimization under constraints
Abstract: This paper presents new algorithms to minimize total variation, and more generally $l^1$-norms, under convex constraints. These algorithms stem from a recent advance in convex optimization proposed by Yurii Nesterov. Depending on the regularity of the data fidelity term, we solve either a primal problem or a dual problem. First, we show that standard first-order schemes yield solutions of precision $\epsilon$ in $O(\frac{1}{\epsilon^2})$ iterations in the worst case. For an arbitrary convex constraint, we propose a scheme that yields a solution of precision $\epsilon$ in $O(\frac{1}{\epsilon})$ iterations. For a strongly convex constraint, we solve a dual problem with a scheme that requires $O(\frac{1}{\sqrt{\epsilon}})$ iterations to obtain a solution of precision $\epsilon$. Depending on the constraint, we thus gain one to two orders of magnitude in convergence speed over standard approaches. Finally, we run some numerical experiments which confirm the theoretical results on numerous problems.
Keywords: $l^1$-norm, total variation minimization, bounded noises, compression noises, texture + geometry decomposition, duality, projected subgradient, complexity, convex constraints

Citations
Posted Content

Proximal Splitting Methods in Signal Processing

Abstract: The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed. (A basic proximity-operator computation is sketched just after this list.)
Journal ArticleDOI

NESTA: A Fast and Accurate First-Order Method for Sparse Recovery

TL;DR: A smoothing technique and an accelerated first-order algorithm are applied and it is demonstrated that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems and is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters.
Journal ArticleDOI

Templates for convex cone problems with applications to sparse signal recovery

TL;DR: A general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields, together with results showing that the smoothed and unsmoothed problems are sometimes formally equivalent.
Journal ArticleDOI

A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science

TL;DR: This work generalizes the primal-dual hybrid gradient (PDHG) algorithm to a broader class of convex optimization problems, and surveys several closely related methods and explains the connections to PDHG.
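To make the proximity-operator notion from the first citing paper above concrete: for $\lambda\|\cdot\|_1$, central to this paper's setting, the proximity operator has the classical soft-thresholding closed form. A minimal NumPy sketch (a standard result, not code from any paper listed here):

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of lam*||.||_1: the minimizer of
    lam*||y||_1 + 0.5*||y - x||_2^2, which separates coordinate-wise
    into soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Example: prox_l1(np.array([-2.0, 0.3, 1.5]), 1.0) -> array([-1. ,  0. ,  0.5])
```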
References
Journal ArticleDOI

Nonlinear total variation based noise removal algorithms

TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.
Book

Convex analysis and variational problems

TL;DR: In this book, the authors develop convex analysis and duality methods for variational problems, covering minimax theorems, a priori estimates in convex programming, and non-convex variational problems.
Book

Introductory Lectures on Convex Optimization: A Basic Course

TL;DR: A polynomial-time interior-point method for linear optimization is presented; its importance lies not only in its complexity bound but also in the theoretical prediction of its high efficiency, which was supported by excellent computational results.
Book

An introduction to optimization

TL;DR: This textbook covers background mathematics, methods of proof and notation, linear programming, set-constrained and unconstrained optimization, and problems with equality constraints.
Frequently Asked Questions (16)
Q1. What contributions have the authors mentioned in the paper "Efficient schemes for total variation minimization under constraints in image processing"?

This paper presents new algorithms to minimize total variation and more generally $l^1$-norms under a general convex constraint. First the authors show that standard first-order schemes allow one to get solutions of precision $\epsilon$ in $O(\frac{1}{\epsilon^2})$ iterations at worst. For a general convex constraint, the authors propose a scheme that allows one to obtain a solution of precision $\epsilon$ in $O(\frac{1}{\epsilon})$ iterations. Finally the authors perform some numerical experiments which confirm the theoretical results on various problems.

Those include subgradient descents [28], Newton-like methods [25], second-order cone programming [21], interior-point methods [19], and graph-based approaches [15, 10].

After 200 iterations, very little perceptual change is observed in all experiments, while a projected gradient descent requires around 1000 iterations to reach the same result.

For a 256 × 256 image, the cost per iteration is 0.2 seconds (we used the fft2 function of Matlab and implemented C code with the Matlab mex compiler for the projections).
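To illustrate where that per-iteration cost comes from, here is a minimal NumPy sketch of the FFT-based convolution $h \star u$ that dominates each iteration when the constraint involves a blur kernel (this assumes a periodic boundary model and is not the authors' Matlab/mex implementation):

```python
import numpy as np

def convolve_fft(u, h):
    """Circular convolution h * u in O(N log N) via the 2-D FFT.
    The kernel h is zero-padded to the image size; periodic boundaries
    are assumed."""
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(h, s=u.shape)))
```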

In this paper the authors concentrate on the use of $B = \nabla$, with $\|Bu\|_1$ corresponding to the total variation (see the appendix for the discretization of differential operators).
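With the usual forward-difference discretization (the report's appendix fixes the exact convention; the isotropic form below is the standard choice):

```latex
J(u) = \|\nabla u\|_1
     = \sum_{i,j} \sqrt{(u_{i+1,j} - u_{i,j})^2 + (u_{i,j+1} - u_{i,j})^2},
```

with the differences set to zero at the image boundary.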

A function $F$ defined on $K$ is said to be $L$-Lipschitz differentiable if it is differentiable on $K$ and $|\nabla F(u_1) - \nabla F(u_2)|_2 \le L|u_1 - u_2|_2$ for any $(u_1, u_2) \in K^2$. Definition 2.7. [Strongly convex differentiable function] A differentiable function $F$ is said to be strongly convex with parameter $\sigma > 0$ if $\langle \nabla F(u_1) - \nabla F(u_2), u_1 - u_2 \rangle \ge \sigma |u_1 - u_2|_2^2$ for any $(u_1, u_2) \in K^2$ (standard definition).

Maybe the most straightforward algorithm to solve (1.1) for a general convex function $F$ is the projected subgradient descent algorithm.

To retrieve the original image, the authors can solve the following problem: $\inf_{u \in X,\, |h \star u - f|_2 \le \alpha} J(u)$ (2.18). In view of Shannon's theorem, one could think that the best way to zoom an image is to use a zero-padding technique.

Then it can be shown using the Bayes rule that the "best" way to retrieve $u$ from $f$ is to solve the following problem: $\inf_{u \in X,\, |u - f|_p \le \alpha} J(u)$ (2.14), with $p = 1$ for impulse noise [2, 35, 11, 15], $p = 2$ for Gaussian noise [41], $p = \infty$ for uniform noise [45], and $\alpha$ a parameter depending on the variance of the noise.

Convergence rate (3.24) and bound (3.30) tell us that after $k$ iterations of a Nesterov scheme, the worst-case precision is $\frac{2\|\operatorname{div}\|_2^2\, d(\bar u)}{\mu k^2} + \frac{n\mu}{2}$.
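To see how this bound yields the $O(\frac{1}{\epsilon})$ complexity claimed in the abstract, one can balance the two terms (a standard calculation, not a quotation from the report): choosing the smoothing parameter $\mu = \frac{\epsilon}{n}$ makes the second term equal to $\frac{\epsilon}{2}$, and the first term drops below $\frac{\epsilon}{2}$ as soon as

```latex
k \ \ge\ \frac{2\,\|\operatorname{div}\|_2\,\sqrt{n\, d(\bar u)}}{\epsilon},
```

i.e. after $O(\frac{1}{\epsilon})$ iterations the worst-case precision is at most $\epsilon$.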

With the previous section in mind, a straightforward approach to solving (1.2) is to smooth the total variation (if $F$ is non-smooth, it can be smoothed too) and then use a fast scheme like (3.23), adapted to the unconstrained minimization of Lipschitz differentiable functions [33].
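A minimal NumPy sketch of this smooth-then-accelerate approach. The Huber smoothing of TV and the constraint set (an $l^2$-ball around $f$) are standard illustrative choices, and the recursion is a generic $O(\frac{1}{k^2})$ accelerated projected gradient (FISTA-style), not a line-by-line transcription of scheme (3.23):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient of a 2-D image; the differences vanish
    on the last row/column (Neumann boundary)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, defined as the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def grad_smoothed_tv(u, mu):
    """Gradient of the Huber-smoothed TV; Lipschitz with constant <= 8/mu."""
    gx, gy = grad(u)
    scale = 1.0 / np.maximum(np.sqrt(gx**2 + gy**2), mu)
    return -div(gx * scale, gy * scale)

def accelerated_tv(f, alpha, mu=0.02, n_iter=200):
    """Accelerated projected gradient on the smoothed TV over
    K = {u : ||u - f||_2 <= alpha}."""
    L = 8.0 / mu                              # Lipschitz constant of grad_smoothed_tv
    def proj(u):                              # Euclidean projection onto the l2-ball K
        r = np.linalg.norm(u - f)
        return u if r <= alpha else f + alpha * (u - f) / r
    x, y, t = f.copy(), f.copy(), 1.0
    for _ in range(n_iter):
        x_new = proj(y - grad_smoothed_tv(y, mu) / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```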

It can be shown that the amplitude of the texture (the $v$ component) is bounded by a parameter depending linearly on $\alpha$ in (3.45).
[Figure 2: Image to be decomposed]

The second is a projected subgradient descent with optimal step: $u^0 = f$, $u^{k+1} = \Pi_K\left(u^k - t_k \frac{\eta^k}{\|\eta^k\|_2}\right)$ (4.82), where $\eta^k$ must belong to $\partial J(u^k)$ for convergence.
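A minimal 1-D NumPy sketch of scheme (4.82), with $J$ the discrete total variation and $K$ an $l^2$-ball around $f$; the diminishing step $t_k = 1/\sqrt{k+1}$ is a classical choice and, like the constraint set, an illustrative assumption rather than the report's exact setup:

```python
import numpy as np

def projected_subgradient_tv(f, alpha, n_iter=500):
    """u^{k+1} = Pi_K(u^k - t_k * eta^k / ||eta^k||_2), with eta^k in the
    subdifferential of J(u) = sum_i |u_{i+1} - u_i| (1-D TV) and
    K = {u : ||u - f||_2 <= alpha}."""
    u = f.copy()
    for k in range(n_iter):
        d = np.diff(u)                 # forward differences Du
        s = np.sign(d)                 # sign(Du) is a valid element of the subdifferential of |.|
        eta = np.zeros_like(u)         # eta = D^T s, a subgradient of J at u
        eta[:-1] -= s
        eta[1:] += s
        nrm = np.linalg.norm(eta)
        if nrm == 0:                   # then u is constant, so J(u) = 0 is already minimal
            break
        u = u - (1.0 / np.sqrt(k + 1)) * eta / nrm
        r = np.linalg.norm(u - f)      # projection onto the l2-ball K
        if r > alpha:
            u = f + alpha * (u - f) / r
    return u
```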

Note that the computational effort per iteration of the Nesterov technique is between one and two times that of the projected gradient descent.

In [34], Y. Nesterov presents an $O(\frac{1}{\sqrt{\epsilon}})$ algorithm adapted to the problem $\inf_{u \in Q} E(u)$ (3.22), where $E$ is any convex, $L$-Lipschitz differentiable function and $Q$ is any closed convex set.

the authors show that when dealing with a strongly convex function F , the resolution of a dual problem can be done with an O( 1√ǫ ) algorithm.