Efficient Schemes for Total Variation Minimization Under Constraints in Image Processing
Citations
Proximal Splitting Methods in Signal Processing
NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
Templates for convex cone problems with applications to sparse signal recovery
A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science
References
Nonlinear total variation based noise removal algorithms
Convex analysis and variational problems
Introductory Lectures on Convex Optimization: A Basic Course
An introduction to optimization
Frequently Asked Questions (16)
Q2. What are some of the techniques that have been proposed to solve convex problems?
These include subgradient descent [28], Newton-like methods [25], second-order cone programming [21], interior point methods [19], and graph-based approaches [15, 10].
Q3. How many iterations do the authors need to get the same result?
After 200 iterations, very little perceptual change is observed in all experiments, while a projected gradient descent requires around 1000 iterations to reach the same result.
Q4. What is the cost per iteration of this model?
For a 256 × 256 image, the cost per iteration is 0.2 seconds (the authors used Matlab's fft2 function and implemented the projections in C via the Matlab mex compiler).
Q5. What is the main purpose of this paper?
In this paper the authors concentrate on the case $B = \nabla$, with $\|Bu\|_1$ then corresponding to the total variation (see the appendix for the discretization of differential operators).
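As a concrete companion to that choice (our own illustration, not the paper's appendix code), a minimal NumPy sketch of the discrete total variation using forward differences with replicate (Neumann) boundary conditions:

import numpy as np

def total_variation(u, isotropic=True):
    # Forward differences; replicating the last row/column makes the
    # boundary differences zero (Neumann boundary conditions).
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    if isotropic:
        return np.sum(np.sqrt(dx**2 + dy**2))  # per-pixel Euclidean norm of grad u
    return np.sum(np.abs(dx) + np.abs(dy))     # anisotropic variant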
Q6. What is the definition of a function that is differentiable?
A function F defined on K is said to be L-Lipschitz differentiable if it is differentiable on K and $|\nabla F(u_1) - \nabla F(u_2)|_2 \le L\, |u_1 - u_2|_2$ for any $(u_1, u_2) \in K^2$.
Q7. What is the most straightforward algorithm to solve (1.1) for a general convex function F?
Maybe the most straightforward algorithm to solve (1.1) for a general convex function F is the projected subgradient descent algorithm (see the sketch under Q13 below).
Q8. What is the way to deblur the image?
To retrieve the original image, the authors can solve the following problem: $\inf_{u \in X,\ |h \star u - f|_2 \le \alpha} J(u)$ (2.18). In view of Shannon's theorem, one could think that the best way to zoom an image is to use a zero-padding technique.
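For context, the zero-padding technique mentioned above amounts to padding the centered Fourier spectrum of the image with zeros. A rough NumPy sketch (our own, not the authors' code):

import numpy as np

def zoom_zero_padding(u, factor=2):
    # Shannon-style zoom: embed the centered spectrum in a larger zero array.
    m, n = u.shape
    U = np.fft.fftshift(np.fft.fft2(u))
    M, N = factor * m, factor * n
    U_pad = np.zeros((M, N), dtype=complex)
    r0, c0 = (M - m) // 2, (N - n) // 2
    U_pad[r0:r0 + m, c0:c0 + n] = U
    # factor**2 compensates ifft2's 1/(M*N) normalization, preserving intensities.
    return np.real(np.fft.ifft2(np.fft.ifftshift(U_pad))) * factor**2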
Q9. What is the way to retrieve u from f?
Then it can be shown, using Bayes' rule, that the "best" way to retrieve u from f is to solve the following problem: $\inf_{u \in X,\ |u - f|_p \le \alpha} J(u)$ (2.14), with p = 1 for impulse noise [2, 35, 11, 15], p = 2 for Gaussian noise [41], p = ∞ for uniform noise [45], and α a parameter depending on the variance of the noise.
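The constraint sets $|u - f|_p \le \alpha$ for p = 2 and p = ∞ admit closed-form projections; a hedged NumPy sketch follows (the p = 1 case requires an ℓ1-ball projection routine and is omitted):

import numpy as np

def project_l2_ball(u, f, alpha):
    # Projection onto {u : |u - f|_2 <= alpha}: radial shrinkage toward f.
    r = u - f
    norm = np.linalg.norm(r)
    return u if norm <= alpha else f + (alpha / norm) * r

def project_linf_ball(u, f, alpha):
    # Projection onto {u : |u - f|_inf <= alpha}: componentwise clipping around f.
    return f + np.clip(u - f, -alpha, alpha)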
Q10. What precision is reached after k iterations?
Convergence rate (3.24) and bound (3.30) tell us that after k iterations of a Nesterov scheme, the worst-case precision is $\frac{2\|\mathrm{div}\|_2^2\, d(\bar u)}{\mu k^2} + \frac{n\mu}{2}$.
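A quick balancing step (our own arithmetic, not quoted from the paper) makes the trade-off explicit. Minimizing the bound over the smoothing parameter µ gives

$\mu^\star = \frac{2\|\mathrm{div}\|_2}{k}\sqrt{\frac{d(\bar u)}{n}}, \qquad \frac{2\|\mathrm{div}\|_2^2\, d(\bar u)}{\mu^\star k^2} + \frac{n\mu^\star}{2} = \frac{2\|\mathrm{div}\|_2\sqrt{n\, d(\bar u)}}{k},$

so with this tuning the precision decays like 1/k, i.e. reaching precision ε costs O(1/ε) iterations.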
Q11. What is the way to solve the BV-ℓ1 problem?
Having the previous section in mind, a straightforward approach to solving (1.2) is to smooth the total variation; if F is non-smooth, it can be smoothed too. Then one just needs to use a fast scheme like (3.23), adapted to the unconstrained minimization of Lipschitz differentiable functions [33], as in the sketch below.
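As an illustration of the smoothing step, here is a NumPy sketch using the common substitution $|\nabla u| \to \sqrt{|\nabla u|^2 + \mu^2}$ (our choice of smoothing for concreteness; its gradient is Lipschitz with constant O(1/µ), so a scheme like (3.23) applies):

import numpy as np

def smoothed_tv_and_grad(u, mu):
    # Smoothed total variation: sum of sqrt(|grad u|^2 + mu^2).
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    w = np.sqrt(dx**2 + dy**2 + mu**2)
    value = np.sum(w)
    # Gradient = -div(grad u / w); the divergence below is the negative adjoint
    # of the forward-difference gradient under Neumann boundary conditions.
    px, py = dx / w, dy / w
    div = np.zeros_like(u)
    div[:, 0] += px[:, 0];  div[:, 1:] += px[:, 1:] - px[:, :-1]
    div[0, :] += py[0, :];  div[1:, :] += py[1:, :] - py[:-1, :]
    return value, -div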
Q12. What is the amplitude of the texture component?
It can be shown that the amplitude of the texture (v component) is bounded by a parameter depending linearly on α in (3.45).
Q13. What is the method for a subgradient descent?
The second is a projected subgradient descent with optimal step size: $u^0 = f$, $u^{k+1} = \Pi_K\!\left(u^k - t_k \frac{\eta^k}{\|\eta^k\|_2}\right)$ (4.82), where $\eta^k$ must belong to $\partial J(u^k)$ for convergence.
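A minimal NumPy transcription of (4.82); subgrad_J, project_K and the step sequence are placeholders we introduce, to be supplied by the reader:

import numpy as np

def projected_subgradient(f, subgrad_J, project_K, steps):
    # Iterates u^{k+1} = Pi_K(u^k - t_k * eta^k / |eta^k|_2),
    # with eta^k any element of the subdifferential of J at u^k.
    u = f.copy()
    for t in steps:
        eta = subgrad_J(u)
        norm = np.linalg.norm(eta)
        if norm == 0.0:          # u already minimizes J
            return u
        u = project_K(u - t * eta / norm)
    return u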
Q14. How much computational effort is the Nesterov technique taking?
Let us point out that the computational effort per iteration of the Nesterov technique is between one and two times that of the projected gradient descent.
Q15. What is the algorithm for this problem?
In [34], Y. Nesterov presents an $O(1/\sqrt{\epsilon})$ algorithm adapted to the problem $\inf_{u \in Q} E(u)$ (3.22), where E is any convex, L-Lipschitz differentiable function and Q is any closed convex set.
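For reference, here is a generic accelerated projected-gradient sketch in the spirit of [34] (a standard variant with FISTA-style momentum; grad_E, project_Q and the constant L are supplied by the user, and this is not necessarily the exact update rule of the paper):

import numpy as np

def nesterov_scheme(grad_E, project_Q, u0, L, n_iter):
    # Accelerated projected gradient for inf_{u in Q} E(u), E convex with
    # L-Lipschitz gradient: precision eps in O(1/sqrt(eps)) iterations.
    u = project_Q(u0)
    y, t = u.copy(), 1.0
    for _ in range(n_iter):
        u_next = project_Q(y - grad_E(y) / L)             # projected gradient step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = u_next + ((t - 1.0) / t_next) * (u_next - u)  # momentum extrapolation
        u, t = u_next, t_next
    return u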
Q16. How fast can the dual problem be solved?
The authors show that, when dealing with a strongly convex function F, the resolution of a dual problem can be done with an $O(1/\sqrt{\epsilon})$ algorithm.
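The mechanism behind this is a standard convex-duality fact (our summary, not a quotation from the paper): if F is σ-strongly convex, its Fenchel conjugate F* is differentiable with a (1/σ)-Lipschitz gradient,

$\|\nabla F^*(z_1) - \nabla F^*(z_2)\|_2 \le \frac{1}{\sigma}\, \|z_1 - z_2\|_2,$

so the dual problem is Lipschitz differentiable and Nesterov's $O(1/\sqrt{\epsilon})$ scheme applies to it directly.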