Journal ArticleDOI

A comparison of three total variation based texture extraction models

TL;DR: This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer, Vese-Osher (VO), and TV-L^1 [12,38,2-4,29-31] models.
About: This article was published in the Journal of Visual Communication and Image Representation on 2007-06-01 and is currently open access. It has received 68 citations to date. The article focuses on the topic: Image texture.

Summary (2 min read)

1 Introduction

  • Let f be an observed image that contains texture and/or noise.
  • Texture is characterized as a repeated and meaningful structure of small patterns.
  • Noise is characterized as uncorrelated random patterns.
  • The rest of an image, which is called the cartoon, contains object hues and sharp edges.
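In the notation used throughout this summary, the decomposition problem can be stated schematically as

$$f = u + v,$$

where $u$ is the cartoon part and $v$ collects the texture and/or noise; the three models compared in the paper differ in the norms used to measure $u$ and $v$.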

1.1 The spaces BV and G

  • In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images.
  • The ROF model is the precursor to a large number of image processing models having a similar form.
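For reference, a common unconstrained (Lagrangian) form of the ROF model is

$$\min_{u \in BV} \int |\nabla u| \, dx + \frac{\lambda}{2} \int (u - f)^2 \, dx,$$

where the first term is the total variation of $u$ and $\lambda > 0$ balances fidelity to the observed image $f$; the original paper [33] poses it as a constrained problem with constraints derived from the noise statistics.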

1.3 Second-order cone programming

  • Since a one-dimensional second-order cone corresponds to a semi-infinite ray, SOCPs can accommodate nonnegative variables.
  • In fact, if all cones are one-dimensional, then such an SOCP is just a standard-form linear program (the standard form is recalled below).
  • As is the case for linear programs, SOCPs can be solved in polynomial time by interior point methods.
  • This is the approach that the authors take to solve the TV-based cartoon-texture decomposition models in this paper.
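The standard form referred to above (notation may differ slightly from the paper's) is

$$\min_x \; c^T x \quad \text{s.t.} \quad Ax = b, \;\; x \in \mathcal{K} = \mathcal{K}_1 \times \cdots \times \mathcal{K}_r,$$

where each $\mathcal{K}_i = \{(x_0, \bar{x}) \in \mathbb{R} \times \mathbb{R}^{n_i - 1} : x_0 \ge \|\bar{x}\|_2\}$ is a second-order (Lorentz) cone. For $n_i = 1$ the cone reduces to the ray $\{x_0 \ge 0\}$, which is why nonnegative variables, and hence linear programs, fit this framework.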

2.2.3 The Vese-Osher (VO) model

  • This is equivalent to solving the residual-free version (45) below.
  • The authors chose to solve the latter in their numerical tests because a large λ in (44) makes its SOCP difficult to solve accurately; see the sketch of both formulations after this list.
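The summary does not reproduce the paper's equations (44)-(45). As a reference point only, the Vese-Osher model is usually stated, up to notation, as

$$\inf_{u, g_1, g_2} \int |\nabla u| + \lambda \int \big(f - u - \partial_x g_1 - \partial_y g_2\big)^2 \, dx\,dy + \mu \,\Big\| \sqrt{g_1^2 + g_2^2} \,\Big\|_{L^p},$$

and the residual-free variant referred to above enforces $f = u + \operatorname{div} g$ exactly, dropping the λ-weighted fidelity term:

$$\inf_{u, g} \int |\nabla u| + \mu \,\big\| \, |g| \, \big\|_{L^p} \quad \text{s.t.} \quad f = u + \operatorname{div} g.$$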

3 Numerical results

  • Similar artifacts can also be found in the VO model's results in Figures 2(h)-(j), but the difference is that the VO model generated u's with a block-like structure and thus v's with more complicated patterns.
  • In Figure 2(h), most of the signal in the second and third sections was extracted from u, leaving very little signal near the boundaries of these signal parts.
  • In short, the VO model performed like an approximation of Meyer's model, but with certain features closer to those of the TV-L^1 model.

Example 2:

  • This fingerprint has slightly inhomogeneous brightness because the background near the center of the finger is whiter than the rest.
  • The authors believe that such inhomogeneity does not help the recognition and comparison of fingerprints and should therefore be corrected.
  • The authors observe in Figures 4(a) and (b) that the cartoon parts are close to each other, but slightly different from the cartoon in Figure 4(c).
  • The VO and TV-L^1 models gave more satisfactory results than Meyer's model.
  • Compared to the parameters used in the three models for decomposing noiseless images in Example 3, the parameters used in the Meyer and VO models in this set of tests were changed due to the increase in the G-norm of the texture/noise part v that resulted from adding noise.

4 Conclusion

  • The authors have computationally studied three total variation based models with discrete inputs: the Meyer, VO, and TV-L^1 models.
  • The authors tested these models on a variety of 1D signals and 2D images to reveal their differences in decomposing inputs into their cartoon and oscillating/small-scale/texture parts.
  • The Meyer model tends to capture the pattern of the oscillations in the input, which makes it well-suited to applications such as fingerprint image processing.
  • The TV-L^1 model, on the other hand, decomposes the input into two parts according to the geometric scales of its components, independent of signal intensity: one part contains the large-scale components and the other the small-scale ones (a standard formulation is recalled after this list).
  • These results agree with those in [9], which compares the ROF, Meyer, and TV-L^1 models.
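The TV-L^1 formulation referred to in this list is, in its standard form,

$$\min_u \int |\nabla u| + \lambda \, \|u - f\|_{L^1},$$

i.e., the ROF model with the squared $L^2$ fidelity replaced by an $L^1$ fidelity term; it is this change that makes the decomposition depend on geometric scale rather than on signal intensity.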


Citations
Journal ArticleDOI
TL;DR: The authors propose simple and extremely efficient methods, based on Bregman iterative regularization, for solving the basis pursuit problem used in compressed sensing; the methods give a very accurate solution after solving only a small number of instances of an unconstrained subproblem.
Abstract: We propose simple and extremely efficient methods for solving the basis pursuit problem $\min\{\|u\|_1 : Au = f, u\in\mathbb{R}^n\},$ which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem $\min_{u\in\mathbb{R}^n} \mu\|u\|_1+\frac{1}{2}\|Au-f^k\|_2^2$ for given matrix $A$ and vector $f^k$. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving $A$ and $A^\top$ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
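To make the outer/inner structure described in the abstract concrete, here is a minimal numerical sketch of Bregman iterations for basis pursuit. The "add back the residual" update $f^{k+1} = f^k + (f - Au^k)$ follows the abstract; the inner ISTA solver, the parameter values, and the synthetic test instance are illustrative stand-ins for the fast fixed-point continuation solver the authors actually use.

    import numpy as np

    def ista(A, b, mu, u0, n_iter=300):
        """Inner solver for min_u mu*||u||_1 + 0.5*||A u - b||_2^2.
        ISTA is used here only as a simple stand-in for the paper's
        fixed-point continuation solver."""
        u = u0.copy()
        t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/||A||_2^2
        for _ in range(n_iter):
            g = u - t * A.T @ (A @ u - b)            # gradient step on the smooth term
            u = np.sign(g) * np.maximum(np.abs(g) - t * mu, 0.0)   # soft-thresholding
        return u

    def bregman_basis_pursuit(A, f, mu, n_outer=6):
        """Outer Bregman loop: solve a small number of unconstrained
        subproblems, adding the residual back to the right-hand side."""
        u = np.zeros(A.shape[1])
        b = np.zeros(A.shape[0])
        for _ in range(n_outer):
            b += f - A @ u                           # f^{k+1} = f^k + (f - A u^k)
            u = ista(A, b, mu, u)                    # u^{k+1} solves the subproblem
        return u

    # Tiny synthetic compressed-sensing instance (illustrative only).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    u_true = np.zeros(100)
    u_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
    f = A @ u_true
    u_rec = bregman_basis_pursuit(A, f, mu=1.0)
    print("recovery error:", np.linalg.norm(u_rec - u_true))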

1,510 citations


Cites background from "A comparison of three total variati..."

  • ...It is proved that the recovery is perfect, i.e., the solution $u_{\mathrm{opt}} = \bar{u}$, for any $\bar{u}$ whenever k, m, n, and A satisfy certain conditions (e.g., see [13, 30, 37, 42, 78, 95, 96])....

    [...]

Journal ArticleDOI
TL;DR: In this letter, an enhanced pixel-domain JND model with a new algorithm for CM estimation is proposed; the proposed model shows advantages brought by better EM and TM estimation.
Abstract: In just noticeable difference (JND) models, evaluation of contrast masking (CM) is a crucial step. More specifically, CM due to edge masking (EM) and texture masking (TM) needs to be distinguished due to the entropy masking property of the human visual system. However, TM is not estimated accurately in the existing JND models since they fail to distinguish TM from EM. In this letter, we propose an enhanced pixel domain JND model with a new algorithm for CM estimation. In our model, total-variation based image decomposition is used to decompose an image into structural image (i.e., cartoon like, piecewise smooth regions with sharp edges) and textural image for estimation of EM and TM, respectively. Compared with the existing models, the proposed one shows its advantages brought by the better EM and TM estimation. It has been also applied to noise shaping and visual distortion gauge, and favorable results are demonstrated by experiments on different images.

218 citations


Cites background or methods from "A comparison of three total variati..."

  • ...In [9], different TV-based image decomposition models are considered and the model of minimizing TV with an L1-norm fidelity term is shown to achieve better results; we adopt this (TV-L1) model in our work for image decomposition, and then (1) becomes as follows:...

    [...]

  • ...2 to 2 [8], [9] for most natural images....

    [...]

Journal ArticleDOI
TL;DR: New fast algorithms are presented to minimize total variation and, more generally, $l^1$-norms under a general convex constraint; the algorithms are based on a recent advance in convex optimization proposed by Yurii Nesterov.
Abstract: This paper presents new fast algorithms to minimize total variation and more generally $l^1$-norms under a general convex constraint. Such problems are standards of image processing. The algorithms are based on a recent advance in convex optimization proposed by Yurii Nesterov. Depending on the regularity of the data fidelity term, we solve either a primal problem or a dual problem. First we show that standard first-order schemes allow one to get solutions of precision $\epsilon$ in $O(\frac{1}{\epsilon^2})$ iterations at worst. We propose a scheme that allows one to obtain a solution of precision $\epsilon$ in $O(\frac{1}{\epsilon})$ iterations for a general convex constraint. For a strongly convex constraint, we solve a dual problem with a scheme that requires $O(\frac{1}{\sqrt{\epsilon}})$ iterations to get a solution of precision $\epsilon$. Finally we perform some numerical experiments which confirm the theoretical results on various problems of image processing.

216 citations

Journal ArticleDOI
TL;DR: This paper converts the linear model, which reduces to a low-pass/high-pass filter pair, into a nonlinear filter pair involving the total variation, which retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model.
Abstract: Can images be decomposed into the sum of a geometric part and a textural part? In a theoretical breakthrough, [Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Providence, RI: American Mathematical Society, 2001] proposed variational models that force the geometric part into the space of functions with bounded variation, and the textural part into a space of oscillatory distributions. Meyer's models are simple minimization problems extending the famous total variation model. However, their numerical solution has proved challenging. It is the object of a literature rich in variants and numerical attempts. This paper starts with the linear model, which reduces to a low-pass/high-pass filter pair. A simple conversion of the linear filter pair into a nonlinear filter pair involving the total variation is introduced. This new-proposed nonlinear filter pair retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model. It depends upon only one transparent parameter: the texture scale, measured in pixel mesh. Comparative experiments show a better and faster separation of cartoon from texture. One application is illustrated: edge detection.
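For intuition, the linear baseline mentioned in the abstract can be sketched as below: cartoon = low-pass output, texture = high-pass residual. Only the linear pair is shown; the paper's contribution is the nonlinear, total-variation-based replacement for it, which is not reproduced here. The Gaussian kernel and the parameter sigma are illustrative choices, not the paper's filter.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def linear_cartoon_texture(f, sigma):
        """Linear low-pass/high-pass pair: cartoon = low-pass(f),
        texture = f - cartoon.  sigma plays the role of the texture scale."""
        cartoon = gaussian_filter(f, sigma)   # low-pass (geometric) component
        texture = f - cartoon                 # high-pass (oscillatory) component
        return cartoon, texture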

203 citations

Journal ArticleDOI
TL;DR: It is shown that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry sub-problems, and that the TV-L1 model is able to separate image features according to their scales.
Abstract: This paper studies the total variation regularization with an $L^1$ fidelity term (TV‐$L^1$) model for decomposing an image into features of different scales. We first show that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry subproblems. Using this result we show that the TV‐$L^1$ model is able to separate image features according to their scales, where the scale is analytically defined by the G‐value. A number of other properties including the geometric and morphological invariance of the TV‐$L^1$ model are also proved and their applications discussed.

109 citations


Cites methods from "A comparison of three total variati..."

  • ...Since the second-order cone programming (SOCP) approach [27, 45] has proven to give very accurate solutions for solving TV-based image models, we formulated the TV-L1 model (1.1) and the G-value formula (5.1) as SOCPs and solved them using the commercial optimization package Mosek [33]....

    [...]

  • ...decomposition can also be used to filter 1D signals [3], to remove impulsive (salt-and-pepper) noise [35], to extract textures from natural images [45], to remove varying illumination in face images for face recognition [22, 21], to decompose 2D/3D images for multiscale MR image registration [20], to assess damage from satellite imagery [19], and to remove inhomogeneous background from cDNA microarray and digital microscopic images [44]....

    [...]

References
Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.

15,225 citations


"A comparison of three total variati..." refers methods in this paper

  • ...Moreover, in [19] it is shown that each interior-point iteration takes $O(n^3)$ time and $O(n^2 \log n)$ bytes for solving an SOCP formulation of the Rudin–Osher–Fatemi model [33]....

    [...]

  • ...In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images....

    [...]

BookDOI
03 Jan 1989

2,132 citations

Journal ArticleDOI
TL;DR: SOCP formulations are given for several examples, including the convex quadratically constrained quadratic programming (QCQP) problem and problems involving fractional quadratic functions; many of the problems presented in the survey paper of Vandenberghe and Boyd as examples of SDPs can in fact be formulated as SOCPs and should be solved as such.
Abstract: Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order (Lorentz) cones. Linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can all be formulated as SOCP problems, as can many other problems that do not fall into these three categories. These latter problems model applications from a broad range of fields from engineering, control and finance to robust optimization and combinatorial optimization. On the other hand semidefinite programming (SDP), that is, the optimization problem over the intersection of an affine set and the cone of positive semidefinite matrices, includes SOCP as a special case. Therefore, SOCP falls between linear (LP) and quadratic (QP) programming and SDP. Like LP, QP and SDP problems, SOCP problems can be solved in polynomial time by interior point methods. The computational effort per iteration required by these methods to solve SOCP problems is greater than that required to solve LP and QP problems but less than that required to solve SDPs of similar size and structure. Because the set of feasible solutions for an SOCP problem is not polyhedral as it is for LP and QP problems, it is not readily apparent how to develop a simplex or simplex-like method for SOCP. While SOCP problems can be solved as SDP problems, doing so is not advisable both on numerical grounds and computational complexity concerns. For instance, many of the problems presented in the survey paper of Vandenberghe and Boyd [VB96] as examples of SDPs can in fact be formulated as SOCPs and should be solved as such. In §2, 3 below we give SOCP formulations for four of these examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadratic functions, …
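As an illustration of the kind of reformulation the abstract refers to (a standard identity, not necessarily the exact derivation used in the paper), a convex quadratic constraint can be written as a single second-order cone constraint:

$$\|Fx\|_2^2 \le g^T x + h \iff \left\| \begin{pmatrix} 2Fx \\ g^T x + h - 1 \end{pmatrix} \right\|_2 \le g^T x + h + 1.$$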

1,535 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...When $1 < p < \infty$, we use second-order cone formulations presented in [1]....

    [...]

  • ...With these definitions an SOCP can be written in the following form [1]:...

    [...]

MonographDOI
01 Sep 2001
TL;DR: It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities.
Abstract: From the Publisher: "Image compression, the Navier-Stokes equations, and detection of gravitational waves are three seemingly unrelated scientific problems that, remarkably, can be studied from one perspective. The notion that unifies the three problems is that of "oscillating patterns", which are present in many natural images, help to explain nonlinear equations, and are pivotal in studying chirps and frequency-modulated signals." "In the book, the author describes both what the oscillating patterns are and the mathematics necessary for their analysis. It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities." This book can be used either as a textbook in studying applications of wavelets to image processing or as a supplementary resource for studying nonlinear evolution equations or frequency-modulated signals. Most of the material in the book did not appear previously in monograph literature.

1,147 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...G is the dual of the closed subspace $\overline{BV}$ of $BV$, where $\overline{BV} := \{u \in BV : |\nabla u| \in L^1\}$ [27]....

    [...]

  • ...Meyer’s model: To extract the cartoon u in the space BV and the texture and/or noise v as an oscillating function, Meyer [27] proposed the following model:...

    [...]
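The excerpt above is truncated where the model would appear. For reference, Meyer's model as usually stated (up to notation) is

$$\inf_{u \in BV} \int |\nabla u| + \lambda \, \|f - u\|_G,$$

where $\|\cdot\|_G$ is the G-norm of the oscillating component $v = f - u$ discussed in the surrounding excerpts.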

  • ...Among the recent total variation-based cartoon-texture decomposition models, Meyer [27] and Haddad and Meyer [20] proposed using the G-norm defined above, Vese and Osher [35] approximated the G-norm by the div($L^p$)-norm, Osher, Sole and Vese [32] proposed using the $H^{-1}$-norm, Lieu and Vese [26] proposed using the more general $H^{-s}$-norm, and Le and Vese [24] and Garnett, Le, Meyer and Vese [18] proposed using the homogeneous Besov space $\dot{B}^s_{p,q}$, $-2 < s < 0$, $1 \le p, q \le \infty$, extending Meyer's $\dot{B}^{-1}_{\infty,\infty}$, to model the oscillation component of an image....

    [...]

  • ...This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer [27], Vese–Osher (VO) [35], and TV-L^1 [12,38,2–4,29–31] models....

    [...]

  • ...Meyer gave a few examples in [27], including the one shown at the end of next paragraph, illustrating the appropriateness of modeling oscillating patterns by functions in G....

    [...]