Journal ArticleDOI

A comparison of three total variation based texture extraction models

TL;DR: This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer, Vese-Osher (VO), and TV-L^1 [12,38,2-4,29-31] models.
About: This article was published in the Journal of Visual Communication and Image Representation on 2007-06-01 and is currently open access. It has received 68 citations to date. The article focuses on the topic: Image texture.

Summary (2 min read)

1 Introduction

  • Let f be an observed image that contains texture and/or noise.
  • Texture is characterized as repeated and meaningful structure of small patterns.
  • Noise is characterized as uncorrelated random patterns.
  • The rest of an image, which is called the cartoon, contains object hues and sharp edges.
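In symbols (a notational sketch of the standard setup, not a formula copied from the paper), these characterizations amount to the additive decomposition

    $f = u + v,$

where $u \in BV$ is the cartoon part and $v$ collects the texture and/or noise.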

1.1 The spaces BV and G

  • In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images.
  • The ROF model is the precursor to a large number of image processing models having a similar form.
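For reference, the ROF model just mentioned and the G-norm that gives this subsection its name are usually written as follows (standard forms from the literature; constants and normalizations may differ from the paper's own equations):

    $\min_{u} \; \int_\Omega |\nabla u| \, dx + \lambda \int_\Omega (f - u)^2 \, dx$   (ROF)

    $\|v\|_G = \inf \Big\{ \big\| \sqrt{g_1^2 + g_2^2} \big\|_{L^\infty} \;:\; v = \operatorname{div} g, \; g = (g_1, g_2) \Big\}$   (Meyer's G-norm)

Oscillating textures and noise have small G-norm, which is what makes G a natural space for modeling the v component.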

1.3 Second-order cone programming

  • Since a one-dimensional second-order cone corresponds to a semi-infinite ray, SOCPs can accommodate nonnegative variables.
  • In fact, if all cones are one-dimensional, then the above SOCP is just a standard-form linear program.
  • As is the case for linear programs, SOCPs can be solved in polynomial time by interior point methods.
  • This is the approach that the authors take to solve the TV-based cartoon-texture decomposition models in this paper.
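The "above SOCP" these bullets refer to is the generic conic standard form; restated in common notation (a hedged sketch, not necessarily the paper's own symbols):

    $\min_{x} \; c^{\top} x \quad \text{subject to} \quad A x = b, \quad x \in \mathcal{K}_1 \times \cdots \times \mathcal{K}_r,$

where each $\mathcal{K}_i = \{ (x_0, \bar{x}) \in \mathbb{R}^{n_i} : x_0 \ge \|\bar{x}\|_2 \}$ is a second-order (Lorentz) cone. A one-dimensional cone reduces to $\{ x_0 \in \mathbb{R} : x_0 \ge 0 \}$, a semi-infinite ray, which is why nonnegative variables fit this framework and why the problem becomes a standard-form linear program when every cone is one-dimensional.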

2.2.3 The Vese-Osher (VO) model

  • This is equivalent to solving the residual-free version (45) below.
  • The authors chose to solve the latter in their numerical tests because using a large λ in (44) makes its SOCP difficult to solve accurately.
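For context, the VO model discussed here is usually stated as follows (the standard formulation of Vese and Osher [35]; the paper's (44) and (45) correspond to this form and its residual-free variant, though the exact constants may differ):

    $\min_{u,\, g_1,\, g_2} \; \int_\Omega |\nabla u| \, dx \;+\; \lambda \int_\Omega \big( f - u - \operatorname{div} g \big)^2 dx \;+\; \mu \, \Big\| \sqrt{g_1^2 + g_2^2} \Big\|_{L^p}, \qquad g = (g_1, g_2).$

The residual-free variant drops the λ-term and instead enforces $f = u + \operatorname{div} g$ exactly, which avoids the numerical difficulty of driving the residual to zero with a very large λ.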

3 Numerical results

  • Similar artifacts can also be found in the results of the VO model in Figures 2(h)-(j), but the differences are that the VO model generated u's with a block-like structure and thus v's with more complicated patterns.
  • In Figure 2(h), most of the signal in the second and third sections was extracted from u, leaving very little signal near the boundary of these signal parts.
  • In short, the VO model behaved like an approximation of Meyer's model but with certain features closer to those of the TV-L^1 model.

Example 2:

  • This fingerprint has slightly inhomogeneous brightness because the background near the center of the finger is whiter than the rest.
  • The authors believe that inhomogeneity like this does not help the recognition and comparison of fingerprints and is better corrected.
  • The authors observe in Figures 4(a) and (b) that the cartoon parts are close to each other, but slightly different from the cartoon in Figure 4(c).
  • The VO and the TV-L^1 models gave more satisfactory results than Meyer's model.
  • Compared with the parameters used in the three models for decomposing the noiseless images in Example 3, the parameters of the Meyer and VO models were changed in this set of tests because adding noise increased the G-norm of the texture/noise part v.

4 Conclusion

  • The authors have computationally studied three total variation based models with discrete inputs: the Meyer, VO, and TV-L^1 models.
  • The authors tested these models using a variety of 1D signals and 2D images to reveal their differences in decomposing inputs into their cartoon and oscillating/small-scale/texture parts.
  • The Meyer model tends to capture the pattern of the oscillations in the input, which makes it well-suited to applications such as fingerprint image processing.
  • On the other hand, the TV-L^1 model decomposes the input into two parts according to the geometric scales of the components in the input, independent of the signal intensities, one part containing large-scale components and the other containing small-scale ones.
  • These results agree with those in [9], which compares the ROF, Meyer, and TV-L^1 models.
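Schematically, and consistent with the descriptions above (a summary sketch rather than the paper's exact equations), the three models minimize the same total variation term with different fidelity measures on the residual $v = f - u$:

    Meyer:   $\min_{u} \int_\Omega |\nabla u| \, dx + \lambda \, \| f - u \|_G$
    VO:      the same, with $\| \cdot \|_G$ approximated by writing $v = \operatorname{div} g$ and penalizing $\| \sqrt{g_1^2 + g_2^2} \|_{L^p}$
    TV-L^1:  $\min_{u} \int_\Omega |\nabla u| \, dx + \lambda \, \| f - u \|_{L^1}$

The L^1 fidelity is what makes the TV-L^1 decomposition depend on geometric scale rather than on signal intensity, as noted above.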


Citations
14 Dec 2013
TL;DR: The solutions of TVL1 are described by means of elementary morphological operations, and a strong constraint on the structure part and the texture part of the data is exhibited which is necessary to obtain exact decompositions using the TV-G model.
Abstract: In this paper, we analyze the fine properties of the minimizers of the TVL1 and the TV-G models used in image processing. We describe the solutions of TVL1 by means of elementary morphological operations, and we exhibit a strong constraint on the structure part and the texture part of the data which is necessary to obtain exact decompositions using the TV-G model.

Cites background from "A comparison of three total variati..."

  • ...This approach has had a certain success in practical applications (see [18, 73, 42])....


Journal ArticleDOI
TL;DR: In this paper, a multi-region image segmentation model based on low-rank prior decomposition is proposed, where a low-rank prior is used to characterize the texture component and the total variation norm to characterize the structure component.
Abstract: Natural images usually contain a structure component and a texture component. Existing image segmentation models based on piecewise-smooth assumptions cannot handle such texture-rich natural images well. For this reason, in this paper, a multi-region image segmentation model based on low-rank prior decomposition is proposed. We use the low-rank prior to characterize the texture component, and the total variation norm to characterize the piecewise-smooth structure component, so that the proposed method can perform image decomposition and image segmentation tasks simultaneously. The decomposed structure image can then be used for image segmentation, which improves the accuracy of segmentation. To solve the new model, an alternating direction method of multipliers algorithm is designed. The experimental results show that, compared with related classical models, the new model can significantly improve the subjective visual quality of image segmentation, and the mean values of the precision rate, F1-measure, and Jaccard similarity coefficient for the new model on the test images are at least 3.29%, 1.74%, and 3.13% higher, respectively.
Journal ArticleDOI
TL;DR: In this article, the authors employ the fast additive half-quadratic (AHQ) iterative minimization algorithm for solving the ${l}_p - {l}_q$ optimization model with $0 < p, q \le 1$.
Abstract: In many real-world applications, it is important to remove insignificant image details while preserving the significant structures. This is the so-called image smoothing problem. In this paper, the authors investigate the image smoothing problem using the ${l}_p - {l}_q$ optimization model with $0 < p, q \le 1$. The authors employ the fast additive half-quadratic (AHQ) iterative minimization algorithm for solving this model and discuss the convergence of the AHQ iteration. Experimental results and comparisons are provided to show the efficiency and flexibility of the proposed method in terms of both qualitative and quantitative evaluations.
Posted Content
TL;DR: In this study, the cartoon part of an image is separated by reconstructing it from pixels of multi-scale total-variation-filtered versions of the original image, which is to be decomposed into cartoon and texture parts.
Abstract: Separating an image into cartoon and texture components is useful in image processing applications such as image compression, image segmentation, and image inpainting. Yves Meyer's influential cartoon-texture decomposition model involves deriving an energy functional by choosing appropriate spaces and functionals. Minimizers of the derived energy functional are the cartoon and texture components of an image. In this study, the cartoon part of an image is separated by reconstructing it from pixels of multi-scale total-variation-filtered versions of the original image, which is to be decomposed into cartoon and texture parts. An information-theoretic, pixel-by-pixel selection criterion is employed to choose the contributing pixels and their scales.

Cites background from "A comparison of three total variati..."

  • ...[3] gives a detailed comparison of these function space approaches and their performance....


Proceedings ArticleDOI
01 Dec 2008
TL;DR: The Vese-Osher (VO) model and the Mumford-Shah-G (MSG) model, based on G-space and PDEs, are introduced for real texture image extraction, and the experimental results show the contrasting effects of these two models for cell image decomposition.
Abstract: Based on G-space and partial differential equations, a novel method is proposed for cell image decomposition in this paper. The Vese-Osher (VO) model and the Mumford-Shah-G (MSG) model, based on G-space and PDEs, are introduced for real texture image extraction. An image f can be decomposed into a cartoon part u and a noise/texture part v in both models. The experimental results show the contrasting effects of these two models for cell image decomposition. As basic research toward cell image recognition, this image decomposition approach is shown by the experimental results to be usable and effective.
References
Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.

15,225 citations


"A comparison of three total variati..." refers methods in this paper

  • ...Moreover, in [19] it is shown that each interior-point iteration takes O(n^3) time and O(n^2 log n) bytes for solving an SOCP formulation of the Rudin–Osher–Fatemi model [33]....


  • ...In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images....


BookDOI
03 Jan 1989

2,132 citations

Journal ArticleDOI
TL;DR: SOCP formulations are given for four examples, including the convex quadratically constrained quadratic programming (QCQP) problem and problems involving fractional quadratic functions; many of the problems presented in the survey paper of Vandenberghe and Boyd as examples of SDPs can in fact be formulated as SOCPs and should be solved as such.
Abstract: Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order (Lorentz) cones. Linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can all be formulated as SOCP problems, as can many other problems that do not fall into these three categories. These latter problems model applications from a broad range of fields from engineering, control and finance to robust optimization and combinatorial optimization. On the other hand semidefinite programming (SDP)—that is the optimization problem over the intersection of an affine set and the cone of positive semidefinite matrices—includes SOCP as a special case. Therefore, SOCP falls between linear (LP) and quadratic (QP) programming and SDP. Like LP, QP and SDP problems, SOCP problems can be solved in polynomial time by interior point methods. The computational effort per iteration required by these methods to solve SOCP problems is greater than that required to solve LP and QP problems but less than that required to solve SDPs of similar size and structure. Because the set of feasible solutions for an SOCP problem is not polyhedral as it is for LP and QP problems, it is not readily apparent how to develop a simplex or simplex-like method for SOCP. While SOCP problems can be solved as SDP problems, doing so is not advisable both on numerical grounds and computational complexity concerns. For instance, many of the problems presented in the survey paper of Vandenberghe and Boyd [VB96] as examples of SDPs can in fact be formulated as SOCPs and should be solved as such. In §2, 3 below we give SOCP formulations for four of these examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadratic functions, ...

1,535 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...When 1 < p < ∞, we use second-order cone formulations presented in [1]....


  • ...With these definitions an SOCP can be written in the following form [1]:...

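As a small, hedged illustration of how such a TV model can be handed to a conic solver in practice, the sketch below sets up a 1-D ROF-type decomposition with the off-the-shelf cvxpy modeling package (an assumption for illustration only, not the SOCP solver or formulation used in the paper); cvxpy reformulates the absolute-value and quadratic terms into cone constraints of the kind surveyed in [1] before calling one of its solvers.

    import numpy as np
    import cvxpy as cp

    # Hypothetical test signal: a blocky cartoon plus small-scale noise.
    n = 200
    rng = np.random.default_rng(0)
    f = np.repeat([0.0, 1.0, 0.2, 0.8], n // 4) + 0.05 * rng.standard_normal(n)

    # ROF-type model: minimize TV(u) + lam * ||f - u||_2^2.
    lam = 20.0                       # assumed fidelity weight
    u = cp.Variable(n)
    tv = cp.sum(cp.abs(cp.diff(u)))  # discrete total variation of u
    prob = cp.Problem(cp.Minimize(tv + lam * cp.sum_squares(f - u)))
    prob.solve()

    cartoon = u.value                # piecewise-constant part u
    texture = f - cartoon            # oscillatory residual v = f - u

The recovered u is close to piecewise constant while v carries the small-scale oscillations, mirroring the kind of 1-D decompositions reported in Section 3.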

MonographDOI
01 Sep 2001
TL;DR: It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities.
Abstract: From the Publisher: "Image compression, the Navier-Stokes equations, and detection of gravitational waves are three seemingly unrelated scientific problems that, remarkably, can be studied from one perspective. The notion that unifies the three problems is that of "oscillating patterns", which are present in many natural images, help to explain nonlinear equations, and are pivotal in studying chirps and frequency-modulated signals." "In the book, the author describes both what the oscillating patterns are and the mathematics necessary for their analysis. It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities." This book can be used either as a textbook in studying applications of wavelets to image processing or as a supplementary resource for studying nonlinear evolution equations or frequency-modulated signals. Most of the material in the book did not appear previously in monograph literature.

1,147 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...G is the dual of the closed subspace $\overline{BV}$ of $BV$, where $\overline{BV} := \{ u \in BV : |\nabla u| \in L^1 \}$ [27]....


  • ...Meyer’s model: To extract cartoon u in the space BV and texture and/or noise v as an oscillating function, Meyer [27] proposed the following model:...


  • ...Among the recent total variation-based cartoon-texture decomposition models, Meyer [27] and Haddad and Meyer [20] proposed using the G-norm defined above, Vese and Osher [35] approximated the G-norm by the div(L^p)-norm, Osher, Sole and Vese [32] proposed using the H^{-1}-norm, Lieu and Vese [26] proposed using the more general H^{-s}-norm, and Le and Vese [24] and Garnett, Le, Meyer and Vese [18] proposed using the homogeneous Besov space $\dot{B}^{s}_{p,q}$, $-2 < s < 0$, $1 \le p, q \le \infty$, extending Meyer's $\dot{B}^{-1}_{\infty,\infty}$, to model the oscillation component of an image....


  • ...This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer [27], Vese–Osher (VO) [35], and TV-L^1 [12,38,2–4,29–31] models....


  • ...Meyer gave a few examples in [27], including the one shown at the end of next paragraph, illustrating the appropriateness of modeling oscillating patterns by functions in G....
