Journal ArticleDOI

A comparison of three total variation based texture extraction models

TL;DR: This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer, Vese-Osher (VO), and TV-L^1 [12,38,2-4,29-31] models.
About: This article was published in the Journal of Visual Communication and Image Representation on 2007-06-01 and is currently open access. It has received 68 citations to date. The article focuses on the topic of image texture.

Summary (2 min read)

1 Introduction

  • Let f be an observed image that contains texture and/or noise.
  • Texture is characterized as repeated and meaningful structure of small patterns.
  • Noise is characterized as uncorrelated random patterns.
  • The rest of an image, which is called cartoon, contains object hues and sharp edges.
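The decomposition f = u + v described above can be made concrete with a small synthetic example. The sketch below (illustrative values only; the signal length, pattern period, and noise level are not taken from the paper) builds a 1D signal from a piecewise-constant cartoon, a repeated small-scale texture, and uncorrelated noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.arange(n)

# Cartoon: piecewise-constant hues with a sharp edge.
cartoon = np.where(x < n // 2, 1.0, 3.0)

# Texture: repeated, meaningful small-scale oscillating pattern.
texture = 0.3 * np.sin(2 * np.pi * x / 8)

# Noise: uncorrelated random pattern.
noise = 0.05 * rng.standard_normal(n)

# Observed signal: f = u + v, where v holds the texture and/or noise.
f = cartoon + texture + noise
```

The three models discussed below differ precisely in how they split such an f back into a cartoon part u and an oscillating part v.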

1.1 The spaces BV and G

  • In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images.
  • The ROF model is the precursor to a large number of image processing models having a similar form.
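The ROF model minimizes the total variation of u plus a quadratic fidelity term, min_u TV(u) + (λ/2)||u − f||². A minimal 1D sketch follows; it uses plain gradient descent on a smoothed TV term, so the smoothing parameter eps, the step size, and λ are illustrative choices of ours, not the paper's SOCP approach:

```python
import numpy as np

def rof_denoise_1d(f, lam=10.0, eps=1e-3, step=0.005, iters=500):
    """Gradient descent on the smoothed ROF energy
    E(u) = sum_i sqrt((u[i+1]-u[i])^2 + eps) + (lam/2) * ||u - f||^2."""
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u)                  # forward differences u[i+1]-u[i]
        w = d / np.sqrt(d**2 + eps)     # derivative of the smoothed TV term
        # Gradient of the TV term: w[j-1] - w[j] at interior points,
        # with one-sided contributions at the two boundaries.
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u -= step * (grad_tv + lam * (u - f))
    return u

def rof_energy(u, f, lam=10.0, eps=1e-3):
    return np.sum(np.sqrt(np.diff(u)**2 + eps)) + 0.5 * lam * np.sum((u - f)**2)

# Demo: denoise a noisy step signal.
rng = np.random.default_rng(1)
clean = np.where(np.arange(128) < 64, 0.0, 1.0)
f = clean + 0.1 * rng.standard_normal(128)
u = rof_denoise_1d(f)
```

For a small enough step, each iteration decreases the smoothed energy, so the result has lower ROF energy than the noisy input.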

1.3 Second-order cone programming

  • Since a one-dimensional second-order cone corresponds to a semi-infinite ray, SOCPs can accommodate nonnegative variables.
  • In fact, if all cones are one-dimensional, then the above SOCP is just a standard-form linear program.
  • As is the case for linear programs, SOCPs can be solved in polynomial time by interior point methods.
  • This is the approach that the authors take to solve the TV-based cartoon-texture decomposition models in this paper.
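The reduction of one-dimensional cones to nonnegativity constraints can be checked directly. A small sketch (the helper name is ours, not from the paper):

```python
import numpy as np

def in_second_order_cone(x):
    """Membership in the second-order (Lorentz) cone:
    x[0] >= ||x[1:]||_2. For a 1-D vector x[1:] is empty and its
    norm is 0, so the condition is just x[0] >= 0."""
    x = np.asarray(x, dtype=float)
    return x[0] >= np.linalg.norm(x[1:])

# One-dimensional cone: a semi-infinite ray, i.e. a nonnegativity constraint.
assert in_second_order_cone([2.0])
assert not in_second_order_cone([-1.0])

# Three-dimensional cone: the "ice-cream" cone in R^3.
assert in_second_order_cone([5.0, 3.0, 4.0])      # 5 >= sqrt(9 + 16) = 5
assert not in_second_order_cone([4.9, 3.0, 4.0])
```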

2.2.3 The Vese-Osher (VO) model

  • This is equivalent to solving the residual-free version (45) below.
  • The authors chose to solve the latter in their numerical tests because using a large λ in (44) makes it difficult to numerically solve its SOCP accurately.

3 Numerical results

  • Similar artifacts can also be found in the results in Figures 2(h)-(j) of the VO model, but the differences are that the VO model generated u's that have a block-like structure and thus v's with more complicated patterns.
  • In Figure 2(h), most of the signal in the second and third sections was extracted from u, leaving very little signal near the boundary of these signal parts.
  • In short, the VO model performed like an approximation of Meyer's model, but with certain features closer to those of the TV-L^1 model.

Example 2:

  • This fingerprint has slightly inhomogeneous brightness because the background near the center of the finger is whiter than the rest.
  • The authors believe that such inhomogeneity does not help the recognition and comparison of fingerprints, so it is better corrected.
  • The authors can observe in Figures 4(a) and (b) that their cartoon parts are close to each other, but slightly different from the cartoon in Figure 4(c).
  • The VO and TV-L^1 models gave more satisfactory results than Meyer's model.
  • Compared to the parameters used in the three models for decomposing noiseless images in Example 3, the parameters used in the Meyer and VO models in this set of tests were changed because adding noise increased the G-norm of the texture/noise part v.

4 Conclusion

  • The authors have computationally studied three total variation based models with discrete inputs: the Meyer, VO, and TV-L^1 models.
  • The authors tested these models on a variety of 1D signals and 2D images to reveal their differences in decomposing inputs into their cartoon and oscillating/small-scale/texture parts.
  • The Meyer model tends to capture the pattern of the oscillations in the input, which makes it well-suited to applications such as fingerprint image processing.
  • On the other hand, the TV-L^1 model decomposes the input into two parts according to the geometric scales of the components in the input, independent of the signal intensities: one part containing large-scale components and the other containing small-scale ones.
  • These results agree with those in [9], which compares the ROF, Meyer, and TV-L^1 models.
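The scale-driven, intensity-independent behavior of the TV-L^1 model can be illustrated with a toy 1D calculation. For a single box of width w and height c, keeping the box costs TV = 2c while removing it costs λcw in L^1 fidelity, so which candidate wins depends only on w versus 2/λ, never on c. The sketch below compares just these two candidate solutions rather than running a full solver (function names and parameter values are illustrative):

```python
import numpy as np

def tv(u):
    """Discrete 1D total variation."""
    return np.sum(np.abs(np.diff(u)))

def tvl1_energy(u, f, lam):
    """TV-L^1 energy: TV(u) + lam * ||u - f||_1."""
    return tv(u) + lam * np.sum(np.abs(u - f))

def keeps_box(width, height, lam, n=100):
    """Compare the two natural candidates for a single box signal:
    u = f (keep the feature, pay its TV = 2*height) versus
    u = 0 (remove it, pay lam*height*width in fidelity).
    Returns True if keeping the box has strictly lower energy."""
    f = np.zeros(n)
    f[10:10 + width] = height
    return tvl1_energy(f, f, lam) < tvl1_energy(np.zeros(n), f, lam)

lam = 0.5  # scale threshold: boxes wider than 2/lam = 4 are kept
for height in (0.1, 1.0, 10.0):
    assert keeps_box(width=8, height=height, lam=lam)      # large scale: kept
    assert not keeps_box(width=2, height=height, lam=lam)  # small scale: removed
```

Both candidate energies scale linearly with the height c, so the keep/remove decision is the same for every contrast level, matching the scale-based (rather than intensity-based) decomposition described above.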


Citations
Book ChapterDOI
TL;DR: This chapter presents variational models to perform texture analysis, extraction, or both for image processing, focusing on second order decomposition models; the most classical first order models are also discussed.
Abstract: This chapter presents variational models to perform texture analysis, extraction, or both for image processing. It focuses on second order decomposition models. Variational decomposition models have been studied extensively during the past decades. The most famous one is the Rudin-Osher-Fatemi model. First, the most classical first order models are discussed. Then the chapter deals with second order ones: the mathematical framework, theoretical models, and numerical implementation, ending with two 3D applications. Finally, an appendix includes the mathematical tools that are used to perform this study and a second appendix provides Matlab© codes.

9 citations


Cites background from "A comparison of three total variati..."

  • ...It is assumed that the image to be recovered from the data u_d can be decomposed as f = u + v or f = u + v + w, where u, v and w are functions that characterize different parts of f (see Aujol et al. (2005), Osher et al. (2003), Yin et al. (2007) for example)....


Journal ArticleDOI
TL;DR: In this article, a structure-aware bilateral filter that incorporates structural information throughout texture smoothing is proposed. The straightforward application of the bilateral filter to texture smoothing is not viable, as certain modifications are required to exploit it as a precise tool for texture smoothing.
Abstract: The classical bilateral filter is designed to preserve the structure of the image by utilizing the range and spatial kernels. However, its straightforward application to texture smoothing is not viable, as certain modifications are required to exploit it as a precise tool for texture smoothing. It is worth noting that, with numerous rectifications, several methods have been developed over the last few decades that employ a bilateral filter as a precise tool for texture smoothing. Although these methods are precise in preserving significant structural information, a loss in the sharpness of the structures transpires simultaneously. Moreover, these methods smooth texture efficiently, but at the same time they also blur some prominent structures. In this paper, we design a novel structure-aware bilateral filter that incorporates structural information throughout texture smoothing. The filtering is executed on individual pixels from the scale map by employing the spatial kernel. The experimental section reveals the supremacy of the proposed method with respect to texture smoothing and structure preservation. We also demonstrate the proposed method's efficiency in several applications, namely edge detection, detail enhancement, and texture transfer.

9 citations

Journal ArticleDOI
TL;DR: A novel variational model for image decomposition and a new cartoon-texture dictionary learning algorithm guided by diffusion flow are presented; the method has better performance than existing algorithms in image decomposition and denoising.
Abstract: A novel variational model for image decomposition is proposed. Meanwhile a new cartoon-texture dictionary learning algorithm, which is guided by diffusion flow, is presented. Numerical experiments show that the proposed method has better performance than the existing algorithms in image decomposition and denoising.

8 citations

Book ChapterDOI
Koji Kashu, Yusuke Kameda, Masaki Narita, Atsushi Imiya, Tomoya Sakai
11 Jul 2010
TL;DR: A method for volumetric cardiac motion analysis using variational optical flow computation involving a prior with fractional-order differentiations, which yields the optical flow vector with optimal continuity at each point.
Abstract: We introduce a method for volumetric cardiac motion analysis using variational optical flow computation involving a prior with fractional-order differentiations. The order of differentiation of the prior controls the continuity class of the solution. Fractional differentiation is a typical tool for edge detection in images. Following its use in image analysis, we apply the theory of fractional differentiation to temporal image sequence analysis. Using fractional-order differentiations, we can estimate the orders of local continuity of optical flow vectors. Therefore, we can obtain the optical flow vector with optimal continuity at each point.

6 citations


Cites background from "A comparison of three total variati..."

  • ...Although TV regularization [12,13] accurately and stably computes an optical flow field and extracts moving segments from the background, the operation is nonlinear....


Journal ArticleDOI
TL;DR: In this work, a simple and effective method for image quality assessment based on image decomposition is proposed, which is more efficient and delivers higher prediction accuracy than previous approaches in the literature.
Abstract: Perceptual image quality assessment (IQA) adopts a computational model to assess image quality in a fashion consistent with the human visual system (HVS). From the view of the HVS, different image regions have different importance. Based on this fact, we propose a simple and effective method for image quality assessment based on image decomposition. In our method, we first divide an image into two components: an edge component and a texture component. To separate the edge and texture components, we use the TV flow-based nonlinear diffusion method rather than the classic TV regularization methods, for highly effective computing. Different from the existing content-based IQA methods, we apply different measures to different components to compute image quality. More specifically, the luminance and contrast similarity are computed in the texture component, while the structural similarity is computed in the edge component. After obtaining the local quality map, we use the texture component again as a weight function to derive a single quality score. Experimental results on five datasets show that, compared with previous approaches in the literature, the proposed method is more efficient and delivers higher prediction accuracy.

6 citations


Cites background from "A comparison of three total variati..."

  • ...The total variation of f_u is minimized to regularize u while keeping edges like object boundaries of f in f_u [26]....


References
Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.

15,225 citations


"A comparison of three total variati..." refers methods in this paper

  • ...Moreover, in [19] it is shown that each interior-point iteration takes O(n^3) time and O(n^2 log n) bytes for solving an SOCP formulation of the Rudin–Osher–Fatemi model [33]....


  • ...In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images....


BookDOI
03 Jan 1989

2,132 citations

Journal ArticleDOI
TL;DR: SOCP formulations are given for four examples, including the convex quadratically constrained quadratic programming (QCQP) problem and problems involving fractional quadratic functions; many of the problems presented in the survey paper of Vandenberghe and Boyd as examples of SDPs can in fact be formulated as SOCPs and should be solved as such.
Abstract: Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order (Lorentz) cones. Linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can all be formulated as SOCP problems, as can many other problems that do not fall into these three categories. These latter problems model applications from a broad range of fields from engineering, control and finance to robust optimization and combinatorial optimization. On the other hand, semidefinite programming (SDP), that is, the optimization problem over the intersection of an affine set and the cone of positive semidefinite matrices, includes SOCP as a special case. Therefore, SOCP falls between linear (LP) and quadratic (QP) programming and SDP. Like LP, QP and SDP problems, SOCP problems can be solved in polynomial time by interior point methods. The computational effort per iteration required by these methods to solve SOCP problems is greater than that required to solve LP and QP problems but less than that required to solve SDPs of similar size and structure. Because the set of feasible solutions for an SOCP problem is not polyhedral as it is for LP and QP problems, it is not readily apparent how to develop a simplex or simplex-like method for SOCP. While SOCP problems can be solved as SDP problems, doing so is not advisable both on numerical grounds and computational complexity concerns. For instance, many of the problems presented in the survey paper of Vandenberghe and Boyd [VB96] as examples of SDPs can in fact be formulated as SOCPs and should be solved as such. In §2, 3 below we give SOCP formulations for four of these examples: the convex quadratically constrained quadratic programming (QCQP) problem and problems involving fractional quadratic functions.

1,535 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...When 1 < p < ∞, we use second-order cone formulations presented in [1]....


  • ...With these definitions an SOCP can be written in the following form [1]:...


MonographDOI
01 Sep 2001
TL;DR: It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities.
Abstract: From the Publisher: "Image compression, the Navier-Stokes equations, and detection of gravitational waves are three seemingly unrelated scientific problems that, remarkably, can be studied from one perspective. The notion that unifies the three problems is that of "oscillating patterns", which are present in many natural images, help to explain nonlinear equations, and are pivotal in studying chirps and frequency-modulated signals." "In the book, the author describes both what the oscillating patterns are and the mathematics necessary for their analysis. It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincaré inequalities." This book can be used either as a textbook in studying applications of wavelets to image processing or as a supplementary resource for studying nonlinear evolution equations or frequency-modulated signals. Most of the material in the book did not appear previously in the monograph literature.

1,147 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...G is the dual of a closed subspace of BV, namely {u ∈ BV : |∇u| ∈ L^1} [27]....


  • ...Meyer's model: To extract cartoon u in the space BV and texture and/or noise v as an oscillating function, Meyer [27] proposed the following model:...


  • ...Among the recent total variation-based cartoon-texture decomposition models, Meyer [27] and Haddad and Meyer [20] proposed using the G-norm defined above, Vese and Osher [35] approximated the G-norm by the div(L^p)-norm, Osher, Sole and Vese [32] proposed using the H^{-1}-norm, Lieu and Vese [26] proposed using the more general H^{-s}-norm, and Le and Vese [24] and Garnett, Le, Meyer and Vese [18] proposed using the homogeneous Besov space Ḃ^s_{p,q}, -2 < s < 0, 1 ≤ p, q ≤ ∞, extending Meyer's Ḃ^{-1}_{∞,∞}, to model the oscillation component of an image....


  • ...This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer [27], Vese–Osher (VO) [35], and TV-L^1 [12,38,2–4,29–31] models....


  • ...Meyer gave a few examples in [27], including the one shown at the end of the next paragraph, illustrating the appropriateness of modeling oscillating patterns by functions in G....
