Journal ArticleDOI

A comparison of three total variation based texture extraction models

TL;DR: This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer, Vese-Osher (VO), and TV-L^1 [12,38,2-4,29-31] models.
About: This article was published in the Journal of Visual Communication and Image Representation on 2007-06-01 and is currently open access. It has received 68 citations to date. The article focuses on the topic: image texture.

Summary (2 min read)

1 Introduction

  • Let f be an observed image that contains texture and/or noise.
  • Texture is characterized as repeated and meaningful structure of small patterns.
  • Noise is characterized as uncorrelated random patterns.
  • The rest of an image, which is called cartoon, contains object hues and sharp edges.
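The three components above give the decomposition studied throughout the paper (notation standard in this literature, not taken verbatim from the summary):

```latex
f = u + v, \qquad u \in BV \ \text{(cartoon)}, \qquad v \ \text{(texture and/or noise)}
```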

1.1 The spaces BV and G

  • In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images.
  • The ROF model is the precursor to a large number of image processing models having a similar form.
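The ROF idea can be illustrated with a minimal 1D sketch (our own illustration, not code from the paper, which instead solves the models as SOCPs): we minimize a smoothed total-variation term plus an L^2 fidelity term by plain gradient descent. The function name, the smoothing constant `eps`, and all parameter values are illustrative assumptions.

```python
import numpy as np

def rof_denoise_1d(f, lam=5.0, eps=0.05, step=0.01, iters=3000):
    """Gradient descent on a smoothed 1D ROF objective:
    sum_i sqrt((u[i+1]-u[i])^2 + eps^2) + (lam/2) * ||u - f||^2.
    Illustrative sketch only; not the paper's SOCP approach."""
    u = f.astype(float).copy()
    for _ in range(iters):
        du = np.diff(u)                      # forward differences u[i+1] - u[i]
        w = du / np.sqrt(du**2 + eps**2)     # derivative of the smoothed |.|
        # gradient of the TV term with respect to each u[k]
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u -= step * (grad_tv + lam * (u - f))
    return u

# usage: denoise a noisy step signal (the cartoon survives, the noise is removed)
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(100), np.ones(100)])
noisy = clean + 0.1 * rng.standard_normal(200)
denoised = rof_denoise_1d(noisy)
```

Larger `lam` keeps u closer to the input f; smaller `lam` removes more oscillation at the cost of flattening small features.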

1.3 Second-order cone programming

  • Since a one-dimensional second-order cone corresponds to a semi-infinite ray, SOCPs can accommodate nonnegative variables.
  • In fact, if all cones are one-dimensional, then the above SOCP is just a standard-form linear program.
  • As is the case for linear programs, SOCPs can be solved in polynomial time by interior point methods.
  • This is the approach that the authors take to solve the TV-based cartoon-texture decomposition models in this paper.
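As a sketch of the standard form alluded to above (common SOCP notation, e.g. as in [1]; symbols ours):

```latex
\min_{x \in \mathbb{R}^n} \; c^{\mathsf T} x
\quad \text{s.t.} \quad Ax = b, \quad
x = (x_1; \dots; x_k), \;\; x_i \in \mathcal{K}_{n_i},
\qquad
\mathcal{K}_m = \{(t, y) \in \mathbb{R} \times \mathbb{R}^{m-1} : t \ge \|y\|_2\}.
```

In particular \(\mathcal{K}_1 = \{t : t \ge 0\}\) is the semi-infinite ray, which is how nonnegative variables are accommodated and why the all-one-dimensional case reduces to a linear program.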

2.2.3 The Vese-Osher (VO) model

  • This is equivalent to solving the residual-free version (45) below.
  • The authors chose to solve the latter in their numerical tests because using a large λ in (44) makes it difficult to numerically solve its SOCP accurately.
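Up to notation (ours; cf. Vese and Osher [35]), the VO model approximates Meyer's G-norm by penalizing an L^p norm of a vector field g = (g_1, g_2) whose divergence plays the role of the texture part:

```latex
\min_{u,\,g_1,\,g_2}\; \int |\nabla u|
\;+\; \lambda \int \bigl(f - u - \operatorname{div} g\bigr)^{2}
\;+\; \mu \,\Bigl\| \sqrt{g_1^{2} + g_2^{2}} \Bigr\|_{L^{p}}.
```

Dropping the squared residual term (forcing f = u + div g exactly) gives the residual-free version referred to above.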

3 Numerical results

  • Similar artifacts can also be found in the VO model's results in Figures 2(h)-(j), but the differences are that the VO model generated u's with a block-like structure and thus v's with more complicated patterns.
  • In Figure 2(h), most of the signal in the second and third sections was extracted from u, leaving very little signal near the boundaries of these signal parts.
  • In short, the VO model performed like an approximation of Meyer's model, but with certain features closer to those of the TV-L^1 model.
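For reference, the TV-L^1 model compared here minimizes a total variation term with an L^1 fidelity term (standard continuous form; notation ours):

```latex
\min_{u}\; \int |\nabla u| \;+\; \lambda \int |f - u|.
```

A known consequence (due to Chan and Esedoglu) is that the decomposition is driven by geometric scale rather than contrast: in 2D, a disc of radius r goes entirely to u when r > 2/λ and entirely to v when r < 2/λ.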

Example 2:

  • This fingerprint has slightly inhomogeneous brightness: the background near the center of the finger is whiter than the rest.
  • The authors believe that inhomogeneity like this does not help the recognition and comparison of fingerprints, so it is better corrected.
  • One can observe in Figures 4(a) and (b) that their cartoon parts are close to each other, but slightly different from the cartoon in Figure 4(c).
  • The VO and TV-L^1 models gave more satisfactory results than Meyer's model.
  • Compared to the parameters used for decomposing the noiseless images in Example 3, the parameters used in the Meyer and VO models in this set of tests were changed because adding noise increased the G-norm of the texture/noise part v.

4 Conclusion

  • The authors have computationally studied three total variation based models with discrete inputs: the Meyer, VO, and TV-L^1 models.
  • The authors tested these models on a variety of 1D signals and 2D images to reveal their differences in decomposing inputs into cartoon and oscillating/small-scale/texture parts.
  • The Meyer model tends to capture the pattern of the oscillations in the input, which makes it well-suited to applications such as fingerprint image processing.
  • On the other hand, the TV-L^1 model decomposes the input into two parts according to the geometric scales of its components, independent of signal intensities: one part contains large-scale components and the other small-scale ones.
  • These results agree with those in [9], which compares the ROF, Meyer, and TV-L^1 models.


Citations
Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed STD method can better separate image structure and texture and results in sharper edges in the structural component; it has been successfully applied to several applications including detail enhancement, edge detection, and visual quality assessment of super-resolved images.
Abstract: Structure-texture image decomposition is a fundamental but challenging topic in computational graphics and image processing. In this paper, we introduce structure-aware and texture-aware measures to facilitate the structure-texture decomposition (STD) of images. Edge strengths and spatial scales that have been widely used in previous STD research cannot describe the structures and textures of images well. The proposed two measures differentiate image textures from image structures based on their distinctive characteristics. Specifically, the first one aims to measure the anisotropy of local gradients, and the second one is designed to measure the repeatability degree of signal patterns in a neighboring region. Since these two measures describe different properties of image structures and textures, they are complementary to each other. The STD is achieved by optimizing an objective function based on the two new measures. As using traditional optimization methods to solve the optimization problem will require designing different optimizers for different functional spaces, we employ an architecture of deep neural network to optimize the STD cost function in a unified manner. The experimental results demonstrate that, as compared with some state-of-the-art methods, our method can better separate image structure and texture and result in sharper edges in the structural component. Furthermore, to demonstrate the usefulness of the proposed STD method, we have successfully applied it to several applications including detail enhancement, edge detection, and visual quality assessment of super-resolved images.

15 citations


Cites background or methods from "A comparison of three total variati..."

  • ...In [51], three regularization-based STD models, including L1-norm and G-norm, are formulated as a paradigm of second-order cone programs (SOCPs)....

    [...]

  • ...In a variational framework, the structural component u is generally obtained by optimizing the following objective function [1], [51]...

    [...]

  • ...Traditionally, to solve (6), diverse optimizers are required for different functional spaces [51]....

    [...]

Journal ArticleDOI
TL;DR: Experiments show that the dictionaries learned by the proposed algorithms can describe the different components of images effectively, leading to high-quality image decomposition performance.

14 citations


Cites methods from "A comparison of three total variati..."

  • ...Yin et al. proposed TVL1 model for decomposing an image into features of different scales (Yin et al., 2005, 2007), and compared several models (TV-L1, VO, Meyer) for image texture extraction in (Yin et al., 2007)....

    [...]

Journal ArticleDOI
TL;DR: A second order image decomposition model is presented to perform denoising and texture extraction and gives a two-scale texture decomposition for highly textured images.
Abstract: We present a second order image decomposition model to perform denoising and texture extraction. We look for the decomposition f=u+v+w where u is a first order term, v a second order term and w the (0 order) remainder term. For highly textured images the model gives a two-scale texture decomposition: u can be viewed as a macro-texture (larger scale) whose oscillations are not too large and w is the micro-texture (very oscillating) that may contain noise. We perform mathematical analysis of the model and give numerical examples.

14 citations

Journal ArticleDOI
TL;DR: This paper modifies the algorithm proposed in Buades et al., improving its performance near image discontinuities while keeping the simplicity and rapidity of a linear model, and illustrates how the proposed method is the only one dealing correctly with frame-by-frame video processing.
Abstract: The decomposition of an image in a geometrical and a textural part has shown to be a challenging problem with several applications. Since the theoretical breakthrough of Y. Meyer, many variational methods and minimization techniques have been proposed for this task. This paper uses a different approach based on low/high-pass filtering with directional filters. This approach modifies the algorithm proposed in Buades et al. (IEEE Trans Image Process 19(8):1978-1986, 2010) improving its performance near image discontinuities while keeping the simplicity and rapidity of a linear model. Comparisons with variational methods illustrate the flexibility of the proposed algorithm. We illustrate how the proposed method is the only one dealing correctly with frame-by-frame video processing.

14 citations


Cites background from "A comparison of three total variati..."

  • ...There has been an extensive line of papers (starting with [35]) modifying and interpreting Meyer's models, and proposing minimization schemes: [2,3,19,24,33,36,39]....

    [...]

Journal ArticleDOI
TL;DR: A deep convolutional neural network architecture with nested UNets is proposed for automatic segmentation and enhancement of latent fingerprints, transforming a low-quality latent image into a segmentation mask and a high-quality fingerprint through pixels-to-pixels, end-to-end training.
Abstract: Latent fingerprints are one of the most important evidences used to identify criminals in the law enforcement and forensic agencies. Automated recognition of latent fingerprints is still challenging due to their poor image quality caused by unclear ridge structure and various overlapping patterns. Segmentation and enhancement are important to identify valid fingerprint regions, reduce the noise and improve the clarity of ridge structure for more accurate fingerprint recognition. In this paper, we propose a deep convolutional neural network architecture with the nested UNets for automatic segmentation and enhancement of latent fingerprints. First, to prepare training data, we synthetically generate the latent fingerprints and their segmentation and enhancement ground truth data for training. Then, a deep architecture of nested UNets is proposed to transform low-quality latent image into the segmentation mask and high-quality fingerprint through the pixels-to-pixels and end-to-end training. Finally, the test latent fingerprint is segmented and enhanced with the deep nested UNets to improve the image quality in one shot. The enhancement network is optimized by combining the local and global losses, which not only helps reconstruct the global structure, but also enhance the local ridge details of latent fingerprints. The proposed network can make use of multi-level feature maps in a pyramid way of nested UNets for segmentation and enhancement. Experimental results and comparison on NIST SD27 and IIITD-MOLF latent fingerprint databases demonstrate the promising performance of the proposed method.

12 citations


Cites background or methods from "A comparison of three total variati..."

  • ...Since latent fingerprint is overlapped with different kinds of noisy patterns, the total variation (TV) image model, which minimized the total variation in image decomposition, was proposed to reduce the heavy noise for segmentation and enhancement [3], [12]....

    [...]

  • ...To simulate the varying backgrounds of latent images, we first decompose the latent images of NIST SD27 into texture and cartoon components by using the TV method [12]....

    [...]

  • ...We also compare the proposed fingerprint enhancement method with other methods in the literature [12], [18], [20], [28]....

    [...]

  • ...CMC curves of different enhancement methods with manual segmentation masks on NIST SD27: the proposed method with the loss functions of ‘SSIM + MSE’, ‘SSIM’ and ‘MSE’, DenseUNet method [20], Localized Dictionary method [28], multi-task method [18], and the TV method [12], which are denoted as “Proposed method with SSIM + MSE”,“Proposed method with SSIM”, “Proposed method with MSE”, “DenseUNet”, “Localized Dictionary”, “Multi-task” and “Texture”, respectively....

    [...]

  • ...To reduce the noise of latent image, we apply the Total Variance method [12] to decompose the latent image into the texture and cartoon components....

    [...]

References
Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.

15,225 citations


"A comparison of three total variati..." refers methods in this paper

  • ...Moreover, in [19] it is shown that each interior-point iteration takes O(n^3) time and O(n^2 log n) bytes for solving an SOCP formulation of the Rudin–Osher–Fatemi model [33]....

    [...]

  • ...In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images....

    [...]

BookDOI
03 Jan 1989

2,132 citations

Journal ArticleDOI
TL;DR: SOCP formulations are given for four examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadRatic functions, and many of the problems presented in the survey paper of Vandenberghe and Boyd as examples of SDPs can in fact be formulated as SOCPs and should be solved as such.
Abstract: Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order (Lorentz) cones. Linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can all be formulated as SOCP problems, as can many other problems that do not fall into these three categories. These latter problems model applications from a broad range of fields from engineering, control and finance to robust optimization and combinatorial optimization. On the other hand semidefinite programming (SDP)—that is the optimization problem over the intersection of an affine set and the cone of positive semidefinite matrices—includes SOCP as a special case. Therefore, SOCP falls between linear (LP) and quadratic (QP) programming and SDP. Like LP, QP and SDP problems, SOCP problems can be solved in polynomial time by interior point methods. The computational effort per iteration required by these methods to solve SOCP problems is greater than that required to solve LP and QP problems but less than that required to solve SDP’s of similar size and structure. Because the set of feasible solutions for an SOCP problem is not polyhedral as it is for LP and QP problems, it is not readily apparent how to develop a simplex or simplex-like method for SOCP. While SOCP problems can be solved as SDP problems, doing so is not advisable both on numerical grounds and computational complexity concerns. For instance, many of the problems presented in the survey paper of Vandenberghe and Boyd [VB96] as examples of SDPs can in fact be formulated as SOCPs and should be solved as such. In §2, 3 below we give SOCP formulations for four of these examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadratic functions ...

1,535 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...When 1 < p < ∞, we use second-order cone formulations presented in [1]....

    [...]

  • ...With these definitions an SOCP can be written in the following form [1]:...

    [...]

MonographDOI
01 Sep 2001
TL;DR: It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities.
Abstract: From the Publisher: "Image compression, the Navier-Stokes equations, and detection of gravitational waves are three seemingly unrelated scientific problems that, remarkably, can be studied from one perspective. The notion that unifies the three problems is that of "oscillating patterns", which are present in many natural images, help to explain nonlinear equations, and are pivotal in studying chirps and frequency-modulated signals." "In the book, the author describes both what the oscillating patterns are and the mathematics necessary for their analysis. It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities." This book can be used either as a textbook in studying applications of wavelets to image processing or as a supplementary resource for studying nonlinear evolution equations or frequency-modulated signals. Most of the material in the book did not appear previously in monograph literature.

1,147 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...G is the dual of the closed subspace BV̄ of BV, where BV̄ := {u ∈ BV : |∇f| ∈ L^1} [27]....

    [...]

  • ...Meyer’s model To extract cartoon u in the space BV and texture and/or noise v as an oscillating function, Meyer [27] proposed the following model:...

    [...]

  • ...Among the recent total variation-based cartoon-texture decomposition models, Meyer [27] and Haddad and Meyer [20] proposed using the G-norm defined above, Vese and Osher [35] approximated the G-norm by the div(L^p)-norm, Osher, Sole and Vese [32] proposed using the H^{-1}-norm, Lieu and Vese [26] proposed using the more general H^{-s}-norm, and Le and Vese [24] and Garnett, Le, Meyer and Vese [18] proposed using the homogeneous Besov space Ḃ^s_{p,q}, -2 < s < 0, 1 ≤ p, q ≤ ∞, extending Meyer's Ḃ^{-1}_{∞,∞}, to model the oscillation component of an image....

    [...]

  • ...This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer [27], Vese–Osher (VO) [35], and TV-L^1 [12,38,2–4,29–31] models....

    [...]

  • ...Meyer gave a few examples in [27], including the one shown at the end of next paragraph, illustrating the appropriateness of modeling oscillating patterns by functions in G....

    [...]