
Journal ArticleDOI

A comparison of three total variation based texture extraction models

01 Jun 2007-Journal of Visual Communication and Image Representation (Academic Press, Inc.)-Vol. 18, Iss: 3, pp 240-252

TL;DR: This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer, Vese-Osher (VO), and TV-L^1 [12,38,2-4,29-31] models.

Abstract: This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer [27], Vese-Osher (VO) [35], and TV-L^1 [12,38,2-4,29-31] models. We formulate discrete versions of these models as second-order cone programs (SOCPs) which can be solved efficiently by interior-point methods. Our experiments with these models on 1D oscillating signals and 2D images reveal their differences: the Meyer model tends to extract oscillation patterns in the input, the TV-L^1 model performs a strict multiscale decomposition, and the Vese-Osher model has properties falling in between the other two models.

Topics: Image texture (54%)

Summary (2 min read)

1 Introduction

  • Let f be an observed image that contains texture and/or noise.
  • Texture is characterized as repeated and meaningful structure of small patterns.
  • Noise is characterized as uncorrelated random patterns.
  • The rest of an image, which is called cartoon, contains object hues and sharp edges.

1.1 The spaces BV and G

  • In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images.
  • The ROF model is the precursor to a large number of image processing models having a similar form.

1.3 Second-order cone programming

  • Since a one-dimensional second-order cone corresponds to a semi-infinite ray, SOCPs can accommodate nonnegative variables.
  • In fact, if all cones are one-dimensional, then the above SOCP is just a standard-form linear program.
  • As is the case for linear programs, SOCPs can be solved in polynomial time by interior-point methods.
  • This is the approach that the authors take to solve the TV-based cartoon-texture decomposition models in this paper; a toy SOCP is sketched below.
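To make the SOCP terminology above concrete, here is a minimal toy SOCP written with the cvxpy modeling package (an assumption of this sketch; the paper itself formulates the models as SOCPs and solves them with an interior-point solver such as a commercial SOCP code). It shows a genuine second-order cone constraint next to a one-dimensional cone, i.e., a plain nonnegativity constraint.

    # Toy SOCP sketch (illustrative only; not the paper's formulation).
    # Assumes the cvxpy package, which hands the problem to an interior-point-capable solver.
    import numpy as np
    import cvxpy as cp

    a = np.array([1.0, 2.0, 3.0])
    x = cp.Variable(3)
    t = cp.Variable()   # bounds the Euclidean norm below (second-order cone)
    s = cp.Variable()   # a "one-dimensional cone" variable, i.e., simply s >= 0

    constraints = [
        cp.norm(x - a, 2) <= t,   # second-order (Lorentz) cone constraint
        s >= 0,                   # one-dimensional cone = nonnegativity
        cp.sum(x) + s == 1.0,     # linear equality, as in the standard SOCP form
    ]
    prob = cp.Problem(cp.Minimize(t + s), constraints)
    prob.solve()
    print(prob.value, x.value)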

2.2.3 The Vese-Osher (VO) model

  • This is equivalent to solving the residual-free version (45) below.
  • The authors chose to solve the latter in their numerical tests because using a large λ in (44) makes it difficult to solve the corresponding SOCP accurately.

3 Numerical results

  • Similar artifacts can also be found in the VO model results in Figures 2(h)-(j); the difference is that the VO model generated u's with a block-like structure and thus v's with more complicated patterns.
  • In Figure 2(h), most of the signal in the second and third sections was extracted from u, leaving very little signal near the boundary of these signal parts.
  • In short, the VO model performed like an approximation of Meyer's model, but with certain features closer to those of the TV-L^1 model.

Example 2:

  • This fingerprint has slightly inhomogeneous brightness because the background near the center of the finger is whiter than the rest.
  • The authors believe that such inhomogeneity does not help the recognition and comparison of fingerprints and should therefore be corrected.
  • The authors observe in Figures 4(a) and (b) that the cartoon parts are close to each other, but slightly different from the cartoon in Figure 4(c).
  • The VO and TV-L^1 models gave more satisfactory results than Meyer's model.
  • Compared to the parameters used in the three models for decomposing the noiseless images in Example 3, the parameters used in the Meyer and VO models in this set of tests were changed because adding noise increased the G-norm of the texture/noise part v.

4 Conclusion

  • The authors have computationally studied three total variation based models with discrete inputs: the Meyer, VO, and TV-L^1 models.
  • The authors tested these models using a variety of 1D signals and 2D images to reveal their differences in decomposing inputs into their cartoon and oscillating/small-scale/texture parts.
  • The Meyer model tends to capture the pattern of the oscillations in the input, which makes it well-suited to applications such as fingerprint image processing.
  • On the other hand, the TV-L^1 model decomposes the input into two parts according to the geometric scales of the components in the input, independent of the signal intensities, one part containing large-scale components and the other containing small-scale ones.
  • These results agree with those in [9] , which compares the ROF, Meyer, and TV-L 1 models.


A Comparison of Three Total Variation Based Texture Extraction Models

Wotao Yin (a), Donald Goldfarb (b), Stanley Osher (c)

(a) Rice University, Department of Computational and Applied Mathematics, 6100 Main St, MS-134, Houston, TX 77005, USA
(b) Columbia University, Department of Industrial Engineering and Operations Research, 500 West 120th St, Mudd 313, New York, NY 10027, USA
(c) UCLA Mathematics Department, Box 951555, Los Angeles, CA 90095, USA
Abstract

This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer [27], Vese-Osher (VO) [35], and TV-L^1 [12,38,2–4,29–31] models. We formulate discrete versions of these models as second-order cone programs (SOCPs) which can be solved efficiently by interior-point methods. Our experiments with these models on 1D oscillating signals and 2D images reveal their differences: the Meyer model tends to extract oscillation patterns in the input, the TV-L^1 model performs a strict multiscale decomposition, and the Vese-Osher model has properties falling in between the other two models.

Key words: image decomposition, texture extraction, feature selection, total variation, variational imaging, second-order cone programming, interior-point method
1 Introduction

Let f be an observed image that contains texture and/or noise. Texture is characterized as repeated and meaningful structure of small patterns. Noise is characterized as uncorrelated random patterns. The rest of an image, which is called cartoon, contains object hues and sharp edges (boundaries). Thus an image f can be decomposed as f = u + v, where u represents image cartoon and v is texture and/or noise. A general way to obtain this decomposition using the variational approach is to solve the problem $\min\{TV(u) \,|\, \|u - f\|_B \le \sigma\}$, where $TV(u)$ denotes the total variation of u and $\|\cdot\|_B$ is a norm (or semi-norm). The total variation of u, which is defined below in Subsection 1.1, is minimized to regularize u while keeping edges like object boundaries of f in u (i.e., while allowing discontinuities in u). The fidelity term $\|u - f\|_B \le \sigma$ forces u to be close to f.

Research supported by NSF Grants DMS-01-04282, DNS-03-12222 and ACI-03-21917, ONR Grants N00014-03-1-0514 and N00014-03-0071, and DOE Grant GE-FG01-92ER-25126.
Email addresses: wotao.yin@rice.edu (Wotao Yin), goldfarb@columbia.edu (Donald Goldfarb), sjo@math.ucla.edu (Stanley Osher).
Preprint submitted to Elsevier, 21 January 2007
1.1 The spaces BV and G

The Banach space BV of functions of bounded variation is important in image processing because such functions are allowed to have discontinuities and hence keep edges. This can be seen from its definition as follows.

Let $u \in L^1$, and define [39]

$$TV(u) := \sup\left\{\int u\,\mathrm{div}(\vec g)\,dx \;:\; \vec g \in C^1_0(\mathbb{R}^n;\mathbb{R}^n),\ |\vec g(x)| \le 1 \text{ for all } x \in \mathbb{R}^n\right\}$$

as the total variation of u, where $|\cdot|$ denotes the Euclidean norm. Also, $u \in BV$ if $\|u\|_{BV} := \|u\|_{L^1} + TV(u) < \infty$. In the above definition, $\vec g \in C^1_0(\mathbb{R}^n;\mathbb{R}^n)$, the set of continuously differentiable vector-valued functions, serves as a test set for u. If u is in the Sobolev space $W^{1,1}$ or $H^1$, it follows from integration by parts that $TV(u)$ is equal to $\int |\nabla u|$, where $\nabla u$ is the weak derivative of u. However, the use of test functions to define $TV(u)$ allows u to have discontinuities. Therefore, BV is larger than $W^{1,1}$ and $H^1$. Equipped with the $\|\cdot\|_{BV}$-norm, BV is a Banach space.
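For intuition about the discrete setting used later in the paper, the following sketch computes an isotropic total variation of a 2D array with forward differences. This particular discretization and boundary handling are assumptions of the sketch, not necessarily the finite-difference scheme the authors adopt in their SOCP formulations.

    import numpy as np

    def total_variation(u: np.ndarray) -> float:
        """Discrete isotropic TV of a 2D array via forward differences
        (one common discretization; the paper's exact scheme may differ)."""
        ux = np.diff(u, axis=1)            # horizontal differences, shape (m, n-1)
        uy = np.diff(u, axis=0)            # vertical differences, shape (m-1, n)
        ux = np.pad(ux, ((0, 0), (0, 1)))  # zero-pad so both components share one grid
        uy = np.pad(uy, ((0, 1), (0, 0)))
        return float(np.sum(np.sqrt(ux**2 + uy**2)))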
BV(Ω), with Ω a bounded open domain, is defined analogously to BV with $L^1$ and $C^1_0(\mathbb{R}^n;\mathbb{R}^n)$ replaced by $L^1(\Omega)$ and $C^1_0(\Omega;\mathbb{R}^n)$, respectively.
Next, we define the space G [27]. Let G denote the Banach space consisting of all generalized functions v(x) defined on $\mathbb{R}^n$ which can be written as

$$v = \mathrm{div}(\vec g), \qquad \vec g = [g_i]_{i=1,\dots,n} \in L^\infty(\mathbb{R}^n;\mathbb{R}^n). \tag{1}$$

Its norm $\|v\|_G$ is defined as the infimum of the $L^\infty$ norms of the functions $|\vec g(x)|$ over all decompositions (1) of v. In short, $\|v\|_G = \inf\{\,\| |\vec g(x)| \|_{L^\infty} : v = \mathrm{div}(\vec g)\,\}$. G is the dual of the closed subspace $\overline{BV}$ of BV, where $\overline{BV} := \{u \in BV : |\nabla u| \in L^1\}$ [27]. We note that finite difference approximations to functions in BV and $\overline{BV}$ are the same. For the definition and properties of G(Ω), see [5].
An immediate result of the above definitions is that

$$\int u\,v = \int u\,\mathrm{div}(\vec g) = -\int \nabla u \cdot \vec g \le TV(u)\,\|v\|_G \tag{2}$$

holds for any $u \in BV$ with compact support and $v \in G$. We say (u, v) is an extremal pair if (2) holds with equality.
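Definition (1) translates directly into a small optimization problem in the discrete setting: find a discrete vector field g whose divergence equals v and whose largest pointwise Euclidean norm is as small as possible. The sketch below expresses this with cvxpy; the backward-difference divergence, the boundary handling, and the function name g_norm are all assumptions of the sketch rather than the SOCP formulation the authors develop in Section 2.

    import numpy as np
    import cvxpy as cp

    def g_norm(v: np.ndarray) -> float:
        """Sketch of a discrete G-norm of a 2D array v (typically a zero-mean
        texture part): minimize t subject to div(g) = v and |g(x)| <= t at
        every pixel. Backward-difference divergence, simplified boundaries."""
        m, n = v.shape
        g1 = cp.Variable((m, n))   # horizontal component of the vector field
        g2 = cp.Variable((m, n))   # vertical component
        t = cp.Variable()

        # backward differences with zero "ghost" values outside the domain
        div = (g1 - cp.hstack([np.zeros((m, 1)), g1[:, :-1]])
               + g2 - cp.vstack([np.zeros((1, n)), g2[:-1, :]]))

        pointwise = cp.norm(cp.vstack([cp.vec(g1), cp.vec(g2)]), 2, axis=0)  # |g(x)|
        prob = cp.Problem(cp.Minimize(t), [div == v, pointwise <= t])
        prob.solve()
        return float(t.value)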
In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images. Specifically, their model obtains a cleaner image $u \in BV$ of a noisy image f by letting u be the minimizer of $TV(u) + \lambda\|u - f\|^2_{L^2}$, in which the regularizing term $TV(u)$ tends to reduce the oscillations in u and the data fidelity term $\|u - f\|_{L^2}$ tends to keep u close to f.
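The ROF minimizer can be computed directly as a convex program. The sketch below uses cvxpy's built-in total-variation atom, which implements a standard isotropic discretization; it is meant only as an illustration, not the SOCP formulation used in the paper.

    import numpy as np
    import cvxpy as cp

    def rof_denoise(f: np.ndarray, lam: float) -> np.ndarray:
        """Sketch of the ROF model: minimize TV(u) + lam * ||u - f||_2^2
        over discrete images u (cvxpy's tv atom supplies the discretization)."""
        u = cp.Variable(f.shape)
        cp.Problem(cp.Minimize(cp.tv(u) + lam * cp.sum_squares(u - f))).solve()
        return u.value

    # usage (hypothetical): u = rof_denoise(noisy_image, lam=10.0); v = noisy_image - u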
The ROF model is the precursor to a large number of image processing models having a similar form. Among the recent total variation-based cartoon-texture decomposition models, Meyer [27] and Haddad and Meyer [20] proposed using the G-norm defined above, Vese and Osher [35] approximated the G-norm by the $\mathrm{div}(L^p)$-norm, Osher, Sole and Vese [32] proposed using the $H^{-1}$-norm, Lieu and Vese [26] proposed using the more general $H^{-s}$-norm, and Le and Vese [24] and Garnett, Le, Meyer and Vese [18] proposed using the homogeneous Besov space $\dot{B}^s_{p,q}$, $-2 < s < 0$, $1 \le p, q \le \infty$, extending Meyer's $\dot{B}^{-1}_{\infty,\infty}$, to model the oscillation component of an image. In addition, Chan and Esedoglu [12] and Yin, Goldfarb and Osher [38] used the $L^1$-norm together with total variation, following the earlier work by Alliney [2–4] and Nikolova [29–31].
1.2 Three cartoon-texture decomposition models

In this subsection we present three cartoon-texture decomposition models that are based on the minimization of total variation. We suggest that readers interested in the theoretical analysis of these models read the referenced works mentioned below and in the introduction. Although the analysis of the existence and uniqueness of solutions and duality/conjugacy is not within the scope of our discussion, in Section 3 we relate the differences among the image results from these models to the distinguishing properties of the three fidelity terms: $\|f - u\|_{L^1}$, $\|f - u\|_G$, and its approximation by Vese and Osher.

In the rest of the paper, we assume the input image f has compact support contained in a bounded convex open set Ω. In our tests, Ω is an open square.

1.2.1 The TV-L^1 model

In [2–4,30,31,12,37] the square of the $L^2$ norm of f - u in the fidelity term in the original ROF model ($\min\{TV(u) + \lambda\|f - u\|^2_{L^2}\}$) is replaced by the $L^1$ norm of f - u, which yields the following problem:

Constraint model: $\min_{u \in BV} \left\{\int |\nabla u| \ \text{ s.t. } \int |f - u| \le \sigma\right\}$, (3)

Lagrangian model: $\min_{u \in BV} \int |\nabla u| + \lambda\int |f - u|$. (4)
The above constrained minimization problem (3) is equivalent to its Lagrangian relaxed form (4), where λ is the Lagrange multiplier of the constraint $\int |f - u| \le \sigma$. The two problems have the same solution if λ is chosen equal to the optimal value of the dual variable corresponding to the constraint in the constrained problem. Given σ or λ, we can calculate the other value by solving the corresponding problem. The same result also holds for Meyer's model below.

We chose to solve the Lagrangian relaxed version (4), rather than the constraint version (3), in our numerical experiments because several researchers [12,37] have established the relationship between λ and the scale of $f - u^*$. For example, for the unit disk signal $1_{B(0,r)}$ centered at the origin and with radius r, $f - u^* = 1_{B(0,r)}$ for $0 < \lambda < 2/r$ while $f - u^*$ vanishes for $\lambda > 2/r$. Although this model appears to be simpler than Meyer's model and the Vese-Osher model below, it has recently been shown to have very interesting properties such as morphological invariance and texture extraction by scale [12,37]. These properties are important in various applications in biomedical engineering and computer vision such as background correction [36], face recognition [14,15], and brain MR image registration [13]. In Section 3, we demonstrate the ability of the TV-L^1 model to separate out features of a certain scale in an image.

In addition to the SOCP approach that we use in this paper to solve (4) numerically, graph-based approaches [17,11] have recently been shown to be very efficient in solving an approximate version of (4).
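A direct way to experiment with the Lagrangian TV-L^1 model (4) is again as a generic convex program. The sketch below uses cvxpy rather than the SOCP formulation of Section 2 or the graph-based solvers mentioned above, and the closing comment restates the λ-versus-scale behavior quoted for the disk signal.

    import numpy as np
    import cvxpy as cp

    def tvl1_decompose(f: np.ndarray, lam: float):
        """Sketch of the Lagrangian TV-L^1 model (4):
        minimize TV(u) + lam * ||f - u||_1. Returns (cartoon u, residual v)."""
        u = cp.Variable(f.shape)
        cp.Problem(cp.Minimize(cp.tv(u) + lam * cp.norm(cp.vec(f - u), 1))).solve()
        return u.value, f - u.value

    # For a disk indicator f = 1_{B(0,r)}, a small lam (below roughly 2/r in the
    # continuum limit) should leave the disk in v = f - u, while a large lam keeps it in u.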
1.2.2 Meyer's model

To extract cartoon u in the space BV and texture and/or noise v as an oscillating function, Meyer [27] proposed the following model:

Constraint version: $\inf_{u \in BV} \left\{\int |\nabla u| \ \text{ s.t. } \|f - u\|_G \le \sigma\right\}$, (5)

Lagrangian version: $\inf_{u \in BV} \int |\nabla u| + \lambda\|f - u\|_G$. (6)

As we have pointed out in Section 1.1, G is the dual space of $\overline{BV}$, a subspace of BV. So G is closely connected to BV. Meyer gave a few examples in [27], including the one shown at the end of the next paragraph, illustrating the appropriateness of modeling oscillating patterns by functions in G.

Unfortunately, it is not possible to write down Euler-Lagrange equations for the Lagrangian form of Meyer's model (6), and hence, use a straightforward partial differential equation method to solve it. Alternatively, several models [5,6,32,35] have been proposed to solve (6) approximately. The Vese-Osher model [35] described in the next subsection approximates $\| |\vec g(x)| \|_{L^\infty}$ by $\| |\vec g(x)| \|_{L^p}$, with $1 \le p < \infty$. The Osher-Sole-Vese model [32] replaces $\|v\|_G$ by the Hilbert functional $\|v\|^2_{H^{-1}}$. The more recent $A^2BC$ model [5,7,6] is inspired by Chambolle's projection algorithm [10] and minimizes $TV(u) + \lambda\|f - u - v\|^2_{L^2}$ for $(u, v) \in BV \times \{v \in G : \|v\|_G \le \mu\}$. Similar projection algorithms proposed in [9] and [8] are also used to approximately solve (4) and (6). Recently, Kindermann and Osher [21] showed that (6) is equivalent to a minimax problem and proposed a numerical method to solve this saddle-point problem. Other numerical approaches based on the dual representation of the G-norm are introduced in [16] by Chung, Le, Lieu, Tanushev, and Vese, [25] by Lieu, and [23] by Le, Lieu, and Vese. In [34], Starck, Elad, and Donoho use sparse basis pursuit to achieve a similar decomposition. In Section 2, we present SOCP-based optimization models to solve both (5) and (6) exactly (i.e., without any approximation or regularization applied to the non-smooth terms $\int |\nabla u|$ and $\|v\|_G$ except for the use of finite differences).

In contrast to our choice for the TV-L^1 model, we chose to solve (5) with specified σ's in our numerical experiments because setting an upper bound on $\|f - u\|_G$ is more meaningful than penalizing $\|f - u\|_G$. The following example demonstrates that $\|v\|_G$ is inversely proportional to the oscillation of v: let $v(x) = \cos(tx)$, which has stronger oscillations for larger t; one can show $\|v\|_G = 1/t$ because $\cos(tx) = \frac{d}{dx}\big(\tfrac{1}{t}\sin(tx)\big)$ and $\|\tfrac{1}{t}\sin(tx)\|_{L^\infty} = 1/t$. Therefore, to separate a signal with oscillations stronger than a specific level from f, it is more straightforward to solve the constrained problem (5).
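Combining the two preceding sketches gives a direct, if naive, way to experiment with the constrained version (5) preferred by the authors: minimize the discrete TV of u while representing f - u as div(g) with the pointwise bound |g(x)| <= σ. The discretization, boundary handling, and function name are assumptions of this sketch; the paper's actual SOCP formulation appears in Section 2.

    import numpy as np
    import cvxpy as cp

    def meyer_decompose(f: np.ndarray, sigma: float):
        """Sketch of Meyer's constrained model (5): minimize TV(u) subject to
        ||f - u||_G <= sigma, with the G-norm encoded through f - u = div(g)
        and a pointwise bound |g(x)| <= sigma. Simplified boundary handling."""
        m, n = f.shape
        u = cp.Variable((m, n))
        g1 = cp.Variable((m, n))
        g2 = cp.Variable((m, n))

        div = (g1 - cp.hstack([np.zeros((m, 1)), g1[:, :-1]])
               + g2 - cp.vstack([np.zeros((1, n)), g2[:-1, :]]))
        pointwise = cp.norm(cp.vstack([cp.vec(g1), cp.vec(g2)]), 2, axis=0)

        cp.Problem(cp.Minimize(cp.tv(u)),
                   [f - u == div, pointwise <= sigma]).solve()
        return u.value, f - u.value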
To calculate the G-norm of a function f alone, one can choose to solve an
SOCP or use the dual method by Kindermann, Osher and Xu [22]. The authors
of the latter work exploit (2) to develop a level-set based iterative method.
1.2.3 The Vese-Osher model

Motivated by the definition of the $L^\infty$ norm of $|\vec g(x)|$ as the limit

$$\| |\vec g| \|_{L^\infty} = \lim_{p \to \infty} \| |\vec g| \|_{L^p}, \tag{7}$$
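As a rough illustration of the idea behind (7), the sketch below builds a residual-free, VO-style decomposition in which the uniform (L^∞) bound on the pointwise magnitudes |g(x)| used for the G-norm is replaced by an L^p penalty. The functional, discretization, parameter names, and the value of p are assumptions chosen for illustration; they are not the exact Vese-Osher energy or the SOCP solved in the paper.

    import numpy as np
    import cvxpy as cp

    def vo_like_decompose(f: np.ndarray, mu: float, p: int = 4):
        """Illustrative sketch only: minimize TV(u) + mu * || |g| ||_p subject to
        f = u + div(g), where the L^p norm of the pointwise magnitudes |g(x)|
        stands in for the L^infinity norm appearing in the G-norm (cf. (7))."""
        m, n = f.shape
        u = cp.Variable((m, n))
        g1 = cp.Variable((m, n))
        g2 = cp.Variable((m, n))
        s = cp.Variable(m * n, nonneg=True)   # epigraph variables bounding |g(x)|

        div = (g1 - cp.hstack([np.zeros((m, 1)), g1[:, :-1]])
               + g2 - cp.vstack([np.zeros((1, n)), g2[:-1, :]]))
        pointwise = cp.norm(cp.vstack([cp.vec(g1), cp.vec(g2)]), 2, axis=0)

        cp.Problem(cp.Minimize(cp.tv(u) + mu * cp.norm(s, p)),
                   [u + div == f, pointwise <= s]).solve()
        return u.value, f - u.value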

Citations

Journal ArticleDOI
Abstract: We propose simple and extremely efficient methods for solving the basis pursuit problem $\min\{\|u\|_1 : Au = f, u\in\mathbb{R}^n\},$ which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem $\min_{u\in\mathbb{R}^n} \mu\|u\|_1+\frac{1}{2}\|Au-f^k\|_2^2$ for given matrix $A$ and vector $f^k$. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving $A$ and $A^\top$ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.

1,435 citations


Cites background from "A comparison of three total variati..."

  • ...It is proved that the recovery is perfect, i.e., the solution uopt = ū, for any ū whenever k, m, n, and A satisfy certain conditions (e.g., see [13, 30, 37, 42, 78, 95, 96] )....



Journal ArticleDOI
TL;DR: New fast algorithms to minimize total variation and more generally $l^1$-norms under a general convex constraint and a recent advance in convex optimization proposed by Yurii Nesterov are presented.
Abstract: This paper presents new fast algorithms to minimize total variation and more generally $l^1$-norms under a general convex constraint. Such problems are standards of image processing. The algorithms are based on a recent advance in convex optimization proposed by Yurii Nesterov. Depending on the regularity of the data fidelity term, we solve either a primal problem or a dual problem. First we show that standard first-order schemes allow one to get solutions of precision $\epsilon$ in $O(\frac{1}{\epsilon^2})$ iterations at worst. We propose a scheme that allows one to obtain a solution of precision $\epsilon$ in $O(\frac{1}{\epsilon})$ iterations for a general convex constraint. For a strongly convex constraint, we solve a dual problem with a scheme that requires $O(\frac{1}{\sqrt{\epsilon}})$ iterations to get a solution of precision $\epsilon$. Finally we perform some numerical experiments which confirm the theoretical results on various problems of image processing.

210 citations


Journal ArticleDOI
TL;DR: In this letter, an enhanced pixel domain JND model with a new algorithm for CM estimation is proposed, and the proposed one shows its advantages brought by the better EM and TM estimation.
Abstract: In just noticeable difference (JND) models, evaluation of contrast masking (CM) is a crucial step. More specifically, CM due to edge masking (EM) and texture masking (TM) needs to be distinguished due to the entropy masking property of the human visual system. However, TM is not estimated accurately in the existing JND models since they fail to distinguish TM from EM. In this letter, we propose an enhanced pixel domain JND model with a new algorithm for CM estimation. In our model, total-variation based image decomposition is used to decompose an image into structural image (i.e., cartoon like, piecewise smooth regions with sharp edges) and textural image for estimation of EM and TM, respectively. Compared with the existing models, the proposed one shows its advantages brought by the better EM and TM estimation. It has been also applied to noise shaping and visual distortion gauge, and favorable results are demonstrated by experiments on different images.

191 citations


Cites background or methods from "A comparison of three total variati..."

  • ...In [9], different TV-based image decomposition models are considered and the model of minimizing TV with an L1-norm fidelity term is shown to achieve better results; we adopt this (TV-L1) model in our work for image decomposition, and then (1) becomes as follows:...


  • ...2 to 2 [8], [9] for most natural images....



Journal ArticleDOI
TL;DR: This paper converts the linear model, which reduces to a low-pass/high-pass filter pair, into a nonlinear filter pair involving the total variation, which retains both the essential features of Meyer's models and the simplicity and rapidity of thelinear model.
Abstract: Can images be decomposed into the sum of a geometric part and a textural part? In a theoretical breakthrough, [Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Providence, RI: American Mathematical Society, 2001] proposed variational models that force the geometric part into the space of functions with bounded variation, and the textural part into a space of oscillatory distributions. Meyer's models are simple minimization problems extending the famous total variation model. However, their numerical solution has proved challenging. It is the object of a literature rich in variants and numerical attempts. This paper starts with the linear model, which reduces to a low-pass/high-pass filter pair. A simple conversion of the linear filter pair into a nonlinear filter pair involving the total variation is introduced. This new-proposed nonlinear filter pair retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model. It depends upon only one transparent parameter: the texture scale, measured in pixel mesh. Comparative experiments show a better and faster separation of cartoon from texture. One application is illustrated: edge detection.

188 citations


Journal ArticleDOI
TL;DR: It is shown that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry sub-problems, and that the TV-L1 model is able to separate image features according to their scales.
Abstract: This paper studies the total variation regularization with an $L^1$ fidelity term (TV‐$L^1$) model for decomposing an image into features of different scales. We first show that the images produced by this model can be formed from the minimizers of a sequence of decoupled geometry subproblems. Using this result we show that the TV‐$L^1$ model is able to separate image features according to their scales, where the scale is analytically defined by the G‐value. A number of other properties including the geometric and morphological invariance of the TV‐$L^1$ model are also proved and their applications discussed.

103 citations


Cites methods from "A comparison of three total variati..."

  • ...Since the second-order cone programming (SOCP) approach [27, 45 ] has proven to give very accurate solutions for solving TVbased image models, we formulated the TV-L1 model (1.1) and the G-value formula (5.1) as SOCPs and solved them using the commercial optimization package Mosek [33]....


  • ...decomposition can also be used to filter 1D signal [3], to remove impulsive (salt-n-pepper) noise [35], to extract textures from natural images [45], to remove varying illumination in face images for face recognition [22, 21], to decompose 2D/3D images for multiscale MR image registration [20], to assess damage from satellite imagery [19], and to remove inhomogeneous background from cDNA microarray and digital microscopic images [44]....



References

Journal ArticleDOI
Abstract: A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time dependent partial differential equation on a manifold determined by the constraints. As t → ∞ the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.

13,575 citations


"A comparison of three total variati..." refers methods in this paper

  • ...Moreover, in [19] it is shown that each interior-point iteration takes O(n^3) time and O(n^2 log n) bytes for solving an SOCP formulation of the Rudin–Osher–Fatemi model [33]....

  • ...In image processing, the space BV and the total variation semi-norm were first used by Rudin, Osher, and Fatemi [33] to remove noise from images....



BookDOI
03 Jan 1989

2,062 citations


Journal ArticleDOI
TL;DR: SOCP formulations are given for four examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadRatic functions, and many of the problems presented in the survey paper of Vandenberghe and Boyd as examples of SDPs can in fact be formulated as SOCPs and should be solved as such.
Abstract: Second-order cone programming (SOCP) problems are convex optimization problems in which a linear function is minimized over the intersection of an affine linear manifold with the Cartesian product of second-order (Lorentz) cones. Linear programs, convex quadratic programs and quadratically constrained convex quadratic programs can all be formulated as SOCP problems, as can many other problems that do not fall into these three categories. These latter problems model applications from a broad range of fields from engineering, control and finance to robust optimization and combinatorial optimization. On the other hand semidefinite programming (SDP)—that is the optimization problem over the intersection of an affine set and the cone of positive semidefinite matrices—includes SOCP as a special case. Therefore, SOCP falls between linear (LP) and quadratic (QP) programming and SDP. Like LP, QP and SDP problems, SOCP problems can be solved in polynomial time by interior point methods. The computational effort per iteration required by these methods to solve SOCP problems is greater than that required to solve LP and QP problems but less than that required to solve SDP’s of similar size and structure. Because the set of feasible solutions for an SOCP problem is not polyhedral as it is for LP and QP problems, it is not readily apparent how to develop a simplex or simplex-like method for SOCP. While SOCP problems can be solved as SDP problems, doing so is not advisable both on numerical grounds and computational complexity concerns. For instance, many of the problems presented in the survey paper of Vandenberghe and Boyd [VB96] as examples of SDPs can in fact be formulated as SOCPs and should be solved as such. In §2, 3 below we give SOCP formulations for four of these examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadratic functions, ...

1,387 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...When 1 < p < ∞, we use second-order cone formulations presented in [1]....

  • ...With these definitions an SOCP can be written in the following form [1]:...


MonographDOI
01 Sep 2001
TL;DR: It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities.
Abstract: From the Publisher: "Image compression, the Navier-Stokes equations, and detection of gravitational waves are three seemingly unrelated scientific problems that, remarkably, can be studied from one perspective. The notion that unifies the three problems is that of "oscillating patterns", which are present in many natural images, help to explain nonlinear equations. and are pivotal in studying chirps and frequency-modulated signals." "In the book, the author describes both what the oscillating patterns are and the mathematics necessary for their analysis. It turns out that this mathematics involves new properties of various Besov-type function spaces and leads to many deep results, including new generalizations of famous Gagliardo-Nirenberg and Poincare inequalities." This book can be used either as a textbook in studying applications of wavelets to image processing or as a supplementary resource for studying nonlinear evolution equations or frequency-modulated signals. Most of the material in the book did not appear previously in monograph literature.

1,110 citations


"A comparison of three total variati..." refers background or methods in this paper

  • ...G is the dual of the closed subspace $\overline{BV}$ of BV, where $\overline{BV} := \{u \in BV : |\nabla u| \in L^1\}$ [27]....

  • ...Meyer's model: To extract cartoon u in the space BV and texture and/or noise v as an oscillating function, Meyer [27] proposed the following model:...

  • ...Among the recent total variation-based cartoon-texture decomposition models, Meyer [27] and Haddad and Meyer [20] proposed using the G-norm defined above, Vese and Osher [35] approximated the G-norm by the $\mathrm{div}(L^p)$-norm, Osher, Sole and Vese [32] proposed using the $H^{-1}$-norm, Lieu and Vese [26] proposed using the more general $H^{-s}$-norm, and Le and Vese [24] and Garnett, Le, Meyer and Vese [18] proposed using the homogeneous Besov space $\dot{B}^s_{p,q}$, $-2 < s < 0$, $1 \le p, q \le \infty$, extending Meyer's $\dot{B}^{-1}_{\infty,\infty}$, to model the oscillation component of an image....

  • ...This paper qualitatively compares three recently proposed models for signal/image texture extraction based on total variation minimization: the Meyer [27], Vese-Osher (VO) [35], and TV-L^1 [12,38,2–4,29–31] models....

  • ...Meyer gave a few examples in [27], including the one shown at the end of the next paragraph, illustrating the appropriateness of modeling oscillating patterns by functions in G....