


HAL Id: hal-00932281
https://hal.archives-ouvertes.fr/hal-00932281
Submitted on 16 Jan 2014
A Fast Spatial Patch Blending Algorithm for Artefact
Reduction in Pattern-based Image Inpainting
Maxime Daisy, David Tschumperlé, Olivier Lezoray
To cite this version:
Maxime Daisy, David Tschumperlé, Olivier Lezoray. A Fast Spatial Patch Blending Algorithm for Artefact Reduction in Pattern-based Image Inpainting. SIGGRAPH Asia 2013 Technical Briefs, Nov 2013, Hong Kong SAR China. pp. 8:1–8:4. DOI: 10.1145/2542355.2542365. hal-00932281

A Fast Spatial Patch Blending Algorithm for Artefact Reduction in
Pattern-based Image Inpainting
Maxime Daisy, David Tschumperlé, Olivier Lézoray
GREYC Laboratory (CNRS UMR 6072), Image Team, 6 Bd Maréchal Juin, 14050 Caen, France
Figure 1: Illustration of our proposed spatial patch blending algorithm for image inpainting. From left to right: color image with area to reconstruct, reconstruction result with the inpainting algorithm from [Criminisi et al. 2004], our reconstruction result.
Abstract
We propose a fast and generic spatial patch blending technique that can be embedded within any kind of pattern-based inpainting algorithm. This extends the works of [Daisy et al. 2013] on the visual enhancement of inpainting results. We optimize this blending algorithm so that the processing time is roughly divided by a factor of ten, without any loss of perceived quality. Moreover, we provide free and simple-to-use software to make this easily reproducible.
CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.4 [Computer Graphics]: Graphics Utilities; I.4.4 [Image Processing and Computer Vision]: Restoration; I.4.9 [Image Processing and Computer Vision]: Applications
Keywords: inpainting, spatial, patch, blending, patch-based
1 Introduction
Filling in unknown regions or removing undesired content from images, known as image inpainting, is a widely used tool today. Movie producers, for example, use it to remove microphones or scratches from new and older movie sequences. As this kind of reconstruction tool is employed by users who want their images to look more realistic, it must obviously not damage the perceptual and visual quality of the processed images. In the state of the art, there mainly exist two kinds of inpainting methods. Geometry-based methods [Masnou and Morel 1998; Bertalmio et al. 2000; Tschumperlé and Deriche 2005] propagate image structures by extrapolating the local geometry. Unfortunately, these methods are often unable to synthesize non-local structures like textures. On the contrary, in pattern-based methods [Criminisi et al. 2004; Le Meur et al. 2011], user-selected image areas are reconstructed by copying patches from the known image zones to the unknown ones. These methods work well for reconstructing textures. Even with some variations, such as averaging several patches [Le Meur et al. 2011], they generally do not provide good results in terms of global geometry consistency. Hybrid methods also exist [Sun et al. 2005], but there are always cases where the reconstruction lets some artefacts appear. Recently, [Daisy et al. 2013] introduced a spatial patch blending technique to perceptually reduce these reconstruction artefacts, without any change in the way geometry is inpainted. Unfortunately, this method cannot be used for production work due to its computational burden and memory overload.
The paper addresses these two issues and is organized as follows. First, the principle of spatial patch blending is presented through a complete summary of the method. Then, we redesign the blending algorithm and show the various improvements it implies. Finally, we illustrate the relevance of our approach with commented results and comparisons with state-of-the-art methods.
e-mails: {Maxime.Daisy, David.Tschumperle, Olivier.Lezoray}@ensicaen.fr
2 Patch Blending Context and Previous Work
In [Daisy et al. 2013], a method was proposed that reduces the artefacts produced by any patch-based inpainting algorithm [Criminisi et al. 2004; Le Meur et al. 2011] applied to an image I. A result image J is produced in which possible inter-patch seams and inconsistencies are cleverly hidden, rendering the image perceptually more pleasant. This method modifies the process of any patch-based inpainting algorithm so that it provides additional information, which is then used to perform the two steps of the artefact reduction technique, namely: 1) artefact detection, and 2) spatial patch blending.
Artefact Detection. The first step of the method is empirically based on the following two hypotheses. For a reconstructed image I whose reconstruction patch locations are stored in a map U : p ∈ I ↦ q ∈ I, it is hypothesized that: 1) there are sharp color or luminosity variations where artefacts are located, and 2) patches copied from remote locations are likely to differ. The idea is first to combine these two cues to estimate a set of points E where the strongest artefacts are located. Then, a map σ : p ∈ I ↦ σ(p) ∈ ℝ of blending amplitudes is computed, which gives each point p inside a mask M a weight depending on its distance to the nearest artefact locations and on their strengths (cf. Fig. 2, and Eq. (3) of [Daisy et al. 2013]).

Figure 2: Illustration of the artefact detection result. (a) A reconstructed image where artefacts are to be detected. (b) The superimposed set E with the associated amplitudes σ resulting from the artefact detection in (a).
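As an illustration only (the actual amplitudes come from Eq. (3) of [Daisy et al. 2013], which is not reproduced here), a distance-based amplitude map could be sketched as follows; the decay formula and the `sigma_max` and `tau` parameters are hypothetical, not taken from the paper:

```python
import numpy as np

def blending_amplitudes(artefact_points, shape, sigma_max=8.0, tau=10.0):
    """Illustrative amplitude map: sigma(p) is large near a detected
    artefact point of E and decays with the distance to the nearest one.
    `sigma_max` and `tau` are hypothetical tuning parameters."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.asarray(artefact_points, dtype=float)  # (K, 2) array of (y, x)
    # Distance from every pixel to its nearest artefact point.
    d = np.sqrt((ys[..., None] - pts[:, 0]) ** 2 +
                (xs[..., None] - pts[:, 1]) ** 2)
    dist = d.min(axis=-1)
    return sigma_max * np.exp(-dist / tau)
```

In the actual method, the amplitudes additionally depend on the artefact strengths; this sketch only captures the distance term.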
Spatial Patch Blending. In classic patch-based inpainting methods, the reconstruction of an image is a kind of patchwork. Patches are iteratively extracted from the image, cut up, and the remaining pieces are pasted inside M to complete the given image. The main idea of spatial patch blending stems from the observation that parts of the individual patches are discarded during sequential compositing, even though these parts contain valuable information that could have been used had a different insertion order been chosen. In this method, the scrapped offcuts are kept and spatially blended in order to reduce the seams between the pieces of patches pasted side by side. The method is defined as a pixelwise process: for each point p ∈ M, the set Ψ_p of patches overlapping at p is extracted. Then, a combination of all the pixels where the patches ψ_q ∈ Ψ_p overlap is computed as follows:

$$J(p) = \frac{\sum_{\psi_q \in \Psi_p} w(q,p)\,\psi_q(p-q)}{\varepsilon + \sum_{\psi_q \in \Psi_p} w(q,p)} \qquad (1)$$

The Gaussian weight function $w(q,p) = e^{-d(q,p)^2/\sigma^2}$ defines the way patches are blended together during the process. This function strongly depends on the distance function d. In [Daisy et al. 2013], the minimal distance from the point p to every point in the piece of pasted patch ψ_q is used. As shown in Fig. 3, this method clearly provides good results in terms of artefact reduction compared to a classical patch-based inpainting result. On the other hand, the memory usage for storing the map U is too high. Moreover, even if it is reasonable compared to that of the inpainting process itself, the computation time does not allow this method to be used easily and interactively. The main reason is that as many distance maps as reconstruction patches have to be computed, which makes the computation time highly dependent on the size of the mask used for the inpainting. In addition, the size of E is somewhat overestimated by the artefact detection; in this pixel-by-pixel process, this noticeably increases the computation. The weaknesses of [Daisy et al. 2013] have led us to redesign parts of the algorithm to make it faster while maintaining the good perceptual quality of the results. The contributions we propose in this paper are mainly based on enhancing the spatial patch blending algorithm in terms of both computation time and memory usage.
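As a sketch, the pixelwise combination of Eq. (1) can be written as follows; the data structures are illustrative simplifications, and the Euclidean distance stands in for d (the original method uses the minimal distance from p to the piece of pasted patch ψ_q):

```python
import numpy as np

def blend_pixel(p, overlapping_patches, sigma, eps=1e-8):
    """Compute J(p) as the Gaussian-weighted average of Eq. (1).
    `overlapping_patches` is a list of (q, patch) pairs, where `patch`
    maps offsets (dy, dx) to colors, so patch[p - q] is the color the
    patch pasted at q proposes for p. Simplified illustrative sketch."""
    num, den = 0.0, 0.0
    for q, patch in overlapping_patches:
        offset = (p[0] - q[0], p[1] - q[1])
        d = np.hypot(p[0] - q[0], p[1] - q[1])  # placeholder for d(q, p)
        w = np.exp(-d ** 2 / sigma ** 2)        # Gaussian weight w(q, p)
        num += w * patch[offset]
        den += w
    return num / (eps + den)
```

The ε term only guards against an empty overlap set; with a single overlapping patch the result is the patch's own color, and with several patches the closer pastings dominate.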
3 Enhanced Spatial Patch Blending
We propose here a spatial patch blending algorithm for pattern-based inpainting algorithms. First, the method is described as it is, and then we discuss the different enhancements in comparison with the method of [Daisy et al. 2013].
Figure 3: Results of a spatial patch blending (zoomed). From left to right: masked image, result of patch-based inpainting [Criminisi et al. 2004], result of [Daisy et al. 2013].
Spatial blending reformulation. At first sight, the method we propose seems very different from [Daisy et al. 2013]. Rather than independently computing each final pixel J(p) by gathering, for every p ∈ M, the set of local features needed to compute (1), we propagate each patch's contribution at once over the neighbourhood where it was pasted. The loop on each point p is replaced by a loop on all patches ψ_q pasted in I during the inpainting. This second loop needs far fewer computing iterations (approximately n²/2 fewer, where n × n is the inpainting patch dimension). This loop factorization is theoretically possible only if the bandwidth σ(p) of the blending is considered constant over the whole image, which is obviously not the case. To cope with this, a multi-scale approach is adopted: the loop on patches is repeated as many times as the number of different scales considered in the values of σ (we have quantized these values into N scales). As N can be chosen small (typically about ten scales; smaller values would lead to discontinuities in the blending), the loop repetition factor due to the multi-scale aspect of our algorithm remains much smaller than the average gain of n²/2 at a given scale. This makes the final algorithm very interesting in terms of complexity compared to the approach of [Daisy et al. 2013]. Algorithm 1 details the whole principle of our multi-scale spatial blending method.
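The uniform quantization of σ into N scales can be sketched as follows; the function and variable names are ours, and the sketch assumes a non-constant amplitude map:

```python
import numpy as np

def quantize_scales(sigma_map, n_scales):
    """Uniformly quantize the amplitude map into n_scales levels.
    Returns, for each pixel, the scale index s in [0, n_scales - 1],
    and the representative amplitude sigma_s of each level."""
    lo, hi = sigma_map.min(), sigma_map.max()
    edges = np.linspace(lo, hi, n_scales + 1)
    # Interior edges only: values below edges[1] map to bin 0, etc.
    idx = np.clip(np.digitize(sigma_map, edges[1:-1]), 0, n_scales - 1)
    levels = 0.5 * (edges[:-1] + edges[1:])  # mid-point of each bin
    return idx, levels
```

Each patch loop of the multi-scale algorithm is then run once per level, using the level's representative amplitude as the (now constant) blending bandwidth.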
One can notice that this blending method acts like a post-processing of the image inpainting result, but it requires modifying the considered patch-based inpainting algorithm so that the reconstruction patch locations and the reconstruction points are stored.
Differences with the previous approach. The differences of our method compared to the approach of [Daisy et al. 2013] are the following:
Quantized spatial blending scales: Our optimized algorithm considers a quantized version of the spatial blending amplitude map σ. The set of blending results J_s is computed for each scale σ_s, s ∈ [1, N] ⊂ ℕ, and these are then merged into a final image J. This image contains pixels of J_1, J_2, ..., J_N depending on the local (quantized) scale defined in σ̃(p). The storage of all blending scales J_s can easily be avoided by directly transferring all the pixels computed at a scale s to the final image J. In this case, the last loop of Algorithm 1 has to be done inside the main scale loop (line 5).
Modified weight function: The spatial patch blending is locally performed as a linear combination of all the patches that would have overlapped with a different inpainting order. One can show that with this new algorithm, the weighting function w(q, p) of each patch (also used in (1)) depends only on the distance from a point to the neighbouring reconstruction points, rather than on the distance from a point to a piece of pasted patch (as described in [Daisy et al. 2013]).

Algorithm 1: Fast spatial patch blending for inpainting algorithms.
Input: Inpainted image I, inpainting mask M, number of scales N.
Output: Image J with spatially blended patches.
1:  Initialize P = ordered list of original patch center locations (p, q) pasted in M during the inpainting;
2:  Initialize C = ordered list of patch pasting locations (x, y) during the inpainting;
3:  Initialize σ = estimated local blending amplitude (Section 2);
4:  Initialize σ̃ = uniform quantization of σ into N levels (σ_1, ..., σ_N);
    // Computation of the spatial blending levels J_s for the different σ_s
5:  for s ∈ [1, N] ⊂ ℕ do
6:      Initialize J_s = result color image of the blending at scale s, initialized to 0 for all pixels in M, and to I(p) elsewhere;
7:      Initialize A = scalar accumulation image of the size of I, initialized to 0 for all pixels in M, and to 1 elsewhere;
8:      Initialize φ = image of size m × m containing a centered Gaussian of variance σ_s;
9:      for k ∈ P do
10:         Add the m × m patch of I located at P(k) to the image J_s at C(k);
11:         Add the image φ of the Gaussian weights to A at C(k);
12:     Divide J_s by A (normalization of the added colors);
    // Combine all the blending scales in a result image
13: for p ∈ M do
14:     s = σ̃(p);
15:     J(p) = J_s(p);
Figure 4: Illustration of the difference between the weights of [Daisy et al. 2013] (a), and those used in our fast spatial patch blending algorithm (b).
It is mainly thanks to this approximation that our optimized spatial blending version of the algorithm of [Daisy et al. 2013] can be reformulated. From an experimental point of view, one can notice that the differences between the results produced with the two weight functions are very difficult to see in the final blending results.
Mask-external spatial patch blending: In Algorithm 1, the spatial blending naturally extends to the outside of the inpainting mask M. In terms of visual appeal, this is very interesting since a smooth transition is created between the known colors and those of the reconstructed area. All the results presented in the following section take advantage of this special feature. To respect the classic inpainting formalism, one can constrain pixels outside the mask not to be modified by our spatial patch blending (by copying all known pixels from I to the final image J at the end of the process).
Performance improvement: The performance gain of our approach compared to [Daisy et al. 2013], and the comparison with the state-of-the-art approaches of [Criminisi et al. 2004] (inpainting without spatial patch blending) and Photoshop (very fast, based on [Wexler et al. 2007; Barnes et al. 2009]), is illustrated in Fig. 5. In order to show the efficiency of our new method, we have run several experiments. Fig. 5 summarizes the results on a set of medium-sized images¹ and gives us three interesting pieces of information. First, the time gain between the approach of [Daisy et al. 2013] and our method depends on the kind of processed images (mainly on the size of M), but it is very significant in every case (from 6 to 30 times faster for the presented examples). Second, there is no meaningful difference in computation time between the method of [Criminisi et al. 2004] and ours. This means that there is no additional cost to running our spatial patch blending algorithm after a classic patch-based inpainting. Also, one can see that the content-aware filling algorithm [Wexler et al. 2007; Barnes et al. 2009] provided in Photoshop is noticeably faster than our method, but it most likely uses hardware acceleration such as GPU processing or multi-core programming. This is not the case for our method, which is a standard C++ implementation with no such acceleration.
Figure 5: Illustration of the execution time comparison between our method, the method of [Daisy et al. 2013], and state-of-the-art methods. Lower is better.
4 Results and Reproducibility
Some results of our method are illustrated in Fig. 6 and compared to state-of-the-art methods. The benefit of spatial patch blending is clearly demonstrated by our examples, and our speed-up now allows this method to be used interactively. In addition, a software integration of our method has been made and the source code is now available to the community, making our fast spatial patch blending algorithm fully reproducible:
The source code of our technique is available as a function named inpaint_patch in the G'MIC [Tschumperlé 2013] source code.
A dedicated filter has been added to the G'MIC plugin for the open source GIMP² software, allowing non-specialists to use it easily thanks to an enhanced graphical user interface.
¹ http://daisy.users.greyc.fr/@publications:id=fspba.dtl.2013
² http://www.gimp.org/

(a) Masked color image. (b) Result obtained with [Criminisi et al. 2004]. (c) Result obtained with Photoshop [Wexler et al. 2007; Barnes et al. 2009]. (d) Our result.
(e) Masked color image. (f) Result obtained with [Criminisi et al. 2004]. (g) Result obtained with Photoshop [Wexler et al. 2007; Barnes et al. 2009]. (h) Our result.
(i) Masked color image. (j) Result obtained with [Criminisi et al. 2004]. (k) Result obtained with Photoshop [Wexler et al. 2007; Barnes et al. 2009]. (l) Our result.
Figure 6: Comparison with several state-of-the-art methods (zoomed).
References
BARNES, C., SHECHTMAN, E., FINKELSTEIN, A., AND GOLDMAN, D. B. 2009. PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 28, 3 (July), 24:1–24:11.
BERTALMIO, M., SAPIRO, G., CASELLES, V., AND BALLESTER, C. 2000. Image inpainting. In Proc. of the 27th annual SIGGRAPH conference, SIGGRAPH '00, 417–424.
CRIMINISI, A., PÉREZ, P., AND TOYAMA, K. 2004. Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Im. Proc. 13, 9 (Sept.), 1200–1212.
DAISY, M., TSCHUMPERLÉ, D., AND LÉZORAY, O. 2013. Spatial patch blending for artefact reduction in pattern-based inpainting techniques. In Int. Conf. on Computer Analysis of Images and Patterns (CAIP), vol. LNCS 8048, 523–530.
LE MEUR, O., GAUTIER, J., AND GUILLEMOT, C. 2011. Examplar-based inpainting based on local geometry. In ICIP, 3401–3404.
MASNOU, S., AND MOREL, J.-M. 1998. Level lines based disocclusion. In ICIP (3), 259–263.
SUN, J., YUAN, L., JIA, J., AND SHUM, H.-Y. 2005. Image completion with structure propagation. ACM Trans. Graph. 24, 3 (July), 861–868.
TSCHUMPERLÉ, D., AND DERICHE, R. 2005. Vector-valued image regularization with PDEs: A common framework for different applications. IEEE Trans. PAMI 27, 4, 506–517.
TSCHUMPERLÉ, D. 2013. G'MIC: GREYC's Magic for Image Computing. http://gmic.sourceforge.net/.
WEXLER, Y., SHECHTMAN, E., AND IRANI, M. 2007. Space-time completion of video. IEEE Trans. Pattern Anal. Mach. Intell. 29, 3 (Mar.), 463–476.
Citations
More filters
Journal ArticleDOI
TL;DR: A new matrix completion algorithm, better suited to the inpainting application than existing methods, is developed in this paper and demonstrates the robustness of the low rank approach to noisy data as well as large color and illumination variations between the views of the light field.
Abstract: Building up on the advances in low rank matrix completion, this paper presents a novel method for propagating the inpainting of the central view of a light field to all the other views. After generating a set of warped versions of the inpainted central view with random homographies, both the original light field views and the warped ones are vectorized and concatenated into a matrix. Because of the redundancy between the views, the matrix satisfies a low rank assumption enabling us to fill the region to inpaint with low rank matrix completion. To this end, a new matrix completion algorithm, better suited to the inpainting application than existing methods, is also developed in this paper. In its simple form, our method does not require any depth prior, unlike most existing light field inpainting algorithms. The method has then been extended to better handle the case where the area to inpaint contains depth discontinuities. In this case, a segmentation map of the different depth layers of the inpainted central view is required. This information is used to warp the depth layers with different homographies. Our experiments with natural light fields captured with plenoptic cameras demonstrate the robustness of the low rank approach to noisy data as well as large color and illumination variations between the views of the light field.

97 citations


Cites background from "A fast spatial patch blending algor..."

  • ...(a) Central view inpainted with [6] within the red boundary....

    [...]

Journal ArticleDOI
TL;DR: From this analysis, three improvements over Criminisi et al. algorithm are presented and detailed: a tensor-based data term for a better selection of pixel candidates to fill in; a fast patch lookup strategy to ensure a better global coherence of the reconstruction; and a novel fast anisotropic spatial blending algorithm that reduces typical block artifacts using tensor models.
Abstract: This paper proposes a technical review of exemplar-based inpainting approaches with a particular focus on greedy methods. Several comparative and illustrative experiments are provided to deeply explore and enlighten these methods, and to have a better understanding on the state-of-the-art improvements of these approaches. From this analysis, three improvements over Criminisi et al. algorithm are then presented and detailed: 1) a tensor-based data term for a better selection of pixel candidates to fill in; 2) a fast patch lookup strategy to ensure a better global coherence of the reconstruction; and 3) a novel fast anisotropic spatial blending algorithm that reduces typical block artifacts using tensor models. Relevant comparisons with the state-of-the-art inpainting methods are provided that exhibit the effectiveness of our contributions.

90 citations

Journal ArticleDOI
TL;DR: A new image specularity removal method which is based on polarization imaging through global energy minimization, which properly takes into account the long range cue and produces accurate and stable results.

26 citations


Cites methods from "A fast spatial patch blending algor..."

  • ...In this case, inpainting methods, which are based on the smoothness assumption of texture, color, or other features [31], could for example be used....

    [...]

Journal ArticleDOI
TL;DR: An interactive user-driven method that can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures is introduced.
Abstract: We introduce an interactive user-driven method to reconstruct high-relief 3D geometry from a single photo. Particularly, we consider two novel but challenging reconstruction issues: i) common non-rigid objects whose shapes are organic rather than polyhedral/symmetric, and ii) double-sided structures, where front and back sides of some curvy object parts are revealed simultaneously on image. To address these issues, we develop a three-stage computational pipeline. First, we construct a 2.5D model from the input image by user-driven segmentation, automatic layering, and region completion, handling three common types of occlusion. Second, users can interactively mark-up slope and curvature cues on the image to guide our constrained optimization model to inflate and lift up the image layers. We provide real-time preview of the inflated geometry to allow interactive editing. Third, we stitch and optimize the inflated layers to produce a high-relief 3D model. Compared to previous work, we can generate high-relief geometry with large viewing angles, handle complex organic objects with multiple occluded regions and varying shape profiles, and reconstruct objects with double-sided structures. Lastly, we demonstrate the applicability of our method on a wide variety of input images with human, animals, flowers, etc.

23 citations


Cites methods from "A fast spatial patch blending algor..."

  • ...Moreover, we fill and inpaint the holes left over on the background image by using [35]....

    [...]

Proceedings ArticleDOI
01 Oct 2014
TL;DR: This paper introduces a structure-tensor-based data-term for a better selection of pixel candidates to fill in based on priority, and proposes a new lookup heuristic in order to locate the best source patches to copy/paste to these targeted points.
Abstract: In this paper, we propose two major improvements to the exemplar-based image inpainting algorithm, initially formulated by Criminisi et al. [1]. First, we introduce a structure-tensor-based data-term for a better selection of pixel candidates to fill in based on priority. Then, we propose a new lookup heuristic in order to locate the best source patches to copy/paste to these targeted points. These two contributions clearly make the inpainting algorithm reconstruct more geometrically coherent images, as well as speed up the process drastically. We illustrate the great performances of our approach compared to existing state-of-the-art methods.

20 citations


Cites background from "A fast spatial patch blending algor..."

  • ...ence (this is one of our previous work published in [18])....

    [...]

References
More filters
Proceedings ArticleDOI
01 Jul 2000
TL;DR: A novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators, and does not require the user to specify where the novel information comes from.
Abstract: Inpainting, the technique of modifying an image in an undetectable form, is as ancient as art itself. The goals and applications of inpainting are numerous, from the restoration of damaged paintings and photographs to the removal/replacement of selected objects. In this paper, we introduce a novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorators. After the user selects the regions to be restored, the algorithm automatically fills-in these regions with information surrounding them. The fill-in is done in such a way that isophote lines arriving at the regions' boundaries are completed inside. In contrast with previous approaches, the technique here introduced does not require the user to specify where the novel information comes from. This is automatically done (and in a fast way), thereby allowing to simultaneously fill-in numerous regions containing completely different structures and surrounding backgrounds. In addition, no limitations are imposed on the topology of the region to be inpainted. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like dates, subtitles, or publicity; and the removal of entire objects from the image like microphones or wires in special effects.

3,830 citations


"A fast spatial patch blending algor..." refers methods in this paper

  • ...Geometry-based methods [Masnou and Morel 1998; Bertalmio et al. 2000; Tschumperlé and Deriche 2005] provide techniques to propagate image structures by extrapolating the local geometry....

    [...]

  • ...Geometry-based methods [Masnou and Morel 1998; Bertalmio et al. 2000; Tschumperlé and Deriche 2005] provide techniques to propagate image structures by extrapolating the local geometry....

    [...]

Journal ArticleDOI
TL;DR: The simultaneous propagation of texture and structure information is achieved by a single, efficient algorithm that combines the advantages of two approaches: exemplar-based texture synthesis and block-based sampling process.
Abstract: A new algorithm is proposed for removing large objects from digital images. The challenge is to fill in the hole that is left behind in a visually plausible way. In the past, this problem has been addressed by two classes of algorithms: 1) "texture synthesis" algorithms for generating large image regions from sample textures and 2) "inpainting" techniques for filling in small image gaps. The former has been demonstrated for "textures"-repeating two-dimensional patterns with some stochasticity; the latter focus on linear "structures" which can be thought of as one-dimensional patterns, such as lines and object contours. This paper presents a novel and efficient algorithm that combines the advantages of these two approaches. We first note that exemplar-based texture synthesis contains the essential process required to replicate both texture and structure; the success of structure propagation, however, is highly dependent on the order in which the filling proceeds. We propose a best-first algorithm in which the confidence in the synthesized pixel values is propagated in a manner similar to the propagation of information in inpainting. The actual color values are computed using exemplar-based synthesis. In this paper, the simultaneous propagation of texture and structure information is achieved by a single , efficient algorithm. Computational efficiency is achieved by a block-based sampling process. A number of examples on real and synthetic images demonstrate the effectiveness of our algorithm in removing large occluding objects, as well as thin scratches. Robustness with respect to the shape of the manually selected target region is also demonstrated. Our results compare favorably to those obtained by existing techniques.

3,066 citations

Journal ArticleDOI
27 Jul 2009
TL;DR: This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches, and proposes additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.
Abstract: This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.
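The two key insights named in this abstract (good matches can be found by random sampling, and coherent imagery lets matches propagate to neighbours) can be sketched in a minimal, unoptimized form. The function names, parameters, and the fixed random seed below are ours; the published algorithm stores offsets rather than absolute coordinates and uses an exponentially shrinking search window, which this sketch only approximates.

```python
import numpy as np

rng = np.random.default_rng(0)

def patch_dist(A, B, ay, ax, by, bx, half=1):
    """Sum of squared differences between the patch of A centred at
    (ay, ax) and the patch of B centred at (by, bx), clipped at the
    image borders."""
    pa = A[max(ay - half, 0):ay + half + 1, max(ax - half, 0):ax + half + 1]
    pb = B[max(by - half, 0):by + half + 1, max(bx - half, 0):bx + half + 1]
    h, w = min(pa.shape[0], pb.shape[0]), min(pa.shape[1], pb.shape[1])
    return float(((pa[:h, :w] - pb[:h, :w]) ** 2).sum())

def patchmatch(A, B, iters=4):
    """Minimal PatchMatch-style nearest-neighbour field: random
    initialization, then alternating forward/backward scans of
    propagation (reuse a neighbour's match) and random search."""
    H, W = A.shape
    Hb, Wb = B.shape
    nnf = np.stack([rng.integers(0, Hb, (H, W)),
                    rng.integers(0, Wb, (H, W))], axis=-1)
    cost = np.array([[patch_dist(A, B, y, x, *nnf[y, x])
                      for x in range(W)] for y in range(H)])
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1
        ys = range(H) if step == 1 else range(H - 1, -1, -1)
        for y in ys:
            xs = range(W) if step == 1 else range(W - 1, -1, -1)
            for x in xs:
                # Propagation: try the scan-order neighbours' matches,
                # shifted by the neighbour offset.
                for dy, dx in ((-step, 0), (0, -step)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        by = min(max(nnf[ny, nx, 0] - dy, 0), Hb - 1)
                        bx = min(max(nnf[ny, nx, 1] - dx, 0), Wb - 1)
                        c = patch_dist(A, B, y, x, by, bx)
                        if c < cost[y, x]:
                            nnf[y, x] = (by, bx); cost[y, x] = c
                # Random search: sample around the current match in a
                # window whose radius halves at each step.
                r = max(Hb, Wb)
                while r >= 1:
                    by = int(np.clip(nnf[y, x, 0] + rng.integers(-r, r + 1), 0, Hb - 1))
                    bx = int(np.clip(nnf[y, x, 1] + rng.integers(-r, r + 1), 0, Wb - 1))
                    c = patch_dist(A, B, y, x, by, bx)
                    if c < cost[y, x]:
                        nnf[y, x] = (by, bx); cost[y, x] = c
                    r //= 2
    return nnf, cost
```

Each pixel does O(log max(H, W)) random probes plus two propagation tests per iteration, which is what brings the cost of the full field down to interactive rates in the published version.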

2,888 citations


"A fast spatial patch blending algor..." refers methods or result in this paper

  • ...…The performance gain of our approach as compared to our previous one, and the comparison with the state-of-the-art approaches of [Criminisi et al. 2004] (inpainting without spatial patch blending) and Photoshop (very fast, based on [Wexler et al. 2007; Barnes et al. 2009]) is illustrated Fig....

    [...]

  • ...Also, one can see that the content-aware filling algorithm [Wexler et al. 2007; Barnes et al. 2009] provided in Photoshop is noticeably faster than our method, but it most likely uses hardware acceleration such as GPU processing or multi-core programming....

    [...]

Journal ArticleDOI
TL;DR: This paper presents a new framework for the completion of missing information based on local structures that poses the task of completion as a global optimization problem with a well-defined objective function and derives a new algorithm to optimize it.
Abstract: This paper presents a new framework for the completion of missing information based on local structures. It poses the task of completion as a global optimization problem with a well-defined objective function and derives a new algorithm to optimize it. Missing values are constrained to form coherent structures with respect to reference examples. We apply this method to space-time completion of large space-time "holes" in video sequences of complex dynamic scenes. The missing portions are filled in by sampling spatio-temporal patches from the available parts of the video, while enforcing global spatio-temporal consistency between all patches in and around the hole. The consistent completion of static scene parts simultaneously with dynamic behaviors leads to realistic looking video sequences and images. Space-time video completion is useful for a variety of tasks, including, but not limited to: 1) sophisticated video removal (of undesired static or dynamic objects) by completing the appropriate static or dynamic background information. 2) Correction of missing/corrupted video frames in old movies. 3) Modifying a visual story by replacing unwanted elements. 4) Creation of video textures by extending smaller ones. 5) Creation of complete field-of-view stabilized video. 6) As images are one-frame videos, we apply the method to this special case as well
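The global objective sketched in this abstract can be written schematically as follows (notation ours, simplified from the paper's coherence formulation): every missing point p in the hole H should have a space-time patch W_p that is similar to some patch W_q from the known data D,

```latex
\mathrm{Coherence}(H \mid D) \;=\; \prod_{p \in H}\; \max_{q \in D}\; s\!\left(W_p, W_q\right),
\qquad
s(W_p, W_q) \;=\; \exp\!\left(-\frac{\lVert W_p - W_q \rVert^{2}}{2\sigma^{2}}\right).
```

Optimizing such an objective alternates between finding, for each hole patch, its most similar known patch, and updating each hole pixel as a weighted vote of the overlapping matched patches.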

746 citations


"A fast spatial patch blending algor..." refers methods or result in this paper

  • ...…The performance gain of our approach as compared to our previous one, and the comparison with the state-of-the-art approaches of [Criminisi et al. 2004] (inpainting without spatial patch blending) and Photoshop (very fast, based on [Wexler et al. 2007; Barnes et al. 2009]) is illustrated Fig....

    [...]

  • ...Also, one can see that the content-aware filling algorithm [Wexler et al. 2007; Barnes et al. 2009] provided in Photoshop is noticeably faster than our method, but it most likely uses hardware acceleration such as GPU processing or multi-core programming....

    [...]

Journal ArticleDOI
TL;DR: A unifying expression is proposed that gathers the majority of PDE-based formalisms for vector-valued image regularization into a single generic anisotropic diffusion equation, allowing the regularization framework to be implemented accurately by taking the local filtering properties of the proposed equations into account.
Abstract: In this paper, we focus on techniques for vector-valued image regularization, based on variational methods and PDE. Starting from the study of PDE-based formalisms previously proposed in the literature for the regularization of scalar and vector-valued data, we propose a unifying expression that gathers the majority of these previous frameworks into a single generic anisotropic diffusion equation. On one hand, the resulting expression provides a simple interpretation of the regularization process in terms of local filtering with spatially adaptive Gaussian kernels. On the other hand, it naturally disassembles any regularization scheme into the smoothing process itself and the underlying geometry that drives the smoothing. Thus, we can easily specialize our generic expression into different regularization PDE that fulfill desired smoothing behaviors, depending on the considered application: image restoration, inpainting, magnification, flow visualization, etc. Specific numerical schemes are also proposed, allowing us to implement our regularization framework with accuracy by taking the local filtering properties of the proposed equations into account. Finally, we illustrate the wide range of applications handled by our selected anisotropic diffusion equations with application results on color images.
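The "local filtering with spatially adaptive kernels" interpretation described in this abstract can be illustrated with a toy scalar analogue: classic Perona-Malik diffusion, where the smoothing strength decreases where the local gradient is large, so edges are smoothed less than flat regions. This is a deliberately simplified sketch, not the paper's vector-valued, trace-based framework; the function name and parameter values are ours.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Scalar anisotropic diffusion (Perona-Malik). Each step averages
    a pixel with its four neighbours, weighted by an edge-stopping
    function g(d) = exp(-(d/kappa)^2) of the local difference d.
    Borders are periodic (wrap-around via np.roll) for brevity;
    dt * 4 < 1 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours.
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, 1, 1) - u
        dw = np.roll(u, -1, 1) - u
        # Edge-stopping weights: near 1 in flat areas, near 0 at edges.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Because each weight is a symmetric function of the pairwise difference, the scheme conserves the image mean while strictly reducing variance on non-constant input, i.e. it denoises flat regions without inventing intensity.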

680 citations

Frequently Asked Questions (1)
Q1. What are the contributions mentioned in the paper "A fast spatial patch blending algorithm for artefact reduction in pattern-based image inpainting" ?

The authors propose a fast and generic spatial patch blending technique that can be embedded within any kind of pattern-based inpainting algorithm. This extends the work of [Daisy et al. 2013] on the visual enhancement of inpainting results. Moreover, the authors provide open-source and simple-to-use software to make this easily reproducible.