
Showing papers in "Image Processing On Line in 2012"


Journal ArticleDOI
TL;DR: LSD is a linear-time Line Segment Detector giving subpixel accurate results and uses an a contrario validation approach according to Desolneux, Moisan, and Morel’s theory.
Abstract: LSD is a linear-time Line Segment Detector giving subpixel accurate results. It is designed to work on any digital image without parameter tuning. It controls its own number of false detections: on average, one false alarm is allowed per image [1]. The method is based on Burns, Hanson, and Riseman’s method [2], and uses an a contrario validation approach according to Desolneux, Moisan, and Morel’s theory [3, 4]. The version described here includes some further
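
For intuition on the "one false alarm per image" control, the sketch below shows the shape of the a contrario test: a candidate region with k aligned points out of n is accepted when its Number of False Alarms, the number of tests times a binomial tail, is at most 1. The test count and the precision p used here are illustrative placeholders, not the exact quantities of the article.

    from math import comb

    def binomial_tail(n, k, p):
        """P[X >= k] for X ~ Binomial(n, p)."""
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    def nfa(n_tests, n, k, p):
        """A-contrario Number of False Alarms for a candidate region of n pixels,
        k of which have gradient orientation aligned (up to precision p) with it."""
        return n_tests * binomial_tail(n, k, p)

    # A candidate is kept when NFA <= 1, which gives, on average,
    # at most one false detection per image.
    n_tests = (512 * 512) ** 2.5          # illustrative test count for a 512x512 image
    print(nfa(n_tests, n=120, k=95, p=0.125) <= 1.0)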

714 citations


Journal ArticleDOI
TL;DR: An open-source implementation of BM3D is proposed, the description of the method is rewritten with a new notation, and the choice of all the method's parameters is discussed to confirm their actual optimality.
Abstract: BM3D is a recent denoising method based on the fact that an image has a locally sparse representation in transform domain. This sparsity is enhanced by grouping similar 2D image patches into 3D groups. In this paper we propose an open-source implementation of the method. We discuss the choice of all the method's parameters and confirm their actual optimality. The description of the method is rewritten with a new notation. We hope this new notation is more transparent than in the original paper. A final index nonetheless gives the correspondence between the new notation and the original one.
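
As a rough illustration of the grouping step described above, the sketch below collects, around a reference patch, the most similar patches in a search window and stacks them into a 3D group. The similarity threshold, the 3D transform, and the collaborative filtering of BM3D are deliberately left out, and all function and parameter names are made up for this example.

    import numpy as np

    def group_similar_patches(img, ref_xy, patch=8, search=16, max_group=16):
        """Toy block-matching step in the spirit of BM3D's grouping: collect the
        patches most similar (in L2 distance) to a reference patch and stack them
        into a 3D group."""
        y0, x0 = ref_xy
        ref = img[y0:y0 + patch, x0:x0 + patch]
        candidates = []
        for y in range(max(0, y0 - search), min(img.shape[0] - patch, y0 + search) + 1):
            for x in range(max(0, x0 - search), min(img.shape[1] - patch, x0 + search) + 1):
                p = img[y:y + patch, x:x + patch]
                candidates.append((np.sum((p - ref) ** 2), p))
        candidates.sort(key=lambda c: c[0])
        return np.stack([p for _, p in candidates[:max_group]])   # shape (max_group, patch, patch)

    noisy = np.random.rand(64, 64)
    group = group_similar_patches(noisy, ref_xy=(20, 30))
    print(group.shape)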

321 citations


Journal ArticleDOI
Pascal Getreuer
TL;DR: This work focuses on the split Bregman algorithm of Goldstein and Osher for TV-regularized denoising; TV regularization was originally developed for AWGN image denoising and has since been applied to a multitude of other imaging problems.
Abstract: Denoising is the problem of removing noise from an image. The most commonly studied case is with additive white Gaussian noise (AWGN), where the observed noisy image f is related to the underlying true image u by f = u + η, and the noise η is at each point in space independently and identically distributed as a zero-mean Gaussian random variable. Total variation (TV) regularization is a technique that was originally developed for AWGN image denoising by Rudin, Osher, and Fatemi [9]. The TV regularization technique has since been applied to a multitude of other imaging problems; see for example Chan and Shen’s book [20]. We focus here on the split Bregman algorithm of Goldstein and Osher [31] for TV-regularized denoising.
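
For reference, the model and one split Bregman iteration can be stated compactly as below. This is one common form of the updates, with gamma denoting the penalty parameter; the article's notation may differ slightly.

    \min_u \; \|\nabla u\|_1 + \tfrac{\lambda}{2}\,\|u - f\|_2^2
    \quad\Longrightarrow\quad
    \min_{u,d} \; \|d\|_1 + \tfrac{\lambda}{2}\,\|u - f\|_2^2
    \quad\text{subject to } d = \nabla u,

    d^{k+1} = \operatorname{shrink}\big(\nabla u^k + b^k,\ 1/\gamma\big), \qquad
    (\lambda I - \gamma\Delta)\, u^{k+1} = \lambda f - \gamma\,\operatorname{div}\big(d^{k+1} - b^k\big), \qquad
    b^{k+1} = b^k + \nabla u^{k+1} - d^{k+1},

    \text{with } \operatorname{shrink}(x, t) = \frac{x}{|x|}\,\max(|x| - t,\ 0).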

257 citations


Journal ArticleDOI
Pascal Getreuer
TL;DR: The level set formulation of the Chan-Vese model and its numerical solution using a semi-implicit gradient descent are described; the level set representation allows the segmentation to handle topological changes more easily than explicit snake methods.
Abstract: While many segmentation methods rely in some way on edge detection, the “Active Contours Without Edges” method by Chan and Vese [7, 9] ignores edges completely. Instead, the method optimally fits a two-phase piecewise constant model to the given image. The segmentation boundary is represented implicitly with a level set function, which allows the segmentation to handle topological changes more easily than explicit snake methods. This article describes the level set formulation of the Chan-Vese model and its numerical solution using a semi-implicit gradient descent. We also discuss the Chan–Sandberg–Vese method [8], a straightforward extension of Chan–Vese for vector-valued images.
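
For reference, the two-phase model and the descent it leads to can be written in the standard form below, where H is a regularized Heaviside function and δ_ε its derivative; the article's semi-implicit scheme treats the curvature term implicitly.

    E(c_1, c_2, \varphi) = \mu \int_\Omega |\nabla H(\varphi)|
      + \nu \int_\Omega H(\varphi)
      + \lambda_1 \int_\Omega |f - c_1|^2\, H(\varphi)
      + \lambda_2 \int_\Omega |f - c_2|^2\, \big(1 - H(\varphi)\big),

    c_1 = \frac{\int f\, H(\varphi)}{\int H(\varphi)}, \qquad
    c_2 = \frac{\int f\, \big(1 - H(\varphi)\big)}{\int \big(1 - H(\varphi)\big)}, \qquad
    \frac{\partial \varphi}{\partial t} = \delta_\epsilon(\varphi)\left[
      \mu\, \operatorname{div}\!\left(\frac{\nabla\varphi}{|\nabla\varphi|}\right)
      - \nu - \lambda_1 (f - c_1)^2 + \lambda_2 (f - c_2)^2 \right].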

187 citations


Journal ArticleDOI
TL;DR: The RANSAC algorithm (RANdom SAmple Consensus) is a robust method to estimate the parameters of a model fitting the data, in the presence of outliers among the data.
Abstract: The RANSAC [2] algorithm (RANdom SAmple Consensus) is a robust method to estimate the parameters of a model fitting the data, in the presence of outliers among the data. Its random nature is due only to complexity considerations. It iteratively extracts a random sample out of all data, of minimal size sufficient to estimate the parameters. At each such trial, the number of data points consistent with the estimated model (the inliers) is counted, and the sample yielding the largest consensus set is kept.
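
The sketch below illustrates the generic RANSAC loop on a toy problem, fitting a 2D line from two-point samples and keeping the largest consensus set. The parameter names and the final total-least-squares refit are choices made for this example, not prescriptions of the article.

    import numpy as np

    def ransac_line(points, n_iters=200, tol=0.05, seed=0):
        """Generic RANSAC sketch: fit a 2D line a*x + b*y + c = 0 to points
        contaminated by outliers.  A minimal sample (two points) defines a
        candidate line; the candidate with the largest consensus set of
        inliers (distance < tol) wins and is refit on its inliers."""
        pts = np.asarray(points, dtype=float)
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(pts), dtype=bool)
        for _ in range(n_iters):
            (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
            a, b = y2 - y1, x1 - x2                    # normal of the line through the sample
            c = -(a * x1 + b * y1)
            norm = np.hypot(a, b)
            if norm == 0:
                continue
            inliers = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) / norm < tol
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        x, y = pts[best_inliers, 0], pts[best_inliers, 1]
        A = np.column_stack([x - x.mean(), y - y.mean()])
        a, b = np.linalg.svd(A)[2][-1]                 # total least-squares refit on the inliers
        return (a, b, -(a * x.mean() + b * y.mean())), best_inliers

    rng = np.random.default_rng(1)
    t = rng.uniform(0, 1, 80)
    pts = np.column_stack([t, 0.5 * t + 0.2 + rng.normal(0, 0.01, 80)])
    pts[:15] = rng.uniform(0, 2, (15, 2))              # replace 15 points by gross outliers
    model, inliers = ransac_line(pts)
    print(model, int(inliers.sum()))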

134 citations


Journal ArticleDOI
TL;DR: Two fast approximations of Automatic Color Enhancement “ACE” are described; one uses a polynomial approximation of the slope function to decompose the main computation into convolutions, reducing the cost to O(N^2 log N).
Abstract: Automatic Color Enhancement “ACE” is an effective method for color image enhancement introduced by Gatta, Rizzi, and Marini based on modeling several low-level mechanisms of the human visual system. The direct computation of ACE on an N × N image costs O(N^4) operations. This article describes two fast approximations of ACE. First, the algorithm of Bertalmío, Caselles, Provenzi, and Rizzi uses a polynomial approximation of the slope function to decompose the main computation into convolutions, reducing the cost to O(N^2 log N). Second, an algorithm based on interpolating intensity levels also reduces the main computation to convolutions. The use of ACE for image enhancement and color correction is demonstrated.
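
The direct O(N^4) computation being approximated looks roughly like the following per-channel sketch: every pixel is compared with every other pixel through the slope function and weighted by the inverse of their distance, and the result is rescaled. The exact slope function, weights, and final scaling of ACE may differ; the names and values here are illustrative only.

    import numpy as np

    def ace_direct(channel, alpha=5.0):
        """Direct (slow) form of an ACE-style adjustment for one channel.
        This is the O(N^4) double loop over pixel pairs that the article's
        two approximations accelerate."""
        h, w = channel.shape
        ys, xs = np.mgrid[0:h, 0:w]
        r = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                d = np.hypot(ys - i, xs - j)
                d[i, j] = 1.0                                   # avoid division by zero (self term is zero anyway)
                s = np.clip(alpha * (channel[i, j] - channel), -1.0, 1.0)   # saturated slope function
                s[i, j] = 0.0
                r[i, j] = np.sum(s / d)
        return (r - r.min()) / (r.max() - r.min())              # stretch the result to [0, 1]

    print(ace_direct(np.random.rand(16, 16)).shape)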

104 citations


Journal ArticleDOI
Pascal Getreuer
TL;DR: Inpainting is used to restore regions of an image that are corrupted by noise or where the data is missing, and to solve disocclusion, to estimate the scene behind an obscuring foreground object.
Abstract: Given an image where a specified region is unknown, image inpainting or image completion is the problem of inferring the image content in this region. Traditional retouching or inpainting is the practice of restoring aged artwork, where damaged or missing portions are repainted based on the surrounding content to approximate the original appearance. In the context of digital images, inpainting is used to restore regions of an image that are corrupted by noise or where the data is missing. Inpainting is also used to solve disocclusion, to estimate the scene behind an obscuring foreground object. A popular use of digital inpainting is object removal, for example, to remove a trashcan that disrupts a scene of otherwise natural beauty. Inpainting is an interpolation problem, filling the unknown region with a condition to agree with the known image on the boundary. A classical solution for such an interpolation is to solve Laplace’s equation. However, Laplace’s equation is usually unsatisfactory for images since it is overly smooth. It cannot recover a step edge passing through the region. Total variation (TV) regularization is an effective inpainting technique which is capable of recovering sharp edges under some conditions (these conditions will be explained). The use of TV regularization was originally developed for image denoising by Rudin, Osher, and Fatemi [3] and then applied to inpainting by Chan and Shen [13]. TV-regularized inpainting does not create texture; the method is limited to inpainting the geometric structure.
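
The classical Laplace-equation baseline mentioned above is easy to sketch: the unknown pixels are iterated toward the average of their neighbors, with the known pixels acting as boundary data, which is exactly why the result is overly smooth. This is the baseline only, not the TV-regularized method of the article; the mask layout and iteration count are arbitrary.

    import numpy as np

    def harmonic_inpaint(img, mask, n_iters=2000):
        """Fill the unknown region (mask == True) by solving Laplace's equation
        with the known pixels as boundary data, using plain Jacobi iterations."""
        u = img.astype(float).copy()
        u[mask] = img[~mask].mean()                 # rough initialization inside the hole
        for _ in range(n_iters):
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                          np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = avg[mask]                     # update only the unknown pixels
        return u

    img = np.tile(np.linspace(0, 1, 64), (64, 1))   # simple test image (linear ramp)
    mask = np.zeros_like(img, dtype=bool)
    mask[24:40, 24:40] = True                       # square hole to fill
    print(np.abs(harmonic_inpaint(img, mask) - img).max())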

100 citations


Journal ArticleDOI
Pascal Getreuer
TL;DR: TV-regularized deconvolution with Gaussian noise and its efficient solution using the split Bregman algorithm of Goldstein and Osher are discussed; a straightforward extension for Laplace or Poisson noise is shown, and empirical estimates for the optimal value of the regularization parameter are developed.
Abstract: Deblurring is the inverse problem of restoring an image that has been blurred and possibly corrupted with noise. Deconvolution refers to the case where the blur to be removed is linear and shift-invariant so it may be expressed as a convolution of the image with a point spread function. Convolution corresponds in the Fourier domain to multiplication, and deconvolution is essentially Fourier division. The challenge is that since the multipliers are often small for high frequencies, direct division is unstable and plagued by noise present in the input image. Effective deconvolution requires a balance between frequency recovery and noise suppression. Total variation (TV) regularization is a successful technique for achieving this balance in deblurring problems. It was introduced to image denoising by Rudin, Osher, and Fatemi [4] and then applied to deconvolution by Rudin and Osher [5]. In this article, we discuss TV-regularized deconvolution with Gaussian noise and its efficient solution using the split Bregman algorithm of Goldstein and Osher [16]. We show a straightforward extension for Laplace or Poisson noise and develop empirical estimates for the optimal value of the regularization parameter λ.
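
The instability of plain Fourier division is easy to see in a one-dimensional sketch: dividing by a spectrum with near-zeros amplifies noise, while even a crude damped (Tikhonov-style) division behaves much better. This is only meant to motivate the need for regularization; it is not the split Bregman TV solver of the article, and the signal, kernel, and noise level are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    x = np.cumsum(rng.normal(size=n)); x -= x.mean()            # 1D stand-in for an image
    h = np.zeros(n); h[:9] = 1.0 / 9.0                          # box blur (point spread function)
    y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))     # blur = Fourier multiplication
    y += 0.05 * rng.normal(size=n)                              # additive noise

    H, Y = np.fft.fft(h), np.fft.fft(y)
    naive = np.real(np.fft.ifft(Y / H))                         # unregularized Fourier division
    eps = 1e-2
    damped = np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H)**2 + eps)))   # Tikhonov-damped division

    err = lambda z: np.linalg.norm(z - x) / np.linalg.norm(x)
    print(f"naive division error: {err(naive):.2f}   damped division error: {err(damped):.2f}")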

79 citations


Journal ArticleDOI
TL;DR: This single-image method works on static images, is fully automatic, has no user parameter, requires no registration, and is able to correct a fully non-linear non-uniformity.
Abstract: The non-uniformity is a time-dependent noise caused by the lack of sensor equalization. We present here the detailed algorithm and online demo of the non-uniformity correction method by midway infrared equalization. This method was designed to suit infrared images. Nevertheless, it can be applied to images produced, for example, by scanners or by push-broom satellites. This single-image method works on static images, is fully automatic, has no user parameter, and requires no registration. It needs no camera motion compensation, no closed-aperture sensor equalization, and is able to correct a fully non-linear non-uniformity.
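
The "midway" idea underlying the method can be illustrated on two signals: each value is mapped to the average of same-rank order statistics, so both signals end up with a common midway histogram. The article applies this kind of specification across the lines or columns of a single infrared image; the two-signal sketch below only conveys the principle, and the synthetic data are placeholders.

    import numpy as np

    def midway_pair(a, b):
        """Midway equalization of two equal-length samples: each value is mapped
        to the average of the same-rank order statistics of the two samples, so
        both end up with a common 'midway' histogram."""
        ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))   # ranks
        mid = 0.5 * (np.sort(a) + np.sort(b))                           # midway order statistics
        return mid[ra], mid[rb]

    col1 = np.random.normal(10, 2, 1000)      # two columns with different sensor response
    col2 = np.random.normal(12, 3, 1000)
    eq1, eq2 = midway_pair(col1, col2)
    print(round(eq1.mean(), 2), round(eq2.mean(), 2))   # the two columns now share one histogram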

65 citations


Journal ArticleDOI
TL;DR: The implementation of the K-SVD-based image denoising algorithm is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
Abstract: K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
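
The characteristic K-SVD step is the dictionary update: for each atom, the signals that use it are gathered, and the atom and its coefficients are replaced by the best rank-1 approximation of the corresponding residual. The sketch below shows that single step; the sparse coding stage (for example OMP) is assumed already done, and the synthetic data and sizes are placeholders.

    import numpy as np

    def ksvd_atom_update(D, X, Y, k):
        """One K-SVD dictionary-update step for atom k: the atom and its
        coefficients are replaced by the rank-1 SVD approximation of the
        residual restricted to the signals that actually use the atom."""
        using = np.flatnonzero(X[k, :])              # signals whose sparse code uses atom k
        if using.size == 0:
            return D, X
        E = Y[:, using] - D @ X[:, using] + np.outer(D[:, k], X[k, using])  # residual without atom k
        U, S, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                            # updated (unit-norm) atom
        X[k, using] = S[0] * Vt[0, :]                # updated coefficients
        return D, X

    rng = np.random.default_rng(0)
    Y = rng.normal(size=(64, 200))                   # 200 signals (e.g. vectorized 8x8 patches)
    D = rng.normal(size=(64, 32)); D /= np.linalg.norm(D, axis=0)
    X = rng.normal(size=(32, 200)) * (rng.random((32, 200)) < 0.1)   # synthetic sparse codes
    D, X = ksvd_atom_update(D, X, Y, k=5)
    print(np.linalg.norm(D[:, 5]))                   # the updated atom stays unit norm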

53 citations


Journal ArticleDOI
Pascal Getreuer
TL;DR: Mosaicked contour stencils first estimate the image contour orientations directly from the mosaicked data, and are then used to guide a simple demosaicking method based on graph regularization.
Abstract: Demosaicking (or demosaicing) is the problem of interpolating full color information on an image where only one color component is known at each pixel. Most demosaicking methods involve some kind of estimation of the underlying image structure, for example, choosing adaptively between interpolating in the horizontal or vertical direction. This article discusses the implementation details of the method introduced in Getreuer, “Color Demosaicing with Contour Stencils,” 2011. Mosaicked contour stencils first estimate the image contour orientations directly from the mosaicked data. The mosaicked contour stencils are then used to guide a simple demosaicking method based on graph regularization.
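
For readers unfamiliar with the setup, the sketch below only simulates the Bayer color filter array, that is, the mosaicked input in which each pixel carries a single color component. It does not implement the contour stencils or the graph-regularization interpolation, and the RGGB layout chosen here is just one common convention.

    import numpy as np

    def bayer_mosaic(rgb):
        """Simulate an RGGB Bayer color filter array: each pixel of the
        mosaicked image keeps a single color component of the input."""
        h, w, _ = rgb.shape
        mosaic = np.zeros((h, w))
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]      # R
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]      # G
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]      # G
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]      # B
        return mosaic

    print(bayer_mosaic(np.random.rand(8, 8, 3)).shape)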

Journal ArticleDOI
TL;DR: The proposed algorithm performs a piecewise affine transform of the intensity levels of a digital image such that the new histogram function will be approximately uniform, but where the stretching of the range is locally controlled to avoid brutal noise enhancement.
Abstract: This paper presents a simple contrast enhancement algorithm based on histogram equalization (HE). The proposed algorithm performs a piecewise affine transform of the intensity levels of a digital image such that the new histogram function will be approximately uniform (as with HE), but where the stretching of the range is locally controlled to avoid brutal noise enhancement. We call this algorithm Piecewise Affine Equalization (PAE). Several experiments show that, in general, the new algorithm improves HE results. The proposed algorithm has been implemented in ANSI C; the source code, the code documentation, and the online demo are accessible from the web page of this article.
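
The HE baseline that PAE modifies is a single global lookup table built from the cumulative histogram, as in the sketch below; PAE instead uses a piecewise affine map whose local slopes are bounded to limit noise amplification. The sketch shows the baseline only, not PAE itself, and the test image is a placeholder.

    import numpy as np

    def histogram_equalize(img, levels=256):
        """Plain histogram equalization: map each gray level through the
        normalized cumulative histogram so the output histogram is
        approximately uniform."""
        hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
        cdf = np.cumsum(hist) / img.size
        lut = np.round((levels - 1) * cdf).astype(np.uint8)      # global lookup table
        return lut[img]

    img = (np.random.beta(2, 5, (128, 128)) * 255).astype(np.uint8)   # low-contrast, dark test image
    print(histogram_equalize(img).mean() > img.mean())                # equalization brightens it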

Journal ArticleDOI
TL;DR: An algorithm to compute morphological snake evolution in real time is presented, based on numerical methods which are very simple and fast and which do not require estimating a contour distance function to define the level set.
Abstract: Active contours, or snakes, are computer-generated curves that move within images to find salient image structures like object boundaries. Energy-based formulations using a level set approach have been successfully used to model the snake evolution. The Euler-Lagrange equation associated with such energies yields partial differential equations (PDEs), which are usually solved using level set methods; these involve contour distance function estimation and standard methods to discretize the PDE. Recently we have proposed a morphological approach to snake evolution. First, we observe that the differential operators used in the standard PDE snake models can be approximated using morphological operations. By combining the morphological operators associated with the PDE components we achieve a new morphological approach to PDE snake evolution. This new approach is based on numerical methods which are very simple and fast. Moreover, since the level set is just a binary piecewise constant function, this approach does not require estimating a contour distance function to define the level set. In this paper we present an algorithm to compute morphological snake evolution in real time.
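
A heavily simplified sketch of the flavor of this approach: the level set is a binary image, the balloon force becomes a dilation or erosion, and the curvature term is approximated here by a binary median filter. The article instead composes sup-inf and inf-sup morphological operators and adds an image attachment term, both of which are omitted below.

    import numpy as np
    from scipy.ndimage import binary_dilation, binary_erosion, median_filter

    def morphological_snake_step(levelset, inflate=True):
        """One highly simplified morphological snake step on a binary level set:
        a dilation (or erosion) plays the role of the balloon force, and a
        binary median filter stands in for the curvature/smoothing term."""
        u = binary_dilation(levelset) if inflate else binary_erosion(levelset)
        return median_filter(u.astype(np.uint8), size=3).astype(bool)

    levelset = np.zeros((64, 64), dtype=bool)
    levelset[28:36, 28:36] = True                    # initial contour: a small square
    for _ in range(10):
        levelset = morphological_snake_step(levelset)
    print(levelset.sum())                            # the region has grown and been smoothed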

Journal ArticleDOI
TL;DR: An algorithm for the local subpixel estimation of the Point Spread Function (PSF) that models the intrinsic camera blur, using the Bernoulli(0.5) random noise calibration pattern; the PSF estimates reach stringent accuracy levels, with a relative error on the order of 2% to 5%.
Abstract: This work presents an algorithm for the local subpixel estimation of the Point Spread Function (PSF) that models the intrinsic camera blur. For this purpose, the Bernoulli(0.5) random noise calibration pattern introduced in a previous article [1] is used. This leads to a well-posed, near-optimal, accurate estimation. First, the pattern position and its illumination conditions are accurately estimated. This allows for accurate geometric registration and radiometric correction. Once these procedures are performed, the local PSF can be directly computed by inverting a linear system. This system is well-posed and consequently its inversion does not require any regularization or prior model. The PSF estimates reach stringent accuracy levels, with a relative error on the order of 2% to 5%.
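
The core linear-system idea can be sketched as follows: once the sharp calibration pattern is known and registered, each blurred pixel is a dot product between a window of the pattern and the PSF, so the kernel follows from least squares. The sketch omits the subpixel registration, illumination correction, and super-resolved sampling performed in the article; the pattern size, kernel size, and noise level are arbitrary.

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)
    pattern = (rng.random((64, 64)) < 0.5).astype(float)        # known Bernoulli(0.5) pattern
    true_psf = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1])
    true_psf = true_psf / true_psf.sum()
    observed = convolve2d(pattern, true_psf, mode='valid') + 0.001 * rng.normal(size=(60, 60))

    # Blurring is linear in the kernel coefficients: each observed pixel is a dot
    # product between a 5x5 window of the (known, registered) pattern and the PSF.
    k = 5
    rows = []
    for i in range(observed.shape[0]):
        for j in range(observed.shape[1]):
            rows.append(pattern[i:i + k, j:j + k][::-1, ::-1].ravel())   # flip = convolution
    A = np.array(rows)
    h, *_ = np.linalg.lstsq(A, observed.ravel(), rcond=None)
    print(np.abs(h.reshape(k, k) - true_psf).max())             # small estimation error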

Journal ArticleDOI
TL;DR: The proposed method simulates an embedded flutter shutter camera implemented either analogically or numerically, and computes its performance, and the exact SNR of the deconvolved result is also computed.
Abstract: The proposed method simulates an embedded flutter shutter camera implemented either analogically or numerically, and computes its performance. The goal of the flutter shutter is to make motion blur invertible, by a “fluttering” shutter that opens and closes on a well chosen sequence of time intervals. In the simulations the motion is assumed uniform, and the user can choose its velocity. Several types of flutter shutter codes are tested and evaluated: the original ones considered by the inventors, the classic motion blur, and finally several analog or numerical optimal codes proposed recently. In all cases the exact SNR of the deconvolved result is also computed.
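
The invertibility argument can be illustrated by comparing kernel spectra: a box kernel (shutter constantly open) has Fourier zeros, while a good binary open/close code keeps its spectrum bounded away from zero. The crude random search below is only a stand-in for the optimized codes evaluated in the article, and the lengths are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    n, length = 64, 52
    box = np.zeros(n); box[:length] = 1.0                   # classic motion blur kernel (shutter always open)
    best_code, best_min = None, 0.0
    for _ in range(2000):                                   # naive random search for a good binary code
        code = np.zeros(n); code[:length] = rng.integers(0, 2, length)
        m = np.abs(np.fft.rfft(code)).min()
        if m > best_min:
            best_code, best_min = code, m
    print(f"smallest |DFT|: box blur {np.abs(np.fft.rfft(box)).min():.4f}, "
          f"best random flutter code {best_min:.4f}")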