scispace - formally typeset
Author

Triet M. Le

Bio: Triet M. Le is an academic researcher from Yale University who has contributed to research on image texture and regularization (mathematics). The author has an h-index of 10 and has co-authored 18 publications receiving 935 citations. Previous affiliations of Triet M. Le include the National Geospatial-Intelligence Agency and the University of California, Los Angeles.

Papers
Journal ArticleDOI
TL;DR: A new variational model to denoise an image corrupted by Poisson noise uses total-variation regularization, which preserves edges; the strength of the regularization is signal dependent, precisely like Poisson noise.
Abstract: We propose a new variational model to denoise an image corrupted by Poisson noise. Like the ROF model described in [1] and [2], the new model uses total-variation regularization, which preserves edges. Unlike the ROF model, our model uses a data-fidelity term that is suitable for Poisson noise. The result is that the strength of the regularization is signal dependent, precisely like Poisson noise. Noise of varying scales will be removed by our model, while preserving low-contrast features in regions of low intensity.
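
A minimal numerical sketch of this model (assumed details: smoothed total variation, periodic boundaries, and plain gradient descent with a clipped step for stability; the function name and parameter values are illustrative, not the authors' exact scheme):

```python
import numpy as np

def tv_poisson_denoise(f, lam=0.2, step=0.05, n_iter=200, eps=1e-3):
    """Gradient descent on E(u) = TV(u) + lam * sum(u - f*log u),
    i.e. total-variation regularization with a Poisson-suited
    data-fidelity term. Illustrative sketch only."""
    u = np.maximum(f.astype(float), eps)
    for _ in range(n_iter):
        # smoothed TV: normalize forward differences, then take the divergence
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Poisson fidelity derivative lam*(1 - f/u): the pull toward the data
        # weakens where u is large, so regularization is signal dependent
        grad = -div + lam * (1.0 - f / u)
        u = np.maximum(u - step * np.clip(grad, -10.0, 10.0), eps)
    return u
```

The `lam * (1 - f/u)` term is where the abstract's signal-dependent regularization strength appears: bright regions tolerate larger absolute deviations, while low-intensity, low-contrast features are fit more tightly.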

412 citations

Journal ArticleDOI
TL;DR: This paper converts the linear model, which reduces to a low-pass/high-pass filter pair, into a nonlinear filter pair involving the total variation, which retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model.
Abstract: Can images be decomposed into the sum of a geometric part and a textural part? In a theoretical breakthrough, [Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. Providence, RI: American Mathematical Society, 2001] proposed variational models that force the geometric part into the space of functions with bounded variation, and the textural part into a space of oscillatory distributions. Meyer's models are simple minimization problems extending the famous total variation model. However, their numerical solution has proved challenging, and they have given rise to a literature rich in variants and numerical attempts. This paper starts with the linear model, which reduces to a low-pass/high-pass filter pair. A simple conversion of the linear filter pair into a nonlinear filter pair involving the total variation is introduced. The newly proposed nonlinear filter pair retains both the essential features of Meyer's models and the simplicity and rapidity of the linear model. It depends upon only one transparent parameter: the texture scale, measured in pixel mesh. Comparative experiments show a better and faster separation of cartoon from texture. One application is illustrated: edge detection.
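
The nonlinear filter pair can be sketched as follows: replace a pixel by its low-pass value only where low-pass filtering strongly reduces the local total variation (a textured region), and keep the original value where it does not (an edge). The helper names and threshold values below are illustrative choices, not the paper's:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian low-pass filter (periodic boundaries)."""
    r = int(3 * sigma)
    offsets = np.arange(-r, r + 1)
    k = np.exp(-offsets**2 / (2 * sigma**2))
    k /= k.sum()
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for o, wk in zip(offsets, k):
            acc += wk * np.roll(out, o, axis=axis)
        out = acc
    return out

def local_tv(img, sigma):
    """Gaussian-smoothed gradient magnitude (local total variation)."""
    gx = np.roll(img, -1, axis=1) - img
    gy = np.roll(img, -1, axis=0) - img
    return gaussian_blur(np.sqrt(gx**2 + gy**2), sigma)

def cartoon_texture(f, sigma=2.0):
    """Nonlinear low-pass/high-pass pair: blend f with its low-pass version
    according to the relative reduction of local total variation."""
    f = f.astype(float)
    lf = gaussian_blur(f, sigma)
    rel = (local_tv(f, sigma) - local_tv(lf, sigma)) / (local_tv(f, sigma) + 1e-12)
    w = np.clip((rel - 0.25) / 0.5, 0.0, 1.0)  # soft threshold on the reduction
    u = w * lf + (1 - w) * f                   # cartoon part
    return u, f - u                            # cartoon, texture
```

Oscillating texture loses almost all of its local total variation under blurring (so it goes to the texture channel), while a sharp edge's smoothed gradient is merely spread out, not reduced (so the edge stays in the cartoon).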

203 citations

Journal ArticleDOI
TL;DR: This paper is devoted to the decomposition of an image f into u + v, with u a piecewise-smooth or "cartoon" component, and v an oscillatory component (texture or noise) in a variational approach.
Abstract: This paper is devoted to the decomposition of an image f into u + v, with u a piecewise-smooth or "cartoon" component, and v an oscillatory component (texture or noise), in a variational approach. ...

92 citations

Journal ArticleDOI
TL;DR: In this paper, a variational approach is proposed to decompose an image into a piecewise-smooth or "cartoon" component u and an oscillatory component v (texture or noise); the weaker spaces of generalized functions G = div(L^∞), F = div(BMO), and E = Ḃ^{−1}_{∞,∞} are proposed to model v, in place of the standard L² space, while keeping u ∈ BV, a function of bounded variation.
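
The model family referred to here can be written compactly (Meyer's formulation, with ‖·‖_* standing for the norm of whichever texture space G, F, or E is chosen):

```latex
\inf_{(u,v)\,:\,f = u + v} \; |u|_{BV} + \lambda \,\|v\|_{*},
\qquad
\|v\|_{G} = \inf\Bigl\{ \bigl\|\,|g|\,\bigr\|_{L^\infty} \;:\; v = \operatorname{div} g,\ g = (g_1, g_2) \Bigr\}.
```

The G-norm is small precisely for highly oscillatory v, which is why these weaker spaces capture texture better than L².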

86 citations

Journal ArticleDOI
TL;DR: An RKHS framework for image and video colorization is proposed, together with theory, a practical algorithm, and a number of numerical experiments.
Abstract: Motivated by the setting of reproducing kernel Hilbert spaces (RKHS) and its extensions considered in machine learning, we propose an RKHS framework for image and video colorization. We review and study RKHS, especially in vectorial cases, and provide various extensions for colorization problems. Both theory and a practical algorithm are presented, together with a number of numerical experiments.
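
As a toy illustration of the RKHS viewpoint, chrominance can be learned by kernel ridge regression from a few labeled pixels and then evaluated everywhere. The features, kernel, and parameter values below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def colorize_krr(gray, seeds, colors, gamma=0.5, lam=1e-3):
    """Kernel ridge regression in an RKHS: learn chrominance as a function
    of (x, y, intensity) from a few seed pixels, then evaluate on the whole
    image. seeds: list of (row, col); colors: one chrominance vector each."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([xx.ravel() / w, yy.ravel() / h,
                      gray.ravel()], axis=1)            # per-pixel features
    S = feats[[r * w + c for (r, c) in seeds]]          # seed features

    def k(a, b):  # Gaussian (RBF) kernel between feature sets
        d2 = ((a[:, None, :] - b[None, :, :])**2).sum(-1)
        return np.exp(-d2 / gamma)

    # ridge-regularized kernel system for the representer coefficients
    alpha = np.linalg.solve(k(S, S) + lam * np.eye(len(S)),
                            np.asarray(colors, dtype=float))
    out = k(feats, S) @ alpha                           # chrominance per pixel
    return out.reshape(h, w, -1)
```

Including the gray intensity in the feature vector makes colors follow intensity boundaries, the usual heuristic in scribble-based colorization.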

60 citations


Cited by
01 Jan 2016
An Introduction to the Theory of Point Processes (monograph; no abstract available for this entry).

903 citations

Journal ArticleDOI
TL;DR: A unified theory of neighborhood filters and reliable criteria to compare them to other filter classes are presented; it is demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes, and that space-time NL-means preserves more movie details.
Abstract: Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented, justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods, three principles will be discussed. The first principle, "method noise", specifies that only noise must be removed from an image. A second principle will be introduced, "noise to noise", according to which a denoising method must transform a white noise into a white noise. Contrary to "method noise", this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. "Noise to noise" will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, "statistical optimality", is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will first be shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the "noise to noise" principle. Third, that among them NL-means is closest to statistical optimality. Particular attention will be paid to the application of the statistical optimality criterion to movie denoising methods. It will be pointed out that current movie denoising methods are motion-compensated neighborhood filters. This amounts to saying that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately, the aperture problem makes it impossible to estimate ground-truth trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes, and that space-time NL-means preserves more movie details.
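
The NL-means filter discussed above can be sketched directly: each pixel becomes a weighted average of pixels whose surrounding patches look similar, wherever they sit in the search window. A slow reference sketch for small images; parameter values are illustrative:

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=10.0):
    """Direct NL-means: patch-similarity-weighted averaging over a
    search window. O(N * search^2) per pixel, for illustration only."""
    p, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), p + s, mode="reflect")
    h2 = h * h
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]  # reference patch
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    qi, qj = ci + di, cj + dj
                    q = pad[qi - p:qi + p + 1, qj - p:qj + p + 1]
                    # weight decays with mean squared patch difference
                    wgt = np.exp(-((ref - q)**2).mean() / h2)
                    num += wgt * pad[qi, qj]
                    den += wgt
            out[i, j] = num / den
    return out
```

Because similar patches of a white-noise image are themselves interchangeable noise samples, averaging them maps white noise to (attenuated) white noise, which is the "noise to noise" behavior the abstract singles out.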

763 citations

Journal ArticleDOI
TL;DR: The paper shows that the correlation graph between u and v may serve as an efficient tool to select the splitting parameter, and proposes a new fast algorithm to solve the TV−L1 minimization problem.
Abstract: This paper explores various aspects of the image decomposition problem using modern variational techniques. We aim at splitting an original image f into two components u and v, where u holds the geometrical information and v holds the textural information. The focus of this paper is to study different energy terms and functional spaces that suit various types of textures. Our modeling uses the total-variation energy for extracting the structural part and one of the four following norms for the textural part: L2, G, L1, and a new tunable norm, suggested here for the first time, based on Gabor functions. Apart from the broad perspective and our suggestions as to when each model should be used, the paper contains three specific novelties: first, we show that the correlation graph between u and v may serve as an efficient tool to select the splitting parameter; second, we propose a new fast algorithm to solve the TV−L1 minimization problem; and third, we introduce the theory and design tools for the TV-Gabor model.
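
The TV−L1 splitting can be sketched with smoothed gradient descent (this is not the fast algorithm the paper proposes; periodic boundaries and the smoothing constant eps are illustrative assumptions):

```python
import numpy as np

def tv_l1(f, lam=1.0, step=0.1, n_iter=300, eps=1e-3):
    """Smoothed gradient descent on TV(u) + lam * |f - u|_1.
    The L1 fidelity removes small-scale oscillation (texture) from u
    almost regardless of its contrast. Illustrative sketch only."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # smoothed total-variation term
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # smoothed sign(u - f): subgradient of the L1 fidelity
        fid = (u - f) / np.sqrt((u - f)**2 + eps)
        u -= step * (-div + lam * fid)
    return u, f - u  # structure u, texture v
```

A contrast-invariant selection by scale is the known behavior of TV−L1: fine-scale features go to v whole, rather than being partially attenuated as with an L2 fidelity.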

659 citations

Journal ArticleDOI
TL;DR: A hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which the spectral noise differences and spatial information differences are both considered in the process of noise reduction.
Abstract: The amount of noise included in a hyperspectral image limits its application and has a negative impact on hyperspectral image classification, unmixing, target detection, and so on. In hyperspectral images, because the noise intensity in different bands is different, to better suppress the noise in the high-noise-intensity bands and preserve the detailed information in the low-noise-intensity bands, the denoising strength should be adaptively adjusted with the noise intensity in the different bands. Meanwhile, in the same band, there exist different spatial property regions, such as homogeneous regions and edge or texture regions; to better reduce the noise in the homogeneous regions and preserve the edge and texture information, the denoising strength applied to pixels in different spatial property regions should also be different. Therefore, in this paper, we propose a hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which the spectral noise differences and spatial information differences are both considered in the process of noise reduction. To reduce the computational load in the denoising process, the split Bregman iteration algorithm is employed to optimize the spectral-spatial hyperspectral TV model and accelerate the speed of hyperspectral image denoising. A number of experiments illustrate that the proposed approach can satisfactorily realize the spectral-spatial adaptive mechanism in the denoising process, and superior denoising results are produced.
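
The spectral-adaptive part of the idea, estimating per-band noise and letting the denoising strength grow with it, can be sketched as follows (a hedged illustration only; the spatial adaptivity and the split Bregman solver are omitted, and the robust scale estimator is an illustrative choice):

```python
import numpy as np

def band_adaptive_weights(cube):
    """Per-band relative denoising strengths for a hyperspectral cube of
    shape (bands, rows, cols). Noise is estimated per band via the median
    absolute deviation of horizontal pixel differences (a robust scale
    estimate), so noisier bands receive proportionally stronger smoothing."""
    diffs = np.diff(cube, axis=2)                            # horizontal differences
    sigma = np.median(np.abs(diffs), axis=(1, 2)) / 0.6745   # robust noise scale
    return sigma / (sigma.mean() + 1e-12)                    # relative strengths
```

These weights would then scale the TV regularization parameter band by band, so high-noise bands are smoothed harder while detail in clean bands is preserved.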

520 citations

Journal ArticleDOI
TL;DR: This paper proposes an approach to deconvolving Poissonian images based upon the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms.
Abstract: Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based upon multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images, which is based upon an alternating direction optimization method. The standard regularization [or maximum a posteriori (MAP)] restoration criterion, which combines the Poisson log-likelihood with a (nonsmooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is nonquadratic and nonseparable, the regularizer is nonsmooth, and there is a nonnegativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total-variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy.
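
A minimal instance of this ADMM splitting can be sketched with nonnegativity as the only prior (the paper's total-variation or frame regularizers would replace the w-step; periodic convolution, the penalty parameter, and the iteration count are illustrative assumptions):

```python
import numpy as np

def admm_poisson_deconv(f, kernel, mu=1.0, n_iter=200):
    """ADMM for Poisson deconvolution: split z = H u (Poisson term, with a
    closed-form prox) and w = u (nonnegativity projection). The u-step is a
    quadratic solved exactly with FFTs. Minimal sketch only."""
    shape = f.shape
    # embed the kernel as a periodic convolution operator in Fourier space
    pk = np.zeros(shape)
    kh, kw = kernel.shape
    pk[:kh, :kw] = kernel
    pk = np.roll(pk, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    K = np.fft.fft2(pk)
    H = lambda x: np.real(np.fft.ifft2(K * np.fft.fft2(x)))
    HT = lambda x: np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(x)))
    u = f.astype(float).copy()
    z, w = H(u), u.copy()
    d1 = np.zeros(shape)
    d2 = np.zeros(shape)
    denom = np.abs(K)**2 + 1.0
    for _ in range(n_iter):
        # u-step: (H^T H + I) u = H^T (z - d1) + (w - d2), diagonal in Fourier
        rhs = HT(z - d1) + (w - d2)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # z-step: prox of the Poisson term z - f*log z (positive quadratic root)
        v = H(u) + d1
        z = ((mu * v - 1) + np.sqrt((mu * v - 1)**2 + 4 * mu * f)) / (2 * mu)
        # w-step: projection onto the nonnegative orthant
        w = np.maximum(u + d2, 0.0)
        d1 += H(u) - z   # scaled dual updates
        d2 += u - w
    return np.maximum(u, 0.0)
```

The splitting isolates each difficulty the abstract lists: the nonquadratic, nonseparable log-likelihood becomes a pixelwise prox, the nonnegativity constraint becomes a projection, and the remaining quadratic is solved exactly by FFTs.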

442 citations