scispace - formally typeset
Author

Yilun Wang

Other affiliations: Cornell University, Rice University
Bio: Yilun Wang is an academic researcher at the University of Electronic Science and Technology of China. The author has contributed to research on topics including computer science and deblurring, has an h-index of 11, and has co-authored 40 publications receiving 2,835 citations. Previous affiliations of Yilun Wang include Cornell University and Rice University.

Papers
Journal ArticleDOI
TL;DR: An alternating minimization algorithm is proposed for recovering images from blurry and noisy observations with total variation (TV) regularization; it arises from a new half-quadratic model applicable to both the anisotropic and isotropic forms of TV discretization.
Abstract: We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of TV discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms. We establish strong convergence properties for the algorithm including finite convergence for some variables and relatively fast exponential (or $q$-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for TV-based deblurring. Some extensions of our algorithm are also discussed.
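The alternating scheme described above can be sketched as follows. This is a minimal single-channel illustration under assumed periodic boundary conditions and a fixed penalty parameter `beta` (no continuation scheme); the function name `ftvd_sketch` and all parameter values are hypothetical, and the naive FFT count here is higher than the three FFTs per iteration achieved by a careful implementation.

```python
import numpy as np

def ftvd_sketch(f, kernel, mu=1e3, beta=10.0, iters=50):
    """Hypothetical sketch of half-quadratic alternating minimization for
    isotropic TV deblurring: min_x sum ||D_i x|| + (mu/2)||Kx - f||^2.
    Assumes periodic boundaries, grayscale input, fixed beta."""
    # Transfer functions of the blur K and the finite-difference operators
    K  = np.fft.fft2(kernel, s=f.shape)
    Dx = np.fft.fft2(np.array([[1, -1]]), s=f.shape)
    Dy = np.fft.fft2(np.array([[1], [-1]]), s=f.shape)
    denom = beta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2) + mu * np.abs(K) ** 2
    KtF = np.conj(K) * np.fft.fft2(f)
    x = f.copy()
    for _ in range(iters):
        # w-subproblem: closed-form 2-D isotropic shrinkage of the gradient
        gx = np.real(np.fft.ifft2(Dx * np.fft.fft2(x)))
        gy = np.real(np.fft.ifft2(Dy * np.fft.fft2(x)))
        mag = np.maximum(np.sqrt(gx ** 2 + gy ** 2), 1e-12)
        shrink = np.maximum(mag - 1.0 / beta, 0.0) / mag
        wx, wy = shrink * gx, shrink * gy
        # x-subproblem: one linear solve, diagonal in the Fourier domain
        rhs = beta * (np.conj(Dx) * np.fft.fft2(wx)
                      + np.conj(Dy) * np.fft.fft2(wy)) + mu * KtF
        x = np.real(np.fft.ifft2(rhs / denom))
    return x
```

The key design point is that both subproblems have closed-form solutions: the shrinkage step is pointwise, and the quadratic step diagonalizes under the FFT because circular convolutions commute.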

1,883 citations

Journal ArticleDOI
TL;DR: A simple and efficient algorithm for multichannel image deblurring and denoising, applicable to both within-channel and cross-channel blurs in the presence of additive Gaussian noise is constructed.
Abstract: Variational models with $\ell_1$-norm based regularization, in particular total variation (TV) and its variants, have long been known to offer superior image restoration quality, but processing speed remained a bottleneck, preventing their widespread use in the practice of color image processing. In this paper, by extending the grayscale image deblurring algorithm proposed in [Y. Wang, J. Yang, W. Yin, and Y. Zhang, SIAM J. Imaging Sci., 1 (2008), pp. 248-272], we construct a simple and efficient algorithm for multichannel image deblurring and denoising, applicable to both within-channel and cross-channel blurs in the presence of additive Gaussian noise. The algorithm restores an image by minimizing an energy function consisting of an $\ell_2$-norm fidelity term and a regularization term that can be either TV, weighted TV, or regularization functions based on higher-order derivatives. Specifically, we use a multichannel extension of the classic TV regularizer (MTV) and derive our algorithm from an extended half-quadratic transform of Geman and Yang [IEEE Trans. Image Process., 4 (1995), pp. 932-946]. For three-channel color images, the per-iteration computation of this algorithm is dominated by six fast Fourier transforms. The convergence results in [Y. Wang, J. Yang, W. Yin, and Y. Zhang, SIAM J. Imaging Sci., 1 (2008), pp. 248-272] for single-channel images, including global convergence with a strong $q$-linear rate and finite convergence for some quantities, are extended to this algorithm. We present numerical results including images recovered from various types of blurs, comparisons between our results and those obtained from the deblurring functions in MATLAB's Image Processing Toolbox, as well as images recovered by our algorithm using weighted MTV and higher-order regularization. Our numerical results indicate that the processing speed, as attained by the proposed algorithm, of variational models with TV-like regularization can be made comparable to that of less sophisticated but widely used methods for color image restoration.

483 citations

Journal ArticleDOI
Yilun Wang, Wotao Yin1
TL;DR: An efficient implementation of ISD, called threshold-ISD, is introduced for recovering signals with fast-decaying distributions of nonzeros from compressive sensing measurements; it is compared against two state-of-the-art algorithms, the iterative reweighted $\ell_1$ minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS).
Abstract: We present a novel sparse signal reconstruction method, iterative support detection (ISD), aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical $\ell_1$ minimization approach. ISD addresses failed reconstructions of $\ell_1$ minimization due to insufficient measurements. It estimates a support set $I$ from a current reconstruction and obtains a new reconstruction by solving the minimization problem $\min\{\sum_{i\notin I}|x_i| : Ax=b\}$, and it iterates these two steps for a small number of times. ISD differs from the orthogonal matching pursuit method, as well as its variants, because (i) the index set $I$ in ISD is not necessarily nested or increasing, and (ii) the minimization problem above updates all the components of $x$ at the same time. We generalize the null space property to the truncated null space property and present our analysis of ISD based on the latter. We introduce an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast decaying distributions of nonzeros from compressive sensing measurements. Numerical experiments show that threshold-ISD has significant advantages over the classical $\ell_1$ minimization approach, as well as two state-of-the-art algorithms: the iterative reweighted $\ell_1$ minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS). MATLAB code is available for download from http://www.caam.rice.edu/optimization/L1/ISD/.
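The two-step ISD loop can be illustrated as follows. This sketch replaces the paper's exact constrained truncated-$\ell_1$ solve with a simple penalized proximal-gradient surrogate, and uses a crude relative threshold in place of the paper's support-detection rule; all names and parameter values are hypothetical.

```python
import numpy as np

def truncated_l1(A, b, I, lam=1e-3, steps=2000):
    """Penalized stand-in for  min sum_{i not in I} |x_i|  s.t.  Ax = b.
    (The paper solves the constrained problem; here we run proximal
    gradient on  lam*||x_free||_1 + 0.5*||Ax - b||^2.)"""
    n = A.shape[1]
    x = np.zeros(n)
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/L for the smooth part
    free = np.ones(n, dtype=bool)
    free[list(I)] = False                      # detected support carries no penalty
    for _ in range(steps):
        x = x - t * A.T @ (A @ x - b)          # gradient step on the fidelity
        x[free] = np.sign(x[free]) * np.maximum(np.abs(x[free]) - t * lam, 0.0)
    return x

def threshold_isd(A, b, outer=4):
    """Sketch of threshold-ISD: alternate support detection (a crude relative
    threshold standing in for the paper's rule) with truncated-L1 recovery."""
    x = truncated_l1(A, b, I=[])               # iteration 0: plain L1 minimization
    for _ in range(outer):
        I = np.flatnonzero(np.abs(x) > 0.1 * np.abs(x).max())
        x = truncated_l1(A, b, I)
    return x
```

Note the contrast with matching pursuit highlighted in the abstract: the detected set `I` is recomputed from scratch each pass (not grown monotonically), and every component of `x` is updated jointly in the inner solve.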

255 citations

01 Jan 2007
TL;DR: A simple algorithmic framework is proposed for recovering images from blurry and noisy observations using total variation (TV) regularization when the blurring point-spread function is given; it runs orders of magnitude faster than a number of existing algorithms for solving TVL2-based deconvolution problems to good accuracy.
Abstract: We propose and test a simple algorithmic framework for recovering images from blurry and noisy observations based on total variation (TV) regularization when a blurring point-spread function is given. Using a splitting technique, we construct an iterative procedure of alternately solving a pair of easy subproblems associated with an increasing sequence of penalty parameter values. The main computation at each iteration is three fast Fourier transforms (FFTs). We present numerical results showing that a rudimentary implementation of our algorithm already performs favorably in comparison with two of the existing state-of-the-art algorithms. In particular, it runs orders of magnitude faster than a number of existing algorithms for solving TVL2-based deconvolution problems to good accuracies.

162 citations

Journal ArticleDOI
TL;DR: An active contour model and its corresponding algorithms, with detailed implementation, are presented for image segmentation; a modified algorithm is proposed that is less sensitive to the initialization of the contour and speeds up the convergence rate.

82 citations


Cited by

Journal ArticleDOI
TL;DR: The guided filter is a novel explicit image filter derived from a local linear model that can be used as an edge-preserving smoothing operator like the popular bilateral filter, but it has better behaviors near edges.
Abstract: In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or another different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behaviors near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.
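The local linear model behind the guided filter can be sketched in a few lines. This is a grayscale illustration only (the paper also handles color guidance); the box filter uses an integral image with border clipping, and all names and parameter values are hypothetical.

```python
import numpy as np

def box(a, r):
    """Local mean over a (2r+1)x(2r+1) window via an integral image;
    windows are clipped at the image borders."""
    h, w = a.shape
    s = np.zeros((h + 1, w + 1))
    s[1:, 1:] = np.cumsum(np.cumsum(a, axis=0), axis=1)
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        i0, i1 = max(i - r, 0), min(i + r + 1, h)
        for j in range(w):
            j0, j1 = max(j - r, 0), min(j + r + 1, w)
            area = (i1 - i0) * (j1 - j0)
            out[i, j] = (s[i1, j1] - s[i0, j1] - s[i1, j0] + s[i0, j0]) / area
    return out

def guided_filter(I, p, r=2, eps=1e-4):
    """Grayscale guided filter: in each window, fit p ~ a*I + b by local
    linear regression on the guidance I, then average the coefficients."""
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)      # eps sets the edge-preserving threshold
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)
```

Because every step is a box filter, the cost is independent of both the kernel radius and the intensity range, which is the source of the linear-time claim in the abstract.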

4,730 citations

Journal ArticleDOI
TL;DR: This paper proposes a “split Bregman” method, which can solve a very broad class of L1-regularized problems, and applies this technique to the Rudin-Osher-Fatemi functional for image denoising and to a compressed sensing problem that arises in magnetic resonance imaging.
Abstract: The class of L1-regularized optimization problems has received much attention recently because of the introduction of “compressed sensing,” which allows images and signals to be reconstructed from small amounts of data. Despite this recent attention, many L1-regularized problems still remain difficult to solve, or require techniques that are very problem-specific. In this paper, we show that Bregman iteration can be used to solve a wide variety of constrained optimization problems. Using this technique, we propose a “split Bregman” method, which can solve a very broad class of L1-regularized problems. We apply this technique to the Rudin-Osher-Fatemi functional for image denoising and to a compressed sensing problem that arises in magnetic resonance imaging.
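Applied to anisotropic TV denoising of the Rudin-Osher-Fatemi type, the split Bregman iteration can be sketched as follows, assuming periodic boundaries so that the u-subproblem reduces to a single Fourier-domain solve (the paper uses cheap approximate solvers such as a Gauss-Seidel sweep); the function name and parameter values are hypothetical.

```python
import numpy as np

def split_bregman_tv(f, mu=10.0, lam=5.0, iters=30):
    """Sketch of split Bregman for anisotropic TV denoising,
    min_u |grad u|_1 + (mu/2)||u - f||^2, with the splitting d = grad u
    enforced through Bregman variables b. Periodic boundaries assumed."""
    Dx = np.fft.fft2(np.array([[1, -1]]), s=f.shape)    # transfer functions of
    Dy = np.fft.fft2(np.array([[1], [-1]]), s=f.shape)  # the difference operators
    denom = mu + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    F_f = np.fft.fft2(f)
    for _ in range(iters):
        # u-step: quadratic problem, diagonal in the Fourier domain
        rhs = mu * F_f + lam * (np.conj(Dx) * np.fft.fft2(dx - bx)
                                + np.conj(Dy) * np.fft.fft2(dy - by))
        u = np.real(np.fft.ifft2(rhs / denom))
        ux = np.real(np.fft.ifft2(Dx * np.fft.fft2(u)))
        uy = np.real(np.fft.ifft2(Dy * np.fft.fft2(u)))
        # d-step: componentwise soft shrinkage, then the Bregman update
        dx, dy = soft(ux + bx, 1.0 / lam), soft(uy + by, 1.0 / lam)
        bx, by = bx + ux - dx, by + uy - dy
    return u
```

The Bregman variables `bx, by` accumulate the constraint residual, which is what lets a fixed penalty `lam` enforce d = grad u exactly in the limit instead of requiring a continuation scheme.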

4,255 citations