Journal ArticleDOI

A New Alternating Minimization Algorithm for Total Variation Image Reconstruction

01 Jul 2008 - SIAM Journal on Imaging Sciences (Society for Industrial and Applied Mathematics) - Vol. 1, Iss. 3, pp. 248-272
TL;DR: An alternating minimization algorithm is proposed for recovering images from blurry and noisy observations with total variation (TV) regularization; it arises from a new half-quadratic model applicable to both the anisotropic and the isotropic forms of TV discretization.
Abstract: We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of TV discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms. We establish strong convergence properties for the algorithm including finite convergence for some variables and relatively fast exponential (or $q$-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for TV-based deblurring. Some extensions of our algorithm are also discussed.
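
As a rough illustration of the scheme the abstract describes, here is a minimal NumPy sketch of half-quadratic splitting for TV deblurring: an exact 2D shrinkage step for the auxiliary gradient variable alternates with an FFT-diagonalized quadratic solve for the image. Periodic boundary conditions, the helper psf2otf, and all parameter values are our assumptions, not the paper's reference implementation.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad the blur kernel to the image size, circularly center it,
    and take its FFT (a hypothetical helper mirroring MATLAB's psf2otf)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        pad = np.roll(pad, -(size // 2), axis=axis)
    return np.fft.fft2(pad)

def tv_deblur(f, kernel, mu=1e3, beta=32.0, iters=50):
    """Alternate exact 2D shrinkage on the gradient variable with an
    FFT-diagonalized least-squares solve for the image (periodic BCs)."""
    F, Finv = np.fft.fft2, np.fft.ifft2
    K = psf2otf(kernel, f.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), f.shape)
    Dy = psf2otf(np.array([[1.0], [-1.0]]), f.shape)
    denom = beta * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2) + mu * np.abs(K) ** 2
    Ktf = np.conj(K) * F(f)
    u = f.copy()
    for _ in range(iters):
        # w-subproblem: isotropic 2D shrinkage, closed form per pixel.
        Fu = F(u)
        ux = np.real(Finv(Dx * Fu))
        uy = np.real(Finv(Dy * Fu))
        mag = np.maximum(np.sqrt(ux ** 2 + uy ** 2), 1e-12)
        scale = np.maximum(mag - 1.0 / beta, 0) / mag
        wx, wy = scale * ux, scale * uy
        # u-subproblem: the normal equations diagonalize under the 2D FFT,
        # so each sweep costs a handful of FFTs, as the abstract states.
        rhs = beta * (np.conj(Dx) * F(wx) + np.conj(Dy) * F(wy)) + mu * Ktf
        u = np.real(Finv(rhs / denom))
    return u
```

Increasing beta across outer sweeps would give the continuation scheme the abstract mentions.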


Citations
Journal ArticleDOI
TL;DR: The guided filter is a novel explicit image filter derived from a local linear model that can be used as an edge-preserving smoothing operator like the popular bilateral filter, but with better behavior near edges.
Abstract: In this paper, we propose a novel explicit image filter called guided filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or a different image. The guided filter can be used as an edge-preserving smoothing operator like the popular bilateral filter [1], but it has better behavior near edges. The guided filter is also a more generic concept beyond smoothing: It can transfer the structures of the guidance image to the filtering output, enabling new filtering applications like dehazing and guided feathering. Moreover, the guided filter naturally has a fast and nonapproximate linear-time algorithm, regardless of the kernel size and the intensity range. Currently, it is one of the fastest edge-preserving filters. Experiments show that the guided filter is both effective and efficient in a great variety of computer vision and computer graphics applications, including edge-aware smoothing, detail enhancement, HDR compression, image matting/feathering, dehazing, joint upsampling, etc.
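
A minimal grayscale sketch of the local linear model may make the linear-time claim concrete: every statistic is a windowed mean computed with a box filter, so the cost is independent of the window radius. The function name and the radius and eps values are illustrative assumptions; scipy's uniform_filter stands in for the paper's O(N) box filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Within each window fit q = a*I + b by ridge regression on the
    guidance I, then average the coefficients; all statistics are box
    filters, so the cost does not depend on the radius."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p   # per-window covariance of (I, p)
    var_I = box(I * I) - mean_I * mean_I    # per-window variance of I
    a = cov_Ip / (var_I + eps)              # eps sets the edge/smooth trade-off
    b = mean_p - a * mean_I
    return box(a) * I + box(b)              # averaged local linear transform
```

Setting p = I gives the edge-preserving smoothing use; passing a different guidance I gives the structure-transfer uses (feathering, dehazing) the abstract lists.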

4,730 citations


Cites background from "A New Alternating Minimization Algo..."

  • ...The weighted least squares filter in [8] adjusts the matrix affinities according to the image gradients and produces halo-free edge-preserving smoothing....


Proceedings Article
01 Jun 2010
TL;DR: This work presents a learning framework where features that capture these mid-level cues spontaneously emerge from image data; it is based on the convolutional decomposition of images under a sparsity constraint and is entirely unsupervised.
Abstract: Building robust low and mid-level image representations, beyond edge primitives, is a long-standing goal in vision. Many existing feature detectors spatially pool edge information, which destroys cues such as edge intersections, parallelism and symmetry. We present a learning framework where features that capture these mid-level cues spontaneously emerge from image data. Our approach is based on the convolutional decomposition of images under a sparsity constraint and is totally unsupervised. By building a hierarchy of such decompositions we can learn rich feature sets that are a robust image representation for both the analysis and synthesis of images.
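
As a hedged sketch of the sparsity-constrained convolutional decomposition, one iterative shrinkage-thresholding (ISTA-style) update is shown below: reconstruct the image as a sum of filters convolved with feature maps, take a gradient step on the reconstruction error, then soft-threshold the maps. The paper's actual optimizer (with its continuation on the sparsity weight, quoted below) differs; names and step sizes here are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def ista_step(x, filters, maps, lam=0.1, step=0.05):
    """One iterative shrinkage-thresholding step for a convolutional
    decomposition: x is approximated by sum_k filters[k] * maps[k]."""
    recon = sum(fftconvolve(z, f, mode="same") for z, f in zip(maps, filters))
    residual = recon - x
    new_maps = []
    for z, f in zip(maps, filters):
        # Gradient of 0.5*||recon - x||^2 w.r.t. a map is a correlation,
        # i.e. convolution with the flipped filter.
        z = z - step * fftconvolve(residual, f[::-1, ::-1], mode="same")
        # Soft-thresholding enforces the l1 sparsity penalty on the maps.
        new_maps.append(np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0))
    return new_maps
```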

1,556 citations


Cites background from "A New Alternating Minimization Algo..."

  • ...This has the effect of gradually introducing the sparsity constraint and gives good numerical stability in practice [11, 27]....


  • ...[27], x_{k,l} has a closed-form solution given by:...


Proceedings Article
07 Dec 2009
TL;DR: This paper describes a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors and is able to deconvolve a 1 megapixel image in less than ~3 seconds, achieving comparable quality to existing methods that take ~20 minutes.
Abstract: The heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian (p(x) ∝ e^{-k|x|^α}), typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel sub-problem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ~3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ~20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated.
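
The per-pixel sub-problem lends itself to a compact sketch. Under the splitting, each pixel solves min_w |w|^α + (β/2)(w - v)^2, and the abstract's LUT option amounts to tabulating that scalar minimizer over a grid of v values. The brute-force tabulation below is an illustrative stand-in for the paper's analytic cubic/quartic-root solutions; grid ranges and sizes are assumptions.

```python
import numpy as np

def build_lut(alpha=2.0 / 3.0, beta=256.0, lo=-2.0, hi=2.0, n=1001):
    """Tabulate w*(v) = argmin_w |w|**alpha + (beta/2)*(w - v)**2 by
    brute-force search over candidate w values."""
    grid = np.linspace(lo, hi, n)       # values of v to tabulate
    cand = np.linspace(lo, hi, n)       # candidate minimizers w
    # cost[i, j] = objective at w = cand[i] for v = grid[j]
    cost = (np.abs(cand[:, None]) ** alpha
            + 0.5 * beta * (cand[:, None] - grid[None, :]) ** 2)
    lut = cand[np.argmin(cost, axis=0)]
    return grid, lut

def solve_w(v, grid, lut):
    # The non-convex per-pixel sub-problem becomes one table lookup.
    return np.interp(v, grid, lut)
```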

1,226 citations


Cites background or methods or result from "A New Alternating Minimization Algo..."

  • ...[22] using a total variation (TV) norm prior and (iv) a variant of [22] using an l1 (Laplacian) prior; (v) the IRLS approach of Levin et al....


  • ...[22] showed how it could be used with a total-variation (TV) norm to deconvolve images....


  • ...Like other methods that use this form of alternating minimization [5, 6, 22], there is little theoretical guidance for setting the β schedule....


  • ...1, while our method, and the TV and l1 approaches of [22] minimize the cost in Eqn....


  • ...This allows a number of fast l1 and related TV-norm methods [17, 22] to be deployed, which give good results in a reasonable time....


Journal ArticleDOI
TL;DR: A new fast algorithm is proposed for solving one of the standard formulations of image restoration and reconstruction: an unconstrained optimization problem whose objective includes an l2 data-fidelity term and a nonsmooth regularizer.
Abstract: We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction, which consists of an unconstrained optimization problem where the objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based regularization (with orthogonal or frame-based representations) and total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods.
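
A stripped-down instance of the variable-splitting plus augmented-Lagrangian scheme is sketched below for the simplest case g(u) = u with an l1 regularizer, where ADMM reduces to a linear solve, a soft-threshold, and a dual update. The real algorithm handles wavelet- and TV-based regularizers; treat this as a minimal sketch with assumed names and parameters.

```python
import numpy as np

def admm_l2_l1(A, y, tau=0.1, rho=1.0, iters=100):
    """ADMM for min_u 0.5*||A u - y||^2 + tau*||u||_1 after the splitting
    v = u: alternate a linear solve for u, soft-thresholding for v, and a
    (scaled) dual update enforcing the constraint u = v."""
    n = A.shape[1]
    v = np.zeros(n)
    d = np.zeros(n)                       # scaled Lagrange multiplier
    AtA = A.T @ A + rho * np.eye(n)       # u-update system matrix
    Aty = A.T @ y
    for _ in range(iters):
        u = np.linalg.solve(AtA, Aty + rho * (v - d))
        # v-update: the proximal map of (tau/rho)*||.||_1 is soft-thresholding.
        v = np.sign(u + d) * np.maximum(np.abs(u + d) - tau / rho, 0.0)
        d = d + u - v                     # dual ascent on u = v
    return u
```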

1,211 citations


Cites background from "A New Alternating Minimization Algo..."

  • ...For any problem of non-trivial dimension, the matrices BW, B, and W cannot be stored explicitly, and it is costly, even impractical, to access portions (lines, columns, blocks) of them....


  • ...This leads to the constrained problem min_{u ∈ R^n, v ∈ R^d} f1(u) + f2(v) subject to g(u) = v, (9) which is clearly equivalent to the unconstrained problem (8): on the feasible set {(u, v) : g(u) = v}, the objective function in (9) coincides with that in (8)....


Journal ArticleDOI
TL;DR: Treating the hybrid precoder design as a matrix factorization problem, effective alternating minimization (AltMin) algorithms will be proposed for two different hybrid precoding structures, i.e., the fully-connected and partially-connected structures, and simulation comparisons between the two hybrid precoding structures will provide valuable design insights.
Abstract: Millimeter wave (mmWave) communications has been regarded as a key enabling technology for 5G networks, as it offers orders of magnitude greater spectrum than current cellular bands. In contrast to conventional multiple-input–multiple-output (MIMO) systems, precoding in mmWave MIMO cannot be performed entirely at baseband using digital precoders, as only a limited number of signal mixers and analog-to-digital converters can be supported considering their cost and power consumption. As a cost-effective alternative, a hybrid precoding transceiver architecture, combining a digital precoder and an analog precoder, has recently received considerable attention. However, the optimal design of such hybrid precoders has not been fully understood. In this paper, treating the hybrid precoder design as a matrix factorization problem, effective alternating minimization (AltMin) algorithms will be proposed for two different hybrid precoding structures, i.e., the fully-connected and partially-connected structures. In particular, for the fully-connected structure, an AltMin algorithm based on manifold optimization is proposed to approach the performance of the fully digital precoder, which, however, has a high complexity. Thus, a low-complexity AltMin algorithm is then proposed, by enforcing an orthogonal constraint on the digital precoder. Furthermore, for the partially-connected structure, an AltMin algorithm is also developed with the help of semidefinite relaxation. For practical implementation, the proposed AltMin algorithms are further extended to the broadband setting with orthogonal frequency division multiplexing modulation. Simulation results will demonstrate significant performance gains of the proposed AltMin algorithms over existing hybrid precoding algorithms. Moreover, based on the proposed algorithms, simulation comparisons between the two hybrid precoding structures will provide valuable design insights.
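
To make the matrix-factorization view concrete, here is a hedged sketch in which the digital precoder is updated by least squares and the analog precoder by projection onto the unit-modulus (phase-shifter) constraint. This simple projection heuristic merely stands in for the paper's manifold-optimization and semidefinite-relaxation updates; the shapes and names are assumptions.

```python
import numpy as np

def altmin_hybrid(F_opt, n_rf, iters=50, seed=0):
    """Approximate the fully digital precoder F_opt (n_t x n_s) by
    F_rf @ F_bb, where F_rf (n_t x n_rf) has unit-modulus entries
    modeling analog phase shifters."""
    rng = np.random.default_rng(seed)
    n_t = F_opt.shape[0]
    # Random phase initialization for the analog precoder.
    F_rf = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_t, n_rf)))
    for _ in range(iters):
        # Digital precoder: unconstrained least squares given F_rf.
        F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
        # Analog precoder: project onto the unit-modulus constraint by
        # keeping only the phases of the unconstrained optimum.
        F_rf = np.exp(1j * np.angle(F_opt @ F_bb.conj().T))
    return F_rf, F_bb
```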

1,079 citations


Cites methods from "A New Alternating Minimization Algo..."

  • ...has been successfully applied to many applications such as matrix completion [14], phase retrieval [36], image reconstruction [37], blind deconvolution [38] and non-negative matrix...


References
Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program: Basis Pursuit in signal processing.
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^{1/4} log^{5/2}(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an l_p ball for 0 < p ≤ 1.
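
Basis Pursuit, the linear program the TL;DR refers to, can be posed by splitting x into positive and negative parts. A minimal sketch using scipy's linprog follows, for the equality-constrained noiseless form, with names of our choosing.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. A x = y, as a linear program: write x = xp - xn
    with xp, xn >= 0, so ||x||_1 becomes the linear objective sum(xp + xn)."""
    n = A.shape[1]
    c = np.ones(2 * n)                    # l1 norm as a linear objective
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    xp, xn = res.x[:n], res.x[n:]
    return xp - xn
```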

18,609 citations

Book
01 Nov 2008
TL;DR: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization, responding to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems.
Abstract: Numerical Optimization presents a comprehensive and up-to-date description of the most effective methods in continuous optimization. It responds to the growing interest in optimization in engineering, science, and business by focusing on the methods that are best suited to practical problems. For this new edition the book has been thoroughly updated throughout. There are new chapters on nonlinear interior methods and derivative-free methods for optimization, both of which are used widely in practice and the focus of much current research. Because of the emphasis on practical methods, as well as the extensive illustrations and exercises, the book is accessible to a wide audience. It can be used as a graduate text in engineering, operations research, mathematics, computer science, and business. It also serves as a handbook for researchers and practitioners in the field. The authors have strived to produce a text that is pleasant to read, informative, and rigorous - one that reveals both the beautiful nature of the discipline and its practical side.

17,420 citations

Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.
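
The time-marching scheme the citing paper contrasts against (see the quotes below) can be sketched in a few lines: explicitly evolve u_t = div(∇u/|∇u|) - λ(u - f) with a small step. Periodic boundaries, the eps smoothing, and all parameter values are illustrative assumptions, not the article's exact scheme.

```python
import numpy as np

def rof_time_march(f, lam=0.05, dt=0.2, iters=500, eps=1e-6):
    """Explicit gradient flow for TV denoising; the small CFL-limited
    step dt is why this scheme needs many iterations."""
    u = f.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u       # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)  # eps avoids division by zero
        px, py = ux / mag, uy / mag
        # Divergence via backward differences (adjoint of forward diff).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + dt * (div - lam * (u - f))
    return u
```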

15,225 citations


"A New Alternating Minimization Algo..." refers background or methods in this paper

  • ...The artificial time-marching algorithm used by Rudin and Osher in [37] and by Rudin, Osher, and Fatemi in [38] is easy to implement, but usually takes a large number of iterations, each with a small step size governed by the CFL condition, to reach a modest accuracy....


  • ...Another well-known class of regularizers is based on TV, which was first proposed for image denoising by Rudin, Osher and Fatemi in [38], and then extended to image deblurring in [37]....


Journal ArticleDOI
TL;DR: Practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference and demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin‐echo brain imaging and 3D contrast enhanced angiography.
Abstract: The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images, such as angiograms, are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain, for example, in terms of spatial finite differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l1 norm of a transformed image, subject to data fidelity constraints.
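
A toy version of that nonlinear recovery, for an image already sparse in the pixel domain (the angiogram case), alternates soft-thresholding with re-insertion of the acquired k-space samples. This is a generic iterative-thresholding sketch, not the authors' reconstruction code; mask is assumed to be a boolean array of sampled k-space locations.

```python
import numpy as np

def ist_recover(kspace, mask, lam=0.02, iters=100):
    """Alternate soft-thresholding (which suppresses the noise-like
    aliasing) with re-insertion of the acquired k-space samples."""
    x = np.fft.ifft2(kspace)                       # zero-filled start
    for _ in range(iters):
        mag = np.maximum(np.abs(x), 1e-12)
        x = (x / mag) * np.maximum(mag - lam, 0.0)  # complex soft-threshold
        X = np.fft.fft2(x)
        X[mask] = kspace[mask]                     # enforce data consistency
        x = np.fft.ifft2(X)
    return np.abs(x)
```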

6,653 citations

Posted Content
TL;DR: It is shown that x_0 can be recovered accurately from the incomplete and contaminated observations y.
Abstract: Suppose we wish to recover an m-dimensional real-valued vector x_0 (e.g., a digital signal or image) from incomplete and contaminated observations y = A x_0 + e; A is an n by m matrix with far fewer rows than columns (n << m) and e is an error term. Is it possible to recover x_0 accurately based on the data y? To recover x_0, we consider the solution x* to the l1-regularization problem min ||x||_1 subject to ||Ax - y||_2 <= epsilon, where epsilon is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x_0 is sufficiently sparse, then the solution is within the noise level: ||x* - x_0||_2 <= C·epsilon. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A provided that the number of nonzeros of x_0 is of about the same order as the number of observations. Second, suppose one observes few Fourier samples of x_0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/[log m]^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can very nearly recover approximately sparse signals.
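
The abstract's recovery program is a second-order cone problem, so a direct statement of it is short; the sketch below assumes the cvxpy package purely for illustration.

```python
import cvxpy as cp

def stable_recovery(A, y, eps):
    """min ||x||_1 subject to ||A x - y||_2 <= eps; eps is the assumed
    noise level (the size of the error term e)."""
    x = cp.Variable(A.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm(x, 1)),
                         [cp.norm(A @ x - y, 2) <= eps])
    problem.solve()
    return x.value
```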

6,226 citations


"A New Alternating Minimization Algo..." refers methods in this paper

  • ...This case occurs, for example, in compressive sensing [4, 16], where a subset of noisy Fourier coefficients, f, is used to recover a signal u0 with a sparse gradient field via solving the model...
