Journal ArticleDOI
Compressive Sensing via Nonlocal Low-Rank Regularization
TL;DR: A nonlocal low-rank regularization (NLR) approach that exploits structured sparsity is proposed and applied to compressed sensing of both photographic and MRI images, together with the use of the nonconvex log det(X) as a smooth surrogate function for the rank in place of the convex nuclear norm.
Abstract:
Sparsity has been widely exploited for exact reconstruction of a signal from a small number of random measurements. Recent advances have suggested that structured or group sparsity often leads to more powerful signal reconstruction techniques in various compressed sensing (CS) studies. In this paper, we propose a nonlocal low-rank regularization (NLR) approach toward exploiting structured sparsity and explore its application to CS of both photographic and MRI images. We also propose the use of the nonconvex log det(X) as a smooth surrogate function for the rank instead of the convex nuclear norm, and justify the benefit of such a strategy through extensive experiments. To further improve the computational efficiency of the proposed algorithm, we have developed a fast implementation using the alternating direction method of multipliers (ADMM). Experimental results show that the proposed NLR-CS algorithm significantly outperforms existing state-of-the-art CS techniques for image recovery.
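A minimal numerical sketch (not the paper's solver) of the surrogate comparison the abstract describes: on a low-rank-plus-noise matrix, the nonconvex log det surrogate penalizes large singular values far less than the convex nuclear norm does, which is why it better approximates the rank. The smoothing constant eps below is an illustrative assumption, not the paper's parameter choice.

```python
import numpy as np

# Build a rank-3 matrix plus small noise.
rng = np.random.default_rng(0)
low_rank = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank 3
X = low_rank + 0.01 * rng.standard_normal((20, 20))

sigma = np.linalg.svd(X, compute_uv=False)  # singular values, descending
eps = 1e-2                                  # smoothing constant (assumed)

nuclear_norm = sigma.sum()                    # convex surrogate for rank(X)
logdet_surrogate = np.log(sigma + eps).sum()  # smooth nonconvex surrogate
```

Because log grows much more slowly than the identity, the three large singular values dominate the nuclear norm but contribute only modestly to the log det term, so minimizing the latter shrinks small (noise) singular values without heavily penalizing the true low-rank energy.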
Citations
Proceedings ArticleDOI
ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing
Jian Zhang, Bernard Ghanem, et al.
TL;DR: This paper proposes a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general ℓ1 norm CS reconstruction model, and develops an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms.
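For context, the classical ISTA iteration that ISTA-Net unrolls can be sketched as below; ISTA-Net replaces the fixed sparsifying transform and shrinkage step with learned nonlinear transforms, so this is only the hand-designed baseline, with illustrative parameter defaults.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.01, n_iter=500):
    """Classical ISTA for min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data term, then shrinkage on the sparsity term.
        x = soft_threshold(x - step * Phi.T @ (Phi @ x - y), step * lam)
    return x
```

Each iteration alternates a gradient descent step on the quadratic fidelity term with an elementwise soft-threshold, which is exactly the two-stage structure ISTA-Net maps onto network layers.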
Journal ArticleDOI
Weighted Nuclear Norm Minimization and Its Applications to Low Level Vision
TL;DR: It is proved that WNNP (the weighted nuclear norm proximal problem) is equivalent to a standard quadratic programming problem with linear constraints, which makes it possible to solve the original problem with off-the-shelf convex optimization solvers; an automatic weight-setting method is also presented, which greatly facilitates the practical implementation of WNNM.
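When the weights are non-descending in the singular values, the WNNP problem admits a closed-form solution by weighted singular value thresholding, which can be sketched as follows (the weight vector here is a hypothetical input; WNNM's automatic weight setting is not reproduced):

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: shrink each singular value of Y
    by its own threshold. This is the closed-form WNNP solution when the
    weights are sorted in non-descending order against the singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - weights, 0.0)) @ Vt
```

With a constant weight vector this reduces to plain singular value thresholding; WNNM's advantage comes from assigning smaller weights to larger singular values so that the dominant structure is shrunk less than the noise.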
Proceedings ArticleDOI
ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Measurements
TL;DR: A novel convolutional neural network architecture which takes in CS measurements of an image as input and outputs an intermediate reconstruction which is fed into an off-the-shelf denoiser to obtain the final reconstructed image, ReconNet.
Journal ArticleDOI
From Denoising to Compressed Sensing
TL;DR: In this paper, a denoising-based approximate message passing (D-AMP) framework is proposed that can integrate a wide class of denoisers within its iterations. A key element of D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
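The role of the Onsager term can be seen in a basic AMP iteration with a soft-threshold denoiser, sketched below under illustrative parameter choices; D-AMP swaps the soft threshold for an arbitrary off-the-shelf denoiser, with the Onsager term computed from that denoiser's divergence.

```python
import numpy as np

def amp_soft(Phi, y, lam=2.0, n_iter=30):
    """AMP with a soft-threshold denoiser. The `onsager` term keeps the
    effective noise seen by the denoiser approximately white Gaussian,
    which is the property D-AMP relies on for general denoisers."""
    m, n = Phi.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)  # effective noise level estimate
        pseudo = x + Phi.T @ z                  # pseudo-data: x0 + ~Gaussian noise
        x = np.sign(pseudo) * np.maximum(np.abs(pseudo) - lam * sigma, 0.0)
        # Onsager correction: residual times the denoiser's divergence / m.
        onsager = z * (np.count_nonzero(x) / m)
        z = y - Phi @ x + onsager
    return x
```

Dropping the `onsager` term turns this into plain iterative thresholding, whose per-iteration perturbation is no longer Gaussian-like and which converges markedly more slowly.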
Journal ArticleDOI
ADMM-CSNet: A Deep Learning Approach for Image Compressive Sensing
TL;DR: Two versions of a novel deep learning architecture, dubbed ADMM-CSNet, are proposed by combining the traditional model-based CS method with the data-driven deep learning method for image reconstruction from sparsely sampled measurements; they achieve favorable reconstruction accuracy at fast computational speed compared with traditional methods and other deep learning methods.
References
Book
Compressed sensing
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program, Basis Pursuit in signal processing.
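The Basis Pursuit problem mentioned above, min ||x||_1 subject to Phi x = y, can be posed as a linear program by splitting x = u - v with u, v >= 0. A minimal sketch using SciPy as an off-the-shelf LP solver (an implementation choice for illustration, not from the source):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """Basis Pursuit as an LP: min 1^T(u + v) s.t. Phi(u - v) = y, u, v >= 0.
    At the optimum, x = u - v is the minimum-l1-norm solution of Phi x = y."""
    m, n = Phi.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([Phi, -Phi]),
                  b_eq=y,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]
```

For a sufficiently sparse signal and generic Gaussian measurements, the LP solution recovers the signal exactly, which is the phenomenon the TL;DR summarizes.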
Book
Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
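A minimal sketch of the ADMM splitting this monograph popularizes, applied to the lasso; variable names and parameter defaults below are illustrative assumptions, not the book's notation.

```python
import numpy as np

def admm_lasso(A, b, lam=0.01, rho=1.0, n_iter=300):
    """ADMM for the lasso via the splitting
    min 0.5||Ax - b||^2 + lam||z||_1  subject to  x - z = 0."""
    n = A.shape[1]
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached for every x-update
    Atb = A.T @ b
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual variable
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))                                   # x-update (ridge solve)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-update (l1 prox)
        u = u + x - z                                                   # dual ascent
    return z
```

Each subproblem is cheap in closed form, and only the consensus variables need to be exchanged, which is why the same pattern scales to the distributed settings the monograph discusses.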
Journal ArticleDOI
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
TL;DR: In this paper, the authors considered the model problem of reconstructing an object from incomplete frequency samples and showed that with probability at least 1 − O(N^−M), f can be reconstructed exactly as the solution to the ℓ1 minimization problem.
Journal ArticleDOI
Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering
TL;DR: An algorithm based on an enhanced sparse representation in the transform domain, combined with a specially developed collaborative Wiener filtering, achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
Journal ArticleDOI
Robust principal component analysis
TL;DR: In this paper, the authors prove that under suitable assumptions it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit: among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm.
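A basic augmented-Lagrangian sketch of Principal Component Pursuit, min ||L||_* + lam||S||_1 subject to L + S = M, alternating singular value thresholding for L with entrywise soft-thresholding for S. The lam and mu defaults follow common heuristics and are assumptions for illustration.

```python
import numpy as np

def pcp(M, lam=None, mu=None, n_iter=300):
    """Principal Component Pursuit via a simple fixed-mu ADMM scheme."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard weight from the RPCA literature
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()   # common step-size heuristic (assumed)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # L-update: singular value thresholding at level 1/mu.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft-thresholding at level lam/mu.
        T = M - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # Dual update enforcing the constraint L + S = M.
        Y = Y + mu * (M - L - S)
    return L, S
```

The two proximal steps mirror the two terms of the objective: the nuclear norm shrinks singular values (low-rank part), while the ℓ1 norm shrinks individual entries (sparse part).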