
Showing papers on "Compressed sensing published in 2018"


Journal ArticleDOI
TL;DR: This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets.
Abstract: Compressed sensing magnetic resonance imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Different from parallel imaging-based fast MRI, which utilizes multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist–Shannon sampling barrier to reconstruct MRI images with much less required raw data. This paper provides a deep learning-based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training data sets. In particular, a novel conditional Generative Adversarial Network-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilize our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency-domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared with these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
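
As a rough illustration of how such a composite objective can be assembled, the NumPy sketch below combines an image-domain MSE, a frequency-domain MSE computed via the 2-D FFT, and an adversarial term. The weights, the toy inputs, and the omission of the perceptual (VGG) content loss are simplifications for illustration, not the paper's actual DAGAN configuration.

```python
import numpy as np

def dagan_style_loss(x_rec, x_gt, adv_score, w_img=1.0, w_freq=0.1, w_adv=0.01):
    """Composite generator loss in the spirit of DAGAN: image-domain MSE,
    frequency-domain MSE (via 2-D FFT), and an adversarial term.
    The weights here are illustrative placeholders, not the paper's values."""
    img_loss = np.mean((x_rec - x_gt) ** 2)
    freq_loss = np.mean(np.abs(np.fft.fft2(x_rec) - np.fft.fft2(x_gt)) ** 2)
    adv_loss = -np.log(adv_score + 1e-12)  # generator wants the discriminator score -> 1
    return w_img * img_loss + w_freq * freq_loss + w_adv * adv_loss

# toy usage with a random "reconstruction" and a made-up discriminator score
x_gt = np.random.rand(64, 64)
x_rec = x_gt + 0.05 * np.random.randn(64, 64)
print(dagan_style_loss(x_rec, x_gt, adv_score=0.8))
```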

835 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper proposes a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $\ell_1$ norm CS reconstruction model and develops an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms.
Abstract: With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general $\ell_1$ norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: http://jianzhang.tech/projects/ISTA-Net.
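
For reference, the classical ISTA iteration that ISTA-Net unfolds can be sketched as below in NumPy; ISTA-Net replaces the fixed transform, threshold, and step size with learned, layer-wise modules. This toy operates on a generic sparse vector rather than image blocks, and the dimensions and regularization weight are arbitrary.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(y, Phi, lam=0.01, n_iter=200):
    """Classical ISTA for min_x 0.5*||y - Phi x||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)           # gradient step on the data-fit term
        x = soft_threshold(x - grad / L, lam / L)   # proximal (shrinkage) step
    return x

# toy compressive-sensing example: recover a sparse vector from m < n measurements
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(Phi @ x_true, Phi)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```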

771 citations


Journal ArticleDOI
TL;DR: The learned denoising-based approximate message passing (LDAMP) network is exploited and significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains.
Abstract: Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave massive multiple-input and multiple-output systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework on the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP neural network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains.
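
The generic denoising-based AMP recursion that LDAMP builds on can be sketched as follows; a soft-thresholding function stands in for the trained CNN denoiser, and the paper's beamspace-channel specifics (complex-valued channels, lens-array measurement structure) are not modeled. This is only the algorithmic skeleton, with a Monte-Carlo estimate of the Onsager correction term.

```python
import numpy as np

def damp(y, A, denoiser, n_iter=10, rng=None):
    """Denoising-based AMP sketch: LDAMP replaces `denoiser` with a trained CNN;
    here any (r, sigma) -> x_hat function can be plugged in."""
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)          # effective noise level
        r = x + A.T @ z                                  # pseudo-data
        x_new = denoiser(r, sigma)
        # Monte-Carlo estimate of the denoiser divergence (Onsager term)
        eps = sigma / 100 + 1e-12
        probe = rng.standard_normal(n)
        div = probe @ (denoiser(r + eps * probe, sigma) - x_new) / eps
        z = y - A @ x_new + (div / m) * z
        x = x_new
    return x

# toy usage with a soft-thresholding "denoiser" standing in for the learned CNN
soft = lambda r, s: np.sign(r) * np.maximum(np.abs(r) - s, 0.0)
rng = np.random.default_rng(1)
n, m = 400, 160
x_true = np.zeros(n); x_true[rng.choice(n, 10, replace=False)] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
print(np.round(np.linalg.norm(damp(A @ x_true, A, soft) - x_true), 3))
```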

587 citations


Journal ArticleDOI
TL;DR: RefineGAN as mentioned in this paper is a variant of fully-residual convolutional autoencoder and generative adversarial networks (GANs) specifically designed for CS-MRI formulation; it employs deeper generator and discriminator networks with cyclic data consistency loss for faithful interpolation in the given under-sampled data.
Abstract: Compressed sensing magnetic resonance imaging (CS-MRI) has provided theoretical foundations upon which the time-consuming MRI acquisition process can be accelerated. However, it primarily relies on iterative numerical solvers, which still hinders its adoption in time-critical applications. In addition, recent advances in deep neural networks have shown their potential in computer vision and image processing, but their adaptation to MRI reconstruction is still in an early stage. In this paper, we propose a novel deep learning-based generative adversarial model, RefineGAN, for fast and accurate CS-MRI reconstruction. The proposed model is a variant of fully-residual convolutional autoencoder and generative adversarial networks (GANs), specifically designed for the CS-MRI formulation; it employs deeper generator and discriminator networks with a cyclic data consistency loss for faithful interpolation of the given under-sampled $k$-space data. In addition, our solution leverages a chained network to further enhance the reconstruction quality. RefineGAN is fast and accurate: the reconstruction process is extremely rapid, as low as tens of milliseconds for a $256\times 256$ image, because it is a single pass through a feed-forward network, and the image quality is superior even for extremely low sampling rates (as low as 10%) due to the data-driven nature of the method. We demonstrate that RefineGAN outperforms the state-of-the-art CS-MRI methods by a large margin in terms of both running time and image quality via evaluation using several open-source MRI databases.

428 citations


Journal ArticleDOI
20 Jan 2018
TL;DR: In this article, a diffuser placed in front of an image sensor is used for single-shot 3D imaging, which exploits sparsity in the sample to solve for more 3D voxels than pixels on the 2D sensor.
Abstract: We demonstrate a compact, easy-to-build computational camera for single-shot three-dimensional (3D) imaging. Our lensless system consists solely of a diffuser placed in front of an image sensor. Every point within the volumetric field-of-view projects a unique pseudorandom pattern of caustics on the sensor. By using a physical approximation and simple calibration scheme, we solve the large-scale inverse problem in a computationally efficient way. The caustic patterns enable compressed sensing, which exploits sparsity in the sample to solve for more 3D voxels than pixels on the 2D sensor. Our 3D reconstruction grid is chosen to match the experimentally measured two-point optical resolution, resulting in 100 million voxels being reconstructed from a single 1.3 megapixel image. However, the effective resolution varies significantly with scene content. Because this effect is common to a wide range of computational cameras, we provide a new theory for analyzing resolution in such systems.

369 citations


Journal ArticleDOI
TL;DR: To bridge the gap between theory and practicality of CS, different CS acquisition strategies and reconstruction approaches are elaborated systematically in this paper.
Abstract: Compressive Sensing (CS) is a new sensing modality, which compresses the signal being acquired at the time of sensing. Signals can have sparse or compressible representation either in original domain or in some transform domain. Relying on the sparsity of the signals, CS allows us to sample the signal at a rate much below the Nyquist sampling rate. Also, the varied reconstruction algorithms of CS can faithfully reconstruct the original signal back from fewer compressive measurements. This fact has stimulated research interest toward the use of CS in several fields, such as magnetic resonance imaging, high-speed video acquisition, and ultrawideband communication. This paper reviews the basic theoretical concepts underlying CS. To bridge the gap between theory and practicality of CS, different CS acquisition strategies and reconstruction approaches are elaborated systematically in this paper. The major application areas where CS is currently being used are reviewed here. This paper also highlights some of the challenges and research directions in this field.
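
To make the sampling-and-reconstruction pipeline concrete, the sketch below (assuming NumPy and SciPy) senses a signal that is sparse in the DCT domain with far fewer random projections than Nyquist samples and recovers it with Orthogonal Matching Pursuit. The dimensions, sparsity level, and choice of OMP are arbitrary illustrations, not anything prescribed by the survey.

```python
import numpy as np
from scipy.fft import idct

def omp(y, D, k):
    """Orthogonal Matching Pursuit: greedily select k columns of D to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))     # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1]); x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m = 256, 64                                         # m << n measurements
Psi = idct(np.eye(n), axis=0, norm="ortho")            # inverse-DCT (synthesis) basis
alpha = np.zeros(n); alpha[[3, 17, 40, 90]] = [1.0, -0.7, 0.5, 0.3]
signal = Psi @ alpha                                   # sparse in the DCT domain
Phi = rng.standard_normal((m, n)) / np.sqrt(m)         # random sensing matrix
alpha_hat = omp(Phi @ signal, Phi @ Psi, k=4)
print(np.linalg.norm(alpha_hat - alpha) / np.linalg.norm(alpha))
```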

334 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the massive connectivity application in which a large number of devices communicate with a base station (BS) in a sporadic fashion, and proposed an approximate message passing (AMP) algorithm design that exploits the statistics of the wireless channel and provided an analytical characterization of the probabilities of false alarm and missed detection via state evolution.
Abstract: This paper considers the massive connectivity application in which a large number of devices communicate with a base-station (BS) in a sporadic fashion. Device activity detection and channel estimation are central problems in such a scenario. Due to the large number of potential devices, the devices need to be assigned non-orthogonal signature sequences. The main objective of this paper is to show that by using random signature sequences and by exploiting sparsity in the user activity pattern, the joint user detection and channel estimation problem can be formulated as a compressed sensing single measurement vector (SMV) or multiple measurement vector (MMV) problem depending on whether the BS has a single antenna or multiple antennas and efficiently solved using an approximate message passing (AMP) algorithm. This paper proposes an AMP algorithm design that exploits the statistics of the wireless channel and provides an analytical characterization of the probabilities of false alarm and missed detection via state evolution. We consider two cases depending on whether or not the large-scale component of the channel fading is known at the BS and design the minimum mean squared error denoiser for AMP according to the channel statistics. Simulation results demonstrate the substantial advantage of exploiting the channel statistics in AMP design; however, knowing the large-scale fading component does not appear to offer tangible benefits. For the multiple-antenna case, we employ two different AMP algorithms, namely the AMP with vector denoiser and the parallel AMP-MMV, and quantify the benefit of deploying multiple antennas.
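
The paper's algorithm is AMP with an MMSE denoiser over complex channels and (optionally) multiple antennas. The toy below only illustrates how sporadic activity turns joint device detection and channel estimation into a sparse single measurement vector (SMV) recovery problem; real-valued channels and scikit-learn's LASSO are used here purely as stand-ins.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy SMV setup: N devices, length-L non-orthogonal signatures, few active devices.
rng = np.random.default_rng(0)
N, L, n_active = 200, 60, 8
S = rng.standard_normal((L, N)) / np.sqrt(L)          # signature sequences
active = rng.choice(N, n_active, replace=False)
x = np.zeros(N)
x[active] = np.abs(rng.standard_normal(n_active)) + 0.5   # activity times (real) channel gain
y = S @ x + 0.01 * rng.standard_normal(L)

x_hat = Lasso(alpha=0.01, fit_intercept=False).fit(S, y).coef_
detected = np.flatnonzero(np.abs(x_hat) > 0.1)        # simple activity threshold
missed = set(active) - set(detected)
false_alarm = set(detected) - set(active)
print(f"missed: {len(missed)}, false alarms: {len(false_alarm)}")
```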

326 citations


Journal ArticleDOI
TL;DR: A novel method called improved convolutional deep belief network (CDBN) with compressed sensing (CS) is developed for feature learning and fault diagnosis of rolling bearings, and the results confirm that the developed method is more effective than the traditional methods.

289 citations


Journal ArticleDOI
Xiuli Chai1, Xiaoyu Zheng1, Zhihua Gan1, Daojun Han1, Yi Chen2 
TL;DR: An image encryption algorithm based on a chaotic system, compressive sensing, and the ECA is proposed; by using CS it can compress and encrypt the image simultaneously, which may reduce the amount of data and the required storage space.

275 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a non-coherent transmission scheme for mMTC and specifically for grant-free random access, which leverages elements from the approximate message passing (AMP) algorithm.
Abstract: A key challenge of massive MTC (mMTC) is the joint detection of device activity and decoding of data. The sparse characteristics of mMTC make compressed sensing (CS) approaches a promising solution to the device detection problem. However, utilizing CS-based approaches for device detection along with channel estimation, and using the acquired estimates for coherent data transmission, is suboptimal, especially when the goal is to convey only a few bits of data. First, we focus on coherent transmission and demonstrate that it is possible to obtain more accurate channel state information by combining conventional estimators with CS-based techniques. Moreover, we illustrate that even simple power control techniques can enhance the device detection performance in mMTC setups. Second, we devise a new non-coherent transmission scheme for mMTC and specifically for grant-free random access. We design an algorithm that jointly detects device activity along with embedded information bits. The approach leverages elements from the approximate message passing (AMP) algorithm, and exploits the structured sparsity introduced by the non-coherent transmission scheme. Our analysis reveals that the proposed approach has superior performance compared with application of the original AMP approach.

239 citations


Journal ArticleDOI
TL;DR: In this paper, a range of efficient wireless processes and enabling techniques are put under a magnifier glass in the quest for exploring different manifestations of correlated processes, where sub-Nyquist sampling may be invoked as an explicit benefit of having a sparse transform-domain representation.
Abstract: A range of efficient wireless processes and enabling techniques are put under a magnifier glass in the quest for exploring different manifestations of correlated processes, where sub-Nyquist sampling may be invoked as an explicit benefit of having a sparse transform-domain representation. For example, wide-band next-generation systems require a high Nyquist-sampling rate, but the channel impulse response (CIR) will be very sparse at the high Nyquist frequency, given the low number of reflected propagation paths. This motivates the employment of compressive sensing based processing techniques for frugally exploiting both the limited radio resources and the network infrastructure as efficiently as possible. A diverse range of sophisticated compressed sampling techniques is surveyed, and we conclude with a variety of promising research ideas related to large-scale antenna arrays, non-orthogonal multiple access (NOMA), and ultra-dense network (UDN) solutions, just to name a few.

Journal ArticleDOI
TL;DR: Various applications of sparse representation in wireless communications, with a focus on the most recent compressive sensing (CS)-enabled approaches, are discussed.
Abstract: Sparse representation can efficiently model signals in different applications to facilitate processing. In this article, we will discuss various applications of sparse representation in wireless communications, with a focus on the most recent compressive sensing (CS)-enabled approaches. With the help of the sparsity property, CS is able to enhance the spectrum efficiency (SE) and energy efficiency (EE) of fifth-generation (5G) and Internet of Things (IoT) networks.

Proceedings Article
29 Aug 2018
TL;DR: In this article, a weight structure that is necessary for asymptotic convergence to the true sparse signal is introduced; with this structure, unfolded ISTA can attain linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases.
Abstract: In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available: https://github.com/xchen-tamu/linear-lista-cpss.
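
The sketch below shows the forward pass of unfolded ISTA with the coupled per-layer parameterization $x_{k+1} = \mathrm{soft}(x_k + W_k(y - A x_k), \theta_k)$, which corresponds to the weight structure $W^2_k = I - W^1_k A$ studied in the paper. For simplicity the matrices and thresholds are set to their ISTA values rather than trained, and the support-selection thresholding is omitted; this only illustrates the network structure.

```python
import numpy as np

def soft(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lista_forward(y, A, W_list, theta_list):
    """Unfolded ISTA (LISTA) forward pass with coupled weights:
    x_{k+1} = soft(x_k + W_k (y - A x_k), theta_k). In practice the W_k and
    theta_k are learned; here they are simply ISTA's values."""
    x = np.zeros(A.shape[1])
    for W, theta in zip(W_list, theta_list):
        x = soft(x + W @ (y - A @ x), theta)
    return x

rng = np.random.default_rng(3)
n, m, K = 100, 50, 16
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
step = 1.0 / np.linalg.norm(A, 2) ** 2
W_list = [step * A.T] * K            # untrained initialization: the ISTA matrix
theta_list = [step * 0.05] * K       # fixed thresholds (learned in the actual method)
print(np.linalg.norm(lista_forward(A @ x_true, A, W_list, theta_list) - x_true))
```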

Book ChapterDOI
TL;DR: An overview of these sparse methods for DOA estimation is provided, with a particular highlight on the recently developed gridless sparse methods, e.g., those based on covariance fitting and the atomic norm.
Abstract: Direction-of-arrival (DOA) estimation refers to the process of retrieving the direction information of several electromagnetic waves/sources from the outputs of a number of receiving antennas that form a sensor array. DOA estimation is a major problem in array signal processing and has wide applications in radar, sonar, wireless communications, etc. With the development of sparse representation and compressed sensing, the last decade has witnessed a tremendous advance in this research topic. The purpose of this article is to provide an overview of these sparse methods for DOA estimation, with a particular highlight on the recently developed gridless sparse methods, e.g., those based on covariance fitting and the atomic norm. Several future research directions are also discussed.

Journal ArticleDOI
TL;DR: The squared error of regularized $M$-estimators is shown to converge in probability to a nontrivial limit that is given as the solution to a minimax convex-concave optimization problem over four scalar optimization variables.
Abstract: A popular approach for estimating an unknown signal $\mathbf{x}_0 \in \mathbb{R}^n$ from noisy, linear measurements $\mathbf{y} = \mathbf{A}\mathbf{x}_0 + \mathbf{z} \in \mathbb{R}^m$ is via solving a so-called regularized $M$-estimator: $\hat{\mathbf{x}} := \arg\min_{\mathbf{x}} \mathcal{L}(\mathbf{y} - \mathbf{A}\mathbf{x}) + \lambda f(\mathbf{x})$. Here, $\mathcal{L}$ is a convex loss function, $f$ is a convex (typically, non-smooth) regularizer, and $\lambda > 0$ is a regularizer parameter. We analyze the squared error performance $\|\hat{\mathbf{x}} - \mathbf{x}_0\|_2^2$ of such estimators in the high-dimensional proportional regime where $m, n \rightarrow \infty$ and $m/n \rightarrow \delta$. The design matrix $\mathbf{A}$ is assumed to have entries iid Gaussian; only minimal and rather mild regularity conditions are imposed on the loss function, the regularizer, and on the noise and signal distributions. We show that the squared error converges in probability to a nontrivial limit that is given as the solution to a minimax convex-concave optimization problem on four scalar optimization variables. We identify a new summary parameter, termed the expected Moreau envelope, which plays a central role in the error characterization. The precise nature of the results permits an accurate performance comparison between different instances of regularized $M$-estimators and allows us to optimally tune the involved parameters (such as the regularizer parameter and the number of measurements). The key ingredient of our proof is the convex Gaussian min-max theorem, which is a tight and strengthened version of a classical Gaussian comparison inequality proved by Gordon in 1988.

Posted Content
TL;DR: It is proved that single-layer DIP networks with constant-fraction over-parameterization will perfectly fit any signal through gradient descent, despite the fitting problem being non-convex, which provides justification for early stopping.
Abstract: We propose a novel method for compressed sensing recovery using untrained deep generative models. Our method is based on the recently proposed Deep Image Prior (DIP), wherein the convolutional weights of the network are optimized to match the observed measurements. We show that this approach can be applied to solve any differentiable linear inverse problem, outperforming previous unlearned methods. Unlike various learned approaches based on generative models, our method does not require pre-training over large datasets. We further introduce a novel learned regularization technique, which incorporates prior information on the network weights. This reduces reconstruction error, especially for noisy measurements. Finally, we prove that, using the DIP optimization approach, moderately overparameterized single-layer networks can perfectly fit any signal despite the non-convex nature of the fitting problem. This theoretical result provides justification for early stopping.
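
The core optimization loop can be sketched as below, assuming PyTorch: the weights of an untrained generator are fitted so that its output agrees with the compressed measurements, with early stopping acting as the implicit regularizer. The tiny MLP, the toy dimensions, and the absence of the paper's learned weight regularization are all simplifications; the paper uses a convolutional DCGAN-style generator.

```python
import torch

torch.manual_seed(0)
n, m = 256, 64
x_true = torch.zeros(n); x_true[torch.randperm(n)[:6]] = 1.0   # toy target "image"
A = torch.randn(m, n) / m ** 0.5                                # random measurement matrix
y = A @ x_true                                                  # compressed measurements

z = torch.randn(16)                                             # fixed random network input
net = torch.nn.Sequential(
    torch.nn.Linear(16, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, n),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):            # early stopping of this loop acts as regularization
    opt.zero_grad()
    loss = torch.sum((A @ net(z) - y) ** 2)    # fit the measurements, not the signal
    loss.backward()
    opt.step()
print(float(torch.norm(net(z).detach() - x_true) / torch.norm(x_true)))
```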

Journal ArticleDOI
Junxin Chen1, Yu Zhang1, Lin Qi1, Chong Fu1, Lisheng Xu1 
TL;DR: A solution for simultaneous image encryption and compression is presented, combining compressed sensing (CS) with a structurally random matrix (SRM) and permutation-diffusion type image encryption based on a 3-D cat map.
Abstract: This paper presents a solution for simultaneous image encryption and compression. The primary introduced techniques are compressed sensing (CS) using structurally random matrix (SRM), and permutation-diffusion type image encryption. The encryption performance originates from both the techniques, whereas the compression effect is achieved by CS. Three-dimensional (3-D) cat map is employed for key stream generation. The simultaneously produced three state variables of 3-D cat map are respectively used for the SRM generation, image permutation and diffusion. Numerical simulations and security analyses have been carried out, and the results demonstrate the effectiveness and security performance of the proposed system.

Journal ArticleDOI
TL;DR: An overview of nonconvex regularization based sparse and low-rank recovery in various fields in signal processing, statistics, and machine learning, including compressive sensing, sparse regression and variable selection, sparse signals separation, sparse principal component analysis (PCA), large covariance and inverse covariance matrices estimation, matrix completion, and robust PCA is given.
Abstract: In the past decade, sparse and low-rank recovery has drawn much attention in many areas such as signal/image processing, statistics, bioinformatics, and machine learning. To induce sparsity and/or low-rankness, the $\ell_1$ norm and the nuclear norm are among the most popular regularization penalties due to their convexity. While the $\ell_1$ and nuclear norms are convenient because the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization-based sparse and low-rank recovery has attracted considerable interest and, in fact, is a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic in various fields in signal processing, statistics, and machine learning, including compressive sensing, sparse regression and variable selection, sparse signals separation, sparse principal component analysis (PCA), large covariance and inverse covariance matrices estimation, matrix completion, and robust PCA. We present recent developments of nonconvex regularization based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
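
For a concrete feel of nonconvex recovery, the sketch below runs iterative hard thresholding, which enforces an $\ell_0$ sparsity constraint directly by keeping only the $k$ largest-magnitude entries at every iteration. This is a generic textbook example chosen for brevity, not code from the paper's repository, and the parameters are arbitrary.

```python
import numpy as np

def iht(y, A, k, n_iter=300):
    """Iterative hard thresholding: a simple nonconvex (l0-constrained) recovery
    method. H_k keeps the k largest-magnitude entries and zeroes the rest."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x + step * (A.T @ (y - A @ x))     # gradient step on the data-fit term
        keep = np.argsort(np.abs(v))[-k:]      # hard-thresholding projection
        x = np.zeros_like(v); x[keep] = v[keep]
    return x

rng = np.random.default_rng(5)
n, m, k = 256, 100, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
print(np.round(np.linalg.norm(iht(A @ x_true, A, k) - x_true), 4))
```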

Journal ArticleDOI
TL;DR: In this paper, the problem of sampling $k$-bandlimited signals on graphs is studied, and two sampling strategies are proposed: the first is non-adaptive, while the second is adaptive but yields optimal results.

Journal ArticleDOI
TL;DR: In this paper, an analytical solution for the proximal operator of the $L_1$–$L_2$ metric is derived for compressive sensing, and the authors develop new and fast algorithms for recovering a sparse vector from a small number of measurements, which is a fundamental problem in the field of CS.
Abstract: This paper aims to develop new and fast algorithms for recovering a sparse vector from a small number of measurements, which is a fundamental problem in the field of compressive sensing (CS). Currently, CS favors incoherent systems, in which any two measurements are as little correlated as possible. In reality, however, many problems are coherent, and conventional methods such as $L_1$ minimization do not work well. Recently, the difference of the $L_1$ and $L_2$ norms, denoted as $L_1$–$L_2$, has been shown to have superior performance over the classic $L_1$ method, but it is computationally expensive. We derive an analytical solution for the proximal operator of the $L_1$–$L_2$ metric, and it makes some fast $L_1$ solvers such as forward–backward splitting (FBS) and alternating direction method of multipliers (ADMM) applicable for $L_1$–$L_2$. We describe in detail how to incorporate the proximal operator into FBS and ADMM and show that the resulting algorithms are convergent under mild conditions. Both algorithms are shown to be much more efficient than the original implementation of $L_1$–$L_2$ based on a difference-of-convex approach in the numerical experiments.
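
The closed-form proximal operator itself is the paper's contribution and is not reproduced here. Instead, the NumPy sketch below shows the difference-of-convex (DCA) baseline that the proposed FBS/ADMM solvers accelerate: each outer iteration linearizes the concave $-\lambda\|x\|_2$ term and solves the resulting convex $L_1$ subproblem with ISTA. Dimensions, the regularization weight, and iteration counts are illustrative only.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_minus_l2_dca(y, A, lam=0.05, outer=10, inner=100):
    """L1-L2 minimization via the DCA baseline: linearize -lam*||x||_2 around the
    current iterate, then solve the convex L1 subproblem with ISTA."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        nrm = np.linalg.norm(x)
        g = x / nrm if nrm > 0 else np.zeros_like(x)   # subgradient of ||x||_2
        for _ in range(inner):
            grad = A.T @ (A @ x - y) - lam * g          # smooth part plus linearized term
            x = soft(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(4)
n, m = 120, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[5, 50, 90]] = [1.0, -1.0, 0.5]
x_hat = l1_minus_l2_dca(A @ x_true, A)
print(np.round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```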

Journal ArticleDOI
TL;DR: In this paper, a two-stage compressed sensing method for mmWave channel estimation is proposed, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage.
Abstract: We consider the problem of channel estimation for millimeter wave (mmWave) systems, where, to minimize the hardware complexity and power consumption, an analog transmit beamforming and receive combining structure with only one radio frequency chain at the base station and mobile station is employed. Most existing works for mmWave channel estimation exploit sparse scattering characteristics of the channel. In addition to sparsity, mmWave channels may exhibit angular spreads over the angle of arrival, angle of departure, and elevation domains. In this paper, we show that angular spreads give rise to a useful low-rank structure that, along with the sparsity, can be simultaneously utilized to reduce the sample complexity, i.e., the number of samples needed to successfully recover the mmWave channel. Specifically, to effectively leverage the joint sparse and low-rank structure, we develop a two-stage compressed sensing method for mmWave channel estimation, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage. Our theoretical analysis reveals that the proposed two-stage scheme can achieve a lower sample complexity than a conventional compressed sensing method that exploits only the sparse structure of the mmWave channel. Simulation results are provided to corroborate our theoretical results and to show the superiority of the proposed two-stage method.

Journal ArticleDOI
TL;DR: A novel double-image compression-encryption algorithm is proposed by combining co-sparse representation with random pixel exchanging to enhance the confidentiality and robustness of double-image encryption algorithms.

Journal ArticleDOI
Yuchen He1, Gao Wang1, Guoxiang Dong1, Shitao Zhu1, Hui Chen1, Anxue Zhang1, Zhuo Xu1 
TL;DR: A novel deep learning ghost imaging method is proposed, in which the convolutional neural network commonly used in deep learning is modified to fit the characteristics of ghost imaging; with this method, a target image can be obtained faster and more accurately at a low sampling rate than with the conventional GI method.
Abstract: Even though ghost imaging (GI), an unconventional imaging method, has received increased attention from researchers during the last decades, imaging speed is still not satisfactory. Once the data-acquisition method and the system parameters are determined, only the processing method has the potential to accelerate image processing significantly. However, both the basic correlation method and the compressed sensing algorithm, which are often used for ghost imaging, have their own problems. To overcome these challenges, a novel deep learning ghost imaging method is proposed in this paper. We modified the convolutional neural network that is commonly used in deep learning to fit the characteristics of ghost imaging. This modified network can be referred to as a ghost imaging convolutional neural network. Our simulations and experiments confirm that, using this new method, a target image can be obtained faster and more accurately at a low sampling rate compared with the conventional GI method.

Journal ArticleDOI
TL;DR: SPARTA, a novel algorithm to reconstruct a sparse signal from a small number of magnitude-only measurements, is a simple yet effective, scalable, and fast sparse PR solver that is robust against additive noise of bounded support.
Abstract: This paper develops a novel algorithm, termed SPARse Truncated Amplitude flow (SPARTA), to reconstruct a sparse signal from a small number of magnitude-only measurements. It deals with what is also known as sparse phase retrieval (PR), which is NP-hard in general and emerges in many science and engineering applications. Upon formulating sparse PR as an amplitude-based nonconvex optimization task, SPARTA works iteratively in two stages: In stage one, the support of the underlying sparse signal is recovered using an analytically well-justified rule, and subsequently a sparse orthogonality-promoting initialization is obtained via power iterations restricted on the support; and in the second stage, the initialization is successively refined by means of hard thresholding based gradient-type iterations. SPARTA is a simple yet effective, scalable, and fast sparse PR solver. On the theoretical side, for any $n$-dimensional $k$-sparse ($k\ll n$) signal $\boldsymbol{x}$ with minimum (in modulus) nonzero entries on the order of $(1/\sqrt{k})\|\boldsymbol{x}\|_2$, SPARTA recovers the signal exactly (up to a global unimodular constant) from about $k^2\log n$ random Gaussian measurements with high probability. Furthermore, SPARTA incurs computational complexity on the order of $k^2 n\log n$ with total runtime proportional to the time required to read the data, which improves upon the state of the art by at least a factor of $k$. Finally, SPARTA is robust against additive noise of bounded support. Extensive numerical tests corroborate markedly improved recovery performance and speedups of SPARTA relative to existing alternatives.

Journal ArticleDOI
TL;DR: Computationally, for the first time, the effects of sparse autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals are explored.

Posted Content
TL;DR: A fully data-driven deep learning algorithm for k-space interpolation is proposed, which can also be easily applied to non-Cartesian k-space trajectories by simply adding an additional regridding layer.
Abstract: The annihilating filter-based low-rank Hankel matrix approach (ALOHA) is one of the state-of-the-art compressed sensing approaches that directly interpolates the missing k-space data using low-rank Hankel matrix completion. The success of ALOHA is due to the concise signal representation in the k-space domain thanks to the duality between structured low-rankness in the k-space domain and the image domain sparsity. Inspired by the recent mathematical discovery that links convolutional neural networks to Hankel matrix decomposition using data-driven framelet basis, here we propose a fully data-driven deep learning algorithm for k-space interpolation. Our network can be also easily applied to non-Cartesian k-space trajectories by simply adding an additional regridding layer. Extensive numerical experiments show that the proposed deep learning method consistently outperforms the existing image-domain deep learning approaches.

Proceedings ArticleDOI
12 Apr 2018
TL;DR: In this article, a projected gradient descent (PGD) algorithm was proposed to solve linear inverse problems with GAN priors for compressive sensing, and theoretical guarantees on the rate of convergence of this algorithm were provided.
Abstract: In recent works, both sparsity-based methods as well as learning-based methods have proven to be successful in solving several challenging linear inverse problems. However, sparsity priors for natural signals and images suffer from poor discriminative capability, while learning-based methods seldom provide concrete theoretical guarantees. In this work, we advocate the idea of replacing hand-crafted priors, such as sparsity, with a Generative Adversarial Network (GAN) to solve linear inverse problems such as compressive sensing. In particular, we propose a projected gradient descent (PGD) algorithm for effective use of GAN priors for linear inverse problems, and also provide theoretical guarantees on the rate of convergence of this algorithm. Moreover, we show empirically that our algorithm demonstrates superior performance over an existing method of leveraging GANs for compressive sensing.
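
A PyTorch sketch of the alternation analyzed in the paper is given below: a gradient step on the measurement loss followed by a projection onto the range of a generator, where the projection itself is an inner optimization over the latent code. The tiny untrained MLP merely stands in for a trained GAN, the target is placed in the generator's range so the demo is well posed, and step sizes and iteration counts are arbitrary.

```python
import torch

torch.manual_seed(0)
n, m, latent = 128, 40, 8
G = torch.nn.Sequential(torch.nn.Linear(latent, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, n))             # stand-in "generator"
A = torch.randn(m, n) / m ** 0.5
x_star = G(torch.randn(latent)).detach()                    # target chosen in range(G)
y = A @ x_star

def project_onto_range(u, steps=200, lr=1e-2):
    """Approximate projection onto range(G): minimize ||G(z) - u||^2 over z."""
    z = torch.zeros(latent, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        torch.sum((G(z) - u) ** 2).backward()
        opt.step()
    return G(z).detach()

eta = 1.0 / torch.linalg.norm(A, 2) ** 2                     # conservative step size
x = torch.zeros(n)
for _ in range(20):                                          # outer PGD iterations
    x = x - eta * (A.T @ (A @ x - y))                        # gradient step on the data term
    x = project_onto_range(x)                                # projection via the generator
print(float(torch.norm(x - x_star) / torch.norm(x_star)))
```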

Book ChapterDOI
16 Sep 2018
TL;DR: In this paper, a visual refinement component is learned on top of an MSE loss-based reconstruction network and a semantic interpretability score is introduced to measure the visibility of the region of interest in both ground truth and reconstructed images.
Abstract: Deep learning approaches have shown promising performance for compressed sensing-based Magnetic Resonance Imaging. While deep neural networks trained with mean squared error (MSE) loss functions can achieve high peak signal to noise ratio, the reconstructed images are often blurry and lack sharp details, especially for higher undersampling rates. Recently, adversarial and perceptual loss functions have been shown to achieve more visually appealing results. However, it remains an open question how to (1) optimally combine these loss functions with the MSE loss function and (2) evaluate such a perceptual enhancement. In this work, we propose a hybrid method, in which a visual refinement component is learnt on top of an MSE loss-based reconstruction network. In addition, we introduce a semantic interpretability score, measuring the visibility of the region of interest in both ground truth and reconstructed images, which allows us to objectively quantify the usefulness of the image quality for image post-processing and analysis. Applied on a large cardiac MRI dataset simulated with 8-fold undersampling, we demonstrate significant improvements (\(p<0.01\)) over the state-of-the-art in both a human observer study and the semantic interpretability score.

Journal ArticleDOI
TL;DR: In this paper, an iterative reweight-based superresolution channel estimation scheme was proposed to improve the channel estimation accuracy for millimeter-wave massive MIMO with hybrid precoding, where a weight parameter was used to control the tradeoff between the sparsity and the data fitting error.
Abstract: Channel estimation is challenging for millimeter-wave massive MIMO with hybrid precoding, since the number of radio frequency chains is much smaller than that of antennas. Conventional compressive sensing based channel estimation schemes suffer from severe resolution loss due to the channel angle quantization. To improve the channel estimation accuracy, we propose an iterative reweight-based superresolution channel estimation scheme in this paper. By optimizing an objective function through the gradient descent method, the proposed scheme can iteratively move the estimated angle of arrivals/departures towards the optimal solutions, and finally realize the superresolution channel estimation. In the optimization, a weight parameter is used to control the tradeoff between the sparsity and the data fitting error. In addition, a singular value decomposition-based preconditioning is developed to reduce the computational complexity of the proposed scheme. Simulation results verify the better performance of the proposed scheme than conventional solutions.
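
The underlying superresolution idea, treating the angle as a continuous variable and refining it by gradient descent on the fitting error instead of quantizing it to a grid, can be illustrated with the single-path PyTorch toy below. The half-wavelength ULA model, dimensions, noiseless observation, and plain Adam optimizer are assumptions for illustration; the paper's full scheme (iterative reweighting over multiple paths, hybrid precoding constraints, SVD-based preconditioning) is not reproduced.

```python
import math
import torch

torch.manual_seed(0)
N = 16                                              # antennas in a uniform linear array
ant = torch.arange(N, dtype=torch.float64)

def steering(theta):
    """Unit-norm ULA response a(theta) with half-wavelength spacing."""
    phase = math.pi * ant * torch.sin(theta)
    return torch.complex(torch.cos(phase), torch.sin(phase)) / N ** 0.5

theta_true = torch.tensor(0.31, dtype=torch.float64)
y = 1.7 * steering(theta_true)                      # noiseless single-path observation

# start from a coarse (on-grid) estimate and refine the angle off-grid
theta = torch.tensor(0.26, dtype=torch.float64, requires_grad=True)
opt = torch.optim.Adam([theta], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    a = steering(theta)
    gain = a.conj() @ y                             # least-squares path gain for this angle
    torch.sum(torch.abs(y - gain * a) ** 2).backward()
    opt.step()
print(float(theta), float(theta_true))
```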

Book ChapterDOI
02 Dec 2018
TL;DR: This paper uses techniques from compressed sensing and the recently developed Alternating Direction Neural Networks to create a deep recurrent auto-encoder that is able to outperform all previously published results, including deep networks with orders of magnitude more parameters.
Abstract: In this paper we consider the problem of estimating a dense depth map from a set of sparse LiDAR points. We use techniques from compressed sensing and the recently developed Alternating Direction Neural Networks (ADNNs) to create a deep network which performs multi-layer convolutional compressed sensing. Our architecture internally performs the optimization for extracting convolutional sparse codes from the input which are then used to make a prediction. Our results demonstrate that with only three layers and 1800 parameters we achieve performance which is competitive with the state of the art, including deep networks with orders of magnitude more parameters and layers.