
Showing papers on "Compressed sensing published in 2016"


Journal ArticleDOI
TL;DR: In this paper, a denoising-based approximate message passing (D-AMP) framework is proposed that can integrate a wide class of denoisers within its iterations. A key element of D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
Abstract: A denoising algorithm seeks to remove noise, errors, or perturbations from a signal. Extensive research has been devoted to this arena over the last several decades, and as a result, today's denoisers can effectively remove large amounts of additive white Gaussian noise. A compressed sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop an extension of the approximate message passing (AMP) framework, called denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations. We demonstrate that, when used with a high-performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
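
As a rough illustration of the D-AMP recursion described above (not the authors' implementation), the sketch below runs an AMP loop with plain soft-thresholding as a stand-in denoiser and estimates the divergence in the Onsager correction by a Monte Carlo perturbation; the denoiser choice, dimensions, and variable names are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, lam):
    """Stand-in denoiser: entrywise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def damp(y, A, iters=30, eps=1e-3, rng=np.random.default_rng(0)):
    """Sketch of a denoising-based AMP loop with a Monte Carlo Onsager correction."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)           # effective noise level estimate
        pseudo = x + A.T @ z                             # pseudo-data handed to the denoiser
        x_new = soft_threshold(pseudo, sigma)
        # Monte Carlo estimate of the denoiser divergence (the Onsager term)
        eta = rng.standard_normal(n)
        div = eta @ (soft_threshold(pseudo + eps * eta, sigma) - x_new) / eps
        z = y - A @ x_new + (div / m) * z                # residual with Onsager correction
        x = x_new
    return x

# toy problem: sparse signal, i.i.d. Gaussian measurement matrix
rng = np.random.default_rng(1)
n, m, k = 200, 80, 10
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = damp(A @ x0, A)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```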

535 citations


Journal ArticleDOI
TL;DR: A novel framework for free-breathing MRI called XD-GRASP is developed, which sorts dynamic data into extra motion-state dimensions using the self-navigation properties of radial imaging and reconstructs the multidimensional dataset using compressed sensing.
Abstract: Purpose To develop a novel framework for free-breathing MRI called XD-GRASP, which sorts dynamic data into extra motion-state dimensions using the self-navigation properties of radial imaging and reconstructs the multidimensional dataset using compressed sensing. Methods Radial k-space data are continuously acquired using the golden-angle sampling scheme and sorted into multiple motion-states based on respiratory and/or cardiac motion signals derived directly from the data. The resulting undersampled multidimensional dataset is reconstructed using a compressed sensing approach that exploits sparsity along the new dynamic dimensions. The performance of XD-GRASP is demonstrated for free-breathing three-dimensional (3D) abdominal imaging, two-dimensional (2D) cardiac cine imaging and 3D dynamic contrast-enhanced (DCE) MRI of the liver, comparing against reconstructions without motion sorting in both healthy volunteers and patients. Results XD-GRASP separates respiratory motion from cardiac motion in cardiac imaging, and respiratory motion from contrast enhancement in liver DCE-MRI, which improves image quality and reduces motion-blurring artifacts. Conclusion XD-GRASP represents a new use of sparsity for motion compensation and a novel way to handle motions in the context of a continuous acquisition paradigm. Instead of removing or correcting motion, extra motion-state dimensions are reconstructed, which improves image quality and also offers new physiological information of potential clinical value. Magn Reson Med 75:775–788, 2016. © 2015 Wiley Periodicals, Inc.
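
The data-sorting step described above can be illustrated with a minimal sketch: given a respiratory signal estimated per spoke, the continuously acquired radial spokes are binned into motion states; the compressed sensing reconstruction itself is not shown, and all names and sizes below are illustrative.

```python
import numpy as np

def sort_into_motion_states(kspace_spokes, resp_signal, n_states=4):
    """Bin continuously acquired radial spokes into respiratory motion states.

    kspace_spokes : (n_spokes, n_readout) complex k-space data, one row per spoke
    resp_signal   : (n_spokes,) respiratory amplitude estimated from the data itself
    """
    order = np.argsort(resp_signal)          # sort spokes from end-expiration to end-inspiration
    per_state = len(order) // n_states
    return [kspace_spokes[order[s * per_state:(s + 1) * per_state]] for s in range(n_states)]

# toy usage: 400 golden-angle spokes, 128 readout points, synthetic respiratory signal
rng = np.random.default_rng(0)
spokes = rng.standard_normal((400, 128)) + 1j * rng.standard_normal((400, 128))
resp = np.sin(2 * np.pi * np.arange(400) / 80)       # slow, breathing-like modulation
states = sort_into_motion_states(spokes, resp)
print([s.shape for s in states])                     # four motion states, 100 spokes each
```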

489 citations


Journal ArticleDOI
Junho Lee1, Gye-Tae Gil1, Yong Hoon Lee1
TL;DR: An efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency beamformers with large antenna arrays followed by a baseband MIMO processor is proposed.
Abstract: We propose an efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency (RF) beamformers with large antenna arrays followed by a baseband MIMO processor. A sparse signal recovery problem exploiting the sparse nature of mm-wave channels is formulated for channel estimation based on the parametric channel model with quantized angles of departures/arrivals (AoDs/AoAs), called the angle grids. The problem is solved by the orthogonal matching pursuit (OMP) algorithm employing a redundant dictionary consisting of array response vectors with finely quantized angle grids. We suggest the use of non-uniformly quantized angle grids and show that such grids reduce the coherence of the redundant dictionary. The lower and upper bounds of the sum-of-squared errors of the proposed OMP-based estimator are derived analytically: the lower bound is derived by considering the oracle estimator that assumes the knowledge of AoDs/AoAs, and the upper bound is derived based on the results of the OMP performance guarantees. The design of training vectors (or sensing matrix) is particularly important in hybrid MIMO systems, because the RF beamformer prevents the use of independent and identically distributed random training vectors, which are popular in compressed sensing. We design training vectors so that the total coherence of the equivalent sensing matrix is minimized for a given RF beamforming matrix, which is assumed to be unitary. It is observed that the estimation accuracy can be improved significantly by randomly permuting the columns of the RF beamforming matrix. The simulation results demonstrate the advantage of the proposed OMP with a redundant dictionary over the existing methods such as the least squares method and the OMP based on the virtual channel model.
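
A minimal sketch of the dictionary-plus-OMP idea, under strong simplifications (a single receive chain, on-grid paths, and a generic random combining matrix in place of the paper's designed training); names and dimensions are assumptions for the example.

```python
import numpy as np

def array_response(n_ant, theta):
    """Uniform linear array response vector (half-wavelength spacing) for angle theta."""
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta)) / np.sqrt(n_ant)

def omp(Phi, y, n_atoms):
    """Orthogonal matching pursuit over the columns of Phi."""
    support, residual = [], y.copy()
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    return sorted(support)

# toy setup: 64-antenna ULA, 3 paths, redundant dictionary on a grid uniform in sin(theta)
rng = np.random.default_rng(0)
n_ant, n_paths, grid_size, n_meas = 64, 3, 256, 40
grid = np.arcsin(np.linspace(-1, 1, grid_size, endpoint=False))
D = np.stack([array_response(n_ant, th) for th in grid], axis=1)      # redundant dictionary
true_idx = rng.choice(grid_size, n_paths, replace=False)
h = D[:, true_idx] @ (rng.standard_normal(n_paths) + 1j * rng.standard_normal(n_paths))
W = (rng.standard_normal((n_meas, n_ant)) + 1j * rng.standard_normal((n_meas, n_ant))) / np.sqrt(2 * n_ant)
y = W @ h                                                             # compressed training output
print(sorted(true_idx), omp(W @ D, y, n_paths))
```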

447 citations


Journal ArticleDOI
TL;DR: Experimental results using in vivo data for single/multicoil imaging as well as dynamic imaging confirmed that the proposed method outperforms the state-of-the-art pMRI and CS-MRI.
Abstract: Parallel MRI (pMRI) and compressed sensing MRI (CS-MRI) have been considered as two distinct reconstruction problems. Inspired by recent k-space interpolation methods, an annihilating filter-based low-rank Hankel matrix approach is proposed as a general framework for sparsity-driven k-space interpolation method which unifies pMRI and CS-MRI. Specifically, our framework is based on a novel observation that the transform domain sparsity in the primary space implies the low-rankness of weighted Hankel matrix in the reciprocal space. This converts pMRI and CS-MRI to a k-space interpolation problem using a structured matrix completion. Experimental results using in vivo data for single/multicoil imaging as well as dynamic imaging confirmed that the proposed method outperforms the state-of-the-art pMRI and CS-MRI.
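
The low-rank Hankel interpolation idea can be sketched on a 1-D k-space line: missing samples are filled by alternating a rank-truncated SVD of the Hankel matrix with re-imposition of the measured entries. The weighting and multi-coil structure of the paper's framework are omitted; this is only an illustration.

```python
import numpy as np

def hankel(x, p):
    return np.array([x[i:i + p] for i in range(len(x) - p + 1)])

def dehankel(H, n):
    """Average the anti-diagonals of a Hankel-like matrix back to a length-n vector."""
    x, counts = np.zeros(n, dtype=H.dtype), np.zeros(n)
    for i in range(H.shape[0]):
        for j in range(H.shape[1]):
            x[i + j] += H[i, j]
            counts[i + j] += 1
    return x / counts

def hankel_complete(x_obs, mask, rank, p=16, iters=100):
    """Fill missing k-space samples by alternating rank truncation and data consistency."""
    x = x_obs.copy()
    for _ in range(iters):
        U, s, Vh = np.linalg.svd(hankel(x, p), full_matrices=False)
        s[rank:] = 0                                  # project onto rank-limited Hankel matrices
        x = dehankel((U * s) @ Vh, len(x))
        x[mask] = x_obs[mask]                         # keep the measured samples
    return x

# toy example: sum of 3 complex exponentials (low-rank Hankel structure), 50% sampled
rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
x_true = sum(np.exp(2j * np.pi * f * t) for f in (0.07, 0.21, 0.33))
mask = rng.random(n) < 0.5
x_rec = hankel_complete(np.where(mask, x_true, 0), mask, rank=3)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```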

252 citations


Journal ArticleDOI
TL;DR: This paper investigates the frequency recovery problem in the presence of multiple measurement vectors (MMVs) which share the same frequency components, termed joint sparse frequency recovery, a problem arising naturally from array processing applications; it proposes an MMV atomic norm approach that is a convex relaxation and can be viewed as a continuous counterpart of the ℓ2,1 norm method.
Abstract: Frequency recovery/estimation from discrete samples of superimposed sinusoidal signals is a classic yet important problem in statistical signal processing. Its research has recently been advanced by atomic norm techniques that exploit signal sparsity, work directly on continuous frequencies, and completely resolve the grid mismatch problem of previous compressed sensing methods. In this paper, we investigate the frequency recovery problem in the presence of multiple measurement vectors (MMVs) which share the same frequency components, termed as joint sparse frequency recovery and arising naturally from array processing applications. To study the advantage of MMVs, we first propose an ${\ell }_{2,0}$ norm like approach by exploiting joint sparsity and show that the number of recoverable frequencies can be increased except in a trivial case. While the resulting optimization problem is shown to be rank minimization that cannot be practically solved, we then propose an MMV atomic norm approach that is a convex relaxation and can be viewed as a continuous counterpart of the ${\ell }_{2,1}$ norm method. We show that this MMV atomic norm approach can be solved by semidefinite programming. We also provide theoretical results showing that the frequencies can be exactly recovered under appropriate conditions. The above results either extend the MMV compressed sensing results from the discrete to the continuous setting or extend the recent super-resolution and continuous compressed sensing framework from the single to the multiple measurement vectors case. Extensive simulation results are provided to validate our theoretical findings and they also imply that the proposed MMV atomic norm approach can improve the performance in terms of reduced number of required measurements and/or relaxed frequency separation condition.
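
As a hedged sketch of the SDP route, the snippet below completes an MMV data matrix from partially observed rows by minimizing one common SDP parameterization of the atomic norm (normalization constants do not change the minimizer); cvxpy is assumed available, and all sizes are illustrative.

```python
import numpy as np
import cvxpy as cp

def mmv_atomic_norm_complete(Y_obs, observed_rows, n):
    """Complete the full n x L data matrix from observed rows by minimizing an SDP
    characterization of the MMV atomic norm (one common parameterization)."""
    L = Y_obs.shape[1]
    Z = cp.Variable((n + L, n + L), hermitian=True)   # block matrix [[T, X], [X^H, W]]
    constraints = [Z >> 0]
    constraints += [Z[i, j] == Z[i - 1, j - 1]        # top-left block must be Toeplitz
                    for i in range(1, n) for j in range(1, n)]
    constraints += [Z[int(r), n:] == Y_obs[t, :] for t, r in enumerate(observed_rows)]
    obj = cp.Minimize(cp.real(cp.trace(Z[:n, :n])) / (2 * n) + cp.real(cp.trace(Z[n:, n:])) / 2)
    cp.Problem(obj, constraints).solve(solver=cp.SCS)
    return Z.value[:n, n:]

# toy example: 2 shared frequencies, L = 3 measurement vectors, half the rows observed
rng = np.random.default_rng(0)
n, L = 32, 3
A = np.exp(2j * np.pi * np.outer(np.arange(n), [0.12, 0.31]))
X_true = A @ (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L)))
rows = np.sort(rng.choice(n, n // 2, replace=False))
X_hat = mmv_atomic_norm_complete(X_true[rows, :], rows, n)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```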

222 citations


Journal ArticleDOI
TL;DR: The first theoretical accuracy guarantee for 1-b compressed sensing with unknown covariance matrix of the measurement vectors is given, and the single-index model of non-linearity is considered, allowing the non-linearity to be discontinuous, not one-to-one and even unknown.
Abstract: We study the problem of signal estimation from non-linear observations when the signal belongs to a low-dimensional set buried in a high-dimensional space. A rough heuristic often used in practice postulates that the non-linear observations may be treated as noisy linear observations, and thus, the signal may be estimated using the generalized Lasso. This is appealing because of the abundance of efficient, specialized solvers for this program. Just as noise may be diminished by projecting onto the lower dimensional space, the error from modeling non-linear observations with linear observations will be greatly reduced when using the signal structure in the reconstruction. We allow general signal structure, only assuming that the signal belongs to some set $K \subset \mathbb {R} ^{n}$ . We consider the single-index model of non-linearity. Our theory allows the non-linearity to be discontinuous, not one-to-one and even unknown. We assume a random Gaussian model for the measurement matrix, but allow the rows to have an unknown covariance matrix. As special cases of our results, we recover near-optimal theory for noisy linear observations, and also give the first theoretical accuracy guarantee for 1-b compressed sensing with unknown covariance matrix of the measurement vectors.

216 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of line spectrum denoising and estimation is studied from partial and noisy observations of an ensemble of spectrally-sparse signals composed of the same set of continuous-valued frequencies.
Abstract: Compressed Sensing suggests that the required number of samples for reconstructing a signal can be greatly reduced if it is sparse in a known discrete basis, yet many real-world signals are sparse in a continuous dictionary. One example is the spectrally-sparse signal, which is composed of a small number of spectral atoms with arbitrary frequencies on the unit interval. In this paper we study the problem of line spectrum denoising and estimation with an ensemble of spectrally-sparse signals composed of the same set of continuous-valued frequencies from their partial and noisy observations. Two approaches are developed based on atomic norm minimization and structured covariance estimation, both of which can be solved efficiently via semidefinite programming. The first approach aims to estimate and denoise the set of signals from their partial and noisy observations via atomic norm minimization, and recover the frequencies via examining the dual polynomial of the convex program. We characterize the optimality condition of the proposed algorithm and derive the expected error rate for denoising, demonstrating the benefit of including multiple measurement vectors. The second approach aims to recover the population covariance matrix from the partially observed sample covariance matrix by motivating its low-rank Toeplitz structure without recovering the signal ensemble. Performance guarantee is derived with a finite number of measurement vectors. The frequencies can be recovered via conventional spectrum estimation methods such as MUSIC from the estimated covariance matrix. Finally, numerical examples are provided to validate the favorable performance of the proposed algorithms, with comparisons against several existing approaches.
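
The second route above (structured covariance estimation followed by conventional spectrum estimation) can be sketched as follows, simplified to fully observed snapshots: a Toeplitz covariance estimate is formed by diagonal averaging and the frequencies are read off with MUSIC; all names and sizes are illustrative.

```python
import numpy as np

def toeplitz_covariance(snapshots):
    """Sample covariance averaged along diagonals to enforce Toeplitz structure."""
    n = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    r = np.array([np.diagonal(R, -k).mean() for k in range(n)])
    return np.array([[r[i - j] if i >= j else np.conj(r[j - i]) for j in range(n)] for i in range(n)])

def music_spectrum(R, n_sources, grid):
    """Classic MUSIC pseudo-spectrum computed from a covariance estimate."""
    _, eigvec = np.linalg.eigh(R)
    En = eigvec[:, :-n_sources]                       # noise subspace (smallest eigenvalues)
    steer = np.exp(2j * np.pi * np.outer(np.arange(R.shape[0]), grid))
    return 1.0 / np.real(np.sum(np.abs(En.conj().T @ steer) ** 2, axis=0))

# toy example: two shared frequencies, 64 noisy snapshots of length 32
rng = np.random.default_rng(0)
n, L = 32, 64
A = np.exp(2j * np.pi * np.outer(np.arange(n), [0.12, 0.31]))
X = A @ (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L)))
X += 0.1 * (rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L)))
grid = np.linspace(0, 1, 2000, endpoint=False)
spec = music_spectrum(toeplitz_covariance(X), 2, grid)
peaks = np.where((spec > np.roll(spec, 1)) & (spec > np.roll(spec, -1)))[0]
print("estimated frequencies:", np.sort(grid[peaks[np.argsort(spec[peaks])[-2:]]]))
```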

202 citations


Journal ArticleDOI
TL;DR: A structured compressive sensing (SCS)-based spatio-temporal joint channel estimation scheme is proposed to reduce the required pilot overhead; it is capable of approaching the optimal oracle least squares estimator.
Abstract: Massive MIMO is a promising technique for future 5G communications due to its high spectrum and energy efficiency. To realize its potential performance gain, accurate channel estimation is essential. However, due to massive number of antennas at the base station (BS), the pilot overhead required by conventional channel estimation schemes will be unaffordable, especially for frequency division duplex (FDD) massive MIMO. To overcome this problem, we propose a structured compressive sensing (SCS)-based spatio-temporal joint channel estimation scheme to reduce the required pilot overhead, whereby the spatio-temporal common sparsity of delay-domain MIMO channels is leveraged. Particularly, we first propose the nonorthogonal pilots at the BS under the framework of CS theory to reduce the required pilot overhead. Then, an adaptive structured subspace pursuit (ASSP) algorithm at the user is proposed to jointly estimate channels associated with multiple OFDM symbols from the limited number of pilots, whereby the spatio-temporal common sparsity of MIMO channels is exploited to improve the channel estimation accuracy. Moreover, by exploiting the temporal channel correlation, we propose a space-time adaptive pilot scheme to further reduce the pilot overhead. Additionally, we discuss the proposed channel estimation scheme in multicell scenario. Simulation results demonstrate that the proposed scheme can accurately estimate channels with the reduced pilot overhead, and it is capable of approaching the optimal oracle least squares estimator.
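
The paper's adaptive structured subspace pursuit is not reproduced here; as a rough illustration of how a common delay-domain support across OFDM symbols can be exploited, the sketch below runs a simultaneous OMP that selects taps using correlations aggregated over all symbols. Dimensions and names are assumptions for the example.

```python
import numpy as np

def somp(Phi, Y, sparsity):
    """Simultaneous OMP: selects one common support for all columns of Y."""
    support, residual = [], Y.copy()
    for _ in range(sparsity):
        corr = np.sum(np.abs(Phi.conj().T @ residual), axis=1)     # aggregate over symbols
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ coeffs
    return sorted(support)

# toy example: length-64 delay-domain channel with 6 taps common to 3 OFDM symbols,
# observed through 32 non-orthogonal pilot measurements per symbol
rng = np.random.default_rng(0)
n, k, symbols, m = 64, 6, 3, 32
taps = rng.choice(n, k, replace=False)
H = np.zeros((n, symbols), complex)
H[taps, :] = rng.standard_normal((k, symbols)) + 1j * rng.standard_normal((k, symbols))
Phi = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
print(sorted(taps), somp(Phi, Phi @ H, k))
```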

196 citations


Proceedings ArticleDOI
Xin Yuan1
01 Sep 2016
TL;DR: The generalized alternating projection (GAP) algorithm is considered and the Alternating Direction Method of Multipliers (ADMM) framework with TV minimization for video and hyperspectral image compressive sensing under the CACTI and CASSI framework is derived.
Abstract: We consider the total variation (TV) minimization problem used for compressive sensing and solve it using the generalized alternating projection (GAP) algorithm. Extensive results demonstrate the high performance of proposed algorithm on compressive sensing, including two dimensional images, hyperspectral images and videos. We further derive the Alternating Direction Method of Multipliers (ADMM) framework with TV minimization for video and hyperspectral image compressive sensing under the CACTI and CASSI framework, respectively. Connections between GAP and ADMM are also provided.
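
A minimal plug-and-play style sketch in the spirit of GAP-TV: alternate an exact projection onto the measurement-consistent set with a TV denoising step (scikit-image's Chambolle TV denoiser is used as a stand-in for the TV proximal step); this is an illustration under simplifying assumptions, not the paper's CACTI/CASSI implementation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def gap_tv(y, A, shape, iters=60, tv_weight=0.05):
    """Alternate an exact projection onto {x : Ax = y} with a TV denoising step."""
    AAt_inv = np.linalg.inv(A @ A.T)
    x = A.T @ (AAt_inv @ y)                          # minimum-norm initialization
    for _ in range(iters):
        x = x + A.T @ (AAt_inv @ (y - A @ x))        # projection onto the measurement set
        x = denoise_tv_chambolle(x.reshape(shape), weight=tv_weight).ravel()
    return x.reshape(shape)

# toy example: 32 x 32 piecewise-constant image from 40% Gaussian measurements
rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[8:20, 10:26] = 1.0; img[22:30, 4:12] = 0.5
A = rng.standard_normal((int(0.4 * img.size), img.size)) / np.sqrt(img.size)
rec = gap_tv(A @ img.ravel(), A, img.shape)
print("relative error:", np.linalg.norm(rec - img) / np.linalg.norm(img))
```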

196 citations


Journal ArticleDOI
TL;DR: A framework and corresponding method for compressed sensing in infinite dimensions is introduced, together with two novel concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize an infinite-dimensional problem.
Abstract: We introduce and analyze a framework and corresponding method for compressed sensing in infinite dimensions. This extends the existing theory from finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary by demonstrating that existing finite-dimensional techniques are ill suited for solving a number of key problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist rate) sampling that allows for reconstructions in arbitrary bases. A conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. Central to this work is the introduction of two novel concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize an infinite-dimensional problem.

188 citations


Journal ArticleDOI
TL;DR: Two new algorithms for wideband spectrum sensing at sub-Nyquist sampling rates, for both single nodes and cooperative multiple nodes, are presented and evaluated on the TV white space, where pioneering work aimed at putting dynamic spectrum access into practice has been promoted.
Abstract: This paper presents two new algorithms for wideband spectrum sensing at sub-Nyquist sampling rates, for both single nodes and cooperative multiple nodes. In single-node spectrum sensing, a two-phase spectrum sensing algorithm based on compressive sensing is proposed to reduce the computational complexity and improve the robustness at secondary users (SUs). In the cooperative multiple nodes case, the signals received at SUs exhibit a sparsity property that yields a low-rank matrix of compressed measurements at the fusion center. This therefore leads to a two-phase cooperative spectrum sensing algorithm for cooperative multiple SUs based on low-rank matrix completion. In addition, the two proposed spectrum sensing algorithms are evaluated on the TV white space (TVWS), in which pioneering work aimed at enabling dynamic spectrum access into practice has been promoted by both the Federal Communications Commission and the U.K. Office of Communications. The proposed algorithms are tested on the real-time signals after they have been validated by the simulated signals in TVWS. The numerical results show that our proposed algorithms are more robust to channel noise and have lower computational complexity than the state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: A novel iterative imaging method for optical tomography that combines a nonlinear forward model based on the beam propagation method (BPM) with an edge-preserving three-dimensional (3-D) total variation (TV) regularizer and a time-reversal scheme that allows for an efficient computation of the derivative of the transmitted wave-field with respect to the distribution of the refractive index.
Abstract: Optical tomographic imaging requires an accurate forward model as well as regularization to mitigate missing-data artifacts and to suppress noise. Nonlinear forward models can provide more accurate interpretation of the measured data than their linear counterparts, but they generally result in computationally prohibitive reconstruction algorithms. Although sparsity-driven regularizers significantly improve the quality of reconstructed image, they further increase the computational burden of imaging. In this paper, we present a novel iterative imaging method for optical tomography that combines a nonlinear forward model based on the beam propagation method (BPM) with an edge-preserving three-dimensional (3-D) total variation (TV) regularizer. The central element of our approach is a time-reversal scheme, which allows for an efficient computation of the derivative of the transmitted wave-field with respect to the distribution of the refractive index. This time-reversal scheme together with our stochastic proximal-gradient algorithm makes it possible to optimize under a nonlinear forward model in a computationally tractable way, thus enabling a high-quality imaging of the refractive index throughout the object. We demonstrate the effectiveness of our method through several experiments on simulated and experimentally measured data.

Journal ArticleDOI
TL;DR: In this paper, the authors show that weighted ℓ1 minimization effectively merges the two approaches, promoting both sparsity and smoothness in reconstruction, and provide specific choices of weights in the ℓ1 objective to achieve approximation rates for functions with coefficient sequences in weighted ℓp spaces with p ≤ 1.

Journal ArticleDOI
TL;DR: This paper considers the line spectral estimation problem and proposes an iterative reweighted method which jointly estimates the sparse signals and the unknown parameters associated with the true dictionary, and achieves super resolution and outperforms other state-of-the-art methods in many cases of practical interest.
Abstract: Conventional compressed sensing theory assumes signals have sparse representations in a known dictionary. Nevertheless, in many practical applications such as line spectral estimation, the sparsifying dictionary is usually characterized by a set of unknown parameters in a continuous domain. To apply the conventional compressed sensing technique to such applications, the continuous parameter space has to be discretized to a finite set of grid points, based on which a “nominal dictionary” is constructed for sparse signal recovery. Discretization, however, inevitably incurs errors since the true parameters do not necessarily lie on the discretized grid. This error, also referred to as grid mismatch, leads to deteriorated recovery performance. In this paper, we consider the line spectral estimation problem and propose an iterative reweighted method which jointly estimates the sparse signals and the unknown parameters associated with the true dictionary. The proposed algorithm is developed by iteratively decreasing a surrogate function majorizing a given log-sum objective function, leading to a gradual and interweaved iterative process to refine the unknown parameters and the sparse signal. A simple yet effective scheme is developed for adaptively updating the regularization parameter that controls the tradeoff between the sparsity of the solution and the data fitting error. Theoretical analysis is conducted to justify the proposed method. Simulation results show that the proposed algorithm achieves super resolution and outperforms other state-of-the-art methods in many cases of practical interest.

Journal ArticleDOI
TL;DR: A graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstructions; it outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.

Journal ArticleDOI
TL;DR: For quantized affine measurements of the form $\mathrm{sign}(\langle a_{i}, x \rangle + b_{i})$, and if the vectors $a_{i}$ are random, an appropriate choice of the affine shifts $b_{i}$ allows norm recovery to be easily incorporated into existing methods for one-bit compressive sensing.
Abstract: Consider the recovery of an unknown signal $x$ from quantized linear measurements. In the one-bit compressive sensing setting, one typically assumes that $x$ is sparse, and that the measurements are of the form $\mathrm{sign}(\langle a_{i}, x \rangle ) \in \{\pm 1\}$. Since such measurements give no information on the norm of $x$, recovery methods typically assume that $\|x\|_{2}=1$. We show that if one allows more generally for quantized affine measurements of the form $\mathrm{sign}(\langle a_{i}, x \rangle + b_{i})$, and if the vectors $a_{i}$ are random, an appropriate choice of the affine shifts $b_{i}$ allows norm recovery to be easily incorporated into existing methods for one-bit compressive sensing. In addition, we show that for arbitrary fixed $x$ in the annulus $r \leq \|x\|_{2} \leq R$, one may estimate the norm $\|x\|_{2}$ up to additive error $\delta$ from $m \gtrsim R^{4} r^{-2} \delta^{-2}$ such binary measurements through a single evaluation of the inverse Gaussian error function. Finally, all of our recovery guarantees can be made universal over sparse vectors in the sense that with high probability, one set of measurements and thresholds can successfully estimate all sparse vectors $x$ in a Euclidean ball of known radius.
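
A small numerical sketch of the norm-recovery idea: with Gaussian measurement vectors and a constant affine shift $\tau$ (one possible dither choice, assumed here purely for illustration), the fraction of $+1$ outcomes equals $\Phi(\tau/\|x\|_{2})$, so the norm follows from one inverse Gaussian CDF evaluation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, m, tau = 100, 20000, 2.0
x = rng.standard_normal(n)
x *= 3.0 / np.linalg.norm(x)                 # ground-truth norm is 3
a = rng.standard_normal((m, n))
y = np.sign(a @ x + tau)                     # one-bit measurements with a fixed affine shift

# <a_i, x> ~ N(0, ||x||^2), so P(y_i = +1) = Phi(tau / ||x||_2)
p_hat = np.mean(y > 0)
norm_estimate = tau / norm.ppf(p_hat)        # single inverse Gaussian CDF evaluation
print("true norm: 3.0, estimated:", round(float(norm_estimate), 3))
```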

Journal ArticleDOI
TL;DR: This work develops compressed sensing strategies for computing the dynamic mode decomposition (DMD) from heavily subsampled or compressed data and demonstrates the invariance of the DMD algorithm to left and right unitary transformations when data and modes are sparse in some transform basis.
Abstract: This work develops compressed sensing strategies for computing the dynamic mode decomposition (DMD) from heavily subsampled or compressed data. The resulting DMD eigenvalues are equal to DMD eigenvalues from the full-state data. It is then possible to reconstruct full-state DMD eigenvectors using $\ell_1$-minimization or greedy algorithms. If full-state snapshots are available, it may be computationally beneficial to compress the data, compute DMD on the compressed data, and then reconstruct full-state modes by applying the compressed DMD transforms to full-state snapshots. These results rely on a number of theoretical advances. First, we establish connections between DMD on full-state and compressed data. Next, we demonstrate the invariance of the DMD algorithm to left and right unitary transformations. When data and modes are sparse in some transform basis, we show a similar invariance of DMD to measurement matrices that satisfy the restricted isometry property from compressed sensing. We demonstrate the success of this architecture on two model systems. In the first example, we construct a spatial signal from a sparse vector of Fourier coefficients with a linear dynamical system driving the coefficients. In the second example, we consider the double gyre flow field, which is a model for chaotic mixing in the ocean. A video abstract of this work may be found at: http://youtu.be/4tLSq_PEFms .
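
The eigenvalue-invariance claim can be checked with a compact sketch: exact DMD eigenvalues computed from randomly compressed snapshots of a linear system match those computed from the full-state data; the projection matrix, mode construction, and dimensions are illustrative.

```python
import numpy as np

def dmd_eigs(X, Xp, r):
    """Exact DMD: eigenvalues of the best-fit linear operator with Xp ≈ A X."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ Xp @ Vh.conj().T / s       # r x r projected operator
    return np.sort_complex(np.linalg.eigvals(Atilde))

# linear dynamics acting on r Fourier modes embedded in an n-dimensional state
rng = np.random.default_rng(0)
n, r, T, p = 256, 6, 60, 24
Lam = np.diag(np.exp(1j * rng.uniform(0.05, 0.4, r)))                    # true eigenvalues
Modes = np.fft.fft(np.eye(n))[:, rng.choice(n, r, replace=False)]        # sparse-in-Fourier modes
z0 = rng.standard_normal(r) + 1j * rng.standard_normal(r)
snaps = np.stack([Modes @ np.linalg.matrix_power(Lam, t) @ z0 for t in range(T)], axis=1)
X, Xp = snaps[:, :-1], snaps[:, 1:]

C = rng.standard_normal((p, n)) / np.sqrt(p)          # random compression/measurement matrix
print(np.allclose(dmd_eigs(X, Xp, r), dmd_eigs(C @ X, C @ Xp, r), atol=1e-6))
```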

Journal ArticleDOI
TL;DR: The results indicate that CS is in general not secure according to cryptographic standards, but may provide a useful built-in data obfuscation layer.
Abstract: In this paper, the security of the compressed sensing (CS) framework as a form of data confidentiality is analyzed. Two important properties of one-time random linear measurements acquired using a Gaussian independent identically distributed matrix are outlined: 1) the measurements reveal only the energy of the sensed signal and 2) only the energy of the measurements leaks information about the signal. An important consequence of the above facts is that CS provides information theoretic secrecy in a particular setting. Namely, a simple strategy based on the normalization of the Gaussian measurements achieves, at least in theory, perfect secrecy, enabling the use of CS as an additional security layer in privacy preserving applications. In the generic setting in which CS does not provide information theoretic secrecy, two alternative security notions linked to the difficulty of estimating the energy of the signal and distinguishing equal-energy signals are introduced. Useful bounds on the mean square error of any possible estimator and the probability of error of any possible detector are provided and compared with the simulations. The results indicate that CS is in general not secure according to cryptographic standards, but may provide a useful built-in data obfuscation layer.
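
A small simulation of the first property above: with a fresh i.i.d. Gaussian matrix per measurement, two different signals of equal energy produce statistically indistinguishable measurements, and normalizing each measurement vector removes the remaining energy leakage. Sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 128, 32, 2000

x1 = rng.standard_normal(n); x1 /= np.linalg.norm(x1)     # two different signals...
x2 = np.zeros(n); x2[:4] = 0.5                            # ...with the same unit energy

def one_time_measure(x):
    A = rng.standard_normal((m, n)) / np.sqrt(m)          # fresh Gaussian matrix every time
    return A @ x

Y1 = np.array([one_time_measure(x1) for _ in range(trials)])
Y2 = np.array([one_time_measure(x2) for _ in range(trials)])

# y = Ax ~ N(0, ||x||^2 / m * I): the two measurement ensembles are statistically
# identical, so only the signal energy can leak.
print(round(Y1.std(), 4), round(Y2.std(), 4))

# Normalizing each measurement vector removes even the energy leakage.
Y1n = Y1 / np.linalg.norm(Y1, axis=1, keepdims=True)
print(np.linalg.norm(Y1n, axis=1)[:3])                    # every normalized vector has unit norm
```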

Journal ArticleDOI
TL;DR: A projected iterative soft-thresholding algorithm (pISTA) and its acceleration pFISTA are proposed for CS-MRI image reconstruction, which exploit the sparsity of the magnetic resonance (MR) images under the redundant representation of tight frames.
Abstract: Compressed sensing (CS) has exhibited great potential for accelerating magnetic resonance imaging (MRI). In CS-MRI, we want to reconstruct a high-quality image from very few samples in a short time. In this paper, we propose a fast algorithm, called projected iterative soft-thresholding algorithm (pISTA), and its acceleration pFISTA for CS-MRI image reconstruction. The proposed algorithms exploit sparsity of the magnetic resonance (MR) images under the redundant representation of tight frames. We prove that pISTA and pFISTA converge to a minimizer of a convex function with a balanced tight frame sparsity formulation. The pFISTA introduces only one adjustable parameter, the step size, and we provide an explicit rule to set this parameter. Numerical experiment results demonstrate that pFISTA leads to faster convergence speeds than the state-of-art counterpart does, while achieving comparable reconstruction errors. Moreover, reconstruction errors incurred by pFISTA appear insensitive to the step size.
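
The paper's pISTA/pFISTA operate with redundant tight frames and a projected formulation; as a simplified illustration of the same soft-thresholding-plus-momentum structure, the sketch below runs a generic FISTA with an orthogonal DCT as the sparsifying transform and step size 1/L.

```python
import numpy as np
from scipy.fft import dct, idct

def fista(y, A, lam=0.01, iters=200):
    """FISTA for min_x 0.5 ||Ax - y||^2 + lam ||DCT(x)||_1 with an orthogonal DCT."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the data term
    x, z, t = np.zeros(n), np.zeros(n), 1.0
    for _ in range(iters):
        grad = A.T @ (A @ z - y)
        c = dct(z - grad / L, norm='ortho')          # gradient step, then move to coefficients
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0)   # soft-threshold
        x_new = idct(c, norm='ortho')
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2     # FISTA momentum schedule
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# toy example: signal that is sparse in the DCT domain, Gaussian measurements
rng = np.random.default_rng(0)
n, m, k = 256, 100, 8
coeffs = np.zeros(n); coeffs[rng.choice(n, k, replace=False)] = 3 * rng.standard_normal(k)
x0 = idct(coeffs, norm='ortho')
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = fista(A @ x0, A)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```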

Journal ArticleDOI
TL;DR: The framework described here leverages the statistical structure of random processes to enable signal compression and offers an alternative perspective at sparsity-agnostic inference.
Abstract: Compressed sensing deals with the reconstruction of signals from sub-Nyquist samples by exploiting the sparsity of their projections onto known subspaces. In contrast, this article is concerned with the reconstruction of second-order statistics, such as covariance and power spectrum, even in the absence of sparsity priors. The framework described here leverages the statistical structure of random processes to enable signal compression and offers an alternative perspective at sparsity-agnostic inference. Capitalizing on parsimonious representations, we illustrate how compression and reconstruction tasks can be addressed in popular applications such as power-spectrum estimation, incoherent imaging, direction-of-arrival estimation, frequency estimation, and wideband spectrum sensing.

Journal ArticleDOI
TL;DR: A magnetic resonance imaging (MRI) reconstruction algorithm is developed that uses decoupled iterations alternating between a denoising step realized by the BM3D algorithm and a reconstruction step through an optimization formulation; the decoupling allows a varying regularization parameter, which contributes to the reconstruction performance.
Abstract: The block matching 3D (BM3D) is an efficient image model, which has found few applications other than its niche area of denoising. We will develop a magnetic resonance imaging (MRI) reconstruction algorithm, which uses decoupled iterations alternating over a denoising step realized by the BM3D algorithm and a reconstruction step through an optimization formulation. The decoupling of the two steps allows the adoption of a strategy with a varying regularization parameter, which contributes to the reconstruction performance. This new iterative algorithm efficiently harnesses the power of the nonlocal, image-dependent BM3D model. The MRI reconstruction performance of the proposed algorithm is superior to state-of-the-art algorithms from the literature. A convergence analysis of the algorithm is also presented.

Journal ArticleDOI
TL;DR: The results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used.
Abstract: Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Perot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissues structures with suitable sparsity-constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.

Journal ArticleDOI
TL;DR: A specifically designed uniform linear array structure with associated CS-based underdetermined DOA estimation is presented to exploit the difference co-array concept in the spatio-spectral domain, leading to a significant increase in degrees of freedom.
Abstract: Direction of arrival (DOA) estimation from the perspective of sparse signal representation has attracted tremendous attention in past years, where the underlying spatial sparsity reconstruction problem is linked to the compressive sensing (CS) framework. Although this is an area with ongoing intensive research and new methods and results are reported regularly, it is time to have a review about the basic approaches and methods for CS-based DOA estimation, in particular for the underdetermined case. We start from the basic time-domain CS-based formulation for narrowband arrays and then move to the case for recently developed methods for sparse arrays based on the co-array concept. After introducing two specifically designed structures (the two-level nested array and the co-prime array) for optimizing the virtual sensors corresponding to the difference co-array, this CS-based DOA estimation approach is extended to the wideband case by employing the group sparsity concept, where a much larger physical aperture can be achieved by allowing a larger unit inter-element spacing and therefore leading to further improved performance. Finally, a specifically designed uniform linear array structure with associated CS-based underdetermined DOA estimation is presented to exploit the difference co-array concept in the spatio-spectral domain, leading to a significant increase in degrees of freedom. Representative simulation results for typical narrowband and wideband scenarios are provided to demonstrate their performance.
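
A small sketch of the co-array idea referenced above: a two-level nested array with N1 + N2 physical sensors yields a difference co-array with O(N1 N2) consecutive virtual lags, which is what enables underdetermined DOA estimation; the sensor counts are illustrative.

```python
import numpy as np

def nested_array(n1, n2):
    """Sensor positions (in half-wavelength units) of a two-level nested array."""
    return np.concatenate([np.arange(1, n1 + 1),                 # dense level, spacing 1
                           (n1 + 1) * np.arange(1, n2 + 1)])     # sparse level, spacing n1 + 1

pos = nested_array(3, 3)                       # 6 physical sensors
lags = np.unique(pos[:, None] - pos[None, :])  # difference co-array (virtual sensor positions)
print("physical sensors:", len(pos))
print("unique co-array lags:", len(lags))      # 23 consecutive lags from -11 to 11
print(lags)
```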

Journal ArticleDOI
TL;DR: This paper provides an in-depth survey on compressive sensing techniques and classifies these techniques according to which process they target, namely, sparse representation, sensing matrix, or recovery algorithms.

Journal ArticleDOI
TL;DR: The simulation results obtained from sparse channel estimation and echo cancelation demonstrate that the proposed sparse SM-NLMS algorithms are superior to the previously proposed NLMS, SM-NLMS, as well as zero-attracting NLMS (ZA-NLMS) algorithms.
Abstract: In this paper, we propose a type of sparsity-aware set-membership normalized least mean square (SM-NLMS) algorithm for sparse channel estimation and echo cancelation. The proposed algorithm incorporates an ℓ1-norm penalty into the cost function of the conventional SM-NLMS algorithm to exploit the sparsity of the sparse systems, which is denoted as zero-attracting SM-NLMS (ZASM-NLMS) algorithm. Furthermore, an improved ZASM-NLMS algorithm is also derived by using a log-sum function instead of the ℓ1-norm penalty in the ZASM-NLMS, which is denoted as reweighted ZASM-NLMS (RZASM-NLMS) algorithm. These zero-attracting SM-NLMS algorithms are equivalent to adding shrinkages in their update equations, which result in fast convergence speed and low estimation error when most of the unknown channel coefficients are zero or close to zero. These proposed algorithms are described and analyzed in detail, while the performances of these algorithms are investigated by using computer simulations. The simulation results obtained from sparse channel estimation and echo cancelation demonstrate that the proposed sparse SM-NLMS algorithms are superior to the previously proposed NLMS, SM-NLMS as well as zero-attracting NLMS (ZA-NLMS) algorithms.
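
A compact sketch of a zero-attracting set-membership NLMS update of the kind described above: the filter is updated only when the output error exceeds the error bound, and an ℓ1-penalty gradient term shrinks small coefficients toward zero. Parameter values and the exact placement of the attractor are assumptions for the illustration, not the authors' exact recursion.

```python
import numpy as np

def za_sm_nlms(x, d, order=32, gamma=0.02, rho=2e-4, eps=1e-6):
    """Zero-attracting set-membership NLMS for sparse system identification (sketch)."""
    w = np.zeros(order)
    for i in range(order - 1, len(x)):
        u = x[i - order + 1:i + 1][::-1]              # regressor, most recent sample first
        e = d[i] - w @ u
        if abs(e) > gamma:                            # set-membership test: update only if needed
            mu = 1.0 - gamma / abs(e)
            w = w + mu * e * u / (u @ u + eps) - rho * np.sign(w)   # NLMS step + zero attractor
    return w

# toy example: sparse 32-tap echo path driven by white noise, small observation noise
rng = np.random.default_rng(0)
h = np.zeros(32); h[[2, 7, 20]] = [0.8, -0.5, 0.3]
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = za_sm_nlms(x, d)
print("misalignment (dB):", round(10 * np.log10(np.sum((w - h) ** 2) / np.sum(h ** 2)), 1))
```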

Journal ArticleDOI
TL;DR: The long short-term memory (LSTM), a data-driven model for sequence modeling that is deep in time, is proposed to capture the unknown dependency among the sparse vectors; it significantly outperforms the general MMV solver (the Simultaneous Orthogonal Matching Pursuit) and a number of the model-based Bayesian methods.
Abstract: Several recent studies on the compressed sensing problem with Multiple Measurement Vectors (MMVs) under the condition that the vectors in the different channels are jointly sparse have been recently carried. In this paper, this condition is relaxed. Instead, these sparse vectors are assumed to depend on each other but this dependency is assumed unknown. We capture this dependency by computing the conditional probability of each entry in each vector being non-zero, given the “ residuals ” of all previous vectors. To estimate these probabilities, we propose the use of the long short-term memory (LSTM), a data-driven model for sequence modeling that is deep in time. To learn the model parameters, we minimize a cross-entropy cost function. To reconstruct the sparse vectors at the decoder, we propose a greedy solver that uses the above model to estimate the conditional probabilities. By performing extensive experiments on two real world datasets, we show that the proposed method significantly outperforms the general MMV solver (the Simultaneous Orthogonal Matching Pursuit (SOMP)) and a number of the model-based Bayesian methods. The proposed method does not add any complexity to the general compressive sensing encoder. The trained model is used at the decoder only. As the proposed method is a data-driven method, it is only applicable when training data is available. In many applications however, training data is indeed available, e.g., in recorded images for which our method is successfully applied as to be reported in this paper.

Journal ArticleDOI
TL;DR: A sophisticated sparse image fusion algorithm named "jointly sparse fusion of images" (J-SparseFI) is proposed, which overcomes three drawbacks of existing sparse image fusion algorithms and offers a practical solution to the spectral range mismatch between the Pan and multispectral images.
Abstract: Recently, sparse signal representation of image patches has been explored to solve the pansharpening problem. Although these proposed sparse-reconstruction-based methods lead to promising results, three issues remained unsolved: 1) high computational cost; 2) no consideration given to the possibility of mutually correlated information in different multispectral channels; and 3) requirement that the spectral responses of the panchromatic (Pan) image and the multispectral image cover the same wavelength range, which is not necessarily valid for most sensors. In this paper, we propose a sophisticated sparse image fusion algorithm, which is named “jointly sparse fusion of images” (J-SparseFI). It is based on the earlier proposed sparse fusion of images (SparseFI) algorithm and overcomes the aforementioned three drawbacks of the existing sparse image fusion algorithms. The computational problem is handled by reducing the problem size and by proposing a fully parallelizable scheme. Moreover, J-SparseFI exploits the possible signal structure correlations between multispectral channels by introducing the joint sparsity model (JSM) and sharpening the highly correlated adjacent multispectral channels together. This is done by exploiting the distributed compressive sensing theory that restricts the solution of an underdetermined system by considering an ensemble of signals being jointly sparse. J-SparseFI also offers a practical solution to overcome spectral range mismatch between the Pan and multispectral images. By means of sensor spectral response and channel mutual correlation analysis, the multispectral channels are assigned to primary groups of joint channels, secondary groups of joint channels, and individual channels. Primary groups of joint channels, individual channels, and secondary groups of joint channels are then reconstructed sequentially, by the JSM or by modified SparseFI, using a dictionary trained from the Pan image or previously reconstructed high-resolution multispectral channels. A recipe of how to choose appropriate algorithm parameters, including the most crucial regularization parameter, is provided. The algorithm is evaluated and validated using WorldView-2-like images that are simulated using very high resolution airborne HySpex hyperspectral imagery and further practically demonstrated using real WorldView-2 images. The algorithm's performance is compared with other state-of-the-art methods. Visual and quantitative analyses demonstrate the high quality of the proposed method. In particular, the analysis of the difference images suggests that J-SparseFI is superior in image resolution recovery.

Posted Content
TL;DR: A CANDECOMP/PARAFAC (CP) decomposition-based method is proposed for channel parameter estimation (including angles of arrival/departure, time delays, and fading coefficients); the analysis reveals that the uniqueness of the CP decomposition can be guaranteed even when the size of the tensor is small.
Abstract: We consider the problem of downlink channel estimation for millimeter wave (mmWave) MIMO-OFDM systems, where both the base station (BS) and the mobile station (MS) employ large antenna arrays for directional precoding/beamforming. Hybrid analog and digital beamforming structures are employed in order to offer a compromise between hardware complexity and system performance. Different from most existing studies that are concerned with narrowband channels, we consider estimation of wideband mmWave channels with frequency selectivity, which is more appropriate for mmWave MIMO-OFDM systems. By exploiting the sparse scattering nature of mmWave channels, we propose a CANDECOMP/PARAFAC (CP) decomposition-based method for channel parameter estimation (including angles of arrival/departure, time delays, and fading coefficients). In our proposed method, the received signal at the BS is expressed as a third-order tensor. We show that the tensor has the form of a low-rank CP decomposition, and the channel parameters can be estimated from the associated factor matrices. Our analysis reveals that the uniqueness of the CP decomposition can be guaranteed even when the size of the tensor is small. Hence the proposed method has the potential to achieve substantial training overhead reduction. We also develop Cramer-Rao bound (CRB) results for channel parameters, and compare our proposed method with a compressed sensing-based method. Simulation results show that the proposed method attains mean square errors that are very close to their associated CRBs, and presents a clear advantage over the compressed sensing-based method in terms of both estimation accuracy and computational complexity.
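
The core operation the method builds on, fitting a low-rank CP model to a third-order tensor, can be sketched with plain alternating least squares (the mapping from CP factors to angles, delays, and gains is not shown); dimensions and names are illustrative.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product (Kronecker product of matching columns)."""
    return np.stack([np.kron(A[:, r], B[:, r]) for r in range(A.shape[1])], axis=1)

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, iters=100, seed=0):
    """Plain alternating least squares for a rank-`rank` CP model of a 3-way tensor."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, rank)) + 1j * rng.standard_normal((d, rank))
               for d in T.shape)
    for _ in range(iters):
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C

# toy example: rank-3 complex tensor, e.g. (receive antennas) x (transmit antennas) x (subcarriers)
rng = np.random.default_rng(1)
R, dims = 3, (16, 16, 14)
A0, B0, C0 = (rng.standard_normal((d, R)) + 1j * rng.standard_normal((d, R)) for d in dims)
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, R)
print("fit error:", np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T))
```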

Journal ArticleDOI
TL;DR: This paper examines the impact on the required training overhead when partial support information is applied within a weighted ℓ1 minimization framework, and analytically shows that a sharp estimate of the reduced overhead size can be successfully obtained.
Abstract: Massive multiple-input–multiple-output (MIMO) is a promising technique for providing unprecedented spectral efficiency. However, it has been well recognized that the excessive training overhead required for obtaining the channel side information is a major handicap in frequency-division duplexing (FDD) massive MIMO. Several attempts have been made to reduce this training overhead by exploiting the sparsity structures of massive MIMO channels. So far, however, there has been little discussion about how to exploit the partial support information of these channels to achieve further overhead reductions. Such information, which is a set of indices of the significant elements of a channel vector, can be acquired in advance and hence is an important option to explore. In this paper, we examine the impact on the required training overhead when this information is applied within a weighted $\ell_{1}$ minimization framework, and analytically show that a sharp estimate of the reduced overhead size can be successfully obtained. Furthermore, we examine how the accuracy of the partial support information impacts the achievable overhead reduction. Numerical results for a wide range of sparsity and partial support information reliability levels are presented to quantify our findings and main conclusions.
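
A small cvxpy sketch of the weighted ℓ1 idea analyzed above: entries covered by the partial support information receive a smaller weight, which typically reduces the number of measurements needed compared with unweighted ℓ1; weight values and problem sizes are illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 128, 40, 12
support = rng.choice(n, k, replace=False)
h = np.zeros(n); h[support] = rng.standard_normal(k)     # sparse channel vector
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ h

# partial support information: 8 of the 12 true indices are assumed known in advance
w = np.ones(n)
w[support[:8]] = 0.1                                     # down-weight the known support

x_w = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x_w))), [A @ x_w == y]).solve()
x_u = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x_u)), [A @ x_u == y]).solve()
print("weighted l1 error:  ", np.linalg.norm(x_w.value - h) / np.linalg.norm(h))
print("unweighted l1 error:", np.linalg.norm(x_u.value - h) / np.linalg.norm(h))
```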

Proceedings ArticleDOI
10 Jul 2016
TL;DR: Under mild technical conditions, the results show that the limiting MI and MMSE are equal to the values predicted by the replica method from statistical physics, which resolves a well-known problem that has remained open for over a decade.
Abstract: This paper considers the fundamental limit of compressed sensing for i.i.d. signal distributions and i.i.d. Gaussian measurement matrices. Its main contribution is a rigorous characterization of the asymptotic mutual information (MI) and minimum mean-square error (MMSE) in this setting. Under mild technical conditions, our results show that the limiting MI and MMSE are equal to the values predicted by the replica method from statistical physics. This resolves a well-known problem that has remained open for over a decade.