
Showing papers on "Compressed sensing" published in 2019


Journal ArticleDOI
TL;DR: A novel CS framework that uses generative adversarial networks (GAN) to model the (low-dimensional) manifold of high-quality MR images, retrieving higher-quality images with improved fine texture details compared with conventional wavelet-based and dictionary-learning-based CS schemes as well as with deep-learning-based schemes using pixel-wise training.
Abstract: Undersampled magnetic resonance image (MRI) reconstruction is typically an ill-posed linear inverse task. The time and resource intensive computations require tradeoffs between accuracy and speed. In addition, state-of-the-art compressed sensing (CS) analytics are not cognizant of the image diagnostic quality. To address these challenges, we propose a novel CS framework that uses generative adversarial networks (GAN) to model the (low-dimensional) manifold of high-quality MR images. Leveraging a mixture of least-squares (LS) GANs and pixel-wise $\ell _{1}/\ell _{2}$ cost, a deep residual network with skip connections is trained as the generator that learns to remove the aliasing artifacts by projecting onto the image manifold. The LSGAN learns the texture details, while the $\ell _{1}/\ell _{2}$ cost suppresses high-frequency noise. A discriminator network, which is a multilayer convolutional neural network (CNN), plays the role of a perceptual cost that is then jointly trained based on high-quality MR images to score the quality of retrieved images. In the operational phase, an initial aliased estimate (e.g., simply obtained by zero-filling) is propagated into the trained generator to output the desired reconstruction. This demands a very low computational overhead. Extensive evaluations are performed on a large contrast-enhanced MR dataset of pediatric patients. Images rated by expert radiologists corroborate that GANCS retrieves higher quality images with improved fine texture details compared with conventional wavelet-based and dictionary-learning-based CS schemes as well as with deep-learning-based schemes using pixel-wise training. In addition, it offers reconstruction times of under a few milliseconds, which are two orders of magnitude faster than the current state-of-the-art CS-MRI schemes.
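
As a concrete illustration of the mixed objective (an LSGAN adversarial term plus the pixel-wise $\ell_1/\ell_2$ cost), here is a minimal PyTorch sketch; the stand-in generator, discriminator, weights, and sizes are assumptions for illustration, not the paper's architecture or hyperparameters.

```python
# Minimal sketch of an LSGAN + pixel-wise l1/l2 generator objective.
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, zero_filled, target,
                   w_l1=1.0, w_l2=1.0, w_adv=0.01):
    recon = generator(zero_filled)       # map the aliased input toward the image manifold
    pixel = w_l1 * F.l1_loss(recon, target) + w_l2 * F.mse_loss(recon, target)
    score = discriminator(recon)         # LSGAN: push generator scores toward 1
    adv = F.mse_loss(score, torch.ones_like(score))
    return pixel + w_adv * adv

G = torch.nn.Conv2d(1, 1, 3, padding=1)                                    # stand-in generator
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 1))  # stand-in critic
x = torch.randn(2, 1, 64, 64)                                             # toy zero-filled batch
loss = generator_loss(G, D, x, x)
loss.backward()
```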

468 citations


Journal ArticleDOI
TL;DR: This paper considers a “vector AMP” (VAMP) algorithm and shows that VAMP has a rigorous scalar state-evolution that holds under a much broader class of large random matrices A: those that are right-orthogonally invariant.
Abstract: The standard linear regression (SLR) problem is to recover a vector $\mathrm {x}^{0}$ from noisy linear observations $\mathrm {y}=\mathrm {Ax}^{0}+\mathrm {w}$ . The approximate message passing (AMP) algorithm proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices A, its per-iteration behavior is rigorously characterized by a scalar state-evolution whose fixed points, when unique, are Bayes optimal. The AMP algorithm, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a “vector AMP” (VAMP) algorithm and shows that VAMP has a rigorous scalar state-evolution that holds under a much broader class of large random matrices A: those that are right-orthogonally invariant. After performing an initial singular value decomposition (SVD) of A, the per-iteration complexity of VAMP is similar to that of AMP. In addition, the fixed points of VAMP’s state evolution are consistent with the replica prediction of the minimum mean-squared error derived by Tulino, Caire, Verdu, and Shamai. Numerical experiments are used to confirm the effectiveness of VAMP and its consistency with state-evolution predictions.
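
The two-stage structure (separable denoising followed by an LMMSE step, with precision messages passed between them) can be sketched in a few lines of NumPy. This is a toy sketch under assumptions: a soft-threshold denoiser, a known noise precision, a direct solve in place of the paper's SVD-accelerated LMMSE update, and no damping.

```python
# Toy VAMP iteration for sparse standard linear regression.
import numpy as np

rng = np.random.default_rng(0)
N, M, gamma_w = 400, 200, 1e4                    # gamma_w: noise precision (assumed known)
x0 = rng.standard_normal(N) * (rng.random(N) < 0.1)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x0 + rng.standard_normal(M) / np.sqrt(gamma_w)

soft = lambda r, t: np.sign(r) * np.maximum(np.abs(r) - t, 0.0)
clip = lambda a: float(np.clip(a, 1e-3, 1 - 1e-3))
AtA, Aty, I = A.T @ A, A.T @ y, np.eye(N)
r1, gamma1 = np.zeros(N), 1e-2
for _ in range(50):
    # Stage 1: separable denoising, threshold scaled to the effective noise level
    x1 = soft(r1, 1.5 / np.sqrt(gamma1))
    a1 = clip(np.mean(np.abs(x1) > 0))           # average denoiser divergence
    gamma2 = gamma1 * (1 - a1) / a1
    r2 = (x1 / a1 - r1) * gamma1 / gamma2
    # Stage 2: LMMSE estimate under the linear observation model
    C = np.linalg.inv(gamma_w * AtA + gamma2 * I)
    x2 = C @ (gamma_w * Aty + gamma2 * r2)
    a2 = clip(gamma2 * np.trace(C) / N)
    gamma1 = gamma2 * (1 - a2) / a2
    r1 = (x2 / a2 - r2) * gamma2 / gamma1
print("NMSE:", np.linalg.norm(x1 - x0) / np.linalg.norm(x0))
```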

263 citations


Journal ArticleDOI
TL;DR: A novel Deep Residual Reconstruction Network (DR2-Net) to reconstruct the image from its Compressively Sensed measurement by outperforms traditional iterative methods and recent deep learning-based methods by large margins at measurement rates 0.01, 0.1, and 0.25.

242 citations


Posted Content
TL;DR: This paper proposes a novel channel estimation protocol for the RIS-aided multi-user multi-input multi-output (MIMO) system to estimate the cascaded channel, which consists of the channels from the base station to the RIS and from the RIS to the user.
Abstract: Channel acquisition is one of the main challenges for the deployment of reconfigurable intelligent surface (RIS) aided communication systems. This is because a RIS has a large number of reflective elements, which are passive devices without active transmitting/receiving and signal processing abilities. In this paper, we study the uplink channel estimation for the RIS-aided multi-user multi-input multi-output (MIMO) system. Specifically, we propose a novel channel estimation protocol for the above system to estimate the cascaded channel, which consists of the channels from the base station (BS) to the RIS and from the RIS to the user. Further, we recognize that the cascaded channels are typically sparse, which allows us to formulate the channel estimation problem as a sparse channel matrix recovery problem using the compressive sensing (CS) technique, with which we can achieve robust channel estimation with limited training overhead. In particular, the sparse channel matrices of the cascaded channels of all users have a common row-column-block sparsity structure due to the common channel between the BS and the RIS. By considering such common sparsity, we further propose a two-step multi-user joint channel estimator. In the first step, by considering common column-block sparsity, we project the signal into the common column subspace to reduce complexity, quantization error, and noise level. In the second step, by considering common row-block sparsity, we use all the projected signals to formulate a multi-user joint sparse matrix recovery problem, and we propose an iterative approach to solve this non-convex problem efficiently. Moreover, the optimization of the training reflection sequences at the RIS is studied to improve the estimation performance.
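
The common row sparsity across users is the kind of structure that simultaneous greedy recovery exploits. As a hedged stand-in for the paper's two-step joint estimator, here is a simultaneous OMP (SOMP) sketch in NumPy that selects atoms by aggregate correlation across all users' measurements; sizes and the grid dictionary are illustrative.

```python
# SOMP sketch: recover a row-sparse X (columns = users) from Y = Phi @ X.
import numpy as np

def somp(Y, Phi, k):
    residual, support = Y.copy(), []
    for _ in range(k):
        corr = np.linalg.norm(Phi.T @ residual, axis=1)  # aggregate over users
        support.append(int(np.argmax(corr)))
        sub = Phi[:, support]
        X_s, *_ = np.linalg.lstsq(sub, Y, rcond=None)    # joint least-squares refit
        residual = Y - sub @ X_s
    X = np.zeros((Phi.shape[1], Y.shape[1]))
    X[support] = X_s
    return X

rng = np.random.default_rng(1)
G, T, U, k = 128, 40, 4, 5                   # grid size, pilot length, users, sparsity
Phi = rng.standard_normal((T, G)) / np.sqrt(T)
X_true = np.zeros((G, U))
X_true[rng.choice(G, k, replace=False)] = rng.standard_normal((k, U))  # shared support
Y = Phi @ X_true + 0.01 * rng.standard_normal((T, U))
X_hat = somp(Y, Phi, k)
```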

206 citations


Journal ArticleDOI
TL;DR: A comprehensive study and state-of-the-art review of compressive sensing algorithms used in imaging, radar, speech recognition, and data acquisition, along with some open research challenges.
Abstract: Nowadays, a large amount of information has to be transmitted or processed. This implies high-power processing, large memory density, and increased energy consumption. In several applications, such as imaging, radar, speech recognition, and data acquisition, the signals involved can be considered sparse or compressible in some domain. The compressive sensing theory could be a proper candidate to deal with these constraints. It can be used to recover sparse or compressible signals with fewer measurements than the traditional methods. Two problems must be addressed by compressive sensing theory: design of the measurement matrix and development of an efficient sparse recovery algorithm. These algorithms are usually classified into three categories: convex relaxation, non-convex optimization techniques, and greedy algorithms. This paper aims to provide a comprehensive study and a state-of-the-art review of these algorithms for researchers who wish to develop and use them. Moreover, a wide range of compressive sensing applications is summarized and some open research challenges are presented.
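
To make the "convex relaxation" category concrete, here is a minimal NumPy sketch of ISTA for the $\ell_1$-regularized least-squares problem; the sizes, step size, and regularization weight are illustrative assumptions.

```python
# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 (convex relaxation class).
import numpy as np

def ista(A, y, lam=0.05, iters=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 256)) / np.sqrt(80)
x0 = np.zeros(256); x0[rng.choice(256, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x0 + 0.01 * rng.standard_normal(80))
```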

169 citations


Journal ArticleDOI
TL;DR: Simulation results verify the effectiveness and reliability of the proposed image compression and encryption algorithm with considerable compression and security performance.
Abstract: A linear image encryption system is vulnerable to chosen-plaintext attacks. To overcome this weakness and reduce the correlation among pixels of the encrypted image, an effective image compression and encryption algorithm based on a chaotic system and compressive sensing is proposed. The original image is first permuted by the Arnold transform to reduce the block effect in the compression process, and the resulting image is then simultaneously compressed and encrypted by compressive sensing. Moreover, a bitwise XOR operation based on the chaotic system is performed on the measurements to change the pixel values, and a pixel scrambling method is employed to disturb the positions of pixels. Besides, the keys used in the chaotic systems are related to the plaintext image. Simulation results verify the effectiveness and reliability of the proposed image compression and encryption algorithm with considerable compression and security performance.
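
The chaos-driven diffusion step can be illustrated with a toy NumPy sketch: a logistic-map key stream XORed with quantized CS measurements. The map parameter, seed, and byte quantization are illustrative; the plaintext-dependent keying and the Arnold permutation are omitted.

```python
# Logistic-map key stream XORed with quantized measurements (diffusion step).
import numpy as np

def logistic_keystream(x0, n, mu=3.99, burn=500):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for _ in range(burn):                  # discard the transient for better mixing
        x = mu * x * (1 - x)
    for i in range(n):
        x = mu * x * (1 - x)
        out[i] = int(x * 256) % 256        # quantize the chaotic state to a byte
    return out

measurements = np.random.default_rng(3).integers(0, 256, 1024, dtype=np.uint8)
key = logistic_keystream(x0=0.3456, n=measurements.size)
cipher = measurements ^ key                # bitwise XOR diffusion
assert np.array_equal(cipher ^ key, measurements)   # the same key stream decrypts
```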

164 citations


Journal ArticleDOI
TL;DR: In this paper, a sparse Bayesian learning (SBL)-based method for estimating the direction of arrival (DOA) in a multiple-input and multiple-output (MIMO) radar system with unknown mutual coupling effect between antennas is investigated.
Abstract: In a practical radar with multiple antennas, antenna imperfections degrade the system performance. In this paper, the problem of estimating the direction of arrival (DOA) in a multiple-input multiple-output (MIMO) radar system with an unknown mutual coupling effect between antennas is investigated. To exploit the target sparsity in the spatial domain, compressed sensing based methods have been proposed that discretize the detection area to form a dictionary matrix; this discretization introduces an off-grid gap. In this paper, different from existing DOA estimation methods, both the off-grid gap due to the sparse sampling and the unknown mutual coupling effect between antennas are considered at the same time, and a novel sparse system model for DOA estimation is formulated. Then, a novel sparse Bayesian learning (SBL)-based method named sparse Bayesian learning with mutual coupling (SBLMC) is proposed, where an expectation-maximization-based method is established to estimate all the unknown parameters, including the noise variance, the mutual coupling vectors, the off-grid vector, and the variance vector of scattering coefficients. Additionally, the prior distributions for all the unknown parameters are theoretically derived. With regard to DOA estimation performance, the proposed SBLMC method outperforms state-of-the-art methods in MIMO radar with unknown mutual coupling, while keeping acceptable computational complexity.
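
The "off-grid gap" can be made concrete with a small NumPy sketch: a uniform-linear-array steering dictionary over a coarse grid plus its angular derivative, so that a first-order Taylor term absorbs the offset between the true DOA and the nearest grid point. Array size, grid spacing, and angles are illustrative; mutual coupling and the SBLMC inference itself are beyond this sketch.

```python
# On-grid dictionary A plus derivative B for a first-order off-grid DOA model.
import numpy as np

M, grid = 10, np.deg2rad(np.arange(-60, 61, 2.0))   # sensors, 2-degree grid
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(grid))           # half-wavelength ULA steering
B = 1j * np.pi * m * np.cos(grid) * A               # dA/dtheta, column-wise

theta = np.deg2rad(13.4)                            # true DOA, off the grid
g = np.argmin(np.abs(grid - theta))                 # nearest grid point
beta = theta - grid[g]                              # off-grid offset
approx = A[:, g] + beta * B[:, g]                   # A + B*diag(beta) model, one column
exact = np.exp(1j * np.pi * m[:, 0] * np.sin(theta))
print("relative model error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```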

126 citations


Journal ArticleDOI
TL;DR: The advantages of SR3 (computational efficiency, higher accuracy, faster convergence rates, and greater flexibility) are demonstrated across a range of regularized regression problems with synthetic and real data, including applications in compressed sensing, LASSO, matrix completion, TV regularization, and group sparsity.
Abstract: Regularized regression problems are ubiquitous in statistical modeling, signal processing, and machine learning. Sparse regression, in particular, has been instrumental in scientific model discovery, including compressed sensing applications, variable selection, and high-dimensional analysis. We propose a broad framework for sparse relaxed regularized regression, called SR3. The key idea is to solve a relaxation of the regularized problem, which has three advantages over the state-of-the-art: 1) solutions of the relaxed problem are superior with respect to errors, false positives, and conditioning; 2) relaxation allows extremely fast algorithms for both convex and nonconvex formulations; and 3) the methods apply to composite regularizers, essential for total variation (TV) as well as sparsity-promoting formulations using tight frames. We demonstrate the advantages of SR3 (computational efficiency, higher accuracy, faster convergence rates, and greater flexibility) across a range of regularized regression problems with synthetic and real data, including applications in compressed sensing, LASSO, matrix completion, TV regularization, and group sparsity. Following standards of reproducible research, we also provide a companion MATLAB package that implements these examples.
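
For the $\ell_1$ case, the SR3 relaxation splits the variable and alternates a well-conditioned linear solve with a prox step. Below is a hedged NumPy sketch of that alternation for minimizing $\tfrac12\|Ax-y\|^2 + \lambda\|w\|_1 + \tfrac{\kappa}{2}\|x-w\|^2$; parameter values are illustrative, and the companion MATLAB package remains the reference implementation.

```python
# SR3-style alternating scheme for an l1 regularizer.
import numpy as np

def sr3_l1(A, y, lam=0.05, kappa=1.0, iters=100):
    n = A.shape[1]
    Lc = np.linalg.cholesky(A.T @ A + kappa * np.eye(n))  # factor once, reuse
    Aty, w = A.T @ y, np.zeros(n)
    for _ in range(iters):
        rhs = Aty + kappa * w
        x = np.linalg.solve(Lc.T, np.linalg.solve(Lc, rhs))        # x-update: least squares
        w = np.sign(x) * np.maximum(np.abs(x) - lam / kappa, 0.0)  # w-update: prox (soft threshold)
    return x, w                                                    # w carries the sparsity

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x0 = np.zeros(200); x0[rng.choice(200, 6, replace=False)] = 2.0
x_hat, w_hat = sr3_l1(A, A @ x0 + 0.01 * rng.standard_normal(60))
```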

115 citations


Journal ArticleDOI
TL;DR: A novel algorithm combining the merits of the clustering strategy and the compressive sensing-based (CS-based) scheme is proposed, and the effect of EECSR on improving energy efficiency and extending the lifespan of wireless sensor networks is verified.
Abstract: A novel algorithm that combines the merits of the clustering strategy and the compressive sensing-based (CS-based) scheme is proposed in this paper. The lemmas for the relationship between any two adjacent layers, the optimal size of clusters, and the optimal distribution of the cluster heads (CHs), together with the corresponding proofs, are presented first. In addition, to alleviate the "hot spot problem" and reduce the energy consumption resulting from the rotation of the role of CHs, a third role of backup CH (BCH), as well as the corresponding mechanism to rotate the roles between the CH and BCH, is proposed. Subsequently, the energy-efficient compressive sensing-based clustering routing (EECSR) protocol is presented in detail. Finally, extensive simulation experiments are conducted to evaluate its energy performance. Comparisons with existing clustering algorithms and the CS-based algorithm verify the effect of EECSR on improving energy efficiency and extending the lifespan of wireless sensor networks.

108 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: Experimental results demonstrate that SCSNet achieves state-of-the-art performance while maintaining a running speed comparable to existing deep-learning-based image CS methods.
Abstract: Recently, deep learning based image Compressed Sensing (CS) methods have been proposed and have demonstrated superior reconstruction quality with low computational complexity. However, the existing deep learning based image CS methods need to train different models for different sampling ratios, which increases the complexity of the encoder and decoder. In this paper, we propose a scalable convolutional neural network (dubbed SCSNet) to achieve scalable sampling and scalable reconstruction with only one model. Specifically, SCSNet provides both coarse and fine granular scalability. For coarse granular scalability, SCSNet is designed as a single sampling matrix plus a hierarchical reconstruction network that contains a base layer plus multiple enhancement layers. The base layer provides the basic reconstruction quality, while the enhancement layers reference the lower reconstruction layers and gradually improve the reconstruction quality. For fine granular scalability, SCSNet achieves sampling and reconstruction at any sampling ratio by using a greedy method to select the measurement bases. Compared with the existing deep learning based image CS methods, SCSNet achieves scalable sampling and quality-scalable reconstruction at any sampling ratio with only one model. Experimental results demonstrate that SCSNet achieves state-of-the-art performance while maintaining a running speed comparable to existing deep learning based image CS methods.

104 citations


Journal ArticleDOI
TL;DR: The dynamic mode decomposition is a regression technique that integrates two of the leading data analysis methods in use today, Fourier transforms and singular value decomposition; the quality of the resulting background model is competitive, as quantified by the F-measure, recall, and precision.
Abstract: We introduce the method of compressed dynamic mode decomposition (cDMD) for background modeling. The dynamic mode decomposition is a regression technique that integrates two of the leading data analysis methods in use today: Fourier transforms and singular value decomposition. Borrowing ideas from compressed sensing and matrix sketching, cDMD eases the computational workload of high-resolution video processing. The key principle of cDMD is to obtain the decomposition on a (small) compressed matrix representation of the video feed. Hence, the cDMD algorithm scales with the intrinsic rank of the matrix, rather than the size of the actual video (data) matrix. Selection of the optimal modes characterizing the background is formulated as a sparsity-constrained sparse coding problem. Our results show that the quality of the resulting background model is competitive, quantified by the F-measure, recall and precision. A graphics processing unit accelerated implementation is also presented which further boosts the computational performance of the algorithm.
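
A hedged NumPy sketch of the core idea: run DMD on a randomly sketched copy of the video matrix, reconstruct full-size modes from the uncompressed data, and take the mode with eigenvalue nearest 1 as the (near-stationary) background. Frame count, sketch size, and rank are illustrative, and the random video stands in for real frames.

```python
# Compressed DMD sketch for background modeling.
import numpy as np

rng = np.random.default_rng(5)
pixels, frames, p, r = 10000, 120, 200, 10      # p: sketch rows, r: DMD rank
video = rng.random((pixels, frames))            # stand-in for a (pixels x frames) video matrix

C = rng.standard_normal((p, pixels)) / np.sqrt(p)   # compression (sketching) matrix
Vc = C @ video                                  # small compressed video matrix
X, Y = Vc[:, :-1], Vc[:, 1:]                    # time-shifted snapshot pairs
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U, s, Vt = U[:, :r], s[:r], Vt[:r]
Atilde = U.conj().T @ Y @ Vt.conj().T / s       # low-rank propagator on the sketch
evals, W = np.linalg.eig(Atilde)
# Reconstruct full-size modes from the *uncompressed* data (the cDMD trick)
Phi = (video[:, 1:] @ Vt.conj().T / s) @ W
background = Phi[:, np.argmin(np.abs(evals - 1.0))]  # near-stationary mode
```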

Journal ArticleDOI
TL;DR: In this paper, the angular scattering function of the user channels is invariant over frequency intervals whose size is small with respect to the carrier frequency (as in current FDD cellular standards), which allows us to estimate the users' DL channel covariance matrix from UL pilots without additional overhead.
Abstract: We propose a novel method for massive multiple-input multiple-output (massive MIMO) in frequency division duplexing (FDD) systems. Due to the large frequency separation between uplink (UL) and downlink (DL) in FDD systems, channel reciprocity does not hold. Hence, in order to provide DL channel state information to the base station (BS), closed-loop DL channel probing and channel state information (CSI) feedback are needed. In massive MIMO, this typically incurs a large training overhead. For example, in a typical configuration with $M \simeq 200$ BS antennas and fading coherence block of $T \simeq 200$ symbols, the resulting rate penalty factor due to the DL training overhead, given by $\max \{0, 1 - M/T\}$, is close to 0. To reduce this overhead, we build upon the well-known fact that the angular scattering function of the user channels is invariant over frequency intervals whose size is small with respect to the carrier frequency (as in current FDD cellular standards). This allows us to estimate the users’ DL channel covariance matrix from UL pilots without additional overhead. Based on this covariance information, we propose a novel sparsifying precoder in order to maximize the rank of the effective sparsified channel matrix subject to the condition that each effective user channel has sparsity not larger than some desired DL pilot dimension ${\sf T_{dl}}$, resulting in the DL training overhead factor $\max \{0, 1 - {\sf T_{dl}}/ T\}$ and CSI feedback cost of ${\sf T_{dl}}$ pilot measurements. The optimization of the sparsifying precoder is formulated as a mixed integer linear program, which can be efficiently solved. Extensive simulation results demonstrate the superiority of the proposed approach with respect to the concurrent state-of-the-art schemes based on compressed sensing or UL/DL dictionary learning.

Journal ArticleDOI
TL;DR: This work analyzes a constrained version of the Maximum Likelihood (ML) problem (a combinatorial optimization with exponential complexity), finds the same fundamental scaling law for the number of identifiable users, and provides two algorithms based on Non-Negative Least-Squares.
Abstract: In this paper, we study the problem of user activity detection and large-scale fading coefficient estimation in a random access wireless uplink with a massive MIMO base station with a large number $M$ of antennas and a large number of wireless single-antenna devices (users). We consider a block fading channel model where the $M$-dimensional channel vector of each user remains constant over a coherence block containing $L$ signal dimensions in time-frequency. In the considered setting, the number of potential users $K_\text{tot}$ is much larger than $L$, but at each time slot only $K_a \ll K_\text{tot}$ of them are active.
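
The NNLS idea mentioned in the TL;DR can be illustrated with a toy real-valued covariance-matching sketch: stack vec$(a_k a_k^T)$ as columns of a design matrix and solve a non-negative least-squares problem for the activity-weighted fading coefficients. All sizes are illustrative, the noise term is not subtracted from the empirical covariance, and real Gaussian pilots stand in for the complex signatures of the paper.

```python
# Toy covariance-matching NNLS for activity detection.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
L, K, Ka, M = 32, 100, 5, 64                   # pilot length, users, active users, antennas
A = rng.standard_normal((L, K)) / np.sqrt(L)   # pilot (signature) matrix
gam = np.zeros(K); gam[rng.choice(K, Ka, replace=False)] = 1.0  # activity pattern

# Empirical covariance of the received signal across M antennas
H = rng.standard_normal((K, M))                # per-antenna channel coefficients
Ysig = A @ (np.sqrt(gam)[:, None] * H) + 0.05 * rng.standard_normal((L, M))
S_hat = Ysig @ Ysig.T / M

D = np.stack([np.outer(A[:, k], A[:, k]).ravel() for k in range(K)], axis=1)
gam_hat, _ = nnls(D, S_hat.ravel())            # gamma >= 0 enforced by construction
print("top estimates:", np.argsort(gam_hat)[-Ka:])
```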

Journal ArticleDOI
TL;DR: A novel method is proposed to simulate non-stationary and non-Gaussian random field samples directly from sparse measurement data, bypassing the difficulty of random field parameter estimation from sparse measurement data.

Journal ArticleDOI
TL;DR: In this paper, the authors derive new CS results for structured acquisitions and signals satisfying a prior structured sparsity. The obtained results are RIPless, in the sense that they do not hold uniformly for all s-sparse vectors, but for sparse vectors with a given support S. The results are thus support-dependent, offering the possibility of flexible assumptions on the structure of S.

Journal ArticleDOI
TL;DR: The first deep learning model for multi-contrast CS-MRI reconstruction is proposed, achieving information sharing through feature sharing units, which significantly reduces the number of model parameters.
Abstract: Compressed sensing (CS) theory can accelerate multi-contrast magnetic resonance imaging (MRI) by sampling fewer measurements within each contrast. However, conventional optimization-based reconstruction models suffer several limitations, including a strict assumption of shared sparse support, time-consuming optimization, and “shallow” models with difficulties in encoding the patterns contained in massive MRI data. In this paper, we propose the first deep learning model for multi-contrast CS-MRI reconstruction. We achieve information sharing through feature sharing units, which significantly reduces the number of model parameters. The feature sharing unit is combined with a data fidelity unit to form an inference block; these blocks are then cascaded with dense connections, allowing for efficient information transmission across different depths of the network. Experiments on various multi-contrast MRI datasets show that the proposed model outperforms both state-of-the-art single-contrast and multi-contrast MRI methods in accuracy and efficiency. We demonstrate that improved reconstruction quality can bring benefits to subsequent medical image analysis. Furthermore, the robustness of the proposed model to misregistration shows its potential in real MRI applications.

Proceedings Article
24 May 2019
TL;DR: Borrowing insights from the CS perspective, a novel way of improving GANs using gradient information from the discriminator is developed and it is shown that Generative Adversarial Nets (GANs) can be viewed as a special case in this family of models.
Abstract: Compressed sensing (CS) provides an elegant framework for recovering sparse signals from compressed measurements. For example, CS can exploit the structure of natural images and recover an image from only a few random measurements. CS is flexible and data efficient, but its application has been restricted by the strong assumption of sparsity and costly reconstruction process. A recent approach that combines CS with neural network generators has removed the constraint of sparsity, but reconstruction remains slow. Here we propose a novel framework that significantly improves both the performance and speed of signal recovery by jointly training a generator and the optimisation process for reconstruction via meta-learning. We explore training the measurements with different objectives, and derive a family of models based on minimising measurement errors. We show that Generative Adversarial Nets (GANs) can be viewed as a special case in this family of models. Borrowing insights from the CS perspective, we develop a novel way of improving GANs using gradient information from the discriminator.
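
The "CS with a neural network generator" baseline the abstract builds on can be sketched in PyTorch: minimize the measurement error $\|F\,G(z)-y\|^2$ over the latent $z$ by gradient descent (in the spirit of Bora et al.). Plain Adam here stands in for the paper's meta-learned optimiser, and the tiny generator and sizes are illustrative assumptions.

```python
# CS recovery with a generator prior via latent optimisation.
import torch

torch.manual_seed(0)
n, m, d = 256, 40, 16
G = torch.nn.Sequential(torch.nn.Linear(d, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, n))       # stand-in generator
F_mat = torch.randn(m, n) / m ** 0.5                   # random measurement matrix
x_true = G(torch.randn(d)).detach()                    # target lies on the range of G
y = F_mat @ x_true                                     # compressed measurements

z = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((F_mat @ G(z) - y) ** 2).sum()             # measurement error objective
    loss.backward()
    opt.step()
x_hat = G(z).detach()                                  # reconstructed signal
```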

Journal ArticleDOI
23 Sep 2019-Sensors
TL;DR: This work presents a new compressive imaging approach by using a strategy they call cake-cutting, which can optimally reorder the deterministic Hadamard basis and is capable of recovering images of large pixel-size with dramatically reduced sampling ratios, realizing super sub-Nyquist sampling and significantly decreasing the acquisition time.
Abstract: Single-pixel imaging via compressed sensing can reconstruct high-quality images from a few linear random measurements of an object known a priori to be sparse or compressive, by using a point/bucket detector without spatial resolution. Nevertheless, random measurements still have blindness, limiting the sampling ratios and leading to a harsh trade-off between the acquisition time and the spatial resolution. Here, we present a new compressive imaging approach by using a strategy we call cake-cutting, which can optimally reorder the deterministic Hadamard basis. The proposed method is capable of recovering images of large pixel-size with dramatically reduced sampling ratios, realizing super sub-Nyquist sampling and significantly decreasing the acquisition time. Furthermore, such a sorting strategy can easily be combined with the structured characteristic of the Hadamard matrix to accelerate the computational process and to simultaneously reduce the memory consumption of the matrix storage. With the help of differential modulation/measurement technology, we demonstrate this method with a single-photon single-pixel camera under the ultra-weak light condition and retrieve clear images through partially obscuring scenes. Thus, this method complements the present single-pixel imaging approaches and can be applied to many fields.
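
The reordering idea can be sketched in NumPy: rank 2D-reshaped Hadamard patterns from "coarse" to "fine" and measure only the first ones. The complexity proxy used below (2D sign transitions) is a simplification of the paper's cake-cutting block count, and the zero-filled inverse transform stands in for a full CS reconstruction.

```python
# Reordered-Hadamard single-pixel measurement sketch.
import numpy as np
from scipy.linalg import hadamard

N = 32                                          # image is N x N, basis is N^2 x N^2
H = hadamard(N * N)
patterns = H.reshape(N * N, N, N)
transitions = (np.abs(np.diff(patterns, axis=1)).sum(axis=(1, 2)) +
               np.abs(np.diff(patterns, axis=2)).sum(axis=(1, 2)))
order = np.argsort(transitions)                 # coarse (low-complexity) patterns first

ratio = 0.1                                     # 10% sampling ratio
keep = order[: int(ratio * N * N)]
img = np.zeros((N, N)); img[8:24, 8:24] = 1.0   # toy object
P = patterns[keep].reshape(len(keep), -1)
coeffs = P @ img.ravel()                        # bucket-detector measurements
recon = (P.T @ coeffs / (N * N)).reshape(N, N)  # zero-filled inverse transform
```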

Proceedings ArticleDOI
07 Jul 2019
TL;DR: Finite blocklength simulations show that the combination of AMP decoding, with suitable approximations, together with an outer code recently proposed by Amalladinne et al. outperforms state-of-the-art methods in terms of required energy-per-bit at lower decoding complexity.
Abstract: This paper studies the optimal achievable performance of compressed sensing based unsourced random-access communication over the real AWGN channel. "Unsourced" means that every user employs the same codebook. This paradigm, recently introduced by Polyanskiy, is a natural consequence of a very large number of potential users of which only a finite number is active in each time slot. The resemblance of compressed sensing based communication and sparse regression codes (SPARCs), a novel type of point-to-point channel codes, allows us to design and analyse an efficient unsourced random-access code. Finite blocklength simulations show that the combination of AMP decoding, with suitable approximations, together with an outer code recently proposed by Amalladinne et al. outperforms state-of-the-art methods in terms of required energy-per-bit at lower decoding complexity.

Journal ArticleDOI
TL;DR: The potential of convolutional sparse coding (CSC) is explored for sparse-view computed tomography (CT) reconstruction, working directly on the whole image without the need to divide it into overlapped patches as in DL-based methods.
Abstract: Over the past few years, dictionary learning (DL)-based methods have been successfully used in various image reconstruction problems. However, traditional DL-based computed tomography (CT) reconstruction methods are patch-based and ignore the consistency of pixels in overlapped patches. In addition, the features learned by these methods always contain shifted versions of the same features. In recent years, convolutional sparse coding (CSC) has been developed to address these problems. In this paper, inspired by several successful applications of CSC in the field of signal processing, we explore the potential of CSC in sparse-view CT reconstruction. By directly working on the whole image, without the necessity of dividing the image into overlapped patches in DL-based methods, the proposed methods can maintain more details and avoid artifacts caused by patch aggregation. With predetermined filters, an alternating scheme is developed to optimize the objective function. Extensive experiments with simulated and real CT data were performed to validate the effectiveness of the proposed methods. Qualitative and quantitative results demonstrate that the proposed methods achieve better performance than several existing state-of-the-art methods.

Journal ArticleDOI
TL;DR: A new optimization-driven design of optimal k-space trajectories in the context of compressed sensing is presented: the Spreading Projection Algorithm for Rapid K-space sampLING (SPARKLING).
Abstract: Purpose: To present a new optimization-driven design of optimal k-space trajectories in the context of compressed sensing: Spreading Projection Algorithm for Rapid K-space sampLING (SPARKLING). Theory: The SPARKLING algorithm is a versatile method inspired by stippling techniques that automatically generates optimized sampling patterns compatible with MR hardware constraints on maximum gradient amplitude and slew rate. These non-Cartesian sampling curves are designed to comply with key criteria for optimal sampling: a controlled distribution of samples (e.g., variable density) and a locally uniform k-space coverage. Methods: Ex vivo and in vivo prospective $T_2^*$-weighted acquisitions were performed on a 7-Tesla scanner using the SPARKLING trajectories for various setups and target densities. Our method was compared to radial and variable-density spiral trajectories for high-resolution imaging. Results: Combining sampling efficiency with compressed sensing, the proposed sampling patterns allowed up to 20-fold reductions in MR scan time (compared to fully sampled Cartesian acquisitions) for two-dimensional $T_2^*$-weighted imaging without deterioration of image quality, as demonstrated by our experimental results at 7 Tesla on in vivo human brains for a high in-plane resolution of 390 μm. In comparison to existing non-Cartesian sampling strategies, the proposed technique also yielded superior image quality. Conclusions: The proposed optimization-driven design of k-space trajectories is a versatile framework that is able to enhance MR sampling performance in the context of compressed sensing.

Journal ArticleDOI
Shirin Jalali, Xin Yuan
TL;DR: In this paper, a compression-based framework is employed for theoretical analysis of snapshot CS systems, which leads to two novel, computationally efficient and theoretically analyzable compression-based recovery algorithms.
Abstract: Snapshot compressed sensing (CS) refers to compressive imaging systems in which multiple frames are mapped into a single measurement frame. Each pixel in the acquired frame is a noisy linear mapping of the corresponding pixels in the frames that are combined together. While the problem can be cast as a CS problem, due to the very special structure of the sensing matrix, standard CS theory cannot be employed to study such systems. In this paper, a compression-based framework is employed for theoretical analysis of snapshot CS systems. It is shown that this framework leads to two novel, computationally-efficient and theoretically-analyzable compression-based recovery algorithms. The proposed methods are iterative and employ compression codes to define and impose the structure of the desired signal. Theoretical convergence guarantees are derived for both algorithms. In the simulations, it is shown that, in the cases of both noise-free and noisy measurements, combining the proposed algorithms with a customized video compression code, designed to exploit nonlocal structures of video frames, significantly improves the state-of-the-art performance.
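
The special structure of the snapshot sensing matrix is easy to see in code: B frames are modulated by per-frame masks and summed into one measurement frame, so each measurement pixel mixes only its own B corresponding pixels. A minimal NumPy sketch of the forward model and its adjoint, with illustrative sizes and random masks:

```python
# Snapshot CS forward model: B masked frames collapse into one measurement.
import numpy as np

rng = np.random.default_rng(7)
H, W, B = 64, 64, 8                         # frame size and number of frames
frames = rng.random((B, H, W))              # the video block to be compressed
masks = rng.integers(0, 2, (B, H, W))       # per-frame binary modulation masks

y = (masks * frames).sum(axis=0)            # single snapshot measurement frame
y += 0.01 * rng.standard_normal((H, W))     # measurement noise

# Adjoint of the sensing operator (the building block of iterative recovery):
adj = masks * y[None, :, :]                 # broadcasts y back to the B frames
```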

Journal ArticleDOI
TL;DR: This paper reconfirms the curious behaviour (previously observed for the non-fading MAC) that almost perfect multi-user interference (MUI) cancellation is possible for user densities below a critical threshold, and discusses the relation between the almost perfect MUI cancellation property and the replica-method predictions.
Abstract: Consider a (multiple-access) wireless communication system where users are connected to a unique base station over shared-spectrum radio links. Each user has a fixed number $k$ of bits to send to the base station, and its signal gets attenuated by a random channel gain (quasi-static fading). In this paper we consider the many-user asymptotics of Chen-Chen-Guo'2017, where the number of users grows linearly with the blocklength. Differently, though, we adopt a per-user probability of error (PUPE) criterion (as opposed to the classical joint-error probability criterion). Under PUPE, finite energy-per-bit communication is possible, and we are able to derive bounds on the tradeoff between energy and spectral efficiencies. We reconfirm the curious behaviour (previously observed for the non-fading MAC) of the possibility of almost perfect multi-user interference (MUI) cancellation for user densities below a critical threshold. Further, we demonstrate the suboptimality of standard solutions such as orthogonalization (i.e., TDMA/FDMA) and treating interference as noise (i.e., pseudo-random CDMA without multi-user detection). Notably, the problem treated here can be seen as a variant of support recovery in compressed sensing for the unusual definition of sparsity with one non-zero entry per contiguous section of $2^k$ coordinates. This identifies our problem with that of the sparse regression codes (SPARCs), and hence our results can be equivalently understood in the context of SPARCs with sections of length $2^{100}$. Finally, we discuss the relation of the almost perfect MUI cancellation property and the replica-method predictions.

Journal ArticleDOI
TL;DR: This tutorial provides an inductive way through this complex field to researchers and practitioners starting from the basics of sparse signal processing up to the most recent and up-to-date methods and signal processing applications.
Abstract: Sparse signals are characterized by a few nonzero coefficients in one of their transformation domains. This was the main premise in designing signal compression algorithms. Compressive sensing as a new approach employs the sparsity property as a precondition for signal recovery. Sparse signals can be fully reconstructed from a reduced set of available measurements. The description and basic definitions of sparse signals, along with the conditions for their reconstruction, are discussed in the first part of this paper. The numerous algorithms developed for sparse signal reconstruction are divided into three classes. The first one is based on the principle of matching components. An analysis of the influence of noise and nonsparsity on reconstruction performance is provided. The second class of reconstruction algorithms is based on the constrained convex form of problem formulation, where linear programming and regression methods can be used to find a solution. The third class of recovery algorithms is based on the Bayesian approach. Applications of the considered approaches are demonstrated through various illustrative and signal processing examples, using common transformation and observation matrices. With pseudocodes of the presented algorithms and compressive sensing principles illustrated on simple signal processing examples, this tutorial provides an inductive way through this complex field to researchers and practitioners, starting from the basics of sparse signal processing up to the most recent methods and signal processing applications.
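
As a concrete instance of the first class ("matching components"), here is a short NumPy sketch of orthogonal matching pursuit; the problem sizes and known sparsity level are illustrative assumptions.

```python
# Orthogonal matching pursuit (greedy / matching-components class).
import numpy as np

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # best-matching atom
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None) # refit on the support
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1]); x[support] = x_s
    return x

rng = np.random.default_rng(8)
A = rng.standard_normal((50, 200)); A /= np.linalg.norm(A, axis=0)  # unit-norm atoms
x0 = np.zeros(200); x0[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = omp(A, A @ x0, k=5)
```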

Journal ArticleDOI
TL;DR: This paper proposes a nonlocal tensor sparse and low-rank regularization (NTSRLR) approach, which encodes the essential structured sparsity of an HSI and explores its advantages for the HSI-CSR task.
Abstract: Hyperspectral image compressive sensing reconstruction (HSI-CSR) is an important issue in remote sensing, and has recently been investigated increasingly by sparsity prior based approaches. However, most of the available HSI-CSR methods consider the sparsity prior in spatial and spectral vector domains via vectorizing hyperspectral cubes along a certain dimension. Besides, in most previous works, little attention has been paid to exploiting the underlying nonlocal structure in the spatial domain of the HSI. In this paper, we propose a nonlocal tensor sparse and low-rank regularization (NTSRLR) approach, which can encode essential structured sparsity of an HSI and explore its advantages for the HSI-CSR task. Specifically, we study how to reasonably utilize the $\ell_1$-based sparsity of the core tensor and the tensor nuclear norm as tensor sparse and low-rank regularization, respectively, to describe the nonlocal spatial-spectral correlation hidden in an HSI. To study the minimization problem of the proposed algorithm, we design a fast implementation strategy based on the alternating direction method of multipliers (ADMM) technique. Experimental results on various HSI datasets verify that the proposed HSI-CSR algorithm can significantly outperform existing state-of-the-art CSR techniques for HSI recovery.

Journal ArticleDOI
TL;DR: Simulations indicate that the proposed UAV-enabled spatial data sampling scheme improves data reconstruction accuracy at the same sampling ratio without introducing extra complexity, compared to the compressive sensing-based method.
Abstract: Internet of Things (IoT) technology has been pervasively applied to environmental monitoring, due to the advantages of low cost and flexible deployment of IoT enabled systems. In many large-scale IoT systems, accurate and efficient data sampling and reconstruction is among the most critical requirements, since this can relieve the data rate of the trunk link for data uploading while ensuring data accuracy. To address the related challenges, we have proposed an unmanned aerial vehicle (UAV) enabled spatial data sampling scheme in this paper using a denoising autoencoder (DAE) neural network. More specifically, a UAV-enabled edge-cloud collaborative IoT system architecture is first developed for data processing in large-scale IoT monitoring systems, where the UAV is utilized as a mobile edge computing device. Based on this system architecture, the UAV-enabled spatial data sampling scheme is further proposed, where the wireless sensor nodes of large-scale IoT systems are clustered by a newly developed bounded-size $K$-means clustering algorithm. A neural network model, i.e., the DAE, is applied to each cluster for data sampling and reconstruction, by exploiting both linear and nonlinear spatial correlation among data samples. Simulations have been conducted and the results indicate that the proposed scheme improves data reconstruction accuracy at the same sampling ratio without introducing extra complexity, as compared to the compressive sensing-based method.
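
The DAE idea is straightforward to sketch in PyTorch: train an undercomplete encoder-decoder on noise-corrupted cluster readings so the decoder learns to reconstruct clean data from a compact code. Sizes, noise level, and the synthetic spatially correlated data are illustrative assumptions, not the paper's configuration.

```python
# Denoising autoencoder sketch for per-cluster sensor data.
import torch

torch.manual_seed(1)
n_nodes, code = 64, 8                          # sensors per cluster, compressed code size
enc = torch.nn.Sequential(torch.nn.Linear(n_nodes, code), torch.nn.ReLU())
dec = torch.nn.Linear(code, n_nodes)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

t = torch.linspace(0, 1, n_nodes)
for _ in range(2000):
    x = torch.sin(2 * torch.pi * (t + torch.rand(32, 1)))  # spatially correlated batch
    x_noisy = x + 0.1 * torch.randn_like(x)                # corrupt the input
    loss = ((dec(enc(x_noisy)) - x) ** 2).mean()           # reconstruct the clean target
    opt.zero_grad(); loss.backward(); opt.step()
```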

Posted Content
TL;DR: DeepHoyer is presented, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant, and can be applied to both element-wise and structural pruning.
Abstract: In the search for sparse and efficient neural network models, many previous works investigated enforcing L1 or L0 regularizers to encourage weight sparsity during training. The L0 regularizer measures the parameter sparsity directly and is invariant to the scaling of parameter values, but it cannot provide useful gradients and therefore requires complex optimization techniques. The L1 regularizer is almost everywhere differentiable and can be easily optimized with gradient descent. Yet it is not scale-invariant, causing the same shrinking rate for all parameters, which is inefficient in increasing sparsity. Inspired by the Hoyer measure (the ratio between L1 and L2 norms) used in traditional compressed sensing problems, we present DeepHoyer, a set of sparsity-inducing regularizers that are both differentiable almost everywhere and scale-invariant. Our experiments show that enforcing DeepHoyer regularizers can produce even sparser neural network models than previous works, under the same accuracy level. We also show that DeepHoyer can be applied to both element-wise and structural pruning.
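
A Hoyer-style regularizer is a one-liner in PyTorch: the squared ratio of L1 to L2 norms, which is differentiable almost everywhere and unchanged under scaling of the weights. This is a hedged sketch of the general idea; the paper's DeepHoyer variants refine it, and the epsilon guard below is an implementation assumption.

```python
# Hoyer-square regularizer: (||w||_1 / ||w||_2)^2, scale-invariant.
import torch

def hoyer_square(w, eps=1e-8):
    return w.abs().sum() ** 2 / (w.pow(2).sum() + eps)

w = torch.randn(100, requires_grad=True)
reg = hoyer_square(w)
reg.backward()                                 # usable as an additive loss term
# Scale invariance: scaling w by 3 leaves the regularizer (nearly) unchanged.
assert torch.isclose(hoyer_square(3.0 * w.detach()), hoyer_square(w.detach()))
```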

Journal ArticleDOI
TL;DR: To develop a previously reported, electrocardiogram (ECG)-gated, motion-resolved 5D compressed sensing whole-heart sparse MRI methodology into an automated, optimized, and fully self-gated free-running framework in which external gating or triggering devices are no longer needed.
Abstract: PURPOSE: To develop a previously reported, electrocardiogram (ECG)-gated, motion-resolved 5D compressed sensing whole-heart sparse MRI methodology into an automated, optimized, and fully self-gated free-running framework in which external gating or triggering devices are no longer needed. METHODS: Cardiac and respiratory self-gating signals were extracted from raw image data acquired in 12 healthy adult volunteers with a non-ECG-triggered 3D radial golden-angle 1.5 T balanced SSFP sequence. To extract cardiac self-gating signals, central k-space coefficient signal analysis (k0 modulation), as well as independent and principal component analyses, were performed on selected k-space profiles. The procedure yielding triggers with the smallest deviation from those of the reference ECG was selected for the automated protocol. Thus, optimized cardiac and respiratory self-gating signals were used for binning in a compressed sensing reconstruction pipeline. Coronary vessel length and sharpness of the resultant 5D images were compared with image reconstructions obtained with ECG-gating. RESULTS: Principal component analysis-derived cardiac self-gating triggers yielded a smaller deviation (17.4 ± 6.1 ms) from the reference ECG counterparts than k0 modulation (26 ± 7.5 ms) or independent component analysis (19.8 ± 5.2 ms). Cardiac and respiratory motion-resolved 5D images were successfully reconstructed with the automated and fully self-gated approach. No significant difference was found for coronary vessel length and sharpness between images reconstructed with the fully self-gated and the ECG-gated approach (all P ≥ .06). CONCLUSION: Motion-resolved 5D compressed sensing whole-heart sparse MRI has successfully been developed into an automated, optimized, and fully self-gated free-running framework in which external gating, triggering devices, or navigators are no longer mandatory. The resultant coronary MRA image quality was equivalent to that obtained with conventional ECG-gating.
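
The self-gating extraction can be sketched in NumPy/SciPy: run PCA over repeatedly acquired k-space center profiles, band-limit the dominant temporal component to a plausible cardiac band, and detect peaks as triggers. The synthetic profiles, filter band, and minimum beat spacing below are illustrative assumptions, not the study's processing parameters.

```python
# PCA-based cardiac self-gating sketch on synthetic k-space center profiles.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

rng = np.random.default_rng(9)
fs, T, n_coils = 20.0, 60.0, 8                 # profile rate (Hz), duration (s), coils
t = np.arange(0, T, 1 / fs)
cardiac = np.sin(2 * np.pi * 1.2 * t)          # ~72 bpm modulation hidden in the data
profiles = cardiac[:, None] * rng.random(n_coils) \
           + 0.5 * rng.standard_normal((t.size, n_coils))

X = profiles - profiles.mean(axis=0)           # center, then PCA via SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * s[0]                           # dominant temporal component

b, a = butter(3, [0.5, 3.0], btype="band", fs=fs)    # plausible cardiac band (Hz)
gating = filtfilt(b, a, pc1)
triggers, _ = find_peaks(gating, distance=fs * 0.4)  # >= 0.4 s between beats
```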

Journal ArticleDOI
TL;DR: A new sparse Fourier single-pixel imaging method is proposed that reduces the number of acquired samples while improving image quality; it effectively improves the quality of object restoration compared with existing Fourier single-pixel imaging methods, which acquire only the low-frequency parts.
Abstract: Fourier single-pixel imaging is one of the main single-pixel imaging techniques. To improve imaging efficiency, some recent methods typically select the low-frequency and discard the high-frequency information to reduce the number of acquired samples. However, sampling only a small number of low-frequency components leads to the loss of object details and reduces the imaging resolution. At the same time, the ringing effect in the restored image due to frequency truncation is significant. In this paper, a new sparse Fourier single-pixel imaging method is proposed that reduces the number of acquired samples while improving image quality. The proposed method makes special use of the Fourier spectrum distribution, in which the power of the image information decreases gradually from low to high frequencies. A variable-density random sampling matrix is employed to achieve random sampling with Fourier single-pixel imaging technology, followed by processing of the sparse Fourier spectra with compressive sensing algorithms to recover high-quality information about the object. The new algorithm effectively improves the quality of object restoration compared with existing Fourier single-pixel imaging methods that acquire only the low-frequency parts. Additionally, considering that the resolution of the system is diffraction limited, super-resolution imaging can also be achieved. Experimental results demonstrate both the correctness and the effectiveness of the proposed method.
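
The variable-density sampling idea is easy to sketch in NumPy: keep Fourier coefficients with a probability that decays with radial frequency, so low frequencies are sampled densely and high ones sparsely. The decay profile, sampling ratio, and the zero-filled baseline below are illustrative assumptions; a CS solver would refine the zero-filled estimate.

```python
# Variable-density random Fourier sampling mask and zero-filled baseline.
import numpy as np

rng = np.random.default_rng(10)
N, ratio = 128, 0.2
fy, fx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
r = np.hypot(fx, fy)
prob = 1.0 / (1.0 + (r / 0.05) ** 2)            # density decays from DC outward
prob *= ratio * N * N / prob.sum()              # normalize to the target ratio
mask = rng.random((N, N)) < np.clip(prob, 0, 1)

img = np.zeros((N, N)); img[40:90, 50:80] = 1.0 # toy object
spectrum = np.fft.fft2(img) * mask              # sparse Fourier measurements
zf = np.real(np.fft.ifft2(spectrum))            # zero-filled reconstruction baseline
```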

Journal ArticleDOI
TL;DR: The generalized Shannon entropy function and Rényi entropy function of the signal are proposed as sparsity-promoting regularizers; the resulting entropy-function minimization approaches perform better than other popular approaches and achieve state-of-the-art performance.
Abstract: Compressive sensing relies on the sparse prior imposed on the signal of interest to solve the ill-posed recovery problem in an under-determined linear system. The objective function used to enforce the sparse prior information should be both effective and easily optimizable. Motivated by the entropy concept from information theory, in this paper we propose the generalized Shannon entropy function and Rényi entropy function of the signal as sparsity-promoting regularizers. Both entropy functions are nonconvex and non-separable. Their local minima occur only on the boundaries of the orthants in the Euclidean space. Compared to other popular objective functions, minimizing the generalized entropy functions adaptively promotes multiple high-energy coefficients while suppressing the remaining low-energy coefficients. The corresponding optimization problems can be recast into a series of reweighted $l_1$-norm minimization problems and then solved efficiently by adapting the FISTA. Sparse signal recovery experiments on both simulated and real data show that the proposed entropy-function minimization approaches perform better than other popular approaches and achieve state-of-the-art performance.
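
The reweighted-$l_1$ scheme described above follows a standard pattern: an outer loop updates per-coefficient weights from the current estimate, and an inner proximal-gradient loop solves the weighted $l_1$ problem. Here is a minimal NumPy sketch of that pattern with generic weights $1/(|x_i|+\epsilon)$; the paper derives entropy-specific weights and uses FISTA acceleration, both of which are simplified away here.

```python
# Reweighted-l1 recovery: outer weight updates, inner proximal-gradient loop.
import numpy as np

rng = np.random.default_rng(11)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x0 = np.zeros(200); x0[rng.choice(200, 6, replace=False)] = rng.standard_normal(6)
y = A @ x0

L = np.linalg.norm(A, 2) ** 2                    # gradient Lipschitz constant
x, lam, eps = np.zeros(200), 0.02, 1e-2
for _ in range(5):                               # outer reweighting loop
    w = 1.0 / (np.abs(x) + eps)                  # small coefficients -> large weights
    for _ in range(100):                         # inner proximal-gradient loop
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)  # weighted soft threshold
```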