Journal ArticleDOI

Compressed Sensing for Networked Data

TL;DR: This article describes a very different approach to the decentralized compression of networked data, addressing a particularly salient aspect of the challenge posed by large-scale distributed sources of data and their storage, transmission, and retrieval.
Abstract: This article describes a very different approach to the decentralized compression of networked data. It considers a particularly salient aspect of this challenge, one that revolves around large-scale distributed sources of data and their storage, transmission, and retrieval. The task of transmitting information from one point to another is a common and well-understood exercise. But the problem of efficiently transmitting or sharing information from and among a vast number of distributed nodes remains a great challenge, primarily because we do not yet have well-developed theories and tools for distributed signal processing, communications, and information theory in large-scale networked systems.
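As a rough illustration of the measurement model behind this approach (a minimal NumPy sketch, not the article's actual protocol; the sizes n, k, s and the Gaussian projection matrix are illustrative assumptions), each node can form its contribution to k random projections locally, so only k partial sums, rather than n raw readings, need to travel through the network:

```python
# Minimal sketch (not the article's protocol): k random projections of
# networked data, where each node contributes only its own reading.
import numpy as np

rng = np.random.default_rng(0)
n, k, s = 256, 64, 5             # nodes, measurements, sparsity (illustrative)

x = np.zeros(n)                  # one reading per node, sparse for the demo
x[rng.choice(n, s, replace=False)] = rng.normal(size=s)

A = rng.normal(size=(k, n)) / np.sqrt(k)   # random projection matrix

# Centralized view: y = A @ x.  In a network, node j can compute its
# contribution A[:, j] * x[j] locally, and the k partial sums can be
# accumulated hop by hop toward a fusion center.
y = sum(A[:, j] * x[j] for j in range(n))
assert np.allclose(y, A @ x)
```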


Citations
Journal ArticleDOI
09 Aug 2010
TL;DR: An overview of recent work on gossip algorithms, including convergence-rate results, which relate to the number of transmitted messages and thus the energy consumed in the network, and illustrations of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
Abstract: Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This paper presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
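For concreteness, here is a minimal sketch of randomized pairwise gossip averaging (an assumption-laden toy: fully connected communication, arbitrary node count and iteration budget; the survey's convergence results cover general graphs, quantization, and noise):

```python
# Toy randomized pairwise gossip: at each step a random pair of nodes
# averages its values; all values converge to the global mean.
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)           # initial node values
target = x.mean()                # gossip should converge to this

for _ in range(5000):            # each iteration = one pairwise exchange
    i, j = rng.choice(n, size=2, replace=False)  # assumes any pair can talk
    avg = 0.5 * (x[i] + x[j])
    x[i] = x[j] = avg

print(np.max(np.abs(x - target)))  # near zero after enough iterations
```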

868 citations


Cites methods from "Compressed Sensing for Networked Da..."

  • ...illustrates, via numerical simulation, the tradeoff between varying k and the number of gossip iterations. For more on the theoretical performance guarantees achievable in this formulation, see [122], [123]. A gossip-based approach to solving the reconstruction problem in a distributed fashion is described in [124]. For an alternative approach to using gossip for distributed field estimation, see [125]. ...


Journal ArticleDOI
TL;DR: This paper models the components of the compressive sensing (CS) problem, i.e., the signal acquisition process, the unknown signal coefficients, and the model parameters for the signal and noise, using the Bayesian framework, and develops a constructive (greedy) algorithm designed for fast reconstruction useful in practical settings.
Abstract: In this paper, we model the components of the compressive sensing (CS) problem, i.e., the signal acquisition process, the unknown signal coefficients and the model parameters for the signal and noise using the Bayesian framework. We utilize a hierarchical form of the Laplace prior to model the sparsity of the unknown signal. We describe the relationship among a number of sparsity priors proposed in the literature, and show the advantages of the proposed model including its high degree of sparsity. Moreover, we show that some of the existing models are special cases of the proposed model. Using our model, we develop a constructive (greedy) algorithm designed for fast reconstruction useful in practical settings. Unlike most existing CS reconstruction methods, the proposed algorithm is fully automated, i.e., the unknown signal coefficients and all necessary parameters are estimated solely from the observation, and, therefore, no user-intervention is needed. Additionally, the proposed algorithm provides estimates of the uncertainty of the reconstructions. We provide experimental results with synthetic 1-D signals and images, and compare with the state-of-the-art CS reconstruction algorithms demonstrating the superior performance of the proposed approach.
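One way to see why a Laplace prior promotes sparsity: MAP estimation under an i.i.d. Laplace prior with Gaussian noise reduces to ℓ1-regularized least squares (the LASSO). The sketch below solves that surrogate with plain iterative soft-thresholding (ISTA); it is not the paper's hierarchical Bayesian algorithm and, unlike it, provides no uncertainty estimates:

```python
# Sketch of the Laplace-prior / l1 connection (not the paper's Bayesian
# algorithm): MAP under an i.i.d. Laplace prior with Gaussian noise is the
# LASSO, solved here with plain iterative soft-thresholding (ISTA).
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(2)
n, m, s = 128, 48, 4                       # illustrative sizes
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = ista(A, y, lam=0.05)
print(np.linalg.norm(x_hat - x_true))      # small: near-exact recovery
```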

718 citations

Journal ArticleDOI
TL;DR: An extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in WSNs is presented and a comparative guide is provided to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.
Abstract: Wireless sensor networks (WSNs) monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002–2013 of machine learning methods that were used to address common issues in WSNs. The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.

704 citations


Cites background from "Compressed Sensing for Networked Da..."

  • ...For more on the theoretical performance of decentralized compressive sensing, please refer to [142]–[144]....


Proceedings ArticleDOI
20 Sep 2009
TL;DR: This paper presents the first complete design to apply compressive sampling theory to sensor data gathering for large-scale wireless sensor networks and shows the efficiency and robustness of the proposed scheme.
Abstract: This paper presents the first complete design to apply compressive sampling theory to sensor data gathering for large-scale wireless sensor networks. The successful scheme developed in this research is expected to offer a fresh perspective for research in both compressive sampling applications and large-scale wireless sensor networks. We consider the scenario in which a large number of sensor nodes are densely deployed and sensor readings are spatially correlated. The proposed compressive data gathering is able to reduce global-scale communication cost without introducing intensive computation or complicated transmission control. The load-balancing characteristic is capable of extending the lifetime of the entire sensor network as well as individual sensors. Furthermore, the proposed scheme can cope with abnormal sensor readings gracefully. We also carry out an analysis of the network capacity of the proposed compressive data gathering and validate the analysis through ns-2 simulations. More importantly, this novel compressive data gathering has been tested on real sensor data, and the results show the efficiency and robustness of the proposed scheme.
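The central idea can be sketched in a few lines (a simplified illustration assuming a single routing path and dense Gaussian weights; the paper itself handles routing trees, transmission control, and abnormal readings): each node adds its weighted reading to k running sums, so every hop transmits exactly k values regardless of its depth in the route:

```python
# Sketch of compressive data gathering along one routing path (illustrative
# assumptions: a chain of nodes, Gaussian weights known at the sink, e.g.
# via seeded PRNGs): each transmission carries exactly k values.
import numpy as np

rng = np.random.default_rng(3)
n, k = 100, 20                        # nodes on the path, measurements

readings = rng.normal(size=n)         # x_i at node i (spatially correlated in practice)
phi = rng.normal(size=(k, n))         # node i uses column phi[:, i]

s = np.zeros(k)                       # running sums forwarded hop by hop
for i in range(n):                    # node i adds its weighted reading, then forwards
    s = s + phi[:, i] * readings[i]

assert np.allclose(s, phi @ readings) # the sink receives y = Phi x
```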

631 citations


Cites background from "Compressed Sensing for Networked Da..."

  • ...[17] also speculate on the potential of using compressive sampling theory for data aggregation in a multi-hop WSN....


Journal ArticleDOI
TL;DR: In this paper, compressed sensing-based data sampling and acquisition in wireless sensor networks and the Internet of Things (IoT) is investigated, with a framework in which the end nodes measure, transmit, and store the sampled data.
Abstract: The emerging compressed sensing (CS) theory can significantly reduce the number of sampling points, which directly corresponds to the volume of data collected, meaning that part of the redundant data is never acquired. This makes it possible to create standalone and net-centric applications with fewer resources in the Internet of Things (IoT). The CS-based signal and information acquisition/compression paradigm combines a nonlinear reconstruction algorithm with random sampling on a sparse basis, providing a promising approach to compressing signals and data in information systems. This paper investigates how CS can provide new insights into data sampling and acquisition in wireless sensor networks and IoT. First, we briefly introduce the CS theory with respect to the sampling and transmission coordination during the network lifetime, providing a compressed sampling process with low computation costs. Then, a CS-based framework is proposed for IoT, in which the end nodes measure, transmit, and store the sampled data. Finally, an efficient cluster-sparse reconstruction algorithm is proposed for in-network compression, aiming at more accurate data reconstruction and lower energy consumption. Performance is evaluated with respect to network size using datasets acquired by a real-life deployment.

478 citations

References
Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients; a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1.
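In standard notation, the abstract's claims read roughly as follows (a paraphrase for readability; the symbol φ_k for the measurement functionals is ours, and the bounds are quoted from the text above, not re-derived):

```latex
% Unknown signal and nonadaptive linear measurements:
\[
  x \in \mathbb{R}^m, \qquad y_k = \langle x, \varphi_k \rangle, \quad k = 1, \dots, n,
\]
% with the quoted measurement budgets:
\[
  n = O\!\bigl(N \log m\bigr) \ \text{for accuracy comparable to the best $N$-term approximation},
  \qquad
  n = O\!\bigl(m^{1/4} \log^{5/2} m\bigr) \ \text{for natural image classes.}
\]
```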

18,609 citations

Journal ArticleDOI
TL;DR: It is demonstrated theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
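A minimal OMP sketch follows (illustrative only: the support size, the constant in the O(m ln d) measurement budget, and the Gaussian measurement matrix are assumptions, not the paper's experimental setup):

```python
# Minimal OMP sketch: greedily pick the column most correlated with the
# residual, then re-fit by least squares on the selected support.
import numpy as np

def omp(A, y, m_nonzero):
    residual = y.copy()
    support = []
    for _ in range(m_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonal to chosen columns
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(4)
d, m = 256, 8                              # dimension, nonzeros
n_meas = 4 * m * int(np.log(d))            # O(m ln d) measurements (constant is arbitrary)
x_true = np.zeros(d)
x_true[rng.choice(d, m, replace=False)] = rng.normal(size=m)
A = rng.normal(size=(n_meas, d)) / np.sqrt(n_meas)
y = A @ x_true
print(np.linalg.norm(omp(A, y, m) - x_true))  # ~0: exact recovery
```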

8,604 citations


"Compressed Sensing for Networked Da..." refers background in this paper

  • ...study of efficient decoding algorithms remains at the forefront of current research [23]–[25]....


01 Aug 2007
TL;DR: In this paper, a greedy algorithm called Orthogonal Matching Pursuit (OMP) was proposed to recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal.
Abstract: This report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.

7,124 citations

Journal ArticleDOI
TL;DR: f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program), and numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted.
Abstract: This paper considers a natural error correcting problem with real-valued input/output. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem min_{g ∈ R^n} ||y − Ag||_1 (where ||x||_1 := Σ_i |x_i|), provided that the support of the vector of errors is not too large: ||e||_0 := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.
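The recast as a linear program mentioned above can be sketched with slack variables: minimizing ||y − Ag||_1 is equivalent to minimizing Σ_i t_i subject to −t ≤ y − Ag ≤ t. Below is a toy instance with arbitrary sizes and error levels, not the paper's experiments:

```python
# Sketch of l1 decoding as a linear program: minimize ||y - A g||_1 by
# introducing slacks t with -t <= y - A g <= t.  Toy sizes, SciPy solver.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, m, n_err = 64, 128, 12                 # input dim, codeword dim, corruptions

A = rng.normal(size=(m, n))               # coding matrix
f = rng.normal(size=n)                    # input vector
e = np.zeros(m)                           # sparse, arbitrary gross errors
e[rng.choice(m, n_err, replace=False)] = 10 * rng.normal(size=n_err)
y = A @ f + e                             # corrupted measurements

I = np.eye(m)
c = np.concatenate([np.zeros(n), np.ones(m)])   # objective: sum of slacks t
A_ub = np.block([[A, -I], [-A, -I]])            # A g - t <= y  and  -A g - t <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
f_hat = res.x[:n]
print(np.linalg.norm(f_hat - f))          # ~0 when the error support is small enough
```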

6,853 citations

Journal ArticleDOI
TL;DR: If the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program.
Abstract: Suppose we are given a vector f in a class F ⊆ R^N, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|_(n) ≤ R·n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements y_k = ⟨f, X_k⟩, k = 1, ..., K, where the X_k are N-dimensional Gaussian vectors with independent standard normal entries. Then, for each f obeying the decay estimate above for some 0 < p < 1, f can be reconstructed to within very high accuracy from these K measurements by linear programming.

6,342 citations