Journal ArticleDOI

Compressed Sensing for Energy-Efficient Wireless Telemonitoring of Noninvasive Fetal ECG Via Block Sparse Bayesian Learning

01 Feb 2013-IEEE Transactions on Biomedical Engineering (IEEE Trans Biomed Eng)-Vol. 60, Iss: 2, pp 300-309
TL;DR: Experimental results show that the block sparse Bayesian learning framework, compared with current CS algorithms and wavelet algorithms, can greatly reduce code execution on the CPU in the data compression stage.
Abstract: Fetal ECG (FECG) telemonitoring is an important branch of telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise for compressing/reconstructing data with low energy consumption. However, due to specific characteristics of raw FECG recordings such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. In particular, the reconstruction preserves the interdependence among the multichannel recordings, which ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with far fewer nonzero entries to compress recordings; each column of the matrix can contain as few as two nonzero entries. This shows that the framework, compared with current CS algorithms and wavelet algorithms, can greatly reduce code execution on the CPU in the data compression stage.
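The energy argument hinges on that sensing matrix: with only 0/1 entries, each compressed measurement reduces to a handful of additions on the sensor. Below is a minimal NumPy sketch of the compression step under that assumption (two ones per column, as the abstract states); all names and dimensions are illustrative, not from the paper.

```python
import numpy as np

def sparse_binary_sensing_matrix(m, n, ones_per_col=2, seed=0):
    """Build an m x n binary sensing matrix with a fixed number of
    ones per column (two, per the abstract). Illustrative only."""
    rng = np.random.default_rng(seed)
    phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=ones_per_col, replace=False)
        phi[rows, j] = 1.0
    return phi

# Compress one channel of a raw (nonsparse) recording from n to m samples.
n, m = 512, 256                    # original and compressed lengths (made up)
x = np.random.randn(n)             # stand-in for a raw FECG segment
phi = sparse_binary_sensing_matrix(m, n)
y = phi @ x                        # y = Phi @ x: two additions per column
```

Because the matrix is binary, computing y needs no multiplications at all, which is what makes the compression stage cheap enough for a battery-powered body sensor.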
Citations
Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper proposes a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general ℓ1-norm CS reconstruction model, and develops an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms.
Abstract: With the aim of developing a fast yet accurate algorithm for compressive sensing (CS) reconstruction of natural images, we combine in this paper the merits of two existing categories of CS methods: the structure insights of traditional optimization-based methods and the speed of recent network-based ones. Specifically, we propose a novel structured deep network, dubbed ISTA-Net, which is inspired by the Iterative Shrinkage-Thresholding Algorithm (ISTA) for optimizing a general ℓ1-norm CS reconstruction model. To cast ISTA into deep network form, we develop an effective strategy to solve the proximal mapping associated with the sparsity-inducing regularizer using nonlinear transforms. All the parameters in ISTA-Net (e.g. nonlinear transforms, shrinkage thresholds, step sizes, etc.) are learned end-to-end, rather than being hand-crafted. Moreover, considering that the residuals of natural images are more compressible, an enhanced version of ISTA-Net in the residual domain, dubbed ISTA-Net+, is derived to further improve CS reconstruction. Extensive CS experiments demonstrate that the proposed ISTA-Nets outperform existing state-of-the-art optimization-based and network-based CS methods by large margins, while maintaining fast computational speed. Our source codes are available: http://jianzhang.tech/projects/ISTA-Net.
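For context, classical ISTA alternates a gradient step on the data-fidelity term with a soft-thresholding (proximal) step; ISTA-Net unrolls exactly this loop into network layers whose transforms, thresholds, and step sizes are learned. A plain NumPy version of the fixed-parameter baseline, as a sketch only:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm (shrinkage-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.1, n_iters=200):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    ISTA-Net unrolls this loop and learns the transform, threshold,
    and step size of every iteration end-to-end."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)       # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```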

771 citations


Cites methods from "Compressed Sensing for Energy-Effic..."

  • ...CS has been applied in many practical applications, including but not limited to single-pixel imaging [11, 33], accelerating magnetic resonance imaging (MRI) [26], wireless tele-monitoring [50] and cognitive radio communication [36]....

Journal ArticleDOI
TL;DR: An overview of WBAN main applications, technologies and standards, issues in WBAN design, and evolutions is reported, with the aim of providing useful insights for WBAN designers and of highlighting the main issues affecting the performance of these kinds of networks.
Abstract: Interest in Wireless Body Area Networks (WBANs) has increased significantly in recent years thanks to advances in microelectronics and wireless communications. Owing to very stringent application requirements in terms of reliability, energy efficiency, and low device complexity, the design of these networks requires the definition of new protocols with respect to those used in general-purpose wireless sensor networks. This motivates the research activities and standardisation processes of recent years. This survey paper reports an overview of WBAN main applications, technologies and standards, issues in WBAN design, and evolutions. Some case studies are reported, based both on real implementation and experimentation in the field and on simulations. These results aim to provide useful insights for WBAN designers and to highlight the main issues affecting the performance of these kinds of networks.

597 citations


Cites methods from "Compressed Sensing for Energy-Effic..."

  • ...In [84] the authors show how to use block sparse Bayesian learning to reconstruct a sub-Nyquist sampled signal (fetal ECG) exploiting its correlation, and they proved the effectiveness of their approach with experimental results....

Journal ArticleDOI
TL;DR: It is shown that exploiting intra-block correlation is very helpful in improving recovery performance, and two families of algorithms based on the framework of block sparse Bayesian learning (BSBL) are proposed to exploit such correlation and improve performance.
Abstract: We examine the recovery of block sparse signals and extend the recovery framework in two important directions: one by exploiting the signals' intra-block correlation and the other by generalizing the signals' block structure. We propose two families of algorithms based on the framework of block sparse Bayesian learning (BSBL). One family, directly derived from the BSBL framework, requires knowledge of the block structure. The other family, derived from an expanded BSBL framework, rests on a weaker assumption about the block structure and can be used when the block structure is completely unknown. Using these algorithms, we show that exploiting intra-block correlation is very helpful in improving recovery performance. These algorithms also shed light on how to modify existing algorithms or design new ones to exploit such correlation and improve performance.
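The intra-block correlation BSBL exploits is commonly modeled by giving each block i a prior covariance γ_i·B_i, with B_i a first-order autoregressive (AR(1)) Toeplitz matrix. A small SciPy sketch of that prior structure under those standard assumptions (the full BSBL inference loop is omitted):

```python
import numpy as np
from scipy.linalg import toeplitz, block_diag

def ar1_block_cov(block_size, r):
    """Toeplitz AR(1) matrix B with B[j, k] = r**|j - k|, the standard
    BSBL model of intra-block correlation."""
    return toeplitz(r ** np.arange(block_size))

# Prior covariance Sigma0 = blockdiag(gamma_i * B_i) for known block sizes.
gammas = [1.0, 0.0, 2.5]      # gamma_i = 0 prunes block i from the model
blocks = [g * ar1_block_cov(4, 0.9) for g in gammas]
sigma0 = block_diag(*blocks)  # 12 x 12 block-diagonal prior covariance
```

Blocks whose γ_i is driven to zero during learning drop out of the model entirely, which is how BSBL produces block-sparse estimates.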

491 citations


Cites background from "Compressed Sensing for Energy-Effic..."

  • ...Experiments on real-world data can be found in [10]....

  • ...An interesting property of the framework is that it is capable of directly recovering less-sparse or non-sparse signals as shown in [10]....

  • ...In practical applications intra-block correlation widely exists in signals, such as physiological signals [10] and images....

  • ...When directly recovering non-sparse signals, performance of the BSBL algorithms is not sensitive to block sizes [10]....

Proceedings ArticleDOI
16 Jul 2015
TL;DR: This paper exploits the strategic position of such gateways to offer several higher-level services such as local storage, real-time local data processing, and embedded data mining, thus proposing a Smart e-Health Gateway.
Abstract: There have been significant advances in the field of the Internet of Things (IoT) recently. At the same time, there is an ever-growing demand for ubiquitous healthcare systems to improve human health and well-being. In most IoT-based patient monitoring systems, especially in smart homes or hospitals, there is a bridging point (i.e., a gateway) between a sensor network and the Internet which often just performs basic functions such as translating between the protocols used in the Internet and sensor networks. These gateways have beneficial knowledge and constructive control over both the sensor network and the data to be transmitted through the Internet. In this paper, we exploit the strategic position of such gateways to offer several higher-level services such as local storage, real-time local data processing, and embedded data mining, thus proposing a Smart e-Health Gateway. By taking responsibility for handling some burdens of the sensor network and a remote healthcare center, a Smart e-Health Gateway can cope with many challenges in ubiquitous healthcare systems, such as energy efficiency, scalability, and reliability issues. A successful implementation of Smart e-Health Gateways enables massive deployment of ubiquitous health monitoring systems, especially in clinical environments. We also present a case study of a Smart e-Health Gateway called UTGATE in which some of the discussed higher-level features have been implemented. Our proof-of-concept design demonstrates an IoT-based health monitoring system with enhanced overall system energy efficiency, performance, interoperability, security, and reliability.

301 citations


Cites background from "Compressed Sensing for Energy-Effic..."

  • ...3) Data Filtering: Physiological systems of the human body such as cardiovascular, nervous and muscular systems generate bio-signals that are the primary source of information for assessing the patient health status....

Journal ArticleDOI
TL;DR: A nonlinear projection is applied to achieve compressed acquisition, which not only reduces the amount of measured data while retaining all the fault information but also realizes automatic feature extraction in the transform domain.
Abstract: Effective intelligent fault diagnosis has long been a research focus in the condition monitoring of rotary machinery systems. Traditionally, time-domain vibration-based fault diagnosis has some deficiencies, such as complex computation of feature vectors, excessive dependence on prior knowledge and diagnostic expertise, and limited capacity for learning complex relationships in fault signals. Furthermore, as condition data accumulate, promptly processing massive fault data and automatically providing accurate diagnoses has become an urgent need. Inspired by the ideas of compressed sensing and deep learning, a novel intelligent diagnosis method is proposed for fault identification of rotating machines. In this paper, a nonlinear projection is applied to achieve compressed acquisition, which not only reduces the amount of measured data while retaining all the fault information but also realizes automatic feature extraction in the transform domain. To explore the discrimination hidden in the acquired data, a stacked sparse autoencoder-based deep neural network is established and trained with an unsupervised learning procedure followed by a supervised fine-tuning process. We study the significance of compressed acquisition, examine the effects of key factors, and compare with traditional methods. The effectiveness of the proposed method is validated using data sets from rolling element bearings, and the analysis shows that it obtains high diagnostic accuracies and is superior to existing methods. The proposed method reduces the need for human labor and expertise and provides a new strategy for handling massive data more easily.
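To make the acquisition step concrete, here is a hypothetical stand-in for a nonlinear compressed acquisition: a random linear compression followed by a saturating nonlinearity. The paper's exact projection is not reproduced; this only illustrates the general shape of the pipeline feeding the autoencoder.

```python
import numpy as np

def compressed_acquisition(x, m, seed=0):
    """Hypothetical stand-in for the paper's nonlinear projection: a
    random linear compression followed by a saturating nonlinearity.
    Not the paper's actual operator."""
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, x.shape[0])) / np.sqrt(m)
    return np.tanh(phi @ x)   # m-dimensional compressed feature vector

# A 2048-sample vibration segment compressed to 256 inputs, which would
# then feed the stacked sparse autoencoder classifier.
segment = np.random.randn(2048)
features = compressed_acquisition(segment, 256)
```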

283 citations

References
Book
D.L. Donoho
01 Jan 2004
TL;DR: It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients, and a good approximation to those N important coefficients can be extracted from the n measurements by solving a linear program (Basis Pursuit in signal processing).
Abstract: Suppose x is an unknown vector in R^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1.
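Basis Pursuit, the reconstruction the abstract refers to, is an ordinary linear program once the ℓ1 objective is split into positive and negative parts. A sketch using SciPy's general-purpose LP solver; the dimensions and Gaussian test matrix are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to Ax = y as a linear program by
    splitting x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # enforce A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Recover a 5-sparse vector of length 128 from 40 Gaussian measurements.
rng = np.random.default_rng(1)
x0 = np.zeros(128)
x0[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((40, 128))
x_hat = basis_pursuit(A, A @ x0)       # matches x0 up to solver tolerance
```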

18,609 citations

Journal ArticleDOI
TL;DR: It is shown that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like the LARS algorithm does for the lasso.
Abstract: We propose the elastic net, a new regularization and variable selection method. Real-world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the p ≫ n case.
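A toy run of the p ≫ n regime the abstract highlights, using scikit-learn's ElasticNet (the data and hyperparameters are illustrative, not from the paper):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# p >> n: 50 observations, 500 predictors, only 10 truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 500))
beta = np.zeros(500)
beta[:10] = 2.0
y = X @ beta + 0.1 * rng.standard_normal(50)

# l1_ratio blends the lasso (l1) and ridge (l2) penalties.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(np.count_nonzero(enet.coef_))    # sparse solution despite p >> n
```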

16,538 citations


Additional excerpts

  • ...They were CoSaMP [27], Elastic-Net [28], Basis Pursuit [29], SL0 [30], and EM-GM-AMP [31] (with the “heavy-tailed” mode)....

Journal ArticleDOI
TL;DR: Using maximum entropy approximations of differential entropy, a family of new contrast (objective) functions for ICA is introduced that enables both the estimation of the whole decomposition by minimizing mutual information and the estimation of individual independent components as projection pursuit directions.
Abstract: Independent component analysis (ICA) is a statistical method for transforming an observed multidimensional random vector into components that are statistically as independent from each other as possible. We use a combination of two different approaches for linear ICA: Comon's information theoretic approach and the projection pursuit approach. Using maximum entropy approximations of differential entropy, we introduce a family of new contrast functions for ICA. These contrast functions enable both the estimation of the whole decomposition by minimizing mutual information, and estimation of individual independent components as projection pursuit directions. The statistical properties of the estimators based on such contrast functions are analyzed under the assumption of the linear mixture model, and it is shown how to choose contrast functions that are robust and/or of minimum variance. Finally, we introduce simple fixed-point algorithms for practical optimization of the contrast functions.
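The telemonitoring paper applies FastICA to the reconstructed multichannel recordings (see the excerpt below). A toy blind source separation with scikit-learn's implementation; the sources and mixing matrix are synthetic stand-ins:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic independent sources, linearly mixed into two channels.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(7 * t), np.sign(np.sin(3 * t))]   # independent sources
A = np.array([[1.0, 0.5], [0.5, 2.0]])             # mixing matrix
X = S @ A.T                                        # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)   # recovered sources, up to permutation/scale
```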

6,144 citations


"Compressed Sensing for Energy-Effic..." refers methods in this paper

  • ...Here, we used another ICA algorithm, the FastICA algorithm [34]....

Journal ArticleDOI
Michael E. Tipping
TL;DR: It is demonstrated that by exploiting a probabilistic Bayesian learning framework, the 'relevance vector machine' (RVM) can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages.
Abstract: This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art 'support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages. These include the benefits of probabilistic predictions, automatic estimation of 'nuisance' parameters, and the facility to utilise arbitrary basis functions (e.g. non-'Mercer' kernels). We detail the Bayesian framework and associated learning algorithm for the RVM, and give some illustrative examples of its application along with some comparative benchmarks. We offer some explanation for the exceptional degree of sparsity obtained, and discuss and demonstrate some of the advantageous features, and potential extensions, of Bayesian relevance learning.
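scikit-learn ships no RVM, but its ARDRegression implements the same sparse Bayesian linear model (one precision hyperparameter per weight), so it serves here as a stand-in sketch; the data are synthetic:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# ARDRegression as an RVM-like stand-in: per-weight precisions drive
# irrelevant weights toward zero, yielding a sparse model.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 30))
w = np.zeros(30)
w[[2, 7, 19]] = [1.5, -2.0, 3.0]          # only three relevant weights
y = X @ w + 0.05 * rng.standard_normal(100)

model = ARDRegression().fit(X, y)
print(np.sum(np.abs(model.coef_) > 0.1))  # most weights are driven to ~0
```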

5,116 citations

Journal ArticleDOI
TL;DR: A new iterative recovery algorithm called CoSaMP is described that delivers the same guarantees as the best optimization-based approaches and offers rigorous bounds on computational cost and storage.
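Since no abstract survives for this entry, a compact sketch of the algorithm itself may help: CoSaMP alternates greedy support identification with a least-squares fit pruned back to s terms. A NumPy version following the published pseudocode, with the halting rule simplified:

```python
import numpy as np

def cosamp(A, y, s, n_iters=30):
    """Minimal CoSaMP sketch: merge the 2s largest proxy entries with the
    current support, least-squares fit, then prune to s coefficients."""
    n = A.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iters):
        proxy = A.T @ r                              # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]   # 2s largest entries
        T = np.union1d(omega, np.flatnonzero(x))     # merge supports
        b = np.zeros(n)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-s:]            # prune to s terms
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x                                # update residual
        if np.linalg.norm(r) < 1e-9:
            break
    return x
```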

3,970 citations