Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
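
The survey's grouping can be made concrete with a small example. The sketch below, which assumes a synthetic waveform in place of real ECG data, implements closed-loop first-order DPCM and estimates the entropy of the residual stream, combining two of the direct-method categories (DPCM and entropy coding) named in the abstract; it is illustrative only, not any of the surveyed algorithms.

```python
import numpy as np

def dpcm_encode(x, q=8):
    """Closed-loop first-order DPCM: quantize the prediction residual."""
    codes = np.empty(len(x), dtype=int)
    pred = 0.0
    for i, sample in enumerate(x):
        codes[i] = round((sample - pred) / q)
        pred += codes[i] * q          # predictor tracks the decoder's state
    return codes

def dpcm_decode(codes, q=8):
    return np.cumsum(codes) * q

def empirical_entropy(symbols):
    """Shannon entropy (bits/symbol) of the residual alphabet."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Synthetic stand-in for one ECG trace; a real test would use MIT-BIH data.
t = np.linspace(0, 1, 360)
ecg = 100 * np.exp(-((t - 0.5) ** 2) / 0.001) + 10 * np.sin(4 * np.pi * t)

codes = dpcm_encode(ecg)
recon = dpcm_decode(codes)
print(f"residual entropy: {empirical_entropy(codes):.2f} bits/sample; "
      f"max reconstruction error: {np.max(np.abs(recon - ecg)):.2f}")
```

Because the encoder quantizes against the decoder's own reconstruction, the error stays bounded by half the step size q instead of accumulating.
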
Citations
Journal ArticleDOI
TL;DR: The spindle convolutional auto-encoder performs high-ratio, quality-guaranteed compression and can be considered a promising compression technique for tele-transmission and data storage.

39 citations

Journal ArticleDOI
TL;DR: An algorithm based on singular value decomposition (SVD) and wavelet difference reduction (WDR) is presented for compressing the large volumes of ECG data produced by ambulatory systems; it proved efficient on different types of ECG signal, with low distortion under several fidelity assessments.
Abstract: In the biomedical field, it has become necessary to reduce data volume because of the limited storage of real-time ambulatory and telemedicine systems, and data compression plays an important role in this regard. Research has long pursued an efficient yet simple technique. This paper therefore presents an algorithm based on singular value decomposition (SVD) and wavelet difference reduction (WDR) for compressing the large data volumes of ambulatory ECG. In particular, the wavelet difference reduction stage is applied with two scanning approaches: fixed scan and adaptive scan (ASWDR) of the wavelet coefficients. SVD-based compression offers high reconstruction quality at a low compression rate, while WDR/ASWDR has the opposite characteristics, so the two stages complement each other. The proposed method first compresses a two-dimensional (2D) ECG image with a low-rank SVD approximation and then applies WDR/ASWDR for the final compression. Tested on MIT-BIH arrhythmia records, the algorithm compresses different types of ECG signal efficiently with low distortion under several fidelity assessments, achieving a compression rate of up to 21.4:1 with excellent reconstruction quality, a percentage root-mean-square difference (PRD) of 1.7%, and faithful feature analysis of the reconstructed signal for MIT-BIH Rec. 100.
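
As a rough illustration of the SVD stage only (the WDR/ASWDR wavelet stage is omitted), the sketch below is an assumption-laden toy, not the authors' pipeline: it stacks quasi-periodic rows into a 2-D "ECG image" (beats x samples), keeps the k largest singular values, and reports PRD and an approximate compression ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = 64, 256                                  # beats x samples per beat
base = np.sin(np.linspace(0, 2 * np.pi, shape[1]))            # shared beat shape
X = np.outer(1 + 0.05 * rng.standard_normal(shape[0]), base)  # quasi-periodic rows
X += 0.01 * rng.standard_normal(shape)                        # measurement noise

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                                   # keep only the k largest singular values
X_hat = (U[:, :k] * s[:k]) @ Vt[:k]

prd = 100 * np.linalg.norm(X - X_hat) / np.linalg.norm(X)
ratio = X.size / (k * (sum(shape) + 1))  # stored numbers: k * (rows + cols + 1)
print(f"PRD = {prd:.2f}%, approx. compression ratio = {ratio:.1f}:1")
```
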

39 citations

Journal ArticleDOI
TL;DR: A novel and efficient transform method for ECG data compression is proposed, using B-spline basis functions and exploiting the quasi-periodic nature of the ECG signal; it was found superior at every bit rate.
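
Since this item gives only a TL;DR, the following is a speculative sketch of the general idea rather than the paper's method: fit one beat with a least-squares cubic B-spline (SciPy's splrep/splev) and store the spline coefficients instead of the raw samples. The test beat and the smoothing parameter are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import splrep, splev

t = np.linspace(0, 1, 360)
beat = np.exp(-((t - 0.5) ** 2) / 0.002)     # stand-in for one ECG beat

# The smoothing parameter s trades coefficient count against fidelity.
tck = splrep(t, beat, k=3, s=1e-4)           # (knots, coefficients, degree)
recon = splev(t, tck)

print(f"{len(beat)} samples -> {len(tck[1])} spline coefficients, "
      f"max error {np.max(np.abs(recon - beat)):.4f}")
```
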

39 citations

Journal ArticleDOI
TL;DR: A data compression technique is presented for discrete-time electrocardiogram signals, which are decomposed into several multiresolution subsignals by a quadrature mirror filter bank.
Abstract: A data compression technique is presented for discrete-time electrocardiogram signals. The single lead electrocardiogram signal is decomposed into several multiresolution subsignals by using a quadrature mirror filter bank. The resultant subsignals are compressed according to their frequency contents using various coding methods, including a discrete cosine transform based technique and pulse code modulation with variable length coding. Compression ratios as high as 5.7 are obtained without introducing any visual distortion.
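
A minimal sketch of the subband idea follows, using the two-tap Haar pair as the QMF; the paper's actual filters and its per-band DCT/PCM coders are not reproduced. The signal is split into low- and high-frequency bands, the detail band is quantized coarsely (it carries less perceptually important energy), and the signal is reconstructed.

```python
import numpy as np

def haar_analyze(x):
    """One-level two-band split with the Haar QMF pair."""
    pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
    lo = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # low-frequency band
    hi = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # high-frequency band
    return lo, hi

def haar_synthesize(lo, hi):
    y = np.empty(2 * len(lo))
    y[0::2] = (lo + hi) / np.sqrt(2)
    y[1::2] = (lo - hi) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.01 * rng.standard_normal(512)

lo, hi = haar_analyze(x)
hi_q = np.round(hi / 0.05) * 0.05     # spend fewer bits on the detail band
x_hat = haar_synthesize(lo, hi_q)
print(f"max reconstruction error: {np.max(np.abs(x_hat - x)):.3f}")
```
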

39 citations

Journal ArticleDOI
TL;DR: An energy-efficient vital-signal telemonitoring scheme is introduced that exploits compressed sensing for low-complexity signal compression/reconstruction and distributed cooperation for reliable data transmission to the BNC.
Abstract: Wireless body area networks (WBANs) are composed of sensors that either monitor and transmit vital signals or act as relays that forward the received data to a body node coordinator (BNC). In this paper, we introduce an energy-efficient vital-signal telemonitoring scheme that exploits compressed sensing (CS) for low-complexity signal compression/reconstruction and distributed cooperation for reliable data transmission to the BNC. More specifically, we introduce a cooperative compressed sensing (CCS) approach, which increases the energy efficiency of WBANs by exploiting the benefits of random linear network coding (RLNC). We study the energy efficiency of RLNC and compare it with the store-and-forward (FW) protocol. Our mathematical analysis shows that the gain introduced by RLNC grows as the link failure rate increases, especially in practical scenarios with a limited number of relays. Furthermore, we propose a reconstruction algorithm that further enhances the benefits of RLNC by exploiting key characteristics of vital signals. Using electrocardiographic (ECG) and electroencephalographic (EEG) data from medical databases, we present extensive simulation results that validate our theoretical findings and show that the proposed recovery algorithm increases the energy efficiency of the body sensor nodes by 40% compared to conventional CS-based reconstruction methods.
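
The compressed-sensing core of such schemes can be sketched as follows. This toy omits the cooperative RLNC layer and the authors' specialized recovery algorithm, using plain orthogonal matching pursuit on a synthetic sparse signal instead; all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8        # signal length, measurements, sparsity level

x = np.zeros(n)             # k-sparse stand-in for a transform-domain biosignal
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                 # cheap encoding at the sensor node

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, refit each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(f"relative recovery error: "
      f"{np.linalg.norm(x_hat - x) / np.linalg.norm(x):.2e}")
```
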

39 citations


Cites methods from "ECG data compression techniques-a unified approach"

  • Several sparse representation methods, such as the discrete cosine/wavelet transform (DCT, DWT) or the Principal Component Analysis (PCA) transform [30], have been used in the past for enhancing the sparseness of biosignals [7], [31] (see the sketch after this excerpt).

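To see why such transforms help, the brief sketch below (illustrative signal and threshold, not taken from the cited works) applies an orthonormal DCT to a smooth quasi-periodic signal and shows that a handful of coefficients carry nearly all of the energy, which is exactly the sparsity that CS recovery relies on.

```python
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)

c = dct(sig, norm="ortho")
keep = np.abs(c) >= 0.01 * np.abs(c).max()   # keep only significant coefficients
recon = idct(np.where(keep, c, 0.0), norm="ortho")

print(f"nonzero coefficients: {keep.sum()} / {len(c)}, "
      f"PRD = {100 * np.linalg.norm(recon - sig) / np.linalg.norm(sig):.2f}%")
```
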

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
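
The limiting process described here can be written out explicitly: partitioning the continuum into cells of width \(\Delta\) gives a discrete entropy whose divergent \(\log_2\Delta\) term must be subtracted before the limit exists, yielding the differential entropy.

```latex
H(X_\Delta) = -\sum_i p(x_i)\,\Delta \,\log_2\!\bigl(p(x_i)\,\Delta\bigr)
            = -\sum_i p(x_i)\,\Delta \,\log_2 p(x_i) \;-\; \log_2\Delta ,
\qquad
h(X) = \lim_{\Delta\to 0}\bigl[H(X_\Delta) + \log_2\Delta\bigr]
     = -\int p(x)\log_2 p(x)\,dx .
```

This is the "new effect" of the continuous case: unlike discrete entropy, \(h(X)\) is relative to the coordinate system and can be negative.
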

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
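
A compact sketch of the construction follows (illustrative message and weights; practical coders also need canonical code ordering and a convention for the single-symbol case): repeatedly merge the two lightest subtrees, prefixing a bit to every code in each.

```python
import heapq
from collections import Counter

def huffman_code(weights):
    """Build a minimum-redundancy prefix code from symbol weights."""
    # Heap entries carry a tiebreak counter so dicts are never compared.
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w0, _, c0 = heapq.heappop(heap)
        w1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + bits for s, bits in c0.items()}
        merged.update({s: "1" + bits for s, bits in c1.items()})
        heapq.heappush(heap, (w0 + w1, count, merged))
        count += 1
    return heap[0][2]

freqs = Counter("abracadabra")
code = huffman_code(freqs)
avg = sum(freqs[s] * len(code[s]) for s in freqs) / sum(freqs.values())
print(code, f"-> {avg:.2f} bits/symbol on average")
```
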

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
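
The stationary (autocorrelation) method reduces to a Toeplitz system of normal equations. The sketch below solves that system directly with SciPy rather than via Levinson-Durbin recursion, and checks the estimate on a synthetic AR(2) process; the model order and process coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(x, p):
    """Return a[1..p] minimizing E[(x[n] - sum_k a[k] x[n-k])^2]."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + p]
    return solve_toeplitz(r[:p], r[1 : p + 1])  # Toeplitz normal equations

rng = np.random.default_rng(0)
n = 2048
x, e = np.zeros(n), rng.standard_normal(n)
for i in range(2, n):                           # synthetic AR(2) process
    x[i] = 1.5 * x[i - 1] - 0.75 * x[i - 2] + e[i]

print(lpc(x, 2))                                # expect roughly [1.5, -0.75]
```
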

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
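
A minimal sketch of the interval-narrowing idea follows, using exact rationals so the toy stays correct for short messages; practical arithmetic coders use renormalized integer arithmetic and adaptive models, neither of which is shown here, and the model and message are illustrative.

```python
from fractions import Fraction

def intervals(model):
    """Cumulative sub-intervals of [0, 1) for each symbol."""
    lo, out = Fraction(0), {}
    for sym, p in model.items():
        out[sym] = (lo, lo + p)
        lo += p
    return out

def encode(msg, model):
    cum = intervals(model)
    lo, hi = Fraction(0), Fraction(1)
    for s in msg:                       # narrow the interval per symbol
        a, b = cum[s]
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
    return (lo + hi) / 2                # any number inside the interval

def decode(x, n, model):
    cum = intervals(model)
    out = []
    for _ in range(n):
        for s, (a, b) in cum.items():
            if a <= x < b:
                out.append(s)
                x = (x - a) / (b - a)   # rescale into the unit interval
                break
    return "".join(out)

model = {"a": Fraction(3, 4), "b": Fraction(1, 4)}
code = encode("aababa", model)
print(code, "->", decode(code, 6, model))
```

Note how the model (the symbol probabilities) is cleanly separated from the coding step, which is the structural advantage the abstract emphasizes.
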

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
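
Energy compaction, which underlies several of the abstract's performance criteria, can be checked directly. The sketch below compares an orthonormal DCT against a Walsh-Hadamard transform on an illustrative smooth signal by measuring the energy captured by the largest coefficients; the signal and the cutoff k are assumptions, not the paper's test set.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

n, k = 64, 8
t = np.arange(n)
x = np.cos(2 * np.pi * t / n) + 0.5 * np.cos(4 * np.pi * t / n)

c_dct = dct(x, norm="ortho")
H = hadamard(n) / np.sqrt(n)          # orthonormal Walsh-Hadamard matrix
c_wht = H @ x

for name, c in [("DCT", c_dct), ("WHT", c_wht)]:
    top = np.sort(c ** 2)[::-1][:k].sum()
    print(f"{name}: top-{k} coefficients hold "
          f"{100 * top / (c ** 2).sum():.1f}% of the energy")
```
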

928 citations