Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
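The direct methods named here are sample-by-sample heuristics, and the turning-point (TP) algorithm is the simplest to state. The following is a minimal Python sketch of the TP idea only (a fixed 2:1 reduction that keeps, out of every two incoming samples, the one that preserves a slope reversal); it is an illustration, not code from the paper.

```python
# Minimal sketch of the turning-point (TP) idea: scan the signal after a
# reference sample x0, and from each pair (x1, x2) retain the sample that
# preserves a slope sign change. This halves the sample count (2:1).

def sign(v):
    return (v > 0) - (v < 0)

def turning_point_compress(samples):
    """Return roughly half of `samples`, keeping turning points."""
    if len(samples) < 3:
        return list(samples)
    kept = [samples[0]]
    x0 = samples[0]
    i = 1
    while i + 1 < len(samples):
        x1, x2 = samples[i], samples[i + 1]
        s1, s2 = sign(x1 - x0), sign(x2 - x1)
        # A slope sign change marks a turning point at x1, so keep x1;
        # otherwise x2 carries the trend and is kept instead.
        if s1 * s2 < 0:
            kept.append(x1)
            x0 = x1
        else:
            kept.append(x2)
            x0 = x2
        i += 2
    return kept

# Example: a small ECG-like fragment reduced 2:1 while retaining the peak.
print(turning_point_compress([0, 2, 5, 9, 7, 3, 1, 0, 1, 2]))
```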
Citations
Journal ArticleDOI
TL;DR: Experiments with ECG records used in other results from the literature revealed that the proposed method compares favorably with various classical and state-of-the-art ECG compressors.

108 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Several ECG compression methods have been developed during the last 30 years, and average compression ratios (CR) ranging approximately from 2:1 up to 50:1 have been reported [1,2]....


Journal ArticleDOI
TL;DR: Experimental results show that the proposed codec improves the direct SPIHT approach and the prior work by about 33% and 26%, respectively.
Abstract: In a prior work, a wavelet-based vector quantization (VQ) approach was proposed to perform lossy compression of electrocardiogram (ECG) signals. We investigate and fix its coding inefficiency problem in lossless compression and extend it to allow both lossy and lossless compression in a unified coding framework. The well-known 9/7 filters and 5/3 integer filters are used to implement the wavelet transform (WT) for lossy and lossless compression, respectively. The codebook updating mechanism, originally designed for lossy compression, is modified to allow lossless compression as well. In addition, a new and cost-effective coding strategy is proposed to enhance the coding efficiency of set partitioning in hierarchical tree (SPIHT) at the less significant bit representation of a WT coefficient. ECG records from the MIT/BIH Arrhythmia and European ST-T Databases are selected as test data. In terms of the coding efficiency for lossless compression, experimental results show that the proposed codec improves the direct SPIHT approach and the prior work by about 33% and 26%, respectively.
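For the lossless path, the 5/3 integer filters mentioned above are normally realized by lifting, so that every step stays in integers and is exactly invertible. The sketch below shows one decomposition level of the standard LeGall 5/3 lifting scheme in Python; the boundary handling is simplified, and this is an illustration of the transform, not the codec implemented in the paper.

```python
# One level of the reversible LeGall 5/3 integer lifting wavelet transform
# (the "5/3 integer filters" used for lossless coding, e.g. in JPEG 2000).
# Simple mirroring is used at the edges in this sketch.

def lwt53_forward(x):
    """Return (approximation, detail) integer bands for an even-length list."""
    n = len(x)
    assert n % 2 == 0 and n >= 4, "even-length input expected in this sketch"
    # Predict step: detail (high-pass) coefficients from the odd samples.
    d = []
    for i in range(1, n, 2):
        right = x[i + 1] if i + 1 < n else x[i - 1]   # mirror at the right edge
        d.append(x[i] - (x[i - 1] + right) // 2)
    # Update step: approximation (low-pass) coefficients from the even samples.
    s = []
    for k, i in enumerate(range(0, n, 2)):
        d_prev = d[k - 1] if k > 0 else d[0]          # mirror at the left edge
        s.append(x[i] + (d_prev + d[k] + 2) // 4)
    return s, d

s, d = lwt53_forward([10, 12, 15, 13, 9, 8, 11, 14])
print(s, d)  # integer low-pass and high-pass bands; the steps are invertible
```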

106 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Many ECG lossy compression methods have been proposed (see review parts in [1]–[3])....


  • ...As pointed out in [1], the performance comparison of various compression methods proposed by different authors can be quite difficult....


Journal ArticleDOI
TL;DR: An ECG compression algorithm is proposed that allows lossless transmission of compressed ECG over a bandwidth-constrained wireless link; it is highly advantageous in a patient wellness monitoring system where a doctor has to read and diagnose from the compressed ECGs of several patients assigned to him.
Abstract: With the rapid development of wireless technologies, mobile phones are gaining acceptance as an effective tool for cardiovascular monitoring. However, existing technologies have limitations in terms of efficient transmission of compressed ECG over text messaging services such as SMS and MMS. In this paper, we first propose an ECG compression algorithm which allows lossless transmission of compressed ECG over a bandwidth-constrained wireless link. Then, we propose several algorithms for cardiovascular abnormality detection directly from the compressed ECG, maintaining end-to-end security and patient privacy while offering the benefits of faster diagnosis. Next, we show that our mobile phone based cardiovascular monitoring solution is capable of delivering up to 6.72 times faster diagnosis compared to existing technologies. As the decompression time on a doctor's mobile phone could be significant, our method will be highly advantageous in a patient wellness monitoring system where a doctor has to read and diagnose from the compressed ECGs of several patients assigned to him. Finally, we successfully implemented a prototype system for mobile phone based cardiovascular patient monitoring.
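As a rough illustration of the transport problem described here (and explicitly not the authors' compression algorithm), the sketch below packages a losslessly compressed ECG block into the text-safe segments a concatenated SMS can carry, using generic zlib and base64 as stand-ins for the paper's coder; the 153-character segment length is an assumption based on concatenated SMS.

```python
# Hypothetical sketch: make a losslessly compressed ECG block survive a
# text-only channel. zlib provides lossless compression, base64 makes the
# payload text-safe, and the payload is split into 153-character segments.

import base64
import zlib

def pack_for_sms(samples, segment_len=153):
    raw = b"".join(int(v).to_bytes(2, "little", signed=True) for v in samples)
    payload = base64.b64encode(zlib.compress(raw, level=9)).decode("ascii")
    return [payload[i:i + segment_len] for i in range(0, len(payload), segment_len)]

def unpack_from_sms(segments):
    raw = zlib.decompress(base64.b64decode("".join(segments)))
    return [int.from_bytes(raw[i:i + 2], "little", signed=True)
            for i in range(0, len(raw), 2)]

ecg = [0, 12, 35, 120, 540, 130, 20, -15, -5, 0] * 50
assert unpack_from_sms(pack_for_sms(ecg)) == ecg  # the round trip is lossless
```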

97 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...Compression algorithms for ECG can be broadly classified into three major groups, namely direct domain method [11] [19], feature extraction method [12] [33] and transformational method [18] [30]....


  • ...• Computationally expensive: Most of the existing ECG compression algorithms were designed and tested on PC [11], [19], [12], [33], [18], [30]....


  • ...Existing compression algorithms [11], [19], [12], [33], [18], [30], [9], [27] albeit efficient in reducing data size, are mostly lossy and neither support SMS/MMS, nor do they retain key signatures of cardiovascular conditions for a system or doctor to detect abnormalities from the compressed ECG....


Journal ArticleDOI
TL;DR: A new algorithm for electrocardiogram (ECG) compression based on the compression of the linearly predicted residuals of the wavelet coefficients of the signal, which reduces the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level.
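A hypothetical sketch of the pipeline named in this TL;DR follows: a one-level Haar-style split stands in for the paper's wavelet, and a fixed first-order predictor stands in for its linear predictor; only the forming of the residuals (the small quantities that would then be entropy-coded) is shown.

```python
# Hypothetical pipeline sketch: wavelet coefficients -> linear prediction ->
# residuals. The Haar split and the fixed predictor coefficient are
# placeholders, not the paper's actual choices.

def haar_level(x):
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    diff = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return avg + diff

def prediction_residuals(coeffs, a=0.9):
    # residual[n] = c[n] - a * c[n-1]; small residuals are cheap to entropy-code
    return [coeffs[0]] + [coeffs[n] - a * coeffs[n - 1] for n in range(1, len(coeffs))]

coeffs = haar_level([10.0, 11.0, 13.0, 14.0, 12.0, 9.0, 8.0, 8.0])
print(prediction_residuals(coeffs))
```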

97 citations

Journal ArticleDOI
TL;DR: A novel electrocardiogram (ECG) compression method that adapts an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique; the SS stage performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health.
Abstract: This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6–44.5 and percentage root mean square difference (PRD) of 0.8–2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
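The two figures of merit quoted here, compression ratio (CR) and percentage root-mean-square difference (PRD), can be computed as in the following sketch; note that PRD definitions vary across papers, and the common non-baseline-corrected form is assumed.

```python
# Minimal sketch of the two standard figures of merit: CR and PRD.
# PRD = 100 * sqrt( sum (x - x_hat)^2 / sum x^2 ), non-baseline-corrected form.

import math

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def prd(original, reconstructed):
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

x     = [1.00, 1.20, 1.50, 1.10, 0.90, 0.80]   # original samples
x_hat = [1.01, 1.19, 1.48, 1.12, 0.90, 0.79]   # reconstructed samples
print(compression_ratio(11 * len(x), 24), round(prd(x, x_hat), 3))
```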

96 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
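The limiting process this abstract describes can be summarized in one standard textbook line (a paraphrase in modern notation, not an equation quoted from the paper):

```latex
% Quantize the continuous source into cells of width \Delta and write the
% entropy of the resulting discrete variable; for small \Delta,
\[
  H_\Delta \;=\; -\sum_i p(x_i)\,\Delta \,\log\!\bigl(p(x_i)\,\Delta\bigr)
  \;\approx\; \underbrace{-\int p(x)\log p(x)\,dx}_{h(X)} \;-\; \log\Delta .
\]
% The \Delta-independent part is the differential entropy h(X); the
% -\log\Delta term diverges as \Delta \to 0, which is one of the "new
% effects" of the continuous case mentioned in the abstract.
```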

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
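A compact way to see the construction is the usual merge-the-two-least-probable procedure; the Python sketch below builds such a minimum-redundancy (Huffman) code for a toy alphabet and is only an illustration of the idea.

```python
# Minimal sketch of minimum-redundancy (Huffman) coding: repeatedly merge the
# two least probable entries so the average code length per message is minimized.

import heapq

def huffman_code(freqs):
    """freqs: dict symbol -> weight; returns dict symbol -> bit string."""
    heap = [[w, i, {sym: ""}] for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # least probable subtree
        w2, _, c2 = heapq.heappop(heap)   # second least probable subtree
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, counter, merged])
        counter += 1
    return heap[0][2]

print(huffman_code({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}))
# Frequent symbols receive short codewords: 'a' gets 1 bit, 'd' gets 3 bits.
```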

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
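To make the all-pole, autocorrelation-method case concrete, the sketch below fits a low-order predictor with the Levinson-Durbin recursion and also returns the reflection (partial correlation) coefficients that the abstract singles out for quantization and encoding; it is an illustrative reconstruction, not the paper's own formulation.

```python
# All-pole linear prediction sketch: autocorrelation method solved by the
# Levinson-Durbin recursion; returns predictor taps, reflection (PARCOR)
# coefficients, and the final prediction error energy.

def autocorr(x, max_lag):
    return [sum(x[n] * x[n - k] for n in range(k, len(x))) for k in range(max_lag + 1)]

def levinson_durbin(r, order):
    a = [0.0] * (order + 1)   # prediction coefficients, a[0] unused
    k = [0.0] * (order + 1)   # reflection coefficients
    e = r[0]                  # prediction error energy
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k[i] = acc / e
        a_new = a[:]
        a_new[i] = k[i]
        for j in range(1, i):
            a_new[j] = a[j] - k[i] * a[i - j]
        a, e = a_new, e * (1.0 - k[i] ** 2)
    return a[1:], k[1:], e

x = [1.0, 0.8, 0.5, 0.1, -0.3, -0.6, -0.7, -0.5, -0.1, 0.3]
a, k, err = levinson_durbin(autocorr(x, 2), order=2)
print(a, k, err)  # predictor taps, reflection coefficients, residual energy
```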

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
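The interval-narrowing idea can be shown in a few lines; the toy Python sketch below uses floating point for clarity and therefore omits the renormalization and carry handling that a practical coder, like the one described in this article, must address.

```python
# Toy sketch of arithmetic coding: the whole message maps to a single
# subinterval of [0, 1). Floating point is used only for illustration.

def cumulative(model):
    lo, table = 0.0, {}
    for sym, p in model.items():
        table[sym] = (lo, lo + p)
        lo += p
    return table

def encode(message, model):
    table, low, high = cumulative(model), 0.0, 1.0
    for sym in message:
        span = high - low
        s_lo, s_hi = table[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2  # any number inside the final interval will do

def decode(value, length, model):
    table, low, high, out = cumulative(model), 0.0, 1.0, []
    for _ in range(length):
        target = (value - low) / (high - low)
        for sym, (s_lo, s_hi) in table.items():
            if s_lo <= target < s_hi:
                span = high - low
                low, high = low + span * s_lo, low + span * s_hi
                out.append(sym)
                break
    return "".join(out)

model = {"a": 0.6, "b": 0.3, "c": 0.1}
code = encode("aabac", model)
print(code, decode(code, 5, model))  # one number encodes and recovers "aabac"
```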

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
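As a small illustration of the energy compaction that makes these transforms useful for data compression, the sketch below applies a fast Walsh-Hadamard transform (one of the transforms surveyed) to a slowly varying block; most of the energy lands in the low-order coefficients.

```python
# Minimal fast Walsh-Hadamard transform (unnormalized, length a power of two),
# showing the energy compaction exploited by transform-based compression.

def fwht(x):
    """Return the Walsh-Hadamard transform of a power-of-two-length sequence."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

# A slowly varying (ECG-like) block: the first coefficient carries almost all
# the energy, so the small high-order coefficients can be coarsely quantized
# or discarded.
block = [10, 11, 13, 14, 14, 13, 11, 10]
print(fwht(block))  # -> [96, 0, 0, 0, 0, -4, -12, 0]
```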

928 citations