Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
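
As a concrete illustration of the sample-selection family surveyed here, below is a minimal Python sketch of the Turning Point idea (2:1 reduction by keeping, from each pair of samples, the one that preserves a slope sign change). The function name and the synthetic test signal are illustrative, not taken from the paper.

```python
import numpy as np

def turning_point_compress(x):
    """2:1 Turning Point sketch: from each incoming pair of samples, keep the
    one that preserves a slope sign change (a 'turning point')."""
    x = np.asarray(x, dtype=float)
    out = [x[0]]                       # reference sample
    x0 = x[0]
    for i in range(1, len(x) - 1, 2):  # process samples in pairs; a trailing
        x1, x2 = x[i], x[i + 1]        # odd sample is ignored for brevity
        s1 = np.sign(x1 - x0)
        s2 = np.sign(x2 - x1)
        keep = x1 if s1 * s2 < 0 else x2   # keep x1 only at a turning point
        out.append(keep)
        x0 = keep
    return np.array(out)

if __name__ == "__main__":
    t = np.linspace(0, 1, 500)
    ecg_like = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(t.size)
    y = turning_point_compress(ecg_like)
    print(len(ecg_like), "->", len(y), "samples (roughly 2:1)")
```
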
Citations
Journal ArticleDOI
TL;DR: Results of this study show that 8 bits/sample, although frequently used, does not satisfy quality criteria set by medical doctors; a predictive technique for lossless ECG compression using linear time-invariant models is also presented.
Abstract: Electrocardiogram (ECG) signal compression suffers from a lack of standards for analogue-to-digital conversion. Results of this study have shown that 8 bits/sample, although frequently in use, does not satisfy quality criteria for medical doctors. This paper also presents a predictive technique for lossless ECG compression using linear time-invariant models. Tests on clinically measured ECG signals confirm very good performance in terms of compression ratio.
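
A minimal sketch of the lossless predictive idea, assuming integer ECG samples and a fixed second-order linear time-invariant predictor (the predictor order and function names are illustrative; the paper's actual models may differ):

```python
import numpy as np

def predict_residuals(x):
    """Fixed second-order predictor x_hat[n] = 2*x[n-1] - x[n-2] on integer
    samples; the small residuals are what an entropy coder would store."""
    x = np.asarray(x, dtype=np.int64)
    r = x.copy()                              # r[0], r[1] keep the raw start-up samples
    r[2:] = x[2:] - (2 * x[1:-1] - x[:-2])
    return r

def reconstruct(r):
    """Exact (lossless) inversion of the prediction recursion."""
    x = np.asarray(r, dtype=np.int64).copy()
    for n in range(2, len(x)):
        x[n] = r[n] + 2 * x[n - 1] - x[n - 2]
    return x
```
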

5 citations

Patent
Eric Corndorf
26 Mar 2007
TL;DR: A truncated entropy encoding map is generated, and an encoder sub-selects which sampled values are encoded with the map and which remain unencoded, providing an overall compression of the data.
Abstract: Waveforms are digitally sampled and compressed for storage in memory. The compression of the data includes generating a truncated entropy encoding map and using the values within the map to obtain good compression. An encoder further sub-selects values to be encoded and values to remain unencoded to provide an overall compression of the data.
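
The sketch below is one plausible reading of a truncated entropy encoding map with an escape path for values left unencoded; all names, code lengths and the escape convention are illustrative assumptions, not the patent's actual embodiment.

```python
from collections import Counter

# Illustrative reading only: the most frequent sample values get entries in a
# truncated code map; anything outside the map is emitted raw after an escape.
ESC = "1111"                                   # hypothetical escape prefix

def build_truncated_map(samples, k=15):
    """Map the k most frequent values to short fixed-length codes (indices
    0..14 here, so they never collide with ESC); a real design would assign
    variable-length entropy codes instead."""
    common = [v for v, _ in Counter(samples).most_common(k)]
    return {v: format(i, "04b") for i, v in enumerate(common)}

def encode(samples, code_map, raw_bits=12):
    """Concatenate map codes for mapped values and ESC + raw bits otherwise."""
    out = []
    for v in samples:
        if v in code_map:
            out.append(code_map[v])            # encoded via the truncated map
        else:
            out.append(ESC + format(v & (2**raw_bits - 1), f"0{raw_bits}b"))
    return "".join(out)
```
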

5 citations

Proceedings ArticleDOI
05 Mar 2015
TL;DR: The experimental results show that the proposed method achieves a better compression ratio along with a better PRD than earlier methods.
Abstract: Compression of bulky electrocardiogram (ECG) signals is a common requirement for most computerized applications. In this paper, a new compression and reconstruction technique based on Empirical Mode Decomposition (EMD) is proposed. The performance evaluation of the proposed technique is based on comparisons of Compression Ratio (CR) and Percent Root mean square Difference (PRD). The compression method consists of five main stages: EMD-based signal decomposition, downsampling, discrete cosine transform (DCT), window filtering and Huffman encoding. The ECG signal reconstruction method follows the compression process in reverse order. The proposed algorithm is validated by testing on 48 ECG records available in the MIT/BIH arrhythmia database. The compression efficiency is evaluated and the average values of CR and PRD are found to be 23.74:1 and 1.49, respectively. The experimental results show that the proposed method achieves a better compression ratio along with a better PRD compared to earlier methods.
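
The two figures of merit quoted here have simple definitions; a small Python sketch of both (function names are mine, not the authors'):

```python
import numpy as np

def prd(x, x_rec):
    """Percent Root-mean-square Difference between original and reconstruction."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compression_ratio(original_bits, compressed_bits):
    """CR expressed as original size over compressed size (e.g. 23.74:1)."""
    return original_bits / compressed_bits
```
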

5 citations


Cites background from "ECG data compression techniques-a u..."

  • ...ECG data compression technique is important for rapid transmission and reception of ECG’s over mobile phone networks to remote cardiac center [3]....


Proceedings ArticleDOI
Kenzo Akazawa, Takanori Uchiyama, S. Tanaka, Akira Sasamori, E. Harasawa
05 Sep 1993
TL;DR: A new adaptive method of data compression for digital ambulatory electrocardiograms (ECGs) is proposed that considers the diagnostic significance of each segment of the ECG and uses the FAN data compression method SAPA2 (Scan-Along Polygonal Approximation).
Abstract: Proposes a new adaptive method of data compression for digital ambulatory electrocardiograms (ECGs), considering the diagnostic significance of each segment of the ECG. The R-wave is detected, followed by multi-template matching of the detected beat and judgment of the noise level; the templates are successively created during processing. The residual signal (the difference between the original ECG and the best-fit template) is approximated with the FAN data compression method SAPA2 (Scan-Along Polygonal Approximation) and then encoded. The error threshold of FAN is decreased during the P-wave segments and increased during the noise segments; the maximum error of the reconstructed signal at each time is known. This method is applied to ECGs of the AHA (American Heart Association) database and its usefulness is indicated; e.g. the bit rate is approximately 400 bps at 8% PRD (percent RMS difference) and 200 bps at 15% PRD.
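
A rough Python sketch of the converging-fan (FAN/SAPA-style) tolerance test that the residual coding builds on; the adaptive threshold scheduling described above is omitted, and all names and parameters are illustrative:

```python
import numpy as np

def fan_compress(x, eps):
    """Keep only the samples needed so that straight-line interpolation
    between kept samples stays within roughly +/- eps of the original:
    a converging 'fan' of admissible slopes is narrowed sample by sample."""
    x = np.asarray(x, float)
    kept_idx = [0]
    x0_i, x0 = 0, x[0]
    U, L = np.inf, -np.inf             # current upper/lower fan slopes
    for i in range(1, len(x)):
        dt = i - x0_i
        U = min(U, (x[i] + eps - x0) / dt)   # tighten upper slope
        L = max(L, (x[i] - eps - x0) / dt)   # tighten lower slope
        if L > U:                      # sample falls outside the fan
            kept_idx.append(i - 1)     # previous sample becomes permanent
            x0_i, x0 = i - 1, x[i - 1]
            U = (x[i] + eps - x0) / (i - x0_i)
            L = (x[i] - eps - x0) / (i - x0_i)
    kept_idx.append(len(x) - 1)
    return kept_idx                    # indices of retained samples
```
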

5 citations

Proceedings ArticleDOI
06 Nov 2014
TL;DR: A novel low-complexity on-chip ECG data compression methodology targeting remote health-care applications is proposed; it yields a faithful reconstruction, validated using the MIT-BIH and PTB databases as well as the institute's health repository IITH-DB.
Abstract: In this paper, we propose a novel low-complexity on-chip ECG data compression methodology targeting remote health-care applications. This is, to the best of our knowledge, the first attempt at reliable on-chip data compression. The proposed methodology has been implemented targeting an Application Specific Integrated Circuit platform at 1 MHz with a Vdd of 1.62 V for the UMC 130 nm technology library with a 16-bit system word-length. Furthermore, the proposed methodology results in a faithful reconstruction, which has been validated using MIT-BIH and PTB-DB as well as our institute's health repository IITH-DB. On average about 90% compression is achieved with more than 83% R² statistics, 98% cross-correlation and about 99% regression between the original and the reconstructed data, signifying the diagnostic accuracy. Consequently, the proposed methodology is capable of storing approximately 47 hours of data in the same on-chip memory, compared to 5 hours of continuous data in the state of the art, which would lead to enhanced diagnosis and prognosis in remote health-care.
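
For reference, the reconstruction-fidelity statistics quoted above can be computed roughly as follows (a sketch with my own function names; the authors' exact definitions may differ):

```python
import numpy as np

def r_squared(x, x_rec):
    """Coefficient of determination between original and reconstructed ECG."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    ss_res = np.sum((x - x_rec) ** 2)
    ss_tot = np.sum((x - np.mean(x)) ** 2)
    return 1.0 - ss_res / ss_tot

def normalized_cross_correlation(x, x_rec):
    """Zero-lag normalized cross-correlation of the mean-removed signals."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    xm, rm = x - x.mean(), x_rec - x_rec.mean()
    return float(np.dot(xm, rm) / (np.linalg.norm(xm) * np.linalg.norm(rm)))
```
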

5 citations


Cites background from "ECG data compression techniques-a u..."

  • ...However, the loss-less compression techniques proposed in the recent years gives low compression ratios which forced the designers to think about lossy compression techniques for ECG data compression[2]....


References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
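
A standard way to see the limiting process described here: partition the continuum into cells of width $\Delta$, compute the discrete entropy, and let $\Delta \to 0$ (the notation below is the usual modern one, not the paper's):

$$
-\sum_i p(x_i)\,\Delta\,\log\bigl(p(x_i)\,\Delta\bigr)
  \;=\; \underbrace{-\sum_i p(x_i)\,\Delta\,\log p(x_i)}_{\;\to\; h(X)\,=\,-\int p(x)\log p(x)\,dx}\;-\;\log\Delta .
$$

The $-\log\Delta$ term diverges, but it cancels in differences such as mutual information, which is why the discrete results carry over to proper continuous-case values.
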

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
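
A compact Python sketch of the construction (a standard heap-based Huffman builder; names and data structures are mine, not the paper's original notation):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a minimum-redundancy (Huffman) prefix code from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)       # merge the two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, next_id, merged])
        next_id += 1
    return heap[0][2]

print(huffman_code("abracadabra"))            # frequent symbols get shorter codes
```
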

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
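
A minimal sketch of the stationary (autocorrelation) all-pole case: form the autocorrelation sequence, solve the normal equations for the predictor, and read off the minimum prediction error energy. Function names are mine, and a production implementation would use a Toeplitz solver such as Levinson-Durbin rather than a generic linear solve:

```python
import numpy as np

def lpc_autocorrelation(x, order):
    """All-pole (LPC) coefficients by the autocorrelation method: solve the
    normal equations R a = r for the predictor a of the given order."""
    x = np.asarray(x, float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])             # predictor: x_hat[n] = sum a[k] x[n-k-1]
    min_pred_err = r[0] - np.dot(a, r[1:])    # normalized by r[0] gives the paper's
    return a, min_pred_err                    # normalized minimum prediction error
```
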

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
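
A toy floating-point sketch of the interval-narrowing idea (suitable only for short messages; practical arithmetic coders use integer arithmetic with renormalization and adaptive models, and all names below are illustrative):

```python
from collections import Counter

def arithmetic_encode_toy(msg):
    """Narrow [low, high) by each symbol's probability range; any number in
    the final interval identifies the whole message."""
    freq, total = Counter(msg), len(msg)
    cum, ranges = 0.0, {}
    for s, c in sorted(freq.items()):         # cumulative probability ranges
        ranges[s] = (cum, cum + c / total)
        cum += c / total
    low, high = 0.0, 1.0
    for s in msg:
        lo, hi = ranges[s]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2, ranges

def arithmetic_decode_toy(value, ranges, n):
    """Invert the narrowing for a known message length n."""
    out = []
    for _ in range(n):
        for s, (lo, hi) in ranges.items():
            if lo <= value < hi:
                out.append(s)
                value = (value - lo) / (hi - lo)
                break
    return "".join(out)

code, ranges = arithmetic_encode_toy("abracadabra")
print(arithmetic_decode_toy(code, ranges, 11))   # -> 'abracadabra'
```
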

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
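
As a small, concrete example of transform-domain compression with one of these transforms, here is a DCT coefficient-truncation sketch using SciPy (the `keep` parameter and function name are illustrative):

```python
import numpy as np
from scipy.fft import dct, idct    # orthonormal DCT-II / DCT-III pair

def transform_compress(x, keep):
    """Keep only the `keep` largest-magnitude DCT coefficients and
    reconstruct; energy compaction keeps the reconstruction error small."""
    c = dct(np.asarray(x, float), norm="ortho")
    discard = np.argsort(np.abs(c))[:-keep]   # indices of discarded coefficients
    c[discard] = 0.0
    return idct(c, norm="ortho")
```
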

928 citations