Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods, and a framework for the evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
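To make the tolerance-comparison family of direct methods concrete, the sketch below implements a generic zero-order-hold compressor in Python: a sample is stored only when it drifts more than a tolerance from the last stored value, and the signal is reconstructed by holding each stored value. This is an illustrative sketch in the spirit of AZTEC-style plateau coding, not any of the published algorithms; the synthetic signal, sampling rate, and tolerance are assumptions.

```python
import numpy as np

def zoh_compress(x, eps):
    """Tolerance-comparison (zero-order-hold) compression sketch.

    Keep a sample only when it deviates from the last kept value
    by more than eps; store (index, value) pairs.
    """
    kept_idx = [0]
    kept_val = [x[0]]
    for i, v in enumerate(x[1:], start=1):
        if abs(v - kept_val[-1]) > eps:
            kept_idx.append(i)
            kept_val.append(v)
    return np.array(kept_idx), np.array(kept_val)

def zoh_reconstruct(idx, val, n):
    """Hold each stored value until the next stored index."""
    y = np.empty(n)
    for k in range(len(idx)):
        stop = idx[k + 1] if k + 1 < len(idx) else n
        y[idx[k]:stop] = val[k]
    return y

# Toy usage on a synthetic trace standing in for an ECG lead (assumed data).
fs = 360                                   # assumed sampling rate, Hz
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
idx, val = zoh_compress(x, eps=0.1)
x_hat = zoh_reconstruct(idx, val, x.size)
print("kept", len(idx), "of", x.size, "samples")
```
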
Citations
Proceedings ArticleDOI
14 Oct 2008
TL;DR: A diagnostic-quality-driven mechanism for remote ECG monitoring is presented; it encodes priorities into the wave segments and provides accurate inference results while effectively compressing the data.
Abstract: We believe that each individual is unique, and that for diagnostic purposes it is necessary to have a distinctive combination of signals and data features that fits the personal health status. It is essential to develop mechanisms for reducing the amount of data that needs to be transferred (to mitigate the troublesome periodic recharging of a device) while maintaining diagnostic accuracy. Thus, the system should not uniformly compress the collected physiological data, but compress it in a personalized fashion that preserves the “important” signal features for each individual, so that the diagnosis can still be made with the required high confidence level. We present a diagnostic-quality-driven mechanism for remote ECG monitoring, which enables priorities to be encoded into the wave segments. The priority is specified by the diagnosis engine or medical experts, and is dynamic and individual-dependent. The system pre-processes the collected physiological information according to the assigned priority before delivering it to the backend server. We demonstrate that the proposed approach provides accurate inference results while effectively compressing the data.
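The paper itself does not publish an implementation, but the priority-driven preprocessing idea can be sketched roughly as follows: each annotated wave segment carries a priority, and lower-priority segments are decimated more aggressively before transmission. The segment annotations, priority levels, and decimation factors below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical decimation factor per priority level (1 = keep everything).
DECIMATION = {"high": 1, "medium": 4, "low": 16}

def prioritized_preprocess(x, segments):
    """Downsample each annotated segment according to its priority.

    segments: list of (start, stop, priority) tuples covering the signal.
    Returns a list of (start, step, samples) chunks to transmit.
    """
    chunks = []
    for start, stop, priority in segments:
        step = DECIMATION[priority]
        chunks.append((start, step, x[start:stop:step]))
    return chunks

# Toy usage with hypothetical annotations (e.g. QRS regions marked "high").
x = np.random.randn(1000)
segments = [(0, 300, "low"), (300, 400, "high"), (400, 1000, "medium")]
chunks = prioritized_preprocess(x, segments)
sent = sum(len(c[2]) for c in chunks)
print(f"transmitting {sent} of {len(x)} samples")
```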

7 citations

Patent
13 Jul 2010
TL;DR: In this article, a method for hybrid 2-D ECG data compression based on wavelet transforms is described; however, the method is not suited to high-frequency ECG signals.
Abstract: Implementations and techniques for hybrid 2-D ECG data compression based on wavelet transforms are generally disclosed. In accordance with some implementations, a method for compressing electrocardiogram (ECG) data may include receiving a one-dimensional (1-D) ECG signal, generating a two-dimensional (2-D) ECG data array from the 1-D ECG signal, wavelet transforming the 2-D ECG data array to generate wavelet coefficients including a low frequency subband, a first intermediate frequency subband, a second intermediate frequency subband, and a high-frequency subband, and encoding the wavelet coefficients to generate compressed ECG data. Encoding the wavelet coefficients may include subjecting the low frequency subband, the first intermediate frequency subband, the second intermediate frequency subband, and the high frequency subband to different encoding schemes.
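A rough sketch of the pipeline the abstract describes, using the PyWavelets library: the 1-D ECG is folded into a 2-D array, a single-level 2-D DWT yields the four subbands, and the detail subbands are quantized more coarsely than the approximation. The fixed-length framing, wavelet choice, and quantization steps are assumptions, not the patent's specification, and the entropy-coding stage is omitted.

```python
import numpy as np
import pywt

def ecg_1d_to_2d(x, beat_len):
    """Cut the 1-D ECG into equal-length rows to form a 2-D array.

    Real systems align rows on detected beats; fixed-length framing
    is used here only to keep the sketch short.
    """
    n_rows = len(x) // beat_len
    return np.reshape(x[:n_rows * beat_len], (n_rows, beat_len))

def compress_2d(x2d, wavelet="db2", q_low=0.01, q_high=0.1):
    """Single-level 2-D DWT with coarser quantization of the detail subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(x2d, wavelet)
    # The approximation (low-frequency) subband gets a fine step,
    # the detail (intermediate/high-frequency) subbands a coarse one.
    qA = np.round(cA / q_low)
    qH, qV, qD = (np.round(c / q_high) for c in (cH, cV, cD))
    return qA, qH, qV, qD

def reconstruct_2d(qA, qH, qV, qD, wavelet="db2", q_low=0.01, q_high=0.1):
    coeffs = (qA * q_low, (qH * q_high, qV * q_high, qD * q_high))
    return pywt.idwt2(coeffs, wavelet)

# Toy usage on random data standing in for an ECG record.
x = np.random.randn(10_000)
x2d = ecg_1d_to_2d(x, beat_len=250)
rec = reconstruct_2d(*compress_2d(x2d))
```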

7 citations

Proceedings Article
01 Sep 2005
TL;DR: The paper presents a new algorithm for ECG signal compression based on local extreme extraction, adaptive hysteretic filtering and LZW coding that is robust with respect to noise, has a rather small computational complexity and provides good compression ratios with excellent reconstruction quality.
Abstract: The paper presents a new algorithm for ECG signal compression based on local extreme extraction, adaptive hysteretic filtering and LZW coding. Basically, the method consists of smoothing the ECG signal with a Savitzky-Golay filter, extracting the local minima and maxima, applying hysteretic filtering, and LZW coding. The ECG signal is reconstructed by cubic interpolation.
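The steps named in the abstract map directly onto standard SciPy building blocks; the sketch below uses assumed filter and hysteresis parameters and omits the LZW stage for brevity, so it should be read as an illustration of the pipeline rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import savgol_filter, argrelextrema
from scipy.interpolate import CubicSpline

def compress_extrema(x, fs, window=15, polyorder=3, hyst=0.05):
    """Smooth, take local extrema, then drop extrema that differ from the
    previously kept one by less than the hysteresis band."""
    xs = savgol_filter(x, window, polyorder)
    mins = argrelextrema(xs, np.less)[0]
    maxs = argrelextrema(xs, np.greater)[0]
    cand = np.sort(np.concatenate(([0], mins, maxs, [len(xs) - 1])))
    keep = [cand[0]]
    for i in cand[1:]:
        if abs(xs[i] - xs[keep[-1]]) >= hyst:
            keep.append(i)
    keep = np.array(keep)
    return keep / fs, xs[keep]          # (times, amplitudes) to be LZW-coded

def reconstruct_extrema(t_keep, v_keep, fs, n):
    """Cubic interpolation through the kept extrema."""
    spline = CubicSpline(t_keep, v_keep)
    return spline(np.arange(n) / fs)

# Toy usage (synthetic signal standing in for an ECG lead).
fs = 360
t = np.arange(3 * fs) / fs
x = np.sin(2 * np.pi * 1.3 * t) + 0.02 * np.random.randn(t.size)
t_k, v_k = compress_extrema(x, fs)
x_hat = reconstruct_extrema(t_k, v_k, fs, x.size)
```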

7 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...Many algorithms for ECG compression have been proposed in the last thirty years; they have been mainly classified into three major categories: direct data compression, transformation methods, and parametric techniques [1]....


Proceedings ArticleDOI
01 Oct 2016
TL;DR: An ECG compression scheme using discrete Hermite functions and 2-D wavelet-based compression is presented; the loss of signal is minimal, so cardiologists are not misled toward a false diagnosis.
Abstract: The electrocardiogram (ECG) signal carries very important information about the condition of the human heart. Many researchers have made significant contributions toward optimal use of storage devices and improved transmission speed over wired and wireless channels. These applications demand good and trustworthy compression schemes. In this paper we propose an ECG compression scheme using discrete Hermite functions. The ECG signal is expanded over the discrete Hermite function basis, and 2-D wavelet-based compression is carried out. The signal is reconstructed by applying the inverse 2-D wavelet transform, with maximal similarity in the diagnostic characteristics. The results of the compression scheme are checked against standard performance criteria such as the compression ratio (CR), percent root-mean-square difference (PRD), and normalized cross-correlation coefficient (NCC). The nature of the reconstructed signal is assessed by the retrieval of the dominant morphological features (P, QRS, R, S, and T parts and their amplitudes) upon reconstruction. The loss of signal is minimal, so cardiologists are not misled toward a false diagnosis. The analysis is carried out using the Haar and Daubechies (db2-db20) wavelet filters.
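The three performance criteria named here have standard definitions that are easy to state in code; the sketch below uses the common non-mean-removed PRD variant, which is an assumption since papers differ on this point (some subtract the signal mean in the denominator, often written PRDN).

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR = size of the original data / size of the compressed data."""
    return original_bits / compressed_bits

def prd(x, x_hat):
    """Percent root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def ncc(x, x_hat):
    """Normalized cross-correlation coefficient of the two signals."""
    xc = x - x.mean()
    yc = x_hat - x_hat.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))
```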

7 citations

Journal ArticleDOI
TL;DR: In this article, the authors present four types of objective distortion measures and evaluate their performance in terms of quality prediction accuracy, Pearson correlation coefficient, and computational time; the evaluation is performed on different kinds of PPG waveform distortions introduced by predictive coding, compressed sampling, the discrete cosine transform, and the discrete wavelet transform.
Abstract: Real-time photoplethysmogram (PPG) denoising and data compression have become essential requirements for accurately measuring vital parameters and transmitting data efficiently, but lossy processing techniques may introduce different kinds of waveform distortions. Subjective quality assessment tests are the most reliable way to assess quality, but they are time-consuming and cannot be incorporated into a quality-driven compression mechanism. Thus, a good objective distortion measure, one that is subjectively meaningful and simple, is highly desirable for automatically evaluating the quality of the reconstructed PPG signal. In this paper, we present four types of objective distortion measures and evaluate their performance in terms of quality prediction accuracy, Pearson correlation coefficient, and computational time. The performance evaluation is carried out on different kinds of PPG waveform distortions introduced by predictive coding, compressed sampling, the discrete cosine transform, and the discrete wavelet transform. On normal and abnormal PPG signals taken from five standard databases, the evaluation results showed that different subjective quality evaluation groups (5-point, 3-point, and 2-point rating scales) had different best objective distortion measures in terms of prediction accuracy and Pearson correlation coefficient. Moreover, the selection of the best objective distortion measure depends upon the type of PPG features that need to be preserved in the reconstructed signal.
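As an illustration of how such measures are benchmarked, the sketch below correlates an objective distortion score with subjective ratings using the Pearson coefficient; the data are synthetic placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate_measure(objective_scores, subjective_scores):
    """Pearson correlation between an objective distortion measure and
    subjective quality ratings; higher |r| means better quality prediction."""
    r, p_value = pearsonr(objective_scores, subjective_scores)
    return r, p_value

# Synthetic placeholder data: PRD-like distortion values vs. 5-point MOS ratings.
rng = np.random.default_rng(0)
objective = rng.uniform(1, 30, size=50)            # e.g. PRD of reconstructed PPG
subjective = 5 - 0.12 * objective + rng.normal(0, 0.3, size=50)
r, p = evaluate_measure(objective, subjective)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```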

7 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
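The limiting process described here can be illustrated numerically: quantizing a continuous source into bins of width Δ gives an entropy close to the differential entropy plus log2(1/Δ) as Δ shrinks. The sketch below assumes a unit-variance Gaussian source.

```python
import numpy as np

def discretized_entropy(samples, delta):
    """Entropy (bits) of samples quantized into bins of width delta."""
    bins = np.floor(samples / delta).astype(np.int64)
    _, counts = np.unique(bins, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=1_000_000)     # unit-variance Gaussian source
h_diff = 0.5 * np.log2(2 * np.pi * np.e)     # differential entropy, ~2.05 bits

for delta in (0.5, 0.1, 0.02):
    h_q = discretized_entropy(x, delta)
    print(f"delta={delta}: H={h_q:.2f}  h + log2(1/delta)={h_diff + np.log2(1 / delta):.2f}")
```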

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
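The construction behind such a minimum-redundancy code is short enough to sketch; the symbol frequencies below are arbitrary example data.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a minimum-redundancy (Huffman) code from symbol frequencies."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least-probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, tie, merged])
        tie += 1
    return heap[0][2]

# Toy usage: code lengths roughly track -log2(symbol probability).
text = "abracadabra"
code = huffman_code(Counter(text))
encoded = "".join(code[ch] for ch in text)
print(code, len(encoded), "bits")
```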

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
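A minimal sketch of the autocorrelation (stationary) method described above: form the autocorrelation sequence, solve the resulting Toeplitz normal equations, and check the coefficients on a synthetic all-pole signal. The model order and test signal are assumptions chosen only to exercise the code.

```python
import numpy as np
from scipy.linalg import toeplitz

def lpc_autocorrelation(x, order):
    """All-pole model coefficients via the autocorrelation (stationary) method.

    Solves R a = r, where R is the Toeplitz autocorrelation matrix,
    so that x[n] is approximated by sum_k a[k] * x[n-k].
    """
    x = np.asarray(x, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = toeplitz(r[:order])
    a = np.linalg.solve(R, r[1:order + 1])
    return a

# Toy usage: recover the coefficients of a known AR(2) process.
rng = np.random.default_rng(2)
n, a_true = 50_000, np.array([1.5, -0.75])
x = np.zeros(n)
for i in range(2, n):
    x[i] = a_true @ x[i - 2:i][::-1] + rng.normal(0, 0.1)
print(lpc_autocorrelation(x, order=2))   # approximately [1.5, -0.75]
```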

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
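The interval-narrowing principle behind arithmetic coding can be shown with exact rational arithmetic; this non-streaming sketch with a fixed three-symbol model is a simplification, since practical coders work with finite precision, adaptive models, and incremental output.

```python
from fractions import Fraction

def build_intervals(probs):
    """Map each symbol to its cumulative probability interval [lo, hi)."""
    intervals, c = {}, Fraction(0)
    for s, p in probs.items():
        intervals[s] = (c, c + p)
        c += p
    return intervals

def ac_encode(message, probs):
    """Narrow [0, 1) once per symbol; any number in the final interval
    (here its lower end) identifies the whole message."""
    low, high, iv = Fraction(0), Fraction(1), build_intervals(probs)
    for s in message:
        lo, hi = iv[s]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return low

def ac_decode(code, length, probs):
    low, high, iv = Fraction(0), Fraction(1), build_intervals(probs)
    out = []
    for _ in range(length):
        x = (code - low) / (high - low)
        for s, (lo, hi) in iv.items():
            if lo <= x < hi:
                out.append(s)
                span = high - low
                low, high = low + span * lo, low + span * hi
                break
    return "".join(out)

# Toy usage with an assumed three-symbol model.
probs = {"a": Fraction(7, 10), "b": Fraction(2, 10), "c": Fraction(1, 10)}
msg = "aabacab"
print(ac_decode(ac_encode(msg, probs), len(msg), probs) == msg)   # True
```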

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
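A quick way to see the energy-compaction (variance-distribution) criterion in action is to compare how much signal energy the largest transform coefficients capture for a few of these transforms; the AR(1) test signal and coefficient budget below are assumptions.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def energy_compaction(coeffs, k):
    """Fraction of total energy captured by the k largest-magnitude coefficients."""
    e = np.abs(coeffs) ** 2
    return np.sort(e)[::-1][:k].sum() / e.sum()

# Highly correlated AR(1) test signal (an assumption standing in for real data).
rng = np.random.default_rng(3)
n = 256
x = np.zeros(n)
for i in range(1, n):
    x[i] = 0.95 * x[i - 1] + rng.normal()

c_dct = dct(x, norm="ortho")                     # discrete cosine transform
c_wht = hadamard(n) @ x / np.sqrt(n)             # Walsh-Hadamard (orthonormal)
c_dft = np.fft.fft(x) / np.sqrt(n)               # discrete Fourier transform

for name, c in [("DCT", c_dct), ("WHT", c_wht), ("DFT", c_dft)]:
    print(f"{name}: {energy_compaction(c, k=32):.3f} of energy in 32 coefficients")
```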

928 citations