Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
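To make the DPCM-plus-entropy-coding branch of the taxonomy concrete, the sketch below (a minimal illustration, not any of the surveyed schemes themselves) predicts each sample from its predecessor, quantizes the residual with a uniform step, and estimates the entropy of the residual symbols as a proxy for the gain a subsequent entropy coder would deliver. The test signal, step size, and function names are assumptions made for illustration.

```python
# Minimal first-order DPCM sketch: predict each sample by its predecessor,
# quantize the residual, and estimate the entropy of the residual stream.
import numpy as np

def dpcm_encode(x, step=4.0):
    """Encode with a first-order predictor and a uniform residual quantizer."""
    x = np.asarray(x, dtype=float)
    codes = np.empty(len(x), dtype=int)
    recon = np.empty(len(x))
    pred = 0.0
    for i, sample in enumerate(x):
        q = int(round((sample - pred) / step))  # quantized prediction residual
        codes[i] = q
        recon[i] = pred + q * step              # what the decoder will reconstruct
        pred = recon[i]                         # predictor tracks the decoder, not x
    return codes, recon

def empirical_entropy(codes):
    """Bits per sample an ideal entropy coder would need for the residual symbols."""
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 500)
    signal = 100.0 * np.sin(2 * np.pi * 5 * t) + 10.0 * np.random.randn(t.size)  # stand-in, not real ECG
    codes, recon = dpcm_encode(signal)
    print("residual entropy (bits/sample):", empirical_entropy(codes))
    print("max reconstruction error:", np.max(np.abs(signal - recon)))
```
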
Citations
Proceedings ArticleDOI
22 Oct 2007
TL;DR: This work presents a method of ECG data compression utilizing Jacobi polynomials, with Gauss quadrature used for numerical integration, and obtains interesting results compared with ECG compression by wavelet decomposition methods.
Abstract: Data compression is a frequent signal processing operation applied to ECG. We present here a method of ECG data compression utilizing Jacobi polynomials. ECG signals are first divided into blocks that match cardiac cycles before being decomposed in Jacobi polynomial bases. A Gauss quadrature mechanism for numerical integration is used to compute the Jacobi transform coefficients. Coefficients of small value are discarded in the reconstruction stage. For experimental purposes, we chose eight families of Jacobi polynomials. Various segmentation approaches were considered. We elaborated an efficient strategy to cancel boundary effects. We obtained interesting results compared with ECG compression by wavelet decomposition methods. Some propositions are suggested to improve the results.
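A minimal sketch of the block projection described above, assuming SciPy is available and that each cardiac-cycle block has already been resampled onto the Gauss-Jacobi quadrature nodes; the paper's segmentation variants and boundary-effect strategy are not reproduced here, and the function names and thresholding rule are illustrative.

```python
# Per-block Jacobi-polynomial transform sketch using Gauss-Jacobi quadrature.
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

def jacobi_coefficients(block_on_nodes, n_coeffs, alpha=0.0, beta=0.0):
    """Project one block (sampled at the quadrature nodes) onto P_n^(alpha, beta)."""
    n_nodes = len(block_on_nodes)
    x, w = roots_jacobi(n_nodes, alpha, beta)      # quadrature nodes and weights
    coeffs = np.empty(n_coeffs)
    for n in range(n_coeffs):
        pn = eval_jacobi(n, alpha, beta, x)
        norm = np.sum(w * pn * pn)                 # numerical normalization constant h_n
        coeffs[n] = np.sum(w * block_on_nodes * pn) / norm
    return coeffs, x

def jacobi_reconstruct(coeffs, x, alpha=0.0, beta=0.0, keep=None):
    """Rebuild the block at the nodes, optionally keeping only the largest coefficients."""
    c = coeffs.copy()
    if keep is not None:
        c[np.argsort(np.abs(c))[:-keep]] = 0.0     # discard coefficients of small value
    rec = np.zeros_like(x)
    for n, cn in enumerate(c):
        rec += cn * eval_jacobi(n, alpha, beta, x)
    return rec
```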

26 citations

Proceedings ArticleDOI
01 Jan 2006
TL;DR: The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression, and the author modifies the algorithm to provide even better performance than the original SPIHT algorithm.
Abstract: The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the stored or transmitted bit stream. I applied it to the compression of multichannel ECG data, and I also present a specific procedure, based on the modified algorithm, for more efficient compression of multichannel ECG data. This method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results for the compression of multichannel ECG data. Furthermore, in order to compress one signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.
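Full SPIHT/ESPIHT relies on spatial-orientation-tree bookkeeping that is too long to reproduce here; the following much-simplified sketch only illustrates the progressive, threshold-halving significance and refinement passes that such coders build on, applied to an arbitrary coefficient array. It is not the author's algorithm.

```python
# Simplified bit-plane coding: a sorting pass flags newly significant coefficients at
# the current threshold, a refinement pass emits one magnitude bit for previously
# significant ones, and the threshold is halved each pass.
import numpy as np

def bitplane_encode(coeffs, n_passes=6):
    """Return a symbolic event stream; a real coder would pack these into bits."""
    coeffs = np.asarray(coeffs, dtype=float)
    T = 2.0 ** np.floor(np.log2(np.max(np.abs(coeffs))))   # largest power-of-two threshold
    significant = np.zeros(len(coeffs), dtype=bool)
    stream = []
    for _ in range(n_passes):
        newly = (~significant) & (np.abs(coeffs) >= T)
        for i in np.flatnonzero(newly):                    # sorting pass
            stream.append(("sig", int(i), int(np.sign(coeffs[i]))))
        for i in np.flatnonzero(significant):              # refinement pass
            bit = int(np.floor(np.abs(coeffs[i]) / T)) % 2
            stream.append(("ref", int(i), bit))
        significant |= newly
        T /= 2.0
    return stream
```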

26 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...Average percent root mean square difference (APRD) is used to evaluate the reconstructed signals in multichannel compression [10]....


Journal ArticleDOI
TL;DR: A compression method, based on the choice of a wavelet that minimizes the distortion of compression for each electrocardiogram considered, is proposed in this paper.

26 citations

Proceedings ArticleDOI
21 Jul 2011
TL;DR: A comparative study of the Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT) and Wavelet Transform (WT) is carried out and shows that ECG data compression using the wavelet transform can achieve better compression performance than the FFT and DCT.
Abstract: The electrocardiogram (ECG) is widely used in the diagnosis and treatment of cardiac disease. Large amounts of signal data need to be stored and transmitted, so it is necessary to compress ECG signal data in an efficient way. In the past decades, many ECG compression methods have been proposed, and these methods can be roughly classified into three categories: direct methods, parameter extraction methods and transform methods. In this paper a comparative study of the Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT) and Wavelet Transform (WT) is carried out. Records selected from the MIT-BIH arrhythmia database are tested. For performance evaluation, the Compression Ratio (CR), Percent Root mean square Difference (PRD) and Signal to Noise Ratio (SNR) parameters are used. Simulation results show that using the FFT, a low PRD and a high SNR are achieved. The DCT increases the CR by 58.97% compared with the FFT. The WT further increases the CR by 31% compared with the DCT, with a low PRD value. This shows that ECG data compression using the wavelet transform can achieve better compression performance than the FFT and DCT.
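For reference, the three figures of merit named in the abstract can be computed as below; note that PRD is sometimes defined on mean-removed signals, so published numbers may follow slightly different variants.

```python
# CR, PRD and SNR as commonly defined for ECG compression evaluation.
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: how many original bits are represented per compressed bit."""
    return original_bits / compressed_bits

def prd(x, x_rec):
    """Percent Root mean square Difference between original and reconstruction."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def snr_db(x, x_rec):
    """Signal-to-Noise Ratio of the reconstruction, in decibels."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))
```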

26 citations

Proceedings ArticleDOI
26 May 2013
TL;DR: A new scheme called Variable Pulse Width Finite Rate of Innovation (VPW-FRI) is discussed, which generalizes classical FRI estimation to enable the use of a sum of asymmetric Cauchy-based pulses for modeling electrocardiogram (ECG) signals.
Abstract: Mobile health is gaining increasing importance for society, and the quest for new power-efficient devices sampling biosignals is becoming critical. We discuss a new scheme called Variable Pulse Width Finite Rate of Innovation (VPW-FRI) to model and compress ECG signals. This technique generalizes classical FRI estimation to enable the use of a sum of asymmetric Cauchy-based pulses for modeling electrocardiogram (ECG) signals. We experimentally show that VPW-FRI indeed models ECG signals with increased accuracy compared to current standards. In addition, we study the compression efficiency of the method: compared with various widely used compression schemes, we showcase improvements in terms of compression efficiency as well as sampling rate.
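As an illustration only, the snippet below fits a sum of three symmetric Cauchy (Lorentzian) pulses to a synthetic beat by nonlinear least squares; the actual VPW-FRI scheme estimates asymmetric pulse parameters with FRI/annihilating-filter machinery, which this sketch deliberately does not reproduce, and all values are made up.

```python
# Toy pulse-sum model of a beat, fitted with nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def cauchy_pulse(t, amplitude, center, width):
    return amplitude * width**2 / ((t - center) ** 2 + width**2)

def three_pulse_model(t, a1, c1, w1, a2, c2, w2, a3, c3, w3):
    return (cauchy_pulse(t, a1, c1, w1)
            + cauchy_pulse(t, a2, c2, w2)
            + cauchy_pulse(t, a3, c3, w3))

t = np.linspace(0.0, 1.0, 400)
beat = three_pulse_model(t, 0.2, 0.25, 0.03, 1.0, 0.5, 0.01, 0.3, 0.75, 0.04)  # synthetic "P-QRS-T"
beat += 0.01 * np.random.randn(t.size)
initial_guess = [0.1, 0.2, 0.05, 0.8, 0.5, 0.02, 0.2, 0.8, 0.05]
params, _ = curve_fit(three_pulse_model, t, beat, p0=initial_guess)
print("fitted (amplitude, center, width) triplets:", np.round(params, 3))
```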

25 citations


Cites background from "ECG data compression techniques-a u..."

  • ...An elaborate overview of ECG compression algorithms can be found in [3]....


References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
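A compact sketch of the minimum-redundancy construction: repeatedly merge the two lowest-weight subtrees, prefixing '0' and '1' to the codes of their leaves. The example message and helper names are arbitrary, and the code is an illustration rather than an optimized implementation.

```python
# Huffman code construction with a binary heap over symbol frequencies.
import heapq
from collections import Counter

def huffman_code(message):
    """Return a {symbol: bitstring} minimum-redundancy code for the message."""
    freq = Counter(message)
    if len(freq) == 1:                                   # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # heap entries: (weight, tie-breaker, {symbol: partial code})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

message = "abracadabra"
codes = huffman_code(message)
avg_bits = sum(len(codes[s]) * n for s, n in Counter(message).items()) / len(message)
print(codes, "average code length:", avg_bits)
```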

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
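A minimal sketch of the autocorrelation (all-pole) method described above: estimate the predictor coefficients for one frame by solving the Toeplitz normal (Yule-Walker) equations, then form the prediction residual whose small variance is what makes linear prediction useful for compression. Windowing, frame segmentation and reflection-coefficient quantization are omitted, and the function names are illustrative.

```python
# Autocorrelation-method LPC for a single frame.
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_autocorrelation(frame, order):
    """Return predictor coefficients a_1..a_p estimated from a single frame."""
    frame = np.asarray(frame, dtype=float)
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    return solve_toeplitz(r[:order], r[1:order + 1])     # solves the Toeplitz system R a = r

def prediction_residual(frame, a):
    """e[n] = x[n] - sum_k a_k x[n-k]; a small residual is what enables compression."""
    frame = np.asarray(frame, dtype=float)
    pred = np.zeros_like(frame)
    for k, ak in enumerate(a, start=1):
        pred[k:] += ak * frame[:-k]
    return frame - pred
```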

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
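The interval-narrowing idea behind arithmetic coding can be illustrated with floating-point arithmetic for short messages and a fixed probability model; production coders use integer arithmetic with renormalization and adaptive models, all of which this toy version omits.

```python
# Floating-point illustration of arithmetic coding's interval narrowing.
def arithmetic_encode(message, probs):
    """Return a single number lying inside the final sub-interval for the message."""
    low, high = 0.0, 1.0
    symbols = sorted(probs)
    for ch in message:
        span = high - low
        cumulative = 0.0
        for s in symbols:
            if s == ch:
                low, high = low + span * cumulative, low + span * (cumulative + probs[s])
                break
            cumulative += probs[s]
    return (low + high) / 2

def arithmetic_decode(value, length, probs):
    """Recover `length` symbols by re-subdividing the unit interval."""
    symbols = sorted(probs)
    low, high, out = 0.0, 1.0, []
    for _ in range(length):
        span = high - low
        cumulative = 0.0
        for s in symbols:
            s_low = low + span * cumulative
            s_high = s_low + span * probs[s]
            if s_low <= value < s_high:
                out.append(s)
                low, high = s_low, s_high
                break
            cumulative += probs[s]
    return "".join(out)

model = {"a": 0.5, "b": 0.3, "c": 0.2}
code_point = arithmetic_encode("abba", model)
print(code_point, arithmetic_decode(code_point, 4, model))
```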

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
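One of the criteria listed above is energy compaction for data compression; the snippet below compares the DCT with an orthonormal, naturally ordered Walsh-Hadamard transform on a synthetic 64-sample block, assuming SciPy is available. The test block is a made-up pulse, not an ECG record.

```python
# Energy-compaction comparison: fraction of energy in the largest coefficients.
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

n = 64
t = np.arange(n) / n
block = np.exp(-((t - 0.5) ** 2) / 0.002)              # smooth, pulse-like test block

dct_coeffs = dct(block, norm="ortho")
wht_coeffs = hadamard(n) @ block / np.sqrt(n)          # naturally ordered, orthonormal WHT

def energy_in_top_k(coeffs, k=8):
    """Fraction of total energy captured by the k largest-magnitude coefficients."""
    c2 = np.sort(np.abs(coeffs))[::-1] ** 2
    return c2[:k].sum() / c2.sum()

print("DCT top-8 energy fraction:", round(energy_in_top_k(dct_coeffs), 4))
print("WHT top-8 energy fraction:", round(energy_in_top_k(wht_coeffs), 4))
```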

928 citations