Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
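
Of the direct methods listed in the abstract, the Turning-point (TP) algorithm is the simplest to illustrate: from each incoming pair of samples it keeps the one that preserves a change of slope direction, giving a fixed 2:1 sample reduction. The Python sketch below is only an illustration of that idea (the parameters and test signal are assumptions, not the survey's reference implementation):

```python
import numpy as np

def turning_point_compress(x):
    """Minimal sketch of the Turning Point (TP) algorithm: fixed 2:1 reduction.

    For each pair of incoming samples (x1, x2) following the reference x0,
    the sample that preserves a slope sign change (a 'turning point') is kept
    and becomes the new reference.
    """
    def sign(v):
        return (v > 0) - (v < 0)

    out = [x[0]]
    x0 = x[0]
    i = 1
    while i + 1 < len(x):
        x1, x2 = x[i], x[i + 1]
        s1 = sign(x1 - x0)
        s2 = sign(x2 - x1)
        # keep x1 if the slope changes sign at x1 (a turning point), else keep x2
        kept = x1 if s1 * s2 < 0 else x2
        out.append(kept)
        x0 = kept
        i += 2
    return np.asarray(out)

# illustrative test signal (not real ECG data)
ecg = np.sin(np.linspace(0, 10 * np.pi, 1000)) + 0.05 * np.random.randn(1000)
compressed = turning_point_compress(ecg)
print(len(ecg), "->", len(compressed))  # roughly 2:1
```
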
Citations
Journal ArticleDOI
01 Dec 2021
TL;DR: In this article, a one-dimensional ECG signal is decomposed into a symmetric tree structure at each level using discrete wavelet transforms, which yields a large number of insignificant coefficients.
Abstract: In this paper, a one-dimensional ECG signal is decomposed into a symmetric tree structure at each level using discrete wavelet transforms, which yields a large number of insignificant coefficients. These coefficients are set to zero amplitude and represented as sparse datasets, improving the compression rate, and Huffman coding is used to represent the signal at a low bit rate. The result is a compact code for large ECG time-series datasets. Different wavelet filters are evaluated for compression based on the sparse data produced by the wavelet decomposition. The compression performance of the algorithm is 43.52% and 42.8%, with a 99.9% correlation between the original and recovered signals, for the MIT-BIH arrhythmia and compression datasets, respectively. Further, heart rate variability (HRV) analysis, using the correlation of R-R intervals between the original and reconstructed ECG signals, validates the reconstruction as well as the sensitivity of the compression technique with respect to data accuracy.

2 citations
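
A minimal sketch of the pipeline described in this entry (multilevel DWT, zeroing of insignificant coefficients, reconstruction check), assuming the PyWavelets package and illustrative parameter choices (db4 wavelet, 5 levels, keeping 10% of coefficients); the Huffman entropy-coding stage is only noted in a comment, not implemented:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_threshold_compress(x, wavelet="db4", level=5, keep_ratio=0.1):
    """Decompose, zero the insignificant coefficients, reconstruct, and report
    the correlation and the resulting sparsity. In a full implementation the
    sparse coefficient arrays would then be entropy coded, e.g. with Huffman
    coding, to obtain the low-bit-rate representation."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    flat = np.concatenate(coeffs)
    # threshold chosen so that roughly `keep_ratio` of coefficients survive
    thr = np.quantile(np.abs(flat), 1.0 - keep_ratio)
    coeffs_t = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
    x_rec = pywt.waverec(coeffs_t, wavelet)[: len(x)]
    corr = np.corrcoef(x, x_rec)[0, 1]
    zeros = sum(int(np.count_nonzero(c == 0)) for c in coeffs_t)
    total = sum(c.size for c in coeffs_t)
    return x_rec, corr, zeros / total

# illustrative test signal (not real ECG data)
t = np.linspace(0, 1, 2048)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 25 * t)
_, corr, sparsity = wavelet_threshold_compress(ecg_like)
print(f"correlation={corr:.4f}, fraction of zero coefficients={sparsity:.2f}")
```
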

Journal ArticleDOI
Sabah M. Ahmed1
01 Sep 2008
TL;DR: An ECG compressor based on the optimal selection of wavelet filters and threshold levels in different subbands that achieves maximum data volume reduction while guaranteeing reconstruction quality; the computational complexity of the proposed technique is the price paid for the improvement in the compression performance measures.
Abstract: Although most of the theoretical and implementation aspects of wavelet-based algorithms for ElectroCardioGram (ECG) signal compression are well studied, many issues related to the choice of wavelet filters and the selection of threshold levels remain unresolved. The utilization of an optimal mother wavelet leads to the localization and maximization of wavelet coefficient values in the wavelet domain. This paper presents an ECG compressor based on the optimal selection of wavelet filters and threshold levels in different subbands that achieves maximum data volume reduction while guaranteeing reconstruction quality. The proposed algorithm starts by segmenting the ECG signal into frames, where each frame is decomposed into m subbands through optimized wavelet filters. The resulting wavelet coefficients are thresholded: those having absolute values below the specified threshold levels in all subbands are deleted, and the remaining coefficients are appropriately encoded with a modified version of the run-length coding scheme. The threshold levels to use, before encoding, are adjusted in an optimum manner until the predefined compression ratio and signal quality are achieved. Extensive experimental tests were made by applying the algorithm to ECG records from the MIT-BIH Arrhythmia Database [1]. The compression ratio (CR), the percent root-mean-square difference (PRD), and the zero-mean percent root-mean-square difference (PRD1) measures are used for measuring the algorithm's performance (high CR with excellent reconstruction quality). From the obtained results, it can be deduced that the performance of the optimized signal-dependent wavelet outperforms that of the Daubechies and Coiflet standard wavelets. However, the computational complexity of the proposed technique is the price paid for the improvement in the compression performance measures. Finally, it should be noted that the proposed method is flexible in controlling the quality of the reconstructed signals and the volume of the compressed signals by establishing a target PRD and CR a priori, respectively.

2 citations
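
The feedback loop described in this entry, adjusting thresholds until a target quality is met, can be sketched as follows. This is a hedged illustration only: it uses the standard PRD definition and bisects a single global threshold, whereas the cited paper optimizes per-subband thresholds and the wavelet filters themselves. PyWavelets and the db4/5-level settings are assumptions:

```python
import numpy as np
import pywt

def prd(x, x_rec):
    """Percent root-mean-square difference (PRD); the zero-mean variant (PRD1)
    would subtract the mean of x in the denominator."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compress_to_target_prd(x, target_prd=2.0, wavelet="db4", level=5, steps=30):
    """Bisect on a global threshold until the reconstruction meets the target
    PRD, returning the largest threshold (i.e. the sparsest representation)
    that still satisfies the quality constraint."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    flat = np.abs(np.concatenate(coeffs))
    lo, hi = 0.0, float(flat.max())
    best = None
    for _ in range(steps):
        thr = 0.5 * (lo + hi)
        ct = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
        x_rec = pywt.waverec(ct, wavelet)[: len(x)]
        if prd(x, x_rec) <= target_prd:
            best, lo = (thr, x_rec), thr   # quality ok: try a larger threshold
        else:
            hi = thr                        # too lossy: lower the threshold
    return best
```
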


Cites background or methods from "ECG data compression techniques-a u..."

  • ...Wavelet-based ECG compression methods have been proved to perform well [3], [8]-[9]....


  • ...Moreover, the subjective judgment solution is expensive and can generally be applied only for research purposes [8]....


  • ...Transmission techniques of biomedical signals through communication channels are currently an important issue in many applications related to clinical practice [8]-[9]....


  • ...The DWT of the discrete-type signal x[n] of length N is computed in a recursive cascade structure consisting of decimators (↓2) and complementary low-pass (h) and high-pass (g) filters which are uniquely associated with a wavelet [8].... (a minimal sketch of this cascade follows the list)

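
As noted in the last excerpt, each analysis level filters the running approximation with h and g and decimates by 2, then recurses on the low-pass branch. A minimal sketch, using Haar filters purely as a stand-in (the citing paper's actual filters are not assumed here):

```python
import numpy as np

# Haar analysis filters; any orthogonal wavelet's low-pass (h) / high-pass (g)
# pair could be substituted (these are illustrative, not the cited paper's).
h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass

def dwt_cascade(x, levels=3):
    """Recursive analysis cascade: filter with h and g, decimate by 2 (keep
    every second sample), then feed the approximation into the next level."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        lo = np.convolve(approx, h)[1::2]  # low-pass branch, decimated by 2
        hi = np.convolve(approx, g)[1::2]  # high-pass branch, decimated by 2
        details.append(hi)
        approx = lo
    return approx, details
```
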

Proceedings ArticleDOI
31 Oct 1996
TL;DR: A fractal coding method commonly used for images is adapted to ECG signals, and parameters in the method are empirically optimized to fit the ECG characteristics.
Abstract: A fractal coding method commonly used for images is adapted to ECG signals. Parameters in the method are empirically optimized to fit the ECG characteristics, and test results are shown on typical sample signals. We also investigate methods for speeding up the fractal encoding, and compare results from these methods with the basic algorithm.

2 citations
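
To make the adaptation concrete, here is a heavily hedged sketch of 1D fractal block coding in the spirit of image fractal compression. The block length, scale clipping, domain pool, and iteration count are all illustrative assumptions, not the cited paper's settings:

```python
import numpy as np

def fractal_encode(x, B=8):
    """Approximate each non-overlapping range block of length B by an affinely
    scaled, decimated domain block of length 2B; store (domain index, s, o)."""
    n = (len(x) // B) * B
    x = np.asarray(x[:n], dtype=float)
    ranges = x.reshape(-1, B)
    # domain pool: blocks of length 2B stepped by B, decimated by 2 to length B
    domains = [x[i:i + 2 * B][::2] for i in range(0, n - 2 * B + 1, B)]
    code = []
    for R in ranges:
        best = None
        for j, D in enumerate(domains):
            dvar = np.var(D)
            s = 0.0 if dvar == 0 else np.cov(D, R, bias=True)[0, 1] / dvar
            s = float(np.clip(s, -0.9, 0.9))   # keep the map contractive
            o = R.mean() - s * D.mean()
            err = np.sum((R - (s * D + o)) ** 2)
            if best is None or err < best[0]:
                best = (err, j, s, o)
        code.append(best[1:])
    return code, n

def fractal_decode(code, n, B=8, iters=12):
    """Iterate the stored affine maps from an all-zero signal; contractivity
    makes the iteration converge to the fractal approximation."""
    y = np.zeros(n)
    for _ in range(iters):
        domains = [y[i:i + 2 * B][::2] for i in range(0, n - 2 * B + 1, B)]
        y = np.concatenate([s * domains[j] + o for (j, s, o) in code])
    return y
```
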

Proceedings ArticleDOI
21 Apr 1997
TL;DR: A new technique for ECG compression is presented, which ensures that the maximum reconstruction error in any cycle does not occur in the diagnostically crucial QRS region, while achieving a compression of about 15:1 and a normalized root mean square error of about 10%.
Abstract: A new technique for ECG compression is presented. Each delineated ECG beat is period normalized by multirate processing and then amplitude normalized. The discrete wavelet transform (DWT), based on Daubechies-4 basis functions, is applied to these normalized beats, after shifting each of them to the origin. The concatenation of the ordered DWT coefficients of these beats is a near-cyclostationary signal. An algorithm is proposed to select a set of common positions of the significant coefficients to be retained from each beat. Linear prediction is then applied to predict only these DWT coefficients of the current beat from the corresponding coefficients of a certain number of previous beats. Transmitting only the residuals of selected coefficients improves the compression. A significant advantage of this technique is that the maximum reconstruction error in any cycle does not occur in the diagnostically crucial QRS region, while achieving a compression of about 15:1 and a normalized root mean square error of about 10%.

2 citations
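
The beat-oriented preprocessing described above (period and amplitude normalization, then cross-beat prediction of DWT coefficients) can be sketched as follows. This is an illustrative simplification, assuming NumPy, SciPy, and PyWavelets, a fixed beat length of 256 samples, and first-order prediction from the immediately preceding beat rather than the paper's multi-beat linear predictor:

```python
import numpy as np
import pywt
from scipy.signal import resample

def normalize_beat(beat, target_len=256):
    """Period-normalize a delineated beat by resampling it to a fixed length,
    then amplitude-normalize it by its peak absolute value."""
    b = resample(np.asarray(beat, dtype=float), target_len)
    scale = float(np.max(np.abs(b))) or 1.0
    return b / scale, scale

def beat_residuals(beats, wavelet="db4", level=4):
    """Represent each beat by the residual of its DWT coefficients against the
    previous beat's coefficients; near-repetitive beats give small residuals,
    which is what makes the subsequent encoding cheap."""
    prev = None
    residuals = []
    for beat in beats:
        b, _ = normalize_beat(beat)
        c = np.concatenate(pywt.wavedec(b, wavelet, level=level))
        residuals.append(c if prev is None else c - prev)
        prev = c
    return residuals
```
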

Book ChapterDOI
01 Jan 2006

2 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...Many techniques have been developed for ECG data compression (Jalaleddine et al., 1990), and it is not difficult to reduce data to 8:1....


References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations
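
Stated as a formula (a standard identity added here for illustration; it is not quoted from the entry above): partitioning the continuum into cells of width $\Delta$ with $p_i \approx p(x_i)\,\Delta$ gives

$$ H_\Delta \;=\; -\sum_i p(x_i)\,\Delta\,\log\!\big(p(x_i)\,\Delta\big) \;\approx\; -\int p(x)\log p(x)\,dx \;-\; \log\Delta, $$

so the discrete entropy separates into the differential entropy of the continuous case plus a term that depends only on the partition size.
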

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.

5,221 citations
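
A minimal sketch of minimum-redundancy (Huffman) code construction, as used by the entropy-coding branch of the survey: repeatedly merge the two least probable nodes so that the average code length approaches the message entropy. The example message is arbitrary:

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a Huffman code for the symbols in `message`.

    Heap items are (weight, tie-breaker, {symbol: code-so-far}); merging two
    items prefixes their codes with '0' and '1' respectively.
    """
    freq = Counter(message)
    if len(freq) == 1:  # degenerate single-symbol message
        return {next(iter(freq)): "0"}
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

print(huffman_code("ECG DATA COMPRESSION"))
```
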

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum The major part of the paper is devoted to all-pole models The model parameters are obtained by a least squares analysis in the time domain Two methods result, depending on whether the signal is assumed to be stationary or nonstationary The same results are then derived in the frequency domain The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra This also leads to a discussion of the advantages and disadvantages of the least squares error criterion A spectral interpretation is given to the normalized minimum prediction error Applications of the normalized error are given, including the determination of an "optimal" number of poles The use of linear prediction in data compression is reviewed For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients Finally, a brief introduction to pole-zero modeling is given

4,206 citations
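
A short sketch of the all-pole (autocorrelation-method) linear prediction described above, which underlies the DPCM-style schemes in the survey. For brevity it solves the Toeplitz normal equations directly with NumPy rather than using the Levinson-Durbin recursion; the model order is an illustrative choice:

```python
import numpy as np

def lpc(x, order=8):
    """Fit an all-pole predictor of x[n] from its `order` past values and
    return the coefficients together with the prediction residual (the part a
    DPCM-style coder would quantize and encode)."""
    x = np.asarray(x, dtype=float)
    # autocorrelation sequence r[0..order]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    # Toeplitz normal equations R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    pred = np.zeros_like(x)
    for n in range(order, len(x)):
        pred[n] = np.dot(a, x[n - order:n][::-1])  # a[0]*x[n-1] + ... + a[p-1]*x[n-p]
    return a, x - pred
```
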

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.

3,188 citations
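
The interval-narrowing idea behind arithmetic coding can be shown in a few lines. This toy encoder uses a fixed model and exact rational arithmetic purely for clarity; practical coders (as the entry above notes) use integer renormalization and adaptive models, so this is not the cited paper's implementation:

```python
from fractions import Fraction

def arithmetic_encode(message, probs):
    """Narrow the interval [low, high) by each symbol's probability sub-interval;
    any number in the final interval (plus the message length) identifies the
    message."""
    # cumulative interval boundaries per symbol, in model order
    cum, c = {}, Fraction(0)
    for sym, p in probs.items():
        cum[sym] = (c, c + p)
        c += p
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        span = high - low
        lo_s, hi_s = cum[sym]
        low, high = low + span * lo_s, low + span * hi_s
    return low, high

probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
print(arithmetic_encode("abca", probs))
```
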

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.

928 citations
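
Transform-domain compression, the second major group in the surveyed paper, reduces to keeping only the transform coefficients that carry most of the energy. A minimal sketch using the orthonormal DCT from SciPy (the transform choice and the 10% retention ratio are illustrative assumptions; a Karhunen-Loeve transform would instead use the eigenvectors of the signal covariance):

```python
import numpy as np
from scipy.fft import dct, idct

def dct_compress(x, keep=0.1):
    """Keep the largest `keep` fraction of orthonormal DCT coefficients,
    invert the transform, and report the rms reconstruction error."""
    X = dct(np.asarray(x, dtype=float), norm="ortho")
    k = max(1, int(keep * len(X)))
    idx = np.argsort(np.abs(X))[:-k]     # indices of the smallest coefficients
    X[idx] = 0.0                          # zero everything but the k largest
    x_rec = idct(X, norm="ortho")
    rms = np.sqrt(np.mean((x - x_rec) ** 2))
    return x_rec, rms
```
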