Journal Article

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
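
As a concrete taste of the direct, tolerance-comparison family surveyed here, below is a minimal sketch of the Turning Point algorithm, written from its published description: it achieves a fixed 2:1 reduction by keeping, from each incoming pair of samples, the one that preserves the local slope. This is an illustrative sketch, not code from the paper.

```python
import numpy as np

def turning_point(signal):
    """Turning Point (TP) compression: fixed 2:1 reduction.

    From each incoming pair (x1, x2), retain x1 if it is a local
    turning point (the slope changes sign there), otherwise x2.
    """
    signal = np.asarray(signal, dtype=float)
    out = [signal[0]]
    ref = signal[0]
    i = 1
    while i + 1 < len(signal):
        x1, x2 = signal[i], signal[i + 1]
        s1 = np.sign(x1 - ref)
        s2 = np.sign(x2 - x1)
        kept = x1 if s1 * s2 < 0 else x2  # sign change => turning point
        out.append(kept)
        ref = kept
        i += 2
    return np.array(out)
```

Reconstruction typically interpolates between the retained samples; since the time axis is halved, wave durations are slightly distorted, which is the price of the fixed ratio.
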
Citations
Proceedings Article
23 Sep 1990
TL;DR: Considering processing time, compression ratio, and the error introduced, the tested compression algorithms offer performance adequate for real-time data compression in new-generation Holter systems.
Abstract: After analyzing recent electrocardiogram (ECG) data compression techniques, three algorithms (TRIM, AZTEC-VT, SAPA-2) that can be implemented on a real-time ambulatory ECG recording system were tested. The most suitable methods for reconstructing the compressed signal were also investigated, with particular attention paid to the definition of the performance indexes, and the filtering power of the algorithms was assessed using an ECG signal synthesizer. The results show that while TRIM and SAPA-2 perform at about the same level, AZTEC-VT does not achieve similar performance. SAPA-2 is preferable for online processing because TRIM requires four input parameters to adapt the algorithm to the signal, while SAPA-2 requires only one. It is concluded that, considering processing time, compression ratio, and the error introduced, these compression algorithms offer performance adequate for real-time data compression in new-generation Holter systems.

9 citations
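
The performance indexes referred to above usually combine a size measure with a distortion measure. As a hedged illustration (the paper defines its own indexes, which may differ in detail), the compression ratio and the widely used percent RMS difference (PRD) can be computed as:

```python
import numpy as np

def compression_ratio(n_original_bits, n_compressed_bits):
    """Ratio of original to compressed data size."""
    return n_original_bits / n_compressed_bits

def prd(original, reconstructed):
    """Percent RMS difference between original and reconstructed ECG."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))
```
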

Journal Article
TL;DR: Results show that the technique is able to provide reasonable compression with low error between the original and reconstructed signals, and incorporation of the compression model into a telemedicine system has led to considerable saving in transmission time for patient data.

9 citations

Proceedings Article
09 May 1995
TL;DR: The paper presents an ECG data compression technique using multiscale peak analysis, defined as the wavelet maxima representation in which the basic wavelet is the second derivative of a symmetric smoothing function.
Abstract: The paper presents an ECG data compression technique using multiscale peak analysis. The authors define multiscale peak analysis as the wavelet maxima representation in which the basic wavelet is the second derivative of a symmetric smoothing function. The wavelet transform of an ECG shows maxima at the start, peak, and stop points of the five transient waves P through T. The number of wavelet maxima is expected to be smaller than the number of original data samples, yet the maxima can suffice to reconstruct the original signal precisely, so the wavelet maxima representation lends itself to ECG data compression and analysis. The compressed data still retain the peaks of the QRS waves, so searching for abnormal behavior remains feasible in practice. The compression results show that normal ECG data are compressed by a factor of 10.

9 citations
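
A minimal sketch of the idea, assuming the Mexican-hat wavelet as the "second derivative of a symmetric smoothing function" (here a Gaussian); the authors' exact wavelet and their reconstruction-from-maxima procedure are not reproduced:

```python
import numpy as np

def mexican_hat(t, s):
    """Second derivative of a Gaussian (up to sign), at scale s."""
    x = t / s
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

def wavelet_maxima(signal, scales):
    """Positions and values of local maxima of |W(s, t)| per scale."""
    signal = np.asarray(signal, dtype=float)
    maxima = {}
    for s in scales:                      # integer scales assumed
        t = np.arange(-4 * s, 4 * s + 1)  # kernel support ~8 sigma
        w = np.convolve(signal, mexican_hat(t, s), mode="same")
        dd = np.diff(np.sign(np.diff(np.abs(w))))
        idx = np.where(dd < 0)[0] + 1     # local maxima of |w|
        maxima[s] = [(int(i), float(w[i])) for i in idx]
    return maxima
```

Storing only these (position, value) pairs at a few scales is what yields the roughly 10:1 reduction reported above for normal ECGs.
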

Proceedings Article
26 Sep 1999
TL;DR: The multilead compression algorithm with wavelet packets achieves an average compression ratio of 30.5:1 on two-lead records from the MIT-BIH Arrhythmia database, in contrast to 21.4:1 for the single-lead algorithm at the same distortion.
Abstract: Presents the extension of transform coding compression methods to multilead ECG recordings in order to reduce the inter-lead correlation. Two orthogonal expansions are considered: the Karhunen-Loeve transform and a fast approximation of it based on wavelet packets, also known as the best-basis algorithm. The multilead compression algorithm with wavelet packets achieves an average compression ratio of 30.5:1 on two-lead records from the MIT-BIH Arrhythmia database, in contrast to 21.4:1 for the single-lead algorithm at the same distortion. Better results could be obtained on ECG recordings with more than two leads.

9 citations
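
The inter-lead decorrelation stage can be sketched as a plain Karhunen-Loeve transform across leads, i.e. a rotation onto the eigenvectors of the inter-lead covariance matrix; the wavelet-packet (best-basis) approximation and the coefficient coding that follow are omitted here:

```python
import numpy as np

def klt_decorrelate(leads):
    """leads: array of shape (n_leads, n_samples).

    Returns the decorrelated components and the orthonormal basis
    (eigenvectors of the inter-lead covariance, strongest first).
    Reconstruction: leads ~ basis @ components + per-lead means.
    """
    x = leads - leads.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    eigval, eigvec = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigval)[::-1]
    basis = eigvec[:, order]
    components = basis.T @ x               # decorrelated "leads"
    return components, basis
```

Most of the signal energy then concentrates in the first component, so the remaining components can be coded much more coarsely.
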

01 Jan 2009
TL;DR: In this article, an algorithm for ECG denoising and compression based on a sparse separable 2-dimensional transform for both complete and overcomplete dictionaries is studied.
Abstract: In this paper, an algorithm for ECG denoising and compression based on a sparse separable 2-dimensional transform for both complete and overcomplete dictionaries is studied. For the overcomplete dictionary, the combination of two complete dictionaries is used. The experimental results obtained by the algorithm for both complete and overcomplete transforms are compared to soft thresholding (for denoising) and the db9/7 wavelet (for compression). It is experimentally shown that the algorithm outperforms soft thresholding by about 4 dB or more, and also outperforms Extended Kalman Smoother filtering by about 2 dB at higher input SNRs. The idea is also studied for ECG compression; however, it does not achieve better compression ratios than wavelet compression.

9 citations
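
The soft-thresholding baseline the algorithm is compared against is simple to state; a sketch, applied to transform-domain coefficients with a user-chosen threshold thr:

```python
import numpy as np

def soft_threshold(coeffs, thr):
    """Shrink each coefficient toward zero by thr (soft thresholding);
    coefficients smaller than thr in magnitude are set to zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thr, 0.0)
```
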

References
Journal Article
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations
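
The limiting process described in the abstract is commonly summarized, in modern notation rather than Shannon's own, by the relation between the entropy of a quantized source and the differential entropy of the underlying density:

```latex
% Partition the line into cells of width \Delta; X_\Delta is the
% quantized source with P(x_i) \approx p(x_i)\,\Delta. Then
H(X_\Delta) = -\sum_i p(x_i)\,\Delta\,\log\bigl(p(x_i)\,\Delta\bigr)
            \approx \underbrace{-\int p(x)\log p(x)\,dx}_{h(X)} \;-\; \log\Delta .
% The -\log\Delta term diverges as the cells shrink: absolute
% entropies do not survive the limit, while entropy differences
% (rates, mutual information) do -- one of the "new effects" of the
% continuous case noted above.
```
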

Journal Article
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.

5,221 citations
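
The construction behind this minimum-redundancy code is the familiar merge of the two smallest weights. A compact sketch in Python (a standard heap-based formulation, obviously not the paper's original presentation; symbols are assumed hashable and non-tuple):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a minimum-redundancy (Huffman) code for a symbol stream."""
    freq = Counter(symbols)
    # Heap entries: (weight, tiebreak, tree); tree is a symbol (leaf)
    # or a pair of subtrees (internal node).
    heap = [(w, i, s) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                    # degenerate one-symbol case
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # two least-frequent trees
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    code = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            code[tree] = prefix           # leaf: assign its codeword
    walk(heap[0][2], "")
    return code
```

For example, huffman_code("abracadabra") assigns the shortest codeword to the most frequent symbol 'a'.
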

Journal Article
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.

4,206 citations
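
A sketch of the stationary (autocorrelation-method) case: the Toeplitz normal equations are solved directly for clarity where the Levinson-Durbin recursion would be used in practice, and the residual is what a DPCM-style coder would quantize and transmit:

```python
import numpy as np

def lpc(signal, order):
    """All-pole LPC coefficients via the autocorrelation method:
    solve the Toeplitz normal equations R a = r for the predictor a,
    so that x[t] is approximated by sum_k a[k] * x[t-1-k]."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:])

def prediction_error(signal, a):
    """Residual e[t] = x[t] - prediction; small when the model fits."""
    x = np.asarray(signal, dtype=float)
    p = len(a)
    e = x.copy()
    for t in range(p, len(x)):
        e[t] = x[t] - np.dot(a, x[t - p:t][::-1])
    return e
```
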

Journal Article
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.

3,188 citations
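
A deliberately toy encoder conveys the interval-narrowing idea with a static model; real coders use integer arithmetic with renormalization and adaptive models, exactly the regime where the article notes arithmetic coding beats Huffman:

```python
def arithmetic_encode(symbols, probs):
    """Toy arithmetic encoder: static model, plain floats.

    probs: dict symbol -> probability (summing to ~1). Returns one
    number inside the final interval, which identifies the message
    given its length and the model. Floating-point precision limits
    this to short messages; production coders renormalize integers.
    """
    cum, c = {}, 0.0
    for s, p in probs.items():
        cum[s] = (c, c + p)        # cumulative sub-interval for s
        c += p
    low, high = 0.0, 1.0
    for s in symbols:
        lo, hi = cum[s]
        width = high - low          # narrow [low, high) to s's slice
        low, high = low + width * lo, low + width * hi
    return (low + high) / 2         # any value in [low, high) works
```
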

Proceedings Article
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.

928 citations
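
To make the data-compression criterion concrete, here is a sketch using the orthonormal DCT-II, one of the transforms reviewed: transform, keep only the largest-magnitude coefficients, and invert. Energy compaction means that few kept coefficients give small error on smooth signals:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n-by-n matrix."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * t + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)     # DC row gets its own scaling
    return C

def transform_compress(x, keep):
    """Zero all but the `keep` largest-magnitude DCT coefficients,
    then reconstruct; returns the lossy approximation of x."""
    x = np.asarray(x, dtype=float)
    C = dct_matrix(len(x))
    y = C @ x
    y[np.argsort(np.abs(y))[::-1][keep:]] = 0.0
    return C.T @ y   # orthonormal, so the inverse is the transpose
```
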