Journal Article•DOI•

A novel compression algorithm for electrocardiogram signals based on the linear prediction of the wavelet coefficients

01 Oct 2003-Digital Signal Processing (Academic Press)-Vol. 13, Iss: 4, pp 604-622
TL;DR: A new algorithm for electrocardiogram (ECG) compression based on the compression of the linearly predicted residuals of the wavelet coefficients of the signal, which reduces the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level.
About: This article is published in Digital Signal Processing. The article was published on 2003-10-01. It has received 97 citations to date. The article focuses on the topics: Wavelet transform & Stationary wavelet transform.
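The TL;DR above describes the core idea: decompose the ECG with a wavelet transform, linearly predict each coefficient from its predecessors, and code only the (much smaller) prediction residuals. Below is a minimal sketch of that idea, assuming Python with numpy and PyWavelets; the wavelet choice, predictor order, and quantization step are illustrative assumptions, not the authors' actual settings.

    # Hedged sketch of wavelet-domain linear prediction for ECG compression;
    # not the paper's implementation, just the general pipeline it describes.
    import numpy as np
    import pywt

    def lp_residuals(coeffs, order=4):
        """Least-squares prediction of each coefficient from the previous `order`
        coefficients; returns the residuals (first `order` samples passed through)."""
        x = np.asarray(coeffs, dtype=float)
        if len(x) <= order:
            return x.copy()
        # Regression matrix whose columns are the past `order` coefficients.
        X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
        y = x[order:]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.concatenate([x[:order], y - X @ w])

    fs = 360
    ecg = np.sin(2 * np.pi * 1.2 * np.arange(4 * fs) / fs)    # stand-in for a real ECG segment
    bands = pywt.wavedec(ecg, "bior4.4", level=5)              # wavelet decomposition
    residuals = [lp_residuals(b) for b in bands]               # linear prediction per subband
    quantized = [np.round(r / 0.05).astype(np.int16) for r in residuals]
    # `quantized` would then be entropy coded; small residuals need few bits.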
Citations
Proceedings Article•DOI•
24 Mar 2016
TL;DR: An efficient electrocardiogram (ECG) data compression and transmission algorithm based on discrete wavelet transform and run length encoding that provides comparatively high compression ratio and low percent root-mean-square difference values is presented.
Abstract: This paper presents an efficient electrocardiogram (ECG) data compression and transmission algorithm based on the discrete wavelet transform and run-length encoding. The proposed algorithm provides a comparatively high compression ratio and low percent root-mean-square difference values. 48 records of ECG signals taken from the MIT-BIH arrhythmia database are used for performance evaluation of the proposed algorithm. Each record is one minute long and sampled at 360 Hz with 11-bit resolution. The discrete wavelet transform is used as a linear orthogonal transformation of the original signal; it allows the signal to be analyzed in both the time and frequency domains and preserves the local features of the signal very well. After thresholding and quantization of the wavelet transform coefficients, the signals are encoded using run-length encoding, which improves compression significantly. The proposed algorithm offers average values of compression ratio, percentage root mean square difference, normalized percentage root mean square difference, quality score and signal-to-noise ratio of 44.0, 0.36, 5.87, 143, 3.53 and 59.52, respectively, over the 48 records of ECG data.
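As a rough illustration of the thresholding and run-length encoding steps described above (standard operations with made-up parameter values, not the authors' exact settings), one might do something like the following in Python with numpy and PyWavelets:

    # Illustrative sketch: threshold DWT coefficients, quantize, run-length encode.
    import numpy as np
    import pywt

    def run_length_encode(symbols):
        """Encode a 1-D integer sequence as (value, run length) pairs."""
        out, i = [], 0
        while i < len(symbols):
            j = i
            while j + 1 < len(symbols) and symbols[j + 1] == symbols[i]:
                j += 1
            out.append((int(symbols[i]), j - i + 1))
            i = j + 1
        return out

    fs = 360                                    # MIT-BIH sampling rate used in the paper
    x = np.sin(2 * np.pi * 1.0 * np.arange(fs * 60) / fs)   # stand-in for a 1-minute record
    coeffs = pywt.wavedec(x, "db4", level=6)
    flat = np.concatenate(coeffs)
    flat[np.abs(flat) < 0.1] = 0.0              # hard threshold (value is illustrative)
    q = np.round(flat / 0.05).astype(int)       # uniform quantization
    rle = run_length_encode(q)
    print(len(q), "coefficients ->", len(rle), "RLE pairs")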

20 citations

Journal Article•DOI•
TL;DR: In this paper, the authors proposed a novel ECG data compression technique based on the tunable Q-wavelet transform which provides adjustable parameters to achieve good compression performance, and they examined the effect of the proposed compression technique on cardiac arrhythmia classification.

18 citations

Journal Article•DOI•
TL;DR: This paper presents a hybrid technique for the compression of ECG signals based on DWT and exploiting the correlation between signal samples, which possesses higher compression ratios and lower PRD compared to the other wavelet transformation techniques.
Abstract: This paper presents a hybrid technique for the compression of ECG signals based on DWT and exploiting the correlation between signal samples. It incorporates Discrete Wavelet Transform (DWT), Differential Pulse Code Modulation (DPCM), and run-length coding techniques for the compression of different parts of the signal; lossless compression is adopted in clinically relevant parts and lossy compression is used in those parts that are not clinically relevant. The proposed compression algorithm begins by segmenting the ECG signal into its main components (P-waves, QRS-complexes, T-waves, U-waves and the isoelectric waves). The resulting waves are grouped into Region of Interest (RoI) and Non Region of Interest (NonRoI) parts. Consequently, lossless and lossy compression schemes are applied to the RoI and NonRoI parts, respectively. Ideally we would like to compress the signal losslessly, but in many applications this is not an option. Thus, given a fixed bit budget, it makes sense to spend more bits to represent those parts of the signal that belong to a specific RoI and, thus, reconstruct them with higher fidelity, while allowing other parts to suffer larger distortion. For this purpose, the correlation between the successive samples of the RoI part is utilized by adopting a DPCM approach. The NonRoI part, however, is compressed using DWT, thresholding and coding techniques. The wavelet transformation is used for concentrating the signal energy into a small number of transform coefficients. Compression is then achieved by selecting a subset of the most relevant coefficients which are afterwards efficiently coded. Illustrative examples are given to demonstrate thresholding based on an energy packing efficiency strategy, coding of DWT coefficients and data packetizing. The performance of the proposed algorithm is tested in terms of the compression ratio and the PRD distortion metrics for the compression of 10 seconds of data extracted from records 100 and 117 of the MIT-BIH database. The obtained results revealed that the proposed technique possesses higher compression ratios and lower PRD compared to the other wavelet transformation techniques. The principal advantages of the proposed approach are: 1) the deployment of different compression schemes to compress different ECG parts to reduce the correlation between consecutive signal samples; and 2) achieving high compression ratios with acceptable reconstruction signal quality compared to the recently published results.
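For the RoI branch, the correlation between successive samples is exploited with DPCM. A minimal sketch of a first-order DPCM encoder/decoder follows; the actual predictor order and entropy coder used in the paper may differ.

    # Hedged sketch of the DPCM step for a Region-of-Interest segment
    # (first-order predictor assumed for illustration).
    import numpy as np

    def dpcm_encode(x):
        """First-order DPCM: transmit the first sample plus successive differences."""
        x = np.asarray(x, dtype=int)
        return x[0], np.diff(x)

    def dpcm_decode(first, diffs):
        return np.concatenate([[first], first + np.cumsum(diffs)])

    roi = np.array([1012, 1018, 1031, 1060, 1102, 1081, 1040, 1015])  # toy QRS samples
    first, diffs = dpcm_encode(roi)
    assert np.array_equal(dpcm_decode(first, diffs), roi)   # lossless for the RoI part
    print("differences to code:", diffs)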

17 citations


Additional excerpts

  • ...In this section the result of several experiments of the proposed method are compared with other ECG compression algorithms already realized [12-21]....


Journal Article•DOI•
TL;DR: An effective sample entropy (SampEn) based complexity-sorting pre-processing technique for two-dimensional electrocardiogram (ECG) data compression that demonstrates significantly better performance than state-of-the-art works in the literature.
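Sample entropy itself is a standard complexity measure, SampEn(m, r) = -ln(A/B), where B and A count template matches of length m and m+1 within a tolerance r. A rough Python sketch follows; the parameter values are common defaults, not necessarily those of the cited work.

    # Rough sketch of sample entropy, used by the cited work to sort ECG beats
    # by complexity before 2-D compression; simplified, assumed parameters.
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """SampEn = -ln(A/B): A, B = matches of length m+1, m (self-matches excluded)."""
        x = np.asarray(x, dtype=float)
        r *= np.std(x)
        def count_matches(length):
            templates = np.array([x[i:i + length] for i in range(len(x) - length)])
            count = 0
            for i in range(len(templates)):
                dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
                count += np.sum(dist <= r) - 1                            # exclude self-match
            return count
        B, A = count_matches(m), count_matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    rng = np.random.default_rng(0)
    print(sample_entropy(np.sin(np.arange(500) * 0.1)))        # low complexity
    print(sample_entropy(rng.standard_normal(500)))            # higher complexity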

15 citations

Journal Article•DOI•
TL;DR: The simulation results included in this paper clearly show the improved efficacy and performance of the proposed Beta wavelet filters for biomedical signal processing.
Abstract: In this paper, a wavelet-based methodology is presented for compression of the electrocardiogram (ECG) signal. The methodology employs new wavelet filters whose coefficients are derived from the beta function and its derivatives. A comparative study of the performance of different existing wavelet filters and the Beta wavelet filters is made in terms of compression ratio (CR), percent root mean square difference (PRD), mean square error (MSE) and signal-to-noise ratio (SNR). The Beta wavelet filters give a better compression ratio and also yield good fidelity parameters compared to the other wavelet filters. The simulation results included in this paper clearly show the improved efficacy and performance of the approach for biomedical signal processing.
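The fidelity measures named above (CR, PRD, MSE, SNR) have standard definitions, sketched below in Python with numpy; this is a generic helper, not code from the paper, and the bit counts are illustrative.

    # Standard compression/fidelity metrics for an original and a reconstructed signal.
    import numpy as np

    def compression_metrics(original, reconstructed, original_bits, compressed_bits):
        x, y = np.asarray(original, float), np.asarray(reconstructed, float)
        err = x - y
        cr = original_bits / compressed_bits                         # compression ratio
        prd = 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2))     # percent RMS difference
        mse = np.mean(err ** 2)                                      # mean square error
        snr = 10.0 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))     # signal-to-noise ratio, dB
        return cr, prd, mse, snr

    x = np.sin(np.linspace(0, 8 * np.pi, 1000))
    y = x + 0.01 * np.random.default_rng(1).standard_normal(1000)    # mock reconstruction
    print(compression_metrics(x, y, original_bits=11000, compressed_bits=1000))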

15 citations

References
Journal Article•DOI•
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
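For the stationary (autocorrelation) case described above, the all-pole model parameters can be obtained with the Levinson-Durbin recursion. Below is a textbook-style sketch in Python (not code from the paper); it returns the error-filter coefficients [1, a1, ..., ap] and the residual energy.

    # Autocorrelation method of linear prediction solved by Levinson-Durbin.
    import numpy as np

    def lpc(x, order):
        """Return the error-filter coefficients [1, a1, ..., ap] and the residual energy."""
        x = np.asarray(x, dtype=float)
        r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])  # autocorrelation
        a = np.zeros(order + 1)
        a[0] = 1.0
        e = r[0]
        for i in range(1, order + 1):
            k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / e   # reflection (PARCOR) coefficient
            a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]        # update predictor coefficients
            e *= (1.0 - k * k)                                 # updated prediction error energy
        return a, e

    signal = np.sin(0.3 * np.arange(256)) + 0.05 * np.random.default_rng(2).standard_normal(256)
    coeffs, err = lpc(signal, order=4)
    print("predictor:", coeffs, "residual energy:", err)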

4,206 citations

Book•
05 Sep 1978
TL;DR: This book covers the fundamentals of digital speech processing, including digital models for the speech signal, time-domain and short-time Fourier analysis, homomorphic processing, linear predictive coding of speech, and digital speech processing for man-machine communication by voice.
Abstract: 1. Introduction. 2. Fundamentals of Digital Speech Processing. 3. Digital Models for the Speech Signal. 4. Time-Domain Models for Speech Processing. 5. Digital Representation of the Speech Waveform. 6. Short-Time Fourier Analysis. 7. Homomorphic Speech Processing. 8. Linear Predictive Coding of Speech. 9. Digital Speech Processing for Man-Machine Communication by Voice.

3,103 citations

Journal Article•DOI•
TL;DR: The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations.
Abstract: The wavelet transform is compared with the more classical short-time Fourier transform approach to signal analysis. Then the relations between wavelets, filter banks, and multiresolution signal processing are explored. A brief review is given of perfect reconstruction filter banks, which can be used both for computing the discrete wavelet transform, and for deriving continuous wavelet bases, provided that the filters meet a constraint known as regularity. Given a low-pass filter, necessary and sufficient conditions for the existence of a complementary high-pass filter that will permit perfect reconstruction are derived. The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations. An alternative approach based on the theory of continued fractions is also given. These results are used to design highly regular filter banks, which generate biorthogonal continuous wavelet bases with symmetries.
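Perfect reconstruction is easy to check numerically. A quick sketch using PyWavelets (assumed available; the specific biorthogonal wavelet is just an example):

    # One-level analysis/synthesis filter bank round trip with a biorthogonal wavelet.
    import numpy as np
    import pywt

    x = np.random.default_rng(3).standard_normal(1024)
    w = pywt.Wavelet("bior2.2")                 # symmetric biorthogonal filters
    approx, detail = pywt.dwt(x, w)             # analysis filter bank
    x_hat = pywt.idwt(approx, detail, w)        # synthesis filter bank
    print("max reconstruction error:", np.max(np.abs(x - x_hat[:len(x)])))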

1,804 citations

Journal Article•DOI•
TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
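As one concrete example of the direct data-compression family surveyed here, the Turning-Point method achieves a fixed 2:1 reduction by keeping, from each incoming pair of samples, the one that preserves a change of slope. A simplified sketch of this rule (an interpretation for illustration, not code from the survey):

    # Simplified Turning-Point style 2:1 direct data compression.
    import numpy as np

    def turning_point(x):
        x = np.asarray(x, dtype=float)
        out = [x[0]]
        x0 = x[0]
        for i in range(1, len(x) - 1, 2):
            x1, x2 = x[i], x[i + 1]
            s1, s2 = np.sign(x1 - x0), np.sign(x2 - x1)
            keep = x1 if s1 * s2 < 0 else x2    # keep the turning point, else the newer sample
            out.append(keep)
            x0 = keep
        return np.array(out)

    sig = np.sin(np.linspace(0, 4 * np.pi, 200))
    print(len(sig), "->", len(turning_point(sig)), "samples (about 2:1)")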

690 citations

Journal Article•DOI•
TL;DR: Pilot data from a blind evaluation of compressed ECG's by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECG's by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECG's are clinically useful.

445 citations