Journal ArticleDOI

A novel compression algorithm for electrocardiogram signals based on the linear prediction of the wavelet coefficients

TL;DR: A new algorithm for electrocardiogram (ECG) compression compresses the linearly predicted residuals of the wavelet coefficients of the signal, reducing the bit rate while keeping the reconstructed-signal distortion at a clinically acceptable level.
About: This article is published in Digital Signal Processing. The article was published on 2003-10-01. It has received 97 citations to date. The article focuses on the topics: Wavelet transform & Stationary wavelet transform.
Citations
Dissertation
29 Apr 2008
TL;DR: The proposed algorithm provides improved performance in terms of computational efficiency and compression rate, preserving the clinically significant features in the reconstructed ECG signal, and yields good results in comparison with other wavelet-transform-based compression methods described in the literature.
Abstract: The electrocardiogram (ECG), one of the most vital medical signals, represents the electrical activity of the heart. The ECG is a well-established diagnostic tool for cardiac abnormality. For the goal of efficient and convenient processing of ECG, automated and computerized ECG processing has become a major topic of research in the field of biomedical engineering. Modern clinical systems require the storage and transmission of large amounts of ECG signals, so efficient data compression is needed to reduce the amount of data. In ECG signal compression algorithms, the aim is to reach the maximum compression ratio while keeping the relevant diagnostic information in the reconstructed signal. Wavelets have recently emerged as a powerful tool for signal compression. In this work, an ECG compression algorithm is presented which is based on the energy compaction property of the wavelet coefficients. The wavelet transform decomposes the signal into multi-resolution bands. The lowest-resolution band (the approximation band) is the smallest band in size and contains high-amplitude approximation coefficients; the remaining coefficients, the detail coefficients, have small magnitudes. Most of the energy is captured by the approximation coefficients of the lowest-resolution band. In this work, we develop three threshold selection rules based on the energy compaction property of the wavelet coefficients. All the rules are applied to lead II of different records of the MIT-BIH Arrhythmia Database. Among the three rules, the one which offers a higher compression ratio (CR) with a lower percent root mean square difference (PRD) than the other two is selected. A set of 30 records is taken as test data from the MIT-BIH Arrhythmia Database for testing the compression algorithm with the best rule. A compression ratio of 15.12:1 is achieved with very good reconstructed signal quality (PRD = 2.33%).
The algorithm provides improved performance in terms of computational efficiency and compression rate while the clinically significant features in the reconstructed ECG signal are preserved. The proposed method yields good results in comparison with other wavelet transform based compression methods described in the literature.
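The energy-compaction idea above can be sketched in a few lines. The following is only an illustration, not the dissertation's actual rules: it uses a single-level Haar transform (implemented directly in NumPy) and one hypothetical threshold rule that keeps the largest detail coefficients until a chosen fraction of the total energy is retained:

```python
import numpy as np

def haar_dwt(x):
    # Single-level Haar transform (even-length input): approximation and detail bands.
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    # Exact inverse of haar_dwt.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def compress(x, energy_fraction=0.999):
    # Keep the approximation band whole; zero out the smallest detail
    # coefficients until the retained coefficients hold `energy_fraction`
    # of the total energy (one possible energy-compaction rule).
    a, d = haar_dwt(x)
    order = np.argsort(np.abs(d))[::-1]               # details, largest first
    energy = np.cumsum(d[order] ** 2) + np.sum(a ** 2)
    total = np.sum(a ** 2) + np.sum(d ** 2)
    keep = np.searchsorted(energy, energy_fraction * total) + 1
    d_q = np.zeros_like(d)
    d_q[order[:keep]] = d[order[:keep]]
    return a, d_q

x = np.sin(2 * np.pi * np.arange(512) / 64)           # stand-in for an ECG lead
a, d_q = compress(x)
x_rec = haar_idwt(a, d_q)
prd = 100 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```

Deeper decompositions and the record-specific rules of the thesis would change the numbers, but the keep-the-largest-coefficients mechanism is the same.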

1 citation

DissertationDOI
25 Feb 2016
TL;DR: This dissertation presents an ECG signal compression algorithm, using wavelet transforms, and proposing a novel quantization process, not found in the literature, that generates better compression results when compared to the majority of state-of-the-art methods.
Abstract: With the increasing development of biomedical devices technology, there is more access to bioelectrical signals. This enables great advances in diagnosis, treatment planning and patient monitoring. In particular, the electrocardiogram (ECG) has been used for many purposes. Besides that, simple and low-cost ways to acquire the ECG have been found. Nevertheless, those advances require the improvement of the ECG signal coding processes, in a way that allows efficient storage and transmission in terms of memory requirements and energy consumption. In this context, this dissertation proposes two contributions. Firstly, it presents an ECG signal compression algorithm, using wavelet transforms, and proposing a novel quantization process not found in the literature. In said process, the transformation is done using the discrete wavelet transform (DWT) and the quantization consists of a non-linear re-ordering of the transformed coefficient magnitudes (gamma correction) in tandem with a sub-band quantization. The second contribution consists of a systematic study of the performance of the different wavelet families through the results obtained by the proposed algorithm, also calculating the optimum quantization parameters for each wavelet family. For the analysis of these methods, tests were done evaluating the performance of the proposed algorithm, comparing its results with other methods presented in the literature. In said tests, signals from the Massachusetts Institute of Technology and Boston’s Beth Israel Hospital database (MIT-BIH) were used as reference. A part of the database was utilized to optimize the parameters of each wavelet family, and the final performance was evaluated with the remaining signals from the database.
Specifically, for signal 117 of the MIT-BIH database, which is the signal most used to compare results in the literature, the proposed method led to a compression factor (CR) of 11.40 and a percentage root-mean-square difference (PRD) of 1.38%. It was demonstrated that the algorithm generates better compression results than the majority of state-of-the-art methods. The simplicity of the algorithm’s implementation also stands out in relation to other algorithms found in the literature.
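The dissertation does not spell out its quantizer here, so the sketch below only illustrates the general idea of a non-linear (gamma) re-ordering of coefficient magnitudes followed by a uniform quantizer; the function names and the choices gamma = 0.5 and 256 levels are assumptions for illustration, not the thesis parameters:

```python
import numpy as np

def gamma_quantize(coeffs, gamma=0.5, levels=256):
    # Power-law ("gamma") companding compresses the dynamic range of the
    # coefficient magnitudes before a uniform quantizer, so small detail
    # coefficients get finer quantization steps than large ones.
    c = np.asarray(coeffs, dtype=float)
    peak = np.max(np.abs(c))
    if peak == 0:
        return np.zeros_like(c, dtype=int), peak
    companded = np.sign(c) * (np.abs(c) / peak) ** gamma
    q = np.round(companded * (levels // 2)).astype(int)
    return q, peak

def gamma_dequantize(q, peak, gamma=0.5, levels=256):
    # Inverse mapping: undo the uniform step, then the power law.
    expanded = np.sign(q) * (np.abs(q) / (levels // 2)) ** (1.0 / gamma)
    return expanded * peak

coeffs = np.array([0.01, -0.3, 2.5, 0.0, -0.07])   # toy wavelet coefficients
q, peak = gamma_quantize(coeffs)
rec = gamma_dequantize(q, peak)
```

Note how the small coefficients (0.01, -0.07) survive with small relative error, which is the point of companding before quantization.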

1 citation


Cites background from "A novel compression algorithm for e..."

  • ...The Al-shrouf [51] algorithm is similar to that of Swarnkar [48]....

    [...]

  • ...Algorithm                  PRD%  CR     FM
       JPEG 2000 [57]             1.03  10.00  9.71
       Sec&SPIHT [55]             1.01  8.00   7.92
       SPIHT&VQ [58]              1.45  8.00   5.52
       SPIHT [56]                 1.18  8.00   6.78
       Hilton [50]                2.60  8.00   3.08
       WT&Huffman [51]            5.30  11.60  2.18
       Agulhari [46]              4.00  8.36   2.09
       Chen [23]                  1.08  12.00  11.11
       Tohumoglu and Sezgin [52]  5.83  14.90  2.56
       Boukhennoufa [53]          2.43  14.30  5.88
       Hossain and Amin [54]      2.50  15.10  6.04
       Abo-Zahhad [71]            2.80  15.60  5.57
       Proposed method            1.38  11.40  8.26...

    [...]

  • ...Summarizing what was presented in this section: the optimum quantization parameters λ1 and λ2 were first computed for each wavelet family, using 7 signals from the database....

    [...]

01 Jan 2012
TL;DR: Simulation results show that the proposed hybrid two-stage electrocardiogram signal compression method compares favourably with various state-of-the-art ECG compressors and provides a low bit rate and high quality of the reconstructed signal.
Abstract: A new hybrid two-stage electrocardiogram (ECG) signal compression method based on the modified discrete cosine transform (MDCT) and discrete wavelet transform (DWT) is proposed. The ECG signal is partitioned into blocks and the MDCT is applied to each block to decorrelate the spectral information. Then, the DWT is applied to the resulting MDCT coefficients. Removing spectral redundancy is achieved by compressing the subordinate components more than the dominant components. The resulting wavelet coefficients are then thresholded and compressed using an energy-packing and binary significant-map coding technique for storage space saving. Experiments on ECG records from the MIT-BIH database are performed with various combinations of MDCT and wavelet filters at different transformation levels and quantization intervals. The decompressed signals are evaluated using percentage rms error (PRD) and zero-mean rms error (PRD1) measures. The results showed that the proposed method provides a low bit rate and high quality of the reconstructed signal. It offers an average compression ratio (CR) of 21.5 and PRD of 5.89%, which would be suitable for most monitoring and diagnosis applications. Simulation results show that the proposed method compares favourably with various state-of-the-art ECG compressors.
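The two error measures used above are standard and easy to state in code. A small NumPy sketch of PRD, its zero-mean variant PRD1, and the compression ratio, on illustrative data:

```python
import numpy as np

def prd(x, x_rec):
    # Percentage rms difference, normalized by the raw signal energy.
    return 100 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def prd1(x, x_rec):
    # Zero-mean variant: removes the baseline offset from the denominator,
    # which otherwise makes PRD look artificially good on offset signals.
    return 100 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum((x - np.mean(x)) ** 2))

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

x = np.array([10.0, 10.1, 9.9, 10.0])   # toy signal riding on a DC baseline
x_rec = x + 0.05                        # toy reconstruction error
# PRD looks tiny because of the DC offset; PRD1 exposes the real error.
```

This is why papers that report PRD on baseline-shifted MIT-BIH records are hard to compare directly with ones reporting PRD1.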

1 citation

Journal ArticleDOI
TL;DR: A motif discovery mechanism is used for the analysis of the ECG signal along with an efficient particle swarm optimization algorithm; the proposed method is based on variable-length motifs and detects repetitive patterns in the input ECG signal.
Abstract: The motif discovery mechanism is used for the analysis of the ECG signal along with an efficient particle swarm optimization algorithm. In this method, motif detection in ECG signal sequences is presented. The proposed method is based on variable-length motifs and detects repetitive patterns in the input ECG signal. The method is designed to analyse the signal data through an in-depth, vigilant analytical process. Optimization of the signals is performed by the Particle Swarm Optimization (PSO) technique to improve signal quality. The data set used in this work is collected from the University of California, Riverside (UCR). The performance of the proposed method is measured using parameters such as execution time, precision, recall and F1-measure, and is compared with existing methods in terms of accuracy and execution time; the proposed method improves on both. Keywords: motif detection, particle swarm optimization, ECG signal analysis, execution time.
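The abstract does not give PSO details, so the sketch below is a generic global-best PSO run on a toy objective, not the paper's actual fitness function; the parameter values (inertia 0.7, cognitive/social pulls 1.5) are conventional defaults, not taken from the paper:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    # Standard global-best PSO: each particle is pulled toward its own best
    # position (cognitive term) and the swarm's best position (social term).
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive pull, social pull
    for _ in range(iters):
        r1 = rng.random(pos.shape)
        r2 = rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, np.min(pbest_val)

# Toy objective standing in for a motif-quality cost.
best, best_val = pso(lambda p: np.sum(p ** 2), dim=3)
```

In the paper's setting, the objective would score candidate motifs rather than a synthetic function.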
Proceedings ArticleDOI
02 Oct 2009
TL;DR: Foveation enables the definition of a proper mask that modulates the coefficients given by the Discrete Wavelet Transform of an ECG record; combined with the SPIHT method, it provides high compression ratios at low reconstruction errors.
Abstract: Foveation enables the definition of a proper mask that modulates the coefficients given by the Discrete Wavelet Transform of an ECG record. The mask is spatially selective and provides maximum accuracy around specific regions of interest. Subsequent denoising and coefficient quantization are further combined with the SPIHT method in order to provide high compression ratios at low reconstruction errors. Experimental results reported on a number of MIT-BIH records show improved performance over existing solutions.
References
Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
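As a small illustration of the least-squares time-domain analysis the paper describes, here is a sketch of the covariance (non-windowed least-squares) method for fitting an all-pole predictor; the function name is hypothetical:

```python
import numpy as np

def lpc_covariance(x, order):
    # Covariance (least-squares) method: fit x[n] ≈ sum_k a[k] * x[n-1-k]
    # over the interior samples, with no windowing of the data.
    x = np.asarray(x, dtype=float)
    A = np.column_stack([x[order - 1 - k : len(x) - 1 - k] for k in range(order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# A sampled sinusoid obeys x[n] = 2*cos(w)*x[n-1] - x[n-2] exactly,
# so a 2nd-order predictor recovers these coefficients to machine precision.
w = 0.3
x = np.cos(w * np.arange(200))
a = lpc_covariance(x, 2)
residual = x[2:] - (a[0] * x[1:-1] + a[1] * x[:-2])
```

The autocorrelation (windowed, stationary) method the paper also derives would add a small taper bias to these estimates but guarantees a stable all-pole model.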

4,206 citations

Book
05 Sep 1978
TL;DR: A foundational textbook on digital speech processing, covering digital models of the speech signal, time-domain and short-time Fourier analysis, homomorphic processing, linear predictive coding, and digital speech processing for man-machine communication by voice.
Abstract: 1. Introduction. 2. Fundamentals of Digital Speech Processing. 3. Digital Models for the Speech Signal. 4. Time-Domain Models for Speech Processing. 5. Digital Representation of the Speech Waveform. 6. Short-Time Fourier Analysis. 7. Homomorphic Speech Processing. 8. Linear Predictive Coding of Speech. 9. Digital Speech Processing for Man-Machine Communication by Voice.

3,103 citations

Journal ArticleDOI
TL;DR: The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations.
Abstract: The wavelet transform is compared with the more classical short-time Fourier transform approach to signal analysis. Then the relations between wavelets, filter banks, and multiresolution signal processing are explored. A brief review is given of perfect reconstruction filter banks, which can be used both for computing the discrete wavelet transform, and for deriving continuous wavelet bases, provided that the filters meet a constraint known as regularity. Given a low-pass filter, necessary and sufficient conditions for the existence of a complementary high-pass filter that will permit perfect reconstruction are derived. The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations. An alternative approach based on the theory of continued fractions is also given. These results are used to design highly regular filter banks, which generate biorthogonal continuous wavelet bases with symmetries.
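The perfect-reconstruction conditions can be checked mechanically on a concrete filter pair. A sketch using the Haar filters (a standard textbook example, not one of the paper's designed banks): the distortion term H0(z)G0(z) + H1(z)G1(z) must reduce to a pure delay, and the alias term H0(-z)G0(z) + H1(-z)G1(z) must vanish identically:

```python
import numpy as np

s = np.sqrt(2.0)
h0 = np.array([1, 1]) / s      # analysis low-pass (Haar), coeffs of z^0, z^-1
h1 = np.array([1, -1]) / s     # analysis high-pass
g0 = np.array([1, 1]) / s      # synthesis low-pass  =  H1(-z)
g1 = np.array([-1, 1]) / s     # synthesis high-pass = -H0(-z)

def alternate_signs(h):
    # Implements H(-z): negate the odd-power coefficients.
    return h * (-1.0) ** np.arange(len(h))

# Distortion term T(z) = H0(z)G0(z) + H1(z)G1(z): must be a pure delay.
T = np.polymul(h0, g0) + np.polymul(h1, g1)
# Alias term A(z) = H0(-z)G0(z) + H1(-z)G1(z): must be identically zero.
A = np.polymul(alternate_signs(h0), g0) + np.polymul(alternate_signs(h1), g1)
```

For the Haar pair, T works out to 2z^-1 (gain 2 and one sample of delay) and A to zero, which is exactly the perfect-reconstruction condition the paper poses as a Bezout identity.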

1,804 citations

Journal ArticleDOI
TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
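As an example of the direct methods surveyed, here is a minimal sketch of the Turning-point algorithm (2:1 compression): from each pair of incoming samples, keep the one that preserves the local slope change relative to the last retained sample. This is one common statement of the rule, not the survey's exact pseudocode:

```python
import numpy as np

def turning_point(x):
    # 2:1 direct compression. For each incoming pair, compare the slope from
    # the last retained sample with the slope inside the pair; a sign change
    # marks a turning point, so the earlier sample must be kept.
    out = [x[0]]
    ref = x[0]
    for i in range(1, len(x) - 1, 2):     # a trailing odd sample is dropped
        s1 = x[i] - ref
        s2 = x[i + 1] - x[i]
        keep = x[i] if s1 * s2 < 0 else x[i + 1]
        out.append(keep)
        ref = keep
    return np.array(out)

x = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 1.0, 4.0, 5.0, 3.0, 2.0])
y = turning_point(x)   # halves the sample count, keeping the peaks
```

On this toy input the local peak at 5.0 survives compression, which is the property that makes the method usable on QRS complexes.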

690 citations

Journal ArticleDOI
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.

445 citations