Journal Article•DOI•

A novel compression algorithm for electrocardiogram signals based on the linear prediction of the wavelet coefficients

01 Oct 2003-Digital Signal Processing (Academic Press)-Vol. 13, Iss: 4, pp 604-622
TL;DR: A new algorithm for electrocardiogram (ECG) compression based on compressing the linearly predicted residuals of the signal's wavelet coefficients, which reduces the bit rate while keeping the reconstructed-signal distortion at a clinically acceptable level.
About: This article was published in Digital Signal Processing on 2003-10-01 and has received 97 citations to date. It focuses on the topics: wavelet transform and stationary wavelet transform.
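
The abstract does not spell out implementation details here, but the general recipe it names (wavelet transform, then linear prediction of the coefficients, then coding of the residuals) can be sketched as follows. This is a minimal illustration, not the paper's coder: the db4 wavelet, 4 decomposition levels, order-3 predictor, and quantizer step are all assumptions.

```python
# A hedged sketch of wavelet-domain linear-prediction residual coding.
# Assumptions (not taken from the paper): db4 wavelet, 4 levels,
# order-3 predictor fitted per subband by least squares, uniform quantizer.
import numpy as np
import pywt

def lp_residuals(c, order=3):
    """Fit an order-`order` linear predictor to coefficient array c and
    return (predictor coefficients, prediction residuals). The first
    `order` samples are kept verbatim as the predictor warm-up."""
    if len(c) <= order:
        return np.zeros(order), c.copy()
    # Least-squares system: c[n] ~ sum_k a[k] * c[n-1-k]
    X = np.column_stack([c[order - 1 - k:len(c) - 1 - k] for k in range(order)])
    y = c[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    res = c.copy()
    res[order:] = y - X @ a          # residuals have much lower energy
    return a, res

def compress(x, wavelet="db4", level=4, order=3, step=0.02):
    """Transform, predict, quantize; entropy coding of q is omitted."""
    enc = []
    for c in pywt.wavedec(x, wavelet, level=level):
        a, r = lp_residuals(np.asarray(c, dtype=float), order)
        q = np.round(r / step).astype(np.int32)   # uniform quantization
        enc.append((a, q))
    return enc
```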
Citations
Journal Article•DOI•
TL;DR: The orthogonality of the coefficient matrices of wavelet filters is utilized to derive an energy equation relating a time-domain signal to its wavelet coefficients, from which the relationship between the wavelet-coefficient error and the reconstruction error is obtained.
Abstract: In this paper, the orthogonality of the coefficient matrices of wavelet filters is utilized to derive an energy equation relating a time-domain signal to its wavelet coefficients. Using the energy equation, the relationship between the wavelet-coefficient error and the reconstruction error is obtained. The errors considered in this paper include the truncation error and the quantization error. This not only helps to control the reconstruction quality but also brings two advantages: (1) it is not necessary to perform the inverse transform to obtain the distortion caused by compression using the wavelet transform, which reduces computation effort; (2) using the energy equation, one can search for a threshold value that attains a better compression ratio within a pre-specified percent root-mean-square difference (PRD). A compression algorithm with run-length encoding is proposed based on the energy equation. Finally, Matlab and the MIT-BIH database are used in simulations to verify the feasibility of the proposed method, and the algorithm is also implemented on a DSP chip to examine its practicality and suitability. The required computation time for an ECG segment is less than 0.0786 s, which is fast enough to process real-time signals. As a result, the proposed algorithm is applicable for implementation on mobile ECG recording devices.
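
For an orthogonal wavelet transform, the energy equation is Parseval's relation: writing $\tilde{x}$ for the signal reconstructed from perturbed (truncated and quantized) coefficients $\tilde{w}$,

$$\|x - \tilde{x}\|^2 = \sum_j \|w_j - \tilde{w}_j\|^2, \qquad \|x\|^2 = \sum_j \|w_j\|^2,$$

so the distortion can be evaluated entirely in the coefficient domain, with no inverse transform:

$$\mathrm{PRD} = 100\,\sqrt{\frac{\sum_n (x_n - \tilde{x}_n)^2}{\sum_n x_n^2}} = 100\,\sqrt{\frac{\sum_j \|w_j - \tilde{w}_j\|^2}{\sum_j \|w_j\|^2}}.$$

This is what allows the threshold search to be run against a target PRD without reconstructing the signal at every step.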

7 citations

Journal Article•DOI•
TL;DR: In this paper, the authors proposed a novel algorithm for the compression of ECG signals to reduce energy consumption in remote healthcare monitoring systems (RHMs), using discrete Krawtchouk moments as a feature extractor.
Abstract: Remote healthcare monitoring systems (RHMs) that use ECG signals are very effective tools for the early diagnosis of various heart conditions. However, these systems still face a problem that reduces their efficiency: energy consumption in wearable devices, which are battery-powered and have limited storage. This paper presents a novel algorithm for the compression of ECG signals to reduce energy consumption in RHMs. The proposed algorithm uses discrete Krawtchouk moments as a feature extractor to obtain features from the ECG signal; the accelerated Ant Lion Optimizer (AALO) then selects the optimal features that achieve the best reconstructed signal. The proposed algorithm is extensively validated on two benchmark datasets: MIT-BIH arrhythmia and ECG-ID. It achieves average values of compression ratio (CR), percent root mean square difference (PRD), signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and quality score (QS) of 15.56, 0.69, 44.52, 49.04, and 23.92, respectively. The comparison demonstrates the advantages of the proposed compression algorithm over recent algorithms on these performance metrics. It was also tested and compared against other existing algorithms with respect to processing time, compression speed, and computational efficiency, where it clearly outperforms them (processing time = 6.89 s, compression speed = 4640.19 bps, computational efficiency = 2.95). The results also indicate that the proposed compression algorithm reduces energy consumption in a wearable device by decreasing the wake-up time by 3600 ms.
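
For reference, the figures of merit quoted above are standard; a minimal sketch of how they are commonly computed (definitions vary slightly across papers, e.g. mean-removed PRD, and QS = CR/PRD is the usual convention assumed here):

```python
import numpy as np

def ecg_metrics(x, x_rec, bits_original, bits_compressed):
    """Common ECG compression figures of merit. Definitions vary slightly
    across papers (e.g. mean-removed PRD); these are the usual ones."""
    err = x - x_rec
    cr = bits_original / bits_compressed               # compression ratio
    prd = 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2))
    snr = 10.0 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))
    psnr = 10.0 * np.log10(len(x) * np.max(np.abs(x)) ** 2 / np.sum(err ** 2))
    qs = cr / prd                                      # quality score
    return cr, prd, snr, psnr, qs
```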

7 citations

Proceedings Article•DOI•
25 Jun 2012
TL;DR: A new method and results regarding the compressed sensing (CS) and classification of ECG waveforms using a general dictionary as well as specific dictionaries built using normal and pathological cardiac patterns are presented.
Abstract: The paper presents a new method and results regarding the compressed sensing (CS) and classification of ECG waveforms using a general dictionary as well as specific dictionaries built using normal and pathological cardiac patterns. The proposed method has been validated by computation of the distortion errors between the original and the reconstructed signals and by the classification ratio of the reconstructed signals obtained with the k-nearest neighbors (KNN) algorithm.
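
A minimal sketch of the compressed-sensing setting the paper describes: random measurements y = Φx and sparse recovery over a dictionary via orthogonal matching pursuit. The Gaussian sensing matrix, the stand-in orthonormal dictionary, and the sparsity level are illustrative assumptions, not the paper's trained dictionaries.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily build a k-sparse s with A @ s ~ y."""
    residual, support = y.astype(float), []
    s_sup = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        s_sup, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ s_sup         # orthogonal to chosen atoms
    s = np.zeros(A.shape[1])
    s[support] = s_sup
    return s

rng = np.random.default_rng(0)
n, m, k = 256, 64, 10                              # length, measurements, sparsity
D = np.linalg.qr(rng.standard_normal((n, n)))[0]   # stand-in orthonormal dictionary
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = D @ s_true                                     # synthetic signal, k-sparse in D
Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
y = Phi @ x                                        # compressed measurements
x_hat = D @ omp(Phi @ D, y, k)                     # sparse recovery, then synthesis
```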

7 citations

Proceedings Article•
01 Sep 2005
TL;DR: The paper presents a new algorithm for ECG signal compression based on local extrema extraction, adaptive hysteretic filtering, and LZW coding that is robust to noise, has rather small computational complexity, and provides good compression ratios with excellent reconstruction quality.
Abstract: The paper presents a new algorithm for ECG signal compression based on local extrema extraction, adaptive hysteretic filtering, and LZW coding. The method consists of smoothing the ECG signal with a Savitzky-Golay filter, extracting the local minima and maxima, hysteretic filtering, and LZW coding. The ECG signal is reconstructed by cubic interpolation.
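
A rough sketch of this pipeline using scipy stand-ins; the window length, the hysteresis threshold, and the simplified one-sided hysteresis rule are assumptions (the paper's filtering is adaptive), and the LZW stage is left as a comment.

```python
import numpy as np
from scipy.signal import savgol_filter, argrelextrema
from scipy.interpolate import CubicSpline

def compress_extrema(x, win=15, poly=3, eps=0.01):
    """Smooth, keep local extrema, and drop extrema whose amplitude change
    relative to the last kept sample is below eps (a crude stand-in for
    the paper's adaptive hysteretic filtering)."""
    xs = savgol_filter(x, win, poly)                 # Savitzky-Golay smoothing
    ext = np.sort(np.concatenate([argrelextrema(xs, np.greater)[0],
                                  argrelextrema(xs, np.less)[0]]))
    keep = [0]                                       # always keep endpoints
    for i in ext:
        if abs(xs[i] - xs[keep[-1]]) >= eps:
            keep.append(int(i))
    if keep[-1] != len(xs) - 1:
        keep.append(len(xs) - 1)
    idx = np.array(keep)
    return idx, xs[idx]          # store (index deltas, values); LZW-code them

def reconstruct(idx, vals, n):
    """Cubic interpolation through the kept extrema."""
    return CubicSpline(idx, vals)(np.arange(n))
```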

7 citations

Dissertation•
15 Nov 2007
TL;DR: In this thesis, the author develops orthogonal-polynomial methods for the compression of ECG signals, obtaining good results with Legendre and Chebyshev polynomials (via Jacobi polynomials) and with a combined Laguerre-Hermite model of the cardiac cycle.
Abstract: ECG signal compression is becoming all the more important with the development of telemedicine, since compression considerably reduces the cost of transmitting medical information over telecommunication channels. The objective of this thesis is to develop new ECG compression methods based on orthogonal polynomials. We first study the characteristics of ECG signals and the processing operations commonly applied to them, and give an exhaustive, comparative account of existing ECG compression algorithms, with emphasis on those based on polynomial approximation and interpolation. We then address the theoretical foundations of orthogonal polynomials, studying in turn their mathematical nature, their many useful properties, and the characteristics of several particular families. Polynomial modeling of the ECG consists of first segmenting the signal into cardiac cycles after QRS detection, then decomposing the resulting signal windows in polynomial bases; the coefficients produced by the decomposition are used to synthesize the signal segments during reconstruction. Compression amounts to representing a segment made up of many samples with a small number of coefficients. Our experiments established that Laguerre and Hermite polynomials do not lead to a good reconstruction of the ECG signal, whereas Legendre and Chebyshev polynomials give interesting results. We consequently designed our first ECG compression algorithm using Jacobi polynomials; once optimized to remove edge effects, it becomes general-purpose and is no longer dedicated to ECG signals alone. Although neither Laguerre polynomials nor Hermite functions individually model ECG segments well, we combined the two function systems to represent a cardiac cycle: the ECG segment corresponding to one cycle is split into two parts, the isoelectric line, which is expanded in series of Laguerre polynomials, and the P-QRS-T waves, which are modeled by Hermite functions. This yields a second, robust and efficient ECG compression algorithm.
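
A minimal sketch of the per-cycle orthogonal-polynomial modeling described above, using a Chebyshev basis (the segment boundaries, the basis choice among those studied, and the coefficient count are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_beat(beat, n_coef=20):
    """Represent one cardiac cycle by n_coef Chebyshev coefficients."""
    t = np.linspace(-1.0, 1.0, len(beat))   # map the cycle onto [-1, 1]
    return C.chebfit(t, beat, n_coef - 1)   # least-squares decomposition

def reconstruct_beat(coefs, n_samples):
    t = np.linspace(-1.0, 1.0, n_samples)
    return C.chebval(t, coefs)
```

Representing a 300-sample cycle with 20 coefficients, for instance, is a 15:1 reduction before quantization and entropy coding; the edge effects mentioned above show up as fitting error near the segment boundaries.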

6 citations

References
Journal Article•DOI•
John Makhoul•
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of the present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
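
A minimal sketch of the autocorrelation (stationary) method expounded in the paper, solved with the Levinson-Durbin recursion, which also yields the reflection coefficients discussed for transmission; variable names are illustrative.

```python
import numpy as np

def levinson_durbin(x, order):
    """All-pole linear prediction by the autocorrelation (stationary)
    method: solve the Toeplitz normal equations with the Levinson-Durbin
    recursion. Returns predictor coefficients a (so that
    x[n] ~ sum_k a[k] * x[n-1-k]), the reflection coefficients, and the
    final prediction-error power."""
    r = np.correlate(x, x, "full")[len(x) - 1:][:order + 1]
    a = np.zeros(order)
    refl = np.zeros(order)
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - np.dot(a[:i], r[i:0:-1])
        k = acc / err
        a_prev = a[:i].copy()
        a[:i] = a_prev - k * a_prev[::-1]   # update lower-order taps
        a[i] = k
        refl[i] = k
        err *= 1.0 - k * k                  # error power shrinks each order
    return a, refl, err
```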

4,206 citations

Book•
05 Sep 1978
TL;DR: This book presents the fundamentals of digital speech processing, covering digital models for the speech signal, time-domain methods, digital representations of the speech waveform, short-time Fourier analysis, homomorphic processing, and linear predictive coding, with application to man-machine communication by voice.
Abstract: 1. Introduction. 2. Fundamentals of Digital Speech Processing. 3. Digital Models for the Speech Signal. 4. Time-Domain Models for Speech Processing. 5. Digital Representation of the Speech Waveform. 6. Short-Time Fourier Analysis. 7. Homomorphic Speech Processing. 8. Linear Predictive Coding of Speech. 9. Digital Speech Processing for Man-Machine Communication by Voice.

3,103 citations

Journal Article•DOI•
TL;DR: The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations.
Abstract: The wavelet transform is compared with the more classical short-time Fourier transform approach to signal analysis. Then the relations between wavelets, filter banks, and multiresolution signal processing are explored. A brief review is given of perfect reconstruction filter banks, which can be used both for computing the discrete wavelet transform, and for deriving continuous wavelet bases, provided that the filters meet a constraint known as regularity. Given a low-pass filter, necessary and sufficient conditions for the existence of a complementary high-pass filter that will permit perfect reconstruction are derived. The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations. An alternative approach based on the theory of continued fractions is also given. These results are used to design highly regular filter banks, which generate biorthogonal continuous wavelet bases with symmetries.
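
In the standard two-channel notation, with analysis filters $H_0(z), H_1(z)$ and synthesis filters $F_0(z), F_1(z)$, the reconstruction is

$$\hat{X}(z) = \tfrac{1}{2}\left[H_0(z)F_0(z) + H_1(z)F_1(z)\right]X(z) + \tfrac{1}{2}\left[H_0(-z)F_0(z) + H_1(-z)F_1(z)\right]X(-z),$$

and aliasing is cancelled by choosing $F_0(z) = H_1(-z)$, $F_1(z) = -H_0(-z)$. Perfect reconstruction then requires

$$H_0(z)H_1(-z) - H_1(z)H_0(-z) = 2z^{-l},$$

which is the Bezout identity referred to above: given the low-pass $H_0$, the complementary high-pass filters $H_1$ are the solutions of this polynomial identity.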

1,804 citations

Journal Article•DOI•
TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.

690 citations

Journal Article•DOI•
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
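
EZW coding itself is more involved (successive-approximation significance passes over zerotrees of coefficients); a minimal stand-in that illustrates the wavelet-compression side with pywt, where the bior4.4 wavelet, the kept-coefficient fraction, and the quantizer step are assumptions:

```python
import numpy as np
import pywt

def wavelet_compress(x, wavelet="bior4.4", level=5, keep=0.125, step=0.01):
    """Threshold a multilevel DWT to the largest `keep` fraction of
    coefficients, quantize them uniformly, reconstruct, and report PRD."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thr = np.quantile(np.abs(arr), 1.0 - keep)     # magnitude threshold
    arr = np.where(np.abs(arr) >= thr, arr, 0.0)   # zero the small coefficients
    arr = np.round(arr / step) * step              # uniform quantization
    rec = pywt.array_to_coeffs(arr, slices, output_format="wavedec")
    x_rec = pywt.waverec(rec, wavelet)[:len(x)]
    prd = 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
    return x_rec, prd
```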

445 citations