Journal ArticleDOI

A novel compression algorithm for electrocardiogram signals based on the linear prediction of the wavelet coefficients

TL;DR: A new algorithm for electrocardiogram (ECG) compression that encodes the linearly predicted residuals of the signal's wavelet coefficients, reducing the bit rate while keeping the reconstructed signal's distortion at a clinically acceptable level.
About: This article was published in Digital Signal Processing on 2003-10-01 and has received 97 citations to date. It focuses on the topics: Wavelet transform & Stationary wavelet transform.
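
For illustration, here is a minimal sketch of the paper's core idea: take a discrete wavelet transform of the ECG, predict each coefficient from its p predecessors by least squares, and keep only the lower-variance residuals for quantization and entropy coding. The wavelet ("db4"), the order p, the input file name, and the use of NumPy/PyWavelets are assumptions made for the sketch, not details taken from the paper.

```python
import numpy as np
import pywt

def lp_residuals(coeffs, p=4):
    """Least-squares linear prediction within one coefficient band."""
    c = np.asarray(coeffs, dtype=float)
    if len(c) <= p:
        return c.copy()                       # band too short to predict
    # Row i of X holds the p previous coefficients c[i-1], ..., c[i-p].
    X = np.column_stack([c[p - 1 - k:len(c) - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, c[p:], rcond=None)
    res = c.copy()
    res[p:] = c[p:] - X @ a                   # residual = actual - predicted
    return res

ecg = np.loadtxt("ecg.txt")                   # hypothetical input file
bands = pywt.wavedec(ecg, "db4", level=5)     # DWT; "db4"/level 5 assumed
residual_bands = [lp_residuals(b) for b in bands]
# Residuals carry less variance than the raw coefficients, so the
# downstream quantizer + entropy coder spends fewer bits on them.
print("band variances:    ", [round(float(np.var(b)), 3) for b in bands])
print("residual variances:", [round(float(np.var(r)), 3) for r in residual_bands])
```
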
Citations
Journal ArticleDOI
TL;DR: A new, efficient, high-performance lossless EEG compression scheme using wavelet transforms and neural network predictors is presented, providing diagnostic reliability for lossless transmission and recovery of EEG signals in telemedicine applications.
Abstract: Developments of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transforms and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, a Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with low encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications.

29 citations
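
As a hedged illustration of the lossless pipeline sketched in this abstract, the following shows an integer (lifting) Haar transform, the simplest integer wavelet, whose exact invertibility is what keeps such a scheme lossless. The Haar choice and the synthetic input are assumptions; the paper pairs integer wavelets with neural-network predictors and a Lempel-Ziv/arithmetic entropy stage.

```python
import numpy as np

def int_haar_fwd(x):
    """Integer Haar (S-transform); x must have even length."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = a - b                      # integer detail coefficients
    s = b + d // 2                 # approximation = floor((a + b) / 2)
    return s, d

def int_haar_inv(s, d):
    """Exact inverse of int_haar_fwd."""
    b = s - d // 2
    a = d + b
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = a, b
    return out

eeg = np.random.randint(-500, 500, size=1024)    # stand-in integer EEG
s, d = int_haar_fwd(eeg)
assert np.array_equal(int_haar_inv(s, d), eeg)   # perfectly invertible
# In the full scheme the approximation band s is split recursively, a
# predictor models the coefficients, and only its integer residues are
# passed to the entropy coder, so reconstruction stays bit-exact.
```
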

Proceedings ArticleDOI
26 Dec 2007
TL;DR: An improved wavelet-based 2-D ECG data compression method is presented which employs a double-stage compression and utilizes both inter-beat and inter-sample redundancies in the ECG signal.
Abstract: An improved wavelet-based 2-D ECG data compression method is presented which employs a double-stage compression. In the first stage the set partitioning in hierarchical trees (SPIHT) algorithm is used to compress the 2-D data array formed by cutting and beat-aligning the heartbeat data sequence. In the second stage vector quantization is applied to the residual image obtained from the previous stage. The 2-D approach utilizes both inter-beat and inter-sample redundancies in the ECG signal. The proposed algorithm is applied to several records in the MIT-BIH arrhythmia database. Results show a lower percent root-mean-square difference (PRD) than 1-D methods and several 2-D methods at the same compression ratio (CR).

29 citations


Cites background or methods from "A novel compression algorithm for e..."

  • ...These methods can be categorized into three groups: 1) direct methods that code the signal directly, such as the AZTEC, DPCM, TP, CORTES and SAPA methods [2]; 2) transform methods, such as the Fourier transform, Walsh, KLT, DCT and wavelet transforms [3-6]; 3) parameter extraction methods, such as linear prediction [7] and long-term prediction [8]....


  • ...Because of the good localization property of the wavelet transform in the time and frequency domain, several wavelet and/or wavelet packet based compression algorithms have been proposed in recent years that result in low reconstruction error and fine visual quality [3-6]....


  • ...The methods in this table include other 1-D and 2-D wavelet-based coders [3-6] and [10], as well as a direct ECG signal coder, AZTEC [2], and a linear prediction coder [7]....

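A toy sketch of the two-stage, 2-D idea above: beats are cut and aligned into a matrix (inter-beat redundancy), a first-stage approximation is removed, and the residual is vector-quantized. Here scipy's k-means stands in for codebook training, subtraction of the mean beat stands in for SPIHT, and the peak positions, window width, and codebook size are all assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def beat_matrix(ecg, peaks, width=256):
    """Stack a fixed-width window starting at each R peak into rows."""
    rows = [ecg[p:p + width] for p in peaks if p + width <= len(ecg)]
    return np.vstack(rows)

ecg = np.loadtxt("ecg.txt")                     # hypothetical input file
peaks = np.arange(0, len(ecg) - 256, 300)       # stand-in for a QRS detector
beats = beat_matrix(ecg, peaks)                 # exploits inter-beat redundancy
residual = beats - beats.mean(axis=0)           # crude stand-in for stage 1
blocks = residual.reshape(-1, 8)                # 8-sample VQ blocks
codebook, _ = kmeans2(blocks, 64, seed=0)       # stage 2: train a codebook
indices, _ = vq(blocks, codebook)               # transmit indices + codebook
print(f"{blocks.size} samples -> {indices.size} 6-bit indices")
```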

Journal ArticleDOI
TL;DR: A high-performing, reliable, and robust photoplethysmogram (PPG) compression and encryption method is proposed for efficient, safe, and secure storage and transmission; it is superior to existing PPG compression methods reported to date.

29 citations
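
The PPG paper's exact pipeline is not described here, so the following is only a generic compress-then-encrypt sketch: uniform quantization, DEFLATE compression via zlib, then authenticated encryption with Fernet from the third-party `cryptography` package. The quantizer step, file name, and choice of cipher are assumptions.

```python
import zlib
import numpy as np
from cryptography.fernet import Fernet

ppg = np.loadtxt("ppg.txt")                     # hypothetical input file
q = np.round(ppg / 0.01).astype(np.int16)       # uniform quantizer, step 0.01
compressed = zlib.compress(q.tobytes(), level=9)
key = Fernet.generate_key()                     # must be shared/stored securely
token = Fernet(key).encrypt(compressed)         # ciphertext safe to transmit
# Receiver side: decrypt, decompress, dequantize.
restored = np.frombuffer(
    zlib.decompress(Fernet(key).decrypt(token)), dtype=np.int16) * 0.01
print(len(q.tobytes()), "->", len(compressed), "bytes before encryption")
```
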

Journal ArticleDOI
TL;DR: An optimized wavelet filter bank based methodology is presented for compression of the electrocardiogram (ECG) signal; it employs a modified thresholding that improves compression compared with earlier thresholding techniques.

26 citations


Cites methods from "A novel compression algorithm for e..."

  • ...Based on subband decomposition, various techniques [8-13] have been devised for ECG signal compression....


  • ...Several efficient methods based on linear prediction [7-9] have been reported in the literature....

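For illustration, a minimal sketch of threshold-based wavelet compression of the kind this entry builds on: transform, zero the small coefficients, reconstruct, and report PRD. The wavelet ("bior4.4"), the decomposition level, and the plain hard threshold are assumptions; the paper's contribution is a modified thresholding rule not reproduced here.

```python
import numpy as np
import pywt

ecg = np.loadtxt("ecg.txt")                        # hypothetical input file
coeffs = pywt.wavedec(ecg, "bior4.4", level=5)
flat = np.concatenate(coeffs)
thr = np.quantile(np.abs(flat), 0.90)              # keep the largest 10%
kept = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
rec = pywt.waverec(kept, "bior4.4")[:len(ecg)]
prd = 100 * np.linalg.norm(ecg - rec) / np.linalg.norm(ecg)
print(f"PRD = {prd:.2f}% with 10% of coefficients retained")
```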

Proceedings ArticleDOI
22 Oct 2007
TL;DR: This work presents a method of ECG data compression based on Jacobi polynomials, using the Gauss quadrature mechanism for numerical integration, and obtains interesting results compared with ECG compression by wavelet decomposition methods.
Abstract: Data compression is a frequent signal processing operation applied to ECG. We present here a method of ECG data compression utilizing Jacobi polynomials. ECG signals are first divided into blocks that match cardiac cycles before being decomposed in Jacobi polynomial bases. The Gauss quadrature mechanism for numerical integration is used to compute the Jacobi transform coefficients. Coefficients of small value are discarded in the reconstruction stage. For experimental purposes, we chose eight families of Jacobi polynomials. Various segmentation approaches were considered. We elaborated an efficient strategy to cancel boundary effects. We obtained interesting results compared with ECG compression by wavelet decomposition methods. Some propositions are suggested to improve the results.

26 citations
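
A hedged sketch of polynomial-basis block coding in the spirit of this paper, using Legendre polynomials (the alpha = beta = 0 Jacobi case) and NumPy's least-squares fit in place of Gauss-Jacobi quadrature. Block length, polynomial degree, and the discard threshold are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

ecg = np.loadtxt("ecg.txt")                  # hypothetical input file
block = ecg[:300]                            # stand-in for one cardiac cycle
t = np.linspace(-1.0, 1.0, len(block))       # map the block onto [-1, 1]
c = legendre.legfit(t, block, deg=40)        # expansion coefficients
c[np.abs(c) < 1e-2 * np.abs(c).max()] = 0.0  # discard small coefficients
rec = legendre.legval(t, c)
err = np.linalg.norm(block - rec) / np.linalg.norm(block)
print(f"{np.count_nonzero(c)} coefficients kept, relative error {err:.3e}")
```
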

References
Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of the present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.

4,206 citations
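
A minimal sketch of the autocorrelation ("stationary") method the paper describes: solve the Yule-Walker normal equations for an order-p all-pole predictor and measure the prediction gain. The order p and the input file are assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(x, p=10):
    """Autocorrelation-method LP: x[n] is predicted by sum of a[k]*x[n-k]."""
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + p]
    return solve_toeplitz(r[:p], r[1:p + 1])   # Yule-Walker equations

x = np.loadtxt("signal.txt")                   # hypothetical input file
p = 10                                         # model order (assumed)
a = lpc(x, p)
res = x.copy()                                 # form the prediction residual
for k in range(1, p + 1):
    res[k:] -= a[k - 1] * x[:-k]
print(f"prediction gain: {np.var(x) / np.var(res):.1f}x")
```
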

Book
05 Sep 1978
TL;DR: A classic textbook on digital speech processing, covering digital models for the speech signal, time-domain processing, waveform representation, short-time Fourier analysis, homomorphic processing, linear predictive coding, and man-machine communication by voice.
Abstract: 1. Introduction. 2. Fundamentals of Digital Speech Processing. 3. Digital Models for the Speech Signal. 4. Time-Domain Models for Speech Processing. 5. Digital Representation of the Speech Waveform. 6. Short-Time Fourier Analysis. 7. Homomorphic Speech Processing. 8. Linear Predictive Coding of Speech. 9. Digital Speech Processing for Man-Machine Communication by Voice.

3,103 citations

Journal ArticleDOI
TL;DR: The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations.
Abstract: The wavelet transform is compared with the more classical short-time Fourier transform approach to signal analysis. Then the relations between wavelets, filter banks, and multiresolution signal processing are explored. A brief review is given of perfect reconstruction filter banks, which can be used both for computing the discrete wavelet transform and for deriving continuous wavelet bases, provided that the filters meet a constraint known as regularity. Given a low-pass filter, necessary and sufficient conditions for the existence of a complementary high-pass filter that will permit perfect reconstruction are derived. The perfect reconstruction condition is posed as a Bézout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations. An alternative approach based on the theory of continued fractions is also given. These results are used to design highly regular filter banks, which generate biorthogonal continuous wavelet bases with symmetries.

1,804 citations
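
A quick numerical check of the perfect-reconstruction property this paper characterizes: analyze a signal with a two-channel biorthogonal filter bank and resynthesize it. The "bior2.2" filter pair and the random test signal are arbitrary choices for the sketch.

```python
import numpy as np
import pywt

x = np.random.randn(512)               # arbitrary test signal
lo, hi = pywt.dwt(x, "bior2.2")        # analysis: low-pass and high-pass
y = pywt.idwt(lo, hi, "bior2.2")       # synthesis bank
print("max reconstruction error:", np.max(np.abs(x - y)))  # ~1e-15
```
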

Journal ArticleDOI
TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.

690 citations
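
For illustration, a minimal sketch of one "direct" scheme from this survey, first-order DPCM: only sample-to-sample differences are entropy-coded, and their empirical entropy shows the rate saving. The integer input file and the unit quantizer are assumptions.

```python
import numpy as np

def entropy_bits(symbols):
    """Empirical first-order entropy in bits per symbol."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

ecg = np.loadtxt("ecg.txt").astype(np.int32)  # hypothetical integer ECG
diffs = np.diff(ecg, prepend=ecg[0])          # previous-sample predictor
print(f"raw samples: {entropy_bits(ecg):.2f} bits/sample")
print(f"DPCM:        {entropy_bits(diffs):.2f} bits/sample")
```
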

Journal ArticleDOI
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.

445 citations
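
A hedged stand-in for the embedded quantization at the core of EZW coding: wavelet coefficients are refined one significance/refinement pass at a time, so the stream can be truncated at any rate. The zerotree symbol coding that gives EZW its efficiency is omitted, and the wavelet, level, and pass count are assumptions.

```python
import numpy as np
import pywt

ecg = np.loadtxt("ecg.txt")                      # hypothetical input file
coeffs = pywt.wavedec(ecg, "db6", level=5)
splits = np.cumsum([len(c) for c in coeffs])[:-1]
c = np.concatenate(coeffs)

T = 2.0 ** np.floor(np.log2(np.abs(c).max()))    # initial threshold
approx = np.zeros_like(c)                        # decoder-side estimate
width = np.zeros_like(c)                         # uncertainty interval width
for _ in range(6):                               # six embedded passes
    newly = (np.abs(c) >= T) & (width == 0)      # significance pass
    approx[newly] = np.sign(c[newly]) * 1.5 * T  # centre of [T, 2T)
    old = width > 0                              # refinement pass (bisection)
    up = np.abs(c[old]) >= np.abs(approx[old])   # upper or lower half-interval?
    approx[old] += np.where(up, 0.25, -0.25) * width[old] * np.sign(c[old])
    width[old] /= 2
    width[newly] = T
    T /= 2
rec = pywt.waverec(np.split(approx, splits), "db6")[:len(ecg)]
print(f"PRD after 6 passes: "
      f"{100 * np.linalg.norm(ecg - rec) / np.linalg.norm(ecg):.2f}%")
```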