Journal Article•DOI•

A novel compression algorithm for electrocardiogram signals based on the linear prediction of the wavelet coefficients

01 Oct 2003-Digital Signal Processing (Academic Press)-Vol. 13, Iss: 4, pp 604-622
TL;DR: A new algorithm for electrocardiogram (ECG) compression based on the compression of the linearly predicted residuals of the wavelet coefficients of the signal, which reduces the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level.
About: This article is published in Digital Signal Processing. The article was published on 2003-10-01 and has received 97 citations to date. The article focuses on the topics: Wavelet transform & Stationary wavelet transform.
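The pipeline the TL;DR describes (wavelet transform of the signal, linear prediction of the coefficients, coding of the prediction residuals) can be sketched as follows. The one-level Haar transform, order-1 predictor, and coefficient a = 0.9 are illustrative assumptions for brevity, not the paper's actual filter bank or predictor design.

```python
# Minimal sketch: wavelet decomposition -> linear prediction of the
# coefficients -> residual signal (the quantity that is actually coded).
# A one-level Haar transform and an order-1 predictor with a = 0.9 are
# assumed here; the paper's wavelet and predictor order differ.

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def predict_residuals(coeffs, a=0.9):
    """Order-1 predictor c_hat[n] = a * c[n-1]; return the residuals."""
    res = [coeffs[0]]  # first coefficient is sent verbatim
    for n in range(1, len(coeffs)):
        res.append(coeffs[n] - a * coeffs[n - 1])
    return res

def reconstruct(res, a=0.9):
    """Invert the predictor to recover the coefficients exactly."""
    coeffs = [res[0]]
    for n in range(1, len(res)):
        coeffs.append(res[n] + a * coeffs[n - 1])
    return coeffs

ecg = [0.0, 0.1, 0.3, 0.9, 1.5, 0.7, 0.2, 0.0]   # toy samples
approx, detail = haar_dwt(ecg)
residuals = predict_residuals(approx)
# prediction is invertible, so the compression stage is lossless up to
# whatever quantizer is applied to the residuals
assert all(abs(c - r) < 1e-12 for c, r in zip(approx, reconstruct(residuals)))
```

Because neighboring wavelet coefficients are correlated, the residuals are smaller in magnitude than the coefficients themselves and can be entropy-coded with fewer bits, which is the source of the rate reduction the TL;DR claims.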
Citations
01 Jan 2014
TL;DR: ECG signals are watermarked with patient information using the LSB watermarking technique, and the large volume of ECG data is compressed using ASCII character encoding, in order to confirm patient–ECG linkage integrity and reduce the bandwidth needed in telemedicine applications.
Abstract: In telemedicine applications, the ECG signal is transmitted to the doctor's end without any patient details. As a result, confusion arises between the signal and the patient's identity. To avoid this confusion, ECG signals need to be combined with confidential patient information before being sent. A typical ECG monitoring device generates massive volumes of digital data, and a large amount of bandwidth is required to transmit the ECG signal for telemedicine purposes. This bandwidth cost can be reduced if the signal is compressed after embedding the patient's personal information within the ECG signal: since the ECG signal and the patient details are integrated into one, the transmission bandwidth can be reduced. In this paper, ECG signals are watermarked with patient information using the LSB watermarking technique, and the large volume of ECG data is compressed using ASCII character encoding, in order to confirm patient–ECG linkage integrity and reduce the bandwidth needed in telemedicine applications. The whole module has been applied to ECG data from all 12 leads taken from the PTB diagnostic database (PTB-DB) of PhysioNet. It gives a highly compressed result that can be stored using far less digital space without distorting important ECG characteristics, while at the same time the embedded information can be completely retrieved.
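The LSB embedding step described above can be sketched in a few lines. The 8-bit-per-character packing and the toy patient string are illustrative assumptions, not the paper's exact payload format.

```python
# Minimal LSB watermarking sketch: embed patient-info bits into the least
# significant bit of each integer ECG sample, then extract them back.
# Packing the text as 8 bits per character is an illustrative assumption.

def embed(samples, text):
    bits = [(ord(c) >> k) & 1 for c in text for k in range(7, -1, -1)]
    assert len(bits) <= len(samples), "not enough samples to hold the payload"
    out = list(samples)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b      # overwrite the LSB only
    return out

def extract(samples, n_chars):
    chars = []
    for i in range(n_chars):
        byte = 0
        for k in range(8):
            byte = (byte << 1) | (samples[8 * i + k] & 1)
        chars.append(chr(byte))
    return "".join(chars)

ecg = [512, 498, 505, 530, 601, 720, 650, 540] * 10   # fake 10-bit samples
marked = embed(ecg, "ID:42")
assert extract(marked, 5) == "ID:42"
assert all(abs(a - b) <= 1 for a, b in zip(ecg, marked))  # at most 1 LSB error
```

Because only the least significant bit of each sample changes, the watermark distorts the waveform by at most one quantization step, which is why the clinically relevant morphology survives the embedding.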
Journal Article•DOI•
01 Jan 2021
TL;DR: The compressed sensing theory for sparse modeling and effective multi-channel ECG compression is developed and the proposed algorithm with Gaussian basis matrix reduces the reconstruction error and increases the compression ratio.
Abstract: Compressed Sensing (CS) has been considered a very effective means of reducing energy consumption in the energy-constrained wireless body sensor networks used for monitoring multi-lead Electrocardiogram (MECG) signals. This paper develops the compressed sensing theory for sparse modeling and effective multi-channel ECG compression. A basis matrix with Gaussian kernels, which showed the closest similarity to the ECG signals, is proposed to obtain the sparse representation of each channel. The greedy orthogonal matching pursuit (OMP) method is used to obtain the sparse representation of the signals. After obtaining the sparse representation of each ECG signal, compressed sensing theory can be used to compress the signals as much as possible. Following compression, the compressed signal is reconstructed, again using OMP, to demonstrate the accuracy and reliability of the algorithm. Moreover, since the wavelet basis matrix is another sparsifying basis for ECG signals, compressed sensing is also applied to the ECG signals using the wavelet basis matrix. The simulation results indicated that the proposed algorithm with the Gaussian basis matrix reduces the reconstruction error and increases the compression ratio.
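The measure-then-recover pipeline above (y = Φx for a sparse x, recovered by OMP) can be sketched as follows. The tiny fixed measurement matrix and the identity sparsifying basis are simplifying assumptions standing in for the paper's Gaussian-kernel and wavelet bases.

```python
# Sketch of the CS pipeline: a k-sparse signal x is measured as y = Phi @ x
# (fewer measurements than samples) and recovered with orthogonal matching
# pursuit. The small fixed Phi and identity sparsity basis are assumptions.

def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def omp(Phi, y, k):
    """Greedy OMP: pick the column most correlated with the residual,
    refit by least squares on the chosen support, repeat k times."""
    m, n = len(Phi), len(Phi[0])
    cols = [[Phi[r][c] for r in range(m)] for c in range(n)]
    support, resid = [], y[:]
    for _ in range(k):
        j = max(range(n),
                key=lambda c: abs(sum(cols[c][r] * resid[r] for r in range(m))))
        if j not in support:
            support.append(j)
        # least squares on the support via normal equations A^T A z = A^T y
        G = [[sum(cols[a][r] * cols[b][r] for r in range(m)) for b in support]
             for a in support]
        rhs = [sum(cols[a][r] * y[r] for r in range(m)) for a in support]
        z = solve(G, rhs)
        est = [sum(z[i] * cols[support[i]][r] for i in range(len(support)))
               for r in range(m)]
        resid = [y[r] - est[r] for r in range(m)]
    x = [0.0] * n
    for i, j in enumerate(support):
        x[j] = z[i]
    return x

Phi = [[1, 0, 1, 0, 1],
       [0, 1, 1, 0, 0],
       [1, 1, 0, 1, 0],
       [0, 0, 0, 1, 1]]
x_true = [0, 3.0, 0, 0, -2.0]          # 2-sparse signal, 5 samples
y = mat_vec(Phi, x_true)               # only 4 measurements
x_hat = omp(Phi, y, 2)
assert all(abs(a - b) < 1e-9 for a, b in zip(x_hat, x_true))
```

The compression comes from transmitting the 4 measurements instead of the 5 samples; in the ECG setting the sparsity lives in the Gaussian or wavelet basis rather than the identity, so x would be the basis coefficients.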

Cites methods from "A novel compression algorithm for e..."

  • ...Transforms such as the Fourier transform, discrete cosine transform, and discrete wavelet transform are among these methods [8-11]....


Proceedings Article•DOI•
01 Dec 2015
TL;DR: A hybrid ECG compression technique based on the DWT and on reducing the correlation between signal samples and beats is presented; results show excellent quality of the reconstructed signals, with %PRD less than 1.5% and CR greater than 20.
Abstract: A hybrid ECG compression technique based on the DWT and on reducing the correlation between signal samples and beats is presented in this paper. It starts by segmenting the ECG signal into blocks of 1024 samples each. Then, a DPCM approach is adopted to remove the redundancy between successive samples, producing a residual signal of QRS-complex-like waveforms without the P-, T- and U-waves. The first QRS-complex-like wave is then isolated, and each succeeding one is subtracted from the one preceding it. The next step depends on the application. For telediagnosis, the resulting residual signal is wavelet transformed, while for telemonitoring both the first QRS-complex-like wave and the residual signal are wavelet transformed. In both cases, the resulting wavelet coefficients are thresholded based on energy packing efficiency and coded using a modified run-length algorithm. The performance of the proposed algorithm has been tested on records extracted from the MIT-BIH arrhythmia database. Simulation results show excellent quality of the reconstructed signals, with %PRD less than 1.5% and CR greater than 20.
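The two decorrelation stages the abstract describes (first-difference DPCM across samples, then subtraction of successive beats) can be sketched as follows. The fixed beat length and the synthetic repeated beat are illustrative assumptions; real ECG beats must first be detected and aligned.

```python
# Sketch of the two decorrelation steps: first-difference DPCM across
# samples, then subtraction of each beat from the preceding one. The fixed
# beat length and synthetic data are illustrative assumptions.

def dpcm(x):
    return [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]

def inverse_dpcm(d):
    x = [d[0]]
    for v in d[1:]:
        x.append(x[-1] + v)
    return x

def beat_differences(signal, beat_len):
    """Keep the first beat; replace each later beat by its difference
    from the beat before it."""
    beats = [signal[i:i + beat_len] for i in range(0, len(signal), beat_len)]
    return [beats[0]] + [
        [cur[j] - prev[j] for j in range(beat_len)]
        for prev, cur in zip(beats, beats[1:])
    ]

beat = [0, 1, 5, 2, 0, -1, 0, 0]           # toy QRS-like beat
signal = beat * 3                           # three near-identical beats
diffs = beat_differences(dpcm(signal), beat_len=8)
# after both stages, repeated beats leave all-zero residuals to code
assert diffs[1] == [0] * 8 and diffs[2] == [0] * 8
```

Both stages are exactly invertible, so all of the loss in the full scheme is confined to the later wavelet thresholding step.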

Cites background from "A novel compression algorithm for e..."

  • ...These algorithms compress the ECG signal while preserving the relevant clinical information [8]....


Journal Article•DOI•
TL;DR: In this article, a flexible modeling technique for ECG signals is proposed by considering the weighted summation of elementary functions representing the waveforms that describe each component of the cardiac cycle.
Journal Article•
TL;DR: This paper introduces a strategy for ECG compression with lossless decompression that combines three techniques in order to increase storage capacity while reducing transmission time.
Abstract: This paper introduces a strategy for ECG compression with lossless decompression that combines three techniques in order to increase storage capacity while reducing transmission time. The first technique used in the proposed algorithm is adaptive linear prediction, which achieves high sensitivity and accurate prediction. The second is context-adaptive Golomb-Rice coding, used with a window size, for encoding the prediction residual. The third is a suitable packing format that enables real-time decoding. The proposed algorithm was evaluated on 48 records from the MIT-BIH arrhythmia database and achieved a lossless bit compression ratio of 2.83 in Lead V1 and 2.77 in Lead V2. The algorithm demonstrates better performance than previous lossless ECG compression schemes and can be used in data transmission for biomedical signals over limited-bandwidth e-health devices. The design was implemented using Xilinx software.
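The residual-coding stage of such a scheme can be sketched with plain Golomb-Rice codes. The zigzag mapping of signed residuals and the fixed Rice parameter k are common conventions assumed here for illustration, not necessarily this paper's context-adaptive choices.

```python
# Sketch of Golomb-Rice coding of prediction residuals. Signed residuals
# are zigzag-mapped to non-negative integers, then coded as a unary
# quotient plus a k-bit binary remainder (k >= 1 assumed, fixed here;
# the paper adapts k to the signal context).

def zigzag(v):
    """Map signed values to non-negative: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (v << 1) if v >= 0 else ((-v << 1) - 1)

def rice_encode(v, k):
    """Unary-coded quotient, '0' separator, k-bit remainder."""
    q, r = v >> k, v & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0%db" % k)

def rice_decode(bits, k):
    """Decode a single codeword produced by rice_encode."""
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

residuals = [0, -1, 2, 1, -3, 0, 1]       # small residuals dominate
k = 1
code = "".join(rice_encode(zigzag(v), k) for v in residuals)
assert rice_decode(rice_encode(zigzag(-3), k), k) == zigzag(-3)
```

Small residuals get short codewords (zigzag(0) costs 2 bits at k = 1), so the better the linear predictor, the shorter the bitstream, which is exactly why prediction and Golomb-Rice coding are paired.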

Cites methods from "A novel compression algorithm for e..."

  • ...The parameter extraction technique extracts the dominant features from the raw signal; other developed methods include the peak-picking and prediction method [19] and neural-network-based syntactic methods [20]....


References
Journal Article•DOI•
John Makhoul1•
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
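The time-domain least-squares fit the abstract describes can be sketched for the smallest interesting case, an order-2 all-pole predictor. Order 2, the toy cosine signal, and the direct 2x2 solve are simplifying assumptions; practical LPC uses higher orders and the Levinson-Durbin recursion for the stationary variant.

```python
# Sketch of the least-squares (time-domain) solution for an order-2
# all-pole predictor x_hat[n] = a1*x[n-1] + a2*x[n-2]. The 2x2 normal
# equations are solved directly; real LPC uses higher orders.

import math

def lpc2(x):
    """Solve the 2x2 normal equations for the order-2 predictor."""
    idx = range(2, len(x))
    s11 = sum(x[i - 1] * x[i - 1] for i in idx)
    s12 = sum(x[i - 1] * x[i - 2] for i in idx)
    s22 = sum(x[i - 2] * x[i - 2] for i in idx)
    b1 = sum(x[i] * x[i - 1] for i in idx)
    b2 = sum(x[i] * x[i - 2] for i in idx)
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - s12 * b2) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1, a2

# A sampled cosine obeys x[n] = 2*cos(w)*x[n-1] - x[n-2] exactly, so the
# least-squares fit recovers those coefficients to machine precision.
w = 0.3
x = [math.cos(w * n) for n in range(64)]
a1, a2 = lpc2(x)
assert abs(a1 - 2 * math.cos(w)) < 1e-9 and abs(a2 + 1) < 1e-9
```

This is the "nonstationary" (covariance-style) variant, since the sums run over the exact analysis window; the stationary variant replaces them with autocorrelation values of the windowed signal.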

4,206 citations

Book•
05 Sep 1978
TL;DR: This book presents the fundamentals of digital speech processing, covering digital models for the speech signal, time-domain methods, digital representation of the speech waveform, short-time Fourier analysis, homomorphic processing, and linear predictive coding of speech.
Abstract: 1. Introduction. 2. Fundamentals of Digital Speech Processing. 3. Digital Models for the Speech Signal. 4. Time-Domain Models for Speech Processing. 5. Digital Representation of the Speech Waveform. 6. Short-Time Fourier Analysis. 7. Homomorphic Speech Processing. 8. Linear Predictive Coding of Speech. 9. Digital Speech Processing for Man-Machine Communication by Voice.

3,103 citations

Journal Article•DOI•
TL;DR: The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations.
Abstract: The wavelet transform is compared with the more classical short-time Fourier transform approach to signal analysis. Then the relations between wavelets, filter banks, and multiresolution signal processing are explored. A brief review is given of perfect reconstruction filter banks, which can be used both for computing the discrete wavelet transform, and for deriving continuous wavelet bases, provided that the filters meet a constraint known as regularity. Given a low-pass filter, necessary and sufficient conditions for the existence of a complementary high-pass filter that will permit perfect reconstruction are derived. The perfect reconstruction condition is posed as a Bezout identity, and it is shown how it is possible to find all higher-degree complementary filters based on an analogy with the theory of Diophantine equations. An alternative approach based on the theory of continued fractions is also given. These results are used to design highly regular filter banks, which generate biorthogonal continuous wavelet bases with symmetries.

1,804 citations

Journal Article•DOI•
TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.

690 citations

Journal Article•DOI•
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and that in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and that in most cases 16:1 compressed ECGs are clinically useful.

445 citations