Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding. A framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
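
As a concrete illustration of the direct data compression family surveyed above, here is a minimal Python sketch of the Turning-point idea; the function name and details are illustrative, not taken from the paper. From each incoming pair of samples, the one that preserves a slope reversal is kept, giving a fixed 2:1 reduction.

    import numpy as np

    def turning_point(x):
        """Minimal Turning-point (TP) sketch: 2:1 reduction that keeps,
        from each pair of samples, the one preserving a slope reversal."""
        x = np.asarray(x, dtype=float)
        out = [x[0]]
        x0 = x[0]
        for i in range(1, len(x) - 1, 2):
            x1, x2 = x[i], x[i + 1]
            # A turning point occurs when the slope changes sign.
            if np.sign(x1 - x0) * np.sign(x2 - x1) < 0:
                out.append(x1)  # keep the reversal point
                x0 = x1
            else:
                out.append(x2)  # otherwise keep the later sample
                x0 = x2
        return np.asarray(out)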
Citations
Journal ArticleDOI
TL;DR: In this article, the authors proposed compression techniques for ECG signals using the optimized tunable-Q wavelet transform (TQWT); ECG signals are stored in digitized form at a high number of bits per sample, which requires ample storage space.

8 citations

Proceedings ArticleDOI
01 Dec 2011
TL;DR: Results show that both SampEn and ApEn enable a clear distinction between control and epileptic signals, but SampEn shows a more robust performance over a wide range of sample loss ratios.
Abstract: This study is aimed at characterizing three signal entropy measures, Approximate Entropy (ApEn), Sample Entropy (SampEn) and Multiscale Entropy (MSE), over real EEG signals when a number of samples are randomly lost due to, for example, wireless data transmission. The experimental EEG database comprises two main signal groups: control EEGs and epileptic EEGs. Results show that both SampEn and ApEn enable a clear distinction between control and epileptic signals, but SampEn shows a more robust performance over a wide range of sample loss ratios. MSE exhibits poor behavior for sample loss ratios above 40%. The non-stationary and random trends of the EEG are preserved even when a great number of samples are discarded. This behavior is similar for all the records within the same group.
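
As background for the measures being compared, a minimal SampEn sketch in Python follows; m = 2 and r = 0.2 times the standard deviation are common defaults, assumed here rather than taken from this study. SampEn(m, r) is the negative logarithm of the conditional probability that two subsequences that match for m points, within Chebyshev tolerance r, still match at length m + 1; self-matches are excluded.

    import numpy as np

    def sample_entropy(x, m=2, r=None):
        """SampEn(m, r) = -ln(A/B): B counts pairs of length-m templates
        within tolerance r, A the same for length m + 1 (no self-matches)."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * np.std(x)  # common convention, assumed here
        def matches(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            n = 0
            for i in range(len(t) - 1):
                d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
                n += int(np.sum(d <= r))
            return n
        B, A = matches(m), matches(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf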

8 citations


Additional excerpts

  • ...In addition, related techniques such as event detection [12], hardware design, energy saving, and data compression [13], [14], among others, may also entail a data loss....

    [...]

Proceedings ArticleDOI
23 Sep 1990
TL;DR: In this paper, a new pattern recognition method is introduced for the multiresolution representation and analysis of electrocardiogram (ECG) waveforms, based on filtering the curvature of the curve with Gaussian filters of increasing standard deviation and extracting the extrema points of the filtered curvature (scale-space filtering).
Abstract: A new pattern recognition method is introduced for the multiresolution representation and analysis of electrocardiogram (ECG) waveforms. The multiresolution representation is based on filtering the curvature of the curve with a continuum of Gaussian filters of increasing standard deviation, and on extracting the extrema points of the filtered versions of the curvature (scale-space filtering). The original curve is then segmented at each scale into linear parts with regard to the extracted extrema points. After segmentation and linking of segments between scales, the shape is represented qualitatively in a hierarchical tree form holding information on coarser and finer details of the shape. Different methods of tree analysis can be applied to data compression, classification of heartbeats, or fine structure analysis. The fast computation scheme and the transformation into the hierarchical structure are described. In addition to the method of representation, a data compression method is proposed, and a comparison to the AZTEC data compression method is given.
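
A minimal sketch of the scale-space filtering step, assuming SciPy's 1-D Gaussian filter and a precomputed curvature signal; the scale set is an arbitrary illustration. Linking the per-scale extrema into the hierarchical tree described above is omitted.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def scale_space_extrema(curvature, sigmas=(1, 2, 4, 8, 16)):
        """Smooth the curvature with Gaussians of increasing width and list,
        per scale, the indices where the smoothed signal has a local extremum."""
        extrema = {}
        for s in sigmas:
            f = gaussian_filter1d(np.asarray(curvature, dtype=float), sigma=s)
            d = np.diff(f)
            # Local extrema: the first difference changes sign.
            extrema[s] = np.where(d[:-1] * d[1:] < 0)[0] + 1
        return extrema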

8 citations

Proceedings ArticleDOI
06 Dec 2012
TL;DR: A wavelet-based, low-complexity electrocardiogram (ECG) compression algorithm for mobile healthcare systems, developed against the backdrop of real clinical requirements, aims at a good trade-off between the compression ratio (CR) and the fidelity of the reconstructed signal, so as to preserve the clinically diagnostic features.
Abstract: This paper presents a wavelet-based, low-complexity electrocardiogram (ECG) compression algorithm for mobile healthcare systems, developed against the backdrop of real clinical requirements. The proposed method aims at achieving a good trade-off between the compression ratio (CR) and the fidelity of the reconstructed signal, so as to preserve the clinically diagnostic features. Keeping the computational complexity at a minimal level is paramount, since the application area considered is remote cardiovascular monitoring, where continuous sensing and processing take place on low-power, computationally constrained devices. The proposed compression methodology is based on the Discrete Wavelet Transform (DWT). The energy packing efficiency of the DWT coefficients at different resolution levels is analysed, and a thresholding policy is applied to select only those coefficients which contribute significantly to the total energy of the original signal. The proposed methodology is evaluated on normal and abnormal ECG signals extracted from the MIT-BIH database and achieves an average compression ratio of 16.5:1, an average percent root mean square difference of 0.75, and an average cross-correlation value of 0.98.
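
The energy-packing selection described above can be sketched as follows with PyWavelets; the wavelet, decomposition depth, and 99.9% energy target are assumptions for illustration, not the paper's exact policy. A practical codec would also quantize and entropy-code the surviving coefficients; this sketch only measures how few coefficients the energy criterion retains.

    import numpy as np
    import pywt

    def dwt_compress(ecg, wavelet="db4", level=5, energy_keep=0.999):
        """Keep the largest-magnitude DWT coefficients that jointly carry
        `energy_keep` of the total coefficient energy; zero out the rest."""
        coeffs = pywt.wavedec(ecg, wavelet, level=level)
        flat, slices = pywt.coeffs_to_array(coeffs)
        order = np.argsort(np.abs(flat))[::-1]            # biggest first
        frac = np.cumsum(flat[order] ** 2) / np.sum(flat ** 2)
        k = int(np.searchsorted(frac, energy_keep)) + 1   # how many to keep
        kept = np.zeros_like(flat)
        kept[order[:k]] = flat[order[:k]]
        rec = pywt.waverec(
            pywt.array_to_coeffs(kept, slices, output_format="wavedec"),
            wavelet)
        return rec[: len(ecg)], k / flat.size             # signal, keep ratio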

8 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...In the relevant literature [4], [5] and [6], lossy ECG compression algorithms are grouped into three distinct categories, namely: a) Direct Methods, heuristic algorithms such as AZTEC, the FAN algorithm, TP, CORTES, SAPA and entropy coding; b) Transform-Based Methods, in which the original signal is transformed to a new domain where compression is performed, for example the DCT, STFT, KLT and, more recently, the Wavelet Transform (WT)....

    [...]

Proceedings ArticleDOI
17 Jan 2011
TL;DR: A new parametric modeling technique for the analysis of the ECG signal is presented in this paper which involves the projection of the excitation signal on the right eigenvectors of the impulse response matrix of the LPC filter.
Abstract: A new parametric modeling technique for the analysis of the ECG signal is presented in this paper. This approach involves the projection of the excitation signal onto the right eigenvectors of the impulse response matrix of the LPC filter. Each projected value is then weighted by the corresponding singular value, leading to an approximation as a sum of exponentially damped sinusoids (EDS). A two-stage procedure is then used to estimate the EDS model parameters: Prony's algorithm is first used to obtain initial estimates of the model, and the Gauss-Newton method is then applied to solve the non-linear least-squares optimisation problem. The performance of the proposed model is evaluated on abnormal clinical ECG data selected from the MIT-BIH database using objective measures of distortion. A good compression ratio per beat is obtained with the proposed algorithm, which is quite satisfactory when compared to other techniques.
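
A sketch of the first stage, the Prony initialization, assuming a clean signal segment; the Gauss-Newton refinement and the eigenvector projection step are omitted, and all names are illustrative. It fits a linear predictor, takes the predictor polynomial's roots as the poles (damping and frequency of each damped sinusoid), then solves a linear least-squares problem for the complex amplitudes.

    import numpy as np

    def prony(x, p):
        """Classical Prony sketch: estimate p exponential modes
        x[n] ~ sum_k c_k * z_k**n via linear prediction plus root finding."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        # 1) Fit a length-p linear predictor: x[n] = -sum_k a[k] * x[n-1-k].
        A = np.column_stack([x[p - 1 - k:N - 1 - k] for k in range(p)])
        a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
        # 2) Poles are the roots of z**p + a_1 z**(p-1) + ... + a_p.
        z = np.roots(np.concatenate(([1.0], a)))
        # 3) Complex amplitudes from a Vandermonde least-squares fit.
        V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
        c, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
        return z, c  # poles and amplitudes of the damped sinusoids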

8 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...ECG data compression techniques have been classified into three main categories, namely direct data compression, transformation, and parameter extraction methods [1]....

    [...]

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
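
In symbols, a standard way to phrase the limiting process sketched here (not a quotation from the paper): partition the range of a continuous variable into cells of width Δ, apply the discrete entropy formula to the quantized variable, and a log Δ term separates out,

    H(X_\Delta) \;=\; -\sum_i p(x_i)\,\Delta\,\log\!\big(p(x_i)\,\Delta\big)
    \;\approx\; -\int p(x)\log p(x)\,dx \;-\; \log\Delta,
    \qquad
    h(X) \;=\; \lim_{\Delta\to 0}\big[\,H(X_\Delta) + \log\Delta\,\big].

The divergent log Δ term is one of the "new effects" of the continuous case; the differential entropy h(X) is what remains after it is split off.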

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
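
A minimal sketch of the construction in Python; the heap-of-codebooks representation is one of several equivalent ways to build the code. Repeatedly merge the two least probable nodes, prefixing '0' and '1' to the codewords on either side of each merge.

    import heapq

    def huffman_code(freqs):
        """Minimum-redundancy (Huffman) code: merge the two least probable
        nodes until one remains; each merge extends the codewords by one bit."""
        heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tick = len(heap)  # tie-breaker so dicts are never compared
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (f1 + f2, tick, merged))
            tick += 1
        return heap[0][2]

    # More probable symbols receive shorter codewords:
    print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}))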

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
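
A compact sketch of the stationary (autocorrelation) case discussed above, solving the all-pole normal equations with the Levinson-Durbin recursion; variable names are illustrative.

    import numpy as np

    def lpc(x, p):
        """All-pole model by the autocorrelation method via Levinson-Durbin.
        Returns a_1..a_p for the predictor
        xhat[n] = a_1*x[n-1] + ... + a_p*x[n-p], plus the residual energy."""
        x = np.asarray(x, dtype=float)
        r = np.array([x[: len(x) - k] @ x[k:] for k in range(p + 1)])
        a, err = np.zeros(p), r[0]
        for i in range(p):
            # Reflection (partial correlation) coefficient for order i + 1.
            k = (r[i + 1] - a[:i] @ r[i:0:-1]) / err
            a[:i] -= k * a[:i][::-1]
            a[i] = k
            err *= 1.0 - k * k
        return a, err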

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
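
A toy interval-narrowing encoder in Python makes the idea concrete; real arithmetic coders add integer renormalization and adaptive models, which this floating-point sketch deliberately omits.

    def arithmetic_encode(msg, probs):
        """Toy arithmetic encoder: shrink [low, high) through each symbol's
        probability slot; any number in the final interval encodes the
        message. Floating point only, so illustrative, not production code."""
        cum, c = {}, 0.0
        for sym, p in probs.items():
            cum[sym] = (c, c + p)
            c += p
        low, high = 0.0, 1.0
        for sym in msg:
            lo, hi = cum[sym]
            low, high = low + (high - low) * lo, low + (high - low) * hi
        return (low + high) / 2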

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
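
Among the performance criteria listed, energy compaction under coefficient truncation is the easiest to demonstrate. The sketch below uses SciPy's DCT-II; the keep fraction is an arbitrary illustration, and for smooth signals the reconstruction error stays small because the retained coefficients carry most of the energy.

    import numpy as np
    from scipy.fft import dct, idct

    def dct_truncate(x, keep=0.10):
        """Zero all but the `keep` fraction of largest-magnitude DCT-II
        coefficients and reconstruct the signal from what remains."""
        c = dct(np.asarray(x, dtype=float), norm="ortho")
        k = max(1, int(keep * c.size))
        thresh = np.sort(np.abs(c))[-k]
        return idct(np.where(np.abs(c) >= thresh, c, 0.0), norm="ortho")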

928 citations