Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods, and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
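The direct methods listed in this abstract are simple sample-selection rules. As an illustration, below is a minimal Python sketch of the Turning-point algorithm (a fixed 2:1 reducer that favors local extrema); the function name and the synthetic test signal are illustrative, not taken from the paper.

```python
import numpy as np

def turning_point_compress(x):
    """Turning-point (TP) compression sketch: keeps one sample per incoming
    pair, preferring local extrema ("turning points"), for a fixed 2:1 ratio.
    Real implementations also handle quantization and reconstruction timing."""
    sign = lambda v: (v > 0) - (v < 0)
    saved = [x[0]]
    x0 = x[0]
    for i in range(1, len(x) - 1, 2):
        x1, x2 = x[i], x[i + 1]
        s1, s2 = sign(x1 - x0), sign(x2 - x1)
        # If the slope changes sign at x1, x1 is a turning point: keep it.
        x0 = x1 if s1 * s2 < 0 else x2
        saved.append(x0)
    return np.asarray(saved)

# Example: compress a short synthetic "ECG-like" segment (illustrative data).
t = np.linspace(0, 1, 200)
ecg_like = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 50 * t)
compressed = turning_point_compress(ecg_like)
print(len(ecg_like), "->", len(compressed), "samples")
```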
Citations
Journal ArticleDOI
TL;DR: A syntactic pattern recognition method for electrocardiograms (ECG) is described, in which attributed automata are used to carry out the analysis of ECG signals.

119 citations

Journal ArticleDOI
TL;DR: The proposed vector quantizer (VQ) in the wavelet domain for the compression of electrocardiogram (ECG) signals outperforms many recently published methods, including the best of them, known as set partitioning in hierarchical trees.
Abstract: In this paper, we propose a novel vector quantizer (VQ) in the wavelet domain for the compression of electrocardiogram (ECG) signals. A vector called a tree vector (TV) is first formed in a novel structure, where wavelet transformed (WT) coefficients in the vector are arranged in the order of a hierarchical tree. Then, the TVs extracted from the various WT subbands are collected in one single codebook. This feature is an advantage over traditional WT-VQ methods, where multiple codebooks are needed and are usually designed separately because the numerical ranges of coefficient values in the various WT subbands are quite different. Finally, a distortion-constrained codebook replenishment mechanism is incorporated into the VQ, where codevectors can be updated dynamically, to guarantee reliable quality of the reconstructed ECG waveforms. With the proposed approach, both the visual quality and the objective quality in terms of the percentage root-mean-square difference (PRD) are excellent even at very low bit rates. For the entire 48 records of Lead II ECG data in the MIT/BIH database, an average PRD of 7.3% at 146 b/s is obtained. For the same test data under consideration, the proposed method outperforms many recently published ones, including the best of them, known as set partitioning in hierarchical trees.
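The quality figure quoted above (an average PRD of 7.3% at 146 b/s) uses the standard percentage root-mean-square difference. A minimal sketch of how PRD and the average bit rate might be computed is given below; note that some papers use a mean-removed variant of PRD, and the 360 Hz default is the MIT/BIH sampling rate, assumed here for illustration.

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference (PRD), the distortion measure
    quoted in the abstract. This variant normalizes by the raw signal
    energy; some papers subtract the signal mean first."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def bit_rate(total_bits, n_samples, fs=360):
    """Average bits per second for a record of n_samples sampled at fs Hz
    (360 Hz assumed, matching the MIT/BIH Lead II records)."""
    return total_bits * fs / n_samples
```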

118 citations


Cites background or result from "ECG data compression techniques-a unified approach"

  • ...Here in Table I, we also summarize the reported performances in recently published articles [4]–[10] as a supplement to [3] and [4]....


  • ..., direct, transformed, and parameter extracted [2], [3]....


  • ...formance summaries of the ECG compression methods can be found in some literatures including [3] and [4]....


Journal ArticleDOI
01 Mar 2011
TL;DR: The efficacy of the combined PCA-ICA algorithm lies in the fact that the locations of the R-peaks are accurately determined and none of the peaks are ignored or missed, since a quadratic spline wavelet is also used.
Abstract: Electrocardiogram (ECG) signals are affected by various kinds of noise and artifacts that may hide important information of interest. The wavelet transform (WT) technique is used to identify the characteristic points of the ECG signal with fairly good accuracy, even in the presence of severe high-frequency and low-frequency noise. Independent component analysis (ICA) is a new technique suitable for separating independent components from complex ECG signals, whereas principal component analysis (PCA) is used to reduce dimensionality and to extract features from the ECG data, either before or, in special circumstances, after performing ICA. In this analysis, PCA is examined from three points of view: variance maximization, singular value decomposition, and ECG data compression. The sensitivity of the different ECG components with respect to the ECG data dimensions has been studied using PCA scree plots. The validity and performance of the approaches used are confirmed through computer simulations on Common Standards for Electrocardiography (CSE) database ECG data. Standard (instantaneous) ICA, the most commonly accepted ICA technique, is first compared with the PCA technique and then with constrained ICA, which enables the estimation of a single component close to a particular reference ECG signal. The ICA method can also be extended to QRS detection and reference signal generation using constrained ICA, as well as to multichannel ECG separation after removing noise and artifacts, further aiding segment classification. The results were obtained in the Matlab environment. Using composite WT-based PCA-ICA methods helps with redundant data reduction as well as with better feature extraction. The efficacy of the combined PCA-ICA algorithm lies in the fact that the locations of the R-peaks are accurately determined and none of the peaks are ignored or missed, since a quadratic spline wavelet is also used.
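As a rough illustration of the dimensionality-reduction step discussed above, here is a generic PCA-via-SVD sketch for a matrix of time-aligned ECG beats. It is not the authors' combined WT/PCA-ICA pipeline; the beat matrix, component count, and random stand-in data are assumptions made for the example.

```python
import numpy as np

def pca_compress(beats, n_components):
    """PCA-based dimensionality reduction of an ECG beat matrix
    (rows = time-aligned beats), using the SVD view of PCA.
    Returns the reduced scores and a reconstruction."""
    X = np.asarray(beats, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Singular value decomposition of the centered data matrix.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]          # principal directions
    scores = Xc @ components.T              # reduced representation
    reconstruction = scores @ components + mean
    return scores, reconstruction

# Example with random stand-in data: 50 beats of 300 samples each.
beats = np.random.randn(50, 300)
scores, recon = pca_compress(beats, n_components=8)
print(scores.shape, recon.shape)            # (50, 8) (50, 300)
```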

115 citations

Journal ArticleDOI
K. Shimizu1
TL;DR: The practicality of this technique was investigated through technical considerations required to realize mobile telemedicine, and theoretical analysis verified the feasibility of the proposed technique.
Abstract: We have proposed some techniques for mobile telemedicine and have verified their practical feasibility in experiments. This article presents technical considerations required to realize a practical mobile telemedicine system, techniques developed for multiple medical data transmission, and the satisfactory results of their applications to telemedicine in moving vehicles.

113 citations

Journal ArticleDOI
TL;DR: A prospective review of wavelet-based ECG compression methods and their performances based upon findings obtained from various experiments conducted using both clean and noisy ECG signals is presented.

110 citations


Cites background from "ECG data compression techniques-a unified approach"

  • ...The signal redundancy can be exploited when successive ECG samples are statistically dependent and the quantized ECG sample amplitudes occur with unequal probability [19,29,34]....


  • ...Although the KLT approach is shown to provide a high compression ratio, the computational time needed to calculate the KLT basis functions is very intensive [29,84]....


  • ...In general, higher value of the specified error threshold will result in higher CR and lower compressed signal quality and vice-versa [29]....


  • ...An excellent review and summary of few statistical, redundancy reduction and adaptive sampling techniques can be found in [29]....


  • ...However, the selection of tolerance is difficult for noisy ECG signals [29]....


References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
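The limiting process described here can be made concrete for a scalar source: quantizing into bins of width Δ and writing the discrete entropy separates a Δ-independent term, the differential entropy, from a diverging −log Δ term. The sketch below follows that standard argument and is not a quotation from the paper.

```latex
% Quantize x into bins of width \Delta, write the discrete entropy of the
% quantized source, and separate the term that diverges as \Delta \to 0.
\begin{align*}
H_\Delta &= -\sum_i p(x_i)\,\Delta \,\log\!\big(p(x_i)\,\Delta\big) \\
         &\approx -\int p(x)\log p(x)\,dx \;-\; \log \Delta .
\end{align*}
% The \Delta-independent part, the differential entropy
%   h(X) = -\int p(x)\log p(x)\,dx,
% plays the role of entropy in the continuous case; it differs from the
% discrete entropy by the diverging -\log\Delta term.
```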

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
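The construction behind a minimum-redundancy code repeatedly merges the two least-probable nodes. Below is a small Python sketch of that greedy procedure; the symbol frequencies, tie-breaking scheme, and example string are illustrative choices, not part of the original paper.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a minimum-redundancy (Huffman) code from a symbol sequence.
    Returns a dict mapping each symbol to its bit string."""
    freq = Counter(symbols)
    # Each heap entry: (weight, tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        (_, _, table), = heap
        return {s: "0" for s in table}
    counter = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        # Prefix '0' to one subtree's codes and '1' to the other's.
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        counter += 1
        heapq.heappush(heap, (w1 + w2, counter, merged))
    return heap[0][2]

codes = huffman_code("this is an example of a huffman tree")
print(codes[' '], codes['e'], codes['x'])   # frequent symbols get shorter codes
```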

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, in which the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
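For the stationary case mentioned in the abstract, the all-pole predictor coefficients follow from the autocorrelation normal equations. The Python sketch below solves them directly with a general linear solver for clarity; a practical coder would use the Levinson-Durbin recursion, and the test signal and model order are illustrative.

```python
import numpy as np

def lpc_autocorrelation(x, order):
    """All-pole linear prediction via the autocorrelation (stationary)
    method: solve the normal equations R a = r for the predictor
    coefficients and return the minimum prediction error energy."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Autocorrelation estimates R(0)..R(order).
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])           # predictor coefficients
    error = r[0] - np.dot(a, r[1:])         # minimum prediction error energy
    return a, error

# Predict each sample from its 4 predecessors and inspect the residual energy.
x = np.sin(0.2 * np.arange(400)) + 0.01 * np.random.randn(400)
a, err = lpc_autocorrelation(x, order=4)
print(a, err / np.dot(x, x))                # normalized minimum prediction error
```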

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
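The core idea of arithmetic coding is to narrow an interval by the cumulative probability of each successive symbol. The toy Python encoder below illustrates only that interval-narrowing step, using exact fractions and a static model; real coders work incrementally with integer renormalization and, as the paper stresses, keep the (possibly adaptive) model separate from the coder.

```python
from fractions import Fraction
from collections import Counter

def arithmetic_encode(message):
    """Minimal arithmetic encoder: narrow [low, high) by the cumulative
    probability of each symbol and return a number inside the final
    interval. Exact rationals keep the toy version precise, so it is only
    practical for short messages."""
    freq = Counter(message)
    total = len(message)
    # Cumulative distribution: symbol -> (cum_low, cum_high) as fractions.
    cdf, cum = {}, 0
    for sym, count in sorted(freq.items()):
        cdf[sym] = (Fraction(cum, total), Fraction(cum + count, total))
        cum += count
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        span = high - low
        c_lo, c_hi = cdf[sym]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2, cdf            # a point in the final interval + model

code, model = arithmetic_encode("abracadabra")
print(float(code))
```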

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
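Two of the transforms surveyed here are easy to sketch directly. Below are a fast Walsh-Hadamard transform and an explicit-matrix DCT-II in Python, both normalized to be orthonormal; the length-8 random test vector is purely illustrative.

```python
import numpy as np

def walsh_hadamard(x):
    """Fast Walsh-Hadamard transform (natural order) of a length-2^k
    vector, normalized so the transform is orthonormal."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    assert n and (n & (n - 1)) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b    # butterfly
        h *= 2
    return x / np.sqrt(n)

def dct_ii(x):
    """Orthonormal DCT-II via an explicit basis matrix (O(n^2) sketch)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = np.arange(n)[:, None]
    basis = np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))
    scale = np.sqrt(2.0 / n) * np.ones(n)
    scale[0] = np.sqrt(1.0 / n)
    return scale * (basis @ x)

x = np.random.randn(8)
print(walsh_hadamard(x))
print(dct_ii(x))
```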

928 citations