Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
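Among the direct methods surveyed, the tolerance-comparison algorithms are simple enough to state in a few lines. Below is a minimal Python sketch of the Turning-Point rule, written from its published description rather than taken from the paper: each pair of incoming samples is reduced to one, retaining whichever sample preserves a turning point (a slope sign change), for a fixed 2:1 compression ratio.

```python
import numpy as np

def turning_point(signal):
    """Turning-Point compression sketch: scan samples in pairs and keep,
    from each pair, the sample that preserves a turning point (a change
    in slope sign). Fixed 2:1 sample reduction."""
    out = [signal[0]]
    x0 = signal[0]
    i = 1
    while i + 1 < len(signal):
        x1, x2 = signal[i], signal[i + 1]
        # A slope sign change at x1 marks a local extremum: keep x1,
        # otherwise keep the later sample x2.
        if np.sign(x1 - x0) * np.sign(x2 - x1) < 0:
            saved = x1
        else:
            saved = x2
        out.append(saved)
        x0 = saved
        i += 2
    return np.array(out)
```

Reconstruction then interpolates between the retained samples; quantifying the distortion this introduces is exactly the kind of comparison the paper's evaluation framework addresses.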
Citations
Journal ArticleDOI
TL;DR: The noise sensitivity, specificity and accuracy of the PCA method are evaluated by examining the effect of noise, baseline wander and their combinations on the characteristics of the ECG for classification of true and false peaks.
Abstract: Principal component analysis (PCA) is used for ECG data compression, denoising and decorrelation of noisy and useful ECG components or signals. In this study, a comparative analysis of independent component analysis (ICA) and PCA for correction of ECG signals is carried out by removing noise and artifacts from various raw ECG data sets. PCA and ICA scatter plots of various chest and augmented ECG leads and their combinations are plotted to examine the varying orientations of the heart signal. To qualitatively illustrate that ICA recovers the shape of the ECG signals with high fidelity, the corrected source signals and extracted independent components are plotted. The analysis also investigates whether the difference between the two kurtosis coefficients on each of the respective channels is positive, indicating a super-Gaussian signal, or negative, indicating a sub-Gaussian signal. The efficacy of the combined PCA-ICA algorithm is verified on six channels (V1, V3, V6, AF, AR and AL) of 12-channel ECG data. ICA is utilized for identifying and removing noise and artifacts from the ECG signals, which are further corrected by using statistical measures after ICA processing. PCA scatter plots of various ECG leads give different orientations of the same heart information when different combinations of leads are considered in quadrant analysis. PCA results have also been obtained for different combinations of ECG leads to find correlations between them; they demonstrate a significant improvement in signal quality, i.e., an improved signal-to-noise ratio. In this paper, the noise sensitivity, specificity and accuracy of the PCA method are evaluated by examining the effect of noise, baseline wander and their combinations on the characteristics of the ECG for classification of true and false peaks.
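As a concrete illustration of the PCA compression step described above, here is a minimal NumPy sketch (not the authors' code); `beats` is assumed to be a matrix of time-aligned heartbeats, one beat per row, and only the first k principal components are retained.

```python
import numpy as np

def pca_compress(beats, k):
    """Project time-aligned heartbeats (one beat per row) onto the first
    k principal components; returns everything needed to reconstruct."""
    mean = beats.mean(axis=0)
    centered = beats - mean
    # Rows of Vt are the principal directions (eigenvectors of the
    # sample covariance), ordered by decreasing variance.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]
    coeffs = centered @ basis.T        # k numbers per beat
    return mean, basis, coeffs

def pca_reconstruct(mean, basis, coeffs):
    """Rebuild the beats; the first few components carry most of the
    signal energy, so a small k already gives a recognizable ECG."""
    return mean + coeffs @ basis
```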

86 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...This facilitates the reduction of the correlation between several ECG signals or segments as well as the dimension of ECG data set [9, 11, 17, 19]....


  • ...Restoring the ECG waveform by the limited number of the orthogonal basis corresponds to the orthogonal projection off onto the subspace H defined by the k eigenvectors [4, 9, 11, 17]....


  • ...Using PCA compression, recognizable reconstruction of a given ECG signal may be achieved by summing the contributions of just the first few basis vectors as these contain most of the energy [2, 3, 9]....


  • ...The conventional methods of data compression are divided into two categories: direct data compression and transformation methods [2, 3, 9, 11]....


  • ...In order to give more validation to the behavior of the proposed PCA decomposition method, two additional manipulations were conducted with the CSE based ECG data sets [3, 4, 9, 17, 19]....


Proceedings ArticleDOI
07 Aug 2002
TL;DR: An electrocardiogram (ECG) frame classification technique based on dynamic time warping (DTW), a matching technique that has been used successfully in speech recognition, is presented; DTW is applied to ECG frames because ECG and speech signals have similar nonstationary characteristics.
Abstract: Presents an electrocardiogram (ECG) frame classification technique realized by a dynamic time warping (DTW) matching technique, which has been used successfully in speech recognition. We use DTW to classify ECG frames because ECG and speech signals have similar nonstationary characteristics. The DTW mapping function is obtained by searching the frame from its end to its start. A threshold is set up for the DTW matching residual, either to classify an ECG frame or to add a new class. Classification and establishment of a template set are carried out simultaneously. A frame is classified into the category with the minimal residual that satisfies the threshold requirement. A classification residual of 1.33% is achieved by the DTW for a 10-minute ECG recording.
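A minimal DTW classifier along these lines can be sketched in Python as follows. This is a generic rendering, not the paper's code: the end-to-start search refinement is omitted, and `threshold` is a hypothetical tuning parameter rather than the paper's value.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping residual between two ECG frames, via the
    standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify_frame(frame, templates, threshold):
    """Assign the frame to the template with minimal DTW residual if that
    residual satisfies the threshold; otherwise open a new class, so
    classification and template-set building happen simultaneously."""
    residuals = [dtw_distance(frame, t) for t in templates]
    if residuals and min(residuals) <= threshold:
        return int(np.argmin(residuals))
    templates.append(frame)
    return len(templates) - 1
```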

85 citations

Journal ArticleDOI
TL;DR: The results show that the N-PR cosine-modulated filter bank method outperforms the WP technique in both quality and efficiency.

82 citations


Cites background from "ECG data compression techniques-a u..."

  • ...A summary of these can be found in [3]....


Proceedings ArticleDOI
30 Oct 1997
TL;DR: In this article, the authors developed an affordable real-time ambulatory electrocardiogram (ECG) monitor prototype, which consists of an analogue signal conditioner, a micro-controller, external RAM and ROM, and a PCMCIA flash memory card and interfacing chip.
Abstract: The primary goal of this research is to develop an affordable real-time ambulatory electrocardiogram (ECG) monitor prototype. The system consists of an analogue signal conditioner, a micro-controller, external RAM and ROM, and a PCMCIA flash memory card with its interfacing chip. Research is carried out on ECG data compression and real-time diagnosis. For the purpose of ECG analysis, a reliable QRS detection algorithm requiring as little computation as possible has to be developed. This paper reports our work on the development of QRS detection algorithms, which are derived and improved from the first-derivative methods described in the paper by Friesen et al. Our results show that maximum slope detection, with the QRS onset selected when two successive slope values exceed the threshold, is the best method.
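A first-derivative detector in the spirit of the preferred method reads roughly as follows. This is a hedged sketch, not the paper's implementation: `threshold` and the 200 ms refractory period are hypothetical tuning choices.

```python
import numpy as np

def detect_qrs_onsets(ecg, fs, threshold):
    """Slope-based QRS detection sketch: flag an onset when two
    successive first-difference values exceed the threshold, then skip
    a refractory period before searching again."""
    slope = np.diff(ecg)
    refractory = int(0.2 * fs)     # assumed 200 ms lockout after a beat
    onsets = []
    i = 1
    while i < len(slope):
        if slope[i - 1] > threshold and slope[i] > threshold:
            onsets.append(i - 1)   # sample index of the detected onset
            i += refractory
        else:
            i += 1
    return onsets
```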

80 citations

Journal ArticleDOI
TL;DR: A direct waveform mean-shape vector quantization (MSVQ) is proposed here as an alternative for electrocardiographic (ECG) signal compression, leading to high compression ratios (CRs) while maintaining a low level of waveform distortion and preserving the main clinically interesting features of the ECG signals.
Abstract: A direct waveform mean-shape vector quantization (MSVQ) is proposed here as an alternative for electrocardiographic (ECG) signal compression. In this method, the mean values for short segments of the single-lead ECG signal are quantized as scalars, and the waveshapes of the mean-removed segments are coded through a vector quantizer. An entropy encoder is applied to both mean and vector codes to further increase compression without degrading the quality of the reconstructed signals. In this paper, the fundamentals of MSVQ are discussed, along with various parameter specifications such as the duration of signal segments, the wordlength of the mean-value quantization, and the size of the vector codebook. The method is assessed through percent-residual-difference measures on reconstructed signals, whereas its computational complexity is analyzed considering its real-time implementation. As a result, MSVQ has been found to be an efficient compression method, leading to high compression ratios (CRs) while maintaining a low level of waveform distortion and, consequently, preserving the main clinically interesting features of the ECG signals. CRs in excess of 39 have been achieved, yielding low data rates of about 140 bps. This compression factor makes this technique especially attractive in the area of ambulatory monitoring.
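The two quantization stages can be sketched as a toy encoder/decoder pair. Codebook training, the entropy coding stage, and the paper's parameter values are omitted, and `mean_step` is a hypothetical quantizer step size.

```python
import numpy as np

def msvq_encode(signal, seg_len, codebook, mean_step):
    """Mean-shape VQ sketch: split the signal into short segments,
    scalar-quantize each segment mean, then vector-quantize the
    zero-mean shape against `codebook` (shape: [K, seg_len])."""
    codes = []
    for start in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[start:start + seg_len]
        m_q = int(round(seg.mean() / mean_step))   # scalar mean code
        shape = seg - seg.mean()
        # nearest codeword by Euclidean distance
        idx = int(np.argmin(((codebook - shape) ** 2).sum(axis=1)))
        codes.append((m_q, idx))
    return codes

def msvq_decode(codes, codebook, mean_step):
    """Rebuild the waveform from (mean code, shape index) pairs."""
    return np.concatenate(
        [m_q * mean_step + codebook[idx] for m_q, idx in codes])
```

In a full implementation, both the mean codes and the shape indices would then be entropy coded, as the abstract describes.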

79 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...ECG data compression techniques are typically classified into one of three major categories [4], [5], namely direct data compression, transform coding, and parameter extraction methods....


  • ...However, although this measure is widely used, it may not express the clinical acceptability of the reconstructed signal [4]–[6] since low PRD values do not guarantee clinically acceptable quality, nor are high values synonymous with a truly distorted signal....

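For reference, the percent root-mean-square difference (PRD) discussed in the quotation above is, in its most common form,

```latex
\mathrm{PRD} \;=\; 100 \times
  \sqrt{\frac{\sum_{n}\bigl(x[n]-\hat{x}[n]\bigr)^{2}}{\sum_{n} x[n]^{2}}}
```

where x is the original and x-hat the reconstructed signal. Variants subtract the signal mean (or the ADC baseline) in the denominator, which is one reason reported PRD values are hard to compare across studies and why low PRD alone does not guarantee clinical acceptability.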

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
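The limiting process described here can be written out in one line. Discretizing a density p(x) into cells of width Delta and applying the discrete entropy formula gives (a standard textbook derivation, not text from the paper):

```latex
H_\Delta \;=\; -\sum_{i} p(x_i)\,\Delta\,\log\bigl(p(x_i)\,\Delta\bigr)
\;\approx\; -\int p(x)\,\log p(x)\,dx \;-\; \log\Delta
```

The negative log-Delta term diverges as the cell width shrinks, so absolute entropy is not defined for continuous signals; only the finite part h(X) (the differential entropy) and entropy differences such as mutual information carry over to the continuous case.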

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
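The construction is short enough to sketch in full. Here is a minimal Python rendering of the greedy merge Huffman describes, building codewords by repeatedly combining the two least frequent subtrees (a generic sketch, not code from the paper):

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) code for the symbols in
    `message`: repeatedly merge the two least probable subtrees so the
    average codeword length is minimized."""
    heap = [(count, i, {sym: ""}) for i, (sym, count)
            in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    next_id = len(heap)                # tie-breaker so dicts never compare
    while len(heap) > 1:
        c1, _, t1 = heapq.heappop(heap)
        c2, _, t2 = heapq.heappop(heap)
        # prefix 0/1 onto the codewords of each merged subtree
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (c1 + c2, next_id, merged))
        next_id += 1
    return heap[0][2]

codes = huffman_code("AAABBC")
print(codes)   # most frequent symbol gets the shortest codeword,
               # e.g. {'A': '0', 'C': '10', 'B': '11'}
```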

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
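For the all-pole case, the least-squares solution reduces to a small linear system. A minimal NumPy sketch of the autocorrelation method follows; the Levinson-style recursion for reflection coefficients is replaced here by a direct solve for brevity.

```python
import numpy as np

def lpc_coefficients(x, order):
    """All-pole linear prediction: model x[n] as a weighted sum of its
    `order` past values and solve the least-squares (autocorrelation)
    normal equations for the predictor coefficients.
    x: 1-D float NumPy array."""
    # autocorrelation lags r[0..order]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    # Toeplitz normal equations R a = r[1:]; a Levinson-Durbin recursion
    # would exploit the Toeplitz structure in O(order^2).
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:])   # x_hat[n] = sum_k a[k] x[n-1-k]

def prediction_residual(x, a):
    """Residual e[n] = x[n] - x_hat[n]; in DPCM-style compression it is
    this residual, not the signal, that gets quantized and coded."""
    order = len(a)
    e = x[order:].astype(float)
    for k in range(order):
        e -= a[k] * x[order - 1 - k : len(x) - 1 - k]
    return e
```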

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
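The interval-narrowing idea can be shown in a few lines. This toy Python encoder/decoder uses a fixed model and plain floating point, which restricts it to short messages; practical coders, including the one this paper presents, use incremental integer arithmetic with renormalization.

```python
def arith_encode(message, model):
    """Textbook arithmetic coding: each symbol narrows [low, high)
    according to its cumulative probability interval."""
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        c_lo, c_hi = model[sym]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2            # any number inside the interval

def arith_decode(value, model, n):
    """Invert the encoder by locating `value` inside successive symbol
    intervals; n is the known message length."""
    out, low, high = [], 0.0, 1.0
    for _ in range(n):
        span = high - low
        pos = (value - low) / span
        for sym, (c_lo, c_hi) in model.items():
            if c_lo <= pos < c_hi:
                out.append(sym)
                low, high = low + span * c_lo, low + span * c_hi
                break
    return "".join(out)

# hypothetical fixed model: P(A)=0.6, P(B)=0.3, P(C)=0.1 as cumulative ranges
model = {"A": (0.0, 0.6), "B": (0.6, 0.9), "C": (0.9, 1.0)}
code = arith_encode("ABAC", model)
assert arith_decode(code, model, 4) == "ABAC"
```

Because the whole message maps to one number in a nested interval, code length approaches the entropy without rounding each symbol to a whole number of bits, which is why arithmetic coding outperforms Huffman coding for skewed or adaptive models.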

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
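Of the transforms listed, the discrete cosine transform is the usual illustration of the energy compaction that makes transform-domain compression work. A minimal SciPy sketch (illustrative, not from the paper): keep only the largest-magnitude DCT coefficients and invert.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_compress(signal, keep):
    """Transform-coding sketch: take the orthonormal DCT, zero all but
    the `keep` largest-magnitude coefficients (where most of the energy
    concentrates), and invert to get the lossy reconstruction."""
    coeffs = dct(signal, norm="ortho")
    small = np.argsort(np.abs(coeffs))[:-keep]   # indices to discard
    coeffs[small] = 0.0
    return idct(coeffs, norm="ortho")
```

Storing `keep` coefficients plus their positions instead of all samples trades reconstruction error for rate, which is the trade-off the survey's performance criteria (mean-square error, rate distortion, data compression) quantify.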

928 citations