Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
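
To make the tolerance-comparison family concrete, here is a minimal sketch of the Turning Point (TP) method named in the abstract: out of every two incoming samples it retains the one that preserves the local change of slope, giving a fixed 2:1 reduction. The function name and the plain-list interface are assumptions of this sketch, not details taken from the paper.

# Hedged sketch of the Turning Point (TP) direct compression method.
# Assumes a plain Python list of ECG samples; halves the sample count by
# keeping, out of every two incoming samples, the one that preserves the
# local slope change.

def turning_point(samples):
    """Return roughly half the samples, retaining turning points."""
    if len(samples) < 3:
        return list(samples)
    kept = [samples[0]]
    x0 = samples[0]
    i = 1
    while i + 1 < len(samples):
        x1, x2 = samples[i], samples[i + 1]
        s1 = (x1 - x0 > 0) - (x1 - x0 < 0)   # sign of first slope
        s2 = (x2 - x1 > 0) - (x2 - x1 < 0)   # sign of second slope
        # If the slope changes sign at x1, x1 is a turning point: keep it.
        x0 = x1 if s1 * s2 < 0 else x2
        kept.append(x0)
        i += 2
    return kept

# Example: a small synthetic segment is reduced to about half its length.
print(turning_point([0, 2, 5, 4, 3, 6, 8, 7, 2, 1]))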
Citations
Journal ArticleDOI
TL;DR: An ECG compression algorithm using a combination of a Lorentzian-function model and the discrete Fourier transform is proposed and tested for its coding efficiency and reconstruction capability by applying it to several popular benchmark ECG signals.

22 citations

Dissertation
01 Jan 2003
TL;DR: The method of using important ECG features plus a suitable number of compressed ECG signals can dramatically decrease the complexity of the neural network structure, which can increase the testing speed and the accuracy rate of the network verification.
Abstract: An electrocardiogram (ECG) is a bioelectrical signal that records the heart's electrical activity over time. It is an important diagnostic tool for assessing heart function. The interpretation of an ECG signal is an application of pattern recognition, and the techniques used comprise signal pre-processing, QRS detection, feature extraction, and a neural network for signal classification. In this project, the signal processing and neural network toolboxes are used in the Matlab environment. The processed signals come from the Massachusetts Institute of Technology Beth Israel Hospital (MIT-BIH) arrhythmia database, which was developed for research in cardiac electrophysiology. Five ECG waveform conditions were selected from the MIT-BIH database for this research. The ECG samples were processed and normalised to produce a set of features that can be used in different neural network structures, and the resulting recognition rates were recorded. The backpropagation algorithm is applied to different network structures and the performance is measured in each case. This research focuses on finding the best neural network structure for ECG signal classification; a number of signal pre-processing and QRS detection algorithms were also tested, and the feature extraction is based on an existing algorithm. The recognition rates are compared to find a better structure for ECG classification, and different ECG feature inputs were used in the experiments to identify a desirable feature input. Among the structures tested, a three-layer network with 25 inputs, 5 neurons in its hidden layer, and 5 neurons in the output layer performed best, with a highest recognition rate of 91.8% for the five cardiac conditions; the average accuracy rate for this kind of structure was 84.93%. It was also found that a 25-feature input is suitable for training and testing in ECG classification. Based on these results, using important ECG features plus a suitable number of compressed ECG signals can dramatically decrease the complexity of the neural network structure, which increases the testing speed and the accuracy of the network verification. Suggestions for planning future experiments are also given.
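
The abstract fixes the network topology (25 feature inputs, 5 output classes, and a small hidden layer) but not its implementation; the sketch below is one plausible reading, with a single hidden layer of 5 sigmoid neurons trained by plain batch backpropagation. The activations, learning rate, and toy data are assumptions, not details from the dissertation.

# Illustrative 25-5-5 feed-forward network trained with backpropagation.
# Layer sizes follow the abstract; everything else is an assumption.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(25, 5))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(5, 5))    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(X, T, lr=0.1):
    """One backpropagation step on a batch of feature vectors X, targets T."""
    global W1, W2
    H = sigmoid(X @ W1)                    # hidden activations
    Y = sigmoid(H @ W2)                    # network outputs
    dY = (Y - T) * Y * (1 - Y)             # output-layer delta (squared error)
    dH = (dY @ W2.T) * H * (1 - H)         # hidden-layer delta
    W2 -= lr * H.T @ dY
    W1 -= lr * X.T @ dH
    return np.mean((Y - T) ** 2)

# Toy usage: 100 random 25-feature vectors, one-hot targets for 5 classes.
X = rng.normal(size=(100, 25))
T = np.eye(5)[rng.integers(0, 5, size=100)]
for epoch in range(200):
    mse = train_step(X, T)
print("final MSE:", mse)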

22 citations

Journal ArticleDOI
TL;DR: An elegant algorithm that uses WT alone to identify the characteristic points of an ECG signal is described, and a composite WT-based PCA method is used for redundant data reduction and better feature extraction.
Abstract: In many medical applications, feature selection is obvious; but in medical domains, selecting features and creating a feature vector may require more effort. The wavelet transform (WT) technique is used to identify the characteristic points of an electrocardiogram (ECG) signal with fairly good accuracy, even in the presence of severe high-frequency and low-frequency noise. Principal component analysis (PCA) is a suitable technique for ECG data analysis, feature extraction, and image processing — an important technique that is not based upon a probability model. The aim of the paper is to derive better diagnostic parameters for reducing the size of ECG data while preserving morphology, which can be done by PCA. In this analysis, PCA is used for decorrelation of ECG signals, noise, and artifacts from various raw ECG data sets. The aim of this paper is twofold: first, to describe an elegant algorithm that uses WT alone to identify the characteristic points of an ECG signal; and second, to use a composite WT-based PCA method for redundant data reduction and better feature extraction. PCA scatter plots can be observed as a good basis for feature selection to account for cardiac abnormalities. The study is analyzed with higher-order statistics, in contrast to the conventional methods that use only geometric characteristics of feature waves and lower-order statistics. A new algorithm — viz. PCA variance estimator — is developed for this analysis, and the results are also obtained for different combinations of leads to find correlations for feature classification and useful diagnostic information. PCA scatter plots of various chest and augmented ECG leads are obtained to examine the varying orientations of the ECG data in different quadrants, indicating the cardiac events and abnormalities. The efficacy of the PCA algorithm is tested on different leads of 12-channel ECG data; file no. 01 of the Common Standards for Electrocardiography (CSE) database is used for this study. Better feature extraction is obtained for some specific combinations of leads, and significant improvement in signal quality is achieved by identifying the noise and artifact components. The quadrant analysis discussed in this paper highlights the filtering requirements for further ECG processing after performing PCA, as a primary step for decorrelation and dimensionality reduction. The values of the parameters obtained from the results of PCA are also compared with those of wavelet methods.
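
As an illustration of the decorrelation and dimensionality-reduction step described above, the following is a hedged sketch of PCA applied to a multi-lead ECG matrix: leads are treated as variables, the covariance matrix is eigendecomposed, and the signal is projected onto the leading components. The 12 pseudo-leads and the number of retained components are illustrative assumptions.

# Sketch of PCA-based decorrelation of multi-lead ECG data (assumed setup).
import numpy as np

def pca_decorrelate(X, n_components=3):
    """X: (n_samples, n_leads) ECG matrix.  Returns scores and variance ratios."""
    Xc = X - X.mean(axis=0)                      # remove per-lead DC offset
    cov = np.cov(Xc, rowvar=False)               # lead-by-lead covariance
    evals, evecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    order = np.argsort(evals)[::-1]              # sort descending
    evals, evecs = evals[order], evecs[:, order]
    scores = Xc @ evecs[:, :n_components]        # decorrelated principal components
    var_ratio = evals / evals.sum()              # variance explained per component
    return scores, var_ratio

# Toy usage: 12 correlated pseudo-leads built from a common source.
rng = np.random.default_rng(1)
source = np.sin(np.linspace(0, 20, 5000))[:, None]
X = source @ rng.normal(size=(1, 12)) + 0.05 * rng.normal(size=(5000, 12))
scores, var_ratio = pca_decorrelate(X)
print("variance explained by first 3 PCs:", var_ratio[:3].round(3))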

22 citations

Proceedings ArticleDOI
05 Jun 2000
TL;DR: A multiwavelet system is used to choose optimal wavelets for electrocardiogram (ECG) signal data compression that can simultaneously provide perfect reconstruction while preserving length, good performance at boundaries, and high order of approximation.
Abstract: This paper presents a technique used to choose optimal wavelets for electrocardiogram (ECG) signal data compression. At present, it is not clear which wavelet function is suitable for data compression of ECG signals. An important issue is that the performance of wavelet based algorithms may depend on the particular basis chosen for the signal compression. Various criteria are used to evaluate the fidelity of the reconstruction. The percent root difference (PRD) has been widely used in the literature as the principal error criterion. In this paper, three more criteria are used, namely, signal to noise ratio (SNR), distortion (D), and root mean square error (RMSE). We use a multiwavelet system that can simultaneously provide perfect reconstruction while preserving length (orthogonality), good performance at boundaries (via linear-phase symmetry), and high order of approximation (vanishing moments). Experimental results are shown for both multiwavelets and scalar wavelets.
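
A small sketch of the fidelity criteria named above (PRD, SNR, RMSE), computed between an original ECG segment x and its reconstruction y. The formulas follow common usage in the ECG compression literature; whether the signal mean is subtracted in the PRD denominator varies between papers, so its omission here is an assumption.

# Common reconstruction-fidelity measures for compressed ECG signals.
import numpy as np

def prd(x, y):
    """Percent root-mean-square difference (no mean removal assumed)."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def snr_db(x, y):
    """Signal-to-noise ratio of the reconstruction, in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))

def rmse(x, y):
    """Root mean square error."""
    return np.sqrt(np.mean((x - y) ** 2))

# Toy usage with a synthetic signal and a slightly distorted copy.
x = np.sin(np.linspace(0, 10, 1000))
y = x + 0.01 * np.random.default_rng(2).normal(size=x.size)
print(f"PRD={prd(x, y):.2f}%  SNR={snr_db(x, y):.1f} dB  RMSE={rmse(x, y):.4f}")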

22 citations

Journal ArticleDOI
TL;DR: Lossless compression schemes for ECG signals based on neural network predictors and entropy encoders are presented and it is shown that superior performances in terms of the distortion parameters of the reconstructed signals can be achieved with the proposed schemes.
Abstract: This paper presents lossless compression schemes for ECG signals based on neural network predictors and entropy encoders. Decorrelation is achieved by nonlinear prediction in the first stage, and the residues are encoded with lossless entropy encoders in the second stage. Different types of lossless encoders, such as Huffman, arithmetic, and run-length encoders, are used. The performance of the proposed neural network predictor-based compression schemes is evaluated using standard distortion and compression efficiency measures. Selected records from the MIT-BIH arrhythmia database are used for performance evaluation. The proposed compression schemes are compared with linear predictor-based compression schemes, and it is shown that about 11% improvement in compression efficiency can be achieved for the neural network predictor-based schemes with the same quality and a similar setup. They are also compared with other known ECG compression methods, and the experimental results show that superior performance in terms of the distortion parameters of the reconstructed signals can be achieved with the proposed schemes.
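
The two-stage structure described above (predictive decorrelation followed by entropy coding of the residues) can be sketched as follows. A fixed second-order linear predictor stands in here for the paper's neural network predictor, and the empirical entropy of the residues is reported as a proxy for the rate an ideal entropy coder would achieve; the function names and the toy signal are assumptions.

# Prediction stage + residue-entropy estimate (illustrative, not the paper's code).
import numpy as np

def residues_second_order(x):
    """Integer residues of the predictor x_hat[n] = 2*x[n-1] - x[n-2]."""
    x = np.asarray(x, dtype=np.int64)
    pred = 2 * x[1:-1] - x[:-2]
    return x[2:] - pred

def empirical_entropy_bits(r):
    """Empirical entropy (bits/sample) of the residue sequence."""
    _, counts = np.unique(r, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy usage: a slowly varying integer "ECG-like" sequence.  The decoder can
# rebuild the signal exactly from the first two samples plus the residues,
# so the overall scheme is lossless.
rng = np.random.default_rng(3)
x = np.cumsum(rng.integers(-2, 3, size=10000)) + 1024
r = residues_second_order(x)
print("raw range:", x.min(), "-", x.max(),
      "| residue entropy:", round(empirical_entropy_bits(r), 2), "bits/sample")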

21 citations


Cites background from "ECG data compression techniques-a u..."


References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
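
The limiting process described in the abstract is usually summarised by the standard relation between the entropy of a quantised variable and the differential entropy of its continuous limit; the formula below states that relation and is not quoted from the paper.

% Standard relation (not a quotation from the paper): partition the real line
% into bins of width \Delta, let X^\Delta be the quantised variable, and let
% \Delta -> 0.
\[
  H\!\left(X^{\Delta}\right)
    = -\sum_{i} p(x_i)\,\Delta \,\log_2\!\bigl(p(x_i)\,\Delta\bigr)
    \;\longrightarrow\;
    \underbrace{-\int_{-\infty}^{\infty} p(x)\log_2 p(x)\,dx}_{h(X)}
    \;-\;\log_2 \Delta ,
  \qquad \Delta \to 0 .
\]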

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
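
A minimal sketch of the construction behind such a minimum-redundancy (Huffman) code: the two least-probable subtrees are repeatedly merged, and codewords are read off the merge path. The symbol frequencies used here are illustrative.

# Huffman code construction (illustrative symbol weights).
import heapq

def huffman_code(freqs):
    """freqs: dict symbol -> weight.  Returns dict symbol -> bit string."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least-probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, count, merged])
        count += 1
    return heap[0][2]

# Example: skewed symbol frequencies give short codes to common symbols.
print(huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}))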

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
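
The all-pole, least-squares (autocorrelation-method) formulation described above can be sketched in a few lines: the predictor coefficients are obtained by solving the normal (Yule-Walker) equations built from the signal's autocorrelation. The model order and the AR(2) test signal are assumptions of this sketch.

# Autocorrelation-method linear prediction (illustrative implementation).
import numpy as np

def lpc_autocorrelation(x, p):
    """Return order-p coefficients a such that x[n] ~ sum_k a[k]*x[n-1-k]."""
    x = np.asarray(x, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz
    return np.linalg.solve(R, r[1:])             # normal (Yule-Walker) equations

# Toy usage: an AR(2) process is recovered approximately from its samples.
rng = np.random.default_rng(4)
x = np.zeros(5000)
for n in range(2, len(x)):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + rng.normal()
print(lpc_autocorrelation(x, 2))   # close to [1.5, -0.7]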

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method; arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
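
A toy sketch of the interval-narrowing idea behind arithmetic coding: each symbol shrinks the current interval in proportion to its probability, and any number in the final interval identifies the whole message. This floating-point version is only adequate for short messages and is not a production coder; the model and message are illustrative.

# Floating-point arithmetic coding demo (short messages only).

def arithmetic_encode(message, probs):
    """Return one number inside the interval that encodes `message`."""
    ranges, edge = {}, 0.0
    for sym, p in probs.items():            # cumulative range per symbol
        ranges[sym] = (edge, edge + p)
        edge += p
    low, high = 0.0, 1.0
    for sym in message:                      # narrow the interval per symbol
        span = high - low
        s_lo, s_hi = ranges[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2

def arithmetic_decode(value, length, probs):
    ranges, edge = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (edge, edge + p)
        edge += p
    out = []
    for _ in range(length):
        for sym, (s_lo, s_hi) in ranges.items():
            if s_lo <= value < s_hi:
                out.append(sym)
                value = (value - s_lo) / (s_hi - s_lo)   # rescale and continue
                break
    return "".join(out)

probs = {"a": 0.6, "b": 0.3, "c": 0.1}
code = arithmetic_encode("abacab", probs)
print(code, arithmetic_decode(code, 6, probs))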

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
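
As a small illustration of transform-domain compression with one of the orthogonal transforms surveyed above, the sketch below builds an orthonormal DCT-II matrix, keeps only the largest-magnitude coefficients of a block, and inverts. The block size and the number of retained coefficients are illustrative assumptions.

# Transform-domain compression with an explicit orthonormal DCT-II matrix.
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix C, so y = C @ x and x = C.T @ y."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C *= np.sqrt(2.0 / N)
    C[0, :] *= 1.0 / np.sqrt(2.0)
    return C

def compress_block(x, keep):
    C = dct_matrix(len(x))
    y = C @ x                                   # forward transform
    idx = np.argsort(np.abs(y))[:-keep]         # indices of the smallest coefficients
    y[idx] = 0.0                                # discard them
    return C.T @ y                              # inverse transform

# Toy usage: a smooth block reconstructed from 8 of 64 coefficients.
x = np.sin(np.linspace(0, np.pi, 64))
x_hat = compress_block(x, keep=8)
print("max reconstruction error:", np.max(np.abs(x - x_hat)))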

928 citations