Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
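As a concrete illustration of one of the direct methods listed above, here is a minimal sketch of the Turning Point idea: 2:1 compression that keeps, from each pair of incoming samples, the sample that preserves a local slope change. This is an illustrative toy (function name and details are my own), not the published algorithm:

```python
import numpy as np

def turning_point(x):
    """Toy Turning Point (TP) sketch: from each incoming pair of samples,
    retain the intermediate sample if the slope changes sign there (a
    "turning point"); otherwise retain the later sample.  Achieves a
    fixed 2:1 reduction while preserving local extrema."""
    out = [x[0]]
    i = 1
    while i + 1 < len(x):
        x0, x1, x2 = out[-1], x[i], x[i + 1]
        s1 = np.sign(x1 - x0)
        s2 = np.sign(x2 - x1)
        # slope sign change at x1 => x1 is a turning point: keep it;
        # otherwise drop x1 and keep x2
        out.append(x1 if s1 * s2 < 0 else x2)
        i += 2
    return np.array(out)
```

Note how extrema (e.g. the R-wave peak of an ECG) survive even though every other sample is dropped.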
Citations
Proceedings ArticleDOI
20 Jul 2016
TL;DR: Some of the most used Least Mean Squares (LMS) based Finite Impulse Response (FIR) Adaptive Filters are introduced and which of them are the most effective under varying circumstances is determined.
Abstract: The extraction of the Fetal Electrocardiogram (fECG) from the composite Electrocardiogram (ECG) signal obtained from the abdominal lead is discussed. The main point of this paper is to introduce some of the most used Least Mean Squares (LMS) based Finite Impulse Response (FIR) Adaptive Filters and to determine which of them are the most effective under varying circumstances. Experimental results suggest the ideal combination of the chosen settings for these functions. Results of fECG extraction are assessed by Percentage Root-Mean-Square Difference (PRD), input and output Signal to Noise Ratios (SNRs), and Root Mean Square Error (RMSE). Based on simulations conclusions, optimal convergence constant value and filter order were empirically determined. Setting the optimal value of the convergence constant and filter order of adaptive algorithm can be considered a contribution to original results. These results can be used on real records fECG, where it is difficult to determine because of the missing reference.
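The LMS-based FIR adaptive filtering this paper compares can be sketched as an interference canceller. The toy version below uses synthetic signals and hypothetical parameter values (not the paper's recordings or settings) to show the roles of the convergence constant mu and the filter order:

```python
import numpy as np

def lms_filter(d, x, order=8, mu=0.02):
    """Minimal LMS adaptive FIR canceller sketch.  d: primary signal
    (e.g. abdominal ECG = maternal + fetal), x: reference correlated
    with the interference (e.g. thoracic maternal ECG).  Returns the
    error signal e, which after convergence approximates the component
    of d uncorrelated with x (the fetal ECG in this application)."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]   # most recent reference samples
        y = w @ u                  # FIR output: interference estimate
        e[n] = d[n] - y            # error = primary minus estimate
        w += mu * e[n] * u         # LMS weight update
    return e
```

With a narrowband interference, a few past reference samples suffice to predict it, so the canceller removes the interference tone and leaves the uncorrelated component in the error signal.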

17 citations

Journal ArticleDOI
TL;DR: The aim of the work has been to develop a method that can perform segmentation with an acceptable amount of residual error without a need to define a large set of parameters that control the segmentation process.
Abstract: The authors have studied the problem of approximating a digital signal with a suitable continuous broken line. They use the approximating broken line for further analysis of the signal, such as detection of peaks, waves, and other structural features. They can also save a considerable amount of storage space with an approximation that does not lose too much significant information about the original signal. The authors' work is based on examining different distance metrics and different segmentation methods with respect to the residual error remaining in the resulting approximation. The aim of the work has been to develop a method that can perform segmentation with an acceptable amount of residual error without the need to define a large set of parameters that control the segmentation process. The authors' contribution is to examine the effect of the estimated compression ratio of the resulting approximation and to find an estimate of this compression ratio. They first define a target in the form of a compression ratio of the resulting approximation and then, by applying their method, try to find a suitable threshold parameter to achieve this target. The authors have tested their method with electrocardiogram (ECG) signals, and the compression ratio of the approximation has been found to be a suitable target to control the segmentation process.
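A greedy broken-line approximation of the kind described can be sketched in a few lines; this is a simplified stand-in (my own variant with vertical deviation as the distance metric), where the single threshold plays the role of the one control parameter the paper tunes to hit a target compression ratio:

```python
def broken_line(x, tol):
    """Greedy piecewise-linear ("broken line") approximation sketch:
    grow each segment until the maximum vertical deviation of interior
    samples from the chord exceeds tol, then start a new segment at the
    last endpoint that still satisfied the tolerance.  Returns the knot
    indices; the compression ratio is roughly len(x) / len(knots)."""
    knots = [0]
    start = 0
    for end in range(2, len(x)):
        x0, x1 = x[start], x[end]
        # deviation of interior samples from the chord (start, end)
        err = max(abs(x[i] - (x0 + (x1 - x0) * (i - start) / (end - start)))
                  for i in range(start + 1, end))
        if err > tol:
            knots.append(end - 1)
            start = end - 1
    knots.append(len(x) - 1)
    return knots
```

Searching over tol until len(knots) hits a target mirrors the paper's strategy of controlling segmentation through a target compression ratio rather than many tuning parameters.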

17 citations

Journal ArticleDOI
TL;DR: A method for exploiting both the temporal and spatial redundancy, typical of multidimensional biomedical signals, has been proposed and proved to be superior to previous coding schemes.
Abstract: In this paper, we propose a model-based lossy coding technique for biomedical signals in multiple dimensions. The method is based on the codebook-excited linear prediction approach and models signals as filtered noise. The filter models short-term redundancy in time; the shape of the power spectrum of the signal and the residual noise, quantized using an algebraic codebook, is used for reconstruction of the waveforms. In addition to temporal redundancy, redundancy in the coding of the filter and residual noise across spatially related signals is also exploited, yielding better compression performance in terms of SNR for a given bit rate. The proposed coding technique was tested on sets of multichannel electromyography (EMG) and EEG signals as representative examples. For 2-D EMG recordings of 56 signals, the coding technique resulted in an SNR gain of 3.4 ± 1.3 dB with respect to independent coding of the signals in the grid when the compression ratio was 89%. For EEG recordings of 15 signals and the same compression ratio as for EMG, the average gain in SNR was 2.4 ± 0.1 dB. In conclusion, a method for exploiting both the temporal and spatial redundancy, typical of multidimensional biomedical signals, has been proposed and proved to be superior to previous coding schemes.
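The filtered-noise idea behind codebook-excited linear prediction can be sketched in one dimension. In this toy version (my own simplification: a uniform scalar quantizer stands in for the paper's algebraic codebook, and the spatial-redundancy stage is omitted), a short-term predictor is fitted, its residual is quantized, and the signal is resynthesized through the all-pole synthesis filter:

```python
import numpy as np

def lpc_code(x, order=4, step=0.05):
    """Toy analysis-by-prediction coder: least-squares fit of
    x[n] ~ sum_k a[k] * x[n-k], coarse quantization of the prediction
    residual (stand-in for an algebraic codebook), and resynthesis by
    filtering the quantized residual through the all-pole filter."""
    N = len(x)
    # rows: [x[n-1], ..., x[n-order]] predicting x[n]
    A = np.column_stack([x[order - k:N - k] for k in range(1, order + 1)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    r = x[order:] - A @ a                  # prediction residual
    rq = step * np.round(r / step)         # coarse residual "codebook"
    xh = np.empty(N)
    xh[:order] = x[:order]                 # transmit initial samples
    for n in range(order, N):
        past = xh[n - 1::-1][:order]       # xh[n-1], ..., xh[n-order]
        xh[n] = a @ past + rq[n - order]   # synthesis filter + excitation
    return xh
```

Only the filter coefficients, initial samples, and quantized residual need be stored; for predictable signals the residual is near zero and compresses well.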

17 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Extensive work has been performed on biomedical signal compression [3], [4]....


Journal ArticleDOI
TL;DR: A novel ECG signal compression framework based on sparse representation, using a set of ECG segments as a natural basis and k-LiMapS as a fine-tuned sparsity solver algorithm that guarantees the required signal reconstruction quality in terms of PRDN (Normalized Percentage Root-mean-square Difference).

17 citations

Journal ArticleDOI
TL;DR: The proposed parametric modeling technique for the electrocardiogram (ECG) signal, based on a signal-dependent orthogonal transform, maps the ECG heartbeats into the singular value (SV) domain using the left singular vector matrix of the impulse response matrix of the LPC filter.
Abstract: In this letter, we propose a parametric modeling technique for the electrocardiogram (ECG) signal based on a signal-dependent orthogonal transform. The technique involves mapping the ECG heartbeats into the singular value (SV) domain using the left singular vector matrix of the impulse response matrix of the LPC filter. The resulting spectral coefficient vector is concentrated, leading to an approximation by a sum of exponentially damped sinusoids (EDS). A two-stage procedure is then used to estimate the model parameters. Prony's method is first employed to obtain initial estimates of the model, and the Levenberg-Marquardt (LM) method is then applied to solve the nonlinear least-squares optimization problem. The ECG signal is reconstructed using the EDS parameters and the linear prediction coefficients via the inverse transform. The merit of the proposed modeling technique is illustrated on clinical data collected from the MIT-BIH database, including all the arrhythmia classes recommended by the Association for the Advancement of Medical Instrumentation (AAMI). For all the tested ECG heartbeats, the average values of the percent root mean square difference (PRD) between the actual and the reconstructed signals were relatively low, varying between a minimum of 3.1545% for the Premature Ventricular Contraction (PVC) class and a maximum of 10.8152% for the Nodal Escape (NE) class.
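The first stage of the two-stage procedure, Prony's method, can be sketched as follows (the Levenberg-Marquardt refinement stage and the SV-domain mapping are omitted; function names are my own):

```python
import numpy as np

def prony(x, p):
    """Prony's method sketch: model x[n] as a sum of p exponentially
    damped sinusoids c_k * z_k**n.  Step 1 solves the linear-prediction
    equations x[n] = -sum_m a[m] x[n-m]; the roots of the resulting
    polynomial are the poles z_k.  Step 2 solves a Vandermonde
    least-squares problem for the complex amplitudes c_k."""
    N = len(x)
    A = np.column_stack([x[p - m:N - m] for m in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))      # model poles
    V = np.vander(z, N, increasing=True).T        # V[n, k] = z_k**n
    c, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return z, c, (V @ c).real
```

In the paper's pipeline, these (z, c) estimates would seed the LM iteration for the nonlinear least-squares fit.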

17 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
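The limiting process described here can be checked numerically for a concrete density: quantize a Gaussian into bins of width Δ, compute the discrete entropy, and observe that it approaches the differential entropy plus -log2(Δ). This small check (my own construction, with a midpoint-rule approximation of bin probabilities) makes the limit tangible:

```python
import math

def discretized_entropy(sigma, delta, span=10.0):
    """Chop a Gaussian density into bins of width delta, compute the
    discrete entropy of the bin probabilities, and add log2(delta);
    as delta shrinks, this approaches the differential entropy
    0.5 * log2(2 * pi * e * sigma**2)."""
    def pdf(t):
        return math.exp(-t * t / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))
    H = 0.0
    t = -span * sigma
    while t < span * sigma:
        p = pdf(t + delta / 2) * delta   # midpoint-rule bin probability
        if p > 0:
            H -= p * math.log2(p)
        t += delta
    return H + math.log2(delta)
```

The -log2(Δ) term diverging as Δ shrinks is exactly the "new effect" of the continuous case: only entropy differences, not absolute entropies, survive the limit.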

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
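The construction behind minimum-redundancy codes is short enough to sketch: repeatedly merge the two least probable subtrees, extending each symbol's codeword with a 0 or 1 at every merge. This is a standard textbook rendering (names are my own), not the paper's original presentation:

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) code for the symbols of
    message: repeatedly merge the two least frequent subtrees; the
    resulting prefix-free code minimizes the average number of code
    digits per message symbol."""
    counts = Counter(message)
    if len(counts) == 1:
        return {next(iter(counts)): "0"}
    # heap entries: (count, tiebreak, {symbol: partial codeword})
    heap = [(c, i, {s: ""}) for i, (s, c) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        c1, _, t1 = heapq.heappop(heap)
        c2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in t1.items()}
        merged.update({s: "1" + w for s, w in t2.items()})
        heapq.heappush(heap, (c1 + c2, tie, merged))
        tie += 1
    return heap[0][2]
```

Frequent symbols end up with short codewords and rare symbols with long ones, which is the sense in which the average number of coding digits per message is minimized.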

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
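The stationary (autocorrelation) method of all-pole modeling described here is usually solved with the Levinson-Durbin recursion, which also exposes the reflection (partial correlation) coefficients recommended above for quantization and transmission. A minimal sketch (my own rendering of the standard recursion):

```python
import numpy as np

def levinson_durbin(x, order):
    """Autocorrelation-method linear prediction: solve the normal
    equations by the Levinson-Durbin recursion.  Returns the predictor
    coefficients a (x[n] ~ sum_k a[k] * x[n-k-1]), the reflection
    (partial correlation) coefficients, and the final prediction error
    power (unnormalized, since r is a raw autocorrelation sum)."""
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order)
    k_refl = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j - 1] * r[i - j] for j in range(1, i))
        k = acc / err                      # i-th reflection coefficient
        k_refl[i - 1] = k
        new_a = a.copy()
        new_a[i - 1] = k
        for j in range(1, i):              # update lower-order coefficients
            new_a[j - 1] = a[j - 1] - k * a[i - j - 1]
        a = new_a
        err *= (1 - k * k)                 # shrink prediction error power
    return a, k_refl, err
```

The reflection coefficients are bounded by 1 in magnitude for a valid model, which is one reason they quantize more gracefully than the predictor coefficients themselves.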

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
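One of the surveyed transforms, the discrete cosine transform, makes the data-compression criterion concrete: for smooth signals, most of the energy lands in a few low-order coefficients. A small sketch (my own construction of the orthonormal DCT-II matrix, not from the paper):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: C[k, n] = sqrt(2/N) * cos(pi*(2n+1)*k/(2N)),
    with the k = 0 row scaled by 1/sqrt(2) so that C @ C.T = I."""
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / N)
```

Applying C to a smooth ramp concentrates nearly all the energy in the first two coefficients; keeping only those and inverting with C.T is the essence of transform-domain data compression.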

928 citations