Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories (tolerance-comparison compression, DPCM, and entropy coding), and a framework for the evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
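
To make one of the direct methods named above concrete, the following is a minimal Python sketch of the turning-point (TP) idea: a fixed 2:1 reduction that keeps, from each pair of incoming samples, the one preserving a local slope change. The function name and the toy sine-wave usage are illustrative assumptions, not the survey's reference implementation.

```python
import numpy as np

def turning_point_compress(x):
    """Minimal sketch (not the survey's reference code) of the turning-point
    idea: fixed 2:1 reduction that keeps, out of every incoming pair, the
    sample that preserves a local slope change relative to the last
    retained sample."""
    x = np.asarray(x, dtype=float)
    kept = [x[0]]                        # the last retained sample seeds the output
    for i in range(1, len(x) - 1, 2):    # process the remaining samples in pairs
        x0, x1, x2 = kept[-1], x[i], x[i + 1]
        s1 = np.sign(x1 - x0)
        s2 = np.sign(x2 - x1)
        # a sign change between consecutive slopes marks a turning point at x1
        kept.append(x1 if s1 * s2 < 0 else x2)
    return np.array(kept)

# toy usage: roughly half the samples survive while local extrema are preserved
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 5 * t)
print(len(signal), len(turning_point_compress(signal)))
```
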
Citations
Journal Article
TL;DR: In this work four linear transforms are studied and numerically verified using real ECG data, and the combination of spatial and temporal decorrelation is proposed and discussed as a practical and lossless method for a real implementation.
Abstract: This paper is devoted to transform-based methods for decorrelation of simultaneously recorded ECG channels. The conventional 12-lead ECG recordings, due to the non-optimal lead positioning, contain highly redundant data. Eliminating this redundancy yields new possibilities for lossless coding of the ECG, meeting the most stringent expectations about the quality of the stored signal. The statistical properties featured by uncorrelated signals in the transform domain are more appropriate for data distribution-based coding techniques. In our work four linear transforms are studied and numerically verified using real ECG data. Additionally, the combination of spatial and temporal decorrelation is proposed and discussed as a practical and lossless method for a real implementation. The compression efficiency significantly exceeds the values obtained with general-purpose lossless algorithms.
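
As a sketch of the spatial decorrelation discussed above, the snippet below applies a Karhunen-Loeve (eigenvector) transform across simultaneously recorded leads, one plausible instance of the linear transforms studied in the paper. The function name, the synthetic three-lead data, and the floating-point arithmetic are assumptions; a truly lossless scheme would use an integer transform, such as the integer wavelet decomposition the paper mentions.

```python
import numpy as np

def spatial_klt_decorrelate(leads):
    """Hedged sketch: decorrelate simultaneously recorded ECG leads with a
    Karhunen-Loeve (eigenvector) transform across channels.
    leads: array of shape (n_channels, n_samples)."""
    X = leads - leads.mean(axis=1, keepdims=True)   # remove each lead's mean
    C = X @ X.T / X.shape[1]                        # inter-lead covariance matrix
    _, V = np.linalg.eigh(C)                        # orthonormal eigenvectors
    Y = V.T @ X                                     # decorrelated channels
    return Y, V                                     # V is needed for reconstruction

# synthetic three-lead example with strong inter-lead correlation (assumption)
rng = np.random.default_rng(0)
base = np.sin(2 * np.pi * np.linspace(0, 4, 1000))
leads = np.vstack([base + 0.05 * rng.standard_normal(1000) for _ in range(3)])
Y, V = spatial_klt_decorrelate(leads)
print(np.round(np.corrcoef(Y), 3))   # off-diagonal correlations near zero
```
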

Cites background from "ECG data compression techniques-a u..."

  • ...keywords: ECG compression, linear transform, integer wavelet decomposition (Piotr Augustyniak, "Reducing the Spatial Signal Correlation for the Compression...")


Proceedings ArticleDOI
02 Apr 2009
TL;DR: A new hybrid compression scheme for ultrasound images and ECG signals in which the combined data are compressed by a single coder, rather than the two distinct coders usually used in this kind of application.
Abstract: In this paper, we introduce a new hybrid compression scheme for ultrasound images and ECG signals. The approach consists of embedding the ECG signals in the black background regions of the image. The mixed data (image and ECG) are then compressed by a single coder (in our case, JPEG2000), without having to use the two distinct coders usually employed in this kind of application. Since the ECG signals are of particular importance in the resulting image, the region where they are inserted is subsequently coded with the region-of-interest (ROI) technique. Finally, the reconstruction quality is evaluated using the PSNR (peak signal-to-noise ratio) for the ultrasound image and the PRD (percent root-mean-square difference) for the ECG signals.
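
The two quality measures cited above are standard; the sketch below shows one common way to compute them. The exact PRD normalization used by the authors is not stated here, so the mean-retaining variant and the default image peak value are assumptions.

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference; this variant normalizes by the raw
    signal energy (some authors subtract the signal mean first)."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the assumed maximum pixel value."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    mse = np.mean((x - y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# toy check
print(round(prd([1.0, 2.0, 3.0], [1.0, 2.0, 2.9]), 2))
```
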
Journal ArticleDOI
TL;DR: Bioelectrical signals are spectrally analyzed to enable energy-quality trade-offs, which are helpful for observing health problems such as those related to heart rate and for distinguishing normal from abnormal ECG waves.
Abstract: In this paper, bioelectrical signals are spectrally analyzed to enable energy-quality trade-offs; such analysis is helpful for observing different health problems, such as those related to heart rate. To facilitate these trade-offs, the signals are first expressed in a basis in which the significant components, which carry most of the relevant information, can easily be distinguished from the components that affect the output less. Such a representation permits the pruning of operations associated with the less significant signal components, leading to power savings with only minor quality loss, since only the less useful parts are discarded under the given requirements. This allows normal and abnormal ECG waves of patients to be distinguished.
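
To illustrate the pruning idea, here is a hedged sketch that expresses a toy signal in a transform basis, zeroes the least significant coefficients, and reconstructs. The DCT basis, the 25% retention fraction, and the test signal are assumptions for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.fft import dct, idct

def prune_small_components(x, keep_fraction=0.25):
    """Hedged sketch of the energy-quality trade-off: express the signal in a
    transform basis (a DCT here, as an assumption), zero the least significant
    coefficients, and reconstruct from what remains."""
    c = dct(x, norm="ortho")
    k = max(1, int(keep_fraction * len(c)))
    smallest = np.argsort(np.abs(c))[:-k]   # indices of all but the k largest coefficients
    c[smallest] = 0.0                       # prune them; their operations could be skipped
    return idct(c, norm="ortho")

# toy usage: a quarter of the coefficients keeps most of the signal energy
t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 3 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
x_hat = prune_small_components(x, 0.25)
print(round(100 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2)), 2), "% PRD")
```
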

Cites background or methods from "ECG data compression techniques-a u..."

  • ...Offline: as mentioned above, this work is described and validated in [7] and it is an offline WT-based ECG method....


  • ...Keep the “best” global single-lead delineation [4], [7]....


  • ...This choice of scales is adopted since the signal's energy lies mostly in these scales [7]....


Journal ArticleDOI
TL;DR: A method for modelling an electrocardiogram (ECG) signal using time-varying parameters: the signal is considered to be generated by a linear, time-varying (LTV) system with a stationary white-noise input, and the time-varying coefficients of the LTV system are then estimated.
Abstract: In this paper, we present a method for modelling an electrocardiogram (ECG) signal using time-varying parameters by considering that the signal is generated by a linear, time-varying (LTV) system with a stationary white-noise input; we then estimate the time-varying coefficients of the LTV system. Since the ECG signal is considered to be non-stationary, this method is based on the Wold-Cramer representation of a non-stationary signal. Because the relationship between the generalised transfer function of an LTV system and the time-varying coefficients of the difference equation of a discrete-time system has not been addressed so far in the literature, in this paper we propose a solution to this problem and apply it to modelling a human ECG signal. We first derive a relationship between the system's generalised transfer function and the time-varying parameters of the system. We then develop an algorithm that solves for the system's time-varying parameters from the time-frequency kernel of the system output using the time-varying autocorrelation function (TVACF). A comparative analysis between the proposed algorithm and the RLS and RLSL algorithms is presented. Computer simulations illustrating the effectiveness of our algorithm when the signal is embedded in noise are presented.
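
The abstract compares its estimator against RLS; the sketch below shows only that RLS baseline for tracking time-varying AR coefficients (the proposed TVACF algorithm itself is not reproduced). The model order, forgetting factor, initialization, and toy AR(2) process are illustrative assumptions.

```python
import numpy as np

def rls_tv_ar(x, order=2, lam=0.98, delta=100.0):
    """Sketch of recursive-least-squares (RLS) tracking of time-varying AR
    coefficients, i.e. the baseline the abstract compares against; lam is the
    forgetting factor and delta initializes the inverse correlation matrix."""
    x = np.asarray(x, dtype=float)
    theta = np.zeros(order)                 # current coefficient estimates
    P = np.eye(order) * delta               # inverse correlation matrix
    coeffs = np.zeros((len(x), order))
    for n in range(order, len(x)):
        phi = x[n - 1::-1][:order]          # regressor [x[n-1], ..., x[n-order]]
        e = x[n] - phi @ theta              # a-priori prediction error
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * e
        P = (P - np.outer(k, phi) @ P) / lam
        coeffs[n] = theta
    return coeffs

# toy usage: track a fixed AR(2) process x[n] = 1.5 x[n-1] - 0.7 x[n-2] + w[n]
rng = np.random.default_rng(0)
w = rng.standard_normal(2000)
x = np.zeros(2000)
for n in range(2, 2000):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + w[n]
print(np.round(rls_tv_ar(x)[-1], 2))        # estimates approach [1.5, -0.7]
```
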
References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations
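
A small numerical sketch of the limiting process this abstract describes: quantizing a continuous source with bin width delta yields a discrete entropy that behaves like the differential entropy minus log2(delta) as the bins shrink. The Gaussian source, sample size, and bin widths are illustrative assumptions.

```python
import numpy as np

# Quantize a continuous (here Gaussian, as an assumption) source with bin
# width delta; the entropy of the quantized source approaches
# h(X) - log2(delta), where h(X) = 0.5*log2(2*pi*e*sigma^2) is the
# differential entropy. All sample sizes and bin widths are illustrative.
rng = np.random.default_rng(1)
samples = rng.standard_normal(1_000_000)
h_diff = 0.5 * np.log2(2 * np.pi * np.e)              # about 2.05 bits for sigma = 1

for delta in (0.5, 0.1, 0.02):
    bins = np.arange(samples.min(), samples.max() + delta, delta)
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / len(samples)
    H = -np.sum(p * np.log2(p))                       # entropy of the quantized source
    print(f"delta={delta}: H={H:.2f}  h - log2(delta)={h_diff - np.log2(delta):.2f}")
```
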

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.

5,221 citations
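
A minimal sketch of the minimum-redundancy construction described above, in Python: repeatedly merge the two least probable subtrees so that frequent messages receive short codewords. The heap-based bookkeeping and the toy string are illustrative choices, not Huffman's original presentation.

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Minimal sketch of the minimum-redundancy construction: repeatedly merge
    the two least probable subtrees, prefixing '0' to one side and '1' to the
    other, so the average codeword length is minimized."""
    counts = Counter(message)
    if len(counts) == 1:                             # degenerate single-symbol source
        return {next(iter(counts)): "0"}
    # heap items: (weight, tiebreaker, {symbol: partial codeword})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

print(huffman_code("abracadabra"))   # the most frequent symbol gets the shortest codeword
```
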

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of the present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.

4,206 citations
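
A hedged sketch of the all-pole, autocorrelation-method analysis reviewed above, using the Levinson-Durbin recursion to obtain the predictor coefficients, the reflection (partial correlation) coefficients singled out for quantization, and the prediction error power. The function signature and the toy second-order example are assumptions.

```python
import numpy as np

def lpc_autocorrelation(x, order):
    """Hedged sketch of autocorrelation-method linear prediction via the
    Levinson-Durbin recursion. Returns the predictor polynomial a (with
    a[0] = 1, so that x[n] + a[1]x[n-1] + ... is whitened), the reflection
    (partial correlation) coefficients, and the final prediction error power."""
    x = np.asarray(x, dtype=float)
    r = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    refl = np.zeros(order)
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                                  # i-th reflection coefficient
        refl[i - 1] = k
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]  # order-update of the predictor
        err *= 1.0 - k * k                              # updated prediction error power
    return a, refl, err

# toy usage: recover the whitening polynomial of a synthetic AR(2) process
rng = np.random.default_rng(0)
w = rng.standard_normal(5000)
x = np.zeros(5000)
for n in range(2, 5000):
    x[n] = 1.3 * x[n - 1] - 0.4 * x[n - 2] + w[n]
a, refl, err = lpc_autocorrelation(x, order=2)
print(np.round(a, 2))   # approximately [1, -1.3, 0.4]
```
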

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.

3,188 citations
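
An idealized floating-point sketch of the interval-narrowing idea behind arithmetic coding. A practical coder of the kind the paper describes uses integer arithmetic with incremental output and an adaptive model; the fixed frequency model and the toy message below are assumptions for illustration.

```python
from collections import Counter

def arithmetic_encode(message):
    """Idealized floating-point sketch: the message is mapped to a sub-interval
    of [0, 1) whose width is the product of the symbol probabilities, so any
    number inside it identifies the message. Real coders use integer arithmetic
    and incremental output to avoid the precision loss this sketch would suffer
    on long messages."""
    counts = Counter(message)
    total = len(message)
    cum, ranges = 0.0, {}
    for sym in sorted(counts):                     # cumulative probability ranges
        p = counts[sym] / total
        ranges[sym] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for sym in message:                            # narrow the interval symbol by symbol
        lo, hi = ranges[sym]
        width = high - low
        low, high = low + width * lo, low + width * hi
    return (low + high) / 2, ranges

value, model = arithmetic_encode("BILL GATES")
print(value)   # a single number in [0, 1) encoding the whole message (given the model)
```
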

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.

928 citations
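
A toy comparison of the energy-compaction behaviour of two of the transforms listed above (the DCT and the Walsh-Hadamard transform), measured as the mean-square error left after keeping only the largest coefficients, which for orthonormal transforms follows from Parseval's relation. The test signal, block length, and retained-coefficient counts are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def compaction_mse(coeffs, keep):
    """Per-sample mean-square reconstruction error when only the `keep`
    largest-magnitude coefficients of an orthonormal transform are retained
    (Parseval: the error equals the energy of the discarded coefficients)."""
    c = np.sort(np.abs(coeffs))[::-1]
    return np.sum(c[keep:] ** 2) / len(c)

# toy comparison on a smooth test signal (illustrative numbers only)
n = 64
x = np.cos(2 * np.pi * 1.5 * np.arange(n) / n)
c_dct = dct(x, norm="ortho")                  # discrete cosine transform
H = hadamard(n) / np.sqrt(n)                  # orthonormal Walsh-Hadamard matrix
c_wht = H @ x
for keep in (4, 8, 16):
    print(keep, round(compaction_mse(c_dct, keep), 6), round(compaction_mse(c_wht, keep), 6))
```
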