Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories (tolerance-comparison compression, DPCM, and entropy coding), and a framework for the evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
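Of the direct methods listed above, the Turning-point algorithm is the simplest to illustrate. The sketch below is a minimal Python rendering of the general turning-point rule (not the survey's reference implementation): keep one sample of each incoming pair, preferring the sample at which the slope changes sign so that peaks survive the fixed 2:1 reduction.

```python
import numpy as np

def turning_point(signal):
    """Turning-point compression: retain one sample per incoming pair,
    preferring local extrema so that QRS peaks survive the 2:1 reduction."""
    signal = np.asarray(signal, dtype=float)
    out = [signal[0]]
    x0 = signal[0]                          # last retained sample
    for i in range(1, len(signal) - 1, 2):
        x1, x2 = signal[i], signal[i + 1]
        if (x1 - x0) * (x2 - x1) < 0:       # slope changes sign: x1 is a turning point
            out.append(x1)
            x0 = x1
        else:                               # monotone pair: x2 carries the trend
            out.append(x2)
            x0 = x2
    return np.asarray(out)
```

Applying the function twice gives roughly 4:1 compression, at correspondingly higher distortion.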
Citations
Proceedings ArticleDOI
23 Sep 2001
TL;DR: To address the main problem in evaluating fetal electrocardiogram (FECG) extraction algorithms, the lack of objective quality measures, the authors propose using simulated records and defining indexes that quantify the quality of an extraction algorithm without requiring visual inspection.
Abstract: One of the main problems in evaluating fetal electrocardiogram (FECG) extraction algorithms is the difficulty of giving objective measurements of the quality of the recovered signals, and therefore of the proposed extraction method. This is why most papers assess the behavior of a given algorithm by visual inspection. We propose the use of simulated records and the definition of indexes that quantify the quality of an extraction algorithm without requiring visual inspection. In this way we obtain objective measurements of an algorithm's quality, as well as a criterion for fixing the operational parameters of the method that give the best performance.

13 citations

Proceedings ArticleDOI
01 Dec 2013
TL;DR: An ECG compression system based on the two-dimensional discrete wavelet transform (2D DWT) and Huffman coding is presented; the average compression performance of the algorithm is 65% with a 0.999 correlation score.
Abstract: In this paper, an ECG compression system based on the two-dimensional discrete wavelet transform (2D DWT) and Huffman coding is presented. Two different approaches are used to construct a 2D array from the 1D ECG signal using a cut-and-align (CAB) technique; the 2D array is then decomposed with the 2D DWT, which yields a large number of insignificant coefficients. These coefficients are set to zero, which increases the compression rate, while Huffman coding preserves signal quality owing to its lossless nature. The average compression performance of the algorithm is 65% with a 0.999 correlation score.
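To make the pipeline concrete, here is a minimal sketch of the 2D-DWT stage, assuming the PyWavelets package and an input that has already been cut and aligned into a beat-per-row 2D array (the segmentation step is not shown). It zeroes the insignificant detail coefficients; those runs of zeros are what the subsequent lossless Huffman stage exploits.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def dwt2_threshold(ecg_2d, wavelet="db4", level=3, thresh=0.05):
    """Decompose a beat-aligned 2D ECG array with the 2D DWT, hard-threshold
    the detail sub-bands to zero out insignificant coefficients, and
    reconstruct.  Surviving coefficients would then be quantized and
    Huffman coded (lossless) in a full codec."""
    coeffs = pywt.wavedec2(ecg_2d, wavelet, level=level)
    t = thresh * np.max(np.abs(ecg_2d))          # amplitude-relative threshold
    new_coeffs = [coeffs[0]]                     # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, t, mode="hard")
                                for c in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)
```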

13 citations

Journal ArticleDOI
TL;DR: In this article, the authors develop an adaptive transform-domain technique for electrocardiogram (ECG) signal compression based on rational function systems, together with a multi-dimensional hyperbolic particle swarm optimization algorithm designed especially for the rational transforms in question.
Abstract: In this paper we develop an adaptive transform-domain technique based on rational function systems. Such systems are of general importance in several areas of signal theory, including filter design, transfer-function approximation, system identification, and control theory. The construction of the proposed method is discussed in the framework of a general mathematical model called variable projection. First we generalize this method by adding dimension-type free parameters. Then we address the optimization of the free parameters: to this end, building on the well-known particle swarm optimization (PSO) algorithm, we develop the multi-dimensional hyperbolic PSO algorithm, designed especially for the rational transforms in question. As a result, the system, along with its dimension, is dynamically optimized during the process. The main motivation was to increase adaptivity while keeping the computational complexity manageable. We note that the proposed method is general in nature. As a case study, the problem of electrocardiogram (ECG) signal compression is discussed. Comparison tests on the PhysioNet MIT-BIH Arrhythmia database demonstrate that our method outperforms other transformation techniques.
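For orientation, here is a plain particle swarm optimizer in Python, a generic sketch rather than the paper's multi-dimensional hyperbolic variant: each particle's velocity is pulled toward its own best point and the swarm's global best, and the objective f stands in for whatever free-parameter cost the rational transform defines.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-1.0, hi=1.0):
    """Vanilla PSO: minimize f over [lo, hi]^dim with n particles."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))         # positions
    v = np.zeros((n, dim))                    # velocities
    pbest = x.copy()                          # per-particle best positions
    pval = np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]                # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()
```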

13 citations

Proceedings ArticleDOI
01 Dec 2014
TL;DR: A short-range centralized health monitoring system that acquires electrocardiogram (ECG) data over wireless ZigBee communication for computerized analysis is described; software at the central station controls the patient modules and performs post-acquisition data analysis.
Abstract: Remote health monitoring is a prominent area of modern biomedical research. It involves collecting various biomedical signals from a patient using information and communication technology, with the objective of assessing these vital conditions at a remote end. This paper describes a short-range centralized health monitoring system that acquires electrocardiogram (ECG) data over wireless ZigBee communication for computerized analysis. A prototype compact patient data-collection system based on the ATmega16L microcontroller was developed to collect and compress single-lead ECG data for wireless transfer to a centralized station for remote-end processing. Purpose-built software at the central station controlled the patient modules and the post-acquisition data analysis. Tests with PhysioNet data and ECG collected from volunteers showed satisfactory results. The average compression ratio over 70 ECG files was 6.93, with average PRD and PRDN of 1.1343 and 8.4645, respectively. Feature extraction on the received ECG data showed an average variance of 0.12%.
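The PRD and PRDN figures quoted above are the standard distortion measures in ECG compression; the sketch below is a direct Python transcription of their usual definitions (x is the original signal, x_rec the reconstruction).

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference (PRD)."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def prdn(x, x_rec):
    """Normalized PRD: the mean is removed from the reference, so the
    score does not depend on the DC offset of the recording."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2)
                           / np.sum((x - np.mean(x)) ** 2))
```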

13 citations


Cites background from "ECG data compression techniques-a unified approach"

  • ...ECG compression schemes [11] are generally of three types: direct data compression, transformation methods, and parameter extraction....


Journal ArticleDOI
TL;DR: Experimental results show that this method for compressing electrocardiogram (ECG) signals, based on beat correlation and principal component analysis, is very efficient for compression and suitable for different applications of telecardiology.
Abstract: This study presents an improved technique for compression of electrocardiogram (ECG) signals, based on beat correlation and principal component (PC) analysis. For this purpose, a two-dimensional matrix of the ECG signal based on temporal inter- and intra-beat correlation is constructed, and further compression is achieved using PC extraction. Beat correlation helps generate very few PCs, which increases compression efficiency. A detailed analysis is presented for ten signals with different rhythms, wave morphologies and abnormalities from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) arrhythmia database. The effectiveness of the proposed method is examined with several attributes, such as percentage root-mean-square difference, compression ratio, signal-to-noise ratio and correlation. Experimental results show that this method is very efficient for compression and suitable for different applications of telecardiology.
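A minimal sketch of the PC-based stage, assuming the beats have already been detected and time-aligned into a beat-per-row matrix (the beat-correlation construction above is not shown): keep the top k principal components and store, per beat, k coefficients instead of the raw samples.

```python
import numpy as np

def pca_compress_beats(beats, k):
    """'beats' is an (n_beats, beat_len) matrix of time-aligned beats.
    Returns the lossy reconstruction plus the compressed representation:
    the mean beat, k basis vectors, and k coefficients per beat."""
    mean = beats.mean(axis=0)
    centered = beats - mean
    # SVD of the centered beat matrix yields the principal components
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:k]                    # (k, beat_len) principal components
    coeffs = centered @ basis.T       # (n_beats, k) per-beat coefficients
    recon = coeffs @ basis + mean     # lossy reconstruction
    return recon, coeffs, basis, mean
```

Strong inter-beat correlation is exactly what makes a small k sufficient here: most of the energy of every beat lies in the span of a few components.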

13 citations

References
More filters
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
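In modern notation (not Shannon's own), the limiting process described here relates the entropy of the quantized variable to the differential entropy of the continuous one. Partitioning the line into cells of width \Delta gives

```latex
H(X_\Delta) \;=\; -\sum_i p(x_i)\,\Delta \,\log\bigl(p(x_i)\,\Delta\bigr)
\;\approx\; -\int p(x)\log p(x)\,dx \;-\; \log\Delta ,
```

so H(X_\Delta) + \log\Delta tends to the differential entropy h(X) = -\int p \log p \, dx as \Delta \to 0; the divergent -\log\Delta term is one of the "new effects" of the continuous case that the abstract alludes to.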

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
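The construction behind this minimum-redundancy code is the familiar one: repeatedly merge the two least-probable subtrees, prefixing one side's codewords with 0 and the other's with 1. A compact Python sketch:

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) code for the symbols in
    'message'.  Returns a symbol -> codeword map."""
    heap = [[w, i, {s: ""}]
            for i, (s, w) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate one-symbol message
        return {next(iter(heap[0][2])): "0"}
    tick = len(heap)                        # tie-breaker so dicts never compare
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tick, merged])
        tick += 1
    return heap[0][2]
```

For example, huffman_code("abracadabra") assigns the shortest codeword to the most frequent symbol, 'a', minimizing the average number of coding digits per message.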

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
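As a concrete illustration of the autocorrelation (stationary) method, the sketch below fits an all-pole predictor by solving the Toeplitz normal equations directly with NumPy; the Levinson-Durbin recursion would exploit the Toeplitz structure, but this version favors brevity. In DPCM-style ECG compression it is the small residual, not the signal itself, that gets quantized and entropy coded.

```python
import numpy as np

def lpc(x, order):
    """Autocorrelation-method linear prediction: fit a_1..a_p so that
    x[n] ~= sum_k a_k * x[n-k], by solving the normal equations R a = r.
    Returns the coefficients and the prediction residual."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    # prediction of x[order:] from the preceding 'order' samples
    pred = sum(a[k] * x[order - 1 - k : n - 1 - k] for k in range(order))
    return a, x[order:] - pred
```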

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
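A toy floating-point illustration of the interval-narrowing idea (real coders use integer arithmetic with renormalization to avoid the precision loss this version suffers on long messages):

```python
from collections import Counter

def arith_encode(msg):
    """Toy arithmetic encoder: narrow [lo, hi) by each symbol's slice of
    the cumulative probability.  Any number inside the final interval
    identifies the message, given the model and the message length."""
    freq = Counter(msg)
    total, c, cum = len(msg), 0, {}
    for s in sorted(freq):                     # fixed symbol order
        cum[s] = (c / total, (c + freq[s]) / total)
        c += freq[s]
    lo, hi = 0.0, 1.0
    for s in msg:
        a, b = cum[s]
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
    return (lo + hi) / 2, cum
```

The clean separation the abstract mentions is visible even here: the model (cum) is built independently of the interval-narrowing channel code, so an adaptive model could be swapped in without touching the coder.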

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
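A minimal transform-coding loop in the spirit of this survey, using SciPy's DCT as a stand-in for any of the orthogonal transforms above: transform, keep only the largest-magnitude coefficients, and invert. The energy compaction these transforms provide is what lets a small fraction of coefficients carry most of the signal.

```python
import numpy as np
from scipy.fft import dct, idct  # SciPy assumed available

def dct_compress(x, keep_ratio=0.1):
    """Transform-domain compression in miniature: DCT a signal block,
    zero all but the largest-magnitude coefficients, and reconstruct."""
    c = dct(np.asarray(x, dtype=float), norm="ortho")
    k = max(1, int(keep_ratio * len(c)))
    smallest = np.argsort(np.abs(c))[:-k]   # indices of discarded coeffs
    c[smallest] = 0.0
    return idct(c, norm="ortho")
```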

928 citations