Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods, and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
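The Turning-point algorithm listed above is among the simplest of the direct schemes. The following Python sketch is our illustration of the classic 2:1 rule, not code from the survey; the example signal and variable names are invented.

```python
import numpy as np

def turning_point(signal):
    """Turning Point (TP) compression: fixed 2:1 downsampling that keeps,
    from each incoming pair of samples, the one preserving local slope
    changes (turning points). A minimal sketch of the classic rule."""
    def sign(v):
        return (v > 0) - (v < 0)

    saved = [signal[0]]           # x0: last retained sample
    x0 = signal[0]
    for i in range(1, len(signal) - 1, 2):
        x1, x2 = signal[i], signal[i + 1]
        if sign(x1 - x0) * sign(x2 - x1) < 0:
            x0 = x1               # slope reverses at x1: keep the turning point
        else:
            x0 = x2               # monotone pair: keep the later sample
        saved.append(x0)
    return np.array(saved)

# Example: a smooth test signal compresses to roughly half the samples.
t = np.linspace(0, 1, 200)
print(len(t), "->", len(turning_point(np.sin(2 * np.pi * 5 * t))))
```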
Citations
Proceedings ArticleDOI
01 Oct 2018
TL;DR: The proposed method achieves an average compression ratio (CR) of 65.91 with an RMSE of no more than 5%, outperforming the state of the art, which reaches a CR of around 40 at the same error level.
Abstract: Biosignals often require high data-transmission rates in real-time monitoring and visualization. Low-power techniques are always desirable when designing sustainable wireless sensor nodes. Signal compression provides a promising solution for low-power wireless sensor nodes, as it can significantly reduce the amount of data sent over power-demanding wireless transmission and thus greatly lower the energy consumption of sensor nodes. In this study, we develop a new approach to ECG signal compression on low-power ECG sensor nodes by leveraging the sparsity of ECG signals in the frequency domain. The experimental results show that our method achieves an average compression ratio (CR) of 65.91 with an RMSE of no more than 5%, whereas the state of the art achieves a CR of around 40 at the same error level. The promising compression performance of the proposed method offers a feasible path to ultra-low-power wireless ECG sensor node design.
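The abstract does not spell out the transform used, but the general idea of exploiting frequency-domain sparsity can be sketched as follows; the DCT choice, the keep_ratio parameter, and the PRD helper are our assumptions, not the paper's method.

```python
import numpy as np
from scipy.fft import dct, idct

def compress_freq(x, keep_ratio=0.05):
    """Generic frequency-domain compression sketch: transform, keep only
    the largest-magnitude coefficients, zero the rest. Illustrates the
    idea of spectral sparsity only; keep_ratio is an assumed parameter."""
    c = dct(x, norm='ortho')
    k = max(1, int(keep_ratio * len(x)))
    c[np.argsort(np.abs(c))[:-k]] = 0.0    # discard all but the k largest
    return c

def prd(x, y):
    """Percentage RMS difference between original and reconstruction."""
    return 100 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

x = np.sin(2 * np.pi * 1.2 * np.linspace(0, 4, 1024))   # stand-in for ECG
x_hat = idct(compress_freq(x), norm='ortho')
print("PRD = %.2f%%" % prd(x, x_hat))
```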

4 citations


Additional excerpts

  • ...With the emerging technology of continuous daily healthcare monitoring via Wireless Body Area Network (WBAN), techniques of reducing energy consumption of wireless sensor nodes are highly desirable due to the limited battery life [1]....

    [...]

Proceedings ArticleDOI
31 Oct 1996
TL;DR: The wavelet transform with the threshold detector, the quantizer, and the Huffman coder can compress data with an average compression ratio CR=9.2 and a percentage root-mean-square difference PRD=3.0%.
Abstract: An application of the wavelet transform to electrocardiography is described in the paper. The transform is exploited as the first stage of an ECG signal compression algorithm. The signal is decomposed into particular time-frequency components. Some of the components are removed because of their low influence on signal shape, owing to the nonstationary character of the ECG. The resulting components are quantized, composed into one block, and compressed by a classical entropy-based Huffman coder. The wavelet transform with the threshold detector, the quantizer, and the Huffman coder can compress data with an average compression ratio CR=9.2 and a percentage root-mean-square difference PRD=3.0%. The lossy compression algorithm was tested on the CSE library of rest ECG signals.
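The pipeline described above (wavelet decomposition, thresholding, quantization, Huffman coding) can be sketched in a few lines using PyWavelets; the wavelet family, level, threshold, and quantizer step below are illustrative guesses rather than the paper's settings, and the Huffman stage is approximated by the symbol entropy.

```python
import numpy as np
import pywt
from collections import Counter

def wavelet_compress(x, wavelet='db4', level=4, threshold=0.1, step=0.02):
    """Sketch: decompose, hard-threshold small coefficients, quantize
    uniformly. All parameter values here are assumptions for the demo."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    flat = np.concatenate(coeffs)
    flat[np.abs(flat) < threshold] = 0.0      # drop low-influence components
    q = np.round(flat / step).astype(int)     # uniform quantizer

    # Entropy in bits/symbol approximates the Huffman coder's output rate.
    p = np.array(list(Counter(q.tolist()).values())) / len(q)
    print("approx. compressed size: %.0f bits" % (-np.sum(p * np.log2(p)) * len(q)))
    return q
```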

4 citations


Additional excerpts

  • ...The dyadic discrete time wavelet transform (DTWT) of a finite sequence {x(i) | i=0,1,...,N-1}, where N=2^M, can be evaluated as the cyclic convolution y(m,n) = DFT^-1[X(k) H_m(k)] taken at the points 2^m n, (1) where m=1,2,...,M; n=0,1,...,N/2^m-1; k=0,1,...,N-1; X(k)=DFT[x(i)] and H_m(k) = G_m^*(2^m k) is a…...

    [...]
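Read charitably, the excerpt above says each DTWT stage is a cyclic convolution evaluated in the DFT domain followed by dyadic downsampling. Below is a minimal sketch of that idea for the first stage (m=1); the prototype filter g and the use of conjugation are our assumptions, since the excerpt is truncated before the filter definition.

```python
import numpy as np

def dtwt_via_dft(x, g):
    """One dyadic DTWT stage as cyclic convolution in the DFT domain,
    then downsampling by 2 (the m = 1 case). Filter handling is assumed."""
    N = len(x)
    X = np.fft.fft(x)
    G = np.fft.fft(g, N)                   # periodized filter spectrum
    y = np.fft.ifft(X * np.conj(G)).real   # cyclic correlation with g
    return y[::2]                          # dyadic downsampling
```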

Proceedings ArticleDOI
18 Jun 2015
TL;DR: Dual-tree discrete wavelet decomposition based ECG signal compression is exploited using zero run-length coding; its main advance is a tendency to produce a sparse data set, which enhances the compression performance of the system.
Abstract: Electrocardiogram (ECG) signal compression has become an area of growing interest owing to the increasing demand for tele-healthcare systems. In this manuscript, dual-tree discrete wavelet decomposition (DT-DWT) based ECG signal compression is exploited using zero run-length coding. The main advance of the proposed technique is its tendency to generate a sparse data set, which enhances the compression performance of the system. Performance is evaluated through the compression ratio and the percentage root-mean-square difference, and quality is evaluated using the cross-correlation between the original and reconstructed MIT-BIH records. As discussed in the results, the proposed method compares favorably with earlier techniques in terms of compression.
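Zero run-length coding, the back end named above, is straightforward to sketch; the (run, value) pair format and the trailing-zeros sentinel are our illustrative choices, not necessarily the paper's exact scheme.

```python
def zero_rle(coeffs):
    """Encode a sparse coefficient stream as (zero_run_length, nonzero_value)
    pairs; this is where DT-DWT sparsity pays off."""
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    if run:
        out.append((run, 0))       # trailing zeros, sentinel value
    return out

def zero_rld(pairs):
    """Inverse: expand (run, value) pairs back to the coefficient stream."""
    out = []
    for run, v in pairs:
        out.extend([0] * run)
        if v != 0 or run == 0:
            out.append(v)
    return out

print(zero_rle([0, 0, 3, 0, -1, 0, 0, 0]))   # [(2, 3), (1, -1), (3, 0)]
```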

4 citations

Proceedings ArticleDOI
08 Sep 1998
TL;DR: An efficient entropy-based encoding method is developed for this purpose, and the results demonstrate the good performance of the exact optimization methods compared with traditional time-domain compression methods.
Abstract: Traditionally, compression of digital electrocardiogram (ECG) signals has been tackled by heuristic approaches. However, it has recently been demonstrated that exact optimization algorithms perform much better with respect to reconstruction error. Time-domain compression algorithms are based on the idea of extracting representative signal samples from the original signal. As opposed to the heuristic approaches, the exact time-domain compression algorithms rest on a sound mathematical foundation. By formulating the sample selection problem as a graph theory problem, optimization theory can be applied in order to yield optimal compression. The signal is reconstructed by interpolation among the extracted signal samples. Different interpolation methods have been implemented, such as linear interpolation [1] and second-order polynomial reconstruction [2]. In order to compare the performance of the two algorithms in a fully justified way, the results have to be encoded. In this paper we develop an efficient encoding method based on entropy coding for that purpose. The results demonstrate the good performance of the exact optimization methods in comparison with traditional time-domain compression methods.
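The graph-theoretic formulation described above can be made concrete with a small dynamic program: choose which K samples to retain so that linear interpolation minimizes the total squared error. This O(K·N²) sketch is our reconstruction of the general idea, not the authors' algorithm or encoding.

```python
import numpy as np

def seg_err(x, i, j):
    """Squared error of linearly interpolating x between samples i and j."""
    n = j - i
    interp = x[i] + (x[j] - x[i]) * np.arange(n + 1) / n
    return float(np.sum((x[i:j + 1] - interp) ** 2))

def optimal_samples(x, K):
    """Exact sample selection as a shortest-path problem: retain K samples
    (first and last fixed) minimizing total squared reconstruction error."""
    N, INF = len(x), float('inf')
    cost = [[INF] * N for _ in range(K)]
    back = [[-1] * N for _ in range(K)]
    cost[0][0] = 0.0                      # path starts at sample 0
    for k in range(1, K):
        for j in range(1, N):
            for i in range(j):
                if cost[k - 1][i] < INF:
                    c = cost[k - 1][i] + seg_err(x, i, j)
                    if c < cost[k][j]:
                        cost[k][j], back[k][j] = c, i
    path, j = [N - 1], N - 1              # backtrack from the last sample
    for k in range(K - 1, 0, -1):
        j = back[k][j]
        path.append(j)
    return path[::-1]
```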

4 citations

01 Jan 2011
TL;DR: This work investigates a set of frequency-domain ECG data compression schemes to compare their performance in compressing ECG signals, and finds that a block-based DCT combined with a uniform scalar dead-zone quantiser and arithmetic coding gives very good results.
Abstract: An electrocardiogram (ECG) data compression algorithm is needed to reduce the amount of data to be transmitted, stored, and analyzed without losing the clinical information content. This work investigates a set of ECG data compression schemes in the frequency domain to compare their performance in compressing ECG signals. These schemes are based on transform methods such as the discrete cosine transform (DCT), fast Fourier transform (FFT), discrete sine transform (DST), and their improvements. An improvement of a DCT-based method for ECG compression is also presented as DCT-II. A comparative study of the performance of the different transforms is made in terms of compression ratio (CR) and percentage root-mean-square difference (PRD). The appropriate use of a block-based DCT combined with a uniform scalar dead-zone quantiser and arithmetic coding shows very good results, confirming that the proposed strategy exhibits competitive performance compared with the most popular compressors used for ECG compression. Each specific transform is applied to a pre-selected data segment from the MIT-BIH database, and then compression is performed. Index Terms—Compression, Compression ratio, Cosine transform, ECG, Fourier transform, Frequency domain techniques, PRD, Time domain techniques.
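The front end the abstract credits (a block-based DCT followed by a uniform scalar dead-zone quantiser) can be sketched as follows; the block size, step, and dead-zone width are illustrative values, and the arithmetic-coding back end is omitted.

```python
import numpy as np
from scipy.fft import dct

def deadzone_quantize(c, step=0.05, zone=1.5):
    """Uniform scalar dead-zone quantizer: coefficients inside the widened
    zero bin (zone * step) map to 0; others quantize uniformly. step and
    zone are illustrative values, not the paper's tuned settings."""
    q = np.zeros(c.shape, dtype=int)
    mask = np.abs(c) >= zone * step
    q[mask] = (np.sign(c[mask]) * np.floor(np.abs(c[mask]) / step)).astype(int)
    return q

def block_dct_compress(x, block=64):
    """Block-based DCT front end: transform each block, then dead-zone
    quantize; the symbols would then go to an arithmetic coder."""
    pad = (-len(x)) % block
    xb = np.pad(x, (0, pad)).reshape(-1, block)
    return deadzone_quantize(dct(xb, axis=1, norm='ortho'))
```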

4 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...ECG data compression algorithms have been mainly classified into three major categories [3]: 1) Direct time-domain techniques, e....

    [...]

  • ...2) Transformational approaches [3], e....

    [...]

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
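The limiting process Shannon sketches can be stated concretely (our gloss, not a quotation): partitioning a density $p(x)$ into bins of width $\Delta$ gives the discrete entropy $H(X_\Delta) = -\sum_i p(x_i)\Delta \log_2\big(p(x_i)\Delta\big) \approx h(X) - \log_2 \Delta$ for small $\Delta$, where $h(X) = -\int p(x)\log_2 p(x)\,dx$ is the differential entropy. The divergent $-\log_2\Delta$ term is one of the "new effects" of the continuous case: only differences of entropies carry over unchanged from the discrete theory.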

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
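Huffman's construction merges the two least-probable subtrees until one tree remains; the resulting codeword lengths minimize the average number of code digits per message. A minimal Python sketch (the tie-breaking counter is our addition, only so the heap never compares dictionaries):

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) code for the symbols of
    message by repeatedly merging the two least-frequent subtrees."""
    heap = [(n, i, {s: ''}) for i, (s, n) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c1.items()}   # left branch
        merged.update({s: '1' + w for s, w in c2.items()})  # right branch
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

print(huffman_code("abracadabra"))   # frequent symbols get shorter codes
```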

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
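The stationary (autocorrelation) method mentioned in the abstract reduces to solving Toeplitz normal equations; the sketch below uses a plain linear solve where a Levinson recursion would normally be preferred, and the AR(2) test signal is our own example.

```python
import numpy as np

def lpc(x, p):
    """All-pole linear prediction by the autocorrelation method: model
    x[n] as a linear combination of its p past values and solve the
    least-squares normal equations R a = r."""
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])
    err = r[0] - np.dot(a, r[1:])          # minimum prediction error
    return a, err / r[0]                   # coefficients, normalized error

# Example: the coefficients of a 2nd-order AR process are recovered.
rng = np.random.default_rng(0)
x = np.zeros(4000)
for n in range(2, len(x)):
    x[n] = 1.6 * x[n - 1] - 0.8 * x[n - 2] + rng.standard_normal()
print(lpc(x, 2))   # a close to [1.6, -0.8]
```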

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
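The interval-narrowing idea that lets arithmetic coding beat Huffman (fractional bits per symbol) fits in a toy floating-point encoder. Real coders use integer renormalization, so this sketch is only valid for short messages, and the interval conventions are our choices.

```python
from collections import Counter

def arith_encode(message):
    """Toy arithmetic encoder: the whole message maps to one number in
    [0, 1) whose interval width is the product of symbol probabilities,
    so the code length approaches the message entropy."""
    counts, total = Counter(message), len(message)
    cum, c = {}, 0
    for s in sorted(counts):               # cumulative probability ranges
        cum[s] = (c / total, (c + counts[s]) / total)
        c += counts[s]
    lo, hi = 0.0, 1.0
    for s in message:                      # narrow the interval per symbol
        l, h = cum[s]
        lo, hi = lo + (hi - lo) * l, lo + (hi - lo) * h
    return (lo + hi) / 2, cum              # any number in [lo, hi) decodes

print(arith_encode("aababa")[0])
```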

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
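Of the fast algorithms based on matrix factoring that the survey evaluates, the Walsh-Hadamard transform has one of the simplest forms: log2(N) butterfly stages in place of an N×N multiply. A minimal sketch (unnormalized, our convention):

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform via butterfly stages, the
    matrix-factoring idea in miniature. Length must be a power of two."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):          # one butterfly block
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```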

928 citations