Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
Citations
Journal ArticleDOI
TL;DR: Simulation results show that the proposed hybrid two-stage electrocardiogram signal compression method compares favourably with various state-of-the-art ECG compressors and provides low bit-rate and high quality of the reconstructed signal.
Abstract: A new hybrid two-stage electrocardiogram (ECG) signal compression method based on the modified discrete cosine transform (MDCT) and discrete wavelet transform (DWT) is proposed. The ECG signal is partitioned into blocks and the MDCT is applied to each block to decorrelate the spectral information. Then, the DWT is applied to the resulting MDCT coefficients. Spectral redundancy is removed by compressing the subordinate components more than the dominant components. The resulting wavelet coefficients are then thresholded and compressed using an energy-packing and binary significant map coding technique to save storage space. Experiments on ECG records from the MIT-BIH database are performed with various combinations of MDCT and wavelet filters at different transformation levels and quantization intervals. The decompressed signals are evaluated using the percentage rms error (PRD) and zero-mean rms error (PRD(1)) measures. The results show that the proposed method provides a low bit rate and high reconstructed-signal quality, with an average compression ratio (CR) of 21.5 and a PRD of 5.89%, which would be suitable for most monitoring and diagnosis applications. Simulation results show that the proposed method compares favourably with various state-of-the-art ECG compressors.
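The two distortion measures used above are worth making concrete. PRD normalizes the reconstruction error by the signal's total energy, while the zero-mean variant PRD(1) removes the mean first, so a recording sitting on a large DC baseline (as raw MIT-BIH records do) cannot artificially deflate the score. A minimal sketch with invented numbers:

```python
import numpy as np

def prd(x, x_rec):
    """Percentage rms difference: 100 * sqrt(sum((x - x_rec)^2) / sum(x^2))."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def prd1(x, x_rec):
    """Zero-mean variant PRD(1): the baseline of x is removed from the
    denominator, so a large DC offset cannot flatter the score."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2)
                           / np.sum((x - np.mean(x)) ** 2))

x = np.array([1000.0, 1002.0, 1005.0, 1001.0, 998.0])   # offset-heavy signal
x_rec = x + np.array([0.5, -0.5, 0.5, -0.5, 0.5])       # reconstruction error
print(prd(x, x_rec), prd1(x, x_rec))
```

On the offset-heavy example, PRD looks deceptively tiny while PRD(1) reveals the true relative error, which is why papers in this area report both.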

20 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Under these circumstances, preserving the most useful information when compressing a signal to an acceptable size becomes the central goal of ECG data compression techniques proposed in literature over the past 30 years [2]....


  • ...Some attempts were made in the past to define standards for sampling frequency and quantization, but these standards were not implemented and the algorithms’ developers still use rates and quantizers that are convenient to them [2]....


Proceedings ArticleDOI
01 Sep 2016
TL;DR: Quantitative results show the superiority of the SAM scheme against state-of-the-art techniques: compression ratios of up to 35-, 70-, and 180-fold are generally achievable for PPG, ECG, and RESP signals respectively, while reconstruction errors remain between 2% and 7% and the input signal morphology is preserved.
Abstract: Wearable devices allow the seamless and inexpensive gathering of biomedical signals such as electrocardiograms (ECG), photoplethysmograms (PPG), and respiration traces (RESP). They are battery operated and resource constrained, and as such need dedicated algorithms to optimally manage energy and memory. In this work, we design SAM, a Subject-Adaptive (lossy) coMpression technique for physiological quasi-periodic signals. It achieves a substantial reduction in their data volume, allowing efficient storage and transmission, and thus helping extend the devices' battery life. SAM is based upon a subject-adaptive dictionary, which is learned and refined at runtime exploiting the time-adaptive self-organizing map (TASOM) unsupervised learning algorithm. Quantitative results show the superiority of our scheme against state-of-the-art techniques: compression ratios of up to 35-, 70-, and 180-fold are generally achievable for PPG, ECG, and RESP signals respectively, while reconstruction errors (RMSE) remain between 2% and 7% and the input signal morphology is preserved.
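At its core, a dictionary-based lossy compressor like SAM replaces each fixed-length signal segment with the index of its closest dictionary codeword; the subject-adaptive part (learning the dictionary at runtime with TASOM) is the paper's contribution and is omitted here. The sketch below shows only the generic encode/decode step, with a random dictionary and invented segments:

```python
import numpy as np

def dictionary_compress(segments, dictionary):
    """Encode each fixed-length segment as the index of its nearest
    dictionary codeword (plain vector quantization; the TASOM learning
    step that adapts the dictionary to the subject is omitted)."""
    return np.array([int(np.argmin(np.linalg.norm(dictionary - s, axis=1)))
                     for s in segments])

def dictionary_decompress(indices, dictionary):
    """Reconstruction is a simple table lookup."""
    return dictionary[indices]

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(16, 8))    # 16 codewords of length 8
# invented test segments: noisy copies of codewords 3, 7, 3
segments = dictionary[[3, 7, 3]] + 0.01 * rng.normal(size=(3, 8))
codes = dictionary_compress(segments, dictionary)
recon = dictionary_decompress(codes, dictionary)
```

Each 8-sample segment collapses to a single codeword index; in a real system the dictionary and its runtime updates must also be stored or transmitted, which lowers the effective ratio.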

20 citations

Proceedings ArticleDOI
01 Oct 2005
TL;DR: This paper deals with beat variation periods and then exploits the correlation between cycles (inter-beat) and the correlation within each ECG cycle (intra-beat) to achieve an effective ECG compression algorithm based on the two-dimensional multiwavelet transform of ECG signals.
Abstract: In this paper, we introduce an effective ECG compression algorithm based on the two-dimensional multiwavelet transform. The SPIHT algorithm has achieved prominent success in signal compression. Multiwavelets offer simultaneous orthogonality, symmetry, and short support, which is not possible with scalar two-channel wavelet systems. These features are known to be important in signal processing; therefore multiwavelets offer the possibility of superior performance for image-processing applications. This paper deals with beat variation periods and then exploits the correlation between cycles (inter-beat) and the correlation within each ECG cycle (intra-beat). We apply the SPIHT algorithm to the 2-D multiwavelet transform of ECG signals. Experiments on selected ECG records from the MIT-BIH arrhythmia database reveal that the proposed algorithm is significantly more efficient than previously proposed ECG compression schemes.
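The inter-beat/intra-beat idea rests on one preprocessing step: cutting the 1-D ECG at the R-peaks and stacking the beats as rows of a matrix turns cycle-to-cycle similarity into vertical correlation that a 2-D transform (multiwavelet or otherwise) can exploit. A minimal sketch of that stacking step, with an invented toy beat and assumed, already-detected R-peak positions:

```python
import numpy as np

def stack_beats(signal, r_peaks, beat_len):
    """Cut the 1-D ECG at detected R-peaks and stack the beats as rows of
    a 2-D array, so a 2-D transform sees intra-beat correlation along the
    rows and inter-beat correlation down the columns. Beats shorter than
    beat_len are zero-padded to a common length."""
    rows = []
    for r in r_peaks:
        beat = signal[r:r + beat_len]
        rows.append(np.pad(beat, (0, beat_len - len(beat))))
    return np.vstack(rows)

# toy quasi-periodic signal: three nearly identical invented beats
beat = np.array([0, 1, 5, 1, 0, -1, 0, 0], float)
signal = np.concatenate([beat, beat, beat[:6]])   # last beat is truncated
matrix = stack_beats(signal, r_peaks=[0, 8, 16], beat_len=8)
```

In the resulting matrix each column holds the same phase of successive beats, so the columns are highly correlated whenever the rhythm is regular, which is exactly what the 2-D transform compresses.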

20 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...Hilton presented a wavelet and wavelet packet based EZW encoder [2]....


  • ...However, transform methods usually achieve higher compression rates and are insensitive to noise contained in the original ECG signal [2]....


Journal Article
TL;DR: The simulation results show that cardbal2 with identity (Id) prefiltering is the most effective transformation for ECG compression.
Abstract: In this paper we aim to find the optimum multiwavelet for compression of electrocardiogram (ECG) signals. At present, it is not well known which multiwavelet is the best choice for optimum compression of ECG. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. To assess the performance of the different multiwavelets in compressing ECG signals, in addition to factors known from the compression literature such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D), and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, for studying the morphological relation between the reconstructed and the original ECG signal, and the Signal-to-reconstruction-Noise Ratio (SNR). The simulation results show that cardbal2 with identity (Id) prefiltering is the most effective transformation. Keywords—ECG compression, Multiwavelet, Prefiltering.
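Of the criteria listed, CC and SNR are the two this abstract adds beyond the usual CR/PRD set, and both are one-liners. A sketch using the standard definitions on an invented synthetic signal:

```python
import numpy as np

def cross_correlation(x, x_rec):
    """Normalized cross-correlation between original and reconstruction:
    a morphology measure (1.0 means identical shape up to gain/offset)."""
    xc, rc = x - np.mean(x), x_rec - np.mean(x_rec)
    return np.sum(xc * rc) / np.sqrt(np.sum(xc ** 2) * np.sum(rc ** 2))

def snr_db(x, x_rec):
    """Signal-to-reconstruction-noise ratio in dB (zero-mean signal power
    over reconstruction-error power)."""
    noise = x - x_rec
    return 10.0 * np.log10(np.sum((x - np.mean(x)) ** 2)
                           / np.sum(noise ** 2))

x = np.sin(np.linspace(0, 4 * np.pi, 200))                  # "original"
x_rec = x + 0.01 * np.cos(np.linspace(0, 40 * np.pi, 200))  # small ripple error
print(cross_correlation(x, x_rec), snr_db(x, x_rec))
```

CC stays near 1.0 as long as the waveform shape is preserved even when PRD is nonzero, which is why it is used specifically as a morphology check.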

20 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...However, transform methods usually achieve higher compression rates and are insensitive to noise contained in original ECG signal [1]....


Journal ArticleDOI
TL;DR: A non-recursive 1-D discrete periodized wavelet transform and a reversible round-off linear transformation (RROLT) theorem are developed that resist truncation-error propagation in the decomposition process and achieve superior compression performance, particularly in high-CR situations.

20 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
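The limiting process described here can be watched numerically: quantize a continuous source with bin width Δ, and the discrete entropy behaves like h(X) − log₂Δ, where h(X) is the differential entropy (≈2.05 bits for a unit Gaussian). A small Monte-Carlo sketch with invented parameters:

```python
import numpy as np

def quantized_entropy(samples, delta):
    """Discrete Shannon entropy (bits) of samples quantized to bin width
    delta; for small delta this approaches h(X) - log2(delta), where h(X)
    is the differential entropy of the continuous source."""
    bins = np.floor(samples / delta)                 # assign each sample a bin
    _, counts = np.unique(bins, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
x = rng.normal(size=200_000)                 # unit Gaussian source
h_diff = 0.5 * np.log2(2 * np.pi * np.e)     # differential entropy of N(0,1)
for delta in (0.5, 0.25, 0.125):
    print(delta, quantized_entropy(x, delta), h_diff - np.log2(delta))
```

Halving Δ adds one bit to the discrete entropy while the difference from h(X) − log₂Δ shrinks, which is the "limiting process from the discrete case" made concrete.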

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
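Huffman's construction is short enough to sketch directly: repeatedly merge the two least-frequent subtrees, prefixing '0'/'1' to the codewords on either side. This is a compact Python rendering of the idea, not Huffman's original formulation:

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) code for the symbol
    frequencies of `message`, returning {symbol: bitstring}."""
    freq = Counter(message)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # heap entries: (weight, tiebreak, {symbol: partial codeword})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

code = huffman_code("aaaabbc")
encoded = "".join(code[s] for s in "aaaabbc")
```

For "aaaabbc" the result assigns a 1-bit codeword to 'a' and 2-bit codewords to 'b' and 'c', so the message costs 10 bits instead of the 14 a fixed 2-bit code would need, which is exactly the minimized average length the abstract describes.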

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
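The stationary (autocorrelation) method mentioned above reduces to solving the Toeplitz normal equations R a = r for the predictor coefficients. A minimal sketch, using a plain np.linalg.solve in place of the Levinson-Durbin recursion that production coders use; the AR(2) test signal and its coefficients are invented:

```python
import numpy as np

def lpc(signal, order):
    """All-pole linear prediction by the autocorrelation (stationary)
    method: solve the normal equations R a = r, so that x[n] is
    approximated by sum_k a[k] * x[n-k]."""
    n = len(signal)
    # autocorrelation lags r[0..order]
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    # Toeplitz system matrix R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# a stable AR(2) process is predicted almost exactly by a 2nd-order model
rng = np.random.default_rng(1)
x = np.zeros(2000)
e = 0.01 * rng.normal(size=2000)
for n in range(2, 2000):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + e[n]
a = lpc(x, 2)
print(a)  # close to the true coefficients [1.5, -0.7]
```

In DPCM-style ECG compression this is the predictor stage: only the small residual x[n] − Σ a[k]·x[n−k] needs to be quantized and entropy coded.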

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
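The mechanism is easy to sketch in floating point: each symbol narrows the current interval in proportion to its probability, and any number inside the final interval identifies the whole message. Real coders use integer arithmetic with renormalization to avoid precision loss; this toy version, with an invented alphabet and model, is only safe for short messages:

```python
def _cumulative(probs):
    """Map each symbol to its half-open probability interval [lo, hi)."""
    cum, c = {}, 0.0
    for s, p in probs.items():
        cum[s] = (c, c + p)
        c += p
    return cum

def arithmetic_encode(message, probs):
    """Narrow [low, high) by each symbol's interval; return one number
    inside the final interval that identifies the whole message."""
    cum = _cumulative(probs)
    low, high = 0.0, 1.0
    for s in message:
        lo, hi = cum[s]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2

def arithmetic_decode(value, n, probs):
    """Invert the narrowing: find which symbol's interval contains the
    value, emit it, and rescale; repeat n times."""
    cum = _cumulative(probs)
    out = []
    for _ in range(n):
        for s, (lo, hi) in cum.items():
            if lo <= value < hi:
                out.append(s)
                value = (value - lo) / (hi - lo)
                break
    return "".join(out)

probs = {"a": 0.7, "b": 0.2, "c": 0.1}
x = arithmetic_encode("aabac", probs)
decoded = arithmetic_decode(x, 5, probs)
```

Unlike Huffman coding, nothing forces a whole number of bits per symbol, which is where the extra compression comes from; and because only the probability table touches the interval logic, the model is cleanly separated from the channel encoding, as the abstract emphasizes.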

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
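Among the performance criteria above, energy compaction on correlated signals is what makes the discrete cosine transform the workhorse of this family. A sketch building the orthonormal DCT-II as an explicit matrix (standard definition, invented test signal):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II as an explicit n x n matrix: entry (k, j) is
    cos(pi * (2j + 1) * k / (2n)); the scaling makes the rows orthonormal,
    so C @ C.T == I and the transform preserves energy."""
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

n = 64
C = dct_matrix(n)
x = 0.5 + np.cos(2 * np.pi * np.arange(n) / n)   # smooth, highly correlated
y = C @ x                                         # forward transform
# energy compaction: fraction of total energy in the first 4 coefficients
energy_head = np.sum(y[:4] ** 2) / np.sum(y ** 2)
```

For the smooth test signal, essentially all of the energy lands in the first few coefficients; keeping only those and discarding the rest is the basic transform-compression step whose distortion the listed criteria (mean-square error, rate distortion, and so on) quantify.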

928 citations