Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
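
To make one of the direct methods concrete, the following is a minimal sketch of the turning-point (TP) idea as it is usually described: the encoder walks the signal in pairs and, from each pair, retains the single sample that preserves the local slope trend, giving a fixed 2:1 compression ratio. The function name and details are illustrative, not taken from the paper.

```python
def turning_point_compress(signal):
    """Turning-point (TP) sketch: retain one sample per incoming pair.

    From each pair (x1, x2) following the last retained sample x0, keep
    x1 if the slope changes sign at x1 (a 'turning point'), else keep x2.
    Fixed 2:1 compression; the decoder treats the output as a half-rate
    signal, so local timing can be distorted by up to one sample period.
    """
    sign = lambda v: (v > 0) - (v < 0)
    retained = [signal[0]]
    x0 = signal[0]
    for i in range(1, len(signal) - 1, 2):
        x1, x2 = signal[i], signal[i + 1]
        if sign(x1 - x0) * sign(x2 - x1) < 0:  # slope reverses at x1
            x0 = x1                            # keep the turning point
        else:
            x0 = x2                            # trend continues; keep x2
        retained.append(x0)
    return retained
```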
Citations
Journal ArticleDOI
01 Oct 2022-Irbm
TL;DR: In this article, a novel electrocardiogram data compression technique that utilizes modified run-length encoding of wavelet coefficients is presented; the technique can be applied to the compression of ECG records from Holter monitoring.
Abstract: In cardiac patient care, compression of long-term ECG data is essential to minimize data storage requirements and transmission cost. Hence, this paper presents a novel electrocardiogram data compression technique which utilizes modified run-length encoding of wavelet coefficients. First, the wavelet transform is applied to the ECG data, which decomposes it and packs maximum energy into fewer transform coefficients. The wavelet transform coefficients are quantized using dead-zone quantization: small-valued coefficients lying in the dead-zone interval are discarded, while the other coefficients are mapped to the formulated quantized output interval. Among all the quantized coefficients, an average value is assigned to those coefficients for which the energy packing efficiency is less than 99.99%. The obtained coefficients are encoded using modified run-length coding, which offers a higher compression ratio than conventional run-length coding without any loss of information. Compression performance of the proposed technique is evaluated using different ECG records taken from the MIT-BIH arrhythmia database. The average compression performance in terms of compression ratio, percent root mean square difference, normalized percent mean square difference, and signal-to-noise ratio is 17.18, 3.92, 6.36, and 28.27 dB, respectively, for 48 ECG records. The compression results obtained by the proposed technique are better than those of techniques recently introduced by others. The proposed technique can be utilized for compression of ECG records from Holter monitoring.
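
The paper's exact quantizer and entropy code are not reproduced on this page; the sketch below shows one plausible reading of the two named steps, dead-zone quantization followed by zero-run-length coding of the resulting sparse coefficient stream. The wavelet choice (db4), the threshold delta, and all function names are assumptions for illustration; PyWavelets is assumed available for the transform.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for the wavelet step

def dead_zone_quantize(coeffs, delta):
    """Discard coefficients inside the dead zone [-delta, +delta];
    uniformly quantize the survivors with step size delta."""
    return np.where(np.abs(coeffs) <= delta, 0,
                    np.round(coeffs / delta)).astype(int)

def zero_run_length_encode(q):
    """Encode a sparse integer stream as (zeros_before, value) pairs,
    a common 'modified' run-length variant for transform coefficients."""
    pairs, run = [], 0
    for v in q:
        if v == 0:
            run += 1
        else:
            pairs.append((run, int(v)))
            run = 0
    if run:
        pairs.append((run, 0))  # trailing run of zeros
    return pairs

# Toy usage on a synthetic segment standing in for an ECG record
x = np.sin(np.linspace(0, 8 * np.pi, 256))
coeffs = np.concatenate(pywt.wavedec(x, "db4", level=4))
stream = zero_run_length_encode(dead_zone_quantize(coeffs, delta=0.05))
```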

7 citations

Journal ArticleDOI
TL;DR: This paper proposes a method that integrates watermarking and compression of an electrocardiogram (ECG) of a subject and shows that the proposed method reduces the PRD by a factor of three, while the compression ratio increases by a factor of two.

7 citations

Journal ArticleDOI
01 Jun 2022-Irbm
TL;DR: In this article, the authors present a methodological review of different ECG data compression techniques based on their experimental performance on ECG records of the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database.
Abstract: Objective: Globally, cardiovascular diseases (CVDs) are among the leading causes of death. Electrocardiogram (ECG) signals are widely used in medical screening and diagnostic procedures for CVDs. Early detection of CVDs requires acquisition of longer ECG signals, which has triggered the development of personal healthcare systems that cardio-patients can use to manage the disease. These healthcare systems continuously record, store, and transmit ECG data via wired/wireless communication channels. Such systems face several issues, including limited data storage, limited bandwidth, and limited battery life, all of which ECG data compression techniques can help resolve. Method: Numerous ECG data compression techniques have been proposed in the past. This paper presents a methodological review of different ECG data compression techniques based on their experimental performance on ECG records of the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Results: It is observed that the experimental performance of different compression techniques depends on several parameters, and that the existing techniques are validated using different distortion measures. Conclusion: This study elaborates the advantages and disadvantages of different ECG data compression techniques, and covers the different methods used to validate them. Although compression techniques have been developed very widely, validation of compression methods remains an open research area for achieving efficient and reliable performance.

7 citations

Journal Article
TL;DR: The results of ECG signal compression show better compression performance for the DWT compared to the DFT, FFT, and DCT; experimental results are reported for Percent Root Mean Square Difference (PRD), Signal to Noise Ratio (SNR), and Compression Ratio (CR).
Abstract: Compression of biological signals, and especially the ECG, has an important role in the diagnosis, prognosis and survival analysis of heart diseases. Various techniques have been proposed over the years to address signal compression. Compression of digital electrocardiogram (ECG) signals is desirable for three reasons: economic use of storage, reduction of the data transmission rate, and conservation of transmission bandwidth; compressed ECG signals are also used in the telemedicine field. In this paper, a comparative study of Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT) and Wavelet Transform (WT) based approaches is carried out. Different ECG signals from the MIT-BIH arrhythmia database are tested using MATLAB software. Experimental results are obtained for Percent Root Mean Square Difference (PRD), Signal to Noise Ratio (SNR) and Compression Ratio (CR). The results of ECG signal compression show better compression performance for the DWT compared to the DFT, FFT and DCT. Keywords: Electrocardiogram (ECG), Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT), Discrete Cosine Transform (DCT), Wavelet Transform (WT)
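
The three figures of merit used above are standard and compact to state in code. Note that PRD conventions vary across papers (some subtract the signal mean in the denominator, giving the normalized variant), so the forms below are one common choice rather than the exact ones used in any single study.

```python
import numpy as np

def prd(x, x_rec):
    """Percent root-mean-square difference (un-normalized convention)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def prd_normalized(x, x_rec):
    """PRD with the reference-signal mean removed in the denominator."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2)
                           / np.sum((x - np.mean(x)) ** 2))

def snr_db(x, x_rec):
    """Reconstruction signal-to-noise ratio in decibels."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the raw record over the size of its compressed form."""
    return original_bits / compressed_bits
```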

7 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...For ECG data compression, the lossy type has been applied due to its capability of a high data compression ratio [2]....



References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
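
A compact way to see the limiting process described above (notation ours, not Shannon's): quantizing a density p(x) into bins of width \Delta and expanding the discrete entropy separates a finite integral term from a diverging bin-width term,

```latex
H_\Delta \;=\; -\sum_i p(x_i)\,\Delta\,\log\!\big(p(x_i)\,\Delta\big)
\;\approx\; \underbrace{-\int p(x)\log p(x)\,dx}_{\text{differential entropy } h(X)} \;-\; \log\Delta .
```

The -log \Delta term grows without bound as the bins shrink, which is one of the "new effects" of the continuous case: absolute entropies lose meaning, while differences of entropies (and hence rates) remain well defined.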

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
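
As an illustration of the construction (a sketch in our own code, not Huffman's notation): repeatedly merge the two least-weight subtrees, prefixing a '0' bit to every code in one subtree and a '1' bit to every code in the other.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a minimum-redundancy (Huffman) prefix code for a sequence.
    Returns a dict mapping each distinct symbol to its bitstring."""
    freqs = Counter(symbols)
    # Heap entries: [weight, tiebreak_id, [(symbol, partial_code), ...]]
    heap = [[w, i, [(s, "")]] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {heap[0][2][0][0]: "0"}
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)            # two least-weight subtrees
        hi = heapq.heappop(heap)
        lo[2] = [(s, "0" + c) for s, c in lo[2]]  # extend codes downward
        hi[2] = [(s, "1" + c) for s, c in hi[2]]
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak, lo[2] + hi[2]])
        tiebreak += 1
    return dict(heap[0][2])

code = huffman_code("abracadabra")
bits = "".join(code[s] for s in "abracadabra")  # frequent symbols get short codes
```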

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
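
A minimal sketch of the stationary (autocorrelation) method mentioned above, together with its use for DPCM-style compression of a signal such as an ECG. This is numpy-based illustration code; a production implementation would use the Levinson-Durbin recursion on the Toeplitz system rather than a direct solve.

```python
import numpy as np

def lpc_coefficients(x, order):
    """All-pole linear prediction via the autocorrelation method:
    solve the normal equations R a = r for predictor coefficients a."""
    x = np.asarray(x, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])  # x_hat[n] = sum_k a[k] x[n-1-k]

def prediction_residual(x, a):
    """DPCM-style residual: signal minus its linear prediction.
    The residual has lower variance, hence fewer bits after entropy coding."""
    order = len(a)
    x = np.asarray(x, dtype=float)
    e = x.copy()                       # first `order` samples pass through
    for n in range(order, len(x)):
        e[n] = x[n] - np.dot(a, x[n - 1::-1][:order])
    return e
```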

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
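
The interval-narrowing idea fits in a few lines. This is an educational sketch using exact rationals; practical coders, including the one this paper describes, use scaled integer arithmetic with incremental bit output, and the decoder (which needs the message length or an end-of-stream symbol) is omitted here.

```python
from fractions import Fraction

def arithmetic_encode(message, probs):
    """Encode a message as an interval in [0, 1): each symbol narrows
    the current interval in proportion to its probability."""
    # Cumulative sub-interval [lo, hi) for each symbol.
    edges, cum = {}, Fraction(0)
    for sym, p in probs.items():
        edges[sym] = (cum, cum + p)
        cum += p
    low, width = Fraction(0), Fraction(1)
    for sym in message:
        s_lo, s_hi = edges[sym]
        low, width = low + width * s_lo, width * (s_hi - s_lo)
    return low, low + width  # any number in [low, high) identifies the message

lo, hi = arithmetic_encode("aab", {"a": Fraction(2, 3), "b": Fraction(1, 3)})
```

The final interval width equals the product of the symbol probabilities, so roughly -log2(width) bits suffice to pin down a number inside it, which is the source of the method's near-optimal compression.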

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
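
As a small demonstration of why such transforms compress well (energy compaction), the sketch below keeps only the largest-magnitude DCT coefficients of a signal and reconstructs from them. SciPy is assumed available; the keep ratio and the test signal are arbitrary illustrations.

```python
import numpy as np
from scipy.fft import dct, idct  # SciPy assumed available

def dct_compress(x, keep_ratio=0.1):
    """Zero all but the largest-magnitude fraction of DCT coefficients,
    then reconstruct; smooth signals survive aggressive truncation."""
    c = dct(x, norm="ortho")
    k = max(1, int(len(c) * keep_ratio))
    smallest = np.argsort(np.abs(c))[:-k]  # indices of all but the k largest
    c[smallest] = 0.0
    return idct(c, norm="ortho")

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
x_rec = dct_compress(x, keep_ratio=0.05)  # ~20:1 coefficient reduction
```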

928 citations