Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
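As a concrete illustration of one of the direct schemes named above, here is a minimal sketch of the Turning Point algorithm as it is commonly described: a fixed 2:1 reducer that, from each pair of incoming samples, retains the one that preserves a change in slope direction. This is an illustrative reconstruction, not the paper's reference implementation.

```python
def turning_point(signal):
    """Turning Point (TP) compression sketch: fixed 2:1 reduction.

    From each pair (x1, x2) following the last retained sample x0,
    keep x1 if the slope changes sign there (a 'turning point'),
    otherwise keep x2."""
    out = [signal[0]]
    x0 = signal[0]
    i = 1
    while i + 1 < len(signal):
        x1, x2 = signal[i], signal[i + 1]
        s1 = (x1 > x0) - (x1 < x0)      # sign of slope x0 -> x1
        s2 = (x2 > x1) - (x2 < x1)      # sign of slope x1 -> x2
        x0 = x1 if s1 * s2 < 0 else x2  # retain the turning point
        out.append(x0)
        i += 2
    return out
```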
Citations
Journal ArticleDOI
TL;DR: A new approach is proposed for reconstructing ECG signals from undersampled data, based on constructing a combined overcomplete dictionary with which compressive sensing can find a sparse approximation.
Abstract: We propose a new approach to reconstructing the ECG signal from undersampled data based on constructing a combined overcomplete dictionary. The dictionary is obtained by combining a dictionary trained by the K-SVD dictionary-learning algorithm with a universal dictionary such as a DCT or wavelet basis. Using the trained overcomplete dictionary, the proposed method can find a sparse approximation by compressive sensing. Experimental results on the MIT-BIH arrhythmia database confirm that the proposed algorithm achieves high reconstruction performance while maintaining low distortion.
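A hedged sketch of the sparse-approximation step this abstract describes, using Orthogonal Matching Pursuit over a combined dictionary. The random `learned` block is a stand-in for atoms a K-SVD stage would produce; the DCT block is the fixed universal part. Names and sizes are illustrative, not from the paper.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D, then
    least-squares fit y on the selected support."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

n = 64
# Fixed universal part: normalized DCT-II basis vectors.
dct = np.cos(np.pi / n * np.outer(np.arange(n) + 0.5, np.arange(n)))
dct /= np.linalg.norm(dct, axis=0)
# Stand-in for K-SVD-trained atoms (random here; trained in the paper).
learned = np.random.randn(n, 32)
learned /= np.linalg.norm(learned, axis=0)
D = np.hstack([learned, dct])                  # combined overcomplete dictionary

y = np.sin(2 * np.pi * 5 * np.arange(n) / n)   # toy signal segment
y_rec = D @ omp(D, y, k=6)                     # sparse approximation of y
```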

5 citations

Proceedings ArticleDOI
07 Jul 2008
TL;DR: A two-dimensional (2-D) wavelet-based electrocardiogram (ECG) data compression method is presented which employs set partitioning in hierarchical trees (SPIHT) and run-length (RL) coding.
Abstract: A two-dimensional (2-D) wavelet-based electrocardiogram (ECG) data compression method is presented which employs set partitioning in hierarchical trees (SPIHT) and run-length (RL) coding. The proposed 2-D approach exploits the fact that ECG signals generally show redundancy between adjacent beats and between adjacent samples. A set of wavelet functions for implementation on the 2-D ECG array was also examined: eight different wavelets are evaluated for their ability to compress ECG. Results show that RL coding increases the compression ratio at the same error level. Moreover, biorthogonal-6.8 is clearly the best performer among the eight evaluated wavelets in the statistical measures.
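The beat-alignment step that makes the 2-D approach work, plus a generic zero-run RL coder, can be sketched as follows. The R-peak positions are assumed to come from an external QRS detector, and the RL coder is a stand-in for the paper's RL stage, not its exact scheme.

```python
import numpy as np

def beats_to_image(signal, r_peaks, width):
    """Stack a fixed-width window starting at each detected R-peak into
    the rows of a 2-D array, so beat-to-beat redundancy becomes
    row-to-row redundancy for a 2-D wavelet transform."""
    rows = [signal[r:r + width] for r in r_peaks if r + width <= len(signal)]
    return np.vstack(rows)

def rle_zeros(symbols):
    """Run-length encode a stream dominated by zeros (as quantized
    wavelet coefficients are): emit (zero_run, nonzero_value) pairs."""
    pairs, run = [], 0
    for s in symbols:
        if s == 0:
            run += 1
        else:
            pairs.append((run, s))
            run = 0
    if run:
        pairs.append((run, 0))   # trailing zeros, flagged by value 0
    return pairs
```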

5 citations


Cites background from "ECG data compression techniques-a unified approach"

  • ...By observing the ECG waveforms, it can be concluded that heartbeat signals generally show considerable similarity between adjacent heartbeats, along with short-term correlation between adjacent samples, so using a two-dimensional ECG representation can improve the compression efficiency [4],[8]....


Proceedings ArticleDOI
01 May 2017
TL;DR: A novel hybrid ECG signal data compression technique is proposed, in which lossless compression is applied on QRS segments and lossy compression is applied on other segments, without actually implementing any wave-recognition algorithm.
Abstract: A single cycle of an ECG signal is composed of multiple segments. The QRS segment is considered the most important segment for accurate diagnosis in many heart-related disorders, and this segment should be preserved against any major signal distortion during the process of compression. In this paper, a novel hybrid ECG signal data compression technique is proposed, in which lossless compression is applied on QRS segments and lossy compression is applied on other segments, without actually implementing any wave-recognition algorithm. Experimental results have shown that, with the optimal selection of threshold and aperture size, it is possible to preserve the quality of QRS segments, enhancing the diagnostic capability of the reconstructed signal while achieving higher compression efficiency at the same time.
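The paper's exact segment rule is not reproduced here, but the idea can be sketched under one plausible assumption: treat locally steep regions as QRS-like and copy them verbatim (lossless), and apply an aperture-based zero-order hold elsewhere (lossy). `slope_thresh` and `aperture` correspond to the threshold and aperture size the abstract mentions; the decision rule itself is hypothetical.

```python
def hybrid_compress(x, slope_thresh, aperture):
    """Hypothetical hybrid compressor sketch: samples in steep (QRS-like)
    regions are stored verbatim; flat regions emit a sample only when the
    signal drifts outside the aperture around the last held value."""
    kept = [(0, x[0])]              # (index, value) pairs
    held = x[0]
    for i in range(1, len(x)):
        steep = abs(x[i] - x[i - 1]) >= slope_thresh
        if steep or abs(x[i] - held) > aperture:
            kept.append((i, x[i]))
            held = x[i]
    return kept   # reconstruct by holding each value until the next index
```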

5 citations


Cites background or methods from "ECG data compression techniques-a unified approach"

  • ...Even though different ECG compression techniques have been proposed in the literature [1] [2], there is a need for a new technique that can achieve conflicting objectives: obtaining a higher compression ratio while preserving the original data without significant loss of information, and doing so without requiring high-performance hardware or long run times....


  • ...Some examples are the amplitude zone time epoch coding (AZTEC) method, the coordinate reduction time encoding system (CORTES), the turning point (TP) method, the scan-along polygonal approximation (SAPA) method, delta coding, SLOPE, and the Fan algorithm [4][5][6][1][7]....


Proceedings ArticleDOI
23 May 2016
TL;DR: It is shown that the LC-ADC is more suitable for the digitization of the biomedical signal and allows achieving a percentage root-mean-square difference (PRD) of less than 2%, which is the criterion for efficient reconstructed-signal quality.
Abstract: The wireless biomedical acquisition and transmission (Wibio'ACT) project is a contributive project for real-time electrocardiogram (ECG) monitoring based on a special level-crossing analog-to-digital converter (LC-ADC). The authors present, as the first step in their design flow, modeling and simulation results of the converter in the presence of noise in the ECG and of comparator errors. Compared to pulse code modulation (PCM) and differential pulse code modulation (DPCM), it is shown that the LC-ADC is more suitable for the digitization of the biomedical signal. Using the LC-ADC allows achieving a percentage root-mean-square difference (PRD) of less than 2%, which is the criterion for efficient reconstructed-signal quality. Regarding the comparator non-idealities, a comparator delay higher than 4 ns, equal to 0.21% of the LC-ADC timer clock period, may imply a PRD higher than 2%. Besides, the impact of the comparator offset voltage varies with the design considerations, namely the bit resolution and the timer clock period.
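A minimal model of the level-crossing conversion and the PRD criterion used above. `delta` is the level spacing, and the comparator is assumed ideal (no delay or offset, unlike the non-idealities the paper studies).

```python
import numpy as np

def level_cross_sample(x, fs, delta):
    """Idealized LC-ADC: emit a (time, level) pair each time the input
    crosses the next quantization level, spaced delta apart."""
    times, levels = [0.0], [x[0]]
    level = x[0]
    for i in range(1, len(x)):
        while x[i] >= level + delta:        # upward crossings
            level += delta
            times.append(i / fs)
            levels.append(level)
        while x[i] <= level - delta:        # downward crossings
            level -= delta
            times.append(i / fs)
            levels.append(level)
    return np.array(times), np.array(levels)

def prd(x, x_rec):
    """Percentage root-mean-square difference: the reconstructed-signal
    quality criterion (PRD < 2% in the paper)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```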

5 citations

Proceedings ArticleDOI
01 Feb 2015
TL;DR: The simulation results show that the proposed method achieves a high compression ratio at relatively low distortion in comparison with other methods, and clearly demonstrate the algorithm's efficiency in saving a great amount of storage space, bandwidth, and power consumption, especially in data transmission for tele-health or m-health systems.
Abstract: This paper explores a new algorithm for ECG signal compression based on joint multiresolution analysis (J-MRA) using a Gaussian pyramid and wavelet analysis. From a signal-compression perspective, MRA plays a key role in representing a signal with a small number of coefficients while preserving its essential features. The proposed algorithm has been tested on 10-second segments of 19 ECG signals from the MIT-BIH Arrhythmia database and compared with recent contemporary techniques. The simulation results show that the proposed method achieves a high compression ratio at relatively low distortion in comparison with other methods. The analysis contains various simulation results, where the average compression is 86.14% at 4.96% PRD, and the correlation found between the original and reconstructed signals is 0.998. This clearly demonstrates the algorithm's efficiency in saving a great amount of storage space, bandwidth, and power consumption, especially in data transmission for tele-health or m-health systems.
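The Gaussian-pyramid half of the joint MRA can be sketched as repeated binomial smoothing and 2:1 decimation. The 5-tap kernel below is the usual choice; the paper's exact filter is not specified here.

```python
import numpy as np

def gaussian_pyramid(x, levels):
    """1-D Gaussian pyramid: smooth with a 5-tap binomial kernel, then
    downsample by 2, repeating `levels` times. Coarser levels carry the
    signal's large-scale shape in ever fewer coefficients."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    pyramid = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        smoothed = np.convolve(pyramid[-1], kernel, mode="same")
        pyramid.append(smoothed[::2])
    return pyramid
```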

5 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
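The limiting process described above can be written out for entropy: partition the line into cells of width Δ, so cell i carries probability approximately p(x_i)Δ. A standard identity (not quoted from the paper) then separates a differential-entropy term from a term recording the partition's resolution:

```latex
% Discrete entropy of a density p(x) sampled on cells of width \Delta:
H_\Delta = -\sum_i p(x_i)\,\Delta\,\log\bigl(p(x_i)\,\Delta\bigr)
         \approx \underbrace{-\int p(x)\log p(x)\,dx}_{\text{differential entropy}}
         \;+\; \log\frac{1}{\Delta}
         \qquad \text{as } \Delta \to 0 .
```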

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
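A compact sketch of the minimum-redundancy construction: repeatedly merge the two least-probable subtrees, prefixing a bit to each side's codewords. This is the textbook form of the algorithm, not code from the paper.

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy prefix code: repeatedly merge the two
    least-weight subtrees, prepending '0' to one side's codewords and
    '1' to the other's."""
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)                     # keeps heap entries comparable
    while len(heap) > 1:
        wa, _, a = heapq.heappop(heap)
        wb, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, [wa + wb, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

codebook = huffman_code("abracadabra")
# Frequent symbols receive short codewords, e.g. 'a' -> '0'.
```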

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
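A sketch of the stationary (autocorrelation-method) case, solved with the Levinson-Durbin recursion; it returns both the all-pole coefficients and the reflection (PARCOR) coefficients that the review singles out for quantization and encoding. Details such as windowing are omitted.

```python
import numpy as np

def lpc(x, order):
    """All-pole linear prediction by the autocorrelation method.
    Returns (a, k): the predictor polynomial a (a[0] = 1) and the
    reflection / PARCOR coefficients k, the quantities typically
    quantized for transmission."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]                                   # prediction error power
    for i in range(1, order + 1):
        # Correlation of the current predictor with the next lag.
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]
        k[i - 1] = -acc / err
        a[1:i + 1] += k[i - 1] * a[i - 1::-1][:i]  # Levinson update
        err *= 1.0 - k[i - 1] ** 2
    return a, k
```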

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method: arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
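The interval-narrowing idea can be shown in a few lines. This floating-point version is purely conceptual: real coders use scaled integer arithmetic with incremental bit output to avoid precision loss, and the fixed `model` here is a toy stand-in for the adaptive models the abstract mentions.

```python
def arithmetic_encode(message, model):
    """Conceptual arithmetic encoder: shrink [low, high) to each
    symbol's probability slice; any number in the final interval
    identifies the whole message."""
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        cum = 0.0
        for s, p in model.items():       # model: symbol -> probability
            if s == sym:
                low, high = low + span * cum, low + span * (cum + p)
                break
            cum += p
    return (low + high) / 2

model = {"a": 0.6, "b": 0.3, "c": 0.1}
tag = arithmetic_encode("aabac", model)  # one number encodes all 5 symbols
```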

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
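One of the surveyed transforms, Walsh-Hadamard, is easy to build by the Sylvester recursion. This toy sketch shows the matrix construction and the energy compaction that the review's performance criteria measure; the signal and size are illustrative.

```python
import numpy as np

def hadamard(n):
    """Orthonormal Walsh-Hadamard matrix (n a power of two), built by
    the Sylvester recursion H_{2m} = [[H_m, H_m], [H_m, -H_m]]."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

n = 8
x = np.sin(2 * np.pi * np.arange(n) / n)   # toy signal
c = hadamard(n) @ x                        # transform coefficients
# Compression exploits energy compaction: a few large coefficients
# carry most of the signal energy; small ones can be dropped.
```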

928 citations