Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
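The DPCM idea surveyed above can be sketched in a few lines: predict each sample from the previous reconstructed one and transmit only the quantized prediction residual. A minimal illustration — the first-order predictor, step size, and toy waveform below are assumptions for demonstration, not taken from the paper:

```python
# Minimal first-order DPCM sketch: predict each sample from the previous
# reconstructed sample and quantize only the prediction residual.
def dpcm_encode(samples, step=4):
    """Return quantized residual indices for a sample sequence."""
    indices = []
    prediction = 0
    for x in samples:
        residual = x - prediction
        q = round(residual / step)          # uniform quantizer
        indices.append(q)
        prediction += q * step              # track the decoder's reconstruction
    return indices

def dpcm_decode(indices, step=4):
    """Rebuild the waveform by accumulating dequantized residuals."""
    samples, prediction = [], 0
    for q in indices:
        prediction += q * step
        samples.append(prediction)
    return samples

ecg = [0, 3, 9, 18, 24, 22, 12, 4, 1, 0]    # toy waveform, not real ECG data
codes = dpcm_encode(ecg)
rebuilt = dpcm_decode(codes)
# Because the encoder predicts from the decoder's reconstruction, the
# per-sample error never exceeds half the quantizer step.
assert max(abs(a - b) for a, b in zip(ecg, rebuilt)) <= 2
```

The residual indices `codes` have a much narrower range than the raw samples, which is what makes the subsequent entropy-coding stage effective.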
Citations
Journal ArticleDOI
TL;DR: In this review, the emerging role of the wavelet transform in the interrogation of the ECG is discussed in detail, where both the continuous and the discrete transform are considered in turn.
Abstract: The wavelet transform has emerged over recent years as a powerful time-frequency analysis and signal coding tool favoured for the interrogation of complex nonstationary signals. Its application to biosignal processing has been at the forefront of these developments where it has been found particularly useful in the study of these, often problematic, signals: none more so than the ECG. In this review, the emerging role of the wavelet transform in the interrogation of the ECG is discussed in detail, where both the continuous and the discrete transform are considered in turn.
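As a concrete, hypothetical illustration of the wavelet coding discussed in this review, here is one level of an unnormalized Haar transform — the simplest discrete wavelet. Pairwise averages capture the smooth trend, half-differences capture detail, and zeroing small detail coefficients is the lossy compression step (the signal and threshold are invented for the example):

```python
def haar_dwt(signal):
    """One level of an unnormalized Haar transform: pairwise averages
    (smooth trend) and half-differences (detail)."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level exactly: a = avg + diff, b = avg - diff."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

x = [4, 6, 10, 12, 8, 6, 5, 5]
approx, detail = haar_dwt(x)
assert haar_idwt(approx, detail) == x   # perfect reconstruction, no thresholding
# Zeroing near-zero detail coefficients is where the (lossy) compression happens.
thresholded = [d if abs(d) > 0.5 else 0.0 for d in detail]
```

Practical codecs cascade several such levels and use smoother wavelets, but the analysis/synthesis split is the same.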

794 citations


Cites methods from "ECG data compression techniques-a u..."

  • Transform methods, as their name implies, operate by first transforming the ECG signal into another domain including Fourier, Walsh, Karhunen-Loeve, discrete cosine transforms and more recently the wavelet transform (Jalaleddine et al 1990).


Journal ArticleDOI
TL;DR: This paper quantifies the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote and shows that CS represents a competitive alternative to state-of-the-art digital wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems.
Abstract: Wireless body sensor networks (WBSN) hold the promise to be a key enabling information and communications technology for next-generation patient-centric telecardiology or mobile cardiology solutions. Through enabling continuous remote cardiac monitoring, they have the potential to achieve improved personalization and quality of care, increased ability of prevention and early diagnosis, and enhanced patient autonomy, mobility, and safety. However, state-of-the-art WBSN-enabled ECG monitors still fall short of the required functionality, miniaturization, and energy efficiency. Among others, energy efficiency can be improved through embedded ECG compression, in order to reduce airtime over energy-hungry wireless links. In this paper, we quantify the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote. Interestingly, our results show that CS represents a competitive alternative to state-of-the-art digital wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems. More specifically, while expectedly exhibiting inferior compression performance than its DWT-based counterpart for a given reconstructed signal quality, its substantially lower complexity and CPU execution time enables it to ultimately outperform DWT-based ECG compression in terms of overall energy efficiency. CS-based ECG compression is accordingly shown to achieve a 37.1% extension in node lifetime relative to its DWT-based counterpart for “good” reconstruction quality.
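The encoder-side economy the paper exploits can be sketched as a single random projection: the sensor node transmits m inner products instead of n samples. Everything below (the ±1 Bernoulli matrix, the sizes, the stand-in frame) is illustrative rather than the paper's actual setup, and the decoder's sparse-recovery step (e.g. l1 minimization) is deliberately omitted since it runs off-node:

```python
import random

def sensing_matrix(m, n, seed=0):
    """Random +/-1 Bernoulli sensing matrix (each row yields one measurement)."""
    rng = random.Random(seed)
    return [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

def cs_encode(x, phi):
    """Compressed-sensing encoder: y = Phi @ x. Just n multiply-accumulates
    per measurement -- far cheaper on a sensor node than computing a DWT."""
    return [sum(p * v for p, v in zip(row, x)) for row in phi]

n, m = 256, 64                              # 4:1 compression of one frame
frame = [float(i % 17) for i in range(n)]   # stand-in for an ECG frame
phi = sensing_matrix(m, n)
y = cs_encode(frame, phi)
assert len(y) == m                          # transmit m values instead of n
```

The energy argument in the abstract comes precisely from this asymmetry: the node only does the cheap projection, while the costly reconstruction happens at the receiver.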

680 citations

Journal ArticleDOI
TL;DR: This statement examines the relation of the resting ECG to its technology, with the aim of establishing standards that improve the accuracy and usefulness of the ECG in practice, and makes recommendations for ECG standards.

649 citations

Journal ArticleDOI
TL;DR: A wavelet electrocardiogram (ECG) data codec based on the set partitioning in hierarchical trees (SPIHT) compression algorithm is proposed and is significantly more efficient in compression and in computation than previously proposed ECG compression schemes.
Abstract: A wavelet electrocardiogram (ECG) data codec based on the set partitioning in hierarchical trees (SPIHT) compression algorithm is proposed in this paper. The SPIHT algorithm (A. Said and W. A. Pearlman, IEEE Trans. Circuits Syst. Video Technol., vol. 6, pp. 243-250, 1996) has achieved notable success in still image coding. The authors modified the algorithm for the one-dimensional case and applied it to compression of ECG data. Experiments on selected records from the MIT-BIH arrhythmia database revealed that the proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit rate control and generates a bit stream progressive in quality or rate.
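SPIHT itself exploits zerotrees across wavelet scales, which is too much to reproduce here; the toy bit-plane coder below shows only the two properties the abstract highlights — a bitstream progressive in quality and exact bit-rate control. The function names and coefficient list are hypothetical:

```python
def bitplane_encode(coeffs, budget):
    """Send non-negative coefficient magnitudes from the most significant
    bit-plane downward, stopping exactly at `budget` bits (exact rate control)."""
    planes = max(coeffs).bit_length()
    bits = []
    for p in range(planes - 1, -1, -1):
        for c in coeffs:
            if len(bits) == budget:
                return bits, planes
            bits.append((c >> p) & 1)
    return bits, planes

def bitplane_decode(bits, n, planes):
    """Any prefix of the stream yields a valid coarse reconstruction."""
    values, i = [0] * n, 0
    for p in range(planes - 1, -1, -1):
        for j in range(n):
            if i == len(bits):
                return values
            values[j] |= bits[i] << p
            i += 1
    return values

coeffs = [5, 0, 3, 7]                       # toy wavelet-coefficient magnitudes
bits, planes = bitplane_encode(coeffs, budget=8)
assert len(bits) == 8                       # the bit budget is met exactly
assert bitplane_decode(bits, 4, planes) == [4, 0, 2, 6]  # top two bit-planes only
```

Truncating the stream anywhere degrades quality gracefully instead of breaking the decode, which is what "progressive in quality or rate" means in practice.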

521 citations

References
Journal ArticleDOI
TL;DR: A highly accurate approximation is realized that the linear approximation of the traditional AZTEC method does not achieve, and the method is suited to long-term monitoring of the electrocardiogram to record abnormal waveforms.
Abstract: This paper proposes a highly efficient encoding method for the electrocardiogram using spline functions. The method consists of an extraction algorithm, which extracts feature samples from the electrocardiogram output by the A/D converter, and a restoration algorithm, which restores the original waveform from the extracted samples using spline functions. In the extraction algorithm, maxima and minima that are not due to noise are extracted from the original waveform. Then the spline function is applied to perform a smooth interpolation. By this scheme, a highly accurate approximation is realized, which has not been achieved by the linear approximation of the traditional AZTEC method. According to the experimental results, the root-mean-square error of the proposed method is approximately half that of the AZTEC method at the same compression ratio. The proposed method uses a simple extraction algorithm and can be implemented on a microprocessor. Thus, it is suited to long-term monitoring of the electrocardiogram to record abnormal waveforms.
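The extract-and-restore split described above can be sketched as follows. Note two deliberate simplifications: restoration here uses linear interpolation for brevity where the paper fits splines, and the noise floor, helper names, and toy waveform are all assumptions:

```python
def extract_extrema(samples, noise_floor=1.0):
    """Keep the endpoints plus local maxima/minima whose height above both
    neighbours exceeds a noise floor -- a simplified feature extractor."""
    kept = [(0, samples[0])]
    for i in range(1, len(samples) - 1):
        left, mid, right = samples[i - 1], samples[i], samples[i + 1]
        is_max = mid > left and mid > right
        is_min = mid < left and mid < right
        if (is_max or is_min) and min(abs(mid - left), abs(mid - right)) >= noise_floor:
            kept.append((i, mid))
    kept.append((len(samples) - 1, samples[-1]))
    return kept

def restore(kept, length):
    """Rebuild the waveform by interpolating between the kept samples.
    (The paper fits splines here; linear interpolation keeps the sketch short.)"""
    out = [0.0] * length
    for (i0, v0), (i1, v1) in zip(kept, kept[1:]):
        for i in range(i0, i1 + 1):
            t = (i - i0) / (i1 - i0)
            out[i] = v0 + t * (v1 - v0)
    return out

wave = [0, 1, 4, 9, 4, 1, 0, -2, -6, -2, 0, 0]   # toy waveform
kept = extract_extrema(wave)                     # 4 samples kept of 12
rebuilt = restore(kept, len(wave))
```

Swapping `restore` for a spline fit through the same kept samples is exactly the paper's refinement over AZTEC-style piecewise-linear restoration.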

38 citations

Journal ArticleDOI
H. Tanaka
TL;DR: The tree structure is represented by a two-dimensional array which can be applied to the decoding of Huffman codes as the state transition table of a finite-state decoding automaton.
Abstract: The data structure of Huffman codes and its application to efficient encoding and decoding of Huffman codes are studied in detail. The tree structure is represented by a two-dimensional array, which can be applied to the decoding of Huffman codes as the state transition table of a finite-state decoding automaton. Inverting it produces a one-dimensional state transition table of a semiautonomous finite-state sequential machine, which can be used as a Huffman encoder with a push-down stack. The encoding and decoding procedures are simple and efficient; they are not only implementable in simple hardware but also well suited to software implementation.
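A minimal sketch of the decoding side described above, assuming a small hand-made prefix code rather than one derived from real symbol statistics: the code tree is flattened into a two-dimensional array so that decoding is one table lookup per input bit.

```python
def build_decode_table(codes):
    """Flatten a prefix code's tree into a 2-D array: table[state][bit] is
    either ('goto', next_state) for an internal node or ('emit', symbol)
    for a leaf, so the decoder is a finite-state automaton."""
    table = [[None, None]]                  # state 0 = root of the code tree
    for symbol, bits in codes.items():      # codes must be prefix-free
        state = 0
        for i, bit in enumerate(bits):
            b = int(bit)
            if i == len(bits) - 1:
                table[state][b] = ('emit', symbol)
            else:
                if table[state][b] is None:
                    table.append([None, None])
                    table[state][b] = ('goto', len(table) - 1)
                state = table[state][b][1]
    return table

def decode(bitstring, table):
    """Decode a bit string with exactly one table lookup per input bit."""
    out, state = [], 0
    for bit in bitstring:
        kind, value = table[state][int(bit)]
        if kind == 'emit':
            out.append(value)
            state = 0                       # back to the root for the next codeword
        else:
            state = value
    return out

# A small prefix-free code (hypothetical symbol probabilities).
codes = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
table = build_decode_table(codes)
assert decode('010110111', table) == ['a', 'b', 'c', 'd']
```

The table is exactly the "state transition table of the finite-state decoding automaton" of the abstract: no pointer chasing, just indexed lookups, which is why it maps cleanly onto simple hardware.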

37 citations

Journal ArticleDOI
J. B. O'Neal Jr.
01 Mar 1967
TL;DR: An upper bound on the signal-to-quantizing noise power ratio possible for a given bit rate and signal ensemble is derived, and this bound is compared with the operation of pulse code modulation, differential pulse code modulation, and delta modulation systems.
Abstract: When an analog signal is encoded into digital form and then decoded back into an analog signal, quantizing noise is always introduced. The amount of quantizing noise which contaminates the decoded analog signal is inextricably tied to the amount of redundancy present in the signal and in the digital bit stream. Reducing the quantizing noise and, therefore, increasing the fidelity of the resulting signal requires that the redundancy in the digital bit stream be reduced or eliminated. There is a point, however, beyond which the quantizing noise cannot be further reduced. This is discussed in quantitative terms by deriving an upper bound on the signal-to-quantizing noise power ratio possible for a given bit rate and signal ensemble. Ratios of signal-to-quantizing noise greater than this bound are not possible for digital encoding systems. This bound is compared with the operation of pulse code modulation, differential pulse code modulation, and delta modulation systems.
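The bound intuition can be checked numerically for the textbook case of a full-scale uniform source and a uniform quantizer — an illustrative setup, not the paper's derivation. Quantizing noise power is step²/12, so the signal-to-quantizing-noise ratio improves by roughly 6.02 dB per bit and cannot be improved further at a fixed rate:

```python
import math
import random

def quantizer_snr_db(bits, n=100_000, seed=1):
    """Empirical SNR of a uniform mid-rise quantizer on a uniform source
    in [-1, 1): noise power is step**2 / 12, so SNR grows by about
    6.02 dB for every extra bit of rate."""
    rng = random.Random(seed)
    step = 2.0 / (1 << bits)
    signal_power = noise_power = 0.0
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        q = (math.floor(x / step) + 0.5) * step   # mid-rise reconstruction level
        signal_power += x * x
        noise_power += (x - q) ** 2
    return 10.0 * math.log10(signal_power / noise_power)

for bits in (6, 8, 10):
    snr = quantizer_snr_db(bits)
    assert abs(snr - 6.02 * bits) < 0.2           # matches the 6.02 dB/bit rule
```

DPCM beats plain PCM here not by breaking this bound but by shrinking the variance of what gets quantized, which is the redundancy argument the abstract makes.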

37 citations


"ECG data compression techniques-a u..." refers background in this paper

  • It should also be noted that if the residual signal has a Gaussian distribution, minimum variance implies minimum entropy [40].

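The cited fact follows from the differential entropy of a Gaussian, which is monotone increasing in the variance:

```latex
h(X) = \tfrac{1}{2}\log_2\!\left(2\pi e \sigma^2\right) \ \text{bits per sample}
```

So among Gaussian residuals, minimizing the prediction-error variance σ² also minimizes the entropy, and hence the rate achievable by a subsequent entropy coder.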

Journal ArticleDOI
L. Wilkins, P. Wintz
TL;DR: Published papers and reports dealing with data compression, picture properties and models, picture coding and transmission, image enhancement, and human visual information processing are listed.
Abstract: Published papers and reports dealing with data compression, picture properties and models, picture coding and transmission, image enhancement, and human visual information processing are listed.

37 citations