Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
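The DPCM branch described above encodes each sample as its difference from a prediction; for slowly varying stretches of an ECG this yields small residuals that entropy-code well. A minimal first-order sketch in Python (the sample values are hypothetical, and the predictor is simply the previous sample rather than any particular scheme from the survey):

```python
def dpcm_encode(samples):
    """First-order DPCM: transmit the first sample, then successive differences."""
    diffs = [samples[0]]
    for i in range(1, len(samples)):
        diffs.append(samples[i] - samples[i - 1])
    return diffs

def dpcm_decode(diffs):
    """Invert DPCM by cumulative summation."""
    samples = [diffs[0]]
    for d in diffs[1:]:
        samples.append(samples[-1] + d)
    return samples

ecg = [512, 515, 521, 530, 610, 700, 615, 525, 514, 512]  # hypothetical samples
encoded = dpcm_encode(ecg)
assert dpcm_decode(encoded) == ecg  # DPCM alone is lossless; gains come from coding the small diffs
```

The residuals would then be passed to a quantizer and/or entropy coder, which is where the actual bit-rate reduction happens.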
Citations
Journal ArticleDOI
TL;DR: In this review, the emerging role of the wavelet transform in the interrogation of the ECG is discussed in detail, where both the continuous and the discrete transform are considered in turn.
Abstract: The wavelet transform has emerged over recent years as a powerful time-frequency analysis and signal coding tool favoured for the interrogation of complex nonstationary signals. Its application to biosignal processing has been at the forefront of these developments where it has been found particularly useful in the study of these, often problematic, signals: none more so than the ECG. In this review, the emerging role of the wavelet transform in the interrogation of the ECG is discussed in detail, where both the continuous and the discrete transform are considered in turn.
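The discrete transform discussed in the review can be illustrated with the simplest wavelet, the Haar. The sketch below uses the unnormalised averaging/differencing variant (not the orthonormal 1/sqrt(2) scaling) purely for readability; it is a generic illustration, not code from the cited work:

```python
def haar_dwt(signal):
    """One level of the (unnormalised) Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients; len(signal) must be even."""
    approx, detail = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2.0)  # local average
        detail.append((a - b) / 2.0)  # local difference
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of one Haar level."""
    signal = []
    for s, d in zip(approx, detail):
        signal.extend([s + d, s - d])
    return signal

a, d = haar_dwt([2, 4, 6, 8])
assert haar_idwt(a, d) == [2, 4, 6, 8]
```

Coding gain comes from recursing on the approximation band and coarsely quantising the detail coefficients, which are near zero wherever the signal is smooth.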

794 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...Transform methods, as their name implies, operate by first transforming the ECG signal into another domain including Fourier, Walsh, Karhunen-Loeve, discrete cosine transforms and more recently the wavelet transform (Jalaleddine et al 1990)....


Journal ArticleDOI
TL;DR: This paper quantifies the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote and shows that CS represents a competitive alternative to state- of- the-art digital wavelet transform (DWT)-basedECG compression solutions in the context of WBSn-based ECG monitoring systems.
Abstract: Wireless body sensor networks (WBSN) hold the promise to be a key enabling information and communications technology for next-generation patient-centric telecardiology or mobile cardiology solutions. Through enabling continuous remote cardiac monitoring, they have the potential to achieve improved personalization and quality of care, increased ability of prevention and early diagnosis, and enhanced patient autonomy, mobility, and safety. However, state-of-the-art WBSN-enabled ECG monitors still fall short of the required functionality, miniaturization, and energy efficiency. Among others, energy efficiency can be improved through embedded ECG compression, in order to reduce airtime over energy-hungry wireless links. In this paper, we quantify the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote. Interestingly, our results show that CS represents a competitive alternative to state-of-the-art digital wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems. More specifically, while expectedly exhibiting inferior compression performance than its DWT-based counterpart for a given reconstructed signal quality, its substantially lower complexity and CPU execution time enables it to ultimately outperform DWT-based ECG compression in terms of overall energy efficiency. CS-based ECG compression is accordingly shown to achieve a 37.1% extension in node lifetime relative to its DWT-based counterpart for “good” reconstruction quality.
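The CS acquisition step itself is only a matrix multiplication, which is why the encoder is so cheap compared with a DWT pipeline. A pure-Python sketch of the sensing side, using a Bernoulli ±1 matrix chosen here for illustration (reconstruction, not shown, requires an l1-minimisation solver and a sparsifying basis):

```python
import random

def sense(x, m, seed=0):
    """Compress a length-n signal x to m < n random projections y = Phi @ x.

    Phi is a seeded Bernoulli (+1/-1) sensing matrix, so the decoder can
    regenerate it from the seed instead of receiving it.
    """
    rng = random.Random(seed)
    n = len(x)
    phi = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
    y = [sum(p * xi for p, xi in zip(row, x)) for row in phi]
    return y, phi

y, phi = sense(list(range(16)), 4)
assert len(y) == 4  # 16 samples reduced to 4 measurements
```

The encoder performs only additions and sign flips per measurement, which is the low-complexity property the paper exploits to extend node lifetime.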

680 citations

Journal ArticleDOI
TL;DR: This statement examines the relation of the resting ECG to its underlying technology in order to establish standards that will improve the accuracy and usefulness of the ECG in practice, and to provide recommendations for ECG standards.

649 citations

Journal ArticleDOI
TL;DR: A wavelet electrocardiogram (ECG) data codec based on the set partitioning in hierarchical trees (SPIHT) compression algorithm is proposed and is significantly more efficient in compression and in computation than previously proposed ECG compression schemes.
Abstract: A wavelet electrocardiogram (ECG) data codec based on the set partitioning in hierarchical trees (SPIHT) compression algorithm is proposed in this paper. The SPIHT algorithm (A. Said and W.A. Pearlman, IEEE Trans. Circuits Syst. Video Technol., vol. 6, pp. 243-250, 1996) has achieved notable success in still image coding. The authors modified the algorithm for the one-dimensional case and applied it to compression of ECG data. Experiments on selected records from the MIT-BIH arrhythmia database revealed that the proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit rate control and generates a bit stream progressive in quality or rate.

521 citations

References
Journal ArticleDOI

869 citations


"ECG data compression techniques-a u..." refers methods in this paper

  • ...One scheme of such polynomial compressors has been employed in speech data compression (called “aperture coding”) [28], [29]....


Journal ArticleDOI
01 Mar 1981
TL;DR: A large variety of algorithms for image data compression is considered: starting with simple techniques of sampling and pulse code modulation (PCM), state-of-the-art algorithms for two-dimensional data transmission are reviewed.
Abstract: With the continuing growth of modern communications technology, demand for image transmission and storage is increasing rapidly. Advances in computer technology for mass storage and digital processing have paved the way for implementing advanced data compression techniques to improve the efficiency of transmission and storage of images. In this paper a large variety of algorithms for image data compression are considered. Starting with simple techniques of sampling and pulse code modulation (PCM), state of the art algorithms for two-dimensional data transmission are reviewed. Topics covered include differential PCM (DPCM) and predictive coding, transform coding, hybrid coding, interframe coding, adaptive techniques, and applications. Effects of channel errors and other miscellaneous related topics are also considered. While most of the examples and image models have been specialized for visual images, the techniques discussed here could be easily adapted more generally for multidimensional data compression. Our emphasis here is on fundamentals of the various techniques. A comprehensive bibliography with comments is included for a reader interested in further details of the theoretical and experimental results discussed here.

810 citations

Journal ArticleDOI
01 Jun 1973
TL;DR: In this article, the authors examined the relative merits of finite-duration impulse response (FIR) and infinite-duration impulse response (IIR) digital filters as interpolation filters and showed that FIR filters are generally to be preferred for interpolation.
Abstract: In many digital signal processing systems, e.g., vocoders, modulation systems, and digital waveform coding systems, it is necessary to alter the sampling rate of a digital signal. Thus it is of considerable interest to examine the problem of interpolation of bandlimited signals from the viewpoint of digital signal processing. A frequency domain interpretation of the interpolation process, through which it is clear that interpolation is fundamentally a linear filtering process, is presented. An examination of the relative merits of finite-duration impulse response (FIR) and infinite-duration impulse response (IIR) digital filters as interpolation filters indicates that FIR filters are generally to be preferred for interpolation. It is shown that linear interpolation and classical polynomial interpolation correspond to the use of particular FIR interpolation filters. The use of classical interpolation methods in signal processing applications is illustrated by a discussion of FIR interpolation filters derived from the Lagrange interpolation formula. The limitations of these filters lead to a consideration of optimum FIR filters for interpolation that can be designed using linear programming techniques. Examples are presented to illustrate the significant improvements that are obtained using the optimum filters.
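The claim that linear interpolation is an FIR filtering operation can be seen directly: zero-stuff the signal by the upsampling factor L, then convolve with the triangular kernel h[k] = 1 - |k|/L. A small pure-Python sketch (illustrative only; a practical resampler would use an optimised polyphase implementation):

```python
def upsample_linear(x, L):
    """Interpolate by integer factor L: insert L-1 zeros between samples,
    then convolve with the triangular FIR kernel h[k] = 1 - |k|/L,
    which reproduces linear interpolation between the original samples."""
    # zero insertion
    zs = []
    for s in x:
        zs.append(s)
        zs.extend([0] * (L - 1))
    # symmetric triangular kernel of length 2L-1, taps at offsets -(L-1)..(L-1)
    h = [1 - abs(k) / L for k in range(-(L - 1), L)]
    # direct convolution, aligned so y[0] coincides with x[0]
    y = []
    for n in range(len(zs)):
        acc = 0.0
        for k, hk in enumerate(h):
            idx = n - (k - (L - 1))
            if 0 <= idx < len(zs):
                acc += hk * zs[idx]
        y.append(acc)
    return y

assert upsample_linear([0, 2, 4], 2)[:5] == [0, 1, 2, 3, 4]
```

Replacing the triangular kernel with longer taps (e.g. Lagrange-derived or linear-programming-optimised coefficients, as in the paper) changes only the kernel, not the structure.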

643 citations

Journal ArticleDOI
TL;DR: A variety of data compression methods are surveyed, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986, which has important application in the areas of file storage and distributed systems.
Abstract: This paper surveys a variety of data compression methods spanning almost 40 years of research, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important application in the areas of file storage and distributed systems. Concepts from information theory as they relate to the goals and evaluation of data compression methods are discussed briefly. A framework for evaluation and comparison of methods is constructed and applied to the algorithms presented. Comparisons of both theoretical and empirical natures are reported, and possibilities for future research are suggested.
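Huffman's construction, central to the entropy-coding methods surveyed, repeatedly merges the two least-frequent subtrees so that rare symbols end up with long codewords and frequent symbols with short ones. A compact sketch that tracks codewords directly instead of building an explicit tree (an implementation convenience, not the paper's formulation):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Return a {symbol: bitstring} prefix code built by Huffman's algorithm."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # left branch
        merged.update({s: "1" + c for s, c in c2.items()})  # right branch
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
# more frequent symbols receive codewords no longer than rarer ones
assert len(codes["a"]) <= len(codes["c"])
```

Encoding is then a table lookup per symbol; decoding walks the bitstream, which is unambiguous because no codeword is a prefix of another.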

581 citations

Journal ArticleDOI
Arun N. Netravali, J. O. Limb
01 Mar 1980
TL;DR: This paper presents a review of techniques used for digital encoding of picture material, covering statistical models of picture signals and elements of psychophysics relevant to picture coding, followed by a description of the coding techniques.
Abstract: This paper presents a review of techniques used for digital encoding of picture material. Statistical models of picture signals and elements of psychophysics relevant to picture coding are covered first, followed by a description of the coding techniques. Detailed examples of three typical systems, which combine some of the coding principles, are given. A bright future for new systems is forecasted based on emerging new concepts, technology of integrated circuits and the need to digitize in a variety of contexts.

551 citations