Journal ArticleDOI

Wavelet and wavelet packet compression of electrocardiograms

01 May 1997 - IEEE Transactions on Biomedical Engineering (IEEE Trans Biomed Eng) - Vol. 44, Iss. 5, pp. 394-402
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
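
As a rough illustration of the pipeline the abstract describes (wavelet decomposition followed by coefficient coding), here is a minimal sketch using PyWavelets. The 'db3' wavelet, the 8:1 retention rule, and the function names are illustrative assumptions, not the authors' settings; a real EZW coder would also entropy-code the zerotree structure.

```python
# Minimal sketch, not the authors' EZW coder: decompose an ECG with a
# discrete wavelet transform, keep only the largest coefficients, and
# reconstruct. Assumes PyWavelets and a 1-D numpy array `ecg`.
import numpy as np
import pywt

def compress_ecg(ecg, wavelet="db3", level=5, ratio=8):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)   # one flat coefficient vector
    keep = max(1, flat.size // ratio)             # retain 1/ratio of coefficients
    cutoff = np.sort(np.abs(flat))[-keep]         # magnitude threshold
    flat[np.abs(flat) < cutoff] = 0.0             # zero the insignificant ones
    coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return pywt.waverec(coeffs, wavelet)[: ecg.size]
```
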
Citations
Proceedings ArticleDOI
01 Dec 2008
TL;DR: A wavelet-based electrocardiogram (ECG) compression algorithm that reduces the bit rate of the ECG and preserves its main clinically diagnostic features by minimizing reconstructed-signal distortion.
Abstract: A wavelet-based electrocardiogram (ECG) compression algorithm is proposed in this paper. The algorithm reduces the bit rate of the ECG while preserving its main clinically diagnostic features by minimizing reconstructed-signal distortion. The original signal is divided into blocks, and each block undergoes a discrete wavelet transform. A threshold based on the energy packing efficiency of the wavelet coefficients is then used to discard insignificant coefficients. To assess the applicability of the technique, the effect of threshold level selection on the quality of the reconstructed signal is evaluated. To demonstrate generality, the method is tested on a large set of normal and abnormal ECG signals extracted from the MIT-BIH database. An average compression ratio of 13.57 with a percent root-mean-square difference (PRD) of 4.87 is achieved, outperforming recently published techniques on most MIT-BIH data sets.
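
The energy-packing-efficiency rule and the PRD figure quoted above can be made concrete with a short sketch; the 99.9% EPE target and the helper names are hypothetical, not taken from the paper.

```python
# Hedged sketch of EPE-based thresholding and one common PRD definition.
import numpy as np

def epe_cutoff(coeffs, epe=0.999):
    """Smallest magnitude cutoff whose surviving coefficients retain
    a fraction `epe` of the total coefficient energy."""
    mags = np.sort(np.abs(coeffs))[::-1]                 # descending magnitudes
    energy = np.cumsum(mags ** 2)
    k = int(np.searchsorted(energy, epe * energy[-1])) + 1
    return mags[min(k, mags.size) - 1]

def prd(x, x_rec):
    """One common definition of the percent root-mean-square difference
    (PRD) between the original and reconstructed signals."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```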

11 citations


Cites methods from "Wavelet and wavelet packet compress..."

  • ...For this purpose, the best selection rule (rule 1) found in our study has been applied to the same data sets used in [2], [3], [9], record 117 of the database....

  • ...These wavelets allow perfect reconstruction of the data using linear-phase filter banks, which in turn avoids reconstruction errors at the beginning and end of data segments [2, 3]....

  • ...For the sake of comparison with other methods [2], [3], [9], ECG signals extracted from the MIT-BIH arrhythmia database are used for experimentation....

Journal ArticleDOI
TL;DR: A method is proposed that can be applied in embedded environments by optimizing the processing time and memory usage of the dynamic programming used for polygonal approximation of an ECG signal, while preserving fiducial point detection performance.
Abstract: Arrhythmia is less frequent than a normal heartbeat in an electrocardiogram signal, and the analysis of an electrocardiogram measurement can require more than 24 hours. Therefore, the efficient storage and transmission of electrocardiogram signals have been studied, and their importance has increased recently due to the miniaturization and weight reduction of measurement equipment. The polygonal approximation method based on dynamic programming can effectively achieve signal compression and fiducial point detection by expressing signals with a small number of vertices. However, the execution time and memory usage increase rapidly with the length of the signal and the number of vertices, which is not suitable for lightweight, miniaturized equipment. In this paper, we propose a method that can be applied in embedded environments by optimizing the processing time and memory usage of the dynamic programming applied to the polygonal approximation of an ECG signal. The proposed method optimizes processing time and memory usage in three steps. The first optimization step is based on the characteristics of electrocardiogram signals in the polygonal approximation. Second, the size of a data bit is used as the threshold for the time difference of each vertex. Finally, a type conversion and memory optimization are applied, which allow real-time processing in embedded environments. For a signal of length L and N vertices, the execution time is reduced from O(L²N) to O(L), and the memory usage is reduced from O(L²N) to O(LN). In addition, the proposed method preserves the performance of fiducial point detection: in an experiment on the QT database provided by PhysioNet, it achieved detection errors of -4.01 ± 7.99 ms and -5.46 ± 8.03 ms.
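
For orientation, here is the baseline O(L²N) dynamic program the abstract starts from, before any of the three optimizations are applied; the interface and the `seg_err` helper are our own illustrative choices.

```python
# Baseline dynamic program for min-error polygonal approximation of a signal
# (the unoptimized formulation; a practical version would also cache segment
# errors with running sums instead of recomputing them).
import numpy as np

def seg_err(x, i, j):
    """Squared error of samples i..j against the straight chord from i to j."""
    t = np.arange(i, j + 1)
    chord = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[i : j + 1] - chord) ** 2))

def polygonal_approx(x, n_vertices):
    L = len(x)
    cost = np.full((L, n_vertices), np.inf)
    back = np.zeros((L, n_vertices), dtype=int)
    cost[0, 0] = 0.0                        # first vertex pinned to sample 0
    for j in range(1, L):
        for k in range(1, n_vertices):
            for i in range(j):              # try every previous vertex
                c = cost[i, k - 1] + seg_err(x, i, j)
                if c < cost[j, k]:
                    cost[j, k], back[j, k] = c, i
    verts, j = [L - 1], L - 1               # backtrack from the last sample
    for k in range(n_vertices - 1, 0, -1):
        j = back[j, k]
        verts.append(j)
    return verts[::-1]
```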

11 citations


Cites background from "Wavelet and wavelet packet compress..."

  • ...form [12], [13] and Karhunen-Loeve transform [14], result in loss during the data compression process....

Book ChapterDOI
01 Jan 2009
TL;DR: This paper describes how to use wavelet transforms to analyze the EMG signal in both time and frequency, which yields far more information than the FFT.
Abstract: Electromyography (EMG) is an experimental technique developed for detecting muscle movement. The technique generates a signal formed by the impulses of muscle fibers during muscle movement. The generated signal is very sensitive and can therefore be influenced by several external factors that alter its shape and characteristics. In this paper we describe how to use wavelet transforms to analyze this signal in both time and frequency, which yields far more information than the FFT.
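
The kind of time-frequency picture the paper refers to can be produced with a continuous wavelet transform; in the sketch below, the Morlet wavelet, the scale range, the sampling rate, and the synthetic signal are all stand-in assumptions.

```python
# Time-frequency analysis of an EMG-like signal with a continuous wavelet
# transform: unlike a plain FFT, the result keeps the time axis, showing
# *when* each frequency band is active.
import numpy as np
import pywt

fs = 1000.0                               # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
emg = np.random.randn(t.size)             # stand-in for a real EMG recording

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(emg, scales, "morl", sampling_period=1 / fs)
# coefs[s, n] is the response at scale s and time index n;
# freqs maps each scale to its nominal frequency in Hz.
```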

11 citations

Journal ArticleDOI
TL;DR: The proposed algorithm is a general-purpose signal compression scheme that is most efficient for quasi-periodic signals and competes with the best-performing methods for moderate and near-lossless compression.

11 citations


Cites methods from "Wavelet and wavelet packet compress..."

  • ...This table shows the PRD values of five methods (ShaLTeRR, TSVD [23], Lu [20], Hilton [22] and Djohan [21]) at CR = 8:1....

  • ...to compare their method with other wavelet-based methods [21,22]....

Proceedings ArticleDOI
23 Mar 2004
TL;DR: This paper describes wavelet compression of ECG signals with JPEG2000, the latest international standard for still-image compression; the presented scheme retains the desirable characteristics of the JPEG2000 codec.
Abstract: This paper describes the wavelet compression of ECG signals by JPEG2000, the latest international standard for compression of still images. Although the JPEG2000 codec is designed to compress images, it can also be used to compress other signals. The desirable characteristics of the JPEG2000 codec, such as precise rate control and progressive quality, are retained in the presented ECG compression scheme. To compress ECG data with a JPEG2000 codec, the one-dimensional ECG sequence is first processed to produce a two-dimensional matrix, which is then encoded using JPEG2000.
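
A sketch of the 1-D to 2-D step this abstract describes, with the encoding delegated to Pillow's JPEG 2000 writer (which requires OpenJPEG); the 256-sample row width, the 8-bit normalization, and the rate setting are our assumptions rather than the authors' exact pipeline.

```python
# Reshape a 1-D ECG into a 2-D matrix and hand it to a JPEG 2000 codec.
import numpy as np
from PIL import Image

def ecg_to_matrix(ecg, width=256):
    rows = len(ecg) // width                       # drop the ragged tail
    m = np.asarray(ecg[: rows * width], dtype=float).reshape(rows, width)
    lo, hi = m.min(), m.max()
    return ((m - lo) / (hi - lo) * 255).astype(np.uint8)   # 8-bit grayscale

# img = Image.fromarray(ecg_to_matrix(ecg))        # `ecg` is a 1-D array
# img.save("ecg.jp2", quality_mode="rates", quality_layers=[16])  # ~16:1
```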

11 citations

References
Journal ArticleDOI
TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions.
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions. In L^2(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function psi(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. Wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.
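
The pyramidal algorithm mentioned in the abstract reduces, per level, to filtering with a quadrature mirror filter pair and downsampling by two; the sketch below uses the Haar pair purely because its coefficients are easy to check by hand.

```python
# One level of the pyramidal decomposition: quadrature-mirror filtering
# followed by downsampling by two. Repeating on `approx` builds the pyramid.
import numpy as np

h = np.array([1.0, 1.0]) / np.sqrt(2)    # lowpass (approximation) filter
g = np.array([1.0, -1.0]) / np.sqrt(2)   # highpass (detail) mirror filter

def dwt_level(x):
    approx = np.convolve(x, h)[1::2]     # signal at the coarser resolution 2^j
    detail = np.convolve(x, g)[1::2]     # information lost going from 2^(j+1) to 2^j
    return approx, detail
```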

20,028 citations

Book
01 May 1992
TL;DR: This book develops the theory of wavelets, from the continuous and discrete wavelet transforms and frames to multiresolution analysis and orthonormal bases of compactly supported wavelets.
Abstract: Contents: Introduction; Preliminaries and notation; The what, why, and how of wavelets; The continuous wavelet transform; Discrete wavelet transforms: frames; Time-frequency density and orthonormal bases; Orthonormal bases of wavelets and multiresolution analysis; Orthonormal bases of compactly supported wavelets; More about the regularity of compactly supported wavelets; Symmetry for compactly supported wavelet bases; Characterization of functional spaces by means of wavelets; Generalizations and tricks for orthonormal wavelet bases; References; Indexes.

16,073 citations

Journal ArticleDOI
TL;DR: This article discusses the regularity of compactly supported wavelets and the symmetry of wavelet bases, focusing on orthonormal wavelet bases rather than the continuous wavelet transform.
Abstract: Contents: Introduction; Preliminaries and notation; The what, why, and how of wavelets; The continuous wavelet transform; Discrete wavelet transforms: frames; Time-frequency density and orthonormal bases; Orthonormal bases of wavelets and multiresolution analysis; Orthonormal bases of compactly supported wavelets; More about the regularity of compactly supported wavelets; Symmetry for compactly supported wavelet bases; Characterization of functional spaces by means of wavelets; Generalizations and tricks for orthonormal wavelet bases; References; Indexes.

14,157 citations

Journal ArticleDOI
Ingrid Daubechies1
TL;DR: This work constructs orthonormal bases of compactly supported wavelets with arbitrarily high regularity, reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction along the way.
Abstract: We construct orthonormal bases of compactly supported wavelets, with arbitrarily high regularity. The order of regularity increases linearly with the support width. We start by reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction. The construction then follows from a synthesis of these different approaches.
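
The compact support and orthonormality this construction guarantees can be checked numerically with PyWavelets; whether its 'db3' filter (6 taps) corresponds to the "W6" naming used in the excerpt below is our assumption.

```python
# Inspect a compactly supported Daubechies filter and verify, numerically,
# two of the orthonormality conditions the construction guarantees.
import numpy as np
import pywt

h = np.array(pywt.Wavelet("db3").dec_lo)   # 6-tap lowpass decomposition filter
print(len(h))                              # 6    (finite support)
print(np.sum(h ** 2))                      # ~1.0 (unit norm)
print(np.sum(h[:-2] * h[2:]))              # ~0.0 (orthogonal to even shifts)
```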

8,588 citations


"Wavelet and wavelet packet compress..." refers methods in this paper

  • ...In the work described in this paper, the wavelet was chosen to be Daubechies' W6 wavelet [10], which is illustrated in Figure 1....

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
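
The "partial ordering by magnitude" and "ordered bit plane transmission" principles in this abstract can be seen in a toy significance pass; the set-partitioning trees that make EZW/SPIHT efficient are deliberately omitted here, and the symbol stream is illustrative only.

```python
# Toy bit-plane significance coding: coefficients are announced in order of
# decreasing magnitude plane, so the stream can be cut anywhere and still
# decode to the best approximation seen so far (the "embedded" property).
import numpy as np

def significance_passes(coeffs, n_planes=4):
    T = 2.0 ** np.floor(np.log2(np.max(np.abs(coeffs))))
    significant = np.zeros(coeffs.size, dtype=bool)
    stream = []
    for _ in range(n_planes):
        newly = (~significant) & (np.abs(coeffs) >= T)   # significance pass
        for idx in np.flatnonzero(newly):
            stream.append(("sig", int(idx), coeffs[idx] >= 0))
        significant |= newly
        T /= 2.0                      # move to the next, finer bit plane
    return stream                     # refinement bits omitted in this toy
```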

5,890 citations


Additional excerpts

  • ...algorithm was inspired by that in [28]....
