Journal ArticleDOI

Wavelet and wavelet packet compression of electrocardiograms

01 May 1997-IEEE Transactions on Biomedical Engineering (IEEE Trans Biomed Eng)-Vol. 44, Iss: 5, pp 394-402
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs remain clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs remain clinically useful.
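The pipeline the abstract describes — transform, discard small detail coefficients, invert, and score the distortion — can be illustrated with the simplest orthonormal wavelet (Haar) and the PRD distortion metric common in the ECG-compression literature. The signal, threshold, and single decomposition level below are invented for illustration; they are not the paper's W6/EZW configuration.

```python
import math

def haar_forward(x):
    # one level of the orthonormal Haar wavelet transform
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    s = 1 / math.sqrt(2)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) * s, (a - d) * s])
    return x

def prd(orig, recon):
    # percent root-mean-square difference, a standard ECG distortion measure
    num = sum((o - r) ** 2 for o, r in zip(orig, recon))
    den = sum(o ** 2 for o in orig)
    return 100 * math.sqrt(num / den)

signal = [1.0, 1.1, 0.9, 5.0, 5.2, 1.0, 0.95, 1.05]  # toy "heartbeat"
approx, detail = haar_forward(signal)
# "compress" by zeroing detail coefficients below a threshold
detail_c = [d if abs(d) > 0.2 else 0.0 for d in detail]
recon = haar_inverse(approx, detail_c)
```

Only two of the four detail coefficients survive the threshold, yet the PRD stays near 1.3%, mirroring the paper's observation that aggressive coefficient truncation can preserve the clinically relevant waveform.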
Citations
Journal ArticleDOI
TL;DR: A joint use of the discrete cosine transform (DCT) and differential pulse code modulation (DPCM)-based quantization is presented for predefined quality-controlled electrocardiogram (ECG) data compression.
Abstract: In this paper, a joint use of the discrete cosine transform (DCT) and differential pulse code modulation (DPCM)-based quantization is presented for predefined quality-controlled electrocardiogram (ECG) data compression. The formulated approach exploits the energy compaction property of the transformed domain. The DPCM quantization has been applied to zero-sequence-grouped DCT coefficients that were optimally thresholded via the Regula-Falsi method. The generated sequence is encoded using Huffman coding. This encoded series is further converted to a valid ASCII code using the standard codebook for transmission purposes. Such a coded series possesses inherent encryption capability. The proposed technique is validated on all 48 records of the standard MIT-BIH database using different measures for compression and encryption. The acquisition time has been taken in accordance with that used in the literature for fair comparison with contemporary state-of-the-art approaches. The chosen measures are (1) compression ratio (CR), (2) percent root mean square difference (PRD), (3) percent root mean square difference without base (PRD1), (4) percent root mean square difference normalized (PRDN), (5) root mean square (RMS) error, (6) signal-to-noise ratio (SNR), (7) quality score (QS), (8) entropy, (9) entropy score (ES), and (10) correlation coefficient (r_xy). Notably, the average values of CR, PRD, and QS were 18.03, 1.06, and 17.57, respectively. Similarly, the mean encryption metrics, i.e. entropy, ES, and r_xy, were 7.9692, 0.9962, and 0.0113, respectively. The novelty of combining the approaches is well justified by the values of these metrics, which compare favorably with the contemporary counterparts.
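The DPCM quantization stage of such a scheme can be shown in isolation; the step size and input values below are invented, and the actual method applies this to zero-sequence-grouped DCT coefficients thresholded via the Regula-Falsi method:

```python
def dpcm_encode(values, step):
    # quantize the difference between each value and the running
    # decoder-side reconstruction, so quantization error cannot accumulate
    codes, prev = [], 0.0
    for v in values:
        q = round((v - prev) / step)
        codes.append(q)
        prev += q * step
    return codes

def dpcm_decode(codes, step):
    out, prev = [], 0.0
    for q in codes:
        prev += q * step
        out.append(prev)
    return out

codes = dpcm_encode([1.0, 1.2, 1.1], step=0.1)  # small differences -> small codes
recon = dpcm_decode(codes, step=0.1)
```

The small integer codes are what the Huffman coder then compresses; slowly varying coefficient sequences yield long runs of small values, which is where DPCM earns its keep.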

31 citations

Proceedings ArticleDOI
02 May 2012
TL;DR: To increase compression ratio and reduce distortion of the ECG signal, a non-uniform binary sensing matrix is proposed and evaluated.
Abstract: Wearable ECG sensors can assist in prolonged monitoring of cardiac patients. Compression of ECG signals is pursued as a means to minimize the energy consumed during transmission of information from a portable ECG sensor to a server. In this paper, compressed sensing is employed in ECG compression. To increase compression ratio and reduce distortion of the ECG signal, a non-uniform binary sensing matrix is proposed and evaluated.
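The compressed-sensing measurement step can be sketched as follows. The matrix here places its ones uniformly at random, whereas the paper's contribution is a non-uniform placement; the dimensions and row sparsity are invented for illustration:

```python
import random

random.seed(0)
N, M, ONES_PER_ROW = 64, 16, 4   # signal length, measurements, row sparsity

# sparse binary sensing matrix: cheap to apply on a sensor node,
# since each measurement reduces to a few additions
phi = [[0] * N for _ in range(M)]
for row in phi:
    for j in random.sample(range(N), ONES_PER_ROW):
        row[j] = 1

x = [0.0] * N
x[10], x[40] = 1.0, -0.5         # a sparse toy "signal"
y = [sum(r[j] * x[j] for j in range(N)) for r in phi]  # y = Phi x, M << N
```

Only the 16 entries of y need be transmitted; a server-side solver (e.g. basis pursuit) reconstructs x from y and phi, shifting the computational burden away from the battery-powered sensor.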

31 citations


Cites background from "Wavelet and wavelet packet compress..."

  • ...Although there are many papers addressing the problem of ECG compression [10-19], only a few studies have been published in the specific area of ECG signal compression using CS....


Proceedings ArticleDOI
Hyejung Kim, Yongsang Kim, Hoi-Jun Yoo
14 Oct 2008
TL;DR: A low-cost quadratic-level compression algorithm is proposed for body sensor network systems, which reduces the encoding delay and the hardware cost while maintaining the reconstructed signal quality.
Abstract: A low-cost quadratic-level compression algorithm is proposed for body sensor network systems. The proposed algorithm reduces the encoding delay and the hardware cost while maintaining the reconstructed signal quality. The quadratic compression level, determined by the mean deviation value, is used to preserve the critical information at a high compression ratio. The overall CR is 8.4:1, the PRD is 0.897%, and the encoding rate is 6.4 Mbps. A 16-bit sensor node processor supporting the proposed compression algorithm is designed. The processor consumes 0.56 nJ/bit at a 1 V supply voltage with a 1 MHz operating frequency in a 0.25-μm CMOS process.
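How a mean-deviation statistic might steer the compression level can be illustrated with a deliberately simplified two-level version; the thresholds and step sizes below are invented, and this sketch does not reproduce the paper's actual quadratic-level rule:

```python
def mean_deviation(seg):
    # average absolute distance from the segment mean
    m = sum(seg) / len(seg)
    return sum(abs(v - m) for v in seg) / len(seg)

def quantize_segment(seg, threshold=0.1):
    # high-activity segments (e.g. around a QRS complex) keep fine
    # resolution; quiet segments tolerate a coarser quantizer
    step = 0.01 if mean_deviation(seg) > threshold else 0.08
    return [round(v / step) for v in seg], step

_, quiet_step = quantize_segment([0.50, 0.51, 0.50, 0.49])
_, active_step = quantize_segment([0.0, 1.0, 0.0, -1.0])
```

Coarse steps over quiet stretches shrink the code alphabet there, which is where most of the compression ratio comes from in this style of algorithm.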

30 citations

Journal ArticleDOI
TL;DR: The results suggested that transient events and changes in autonomic modulation were detected with high temporal resolution and a nonlinear relationship between RR interval and SAP during pharmacologically induced changes in blood pressure was found.
Abstract: In this paper, the discrete wavelet transform (DWT) was applied to analyze the fluctuations in RR interval and systolic arterial pressure (SAP) recorded from eight α-chloralose-anesthetized pigs. Our aim was to characterize the autonomic modulation before and after cardiac autonomic blockade and during baroreflex function tests. The instantaneous power of decomposed low-frequency (LF) and high-frequency (HF) components was used for a time-variant spectral analysis. Our results suggested that transient events and changes in autonomic modulation were detected with high temporal resolution. A nonlinear relationship between RR interval and SAP during pharmacologically induced changes in blood pressure was found, when the superimposed effect of respiratory sinus arrhythmia was removed. In addition, the baroslopes were nearly linear when both the LF and HF components were removed using DWT decomposition.

30 citations

Proceedings ArticleDOI
27 Aug 2009
TL;DR: Foveation principles suggested by natural vision systems enable the construction of a proper mask that may modulate the coefficients given by the Discrete Wavelet Transform of an ECG record to provide high compression ratios at low reconstruction errors.
Abstract: Foveation principles suggested by natural vision systems enable the construction of a proper mask that may modulate the coefficients given by the Discrete Wavelet Transform of an ECG record. The mask is spatially selective and provides maximum accuracy around specific regions of interest. Subsequent denoising and coefficient quantization are further combined with efficient coding techniques such as SPIHT in order to provide high compression ratios at low reconstruction errors. Experimental results reported on a number of MIT-BIH records show improved performance over existing solutions.
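The foveation mask itself can be sketched as a weighting that is maximal at a region of interest (say, a detected QRS location) and decays away from it; the Gaussian shape and all parameters below are an assumption for illustration, not the paper's mask:

```python
import math

def foveation_mask(n, center, width):
    # weight near 1 around the region of interest, decaying smoothly elsewhere
    return [math.exp(-((i - center) ** 2) / (2 * width ** 2)) for i in range(n)]

coeffs = [0.5, 2.0, -1.5, 0.3, 0.1, -0.2, 1.8, 0.4]  # toy DWT coefficients
mask = foveation_mask(len(coeffs), center=2, width=1.5)
masked = [c * m for c, m in zip(coeffs, mask)]
```

After masking, coefficients far from the fovea shrink toward zero and become cheap to quantize and code (e.g. with SPIHT), which is how spatial selectivity buys compression ratio at low error around the region that matters.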

30 citations

References
Journal ArticleDOI
TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions.
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions. In L^2(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function ψ(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. The wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.
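The dilation-and-translation construction this abstract describes can be checked numerically with the Haar function standing in for ψ; the crude Riemann-sum inner product below is purely for illustration:

```python
def psi(x):
    # Haar mother wavelet, the simplest function generating such a basis
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

def psi_jk(j, k):
    # dilated/translated family: psi_jk(x) = 2**(j/2) * psi(2**j * x - k)
    return lambda x: 2 ** (j / 2) * psi(2 ** j * x - k)

def inner(f, g, a=0.0, b=1.0, n=1000):
    # crude Riemann-sum approximation of the L^2 inner product on [a, b)
    dx = (b - a) / n
    return sum(f(a + i * dx) * g(a + i * dx) for i in range(n)) * dx

cross = inner(psi_jk(0, 0), psi_jk(1, 0))   # distinct scales: ~0
norm = inner(psi_jk(1, 0), psi_jk(1, 0))    # unit norm: ~1
```

Orthogonality across scales and translations is exactly what lets the pyramidal quadrature-mirror-filter algorithm split a signal into independent detail layers.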

20,028 citations

Book
01 May 1992
TL;DR: This book presents the what, why, and how of wavelets, covering the continuous and discrete wavelet transforms, frames, time-frequency density, multiresolution analysis, and orthonormal bases of compactly supported wavelets.
Abstract: Introduction Preliminaries and notation The what, why, and how of wavelets The continuous wavelet transform Discrete wavelet transforms: Frames Time-frequency density and orthonormal bases Orthonormal bases of wavelets and multiresolutional analysis Orthonormal bases of compactly supported wavelets More about the regularity of compactly supported wavelets Symmetry for compactly supported wavelet bases Characterization of functional spaces by means of wavelets Generalizations and tricks for orthonormal wavelet bases References Indexes.

16,073 citations

Journal ArticleDOI
TL;DR: The regularity of compactly supported wavelets and the symmetry of wavelet bases are discussed, with a focus on orthonormal wavelet bases rather than the continuous wavelet transform.
Abstract: Introduction Preliminaries and notation The what, why, and how of wavelets The continuous wavelet transform Discrete wavelet transforms: Frames Time-frequency density and orthonormal bases Orthonormal bases of wavelets and multiresolutional analysis Orthonormal bases of compactly supported wavelets More about the regularity of compactly supported wavelets Symmetry for compactly supported wavelet bases Characterization of functional spaces by means of wavelets Generalizations and tricks for orthonormal wavelet bases References Indexes.

14,157 citations

Journal ArticleDOI
Ingrid Daubechies
TL;DR: This work constructs orthonormal bases of compactly supported wavelets with arbitrarily high regularity, starting from a review of the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction.
Abstract: We construct orthonormal bases of compactly supported wavelets, with arbitrarily high regularity. The order of regularity increases linearly with the support width. We start by reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction. The construction then follows from a synthesis of these different approaches.
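The algebra behind such constructions can be spot-checked with the four scaling-filter coefficients of the shortest nontrivial Daubechies wavelet (D4, often called db2), whose closed forms are standard:

```python
import math

s2, s3 = math.sqrt(2), math.sqrt(3)
# Daubechies D4 (db2) scaling-filter coefficients
h = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
     (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]

lowpass = sum(h)                      # should equal sqrt(2)
energy = sum(c * c for c in h)        # should equal 1
shift2 = h[0] * h[2] + h[1] * h[3]    # should vanish (double-shift orthogonality)
```

These three conditions (lowpass normalization, unit energy, orthogonality to even shifts) are what make the resulting dilated/translated family orthonormal; longer filters, such as the W6 wavelet used in the ECG paper, satisfy the same identities with more coefficients.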

8,588 citations


"Wavelet and wavelet packet compress..." refers methods in this paper

  • ...In the work described in this paper, the mother wavelet was chosen to be Daubechies' W6 wavelet [10], which is illustrated in Figure 1....


Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
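The "partial ordering by magnitude" at the heart of EZW and SPIHT can be shown with a toy significance pass over plain coefficients; there are no zerotrees or hierarchical sets here, which is where the real algorithms get their efficiency, so this is only a sketch of the ordering principle:

```python
def significance_order(coeffs, n_passes):
    # halve the threshold each pass; a coefficient is emitted in the first
    # pass whose threshold its magnitude reaches, i.e. largest-first
    t = 1
    while t * 2 <= max(abs(c) for c in coeffs):
        t *= 2
    order = []
    for _ in range(n_passes):
        for i, c in enumerate(coeffs):
            if abs(c) >= t and i not in order:
                order.append(i)
        t /= 2
    return order

order = significance_order([3, -9, 1, 5], n_passes=3)  # indices, largest first
```

Truncating the bit stream after any pass yields an embedded code: the decoder always holds the most significant information received so far, which is what makes rate control trivial in EZW and SPIHT.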

5,890 citations


Additional excerpts

  • ...algorithm was inspired by that in [28]....
