Journal ArticleDOI

Wavelet and wavelet packet compression of electrocardiograms

01 May 1997-IEEE Transactions on Biomedical Engineering (IEEE Trans Biomed Eng)-Vol. 44, Iss: 5, pp 394-402
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
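
The paper's coder is EZW-based; as a rough illustration of the wavelet-thresholding step such coders build on, here is a minimal Python sketch assuming the PyWavelets library ('db3', with six filter taps, stands in for Daubechies' W6; compress_ecg and keep_ratio are illustrative names, not the authors' code):

    # Keep only the largest wavelet coefficients, then reconstruct.
    # Retaining 1/8 of the coefficients is a rough proxy for 8:1
    # compression; real EZW codes coefficients bit plane by bit plane.
    import numpy as np
    import pywt

    def compress_ecg(signal, wavelet="db3", level=5, keep_ratio=1 / 8):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        flat = np.concatenate(coeffs)
        k = max(1, int(len(flat) * keep_ratio))
        thresh = np.sort(np.abs(flat))[-k]  # k-th largest magnitude
        kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
        return pywt.waverec(kept, wavelet)[: len(signal)]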
Citations
Proceedings ArticleDOI
16 May 2012
TL;DR: This work compares the efficiency of the compression algorithm across different types of mother wavelet and finds that the 'db1' mother wavelet performs best on more than half of all signals.
Abstract: The electrocardiogram compression method presented in this research processes the residual signal, i.e., the difference between the original signal and a reference signal. The residual signal is transformed to the wavelet domain, where the redundant information is eliminated. The choice of mother wavelet is one of the main factors in preserving the important data in the wavelet domain. Each type of mother wavelet has its own shape and characteristics, which affect compression performance. This work therefore compares the efficiency of the compression algorithm across different types of mother wavelet. The tests show that no single mother wavelet is best for all ECG signals. To reduce the time consumed in selecting the proper mother wavelet, the Best of Four method is introduced. This algorithm pits four types of mother wavelet against one another: 'db1', 'db2', 'db9' and 'bior2.4'. The results show that the selected mother wavelet types perform well over all tested signals. Moreover, the 'db1' mother wavelet performs best on more than half of all signals.
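
As a hedged sketch of the Best of Four idea: run a compression routine (here passed in as a parameter, e.g. the illustrative compress_ecg from the earlier sketch) with each of the four candidate mother wavelets and keep the one with the lowest PRD. All helper names are assumptions for illustration:

    import numpy as np

    CANDIDATES = ["db1", "db2", "db9", "bior2.4"]

    def prd(x, x_hat):
        # Percent root-mean-square difference between the two signals.
        return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

    def best_of_four(signal, compress):
        # Try each candidate wavelet; return the one with the lowest PRD.
        scores = {w: prd(signal, compress(signal, w)) for w in CANDIDATES}
        return min(scores, key=scores.get)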

12 citations


Additional excerpts

  • ...ECG compression techniques are classified by domain into two categories, direct time domain compression and transform domain compression such as frequency domain and wavelet domain [1-5]....


Book ChapterDOI
01 Jan 2017
TL;DR: An in-depth study on recent trends in CS focused on ECG compression is presented, using Compression Ratio, % Root-mean-squared Difference, Signal-to-Noise Ratio, Root-Mean-Square Error, and power consumption as evaluation parameters.
Abstract: Compressed Sensing (CS) is a fast-growing signal processing technique that compresses the signal while sensing and enables exact reconstruction of the signal, provided the signal is sparse, from only a few measurements. This scheme reduces the storage requirement and power consumption of the system compared to sampling at the Nyquist rate, where the sampling frequency must be at least double the maximum frequency present in the signal for exact reconstruction. This paper presents an in-depth study of recent trends in CS focused on ECG compression. Compression Ratio (CR), % Root-mean-squared Difference (%PRD), Signal-to-Noise Ratio (SNR), Root-Mean-Square Error (RMSE), sparsity and power consumption are used as the performance evaluation parameters. Finally, we present conclusions based on the literature review and discuss the major challenges in CS ECG implementation.
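
The fidelity metrics named above have standard definitions; here is a minimal NumPy sketch (conventions vary slightly across the CS literature, so these follow one common form):

    import numpy as np

    def cs_metrics(x, x_hat, bits_original, bits_compressed):
        err = x - x_hat
        cr = bits_original / bits_compressed                     # Compression Ratio
        prd = 100 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2))   # % PRD
        snr = 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))   # SNR (dB)
        rmse = np.sqrt(np.mean(err ** 2))                        # RMSE
        return cr, prd, snr, rmse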

11 citations

Proceedings ArticleDOI
17 Dec 2000
TL;DR: A new algorithm for electrocardiogram (ECG) compression, based on the compression of the linearly predicted residuals of the wavelet coefficients, reduces the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level.
Abstract: This paper describes a new algorithm, based on the compression of the linearly predicted residuals of the wavelet coefficients, for electrocardiogram (ECG) compression. The main goal of the algorithm is to reduce the bit rate while keeping the reconstructed signal distortion at a clinically acceptable level. The input signal is divided into blocks and each block goes through a discrete wavelet transform; then the resulting wavelet coefficients are linearly predicted. In this way, a set of uncorrelated transform domain signals is obtained. These signals are compressed using modified run-length and Huffman coding techniques. The error corresponding to the difference between the wavelet coefficients and the predicted coefficients is minimized in order to get the best predictor. The method is assessed through the use of percent residual difference (PRD) and visual inspection measures. By this compression method, a small PRD with a high compression ratio and low implementation complexity is achieved.
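
A hedged sketch of the transform-then-predict step described above, assuming PyWavelets; the fixed first-order predictor is an illustrative stand-in for the paper's optimized predictor:

    import numpy as np
    import pywt

    def wavelet_prediction_residuals(block, wavelet="db4", level=4, a=0.5):
        # DWT the block, then replace each coefficient by its prediction
        # residual c[n] - a*c[n-1]; small residuals suit the subsequent
        # run-length and Huffman coding stages.
        coeffs = pywt.wavedec(block, wavelet, level=level)
        return [c - np.concatenate(([0.0], a * c[:-1])) for c in coeffs]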

11 citations

Proceedings ArticleDOI
29 Jul 2013
TL;DR: The periodicity of naturally occurring signals such as heart rate or gait measurements is used to design a simple, low-cost data compression scheme that has been successfully tested for good compression performance on ECG, motion accelerometer data and samples from Parkinson's patients.
Abstract: There is a rising usage of mobile health sensors in wearable devices and smartphones. These embedded systems have tight limits on storage, computation power, network connectivity and battery usage, making it important to ensure efficient storage/communication of sensor readings to a centralized node/server. Frequency-transform or entropy-encoding schemes such as arithmetic or Huffman coding can be used for compression, but they incur high computational cost in some scenarios or are oblivious to higher-level redundancies in the signal. To this end, we used the periodicity of these naturally occurring signals, such as heart rate or gait measurements, to design a simple, low-cost scheme for data compression. First, a modified Chi-square periodogram metric is used to adaptively determine the exact time-varying periodicity of the signal. Next, the time-series signal is folded into frames of length equal to a pre-determined period value. We have successfully tested the scheme for good compression performance on ECG, motion accelerometer data and samples from Parkinson's patients, achieving 8-14X compression on large sample sizes (6-8K samples) and 2-3X on small sample sizes (200 samples). The proposed scheme can be used stand-alone or as a pre-processing step for existing techniques in the literature.
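
A sketch of the fold-and-difference idea, using a plain autocorrelation peak as a simple stand-in for the paper's modified Chi-square periodogram (an assumption made for brevity):

    import numpy as np

    def estimate_period(x, min_lag=10):
        # Dominant period = lag of the autocorrelation peak.
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        return min_lag + int(np.argmax(ac[min_lag:]))

    def fold_and_encode(x):
        # Fold into period-length frames; keep frame 1 plus the
        # frame-to-frame differences, which are near zero for periodic
        # signals and entropy-code well.
        p = estimate_period(x)
        frames = x[: (len(x) // p) * p].reshape(-1, p)
        return frames[0], np.diff(frames, axis=0)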

11 citations


Cites methods from "Wavelet and wavelet packet compress..."

  • ...Discrete Cosine Transform [3, 4], Discrete Legendre Transform [17] and Wavelets [11, 9] based techniques have been proposed which transpose the signal to frequency domain followed by quantization and entropy encoding procedures....


  • ...The algorithm is compatible with existing Wavelets, Fourier, Arithmetic or Huffman based approach and complements their performance....


Journal ArticleDOI
TL;DR: A new family of wireless networks, the multimodal wireless networks, offers multiple functionalities realized on the same infrastructure; different strategies to track and update the parameter estimates are proposed to prevent performance degradation due to slowly drifting parameter values.
Abstract: In this paper, we propose a new family of wireless networks: the multimodal wireless networks. These networks offer multiple functionalities realized on the same infrastructure. A multimodal wireless network has two modes of operation: 1) the communication mode, when the network is used as a traditional wireless communication network, and 2) the surveillance mode, when the network is used as a distributed sensor network that can detect illegal intrusion. The surveillance functionality is realized by analyzing the properties of the received signals, and the change of the propagation environment caused by the intruder serves as the basis for intrusion detection. We derive maximum likelihood estimators and detectors based on the generalized likelihood ratio test that detect changes in the propagation environment: the single-cycle detector, which makes decisions at the end of each scanning cycle, and the multicycle detector, which combines information from multiple scanning cycles prior to detection. We also analyze the performance of these detectors by deriving the Cramér-Rao lower bound on the variance of the parameter estimators and determining the distribution of the log-likelihood ratio under both detection hypotheses. This allows us to compare the theoretical performance of the single-cycle and multicycle detectors and to obtain analytical results for determining the decision threshold for a given probability of false alarm. To prevent performance degradation due to slowly drifting parameter values, we propose different strategies to track and update the parameter estimates. The experimental results obtained from an implemented prototype surveillance system show very promising detection capabilities. For example, the state of a door (open or closed) could be detected with a probability of 0.99 at a false-alarm probability of 10^-5.
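
As a minimal sketch of setting such a detector's decision threshold from a target false-alarm probability, assuming the GLRT statistic is asymptotically chi-square under the no-intrusion hypothesis (Wilks' theorem; the degrees-of-freedom value is an illustrative assumption):

    from scipy.stats import chi2

    def glrt_threshold(p_fa, df):
        # Threshold gamma with P(2 * log-likelihood ratio > gamma | H0) = p_fa.
        return chi2.isf(p_fa, df)

    gamma = glrt_threshold(1e-5, df=3)  # e.g. the paper's P_FA = 10^-5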

11 citations

References
Journal ArticleDOI
TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions.
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions 2^(j+1) and 2^j (where j is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of L^2(R^n), the vector space of measurable, square-integrable n-dimensional functions. In L^2(R), a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function ψ(x). This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. Wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.
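
A one-level illustration of the pyramidal algorithm, assuming PyWavelets: the detail coefficients carry the difference of information between the two resolutions, and the quadrature-mirror-filter pair reconstructs the signal exactly:

    import numpy as np
    import pywt

    x = np.random.randn(256)
    cA, cD = pywt.dwt(x, "db2")       # approximation + detail coefficients
    x_rec = pywt.idwt(cA, cD, "db2")  # perfect reconstruction
    assert np.allclose(x, x_rec)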

20,028 citations

Book
01 May 1992
TL;DR: This book presents the what, why, and how of wavelets: the continuous and discrete wavelet transforms, time-frequency analysis, multiresolution analysis, and orthonormal bases of compactly supported wavelets.
Abstract: Introduction Preliminaries and notation The what, why, and how of wavelets The continuous wavelet transform Discrete wavelet transforms: Frames Time-frequency density and orthonormal bases Orthonormal bases of wavelets and multiresolutional analysis Orthonormal bases of compactly supported wavelets More about the regularity of compactly supported wavelets Symmetry for compactly supported wavelet bases Characterization of functional spaces by means of wavelets Generalizations and tricks for orthonormal wavelet bases References Indexes.

16,073 citations

Journal ArticleDOI
TL;DR: In this article, the regularity of compactly supported wavelets and the symmetry of wavelet bases are discussed, with a focus on orthonormal wavelet bases rather than the continuous wavelet transform.
Abstract: Introduction Preliminaries and notation The what, why, and how of wavelets The continuous wavelet transform Discrete wavelet transforms: Frames Time-frequency density and orthonormal bases Orthonormal bases of wavelets and multiresolutional analysis Orthonormal bases of compactly supported wavelets More about the regularity of compactly supported wavelets Symmetry for compactly supported wavelet bases Characterization of functional spaces by means of wavelets Generalizations and tricks for orthonormal wavelet bases References Indexes.

14,157 citations

Journal ArticleDOI
Ingrid Daubechies1
TL;DR: This work constructs orthonormal bases of compactly supported wavelets, with arbitrarily high regularity, by reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction.
Abstract: We construct orthonormal bases of compactly supported wavelets, with arbitrarily high regularity. The order of regularity increases linearly with the support width. We start by reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction. The construction then follows from a synthesis of these different approaches.
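
The linear growth of support width with regularity can be seen directly in PyWavelets' Daubechies family (a hedged illustration; in pywt's naming, 'dbN' has N vanishing moments and 2N filter taps):

    import pywt

    for name in ("db1", "db2", "db4", "db8"):
        w = pywt.Wavelet(name)
        print(name, "filter length:", len(w.dec_lo))  # 2, 4, 8, 16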

8,588 citations


"Wavelet and wavelet packet compress..." refers methods in this paper

  • ...In the work described in this paper, the wavelet was chosen to be Daubechies' W6 wavelet [10], which is illustrated in Figure 1....


Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.
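
A toy sketch of the successive-approximation (bit-plane) idea underlying EZW and SPIHT; the zerotree/set-partitioning machinery that makes the real coders efficient is deliberately omitted:

    import numpy as np

    def bitplane_passes(coeffs, n_passes=4):
        # Yield, per pass, the threshold and the indices of coefficients
        # that are significant at that threshold (largest magnitudes first).
        t = 2.0 ** np.floor(np.log2(np.max(np.abs(coeffs))))
        for _ in range(n_passes):
            yield t, np.flatnonzero(np.abs(coeffs) >= t)
            t /= 2  # halve the threshold each pass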

5,890 citations


Additional excerpts

  • ...algorithm was inspired by that in [28]....
