Journal ArticleDOI

Wavelet and wavelet packet compression of electrocardiograms

01 May 1997-IEEE Transactions on Biomedical Engineering (IEEE Trans Biomed Eng)-Vol. 44, Iss: 5, pp 394-402
TL;DR: Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Abstract: Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
Citations
Journal ArticleDOI
TL;DR: The results obtained were positive, with low PRD, PRDN and PMAE at different compression ratios compared to many other lossy compression methods, demonstrating the high efficiency of the proposed algorithm.
Abstract: Compressing the ECG signal is considered a feasible way for a system to control the packet size, a major factor leading to congestion in an ECG wireless network. Hence, this paper proposes a compression algorithm, called the advanced two-state algorithm, which achieves three necessary characteristics: a) flexibility towards all ECG signal conditions, b) the ability to adapt to each packet-size requirement and c) simplicity. In this algorithm, the ECG pattern is divided into two categories: "complex" durations, such as QRS complexes, are labeled low-state durations, and "plain" durations, such as P or T waves, are labeled high-state durations. Each duration type can be compressed at a different compression ratio, and piecewise cubic splines can be used to reconstruct the signal. For evaluation, the algorithm was applied to 48 records of the MIT-BIH arrhythmia database (clear PQRST complexes) and 9 records of the CU ventricular tachyarrhythmia database (unclear PQRST complexes). Parameters including Compression Ratio (CR), Percentage Root mean square Difference (PRD), Percentage Root mean square Difference Normalized (PRDN), root mean square error (RMS), Signal-to-Noise Ratio (SNR) and a newly proposed index called Peak Maximum Absolute Error (PMAE) were used to comprehensively evaluate the performance of the algorithm. The results obtained were positive, with low PRD, PRDN and PMAE at different compression ratios compared to many other lossy compression methods, demonstrating the high efficiency of the proposed algorithm. All in all, with its extremely low computational cost, versatility and good reconstruction quality, this algorithm could be applied to a number of wireless applications to control packet size and relieve congestion.

3 citations
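
For context, CR measures size reduction while PRD and PRDN measure reconstruction distortion. A minimal NumPy sketch of the two standard distortion measures named above (PMAE is the paper's own index and is not reproduced here):

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage Root-mean-square Difference: a standard distortion
    measure for lossy ECG compression."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def prdn(original, reconstructed):
    """Normalized PRD: the mean of the original is removed from the
    denominator, so the measure does not depend on the DC offset."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum((x - x.mean()) ** 2))
```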

Journal ArticleDOI
TL;DR: Experimental results show that this coder outperforms other coders in the literature, such as Djohn, EZW, SPIHT and the Novel algorithm, in terms of coding efficiency and computation.
Abstract: Wavelets are a powerful tool for signal processing, especially bio-signal processing. The wavelet transform maps a signal to a time-frequency representation better suited to detecting and removing redundancies. In this paper, electrocardiogram (ECG) signal coding using a biorthogonal wavelet-based Burrows–Wheeler coder is discussed. A biorthogonal wavelet transform is used to decompose the ECG signal, and the Burrows–Wheeler coder is then applied to compress the decomposed signal. The Burrows–Wheeler coder is the combination of the Burrows–Wheeler Transform (BWT), a Move-to-Front (MTF) coder and a Huffman coder. Compression Ratio (CR) and Percent Root mean square Difference (PRD) are used as performance measures. The coder was evaluated on 25 different records from the MIT-BIH arrhythmia database, yielding average PRDs of 0.0307% to 3.8706% for average CRs of 3.6362:1 to 280.48:1. For record 117, a CR of 8.1638:1 is achieved with a PRD of 0.1652%. These experimental results show that this coder outperforms other coders in the literature, such as Djohn, EZW, SPIHT and the Novel algorithm, in terms of coding efficiency and computation.

3 citations
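
The Burrows–Wheeler coder chains three standard stages; a minimal, illustrative Python sketch of the first two is shown below (the Huffman stage and the biorthogonal wavelet decomposition are omitted, and this is not the paper's implementation):

```python
def bwt(text):
    """Naive Burrows-Wheeler Transform via sorted rotations. A NUL byte
    sentinel marks the end of the string so the transform is invertible."""
    text = text + "\x00"
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rotation[-1] for rotation in rotations)

def mtf_encode(symbols, alphabet):
    """Move-to-Front coding: recently seen symbols get small indices, so
    the runs the BWT produces become runs of small numbers that a Huffman
    coder then compresses well."""
    table = list(alphabet)
    out = []
    for s in symbols:
        i = table.index(s)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

# Example: BWT groups repeated characters; MTF turns them into small ints.
codes = mtf_encode(bwt("banana"), sorted(set("banana" + "\x00")))
```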

Proceedings ArticleDOI
23 Mar 2016
TL;DR: An improved version of an existing ASCII character encoding ECG data compression method is proposed that needs only 8 ASCII characters where the existing method needs 12; by eliminating the unnecessary characters, the compression ratio improves by up to 65% while the signal quality remains the same.
Abstract: Telecardiology encompasses a variety of applications and is one of the fastest-growing fields in telemedicine. Because the amount of ECG data recorded in telecardiology is very large, the need for efficient data compression methods for biomedical signals is now widely recognized. In this paper, an improved version of an existing ASCII character encoding ECG data compression method is proposed. The existing method requires a minimum of 12 ASCII characters to store an array of 8 ECG voltage values, whereas the proposed method requires only 8. By eliminating the unnecessary ASCII characters, the compression ratio improves by up to 65% while the signal quality remains the same. The proposed method gives a compression ratio (CR) of 11.25 and a percent root mean squared difference (PRD) of 0.0206 on average over all 12 ECG signals. In addition, the proposed method is more efficient than the existing ASCII character encoding as well as other ECG data compression methods.

3 citations


Cites methods from "Wavelet and wavelet packet compress..."

  • ...Hilton [10] proposed two algorithms for compressing one-dimensional signals which are based on wavelets and wavelet packets....

    [...]
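
The exact character mapping is not spelled out above; purely as an illustration of the one-ASCII-character-per-sample idea, here is a hypothetical quantizer (the voltage range and the 95-symbol printable alphabet are assumptions, not the paper's scheme):

```python
import numpy as np

def encode_ascii(samples, vmin=-5.0, vmax=5.0):
    """Map each voltage sample to one printable ASCII character (32-126).
    vmin/vmax define an assumed full-scale range in millivolts."""
    q = np.clip((np.asarray(samples, dtype=float) - vmin) / (vmax - vmin), 0.0, 1.0)
    return "".join(chr(32 + int(round(v * 94))) for v in q)

def decode_ascii(text, vmin=-5.0, vmax=5.0):
    """Invert the quantization; precision is limited to (vmax - vmin)/94."""
    return [vmin + (ord(c) - 32) / 94.0 * (vmax - vmin) for c in text]
```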

Proceedings ArticleDOI
01 Oct 2014
TL;DR: The proposed ECG compression method introduces a new beat segmentation algorithm that uses only simple operations, such as accumulation and shifts, with a decision rule less complicated than the previous method's.
Abstract: The proposed ECG compression method introduces a new beat segmentation algorithm. Because the compression method encodes the residual difference between each original ECG beat and a reference beat, the ECG signal must be separated into individual beats before compression; that is the job of the beat segmentation process, which makes it an important step in the selective mapping ECG compression method. The main goal of this work is to design the beat segmentation algorithm best suited to this compression method, replacing an algorithm that relies on complicated operations. Consequently, the proposed beat segmentation algorithm uses only simple operations, such as accumulation and shifts, and its decision rule is less complicated than the previous method's. The test results show that more than half of the tested signals achieve a higher compression ratio (CR), and almost a quarter of the tested signals achieve a better percent root mean square difference (PRD), making this a viable alternative segmentation method for this compression scheme.

3 citations


Cites methods from "Wavelet and wavelet packet compress..."

  • ...There are time-domain compression techniques and transform-domain compression techniques [1-5]....

    [...]
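
As an illustration of beat detection using only accumulation and shift operations, here is a hypothetical sketch; the threshold rule, window length and refractory period are assumptions, not the authors' decision rule:

```python
import numpy as np

def detect_beats(signal, window_log2=5, factor_log2=2, refractory=72):
    """Flag sample n as a beat candidate when |x[n]| exceeds a running
    average of |x| by a power-of-two factor. Division is done by right
    shift, so only adds, subtracts and shifts are used. The refractory
    period (72 samples ~ 0.2 s at 360 Hz) suppresses double detections;
    warm-up transients before the window fills are ignored for brevity."""
    x = np.abs(np.asarray(signal, dtype=np.int64))
    window = 1 << window_log2
    acc = 0                      # running sum over the last `window` samples
    beats, last = [], -refractory
    for n, v in enumerate(x):
        acc += v
        if n >= window:
            acc -= x[n - window]
        avg = acc >> window_log2           # divide by window via shift
        if v > (avg << factor_log2) and n - last >= refractory:
            beats.append(n)
            last = n
    return beats
```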

Journal ArticleDOI
TL;DR: It is found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space while the values of their medical parameters changed by less than 5% due to compression, which indicates reliable results.
Abstract: Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature, compression efficacy has been investigated only in terms of how much known or newly developed methods reduced the storage required by compressed forms of original ECG signals; sometimes statistical signal evaluations based on, for example, root mean square error were also studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques on other biomedical signals. Our method of so-called successive approximation quantization used with wavelets was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods altered the values of medical parameters (medical information) computed from the signals. Since the methods are lossy, some information is lost when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space while the values of their medical parameters changed by less than 5% due to compression, which indicates reliable results.

3 citations


Cites methods from "Wavelet and wavelet packet compress..."

  • ...In [15], a subjective evaluation by cardiologists was used for ECG signals from the MIT-BIH Database (sampled at 250 Hz with 12 bits per sample) to check the deformation of signals caused by compression followed by decompression....

    [...]
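
The spirit of the experiment can be sketched with PyWavelets: discard small wavelet coefficients, reconstruct, and compare medical parameters computed before and after. This is an illustrative stand-in, not the authors' successive approximation quantization coder; the wavelet, decomposition level and keep fraction are assumptions:

```python
import numpy as np
import pywt  # PyWavelets

def lossy_roundtrip(ecg, wavelet="db4", level=5, keep=0.25):
    """Keep roughly the largest `keep` fraction of detail coefficients
    (hard threshold) and reconstruct; approximation coefficients are
    left untouched to preserve the baseline."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    details = np.concatenate([np.abs(c) for c in coeffs[1:]])
    thresh = np.quantile(details, 1.0 - keep)
    kept = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard")
                          for c in coeffs[1:]]
    return pywt.waverec(kept, wavelet)[: len(ecg)]

# A derived parameter (e.g., mean RR interval) would then be computed on
# both the original and the round-tripped signal and compared.
```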

References
Journal ArticleDOI
TL;DR: In this paper, it is shown that the difference of information between the approximation of a signal at the resolutions $2^{j+1}$ and $2^j$ (where $j$ is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of $L^2(\mathbb{R}^n)$, the vector space of measurable, square-integrable n-dimensional functions.
Abstract: Multiresolution representations are effective for analyzing the information content of images. The properties of the operator which approximates a signal at a given resolution were studied. It is shown that the difference of information between the approximation of a signal at the resolutions $2^{j+1}$ and $2^j$ (where $j$ is an integer) can be extracted by decomposing this signal on a wavelet orthonormal basis of $L^2(\mathbb{R}^n)$, the vector space of measurable, square-integrable n-dimensional functions. In $L^2(\mathbb{R})$, a wavelet orthonormal basis is a family of functions which is built by dilating and translating a unique function $\psi(x)$. This decomposition defines an orthogonal multiresolution representation called a wavelet representation. It is computed with a pyramidal algorithm based on convolutions with quadrature mirror filters. The wavelet representation lies between the spatial and Fourier domains. For images, the wavelet representation differentiates several spatial orientations. The application of this representation to data compression in image coding, texture discrimination and fractal analysis is discussed.

20,028 citations
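
Mallat's pyramidal algorithm is what modern wavelet libraries implement; a brief PyWavelets illustration of one level of the pyramid and of the full multiresolution representation (wavelet choice and depth are arbitrary here):

```python
import numpy as np
import pywt  # PyWavelets implements Mallat's pyramidal algorithm

x = np.random.randn(512)

# One pyramid level: convolve with the quadrature mirror filter pair and
# downsample by two, splitting the signal into a coarse approximation and
# the detail (the "difference of information") between resolutions.
approx, detail = pywt.dwt(x, "db2")

# Full multiresolution representation down to a chosen depth.
coeffs = pywt.wavedec(x, "db2", level=4)   # [cA4, cD4, cD3, cD2, cD1]

# The transform is orthogonal, so reconstruction is exact.
assert np.allclose(pywt.idwt(approx, detail, "db2"), x)
```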

Book
01 May 1992
TL;DR: This book develops the continuous and discrete wavelet transforms and frames, time-frequency analysis, multiresolution analysis, and the construction, regularity and symmetry of orthonormal bases of compactly supported wavelets.
Abstract: Contents: Introduction; Preliminaries and notation; The what, why, and how of wavelets; The continuous wavelet transform; Discrete wavelet transforms: frames; Time-frequency density and orthonormal bases; Orthonormal bases of wavelets and multiresolution analysis; Orthonormal bases of compactly supported wavelets; More about the regularity of compactly supported wavelets; Symmetry for compactly supported wavelet bases; Characterization of functional spaces by means of wavelets; Generalizations and tricks for orthonormal wavelet bases; References; Indexes.

16,073 citations

Journal ArticleDOI
TL;DR: In this article, the regularity of compactly supported wavelets and the symmetry of wavelet bases are discussed, with a focus on orthonormal bases of wavelets rather than the continuous wavelet transform.
Abstract: Contents: Introduction; Preliminaries and notation; The what, why, and how of wavelets; The continuous wavelet transform; Discrete wavelet transforms: frames; Time-frequency density and orthonormal bases; Orthonormal bases of wavelets and multiresolution analysis; Orthonormal bases of compactly supported wavelets; More about the regularity of compactly supported wavelets; Symmetry for compactly supported wavelet bases; Characterization of functional spaces by means of wavelets; Generalizations and tricks for orthonormal wavelet bases; References; Indexes.

14,157 citations

Journal ArticleDOI
Ingrid Daubechies1
TL;DR: This work constructs orthonormal bases of compactly supported wavelets with arbitrarily high regularity, starting from a review of the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction.
Abstract: We construct orthonormal bases of compactly supported wavelets, with arbitrarily high regularity. The order of regularity increases linearly with the support width. We start by reviewing the concept of multiresolution analysis as well as several algorithms in vision decomposition and reconstruction. The construction then follows from a synthesis of these different approaches.

8,588 citations


"Wavelet and wavelet packet compress..." refers methods in this paper

  • ...In the work described in this paper, the wavelet was chosen to be Daubechies' W6 wavelet [10], which is illustrated in Figure 1....

    [...]
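
PyWavelets exposes the Daubechies analysis filters directly; note that identifying Hilton's 'W6' with 'db3' (six filter taps) is an assumption, since naming conventions for Daubechies wavelets vary:

```python
import pywt

# 'db3' has six filter coefficients; W6 in some older notations denotes
# exactly this wavelet, but the correspondence is assumed here.
w = pywt.Wavelet("db3")
print(w.dec_lo)  # low-pass analysis filter (six taps)
print(w.dec_hi)  # high-pass analysis filter (quadrature mirror of dec_lo)
```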

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.

5,890 citations


Additional excerpts

  • ...algorithm was inspired by that in [28]....

    [...]
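
The "partial ordering by magnitude" and "ordered bit plane transmission" credited above for EZW/SPIHT performance can be sketched without the set-partitioning machinery; a minimal illustration of the bit-plane significance ordering:

```python
import numpy as np

def significance_order(coeffs, n_planes=4):
    """Scan coefficients against halving thresholds so that larger
    magnitudes are emitted first; a real EZW/SPIHT coder would also
    exploit zerotrees across scales and refine already-significant
    coefficients bit plane by bit plane."""
    c = np.abs(np.asarray(coeffs, dtype=float))
    threshold = 2.0 ** np.floor(np.log2(c.max()))  # top bit plane
    seen, order = set(), []
    for _ in range(n_planes):
        for i in np.where(c >= threshold)[0]:
            if i not in seen:              # newly significant this pass
                seen.add(int(i))
                order.append(int(i))
        threshold /= 2.0                   # descend to the next bit plane
    return order
```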