Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
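
The evaluation framework centers on two figures of merit that recur in all the citing works below: compression ratio (CR) and percent root-mean-square difference (PRD). A minimal sketch of both, assuming the energy-normalized PRD definition (some authors subtract the signal mean or a fixed ADC baseline, e.g. 1024 for 11-bit MIT/BIH records, before normalizing):

```python
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """CR = bits of the raw signal / bits of the compressed stream."""
    return original_bits / compressed_bits

def prd(x, x_rec) -> float:
    """Percent root-mean-square difference between original and reconstruction.

    Energy-normalized form; variants subtract the mean or a fixed ADC baseline
    from x in the denominator, which changes the numbers but not the idea.
    """
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

# Illustrative use: a 10-bit, 3600-sample segment compressed to 4500 bits.
x = np.sin(np.linspace(0, 20 * np.pi, 3600))        # stand-in for an ECG trace
x_rec = x + 0.01 * np.random.default_rng(0).normal(size=x.shape)
print(compression_ratio(3600 * 10, 4500), prd(x, x_rec))
```
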
Citations
Proceedings ArticleDOI
08 Dec 2008
TL;DR: An adaptive beat-subtraction method for ECG compression is proposed, aiming to maximize the compression ratio (CR) and minimize the percent root-mean-square difference (PRD).
Abstract: This paper proposes an adaptive beat-subtraction method for ECG compression that maximizes the compression ratio (CR) and minimizes the percent root-mean-square difference (PRD). The compression is performed in the wavelet domain, combined with several preprocessing steps: beat detection, beat normalization, and beat differencing. The distinguishing feature of the scheme is that an adaptive average beat is subtracted before the scalar quantization and Huffman coding stages. Since an ECG signal is generally composed of a number of beats repeated at fairly regular intervals, the Huffman code can achieve a higher compression ratio when this adaptive average beat is first subtracted from each individual normalized beat interval. The experimental results show the compression performance of the proposed method on the MIT/BIH arrhythmia database, with records 114, 222, and 234 used as input data. The obtained compression ratio is approximately 4 to 15, and the percent root-mean-square difference is 0.05% to 1.5%.
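
The core idea, subtracting a running average beat from each length-normalized beat so that only small residuals reach the scalar quantizer and Huffman coder, can be sketched as below. The R-peak indices are assumed given, and the fixed beat length and averaging weight are illustrative choices, not the paper's parameters:

```python
import numpy as np

BEAT_LEN = 256     # illustrative fixed length each beat is resampled to
ALPHA = 0.125      # illustrative update weight for the adaptive average beat

def normalize_beat(beat):
    """Resample one beat to a fixed length so beats can be subtracted sample-wise."""
    return np.interp(np.linspace(0, 1, BEAT_LEN), np.linspace(0, 1, len(beat)), beat)

def beat_residuals(ecg, r_peaks):
    """Residual (normalized beat minus adaptive average beat) for each beat.

    These residuals are what would go on to scalar quantization and Huffman
    coding; the decoder rebuilds each beat by adding back its own running average.
    """
    avg, residuals = None, []
    for start, end in zip(r_peaks[:-1], r_peaks[1:]):
        beat = normalize_beat(ecg[start:end])
        if avg is None:
            avg = beat.copy()                          # first beat seeds the average
        residuals.append(beat - avg)
        avg = (1.0 - ALPHA) * avg + ALPHA * beat       # adaptive average update
    return residuals
```

Because consecutive beats are similar, the residuals cluster near zero, so a Huffman code over their quantized values is shorter than a code over the beats themselves.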

2 citations


Cites background from "ECG data compression techniques-a u..."

  • ...4 * [X(i + 1)-x(i)] (1) remove in the period recovery process....

    [...]

  • ...Thus, the by refer to equation (1) and (2)....

    [...]

Proceedings ArticleDOI
01 Oct 1992
TL;DR: A novel algorithm for compression of single lead Electrocardiogram (ECG) signals, based on Pole-Zero modelling of the Discrete Cosine Transformed (DCT) signal, which accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most of the current methods.
Abstract: This paper presents a novel algorithm for compression of single lead Electrocardiogram (ECG) signals. The method is based on Pole-Zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm, to model the higher frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most of the current methods.
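
The pipeline can be pictured as: DCT of an ECG segment, a low-order pole-zero fit to the DCT sequence, then DPCM coding of the parameters (omitted here). The sketch below uses a one-shot Prony-style least-squares fit as a simplified stand-in for the paper's weighted Steiglitz-McBride iteration; the model orders are illustrative, and the fit quality depends on them and on the stability of the estimated poles:

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.signal import lfilter

def prony_pole_zero(s, p, q):
    """Fit s[n] ~ impulse response of B(z)/A(z) with p poles and q zeros.

    Prony-style simplification: one linear least-squares solve for the poles,
    then a truncated convolution for the zeros (no iterative error weighting).
    """
    N = len(s)
    n0 = max(q + 1, p)
    M = np.array([[-s[n - k] for k in range(1, p + 1)] for n in range(n0, N)])
    a_tail, *_ = np.linalg.lstsq(M, s[n0:N], rcond=None)
    a = np.concatenate(([1.0], a_tail))      # denominator A(z), with a[0] = 1
    b = np.convolve(a, s)[: q + 1]           # numerator B(z): first q+1 terms of A(z)S(z)
    return b, a

# Toy usage on a synthetic segment: model the DCT sequence with 20 poles and 10 zeros.
x = np.exp(-np.linspace(0, 3, 512)) * np.sin(np.linspace(0, 8 * np.pi, 512))
X = dct(x, norm="ortho")
b, a = prony_pole_zero(X, p=20, q=10)
imp = np.zeros(len(X)); imp[0] = 1.0
X_hat = lfilter(b, a, imp)                   # model's reconstruction of the DCT sequence
x_hat = idct(X_hat, norm="ortho")            # back to the time domain
```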

2 citations

Journal ArticleDOI
TL;DR: A real-time data compression/decompression method for a u-Health monitoring system is proposed in order to improve network efficiency; it produces an outstanding PRD compared to other previously reported methods.
Abstract: A sensor network system can be an efficient tool for healthcare telemetry for multiple users due to its power efficiency. One drawback is its limited data size. This paper proposes a real-time application of a data compression/decompression method in a u-Health monitoring system in order to improve the network efficiency. Our high priority was to maintain a high quality of signal reconstruction, since it is important to receive an undistorted waveform. Our method consisted of down-sampling coding and differential Huffman coding. Down-sampling was applied based on the Nyquist-Shannon sampling theorem, and signal amplitude was taken into account to increase the compression rate in the differential Huffman coding. Our method was successfully tested in a ZigBee and WLAN dual network. Electrocardiogram (ECG) had an average compression ratio of 3.99:1 with 0.24% percentage root mean square difference (PRD). Photoplethysmogram (PPG) showed an average CR of 37.99:1 with 0.16% PRD. Our method produced an outstanding PRD compared to other previous reports.
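
The two stages can be sketched as follows: decimation (assuming the reduced rate still satisfies the Nyquist criterion for the signal band), then first-difference quantization; the empirical entropy of the difference symbols is used here as a proxy for the average codeword length a Huffman code over them would achieve. The decimation factor and quantization step are illustrative:

```python
import numpy as np
from collections import Counter

def downsample(x, factor):
    """Keep every factor-th sample; valid only while the reduced sampling rate
    still exceeds twice the signal bandwidth (Nyquist-Shannon)."""
    return x[::factor]

def diff_symbols(x, step):
    """Quantized first differences; small sample-to-sample amplitude changes yield a
    narrow symbol distribution, which is what a differential Huffman code exploits."""
    return np.round(np.diff(x) / step).astype(int)

def entropy_bits(symbols):
    """Shannon entropy in bits/symbol: a lower bound on (and close estimate of)
    the average Huffman codeword length for this symbol stream."""
    counts = np.array(list(Counter(symbols.tolist()).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

ecg = np.sin(np.linspace(0, 50, 3600))       # stand-in for a sampled ECG trace
sym = diff_symbols(downsample(ecg, 2), step=1 / 1024)
print(entropy_bits(sym), "bits/sample after down-sampling and differencing")
```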

2 citations


Additional excerpts

  • ...can be classified along [these] axes [6]....

    [...]

01 Jan 2014
TL;DR: The results prove the suitability of CS as an ultra-low-power compression technique for limited-resource WBSNs and show that CS, when implemented as a digital compression technique, can outperform state-of-the-art ECG compression in terms of overall energy consumption.
Abstract: Our modern society is today threatened by an incipient healthcare delivery crisis caused by current demographic and lifestyle trends. Traditional healthcare infrastructures are increasingly overwhelmed by the escalating levels of supervision and medical management required by the increasingly prevalent aging-related and lifestyle-induced disorders, while healthcare costs are skyrocketing. Consequently, there is a consensus around the need for next-generation advanced citizen-centric eHealth delivery solutions. Wearable personal health systems based on wireless body sensors, or more generally wireless body sensor networks (WBSN), for continuous monitoring and care are widely recognized to be crucial ICT tools to cost-effectively achieve such eHealth delivery solutions. More specifically, WBSN-enabled eHealth solutions consist of outfitting patients with wearable, miniaturized and wireless sensors able to measure, pre-process and wirelessly report various physiological, metabolic and kinematic biosignals to tele-health providers, enabling the required personalized, long-term and real-time remote monitoring of chronic patients, its seamless integration with the patient’s medical record and its coordination with nursing/medical support. However, state-of-the-art WBSN-enabled biosignal monitors still fall short of the required functionality, miniaturization and energy efficiency. Among others, energy efficiency can be improved through embedded biosignal compression, in order to reduce airtime over energy-hungry wireless links. Within this thesis, we present novel and promising approaches to tackle the challenge of ultra-low-power biosignal compression on resource-constrained wireless body sensor nodes. We quantify the potential of the emerging compressed sensing (CS) paradigm for low-complexity and energy-efficient electrocardiogram (ECG) sensing and data compression for storage or transmission, considering both software and hardware aspects. We have focused on ECG because it is a key biosignal in all WBSN designs. This thesis is the first work to present and fully investigate the potential of CS as an ultra-low-power sensing/compression technique for ECG signals. Our results prove the suitability of CS as an ultra-low-power compression technique for limited-resource WBSNs. Our results show that CS, when implemented as a digital compression technique, can outperform state-of-the-art ECG compression in terms of overall energy consumption. The need for fast and robust reconstruction algorithms inspired us to develop a new model-based reconstruction technique to fully leverage the prior information (beyond simple sparsity) from the underlying signal, improving the compression results for both single-lead and joint multi-lead ECG compression. Inspired by the promise of CS to merge
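
In the CS setting the sensor only computes a small number of random projections y = Φx and leaves the expensive sparse reconstruction to the receiver, which is where the energy saving comes from. The sketch below is a generic illustration rather than the thesis's model-based algorithm: the signal is assumed sparse in the DCT domain, Φ is a random Bernoulli matrix, and reconstruction uses plain orthogonal matching pursuit:

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                 # signal length, measurements, sparsity (illustrative)

# A signal that is exactly K-sparse in the DCT domain (stand-in for an ECG window).
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.normal(size=K)
Psi = idct(np.eye(N), norm="ortho", axis=0)     # columns = time-domain DCT basis vectors
x = Psi @ alpha

Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)   # Bernoulli sensing matrix
y = Phi @ x                                                # all the sensor needs to transmit
A = Phi @ Psi                                              # effective dictionary at the receiver

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, refit them by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    alpha_hat = np.zeros(A.shape[1])
    alpha_hat[support] = coef
    return alpha_hat

x_hat = Psi @ omp(A, y, K)
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```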

2 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...Moreover, to quantify the compression performance while assessing the diagnostic quality of the compressed ECG records, we employ two most widely used performance metrics, the compression ratio (CR) and percentage root-mean-square difference (PRD) [39], the same metrics which are presented in 1....

    [...]

  • ...A concise overview of the most relevant techniques can be found in [39, 45, 46]....

    [...]

  • ...To quantify the compression performance while assessing the diagnostic quality of the compressed ECG records, we employ the two most widely used performance metrics, namely the compression ratio (CR) and percentage root-mean-square difference (PRD) [39]....

    [...]

Proceedings ArticleDOI
05 Sep 1993
TL;DR: The VQ of the ECG signal and its first-difference using several codebook sizes is evaluated, and the results obtained are good enough to permit a real-time implementation.
Abstract: Long-term ECG monitoring demands a bit rate of around 200 bps with acceptable distortion. Vector quantization (VQ) techniques allow one to reach that performance, despite the high computational effort. The most time-consuming stage in VQ is the search for the best vector in the codebook, and, as a result, this makes real-time implementation infeasible. This paper evaluates the compression of ECG data using VQ with some techniques to optimize the search for the best vector. The algorithm based on those techniques can be implemented in real time using low-cost microprocessors. We have evaluated the VQ of the ECG signal and its first difference using several codebook sizes. The results obtained are good enough to permit a real-time implementation.
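
The costly step is the exhaustive nearest-neighbour search over the codebook. The abstract does not spell out the paper's particular optimizations, so the sketch below shows one classic search-pruning idea, partial-distance elimination: abandon a candidate codevector as soon as its accumulated squared distance exceeds the best distance found so far, while still returning the same index as a full search. The codebook here is random and purely illustrative (in practice it would come from training, e.g. LBG/k-means):

```python
import numpy as np

def encode_pde(vector, codebook):
    """Index of the nearest codevector, found with partial-distance elimination."""
    best_idx, best_dist = -1, np.inf
    for idx, code in enumerate(codebook):
        dist = 0.0
        for a, b in zip(vector, code):
            dist += (a - b) ** 2
            if dist >= best_dist:          # this codevector can no longer win
                break
        else:                              # ran to completion: new best match
            best_idx, best_dist = idx, dist
    return best_idx

rng = np.random.default_rng(1)
codebook = rng.normal(size=(256, 8))       # 256 codevectors of 8 ECG samples each
segment = rng.normal(size=8)               # one input vector (or its first difference)
print("transmitted codeword index:", encode_pde(segment, codebook))
```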

2 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
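
The limiting process described here can be summarized in one standard relation (a restatement for orientation, not a quotation from the paper): quantizing a density f(x) into bins of width Δ gives a discrete entropy that splits into a finite part, the differential entropy h(X), and a term that grows without bound as the bins shrink.

```latex
H(X_\Delta) \;=\; -\sum_i f(x_i)\,\Delta\,\log_2\!\bigl(f(x_i)\,\Delta\bigr)
\;\approx\; h(X) \;-\; \log_2 \Delta,
\qquad
h(X) \;=\; -\int_{-\infty}^{\infty} f(x)\,\log_2 f(x)\,dx .
```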

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
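
The construction is the familiar one of repeatedly merging the two least-probable subtrees; a compact sketch using Python's heapq, with illustrative symbols (e.g. quantized ECG first differences):

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Minimum-redundancy (Huffman) code from a {symbol: frequency} mapping.

    Each merge of the two least-frequent subtrees prepends '0' to the codewords
    of one subtree and '1' to the other's, so frequent symbols end up shallow.
    """
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                      # tie-breaker so tuples never compare dicts
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

symbols = [0, 0, 1, -1, 0, 2, 0, -1, 0, 0, 1, 0]     # e.g. quantized differences
code = huffman_code(Counter(symbols))
print(code)                                           # most frequent symbol gets the shortest word
print(sum(len(code[s]) for s in symbols), "bits for the whole sequence")
```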

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
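
The all-pole, autocorrelation-method case that dominates the paper reduces to solving the normal equations, which the Levinson-Durbin recursion does in O(p^2) while producing the reflection (partial correlation) coefficients as a by-product; the normalized prediction error it tracks is the quantity used to choose an "optimal" number of poles. A minimal sketch (signal and order are illustrative):

```python
import numpy as np

def lpc_levinson(x, order):
    """Autocorrelation-method linear prediction via the Levinson-Durbin recursion.

    Returns (a, k, err): predictor coefficients a with a[0] = 1, reflection
    coefficients k, and the final (unnormalized) prediction-error energy.
    """
    n = len(x)
    r = np.array([np.dot(x[: n - i], x[i:]) for i in range(order + 1)])  # autocorrelation
    a = np.zeros(order + 1); a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = -acc / err
        a[1:i + 1] = a[1:i + 1] + k[i - 1] * np.concatenate((a[i - 1:0:-1], [1.0]))
        err *= 1.0 - k[i - 1] ** 2
    return a, k, err

rng = np.random.default_rng(2)
x = np.sin(np.linspace(0, 30 * np.pi, 1000)) + 0.01 * rng.normal(size=1000)
a, k, err = lpc_levinson(x, order=4)
print("normalized prediction error:", err / np.dot(x, x))   # criterion for choosing the order
```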

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
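
The interval-narrowing idea fits in a few lines. The sketch below is a floating-point illustration with a fixed (non-adaptive) model, so it only works for short messages; a practical coder uses integer registers with renormalization and, as the paper stresses, an adaptive model supplying the probabilities:

```python
def build_intervals(probs):
    """Assign each symbol a sub-interval of [0, 1) proportional to its probability."""
    cum, c = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (c, c + p)
        c += p
    return cum

def arithmetic_encode(message, probs):
    """Return one number inside the final interval; that number identifies `message`."""
    cum = build_intervals(probs)
    low, high = 0.0, 1.0
    for sym in message:
        lo_f, hi_f = cum[sym]
        width = high - low
        low, high = low + width * lo_f, low + width * hi_f   # rarer symbol -> narrower interval
    return (low + high) / 2

def arithmetic_decode(value, n, probs):
    """Invert the encoder for a message of known length n under the same model."""
    cum = build_intervals(probs)
    out = []
    for _ in range(n):
        for sym, (lo_f, hi_f) in cum.items():
            if lo_f <= value < hi_f:
                out.append(sym)
                value = (value - lo_f) / (hi_f - lo_f)       # rescale and continue
                break
    return "".join(out)

model = {"a": 0.6, "b": 0.3, "c": 0.1}                       # illustrative static model
point = arithmetic_encode("abac", model)
print(point, arithmetic_decode(point, 4, model))             # -> "abac"
```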

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
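
For data compression, what matters about these transforms is energy compaction: most of the signal energy lands in a few coefficients, so only those need to be quantized and transmitted. A small sketch with the discrete cosine transform; the fraction of retained coefficients is illustrative:

```python
import numpy as np
from scipy.fft import dct, idct

def transform_compress(x, keep_fraction):
    """Zero all but the largest-magnitude DCT coefficients; return (reconstruction, MSE)."""
    X = dct(x, norm="ortho")
    k = max(1, int(keep_fraction * len(X)))
    X_kept = X.copy()
    X_kept[np.argsort(np.abs(X))[:-k]] = 0.0     # drop everything but the k largest
    x_hat = idct(X_kept, norm="ortho")
    return x_hat, float(np.mean((x - x_hat) ** 2))

t = np.linspace(0, 1, 512)
x = np.sin(6 * np.pi * t) + 0.05 * np.cos(40 * np.pi * t)    # stand-in for a smooth biosignal
x_hat, mse = transform_compress(x, keep_fraction=0.1)
print("MSE keeping 10% of the DCT coefficients:", mse)
```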

928 citations