Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding. A framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
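For illustration, here is a minimal Python sketch of one of the direct methods listed above, the Turning-point rule, as it is commonly described in the ECG compression literature: samples are processed in pairs, and from each pair the sample that preserves a slope sign change is retained, giving a fixed 2:1 compression ratio. The function names are mine, not from the paper.

```python
def sign(x):
    return (x > 0) - (x < 0)

def turning_point(samples):
    """Turning-point compression: keep one sample from each pair,
    preferring the one at which the slope changes sign, for a fixed
    2:1 compression ratio."""
    if len(samples) < 3:
        return list(samples)
    out = [samples[0]]
    x0 = samples[0]
    for i in range(1, len(samples) - 1, 2):
        x1, x2 = samples[i], samples[i + 1]
        if sign(x1 - x0) * sign(x2 - x1) < 0:  # x1 is a turning point
            kept = x1
        else:
            kept = x2
        out.append(kept)
        x0 = kept
    return out

print(turning_point([0, 2, 1, 3, 5, 4, 6, 8, 7]))  # -> [0, 2, 5, 4, 8]
```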
Citations
Journal ArticleDOI
22 Dec 2017
TL;DR: Wavelet-based compression algorithms for one-dimensional signals are presented along with the results of compressing ECG data; compression using the Haar wavelet with local thresholding is found to be optimal in terms of compression ratio.
Abstract: Although digital storage media are inexpensive and computational power has increased exponentially in the past few years, electrocardiogram (ECG) compression still attracts attention due to the huge amount of data that has to be stored and transmitted. ECG compression methods can be classified into two categories: direct methods and transform methods. A wide range of compression techniques have been based on different transformations. In this work, transform-based signal compression is proposed; the method exploits the redundancy in the signal. Wavelet-based compression is evaluated to find an optimal compression strategy for ECG data. The algorithm for the one-dimensional case is modified and applied to compress ECG data. A wavelet ECG data codec based on run-length encoding is proposed in this research. Wavelet-based compression algorithms for one-dimensional signals are presented along with the results of compressing ECG data. First, ECG signals are decomposed by the discrete wavelet transform (DWT). The decomposed signals are compressed using thresholding and run-length encoding; both global and local thresholding are employed. Different types of wavelets, such as Daubechies, Haar, Coiflets, and Symlets, are applied for decomposition. Finally, the compressed signal is reconstructed. The wavelets' performances are evaluated in terms of compression ratio (CR) and percent root-mean-square difference (PRD). Compression using the Haar wavelet with local thresholding is found to be optimal in terms of compression ratio.
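The pipeline the abstract describes (DWT decomposition, thresholding, run-length encoding) can be sketched with PyWavelets. This is a minimal illustration, not the authors' code: the function names, the keep_ratio parameter, and the simple global quantile threshold are my assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def compress(signal, wavelet="haar", level=4, keep_ratio=0.10):
    """Decompose with the DWT, zero small coefficients (a simple
    global threshold), then run-length encode the resulting runs."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    # global threshold: keep the largest keep_ratio fraction of coefficients
    thr = np.quantile(np.abs(flat), 1.0 - keep_ratio)
    flat[np.abs(flat) < thr] = 0.0
    # run-length encoding as (value, run_length) pairs; zero runs compress well
    rle, i = [], 0
    while i < len(flat):
        j = i
        while j < len(flat) and flat[j] == flat[i]:
            j += 1
        rle.append((flat[i], j - i))
        i = j
    return rle, slices

def decompress(rle, slices, wavelet="haar"):
    flat = np.concatenate([np.full(n, v) for v, n in rle])
    coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec")
    return pywt.waverec(coeffs, wavelet)
```

A local-thresholding variant, as evaluated in the paper, would apply a separate threshold per decomposition level instead of one global quantile.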

11 citations

Proceedings ArticleDOI
01 Oct 2010
TL;DR: A new Rate-switching Un-Equal Protection (RUEP) mechanism is proposed, which optimizes the distortion reduction of ECG data by adaptively assigning different Rate Compatible Punctured Convolutional (RCPC) codes to protect the different parts of the compressed ECG data.
Abstract: Energy efficiency for mobile wireless electrocardiography (ECG) communication is an important issue due to resource constraints in wireless Body Area Sensor Networks (BASNs). Traditional high-quality ECG transmission schemes require substantial amounts of energy, which may not be available in a BASN. Therefore, an adaptive approach is necessary to provide high-quality ECG transmission while using the available energy resources efficiently. The related literature mainly focuses on data compression, where transmission energy is saved because the amount of data being transmitted is reduced; further reduction of energy consumption through the communication strategy itself is rarely discussed. In this paper, we analyze the characteristics of compressed ECG data and show that different parts of the data are unequally important to the quality of ECG transmission in a BASN. We propose a new Rate-switching Un-Equal Protection (RUEP) mechanism, which optimizes the distortion reduction of ECG data by adaptively assigning different Rate Compatible Punctured Convolutional (RCPC) codes to protect the different parts of the compressed ECG data. Simulation results demonstrate that our RUEP scheme improves communication energy efficiency by at least 45 percent compared with traditional schemes in the AWGN channel.
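The unequal-protection idea can be illustrated with a toy rate-assignment sketch: parts of the compressed stream that matter more to reconstruction quality get a stronger (lower-rate) channel code. The rate table, segment names, and importance scores below are hypothetical; they are not the RCPC family, the distortion model, or the rate-switching logic used in the paper.

```python
# Hypothetical illustration of unequal error protection by code rate.
RCPC_RATES = [1/2, 2/3, 3/4, 8/9]  # stronger ... weaker protection

def assign_rates(segments):
    """segments: list of (payload, importance) with importance in [0, 1].
    Returns (payload, code_rate) pairs: higher importance -> lower rate."""
    assigned = []
    for payload, importance in segments:
        idx = min(int((1.0 - importance) * len(RCPC_RATES)),
                  len(RCPC_RATES) - 1)
        assigned.append((payload, RCPC_RATES[idx]))
    return assigned

# e.g. headers and coarse coefficients matter most to reconstruction
stream = [(b"header", 1.0), (b"coarse", 0.8), (b"detail", 0.3)]
print(assign_rates(stream))
```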

11 citations


Cites methods from "ECG data compression techniques-a unified approach"

  • ...ECG data consist of short, high-frequency QRS complexes and long, flat P and T waves; therefore, it is very appropriate to use the wavelet transform in analyzing and classifying ECG signals [5]-[10]....


Journal ArticleDOI
TL;DR: Numerical results show that decoding by FCE is on average 33 times faster than the fastest tested CS-based ECG decoding technique, and high-quality ECG signal reconstruction by FCE is achieved at a 32% higher compression ratio.
Abstract: Compressed Sensing (CS) has been proposed as a low-complexity ECG data compression scheme for wearable wireless bio-sensor devices. However, CS decoding is characterized by high computational complexity. As a result, it represents a burden to the computational and energy resources of the network gateway node, where decoding is performed. In this article, we propose a Fast Compressive Electrocardiography (FCE) technique to address this problem. CS decoding in FCE is based on Weighted Regularized Least-Squares (WRLS), rather than the standard approach based on ℓ1-norm minimization. The WRLS formulation takes into account prior knowledge of ECG signal properties to estimate an optimally compact and accurate representation of ECG signals. Numerical results show that decoding by FCE is on average 33 times faster than the fastest tested CS-based ECG decoding technique. In addition, high-quality ECG signal reconstruction by FCE is achieved at a 32% higher compression ratio. Therefore, FCE can contribute to improving the overall energy and computational resource efficiency of CS-based remote ECG monitoring systems.
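What makes a least-squares formulation fast compared with iterative ℓ1 solvers is that it admits a closed-form solution via the normal equations. Below is a generic weighted regularized least-squares sketch; the specific weights FCE derives from ECG priors are not reproduced here, so the weight vector w is left as an input and the function name is mine.

```python
import numpy as np

def wrls_decode(A, y, w, lam=1e-2):
    """Generic WRLS reconstruction:
        x_hat = argmin_x ||y - A x||^2 + lam * ||diag(w) x||^2,
    solved in closed form. A is the m x n sensing matrix (m << n),
    y the compressed measurements, and w > 0 a per-coefficient weight
    vector encoding prior signal knowledge (small weights where large
    coefficients are expected). A'A + lam*diag(w)^2 is positive
    definite, so a single linear solve suffices."""
    W2 = np.diag(w ** 2)
    return np.linalg.solve(A.T @ A + lam * W2, A.T @ y)
```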

11 citations

Proceedings ArticleDOI
01 Jun 2008
TL;DR: Experimental results show that the proposed 2-D MSPIHT algorithm is valid and that a greater compression ratio can be achieved while the quality of the reconstructed signal remains good.
Abstract: A new two-dimensional (2-D) electrocardiogram (ECG) signal compression algorithm named 2-D Modified Set Partitioning In Hierarchical Trees (2-D MSPIHT), based on the SPIHT algorithm, is proposed in this paper. Exploiting the two kinds of correlation in an ECG signal (within a beat and between beats), the ECG signal is cut and aligned to form a 2-D data array, to which 2-D MSPIHT can then be applied. Simulation experiments using the 2-D MSPIHT and SPIHT compression algorithms have been conducted to verify the validity of the 2-D MSPIHT algorithm and test its performance. The experimental results show that the proposed 2-D MSPIHT algorithm is valid and that a greater compression ratio can be achieved while the quality of the reconstructed signal remains good.
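The cut-and-align step can be sketched as follows, assuming R-peak locations come from an upstream QRS detector; the detector, the fixed-width truncation, and the zero-padding policy are my assumptions, not details from the paper.

```python
import numpy as np

def beats_to_2d(signal, r_peaks, width):
    """Cut an ECG signal at pre-detected R peaks and align the beats as
    rows of a 2-D array, so a 2-D coder such as SPIHT can exploit both
    within-beat and between-beat correlation. Beats are truncated or
    zero-padded to a common width."""
    rows = []
    for start, end in zip(r_peaks[:-1], r_peaks[1:]):
        beat = signal[start:end][:width]
        rows.append(np.pad(beat, (0, width - len(beat))))
    return np.vstack(rows)
```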

11 citations

Journal ArticleDOI
TL;DR: To accommodate both clean PPG signals and signals affected by motion artifacts, the effect of step size on the performance of DM is evaluated in order to optimize this technique before it can be deployed in a wireless data acquisition system.

11 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
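A standard way to make that limiting process concrete (my illustration, not a formula quoted from this listing) is the entropy of a quantized continuous source: with density f and bin width Δ,

```latex
H(X_\Delta)
  = -\sum_i f(x_i)\,\Delta\,\log\!\bigl(f(x_i)\,\Delta\bigr)
  = -\sum_i f(x_i)\,\log f(x_i)\,\Delta \;-\; \log\Delta
  \qquad\text{(using } \textstyle\sum_i f(x_i)\,\Delta \approx 1\text{)},
```

so H(X_Δ) + log Δ converges to the differential entropy h(X) = -∫ f(x) log f(x) dx as Δ → 0. The divergent log Δ term is one of the "new effects" of the continuous case: absolute entropy is infinite, and only the finite part h(X) carries over.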

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
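Huffman's construction fits in a few lines: repeatedly merge the two least probable subtrees, so rarer symbols end up deeper in the tree and receive longer codewords. A minimal Python sketch (names mine):

```python
import heapq
from collections import Counter

def huffman_codes(message):
    """Build a minimum-redundancy (Huffman) code by repeatedly merging
    the two least frequent subtrees; frequent symbols get short codes."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq)
            in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # keeps tuple comparison away from the dicts
    while len(heap) > 1:
        f0, _, left = heapq.heappop(heap)
        f1, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f0 + f1, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

print(huffman_codes("abracadabra"))  # 'a' gets the shortest codeword
```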

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
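The stationary (autocorrelation) method mentioned above reduces to solving Toeplitz normal equations. A minimal NumPy sketch, with names of my choosing:

```python
import numpy as np

def lpc(signal, order):
    """All-pole linear prediction by the autocorrelation method: solve
    the Yule-Walker normal equations R a = r for the predictor
    coefficients a, where R is the Toeplitz autocorrelation matrix.
    The model predicts s[n] ~= sum_k a[k] * s[n-1-k]."""
    s = np.asarray(signal, dtype=float)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(s[:len(s) - k], s[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:])
```

In DPCM-style compression, the prediction residual s[n] minus the predicted value is what gets quantized and encoded, since it has much lower variance than the signal itself.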

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
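The interval-narrowing idea behind arithmetic coding fits in a short sketch. This toy version uses floating point, so it only handles short messages with a fixed model; practical coders use integer renormalization and adaptive models, which are not shown.

```python
def arith_encode(message, probs):
    """Toy arithmetic encoder: narrow [low, high) by each symbol's
    slice of the current interval; any number in the final interval
    encodes the whole message."""
    cum, c = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for sym in message:
        lo, hi = cum[sym]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2

def arith_decode(x, n, probs):
    cum, c = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (c, c + p)
        c += p
    out = []
    for _ in range(n):
        for sym, (lo, hi) in cum.items():
            if lo <= x < hi:
                out.append(sym)
                x = (x - lo) / (hi - lo)  # rescale to the subinterval
                break
    return "".join(out)

probs = {"a": 0.6, "b": 0.3, "c": 0.1}
code = arith_encode("abca", probs)
assert arith_decode(code, 4, probs) == "abca"
```

The separation the abstract highlights is visible here: probs is the model, and the interval arithmetic is the channel encoding; either can be swapped independently.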

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
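One of the criteria listed, energy compaction for data compression, can be illustrated by measuring how much signal energy the largest few coefficients of two transforms capture. A small sketch assuming SciPy is available; the test signal and the choice of DCT versus Walsh-Hadamard are illustrative, not from the paper.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def energy_in_top_k(coeffs, k):
    """Fraction of total energy in the k largest-magnitude transform
    coefficients (a simple energy-compaction measure)."""
    e = np.sort(np.abs(coeffs))[::-1] ** 2
    return e[:k].sum() / e.sum()

n, k = 64, 8
t = np.linspace(0, 1, n, endpoint=False)
x = np.exp(-5 * t) + 0.1 * np.sin(2 * np.pi * 4 * t)  # smooth test signal

c_dct = dct(x, norm="ortho")
c_wht = hadamard(n) @ x / np.sqrt(n)  # orthonormal Walsh-Hadamard

print("DCT top-8 energy:", energy_in_top_k(c_dct, k))
print("WHT top-8 energy:", energy_in_top_k(c_wht, k))
```

For smooth signals the DCT typically concentrates more energy in fewer coefficients, which is why it features so prominently in transform-based compression.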

928 citations