Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
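The survey's DPCM category is easy to make concrete. Below is a minimal first-order DPCM sketch in Python; it is our illustration under assumed parameters (the step size q_step is arbitrary), not code from the paper:

```python
import numpy as np

def dpcm_encode(x, q_step=8.0):
    """First-order DPCM: quantize the difference between each sample and
    the previously *reconstructed* sample, so encoder and decoder agree."""
    codes, recon = [], 0.0
    for s in x:
        c = int(round((s - recon) / q_step))  # quantized prediction residual
        codes.append(c)
        recon += c * q_step                   # track the decoder's state
    return codes

def dpcm_decode(codes, q_step=8.0):
    """Rebuild the waveform by accumulating the quantized residuals."""
    out, recon = [], 0.0
    for c in codes:
        recon += c * q_step
        out.append(recon)
    return np.array(out)

# Toy check: reconstruction error stays within half a quantizer step.
sig = 500.0 * np.sin(np.linspace(0, 2 * np.pi, 200))
assert np.max(np.abs(sig - dpcm_decode(dpcm_encode(sig)))) <= 4.0
```

The small residuals produced this way are natural input for an entropy coder, which is exactly the DPCM-plus-entropy-coding pipeline the abstract groups under direct data compression.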
Citations
Journal ArticleDOI
TL;DR: A joint use of the discrete cosine transform (DCT) and differential pulse code modulation (DPCM) based quantization is presented for predefined quality-controlled electrocardiogram (ECG) data compression.
Abstract: In this paper, a joint use of the discrete cosine transform (DCT) and differential pulse code modulation (DPCM) based quantization is presented for predefined quality-controlled electrocardiogram (ECG) data compression. The formulated approach exploits the energy compaction property in the transformed domain. The DPCM quantization has been applied to zero-sequence grouped DCT coefficients that were optimally thresholded via the Regula-Falsi method. The generated sequence is encoded using Huffman coding. This encoded series is further converted to a valid ASCII code using the standard codebook for transmission purposes. Such a coded series possesses inherent encryption capability. The proposed technique is validated on all 48 records of the standard MIT-BIH database using different measures for compression and encryption. The acquisition time has been taken in accordance with that used in the literature for fair comparison with contemporary state-of-the-art approaches. The chosen measures are (1) compression ratio (CR), (2) percent root mean square difference (PRD), (3) percent root mean square difference without base (PRD1), (4) percent root mean square difference normalized (PRDN), (5) root mean square (RMS) error, (6) signal-to-noise ratio (SNR), (7) quality score (QS), (8) entropy, (9) entropy score (ES) and (10) correlation coefficient (r_xy). Prominently, the average values of CR, PRD and QS were 18.03, 1.06, and 17.57 respectively. Similarly, the mean encryption metrics, i.e. entropy, ES and r_xy, were 7.9692, 0.9962 and 0.0113 respectively. The novelty of combining the approaches is well justified by the values of these metrics, which are significantly better than those of the comparison counterparts.
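A minimal sketch of the energy-compaction stage this abstract builds on is shown below; the keep fraction and the PRD check are our assumptions, and the paper's Regula-Falsi threshold search, zero-sequence grouping, DPCM quantization and Huffman/ASCII stages are not reproduced:

```python
import numpy as np
from scipy.fft import dct, idct

def dct_threshold(x, keep_fraction=0.10):
    """Zero all but the largest-magnitude DCT coefficients, then invert:
    the energy-compaction step on which the paper's pipeline is built."""
    c = dct(x, norm='ortho')
    k = max(1, int(len(c) * keep_fraction))
    c[np.argsort(np.abs(c))[:-k]] = 0.0   # discard the smallest coefficients
    return idct(c, norm='ortho')

def prd(x, x_rec):
    """Percent root-mean-square difference, measure (2) in the abstract."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```

On a real record one would sweep the threshold until the PRD meets the predefined quality target, which is the role the Regula-Falsi root finder plays in the paper.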

31 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...The DPCM quantization has been applied to zero-sequence grouped DCT coefficients that were optimally thresholded via Regula-Falsi method....

  • ...A joint application of transform domain method for ECG data compression such as DCT and direct method such as efficient zero sequence grouping followed by optimum differential pulse code modulation (DPCM) encoding and ASCII coding provides a breakeven point between CR and PRD....

  • ...In this paper, an algorithm based on joint use DCT domain, zero sequence encoding, DPCM quantization and ASCII encoding is presented to improve the ECG data compression efficiency along with the efficient inherent encryption....

  • ...The third stage converts the Huffman decoded samples back to m-bit quantization levels from 8-bit data type, followed by DPCM based decoding....

  • ...Example of the intermediate encoding stages:

    6-bit encoding of DPCM coefficients
      Index: 1   2   3   4   5   6   7   8
      Value: 35  38  42  39  41  40  41  41

    8-bit encoding of the 6-bit DPCM coefficients
      Index: 1    2    3    4    5    6
      Value: 142  106  167  166  138  105

    Huffman coding and ASCII encoding: for the final process of compression, Huffman coding along with the ASCII encoding has been utilized.... (A sketch of the 6-bit-to-8-bit repacking follows this list.)
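The excerpt above maps eight 6-bit values onto six bytes (48 bits either way). A minimal sketch of that bit-repacking, which reproduces the excerpt's numbers exactly (the function name is ours; the paper publishes no code):

```python
def pack_6bit_to_bytes(values):
    """Pack 6-bit values (0-63) into 8-bit bytes, most significant bit first."""
    bits, nbits, out = 0, 0, []
    for v in values:
        bits = (bits << 6) | (v & 0x3F)   # append the next 6 bits
        nbits += 6
        while nbits >= 8:                 # emit every completed byte
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:                             # zero-pad a trailing partial byte
        out.append((bits << (8 - nbits)) & 0xFF)
    return out

# Matches the table above: eight 6-bit DPCM codes -> six bytes.
assert pack_6bit_to_bytes([35, 38, 42, 39, 41, 40, 41, 41]) == \
    [142, 106, 167, 166, 138, 105]
```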

Proceedings ArticleDOI
02 May 2012
TL;DR: To increase compression ratio and reduce distortion of the ECG signal, a non-uniform binary sensing matrix is proposed and evaluated.
Abstract: Wearable ECG sensors can assist in prolonged monitoring of cardiac patients. Compression of ECG signals is pursued as a means to minimize the energy consumed during transmission of information from a portable ECG sensor to a server. In this paper, compressed sensing is employed in ECG compression. To increase compression ratio and reduce distortion of the ECG signal, a non-uniform binary sensing matrix is proposed and evaluated.
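A minimal compressed-sensing sketch appears below: measure with a binary matrix, then recover a sparse signal with orthogonal matching pursuit. The matrix here is uniformly random 0/1, our placeholder; the paper's contribution is precisely a non-uniform binary design, and with a dense uniform matrix the column coherence is high, which is part of why the design matters:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily estimate a k-sparse x from y = Phi @ x."""
    residual, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                              # 4:1 compression, sparsity 8
Phi = rng.integers(0, 2, (m, n)).astype(float)    # uniform binary sensing matrix
Phi /= np.linalg.norm(Phi, axis=0)                # normalize columns for OMP
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = Phi @ x                                       # compressed measurements
x_hat = omp(Phi, y, k)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```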

31 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...[12] use distributed CS for adjacent beats of ECG....

  • ...Although there are many papers addressing the problem of ECG compression [10-19], only a few studies have been published in the specific area of ECG signal compression using CS....

Journal ArticleDOI
TL;DR: This paper uses the multidimensional multiscale parser (MMP) algorithm, a recently developed universal lossy compression method, to compress data from electrocardiogram (ECG) signals, and shows simulation results where MMP performs as well as some of the best encoders in the literature, although at the expense of a high computational complexity.
Abstract: In this paper, we use the multidimensional multiscale parser (MMP) algorithm, a recently developed universal lossy compression method, to compress data from electrocardiogram (ECG) signals. MMP is based on approximate multiscale pattern matching, encoding segments of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using concatenated and displaced versions of previously encoded segments; therefore, MMP builds its own dictionary while the input data is being encoded. MMP can be easily adapted to compress signals of any number of dimensions, and has been successfully applied to compress two-dimensional (2-D) image data. The quasi-periodic nature of ECG signals makes them suitable for compression using recurrent patterns, as MMP does. However, in order for MMP to be able to efficiently compress ECG signals, several adaptations had to be performed, such as the use of a continuity criterion among segments and the adoption of a prune-join strategy for segmentation. The rate-distortion performance achieved was very good. We show simulation results where MMP performs as well as some of the best encoders in the literature, although at the expense of a high computational complexity.
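A toy rendering of the core multiscale-matching idea follows; it is our simplification (MMP itself also transmits scale/offset information and grows the dictionary from concatenated, displaced versions of previously encoded segments):

```python
import numpy as np

def rescale(pattern, length):
    """Expand or contract a pattern to a target length by linear interpolation."""
    return np.interp(np.linspace(0.0, 1.0, length),
                     np.linspace(0.0, 1.0, len(pattern)), pattern)

def best_match(segment, dictionary):
    """Index of the dictionary pattern that, rescaled to the segment's length,
    approximates it best in the squared-error sense."""
    errors = [np.sum((segment - rescale(p, len(segment))) ** 2)
              for p in dictionary]
    return int(np.argmin(errors))
```

Because consecutive heartbeats repeat with small time and amplitude variations, rescaled copies of earlier beats tend to match later ones well, which is what makes this dictionary approach a good fit for ECG.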

30 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...A residual signal is then generated by subtracting the actual sample values from the predicted ones and the difference is quantized and encoded [2]....

Proceedings ArticleDOI
18 Apr 2010
TL;DR: A low-complexity ECG compression method is proposed based on the special considerations of (processing and transmission) power consumption at wireless ECG sensors; comparisons with existing approaches confirm the superior performance of this method.
Abstract: Remote health monitoring by exploiting wireless communications technologies is an emerging area receiving increasing interest from academia, research labs and industry. This issue has also been brought up in the standardization process of the IEEE 802.15 Task Group 6 on Wireless Body Area Networks (WBAN). The challenge is to find the appropriate combination of medical data processing techniques and wireless technology in order to meet the particularly stringent error, latency and power consumption requirements of healthcare applications. In this paper, we present the results of a case study on wireless electrocardiogram (ECG) monitoring over Bluetooth. Based on the special considerations of (processing and transmission) power consumption at wireless ECG sensors, we first propose a low-complexity ECG compression method. Comparisons with existing approaches confirm the superior performance of our method. We then study the data reconstruction performance at the Bluetooth receiver and find that: i) uncompressed ECG data transmission is not necessarily better than compressed transmission; and ii) there exists an optimum ECG data compression ratio for the wireless link.

30 citations


Cites background from "ECG data compression techniques-a u..."

  • ...The system is intended to provide the following functions: i) monitoring of the health parameters in order to detect an emergency; and ii) providing medical assistance in cases of emergency....

  • ...Summarizing remarks will be given in Section V....

Journal ArticleDOI
TL;DR: Efficient algorithms are presented which have been developed after an exhaustive study of methods such as amplitude zone time epoch coding (AZTEC), modified AZTEC, Fan and scan-along polygonal approximation (SAPA) techniques; the improved methods promise wider scope for application in telemedicine.
Abstract: Several techniques have been developed during the last four decades for the compression of the ECG signal. ECG signal compression is required for two main reasons: effective and economical data storage, and on-line transmission of the signal. In present-day information technology (IT), ECG data compression has become even more significant for telemedicine. Hence, it is essential to review the existing direct data compression (DDC) techniques for the purpose of telemedicine. This paper deals with efficient algorithms which have been developed after an exhaustive study of methods such as amplitude zone time epoch coding (AZTEC), modified AZTEC, Fan and scan-along polygonal approximation (SAPA) techniques. In each of these techniques, modifications have been made to make them suitable for telemedicine purposes. Suitability of the system has been checked over transport control protocol (TCP), internet protocol (IP), local area network (LAN) and wide area network (WAN). The techniques have been tested on all standard leads of ECG signals from the CSE database. The results of the improved direct data compression techniques are encouraging and promise wider scope for application in telemedicine.
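For illustration, a stripped-down AZTEC-style plateau detector is sketched below; this is our simplification (real AZTEC also emits slope segments when plateaus become too short, and the modified variants adapt the threshold):

```python
def aztec_plateaus(x, eps=10.0):
    """Collapse runs of samples that stay inside an eps-wide amplitude band
    into (duration, amplitude) pairs: the 'horizontal line' half of AZTEC."""
    out, start = [], 0
    vmin = vmax = x[0]
    for i in range(1, len(x)):
        if max(vmax, x[i]) - min(vmin, x[i]) > eps:       # band would break
            out.append((i - start, (vmax + vmin) / 2.0))  # close the plateau
            start, vmin, vmax = i, x[i], x[i]
        else:
            vmin, vmax = min(vmin, x[i]), max(vmax, x[i])
    out.append((len(x) - start, (vmax + vmin) / 2.0))     # final plateau
    return out
```

Compression comes from the long, nearly flat baseline between beats collapsing into single (duration, amplitude) pairs, which is also why such schemes suit low-bandwidth telemedicine links.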

30 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...…methods, namely peak-picking, cycle-pool-based compression (CPBC), linear prediction and neural network based methods, the extraction of a set of useful parameters from the original signal is carried out and the same are used in the reconstruction process (Sateh et al. 1990, Skordalakis 1996)....

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
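In modern notation, the limiting construction described here quantizes the source $X$ into cells of width $\Delta$ and lets $\Delta \to 0$ (a standard rendering, our phrasing):

```latex
h(X) = -\int_{-\infty}^{\infty} p(x)\,\log p(x)\,dx,
\qquad
H(X_{\Delta}) \approx h(X) - \log\Delta \quad (\Delta \to 0)
```

The discrete entropy of the quantized source diverges as the cells shrink, so in the continuous case only differences of entropies (and hence rates and capacities) retain absolute meaning; this is one of the "new effects" the abstract mentions.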

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
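The construction repeatedly merges the two lowest-weight subtrees; a compact Python sketch (our rendering, not Huffman's original notation):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a minimum-redundancy prefix code: repeatedly merge the two
    lowest-weight subtrees, prefixing '0'/'1' to their codewords."""
    heap = [[w, [sym, ""]] for sym, w in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

print(huffman_code("aaaabbbccd"))  # frequent symbols get shorter codewords
```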

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
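The stationary (autocorrelation) method reduces to Toeplitz normal equations, solvable by the Levinson-Durbin recursion; a minimal sketch (our rendering):

```python
import numpy as np

def lpc(x, order):
    """All-pole linear-prediction coefficients via the autocorrelation method
    and the Levinson-Durbin recursion. Returns (a, err) with a[0] = 1."""
    r = np.correlate(x, x, mode='full')[len(x) - 1:]   # autocorrelation lags
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection (PARCOR) coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]  # update earlier coefficients
        a[i] = k
        err *= 1.0 - k * k                   # prediction-error power shrinks
    return a, err
```

The prediction residual e[n] = sum_j a[j] x[n-j] is what a DPCM-style compressor would quantize, and the reflection coefficients k are the quantities the paper singles out for quantization and encoding in transmission.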

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
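The interval-narrowing idea fits in a few lines; this float-based sketch (ours) works only for short messages, whereas practical coders renormalize with integer arithmetic and pair naturally with adaptive models:

```python
def arithmetic_encode(symbols, probs):
    """Narrow [low, high) by each symbol's probability slice; any number in
    the final interval identifies the whole message."""
    cum, c = {}, 0.0
    for s, p in probs.items():               # build cumulative distribution
        cum[s] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for s in symbols:
        lo, hi = cum[s]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2.0

print(arithmetic_encode("aab", {"a": 0.7, "b": 0.3}))
```

The separation the abstract highlights is visible here: the model is just `probs`, and swapping in a better (adaptive) model changes nothing in the interval arithmetic.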

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
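As one concrete member of this family, the fast Walsh-Hadamard transform needs only additions and subtractions (length a power of two); a minimal sketch, our rendering:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform with orthonormal scaling; the butterfly
    structure mirrors the matrix-factoring fast algorithms reviewed here."""
    x = np.asarray(x, dtype=float).copy()
    n, h = len(x), 1
    assert n & (n - 1) == 0, "length must be a power of two"
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x / np.sqrt(n)

print(fwht([1.0, 1.0, 1.0, 1.0]))  # energy compacts into the first coefficient
```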

928 citations