Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
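Of the tolerance-comparison family surveyed here, the Turning Point (TP) algorithm is the simplest to illustrate. The following is a minimal Python sketch of the TP idea (an illustrative reconstruction, not the paper's exact formulation): from each incoming pair of samples, keep the one that preserves a change in slope direction, for a fixed 2:1 reduction.

```python
import numpy as np

def turning_point(x):
    """Turning Point (TP) compression sketch: fixed 2:1 sample reduction.

    For each pair (x1, x2) after the last retained sample x0, keep x1
    if the slope changes sign at x1 (a turning point), otherwise keep x2.
    """
    kept = [x[0]]
    x0 = x[0]
    for i in range(1, len(x) - 1, 2):
        x1, x2 = x[i], x[i + 1]
        if np.sign(x1 - x0) * np.sign(x2 - x1) < 0:
            kept.append(x1)   # x1 is a local extremum: keep it
            x0 = x1
        else:
            kept.append(x2)   # no turning point: keep the later sample
            x0 = x2
    return np.array(kept)

# Roughly 2:1 compression on a synthetic signal.
sig = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 500))
print(len(sig), "->", len(turning_point(sig)))
```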
Citations
01 Jan 2013
TL;DR: This paper discusses various techniques proposed earlier in the literature for compression of ECG signals, provides a comparative study of these techniques, and suggests a new coding framework for ECG compression based on Empirical Mode Decomposition (EMD).
Abstract: Electrocardiogram (ECG) plays a significant role in diagnosing most cardiac diseases. One cardiac cycle in an ECG signal consists of the P-QRS-T waves. Many types of ECG recordings generate vast amounts of data, and ECG compression becomes mandatory to efficiently store and retrieve this data from medical databases. Numerous techniques have recently been developed for compression of the signal; they are essential to a variety of applications ranging from diagnostic to ambulatory ECGs, so the need for effective ECG compression techniques is of great importance. Many existing compression algorithms have shown some success in electrocardiogram compression; however, algorithms that produce better compression ratios and less loss of data in the reconstructed signal are needed. This paper discusses various techniques proposed earlier in the literature for compression of an ECG signal and provides a comparative study of these techniques. In addition, it suggests a new coding framework for ECG compression based on Empirical Mode Decomposition (EMD).
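The EMD-based framework is only named in the abstract, not specified. As a sketch of the decomposition step alone, using the third-party PyEMD package (an assumed tooling choice, not the authors' implementation): the signal is split into intrinsic mode functions (IMFs), which a coder can then quantize or discard selectively.

```python
# pip install EMD-signal   (provides the PyEMD module; an assumed tooling choice)
import numpy as np
from PyEMD import EMD

# Synthetic stand-in for an ECG segment.
t = np.linspace(0, 1, 1000)
sig = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * t)

# EMD yields intrinsic mode functions; an EMD-based coder compresses
# the signal by coding the IMFs selectively.
imfs = EMD()(sig)
print("IMFs extracted:", imfs.shape[0])
```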

17 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...The AZTEC, Fan/SAPA, TP, and CORTES, SLOPE, Delta coding are ECG compression schemes, which are basically based on the tolerance-comparison compression methods [18]....

    [...]

Journal ArticleDOI
TL;DR: The simplicity and low infrastructural cost of the algorithm make it suitable for implementation on an embedded platform for use in mobile devices.
Abstract: This study proposes an algorithm for electrocardiogram (ECG) data compression using the conventional discrete Fourier transform. The coefficients are calculated using sine and cosine basis functions instead of complex exponentials, to avoid generation of complex coefficient values. Two well-defined strategies are proposed for the choice of the significant coefficients – a fixed strategy based on the selection of a fixed band-limiting frequency, and an adaptive strategy depending on the spectral energy distribution of the signal. The different parameters for the two strategies are empirically selected based on extensive study of a wide variety of ECG data chosen from different databases. The significant coefficients are encoded using a unique adaptive bit assignment scheme to optimise the bit usage. The bit assignment map created to store the bit allocation information is run-length encoded to eliminate further redundancies. For the MIT-BIH arrhythmia database, the proposed technique achieves an average compression ratio of 14.67 for the fixed strategy and 16.58 for the adaptive strategy with excellent reconstruction quality, which is quite comparable to the other reported techniques. The simplicity and low infrastructural cost of the algorithm make it suitable for implementation on an embedded platform to be used in mobile devices.
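A minimal sketch of the fixed strategy, assuming NumPy and the MIT-BIH sampling rate of 360 Hz (the cut-off value here is illustrative, not the empirically selected one from the paper). NumPy's rfft stores the cosine and sine amplitudes as the real and imaginary parts of each bin (up to sign), so the real-basis bookkeeping reduces to two real arrays.

```python
import numpy as np

def dft_band_limit(x, fs, f_cut):
    """Fixed strategy sketch: zero all DFT coefficients above f_cut,
    keep the in-band ones, and reconstruct by inverse transform."""
    c = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    c[f > f_cut] = 0.0                      # discard out-of-band coefficients
    kept = int(np.count_nonzero(f <= f_cut))
    return np.fft.irfft(c, len(x)), kept

fs = 360                                    # MIT-BIH sampling rate
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)
x_hat, kept = dft_band_limit(x, fs, f_cut=30.0)
print("kept", kept, "of", len(x) // 2 + 1, "coefficients")
```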

17 citations

Journal ArticleDOI
TL;DR: This paper presents a hybrid technique for compressing ECG signals that is based on the DWT and exploits the correlation between signal samples; it achieves higher compression ratios and lower PRD than other wavelet-transform techniques.
Abstract: This paper presents a hybrid technique for the compression of ECG signals based on DWT and exploiting the correlation between signal samples. It incorporates Discrete Wavelet Transform (DWT), Differential Pulse Code Modulation (DPCM), and run-length coding techniques for the compression of different parts of the signal; lossless compression is adopted in clinically relevant parts and lossy compression is used in those parts that are not clinically relevant. The proposed compression algorithm begins by segmenting the ECG signal into its main components (P-waves, QRS-complexes, T-waves, U-waves and the isoelectric waves). The resulting waves are grouped into Region of Interest (RoI) and Non Region of Interest (NonRoI) parts, and lossless and lossy compression schemes are applied to the RoI and NonRoI parts respectively. Ideally we would like to compress the signal losslessly, but in many applications this is not an option. Thus, given a fixed bit budget, it makes sense to spend more bits to represent those parts of the signal that belong to a specific RoI and, thus, reconstruct them with higher fidelity, while allowing other parts to suffer larger distortion. For this purpose, the correlation between the successive samples of the RoI part is utilized by adopting a DPCM approach. The NonRoI part, however, is compressed using DWT, thresholding, and coding techniques. The wavelet transform is used for concentrating the signal energy into a small number of transform coefficients. Compression is then achieved by selecting a subset of the most relevant coefficients, which are afterwards efficiently coded. Illustrative examples are given to demonstrate thresholding based on an energy packing efficiency strategy, coding of DWT coefficients, and data packetizing. The performance of the proposed algorithm is tested in terms of the compression ratio and the PRD distortion metric for the compression of 10 seconds of data extracted from records 100 and 117 of the MIT-BIH database. The obtained results reveal that the proposed technique achieves higher compression ratios and lower PRD compared to the other wavelet-transform techniques. The principal advantages of the proposed approach are: 1) the deployment of different compression schemes for different ECG parts to reduce the correlation between consecutive signal samples; and 2) high compression ratios with acceptable reconstructed signal quality compared to recently published results.
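A compact sketch of the two paths, assuming the PyWavelets package and illustrative parameter values (the wavelet, decomposition level, and fraction of retained coefficients are assumptions, not the paper's settings):

```python
# pip install PyWavelets
import numpy as np
import pywt

def nonroi_lossy(x, wavelet="db4", level=4, keep_frac=0.1):
    """NonRoI path: DWT, retain only the largest coefficients
    (an energy-packing style threshold), and invert."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    thr = np.quantile(np.abs(np.concatenate(coeffs)), 1.0 - keep_frac)
    coeffs = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def roi_dpcm(x):
    """RoI path: first-order DPCM residual x[n] - x[n-1]; the small
    residuals are what run-length/entropy coding then compresses."""
    return np.diff(x, prepend=x[0])

t = np.arange(720) / 360.0
x = np.sin(2 * np.pi * 1.3 * t)
x_hat = nonroi_lossy(x)
print("max reconstruction error:", float(np.max(np.abs(x - x_hat))))
```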

17 citations


Cites result from "ECG data compression techniques-a u..."

  • ...In [11], Jalaleddine showed that for ECG signals, a first order linear predictor (DPCM) yields better results compared to LP models of higher orders....

    [...]
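The trade-off behind that result (prediction gain versus coefficient overhead and sensitivity) is easy to measure on one's own records. A least-squares sketch for comparing residual energy across predictor orders (illustrative only, not Jalaleddine's experiment):

```python
import numpy as np

def lp_residual_energy(x, order):
    """Fit an order-p linear predictor by least squares and return
    the mean squared prediction residual."""
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.mean((y - X @ a) ** 2))

t = np.arange(3600) / 360.0
x = np.sin(2 * np.pi * 1.1 * t) + 0.05 * np.random.randn(t.size)
for p in (1, 2, 4):
    print("order", p, "residual energy", lp_residual_energy(x, p))
```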

Journal ArticleDOI
TL;DR: A new compression scheme for single-channel ECG is presented: each ECG cycle is delineated, multirate processing normalizes the varying beat periods, and amplitude normalization follows.
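A sketch of the normalization step, assuming SciPy and an already-delineated list of beats (beat detection itself is outside this snippet):

```python
import numpy as np
from scipy.signal import resample

def normalize_beats(beats, target_len=256):
    """Resample each variable-length beat to a common length
    (multirate period normalization), then scale each beat to unit
    peak (amplitude normalization)."""
    out = []
    for b in beats:
        r = resample(np.asarray(b, dtype=float), target_len)
        out.append(r / (np.max(np.abs(r)) or 1.0))
    return np.vstack(out)
```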

17 citations

Journal ArticleDOI
TL;DR: This literature review aims at providing the research community a summary of lossless compression methods developed specifically for ECGs and compares existing methods based on the depth and breadth of the databases in which they were tested, the specific compression algorithms used, and how their performances were evaluated.

17 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
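The limiting process described here has a standard one-line summary (a textbook statement, not a quotation from the paper): quantizing a density p(x) into cells of width Δ gives a discrete entropy that separates into the differential entropy plus a term set by the cell size,

```latex
H_\Delta \;=\; -\sum_i p(x_i)\,\Delta\,\log_2\!\bigl(p(x_i)\,\Delta\bigr)
\;\approx\; h(X) - \log_2 \Delta,
\qquad
h(X) \;=\; -\int p(x)\,\log_2 p(x)\,dx,
```

so statements for the continuous case are made in terms of h(X), with the cell-size term absorbed into the quantizer resolution.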

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
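A minimal sketch of Huffman's construction (illustrative; symbol probabilities here come from empirical counts): repeatedly merge the two least-probable subtrees, prefixing '0' and '1' to their codewords.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a minimum-redundancy code; returns {symbol: bitstring}."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate one-symbol alphabet
        return {s: "0" for s in freq}
    # Heap entries: (weight, tie_breaker, partial code table).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)     # two least-probable subtrees
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, n, merged))
        n += 1
    return heap[0][2]

print(huffman_code("aaaabbbccd"))           # frequent symbols get short codes
```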

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
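The autocorrelation (stationary) method the abstract refers to reduces to the Levinson-Durbin recursion, which also produces the reflection (PARCOR) coefficients mentioned as the quantities to quantize for transmission. A sketch, assuming NumPy:

```python
import numpy as np

def autocorr(x, order):
    r = np.correlate(x, x, mode="full")
    mid = len(x) - 1
    return r[mid : mid + order + 1]

def levinson_durbin(r, order):
    """Solve the all-pole normal equations; returns predictor
    coefficients a (with a[0] = 1), reflection (PARCOR)
    coefficients k, and the final prediction error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k[i - 1] = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k[i - 1] * a_prev[i - j]
        a[i] = k[i - 1]
        err *= 1.0 - k[i - 1] ** 2
    return a, k, err

rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.05 * np.arange(400)) + 0.01 * rng.standard_normal(400)
a, parcor, err = levinson_durbin(autocorr(x, 4), 4)
print("reflection coefficients:", np.round(parcor, 3))
```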

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
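A toy floating-point sketch of the interval-narrowing idea (illustrative only; production coders use integer renormalization to avoid the precision loss this version suffers on long messages):

```python
from collections import Counter

def arith_encode(msg):
    """Narrow [low, high) by each symbol's probability interval and
    return one number inside the final interval, plus the model."""
    freq = Counter(msg)
    cum, lo = {}, 0.0
    for s in sorted(freq):
        p = freq[s] / len(msg)
        cum[s] = (lo, lo + p)
        lo += p
    low, high = 0.0, 1.0
    for s in msg:
        span = high - low
        c_lo, c_hi = cum[s]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2, cum

def arith_decode(code, cum, n):
    out = []
    for _ in range(n):
        for s, (c_lo, c_hi) in cum.items():
            if c_lo <= code < c_hi:
                out.append(s)
                code = (code - c_lo) / (c_hi - c_lo)  # rescale and repeat
                break
    return "".join(out)

msg = "abracadabra"
code, model = arith_encode(msg)
assert arith_decode(code, model, len(msg)) == msg
```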

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
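One of the performance criteria listed (data compression via variance distribution) can be probed with a small energy-compaction test, here comparing the DCT-II against the DFT on a correlated test signal (assumes SciPy; the signal and retained fraction are arbitrary choices):

```python
import numpy as np
from scipy.fft import dct, fft

def top_coeff_energy(c, frac=0.05):
    """Fraction of total energy captured by the largest `frac` of
    coefficients (both transforms scaled to be orthonormal)."""
    e = np.sort(np.abs(c) ** 2)[::-1]
    k = max(1, int(frac * e.size))
    return e[:k].sum() / e.sum()

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(1024))    # random walk: highly correlated
d = top_coeff_energy(dct(x, norm="ortho"))
f = top_coeff_energy(fft(x, norm="ortho"))
print(f"DCT: {d:.3f}   DFT: {f:.3f}")       # DCT typically compacts more energy
```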

928 citations