Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
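
The direct methods surveyed share a common core: predict the next sample, quantize the prediction error against a tolerance, and encode the result. The following is a minimal sketch of that idea in the DPCM style; the first-order predictor and the step size q are assumptions chosen for illustration, not parameters taken from any of the surveyed algorithms.

```python
# A minimal sketch of first-order DPCM with a uniform residual quantizer.
# Illustrative only; the step size `q` and the random stand-in signal are assumptions.
import numpy as np

def dpcm_encode(x, q=8):
    """Quantize the difference between each sample and the previous
    reconstructed sample (predictor = last reconstructed value)."""
    codes = np.empty(len(x), dtype=np.int64)
    prev = 0.0
    for i, s in enumerate(x):
        codes[i] = int(round((s - prev) / q))   # quantized prediction error
        prev = prev + codes[i] * q              # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, q=8):
    """Invert the encoder by accumulating dequantized residuals."""
    out = np.empty(len(codes))
    prev = 0.0
    for i, c in enumerate(codes):
        prev = prev + c * q
        out[i] = prev
    return out

# The small-magnitude residual codes are far more compressible
# (e.g., by entropy coding) than the raw samples.
ecg = np.cumsum(np.random.randn(1000)) * 5      # stand-in for an ECG trace
codes = dpcm_encode(ecg)
rec = dpcm_decode(codes)
print("max reconstruction error:", np.max(np.abs(ecg - rec)))  # bounded by q/2
```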
Citations
Journal ArticleDOI
TL;DR: This paper proposes on-line ECG lossless compression for data on a finite alphabet, which not only gives better compression ratios than traditional lossless data compression algorithms but also uses less computational space.
Abstract: An antidictionary is particularly useful for data compression, and on-line electrocardiogram (ECG) lossless compression algorithms using antidictionaries have been proposed. They work in real time with constant memory and give better compression ratios than traditional lossless data compression algorithms, but they only handle ECG data on a binary alphabet. This paper proposes on-line ECG lossless compression for data on a finite alphabet. The proposed algorithm not only gives better compression ratios than those algorithms but also uses less computational space. Moreover, the proposed algorithm works in real time. Its effectiveness is demonstrated by simulation results.

11 citations

Journal ArticleDOI
TL;DR: A cubic Hermitian vector-based technique for online compression of asynchronously sampled electrocardiogram signals achieves compression ratios of up to 90%, with percentage root-mean-square difference values as low as 0.97.
Abstract: Asynchronous level crossing sampling analog-to-digital converters (ADCs) are known to be more energy efficient and produce fewer samples than their equidistantly sampling counterparts. However, as the required threshold voltage is lowered, the number of samples and, in turn, the data rate and the energy consumed by the overall system increase. In this paper, we present a cubic Hermitian vector-based technique for online compression of asynchronously sampled electrocardiogram signals. The proposed method is a computationally efficient data compression technique with complexity $O(n)$, and is thus well suited for asynchronous ADCs. Our algorithm requires no data buffering, maintaining the energy advantage of asynchronous ADCs. The proposed method achieves a compression ratio of up to 90% with percentage root-mean-square difference values as low as 0.97. The algorithm preserves the superior feature-to-feature timing accuracy of asynchronously sampled signals. These advantages are achieved in a computationally efficient manner since algorithm boundary parameters for the signals are extracted a priori.
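
The paper above reconstructs the signal from cubic Hermite segments between retained samples. As a rough illustration under assumed endpoint-slope estimates (the paper's actual vector-based fitting procedure is not reproduced here), a segment can be evaluated with the standard Hermite basis:

```python
# A minimal sketch of cubic Hermite reconstruction between two retained samples,
# assuming the compressor keeps (time, value, slope) at segment endpoints.
# The finite-difference slope estimates and test signal below are assumptions.
import numpy as np

def hermite_segment(t, t0, t1, y0, y1, m0, m1):
    """Evaluate the cubic Hermite interpolant on [t0, t1] at times t."""
    h = t1 - t0
    s = (t - t0) / h                       # normalized position in [0, 1]
    h00 = 2*s**3 - 3*s**2 + 1              # standard Hermite basis functions
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*y0 + h10*h*m0 + h01*y1 + h11*h*m1

# Approximate a densely sampled stretch of signal by its two endpoints
# plus finite-difference slope estimates.
t = np.linspace(0.0, 0.01, 11)             # 10 ms window
x = np.sin(2*np.pi*25*t)                   # stand-in for a signal segment
m0 = (x[1] - x[0]) / (t[1] - t[0])         # assumed slope estimates
m1 = (x[-1] - x[-2]) / (t[-1] - t[-2])
x_hat = hermite_segment(t, t[0], t[-1], x[0], x[-1], m0, m1)
print("max segment error:", np.max(np.abs(x - x_hat)))
```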

11 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...These developments have seen different kinds of methods; these can be broadly classified into direct and transformation data compression methods [6]....

  • ...We present Monte–Carlo simulation results for two widely used performance metrics, namely the percentage root-mean-square difference (PRD) [6], and CR, with relation to our system variables, ΔV in bits/mV, log2(τ), log2(κ), and log2(η).... (both metrics are sketched in code after this list)

  • ...We see that the PRD has a higher variance for high values of κL and ηL....

  • ...6–9 show the relationship of our system variables κ, τ, and η with respect to PRD and compression at ΔV level of 5 and 7 bits....

  • ...The top plot in the figure shows the compressed samples, with ΔV resolution of 5 bits/mV, τL = 6, ηL = 5, and κL = 5, reconstructed signal, and error signal with resulting performance of PRD = 2.6 and CR = 79....
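For reference, the two metrics quoted above (PRD and CR) can be computed as in the following sketch. PRD definitions vary between papers (some subtract the signal mean before normalizing), and CR is expressed here as a percentage reduction in data volume to match the "up to 90%" usage above; both choices are assumptions.

```python
# A minimal sketch of the PRD and CR performance metrics named in the excerpts above.
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference (mean not removed; one common variant)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compression_ratio_percent(original_bits, compressed_bits):
    """Percentage reduction in stored bits."""
    return 100.0 * (1.0 - compressed_bits / original_bits)

# Example with a slightly perturbed signal and made-up bit counts.
x = np.sin(np.linspace(0, 10, 1000))
x_rec = x + 0.01 * np.random.randn(x.size)
print("PRD =", prd(x, x_rec))
print("CR  =", compression_ratio_percent(12 * 1000, 12 * 210), "%")
```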

Proceedings ArticleDOI
07 Aug 2002
TL;DR: This novel distortion measure, which corresponds to the area enclosed between the original and the reconstructed signal, is also used as a maximum tolerance comparison criterion in a new non-uniform sampling method, designated Maximum Enclosed Area algorithm (MEA), which is proposed here for direct ECG signal compression.
Abstract: A new measure of distortion, named the Percentage Area Difference (PAD), is proposed as an alternative to existing ways of evaluating the performance of Electrocardiogram (ECG) signal lossy compression. This novel distortion measure, which corresponds to the area enclosed between the original and the reconstructed signal, is also used as a maximum tolerance comparison criterion in a new non-uniform sampling method, designated Maximum Enclosed Area algorithm (MEA), which is proposed here for direct ECG signal compression.
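
Since PAD is defined as the area enclosed between the original and reconstructed signals, it can be approximated by trapezoidal integration of their absolute difference. The normalization below (relative to the area under the original signal) is an assumption for illustration; the paper's exact formula is not reproduced here.

```python
# A minimal sketch of an area-based distortion measure in the spirit of the
# Percentage Area Difference (PAD) described above. The normalization is assumed.
import numpy as np

def pad_percent(t, original, reconstructed):
    """Area enclosed between the two curves, as a percentage of the area
    under the original curve (trapezoidal integration)."""
    enclosed = np.trapz(np.abs(np.asarray(original) - np.asarray(reconstructed)), t)
    reference = np.trapz(np.abs(np.asarray(original)), t)
    return 100.0 * enclosed / reference

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 5 * t)
x_rec = np.interp(t, t[::25], x[::25])     # crude piecewise-linear reconstruction
print("PAD ≈", pad_percent(t, x, x_rec), "%")
```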

10 citations

01 Jan 2009
TL;DR: Skewness of all the necessary and possible combinations is calculated, and it is found that the combination of a move-to-front coder with the Huffman coder gives a better result when the input data consists of numbers rather than text.
Abstract: The field of data compression has developed many algorithms so far, and the search for better compression schemes continues. The Burrows-Wheeler transform (BWT) has been a crucial tool for data compression. Normally, the BWT attains maximum efficiency when the input is in text format. In the present paper, some real possible difficulties are explored when the input data is in text format. Skewness of all the necessary and possible combinations is calculated, and it is found that the combination of a move-to-front coder with the Huffman coder gives a better result when the input data consists of numbers rather than text. Contrary to the large amounts of input data used by existing BWT-based algorithms, all the data used in this work are of short duration; a compression ratio of 2.7247 is achieved when the quantity of input is compared with that of the output. For this reason, this technique may be considered quite useful for data transfer in an ECG event recorder.
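
The pipeline described above chains the BWT, a move-to-front coder, and a Huffman coder. A minimal sketch of the first two stages follows; the naive rotation-sorting BWT and the 0x00 end-of-block sentinel are simplifications, not the construction used in the paper.

```python
# A minimal sketch of the BWT and move-to-front stages of a BWT + MTF + Huffman
# pipeline like the one described above. Illustrative only.
def bwt(data: bytes) -> bytes:
    """Burrows-Wheeler transform via sorted rotations (sentinel-terminated)."""
    s = data + b"\x00"                        # assume 0x00 does not occur in the data
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)

def move_to_front(data: bytes) -> list[int]:
    """Replace each byte by its index in a recency list, then move it to the front."""
    alphabet = list(range(256))
    out = []
    for b in data:
        idx = alphabet.index(b)
        out.append(idx)
        alphabet.insert(0, alphabet.pop(idx))
    return out

# Runs produced by the BWT become many small MTF indices,
# which a Huffman coder can then represent compactly.
sample = b"101,102,101,103,101,102,101"       # number-like ECG event data
print(move_to_front(bwt(sample))[:20])
```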

10 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Many papers have come in these years which are collectively provided in [1]....


Proceedings ArticleDOI
13 Aug 2016
TL;DR: Squish, described in this paper, uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieves a near-entropy compression rate for relational data.
Abstract: Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets.

10 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
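
The limiting process described here is the one that yields differential entropy: partition the real line into cells of width $\Delta$, compute the discrete entropy, and let $\Delta \to 0$. In modern notation (a standard result, not a quotation from the paper):

$$H_\Delta = -\sum_i p(x_i)\,\Delta \log\bigl(p(x_i)\,\Delta\bigr) \;\approx\; -\int p(x)\log p(x)\,dx \;-\; \log \Delta,$$

so the $\Delta$-independent part that survives the limit is the differential entropy $h(X) = -\int p(x)\log p(x)\,dx$, while the diverging $-\log\Delta$ term reflects the infinite precision of a truly continuous value.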

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
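
The construction behind such a minimum-redundancy code is the familiar Huffman procedure: repeatedly merge the two least-probable subtrees. A minimal sketch follows; tie-breaking and the single-symbol edge case are simplified.

```python
# A minimal sketch of building a minimum-redundancy (Huffman) code with a binary heap.
import heapq
from collections import Counter

def huffman_code(message: str) -> dict[str, str]:
    """Return a prefix code mapping each symbol to a bit string."""
    heap = [[weight, count, {symbol: ""}]
            for count, (symbol, weight) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, code1 = heapq.heappop(heap)     # two least-probable subtrees
        w2, c2, code2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in code1.items()}
        merged.update({s: "1" + b for s, b in code2.items()})
        heapq.heappush(heap, [w1 + w2, c2, merged])
    return heap[0][2]

code = huffman_code("abracadabra")
bits = "".join(code[s] for s in "abracadabra")
print(code, len(bits), "bits")                  # far fewer bits than 8 per character
```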

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
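
The all-pole, stationary (autocorrelation) formulation mentioned here reduces to solving a Toeplitz system of normal equations for the predictor coefficients. A minimal sketch follows; the model order p = 4 and the test signal are assumptions, and the classical Levinson-Durbin recursion is replaced by a generic linear solve for brevity.

```python
# A minimal sketch of all-pole linear prediction via the autocorrelation method:
# solve the normal equations R a = r for the predictor coefficients.
import numpy as np

def lpc(signal, p=4):
    """Return coefficients a[1..p] predicting s[n] from its p past values."""
    x = np.asarray(signal, dtype=float)
    r = np.array([np.dot(x[:len(x)-k], x[k:]) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    return np.linalg.solve(R, r[1:])

# The prediction residual of a resonant signal is much smaller than the signal
# itself, which is what makes predictive compression work.
n = np.arange(2000)
x = np.sin(0.1 * n) + 0.01 * np.random.randn(n.size)
a = lpc(x, p=4)
pred = sum(a[k] * x[4 - (k + 1):-(k + 1)] for k in range(4))
residual = x[4:] - pred
print("signal power:", np.var(x), " residual power:", np.var(residual))
```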

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
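
Arithmetic coding represents a whole message as a single number inside a nested sequence of probability intervals. The sketch below shows only that interval-narrowing idea, with exact fractions and a fixed, assumed symbol model; a practical coder would emit bits incrementally with renormalization, as described in the paper.

```python
# A minimal sketch of the interval-narrowing idea behind arithmetic coding.
# The symbol model is an assumption; whole-message encoding is illustrative only.
from fractions import Fraction

MODEL = {"a": Fraction(5, 8), "b": Fraction(2, 8), "!": Fraction(1, 8)}

def cumulative(model):
    lows, low = {}, Fraction(0)
    for sym, p in model.items():
        lows[sym] = low
        low += p
    return lows

def encode(message):
    """Narrow [low, low+width) once per symbol; any number inside identifies the message."""
    lows = cumulative(MODEL)
    low, width = Fraction(0), Fraction(1)
    for sym in message:
        low += width * lows[sym]
        width *= MODEL[sym]
    return low, width

def decode(low, n_symbols):
    lows = cumulative(MODEL)
    out, value = [], low
    for _ in range(n_symbols):
        for sym, p in MODEL.items():
            if lows[sym] <= value < lows[sym] + p:
                out.append(sym)
                value = (value - lows[sym]) / p   # rescale into the chosen sub-interval
                break
    return "".join(out)

low, width = encode("abaaab!")
print(decode(low, 7))                             # -> "abaaab!"
```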

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
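
The data-compression value of these transforms comes from energy compaction: for correlated signals, a few transform coefficients carry most of the energy. The sketch below illustrates this with an orthonormal DCT-II built directly from its definition; the test signal and the 99% energy threshold are arbitrary assumptions.

```python
# A minimal sketch of the energy-compaction property that motivates transform compression.
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def coeffs_for_energy(x, fraction=0.99):
    """How many transform coefficients are needed to keep `fraction` of the energy."""
    C = dct2_matrix(len(x))
    c = C @ x
    energy = np.sort(c ** 2)[::-1]
    return int(np.searchsorted(np.cumsum(energy), fraction * energy.sum()) + 1)

n = np.arange(256)
x = np.sin(0.07 * n) + 0.5 * np.sin(0.21 * n)      # smooth, correlated test signal
print(coeffs_for_energy(x), "of", len(x), "DCT coefficients hold 99% of the energy")
```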

928 citations