Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
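As a concrete illustration of the tolerance-comparison family surveyed above, here is a minimal sketch of the Turning Point algorithm's 2:1 reduction rule; the function name and structure are illustrative, not taken from the paper:

```python
import numpy as np

def turning_point(x):
    """2:1 Turning Point compression: of each incoming pair of samples,
    keep the one that preserves a local turning point (slope sign change)."""
    out = [x[0]]
    x0 = x[0]
    for i in range(1, len(x) - 1, 2):
        x1, x2 = x[i], x[i + 1]
        s1 = np.sign(x1 - x0)
        s2 = np.sign(x2 - x1)
        # A turning point occurred at x1 if the slope changed sign.
        saved = x1 if s1 * s2 < 0 else x2
        out.append(saved)
        x0 = saved
    return np.asarray(out)

ecg = np.array([0, 1, 2, 1, 0, 1, 2, 3, 2, 1, 0], float)
print(turning_point(ecg))  # roughly half as many samples, peaks preserved
```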
Citations
Journal ArticleDOI
TL;DR: In this paper, a lossy coding technique based on the algebraic code excited linear prediction (ACELP) paradigm, widely used for speech signal coding, was proposed for surface EMG signals.

25 citations

Journal ArticleDOI
01 Jan 2013
TL;DR: It was found that the proposed methodology performs better than earlier reported results in terms of reconstruction error, number of iterations (NOI), and computation time (CPU time).
Abstract: In this paper, an efficient iterative algorithm is proposed for the design of a multi-channel nearly-perfect-reconstruction non-uniform filter bank. The method employs the constrained equiripple FIR technique to design the prototype filter, with the novelty of exploiting a new perfect-reconstruction condition of the non-uniform filter bank instead of using complex objective functions. In the proposed algorithm, the passband edge frequency (ω_p) is optimized using a linear optimization technique such that the filter's response at the quadrature frequency is approximately equal to 0.707. Several design examples are included to illustrate the efficacy of this methodology for designing non-uniform filter banks (NUFBs). It was found that the proposed methodology performs better than earlier reported results in terms of reconstruction error (RE), number of iterations (NOI), and computation time (CPU time). The proposed algorithm is very simple, linear in nature, and easy to implement.
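As a sketch of the tuning idea: the paper uses a constrained equiripple design and linear optimization of ω_p; below, a windowed FIR prototype and a simple bisection stand in for both, so this is an assumption-laden illustration rather than the authors' algorithm:

```python
import numpy as np
from scipy.signal import firwin, freqz

def tune_prototype(numtaps=41, target=1 / np.sqrt(2), tol=1e-4):
    """Bisect the passband edge w_p (Nyquist = 1) of a windowed FIR
    prototype until its magnitude at the quadrature frequency pi/2
    rad/sample is ~0.707."""
    lo, hi = 0.1, 0.9
    for _ in range(60):
        wp = 0.5 * (lo + hi)
        h = firwin(numtaps, wp)            # lowpass prototype filter
        _, H = freqz(h, worN=[np.pi / 2])  # response at the quadrature frequency
        mag = abs(H[0])
        if abs(mag - target) < tol:
            break
        if mag < target:   # edge too low -> too little gain at pi/2
            lo = wp
        else:
            hi = wp
    return h, wp

h, wp = tune_prototype()
print(f"passband edge ~ {wp:.4f} (Nyquist units)")
```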

25 citations

Journal ArticleDOI
TL;DR: A quality-controlled reconstruction of the ECG signal is presented, based on the formulation of 2D Discrete Cosine Transform (DCT) coefficients and an iterative JPEG2000 encoding scheme for electrocardiogram (ECG) data compression.

25 citations

Journal ArticleDOI
TL;DR: This paper elaborates on the relationship between the values of the fitness function and the approximation capabilities of the segments of the signal and shows that these two descriptors are highly related.
Abstract: This paper is concerned with the development of a segmentation technique for electrocardiogram (ECG) signals. Such segmentation is aimed at lossy signal compression in which each segment can be captured by a simple geometric construct such as a linear or quadratic function. The crux of the proposed construct lies in the determination of the optimal segments of data over which the ECG signal exhibits the highest possible monotonicity (or lowest variability). In this sense, the proposed approach generalizes a fundamental and commonly encountered problem of function (data) linearization. The segments are genetically developed using a standard technique of genetic algorithms (GAs). The two fundamental GA constructs, namely the topology of a chromosome and the fitness function governing the optimization process, are discussed in detail. The chromosome, coded as a series of floating-point numbers, contains the endpoints of the segments (segmentation points). The fitness function to be maximized quantifies the level of monotonicity of the ECG data within the segments and takes into consideration differences between the extreme values (minimum and maximum) of its derivatives. As a result of the genetic optimization, we build segments of ECG signals encompassing monotonic (increasing or decreasing) regions of the signal that exhibit a minimal level of variability. A series of experiments dealing with several classes of ECG signals (namely, normal, left bundle branch block beat, and right bundle branch block beat) demonstrates the effectiveness of the approach and shows the specificity of the linear segments of data. Furthermore, we elaborate on the relationship between the values of the fitness function and the approximation capabilities (quantified by a sum of squared errors between the local model and the data) of the segments of the signal, and show that these two descriptors are highly related.
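One plausible reading of the fitness function described above, with the spread between the extreme derivative values as the per-segment penalty (the exact formula is in the paper; this sketch is an assumption):

```python
import numpy as np

def fitness(signal, cut_points):
    """Score a candidate segmentation (the floating-point chromosome is
    the list of segmentation points).  Each segment is penalized by the
    spread between the extreme values of its derivative, so monotonic,
    low-variability segments score best; the GA maximizes the total."""
    edges = [0] + sorted(int(round(c)) for c in cut_points) + [len(signal) - 1]
    score = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        if b - a < 2:
            continue
        d = np.diff(signal[a:b + 1])
        score -= d.max() - d.min()
    return score

sig = np.sin(np.linspace(0, 2 * np.pi, 200))
print(fitness(sig, [50.0, 100.0, 150.0]))  # cuts near the extrema score well
```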

25 citations


Cites background from "ECG data compression techniques-a unified approach"

  • ...The diversity of techniques of signal compression both at the fundamental and applied end [4], [10], [11] is enormous....


Journal ArticleDOI
TL;DR: In this article, the authors first estimate the rate-distortion bound, which is the theoretical limit in the compression of ECG data, and then present ECG data-compression schemes based on a codebook quantizer and finite-state VQ (FSVQ), which are suitable for coding a correlated signal.
Abstract: The authors first estimate the ECG rate-distortion bound, which is the theoretical limit in the compression of ECG data. They then present ECG data-compression schemes based on a codebook quantizer and finite-state vector quantization (FSVQ), which are suitable for coding a correlated signal. Finally, the authors' modified FSVQ-based scheme is presented.
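The FSVQ scheme itself is specific to the paper; as background, here is a minimal sketch of ordinary codebook VQ (k-means training plus nearest-codeword encoding), the building block on top of which a finite-state quantizer adds a next-state function:

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Plain k-means codebook training.  The paper's FSVQ selects a
    sub-codebook per state; ordinary VQ, shown here, is the core step."""
    vectors = np.asarray(vectors, float)
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].copy()
    for _ in range(iters):
        # Nearest codeword for every training vector.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(axis=1)
        for j in range(k):
            if (idx == j).any():
                codebook[j] = vectors[idx == j].mean(axis=0)
    return codebook

def encode(vectors, codebook):
    """Transmit only the index of the nearest codeword per vector."""
    vectors = np.asarray(vectors, float)
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)
```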

24 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
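A standard way to make the limiting process concrete (an illustration, not an excerpt from the paper): discretize a density into bins of width Δ and watch the discrete entropy split into a differential-entropy term plus a divergent term.

```latex
% Partition the line into bins of width \Delta; bin i then has
% probability p_i \approx p(x_i)\,\Delta, and the discrete entropy is
\[
H_\Delta = -\sum_i p_i \log p_i
\;\approx\; -\sum_i p(x_i)\,\Delta \log p(x_i) \;-\; \log\Delta
\;\xrightarrow[\;\Delta \to 0\;]{}\; h(X) - \log\Delta,
\qquad h(X) = -\int p(x)\log p(x)\,dx .
\]
% The divergent -\log\Delta term is one of the "new effects" of the
% continuous case: only entropy differences (e.g., mutual information)
% survive the limit unchanged.
```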

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
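The construction behind a minimum-redundancy code is the familiar Huffman merge of the two least-probable nodes; a compact, illustrative implementation:

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) code for the symbols in
    `message` by repeatedly merging the two least-frequent subtrees."""
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    tie = len(heap)  # tiebreaker so the heap never compares dicts
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        # Prefix the two subtrees' codewords with 0 and 1, then merge.
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

print(huffman_code("abracadabra"))  # frequent symbols get shorter codewords
```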

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
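A minimal sketch of the autocorrelation method described above, solving the Toeplitz normal equations R a = r for the all-pole predictor (function names and the usage example are illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc(x, order):
    """Autocorrelation-method linear prediction: solve the Toeplitz
    normal equations R a = r for the all-pole predictor coefficients."""
    x = np.asarray(x, float)
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    return solve_toeplitz(r[:order], r[1 : order + 1])

def residual(x, a):
    """Prediction error -- the low-energy part a DPCM-style compressor
    would quantize and transmit."""
    x = np.asarray(x, float)
    pred = np.zeros_like(x)
    for n in range(len(a), len(x)):
        pred[n] = sum(a[k] * x[n - 1 - k] for k in range(len(a)))
    return x - pred

x = np.sin(0.2 * np.arange(200))
a = lpc(x, order=4)
print(np.std(residual(x, a)) / np.std(x))  # far below 1: the signal is predictable
```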

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
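To show the model/coder separation the abstract emphasizes, here is a toy floating-point arithmetic encoder: the probability model is just a dict handed to the coder. Real coders use integer renormalization to avoid precision loss, so this is a demonstration only:

```python
def arithmetic_encode(message, probs):
    """Toy arithmetic coder: narrow [low, high) by each symbol's slice
    of the cumulative distribution.  Any number in the final interval
    identifies the whole message."""
    # Cumulative intervals per symbol, e.g. {'a': (0.0, 0.6), ...}
    cum, lo = {}, 0.0
    for s, p in probs.items():
        cum[s] = (lo, lo + p)
        lo += p
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        c_lo, c_hi = cum[s]
        low, high = low + span * c_lo, low + span * c_hi
    return low, high

print(arithmetic_encode("aab", {"a": 0.6, "b": 0.4}))  # interval width 0.6*0.6*0.4
```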

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
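One of the performance criteria above, data compression via energy compaction, is easy to demonstrate with the discrete cosine transform (an illustrative sketch, not from the paper):

```python
import numpy as np
from scipy.fft import dct, idct

def dct_compress(x, keep=0.1):
    """Keep only the largest `keep` fraction of DCT coefficients and
    reconstruct; smooth signals survive almost unchanged."""
    c = dct(x, norm="ortho")
    k = max(1, int(keep * len(c)))
    thresh = np.sort(np.abs(c))[-k]
    c_kept = np.where(np.abs(c) >= thresh, c, 0.0)
    return idct(c_kept, norm="ortho")

t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)
err = np.sqrt(np.mean((x - dct_compress(x)) ** 2))
print(f"rms error keeping 10% of coefficients: {err:.2e}")
```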

928 citations