Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
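Of the direct methods listed above, the Turning-Point algorithm is the simplest to sketch: it halves the sampling rate by keeping, from each incoming pair of samples, whichever one preserves a slope-sign change relative to the last saved sample. A minimal Python illustration (function names are ours, not from the paper):

```python
def sign(v):
    """Return -1, 0, or +1 depending on the sign of v."""
    return (v > 0) - (v < 0)

def turning_point(samples):
    """2:1 Turning-Point compression: from each incoming pair of
    samples, retain the one that preserves slope-sign changes
    (local peaks and valleys) relative to the last saved sample."""
    if not samples:
        return []
    saved = [samples[0]]
    x0 = samples[0]
    for i in range(1, len(samples) - 1, 2):
        x1, x2 = samples[i], samples[i + 1]
        if sign(x1 - x0) * sign(x2 - x1) < 0:
            x0 = x1          # x1 is a turning point: keep it
        else:
            x0 = x2          # otherwise keep the second sample
        saved.append(x0)
    return saved
```

On `[0, 1, 2, 3, 2, 1, 0]` this keeps `[0, 2, 3, 0]`, retaining the peak while roughly halving the data.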
Citations
01 Jan 2011
TL;DR: In this paper, a generalized method to generate pulse width modulation signals in multilevel inverters that have an odd number of levels is proposed, which is based on a freely selectable modulation method of an (N+1)/2 level imaginary inverter.
Abstract: Nowadays, multilevel voltage source inverters offer several advantages compared to conventional two-level inverters. In these inverters, the staircase output waveform is produced by synthesizing several levels of dc voltages. This waveform has lower total harmonic distortion, so it more closely approaches the desired sinusoidal waveform. Higher output voltage and lower stress on the power switches are other advantages of these inverters. However, the common-mode voltage problem found in conventional two-level inverters remains a major issue in multilevel inverters, as it leads to motor bearing failures; methods to eliminate these voltages are therefore necessary. This paper proposes a generalized method to generate pulse width modulation signals in multilevel inverters that have an odd number of levels. The main idea of this method, for an n-level inverter, is based on a freely selectable modulation method of an (N+1)/2 level imaginary inverter. This method, which eliminates the common-mode voltages of the n-level inverter, can be extended to higher levels. Index Terms— Common mode voltage, Phase voltage, Line voltage, N level inverter.
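The abstract's claim that a staircase waveform with more levels has lower total harmonic distortion can be checked numerically. The sketch below is our own illustration, not the paper's method: it quantizes one sine period to n levels and estimates THD from DFT magnitudes.

```python
import math

def staircase(n_levels, samples=1024):
    """Quantize one sine period to n_levels equally spaced levels,
    mimicking a multilevel-inverter phase-voltage staircase."""
    step = 2.0 / (n_levels - 1)
    return [round(math.sin(2 * math.pi * i / samples) / step) * step
            for i in range(samples)]

def thd(signal, n_harmonics=48):
    """Total harmonic distortion: RMS of harmonics 2..n_harmonics+1
    relative to the fundamental, via a direct DFT."""
    n = len(signal)
    def mag(k):
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        return math.hypot(re, im)
    harmonics = math.sqrt(sum(mag(k) ** 2 for k in range(2, n_harmonics + 2)))
    return harmonics / mag(1)
```

Comparing `thd(staircase(9))` against `thd(staircase(3))` confirms that more levels yield lower distortion.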

12 citations

Journal ArticleDOI
TL;DR: The experimental results show that with the increase in sampling frequency the compression ratio increases and the percent-root-mean-square difference generally decreases, and the reconstructed signal is of higher quality having larger bandwidth and higher resolution.
Abstract: The existing techniques for data compression can be divided into three main groups, namely direct, transformation and parameter extraction methods. The present paper deals with the direct data compression (DDC) methods as applied to ECG data. The performance has been evaluated on the basis of compression ratio, percent-root-mean-square difference and fidelity of the reconstructed signal. Further, in order to know the extent to which the diagnostic information is preserved during compression, peak and boundary measurements have been made both on the reconstructed and original ECG signal and compared. The objective of the present paper is to report the effect of sampling frequency on the aforementioned parameters as studied by the authors. The experimental results show that with the increase in sampling frequency the compression ratio increases and the percent-root-mean-square difference generally decreases. Further, the reconstructed signal is of higher quality, having larger bandwidth and higher resolution.
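The two figures of merit used above, compression ratio (CR) and percent root-mean-square difference (PRD), have simple standard definitions; a short Python sketch (helper names are ours):

```python
import math

def compression_ratio(n_original, n_compressed):
    """CR: original sample (or bit) count over compressed count."""
    return n_original / n_compressed

def prd(original, reconstructed):
    """Percent root-mean-square difference; lower means the
    reconstructed signal is closer to the original."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)
```

A perfect reconstruction gives PRD of 0; reconstructing everything as zero gives 100.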

12 citations

Journal ArticleDOI
TL;DR: In this article, an optimised wavelet filter bank based methodology is presented for compression of the Electrocardiogram (ECG) signal. The methodology employs a new wavelet filter bank whose coefficients are derived with different window techniques, such as Kaiser and Blackman windows, using simple linear optimisation; the proposed filter gives a better compression ratio and good fidelity parameters compared with other wavelet filters.
Abstract: In this paper, an optimised wavelet filter bank based methodology is presented for compression of the Electrocardiogram (ECG) signal. The methodology employs a new wavelet filter bank whose coefficients are derived with different window techniques, such as Kaiser and Blackman windows, using simple linear optimisation. A comparative study of the performance of different existing wavelet filters and the proposed wavelet filter is made in terms of Compression Ratio (CR), Percent Root mean square Difference (PRD), Mean Square Error (MSE) and Signal-to-Noise Ratio (SNR). When compared, the developed wavelet filter gives a better CR and also yields good fidelity parameters compared with other wavelet filters. The simulation results included in this paper clearly show the increased efficacy and performance in the field of biomedical signal processing.
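The paper derives its own optimized filter bank; as a generic stand-in, the transform-threshold-reconstruct pipeline can be illustrated with a single-level orthonormal Haar filter pair (an assumption for illustration, not the authors' filter):

```python
def haar_forward(x):
    """One level of the orthonormal Haar transform: pairwise
    sums (approximation) and differences (detail), scaled."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x.extend((s * (a + d), s * (a - d)))
    return x

def threshold_details(detail, threshold):
    """Lossy step: zero small detail coefficients so they need
    not be stored; larger thresholds trade fidelity for CR."""
    return [d if abs(d) > threshold else 0.0 for d in detail]
```

With `threshold=0` the round trip is lossless; raising the threshold discards small details, which is where the compression comes from.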

11 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...A detailed review on these techniques is presented in (Addison, 2005; Jalaleddine et al., 1990; Ole-Aase et al., 1998) and the references therein....

  • ...555% in Blackman window which is in the acceptable range in practice (Jalaleddine et al., 1990)....

  • ...In early stage of research, several methods (Addison, 2005; Jalaleddine et al., 1990; Ole-Aase et al., 1998) such as the Amplitude Zone Time Epoch Coding (AZTEC), and Coordinate Reduction Time Encoding System (CORTES) were developed based on direct scheme in which compression is achieved by eliminating redundancy between different ECG samples in the time domain....

Journal ArticleDOI
TL;DR: The computation of signal differences in time improves the performance of lossless compression while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.
Abstract: Background: New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces. It also shows methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles, using image compression techniques.
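The TL;DR's finding that computing signal differences in time improves lossless compression has a simple information-theoretic illustration: first differences of a smooth biosignal have lower empirical entropy than the raw samples, so an entropy coder needs fewer bits per sample. A sketch (names are ours):

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Empirical entropy (bits/symbol): a lower bound on what a
    lossless entropy coder needs on average."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

def delta_encode(x):
    """Replace each sample by its difference in time from the
    previous one (first sample kept as-is); exactly invertible."""
    return [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]
```

For a slowly varying quantized signal, the differenced sequence concentrates on a few small values, and its entropy drops accordingly.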

11 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...Background In the medical field, compression techniques have been primarily applied to medical images, electrocardiography, and electroencephalography [1-7]....

Proceedings ArticleDOI
20 Sep 1995
TL;DR: A new high compression ratio technique and a noise suppression algorithm are offered for ECG recorders that solve the problems of voluminous data storage and power line noise.
Abstract: Conventional ambulatory electrocardiogram (ECG) recorders are analog devices with limited bandwidth and hence low resolution. Although these devices are being replaced with solid state systems capable of high resolution, designers are faced with two problems: voluminous data storage and power line noise. This paper discusses solutions to these problems by offering a new high compression ratio technique and a noise suppression algorithm. Results of evaluating these design solutions on real and simulated ECG signals are presented.
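For the power-line noise problem mentioned above, a standard remedy (not necessarily the authors' algorithm) is a second-order IIR notch filter centered at the mains frequency. A self-contained sketch:

```python
import math

def notch_filter(x, fs, f0, r=0.95):
    """Second-order IIR notch: zeros on the unit circle at +-f0
    null the mains component; pole radius r (< 1) sets the notch
    width. Gain is left unnormalized (this is a sketch)."""
    w0 = 2 * math.pi * f0 / fs
    b1 = -2.0 * math.cos(w0)                  # zeros at e^{+-j w0}
    a1, a2 = -2.0 * r * math.cos(w0), r * r   # poles just inside
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = xn + b1 * x1 + x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y
```

Because the zeros sit exactly on the unit circle, a pure 50/60 Hz sinusoid is driven to zero once the filter's transient dies out.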

11 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...Although several ECG data compression techniques are available [5], most of them distort the signals when used to compress data at high compression ratios [6]....

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
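The limiting process described here can be reproduced numerically: discretizing a density into bins of width Δ gives an entropy that approaches h(X) + log2(1/Δ) as Δ shrinks. A sketch for a standard Gaussian, whose differential entropy is (1/2)·log2(2πe) ≈ 2.047 bits (names are ours):

```python
import math

def discretized_entropy(pdf, lo, hi, delta):
    """Shannon entropy (bits) of a continuous density chopped into
    bins of width delta (midpoint rule for bin probabilities)."""
    h, x = 0.0, lo
    while x < hi:
        p = pdf(x + delta / 2) * delta   # probability mass of this bin
        if p > 0:
            h -= p * math.log2(p)
        x += delta
    return h

# Standard Gaussian density; its differential entropy is
# 0.5 * log2(2*pi*e), about 2.047 bits.
gauss = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)
delta = 1 / 64
H = discretized_entropy(gauss, -8.0, 8.0, delta)
h_diff = H - math.log2(1 / delta)  # subtract the quantization term
```

Subtracting log2(1/Δ) from the discretized entropy recovers the differential entropy to within the numerical-integration error.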

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
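The minimum-redundancy construction is the familiar Huffman procedure: repeatedly merge the two least-frequent subtrees, then read codewords off the resulting binary tree. A compact Python sketch:

```python
import heapq
from collections import Counter

def huffman_codes(message):
    """Build a minimum-redundancy (Huffman) code by repeatedly
    merging the two least-frequent subtrees."""
    heap = [(w, i, sym) for i, (sym, w) in enumerate(Counter(message).items())]
    if not heap:
        return {}
    heapq.heapify(heap)
    count = len(heap)  # unique tiebreaker so trees are never compared
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))  # merged subtree
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):        # internal node: recurse
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                              # leaf: assign codeword
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes
```

More frequent symbols receive shorter codewords, and no codeword is a prefix of another, which is what makes the code uniquely decodable.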

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
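The stationary (autocorrelation) method described here leads to the Levinson-Durbin recursion for the all-pole predictor coefficients; a minimal sketch (our naming):

```python
def autocorr(x, p):
    """Autocorrelation lags r[0..p] of a windowed signal."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(p + 1)]

def levinson(r, p):
    """Levinson-Durbin recursion: coefficients a[1..p] of the
    predictor xhat[n] = sum_k a[k] * x[n-k], plus the residual
    (prediction error) energy e."""
    a = [0.0] * (p + 1)
    e = r[0]
    for m in range(1, p + 1):
        k = (r[m] - sum(a[j] * r[m - j] for j in range(1, m))) / e
        new_a = a[:]
        new_a[m] = k           # k is the m-th reflection coefficient
        for j in range(1, m):
            new_a[j] = a[j] - k * a[m - j]
        a = new_a
        e *= 1 - k * k         # error shrinks with each order
    return a[1:], e
```

The intermediate `k` values are exactly the reflection (partial correlation) coefficients whose quantization the paper discusses for transmission.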

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
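Arithmetic coding maps a whole message to a single number inside nested cumulative-probability intervals. A conceptual, non-streaming sketch using exact Python fractions (real coders use incremental fixed-precision arithmetic instead; names are ours):

```python
from fractions import Fraction

def _cumulative(probs):
    """Lower cumulative bound for each symbol, in dict order."""
    cum, total = {}, Fraction(0)
    for sym, p in probs.items():
        cum[sym] = total
        total += p
    return cum

def arithmetic_encode(message, probs):
    """Shrink [0, 1) once per symbol; any number in the final
    interval identifies the whole message."""
    cum = _cumulative(probs)
    low, width = Fraction(0), Fraction(1)
    for sym in message:
        low += width * cum[sym]
        width *= probs[sym]
    return low, low + width

def arithmetic_decode(value, n, probs):
    """Invert the encoder: locate the sub-interval containing
    value, emit its symbol, and rescale."""
    cum = _cumulative(probs)
    out = []
    for _ in range(n):
        for sym in reversed(list(cum)):  # cum increases in dict order
            if value >= cum[sym]:
                out.append(sym)
                value = (value - cum[sym]) / probs[sym]
                break
    return "".join(out)
```

The final interval width is the product of the symbol probabilities, so the code length approaches the message's information content, which is the sense in which arithmetic coding out-compresses Huffman's one-codeword-per-symbol scheme.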

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
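Among the transforms surveyed, the discrete cosine transform is prized for energy compaction on smooth signals, which is what makes transform-domain compression work. A direct O(n^2) Python sketch (fast factored implementations exist, as the paper notes):

```python
import math

def dct_ii(x):
    """Unnormalized type-II DCT:
    X[k] = sum_i x[i] * cos(pi * k * (2i + 1) / (2n))."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]
```

For a smooth input such as a ramp, nearly all of the signal energy lands in the first few coefficients, so the remainder can be coarsely quantized or dropped.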

928 citations