Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
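To make one of the direct methods concrete, here is a minimal Python sketch of the turning-point algorithm named in the abstract: samples are scanned in pairs and, from each pair, only the sample that preserves a slope sign change is retained, giving a fixed 2:1 compression ratio. The function name and NumPy usage are illustrative choices, not taken from the paper.

```python
import numpy as np

def turning_point(signal):
    """Turning-point (TP) compression: fixed 2:1 ratio.

    Scans samples in pairs and, from each pair, retains the sample
    that preserves a change in slope sign (a 'turning point').
    """
    saved = [signal[0]]
    x0 = signal[0]                      # last retained sample
    for i in range(1, len(signal) - 1, 2):
        x1, x2 = signal[i], signal[i + 1]
        # Keep x1 if the slope changes sign there, otherwise keep x2.
        if np.sign(x1 - x0) * np.sign(x2 - x1) < 0:
            x0 = x1
        else:
            x0 = x2
        saved.append(x0)
    return np.asarray(saved)
```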
Citations
Proceedings ArticleDOI
03 Jun 2015
TL;DR: A compression algorithm for power quality (PQ) signals, which does not separate the signal into stationary and transient components, is presented, characterized by high compression ratio and low computational cost, making its implementation suitable for embedded systems.
Abstract: This paper presents a compression algorithm for power quality (PQ) signals, which does not separate the signal into stationary and transient components. The proposed algorithm is composed of a number of different compression techniques which are combined to improve both compression ratio and total computing time. This method is based on a sequence of three phases. The first phase transforms the signal to the frequency domain using the FFT algorithm. In a second phase the signal is approximated by a polynomial approximation algorithm, which is finally compressed by a lossless compression algorithm. The proposed method is characterized by high compression ratio and low computational cost, making its implementation suitable for embedded systems.
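A hedged sketch of the three-phase pipeline the abstract describes. The paper's second phase is a polynomial approximation in the frequency domain; as a stand-in, this sketch simply zeroes all but the largest-magnitude FFT coefficients (n_keep is an invented parameter), then uses zlib's Deflate, the lossless coder quoted in the citation context below, as the third phase.

```python
import numpy as np
import zlib

def compress_pq(signal, n_keep=64):
    """Three-phase sketch: FFT, coefficient reduction, Deflate.

    Phase 2 here zeroes all but the n_keep largest-magnitude FFT
    coefficients as a stand-in for the paper's polynomial
    approximation; Deflate then squeezes the long zero runs.
    """
    spec = np.fft.rfft(signal)                      # phase 1: frequency domain
    keep = np.argsort(np.abs(spec))[-n_keep:]       # phase 2 stand-in
    reduced = np.zeros_like(spec)
    reduced[keep] = spec[keep]
    payload = np.stack([reduced.real, reduced.imag]).astype(np.float32).tobytes()
    return zlib.compress(payload)                   # phase 3: lossless coding
```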

2 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...Deflate is a lossless data compression algorithm that combines the LZ77 algorithm and Huffman coding [6,7]....


Journal ArticleDOI
TL;DR: A B-spline approximation based electrocardiogram signal compression method is provided that uses a cubic B-spline as the basis function of the B-spline curve, thereby enhancing the effectiveness of the compression.
Abstract: PURPOSE: A B-spline approximation based electrocardiogram signal compression method is provided that uses a cubic B-spline as the basis function of the B-spline curve, thereby enhancing the effectiveness of the compression. CONSTITUTION: A knot vector of constant interval is set up and a B-spline basis function is produced (S100). Control points are formed on the determined knot vector and the recovery data are produced (S200). A restoration error, the arithmetic difference between the original data and the recovery data of the electrocardiogram signal, is calculated (S300). The root mean square (RMS) of the restoration error over each knot span, and the difference of each RMS from the average value, are calculated (S400). If any of these difference values exceeds a predetermined reference value (S500), a new knot is added in the middle of the offending knot span (S550). The new knot vector including the added knot is set up, and the original electrocardiogram data are compressed by the B-spline approximation (S600). (The accompanying patent flowchart repeats these steps S100 through S600.)
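Steps S100 through S600 amount to least-squares B-spline fitting with adaptive knot insertion. A minimal sketch using SciPy's make_lsq_spline follows; the initial knot count, the tolerance, and the midpoint-insertion rule are simplifications of the patent's RMS-versus-average criterion, not its exact procedure.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def bspline_compress(x, y, rms_tol=0.05, k=3, max_iter=20):
    """Cubic least-squares B-spline fit with adaptive knot insertion.

    Assumes densely sampled, sorted x. Where a knot span's RMS
    reconstruction error exceeds rms_tol, a knot is inserted at the
    span midpoint and the signal is refit (S100-S600, simplified).
    """
    interior = np.linspace(x[0], x[-1], 6)[1:-1]    # initial uniform knots
    for _ in range(max_iter):
        t = np.r_[(x[0],) * (k + 1), interior, (x[-1],) * (k + 1)]
        spl = make_lsq_spline(x, y, t, k)           # S100-S200
        err = y - spl(x)                            # S300
        edges = np.r_[x[0], interior, x[-1]]
        bad = [0.5 * (a + b)                        # S400-S550
               for a, b in zip(edges[:-1], edges[1:])
               if np.sqrt(np.mean(err[(x >= a) & (x <= b)] ** 2)) > rms_tol]
        if not bad:
            break
        interior = np.sort(np.r_[interior, bad])    # refit with new knots
    return spl, interior    # spl.c (control points) + knots are the code
```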

2 citations

01 Dec 2009
TL;DR: In this article, a low-complexity ECG signal compression and transmission scheme for use with ultra-low-power wireless ECG sensor nodes is proposed, which has features that reduce the amount of data transmission while minimizing the complexity of implementation.
Abstract: A low-complexity electrocardiograph (ECG) signal compression and transmission scheme for use with ultra-low-power wireless ECG sensor nodes is proposed. The proposed scheme has features that reduce the amount of data transmission while minimizing the complexity of implementation. ECG compression ratios of the order of 10 can be achieved with minimal hardware and memory overhead. The scheme does not require embedded controllers or digital signal processors. A tolerance can be specified to control the fidelity of the compressed waveform. An efficient way of transmitting the compressed waveform using the least number of bits is devised, considering the nature of the ECG waveform. This results in a reduction of the total number of transmitted bits to a minimum. As the RF energy transmitted is directly proportional to the amount of data transmitted, this scheme can significantly reduce the energy consumption of a wireless ECG sensor node with little hardware overhead.
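The abstract does not spell out the algorithm, but a tolerance-controlled zero-order-hold encoder of the following kind illustrates how a fidelity tolerance can drive a very low-complexity scheme. Everything in this sketch (names, the run-length output format, the default tolerance) is an assumption, not the authors' design.

```python
def tolerance_compress(samples, tol=4):
    """Zero-order-hold sketch: hold the last emitted value and emit a
    (value, run_length) pair only when the input drifts more than tol
    from it. Larger tol -> higher compression, lower fidelity."""
    out = []
    held, run = samples[0], 1
    for s in samples[1:]:
        if abs(s - held) <= tol:
            run += 1                 # still within tolerance: extend run
        else:
            out.append((held, run))  # flush the finished run
            held, run = s, 1
    out.append((held, run))
    return out
```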

2 citations

Proceedings ArticleDOI
05 Sep 1993
TL;DR: A dictionary based coding scheme is proposed that can efficiently handle the arbitrary valued data that result from the modelling phase and is faster and exhibits better compression ratio than 0-order Huffman encoders.
Abstract: ECG data compression is usually performed in two steps: (a) modelling of the signal and (b) coding of the data that result from the model in (a). A dictionary based coding scheme is proposed that can efficiently handle the arbitrary valued data that result from the modelling phase. The majority of the compression schemes described in the literature employ Huffman and, more rarely, arithmetic encoders adopted from text compression, which were not designed to handle data exceeding the range covered by an 8-bit quantity. The proposed coding scheme can handle 16-bit symbols and, furthermore, is faster and exhibits a better compression ratio than 0-order Huffman encoders.
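One plausible reading of such a dictionary coder is LZW run over a 16-bit alphabet, sketched below. The paper's actual dictionary scheme may differ, so treat this as an illustration of how a dictionary coder accommodates arbitrary-valued 16-bit symbols, not as the authors' algorithm.

```python
def lzw_encode(symbols, alphabet_bits=16):
    """LZW over a 16-bit alphabet: codes 0..2**16-1 are implicit
    single-symbol entries; longer phrases get codes from 2**16 up."""
    next_code = 1 << alphabet_bits
    table = {}                       # phrase tuple -> code (multi-symbol only)
    w, out = (), []
    for s in symbols:
        ws = w + (s,)
        if len(ws) == 1 or ws in table:
            w = ws                   # extend the current phrase
        else:
            out.append(table[w] if len(w) > 1 else w[0])
            table[ws] = next_code    # learn the new phrase
            next_code += 1
            w = (s,)
    if w:
        out.append(table[w] if len(w) > 1 else w[0])
    return out
```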

2 citations

Proceedings ArticleDOI
01 Apr 2017
TL;DR: A comparative study of the turning point (TP) and Fan compression techniques is presented, with the main comparison made morphologically.
Abstract: The electrocardiogram (ECG) is used in the diagnosis and treatment of various heart diseases. ECG data are compressed so that they can be used effectively in telemedicine, where large quantities of signal data must be stored and sent to different places; it is therefore essential to compress ECG data in a resourceful way. ECG signals are mainly compressed for two reasons: on-line data transmission, and effective and economical data storage. Over the last five decades several data compression techniques have been developed for the compression of ECG signals. These techniques can be classified into three categories: direct data compression (DDC), transformation compression (TC), and parameter extraction compression (PEC). In this paper a comparative study of the turning point (TP) and Fan compression techniques is carried out. The comparison is made on parameters such as compression ratio (CR), percentage root mean square difference (PRD), and quality score (QS), and the main comparison is made morphologically.
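The three figures of merit named in the abstract are standard and can be computed directly. The helper below is a straightforward sketch; QS is conventionally taken as CR divided by PRD, and the function name is an invented one.

```python
import numpy as np

def figures_of_merit(original, reconstructed, compressed_bits, original_bits):
    """CR, PRD and QS as conventionally defined (QS = CR / PRD)."""
    cr = original_bits / compressed_bits
    prd = 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                          / np.sum(original ** 2))
    return cr, prd, cr / prd
```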

2 citations


Cites background from "ECG data compression techniques-a u..."

  • ...It uses first order interpolation method [1]....


  • ...Electrocardiogram (ECG) [1] in the modern world is one of the very useful non-invasive physiological consideration which gives information about electromechanical activities of the cardiac system of the body....


References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
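One way to make the limiting process concrete: quantizing a density p(x) into bins of width Δ gives a discrete entropy that splits into a finite "differential" part and a quantization term, which is one of the new effects the continuous case introduces.

```latex
% Quantizing a density p(x) into bins of width \Delta: the discrete
% entropy separates into a finite "differential" part and a
% quantization term that diverges as \Delta -> 0.
H(X_\Delta) = -\sum_i p(x_i)\,\Delta \log\bigl(p(x_i)\,\Delta\bigr)
            \approx -\int p(x)\log p(x)\,dx \;-\; \log\Delta .
```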

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
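The construction is the familiar Huffman procedure: repeatedly merge the two least frequent subtrees, prefixing a binary digit to each side's codewords. A compact sketch (the representation is illustrative, not Huffman's original notation):

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Minimum-redundancy code: repeatedly merge the two least
    frequent subtrees, prefixing '0'/'1' to their codewords."""
    heap = [(n, i, {sym: ""})
            for i, (sym, n) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]                # symbol -> codeword

# e.g. huffman_code("abracadabra") gives 'a' the shortest codeword
```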

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
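For the stationary (autocorrelation) case, the predictor coefficients come from the normal equations R a = r built from the signal's autocorrelation. A small sketch of that computation (a direct solve is used here; Levinson-Durbin would solve the same system faster):

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Autocorrelation (stationary) method: solve the normal
    equations R a = r for the all-pole predictor coefficients."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1:n + order]   # lags 0..order
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return a   # prediction: x[t] ~= sum_k a[k] * x[t - 1 - k]
```

The prediction residual has lower variance than the signal itself, which is what makes DPCM-style ECG coders effective.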

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
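The core mechanism is interval narrowing: each symbol shrinks [low, high) in proportion to its probability, so the final interval width equals the message probability. A toy floating-point encoder (real coders use integer arithmetic and incremental output to avoid precision loss):

```python
def arithmetic_encode(message, probs):
    """Toy floating-point arithmetic encoder: each symbol narrows
    [low, high) by its probability interval."""
    cum, c = {}, 0.0
    for sym, p in probs.items():     # cumulative interval per symbol
        cum[sym] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for sym in message:
        lo, hi = cum[sym]
        span = high - low
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2          # any value in [low, high) identifies message

# e.g. arithmetic_encode("aab", {"a": 0.7, "b": 0.3})
```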

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
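The energy-compaction property that makes these transforms useful for data compression is easy to demonstrate with the DCT. The test signal and the number of retained coefficients below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.fft import dct, idct

# Energy compaction with the DCT: keep the 32 largest of 256
# coefficients and measure the reconstruction error.
rng = np.random.default_rng(0)
x = np.cos(np.linspace(0, 3 * np.pi, 256)) + 0.05 * rng.standard_normal(256)
X = dct(x, norm="ortho")
X[np.argsort(np.abs(X))[:-32]] = 0.0           # zero all but the top 32
x_hat = idct(X, norm="ortho")
rms_err = np.sqrt(np.mean((x - x_hat) ** 2))   # small despite 8:1 reduction
```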

928 citations