Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods, and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
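As a concrete illustration of one of the direct methods listed above, the following Python sketch shows the basic 2:1 turning-point idea: from each incoming pair of samples, keep the one that preserves a local slope reversal. It is a hedged reading of the technique as named in the abstract, not the reference implementation surveyed in the paper.

```python
import numpy as np

def turning_point_compress(x):
    """From each incoming pair of samples, keep the one that preserves a
    local slope reversal (turning point); gives a fixed 2:1 reduction."""
    x = np.asarray(x, dtype=float)
    out = [x[0]]
    x0 = x[0]
    i = 1
    while i + 1 < len(x):
        x1, x2 = x[i], x[i + 1]
        # a sign change between the two successive slopes marks a turning point
        kept = x1 if np.sign(x1 - x0) * np.sign(x2 - x1) < 0 else x2
        out.append(kept)
        x0 = kept
        i += 2
    return np.asarray(out)

# toy check on a noisy sine: output is roughly half the length of the input
t = np.linspace(0.0, 1.0, 500)
sig = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(500)
print(len(sig), len(turning_point_compress(sig)))
```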
Citations
Journal ArticleDOI
01 Sep 2004
TL;DR: The proposed ECG compression technique combines two approaches, ECG beat alignment and polynomial modelling, and a comparison with the DCT approach is performed by means of CR/PRD curves.
Abstract: The proposed ECG compression technique combines two approaches, ECG beat alignment and polynomial modelling. QRS complexes are first detected and then aligned in order to reduce high-frequency changes from beat to beat. These changes are modelled by means of a polynomial projection. ECGs from the MIT-BIH database are used to evaluate the performance of the proposed technique. A comparison with the DCT approach is performed by means of CR/PRD curves.
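The CR/PRD comparison mentioned above relies on two standard figures of merit; a minimal sketch of how they are commonly computed is given below. PRD definitions vary between papers (some subtract the signal mean in the denominator), so this is one common convention rather than the exact formula used in this study.

```python
import numpy as np

def prd(original, reconstructed, remove_mean=False):
    """Percentage root-mean-square difference between an original signal
    and its reconstruction; optionally mean-removed in the denominator."""
    x = np.asarray(original, dtype=float)
    y = np.asarray(reconstructed, dtype=float)
    ref = x - x.mean() if remove_mean else x
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(ref ** 2))

def compression_ratio(original_bits, compressed_bits):
    """Ratio of bits needed before and after compression."""
    return original_bits / compressed_bits
```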

19 citations

Journal ArticleDOI
TL;DR: An offline compression technique, which is implemented for ECG transmission in a global system of mobile (GSM) network for preliminary level evaluation of patient's cardiac condition in a non-critical condition, is described.
Abstract: Compression of electrocardiographic (ECG) data is an important requirement for developing an efficient telecardiology application. This study describes an offline compression technique implemented for ECG transmission over a global system for mobile communications (GSM) network, for preliminary-level evaluation of a patient's cardiac condition in a non-critical setting. Short-duration (5 - 6 beats) ECG data from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database are used for the trial. The compression algorithm is based on direct processing of ECG samples in four major steps: down-sampling of the dataset, normalising inter-sample differences, grouping for sign and magnitude encoding with zero-element compression, and finally conversion of bytes into corresponding 8-bit American standard code for information interchange (ASCII) characters. The software developed at the patient-side computer also converts the compressed data file into a formatted sequence of short text messages (SMSs). Using a dedicated GSM module, these messages are delivered to the mobile phone of the remote cardiologist. The received SMSs are then downloaded at the authors' computer for concatenation and decompression to recover the original ECG for visual or automated investigation. Average percentage root-mean-squared difference and compression ratio values of 43.54 and 1.73, respectively, are obtained with MIT-BIH arrhythmia data. The proposed technique is useful for rural clinics in India for preliminary-level cardiac investigation.
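To make the flavour of such a direct, sample-domain pipeline concrete, here is a heavily simplified Python sketch of first-difference coding followed by run-length coding of zero runs. The study's own down-sampling, normalisation, sign/magnitude grouping and ASCII/SMS packing steps are not reproduced; the function names and token format are illustrative only.

```python
import numpy as np

def delta_zero_rle(samples):
    """Encode a sequence as its first sample plus first differences, with
    runs of zero differences collapsed into ('Z', run_length) tokens."""
    samples = np.asarray(samples, dtype=int)
    first = int(samples[0])
    encoded, zero_run = [], 0
    for d in np.diff(samples):
        if d == 0:
            zero_run += 1
        else:
            if zero_run:
                encoded.append(("Z", zero_run))
                zero_run = 0
            encoded.append(("D", int(d)))
    if zero_run:
        encoded.append(("Z", zero_run))
    return first, encoded

def delta_zero_rle_decode(first, encoded):
    """Invert delta_zero_rle by expanding runs and cumulatively summing."""
    diffs = []
    for tag, value in encoded:
        diffs.extend([0] * value if tag == "Z" else [value])
    return np.concatenate(([first], first + np.cumsum(diffs)))

x = [10, 10, 10, 12, 12, 11, 11, 11, 11, 15]
print(np.array_equal(delta_zero_rle_decode(*delta_zero_rle(x)), x))  # True
```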

19 citations

Journal ArticleDOI
TL;DR: An adaptive VQ (AVQ) scheme is proposed, based on a one-dimensional codebook structure, where codevectors are overlapped and linearly shifted, which makes this easier to be hardware implemented than any existing AVQ method.
Abstract: A discrete semi-periodic signal can be described as x(n) = x(n+T+ΔT) + Δx, ∀n, where T is the fundamental period, ΔT represents a random period variation, and Δx is an amplitude variation. Discrete ECG signals are treated as semi-periodic, where T and Δx are associated with the heart beat rate and the baseline drift, respectively. These two factors cause coding inefficiency for ECG signal compression using vector quantisation (VQ). First, the periodic characteristic of ECG signals creates data redundancy among codevectors in a traditional two-dimensional codebook. Secondly, the fixed codevectors in traditional VQ result in low adaptability to signal variations. To solve these two problems simultaneously, an adaptive VQ (AVQ) scheme is proposed, based on a one-dimensional (1D) codebook structure, where codevectors are overlapped and linearly shifted. To further enhance the coding performance, the Δx term is extracted and encoded separately, before 1D-AVQ is applied. The data in the first 3 min of all 48 ECG records from the MIT/BIH arrhythmia database are used as the test signals, and no codebook training is carried out in advance. The compressed data rate is 265.2 ± 92.3 bits s⁻¹ at 10.0 ± 4.1% PRD. No codebook storage or transmission is required; only a very small codebook storage space is needed temporarily during the coding process. In addition, the linearly shifted nature of the codevectors makes the scheme easier to implement in hardware than any existing AVQ method.
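The core of the 1D-AVQ idea, a single buffer whose overlapped, linearly shifted windows act as codevectors, can be illustrated with a small search routine. This is only a toy sketch under that reading of the abstract; gain handling, the separate coding of the baseline term Δx, and the adaptive buffer update are all omitted.

```python
import numpy as np

def best_shift(buffer_1d, target):
    """Search every shift of a 1D buffer for the overlapped window that is
    closest (in squared error) to the target vector; the winning shift index
    is what an encoder of this kind would transmit."""
    buffer_1d = np.asarray(buffer_1d, dtype=float)
    target = np.asarray(target, dtype=float)
    L = len(target)
    best_i, best_err = 0, np.inf
    for i in range(len(buffer_1d) - L + 1):
        err = float(np.sum((buffer_1d[i:i + L] - target) ** 2))
        if err < best_err:
            best_i, best_err = i, err
    return best_i, best_err

# usage: match a new "beat segment" against recently decoded samples
history = np.sin(np.linspace(0, 8 * np.pi, 400))
segment = history[123:155] + 0.01 * np.random.randn(32)
print(best_shift(history, segment))  # shift close to 123
```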

19 citations


Cites background or methods or result from "ECG data compression techniques-a u..."

  • ...ECG compression methods can be categorised as direct, transformed or parameter-extracted (JALALEDDINE et al., 1990; NAVE and COHEN, 1993). In the direct approach, time-domain signals are processed directly. In the transformed approach, time-domain ECG signals are transformed to another domain, and the compression is performed in the transformed domain. In the parameter-extracted approach, compression is achieved by preserving significant features of the signal. Any lossy compression scheme described above involves some form of quantisation. There are two broad types of quantisation, namely scalar quantisation (SQ) and vector quantisation (VQ). It is well known that VQ is more efficient than SQ in the rate-distortion sense. Therefore VQ has been used extensively and successfully in speech and image coding (GERSHO and GRAY, 1992). Recently, VQ has also been applied to ECG data compression. For example, a finite-state VQ and a mean-subtracted basic VQ (BVQ) are applied to direct ECG compression by WANG and YUAN (1997) and CARDENAS-BARRERA and LORENZO-GINORI (1999), respectively. A BVQ, classified VQ (CVQ) or learning VQ (LVQ) is proposed to encode the wavelet coefficients of ECG signals (ANANT et al., 1995; MIAOU and SHIAOU, 1996; ISHIKAWA et al., 1996). The beat cycles of ECG are extracted and encoded by BVQ (RAMAKRISHNAN and SAHA, 1996). The self-organising feature map and LVQ are used to extract the ECG features for coding purposes (KOSKI, 1996). Line segments and the corresponding slopes, extracted from ECG signals, are quantised separately by CVQ (MAMMEN and RAMAMURTHI, 1990). Although VQ is an attractive compression tool for ECG and other signals, from a practical viewpoint, VQ suffers from the drawbacks of high computational complexity and incompatibility between training and testing. We consider the incompatibility problem in the following Section. For the first problem, please refer to GERSHO and GRAY (1992). For the codebook design, the popular LBG procedure is optimum for a particular training set (LINDE et al....

    [...]

  • ...summarised in JALALEDDINE et al. (1990) and CARDENAS-BARRERA and LORENZO-GINORI (1999) and are either comparable with or slightly worse than others. Based on the similarity of the approach, we are particularly interested in the results presented in CARDENAS-BARRERA and LORENZO-GINORI (1999), where a mean-shape BVQ is implemented....

    [...]

  • ...ECG compression methods can be categorised as direct, transformed or parameter-extracted (JALALEDDINE et al., 1990; NAVE and COHEN, 1993)....

    [...]

  • ...As pointed out by NAVE and COHEN (1993) and JALALEDDINE et al. (1990), a fair comparison of various compression methods is difficult....

    [...]

  • ...summarised in JALALEDDINE et al. (1990) and CARDENAS-BARRERA and LORENZO-GINORI (1999) and are either comparable with or slightly worse than others....

    [...]

Journal ArticleDOI
TL;DR: This paper proposes the application of kernel principal component analysis (KPCA) to SVM for feature extraction, and a PSO algorithm is adopted to optimise the SVM parameters.
Abstract: As an effective tool in pattern recognition and machine learning, the support vector machine (SVM) has been widely adopted. In developing a successful SVM classifier, eliminating noise and extracting features are very important. This paper proposes the application of kernel principal component analysis (KPCA) to SVM for feature extraction. A particle swarm optimisation (PSO) algorithm is then adopted to optimise the SVM parameters. The novel time-series analysis model integrates the advantages of wavelets, PSO, KPCA and SVM. Compared with other predictors, this model has greater generalisation ability and higher accuracy.
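Below is a minimal, hedged sketch of the KPCA-then-SVM pipeline described above, using scikit-learn. The wavelet preprocessing is omitted, and a plain grid search stands in for the paper's PSO parameter search; the dataset is synthetic and purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# synthetic stand-in data; the paper's own time-series features are not used
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# KPCA extracts nonlinear features, which then feed an RBF-kernel SVM
pipe = Pipeline([
    ("kpca", KernelPCA(n_components=5, kernel="rbf", gamma=0.1)),
    ("svm", SVC(kernel="rbf")),
])

# grid search over SVM hyperparameters (a stand-in for PSO optimisation)
search = GridSearchCV(pipe, {"svm__C": [1, 10, 100], "svm__gamma": [0.01, 0.1]}, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```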

19 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Discrete Wavelet Transform: Many transform techniques have been previously proposed for signal compression [7][8]....

    [...]

Journal ArticleDOI
TL;DR: An improved and simplified approach is presented for the design of nearly perfect reconstruction cosine-modulated (CM) filter banks with prescribed stopband attenuation and channel overlapping.
Abstract: An improved and simplified approach is presented for the design of nearly perfect reconstruction cosine-modulated (CM) filter banks with prescribed stopband attenuation and channel overlapping. The method employs the Kaiser window technique to design the prototype filter, with the novelty of exploiting spline functions in the transition band of the ideal filter instead of the conventional brick-wall filter, based on linear optimisation of the filter coefficients such that their value at the frequency ω = π/2M is 0.707. The simulation results illustrate the proposed method and its improvement over other existing methods in terms of amplitude distortion (e_am), number of iterations (NOI), aliasing distortion (e_a) and computation time (CPU time).
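The 0.707 constraint at ω = π/2M can be illustrated with a small SciPy sketch that bisects the cutoff of a Kaiser-window prototype filter until its magnitude response is roughly -3 dB at that frequency. This is an assumed, simplified stand-in: the paper's spline-shaped transition band and its linear optimisation of coefficients are not reproduced.

```python
import numpy as np
from scipy.signal import firwin, freqz

def kaiser_prototype(num_taps, M, beta, iters=40):
    """Bisect the cutoff of a Kaiser-window FIR prototype until its
    magnitude at omega = pi/(2M) is about 0.707 (the -3 dB point)."""
    lo, hi = 1e-3, 0.999          # cutoff as a fraction of Nyquist
    target_w = np.pi / (2 * M)    # evaluation frequency in rad/sample
    h = None
    for _ in range(iters):
        fc = 0.5 * (lo + hi)
        h = firwin(num_taps, fc, window=("kaiser", beta))
        mag = np.abs(freqz(h, worN=[target_w])[1][0])
        if mag > 0.707:
            hi = fc               # response too high: narrow the filter
        else:
            lo = fc               # response too low: widen the filter
    return h

h = kaiser_prototype(num_taps=129, M=8, beta=8.0)
print(np.abs(freqz(h, worN=[np.pi / 16])[1][0]))  # close to 0.707
```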

18 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
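A standard way to see this limiting process (the textbook illustration, not taken from the paper itself) is to discretise a density p(x) into bins of width Δ and watch which terms survive the limit.

```latex
% Discretise X with density p(x) into bins of width \Delta, so that the
% probability of the i-th bin is p_i \approx p(x_i)\,\Delta.
\begin{align*}
H(X_\Delta) &= -\sum_i p_i \log p_i
             \approx -\sum_i p(x_i)\,\Delta \,\log\!\big(p(x_i)\,\Delta\big) \\
            &= -\sum_i p(x_i)\,\Delta \,\log p(x_i) \;-\; \log \Delta \\
            &\xrightarrow[\;\Delta \to 0\;]{}\;
              \underbrace{-\int p(x)\log p(x)\,dx}_{\text{differential entropy } h(X)}
              \;-\; \log \Delta .
\end{align*}
% The diverging -\log\Delta term is one of the "new effects" of the continuous
% case: absolute entropies blow up, but differences of entropies (mutual
% information, channel capacity) remain finite.
```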

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
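A minimal sketch of the minimum-redundancy (Huffman) construction follows: repeatedly merge the two least probable subtrees so that rarer symbols receive longer codewords. This is a generic textbook version, not code from the paper.

```python
import heapq
import itertools
from collections import Counter

def huffman_code(message):
    """Build a Huffman code for the symbols of `message`: each heap entry is
    [weight, tiebreak, partial codebook]; merging two entries prepends '0'
    and '1' to their codewords, so rare symbols end up with longer codes."""
    freq = Counter(message)
    counter = itertools.count()          # unique tiebreaker for the heap
    heap = [[w, next(counter), {s: ""}] for s, w in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                   # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, next(counter), merged])
    return heap[0][2]

print(huffman_code("abracadabra"))  # frequent 'a' gets the shortest codeword
```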

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
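A minimal sketch of the all-pole, least-squares estimation described above (the autocorrelation method): build the Toeplitz normal equations from the autocorrelation sequence and solve for the predictor coefficients. The Levinson-Durbin recursion and the reflection-coefficient quantisation discussed in the paper are not shown.

```python
import numpy as np

def lpc_coefficients(x, order):
    """Estimate a_1..a_p so that x[n] ~ sum_k a_k * x[n-k], by solving the
    Toeplitz normal equations R a = r built from the autocorrelation."""
    x = np.asarray(x, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# usage: recover the coefficients of a synthetic AR(2) process
rng = np.random.default_rng(0)
x = np.zeros(1000)
e = rng.standard_normal(1000)
for n in range(2, 1000):
    x[n] = 1.5 * x[n - 1] - 0.7 * x[n - 2] + e[n]
print(lpc_coefficients(x, order=2))  # roughly [1.5, -0.7]
```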

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
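As a toy, floating-point illustration of the interval-narrowing idea behind arithmetic coding (real coders use renormalised integer arithmetic and an adaptive model; this version only works for short messages before floating-point precision runs out):

```python
def _cumulative(probs):
    """Map each symbol to its [low, high) slice of the unit interval."""
    cum, acc = {}, 0.0
    for s, p in probs.items():
        cum[s] = (acc, acc + p)
        acc += p
    return cum

def arithmetic_encode(symbols, probs):
    """Narrow [low, high) by each symbol's probability slice and emit one
    number inside the final interval."""
    cum = _cumulative(probs)
    low, high = 0.0, 1.0
    for s in symbols:
        span = high - low
        lo_f, hi_f = cum[s]
        low, high = low + span * lo_f, low + span * hi_f
    return (low + high) / 2

def arithmetic_decode(value, n, probs):
    """Recover n symbols by repeatedly locating value's slice and rescaling."""
    cum = _cumulative(probs)
    out = []
    for _ in range(n):
        for s, (lo_f, hi_f) in cum.items():
            if lo_f <= value < hi_f:
                out.append(s)
                value = (value - lo_f) / (hi_f - lo_f)
                break
    return out

probs = {"a": 0.5, "b": 0.3, "c": 0.2}
msg = list("abacab")
code = arithmetic_encode(msg, probs)
print(code, arithmetic_decode(code, len(msg), probs) == msg)  # ... True
```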

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
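To make the transform-compression idea concrete, here is a hedged SciPy sketch of DCT energy compaction: keep only a fraction of the largest-magnitude coefficients, reconstruct, and measure the error. It is a generic illustration, not tied to the specific transforms or test data used in these papers.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_compress(x, keep_fraction=0.1):
    """Zero all but the largest-magnitude DCT coefficients and reconstruct,
    illustrating the energy compaction that transform coders rely on."""
    c = dct(x, norm="ortho")
    k = max(1, int(len(c) * keep_fraction))
    drop = np.argsort(np.abs(c))[:-k]   # indices of the smallest coefficients
    c_kept = c.copy()
    c_kept[drop] = 0.0
    return idct(c_kept, norm="ortho")

t = np.linspace(0, 1, 512)
sig = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)
rec = dct_compress(sig, keep_fraction=0.05)
err = 100 * np.sqrt(np.sum((sig - rec) ** 2) / np.sum(sig ** 2))
print(f"PRD with 5% of coefficients: {err:.2f}%")
```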

928 citations