Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
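
To make the tolerance-comparison category concrete, here is a minimal zero-order sketch in the spirit of AZTEC's plateau formation (illustrative only; it omits AZTEC's slope coding and is not the paper's exact algorithm):

```python
import numpy as np

def zot_compress(x, eps):
    """Zero-order tolerance-comparison compression (AZTEC-style plateaus).

    Consecutive samples are grouped into a horizontal plateau as long as
    they stay within a band of width 2*eps; each plateau is stored as a
    (run_length, value) pair.
    """
    plateaus = []
    start, vmx, vmn = 0, x[0], x[0]
    for i in range(1, len(x)):
        new_mx, new_mn = max(vmx, x[i]), min(vmn, x[i])
        if new_mx - new_mn > 2 * eps:
            # band exceeded: close the current plateau before sample i
            plateaus.append((i - start, (vmx + vmn) / 2.0))
            start, vmx, vmn = i, x[i], x[i]
        else:
            vmx, vmn = new_mx, new_mn
    plateaus.append((len(x) - start, (vmx + vmn) / 2.0))
    return plateaus

def zot_decompress(plateaus):
    # Reconstruct a step-wise approximation of the original signal.
    return np.concatenate([np.full(n, v) for n, v in plateaus])
```
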
Citations
Journal ArticleDOI
TL;DR: It is concluded that the vertexes selected by the proposed ECG signal compression method preserve all feature points of the ECG signals and are more efficient than the AZTEC (Amplitude Zone Time Epoch Coding) method.
Abstract: As electrocardiogram (ECG) signals are generally sampled at frequencies above 200 Hz, a method to compress diagnostic information without losing data is required to store and transmit them efficiently. In this paper, an ECG signal compression method that uses feature points based on curvature is proposed. The feature points of the P, Q, R, S, and T waves, which are critical components of the ECG signal, have large curvature values compared with other vertexes. These vertexes are therefore extracted with the proposed method, which uses local extrema of curvature. Furthermore, in order to minimize reconstruction errors of the ECG signal, extra vertexes are added according to an iterative vertex selection method. Experimental results on ECG signals from the MIT-BIH Arrhythmia database show that the vertexes selected by the proposed method preserve all feature points of the ECG signals and are more efficient than the AZTEC (Amplitude Zone Time Epoch Coding) method.
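
As an illustration of the curvature idea, the following sketch picks local maxima of discrete curvature as candidate feature points (the threshold k_thresh and sampling rate fs are assumed parameters; the paper's actual vertex-selection and iterative refinement steps are not reproduced):

```python
import numpy as np

def curvature_feature_points(y, fs, k_thresh):
    """Select candidate feature points as local maxima of discrete curvature.

    A minimal sketch of the curvature criterion only, not the cited
    paper's full method.
    """
    t = 1.0 / fs
    dy = np.gradient(y, t)            # first derivative
    d2y = np.gradient(dy, t)          # second derivative
    kappa = np.abs(d2y) / (1.0 + dy ** 2) ** 1.5
    idx = [i for i in range(1, len(y) - 1)
           if kappa[i] >= k_thresh
           and kappa[i] >= kappa[i - 1] and kappa[i] >= kappa[i + 1]]
    return np.array(idx)              # indices of kept vertexes (endpoints would be added in practice)
```
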

6 citations


Additional excerpts

  • ...Because these compression techniques perform compression over all vertices of the ECG waveform, distortion can occur at the feature points of the P, Q, R, S, and T waves, which are key components of the ECG signal [9]....


Dissertation
15 Nov 2007
TL;DR: In this thesis, the author develops ECG signal compression methods based on orthogonal polynomials; Legendre and Chebyshev polynomials give good reconstruction, leading to one algorithm based on Jacobi polynomials and a second that combines Laguerre polynomials with Hermite functions.
Abstract: ECG signal compression is becoming even more important with the development of telemedicine. Compression considerably reduces the cost of transmitting medical information over telecommunication channels. The objective of this thesis is to develop new ECG compression methods based on orthogonal polynomials. We first studied the characteristics of ECG signals, as well as the various processing operations commonly applied to them. We also gave an exhaustive, comparative description of existing ECG compression algorithms, with emphasis on those based on polynomial approximation and interpolation. We then addressed the theoretical foundations of orthogonal polynomials, successively studying their mathematical nature, their many useful properties, and the characteristics of several of these polynomial families. Polynomial modeling of the ECG signal consists first of segmenting the signal into cardiac cycles after QRS complex detection, and then decomposing the signal windows obtained after segmentation in polynomial bases. The coefficients produced by the decomposition are used to synthesize the signal segments in the reconstruction phase. Compression amounts to using a small number of coefficients to represent a signal segment made up of a large number of samples. Our experiments established that Laguerre polynomials and Hermite polynomials do not lead to good reconstruction of the ECG signal. In contrast, Legendre and Chebyshev polynomials gave interesting results. Consequently, we designed our first ECG compression algorithm using Jacobi polynomials. When this algorithm is optimized by suppressing edge effects, it becomes general-purpose and is no longer dedicated solely to ECG signals. Although neither Laguerre polynomials nor Hermite functions individually allow good modeling of ECG signal segments, we devised a combination of the two function systems to represent a cardiac cycle. In this case, the ECG segment corresponding to one cardiac cycle is split into two parts: the isoelectric line, decomposed in series of Laguerre polynomials, and the P-QRS-T waves, modeled by Hermite functions. This yields a second, robust and efficient ECG compression algorithm.
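
A minimal sketch of the per-cycle orthogonal-polynomial idea, using a Chebyshev least-squares fit of each segmented cardiac cycle (the thesis' Jacobi-based algorithm, edge-effect optimization, and Laguerre/Hermite split are not reproduced; n_coef is an assumed parameter):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_cycle(cycle, n_coef=20):
    """Represent one cardiac cycle by its first n_coef Chebyshev coefficients."""
    x = np.linspace(-1.0, 1.0, len(cycle))   # map the segment onto [-1, 1]
    return C.chebfit(x, cycle, n_coef - 1)   # least-squares Chebyshev fit

def reconstruct_cycle(coef, n_samples):
    x = np.linspace(-1.0, 1.0, n_samples)
    return C.chebval(x, coef)

# Per-segment compression ratio is roughly n_samples / n_coef.
```
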

6 citations

Journal ArticleDOI
TL;DR: The test results and performance indices have proved beyond doubt that the EBP-NN method is very efficient for data compression and helps in the management of ECG data in both offline and real-time applications.
Abstract: This paper deals with an efficient algorithm, which has been developed for Electrocardiogram (ECG) data compression using error back propagation neural networks (EBP-NN). Four EBP-NNs have been trained to retrieve all 12 standard leads of the ECG signal. The combination of leads and the network topologies were finalized after an extensive study of the correlation between the ECG leads using the CSE database. Each network has a topology of N-4-4-N, where N represents the number of samples in one cycle of any particular lead. It has been observed that this method compresses the data as well as improves the quality of the retrieved signal due to elimination of high-frequency noise. The compression ratio (CR) in the EBP-NN method increases with the number of ECG cycles. This method is well suited for data compression in Holter monitoring, ambulatory care and telemedicine. The performance of the algorithm has been evaluated by comparing vital reference points like onsets, offsets, amplitud...
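
A minimal sketch of an N-4-4-N network of the kind described above, assuming PyTorch rather than the paper's original implementation (the 4-unit hidden layers carry the compressed representation of one normalized ECG cycle):

```python
import torch
import torch.nn as nn

def make_ebp_nn(n_samples):
    """N-4-4-N feed-forward network; sigmoid activations are an assumption."""
    return nn.Sequential(
        nn.Linear(n_samples, 4), nn.Sigmoid(),
        nn.Linear(4, 4), nn.Sigmoid(),
        nn.Linear(4, n_samples),
    )

def train(model, cycles, epochs=500, lr=1e-3):
    """cycles: tensor of shape (num_cycles, n_samples), amplitude-normalized."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(cycles), cycles)   # reconstruct the input cycles
        loss.backward()
        opt.step()
    return model
```
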

6 citations


Cites background or methods from "ECG data compression techniques-a u..."

  • ...The disadvantage of this method is that saved points do not represent the equally spaced time intervals [4]....


  • ...Amplitude Zone Time Epoch Coding (AZTEC), Modified AZTEC, Turning Point (TP), Co-ordinate Reduction Time Encoding System (CORTES), Scan Along Polygonal Approximation (SAPA), Fan, Peak picking and Cycle-to-Cycle (CTC), Differential Pulse Code Modulation (DPCM) and SAPA-CTC techniques are the DDC techniques [4]....


  • ...In the DPCM, the error (residual) between the actual sample and the estimated sample value is quantized and transmitted/stored [4]....


  • ...All the methods of DDC attempt to reduce the redundancy in a data sequence by examining the number of neighbouring samples that can be implied by the preceding and succeeding samples [4]....


  • ...Out of different transforms, the highest CR has been reported for KLT technique for multilead ECG signal [4]....

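
The DPCM excerpt above describes quantizing the residual between each actual sample and its estimate; a minimal first-order sketch, assuming the previous reconstructed sample as predictor and a uniform quantizer step q, could look like:

```python
import numpy as np

def dpcm_encode(x, q):
    """First-order DPCM: quantize the residual against the previous
    reconstructed sample (a sketch, not the survey's specific predictor)."""
    codes = np.empty(len(x), dtype=np.int64)
    prev = 0.0
    for i, s in enumerate(x):
        codes[i] = int(round((s - prev) / q))   # quantized residual
        prev = prev + codes[i] * q              # decoder-matched reconstruction
    return codes

def dpcm_decode(codes, q):
    # Accumulate the quantized residuals to rebuild the signal.
    return np.cumsum(codes) * q
```
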

Journal ArticleDOI
TL;DR: Experimental results show that the proposed scheme can effectively guarantee the transmission secrecy against eavesdropping, while improving the spectrum efficiency and energy efficiency compared to other existing methods.
Abstract: Due to the inherent openness of wireless channels and the restriction of communication resources and energy supply, the privacy protection of the sensing data transmission in the security-critical Internet of Medical Things (IoMT) has become a great challenge. In order to guarantee the privacy of IoMT sensing and transmission in a wireless wiretap channel and reduce the power consumption, a privacy-aware sensing and transmission scheme with the name of sparse-learning-based encryption and recovery (SLER) is proposed. The sparse sensing signal is compressed and encrypted at the IoMT devices in the encryption stage and transmitted to the network coordinator or edge devices, where the sparse signal is accurately recovered via sparse learning in the decryption stage. The encryption stage is conducted based on compressed sensing. The decryption stage utilizes a model-based sparsity-aware deep neural network to accurately recover the sensing signal, whose sparse features are extracted to decrease the required size of measurement signals and increase the spectrum efficiency. The secrecy performance of the proposed SLER algorithm is theoretically analyzed. Experiments of electrocardiogram (ECG) signal transmission are performed as a typical IoMT application. The experimental results show that the proposed scheme can effectively guarantee the transmission secrecy against eavesdropping, while improving the spectrum efficiency and energy efficiency compared to other existing methods.
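
A minimal sketch of the compressed-sensing measurement (encryption) stage described above, assuming a key-seeded Gaussian sensing matrix; the paper's sparsity-aware deep recovery network is not reproduced here:

```python
import numpy as np

def cs_encrypt(x, m, key):
    """Compressed-sensing measurement y = Phi @ x with a key-seeded Gaussian Phi.

    Illustrates the measurement/encryption idea only; `key` plays the role
    of a shared secret used to regenerate Phi at the receiver.
    """
    rng = np.random.default_rng(key)
    phi = rng.standard_normal((m, len(x))) / np.sqrt(m)   # m < len(x) measurements
    return phi @ x

# Recovery at the coordinator would solve a sparse inverse problem
# (in the paper, with a model-based sparsity-aware neural network),
# regenerating Phi from the same key.
```
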

6 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
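
The standard construction of such a minimum-redundancy code repeatedly merges the two least-frequent symbols; a compact sketch (illustrative, not Huffman's original presentation):

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) code; returns {symbol: bitstring}."""
    freq = Counter(message)
    if len(freq) == 1:                     # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

# Example: huffman_code("AAABBC") assigns shorter codewords to frequent symbols.
```
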

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
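
A minimal sketch of the time-domain least-squares fit described above: predict each sample from its past values and keep the prediction coefficients and residual (a covariance-style formulation, not Makhoul's full treatment; `order` is the assumed number of poles):

```python
import numpy as np

def lpc_coefficients(x, order):
    """Fit an all-pole model by least squares: x[n] ~ sum_k a[k] * x[n-1-k]."""
    x = np.asarray(x, dtype=float)
    A = np.asarray([x[n - order:n][::-1] for n in range(order, len(x))])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def lpc_residual(x, a):
    """Prediction error; this residual is what DPCM-style coders quantize."""
    x = np.asarray(x, dtype=float)
    order = len(a)
    pred = np.array([a @ x[n - order:n][::-1] for n in range(order, len(x))])
    return x[order:] - pred
```
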

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
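
A toy sketch of the interval-narrowing principle behind arithmetic coding (floating-point precision only suffices for short messages; practical coders renormalize with integer arithmetic):

```python
def arithmetic_encode(message, probs):
    """Narrow [low, high) by each symbol's probability interval and emit one
    number inside the final range."""
    cum, c = {}, 0.0
    for s in sorted(probs):
        cum[s] = (c, c + probs[s])
        c += probs[s]
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        lo_s, hi_s = cum[s]
        low, high = low + span * lo_s, low + span * hi_s
    return (low + high) / 2.0, cum

def arithmetic_decode(code, length, cum):
    out = []
    for _ in range(length):
        for s, (lo_s, hi_s) in cum.items():
            if lo_s <= code < hi_s:          # find the interval containing the code
                out.append(s)
                code = (code - lo_s) / (hi_s - lo_s)
                break
    return "".join(out)

# value, cum = arithmetic_encode("ABBA", {"A": 0.6, "B": 0.4})
# arithmetic_decode(value, 4, cum) == "ABBA"
```
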

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
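
As one concrete instance of transform-domain compression with an orthogonal transform, here is a minimal 1-D DCT sketch that keeps only the largest-magnitude coefficients (illustrative; the paper surveys many transforms and 2-D cases):

```python
import numpy as np
from scipy.fft import dct, idct

def transform_compress(x, keep):
    """Keep the `keep` largest-magnitude DCT coefficients and zero the rest."""
    c = dct(x, norm="ortho")
    idx = np.argsort(np.abs(c))[:-keep]   # indices of the discarded coefficients
    c[idx] = 0.0
    return c

def transform_reconstruct(c):
    # Inverse transform of the truncated coefficient vector.
    return idct(c, norm="ortho")
```
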

928 citations