Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluating and comparing ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
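
As a concrete illustration of the direct (time-domain) methods surveyed, here is a minimal Python sketch of the Turning-point idea: each pair of incoming samples is reduced to one, and a sample is retained whenever the slope changes sign, so peaks survive the fixed 2:1 reduction. The function name and list interface are ours, not from the paper.

```python
def turning_point(x):
    """Sketch of the Turning-point ECG compressor (fixed 2:1 reduction).

    From each pair of samples, keep the one that preserves a slope sign
    change (a "turning point"), so QRS peaks are not discarded.
    """
    def sign(v):
        return (v > 0) - (v < 0)

    out = [x[0]]              # the first sample is always kept
    x0 = x[0]                 # last retained sample
    for i in range(1, len(x) - 1, 2):
        x1, x2 = x[i], x[i + 1]
        if sign(x1 - x0) * sign(x2 - x1) < 0:
            x0 = x1           # slope changed sign: x1 is a turning point
        else:
            x0 = x2           # monotone segment: keep the later sample
        out.append(x0)
    return out
```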
Citations
Journal ArticleDOI
TL;DR: The results show that the nonlinear transform (ENOCA) gives better performance at high PRD, whereas at low PRD, DCT performs better.
Abstract: This paper presents and analyzes a nonlinear transform-based electrocardiogram (ECG) compression method. The procedure parallels that used in linear transform-based methods. The ECG signal is transformed using (i) linear transforms: the discrete cosine transform (DCT), Laplacian pyramid (LP), and wavelet transform (WT); and (ii) a nonlinear transform: essentially nonoscillatory cell average (ENOCA). The transformed coefficients (TC) are thresholded using the bisection algorithm to match a predefined user-specified percentage root mean square difference (PRD) within tolerance. A binary lookup table is then built to store the position map of zero and nonzero coefficients (NZCs). The NZCs are quantized by a Max–Lloyd quantizer followed by arithmetic coding; the lookup table is encoded by Huffman coding. Results are presented for ECG signals of varying characteristics and show that the nonlinear transform (ENOCA) gives better performance at high PRD, whereas at low PRD, DCT performs better.
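
The bisection step described above, searching for the coefficient threshold that meets a user-specified PRD, can be sketched as follows for the DCT case. The function names, tolerance, and iteration cap are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.fft import dct, idct

def prd(x, x_rec):
    """Percentage root mean square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def threshold_for_prd(x, target_prd, tol=0.05, max_iter=60):
    """Bisection on the coefficient threshold until the PRD matches the target."""
    c = dct(x, norm="ortho")
    lo, hi = 0.0, np.max(np.abs(c))
    for _ in range(max_iter):
        t = 0.5 * (lo + hi)
        c_t = np.where(np.abs(c) >= t, c, 0.0)    # zero the small coefficients
        p = prd(x, idct(c_t, norm="ortho"))
        if abs(p - target_prd) <= tol:
            break
        if p > target_prd:
            hi = t    # too much distortion: lower the threshold
        else:
            lo = t    # room to spare: raise the threshold, keep fewer coefficients
    return t, c_t
```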

4 citations

Journal ArticleDOI
TL;DR: The state-of-the-art methods used for fPCG signal extraction and processing, as well as means of detection and classification of various features defining fetal health state are introduced.
Abstract: Fetal phonocardiography (fPCG) is receiving attention as a promising method for continuous fetal monitoring due to its non-invasive and passive nature. However, it suffers from interference from various sources that overlaps the desired signal in the time and frequency domains. This paper introduces the state-of-the-art methods used for fPCG signal extraction and processing, as well as means of detecting and classifying the various features that define fetal health state. It also provides an extensive summary of remaining challenges, along with practical insights and suggestions for future research directions.

4 citations

Proceedings ArticleDOI
01 Jan 2005
TL;DR: Examination of a set of wavelet functions for implementation in an electrocardiogram (ECG) image compression system shows that clinically useful information in the original ECG image is preserved at 15:1 compression, and in some cases 20:1 compression remains clinically useful.
Abstract: The aim of our paper is to examine a set of wavelet functions for implementation in an electrocardiogram (ECG) image compression system. Eight different wavelets are evaluated for their ability to compress the ECG as an image. Image quality is compared objectively using mean square error (MSE) and peak signal-to-noise ratio (PSNR), along with visual appearance. Results show that clinically useful information in the original ECG image is preserved at 15:1 compression, and in some cases 20:1 compression remains clinically useful.
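
The two objective measures used in that comparison are standard; a short sketch, assuming 8-bit grayscale images stored as NumPy arrays, is:

```python
import numpy as np

def mse(orig, recon):
    """Mean square error between two equally sized images."""
    return np.mean((orig.astype(float) - recon.astype(float)) ** 2)

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB, for images with the given peak value."""
    return 10.0 * np.log10(peak ** 2 / mse(orig, recon))
```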

4 citations

Journal ArticleDOI
TL;DR: This paper reviews previous research on each processing step of ECG diagnosis and proposes a method for detecting and classifying arrhythmia using rhythm features of the ECG signal, showing 100% detection performance for arrhythmia using only the normal-rhythm rule and the applicability of classification for rhythm types using arrhythmia rhythm rules.
Abstract: In this paper, we review previous research on each processing step of ECG diagnosis and propose a method for detecting and classifying arrhythmia using rhythm features of the ECG signal. Rhythm features describing the distribution of rhythm and heartbeat, such as identity and regularity, are extracted in the feature extraction step, and the rhythm type is classified in the rhythm classification step using a rule base constructed in advance from features of the rhythm section. Experimental results for all rhythm types in the MIT-BIH arrhythmia database show a detection performance of 100% for arrhythmia using only the normal-rhythm rule, and the applicability of classification for rhythm types using arrhythmia rhythm rules.
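
A rule of the kind described, classifying a rhythm section from the regularity and rate of its RR intervals, might look like the sketch below. The thresholds and function name are illustrative assumptions, not values from the paper.

```python
def is_normal_sinus_rhythm(rr_intervals, reg_tol=0.10):
    """Toy normal-rhythm rule over RR intervals (in seconds).

    Checks two rhythm features: regularity (every interval within reg_tol
    of the mean) and a heart rate between 60 and 100 beats per minute.
    Thresholds are illustrative, not taken from the cited paper.
    """
    mean_rr = sum(rr_intervals) / len(rr_intervals)
    regular = all(abs(rr - mean_rr) / mean_rr <= reg_tol for rr in rr_intervals)
    rate_ok = 60.0 <= 60.0 / mean_rr <= 100.0
    return regular and rate_ok
```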

4 citations


Additional excerpts

  • ...Time-domain compression techniques are further classified into classical techniques used in general signal processing and new or modified techniques used in ECG processing [15]....


Proceedings ArticleDOI
14 May 2006
TL;DR: This work proposes the use of orthogonal wavelets parameterized by their scaling filter, with an optimization criterion based on minimizing the signal distortion for the desired compression rate.
Abstract: In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define a family of wavelets that depend on a set of parameters and a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of orthogonal wavelets parameterized by their scaling filter, with an optimization criterion based on minimizing the signal distortion for the desired compression rate. For coding the wavelet coefficients we adopted the embedded zerotree wavelet coding algorithm. Results on electromyographic signals show that optimization significantly improves performance.
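
One common way to parameterize an orthogonal wavelet by its scaling filter, in the spirit of this work, is the one-angle lattice family of length-4 filters. The sketch below grid-searches the angle for the filter whose single-level transform leaves the least energy outside a fixed number of kept coefficients; for an orthonormal transform, that discarded energy equals the reconstruction MSE. This is a simplified stand-in for the authors' distortion-rate optimization and embedded zerotree coding, with our own function names.

```python
import numpy as np

def scaling_filter(theta):
    """One-parameter family of orthonormal length-4 scaling filters
    (lattice parameterization; particular angles recover Daubechies-type
    filters, though orderings vary across references)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([1 + c - s, 1 + c + s, 1 - c + s, 1 - c - s]) / (2 * np.sqrt(2))

def single_level_dwt(x, h):
    """One analysis level with periodic extension; returns approx + detail."""
    g = h[::-1] * (-1.0) ** np.arange(len(h))        # quadrature-mirror highpass
    n = len(x)
    idx = (2 * np.arange(n // 2)[:, None] + np.arange(len(h))) % n
    return np.concatenate([x[idx] @ h, x[idx] @ g])

def best_angle(x, keep, angles=np.linspace(0.0, np.pi, 181)):
    """Pick the angle whose transform leaves the least energy outside the
    'keep' largest-magnitude coefficients (== reconstruction MSE)."""
    def discarded_energy(theta):
        c = np.sort(np.abs(single_level_dwt(x, scaling_filter(theta))))
        return np.sum(c[:-keep] ** 2)
    return min(angles, key=discarded_energy)
```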

4 citations


Cites background from "ECG data compression techniques-a unified approach"

  • ...Past research has focused on a number of compression schemes, especially in the fields of electrocardiogram and electroencephalogram [2][3][4]....


References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
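
The limiting process described here is the standard route from discrete to differential entropy: partition the line into cells of width Δ, apply the discrete formula, and watch a divergent term split off. A compact rendering of the argument:

```latex
% Discretize a density p(x) into cells of width \Delta, so p_i \approx p(x_i)\,\Delta:
H_\Delta \;=\; -\sum_i p(x_i)\,\Delta\,\log\bigl(p(x_i)\,\Delta\bigr)
        \;=\; -\sum_i p(x_i)\,\log p(x_i)\,\Delta \;-\; \log\Delta .
% As \Delta \to 0, the first sum tends to the differential entropy
h(X) \;=\; -\int p(x)\,\log p(x)\,dx ,
% while the -\log\Delta term diverges: the absolute entropy of a continuous
% source is infinite, but entropy differences (rates, capacities) stay finite.
```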

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
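
Huffman's construction, repeatedly merging the two least-probable entries, is short enough to sketch directly. This heap-based version is a common idiom, not the paper's original formulation:

```python
import heapq

def huffman_code(freqs):
    """Minimum-redundancy binary code from a {symbol: weight} mapping.

    Repeatedly merges the two lowest-weight subtrees, prefixing '0'/'1'
    to the codewords on each side, per Huffman's construction.
    """
    heap = [[w, [sym, ""]] for sym, w in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heapq.heappop(heap)[1:]}

# e.g. huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```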

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
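
In the stationary (autocorrelation) case, the all-pole analysis surveyed here reduces to the Levinson-Durbin recursion, which yields both the predictor and the reflection (partial correlation) coefficients whose quantization the paper discusses. A sketch, with our own variable names:

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve the all-pole normal equations from autocorrelations r[0..p].

    Returns predictor coefficients a (with a[0] == 1), reflection
    coefficients k, and the final prediction error energy.
    """
    a = np.zeros(p + 1)
    a[0] = 1.0
    err = r[0]
    k = np.zeros(p)
    for m in range(1, p + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])   # correlation of predictor with signal
        k[m - 1] = -acc / err                        # m-th reflection coefficient
        a_prev = a.copy()
        a[m] = k[m - 1]
        for i in range(1, m):                        # order-update of the predictor
            a[i] = a_prev[i] + k[m - 1] * a_prev[m - i]
        err *= 1.0 - k[m - 1] ** 2                   # error shrinks each order
    return a, k, err
```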

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
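
The interval-narrowing idea behind arithmetic coding fits in a few lines. This floating-point version is only illustrative; practical coders, including the one this paper describes, use integer intervals with renormalization to avoid precision loss.

```python
def arithmetic_encode(message, probs):
    """Toy arithmetic encoder: narrow [0, 1) by each symbol's probability slice.

    Floating point limits this to short messages; production coders use
    integer arithmetic with renormalization, as the cited paper describes.
    """
    cum, start = {}, 0.0
    for sym, p in probs.items():          # cumulative interval per symbol
        cum[sym] = (start, start + p)
        start += p
    lo, hi = 0.0, 1.0
    for sym in message:
        span = hi - lo
        c_lo, c_hi = cum[sym]
        lo, hi = lo + span * c_lo, lo + span * c_hi
    return (lo + hi) / 2.0                # any number in [lo, hi) identifies the message

# e.g. arithmetic_encode("abba", {"a": 0.6, "b": 0.4})
```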

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
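
Among the performance criteria listed, variance distribution (energy compaction) is easy to probe directly: transform a correlated test signal and ask what fraction of the energy the k largest coefficients capture. A small sketch comparing the orthonormal DCT and Walsh-Hadamard transforms; the AR(1) test-signal model and the choice k = 16 are our own assumptions:

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def energy_fraction(coeffs, k):
    """Fraction of total energy captured by the k largest-magnitude coefficients."""
    e = np.sort(coeffs ** 2)[::-1]
    return e[:k].sum() / e.sum()

rng = np.random.default_rng(0)
n = 256                                   # power of two, as Hadamard requires
x = np.zeros(n)
for i in range(1, n):                     # first-order Markov (AR(1)) test signal
    x[i] = 0.95 * x[i - 1] + rng.standard_normal()

c_dct = dct(x, norm="ortho")
c_wht = (hadamard(n) / np.sqrt(n)) @ x    # orthonormal Walsh-Hadamard

for name, c in [("DCT", c_dct), ("WHT", c_wht)]:
    print(name, round(energy_fraction(c, 16), 4))
```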

928 citations