Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
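The survey's tolerance-comparison category is easy to make concrete. Below is a minimal Python sketch, under invented parameters and a synthetic test wave, of only the zero-order plateau step at the heart of AZTEC-style coders (full AZTEC also emits slope segments): runs of samples are replaced by (duration, amplitude) pairs for as long as they stay within a tolerance of a common level.

```python
# Zero-order "plateau" step underlying AZTEC-style tolerance-comparison
# coders: a run of samples becomes one (duration, amplitude) pair while
# all samples in the run stay within +/-eps of a common level.
import numpy as np

def zero_order_plateaus(x, eps):
    """Greedily emit (run_length, value) pairs; every sample in a run
    lies within eps of the reported value (the running midrange)."""
    plateaus = []
    start, lo, hi = 0, x[0], x[0]
    for i in range(1, len(x)):
        nlo, nhi = min(lo, x[i]), max(hi, x[i])
        if nhi - nlo > 2 * eps:            # tolerance exceeded: close the run
            plateaus.append((i - start, (lo + hi) / 2))
            start, lo, hi = i, x[i], x[i]
        else:
            lo, hi = nlo, nhi
    plateaus.append((len(x) - start, (lo + hi) / 2))
    return plateaus

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * t)                  # slow test wave, not a real ECG
runs = zero_order_plateaus(x, eps=0.02)
print(f"{len(x)} samples -> {len(runs)} plateaus "
      f"({len(x) / (2 * len(runs)):.1f}:1 if each pair costs two words)")
```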
Citations
Journal ArticleDOI
TL;DR: A scalable neural recording interface with embedded lossless compression that reduces dynamic power consumption and data rate for LFPs and spikes in high-density neural recording systems is reported.
Abstract: We report a scalable neural recording interface with embedded lossless compression to reduce dynamic power consumption ($P_D$) for data transmission in high-density neural recording systems. We investigated the characteristics of neural signals and implemented effective lossless compression for local field potentials (LFP) and extracellular action potentials (EAP, or spikes) in separate signal paths. For LFP, the spatiotemporal correlation of the LFP signals is exploited in a $\Delta$-modulated $\Delta\Sigma$ analog-to-digital converter ($\Delta$-$\Delta\Sigma$ ADC) and a dedicated digital difference circuit. Then, statistical redundancy is further eliminated through entropy encoding without information loss. For spikes, only the essential parts of the spike waveforms are extracted from the raw data by using spike detectors and reconfigurable analog memories. The prototype chip was fabricated in a 180-nm CMOS process, incorporating 128 channels into a modular architecture that is easily scalable and expandable for high-density neural recordings. The fabricated chip reduced the data rate for the LFPs and spikes by factors of 5.35 and 10.54, respectively, with the proposed compression scheme. Consequently, $P_D$ was reduced by 89% compared to the uncompressed case. We also achieved state-of-the-art recording performance of 3.37 $\mu$W per channel, 5.18 $\mu V_{\mathrm{rms}}$ noise, and 3.41 $\mathrm{NEF}^2 V_{\mathrm{DD}}$.
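The chip realizes its differencing stages in mixed-signal hardware; the snippet below is only a behavioral sketch with invented signal statistics, showing why delta modulation in time followed by differences between neighbouring channels leaves low-entropy residuals for the entropy encoder.

```python
# Behavioral sketch (not the chip's circuit) of spatiotemporal differencing:
# LFP channels share a slow common component, so differencing in time and
# then across adjacent channels leaves low-entropy residuals.
import numpy as np

def entropy_bits(x):
    """Empirical entropy in bits/sample of an integer array."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
n_ch, n = 8, 4000
common = np.cumsum(rng.normal(0, 4.0, n))            # shared slow LFP component
lfp = np.round(common + rng.normal(0, 1.0, (n_ch, n))).astype(int)

temporal = np.diff(lfp, axis=1)                      # delta modulation in time
spatiotemporal = np.diff(temporal, axis=0)           # neighbour-channel difference

print(f"raw:             {entropy_bits(lfp.ravel()):.2f} bits/sample")
print(f"temporal delta:  {entropy_bits(temporal.ravel()):.2f} bits/sample")
print(f"spatiotemporal:  {entropy_bits(spatiotemporal.ravel()):.2f} bits/sample")
```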

33 citations

Journal ArticleDOI
TL;DR: A lossless, real-time compression technique based on a combination of second-order delta encoding and Huffman coding is proposed for the PPG signal; the low time complexity of the proposed algorithm encourages its implementation in low-cost real-time PPG measurement applications for patient monitoring.
Abstract: The photoplethysmogram (PPG) signal can provide vital diagnostic information on the cardiovascular functions of the human body. In this paper, a lossless, real-time compression technique based on a combination of second-order delta encoding and Huffman coding is proposed for the PPG signal. The algorithm was validated with 10-bit quantized PPG data collected from the Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database under Physionet and from healthy volunteers using Biopac systems at a 125-Hz sampling frequency. Using a block size of 48 samples, the average compression ratio, percentage root-mean-squared difference (PRD), and normalized PRD achieved were 2.223, 0.127, and 0.187, respectively, with 30 sets of volunteers' data. Three prime clinical features, systolic amplitude, systolic upstroke time, and pulse width, evaluated from the decompressed PPG waveform showed less than 1% distortion in the diagnostic measures. A study was also done to estimate the compression efficiency for different sample block sizes, wave morphologies, and sampling frequencies of the raw data. The low time complexity of the proposed algorithm encourages its implementation in low-cost real-time PPG measurement applications for patient monitoring.
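A minimal end-to-end sketch of the same pipeline, second-order delta followed by Huffman coding, is given below on a synthetic pulse train rather than the MIMIC-II or Biopac data; huffman_lengths is an illustrative helper, not the authors' implementation.

```python
# Sketch of the proposed pipeline on invented data: second-order delta
# shrinks the residual alphabet, then a Huffman code is fit to the residuals.
import heapq
from collections import Counter
import numpy as np

def huffman_lengths(symbols):
    """Return {symbol: codeword length} of a Huffman code for `symbols`."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    heap = [(n, i, {s: 0}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, d1 = heapq.heappop(heap)    # merge two least frequent subtrees;
        n2, _, d2 = heapq.heappop(heap)    # every member gains one code digit
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (n1 + n2, i, merged))
        i += 1
    return heap[0][2]

rng = np.random.default_rng(2)
t = np.arange(0, 8, 1 / 125)                        # 125 Hz, as in the paper
ppg = np.round(300 + 200 * np.abs(np.sin(1.2 * np.pi * t)) ** 3
               + rng.normal(0, 1, t.size)).astype(int)

d2 = np.diff(ppg, n=2).tolist()                     # second-order delta
lengths = huffman_lengths(d2)
coded_bits = sum(lengths[s] for s in d2)
print(f"compression ratio ~ {10 * len(d2) / coded_bits:.2f} vs 10-bit samples")
```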

33 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Although a lot of research is already available on ECG signal compression [25], [26], the area of PPG compression has to date remained largely unexplored by the research community....


BookDOI
01 Jan 2006
A. Enis Çetin, Hayrettin Köymen (Bilkent University)

32 citations


Additional excerpts

  • ...Various modified versions of AZTEC have been proposed [5]....


Journal ArticleDOI
TL;DR: Experimental results indicate good overall performance of the lossless ECG compression algorithms, reducing storage needs from 12 to about 3-4 bits per sample.

31 citations


Cites background from "ECG data compression techniques-a u..."

  • ...The available lossy compression algorithms for ECG signals exploit the short-term correlation of the signal samples [2], or the short- and long-term correlation [3], or additionally the correlation between different recorded channels [1]....


Journal ArticleDOI
TL;DR: It is shown that the amplitude-modulated (AM) sinusoidal signal model, a special case of the time-dependent AR/ARMA model, can have the periodicity property and can exhibit a burst-like feature very well when the modulating signal is an exponential function.
Abstract: Like many other natural signals, the electrocardiogram (ECG) is a non-stationary signal (GRENIER, 1983). The burst-like QRS feature contributes localised high-frequency components to the ECG signal, making it distinctly non-stationary (WALDO and CHITRAPA, 1991). Although this feature of the QRS wave has helped detection of the wave by filtering/linear prediction (FRIESEN et al., 1990), it makes modelling of the signal very difficult. Most of the work in modelling the ECG is non-parametric in nature (GRAHAM, 1976; WOMBLE et al., 1977; JALALEDDINE et al., 1990). An attempt to represent a segment of the ECG by the impulse response of a pole-zero model was unsuccessful because of its prohibitively large order (MURTHY et al., 1979). Later, modelling a small segment (about a period) of the ECG by damped sinusoids was found to be superior to the earlier attempt. The method, however, fails to exploit the global nature (e.g. pseudo-periodicity) of the ECG signal (NIRANJAN and MURTHY, 1993). The time-dependent autoregressive (AR)/autoregressive moving average (ARMA) model is representative of the general class of non-stationary signals (GRENIER, 1983). As such, the model can also be used for the ECG signal. However, the ECG has some distinctive features: its pseudo-periodicity, and the different features of the constituent signals (P, QRS and T) representing the actions of various parts of the heart (GUYTON, 1985). It would be useful to know how the general time-dependent AR/ARMA model is restricted by the special features of ECG-type signals. We show that the amplitude-modulated (AM) sinusoidal signal model, which is a special case of the time-dependent AR/ARMA model, can have the periodicity property, and that the model can exhibit a burst-like feature very well when the modulating signal is an exponential function. We propose that one or more AM sinusoidal signals can be employed to model each feature of the ECG signal separately. The suitability of the developed model is then investigated for the ECG signal using an analysis-by-synthesis technique.
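A short numerical sketch of the letter's central idea, with invented parameter values: an exponential envelope multiplying a sinusoid yields a burst-like "QRS", and one such AM component per feature (P, QRS, T) can be summed.

```python
# Exponentially amplitude-modulated sinusoid as a burst-like "QRS" component;
# the letter proposes one or more such AM components per ECG feature.
import numpy as np

fs = 500                                     # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)                # one pseudo-period

def am_component(t, centre, decay, freq, gain):
    """gain * exp(-decay*|t-centre|) * sin(2*pi*freq*(t-centre))"""
    return gain * np.exp(-decay * np.abs(t - centre)) * np.sin(
        2 * np.pi * freq * (t - centre))

# Invented parameters: a sharp high-frequency "QRS" burst plus smoother
# low-frequency "P" and "T" waves, summed into one pseudo-period.
ecg_like = (am_component(t, 0.15, 15.0, 4.0, 0.15)    # "P"
            + am_component(t, 0.30, 60.0, 12.0, 1.0)  # "QRS"
            + am_component(t, 0.55, 10.0, 3.0, 0.3))  # "T"
```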

31 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
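The limiting process described here can be checked numerically. In the sketch below (a unit Gaussian source, chosen arbitrarily), the entropy of the source quantized into regions of width delta behaves like the differential entropy plus log2(1/delta) as the regions shrink.

```python
# Discrete entropy of a quantized Gaussian vs. the continuous-case limit
# h(X) + log2(1/delta); the match tightens as the regions shrink.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 1_000_000)             # unit-variance Gaussian source
h_diff = 0.5 * np.log2(2 * np.pi * np.e)        # differential entropy, ~2.047 bits

for delta in (0.5, 0.1, 0.02):
    _, counts = np.unique(np.floor(x / delta), return_counts=True)
    p = counts / counts.sum()
    H = -(p * np.log2(p)).sum()
    print(f"delta={delta:4}:  H={H:.3f}  "
          f"h + log2(1/delta)={h_diff - np.log2(delta):.3f}")
```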

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
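A small worked example of the construction on an invented five-message ensemble: repeatedly merge the two least probable groups, prepending one binary digit to each member; the resulting average length (2.2 digits/message) lies between the ensemble entropy (about 2.12) and entropy plus one.

```python
# Minimum-redundancy (Huffman) code for an invented five-message ensemble.
import heapq

freqs = {"a": 0.4, "b": 0.2, "c": 0.2, "d": 0.1, "e": 0.1}
heap = [(p, i, [s]) for i, (s, p) in enumerate(freqs.items())]
heapq.heapify(heap)
codes = {s: "" for s in freqs}
i = len(heap)
while len(heap) > 1:
    p1, _, g1 = heapq.heappop(heap)      # two least probable groups
    p2, _, g2 = heapq.heappop(heap)
    for s in g1:
        codes[s] = "0" + codes[s]        # prepend one digit per merge
    for s in g2:
        codes[s] = "1" + codes[s]
    heapq.heappush(heap, (p1 + p2, i, g1 + g2))
    i += 1

avg = sum(freqs[s] * len(codes[s]) for s in freqs)
print(codes)                                        # prefix-free codewords
print(f"average length {avg:.2f} digits/message")   # 2.20 vs entropy ~2.12
```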

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
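A compact sketch of the autocorrelation (stationary) method on a synthetic AR(2) signal; for brevity it solves the normal equations directly, whereas a Levinson-Durbin recursion would also yield the reflection coefficients discussed for transmission.

```python
# All-pole linear prediction by the autocorrelation method: find a[1..p]
# minimizing the least-squares error of x_hat[n] = sum_k a[k] * x[n-k].
import numpy as np

def lpc(x, order):
    """Least-squares all-pole coefficients and minimum prediction error."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:][: order + 1]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1: order + 1])   # normal equations R a = r
    err = r[0] - a @ r[1: order + 1]          # minimum prediction error
    return a, err

rng = np.random.default_rng(4)
n, x, e = 5000, np.zeros(5000), rng.normal(0, 1, 5000)
for i in range(2, n):                         # synthesize an AR(2) test signal
    x[i] = 1.5 * x[i - 1] - 0.8 * x[i - 2] + e[i]

a, err = lpc(x, order=2)
print(a)                                      # recovers ~ [1.5, -0.8]
```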

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
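A toy float-precision coder, workable only for short messages (practical coders use integer arithmetic with renormalization), makes the interval-narrowing mechanism and the model/coder separation concrete; the three-symbol model is invented.

```python
# Toy arithmetic coder: the message narrows [0,1) to a sub-interval whose
# width is the product of the symbol probabilities; any number inside it
# identifies the message. Float precision limits this to short messages.
def encode(msg, probs):
    low, high = 0.0, 1.0
    for sym in msg:
        span = high - low
        c = 0.0
        for s, p in probs.items():
            if s == sym:
                low, high = low + c * span, low + (c + p) * span
                break
            c += p
    return (low + high) / 2          # any number inside the final interval

def decode(x, probs, n):
    out = []
    for _ in range(n):
        c = 0.0
        for s, p in probs.items():
            if c <= x < c + p:
                out.append(s)
                x = (x - c) / p      # rescale to the chosen sub-interval
                break
            c += p
    return "".join(out)

probs = {"a": 0.6, "b": 0.3, "c": 0.1}      # the model, separate from the coder
code = encode("abac", probs)
print(code, decode(code, probs, 4))         # round-trips for short messages
```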

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
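As a pocket version of the data-compression criterion, the sketch below builds an orthonormal DCT-II matrix with plain NumPy, keeps only the k largest-magnitude coefficients of a correlated test signal, and reports the mean-square reconstruction error; the signal and sizes are invented.

```python
# Energy compaction of an orthogonal transform: a few DCT-II coefficients
# capture most of a correlated signal, so truncation gives small MSE.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n-by-n matrix (rows = basis vectors)."""
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

rng = np.random.default_rng(5)
n = 64
x = np.cumsum(rng.normal(0, 1, n))        # highly correlated test signal
C = dct_matrix(n)
y = C @ x                                  # forward transform

for keep in (4, 8, 16):
    ranks = np.argsort(-np.abs(y)).argsort()
    yk = np.where(ranks < keep, y, 0)      # keep k largest coefficients
    mse = np.mean((C.T @ yk - x) ** 2)     # inverse transform, then compare
    print(f"keep {keep:2d}/{n}: mse = {mse:.4f}")
```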

928 citations