Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
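Several of the direct methods surveyed above reduce to simple per-sample decision rules. As a minimal illustration (not the paper's own code), the Turning-point algorithm can be sketched in Python: it processes samples in pairs and retains, from each pair, the sample that preserves a local slope change, giving a fixed 2:1 reduction.

```python
def turning_point(signal):
    """Turning-point compression: fixed 2:1 lossy reduction that keeps,
    from each pair of samples, the one preserving local slope changes."""
    def sign(v):
        return (v > 0) - (v < 0)

    out = [signal[0]]          # the first sample is always retained
    x0 = signal[0]             # last retained sample (reference)
    i = 1
    while i + 1 < len(signal):
        x1, x2 = signal[i], signal[i + 1]
        s1, s2 = sign(x1 - x0), sign(x2 - x1)
        # a slope sign change at x1 marks a turning point: keep x1,
        # otherwise keep x2 and drop the intermediate sample
        kept = x1 if s1 * s2 < 0 else x2
        out.append(kept)
        x0 = kept
        i += 2
    return out
```

On a monotone ramp every second sample is dropped, while alternating extrema are retained, which is exactly the behavior the survey describes for preserving QRS peaks.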
Citations
Proceedings ArticleDOI
25 Apr 2016
TL;DR: The implementation of the low-power algorithm for the compression of biosignals (ECG, EMG, gait pattern) on a general purpose microcontroller of the transmitter node of a body-area network is described.
Abstract: This paper presents a low-power algorithm for the compression of biosignals (ECG, EMG, gait pattern). Sample decimation is guided by the second derivative of the signal as a metric for signal activity. Here, we describe the implementation of the algorithm on a general purpose microcontroller of the transmitter node of a body-area network. The algorithm is optimized for low computational complexity and consists of 180 controller instructions. It incorporates feedback to achieve a targeted compression factor (CF). Simulated and measured results with different biosignals confirm that the code achieves a typical CF around 10, needs 137 controller cycles per input sample and consumes 507 nJ per sample on the prototype hardware.
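The abstract describes the mechanism only at a high level. A sketch of the stated idea — decimation guided by the second difference as an activity metric, with feedback steering a threshold toward a target compression factor — might look as follows; the function name, update rule, and gain are chosen here purely for illustration and are not from the paper:

```python
def decimate_by_activity(signal, target_cf=10.0, gain=0.05):
    """Illustrative activity-guided decimation: drop samples where the
    second difference (a proxy for signal activity) is below a threshold
    that a feedback loop steers toward the target compression factor."""
    kept = [(0, signal[0])]                    # always keep the first sample
    thr = 0.0
    for n in range(1, len(signal) - 1):
        # second central difference approximates the second derivative
        activity = abs(signal[n - 1] - 2 * signal[n] + signal[n + 1])
        if activity >= thr:
            kept.append((n, signal[n]))
        # feedback: raise the threshold if compressing too little,
        # lower it if compressing too much
        cf = (n + 1) / len(kept)
        thr = max(thr + gain * (target_cf - cf), 0.0)
    kept.append((len(signal) - 1, signal[-1])) # always keep the endpoint
    return kept
```

The retained (index, value) pairs would then be entropy coded or transmitted directly; the real implementation fits in 180 controller instructions, so it is necessarily far leaner than this sketch.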

5 citations


Cites methods from "ECG data compression techniques-a u..."

  • As examples, the Fan algorithm [1] and AZTEC [2] are dedicated to ECG compression.

Proceedings ArticleDOI
01 Feb 2017
TL;DR: The presented compression method applies uniform quantization (UQ) with different thresholding criteria to the slantlet coefficients of the ECG signal and, owing to better energy compaction efficiency, yields better reconstruction from compressed data than wavelet and discrete cosine transform techniques.
Abstract: In this paper, an ECG signal compression technique based on the Slantlet transform with different thresholding functions is presented. The method applies uniform quantization (UQ) with different thresholding criteria to the slantlet coefficients of the ECG signal. Owing to its better energy compaction efficiency, the technique yields better reconstruction from compressed data than wavelet and discrete cosine transform approaches. A detailed analysis of compression and signal-retrieval efficiency is presented, along with an evaluation of the thresholding criteria. The technique is tested on different ECG signals obtained from the MIT-BIH arrhythmia database. Simulation and experimental results clearly show that the Slantlet transform gives better performance in terms of the fidelity measures CR, SNR, and PRD, with better compression than contemporary techniques.
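Hard thresholding followed by uniform quantization of transform coefficients, as used above, can be sketched generically. This is not the paper's implementation; `keep_fraction` and `n_bits` are illustrative parameters, and the sketch is transform-agnostic (it would apply equally to slantlet, wavelet, or DCT coefficients):

```python
import numpy as np

def threshold_and_quantize(coeffs, keep_fraction=0.2, n_bits=8):
    """Generic sketch: hard-threshold transform coefficients (keep the
    largest-magnitude fraction, zero the rest) and uniformly quantize
    the survivors to n_bits levels. Returns (codes, dequantized)."""
    c = np.asarray(coeffs, dtype=float)
    k = max(1, int(keep_fraction * c.size))
    thr = np.sort(np.abs(c))[-k]               # k-th largest magnitude
    kept = np.where(np.abs(c) >= thr, c, 0.0)  # hard thresholding
    # uniform quantization over the surviving dynamic range
    step = (kept.max() - kept.min()) / (2 ** n_bits - 1) or 1.0
    q = np.round((kept - kept.min()) / step).astype(int)
    dequant = q * step + kept.min()
    return q, dequant
```

The integer codes `q` are what would be entropy coded; `dequant` shows the reconstruction error introduced by the quantizer alone.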

5 citations


Cites methods from "ECG data compression techniques-a u..."

  • In direct techniques, the ECG signal is obtained from various sources and processed directly; therefore, several techniques are present to compress the data, such as coordinate reduction time encoding system (CORTES), turning point, and amplitude zone time epoch coding (AZTEC) [1-4], known as direct compression techniques.

  • Generally, these transform techniques are used for compression, such as discrete cosine transform (DCT), discrete wavelet transform (DWT), and Karhunen–Loeve transform (KLT) [1-4].

  • These techniques are classified as lossy and lossless and are further divided into different categories: direct techniques, transform-based techniques, and parameter extraction techniques of compression [1-4].

Journal ArticleDOI
TL;DR: In this paper, a combination of tunable-Q wavelet transform (TQWT) and adaptive Fourier decomposition (AFD) was used for ECG signal compression.
Abstract: Long-term electrocardiogram (ECG) signal monitoring necessitates a large amount of memory space for storage, which affects the transmission channel efficiency during real-time data transfer. Using a combination of tunable-Q wavelet transform (TQWT) and adaptive Fourier decomposition (AFD), the proposed work develops a new single-channel ECG signal compression algorithm. The input parameters of TQWT were selected so that the lowest frequency subband contained the highest energy along with minimal loss. A new Mobius transform-based AFD was introduced to improve the fidelity, by computing the highest-energy coefficients using Nevanlinna factorization with a suitable decomposition level. Finally, lossless compression was performed on the polar coordinates of the final complex coefficients, which significantly improved the compression ratio (CR). The algorithm was implemented in Python and tested on a Raspberry Pi (R-Pi) for real-time data processing and wireless transmission to a cloud server and smartphone devices. The suggested work yielded CR, percent root-mean-square difference (PRD), and normalized PRD (PRDN) values of 30.06, 7.80, and 11.62, respectively, after testing on 48 Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) ECG records of 30-min duration. A rigorous quality assessment of the reconstructed signal ensured that there was minimal impact on various characteristic domains of the ECG signal, enhancing its acceptability in medical applications.
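The PRD and PRDN figures quoted above follow the usual definitions: PRD = 100·√(Σ(x−x̂)² / Σx²), while PRDN removes the signal mean from the denominator so the figure does not depend on any DC offset in the recording. A small reference implementation:

```python
import numpy as np

def prd(x, x_rec):
    """Percent root-mean-square difference (no mean removal)."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def prdn(x, x_rec):
    """Normalized PRD: the baseline (mean) is removed from the reference,
    making the figure independent of the recording's DC offset."""
    x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2)
                           / np.sum((x - x.mean()) ** 2))
```

Because the PRDN denominator is never larger than the PRD denominator, PRDN ≥ PRD for the same reconstruction — consistent with the 11.62 versus 7.80 figures reported above.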

5 citations

Journal ArticleDOI
TL;DR: The proposed WNC-ECGlet method is observed to perform better than conventional methods in CS-based ECG signal reconstruction and may enhance the use of WBAN technology in healthcare informatics and telemedicine.

5 citations

Proceedings ArticleDOI
19 Apr 2011
TL;DR: Two algorithms suited for real-time biomedical signal compression are described, Amplitude Threshold compression and SQ segment compression; the PRD value for both proposed methods is lower than the PRD values of the reference methods DCT and TP.
Abstract: In this article, two algorithms suited for real-time biomedical signal compression are described: Amplitude Threshold compression and SQ segment compression. A comparison of these methods with well-known methods, the lossy Discrete Cosine Transform (DCT) and the lossless Turning Point (TP), is shown. The compression method outputs were reconstructed using cubic spline approximation and compared. The values of compression ratio (CR), percent root-mean-square difference (PRD), and area criteria were chosen for method comparison. It is shown that the methods presented here (Threshold, SQ segment) provide considerably lower CR values than the DCT method and slightly higher CR values than the TP method. However, the PRD value for both proposed methods is lower than the PRD values of the reference methods DCT and TP.
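Amplitude-threshold compression with cubic-spline reconstruction, as evaluated above, can be sketched as follows. This is an illustrative reading of the method, not the authors' code; SciPy's `CubicSpline` stands in for the spline step, and the threshold value is arbitrary:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def amplitude_threshold(signal, thr):
    """Sketch of amplitude-threshold compression: a sample is stored only
    when it differs from the last stored value by at least `thr`."""
    idx, vals = [0], [signal[0]]
    for n in range(1, len(signal)):
        if abs(signal[n] - vals[-1]) >= thr:
            idx.append(n)
            vals.append(signal[n])
    if idx[-1] != len(signal) - 1:        # always keep the endpoint
        idx.append(len(signal) - 1)
        vals.append(signal[-1])
    return np.array(idx), np.array(vals)

def reconstruct(idx, vals, length):
    """Cubic-spline reconstruction over the retained samples."""
    return CubicSpline(idx, vals)(np.arange(length))
```

The PRD of the reconstruction then quantifies the trade-off against the achieved CR (stored samples versus original samples), which is the comparison the abstract reports.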

4 citations


Cites methods from "ECG data compression techniques-a u..."

  • Data compression techniques [1, 2] are categorized as those in which the compressed data is reconstructed to an exact form of the original signal (lossless) and those in which higher compression ratios can be achieved at the cost of some error in the reconstructed signal (lossy).

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
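The minimum-redundancy construction described here is the familiar Huffman procedure: repeatedly merge the two least-probable subtrees, prefixing their codewords with 0 and 1. A compact sketch:

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) code for the symbol
    frequencies observed in `message`; returns {symbol: bitstring}."""
    freq = Counter(message)
    if len(freq) == 1:                        # degenerate one-symbol case
        return {next(iter(freq)): "0"}
    # heap entries: (weight, unique tie-breaker, {symbol: partial code})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)       # two least-frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]
```

More frequent symbols receive shorter codewords, minimizing the average number of coding digits per message — the property the abstract defines.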

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
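The all-pole, autocorrelation-method predictor described above amounts to solving the normal equations R a = r, where R is the Toeplitz autocorrelation matrix. A small sketch (a direct solve rather than the Levinson-Durbin recursion practical coders use):

```python
import numpy as np

def lpc(signal, order):
    """All-pole linear prediction via the autocorrelation method:
    solve the normal equations R a = r for coefficients a such that
    x[n] is approximated by sum_k a[k] * x[n-k]."""
    x = np.asarray(signal, float)
    # autocorrelation r[k] for lags 0..order
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])     # Toeplitz matrix
    return np.linalg.solve(R, r[1:order + 1])

def predict(x, a):
    """One-step prediction of each sample from its `order` predecessors."""
    order = len(a)
    return np.array([np.dot(a, x[n - 1::-1][:order]) if n >= order else x[n]
                     for n in range(len(x))])
```

In DPCM-style ECG compression, only the prediction residual x[n] − x̂[n] is encoded; when the predictor fits well, the residual has far lower entropy than the signal itself.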

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
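Arithmetic coding maps a whole message to a single number inside a nested subinterval of [0, 1), with each symbol narrowing the interval in proportion to its probability. A toy floating-point version (practical coders renormalize with integer arithmetic to avoid precision loss, and emit bits incrementally):

```python
def _cumulative(probs):
    """Map each symbol to its half-open slice of [0, 1)."""
    cum, lo = {}, 0.0
    for s, p in probs.items():
        cum[s] = (lo, lo + p)
        lo += p
    return cum

def arith_encode(message, probs):
    """Toy arithmetic encoder: returns one number in the final interval."""
    cum = _cumulative(probs)
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        c_lo, c_hi = cum[s]
        low, high = low + span * c_lo, low + span * c_hi
    return (low + high) / 2        # any value in [low, high) decodes

def arith_decode(code, probs, n):
    """Recover n symbols by repeatedly locating and rescaling `code`."""
    cum = _cumulative(probs)
    out = []
    for _ in range(n):
        for s, (c_lo, c_hi) in cum.items():
            if c_lo <= code < c_hi:
                out.append(s)
                code = (code - c_lo) / (c_hi - c_lo)
                break
    return "".join(out)
```

Unlike Huffman coding, symbol probabilities need not be powers of 1/2, which is why arithmetic coding achieves greater compression and separates the model from the channel encoding, as the abstract notes.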

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
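The data-compression value of these transforms comes from energy compaction: for correlated signals, a few coefficients carry nearly all the energy, so the rest can be discarded. A small check using an orthonormal DCT-II computed directly from its definition (O(N²); the fast algorithms surveyed above are what production code would use):

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II from its definition:
    X[k] = s_k * sum_n x[n] * cos(pi * (n + 0.5) * k / N),
    with s_0 = sqrt(1/N) and s_k = sqrt(2/N) otherwise."""
    x = np.asarray(x, float)
    N = len(x)
    n = np.arange(N)
    C = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)  # C[k, n]
    X = C @ x
    scale = np.full(N, np.sqrt(2.0 / N))
    scale[0] = np.sqrt(1.0 / N)
    return scale * X
```

Because the transform is orthonormal, total energy is preserved (Parseval), while for a smooth input such as a ramp the low-order coefficients dominate — the variance-distribution criterion the abstract lists.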

928 citations