Journal Article

ECG data compression techniques-a unified approach

TLDR
The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract
Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
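
To make the tolerance-comparison idea concrete, here is a minimal sketch, not taken from the paper, of a zero-order-hold compressor of the kind that underlies AZTEC-style direct compression: runs of samples that stay within a user-chosen tolerance eps of a held value are replaced by (value, run-length) plateaus. The function names and the eps parameter are illustrative assumptions.

    def zero_order_hold_compress(samples, eps):
        """Replace runs of samples within +/-eps of a held value by
        (held_value, run_length) pairs (simplified AZTEC-style plateaus)."""
        plateaus = []
        held, count = samples[0], 1
        for x in samples[1:]:
            if abs(x - held) <= eps:
                count += 1
            else:
                plateaus.append((held, count))
                held, count = x, 1
        plateaus.append((held, count))
        return plateaus

    def zero_order_hold_expand(plateaus):
        """Reconstruct a step-wise approximation of the original signal."""
        out = []
        for value, length in plateaus:
            out.extend([value] * length)
        return out

A larger eps produces fewer, longer plateaus (higher compression) at the cost of a coarser step-wise reconstruction.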


Citations
Proceedings Article

ECG Compression Based on Successive Differences

Mile Jovanov et al.
TL;DR: In this paper, an algorithm inspired by pulse code modulation in telecommunication theory and based on successive differences between neighboring electrocardiogram samples is proposed to reduce the amount of physical space the electrocardiogram data occupies and needs to be transferred, with the goal of an efficient algorithm that can run on hardware commonly found in a medical-device setting.
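
As a rough illustration of the successive-difference idea (a sketch assuming integer-valued samples, not the authors' implementation), each sample can be replaced by its difference from the previous one; the resulting small differences are cheap to store or entropy-code.

    def delta_encode(samples):
        """Store the first sample, then successive differences."""
        diffs = [samples[0]]
        for prev, curr in zip(samples, samples[1:]):
            diffs.append(curr - prev)
        return diffs

    def delta_decode(diffs):
        """Invert delta_encode by cumulative summation."""
        out = [diffs[0]]
        for d in diffs[1:]:
            out.append(out[-1] + d)
        return out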

Successive partition zero coder for embedded lossless wavelet-based ECG signal coding

TL;DR: In this article, a Successive Partition Zero Coder (SPZC) for embedded lossless wavelet-based ECG signal coding, which uses both horizontal and vertical bit scanning, is proposed.
Proceedings Article

A robust method of electrocardiogram data compression using adaptive template matching on discrete cosine transform coefficients

TL;DR: This work introduces a method of template matching applied to DCT coefficients of ECG data, which results in compression ratios on the order of 10:1 with little signal degradation.
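
The template-matching step is specific to that work, but the underlying DCT stage can be sketched as follows; this is an assumption-laden illustration using SciPy's dct/idct that simply keeps the largest-magnitude coefficients rather than matching them against templates.

    import numpy as np
    from scipy.fft import dct, idct

    def dct_truncate(beat, keep):
        """Compress one ECG beat by keeping the `keep` largest DCT-II
        coefficients and zeroing the rest; returns the reconstruction."""
        coeffs = dct(np.asarray(beat, dtype=float), norm="ortho")
        small = np.argsort(np.abs(coeffs))[:-keep]
        coeffs[small] = 0.0
        return idct(coeffs, norm="ortho")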

Smooth Retrieved Quality for Telecardiology

TL;DR: The proposed wavelet-threshold-based ECG signal compression method requires less computation time, since it does not need QRS detection, period normalization, amplitude normalization, or mean removal, and can be used for the transmission of ECG signals over band-limited telephone networks.
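
A minimal sketch of the wavelet-threshold idea follows, using a single-level Haar transform written directly in NumPy; the actual wavelet, decomposition depth, and threshold rule of the cited method are not reproduced here and everything below is an assumption.

    import numpy as np

    def haar_threshold(signal, thresh):
        """One-level Haar transform, zero small detail coefficients,
        then invert; the signal length is assumed to be even."""
        x = np.asarray(signal, dtype=float)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        detail[np.abs(detail) < thresh] = 0.0   # discard small details
        even = (approx + detail) / np.sqrt(2.0)
        odd = (approx - detail) / np.sqrt(2.0)
        out = np.empty_like(x)
        out[0::2], out[1::2] = even, odd
        return out
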
Journal Article

A Compression Method for Power Quality Data

TL;DR: The proposed method uses the Deflate algorithm as the lossless compression stage, while the polynomial approximation is intended to decrease the entropy of the signal and thus increase the compression ratio achieved by Deflate.
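
That pipeline can be sketched roughly as below, using numpy.polyfit for the approximation stage and the standard-library zlib for Deflate; the block length, polynomial degree, and residual quantization are illustrative assumptions, not the cited method's parameters.

    import zlib
    import numpy as np

    def compress_block(samples, degree=4, scale=100):
        """Fit a low-order polynomial, quantize the residual, and
        Deflate-compress the resulting low-entropy residual bytes."""
        y = np.asarray(samples, dtype=float)
        x = np.arange(len(y))
        coeffs = np.polyfit(x, y, degree)
        residual = y - np.polyval(coeffs, x)
        quantized = np.round(residual * scale).astype(np.int16)
        return coeffs, zlib.compress(quantized.tobytes())
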
References
Journal Article

A mathematical theory of communication

TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Journal Article

A Method for the Construction of Minimum-Redundancy Codes

TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
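
For reference, here is a compact sketch of minimum-redundancy (Huffman) code construction over a symbol-frequency table, using the standard-library heapq; the function and variable names are illustrative.

    import heapq

    def huffman_code(freqs):
        """Build a minimum-redundancy prefix code from {symbol: count}."""
        heap = [[count, i, {symbol: ""}]
                for i, (symbol, count) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            lo = heapq.heappop(heap)
            hi = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in lo[2].items()}
            merged.update({s: "1" + code for s, code in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], tiebreak, merged])
            tiebreak += 1
        return heap[0][2]

Calling huffman_code({'a': 5, 'b': 2, 'c': 1}) returns a prefix-free mapping in which the most frequent symbol receives the shortest codeword.
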
Journal Article

Linear prediction: A tutorial review

TL;DR: This paper gives a tutorial exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and the present and past values of a hypothetical input to a system whose output is the given signal.
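
In the ECG-compression context, linear prediction is typically used to whiten the signal before entropy coding. A minimal least-squares sketch is shown below; the predictor order and the solver are arbitrary assumptions rather than anything prescribed by the tutorial.

    import numpy as np

    def lpc_residual(signal, order=3):
        """Fit predictor coefficients by least squares and return the
        prediction residual e[n] = x[n] - sum_k a[k] * x[n-k]."""
        x = np.asarray(signal, dtype=float)
        rows = np.column_stack(
            [x[order - k - 1: len(x) - k - 1] for k in range(order)])
        targets = x[order:]
        coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)
        residual = targets - rows @ coeffs
        return coeffs, residual
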
Journal Article

Arithmetic coding for data compression

TL;DR: Arithmetic coding, not the better-known Huffman method, represents the state of the art in data compression: it gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
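
A toy floating-point illustration of the interval-narrowing idea is sketched below; practical coders use integer arithmetic with renormalization, so everything here is a simplified assumption suitable only for short messages. probs maps each symbol to its (cumulative_low, cumulative_high) interval on [0, 1), e.g. {'a': (0.0, 0.6), 'b': (0.6, 0.9), 'c': (0.9, 1.0)}.

    def arithmetic_encode(message, probs):
        """Shrink the interval [low, high) once per symbol and return a
        number inside the final interval."""
        low, high = 0.0, 1.0
        for symbol in message:
            width = high - low
            c_lo, c_hi = probs[symbol]
            low, high = low + width * c_lo, low + width * c_hi
        return (low + high) / 2.0

    def arithmetic_decode(value, probs, length):
        """Invert the encoder for a message of known length."""
        low, high = 0.0, 1.0
        out = []
        for _ in range(length):
            scaled = (value - low) / (high - low)
            for symbol, (c_lo, c_hi) in probs.items():
                if c_lo <= scaled < c_hi:
                    out.append(symbol)
                    width = high - low
                    low, high = low + width * c_lo, low + width * c_hi
                    break
        return out
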
Proceedings Article

Orthogonal transforms for digital signal processing

TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
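
One of the criteria listed there, energy compaction under coefficient truncation, can be sketched as a quick mean-square-error comparison between two orthogonal transforms (the DCT via SciPy versus a NumPy-built Walsh-Hadamard matrix); the block length and the number of kept coefficients are arbitrary assumptions.

    import numpy as np
    from scipy.fft import dct, idct

    def truncation_mse(signal, keep):
        """Mean-square error after keeping the `keep` largest coefficients,
        for the DCT and a Walsh-Hadamard transform (length must be 2**m)."""
        x = np.asarray(signal, dtype=float)
        n = len(x)
        # Sylvester construction of an orthonormal Hadamard matrix.
        h = np.array([[1.0]])
        while h.shape[0] < n:
            h = np.block([[h, h], [h, -h]])
        h /= np.sqrt(n)
        results = {}
        for name, fwd, inv in (
            ("dct", lambda v: dct(v, norm="ortho"),
                    lambda c: idct(c, norm="ortho")),
            ("walsh-hadamard", lambda v: h @ v, lambda c: h.T @ c),
        ):
            coeffs = fwd(x)
            coeffs[np.argsort(np.abs(coeffs))[:-keep]] = 0.0
            results[name] = float(np.mean((x - inv(coeffs)) ** 2))
        return results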