Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding. A framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
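Of the direct methods named above, the Turning-point algorithm is compact enough to sketch. The following is a minimal illustration under the usual textbook formulation (not necessarily the exact variant surveyed): samples are examined in pairs and, from each pair, the sample that preserves a slope sign change is retained, giving a fixed 2:1 compression ratio.

```python
def _sign(v):
    return (v > 0) - (v < 0)

def turning_point(x):
    # Direct (time-domain) compression: keep the first sample, then from
    # each following pair retain the sample that preserves a turning
    # point (slope sign change) relative to the last retained sample.
    # Output is half the input rate, so the compression ratio is 2:1.
    out = [x[0]]
    i = 1
    while i + 1 < len(x):
        s1 = _sign(x[i] - out[-1])
        s2 = _sign(x[i + 1] - x[i])
        out.append(x[i] if s1 * s2 < 0 else x[i + 1])
        i += 2
    return out
```

On a monotone run the algorithm keeps every second sample; at a local extremum it keeps the peak itself, which is what preserves QRS amplitudes.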
Citations
Proceedings ArticleDOI
22 Oct 2007
TL;DR: This work describes a lossy electrocardiogram (ECG) compression algorithm based on R-R segmentation and segment matching; simulations reveal that very high compression ratios are possible on very regular signals.
Abstract: This work describes a lossy electrocardiogram (ECG) compression algorithm based on R-R segmentation and segment matching. An ECG can be thought of as a quasi-periodic signal, with many similarities existing between heartbeats acquired from the same source. Through the use of an adaptive dictionary, it is possible to exploit the similarities between new and previously encountered patterns, incorporating new patterns when a significant change in morphology has been observed. Algorithm simulation reveals that very high compression ratios are possible on very regular signals.
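The adaptive-dictionary idea can be sketched as follows. The mean-square distance measure, the tolerance, and the decision to store unmatched beats verbatim are illustrative assumptions here, not the paper's exact design (which would also code a residual):

```python
def compress_beats(beats, tol=0.1):
    # Adaptive dictionary of representative heartbeats: a beat close to a
    # stored template is replaced by a dictionary index; a beat with new
    # morphology is stored and added to the dictionary.
    dictionary, out = [], []
    for beat in beats:
        best, best_d = None, float("inf")
        for idx, t in enumerate(dictionary):
            if len(t) == len(beat):
                d = sum((a - b) ** 2 for a, b in zip(beat, t)) / len(beat)
                if d < best_d:
                    best, best_d = idx, d
        if best is not None and best_d <= tol:
            out.append(("match", best))    # only an index is transmitted
        else:
            dictionary.append(beat)
            out.append(("new", beat))      # morphology change: store beat
    return out
```

On a regular rhythm almost every beat becomes a one-index "match" entry, which is why the achievable compression ratio grows with signal regularity.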
Journal ArticleDOI
TL;DR: In this article, the authors leverage the self-similarity of the electrocardiogram (ECG) signal to recover missing features in event-based sampled ECG signals, dynamically selecting patient-representative templates together with a novel dynamic time warping algorithm to infer the morphology of sampled heartbeats.
Journal ArticleDOI
01 Jul 2021
TL;DR: In this article, the authors proposed data compression algorithms for optimizing data rate and power in Bluetooth Low Energy (BLE) network for monitoring patients in large-scale high bandwidth connected patient monitoring solution.
Abstract: There is a need for a low-power, high-bandwidth connected patient monitoring solution that handles patients in large volumes and is easy to deploy. The paper proposes data compression algorithms for optimizing data rate and power in a Bluetooth Low Energy (BLE) network. The system is implemented using BLE devices for data collection from a patient and a gateway that transfers the data to a remote computer over an Internet Protocol (IP) network. The compression algorithms are optimized for the ARM microprocessor and compress the recorded values, along with time stamps, into a block. The sensor module is designed to be wearable; it continuously reads temperature and electrocardiogram (ECG) signals and transmits them to a nearby gateway device. The gateway decompresses the data and uploads it to a cloud infrastructure. The system also allows for intensive monitoring of patient vitals. The collected data is processed and displayed in a moving graph on a remote web terminal. Results indicate that data rate and power were reduced by more than 45% and 3%, respectively, showing that a high-bandwidth monitoring solution is feasible at no extra cost in power. This system would be very beneficial to society in terms of affordable medical care, as it can be used in large hospitals and remote medical camps alike, where patients can be monitored remotely.
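A minimal sketch of the kind of block compression described, with timestamped samples delta-encoded before transmission. The block layout is an assumption for illustration; the paper's actual format is not specified here, and a real implementation would additionally pack the small deltas into variable-length integers:

```python
def delta_encode(samples):
    # samples: list of (timestamp_ms, value) pairs. Store the first pair
    # raw and every later pair as differences from its predecessor; the
    # small deltas then fit in far fewer bytes than raw readings.
    first = samples[0]
    deltas = [(t - tp, v - vp)
              for (t, v), (tp, vp) in zip(samples[1:], samples[:-1])]
    return [first] + deltas

def delta_decode(block):
    # Invert the encoding by running sums.
    out = [block[0]]
    for dt, dv in block[1:]:
        t, v = out[-1]
        out.append((t + dt, v + dv))
    return out
```

Because physiological signals are sampled at a fixed period and change slowly, both the time and value deltas are near-constant small integers, which is what makes the data-rate reduction possible on a constrained ARM device.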
Proceedings ArticleDOI
23 Sep 2001
TL;DR: In this article, the authors used piecewise linear approximation (PLA) and beat detection (the peak method) algorithms to compress and store intracardiac electrograms in implantable devices.
Abstract: This paper studies methods for intracardiac electrogram compression that are suitable for implementation in implantable devices. The algorithms are based on piecewise linear approximation (PLA) methods and beat detection (the peak method). Intracardiac electrograms were obtained from the right atrium and ventricle during electrophysiological studies. The total atrial set consists of 5060 s of bipolar recordings and 680 s of unipolar electrograms; the ventricular data set contains 1210 s of bipolar and 480 s of unipolar signals. The peak method clearly outperforms the others in compressing bipolar signals, while PLA methods are needed to compress unipolar data reliably. Over the whole bipolar database (including the atrial and ventricular sets), the average compression ratio (CR) reached 7.6, while first-order piecewise linear approximation on unipolar electrograms reached CR = 6.6. These preliminary results show that time consumption can be suitably reduced with real-time compression for implantable devices. The ability to compress and store intracardiac electrograms in implantable devices allows detailed verification of appropriate device intervention and might open up new perspectives in the study of the mechanisms involved in the onset of malignant tachyarrhythmias.
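First-order piecewise linear approximation can be sketched with a greedy rule (an illustration only; the paper's exact PLA variants may differ): extend each segment as long as every intermediate sample stays within a tolerance of the straight line joining the segment endpoints, then store only the knot samples.

```python
def pla_compress(x, eps=0.05):
    # Greedy first-order PLA: grow the current segment while all
    # intermediate samples lie within eps of the line between the
    # endpoints; only the knot indices (and their values) are stored.
    knots = [0]
    i = 0
    while i < len(x) - 1:
        j = i + 1
        while j + 1 < len(x):
            line = lambda k: x[i] + (x[j + 1] - x[i]) * (k - i) / (j + 1 - i)
            if all(abs(x[k] - line(k)) <= eps for k in range(i + 1, j + 1)):
                j += 1
            else:
                break
        knots.append(j)
        i = j
    return knots  # transmit (index, value) pairs; CR = len(x) / len(knots)
```

The ramp signal below compresses to three knots, a CR of 7/3; on the long, slowly varying stretches between beats the same rule yields the much higher ratios reported.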
References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations
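The limiting process described above can be seen numerically. Discretizing a continuous density into bins of width Δ gives a discrete entropy that behaves like h(X) + log2(1/Δ), where h(X) is the differential entropy; the divergent term is the "new effect" of the continuous case. A small sketch for a uniform density:

```python
import math

def discrete_entropy_uniform(width, delta):
    # Discretize Uniform(0, width) into bins of size delta: each bin has
    # probability delta/width, so the discrete entropy equals
    # log2(width/delta) = log2(width) + log2(1/delta), i.e. the
    # differential entropy log2(width) plus a term that diverges as the
    # bins shrink.
    n = round(width / delta)
    p = 1.0 / n
    return -sum(p * math.log2(p) for _ in range(n))
```

Halving Δ adds exactly one bit, so only entropy differences (rates, mutual informations) remain finite in the limit, which is why the continuous theory is phrased in those terms.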

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.

5,221 citations
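The minimum-redundancy construction can be sketched in a few lines: repeatedly merge the two least-probable subtrees, prefixing "0" and "1" to their codewords. (This standard heap-based formulation is an illustration; the original paper works directly with the combining argument.)

```python
import heapq

def huffman_code(freq):
    # Build a minimum-redundancy (Huffman) code for {symbol: weight}.
    # Each heap entry is [weight, tiebreak, {symbol: codeword}]; merging
    # the two lightest entries prefixes their codewords with 0 and 1.
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        for s in c1:
            c1[s] = "0" + c1[s]
        for s in c2:
            c2[s] = "1" + c2[s]
        c1.update(c2)
        heapq.heappush(heap, [w1 + w2, i, c1])
        i += 1
    return heap[0][2]
```

For dyadic probabilities such as {0.5, 0.25, 0.25} the average codeword length equals the entropy exactly, the case where no redundancy at all remains.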

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.

4,206 citations
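The stationary (autocorrelation) method reduces to solving the all-pole normal equations, which the Levinson-Durbin recursion does in O(p²) while producing the reflection coefficients as a by-product. A minimal sketch:

```python
def lpc(x, order):
    # Autocorrelation method: solve the all-pole normal equations with
    # the Levinson-Durbin recursion. Returns coefficients a[1..p] such
    # that x[n] is predicted as sum_k a[k] * x[n-k], plus the residual
    # (minimum prediction error) energy.
    n = len(x)
    r = [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]
    a = [0.0] * (order + 1)
    err = r[0]
    for m in range(1, order + 1):
        # k is the m-th reflection (partial correlation) coefficient.
        k = (r[m] - sum(a[j] * r[m - j] for j in range(1, m))) / err
        new_a = a[:]
        new_a[m] = k
        for j in range(1, m):
            new_a[j] = a[j] - k * a[m - j]
        a = new_a
        err *= (1.0 - k * k)
    return a[1:], err
```

On an AR(1) signal x[n] = 0.9·x[n-1] the first-order predictor recovers the coefficient 0.9 almost exactly; the reflection coefficients computed inside the loop are the quantities the paper recommends quantizing for transmission.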

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.

3,188 citations
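The core interval-narrowing idea can be sketched in floating point (an illustration only: practical coders, including the one this paper describes, renormalize fixed-precision integers and emit bits incrementally, so this sketch is limited to short messages):

```python
def arith_encode(msg, probs):
    # probs: list of (symbol, probability) defining fixed subintervals of
    # [0, 1). Each symbol narrows [lo, hi) to its subinterval; any number
    # inside the final interval identifies the whole message.
    lo, hi = 0.0, 1.0
    for s in msg:
        span = hi - lo
        c = 0.0
        for sym, p in probs:
            if sym == s:
                lo, hi = lo + span * c, lo + span * (c + p)
                break
            c += p
    return (lo + hi) / 2

def arith_decode(x, n, probs):
    # Recover n symbols by locating x in a subinterval and rescaling.
    out = []
    for _ in range(n):
        c = 0.0
        for sym, p in probs:
            if c <= x < c + p:
                out.append(sym)
                x = (x - c) / p
                break
            c += p
    return "".join(out)
```

Because the final interval has width equal to the product of the symbol probabilities, the code length approaches the entropy without rounding each symbol up to a whole bit, which is the advantage over Huffman coding claimed in the abstract.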

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.

928 citations
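The energy-compaction property these performance criteria measure can be sketched with a naive DCT-II (a direct O(n²) evaluation for illustration; fast factored algorithms are what the paper surveys):

```python
import math

def dct2(x):
    # Unnormalized DCT-II: c[m] = sum_k x[k] * cos(pi * (k + 0.5) * m / n).
    # For smooth signals almost all energy lands in the first few c[m],
    # so the remaining coefficients can be coarsely quantized or dropped.
    n = len(x)
    return [sum(x[k] * math.cos(math.pi * (k + 0.5) * m / n) for k in range(n))
            for m in range(n)]
```

For the ramp [1..8], over 95% of the coefficient energy sits in the first two coefficients; this variance distribution is exactly why transform methods compress ECG and image data well.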