Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding; a framework for the evaluation and comparison of ECG compression schemes is also given.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
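Of the direct methods the abstract lists, the Turning-point algorithm is the simplest to illustrate: it halves the sample rate while preserving local extrema. The sketch below is an illustration of that idea, not code from the paper, and assumes a single-channel signal given as a Python list of numeric samples:

```python
def turning_point(samples):
    """Turning-point (TP) compression: roughly 2:1 reduction that keeps
    slope sign changes (local extrema) of the signal."""
    if len(samples) < 3:
        return list(samples)
    out = [samples[0]]
    x0 = samples[0]          # last retained sample
    i = 1
    while i + 1 < len(samples):
        x1, x2 = samples[i], samples[i + 1]
        # A turning point occurs when the slope changes sign at x1.
        if (x1 - x0) * (x2 - x1) < 0:
            out.append(x1)   # keep the turning point
            x0 = x1
        else:
            out.append(x2)   # otherwise keep the later sample
            x0 = x2
        i += 2
    return out
```

On a monotone run the algorithm simply keeps every second sample; on an oscillating run it keeps the peaks and valleys, which is why it preserves QRS amplitudes better than naive decimation.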
Citations
Book ChapterDOI
27 May 2022
TL;DR: In this paper, a quality-controlled compression method is presented that compresses ECG data efficiently while retaining its quality up to a specified mark; a distortion measure is used, with its value constrained to a tolerable range.
Abstract: Electrocardiogram (ECG) signals are widely used by cardiologists for the early detection of cardiovascular diseases (CVDs), for which long-term ECG data is analysed. Healthcare devices used for the acquisition of long-term ECG data require an efficient ECG data compression algorithm, but compressing an ECG signal while maintaining its quality is a challenge. Hence, this chapter presents a quality-controlled compression method that compresses ECG data efficiently while retaining its quality up to a specified mark. For this, a distortion measure is used, with its value constrained to a tolerable range. The compression performance of the proposed algorithm is evaluated using ECG records from the MIT-BIH arrhythmia database, and the algorithm is found to perform well. The compressed ECG data are also used for normal and arrhythmia beat classification; the good classification performance obtained indicates the preserved diagnostic quality of the compressed ECG data.
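The abstract does not name its distortion measure; the percentage RMS difference (PRD) is the measure most commonly used to grade reconstructed ECG quality, and a quality-controlled scheme can, for example, accept a compression setting only while the PRD stays inside the tolerable range. A minimal sketch, assuming equal-length original and reconstructed sample lists:

```python
import math

def prd(original, reconstructed):
    """Percentage RMS difference (PRD): 100 * sqrt(residual energy /
    signal energy). Lower is better; 0 means a perfect reconstruction."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)
```

A quality-controlled loop would then tighten the compressor's error threshold whenever `prd(...)` exceeds the chosen tolerance.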
Posted ContentDOI
09 May 2023
TL;DR: In this paper, an IoT-based real-time healthcare prototype is implemented using unobtrusive heterogeneous sensor nodes to create a Wireless Body Area Network (WBAN) architecture.
Abstract: Internet of Things (IoT)-based health monitoring systems centre on continuous, real-time monitoring of individuals' health. The emergence of IoT-enabled healthcare devices is rapidly changing the health infrastructure, and IoT-enabled technologies can facilitate effortless interaction among different devices and platforms. Smart health monitoring is a notable IoT application, and researchers have therefore proposed different types of IoT frameworks. In the present work, an IoT-based real-time healthcare prototype has been implemented using unobtrusive heterogeneous sensor nodes to create a Wireless Body Area Network (WBAN) architecture. The prototype collects vital parameters of the human body using heterogeneous sensors: electrocardiogram (ECG), temperature, peripheral oxygen saturation (SpO2), and pulse rate. The collected data shows the condition of the patient in a graphical user interface (GUI) in compliance with the Modified Early Warning Score (MEWS) medical guidelines. The prototype, along with its user interface, is named SWAST KHOJ. The data is periodically uploaded with timestamp information to a local server. Further, the Quality of Service (QoS) of the prototype is evaluated for different short-range communication technologies, such as LAN (wire), ZigBee, and Bluetooth, in an indoor environment. The measured end-to-end delays are 0.514 ms, 0.62 ms, 0.417 ms, and 1.92 ms for wire, Wi-Fi, ZigBee, and Bluetooth respectively, which is less than in previous works. The throughput of the prototype has also been evaluated for each communication technology.
Proceedings ArticleDOI
03 Sep 2002
TL;DR: Applying the MDL-based digital signal segmentation method proposed in [1], a new QRS detection algorithm is designed and tested; it is experimentally shown to perform better than 11 other algorithms in terms of false-positive/false-negative estimations.
Abstract: Applying the MDL-based digital signal segmentation method proposed in [1], a new QRS detection algorithm is designed and tested. It is experimentally shown that the algorithm performs better than 11 other algorithms in terms of false-positive/false-negative estimations. The newly defined Weighted Diagnostic Distortion (WDD) measure [2] is computed for annotated files from the MIT-BIH database to evaluate the accuracy of the detection algorithm and, also, to verify how well the P- and T-waves are conserved by the broken-line approximation.

Cites background or methods from "ECG data compression techniques-a u..."

  • ...The FAN method uses only a first-order interpolator for approximating the original signal with a broken line; a line segment approximates all consecutive samples for which the maximum error does not exceed a given threshold [4]....


  • ...In the case we investigate, the degree of the polynomial for each segment is constrained to be 1, according to a widely accepted model of ECG signals [4]....


  • ...The most important drawback shared by methods such as AZTEC or FAN is the damage done to the P-wave and T-wave by lossy compression [4][9]....

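The Fan method quoted above can be sketched as follows. This is an illustrative reconstruction, not code from either paper: it assumes uniformly sampled input, an absolute error threshold `eps`, and the standard two-converging-slopes ("fan") formulation, in which a sample is stored only when no straight line from the last stored sample passes within `eps` of every skipped sample:

```python
def fan_compress(samples, eps):
    """Fan compression (first-order interpolation). Returns the retained
    (index, value) pairs; all dropped samples lie within +/- eps of the
    broken line joining consecutive retained points."""
    n = len(samples)
    if n < 3:
        return list(enumerate(samples))
    kept = [(0, samples[0])]
    t0, y0 = 0, samples[0]               # origin of the current fan
    U = samples[1] + eps - y0            # upper fan slope (per step)
    L = samples[1] - eps - y0            # lower fan slope
    prev = samples[1]
    for t in range(2, n):
        dt = t - t0
        u = (samples[t] + eps - y0) / dt
        l = (samples[t] - eps - y0) / dt
        U, L = min(U, u), max(L, l)      # converge the two slopes
        if L > U:
            # Fan collapsed: no single line fits; store the previous
            # sample and restart the fan from it.
            kept.append((t - 1, prev))
            t0, y0 = t - 1, prev
            U = samples[t] + eps - y0
            L = samples[t] - eps - y0
        prev = samples[t]
    kept.append((n - 1, samples[n - 1]))
    return kept
```

A flat segment collapses to just its endpoints, while an isolated spike forces stored points around it — which is also why, as the third quote notes, low-amplitude P- and T-waves near the threshold are the features most at risk.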

Book ChapterDOI
01 Jan 2007
TL;DR: A model for the compression of ECG signals based on the wavelet transform is presented; the expansion coefficients of the original signal are coded with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm, with a new modification for 1-D signal analysis.
Abstract: This paper presents a model for the compression of ECG signals based on the wavelet transform; the expansion coefficients of the original signal are coded with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. SPIHT is the latest generation of coders used with the wavelet transform and is employed in more sophisticated coding of images and signals. In this work we implement a modification of the MSPIHT algorithm, introducing a new adaptation for 1-D signal analysis. Compression ratios of up to 24:1 for ECG signals yield results acceptable for visual inspection and analysis by medical doctors.
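SPIHT codes the coefficients of a wavelet decomposition; the simplest 1-D decomposition it could consume is the Haar transform. The sketch below shows one level of the orthonormal Haar transform and its inverse — an illustration of the wavelet front end only, not the MSPIHT coder described in the chapter — assuming an even-length signal:

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform on an
    even-length signal: pairwise sums (approximation) and pairwise
    differences (detail), each scaled by 1/sqrt(2)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) * s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) * s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_step (the transform is orthonormal)."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) * s)
        x.append((a - d) * s)
    return x
```

Repeating `haar_step` on the approximation band gives the multilevel decomposition; compression comes from the detail coefficients being near zero on smooth stretches, which is exactly what a zerotree coder like SPIHT exploits.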
References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
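The minimum-redundancy construction this abstract describes is what is now called Huffman coding: repeatedly merge the two least-frequent subtrees. A compact sketch using a binary heap (an illustration of the technique, not the paper's original notation):

```python
import heapq
from collections import Counter

def huffman_code(message):
    """Build a minimum-redundancy (Huffman) prefix code for the symbol
    frequencies observed in `message`; returns {symbol: bitstring}."""
    freq = Counter(message)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}   # degenerate one-symbol alphabet
    # Heap entries are (weight, unique tiebreak, tree); a tree is either
    # a symbol or a (left, right) pair.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)  # two lightest subtrees...
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))  # ...merged
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

For "aaabbc" the frequent symbol gets a 1-bit code and the rare ones 2-bit codes, minimizing the average code length per message.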

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
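In the stationary case described above, the all-pole least-squares problem reduces to the autocorrelation normal equations, solved efficiently by the Levinson-Durbin recursion, whose intermediate values are exactly the reflection (PARCOR) coefficients the abstract mentions for transmission. A pure-Python sketch, assuming a windowed signal given as a list:

```python
def lpc(signal, order):
    """All-pole linear prediction via the Levinson-Durbin recursion on
    the signal's autocorrelation. Returns a[1..p] such that
    x[n] is approximated by sum(a[k] * x[n - k] for k in 1..p)."""
    n = len(signal)
    # Biased autocorrelation estimates r[0..order].
    r = [sum(signal[i] * signal[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = [0.0] * (order + 1)
    err = r[0]                            # zeroth-order prediction error
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err                     # reflection (PARCOR) coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):             # update lower-order coefficients
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)              # error shrinks at each order
    return a[1:]
```

For DPCM-style ECG compression, the predictor output is subtracted from the signal and only the (lower-energy) residual is quantized and entropy coded.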

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
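The interval-narrowing idea behind arithmetic coding can be shown with a toy coder. This sketch uses raw floating point, so it only works for short messages; production coders use integer arithmetic with renormalization, and the model/coder separation the abstract praises corresponds here to `build_intervals` being swappable without touching the encoder:

```python
def build_intervals(freq):
    """Model: map each symbol to its cumulative-probability sub-interval."""
    total = sum(freq.values())
    intervals, low = {}, 0.0
    for sym in sorted(freq):
        p = freq[sym] / total
        intervals[sym] = (low, low + p)
        low += p
    return intervals

def arith_encode(message, intervals):
    """Narrow [low, high) once per symbol; any number in the final
    interval identifies the whole message."""
    low, high = 0.0, 1.0
    for sym in message:
        a, b = intervals[sym]
        low, high = low + (high - low) * a, low + (high - low) * b
    return (low + high) / 2

def arith_decode(code, intervals, length):
    """Invert the narrowing: find the sub-interval containing the code,
    emit its symbol, and rescale."""
    out = []
    for _ in range(length):
        for sym, (a, b) in intervals.items():
            if a <= code < b:
                out.append(sym)
                code = (code - a) / (b - a)
                break
    return "".join(out)
```

Unlike Huffman coding, the per-symbol cost is not rounded up to a whole bit, which is where the extra compression comes from.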

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
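Of the transforms surveyed, the Walsh-Hadamard transform (used by the Walsh method in the main paper) has the simplest fast algorithm: length-2 butterflies with additions only, no multiplications. A sketch for power-of-two lengths:

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (natural/Hadamard order) for a
    signal of length 2^k. Applying it twice returns len(x) times the
    original signal, since the unscaled transform is its own inverse
    up to that factor."""
    data = list(x)
    h = 1
    while h < len(data):
        for i in range(0, len(data), h * 2):
            for j in range(i, i + h):
                a, b = data[j], data[j + h]
                data[j], data[j + h] = a + b, a - b   # butterfly: no multiplies
        h *= 2
    return data
```

The absence of multiplications is why Walsh transforms were attractive for the low-cost digital hardware realizations this survey evaluates.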

928 citations