Journal Article

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for the evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
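
The tolerance-comparison family surveyed in the paper (AZTEC, Turning-point, CORTES, Fan and SAPA) relies on simple sample-retention rules. As a rough illustration of that idea, the sketch below implements the Turning-point rule of keeping one sample of every two while preserving local slope reversals; the function names and structure are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the Turning-point (TP) idea: scan the signal in
# pairs and, for each pair, retain the sample that preserves a local
# turning point (slope sign change), giving a roughly 2:1 reduction.

def sign(x):
    return (x > 0) - (x < 0)

def turning_point(samples):
    """Retain one sample of every two, keeping turning points."""
    if len(samples) < 3:
        return list(samples)
    kept = [samples[0]]                  # reference sample x0
    i = 1
    while i + 1 < len(samples):
        x0, x1, x2 = kept[-1], samples[i], samples[i + 1]
        s1, s2 = sign(x1 - x0), sign(x2 - x1)
        # If the slope changes sign at x1, x1 is a turning point: keep it.
        kept.append(x1 if s1 * s2 < 0 else x2)
        i += 2
    return kept
```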
Citations
Journal Article
01 Sep 2022
TL;DR: This study tested the hypothesis that a computational mapping system incorporating a comprehensive arrhythmia simulation library would provide accurate localization of the site-of-origin for atrial and ventricular arrhythmias and pacing using 12-lead ECG data when compared with the gold standard of invasive electrophysiology study and ablation.
Abstract: Background: The accuracy of noninvasive arrhythmia source localization using a forward-solution computational mapping system has not yet been evaluated in blinded, multicenter analysis. This study tested the hypothesis that a computational mapping system incorporating a comprehensive arrhythmia simulation library would provide accurate localization of the site-of-origin for atrial and ventricular arrhythmias and pacing using 12-lead ECG data when compared with the gold standard of invasive electrophysiology study and ablation. Methods: The VMAP study (Vectorcardiographic Mapping of Arrhythmogenic Probability) was a blinded, multicenter evaluation with final data analysis performed by an independent core laboratory. Eligible episodes included atrial and ventricular: tachycardia, fibrillation, pacing, premature atrial and ventricular complexes, and orthodromic atrioventricular reentrant tachycardia. Mapping system results were compared with the gold standard site of successful ablation or pacing during electrophysiology study and ablation. Mapping time was assessed from time-stamped logs. Prespecified performance goals were used for statistical comparisons. Results: A total of 255 episodes from 225 patients were enrolled from 4 centers. Regional accuracy for ventricular tachycardia and premature ventricular complexes in patients without significant structural heart disease (n=75, primary end point) was 98.7% (95% CI, 96.0%–100%; P<0.001 to reject predefined H0 <0.80). Regional accuracy for all episodes (secondary end point 1) was 96.9% (95% CI, 94.7%–99.0%; P<0.001 to reject predefined H0 <0.75). Accuracy for the exact or neighboring segment for all episodes (secondary end point 2) was 97.3% (95% CI, 95.2%–99.3%; P<0.001 to reject predefined H0 <0.70). Median spatial accuracy was 15 mm (n=255, interquartile range, 7–25 mm). The mapping process was completed in a median of 0.8 minutes (interquartile range, 0.4–1.4 minutes). Conclusions: Computational ECG mapping using a forward-solution approach exceeded prespecified accuracy goals for arrhythmia and pacing localization. Spatial accuracy analysis demonstrated clinically actionable results. This rapid, noninvasive mapping technology may facilitate catheter-based and noninvasive targeted arrhythmia therapies. Registration: URL: https://www.clinicaltrials.gov; Unique identifier: NCT04559061.

2 citations

Journal Article
TL;DR: In this work, a novel analysis-by-synthesis method to process ECG signals is presented, based on the matching pursuit algorithm, which is employed here to decompose the ECG in the time domain.
Abstract: The electrocardiogram (ECG) is relevant for several medical purposes. In this work, a novel analysis-by-synthesis method to process ECG signals is presented. It is based on the matching pursuit algorithm, which is employed here to decompose the ECG in the time domain. The main features of the ECG are extracted through a dictionary of triangular functions, due to their good correlation with the typical electrocardiographic waveforms, especially the R wave. The individual elements of this signal representation can be further employed for different processing tasks, such as ECG compression and QRS detection. Compression is required to store and transmit signals in situations related to massive acquisitions, frequent monitoring, high-resolution data, real-time needs or narrow bandwidths. QRS detection is not only essential to study the heart rate variability, but also the basis of automatic systems for ECG applications such as heartbeat classification or anomaly identification. In this study, it is shown how to employ the proposed processing approach to perform ECG compression and beat detection jointly. The resulting algorithm is tested over the whole MIT-BIH Arrhythmia Database, with a wide variety of ECG records, yielding both high compression and efficient QRS detection.
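
As a rough sketch of the decomposition described here, the following matching-pursuit loop greedily picks the triangular atom best correlated with the current residual and subtracts its contribution. The atom grid, widths, and stopping rule are illustrative assumptions, not the authors' exact design.

```python
# Minimal, unoptimized matching pursuit over a dictionary of triangular atoms.
import numpy as np

def triangle(n, center, width):
    """Unit-norm triangular atom of given center and half-width."""
    t = np.arange(n)
    atom = np.clip(1.0 - np.abs(t - center) / width, 0.0, None)
    return atom / np.linalg.norm(atom)

def matching_pursuit(x, widths=(2, 4, 8, 16), n_atoms=20):
    x = np.asarray(x, dtype=float)
    n = len(x)
    D = np.array([triangle(n, c, w) for w in widths for c in range(n)])
    residual = x.copy()
    chosen = []                               # (atom index, coefficient) pairs
    for _ in range(n_atoms):
        corr = D @ residual                   # correlation with every atom
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        chosen.append((k, corr[k]))
        residual = residual - corr[k] * D[k]  # peel off its contribution
    approx = sum(c * D[k] for k, c in chosen)
    return chosen, approx, residual
```

The retained (index, coefficient) pairs are what a compression stage would quantize and encode, while the atom indices near the R wave can double as beat-detection candidates.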

2 citations

Dissertation
01 Jul 2015
TL;DR: A new ECG signal compression technique based on EMD is proposed, in which the EMD technique is first applied to decompose the ECG signal into several intrinsic mode functions (IMFs), and the performance of the compression and decompression techniques is evaluated.
Abstract: The electrocardiogram (ECG) is an efficient diagnostic tool to monitor the electrical activity of the heart. One of the most vital benefits of using telecommunication technologies in the medical field is to provide cardiac health care at a distance. Telecardiology is the most efficient way to provide faster and affordable health care for cardiac patients located in rural areas. Early detection of cardiac disorders can minimize cardiac death rates. In a real-time monitoring process, ECG data from a patient usually occupy large storage space, on the order of gigabytes (GB). Hence, compression of the bulky ECG signal is a common requirement for faster transmission of cardiac signals using wireless technologies. Several techniques, such as Fourier transform based methods and wavelet transform based methods, have been reported for compression of ECG data. The Fourier transform is suitable for analyzing stationary signals; an improved version, the wavelet transform, allows the analysis of non-stationary signals. It provides a uniform resolution for all scales; however, the wavelet transform faces difficulties such as uniformly poor resolution due to the limited size of the basic wavelet function, and it is non-adaptive in nature. A data-adaptive method to analyse non-stationary signals is based on empirical mode decomposition (EMD), where the bases are derived from the multivariate data, which are nonlinear and non-stationary. A new ECG signal compression technique based on EMD is proposed, in which the EMD technique is first applied to decompose the ECG signal into several intrinsic mode functions (IMFs). Next, downsampling, discrete cosine transform (DCT), window filtering and Huffman encoding processes are used sequentially to compress the ECG signal. The compressed ECG is then transmitted as a short message service (SMS) message using a global system for mobile communications (GSM) modem. First, the AT command ‘+CMGF’ is used to set the SMS to text mode. Next, the GSM modem uses the AT command ‘+CMGS’ to send an SMS message. The received text SMS messages are transferred to a personal computer (PC) using Bluetooth. All text SMS messages are combined in the PC as per the received sequence and fed as data input to decompress the compressed ECG data. The decompression method used to reconstruct the original ECG signal consists of Huffman decoding, inverse discrete cosine transform (IDCT) and spline interpolation. The performance of the compression and decompression techniques is evaluated in terms of compression ratio (CR) and percent root mean square difference (PRD), respectively, by using both the European ST-T database and the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. The average values of CR and PRD for selected ECG records of the European ST-T database are found to be 23.5:1 and 1.38, respectively. All 48 ECG records of the MIT-BIH arrhythmia database are used for comparison purposes, and the average values of CR and PRD are found to be 23.74:1 and 1.49, respectively. The reconstructed ECG signal is then used for the detection of cardiac disorders like bradycardia, tachycardia and ischemia. The preprocessing stage of the detection technique filters the normalized signal to reduce noise components and detects the QRS complexes. Next, ECG feature extraction, ischemic beat classification and ischemic episode detection processes are applied sequentially to the filtered ECG by using rule-based medical knowledge. The ST-segment and T-wave are the two features generally used for ischemic beat classification. As per the recommendation of the ESC (European Society of Cardiology), the ischemic episode detection procedure considers a minimum signal duration of 30 s. The performance of the ischemic episode detection technique is evaluated in terms of sensitivity (Se) and positive predictive accuracy (PPA) by using the European ST-T database. This technique achieves an average Se and PPA of 83.08% and 92.42%, respectively.
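
For reference, the two figures of merit quoted in this abstract can be computed as sketched below. The exact PRD normalization (with or without mean removal) varies across the ECG compression literature; this version assumes the raw signal energy in the denominator.

```python
# Hedged sketch of the CR and PRD figures of merit.
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original bit stream over the compressed one."""
    return original_bits / compressed_bits

def prd(x, x_rec):
    """Percent root-mean-square difference between original and reconstruction."""
    x = np.asarray(x, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```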

2 citations

Proceedings Article
01 Jan 2022
TL;DR: This paper explores two direct data compression methods for ECG data: delta coding and Huffman coding, as well as their variations.
Abstract: Wireless ECG devices are the latest novelty in the field of electrocardiography. ECG is commonly used in healthcare systems to observe cardiac activity; however, wireless devices bring new challenges to the field of ECG monitoring. These challenges include limited battery capacity, as well as increased data storage requirements caused by daily uninterrupted ECG measurements. Both of these issues can be mitigated by introducing an efficient compression technique. This paper explores two direct data compression methods for ECG data: delta coding and Huffman coding, as well as their variations. We performed experiments both on measurements from a wireless ECG sensor (the Savvy ECG sensor) and on measurements from a standard public ECG database (the MIT-BIH Arrhythmia Database). We were able to select suitable parameters for delta coding for efficient compression of multiple ECG recordings from the Savvy ECG sensor, with a compression ratio of 1.6.
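
A minimal sketch of the delta-coding idea explored in the paper is shown below: only the first sample and the successive differences are stored, and the typically small differences can then be packed into fewer bits or entropy coded (e.g. with Huffman coding). The bit-packing stage and parameter choices are omitted here and are assumptions, not the paper's exact scheme.

```python
# Illustrative delta encoder/decoder for integer ECG samples.

def delta_encode(samples):
    """Store the first sample, then successive differences."""
    deltas = [samples[0]]
    deltas += [samples[i] - samples[i - 1] for i in range(1, len(samples))]
    return deltas

def delta_decode(deltas):
    """Rebuild the original samples by cumulative summation."""
    samples = [deltas[0]]
    for d in deltas[1:]:
        samples.append(samples[-1] + d)
    return samples
```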

2 citations

Journal Article
TL;DR: A deconvolution preprocessing module for ECG codec performance improvement (DPM-ECPI) is presented, and it is shown that this new compression scheme compares favorably with direct coding.

2 citations

References
Journal Article
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
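
The limiting process described in this abstract can be illustrated for entropy with a standard textbook derivation (not quoted from the paper): quantizing a density f into bins of width Δ and letting Δ shrink recovers the differential entropy up to a diverging log Δ term.

```latex
% Sketch of the discrete-to-continuous limit for entropy (illustrative,
% assuming a well-behaved density f and bin width \Delta).
\begin{aligned}
H_\Delta &= -\sum_i f(x_i)\,\Delta\,\log\bigl(f(x_i)\,\Delta\bigr)
          \;\approx\; -\sum_i f(x_i)\,\Delta\,\log f(x_i) \;-\; \log\Delta,\\
H_\Delta + \log\Delta \;&\xrightarrow{\;\Delta \to 0\;}\;
          -\int f(x)\,\log f(x)\,dx \;=\; h(f).
\end{aligned}
```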

65,425 citations

Journal Article
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
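
A minimal sketch of the construction described above, assuming the familiar greedy merge of the two least-frequent members of the ensemble; the data structure and tie-breaking below are illustrative choices, not Huffman's original bookkeeping.

```python
# Build a binary minimum-redundancy code by repeatedly merging the two
# least-frequent subtrees, then reading codewords off the merge order.
import heapq
from itertools import count

def huffman_code(freqs):
    """freqs: dict mapping message -> frequency. Returns message -> bitstring."""
    tiebreak = count()
    heap = [(f, next(tiebreak), {m: ""}) for m, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)       # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {m: "0" + code for m, code in c1.items()}
        merged.update({m: "1" + code for m, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Example: huffman_code({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```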

5,221 citations

Journal Article
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and the present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
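
As a rough sketch of the stationary (autocorrelation) formulation discussed above, the snippet below solves the least-squares normal equations for an order-p all-pole predictor. numpy's generic solver is used for brevity, whereas the paper's treatment leads to Levinson-type recursions and the reflection coefficients it recommends for transmission.

```python
# Least-squares all-pole (LPC) fit via the autocorrelation normal equations.
import numpy as np

def lpc(signal, order):
    s = np.asarray(signal, dtype=float)
    n = len(s)
    # Autocorrelation lags R(0) .. R(order)
    r = np.array([np.dot(s[:n - k], s[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])     # predictor coefficients a_1 .. a_p
    error = r[0] - np.dot(a, r[1:])   # minimum prediction error energy
    return a, error
```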

4,206 citations

Journal Article
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.

3,188 citations

Proceedings Article
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
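
As a small illustration of why such transforms are useful for data compression, the sketch below builds an orthonormal DCT-II matrix (one member of the family reviewed above) and keeps only the largest-magnitude coefficients before inverting. The matrix construction and retention rule are illustrative assumptions; in practice a fast O(N log N) routine is used.

```python
# DCT-II energy compaction demo: keep the few largest coefficients.
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def keep_largest(x, n_keep):
    x = np.asarray(x, dtype=float)
    C = dct2_matrix(len(x))
    coeffs = C @ x                                        # forward transform
    idx = np.argsort(np.abs(coeffs))[: len(x) - n_keep]   # smallest coefficients
    coeffs[idx] = 0.0                                     # discard them
    return C.T @ coeffs                                   # inverse (C is orthonormal)
```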

928 citations