Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
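
To make the direct methods concrete, here is a minimal sketch of the Turning-point (TP) algorithm named above, one of the tolerance-comparison schemes: it halves the sampling rate (a fixed 2:1 compression ratio) while retaining samples at slope-sign changes such as QRS peaks. The function name and the handling of flat segments are illustrative assumptions, not the paper's exact formulation.

```python
def turning_point(signal):
    """Retain roughly half the samples, preserving turning points (2:1 CR)."""
    def sign(v):
        return (v > 0) - (v < 0)

    out = [signal[0]]
    x0 = signal[0]          # last saved sample
    i = 1
    while i + 1 < len(signal):
        x1, x2 = signal[i], signal[i + 1]
        # A turning point occurs when the slope changes sign between
        # (x0, x1) and (x1, x2); keep x1 in that case, otherwise keep x2.
        if sign(x1 - x0) * sign(x2 - x1) < 0:
            saved = x1
        else:
            saved = x2
        out.append(saved)
        x0 = saved
        i += 2
    return out
```
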
Citations
01 Jan 2001
TL;DR: This paper discusses practical aspects of implementation of the signal-dependent adjustable subband coding in a real-world portable long-term digital recorder and the methodological consequences of the use of the fixed-point data representation.
Abstract: This paper discusses practical aspects of implementing signal-dependent adjustable subband coding in a real-world portable long-term digital recorder. Various dependency rules for the local sampling frequency are considered. The most sophisticated one consists of on-line detection of QRS complexes and adjustment of the effective sampling rate to the expected signal bandwidth. The recorded signal is digitized at a constant maximum rate, but immediately afterwards it is split into the mandatory coarse approximation and the facultative, rule-dependent upper-band details. Although the algorithm uses an orthogonal time-frequency decomposition, it is feasible to implement it in the recorder's hardware. The only difference for hardware using fixed-point arithmetic is the mandatory use of wavelets that map integers to integers. The methodological consequences of using the fixed-point data representation are discussed in the closing section of the paper.
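
The integer-to-integer wavelets mentioned here are usually realized with a lifting scheme. Below is a minimal sketch, assuming the LeGall 5/3 lifting steps commonly chosen for fixed-point hardware; the rounded, shift-based steps keep every intermediate value integral, so the transform inverts exactly without floating point. The function names and clamped boundary handling are my assumptions, not the paper's implementation.

```python
def legall53_forward(x):
    """One level of an integer 5/3 lifting DWT (x: even-length ints)."""
    s = list(x[0::2])   # even samples -> coarse approximation
    d = list(x[1::2])   # odd samples  -> upper-band details
    # Predict step: subtract the floored average of the neighboring evens.
    d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1)
         for i in range(len(d))]
    # Update step: correct the evens with a rounded quarter-sum of details.
    s = [s[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
         for i in range(len(s))]
    return s, d

def legall53_inverse(s, d):
    """Exactly invert legall53_forward (same clamping, steps reversed)."""
    s = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
         for i in range(len(s))]
    d = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1)
         for i in range(len(d))]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x
```

For even-length integer input, `legall53_inverse(*legall53_forward(samples))` reproduces the samples exactly, which is the property that makes the scheme safe on fixed-point hardware.
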

3 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Fortunately, the ECG signal is predictable in some aspects [3] and has several properties that may be important when considering the local optimization of sampling frequency...

    [...]

Journal ArticleDOI
05 Aug 2021-Sensors
TL;DR: In this paper, a new approach for the optimization of a dictionary used in ECG signal compression and reconstruction systems, based on Compressed Sensing (CS), is presented; it uses an overcomplete wavelet dictionary, which is then reduced by means of a training phase.
Abstract: This paper presents a new approach for the optimization of a dictionary used in ECG signal compression and reconstruction systems, based on Compressed Sensing (CS). As an alternative to fully data-driven methods, which learn the dictionary from the training data, the proposed approach uses an overcomplete wavelet dictionary, which is then reduced by means of a training phase. Moreover, the alignment of the frames according to the position of the R-peak is proposed, such that the dictionary optimization can exploit the different scaling features of the ECG waves. Therefore, at first, a training phase is performed in order to optimize the overcomplete dictionary matrix by reducing its number of columns. Then, the optimized matrix is used in combination with a dynamic sensing matrix to compress and reconstruct the ECG waveform. In this paper, the mathematical formulation of the patient-specific optimization is presented and three optimization algorithms are evaluated. For each of them, an experimental tuning of the convergence parameter is carried out, in order to ensure that the algorithm can work in its most suitable conditions. The performance of each considered algorithm is evaluated by assessing the Percentage Root-mean-squared Difference (PRD) and compared with state-of-the-art techniques. The obtained experimental results demonstrate that: (i) the utilization of an optimized dictionary matrix allows better reconstruction quality of the ECG signals to be reached when compared with other methods, (ii) the regularization parameters of the optimization algorithms should be properly tuned to achieve the best reconstruction results, and (iii) the Multiple Orthogonal Matching Pursuit (M-OMP) algorithm is the best suited among those examined.
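
For orientation, here is a minimal sketch of the two ingredients this abstract leans on: plain Orthogonal Matching Pursuit (a single-channel stand-in for the M-OMP variant the authors select), which recovers a sparse coefficient vector from compressed measurements, and the PRD quality metric. The variable names, the sparsity parameter k, and the mean-removed PRD convention are assumptions on my part.

```python
import numpy as np

def omp(A, y, k):
    """Greedy OMP: find x with at most k nonzeros such that y ~ A @ x."""
    residual = y.astype(float).copy()
    support, coef = [], np.array([])
    for _ in range(k):
        # Select the column (atom) most correlated with the residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit all selected coefficients by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def prd(original, reconstructed):
    """Percentage RMS Difference (mean-removed denominator convention)."""
    original = np.asarray(original, dtype=float)
    err = np.sum((original - np.asarray(reconstructed, dtype=float)) ** 2)
    ref = np.sum((original - original.mean()) ** 2)
    return 100.0 * np.sqrt(err / ref)
```
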

3 citations

Proceedings ArticleDOI
09 Oct 2014
TL;DR: A new compression technique for Electrocardiogram (ECG) signals is proposed, which uses a transform-based approach with a hybrid stage, suitable for compressing the complete ECG signal as well as the QRS complex.
Abstract: In this paper, we present a compression technique for Electrocardiogram (ECG) signals. Different data compression techniques are available; as the ECG data to be handled is huge, an appropriate compression technique must be used. When an ECG signal is compressed, the clinically important features of the signal, which the doctors need for proper diagnosis, must be properly retained. We propose a new compression technique that uses a transform-based approach with a hybrid stage. In this method, a Discrete Cosine Transform (DCT) stage is combined with a 1D or 2D Discrete Wavelet Transform (DWT). The hybrid stage is suitable for compressing the complete ECG signal as well as the QRS complex. The compression ratio (CR) and Percent Root Mean Square Difference (PRD) are controlled by proper selection of the threshold value; the threshold primarily governs the CR.
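
The threshold-versus-CR trade-off described here can be illustrated with a small sketch: transform the signal, zero coefficients below the threshold, and keep only the significant ones. This uses a plain DCT via SciPy rather than the authors' hybrid DCT+DWT stage, and the function names and the crude CR estimate are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def threshold_compress(signal, threshold):
    """Keep only DCT coefficients with magnitude >= threshold."""
    x = np.asarray(signal, dtype=float)
    c = dct(x, norm='ortho')
    kept = np.flatnonzero(np.abs(c) >= threshold)
    cr = x.size / max(kept.size, 1)   # crude index/value CR estimate
    return kept, c[kept], cr

def threshold_reconstruct(indices, values, n):
    c = np.zeros(n)
    c[indices] = values
    return idct(c, norm='ortho')
```

Raising the threshold discards more coefficients, increasing the CR at the cost of a larger PRD.
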

3 citations


Cites background from "ECG data compression techniques-a u..."

  • [...]

Proceedings ArticleDOI
01 Nov 2002
TL;DR: Two different neural-network-based methods for ECG data compression are investigated; in the second, back-propagation networks are used as nonlinear predictors to achieve the data compression.
Abstract: ECG data compression algorithms are important for storage, transmission and analysis. An essential requirement of the compression algorithms is that the significant morphological features of the signal should not be lost upon reconstruction. In this paper, two different neural-network-based methods are investigated for ECG data compression. The first method uses filters for attenuating noise and interferences, a radial-basis-function network for the detection of R-points to separate the waveform into different cycles, and finally multilayer back-propagation networks for data compression. In the second method, the back-propagation networks are used as nonlinear predictors for achieving the data compression. Compression results obtained by using the two different methods are evaluated based on the standard MIT-BIH ECG Test Database.
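
The second method is a predictive (DPCM-style) coder with the predictor replaced by a network. A minimal sketch of that closed-loop residual-coding scheme follows; the `predictor` callable stands in for the paper's trained back-propagation network, and the predictor order p and quantizer step are illustrative assumptions.

```python
import numpy as np

def encode_residuals(signal, predictor, p, q_step=1.0):
    """Quantized prediction residuals; predictor: p past samples -> estimate."""
    recon = list(signal[:p])        # decoder starts from the same p-sample seed
    residuals = []
    for n in range(p, len(signal)):
        pred = predictor(np.array(recon[-p:]))
        r = round((signal[n] - pred) / q_step)
        residuals.append(r)
        recon.append(pred + r * q_step)   # closed loop: track decoder state
    return residuals

def decode_residuals(seed, residuals, predictor, q_step=1.0):
    recon = list(seed)
    p = len(seed)
    for r in residuals:
        pred = predictor(np.array(recon[-p:]))
        recon.append(pred + r * q_step)
    return recon
```

Because the encoder predicts from its own reconstructed samples rather than the originals, quantization error cannot accumulate at the decoder.
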

3 citations

Book ChapterDOI
03 Jan 2018
TL;DR: This work describes the implementation of a complete Wireless Body Area Network (WBAN) that is capable of monitoring multiple physiological signals of a patient by means of IEEE 802.15.6 scheduled access MAC protocol and proposes a fast Discrete Wavelet Transform (DWT) based data compression algorithm at the BNC, termed herein as B-DWT.
Abstract: This work describes the implementation of a complete Wireless Body Area Network (WBAN) that is capable of monitoring multiple physiological signals of a patient by means of the IEEE 802.15.6 scheduled access MAC protocol. In the WBAN setup, data from multiple sensors are sent to a Body Network Controller (BNC) using low-power transceivers. To this end, the BNC is designed to multiplex the data from multiple sensors and send them to a remote server over the Internet using a backhaul cellular network, thereby enabling ubiquitous remote health monitoring. Furthermore, to facilitate an energy-efficient backhaul transmission that incurs low data transfer costs to the users, we introduce the concept of data compression at the BNC. In this regard, we propose a fast Discrete Wavelet Transform (DWT) based data compression algorithm at the BNC, termed herein as B-DWT, that is implementable in real time using the limited on-board resources. The remote server is configured to accept data from multiple patients, de-multiplex the different data of a single patient, and store them in a database for pervasive access. Issues related to the hardware implementation of the sensor nodes and BNC, and the design of the scheduled access mechanism and B-DWT, are addressed. A detailed performance analysis of the WBAN is performed in the OPNET simulator to determine the optimum allocation intervals for the sensor nodes that maximize network capacity while maintaining a frame delay constraint. Further, in order to prolong the battery life of the sensor nodes, we obtain the optimal payload sizes that maximize their energy efficiency. Additionally, through implementation of B-DWT at the BNC, we determine the optimal wavelet filter and compression levels that allow maximum data compression within acceptable limits of information loss. The resulting B-DWT algorithm is shown to outperform traditional DWT with significant gains in execution speed and a low memory footprint at the BNC.
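
As a reference point for what the wavelet-filter and level selection controls, here is a minimal sketch of conventional DWT threshold compression, assuming the PyWavelets package; the 'db4' filter, decomposition level, and relative threshold are placeholders for the values the authors tune, and this is plain DWT, not the accelerated B-DWT itself.

```python
import numpy as np
import pywt

def dwt_compress(signal, wavelet='db4', level=4, rel_threshold=0.1):
    """Decompose, then hard-threshold the detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    out = [coeffs[0]]                       # keep the approximation intact
    for c in coeffs[1:]:
        out.append(pywt.threshold(c, rel_threshold * np.max(np.abs(c)),
                                  mode='hard'))
    return out

def dwt_reconstruct(coeffs, wavelet='db4'):
    return pywt.waverec(coeffs, wavelet)
```
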

3 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
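
The limiting process Shannon describes can be checked numerically: discretizing a density into bins of width delta gives a discrete entropy that behaves like h - log2(delta), where h is the differential entropy. The sketch below illustrates this for a unit Gaussian, whose differential entropy is (1/2)·log2(2·pi·e) ≈ 2.047 bits; the function name and bin construction are my own.

```python
import numpy as np
from math import log2, pi, e

def discrete_entropy_gaussian(delta, span=10.0):
    """Entropy (bits) of a unit Gaussian quantized into bins of width delta."""
    edges = np.arange(-span, span, delta)
    p = np.exp(-edges**2 / 2) / np.sqrt(2 * pi) * delta   # ~bin probabilities
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

h = 0.5 * log2(2 * pi * e)   # differential entropy of N(0,1), ~2.047 bits
for delta in (0.5, 0.1, 0.01):
    # H(quantized) + log2(delta) should approach h as delta -> 0
    print(delta, discrete_entropy_gaussian(delta) + log2(delta), h)
```
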

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
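
A minimal sketch of the construction as it is commonly implemented: repeatedly merge the two least probable nodes and prepend a bit to each affected codeword; the resulting prefix code minimizes the average number of code digits per message. The dict-based representation and tie-breaking counter are implementation choices of mine.

```python
import heapq

def huffman_code(weights):
    """weights: dict symbol -> probability/frequency. Returns symbol -> bits."""
    heap = [(w, i, {sym: ''}) for i, (sym, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + bits for s, bits in c1.items()}
        merged.update({s: '1' + bits for s, bits in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

print(huffman_code({'a': 0.5, 'b': 0.25, 'c': 0.15, 'd': 0.10}))
# e.g. {'a': '0', 'b': '10', 'c': '111', 'd': '110'} -> 1.75 bits/symbol average
```
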

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, where the signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
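
A minimal sketch of the stationary (autocorrelation) method described here, assuming the standard Levinson-Durbin recursion: it returns the all-pole predictor coefficients together with the reflection (partial correlation) coefficients that the paper singles out for quantization and transmission. The function name and sign convention are my choices.

```python
import numpy as np

def lpc(signal, order):
    """Autocorrelation-method LPC via Levinson-Durbin.

    Returns (a, k, err): a[0] = 1 and the prediction is
    x_hat[n] = -sum(a[j] * x[n - j] for j in 1..order);
    k holds the reflection (PARCOR) coefficients, err the final error.
    """
    x = np.asarray(signal, dtype=float)
    r = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    k = np.zeros(order)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k[i - 1] = -acc / err
        a[1:i + 1] = a[1:i + 1] + k[i - 1] * np.concatenate((a[i - 1:0:-1], [1.0]))
        err *= 1.0 - k[i - 1] ** 2
    return a, k, err
```

The reflection coefficients are preferred for transmission because they are bounded in magnitude by one, which makes their quantization well behaved.
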

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
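
A minimal sketch of the interval-narrowing idea: each symbol scales the current interval by its probability, and any number inside the final interval identifies the whole message. Production coders use incremental integer arithmetic with bit renormalization; this floating-point toy only works for short messages, but it shows the clean separation between the probability model and the encoding that the abstract highlights. All names here are mine.

```python
def intervals(probs):
    """Map each symbol to its cumulative-probability interval [lo, hi)."""
    lo, table = 0.0, {}
    for sym, p in probs.items():
        table[sym] = (lo, lo + p)
        lo += p
    return table

def encode(message, probs):
    table = intervals(probs)
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_lo, s_hi = table[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return (low + high) / 2        # any value in [low, high) decodes correctly

def decode(code, length, probs):
    table = intervals(probs)
    out = []
    for _ in range(length):
        for sym, (s_lo, s_hi) in table.items():
            if s_lo <= code < s_hi:
                out.append(sym)
                code = (code - s_lo) / (s_hi - s_lo)   # rescale and repeat
                break
    return ''.join(out)

probs = {'a': 0.6, 'b': 0.3, 'c': 0.1}
print(decode(encode('abacab', probs), 6, probs))   # -> 'abacab'
```
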

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
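
One of the criteria listed, variance distribution (energy compaction), is easy to demonstrate: for a highly correlated first-order Markov signal, the DCT (close to the Karhunen-Loeve transform for such signals) packs most of the energy into a few coefficients. The test-signal parameters below are illustrative.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
n, rho = 256, 0.95
x = np.empty(n)                      # first-order Markov (AR(1)) test signal
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()

c = dct(x, norm='ortho')             # orthonormal DCT-II
frac = np.cumsum(np.sort(c**2)[::-1]) / np.sum(c**2)
print("energy captured by top 16 of 256 DCT coefficients:", frac[15])
```
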

928 citations