Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
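To make the tolerance-comparison category concrete, here is a minimal Python sketch in the spirit of AZTEC's zero-order (plateau) pass: a segment grows while its samples stay inside an amplitude band of width 2*tol and is then emitted as a (length, amplitude) pair. The function name, the midpoint amplitude choice, and the omission of AZTEC's slope segments for short plateaus are all simplifying assumptions, not the paper's algorithm verbatim.

```python
import numpy as np

def plateau_pass(x, tol):
    """Greedy zero-order plateau compression in the spirit of AZTEC.

    A running min/max band tracks the current segment; while the band
    width stays within 2 * tol, the segment grows. On violation, the
    segment is emitted as (length, midpoint) and a new segment starts.
    """
    segments = []
    lo = hi = float(x[0])
    length = 1
    for s in x[1:]:
        new_lo, new_hi = min(lo, s), max(hi, s)
        if new_hi - new_lo <= 2 * tol:
            lo, hi, length = new_lo, new_hi, length + 1
        else:
            segments.append((length, (lo + hi) / 2.0))  # close the plateau
            lo = hi = float(s)
            length = 1
    segments.append((length, (lo + hi) / 2.0))
    return segments
```

The yield of such a pass is data dependent: flat isoelectric regions collapse into long plateaus, while the QRS complex produces many short segments, which is why AZTEC pairs the plateau pass with slope coding.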
Citations
BookDOI
07 Dec 2010
TL;DR: A conceptual framework that provides modeling facilities for context-aware, multi-channel Web applications and explains how to use high-level modeling constructs to drive the application development process through automatic code generation is detailed.
Abstract: To help you address contemporary challenges, the text details a conceptual framework that provides modeling facilities for context-aware, multi-channel Web applications. It compares various platforms for developing mobile services—from the developer and user perspectives—and explains how to use high-level modeling constructs to drive the application development process through automatic code generation.

9 citations

Patent
29 Mar 2007
TL;DR: In this patent, a data compressor compresses the electrocardiogram data with a wavelet transform, Huffman coding, or arithmetic coding, and the compressed data are then transmitted to a remote receiver.
Abstract: In an electrocardiogram telemeter, a data acquirer is operable to acquire electrocardiogram data. A data compressor is operable to compress the electrocardiogram data with a wavelet transform, Huffman coding, or arithmetic coding, thereby generating compressed electrocardiogram data adapted to be transmitted to a remote receiver configured to reconstruct the electrocardiogram data.
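As a hedged sketch of one branch of the claim (wavelet transform followed by entropy coding), the following assumes the PyWavelets package; the wavelet choice, decomposition level, and kept-coefficient fraction are illustrative parameters, not values from the patent.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_compress(ecg, wavelet="db4", level=4, keep=0.10):
    """Decompose, keep the largest `keep` fraction of coefficients by
    magnitude, and zero the rest; the survivors would then be entropy
    coded (Huffman or arithmetic) before transmission."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep)
    return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

def wavelet_reconstruct(coeffs, wavelet="db4"):
    """Receiver side: inverse transform of the thresholded coefficients."""
    return pywt.waverec(coeffs, wavelet)
```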

9 citations

Proceedings ArticleDOI
03 Apr 2014
TL;DR: A DPCM-based approach for real-time compression of ECG data in telemonitoring applications; the algorithm's computational simplicity makes it possible to implement the coder on a low-cost microcontroller.
Abstract: This paper illustrates a DPCM-based approach to real-time compression of ECG data for telemonitoring applications. For real-time implementation, a 'frame' is formed from one original sample followed by 64 first-difference elements. The coder compresses the non-QRS regions of an ECG data stream through stages of first differencing, joint sign-and-magnitude coding, and run-length encoding (RLE). Hard thresholding in the equipotential regions is applied to enhance the RLE efficiency. For testing, 10-second ECG records from PhysioNet, quantized at 10 bits, were used. The CR, PRD, and PRDN achieved with the PTB Database (ptbdb) are 6.42, 9.77, and 9.77, respectively. With MIT-BIH arrhythmia data (mitdb), these values are 5.92, 8.19, and 8.19, respectively. With MIT-BIH ECG Compression test data (cdb), these values are 4.25, 5.37, and 6.65, respectively. The frame-wise compression ratio rises to 12-14 in flat (TP) segments and falls to 1-2 in QRS regions. The computational simplicity of the algorithm makes it possible to implement the coder on a low-cost microcontroller.
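A minimal sketch of the frame pipeline described above (one anchor sample plus 64 first differences, hard thresholding in equipotential regions, then run-length encoding); the dead-band value and the token format are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def encode_frame(frame, dead_band=2):
    """Encode one 65-sample frame: keep the first sample verbatim,
    first-difference the rest, zero differences inside the dead band
    (hard thresholding), then run-length encode the zero runs."""
    frame = np.asarray(frame, dtype=int)
    anchor = int(frame[0])
    diffs = np.diff(frame)                       # 64 first differences
    diffs[np.abs(diffs) <= dead_band] = 0        # equipotential thresholding
    tokens, run = [], 0
    for d in diffs:
        if d == 0:
            run += 1
        else:
            if run:
                tokens.append(("Z", run))        # zero-run token
                run = 0
            tokens.append(("D", int(d)))         # sign+magnitude token
    if run:
        tokens.append(("Z", run))
    return anchor, tokens
```

Long zero runs in flat segments collapse into single ("Z", n) tokens, consistent with the reported frame-wise compression of 12-14 there, while the dense QRS differences keep that region near 1-2.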

9 citations

Journal ArticleDOI
12 Aug 2013
TL;DR: A data compression scheme for large-scale particle simulations, which has favorable prospects for scientific visualization of particle systems, and is named "TOKI (Time-Order Kinetic Irreversible compression)".
Abstract: We propose a data compression scheme for large-scale particle simulations, with favorable prospects for scientific visualization of particle systems. Our compression concept operates directly on the particle-orbit data produced by the simulation and has the following features: (i) through control over the compression scheme, the difference between the simulation variables and the values reconstructed from the compressed data for visualization is kept smaller than a given constant; (ii) the particles in the simulation are treated as independent, and the time-series data for each particle are compressed with a time step chosen independently for that particle; (iii) a particle trajectory is approximated by a polynomial function based on the characteristic motion of the particle, and is reconstructed as a continuous curve by interpolating the function's values between sample points. We name this concept "TOKI (Time-Order Kinetic Irreversible compression)". In this paper, we present an example implementation of a data compression scheme with the above features. Several application results are shown for plasma and galaxy-formation simulation data.
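A hedged sketch of the per-particle idea, assuming one scalar coordinate: the time series is cut into segments, each approximated by a low-degree polynomial whose worst-case reconstruction error stays below a bound eps. The greedy segmentation, the fixed degree, and the linear fallback are assumptions; the paper chooses the polynomial from the particle's characteristic motion.

```python
import numpy as np

def compress_orbit(t, x, eps, deg=3):
    """Grow each segment until the polynomial fit would exceed eps,
    then store (t_start, t_end, coefficients) and continue."""
    segments, start, n = [], 0, len(t)
    while start < n - 1:
        coef, good_end = None, start + 2       # minimal 2-point segment
        end = start + deg + 1                  # need deg+1 points to fit
        while end <= n:
            c = np.polyfit(t[start:end], x[start:end], deg)
            err = np.max(np.abs(np.polyval(c, t[start:end]) - x[start:end]))
            if err > eps:
                break
            coef, good_end = c, end
            end += 1
        if coef is None:                       # too few points: linear fallback
            coef = np.polyfit(t[start:good_end], x[start:good_end], 1)
        segments.append((t[start], t[good_end - 1], coef))
        start = good_end - 1                   # neighbours share an endpoint
    return segments
```

Reconstruction evaluates each segment's stored polynomial at the requested times; since both neighbouring fits are within eps at the shared endpoint, any jump there is bounded by 2*eps.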

9 citations

Patent
16 Apr 2015
TL;DR: In this paper, a data item string is separated into a defined number of component segments and each component segment is used as a coefficient of a polynomial equation, and a plurality of signal features are then identified from a physiological signal.
Abstract: Systems and methods are provided for encoding and decoding data (such as, for example, an encryption key) using a physiological signal. A data item string is separated into a defined number of component segments and each component segment is used as a coefficient of a polynomial equation. A plurality of signal features are then identified from a physiological signal and a plurality of ordered pairs are created based on the plurality of identified signal features using the polynomial equation. A data package including the plurality of ordered pairs and obfuscated by a plurality of chaff points is transmitted to another system. The receiver system uses a corresponding physiological signal to filter out the chaff points and to reconstruct the polynomial equation, for example, by Lagrange interpolation. The coefficients of the reconstructed polynomial equation are then used to derive the encoded data item string.
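A toy sketch of the lock/unlock flow over a small prime field. PRIME, the chaff count, and the helper names are illustrative assumptions; a deployed vault would use a larger finite field and error-tolerant feature matching rather than exact x-value equality.

```python
import random

PRIME = 257  # toy field size (assumption)

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) mod PRIME (low-to-high order)."""
    return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

def lock(secret_coeffs, features, n_chaff=20):
    """Genuine points lie on the secret polynomial at the feature
    values; chaff points deliberately do not (and avoid genuine x)."""
    points = {(f % PRIME, poly_eval(secret_coeffs, f % PRIME)) for f in features}
    genuine_x = {x for x, _ in points}
    target = len(points) + n_chaff
    while len(points) < target:
        x, y = random.randrange(PRIME), random.randrange(PRIME)
        if x not in genuine_x and y != poly_eval(secret_coeffs, x):
            points.add((x, y))
    return sorted(points)

def unlock(points, features, k):
    """Keep points whose x matches a receiver feature, then rebuild the
    degree-(k-1) polynomial by Lagrange interpolation mod PRIME;
    assumes at least k genuine matches survive the filter."""
    feats = {f % PRIME for f in features}
    genuine = [(x, y) for x, y in points if x in feats][:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(genuine):
        basis, denom = [1], 1                  # expand prod_{j != i} (x - xj)
        for j, (xj, _) in enumerate(genuine):
            if j != i:
                basis = [(a - xj * b) % PRIME
                         for a, b in zip([0] + basis, basis + [0])]
                denom = (denom * (xi - xj)) % PRIME
        scale = yi * pow(denom, -1, PRIME) % PRIME
        coeffs = [(c + scale * b) % PRIME for c, b in zip(coeffs, basis)]
    return coeffs
```

With matching features on both sides, unlock(lock([12, 7, 3], feats), feats, 3) recovers [12, 7, 3], i.e. the segments of the original data item string.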

9 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
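One standard way to make the limiting process concrete (an illustration in modern notation, not Shannon's own): partitioning a continuous density p(x) into cells of width \Delta gives a discretized source with entropy

$$
H(X_\Delta) \;=\; -\sum_i p(x_i)\,\Delta\,\log\big(p(x_i)\,\Delta\big)
\;\approx\; -\int p(x)\log p(x)\,dx \;-\; \log\Delta .
$$

The integral term is the differential entropy of the continuous source; the diverging -log \Delta term is one of the "new effects" mentioned above, and it is why differential entropy, unlike its discrete counterpart, is meaningful only relative to the chosen coordinate system.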

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
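A compact sketch of the construction, assuming a binary code alphabet: repeatedly merge the two least-probable nodes, prefixing '0' to one side's codewords and '1' to the other's; this greedy merge is what yields the minimum average number of code digits per message.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return a dict symbol -> binary codeword for the given ensemble;
    weights are taken from symbol frequencies."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tie = len(heap)                      # tiebreaker so dicts never compare
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # two least-probable subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]
```

For example, huffman_code("aaabbc") assigns a 1-bit codeword to 'a' and 2-bit codewords to 'b' and 'c', giving an average of 1.5 digits per symbol for this ensemble.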

5,221 citations

Journal ArticleDOI
John Makhoul
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
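A hedged sketch of the stationary (autocorrelation) method via the Levinson-Durbin recursion, returning both the all-pole predictor coefficients and the reflection (PARCOR) coefficients highlighted above for quantization and transmission. Windowing, pre-emphasis, and numerical safeguards are omitted.

```python
import numpy as np

def lpc_autocorrelation(x, order):
    """Solve the Toeplitz normal equations by Levinson-Durbin.

    Returns (a, k, err): a[0] = 1 and the predictor is
    x_hat[n] = -sum(a[i] * x[n - i] for i in 1..order);
    k are the reflection coefficients, err the final prediction error.
    """
    x = np.asarray(x, dtype=float)
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1 : 0 : -1])
        k = -acc / err
        ks.append(k)
        a[1 : m + 1] += k * a[m - 1 :: -1]   # order-update of a_1..a_m
        err *= 1.0 - k * k
    return a, np.array(ks), err
```

Each |k| < 1 for a valid autocorrelation sequence, which is part of what makes the reflection coefficients attractive for quantization: stability of the synthesis filter is easy to preserve.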

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
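A toy interval-narrowing coder illustrating the idea, with a fixed (non-adaptive) probability table as an assumption. It works in double precision only, so it is limited to short messages; production coders renormalize with integer arithmetic and can update the model adaptively, which is the advantage claimed above.

```python
def arith_encode(msg, probs):
    """Narrow [lo, hi) once per symbol; any number in the final
    interval identifies the message (given its length)."""
    lo, hi = 0.0, 1.0
    for s in msg:
        span = hi - lo
        c = 0.0
        for sym in sorted(probs):            # fixed cumulative order
            if sym == s:
                lo, hi = lo + span * c, lo + span * (c + probs[sym])
                break
            c += probs[sym]
    return (lo + hi) / 2

def arith_decode(code, n, probs):
    out, lo, hi = [], 0.0, 1.0
    for _ in range(n):
        span = hi - lo
        c = 0.0
        for sym in sorted(probs):
            if lo + span * c <= code < lo + span * (c + probs[sym]):
                out.append(sym)
                lo, hi = lo + span * c, lo + span * (c + probs[sym])
                break
            c += probs[sym]
    return "".join(out)
```

For probs = {"a": 0.5, "b": 0.5}, arith_decode(arith_encode("abba", probs), 4, probs) returns "abba"; skewed tables narrow the interval less for probable symbols, which is where the gain over Huffman's integer-length codewords comes from.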

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
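To make one of the listed performance criteria concrete, the sketch below compares the energy compaction of three of the surveyed transforms on a length-2^k signal, assuming SciPy is available; "compaction" is measured here as the fraction of total energy captured by the `keep` largest coefficients, an illustrative choice.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def compaction(x, keep=8):
    """Fraction of signal energy captured by the `keep` largest
    coefficients of the DFT, DCT-II and Walsh-Hadamard transforms
    (all scaled to be unitary, so energies are comparable)."""
    x = np.asarray(x, dtype=float)
    n = len(x)                                # must be a power of 2 for WHT
    H = hadamard(n) / np.sqrt(n)              # orthonormal Walsh-Hadamard
    results = {}
    for name, X in (("DFT", np.fft.fft(x) / np.sqrt(n)),
                    ("DCT", dct(x, norm="ortho")),
                    ("WHT", H @ x)):
        e = np.abs(X) ** 2
        results[name] = float(np.sort(e)[::-1][:keep].sum() / e.sum())
    return results
```

On smooth, highly correlated signals the DCT typically comes close to the compaction of the signal-dependent Karhunen-Loeve transform, which is one reason it dominates practical transform coders.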

928 citations