Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods and a framework for evaluation and comparison of ECG compression schemes is presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
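As an illustration of one of the direct methods named above, here is a minimal Python sketch of the Turning Point (TP) idea: halve the sampling rate by keeping, from each incoming pair of samples, the one that preserves a change in slope sign. This is a sketch of the published idea, not the paper's reference implementation.

def sign(v):
    return (v > 0) - (v < 0)

def turning_point(x):
    """Compress x by 2:1, retaining slope sign changes (turning points)."""
    out = [x[0]]
    x0 = x[0]                # last saved sample
    i = 1
    while i + 1 < len(x):
        x1, x2 = x[i], x[i + 1]
        s1, s2 = sign(x1 - x0), sign(x2 - x1)
        # keep x1 if the slope changes sign within the pair, else keep x2
        saved = x1 if s1 * s2 < 0 else x2
        out.append(saved)
        x0 = saved
        i += 2
    return out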
Citations
01 Jan 2012
TL;DR: This paper compares the performance of various ECG compression techniques based on transformation methods such as the Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Fast Fourier Transform (FFT), and the improved Discrete Cosine Transform-II (DCT-II).
Abstract: In this paper, we compare the performance of various types of ECG compression techniques. These techniques are essential to reduce the size of the data to be transmitted without losing the clinical information. We also note that computerized electrocardiogram (ECG), electroencephalogram (EEG), and magnetoencephalogram (MEG) processing systems have been widely used in clinical practice. The schemes compared are based on transformation methods such as the Discrete Cosine Transform (DCT), Discrete Sine Transform (DST), Fast Fourier Transform (FFT), and the improved Discrete Cosine Transform-II (DCT-II). The comparative study is made in terms of Compression Ratio (CR) and Percent Root-mean-square Difference (PRD).
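Since the comparison is made in terms of CR and PRD, the following is a hedged sketch of how such a transform-domain evaluation can be set up; the 10% coefficient budget and the sinusoidal test signal are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.fft import dct, idct  # assumes SciPy is available

def compress_dct(x, keep=0.10):
    """Keep only the largest DCT coefficients; return them and a naive CR."""
    c = dct(x, norm='ortho')
    k = max(1, int(keep * len(c)))
    idx = np.argsort(np.abs(c))[::-1][:k]
    c_kept = np.zeros_like(c)
    c_kept[idx] = c[idx]
    return c_kept, len(x) / k

def prd(x, x_rec):
    """Percent root-mean-square difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

x = np.sin(np.linspace(0, 8 * np.pi, 512))     # stand-in for an ECG segment
c, cr = compress_dct(x)
x_rec = idct(c, norm='ortho')
print(f"CR = {cr:.1f}:1, PRD = {prd(x, x_rec):.2f}%")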

2 citations

Journal ArticleDOI
TL;DR: This paper presents the stability analysis of the linear recursive (prediction) filters with higher order predictors in a DPCM (differential pulse-code modulation) system, where traditional methods become too difficult and complex.
Abstract: This paper presents the stability analysis of the linear recursive (prediction) filters with higher-order predictors in a DPCM (differential pulse-code modulation) system, where traditional methods become too difficult and complex. Stability conditions for the third- and fourth-order predictors are given by using the Schur–Cohn stability criterion. The probability of stability is estimated using the Monte Carlo method. Verification of the proposed method is performed for lower-order predictors (first- and second-order). We calculated numerical values of the probability of stability for higher-order predictors and previously experimentally obtained parameters. With a large enough number of trials (samples) in the Monte Carlo simulation, we reach the desired accuracy. DOI: http://dx.doi.org/10.5755/j01.itc.46.2.14038
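A minimal sketch of this approach for the second-order case used in the paper's verification step; the Gaussian coefficient distribution below is an assumption for illustration, not the experimentally obtained one.

import random

def stable_second_order(a1, a2):
    # Schur-Cohn (stability-triangle) conditions for the predictor
    # H(z) = 1 / (1 - a1*z^-1 - a2*z^-2)
    return abs(a2) < 1 and abs(a1) < 1 - a2

def probability_of_stability(trials=100_000, mu=(1.2, -0.4), sigma=0.3):
    """Monte Carlo estimate of P(stable) under an assumed coefficient model."""
    hits = 0
    for _ in range(trials):
        a1 = random.gauss(mu[0], sigma)
        a2 = random.gauss(mu[1], sigma)
        hits += stable_second_order(a1, a2)
    return hits / trials

print(f"estimated P(stable) = {probability_of_stability():.3f}")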

2 citations


Cites methods from "ECG data compression techniques-a u..."

  • ...This technique is widely used in telecommunications, speech [16, 22] and image coding [20, 28], medical research [10, 15, 21], etc....


DissertationDOI
03 Sep 2014
TL;DR: This dissertation presents a parametric characterization of the measures for short-length signals and proposes two unsupervised optimization methods for the analysis of short-duration records with QSE or CosEn, in situations where the signals have lost samples or are of limited length.
Abstract: Complexity measures are a set of statistical methods for estimating the regularity of a system. These methods are based on nonlinear analysis techniques, so that a signal can be characterized without implicit assumptions of stationarity or ergodicity. They are being widely applied to biological signals because of the nature of those signals: biological signals are irregular, nonlinear, and time-varying, so traditional linear analysis methods cannot fully characterize their behavior. These measures work very well in practice, since they extract information from the signals that is otherwise inaccessible. Among other capabilities, they can differentiate pathological states, predict the onset of an epileptic seizure, or distinguish between sleep stages. Their application is somewhat controversial, however, because they lack a characterization telling the user which measure to apply given the characteristics of the record, how it should be applied, or even how to interpret the results obtained. This work undertakes a characterization of some of the most commonly used complexity measures. It characterizes approximate entropy (ApEn), sample entropy (SampEn), multiscale entropy (MSE), detrended fluctuation analysis (DFA), Rényi quadratic entropy (QSE), and the coefficient of sample entropy (CosEn) in situations where the signals have lost samples or are of limited length. Sample loss is very common nowadays, where most records are taken ambulatorily and storage space is limited (data compression), or transmission is wireless and the channel may present unstable conditions or interference causing the loss of samples, either uniformly or at random. The limited length of the records may be due, among other possibilities, to data being collected manually or to the acquisition being uncomfortable for the patient. A parametric characterization of the measures for short-length signals is presented, and two unsupervised optimization methods are proposed for the analysis of short-duration records with QSE or CosEn. This work shows that the entropy measures considered behave similarly in the same situation, preserving the ability to separate classes regardless of the biological record analyzed, provided the measure is used correctly. SampEn has emerged as the most stable and most widely applicable measure for medium-length records (300 < N < 5000) when signals lose samples, whether at random or uniformly, maintaining cross-correlation coefficients above 0.8 up to 70% sample loss. If the signals have high standard deviations or great variability, MSE is recommended, since it introduces smoothing and decorrelation of the patterns. For short signals (100 < N < 300), DFA is recommended, since it allows a stable and robust characterization of complexity, though at a high computational cost and with the need for visual inspection to determine the number of scaling coefficients required.
Finally, for very short signals (N < 100), CosEn is recommended. Human HTA (arterial hypertension) records of barely 55 samples have been segmented, a very novel result, with better statistics than QSE.
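For readers unfamiliar with the measure the dissertation finds most stable on medium-length records, a minimal SampEn sketch follows; m = 2 and r = 0.2·SD are the customary defaults, not parameters taken from the dissertation, and the template counting is a common simplified formulation.

import numpy as np

def sampen(x, m=2, r=None):
    """Sample entropy: -ln(A/B) with Chebyshev distance, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def matches(mm):
        # all templates of length mm; count pairs within tolerance r
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        count = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)
            count += np.sum(d <= r)
        return count
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float('inf')

print(sampen(np.random.randn(300)))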

2 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Also, some processing techniques, such as event detection, data compression, or hardware specifications, can cause the loss of information that may be relevant [77, 115]....


Journal ArticleDOI
TL;DR: A novel approach in signal decomposition and data compression is proposed based on the maximum energy principle of multiple transforms (MEPOMT), making MEPOMT an efficient data compression method for signals containing both narrowband and wideband components.
Abstract: A novel approach in signal decomposition and data compression is proposed based on the maximum energy principle (MEP) of multiple transforms (MEPOMT). The complementary property of the discrete cosine transform (DCT) and wavelet transform (WT) in signal decomposition makes MEPOMT an efficient data compression method for signals containing both narrowband and wideband components.
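A hedged sketch of the maximum-energy selection idea: per block, keep whichever transform concentrates more energy in a fixed coefficient budget. The one-level Haar stage below stands in for the wavelet transform, and the block size and budget are illustrative, not the authors' choices.

import numpy as np
from scipy.fft import dct

def haar_stage(x):
    """One level of an orthonormal Haar wavelet transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def energy_in_top_k(c, k):
    return np.sort(c ** 2)[::-1][:k].sum()

def choose_transform(block, k=8):
    """Pick the transform packing more energy into its k largest coefficients."""
    cd, cw = dct(block, norm='ortho'), haar_stage(block)
    if energy_in_top_k(cd, k) >= energy_in_top_k(cw, k):
        return 'DCT', cd
    return 'Haar', cw

block = np.sin(0.3 * np.arange(64)) + 0.1 * np.random.randn(64)
name, coeffs = choose_transform(block)
print(name)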

2 citations

01 Jan 2016
TL;DR: This dissertation proposes reconstructing the ECG signal from undersampled data based on a compressive sensing framework that can reconstruct ECG signals precisely from fewer samples, so long as the signal is sparse or compressible.
Abstract: Author(s): Lee, SeungJae | Advisor(s): Chou, Pai H. | Abstract: Wearable embedded systems with sensing, communication, and computing capabilities have given rise to innovations in e-health and telemedicine in general. The scope of such systems ranges from devices and mobile apps to cloud backends and analysis algorithms, all of which must be well integrated. To manage the development, operation, and evolution of such complex systems, a systematic framework is needed. This dissertation makes contributions in two parts. The first is a framework for defining the structure of a wide range of wearable medical applications with modern cloud support. The second part includes several algorithms that can be plugged into this framework to make these systems more efficient in terms of processing performance and data size. We propose a novel QT analysis algorithm that can take advantage of a GPU as well as a server-client environment, and we show competitive results in terms of both performance and energy consumption with or without parallelization. We also propose ECG compression techniques using a trained overcomplete dictionary. After constructing the dictionary through a learning process with a given dataset, the signal can be compressed by sparse estimation using the trained dictionary. We propose reconstructing the ECG signal from undersampled data based on a compressive sensing framework that can reconstruct ECG signals precisely from fewer samples, so long as the signal is sparse or compressible. Together, these algorithms, operating in the context of our proposed framework, validate the effectiveness of our structured approach to the framework for wearable medical applications.
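A minimal sketch of the recovery step in such a compressive-sensing pipeline, using orthogonal matching pursuit (OMP) with a random sensing matrix. The dissertation's trained dictionary would replace the random matrix here, so everything below is an illustrative assumption, not its implementation.

import numpy as np

def omp(A, y, k):
    """Greedily recover a k-sparse s with y ≈ A @ s."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        As = A[:, support]
        s_hat, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ s_hat
    s = np.zeros(A.shape[1])
    s[support] = s_hat
    return s

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)         # random sensing matrix
y = A @ s_true                                       # undersampled measurements
print(np.linalg.norm(omp(A, y, k) - s_true))         # near zero on this toy case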

2 citations


Cites background from "ECG data compression techniques-a u..."

  • ...4.8 Average of CR and PRD over testing dataset (MIT-BIH arrhythmia database); 5.1 Corrected QT Interval; 5.2 Procedure of the proposed algorithm...


  • ...PRD [30] shows the reconstruction error as a percentage and is defined as...

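The definition elided in the excerpt above is, in its commonly used form (quoted from standard usage, not from the dissertation's text):

PRD = 100 \cdot \sqrt{ \frac{\sum_{n=1}^{N} (x[n] - \hat{x}[n])^2}{\sum_{n=1}^{N} x[n]^2} }

where x is the original signal and x̂ its reconstruction.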

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
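A minimal sketch of the minimum-redundancy construction described above, repeatedly merging the two lowest-weight subtrees; the example weights are illustrative.

import heapq

def huffman(freqs):
    """freqs: dict symbol -> weight; returns dict symbol -> bitstring."""
    heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two lowest-weight subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + bits for s, bits in c1.items()}
        merged.update({s: '1' + bits for s, bits in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

print(huffman({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}))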

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
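A minimal sketch of the all-pole analysis described above, using the autocorrelation method solved by the Levinson-Durbin recursion; it also yields the reflection (PARCOR) coefficients mentioned for transmission. The order p and the random test signal are illustrative assumptions.

import numpy as np

def lpc(x, p):
    """Autocorrelation-method LPC: returns A(z) coefficients, PARCORs, error."""
    n = len(x)
    r = np.array([np.dot(x[:n - i], x[i:]) for i in range(p + 1)])
    a = np.zeros(p + 1)
    a[0] = 1.0
    e = r[0]
    parcor = []
    for i in range(1, p + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / e    # reflection coefficient
        parcor.append(k)
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]
        e *= 1.0 - k * k                     # prediction error decreases
    return a, parcor, e

x = np.random.randn(400)                     # stand-in for a signal frame
a, ks, err = lpc(x, 4)
print(a, ks, err)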

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
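A toy sketch of the interval-narrowing idea behind arithmetic coding, using float arithmetic on a fixed model for short messages only; production coders use integer arithmetic with renormalization and adaptive models.

def arithmetic_encode(message, probs):
    """Return one number in [0, 1) identifying message under a fixed model."""
    # cumulative sub-interval for each symbol
    edges, cum = {}, 0.0
    for s, p in probs.items():
        edges[s] = (cum, cum + p)
        cum += p
    lo, hi = 0.0, 1.0
    for s in message:
        s_lo, s_hi = edges[s]
        width = hi - lo
        lo, hi = lo + width * s_lo, lo + width * s_hi   # narrow the interval
    return (lo + hi) / 2

print(arithmetic_encode("abba", {'a': 0.6, 'b': 0.4}))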

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
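As a small illustration of one criterion from this list, here is a hedged sketch comparing the energy compaction of three of the transforms named above on a smooth test signal; the signal and coefficient budget are illustrative.

import numpy as np
from scipy.fft import fft, dct
from scipy.linalg import hadamard

n = 64
x = np.cos(np.linspace(0, 3 * np.pi, n))        # smooth test signal

def top_k_energy_fraction(c, k=8):
    e = np.abs(c) ** 2
    return np.sort(e)[::-1][:k].sum() / e.sum()

coeffs = {
    'DFT': fft(x) / np.sqrt(n),                 # unitary scaling
    'DCT': dct(x, norm='ortho'),
    'WHT': hadamard(n) @ x / np.sqrt(n),        # Walsh-Hadamard
}
for name, c in coeffs.items():
    print(f"{name}: {top_k_energy_fraction(c):.3f} of energy in top 8 coefficients")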

928 citations