Journal ArticleDOI

ECG data compression techniques-a unified approach

TL;DR: The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods; a framework for evaluation and comparison of ECG compression schemes is also presented.
Abstract: Electrocardiogram (ECG) compression techniques are compared, and a unified view of these techniques is established. ECG data compression schemes are presented in two major groups: direct data compression and transformation methods. The direct data compression techniques are ECG differential pulse code modulation (DPCM) and entropy coding, AZTEC, Turning-point, CORTES, Fan and SAPA algorithms, peak-picking, and cycle-to-cycle compression methods. The transformation methods include Fourier, Walsh, and Karhunen-Loeve transforms. The theoretical bases behind the direct ECG data compression schemes are presented and classified into three categories: tolerance-comparison compression, DPCM, and entropy coding methods. A framework for evaluation and comparison of ECG compression schemes is presented.
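Because the survey's evaluation framework rests on rate and distortion figures that recur throughout the cited works (see, e.g., the CR and PRD criteria quoted below), a minimal sketch of those two measures follows; the function names and the mean-removal convention in the PRD are assumptions made here, not definitions taken from the paper.

```python
import numpy as np

def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    """CR: number of bits in the original record per bit in the compressed one."""
    return original_bits / compressed_bits

def prd(x, x_rec, remove_mean=True) -> float:
    """Percent RMS difference between original and reconstructed signals.
    Removing the mean (one common convention) stops a DC offset from
    making the relative error look artificially small."""
    x = np.asarray(x, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    ref = x - x.mean() if remove_mean else x
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(ref ** 2))

# Toy example: a 2 s, 500 Hz, 10-bit record compressed to 2000 bits
x = np.sin(np.linspace(0, 4 * np.pi, 1000))
x_rec = x + np.random.normal(0.0, 0.01, x.shape)   # stand-in reconstruction
print(compression_ratio(1000 * 10, 2000), prd(x, x_rec))
```

A lower PRD at a given CR indicates a better compressor, which is the usual basis on which the surveyed schemes are compared.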
Citations
Proceedings ArticleDOI
28 May 2012
TL;DR: Experiments indicate that the approach compresses the ECG signal effectively while preserving good fidelity in the reconstructed signal, which the authors support by comparing their approach with traditional ones.
Abstract: We propose a novel scheme for ECG compression based on gradient difference for use in body sensor networks (BSNs). As a newly emerged application, a BSN can continuously monitor patients' physiological signals without hindering their everyday life; however, it is constrained in size, power, and storage and computation capability. The ECG is a nontrivial signal collected by a BSN, and continuous collection produces a large data flow that burdens both the sensor nodes and the central node, so compression is needed to relieve this burden. Because of these constraints, most traditional algorithms are not adaptive enough; a simple, real-time, and effective approach is needed. This paper introduces an approach that meets these requirements. Through various experiments we found that it compresses the ECG signal effectively while keeping good fidelity in the reconstructed signal, and we support this with comparisons between our approach and traditional ones. The algorithm has not yet been tested on clinical ECG signals, and there is still room for performance improvement.
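The abstract does not specify the gradient-difference coder itself, so the sketch below only illustrates the general family it belongs to: a simple difference/threshold compressor that keeps a sample when it departs from the last kept value by more than a tolerance, in the spirit of the tolerance-comparison methods surveyed in the cited paper. The threshold, the (index, value) storage format, and the zero-order-hold reconstruction are assumptions, not the authors' algorithm.

```python
import numpy as np

def threshold_compress(x, threshold=0.02):
    """Keep a sample only when it departs from the last kept value by more
    than `threshold`; store (index, value) pairs plus the signal length."""
    kept = [(0, float(x[0]))]
    last = float(x[0])
    for i, v in enumerate(x[1:], start=1):
        if abs(v - last) > threshold:
            kept.append((i, float(v)))
            last = float(v)
    return kept, len(x)

def threshold_reconstruct(kept, n):
    """Zero-order-hold reconstruction from the kept samples."""
    y = np.empty(n)
    for (i, v), nxt in zip(kept, kept[1:] + [(n, None)]):
        y[i:nxt[0]] = v
    return y

x = np.sin(np.linspace(0, 2 * np.pi, 500)) + 0.01 * np.random.randn(500)
kept, n = threshold_compress(x)
y = threshold_reconstruct(kept, n)
print(f"kept {len(kept)} of {n} samples")
```

In practice the kept indices and values would themselves be difference- and entropy-coded; the sketch stops at sample selection.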

2 citations


Cites background from "ECG data compression techniques-a u..."

  • ...Our criteria are CR,PRD and simple comments, some data is from [3]....

    [...]

Book ChapterDOI
21 Jul 2016
TL;DR: The investigation and experimental results, using clinical-quality synthetic data generated by the novel fECG signal generator, suggest that adaptive neuro-fuzzy inference systems could produce a significant advancement in fetal monitoring during pregnancy and childbirth.
Abstract: The abdominal fetal electrocardiogram (fECG) conveys valuable information that can aid clinicians with the diagnosis and monitoring of a potentially at-risk fetus during pregnancy and in childbirth. This chapter primarily focuses on noninvasive (external and indirect) transabdominal fECG monitoring. Even though it is the preferred monitoring method, unlike its classical invasive (internal and direct) counterpart (transvaginal monitoring), it may be contaminated by a variety of undesirable signals that deteriorate its quality and reduce its value in reliable detection of hypoxic conditions in the fetus. A stronger maternal electrocardiogram (the mECG signal), along with technical and biological artifacts, constitutes the main interfering signal components that diminish the diagnostic quality of the transabdominal fECG recordings. Currently, transabdominal fECG monitoring relies solely on the determination of the fetus' pulse or heart rate (FHR) by detecting RR intervals and does not take into account the morphology and duration of the fECG waves (P, QRS, T), intervals, and segments, which collectively convey very useful diagnostic information in adult cardiology. The main reason for the exclusion of these valuable pieces of information from clinical practice in determining the fetus' status is that there are no sufficiently reliable and well-proven techniques for accurate extraction of fECG signals and robust derivation of these informative features. To address this shortcoming in fetal cardiology, we focus on adaptive signal processing methods and pay particular attention to nonlinear approaches that carry great promise in improving the quality of transabdominal fECG monitoring and consequently impacting fetal cardiology in clinical practice. Our investigation and experimental results using clinical-quality synthetic data generated by our novel fECG signal generator suggest that adaptive neuro-fuzzy inference systems could produce a significant advancement in fetal monitoring during pregnancy and childbirth. The possibility of using a single device to leverage two advanced methods of fetal monitoring, namely noninvasive cardiotocography (CTG) and ST segment analysis (STAN), simultaneously to detect fetal hypoxic conditions is very promising.
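The abstract points to adaptive signal processing for suppressing the maternal ECG but gives no algorithmic detail; below is a minimal sketch of a classical LMS adaptive interference canceller, used here as a simple linear stand-in for the adaptive neuro-fuzzy systems the chapter actually investigates. The filter order, step size, and toy signals are illustrative assumptions.

```python
import numpy as np

def lms_cancel(abdominal, maternal_ref, order=16, mu=0.01):
    """LMS adaptive interference cancellation.

    abdominal    : abdominal recording = fECG + transformed mECG + noise
    maternal_ref : reference mECG, e.g. from a chest lead
    Returns the error signal, i.e. the estimated fetal residual.
    """
    w = np.zeros(order)                         # adaptive filter taps
    pad = np.concatenate([np.zeros(order - 1), maternal_ref])
    fecg_est = np.zeros(len(abdominal))
    for n in range(len(abdominal)):
        x = pad[n:n + order][::-1]              # most recent reference samples first
        y = w @ x                               # estimate of the mECG leakage
        e = abdominal[n] - y                    # what is left: fetal ECG + noise
        w += 2 * mu * e * x                     # LMS weight update
        fecg_est[n] = e
    return fecg_est

# Toy demo: sinusoidal "maternal" interference leaking into the abdominal lead
t = np.arange(2000) / 500.0
maternal = np.sin(2 * np.pi * 1.2 * t)
fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)
abdominal = fetal + 0.8 * maternal
print(np.std(lms_cancel(abdominal, maternal)[500:] - fetal[500:]))
```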

2 citations

Journal ArticleDOI
TL;DR: In this paper, a beat-wise MECG data compression scheme based on adaptive Fourier decomposition (AFD) is proposed; to reduce dimensionality, an ECG beat is treated as a multiagent, upon which principal component (PC) analysis is applied in nonlinear space.
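Neither AFD nor the nonlinear principal-component step is described in this TL;DR, so purely as an illustration of beat-wise dimensionality reduction, here is a sketch that compresses a matrix of time-aligned ECG beats by keeping the leading principal components (the linear Karhunen-Loeve idea surveyed in the main paper). The beat alignment, component count, and synthetic beats are assumptions.

```python
import numpy as np

def compress_beats(beats, k=8):
    """beats: (n_beats, beat_len) matrix of time-aligned ECG beats.
    Returns per-beat coefficients, the retained basis, and the mean beat."""
    mean = beats.mean(axis=0)
    centered = beats - mean
    # SVD of the beat matrix gives the principal (Karhunen-Loeve) basis
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                        # k leading components
    coeffs = centered @ basis.T           # only k numbers stored per beat
    return coeffs, basis, mean

def reconstruct_beats(coeffs, basis, mean):
    return coeffs @ basis + mean

# Toy demo with synthetic, slightly varying beats
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
beats = np.stack([np.exp(-((t - 0.5) ** 2) / 0.002) * (1 + 0.1 * rng.standard_normal())
                  for _ in range(50)])
c, b, m = compress_beats(beats, k=4)
err = np.abs(reconstruct_beats(c, b, m) - beats).max()
print(f"{beats.size} samples -> {c.size + b.size + m.size} numbers, max error {err:.3e}")
```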

2 citations

Journal ArticleDOI
TL;DR: Eye movement and auditory brainstem response signals recorded for balance and hearing investigations were used as a medical test battery for several types of lossy compression techniques.

2 citations

Journal ArticleDOI
TL;DR: This research assessed the influence of lossy compression on medically interesting parameter values computed from eye movement signals and found that high compression ratios, at bit rates lower than 1.5 bits per sample, produced results without significant changes to the medical parameter values.

2 citations

References
Journal ArticleDOI
TL;DR: This final installment of the paper considers the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now.
Abstract: In this final installment of the paper we consider the case where the signals or the messages or both are continuously variable, in contrast with the discrete nature assumed until now. To a considerable extent the continuous case can be obtained through a limiting process from the discrete case by dividing the continuum of messages and signals into a large but finite number of small regions and calculating the various parameters involved on a discrete basis. As the size of the regions is decreased these parameters in general approach as limits the proper values for the continuous case. There are, however, a few new effects that appear and also a general change of emphasis in the direction of specialization of the general results to particular cases.
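As a worked illustration of this limiting process (the notation here is an assumption, not Shannon's), partition the range of a continuous amplitude into cells of width \Delta, so the quantized variable takes value x_i with probability approximately p(x_i)\Delta:

```latex
H(X_\Delta) \;=\; -\sum_i p(x_i)\,\Delta\,\log\!\big(p(x_i)\,\Delta\big)
\;\approx\; -\int p(x)\log p(x)\,dx \;-\; \log\Delta .
```

As \Delta \to 0 the first term tends to the differential entropy of the continuous distribution, while the -\log\Delta term grows without bound; this is one of the "new effects" of the continuous case, and it is why only differences of such entropies (as in mutual information or rate) retain an absolute meaning.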

65,425 citations

Journal ArticleDOI
01 Sep 1952
TL;DR: A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
Abstract: An optimum method of coding an ensemble of messages consisting of a finite number of members is developed. A minimum-redundancy code is one constructed in such a way that the average number of coding digits per message is minimized.
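A minimal sketch of how such a minimum-redundancy (Huffman) code is constructed from symbol frequencies; the heap-based merging and the tie-breaking counter are ordinary implementation choices, not details taken from this abstract.

```python
import heapq
from collections import Counter

def huffman_code(message: str) -> dict:
    """Build a prefix code whose average codeword length is minimal for the
    empirical symbol frequencies of `message`."""
    freq = Counter(message)
    # Each heap entry: (weight, tie_breaker, {symbol: codeword_so_far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate single-symbol message
        return {next(iter(freq)): "0"}
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Prepend one bit to every codeword in each merged subtree
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("abracadabra")
bits = sum(len(code[s]) for s in "abracadabra")
print(code, f"{bits} bits vs {8 * len('abracadabra')} bits uncoded")
```

Frequent symbols receive the shortest codewords, so the average number of coding digits per message symbol is minimized for these frequencies, which is the minimum-redundancy property described above.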

5,221 citations

Journal ArticleDOI
John Makhoul1
01 Apr 1975
TL;DR: This paper gives an exposition of linear prediction in the analysis of discrete signals, in which the signal is modeled as a linear combination of its past values and of present and past values of a hypothetical input to a system whose output is the given signal.
Abstract: This paper gives an exposition of linear prediction in the analysis of discrete signals. The signal is modeled as a linear combination of its past values and present and past values of a hypothetical input to a system whose output is the given signal. In the frequency domain, this is equivalent to modeling the signal spectrum by a pole-zero spectrum. The major part of the paper is devoted to all-pole models. The model parameters are obtained by a least squares analysis in the time domain. Two methods result, depending on whether the signal is assumed to be stationary or nonstationary. The same results are then derived in the frequency domain. The resulting spectral matching formulation allows for the modeling of selected portions of a spectrum, for arbitrary spectral shaping in the frequency domain, and for the modeling of continuous as well as discrete spectra. This also leads to a discussion of the advantages and disadvantages of the least squares error criterion. A spectral interpretation is given to the normalized minimum prediction error. Applications of the normalized error are given, including the determination of an "optimal" number of poles. The use of linear prediction in data compression is reviewed. For purposes of transmission, particular attention is given to the quantization and encoding of the reflection (or partial correlation) coefficients. Finally, a brief introduction to pole-zero modeling is given.
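As a compact illustration of the all-pole, least-squares formulation described above, and of why linear prediction suits DPCM-style ECG compression, the sketch below estimates predictor coefficients from the autocorrelation (Yule-Walker) normal equations and measures the prediction residual. The model order, the direct linear solve, and the toy signal are choices made here for brevity.

```python
import numpy as np

def lpc(x, order=4):
    """All-pole linear prediction by the autocorrelation method: solve the
    Yule-Walker normal equations R a = r for the coefficients that minimize
    the squared error of predicting x[n] from its `order` previous samples."""
    x = np.asarray(x, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def predict(x, a):
    """One-step prediction x_hat[n] = sum_k a[k] * x[n-k]."""
    p = len(a)
    xhat = np.zeros(len(x))
    for n in range(p, len(x)):
        xhat[n] = np.dot(a, x[n - p:n][::-1])   # x[n-1], ..., x[n-p]
    return xhat

# Quasi-periodic stand-in signal; a DPCM coder would quantize and code the residual
x = np.sin(2 * np.pi * 0.05 * np.arange(400)) + 0.01 * np.random.randn(400)
a = lpc(x, order=4)
residual = x - predict(x, a)
print("residual RMS:", np.sqrt(np.mean(residual[4:] ** 2)))
```

The small residual variance relative to the signal is what makes the prediction error cheaper to quantize and encode than the signal itself.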

4,206 citations

Journal ArticleDOI
TL;DR: The state of the art in data compression is arithmetic coding, not the better-known Huffman method, which gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
Abstract: The state of the art in data compression is arithmetic coding, not the better-known Huffman method. Arithmetic coding gives greater compression, is faster for adaptive models, and clearly separates the model from the channel encoding.
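To make the comparison with Huffman coding concrete, here is a toy floating-point arithmetic coder that narrows an interval according to a fixed order-0 model; it is only a didactic sketch (practical coders use integer arithmetic with renormalization to avoid precision loss), and every name in it is chosen here for illustration.

```python
from bisect import bisect_right
from collections import Counter

def build_model(message):
    """Order-0 model: cumulative probability interval for each symbol."""
    freq = Counter(message)
    total = sum(freq.values())
    cum, lows, acc = {}, [], 0.0
    for s in sorted(freq):
        p = freq[s] / total
        cum[s] = (acc, acc + p)
        lows.append((acc, s))
        acc += p
    return cum, lows

def ac_encode(message, cum):
    """Narrow [0, 1) successively; any number in the final interval codes the message."""
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        lo, hi = cum[s]
        low, high = low + span * lo, low + span * hi
    return (low + high) / 2

def ac_decode(value, cum, lows, n):
    out = []
    for _ in range(n):
        i = bisect_right([lo for lo, _ in lows], value) - 1   # containing interval
        s = lows[i][1]
        lo, hi = cum[s]
        value = (value - lo) / (hi - lo)    # rescale back into the unit interval
        out.append(s)
    return "".join(out)

msg = "abracadabra"
cum, lows = build_model(msg)
code_point = ac_encode(msg, cum)
print(code_point, ac_decode(code_point, cum, lows, len(msg)))
```

Because the model only supplies intervals, it can be swapped (e.g., for an adaptive one) without touching the encoder, which is the model/coder separation the abstract highlights.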

3,188 citations

Proceedings ArticleDOI
12 Apr 1976
TL;DR: The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
Abstract: A tutorial-review paper on discrete orthogonal transforms and their applications in digital signal and image (both monochrome and color) processing is presented. Various transforms such as discrete Fourier, discrete cosine, Walsh-Hadamard, slant, Haar, discrete linear basis, Hadamard-Haar, rapid, lower triangular, generalized Haar, slant Haar and Karhunen-Loeve are defined and developed. Pertinent properties of these transforms such as power spectra, cyclic and dyadic convolution and correlation are outlined. Efficient algorithms for fast implementation of these transforms based on matrix partitioning or matrix factoring are presented. The application of these transforms in speech and image processing, spectral analysis, digital filtering (linear, nonlinear, optimal and suboptimal), nonlinear systems analysis, spectrography, digital holography, industrial testing, spectrometric imaging, feature selection, and pattern recognition is presented. The utility and effectiveness of these transforms are evaluated in terms of some standard performance criteria such as computational complexity, variance distribution, mean-square error, correlated rms error, rate distortion, data compression, classification error, and digital hardware realization.
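To make the "data compression" performance criterion concrete, here is a small sketch of transform coding with one of the transforms listed above, the discrete cosine transform (via SciPy): keep only the largest coefficients of a block and measure the reconstruction error. The block length and retention fraction are arbitrary choices for illustration.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_compress(x, keep=0.25):
    """Orthonormal DCT of the block, zeroing all but the largest
    `keep` fraction of coefficients (a crude transform coder)."""
    c = dct(x, norm="ortho")
    k = max(1, int(keep * len(c)))
    idx = np.argsort(np.abs(c))[:-k]      # indices of the smallest coefficients
    c[idx] = 0.0
    return c                               # in practice: quantize + entropy-code

def dct_reconstruct(c):
    return idct(c, norm="ortho")

x = np.sin(2 * np.pi * 3 * np.linspace(0, 1, 256)) + 0.05 * np.random.randn(256)
c = dct_compress(x, keep=0.25)
x_rec = dct_reconstruct(c)
rms = np.sqrt(np.mean((x - x_rec) ** 2))
print(f"nonzero coefficients: {np.count_nonzero(c)} / {len(c)}, RMS error {rms:.3f}")
```

Any of the other orthogonal transforms named above could be substituted here; they differ mainly in how well they concentrate the signal energy into few coefficients and in the cost of their fast algorithms.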

928 citations