Journal ArticleDOI

A Simple but Efficient EEG Data Compression Algorithm for Neuromorphic Applications

03 May 2020-Iete Journal of Research (Taylor & Francis)-Vol. 66, Iss: 3, pp 303-314
TL;DR: A computationally simple and novel methodology, the Normalized Spatial Pseudo Codec (n-SPC), is proposed to compress MCEEG signals for sleep spindle detection; results indicate that the algorithm exhibits good storage efficiency and decompressed signal quality.
Abstract: Widespread use of Multichannel Electroencephalograph (MCEEG) in diversified fields ranging from clinical studies to Brain Computer Interface (BCI) application, has put in a lot of thrust in data pr...
Citations
Journal ArticleDOI
TL;DR: A hardware-efficient, dedicated human emotion classification processor for CNDs, featuring a look-up-table based logarithmic division unit (LDU) to represent division features in machine learning (ML) applications.
Abstract: Chronic neurological disorders (CNDs) are lifelong diseases that cannot be eradicated, but their severe effects can be alleviated by early preemptive measures. CNDs such as Alzheimer's disease, Autism Spectrum Disorder (ASD), and Amyotrophic Lateral Sclerosis (ALS) are chronic ailments of the central nervous system that cause the degradation of emotional and cognitive abilities. Long-term continuous monitoring with neuro-feedback of human emotions is crucial in mitigating the harmful effects of CNDs on patients. This paper presents a hardware-efficient, dedicated human emotion classification processor for CNDs. Scalp EEG is used for emotion classification on the valence and arousal scales. A linear support vector machine classifier is used with the power spectral density, the logarithmic interhemispheric power spectral ratio, and the interhemispheric power spectral difference of eight EEG channel locations suitable for a wearable non-invasive classification system. A look-up-table based logarithmic division unit (LDU) is used to represent the division features in machine learning (ML) applications. The implemented LDU reduces the cost of integer division by 34% for ML applications. The implemented emotion classification processor achieved accuracies of 72.96% and 73.14% for valence and arousal classification, respectively, on multiple publicly available datasets. The 2 × 3 mm² processor is fabricated in a 0.18 μm 1P6M CMOS process with power and energy consumption of 2.04 mW and 16 μJ/classification, respectively, for 8-channel operation.
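The idea behind a look-up-table based logarithmic division unit can be sketched in a few lines: a quotient a/b is rewritten as 2^(log2 a − log2 b), and log2 is approximated from a small table indexed by the mantissa, so no hardware divider is needed. The table size and fixed-point details below are illustrative assumptions, not the parameters of the fabricated processor.

```python
import math

LUT_BITS = 8  # table resolution (assumed for illustration)
# Table of log2(1 + k/2^LUT_BITS) for mantissa values in [1, 2)
LOG2_LUT = [math.log2(1.0 + k / 2**LUT_BITS) for k in range(2**LUT_BITS)]

def lut_log2(x: int) -> float:
    """Approximate log2 of a positive integer via the look-up table."""
    e = x.bit_length() - 1                          # integer part: MSB position
    frac = (x << LUT_BITS >> e) - (1 << LUT_BITS)   # mantissa index into the table
    return e + LOG2_LUT[frac]

def lut_divide(a: int, b: int) -> float:
    """Approximate a/b as 2**(log2 a - log2 b), avoiding a divider."""
    return 2.0 ** (lut_log2(a) - lut_log2(b))
```

In hardware the final exponentiation is typically folded away as well (e.g. the classifier consumes the log-domain difference directly, as with the logarithmic interhemispheric power spectral ratio), which is where the cost saving over integer division comes from.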

34 citations


Cites background or methods from "A Simple but Efficient EEG Data Com..."

  • ...and later visualization by neurologist, efficient data compression techniques have been adopted in the literature [49], for applications like sleep [50], [51] and seizure [38], where long term Fig....


  • ...[11], [49], and generic applications [52], [53] are implemented....


Journal ArticleDOI
TL;DR: In this article, a survey was conducted between December 2020 and January 2021 among German epilepsy centers using well-established customer satisfaction (CS) and quality assurance metrics, and the greatest potential for improvement was identified for software and hardware stability as well as customer service.

3 citations

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an optimal tensor truncation method for performing compression of the data, which first reshapes the multi-channel EEG signal as a tensor and initially identifies the optimum size of the compressed tensor.

2 citations

Journal ArticleDOI
TL;DR: The principal goal of this study is to implement strategies for low power consumption rates during the neurostimulation device’s smooth and uninterrupted operation as well as during data transmission.
Abstract: Neurostimulation devices applied for the treatment of epilepsy, which collect, encode, temporarily store, and transfer electroencephalographic (EEG) signals recorded intracranially from epileptic patients, suffer from short battery life spans. The principal goal of this study is to implement strategies for low power consumption rates during the device's smooth and uninterrupted operation as well as during data transmission. Our approach is organised in three basic levels. The first level regards the initial modelling and creation of the template for the following two stages. The second level regards the development of code for programming integrated circuits and simulation. The third and final stage regards the transmitter's implementation at the evaluation level. In particular, multiple software tools and devices are involved in this phase in order to achieve realistic performance. Our research aims to evolve such technologies so that they can transmit data wirelessly while remaining energy efficient.

1 citation

Proceedings ArticleDOI
24 Sep 2021
TL;DR: In this article, the authors studied the effect of delta encoding on power dissipation for wireless transmission from implantable devices and showed that up to 23% power savings are possible for a negligible power penalty due to the delta encoding process.
Abstract: This paper studies the Delta encoding scheme and its effect on power dissipation, for wireless transmission from implantable devices. The study was performed on data from electroencephalographic signals. For the implementation of the proposed system, the design approach followed three phases. The first design phase is related to the initial modelling. The second phase included the development of the hardware description code for the proposed system. The third and last phase is related to the evaluation of the transmitted signal and the measurement of the power dissipation. The results showed that up to 23% power savings are possible for a negligible power penalty due to the delta encoding process.
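The delta encoding scheme studied above can be illustrated with a generic sketch (this is not the authors' hardware implementation): each sample after the first is replaced by its difference from the previous sample. Because consecutive EEG samples are strongly correlated, the differences are small and can be transmitted with fewer bits, which is where the radio power saving comes from.

```python
def delta_encode(samples):
    """Replace each sample (after the first) with its difference from the previous one."""
    return samples[:1] + [cur - prev for prev, cur in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Invert delta encoding by cumulative summation."""
    out = deltas[:1]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

For example, `[100, 102, 101, 105]` encodes to `[100, 2, -1, 4]`: the deltas have a much smaller dynamic range than the raw samples, so they fit in narrower transmit words.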

1 citation

References
Journal ArticleDOI
TL;DR: The newly inaugurated Research Resource for Complex Physiologic Signals (PhysioNet), as mentioned in this paper, was created under the auspices of the National Center for Research Resources (NCRR).
Abstract: The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of He...

11,407 citations

Journal ArticleDOI
TL;DR: An efficient algorithm is proposed that allows the computation of the ICA of a data matrix in polynomial time and may be seen as an extension of principal component analysis (PCA).

8,522 citations


"A Simple but Efficient EEG Data Com..." refers methods in this paper

  • ...Other mechanisms for data reduction include dimensionality reduction techniques like Principal Component Analysis (PCA) [18], Independent Component Analysis (ICA) [19,20] and Compressive Sensing [21,22]....


  • ...Recent works in MCEEG compression include Tensor Decomposition using Parallel Factor Decomposition (PARAFAC), Singular Value Decomposition (SVD) [27], Wavelet transform with Volumetric coding [28], hybrid system of PCA, Fast ICA with SPIHT coding [29] and Fast Discrete Cosine Transform (fDCT) [30], Differential Pulse Code Modulation (DPCM) with kNN clustering [31], and Spatial Pseudo coders using Logarithmic Normalization [32]....


  • ...Majority of EEG compression algorithms available in the literature commonly employ PCA, Fast ICA and Compressed sensing to get an equivalent structure of the actual data....

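As a toy illustration of the PCA-style dimensionality reduction these excerpts refer to, the sketch below compresses a synthetic multichannel signal by keeping only its top principal components; the channel count, source count, and retained rank are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic MCEEG-like data: 8 channels that are noisy mixtures of 3 latent sources.
sources = rng.standard_normal((3, 1000))
mixing = rng.standard_normal((8, 3))
X = mixing @ sources + 0.01 * rng.standard_normal((8, 1000))

# PCA via SVD of the channel-centred data.
mean = X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

k = 3                                   # retained components: 8 channels -> 3 rows
scores = U[:, :k].T @ (X - mean)        # compressed representation (k x samples)
X_hat = U[:, :k] @ scores + mean        # reconstruction at the decoder

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Storing `scores` plus the small basis `U[:, :k]` and `mean` in place of `X` is the "equivalent structure of the actual data" the quoted passage alludes to; the relative reconstruction error `err` stays small whenever the channels are strongly correlated.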

Book
18 May 2001
TL;DR: Independent component analysis, as presented in this chapter, is a statistical generative model: essentially a proper probabilistic formulation of the ideas underpinning sparse coding, which can be interpreted as providing a Bayesian prior.
Abstract: In this chapter, we discuss a statistical generative model called independent component analysis. It is basically a proper probabilistic formulation of the ideas underpinning sparse coding. It shows how sparse coding can be interpreted as providing a Bayesian prior, and answers some questions which were not properly answered in the sparse coding framework.

8,333 citations

Journal ArticleDOI
TL;DR: The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods.
Abstract: Embedded zerotree wavelet (EZW) coding, introduced by Shapiro (see IEEE Trans. Signal Processing, vol.41, no.12, p.3445, 1993), is a very effective and computationally simple technique for image compression. We offer an alternative explanation of the principles of its operation, so that the reasons for its excellent performance can be better understood. These principles are partial ordering by magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and exploitation of self-similarity across different scales of an image wavelet transform. Moreover, we present a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than our previously reported extension of EZW that surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained through much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only small loss in performance, by omitting entropy coding of the bit stream by the arithmetic code.

5,890 citations


"A Simple but Efficient EEG Data Com..." refers methods in this paper

  • ...Secondary operations to enhance the compression further can be realized by exploiting various coding techniques like Arithmetic [23], Set Partitioning in Hierarchical Trees (SPIHT) [24] and predictors [25,26]....


  • ...Recent works in MCEEG compression include Tensor Decomposition using Parallel Factor Decomposition (PARAFAC), Singular Value Decomposition (SVD) [27], Wavelet transform with Volumetric coding [28], hybrid system of PCA, Fast ICA with SPIHT coding [29] and Fast Discrete Cosine Transform (fDCT) [30], Differential Pulse Code Modulation (DPCM) with kNN clustering [31], and Spatial Pseudo coders using Logarithmic Normalization [32]....


Reference EntryDOI
31 Aug 2012
TL;DR: A statistical generative model called independent component analysis is discussed, which shows how sparse coding can be interpreted as providing a Bayesian prior, and answers some questions which were not properly answered in the sparse coding framework.
Abstract: Independent component models have gained increasing interest in various fields of applications in recent years. The basic independent component model is a semiparametric model assuming that a p-variate observed random vector is a linear transformation of an unobserved vector of p independent latent variables. This linear transformation is given by an unknown mixing matrix, and one of the main objectives of independent component analysis (ICA) is to estimate an unmixing matrix by means of which the latent variables can be recovered. In this article, we discuss the basic independent component model in detail, define the concepts and analysis tools carefully, and consider two families of ICA estimates. The statistical properties (consistency, asymptotic normality, efficiency, robustness) of the estimates can be analyzed and compared via the so called gain matrices. Some extensions of the basic independent component model, such as models with additive noise or models with dependent observations, are briefly discussed. The article ends with a short example. Keywords: blind source separation; fastICA; independent component model; independent subspace analysis; mixing matrix; overcomplete ICA; undercomplete ICA; unmixing matrix
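In symbols, the basic independent component model described in this abstract can be written as:

```latex
% Observed p-variate vector x is a linear mix of p independent latent variables s
x = A s, \qquad x, s \in \mathbb{R}^{p}, \quad s_1, \dots, s_p \text{ mutually independent},
% where A is the unknown mixing matrix. ICA estimates an unmixing matrix W
% (ideally W = A^{-1}) so that the latent variables are recovered as
\hat{s} = W x .
```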

2,976 citations