Journal ArticleDOI

Generalized Hidden-Mapping Transductive Transfer Learning for Recognition of Epileptic Electroencephalogram Signals

TL;DR
The generalized hidden-mapping transductive learning method is proposed to realize transfer learning for several classical intelligent models, including feedforward neural networks, fuzzy systems, and kernelized linear models; these models can be trained effectively even when the available data are insufficient for model training.
Abstract
Electroencephalogram (EEG) signal identification based on intelligent models is an important means of epilepsy detection. In the recognition of epileptic EEG signals, traditional intelligent methods usually assume that the training dataset and the testing dataset have the same distribution and that the data available for training are adequate. However, these two conditions cannot always be met in practice, which degrades the ability of the resulting intelligent recognition model to detect epileptic EEG signals. To overcome this issue, an effective strategy is to introduce transfer learning into the construction of the intelligent models, where knowledge is learned from related scenes (source domains) to enhance the performance of the model trained in the current scene (target domain). Although transfer learning has been used in EEG signal identification, many existing transfer learning techniques are designed only for a specific intelligent model, which limits their applicability to other classical intelligent models. To extend the scope of application, the generalized hidden-mapping transductive learning method is proposed to realize transfer learning for several classical intelligent models, including feedforward neural networks, fuzzy systems, and kernelized linear models. These intelligent models can be trained effectively by the proposed method even when the available data are insufficient for model training, and the generalization ability of the trained models is also enhanced by transductive learning. A number of experiments are carried out to demonstrate the effectiveness of the proposed method in epileptic EEG recognition. The results show that the method is highly competitive with, or superior to, some existing state-of-the-art methods.
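To make the knowledge-leverage idea in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the authors' algorithm: it assumes a random-feature hidden mapping and a ridge-style objective with an extra term that pulls the target-domain output weights toward weights learned on a related source domain. The names hidden_map, fit_with_transfer, lam, and mu are illustrative assumptions.

```python
import numpy as np

def hidden_map(X, W, b):
    # Illustrative random-feature hidden mapping, standing in for the hidden
    # layer of a feedforward network, the rule firing strengths of a fuzzy
    # system, or an (approximate) kernel feature map.
    return np.tanh(X @ W + b)

def fit_with_transfer(H, y, beta_src, lam=1.0, mu=1.0):
    # Closed-form minimizer of
    #   ||H @ beta - y||^2 + lam * ||beta||^2 + mu * ||beta - beta_src||^2,
    # a ridge-style objective whose last term pulls the target-domain output
    # weights toward the source-domain weights when target data are scarce.
    d = H.shape[1]
    A = H.T @ H + (lam + mu) * np.eye(d)
    rhs = H.T @ y + mu * beta_src
    return np.linalg.solve(A, rhs)

rng = np.random.default_rng(0)
d_in, d_hid = 6, 40
W = rng.normal(size=(d_in, d_hid))
b = rng.normal(size=d_hid)

# Plentiful source-domain data, scarce target-domain data (toy surrogate for
# EEG features from a related scene vs. the current scene).
X_src = rng.normal(size=(500, d_in))
y_src = np.sign(X_src[:, 0] + 0.1 * rng.normal(size=500))
X_tgt = rng.normal(size=(20, d_in))
y_tgt = np.sign(X_tgt[:, 0] + 0.1 * rng.normal(size=20))

# Plain ridge fit on the source domain (mu=0 disables the transfer term).
beta_src = fit_with_transfer(hidden_map(X_src, W, b), y_src,
                             np.zeros(d_hid), lam=1.0, mu=0.0)
# Target-domain fit that leverages the source-domain weights.
beta_tgt = fit_with_transfer(hidden_map(X_tgt, W, b), y_tgt,
                             beta_src, lam=1.0, mu=10.0)
pred = np.sign(hidden_map(X_tgt, W, b) @ beta_tgt)
```

In this sketch, increasing mu trades fitting the scarce target data against inheriting more knowledge from the source domain; with mu set to zero the fit reduces to ordinary ridge regression on whichever domain is supplied.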


Citations
Journal ArticleDOI

Making Sense of Spatio-Temporal Preserving Representations for EEG-Based Human Intention Recognition

TL;DR: Two deep learning-based frameworks with novel spatio-temporal preserving representations of raw EEG streams are introduced to precisely identify human intentions; they achieve high accuracy and outperform a set of state-of-the-art and baseline models.
Journal ArticleDOI

A review on transfer learning in EEG signal analysis

TL;DR: Four main methods of transfer learning are described and their practical applications in EEG signal analysis in recent years are explored.
Journal ArticleDOI

A sparse stacked denoising autoencoder with optimized transfer learning applied to the fault diagnosis of rolling bearings

TL;DR: The results for data from the Case Western Reserve University Bearing Data Center show that the proposed SSDAE-TL algorithm is feasible and easy to implement for the fault diagnosis of bearings.
Journal ArticleDOI

Unsupervised transfer learning for anomaly detection: Application to complementary operating condition transfer

TL;DR: The proposed end-to-end framework uses adversarial deep learning to ensure alignment of the different units' distributions and introduces a new loss, inspired by a dimensionality reduction tool, to enforce the conservation of the inherent variability of each dataset.
Journal ArticleDOI

Error Correction Regression Framework for Enhancing the Decoding Accuracies of Ear-EEG Brain–Computer Interfaces

TL;DR: It is demonstrated that a steady-state visual evoked potential (SSVEP) BCI based on ear-EEG can achieve reliable performance with the proposed error correction regression (ECR) framework.
References
Journal ArticleDOI

Multilayer feedforward networks are universal approximators

TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
Journal ArticleDOI

A Survey on Transfer Learning

TL;DR: The relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift are discussed.
Journal ArticleDOI

Ridge regression: biased estimation for nonorthogonal problems

TL;DR: In this paper, an estimation procedure based on adding small positive quantities to the diagonal of X′X is proposed, together with the ridge trace, a method for showing in two dimensions the effects of nonorthogonality.
Journal ArticleDOI

A learning algorithm for continually running fully recurrent neural networks

TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.