Journal ArticleDOI

Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks

TLDR
The experimental results show that neural signatures associated with different emotions do exist and share commonality across sessions and individuals; the performance of deep models is also compared with that of shallow models.
Abstract
To investigate critical frequency bands and channels, this paper introduces deep belief networks (DBNs) to construct EEG-based emotion recognition models for three emotions: positive, neutral, and negative. We develop an EEG dataset acquired from 15 subjects. Each subject performs the experiment twice, at an interval of a few days. DBNs are trained with differential entropy features extracted from multichannel EEG data. We examine the weights of the trained DBNs and investigate the critical frequency bands and channels. Four different profiles of 4, 6, 9, and 12 channels are selected. The recognition accuracies of these four profiles are relatively stable, with a best accuracy of 86.65%, which is even better than that of the original 62 channels. The critical frequency bands and channels determined from the weights of the trained DBNs are consistent with existing observations. In addition, our experimental results show that neural signatures associated with different emotions do exist and share commonality across sessions and individuals. We compare the performance of deep models with shallow models. The average accuracies of DBN, SVM, LR, and KNN are 86.08%, 83.99%, 82.70%, and 72.60%, respectively.
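The differential entropy features mentioned in the abstract can be illustrated with a minimal sketch. Assuming each band-filtered EEG signal is approximately Gaussian, its differential entropy reduces to ½·log(2πeσ²). The function names, sampling rate, and band ranges below are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def differential_entropy(x):
    # For a Gaussian-distributed signal, DE = 0.5 * log(2 * pi * e * sigma^2)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_de_features(eeg, fs=200, bands=None):
    """Hypothetical helper: per-channel DE in each frequency band.

    eeg: array of shape (n_channels, n_samples)
    """
    if bands is None:
        # Conventional EEG band boundaries (Hz); exact ranges vary by study.
        bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
                 "beta": (14, 31), "gamma": (31, 50)}
    feats = {}
    for name, (lo, hi) in bands.items():
        # 4th-order Butterworth band-pass, zero-phase filtering
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=-1)
        feats[name] = np.apply_along_axis(differential_entropy, -1, filtered)
    return feats
```

In a pipeline like the one the abstract describes, the per-band, per-channel DE values would be concatenated into a feature vector and fed to the DBN (or to the SVM/LR/KNN baselines).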


Citations
Book ChapterDOI

Emotion Classification Using Xception and Support Vector Machine

TL;DR: The robust depth-wise separable convolution architecture Xception is applied as a feature extractor to highly variable EEG data, achieving scores of 98% for accuracy, precision, recall, and F1.
Journal ArticleDOI

Learning a robust unified domain adaptation framework for cross-subject EEG-based emotion recognition

TL;DR: A framework is proposed that performs fine-grained domain alignment at the subject and class levels, while encouraging inter-class separation and robustness against input perturbations at a coarse grain.
Proceedings ArticleDOI

Increasing the Stability of EEG-based Emotion Recognition with a Variant of Neural Processes

TL;DR: A robust variant of the neural processes model is proposed, and its stability is evaluated under simulated random data corruption as found in real applications, shedding light on the widespread use of portable EEG-based affective computing.
Proceedings ArticleDOI

Spatial Spectral based 3D Feature Map for EEG Emotion Recognition

TL;DR: A 3D brain map is used as input to a CNN model for classifying emotions along valence and arousal, achieving accuracies of 89.38% and 90.12%, respectively.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art image classification performance.
Journal ArticleDOI

LIBSVM: A library for support vector machines

TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Journal ArticleDOI

Reducing the Dimensionality of Data with Neural Networks

TL;DR: An effective way of initializing the weights is described that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Journal ArticleDOI

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Book

Neural Networks And Learning Machines

Simon Haykin
TL;DR: Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together.