Journal ArticleDOI

Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks

08 May 2015-IEEE Transactions on Autonomous Mental Development (IEEE)-Vol. 7, Iss: 3, pp 162-175
TL;DR: The experimental results show that neural signatures associated with different emotions do exist and are shared across sessions and individuals; the performance of deep models is also compared with that of shallow models.
Abstract: To investigate critical frequency bands and channels, this paper introduces deep belief networks (DBNs) to construct EEG-based emotion recognition models for three emotions: positive, neutral, and negative. We develop an EEG dataset acquired from 15 subjects. Each subject performs the experiments twice, at an interval of a few days. DBNs are trained with differential entropy features extracted from multichannel EEG data. We examine the weights of the trained DBNs and investigate the critical frequency bands and channels. Four different profiles of 4, 6, 9, and 12 channels are selected. The recognition accuracies of these four profiles are relatively stable, with a best accuracy of 86.65%, which is even better than that of the original 62 channels. The critical frequency bands and channels determined using the weights of the trained DBNs are consistent with existing observations. In addition, our experimental results show that neural signatures associated with different emotions do exist and are shared across sessions and individuals. We compare the performance of deep models with shallow models. The average accuracies of DBN, SVM, LR, and KNN are 86.08%, 83.99%, 82.70%, and 72.60%, respectively.
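The differential entropy (DE) feature mentioned above has a convenient closed form when a band-pass filtered EEG segment is assumed to be approximately Gaussian: h = ½ ln(2πeσ²), so DE is a log-scaled measure of band power. A minimal sketch (function name and toy samples are illustrative, not from the paper's pipeline):

```python
import math

def differential_entropy(samples):
    """DE of a band-pass filtered EEG segment, under the Gaussian
    assumption the DE feature rests on: h = 0.5 * ln(2*pi*e*sigma^2)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n  # biased variance estimate
    return 0.5 * math.log(2 * math.pi * math.e * var)

# A higher-variance (i.e. higher band-power) segment yields a larger DE value.
low = differential_entropy([0.1, -0.1, 0.05, -0.05, 0.0, 0.02])
high = differential_entropy([1.0, -1.2, 0.8, -0.9, 1.1, -1.0])
```

In practice one DE value would be computed per channel and per frequency band, giving the feature vector fed to the DBN.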
Citations
Journal ArticleDOI
TL;DR: Using an SVM classifier with the external library LibSVM (3.23), the EEG SEED dataset is classified with improved accuracy and performance, reaching 79.38% accuracy in a TensorFlow environment.
Abstract: In this paper, we focus solely on the EEG SEED dataset. Using an SVM classifier with the external library LibSVM (3.23), we classified the dataset and achieved a substantial improvement in accuracy and performance. We list and explain the different approaches followed to improve the performance and accuracy on our dataset, and we compare various existing approaches on this dataset using several classifiers: ELM, SVM with KNN, SVM on the SEED dataset, and SVM on the DEAP dataset. By using LibSVM (3.23), we increased the performance of each run by 4%, finally reaching 79.38% accuracy in a TensorFlow environment.

4 citations

Journal ArticleDOI
TL;DR: In this paper, a novel multiple-frequency-band parallel spatial-temporal 3D deep residual learning framework (MFBPST-3D-DRLF) is proposed for EEG-based emotion recognition.

4 citations

Journal ArticleDOI
TL;DR: In this article, a technique termed brain rhythm sequencing (BRS) is proposed that interprets EEG through the dominant brain rhythm, i.e., the rhythm with the maximum instantaneous power at each 0.2 s timestamp.
Abstract: Recently, electroencephalography (EEG) signals have shown great potential for emotion recognition. Nevertheless, multichannel EEG recordings lead to redundant data, computational burden, and hardware complexity. Hence, efficient channel selection, especially single-channel selection, is vital. For this purpose, a technique termed brain rhythm sequencing (BRS) is proposed that interprets EEG through the dominant brain rhythm, i.e., the rhythm with the maximum instantaneous power at each 0.2 s timestamp. Dynamic time warping (DTW) is then used for rhythm sequence classification through a similarity measure. After evaluating the rhythm sequences on the emotion recognition task, the representative channel that produces high accuracy can be found, which realizes single-channel selection. In addition, the appropriate time segment for emotion recognition is estimated during the assessments. The results from the music emotion recognition (MER) experiment and three emotional datasets (SEED, DEAP, and MAHNOB) indicate that classification accuracies of 70-82% are achieved with single-channel data of 10 s length. Such performance is remarkable considering that minimal data sources are the primary concern. Furthermore, individual characteristics in emotion recognition are investigated based on the channels and times found. Therefore, this study provides a novel method for single-channel selection in emotion recognition.
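The DTW similarity measure used for rhythm-sequence classification can be sketched with the standard dynamic-programming recurrence. Assuming a 0/1 cost between rhythm labels (the actual cost function and label alphabet in the paper may differ):

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two rhythm-label sequences.
    Cost is 0 for matching labels and 1 otherwise; each cell takes the
    cheapest of insertion, deletion, or match from its neighbors."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if seq_a[i - 1] == seq_b[j - 1] else 1.0
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Classification then amounts to assigning a test sequence the label of its nearest template under this distance; note that DTW tolerates tempo differences, e.g. a stretched `['alpha', 'alpha', 'beta']` still matches `['alpha', 'beta']` at zero cost.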

4 citations

Journal ArticleDOI
TL;DR: Wang et al. propose a Siamese graph convolutional attention network (Siam-GCAN) that mainly considers two aspects: on the one hand, a deep attention layer implemented by a multi-head attention mechanism extracts deeper and more valuable features rather than stacking graph convolution layers.
Abstract: The graph convolutional network (GCN) shows effective performance in electroencephalogram (EEG) emotion recognition owing to its ability to capture brain connectivity. However, depth information cannot be extracted through the GCN structure alone, and the learning process of the GCN model ignores intraclass and interclass information. Regarding the above problems, we propose a Siamese graph convolutional attention network, named Siam-GCAN, which mainly considers the following two aspects: on the one hand, we use a deep attention layer implemented by a multihead attention mechanism to extract deeper and more valuable features rather than stacking graph convolution layers. On the other hand, we employ the Siamese network to cluster the outputs of GCNs based on Euclidean distance to ensure the learned information has a certain class separability. Experimental results on two public emotional datasets, the Shanghai Jiao Tong University (SJTU) emotion EEG dataset and the SJTU emotion EEG dataset-IV, demonstrate that Siam-GCAN outperforms the state-of-the-art baselines in EEG emotion recognition.
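The Siamese branch described above compares embeddings by Euclidean distance; a common training objective for that setup is the contrastive loss, which pulls same-class pairs together and pushes different-class pairs apart up to a margin. A minimal sketch (the margin value and decision rule are illustrative, not the paper's exact formulation):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def contrastive_loss(d, same, margin=1.0):
    """Contrastive loss on a pair distance d: same-class pairs are
    penalized by d^2, different-class pairs only while closer than the margin."""
    if same:
        return d * d
    return max(0.0, margin - d) ** 2

def same_class(u, v, margin=1.0):
    """Siamese-style decision: embeddings closer than the margin are
    treated as the same emotion class."""
    return euclidean(u, v) < margin
```

The margin ensures different-class pairs contribute zero loss once they are sufficiently separated, which is what gives the learned space its class separability.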

4 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art ImageNet classification performance.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
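The top-1 and top-5 error rates quoted above count a prediction as correct if the true class is, respectively, the single highest-scoring class or anywhere among the five highest-scoring classes. A minimal sketch of that check (the score vector is a toy example):

```python
def topk_correct(scores, true_label, k=5):
    """True if the true class index is among the k highest-scoring classes."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return true_label in ranked[:k]

# Toy 6-class score vector: class 1 scores highest, class 0 lowest.
scores = [0.1, 0.9, 0.3, 0.2, 0.8, 0.7]
```

The top-k error rate over a test set is then just the fraction of examples for which this check fails; top-5 error is always at most the top-1 error.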

73,978 citations

Journal ArticleDOI
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.

40,826 citations


"Investigating Critical Frequency Ba..." refers methods in this paper

  • ...We use LIBSVM software [56] to implement the SVM classifier and employ linear kernel....

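The linear-kernel SVM the paper implements via LIBSVM learns a decision function f(x) = w·x + b and classifies by its sign. A minimal hinge-loss SGD sketch of that idea (a simplified stand-in for LIBSVM's exact dual solver; all names, hyperparameters, and toy data are illustrative):

```python
def train_linear_svm(xs, ys, epochs=200, lr=0.05, lam=0.01):
    """SGD on the L2-regularized hinge loss for a linear SVM.
    ys must be in {+1, -1}. Returns the weight vector w and bias b."""
    dim = len(xs[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1.0:  # inside the margin: hinge gradient + weight decay
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:             # outside the margin: weight decay only
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    """Classify by the sign of the decision function w . x + b."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy linearly separable data: positives in one quadrant, negatives opposite.
xs = [(2.0, 2.0), (3.0, 3.0), (-2.0, -2.0), (-3.0, -3.0)]
ys = [1, 1, -1, -1]
w, b = train_linear_svm(xs, ys)
```

LIBSVM solves the equivalent quadratic program exactly rather than by SGD, but the resulting linear decision rule has the same form.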

Journal ArticleDOI
28 Jul 2006-Science
TL;DR: This article describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis for reducing the dimensionality of data.
Abstract: High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.
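The autoencoder objective described above is reconstruction error: encode each input to a small code, decode it back, and minimize the squared difference. A minimal sketch with a tied-weight linear encoder/decoder trained by numerical gradient descent on data lying on a 1-D subspace (all names, data, and hyperparameters are illustrative; real autoencoders are nonlinear, multilayer, and trained by backpropagation):

```python
def recon_error(w, xs):
    """Total squared reconstruction error of a tied-weight linear
    autoencoder: code = w . x, reconstruction = code * w."""
    err = 0.0
    for x in xs:
        code = sum(wi * xi for wi, xi in zip(w, x))   # encode to 1-D
        rec = [code * wi for wi in w]                 # decode back
        err += sum((a - b) ** 2 for a, b in zip(x, rec))
    return err

def train(xs, steps=500, lr=0.01, eps=1e-5):
    """Minimize reconstruction error with central-difference gradients."""
    w = [0.5, 0.1]  # deliberately poor initial weights
    for _ in range(steps):
        grad = []
        for k in range(len(w)):
            wp = list(w); wp[k] += eps
            wm = list(w); wm[k] -= eps
            grad.append((recon_error(wp, xs) - recon_error(wm, xs)) / (2 * eps))
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w
```

For data on the line y = x, the optimum is w proportional to (1, 1), i.e. the first principal direction, which is exactly the PCA solution a linear autoencoder recovers; the paper's point is that deep nonlinear autoencoders, given good initial weights, go beyond this.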

16,717 citations

Journal ArticleDOI
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Abstract: We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
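The fast greedy algorithm above trains one restricted Boltzmann machine (RBM) layer at a time, each layer's update being a contrastive-divergence (CD-1) step. A minimal sketch of that step using mean-field probabilities instead of binary samples, which keeps it deterministic (a simplification of the stochastic procedure the paper actually uses; all sizes and the learning rate are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cd1_step(v0, W, b_h, b_v, lr=0.1):
    """One mean-field CD-1 update for a tiny RBM with visible biases b_v,
    hidden biases b_h, and weights W (n_v x n_h). Returns the reconstruction."""
    n_v, n_h = len(b_v), len(b_h)
    # Up pass: hidden probabilities given the data.
    h0 = [sigmoid(b_h[j] + sum(v0[i] * W[i][j] for i in range(n_v))) for j in range(n_h)]
    # Down pass: reconstruct the visible layer.
    v1 = [sigmoid(b_v[i] + sum(h0[j] * W[i][j] for j in range(n_h))) for i in range(n_v)]
    # Up pass again on the reconstruction.
    h1 = [sigmoid(b_h[j] + sum(v1[i] * W[i][j] for i in range(n_v))) for j in range(n_h)]
    # CD-1 gradient: <v h>_data - <v h>_reconstruction.
    for i in range(n_v):
        for j in range(n_h):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
    for j in range(n_h):
        b_h[j] += lr * (h0[j] - h1[j])
    for i in range(n_v):
        b_v[i] += lr * (v0[i] - v1[i])
    return v1
```

Greedy stacking then treats each trained layer's hidden activations as the "data" for the next RBM, which is how the deep belief network is built up one layer at a time before fine-tuning.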

15,055 citations


"Investigating Critical Frequency Ba..." refers background in this paper

  • ...MLP, SVMs, CRFs) in many challenge tasks, especially in speech and image domains [29]–[31]....


  • ...Many deep architecture models are proposed such as deep auto-encoder [26], convolution neural network [27], [28] and deep belief network [29]....


  • ...Deep Belief Network is a probabilistic generative model with deep architecture, which characterizes the input data distribution using hidden variables [25], [29]....


Book
01 Jan 2010
TL;DR: Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together.
Abstract: For graduate-level neural network courses offered in the departments of Computer Engineering, Electrical Engineering, and Computer Science. Neural Networks and Learning Machines, Third Edition is renowned for its thoroughness and readability. This well-organized and completely up-to-date text remains the most comprehensive treatment of neural networks from an engineering perspective. It is ideal for professional engineers and research scientists. Matlab codes used for the computer experiments in the text are available for download at: http://www.pearsonhighered.com/haykin/ Refocused, revised and renamed to reflect the duality of neural networks and learning machines, this edition recognizes that the subject matter is richer when these topics are studied together. Ideas drawn from neural networks and machine learning are hybridized to perform improved learning tasks beyond the capability of either independently.

4,943 citations


"Investigating Critical Frequency Ba..." refers background in this paper

  • ...According to the rules of knowledge representation, if a particular feature is important, there should be a larger number of neurons involved in representing it in the network [59]....
