Author

Shuaiqi Liu

Other affiliations: Chinese Academy of Sciences
Bio: Shuaiqi Liu is an academic researcher from Hebei University. The author has contributed to research in topics: Feature extraction & Convolutional neural network. The author has an h-index of 3 and has co-authored 6 publications receiving 14 citations. Previous affiliations of Shuaiqi Liu include the Chinese Academy of Sciences.

Papers
Journal ArticleDOI
TL;DR: A three-dimensional convolutional attention neural network (3DCANN) is proposed for EEG emotion recognition, composed of a spatio-temporal feature extraction module and an EEG channel attention weight learning module.
Abstract: Since electroencephalogram (EEG) signals can truly reflect human emotional state, EEG-based emotion recognition has become a critical branch of artificial intelligence. To address the disparity of EEG signals across emotional states, we propose a new deep learning model named the three-dimensional convolution attention neural network (3DCANN) for EEG emotion recognition. The 3DCANN model is composed of a spatio-temporal feature extraction module and an EEG channel attention weight learning module, which together capture both the dynamic relations among multi-channel EEG signals and their internal spatial relations over continuous time periods. In this model, the spatio-temporal features are fused with the weights learned by dual attention, and the fused features are fed into a softmax classifier for emotion classification. We use the SJTU Emotion EEG Dataset (SEED) to assess the feasibility and effectiveness of the proposed algorithm. Experimental results show that 3DCANN outperforms state-of-the-art models in EEG emotion recognition.
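A minimal PyTorch sketch of how a model of this shape could be organized. The layer sizes, the 9x9 electrode grid, and the attention formulation (applied to feature channels as a stand-in for the paper's EEG channel attention) are illustrative assumptions, not the published configuration:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Learns a softmax weight per feature channel; sizes are illustrative."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // 4),
            nn.ReLU(),
            nn.Linear(n_channels // 4, n_channels),
            nn.Softmax(dim=-1),
        )

    def forward(self, x):                      # x: (batch, n_channels)
        return x * self.fc(x)

class Sketch3DCANN(nn.Module):
    """Hypothetical 3DCANN-style model: 3D convolutions over
    (time, grid height, grid width), then attention and a softmax head."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.extractor = nn.Sequential(        # spatio-temporal feature extraction
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.attn = ChannelAttention(32)
        self.head = nn.Linear(32, n_classes)   # softmax applied inside the loss

    def forward(self, x):                      # x: (batch, 1, T, H, W)
        feats = self.extractor(x).flatten(1)   # (batch, 32)
        return self.head(self.attn(feats))

# Toy input: 8 time steps of a 9x9 electrode grid.
logits = Sketch3DCANN()(torch.randn(2, 1, 8, 9, 9))
```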

50 citations

Journal ArticleDOI
01 Jan 2021
TL;DR: This work proposes an algorithm based on a convolutional denoising autoencoder (CDAE) and adaptive boosting decision trees (AdaDT), and shows that the method offers improved classification compared with state-of-the-art methods in terms of average accuracy at each individual site and across all sites.
Abstract: Attention deficit/hyperactivity disorder (ADHD) is a complex, prevalent, and heterogeneous neurodevelopmental disorder. Traditional diagnosis of ADHD relies on professional doctors' long-term analysis of complex information such as clinical data (electroencephalogram, etc.), patient behavior, and psychological tests. In recent years, functional magnetic resonance imaging (fMRI) has developed rapidly and is widely employed in the study of brain cognition because it is non-invasive and radiation-free. We propose an algorithm based on a convolutional denoising autoencoder (CDAE) and adaptive boosting decision trees (AdaDT) to improve ADHD classification. First, combining the advantages of convolutional neural networks (CNNs) and denoising autoencoders (DAEs), we develop a convolutional denoising autoencoder to extract spatial features of fMRI data, yielding spatial features ordered in time. Then, AdaDT classifies the features extracted by the CDAE. Finally, we validate the algorithm on the ADHD-200 test dataset. Experimental results show that our method offers improved classification compared with state-of-the-art methods in terms of average accuracy at each individual site and across all sites, while maintaining a balance between specificity and sensitivity.
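A compact sketch of the CDAE-plus-AdaBoost pipeline the abstract describes. The layer configuration, noise level, input size, and labels are placeholders rather than the paper's settings:

```python
import torch
import torch.nn as nn
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

class CDAE(nn.Module):
    """Hypothetical convolutional denoising autoencoder for 2D fMRI slices."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        # Denoising objective: corrupt the input, reconstruct the clean version.
        noisy = x + 0.1 * torch.randn_like(x)
        return self.decoder(self.encoder(noisy))

# After training the CDAE to reconstruct clean slices, its encoder output
# serves as the spatial feature vector fed to AdaBoost decision trees.
model = CDAE()
x = torch.randn(32, 1, 64, 64)                 # toy batch of fMRI slices
feats = model.encoder(x).flatten(1).detach().numpy()
labels = (feats[:, 0] > 0).astype(int)         # placeholder labels for the sketch
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=50)
clf.fit(feats, labels)
```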

43 citations

Journal ArticleDOI
Shuaiqi Liu, Xu Wang, Ling Zhao, Jie Zhao, Qi Xin, Shuihua Wang
TL;DR: A subject-independent emotion recognition algorithm based on a dynamic empirical convolutional neural network (DECNN) is proposed to address these challenges; it combines the advantages of empirical mode decomposition (EMD) and differential entropy (DE), and is verified on the SJTU Emotion EEG Dataset (SEED).
Abstract: Affective computing is one of the key technologies for advanced brain-machine interfacing and an increasingly important research direction in artificial intelligence. Emotion recognition is closely related to affective computing. Although emotion recognition based on electroencephalogram (EEG) signals has attracted growing attention worldwide, subject-independent emotion recognition still faces enormous challenges. We propose a subject-independent emotion recognition algorithm based on a dynamic empirical convolutional neural network (DECNN) to address these challenges. Combining the advantages of empirical mode decomposition (EMD) and differential entropy (DE), we propose a dynamic differential entropy (DDE) algorithm to extract features from EEG signals. The extracted DDE features are then classified by a convolutional neural network (CNN). The proposed algorithm is verified on the SJTU Emotion EEG Dataset (SEED). In addition, we discuss the brain areas closely related to emotion and design an optimal electrode placement to reduce computation and complexity. Experimental results show that the accuracy of this algorithm is 3.53 percent higher than that of state-of-the-art emotion recognition methods. Moreover, we identify the key electrodes for EEG emotion recognition, which offers guidance for the development of wearable EEG devices.
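A small sketch of what a dynamic differential entropy feature could look like, combining EMD and DE as the abstract outlines. The exact DDE definition here is a guess (DE of each intrinsic mode function), and it assumes the PyEMD package (pip install EMD-signal); the synthetic signal is illustrative:

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def differential_entropy(x: np.ndarray) -> float:
    """DE of a signal under a Gaussian assumption: 0.5 * ln(2*pi*e*var)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def dynamic_differential_entropy(signal: np.ndarray) -> np.ndarray:
    """Assumed DDE: decompose with EMD, then take the DE of each
    intrinsic mode function (IMF) as one feature dimension."""
    imfs = EMD().emd(signal)                  # (n_imfs, n_samples)
    return np.array([differential_entropy(imf) for imf in imfs])

fs = 200                                      # SEED EEG is sampled at 200 Hz
t = np.arange(0, 1, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(fs)
print(dynamic_differential_entropy(eeg))      # one DE value per IMF
```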

34 citations

Journal ArticleDOI
TL;DR: In this article, an unsupervised multi-attention-guided network named UMAG-Net is proposed to fuse a low-resolution hyperspectral image (HSI) with a high-resolution (HR) multispectral image (MSI) of the same scene.
Abstract: To reconstruct images with both high spatial and high spectral resolution, one of the most common methods is to fuse a low-resolution hyperspectral image (HSI) with a high-resolution (HR) multispectral image (MSI) of the same scene. Deep learning has been widely applied to HSI-MSI fusion, but it typically requires training data that hardware limitations make difficult to acquire. To break this limit, we construct an unsupervised multi-attention-guided network named UMAG-Net, which requires no training data, to better accomplish HSI-MSI fusion. UMAG-Net first extracts deep multiscale features of the MSI using a multi-attention encoding network. Then, a loss function containing a pair of HSI and MSI is used to iteratively update the parameters of UMAG-Net and learn prior knowledge of the fused image. Finally, a multiscale feature-guided network is constructed to generate the HR-HSI. Experimental results show the visual and quantitative superiority of the proposed method over competing methods.
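A minimal sketch of the unsupervised fusion idea: a network predicts the HR-HSI, and the loss compares its spatially and spectrally degraded versions against the observed LR-HSI and HR-MSI. The network depth, the spectral response matrix R, and the degradation operators are simplifying assumptions; UMAG-Net's multi-attention modules are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_bands, n_msi, H, W, scale = 31, 3, 64, 64, 4
lr_hsi = torch.rand(1, n_bands, H // scale, W // scale)  # observed LR-HSI
hr_msi = torch.rand(1, n_msi, H, W)                      # observed HR-MSI
R = torch.rand(n_msi, n_bands)                           # assumed spectral response

net = nn.Sequential(nn.Conv2d(n_msi, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, n_bands, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):                                  # iterative parameter update
    hr_hsi = net(hr_msi)                                 # candidate fused image
    spatial_down = F.avg_pool2d(hr_hsi, scale)           # simulate the LR-HSI
    spectral_down = torch.einsum('mb,bhw->mhw', R, hr_hsi[0]).unsqueeze(0)
    loss = F.l1_loss(spatial_down, lr_hsi) + F.l1_loss(spectral_down, hr_msi)
    opt.zero_grad()
    loss.backward()
    opt.step()
```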

24 citations

Journal ArticleDOI
TL;DR: A weighted fusion method of multisource information is proposed to screen drug-target interactions, improving specificity, sensitivity, precision, and accuracy compared with the BLM-NII method.
Abstract: Most existing studies assume that there is no interaction between drugs and targets whose interactions are unknown. However, an unknown interaction only means the relationship between the drug and the target has not yet been confirmed. In this paper, samples for which the drug-target relationship has not been determined are treated as unlabeled. A weighted fusion method of multisource information is proposed to screen drug-target interactions. First, drug-target pairs that may interact are selected. Second, the selected pairs are added to the positive samples, which are regarded as having known interactions, and the original interaction matrix is revised. Finally, the revised datasets are used to predict interactions with the bipartite local model with neighbor-based interaction-profile inferring (BLM-NII). Experiments demonstrate that the proposed method greatly improves specificity, sensitivity, precision, and accuracy compared with the BLM-NII method. Compared with several state-of-the-art methods, it also achieves excellent area under the receiver operating characteristic curve (AUC) and area under the precision-recall curve (AUPR).
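An illustrative sketch of the matrix-revision step: scores from several information sources are combined with weights, and the highest-scoring unlabeled drug-target pairs are promoted to positives before running BLM-NII. The weights, score matrices, and cutoff are placeholders, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = (rng.random((50, 30)) > 0.97).astype(float)    # known interaction matrix
scores = [rng.random((50, 30)) for _ in range(3)]  # e.g. chemical, sequence, network
weights = np.array([0.5, 0.3, 0.2])                # assumed source weights

fused = sum(w * s for w, s in zip(weights, scores))
fused[Y == 1] = -np.inf                            # only consider unlabeled pairs
top = np.unravel_index(np.argsort(fused, axis=None)[-20:], Y.shape)
Y_revised = Y.copy()
Y_revised[top] = 1                                 # promote likely interactions
# Y_revised would now feed the BLM-NII predictor in place of the original Y.
```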

7 citations


Cited by
Journal ArticleDOI
TL;DR: The authors conduct a rigorous review of state-of-the-art emotion recognition systems published in recent literature and summarize common emotion recognition steps, with relevant definitions, theories, and analyses, to provide the key knowledge needed to develop a proper framework.
Abstract: Recently, electroencephalogram-based emotion recognition has become crucial to making Human-Computer Interaction (HCI) systems more intelligent. Owing to its outstanding applications, e.g., person-based decision making, mind-machine interfacing, cognitive interaction, affect detection, and feeling detection, emotion recognition has attracted much of the recent surge of AI-empowered research. Numerous studies have therefore been conducted using a range of approaches, which demands a systematic review of the methodologies, feature sets, and techniques used for this task. Such a review can guide beginners in composing an effective emotion recognition system. In this article, we conduct a rigorous review of state-of-the-art emotion recognition systems published in recent literature and summarize some of the common emotion recognition steps with relevant definitions, theories, and analyses to provide the key knowledge needed to develop a proper framework. The studies included here are dichotomized into two categories: i) deep learning-based and ii) shallow machine learning-based emotion recognition systems. The reviewed systems are compared based on methods, classifiers, the number of classified emotions, accuracy, and datasets used. An informative comparison, recent research trends, and recommendations for future research directions are also provided.

60 citations

Journal ArticleDOI
TL;DR: In this article, a review of EEG-based emotion recognition methods is presented, covering feature extraction, feature selection/reduction, machine learning methods (e.g., k-nearest neighbor, support vector machine, decision tree, artificial neural network, random forest, and naive Bayes), and deep learning methods.
Abstract: Affective computing, a subcategory of artificial intelligence, detects, processes, interprets, and mimics human emotions. Thanks to the continued advancement of portable non-invasive human sensor technologies, like brain-computer interfaces (BCI), emotion recognition has piqued the interest of academics from a variety of domains. Facial expressions, speech, behavior (gesture/posture), and physiological signals can all be used to identify human emotions. However, the first three may be ineffectual because people may hide their true emotions consciously or unconsciously (so-called social masking). Physiological signals can provide more accurate and objective emotion recognition. Electroencephalogram (EEG) signals respond in real time and are more sensitive to changes in affective states than peripheral neurophysiological signals. Thus, EEG signals can reveal important features of emotional states. Recently, several EEG-based BCI emotion recognition techniques have been developed. In addition, rapid advances in machine and deep learning have enabled machines to understand, recognize, and analyze emotions. This study reviews emotion recognition methods that rely on multi-channel EEG signal-based BCIs and provides an overview of what has been accomplished in this area. It also describes the datasets and methods used to elicit emotional states. Following the usual emotion recognition pathway, we review various EEG feature extraction methods, feature selection/reduction methods, machine learning methods (e.g., k-nearest neighbor, support vector machine, decision tree, artificial neural network, random forest, and naive Bayes), and deep learning methods (e.g., convolutional and recurrent neural networks with long short-term memory). In addition, EEG rhythms that are strongly linked to emotions, as well as the relationship between distinct brain areas and emotions, are discussed. We also discuss several human emotion recognition studies, published between 2015 and 2021, that use EEG data and compare different machine and deep learning algorithms. Finally, this review suggests several challenges and future research directions in the recognition and classification of human emotional states using EEG.

45 citations

Journal ArticleDOI
01 Jan 2021
TL;DR: This work surveys the relevant scientific literature of the past five years and reviews emotional feature extraction and classification methods using EEG signals, finding that EEG-based emotion recognition is rapidly becoming a multidisciplinary research field.
Abstract: As a subjective psychological and physiological response to external stimuli, emotion is ubiquitous in our daily life. With the continuous development of artificial intelligence and brain science, EEG-based emotion recognition is rapidly becoming a multidisciplinary research field. This paper surveys the relevant scientific literature of the past five years and reviews emotional feature extraction methods and classification methods using EEG signals. Commonly used feature extraction methods include time-domain analysis, frequency-domain analysis, and time-frequency-domain analysis. Widely used classification methods include machine learning algorithms based on the Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naive Bayes (NB), etc., whose classification accuracies range from 57.50% to 95.70%. The classification accuracies of deep learning algorithms based on Neural Networks (NN), Long Short-Term Memory (LSTM), and Deep Belief Networks (DBN) range from 63.38% to 97.56%.
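A minimal example of the frequency-domain pipeline this review describes: band-power features per EEG channel, classified with an SVM. The band edges, channel count, and synthetic data are illustrative assumptions only:

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

BANDS = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30)}

def band_power_features(eeg: np.ndarray, fs: int = 128) -> np.ndarray:
    """eeg: (n_channels, n_samples) -> one mean power value per channel per band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
             for lo, hi in BANDS.values()]
    return np.concatenate(feats)

rng = np.random.default_rng(1)
X = np.array([band_power_features(rng.standard_normal((32, 512)))
              for _ in range(40)])             # 40 toy trials, 32 channels each
y = rng.integers(0, 2, size=40)                # placeholder emotion labels
clf = SVC(kernel='rbf').fit(X, y)              # frequency-domain features + SVM
```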

41 citations

Journal ArticleDOI
TL;DR: In this paper, a literature survey is conducted to analyze the trends of multimodal remote sensing (RS) data fusion, and prevalent sub-fields of multimodal RS data fusion are reviewed in terms of the to-be-fused data modalities.
Abstract: With the extremely rapid advances in remote sensing (RS) technology, a great quantity of Earth observation (EO) data featuring considerable and complicated heterogeneity is readily available nowadays, giving researchers an opportunity to tackle current geoscience applications in a fresh way. Through the joint utilization of EO data, research on multimodal RS data fusion has made tremendous progress in recent years, yet traditional algorithms inevitably hit a performance bottleneck because they lack the ability to comprehensively analyse and interpret such strongly heterogeneous data. This limitation has created an intense demand for an alternative tool with powerful processing competence. Deep learning (DL), as a cutting-edge technology, has achieved remarkable breakthroughs in numerous computer vision tasks owing to its impressive ability in data representation and reconstruction. Naturally, it has been successfully applied to multimodal RS data fusion, yielding great improvement over traditional methods. This survey aims to present a systematic overview of DL-based multimodal RS data fusion. First, some essential knowledge about the topic is given. Subsequently, a literature survey is conducted to analyse the trends of the field. Prevalent sub-fields of multimodal RS data fusion are then reviewed in terms of the to-be-fused data modalities, i.e., spatiospectral, spatiotemporal, light detection and ranging-optical, synthetic aperture radar-optical, and RS-Geospatial Big Data fusion. Furthermore, we collect and summarize valuable resources to support the development of multimodal RS data fusion. Finally, the remaining challenges and potential future directions are highlighted.

39 citations

Journal ArticleDOI
TL;DR: In this article, a novel textural feature generation method called Tetromino, inspired by the Tetris game, is proposed to classify emotions from EEG signals with high accuracy.

29 citations