An Improved Approach for Text Sentiment Classification Based on a Deep Neural Network via a Sentiment Attention Mechanism
TL;DR
A novel sentiment attention mechanism is introduced to help select the crucial sentiment-word-relevant context words by leveraging a sentiment lexicon within an attention mechanism, and an improved deep neural network is developed to extract sequential correlation information and local text features by combining bidirectional gated recurrent units with a convolutional neural network.

Abstract
Text sentiment analysis is an important but challenging task. Remarkable success has been achieved with the wide application of deep learning methods, but deep learning methods for text sentiment classification cannot fully exploit sentiment linguistic knowledge, which hinders the development of text sentiment analysis. In this paper, we propose a sentiment-feature-enhanced deep neural network (SDNN) to address this problem by integrating sentiment linguistic knowledge into a deep neural network via a sentiment attention mechanism. Specifically, we first introduce a novel sentiment attention mechanism that helps select the crucial sentiment-word-relevant context words by leveraging a sentiment lexicon within an attention mechanism, bridging the gap between traditional sentiment linguistic knowledge and current popular deep learning methods. Second, we develop an improved deep neural network that extracts sequential correlation information and local text features by combining bidirectional gated recurrent units with a convolutional neural network, further enhancing comprehensive text representation learning. With this design, the SDNN model generates a powerful semantic representation of text to improve performance on text sentiment classification tasks. Extensive experiments were conducted to evaluate the effectiveness of the proposed SDNN model on two real-world datasets, one with binary sentiment labels and one with multiple sentiment labels. The experimental results demonstrate that the SDNN achieves substantially better performance than strong competitors on text sentiment classification tasks.
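The sentiment attention mechanism described in the abstract can be sketched in plain Python. This is an illustrative reading only: it assumes attention scores come from dot-product similarity between context-word embeddings and sentiment-lexicon embeddings, with a max-over-lexicon scoring rule; the function names and scoring rule are assumptions, not the paper's exact formulation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sentiment_attention(context_vecs, lexicon_vecs):
    """Weight each context word by its similarity to sentiment-lexicon
    words, then return the attention-weighted text representation.
    A sketch of the idea, not the paper's exact architecture."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # Score each context word by its best match against the lexicon.
    scores = [max(dot(c, s) for s in lexicon_vecs) for c in context_vecs]
    weights = softmax(scores)
    dim = len(context_vecs[0])
    # Weighted sum of context vectors -> sentiment-aware text vector.
    return [sum(w * c[i] for w, c in zip(weights, context_vecs))
            for i in range(dim)]
```

Under this reading, context words that resemble lexicon entries (e.g. "excellent", "terrible") receive larger attention weights, so sentiment-bearing words dominate the pooled representation that the downstream BiGRU/CNN layers consume.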
Citations
Journal ArticleDOI
GHS-NET a generic hybridized shallow neural network for multi-label biomedical text classification.
Muhammad Ali Ibrahim, Muhammad Usman Ghani Khan, Faiza Mehmood, Muhammad Nabeel Asim, Waqar Mahmood +5 more
TL;DR: GHS-Net is a generic deep-learning-based hybrid multi-label classification methodology that can accurately classify biomedical text of diverse genres, such as biomedical literature or clinical notes.
Journal ArticleDOI
A Novel Text Mining Approach for Mental Health Prediction Using Bi-LSTM and BERT Model
TL;DR: A novel framework is proposed to efficiently and effectively identify depression- and anxiety-related posts while preserving the contextual and semantic meaning of the words across the corpus by applying bidirectional encoder representations from transformers (BERT).
Journal ArticleDOI
RETRACTED ARTICLE: Integrated CNN- and LSTM-DNN-based sentiment analysis over big social data for opinion mining
P. Kaladevi, K. Thyagarajah +1 more
TL;DR: The interactive and real-time characteristics of gathering public opinion by analyzing big social data have gained increasing popularity and attention in recent years.
Journal ArticleDOI
Topic Modeling and Sentiment Analysis of Online Education in the COVID-19 Era Using Social Networks Based Datasets
TL;DR: This study proposes a powerful and effective technique that can handle large volumes of content and specifically examine the attitudes, sentiments, and fake news surrounding "E-learning", a significant challenge given the importance of online textual data related to the education sector.
Journal ArticleDOI
Unsteady Multi-Element Time Series Analysis and Prediction Based on Spatial-Temporal Attention and Error Forecast Fusion
Xiaofan Wang, Lingyu Xu +1 more
TL;DR: A Long Short-Term Memory (LSTM)-based spatial-temporal attention model for Chlorophyll-a (Chl-a) concentration prediction is proposed, which can adaptively capture the correlation between various factors and Chl-a and extract dynamic temporal information from previous time intervals for making predictions.
References
Journal ArticleDOI
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
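The dropout technique summarized above can be illustrated with a minimal sketch of the inverted-dropout variant commonly used in practice; the helper name and interface are illustrative, not taken from the cited paper.

```python
import random

def dropout(values, p, training=True, seed=None):
    """Inverted dropout: during training, zero each activation with
    probability p and scale survivors by 1/(1-p) so the expected value
    is unchanged; at inference, pass activations through untouched."""
    if not training or p == 0.0:
        return list(values)
    rng = random.Random(seed)
    keep = 1.0 - p
    return [v / keep if rng.random() < keep else 0.0 for v in values]
```

Because the surviving activations are rescaled at training time, no correction is needed at test time, which is why the `training=False` path is a plain pass-through.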
Proceedings ArticleDOI
GloVe: Global Vectors for Word Representation
TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
Proceedings Article
Distributed Representations of Words and Phrases and their Compositionality
TL;DR: This paper presents a simple method for finding phrases in text, and shows that learning good vector representations for millions of phrases is possible and describes a simple alternative to the hierarchical softmax called negative sampling.
Proceedings ArticleDOI
Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.