Open Access · Proceedings Article
A Position-aware Bidirectional Attention Network for Aspect-level Sentiment Analysis
Shuqin Gu, Lipeng Zhang, Yuexian Hou, Yin Song +3 more
pp. 774–784
TLDR
A position-aware bidirectional attention network (PBAN) based on bidirectional GRU, which not only concentrates on the position information of aspect terms, but also mutually models the relation between aspect term and sentence by employing a bidirectional attention mechanism.
Abstract:
Aspect-level sentiment analysis aims to distinguish the sentiment polarity of each specific aspect term in a given sentence. Both industry and academia have realized the importance of the relationship between aspect term and sentence, and have attempted to model this relationship by designing a series of attention models. However, most existing methods neglect the fact that position information is also crucial for identifying the sentiment polarity of an aspect term. When an aspect term occurs in a sentence, its neighboring words should be given more attention than more distant words. Therefore, we propose a position-aware bidirectional attention network (PBAN) based on bidirectional GRU. PBAN not only concentrates on the position information of aspect terms, but also mutually models the relation between aspect term and sentence by employing a bidirectional attention mechanism. The experimental results on the SemEval 2014 datasets demonstrate the effectiveness of our proposed PBAN model.
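The position-aware attention described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the linear-decay weighting scheme and all function names here are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def position_weights(seq_len, aspect_start, aspect_end):
    """Weight each token by its proximity to the aspect term:
    tokens inside the aspect get weight 1, others decay linearly
    with distance (one plausible scheme)."""
    w = np.zeros(seq_len)
    for i in range(seq_len):
        if aspect_start <= i <= aspect_end:
            w[i] = 1.0
        elif i < aspect_start:
            w[i] = 1.0 - (aspect_start - i) / seq_len
        else:
            w[i] = 1.0 - (i - aspect_end) / seq_len
    return w

def position_aware_attention(hidden, aspect_vec, weights):
    """Scale Bi-GRU hidden states by position weights, then apply
    dot-product attention against the aspect representation."""
    h = hidden * weights[:, None]          # (T, d) position-weighted states
    scores = h @ aspect_vec                # (T,) unnormalized attention
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ h                       # (d,) sentence summary
```

In PBAN this weighting is applied in both directions (sentence-to-aspect and aspect-to-sentence); the sketch shows only one direction for brevity.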
Citations
Synthesis Lectures on Human Language Technologies
Ido Dagan, Dan Roth, Mark Sammons, Fabio Massimo Zanzotto, Roland Schäfer, Felix Bildhauer +6 more
TL;DR: This book gives a comprehensive view of state-of-the-art techniques used to build spoken dialogue systems, presents dialogue modelling and system development issues relevant in both academic and industrial environments, and discusses requirements and challenges for advanced interaction management and future research.
Proceedings ArticleDOI
Aspect-Level Sentiment Analysis Via Convolution over Dependency Tree
TL;DR: A convolution over a dependency tree (CDT) model which exploits a bidirectional Long Short-Term Memory (Bi-LSTM) to learn feature representations of a sentence, and further enhances the embeddings with a graph convolutional network (GCN) that operates directly on the dependency tree of the sentence.
Proceedings ArticleDOI
Transfer Capsule Network for Aspect Level Sentiment Classification.
Zhuang Chen, Tieyun Qian +1 more
TL;DR: A Transfer Capsule Network (TransCap) model for transferring document-level knowledge to aspect-level sentiment classification and extends the dynamic routing approach to adaptively couple the semantic capsules with the class capsules under the transfer learning framework.
Proceedings ArticleDOI
Convolution over Hierarchical Syntactic and Lexical Graphs for Aspect Level Sentiment Analysis
Mi Zhang, Tieyun Qian +1 more
TL;DR: This work proposes a novel architecture which convolves over hierarchical syntactic and lexical graphs: it employs a global lexical graph to encode corpus-level word co-occurrence information and designs a bi-level interactive graph convolution network to fully exploit the two graphs.
Journal ArticleDOI
Modeling sentiment dependencies with graph convolutional networks for aspect-level sentiment classification
Pinlong Zhao, Linlin Hou, Ou Wu +2 more
TL;DR: This model first introduces a bidirectional attention mechanism with position encoding to model aspect-specific representations between each aspect and its context words, then employs a GCN over the attention mechanism to capture the sentiment dependencies between different aspects in one sentence.
References
Proceedings ArticleDOI
GloVe: Global Vectors for Word Representation
TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
Posted Content
Efficient Estimation of Word Representations in Vector Space
TL;DR: This paper proposes two novel model architectures for computing continuous vector representations of words from very large data sets; the quality of these representations is measured in a word similarity task, and the results are compared to the previously best-performing techniques based on different types of neural networks.
Proceedings Article
Neural Machine Translation by Jointly Learning to Align and Translate
TL;DR: It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder-decoder architecture, and it is proposed to extend it by allowing the model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
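The soft-search summarized above is additive attention, which underlies the attention models discussed in this paper's abstract. A minimal NumPy sketch (parameter names `Wq`, `Wk`, `v` are illustrative, not from the cited paper):

```python
import numpy as np

def additive_attention(decoder_state, encoder_states, Wq, Wk, v):
    """Bahdanau-style additive scoring: e_i = v . tanh(Wq s + Wk h_i),
    followed by a softmax over positions to get the soft alignment."""
    scores = np.tanh(decoder_state @ Wq.T + encoder_states @ Wk.T) @ v
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    context = alpha @ encoder_states       # weighted sum of source states
    return context, alpha
```

The alignment weights `alpha` form a probability distribution over source positions, so no fixed-length bottleneck vector is needed.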
Proceedings ArticleDOI
Learning Phrase Representations using RNN Encoder--Decoder for Statistical Machine Translation
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, Yoshua Bengio
TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
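The gated recurrent unit (GRU) from this paper is the building block of PBAN's bidirectional encoder. A single-step NumPy sketch (illustrative only; weight names are assumptions, and bias terms are omitted for brevity):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z decides how much of the new
    candidate state to keep, reset gate r decides how much of the
    previous state feeds the candidate."""
    z = sigmoid(Wz @ x + Uz @ h_prev)            # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)            # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde
```

A bidirectional GRU runs this recurrence over the sentence in both directions and concatenates the two hidden states per token.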