Open Access · Posted Content

Relational Graph Attention Network for Aspect-based Sentiment Analysis

TLDR
This paper defines a unified aspect-oriented dependency tree structure rooted at a target aspect by reshaping and pruning an ordinary dependency parse tree and proposes a relational graph attention network (R-GAT) to encode the new tree structure for sentiment prediction.
Abstract
Aspect-based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews. Most recent efforts adopt attention-based neural network models to implicitly connect aspects with opinion words. However, due to the complexity of language and the existence of multiple aspects in a single sentence, these models often confuse the connections. In this paper, we address this problem by means of effective encoding of syntax information. Firstly, we define a unified aspect-oriented dependency tree structure rooted at a target aspect by reshaping and pruning an ordinary dependency parse tree. Then, we propose a relational graph attention network (R-GAT) to encode the new tree structure for sentiment prediction. Extensive experiments are conducted on the SemEval 2014 and Twitter datasets, and the experimental results confirm that the connections between aspects and opinion words can be better established with our approach, and the performance of the graph attention network (GAT) is significantly improved as a consequence.
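The reshaping step is concrete enough to sketch. Below is a minimal, hypothetical Python illustration (function and variable names are ours, not the authors' code) of one plausible reading of the abstract: the target aspect becomes the root, its direct dependency neighbours keep their original relations, and every other word is attached directly to the aspect with a virtual relation recording its tree distance.

```python
# Hedged sketch of an aspect-oriented reshaping, assuming the parse is a
# well-formed tree given as head indices and relation labels.
from collections import deque

def reshape_tree(heads, rels, aspect):
    """heads[i]: index of token i's head (-1 for root); rels[i]: its relation.
    Returns (new_heads, new_rels) for a flat tree rooted at `aspect`."""
    n = len(heads)
    adj = [[] for _ in range(n)]                 # undirected adjacency for distances
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i].append(h)
            adj[h].append(i)
    dist = [None] * n                            # BFS tree distance from the aspect
    dist[aspect] = 0
    q = deque([aspect])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    new_heads, new_rels = [aspect] * n, [None] * n
    new_heads[aspect], new_rels[aspect] = -1, "root"
    for i in range(n):
        if i == aspect:
            continue
        if heads[i] == aspect:
            new_rels[i] = rels[i]                # direct dependents keep their relation
        elif heads[aspect] == i:
            new_rels[i] = rels[aspect]           # the aspect's former head reuses that label
        else:
            new_rels[i] = f"{dist[i]}:con"       # virtual relation encoding tree distance
    return new_heads, new_rels
```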


Citations
Posted Content

An Attentive Survey of Attention Models

TL;DR: A taxonomy that groups existing attention-model techniques into coherent categories is proposed, and the ways attention has been used to improve the interpretability of neural networks are described.
Posted Content

Does syntax matter? A strong baseline for Aspect-based Sentiment Analysis with RoBERTa

TL;DR: This paper compares trees induced from pre-trained models (PTMs) with dependency-parser trees on several popular models for the ABSA task, showing that the tree induced from fine-tuned RoBERTa (FT-RoBERTa) outperforms the parser-provided tree, and revealing that the FT-RoBERTa induced tree is more sentiment-word-oriented and can benefit the ABSA task.
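For intuition only: the paper probes the pre-trained model to induce its trees, but a much cruder heuristic conveys the flavour. The sketch below (our simplification, not the paper's method) reads a head for each token straight off an attention matrix; a real implementation would decode a proper tree, e.g. with a maximum-spanning-tree algorithm.

```python
# Crude illustrative heuristic, NOT the probing method used in the paper:
# take each token's most-attended token as its head. The result may contain
# cycles; real work decodes a spanning tree instead.
import numpy as np

def induce_heads_from_attention(attn):
    """attn: (seq_len, seq_len) attention weights averaged over heads/layers."""
    attn = attn.copy()
    np.fill_diagonal(attn, 0.0)                 # a token cannot head itself
    heads = attn.argmax(axis=1)                 # most-attended token as head
    root = int(attn.max(axis=1).argmin())       # weakest-attached token as root (arbitrary)
    heads[root] = root
    return heads
```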
Proceedings ArticleDOI

Aspect-based Sentiment Analysis with Type-aware Graph Convolutional Networks and Layer Ensemble.

TL;DR: This paper proposes an approach to explicitly utilize dependency types for ABSA with type-aware graph convolutional networks (T-GCN), where attention is used in T-GCN to distinguish different edges in the graph, and an attentive layer ensemble is proposed to comprehensively learn from the different layers of T-GCN.
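A minimal sketch of the type-aware idea, under our own assumptions about shapes and fusion (the authors' T-GCN will differ in detail): dependency-type embeddings score each edge before aggregation, so differently typed edges contribute differently.

```python
import torch
import torch.nn as nn

class TypeAwareGCNLayer(nn.Module):
    """Illustrative type-aware graph convolution: edge-type embeddings
    weight each edge before the usual neighbourhood aggregation."""
    def __init__(self, dim, num_edge_types):
        super().__init__()
        self.type_emb = nn.Embedding(num_edge_types, dim)
        self.w = nn.Linear(dim, dim)

    def forward(self, h, edge_index, edge_type):
        # h: (n, dim) node states; edge_index: (2, e); edge_type: (e,)
        src, dst = edge_index
        score = (h[src] * self.type_emb(edge_type)).sum(-1)   # type-conditioned edge score
        alpha = torch.softmax(score, dim=0)                   # simplification: softmax over all edges
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * self.w(h[src]))
        return torch.relu(out)
```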
Journal ArticleDOI

Knowledge-enabled BERT for aspect-based sentiment analysis

TL;DR: This work proposes a knowledge-enabled language representation model based on BERT that leverages additional information from a sentiment knowledge graph: sentiment domain knowledge is injected into the language representation model so that the embedding vectors of entities in the knowledge graph and of words in the text lie in a consistent vector space.
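One simple way to read "injecting" knowledge into the representation, sketched under our own assumptions (additive fusion, pre-projected entity vectors; none of this is from the paper):

```python
import torch

def inject_entity_knowledge(token_emb, entity_emb, alignment):
    """token_emb: (n, d) contextual embeddings; entity_emb: (m, d) knowledge-graph
    entity vectors already projected into the same space; alignment: list of
    (token_idx, entity_idx) links produced by entity linking."""
    fused = token_emb.clone()
    for t, e in alignment:
        fused[t] = fused[t] + entity_emb[e]     # simple additive fusion per linked token
    return fused
```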
Proceedings ArticleDOI

Inducing Target-Specific Latent Structures for Aspect Sentiment Classification

TL;DR: This work proposes gating mechanisms to dynamically combine information from word dependency graphs and latent graphs learned by self-attention networks, complementing supervised syntactic features with latent semantic dependencies in aspect-level sentiment analysis.
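The gating idea itself is compact. A minimal sketch, assuming both encoders output node states of the same width (names are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

class GraphGate(nn.Module):
    """Sigmoid gate mixing states from a dependency-graph encoder and a
    latent-graph (self-attention) encoder, per node and per dimension."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_dep, h_latent):
        g = torch.sigmoid(self.gate(torch.cat([h_dep, h_latent], dim=-1)))
        return g * h_dep + (1 - g) * h_latent   # convex combination of the two views
```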
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
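The update rule itself is short enough to state. A direct transcription of the algorithm from the paper (default hyperparameters lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * grad              # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2         # second-moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                    # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```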
Proceedings ArticleDOI

GloVe: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
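Concretely, the model fits word-vector dot products to log co-occurrence counts with a weighted least-squares objective:

```latex
J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^{2},
\qquad
f(x) = \begin{cases} (x / x_{\max})^{\alpha} & \text{if } x < x_{\max} \\ 1 & \text{otherwise} \end{cases}
```

where X_ij counts co-occurrences of words i and j, and f(x) down-weights rare pairs (the paper uses x_max = 100 and alpha = 3/4).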
Proceedings ArticleDOI

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, and can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
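The "one additional output layer" recipe is what libraries now package directly. A minimal sketch using Hugging Face Transformers (library usage assumed, not from the paper itself):

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Adds a linear classification head on top of the pooled [CLS] representation.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

inputs = tokenizer("The food was great but the service was slow.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits     # (1, 3) class scores before softmax
```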
Proceedings ArticleDOI

The Stanford CoreNLP Natural Language Processing Toolkit

TL;DR: The design and use of the Stanford CoreNLP toolkit is described, an extensible pipeline that provides core natural language analysis, and it is suggested that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage.
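For reference, one common way to drive this toolkit from Python is Stanford's stanza client, which wraps a running CoreNLP Java server (a CoreNLP installation is assumed; annotator names are standard CoreNLP ones):

```python
from stanza.server import CoreNLPClient

text = "Great food but the service was dreadful."
with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "depparse"]) as client:
    ann = client.annotate(text)
    sent = ann.sentence[0]
    for edge in sent.basicDependencies.edge:    # protobuf dependency edges
        print(edge.dep, edge.source, edge.target)
```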