Open Access · Proceedings Article · DOI

Variational Graph Autoencoding as Cheap Supervision for AMR Coreference Resolution

I Y Li, +3 more
pp. 2790–2800
TL;DR
This work proposes a general pretraining method using variational graph autoencoder (VGAE) for AMR coreference resolution, which can leverage any general AMR corpus and even automatically parsed AMR data.
Abstract
Coreference resolution over semantic graphs like AMRs aims to group the graph nodes that represent the same entity. This is a crucial step for making document-level formal semantic representations. With annotated data on AMR coreference resolution, deep learning approaches have recently shown great potential for this task, yet they are usually data-hungry, and annotations are costly. We propose a general pretraining method using variational graph autoencoder (VGAE) for AMR coreference resolution, which can leverage any general AMR corpus and even automatically parsed AMR data. Experiments on benchmarks show that the pretraining approach achieves performance gains of up to 6% absolute F1 points. Moreover, our model significantly improves on the previous state-of-the-art model by up to 11% F1.
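The abstract only names the pretraining objective, so the following is a minimal, hedged sketch of a variational graph autoencoder (in the style of Kipf and Welling's VGAE) of the kind such pretraining could use: a graph-convolutional encoder produces Gaussian latents per node, and an inner-product decoder reconstructs the adjacency matrix. The use of PyTorch, the single-layer encoder, and all names and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAE(nn.Module):
    """Minimal variational graph autoencoder sketch (illustrative only)."""
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.shared = nn.Linear(in_dim, hid_dim)   # shared graph-conv-style layer
        self.mu = nn.Linear(hid_dim, lat_dim)      # mean head
        self.logvar = nn.Linear(hid_dim, lat_dim)  # log-variance head

    def encode(self, x, adj_norm):
        # one propagation step: aggregate neighbours, then transform
        h = F.relu(adj_norm @ self.shared(x))
        h = adj_norm @ h
        return self.mu(h), self.logvar(h)

    def forward(self, x, adj_norm):
        mu, logvar = self.encode(x, adj_norm)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        adj_rec = torch.sigmoid(z @ z.t())  # inner-product edge decoder
        return adj_rec, mu, logvar

def vgae_loss(adj_rec, adj_true, mu, logvar):
    # adj_true: float adjacency matrix with entries in {0, 1}
    rec = F.binary_cross_entropy(adj_rec, adj_true)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

In a pretraining setup of the kind the abstract describes, this loss would be minimized over a large (possibly automatically parsed) AMR corpus, after which the encoder weights are reused by the coreference model.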



Citations
Proceedings Article · DOI

HiPool: Modeling Long Documents Using Graph Neural Networks

TL;DR: This article proposed a graph-based method to encode long sequences in Natural Language Processing (NLP) tasks: the sequence is first chunked with a fixed length to model sentence-level information, and graphs are then used to model intra- and cross-sentence correlations with a new attention mechanism.
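As a rough, hedged illustration of the chunk-then-graph idea summarized above (not the HiPool code): split a long token-embedding sequence into fixed-length chunks, pool each chunk into a graph node, and let nodes attend over a chain graph of neighbouring chunks. The chunk length, mean pooling, chain adjacency, and attention form are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def chunk_and_attend(token_embs: torch.Tensor, chunk_len: int = 128) -> torch.Tensor:
    # token_embs: (seq_len, dim) -> pad to a multiple of chunk_len, then chunk
    seq_len, dim = token_embs.shape
    pad = (-seq_len) % chunk_len
    x = F.pad(token_embs, (0, 0, 0, pad))
    chunks = x.view(-1, chunk_len, dim).mean(dim=1)      # (num_chunks, dim) node features

    # chain adjacency: each chunk node connects to itself and its neighbours
    n = chunks.size(0)
    adj = torch.eye(n)
    idx = torch.arange(n - 1)
    adj[idx, idx + 1] = 1.0
    adj[idx + 1, idx] = 1.0

    # scaled dot-product attention masked to the graph edges
    scores = chunks @ chunks.t() / dim ** 0.5
    scores = scores.masked_fill(adj == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ chunks             # updated chunk-node states
```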
Journal Article · DOI

Document-level relation extraction based on sememe knowledge-enhanced abstract meaning representation and reasoning

TL;DR: This article introduced a document-level relation extraction method called SKAMRR (Sememe Knowledge-enhanced Abstract Meaning Representation and Reasoning).
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
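The summary above describes Adam only in words; below is a small NumPy sketch of the update it refers to, using the standard default hyperparameters from the paper. The class structure and variable names are mine, for illustration only.

```python
import numpy as np

class Adam:
    """Illustrative Adam optimizer: adaptive estimates of first and second moments."""
    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = None
        self.t = 0

    def step(self, params, grads):
        if self.m is None:
            self.m, self.v = np.zeros_like(params), np.zeros_like(params)
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grads       # 1st-moment estimate
        self.v = self.beta2 * self.v + (1 - self.beta2) * grads ** 2  # 2nd-moment estimate
        m_hat = self.m / (1 - self.beta1 ** self.t)                   # bias correction
        v_hat = self.v / (1 - self.beta2 ** self.t)
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```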
Journal Article · DOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
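To make the "constant error carousel" mentioned above concrete, here is a minimal NumPy sketch of one LSTM cell step: the cell state is updated additively under gate control, which is what lets gradients flow across long time lags. The gate ordering and weight shapes are conventional choices, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # x: (input_dim,); h_prev, c_prev: (hidden,)
    # W: (4*hidden, input_dim); U: (4*hidden, hidden); b: (4*hidden,)
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # input, forget, output gates
    c = f * c_prev + i * np.tanh(g)                # additive cell-state update (the "carousel")
    h = o * np.tanh(c)                             # gated hidden output
    return h, c
```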
Posted Content

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
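The "just one additional output layer" recipe in the summary above can be sketched as follows; this uses the Hugging Face transformers library and a toy sentence-classification head, both of which are illustrative assumptions rather than anything from the paper.

```python
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")      # pretrained bidirectional encoder
classifier = nn.Linear(encoder.config.hidden_size, 2)         # the single added output layer

batch = tokenizer(["AMR coreference is a graph task."], return_tensors="pt")
hidden = encoder(**batch).last_hidden_state                   # (batch, seq_len, hidden)
logits = classifier(hidden[:, 0])                             # predict from the [CLS] position
```

Fine-tuning would then update both the encoder and the added layer on task data; only the classification head is new.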
Proceedings Article · DOI

Graph Attention Networks

TL;DR: Graph Attention Networks (GATs) leverage masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
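A minimal single-head graph attention layer sketch, showing the masked self-attention over neighbours that the summary above refers to; the single head, LeakyReLU slope, and dense adjacency handling are conventional simplifications, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """One simplified graph-attention layer (single head, dense adjacency)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear map
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, x, adj):
        # adj: (N, N) adjacency, assumed to include self-loops
        h = self.W(x)                                     # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), negative_slope=0.2)
        e = e.masked_fill(adj == 0, float("-inf"))        # mask attention to graph edges
        alpha = torch.softmax(e, dim=-1)                  # attention over neighbours
        return F.elu(alpha @ h)                           # updated node representations
```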
Posted Content

SQuAD: 100,000+ Questions for Machine Comprehension of Text

TL;DR: The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.
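To illustrate the record layout the summary above describes (a question whose answer is a character span of the passage), here is a SQuAD-style example; the passage, question, and id are made up for illustration.

```python
example = {
    "context": "Abstract Meaning Representation (AMR) encodes sentences as rooted graphs.",
    "qas": [
        {
            "question": "How does AMR encode sentences?",
            "id": "example-0001",
            "answers": [
                # answer_start is the character offset of the span in the context
                {"text": "as rooted graphs", "answer_start": 56},
            ],
        }
    ],
}
```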