Neural Semantic Parsing with Type Constraints for Semi-Structured Tables
Jayant Krishnamurthy, Pradeep Dasigi, Matt Gardner
pp. 1516–1526
TL;DR
A new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables achieves state-of-the-art accuracy, suggesting that type constraints and entity linking are valuable components to incorporate in neural semantic parsers.
Abstract:
We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. Our parser is an encoder-decoder neural network with two key technical innovations: (1) a grammar for the decoder that only generates well-typed logical forms; and (2) an entity embedding and linking module that identifies entity mentions while generalizing across tables. We also introduce a novel method for training our neural model with question-answer supervision. On the WikiTableQuestions data set, our parser achieves a state-of-the-art accuracy of 43.3% for a single model and 45.9% for a 5-model ensemble, improving on the best prior score of 38.7% set by a 15-model ensemble. These results suggest that type constraints and entity linking are valuable components to incorporate in neural semantic parsers.
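The first innovation above — a decoder grammar that only generates well-typed logical forms — can be illustrated with a minimal sketch. The type names and production rules below are illustrative assumptions, not the paper's actual grammar; in the real parser a neural decoder scores the valid actions rather than picking deterministically.

```python
# Illustrative production rules: return type -> [(rule name, argument types)].
# Terminal rules are listed first so the deterministic demo below halts.
PRODUCTIONS = {
    "int": [("count", ["set"]), ("max", ["set"])],
    "set": [("all_rows", []), ("filter_eq", ["set", "str"])],
    "str": [("constant", [])],
}

def valid_actions(requested_type):
    """Only productions whose return type matches are ever proposed,
    so every completed derivation is well-typed by construction."""
    return PRODUCTIONS[requested_type]

def expand(requested_type, choose):
    """Build a logical form top-down; `choose` picks among the valid
    rules (a neural decoder would score them instead)."""
    rule, arg_types = choose(valid_actions(requested_type))
    args = [expand(t, choose) for t in arg_types]
    return (rule, args) if args else (rule,)

# Deterministically pick the first valid rule at each step:
form = expand("int", lambda actions: actions[0])
# form is ("count", [("all_rows",)]) — a well-typed logical form.
```

Because ill-typed rules are never in the action set, the decoder cannot produce an ill-typed logical form even early in training.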
Citations
Journal Article (DOI)
A Survey of the Usages of Deep Learning for Natural Language Processing
TL;DR: The field of natural language processing has been propelled forward by an explosion in the use of deep learning models over the last several years; this survey covers several core linguistic processing issues in addition to many applications of computational linguistics.
Proceedings Article (DOI)
AllenNLP: A Deep Semantic Natural Language Processing Platform
Matt Gardner,Joel Grus,Mark Neumann,Oyvind Tafjord,Pradeep Dasigi,Nelson F. Liu,Matthew E. Peters,Michael Schmitz,Luke Zettlemoyer +8 more
TL;DR: AllenNLP is a library for applying deep learning methods to NLP research that addresses common research-engineering issues with easy-to-use command-line tools, declarative configuration-driven experiments, and modular NLP abstractions.
Proceedings Article (DOI)
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
TL;DR: DROP is a new reading comprehension benchmark that requires Discrete Reasoning Over the content of Paragraphs; the paper also presents a new model that combines reading comprehension methods with simple numerical reasoning to achieve 51% F1.
Proceedings Article (DOI)
Coarse-to-Fine Decoding for Neural Semantic Parsing
Li Dong, Mirella Lapata
TL;DR: The authors propose a structure-aware neural architecture that decomposes the semantic parsing process into two stages: given an input utterance, the model first generates a rough sketch of its meaning in which low-level details such as variable names and arguments are glossed over.
Proceedings Article (DOI)
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
TL;DR: TaBERT is a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables that achieves new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider.
References
Journal Article (DOI)
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
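The "constant error carousel" in the blurb above refers to the additive cell-state update that lets gradients survive long time lags. A minimal single-unit sketch (scalar state for readability; the weight values below are arbitrary assumptions, not trained parameters):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a single-unit LSTM. W maps a gate name to its
    (input weight, recurrent weight, bias) triple."""
    gate = lambda g: sigmoid(W[g][0] * x + W[g][1] * h_prev + W[g][2])
    i, f, o = gate("i"), gate("f"), gate("o")   # input / forget / output gates
    c_tilde = math.tanh(W["c"][0] * x + W["c"][1] * h_prev + W["c"][2])
    # Constant error carousel: the cell state is updated additively,
    # so gradients can flow through c across many steps.
    c = f * c_prev + i * c_tilde
    h = o * math.tanh(c)
    return h, c

# Arbitrary illustrative weights with a strong forget-gate bias:
W = {"i": (1.0, 0.0, 0.0), "f": (0.0, 0.0, 2.0),
     "o": (1.0, 0.0, 0.0), "c": (1.0, 0.0, 0.0)}
h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0]:
    h, c = lstm_step(x, h, c, W)
# After two zero inputs, c still retains most of the signal from x=1.
```

The strong forget-gate bias keeps the gate near 1, so the cell state decays only slowly — a toy illustration of how the architecture bridges long lags.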
Proceedings Article
Understanding the difficulty of training deep feedforward neural networks
Xavier Glorot, Yoshua Bengio
TL;DR: The objective is to understand why standard gradient descent from random initialization performs so poorly with deep neural networks, in order to explain recent relative successes and help design better algorithms in the future.
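The paper's practical takeaway is the "Glorot" (Xavier) initialization, which scales weights by the layer's fan-in and fan-out to keep activation and gradient variance roughly constant across layers. A minimal sketch of the uniform variant:

```python
import math
import random

def glorot_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform initialization: sample weights from
    U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))."""
    rng = random.Random(seed)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

# Initialize a 256 -> 128 weight matrix:
W = glorot_uniform(256, 128)
```

Every sampled weight lies within the fan-dependent bound, unlike a fixed-scale random initialization that ignores layer width.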
Journal Article (DOI)
Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning
TL;DR: This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement, in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do so without explicitly computing gradient estimates.
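The core idea of these REINFORCE algorithms is the score-function update: nudge parameters by the reward times the gradient of the log-probability of the chosen action, which follows the gradient of expected reward without computing it explicitly. A minimal sketch on a 2-armed bandit (the bandit setup, rewards, and learning rate are illustrative assumptions):

```python
import math
import random

def reinforce_bandit(rewards=(0.2, 0.8), lr=0.5, steps=2000, seed=0):
    """REINFORCE with a softmax policy over two actions.
    Update: theta_k += lr * r * d/dtheta_k log pi(a)."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]                      # action preferences
    for _ in range(steps):
        z = [math.exp(t) for t in theta]
        s = sum(z)
        pi = [v / s for v in z]             # softmax policy
        a = 0 if rng.random() < pi[0] else 1
        r = rewards[a]
        # Score-function gradient for softmax:
        # d log pi(a) / d theta_k = 1{k == a} - pi_k
        for k in range(2):
            theta[k] += lr * r * ((1 if k == a else 0) - pi[k])
    return theta

theta = reinforce_bandit()
# The preference for the higher-reward arm (index 1) grows over training.
```

No explicit gradient of expected reward is ever formed; sampling actions and weighting the log-probability gradient by the received reward is enough, which is exactly the property the blurb describes.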
Book
Types and Programming Languages
TL;DR: This text provides a comprehensive introduction both to type systems in computer science and to the basic theory of programming languages, with a variety of approaches to modeling the features of object-oriented languages.
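The connection to the citing paper is direct: a type system rules out ill-formed programs by construction. A toy checker for the simply typed lambda calculus in the book's spirit (the encoding below is an illustrative assumption; the book of course covers far richer systems):

```python
def check(expr, env):
    """Type-check a tiny lambda-calculus term.
    expr: ("var", name) | ("lam", name, arg_type, body) | ("app", fn, arg)
    Types: "int", "bool", or ("->", t_in, t_out)."""
    kind = expr[0]
    if kind == "var":
        return env[expr[1]]                      # look up the variable's type
    if kind == "lam":
        _, name, t, body = expr
        # The body is checked in an environment extended with the argument.
        return ("->", t, check(body, {**env, name: t}))
    if kind == "app":
        _, fn, arg = expr
        t_fn, t_arg = check(fn, env), check(arg, env)
        if t_fn[0] == "->" and t_fn[1] == t_arg:
            return t_fn[2]
        raise TypeError("ill-typed application")

identity = ("lam", "x", "int", ("var", "x"))     # \x:int. x
t = check(("app", identity, ("var", "n")), {"n": "int"})
# t is "int": applying an int -> int function to an int is well-typed.
```

The same principle — only derivations that type-check are admitted — is what the citing paper builds into its decoder grammar.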
Proceedings Article
Semantic Parsing on Freebase from Question-Answer Pairs
TL;DR: This paper trains a semantic parser that scales up to Freebase and outperforms the state-of-the-art parser of Cai and Yates (2013) on their dataset, despite not having annotated logical forms.
Related Papers (5)
Learning to parse database queries using inductive logic programming
John M. Zelle, Raymond J. Mooney