Open Access · Proceedings Article (DOI)

Question Generation for Question Answering

TL;DR: Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.
Abstract
This paper presents how to generate questions from given passages using neural networks, where large-scale QA pairs are automatically crawled and processed from a community-QA website and used as training data. The contribution of the paper is two-fold. First, two types of question generation approaches are proposed: a retrieval-based method using a convolutional neural network (CNN), and a generation-based method using a recurrent neural network (RNN). Second, we show how to leverage the generated questions to improve existing question answering systems. We evaluate our question generation method on the answer sentence selection task on three benchmark datasets: SQuAD, MS MARCO, and WikiQA. Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.
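To make the retrieval-based idea concrete, here is a minimal, hypothetical sketch: given an input passage, retrieve the question from a crawled QA pool whose source passage is most similar. Bag-of-words cosine similarity stands in for the paper's CNN matching model, and the `qa_pool` data is invented for illustration.

```python
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts over word-count vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    den = math.sqrt(sum(v * v for v in ca.values())) * \
          math.sqrt(sum(v * v for v in cb.values()))
    return num / den if den else 0.0

def retrieve_question(passage: str, qa_pool: list[tuple[str, str]]) -> str:
    """Return the question whose source passage best matches the input.

    qa_pool: (source_passage, question) pairs, e.g. crawled from a
    community-QA site as in the paper's training-data setup.
    """
    return max(qa_pool, key=lambda pair: cosine(passage, pair[0]))[1]

qa_pool = [
    ("the capital of france is paris", "What is the capital of France?"),
    ("water boils at 100 degrees celsius", "At what temperature does water boil?"),
]
print(retrieve_question("Paris is the capital city of France", qa_pool))
# → What is the capital of France?
```

In the paper itself, a learned CNN scores passage-question relevance instead of raw lexical overlap, which lets retrieval succeed even when the input passage shares few surface words with the pool.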


Citations
Proceedings Article (DOI)

Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks

TL;DR: A maxout pointer mechanism with a gated self-attention encoder is proposed to address the challenges of processing long text inputs for question generation; it outperforms previous approaches with either sentence-level or paragraph-level inputs.
Proceedings Article (DOI)

Event Extraction as Machine Reading Comprehension.

TL;DR: This paper proposes a new learning paradigm for EE by explicitly casting it as a machine reading comprehension (MRC) problem. It includes an unsupervised question generation process that transfers event schemas into a set of natural questions, followed by a BERT-based question-answering process that retrieves answers as EE results.
Proceedings Article (DOI)

Answer-focused and Position-aware Neural Question Generation.

TL;DR: The experimental results show that the proposed answer-focused and position-aware neural question generation model significantly improves the baseline and outperforms the state-of-the-art system.
Proceedings Article (DOI)

Harvesting Paragraph-Level Question-Answer Pairs from Wikipedia

TL;DR: This paper proposes a neural network approach that incorporates coreference knowledge via a novel gating mechanism to generate question-answer pairs covering content beyond a single sentence, finding that the linguistic knowledge introduced by the coreference representation significantly aids question generation.
Proceedings Article (DOI)

Generating Clarifying Questions for Information Retrieval

TL;DR: A taxonomy of clarification for open-domain search queries is identified by analyzing large-scale query reformulation data sampled from Bing search logs, and supervised and reinforcement learning models for generating clarifying questions learned from weak supervision data are proposed.
References
Proceedings Article (DOI)

Bleu: a Method for Automatic Evaluation of Machine Translation

TL;DR: This paper proposes a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent; it correlates highly with human evaluation and has little marginal cost per run.
Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder-decoder architecture; the authors propose to extend it by allowing the model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
Posted Content

ADADELTA: An Adaptive Learning Rate Method

Matthew D. Zeiler · 22 Dec 2012
TL;DR: A novel per-dimension learning rate method for gradient descent called ADADELTA is presented; it dynamically adapts over time using only first-order information and has minimal computational overhead beyond vanilla stochastic gradient descent.
Posted Content

SQuAD: 100,000+ Questions for Machine Comprehension of Text

TL;DR: The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.