Open Access Proceedings Article

Learning to Ask: Neural Question Generation for Reading Comprehension

Xinya Du, +2 more
Vol. 1, pp. 1342–1352
TL;DR
This paper proposes an attention-based sequence learning model for question generation from text passages in reading comprehension; the model is trainable end-to-end via sequence-to-sequence learning and significantly outperforms the state-of-the-art rule-based system.
Abstract
We study automatic question generation for sentences from text passages in reading comprehension. We introduce an attention-based sequence learning model for the task and investigate the effect of encoding sentence- vs. paragraph-level information. In contrast to all previous work, our model does not rely on hand-crafted rules or a sophisticated NLP pipeline; it is instead trainable end-to-end via sequence-to-sequence learning. Automatic evaluation results show that our system significantly outperforms the state-of-the-art rule-based system. In human evaluations, questions generated by our system are also rated as being more natural (i.e., grammaticality, fluency) and as more difficult to answer (in terms of syntactic and lexical divergence from the original text and reasoning needed to answer).
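The abstract names the architecture but the page carries no code, so here is a minimal sketch of a globally attentive encoder-decoder of the kind described, in PyTorch. Every module name, hidden size, and the Luong-style "general" attention scoring are illustrative assumptions, not the authors' released implementation.

```python
# Minimal attention-based seq2seq sketch for question generation (PyTorch).
# Shapes: B = batch, S = source length, T = target length, H = hidden size.
import torch
import torch.nn as nn

class Seq2SeqQG(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=600):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional LSTM encoder over the source sentence (or paragraph).
        self.encoder = nn.LSTM(emb_dim, hid_dim // 2, batch_first=True,
                               bidirectional=True)
        # Unidirectional decoder; initial state left at zeros for brevity.
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim, hid_dim, bias=False)  # "general" score
        self.out = nn.Linear(hid_dim * 2, vocab_size)

    def forward(self, src, tgt):
        enc_out, _ = self.encoder(self.embed(src))            # (B, S, H)
        dec_out, _ = self.decoder(self.embed(tgt))            # (B, T, H)
        # Attention: each decoder state scores every encoder state.
        scores = torch.bmm(self.attn(dec_out), enc_out.transpose(1, 2))
        alpha = torch.softmax(scores, dim=-1)                 # (B, T, S)
        context = torch.bmm(alpha, enc_out)                   # (B, T, H)
        return self.out(torch.cat([dec_out, context], dim=-1))  # (B, T, V)
```

Training would minimize cross-entropy between these logits and the shifted target question; beam search would replace greedy decoding at inference time.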



Citations
Journal Article

AI-assisted programming question generation: Constructing semantic networks of programming knowledge by local knowledge graph and abstract syntax tree

TL;DR: This paper proposes a knowledge-based programming question generation (PQG) model that aims to help instructors generate new programming questions and expand their assessment items using a local knowledge graph and abstract syntax tree.
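As a toy illustration of the abstract-syntax-tree side of such a pipeline, the sketch below maps AST node types to coarse programming concepts using Python's built-in ast module; the concept ontology is invented here, not taken from the paper.

```python
# Toy sketch: derive programming "concepts" from source code via its AST,
# the kind of raw material a knowledge-based question generator could link
# into a local knowledge graph. The concept mapping is illustrative only.
import ast

def extract_concepts(source: str) -> set[str]:
    """Map AST node types found in `source` to coarse programming concepts."""
    concept_of = {  # illustrative mapping, not the paper's ontology
        ast.For: "loops", ast.While: "loops",
        ast.If: "conditionals", ast.FunctionDef: "functions",
        ast.ListComp: "comprehensions", ast.Try: "exception handling",
    }
    tree = ast.parse(source)
    return {concept_of[type(node)] for node in ast.walk(tree)
            if type(node) in concept_of}

print(extract_concepts("def f(xs):\n    return [x for x in xs if x > 0]"))
# -> {'functions', 'comprehensions'} (set order may vary)
```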
Posted Content

On Training Instance Selection for Few-Shot Neural Text Generation

TL;DR: This article proposes a simple selection strategy with K-means clustering to select few-shot training instances, based on the intuition that training instances should be diverse and representative of the entire data distribution, and shows that even with this naive clustering-based approach, generation models consistently outperform random sampling on three text generation tasks: data-to-text generation, document summarization, and question generation.
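A minimal sketch of the selection strategy as described: cluster embedded candidates with K-means and keep the instance nearest each centroid, so the chosen few-shot set spreads across the data distribution. The random vectors below stand in for real sentence embeddings.

```python
# Clustering-based training-instance selection sketch (scikit-learn).
import numpy as np
from sklearn.cluster import KMeans

def select_instances(embeddings: np.ndarray, k: int) -> list[int]:
    """Return indices of the k instances closest to the k cluster centers."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    picked = []
    for center in km.cluster_centers_:
        # Nearest pool instance to this centroid represents the cluster.
        picked.append(int(np.argmin(np.linalg.norm(embeddings - center, axis=1))))
    return picked

pool = np.random.RandomState(0).randn(1000, 768)  # stand-in sentence vectors
few_shot_ids = select_instances(pool, k=8)
```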
Posted Content

How Well Do You Know Your Audience? Reader-aware Question Generation.

TL;DR: This paper collects a new dataset of questions and posts from social media, augmented with background information about the post readers, and uses it to address the task of reader-aware question generation.
Journal Article

Question answering with deep neural networks for semi-structured heterogeneous genealogical knowledge graphs

15 Dec 2022
TL;DR: This paper proposes an end-to-end approach to question answering over genealogical family trees, combining knowledge graphs with unstructured texts and training a transformer-based question answering model.
Proceedings Article

Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation.

TL;DR: The authors propose a framework to automatically extract, denoise, and enforce important input concepts as lexical constraints; it performs comparably to or better than its unconstrained counterpart on automatic metrics, demonstrates higher coverage for concept preservation, and receives better ratings in human evaluation.
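A toy sketch of the concept-preservation notion: treat extracted phrases as lexical constraints and measure what fraction survive in the generated text. The extraction step is omitted, and the helper below is hypothetical, not the paper's metric.

```python
# Toy concept-coverage check: how many lexical constraints appear verbatim
# in the generated output. Substring matching is a deliberate simplification.
def concept_coverage(constraints: list[str], generated: str) -> float:
    text = generated.lower()
    kept = sum(1 for c in constraints if c.lower() in text)
    return kept / len(constraints) if constraints else 1.0

constraints = ["beam search", "lexical constraints"]
print(concept_coverage(constraints, "We decode with constrained beam search."))  # 0.5
```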
References
Journal Article

Long Short-Term Memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1,000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
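The gating this describes is compact enough to write out. Below is one LSTM step in NumPy, where the additive cell-state update is the "constant error carousel" that lets gradients survive long time lags; the weight packing and shapes are illustrative.

```python
# One LSTM step in NumPy. The cell state c is updated additively, so error
# can flow across long lags without vanishing or exploding.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """x: input (d,); h, c: previous hidden/cell states (n,);
    W: (4n, d), U: (4n, n), b: (4n,), packed for the i, f, o, g gates."""
    z = W @ x + U @ h + b
    n = h.shape[0]
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])           # candidate cell update
    c_new = f * c + i * g          # additive update: the error carousel
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```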
Proceedings Article

GloVe: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
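The model fits word-vector dot products to log co-occurrence counts by weighted least squares. A NumPy rendering of that objective, with the standard weighting function and illustrative parameter shapes, is sketched below.

```python
# The GloVe objective in NumPy: weighted least-squares fit of dot products
# to log co-occurrence counts. X is the (dense, here) co-occurrence matrix.
import numpy as np

def glove_loss(W, W_tilde, b, b_tilde, X, x_max=100.0, alpha=0.75):
    """J = sum over X_ij > 0 of f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2."""
    i, j = np.nonzero(X)
    x = X[i, j]
    f = np.minimum((x / x_max) ** alpha, 1.0)   # caps weight of frequent pairs
    err = np.sum(W[i] * W_tilde[j], axis=1) + b[i] + b_tilde[j] - np.log(x)
    return np.sum(f * err ** 2)
```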
Proceedings Article

BLEU: a Method for Automatic Evaluation of Machine Translation

TL;DR: This paper proposes a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, correlates highly with human evaluation, and has little marginal cost per run.
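The metric combines clipped n-gram precision with a brevity penalty. The sketch below computes an unsmoothed sentence-level BLEU for illustration; real evaluations use corpus-level BLEU, typically with smoothing (e.g., via sacrebleu).

```python
# Compact sentence-level BLEU sketch: clipped n-gram precision for n = 1..4,
# geometric mean, times a brevity penalty. Illustrative, not a full scorer.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i+n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_p = 0.0
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c[g], r[g]) for g in c)      # clipped counts
        total = max(sum(c.values()), 1)
        if overlap == 0:
            return 0.0                                 # unsmoothed: hard zero
        log_p += math.log(overlap / total) / max_n     # geometric mean
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))  # brevity penalty
    return bp * math.exp(log_p)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))  # 1.0
```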
Proceedings Article

Neural Machine Translation by Jointly Learning to Align and Translate

TL;DR: It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
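The proposed soft-search is additive attention: score each encoder state against the current decoder state, softmax the scores, and take the weighted sum of encoder states as the context vector. A NumPy sketch with illustrative shapes:

```python
# Additive ("Bahdanau") attention in NumPy: the decoder state soft-searches
# over encoder states instead of squeezing the source into one fixed vector.
import numpy as np

def additive_attention(s, H, W_s, W_h, v):
    """s: decoder state (d,); H: encoder states (T, d);
    W_s, W_h: (a, d) projections; v: (a,). Returns (context, weights)."""
    scores = np.tanh(H @ W_h.T + s @ W_s.T) @ v   # e_j = v . tanh(W_s s + W_h h_j)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over source positions
    return weights @ H, weights                    # context vector (d,), weights (T,)
```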
Proceedings Article

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
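The same paper introduced the gated hidden unit later known as the GRU. One step of it in NumPy, with illustrative weight shapes:

```python
# One step of the gated recurrent unit from the RNN Encoder-Decoder paper
# (later called the GRU), in NumPy.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Uz, Wr, Ur, W, U):
    """x: input (d,); h: previous hidden state (n,); W*: (n, d); U*: (n, n)."""
    z = sigmoid(Wz @ x + Uz @ h)          # update gate
    r = sigmoid(Wr @ x + Ur @ h)          # reset gate
    h_tilde = np.tanh(W @ x + U @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde      # interpolate old and candidate state
```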