Open Access Proceedings Article

Automatic Cloze-Questions Generation

TLDR
This work presents an automatic Cloze Question Generation (CQG) system that, given an English article, generates a list of important cloze questions. The system is divided into three modules: sentence selection, keyword selection, and distractor selection.
Abstract
Cloze questions are fill-in-the-blank questions: a sentence with one or more blanks, accompanied by multiple choices from which to pick the answer. In this work, we present an automatic Cloze Question Generation (CQG) system that generates a list of important cloze questions from a given English article. Our system is divided into three modules: sentence selection, keyword selection, and distractor selection. We also present evaluation guidelines for CQG systems; using these guidelines, three evaluators report an average score of 3.18 (out of 4) on Cricket World Cup 2011 data.
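The three-module pipeline described in the abstract can be sketched as follows. The function names and the simple frequency- and length-based heuristics are illustrative assumptions, not the paper's actual selection criteria:

```python
import re
from collections import Counter

def select_sentences(article, k=3):
    # Illustrative stand-in for the sentence-selection module: rank
    # sentences by how many of the article's frequent content words
    # they contain, and keep the top k.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", article) if s.strip()]
    freq = Counter(w for w in re.findall(r"[a-z]+", article.lower()) if len(w) > 3)
    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z]+", s.lower()))
    return sorted(sentences, key=score, reverse=True)[:k]

def select_keyword(sentence):
    # Illustrative stand-in for the keyword-selection module: blank
    # out the longest word as the answer key.
    words = re.findall(r"[A-Za-z]+", sentence)
    return max(words, key=len) if words else None

def make_cloze(sentence, keyword, distractors):
    # Assemble the cloze question: the keyword becomes the blank and
    # the answer, and the distractors complete the choice list.
    stem = sentence.replace(keyword, "_____", 1)
    return stem, [keyword] + distractors

stem, choices = make_cloze(
    "India won the 2011 Cricket World Cup final against Sri Lanka.",
    "India",
    ["Australia", "England", "Pakistan"],
)
```

A real distractor-selection module would draw candidates of the same type as the keyword (here, other countries) from the article or an external resource; the fixed list above is only a placeholder.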

Citations
Journal ArticleDOI

Automatic Multiple Choice Question Generation From Text: A Survey

TL;DR: A generic workflow for an automatic MCQ generation system is outlined and the list of techniques adopted in the literature is discussed, including the evaluation techniques for assessing the quality of the system generated MCQs.
Proceedings ArticleDOI

A System for Generating Multiple Choice Questions: With a Novel Approach for Sentence Selection

TL;DR: This paper presents a system that generates MCQs automatically using a sports domain text as input and proposes a novel technique to select informative sentences by using topic modeling and parse structure similarity.
Proceedings ArticleDOI

Knowledge Questions from Knowledge Graphs

TL;DR: This paper proposes an end-to-end approach to automatically generating quiz-style knowledge questions from a knowledge graph such as DBpedia: a named entity is selected from the knowledge graph as the answer, a structured query is constructed that yields that entity as its sole result, and a template-based method verbalizes the query into a natural language question.
Proceedings ArticleDOI

Automatic Generation of English Vocabulary Tests

TL;DR: A novel method for automatically generating English vocabulary tests, using TOEFL vocabulary questions as a model; the evaluation suggests that the machine-generated questions capture some characteristics of the human-generated questions, and that half of them can be used in English tests.
References
Proceedings ArticleDOI

Feature-rich part-of-speech tagging with a cyclic dependency network

TL;DR: A new part-of-speech tagger is presented that demonstrates the following ideas: explicit use of both preceding and following tag contexts via a dependency network representation, broad use of lexical features, and effective use of priors in conditional log-linear models.
Proceedings ArticleDOI

Accurate Unlexicalized Parsing

TL;DR: It is demonstrated that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar.
Proceedings ArticleDOI

Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling

TL;DR: By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference.
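The decoding idea in this summary can be illustrated with a generic simulated-annealing search over tag sequences. The toy scoring interface and the geometric cooling schedule below are assumptions for illustration, not the sequence models from the paper:

```python
import math
import random

def anneal_decode(tokens, tags, score, steps=2000, t_start=2.0, t_end=0.05, seed=0):
    # Simulated-annealing decoder: start from a random tag sequence,
    # repeatedly propose a new tag at one position, and accept the
    # proposal with the Metropolis criterion at a falling temperature.
    # Because `score` sees the whole sequence, non-local constraints
    # fit naturally, which is the point of replacing Viterbi decoding.
    rng = random.Random(seed)
    seq = [rng.choice(tags) for _ in tokens]
    cur = score(tokens, seq)
    best, best_score = list(seq), cur
    for step in range(steps):
        temp = t_start * (t_end / t_start) ** (step / max(steps - 1, 1))
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice(tags)
        new = score(tokens, seq)
        if new >= cur or rng.random() < math.exp((new - cur) / temp):
            cur = new
        else:
            seq[i] = old
        if cur > best_score:
            best, best_score = list(seq), cur
    return best
```

With a scoring function that, for example, rewards giving repeated mentions of the same token the same label, the sampler enforces exactly the kind of non-local consistency the paper targets.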
Proceedings ArticleDOI

The automated acquisition of topic signatures for text summarization

TL;DR: A method for automatically training topic signatures (sets of related words, with associated weights, organized around head topics) is described and illustrated with signatures the authors created with 6,194 TREC collection texts over 4 selected topics.
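The core idea here, weighted word sets learned by contrasting on-topic documents with a background corpus, can be sketched as below. The relative-frequency weighting with add-one smoothing is a simplified stand-in for the likelihood-ratio statistic used in the paper:

```python
import math
from collections import Counter

def topic_signature(topic_docs, background_docs, top_n=5):
    # Weight each word by how much more frequent it is in the topic
    # documents than in the background corpus (a KL-style term with
    # add-one smoothing on the background side), and keep the top
    # words with their weights as the topic's signature.
    topic = Counter(w for d in topic_docs for w in d.lower().split())
    background = Counter(w for d in background_docs for w in d.lower().split())
    n_t = sum(topic.values())
    n_b = sum(background.values())
    vocab = len(set(topic) | set(background))
    def weight(w):
        p_t = topic[w] / n_t
        p_b = (background[w] + 1) / (n_b + vocab)
        return p_t * math.log(p_t / p_b)
    return sorted(((w, weight(w)) for w in topic), key=lambda x: -x[1])[:top_n]
```

Words that are frequent in the topic documents but rare in the background corpus receive the highest weights; shared function words score near or below zero and drop out of the signature.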
Proceedings Article

Stanford’s Multi-Pass Sieve Coreference Resolution System at the CoNLL-2011 Shared Task

TL;DR: The coreference resolution system submitted by Stanford at the CoNLL-2011 shared task was ranked first in both tracks, with a score of 57.8 in the closed track and 58.3 in the open track.