
Peter Clark

Researcher at Allen Institute for Artificial Intelligence

Publications: 194
Citations: 11889

Peter Clark is an academic researcher at the Allen Institute for Artificial Intelligence. He has contributed to research on topics including question answering and natural language. He has an h-index of 47 and has co-authored 175 publications receiving 9851 citations. Previous affiliations of Peter Clark include the University of Arizona and the National Research Council.

Papers
Journal Article

The CN2 Induction Algorithm

TL;DR: A description and empirical evaluation of a new induction system, CN2, designed for the efficient induction of simple, comprehensible production rules in domains where a poor description language and/or noise may be present.
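
The core of CN2 is a covering loop: learn one good rule, remove the examples it covers, and repeat. The following is a minimal, illustrative Python sketch under simplifying assumptions (beam width of 1, rules scored by the majority-class fraction of the covered examples rather than CN2's entropy and significance tests); all function and variable names are invented for illustration and are not taken from the paper.

```python
from collections import Counter

def covers(rule, example):
    """A rule covers an example if every attribute=value test holds."""
    return all(example[a] == v for a, v in rule)

def grow_rule(examples, labels, attributes):
    """Greedily add the single best test until no test improves the score."""
    rule, best = [], 0.0
    while True:
        best_ext, best_ext_score = None, best
        for a in attributes:
            for v in {ex[a] for ex in examples}:
                cand = rule + [(a, v)]
                cov = [y for ex, y in zip(examples, labels) if covers(cand, ex)]
                if not cov:
                    continue
                score = Counter(cov).most_common(1)[0][1] / len(cov)
                if score > best_ext_score:
                    best_ext, best_ext_score = cand, score
        if best_ext is None:
            break
        rule, best = best_ext, best_ext_score
    cov = [y for ex, y in zip(examples, labels) if covers(rule, ex)]
    return rule, Counter(cov).most_common(1)[0][0]  # rule and its predicted class

def cn2_ordered(examples, labels, attributes):
    """Covering loop: learn a rule, remove the examples it covers, repeat."""
    examples, labels, rules = list(examples), list(labels), []
    while examples:
        rule, cls = grow_rule(examples, labels, attributes)
        rules.append((rule, cls))
        keep = [i for i, ex in enumerate(examples) if not covers(rule, ex)]
        if len(keep) == len(examples):  # nothing new covered; stop
            break
        examples = [examples[i] for i in keep]
        labels = [labels[i] for i in keep]
    return rules

data = [{"sky": "sunny", "wind": "weak"},
        {"sky": "rainy", "wind": "strong"},
        {"sky": "sunny", "wind": "strong"}]
print(cn2_ordered(data, ["play", "stay", "play"], ["sky", "wind"]))
```

The resulting ordered rule list is applied top to bottom at prediction time: the first rule that covers a new example determines its class.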
Book Chapter

Rule Induction with CN2: Some Recent Improvements

TL;DR: Improvements to the CN2 algorithm are described, including the use of the Laplacian error estimate as an alternative evaluation function, and it is shown how unordered as well as ordered rules can be generated.
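
The Laplace (Laplacian) accuracy estimate referred to here scores a rule as (n_c + 1) / (n_tot + k), where n_c is the number of covered examples belonging to the rule's predicted class, n_tot is the total number of covered examples, and k is the number of classes. A tiny sketch, with names chosen for illustration:

```python
def laplace_accuracy(n_correct: int, n_covered: int, n_classes: int) -> float:
    """Laplace accuracy estimate: (n_c + 1) / (n_tot + k)."""
    return (n_correct + 1) / (n_covered + n_classes)

# A rule covering 5 examples, 4 of its predicted class, in a 2-class problem:
# (4 + 1) / (5 + 2) = 0.714..., more cautious than the raw accuracy 4/5 = 0.8.
print(laplace_accuracy(4, 5, 2))
```

Because the estimate shrinks toward chance for rules that cover few examples, it penalizes narrow, overfitted rules that raw accuracy would rate perfectly.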
Journal Article

The Seventh PASCAL Recognizing Textual Entailment Challenge.

TL;DR: This paper presents the Seventh Recognizing Textual Entailment (RTE-7) challenge, which replicated the exercise proposed in RTE-6, consisting of a Main Task, with a subtask aimed at detecting novel information, and a KBP Validation Task, in which RTE systems had to validate the output of systems participating in the KBP Slot Filling Task.
Posted Content

Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge

TL;DR: A new question set, text corpus, and baselines assembled to encourage AI research in advanced question answering constitute the AI2 Reasoning Challenge (ARC), which requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI.
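
For readers who want to inspect the question set, a small sketch is shown below. It assumes the Hugging Face datasets library and the community-hosted ai2_arc dataset with its ARC-Challenge configuration; the dataset name and field layout are assumptions, not taken from the paper.

```python
from datasets import load_dataset

# Assumed dataset name and configuration on the Hugging Face hub.
arc = load_dataset("ai2_arc", "ARC-Challenge", split="validation")
item = arc[0]
print(item["question"])         # natural-language science question
print(item["choices"]["text"])  # multiple-choice answer options
print(item["answerKey"])        # gold label, e.g. "A"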
Proceedings Article

UNIFIEDQA: Crossing Format Boundaries with a Single QA System

TL;DR: This work uses the latest advances in language modeling to build a single pre-trained QA model, UNIFIEDQA, that performs well across 19 QA datasets spanning 4 diverse formats, and results in a new state of the art on 10 factoid and commonsense question answering datasets.
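
The format crossing works by serializing every question, together with any answer options or context, into one plain-text input and generating the answer as text. A minimal sketch, assuming the Hugging Face transformers library and an allenai/unifiedqa-t5-small checkpoint; the checkpoint name and the exact input separator are assumptions, not taken from the abstract.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

name = "allenai/unifiedqa-t5-small"  # assumed checkpoint name
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

def answer(question: str, context: str = "") -> str:
    # Options or a passage are simply appended after a separator; the same
    # text-to-text interface serves multiple-choice, extractive, abstractive,
    # and yes/no formats.
    text = question.lower() + " \n " + context.lower()
    ids = tokenizer(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=32)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(answer("Which is a conductor?", "(a) copper (b) rubber (c) wood"))
```

Because input and output are both plain text, the same weights can be fine-tuned or evaluated on any QA dataset without format-specific heads.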