
Christopher D. Manning

Researcher at Stanford University

Publications - 537
Citations - 173,242

Christopher D. Manning is an academic researcher from Stanford University. The author has contributed to research in topics: Parsing & Computer science. The author has an h-index of 138 and has co-authored 499 publications receiving 147,595 citations. Previous affiliations of Christopher D. Manning include Charles University in Prague & University of Sydney.

Papers
Posted Content

A large annotated corpus for learning natural language inference

TL;DR: Introduces the Stanford Natural Language Inference corpus, a new, freely available collection of labeled sentence pairs written by humans performing a novel grounded task based on image captioning; the corpus allows a neural network-based model to perform competitively on natural language inference benchmarks for the first time.
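As a rough illustration of the kind of data such a corpus contains, the sketch below reads labeled premise/hypothesis pairs from JSON Lines records. The field names and the two inline examples are assumptions chosen for illustration, not quoted from the corpus release.

```python
import json

# Minimal sketch (assumption: JSON Lines records with these field names;
# the real corpus release uses similar per-example records of two sentences
# and a label, but the examples below are made up).
EXAMPLE_LINES = [
    '{"sentence1": "A man inspects a uniform.", "sentence2": "The man is sleeping.", "gold_label": "contradiction"}',
    '{"sentence1": "Two dogs run through a field.", "sentence2": "Animals are outdoors.", "gold_label": "entailment"}',
]

def read_pairs(lines):
    """Yield (premise, hypothesis, label) triples from JSON Lines records."""
    for line in lines:
        record = json.loads(line)
        yield record["sentence1"], record["sentence2"], record["gold_label"]

for premise, hypothesis, label in read_pairs(EXAMPLE_LINES):
    print(f"{label:13s}  P: {premise}  H: {hypothesis}")
```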
Proceedings Article

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

TL;DR: This paper proposes a more sample-efficient pre-training task called replaced token detection, which corrupts the input by replacing some tokens with plausible alternatives sampled from a small generator network; the model is then trained to predict whether each token in the corrupted input was replaced by a generator sample or not.
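To make the objective concrete, here is a minimal, hedged sketch of how a replaced-token-detection training example might be constructed. The unigram "generator" that samples replacements from a fixed vocabulary is a stand-in assumption for the small generator network described above, and the replacement rate is arbitrary.

```python
import random

# Toy sketch of constructing a replaced-token-detection example
# (assumption: a simplified "generator" sampling from a fixed vocabulary
# instead of a trained masked language model).
VOCAB = ["the", "chef", "cooked", "ate", "meal", "a", "quickly"]

def corrupt(tokens, replace_prob=0.15, rng=random):
    """Replace a fraction of tokens with sampled alternatives and return the
    corrupted sequence plus per-token labels (1 = replaced, 0 = original)."""
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            sampled = rng.choice(VOCAB)
            corrupted.append(sampled)
            # A sampled token that happens to equal the original counts as original.
            labels.append(1 if sampled != tok else 0)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

random.seed(0)
sentence = ["the", "chef", "cooked", "the", "meal"]
corrupted, labels = corrupt(sentence, replace_prob=0.4)
print(list(zip(corrupted, labels)))  # a discriminator would be trained to predict `labels`
```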
Proceedings Article

Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger

TL;DR: This paper presents results for a maximum-entropy part-of-speech tagger that achieves superior performance principally by enriching the information sources used for tagging, incorporating features such as a more extensive treatment of capitalization for unknown words and features for disambiguating the tense forms of verbs.
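A hedged sketch of the general idea, not the paper's implementation: a logistic-regression (maximum-entropy) classifier over hand-crafted per-token features, including capitalization cues of the kind mentioned above. The feature names, toy training sentences, and use of scikit-learn are assumptions made for illustration.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(tokens, i):
    """Illustrative per-token features for a maxent-style tagger."""
    word = tokens[i]
    return {
        "word": word.lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "is_capitalized": word[0].isupper(),
        "mid_sentence_cap": word[0].isupper() and i > 0,  # capitalization cue useful for unknown words
        "suffix3": word[-3:].lower(),
    }

# Tiny made-up training data, just enough to show the pipeline running.
train_sents = [
    (["John", "walked", "home"], ["NNP", "VBD", "NN"]),
    (["She", "walks", "fast"], ["PRP", "VBZ", "RB"]),
]
X = [features(toks, i) for toks, tags in train_sents for i in range(len(toks))]
y = [tag for _, tags in train_sents for tag in tags]

tagger = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
tagger.fit(X, y)
print(tagger.predict([features(["Paris", "sleeps"], 0)]))
```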
Proceedings Article

Parsing Natural Scenes and Natural Language with Recursive Neural Networks

TL;DR: A max-margin structure prediction architecture based on recursive neural networks is introduced that can successfully recover recursive structure in both complex scene images and sentences.
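The sketch below illustrates the recursive composition and scoring step in spirit only: adjacent nodes are greedily merged, always choosing the highest-scoring pair. The random untrained weights, embedding size, and greedy strategy are illustrative assumptions, not the paper's trained model or its max-margin training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                         # embedding size (arbitrary)
W = rng.normal(scale=0.1, size=(d, 2 * d))    # composition weights (untrained)
w_score = rng.normal(scale=0.1, size=d)       # scoring vector (untrained)

def compose(left, right):
    """Compose two child vectors into a parent vector and score the merge."""
    parent = np.tanh(W @ np.concatenate([left, right]))
    return parent, float(w_score @ parent)

# Leaves: one random vector per word, standing in for learned embeddings.
nodes = [(word, rng.normal(size=d)) for word in ["the", "cat", "sat", "down"]]

while len(nodes) > 1:
    # Score every adjacent pair and greedily merge the best one.
    candidates = [(i, *compose(nodes[i][1], nodes[i + 1][1])) for i in range(len(nodes) - 1)]
    i, parent, score = max(candidates, key=lambda c: c[2])
    merged = (f"({nodes[i][0]} {nodes[i + 1][0]})", parent)
    nodes = nodes[:i] + [merged] + nodes[i + 2:]

print(nodes[0][0])   # bracketed tree produced by greedy merging
```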
Proceedings Article

Semantic Compositionality through Recursive Matrix-Vector Spaces

TL;DR: A recursive neural network model is introduced that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length, and that can learn the meaning of operators in propositional logic and natural language.
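A minimal sketch of the matrix-vector idea, under the assumption that each constituent carries both a vector and a matrix and that each operand's matrix modifies the other's vector during composition. All parameters and inputs here are random and untrained, so this only illustrates the data flow, not the learned model.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3                                          # representation size (arbitrary)
W = rng.normal(scale=0.1, size=(d, 2 * d))     # combines the two modified vectors
W_M = rng.normal(scale=0.1, size=(d, 2 * d))   # combines the two matrices

def make_node(word):
    """Each constituent carries a vector and a (near-identity) matrix."""
    return {"word": word,
            "vec": rng.normal(size=d),
            "mat": np.eye(d) + rng.normal(scale=0.1, size=(d, d))}

def compose(a, b):
    """Each matrix modifies the other constituent's vector, then both are combined."""
    vec = np.tanh(W @ np.concatenate([b["mat"] @ a["vec"], a["mat"] @ b["vec"]]))
    mat = W_M @ np.vstack([a["mat"], b["mat"]])
    return {"word": f"({a['word']} {b['word']})", "vec": vec, "mat": mat}

very, good = make_node("very"), make_node("good")
phrase = compose(very, good)
print(phrase["word"], phrase["vec"])
```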