
Alan Ritter

Researcher at Georgia Institute of Technology

Publications - 91
Citations - 11,930

Alan Ritter is an academic researcher at the Georgia Institute of Technology. He has contributed to research on topics including named-entity recognition and relationship extraction. He has an h-index of 36 and has co-authored 91 publications receiving 10,647 citations. Previous affiliations of Alan Ritter include Carnegie Mellon University and Ohio State University.

Papers
Proceedings Article

Named Entity Recognition in Tweets: An Experimental Study

TL;DR: The novel T-NER system doubles the F1 score compared with the Stanford NER system, leveraging the redundancy inherent in tweets and using LabeledLDA to exploit Freebase dictionaries as a source of distant supervision.
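A minimal sketch of the dictionary-based distant-supervision idea mentioned above, assuming a small hand-built gazetteer in place of the Freebase dictionaries; the gazetteer entries, the example tweet, and the function names are invented for illustration, and the actual T-NER system models entity types with LabeledLDA rather than exact span matching.

```python
# Hypothetical gazetteer mapping surface forms to candidate entity types,
# standing in for Freebase dictionaries (invented example data).
GAZETTEER = {
    "atlanta": {"LOCATION"},
    "georgia tech": {"ORGANIZATION"},
    "alan ritter": {"PERSON"},
}

def distant_labels(tokens, max_len=3):
    """Assign candidate types to token spans that match the gazetteer."""
    labels = []
    n = len(tokens)
    for start in range(n):
        for end in range(start + 1, min(start + max_len, n) + 1):
            span = " ".join(tokens[start:end]).lower()
            if span in GAZETTEER:
                labels.append((start, end, GAZETTEER[span]))
    return labels

tweet = "Heading to Georgia Tech in Atlanta tomorrow".split()
print(distant_labels(tweet))
# [(2, 4, {'ORGANIZATION'}), (5, 6, {'LOCATION'})]
```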
Proceedings ArticleDOI

Deep Reinforcement Learning for Dialogue Generation

TL;DR: This work simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering.
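A toy sketch of the composite-reward idea, assuming three simplified scoring functions for ease of answering, informativity, and coherence; the lexical-overlap heuristics, the weights, and the function names here are placeholders and not the paper's seq2seq-based reward formulations.

```python
def informativity(turn, previous_turn):
    """Penalize turns that repeat the previous turn (non-repetition proxy)."""
    prev, cur = set(previous_turn.split()), set(turn.split())
    return 1.0 - len(prev & cur) / max(len(cur), 1)

def ease_of_answering(turn, dull_responses=("i don't know",)):
    """Reward turns that are not dull, dead-end responses (invented dull list)."""
    return 0.0 if turn.lower() in dull_responses else 1.0

def coherence(turn, previous_turn):
    """Crude overlap stand-in for the paper's mutual-information score."""
    prev, cur = set(previous_turn.split()), set(turn.split())
    return len(prev & cur) / max(len(prev | cur), 1)

def reward(turn, previous_turn, weights=(0.4, 0.3, 0.3)):
    # Weights are illustrative; the combined reward would scale a
    # policy-gradient (REINFORCE) update of the dialogue generator.
    w1, w2, w3 = weights
    return (w1 * ease_of_answering(turn)
            + w2 * informativity(turn, previous_turn)
            + w3 * coherence(turn, previous_turn))

print(reward("let's grab coffee near campus", "are you free this afternoon?"))
```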
Proceedings ArticleDOI

SemEval-2016 Task 4: Sentiment Analysis in Twitter

TL;DR: SemEval-2016 Task 4, the fourth year of the Sentiment Analysis in Twitter task, comprises five subtasks, three of which represent a significant departure from previous editions; the three new subtasks focus on two variants of the basic sentiment classification in Twitter task.
Proceedings Article

Data-Driven Response Generation in Social Media

TL;DR: It is found that mapping conversational stimuli onto responses is more difficult than translating between languages, due to the wider range of possible responses, the larger fraction of unaligned words/phrases, and the presence of large phrase pairs whose alignment cannot be further decomposed.
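A toy illustration of the translation analogy behind the finding above: response generation is treated like phrase-based translation from a stimulus to a response via a phrase table. The phrase pairs, scores, and function name are invented; the paper learns such mappings from large collections of Twitter conversation pairs.

```python
# Invented phrase table of (response, score) candidates per stimulus phrase.
PHRASE_TABLE = {
    "how are you": [("i'm good thanks", 0.6), ("doing fine", 0.4)],
    "good morning": [("morning!", 0.7), ("hey, good morning", 0.3)],
}

def respond(stimulus):
    """Pick the highest-scoring response phrase for the stimulus, if any."""
    candidates = PHRASE_TABLE.get(stimulus.lower().strip("?!. "), [])
    if not candidates:
        return None  # many stimuli have no aligned response, per the paper's finding
    return max(candidates, key=lambda pair: pair[1])[0]

print(respond("How are you?"))  # i'm good thanks
```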
Posted Content

Adversarial Learning for Neural Dialogue Generation

TL;DR: This paper proposes adversarial training for open-domain dialogue generation: the generator is trained to produce sequences that are indistinguishable from human-generated dialogue utterances, and the discriminator's outputs are used as rewards for the generator.
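A schematic sketch of the adversarial-reward loop described above, assuming placeholder callables for the generator and discriminator; the sampled responses, probabilities, and function names are invented, and the paper itself uses seq2seq generators with a neural discriminator and REINFORCE-style updates.

```python
import random

def generator_sample(context):
    """Placeholder: sample a candidate response for a dialogue context."""
    return random.choice(["sounds good", "i don't know", "see you at noon"])

def discriminator_prob_human(context, response):
    """Placeholder: probability the (context, response) pair is human-generated."""
    return 0.9 if response != "i don't know" else 0.1

def policy_gradient_step(context, baseline=0.5):
    response = generator_sample(context)
    reward = discriminator_prob_human(context, response)
    advantage = reward - baseline  # discriminator output serves as the reward
    # A real implementation would scale grad(log p(response | context)) by
    # this advantage; here we only report the quantities involved.
    return response, reward, advantage

print(policy_gradient_step("want to get lunch tomorrow?"))
```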