Dan Jurafsky
Researcher at Stanford University
Publications - 348
Citations - 50756
Dan Jurafsky is an academic researcher at Stanford University who has contributed to research on language modeling and parsing. He has an h-index of 93 and has co-authored 344 publications receiving 44,536 citations. Previous affiliations of Dan Jurafsky include Carnegie Mellon University and the University of Colorado Boulder.
Papers
Posted Content
Improving Factual Completeness and Consistency of Image-to-Text Radiology Report Generation
TL;DR: In this paper, the authors introduce two simple rewards to promote factually complete and consistent radiology report generation: one that encourages the system to generate radiology-domain entities consistent with the reference, and one that uses natural language inference to ensure those entities are described in inferentially consistent ways.
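The first reward described above can be sketched as a set-overlap F1 between entities extracted from the generated and reference reports. This is a hedged illustration, not the paper's implementation: the function name `entity_match_reward` is an assumption, and the entity extraction step (e.g. a radiology NER model) is assumed to happen upstream.

```python
def entity_match_reward(generated_entities, reference_entities):
    """Sketch of an entity-overlap reward: F1 between the sets of
    clinical entities found in a generated report and its reference.
    Entity extraction itself is assumed and not shown here."""
    gen, ref = set(generated_entities), set(reference_entities)
    if not gen or not ref:
        return 0.0
    overlap = len(gen & ref)
    precision = overlap / len(gen)
    recall = overlap / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

In a reinforcement-learning setup such a score would be used as (part of) the reward signal when fine-tuning the report generator.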
Proceedings ArticleDOI
Hidden Conditional Random Fields for phone recognition
Yun-Hsuan Sung, Dan Jurafsky, +1 more
TL;DR: This work applies Hidden Conditional Random Fields (HCRFs) to the task of TIMIT phone recognition, and shows that HCRFs' ability to jointly optimize discriminative language models and acoustic models is a powerful property with important implications for speech recognition.
Proceedings Article
The Best Lexical Metric for Phrase-Based Statistical MT System Optimization
TL;DR: It is shown that human judges tend to prefer models trained on BLEU or NIST over those trained on edit-distance-based metrics such as TER or WER, and that optimizing for BLEU or NIST produces models that are more robust when evaluated by other metrics and that perform well in human judgments.
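For context on what is being optimized above, a minimal sentence-level BLEU can be sketched as clipped n-gram precisions combined by a geometric mean with a brevity penalty. This is a simplified illustration (with add-one smoothing so short sentences don't zero out), not the exact metric variant used in the paper.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: smoothed n-gram precisions
    (geometric mean) times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_grams, r_grams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((c_grams & r_grams).values())  # clipped matches
        total = max(sum(c_grams.values()), 1)
        # add-one smoothing keeps the geometric mean nonzero
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n
    # brevity penalty: punish candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(log_prec)
```

Minimum-error-rate training then tunes the MT system's feature weights to maximize such a corpus-level score.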
Proceedings Article
He Said, She Said: Gender in the ACL Anthology
Adam Vogel, Dan Jurafsky, +1 more
TL;DR: A fine-grained study of gender in the field of Natural Language Processing finds that women publish more on dialog, discourse, and sentiment, while men publish more on parsing, formal semantics, and finite-state models.
Proceedings Article
To Memorize or to Predict: Prominence labeling in Conversational Speech
Ani Nenkova, Jason Brenier, Anubha Kothari, Sasha Calhoun, Laura Whitton, David Beaver, Dan Jurafsky, +6 more
TL;DR: This paper examines a new feature, accent ratio, which captures how likely a word is to be realized as prominent, and suggests that carefully chosen lexicalized features can outperform less fine-grained features.
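The accent-ratio feature above can be sketched as the fraction of a word's training occurrences labeled prominent, backing off to a neutral value for words without reliable statistics. This is a hedged approximation: the function name and the raw count threshold are assumptions (the original formulation uses a binomial significance test rather than a simple minimum count).

```python
def accent_ratio(word, counts, default=0.5, min_count=5):
    """Simplified accent-ratio feature: the fraction of a word's
    occurrences labeled as prominent in training data.
    `counts` maps word -> (times_accented, total_occurrences).
    Rare/unseen words back off to a neutral default (an
    approximation of the paper's binomial significance test)."""
    accented, total = counts.get(word, (0, 0))
    if total < min_count:
        return default
    return accented / total
```

A prominence classifier would then use this ratio as one lexicalized feature alongside acoustic and contextual ones.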