Danqi Chen
Researcher at Princeton University
Publications - 81
Citations - 29658
Danqi Chen is an academic researcher from Princeton University who has contributed to research in computer science and question answering. The author has an h-index of 28 and has co-authored 55 publications receiving 16,626 citations. Previous affiliations of Danqi Chen include Tsinghua University and Stanford University.
Papers
Posted Content
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov
TL;DR: BERT is found to have been significantly undertrained; with an improved pretraining recipe it can match or exceed the performance of every model published after it, and the best model achieves state-of-the-art results on GLUE, RACE, and SQuAD.
Proceedings ArticleDOI
A Fast and Accurate Dependency Parser using Neural Networks
TL;DR: This work proposes a novel way of learning a neural-network classifier for use in a greedy, transition-based dependency parser that runs very fast while achieving about a 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets.
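The greedy, transition-based parsing loop can be sketched in a few lines. The sketch below uses the standard arc-standard transitions (SHIFT / LEFT-ARC / RIGHT-ARC); the `score_fn` parameter is a placeholder for the paper's neural-network classifier, and the fixed action sequence in the usage example is an illustrative assumption, not the trained model.

```python
# A minimal sketch (not the paper's implementation) of greedy arc-standard
# transition-based dependency parsing. `score_fn` stands in for the neural
# classifier that picks the next transition from stack/buffer features.

def parse(n_words, score_fn):
    """Parse a sentence of n_words tokens (numbered 1..n, with 0 as ROOT)."""
    stack, buf, arcs = [0], list(range(1, n_words + 1)), []
    while buf or len(stack) > 1:
        action = score_fn(stack, buf)
        if action == "SHIFT" and buf:
            stack.append(buf.pop(0))
        elif action == "LEFT-ARC" and len(stack) > 1:
            dep = stack.pop(-2)            # second-from-top becomes a dependent
            arcs.append((stack[-1], dep))  # (head, dependent)
        elif action == "RIGHT-ARC" and len(stack) > 1:
            dep = stack.pop()              # top becomes a dependent
            arcs.append((stack[-1], dep))
        else:
            break
    return arcs
```

For a two-word sentence, the sequence SHIFT, SHIFT, LEFT-ARC, RIGHT-ARC attaches word 1 under word 2 and word 2 under ROOT.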
Proceedings Article
Reasoning With Neural Tensor Networks for Knowledge Base Completion
TL;DR: An expressive neural tensor network suitable for reasoning over relationships between two entities given a subset of the knowledge base is introduced, and performance improves further when entities are represented as the average of their constituent word vectors.
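The paper's per-relation scoring function has the form g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b), combining a bilinear tensor term with a standard linear layer. A minimal sketch of that score, with randomly shaped placeholder parameters rather than trained ones:

```python
# A minimal sketch of the Neural Tensor Network scoring function
# g(e1, R, e2) = u^T tanh(e1^T W^[1:k] e2 + V [e1; e2] + b).
# Parameter shapes follow the paper; values here are placeholders.
import numpy as np

def ntn_score(e1, e2, W, V, u, b):
    """e1, e2: (d,) entity vectors; W: (k, d, d) tensor slices;
    V: (k, 2d) linear weights; u: (k,) output weights; b: (k,) bias."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)  # e1^T W[slice] e2, per slice
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))
```

With all parameters at zero the score is exactly 0, which is a convenient sanity check on the shapes.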
Posted Content
Reading Wikipedia to Answer Open-Domain Questions
TL;DR: In this paper, a multi-layer recurrent neural network model is proposed to detect answer spans in Wikipedia paragraphs; it is combined with a search component based on bigram hashing and TF-IDF matching to answer open-domain questions.
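The retrieval side of this idea, ranking documents by TF-IDF-weighted bigram overlap with bigrams hashed into a fixed-size space, can be sketched briefly. The hashing scheme and the toy scoring below are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of bigram-hashing + TF-IDF document ranking.
# Bigrams are hashed into a fixed-size id space; documents are ranked
# by the TF-IDF weight of query bigrams they contain (illustrative only).
import math
from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return [hash((a, b)) % (1 << 24) for a, b in zip(toks, toks[1:])]

def rank(query, docs):
    """Return the index of the best-matching document."""
    doc_grams = [Counter(bigrams(d)) for d in docs]
    n = len(docs)
    df = Counter(g for c in doc_grams for g in set(c))  # document frequency
    scores = []
    for c in doc_grams:
        s = sum(c[g] * math.log((n + 1) / (df[g] + 1))  # tf * smoothed idf
                for g in bigrams(query))
        scores.append(s)
    return max(range(n), key=lambda i: scores[i])
```

A query like "capital of france" should rank a document containing that phrase above an unrelated one, since its bigrams occur there with positive TF-IDF weight.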
Posted Content
Dense Passage Retrieval for Open-Domain Question Answering
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick S. H. Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih
TL;DR: This work shows that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework.
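The dual-encoder idea, embedding questions and passages independently and ranking by dot-product similarity, can be sketched as follows. The hashed bag-of-words "encoder" here is an illustrative stand-in for the paper's learned BERT encoders.

```python
# A minimal sketch of dual-encoder dense retrieval: questions and passages
# are mapped independently into one vector space, and relevance is the
# dot product. The toy encoder below is an illustrative assumption.
import numpy as np

DIM = 256  # size of the shared embedding space (illustrative)

def encode(text):
    """Hashed bag-of-words embedding, L2-normalized."""
    v = np.zeros(DIM)
    for tok in text.lower().split():
        v[hash(tok) % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(question, passages):
    q = encode(question)
    P = np.stack([encode(p) for p in passages])
    return int(np.argmax(P @ q))  # index of the highest-scoring passage
```

Because the two sides are encoded independently, passage embeddings can be precomputed and indexed offline, which is what makes the dense approach practical at scale.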