Niki Parmar
Researcher at Google
Publications - 41
Citations - 66115
Niki Parmar is an academic researcher at Google. Her research focuses on the Transformer (machine learning model) and machine translation. She has an h-index of 22 and has co-authored 39 publications receiving 31763 citations. Previous affiliations of Niki Parmar include the University of Southern California.
Papers
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
Posted Content
Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: Proposes a new, simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely; it also generalizes well to other tasks, applied successfully to English constituency parsing with both large and limited training data.
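The attention mechanism these two entries describe is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V. A minimal NumPy sketch (the function name, toy dimensions, and random data are illustrative; the full model adds learned projections, multiple heads, and masking):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V.

    Q, K: (seq_len, d_k); V: (seq_len, d_v).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Toy self-attention: 4 positions, width 8, Q = K = V = x.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Because the softmax weights in each row sum to 1, every output position is a convex combination of the value vectors, which is what lets the model mix information across the whole sequence in a single layer.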
Posted Content
Conformer: Convolution-augmented Transformer for Speech Recognition
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang
TL;DR: This work proposes a convolution-augmented transformer for speech recognition, named Conformer, which significantly outperforms previous Transformer- and CNN-based models, achieving state-of-the-art accuracy.
Posted Content
Image Transformer
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Łukasz Kaiser, Noam Shazeer, Alexander Ku, Dustin Tran
TL;DR: In this article, self-attention is restricted to local neighborhoods, allowing the model to generate larger images while maintaining receptive fields per layer significantly larger than those of typical CNNs.
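The local-neighborhood idea can be sketched in 1-D (the function name, window size, and the omission of learned projections and 2-D blocks are my own simplifications of the paper's scheme):

```python
import numpy as np

def local_self_attention(x, window=3):
    """Each position attends only to positions within `window` steps of
    itself -- a 1-D analogue of the Image Transformer's local neighborhoods.
    x: (n_positions, d_channels).
    """
    n, d = x.shape
    out = np.empty_like(x)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neigh = x[lo:hi]                      # local neighborhood = keys/values
        scores = neigh @ x[i] / np.sqrt(d)    # scaled dot-product scores
        w = np.exp(scores - scores.max())
        w /= w.sum()                          # softmax over the window only
        out[i] = w @ neigh                    # weighted average of neighbors
    return out

# Toy example: a row of 16 "pixels" with 8 channels each.
rng = np.random.default_rng(1)
img_row = rng.normal(size=(16, 8))
y = local_self_attention(img_row, window=2)
print(y.shape)  # (16, 8)
```

Restricting attention to a fixed window makes the per-position cost independent of image size, which is what makes attention over larger images tractable.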
Proceedings ArticleDOI
Bottleneck Transformers for Visual Recognition
TL;DR: BoTNet incorporates self-attention for image classification, object detection, and instance segmentation, and achieves state-of-the-art performance on the ImageNet benchmark.