
Prakhar Gupta

Researcher at École Polytechnique Fédérale de Lausanne

Publications - 13
Citations - 2001

Prakhar Gupta is an academic researcher from École Polytechnique Fédérale de Lausanne. The author has contributed to research on topics including sentence and word embeddings. The author has an h-index of 6 and has co-authored 12 publications receiving 1587 citations.

Papers
Proceedings Article

Learning Word Vectors for 157 Languages

TL;DR: This article used two sources of data to train these models, the free online encyclopedia Wikipedia and data from the Common Crawl project, and introduced three new word analogy datasets, for French, Hindi and Polish, to evaluate these word vectors.
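The pretrained vectors from this work are distributed through the fastText library. Below is a minimal sketch of loading and querying one of the released models; the model file name and download location are assumptions based on the standard fastText distribution, not details from this page.

```python
# Minimal sketch: querying pretrained per-language word vectors.
# Assumes the `fasttext` Python package and a locally downloaded model
# file (e.g. cc.fr.300.bin from fasttext.cc); the file name is an assumption.
import fasttext

model = fasttext.load_model("cc.fr.300.bin")  # French vectors, 300 dimensions

vec = model.get_word_vector("bonjour")        # numpy array of shape (300,)
print(vec.shape)

# Subword n-grams let the model produce vectors even for words
# that never appeared in the training corpus.
print(model.get_nearest_neighbors("bonjour", k=5))
```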
Proceedings Article

Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features

TL;DR: This work presents a simple but efficient unsupervised objective to train distributed representations of sentences, which outperforms the state-of-the-art unsupervised models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings.
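A minimal, self-contained sketch of the compositional idea: the sentence embedding is the average of the vectors of the sentence's unigrams and its contiguous n-grams. The random vectors and helper names here are illustrative stand-ins, not the paper's trained model.

```python
# Illustrative sketch of compositional n-gram sentence embeddings:
# average the vectors of all unigrams and contiguous bigrams.
# Vectors are random stand-ins; the paper learns them with an
# unsupervised training objective.
import numpy as np

DIM = 100
rng = np.random.default_rng(0)
lookup = {}  # token or n-gram -> vector

def vector(key):
    # Hypothetical lookup; a real model would return learned embeddings.
    if key not in lookup:
        lookup[key] = rng.normal(size=DIM)
    return lookup[key]

def sentence_embedding(sentence, n=2):
    tokens = sentence.lower().split()
    features = list(tokens)
    # Add contiguous n-grams (the "compositional n-gram features").
    features += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return np.mean([vector(f) for f in features], axis=0)

emb = sentence_embedding("the cat sat on the mat")
print(emb.shape)  # (100,)
```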
Posted Content

Learning Word Vectors for 157 Languages

TL;DR: This paper describes how high-quality word representations for 157 languages were trained on the free online encyclopedia Wikipedia and data from the Common Crawl project, and introduces three new word analogy datasets to evaluate these word vectors.
Proceedings Article

Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features

TL;DR: This article presented a simple but efficient unsupervised objective to train distributed representations of sentences, which outperformed the state-of-the-art models on most benchmark tasks, highlighting the robustness of the produced general-purpose sentence embeddings.
Proceedings Article

Better Word Embeddings by Disentangling Contextual n-Gram Information

TL;DR: This paper claims that training word embeddings along with higher-order n-gram embeddings helps remove contextual information from the unigrams, resulting in better stand-alone word embeddings, and empirically shows the validity of this hypothesis.
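A toy sketch of the disentangling hypothesis: during training, each context is represented by unigram plus bigram vectors, so the bigram table absorbs phrase-level contextual signal and the exported unigram vectors stay cleaner. The update rule and all names here are hypothetical, for illustration only.

```python
# Toy sketch: contexts are represented by unigram AND bigram vectors,
# so bigrams soak up contextual information during training; only the
# cleaner unigram table would be kept as the final word embeddings.
# The squared-error update is an illustrative stand-in, not the paper's loss.
import numpy as np

DIM, LR = 50, 0.05
rng = np.random.default_rng(1)
uni, bi = {}, {}  # unigram and bigram embedding tables

def vec(table, key):
    if key not in table:
        table[key] = rng.normal(scale=0.1, size=DIM)
    return table[key]

def train_step(context_tokens, target_vec):
    # Context = mean of unigram vectors and contiguous bigram vectors.
    feats = [(uni, t) for t in context_tokens]
    feats += [(bi, " ".join(p)) for p in zip(context_tokens, context_tokens[1:])]
    ctx = np.mean([vec(tab, k) for tab, k in feats], axis=0)
    grad = ctx - target_vec              # gradient of 0.5 * ||ctx - target||^2
    for tab, k in feats:                 # each feature gets its share of the update
        tab[k] -= LR * grad / len(feats)

train_step(["new", "york", "city"], rng.normal(size=DIM))
# After training, only `uni` would be exported as the word embeddings.
```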