Ikuya Yamada

Researcher at Keio University

Publications: 39
Citations: 1729

Ikuya Yamada is an academic researcher from Keio University. His research focuses on question answering and entity linking. He has an h-index of 14 and has co-authored 39 publications receiving 1105 citations. His previous affiliations include the National Institute of Informatics.

Papers
Posted Content

LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

TL;DR: Proposes new pretrained contextualized representations of words and entities based on the bidirectional transformer, together with an entity-aware self-attention mechanism that considers the types of tokens (words or entities) when computing attention scores.
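
To make the mechanism concrete, here is a minimal NumPy sketch of entity-aware self-attention, assuming only the idea summarized above (a separate query projection for each pair of token types, with shared keys and values); the function and variable names are hypothetical, not the authors' implementation.

    import numpy as np

    def entity_aware_attention(x, token_types, W_q, W_k, W_v):
        # x: (n, d) token representations; token_types: (n,) 0 = word, 1 = entity.
        # W_q: dict mapping (query_type, key_type) -> (d, d_h) query matrix,
        # i.e. separate projections for word-word, word-entity, entity-word,
        # and entity-entity pairs. W_k, W_v: shared (d, d_h) key/value matrices.
        n, d_h = x.shape[0], W_k.shape[1]
        K, V = x @ W_k, x @ W_v
        scores = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                # The query projection depends on the types of both tokens.
                q = x[i] @ W_q[(token_types[i], token_types[j])]
                scores[i, j] = q @ K[j] / np.sqrt(d_h)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
        return weights @ V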
Proceedings Article

Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation

TL;DR: In this paper, the authors propose a novel embedding method for NED that jointly maps words and entities into the same continuous vector space using the skip-gram model and an anchor context model, achieving a state-of-the-art accuracy of 93.1% on the standard CoNLL dataset.
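
As a toy illustration of why a shared vector space helps NED, the sketch below scores candidate entities for a mention by cosine similarity against the averaged context word vectors; the random vectors stand in for embeddings trained with the objectives above, and the paper's full system combines such similarity with further features.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 4
    # Toy stand-ins for trained word and entity embeddings in one space.
    words = {w: rng.standard_normal(dim) for w in ("fruit", "pie", "iphone")}
    entities = {e: rng.standard_normal(dim) for e in ("Apple_Inc.", "Apple_(fruit)")}

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    def disambiguate(context, candidates):
        # Average the context word vectors, then pick the most similar entity.
        ctx = np.mean([words[w] for w in context], axis=0)
        return max(candidates, key=lambda e: cosine(entities[e], ctx))

    print(disambiguate(["fruit", "pie"], ["Apple_Inc.", "Apple_(fruit)"]))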
Posted Content

Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation

TL;DR: A novel embedding method designed specifically for NED that jointly maps words and entities into the same continuous vector space by extending the skip-gram model with two additional models.
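
One of these extensions, the anchor context model, can be pictured as generating skip-gram-style training pairs from Wikipedia anchors, with each anchor linking an entity to its surrounding words. A hypothetical sketch of that pair generation:

    def anchor_context_pairs(tokens, anchors, window=2):
        # tokens: list of words in a sentence.
        # anchors: {token position: linked entity title}.
        # Yields (entity, context word) pairs, analogous to the
        # (word, context word) pairs of the skip-gram model.
        pairs = []
        for pos, entity in anchors.items():
            lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
            pairs += [(entity, tokens[j]) for j in range(lo, hi) if j != pos]
        return pairs

    tokens = "the fresh apple was rich in fiber".split()
    print(anchor_context_pairs(tokens, {2: "Apple_(fruit)"}))
    # [('Apple_(fruit)', 'the'), ('Apple_(fruit)', 'fresh'),
    #  ('Apple_(fruit)', 'was'), ('Apple_(fruit)', 'rich')]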
Proceedings Article

LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

TL;DR: This article proposes new pretrained contextualized representations of words and entities based on the bidirectional transformer; the model treats words and entities in a given text as independent tokens and outputs contextualized representations of them.
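
LUKE checkpoints are distributed through Hugging Face Transformers; the brief usage sketch below shows how words and entity mentions are encoded as separate token sequences (studio-ousia/luke-base is the authors' published checkpoint, while the example text and spans are arbitrary).

    from transformers import LukeModel, LukeTokenizer

    tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
    model = LukeModel.from_pretrained("studio-ousia/luke-base")

    text = "Beyoncé lives in Los Angeles."
    entity_spans = [(0, 7), (17, 28)]  # character spans of the two mentions

    inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
    outputs = model(**inputs)

    # Contextualized representations come out as two sequences:
    word_states = outputs.last_hidden_state            # word tokens
    entity_states = outputs.entity_last_hidden_state   # entity tokens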
Proceedings Article

Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from Wikipedia

TL;DR: Presents Wikipedia2Vec, a Python-based open-source tool for learning the embeddings of words and entities from Wikipedia. It achieves a state-of-the-art result on the KORE entity relatedness dataset and competitive results on various standard benchmark datasets.
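
For reference, the toolkit's documented interface looks like the sketch below; the pretrained model file name is a placeholder for one of the files distributed on the project site.

    from wikipedia2vec import Wikipedia2Vec

    # Load a pretrained model (placeholder file name).
    wiki2vec = Wikipedia2Vec.load("enwiki_20180420_100d.pkl")

    # Words and entities share one embedding space.
    word_vec = wiki2vec.get_word_vector("scientist")
    entity_vec = wiki2vec.get_entity_vector("Scarlett Johansson")

    # Nearest neighbors of an entity (may mix words and entities).
    for item, score in wiki2vec.most_similar(wiki2vec.get_entity("Scarlett Johansson"), 5):
        print(item, score)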