Hiroyuki Shindo
Researcher at Nara Institute of Science and Technology
Publications - 116
Citations - 2398
Hiroyuki Shindo is an academic researcher from Nara Institute of Science and Technology. The author has contributed to research topics including Sentence and Parsing. The author has an h-index of 20 and has co-authored 111 publications receiving 1,675 citations. Previous affiliations of Hiroyuki Shindo include Hitachi and Nippon Telegraph and Telephone.
Papers
Posted Content
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
TL;DR: Proposes new pretrained contextualized representations of words and entities based on the bidirectional transformer, together with an entity-aware self-attention mechanism that considers the types of tokens (words or entities) when computing attention scores.
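The entity-aware self-attention summarized above can be sketched in a few lines. This is a minimal single-head NumPy sketch, not the paper's implementation: it assumes the core idea that the query projection is selected per (query-token-type, key-token-type) pair, while keys and values are shared. All names (`entity_aware_attention`, the `params` layout) are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def entity_aware_attention(x, token_types, params):
    """Single-head attention where the query matrix depends on the
    (query type, key type) pair -- 0 for a word token, 1 for an entity
    token -- which is the gist of entity-aware self-attention."""
    n, d = x.shape
    K = x @ params["W_k"]  # shared key projection
    V = x @ params["W_v"]  # shared value projection
    scores = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # pick the query projection for this token-type pair
            W_q = params["W_q"][(token_types[i], token_types[j])]
            scores[i, j] = (x[i] @ W_q) @ K[j] / np.sqrt(d)
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
d = 4
params = {
    "W_k": rng.standard_normal((d, d)),
    "W_v": rng.standard_normal((d, d)),
    "W_q": {(a, b): rng.standard_normal((d, d))
            for a in (0, 1) for b in (0, 1)},
}
x = rng.standard_normal((3, d))          # two words followed by one entity
out = entity_aware_attention(x, [0, 0, 1], params)
```

A standard transformer would use one query matrix everywhere; here the four `W_q` matrices let the model attend differently across word-word, word-entity, entity-word, and entity-entity pairs.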
Proceedings ArticleDOI
Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation
TL;DR: In this paper, the authors proposed a novel embedding method for NED, which jointly maps words and entities into the same continuous vector space by using skip-gram model and anchor context model, and achieved state-of-the-art accuracy of 93.1% on the standard CoNLL dataset.
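The joint mapping of words and entities into one vector space rests on skip-gram with negative sampling, where an entity's anchor contexts are treated like a word's surrounding words. The sketch below is a hypothetical simplification of that idea (one SGD update, one shared embedding table for both words and entities), not the authors' code; `sgns_step` and the id layout are assumptions.

```python
import numpy as np

def sgns_step(emb, ctx, target, context, negatives, lr=0.025):
    """One skip-gram-with-negative-sampling update. `target` may be a
    word id or an entity id: both index the same embedding table, which
    is what aligns the two spaces when entities predict the words of
    their anchor contexts."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    v = emb[target].copy()
    grad_v = np.zeros_like(v)
    for c, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        g = lr * (label - sigmoid(v @ ctx[c]))  # push positives closer,
        grad_v += g * ctx[c]                    # negatives apart
        ctx[c] += g * v
    emb[target] += grad_v

# toy run: ids 0..4 cover both words and entities in one table
rng = np.random.default_rng(1)
emb = rng.standard_normal((5, 3)) * 0.1
ctx = rng.standard_normal((5, 3)) * 0.1
before = emb[0] @ ctx[1]
for _ in range(50):
    sgns_step(emb, ctx, target=0, context=1, negatives=[2, 3])
after = emb[0] @ ctx[1]
```

Repeated positive updates raise the score between id 0 and its context id 1, which is the alignment pressure that places a linked entity near the words it co-occurs with.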
Posted Content
Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation
TL;DR: A novel embedding method designed specifically for NED that jointly maps words and entities into the same continuous vector space, extending the skip-gram model with two additional models.
Proceedings ArticleDOI
Interpretable Adversarial Perturbation in Input Embedding Space for Text
TL;DR: This paper restores interpretability to adversarial training methods by restricting the directions of perturbations toward existing words in the input embedding space; each perturbed input can then be straightforwardly reconstructed as an actual text by treating the perturbation as a word replacement in the sentence, while maintaining or even improving task performance.
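Restricting perturbation directions toward existing words can be illustrated with a small selection rule: instead of moving an embedding along the raw loss gradient, pick the vocabulary word whose direction from the current embedding best aligns with that gradient, so the perturbation reads as a word substitution. This is a hedged sketch of the idea, not the paper's method; `word_direction_perturbation` and its scoring are assumptions.

```python
import numpy as np

def word_direction_perturbation(emb_matrix, word_vec, grad):
    """Return the index of the vocabulary word whose direction from
    `word_vec` best aligns (cosine-wise) with the loss gradient, so the
    adversarial step is interpretable as replacing the current word."""
    diffs = emb_matrix - word_vec                 # direction toward each word
    norms = np.linalg.norm(diffs, axis=1) + 1e-8  # avoid div-by-zero at self
    scores = (diffs @ grad) / norms               # alignment with gradient
    return int(np.argmax(scores))

# toy vocabulary of three 2-d word embeddings
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 0.0]])
idx = word_direction_perturbation(E, np.zeros(2), np.array([1.0, 0.0]))
```

With the gradient pointing along the x-axis, the word at `[1, 0]` is selected, so the "perturbation" is exactly a hop to a real word rather than an arbitrary, uninterpretable offset.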
Proceedings ArticleDOI
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
TL;DR: This paper proposes new pretrained contextualized representations of words and entities based on the bidirectional transformer, which treats words and entities in a given text as independent tokens and outputs contextualized representations of them.