Yoshimasa Tsuruoka
Researcher at the University of Tokyo
Publications - 162
Citations - 6282
Yoshimasa Tsuruoka is an academic researcher at the University of Tokyo. His research focuses on topics including computer science and parsing. He has an h-index of 38 and has co-authored 147 publications receiving 5,723 citations. Previous affiliations of Yoshimasa Tsuruoka include the National Institute of Advanced Industrial Science and Technology and the Japan Advanced Institute of Science and Technology.
Papers
Proceedings Article
Introduction to the bio-entity recognition task at JNLPBA
TL;DR: The JNLPBA shared task of bio-entity recognition using an extended version of the GENIA version 3 named entity corpus of MEDLINE abstracts is described and a general discussion of the approaches taken by participating systems is presented.
Proceedings Article
A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks
TL;DR: The authors introduce a joint many-task model, together with a strategy for successively growing its depth to solve increasingly complex tasks, and use a simple regularization term so that all model weights can be optimized to improve one task's loss without causing catastrophic interference with the other tasks.
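The regularization idea summarized above can be illustrated with a minimal sketch. This is not the paper's implementation; `theta`, `theta_prev`, and `delta` are hypothetical names for the current shared parameters, their values after the previous training stage, and the penalty weight:

```python
def regularized_loss(task_loss, theta, theta_prev, delta=0.01):
    """Loss for the current task plus a successive-regularization penalty
    delta * ||theta - theta_prev||^2, which discourages shared parameters
    from drifting far from the values learned for earlier tasks."""
    penalty = delta * sum((t - p) ** 2 for t, p in zip(theta, theta_prev))
    return task_loss + penalty

# the penalty is zero when parameters have not moved, and grows with drift
print(regularized_loss(0.5, [0.0, 0.0], [0.0, 0.0]))   # 0.5
print(regularized_loss(0.5, [1.0, 1.0], [0.0, 0.0]))   # 0.52
```

In the paper this kind of penalty is applied when training a higher-level task, to keep lower-layer weights close to what earlier tasks learned.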
Book Chapter
Developing a robust part-of-speech tagger for biomedical text
Yoshimasa Tsuruoka, Yuka Tateishi, Jin-Dong Kim, Tomoko Ohta, John McNaught, Sophia Ananiadou, Jun'ichi Tsujii +6 more
TL;DR: Experimental results on the Wall Street Journal corpus, the GENIA corpus, and the PennBioIE corpus revealed that adding training data from a different domain does not hurt the performance of a tagger, and the authors' tagger exhibits very good precision on all these corpora.
Proceedings Article
Bidirectional Inference with the Easiest-First Strategy for Tagging Sequence Data
TL;DR: This paper presents a bidirectional inference algorithm for sequence labeling problems such as part-of-speech tagging, named entity recognition, and text chunking. The algorithm can enumerate all possible decomposition structures and find the highest-probability sequence, together with the corresponding decomposition structure, in polynomial time.
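The easiest-first strategy named in the title can be sketched as a greedy loop: repeatedly label the yet-unlabeled position whose classifier is most confident, so harder decisions can condition on the already-assigned labels of both neighbors. This is a toy sketch, not the paper's polynomial-time decoder; `score_fn` is a hypothetical stand-in for a locally trained classifier:

```python
def easiest_first_tag(tokens, score_fn):
    """Greedy easiest-first labeling: at each step, pick the unlabeled
    position with the highest-confidence tag and commit to it.

    score_fn(tokens, i, left_tag, right_tag) -> {tag: score} is an assumed
    interface; left_tag/right_tag are the neighbors' labels (None if unset).
    """
    labels = [None] * len(tokens)
    while None in labels:
        best = None  # (confidence, position, tag)
        for i, lab in enumerate(labels):
            if lab is not None:
                continue
            left = labels[i - 1] if i > 0 else None
            right = labels[i + 1] if i < len(labels) - 1 else None
            tag, conf = max(score_fn(tokens, i, left, right).items(),
                            key=lambda kv: kv[1])
            if best is None or conf > best[0]:
                best = (conf, i, tag)
        _, i, tag = best
        labels[i] = tag
    return labels

# toy classifier: confident "N" for capitalized words, otherwise leans "V";
# slightly more confident once a neighbor has been labeled
def toy_scores(tokens, i, left, right):
    bonus = 0.1 * ((left is not None) + (right is not None))
    if tokens[i][0].isupper():
        return {"N": 0.9 + bonus, "V": 0.1}
    return {"N": 0.3, "V": 0.6 + bonus}

print(easiest_first_tag(["Alice", "runs"], toy_scores))  # ['N', 'V']
```

Here "Alice" is tagged first (confidence 0.9), and the harder word "runs" is then tagged with its left neighbor's label available as a feature.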
Proceedings Article
Tree-to-Sequence Attentional Neural Machine Translation
TL;DR: This work proposes a novel end-to-end syntactic NMT model that extends a sequence-to-sequence model with source-side phrase structure; its attention mechanism enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence.
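The "softly aligning with phrases as well as words" idea amounts to attending over the concatenation of word-level and phrase-level encoder states. A minimal sketch, assuming plain dot-product scores (the paper's tree encoder and score function are more involved):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(decoder_state, word_states, phrase_states):
    """Attention over word-level and phrase-level encoder states together:
    the context vector mixes both granularities, so the decoder can softly
    align with phrases as well as words."""
    states = word_states + phrase_states  # one pool of attendable states
    scores = [sum(d * h for d, h in zip(decoder_state, s)) for s in states]
    weights = softmax(scores)             # normalized over words AND phrases
    dim = len(decoder_state)
    context = [sum(w * s[k] for w, s in zip(weights, states))
               for k in range(dim)]
    return context, weights

# two word states, one phrase state; weights span all three
ctx, w = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 1.0]])
print(len(w), round(sum(w), 6))  # 3 1.0
```

In the actual model the phrase states come from a tree-structured encoder over the source parse; here they are just extra vectors in the attention pool.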