Daisuke Kawahara
Researcher at Hiroshima University
Publications - 217
Citations - 2892
Daisuke Kawahara is an academic researcher at Hiroshima University. The author has contributed to research in topics: Computer science & Medicine. The author has an h-index of 25 and has co-authored 178 publications receiving 2,543 citations. Previous affiliations of Daisuke Kawahara include Yamagata University & Waseda University.
Papers
Proceedings ArticleDOI
The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages
Jan Hajič, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Antònia Martí, Lluís Màrquez, Adam Meyers, Joakim Nivre, Sebastian Padó, Jan Štěpánek, Pavel Straňák, Mihai Surdeanu, Nianwen Xue, Yi Zhang +13 more
TL;DR: This shared task combines the shared tasks of the previous five years under a unified dependency-based formalism similar to the 2008 task; the paper describes how the data sets were created and shows their quantitative properties.
Proceedings ArticleDOI
A Fully-Lexicalized Probabilistic Model for Japanese Syntactic and Case Structure Analysis
Daisuke Kawahara,Sadao Kurohashi +1 more
TL;DR: This paper proposed an integrated probabilistic model for Japanese syntactic and case structure analysis, which selects the syntactic and case structures that have the highest generative probability for each sentence.
Proceedings Article
Case Frame Compilation from the Web using High-Performance Computing
Daisuke Kawahara,Sadao Kurohashi +1 more
TL;DR: A very large text corpus is built from the web, and case frames are constructed from it; the resulting case frames cover most common usage examples and can be applied to a wide range of NLP analyses and applications.
Journal ArticleDOI
A Fully-Lexicalized Probabilistic Model for Japanese Syntactic and Case Structure Analysis
Daisuke Kawahara,Sadao Kurohashi +1 more
TL;DR: Experiments on web text showed improved dependency-parsing accuracy, particularly for dependencies related to predicate-argument structure.
Proceedings ArticleDOI
Morphological Analysis for Unsegmented Languages using Recurrent Neural Network Language Model
TL;DR: A new morphological analysis model that considers the semantic plausibility of word sequences by using a recurrent neural network language model (RNNLM); in experiments on two Japanese corpora, the proposed model significantly outperformed baseline models.