Kyunghyun Cho
Researcher at New York University
Publications: 351
Citations: 116,609
Kyunghyun Cho is an academic researcher at New York University. He has contributed to research on topics including machine translation and recurrent neural networks. He has an h-index of 77 and has co-authored 316 publications receiving 94,919 citations. Previous affiliations of Kyunghyun Cho include Facebook and Université de Montréal.
Papers
Journal ArticleDOI
On integrating a language model into neural machine translation
TL;DR: This work combines scores from a neural language model, trained only on target-side monolingual data, with a neural machine translation model, and also fuses the hidden states of the two models; it obtains up to a 2 BLEU improvement over hierarchical and phrase-based baselines on a low-resource language pair, Turkish–English.
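The score combination described above can be illustrated with a minimal sketch of shallow fusion: the translation model's log-probability for each candidate token is combined with a weighted language-model log-probability. The function name, the weight `beta`, and the toy distributions below are illustrative assumptions, not the paper's implementation.

```python
import math

def shallow_fusion_score(tm_log_probs, lm_log_probs, beta=0.3):
    """Combine translation-model and language-model log-probabilities
    per candidate next token (a sketch of shallow fusion).

    beta weights the monolingual LM's contribution; both inputs map
    token -> log-probability over the same candidate set.
    """
    return {tok: tm_log_probs[tok] + beta * lm_log_probs[tok]
            for tok in tm_log_probs}

# Toy next-token distributions over a 3-word vocabulary (hypothetical values).
tm = {"cat": math.log(0.6), "dog": math.log(0.3), "the": math.log(0.1)}
lm = {"cat": math.log(0.2), "dog": math.log(0.5), "the": math.log(0.3)}

scores = shallow_fusion_score(tm, lm)
best = max(scores, key=scores.get)  # token chosen by the fused score
```

Deep fusion, also mentioned in the summary, instead concatenates the hidden states of the two models before the output layer rather than mixing their final scores.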
Proceedings ArticleDOI
Zero-shot transfer learning for event extraction
TL;DR: A transferable architecture of structural and compositional neural networks is designed to jointly represent and map event mentions and types into a shared semantic space; for each event mention, the event type that is semantically closest in this space can then be selected as its type.
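The type-selection step above reduces to a nearest-neighbor lookup in the shared space. The sketch below assumes cosine similarity and made-up two-dimensional embeddings; the real model learns these vectors with structural and compositional networks.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def closest_type(mention_vec, type_vecs):
    """Return the event type whose embedding is closest to the mention."""
    return max(type_vecs, key=lambda t: cosine(mention_vec, type_vecs[t]))

# Hypothetical embeddings in a shared semantic space.
types = {"Attack": [1.0, 0.1], "Transport": [0.1, 1.0]}
mention = [0.9, 0.2]  # embedding of an unseen mention, e.g. "ambushed"

predicted = closest_type(mention, types)
```

Because unseen event types only need an embedding, not labeled training examples, this lookup is what enables zero-shot transfer.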
Journal ArticleDOI
Segmentation of the Proximal Femur from MR Images using Deep Convolutional Neural Networks
Cem M. Deniz, Siyuan Xiang, R. Spencer Hallyburton, Arakua Welbeck, James S. Babb, Stephen Honig, Kyunghyun Cho, Gregory Chang +7 more
TL;DR: In this article, the authors presented an automatic proximal femur segmentation method based on deep convolutional neural networks (CNNs), which achieved a high Dice similarity score of 0.95.
Posted Content
Continual Learning via Neural Pruning
TL;DR: Continual Learning via Neural Pruning is introduced, a new method for lifelong learning in fixed-capacity models based on neuronal model sparsification; the concept of graceful forgetting is also formalized and incorporated.
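The sparsification idea can be sketched with simple magnitude-based pruning: small weights are zeroed so the freed capacity can be reused for later tasks. This is an illustrative stand-in; the paper's actual pruning criterion and freezing scheme are more involved.

```python
def prune_small_weights(weights, threshold=0.1):
    """Zero out weights below a magnitude threshold (a sketch of
    sparsification). In a continual-learning setting, the surviving
    weights would be frozen and the zeroed ones reused for new tasks.
    """
    return [w if abs(w) >= threshold else 0.0 for w in weights]

# Hypothetical weight values for one layer.
pruned = prune_small_weights([0.5, -0.03, 0.2, 0.07, -0.9])
```

Freezing the kept weights is what prevents catastrophic forgetting of earlier tasks, while the deliberate loss from pruning is the "graceful forgetting" the summary refers to.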
Proceedings ArticleDOI
Parallel tempering is efficient for learning restricted Boltzmann machines
TL;DR: This work proposes replacing contrastive divergence with parallel tempering, an advanced Monte Carlo method, for training restricted Boltzmann machines, and shows experimentally that it is efficient.
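In parallel tempering, several sampling chains run at different temperatures and periodically attempt to swap states; the Metropolis acceptance rule for such a swap is the core mechanism. The sketch below shows only that acceptance rule for two chains with energies `e_i`, `e_j` and inverse temperatures `beta_i`, `beta_j` (variable names are illustrative; a full RBM trainer is omitted).

```python
import math

def swap_accept_prob(e_i, e_j, beta_i, beta_j):
    """Metropolis acceptance probability for swapping the states of two
    tempered chains: min(1, exp((beta_i - beta_j) * (E_i - E_j))).

    Swapping lets states from hot (low-beta) chains, which mix easily,
    migrate into the cold (beta = 1) chain used for learning.
    """
    return min(1.0, math.exp((beta_i - beta_j) * (e_i - e_j)))

# The hot chain holding the lower-energy state makes the swap certain...
certain = swap_accept_prob(1.0, 2.0, 0.5, 1.0)
# ...while the reverse situation is accepted only with probability exp(-0.5).
unlikely = swap_accept_prob(2.0, 1.0, 0.5, 1.0)
```

Unlike contrastive divergence, whose short Gibbs chains can miss modes of the model distribution, these swaps keep the cold chain closer to a true sample, which is the efficiency gain the paper reports.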