
Yun-Nung Chen

Researcher at National Taiwan University

Publications -  162
Citations -  4622

Yun-Nung Chen is an academic researcher at National Taiwan University whose work focuses on spoken language and natural language understanding. The author has an h-index of 29 and has co-authored 153 publications receiving 3,595 citations. Previous affiliations of Yun-Nung Chen include Microsoft and Carnegie Mellon University.

Papers
Proceedings ArticleDOI

Multi-Domain Joint Semantic Frame Parsing Using Bi-Directional RNN-LSTM.

TL;DR: Experimental results show that a holistic multi-domain, multi-task modeling approach, which estimates complete semantic frames for all user utterances addressed to a conversational system, outperforms alternative methods based on single-domain, single-task deep learning.
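The joint idea can be sketched minimally: a shared bi-directional encoder produces per-token states that feed both a slot tagger and an utterance-level intent classifier. The numpy sketch below uses illustrative random weights and toy dimensions, not the paper's RNN-LSTM configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d_in, d_h = 5, 8, 6        # tokens, input dim, hidden dim per direction
n_slots, n_intents = 4, 3     # illustrative label-set sizes

x = rng.normal(size=(T, d_in))          # embedded utterance (toy values)

def rnn_pass(inputs, W, U):
    """Simple tanh RNN over a sequence; returns all hidden states."""
    h = np.zeros(d_h)
    out = []
    for t in range(inputs.shape[0]):
        h = np.tanh(inputs[t] @ W + h @ U)
        out.append(h)
    return np.stack(out)

W_f, U_f = rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_h))
W_b, U_b = rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_h))

# Bi-directional encoding: run forward and backward, concatenate per token.
h_fwd = rnn_pass(x, W_f, U_f)
h_bwd = rnn_pass(x[::-1], W_b, U_b)[::-1]
h = np.concatenate([h_fwd, h_bwd], axis=1)      # (T, 2*d_h), shared encoder

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Both task heads read the same encoder states -- the "joint" part.
W_slot = rng.normal(size=(2 * d_h, n_slots))
W_int = rng.normal(size=(2 * d_h, n_intents))

slot_probs = softmax(h @ W_slot)        # per-token slot distribution
intent_probs = softmax(h[-1] @ W_int)   # intent from the final state

print(slot_probs.shape, intent_probs.shape)
```

Training both heads against a shared encoder is what lets supervision from one task regularize the other across domains.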
Proceedings ArticleDOI

Slot-gated modeling for joint slot filling and intent prediction

TL;DR: A slot gate is proposed that learns the relationship between the intent and slot attention vectors in order to obtain better semantic frame results through global optimization.
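The gating mechanism can be sketched as a scalar per token measuring how well the intent context aligns with that token's slot context, which then modulates the slot features. The parameters and dimensions below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 5, 6                       # tokens, hidden size (illustrative)

h = rng.normal(size=(T, d))       # encoder states
c_slot = rng.normal(size=(T, d))  # per-token slot attention contexts
c_intent = rng.normal(size=d)     # utterance-level intent context

# Slot gate: one scalar per token scoring the agreement between the
# intent context and that token's slot context (illustrative weights).
W = rng.normal(size=(d, d))
v = rng.normal(size=d)

g = np.tanh(c_slot + c_intent @ W) @ v   # (T,) gate value per token

# Gated slot features: intent information modulates the slot contexts
# before slot labels are predicted.
slot_features = h + g[:, None] * c_slot
print(slot_features.shape)
```

Because the gate ties slot prediction to the intent representation, errors in either task propagate through a single joint objective rather than two independent ones.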
Proceedings ArticleDOI

Towards End-to-End Reinforcement Learning of Dialogue Agents for Information Access

TL;DR: The authors proposed KBInfoBot, a multi-turn dialogue agent which helps users search Knowledge Bases (KBs) without composing complicated queries, by replacing symbolic queries with an induced "soft" posterior distribution over the KB that indicates which entities the user is interested in.
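The "soft" lookup can be illustrated with a toy knowledge base: instead of a hard symbolic filter, the agent keeps a belief distribution over attribute values and scores each entity by the belief mass on its value, yielding a normalized (and differentiable) posterior over entities. All values below are invented for illustration.

```python
import numpy as np

# Toy KB: one attribute, "genre", with a genre id per entity.
kb_genres = np.array([0, 0, 1, 2, 1])

# Instead of a hard query ("genre == 0"), keep the language-understanding
# output as a belief distribution over attribute values (assumed values).
p_genre = np.array([0.7, 0.2, 0.1])

# Soft lookup: each entity's score is the belief mass on its attribute
# value; normalizing gives a posterior over entities.
scores = p_genre[kb_genres]
posterior = scores / scores.sum()
print(posterior)
```

Entities whose attributes match the user's likely constraint receive most of the posterior mass, which is what lets the agent rank results without ever composing a symbolic query.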
Posted Content

End-to-End Task-Completion Neural Dialogue Systems

TL;DR: The end-to-end system not only outperforms modularized dialogue system baselines in both objective and subjective evaluation, but is also robust to noise, as demonstrated by systematic experiments with different error granularities and rates specific to the language understanding module.
Proceedings ArticleDOI

End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding.

TL;DR: Experiments on Microsoft Cortana conversational data show that the proposed memory network architecture can effectively extract salient semantics for modeling knowledge carryover in multi-turn conversations, outperforming a state-of-the-art recurrent neural network (RNN) approach designed for single-turn SLU.
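The carryover idea reduces to attention over a memory of encoded previous turns, queried by the current utterance. The sketch below uses toy random encodings; the encoder, memory size, and combination step are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8                                    # encoding size (illustrative)

history = rng.normal(size=(3, d))        # memory: encoded previous turns
current = rng.normal(size=d)             # encoding of the current utterance

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Attend over the dialogue history with the current utterance as the query.
attn = softmax(history @ current)        # relevance of each past turn
carryover = attn @ history               # weighted summary of the history

# Downstream tagging would consume the current encoding together with the
# carried-over knowledge; here we just form the combined representation.
combined = current + carryover
print(attn.shape, combined.shape)
```

The attention weights make the carryover selective: only the past turns relevant to the current utterance contribute to the combined representation, which is what a single-turn RNN model cannot do.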