Open Access Proceedings Article

Neural Belief Tracker: Data-Driven Dialogue State Tracking

TLDR
This work proposes a novel Neural Belief Tracking (NBT) framework which overcomes past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided.
Abstract
One of the core components of modern spoken dialogue systems is the belief tracker, which estimates the user’s goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: a) Spoken Language Understanding models that require large amounts of annotated training data; or b) hand-crafted lexicons for capturing some of the linguistic variation in users’ language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided.
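To make the composition step concrete, the sketch below (Python with NumPy) builds bag-of-vectors representations of the user utterance, the dialogue context, and a candidate slot-value pair, then scores the candidate with a single sigmoid layer. This is an illustrative simplification under assumed names (bow, nbt_score, W, b), not the exact learned composition functions of the published NBT models.

    import numpy as np

    def bow(vectors, words):
        """Compose a word sequence as the sum of its pre-trained word vectors."""
        vecs = [vectors[w] for w in words if w in vectors]
        return np.sum(vecs, axis=0) if vecs else np.zeros_like(next(iter(vectors.values())))

    def nbt_score(vectors, utterance, context, slot, value, W, b):
        """Score how strongly the utterance, given the dialogue context, expresses slot=value.
        W and b are parameters of a hypothetical one-layer binary classifier."""
        r = np.concatenate([bow(vectors, utterance),
                            bow(vectors, context),
                            vectors[slot],
                            vectors[value]])
        return 1.0 / (1.0 + np.exp(-(W @ r + b)))  # sigmoid probability for this candidate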


Citations

Learning to Generate Prompts for Dialogue Generation through Reinforcement Learning

TL;DR: The experimental results show that the proposed method can successfully control several state-of-the-art (SOTA) dialogue models without accessing their parameters and demonstrates a strong ability to adapt quickly to an unseen task in fewer steps than the baseline model.
Journal Article

XQA-DST: Multi-Domain and Multi-Lingual Dialogue State Tracking

TL;DR: This paper proposes a multi-domain and multi-lingual dialogue state tracker using a neural reading comprehension approach and shows its competitive transferability through zero-shot domain-adaptation experiments on MultiWOZ 2.1.
Proceedings Article

WeaSuL: Weakly Supervised Dialogue Policy Learning: Reward Estimation for Multi-turn Dialogue

TL;DR: In this article, an agent uses dynamic blocking to generate ranked, diverse responses and exploration-exploitation to select among the Top-K responses; each simulated state-action pair is evaluated (serving as a weak annotation) with three quality modules: Semantic Relevance, Semantic Coherence and Consistent Flow.
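The summary above does not spell out the selection rule, so the following epsilon-greedy choice over the Top-K ranked candidates is purely illustrative; the function and parameter names are hypothetical and not taken from the paper.

    import random

    def select_response(top_k_responses, epsilon=0.1):
        """Exploration-exploitation over ranked candidates: usually exploit the
        highest-ranked response, occasionally explore a random one."""
        if random.random() < epsilon:
            return random.choice(top_k_responses)   # explore
        return top_k_responses[0]                   # exploit the best-ranked candidate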
Peer Review

« Est-ce que tu me suis ? » : une revue du suivi de l’état du dialogue (“Do you follow me?”: a review of dialogue state tracking)

Léo Jacqmin
TL;DR: In this article, a task-oriented dialogue system must track the user's needs at each step according to the history of the conversation.
Journal Article

A Chit-Chats Enhanced Task-Oriented Dialogue Corpora for Fuse-Motive Conversation Systems

Changhong Yu, +2 more
12 May 2022
TL;DR: This work releases a multi-turn dialogue dataset called Chinese ChatEnhanced-Task (CCET) and proposes an approach to formalizing fuse-motive dialogues, along with several evaluation metrics for TOD sessions that are integrated with CC utterances.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
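For concreteness, a minimal NumPy sketch of a single Adam update with the paper's default hyperparameters: it keeps exponential moving averages of the gradient and its elementwise square, corrects their initialization bias, and scales the step accordingly.

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        """One Adam update for parameters theta at timestep t (t starts at 1)."""
        m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * grad ** 2     # second-moment (uncentered variance) estimate
        m_hat = m / (1 - b1 ** t)             # bias correction
        v_hat = v / (1 - b2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v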
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
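As a sketch of the mechanism, the widely used "inverted dropout" variant below zeroes each unit with probability p during training and rescales the survivors, so no change is needed at test time (the paper itself instead scales the weights at test time).

    import numpy as np

    def dropout(x, p=0.5, training=True, rng=None):
        """Randomly drop units with probability p and rescale the rest (inverted dropout)."""
        if not training or p == 0.0:
            return x
        if rng is None:
            rng = np.random.default_rng()
        mask = rng.random(x.shape) >= p       # keep each unit with probability 1 - p
        return x * mask / (1.0 - p)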
Proceedings Article

GloVe: Global Vectors for Word Representation

TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context-window methods, and produces a vector space with meaningful substructure.
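The objective fits word-vector dot products to log co-occurrence counts with a weighted least-squares loss; below is a NumPy sketch of that loss (array names and the loop over nonzero entries are illustrative).

    import numpy as np

    def glove_loss(W, W_ctx, b, b_ctx, X, x_max=100.0, alpha=0.75):
        """Sum over nonzero co-occurrences of f(X_ij) * (w_i . w_j + b_i + b_j - log X_ij)^2."""
        loss = 0.0
        for i, j in zip(*np.nonzero(X)):
            f = min(1.0, (X[i, j] / x_max) ** alpha)   # weighting function f(X_ij)
            diff = W[i] @ W_ctx[j] + b[i] + b_ctx[j] - np.log(X[i, j])
            loss += f * diff ** 2
        return loss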
Journal Article

Visualizing Data using t-SNE

TL;DR: A new technique called t-SNE visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
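A typical usage sketch, assuming scikit-learn's implementation rather than the authors' original code, for projecting a set of high-dimensional vectors down to two dimensions:

    import numpy as np
    from sklearn.manifold import TSNE

    X = np.random.rand(200, 300)   # e.g. 200 word vectors of dimension 300
    X_2d = TSNE(n_components=2, perplexity=30).fit_transform(X)
    print(X_2d.shape)              # (200, 2): one 2-D map location per datapoint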
Proceedings Article

Rectified Linear Units Improve Restricted Boltzmann Machines

TL;DR: Restricted Boltzmann machines were developed using binary stochastic hidden units; replacing these with rectified linear units learns features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
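For reference, the rectified linear activation and a noisy variant of the kind described in the paper (treat the exact noise form below as a sketch):

    import numpy as np

    def relu(x):
        """Rectified linear unit: max(0, x), applied elementwise."""
        return np.maximum(0.0, x)

    def noisy_relu(x, rng=None):
        """Noisy ReLU: add zero-mean Gaussian noise with variance sigmoid(x), then rectify."""
        if rng is None:
            rng = np.random.default_rng()
        variance = 1.0 / (1.0 + np.exp(-x))
        return np.maximum(0.0, x + rng.normal(0.0, np.sqrt(variance)))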